arXiv ID: 2309.06710
Title: Crystal structure prediction using neural network potential and age-fitness Pareto genetic algorithm
Authors: Sadman Sadeed Omee, Lai Wei, Jianjun Hu
Published: 2023-09-13T04:17:28Z
Link: http://arxiv.org/abs/2309.06710v1
# Crystal structure prediction using neural network potential and age-fitness Pareto genetic algorithm

###### Abstract

While crystal structure prediction (CSP) remains a longstanding challenge, we introduce ParetoCSP, a novel algorithm for CSP, which combines a multi-objective genetic algorithm (MOGA) with a neural network inter-atomic potential (IAP) model to find energetically optimal crystal structures given chemical compositions. We enhance the NSGA-III algorithm by incorporating the genotypic age as an independent optimization criterion and employ the M3GNet universal IAP to guide the GA search. Compared to GN-OA, a state-of-the-art neural potential based CSP algorithm, ParetoCSP demonstrated significantly better predictive capability, outperforming it by a factor of \(2.562\) across \(55\) diverse benchmark structures, as evaluated by seven performance metrics. Trajectory analysis of the traversed structures of all algorithms shows that ParetoCSP generated more valid structures than the other algorithms, which helped guide the GA to search more effectively for the optimal structures.

_Keywords:_ neural network potential, genetic algorithm, age-fitness Pareto optimization, crystal structure prediction

## 1 Introduction

Crystal structure prediction (CSP) is the problem of predicting the most energetically stable structure of a crystal given its chemical composition. Knowing the atomic structure is the most crucial aspect of comprehending crystalline materials. With the structural information of a material, advanced quantum-mechanical methods such as Density Functional Theory (DFT) can be utilized to calculate numerous physical characteristics of the crystal [1]. As the physical and chemical characteristics of a crystal are dictated by the arrangement and composition of its atoms, CSP is critical to finding new materials that possess needed properties such as high thermal conductivity, high compressive strength, high electrical conductivity, or low refractive index. CSP-based computational materials discovery has the potential to revolutionize a range of industries, such as those involving electric vehicles, Li-batteries, building construction, energy storage, and quantum computing hardware [2, 3, 4, 5, 6]. For this reason, CSP, along with machine learning (ML)-based inverse design [7, 5, 8, 9, 10], has emerged as one of the most promising approaches for finding novel materials.

Although there have been notable advancements in the field of CSP, the scientific community has yet to solve this fundamental challenge, which has persisted for decades. CSP presents a significant challenge because it requires searching through an extensive range of potential configurations in a high-dimensional space to identify the most stable arrangement of atoms of a crystal. The complexity of CSP stems from the combinatorial nature of the optimization problem, in which the number of potential configurations grows exponentially with the number of atoms in the crystal [1]. Additionally, the prediction of the most stable structure depends on several factors, including temperature, pressure, and chemical composition, further increasing the intricacy of the problem. Historically, the main method for determining crystal structures was experimental X-ray diffraction (XRD) [11], which is time-consuming, expensive, and sometimes impossible, particularly for materials that are difficult to synthesize.
Computational approaches for CSP provide a faster and more affordable alternative to experimental methods. A typical strategy involves searching for the crystal's lowest-energy atomic arrangement by optimizing its potential energy surface (PES) using different search algorithms; in some cases, simpler metrics such as the cohesive energy or the formation energy of the structures can be used instead [4]. The highly non-convex nature of the PES, which can contain a vast number of local minima, reduces the efficiency of the search algorithms. Moreover, finding the global minimum of a PES is categorized as an NP-hard problem [12]. Most research on the CSP problem concentrates on _ab initio_ techniques, which involve exploring the atomic configuration space to locate the most stable structure based on first-principles calculations of the free energy of possible structures [13; 14; 15]. Although these methods are highly accurate, the scalability and applicability of these ab initio algorithms for predicting crystal structures remain a challenge. These methods are severely constrained because they rely on expensive first-principles density functional theory (DFT) calculations [16; 17] to determine the free energy of candidate structures. Furthermore, these methods are only applicable for predicting structures of comparatively small systems (\(<10-20\) atoms in the unit cell). Although there are inexpensive models available to estimate the free energy, they tend to have a poor correlation with reality, which can result in an inaccurate search [14]. For example, although state-of-the-art (SOTA) graph neural networks (GNNs) have demonstrated the capability to accurately predict the formation energy of candidate structures [18; 19; 20; 21; 22; 23], their performance on non-stable or meta-stable structures is significantly lower because they are usually trained on stable crystals.

Several search algorithms have been applied to the CSP problem, including random sampling [12], simulated annealing [24; 25; 26], meta-dynamics [27; 28], basin hopping [29; 30], minima hopping [31], genetic algorithms (GA) [32; 33; 14; 34], particle swarm optimization (PSO) [15], Bayesian optimization (BO) [35; 36], and deep learning (DL) [37; 38]. Among them, the USPEX algorithm, developed by Glass et al. [14], is a prominent CSP algorithm based on evolutionary principles, using natural selection and reproduction to generate new crystal structures. It incorporates a combination of three operators (heredity, mutation, and permutation) to explore the configuration space. To evaluate candidate structures, it uses ab initio free energy calculations with tools such as VASP [39] and SIESTA [40], which are highly accurate but extremely time-consuming. Another important CSP algorithm, CALYPSO, was devised by Wang et al. [15]; it employs a PSO algorithm to explore the energy landscape of crystal structures and identify the lowest-energy structures. To accomplish this, the authors developed a special strategy for removing comparable structures and applied symmetry-breaking restrictions to boost search effectiveness. Both USPEX and CALYPSO have been successfully applied to predicting the crystal structures of diverse materials, including those under high-pressure conditions, complex oxides, and alloys. Random sampling-based CSP algorithms have also demonstrated their effectiveness. For example, AIRSS, presented by Pickard et al. [12], describes a scheme that generates different random crystal structures for different types of crystals and conducts DFT calculations on them to determine the most stable one. Another genre of CSP methods is template-based methods [41; 42; 43], which involve finding an existing crystal structure with a similar chemical formula as a template (using heuristic methods, ML methods, etc.) and then replacing some of its atoms with different elements. However, the accuracy of these models is constrained by the diversity and availability of the templates, as well as the complexity of the target compound. Inspired by the recent success of DL-based methods in protein structure prediction [44; 45; 46], a DL-based algorithm, AlphaCrystal [38], has been designed to predict the contact map of a target crystal and then reconstruct its structure via a GA. However, the effectiveness of this model is constrained because its performance relies on the accuracy of the predicted space group, lattice parameters, and distance matrices. Moreover, it ultimately depends on the optimization algorithm for reconstructing the final structure from the contact map, as it is unable to provide end-to-end prediction like DeepMind's AlphaFold2 [45].

Compared to previous DFT-based CSP algorithms such as USPEX and CALYPSO, a major advance in CSP is to use machine-learning potential models to replace the costly first-principles energy calculation. Cheng et al. [36] developed a CSP framework named GN-OA, in which a graph neural network (GNN) model was first trained to predict the formation energy and an optimization algorithm was then used to search for the crystal structure with the minimum formation energy, guided by the GNN energy model. They showed that the BO search algorithm produces the best results among all optimization algorithms. However, predicting formation energy using GNNs has its drawbacks, as performance largely depends on the dataset the model is trained on. A structure search trajectory analysis [47] also showed that the BO and PSO used in GN-OA tend to generate too many invalid structures, which deteriorates performance. While both USPEX and CALYPSO had been combined with ML potentials for CSP before GN-OA, they were only applicable to small crystal systems such as carbon structures, sodium under pressure, and boron clusters [48; 49] due to the limitations of their ML potential models. Recently, significant progress has been achieved in ML potentials for crystals [50; 51; 52; 53; 54] that can work with multi-element crystals and larger crystal systems. This brings unprecedented opportunities and promise for modern CSP research and materials discovery. For example, the recent deep neural network-based energy potential M3GNet IAP [53] covers \(89\) elements of the periodic table, while the CHGNet [54] model was pretrained on the energies, forces, stresses, and magnetic moments from the Materials Project Trajectory Dataset, consisting of \(\sim 1.5\) million unstable and stable inorganic structures. It is intriguing to explore how well modern CSP algorithms based on these ML potentials can perform. Inspired by this progress, we propose the ParetoCSP algorithm for CSP, which combines the M3GNet potential with an age-fitness Pareto genetic algorithm for efficient structure search.
In this algorithm, candidate structures in the GA population are compared based on both the genotypic age and the final energy, predicted by a neural network potential such as M3GNet or CHGNet. Compared to the previous GN-OA approach, we show that the strong global search capability of ParetoCSP allows it to achieve much better prediction performance. Our contributions in this paper can be summarized as follows:

* We develop ParetoCSP, an efficient CSP algorithm that combines a multi-objective GA (NSGA-III), updated with the age-fitness Pareto optimization criterion, and a neural network potential (M3GNet IAP) that maps crystal structures to their final energy.
* Our systematic evaluations on \(55\) benchmark crystals show that ParetoCSP outperforms GN-OA by a factor of \(2.562\) in terms of prediction accuracy.
* We reinforce GN-OA by replacing its formation energy predictor MEGNet with the M3GNet IAP final energy model and show that this improves the default GN-OA by a factor of \(1.5\) in terms of prediction accuracy. We further demonstrate the significantly better search capability of ParetoCSP by showing that it outperforms this updated GN-OA by a factor of \(1.71\) in terms of prediction accuracy.
* We provide a quantitative analysis of the structures generated by ParetoCSP using seven performance metrics, and empirically show that ParetoCSP finds better-quality structures for the test formulas than GN-OA does.
* We perform a trajectory analysis of the structures generated by all evaluated CSP algorithms and show that ParetoCSP generates far more valid structures than the GN-OA algorithm, which may have contributed to ParetoCSP's better performance in predicting the crystal structures.

## 2 Method

### ParetoCSP: algorithm description

The input of our algorithm (ParetoCSP) is the elemental composition of a crystal \(\{c_{i}\}\), where \(i\) is the index of an atom and \(c_{i}\) is the element of the \(i\)-th atom in the unit cell. A periodic crystal structure can be described by its lattice parameters (\(L\)) \(a,b,c\) (representing the unit cell size) and \(\alpha,\beta,\gamma\) (representing the angles in the unit cell), the space group, and the atomic coordinates at unique Wyckoff positions. Our algorithm is based on the idea of the GN-OA algorithm [36] with two major upgrades: the multi-objective GA search algorithm and the use of the M3GNet potential for energy calculation. Previous research on GN-OA has shown that incorporating symmetry constraints expedites CSP [36; 55]. Similar to the GN-OA approach, our method also considers crystal structure prediction with symmetry constraints. We incorporate two additional structural features, namely the crystal symmetry \(S\) and the occupancy of the Wyckoff position \(W_{i}\) for each atom \(i\). These features are selected from a collection of \(229\) space groups and the associated \(1506\) Wyckoff positions [56]. The method begins by selecting a symmetry \(S\) from the range of \(P2\) to \(P230\), followed by generating lattice parameters \(L\) within the chosen symmetry. Next, a combination of Wyckoff positions \(\{W_{i}\}\) is selected to fulfill the specified number of atoms in the cell. The atomic coordinates \(\{R_{i}\}\) are then determined based on the chosen Wyckoff positions \(\{W_{i}\}\) and lattice parameters \(L\). To generate crystal structures, we need to tune the \(S\), \(\{W_{i}\}\), \(L\), and \(\{R_{i}\}\) variables; a minimal sketch of such an encoding is given below.
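The following is a minimal, illustrative Python sketch (not the authors' implementation) of how one candidate genotype, i.e. a space group \(S\), lattice parameters \(L\), and fractional coordinates \(\{R_{i}\}\) for the distinct atoms \(\{c_{i}\}\), can be expanded into a periodic structure with pymatgen. The chosen Wyckoff sites are represented implicitly by the fractional coordinates here, and the class and field names are ours, not from the paper.

```python
# Minimal sketch (not the authors' code) of a genotype for one candidate crystal:
# space group S, lattice parameters L, and fractional coordinates {R_i}.
from dataclasses import dataclass
from typing import List, Tuple

from pymatgen.core import Lattice, Structure


@dataclass
class CandidateCrystal:
    spacegroup: int                                  # S, an integer in 2..230
    lattice_abc: Tuple[float, float, float]          # a, b, c in angstroms
    lattice_angles: Tuple[float, float, float]       # alpha, beta, gamma in degrees
    species: List[str]                               # element of each distinct site, {c_i}
    frac_coords: List[Tuple[float, float, float]]    # {R_i} at the chosen Wyckoff sites
    age: int = 1                                     # genotypic age used by AFPO

    def to_structure(self) -> Structure:
        """Expand the genotype into a full periodic cell using the space-group symmetry."""
        lattice = Lattice.from_parameters(*self.lattice_abc, *self.lattice_angles)
        return Structure.from_spacegroup(self.spacegroup, lattice,
                                         self.species, self.frac_coords)


# Example: a rough perovskite-like SrTiO3 genotype (values are illustrative only).
genome = CandidateCrystal(
    spacegroup=221,
    lattice_abc=(3.9, 3.9, 3.9),
    lattice_angles=(90.0, 90.0, 90.0),
    species=["Sr", "Ti", "O"],
    frac_coords=[(0.0, 0.0, 0.0), (0.5, 0.5, 0.5), (0.5, 0.5, 0.0)],
)
print(genome.to_structure())
```

In the GA, crossover and mutation would operate directly on these fields, while `to_structure` provides the object that is passed to the energy model.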
By selecting different combinations of \(S\), \(\{W_{i}\}\), \(L\), and \(\{R_{i}\}\), we can generate a comprehensive array of possible crystal structures for the given \(\{c_{i}\}\). In theory, determining the energy of all these structures and selecting the one with the lowest energy would yield the optimal crystal arrangement. However, exhaustively enumerating all these structures is practically infeasible due to the staggering number of potential combinations. To address this complexity, a more practical approach is to iteratively sample candidate structures from the design space, under the assumption that one of the sampled structures will emerge as the most stable and optimal solution. Consequently, we adopt an optimization strategy to guide this search process towards identifying the structure with the lowest energy. In particular, we utilize a genetic algorithm, NSGA-III [57; 58], improved by incorporating AFPO [59] to enhance its performance and robustness.

First, we generate \(n\) initial random structures, assign them an age of \(1\), and convert them into crystal graphs. There are multiple approaches to encoding crystals as graphs [60; 18; 19; 61; 62]. In short, each atom of the crystal can be treated as a node of the graph, and interactions between atoms (e.g., bonds) can be encoded as edges. Interactions can be limited to a certain cutoff range to define more realistic graphs. Each node and edge needs to be assigned a feature vector for the DNN to learn the specific property. After generating the initial structures, we predict their final energy per atom using the M3GNet universal IAP [53]. Next, we calculate fitness considering both the energy and the age of the generated crystals (two independent dimensions of the Pareto front). After that, we check whether the total number of generations is less than a certain threshold \(\mathcal{G}\). If so, we increase the age of all individuals by \(1\). This is followed by the Pareto tournament selection, which selects the parents for the next generation from among the individual structures. We usually set the tournament size to \(2\), which selects half of the population as parents. Next, we perform the genetic operations: crossover and mutation. After crossover, we update the age of each individual by inheriting the maximum age of the corresponding parents. Similarly, after mutation, individual ages are updated by inheriting the age of their respective parents. These operations result in a new population of \(n\) individuals for the next generation. The concept of age ensures a diverse population containing both old and young individuals, and effectively prevents premature convergence to local optima [59]. We then increase the generation number and repeat the whole process, recalculating the final energy per atom of each structure, as long as the generation number is \(\leq\) the threshold \(\mathcal{G}\). After finishing \(\mathcal{G}\) generations, we obtain a set of \(\mathcal{F}\) non-dominated solutions on the Pareto front. We select the solution with the lowest final energy per atom as the optimal solution. We further relax this structure using the structure relaxation method of M3GNet IAP, which produces a more refined structure with lower final energy per atom. Finally, we perform a symmetrization operation to symmetrize the structure and output the final structure. Figure 1 shows the flowchart of our ParetoCSP algorithm.

Figure 1: **The flowchart of the ParetoCSP algorithm.** It starts by generating \(n\) random crystals and assigning them an age of \(1\), where \(n\) denotes the population size. One complete generation then goes through the following steps: calculating the energy and fitness of the structures, selecting parents, performing genetic operations, and updating the ages. After a certain threshold of \(\mathcal{G}\) generations, the lowest-energy structure from the multi-dimensional Pareto front is chosen and further relaxed and symmetrized to obtain the final optimal structure. The genetic encoding is shown in the lower right corner of the flowchart. It contains the lattice parameters \(a\), \(b\), \(c\), \(\alpha\), \(\beta\), and \(\gamma\), the space group \(S\), the Wyckoff position combination \(W_{i}\), and the atomic coordinates \(R_{i}\) of the atom indexed by \(i\).

### AFPO: Age-fitness Pareto optimization

One of the key requirements for a GA to achieve robust global search is to maintain the diversity of the population. Here, we employ the multi-objective genetic algorithm AFPO by Schmidt and Lipson [59] to achieve this goal. The AFPO algorithm is inspired by the idea of the age-layered population structure (ALPS) [63; 64], which divides the evolving population into layers based on how long the genetic material has been present in the population, so that competition happens at different fitness levels and premature convergence is avoided. The _age_ of an individual is defined as how long the oldest part of its genotype has been present in the population [65]. Instead of partitioning the population into layers as done in the HFC algorithm [63], AFPO uses age as an explicit optimization criterion (an independent dimension in a multi-objective Pareto front). A solution is considered optimal if it has both higher fitness and lower age compared to other solutions. This enables the algorithm to maintain diversity in the population and avoid premature convergence to local optima, as well as to find better solutions with faster convergence [59]. The AFPO algorithm starts by initializing a population of \(N\) individuals randomly and assigning an age of one to all of them. The fitness of an individual is evaluated by calculating its performance for all objectives. The fitness values are then used to rank the individuals based on their Pareto dominance. The algorithm then updates and assigns the age of each individual: the age of an individual is increased by one with each generation, and when crossover or mutation occurs, the individual's age is set to the maximum age of its parents. The algorithm uses a parameter called the tournament size \(K\), which determines the number of individuals that compete for selection. Specifically, \(K\) individuals are selected at random; the Pareto front among them is formed, and any dominated individuals are eliminated. After that, crossovers and mutations are applied to the parents to generate offspring. The objective function values for each offspring are evaluated and updated ages are assigned to each offspring. The newly generated offspring replace some of the older individuals in the population based on their age and fitness values. To avoid premature convergence towards sub-optimal solutions, a few new random individuals are added to the population in each generation to maintain diversity. The algorithm iterates through the above steps until a stopping criterion is met, such as a maximum number of generations or a desired level of convergence. For more details, the reader is referred to reference [65]. A simplified sketch of this age-fitness bookkeeping is given below.
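The sketch below illustrates, in plain Python, the age-fitness Pareto bookkeeping described above: lower energy and lower age are the two objectives, offspring inherit the maximum age of their parents, and selection uses small Pareto tournaments. It is a simplified illustration under our own naming (the `crossover`, `mutate`, and `evaluate` callables are placeholders), not the authors' implementation.

```python
# Simplified sketch of age-fitness Pareto optimization (AFPO); not the authors' code.
import random
from typing import Callable, List

Individual = dict  # {"genome": object, "energy": float, "age": int}


def dominates(p: Individual, q: Individual) -> bool:
    """p dominates q if it is no worse in both objectives and strictly better in one."""
    no_worse = p["energy"] <= q["energy"] and p["age"] <= q["age"]
    better = p["energy"] < q["energy"] or p["age"] < q["age"]
    return no_worse and better


def pareto_front(pop: List[Individual]) -> List[Individual]:
    return [p for p in pop if not any(dominates(q, p) for q in pop if q is not p)]


def one_generation(pop: List[Individual], crossover: Callable, mutate: Callable,
                   evaluate: Callable) -> List[Individual]:
    # 1) every surviving individual gets one generation older
    for ind in pop:
        ind["age"] += 1
    # 2) Pareto tournament selection with tournament size 2 (half the population become parents)
    parents = []
    while len(parents) < len(pop) // 2:
        a, b = random.sample(pop, 2)
        if dominates(a, b):
            parents.append(a)
        elif dominates(b, a):
            parents.append(b)
        else:
            parents.append(random.choice([a, b]))  # mutually non-dominated: pick either
    # 3) variation: offspring inherit the maximum age of their parents
    offspring = []
    for i in range(0, len(parents) - 1, 2):
        genome = mutate(crossover(parents[i]["genome"], parents[i + 1]["genome"]))
        offspring.append({"genome": genome,
                          "energy": evaluate(genome),
                          "age": max(parents[i]["age"], parents[i + 1]["age"])})
    # 4) non-dominated survivors plus new offspring form the next population
    #    (the full algorithm also injects fresh random individuals to keep diversity)
    return pareto_front(pop) + offspring
```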
### NSGA-III: multi-objective GA

We use the NSGA-III [57] algorithm to implement the age-fitness based genetic algorithm AFPO. NSGA-III is an improved version of the popular multi-objective evolutionary algorithm NSGA-II [66]. Here we describe the NSGA-III framework as defined in references [57; 58]. The NSGA-III algorithm begins by defining a set of reference points. To create an offspring population \(Q_{i}\) at generation \(i\), the current parent population \(P_{i}\) undergoes genetic operations. The resulting population \(P_{i}\cup Q_{i}\) is then sorted based on nondomination levels (\(F_{1},F_{2}\), and so on). The algorithm saves all members up to the last fully accommodated level \(F_{k}\) (all solutions from level \(k+1\) onward are rejected) in a set called \(\delta_{i}\). The individuals in \(\delta_{i}\setminus F_{k}\) have already been chosen for the next set of candidates, while the remaining spots are filled by individuals from \(F_{k}\). The selection process of NSGA-III is substantially altered from the approach used in NSGA-II. First, the objective values and reference points are normalized. Second, each member of \(\delta_{i}\) is associated with a reference point based on its distance to the reference line formed by connecting the ideal point to that reference point. This enables the determination of the number and positions of population members associated with each supplied reference point in \(\delta_{i}\setminus F_{k}\). Next, a niching technique is applied to pick individuals from \(F_{k}\) that are underrepresented in \(\delta_{i}\setminus F_{k}\), based on the results of the association process explained earlier. Reference points with the fewest associations in the \(\delta_{i}\setminus F_{k}\) population are identified, and corresponding points in the \(F_{k}\) set are searched. These selected members from \(F_{k}\) are then added to the population, one by one, until the required population size is achieved. Thus, in contrast to NSGA-II, NSGA-III sustains diversity among population members by incorporating a set of well-distributed reference points that are provided initially and updated adaptively during the algorithm's execution [58]. More implementation details can be found in reference [67].

### M3GNet Inter-atomic Potential (IAP)

The energy potential is one of the key components of modern CSP algorithms. Here we use M3GNet [53], which is a GNN-based ML potential model that explicitly incorporates \(3\)-body interactions. This model combines the graph-based DL inter-atomic potential and the many-body features found in traditional IAPs with flexible graph material representations. One notable distinction of M3GNet from previous material graph implementations is the inclusion of atom coordinates and the \(3\times 3\) lattice matrix of crystals. These additions are essential for obtaining tensorial quantities such as forces and stresses through auto-differentiation. In the M3GNet model, position-included graphs serve as inputs. Graph features include embedded atomic numbers of elements and pair bond distances. As in traditional GNNs, the node and edge features are updated via graph convolution operations. Our M3GNet potential was trained using both stable and unstable structures so that it can well capture the difference between these two; a hedged usage sketch for energy evaluation and relaxation is shown below.
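As a concrete but hedged illustration of how a candidate structure can be scored, the snippet below uses the publicly released `m3gnet` package to relax a structure and read off its final energy per atom. The API names (`Relaxer`, the `"final_structure"` and `"trajectory"` keys) follow the package's documented usage and may differ between versions; the TiCo cell is an arbitrary illustrative input, and this is not the paper's exact pipeline.

```python
# Hedged usage sketch of the M3GNet universal IAP for relaxation and energy
# evaluation (attribute names follow the public m3gnet package and may vary by version).
from m3gnet.models import Relaxer
from pymatgen.core import Lattice, Structure

# A candidate structure coming out of the GA (illustrative CsCl-type TiCo cell).
candidate = Structure(Lattice.cubic(2.94), ["Ti", "Co"],
                      [[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]])

relaxer = Relaxer()                      # loads the pretrained M3GNet potential
result = relaxer.relax(candidate)

relaxed = result["final_structure"]      # geometry after M3GNet-driven relaxation
total_energy = float(result["trajectory"].energies[-1])   # eV for the whole cell
energy_per_atom = total_energy / len(relaxed)

print(relaxed)
print(f"M3GNet final energy: {energy_per_atom:.4f} eV/atom")
```

In ParetoCSP, the predicted final energy per atom serves as the fitness objective minimized alongside the genotypic age, and the selected Pareto-optimal structure is further relaxed and symmetrized at the end of the search.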
The precise and efficient relaxation of diverse crystal structures and the accurate energy prediction achieved by the M3GNet-based relaxation algorithm make it well suited for large-scale and fast crystal structure prediction.

### Evaluation criteria

Many earlier studies [15; 14; 12] have depended on manual structural examination and ab initio formation energy comparison to assess the performance of a CSP algorithm. However, these metrics do not address the situation in which an algorithm fails to find the exact solution for a crystal, and it is not clear how much the generated structure deviates from the ground truth structure. Previous works usually did not quantitatively report how good or bad a solution is. Also, if two algorithms fail to generate the exact crystal structure, these metrics do not indicate which one is closer to finding the optimal solution. Recently, Wei et al. [47] proposed a set of performance metrics to measure CSP performance, which greatly alleviates this issue. We used seven performance metrics from that work to measure the performance of our CSP algorithm and the baselines. The required data are the crystallographic information files (CIF) of both the optimized and relaxed final structure generated by the CSP algorithm and its corresponding ground truth stable structure. Details about these performance metrics can be found in [47]. They are listed briefly below:

1. Energy distance (ED)
2. Wyckoff position fraction coordinate root mean squared error distance (W\({}_{rmse}\))
3. Wyckoff position fraction coordinate root mean absolute error (W\({}_{mae}\))
4. Sinkhorn distance (SD)
5. Chamfer distance (CD)
6. Hausdorff distance (HD)
7. Crystal fingerprint distance (FP)

## 3 Results

Our objective is to demonstrate the effectiveness of ParetoCSP for crystal structure prediction by showing that the multi-objective AFPO GA enables a much more effective structure search method than BO and PSO, and that M3GNet IAP is a more powerful crystal energy predictor than the previous MEGNet model.

### Benchmark set description

We selected a diverse set of \(55\) stable structures available in the Materials Project database [68] with no more than \(20\) atoms. Among them, \(20\) are binary crystals, \(20\) are ternary crystals, and \(15\) are quaternary crystals. We chose the benchmark set based on multiple factors, such as the diversity of elements, the diversity of space groups, special types of materials (e.g., perovskites), and usage in previous CSP literature. Supplemental Fig. S1a shows the diversity of the elements used in the benchmark set. Table 1 shows the detailed information about the \(55\) chosen test crystals used in this work. \begin{table} \begin{tabular}{l c c c c c} \hline \hline **Composition** & **No. of** & **Space group** & **Formation energy** & **Final energy** & **M3GNet final energy** \\ & **atoms** & & **(eV/atom)** & **(eV/atom)** & **(eV/atom)** \\ \hline \hline \end{tabular} \end{table} Table 1: **Details of the \(55\) benchmark crystals used in this work.** The first \(20\) crystals are binary, the second \(20\) are ternary, and the last \(15\) are quaternary; these types of crystals are separated by single horizontal lines. The ground truth final energies and the final energies predicted by M3GNet IAP are very close, demonstrating M3GNet's effectiveness as an energy predictor.
\begin{tabular}{l l l l l l} \hline TiCo & 2 & \(Pm-3m\) & \(-0.401\) & \(-7.9003\) & \(-7.8986\) \\ CrPd\({}_{3}\) & 4 & \(Pm-3m\) & \(-0.074\) & \(-6.3722\) & \(-6.4341\) \\ GaNNi\({}_{3}\) & 4 & \(Pm-3m\) & \(-0.291\) & \(-5.3813\) & \(-5.3806\) \\ ZrSe\({}_{2}\) & 3 & \(P-3m1\) & \(-1.581\) & \(-6.5087\) & \(-6.5077\) \\ MnAl & 2 & \(Pm-3m\) & \(-0.225\) & \(-6.6784\) & \(-6.7503\) \\ NiS\({}_{2}\) & 6 & \(P6_{3}/mmc\) & \(-0.4\) & \(-4.7493\) & \(-4.9189\) \\ TiO\({}_{2}\) & 6 & \(P4_{2}/mmm\) & \(-3.312\) & \(-8.9369\) & \(-8.9290\) \\ NiCl & 4 & \(P6_{3}mc\) & \(-0.362\) & \(-3.8391\) & \(-3.8899\) \\ AlNi\({}_{3}\) & 4 & \(Pm-3m\) & \(-0.426\) & \(-5.7047\) & \(-5.6909\) \\ CuBr & 4 & \(P6_{3}/mmc\) & \(-0.519\) & \(-3.0777\) & \(-3.0908\) \\ VPt\({}_{3}\) & 8 & \(I4/mmm\) & \(-0.443\) & \(-7.2678\) & \(-7.2638\) \\ MnCo & 2 & \(Pm-3m\) & \(-0.0259\) & \(-7.6954\) & \(-7.6963\) \\ BN & 4 & \(P6_{3}/mmc\) & \(-1.411\) & \(-8.7853\) & \(-8.7551\) \\ GeMo\({}_{3}\) & 8 & \(Pm-3n\) & \(-0.15\) & \(-9.4398\) & \(-9.3588\) \\ Ca\({}_{3}\)V & 8 & \(I4/mmm\) & \(0.481\) & \(-3.2942\) & \(-3.1638\) \\ Ga\({}_{2}\)Te\({}_{3}\) & 20 & \(Cc\) & \(-0.575\) & \(-3.4181\) & \(-3.4160\) \\ CoAs\({}_{2}\) & 12 & \(P2_{1}/c\) & \(-0.29\) & \(-5.8013\) & \(-5.7964\) \\ Li\({}_{2}\)Al & 12 & \(Cmcm\) & \(-0.163\) & \(-2.6841\) & \(-2.6623\) \\ VS & 4 & \(P6_{3}/mmc\) & \(-0.797\) & \(-7.1557\) & \(-7.3701\) \\ Ba\({}_{2}\)Hg & 6 & \(I4/mmm\) & \(-0.384\) & \(-1.7645\) & \(-1.7582\) \\ \hline SrTiO\({}_{3}\) & 5 & \(Pm-3m\) & \(-3.552\) & \(-8.0249\) & \(-8.0168\) \\ Al\({}_{2}\)FeCo & 4 & \(P4/mmm\) & \(-0.472\) & \(-6.2398\) & \(-6.2462\) \\ GaBN\({}_{2}\) & 4 & \(P-4m2\) & \(-0.675\) & \(-7.0893\) & \(-7.0918\) \\ AcMnO\({}_{3}\) & 5 & \(Pm-3m\) & \(-2.971\) & \(-7.1651\) & \(-7.8733\) \\ BaTiO\({}_{3}\) & 5 & \(Pm-3m\) & \(-2.995\) & \(-8.1070\) & \(-8.1012\) \\ CdCuN & 3 & \(P-6m2\) & \(0.249\) & \(-4.0807\) & \(-4.0228\) \\ HoHSe & 3 & \(P-6m2\) & \(-1.65\) & \(-5.2538\) & \(-5.2245\) \\ Li\({}_{2}\)ZnSi & 8 & \(P6_{3}/mmc\) & \(0.0512\) & \(-2.5923\) & \(-2.6308\) \\ Cd\({}_{2}\)AgPt & 16 & \(Fm-3m\) & \(-0.195\) & \(-2.8829\) & \(-2.8415\) \\ AlCrFe\({}_{2}\) & 4 & \(P4/mmm\) & \(-0.157\) & \(-7.7417\) & \(-7.6908\) \\ ZnCdPt\({}_{2}\) & 4 & \(P4/mmm\) & \(-0.444\) & \(-4.0253\) & \(-4.0164\) \\ EuAlSi & 3 & \(P-6m2\) & \(-0.475\) & \(-6.9741\) & \(-6.9345\) \\ Sc\({}_{3}\)TiC & 5 & \(Pm-3m\) & \(-0.622\) & \(-6.7381\) & \(-6.7419\) \\ GaSeCl & 12 & \(Pnnm\) & \(-1.216\) & \(-3.6174\) & \(-3.6262\) \\ CaAgN & 3 & \(P-6m2\) & \(-0.278\) & \(-4.5501\) & \(-4.7050\) \\ BaAlGe & 3 & \(P-6m2\) & \(-0.476\) & \(-3.9051\) & \(-3.9051\) \\ K\({}_{2}\)PdS\({}_{2}\) & 10 & \(Immm\) & \(-1.103\) & \(-4.0349\) & \(-4.0066\) \\ KCrO\({}_{2}\) & 8 & \(P6_{3}/mmc\) & \(-2.117\) & \(-6.4452\) & \(-6.4248\) \\ TiZnCu\({}_{2}\) & 4 & \(P4/mmm\) & \(-0.0774\) & \(-4.4119\) & \(-4.4876\) \\ Ta\({}_{2}\)N\({}_{3}\)O & 6 & \(P6/mmm\) & \(-0.723\) & \(-9.3783\) & \(-9.3848\) \\ \hline AgBiSeS & 4 & \(P4/mmm\) & \(-0.404\) & \(-3.7363\) & \(-3.8289\) \\ ZrTaNO & 4 & \(P-6m2\) & \(-1.381\) & \(-9.5450\) & \(-9.5429\) \\ \hline \end{tabular} ### Performance analysis of ParetoCSP The default version of ParetoCSP uses M3GNet universal IAP as the final energy evaluator for the candidate structures to guide the AFPO-based GA to identify the most stable structure with the minimum energy. 
Our algorithm, ParetoCSP, predicted the exact structures for \(17\) out of \(20\) binary crystals (\(85\%\)), \(16\) out of \(20\) ternary crystals (\(80\%\)), and \(8\) out of \(15\) quaternary crystals (\(53.333\%\)) (see Table 2). Overall, ParetoCSP achieved an accuracy of \(74.55\%\) over all \(55\) test crystals, which is the highest among all evaluated algorithms (\(\approx 1.71\times\) that of the next best algorithm). Details on the comparison with other algorithms and energy methods are discussed in Subsections 3.3 and 3.4. The exact accuracy results for all algorithms are presented in Table 2. All the structures were labeled βœ“ (exact) or βœ— (non-exact) based on manual inspection, as was predominantly done in past literature [36; 15] (a programmatic alternative is sketched below). We observed that ParetoCSP successfully found the most stable structures of all cubic and hexagonal binary crystals and most tetragonal binary crystals in the benchmark dataset. The three binary crystals for which ParetoCSP failed to identify the exact structures are Ga\({}_{2}\)Te\({}_{3}\) (monoclinic), Li\({}_{2}\)Al (orthorhombic), and Ba\({}_{2}\)Hg (tetragonal). For ternary crystals, ParetoCSP successfully determined the exact stable structures of all tetragonal crystals and most cubic and hexagonal crystals. However, there were four instances where the prediction failed, namely Li\({}_{2}\)ZnSi (hexagonal), Cd\({}_{2}\)AgPt (cubic), GaSeCl (orthorhombic), and K\({}_{2}\)PdS\({}_{2}\) (orthorhombic). In the case of quaternary crystals, ParetoCSP succeeded on most hexagonal and tetragonal structures. Li\({}_{2}\)MgCdP\({}_{2}\) (tetragonal), Sr\({}_{2}\)BBrN\({}_{2}\) (trigonal), ZrCuSiAs (tetragonal), NdNiSnH\({}_{2}\) (hexagonal), MnCoSnRh (cubic), Mg\({}_{2}\)ZnB\({}_{2}\)Ir\({}_{5}\) (tetragonal), and Ba\({}_{2}\)CeTaO\({}_{6}\) (monoclinic) are the seven quaternary failure cases for ParetoCSP in terms of finding exact structures. Based on these observations, we can claim that ParetoCSP combined with M3GNet IAP demonstrated notable efficacy in predicting cubic, hexagonal, and tetragonal crystalline materials, whereas its performance in predicting monoclinic and orthorhombic crystals was comparatively less successful. This can be attributed to the higher number of degrees of freedom of monoclinic and orthorhombic crystal systems compared to simpler crystal systems such as cubic or hexagonal. Monoclinic and orthorhombic crystals also exhibit a varied range of complex structural motifs, which makes it difficult for CSP algorithms to predict their exact structures. However, this does not diminish the claim that our algorithm is the best among the four ML potential based CSP algorithms evaluated here; later, we show that the other CSP algorithms faced similar challenges. Ground truth and predicted structures of sample crystals are shown in Fig. 2 using the VESTA tool, which contains examples of both successful and unsuccessful predictions.

Now, we analyze the performance of ParetoCSP in terms of the quantitative performance metrics. As mentioned before, we used a set of seven performance metrics to evaluate the prediction performance of different CSP algorithms. The values of each performance metric for all \(55\) chosen crystals are shown in Table 3. Ideally, all the performance metric values should be zero if the predicted structure and the ground truth structure are exactly the same.
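As a hedged aside (the paper itself relies on manual inspection plus the quantitative metrics reported in Table 3), a programmatic exact-match check can be performed with pymatgen's `StructureMatcher`; the file names below are placeholders and the tolerances shown are close to the library defaults.

```python
# Illustrative exactness check only; the paper's check/cross labels come from manual inspection.
from pymatgen.analysis.structure_matcher import StructureMatcher
from pymatgen.core import Structure

predicted = Structure.from_file("prediction.cif")        # CIF from the CSP run (placeholder path)
ground_truth = Structure.from_file("ground_truth.cif")   # Materials Project reference (placeholder path)

matcher = StructureMatcher(ltol=0.2, stol=0.3, angle_tol=5)  # near-default tolerances
print("structures match:", matcher.fit(predicted, ground_truth))
```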
For the failure cases, we identified the metric values that indicate the _poor quality_ of the predictions. The process for determining them involved identifying the highest value of each performance metric among all successful predictions (we call these the _satisfactory_ values), and then selecting the values of the failed predictions that exceeded them. We have highlighted these values in bold in Table 3. We noticed that, with the exception of K\({}_{2}\)PdS\({}_{2}\) and ZrCuSiAs, the remaining \(12\) failed cases demonstrated higher energy distance values than the satisfactory energy distance value (\(0.7301\) eV/atom), indicating non-optimal predicted structures. Similarly, for the Sinkhorn distance (SD), apart from ZrCuSiAs, the remaining \(13\) unsuccessful predictions exhibited significantly higher values than the satisfactory SD value (\(5.6727\) Γ…), suggesting poor prediction quality. For W\({}_{rmse}\) and W\({}_{mae}\), we assigned a cross (\(\times\)) to indicate that the predicted structure and the target structure do not have similar Wyckoff position configurations in the symmetrized structures, so that these metrics cannot be calculated. We observed that \(11\) out of \(14\) failed predictions (after symmetrization) do not have similar Wyckoff positions compared to the ground truth symmetrized structure, indicating unsuccessful predictions. For the Chamfer distance (CD) metric, however, only \(6\) out of \(14\) failed predictions displayed higher values than the satisfactory CD value (\(3.8432\) Γ…), indicating that CD was not the most suitable metric for measuring prediction quality of crystal structures for our algorithm. In contrast, the Hausdorff distance (HD) showed that \(10\) out of \(14\) failed predictions had higher values than the satisfactory HD value (\(3.7665\) Γ…). Notably, the only performance metric that consistently distinguished between optimal and non-optimal structures across all failed predictions is the crystal fingerprint (FP) metric (satisfactory value: \(0.9943\)), demonstrating its effectiveness in capturing the differences between these structures. In conclusion, the metrics together provided strong evidence of the non-optimal nature of the \(14\) failed structures.
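To make the point-set style metrics concrete, the sketch below computes a symmetrized Hausdorff distance and a Chamfer-style distance directly from the Cartesian coordinates of two structures. It is a generic illustration only; the exact metric definitions used in this work follow reference [47], and the file paths are placeholders.

```python
# Generic illustration of point-set distances in the spirit of the HD and CD metrics
# (the definitions actually used in the paper follow reference [47]).
import numpy as np
from scipy.spatial.distance import cdist, directed_hausdorff
from pymatgen.core import Structure

pred = Structure.from_file("prediction.cif")       # placeholder paths
truth = Structure.from_file("ground_truth.cif")
A = np.asarray(pred.cart_coords)                   # (N, 3) atomic positions
B = np.asarray(truth.cart_coords)

# Hausdorff distance: worst-case nearest-neighbour deviation, symmetrized over both directions.
hd = max(directed_hausdorff(A, B)[0], directed_hausdorff(B, A)[0])

# Chamfer-style distance: average nearest-neighbour deviation in both directions.
d = cdist(A, B)
cd = 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

print(f"HD = {hd:.4f} Γ…, CD = {cd:.4f} Γ…")
```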
\begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Composition**} & **ParetoCSP** & **ParetoCSP** & **GN-OA** & **GN-OA** \\ & **with M3GNet (Default)** & **with MEGNet** & **with M3GNet** & **with MEGNet (Default)** \\ \hline \hline TiCo & βœ“ & βœ“ & βœ“ & βœ— \\ CrPd\({}_{3}\) & βœ“ & βœ“ & βœ— & βœ— \\ GaNi\({}_{3}\) & βœ“ & βœ“ & βœ“ & βœ“ \\ ZrSe\({}_{2}\) & βœ“ & βœ“ & βœ“ & βœ“ \\ MnAl & βœ“ & βœ“ & βœ“ & βœ“ \\ NiS\({}_{2}\) & βœ“ & βœ— & βœ“ & βœ“ \\ TiO\({}_{2}\) & βœ“ & βœ“ & βœ“ & βœ“ \\ NiCl & βœ“ & βœ— & βœ— & βœ— \\ AlNi\({}_{3}\) & βœ“ & βœ“ & βœ“ & βœ“ \\ CuBr & βœ“ & βœ— & βœ— & βœ— \\ VPt\({}_{3}\) & βœ“ & βœ“ & βœ“ & βœ“ \\ MnCo & βœ“ & βœ“ & βœ“ & βœ“ \\ BN & βœ“ & βœ“ & βœ“ & βœ“ \\ GeMo\({}_{3}\) & βœ“ & βœ“ & βœ“ & βœ“ \\ Ca\({}_{3}\)V & βœ“ & βœ“ & βœ— & βœ— \\ Ga\({}_{2}\)Te\({}_{3}\) & βœ— & βœ— & βœ— & βœ— \\ CoAs\({}_{2}\) & βœ“ & βœ— & βœ— & βœ— \\ Li\({}_{2}\)Al & βœ— & βœ— & βœ— & βœ— \\ VS & βœ“ & βœ— & βœ“ & βœ— \\ Ba\({}_{2}\)Hg & βœ— & βœ— & βœ— & βœ— \\ \hline SrTiO\({}_{3}\) & βœ“ & βœ“ & βœ“ & βœ“ \\ Al\({}_{2}\)FeCo & βœ“ & βœ“ & βœ— & βœ— \\ GaBN\({}_{2}\) & βœ“ & βœ“ & βœ— & βœ— \\ AcMnO\({}_{3}\) & βœ“ & βœ“ & βœ“ & βœ“ \\ PaTiO\({}_{3}\) & βœ“ & βœ“ & βœ“ & βœ“ \\ CdCuN & βœ“ & βœ— & βœ— & βœ— \\ \hline \hline \end{tabular} \end{table} Table 2: **Performance comparison of ParetoCSP with baseline algorithms.** Successful and failed predictions via manual inspection are denoted by a βœ“ and οΏ½, respectively. ParetoCSP with M3GNet achieved the highest success rate in finding the exact structures of these crystals, GN-OA with M3GNet achieved the second best success rate. ParetoCSP with MEGNet performed as the third-best, while GN-OA with MEGNet performed the poorest. These results highlight the significant impact of using M3GNet IAP as crystal final energy predictor and structure relaxer, and the effectiveness of the AFPO-based GA as a structure search function. 
\begin{tabular}{l c c c c} HoHSe & βœ“ & βœ— & βœ“ & βœ— \\ Li\({}_{2}\)ZnSi & βœ— & βœ— & βœ— & βœ— \\ Cd\({}_{2}\)AgPt & βœ— & βœ— & βœ— & βœ— \\ AlCrFe\({}_{2}\) & βœ“ & βœ— & βœ— & βœ— \\ ZnCdPt\({}_{2}\) & βœ“ & βœ— & βœ— & βœ— \\ EuAlSi & βœ“ & βœ— & βœ“ & βœ“ \\ Sc\({}_{3}\)TIC & βœ“ & βœ“ & βœ“ & βœ“ \\ GaSeCl & βœ— & βœ— & βœ— & βœ— \\ CaAgN & βœ“ & βœ— & βœ“ & βœ— \\ BaAlGe & βœ“ & βœ“ & βœ“ & βœ— \\ K\({}_{2}\)PdS\({}_{2}\) & βœ— & βœ— & βœ— & βœ— \\ KCrO\({}_{2}\) & βœ“ & βœ— & βœ— & βœ— \\ TiZnCu\({}_{2}\) & βœ“ & βœ“ & βœ“ & βœ“ \\ Ta\({}_{2}\)N\({}_{3}\)O & βœ“ & βœ— & βœ— & βœ— \\ \hline AgBiSeS & βœ“ & βœ“ & βœ— & βœ— \\ ZrTaNO & βœ“ & βœ— & βœ“ & βœ— \\ MnAlCuPd & βœ“ & βœ— & βœ— & βœ— \\ CsNaCl & βœ“ & βœ— & βœ“ & βœ— \\ DyThCN & βœ“ & βœ— & βœ“ & βœ— \\ Li\({}_{2}\)MgCdP\({}_{2}\) & βœ— & βœ— & βœ— & βœ— \\ SrWNO\({}_{2}\) & βœ“ & βœ— & βœ— & βœ— \\ Sr\({}_{2}\)BBrN\({}_{2}\) & βœ— & βœ— & βœ— & βœ— \\ ZrCuSiAs & βœ— & βœ— & βœ— & βœ— \\ NdNiSnH\({}_{2}\) & βœ— & βœ— & βœ— & βœ— \\ MnCoSnRh & βœ— & βœ— & βœ— & βœ— \\ Mg\({}_{2}\)ZnB\({}_{2}\)Ir\({}_{5}\) & βœ— & βœ— & βœ— & βœ— \\ AlCr\({}_{4}\)GaC\({}_{2}\) & βœ“ & βœ“ & βœ— & βœ— \\ Y\({}_{3}\)Al\({}_{3}\)NiGe\({}_{2}\) & βœ“ & βœ— & βœ— & βœ— \\ Ba\({}_{2}\)CeTaO\({}_{6}\) & βœ— & βœ— & βœ— & βœ— \\ \hline \hline \multirow{4}{*}{Accuracy} & Overall: **74.55\%** & Overall: **40\%** & Overall: **43.636\%** & Overall: **29.091\%** \\ \cline{2-4} & Binary: **85\%** & Binary: **60\%** & Binary: **60\%** & Binary: **50\%** \\ \cline{1-1} & \multirow{2}{*}{Ternary: **80\%**} & \multirow{2}{*}{Ternary: **40\%**} & \multirow{2}{*}{Ternary: **45\%**} & \multirow{2}{*}{Ternary: **30\%**} \\ \cline{1-1} & & & & \\ \end{tabular} \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline **Crystal** & **ED** & \(W_{\mathbf{rmse}}\) & \(W_{\mathbf{mae}}\) & **SD** & **CD** & **HD** & **FP** \\ \hline \hline TiCo & \(0.0009\) & \(0.0\) & \(0.0\) & \(0.007\) & \(0.007\) & \(0.007\) & \(0.0\) \\ CrPd\({}_{3}\) & \(0.0071\) & \(0.0\) & \(0.0\) & \(0.0408\) & \(0.0204\) & \(0.0136\) & \(0.0\) \\ GaNN\({}_{3}\) & \(0.0355\) & \(0.0\) & \(0.0\) & \(0.0839\) & \(0.042\) & \(0.028\) & \(0.0\) \\ ZrSe\({}_{2}\) & \(0.0206\) & \(0.0062\) & \(0.0025\) & \(0.6353\) & \(0.4235\) & \(0.5848\) & \(0.3243\) \\ MnAl & \(0.0\) & \(0.0\) & \(0.0\) & \(0.0002\) & \(0.0002\) & \(0.0\) \\ NiS\({}_{2}\) & \(0.2016\) & \(0.2889\) & \(0.2303\) & \(5.6727\) & \(3.8432\) & \(3.7665\) & \(0.269\) \\ TiO\({}_{2}\) & \(0.6931\) & \(0.2304\) & \(0.1431\) & \(4.209\) & \(2.8535\) & \(1.8551\) & \(0.9793\) \\ NiCl & \(0.3284\) & \(0.2562\) & \(0.1723\) & \(1.3811\) & \(2.3407\) & \(1.1495\) & \(0.6431\) \\ AlNi\({}_{3}\) & \(0.0234\) & \(0.0\) & \(0.0\) & \(0.0727\) & \(0.0363\) & \(0.0242\) & \(0.0\) \\ \hline \end{tabular} \end{table} Table 3: **Quantitative performance metrics of ParetoCSP with M3GNet for the 55 benchmark crystals evaluated in this work**. For each metric and each failure cases, the values which are greater than the range of exact predictions are denoted by bold letters to mark as high values that quantitatively shows their non-optimality. Binary, ternary, and quarternary crystals are separated by single horizontal lines. Figure 2: **Sample structure prediction by ParetoCSP.** Every ground truth structure is followed by the predicted structure. 
(a) - (p) shows that the structures of MnAl, ZrSe\({}_{2}\), GeMo\({}_{3}\), SrTiO\({}_{3}\), Ta\({}_{2}\)N\({}_{3}\)O, and GaBN\({}_{2}\) were successfully predicted, while (q) - (t) shows that ParetoCSP was unable to predict the structures of GaSeCl, and NdNiSnH\({}_{2}\). All the structures were visualized using VESTA. For better visualization, we set the fractional coordinate ranges of all axis to a maximum of \(3\) for Ta\({}_{2}\)N\({}_{3}\)O, GaBN\({}_{2}\), and GaSeCl, and we used the space-filling style for Ta\({}_{2}\)N\({}_{3}\)O, and GaSeCl. Besides these, we set the fractional coordinate ranges of all axis to a maximum of \(1\) for all structures, and used the ball-and-stick style. \begin{tabular}{l c c c c c c c} CuBr & \(0.3225\) & \(0.2521\) & \(0.1784\) & \(1.8724\) & \(2.5043\) & \(1.0065\) & \(0.3054\) \\ VPt\({}_{3}\) & \(0.2415\) & \(0.3235\) & \(0.2411\) & \(1.3424\) & \(0.2395\) & \(0.2805\) & \(0.1772\) \\ MnCo & 0.0 & 0.0 & 0.0 & 0.0001 & 0.0001 & 0.0001 & 0.0 \\ BN & \(0.3643\) & \(0.4026\) & \(0.2454\) & \(2.513\) & \(1.947\) & \(2.608\) & \(0.8948\) \\ GeMo\({}_{3}\) & \(0.0401\) & 0.0 & 0.0 & 0.1894 & 0.0473 & 0.0325 & 0.0 \\ Ca\({}_{3}\)V & \(0.4592\) & \(0.2048\) & \(0.1149\) & \(3.3111\) & \(2.8356\) & \(3.6542\) & \(0.019\) \\ Ga\({}_{2}\)Te\({}_{3}\) & \(\mathbf{2.0112}\) & \(\mathbf{\times}\) & \(\mathbf{\times}\) & \(\mathbf{53.3896}\) & \(\mathbf{4.6825}\) & \(\mathbf{4.8998}\) & \(\mathbf{1.7875}\) \\ CoAs\({}_{2}\) & \(0.4629\) & \(0.4389\) & \(0.2684\) & \(5.3617\) & \(2.8407\) & \(2.9208\) & \(0.9943\) \\ Li\({}_{2}\)Al & \(\mathbf{30.7051}\) & \(\mathbf{\times}\) & \(\mathbf{\times}\) & \(\mathbf{61.9154}\) & \(\mathbf{3.9575}\) & \(\mathbf{4.8314}\) & \(\mathbf{2.1345}\) \\ VS & \(0.4204\) & \(0.2477\) & \(0.1806\) & \(1.9372\) & \(1.3665\) & \(1.8303\) & \(0.9189\) \\ Ba\({}_{2}\)Hg & \(\mathbf{5.206}\) & \(\mathbf{\times}\) & \(\mathbf{\times}\) & \(\mathbf{8.7511}\) & \(\mathbf{4.9936}\) & \(\mathbf{7.3342}\) & \(\mathbf{1.2468}\) \\ \hline SrTiO\({}_{3}\) & \(0.0185\) & 0.0 & 0.0 & 0.0934 & 0.0374 & 0.0271 & 0.0 \\ Al\({}_{2}\)FeCo & \(0.0098\) & \(0.2357\) & \(0.112\) & \(0.137\) & \(0.0685\) & \(0.0658\) & \(0.1755\) \\ GaBN\({}_{2}\) & \(0.0041\) & \(0.3889\) & \(0.289\) & \(2.1663\) & \(1.5589\) & \(1.9171\) & \(0.0455\) \\ AcMnO\({}_{3}\) & \(0.0385\) & 0.0 & 0.0 & 0.116 & 0.0464 & 0.0336 & 0.0 \\ BaTiO\({}_{3}\) & \(0.0136\) & 0.0 & 0.0 & 0.0924 & 0.037 & 0.0268 & 0.0 \\ CdCuN & \(0.0031\) & \(0.441\) & \(0.4259\) & \(2.7337\) & \(2.9172\) & \(2.2949\) & \(0.0397\) \\ HoHSe & \(0.0033\) & \(0.3643\) & \(0.3148\) & \(2.859\) & \(1.906\) & \(1.9716\) & \(0.0575\) \\ Li\({}_{2}\)ZnSi & \(\mathbf{25.3593}\) & \(\mathbf{\times}\) & \(\mathbf{\times}\) & \(\mathbf{34.3079}\) & \(2.9587\) & \(\mathbf{4.104}\) & \(\mathbf{1.8731}\) \\ Cd\({}_{2}\)AgPt & \(\mathbf{22.5447}\) & \(\mathbf{\times}\) & \(\mathbf{\times}\) & \(\mathbf{16.9997}\) & \(3.5895\) & \(\mathbf{4.2417}\) & \(\mathbf{2.4137}\) \\ AlCrFe\({}_{2}\) & \(0.6621\) & \(0.2486\) & \(0.1507\) & \(3.6931\) & \(2.2245\) & \(2.2518\) & \(0.7886\) \\ ZnCdPt\({}_{2}\) & \(0.0384\) & \(0.4717\) & \(0.4503\) & \(3.2733\) & \(3.5537\) & \(2.0384\) & \(0.0643\) \\ EuAlSi & \(0.0495\) & \(0.3849\) & \(0.2963\) & \(4.5051\) & \(3.0034\) & \(2.2451\) & \(0.3419\) \\ Sc\({}_{3}\)TIC & \(0.0026\) & 0.0 & 0.0 & 0.0431 & 0.0173 & 0.0125 & 0.0 \\ GaSeCl & \(\mathbf{23.3337}\) & \(\mathbf{\times}\) & \(\mathbf{\times}\) & \(\mathbf{38.0257}\) & \(\mathbf{8.615}\) & \(\mathbf{11.7449}\) & 
\(\mathbf{2.0172}\) \\ CaAgN & \(0.0064\) & \(0.441\) & \(0.4259\) & \(3.6479\) & \(3.1055\) & \(2.4023\) & \(0.0483\) \\ BaAlGe & \(0.002\) & \(0.4547\) & \(0.3889\) & \(3.0476\) & \(1.6942\) & \(2.5291\) & \(0.0326\) \\ K\({}_{2}\)PdS\({}_{2}\) & \(0.5466\) & \(0.2467\) & \(0.1377\) & \(\mathbf{22.0109}\) & \(3.7687\) & \(3.5226\) & \(\mathbf{1.3316}\) \\ KCrO\({}_{2}\) & \(0.0342\) & \(0.2740\) & \(0.1934\) & \(2.5233\) & \(1.9562\) & \(1.8946\) & \(0.6105\) \\ TiZnCu\({}_{2}\) & \(0.0188\) & \(0.4083\) & \(0.3344\) & \(3.8363\) & \(2.83\) & \(1.609\) & \(0.6861\) \\ Ta\({}_{2}\)N\({}_{3}\)O & \(0.4603\) & \(0.2357\) & \(0.1111\) & \(3.144\) & \(2.3813\) & \(1.4458\) & \(0.7499\) \\ \hline AgBiSeS & \(0.0154\) & 0.0 & 0.0 & 0.1914 & 0.0957 & 0.0808 & 0.1298 \\ ZrTaNO & \(0.0935\) & \(0.5182\) & 0.5 & 0.4704 & 0.2352 & 0.2191 & 0.4131 \\ MnAlCuPd & \(0.0187\) & \(0.1719\) & \(0.0865\) & \(3.3567\) & \(2.3023\) & \(2.219\) & \(0.7371\) \\ CsNaClCl & \(0.0046\) & \(0.5\) & 0.5 & 0.1822 & 0.0911 & 0.0848 & 0.1639 \\ DyThCN & \(0.0322\) & \(0.4082\) & \(0.3333\) & \(0.1057\) & \(0.0529\) & \(0.0451\) & \(0.0216\) \\ Li\({}_{2}\)MgCdP\({}_{2}\) & \(\mathbf{39.8356}\) & \(\mathbf{\times}\) ### Performance comparison with GN-OA As reported in [36], the GN-OA algorithm achieved the highest performance when utilizing Bayesian Optimization (BO) [69] as the optimization algorithm and MEGNet neural network model as the formation energy predictor to guide the optimization process (default GN-OA). Based on the data presented in Table 2, we observed that GN-OA showed a significantly lower success rate than that of ParetoCSP. In comparison to ParetoCSP, GN-OA achieved an accuracy of only \(50\%\) (\(10\) out of \(20\) crystals) in predicting structures of binary crystals, whereas ParetoCSP achieved \(85\%\) accuracy. For ternary crystals, GN-OA achieved a success rate of \(30\%\) (\(6\) out of \(20\) crystals) compared to ParetoCSP's \(80\%\). In the case of quarternary crystals, GN-OA did not achieve a single success, whereas ParetoCSP achieved a success rate of \(53.333\%\). Overall, the success rate of GN-OA was only \(29.091\%\), which is approximately \(2.562\) times lower than the accuracy achieved by ParetoCSP. Moreover, GN-OA could not predict any structure that ParetoCSP could not predict. These clearly establish the dominance of ParetoCSP over GN-OA, highlighting the higher quality of structure searching provided by AFPO-based GA compared to BO, and the effectiveness of M3GNet IAP-based final energy prediction compared to MEGNet's formation energy prediction. To understand the deteriorated performance of GN-OA in our benchmark study, firstly, we found that the CSP experiments conducted in the original study of GN-OA[36] primarily focused on small binary crystals, particularly those with a \(1\):\(1\) atoms ratio. Secondly, a majority of these binary crystals belonged to four groups, namely oxide, sulfide, chloride, and fluoride, that demonstrates the lack of diversity in the GN-OA's benchmark set (see Supplementary Fig. S1b). Moreover, most of the crystals examined had the cubic crystal system (mostly belonging to the \(Fm-3m\) space group). It merely explored other crystal systems or space group. This choice of test structures for experimentation was insufficient in terms of CSP where only a few crystals possess all these specific properties. A more thorough exploration of diverse crystal systems and space groups was necessary to demonstrate GN-OA's CSP performance. 
Our study effectively demonstrates that the optimization algorithms used in GN-OA are inadequate for predicting more complex crystals (such as quaternary crystals). Furthermore, our empirical findings highlight the shortcomings of using MEGNet as the formation energy predictor guiding the optimization algorithm towards the optimal crystal structures. In summary, we established that ParetoCSP outperformed GN-OA, achieving a success rate \(2.562\times\) that of GN-OA, and that the AFPO-based multi-objective GA proved to be a much better structure search algorithm than BO. Additionally, M3GNet IAP provided more accurate energy estimations for effective CSP than the MEGNet model used in GN-OA. ParetoCSP also performs a further structure refinement using M3GNet IAP after obtaining the final optimized structure from the GA, which contributed to its higher accuracy compared to GN-OA, where this step is entirely absent. Fig. 3 shows a performance metric value comparison for some sample crystals. For better visualization, we limited the \(y\)-axis values to \(20\) for Fig. 3a and 3b, and to \(10\) for Fig. 3c and 3d. We found that the default ParetoCSP with M3GNet achieved lower (better) performance metric values than the default GN-OA for all the chosen sample crystals in terms of the ED, HD, and FP metrics, and for the majority of cases for SD and CD. For some crystals (e.g., Ta\({}_{2}\)N\({}_{3}\)O, AgBiSeS, MnAlCuPd, SrWNO\({}_{2}\)), the differences in the performance metric quantities are huge, indicating ParetoCSP's strong dominance over the default GN-OA.

### Performance comparison of CSP algorithms with different energy models

As discussed in the previous section, the M3GNet universal IAP proved to be a better energy predictor than MEGNet. To fairly and objectively evaluate and compare our algorithm's performance, we replaced ParetoCSP's final energy calculator (M3GNet) with the MEGNet GNN for formation energy evaluation. We also replaced MEGNet with M3GNet in GN-OA to show that the M3GNet IAP performs better than MEGNet for predicting the most stable energy for CSP. As a result, we ran experiments on four algorithms: ParetoCSP with M3GNet (default ParetoCSP), ParetoCSP with MEGNet, GN-OA with MEGNet (default GN-OA), and GN-OA with M3GNet. The results of ParetoCSP with M3GNet are discussed in detail in Section 3.2. ParetoCSP with MEGNet outperformed the default GN-OA by a factor of \(\approx 1.31\) in terms of exact structure prediction accuracy. Individually, ParetoCSP with MEGNet achieved \(60\%\) (\(12\) out of \(20\)), \(40\%\) (\(8\) out of \(20\)), and \(13.333\%\) (\(2\) out of \(15\)) accuracy in predicting the structures of binary, ternary, and quaternary crystals, respectively. In comparison, GN-OA with MEGNet achieved accuracies of \(50\%\), \(30\%\), and \(0\%\) for binary, ternary, and quaternary crystals, respectively. This comparison clearly demonstrates that the AFPO-based GA is a more effective structure search method than BO. NiS\({}_{2}\) and EuAlSi are the only two crystals (both hexagonal) whose exact structures GN-OA with MEGNet could predict but ParetoCSP with MEGNet could not. The opposite is true for \(8\) crystals, including GaN\({}_{3}\), GaN\({}_{2}\), BaAlGe, AgBiSeS, etc., predominantly belonging to the tetragonal crystal system.
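The four variants above differ only in the energy model plugged into the search. As a hedged sketch (package APIs and pretrained-model names follow the public `m3gnet` and `megnet` releases and may vary by version; this is not the authors' code), either evaluator can be hidden behind a common callable:

```python
# Hedged sketch of swapping the energy model behind a common interface;
# exact package APIs and pretrained-model names may differ by version.
from m3gnet.models import Relaxer
from megnet.utils.models import load_model
from pymatgen.core import Structure


def m3gnet_final_energy_per_atom(structure: Structure) -> float:
    """Relaxed final energy per atom from the M3GNet universal IAP."""
    result = Relaxer().relax(structure)
    return float(result["trajectory"].energies[-1]) / len(structure)


def megnet_formation_energy_per_atom(structure: Structure) -> float:
    """Formation energy per atom from a pretrained MEGNet model."""
    model = load_model("Eform_MP_2019")          # published pretrained-model name
    return float(model.predict_structure(structure))


# Either callable can be passed to the GA (or BO) as the objective to minimize.
```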
Additionally, ParetoCSP with MEGNet were not successful in predicting any structure that ParetoCSP with M3GNet could not, strongly indicating the necessity for M3GNet as the energy predicting function (outperformed ParetoCSP with MEGNet by a factor of \(\approx 1.86\)). From Fig. 3, we can see that ParetoCSP with M3GNet achieved much lower performance metric values than ParetoCSP with MEGNet for the majority of the cases, indicating its better prediction caliber. Based on the analysis conducted so far, two hypotheses were formulated: firstly, that GN-OA with M3GNet would outperform the default GN-OA, and secondly, that ParetoCSP with M3GNet would outperform GN-OA with M3GNet. As anticipated, GN-OA with M3GNet outperformed the default GN-OA (by a factor of \(\approx 1.5\)), again demonstrating M3GNet IAP as a much better energy model than MEGNet. For binary, ternary, and quarternary crystals, respectively, GN-OA with M3GNet (GN-OA with MEGNet) achieved \(60\%\) (\(50\%\)), \(35\%\) (\(30\%\)), and \(13.333\%\) (\(0\%\)), respectively. Moreover, the default GN-OA did not achieve superiority over GN-OA with MEGNet on any chosen crystal, but the opposite is true for \(8\) crystals including TiCo, VS, HoHSe, CsNaLiC, etc., and a majority of them belongs to the hexagonal crystal system. However, despite the improved performance of GN-OA with M3GNet, it's efficiency still fell short in comparison to ParetoCSP with M3GNet due to the more effective structure search function of the latter, proving both hypothesis true. ParetoCSP with M3GNet outperformed GN-OA with M3GNet by a factor of \(\approx 1.71\). Furthermore, the default ParetoCSP accurately predicted every structure that GN-OA with M3GNet successfully predicted. Again from Fig. 3, we can see that ParetoCSP with M3GNet achieved smaller performance metric values than GN-OA with M3GNet for the majority of the crystals. In fact, for some crystals such as Al\({}_{2}\)FeCo, Ta\({}_{2}\)N\({}_{3}\)O, AgBiSeS, and SrWNO\({}_{2}\), the differences of metric values are enormous. To report the final outcomes, ParetoCSP with M3GNet outperformed all algorithms (\(\approx 1.71\times\) the second best, and \(\approx 1.86\times\) the third best). GN-OA with M3GNet ranked second best, exceeding the performance of the third best ParetoCSP with MEGNet by a small margin (by a factor of \(\approx 1.09\)). The default GN-OA demonstrated the lowest performance compared to all other algorithms. ### Parametric study of ParetoCSP As a multi-objective GA, there are several hyper-parameters to set before running our ParetoCSP algorithm for CSP. Here we conducted experiments with our ParetoCSP algorithm with different parameter settings to evaluate their effect. We selected \(8\) crystals for this study containing both successful and unsuccessful predictions, namely TiCo, Ba\({}_{2}\)Hg, HoHSe, Cd\({}_{2}\)AgPt, SrTiO\({}_{3}\), GaBN\({}_{2}\), MnAlCuPd, and AgBiSeS. The hyper-parameters chosen for the study include population size, crossover probability, mutation probability, and total number of generations used. The default parameter set is mentioned in Supplementary Note S1. All the performance results are presented in Table 4. Figure 3: **Performance metric comparison of different CSP algorithms evaluated over the sample benchmark crystals.** The metric values of ParetoCSP with M3GNet is much smaller (better) than those of other baseline algorithms, which quantitatively shows its superiority. 
In most cases, GN-OA with MEGNet's metric values are the highest (worst), which is aligned with the observation that it demonstrated the poorest performance among all CSP algorithms. First, we examined the effect of different population sizes on the selected crystals. We ran the experiments with five different population sizes. The results in Table 4 show that our algorithm performed best with a population size of \(100\). Conversely, it could not accurately predict the structure of any crystal with a population size of \(30\), except for SrTiO\({}_{3}\). ParetoCSP consistently performed poorly for Ba\({}_{2}\)Hg and Cd\({}_{2}\)AgPt with every population size, while the results of SrTiO\({}_{3}\) showed the opposite trend. Second, we analyzed the performance of our algorithm with varying crossover probabilities. The results indicated that the best performance was achieved with a probability of \(0.8\), and this was the only probability for which ParetoCSP identified the exact structure of MnAlCuPd. Except for GaBN\({}_{2}\) and AgBiSeS, ParetoCSP showed consistent performance with the other crossover probabilities for all five other crystals. We observed that our algorithm performed well with higher crossover probabilities for GaBN\({}_{2}\), and poorly for AgBiSeS with probabilities \(<0.2\). Next, we evaluated ParetoCSP's performance with different mutation probabilities and observed that ParetoCSP performed best with a mutation probability of \(0.01\). Only MnAlCuPd and AgBiSeS had their exact structures successfully predicted with this mutation probability, while for the other crystals except GaBN\({}_{2}\), ParetoCSP performed consistently across the other probabilities. Our algorithm successfully predicted the structure of GaBN\({}_{2}\) for mutation probabilities \(\geq 0.01\). Finally, we ran experiments with different numbers of generations to investigate the impact on algorithm performance. In [36], all experiments were run for \(5000\) steps of the BO. However, our results from Table 4 showed that \(1000\) generations were sufficient for ParetoCSP to achieve the optimal results for all \(8\) crystals. Except for GaBN\({}_{2}\) and AgBiSeS, for all five other crystals, ParetoCSP achieved optimal solutions within \(250\) generations. We would like to mention that we did not evaluate fewer than \(250\) generations, so it is possible that ParetoCSP could perform optimally for these crystals even with a smaller number of generations. No setting of the above-mentioned hyper-parameters enabled accurate prediction of the ground truth structures of Ba\({}_{2}\)Hg and Cd\({}_{2}\)AgPt.
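To make the scale of this one-factor-at-a-time sweep explicit, the sketch below shows how such a parametric study could be organized. The `predict_structure` entry point, its signature, and the default values are assumptions made purely for illustration (the actual defaults are listed in Supplementary Note S1); this is not the released ParetoCSP API.

```python
# Hypothetical driver for the hyper-parameter study; `predict_structure` is a
# stand-in for one full ParetoCSP run and is not part of the released code.
def predict_structure(composition, pop_size, crossover_prob, mutation_prob, n_generations):
    """Run one CSP search and return the best structure found (stub)."""
    ...

# Values examined in Table 4; one factor is varied while the others are held
# at an assumed default setting.
grids = {
    "pop_size":       [30, 60, 100, 200, 300],
    "crossover_prob": [0.1, 0.2, 0.4, 0.6, 0.8],
    "mutation_prob":  [0.0001, 0.001, 0.01, 0.1, 0.5],
    "n_generations":  [250, 500, 1000, 2000, 5000],
}
default = {"pop_size": 100, "crossover_prob": 0.8,
           "mutation_prob": 0.01, "n_generations": 1000}
targets = ["TiCo", "Ba2Hg", "HoHSe", "Cd2AgPt",
           "SrTiO3", "GaBN2", "MnAlCuPd", "AgBiSeS"]

results = {}
for comp in targets:
    for param, values in grids.items():
        for value in values:
            settings = {**default, param: value}   # vary one factor at a time
            results[(comp, param, value)] = predict_structure(comp, **settings)
```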
\begin{table}
\begin{tabular}{||l|c c c c c c c c||}
\cline{2-9}
 & **TiCo** & **Ba\({}_{2}\)Hg** & **HoHSe** & **Cd\({}_{2}\)AgPt** & **SrTiO\({}_{3}\)** & **GaBN\({}_{2}\)** & **MnAlCuPd** & **AgBiSeS** \\ \hline \hline
Pop \(30\) & ✗ & ✗ & ✗ & ✗ & ✓ & ✗ & ✗ & ✗ \\
Pop \(60\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ & ✗ & ✓ \\
Pop \(100\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ \\
Pop \(200\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ & ✗ & ✓ \\
Pop \(300\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ \\ \hline
CP \(0.1\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ & ✗ & ✗ \\
CP \(0.2\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ & ✗ & ✓ \\
CP \(0.4\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ & ✗ & ✓ \\
CP \(0.6\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✗ & ✓ \\
CP \(0.8\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ \\ \hline
MP \(0.0001\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ & ✗ & ✗ \\
MP \(0.001\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ & ✗ & ✗ \\
MP \(0.01\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ \\
MP \(0.1\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ \\
MP \(0.5\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✗ & ✗ \\ \hline
Gen \(250\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ \\
Gen \(500\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & ✗ \\
Gen \(1000\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ \\
Gen \(2000\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ \\
Gen \(5000\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline
\end{tabular}
\end{table}
Table 4: Performance results with different hyper-parameters of ParetoCSP with M3GNet.

### Failure case study

ParetoCSP successfully predicted the structures of \(41\) out of \(55\) benchmark crystals in this research. Here, we conducted a more thorough investigation of the \(14\) unsuccessful predictions. For this, we calculated the performance metric values of these \(14\) structures for all four algorithms discussed in this paper and then experimentally assessed the quality of each algorithm's output. We excluded W\({}_{rmse}\) and W\({}_{mae}\) from this study as all four algorithms failed to predict these structures accurately. The results are presented in Fig. 4 (only two of them are shown here in the main text, and the rest are shown in the Supplementary File). The comparison results for the energy distance metric (ED) are presented in Supplementary Fig. S2a. We limited the \(y\)-axis value to \(80\) for better visualization. ParetoCSP with M3GNet dominated all other algorithms for ED, achieving the lowest errors for \(9\) out of \(14\) crystals. ED is related to the final energy difference between the ground truth and the predicted structure, indicating that the structures predicted by ParetoCSP are energetically closer to the target structures than those predicted by the other algorithms. The only failure case where ParetoCSP had the highest ED value among all algorithms was Li\({}_{2}\)Al. The three performance metrics SD, CD, and HD are all related to the atomic sites of the ground truth and predicted crystals. ParetoCSP with M3GNet again outperformed all other algorithms, achieving the lowest distance scores for a majority of the failure cases, suggesting that the structures predicted by the ParetoCSP algorithm have the closest atomic site configurations to the target structures among all algorithms. We presented the results in Supplementary Fig. S2b, Supplementary Fig. S2c, and Fig. 4a, respectively, with the \(y\)-axis of Supplementary Fig.
S2b limited to \(200\) for visualization purposes. Finally, for the fingerprint metric (FP), which is related to the crystal atomic site fingerprint, ParetoCSP with M3GNet achieved the lowest distance errors for \(11\) out of \(14\) crystals among all algorithms, demonstrating better atomic site prediction quality. The results are shown in Fig. 4b. Li\({}_{2}\)Al is again the only crystal for which the default ParetoCSP's FP value is the highest among all algorithms. The observation that Li\({}_{2}\)Al had the highest ED and FP values for ParetoCSP suggests that the combination of AFPO-based GA and M3GNet might not be the optimal choice for predicting this crystal. On the contrary, ParetoCSP with M3GNet achieved the lowest values for \(4\) out of \(5\) or \(5\) out of \(5\) performance metrics for Ga\({}_{2}\)Te\({}_{3}\), K\({}_{2}\)PdS\({}_{2}\), Sr\({}_{2}\)BBrN\({}_{2}\), ZrCuSiAs, MnCoSnRh, and Ba\({}_{2}\)CeTaO\({}_{6}\), indicating that we are on the right track to predict the structures of these crystals. In summary, each of the performance metrics is related to specific features of the ground truth crystals, and ParetoCSP with M3GNet outperforms all other algorithms, which indicates that it predicts structures of better quality (closer to the ground truth structures) than the other algorithms, even though none of them are exact solutions.

### Trajectory study

To further understand why ParetoCSP works better than the GN-OA algorithm, we utilized the multi-dimensional performance metrics of CSP [47] to examine the search patterns of both optimization algorithms employed in ParetoCSP and GN-OA. For most of the crystals, the number of valid structures generated by ParetoCSP is enormous. For better visualization, we selected six crystals for this study which had a comparatively smaller number of valid structures: SrTiO\({}_{3}\), MnAlCuPd, GaN\({}_{3}\), Al\({}_{2}\)FeCo, Sc\({}_{3}\)TIC, and SrWNO\({}_{2}\). ParetoCSP predicted the exact structures of all these crystals, whereas GN-OA failed to predict the structures of MnAlCuPd, Al\({}_{2}\)FeCo, and SrWNO\({}_{2}\). We used a population size of \(100\) and a total of \(250\) generations for ParetoCSP. For a fair comparison, we ran a total of \(15000\) steps with both GN-OA with MEGNet and GN-OA with M3GNet (GN-OA stopped making progress after \(5000\) steps for all of our targets).

Figure 4: **Performance metric comparison of structure prediction of different algorithms for the \(14\) failure cases of ParetoCSP with M3GNet.** Although not exact, the structures generated by ParetoCSP with M3GNet are closer to the corresponding ground truth structures than those of any other algorithm.

To analyze the structure search process, we computed the distance metrics between the valid structures and the ground truth structure. These distance features were then mapped into two-dimensional points using t-distributed stochastic neighbor embedding (t-SNE) [70]. The purpose of t-SNE is to map data points from a higher-dimensional space to a lower-dimensional space, typically 2D or 3D, while preserving the pairwise distances between the points. The intuition is that data points that are close to each other in the higher dimension will remain close to each other after the mapping to the lower dimension. Subsequently, we visualized the trajectories of the structures during the search by connecting consecutive points if the latter structure had a lower energy than the former one. We presented the trajectories for SrTiO\({}_{3}\) and MnAlCuPd in Fig. 5, and the rest are shown in Supplementary Fig. S3 (see Supplementary Fig.
S4 and S5 for trajectory figures without arrows for better visualization of the structure mapping). The initial points are represented by green triangles, while the ground truth structures are denoted by red stars. First, the distributions of the valid structures generated during the search by ParetoCSP and GN-OA are very different (Fig. 5a and 5d versus Fig. 5b, 5c, 5e, 5f). ParetoCSP's distribution is much more diverse, while GN-OA's generated structures tend to be confined to a narrow region (Fig. 5g), indicating that the algorithm can only generate valid structures along a focused path. This is presumably due to the single-point search characteristic of the BO algorithm. While a focused search is good when the direction is correct, it runs a high risk of getting trapped in a channeled path and thus losing its structure search capability. These observations become clearer when closely examining Fig. 5g and 5h, where the t-SNE maps for all three algorithms are drawn in the same figure (see Supplementary Fig. S6 for combined t-SNE maps for the other chosen crystals). We can see that the points generated by ParetoCSP are more spread out and have more diverse search directions than those of the other algorithms, which accounts for its higher structure search performance. This may explain ParetoCSP's success and GN-OA's failure in predicting the structures of MnAlCuPd, Al\({}_{2}\)FeCo, and SrWNO\({}_{2}\). Another way to understand the structure search efficiency of ParetoCSP and GN-OA is to check the number of valid structures generated during the search process. ParetoCSP generated \(2492\), \(1518\), \(2248\), \(2873\), \(1843\), and \(1633\) valid structures in predicting SrTiO\({}_{3}\), MnAlCuPd, GaN\({}_{3}\), Al\({}_{2}\)FeCo, Sc\({}_{3}\)TIC, and SrWNO\({}_{2}\), respectively, while the original GN-OA with MEGNet generated only \(1003\), \(681\), \(1701\), \(1350\), \(1499\), and \(1066\) valid structures for the same six targets, respectively. GN-OA with M3GNet instead generated slightly more valid structures for SrTiO\({}_{3}\) (\(1049\)), GaN\({}_{3}\) (\(2044\)), and Al\({}_{2}\)FeCo (\(1475\)) but fewer for MnAlCuPd (\(569\)), Sc\({}_{3}\)TIC (\(1165\)), and SrWNO\({}_{2}\) (\(955\)). The numbers of valid structures generated by both GN-OA variants are significantly smaller than those of our ParetoCSP, indicating that the superiority of ParetoCSP may lie in its capability to search effectively by generating more valid structures. This showed that ParetoCSP's AFPO-based GA search function performed much better than the BO used in [36]. Overall, GN-OA struggled to generate valid structures during the search process and wasted a majority of the search dealing with invalid structures. Moreover, the higher percentage of valid structures generated and the more diverse search behavior of ParetoCSP may have contributed to its higher probability of finding the exact structures.
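The trajectory maps in Fig. 5 follow the procedure described above: embed the per-structure distance metrics with t-SNE and connect consecutive structures only when the energy decreased. A minimal sketch of that step is shown below, using scikit-learn's `TSNE`; it assumes the distance metrics and predicted energies have already been collected during the search (the function name is ours, not part of the ParetoCSP code).

```python
import numpy as np
from sklearn.manifold import TSNE

def trajectory_segments(distance_features, energies, perplexity=30, seed=0):
    """Map per-structure distance metrics to 2D and build trajectory segments.

    distance_features: (n_structures, n_metrics) array of distance metrics to
    the ground truth, in generation order. energies: predicted final energy of
    each valid structure, in the same order."""
    # 2D embedding of the distance-metric features of all valid structures.
    emb = TSNE(n_components=2, perplexity=perplexity,
               random_state=seed).fit_transform(np.asarray(distance_features))
    # Connect consecutive structures only when the energy decreased,
    # mirroring how the trajectory arrows in Fig. 5 were drawn.
    segments = [(emb[i], emb[i + 1])
                for i in range(len(emb) - 1)
                if energies[i + 1] < energies[i]]
    return emb, segments
```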
Figure 5: Trajectories of the traversed structures during the search of different CSP algorithms. (a) - (c) show the trajectory for SrTiO\({}_{3}\), and (d) - (f) show the trajectory for MnAlCuPd. The trajectories were drawn by calculating the distance metrics for the valid structures during the search and mapping them into 2D space using t-SNE. Two consecutive points were connected if the latter structure had a lower energy than the former one. (g) and (h) show the t-SNE maps for all three algorithms in the same figure for SrTiO\({}_{3}\) and MnAlCuPd, respectively. The initial and optimal structures for all algorithms are marked with different colors and shapes. The points in ParetoCSP's trajectory are more spread out and have more diverse search directions than those of the other algorithms.

## 4 Discussion

We present ParetoCSP, a CSP algorithm which combines an AFPO-enhanced multi-objective GA as an effective structure search function with the M3GNet universal IAP as an accurate final energy predictor to achieve efficient structure search for CSP. The objective is to effectively capture the complex relationships between atomic configurations and their corresponding energies. First, ParetoCSP uses the genotypic age of each population member as a separate optimization criterion. This leads the algorithm to treat the age as a separate dimension of the multi-objective Pareto front, where the GA aims to generate structures that minimize the final energy per atom while also having low genotypic age. According to the findings of [59], this provides a more extensive search process, which enables NSGA-III to perform better, as shown in the trajectory results in Section 3.7, where we see that ParetoCSP generated many more valid structures during the search process than the other evaluated CSP algorithms. This demonstrates the effective exploration of the crystal structure space by ParetoCSP and its efficient identification of the most stable structures. Overall, we found that ParetoCSP outperforms the GN-OA algorithm by a remarkable factor of \(2.562\) and achieved \(74.55\%\) accuracy overall. The comprehensive experimentation was carried out on a benchmark set of \(55\) crystals with diverse space groups, which shows that the algorithm can efficiently handle a wide range of crystal systems, including complex ternary and quaternary compounds, whereas GN-OA performed poorly on the quaternary crystals and most of the ternary crystals. Moreover, a majority of the crystals GN-OA did predict correctly belong to the cubic crystal system, demonstrating GN-OA's limited capability to explore the structure space of diverse crystal systems. However, all the algorithms show poor performance for crystals belonging to the orthorhombic and monoclinic crystal systems. These performance limits of ParetoCSP can be attributed to either the optimization algorithm or the ML potential. First, we found that for both ParetoCSP and GN-OA, the search process tends to generate a majority of invalid structures, even though ParetoCSP works much better than GN-OA. These invalid structures are a waste of search time. Better algorithms that consider crystal symmetry, or data-driven generative models, may be developed to improve the percentage of valid structures and increase the search efficiency. In ParetoCSP, the M3GNet IAP is used as the final energy predictor during the search process and as the structure relaxer after the search finishes. Compared to MEGNet, the M3GNet IAP proved to be a better choice, since replacing GN-OA's MEGNet with the M3GNet IAP improved its performance by a factor of \(1.5\). Overall, our results suggest the importance of developing stronger universal ML potentials for modern CSP algorithm development. Other IAP models such as TeaNet [50] can be tested to check whether better performance can be achieved with ParetoCSP and compared to the results with M3GNet. Unlike GN-OA, ParetoCSP performs a further refinement of the output structure, which helped generate exact structures. We used the M3GNet IAP for the structure relaxation.
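For this final refinement step, a minimal sketch using the `m3gnet` package's `Relaxer` interface is shown below. The exact calls reflect the publicly documented `m3gnet` API as we understand it, and the file path is purely illustrative; this is not the exact ParetoCSP code.

```python
from pymatgen.core import Structure
from m3gnet.models import Relaxer

# Load the best candidate returned by the GA (path is illustrative only).
candidate = Structure.from_file("best_ga_structure.cif")

# Relax the candidate with the pre-trained M3GNet inter-atomic potential.
relaxer = Relaxer()
result = relaxer.relax(candidate)

refined = result["final_structure"]                       # relaxed cell
final_energy = float(result["trajectory"].energies[-1])   # final predicted energy of the relaxed cell
```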
More advanced structure relaxation methods can be tested instead to obtain better performance. For the first time, we have used a set of seven quantitative performance metrics to compare and investigate the performance of ParetoCSP and the baseline algorithms. We can see from Table 3 that each of the unsuccessful predictions had at least one performance metric value larger than the corresponding ground truth value. Additionally, Fig. 3 shows that ParetoCSP with M3GNet generated better solutions than the other baseline CSP algorithms, as its solutions had much lower performance metric distances (errors). Furthermore, the performance metrics also show that even though ParetoCSP was unable to predict \(14\) crystal structures, it still produced better quality structures than the other CSP algorithms. These metrics can also be used to show, for a specific crystal, whether the algorithm is on the right track to predict its structure. Inspired by the great success of AlphaFold2 [45] for protein structure prediction, which does not rely on first-principles calculations, we believe that data-driven CSP algorithms based on ML deep neural network energy models have great potential and can reach the same level as AlphaFold2. For this reason, we have focused on the performance comparison with the state-of-the-art GN-OA, an ML potential based CSP algorithm, and we did not compare our results with CALYPSO [15] and USPEX [14], even though USPEX also utilizes evolutionary algorithms like ours. These algorithms are extremely slow and are not scalable to complex crystals, as they depend on ab-initio energy calculations, which are computationally very expensive and slow. Currently, they can only deal with simple chemical systems or relatively small crystals (\(<10\) atoms in the unit cell), which is a major disadvantage.

## 5 Conclusion

We have introduced an innovative CSP algorithm named ParetoCSP, which synergizes two key components, the multi-objective GA employing age-fitness Pareto optimization and the M3GNet IAP, for predicting the most stable crystalline material structures. The AFPO-based GA effectively functions as a structure search algorithm, complemented by the M3GNet IAP's role as an efficient final energy predictor that guides the search process. Through comprehensive experimentation involving \(55\) benchmark crystals, our algorithm's potency has been demonstrated, notably surpassing GN-OA with MEGNet and GN-OA with M3GNet by substantial factors of \(2.562\) and \(1.71\), respectively. Utilizing benchmark performance metrics, we have provided an in-depth analysis of the quality of the structures generated by our algorithm. Furthermore, we have quantitatively depicted deviations from the ground truth structures for the failure cases across all algorithms, highlighting ParetoCSP's superior performance in this aspect as well. By means of a trajectory analysis of the generated structures, we have established that ParetoCSP produces a greater percentage of valid structures than GN-OA during the search process due to its enhanced search algorithm. Given this significant progress, we believe that ML potential based CSP algorithms such as ParetoCSP hold immense promise for advancing CSP's boundaries and facilitating the discovery of novel materials with desired properties.

## Contribution

Conceptualization, J.H.; methodology, S.O., J.H., L.W.; software, S.O., J.H.; resources, J.H.; writing - original draft preparation, S.O., J.H., L.W.; writing - review and editing, J.H. and L.W.; visualization, S.O.;
supervision, J.H.; funding acquisition, J.H.

## Acknowledgement

The research reported in this work was supported in part by the National Science Foundation under grants 10013216 and 2311202. The views, perspectives, and content do not necessarily represent the official views of the NSF.
2302.14700
Nonlinear social evolution and the emergence of collective action
Organisms from microbes to humans engage in a variety of social behaviors, which affect fitness in complex, often nonlinear ways. The question of how these behaviors evolve has consequences ranging from antibiotic resistance to human origins. However, evolution with nonlinear social interactions is challenging to model mathematically, especially in combination with spatial, group, and/or kin assortment. We derive a mathematical condition for natural selection with synergistic interactions among any number of individuals. This result applies to populations with arbitrary (but fixed) spatial or network structure, group subdivision, and/or mating patterns. In this condition, nonlinear fitness effects are ascribed to collectives, and weighted by a new measure of collective relatedness. For weak selection, this condition can be systematically evaluated by computing branch lengths of ancestral trees. We apply this condition to pairwise games between diploid relatives, and to dilemmas of collective help or harm among siblings and on spatial networks. Our work provides a rigorous basis for extending the notion of ``actor", in the study of social evolution, from individuals to collectives.
Benjamin Allen, Abdur-Rahman Khwaja, James L. Donahue, Cassidy Lattanzio, Yulia A. Dementieva, Christine Sample
2023-02-28T16:13:01Z
http://arxiv.org/abs/2302.14700v2
# Natural selection for collective action

###### Abstract

Collective action--behavior that arises from the combined actions of multiple individuals--is observed across living beings. The question of how and why collective action evolves has profound implications for behavioral ecology, multicellularity, and human society. Collective action is challenging to model mathematically, due to nonlinear fitness effects and the consequences of spatial, group, and/or family relationships. We derive a simple condition for collective action to be favored by natural selection. A collective's effect on the fitness of each individual is weighted by the relatedness between them, using a new measure of collective relatedness. If selection is weak, this condition can be evaluated using coalescent theory. More generally, our result applies to any synergistic social behavior, in spatial, group, and/or family-structured populations. We use this result to obtain conditions for the evolution of collective help among diploid siblings, subcommunities of a network, and hyperedges of a hypergraph. We also obtain a condition for which of two strategies is favored in a game between siblings, cousins, or other relatives. Our work provides a rigorous basis for extending the notion of "actor", in the study of social behavior, from individuals to collectives.

Collective action is a form of social behavior in which multiple individuals act together, affecting their own fitness as well as that of others [1, 2, 3, 4]. Such action may be helpful, as in ants building "living bridges" for others to cross [5], _Dictyostelium_ cells forming a stalk to raise others into the air to be lofted to new environments [6], or dolphins collaborating to rescue an injured companion [7]. Collective action may also be harmful, as in coalitionary killing among primates [8]. Collective help and harm are both salient to human society [9], and likely have been since our early ancestors [10]. The evolution of social behavior has been studied intensively using a variety of theoretical approaches, including kin selection [11, 12, 13], multilevel selection [14, 15], evolutionary game theory [16, 17], and population genetics [18, 19]. A particularly influential approach is inclusive fitness theory [11, 13, 20], which aims to quantify selection on social behavior in terms of the fitness consequences for the actor and their genetic relatives. These approaches illuminate how the evolution of social behavior depends on patterns of genetic assortment [21, 22], which in turn emerge from the population's family [23, 24, 19], group [14, 15], spatial [25, 3], and/or network structure [26, 27, 28, 29]. However, collective action is challenging to model mathematically. It is inherently nonlinear, in that collective effects differ markedly from the sum of individual contributions. Collectives may vary in size, overlap in membership, and/or change over time. Population structure affects the formation of collectives as well as the consequences for selection. These complications preclude common modeling assumptions such as symmetry, linearity, and homogeneity. Theoretical investigations have focused primarily on public goods scenarios, in which the benefits of collective action are shared equally among a defined group of actors [30, 2, 31, 3, 28, 32]. Little theory exists for how selection leads collectives to act toward those outside the collective [33, 4], or toward different members within the collective.
## Modeling framework We build upon a general mathematical framework for natural selection [34], which allows for arbitrary spatial and/or family structure, mating patterns, and fitness-affecting interactions (Appendix A; Fig. 1). There are two competing alleles, \(A\) and \(a\), at a single genetic locus. Taking a gene's-eye view, we imagine Figure 1: **Modeling framework.****a** We consider a population of alleles at a specific locus, which can be of type \(A\) or \(a\). Each allele resides in a particular genetic site, within an individual. Each time-step, some alleles are replaced by copies of others, as a result of interaction, reproduction, mating, and/or death. This is recorded in a parentage map \(\alpha\), indicating the parent allele of each site in the new state. **b** The process of selection is represented as a Markov chain. State transitions are determined by sampling a parentage map \(\alpha\) from a probability distribution, which depends on the current state and captures all effects of social interaction, spatial structure, mating pattern, and so on. With mutation, there is a unique stationary distribution over states. **c** Multilateral genetic assortment is quantified by collective relatedness \(r_{S,g}\), which characterizes the likelihood that site \(g\) contains allele \(A\) when all sites in set \(S\) do. **d** Under neutral drift, collective relatedness can be computed via Eq. (3), using the expected branch lengths, \(\ell_{S}\), of the tree representing \(S\)’s ancestry. The smaller the coalescence length \(\ell_{S}\), the more likely that sites in \(S\) contain the same allele. a set \(G\) of genetic sites, each housing one allele (Fig. 1a). Haploid individuals contain one site each, diploids two, and so on. The set of sites--and hence the population size--is fixed over time. The allele occupying each site \(g\in G\) is indicated by a binary variable \(x_{g}\), with value 1 if \(g\) contains allele \(A\) and 0 if \(g\) contains \(a\). The variables \(x_{g}\) are collected into a binary vector \(\mathbf{x}\) representing the population state. Selection proceeds as a Markov chain. In each state \(\mathbf{x}\), individuals may interact, migrate, mate, reproduce, and/or die. On the gene level, some alleles are replaced by copies of others, resulting in a new state, \(\mathbf{x^{\prime}}\). The new allele in each site \(g\) is either survived or copied from the allele previously occupying some other site, which we denote by \(\alpha(g)\). Here, \(\alpha\) is a parentage map [34]: a set mapping from \(G\) to itself, indicating the site from which each allele is inherited (Fig. 1a). In each state \(\mathbf{x}\), a probability distribution, over all possible parentage maps, captures the effects of all interactions--as well as all consequences of spatial structure, mating pattern, and inheritance (Mendelian or otherwise)--on the transmission of alleles to the next state. Two parameters, \(u\) and \(\delta\), quantify the rate of mutation and the strength of selection, respectively. For nonzero mutation (\(u>0\)), the process converges to a stationary probability distribution over states \(\mathbf{x}\). We say that selection favors allele \(A\) if, in the low-mutation limit, \(A\) has greater stationary frequency than \(a\). ## Collective relatedness Selection for collective action depends on multilateral patterns of genetic assortment [24, 28]. 
To quantify these patterns, we introduce a measure of the collective relatedness of a set \(S\) of sites to a single site \(g\): \[r_{S,g}=\lim_{u\to 0}\frac{\mathbb{E}\left[\underline{x}_{S}\left(x_{g}-\bar{x}\right)\right]}{\mathbb{E}\left[\bar{x}(1-\bar{x})\right]}. \tag{1}\] Above, \(\underline{x}_{S}=\prod_{h\in S}x_{h}\) has value 1 if all sites in \(S\) contain allele \(A\) and 0 otherwise, \(\bar{x}=\frac{1}{n}\sum_{h\in G}x_{h}\) is the frequency of \(A\), and \(\mathbb{E}\) denotes expectation over the stationary distribution. The numerator in Eq. (1) quantifies whether \(g\) is more or less likely than an average site to contain allele \(A\), when all of \(S\) does. The denominator is the expected allelic variance in the population. As we show in Appendix D, Eq. (1) generalizes standard pairwise relatedness measures--based on covariance [23], identity-by-descent [25], and geometry [35]--and builds upon previous efforts to extend relatedness beyond pairs [21, 36]. Collective relatedness is difficult to evaluate for arbitrary selection strength, but can be systematically computed for neutral drift (\(\delta=0\)) using coalescent theory [37, 38]. The key quantity is the expected total branch length, \(\ell_{S}\), of a tree representing the ancestry of a given set \(S\) (Fig. 1d). These coalescence lengths, \(\ell_{S}\), can be computed by solving the system of linear equations \[\ell_{S}=\begin{cases}|S|+\sum_{\alpha}p(\alpha)\,\ell_{\alpha(S)}&|S|\geq 2\\ 0&|S|=1.\end{cases} \tag{2}\] Above, \(p(\alpha)\) is the probability that parentage map \(\alpha\) occurs; under neutral drift, this probability does not depend on the state \(\mathbf{x}\). Collective relatedness under neutral drift is then given by \[r_{S,g}=\frac{\bar{\ell}_{S}-\ell_{S\cup\{g\}}}{\bar{\ell}}. \tag{3}\] Above, \(\bar{\ell}_{S}\) is the average of \(\ell_{S\cup\{h\}}\) as \(h\) runs over all sites in \(G\), and \(\bar{\ell}\) is the average of \(\ell_{\{h,k\}}\) over all pairs \(h,k\in G\).

## Condition for collective action

We represent collective action using collective fitness effects, \(c_{S,g}\), that quantify how each set of sites \(S\) affects the fitness of each site \(g\). Specifically, if all sites in \(S\) contain allele \(A\), the fitness of each site \(g\) (inside or outside of \(S\)) is altered by \(c_{S,g}\), relative to the all-\(a\) population state. Aggregating over all sets \(S\), the net effect on \(g\)'s fitness in state \(\mathbf{x}\) is \(w_{g}(\mathbf{x})=\sum_{S\subseteq G}c_{S,g}\underline{x}_{S}\). We prove in Appendix E that selection favors allele \(A\) over \(a\) if and only if \[\sum_{g\in G}\sum_{S\subseteq G}c_{S,g}r_{S,g}>0. \tag{4}\] This condition has two complementary interpretations. First, for a given site \(g\), the sum \(\sum_{S\subseteq G}c_{S,g}r_{S,g}\) characterizes the expected fitness effect of all social interactions experienced by an \(A\) allele in site \(g\). \(A\) is favored if the total fitness effect on \(A\) alleles, over all sites \(g\), is positive. Second, for a particular set \(S\) of sites, the sum \(\sum_{g\in G}c_{S,g}r_{S,g}\) has the form of an inclusive fitness effect [11], in that \(S\)'s contribution, \(c_{S,g}\), to the fitness of each site \(g\), is weighted by collective relatedness, \(r_{S,g}\). However, in contrast to standard inclusive fitness theory, the actor, \(S\), is not an individual but a collective--a set of genetic sites whose joint actions affect their own fitness and that of others.
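To make the weak-selection recipe concrete, the sketch below first solves Eq. (2) for the coalescence lengths of all multi-site subsets, given the neutral distribution over parentage maps, and then evaluates Eq. (3) and Condition (4). It is a minimal illustration intended for small populations (the number of subsets grows exponentially with \(n\)); the averaging convention used for \(\bar{\ell}\) (over all ordered pairs of sites, with singleton sets contributing zero) is our reading of the definition above.

```python
import itertools
import numpy as np

def coalescence_lengths(sites, parentage_dist):
    """Solve Eq. (2): ell_S = |S| + sum_alpha p(alpha) * ell_{alpha(S)} for
    |S| >= 2, with ell_S = 0 for singletons. `sites` is a list of site labels;
    `parentage_dist` is a list of (probability, alpha) pairs, where alpha is a
    dict mapping each site to its parent site under neutral drift."""
    subsets = [frozenset(c)
               for r in range(2, len(sites) + 1)
               for c in itertools.combinations(sites, r)]
    index = {S: i for i, S in enumerate(subsets)}
    A = np.eye(len(subsets))
    rhs = np.array([float(len(S)) for S in subsets])
    for i, S in enumerate(subsets):
        for prob, alpha in parentage_dist:
            image = frozenset(alpha[g] for g in S)
            if len(image) >= 2:   # images that coalesce to one site contribute 0
                A[i, index[image]] -= prob
    # The system is solvable whenever the Fixation Axiom holds,
    # since every lineage set eventually coalesces with positive probability.
    ell = np.linalg.solve(A, rhs)
    lengths = {S: ell[i] for S, i in index.items()}
    lengths.update({frozenset([g]): 0.0 for g in sites})
    return lengths

def collective_relatedness(S, g, sites, lengths):
    """Eq. (3): r_{S,g} = (ell_bar_S - ell_{S u {g}}) / ell_bar."""
    S = frozenset(S)
    ell_bar_S = np.mean([lengths[S | {h}] for h in sites])
    ell_bar = np.mean([lengths[frozenset({h, k})] for h in sites for k in sites])
    return (ell_bar_S - lengths[S | {g}]) / ell_bar

def selection_favors_A(effects, sites, lengths):
    """Condition (4): sum over sets S and sites g of c_{S,g} * r_{S,g} > 0.
    `effects` maps (frozenset(S), g) -> c_{S,g} for the nonzero fitness effects."""
    total = sum(c * collective_relatedness(S, g, sites, lengths)
                for (S, g), c in effects.items())
    return total > 0
```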
Condition (4) applies not only to collective action _per se_, but to any fitness-affecting interaction. The net effect of all interactions in state \(\mathbf{x}\) on the fitness of each site \(g\) can be written uniquely in the form [12]\(w_{g}(\mathbf{x})=\sum_{S\subseteq G}c_{S,g}\underline{x}_{S}\), whereupon Condition (4) again determines which allele is favored. Condition (4) is valid for any strength of selection \(\delta>0\). For weak selection (\(\delta\ll 1\)), the collective relatedness coefficients \(r_{S,g}\) can be evaluated using Eq. (3). This provides a method to evaluate weak selection on any nonlinear fitness-affecting behavior, with arbitrary spatial, network, group, and/or mating structure. If the collective fitness effects \(c_{S,g}\) vanish for sets \(S\) above a fixed size, this computation takes polynomial time. ## Collective action among diploid relatives JBS Haldane famously quipped that he would jump into a river to save two brothers, or eight cousins. Haldane's insight is formalized in Hamilton's rule [11, 13]: A behavior providing benefit \(b\) to a relative, at cost \(c\) to oneself, is favored if \(br>c\), where \(r\) quantifies relatedness between actor and recipient. What about other interactions between relatives [21, 24]? Consider an arbitrary two-player game with two phenotypic strategies, Cooperate (C) and Defect (D). \(AA\) individuals have phenotype C, \(aa\)'s have phenotype D, and \(Aa\) heterozygotes have phenotype C or D with probabilities \(h\) and \(1-h\), respectively, where \(0\leq h\leq 1\) quantifies genetic dominance. The payoff to phenotype \(X\) interacting with phenotype \(Y\) is denoted \(\pi_{XY}\), where \(X\) and \(Y\) can be either C or D. This game is played by two relatives in a large randomly-mating population. Their relationship is characterized by the probabilities, \(p\) and \(q\), that their maternally- and paternally-inherited alleles, respectively, descend from a recent common ancestor. For example, maternal half-siblings have \(p=1/2\) and \(q=0\). The average \(r=(p+q)/2\) is Wright's coefficient of relationship [39] (one-half for full siblings, one-eighth for cousins, etc.). Combining Eq. (3) and Condition (4) with standard results in coalescent theory [37, 38], we find (Appendix G) that weak selection favors allele \(A\) (and hence cooperation) if \[-c+br+\frac{2d(h-\frac{1}{2})(r-pq)}{3}>0. \tag{5}\] Above \(c=\frac{1}{2}(\pi_{\rm DC}+\pi_{\rm DD})-\frac{1}{2}(\pi_{\rm CC}+\pi_{\rm CD})\) is the cost of cooperation (averaged over the two phenotypes of interaction partners), \(b=\frac{1}{2}(\pi_{\rm CC}+\pi_{\rm DC})-\frac{1}{2}(\pi_{\rm CD}+\pi_{\rm DD})\) is the benefit to the other, \(d=\frac{1}{2}(\pi_{\rm CC}+\pi_{\rm DD})-\frac{1}{2}(\pi_{\rm CD}+\pi_{\rm DC})\) is the synergistic effect of both employing the same strategy. The first two terms recapitulate Hamilton's rule, while the third captures the joint effects of synergy, \(d\), and genetic dominance, \(h\). The factor \(r-pq\) is positive, unless the individuals are clonal (\(p=q=1\)) or unrelated (\(p=q=0\)), in which case it vanishes. Cooperation is therefore promoted if it is synergistic (\(d>0\)) and mostly dominant (\(h>1/2\)), or anti-synergistic (\(d<0\)) and mostly recessive (\(h<1/2\)). Although we describe this scenario in terms of cooperation, Condition (5) is valid for any two-player, two-strategy game. What if Haldane must collaborate with one or more siblings to save another [33, 19]? Consider a collective of \(m\geq 2\) full siblings. 
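Condition (5) is straightforward to check numerically for any payoff matrix and any pair of relatives. The sketch below is a direct transliteration of the condition; the example payoffs are an additive Prisoner's Dilemma (benefit 3, cost 1), chosen by us for illustration.

```python
def cooperation_favored(payoff, h, p, q):
    """Condition (5) for a two-player, two-strategy game between diploid
    relatives. payoff[X][Y] is the payoff to strategy X against Y, with X, Y
    in {"C", "D"}; h is genetic dominance; p and q are the probabilities that
    the maternally / paternally inherited alleles descend from a common ancestor."""
    c = 0.5 * (payoff["D"]["C"] + payoff["D"]["D"]) - 0.5 * (payoff["C"]["C"] + payoff["C"]["D"])
    b = 0.5 * (payoff["C"]["C"] + payoff["D"]["C"]) - 0.5 * (payoff["C"]["D"] + payoff["D"]["D"])
    d = 0.5 * (payoff["C"]["C"] + payoff["D"]["D"]) - 0.5 * (payoff["C"]["D"] + payoff["D"]["C"])
    r = (p + q) / 2                      # Wright's coefficient of relationship
    return -c + b * r + 2 * d * (h - 0.5) * (r - p * q) / 3 > 0

# Example: an additive Prisoner's Dilemma (benefit 3, cost 1) between full
# siblings (p = q = 1/2), with no dominance (h = 1/2). Synergy d = 0, so the
# condition reduces to Hamilton's rule: -1 + 3*(1/2) > 0, i.e. cooperation is favored.
pd = {"C": {"C": 2.0, "D": -1.0}, "D": {"C": 3.0, "D": 0.0}}
print(cooperation_favored(pd, h=0.5, p=0.5, q=0.5))   # True
```

With this pairwise case in hand, we return to the collective of \(m\) full siblings set up above.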
Each Cooperator in this collective pays cost \(c/m\). Another sibling receives a benefit, \(b_{k}\), depending on the number \(k\leq m\) of Cooperators in the collective. Evaluating Condition (4), we find that weak selection favors cooperation if the benefit from all siblings helping exceeds twice the total cost: \(b_{m}>2c\). Remarkably, this condition does not depend on the intermediate benefits, \(b_{k}\) for \(1\leq k\leq m-1\), nor on the degree of genetic dominance, \(h\). Finally, will siblings work together for a common benefit? Consider a threshold public goods game [2] played by \(m\) siblings. Each Cooperator pays cost \(c\). If all players cooperate, then each receives benefit \(b\); otherwise, no benefits are received. We find that cooperation is favored if \(2br_{m}>c\), where \(r_{m}\) is the intra-relatedness of a collective of \(m\) siblings (Table 1). For large \(m\), this condition reduces to \(b>2c\). ## Collective action on networks and hypergraphs Network structure--representing spatial or social relationships--has a profound effect on the evolution of social behavior [26, 27, 29]. Exact mathematical results have been derived for pairwise interactions [27, 29], but are difficult to obtain for interactions beyond pairs [30, 28, 32]. To apply our framework, we let \(G\) be the set of nodes in a network. Each node represents an individual, with a single heritable type. As a simple model of collective action, suppose that a particular collective \(S\), of size \(m\), may help or harm a particular node \(g\)--inside or outside \(S\)--at some cost to \(S\)'s members (Fig. 2a). Individuals in \(S\) may pay \(c/m\) each to contribute to the action, where \(c>0\) represents the action's total cost. If all individuals in \(S\) contribute, the payoff of target node \(g\) is altered by an amount \(b\); otherwise, there is no effect. Collective help is the case \(b>0\), while \(b<0\) indicates collective harm. Reproduction occurs via death-Birth updating [26, 27, 29]: First, a site is chosen uniformly from the population to be replaced; then, a neighbor is chosen with probability proportional to \((\text{payoff})\times(\text{edge weight})\) to reproduce into the vacancy. Applying Condition (4) (see Appendix H), weak selection favors this collective help or harm if \[bd_{g}\left(r_{S,g}-r_{S,g}^{(2)}\right)>\frac{c}{m}\sum_{h\in S}d_{h}\left(r_ {h}-r_{h}^{(2)}\right). \tag{6}\] Above, \(d_{g}\) is the degree of node \(g\), \(r_{S,g}^{(2)}\) is the expected collective relatedness of \(S\) to two-step random walk neighbors of \(g\), \(r_{h}=r_{\{h\},h}\) is the self-relatedness of site \(h\), and \(r_{h}^{(2)}=r_{\{h\},h}^{(2)}\) is the expected relatedness of \(h\) to its own two-step random walk neighbors. 
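Condition (6) itself is a simple inequality once the weighted degrees and relatedness quantities are known. A minimal sketch, assuming those inputs have been computed beforehand (for instance from coalescence lengths via Eq. (3)):

```python
def collective_action_favored(b, c, S, g, degree,
                              r_collective, r2_collective, r_self, r2_self):
    """Condition (6) for death-Birth updating on a weighted network.
    degree[h]: weighted degree d_h; r_collective = r_{S,g}; r2_collective =
    r^(2)_{S,g}; r_self[h] = r_h; r2_self[h] = r^(2)_h."""
    m = len(S)
    lhs = b * degree[g] * (r_collective - r2_collective)
    rhs = (c / m) * sum(degree[h] * (r_self[h] - r2_self[h]) for h in S)
    return lhs > rhs
```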
The left-hand side of Condition (6) compares the effect of collective action on the target \(g\) (weighted by collective relatedness from \(S\)) to that on \(g\)'s two-step neighbors, who compete with \(g\) for opportunities to \begin{table} \begin{tabular}{r c c c c c} \hline \# of siblings, \(m\) & 1 & 2 & 3 & 4 & 5 \\ \hline Arbitrary dominance, \(h\) & \(\frac{1}{2}\) & \(\frac{17+2h}{48}\) & \(\frac{57+12h-4h^{2}}{192}\) & \(\frac{209+38h+12h^{2}-24h^{3}}{768}\) & \(\frac{801+104h+88h^{2}-32h^{3}-80h^{4}}{3072}\) \\ Recessive (\(h=0\)) & \(\frac{1}{2}\) & \(\frac{17}{48}\) & \(\frac{19}{64}\) & \(\frac{209}{768}\) & \(\frac{267}{1024}\) \\ No dominance (\(h=\frac{1}{2}\)) & \(\frac{1}{2}\) & \(\frac{3}{8}\) & \(\frac{31}{96}\) & \(\frac{19}{64}\) & \(\frac{433}{1536}\) \\ Dominant (\(h=1\)) & \(\frac{1}{2}\) & \(\frac{19}{48}\) & \(\frac{65}{192}\) & \(\frac{235}{768}\) & \(\frac{881}{3072}\) \\ \hline \end{tabular} \end{table} Table 1: **Collective intra-relatedness, \(r_{m}\), of \(m\) full siblings in a diploid population** reproduce. The right-hand side quantifies the effects of costs paid, and is never negative. Collective \(S\) can be favored to help node \(g\) if \(r_{S,g}>r_{S,g}^{(2)}\); that is, if \(S\) is more related to \(g\) than to \(g\)'s two-step neighbors. In this case, help is favored if the benefit-cost ratio \(b/c\) exceeds the threshold \[(b/c)_{S,g}^{*}=\frac{\sum_{h\in S}d_{h}\left(r_{h}-r_{h}^{(2)}\right)}{md_{g} \left(r_{S,g}-r_{S,g}^{(2)}\right)}. \tag{7}\] In contrast, if \(r_{S,g}<r_{S,g}^{(2)}\), then collective help is never favored, and collective harm (\(b<0\)) is favored if \(b/c<(b/c)_{S,g}^{*}\). If \(r_{S,g}=r_{S,g}^{(2)}\) (in particular, if \(g\) and all of its two-step neighbors are within \(S\)) then \(S\) cannot be favored to help or harm \(g\). Using Condition (6), we can determine whether any set of nodes on a given network is favored to help or harm any target. On a large cycle (Fig. 2b, Extended Data Figure 6a-d), a collective of four or more connected nodes is favored to help its own boundary node if \(b>2c\), or the two neighbors of a boundary node if \(b>4c\). Neither help nor harm can be favored to any other node. On heterogeneous networks (Figs. 2cd, Extended Data Figure 6ef), selection can favor extreme collective altruism, in which the benefit is negligible compared Figure 2: **Collective action on simple networks.****a** We consider a scenario in which a collective \(S\) of \(m\) network nodes may help a particular node \(g\). \(A\) individuals within \(S\) pay cost \(c/m\) each; if all pay, then \(g\) receives benefit \(b\). **b** For a large cycle network, a connected collective of size at least four is favored to help its own boundary nodes if \(b>2c\), and the neighbors of these boundary nodes if \(b>4c\). **c** On a windmill network with \(n\gg 1\) blades, a blade is favored to help the hub if \(nb>7c\), which can be satisfied even if the benefit is negligible compared to the cost. In contrast, help to a node within the blade is only favored if \(b>\frac{56}{41}c\). Harmful behavior can be favored toward nodes in other blades, if \(b<-14c\). **d** The β€œspider” network displays similar behavior to the windmill, but help is more readily favored to the inner node of a leg (\(b>\frac{21}{25}c\)) than to the outer leg (\(b>2c\)). Results shown here are for large networks; finite-size results are given in Extended Data Figure 6. to the cost. 
This occurs when the target is a highly-connected neighbor; such "hubs" are critical for the spread of alleles. Toward other targets, collective harm can be favored. A windmill blade (Fig. 2c) is favored to harm a node in another blade if \(b<-14c\), and a spider leg (Fig. 2d) is favored to harm the outer node of another leg if \(b<-7c\). For real-world networks of sociable weavers [40] (_Philetairus socius_) and desert tortoises [41] (_Gopherus agassizii_), small, isolated subcommunities are often favored to help their neighbors, and occasionally their neighbors-of-neighbors (Fig. 3 and Extended Data Figure 7). In some cases, help can be favored even if the benefit is less than the cost. Larger and more centralized subcommunities, in contrast, show only potential for collective harm. We also analyze collective action by hyperedges--representing multilateral human relationships--on real-world hypergraphs [43, 32] (Fig. 4). In this context, behaviors spread to neighbors via imitation, rather than genetic transmission. For an academic coauthorship hypergraph [43] (Fig. 4ab, Extended Data Figure 8), we again find that the propensity to help neighbors--and neighbors-of-neighbors--decreases with collective size. Collective harm can be favored to more distant individuals, even (in one case) when the damage inflicted is less than the cost. For a hypergraph representing attendance at social events [44] (Fig. 4c, Extended Data Figure 9), which is more densely interconnected, no hyperedge is favored to help any individual outside the hyperedge.

Figure 3: **Collective action on animal networks.** We partitioned networks into subcommunities using the Girvan-Newman algorithm [42]. Critical benefit-cost thresholds \((b/c)_{S,g}^{*}\), from each subcommunity to each target, were computed according to Eq. (7). Positive (resp., negative) values indicate potential for collective help (resp., harm); this help or harm is selected if the benefit-cost ratio exceeds \((b/c)_{S,g}^{*}\) in absolute value. Results are shown here for particular subcommunities (indicated by ovals); results for all subcommunities are shown in Extended Data Figure 7. **a,b** Two co-nesting networks of _Philetairus socius_[40]. In **a**, the collective is favored to help its neighbor whenever \(b/c>1.57\), a lower threshold than for two of its own members. The collective in **b** can be favored to help all of its neighbors up to distance 2, but not one of its own members (because this member is irrelevant to the spread of the collective's alleles). **c,d** Co-burrowing networks of _Gopherus agassizii_[41]. In **c**, help to one of the collective members is favored if \(b/c>0.79\), meaning the benefit may be less than the cost. In **d**, such net-negative help can be favored to a neighbor of the collective.

## Synergy and collective actors

The evolution of social behavior is typically studied at the level of individual actors. Natural selection is believed to lead individuals to act as if trying to maximize their inclusive fitness [11, 20]--a sum of fitness effects weighted by relatedness to each recipient. However, multilateral, synergistic interactions make it difficult to identify the inclusive fitness of any given individual [13, 20]. Condition (4) resolves this difficulty by expanding the notion of "actor" to include collectives. Each synergistic effect is ascribed to a collective actor \(S\), comprising all sites involved in the synergy.
The resulting consequences for selection are captured by \(S\)'s "collective inclusive fitness effect", \(w_{S}^{\text{IF}}=\sum_{g\in G}c_{S,g}r_{S,g}\). Here, \(r_{S,g}\) quantifies \(S\)'s shared genetic interest (positive or negative) in site \(g\)'s fitness. To separate \(S\)'s effects on its own members versus outsiders, \(w_{S}^{\text{IF}}\) can be decomposed as \(r_{S}\sum_{g\in S}c_{S,g}+\sum_{g\notin S}c_{S,g}r_{S,g}\), where \(r_{S}=\lim_{u\to 0}\left(\mathbb{E}\left[\underline{x}_{S}-\underline{x}_{S} \bar{x}\right]/\mathbb{E}\left[\bar{x}(1-\bar{x})\right]\right)\) is the common value of \(r_{S,g}\) for all members \(g\) of \(S\). This "intra-relatedness", \(r_{S}\), is always positive, and exceeds \(r_{S,g}\) for any \(g\) outside of \(S\). Figure 4: **Collective action on hypergraphs** We computed critical benefit-cost thresholds for collective action, \((b/c)_{S,g}^{*}\), for hypergraphs representing coauthorship of history research articles [43], and co-attendance at social events [44]. Results for particular hyperedges are shown here; see Extended Data Figures 8 and 9 for all hyperedges and targets. **a** For the coauthorship hypergraph, a four-node hyperedge can be selected to help neighbors, but only for benefit/cost ratios of at least 14.3. **b** A two-node hyperedge is favored to help its neighbor if \(b>0.62c\), and help to neighbors-of-neighbors can be favored as well. Toward other nodes, selection can favor collective harmβ€”in one case this requires only \(b<-0.91c\). **c** For the social event hypergraph, which is more densely interconnected, no hyperedge is favored to help others. Selection favors an allele if the total inclusive fitness effect of all collectives is positive, \(\sum_{S\subseteq G}w_{S}^{\rm IF}>0\). Although every subset of sites \(S\) is included in this total, only those with non-negligible intra-relatedness, \(r_{S}\), and synergistic effects, \(c_{S,g}\), contribute appreciably to selection. Does selection lead collectives to act as if trying to maximize inclusive fitness? In one highly idealized case, yes. Suppose that a mutant allele affects the actions of only a single collective \(S\). We prove that weak selection favors this mutant allele if and only if it increases the quantity \(\sum_{g\in G}w_{g}({\bf x})\,r_{S,g}\), which can be interpreted as \(S\)'s inclusive fitness. In this way, selection would lead \(S\) to act as if maximizing inclusive fitness, if the actions of all other collectives (including those contained in or overlapping with \(S\)) were held fixed. Although conceptually appealing, this result has limited applicability. It is unclear how an allele could affect the actions of only a single collective, and not that of its members or overlapping sets. Moreover, the inclusive fitness interests of a collective can differ from those of its members or subsets (Fig. 5), making simultaneous maximizing behavior impossible. Instead, divergent interests create the potential for evolutionary conflict, such as over worker reproduction in ant colonies [45]. Selection, according to Condition (4), leads not to maximizing behavior, but to conflict and compromise over competing individual and collective prerogatives. Figure 5: **Conflicting inclusive fitness interests within a collective a** On a cycle of size 8, three consecutive sites have conflicting relatedness values to a neighboring target site (indicated in red). 
The closest site has positive relatedness to the target, indicating potential for helpful behavior, while the other two have negative relatedness, indicating potential for harm. **b** For neighboring pairs of sites, collective relatedness to the target is positive for the nearer pair but negative for the further pair. **c** Taken together, the three sites have negative collective relatedness to the target. The outcome of selection reflects an aggregation over these differing individual and collective interests, as quantified by Condition (4). ## Outlook Our work provides a mathematical theory for how the actions of collectives, toward their own members and/or others, are shaped by natural selection. Our main result, Condition (4), shows how selection for collective action--or any other social behavior--depends on genetic assortment (quantified by \(r_{S,g}\)) and synergy (quantified by \(c_{S,g}\)). This genetic assortment can arise from interactions within families or groups, or from spatial clustering on networks and hypergraphs. Small, isolated communities show the greatest propensity to evolve collective help toward neighbors. A robust field of research [46, 47] has demonstrated how collective action is achieved via interactions among individuals. Our results complement this research by illuminating why, and when, such collective action evolves. The notion of collective actors introduced here provides new evolutionary grounding for the idea of shared agency [48]. Moreover, since every multicellular organism is a collection of cells, and every genome a collection of genes, the origin of multicellularity [49] and other transitions in individuality [50] can be understood as the emergence of radical new forms of collective action. In this sense, all action is collective action. ## Data availability No datasets were generated in this work. We made use of publicly available datasets from the Network Data Repository [51] ([http://networkrepository.com/](http://networkrepository.com/)), UCINet [52] ([https://sites.google.com/site/ucinetsoftware/datasets](https://sites.google.com/site/ucinetsoftware/datasets)), and Austin R. Benson [43] ([https://www.cs.cornell.edu/~arb/data/](https://www.cs.cornell.edu/~arb/data/)). ## Code availability Coalescence lengths and benefit-cost thresholds on networks and hypergraphs were computed using MATLAB (version R2022a). Code is available at [https://github.com/Emmanuel-Math-Bio-Research-Group/Collective-Action](https://github.com/Emmanuel-Math-Bio-Research-Group/Collective-Action). To implement the Girvan-Newman algorithm we used the UCINet [52] software package (v6.753), available at [https://sites.google.com/site/ucinetsoftware/](https://sites.google.com/site/ucinetsoftware/). ## Acknowledgements We are grateful to J. Arvid Agren, Alex McAvoy, Joshua Plotkin, and Qi Su for feedback and discussions, and to Julia Shapiro for help with figure design. This project was supported by Grant 62220 from the John Templeton Foundation. ## Author Contributions BA conceived the project. BA, AK, JD, YAD, and CS analyzed the model. BA and CL designed the figures. BA, YAD, and CS supervised student researchers. BA wrote the manuscript. ## Appendix This appendix contains a full mathematical description of our modeling framework, proofs of our main results, computation of examples, and comparisons to previous work. The derivations are mostly self-contained, with only a few reliances on results proven in previous works [54, 55, 34, 56, 57]. 
We assume familiarity with discrete mathematics (sets, relations, functions, etc.) [58] and probability, especially finite Markov chains [59]. Readers may also refer to Refs. [55, 34] for a more pedagogical exposition of the kind of mathematical framework used here. Section A presents our mathematical modeling framework. Sections B and C prove key mathematical results, culminating in a proof of equivalence between various criteria for selection (Theorems C.2 and C.3). Section D presents our definition of collective relatedness, and its relationship to other assortment measures such as coalescence length and identity-by-descent. Our main result regarding collective action is proven in Section E, Theorems E.1 and E.2. Our result on maximization of collective inclusive fitness (in a highly idealized case) is proven in Section F. Finally, we develop the application of our main results to collective action among diploid relatives (Section G) and on networks and hypergraphs (Section H). ## Appendix A Modeling framework We start by introducing the modeling framework from which our results are derived. The framework presented here is a variant on one developed in earlier works [55, 34, 56, 57]. It describes a population of fixed size and spatial structure, which may be haploid, diploid, or otherwise. The relationship to previous formulations of this framework is discussed in Section A.9. ### Set-theoretic notation We will use standard set-theoretic notation throughout: \(\in\) for element, \(\subseteq\) for subset, and \(\varnothing\) for empty set. The cardinality (size) of a finite set \(S\) is denoted \(|S|\). For two sets \(S\) and \(T\), we let \(S^{T}\) denote the set of all \(T\)-indexed tuples of the form \((s_{t})_{t\in T}\), with each \(s_{t}\in S\). For a set mapping \(f:S\to T\), we denote the image of a subset \(S^{\prime}\subseteq S\) by \(f(S^{\prime})\subseteq T\), and the preimage of a subset \(T^{\prime}\subseteq T\) by \(f^{-1}(T^{\prime})\subseteq S\). We also use the shorthand \(f^{-1}(t)=f^{-1}(\{t\})\) for the preimage of a singleton subset \(\{t\}\subseteq T\). ### Genetic sites and individuals Adopting a gene's-eye view, we consider a population of alleles at a particular genetic locus. There is a set \(G\) of genetic sites at which alleles can reside; each site houses a single allele. The total number of sites is denoted \(n=|G|\). The individuals in the population are represented by a fixed set \(I\). The set \(G\) of genetic sites is partitioned into subsets \(G_{i}\), for each \(i\in I\), representing the sites contained in individual \(i\). The number of sites in individual \(i\), denoted \(n_{i}=|G_{i}|\), corresponds to \(i\)'s ploidy (\(n_{i}=1\) if \(i\) is haploid, \(n_{i}=2\) if diploid, and so on). ### Alleles and states There are two competing allele types, \(A\) and \(a\), which are assigned numerical values \(1\) and \(0\), respectively. The allele type occupying each site \(g\in G\) is indicated by a binary variable \(x_{g}\in\{0,1\}\). The overall population state is the binary vector \(\mathbf{x}=(x_{g})_{g\in G}\). The set of all possible states is \(\{0,1\}^{G}\). ### Transitions A transition from one state to another involves two components. The first is a set mapping \(\alpha:G\to G\), indicating the site from which each allele in the new state is inherited (in the case of new offspring) or retained (if the allele survives). 
Thus \(\alpha(g)=h\), for \(g,h\in G\), indicates that the new occupant of site \(g\) is the same as, or a copy of, the allele that occupied site \(h\) in the previous time-step. Our framework does not formally distinguish between these two possibilities, as they both result in transmission of an allele from \(h\) to \(g\). We call \(\alpha\) the _parentage map_, understanding a surviving allele to be its own "parent". The second component is the subset \(U\subseteq G\) of sites that undergo mutation during the transition. In general, \(U\) can be any subset of \(G\), including the empty set \(\varnothing\) (indicating that no mutations occur). The probability that parentage map \(\alpha\) and mutation set \(U\) occur in state \(\mathbf{x}\) is denoted \(p_{\mathbf{x}}(\alpha,U)\). For each state \(\mathbf{x}\), \(\{p_{\mathbf{x}}(\alpha,U)\}_{(\alpha,U)}\) comprises a joint probability distribution on the set of possible combinations of \(\alpha\) and \(U\). We allow the probabilities \(p_{\mathbf{x}}(\alpha,U)\) to depend on the state \(\mathbf{x}\) in an arbitrary way, subject only to a Fixation Axiom introduced in the next subsection. In this way, our framework encompasses a wide variety of models of selection. Spatial, network, or group structure, behavioral interactions, migration, mate choice, and allele transmission (Mendelian or not) are all represented implicitly in the probability distributions \(p_{\mathbf{x}}\). We denote the marginal probabilities of a parentage map \(\alpha\) and a mutation set \(U\), respectively, by \[p_{\mathbf{x}}(\alpha)=\sum_{U\subseteq G}p_{\mathbf{x}}(\alpha,U)\qquad\text{and}\qquad p_{\mathbf{x}}(U)=\sum_{\alpha:G\to G}p_{\mathbf{x}}(\alpha,U).\] (A.1) We also use the notation \(\mathbb{P}_{\mathbf{x}}\) and \(\mathbb{E}_{\mathbf{x}}\) for probabilities and expectations, respectively, under the distribution \(\{p_{\mathbf{x}}(\alpha,U)\}_{(\alpha,U)}\) for a given state \(\mathbf{x}\). For example, the expected number of (self + offspring) of site \(g\in G\), in a single transition from state \(\mathbf{x}\), can be written as \(\mathbb{E}_{\mathbf{x}}[|\alpha^{-1}(g)|]\).

### Fixation Axiom

For the population to evolve as a unit, it should be possible for at least one genetic site to spread its progeny throughout the population. We formalize this with the following axiom: Fixation Axiom. There exists a site \(g\in G\) such that, for each \(h\in G\), there is a finite sequence of parentage maps \(\alpha_{1},\ldots,\alpha_{m}\) with \(p_{\mathbf{x}}(\alpha_{k})>0\) for each \(k=1,\ldots,m\) and \(\mathbf{x}\in\{0,1\}^{G}\), and \(\alpha_{1}\circ\cdots\circ\alpha_{m}(h)=g\).

### Selection Markov chain

A particular model within our framework is defined by a set \(G\) and a collection of probability distributions \(p_{\mathbf{x}}\) for each state \(\mathbf{x}\in\{0,1\}^{G}\), satisfying the Fixation Axiom. Given these ingredients, the process of selection is represented as a Markov chain on \(\{0,1\}^{G}\). We call this the _selection Markov chain_, denoted \(\mathcal{M}=(\mathbf{X}^{t})_{t=0}^{\infty}\). The initial state \(\mathbf{X}^{0}\) may be chosen arbitrarily.
For all subsequent times \(t\geq 1\), state \(\mathbf{X}^{t}\) is constructed from \(\mathbf{X}^{t-1}\) by first sampling a parentage map \(\alpha^{t}\) and mutation set \(U^{t}\) from \(p_{\mathbf{X}^{t-1}}\), and then, for each \(g\in G\), setting \[X_{g}^{t}=\begin{cases}X_{\alpha^{t}(g)}^{t-1}&g\notin U^{t}\\ 1-X_{\alpha^{t}(g)}^{t-1}&g\in U^{t}.\end{cases}\] (A.2) In particular, if there is no mutation (that is, if \(p_{\mathbf{x}}(\varnothing)=1\) for all \(\mathbf{x}\in\{0,1\}^{G}\)) then \(X_{g}^{t}=X_{\alpha^{t}(g)}^{t-1}\) for all \(g\in G\). We can write this compactly as \[\mathbf{X}^{t}=\mathbf{X}_{\alpha^{t}}^{t-1},\] (A.3) where, for any state \(\mathbf{x}\in\{0,1\}^{G}\) and set mapping \(\sigma:G\to G\), \(\mathbf{x}_{\sigma}\) denotes the state with allele \(x_{\sigma(g)}\) in each site \(g\in G\). For some purposes, it will be useful to include the parentage mapping \(\alpha^{t}\) and mutation set \(U^{t}\) as part of the Markov chain state. For this, we define the augmented Markov chain \(\tilde{\mathcal{M}}\) with states \((\mathbf{X}^{t},\alpha^{t},U^{t})\), with \(\mathbf{X}^{t}\), \(\alpha^{t}\) and \(U^{t}\) as defined above. The initial state \((\mathbf{X}^{0},\alpha^{0},U^{0})\) of \(\tilde{\mathcal{M}}\) may be chosen arbitrarily. ### Mutation and selection parameters In order to vary the effects of mutation and selection, we allow the probabilities \(p_{\mathbf{x}}(\alpha,U)\) to additionally depend on two parameters: a mutation parameter \(u\) with \(0\leq u\leq 1\), and a selection intensity parameter \(\delta\) with \(0\leq\delta\leq\epsilon\) for some \(\epsilon>0\). In addition to requiring that the Fixation Axiom be satisfied for all combinations of \(u\) and \(\delta\), we impose the following additional assumptions on the behavior of the probability distributions \(p_{\mathbf{x}}\) with respect to these parameters. The first is a differentiability requirement: (D1) Each probability \(p_{\mathbf{x}}(\alpha,U)\) is jointly twice-differentiable, with continuous second partial derivatives, in both \(u\) and \(\delta\). Second, \(\delta=0\) should represent neutral drift. This means that alleles \(A\) and \(a\) should be interchangeable, and thus the probability distribution on \(\alpha\) and \(U\) should be independent of the state: (D2) For \(\delta=0\), the probabilities \(p_{\mathbf{x}}(\alpha,U)\) are independent of \(\mathbf{x}\). Third, no mutation should occur in the case \(u=0\): (M1) For \(u=0\), \(p_{\mathbf{x}}(U)=0\) for all \(U\neq\varnothing\). Fourth, when \(u>0\), it should be possible for new mutations to arise and sweep to fixation: (M2) For \(u>0\), there exists some \(g\in G\), satisfying the conditions of the Fixation Axiom, such that \(\mathbb{P}_{\mathbf{x}}[g\in U]>0\). Fifth, the probability of multiple mutations should be of order \(u^{2}\) as \(u\to 0\): (M3) For each \(\mathbf{x}\in\{0,1\}^{G}\) and each fixed \(\delta\geq 0\), \(\lim_{u\to 0}\frac{\mathbb{P}_{\mathbf{x}}[|U|\geq 2]}{u}=0\). Sixth, since \(u\) is intended to only parameterize mutation, the marginal probability of each parentage map in a given state should be independent of \(u\): (M4) For each \(\alpha:G\to G\) and each \(\mathbf{x}\in\{0,1\}^{G}\), \(p_{\mathbf{x}}(\alpha)\) is independent of \(u\). Seventh, for \(u<1\), it should be possible for no mutation to occur: (M5) For \(u<1\) and all \(\mathbf{x}\in\{0,1\}^{G}\), \(\mathbb{P}_{\mathbf{x}}[U=\varnothing]>0\). 
Finally, in order to isolate the effects of selection, we assume that probabilities of mutation are the same in the two monoallelic states: (M6) For each \(U\subseteq G\) and \(u\geq 0\), \(p_{\mathbf{a}}(U)=p_{\mathbf{A}}(U)\). Assumption (M6) removes the possibility that mutation can favor one trait over another; thus any differences in the frequency of \(A\) versus \(a\) must be due to selection alone. Without (M6), the effects of mutation and selection are difficult to disentangle [60]. ### Phenotypes Although most of our analysis will be conducted on the gene level, it will in some cases be useful to employ a notion of individual phenotype. For this, we suppose there are two phenotypes, numbered 1 and 0. The phenotype \(\Phi_{i}\in\{0,1\}\) of each individual \(i\in I\) is determined stochastically based on the vector \(\mathbf{x}_{|G_{i}}=(x_{g})_{g\in G_{i}}\) of alleles present in this individual. Specifically, each individual \(i\) has phenotype 1 with some probability \(\varphi_{i}(\mathbf{x}_{|G_{i}})\), and otherwise has phenotype 0. We require that individuals with only \(A\) alleles always have phenotype 1, \(\varphi_{i}(1,\ldots,1)=1\), and those with only \(a\) alleles always have phenotype \(0\), \(\varphi_{i}(0,\ldots,0)=0\). Each individual's phenotype is determined independently of all others'. For example, suppose \(i\) is a diploid individual, with genetic sites \(G_{i}=\{i_{1},i_{2}\}\). Let \(0\leq h\leq 1\) represent the degree of genetic dominance, so that an \(Aa\) heterozygote has phenotype \(1\) or \(0\) with probability \(h\) or \(1-h\), respectively. Then the probability \(\varphi_{i}(\mathbf{x}_{|G_{i}})\) that \(i\) has phenotype \(1\) can be expressed as: \[\mathbb{P}_{\mathbf{x}}[\Phi_{i}=1]=\varphi_{i}(x_{i_{1}},x_{i_{2}})=h\left(x_{i_{1}}+x_{i_{2}}\right)+(1-2h)x_{i_{1}}x_{i_{2}}.\] (A.4) We observe that \(\varphi_{i}(1,1)=1\), \(\varphi_{i}(0,0)=0\), and \(\varphi_{i}(1,0)=\varphi_{i}(0,1)=h\), as required. We collect the phenotypes of all individuals into a vector \(\mathbf{\Phi}=(\Phi_{i})_{i\in I}\), representing the phenotypic state of the population. The phenotypic state \(\mathbf{\Phi}\) depends stochastically on the genotypic state \(\mathbf{x}\), according to the probabilities \(\varphi_{i}(\mathbf{x}_{|G_{i}})\). The probability that transition event \((\alpha,U)\) occurs in phenotypic state \(\mathbf{\Phi}\) is denoted \(\hat{p}_{\mathbf{\Phi}}(\alpha,U)\). Here and below, a hat is used to indicate quantities that depend on the phenotypic state \(\mathbf{\Phi}\) rather than the genotypic state \(\mathbf{x}\). The phenotype-level formalism discussed in this subsection is a special case of the gene-level framework introduced earlier. The gene-level framework can be recovered from the phenotypic one by setting \(p_{\mathbf{x}}(\alpha,U)=\mathbb{E}_{\mathbf{x}}\left[\hat{p}_{\mathbf{\Phi}}(\alpha,U)\right]\), with expectation taken over the probability distribution on \(\mathbf{\Phi}\) in state \(\mathbf{x}\). ### Relationship to prior frameworks The framework introduced here is closely related to those developed in previous works [55, 34, 56, 57], especially Ref. [34]. There are, however, two key differences to highlight. First, we do not explicitly record births and deaths in the present framework. That is, we make no formal distinction between an allele surviving into the next generation versus being replaced by a copy of itself. This simplifies notation, and also allows for surviving alleles to move between sites (e.g. 
via migration of adult individuals), which was not allowed in previous formulations. Second, we do not specify any particular relationship between reproduction and mutation. Previous formulations assumed that offspring alleles acquire mutations independently with some fixed probability. By instead allowing for an arbitrary joint probability distribution on the set \(U\) of mutated sites and the parentage map \(\alpha\), we allow for mutation of surviving alleles, mutation rates that vary across sites, and non-independence of mutation events in a particular state. Despite these differences, some results proven in previous works [55, 34, 56, 57] carry over to the present framework with minimal modification, and will be invoked without proof. ## Appendix B Stationarity and fixation In this section we establish the long-term behavior of the selection Markov chain \(\mathcal{M}\). The case of no mutation is particularly important; we therefore introduce the notation \(\mathcal{M}_{0}\) for the \(u=0\) case of \(\mathcal{M}\). ### Asymptotic behavior of the selection Markov chain There are two cases for the asymptotic behavior of the selection Markov chain \(\mathcal{M}\), depending on the mutation parameter \(u\). For \(u=0\), the population is eventually taken over by one of the two competing alleles, while for \(u>0\), there is a unique stationary distribution, \(\pi_{\mathcal{M}}\). We formalize these observations in the following theorem: **Theorem B.1**.: \(\mathcal{M}_{0}\) _has absorbing states \(\mathbf{a}\) and \(\mathbf{A}\), and all other states are transient in \(\mathcal{M}_{0}\). For \(0<u<1\), \(\mathcal{M}\) has a unique stationary distribution, \(\pi_{\mathcal{M}}\), that satisfies_ \[\lim_{t\to\infty}\mathbb{P}[\mathbf{X}^{t}=\mathbf{x}|\mathbf{X}^{0}=\mathbf{y}]=\pi_{\mathcal{M}}(\mathbf{x}),\] (B.1) _for any pair of states \(\mathbf{x},\mathbf{y}\in\{0,1\}^{G}\)._ Proof.: The claim regarding \(\mathcal{M}_{0}\) is Theorem 1 of Allen and Tarnita [55]. For the \(0<u<1\) case, it suffices to show that \(\mathcal{M}\) has a unique closed communicating class, and is aperiodic on this class. Consider an arbitrary initial state \(\mathbf{X}^{0}=\mathbf{y}\). We will show that states \(\mathbf{A}\) and \(\mathbf{a}\) are both accessible from \(\mathbf{y}\), which will prove that \(\mathcal{M}\) has a unique closed communicating class containing \(\mathbf{A}\) and \(\mathbf{a}\). Consider a site \(g\in G\) and sequence \(\alpha_{1},\ldots,\alpha_{k}\) satisfying the properties specified in the Fixation Axiom and Assumption (M2). Suppose the transition events \((\alpha_{1},\varnothing),\ldots,(\alpha_{k},\varnothing)\) all occur in sequence from initial state \(\mathbf{y}\); this sequence has positive probability by (M5) and the assumed properties of \(g\). The resulting state is \(\mathbf{a}\) or \(\mathbf{A}\), respectively, if \(y_{g}=0\) or \(y_{g}=1\). Now suppose the next transition event has \(g\in U\) (this has positive probability by Assumption (M2)), and that the sequence \((\alpha_{1},\varnothing),\ldots,(\alpha_{k},\varnothing)\) again follows afterwards. The resulting state is now either \(\mathbf{A}\) or \(\mathbf{a}\), respectively, if \(y_{g}=0\) or \(y_{g}=1\). In either case, both \(\mathbf{A}\) and \(\mathbf{a}\) are accessible from \(\mathbf{y}\), and therefore \(\mathcal{M}\) has a unique closed communicating class. 
Aperiodicity follows from the fact that \(\mathbb{P}_{\mathbf{A}}[U=\varnothing]\) and \(\mathbb{P}_{\mathbf{a}}[U=\varnothing]\) are both positive by Assumption (M5); any transition event with \(U=\varnothing\) in state \(\mathbf{a}\) or \(\mathbf{A}\) leaves the state unchanged. ### Ancestral mapping Here we introduce a family of stochastic mappings that record ancestral lineages in the selection Markov chain. **Definition**.: In \(\tilde{\mathcal{M}}\), for any pair of times \(t_{0}\) and \(t\) with \(0\leq t_{0}<t\), we define the _ancestral mapping_ \(A_{t_{0}}^{t}:G\to G\) by \[A_{t_{0}}^{t}=\alpha^{t_{0}+1}\circ\alpha^{t_{0}+2}\circ\cdots\circ\alpha^{t}.\] (B.2) For \(t_{0}=t\), we define \(A^{t}_{t_{0}}\) to be the identity mapping on \(G\). In words, site \(A^{t}_{t_{0}}(g)\in G\) contains the ancestor, at time \(t_{0}\), of the allele occupying site \(g\in G\) at time \(t\). Conversely, the preimage \((A^{t}_{t_{0}})^{-1}(g)\subseteq G\) identifies the locations of descendants, at time \(t\), of the allele occupying site \(g\) at time \(t_{0}\). In the \(t_{0}=t\) case, each allele's "ancestor" is itself--that is, \(A^{t}_{t}(g)=g\). Absent mutation, each allele is a faithful copy of its ancestor. Accordingly, in \(\tilde{\mathcal{M}}_{0}\), \(X^{t}_{g}=X^{t_{0}}_{A^{t}_{t_{0}}(g)}\) for each \(g\in G\) and \(0\leq t_{0}<t\). We can write this property in the shorthand notation of Section A.6 as \[\mathbf{X}^{t}=\mathbf{X}^{t_{0}}_{A^{t}_{t_{0}}}\,.\] (B.3) This property can be proven inductively by iterating Eq. (A.3). For any given initial time \(t_{0}\), the population must ultimately (as \(t\to\infty\)) reach a state where every allele is descended from a single ancestor at time \(t_{0}\). Such an outcome is represented by the ancestral map being constant over \(G\); that is, there is some \(g\in G\) such that \(A^{t}_{t_{0}}(h)=g\) for all \(h\in G\). We formalize this observation in the following lemma: **Lemma B.2**.: _For each fixed \(t_{0}\geq 0\), \((\mathbf{X}^{t},A^{t}_{t_{0}})_{t=t_{0}}^{\infty}\) is a Markov chain. Every state \((\mathbf{x},A_{t_{0}})\) with nonconstant \(A_{t_{0}}\) is transient in this chain. For \(u=0\), all recurrent states are absorbing, and they all have the form \((\mathbf{A},A_{t_{0}})\) or \((\mathbf{a},A_{t_{0}})\) with \(A_{t_{0}}\) constant._ Proof.: That \((\mathbf{X}^{t},A^{t}_{t_{0}})_{t=t_{0}}^{\infty}\) is a Markov chain follows from the fact that \(\alpha^{t}\) is sampled from \(\{p_{\mathbf{X}^{t-1}}(\alpha)\}\) independently of \(\mathbf{X}^{0},\ldots,\mathbf{X}^{t-2}\) and \(\alpha^{0},\ldots,\alpha^{t-1}\). For the claim regarding transience, consider a site \(g\in G\) and sequence \(\alpha_{1},\ldots,\alpha_{m}\) satisfying the properties specified in the Fixation Axiom. Then for any mapping \(A_{t_{0}}:G\to G\) and any site \(h\in G\), we have \(A_{t_{0}}\circ\alpha_{1}\circ\cdots\circ\alpha_{m}(h)=A_{t_{0}}(g)\), and thus \(A_{t_{0}}\circ\alpha_{1}\circ\cdots\circ\alpha_{m}\) is a constant mapping. Since \(p_{\mathbf{x}}(\alpha_{k})>0\) for each \(k=1,\ldots,m\) and \(\mathbf{x}\in\{0,1\}^{G}\), it is possible to transition in \(m\) steps from \(A_{t_{0}}\) to a constant mapping. This proves that all nonconstant mappings are transient. Furthermore, if \(A^{t}_{t_{0}}\) is constant for some \(t\geq t_{0}\), then \(A^{t}_{t_{0}}\circ\alpha=A^{t}_{t_{0}}\) for any \(\alpha:G\to G\), and it follows that \(A^{t^{\prime}}_{t_{0}}=A^{t}_{t_{0}}\) for all \(t^{\prime}\geq t\). The rest of the claim, for \(u=0\), follows from Theorem B.1. 
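The constructions of Sections A.6 and B.2 are straightforward to simulate. The following Python sketch is purely illustrative and is not part of the formal framework: it assumes a simple haploid Moran-type update in which, at each time-step, one site is replaced by a copy drawn from a fitness-weighted parent and only the newly copied allele may mutate. All parameter values and names are arbitrary choices for the example; the sampled parentage maps are composed into \(A^{t}_{0}\) as in Eq. (B.2).

```python
import numpy as np

# Illustrative sketch (not part of the formal framework): a haploid Moran-type
# instance of the selection Markov chain, using the update rule of Eq. (A.2)
# and the ancestral mapping of Eq. (B.2). Parameters are arbitrary.
rng = np.random.default_rng(1)
n, u, delta = 10, 0.01, 0.1   # number of sites, mutation parameter, selection intensity

def sample_transition(x):
    """Sample (alpha, U) in state x: one site is replaced by a copy of a
    fitness-weighted parent; only the newly copied allele may mutate."""
    fitness = 1.0 + delta * x                     # allele A (= 1) gets a small fitness bonus
    parent = rng.choice(n, p=fitness / fitness.sum())
    replaced = rng.integers(n)
    alpha = np.arange(n)                          # surviving alleles are their own "parents"
    alpha[replaced] = parent                      # new occupant of `replaced` descends from `parent`
    U = np.array([replaced]) if rng.random() < u else np.array([], dtype=int)
    return alpha, U

def step(x):
    """One transition of the selection Markov chain, Eq. (A.2)."""
    alpha, U = sample_transition(x)
    x_new = x[alpha]                              # inherit alleles along the parentage map
    x_new[U] = 1 - x_new[U]                       # flip alleles at mutated sites
    return x_new, alpha

x = rng.integers(0, 2, size=n)                    # arbitrary initial state
A = np.arange(n)                                  # A^0_0 is the identity map
for _ in range(2000):
    x, alpha = step(x)
    A = A[alpha]                                  # A^t_0 = A^{t-1}_0 composed with alpha^t

print("final state:", x)
print("ancestral map constant:", np.unique(A).size == 1)
```

Because nonconstant ancestral maps are transient (Lemma B.2), a sufficiently long run of this sketch should report a constant composed map, with all sites tracing back to a single time-\(0\) ancestor. 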
### The low-mutation limit Many of our key results pertain to the low-mutation (\(u\to 0\)) limit of \(\mathcal{M}\). #### b.3.1 Limiting stationary distribution The stationary distribution \(\pi_{\mathcal{M}}\) has a well-defined limit as \(u\to 0\) [55], which we denote by \(\pi_{\mathcal{M}_{0}}\): \[\pi_{\mathcal{M}_{0}}(\mathbf{x})=\lim_{u\to 0}\pi_{\mathcal{M}}(\mathbf{x}).\] (B.4) This limiting distribution, \(\pi_{\mathcal{M}_{0}}\), is nonzero only for the monoallelic states \(\mathbf{A}\) and \(\mathbf{a}\) [54, 55]. While \(\pi_{\mathcal{M}_{0}}\) is a stationary distribution for \(\mathcal{M}_{0}\), it is not unique in this regard; indeed, any probability distribution supported only on states \(\mathbf{A}\) and \(\mathbf{a}\) is stationary for \(\mathcal{M}_{0}\). #### b.3.2 Appearance of mutations Since the selection Markov chain becomes concentrated on the monoallelic states \(\mathbf{a}\) and \(\mathbf{A}\) as \(u\to 0\), the mutations of relevance to selection are those that arise in these two states. This motivates the following definition of site-specific mutation rates: **Definition**.: For each site \(g\in G\), we define the _mutation rate \(\nu_{g}\) at \(g\)_ as \[\nu_{g}=\frac{d\,\mathbb{P}_{\mathbf{A}}[g\in U]}{du}\bigg{|}_{u=0}=\frac{d\,\mathbb{P}_{\mathbf{a}}[g\in U]}{du}\bigg{|}_{u=0}.\] (B.5a) More generally, for any subset \(S\subseteq G\), we define \[\nu_{S}=\frac{d\,\mathbb{P}_{\mathbf{a}}[S\cap U\neq\varnothing]}{du}\bigg{|}_{u=0}=\frac{d\,\mathbb{P}_{\mathbf{A}}[S\cap U\neq\varnothing]}{du}\bigg{|}_{u=0}.\] (B.5b) The second equalities in Eqs. (B.5a) and (B.5b) are due to Assumption (M6). It follows from Assumption (M3) that \(\nu_{S}=\sum_{g\in S}\nu_{g}\) for each \(S\subseteq G\). Assumption (M3) also implies that the transition probability from \(\mathbf{a}\) to any state \(\mathbf{x}\), which we denote by \(P_{\mathbf{a}\to\mathbf{x}}\), can be expanded under low mutation as \[P_{\mathbf{a}\to\mathbf{x}}=\begin{cases}1-u\nu_{G}+\mathcal{O}(u^{2})&\text{if }\mathbf{x}=\mathbf{a}\\ u\nu_{g}+\mathcal{O}(u^{2})&\text{if }x_{g}=1\text{ and }x_{h}=0\text{ for all }h\neq g\\ \mathcal{O}(u^{2})&\text{otherwise.}\end{cases}\] (B.6a) Similarly, for transitions from \(\mathbf{A}\), we have \[P_{\mathbf{A}\to\mathbf{x}}=\begin{cases}1-u\nu_{G}+\mathcal{O}(u^{2})&\text{if }\mathbf{x}=\mathbf{A}\\ u\nu_{g}+\mathcal{O}(u^{2})&\text{if }x_{g}=0\text{ and }x_{h}=1\text{ for all }h\neq g\\ \mathcal{O}(u^{2})&\text{otherwise.}\end{cases}\] (B.6b) **Definition**.: We define the _mutant appearance distribution from \(\mathbf{a}\)_, \(\mu_{A;a}\), as a probability distribution on \(\{0,1\}^{G}-\{\mathbf{a}\}\), characterizing the limit as \(u\to 0\) of the probability of reaching a given state via a transition away from \(\mathbf{a}\): \[\mu_{A;a}(\mathbf{x})=\lim_{u\to 0}\frac{P_{\mathbf{a}\to\mathbf{x}}}{1-P_{\mathbf{a}\to\mathbf{a}}}.\] (B.7a) The _mutant appearance distribution from \(\mathbf{A}\)_, \(\mu_{a;A}\), is defined similarly as a probability distribution on \(\{0,1\}^{G}-\{\mathbf{A}\}\): \[\mu_{a;A}(\mathbf{x})=\lim_{u\to 0}\frac{P_{\mathbf{A}\to\mathbf{x}}}{1-P_{\mathbf{A}\to\mathbf{A}}}.\] (B.7b) We also define the _two-sided mutant appearance distribution_, \(\mu\), on \(\{0,1\}^{G}\), by first sampling from either \(\mu_{A;a}\) or \(\mu_{a;A}\) with probabilities \(\pi_{\mathcal{M}_{0}}(\mathbf{a})\) and \(\pi_{\mathcal{M}_{0}}(\mathbf{A})\), respectively: \[\mu(\mathbf{x})=\pi_{\mathcal{M}_{0}}(\mathbf{a})\mu_{A;a}(\mathbf{x})+\pi_{\mathcal{M}_{0}}(\mathbf{A})\mu_{a;A}(\mathbf{x}).\] (B.7c) It follows from Eq. (B.6) that the mutant appearance distributions are concentrated on states with one mutant allele, whose location \(g\) is chosen proportionally to the mutation rate \(\nu_{g}\): \[\mu_{A;a}(\mathbf{x})=\begin{cases}\frac{\nu_{g}}{\nu_{G}}&\text{if $x_{g}=1$ and $x_{h}=0$ for all $h\neq g$}\\ 0&\text{otherwise}\end{cases}\] (B.8a) \[\mu_{a;A}(\mathbf{x})=\begin{cases}\frac{\nu_{g}}{\nu_{G}}&\text{if $x_{g}=0$ and $x_{h}=1$ for all $h\neq g$}\\ 0&\text{otherwise.}\end{cases}\] (B.8b) #### b.3.3 Rare-mutation lemma A lemma proven in Ref. [57], applied to the framework described here, provides a very useful connection between the \(u\to 0\) limit and the \(u=0\) case of \(\mathcal{M}\): **Lemma B.3**.: _Let \(f:\{0,1\}^{G}\to\mathbb{R}\) be any function with \(f(\mathbf{A})=f(\mathbf{a})=0\). Then_ \[\left.\frac{d\operatorname{\mathbb{E}}_{\pi_{\mathcal{M}}}[f]}{du}\right|_{u=0}=\nu_{G}\sum_{t=0}^{\infty}\operatorname{\mathbb{E}}_{\mathcal{M}_{0}}[f(\mathbf{X}^{t})|\mathbf{X}^{0}\sim\mu],\] (B.9) _and the sum on the right converges absolutely._ This lemma allows us to pass back and forth between low-mutation limits under the stationary distribution, \(\pi_{\mathcal{M}}\), and expected sums over the mutation-free process, \(\mathcal{M}_{0}\). Motivated by this result, we define the operator \(\langle\ \rangle\), on functions \(f:\{0,1\}^{G}\to\mathbb{R}\) satisfying \(f(\mathbf{A})=f(\mathbf{a})=0\), so that \(\langle f\rangle\) is equal to both sides of Eq. (B.9): \[\langle f\rangle=\left.\frac{d\operatorname{\mathbb{E}}_{\pi_{\mathcal{M}}}[f]}{du}\right|_{u=0}=\nu_{G}\sum_{t=0}^{\infty}\operatorname{\mathbb{E}}_{\mathcal{M}_{0}}[f(\mathbf{X}^{t})|\mathbf{X}^{0}\sim\mu].\] (B.10) ### Fixation probability We are interested in whether or not a mutant lineage, starting from a single allele, will eventually take over the population. To this end, we define the fixation probability of alleles \(A\) and \(a\): **Definition**.: The _fixation probabilities_ \(\rho_{A}\) and \(\rho_{a}\) are defined as the probabilities of becoming absorbed in states \(\mathbf{A}\) and \(\mathbf{a}\), respectively, in \(\mathcal{M}_{0}\) with initial state sampled from the appropriate mutant appearance distribution: \[\rho_{A}=\lim_{t\to\infty}\operatorname{\mathbb{P}}_{\mathcal{M}_{0}}[\mathbf{X}^{t}=\mathbf{A}|\mathbf{X}^{0}\sim\mu_{A;a}]\] \[\rho_{a}=\lim_{t\to\infty}\operatorname{\mathbb{P}}_{\mathcal{M}_{0}}[\mathbf{X}^{t}=\mathbf{a}|\mathbf{X}^{0}\sim\mu_{a;A}].\] Theorem 2 of Fudenberg and Imhof [54] implies that the fixation probabilities are related to the limiting stationary distribution \(\pi_{\mathcal{M}_{0}}\) by \[\pi_{\mathcal{M}_{0}}(\mathbf{x})=\begin{cases}\dfrac{\rho_{A}}{\rho_{A}+\rho_{a}}&\mathbf{x}=\mathbf{A}\\ \dfrac{\rho_{a}}{\rho_{A}+\rho_{a}}&\mathbf{x}=\mathbf{a}\\ 0&\text{otherwise.}\end{cases}\] (B.11) ## Appendix C Fitness and selection We now turn to the quantification of selection, both in particular states and in the overall selection process. The results here pertain either to no mutation (\(u=0\)) or to the low-mutation (\(u\to 0\)) limit. The main results are Theorems C.2 and C.3, which show the equivalence between different measures of success. ### Lineage fitness The quantification of fitness is a perennial topic of debate in evolutionary theory [61, 62, 63]. 
Here, we follow the principle that "there is no fitness but fitness, and the lineage is its bearer" [64] by defining fitness as an attribute of a genetic lineage originating at a particular site in a particular state: **Definition**.: The _lineage fitness_ \(W_{g}(\mathbf{x})\), of site \(g\in G\) in state \(\mathbf{x}\), is defined as the expected number of long-term descendants of the occupant of \(g\), when starting from initial state \(\mathbf{x}\): \[W_{g}(\mathbf{x})=\lim_{t\to\infty}\mathbb{E}_{\tilde{\mathcal{M}}}\left[|(A_{0}^{t})^{-1}(g)|\,\Big{|}\,\mathbf{X}^{0}=\mathbf{x}\right].\] (C.1) Since total population size is fixed, the total fitness over all sites in each state must equal the population size \(n\): \[\sum_{g\in G}W_{g}(\mathbf{x})=\lim_{t\to\infty}\mathbb{E}_{\tilde{\mathcal{M}}}\left[\sum_{g\in G}|(A_{0}^{t})^{-1}(g)|\,\bigg{|}\,\mathbf{X}^{0}=\mathbf{x}\right]=n.\] (C.2) For \(u=0\), Eq. (C.1) implies the following recurrence equation on lineage fitness: \[W_{g}(\mathbf{x})=\mathbb{E}_{\mathbf{x}}\left[\sum_{h\in\alpha^{-1}(g)}W_{h}\left(\mathbf{x}_{\alpha}\right)\right]=\sum_{\alpha}p_{\mathbf{x}}(\alpha)\sum_{h\in\alpha^{-1}(g)}W_{h}(\mathbf{x}_{\alpha}).\] (C.3) Above, \(\mathbf{x}_{\alpha}\) is the state for which each site \(g\in G\) contains allele \(x_{\alpha(g)}\). Eqs. (C.2) and (C.3) form a system of linear equations for the lineage fitness \(W_{g}(\mathbf{x})\) of each site \(g\) in each state \(\mathbf{x}\). Unfortunately, this system is difficult to solve in general, and we will need to introduce other fitness measures to obtain tractable results. ### Neutral drift and reproductive value For neutral drift (\(\delta=0\)), the probabilities of transition events--and hence all quantities derived from them--are independent of the state \(\mathbf{x}\), according to Assumption (D2). We indicate quantities under neutral drift using a superscript \({}^{\circ}\); for example, the probability of parentage map \(\alpha\) under neutral drift is denoted \(p^{\circ}(\alpha)\). The neutral stationary distribution, \(\pi^{\circ}_{\mathcal{M}}\), is symmetric with respect to interchange of alleles \(A\) and \(a\); see Propositions 2 and 3 of Ref. [34] for a formal statement and proof. It follows that for each \(g\in G\), \[\mathbb{E}^{\circ}_{\pi_{\mathcal{M}}}[x_{g}]=\frac{1}{2}.\] (C.4) This result can also be obtained from Theorem 3.3 of McAvoy et al. [65]. Neutral drift also leads to the key concept of reproductive value [66, 67, 68]: **Definition**.: The _reproductive value (RV)_ of each site \(g\in G\), denoted \(v_{g}\), is its lineage fitness under neutral drift: \(v_{g}=W^{\circ}_{g}\). Applying Eqs. (C.2) and (C.3), we obtain the following system of recurrence relations for reproductive value: \[v_{g}=\sum_{\alpha}p^{\circ}(\alpha)\sum_{h\in\alpha^{-1}(g)}v_{h}\qquad\text{for all }g\in G\] (C.5a) \[\sum_{g\in G}v_{g}=n.\] (C.5b) These recurrence relations uniquely determine the reproductive values \(v_{g}\) for each \(g\in G\) [69, 34]. ### Fitness increments While the definition of lineage fitness in Section C.1 is conceptually natural, it is difficult to apply because it pertains to the far future (\(t\to\infty\)). To quantify selection, we first need to isolate how fitness is affected by events in the current state only. To do this, we consider a hypothetical process in which selection operates in the current time-step only, and all steps thereafter follow neutral drift. 
In such a process, the expected number of long-term descendants of site \(g\) in state \(\mathbf{x}\) is \[\mathbb{E}_{\mathbf{x}}\left[\sum_{h\in\alpha^{-1}(g)}\lim_{t\to\infty}\mathbb{E}^{\circ}_{\tilde{\mathcal{M}}}\left[|(A_{1}^{t})^{-1}(h)|\,\Big{|}\,\mathbf{X}^{1}=\mathbf{x}_{\alpha}\right]\right]=\mathbb{E}_{\mathbf{x}}\left[\sum_{h\in\alpha^{-1}(g)}W^{\circ}_{h}\right]=\mathbb{E}_{\mathbf{x}}\left[\sum_{h\in\alpha^{-1}(g)}v_{h}\right].\] To isolate the effects of selection, we subtract the expected number of long-term descendants under neutral drift only, \(W_{g}^{\circ}=v_{g}\). This motivates the following definition of fitness increments: **Definition**.: The _fitness increment_ of site \(g\in G\) in state \(\mathbf{x}\) is \[w_{g}(\mathbf{x})=\mathbb{E}_{\mathbf{x}}\left[\sum_{h\in\alpha^{-1}(g)}v_{h}\right]-v_{g}.\] (C.6) In words, \(w_{g}(\mathbf{x})\) is the expected difference between the total reproductive value of the offspring of the allele occupying \(g\) in the next time-step (counting the allele itself, if it survives) and the reproductive value of site \(g\). The total fitness increment of all sites is zero, for each state \(\mathbf{x}\), since the total reproductive value of all sites is constant: \[\begin{split}\sum_{g\in G}w_{g}(\mathbf{x})&=\mathbb{E}_{\mathbf{x}}\left[\sum_{g\in G}\sum_{h\in\alpha^{-1}(g)}v_{h}\right]-\sum_{g\in G}v_{g}\\ &=\sum_{h\in G}v_{h}-\sum_{g\in G}v_{g}\\ &=0.\end{split}\] (C.7) Above, we have used the fact that for any parentage map \(\alpha:G\to G\), each \(h\in G\) is an element of \(\alpha^{-1}(g)\) for exactly one \(g\in G\), namely, \(g=\alpha(h)\). It also follows from Eq. (C.5a) that the fitness increment of each site is zero under neutral drift: \(w_{g}^{\circ}=0\) for all \(g\in G\). ### Selection increments We quantify selection between the \(A\) and \(a\) alleles by looking at changes in the _RV-weighted frequency_, defined in each state \(\mathbf{x}\) as \[\hat{x}=\frac{1}{n}\sum_{g\in G}x_{g}v_{g}.\] (C.8) Note that we have \(\hat{x}=0\) in state \(\mathbf{x}=\mathbf{a}\) and \(\hat{x}=1\) in state \(\mathbf{x}=\mathbf{A}\). We denote the RV-weighted frequency at time \(t\) in \(\mathcal{M}\) by \(\hat{X}^{t}\). \(\hat{X}^{t}\) is a martingale under neutral drift without mutation, meaning that \(\mathbb{E}_{\mathcal{M}_{0}}^{\circ}[\hat{X}^{t+1}|\mathbf{X}^{t}=\mathbf{x}]=\hat{x}\). **Definition**.: We define the _selection increment_ in state \(\mathbf{x}\), denoted \(\Delta(\mathbf{x})\), as the expected change (absent mutation) in \(\hat{x}\) from state \(\mathbf{x}\): \[\Delta(\mathbf{x})=\mathbb{E}_{\mathcal{M}_{0}}[\hat{X}^{t+1}-\hat{X}^{t}|\mathbf{X}^{t}=\mathbf{x}].\] (C.9) The following lemma, an instance of the Price equation [70], gives an expression for \(\Delta(\mathbf{x})\) in terms of fitness increments. Here and below, \(\bar{x}=\frac{1}{n}\sum_{g\in G}x_{g}\) denotes the (unweighted) frequency of allele \(A\) in state \(\mathbf{x}\). **Lemma C.1**.: _For \(u=0\), in each state \(\mathbf{x}\),_ \[\Delta(\mathbf{x})=\frac{1}{n}\sum_{g\in G}x_{g}w_{g}(\mathbf{x})=\frac{1}{n}\sum_{g\in G}\left(x_{g}-\bar{x}\right)w_{g}(\mathbf{x}).\] (C.10) Proof.: We first observe that, from Eqs. (A.3) and (C.8), \[\mathbb{E}[\hat{X}^{t+1}|\mathbf{X}^{t}=\mathbf{x}]=\frac{1}{n}\,\mathbb{E}_{\mathbf{x}}\left[\sum_{g\in G}\sum_{h\in\alpha^{-1}(g)}v_{h}x_{\alpha(h)}\right]=\frac{1}{n}\,\mathbb{E}_{\mathbf{x}}\left[\sum_{g\in G}\sum_{h\in\alpha^{-1}(g)}v_{h}x_{g}\right].\] (C.11) Substituting this and Eq. (C.8) into Eq. 
(C.9), we have \[\Delta(\mathbf{x}) =\frac{1}{n}\,\mathbb{E}_{\mathbf{x}}\left[\sum_{g\in G}\sum_{h \in\alpha^{-1}(g)}v_{h}x_{g}-\sum_{g\in G}v_{g}x_{g}\right]\] \[=\frac{1}{n}\sum_{g\in G}x_{g}\left(\mathbb{E}_{\mathbf{x}}\left[ \sum_{h\in\alpha^{-1}(g)}v_{h}\right]-v_{g}\right)\] \[=\frac{1}{n}\sum_{g\in G}x_{g}w_{g}(\mathbf{x}).\] This proves the first equality in Eq. (C.10), and the second follows from Eq. (C.7). For neutral drift (\(\delta=0\)), the selection increment is zero in each state, \(\Delta^{\circ}=0\), since \(w_{g}^{\circ}=0\) for each \(g\in G\). We also observe that \(\Delta(\mathbf{A})=\Delta(\mathbf{a})=0\), reflecting the fact that, absent mutation, the RV-weighted frequency \(\hat{x}\) cannot change from states \(\mathbf{A}\) or \(\mathbf{a}\). ### Equivalence of success criteria The success of an allele can be quantified in a number of ways. Commonly used measures are based on expected frequency, on fixation probability, and on fitness-based measures. Here we show that these measures are equivalent in our framework, in the \(u\to 0\) limit. Similar equivalencies have been proven for other models and frameworks [25, 71, 72, 55, 60, 12, 34]. **Theorem C.2**.: _The following success criteria are equivalent:_ 1. \(\pi_{\mathcal{M}_{0}}(\mathbf{A})>\frac{1}{2}>\pi_{\mathcal{M}_{0}}(\mathbf{ a})\)_,_ 2. \(\rho_{A}>\rho_{a}\)_,_ 3. \(\langle\Delta\rangle>0\)_._ Above, the brackets \(\langle\;\rangle\) in (iii) refer to the operator defined in Eq. (B.10). Proof.: The equivalence (i) \(\Leftrightarrow\) (ii) follows directly from Eq. (B.11). For (ii) \(\Leftrightarrow\) (iii), we first develop expressions for \(\rho_{A}\) and \(\rho_{a}\) in terms of \(\Delta(\mathbf{x})\). Since the weighted frequency \(\hat{x}\) is \(0\) and \(1\), respectively, in states \(\mathbf{a}\) and \(\mathbf{A}\), we can express fixation probability in terms of the limiting expectation of \(\hat{X}^{t}\): \[\rho_{A} =\lim_{t\rightarrow\infty}\mathbb{E}_{\mathcal{M}_{0}}[\hat{X}^{ t}\,|\,\mathbf{X}^{0}\sim\mu_{A;a}]\] (C.12a) \[\rho_{a} =1-\lim_{t\rightarrow\infty}\mathbb{E}_{\mathcal{M}_{0}}[\hat{X} ^{t}\,|\,\mathbf{X}^{0}\sim\mu_{a;A}].\] (C.12b) We let \(\hat{\mu}\) denote the expected RV-weighted frequency of a new mutant type: \[\hat{\mu}=\mathbb{E}_{\mu_{A;a}}[\hat{X}]=\mathbb{E}_{\mu_{a;A}}[1-\hat{X}]= \frac{1}{n\nu_{G}}\sum_{g\in G}\nu_{g}v_{g}.\] (C.13) Then, using Eqs. (C.12a) and (C.9) we can express the fixation probability of \(A\) as \[\rho_{A} =\mathbb{E}_{\mu_{A;a}}[\hat{X}]+\sum_{t=0}^{\infty}\mathbb{E}_{ \mathcal{M}_{0}}[\hat{X}^{t+1}-\hat{X}^{t}\,|\,\mathbf{X}^{0}\sim\mu_{A;a}]\] \[=\hat{\mu}+\sum_{t=0}^{\infty}\mathbb{E}_{\mathcal{M}_{0}}[\Delta (\mathbf{X}^{t})\,|\,\mathbf{X}^{0}\sim\mu_{A;a}].\] (C.14) Similarly, using Eq. (C.12b), the fixation probability of \(a\) can be written as \[\rho_{a} =1-\left(\mathbb{E}_{\mu_{a;A}}[\hat{X}]+\sum_{t=0}^{\infty} \mathbb{E}_{\mathcal{M}_{0}}[\hat{X}^{t+1}-\hat{X}^{t}\,|\,\mathbf{X}^{0}\sim \mu_{a;A}]\right)\] \[=\mathbb{E}_{\mu_{a;A}}[1-\hat{X}]-\sum_{t=0}^{\infty}\mathbb{E}_ {\mathcal{M}_{0}}[\Delta(\mathbf{X}^{t})\,|\,\mathbf{X}^{0}\sim\mu_{a;A}]\] \[=\hat{\mu}-\sum_{t=0}^{\infty}\mathbb{E}_{\mathcal{M}_{0}}[\Delta (\mathbf{X}^{t})\,|\,\mathbf{X}^{0}\sim\mu_{a;A}].\] (C.15) Combining these expressions with Eqs. 
(B.7c), (B.10), and (B.11) provides the relationship between \(\langle\Delta\rangle\) and the fixation probabilities: \[\langle\Delta\rangle =\nu_{G}\sum_{t=0}^{\infty}\mathbb{E}_{\mathcal{M}_{0}}[\Delta( \mathbf{X}^{t})|\mathbf{X}^{0}\sim\mu]\] \[=\nu_{G}\left(\pi_{\mathcal{M}_{0}}(\mathbf{a})\sum_{t=0}^{ \infty}\mathbb{E}_{\mathcal{M}_{0}}[\Delta(\mathbf{X}^{t})|\mathbf{X}^{0}\sim \mu_{A;a}]\right.\] \[\qquad\qquad+\pi_{\mathcal{M}_{0}}(\mathbf{A})\sum_{t=0}^{\infty }\mathbb{E}_{\mathcal{M}_{0}}[\Delta(\mathbf{X}^{t})|\mathbf{X}^{0}\sim\mu_{a; A}]\right)\] \[=\,\frac{\nu_{G}}{\rho_{A}+\rho_{a}}\] \[\qquad\times\left(\rho_{a}\sum_{t=0}^{\infty}\mathbb{E}_{ \mathcal{M}_{0}}[\Delta(\mathbf{X}^{t})|\mathbf{X}^{0}\sim\mu_{A;a}]+\rho_{A} \sum_{t=0}^{\infty}\mathbb{E}_{\mathcal{M}_{0}}[\Delta(\mathbf{X}^{t})| \mathbf{X}^{0}\sim\mu_{a;A}]\right)\] \[=\,\frac{\nu_{G}}{\rho_{A}+\rho_{a}}\left(\rho_{a}(\rho_{A}-\hat{ \mu})-\rho_{A}(\rho_{a}-\hat{\mu})\right)\] \[=\,\frac{\nu_{G}\,\hat{\mu}}{\rho_{A}+\rho_{a}}\left(\rho_{A}- \rho_{a}\right).\] The right-hand side (and hence the left-hand side) is positive if and only if Condition (ii) holds, proving (ii) \(\Leftrightarrow\) (iii). The above result makes implicit use of Assumption (M6), that mutation rates are the same in the two monoallelic states. If this assumption does not hold, the relationship between fixation probability and expected frequency becomes more nuanced [60, 34, 56]. ### Weak selection We now turn to weak selection. This means we consider selection as a perturbation, in \(\delta\), of neutral drift (\(\delta=0\)). We use prime \((^{\prime})\) to indicate \(\delta\)-derivatives at \(\delta=0\). Thus the first order expansion of a function \(f(\delta)\) can be written \[f(\delta)=f^{\circ}+\delta f^{\prime}+\mathcal{O}(\delta^{2})\qquad(\delta \to 0).\] (C.16) For weak selection, we have the following version of Theorem C.2: **Theorem C.3**.: _The following weak-selection success criteria are equivalent:_ 1. \(\pi^{\prime}_{\mathcal{M}_{0}}(\mathbf{A})>0>\pi^{\prime}_{\mathcal{M}_{0}}( \mathbf{a})\)_,_ 2. \(\rho^{\prime}_{A}>\rho^{\prime}_{a}\)_,_ 3. \(\langle\Delta^{\prime}\rangle^{\circ}>0\)_._ Proof.: The equivalence \((i)\Leftrightarrow(ii)\) follows directly from taking \(\delta\)-derivatives, at \(\delta=0\), of the quantities in the corresponding conditions of Theorem C.2. To prove (ii) \(\Leftrightarrow\) (iii) we first observe that, for every fixed \(0<u<1\), \[\frac{d\,\mathbb{E}_{\pi_{\mathcal{M}}}[\Delta]}{d\delta}\bigg{|}_{\delta=0}= \sum_{\mathbf{x}\in\{0,1\}^{G}}\pi_{\mathcal{M}}^{\circ}(\mathbf{x})\;\Delta^{ \prime}(\mathbf{x})=\mathbb{E}_{\pi_{\mathcal{M}}}^{\circ}[\Delta^{\prime}],\] (C.17) where the first equality uses the fact that \(\Delta^{\circ}(\mathbf{x})=0\) for each state \(\mathbf{x}\). Taking \(u\)-derivatives at \(u=0\) gives \[\frac{d\langle\Delta\rangle}{d\delta}\bigg{|}_{\delta=0}=\frac{\partial^{2}\, \mathbb{E}_{\pi_{\mathcal{M}}}[\Delta]}{\partial u\,\partial\delta}\bigg{|}_{ \begin{subarray}{c}u=0\\ \delta=0\end{subarray}}=\frac{d\,\mathbb{E}_{\pi_{\mathcal{M}}}^{\circ}[ \Delta^{\prime}]}{du}\bigg{|}_{u=0}=\langle\Delta^{\prime}\rangle^{\circ}.\] (C.18) Now, taking \(\delta\)-derivatives of Conditions (ii) and (iii) in Theorem C.2, we have \[\rho^{\prime}_{A}>\rho^{\prime}_{a}\quad\Longleftrightarrow\quad\frac{d \langle\Delta\rangle}{d\delta}\bigg{|}_{\delta=0}>0\quad\Longleftrightarrow \quad\langle\Delta^{\prime}\rangle^{\circ}>0,\] (C.19) as desired. 
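As a small computational illustration (ours, not taken from Refs. [55, 34]), the neutral reproductive values that underlie these quantities can be obtained directly from the linear system Eq. (C.5). Writing \(e_{hg}=\sum_{\alpha:\,\alpha(h)=g}p^{\circ}(\alpha)\) for the neutral probability that the new occupant of site \(h\) descends from site \(g\), Eq. (C.5a) reads \(v_{g}=\sum_{h\in G}e_{hg}v_{h}\), so \(v\) is an eigenvalue-\(1\) left eigenvector of the matrix \(e\), normalized by Eq. (C.5b). The sketch below uses an arbitrary example matrix; no specific model is implied.

```python
import numpy as np

# Illustrative sketch (not from the cited works): solving Eq. (C.5) for the
# neutral reproductive values v_g, given e[h, g] = sum over alpha with
# alpha(h) = g of p°(alpha). Each row of e sums to 1. The matrix below is an
# arbitrary example for n = 3 sites.
def reproductive_values(e):
    """Solve v_g = sum_h e[h, g] v_h (Eq. C.5a) with sum_g v_g = n (Eq. C.5b)."""
    n = e.shape[0]
    eigvals, eigvecs = np.linalg.eig(e.T)
    k = np.argmin(np.abs(eigvals - 1.0))     # eigenvalue-1 eigenvector of e^T
    v = np.real(eigvecs[:, k])
    return n * v / v.sum()                   # normalization from Eq. (C.5b)

e = np.array([[0.90, 0.05, 0.05],
              [0.10, 0.85, 0.05],
              [0.05, 0.05, 0.90]])

v = reproductive_values(e)
print("reproductive values:", v)
print("satisfies Eq. (C.5a):", np.allclose(v @ e, v))
```

Up to the normalization in Eq. (C.5b), the reproductive values are thus the stationary vector of the site-level Markov chain with transition matrix \(e\); with the \(v_{g}\) in hand, the selection increments of Eq. (C.10) can be evaluated for specific models. 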
If the criteria of Theorem C.3 are met, we say that _weak selection favors \(A\) over \(a\)_. Theorem C.3 is a variation of Theorem 8 of Allen & McAvoy [34], and generalizes Proposition 4.1 of Taylor [72]. ## Appendix D Genetic assortment and relatedness We now turn to measures of genetic assortment, building up to our definition of collective relatedness. ### Identity-by-state A set of alleles is _identical by state (IBS)_ if they are copies of each other, whether by co-ancestry or for another reason. To formalize this concept we define, for each \(S\subseteq G\), the state function \(\iota_{S}(\mathbf{x})\), which equals one if all sites in \(S\) contain the same allele in state \(\mathbf{x}\) (that is, if \(x_{g}=x_{h}\) for all \(g,h\in S\)), and zero otherwise. We also define \(\iota_{S}^{A}(\mathbf{x})\) (respectively, \(\iota_{S}^{a}(\mathbf{x})\)) to equal one if all sites in \(S\) contain \(A\) (respectively, \(a\)) in state \(\mathbf{x}\), and zero otherwise. We can express these state functions algebraically as \[\iota_{S}^{A}(\mathbf{x})=\prod_{g\in S}x_{g},\qquad\iota_{S}^{a}(\mathbf{x})=\prod_{g\in S}(1-x_{g}),\] (D.1) and, for \(S\neq\varnothing\), \[\iota_{S}(\mathbf{x})=\iota_{S}^{A}(\mathbf{x})+\iota_{S}^{a}(\mathbf{x}).\] (D.2) If \(S\) is a singleton set, then \(\iota_{S}(\mathbf{x})=1\) for all \(\mathbf{x}\in\{0,1\}^{G}\). In the vacuous case \(S=\varnothing\), we have \(\iota_{\varnothing}^{A}(\mathbf{x})=\iota_{\varnothing}^{a}(\mathbf{x})=\iota_{\varnothing}(\mathbf{x})=1\) for each \(\mathbf{x}\in\{0,1\}^{G}\). From Eq. (D.1) we obtain the relationship \[\iota_{S}^{a}(\mathbf{x})=\sum_{T\subseteq S}(-1)^{|T|}\iota_{T}^{A}(\mathbf{x}),\] (D.3) as well as the identities \[x_{g}\iota_{S}^{A}(\mathbf{x})=\iota_{S\cup\{g\}}^{A}(\mathbf{x}),\qquad x_{g}\iota_{S}^{a}(\mathbf{x})=\iota_{S}^{a}(\mathbf{x})-\iota_{S\cup\{g\}}^{a}(\mathbf{x}).\] (D.4) ### Identity-by-descent Genetic assortment can also be quantified using identity-by-descent (IBD) [73, 74, 75, 76]. Two alleles are identical by descent if no mutation separates them from their common ancestor. #### d.2.1 Definition We formalize IBD within our framework as a time-dependent random equivalence relation on the set \(G\) of sites, using the ancestral mappings defined in Section B.2. **Definition**.: In the Markov chain \(\tilde{\mathcal{M}}\), sites \(g,h\in G\) are _identical-by-descent (IBD) at time \(t\)_ if there exists \(t_{0}\), \(0\leq t_{0}\leq t\), such that (i) \(A_{t_{0}}^{t}(g)=A_{t_{0}}^{t}(h)\), and (ii) for all \(t_{1}\) with \(t_{0}<t_{1}\leq t\), \(A_{t_{1}}^{t}(g)\notin U^{t_{1}}\) and \(A_{t_{1}}^{t}(h)\notin U^{t_{1}}\). Condition (i) of this definition says that \(g\) and \(h\) have a common ancestor at time \(t_{0}\), while (ii) says that no mutation has occurred in the lineages of \(g\) or \(h\) since that common ancestor. This notion of identity-by-descent applies to arbitrary selection strength, as well as arbitrary mutation rates. It is straightforward to show that identity-by-descent at time \(t\) is an equivalence relation (reflexive, symmetric, and transitive) on \(G\). Consequently, at any time \(t\), the set of sites \(G\) can be partitioned into _IBD classes_, such that two sites are in the same IBD class if and only if they are IBD to each other. We record identity-by-descent using binary random variables \(Q_{S}^{t}\), for each nonempty \(S\subseteq G\) and \(t\geq 0\), with \(Q_{S}^{t}=1\) if all pairs \(g,h\in S\) are IBD at time \(t\), and \(Q_{S}^{t}=0\) otherwise. 
These variables obey the recurrence equation \[Q_{S}^{t}=\begin{cases}1&\text{if }|S|=1\\ Q_{\alpha^{t}(S)}^{t-1}&\text{if }|S|\geq 2\text{ and }U^{t}\cap S=\varnothing\\ 0&\text{otherwise.}\end{cases}\] (D.5) In words, the sites in a non-singleton set \(S\) are IBD if and only if their parents (in \(\alpha^{t}(S)\)) were IBD in the previous time-step and no mutation occurred at any site in \(S\) during the transition to the current state. These variables \(Q_{S}^{t}\) can be collected into a vector \(\mathbf{Q}^{t}=(Q_{S}^{t})_{\varnothing\subset S\subseteq G}\), which records all IBD relationships within the population. The sequence \((\mathcal{M},\mathcal{Q})=(\mathbf{X}^{t},\mathbf{Q}^{t})_{t=0}^{\infty}\) is a Markov chain in its own right, as can be seen from Eq. (D.5). #### d.2.2 IBD probability Moving from particular states to the overall selection process, we define the IBD probability of a set of sites: **Definition**.: The _IBD probability_ of a nonempty subset \(S\subseteq G\) is defined as \[q_{S}=\lim_{t\to\infty}\mathbb{E}_{\tilde{\mathcal{M}}}[Q_{S}^{t}].\] (D.6) The following proposition verifies that \(q_{S}\) is well-defined, and is equal to 1 for all \(S\) when \(u=0\): **Proposition D.1**.: _For each nonempty \(S\subseteq G\) and \(u\geq 0\), the limit in Eq. (D.6) exists and is independent of the initial state of \(\tilde{\mathcal{M}}\). For \(u=0\), this limit equals 1 for all nonempty \(S\subseteq G\)._ Proof.: We recall that the sequence \((\mathcal{M},\mathcal{Q})=(\mathbf{X}^{t},\mathbf{Q}^{t})_{t=0}^{\infty}\), defined from \(\tilde{\mathcal{M}}\), is a Markov chain in its own right. For \(u>0\), an argument similar to the proof of Theorem B.1 shows that \((\mathcal{M},\mathcal{Q})\) has a unique stationary distribution \(\pi_{(\mathcal{M},\mathcal{Q})}\), and \[\lim_{t\to\infty}\mathbb{E}_{(\mathcal{M},\mathcal{Q})}[Q_{S}^{t}]=\mathbb{E}_{\pi_{(\mathcal{M},\mathcal{Q})}}[Q_{S}],\] regardless of the initial state of \((\mathcal{M},\mathcal{Q})\), proving the result in the case \(u>0\). For \(u=0\), it follows from the Fixation Axiom that \((\mathcal{M},\mathcal{Q})\) has absorbing states \((\mathbf{A},\mathbf{1})\) and \((\mathbf{a},\mathbf{1})\)--where \(\mathbf{1}\) indicates that all sites are IBD to each other (\(Q_{S}=1\) for all \(\varnothing\subset S\subseteq G\))--and other states are transient. Thus \(\lim_{t\to\infty}\mathbb{E}_{\tilde{\mathcal{M}}}[Q_{S}^{t}]=\lim_{t\to\infty}\mathbb{E}_{(\mathcal{M},\mathcal{Q})}[Q_{S}^{t}]=1\) for all \(S\) when \(u=0\). In the case of neutral drift, Eq. (D.5) implies the following recurrence relations for neutral IBD probabilities: \[q_{S}^{\circ}=\begin{cases}\sum_{\begin{subarray}{c}(\alpha,U)\\ U\cap S=\varnothing\end{subarray}}p^{\circ}(\alpha,U)\;q_{\alpha(S)}^{\circ}&|S|\geq 2\\ 1&|S|=1.\end{cases}\] (D.7) Sites that are identical by descent must also be identical by state. Formally, in the Markov chain \(\tilde{\mathcal{M}}\), for all nonempty \(S\subseteq G\) and \(t\geq 0\), \((Q_{S}^{t}=1)\Rightarrow(\iota_{S}(\mathbf{X}^{t})=1)\). This follows from tracing through the definitions of IBD, \(\tilde{\mathcal{M}}\), and the ancestral mapping \(A_{t_{0}}^{t}\). Moreover, since mutations always change the allele type (\(A\) to \(a\) and vice versa), two mutations are required for a pair of sites to be IBS but not IBD. This suggests that IBS and IBD probabilities should agree to first order in \(u\) as \(u\to 0\). 
We formalize this observation as follows: **Proposition D.2**.: _For each nonempty \(S\subseteq G\),_ \[\lim_{u\to 0}\frac{1-q_{S}}{u}=\langle 1-\iota_{S}\rangle.\] (D.8) Proof.: We first write \[\lim_{u\to 0}\frac{1-q_{S}}{u}=\frac{d(1-q_{S})}{du}\bigg{|}_{u=0}=\frac{d\,\mathbb{E}_{\pi_{(\mathcal{M},\mathcal{Q})}}[1-Q_{S}]}{du}\bigg{|}_{u=0}.\] (D.9) We next apply Corollary A.4 of Allen and McAvoy [57] to the Markov chain \((\mathcal{M},\mathcal{Q})\). This yields the following variant of Lemma B.3: \[\frac{d\,\mathbb{E}_{\pi_{(\mathcal{M},\mathcal{Q})}}[1-Q_{S}]}{du}\bigg{|}_{u=0}=\nu_{G}\sum_{t=0}^{\infty}\mathbb{E}_{(\mathcal{M},\mathcal{Q})_{0}}\left[1-Q_{S}^{t}\,\big{|}\,(\mathbf{X}^{0},\mathbf{Q}^{0})\sim\mu_{(\mathcal{M},\mathcal{Q})}\right].\] (D.10) Above, \((\mathcal{M},\mathcal{Q})_{0}\) indicates the \(u=0\) case of \((\mathcal{M},\mathcal{Q})\), and \(\mu_{(\mathcal{M},\mathcal{Q})}\) is an extension of the mutant appearance distribution \(\mu\) to states of \((\mathcal{M},\mathcal{Q})\), obtained by first sampling a population state \(\mathbf{X}\) from \(\mu\), and then choosing the IBD state \(\mathbf{Q}\) such that two sites are IBD if and only if they contain the same allele. In \((\mathcal{M},\mathcal{Q})_{0}\), with initial state sampled from \(\mu_{(\mathcal{M},\mathcal{Q})}\), we have \((Q_{S}^{t}=1)\Leftrightarrow(\iota_{S}(\mathbf{X}^{t})=1)\) (that is, IBS and IBD are equivalent in the absence of mutation). We can therefore rewrite the right-hand side of Eq. (D.10) as \[\nu_{G}\sum_{t=0}^{\infty}\mathbb{E}_{(\mathcal{M},\mathcal{Q})_{0}}[1-Q_{S}^{t}\,|\,(\mathbf{X}^{0},\mathbf{Q}^{0})\sim\mu_{(\mathcal{M},\mathcal{Q})}]=\nu_{G}\sum_{t=0}^{\infty}\mathbb{E}_{\mathcal{M}_{0}}[1-\iota_{S}(\mathbf{X}^{t})\,|\,\mathbf{X}^{0}\sim\mu]=\langle 1-\iota_{S}\rangle.\] Combining with Eqs. (D.9)-(D.10) yields Eq. (D.8). ### Genetic dissimilarity and coalescence length From Eq. (D.8), we obtain a natural measure of the genetic dissimilarity of a set \(S\) of sites under low mutation: **Definition**.: The _genetic dissimilarity_ of a nonempty subset \(S\subseteq G\) is defined as \[\ell_{S}=\lim_{u\to 0}\frac{1-q_{S}}{u}=\langle 1-\iota_{S}\rangle\,.\] (D.11) The genetic dissimilarity \(\ell_{S}\) quantifies the effect of rare mutation on the stationary probability that the sites in \(S\) are _not_ all IBD or IBS. By Eq. (B.10), \(\ell_{S}\) is proportional to the expected duration of time for which the sites in \(S\) do not all contain the same allele, in \(\mathcal{M}_{0}\) starting from the mutant appearance distribution. It follows from Eq. (D.11) that \(\ell_{S}\) is always nonnegative, and is zero if \(S\) is a singleton or if the sites in \(S\) always contain the same allele. For neutral drift (\(\delta=0\)), symmetry under interchange of \(A\) and \(a\) implies \[\ell_{S}^{\circ}=\left\langle 1-2\iota_{S}^{A}\right\rangle^{\circ}=\langle 1-2\iota_{S}^{a}\rangle^{\circ}\,.\] (D.12) Taking the \(u\)-derivative of Eq. (D.7) at \(u=0\), and applying Eqs. (B.5b) and (D.11), we obtain a recurrence relation for the neutral genetic dissimilarities: \[\ell_{S}^{\circ}=\begin{cases}\nu_{S}^{\circ}+\sum_{\alpha}p^{\circ}(\alpha)\,\ell_{\alpha(S)}^{\circ}&|S|\geq 2\\ 0&|S|=1.\end{cases}\] (D.13) This recurrence relation implies that the neutral dissimilarities \(\ell_{S}^{\circ}\) can be understood as coalescence lengths scaled by mutation rates. 
More precisely, \(\ell_{S}^{\circ}\) is the expected sum of the site-specific mutation rates \(\nu_{g}\), over the coalescent representing the ancestry of set \(S\), up until a common ancestor is reached. This idea is formalized by Allen and McAvoy [57], who derive Eq. (D.13) for a generalized coalescent process, and show that it uniquely determines the \(\ell_{S}^{\circ}\) (denoted \(m_{S}^{\prime}\) in Section 7.2 of Ref. [57]). We therefore refer to the neutral dissimilarities \(\ell_{S}^{\circ}\) as _coalescence lengths_, which is shorthand for "expected total branch length of the coalescent tree of \(S\), scaled by mutation rates \(\nu_{g}\)". ### Collective relatedness Having introduced the requisite concepts of identity by state, identity by descent, and coalescence length, we are now prepared to define collective relatedness. #### d.4.1 Definition and alternative expressions We introduce the following notation for the average identity-by-state of a fixed set \(S\) to all sites \(g\): \[\bar{\iota}_{S}(\mathbf{x})=\frac{1}{n}\sum_{g\in G}\iota_{S\cup\{g\}}(\mathbf{x}),\quad\bar{\iota}_{S}^{A}(\mathbf{x})=\frac{1}{n}\sum_{g\in G}\iota_{S\cup\{g\}}^{A}(\mathbf{x}),\quad\bar{\iota}_{S}^{a}(\mathbf{x})=\frac{1}{n}\sum_{g\in G}\iota_{S\cup\{g\}}^{a}(\mathbf{x}).\] (D.14) We now define collective relatedness as follows: **Definition**.: For a nonempty set of sites \(S\subseteq G\) and a site \(g\in G\), the _collective relatedness of \(S\) to \(g\) with respect to allele \(A\)_ is defined as \[r_{S,g}^{A}=\frac{\left\langle\iota_{S}^{A}(\mathbf{x})\left(x_{g}-\bar{x}\right)\right\rangle}{\left\langle\bar{x}(1-\bar{x})\right\rangle}=\frac{\left\langle\iota_{S\cup\{g\}}^{A}(\mathbf{x})-\bar{\iota}_{S}^{A}(\mathbf{x})\right\rangle}{\left\langle\bar{x}(1-\bar{x})\right\rangle}.\] (D.15a) The _collective relatedness of \(S\) to \(g\) with respect to allele \(a\)_ is obtained by interchanging the roles of \(A\) and \(a\) in Eq. (D.15a): \[r_{S,g}^{a}=\frac{-\left\langle\iota_{S}^{a}(\mathbf{x})\left(x_{g}-\bar{x}\right)\right\rangle}{\left\langle\bar{x}(1-\bar{x})\right\rangle}=\frac{\left\langle\iota_{S\cup\{g\}}^{a}(\mathbf{x})-\bar{\iota}_{S}^{a}(\mathbf{x})\right\rangle}{\left\langle\bar{x}(1-\bar{x})\right\rangle}.\] (D.15b) Collective relatedness can be expressed in a number of equivalent ways. First, by applying Eq. (B.10) and L'Hôpital's rule to Eq. (D.15a), we obtain Eq. (1) of the main text (which uses \(r_{S,g}\) as shorthand for \(r_{S,g}^{A}\)). Second, the expression \(\bar{x}(1-\bar{x})\) in the denominators is equal to the variance in \(x_{g}\) over all sites, and can be rewritten as follows: \[\bar{x}(1-\bar{x})=\frac{1}{2}\left(1-\left(\bar{x}\right)^{2}-\left(1-\bar{x}\right)^{2}\right)=\frac{1}{2}\left(1-\frac{1}{n^{2}}\sum_{h,k\in G}\left(x_{h}x_{k}+(1-x_{h})(1-x_{k})\right)\right)=\frac{1}{2}\left(1-\frac{1}{n^{2}}\sum_{h,k\in G}\iota_{\{h,k\}}(\mathbf{x})\right).\] If we let \(\bar{q}=\frac{1}{n^{2}}\sum_{h,k\in G}q_{\{h,k\}}\) and \(\bar{\ell}=\frac{1}{n^{2}}\sum_{h,k\in G}\ell_{\{h,k\}}\) denote the average IBD probability and coalescence length, respectively, between all pairs, then Eq. (D.11) gives \[\left\langle\bar{x}(1-\bar{x})\right\rangle=\lim_{u\to 0}\frac{1-\bar{q}}{2u}=\frac{\bar{\ell}}{2}.\] (D.16) We can then rewrite Eqs. 
(D.15a) and (D.15b) as \[r^{A}_{S,g}=\frac{2}{\bar{\ell}}\left\langle\iota^{A}_{S\cup\{g\}}-\bar{\iota}^{A}_{S}\right\rangle\] (D.17a) \[r^{a}_{S,g}=\frac{2}{\bar{\ell}}\left\langle\iota^{a}_{S\cup\{g\}}-\bar{\iota}^{a}_{S}\right\rangle.\] (D.17b) Combining Eqs. (D.2), (D.11), and (D.17), we obtain elegant expressions for the average of \(r^{A}_{S,g}\) and \(r^{a}_{S,g}\): \[\frac{r^{A}_{S,g}+r^{a}_{S,g}}{2}=\lim_{u\to 0}\frac{q_{S\cup\{g\}}-\bar{q}_{S}}{1-\bar{q}}=\frac{\bar{\ell}_{S}-\ell_{S\cup\{g\}}}{\bar{\ell}},\] (D.18) where \(\bar{q}_{S}=\frac{1}{n}\sum_{h\in G}q_{S\cup\{h\}}\) and \(\bar{\ell}_{S}=\frac{1}{n}\sum_{h\in G}\ell_{S\cup\{h\}}\). For neutral drift, the symmetry of \(\pi_{\mathcal{M}}\) under interchange of \(A\) and \(a\) (in the neutral case) implies that collective relatedness for the two alleles coincides: \[\left(r^{A}_{S,g}\right)^{\circ}=\left(r^{a}_{S,g}\right)^{\circ}=\lim_{u\to 0}\frac{q^{\circ}_{S\cup\{g\}}-\bar{q}^{\circ}_{S}}{1-\bar{q}^{\circ}}=\frac{\bar{\ell}^{\circ}_{S}-\ell^{\circ}_{S\cup\{g\}}}{\bar{\ell}^{\circ}}.\] (D.19) This is Eq. (2) of the main text. We therefore write \(r^{\circ}_{S,g}\) for the value of both \(r^{A}_{S,g}\) and \(r^{a}_{S,g}\) under neutral drift. These neutral collective relatedness quantities \(r^{\circ}_{S,g}\) can be computed by solving Eq. (D.13) for the coalescence lengths \(\ell^{\circ}_{S}\), and then applying Eq. (D.19). #### d.4.2 Properties of collective relatedness Collective relatedness has the following properties: * The average collective relatedness of a set \(S\) to all sites is zero: \[\frac{1}{n}\sum_{g\in G}r^{A}_{S,g}=\frac{1}{n}\sum_{g\in G}r^{a}_{S,g}=0.\] (D.20) * A set of sites has the same collective relatedness to each of its members. Specifically, for any nonempty \(S\subseteq G\), we have \(r^{A}_{S,g}=r^{A}_{S}\) and \(r^{a}_{S,g}=r^{a}_{S}\) for each \(g\in S\), where the intra-relatedness quantities \(r^{A}_{S}\) and \(r^{a}_{S}\) are given by \[r^{A}_{S}=\frac{\left\langle\iota^{A}_{S}(\mathbf{x})-\bar{\iota}^{A}_{S}(\mathbf{x})\right\rangle}{\left\langle\bar{x}(1-\bar{x})\right\rangle},\qquad r^{a}_{S}=\frac{\left\langle\iota^{a}_{S}(\mathbf{x})-\bar{\iota}^{a}_{S}(\mathbf{x})\right\rangle}{\left\langle\bar{x}(1-\bar{x})\right\rangle}.\] * The average relatedness of each site to itself is 1: \[\frac{1}{n}\sum_{g\in G}r^{A}_{\{g\},g}=\frac{1}{n}\sum_{g\in G}r^{a}_{\{g\},g}=1.\] However, the relatedness of an individual site to itself is not necessarily 1, even under neutral drift. Instead, we have \[r^{A}_{\{g\},g}=r^{a}_{\{g\},g}=\frac{\left\langle x_{g}-x_{g}\bar{x}\right\rangle}{\frac{1}{n}\sum_{h\in G}\left\langle x_{h}-x_{h}\bar{x}\right\rangle}=\frac{\bar{\ell}_{\{g\}}}{\bar{\ell}}.\] Thus, a site \(g\) has self-relatedness greater than 1, \(r_{\{g\},g}>1\), if and only if \(\bar{\ell}_{\{g\}}>\bar{\ell}\), meaning that the average coalescence length from \(g\) to other sites exceeds the average coalescence length of all pairs. * The "collective" relatedness of the empty set to any site is zero under neutral drift: \(r^{\circ}_{\varnothing,g}=0\). This follows from Eq. (D.19), noting that \(\ell_{\varnothing\cup\{h\}}=\ell_{\{h\}}=0\) for all \(h\in G\). 
Away from neutral drift, however, \(r^{A}_{\varnothing,g}\) and \(r^{a}_{\varnothing,g}\) are not necessarily zero; instead, we have \[r^{A}_{\varnothing,g}=-r^{a}_{\varnothing,g}=\frac{\left\langle x_{g}-\bar{x}\right\rangle}{\left\langle\bar{x}(1-\bar{x})\right\rangle}.\] (D.21) Thus \(r^{A}_{\varnothing,g}\) is positive if site \(g\) is more likely than the average site to hold an \(A\) allele in transient states of the selection process, and the analogous statement holds for \(r^{a}_{\varnothing,g}\). #### d.4.3 Collective phenotypic relatedness It is also useful to have a notion of collective relatedness at the level of phenotypes, using the formalism of Section A.8. Let us denote the frequency of the \(A\) allele in individual \(i\in I\) as \(\bar{x}_{i}\): \[\bar{x}_{i}=\frac{1}{n_{i}}\sum_{g\in G_{i}}x_{g}.\] (D.22) We then define collective phenotypic relatedness as follows: **Definition**.: The _collective phenotypic relatedness_ of a set of individuals \(J\subseteq I\) to an individual \(i\in I\), with respect to phenotype 1, is defined as \[r^{1}_{J,i}=\frac{\left\langle\mathbb{E}\left[\prod_{j\in J}\Phi_{j}\right](\bar{x}_{i}-\bar{x})\right\rangle}{\left\langle\bar{x}(1-\bar{x})\right\rangle}=\frac{\left\langle\prod_{j\in J}\varphi_{j}(\mathbf{x}_{|G_{j}})\left(\bar{x}_{i}-\bar{x}\right)\right\rangle}{\left\langle\bar{x}(1-\bar{x})\right\rangle}.\] (D.23a) Analogously, the collective phenotypic relatedness of \(J\) to \(i\) with respect to phenotype \(0\) is defined as \[r^{0}_{J,i}=\frac{\left\langle\mathbb{E}\left[\prod_{j\in J}(1-\Phi_{j})\right](\bar{x}_{i}-\bar{x})\right\rangle}{\left\langle\bar{x}(1-\bar{x})\right\rangle}=\frac{\left\langle\prod_{j\in J}(1-\varphi_{j}(\mathbf{x}_{|G_{j}}))\left(\bar{x}_{i}-\bar{x}\right)\right\rangle}{\left\langle\bar{x}(1-\bar{x})\right\rangle}.\] (D.23b) ### Relationship to other relatedness measures The collective relatedness introduced here is closely related to established definitions of pairwise relatedness based on covariance [77, 23, 78, 79], identity-by-descent [25, 80, 81, 72], and geometric considerations [35]. A number of these standard pairwise relatedness measures can be recovered from collective relatedness in the case that the "collective" is a single site or individual. #### d.5.1 Identity-by-descent Relatedness is often quantified using identity-by-descent probabilities. In the case of a singleton set \(S=\{h\}\), Eq. (D.18) gives \[r^{\circ}_{\{h\},g}=\lim_{u\to 0}\frac{q^{\circ}_{\{h,g\}}-\bar{q}^{\circ}_{\{h\}}}{1-\bar{q}^{\circ}}.\] (D.24) The right-hand side is a standard measure of relatedness between two haploid individuals [25, 80, 81, 72]. #### d.5.2 Geometric relatedness Grafen [35] introduced a definition of relatedness with a geometric interpretation. Translated into our notation, Grafen's Eq. (7) for the relatedness of individual \(j\) to \(i\) is \[R_{ji}=\frac{\left\langle\varphi_{j}(\mathbf{x}_{|G_{j}})\left(\bar{x}_{i}-\bar{x}\right)\right\rangle}{\left\langle\varphi_{j}(\mathbf{x}_{|G_{j}})(\bar{x}_{j}-\bar{x})\right\rangle}.\] (D.25) Above, we have replaced Grafen's sums over a "list of occasions" with expected sums over transient states of the process, as in Eq. (B.10). The numerator of Grafen's definition, Eq. (D.25), agrees with that of our Eq. (D.23a) for \(r^{1}_{J,i}\) in the case of a singleton set, \(J=\{j\}\). The denominators are different, reflecting a different choice of normalization. 
While Grafen's definition has the property that relatedness to oneself is always one (\(R_{ii}=1\) for all \(i\in I\)), ours has the advantage of allowing for relatedness from different actors--including collective actors with different numbers of individuals--to be directly compared. #### d.5.3 Genetic covariance A number of established definitions of relatedness involve a ratio of covariances, which can in some cases be interpreted as a correlation coefficient [82, 83] or a regression coefficient [77, 78, 79]. We show here that standard regression definitions of relatedness [78, 79] can be recovered as an expectation of our \(r^{A}_{S,g}\), with \(g\) sampled uniformly from the population and \(S\) sampled from the "social environment" of site \(g\). To obtain this connection, we suppose that each site \(g\in G\) has an associated "social environment", characterized by a fixed probability distribution \(\{p_{S|g}\}_{\varnothing\subset S\subseteq G}\) over nonempty subsets \(S\). These social environments are given _a priori_. They are understood to quantify how frequently each collective interacts with a given site. For each site \(g\in G\), we define the state variable \(y_{g}\) as the probability that a set sampled from \(g\)'s social environment contains only allele \(A\): \[y_{g}=\mathbb{E}_{S|g}\left[\iota^{A}_{S}(\mathbf{x})\right]=\sum_{\varnothing\subset S\subseteq G}p_{S|g}\,\iota^{A}_{S}(\mathbf{x}).\] We let \(\bar{y}=\frac{1}{n}\sum_{g\in G}y_{g}\) denote the population average of \(y_{g}\). Now suppose that a site \(g\) is sampled uniformly from \(G\), and then a nonempty set \(S\subseteq G\) is sampled from \(\{p_{S|g}\}\). We compute the expectation of \(r^{A}_{S,g}\) under this scheme: \[\mathbb{E}_{g,S}\left[r^{A}_{S,g}\right]=\frac{1}{n}\sum_{g\in G}\sum_{\varnothing\subset S\subseteq G}p_{S|g}\,r^{A}_{S,g}\] \[=\frac{\left\langle\frac{1}{n}\sum_{g\in G}\sum_{\varnothing\subset S\subseteq G}p_{S|g}\,\iota^{A}_{S}(\mathbf{x})\left(x_{g}-\bar{x}\right)\right\rangle}{\left\langle\bar{x}(1-\bar{x})\right\rangle}\] \[=\frac{\left\langle\frac{1}{n}\sum_{g\in G}y_{g}\left(x_{g}-\bar{x}\right)\right\rangle}{\left\langle\bar{x}(1-\bar{x})\right\rangle}\] \[=\frac{\left\langle\frac{1}{n}\sum_{g\in G}y_{g}x_{g}-\bar{y}\bar{x}\right\rangle}{\left\langle\bar{x}(1-\bar{x})\right\rangle}\] \[=\lim_{u\to 0}\frac{\mathbb{E}_{\pi_{\mathcal{M}}}\left[\frac{1}{n}\sum_{g\in G}y_{g}x_{g}-\bar{y}\bar{x}\right]}{\mathbb{E}_{\pi_{\mathcal{M}}}\left[\bar{x}(1-\bar{x})\right]}\] \[=\lim_{u\to 0}\frac{\text{Cov}_{\pi_{\mathcal{M}},g}[y_{g},x_{g}]}{\text{Var}_{\pi_{\mathcal{M}},g}[x_{g}]}.\] (D.26) In the last line, the numerator and denominator are, respectively, the covariance of \(x_{g}\) with \(y_{g}\) and the variance of \(x_{g}\), with the state \(\mathbf{x}\) sampled from \(\pi_{\mathcal{M}}\) and \(g\) sampled uniformly from \(G\). Our result in Eq. (D.26) has the same form as a standard definition of relatedness based on linear regression [78, 79], which can be written in our framework as \[r(\mathbf{x})=\frac{\text{Cov}_{g}[y_{g},x_{g}]}{\text{Var}_{g}[x_{g}]}.\] (D.27) Although Eqs. (D.26) and (D.27) have essentially the same form, they differ in that Eq. (D.27) applies to a specific state \(\mathbf{x}\), whereas Eq. (D.26) averages over all states, weighted according to the low-mutation limit of the stationary distribution. We also emphasize that while Eq. (D.27) is typically applied in the case of individual actors, Eq. (D.26) allows for collective actors. 
Therefore, the expectation of \(r_{S,g}^{A}\), with \(g\) sampled uniformly and \(S\) sampled from the social environment of \(g\), recovers a generalization of a standard regression definition of relatedness [78, 79]. A similar result can be obtained for \(r_{S,g}^{a}\), by replacing \(\iota_{S}^{A}(\mathbf{x})\) and \(x_{g}\) with \(\iota_{S}^{a}(\mathbf{x})\) and \(1-x_{g}\), respectively. #### d.5.4 Phenotypic covariance Other common definitions of relatedness [23, 84] are based on covariance between phenotype and genotype of interacting individuals. To relate these definitions to ours, we apply the concept of social environment, from the previous subsection, at the level of phenotypes rather than alleles. In this context, we represent the social environment of an individual \(i\in I\) by a given, fixed probability distribution \(\{p_{J|i}\}_{\varnothing\subset J\subseteq I}\) over nonempty sets of individuals \(J\subseteq I\). We define \(z_{i}\) as the probability, in a given state \(\mathbf{x}\), that a set in \(i\)'s social environment contains only phenotype 1: \[z_{i}=\sum_{\varnothing\subset J\subseteq I}p_{J|i}\,\mathbb{P}_{\mathbf{x}}[\Phi_{j}=1,\;\forall j\in J]=\sum_{\varnothing\subset J\subseteq I}p_{J|i}\prod_{j\in J}\varphi_{j}(\mathbf{x}_{|G_{j}}).\] (D.28) Now we compute the expectation of \(r_{J,i}^{1}\) where first \(i\) is sampled uniformly from \(I\), and then \(J\) is sampled from \(\{p_{J|i}\}\): \[\mathbb{E}_{i,J}\left[r_{J,i}^{1}\right]=\frac{1}{N}\sum_{i\in I}\sum_{\varnothing\subset J\subseteq I}p_{J|i}\,r_{J,i}^{1}\] \[=\frac{\left\langle\frac{1}{N}\sum_{i\in I}\sum_{\varnothing\subset J\subseteq I}p_{J|i}\prod_{j\in J}\varphi_{j}(\mathbf{x}_{|G_{j}})\left(\bar{x}_{i}-\bar{x}\right)\right\rangle}{\left\langle\bar{x}(1-\bar{x})\right\rangle}\] \[=\frac{\left\langle\frac{1}{N}\sum_{i\in I}z_{i}\left(\bar{x}_{i}-\bar{x}\right)\right\rangle}{\left\langle\bar{x}(1-\bar{x})\right\rangle}\] \[=\frac{\left\langle\frac{1}{N}\sum_{i\in I}z_{i}\bar{x}_{i}-\bar{z}\bar{x}\right\rangle}{\left\langle\bar{x}(1-\bar{x})\right\rangle}\] \[=\lim_{u\to 0}\frac{\text{Cov}_{\pi_{\mathcal{M}},i}[z_{i},\bar{x}_{i}]}{\text{Var}_{\pi_{\mathcal{M}},g}[x_{g}]}.\] (D.29) This resulting expression for \(\mathbb{E}_{i,J}\left[r_{J,i}^{1}\right]\) is closely related to a relatedness measure developed by Michod and Hamilton [23], which can be expressed in our framework as \[r(\mathbf{x})=\frac{\text{Cov}_{i}[z_{i},\bar{x}_{i}]}{\text{Cov}_{i}[\varphi_{i},\bar{x}_{i}]}.\] (D.30) However, our result in Eq. (D.29) differs from Michod and Hamilton's [23] definition, Eq. (D.30), in three ways: (i) Michod and Hamilton's definition applies to individual actors only, whereas ours allows for collective actors; (ii) Eq. (D.30) applies in a particular state, whereas Eq. (D.29) averages over all states in the \(u\to 0\) limit of the stationary distribution; (iii) the denominators differ, which amounts to a different choice of normalization. #### d.5.5 Queller's (1985) coefficient of synergism We now turn to measures of relatedness between a pair of individuals and a third individual, which are precursors of the collective relatedness measure introduced here. Queller [21] introduced a "coefficient of synergism" defined in the final term of his Eq. (3), to quantify how frequently a synergistic effect between like phenotypes will arise. This coefficient of synergism, denoted \(s(\mathbf{x})\), is defined the same way as \(r(\mathbf{x})\) in Eq.
(D.30), but with \(i\)'s "social environment" consisting of pairs of the form \(J=\{i,j\}\), so that \(z_{i}\) is given by \[z_{i}=\sum_{j\in I}p_{\{i,j\}|i}\;\varphi_{i}(\mathbf{x}_{|G_{i}})\,\varphi_{j}(\mathbf{x}_{|G_{j}}).\] (D.31) From the discussion in Section D.5.4, Queller's \(s\) is closely related to our \(r^{1}_{\{i,j\},i}\), the differences being that (i) \(s(\mathbf{x})\) averages over pairs in a given state \(\mathbf{x}\), whereas \(r^{1}_{\{i,j\},i}\) pertains to a single pair \(\{i,j\}\), averaged over the \(u\to 0\) limit of the stationary distribution, and (ii) the denominators differ, amounting to a difference in normalization. #### d.5.6 Taylor's (2013) joint relatedness Taylor [36] introduced a measure of joint relatedness between two haploid individuals and a third. In our notation, for three sites \(g,h,k\in G\), Taylor's joint relatedness--as defined in Eq. (9) of Ref. [36]--can be written as \[R_{gh-k}=\frac{\text{Cov}_{\pi_{\mathcal{M}}}[x_{g}x_{h},x_{k}]}{\text{Var}_{\pi_{\mathcal{M}}}[x_{k}]}.\] (D.32) The idea is that sites \(g\) and \(h\) jointly produce a synergistic effect on the fitness of site \(k\), and this synergistic effect is weighted by the joint relatedness \(R_{gh-k}\) in determining the consequences for selection. Taylor's \(R_{gh-k}\) serves a similar role to our collective relatedness \(r_{S,k}\) (with \(S=\{g,h\}\)), and the formulas are similar as well. However, the two definitions are not equivalent, as can be seen by taking the low-mutation limit of Eq. (D.32): \[\lim_{u\to 0}R_{gh-k}=\lim_{u\to 0}\frac{\mathrm{Cov}_{\pi_{\mathcal{M}}}[x_{g}x_{h},x_{k}]}{\mathrm{Var}_{\pi_{\mathcal{M}}}[x_{k}]}\] \[=\lim_{u\to 0}\frac{\mathbb{E}_{\pi_{\mathcal{M}}}[x_{g}x_{h}x_{k}]-\mathbb{E}_{\pi_{\mathcal{M}}}[x_{g}x_{h}]\,\mathbb{E}_{\pi_{\mathcal{M}}}[x_{k}]}{\mathbb{E}_{\pi_{\mathcal{M}}}[x_{k}]-\left(\mathbb{E}_{\pi_{\mathcal{M}}}[x_{k}]\right)^{2}}\] \[=\frac{\pi_{\mathcal{M}_{0}}(\mathbf{A})-\left(\pi_{\mathcal{M}_{0}}(\mathbf{A})\right)^{2}}{\pi_{\mathcal{M}_{0}}(\mathbf{A})-\left(\pi_{\mathcal{M}_{0}}(\mathbf{A})\right)^{2}}\] \[=1.\] Thus, for each triple \(g,h,k\in G\), Taylor's \(R_{gh-k}\) converges to \(1\) in the low-mutation limit. Collective relatedness \(r_{S,g}\), which is already defined as a \(u\to 0\) limit, provides a more informative quantification of genetic assortment under low mutation. ## Appendix E Selection for collective action Here we derive our main results, Theorems E.1 and E.2, which provide conditions for success in terms of synergistic fitness effects and collective relatedness. ### Representing synergistic fitness effects Our first task is to identify and represent synergistic effects on fitness. For each site \(g\in G\), the fitness increment \(w_{g}(\mathbf{x})\) can be uniquely represented [85] in the form \[w_{g}(\mathbf{x})=\sum_{S\subseteq G}c_{S,g}\,\iota_{S}^{A}(\mathbf{x}).\] (E.1) We interpret \(c_{S,g}\) as the synergistic effect of set \(S\) having allele \(A\), relative to \(a\), on the fitness of site \(g\). An explicit formula is given by \[c_{S,g}=\sum_{T\subseteq S}(-1)^{|S|-|T|}w_{g}\left(\mathbf{1}_{T}\right),\] (E.2) where \(\mathbf{1}_{T}\), for \(T\subseteq G\), is the state with \(x_{g}=1\) for \(g\in T\) and \(x_{g}=0\) for \(g\notin T\). By Eq.
(C.7), the synergistic fitness effects of a set \(S\), over all target sites \(g\), sum to zero: \[\sum_{g\in G}c_{S,g}=\sum_{T\subseteq S}(-1)^{|S|-|T|}\,\sum_{g\in G}w_{g}\left(\mathbf{1}_{T}\right)=0.\] (E.3) So far we have expressed synergistic fitness effects from the perspective of allele \(A\) relative to \(a\). Alternatively, we may take the perspective of allele \(a\), and uniquely write \[w_{g}(\mathbf{x})=\sum_{S\subseteq G}\tilde{c}_{S,g}\,\iota_{S}^{a}(\mathbf{x}).\] (E.4) Here, the \(\tilde{c}_{S,g}\) are given explicitly by \[\tilde{c}_{S,g}=\sum_{T\subseteq S}(-1)^{|S|-|T|}\;w_{g}\left({\bf 1}_{G-T}\right),\] (E.5) and satisfy \(\sum_{g\in G}\tilde{c}_{S,g}=0\) for each \(S\subseteq G\). Using Eq. (D.3), we obtain that \(c_{S,g}\) and \(\tilde{c}_{S,g}\) are related by \[c_{S,g}=(-1)^{|S|}\sum_{T\supseteq S}\tilde{c}_{T,g},\qquad\tilde{c}_{S,g}=(-1)^{|S|}\sum_{T\supseteq S}c_{T,g}.\] (E.6) It follows from Eq. (E.6) that the maximal degree of synergy does not depend on which representation in Eq. (E.1) or Eq. (E.4) is used. By this we mean that if there is some \(d\geq 0\) such that \(c_{S,g}=0\) whenever \(|S|>d\), then it is also true that \(\tilde{c}_{S,g}=0\) whenever \(|S|>d\). Most flexibly, we can represent fitness as \[w_{g}({\bf x})=\sum_{S\subseteq G}\left(c_{S,g}^{A}\,\iota_{S}^{A}({\bf x})+c_{S,g}^{a}\,\iota_{S}^{a}({\bf x})\right),\] (E.7) where the \(c_{S,g}^{A}\) and \(c_{S,g}^{a}\) are subject to \[\sum_{g\in G}c_{S,g}^{A}=\sum_{g\in G}c_{S,g}^{a}=0\qquad\mbox{for each $S\subseteq G$},\] (E.8a) and \[c_{S,g}^{A}=c_{S,g}^{a}=0,\mbox{ for all $S\subseteq G$ and $g\in G$, when $\delta=0$}.\] (E.8b) Although the representation in Eq. (E.7) is not unique, it will prove useful in later analysis. With fitness represented this way, the change due to selection can be written using Eqs. (C.10), (D.4), and (E.8a) as \[\Delta({\bf x})=\frac{1}{n}\sum_{g\in G}x_{g}\sum_{S\subseteq G}\left(c_{S,g}^{A}\,\iota_{S}^{A}({\bf x})+c_{S,g}^{a}\,\iota_{S}^{a}({\bf x})\right)\] \[=\frac{1}{n}\sum_{g\in G}\sum_{S\subseteq G}\left(c_{S,g}^{A}\,\iota_{S\cup\{g\}}^{A}({\bf x})+c_{S,g}^{a}\left(\iota_{S}^{a}({\bf x})-\iota_{S\cup\{g\}}^{a}({\bf x})\right)\right)\] \[=\frac{1}{n}\sum_{g\in G}\sum_{S\subseteq G}\left(c_{S,g}^{A}\,\iota_{S\cup\{g\}}^{A}({\bf x})-c_{S,g}^{a}\,\iota_{S\cup\{g\}}^{a}({\bf x})\right).\] (E.9) In particular, for the representations in Eqs. (E.1) and (E.4), we have \[\Delta({\bf x})=\frac{1}{n}\sum_{g\in G}\sum_{S\subseteq G}c_{S,g}\,\iota_{S\cup\{g\}}^{A}({\bf x})=-\frac{1}{n}\sum_{g\in G}\sum_{S\subseteq G}\tilde{c}_{S,g}\,\iota_{S\cup\{g\}}^{a}({\bf x}).\] (E.10) The second equality can also be obtained directly using Eqs. (D.3), (E.6), and (E.8a). We caution that the sums over both \(S\) and \(g\) are required for the second equality to hold; it is not true in general that \(\sum_{g\in G}c_{S,g}\,\iota_{S\cup\{g\}}^{A}({\bf x})=-\sum_{g\in G}\tilde{c}_{S,g}\,\iota_{S\cup\{g\}}^{a}({\bf x})\), nor that \(\sum_{S\subseteq G}c_{S,g}\,\iota_{S\cup\{g\}}^{A}({\bf x})=-\sum_{S\subseteq G}\tilde{c}_{S,g}\,\iota_{S\cup\{g\}}^{a}({\bf x})\). ### Condition for success under arbitrary selection strength We now state and prove our main result: **Theorem E.1**.: _Suppose fitness is represented as in Eq. (E.7), subject to Eq. (E.8a).
Then \(A\) is favored over \(a\), in the sense of Theorem C.2, if and only if_ \[\sum_{g\in G}\sum_{S\subseteq G}c^{A}_{S,g}\,r^{A}_{S,g}>\sum_{g\in G}\sum_{S \subseteq G}c^{a}_{S,g}\,r^{a}_{S,g}.\] (E.11) In particular, Condition (E.11) becomes \(\sum_{g\in G}\sum_{S\subseteq G}c_{S,g}\,r^{A}_{S,g}>0\) for the representation in Eq. (E.1), giving Condition (3) of the main text. If we instead take allele \(a\)'s perspective, using the representation in Eq. (E.4), we obtain \(\sum_{g\in G}\sum_{S\subseteq G}\tilde{c}_{S,g}\,r^{a}_{S,g}<0\). Proof.: Using Eq. (E.8a), we can rewrite Eq. (E.9) as \[\Delta(\mathbf{x})=\frac{1}{n}\sum_{g\in G}\sum_{S\subseteq G} \left(c^{A}_{S,g}\left(\iota^{A}_{S\cup\{g\}}(\mathbf{x})-\bar{\iota}^{A}_{S} (\mathbf{x})\right)-c^{a}_{S,g}\left(\iota^{a}_{S\cup\{g\}}(\mathbf{x})-\bar{ \iota}^{a}_{S}(\mathbf{x})\right)\right).\] (E.12) Applying the operator \(\left\langle\ \right\rangle\) to both sides, we have \[\left\langle\Delta\right\rangle=\frac{1}{n}\sum_{g\in G}\sum_{S \subseteq G}\left(c^{A}_{S,g}\left\langle\iota^{A}_{S\cup\{g\}}-\bar{\iota}^{ A}_{S}\right\rangle-c^{a}_{S,g}\left\langle\iota^{a}_{S\cup\{g\}}-\bar{ \iota}^{a}_{S}\right\rangle\right).\] By Theorem C.2, \(\rho_{A}>\rho_{a}\) if and only if \[\sum_{g\in G}\sum_{S\subseteq G}c^{A}_{S,g}\left\langle\iota^{A}_{S\cup\{g\}} -\bar{\iota}^{A}_{S}\right\rangle>\sum_{g\in G}\sum_{S\subseteq G}c^{a}_{S,g} \left\langle\iota^{a}_{S\cup\{g\}}-\bar{\iota}^{a}_{S}\right\rangle.\] The result then follows from multiplying both sides by \(2/\bar{\ell}\) and applying Eq. (D.17). ### Condition for success under weak selection Theorem E.1 holds for arbitrary strength of selection \(\delta>0\). However, the collective relatedness coefficients in Condition (E.11) are difficult to evaluate, because they depend on the stationary distribution \(\pi_{\mathcal{M}}\), which itself depends on the process of selection. For a more tractable condition, we prove a weak-selection version of Theorem E.1: **Theorem E.2**.: _Suppose fitness is represented as in Eq. (E.7), subject to Eq. (E.8a). Then weak selection favors \(A\) over \(a\) in the sense of Theorem C.3 if and only if_ \[\sum_{g\in G}\sum_{S\subseteq G}(c^{A}_{S,g})^{\prime}\,r^{\circ}_{S,g}>\sum_ {g\in G}\sum_{S\subseteq G}(c^{a}_{S,g})^{\prime}\,r^{\circ}_{S,g}.\] (E.13) Proof.: Taking the \(\delta\)-derivative of Eq. (E.12) at \(\delta=0\), and recalling condition (E.8b), we obtain \[\Delta^{\prime}(\mathbf{x})=\frac{1}{n}\sum_{g\in G}\sum_{S\subseteq G}\left((c^ {A}_{S,g})^{\prime}\left(\iota^{A}_{S\cup\{g\}}(\mathbf{x})-\bar{\iota}^{A}_{S} (\mathbf{x})\right)-(c^{a}_{S,g})^{\prime}\left(\iota^{a}_{S\cup\{g\}}(\mathbf{ x})-\bar{\iota}^{a}_{S}(\mathbf{x})\right)\right).\] (E.14) By Theorem C.3, weak selection favors \(A\) over \(a\) if and only if \[\sum_{g\in G}\sum_{S\subseteq G}\left((c^{A}_{S,g})^{\prime}\left\langle\iota^ {A}_{S\cup\{g\}}-\bar{\iota}^{A}_{S}\right\rangle^{\circ}-(c^{a}_{S,g})^{ \prime}\left\langle\iota^{a}_{S\cup\{g\}}-\bar{\iota}^{a}_{S}\right\rangle^{ \circ}\right)>0.\] (E.15) The result follows from multiplying by \(2/\bar{\ell}^{\circ}\) and applying Eqs. (D.17) and (D.19). ### Conditions for success at the phenotype level It is also useful to derive conditions for selection that apply at the level of phenotypes. We follow the formalism for phenotypes introduced in Section A.8, with relatedness defined as in Section D.4.3. 
As in Section A.8, we use hats (\(\hat{\cdot}\)) to indicate quantities that depend on the phenotypic state \(\mathbf{\Phi}\), rather than the (allelic) population state \(\mathbf{x}\). The fitness increment of each site \(g\in G\) in phenotypic state \(\mathbf{\Phi}\) is defined as \[\hat{w}_{g}(\mathbf{\Phi})=\hat{\mathbb{E}}_{\mathbf{\Phi}}\left[\sum_{h\in\alpha^{-1}(g)}v_{h}\right]-v_{g}=\sum_{\alpha:G\to G}\hat{p}_{\mathbf{\Phi}}(\alpha)\sum_{h\in\alpha^{-1}(g)}v_{h}-v_{g}.\] (E.16) The fitness increment in population state \(\mathbf{x}\) is then recovered by \[w_{g}(\mathbf{x})=\mathbb{E}_{\mathbf{x}}[\hat{w}_{g}(\mathbf{\Phi})].\] (E.17) To proceed, we must assume that sites within a single individual have the same fitness increment: (I) For each individual \(i\in I\) and each phenotypic state \(\mathbf{\Phi}\), the fitness increment of each site in \(i\) is the same: \(\hat{w}_{g}(\mathbf{\Phi})=\hat{w}_{h}(\mathbf{\Phi})\) for each \(g,h\in G_{i}\). Assumption (I) formalizes the principle of "fair meiosis" in Mendelian inheritance. It excludes the possibility of gene drive, in which certain alleles are more likely than others in the same individual to be transmitted during meiosis [86]. We will only invoke this assumption for specific individual-level results. With Assumption (I) in force, we let \(\hat{w}_{i}(\mathbf{\Phi})\) denote the fitness increment of each site in individual \(i\in I\), so that \(\hat{w}_{g}(\mathbf{\Phi})=\hat{w}_{i}(\mathbf{\Phi})\) for each \(g\in G_{i}\). By Eq. (C.7) we have \[\sum_{i\in I}n_{i}\hat{w}_{i}(\mathbf{\Phi})=0.\] (E.18) We next obtain a phenotype-level analogue of Eq. (C.10), which can also be understood as an instance of the Price equation [70]. **Lemma E.3**.: _If Assumption (I) holds, then the selection increment in state \(\mathbf{x}\) is given by_ \[\Delta(\mathbf{x})=\frac{1}{n}\sum_{i\in I}n_{i}\operatorname{\mathbb{E}}_{\mathbf{x}}\left[\hat{w}_{i}(\mathbf{\Phi})\right]\bar{x}_{i}=\frac{1}{n}\sum_{i\in I}n_{i}\operatorname{\mathbb{E}}_{\mathbf{x}}\left[\hat{w}_{i}(\mathbf{\Phi})\right](\bar{x}_{i}-\bar{x}).\] (E.19) _In particular, if each individual has the same ploidy (=number of sites) then_ \[\Delta(\mathbf{x})=\frac{1}{N}\sum_{i\in I}\operatorname{\mathbb{E}}_{\mathbf{x}}[\hat{w}_{i}(\mathbf{\Phi})]\bar{x}_{i}=\frac{1}{N}\sum_{i\in I}\operatorname{\mathbb{E}}_{\mathbf{x}}\left[\hat{w}_{i}(\mathbf{\Phi})\right](\bar{x}_{i}-\bar{x}),\] (E.20) _where \(N=|I|\) is the number of individuals._ Proof.: We begin with Eq. (C.10): \[\Delta(\mathbf{x})=\frac{1}{n}\sum_{g\in G}x_{g}w_{g}(\mathbf{x})=\frac{1}{n}\sum_{i\in I}\sum_{g\in G_{i}}x_{g}w_{g}(\mathbf{x})=\frac{1}{n}\sum_{i\in I}\sum_{g\in G_{i}}x_{g}\operatorname{\mathbb{E}}_{\mathbf{x}}[\hat{w}_{g}(\mathbf{\Phi})].\] Now invoking Assumption (I), we have \[\Delta(\mathbf{x})=\frac{1}{n}\sum_{i\in I}\operatorname{\mathbb{E}}_{\mathbf{x}}[\hat{w}_{i}(\mathbf{\Phi})]\sum_{g\in G_{i}}x_{g}=\frac{1}{n}\sum_{i\in I}n_{i}\operatorname{\mathbb{E}}_{\mathbf{x}}\left[\hat{w}_{i}(\mathbf{\Phi})\right]\bar{x}_{i}=\frac{1}{n}\sum_{i\in I}n_{i}\operatorname{\mathbb{E}}_{\mathbf{x}}\left[\hat{w}_{i}(\mathbf{\Phi})\right](\bar{x}_{i}-\bar{x}).\] The last equality follows from Eq. (E.18). This proves Eq. (E.19). Eq. (E.20) follows from observing that if \(n_{i}\) is constant over all individuals \(i\in I\), then \(Nn_{i}=n\).
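To make the synergistic decomposition concrete, the following small Python sketch verifies Eqs. (E.1)-(E.2) on a toy example: the coefficients \(c_{S,g}\) are computed by inclusion-exclusion from the values \(w_{g}(\mathbf{1}_{T})\) and then shown to reconstruct \(w_{g}(\mathbf{x})\) in every state. The three-site fitness function used here is an arbitrary illustrative choice, constrained only to sum to zero over sites as in Eq. (C.7); it is not a model from the text.

```python
from itertools import combinations, product

G = [0, 1, 2]                                    # three genetic sites

def subsets(base):
    return [frozenset(c) for k in range(len(base) + 1)
            for c in combinations(base, k)]

def w(g, x):
    # Toy fitness increments; they sum to zero over g in every state (Eq. (C.7)).
    raw = [x[0] + 2 * x[1] * x[2], x[1] - x[0] * x[2], 3 * x[2] * x[0]]
    return raw[g] - sum(raw) / len(G)

def one(T):
    # The state 1_T: allele A at exactly the sites in T, allele a elsewhere.
    return tuple(1 if g in T else 0 for g in G)

def c(S, g):
    # Eq. (E.2): c_{S,g} by inclusion-exclusion over subsets T of S.
    return sum((-1) ** (len(S) - len(T)) * w(g, one(T)) for T in subsets(S))

# Check Eq. (E.1): w_g(x) = sum_S c_{S,g} * iota^A_S(x) in every state x.
for x in product([0, 1], repeat=len(G)):
    for g in G:
        rebuilt = sum(c(S, g) * all(x[h] == 1 for h in S) for S in subsets(G))
        assert abs(rebuilt - w(g, x)) < 1e-12
print("Eqs. (E.1)-(E.2) verified on all states of the toy example.")
```

The same inclusion-exclusion structure applies to the phenotype-level coefficients \(\hat{c}_{J,i}\) of Eq. (E.21) below, with individuals in place of sites.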
As in Section E.1, we write each fitness increment \(\hat{w}_{i}(\mathbf{\Phi})\) uniquely in the form \[\hat{w}_{i}(\mathbf{\Phi})=\sum_{J\subseteq I}\hat{c}_{J,i}\left(\prod_{j\in J}\Phi_{j}\right),\] (E.21) for some coefficients \(\hat{c}_{J,i}\) with \(\sum_{i\in I}n_{i}\hat{c}_{J,i}=0\) for each \(J\subseteq I\). We then obtain a phenotype-level version of Theorem E.1: **Theorem E.4**.: _If Assumption (I) holds, then selection favors allele \(A\) if and only if_ \[\sum_{i\in I}n_{i}\sum_{J\subseteq I}\hat{c}_{J,i}\,r_{J,i}^{1}>0.\] (E.22) Proof.: Taking the expectation of Eq. (E.21) in state \(\mathbf{x}\) gives \[\mathbb{E}_{\mathbf{x}}\left[\hat{w}_{i}(\mathbf{\Phi})\right]=\sum_{J\subseteq I}\hat{c}_{J,i}\,\mathbb{E}_{\mathbf{x}}\left[\prod_{j\in J}\Phi_{j}\right]=\sum_{J\subseteq I}\hat{c}_{J,i}\left(\prod_{j\in J}\varphi_{j}(\mathbf{x}_{|G_{j}})\right).\] (E.23) Combining with Eq. (E.19), the selection increment can be written \[\Delta(\mathbf{x})=\frac{1}{n}\sum_{i\in I}n_{i}\sum_{J\subseteq I}\hat{c}_{J,i}\left(\prod_{j\in J}\varphi_{j}(\mathbf{x}_{|G_{j}})\right)\left(\bar{x}_{i}-\bar{x}\right).\] (E.24) Applying the operator \(\langle\;\rangle\) to both sides and invoking Eq. (D.23a), we obtain \[\langle\Delta\rangle=\frac{1}{n}\langle\bar{x}(1-\bar{x})\rangle\sum_{i\in I}n_{i}\sum_{J\subseteq I}\hat{c}_{J,i}\,r^{1}_{J,i}.\] (E.25) The result now follows from Theorem C.2. ## Appendix F Maximization of inclusive fitness for a single collective Does selection lead collectives to act as if maximizing inclusive fitness? We find one highly idealized case in which it does. Let us assume (unrealistically) that the fitness increment of each site depends only on which alleles are present or absent from a particular nonempty subset \(S\subseteq G\). This means that, for each site \(g\in G\), there exist three values, \(w^{A}_{g}\), \(w^{a}_{g}\), and \(w^{Aa}_{g}\), such that, in each state \(\mathbf{x}\), \[w_{g}(\mathbf{x})=\begin{cases}w^{A}_{g}&\text{ if }x_{h}=A\text{ for all }h\in S\\ w^{a}_{g}&\text{ if }x_{h}=a\text{ for all }h\in S\\ w^{Aa}_{g}&\text{ otherwise.}\end{cases}\] (F.1) By Eq. (C.7) we must have \[\sum_{g\in G}w^{A}_{g}=\sum_{g\in G}w^{a}_{g}=\sum_{g\in G}w^{Aa}_{g}=0.\] (F.2) Under this assumption, we obtain the following maximization result: **Theorem F.1**.: _If Eq. (F.1) holds for a particular nonempty \(S\subseteq G\), then weak selection favors \(A\) over \(a\) in the sense of Theorem C.3 if and only if_ \[\sum_{g\in G}w^{\prime}_{g}(\mathbf{A})\,r^{\circ}_{S,g}>\sum_{g\in G}w^{\prime}_{g}(\mathbf{a})\,r^{\circ}_{S,g}.\] Proof.: We rewrite Eq. (F.1) as \[w_{g}(\mathbf{x})=w_{g}^{A}\,\iota_{S}^{A}(\mathbf{x})+w_{g}^{a}\,\iota_{S}^{a}(\mathbf{x})+w_{g}^{Aa}\left(1-\iota_{S}^{A}(\mathbf{x})-\iota_{S}^{a}(\mathbf{x})\right)=w_{g}^{Aa}+\left(w_{g}^{A}-w_{g}^{Aa}\right)\iota_{S}^{A}(\mathbf{x})+\left(w_{g}^{a}-w_{g}^{Aa}\right)\iota_{S}^{a}(\mathbf{x}).\] (F.3) This representation of fitness has the form of Eq. (E.7), with \[c_{S,g}^{A}=w_{g}^{A}-w_{g}^{Aa}\] (F.4a) \[c_{S,g}^{a}=w_{g}^{a}-w_{g}^{Aa}\] (F.4b) \[c_{\varnothing,g}^{A}=c_{\varnothing,g}^{a}=\frac{1}{2}w_{g}^{Aa}\] (F.4c) \[c_{T,g}^{A}=c_{T,g}^{a}=0\qquad\text{for all }T\neq S,\varnothing.\] (F.4d) Eq. (E.8a) holds for these \(c_{T,g}^{A}\) and \(c_{T,g}^{a}\) coefficients as a consequence of Eq. (F.2).
Applying Theorem E.2, weak selection favors \(A\) over \(a\) if and only if \[\sum_{g\in G}\left((w_{g}^{A})^{\prime}-(w_{g}^{Aa})^{\prime}\right)r_{S,g}^{\circ}>\sum_{g\in G}\left((w_{g}^{a})^{\prime}-(w_{g}^{Aa})^{\prime}\right)r_{S,g}^{\circ}.\] (F.5) Above, primes indicate derivatives with respect to \(\delta\) at \(\delta=0\). The result follows from cancelling the terms with \((w_{g}^{Aa})^{\prime}\) on both sides, and observing that \((w_{g}^{A})^{\prime}=w_{g}^{\prime}(\mathbf{A})\) and \((w_{g}^{a})^{\prime}=w_{g}^{\prime}(\mathbf{a})\). Thus, if only the behavior of a single collective \(S\) is under selection--in the sense that fitness of each site depends only on the set of alleles present in \(S\)--then weak selection acts to increase the quantity \(\sum_{g\in G}w_{g}^{\prime}(\mathbf{x})\,r_{S,g}^{\circ}\) over monoallelic states \(\mathbf{x}\). This can be understood as saying that, if selection acts only on the collective behavior of \(S\), then it will favor increase in the collective inclusive fitness of \(S\). Theorem F.1 is conceptually intriguing. It suggests that each collective has a particular genetic interest, which would be maximized if it were left to evolve on its own with the behavior of all other collectives fixed (_ceteris paribus_). However, the required assumption--that only the actions of a single collective are under selection--is unlikely to apply (even approximately) to any real-world population. Because of this, Theorem F.1 does not imply that selection in real-world populations will lead any particular collective to act as if maximizing inclusive fitness. ## Appendix G Collective action among diploid relatives Here we apply our results to social behavior among relatives in a diploid population. This model uses the formalism for individual phenotypes, as introduced in Section A.8 and further developed in Sections D.4.3 and E.4. ### Population model We consider a diploid population with discrete generations. We first describe the model for a hermaphroditic (one sex) population, and then show how it extends to two sexes. We focus on the population of juveniles in each generation. There is a set \(I\) of juvenile individuals, with size \(N=|I|\). These individuals are partitioned into \(N/M\) families of size \(M\) each. Each individual \(i\in I\) has genetic sites \(G_{i}=\{i_{1},i_{2}\}\). There are two phenotypes, numbered 1 and 0. \(AA\) individuals have phenotype 1, \(aa\) individuals have phenotype 0, and heterozygotes have phenotype 1 or 0 with probabilities \(h\) and \(1-h\), respectively, where \(0\leq h\leq 1\) represents the degree of genetic dominance. Overall, the probability that individual \(i\in I\) has phenotype \(\Phi_{i}=1\) in state \(\mathbf{x}\) is given by Eq. (A.4). The overall phenotypic state of the population is represented by the vector \(\mathbf{\Phi}=(\Phi_{i})_{i\in I}\). The effect of phenotypic state \(\mathbf{\Phi}\) on the survival of each juvenile individual \(i\) is represented by a "payoff function" \(f_{i}(\mathbf{\Phi})\), which captures the effects of \(i\)'s own phenotype as well as all social interactions affecting \(i\). This payoff is then rescaled to a survival function \(F_{i}(\mathbf{\Phi})=1+\delta f_{i}(\mathbf{\Phi})\). The population follows a three-stage lifecycle. In the one-sex case, the stages proceed as follows: 1. Survival: A fixed number \(N_{\mathrm{A}}\leq N\) of juveniles survive to adulthood.
These \(N_{\mathrm{A}}\) surviving adults are sampled in sequence (without replacement) from \(I\), each proportionally to \(F_{i}(\mathbf{\Phi})\). 2. Mating: From these \(N_{\mathrm{A}}\) surviving adults, \(N/M\) ordered pairs of individuals are sampled, uniformly and independently (with replacement). 3. Reproduction: Each ordered pair produces \(M\) juvenile offspring to fill a single family group. Alleles are transmitted according to Mendelian inheritance: For each new individual \(i\) in a family group with parents \((j,k)\), site \(i_{1}\) randomly inherits the allele in \(j_{1}\) or \(j_{2}\), and site \(i_{2}\) randomly inherits the allele in \(k_{1}\) or \(k_{2}\), each chosen with equal probability independently of all other such choices. Mutation occurs independently with probability \(u\) at each site. The model for two sexes has the same lifecycle, with the following amendments: The set \(I\) of juveniles is partitioned into equally-sized subsets \(I_{\mathrm{M}}\) and \(I_{\mathrm{F}}\) for males and females, respectively. In the survival stage, \(N_{\mathrm{A}}/2\) males and \(N_{\mathrm{A}}/2\) females are sampled from sets \(I_{\mathrm{M}}\) and \(I_{\mathrm{F}}\) respectively, again proportionally to \(F_{i}(\mathbf{\Phi})\). In the mating stage, the first entry of each mating pair \((j,k)\) is sampled from the surviving males, and the second from the surviving females. In the reproduction stage, each mating pair \((j,k)\) produces \(M/2\) male offspring and \(M/2\) female offspring. For each juvenile offspring \(i\), site \(i_{1}\) inherits an allele from one of the paternal sites, \(j_{1}\) or \(j_{2}\), and \(i_{2}\) inherits an allele from one of the maternal sites, \(k_{1}\) or \(k_{2}\), each independently with equal probability. ### Analysis of selection To make analytical progress, we first assume that \(M\ll N\). Under this assumption, the probability that a given individual \(i\in I\) survives to adulthood is asymptotically \((N_{\rm A}F_{i}(\mathbf{\Phi}))/(N\bar{F}(\mathbf{\Phi}))\), where \(\bar{F}(\mathbf{\Phi})=\frac{1}{N}\sum_{j\in I}F_{j}(\mathbf{\Phi})\) is the population average of the survival function. Each adult individual has, on expectation, \(2N/N_{\rm A}\) offspring (regardless of family size \(M\)). Therefore, each juvenile individual \(i\in I\) produces \(2F_{i}(\mathbf{\Phi})/\bar{F}(\mathbf{\Phi})\) offspring on expectation. Since each parental allele has \(1/2\) probability to be transmitted to each offspring, each allele in individual \(i\) produces an expected \(F_{i}(\mathbf{\Phi})/\bar{F}(\mathbf{\Phi})\) copies in the next generation. By symmetry, each site \(g\in G\) has reproductive value \(v_{g}=1\). Therefore, applying Eq. (E.16), the fitness increment of each site in a given individual \(i\in I\) is \[\hat{w}_{i}(\mathbf{\Phi})=\frac{F_{i}(\mathbf{\Phi})}{\bar{F}(\mathbf{\Phi})}-1=\frac{F_{i}(\mathbf{\Phi})-\bar{F}(\mathbf{\Phi})}{\bar{F}(\mathbf{\Phi})}.\] (G.1) For weak selection, taking the \(\delta\)-derivative at \(\delta=0\) yields \[\hat{w}^{\prime}_{i}(\mathbf{\Phi})=f_{i}(\mathbf{\Phi})-\bar{f}(\mathbf{\Phi}),\] (G.2) where \(\bar{f}(\mathbf{\Phi})=\frac{1}{N}\sum_{j\in I}f_{j}(\mathbf{\Phi})\).
Applying Lemma E.3, the weak selection increment in each state \(\mathbf{x}\) is \[\Delta^{\prime}(\mathbf{x})=\sum_{i\in I}\mathbb{E}_{\mathbf{x}}\left[f_{i}(\mathbf{\Phi})-\bar{f}(\mathbf{\Phi})\right](\bar{x}_{i}-\bar{x})=\sum_{i\in I}\mathbb{E}_{\mathbf{x}}\left[f_{i}(\mathbf{\Phi})\right](\bar{x}_{i}-\bar{x}).\] (G.3) In the second equality above, the terms involving \(\bar{f}(\mathbf{\Phi})\) cancel because \(\sum_{i\in I}(\bar{x}_{i}-\bar{x})=0\). Eqs. (G.1)-(G.3) apply to both the one-sex and two-sex models. ### Coalescence lengths We now turn to computing coalescence lengths for the neutral (\(\delta=0\)) case of this model, under the further assumption that \(M\ll N_{\rm A}\ll N\). We first consider the population of \(N_{\rm A}\) adults at each generation. In the one-sex case, this adult population is asymptotically described by the neutral Wright-Fisher process for \(N_{\rm A}\) diploid individuals. The neutral process in the two-sex case was formally characterized in Section 8.2 of Allen and McAvoy [57]. In either case, coalescence lengths for alleles in adults are asymptotically characterized by the standard Kingman coalescent [37, 87, 38, 57] on \(2N_{\rm A}\) alleles. Let \(\lambda_{k}\) denote the coalescence length of a set of \(k\geq 1\) alleles in adults, and let \(L_{k}=\lim_{N_{\rm A}\to\infty}\lambda_{k}/(4N_{\rm A})\). A classical result [88] gives \[L_{k}=\sum_{j=1}^{k-1}\frac{1}{j},\] (G.4) which is the expected total branch length for \(k\) sites in the Kingman coalescent [37, 38]. Now we return to the juvenile population. For any nonempty \(S\subseteq G\), let \(P_{S}\) be a random variable representing the number of distinct sites among the parents of alleles in \(S\), within the adult population, in the \(N\to\infty\) limit. (This requires \(S\) to be defined in such a way as to be independent of \(N\), which will be the case for all sets we examine.) Note that \(1\leq P_{S}\leq|S|\). Then neutral expected coalescence lengths obey \(\lim_{N\to\infty}\ell_{S}^{\circ}=|S|+\mathbb{E}\left[\lambda_{P_{S}}\right]\). Normalizing by \(4N_{\mathrm{A}}\) and applying Eq. (G.4), we obtain \[\lim_{N_{\mathrm{A}}\to\infty}\lim_{N\to\infty}\frac{\ell_{S}^{\circ}}{4N_{\mathrm{A}}}=\lim_{N_{\mathrm{A}}\to\infty}\frac{|S|+\mathbb{E}\left[\lambda_{P_{S}}\right]}{4N_{\mathrm{A}}}=\mathbb{E}\left[L_{P_{S}}\right].\] (G.5) Let us apply this result to \(\bar{\ell}=\frac{1}{n^{2}}\sum_{h,k\in G}\ell_{\{h,k\}}\), the average coalescence length among all pairs of (juvenile) sites. For two sites sampled uniformly from the juvenile population, their likelihood of having the same parent allele is negligible since they are unlikely to belong to the same family (\(N\gg M\)), and distinct families are unlikely to have the same parents (\(N_{\mathrm{A}}\gg 1\)). Therefore, in the relevant limits (first \(N\to\infty\), then \(N_{\mathrm{A}}\to\infty\)), \(P_{\{h,k\}}=2\) almost surely for almost all pairs \(h,k\in G\). Eq. (G.5) then gives \[\lim_{N_{\mathrm{A}}\to\infty}\lim_{N\to\infty}\frac{\bar{\ell}^{\circ}}{4N_{\mathrm{A}}}=L_{2}=1.\] (G.6) ### Collective relatedness among siblings We now begin computing collective relatedness quantities, in the limits of first \(N\to\infty\) and then \(N_{\mathrm{A}}\to\infty\).
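For reference, the constants \(L_{k}\) of Eq. (G.4) can be tabulated exactly; the values \(L_{1}=0\), \(L_{2}=1\), \(L_{3}=3/2\), \(L_{4}=11/6\), and \(L_{5}=25/12\) recur throughout the calculations below. A minimal sketch, using exact rational arithmetic from Python's standard library:

```python
from fractions import Fraction

def L(k):
    # Eq. (G.4): normalized expected total branch length for k lineages.
    return sum(Fraction(1, j) for j in range(1, k))

for k in range(1, 6):
    print(f"L_{k} = {L(k)}")   # prints 0, 1, 3/2, 11/6, 25/12
```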
It will be useful to define an operator \(\llbracket\,\rrbracket\) on state functions \(f(\mathbf{x})\) with \(f(\mathbf{A})=1\) and \(f(\mathbf{a})=0\) by \[\llbracket f\rrbracket=\lim_{N_{\mathrm{A}}\to\infty}\lim_{N\to\infty}\frac{\left\langle\bar{x}-f(\mathbf{x})\right\rangle^{\circ}}{\left\langle\bar{x}\left(1-\bar{x}\right)\right\rangle^{\circ}}=\lim_{N_{\mathrm{A}}\to\infty}\lim_{N\to\infty}\frac{2\left\langle\bar{x}-f(\mathbf{x})\right\rangle^{\circ}}{\bar{\ell}^{\circ}}.\] (G.7) This operator is affine in the sense that, for state functions \(f(\mathbf{x})\) and \(g(\mathbf{x})\) with \(f(\mathbf{A})=g(\mathbf{A})=1\) and \(f(\mathbf{a})=g(\mathbf{a})=0\), we have \[\llbracket af+bg\rrbracket=a\llbracket f\rrbracket+b\llbracket g\rrbracket\] for all scalars \(a,b\in\mathbb{R}\) with \(a+b=1\). As an example, let \(f(\mathbf{x})\) be the identity-by-state function \(\iota_{S}^{A}(\mathbf{x})=\prod_{g\in S}x_{g}\) for some nonempty \(S\subseteq G\). By Eq. (D.11) and symmetry of \(A\) and \(a\) under neutral drift, we have \[\left\langle\bar{x}-\iota_{S}^{A}\right\rangle^{\circ}=\left\langle 1-\bar{x}-\iota_{S}^{a}\right\rangle^{\circ}=\frac{1}{2}\left\langle 1-\iota_{S}\right\rangle^{\circ}=\frac{\ell_{S}^{\circ}}{2}.\] (G.8) Combining with Eqs. (G.5) and (G.7) gives \[\llbracket\iota_{S}^{A}\rrbracket=\lim_{N_{\mathrm{A}}\to\infty}\lim_{N\to\infty}\frac{\left\langle\bar{x}-\iota_{S}^{A}\right\rangle^{\circ}}{\left\langle\bar{x}\left(1-\bar{x}\right)\right\rangle^{\circ}}=\lim_{N_{\mathrm{A}}\to\infty}\lim_{N\to\infty}\frac{\ell_{S}^{\circ}}{\bar{\ell}^{\circ}}=\frac{\mathbb{E}\left[L_{P_{S}}\right]}{L_{2}}=\mathbb{E}\left[L_{P_{S}}\right].\] (G.9) This means that, for any set \(S\) of sites in the juvenile population, \(\llbracket\iota_{S}^{A}\rrbracket\) is equal to the expected total branch length, under the Kingman coalescent, of the parental sites from which the alleles in \(S\) are inherited. Eq. (G.9) will be instrumental in computing collective relatedness. #### g.4.1 Self-relatedness We begin with the relatedness of any individual \(j\in J\) to itself. Combining Eqs. (D.23) and (A.4), and applying the limits \(N\to\infty\) and then \(N_{\mathrm{A}}\to\infty\), we obtain \[r_{j}^{\circ}=\lim_{N_{\mathrm{A}}\to\infty}\lim_{N\to\infty}\frac{\left\langle\left(h\left(x_{j_{1}}+x_{j_{2}}\right)+(1-2h)x_{j_{1}}x_{j_{2}}\right)\left(\frac{x_{j_{1}}+x_{j_{2}}}{2}-\bar{x}\right)\right\rangle^{\circ}}{\left\langle\bar{x}(1-\bar{x})\right\rangle^{\circ}}.\] Recalling that \(x_{g}^{2}=x_{g}\) for each site \(g\), the bracketed quantity in the numerator can be expanded as \[h\left(\frac{x_{j_{1}}+x_{j_{2}}}{2}+x_{j_{1}}x_{j_{2}}-2\left(\frac{x_{j_{1}}+x_{j_{2}}}{2}\right)\bar{x}\right)+(1-2h)\left(x_{j_{1}}x_{j_{2}}-x_{j_{1}}x_{j_{2}}\bar{x}\right).\] (G.10) Applying Eq. (G.7), we can express \(r_{j}^{\circ}\) in terms of the \(\llbracket\,\rrbracket\) operator as \[r_{j}^{\circ}=h\left(2\left\llbracket\left(\frac{x_{j_{1}}+x_{j_{2}}}{2}\right)\bar{x}\right\rrbracket-\left\llbracket\frac{x_{j_{1}}+x_{j_{2}}}{2}\right\rrbracket-\left\llbracket x_{j_{1}}x_{j_{2}}\right\rrbracket\right)+(1-2h)\left(\llbracket x_{j_{1}}x_{j_{2}}\bar{x}\rrbracket-\llbracket x_{j_{1}}x_{j_{2}}\rrbracket\right).\] (G.11) Invoking Eq.
(G.9), and noting that (in the \(N_{\mathrm{A}}\to\infty\) limit) \(j_{1}\) and \(j_{2}\) must come from distinct parental sites, we have \[\llbracket x_{j_{1}}\rrbracket=\llbracket x_{j_{2}}\rrbracket=L_{1}\] \[\llbracket x_{j_{1}}x_{j_{2}}\rrbracket=\llbracket x_{j_{1}}\bar{x}\rrbracket=\llbracket x_{j_{2}}\bar{x}\rrbracket=L_{2}\] \[\llbracket x_{j_{1}}x_{j_{2}}\bar{x}\rrbracket=L_{3}.\] Substituting into Eq. (G.11) and evaluating via Eq. (G.4), we obtain \[r_{j}^{\circ}=h\left(2L_{2}-L_{1}-L_{2}\right)+(1-2h)(L_{3}-L_{2})=h(1)+(1-2h)\left(\frac{1}{2}\right)=\frac{1}{2}.\] (G.12) Thus the neutral relatedness of any individual \(j\) to itself is \(r_{j}^{\circ}=\frac{1}{2}\) in this model. #### g.4.2 Collective relatedness to another sibling We now consider the relatedness of a set \(J\) of siblings to another sibling \(i\notin J\). Again combining Eqs. (D.23) and (A.4), we obtain \[r_{J,i}^{\circ}=\lim_{N_{\mathrm{A}}\to\infty}\lim_{N\to\infty}\frac{\left\langle\left(\prod_{j\in J}\left(hx_{j_{1}}+hx_{j_{2}}+(1-2h)x_{j_{1}}x_{j_{2}}\right)\right)\left(\frac{x_{i_{1}}+x_{i_{2}}}{2}-\bar{x}\right)\right\rangle^{\circ}}{\left\langle\bar{x}(1-\bar{x})\right\rangle^{\circ}}.\] (G.13) Let us define the notation \(\underline{x}_{(a,b)}\), for \(0\leq a\leq m\) and \(0\leq b\leq m\), where \(m=|J|\) is the number of siblings in \(J\), to be the average value of all products involving \(a\) distinct factors of the form \(x_{j_{1}}\) and \(b\) distinct factors of the form \(x_{j_{2}}\), for \(j\) varying over \(J\). Expanding the first factor in the numerator of Eq. (G.13) according to the multinomial theorem, we have \[\prod_{j\in J}\left(hx_{j_{1}}+hx_{j_{2}}+(1-2h)x_{j_{1}}x_{j_{2}}\right)=\sum_{\begin{subarray}{c}k_{1},k_{2}\geq 0\\ k_{1}+k_{2}\leq m\end{subarray}}\frac{m!\;h^{k_{1}+k_{2}}\;(1-2h)^{m-k_{1}-k_{2}}}{k_{1}!\;k_{2}!\;(m-k_{1}-k_{2})!}\;\underline{x}_{(m-k_{2},m-k_{1})}.\] (G.14) Each term on the right-hand side corresponds to choosing \(k_{1}\) factors of the form \(hx_{j_{1}}\), \(k_{2}\) factors of the form \(hx_{j_{2}}\), and \(m-k_{1}-k_{2}\) factors of the form \((1-2h)x_{j_{1}}x_{j_{2}}\). Using this result, we can express Eq. (G.13) as \[r^{\circ}_{J,i}=\sum_{\begin{subarray}{c}k_{1},k_{2}\geq 0\\ k_{1}+k_{2}\leq m\end{subarray}}\frac{m!\;h^{k_{1}+k_{2}}\;(1-2h)^{m-k_{1}-k_{2}}}{k_{1}!\;k_{2}!\;(m-k_{1}-k_{2})!}\times\left(\left\llbracket\underline{x}_{(m-k_{2},m-k_{1})}\bar{x}\right\rrbracket-\left\llbracket\frac{\underline{x}_{(m-k_{2}+1,m-k_{1})}+\underline{x}_{(m-k_{2},m-k_{1}+1)}}{2}\right\rrbracket\right).\] (G.15) We now evaluate the \(\llbracket\,\rrbracket\) operations in Eq. (G.15), by means of Eq. (G.9). We start with terms of the form \(\underline{x}_{(a,b)}\).
For \(b=0\), we have \[\left[\hskip-1.0pt\left[\hskip-1.0pt\underline{x}_{(a,0)}\right]\hskip-1.0pt \right]=\frac{1}{2^{a-1}}L_{1}+\left(1-\frac{1}{2^{a-1}}\right)L_{2}=1-\frac{ 1}{2^{a-1}}.\] (G.16) Similarly, for \(a=0\), \[\left[\hskip-1.0pt\left[\hskip-1.0pt\underline{x}_{(0,b)}\right]\hskip-1.0pt \right]=1-\frac{1}{2^{b-1}}.\] (G.17) For \(a,b\neq 0\) we have \[\left[\hskip-1.0pt\left[\hskip-1.0pt\underline{x}_{(a,b)}\right] \hskip-1.0pt\right] =\left(\frac{1}{2^{a-1}}\right)\left(\frac{1}{2^{b-1}}\right)L_{2}\\ +\left(\frac{1}{2^{a-1}}\left(1-\frac{1}{2^{b-1}}\right)+\left(1- \frac{1}{2^{a-1}}\right)\frac{1}{2^{b-1}}\right)L_{3}\\ +\left(1-\frac{1}{2^{a-1}}\right)\left(1-\frac{1}{2^{b-1}}\right) L_{4}\\ =\frac{11}{6}-\frac{1}{3}\left(\frac{1}{2^{a-1}}+\frac{1}{2^{b-1 }}\right)-\frac{1}{6}\left(\frac{1}{2^{a+b-2}}\right).\] (G.18) We next consider combined terms of the form \(\frac{1}{2}\left(\underline{x}_{(a+1,b)}+\underline{x}_{(a,b+1)}\right)\). For \(b=0\), using Eqs. (G.16) and (G.18), we have \[\left[\!\left[\frac{\underline{x}_{(a+1,0)}+\underline{x}_{(a,1)}}{2}\right]\!\right] =\frac{1}{2}\left(1-\frac{1}{2^{a}}+\frac{11}{6}-\frac{1}{3}\left( \frac{1}{2^{a-1}}+1\right)-\frac{1}{6}\left(\frac{1}{2^{a-1}}\right)\right)\] \[=\frac{1}{2}\left(\frac{5}{2}-\frac{1}{2^{a-1}}\right)\] \[=\frac{5}{4}-\frac{1}{2^{a}}.\] (G.19) Similarly, for \(a=0\), \[\left[\!\left[\frac{\underline{x}_{(1,b)}+\underline{x}_{(0,b+1)}}{2}\right]\! \right]=\frac{5}{4}-\frac{1}{2^{b}}.\] (G.20) For \(a,b\neq 0\), applying Eq. (G.18) gives \[\left[\!\left[\frac{\underline{x}_{(a+1,b)}+\underline{x}_{(a,b+1 )}}{2}\right]\!\right] =\frac{11}{6}-\frac{1}{6}\left(\frac{1}{2^{a-1}}+\frac{1}{2^{a}}+ \frac{1}{2^{b-1}}+\frac{1}{2^{b}}\right)-\frac{1}{6}\left(\frac{1}{2^{a+b-1}}\right)\] \[=\frac{11}{6}-\frac{1}{2}\left(\frac{1}{2^{a}}+\frac{1}{2^{b}} \right)-\frac{1}{3}\left(\frac{1}{2^{a+b}}\right).\] (G.21) We also require terms of the form \(\underline{x}_{(a,b)}\bar{x}\). For \(b=0\), \[\left[\!\left[\underline{x}_{(a,0)}\bar{x}\right]\!\right]=\frac{1}{2^{a-1}}L_ {2}+\left(1-\frac{1}{2^{a-1}}\right)L_{3}=\frac{3}{2}-\frac{1}{2^{a}}.\] (G.22) Similarly, for \(a=0\), \[\left[\!\left[\underline{x}_{(0,b)}\bar{x}\right]\!\right]=\frac{3}{2}-\frac{ 1}{2^{b}}.\] (G.23) For \(a,b\neq 0\), \[\left[\!\left[\underline{x}_{(a,b)}\bar{x}\right]\!\right] =\frac{1}{2^{a+b-2}}L_{3}+\left(\frac{1}{2^{a-1}}\left(1-\frac{1} {2^{b-1}}\right)+\left(1-\frac{1}{2^{a-1}}\right)\frac{1}{2^{b-1}}\right)L_{4}\] \[\qquad+\left(1-\frac{1}{2^{a-1}}\right)\left(1-\frac{1}{2^{b-1}} \right)L_{5}\] \[=\frac{3}{2}+\frac{1}{3}\left(1-\frac{1}{2^{a+b-2}}\right)+\frac{ 1}{4}\left(1-\frac{1}{2^{a-1}}\right)\left(1-\frac{1}{2^{b-1}}\right)\] \[=\frac{25}{12}-\frac{1}{4}\left(\frac{1}{2^{a-1}}+\frac{1}{2^{b-1 }}\right)-\frac{1}{12}\left(\frac{1}{2^{a+b-2}}\right)\] \[=\frac{25}{12}-\frac{1}{2}\left(\frac{1}{2^{a}}+\frac{1}{2^{b}} \right)-\frac{1}{3}\left(\frac{1}{2^{a+b}}\right).\] (G.24) Combining Eqs. (G.19)-(G.24), we find that for any \(a,b\geq 0\) with \(a+b\geq 1\), \[\left[\!\left[\underline{x}_{(a,b)}\bar{x}\right]\!\right]-\left[\!\left[ \frac{\underline{x}_{(a+1,b)}+\underline{x}_{(a,b+1)}}{2}\right]\!\right]= \frac{1}{4}.\] (G.25) It then follows from Eqs. (G.13) and (G.15) that the collective relatedness of \(J\) to \(i\) is \[r_{J,i}^{\circ}=1/4.\] (G.26) Strikingly, this result does not depend on the number of siblings, \(m\), nor on the degree of dominance, \(h\). 
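The constant value \(1/4\) can be checked numerically. The sketch below recomputes the bracketed quantities of Eqs. (G.16)-(G.24) directly from the parental-collapse probabilities (the \(a\) copies of \(x_{j_{1}}\) trace back to a single parental allele with probability \(2^{-(a-1)}\), and likewise for the \(x_{j_{2}}\) copies), and confirms the identity of Eq. (G.25) for a range of \((a,b)\); since the multinomial weights in Eq. (G.15) sum to one, this yields \(r_{J,i}^{\circ}=1/4\). The helper names below are ours, introduced only for this check.

```python
from fractions import Fraction

def L(k):
    # Eq. (G.4)
    return sum(Fraction(1, j) for j in range(1, k))

def side_dist(a):
    # Number of distinct parental alleles ancestral to a copies of same-side
    # sibling sites: 1 with probability 2^-(a-1), otherwise 2.
    if a == 0:
        return {0: Fraction(1)}
    p_one = Fraction(1, 2) ** (a - 1)
    return {1: p_one, 2: 1 - p_one}

def bracket(a, b, with_xbar=False):
    # [[ x_(a,b) ]] (or [[ x_(a,b) * xbar ]]) = E[ L_{P_S} ], per Eq. (G.9);
    # the xbar factor contributes one additional, unrelated parental site.
    extra = 1 if with_xbar else 0
    return sum(p1 * p2 * L(k1 + k2 + extra)
               for k1, p1 in side_dist(a).items()
               for k2, p2 in side_dist(b).items())

for a in range(6):
    for b in range(6):
        if a + b == 0:
            continue
        lhs = bracket(a, b, with_xbar=True)
        rhs = (bracket(a + 1, b) + bracket(a, b + 1)) / 2
        assert lhs - rhs == Fraction(1, 4)          # Eq. (G.25)
print("Eq. (G.25) confirmed; hence r_{J,i} = 1/4 for every m and h.")
```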
#### g.4.3 Collective intra-relatedness of siblings Finally, we compute the intra-relatedness \(r_{J}^{\circ}\) of a set \(J\) of siblings, given by \[r_{J}^{\circ}=\lim_{N_{\text{A}}\to\infty}\lim_{N\to\infty}\frac{\left\langle\left(\prod_{j\in J}\left(hx_{j_{1}}+hx_{j_{2}}+(1-2h)x_{j_{1}}x_{j_{2}}\right)\right)\left(\frac{1}{2m}\sum_{j\in J}\left(x_{j_{1}}+x_{j_{2}}\right)-\bar{x}\right)\right\rangle^{\circ}}{\left\langle\bar{x}(1-\bar{x})\right\rangle^{\circ}}.\] (G.27) Using the multinomial expansion in Eq. (G.14), and the identity \[\underline{x}_{(m-k_{2},m-k_{1})}\left(\frac{\sum_{j\in J}\left(x_{j_{1}}+x_{j_{2}}\right)}{2m}\right)=\frac{k_{2}}{2m}\,\underline{x}_{(m-k_{2}+1,m-k_{1})}+\frac{k_{1}}{2m}\,\underline{x}_{(m-k_{2},m-k_{1}+1)}+\frac{2m-k_{1}-k_{2}}{2m}\,\underline{x}_{(m-k_{2},m-k_{1})},\] (G.28) we can express the intra-relatedness \(r_{J}^{\circ}\) in terms of the \(\llbracket\,\rrbracket\) operator as \[r_{J}^{\circ}=\sum_{\begin{subarray}{c}k_{1},k_{2}\geq 0\\ k_{1}+k_{2}\leq m\end{subarray}}\frac{m!\;h^{k_{1}+k_{2}}\;(1-2h)^{m-k_{1}-k_{2}}}{k_{1}!\;k_{2}!\;(m-k_{1}-k_{2})!}\times\left(\left\llbracket\underline{x}_{(m-k_{2},m-k_{1})}\bar{x}\right\rrbracket-\frac{k_{2}}{2m}\left\llbracket\underline{x}_{(m-k_{2}+1,m-k_{1})}\right\rrbracket-\frac{k_{1}}{2m}\left\llbracket\underline{x}_{(m-k_{2},m-k_{1}+1)}\right\rrbracket-\frac{2m-k_{1}-k_{2}}{2m}\left\llbracket\underline{x}_{(m-k_{2},m-k_{1})}\right\rrbracket\right).\] (G.29) Unlike for \(r_{J,i}^{\circ}\), the expression for \(r_{J}^{\circ}\) does not appear to simplify in general. However, by evaluating the terms in Eq. (G.29) according to Eqs. (G.16)-(G.24), one can compute \(r_{J}^{\circ}\) for any given number \(m\) of siblings. The first five values are: \[r_{J}^{\circ}=\begin{cases}\frac{1}{2}&m=1\\ \frac{17+2h}{48}&m=2\\ \frac{57+12h-4h^{2}}{192}&m=3\\ \frac{209+38h+12h^{2}-24h^{3}}{768}&m=4\\ \frac{801+104h+88h^{2}-32h^{3}-80h^{4}}{3072}&m=5.\end{cases}\] (G.30) Evaluating these expressions for different values of \(h\) leads to the intra-relatedness values given in Table 1 of the main text. In the recessive case (\(h=0\)), a closed-form expression for \(r_{J}^{\circ}\) can be obtained. Eq. (G.27) simplifies in the recessive case to \[r_{J}^{\circ}=\lim_{N_{\text{A}}\to\infty}\lim_{N\to\infty}\frac{\left\langle\left(\prod_{j\in J}x_{j_{1}}x_{j_{2}}\right)(1-\bar{x})\right\rangle^{\circ}}{\left\langle\bar{x}(1-\bar{x})\right\rangle^{\circ}}=\left\llbracket\underline{x}_{(m,m)}\bar{x}\right\rrbracket-\left\llbracket\underline{x}_{(m,m)}\right\rrbracket.\] Evaluating via Eqs. (G.18) and (G.24) then yields \[r_{J}^{\circ}=\frac{1}{4}+\frac{1}{3}\left(\frac{1}{2^{m}}\right)+\frac{1}{3}\left(\frac{1}{4^{m}}\right)\qquad(h=0),\] which agrees with Eq. (G.30) at \(h=0\) for \(m=1,\dots,5\).
### Collective help among siblings We now apply these relatedness coefficients to collective social behavior within a family. #### g.5.1 Collective help to another sibling Consider a set \(J\) of \(m\) siblings who may collectively help another sibling \(i\notin J\). Phenotypes 1 and 0 correspond to Cooperators and Defectors, respectively. Each Cooperator in \(J\) pays a cost \(c\), and individual \(i\) receives a benefit \(b\left(\mathbf{\Phi}_{|J}\right)\) that depends on the phenotypes of the members of \(J\), with no benefit received when all members of \(J\) are Defectors. This benefit function can be written uniquely in the form \[b\left(\mathbf{\Phi}_{|J}\right)=\sum_{\varnothing\subset K\subseteq J}C_{K}\prod_{k\in K}\Phi_{k}.\]
Above, the coefficients \(C_{K}\) satisfy \[\sum_{\varnothing\subset L\subseteq K}C_{L}=b(\mathbf{1}_{K}),\] (G.35) where \(\mathbf{1}_{K}\in\{0,1\}^{J}\) is the phenotypic state of \(J\) with 1's in the entries corresponding to individuals in set \(K\subseteq J\), and 0's in the entries corresponding to individuals in \(J-K\). Combining Eqs. (A.4) and (G.34), the expected benefit to \(i\) in genotypic state \(\mathbf{x}\) is given by \[\mathbb{E}_{\mathbf{x}}\left[b\left(\mathbf{\Phi}_{|J}\right)\right]=\sum_{ \varnothing\subset K\subseteq J}C_{K}\prod_{k\in K}\varphi_{k}(x_{k_{1}},x_{k_ {2}}).\] (G.36) Applying Eq. (G.3), the weak selection increment is given by \[\Delta^{\prime}(\mathbf{x})=-c\sum_{j\in J}\varphi_{j}\left(x_{j_ {1}},x_{j_{2}}\right)(\bar{x}_{j}-\bar{x})+\sum_{\varnothing\subset K\subseteq J }C_{K}\prod_{k\in K}\varphi_{k}\left(x_{k_{1}},x_{k_{2}}\right)(\bar{x}_{i}- \bar{x}).\] (G.37) Applying the operator \(\langle\ \rangle^{\circ}\) to both sides, dividing by \(\langle\bar{x}(1-\bar{x})\rangle^{\circ}\), and invoking Eq. (D.23) and Theorem C.3, we find that weak selection favors \(A\) if and only if \[-c\sum_{j\in J}r_{j}^{\circ}+\sum_{\varnothing\subset K\subseteq J }C_{K}\,r_{K,i}^{\circ}>0.\] (G.38) Eq. (G.12) gives \(r_{j}^{\circ}=\frac{1}{2}\), and Eq. (G.26) gives \(r_{K,i}^{\circ}=\frac{1}{4}\) for each nonempty \(K\subseteq J\). Substituting, we obtain the condition \[-\frac{c}{2}+\frac{1}{4}\sum_{\varnothing\subset K\subseteq J}C_{K}>0.\] (G.39) Applying Eq. (G.35) and simplifying, this condition reduces to \[b(\mathbf{1}_{J})>2c.\] (G.40) Thus--as reported in the main text--collective help to \(i\) is favored if, when all members of \(J\) cooperate, the total benefit exceeds twice the total cost. Interestingly, this condition does not involve the values of \(b\left(\mathbf{\Phi}_{|J}\right)\) on any phenotypic states of \(J\) other than the all-Cooperator state \(\mathbf{1}_{J}\). #### g.5.2 Threshold public goods We now consider a threshold public goods game, which can also be understood as a multiplayer Stag Hunt game. As before, there is a set \(J\) of siblings, and phenotypes 1 and 0 correspond to Cooperators and Defectors, respectively. Each Cooperator in \(J\) pays cost \(c\). If all members of \(J\) are Cooperators, they each receive benefit \(b\); otherwise no benefit is received. The payoff to each individual \(j\in I\) is given by \[f_{j}(\mathbf{\Phi})=\begin{cases}-c\,\Phi_{j}+b\prod_{k\in J}\Phi_{k}&j\in J\\ \\ 0&j\notin J.\end{cases}\] (G.41) To derive the conditions for weak selection to favor cooperation, we formulate the selection increment according to Eq. (G.3), making use of Eq. (A.4): \[\Delta^{\prime}(\mathbf{x})=-c\sum_{j\in J}\varphi_{j}\left(x_{j_{1}},x_{j_{2} }\right)(\bar{x}_{j}-\bar{x})+b\prod_{k\in J}\varphi_{k}\left(x_{k_{1}},x_{k_{2 }}\right)\sum_{j\in J}(\bar{x}_{j}-\bar{x}).\] (G.42) Applying \(\langle\ \rangle^{\circ}\) to both sides, dividing by \(\langle\bar{x}(1-\bar{x})\rangle^{\circ}\), and invoking Eqs. (D.23) and (G.27) and Theorem C.3, we find that weak selection favors allele \(A\) if and only if \[-c\sum_{j\in J}r_{j}^{\circ}+mbr_{J}^{\circ}>0.\] (G.43) Since \(r_{j}^{\circ}=1/2\) for each \(j\in J\) according to Eq. (G.12), this condition reduces to \[2br_{J}^{\circ}>c,\] (G.44) as reported in the main text. ### Two arbitrary relatives We now extend to relationships beyond full siblings. 
We imagine a model similar to the model described above, but with individuals grouped by a relationship other than full siblings (half-siblings, cousins, etc.). We do not attempt to explicitly construct such a model, as the setup would depend on the pedigree relationship in question. Instead, we suppose that such a model will share the following features with the full-sibling model: 1. The juvenile population is represented by a set \(I\) of \(N\) diploid individuals, with sites \(i_{1}\) and \(i_{2}\) for each individual \(i\in I\). 2. The effect of phenotypic state \(\mathbf{\Phi}\) on the survival of each individual \(i\in I\) to adulthood is captured by an arbitrary function \(f_{i}(\mathbf{\Phi})\). 3. At each time-step, \(N_{\mathrm{A}}\) surviving adults are sampled from \(I\), in such a manner that, as \(N\to\infty\), the probability that individual \(i\) survives becomes proportional to \(F_{i}(\mathbf{\Phi})=1+\delta f_{i}(\mathbf{\Phi})\). 4. Each of the \(2N_{\mathrm{A}}\) sites in the adult population produces, on expectation, \(N/N_{\mathrm{A}}\) copies in the next juvenile population. 5. For \(k\geq 1\), let \(\bar{\ell}_{k}^{\circ}\) denote the average of \(\ell_{S}^{\circ}\) over all sets \(S\subseteq G\) of size \(k\). Then there exists a constant \(C\) such that \[\lim_{N_{\mathrm{A}}\to\infty}\lim_{N\to\infty}\frac{\bar{\ell}_{k}^{\circ}}{N_{\mathrm{A}}}=CL_{k},\] where \(L_{k}=\sum_{j=1}^{k-1}1/j\). The above assumptions are quite flexible in allowing for various kinds of family relationships to be described. In particular, Property 5 holds for any model for which the adult population converges (in the specified limits, under some rescaling of time) to the Kingman coalescent [37]. Many population models satisfy this property, including models with two distinct sexes [87, 38, 57]. We also note that Properties 2 and 4 do not require independence across individuals in the survival of juveniles to adulthood, nor in the production of offspring by adults. Within such a model, we consider two individuals \(i,j\in I\) with a particular pedigree relationship (siblings, cousins, etc), quantified by two probabilities \(p_{1}\) and \(p_{2}\). With probability \(p_{1}\), site \(i_{1}\) shares a recent common ancestor with \(j_{1}\); likewise, with probability \(p_{2}\), site \(i_{2}\) shares a recent common ancestor with \(j_{2}\). To state this formally, we assume there is some fixed \(T\geq 1\) such that, under neutral drift (\(\delta=0\)), asymptotically as first \(N\to\infty\) and then \(N_{\mathrm{A}}\to\infty\), the ancestral map \(A_{0}^{T}\) has the following properties: (i) With probability \(p_{1}\), \(A_{0}^{T}(i_{1})=A_{0}^{T}(j_{1})\); otherwise \(A_{0}^{T}(i_{1})\) and \(A_{0}^{T}(j_{1})\) are distributed uniformly and independently over \(G\). (ii) With probability \(p_{2}\), \(A_{0}^{T}(i_{2})=A_{0}^{T}(j_{2})\); otherwise \(A_{0}^{T}(i_{2})\) and \(A_{0}^{T}(j_{2})\) are distributed uniformly and independently over \(G\). (iii) The events described in (i) and (ii) are independent of each other. Here, \(T\) represents the number of generations needed to characterize the pedigree relationship. So, for example, half-siblings have \(T=1\) and either \(p_{1}=\frac{1}{2}\) and \(p_{2}=0\), or alternatively \(p_{1}=0\) and \(p_{2}=\frac{1}{2}\). Full cousins have \(T=2\) and \(p_{1}=p_{2}=\frac{1}{8}\).
#### g.6.1 Relatedness of one relative to another We first compute the relatedness \(r_{\{i\},j}^{\circ}\) of (the set containing) one relative to the other: \[r_{\{i\},j}^{\circ}=\lim_{N_{\mathrm{A}}\to\infty}\lim_{N\to\infty}\frac{\left\langle\left(h\left(x_{i_{1}}+x_{i_{2}}\right)+\left(1-2h\right)x_{i_{1}}x_{i_{2}}\right)\left(\frac{1}{2}\left(x_{j_{1}}+x_{j_{2}}\right)-\bar{x}\right)\right\rangle^{\circ}}{\left\langle\bar{x}(1-\bar{x})\right\rangle^{\circ}}.\] (G.45) The bracketed quantity in the numerator can be expanded as \[h\left(\frac{x_{i_{1}}x_{j_{1}}+x_{i_{2}}x_{j_{2}}}{2}+\frac{x_{i_{1}}x_{j_{2}}+x_{i_{2}}x_{j_{1}}}{2}-2\left(\frac{x_{i_{1}}\bar{x}+x_{i_{2}}\bar{x}}{2}\right)\right)+\left(1-2h\right)\left(\frac{x_{i_{1}}x_{i_{2}}x_{j_{1}}+x_{i_{1}}x_{i_{2}}x_{j_{2}}}{2}-x_{i_{1}}x_{i_{2}}\bar{x}\right).\] We again make use of the \(\llbracket\,\rrbracket\) operation, defined in Eq. (G.7), to express this relatedness: \[h\left(2\left\llbracket\frac{x_{i_{1}}\bar{x}+x_{i_{2}}\bar{x}}{2}\right\rrbracket-\left\llbracket\frac{x_{i_{1}}x_{j_{1}}+x_{i_{2}}x_{j_{2}}}{2}\right\rrbracket-\left\llbracket\frac{x_{i_{1}}x_{j_{2}}+x_{i_{2}}x_{j_{1}}}{2}\right\rrbracket\right)+(1-2h)\left(\llbracket x_{i_{1}}x_{i_{2}}\bar{x}\rrbracket-\left\llbracket\frac{x_{i_{1}}x_{i_{2}}x_{j_{1}}+x_{i_{1}}x_{i_{2}}x_{j_{2}}}{2}\right\rrbracket\right).\] (G.46) Using Eq. (G.9) and the given relationship between \(i\) and \(j\), we compute \[\llbracket x_{i_{1}}\bar{x}\rrbracket=\llbracket x_{i_{2}}\bar{x}\rrbracket=\llbracket x_{i_{1}}x_{j_{2}}\rrbracket=\llbracket x_{i_{2}}x_{j_{1}}\rrbracket=L_{2}=1\] \[\llbracket x_{i_{1}}x_{j_{1}}\rrbracket=p_{1}L_{1}+(1-p_{1})L_{2}=1-p_{1}\] \[\llbracket x_{i_{2}}x_{j_{2}}\rrbracket=p_{2}L_{1}+(1-p_{2})L_{2}=1-p_{2}\] \[\llbracket x_{i_{1}}x_{i_{2}}\bar{x}\rrbracket=L_{3}=\frac{3}{2}\] \[\llbracket x_{i_{1}}x_{i_{2}}x_{j_{1}}\rrbracket=p_{1}L_{2}+(1-p_{1})L_{3}=\frac{3-p_{1}}{2}\] \[\llbracket x_{i_{1}}x_{i_{2}}x_{j_{2}}\rrbracket=p_{2}L_{2}+(1-p_{2})L_{3}=\frac{3-p_{2}}{2}.\] Substituting in Eq. (G.46) yields \[r^{\circ}_{\{i\},j}=h\left(2(1)-\left(1-\frac{p_{1}+p_{2}}{2}\right)-1\right)+(1-2h)\left(\frac{3}{2}-\frac{6-p_{1}-p_{2}}{4}\right)=h\left(\frac{p_{1}+p_{2}}{2}\right)+(1-2h)\left(\frac{p_{1}+p_{2}}{4}\right)=\frac{p_{1}+p_{2}}{4}.\] We conclude that \[r^{\circ}_{\{i\},j}=r/2,\] (G.47) where \(r=(p_{1}+p_{2})/2\) is Wright's coefficient of relationship [39]. #### g.6.2 Intra-relatedness of two relatives Next we compute the intra-relatedness \(r^{\circ}_{\{i,j\}}\) of the set containing the two relatives together: \[r^{\circ}_{\{i,j\}}=\lim_{N_{\mathrm{A}}\to\infty}\lim_{N\to\infty}\frac{\left\langle\varphi_{i}(x_{i_{1}},x_{i_{2}})\,\varphi_{j}(x_{j_{1}},x_{j_{2}})\left(\frac{x_{i_{1}}+x_{i_{2}}+x_{j_{1}}+x_{j_{2}}}{4}-\bar{x}\right)\right\rangle^{\circ}}{\left\langle\bar{x}(1-\bar{x})\right\rangle^{\circ}},\] (G.48) where \(\varphi_{i}(x_{i_{1}},x_{i_{2}})\) and \(\varphi_{j}(x_{j_{1}},x_{j_{2}})\) are as given in Eq. (A.4). For the sake of symmetry, \(r^{\circ}_{\{i,j\}}\) is expressed in Eq. (G.48) as the average of \(r^{\circ}_{\{i,j\},i}\) and \(r^{\circ}_{\{i,j\},j}\); we recall from Section D.4.2 that \(r^{\circ}_{\{i,j\},i}=r^{\circ}_{\{i,j\},j}=r^{\circ}_{\{i,j\}}\).
The bracketed quantity in the numerator can be expanded as follows: \[\varphi_{i}(x_{i_{1}},x_{i_{2}})\,\varphi_{j}(x_{j_{1}},x_{j_{2}})\left(\frac{x_{i_{1}}+x_{i_{2}}+x_{j_{1}}+x_{j_{2}}}{4}-\bar{x}\right)\\ =h^{2}(x_{i_{1}}+x_{i_{2}})(x_{j_{1}}+x_{j_{2}})\left(\frac{x_{i_{1}}+x_{i_{2}}+x_{j_{1}}+x_{j_{2}}}{4}-\bar{x}\right)\\ \quad+h(1-2h)\left(x_{i_{1}}x_{i_{2}}(x_{j_{1}}+x_{j_{2}})+x_{j_{1}}x_{j_{2}}(x_{i_{1}}+x_{i_{2}})\right)\left(\frac{x_{i_{1}}+x_{i_{2}}+x_{j_{1}}+x_{j_{2}}}{4}-\bar{x}\right)\\ \quad\quad+(1-2h)^{2}x_{i_{1}}x_{i_{2}}x_{j_{1}}x_{j_{2}}\left(\frac{x_{i_{1}}+x_{i_{2}}+x_{j_{1}}+x_{j_{2}}}{4}-\bar{x}\right)\\ =h^{2}\left(\frac{x_{i_{1}}x_{j_{1}}+x_{i_{2}}x_{j_{2}}}{2}+\frac{x_{i_{1}}x_{j_{2}}+x_{i_{2}}x_{j_{1}}}{2}+2\left(\frac{x_{i_{1}}x_{i_{2}}x_{j_{1}}+x_{i_{1}}x_{j_{1}}x_{j_{2}}+x_{i_{1}}x_{i_{2}}x_{j_{2}}+x_{i_{2}}x_{j_{1}}x_{j_{2}}}{4}\right)-2\left(\frac{x_{i_{1}}x_{j_{1}}\bar{x}+x_{i_{2}}x_{j_{2}}\bar{x}}{2}\right)-2\left(\frac{x_{i_{1}}x_{j_{2}}\bar{x}+x_{i_{2}}x_{j_{1}}\bar{x}}{2}\right)\right)\\ \quad+h(1-2h)\left(3\left(\frac{x_{i_{1}}x_{i_{2}}x_{j_{1}}+x_{i_{1}}x_{j_{1}}x_{j_{2}}+x_{i_{1}}x_{i_{2}}x_{j_{2}}+x_{i_{2}}x_{j_{1}}x_{j_{2}}}{4}\right)+x_{i_{1}}x_{i_{2}}x_{j_{1}}x_{j_{2}}-4\left(\frac{x_{i_{1}}x_{i_{2}}x_{j_{1}}\bar{x}+x_{i_{1}}x_{j_{1}}x_{j_{2}}\bar{x}+x_{i_{1}}x_{i_{2}}x_{j_{2}}\bar{x}+x_{i_{2}}x_{j_{1}}x_{j_{2}}\bar{x}}{4}\right)\right)\\ \quad\quad+(1-2h)^{2}\left(x_{i_{1}}x_{i_{2}}x_{j_{1}}x_{j_{2}}-x_{i_{1}}x_{i_{2}}x_{j_{1}}x_{j_{2}}\bar{x}\right).\] We can therefore express the relatedness \(r_{\{i,j\}}^{\circ}\) as \[r_{\{i,j\}}^{\circ}=h^{2}\left(2\left\llbracket\frac{x_{i_{1}}x_{j_{1}}\bar{x}+x_{i_{2}}x_{j_{2}}\bar{x}}{2}\right\rrbracket+2\left\llbracket\frac{x_{i_{1}}x_{j_{2}}\bar{x}+x_{i_{2}}x_{j_{1}}\bar{x}}{2}\right\rrbracket-\left\llbracket\frac{x_{i_{1}}x_{j_{1}}+x_{i_{2}}x_{j_{2}}}{2}\right\rrbracket-\left\llbracket\frac{x_{i_{1}}x_{j_{2}}+x_{i_{2}}x_{j_{1}}}{2}\right\rrbracket-2\left\llbracket\frac{x_{i_{1}}x_{i_{2}}x_{j_{1}}+x_{i_{1}}x_{j_{1}}x_{j_{2}}+x_{i_{1}}x_{i_{2}}x_{j_{2}}+x_{i_{2}}x_{j_{1}}x_{j_{2}}}{4}\right\rrbracket\right)\\ \quad+h(1-2h)\left(4\left\llbracket\frac{x_{i_{1}}x_{i_{2}}x_{j_{1}}\bar{x}+x_{i_{1}}x_{j_{1}}x_{j_{2}}\bar{x}+x_{i_{1}}x_{i_{2}}x_{j_{2}}\bar{x}+x_{i_{2}}x_{j_{1}}x_{j_{2}}\bar{x}}{4}\right\rrbracket-3\left\llbracket\frac{x_{i_{1}}x_{i_{2}}x_{j_{1}}+x_{i_{1}}x_{j_{1}}x_{j_{2}}+x_{i_{1}}x_{i_{2}}x_{j_{2}}+x_{i_{2}}x_{j_{1}}x_{j_{2}}}{4}\right\rrbracket-\left\llbracket x_{i_{1}}x_{i_{2}}x_{j_{1}}x_{j_{2}}\right\rrbracket\right)\\ \quad\quad+(1-2h)^{2}\left(\left\llbracket x_{i_{1}}x_{i_{2}}x_{j_{1}}x_{j_{2}}\bar{x}\right\rrbracket-\left\llbracket x_{i_{1}}x_{i_{2}}x_{j_{1}}x_{j_{2}}\right\rrbracket\right).\] Using Eq.
(G.9) and the given relationship between \(i\) and \(j\), we compute \[\llbracket x_{i_{1}}x_{j_{2}}\rrbracket=\llbracket x_{i_{2}}x_{j_{1}}\rrbracket=L_{2}=1\] \[\llbracket x_{i_{1}}x_{j_{1}}\rrbracket=p_{1}L_{1}+(1-p_{1})L_{2}=1-p_{1}\] \[\llbracket x_{i_{2}}x_{j_{2}}\rrbracket=p_{2}L_{1}+(1-p_{2})L_{2}=1-p_{2}\] \[\llbracket x_{i_{1}}x_{i_{2}}x_{j_{1}}\rrbracket=\llbracket x_{i_{1}}x_{j_{1}}x_{j_{2}}\rrbracket=\llbracket x_{i_{1}}x_{j_{1}}\bar{x}\rrbracket=p_{1}L_{2}+(1-p_{1})L_{3}=\frac{3-p_{1}}{2}\] \[\llbracket x_{i_{1}}x_{i_{2}}x_{j_{2}}\rrbracket=\llbracket x_{i_{2}}x_{j_{1}}x_{j_{2}}\rrbracket=\llbracket x_{i_{2}}x_{j_{2}}\bar{x}\rrbracket=p_{2}L_{2}+(1-p_{2})L_{3}=\frac{3-p_{2}}{2}\] \[\llbracket x_{i_{1}}x_{j_{2}}\bar{x}\rrbracket=\llbracket x_{i_{2}}x_{j_{1}}\bar{x}\rrbracket=L_{3}=\frac{3}{2}\] \[\llbracket x_{i_{1}}x_{i_{2}}x_{j_{1}}\bar{x}\rrbracket=\llbracket x_{i_{1}}x_{j_{1}}x_{j_{2}}\bar{x}\rrbracket=p_{1}L_{3}+(1-p_{1})L_{4}=\frac{11-2p_{1}}{6}\] \[\llbracket x_{i_{1}}x_{i_{2}}x_{j_{2}}\bar{x}\rrbracket=\llbracket x_{i_{2}}x_{j_{1}}x_{j_{2}}\bar{x}\rrbracket=p_{2}L_{3}+(1-p_{2})L_{4}=\frac{11-2p_{2}}{6}\] \[\llbracket x_{i_{1}}x_{i_{2}}x_{j_{1}}x_{j_{2}}\rrbracket=p_{1}p_{2}L_{2}+(p_{1}+p_{2}-2p_{1}p_{2})L_{3}+(1-p_{1})(1-p_{2})L_{4}=\frac{11-2p_{1}-2p_{2}-p_{1}p_{2}}{6}\] \[\llbracket x_{i_{1}}x_{i_{2}}x_{j_{1}}x_{j_{2}}\bar{x}\rrbracket=p_{1}p_{2}L_{3}+(p_{1}+p_{2}-2p_{1}p_{2})L_{4}+(1-p_{1})(1-p_{2})L_{5}=\frac{25-3p_{1}-3p_{2}-p_{1}p_{2}}{12}.\] Substituting, and using \(r=(p_{1}+p_{2})/2\) for Wright's coefficient of relationship, we obtain \[r_{\{i,j\}}^{\circ}=h^{2}\left(2\left(\frac{3-r}{2}\right)+2\left(\frac{3}{2}\right)-(1-r)-1-2\left(\frac{3-r}{2}\right)\right)\] \[\quad+h(1-2h)\left(4\left(\frac{11-2r}{6}\right)-3\left(\frac{3-r}{2}\right)-\frac{11-4r-p_{1}p_{2}}{6}\right)\] \[\quad+(1-2h)^{2}\left(\frac{25-6r-p_{1}p_{2}}{12}-\frac{11-4r-p_{1}p_{2}}{6}\right).\] Simplifying, we arrive at \[r_{\{i,j\}}^{\circ}=\frac{1+r}{4}+\frac{\left(h-\frac{1}{2}\right)(r-p_{1}p_{2})}{6}.\] (G.49) #### g.6.3 Arbitrary two-strategy game We represent the interaction between individuals \(i\) and \(j\) as an arbitrary \(2\times 2\) matrix game, with phenotypes \(1\) and \(0\) representing the two strategies. The game matrix is \[\begin{array}{ccc}&1&0\\ 1&\pi_{11}&\pi_{10}\\ 0&\pi_{01}&\pi_{00}\end{array}\] (G.50) where \(\pi_{XY}\) is the payoff to a focal individual with phenotype \(X\) whose partner has phenotype \(Y\). The payoff to individual \(i\), as a function of the phenotype state \(\mathbf{\Phi}\), is
\[f_{i}(\mathbf{\Phi}) =\pi_{11}\Phi_{i}\Phi_{j}+\pi_{10}\Phi_{i}(1-\Phi_{j})+\pi_{01}(1- \Phi_{i})\Phi_{j}+\pi_{00}(1-\Phi_{i})(1-\Phi_{j})\] \[=\pi_{00}+(\pi_{10}-\pi_{00})\Phi_{i}+(\pi_{01}-\pi_{00})\Phi_{j}+( \pi_{11}-\pi_{10}-\pi_{01}+\pi_{00})\Phi_{i}\Phi_{j}.\] (G.51) Similarly, \[f_{j}(\mathbf{\Phi})=\pi_{00}+(\pi_{10}-\pi_{00})\Phi_{j}+(\pi_{01}-\pi_{00}) \Phi_{i}+(\pi_{11}-\pi_{10}-\pi_{01}+\pi_{00})\Phi_{i}\Phi_{j}.\] (G.52) Eq. (G.3) for the weak-selection increment \(\Delta^{\prime}(\mathbf{x})\) holds for any model satisfying Assumptions 1-5 from the beginning of Section G.6. Combining with Eqs. (G.51)-(G.52), we obtain \[\Delta^{\prime}(\mathbf{x})=\pi_{00}\big{(}(\bar{x}_{i}-\bar{x}) +(\bar{x}_{j}-\bar{x})\big{)}\\ +(\pi_{10}-\pi_{00})\big{(}\Phi_{i}(\bar{x}_{i}-\bar{x})+\Phi_{j} (\bar{x}_{j}-\bar{x})\big{)}\\ +(\pi_{01}-\pi_{00})\big{(}\Phi_{j}(\bar{x}_{i}-\bar{x})+\Phi_{i} (\bar{x}_{j}-\bar{x})\big{)}\\ +(\pi_{11}-\pi_{10}-\pi_{01}+\pi_{00})\big{(}\Phi_{i}\Phi_{j}( \bar{x}_{i}-\bar{x})+\Phi_{i}\Phi_{j}(\bar{x}_{j}-\bar{x})\big{)}.\] (G.53) Now applying \(\langle\;\rangle^{\circ}\) and invoking Theorem C.3, weak selection favors allele \(A\) if and only if \[(\pi_{10}-\pi_{00})\left(r_{i}^{\circ}+r_{j}^{\circ}\right)+(\pi_ {01}-\pi_{00})\left(r_{\{j\},i}^{\circ}+r_{\{i\},j}^{\circ}\right)\\ +(\pi_{11}-\pi_{10}-\pi_{01}+\pi_{00})\,2r_{\{i,j\}}^{\circ}>0.\] (G.54) Substituting from Eqs. (G.47) and (G.49) and noting the symmetry between \(i\) and \(j\), this becomes \[(\pi_{10}-\pi_{00})+(\pi_{01}-\pi_{00})\,r\\ +(\pi_{11}-\pi_{10}-\pi_{01}+\pi_{00})\left(\frac{1+r}{2}+\frac{ \left(h-\frac{1}{2}\right)(r-p_{1}p_{2})}{3}\right)>0,\] (G.55) with \(r=(p_{1}+p_{2})/2\) as before. Defining the cost, \(c\), benefit, \(b\), and synergistic effect, \(d\), of phenotype 1, as \[c =\frac{1}{2}(\pi_{01}+\pi_{00})-\frac{1}{2}(\pi_{11}+\pi_{10})\] (G.56a) \[b =\frac{1}{2}(\pi_{11}+\pi_{01})-\frac{1}{2}(\pi_{10}+\pi_{00})\] (G.56b) \[d =\frac{1}{2}(\pi_{11}+\pi_{00})-\frac{1}{2}(\pi_{10}+\pi_{01}),\] (G.56c) we can rewrite Condition (G.55) as \[-c+br+\frac{2d}{3}\left(h-\frac{1}{2}\right)(r-p_{1}p_{2})>0,\] (G.57) which is Condition (4) of the main text. ### Relationship to prior results The questions of collective action among siblings and games between relatives, have been explored in a number of previous works. Here we discuss the relationship between our findings and key results from the literature. #### g.7.1 Jones (2000) Jones [33] considered the problem of whether two full siblings (referred to as Ivan and Alyosha) will to help a third (Dmitri). If both Ivan and Alyosha have the Cooperator phenotype, they each pay cost \(c\) to generate benefit \(2b\) to Alyosha. If one or both of them have the Defector phenotype, no costs are paid and no benefits received. Thus, in contrast to our model in Section G.5.1, the costs paid in Jones's model are conditional on Ivan and Alyosha both being Cooperators. As in our model, Jones [33] considers a large, randomly mating population. The Cooperator phenotype is conferred by an allele with dominance \(0\leq h\leq 1\) at a single genetic locus. 
Jones finds that the Cooperator allele increases from a given frequency \(x\) if and only if \(bR>c\), where the relatedness quantity \(R\) is given by \[R=\frac{2h^{2}+x(1+4h)+x^{2}(3-12h^{2})+x^{3}(2-8h+8h^{2})}{4h^{2}+x(2+6h-4h^{ 2})+x^{2}(4-2h-12h^{2})+x^{3}(2-8h+8h^{2})}.\] (G.58) Amending our framework to incorporate conditional costs as in Jones's model, we find that weak selection favors two full siblings to help a third if \(br_{J,i}^{\circ}>cr_{J}^{\circ}\), where \(J\) is a set of two siblings and \(i\notin J\) is a third sibling. Using \(r_{J,i}^{\circ}=1/4\) and \(r_{J}^{\circ}=(17+2h)/48\) from Eq. (G.30), we can express this condition in the form \(bR>c\), where \[R=\frac{r_{J}^{\circ}}{r_{J,i}^{\circ}}=\frac{12}{17+2h}.\] (G.59) The results in Eqs. (G.58) and (G.59) are not directly comparable, since Jones's result pertains to a particular Cooperator allele frequency \(x\), whereas ours reflects the entire process of selection. They do coincide, however, in one case of interest: if there is no genetic dominance (\(h=1/2\)) and both alleles are equally abundant (\(x=1/2\)), then Eqs. (G.58) and (G.59) both give \(R=2/3\), indicating that helping is favored if \(b>\frac{3}{2}c\). #### g.7.2 Garay et al. (2023) Garay et al. [19] also investigate collective action among some siblings to help another. In the case that the benefits of help scale linearly with the number of helpers, they obtain the condition \(b>2c\) for a collective of full siblings to help another, in agreement with our Eq. (G.40). However, for a nonlinear benefit function, they obtain a more complicated condition. This contrasts with our finding in Eq. (G.40) that the \(b>2c\) condition is valid for all benefit functions, linear or nonlinear. This apparent difference in results stems from the criteria used to characterize selection. Garay et al. [19] ask whether collective help is evolutionarily stable, i.e. robust to invasion by a non-helper mutation. In their model, the helping behavior is recessive, meaning that a single copy of a non-helper mutant allele eliminates the helping behavior. In contrast, we have characterized selection here in terms of pairwise fixation probabilities, or equivalently, the low-mutation limit of the stationary distrubution (see Theorem C.2). This condition for success is distinct from--albeit related to--the evolutionary stability condition used by Garay et al. [19]. The relationship between these selection criteria is discussed, for example, in Refs. [89, 90]. This difference in selection criteria explains the different conditions obtained. #### g.7.3 Games between relatives A number of works [91, 92, 93, 94] have analyzed the evolutionary dynamics of two-player games played by relatives. A typical setup involves a large haploid population with two heritable types, corresponding to the strategies in a \(2\times 2\) matrix game. Each individual plays with a large number of partners, a fraction \(r\) of which are clonal relatives of the individual (guaranteed to have the same type). The remaining fraction, \(1-r\), are drawn from the population at large. Using the game matrix (G.50) with parameters \(b\), \(c\), and \(d\) defined as in Eq. (G.56), Strategy 1 will increase from a particular frequency, \(x\), if and only if \[-c+br+2d(1-r)\left(x-\frac{1}{2}\right)>0.\] (G.60) This is equivalent to Eq. (5a) of Queller [92] and Eq. (3.3) of Ohtsuki [93], although the parameters \(b\), \(c\), and \(d\) are defined differently in these works. Eq. (G.60) expresses a similar idea to Eq. 
(G.57)--which is Condition (4) of the main text--in that Hamilton's rule is extended to incorporate synergy between two relatives. The differences are that (i) Eq. (G.60) applies to a haploid population, while Eq. (G.57) applies to diploid relatives with arbitrary genetic dominance, and (ii) Eq. (G.60) pertains to a particular allele frequency, \(x\), while Eq. (G.57) pertains to the overall process of selection. ## Appendix H Collective action on networks and hypergraphs We now consider a haploid, asexual population structured as a weighted graph or hypergraph. This extends previous analyses of evolutionary games on graphs [95, 26, 27, 30, 96, 28, 29] to arbitrary nonlinear interactions. ### Model To construct this model, we endow the set of sites \(G\) with the structure of a weighted (undirected) graph. (We will turn to hypergraphs later.) The edge weight between sites \(g,h\in G\) is denoted \(w_{gh}=w_{hg}\). We define the weighted degree of each vertex \(g\in G\) as \(d_{g}=\sum_{h\in G}w_{gh}\). Let \(p_{gh}^{(n)}\) denote the \(n\)-step random walk probability from \(g\) to \(h\), using step probabilities \(p_{gh}=w_{gh}/d_{g}\). In each state \(\mathbf{x}\), each site \(g\in G\) has a payoff \(f_{g}(\mathbf{x})\), which may depend arbitrarily on \(\mathbf{x}\). State transitions follow the death-Birth rule [26, 29]: At each time-step, one site \(h\in G\) is chosen uniformly at random to be replaced. A neighboring site \(g\in G\) is then chosen, with probability proportional to \(p_{hg}\left(1+\delta f_{g}(\mathbf{x})\right)\) to produce an offspring, which fills the vacancy in site \(h\). With probability \(u\), the offspring acquires a mutation; otherwise it inherits the allele of the parent. For hypergraphs, we amend the death-Birth rule as follows: after a site \(h\) is chosen to be replaced, the reproducing neighbor \(g\) is chosen with probability proportional to \((1+\delta f_{g}(\mathbf{x}))\) times the number of shared hyperedges with \(h\). This process is equivalent to death-Birth on the projection graph of the hypergraph \(G\)[97]--a graph with the same vertex set as \(G\), and edge weight between vertices \(g\) and \(h\) given by the number of hyperedeges they share in \(G\). For death-Birth updating, the reproductive value \(v_{g}\) of each site \(g\in G\) is proportional to its weighted degree; specifically, \(v_{g}=nd_{g}/\sum_{h\in G}d_{h}\)[68, 29]. These reproductive values possess the reversibility property \[v_{g}p_{gh}^{(n)}=v_{h}p_{hg}^{(n)},\] (H.1) for all \(g,h\in G\) and \(n\geq 0\)[29]. ### Conditions for selection We compute the fitness increment of site \(g\in G\) in state \(\mathbf{x}\) according to Eq. 
(C.6), making use of the reversibility property (H.1): \[w_{g}(\mathbf{x}) =\left(\frac{N-1}{N}\right)v_{g}+\frac{1}{N}\sum_{h\in G}\left( \frac{p_{hg}(1+\delta f_{g}(\mathbf{x}))}{\sum_{k\in G}p_{hg}(1+\delta f_{g}( \mathbf{x}))}\right)v_{h}-v_{g}\] \[=\frac{1}{N}\left(\sum_{h\in G}\left(\frac{v_{h}p_{hg}\left(1+ \delta f_{g}(\mathbf{x})\right)}{1+\delta\sum_{k\in G}p_{hk}\,f_{k}(\mathbf{x })}\right)-v_{g}\right)\] \[=\frac{v_{g}}{N}\left(\sum_{h\in G}\frac{p_{gh}\left(1+\delta f_ {g}(\mathbf{x})\right)}{1+\delta\sum_{k\in G}p_{hk}\,f_{k}(\mathbf{x})}-1 \right).\] For weak selection, taking the \(\delta\)-derivative at \(\delta=0\) gives \[w_{g}^{\prime}(\mathbf{x}) =\frac{v_{g}}{N}\sum_{h\in G}p_{gh}\left(1+\delta\left(f_{g}( \mathbf{x})-\sum_{k\in G}p_{hk}\,f_{k}(\mathbf{x})\right)\right)\] \[=\frac{v_{g}}{N}\left(f_{g}(\mathbf{x})-\sum_{h\in G}p_{gh}^{(2)} f_{h}(\mathbf{x})\right).\] An equivalent result was obtained in Eq. (78) in the SI of Allen et al. [29]. Applying Eq. (C.10), the weak selection increment in state \(\mathbf{x}\) is \[\Delta^{\prime}(\mathbf{x})=\frac{1}{N^{2}}\sum_{g\in G}v_{g}x_{g}\left(f_{g}( \mathbf{x})-\sum_{h\in G}p_{gh}^{(2)}f_{h}(\mathbf{x})\right).\] (H.2) To quantify synergistic effects, we represent the payoffs to each site as a polynomial: \[f_{g}(\mathbf{x})=\sum_{S\subseteq G}C_{S,g}\,\iota_{S}^{A}(\mathbf{x}).\] (H.3) Then substituting in Eq. (H.2) and making use of Eq. (H.1), we obtain \[\Delta^{\prime}(\mathbf{x}) =\frac{1}{N^{2}}\sum_{g\in G}\sum_{S\subseteq G}v_{g}x_{g}\left(C _{S,g}-\sum_{h\in G}p_{gh}^{(2)}C_{S,h}\right)\iota_{S}^{A}(\mathbf{x})\] \[=\frac{1}{N^{2}}\sum_{g\in G}\sum_{S\subseteq G}v_{g}\left(C_{S,g }-\sum_{h\in G}p_{gh}^{(2)}C_{S,h}\right)\iota_{S\cup\{g\}}^{A}(\mathbf{x})\] \[=\frac{1}{N^{2}}\sum_{g\in G}\sum_{S\subseteq G}v_{g}C_{S,g}\left( \iota_{S\cup\{g\}}^{A}(\mathbf{x})-\sum_{h\in G}p_{gh}^{(2)}\,\iota_{S\cup\{ h\}}^{A}(\mathbf{x})\right).\] Applying Theorem C.3 and Eq. (D.15a), weak selection favors allele \(A\) if and only if \[\sum_{g\in G}\sum_{S\subseteq G}v_{g}C_{S,g}\left(r_{S,g}^{\circ}-\sum_{h\in G }p_{gh}^{(2)}\,r_{S,h}^{\circ}\right)>0.\] (H.4) For purposes of computation, it is often more convenient to write this condition in terms of coalescence lengths: \[\sum_{g\in G}\sum_{S\subseteq G}v_{g}C_{S,g}\left(\sum_{h\in G}p_{gh}^{(2)} \,\ell_{S\cup\{h\}}^{\circ}-\ell_{S\cup\{g\}}^{\circ}\right)>0.\] (H.5) The neutral coalescence lengths can be obtained from Eq. (D.13) as follows. Since each site is replaced with probability \(1/N\), and mutation occurs with probability \(u\) in each new offspring, the mutation rates are given by \(\nu_{S}=|S|/N\). Eq. (D.13) becomes \[\ell_{S}^{\circ}=\begin{cases}\frac{|S|}{N}+\frac{N-|S|}{N}\ell_{S}^{\circ}+ \frac{1}{N}\sum_{g\in S}\sum_{h\in G}p_{gh}\,\ell_{(S-\{g\})\cup\{h\}}^{\circ} &|S|\geq 2\\ 0&|S|=1,\end{cases}\] (H.6) which simplifies to \[\ell_{S}^{\circ}=\begin{cases}1+\frac{1}{|S|}\sum_{g\in S}\sum_{h\in G}p_{gh}\, \ell_{(S-\{g\})\cup\{h\}}^{\circ}&|S|\geq 2\\ 0&|S|=1.\end{cases}\] (H.7) ### Collective help or harm We now consider the collective help or harm scenario described in the main text, with allele \(A\) encoding for cooperative behavior. The collective in question is a particular nonempty set \(S\subseteq G\) of size \(m=|S|\). Within \(S\), each site containing an \(A\) allele pays cost \(c/m\). 
If all sites in \(S\) contain allele \(A\), a particular target site gains "benefit" \(b\), which may be positive (for collective help) or negative (for collective harm). Supposing that \(g\notin S\), the payoff functions are given by \[f_{h}(\mathbf{x})=\begin{cases}-\frac{c}{m}x_{h}&h\in S\\ b\,\iota_{S}^{A}(\mathbf{x})&h=g\\ 0&h\notin S\cup\{g\}.\end{cases}\] (H.8) When represented as in Eq. (H.3), this payoff function has the following coefficients: \(C_{\{h\},h}=-c/m\) for all \(h\in S\), \(C_{S,g}=b\), and \(C_{T,h}=0\) for all other combinations of \(T\subseteq G\) and \(h\in G\). Substituting, Condition (H.5) becomes \[b\,v_{g}\left(r_{S,g}^{\circ}-\sum_{k\in G}p_{gk}^{(2)}r_{S,k}^{\circ}\right) >\frac{c}{m}\sum_{h\in S}v_{h}\left(r_{\{h\},h}^{\circ}-\sum_{k\in G}p_{hk}^{( 2)}r_{\{h\},k}^{\circ}\right).\] (H.9) This is equivalent to Eq. (5) of the main text. A similar argument shows that Condition (H.9) also applies in the case \(g\in S\). From Condition (H.9) we obtain the critical benefit-cost ratio for collective action from \(S\) to \(g\): \[\left(\frac{b}{c}\right)^{*}_{S,g} =\frac{\frac{1}{m}\sum_{h\in S}v_{h}\left(r_{\{h\},h}^{\circ}- \sum_{k\in G}p_{hk}^{(2)}r_{\{h\},k}^{\circ}\right)}{v_{g}\left(r_{S,g}^{ \circ}-\sum_{k\in G}p_{gk}^{(2)}r_{S,k}^{\circ}\right)}\] (H.10a) \[=\frac{\frac{1}{m}\sum_{h\in S}v_{h}\sum_{k\in G}p_{hk}^{(2)} \ell_{\{h,k\}}^{\circ}}{v_{g}\left(\sum_{k\in G}p_{gk}^{(2)}\ell_{S\cup\{k\}} ^{\circ}-\ell_{S\cup\{g\}}^{\circ}\right)}.\] (H.10b) Eq. (H.10b) is particularly useful for computation as well as theoretical results. Indeed, a number of useful observations follow directly from this equation: * The numerator in Eq. (H.10b)--and hence in Eq. (H.10a) as well--is always nonnegative (and is zero only in a special case; see next bullet). Outside of this special case, the sign of \(\left(\frac{b}{c}\right)^{*}_{S,g}\) is the same as that of \(r^{\circ}_{S,g}-\sum_{k\in G}p^{(2)}_{gk}r^{\circ}_{S,k}\) (or equivalently, of \(\sum_{k\in G}p^{(2)}_{gk}\ell^{\circ}_{S\cup\{k\}}-\ell^{\circ}_{S\cup\{g\}}\)). If \(r^{\circ}_{S,g}>\sum_{k\in G}p^{(2)}_{gk}r^{\circ}_{S,k}\) then collective help can be favored for sufficiently large \(b>0\) (relative to fixed \(c>0\)). If \(r^{\circ}_{S,g}<\sum_{k\in G}p^{(2)}_{gk}r^{\circ}_{S,k}\) then collective harm can be favored for sufficiently negative \(b\). In short, for deltive help to be favored, it is necessary that \(S\) be more related to \(g\) than to \(g\)'s two-step neighbors. * The numerator in Eq. (H.10b)--and hence also in Eq. (H.10a) as well--is zero if and only if every \(h\in S\) is the only two-step neighbor of itself. Since \(G\) is connected, this can only occur if \(G\) is a (possibly weighted) star graph, and \(S\) is a singleton set containing only the hub. In this case, the denominator also evaluates to zero for any target vertex \(g\). The hub of a star graph has the unique property that its reproductive success does not depend on the payoff of any vertex: it reproduces if and only if a leaf vertex dies. Thus, no payoff-affecting action is either favored or disfavored for the hub of a star graph. * If \(g\in S\), then \(\ell^{\circ}_{S\cup\{g\}}=\ell^{\circ}_{S}\) and \(\sum_{k\in G}p^{(2)}_{gk}\ell^{\circ}_{S\cup\{k\}}\geq\ell^{\circ}_{S}\), which together imply that \(r^{\circ}_{S,g}\geq\sum_{k\in G}p^{(2)}_{gk}r^{\circ}_{S,k}\). It follows that a collective cannot be favored to harm any of its own members. * Suppose that site \(g\) and all of its two-step neighbors are in \(S\). 
Then \(\sum_{k\in G}p^{(2)}_{gk}\ell^{\circ}_{S\cup\{k\}}=\ell^{\circ}_{S}=\ell^{ \circ}_{S\cup\{g\}}\), and it follows that \(r^{\circ}_{S,g}=\sum_{k\in G}p^{(2)}_{gk}r^{\circ}_{S,k}\). From this we conclude that if \(g\) and all of its two-step neighbors are in \(S\), then \(S\) is never favored to collectively help or harm \(g\). Any help or harm to \(g\) has the opposite effect on \(g\)'s two-step neighbors, which cancel out to yield no net effect on the spread of allele \(A\). ### Threshold public goods We also consider a public goods variant of the collective help scenario. Instead of the benefit going to a particular site \(g\), it is spread equally among members of \(S\). Equivalently, each Cooperator in \(S\) pays cost \(c\), and, if all pay the cost, each member of \(S\) receives benefit \(b\). The payoff function is \[f_{g}(\mathbf{x})=\begin{cases}-cx_{g}+b\,\iota^{A}_{S}(\mathbf{x})&g\in S\\ \\ 0&g\notin S.\end{cases}\] (H.11) Condition (H.9) becomes \[b\sum_{g\in S}v_{g}\left(r^{\circ}_{S,g}-\sum_{k\in G}p^{(2)}_{gk}r^{\circ}_{ S,k}\right)>c\sum_{h\in S}v_{h}\left(r^{\circ}_{\{h\},h}-\sum_{k\in G}p^{(2)}_{ hk}r^{\circ}_{\{h\},k}\right).\] (H.12) The critical benefit-cost ratio for the threshold public goods game for set \(S\) is then \[\left(\frac{b}{c}\right)^{*}_{S} =\frac{\sum_{h\in S}v_{h}\left(r^{\circ}_{\{h\},h}-\sum_{k\in G}p^{( 2)}_{hk}r^{\circ}_{\{h\},k}\right)}{\sum_{g\in S}v_{g}\left(r^{\circ}_{S,g}-\sum _{k\in G}p^{(2)}_{gk}r^{\circ}_{S,k}\right)}\] (H.13a) \[=\frac{\sum_{h\in S}v_{h}\sum_{k\in G}p^{(2)}_{hk}\ell^{\circ}_{\{h,k\}}}{ \sum_{g\in S}v_{g}\left(\sum_{k\in G}p^{(2)}_{gk}\ell^{\circ}_{S\cup\{k\}}-\ell ^{\circ}_{S}\right)}.\] (H.13b) ### Cycles We now compute conditions for collective help or harm on particular graph families, beginning with the cycle. The cycle (main text, Fig. 2b) consists of \(N\) vertices, each joined to exactly two others. #### h.5.1 States The cycle is convenient to analyze in that, instead of considering the full state \(\mathbf{x}\in\{0,1\}^{G}\), one need only keep track of the number of \(A\) alleles. This is because, in the death-Birth process without mutation (with initial state sampled from the mutant appearance distribution \(\mu\)) the only possible states are those for which the \(A\) and \(a\) alleles each form contiguous blocks of adjacent vertices. We therefore index the possible states as \(k=0,\ldots,N\), where \(k\) indicates the number of \(A\) alleles. The initial state (sampled from \(\mu\)) is either \(k=1\) or \(k=N-1\), with probability \(1/2\) each. Under neutral drift without mutation (\(u=\delta=0\)), a state with \(k\) contiguous \(A\) alleles will transition to \(k+1\) or \(k-1\) contiguous \(A\) alleles with probability \(1/N\) each; otherwise (with probability \((N-2)/N\)), it will remain with \(k\) contiguous \(A\) alleles #### h.5.2 Sojourn times We compute the neutral bracket operation \(\langle\ \rangle^{\circ}\) for death-Birth on the cycle via sojourn times, using Lemma B.3. For \(1\leq k\leq N-1\), let \(\sigma_{k}\) denote the expected duration of time (i.e., sojourn time) spent in states with exactly \(k\)\(A\) alleles, under neutral drift with initial state sampled from \(\mu\): \[\sigma_{k}=\mathbb{P}^{\circ}_{\mathcal{M}_{0}}\left[\sum_{g\in G}X^{t}_{g}=k \;\middle|\;\mathbf{X}^{0}\sim\mu\right].\] (H.14) Allen & McAvoy [57, Appendix A.3] provide recurrence equations that uniquely determine sojourn times from a given initial distribution over states. 
For dB on the cycle with \(u=\delta=0\), using the transition probabilities from the previous subsection, these recurrence equations become: \[\sigma_{k}=\begin{cases}\frac{1}{2}+\frac{(N-2)\sigma_{1}+\sigma_{2}}{N}&k=1\\ \frac{\sigma_{k-1}+(N-2)\sigma_{k}+\sigma_{k+1}}{N}&2\leq k\leq N-2\\ \frac{1}{2}+\frac{\sigma_{N-2}+(N-2)\sigma_{N-1}}{N}&k=N-1.\end{cases}\] (H.15) The unique solution is \(\sigma_{k}=N/2\) for all \(k=1,\ldots,N-1\). This means that the death-Birth process on the cycle, with \(u=\delta=0\) and initial state sampled from \(\mu\), spends an expected \(N/2\) time-steps having each number \(k\) of \(A\) alleles, for \(1\leq k\leq N-1\). These sojourn times can be used to evaluate the neutral bracket operation \(\langle\,\rangle^{\circ}\) on any function that depends only on number of \(A\) alleles. Indeed, for any such function \(f:\{1,\ldots,N-1\}\to\mathbb{R}\), Lemma B.3 gives \[\left\langle f\left(\sum_{g\in G}X_{g}^{t}\right)\right\rangle^{\circ}=\nu_{G} \sum_{k=1}^{N}\sigma(k)f(k)=\frac{N}{2}\sum_{k=1}^{N}f(k),\] (H.16) since \(\nu_{G}=1\) for the death-Birth process. #### d.5.3 Coalescence lengths We can now compute the necessary coalescence lengths on the cycle, using Eqs. (D.11) and (H.16) rather than the recurrence relations (H.7). Let \(\iota_{m}(k)\) denote the average value of \(\iota_{S}(\mathbf{x})\) as \(S\) runs over all contiguous blocks of length \(m\), where \(\mathbf{x}\) is any state with exactly \(k\)\(A\) alleles in a single contiguous block. We similarly define \(\iota_{m}^{A}(k)\) and \(\iota_{m}^{a}(k)\) as respective averages of \(\iota_{S}^{A}(\mathbf{x})\) and \(\iota_{S}^{a}(\mathbf{x})\) over contiguous blocks \(S\) of length \(m\), where state \(\mathbf{x}\) contains exactly \(k\) contiguous \(A\) alleles. To compute \(\iota_{m}^{A}(k)\), we note that there are \(k-m+1\) ways for a block of \(m\leq k\) contiguous sites to be contained within a block of \(k\) contiguous \(A\)-containing sites. This gives \[\iota_{m}^{A}(k)=\begin{cases}(k-m+1)/N&m\leq k\\ 0&m>k.\end{cases}\] (H.17) We now compute \(\ell_{m}^{\circ}\), the coalescence length for any contiguous contiguous block of \(m\) sites. We begin by invoking Eq. (D.11): \[\ell_{m}^{\circ} =\left\langle 1-\iota_{S}\right\rangle^{\circ}\] for any set \[S\] of \[m\] contiguous sites \[=\left\langle\left(\iota_{\{g\}}^{A}+\iota_{\{g\}}^{a}\right)- \left(\iota_{S}^{A}+\iota_{S}^{a}\right)\right\rangle^{\circ}\] for any \[g\in G\] \[=2\left\langle\iota_{\{g\}}^{A}-\iota_{S}^{A}\right\rangle^{\circ}\] by symmetry under neutral drift \[=N\sum_{k=1}^{N-1}\left(\iota_{1}^{A}(k)-\iota_{m}^{A}(k)\right)\] by symmetry of cycle and Eq. (H.16) \[=N\left(\sum_{k=1}^{N-1}\frac{k}{N}-\sum_{k=m}^{N-1}\frac{k-m+1}{N}\right)\] by Eq. (H.17) \[=(m-1)\left(N-\frac{m}{2}\right).\] Therefore, the coalescence length for any contiguous block of \(m\) sites on the cycle is \[\ell_{m}^{\circ}=(m-1)(N-m/2).\] (H.18) We also require the coalescence length for sets of consisting of a block of \(m\) contiguous sites together with a site of distance \(j\geq 2\) from the block. There are two gaps in such a set, one of size \(j-1\) and the other of size \(N-m-j\). Let \(\iota_{m,j}^{a}(k)\) denote the average of \(\iota_{S}^{a}(\mathbf{x})\) as \(S\) runs over all sets of this form, where \(\mathbf{x}\) is any state with exactly \(k\) contiguous \(A\) alleles. There are two ways to have \(\iota_{S}^{a}(\mathbf{x})=1\) for such a set \(S\) and state \(\mathbf{x}\). 
First, all of the \(A\) alleles in \(\mathbf{x}\) may be contained in the gap of size \(j-1\) in \(S\); this can happen \(j-k\) ways. Second, all of the \(A\) alleles in \(\mathbf{x}\) may be contained in the gap of size \(N-m-j\) in \(S\); this can happen \(N-m-j-k+1\) ways. Proceeding similarly to the previous calculation, we compute \[\ell_{S}^{\circ} =\left\langle 1-\iota_{S}\right\rangle^{\circ}\] \[=N\sum_{k=1}^{N-1}\left(\iota_{1}^{a}(k)-\iota_{m,j}^{a}(k)\right)\] \[=N\left(\sum_{k=1}^{N-1}\frac{N-k}{N}-\left(\sum_{k=1}^{j-1} \frac{j-k}{N}+\sum_{k=1}^{N-m-j}\frac{N-m-j-k+1}{N}\right)\right)\] \[=\frac{N(N-1)}{2}-\left(\frac{j(j-1)}{2}+\frac{(N-m-j)(N-m-j+1)} {2}\right)\] \[=m\left(N-\frac{m+1}{2}\right)+(N-m-j)(j-1).\] So the coalescence length for a contiguous block of \(m\) sites together with another site distance \(j\) away is \[\ell^{\circ}_{m,j}=m\left(N-\frac{m+1}{2}\right)+(N-m-j)(j-1).\] (H.19) Note that \(\ell^{\circ}_{m,j}=\ell^{\circ}_{m+1}+(N-m-j)(j-1)\), where \((N-m-j)(j-1)\) is the product of the two gap sizes in a set of this form. Although Eq. (H.19) was derived for \(j\geq 2\), it also holds in the cases \(j=0\) and \(j=1\), giving \[\ell^{\circ}_{m,0} =(m-1)\left(N-\frac{m}{2}\right)=\ell^{\circ}_{m}\] \[\ell^{\circ}_{m,1} =m\left(N-\frac{m+1}{2}\right)=\ell^{\circ}_{m+1}.\] Overall, Eq. (H.19) is valid for \(0\leq j\leq N-m+1\). #### h.5.4 Conditions for collective help or harm We now compute the conditions for a collective \(S\), of \(m\) contiguous sites, to help or harm a target site \(g\). The critical benefit-cost ratio \((b/c)^{*}_{S,g}\) can be computed using Eq. (H.10b) with the coalescence lengths obtained in Eqs. (H.18)-(H.19). We also use the fact that a two-step random walk on the cycle will end at its starting point with probability \(\frac{1}{2}\), and will end two spaces to the left or right with probability \(\frac{1}{4}\) each. Consequently, the numerator of \((b/c)^{*}_{S,g}\) in Eq. (H.10b) evaluates to \[\frac{1}{m}\sum_{h\in S}v_{h}\sum_{k\in G}p^{(2)}_{hk}\ell^{\circ}_{\{h,k\}}= \frac{1}{2}\left(\ell^{\circ}_{1,0}+\ell^{\circ}_{1,2}\right)=N-2.\] (H.20) We now break into cases according to the target site \(g\): * **To neighbor:** Suppose the target \(g\) is an immediate neighbor of the collective \(S\). For \(2\leq m\leq N-2\), evaluating Eq. (H.10b) yields \[\left(\frac{b}{c}\right)^{*}_{S,g}=\frac{N-2}{\frac{1}{4}\ell^{\circ}_{m,3}+ \frac{1}{2}\ell^{\circ}_{m+1}+\frac{1}{4}\ell^{\circ}_{m}-\ell^{\circ}_{m+1} }=\frac{4(N-2)}{N-m-6}.\] (H.21) For \(m=1\), \[\left(\frac{b}{c}\right)^{*}_{S,g}=\frac{N-2}{\frac{1}{4}\ell^{\circ}_{2}+ \frac{1}{2}\ell^{\circ}_{2}+\frac{1}{4}\ell^{\circ}_{1,3}-\ell^{\circ}_{2}}= \frac{2(N-2)}{N-4}.\] (H.22) Eq. (H.22) recovers a known result for Prisoner's Dilemma games on a cycle, first obtained as Eq. (4.4) of Ohtsuki and Nowak [98]. * **To non-neighbor vertex outside collective:** Now suppose \(g\) is distance \(j\) from \(S\), with \(2\leq j\leq N-m-2\). Then Eq. (H.10b) becomes \[\left(\frac{b}{c}\right)^{*}_{S,g}=\frac{N-2}{\frac{1}{4}\ell^{\circ}_{m,j-2}+ \frac{1}{2}\ell^{\circ}_{m,j}+\frac{1}{4}\ell^{\circ}_{m,j+2}-\ell^{\circ}_{m,j}}=-\frac{N-2}{2}.\] (H.23) In this case, collective help is never favored, and harm is favored if \(2b<-(N-2)c\). * **To boundary vertex:** Next, suppose that the target \(g\) is a boundary vertex of \(S\). 
For \(3\leq m\leq N-1\) we have \[\left(\frac{b}{c}\right)^{*}_{S,g}=\frac{N-2}{\frac{1}{4}\ell^{\circ}_{m}+\frac{ 1}{2}\ell^{\circ}_{m}+\frac{1}{4}\ell^{\circ}_{m,2}-\ell^{\circ}_{m}}=\frac{2(N -2)}{N-m-1}.\] (H.24) For \(m=2\), \[\left(\frac{b}{c}\right)^{*}_{S,g}=\frac{N-2}{\frac{1}{4}\ell^{\circ}_{3}+ \frac{1}{2}\ell^{\circ}_{2}+\frac{1}{4}\ell^{\circ}_{2,2}-\ell^{\circ}_{2}}= \frac{4(N-2)}{3N-8}.\] (H.25) For \(m=1\) we have \(S=\{g\}\) and \(\left(\frac{b}{c}\right)^{*}_{\{g\},g}=1\). * **To interior neighbor of boundary vertex:** Now suppose \(g\) is in \(S\), one space away from the boundary (this requires \(m\geq 3\)). For \(4\leq m\leq N-1\), \[\left(\frac{b}{c}\right)^{*}_{S,g}=\frac{N-2}{\frac{1}{4}\ell^{\circ}_{m}+ \frac{1}{2}\ell^{\circ}_{m}+\frac{1}{4}\ell^{\circ}_{m+1}-\ell^{\circ}_{m}}= \frac{4(N-2)}{N-m}.\] (H.26) For \(m=3\), \[\left(\frac{b}{c}\right)^{*}_{S,g}=\frac{N-2}{\frac{1}{4}\ell^{\circ}_{4}+ \frac{1}{2}\ell^{\circ}_{3}+\frac{1}{4}\ell^{\circ}_{4}-\ell^{\circ}_{3}}= \frac{2(N-2)}{N-3}.\] (H.27) * **To other interior vertex:** Finally, suppose \(g\) is within the collective and is more than one space from the boundary (this requires \(m\geq 5\)). In this case, all two-step neighbors of \(g\) are also in \(S\). It follows from the final observation in Section H.3 that \(\left(\frac{b}{c}\right)^{*}_{S,g}=\infty\) and neither collective help nor harm can be favored. Results for the cycle are summarized in Extended Data Figure 1a-d. ### Windmill The Windmill graph (Fig. 2c of the main text) has one hub vertex and \(2n\) "blade" vertices. Each blade vertex is joined to one exactly other blade vertex, as well as to the hub. We denote the hub vertex by \(h\) and the blade vertices by \(b\). The weighted degrees are given by \(d_{b}=2\) and \(d_{h}=2n\), so the corresponding reproductive values are \(v_{b}=1/3\) and \(v_{h}=n/3\). #### h.6.1 Coalescence lengths We solve for coalescence lengths using Eq. (H.7). In indexing the coalescence lengths, we use primes (\({}^{\prime}\)) and double primes (\({}^{\prime\prime}\)) to indicate vertices on different blades. For example, \(\ell_{bbb^{\prime}}\) is the coalescence length of sets comprising two distinct blade vertices on one blade and one on a different blade, whereas \(\ell_{bb^{\prime}b^{\prime\prime}}\) corresponds to sets with three blade vertices on three different blades. To reduce clutter, we omit the superscripts \({}^{\circ}\), and understand all coalescence lengths to be computed at neutrality (\(\delta=0\)). The recurrence relations, Eq. (H.7), for sets of size two are \[\ell_{hb} =1+\frac{1}{2}\left(\frac{n-1}{n}\ell_{bb^{\prime}}+\frac{1}{2n} \ell_{bb}+\frac{1}{2}\ell_{hb}\right)\] \[\ell_{bb} =1+\frac{1}{2}\ell_{hb}\] \[\ell_{bb^{\prime}} =1+\frac{1}{2}\ell_{hb}+\frac{1}{2}\ell_{bb^{\prime}},\] giving the solution \[\ell_{hb}=\frac{16n-6}{2n+3},\qquad\ell_{bb}=\frac{10n}{2n+3},\qquad\ell_{bb^ {\prime}}=\frac{20n}{2n+3}.\] (H.28) For sets of size three, Eq. 
(H.7) gives \[\ell_{hbb} =1+\frac{1}{3}\left(\frac{n-1}{n}\ell_{bbb^{\prime}}+\frac{1}{n }\ell_{bb}+2\ell_{hb}\right)\] \[\ell_{hbb^{\prime}} =1+\frac{1}{3}\left(\frac{n-2}{n}\ell_{bb^{\prime}b^{\prime\prime }}+\frac{1}{n}\ell_{bbb^{\prime}}+\frac{1}{n}\ell_{bbb^{\prime}}+\ell_{hbb^{ \prime}}+\ell_{hb}\right)\] \[\ell_{bbb^{\prime}} =1+\frac{1}{3}\left(\ell_{bb^{\prime}}+\ell_{hbb^{\prime}}+\frac {1}{2}\ell_{bbb^{\prime}}+\frac{1}{2}\ell_{hbb}\right)\] \[\ell_{bb^{\prime}b^{\prime\prime}} =1+\frac{1}{2}\ell_{bb^{\prime}b^{\prime\prime}}+\frac{1}{2}\ell_ {hbb^{\prime}},\] and the solution is \[\ell_{hbb}=\frac{21n-6}{2n+3},\qquad\ell_{hbb^{\prime}} =\frac{26n-6}{2n+3},\qquad\ell_{bbb^{\prime}}=\frac{25n}{2n+3},\] (H.29) \[\ell_{bb^{\prime}b^{\prime\prime}} =\frac{30n}{2n+3}.\] #### h.6.2 Conditions for collective help or harm We now compute the critical benefit-cost thresholds \((b/c)_{S,g}^{*}\), where set \(S\) consists of the two vertices on a single blade. For help or harm to the hub, evaluating Eq. (H.10b) gives: \[\left(\frac{b}{c}\right)_{S,h}^{*}=\frac{v_{b}\left(\frac{1}{4}\ell_{hb}+ \frac{1}{4n}\ell_{bb}+\frac{n-1}{2n}\ell_{bb^{\prime}}\right)}{v_{h}\left( \frac{1}{2}\ell_{hbb}+\frac{1}{2n}\ell_{bb}+\frac{n-1}{2n}\ell_{bbb^{\prime}}- \ell_{hbb}\right)}=\frac{2(14n-9)}{n(4n-9)}.\] (H.30) As \(n\to\infty\), \(\left(\frac{b}{c}\right)_{S,h}^{*}=\frac{7}{n}+\mathcal{O}\left(\frac{1}{n^{2}}\right)\). To a vertex within \(S\): \[\left(\frac{b}{c}\right)_{S,b}^{*}=\frac{v_{b}\left(\frac{1}{4}\ell_{hb}+\frac {1}{4n}\ell_{bb}+\frac{n-1}{2n}\ell_{bb^{\prime}}\right)}{v_{b}\left(\frac{1} {4}\ell_{bb}+\frac{1}{4}\ell_{hbb}+\frac{1}{2n}\ell_{bb}+\frac{n-1}{2n}\ell_{ bbb^{\prime}}-\ell_{bb}\right)}=\frac{4(14n-9)}{41n-36}.\] (H.31) As \(n\to\infty\), \(\left(\frac{b}{c}\right)_{S,b}^{*}\) converges to \(\frac{56}{41}\). To a vertex in a different blade: \[\left(\frac{b}{c}\right)_{S,b^{\prime}}^{*}=\frac{v_{b}\left(\frac{1}{4}\ell_{ hb}+\frac{1}{4n}\ell_{bb}+\frac{n-1}{2n}\ell_{bb^{\prime}}\right)}{v_{b} \left(\frac{1}{4}\ell_{bbb^{\prime}}+\frac{1}{4}\ell_{hbb}+\frac{1}{2n}\ell_{ bb}+\frac{n-1}{2n}\ell_{bbb^{\prime}}-\ell_{bbb^{\prime}}\right)}=-\frac{14n-9}{n+9}.\] (H.32) As \(n\to\infty\), \(\left(\frac{b}{c}\right)_{S,b^{\prime}}^{*}\) converges to \(-14\). These results are summarized in Extended Data Figure 1e. ### Spider The Spider graph (Fig. 1d of the main text) contains one hub vertex (labeled \(h\)), \(n\) inner vertices (labeled \(i\)), and \(n\) outer vertices (labeled \(o\)). The hub vertex connects to all inner vertices, and each inner vertex connects to a single outer vertex. The weighted degrees are \(d_{h}=n\), \(d_{i}=2\), and \(d_{o}=1\), so the corresponding reproductive values are \(v_{h}=n/4\), \(v_{i}=1/2\), and \(v_{0}=1/4\). #### h.7.1 Coalescence lengths We solve for coalescence lengths using Eq. (H.7). We again use primes (\({}^{\prime}\)) and double primes (\({}^{\prime\prime}\)) to indicate vertices on different "legs". For example, \(\ell_{io^{\prime}}\) is the coalescence length from an inner vertex and an outer vertex on different legs, while \(\ell_{io^{\prime}o^{\prime\prime}}\) refers to one inner and two outer vertices on three different legs. We again omit the superscripts \({}^{\circ}\), and understand all coalescence lengths to be computed at neutrality. For sets of size two Eq. 
(H.7) gives the following recurrence relations: \[\ell_{io} =1+\frac{1}{4}\ell_{ho}\] \[\ell_{ho} =1+\frac{1}{2n}\ell_{io}+\frac{n-1}{2n}\ell_{io^{\prime}}+\frac{ 1}{2}\ell_{hi}\] \[\ell_{hi} =1+\frac{1}{4}\ell_{ho}+\frac{n-1}{2n}\ell_{ii^{\prime}}\] \[\ell_{ii^{\prime}} =1+\frac{1}{2}\ell_{io^{\prime}}+\frac{1}{2}\ell_{hi}\] \[\ell_{io^{\prime}} =1+\frac{1}{4}\ell_{oo^{\prime}}+\frac{1}{4}\ell_{ho}+\frac{1}{2 }\ell_{ii^{\prime}}\] \[\ell_{oo^{\prime}} =1+\ell_{io^{\prime}}.\] For sets of size three, Eq. (H.7) gives \[\ell_{hoo} =1+\frac{n-1}{3n}\ell_{ioi^{\prime}}+\frac{1}{3n}\ell_{io}+\frac{1} {3}\ell_{ho}+\frac{1}{3}\ell_{hi}\] \[\ell_{hii^{\prime}} =1+\frac{n-2}{3n}\ell_{ii^{\prime}i^{\prime\prime}}+\frac{2}{3n} \ell_{ii^{\prime}}+\frac{1}{3}\ell_{hoi^{\prime}}+\frac{1}{3}\ell_{hi}\] \[\ell_{hoi^{\prime}} =1+\frac{1}{3n}\ell_{io^{\prime}}+\frac{1}{3n}\ell_{ioi^{\prime} }+\frac{n-2}{3n}\ell_{ii^{\prime}o^{\prime\prime}}+\frac{1}{6}\ell_{ho}+\frac{ 1}{6}\ell_{hoo^{\prime}}+\frac{1}{3}\ell_{hii^{\prime}}\] \[\ell_{hoo^{\prime}} =1+\frac{2}{3n}\ell_{ioo^{\prime}}+\frac{n-2}{3n}\ell_{io^{ \prime}o^{\prime\prime}}+\frac{2}{3}\ell_{hoi^{\prime}}\] \[\ell_{i\alpha i^{\prime}} =1+\frac{1}{6}\ell_{io^{\prime}}+\frac{1}{6}\ell_{hoi^{\prime}}+ \frac{1}{3}\ell_{ii^{\prime}}+\frac{1}{6}\ell_{ho}+\frac{1}{6}\ell_{ioo^{ \prime}}\] \[\ell_{ioo^{\prime}} =1+\frac{1}{6}\ell_{hoo^{\prime}}+\frac{1}{6}\ell_{oo^{\prime}}+ \frac{1}{3}\ell_{io^{\prime}}+\frac{1}{3}\ell_{ioi^{\prime}}\] \[\ell_{ii^{\prime}i^{\prime\prime}} =1+\frac{1}{2}\ell_{hii^{\prime}}+\frac{1}{2}\ell_{ii^{\prime}o^ {\prime\prime}}\] \[\ell_{ii^{\prime}o^{\prime\prime}} =1+\frac{1}{3}\ell_{io^{\prime}o^{\prime\prime}}+\frac{1}{3}\ell_ {hoi^{\prime}}+\frac{1}{3}\ell_{ii^{\prime}i^{\prime\prime}}\] \[\ell_{i\alpha^{\prime}o^{\prime\prime}} =1+\frac{1}{6}\ell_{oo^{\prime}o^{\prime\prime}}+\frac{1}{6}\ell_ {hoo^{\prime}}+\frac{2}{3}\ell_{ii^{\prime}o^{\prime\prime}}\] \[\ell_{oo^{\prime}o^{\prime\prime}} =1+\ell_{i\alpha^{\prime}o^{\prime\prime}}.\] For brevity, we omit the explicit solutions for \(\ell_{S}\); they all have the form \((an^{2}+bn+c)/(12n^{2}+35n+1)\) for some integer coefficients \(a\), \(b\), and \(c\). #### h.7.2 Conditions for collective help or harm We now compute the critical benefit-cost thresholds \((b/c)_{S,g}^{*}\), where \(S\) is a set of two vertices on a single leg. For collective help or harm to \(S\)'s own outer vertex, evaluating Eq. (H.10b) gives: \[\left(\frac{b}{c}\right)_{S,o}^{*}=\frac{\frac{1}{2}\left(v_{o}\left(\frac{1}{ 2}\ell_{ho}\right)+v_{i}\left(\frac{n-1}{2n}\ell_{ii^{\prime}}\right)\right)}{ v_{o}\left(\frac{1}{2}\ell_{io}+\frac{1}{2}\ell_{hoi}-\ell_{io}\right)}= \frac{84n^{2}-67n-1}{42n^{2}-25n-1}.\] (H.33) As \(n\to\infty\), \(\left(\frac{b}{c}\right)_{S,o}^{*}\) converges to \(2\). To \(S\)'s own inner vertex: \[\left(\frac{b}{c}\right)_{S,i}^{*}=\frac{\frac{1}{2}\left(v_{o}\left(\frac{1} {2}\ell_{ho}\right)+v_{i}\left(\frac{n-1}{2n}\ell_{ii^{\prime}}\right)\right)}{ v_{i}\left(\frac{n+1}{2n}\ell_{io}+\frac{n-1}{2n}\ell_{ioi^{\prime}}-\ell_{io} \right)}=\frac{84n^{2}-67n-1}{4(n-1)(25n+1)}.\] (H.34) As \(n\to\infty\), \(\left(\frac{b}{c}\right)_{S,i}^{*}\) converges to \(21/25\). 
To the hub: \[\left(\frac{b}{c}\right)_{S,h}^{*}=\frac{\frac{1}{2}\left(v_{o}\left(\frac{1} {2}\ell_{ho}\right)+v_{i}\left(\frac{n-1}{2n}\ell_{ii^{\prime}}\right)\right)} {v_{h}\left(\frac{1}{2n}\ell_{io}+\frac{n-1}{2n}\ell_{ioo^{\prime}}+\frac{1}{ 2}\ell_{ho}-\ell_{hoi}\right)}=\frac{84n^{2}-67n-1}{n(12n^{2}-19n-9)}.\] (H.35) As \(n\to\infty\), \(\left(\frac{b}{c}\right)_{S,h}^{*}=\frac{7}{n}+\mathcal{O}\left(\frac{1}{n^{2}}\right).\) To an inner vertex in a different leg: \[\left(\frac{b}{c}\right)^{*}_{S,i^{\prime}}=\frac{\frac{1}{2}\left(v_{o}\left( \frac{1}{2}\ell_{ho}\right)+v_{i}\left(\frac{n-1}{2n}\ell_{ii^{\prime}}\right) \right)}{v_{i}\left(\frac{2n-1}{2n}\ell_{ioi^{\prime}}+\frac{1}{2n}\ell_{io}- \ell_{ioi^{\prime}}\right)}=-\frac{84n^{2}-67n-1}{4(25n+1)}.\] (H.36) As \(n\to\infty\), \(\left(\frac{b}{c}\right)^{*}_{S,i^{\prime}}=-7n+\mathcal{O}\left(1\right).\) To an outer vertex in a different leg: \[\left(\frac{b}{c}\right)^{*}_{S,o^{\prime}}=\frac{\frac{1}{2}\left(v_{o}\left( \frac{1}{2}\ell_{ho}\right)+v_{i}\left(\frac{n-1}{2n}\ell_{ii^{\prime}}\right) \right)}{v_{o}\left(\frac{1}{2}\ell_{ioo^{\prime}}+\frac{1}{2}\ell_{hoi}- \ell_{ioo^{\prime}}\right)}=-\frac{84n^{2}-67n-1}{12n^{2}+35n+1}.\] (H.37) As \(n\to\infty\), \(\left(\frac{b}{c}\right)^{*}_{S,o^{\prime}}\) converges to \(-7\). These results are summarized in Extended Data Figure 1f. ### Numerical computation To numerically compute critical benefit-cost thresholds on an arbitrary weighted graph \(G\), we first compute the relevant coalescence lengths \(\ell_{S}^{\circ}\) using Eq. (D.13), and then apply Eq. (H.10b). We make use of the fact that Eq. (D.13) is recursive in set sizes: for each set \(S\) of size \(k\geq 2\), Eq. (D.13) expresses \(\ell_{S}^{\circ}\) in terms of coalescence lengths of sets of size \(k\) or \(k-1\). Because of this, Eq. (D.13) can be solved first for sets of size two, then size three, and so on up to the largest set size that is needed. To obtain critical benefit-cost thresholds \(\left(\frac{b}{c}\right)^{*}_{S,g}\) for a particular set \(S\) requires computing coalescence lengths for sets up to size \(|S|+1\). Obtaining the coalescence lengths \(\ell_{S}^{\circ}\) for sets of size \(|S|=k\), given those for size \(|S|=k-1\), requires solving a system of \(\binom{N}{k}\) linear equations. For standard algorithms based on Gaussian elimination this takes \(\mathcal{O}\left(\binom{N}{k}^{3}\right)\) time, although more efficient scaling is possible in theory [99, 100]. Computations for empirical networks and hypergraphs were performed using MATLAB (version R2022a). Code is available at [https://github.com/Emmanuel-Math-Bio-Research-Group/Collective-Action](https://github.com/Emmanuel-Math-Bio-Research-Group/Collective-Action). To partition empirical networks into subcommunities, we used the Girvan-Newman algorithm [42], as implemented in UCINet v6.753 [52].
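As an illustration of this numerical procedure, the following minimal sketch (written in Python rather than the released MATLAB code, and using an arbitrary example graph of our own choosing) solves the recurrence (H.7) level by level in the set size and then evaluates Eq. (H.10b) for a chosen collective \(S\) and target site \(g\).

```python
# Minimal sketch (not the released MATLAB code): neutral coalescence lengths via
# Eq. (H.7), solved level by level in the set size, then the critical
# benefit-cost ratio (b/c)*_{S,g} via Eq. (H.10b).
import itertools
import numpy as np

def coalescence_lengths(W, max_size):
    """Solve Eq. (H.7) for all ell_S with |S| <= max_size on a weighted graph W."""
    N = len(W)
    d = W.sum(axis=1)
    P = W / d[:, None]                               # step probabilities p_gh = w_gh / d_g
    ell = {frozenset([g]): 0.0 for g in range(N)}    # |S| = 1 boundary condition
    for k in range(2, max_size + 1):
        subsets = [frozenset(S) for S in itertools.combinations(range(N), k)]
        index = {S: i for i, S in enumerate(subsets)}
        A = np.eye(len(subsets))
        b = np.ones(len(subsets))
        for S, i in index.items():
            for g in S:
                for h in range(N):
                    if P[g, h] == 0:
                        continue
                    T = (S - {g}) | {h}
                    coef = P[g, h] / k
                    if len(T) < k:
                        b[i] += coef * ell[T]        # known from the previous level
                    else:
                        A[i, index[T]] -= coef       # unknown at this level
        sol = np.linalg.solve(A, b)
        ell.update({S: sol[i] for S, i in index.items()})
    return ell

def critical_ratio(W, S, g):
    """Eq. (H.10b): critical b/c for collective help from set S to site g."""
    N = len(W)
    d = W.sum(axis=1)
    v = N * d / d.sum()                              # reproductive values for death-Birth
    P = W / d[:, None]
    P2 = P @ P                                       # two-step random-walk probabilities
    ell = coalescence_lengths(W, len(S) + 1)
    S = frozenset(S)
    num = sum(v[h] * sum(P2[h, k] * ell[frozenset({h, k})] for k in range(N))
              for h in S) / len(S)
    den = v[g] * (sum(P2[g, k] * ell[S | {k}] for k in range(N)) - ell[S | {g}])
    return num / den

# Example: a single cooperator (m = 1) helping a neighbour on a cycle of size N = 6
# should reproduce the known threshold 2(N-2)/(N-4) = 4 from Eq. (H.22).
W = np.zeros((6, 6))
for i in range(6):
    W[i, (i + 1) % 6] = W[(i + 1) % 6, i] = 1.0
print(critical_ratio(W, {0}, 1))   # approximately 4.0
```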
2308.01430
FinVis-GPT: A Multimodal Large Language Model for Financial Chart Analysis
In this paper, we propose FinVis-GPT, a novel multimodal large language model (LLM) specifically designed for financial chart analysis. By leveraging the power of LLMs and incorporating instruction tuning and multimodal capabilities, FinVis-GPT is capable of interpreting financial charts and providing valuable analysis. To train FinVis-GPT, a financial task oriented dataset was generated for pre-training alignment and instruction tuning, comprising various types of financial charts and their corresponding descriptions. We evaluate the model performance via several case studies due to the time limit, and the promising results demonstrated that FinVis-GPT is superior in various financial chart related tasks, including generating descriptions, answering questions and predicting future market trends, surpassing existing state-of-the-art multimodal LLMs. The proposed FinVis-GPT serves as a pioneering effort in utilizing multimodal LLMs in the finance domain and our generated dataset will be release for public use in the near future to speedup related research.
Ziao Wang, Yuhang Li, Junda Wu, Jaehyeon Soon, Xiaofeng Zhang
2023-07-31T07:44:15Z
http://arxiv.org/abs/2308.01430v1
# FinVis-GPT: A Multimodal Large Language Model for Financial Chart Analysis

###### Abstract

In this paper, we propose FinVis-GPT, a novel multimodal large language model (LLM) specifically designed for financial chart analysis. By leveraging the power of LLMs and incorporating instruction tuning and multimodal capabilities, FinVis-GPT is capable of interpreting financial charts and providing valuable analysis. To train FinVis-GPT, a financial task-oriented dataset was generated for pre-training alignment and instruction tuning, comprising various types of financial charts and their corresponding descriptions. Owing to time constraints, we evaluate the model via several case studies, and the promising results demonstrate that FinVis-GPT is superior in various financial chart related tasks, including generating descriptions, answering questions, and predicting future market trends, surpassing existing state-of-the-art multimodal LLMs. The proposed FinVis-GPT serves as a pioneering effort in utilizing multimodal LLMs in the finance domain, and our generated dataset will be released for public use in the near future to speed up related research.

## 1 Introduction

In the era of large language models (LLMs) [6, 7, 9, 10], many real-world applications will be deeply and permanently changed by LLMs and other large models (LMs). For instance, LLMs have already demonstrated superior performance in various NLP tasks such as understanding and generating human-like text. Similarly, large multimodal models (LMMs) have opened up new possibilities for more complex applications such as embodied robots. Consequently, a good deal of research effort and industrial attention has been directed at exploring whether such LMs can be utilized for finance-related tasks. We are therefore motivated to propose FinVis-GPT, a novel multimodal large language model specifically designed for understanding financial charts.

The proposed approach is a two-stage one. In the first stage, we carefully prepare a dataset for this task, which will be released for public use in the near future. In the second stage, we train a large multimodal model using this dataset. Note that it is very demanding to tune a large multimodal model from scratch; we therefore only fine-tune an existing model using the generated dataset. We expect that, by leveraging the power of LLMs, the proposed FinVis-GPT is capable of interpreting financial charts and providing more accurate analysis in a human-like manner. This capability allows FinVis-GPT to answer a wide range of questions, such as predicting future trends based on historical data, identifying key patterns, and providing explanations for observed market phenomena.

As mentioned above, the key contribution of our work is the creation of a financial task-oriented dataset for pre-training and instruction-tuning large models. For the pre-training phase, we have curated a dataset comprising various types of financial charts along with their corresponding descriptions. This dataset enables FinVis-GPT to learn the intricate relationships between visual patterns in financial charts and their textual interpretations. For the instruction tuning phase, we have prepared a dataset that pairs images of financial charts with a set of instructions or questions. This dataset allows FinVis-GPT to learn how to respond to specific queries related to financial chart analysis, thereby enhancing its ability to generate relevant and accurate responses.
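To make the structure of these two datasets concrete, the sketch below shows what a single entry of each kind could look like; the field names, file paths, and example text are illustrative assumptions of ours, not the released schema.

```python
# Hypothetical examples of single dataset entries (illustrative field names only).

# Pre-training alignment: one chart image paired with an instruction and a
# description-style answer.
pretrain_entry = {
    "image": "charts/candle_000001.png",            # hypothetical file path
    "instruction": "Describe the trend shown in this candlestick chart.",
    "answer": "The stock rises steadily in the first stage, then consolidates near its high...",
}

# Instruction tuning: one chart image paired with several question-answer turns,
# including forward-looking questions.
instruction_entry = {
    "image": "charts/candle_000002.png",            # hypothetical file path
    "conversations": [
        {"question": "How has this stock performed over the period shown?",
         "answer": "It traded in a narrow range with gradually declining volume..."},
        {"question": "What trend do you expect in the coming days?",
         "answer": "A moderate upward move appears likely if volume recovers..."},
    ],
}
```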
After training FinVis-GPT on this dataset, we investigate the model's performance via several case studies, owing to time constraints. The results demonstrate that FinVis-GPT can effectively analyze financial charts and generate reliable and accurate interpretations. We believe that our work paves the way for more sophisticated applications of multimodal LLMs in the financial domain, potentially transforming how financial analysis is conducted.

## 2 Related Work

The evolution of LLMs and LMMs has recently become a major research subject. In this section, we briefly review several of the most pertinent works in these areas and discuss their relationship to our proposed model, FinVis-GPT.

**Large Language Models and Instruction Tuning.** The transformation of LLMs into instruction followers has been a prominent research direction. For instance, InstructGPT [8] was introduced as a model designed to follow instructions given in natural language and generate useful responses. This model demonstrated that instruction tuning could significantly enhance the performance of LLMs, surpassing even the capabilities of GPT-3. Building on this concept, Chiang et al. [1] fine-tuned the LLaMA model [10] on user-shared dialogues collected from ShareGPT, resulting in an open-source chatbot with impressive performance.

**Large Multimodal Models.** The extension of LLMs to handle multimodal inputs has been a significant advancement in recent research. The KOSMOS-1 model [4], trained from scratch on web-scale multimodal corpora, showcased impressive performance across language understanding, generation, and perception-language tasks. Similarly, MiniGPT-4 [12] demonstrated the potential of aligning a frozen visual encoder with a frozen LLM, Vicuna, using a single projection layer. Further extending the multimodal capabilities, mPLUG-Owl [11] was proposed to concurrently support multiple modalities and facilitate diverse unimodal and multimodal abilities through modality collaboration. In a similar vein, LLaMA-Adapter V2 [3] was proposed as a parameter-efficient model capable of handling visual instructions. Lastly, InstructBLIP [2] was designed to handle a variety of instructions, showcasing its ability to generate detailed captions, count specific objects, and address general inquiries posed by users. Building upon these advancements, our proposed model, FinVis-GPT, incorporates financial charts as part of the multimodal input. This integration enables a more nuanced understanding of financial data, marking a significant step towards the application of multimodal LLMs in the financial domain. By leveraging the strengths of both instruction tuning and multimodal capabilities, FinVis-GPT aims to provide insightful analysis of financial charts, demonstrating the potential of multimodal LLMs in domain-specific applications.

## 3 Generating Multimodal Financial Dataset

The data collection for FinVis-GPT involved creating datasets for two phases: pre-training alignment and instruction tuning. The goal of these datasets was to equip the model with the ability to understand and interpret multimodal data, particularly financial charts, and to generate valuable responses based on given instructions. An illustrative example of our whole collection pipeline and the collected data is shown in Figure 1.

### Pre-training Alignment Dataset

Pre-training alignment is a crucial step in training multimodal models, as it allows the model to align various types of data into a common embedding space.
For the purpose of this step, we used historical daily stock price data of Chinese A-shares from 2006 to 2023. This data was segmented into smaller sets containing 60-80 trading days, and each set was further divided into prompt data (data given to the model for prediction) and predict data (data to be predicted), with the former comprising 60-80% of each set. Images were generated from the prompt data using the mplfinance library, with a split of 80% candlestick charts and 20% line charts. To simulate real-world scenarios, the generated charts were enhanced with moving averages of 3, 6, and 9 days, volume bars, and various chart styles, all added randomly.

Figure 1: The designed process to generate the multimodal dataset.
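As a concrete illustration of this chart-generation step, the following minimal sketch (not the authors' released pipeline) shows how one such image can be rendered with the mplfinance library; the particular style list and the probabilities used for adding overlays are our own assumptions beyond the 80/20 chart-type split stated above.

```python
# Minimal illustrative sketch of rendering one chart image with mplfinance.
import random
import mplfinance as mpf
import pandas as pd

def render_chart(prompt_df: pd.DataFrame, out_path: str) -> None:
    """prompt_df: OHLCV data with a DatetimeIndex and Open/High/Low/Close/Volume columns."""
    chart_type = "candle" if random.random() < 0.8 else "line"   # 80/20 split of chart types
    kwargs = dict(
        type=chart_type,
        style=random.choice(["yahoo", "charles", "binance", "classic"]),  # assumed style pool
        savefig=out_path,
    )
    if random.random() < 0.5:          # assumed probability for adding moving averages
        kwargs["mav"] = (3, 6, 9)
    if random.random() < 0.5:          # assumed probability for adding volume bars
        kwargs["volume"] = True
    mpf.plot(prompt_df, **kwargs)
```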
The data structure for each entry in this dataset consists of an image, an instruction, and an answer. The instructions, designed to request an interpretation of the charts, were manually crafted. The answer for each instruction was generated by using ChatGPT to interpret the prompt data. The prompt given to ChatGPT is shown in Table 1.

Table 1: Prompt designed for the pre-training alignment stage in data collection.

You will play the role of a financial expert. Upon receiving a k-line chart, you should first identify what type of chart it is and then describe the different stages of stock trends. You are required to conduct professional financial analysis on the input data while ensuring that your analysis is comprehensive and professional from different perspectives. Finally, you need to summarize your findings. To facilitate generating answers, you will not receive an image but rather data related to the k-line chart. In this scenario, since it is assumed that you are analyzing an image as an expert, your answer should pretend that you are analyzing an image and only mention content commonly found in k-line charts. In your answer:

* Do not evaluate what you are doing; simply provide answers.
* Use "this stock" instead of direct stock codes.
* Do not explain the meaning of different data segments or their names.
* Do not draw charts; use text descriptions based on data only.
* Avoid saying more data is needed or suggesting other factors be considered; provide clear analytical conclusions instead.

The output format for analysis results: 'Answer', using markdown format. The first line of received content represents the name of each data segment, with each subsequent line representing one day's k-line data separated by spaces.

### Instruction Tuning Dataset

For instruction tuning, a separate dataset was created, comprising 200K sets, each with about five questions. The primary purpose of this dataset was to fine-tune FinVis-GPT's ability to generate relevant and accurate responses to specific queries related to financial chart analysis. Like the pre-training alignment dataset, the data structure for this dataset also consists of an image, an instruction, and an answer. However, the key difference lies in how the instructions and answers were generated: they were crafted using both prompt data and predict data, which allows the model to generate future trend predictions. The prompt given to ChatGPT is reported in Table 2.

Table 2: Prompt designed for instruction-tuning stage in data collection.

You will play the roles of a financial expert and a questioner respectively. The questioner presents a candlestick chart to the expert, and then asks questions based on different aspects of this stock. A total of five 'Q&A' sessions are required. The questioning should be diverse, with questions as comprehensive and rich as possible, from multiple perspectives. For example, asking the expert for professional financial analysis of the stock, predicting its future trend or explaining data. To facilitate prediction, we will provide two sets of data: know data and future data. In this scenario, it is assumed that both the expert and questioner only know about the known data; while future data is only used to help generate accurate predictions. Therefore, when answering questions related to predictions, experts must pretend that they only know about known data without mentioning future data explicitly but must combine them with generated predictions using future data to answer such questions clearly. In addition, since you are playing an expert who receives images in this scenario; your answers need to use language suitable for analyzing an image-oriented perspective while focusing on content commonly found in candlestick charts. During questioning by the questioner or answering by the expert:

* Do not directly use stock codes; instead use 'this stock' as a substitute.
* Do not explain what each segment represents.
* Do not mention any segment names.
* Experts should refrain from evaluating their own tasks during responses.
* Avoid saying more information is needed or suggesting other factors need consideration.
* Do not allow questioner to view additional information themselves.

When asked for predictions by questioner during questioning sessions - experts must give out forecasts based on generated results using future-data. The output format for analytical results: "Question@Answer@" where both "@" symbols serve as separators between Question and Answer pairs except at any other time within either part (use markdown formatting). The first line of the known data indicates each segment's name followed by k-line trading records for each day, with data segments separated by spaces: \(\{prompt\_data\}\) The first line of the future data indicates each segment's name followed by k-line trading records for each day, with data segments separated by spaces: \(\{predict\_data\}\)
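To make the generation step concrete, here is a minimal sketch of how such a prompt could be sent to the chat model; the model name, helper function, prompt assembly, and output parsing are our own assumptions rather than the authors' released code.

```python
# Hedged sketch of generating instruction-tuning Q&A pairs with the (2023-era)
# OpenAI chat completion API; details here are illustrative assumptions.
import openai

INSTRUCTION_PROMPT_TEMPLATE = "..."  # the Table 2 prompt, with {prompt_data} / {predict_data} slots

def generate_qa(prompt_data: str, predict_data: str) -> list[tuple[str, str]]:
    """Ask the chat model for five Q&A pairs about one chart segment."""
    prompt = INSTRUCTION_PROMPT_TEMPLATE.format(prompt_data=prompt_data,
                                                predict_data=predict_data)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    text = response["choices"][0]["message"]["content"]
    # The prompt requests "Question@Answer@" formatting, so split the reply on '@'
    # and pair up alternating question/answer fragments.
    parts = [p.strip() for p in text.split("@") if p.strip()]
    return list(zip(parts[0::2], parts[1::2]))
```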
In addition, since you are playing an expert who receives images in this scenario; your answers need to use language suitable for analyzing an image-oriented perspective while focusing on content commonly found in candlestick charts. During questioning by the questioner or answering by the expert: * Do not directly use stock codes; instead use 'this stock' as a substitute. * Do not explain what each segment represents. * Do not mention any segment names. * Experts should refrain from evaluating their own tasks during responses. * Avoid saying more information is needed or suggesting other factors need consideration. * Do not allow questioner to view additional information themselves. When asked for predictions by questioner during questioning sessions - experts must give out forecasts based on generated results using future-data. The output format for analytical results: "Question@Answer@" where both "@" symbols serve as separators between Question and Answer pairs except at any other time within either part (use markdown formatting). The first line of the known data indicates each segment's name followed by k-line trading records for each day, with data segments separated by spaces: \(\{prompt\_data\}\) The first line of the future data indicates each segment's name followed by k-line trading records for each day, with data segments separated by spaces: \(\{predict\_data\}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Prompt designed for the instruction-tuning stage in data collection.

The counts of words in questions, answers, and total dialog exchanges (denoted as '#') are examined under various statistical metrics such as the mean, the 5th percentile (q-5%), and the 95th percentile (q-95%). During pre-training, we observe that on average, questions have around 28.68 words, while answers contain approximately 401.15 words. This indicates that responses tend to be much more detailed and comprehensive. The entire dialog, including both questions and answers, contains about 429.83 words on average. The data distributions for the number of words in the questions, answers, and the entire dialog show a wide spread, as evidenced by the 5th and 95th percentile values. In the instruction tuning phase, the number of turns averages 4.79, hinting at the complexity and depth of the conversations in the dataset. Questions contain fewer words compared to the pre-training dataset, with an average of 19.96 words. The answers in this phase are significantly shorter, with approximately 63.03 words on average. This suggests a shift towards more focused and concise communication. The entire dialog contains about 397.36 words on average, with a less pronounced spread than observed in the pre-training dataset.

## 4 Model Training

The FinVis-GPT model was built on top of the pre-existing LLaVA [5] model, incorporating the advanced language capabilities of the latter while extending them for the specific financial context; the model architecture is illustrated in Figure 2. The training process consists of two major steps: pre-training alignment and instruction tuning.

### Pre-training Alignment

Pre-training alignment aimed at teaching the model to understand the relationship between visual patterns in financial charts and their corresponding textual interpretations. The pre-training alignment dataset, consisting of various financial charts and corresponding descriptions, was used for this purpose. For the pre-training, we adopted the same training approach as LLaVA but used our specifically curated dataset of financial charts and descriptions.
The model was trained using a batch size of 128 and a learning rate of 2e-3. The pre-training was carried out on 8 NVIDIA Tesla A100 GPUs for 1 epoch. The effectiveness of pre-training alignment was evaluated by feeding the model new, unseen financial charts and checking its ability to generate accurate and relevant descriptions. The generated descriptions were evaluated by a panel of financial experts for their accuracy and relevance.

### Instruction Tuning

Instruction tuning is a technique that allows the model to learn how to generate appropriate responses to specific instructions or queries. For this, we used the instruction tuning dataset, which was specifically created for the purpose of fine-tuning FinVis-GPT. The tuning phase involved adjusting the model's parameters so that it could accurately respond to instructions about financial charts. This phase was also executed using a batch size of 128 and a learning rate of 1e-5 for 3 epochs.

### Regularization and Model Validation

To prevent overfitting during the training process, we incorporated dropout and weight decay regularization techniques. We also used early stopping based on the validation set performance to determine the optimal number of training epochs. Model validation was performed intermittently throughout the training process. We maintained a holdout validation set that was not used during training. At the end of each epoch, the model was tested on this validation set to gauge its performance and to ensure it was learning the intended tasks effectively. In sum, the training of FinVis-GPT was a meticulous process aimed at harnessing the language prowess of LLaVA and tailoring it to the complex task of financial chart interpretation and analysis.

Figure 2: The model architecture.

\begin{table} \begin{tabular}{l l c c c} \hline \hline & & mean & q-5\% & q-95\% \\ \hline \multirow{3}{*}{pre-train} & \# Question & 28.68 & 21 & 36 \\ & \# Answer & 401.15 & 179 & 882 \\ & \# Dialog & 429.83 & 207 & 910 \\ \hline \multirow{3}{*}{instruction} & \# Turns & 4.79 & 3.00 & 5.00 \\ & \# Question & 19.96 & 11.00 & 14.00 \\ \cline{1-1} & \# Answer & 63.03 & 23 & 41 \\ \cline{1-1} & \# Dialog & 397.36 & 238 & 748 \\ \hline \hline \end{tabular} \end{table} Table 3: Summary statistics of the collected dataset. Here, '#' represents word count. The dataset includes two main categories: pre-training and instruction. The statistics cover the mean, 5th percentile (q-5%) and 95th percentile (q-95%) of word count for questions, answers, and dialogues in each category.

Figure 3: Experiment results on description generation.

Figure 4: Experiment results on question answering.

Figure 5: Experiment results on trend prediction.

## 5 Experiments

### Experimental Setup

We compared FinVis-GPT against several baseline models including LLaVA [5], mPLUG-Owl [11], and MiniGPT-4 [12]. Each of these models represents the latest advancements in multimodal learning with unique advantages. The metrics used for comparison included the quality of financial chart descriptions, understanding of financial context, and prediction accuracy of financial trends. We employed the following three tasks to evaluate each model:

* **Description Generation:** For this task, the models were given an image of a financial chart and were required to generate a description, capturing the key trends, patterns, and anomalies.
* **Question Answering:** This task involved a comprehension test where models were given an image of a financial chart along with a set of questions. The questions were designed to assess the model's understanding of the financial context of the chart.
* **Trend Prediction:** For this task, models were provided an image of a financial chart along with historical financial data and were asked to predict future trends. The predictions were compared with actual future data to evaluate the model's predictive performance.

### Results and Discussion

**Description Generation.** The task of description generation is exemplified in Figure 3, where a randomly selected outcome is presented. Based on these results, it is obvious that LLaVA fails to accurately identify the image as a representation of stock trends. In contrast, MiniGPT-4 demonstrated a superior understanding by correctly recognizing the image as a stock trading chart, though it inaccurately identified the blue line as a stock trend line. Moreover, mPLUG-Owl managed to acknowledge the image as a stock price chart, but it introduced several unrelated elements, causing its description to veer away from an accurate interpretation. Among all models assessed, FinVis-GPT emerged as the most proficient, correctly recognizing the image and providing a concise and accurate description. This underscores its capacity for generating superior descriptions when compared to the other models in this specific context.

**Question Answering.** The question answering task is illustrated in Figure 4. The results reveal that LLaVA substantially misconstrued the stock trend, erroneously identifying the black candle line as the past trend and the white as the future trend. Meanwhile, MiniGPT-4 muddled the representation of black and white lines, further compounding its output with a significant amount of irrelevant content. The mPLUG-Owl model exhibited a complete lack of recognition for the image, fabricating an entirely unrelated narrative. In contrast, the response provided by FinVis-GPT was both concise and accurate, earning it the top spot amongst the compared models for this task. Its output underscores the superior efficacy of FinVis-GPT in understanding and accurately answering questions based on the given visual representation.

**Trend Prediction.** An example of trend prediction is depicted in Figure 5. The left image represents a market trend over a certain period, with the trend within the black box provided as input to the models. The accurate prediction for this trend should indicate an upward trajectory. However, LLaVA's prediction was contrary to this, presenting a downward trend instead. MiniGPT-4 failed to answer the prediction question accurately, and instead produced unrelated information, a phenomenon often referred to as 'hallucination'. Similarly, mPLUG-Owl's output was also characterized by this 'hallucinating' issue. In contrast, FinVis-GPT's prediction was not only accurate but also incorporated a proper description of the trend. This showcases FinVis-GPT's superiority in trend prediction tasks, with an ability to provide both accurate and informative responses.

## 6 Conclusion

In this work, we presented FinVis-GPT, a novel large multimodal model tailored to the financial domain, with a focus on financial chart analysis. Our approach integrated the benefits of pre-trained LLMs with a curated dataset sourced directly from the financial sector.
The FinVis-GPT model showed significant improvement over existing models in terms of generating accurate, relevant, and financially styled responses. Through the creation of a robust instruction tuning dataset and case studies, we have demonstrated the potential of multimodal LLMs in the financial sector. This work lays the foundation for more sophisticated applications of AI in finance, potentially transforming the landscape of financial analysis. Future work will focus on further expanding the applicability of FinVis-GPT in more diverse financial scenarios and real-time financial decision-making.
2309.06076
Grain Growth and Dust Segregation Revealed by Multi-wavelength Analysis of the Class I Protostellar Disk WL 17
The first step toward planet formation is grain growth from (sub-)micrometer to millimeter/centimeter sizes. Grain growth has been reported not only in Class II protoplanetary disks but also in Class 0/I protostellar envelopes. However, early-stage grain growth occurring in Class 0/I stages has rarely been observed on the protostellar disk scale. Here we present the results from the ALMA Band 3 ($\lambda$ = 3.1 mm) and 7 ($\lambda$ = 0.87 mm) archival data of the Class I protostellar disk WL 17 in the $\rho$ Ophiuchus molecular cloud. Disk substructures are found in both bands, but they are different: while a central hole and a symmetric ring appear in Band 3, an off-center hole and an asymmetric ring are shown in Band 7. Furthermore, we obtain an asymmetric spectral index map with a low mean value of $\alpha$ = 2.28 $\pm$ 0.02, suggestive of grain growth and dust segregation on the protostellar disk scale. Our radiative transfer modeling verifies these two features by demonstrating that 10 cm-sized large grains are symmetrically distributed, whereas 10 $\mu$m-sized small grains are asymmetrically distributed. Also, the analysis shows that the disk is expected to be massive and gravitationally unstable. We thus suggest a single Jupiter-mass protoplanet formed by gravitational instability as the origin of the ring-like structure, grain growth, and dust segregation identified in WL 17.
Ilseung Han, Woojin Kwon, Yusuke Aso, Jaehan Bae, Patrick Sheehan
2023-09-12T09:13:41Z
http://arxiv.org/abs/2309.06076v1
Grain Growth and Dust Segregation Revealed by Multi-wavelength Analysis of the Class I Protostellar Disk WL 17 ###### Abstract The first step toward planet formation is grain growth from (sub-)micrometer to millimeter/centimeter sizes. Grain growth has been reported not only in Class II protoplanetary disks but also in Class 0/I protostellar envelopes. However, early-stage grain growth occurring in Class 0/I stages has rarely been observed on the protostellar disk scale. Here we present the results from the ALMA Band 3 (\(\lambda=3.1\) mm) and 7 (\(\lambda=0.87\) mm) archival data of the Class I protostellar disk WL 17 in the \(\rho\) Ophiuchus molecular cloud. Disk substructures are found in both bands, but they are different: while a central hole and a symmetric ring appear in Band 3, an off-center hole and an asymmetric ring are shown in Band 7. Furthermore, we obtain an asymmetric spectral index map with a low mean value of \(\alpha=2.28\)\(\pm\) 0.02, suggestive of grain growth and dust segregation on the protostellar disk scale. Our radiative transfer modeling verifies these two features by demonstrating that 10 cm-sized large grains are symmetrically distributed, whereas 10 \(\mu\)m-sized small grains are asymmetrically distributed. Also, the analysis shows that the disk is expected to be massive and gravitationally unstable. We thus suggest a single Jupiter-mass protoplanet formed by gravitational instability as the origin of the ring-like structure, grain growth, and dust segregation identified in WL 17. Protostars (1302), Circumstellar disks (235), Circumstellar dust (236), Circumstellar grains (239) ## 1 Introduction Protoplanetary disks, circumstellar disks of the so-called Class II young stellar objects (YSOs), are the natal place of planets. However, it is unknown when planet formation begins. Up to now, the youngest protoplanets have been identified only in intermediate-aged protoplanetary disks through optical to (sub-)mm observations: a Jupiter-mass planet around AS 209 (1\(-\)2 Myr; Bae et al., 2022) and AB Aur (4 Myr; Currie et al., 2022), and two super-Jovian-mass planets around PDS 70 (5 Myr; Keppler et al., 2018; Haffert et al., 2019), and a potential Neptune-mass planet around TW Hya (10 Myr; Tsukagoshi et al., 2019). On the other hand, exoplanet surveys have shown that protoplanetary disks do not have enough mass to form planets, implying that planets should form before the Class II stage (e.g., Greaves & Rice, 2010; Najita & Kenyon, 2014; Manara et al., 2018). Theoretical studies support this idea by demonstrating that planets, particularly Jupiter-mass gas giants, can rapidly form by gravitational instability (GI) before the Class II stage, i.e., within \(\sim\)1 Myr (e.g., Boss, 1997, 1998; Mayer et al., 2002, 2004; Durisen et al., 2007, and references therein). Recently, systematic observational studies in (sub-)mm wavelengths have also suggested the possibility of planet formation in the Class 0/I protostellar stages (e.g., Tychoniec et al., 2018, 2020; Williams et al., 2019; Tobin et al., 2020). Indeed, a systematic study toward young circumstellar disks has also been carried out (e.g., Ohashi et al., 2023). Planet formation is now expected to begin in the early protostellar stages.
Grain sizes are estimated by a dust opacity spectral index (\(\beta\)) in (sub-)mm wavelengths (e.g., Miyake and Nakagawa, 1993; D'Alessio et al., 2001; Draine, 2006; Kwon et al., 2009; Ricci et al., 2010). \(\beta\) is usually derived from a spectral index (\(\alpha\)) in the optically thin, Rayleigh-Jeans approximation case as \(\alpha\) = \(\beta\) + 2 (see also Section 3.3). Based on theoretical studies, the index is also related to other dust properties, such as shape, composition, and porosity (e.g., Pollack et al., 1994; D'Alessio et al., 2001; Kataoka et al., 2014), but above all, it is highly sensitive to the size: specifically, \(\beta\lesssim\) 1.0 at a 1-mm wavelength indicates a grain size larger than 3 mm (Draine, 2006). Indeed, many observational studies in (sub-)mm wavelengths have so far reported the presence of mm/cm-sized large grains in protoplanetary disks by showing such low mean \(\beta\) values (e.g., Beckwith and Sargent, 1991; Rodmann et al., 2006; Andrews and Williams, 2007; Lommen et al., 2007; Ricci et al., 2010, 2010; Kwon et al., 2011, 2015; Tazzari et al., 2021), compared with the interstellar medium (ISM) consisting of submicrometer-sized small grains (\(\beta_{\rm ISM}\simeq\) 1.7; e.g., Finkbeiner et al., 1999; Li and Draine, 2001; Planck Collaboration et al., 2014, 2014). Grain growth occurs even in the very early evolutionary stages of YSOs. Considering the angular resolutions of previous interferometric observations, grain growth (\(\beta\lesssim\) 1.0) has mainly been reported in the inner envelopes (e.g., Jorgensen et al., 2007; Kwon et al., 2009, 2015; Chiang et al., 2012; Miotello et al., 2014; Galametz et al., 2019) and the outer disks (e.g., Tobin et al., 2013) around Class 0/I protostars. Recently, higher-resolution observations with ALMA and the Karl G. Jansky Very Large Array (VLA) have allowed us to probe grain growth in Class I protostellar disks in more detail. For example, the presence of 10 cm-sized large grains was suggested in the outer region of the edge-on disk CB 26, although such rapid grain growth could not be probed in the inner region due to high optical depth (Zhang et al., 2021). Grain growth even to cm size was reported in the less inclined disk EC 53 (\(i\) = 34.8\({}^{\circ}\); Lee et al., 2020). In addition to the \(\beta\) analysis, Harsono et al. (2018) suggested rapid grain growth to mm size implied by a lack of CO isotopologue emissions in the inner region (\(R_{\rm disk}\lesssim\) 30 au) of TMC-1A. Grain growth occurs in disks non-uniformly. It has been reported that \(\beta\) values of central regions in YSO envelopes and disks are smaller than those of outer regions (e.g., Kwon et al., 2009; Guilloteau et al., 2011; Perez et al., 2012, 2015; Tazzari et al., 2016). In addition, \(\beta\) values show a dependence on substructures of protoplanetary disks, which have been detected by high-resolution ALMA observations (e.g., Tsukagoshi et al., 2016, 2022; Macias et al., 2019, 2021; Carrasco-Gonzalez et al., 2019; Long et al., 2020; Sierra et al., 2021). Particularly in the ring region, \(\beta\) is smaller than 1.0, indicating that grains grown to mm/cm sizes are concentrated in this region. Such ring-like substructures have recently been revealed in Class 0/I protostellar disks as well (e.g., Sheehan and Eisner, 2017, 2018; de Valon et al., 2020; Nakatani et al., 2020; Lee et al., 2020; Segura-Cox et al., 2020; Sheehan et al., 2020, 2022; Alves et al., 2020), where grain growth may be enhanced.
Within these disk substructures, particularly consisting of inner holes and outer rings, the spatial distribution of dust grains is different depending on grain size. For example, previous near-infrared (NIR) and (sub-)mm observations toward protoplanetary disks have shown that in shorter wavelengths, the size of the observed holes becomes smaller (e.g., Hashimoto et al., 2012, 2015; Dong et al., 2012, 2017; Zhang et al., 2014; Pinilla et al., 2015; Hendler et al., 2018; Keppler et al., 2019), and the dust scale height larger (e.g, Villenave et al., 2019, 2020). It means that \(\mu\)m-sized small grains are more widely distributed than mm/cm-sized large grains in the radial direction, and they can be lifted up to the disk surface, for example, due to turbulence in the vertical direction. Furthermore, within the (sub-)mm wavelength regime, the spatial distribution between mm- and cm-sized grains is slightly different in both radial and vertical directions. According to ALMA multi-wavelength observations toward protoplanetary disks of Class II YSOs, the width of the rings is narrower in longer wavelengths (e.g., Pinilla et al., 2017, 2019), and the dust scale height is smaller in longer wavelengths (e.g., Villenave et al., 2020, 2022). It indicates that larger grains are more concentrated in the ring and more settled down toward the disk midplane. However, whether or not dust segregation happens in Class 0/I protostellar disks is still unclear. In this paper, we present ALMA Band 3 and 7 archival data of the Class I protostellar disk WL 17 to investigate the size and spatial distribution of dust grains on the protostellar disk scale. WL 17 is an M3-type protostar (McClure et al., 2010) and is located in the L1688 region of the \(\rho\) Ophiuchus molecular cloud (\(d\) = 137 pc; Ortiz-Leon et al., 2017). According to previous observations in IR and (sub-)mm wavelengths, it has been classified as a late Class I protostar with an age of \(\lesssim\)0.7 Myr (e.g., Enoch et al., 2009; Evans et al., 2009; Dunham et al., 2015), indicating that its protostellar envelope is nearly dissipated (Enoch et al., 2009; van Kempen et al., 2009). A small disk around the protostar has been revealed by multiple ALMA dust continuum observations (Sheehan and Eisner, 2017; Sadavoy et al., 2019; Cieza et al., 2019; Gulick et al., 2021). Particularly, despite its compact size, the Class I protostellar disk clearly shows substructures consisting of a large central hole (\(R_{\rm hole}\) = 12 au) and a horseshoe-like narrow ring (\(\sigma_{\rm ring}\) = 11 au; \(R_{\rm disk}\) = 23 au) in the high-resolution Band 3 image (Sheehan and Eisner, 2017), which may imply the possibility of grain growth within the ring, similar to structured protoplanetary disks. Furthermore, Gulick et al. (2021) recently reported that the disk substructures are also resolved in Band 7, and the disk is geometrically flared based on the marginally resolved Band 6 image. The disk is relatively more massive than other Class I disks in the \(\rho\) Ophiuchus molecular cloud (\(M_{\rm dust}\) = 13\(-\)32 \(M_{\oplus}\); e.g., Williams et al., 2019; Sadavoy et al., 2019). It implies that planets are more likely to form in this massive disk because the planet formation efficiency is predicted to increase in disks with more available material (e.g., Andrews et al., 2013). 
For these reasons, the clearly-structured disk WL 17 is one of the best targets for studying grain growth and dust segregation on the protostellar disk scale through multi-wavelength analysis. This paper is organized as follows. In Section 2, we describe the observational details of the ALMA Band 3 and 7 archival data, and the data reduction and imaging procedure. In Section 3, we present ALMA Band 3 and 7 dust continuum images and the spectral index map. In Section 4, to investigate the size and spatial distribution of dust grains in the disk, we perform radiative transfer modeling and analyze the modeling results. In Section 5, we discuss these modeling results in the context of planet formation. Lastly, our conclusions are summarized in Section 6. ## 2 Observations and Data Reduction We used the ALMA archival data of the Class I protostellar disk WL 17 taken in Band 3 and 7 during Cycle 3 (2015.1.00761.S; PI: Patrick Sheehan). As shown in Table 1, the Band 3 observations were made in two configurations (C-8/7 and C-2/3) from 2015 October 31 to 2016 April 17. Each configuration had the same spectral setup using 4 spectral windows centered at 90.495, 92.432, 102.495, and 104.495 GHz with a 2-GHz bandwidth. In the extended configuration (C-8/7), two execution blocks were taken with the same calibrators: J1517\(-\)2422 for flux and bandpass calibration and J1625\(-\)2527 for phase calibration. The flux densities of J1517\(-\)2422 were set to be 2.256 Jy at 97.479 GHz with a spectral index of \(-\)0.234 and 2.555 Jy with an index of \(-\)0.300 for the two execution blocks. The flux densities of J1625\(-\)2527 were bootstrapped as 0.795, 0.784, 0.736, and 0.725 Jy for individual spectral windows of the first execution block. They were 0.821, 0.807, 0.749, and 0.739 Jy for the second execution block. The numbers of antennas used for these two execution blocks were 38 and 37, respectively. In the compact configuration (C-2/3), J1733\(-\)1304 was a flux calibrator, while J1427\(-\)4206 was a bandpass calibrator. The flux density of J1733\(-\)1304 was set to 3.279 Jy at 90.495 GHz with a spectral index of \(-\)0.562. Like the extended configuration, J1625\(-\)2527 was employed as a phase calibrator, and its flux densities were bootstrapped as 0.689, 0.680, 0.633, and 0.626 Jy in individual spectral windows. The number of antennas used was 40. The Band 7 observations were made in two configurations (C-3 and C-6) from 2016 May 19 to 2016 September 11. The same calibrators as the Band 3 extended configuration observations were used in both configurations of the Band 7 observations: J1517\(-\)2422 for flux and bandpass calibration and J1625\(-\)2527 for phase calibration. The spectral setup in the compact configuration (C-3) had 5 spectral windows centered at 343.018, 344.219, 345.358, 354.524, and 356.269 GHz with bandwidths of 2 GHz, 117.188, 117.188, 234.375 MHz, and 2 GHz, respectively. The flux density of J1517\(-\)2422 was set to 1.914 Jy at 348.678 GHz with a spectral index of \(-\)0.265. The flux densities of J1625\(-\)2527 were calculated to be 0.252 Jy at 343.018 GHz and 0.247 Jy at 356.269 GHz for the wide 2-GHz bandwidths. The number of antennas for this configuration was 40. In the extended configuration (C-6), the spectral setup likewise had 5 spectral windows with the same bandwidths as the compact configuration. 
These spectral windows were centered at 342.978, 344.178, 345.318, 354.483, and 356.227 GHz, which are slightly different from those in the compact configuration due to the Doppler shift caused by the rotation and revolution of Earth. The flux density of J1517\(-\)2422 was set to be 2.800 Jy at 342.978 GHz with a spectral index of \(-\)0.200. The flux densities of J1625\(-\)2527 were calculated to be 0.288 Jy at 342.978 GHz and 0.282 Jy at 356.227 GHz for the wide 2-GHz bandwidths. The number of antennas was 37. Details of the observations are summarized in Table 1. In addition, general descriptions of the observations in both Band 3 and 7 are found in Sheehan and Eisner (2017, 2018) and Gulick et al. (2021). The ALMA archival data were calibrated with CASA (CASA Team et al., 2022) using the versions specified in the individual data reduction scripts: CASA 4.5.0 and 4.5.3 for the Band 3 extended and compact configuration data (C-8/7 and C-2/3) and CASA 4.6.0 and 4.7.0 for the Band 7 compact and extended configuration data (C-3 and C-6), respectively. Imaging and analysis for the Band 3 and 7 data were performed with CASA 5.4.0. Because the Band 3 observations spanned about half a year as summarized in Table 1, we considered the proper motion of WL 17. For the proper motion correction, before combining all the execution blocks to make the final image, we first imaged individual execution blocks separately using Briggs weighting with a robust parameter of 0.5 and then compared their disk centers. To measure the disk center, we fit an elliptical Gaussian to each image using the CASA task _imfit_. Note that the difference of the disk centers between the first two execution blocks, which were taken in the same extended configuration (C-8/7; Table 1), was negligible due to the short time interval. In the combined data of the extended configuration, the deconvolved Gaussian center was finally obtained as \(\alpha\)(J2000) = 16\({}^{\rm h}\)27\({}^{\rm m}\)06\({}^{\rm s}\).77 and \(\delta\)(J2000) = \(-\)24\({}^{\circ}\)38\({}^{\prime}\)15\({}^{\prime\prime}\).44. The Gaussian center was adopted as a common disk center, and also, it was assigned as a phase center using the CASA task _fixvis_. In contrast, there is an obvious proper motion between the extended and compact configuration images. To correct the proper motion, using the CASA tasks _imfit_, _fixvis_, and _fixplanets_, the measured disk center of the compact configuration image (the third execution block in Table 1) was shifted toward the common disk center, and also the shifted position was set as the phase center. The offset of the Band 3 compact configuration data from the Band 3 extended configuration data (i.e., the common disk center) is (\(-\)42.75 mas, 5.12 mas). The offset of the Band 7 extended configuration data from the common disk center is (\(-\)11.25 mas, \(-\)20.98 mas). Note that the proper motion reported by Ducourant et al. (2017) is (\(-\)10.0 \(\pm\) 0.5 mas yr\({}^{-1}\), \(-\)27.9 \(\pm\) 0.4 mas yr\({}^{-1}\)), which is comparable to the offset of the Band 7 extended data from the common disk center, given that the two data sets were taken about a year apart. The offset of the Band 3 compact configuration data is largely different from the corresponding proper motion value, but it is understandable considering the limited angular resolution.
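The proper-motion correction described above is driven by standard CASA tasks. A minimal, hypothetical sketch of that workflow is given below; the task names are the ones cited in the text, but the file names, field name, and specific arguments are illustrative assumptions rather than the exact calls used in this work.

```python
# Run inside a CASA session; imfit, fixvis, and fixplanets are the CASA tasks named in the text.
# All file, field, and image names below are illustrative placeholders.

# 1) Fit an elliptical Gaussian to the image to measure the disk center.
fit = imfit(imagename='wl17_band3_compact.image')   # returns a dict of fitted component parameters

# 2) Re-assign the phase center of the measurement set to the common disk center
#    determined from the Band 3 extended-configuration image.
common_center = 'J2000 16h27m06.77s -24d38m15.44s'
fixvis(vis='wl17_band3_compact.ms',
       outputvis='wl17_band3_compact_shifted.ms',
       phasecenter=common_center)

# 3) Relabel the field direction so the shifted position is treated as the phase center.
fixplanets(vis='wl17_band3_compact_shifted.ms',
           field='WL17',
           fixuvw=True,
           direction=common_center)
```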
After combining all the extended and compact configuration data, we tried self-calibration as well but did not achieve a significant improvement, perhaps due to a low original S/N of 21. We did not, therefore, include self-calibration in the Band 3 imaging. The final image was made using Briggs weighting with a robust parameter of 0.5, which provided the best compromise regarding both angular resolution and sensitivity. The Band 3 image has a synthesized beam of 0.074\({}^{\prime\prime}\)\(\times\) 0.060\({}^{\prime\prime}\) (PA = 78\({}^{\circ}\)) and a sensitivity of 33 \(\mu\)Jy beam\({}^{-1}\). In addition, elliptical tapering (2.0M\(\lambda\)\(\times\) 1.5M\(\lambda\), PA = 80\({}^{\circ}\)) was employed to achieve a comparable synthesized beam size to a Band 7 image. The tapered image has a synthesized beam of 0.108\({}^{\prime\prime}\)\(\times\) 0.103\({}^{\prime\prime}\) (PA = 67\({}^{\circ}\)) and a sensitivity of 34 \(\mu\)Jy beam\({}^{-1}\). The Band 7 observations were also carried out in two configurations (C-3 and C-6; Table 1). In these two configurations, each execution block was calibrated and imaged separately using Briggs weighting with a robust parameter of 0.5. We found that the Band 7 image made by the compact configuration data does not resolve any substructures because of a limited angular resolution of 0.331\({}^{\prime\prime}\)\(\times\) 0.148\({}^{\prime\prime}\), which is larger than the entire disk size (\(R_{\rm disk}\)\(\lesssim\) 0.2\({}^{\prime\prime}\); Sheehan and Eisner 2017). Also, we verified that the total flux does not change significantly when combining the compact configuration data with the extended configuration: 125.9 \(\pm\) 0.3 mJy for the combined data (Gulick et al., 2021) and 125.8 \(\pm\) 1.36 mJy for the extended configuration only. For the Band 7 imaging, thus, we used only the extended configuration data in order to focus on disk substructures. On the other hand, the flux difference within the 3\(\sigma\) area is relatively large in Band 3 (\(\sim\)7%): 6.84 \(\pm\) 0.17 and 7.32 \(\pm\) 0.14 mJy in the extended-only and combined configurations, respectively.

\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline Band & Date & Freq. Range & Antennas & Config. & Baselines & On-source Time & & Calibrators & \\ & & (GHz) & & & (m) & (minutes) & Flux & Bandpass & Phase \\ \hline 3 & 2015 Oct 31 & 89.50-105.49 & 38 & C-8/7 & 85-16196 & 2.82 & J1517-2422 & J1517-2422 & J1625-2527 \\ & 2015 Nov 26 & 89.50-105.49 & 37 & C-8/7 & 68-14321 & 2.82 & J1517-2422 & J1517-2422 & J1625-2527 \\ & 2016 Apr 17 & 89.49-105.48 & 40 & C-2/3 & 15-601 & 0.97 & J1733-1304 & J1427-4206 & J1625-2527 \\ 7 & 2016 May 19 & 342.01-357.24 & 40 & C-3 & 15-640 & 0.40 & J1517-2422 & J1517-2422 & J1625-2527 \\ & 2016 Sep 11 & 342.01-357.24 & 37 & C-6 & 15-3144 & 0.91 & J1517-2422 & J1517-2422 & J1625-2527 \\ \hline \end{tabular} Note. – In this paper, all the Band 3 data were used, but we used only the extended configuration (C-6) data in Band 7. \end{table} Table 1: Summary of ALMA Observations

In Band 3 we decided to use the combined data to avoid the flux filtering issue and to have a large _uv_ coverage for beam matching. The same imaging procedure was applied to the Band 7 data set. Using the CASA tasks _imfit_, _fixvis_, and _fixplanets_, the measured disk center was shifted toward the common disk center, which was determined in the Band 3 extended configuration image as described above, and was also assigned as the phase center.
For the same reasons with the Band 3 imaging, we did not perform self-calibration in Band 7 either. The final image was made using Briggs weighting with a robust parameter of 0.5, which was the best compromise between angular resolution and sensitivity. The Band 7 image has a synthesized beam of 0.107\({}^{\prime\prime}\)\(\times\) 0.104\({}^{\prime\prime}\) (PA = \(-\)37\({}^{\circ}\)) and a sensitivity of 0.4 mJy beam\({}^{-1}\). ## 3 Observational Results ### Band 3 Continuum Figure 1 shows a Band 3 continuum image of the Class I protostellar disk in WL 17. Because we combined all the compact and extended configuration data listed in Table 1, this image has a slightly lower angular resolution (0.074\({}^{\prime\prime}\)\(\times\) 0.060\({}^{\prime\prime}\)) than the image (0.06\({}^{\prime\prime}\)\(\times\) 0.05\({}^{\prime\prime}\)) presented by Sheehan and Eisner (2017) that used only the extended configuration data (the first two execution blocks in Table 1). Nevertheless, Figure 1 clearly reveals disk substructures: a central hole and a horseshoe-like ring around the hole. These substructures are consistent with those reported by Sheehan and Eisner (2017) and Gulick et al. (2021). The hole has a radius of \(\sim\)0.06\({}^{\prime\prime}\) (8 au), and the ring has a width of \(\sim\)0.08\({}^{\prime\prime}\) (11 au). These values will be measured more specifically through radiative transfer modeling in Section 4. The ring has a nearly symmetric shape but a marginally asymmetric brightness distribution in the azimuthal direction, showing a maximum intensity of 0.696 mJy beam\({}^{-1}\) at PA = 32\({}^{\circ}\) and a minimum intensity of 0.477 mJy beam\({}^{-1}\) at PA = 270\({}^{\circ}\). In the central hole, there is a weak emission above the 8\(\sigma_{\rm B3}\) level, which was previously discovered by Sheehan and Eisner (2017), but the contrast with the background emission inside the hole is not significant, which is about 2\(\sigma_{\rm B3}\). The total flux within the 5\(\sigma_{\rm B3}\)-contour region with a radius of \(\sim\)0.17\({}^{\prime\prime}\) (23 au) is measured to be 6.82 \(\pm\) 0.13 mJy. In addition, Figure 2a shows a tapered Band 3 image with an angular resolution of 0.108\({}^{\prime\prime}\)\(\times\) 0.103\({}^{\prime\prime}\) to achieve a comparable synthesized beam size to a Band 7 image, which will be introduced in the following subsection. The geometry of the disk is consistent with previous studies. To measure the geometry, we fit an elliptical Gaussian to the high-resolution Band 3 image (Figure 1) using the CASA task _imfit_. We obtain that the deconvolved Gaussian has an FWHM of 0.272\({}^{\prime\prime}\)\(\pm\) 0.012\({}^{\prime\prime}\)\(\times\) 0.235\({}^{\prime\prime}\)\(\pm\) 0.010\({}^{\prime\prime}\) (37 au \(\times\) 32 au) and a position angle of 58\({}^{\circ}\)\(\pm\) 14\({}^{\circ}\). Its inclination angle is also estimated to be 30\({}^{\circ}\)\({}^{+7}_{-11}\)\({}^{\circ}\) from the major axis and minor axis values of the FWHM. This FWHM is comparable to other FWHM values obtained from previous ALMA Band 6 continuum observations (Cieza et al., 2019; Sadavoy et al., 2019). Sheehan and Eisner (2017) obtained the inclination angle of 28\({}^{\circ}\) and the position angle of 82.4\({}^{\circ}\) through radiative transfer modeling only with the ALMA Band 3 extended configuration data (the first two execution blocks in Table 1). Recently, Gulick et al. 
(2021) estimated the inclination angle as 31.2\({}^{\circ}\) and the position angle as 56\({}^{\circ}\) through visibility modeling, using all the Band 3 data sets listed in Table 1. Furthermore, according to van der Marel et al. (2013), the \({}^{12}\)CO (3\(-\)2) outflow was observed to extend in the northwest-southeast direction by the James Clerk Maxwell Telescope (JCMT). The outflow was measured to be inclined by 50\({}^{\circ}\) from the line of sight. Assuming that thermal continuum emission originates from isothermal dust grains and is optically thin in (sub-)mm wavelengths, a dust mass can be measured from a total flux density as follows (Hildebrand, 1983): \[M_{\rm dust}=\frac{F_{\nu}d^{2}}{\kappa_{\nu}B_{\nu}(T_{\rm dust})}, \tag{1}\] where \(F_{\nu}\) is the total flux density at the frequency \(\nu\), \(d\) is the distance, \(\kappa_{\nu}\) is the dust mass absorption coefficient (so-called dust opacity) at the frequency \(\nu\), and \(B_{\nu}(T_{\rm dust})\) is the Planck function at the dust temperature \(T_{\rm dust}\).

Figure 1: ALMA Band 3 (3.1 mm; 97.5 GHz) continuum image of the Class I protostellar disk WL 17. Contour levels are {4, 8, 12, 16, 20} \(\times\)\(\sigma_{\rm B3}\), where \(\sigma_{\rm B3}\) corresponds to 33 \(\mu\)Jy beam\({}^{-1}\). Particularly, the non-circular 8\(\sigma_{\rm B3}\) contour implies the weak emission in the central hole, which was previously reported in Sheehan and Eisner (2017). The synthesized beam size shown at the lower left is 0.074\({}^{\prime\prime}\)\(\times\) 0.060\({}^{\prime\prime}\) with PA = 74\({}^{\circ}\). The red cross indicates the position of the protostar.

The total flux density measured within the 5\(\sigma_{\rm B3}\)-contour region is 6.82 mJy (Figure 1). As introduced in Section 1, the distance is 137 pc, which is the same as that used in Sheehan and Eisner (2017). Note that this value is the mean distance to the L1688 region in the \(\rho\) Ophiuchus molecular cloud (Ortiz-Leon et al., 2017). The dust opacity at a central frequency of 97.5 GHz is adopted to be 0.975 cm\({}^{2}\) g\({}^{-1}\), which was calculated from the equation in Beckwith et al. (1990): \(\kappa_{\nu}\) = 10 (\(\nu\) / 1 THz)\({}^{\beta}\) and \(\beta\) = 1. In addition, this widely-used opacity is comparable to the opacity with the maximum grain size (\(a_{\rm max}\)) of 1 mm calculated by several previous studies (see also Section 4.1; e.g., Andrews et al., 2009, 2011; Birnstiel et al., 2018; Pavlyuchenkov et al., 2019). Regarding dust temperature, in multiple previous observations toward the \(\rho\) Ophiuchus molecular cloud, it was assumed to be uniformly 20 K for calculating dust masses of the complete observed disk sample, including WL 17 (Andrews and Williams, 2007; Williams et al., 2019; Sadavoy et al., 2019). Given the wide range of physical properties, such as bolometric luminosity (\(L_{\rm bol}\)), for protostars (e.g., Dunham et al., 2015), several disk surveys have recently adopted various mean dust temperatures adjusted for each protostar (e.g., Tobin et al., 2020; Encalada et al., 2021). Also, for WL 17, we confirmed that the mean dust temperature in the ring, where most grains are concentrated, is estimated to be 30 K by assuming the radiative equilibrium that will be discussed in Section 4. Consequently, adopting \(T_{\rm dust}\) = 30 K, the dust mass of the WL 17 disk is obtained to be 26 \(M_{\oplus}\) in Band 3.
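As a cross-check of the numbers quoted above, Equation 1 can be evaluated directly. A minimal sketch in Python (cgs units; the flux densities, distance, temperature, and the Beckwith et al. (1990) opacity law are the values adopted in the text, while the function names are illustrative):

```python
import numpy as np

def planck_bnu(nu_hz, t_k):
    """Planck function B_nu(T) in cgs units (erg s^-1 cm^-2 Hz^-1 sr^-1)."""
    h, k_b, c = 6.62607015e-27, 1.380649e-16, 2.99792458e10
    return (2.0 * h * nu_hz**3 / c**2) / np.expm1(h * nu_hz / (k_b * t_k))

def dust_mass_earth(flux_jy, dist_pc, nu_hz, t_dust_k, beta=1.0):
    """Dust mass (Earth masses) from Equation 1, using the Beckwith et al. (1990)
    opacity law kappa_nu = 10 (nu / 1 THz)^beta cm^2 g^-1."""
    jy_cgs, pc_cm, m_earth_g = 1.0e-23, 3.086e18, 5.972e27
    kappa = 10.0 * (nu_hz / 1.0e12) ** beta            # cm^2 g^-1
    f_nu = flux_jy * jy_cgs                            # erg s^-1 cm^-2 Hz^-1
    d = dist_pc * pc_cm                                # cm
    m_dust = f_nu * d**2 / (kappa * planck_bnu(nu_hz, t_dust_k))
    return m_dust / m_earth_g

# Band 3: 6.82 mJy at 97.5 GHz, d = 137 pc, T_dust = 30 K  ->  ~26 M_Earth
print(dust_mass_earth(6.82e-3, 137.0, 97.5e9, 30.0))
# Band 7: 125.81 mJy at 350 GHz (kappa = 3.50 cm^2 g^-1)   ->  ~13 M_Earth
print(dust_mass_earth(125.81e-3, 137.0, 350.0e9, 30.0))
```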
### Band 7 Continuum Figure 2b shows a Band 7 continuum image of WL 17 with an angular resolution of 0.107\({}^{\prime\prime}\)\(\times\) 0.104\({}^{\prime\prime}\). As mentioned in Section 2, we used only the extended configuration archival data to focus on disk substructures (the second Band 7 execution block in Table 1). Compared with the Band 3 image shown in Figure 2a, the Band 7 image reveals different substructures: an off-center hole and an asymmetric ring. These substructures are also consistent with those reported by Gulick et al. (2021). The hole has a radius of \(\sim\)0.04\({}^{\prime\prime}\) (5 au), and its center is shifted toward the southwest direction. The ring is asymmetric about the disk minor axis: specifically, the northeastern part has a larger width of \(\sim\)0.13\({}^{\prime\prime}\) (18 au) than the southwestern part with a width of \(\sim\)0.10\({}^{\prime\prime}\) (14 au). The ring also has an asymmetric brightness distribution along the azimuthal direction, showing that the maximum intensity is 22.7 mJy beam\({}^{-1}\) at PA = 60\({}^{\circ}\) while the minimum intensity is 16.6 mJy beam\({}^{-1}\) at PA = 180\({}^{\circ}\). The total flux within the 5\(\sigma_{\rm B7}\)-contour region with a radius of \(\sim\)0.23\({}^{\prime\prime}\) (32 au) is measured to be 125.81 \(\pm\) 1.36 mJy. Likewise, we estimate the dust mass of the disk in Band 7. The dust opacity in Band 7 is calculated as 3.50 cm\({}^{2}\) g\({}^{-1}\) at a central frequency of 350 GHz by the equation in Beckwith et al. (1990). With the same dust temperature and distance of 30 K and 137 pc, the dust mass in Band 7 is 13 \(M_{\oplus}\), which is half of that estimated in Band 3. In other words, when assuming the typical dust opacity values (\(\beta\) = 1) computed in Beckwith et al. (1990), \(\kappa_{\nu}\) = 0.975 cm\({}^{2}\) g\({}^{-1}\) in Band 3 and \(\kappa_{\nu}\) = 3.50 cm\({}^{2}\) g\({}^{-1}\) in Band 7, there is a discrepancy in the dust mass estimation between these two bands: 26 \(M_{\oplus}\) and 13 \(M_{\oplus}\). To reconcile these dust masses, the (sub-)mm dust opacity spectral index between Band 3 and 7 must be lower than the typical value (\(\beta\) = 1) employed in Beckwith et al. (1990). Such a low dust opacity index in (sub-)mm wavelengths suggests the possible presence of mm/cm-sized large grains in the optically thin disk midplane (Miyake and Nakagawa, 1993; D'Alessio et al., 2001; Draine, 2006). Grain growth in WL 17 will be further discussed through the \(\beta\) analysis in the following subsection. ### Spectral Index A dust opacity (\(\kappa_{\nu}\)) is reasonably well described as a power-law function of frequency, \(\kappa_{\nu}\)\(\propto\)\(\nu^{\beta}\), in (sub-)mm wavelengths (e.g., Hildebrand, 1983; Beckwith et al., 1990; Beckwith and Sargent, 1991; Miyake and Nakagawa, 1993). Based on theoretical studies, the (sub-)mm dust opacity spectral index (\(\beta\)) depends on various properties of dust grains, such as size, shape, composition, and porosity (e.g., Miyake and Nakagawa, 1993; Pollack et al., 1994; D'Alessio et al., 2001; Draine, 2006; Kataoka et al., 2014). Among these dust properties, it is highly sensitive to the maximum grain size (\(a_{\rm max}\)): \(\beta\lesssim\) 1.0 at \(\lambda\) = 1 mm corresponds to \(a_{\rm max}\)\(\gtrsim\) 3 mm (Draine, 2006). Thus, \(\beta\) is commonly utilized to investigate grain growth in YSOs (e.g., Kwon et al., 2009).
The dust opacity index (\(\beta\)) is directly linked to the spectral index (\(\alpha\)) in (sub-)mm wavelengths. The spectral index is defined as \(\alpha\) = log(\(I_{\nu_{1}}\)/\(I_{\nu_{2}}\)) / log(\(\nu_{1}\)/\(\nu_{2}\)), where \(I_{\nu_{1}}\) and \(I_{\nu_{2}}\) are specific intensities at certain frequencies \(\nu_{1}\) and \(\nu_{2}\). The relationship between the spectral index and the dust opacity index is derived as the following equation (e.g., Tsukagoshi et al., 2016; Pavlyuchenkov et al., 2019): \[\alpha=3-\frac{h\nu}{k_{B}T_{\rm dust}}\frac{e^{h\nu/k_{B}T_{\rm dust}}}{e^{h \nu/k_{B}T_{\rm dust}}-1}+\frac{\tau_{\nu}}{e^{\tau_{\nu}}-1}\beta, \tag{2}\] where \(h\) is the Planck constant, \(\nu\) is the geometric mean frequency between frequencies \(\nu_{1}\) and \(\nu_{2}\), \(k_{B}\) is the Boltzmann constant, \(T_{\rm dust}\) is the dust temperature, and \(\tau_{\nu}\) is the geometric mean optical depth between optical depths \(\tau_{\nu_{1}}\) and \(\tau_{\nu_{2}}\). Note that for the optically thin case (\(\tau_{\nu}\ll\) 1), assuming the Rayleigh-Jeans approximation, Equation 2 can be simply expressed as \(\alpha\) = \(\beta\) + 2 (e.g., Kwon et al., 2009). On the other hand, for the highly optically thick (\(\tau_{\nu}\gg\) 1) case in the Rayleigh-Jeans regime, Equation 2 is expressed as \(\alpha\) = 2, which means that the dust opacity index cannot be estimated from the spectral index at all. Thus, in order to investigate grain growth, it is necessary to first measure optical depth. Using the Planck function, we compute the optical depths for the Band 3 and 7 dust continuum emissions within the 5\(\sigma\)-contour regions (Figures 2a and 2b). The mean intensities in Band 3 and 7 are 0.707 mJy beam\({}^{-1}\) and 10.8 mJy beam\({}^{-1}\), respectively. We adopt a dust temperature of 30 K for this calculation because this value is considered the mean dust temperature of the ring based on the radiative transfer modeling introduced in Section 4.1. The mean optical depth values are calculated to be 0.35 in Band 3 and 0.57 in Band 7, and then Equation 2 is derived as \(\alpha\) = 1.84 + 0.79\(\beta\). The optical depths at peaks are 0.72 in Band 3 and 2.40 in Band 7. We acknowledge that the small \(\alpha\) could be caused by a combination of relatively high optical depths, temperature gradients, and/or self-scattering in the line of sight (e.g., Li et al., 2017; Galvan-Madrid et al., 2018; Liu et al., 2021; Xu et al., 2023). However, we argue that it would be limited to small regions so our data may not be affected significantly. Therefore, with caution we consider that the emissions in the two bands are marginally optically thin, and we will discuss it further in Section 4.2. Recently, Gulick et al. (2021) also showed consistent results that the Band 3 and 7 continuum emissions are marginally optically thin in the entire disk region. We can thus estimate grain size from the spectral index between Band 3 and 7. Figure 2c shows the spectral index map obtained from the Band 3 and 7 dust continuum images in Figures 2a and 2b. Only the intensity values above the 5\(\sigma\) level are used to calculate the spectral index. This spectral index map has two interesting features. First, the index values are overall low with a mean value of 2.28 \(\pm\) 0.02 in a narrow range of 2.01 and 2.71. The uncertainty of this mean value is determined from the uncertainties of the Band 3 and 7 total flux values measured in the previous subsections. 
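The coefficients in the relation quoted above (\(\alpha\) = 1.84 + 0.79\(\beta\)) follow directly from Equation 2 evaluated at the geometric mean frequency and optical depth of the two bands. A minimal sketch, using the band frequencies, mean optical depths, and dust temperature adopted in the text:

```python
import numpy as np

h, k_b = 6.62607015e-27, 1.380649e-16   # Planck and Boltzmann constants (cgs)

def alpha_coefficients(nu1_hz, nu2_hz, tau1, tau2, t_dust_k):
    """Return (a0, a1) such that Equation 2 reads alpha = a0 + a1 * beta,
    evaluated at the geometric means of the two frequencies and optical depths."""
    nu = np.sqrt(nu1_hz * nu2_hz)       # geometric mean frequency
    tau = np.sqrt(tau1 * tau2)          # geometric mean optical depth
    x = h * nu / (k_b * t_dust_k)
    a0 = 3.0 - x * np.exp(x) / np.expm1(x)
    a1 = tau / np.expm1(tau)
    return a0, a1

# Band 3 (97.5 GHz, tau ~ 0.35) and Band 7 (350 GHz, tau ~ 0.57), T_dust = 30 K
a0, a1 = alpha_coefficients(97.5e9, 350.0e9, 0.35, 0.57, 30.0)
print(f"alpha = {a0:.2f} + {a1:.2f} * beta")   # -> alpha = 1.84 + 0.79 * beta

# Inverting with the observed mean alpha = 2.28 gives beta ~ 0.55,
# consistent with the 0.56 +/- 0.03 quoted in the text.
print((2.28 - a0) / a1)
```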
Note that the absolute flux calibration uncertainties are \(\sim\)5% in Band 3 and \(\sim\)10% in Band 7, resulting in variations of about 0.12 in the spectral index measurement (\(\alpha\) = 2.28\({}^{+0.11}_{-0.12}\)), which implies that the mean spectral index is still low. In addition, a variation of spectral indexes appears. The white contours in Figures 2c and 2d mark where the statistical error of spectral indexes based on the sensitivities of both bands is 0.07. The inner region, which has a smaller error, shows a variation of spectral indexes up to \(\Delta\alpha\sim\) 0.26. This indicates that the variation of spectral indexes, which is larger than 3\(\sigma\), is not negligible. Note that the spectral index error due to absolute flux uncertainties does not affect the spatial variation. Using the above \(\alpha\) equation, the dust opacity index \(\beta\) is calculated as 0.56 \(\pm\) 0.03. Several theoretical studies have shown \(\beta\) profiles as a function of \(a_{\rm max}\) and \(q\) within a similar wavelength range to the interval between Band 3 and 7, where \(a_{\rm max}\) and \(q\) are the maximum grain size and index of the power-law grain size distribution \(n(a)\)\(\propto\)\(a^{-q}\), respectively (e.g., D'Alessio et al., 2001; Ricci et al., 2010; Birnstiel et al., 2018). According to these profiles, \(\beta_{\rm B3-B7}\) = 0.56 corresponds to \(a_{\rm max}\) = 0.2\(-\)20 cm and \(q\) = 2.5\(-\)3.0. Given the estimated age of this late Class I protostar (\(\lesssim\)0.7 Myr; Dunham et al., 2015), dust grains have already grown up to a few centimeters in size during the protostellar stages. Indeed, grain growth to mm/cm sizes on the protostellar disk scale, demonstrated by such a low \(\beta\) value, has so far been reported in only a few Class I sources, such as EC 53 (Lee et al., 2020) and CB 26 (Zhang et al., 2021). Second, the \(\alpha\) values are asymmetrically distributed in the disk. It suggests that dust grains are differently distributed depending on their sizes.

Figure 2: ALMA images of WL 17. (a) Tapered Band 3 dust continuum image with the synthesized beam of 0.108\({}^{\prime\prime}\)\(\times\) 0.103\({}^{\prime\prime}\) (PA = 67\({}^{\circ}\)) with the sensitivity of 34 \(\mu\)Jy beam\({}^{-1}\). The red cross indicates the position of the protostar. The original Band 3 image is shown in Figure 1. (b) Band 7 dust continuum image with the synthesized beam of 0.107\({}^{\prime\prime}\)\(\times\) 0.104\({}^{\prime\prime}\) (PA = 67\({}^{\circ}\)) with the sensitivity of 0.4 mJy beam\({}^{-1}\). Note that substructures are different between Band 3 and 7. (c) Spectral index (\(\alpha\)) map between Band 3 and 7. The \(\alpha\) values are overall small with a mean value of 2.28 \(\pm\) 0.02, and also they are distributed asymmetrically. (d) Statistical error map of spectral indexes. The white contours of (c) and (d) mark where the error level is 0.07.

## 4 Modeling analysis As described in Sections 3.1 and 3.2, the disk substructures are different between Band 3 and 7: a central hole and a symmetric ring in Band 3, while an off-center hole and an asymmetric ring in Band 7. In Section 3.3, from the intrinsic difference between the brightness distributions in these two bands, we obtain the asymmetric spectral index (\(\alpha_{\rm mm}\)) map with a low mean value of 2.28 \(\pm\) 0.02, which implies rapid grain growth and dust segregation at the protostellar disk scale.
Thus, in this section, to verify these two features suggested by the spectral index map, we conduct radiative transfer modeling with the public code RADMC-3D (Dullemond et al., 2012) and analyze the modeling results. ### Modeling Setup The protostellar properties for our modeling analysis are based on previous studies. As mentioned in Section 1, WL 17 is an M3 protostar (McClure et al., 2010), whose typical effective temperature (\(T_{\rm eff}\)) is 3410 K (Herczeg & Hillenbrand, 2014). We adopt this temperature for our models. Note that Sheehan & Eisner (2017) used a similar effective temperature of 3400 K, which was measured by Keck NIRSPEC observations in Doppmann et al. (2005). From the Spitzer Space Telescope observations, Dunham et al. (2015) obtained the extinction-corrected infrared spectral index (\(\alpha^{{}^{\prime}}_{\rm IR}\)), bolometric temperature (\(T^{{}^{\prime}}_{\rm bol}\)), and bolometric luminosity (\(L^{{}^{\prime}}_{\rm bol}\)) of WL 17 as 0.72, 420 K, and 0.64 \(L_{\odot}\), respectively. Based on \(\alpha^{{}^{\prime}}_{\rm IR}\) and \(T^{{}^{\prime}}_{\rm bol}\), the authors showed that WL 17 is in the late Class I stage, supporting previous results (e.g., Enoch et al., 2009; Evans et al., 2009). Also, the most probable duration of the Class 0+I stage was calculated to be from 0.46 to 0.72 Myr by comparing the populations between the Class I sample and the reference Class comprising all of the Class II sample and part of the Class III sample (Dunham et al., 2015). In order to estimate the protostellar mass and luminosity of WL 17, we refer to the MIST isochrone, which covers a wide age range from 0.1 Myr to 20 Gyr (Choi et al., 2016). According to the isochrone, protostars with 3410 K and 0.46\(-\)0.72 Myr have 0.3 \(M_{\odot}\) and 0.45\(-\)0.62 \(L_{\odot}\). This protostellar luminosity range is consistent with the extinction-corrected bolometric luminosity. For these reasons, we assume the protostellar mass and luminosity as 0.3 \(M_{\odot}\) and 0.5 \(L_{\odot}\), respectively. Note that this protostellar luminosity is the same as that adopted in Sheehan & Eisner (2017). Regarding disk geometry, as described in Section 3.1, our estimates are consistent with the previous results. The inclination and position angles of the disk are thus fixed at 30\({}^{\circ}\) and 58\({}^{\circ}\), respectively, for the modeling analysis. We employ dust opacities computed by the DIsc ANAlysis (DIANA) project (Woitke et al., 2016). The opacities follow a power-law size distribution, \(n(a)\propto\)\(a^{-q}\) from \(a_{\rm min}\) = 0.05 \(\mu\)m to \(a_{\rm max}\), where \(q\) is the power-law index, \(a_{\rm min}\) is the minimum grain size, and \(a_{\rm max}\) is the maximum grain size. In order to constrain the size distribution of dust grains, we parameterize \(a_{\rm max}\) and \(q\): \(a_{\rm max}\) = {10 \(\mu\)m, 100 \(\mu\)m, 1 mm, 1 cm, 10 cm}, and \(q\) = {2.5, 3.0, 3.5}.

Figure 3: DIANA dust absorption opacities used for radiative transfer modeling. The fifteen adopted opacities have different line colors and styles depending on \(a_{\rm max}\) and \(q\). Two grey dashed vertical lines correspond to ALMA Band 3 (3.1 mm) and 7 (0.87 mm) wavelengths, respectively. For comparison, the opacity computed in Beckwith et al. (1990) is indicated by a black dash-dotted line. Note that this widely-used opacity is particularly similar to the DIANA opacity with \(a_{\rm max}\) = 1 cm and \(q\) = 3.5.
The other parameters for constraining dust properties are assumed to be the same as those defined in Woitke et al. (2016). All the opacities used for our models are shown in Figure 3. Particularly, the opacity with \(a_{\rm max}\) = 1 cm and \(q\) = 3.5 is comparable to the widely-used one calculated by Beckwith et al. (1990): \(\kappa_{\nu}\) = 10 (\(\nu\) / 1 THz)\({}^{\beta}\) and \(\beta\) = 1. The opacity in Beckwith et al. (1990) is also supported by the opacities of mm/cm-sized large grains computed in several previous studies (e.g., Andrews et al., 2009, 2011; Birnstiel et al., 2018; Pavlyuchenkov et al., 2019). The spatial grid of our models is defined in spherical coordinates (\(r\), \(\theta\), \(\phi\)) that RADMC-3D supports (e.g., Dullemond et al., 2020), and we employ azimuthally symmetric models, i.e., independent of \(\phi\). The \(r\) grid is spaced logarithmically: it has a total of 60 cells and starts from a dust sublimation radius (\(R_{\rm sub}\) = 0.05 au) to 50 au, which is far enough to cover the entire disk region in the radial direction (\(R_{\rm disk}\) = 22.7 au; Sheehan and Eisner, 2017). Note that \(R_{\rm sub}\) is calculated to be 0.05 au by the following equation: \(R_{\rm sub}\) = (\(L_{*}\) / (\(4\pi\sigma_{\rm SB}T_{\rm sub}^{4}\)))\({}^{0.5}\), where \(L_{*}\) is the stellar luminosity, \(\sigma_{\rm SB}\) is the Stefan-Boltzmann constant, and \(T_{\rm sub}\) is the dust sublimation temperature. \(L_{*}\) is adopted to be 0.5 \(L_{\odot}\), as mentioned above, and we assume that \(T_{\rm sub}\) is 1500 K (e.g., Andrews et al., 2009). To more specifically describe the disk substructures shown in Figure 1, we divide the \(r\) grid into two parts: hole and ring regions. These two regions are separately sampled on a logarithmic scale. The hole region has 20 cells from 0.05 au to 8 au, and the ring region has 40 cells from 8 au to 50 au. The \(\theta\) grid is spaced linearly: it has a total of 30 cells and starts from 75\({}^{\circ}\) to the disk midplane (\(\theta\) = 90\({}^{\circ}\)). We confirmed that this range is large enough to cover a few times the dust scale height used by Sheehan and Eisner (2017) and the dust scale height adopted in our modeling. Likewise, in order to sample the entire \(\theta\) grid at a higher resolution toward the disk midplane, we divide the \(\theta\) grid into two parts: upper and lower layers. The upper layer has 5 cells from 75\({}^{\circ}\) to 80\({}^{\circ}\), and the lower layer has 25 cells from 80\({}^{\circ}\) to 90\({}^{\circ}\). Next, the cylindrical radius \(R\) and the vertical height \(z\) are defined as \(R\) = \(r\)sin(\(\theta\)) and \(z\) = \(r\)cos(\(\theta\)), respectively, to express the physical quantities of our models below, such as temperature and density. We set the dust temperature distribution in the WL 17 disk based on an empirical relation between the disk-midplane temperature and the optically thin limit. Assuming dust grains are in radiative equilibrium with a central protostar, dust temperature in an optically thin region is expressed as the following power-law function (e.g., Equation 5 from Kwon et al., 2009): \(T_{\rm thin}(R)\) = \(T_{\rm sub}\) (\(R\) / \(R_{\rm sub}\))\({}^{-2/(4+\beta)}\), where \(T_{\rm sub}\) is the dust sublimation temperature, and \(R_{\rm sub}\) is the sublimation radius at \(T_{\rm dust}\) = \(T_{\rm sub}\), and \(\beta\) is the dust opacity index. Through a detailed radiative transfer modeling analysis, Kwon et al. 
(2015) showed that the dust temperature distribution in the midplane of the FT Tau disk is roughly a third of such an optically-thin temperature distribution, particularly for distances ranging from a few au to tens of au from a central protostar, and the slope of the distribution is steeper in the midplane. Such a steeper slope is likely due to a higher optical depth in the midplane (e.g., Looney et al., 2003; Kwon et al., 2015). We apply these results to our models because the physical properties of both protostars and their surrounding disks are similar (e.g., \(L_{\rm bol}\), \(T_{\rm eff}\), and \(R_{\rm disk}\); Long et al., 2018, 2019). The resultant radial dust temperature distribution that we adopt is defined as follows: \[T_{\rm mid}(R)=30~{}{\rm K}\left(\frac{R}{15~{}{\rm au}}\right)^{-0.45}. \tag{3}\] We assume that the disk is vertically isothermal. Note that Equation 3 is comparable with the midplane temperature obtained by Sheehan and Eisner (2017) and that widely used for a flared disk in radiative equilibrium (e.g., Chiang and Goldreich, 1997; D'Alessio et al., 1998; Dullemond et al., 2001). We adopt Gaussian rings for the radial dust surface density distribution. Indeed, the Gaussian function has been often used to reproduce dust surface density distributions in protostellar and protoplanetary disks with rings and gaps (e.g., Muto et al., 2015; Dullemond et al., 2018; Huang et al., 2020). For WL 17, Sheehan and Eisner (2017) adopted a power-law dust surface density distribution to describe a typical protostellar system, consisting of a spherical envelope and an embedded disk. However, WL 17 has been classified as a late Class I protostar (e.g., Enoch et al., 2009; Evans et al., 2009; Dunham et al., 2015), and van Kempen et al. (2009) reported that the envelope has been nearly dissipated because there is no extended C\({}^{18}\)O (3\(-\)2) emission within an angular radius of \(\sim\)40\({}^{\prime\prime}\). Furthermore, we add another Gaussian function to reproduce the weak emission in the center of the hole (hereafter called the inner disk), as shown in Figure 1. Thus, the radial dust surface density distribution for our models is defined as \[\Sigma(R)= \Sigma_{\rm hole}\exp\left(-\frac{(R-R_{\rm hole})^{2}}{2\sigma_{ \rm hole}{}^{2}}\right)+ \tag{4}\] \[\Sigma_{\rm ring}\exp\left(-\frac{(R-R_{\rm ring})^{2}}{2\sigma_ {\rm ring}{}^{2}}\right), \tag{5}\] where \(\Sigma_{\rm hole}\) is the peak surface density of the inner disk at the radius \(R_{\rm hole}\) (\(=\) 0), \(\sigma_{\rm hole}\) (\(=\) 3.9 au) is the width of the inner disk, \(\Sigma_{\rm ring}\) is the peak surface density of the ring at the radius \(R_{\rm ring}\), and \(\sigma_{\rm ring}\) is the width of the ring. Note that since the inner disk is not resolved at the current angular resolution, we simply assume that the inner disk is at the center of the hole and has the same width as the FWHM of the Band 3 synthesized beam: \(R_{\rm hole}\) = 0 au and \(\sigma_{\rm hole}\) = \(\sigma_{\rm beam}\) = 3.9 au. The other parameters, \(\Sigma_{\rm hole}\), \(\Sigma_{\rm ring}\), \(R_{\rm ring}\), and \(\sigma_{\rm ring}\), are set as free parameters for fitting. We adopt a power-law function for the radial dust scale height profile. Assuming that a disk is in hydrostatic equilibrium, this profile is determined from a power-law dust temperature distribution (e.g., Kwon et al., 2015). 
Considering the dust-settling effect, we also adopt a new factor \(f_{\rm H}\), which depends on the maximum grain size (\(a_{\rm max}\)). Specifically, based on the adopted \(a_{\rm max}\) = {10 \(\mu\)m, 100 \(\mu\)m, 1 mm, 1 cm, 10 cm}, the factor \(f_{\rm H}\) is largely divided into three values. First, the models with \(a_{\rm max}\) = 10 \(\mu\)m have \(f_{\rm H}\) = 1. Because \(\mu\)m-sized small grains are well mixed with gas, these grains are hardly settled down toward the disk midplane. Second, we assume \(f_{\rm H}\) = 0.5 for \(a_{\rm max}\) = 100 \(\mu\)m. Indeed, the dust scale height of a few hundred \(\mu\)m-sized grains is reported to be 0.1-0.8 times the gas scale height due to turbulence (e.g., Ohashi and Kataoka, 2019; Doi and Kataoka, 2021), implying that such intermediate-sized grains are moderately mixed with gas. Last, the remaining models with \(a_{\rm max}\)\(\geq\) 1 mm have \(f_{\rm H}\) = 0.1 because mm/cm-sized large grains are known to be highly settled down toward the disk midplane (e.g., Andrews et al., 2011; Kwon et al., 2011; Pinte et al., 2016; Villenave et al., 2022). In summary, the radial dust scale height profile for our models is defined as \[H(R)=f_{\rm H}H_{0}\left(\frac{R}{R_{0}}\right)^{h}, \tag{6}\] where \(f_{\rm H}\) is the dust settling factor, \(H_{0}\) (= 1.2 au) is the dust scale height at the radius \(R_{0}\) (= 15 au), and \(h\) (= 1.275) is the disk flaring index. Note that based on the dust temperature distribution (Equation 3), \(H_{0}\) and \(h\) are calculated as 1.2 au and 1.275, respectively. Finally, we search for model parameter sets reproducing the observed images best. This process includes two steps: finding parameter sets best fitting to the Band 3 image (Figure 1) and comparing the model images generated using the parameters of the first step but at Band 7 with the observational Band 7 image (Figure 2b). Also, to investigate grain properties, we select 15 pairs of (\(a_{\rm max}\), \(q\)), covering wide ranges of the maximum grain size (\(a_{\rm max}\)) from 10 \(\mu\)m to 10 cm and the power-law size distribution index (\(q\)) from 2.5 to 3.5 (see also Figure 3). For these 15 pairs of (\(a_{\rm max}\), \(q\)), we individually set a model with four free parameters (\(\Sigma_{\rm hole}\), \(\Sigma_{\rm ring}\), \(R_{\rm ring}\), and \(\sigma_{\rm ring}\)) and fit to the Band 3 data in the image domain. The best-fit model is obtained by maximizing the likelihood function, whose logarithm for each model produced by RADMC-3D on the image domain is defined by: \[\ln L = -\frac{1}{2}\Sigma\left[\frac{(I_{\rm obsrv}-I_{\rm model})^{2}}{ \sigma^{2}}+\ln(2\pi\sigma^{2})\right], \tag{7}\] \[\sigma^{2} = \sigma_{\rm obsrv}^{2}+(fI_{\rm model})^{2}, \tag{8}\] where \(I_{\rm obsrv}\) and \(I_{\rm model}\) are the specific intensities of the Band 3 data and model, respectively, \(\sigma_{\rm obsrv}\) = 33 \(\mu\)Jy beam\({}^{-1}\) is the RMS noise level of the Band 3 data. The \(f\) parameter is introduced to consider unknown uncertainties in the fitting, which is caused by more complex distributions of quantities than assumed in our model: e.g., density, temperature, opacity, etc. (e.g., Xu et al., 2023). In circumstellar disks, such unknown perturbations are likely proportional to the intensity. Hence, we adopted \(fI_{\rm model}\) as the unknown uncertainty. This form also has the advantage of giving more weight to the fitting of relatively faint emission, which is effective for evaluating the disk shape. 
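To make these ingredients concrete, the radial model profiles (Equations 3, 4, and 6) and the image-domain log-likelihood (Equations 7 and 8) can be sketched as below. This is a minimal Python sketch of our own (with our own function names), not the actual fitting script; fixed parameter values follow the text, and the free parameters of Equation 4 are passed as arguments.

```python
# Minimal sketch (ours) of the disk model profiles and likelihood used in the fitting.
import numpy as np

def T_mid(R):
    """Midplane dust temperature [K] from Eq. 3; R in au (vertically isothermal)."""
    return 30.0 * (R / 15.0) ** (-0.45)

def Sigma_dust(R, Sig_hole, Sig_ring, R_ring, sig_ring, R_hole=0.0, sig_hole=3.9):
    """Radial dust surface density [g cm^-2] from Eq. 4 (inner disk + ring); R in au."""
    return (Sig_hole * np.exp(-(R - R_hole) ** 2 / (2.0 * sig_hole ** 2)) +
            Sig_ring * np.exp(-(R - R_ring) ** 2 / (2.0 * sig_ring ** 2)))

def H_dust(R, f_H, H0=1.2, R0=15.0, h=1.275):
    """Dust scale height [au] from Eq. 6, scaled by the settling factor f_H."""
    return f_H * H0 * (R / R0) ** h

def ln_likelihood(I_model, I_obs, sigma_obs=33e-6, f=0.1):
    """Gaussian log-likelihood of Eqs. 7-8; intensities in Jy/beam, f as in the text."""
    var = sigma_obs ** 2 + (f * I_model) ** 2
    return -0.5 * np.sum((I_obs - I_model) ** 2 / var + np.log(2.0 * np.pi * var))
```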
Note that \(f\) always converges to 0.1 regardless of initial free parameters; thus we fix it at 0.1 for this fitting. The maximizing process is performed through the Markov Chain Monte Carlo (MCMC) method using the public python package emcee(Foreman-Mackey et al., 2013). The uniform prior probability distributions for the free parameters are given over: 0 \(<\Sigma_{\rm hole}\)\(<\) 10 g cm\({}^{-2}\), 0 \(<\Sigma_{\rm ring}\)\(<\) 20 g cm\({}^{-2}\), 5 \(<\)\(R_{\rm ring}\)\(<\) 30 au, and 0 \(<\sigma_{\rm ring}\)\(<\) 10 au. The initial free parameters are sampled with 100 walkers and 1500 steps. The first 1000 steps are used to explore the parameter space for the burn-in phase. By adopting the medians of the burn-in phase as second initial values, the remaining 500 steps sample the posterior probability distribution. The best-fit parameters are taken as the medians of the final posterior probability distributions, and the uncertainties of the parameters are determined by the 68% confidence interval. In this way, the best-fit model in Band 3 is obtained for each of the 15 pairs with different grain size distributions (Table 2). The reduced chi-square values (\(\chi^{2}_{\rm reduced}\)) for the 15 best-fit models are around 1.75, defined by \[\chi^{2}_{\rm reduced}=\frac{\Sigma\left[(I_{\rm obsrv}-I_{\rm model})^{2}~{}/~ {}\sigma_{\rm obsrv}^{2}\right]}{N-M}, \tag{9}\] where \(I_{\rm obsrv}\) and \(I_{\rm model}\) are likewise the specific intensities of the Band 3 data and model, respectively, \(\sigma_{\rm obsrv}\) = 33 \(\mu\)Jy beam\({}^{-1}\) is the RMS noise level of the Band 3 data, \(N\) is the number of pixels on the image plane, and \(M\) is the number of free parameters. Finally, Band 7 model images are individually generated using the 15 best-fit parameter sets with different grain size distributions and are compared with the observed image. Based on the comparison, we constrain the size and spatial distributions of dust grains and identify the presence of additional substructures. ### Modeling Results All the 15 Band 3 models reproduce the Band 3 data well, regardless of the adopted dust opacities. Figures 4 and 5 show the Band 3 and 7 intensity and residual maps of part of the adopted models. The two left columns are the Band 3 intensity and residual maps. In the intensity maps, substructures are clearly seen, consisting of a large central hole (\(R_{\rm ring}\simeq 16\) au) and a narrow ring (\(\sigma_{\rm ring}\simeq 3.1\) au), which are consistent with the observed image in the left top panel. The residual maps show a noisy pattern mainly within the 3\(\sigma_{\rm B3}\) level, indicative of good fittings. However, given that all the models explain the data well, we can see that there is a degeneracy between the opacity and the dust surface density: as listed in Table 2, when the adopted opacity is lower, the best-fit surface density values (\(\Sigma_{\rm hole}\) and \(\Sigma_{\rm ring}\)) become higher, and vice versa. This degeneracy is attributed to using only single-band data. When applying the physical parameters obtained from the Band 3 fittings to the Band 7 data, we find that only the models with cm-sized large grains reproduce the central substructure shown in Band 7. The right two columns in Figures 4 and 5 show the Band 7 intensity and residual maps. Compared with the Band 7 image in the top panel, the intensity maps in Figure 4 marginally show the substructures, whereas those in Figure 5 are highly saturated without the central hole. 
Additionally, the intensity maps of the remaining 9 models, which are not shown in Figure 5, are also centrally peaked. The reason is that as shown in Figure 3, these models have steeper \(\beta\) slopes than cm-sized models in Figure 4, resulting in a significant increase in intensity from Band 3 to 7. This difference is more evident in the residual maps. Figure 4 shows that the residuals of the three models with \(a_{\rm max}=1\) cm and \(q=2.5\), \(a_{\rm max}=10\) cm and \(q=2.5\), and \(a_{\rm max}=10\) cm and \(q=3.0\) are distributed mainly within the \(\pm 3\sigma_{\rm B7}\) level at the center. On the other hand, the residual maps in Figure 5 and of the remaining models show highly negative values up to \(-40\sigma_{\rm B7}\) in the same central hole region. This difference indicates that cm-sized models reproduce the Band 7 data, particularly the central hole, relatively better than the other models. Furthermore, the optical depths of these three best-fit models are consistent with the observed mean values. For example, in the case of the model with \(a_{\rm max}=10\) cm and \(q=2.5\), the optical depths at the peak (\(R\simeq 16\) au) are around 2.0 in both bands, while the other regions have a considerably low value of less than 0.5. Thus, these modeling results suggest that grain growth up to \(1-10\) cm in size has already occurred in the ring and inner disk. It appears that the three large-grain models do not reproduce the substructures observed in Band 7 fully yet. As shown in Figure 5, there are asymmetrically distributed positive residuals in the ring region, implying the presence of dust grains less sensitive to the Band 3 wavelength. Indeed, the spectral index between residual intensities of Band 3 and 7 at the residual peak position of PA = 60\({}^{\circ}\) is calculated to be 3.4\(-\)3.6, which is very close to that of ISM (\(\alpha_{\rm ISM}\simeq 3.7\); e.g., Finkbeiner et al., 1999; Li & Draine, 2001; Planck Collaboration et al., 2014, 2014). This suggests that the asymmetric residual regions have an addition of \(\mu\)m-sized small grains, which are presumably located in the upper layer of the disk. Note that these large-grain models have a smaller dust scale height for mm/cm-sized larger grains (a smaller \(f_{\rm H}\) parameter; see Section 4.1). Furthermore, such grain growth up to a few centimeters in the ring can be supported by the azimuthal shift of the continuum peak observed in (sub-)mm wavelengths. The Band 3 image has the strongest 3\(\sigma_{\rm B3}\) peak in the northeast from the center, although symmetric models fit it well overall. The peak of the Band 7 image is much stronger. For example, Baruteau & Zhu (2016) performed two-dimensional hydrodynamic simulations to investigate the dynamics of dust grains in a transition disk with a narrow ring. The gas in the ring rapidly forms a horseshoe-shaped vortex driven by the Rossby-wave instability (RWI), which is similar to the ring shape in WL 17, and the distribution of dust grains along the vortex depends on the grain size: \(\mu\)m-sized small grains are in the vortex center, while cm-sized large grains are located ahead of the vortex. Particularly, considering gas self-gravity, the shift angle of \(1-10\) cm-sized grains is about 30\({}^{\circ}\) (see Figure 9 in Baruteau & Zhu, 2016). 
To apply the result of dust distributions in a vortex to WL 17, we examine the disk rotation direction using the Additional Representative Images for Legacy (ARI-L; Massardi et al., 2021) image of the ALMA Band 6 \({}^{13}\)CO (\(2-1\)) molecular line emission (2019.1.01792.S; PI: Diego Mardones). Figure 6 shows the moment 0 (integrated intensity) map of the redshifted and blueshifted \({}^{13}\)CO (\(2-1\)) emissions and the 1.3-mm dust continuum emission. Considering the angular resolution of the data, these emissions likely come from an envelope structure. The systematic velocity of this target is 4.5 km s\({}^{-1}\), which was obtained from previous JCMT HARP \({}^{12}\)CO (3\(-\)2) observations (van der Marel et al., 2013). The redshifted component (5.99\(-\)6.32 km s\({}^{-1}\)) is clearly seen in the southwestern region along the disk major axis. On the other hand, the blueshifted component (2.67\(-\)3.00 km s\({}^{-1}\)) is not seen in the northeastern region, presumably due to the diffuse foreground material (e.g., van Kempen et al., 2009). Indeed, a strong \({}^{12}\)CO (3\(-\)2) self-absorption feature was identified in the blueshifted velocity range between 2\(-\)4 km s\({}^{-1}\) (van der Marel et al., 2013). This moment map indicates that the envelope is rotating clockwise. We can thus expect that the disk is rotating clockwise as well. Note that the near side of the disk is toward the northwest, and the far side toward the southeast. In the clockwise rotation, the Band 3 peak is ahead of the Band 7 peak. As shown in Figures 2a and 2b, the shift angle between these two peaks is about 30\({}^{\circ}\), which corresponds to that of 1\(-\)10 cm-sized grains. In addition to our modeling analysis, grain growth to 10 cm may be supported by the continuum peak shift between Band 3 and 7.

Figure 4: Part of the 15 best-fit models depending on \(a_{\rm max}\) and \(q\). The top two panels are the same as the Band 3 and 7 data shown in Figures 1 and 2. The red star indicates the position of the protostar. The remaining panels are the model images. From the left, the first column shows the three Band 3 model images depending on \(a_{\rm max}\) and \(q\), the second column shows their residual maps obtained by subtracting individual models from the Band 3 observational image, the third column shows the Band 7 model images, and the last column shows their residual maps obtained by subtracting individual models from the Band 7 observational image. Contour levels in the Band 3 and 7 residual maps are \(\{-3,\,3\}\), where \(\sigma_{\rm B3}=33\ \mu\)Jy beam\({}^{-1}\), and \(\{-3,\,3,\,9,\,15\}\), where \(\sigma_{\rm B7}=0.4\) mJy beam\({}^{-1}\), respectively. Note that only these three models, mainly populated by cm-sized large grains, reproduce well the central hole in Band 7.

Figure 5: Part of the 15 best-fit models depending on \(a_{\rm max}\) and \(q\), same as Figure 4, but different models. Contour levels in the Band 7 residual maps are \(\{-39\), \(-24\), \(-12\), \(-6\), \(-3\), \(3\}\) for model 6, \(\{-24\), \(-12\), \(-6\), \(-3\), \(3\}\) for model 9, and \(\{-15\), \(-9\), \(-3\), \(3\}\) for model 12, where \(\sigma_{\rm B7}\) = 0.4 mJy beam\({}^{-1}\). Note that these models do not reproduce the Band 7 data well, particularly the central hole.

## 5 Discussion

As shown in Figures 1 and 2, the protostellar disk surrounding WL 17 has a large central hole (Sheehan and Eisner, 2017; Gulick et al., 2021).
In general, various scenarios for such a central hole have so far been proposed: for example, grain growth (e.g., Birnstiel et al., 2012; Ohashi et al., 2021), photoevaporation (e.g., Alexander and Armitage, 2007; Owen et al., 2010), disk winds by magnetorotational instability (e.g., Suzuki et al., 2010), and dynamical clearing by (sub-)stellar or planetary companions (e.g., Artymowicz and Lubow, 1994; Zhu et al., 2011; Pinilla et al., 2012; Bae et al., 2018). To explain the hole in WL 17, some of these scenarios have been discussed in previous studies. Sheehan and Eisner (2017) suggested that photoevaporation is unlikely due to the high accretion rate expected in the Class I stage. Takahashi and Muto (2018) showed that disk winds can reproduce the hole, but their best-fit model cannot reproduce the inner disk, which is revealed in WL 17 (Figure 1). Last, Sullivan et al. (2019) found that there is no stellar companion that can dynamically clear the hole, based on their radial velocity measurement. As described in Section 4.2, the azimuthal difference of the continuum peak between Band 3 and 7 also implies the possibility of planet formation (Baruteau and Zhu, 2016). In the following section, we discuss whether dynamical clearing by a planetary companion(s) (so-called planet-disk interaction) can explain the rapid grain growth and dust segregation identified by our modeling analysis. In addition to the planet-disk interaction, given the early evolutionary stage, we discuss the possibility of protostellar infall for the origins of the features, as several hydrodynamic simulations have demonstrated that material infalling from an envelope onto a disk can form a dust ring and induce grain growth to mm/cm sizes within the ring (e.g., Bae et al., 2015; Kuznetsova et al., 2022).

\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Model & \(a_{\rm max}\) & \(q\) & \(\Sigma_{\rm hole}\) & \(\Sigma_{\rm ring}\) & R\({}_{\rm ring}\) & \(\sigma_{\rm ring}\) & M\({}_{\rm dust}\) \\ & & & (g cm\({}^{-2}\)) & (g cm\({}^{-2}\)) & (au) & (au) & (\(M_{\oplus}\)) \\ \hline 1 & 10 \(\mu\)m & 2.5 & \(0.31^{+0.02}_{-0.02}\) & \(3.24^{+0.09}_{-0.10}\) & \(15.96^{+0.04}_{-0.04}\) & \(3.09^{+0.06}_{-0.06}\) & 95.2 \\ 2 & & 3.0 & \(0.31^{+0.02}_{-0.02}\) & \(3.24^{+0.11}_{-0.10}\) & \(15.96^{+0.04}_{-0.04}\) & \(3.09^{+0.06}_{-0.06}\) & 95.2 \\ 3 & & 3.5 & \(0.31^{+0.02}_{-0.02}\) & \(3.24^{+0.10}_{-0.10}\) & \(15.96^{+0.04}_{-0.03}\) & \(3.09^{+0.06}_{-0.06}\) & 95.2 \\ \hline 4 & 100 \(\mu\)m & 2.5 & \(0.30^{+0.02}_{-0.02}\) & \(3.10^{+0.11}_{-0.09}\) & \(15.97^{+0.04}_{-0.04}\) & \(3.12^{+0.06}_{-0.06}\) & 92.2 \\ 5 & & 3.0 & \(0.30^{+0.02}_{-0.02}\) & \(3.13^{+0.10}_{-0.10}\) & \(15.97^{+0.04}_{-0.04}\) & \(3.12^{+0.06}_{-0.06}\) & 92.9 \\ 6 & & 3.5 & \(0.30^{+0.02}_{-0.02}\) & \(3.15^{+0.09}_{-0.09}\) & \(15.96^{+0.04}_{-0.04}\) & \(3.12^{+0.06}_{-0.06}\) & 93.6 \\ \hline 7 & 1 mm & 2.5 & \(0.06^{+0.00}_{-0.00}\) & \(0.67^{+0.02}_{-0.02}\) & \(15.98^{+0.04}_{-0.04}\) & \(3.12^{+0.06}_{-0.06}\) & 20.0 \\ 8 & & 3.0 & \(0.07^{+0.00}_{-0.00}\) & \(0.76^{+0.02}_{-0.02}\) & \(15.98^{+0.04}_{-0.04}\) & \(3.13^{+0.06}_{-0.06}\) & 22.6 \\ 9 & & 3.5 & \(0.10^{+0.01}_{-0.01}\) & \(1.03^{+0.03}_{-0.03}\) & \(15.98^{+0.04}_{-0.04}\) & \(3.13^{+0.06}_{-0.06}\) & 30.6 \\ \hline 10 & 1 cm & 2.5 & \(0.26^{+0.02}_{-0.02}\) & \(2.70^{+0.09}_{-0.08}\) & \(15.98^{+0.04}_{-0.04}\) & \(3.11^{+0.06}_{-0.06}\) & 79.9 \\ 11 & & 3.0 & \(0.21^{+0.01}_{-0.01}\) & \(2.13^{+0.07}_{-0.07}\) & \(15.97^{+0.04}_{-0.04}\) & \(3.11^{+0.06}_{-0.06}\) & 63.3 \\ 12 & & 3.5 & \(0.16^{+0.01}_{-0.01}\) & \(1.66^{+0.05}_{-0.05}\) & \(15.98^{+0.04}_{-0.04}\) & \(3.11^{+0.06}_{-0.06}\) & 49.3 \\ \hline 13 & 10 cm & 2.5 & \(1.59^{+0.09}_{-0.09}\) & \(16.38^{+0.43}_{-0.53}\) & \(15.98^{+0.04}_{-0.04}\) & \(3.12^{+0.06}_{-0.05}\) & 487.1 \\ 14 & & 3.0 & \(0.59^{+0.06}_{-0.06}\) & \(9.78^{+0.33}_{-0.27}\) & \(15.98^{+0.04}_{-0.04}\) & \(3.13^{+0.06}_{-0.06}\) & 291.7 \\ 15 & & 3.5 & \(0.40^{+0.02}_{-0.02}\) & \(4.12^{+0.14}_{-0.11}\) & \(15.98^{+0.04}_{-0.04}\) & \(3.13^{+0.06}_{-0.06}\) & 122.9 \\ \hline \end{tabular} Note. – These parameters are obtained by fitting the adopted models to the Band 3 data (Figure 1): \(\chi^{2}_{\rm reduced}\simeq 1.75\). The dust masses estimated from the dust continuum images in Band 3 (Figure 1) and 7 (Figure 2b) are 26 \(M_{\oplus}\) and 13 \(M_{\oplus}\), respectively. \end{table} Table 2: Best-fit Parameters of the Adopted Disk Models

### Planet-disk Interaction

#### 5.1.1 How do grains rapidly grow and segregate?

An interaction between a disk and a giant planet can explain the rapid grain growth occurring at the protostellar disk scale. Drazkowska et al. (2019) performed two-dimensional hydrodynamic simulations to investigate how dust grains evolve by a single Jupiter-mass planet in a disk. These simulations assume that a disk around a 1-\(M_{\odot}\) protostar has a radius of 34 au, and a 1-\(M_{J}\) planet circularly orbits at 10 au for 4000 orbits (corresponding to \(\sim\)0.13 Myr). The simulations also consider dust coagulation and fragmentation. During the first 1000 orbits, the planet has already carved a gap in both gas and dust, and \(\mu\)m-sized small grains in the ring have rapidly grown up to cm-sized large grains. Also, after grains quickly reach a steady state within this first 1000 orbits, the size distribution of these grains does not change significantly during the remaining 3000 orbits. We emphasize that the initial conditions of the simulations are similar to WL 17, except for the protostellar mass, and the resulting size and spatial distributions of dust grains are highly comparable to those revealed by our modeling analysis (Section 4.2). Assuming a smaller protostellar mass of 0.3 \(M_{\odot}\), corresponding to the mass of WL 17 (Section 4.1), the first 1000 orbital period is calculated to be \(\sim\)57,700 yr, which is much less than the estimated age of the late Class I protostar (\(\lesssim\)0.7 Myr; Dunham et al., 2015). Thus, if there is already a single Jupiter-mass protoplanet orbiting WL 17, then this planet can rapidly carve a central large hole and a narrow ring and trigger grain growth in the ring during the Class I stage. A single Jupiter-mass planet can also explain grain growth in the inner disk of WL 17. Drazkowska et al. (2019) demonstrated that, in addition to grain growth in the ring, during the first 1000 orbits, part of \(\mu\)m-sized small grains pass through the gap and then grow, resulting in an inner disk consisting of mm/cm-sized large grains. Previously, Zhu et al. (2012) also showed similar results: mm-sized grains can penetrate a gap carved by a 1-\(M_{J}\) planet and form an inner disk, and \(\mu\)m-sized smaller grains can also penetrate the gap and grow rapidly to mm size in the inner disk. These simulation results are consistent with our modeling results that there are 1\(-\)10 cm-sized large grains in the inner disk of WL 17. The planet-disk interaction also interprets the dust segregation identified in the WL 17 disk.
As shown in Section 4.2, along the ring in the azimuthal direction, cm-sized large grains are distributed symmetrically, whereas \(\mu\)m-sized small grains are distributed asymmetrically.

Figure 6: Moment 0 (integrated intensity) map of the redshifted and blueshifted \({}^{13}\)CO (2\(-\)1) molecular line emission using the ALMA Band 6 ARI-L data. The redshifted emission (5.99\(-\)6.32 km s\({}^{-1}\)) is shown as red contours. Its contour levels are {\(-\)3, 3, 6, 9,..., 21} \(\times\)\(\sigma_{{}^{13}CO}\), where \(\sigma_{{}^{13}CO}\) is 5.14 mJy beam\({}^{-1}\) km s\({}^{-1}\). The blueshifted emission (2.67\(-\)3.00 km s\({}^{-1}\)) is shown as blue contours with levels of {\(-\)3, 3} \(\times\)\(\sigma_{{}^{13}CO}\). The size of the synthesized beam shown in the lower left is 1.255\({}^{\prime\prime}\)\(\times\) 0.971\({}^{\prime\prime}\) (PA = \(-\)88\({}^{\circ}\)). Grey color scale and black contours denote the 1.3-mm continuum image of the same ARI-L data. Its contour levels are {5, 10, 20, 40, 80, 160} \(\times\)\(\sigma_{{}^{B6}}\), where \(\sigma_{{}^{B6}}\) is 223.2 \(\mu\)Jy beam\({}^{-1}\). The synthesized beam size of the continuum is nearly the same as that of the \({}^{13}\)CO (2\(-\)1) line emission. Red and blue arrows indicate redshifted and blueshifted outflows, respectively, and the systematic velocity of WL 17 is 4.5 km s\({}^{-1}\) (van der Marel et al., 2013). Grey lines indicate the position angles of the disk major and minor axes, 58\({}^{\circ}\) and 148\({}^{\circ}\), which are obtained from the ALMA Band 3 continuum image (Figure 1). Note that the redshifted \({}^{13}\)CO (2\(-\)1) emission is in the southwestern region along the disk major axis, while the blueshifted one is not seen.

Bae et al. (2019) performed two-dimensional hydrodynamic simulations to examine how the radial and azimuthal distributions of dust grains are evolved by one or two Jupiter-mass planets in a protoplanetary disk. They assumed that a disk around a protostar of 0.85 \(M_{\odot}\) has a gas radius of 198 au, and that grains with various sizes ranging from 0.1 \(\mu\)m to 1 mm are annularly distributed between 50 and 100 au. They did not consider dust evolution, i.e., coagulation and fragmentation, but instead focused on changes in the spatial distribution of dust grains in the ring by the planet(s). Their simulations show that dust grains are quickly (less than 0.6 Myr) segregated in the radial and azimuthal directions by one or two Jupiter-mass planets. Particularly, when there is only a single planet with 5 \(M_{J}\), mm-sized large grains, which are decoupled from gas, are concentrated at a pressure bump induced by the planet(s), while \(\mu\)m-sized small grains, well mixed with gas, are more widely distributed inside and outside the pressure bump. In addition to the radial direction, these large grains are symmetrically distributed along the azimuth, whereas the spatial distribution of the small grains is relatively more asymmetric, resulting in an off-center hole (see also Figure 1 in Bae et al., 2019). This difference, particularly in the azimuth, is consistent with our modeling results, although the radial difference cannot be investigated by our observational data due to limited angular resolutions compared to the ring width. Also, Drazkowska et al.
(2019) showed similar results that a giant planet with 1 \(M_{J}\) can rapidly cause dust segregation in a disk with a radius of 34 au, which is a much smaller disk than the above simulations (\(R_{\rm disk}=198\) au) but comparable to the disk around WL 17 (\(R_{\rm disk}\lesssim 20\) au; Table 2). In the radial direction, as the grain size becomes larger, dust grains are more concentrated toward the ring induced by the planet (see also Figure 5 in Drazkowska et al., 2019). Although the radial dependence of dust distributions needs to be studied further with higher angular resolution data, these two numerical simulations suggest that if there is already a Jovian planet of a mass \(\lesssim\)5 \(M_{J}\) in the hole of the WL 17 disk, this planet can cause dust segregation as well as grain growth. As the second step, we estimate the mass of the putative planet in the central hole. Kanagawa et al. (2016) proposed an empirical relationship between the observed gap width of a protoplanetary disk and the mass of a single giant planet in the gap. The formula is expressed as follows: \[\frac{M_{\rm p}}{M_{\star}}=2.1\times 10^{-3}\left(\frac{\Delta_{\rm gap}}{R_ {\rm p}}\right)^{2}\left(\frac{h_{\rm p}}{0.05R_{\rm p}}\right)^{3/2}\left( \frac{\alpha}{10^{-3}}\right)^{3/2}, \tag{10}\] where \(M_{\rm p}\) and \(M_{\star}\) are the masses of the planet and the protostar, respectively, \(\Delta_{\rm gap}\) is the gap width, \(R_{\rm p}\) is the orbital radius of the planet, \(h_{\rm p}\) is the gas scale height at \(R_{\rm p}\), and \(\alpha\) is the Shakura-Sunyaev viscosity parameter (Shakura and Sunyaev, 1973). Note that Kanagawa et al. (2016) defined the gap width \(\Delta_{\rm gap}\) and the orbital radius \(R_{\rm p}\) as \(\Delta_{\rm gap}=R_{\rm out}-R_{\rm in}\) and \(R_{\rm p}=(R_{\rm out}+R_{\rm in})/2\), respectively, where \(R_{\rm in}\) and \(R_{\rm out}\) are the inner and outer edges of the gap. Assuming that the radial surface density profile of a ring is Gaussian, \(R_{\rm in}\) and \(R_{\rm out}\) are derived to be \(R_{\rm in}=\sqrt{2{\rm ln}2}\sigma_{\rm in}\) and \(R_{\rm out}=R_{\rm ring}\)\(-\sqrt{2{\rm ln}2}\sigma_{\rm ring}\), where \(\sigma_{\rm in}\) and \(\sigma_{\rm ring}\) are the standard deviations of the Gaussian inner disk and outer ring (see also Equation 4). Since \(\sigma_{\rm in}\) is assumed to be 3.9 au (Section 4.1) and the best-fit parameter \(\sigma_{\rm ring}\) is obtained as 3.1 au through the MCMC fitting (Table 2), \(\Delta_{\rm gap}\) and \(R_{\rm p}\) are calculated as 7.76 and 8.47 au, respectively. The gas scale height \(h_{\rm p}(R=R_{\rm p})\) is 0.58 au, which is derived from the dust scale height profile of our models without considering the dust settling effect (Equation 6). Likewise, as adopted in our models (Section 4.1), the protostellar mass \(M_{\rm s}\) is set to 0.3 \(M_{\odot}\). Lastly, we assume that \(\alpha\) is \(10^{-3}\), which was used in the above previous theoretical simulations (e.g., Kanagawa et al., 2016; Bae et al., 2019; Drazkowska et al., 2019). These input values provide a mass estimate of the potential planet in the hole to be \(\sim\)0.9 \(M_{J}\). #### 5.1.2 How does a Jupiter-mass planet rapidly form? 
Many theoretical studies have so far predicted that gravitational instability (GI) can form Jupiter-mass gas giants (\(\lesssim\)10 \(M_{\rm J}\)) in a massive protostellar disk around a 1-\(M_{\odot}\) protostar within a few thousand years (e.g., Boss, 1997, 1998; Mayer et al., 2002, 2004; Durisen et al., 2007, and references therein). In particular, several of these theoretical studies have demonstrated the possibility of rapid giant planet formation by GI in a disk around an M-type protostar, which is the same spectral type as WL 17 (e.g., Boss, 2006; Backus and Quinn, 2016; Mercer and Stamatellos, 2020). Boss (2006) showed that within a few hundred years, a marginally gravitationally unstable disk (\(R_{\rm disk}=20\) au and \(M_{\rm disk}=0.021-0.065~{}M_{\odot}\)) around an M-type protostar of 0.1 or 0.5 \(M_{\odot}\) can develop spiral arms, and then a few Jupiter-mass clumps form in the spiral arms. Backus and Quinn (2016) showed that a gravitationally unstable disk (\(R_{\rm disk}\leq 30\) au and \(M_{\rm disk}=0.01-0.08~{}M_{\odot}\)) around an M-type protostar of 0.3 \(M_{\odot}\) can develop spiral arms, and then these arms rapidly fragment into a number of dense clumps with an average mass of 0.3 \(M_{\rm J}\). We emphasize that these two studies adopted disk radii, disk masses, protostellar masses, and midplane temperatures very similar to those of WL 17 (see also Section 4.1). Furthermore, Mercer and Stamatellos (2020) showed that a gravitation ally unstable and larger disk (\(R_{\rm disk}\) = 60\(-\)120 au and \(M_{\rm disk}\) = 0.040\(-\)0.083 \(M_{\odot}\)) around an M-type protostar of 0.2\(-\)0.4 \(M_{\odot}\) can fragment and finally form Jupiter-mass protoplanets (2\(-\)6 \(M_{\rm J}\)) within a few thousand years. All these theoretical studies prove that GI can form Jupiter-mass protoplanets in a massive protostellar disk around an M-type protostar rapidly. We inspect whether WL 17 has a gravitationally unstable disk. In general, disk instability is determined by the Toomre \(Q\) parameter (Toomre, 1964). The Toomre \(Q\) parameter is defined as \[Q=\frac{c_{s}\Omega}{\pi G\Sigma}, \tag{11}\] where \(c_{s}\) = \(\sqrt{k_{B}T/\mu m_{p}}\) is the isothermal sound speed of an ideal gas, \(\Omega\) = \(\sqrt{GM_{\star}/R^{3}}\) is the Keplerian angular velocity, \(G\) is the gravitational constant, and \(\Sigma\) is the disk surface density. In the \(c_{s}\) expression, \(k_{B}\) is the Boltzmann constant, \(T\) is the gas temperature, \(\mu\) is the mean molecular weight, and \(m_{p}\) is the proton mass. We assume that gas temperature is the same as dust temperature, hence \(T\) follows the dust temperature distribution of the disk midplane (see Equation 3), and that \(\mu\) is 2.37 (e.g., Kauffmann et al., 2008). Also, the disk surface density \(\Sigma\) can be derived from the best-fit radial dust surface density distribution \(\Sigma_{\rm dust}\), which is obtained by the MCMC fitting (Section 4.2), with a typical gas-to-dust ratio of 100. Note that protostellar disks around M-type protostars are gravitationally unstable when \(Q_{\rm min}\)\(\lesssim\) 0.9\(-\)1.5 (e.g., Boss, 2006; Backus and Quinn, 2016). Figure 7 shows Toomre \(Q\) parameter profiles as a function of radius between 8 and 24 au. These three profiles are obtained from the three best-fit models (\(a_{\rm max}\) = 1 cm and \(q\) = 2.5; \(a_{\rm max}\) = 10 cm and \(q\) = 2.5; and \(a_{\rm max}\) = 10 cm and \(q\) = 3.0), respectively. 
The Toomre \(Q\) parameter profiles decrease toward the ring region and then increase again toward the outer region. Particularly, all the models have the lowest Toomre \(Q\) parameters in the ring region: the model with \(a_{\rm max}\) = 1 cm and \(q\) = 2.5 has \(Q_{\rm min}\)\(\simeq\) 0.91, and the other two models with \(a_{\rm max}\) = 10 cm and \(q\) = 2.5 and with \(a_{\rm max}\) = 10 cm and \(q\) = 3.0 have \(Q_{\rm min}\)\(<\) 0.3. According to the above theoretical studies suggesting the condition needed for disk instability, \(Q_{\rm min}\)\(\lesssim\) 0.9\(-\)1.5 (e.g., Boss, 2006; Backus and Quinn, 2016), these low \(Q\) values of all the three best-fit models indicate that WL 17 is gravitationally unstable in the ring region. Such instability supports the presence of a Jupiter-mass planet(s) as well as further planet formation in the future. In addition, assuming the typical gas-to-dust ratio of 100, the three models have high disk masses of 0.024 \(M_{\odot}\), 0.146 \(M_{\odot}\), and 0.088 \(M_{\odot}\), respectively. These high disk masses are consistent with previous ALMA disk survey results that the dust mass of WL 17 is within the top 30% of the Class I protostellar disks in the \(\rho\) Ophiuchus molecular cloud (Williams et al., 2019; Sadavoy et al., 2019; Encalada et al., 2021). WL 17 is thus a gravitationally unstable and massive disk so that Jupiter-mass protoplanets have likely formed within a short time. If a protoplanet were detected, it would be the youngest one compared to the cases confirmed so far: a Jupiter-mass planet around AS 209 (1\(-\)2 Myr; Bae et al., 2022), a Jupiter-mass planet around AB Aur (4 Myr; Currie et al., 2022), two Jupiter-mass planets around PDS 70 (5 Myr; Isella et al., 2019; Benisty et al., 2021), and two Jupiter-mass planets around HD 163296 (6 Myr; e.g., Teague et al., 2021; Izquierdo et al., 2023). Note that the planets orbiting AS 209 and HD 163296 are kinematically identified by localized velocity perturbations in molecular line emission. ### Protostellar infall An alternative to the planet-disk interaction is the protostellar infall scenario: material infalling from an envelope onto a disk can induce substructure and grain growth. Bae et al. (2015) showed that isotropic infall triggers the Rossby-wave instability (RWI), and this instability forms vortices, which can efficiently trap dust grains, particularly cm-sized large grains, and enhance grain growth. Recently, Kuznetsova et al. (2022) examined more realistic cases with anisotropic infall. It corresponds to filamentary inflows, called streamers, that have been lately reported both in (sub-)mm observations (e.g., Yen et al., 2014, 2019; Pineda et al., 2020; Thieme et al., 2022; Valdivia-Mena et al., 2022) and in numerical simulations (e.g., Seifried et al., 2015; Kuffmeier et al., 2017, 2019). Their simulations show that streamers also trigger the RWI, forming vortices and pressure bumps in a disk. Furthermore, in these cases, dust grains drifting inward are concentrated on the annular pressure bumps and rapidly grow therein. These radial drift and grain growth result in a compact disk with a mean radius of 55 au and a ring structure consisting of mm/cm-sized large grains. This radius is in good agreement with the observational result that the mean dust disk radii of the Class 0/I sources are less than 50 au in the VLA/ALMA Nascent Disk and Multiplicity (VANDAM) survey of the Orion Molecular Clouds (Tobin et al., 2020; Sheehan et al., 2022). 
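The minimum Toomre \(Q\) values and disk masses quoted above can be reproduced to good approximation directly from Equations 3, 4, and 11. The following back-of-the-envelope script is ours (not the authors' code); it uses the best-fit parameters of model 10 (\(a_{\rm max}\) = 1 cm, \(q\) = 2.5) from Table 2 and a gas-to-dust ratio of 100.

```python
# Rough check (ours) of Toomre Q (Eq. 11) and the dust/disk mass for model 10.
import numpy as np

au, G = 1.496e13, 6.674e-8                      # au in cm; G in cgs
k_B, m_p = 1.381e-16, 1.673e-24                 # Boltzmann constant; proton mass (cgs)
Msun, Mearth = 1.989e33, 5.972e27               # g
mu, Mstar = 2.37, 0.3 * 1.989e33                # mean molecular weight; 0.3 Msun

def T_mid(R_au):                                # Eq. 3
    return 30.0 * (R_au / 15.0) ** (-0.45)

def Sigma_dust(R_au):                           # Eq. 4 with model 10 parameters
    return (0.26 * np.exp(-R_au**2 / (2 * 3.9**2)) +
            2.70 * np.exp(-(R_au - 15.98)**2 / (2 * 3.11**2)))

R_au = np.linspace(1.0, 30.0, 3000)
R = R_au * au
c_s = np.sqrt(k_B * T_mid(R_au) / (mu * m_p))   # isothermal sound speed
Omega = np.sqrt(G * Mstar / R**3)               # Keplerian angular velocity
Sigma_gas = 100.0 * Sigma_dust(R_au)            # gas-to-dust ratio of 100
Q = c_s * Omega / (np.pi * G * Sigma_gas)       # Eq. 11

M_dust = np.sum(2 * np.pi * R * Sigma_dust(R_au)) * (R[1] - R[0]) / Mearth
print(f"Q_min ~ {Q.min():.2f}; M_dust ~ {M_dust:.0f} M_earth; "
      f"M_disk ~ {100 * M_dust * Mearth / Msun:.3f} M_sun")
```

Run as-is, this returns \(Q_{\rm min}\) close to 0.9 and a dust mass close to the 79.9 \(M_{\oplus}\) listed for model 10 (hence \(M_{\rm disk}\approx 0.024\ M_{\odot}\)), consistent with the values quoted above.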
The infall scenario can be applied to WL 17, although there is no feature of streamers found yet. As described in Section 4.1, it is in the late Class I stage, and its age is estimated to be less than 0.72 Myr (Dunham et al., 2015). Considering this young age, an envelope is expected to still remain, and Sheehan and Eisner (2017) also suggested that this target is embedded in the remnants of its envelope. Indeed, the \({}^{13}\)CO (2\(-\)1) emission in Figure 6 reveals that part of the inner envelope is rotating, although the outer envelope has almost dissipated (van Kempen et al., 2009). Also, the dust continuum images in Band 3 and 7 (Figure 2) show that the disk is compact but annularly structured, with a ring radius of 16 au (Table 2). In addition to the ring structures, the peak-intensity positions between the two bands differ by about 30\({}^{\circ}\) in the azimuthal direction. As mentioned in Section 4.2, this difference suggests the presence of a vortex triggered by the RWI. These features are consistent with the above theoretical prediction, indicating that material is still being accreted from the surrounding envelope onto the disk. Thus, putative infall motion can also interpret the observed ring and the grain growth within the ring. However, this infall scenario cannot explain the grain growth in the inner disk of WL 17. The inner disk is indeed detected in the higher-resolution Band 3 image (Figure 1), and our three best-fit models suggest that there are 1\(-\)10 cm-sized large grains in this inner disk. Bae et al. (2015) showed that since a vortex driven by infall efficiently traps 1\(-\)10 cm-sized dust grains, an inner disk is depleted of dust and contains only gas. In contrast, as discussed in Section 5.1.1, the planet-disk interaction can explain the grain growth in the inner disk, thus making it the preferential scenario. ## 6 Conclusions We used ALMA Band 3 and 7 archival data of the Class I protostellar disk WL 17. Using these multi-wavelength and multi-configuration data, we present the Band 3 and 7 dust continuum images with angular resolutions of 0.07\({}^{\prime\prime}\) (10 au) and 0.1\({}^{\prime\prime}\) (14 au), respectively. We also obtain a two-dimensional spectral index (\(\alpha\)) map between these two bands. In addition, to further constrain grain properties, we perform radiative transfer modeling by testing several dust opacity models, which follow the power-law size distribution \(n(a)\propto a^{-q}\) from \(a_{\rm min}\) = 0.05 \(\mu\)m to \(a_{\rm max}\), and compare the models with the multi-wavelength data. The main results are summarized as follows: 1. Disk substructures are clearly resolved in both Band 3 and 7, but they are significantly different: the Band 3 image shows a central hole and a symmetric ring, whereas the Band 7 image shows an off-centered hole and an asymmetric ring. These substructures are consistent with those reported by Sheehan & Eisner (2017) and Gulick et al. (2021). 2. The spectral index (\(\alpha\)) map between Band 3 and 7 shows two features: the \(\alpha_{\rm mm}\) values are overall low with an average value of 2.28, and they are asymmetrically distributed. Based on the mean specific intensity, the WL 17 disk is estimated to be moderately optically thin in Band 3 and 7 (\(\tau_{\nu}\)\(\lesssim\) 0.6), indicating that the spectral index can be understood by grain sizes (e.g., Draine, 2006). 
The spectral index map, therefore, suggests that (1) grains have already grown to mm/cm sizes, and (2) they are differently distributed depending on grain sizes. 3. Only the models having a small scale height of dust grains and being populated by cm-sized large grains (\(a_{\rm max}\) = 1 cm and \(q\) = 2.5, \(a_{\rm max}\) = 10 cm and \(q\) = 2.5, and \(a_{\rm max}\) = 10 cm and \(q\) = 3.0) can explain the disk substructures, particularly the central holes observed both in Band 3 and 7. These modeling results suggest that grains have rapidly grown up to 1\(-\)10 cm in size and have been settled down toward the midplane during the protostellar phase. 4. Nevertheless, the best models cannot fully explain the ring emission. Notably, in Band 7, the ring region has highly positive and asymmetric residuals. This can be interpreted as another \(\mu\)m-sized dust population, presumably in the upper layer, less sensitive to the Band 3 wavelength. It implies that cm-sized large grains in the midplane are symmetrically distributed in the azimuthal direction, whereas \(\mu\)m-sized small grains are asymmetrically distributed. 5. The rapid grain growth and dust segregation identified by the modeling analysis can be explained by a single Jupiter-mass planet based on previous hydrodynamic simulations with a similar environment to WL 17. The high disk masses (\(M_{\rm disk}\)) of 0.024 \(M_{\odot}\), 0.146 \(M_{\odot}\), and 0.088 \(M_{\odot}\) inferred from the three best models (\(a_{\rm max}\) = 1 cm and \(q\) = 2.5, \(a_{\rm max}\) = 10 cm and \(q\) = 2.5, \(a_{\rm max}\) = 10 cm and \(q\) = 3.0) result in low minimum Toomre \(Q\) parameter (\(Q_{\rm min}\)) values of 0.91, 0.15, and 0.25 in the ring region. It means that the disk is gravitationally unstable, so a giant planet has likely formed by gravitational instability (GI) during the Class I stage.

Figure 7: Toomre \(Q\) parameter radial profiles of the three best-fit models with (a) \(a_{\rm max}\) = 1 cm and \(q\) = 2.5, (b) \(a_{\rm max}\) = 10 cm and \(q\) = 2.5, and (c) \(a_{\rm max}\) = 10 cm and \(q\) = 3.0. To focus on gravitational instability in the ring region, these profiles are shown between 8 and 24 au. In the vertical direction, the black dashed and dotted lines indicate \(R_{\rm ring}\simeq 16.0\) au and \(\sigma_{\rm ring}\simeq 3.1\) au of the best-fit models, and the ring region is shown as the shaded region in each panel. In the horizontal direction, the black dashed and dotted lines indicate the critical Toomre \(Q\) parameter values of 0.9 (Backus & Quinn, 2016) and 1.5 (Boss, 2006), respectively, for a protostellar disk around an M-type protostar with 1 M\({}_{\odot}\). The grey shaded regions below these two lines are gravitationally unstable, meaning that Jupiter-mass planet(s) can rapidly form within the ring region.

We are grateful to the anonymous referee for thoughtful comments. I.H. thanks Thiem Hoang, Chang Won Lee, Sang-Sung Lee, and Aran Lyo for helpful discussions. W.K. is supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2021R1F1A1061794). This paper makes use of the following ALMA data: ADS/JAO.ALMA#2015.1.00761.S, ADS/JAO.ALMA#2019.1.01792.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
Facilities: ALMA. Software: CASA (McMullin et al., 2007), RADMC-3D (Dullemond et al., 2012), emcee (Foreman-Mackey et al., 2013)
2310.12160
On geometric interpretation of Euler's substitutions
We consider a classical case of irrational integrals containing a square root of a quadratic polynomial. It is well known that they can be expressed in terms of elementary functions by one of three Euler's substitutions. It is less known that the Euler substitutions have a beautiful geometric interpretation. In the framework of this interpretation one can see that the number 3 is not the most suitable. We show that it is natural to introduce the fourth Euler substitution. By the way, it is not clear who was the first to attribute these three substitutions to Euler. In his original treatise Leonhard Euler uses two substitutions which are sufficient to cover all cases.
Jan L. CieΕ›liΕ„ski, Maciej Jurgielewicz
2023-09-24T01:56:39Z
http://arxiv.org/abs/2310.12160v1
# On geometric interpretation of Euler's substitutions

###### Abstract

We consider a classical case of irrational integrals containing a square root of a quadratic polynomial. It is well known that they can be expressed in terms of elementary functions by one of three Euler's substitutions. It is less known that the Euler substitutions have a beautiful geometric interpretation. In the framework of this interpretation one can see that the number 3 is not the most suitable. We show that it is natural to introduce the fourth Euler substitution. By the way, it is not clear who was the first to attribute these three substitutions to Euler. In his original treatise Leonhard Euler uses two substitutions which are sufficient to cover all cases.

**Keywords**: integral calculus; irrational integrals; conics; rational parameterization; fourth Euler's substitution

## 1 Introduction

Integrals of rational functions can be expressed in terms of elementary functions. Therefore a natural method of integration consists in using suitable substitutions and integration by parts to reduce our problem to integration of rational functions. In this paper we consider irrational integrals containing the square root of a quadratic polynomial, i.e., integrals of the form \[\int R(x,y)dx\, \tag{1}\] where \(R\) is a rational function (a quotient of two polynomials) of \(x\) and \(y\), and \[y=\sqrt{ax^{2}+bx+c}. \tag{2}\] The subject is, in principle, known. A standard method to deal with such integrals consists in using one of the so-called Euler's substitutions [1, 2]. However, there are some details which need to be clarified. We will describe in detail a geometric approach to this problem and explain how many Euler substitutions actually exist. In fact, to our best knowledge, all sources mention exactly three types of substitutions in this context. It is not clear who was the first to introduce such a classification. Euler substitutions are usually introduced and discussed in Russian sources, see, e.g., [5, 3, 4] (Leonhard Euler, although of Swiss origin, lived and worked in Saint Petersburg for many years). Surprisingly enough, the three substitutions appeared also in an old textbook by a Harvard professor, [6], without any reference to Euler. In our paper we present a clear geometric interpretation of this problem, shortly mentioned in some sources, mainly of Russian origin [2, 7]. The textbook [7] is not translated into English. Another book by the same author, [3], does not mention this geometric approach in the section on Euler's substitutions. The main novelty of this paper is the introduction of the fourth Euler substitution, which is a natural consequence of the geometric approach discussed in our paper.

## 2 Three classical Euler's substitutions

The main idea of Euler's substitutions consists in expressing \(\sqrt{ax^{2}+bx+c}\) as a linear function of \(x\) and a new parameter \(t\) in such a way that the resulting equation is linear with respect to \(x\). In this paper we use the most common numbering of these three substitutions, compare [1, 2, 3, 4, 6]. In some sources a different order is used, see [5, 8].

### First Euler substitution

This substitution can be done only in the case \(a>0\): \[\sqrt{ax^{2}+bx+c}=\pm x\sqrt{a}+t. \tag{3}\] Squaring both sides we get: \[ax^{2}+bx+c=ax^{2}\pm 2xt\sqrt{a}+t^{2}\.\] Terms quadratic in \(x\) cancel out and the resulting equation is linear in \(x\). Computing \(x\), we get a rational dependence on \(t\): \[x=\frac{t^{2}-c}{b\mp 2t\sqrt{a}}.
\tag{4}\] Then, from (2) and (3), we get \[y=\frac{\mp t^{2}\sqrt{a}+tb\mp c\sqrt{a}}{b\mp 2t\sqrt{a}}. \tag{5}\]

### Second Euler substitution

This substitution can be done only in the case \(c>0\): \[\sqrt{ax^{2}+bx+c}=xt\pm\sqrt{c}. \tag{6}\] Squaring both sides we get: \[ax^{2}+bx+c=x^{2}t^{2}\pm 2xt\sqrt{c}+c. \tag{7}\] The constant \(c\) cancels out and dividing both sides by \(x\) we again derive an equation linear in \(x\). Hence, similarly as in the previous case, \[x=\frac{b\mp 2t\sqrt{c}}{t^{2}-a}\,\qquad y=\frac{bt\mp(t^{2}+a)\sqrt{c}}{t^{2}-a}. \tag{8}\]

### Third Euler substitution

This substitution can be done only in the case \(\Delta>0\), where \[\Delta\equiv b^{2}-4ac \tag{9}\] is the discriminant of the quadratic polynomial. Then the polynomial has two distinct real roots \(x_{1}\) and \(x_{2}\), and the third Euler substitution is given by: \[\sqrt{ax^{2}+bx+c}=(x-x_{1})t. \tag{10}\] Squaring both sides we get: \[a(x-x_{1})(x-x_{2})=(x-x_{1})^{2}t^{2}\qquad\Rightarrow\qquad a(x-x_{2})=(x-x_{1})t^{2}. \tag{11}\] Computing \(x\) from the resulting equation and then using (10) and (2) we obtain \[x=\frac{t^{2}x_{1}-ax_{2}}{t^{2}-a}\,\qquad y=\frac{(x_{1}-x_{2})at}{t^{2}-a}\, \tag{12}\] where, of course, \[x_{1,2}=\frac{-b\pm\sqrt{b^{2}-4ac}}{2a}. \tag{13}\]

### Original Euler's approach

It is interesting that Leonhard Euler himself in his famous monograph used only two of these substitutions, see [9]. He considered two cases: \(\Delta>0\) and \(\Delta<0\). In the first case (\(\Delta>0\)) he proposed the substitution (6), while in the second case (\(\Delta<0\)) he proposed the substitution (3) in a slightly modified form: \[\sqrt{ax^{2}+bx+c}=x\sqrt{a}-t\sqrt{c}. \tag{14}\] Obviously, the case \(\Delta=0\) is not included because then the quadratic polynomial is a square of a linear function in \(x\), and \(y\) is linear in \(x\) as well. Hence the integrand in (1) is rational in \(x\) from the very beginning.

## 3 Geometric interpretation

It is convenient to square both sides of (2), obtaining the equation of a quadratic curve \[y^{2}=ax^{2}+bx+c \tag{15}\] We will denote this curve (a conic section) by \(Q_{abc}\), i.e., \((x,y)\in Q_{abc}\).

### Elliptic case: \(a<0\)

The canonical form of the quadratic polynomial yields: \[y^{2}+|a|\left(x-\frac{b}{2|a|}\right)^{2}=c-\frac{b^{2}}{4a} \tag{16}\] We can distinguish three cases, depending on the sign of the discriminant \(\Delta\): \[\Delta<0\quad\Longrightarrow\quad Q_{abc}=\emptyset\quad\mbox{(empty set)} \tag{17}\] \[\Delta=0\quad\Longrightarrow\quad Q_{abc}=\left\{\left(\frac{b}{2|a|},0\right)\right\}\quad\mbox{(single point)} \tag{18}\] \[\Delta>0\quad\Longrightarrow\quad Q_{abc}\mbox{ is an ellipse} \tag{19}\] Only in the last case do we get a non-degenerate quadratic curve.

### Parabolic case: \(a=0\)

For \(a=0\) (and \(b\neq 0\)) the conic \(Q_{abc}\) is a parabola with the symmetry axis \(y=0\).
### Hyperbolic case: \(a>0\)

The canonical form of the quadratic polynomial yields: \[y^{2}-a\left(x+\frac{b}{2a}\right)^{2}=c-\frac{b^{2}}{4a} \tag{20}\] We can distinguish three cases, depending on the sign of the discriminant \(\Delta\): \[\Delta<0\quad\Longrightarrow\quad Q_{abc}\mbox{ is a hyperbola with vertices on the line }x=-\frac{b}{2a} \tag{21}\] \[\Delta=0\quad\Longrightarrow\quad Q_{abc}\mbox{ is a pair of intersecting lines} \tag{22}\] \[\Delta>0\quad\Longrightarrow\quad Q_{abc}\mbox{ is a hyperbola with vertices on the }x\mbox{ axis} \tag{23}\] Therefore, for \(\Delta\neq 0\) we get a non-degenerate quadratic curve.

### Rational parameterization - standard approach

The key idea leading to a rational parameterization consists in fixing an arbitrary point \(P_{0}=(x_{0},y_{0})\) on the conic \(Q_{abc}\) and assigning to any other point \(P=(x,y)\) of this conic the line \(P_{0}P\). Taking as a parameter \(t\) the slope of this line we obtain a rational parameterization of the conic \(Q_{abc}\) [2, 7]. Thus we have the system of three equations: \[\begin{split}& y^{2}=ax^{2}+bx+c\,\\ & y_{0}^{2}=ax_{0}^{2}+bx_{0}+c,\\ & y-y_{0}=t(x-x_{0})\.\end{split} \tag{24}\] The points \((x,y)\) and \((x_{0},y_{0})\) belong to the conic \(Q_{abc}\) and \(t\) is the slope of the straight line passing through \((x,y)\) and \((x_{0},y_{0})\). Subtracting the second equation from the first one we get: \[\begin{split}&(y-y_{0})(y+y_{0})=(x-x_{0})(a(x+x_{0})+b)\,\\ & y_{0}^{2}=ax_{0}^{2}+bx_{0}+c,\\ & y-y_{0}=t(x-x_{0})\,\end{split} \tag{25}\] Substituting the last equation into the first one we obtain: \[(t(y+y_{0})-a(x+x_{0})-b)\left(x-x_{0}\right)=0\, \tag{26}\] \[y_{0}^{2}=ax_{0}^{2}+bx_{0}+c,\] \[y-y_{0}=t(x-x_{0})\.\] Assuming \(x\neq x_{0}\), we get \[t(y+y_{0})=a(x+x_{0})+b\, \tag{27}\] \[y_{0}^{2}=ax_{0}^{2}+bx_{0}+c,\] \[y-y_{0}=t(x-x_{0})\.\] Now, the first and the last equation form a system of two linear equations for two variables \(x,y\), which can be solved in the standard way. As a result we obtain: \[x=\frac{x_{0}t^{2}-2y_{0}t+ax_{0}+b}{t^{2}-a}\, \tag{28}\] \[y=\frac{-y_{0}t^{2}+(2ax_{0}+b)t-ay_{0}}{t^{2}-a}\,\] which means that we expressed \(x\) and \(y\) as rational functions of the parameter \(t\). **Corollary 3.1**.: _There are infinitely many Euler-like substitutions. Each of them is determined by the choice of \(x_{0}\), provided that \(ax_{0}^{2}+bx_{0}+c\geqslant 0\). Then the point \(P_{0}\equiv(x_{0},y_{0})\) is given by:_ \[P_{0}=\left(x_{0},\pm\sqrt{ax_{0}^{2}+bx_{0}+c}\right) \tag{29}\] In particular, the second Euler substitution corresponds to \(x_{0}=0\) (provided that the graph of the conic \(Q_{abc}\) intersects the axis \(y\)), see Fig. 1 and Fig. 2. The third Euler substitution corresponds to \(x_{0}\) being a root of the polynomial \(ax^{2}+bx+c\) (provided that the graph of \(Q_{abc}\) intersects the axis \(x\)), see Fig. 3 and Fig. 4. The first Euler substitution apparently does not fit this picture. However, its geometric interpretation is even simpler and more evident. The formula (3) describes the family of lines parallel to asymptotes of the corresponding hyperbola, see Fig. 5. We may treat it as a special case of (28) when the point \((x_{0},y_{0})\) lies at infinity. Note that points \((x_{0},\pm x_{0}\sqrt{a})\) belong to the conic (15) in the limit for \(x_{0}\to\infty\).
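As a quick consistency check of (28), one can verify symbolically that the parameterization lies on the conic whenever the base point \((x_{0},y_{0})\) does. The following SymPy snippet is our own illustration and is not part of the original text.

```python
# Check (ours): the parameterization (28) satisfies y^2 = a x^2 + b x + c
# whenever y0^2 = a x0^2 + b x0 + c.
import sympy as sp

a, b, c, t, x0, y0 = sp.symbols('a b c t x0 y0')

x = (x0*t**2 - 2*y0*t + a*x0 + b) / (t**2 - a)
y = (-y0*t**2 + (2*a*x0 + b)*t - a*y0) / (t**2 - a)

residual = (y**2 - (a*x**2 + b*x + c)).subs(c, y0**2 - a*x0**2 - b*x0)
print(sp.simplify(residual))   # prints 0
```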
## 4 New insights from the geometric interpretation

The description given in the previous section is more or less known (see, e.g., [2, 7]), although we are not aware of any reference containing all these details. We are going to derive further interesting consequences from this geometric picture. First of all, we identify characteristic points on the graph of a quadratic curve which can be chosen as \(P_{0}\) in the most natural way: vertices (\(M_{1}\), \(M_{2}\), \(R_{1}\), \(R_{2}\)) and intersections with coordinate axes (\(R_{1}\), \(R_{2}\), \(V_{1}\), \(V_{2}\)), see Fig. 6 and Fig. 7. In particular, in the case of the second Euler substitution \(P_{0}=V_{2}\) (see Fig. 1 and Fig. 2) or \(P_{0}=V_{1}\), while in the case of the third Euler substitution \(P_{0}=R_{1}\) (see Fig. 3) or \(P_{0}=R_{2}\) (see Fig. 4). The first Euler substitution is related to \(P_{0}\) at infinity.

Figure 1: Geometric interpretation of the second Euler substitution in the case \(a<0\) and \(c>0\). The point \(P\) is parameterized by the slope \(t\) of the line \(P_{0}P\).

Figure 2: Geometric interpretation of the second Euler substitution in the case \(a>0\) and \(c>0\). The point \(P\) is parameterized by the slope \(t\) of the line \(P_{0}P\).

Figure 3: Geometric interpretation of the third Euler substitution in the case \(a<0\) and \(\Delta>0\). The point \(P\) is parameterized by the slope \(t\) of the line \(P_{0}P\).

Figure 4: Geometric interpretation of the third Euler substitution in the case \(a>0\). The point \(P\) is parameterized by the slope \(t\) of the line \(P_{0}P\).

Figure 5: Geometric interpretation of the first Euler substitution. The points \(P\) and \(P_{1}\) are parameterized by intersections \(t\) and \(t_{1}\), respectively, of the \(y\)-axis with the line parallel to one of the asymptotes of the hyperbola \(y^{2}=ax^{2}+bx+c\).

Figure 6: Characteristic points on the graph of an ellipse: intersections with the coordinate axes (provided that they exist) and extremes (minimum \(M_{1}\) and maximum \(M_{2}\)).

### Fourth Euler's substitution

The geometric approach presented above includes all three classical Euler's substitutions, but it is still missing the vertices \(M_{1}\) and \(M_{2}\). Therefore, it is natural to introduce another (fourth) Euler's substitution, geometrically related to the missing vertices: \(P_{0}=M_{1}\) (see Fig. 8 and Fig. 9) or \(P_{0}=M_{2}\).

Figure 8: Geometric interpretation of the fourth Euler substitution in the case \(a>0\). The point \(P\) is parameterized by the slope \(t\) of the line \(P_{0}P\), where \(P_{0}=M_{1}\).

Figure 9: Geometric interpretation of the fourth Euler substitution in the case \(a<0\). The point \(P\) is parameterized by the slope \(t\) of the line \(P_{0}P\), where \(P_{0}=M_{1}\).

The algebraic description of the fourth Euler substitution is based on the canonical form of the quadratic polynomial: \[y=\sqrt{a(x-p)^{2}+q} \tag{30}\] where \[p=-\frac{b}{2a}\,\qquad q=-\frac{\Delta}{4a}. \tag{31}\] The fourth Euler substitution is defined by: \[y=\sqrt{q}+(x-p)t. \tag{32}\] Squaring both sides we get: \[a(x-p)^{2}+q=q+2(x-p)t\sqrt{q}+(x-p)^{2}t^{2}. \tag{33}\] The constant \(q\) cancels out and dividing both sides by \(x-p\), we obtain \[a(x-p)=2t\sqrt{q}+(x-p)t^{2}, \tag{34}\] which is linear in \(x\). Hence \[x-p=\frac{2t\sqrt{q}}{a-t^{2}}\,, \tag{35}\] and using (32) we get \[y=\frac{a+t^{2}}{a-t^{2}}\sqrt{q}. \tag{36}\] Thus we have a rational dependence of \(x\) and \(y\) on the parameter \(t\).
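Equations (35)–(36) can be checked, and applied to a concrete integral, with a short symbolic computation. The snippet below is our own SymPy sketch, not part of the paper.

```python
# Check (ours): the fourth-substitution parameterization (35)-(36) lies on
# y^2 = a (x - p)^2 + q, and it rationalizes a simple example integrand.
import sympy as sp

a, p, q, t = sp.symbols('a p q t', positive=True)

x_t = p + 2*t*sp.sqrt(q) / (a - t**2)            # Eq. (35)
y_t = sp.sqrt(q) * (a + t**2) / (a - t**2)       # Eq. (36)

print(sp.simplify(y_t**2 - (a*(x_t - p)**2 + q)))   # prints 0

# Example: rationalize the integrand of  int dx / sqrt(x^2 + 1)  (a = 1, p = 0, q = 1).
xs = x_t.subs({a: 1, p: 0, q: 1})
ys = y_t.subs({a: 1, p: 0, q: 1})
integrand_t = sp.simplify(sp.diff(xs, t) / ys)   # a rational function of t: 2/(1 - t^2)
print(integrand_t)
```

Integrating \(2/(1-t^{2})\) and substituting back \(t=(y-\sqrt{q})/(x-p)=(\sqrt{x^{2}+1}-1)/x\) recovers the familiar \(\ln\left(x+\sqrt{x^{2}+1}\right)\) up to a constant.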
Moreover, \[\frac{dx}{dt}=\frac{2(a+t^{2})\sqrt{q}}{(a-t^{2})^{2}}\, \tag{37}\] and we can easily transform the irrational integral (1) into the integral of a function rational with respect to \(t\). ### Rational parameterization - other parameters The geometric approach also suggests some modifications or new variants of the existing rational parameterizations. Here we confine ourselves to one example. Introducing a new parameter \(\tau\) \[\tau=2t\sqrt{a}-b\, \tag{38}\] we obtain the following simplification of the first Euler substitution: \[x=-\frac{1}{4a}\left(\tau+\frac{\Delta}{\tau}+2b\right)\,\qquad y=\frac{1}{4 \sqrt{a}}\left(\tau-\frac{\Delta}{\tau}\right). \tag{39}\] Geometrically, the parameter \(\tau=0\) corresponds to the line passing through the point \((p,0)\), and this is one of the two asymptotes (that is why \(x\rightarrow\infty\) and \(y\rightarrow\infty\) for \(\tau\to 0\)). ## 5 Euler's substitutions _versus_ trigonometric substitutions Another popular method for computing irrational integrals (1) consists in making a suitable trigonometric or hyperbolic substitution. We use the canonical form of the quadratic curve (compare (30)): \[y^{2}=a(x-p)^{2}+q. \tag{40}\] Assuming \(q\neq 0\) (otherwise \(y\) depends linearly on \(x\)) we introduce new variables \(\xi,\eta\) as follows: \[\eta=\frac{y}{\sqrt{|q|}}\,\qquad\xi=\frac{(x-p)\sqrt{|a|}}{\sqrt{|q|}}. \tag{41}\] Then (40) becomes \[\eta^{2}=(\operatorname{sgn}a)\ \xi^{2}+\operatorname{sgn}q\, \tag{42}\] because \(a/|a|=\operatorname{sgn}a\), etc. Thus we have three separate cases (in the fourth case, with both signs negative, there are no real solutions), where trigonometric or hyperbolic substitutions are well known: \[\begin{array}{lcl}\eta=\sqrt{\xi^{2}-1}&\implies&\xi=\cosh\vartheta\,\ \eta= \sinh\vartheta\,\\ \eta=\sqrt{1-\xi^{2}}&\implies&\xi=\cos\vartheta\,\ \eta=\sin\vartheta\,\\ \eta=\sqrt{\xi^{2}+1}&\implies&\xi=\sinh\vartheta\,\ \eta=\cosh\vartheta. \end{array} \tag{43}\] Is this better than Euler's substitutions? This is a matter of taste. Perhaps it is easier to memorize; however, one has to remember that integrals of trigonometric or hyperbolic functions have to be converted into integrals of rational functions by another substitution: \[t=\tan\frac{\vartheta}{2}\quad\text{or}\quad t=\tanh\frac{\vartheta}{2}. \tag{44}\] ## 6 Conclusions We presented and discussed a geometric approach to Euler substitutions. One consequence of this thorough discussion was the introduction of the fourth Euler substitution in addition to the three traditionally mentioned Euler substitutions. In fact, one can speak of an infinite number (a one-parameter family) of Euler-like substitutions. They can be further modified or simplified by suitable linear or fractional linear transformations.
2309.07368
Fabrics: A Foundationally Stable Medium for Encoding Prior Experience
Most dynamics functions are not well-aligned to task requirements. Controllers, therefore, often invert the dynamics and reshape it into something more useful. The learning community has found that these controllers, such as Operational Space Control (OSC), can offer important inductive biases for training. However, OSC only captures straight line end-effector motion. There's a lot more behavior we could and should be packing into these systems. Earlier work [15, 16, 19] developed a theory that generalized these ideas and constructed a broad and flexible class of second-order dynamical systems which was simultaneously expressive enough to capture substantial behavior (such as that listed above), and maintained the types of stability properties that make OSC and controllers like it a good foundation for policy design and learning. This paper, motivated by the empirical success of the types of fabrics used in [20], reformulates the theory of fabrics into a form that's more general and easier to apply to policy learning problems. We focus on the stability properties that make fabrics a good foundation for policy synthesis. Fabrics create a fundamentally stable medium within which a policy can operate; they influence the system's behavior without preventing it from achieving tasks within its constraints. When a fabric is geometric (path consistent) we can interpret the fabric as forming a road network of paths that the system wants to follow at constant speed absent a forcing policy, giving geometric intuition to its role as a prior. The policy operating over the geometric fabric acts to modulate speed and steers the system from one road to the next as it accomplishes its task. We reformulate the theory of fabrics here rigorously and develop theoretical results characterizing system behavior and illuminating how to design these systems, while also emphasizing intuition throughout.
Nathan Ratliff, Karl Van Wyk
2023-09-14T01:01:13Z
http://arxiv.org/abs/2309.07368v1
# Fabrics: A Foundationally Stable Medium for Encoding Prior Experience ###### Abstract Most physical systems have dynamics functions that are just a nuisance to policies. Torque policies, for instance, usually have to effectively invert the natural classical mechanical dynamics to get their job done. Because of this, we often use controllers to make things easier on policies. For instance, inverse dynamics controllers wipe out the physical dynamics so the policy starts from a clean slate. That makes learning easier, but still the policy needs to learn everything about the problem, including aspects of a solution which are common to many other problems, such as how to make the end-effector move in a straight line, how to avoid joints and self collisions, how to avoid obstacles, etc. Over the past few years it's become standard to formulate learning not in C-space, but in end-effector space and use controllers such as Operational Space Control (OSC) to capture some of these commonalities. These controllers, whether inverse dynamics or OSC, reshape the natural dynamics of the system into a different second-order dynamical system whose behavior is more useful. And the trend is, the more useful behavior we can pack into these reshaped systems, the easier it is to learn policies. However, OSC is from the 80's, and captures only straight line end-effector motion. There's a lot more behavior we could and should be packing into these systems. Earlier work [15, 16, 19] developed a theory that generalized these ideas and constructed a broad and flexible class of second-order dynamical systems which was simultaneously expressive enough to capture substantial behavior (such as that listed above), and maintained the types of stability properties that make OSC and controllers like it a good foundation for policy design and learning. This paper, motivated by the empirical success of the types of fabrics used in [20], reformulates the theory of fabrics into a form that's more general and easier to apply to policy learning problems. We focus on the stability properties that make fabrics a good foundation for policy synthesis. Fabrics create a fundamentally stable medium within which a policy can operate; they influence the system's behavior without preventing it from achieving tasks within its constraints. When a fabrics is _geometric_ (path consistent) we can interpret the fabric as forming a _road network_ of paths that the system wants to follow at constant speed absent a forcing policy, giving geometric intuition to its role as a prior. The policy operating over the geometric fabric acts to modulate speed and steers the system from one road to the next as it accomplishes its task. We reformulate the theory of fabrics here rigorously and develop theoretical results characterizing system behavior and illuminating how to design these systems, while also emphasizing intuition throughout. ## I Introduction Policies all operate on underlying system dynamics: what the robot wants to do absent external control. These dynamics can be as straightforward as the underlying classical mechanical dynamics of the robot, where the system's inertia defines its Riemannian geometry, the network of paths the system would travel along absent gravity and frictions (see [16] for a discussion of the connection between classical mechanics and Riemannian geometry). That mechanical system of paths, though, is often irrelevant to tasks and a hindrance to achieving a desired behavior. 
Therefore, control systems often work to reshape that geometry into something more relevant, or at least less disruptive. For instance, inverse dynamics control [17] removes the geometry entirely, replacing it with a Euclidean geometry in C-space (a blank slate), such that additional controllers can generate a desired behavior without competing with the native system geometry. Operational space control [9] builds upon this idea and not only clears the geometry, but also reshapes it into something more relevant to the task. Specifically, it replaces the physical geometry with a different Riemannian geometry where geodesics move the end-effector in straight lines. Policies then build off that more useful geometry to define useful task behavior. Since tasks are often more easily described in the end-effector space, this starting geometry is highly relevant to many problems--it encodes useful prior information. However, operational space control captures only a small fraction of the commonalities among tasks. Most tasks, for instance, require some form of obstacle awareness, such as avoidance or attraction toward a surface (e.g. grasping). Moreover, robots generally avoid joint limits and self-collisions, and approach targets from a particular direction (e.g. approach a table orthogonal to a surface when touching it). Many of these behavioral elements can and should be factored out and encoded into the reshaping controller itself, and ideally we should extract these common behavioral components from data. In this work, we formalize this concept of an underlying behavior shaping controller into what we call _fabrics_. We define fabrics rigorously as _conservative_ autonomous second-order differential equations, show how to construct them by _energizing_ a generating system, and thoroughly characterize their intrinsic stability properties. This theory supports the type of fabric design used in recent fabric learning work such as [20] and gives the theoretical foundations for encoding prior information into an underlying fabric and training policies over the top of it. When the fabric has a particular geometric path consistency property, we call it a _geometric_ fabric. This path consistency gives the fabric a speed invariant road network of paths that guide the system around obstacles and other constraints and broadly encode important prior information. Policies navigating these fabrics generally follow the network of paths and need only choose when and how strongly to push the system from one path to another and how to regulate energy along the way. We provide a number of theoretical statements characterizing energy regulation and system convergence, including convergence to desired goals (the zero set of a forcing term). We tailor the theory to providing insight into the design and training of fabrics and the policies residing on them. Importantly, these shared fabrics, especially those learned from data [20], constitute well-informed priors on behavior. We discuss this perspective throughout this work. Generally, when we us the term _prior_, we mean it in the broader sense than purely Bayesian probabilistic. We simply mean it acts as a way to inject prior experience into the system to improve the sample efficiency of training policies (including the manual design of policies which is, itself, an information theoretic learning process--an iterative process resulting in a policy expected to generalize to novel situations.) 
Note that fabrics can be used to develop probabilistic priors by running stochastic policies over them to generate distributions of trajectories. But the underlying fabric itself, which captures the essence of the encoded information in its geometry, is not probabilistic. ### _Related work_ Since robots are physical systems, their dynamics are well-understood and governed by the Euler-Lagrange equation, as characterized in any number of introductory robotics text books [7, 17]. The mathematics of these systems is sophisticated, with Lagrangian symmetries giving rise to conserved quantities such as energy conservation [18], and control theorists have exploited those mathematical properties thoroughly for the stable design of complex nonlinear controllers by reshaping the systems into different, more favorable, dynamical systems [1, 4]. These fundamental equations have also been used in the creation of modern robot simulators like [11, 12], which are critical tools for designing or learning robot control policies. In classical systems, the Euler-Lagrange equations decompose into two parts. One part is the closed system's inertial equations. This component can be shown to be geometric in nature in the sense that it produces speed-invariant paths through space [4, 16]. The system, under the influence of only its inertia, follows the same path regardless of speed (or more generally accelerations along the direction of motion). These geometries are Riemannian, and the system's mass matrix is the Riemannian metric (see [16] for a derivation). The second part includes additional forcing and damping terms. Both of these components are required to accurately model the complex, nonlinear physical phenomena in the real world. Reshaping these models into useful behavioral systems within the same class of classical mechanical systems (Riemannian geometries) is common [2, 3, 4, 9], and interestingly, Riemannian geodesics have also been shown to model large segments of human motion in [10, 13], indicating that energy efficient human motion can be largely captured by following geometric paths. However, these Riemannian systems are fundamentally limited in their expressivity for two reasons: their metrics can only be a function of position (no velocity), and the metric plays a double role of both defining the geometry of paths itself and specifying how one sub-system weights together with another. Broader classes of second-order differential equations, such as Riemannian Motion Policies (RMPs) for motion generation in [14], aren't limited in these ways and have been shown empirically to have high-capacity for representing intricate behaviors, but are less well understood. Recently, [19] generalized classical mechanical models to what are called geometric fabrics, building off a type of system termed a _bent Finsler systems_, to expand the modeling capacity of these types of systems specifically to remove those above limitations. Geometric fabrics capture the flexibility of RMPs while being provably stable and maintaining a form of (non-Riemannian) geometric path consistency, building off the mathematics of Finsler and Spray geometry [16]. With bent Finsler systems, behavioral designers were able to engineer policies that can outperform both the classical mechanical systems of geometric control and RMPs. 
These systems were also shown to outperform linear dynamical systems (such as Dynamic Movement Primitives, DMPs), RMPs, and a variety of baseline neural architectures like Long Short-Term Memory (LSTM) networks in learning contexts [20]. These systems superficially resemble artificial potential fields [8], but are built to enable the design of the _geometry_ rather than the potential function, which improves their regularity and dramatically boosts performance in practice. Behavior can be written directly into the underlying road network of paths, reshaping the system geometry, rather than relying on the potential function to push the system (fight) against a less relevant natural geometry. Independently, Bylard et al. [5] developed what are called Pullback Bundle Dynamical Systems (PBDS) as a rigorously covariant version of the Geometric Dynamical Systems (GDS) developed earlier in [6]. They approached the problem as developing Riemannian metrics on the tangent bundle (space of positions and velocities) of a given manifold. While rigorous, those systems are analogous in their representational capacity to the Lagrangian fabrics outlined in [15], and from the analysis in [19] it is now known that they lack the flexibility to represent the geometry of paths and the metric independently of one another. For that reason, PBDS rely heavily on non-geometric potential functions similar to the standard potential shaping techniques of geometric control [4]. The perspective we develop here builds from the results on nonlinear (spray) geometries of paths detailed in [16], and requires less technical machinery than developing metrics directly on the tangent bundle. Finsler fabrics and more broadly Lagrangian fabrics are analogous to PBDS, but geometric fabrics are a fundamentally more expressive class of systems. All of these earlier works, while elegant in their generalization of classical mechanical or Riemannian systems, often require complex tensor calculations, such as evaluating the Euler-Lagrange equation, to fully follow the theory. This complexity introduces challenges, especially when learning is involved. Here, we reformulate these systems in a way that's more intuitive, easier to handle both conceptually and implementationally, and emphasizes the role they play as fundamentally stable mediums for guiding policies. ### _A note on generalized notation_ Often a classical mechanical, (bent) Finsler, or RMP system takes the form \(\mathbf{M}(\mathbf{q},\dot{\mathbf{q}})\ddot{\mathbf{q}}+\boldsymbol{\xi}( \mathbf{q},\dot{\mathbf{q}})=\mathbf{0}\), and when forced by some potential function \(\psi(\mathbf{q})\), and damped by a dissipating term \(\mathbf{B}(\mathbf{q},\dot{\mathbf{q}})\dot{\mathbf{q}}\), we get \[\mathbf{M}\ddot{\mathbf{q}}+\boldsymbol{\xi}=-\partial\psi-\mathbf{B}\dot{ \mathbf{q}}. \tag{1}\] The resulting acceleration is \[\ddot{\mathbf{q}} =-\mathbf{M}^{-1}\boldsymbol{\xi}-\mathbf{M}^{-1}\big{(}\partial \psi+\mathbf{B}\dot{\mathbf{q}}\big{)} \tag{2}\] \[=\widetilde{\mathbf{h}}(\mathbf{q},\dot{\mathbf{q}})+\mathbf{f}( \mathbf{q},\dot{\mathbf{q}}), \tag{3}\] where \(\widetilde{\mathbf{h}}=-\mathbf{M}^{-1}\boldsymbol{\xi}\) and \(\mathbf{f}=-\mathbf{M}^{-1}\big{(}\partial\psi+\mathbf{B}\dot{\mathbf{q}}\big{)}\).
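As a purely illustrative numerical reading of Equations 1-3 (all concrete choices of \(\mathbf{M}\), \(\boldsymbol{\xi}\), \(\psi\), and \(\mathbf{B}\) below are toy placeholders of ours, not quantities from this paper), the decomposition can be assembled as follows.

```python
import numpy as np

def decomposed_acceleration(q, qd):
    """Toy instance of Eq. (3): qdd = h_tilde(q, qd) + f(q, qd)."""
    # Placeholder system metric, curvature-like term, potential gradient, and damper.
    M = (1.0 + 0.1 * (qd @ qd)) * np.eye(2)   # positive-definite metric M(q, qd)
    xi = 0.05 * (qd @ qd) * qd                # velocity-dependent term xi(q, qd)
    dpsi = q                                  # gradient of psi(q) = 0.5 ||q||^2
    B = 0.5 * np.eye(2)                       # damping matrix

    h_tilde = -np.linalg.solve(M, xi)         # h_tilde = -M^{-1} xi
    f = -np.linalg.solve(M, dpsi + B @ qd)    # f = -M^{-1} (dpsi + B qd)
    return h_tilde + f

print(decomposed_acceleration(np.array([1.0, -0.5]), np.array([0.0, 0.3])))
```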
The first term \(\widetilde{\mathbf{h}}\) in this decomposition is conservative and often geometric (and/or unbiased in the sense that it won't push the system away from rest) and the second term both forces the system away from \(\widetilde{\mathbf{h}}\) and regulates the injection and dissipation of energy. In this work, we address general decomposed systems of the form given in Equation 3. We use \(\widetilde{\mathbf{h}}\) to denote a fabric (conservative term) and \(\mathbf{f}\) to denote a forcing term that pushes against the fabric and regulates energy. The tilde denotes that the term is conservative (a fabric), to distinguish it from a generator that creates a fabric through energization (see Definition II.9 and Lemma II.10). This decomposition is very general and covers many systems, including the ones above. If \(\widetilde{\mathbf{h}}\) has an associated system metric \(\mathbf{M}\), it's often useful to think of forcing policies as force functions such as \(\pi(\mathbf{q},\dot{\mathbf{q}})=-\partial\psi-\mathbf{B}\dot{\mathbf{q}}\) which are transformed by the system metric into the forcing term \(\mathbf{M}^{-1}\pi(\mathbf{q},\dot{\mathbf{q}})\) such as in Equation 2. Note that strong eigen-directions of \(\mathbf{M}\) trim away components of the force. In that sense, \(\mathbf{M}\) defines the system priorities, intuitively defining which directions in space are important to \(\widetilde{\mathbf{h}}\) and which aren't. ### _Overview_ We begin in Section II with a series of results characterizing the fundamental stability of fabrics as a medium for policies to operate on. Throughout, we define and develop the theory of fabrics generally but also detail the important role path consistency plays in the more specific case of _geometric_ fabrics which form a concrete road network of paths for policies to operate across. We start by defining fabrics to be conservative second-order autonomous differential equations in Definition II.1 and show that energy conservation, by itself, gives the fabric important stability properties. Terminologically, in Definition II.4, we decompose the full system into \(\ddot{\mathbf{q}}=\widetilde{\mathbf{h}}+\mathbf{f}\), where \(\widetilde{\mathbf{h}}\) is the fabric and \(\mathbf{f}\) is the policy, and together they form a _forced system_. This terminology derives from the force form \(\mathbf{M}(\mathbf{q},\dot{\mathbf{q}})\ddot{\mathbf{q}}+\boldsymbol{\xi}=\pi (\mathbf{q},\dot{\mathbf{q}})\) where \(\mathbf{M}\) is a symmetric positive definite system metric and we have the relations \(\widetilde{\mathbf{h}}=-\mathbf{M}^{-1}\boldsymbol{\xi}\) and \(\mathbf{f}=\mathbf{M}^{-1}\boldsymbol{\pi}\). Here \(\pi\) is called a _force_ policy. Intuitively, a system traveling along a fabric will always maintain constant energy if the policy does nothing, and the policy can always simply dissipate energy to come to a stop. Over a bounded period of time, a bounded policy can only inject a finite amount of energy into the system, so it always has the means to easily bring the system back to rest. Fabrics, in that sense, innately form a fundamentally stable medium across which the policy operates. We show in Lemma II.10 that we can always transform any given second-order autonomous system into a fabric simply by speeding up or slowing down along the direction of motion, and Definition II.9 gives a specific _energization_ transform that does that.
Importantly, Proposition II.13 then shows that if the underlying generating system is geometric (path consistent), energization stabilizes it without changing the collection of paths (since it operates entirely by accelerating along the direction of motion which is known to leave paths unchanged in geometric systems). Then Proposition II.16 shows that any policy operating across a geometric fabric can be decomposed into a zero-work energy-preserving term which bends (or steers) the paths without changing the energy, and an energy regulation term which modulates speed along the direction of motion without changing the path. All policies thereby act to simply modulate the underlying fabric's energy while steering the system. When training policies, one can potentially exploit this observation to define data efficient policy parameterizations. Section II finishes with a discussion of convergence to the zero set of the forcing policy. Broadly, there are many cases where goals can be characterized by zero sets of some vector field. For instance, the local minima of a potential are the zero sets of its gradient. A forcing policy is a vector field that vanishes when it no longer wants to move the system, so the zero set of the forcing policy is a good characterization of the policy's goals. Proposition II.17 presents some general conditions under which the forced system converges to the zero set. One of those conditions is the practical statement that if the system (with bounded accelerations) converges, it must converge to the zero set. That comes from the simple observation that the fabric is conservative and therefore wouldn't itself push the system from rest (zero energy). So if it comes to rest at the zero set, neither the fabric nor the policy wants it to move from there. It's often straightforward to design convergent systems that dissipate energy properly to bring the system to rest at a zero set, so even if we can't otherwise prove global convergence of the system, we can design practically convergent systems which are guaranteed to be at the policy's zero set when they converge. Moreover, these observations suggest that given a goal, we can parameterize the policy to ensure the policy is zero if and only if it's at the goal. Then a training system needs only learn how to modulate energy effectively to converge nicely to that zero set. Section III moves into a more complete discussion of theoretical conditions on energy regulation. Propositions III.1 and III.3 give some policy parameterizations for which we can guarantee bounded energy and a natural form of energy regulation. The main result of this section is Theorem III.5, which gives a specific energy regulation formula under which any forced fabric can be guaranteed to converge to the zero set of the policy provided there exists what we call a compatible potential which we use to guide the energy regulation. Section IV gives a final stability analysis for a common case where the fabric has a corresponding system metric and is being forced by a damped potential function. This setting is similar to the geometric fabric setting of [19], but more general. Importantly, we allow the underlying geometric fabric to be arbitrary and paired with any system metric. It's typically much easier to design and implement such systems than the bent Finsler geometries described in [19]. Finally, we summarize the takeaways in Section VI.
## II The Fundamental Stability of Fabrics Fabrics are stable autonomous second-order differential equations that can form well-informed priors on policies by encoding behavioral information common across many tasks. Individual control policies use fabrics by navigating across them. In this section, we define fabrics and characterize their utility and fundamental stability. Throughout this work we use the multivariate calculus notational conventions outlined in [15]. Note that in earlier work we built in specific conditions to handle boundary conformance for manifolds with a boundary. Those boundary conditions often require systems to be unbounded (e.g. accelerations or metrics approach infinity), which is impractical for real-world implementation and numerical integration. More recently, [19] described how to integrate explicit hard constraints into the definition of systems like the ones we consider here; constraint forces effectively fold into the forcing policy making them conceptually simpler. In many cases practical implementations use terms that are softer and better conditioned to smoothly avoid constraints (e.g. an added potential function in the framework of [19]). We, therefore, cover only the unconstrained setting here, and refer the reader to [19] for details on how to incorporate hard constraints. **Definition II.1** (Fabrics).: An autonomous differential equation \(\ddot{\mathbf{q}}=\widetilde{\mathbf{h}}(\mathbf{q},\dot{\mathbf{q}})\) is a _fabric_ if it conserves a Finsler energy \(\mathcal{L}(\mathbf{q},\dot{\mathbf{q}})\). This definition states that a fabric is simply a conservative second-order autonomous differential equation. That conservation property is what makes the fabric a nice stable medium for policy design. The following Lemma shows that the fabric itself doesn't attempt to push a system from rest. This property will enable policies to reliably navigate the fabric and converge to any given desired goal. **Lemma II.2**.: _If \(\widetilde{\mathbf{h}}\) is a fabric, then \(\widetilde{\mathbf{h}}(\mathbf{q},\mathbf{0})=\mathbf{0}\)._ Proof.: Let \(\mathcal{L}\) be the fabric's conserved Finsler energy. Finsler energies can be written \(\mathcal{L}=\frac{1}{2}\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\dot{ \mathbf{q}}\) where \(\mathbf{M}_{\mathcal{L}}=\partial_{\dot{\mathbf{q}}\dot{\mathbf{q}}}^{2} \mathcal{L}\) (see [16]), so \(\dot{\mathbf{q}}=\mathbf{0}\) if and only if \(\mathcal{L}=0\). If \(\widetilde{\mathbf{h}}(\mathbf{q},\mathbf{0})\neq\mathbf{0}\) at time \(t\), by continuity, there exists an \(\epsilon>0\) such that \(\dot{\mathbf{q}}\neq\mathbf{0}\) at time \(t+\epsilon\). But that would mean the energy changes, which contradicts the fabric's conservation property. Therefore, \(\widetilde{\mathbf{h}}(\mathbf{q},\mathbf{0})=\mathbf{0}\). _Remark II.3_.: Lemma II.2 shows that fabrics as defined in Definition II.1 are _unbiased_ in the sense that they can influence the system's behavior while in motion, but vanish when the system stops. In other words, a system at rest remains at rest, allowing convergence regions to be entirely governed by the zero sets of other forcing terms (see Definition III.6 in [15] for a precise description.) **Definition II.4** (Navigating across fabrics).: Let \(\mathbf{f}(\mathbf{q},\dot{\mathbf{q}})\) be a finite second-order differential equation.
\(\mathbf{f}\) is called a _navigation policy_ when added to a fabric to form the system \[\ddot{\mathbf{q}}=\widetilde{\mathbf{h}}(\mathbf{q},\dot{\mathbf{q}})+ \mathbf{f}(\mathbf{q},\dot{\mathbf{q}}). \tag{4}\] We often say \(\mathbf{f}\)_navigates_ across \(\widetilde{\mathbf{h}}\). When the context is clear, we often refer to it simply as the _policy_. We often describe the system in Equation 4 as a _forced_ system because of its relation to forcing policies as defined next. **Definition II.5** (Forcing policies).: In many cases, there is a relevant positive-definite system metric \(\mathbf{M}(\mathbf{q},\dot{\mathbf{q}})\) that can be used to shape the navigating term (see Equation 2 for the intuition). In that case, we usually write the system in its force form \[\mathbf{M}\ddot{\mathbf{q}}+\boldsymbol{\xi}=\boldsymbol{\tau} \tag{5}\] where \(\boldsymbol{\xi}=-\mathbf{M}\widetilde{\mathbf{h}}\) and \(\boldsymbol{\tau}\) is an external force. The navigation term is then constructed using a _forcing policy_ denoted \(\boldsymbol{\tau}=\pi(\mathbf{q},\dot{\mathbf{q}})\), matching standard policy notation. Since the metric \(\mathbf{M}\) is invertible, there is a one-to-one correspondence between forcing policy and navigation term with \(\mathbf{f}=\mathbf{M}^{-1}\boldsymbol{\tau}\). Again, when the context is clear, we often refer to it simply as the _policy_. The following lemma collects together some previously proven results that characterize the energy conservation properties of fabrics. **Lemma II.6** (Properties of fabric energies).: _Let \(\widetilde{\mathbf{h}}(\mathbf{q},\dot{\mathbf{q}})\) be a fabric with Finsler energy \(\mathcal{L}\). The Hamiltonian has the property \(\mathcal{H}_{\mathcal{L}}=\dot{\mathbf{q}}^{T}\partial_{\dot{\mathbf{q}}} \mathcal{L}-\mathcal{L}=\mathcal{L}\) and its time derivative takes the form \(\dot{\mathcal{H}}_{\mathcal{L}}=\dot{\mathbf{q}}^{T}\big{(}\mathbf{M}_{ \mathcal{L}}\ddot{\mathbf{q}}+\boldsymbol{\xi}_{\mathcal{L}}\big{)}\) where \(\mathbf{M}_{\mathcal{L}}\ddot{\mathbf{q}}+\boldsymbol{\xi}_{\mathcal{L}}= \mathbf{0}\) are the Euler-Lagrange equations of \(\mathcal{L}\) with \(\mathbf{M}_{\mathcal{L}}=\partial_{\dot{\mathbf{q}}\dot{\mathbf{q}}}^{2} \mathcal{L}\) and \(\boldsymbol{\xi}_{\mathcal{L}}=\partial_{\dot{\mathbf{q}}\mathbf{q}}^{2} \mathcal{L}\,\dot{\mathbf{q}}-\partial_{\mathbf{q}}\mathcal{L}\). The fabric conserves \(\mathcal{L}\) so it has the property \(\dot{\mathcal{H}}_{\mathcal{L}}=\dot{\mathbf{q}}^{T}\big{(}\mathbf{M}_{ \mathcal{L}}\widetilde{\mathbf{h}}+\boldsymbol{\xi}_{\mathcal{L}}\big{)}=0\)._ Proof.: These results are proven in [19], with the final fabric property following from conservation of energy. We use these properties to prove the following theorem which shows that fabrics are fundamentally stable in the sense that the energy of a forced system is bounded at any given time and can always be dissipated to bring the system to rest. **Theorem II.7** (Fundamental stability of fabrics).: _Let \(\widetilde{\mathbf{h}}(\mathbf{q},\dot{\mathbf{q}})\) be a fabric with Finsler energy \(\mathcal{L}\).
If \(\mathbf{f}\) is a finite navigation policy, the corresponding forced system \(\ddot{\mathbf{q}}=\widetilde{\mathbf{h}}+\mathbf{f}\) has finite energy after a finite time and will come to rest if the navigating term is set to \(\mathbf{f}=\mathbf{f}_{\mathrm{damp}}=-\mathbf{M}_{\mathcal{L}}^{-1}\mathbf{B}( \mathbf{q},\dot{\mathbf{q}})\dot{\mathbf{q}}\), where \(\mathbf{B}(\mathbf{q},\dot{\mathbf{q}})\) is any positive-definite damping matrix._ Proof:: By Lemma II.6, \(\dot{\mathcal{L}}[\ddot{\mathbf{q}}]=\dot{\mathbf{q}}^{T}\big{(}\mathbf{M}_{ \mathcal{L}}\ddot{\mathbf{q}}+\boldsymbol{\xi}_{\mathcal{L}}\big{)}\), so for our system we have \[\dot{\mathcal{L}}[\widetilde{\mathbf{h}}+\mathbf{f}] =\dot{\mathbf{q}}^{T}\big{(}\mathbf{M}_{\mathcal{L}}(\widetilde{ \mathbf{h}}+\mathbf{f})+\boldsymbol{\xi}_{\mathcal{L}}\big{)} \tag{6}\] \[=\dot{\mathbf{q}}^{T}\big{(}\mathbf{M}_{\mathcal{L}}\widetilde{ \mathbf{h}}+\boldsymbol{\xi}_{\mathcal{L}}\big{)}+\dot{\mathbf{q}}^{T}\mathbf{M }_{\mathcal{L}}\mathbf{f}\] (7) \[=\dot{\mathcal{L}}[\widetilde{\mathbf{h}}]+\dot{\mathbf{q}}^{T} \mathbf{M}_{\mathcal{L}}\mathbf{f}\] (8) \[=\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{f} \tag{9}\] since \(\dot{\mathcal{L}}[\widetilde{\mathbf{h}}]=0\). This is the work done by \(\mathbf{f}\) on the system. The total work gives the energy after \(T\) seconds as \[\mathcal{L}(\mathbf{q}_{T},\dot{\mathbf{q}}_{T})=\mathcal{L}(\mathbf{q}_{0}, \dot{\mathbf{q}}_{0})+\int_{0}^{T}\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L} }\mathbf{f}dt, \tag{10}\] which is finite. Choosing \(\mathbf{f}=\mathbf{f}_{\mathrm{damp}}=-\mathbf{M}_{\mathcal{L}}^{-1}\mathbf{B }(\mathbf{q},\dot{\mathbf{q}})\dot{\mathbf{q}}\) after \(T\) seconds gives energy change \[\dot{\mathcal{L}} =\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\big{(}-\mathbf{M}_ {\mathcal{L}}^{-1}\mathbf{B}\dot{\mathbf{q}}\big{)} \tag{11}\] \[=-\dot{\mathbf{q}}^{T}\mathbf{B}\dot{\mathbf{q}}<0. \tag{12}\] Since \(\mathcal{L}\) is lower bounded, \(\dot{\mathcal{L}}=-\dot{\mathbf{q}}^{T}\mathbf{B}\dot{\mathbf{q}}\to 0\) which means both \(\dot{\mathbf{q}}\to 0\) and \(\ddot{\mathbf{q}}\to 0\) as \(t\to\infty\). _Remark II.8_.: See Corollary II.15 for a simplified form for the damper \(\mathbf{f}_{\mathrm{damp}}=-\beta(\mathbf{q},\dot{\mathbf{q}})\dot{\mathbf{q}}\) where \(\beta>0\) is a scalar, which is appealing for preserving the path consistency of geometric fabrics as defined in Proposition II.13. Given any (finite) autonomous second-order differential equation \(\mathbf{h}\), we can always accelerate along the direction of motion strategically to ensure any given Finsler energy is conserved. The following definition characterizes how to do that. **Definition II.9** (Energization).: Let \(\ddot{\mathbf{q}}=\mathbf{h}(\mathbf{q},\dot{\mathbf{q}})\) be a finite autonomous second-order differential equation, and let \(\mathcal{L}\) be an energy. The _energized_ system is the transformed system defined as \[\ddot{\mathbf{q}} =\mathrm{energize}_{\mathcal{L}}\big{[}\mathbf{h}(\mathbf{q}, \dot{\mathbf{q}})\big{]}=\mathbf{h}+\alpha\dot{\mathbf{q}} \tag{13}\] \[\text{where}\ \ \alpha=-\frac{\dot{\mathbf{q}}^{T}\big{(} \mathbf{M}_{\mathcal{L}}\mathbf{h}+\boldsymbol{\xi}_{\mathcal{L}}\big{)}}{ \dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\dot{\mathbf{q}}} \tag{14}\] This system transformation, which we call _energization_, turns any \(\mathbf{h}\) into a fabric by making it conservative.
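To make the energization transform of Definition II.9 concrete, here is a minimal numerical sketch of our own, assuming the simple Finsler energy \(\mathcal{L}=\frac{1}{2}\|\dot{\mathbf{q}}\|^{2}\) (so \(\mathbf{M}_{\mathcal{L}}=\mathbf{I}\) and \(\boldsymbol{\xi}_{\mathcal{L}}=\mathbf{0}\), and Equation 14 reduces to \(\alpha=-\dot{\mathbf{q}}^{T}\mathbf{h}/\dot{\mathbf{q}}^{T}\dot{\mathbf{q}}\)); the printed energy rate \(\dot{\mathbf{q}}^{T}\ddot{\mathbf{q}}\) is (numerically) zero.

```python
import numpy as np

def energize(h, q, qd, eps=1e-12):
    """Energization (Definition II.9) for L = 0.5 ||qd||^2: M_L = I, xi_L = 0."""
    hval = h(q, qd)
    alpha = -(qd @ hval) / (qd @ qd + eps)   # Eq. (14) specialized to this energy
    return hval + alpha * qd                 # Eq. (13)

def h(q, qd):
    """A toy generator: accelerate toward the origin, scaled by speed squared (HD2)."""
    return -(qd @ qd) * q

q, qd = np.array([1.0, 0.5]), np.array([-0.2, 0.7])
qdd = energize(h, q, qd)
print(qd @ qdd)  # d/dt (0.5 ||qd||^2) = qd . qdd, expected ~ 0
```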
Note that the energy can be any Lagrangian, although it's common for that energy to be more specifically a Finsler energy. **Lemma II.10**.: _Let \(\ddot{\mathbf{q}}=\mathbf{h}(\mathbf{q},\dot{\mathbf{q}})\) be a finite autonomous second-order differential equation, and let \(\mathcal{L}\) be a Finsler energy. The energized system \(\ddot{\mathbf{q}}=\mathrm{energize}_{\mathcal{L}}\big{[}\mathbf{h}\big{]}\) conserves \(\mathcal{L}\) and is therefore a fabric. We call \(\mathbf{h}\) the generator of a fabric constructed in this way._ Proof:: We show that the energized system conserves \(\mathcal{L}\). By Lemma II.6, \(\dot{\mathcal{L}}[\ddot{\mathbf{q}}]=\dot{\mathbf{q}}^{T}\big{(} \mathbf{M}_{\mathcal{L}}\ddot{\mathbf{q}}+\boldsymbol{\xi}_{\mathcal{L}}\big{)}\), so after energization, the time rate of change of \(\mathcal{L}\) is \[\dot{\mathcal{L}}[\mathbf{h}+\alpha\dot{\mathbf{q}}] =\dot{\mathbf{q}}^{T}\left[\mathbf{M}_{\mathcal{L}}\left(\mathbf{ h}-\frac{\dot{\mathbf{q}}^{T}\big{(}\mathbf{M}_{\mathcal{L}}\mathbf{h}+ \boldsymbol{\xi}_{\mathcal{L}}\big{)}}{\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L }}\dot{\mathbf{q}}}\dot{\mathbf{q}}\right)+\boldsymbol{\xi}_{\mathcal{L}}\right]\] \[=\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{h}-\left( \frac{\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{h}+\dot{\mathbf{q}}^{T} \boldsymbol{\xi}_{\mathcal{L}}}{\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}} \dot{\mathbf{q}}}\right)\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\dot{ \mathbf{q}}\] \[\qquad\qquad+\dot{\mathbf{q}}^{T}\boldsymbol{\xi}_{\mathcal{L}}\] \[=\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{h}-\dot{ \mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{h}-\dot{\mathbf{q}}^{T} \boldsymbol{\xi}_{\mathcal{L}}+\dot{\mathbf{q}}^{T}\boldsymbol{\xi}_{\mathcal{L}}\] \[=0.\] In general, energization may change the behavior of a system since the path traced by a system often changes when the system speeds up or slows down. (E.g. an orbiting satellite will fall to earth if it slows and shoots out to space if it speeds up.) The following proposition characterizes the class of systems whose behavior is unaffected by energization. **Definition II.11**.: A system \(\ddot{\mathbf{q}}=\mathbf{h}(\mathbf{q},\dot{\mathbf{q}})\) is Homogeneous of Degree 2 (HD2) if \(\mathbf{h}(\mathbf{q},\alpha\dot{\mathbf{q}})=\alpha^{2}\mathbf{h}(\mathbf{q}, \dot{\mathbf{q}})\) for \(\alpha\geq 0\). _Remark II.12_.: An HD2 system modulates its accelerations in just the right way to maintain its path, independent of speed. If the system were constrained to follow a given path, speeding up by a factor of \(\alpha\) would induce accelerations \(\alpha^{2}\) times higher to maintain the path. An HD2 system has this scaling property built in to make its integral curves trace speed invariant paths. This speed invariance is a defining property of geometries [16]. The next proposition characterizes the class of path consistent fabrics constructed by HD2 generators. **Proposition II.13** (Geometric Fabrics).: _Let \(\ddot{\mathbf{q}}=\mathbf{h}(\mathbf{q},\dot{\mathbf{q}})\) be an HD2 generator, and let \(\mathcal{L}(\mathbf{q},\dot{\mathbf{q}})\) be a Finsler energy. The paths traced by the fabric \(\ddot{\mathbf{q}}=\widetilde{\mathbf{h}}(\mathbf{q},\dot{\mathbf{q}})=\mathrm{energize}_{ \mathcal{L}}\big{[}\mathbf{h}\big{]}\) match those of \(\mathbf{h}\).
Moreover, the energized system is also HD2 so \(\ddot{\mathbf{q}}=\mathrm{energize}_{\mathcal{L}}\big{[}\mathbf{h}\big{]}+ \gamma\dot{\mathbf{q}}\) traces the same paths as its HD2 generator \(\mathbf{h}\) for any time varying \(\gamma\). Fabrics constructed this way are called geometric fabrics and the class of geometric fabrics is the unique class of path consistent fabrics._ Proof:: A property of HD2 systems is that they can accelerate along the direction of motion arbitrarily without changing the system's path [16]. The energization transformation is defined as \(\mathrm{energize}_{\mathcal{L}}\big{[}\mathbf{h}\big{]}=\mathbf{h}+\alpha \dot{\mathbf{q}}\) for a particular choice of \(\alpha\). Therefore, the energized system is path consistent. Similarly, adding another term \(\gamma\dot{\mathbf{q}}\) is also an acceleration along the direction of motion, so the paths remain consistent. Examining the system under the specific energization coefficient, we see \[\mathrm{energize}_{\mathcal{L}}\big{[}\mathbf{h}\big{]} =\mathbf{h}-\frac{\dot{\mathbf{q}}^{T}\big{(}\mathbf{M}_{ \mathcal{L}}\mathbf{h}+\boldsymbol{\xi}_{\mathcal{L}}\big{)}}{\dot{\mathbf{q}}^{T} \mathbf{M}_{\mathcal{L}}\dot{\mathbf{q}}}\dot{\mathbf{q}} \tag{15}\] \[=\mathbf{h}+\mathbf{A}\big{(}\mathbf{M}_{\mathcal{L}}\mathbf{h}+ \boldsymbol{\xi}_{\mathcal{L}}\big{)}, \tag{16}\] where \(\mathbf{A}=-\frac{\dot{\mathbf{q}}\dot{\mathbf{q}}^{T}}{\dot{\mathbf{q}}^{T} \mathbf{M}_{\mathcal{L}}\dot{\mathbf{q}}}\). Since \(\mathcal{L}\) is a Finsler energy, \(\mathbf{M}_{\mathcal{L}}\) is HD0 and \(\boldsymbol{\xi}_{\mathcal{L}}\) is HD2 (see [16] for these properties). \(\mathbf{A}\) is also HD0 since \(\mathbf{M}_{\mathcal{L}}\) is and there are two factors of \(\dot{\mathbf{q}}\) in both the numerator and denominator. Therefore, the energized system is HD2 since \(\mathbf{h}\) is HD2. We prove uniqueness by contradiction. Suppose \(\widetilde{\mathbf{h}}\) is a geometric fabric but is not HD2. (If it is HD2, then it can be constructed as described above.) Then there exists a \((\mathbf{q},\dot{\mathbf{q}})\) where \(\widetilde{\mathbf{h}}(\mathbf{q},\lambda\dot{\mathbf{q}}) \neq\lambda^{2}\widetilde{\mathbf{h}}(\mathbf{q},\dot{\mathbf{q}})\) for some \(\lambda\geq 0\). That means for that state and that \(\lambda\), the integral curve starting at \((\mathbf{q},\lambda\dot{\mathbf{q}})\) will deviate from the integral curve starting at \((\mathbf{q},\dot{\mathbf{q}})\) after some finite time. Therefore, it can't be geometric which is a contradiction since it's a geometric fabric. _Remark II.14_.: The bent Finsler systems described in [19], which can be characterized as generalizations of classical mechanical systems, are geometric fabrics as defined in Proposition II.13. The definition here, though, is broader and easier to work with in practice than the earlier definition. In bent Finsler systems, metrics must be defined by Finsler energies, requiring the application of Euler-Lagrange equations which can be computationally complex and challenging to implement. The definition here allows metrics to be arbitrary HD0 positive semi-definite matrices, dramatically simplifying design. The Finsler energy is still used for energization of the HD2 geometry generator, but it can remain simple since it needs only define the desired measure of speed, not the metric. This simpler setup was already used in [20] and is especially helpful where automatic differentiation is involved.
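Continuing the toy construction from the energization sketch above (our own illustration, not a construction from the paper), the HD2 property claimed in Proposition II.13 can be spot-checked numerically: scaling the velocity by \(\lambda\) scales the energized acceleration by \(\lambda^{2}\).

```python
import numpy as np

def h(q, qd):
    """Toy HD2 generator: h(q, lam*qd) = lam^2 h(q, qd) for lam >= 0."""
    return -(qd @ qd) * q

def energized(q, qd):
    """Energization with L = 0.5 ||qd||^2 (M_L = I, xi_L = 0), per Definition II.9."""
    hval = h(q, qd)
    alpha = -(qd @ hval) / (qd @ qd)
    return hval + alpha * qd

q, qd, lam = np.array([0.8, -0.3]), np.array([0.4, 1.1]), 3.0
lhs = energized(q, lam * qd)        # energized acceleration at the scaled velocity
rhs = lam**2 * energized(q, qd)     # HD2 prediction
print(np.allclose(lhs, rhs))        # expected: True
```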
**Corollary II.15**.: _A forced geometric fabric of the form \(\ddot{\mathbf{q}}=\mathrm{energize}_{\mathcal{L}}\big{[}\mathbf{h}\big{]}+\mathbf{f}\) with \(\mathbf{f}=\mathbf{f}_{\mathrm{damp}}=-\beta(\mathbf{q},\dot{\mathbf{q}})\dot{\mathbf{q}}\), where \(\beta>0\) is a scalar, comes to rest (by Theorem II.7) while remaining on the generator's speed-invariant network of paths._ The following proposition gives insight into the behavior of forced fabrics and helps guide design. It states that if we can get the system to converge with sufficient finite damping, it will converge to the zero set (goal) of the navigation policy. Strategically, we can increase the damping to slow the system as needed to give the navigation policy more influence over the behavior. And if we can prove the navigation policy converges on its own (or equivalently when forcing the Euclidean fabric), then we can construct a modified navigation policy that's guaranteed to converge to the desired goal. These results can guide policy design, although they are far from a complete characterization of convergence or stable policies. In many cases, we might learn a policy over a given fabric, for instance using RL. Convergence and stability are more complex in this setting, but fabrics make it easier to safely explore and find performant, stable, and convergent solutions. **Proposition II.17** (Convergence).: _Let \(\widetilde{\mathbf{h}}\) be a fabric and let \(\mathbf{f}\) be a bounded navigation policy with zero set \(\mathcal{S}=\{\mathbf{q}\;|\;\mathbf{f}(\mathbf{q},\mathbf{0})=\mathbf{0}\}\). Let \(\ddot{\mathbf{q}}=\widetilde{\mathbf{h}}+\mathbf{f}=\widetilde{\mathbf{h}}_{ \mathbf{f}}\) denote the forced system. Then if \(\widetilde{\mathbf{h}}_{\mathbf{f}}\) converges, it converges to \(\mathcal{S}\)._ Proof.: By Lemma II.2, \(\widetilde{\mathbf{h}}(\mathbf{q},\mathbf{0})=\mathbf{0}\) for all \(\mathbf{q}\). Therefore, at convergence, \(\ddot{\mathbf{q}}=\widetilde{\mathbf{h}}(\mathbf{q}^{*},\mathbf{0})+\mathbf{ f}(\mathbf{q}^{*},\mathbf{0})=\mathbf{f}(\mathbf{q}^{*},\mathbf{0})=\mathbf{0}\), which implies \(\mathbf{q}^{*}\in\mathcal{S}\). Proposition II.17 shows that training navigation policies can be a powerful design choice. If we enforce through structural choices the desired zero set of the navigation policy and train the policy to successfully converge, then we're guaranteed that it converges to the correct goal. When \(\mathbf{f}\) does not necessarily converge on its own (e.g. it may require additional damping), Theorem III.5 gives an explicit class of energy regulators that will guarantee convergence to \(\mathcal{S}\) in the case where there exists a _compatible_ potential. ## III Energy Regulation This next proposition characterizes how to regulate energy within a given range \([0,\mathcal{L}_{\max}]\) using an _energy regularizer_ while using a navigation policy \(\mathbf{f}\) to both modulate system energy and steer. When driven by \(\mathbf{f}\), the system increases energy (speeds up) to a maximum energy level then maintains that energy as long as \(\mathbf{f}\) is pushing the system forward. If the system is moving against \(\mathbf{f}\) it removes energy (slows down). Examples of when this second case may occur are (1) the system is moving the wrong way, e.g. away from a goal; (2) the system is approaching a goal and \(\mathbf{f}\) includes sufficient damping to bring it to rest at the goal. In both cases, the energy regularization is removed and \(\mathbf{f}\) acts to slow the system.
**Proposition III.1** (Energy Capping 1).: _Let \(\widetilde{\mathbf{h}}\) be a fabric with Finsler energy \(\mathcal{L}\) and let \(\mathbf{f}\) be a navigation policy. Design a regularized system of the form_ \[\ddot{\mathbf{q}}=\widetilde{\mathbf{h}}+\mathbf{f}-\lambda\mathbf{M}_{ \mathcal{L}}^{-1}\mathbf{B}\dot{\mathbf{q}}, \tag{20}\] _where \(\mathbf{M}_{\mathcal{L}}\) is the energy tensor of \(\mathcal{L}\) and \(\mathbf{B}(\mathbf{q},\dot{\mathbf{q}})\) is positive definite, and choose_ \[\lambda=\max\left\{0,\frac{\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}} \mathbf{f}}{\dot{\mathbf{q}}^{T}\mathbf{B}\dot{\mathbf{q}}+\gamma(\mathcal{L} )}\right\}, \tag{21}\] _where \(\gamma(0)=\gamma_{\max}\), \(\gamma(\mathcal{L}_{\max})=0\), and \(\mathcal{L}_{\max}\) is a desired energy cap. Then the regularized system has the following energy properties:_ 1. _Bounded energy:_ \(\mathcal{L}\in[0,\mathcal{L}_{\max}]\)_._ 2. _Energy increases when moving with_ \(\mathbf{f}\)_: When_ \(\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{f}\geq 0\)_, we have_ \(\dot{\mathcal{L}}\geq 0\) _with equality only when either_ \(\mathcal{L}=\mathcal{L}_{\max}\) _or_ \(\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{f}=0\)_._ 3. _Energy decreases when moving against_ \(\mathbf{f}\)_: When_ \(\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{f}<0\)_, we have_ \(\dot{\mathcal{L}}<0\)_._ 4. _Energy rates of change are instantaneously the same with and without the fabric, and the regularizing damper only decreases energy_: \(\dot{\mathcal{L}}[\widetilde{\mathbf{h}}+\mathbf{f}-\lambda\mathbf{M}_{ \mathcal{L}}^{-1}\mathbf{B}\dot{\mathbf{q}}]=\dot{\mathcal{L}}[\mathbf{f}- \lambda\mathbf{M}_{\mathcal{L}}^{-1}\mathbf{B}\dot{\mathbf{q}}]\leq\dot{ \mathcal{L}}[\mathbf{f}]=\dot{\mathcal{L}}[\widetilde{\mathbf{h}}+\mathbf{f}]\)_._ Proof.: The energy derivative is \[\dot{\mathcal{L}} =\dot{\mathbf{q}}^{T}\big{[}\mathbf{M}_{\mathcal{L}}\ddot{ \mathbf{q}}+\boldsymbol{\xi}_{\mathcal{L}}\big{]} \tag{22}\] \[=\dot{\mathbf{q}}^{T}\big{[}\mathbf{M}_{\mathcal{L}}\widetilde{ \mathbf{h}}+\boldsymbol{\xi}_{\mathcal{L}}\big{]}+\dot{\mathbf{q}}^{T}\mathbf{M}_{ \mathcal{L}}\big{(}\mathbf{f}-\lambda\mathbf{M}_{\mathcal{L}}^{-1}\mathbf{B} \dot{\mathbf{q}}\big{)}\] \[=\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{f}-\lambda \dot{\mathbf{q}}^{T}\mathbf{B}\dot{\mathbf{q}},\] since \(\dot{\mathbf{q}}^{T}\big{[}\mathbf{M}_{\mathcal{L}}\widetilde{\mathbf{h}}+ \boldsymbol{\xi}_{\mathcal{L}}\big{]}=\dot{\mathcal{L}}[\widetilde{\mathbf{h}}]=0\) by the conservation property of \(\widetilde{\mathbf{h}}\). Choosing \(\lambda\) per Equation 21, we have two cases. If \(\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{f}<0\), then \(\lambda=0\) and \[\dot{\mathcal{L}}=\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{f}<0. \tag{23}\] This case proves property 3.
The second case is: if \(\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{f}\geq 0\), then \[\lambda=\frac{\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{f}}{\dot{ \mathbf{q}}^{T}\mathbf{B}\dot{\mathbf{q}}+\gamma(\mathcal{L})} \tag{24}\] and \[\dot{\mathcal{L}} =\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{f}-\left( \frac{\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{f}}{\dot{\mathbf{q}}^{T }\mathbf{B}\dot{\mathbf{q}}+\gamma(\mathcal{L})}\right)\dot{\mathbf{q}}^{T} \mathbf{B}\dot{\mathbf{q}} \tag{25}\] \[=\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{f}\left[1- \frac{\dot{\mathbf{q}}^{T}\mathbf{B}\dot{\mathbf{q}}}{\dot{\mathbf{q}}^{T} \mathbf{B}\dot{\mathbf{q}}+\gamma(\mathcal{L})}\right]. \tag{26}\] We can make two observations: 1. When \(\gamma=0\), \(\mathcal{L}=\mathcal{L}_{\max}\) and \(\dot{\mathbf{q}}\neq\mathbf{0}\), so \(\dot{\mathcal{L}}=0\). 2. When \(\gamma=\gamma_{\max}\), \(\mathcal{L}=0\) and \(\dot{\mathbf{q}}=\mathbf{0}\), so \[\frac{\dot{\mathbf{q}}^{T}\mathbf{B}\dot{\mathbf{q}}}{\dot{\mathbf{q}}^{T} \mathbf{B}\dot{\mathbf{q}}+\gamma(\mathcal{L})}=0\] (27) so \(\dot{\mathcal{L}}=\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{f}\geq 0\). To prove property 2, we note \(\dot{\mathcal{L}}>0\) only when \(\gamma>0\) and \(\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{f}>0\). And \(\dot{\mathcal{L}}=0\) when either factor in Equation 26 is zero, which means either \(\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{f}=0\) or \(\gamma(\mathcal{L})=0\). The latter condition implies \(\mathcal{L}=\mathcal{L}_{\max}\). Property 1 follows by noting that \(\dot{\mathcal{L}}=0\) at \(\mathcal{L}=\mathcal{L}_{\max}\) so \(\mathcal{L}>\mathcal{L}_{\max}\) would be a contradiction. Finally, property 4 derives from the simple observation that the contribution from \(\widetilde{\mathbf{h}}\) to \(\dot{\mathcal{L}}\) drops out in line 22 because \(\widetilde{\mathbf{h}}\) is conservative. And \(\lambda\geq 0\) only removes energy with its contribution being \(-\lambda\dot{\mathbf{q}}^{T}\mathbf{B}\dot{\mathbf{q}}\leq 0\) since \(\mathbf{B}\) is positive definite. _Remark III.2_.: The use of \(\gamma\) in the denominator of Equation 21 makes it robust at \(\dot{\mathbf{q}}=\mathbf{0}\). The specific profile of \(\gamma\) defines how \(\lambda\) moves between \(\lambda_{0}=\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{f}/\dot{ \mathbf{q}}^{T}\mathbf{B}\dot{\mathbf{q}}\) (to fully cap the energy with \(\dot{\mathcal{L}}=0\) when \(\mathcal{L}=\mathcal{L}_{\max}\)) and \(0\) when \(\mathcal{L}=0\) (equiv. \(\dot{\mathbf{q}}=\mathbf{0}\)). **Proposition III.3** (Energy Capping 2).: _Let \(\widetilde{\mathbf{h}}\) be a fabric with energy \(\mathcal{L}\) and let \(\mathbf{f}\) be a navigation policy. Design a regularized system of the form_ \[\ddot{\mathbf{q}}=\widetilde{\mathbf{h}}+\mathbf{f}+\lambda\dot{\mathbf{q}}- \beta\dot{\mathbf{q}}, \tag{28}\] _and choose_ \[\lambda(\alpha_{f})=\gamma(\mathcal{L})\alpha_{f} \tag{29}\] _where \(\beta\in[0,\beta_{max}]\), \(\gamma(\mathcal{L})\in[0,1]\), \(\gamma(0)=0\), \(\gamma(\mathcal{L}_{max})=1\), and_ \[\alpha_{f}=-\frac{\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{f}}{ \dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\dot{\mathbf{q}}}.
\tag{30}\] _Such a system will have bounded energy \(\mathcal{L}\in[0,\mathcal{L}_{\max}]\) for all time._ Proof.: The energy time derivative is \[\dot{\mathcal{L}} =\dot{\mathbf{q}}^{T}(\mathbf{M}_{\mathcal{L}}\ddot{\mathbf{q}}+ \boldsymbol{\xi}_{\mathcal{L}}) \tag{31}\] \[=\dot{\mathbf{q}}^{T}(\mathbf{M}_{\mathcal{L}}(\widetilde{\mathbf{ h}}+\mathbf{f}+\lambda\dot{\mathbf{q}}-\beta\dot{\mathbf{q}})+\boldsymbol{\xi}_{ \mathcal{L}})\] (32) \[=-\beta\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\dot{\mathbf{q }}+\dot{\mathbf{q}}^{T}(\mathbf{M}_{\mathcal{L}}(\mathbf{f}+\lambda\dot{ \mathbf{q}})) \tag{33}\] since \(\dot{\mathbf{q}}^{T}\big{[}\mathbf{M}_{\mathcal{L}}\widetilde{\mathbf{h}}+ \boldsymbol{\xi}_{\mathcal{L}}\big{]}=\dot{\mathcal{L}}[\widetilde{\mathbf{h} }]=0\) by the conservation property of \(\widetilde{\mathbf{h}}\). In general, \(\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{f}\) can perform work on the system, changing its energy levels. However, system energy will ultimately be bounded given that \(\gamma\) can become equal to 1 arbitrarily, and certainly \(\gamma(\mathcal{L}_{max})=1\) by design. Whenever \(\gamma=1\), the energy time derivative becomes \[\dot{\mathcal{L}} =-\beta\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\dot{\mathbf{ q}}+\dot{\mathbf{q}}^{T}\left(\mathbf{M}_{\mathcal{L}}\left(\mathbf{f}- \frac{\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{f}}{\dot{\mathbf{q }}^{T}\mathbf{M}_{\mathcal{L}}\dot{\mathbf{q}}}\dot{\mathbf{q}}\right)\right) \tag{34}\] \[=-\beta\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\dot{\mathbf{q}} \tag{35}\] If \(\beta=0\), then system energy is conserved, and if \(\beta>0\), then energy is dissipated. In essence, \(\gamma\) can monitor the system energy and decide how much work can be done by \(\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{f}\), which results in shifting energy levels that are ultimately bounded by \(\mathcal{L}_{max}\). Within the preset boundary conditions, \(\gamma\) can behave arbitrarily, fluctuating the system energy. \(\gamma\) can therefore be learned from experience, enabling it to modulate system energy advantageously. In parallel, \(\beta\) can also be learned, promoting dynamic braking. Note, if \(\beta>0\) and \(\gamma=1\) persists, then system energy will decrease resulting in \(\|\dot{\mathbf{q}}\|,\|\ddot{\mathbf{q}}\|\to 0\) as \(t\rightarrow\infty\). Note, this does not imply that \(\|\mathbf{f}\|\to 0\) as well, but rather, the system can controllably come to rest regardless of \(\|\mathbf{f}\|\). Finally, robustness to numerical issues when leveraging this design for \(\lambda\) when \(\|\dot{\mathbf{q}}\|\to 0\) can be obtained via the strategies in Section V. To effectively regulate the energy of a navigation fabric to _guarantee_ convergence to the navigation policy's zero set, we need a measure of progress toward that zero set. That measure of progress can be given by a potential function that's compatible with the navigation policy in the sense that its negative gradient generally points in the same direction as the policy's vector field and is (locally) minimized at the policy's zero set. **Definition III.4** (Compatible potential).: Let \(\mathbf{f}(\mathbf{q},\dot{\mathbf{q}})\) be a navigation policy.
We say a potential function is _compatible_ with \(\mathbf{f}\) if \(\partial\psi(\mathbf{q})=\mathbf{0}\) if and only if \(\mathbf{f}(\mathbf{q},\mathbf{0})=\mathbf{0}\) and \(-\partial\psi^{T}\mathbf{f}(\mathbf{q},\mathbf{0})>0\) wherever \(\mathbf{f}(\mathbf{q},\mathbf{0})\neq\mathbf{0}\) (equiv. \(\partial\psi\neq\mathbf{0}\)). The next theorem prescribes how to regulate the energy of a navigation fabric given a compatible potential. **Theorem III.5**.: _Let \(\mathrm{energize}_{\mathcal{L}}\big{[}\mathbf{h}(\mathbf{q},\dot{\mathbf{q}}) \big{]}\) be a fabric with generator \(\mathbf{h}\) and Finsler energy \(\mathcal{L}\), and let \(\mathbf{f}(\mathbf{q},\dot{\mathbf{q}})\) be a navigation policy with compatible potential \(\psi(\mathbf{q})\). Denote the total energy by \(\mathcal{H}=\mathcal{L}+\psi\). The system \(\ddot{\mathbf{q}}=\mathrm{energize}_{\mathcal{L}}\big{[}\mathbf{h}+\mathbf{f} \big{]}+\gamma(\mathbf{q},\dot{\mathbf{q}})\) with energy regulator_ \[\gamma(\mathbf{q},\dot{\mathbf{q}})=-\left(\frac{\dot{\mathbf{q}}\,\dot{ \mathbf{q}}^{T}}{\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\dot{\mathbf{q}}} \right)\partial\psi-\beta\dot{\mathbf{q}} \tag{36}\] _converges to the zero set of \(\mathbf{f}\) for \(\beta>0\)._ Proof.: The total energy \(\mathcal{H}\) of the energized system \(\mathrm{energize}_{\mathcal{H}}[\mathbf{h}+\mathbf{f}]\) is conserved by definition, and we will show that with damping it's minimized and the system comes to rest. We then show that the compatibility conditions between the potential \(\psi\) and the perturbation field \(\mathbf{f}\) ensure that at convergence \(\mathbf{f}=\mathbf{0}\). The time derivative of the total energy \(\mathcal{H}=\mathcal{H}_{\mathcal{L}}+\psi\) is: \[\dot{\mathcal{H}}=\dot{\mathbf{q}}^{T}\Big{[}\mathbf{M}_{\mathcal{L}}\ddot{ \mathbf{q}}+\boldsymbol{\xi}_{\mathcal{L}}+\partial\psi\Big{]}, \tag{37}\] where \(\mathbf{M}_{\mathcal{L}}\ddot{\mathbf{q}}+\boldsymbol{\xi}_{\mathcal{L}}= \mathbf{0}\) are the equations of motion of \(\mathcal{L}\) defined by the Euler-Lagrange equation (see [19] for a derivation). We assume \(\mathbf{M}_{\mathcal{L}}\) is bounded in a finite region and strictly positive definite everywhere; in particular, it doesn't vanish or reduce rank as \(\dot{\mathbf{q}}\to\mathbf{0}\). To derive energization, we take the system \[\ddot{\mathbf{q}}=\mathbf{h}+\mathbf{f}+\alpha\dot{\mathbf{q}} \tag{38}\] and solve for the \(\alpha\) which makes \(\dot{\mathcal{H}}=0\) (i.e. calculate the acceleration along the direction of motion needed to conserve energy). Plugging Eq. 38 into Eq. 37, setting to zero, and solving for \(\alpha\) gives: \[\dot{\mathbf{q}}^{T}\Big{[}\mathbf{M}_{\mathcal{L}}\big{(}\mathbf{h} +\mathbf{f}+\alpha\dot{\mathbf{q}}\big{)}+\boldsymbol{\xi}_{\mathcal{L}}+\partial \psi\Big{]}=\mathbf{0} \tag{39}\] \[\Rightarrow\alpha=-\frac{\dot{\mathbf{q}}^{T}(\mathbf{M}_{ \mathcal{L}}\mathbf{h}+\boldsymbol{\xi}_{\mathcal{L}})}{Z}-\frac{\dot{\mathbf{q }}^{T}(\mathbf{M}_{\mathcal{L}}\mathbf{f}+\partial\psi)}{Z}\] (40) \[\Rightarrow\alpha=-\frac{1}{Z}\dot{\mathbf{q}}^{T}\big{(}\mathbf{M}_{ \mathcal{L}}(\mathbf{h}+\mathbf{f})+\boldsymbol{\xi}_{\mathcal{L}}+\partial \psi\big{)} \tag{41}\] where \(Z=\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\dot{\mathbf{q}}\). Equation 41 is the form given in Definition II.9.
The \(\alpha\) of Equation 41 by definition makes the undamped equations in 38 conserve the Hamiltonian \(\mathcal{H}\), therefore the damped equations \[\ddot{\mathbf{q}}=\mathbf{h}+\mathbf{f}+\alpha\dot{\mathbf{q}}-\beta\dot{\mathbf{q}} \tag{42}\] for \(\beta>0\) decrease energy at a rate \[\dot{\mathcal{H}} =\dot{\mathbf{q}}^{T}\Big{[}\mathbf{M}_{\mathcal{L}}\big{(}\mathbf{h}+\mathbf{f}+\alpha\dot{\mathbf{q}}\big{)}+\boldsymbol{\xi}_{\mathcal{L}}+\partial\psi\Big{]}-\beta\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\dot{\mathbf{q}} \tag{43}\] \[=-\beta\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\dot{\mathbf{q}}. \tag{44}\] Since \(\mathbf{M}_{\mathcal{L}}\) is strictly positive definite, this final expression is less than \(0\) for all \(\dot{\mathbf{q}}\neq\mathbf{0}\) and \(0\) for \(\dot{\mathbf{q}}=\mathbf{0}\). Since \(\mathcal{H}\) is always decreasing but also lower bounded, we know that its rate of decrease must converge to zero \(\dot{\mathcal{H}}\to 0\) (it stops decreasing at some point). Therefore, \(\dot{\mathcal{H}}=-\beta\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\dot{\mathbf{q}}\to 0\), which means \(\dot{\mathbf{q}}\to\mathbf{0}\) and hence \(\ddot{\mathbf{q}}\to\mathbf{0}\). Plugging \(\alpha\) from Equation 40 into the system in Equation 42 and taking the limit with \(\dot{\mathbf{q}},\ddot{\mathbf{q}}\to\mathbf{0}\) gives \[\ddot{\mathbf{q}} =\mathbf{h}+\mathbf{f}+\alpha\dot{\mathbf{q}}-\beta\dot{\mathbf{q}}\] \[=\Big{(}\mathbf{h}-\beta\dot{\mathbf{q}}-\frac{\dot{\mathbf{q}}\dot{\mathbf{q}}^{T}}{Z}\big{(}\mathbf{M}_{\mathcal{L}}\mathbf{h}+\boldsymbol{\xi}_{\mathcal{L}}\big{)}\Big{)} \tag{45}\] \[\qquad\qquad\qquad+\mathbf{f}-\frac{\dot{\mathbf{q}}\dot{\mathbf{q}}^{T}}{Z}\Big{(}\mathbf{M}_{\mathcal{L}}\mathbf{f}-(-\partial\psi)\Big{)}\] \[=\mathbf{V}+\mathbf{f}-\frac{\dot{\mathbf{q}}\dot{\mathbf{q}}^{T}}{\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\dot{\mathbf{q}}}\Big{(}\mathbf{M}_{\mathcal{L}}\mathbf{f}-(-\partial\psi)\Big{)}. \tag{46}\] Here \(\mathbf{V}\) collects the terms in parentheses from the second line, which vanish in the limit with \(\mathbf{V}\to\mathbf{0}\) as \(\dot{\mathbf{q}}\to\mathbf{0}\), and we write \(-\partial\psi\) because it's the negative gradient that has positive inner product with \(\mathbf{f}\) per the compatibility conditions. On the left-hand side we have \(\ddot{\mathbf{q}}\to\mathbf{0}\), so it's the rest of the terms in Equation 46 we need to analyze in the limit as \(\dot{\mathbf{q}}\to 0\). Note that \(\dot{\mathbf{q}}\dot{\mathbf{q}}^{T}/(\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\dot{\mathbf{q}})\) has two factors of \(\dot{\mathbf{q}}\) in both the numerator and the denominator. Since \(\mathbf{M}_{\mathcal{L}}\) is bounded and doesn't vanish in the limit, it limits to a projection operator \[\mathbf{A}=\frac{\mathbf{v}\mathbf{v}^{T}}{\mathbf{v}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{v}}, \tag{47}\] where \(\mathbf{v}=\lim_{t\to\infty}\dot{\mathbf{q}}/\|\dot{\mathbf{q}}\|\) is the limiting direction of motion as the system comes to a stop. This notation allows us to write Equation 46 as \[\ddot{\mathbf{q}} =\mathbf{V}+\mathbf{f}-\frac{\dot{\mathbf{q}}\dot{\mathbf{q}}^{T}}{\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\dot{\mathbf{q}}}\Big{(}\mathbf{M}_{\mathcal{L}}\mathbf{f}-(-\partial\psi)\Big{)} \tag{48}\] \[\xrightarrow[t\to\infty]{}\mathbf{0}=\Big{[}\mathbf{I}-\mathbf{A}\mathbf{M}_{\mathcal{L}}\Big{]}\mathbf{f}+\mathbf{A}(-\partial\psi).
\tag{49}\] The matrix \(\mathbf{P}=\mathbf{I}-\mathbf{A}\mathbf{M}_{\mathcal{L}}\) has nullspace \(\mathrm{span}(\mathbf{v})\) since \[\Big{[}\mathbf{I}-\mathbf{A}\mathbf{M}_{\mathcal{L}}\Big{]}\mathbf{v}=\mathbf{v}-\frac{\mathbf{v}\mathbf{v}^{T}}{\mathbf{v}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{v}}\mathbf{M}_{\mathcal{L}}\mathbf{v} \tag{50}\] \[=\mathbf{v}-\mathbf{v}\left(\frac{\mathbf{v}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{v}}{\mathbf{v}^{T}\mathbf{M}_{\mathcal{L}}\mathbf{v}}\right)=\mathbf{v}-\mathbf{v}=\mathbf{0}. \tag{51}\] Likewise, \(\mathbf{A}\) is rank-1 with column space spanned by \(\mathbf{v}\), so \(\Big{[}\mathbf{I}-\mathbf{A}\mathbf{M}_{\mathcal{L}}\Big{]}\mathbf{f}\) and \(\mathbf{A}(-\partial\psi)\) must be linearly independent when they're both nonzero. We'll prove \(\mathbf{f}=\mathbf{0}\) by contradiction. By this independence, Equation 49 can hold only if both terms are zero. If \(\mathbf{f}\neq\mathbf{0}\), then since \(\mathbf{P}\mathbf{f}=\mathbf{0}\) we must have \(\mathbf{f}\in\mathrm{span}(\mathbf{v})\). And since \(\mathbf{A}(-\partial\psi)=\mathbf{0}\), we must have either that \(\partial\psi=\mathbf{0}\) or \(\partial\psi\perp\mathbf{v}\), which implies \(\partial\psi^{T}\mathbf{f}=\mathbf{0}\). Both of these contradict the compatibility conditions. Therefore, \(\mathbf{f}=\mathbf{0}\). One simple way to leverage Theorem III.5 is to choose a potential \(\psi\) whose zero set \(\mathcal{S}=\{\mathbf{q}\,|\,\partial\psi(\mathbf{q})=\mathbf{0}\}\) characterizes the goal and then define \(\mathbf{f}(\mathbf{q},\dot{\mathbf{q}})\) so that it's compatible with \(\psi\) by construction. For instance, the following \(\mathbf{f}\) would be compatible: \[\mathbf{f}=-\frac{\partial\psi}{\|\partial\psi\|+\epsilon}-\mathbf{B}(\mathbf{q},\dot{\mathbf{q}})\dot{\mathbf{q}}. \tag{52}\] The first term is the soft normalized negative gradient, and the second is a damper. ## IV Forcing energized fabrics Here we analyze forcing an arbitrary fabric term \(\widetilde{\mathbf{h}}(\mathbf{q},\dot{\mathbf{q}})\) using a forcing term pushing against a system metric of the type described in Section I-B Equation 3. This case is more specific than the general energy regulation settings discussed in Section III, but it's an important and common one used, for instance, in [20]. The forcing term in this case takes the form \[\mathbf{f}(\mathbf{q},\dot{\mathbf{q}})=-\mathbf{M}^{-1}\partial\psi-\mathbf{M}^{-1}\mathbf{B}\dot{\mathbf{q}} \tag{53}\] where \(\mathbf{M}(\mathbf{q},\dot{\mathbf{q}})\) is an arbitrary positive definite system metric and \(\mathbf{B}(\mathbf{q},\dot{\mathbf{q}})\) is an arbitrary positive semi-definite damping matrix. \(\widetilde{\mathbf{h}}\) can be an arbitrary fabric. For instance, we may construct a transform tree, populate its spaces with arbitrary specs, and pull them back into the root. The resulting spec \((\mathbf{M},\boldsymbol{\xi})\) defines a differential equation \(\mathbf{M}\ddot{\mathbf{q}}+\boldsymbol{\xi}=\mathbf{0}\) with acceleration \(\ddot{\mathbf{q}}=-\mathbf{M}^{-1}\boldsymbol{\xi}=\mathbf{h}(\mathbf{q},\dot{\mathbf{q}})\). That \(\mathbf{h}\) can then be used to generate the fabric \(\widetilde{\mathbf{h}}=\mathrm{energize}_{\mathcal{L}}\big{[}\mathbf{h}\big{]}\) by energization. The matrix \(\mathbf{M}\) defines the system metric which we use to define the forcing term given in Equation 53.
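To fix ideas, the following minimal Python sketch (illustrative toy values only; the metric, damper, and potential are assumptions and this is not the implementation from the works cited above) assembles the two ingredients just described: the generator \(\mathbf{h}=-\mathbf{M}^{-1}\boldsymbol{\xi}\) obtained from a pulled-back spec, and the forcing term of Equation 53.

```python
import numpy as np

def generator_from_spec(M, xi):
    """A spec (M, xi) defines M qdd + xi = 0, so the generator is h = -M^{-1} xi."""
    return -np.linalg.solve(M, xi)

def forcing_term(M, grad_psi, B, qd):
    """Forcing term of Eq. 53: f = -M^{-1} grad(psi) - M^{-1} B qd."""
    return -np.linalg.solve(M, grad_psi + B @ qd)

# Toy state and an assumed spec pulled back to the root of a transform tree.
q = np.array([0.5, -0.2])
qd = np.array([0.1, 0.4])
M = np.array([[2.0, 0.3], [0.3, 1.0]])   # positive-definite system metric (assumed)
xi = np.array([0.2, -0.1])               # curvature/fictitious-force term (assumed)
B = 0.5 * np.eye(2)                      # positive semi-definite damper (assumed)
grad_psi = q - np.array([1.0, 1.0])      # gradient of a quadratic goal potential (assumed)

h = generator_from_spec(M, xi)
f = forcing_term(M, grad_psi, B, qd)
print("forced generator h + f =", h + f)
```

In a full pipeline this forced generator would then be energized and damped, which is exactly the setting analyzed in Theorem IV.1 below.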
If the individual specs on the transform tree are themselves geometric (the metrics are HD0 and the policies are HD2), the resulting fabric is a geometric fabric. Importantly, the metrics don't need to be Finsler (deviating from the theory of [19]), just HD0. The following theorem shows that these systems are stable and convergent to a local minimum of a potential function with appropriate choice of damping. **Theorem IV.1**.: _Let \(\widetilde{\mathbf{h}}(\mathbf{q},\dot{\mathbf{q}})\) be a fabric with positive-definite system metric \(\mathbf{M}(\mathbf{q},\dot{\mathbf{q}})\), and let \(\psi(\mathbf{q})\) be a potential function. Then we can always find a finite positive definite damping matrix \(\mathbf{B}(\mathbf{q},\dot{\mathbf{q}})\) such that the system_ \[\ddot{\mathbf{q}}=\widetilde{\mathbf{h}}-\mathbf{M}^{-1}\big{(}\partial\psi+\mathbf{B}\dot{\mathbf{q}}\big{)} \tag{54}\] _converges. And at convergence, by Proposition II.17, \(\psi\) is at a local minimum._ Proof.: Suppose our system is \[\ddot{\mathbf{q}}=\widetilde{\mathbf{h}}+\mathbf{f}+\alpha_{\mathcal{L}}\dot{\mathbf{q}}-\beta\dot{\mathbf{q}}, \tag{55}\] with \(\mathbf{f}\) as given by Equation 53, \(\beta\in\mathbb{R}_{+}\), and where \(\alpha_{\mathcal{L}}\in\mathbb{R}\) is the energization coefficient with respect to some energy \(\mathcal{L}\). Our proof follows a standard Lyapunov analysis. We design our Lyapunov function as \[V=\frac{1}{2}\dot{\mathbf{q}}^{T}\mathbf{M}\dot{\mathbf{q}}+\psi. \tag{56}\] The time derivative of the Lyapunov function is \[\dot{V}=\dot{\mathbf{q}}^{T}\mathbf{M}\ddot{\mathbf{q}}+\frac{1}{2}\dot{\mathbf{q}}^{T}\dot{\mathbf{M}}\dot{\mathbf{q}}+\dot{\mathbf{q}}^{T}\partial\psi. \tag{57}\] Plugging in \(\ddot{\mathbf{q}}\) from Equation 55 above yields \[\dot{V}= \dot{\mathbf{q}}^{T}\mathbf{M}\Big{(}\widetilde{\mathbf{h}}-\mathbf{M}^{-1}\partial\psi-\mathbf{M}^{-1}\mathbf{B}\dot{\mathbf{q}}+\alpha_{\mathcal{L}}\dot{\mathbf{q}}-\beta\dot{\mathbf{q}}\Big{)}\] \[+\frac{1}{2}\dot{\mathbf{q}}^{T}\dot{\mathbf{M}}\dot{\mathbf{q}}+\partial\psi^{T}\dot{\mathbf{q}}. \tag{58}\] Rearranging and canceling terms reduces the expression to \[\dot{V}=\dot{\mathbf{q}}^{T}\mathbf{M}\big{(}\widetilde{\mathbf{h}}+\alpha_{\mathcal{L}}\dot{\mathbf{q}}\big{)}+\frac{1}{2}\dot{\mathbf{q}}^{T}\dot{\mathbf{M}}\dot{\mathbf{q}}-\dot{\mathbf{q}}^{T}\mathbf{B}\dot{\mathbf{q}}-\beta\dot{\mathbf{q}}^{T}\mathbf{M}\dot{\mathbf{q}}. \tag{59}\] We now write \(\alpha_{\mathcal{L}}\) as the sum of a term \(\alpha_{0}\) designed to remove \(\widetilde{\mathbf{h}}\) and \(\dot{\mathbf{M}}\) and a residual \(\tilde{\alpha}\). I.e. \(\alpha_{\mathcal{L}}=\alpha_{0}+\tilde{\alpha}\) with \[\alpha_{0}=\frac{-\dot{\mathbf{q}}^{T}\mathbf{M}\widetilde{\mathbf{h}}-\frac{1}{2}\dot{\mathbf{q}}^{T}\dot{\mathbf{M}}\dot{\mathbf{q}}}{\dot{\mathbf{q}}^{T}\mathbf{M}\dot{\mathbf{q}}} \tag{60}\] so that \[\dot{\mathbf{q}}^{T}\mathbf{M}\big{(}\widetilde{\mathbf{h}}+\alpha_{0}\dot{\mathbf{q}}\big{)}+\frac{1}{2}\dot{\mathbf{q}}^{T}\dot{\mathbf{M}}\dot{\mathbf{q}}=0. \tag{61}\] We assume that \(\mathcal{L}\), \(\mathbf{M}\), and \(\widetilde{\mathbf{h}}\) are designed such that the residual \(\tilde{\alpha}\) is bounded.
Substituting \(\alpha_{\mathcal{L}}=\alpha_{0}+\tilde{\alpha}\) into 59 gives \[\dot{V}=\dot{\mathbf{q}}^{T}\mathbf{M}\big{(}\widetilde{\mathbf{h}}+\tilde{\alpha}\dot{\mathbf{q}}+\alpha_{0}\dot{\mathbf{q}}\big{)}+\frac{1}{2}\dot{\mathbf{q}}^{T}\dot{\mathbf{M}}\dot{\mathbf{q}}-\dot{\mathbf{q}}^{T}\mathbf{B}\dot{\mathbf{q}}-\beta\dot{\mathbf{q}}^{T}\mathbf{M}\dot{\mathbf{q}}. \tag{62}\] Regrouping yields \[\dot{V}=\Big{[} \dot{\mathbf{q}}^{T}\mathbf{M}(\widetilde{\mathbf{h}}+\alpha_{0}\dot{\mathbf{q}})+\frac{1}{2}\dot{\mathbf{q}}^{T}\dot{\mathbf{M}}\dot{\mathbf{q}}\Big{]} \tag{63}\] \[+\tilde{\alpha}\dot{\mathbf{q}}^{T}\mathbf{M}\dot{\mathbf{q}}-\dot{\mathbf{q}}^{T}\mathbf{B}\dot{\mathbf{q}}-\beta\dot{\mathbf{q}}^{T}\mathbf{M}\dot{\mathbf{q}}.\] The first group of terms vanishes by the design of \(\alpha_{0}\), so we get \[\dot{V}=\tilde{\alpha}\dot{\mathbf{q}}^{T}\mathbf{M}\dot{\mathbf{q}}-\dot{\mathbf{q}}^{T}\mathbf{B}\dot{\mathbf{q}}-\beta\dot{\mathbf{q}}^{T}\mathbf{M}\dot{\mathbf{q}}. \tag{64}\] We combine the two damping terms to produce \[\dot{V}=\tilde{\alpha}\dot{\mathbf{q}}^{T}\mathbf{M}\dot{\mathbf{q}}-\dot{\mathbf{q}}^{T}\widetilde{\mathbf{B}}\dot{\mathbf{q}}. \tag{65}\] This equation can now be upper-bounded via the Rayleigh-Ritz theorem as \[\dot{V}\leq\overline{\lambda}_{M}\|\dot{\mathbf{q}}\|^{2}-\underline{\lambda}_{B}\|\dot{\mathbf{q}}\|^{2}, \tag{66}\] where \(\overline{\lambda}_{M}\) is the maximum eigenvalue of \(\tilde{\alpha}\mathbf{M}\) and \(\underline{\lambda}_{B}\) is the minimum eigenvalue of \(\widetilde{\mathbf{B}}\). Via the design of \(\mathbf{B}\) and a sufficiently large \(\beta\), we can enforce that \(\underline{\lambda}_{B}>\overline{\lambda}_{M}\), yielding \[\dot{V}\leq-b\|\dot{\mathbf{q}}\|^{2}, \tag{67}\] where \(b=\underline{\lambda}_{B}-\overline{\lambda}_{M}>0\). We now invoke LaSalle's invariant set theorem to give \(\|\dot{\mathbf{q}}\|\to 0\) as \(t\to\infty\). This implies \(\|\ddot{\mathbf{q}}\|\to 0\) as \(t\to\infty\), and consequently, \(\|\mathbf{f}\|,\|\partial\psi\|\to 0\) as well. This ultimately guarantees that the system will come to rest at a minimum of \(\psi\). ## V Numerical Considerations The mathematical definition of energization given in Definition II.9 has a numerical instability at \(\dot{\mathbf{q}}=\mathbf{0}\). The following definition gives two robust variants that can be used for practical implementation. The choice of which to use depends on the properties of the generator being energized as discussed below. **Definition V.1**.: Let \(\ddot{\mathbf{q}}=\mathbf{h}(\mathbf{q},\dot{\mathbf{q}})\) be an autonomous second-order differential equation, and let \(\mathcal{L}\) be an energy. The _vanishing energization_ transform is defined as \[\mathrm{energize}_{\mathcal{H}}^{\epsilon}[\mathbf{h}] =\mathbf{h}+\alpha\dot{\mathbf{q}} \tag{68}\] \[\text{with}\ \ \alpha=-\frac{1}{Z+\epsilon}\dot{\mathbf{q}}^{T}\big{(}\mathbf{M}_{\mathcal{L}}\mathbf{h}+\boldsymbol{\xi}_{\mathcal{L}}\big{)} \tag{69}\] for \(\epsilon>0\) where \(Z=\dot{\mathbf{q}}^{T}\mathbf{M}_{\mathcal{L}}\dot{\mathbf{q}}\). This variant smoothly reduces \(\alpha\) to zero as \(\dot{\mathbf{q}}\to\mathbf{0}\), avoiding numerical instability and ambiguity at \(\dot{\mathbf{q}}=\mathbf{0}\).
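Before moving to the second variant, here is a small Python sketch of the vanishing energization transform just defined (a toy that assumes a constant-metric energy, so \(\boldsymbol{\xi}_{\mathcal{L}}=\mathbf{0}\); it is not code from the original work). The printed energy rate should be near zero away from \(\dot{\mathbf{q}}=\mathbf{0}\), illustrating that the \(\epsilon\)-regularized coefficient approximately conserves \(\mathcal{L}\).

```python
import numpy as np

def vanishing_energize(h, M, xi, qd, eps=1e-8):
    """Vanishing energization: qdd = h + alpha * qd with
    alpha = -qd^T (M h + xi) / (qd^T M qd + eps); alpha -> 0 smoothly as qd -> 0."""
    Z = qd @ (M @ qd)
    alpha = -(qd @ (M @ h + xi)) / (Z + eps)
    return h + alpha * qd

# Toy check with L = 0.5 * qd^T M qd for a constant M (so xi = 0): along the
# energized system, dL/dt = qd^T M qdd should be approximately zero.
M = np.diag([2.0, 1.0])
xi = np.zeros(2)
qd = np.array([0.3, -1.2])
h = np.array([1.0, 0.8])                 # an arbitrary generator acceleration (assumed)
qdd = vanishing_energize(h, M, xi, qd)
print("dL/dt =", qd @ (M @ qdd))         # ~0 up to the eps regularization
```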
Another variant, which we call the _robust energization_ transform, additionally preserves the unbiased property of energization while resolving numerical issues: \[\mathrm{energize}_{\mathcal{H}}^{\eta_{\sigma},\epsilon}\big{[}\mathbf{h}\big{]}=\eta_{\sigma}(\|\dot{\mathbf{q}}\|)\ \mathrm{energize}_{\mathcal{H}}^{\epsilon}\big{[}\mathbf{h}\big{]} \tag{70}\] where \(\eta_{\sigma}(s)\) is some function that diminishes to zero as \(s\to 0\) with length scale \(\sigma\). For instance, \(\eta_{\sigma}(s)=1-\exp\big{\{}-s^{2}/(2\sigma^{2})\big{\}}\) is a common choice. The vanishing energization transform is the same as the standard energization transform aside from the \(\epsilon\) in the denominator. When the generator \(\mathbf{h}\) is unbiased (zero at \(\dot{\mathbf{q}}=\mathbf{0}\)), this transformed system is also unbiased. The robust energization transform is useful when energizing a biased generator to create an unbiased system. It explicitly includes the \(\eta_{\sigma}\) term to ensure the resulting system is zero at \(\dot{\mathbf{q}}=\mathbf{0}\) (unbiased). ## VI Conclusions This paper reformulates fabrics to focus on their fundamental stability as a medium for policies to operate across. The fabric creates a nominal prior behavior which guides the policy. The policy then steers the system across the fabric and regulates its energy. When the fabric is geometric, it forms a well-defined road network of paths that the system wants to follow. This reformulation is more intuitive than previous formulations, while subsuming those formulations, making fabrics both flexible and easier to use in practice, particularly for learning applications.
2309.14472
Shock-Wave Refinement of the Friedmann-Robertson-Walker Metric
The mathematics of general relativistic shock waves is introduced and considered in a cosmological context. In particular, an expanding Friedmann-Robertson-Walker metric is matched to a Tolman-Oppenheimer-Volkoff metric across a spherical shock surface. This is the general relativistic analogue of a shock-wave explosion within a static singular isothermal fluid sphere and may be regarded as a model for the Big Bang. These shock waves are constructed both within and beyond the Hubble radius, which corresponds to a universe outside and inside its Schwarzschild radius respectively. Certain self-similar perturbations of the FRW metric lead to an accelerated expansion, even without a cosmological constant, and thus it is conjectured that such a mechanism may account for the anomalous acceleration observed today without recourse to dark energy.
Christopher Alexander, Blake Temple, Joel Smoller
2023-09-25T19:14:13Z
http://arxiv.org/abs/2309.14472v1
# Shock-Wave Refinement of the Friedmann-Robertson-Walker Metric ###### Abstract The mathematics of general relativistic shock waves is introduced and considered in a cosmological context. In particular, an expanding Friedmann-Robertson-Walker metric is matched to a Tolman-Oppenheimer-Volkoff metric across a spherical shock surface. This is the general relativistic analogue of a shock-wave explosion within a static singular isothermal fluid sphere and may be regarded as a model for the Big Bang. These shock waves are constructed both within and beyond the Hubble radius, which corresponds to a universe outside and inside its Schwarzschild radius respectively. Certain self-similar perturbations of the FRW metric lead to an accelerated expansion, even without a cosmological constant, and thus it is conjectured that such a mechanism may account for the anomalous acceleration observed today without recourse to dark energy. General Relativity Einstein-Euler Self-Similar Shock Wave Cosmology Dark Energy ###### Contents * 1 Introduction * 2 The FRW Metric * 3 The General Theory of Shock Matching * 4 Shock-Wave Solutions Inside the Hubble Radius - The Case \(r_{*}=0\) * 5 Shock-Wave Solutions Beyond the Hubble Radius - The Case \(r_{*}>0\) * 6 Self-Similar Extensions of FRW-TOV Shock-Waves Inside the Hubble Radius * 7 Conclusion ## 1 Introduction In the Standard Model of Cosmology, the expanding universe of galaxies is described by a Friedmann-Robertson-Walker (FRW) metric, which in spherical coordinates has a line element given by, \[ds^{2}=-dt^{2}+R^{2}(t)\left(\frac{dr^{2}}{1-kr^{2}}+r^{2}[d\theta^{2}+\sin^{2}(\theta)d\phi^{2}]\right). \tag{1}\] In this model, which accounts for physics on the largest length scale, the Universe is approximated by a space of uniform density and pressure at each fixed time, and the expansion rate is determined by the cosmological scale factor \(R(t)\) that evolves according to the Einstein field equations. Astronomical observations indicate that the galaxies are uniform on a scale of about one billion light-years, and the expansion is critical, that is, \(k=0\) in (1). According to (1) with \(k=0\), on the largest scale the Universe is infinite flat Euclidean space \(\mathbb{R}^{3}\) at each fixed time. Matching the Hubble constant to its observed value and invoking the Einstein field equations, the FRW model implies that the entire infinite universe \(\mathbb{R}^{3}\) emerged all at once from a singularity (\(R=0\)) some 13.7 billion years ago, and this event is referred to as the _Big Bang_. This article, updated in 2023, summarises the work of Smoller and Temple in [11, 12, 18, 19], and then describes subsequent advances by Smoller, Temple and Alexander in [22, 20, 1]. The Smoller-Temple solutions describe a two-parameter family of exact solutions to the perfect fluid Einstein field equations that refine the FRW metric by a spherical shock-wave cutoff. The Einstein field equations with a perfect fluid source are also referred to as the _Einstein-Euler_ equations and we make the distinction that an _exact_ solution is one that is defined by a system of ODE, whereas an _explicit_ solution is one that can be given in terms of elementary mathematical functions. In the original Smoller-Temple solutions, the flat (\(k=0\)) FRW spacetime approximates the expanding wave created behind the general relativistic version of an explosion into a static singular isothermal sphere.
There are two cases: matching _outside the black hole_, which means the shock surface is inside the Hubble radius of the FRW spacetime, and _inside the black hole_, that is, beyond the Hubble radius. In the examples given in the original article, Smoller and Temple recognised that the flat FRW spacetime did not fully resolve the expanding wave behind the shock, but argued that it was qualitatively close. After Smoller and Temple introduced a family of expanding waves in [20], Alexander in [1] accomplished the goal of fully resolving the expanding wave behind these general relativistic shock waves for the case of shock waves inside the Hubble radius, that is, outside the black hole. This is accomplished by incorporating self-similar perturbations of the FRW spacetime, characterised in [20] as the natural self-similar extension of the FRW spacetime, that is, spacetimes that are asymptotically FRW at the origin. Alexander's solution for pure radiation resolves a long-standing open problem first proposed by Taub in [3], and later taken up from a different point of view by Smoller and Temple. Most interestingly, the expanding waves behind these shocks naturally introduce accelerations relative to the flat FRW spacetime. Since the flat FRW spacetime is the starting model of Cosmology, it is intriguing to consider these models as an explanation for cosmic acceleration without recourse to a cosmological constant. We argue here that the analysis needs to be extended to a parallel family of solutions beyond the Hubble radius, as accomplished in [18, 19] for flat FRW shock matching, to make a plausible case for such an explanation. We first review these Smoller-Temple solutions and then return in Section 6 to discuss the refinements made by Smoller, Temple and Alexander. In order to construct a mathematically simple family of shock-wave refinements of the FRW metric that satisfy the Einstein-Euler equations exactly, we assume critical expansion (\(k=0\)) and restrict to the case that the sound speed in the fluid on the FRW side of the shock wave is constant, that is, we assume an FRW equation of state \(p=\sigma\rho\) with \(0<\sigma\leq c^{2}\), where \(\sigma\), the square of the sound speed \(\sqrt{\frac{\partial p}{\partial\rho}}\), is constant. For \(\sigma=c^{2}/3\), this equation of state describes a state of matter known as _pure radiation_, as well as the equation of state of the relativistic limit of free particles, which is correct during the Radiation Dominated Epoch of the Early Universe [24]. Also, as \(\sigma\) ranges from \(0\) to \(c^{2}\), we obtain qualitatively correct approximations to general equations of state. Now by using units such that the speed of light \(c\) and gravitational constant \(\mathcal{G}\) are set to unity, the family of solutions is then determined by two parameters: \(0<\sigma\leq 1\) and \(r_{*}\geq 0\). The second parameter \(r_{*}\) is the FRW radial coordinate \(r\) of the shock in the limit \(t\to 0\), that is, the instant of the Big Bang.1 The FRW radial coordinate \(r\) is singular with respect to radial arc-length \(\bar{r}=rR\) at the Big Bang (\(R=0\)), so setting \(r_{*}>0\) does not place the shock wave away from the origin at time \(t=0\). The distance from the FRW centre to the shock wave tends to zero in the limit \(t\to 0\), even when \(r_{*}>0\). 
In the limit \(r_{*}\rightarrow\infty\), we recover from the family of solutions the usual (infinite) FRW metric with equation of state \(p=\sigma\rho\), that is, we recover the standard FRW metric in the limit that the shock wave is infinitely far out. In this sense, our family of solutions of the Einstein-Euler equations represents a two-parameter refinement of the standard Friedmann-Robertson-Walker metric. Footnote 1: Since when \(k=0\) the FRW metric is invariant under the rescaling \(r\rightarrow\alpha r\) and \(R\to R/\alpha\), we fix the radial coordinate \(r\) by fixing the scale factor \(\alpha\) with the condition that \(R(t_{0})=1\) for some time \(t_{0}\), say present time. The explicitly defined solutions in the case \(r_{*}=0\) were first constructed in [12] and are qualitatively different from the exact solutions in the case \(r_{*}>0\), which were constructed later in [19]. The difference is that when \(r_{*}=0\), the shock wave lies closer than one Hubble length from the centre of the FRW spacetime throughout its motion, but when \(r_{*}>0\), the shock wave emerges at the Big Bang at a distance beyond one Hubble length [17]. The Hubble length, also referred to as the Hubble radius in certain contexts, depends on time, and tends to zero as \(t\to 0\). We show in [19] that one Hubble length, equal to \(c/H\) where \(H=\dot{R}/R\), is a critical length scale in a flat FRW metric because the total mass inside one Hubble length has a Schwarzschild radius equal to exactly one Hubble length.2 That is, one Hubble length marks precisely the distance at which the Schwarzschild radius \(\bar{r}_{s}\equiv 2M\) (of the mass \(M\) inside a radial shock wave at distance \(\bar{r}\) from the FRW centre) crosses from inside (\(\bar{r}_{s}<\bar{r}\)) to outside (\(\bar{r}_{s}>\bar{r}\)) the shock wave. If the shock wave is at a distance closer than one Hubble length from the FRW centre, then \(\bar{r}>2M\) and we say that the solution lies _outside the black hole_, but if the shock wave is at a distance greater than one Hubble length, then \(\bar{r}<2M\) at the shock, and we say the solution lies _inside the black hole_. Since \(M\) increases proportional to \(\bar{r}^{3}\), it follows that \(\bar{r}>2M\) for \(\bar{r}\) sufficiently small, and \(\bar{r}<2M\) for \(\bar{r}\) sufficiently large, so there must be a critical radius at which \(\bar{r}=2M\). In Section 2 (taken from [18, 19]), we show that when \(k=0\), this critical radius is exactly the Hubble radius. When the parameter \(r_{*}=0\), the family of solutions for \(0<\sigma\leq 1\) starts at the Big Bang, and evolves thereafter outside the black hole, satisfying \(\bar{r}>2M\) everywhere from \(t=0\) onward. But when \(r_{*}>0\), the shock wave is further out than one Hubble length at the instant of the Big Bang, and the solution begins with \(\bar{r}<2M\) at the shock wave. From this time onward, the spacetime expands until eventually the Hubble radius catches up to the shock wave at \(\bar{r}=2M\) and then passes the shock wave, making \(\bar{r}>2M\) thereafter. Thus when \(r_{*}>0\), the whole spacetime begins inside the black hole (with \(\bar{r}<2M\) for sufficiently large \(\bar{r}\)) but eventually evolves to a solution outside the black hole. The time when \(\bar{r}=2M\) actually marks the event horizon of a _white hole_, the time reversal of a black hole, in the ambient spacetime beyond the shock wave. 
We show that when \(r_{*}>0\), the time when the Hubble length catches up to the shock wave comes before the time when the shock wave comes into view at the FRW centre. Furthermore, when \(\bar{r}=2M\), assuming \(t\) is so large that we can neglect the pressure from this time onward, the whole solution emerges from the white hole as a finite ball of mass expanding into empty space, satisfying \(\bar{r}>2M\) everywhere thereafter. In fact, when \(r_{*}>0\), the zero pressure Oppenheimer-Snyder solution outside the black hole gives the large time asymptotics of the solution (see [9, 15, 19] and the comments after Theorems 6-8 below). Footnote 2: Since \(c/H\) is a good estimate for the age of the Universe, it follows that the Hubble length \(c/H\) is approximately the distance of light travel starting at the Big Bang up until present time. In this sense, the Hubble length is a rough estimate for the distance to the further most objects visible in the Universe. The explicitly defined solutions in the case \(r_{*}=0\) give a general relativistic version of an explosion into a static singular isothermal fluid sphere, qualitatively similar to the corresponding classical explosion outside the black hole [12]. The main difference physically between the cases \(r_{*}>0\) and \(r_{*}=0\) is that when \(r_{*}>0\), that is, the case when the shock wave emerges from the Big Bang at a distance beyond one Hubble length, a large region of uniform expansion is created behind the shock wave at the instant of the Big Bang. Thus, when \(r_{*}>0\), lightlike (also known as _null_) information about the shock wave propagates inward from the wave, rather than outward from the centre, as is the case when \(r_{*}=0\) and the shock lies inside one Hubble length.3 It follows that when \(r_{*}>0\), an observer positioned in the FRW spacetime inside the shock wave will see exactly what the Standard Model of Cosmology predicts up until the time when the shock wave comes into view in the far field. In this sense, the case \(r_{*}>0\) gives a black hole cosmology that refines the standard FRW model of Cosmology to the case of finite mass. One of the surprising differences between the case \(r_{*}=0\) and the case \(r_{*}>0\) is that, when \(r_{*}>0\), the important pure radiation equation of state \(p=\frac{1}{3}\rho\) comes out of the analysis as special at the Big Bang. When \(r_{*}>0\), the shock wave emerges at the instant of the Big Bang at a finite non-zero speed (the speed of light) only for the special value \(\sigma=\frac{1}{3}\). In this case, the equation of state on both sides of the shock wave tends to the correct relation \(p=\frac{1}{3}\rho\) as \(t\to 0\) and the shock wave decelerates to a subluminal speed for all positive times thereafter (see [18, 19] and Theorem 8 below). Footnote 3: One can imagine that when \(r_{*}>0\), the shock wave could get out through a great deal of matter early on when everything is dense and compressed and still not violate the speed of light bound. Thus when \(r_{*}>0\), the shock wave would _thermalise_, or more accurately, _make uniform_ a large region at the centre early on in the explosion. The authors speculate that such a mechanism might offer a substitute for inflation in this regard. 
In all cases \(0<\sigma\leq 1\), \(r_{*}\geq 0\), the spacetime metric that lies beyond the shock wave is taken to be a metric of Tolman-Oppenheimer-Volkoff (TOV) form [8], that is, \[ds^{2}=-B(\bar{r})d\bar{t}^{2}+\frac{1}{A(\bar{r})}d\bar{r}^{2}+\bar{r}^{2}[d \theta^{2}+\sin^{2}(\theta)d\phi^{2}]. \tag{2}\] The metric (2) is in _standard Schwarzschild coordinates_, that is, diagonal with a radial coordinate defined by spheres of symmetry. Furthermore, the metric components depend only on the radial coordinate \(\bar{r}\). Barred coordinates are used to distinguish TOV coordinates from unbarred FRW coordinates for the purpose of matching the metric at the shock surface (see Section 3). The mass function \(M(\bar{r})\) enters as a metric component through the relation, \[A=1-\frac{2M(\bar{r})}{\bar{r}}. \tag{3}\] The TOV metric (2) has a very different character depending on whether \(A>0\) or \(A<0\), that is, depending on whether the solution lies outside or inside the black hole. In the case \(A>0\), \(\bar{r}\) is a spacelike coordinate and the TOV metric describes a static general relativistic fluid sphere.4 When \(A<0\), \(\bar{r}\) is the timelike coordinate and (2) is a dynamic metric that evolves in time. The explicitly defined shock-wave solutions are obtained by taking \(\bar{r}=R(t)r\) to match the spheres of symmetry, and then matching the metrics (1) and (2) at an interface \(\bar{r}=\bar{r}(t)\) across which the metrics are Lipschitz continuous. This can be done in general. In order for the interface to be a physically meaningful shock surface, we use the result in Theorem 4 below (see [11]), so that a single additional conservation constraint is sufficient to rule out delta function sources at the shock and guarantee that the matched metric solves the Einstein-Euler equations in the weak sense.5 The Lipschitz continuous matching of the metrics, together with the conservation constraint, leads to a system of ordinary differential equations (ODE) that determine the shock position, together with the TOV density and pressure at the shock. Since the TOV metric depends only on \(\bar{r}\), the equations thus determine the TOV spacetime beyond the shock wave. To obtain a physically meaningful outgoing shock wave, we impose the constraint \(\bar{p}\leq\bar{\rho}\) to ensure that the equation of state on the TOV side of the shock is qualitatively reasonable, and we require that the shock be compressive as the entropy condition. For an outgoing shock wave, this corresponds to the conditions \(\rho>\bar{\rho}\) and \(p>\bar{p}\), that is, the pressure and density need to be larger on the side of the shock that receives the mass flux (the FRW side when the shock wave is propagating away from the FRW centre). This condition breaks the time reversal symmetry of the equations and is sufficient to rule out rarefaction shocks in classical gas dynamics [10, 19]. The ODE, together with the equation of state bound and the conservation and entropy constraints, determine a unique solution of the ODE for every \(0<\sigma\leq 1\) and \(\bar{r}_{*}\geq 0\), and this provides the two-parameter family of solutions discussed here [12, 19]. Footnote 4: The metric (2) is, for example, the starting point for the stability limits of Buchdahl and Chandrasekhar for stars [24, 13, 16]. Footnote 5: The Einstein field equations \(G=\kappa T\) are second-order in the metric and so delta function sources will in general be present at a Lipschitz continuous matching of metrics. 
The Lipschitz matching of the metrics implies that the total mass \(M\) is continuous across the interface, and so when \(r_{*}>0\), the total mass of the entire solution (inside and outside the shock wave) is finite at each time \(t>0\). Both the FRW and TOV spacetimes emerge at the Big Bang. The total mass \(M\) on the FRW side of the shock has the meaning of total mass inside radius \(\bar{r}\) at fixed time, but on the TOV side of the shock, \(M\) does not evolve according to equations that give it the interpretation as a total mass because the metric is inside the black hole. Nevertheless, after the spacetime emerges from the black hole, the total mass takes on its usual meaning outside the black hole and, time asymptotically, the Big Bang ends with an expansion of finite total mass in the usual sense. Thus, when \(r_{*}>0\), our shock-wave refinement of the FRW metric leads to a Big Bang of finite total mass. The Smoller-Temple family of shock-wave solutions are rough models in the sense that the equation of state on the FRW side has constant \(\sigma\) and the equation of state on the TOV side is determined by the equations and therefore cannot be imposed. For more accurate equations of state, a more accurate description of the expanding wave created behind the shock is needed to meet the conservation constraint and thereby mediate the transition across the shock wave. At the time of publication of the original article, the authors thought such expanding waves to be all but impossible to model as exact solutions; however, in the most recent work of Alexander [1], this has been resolved for shock waves inside the Hubble radius by considering self-similar perturbations of the flat FRW spacetime. Not only do these modifications permit enough parameter freedom to impose the same equation of state on each side of the shock, but the perturbations of the flat FRW spacetime induce an accelerated expansion without the presence of a cosmological constant. A rigorous proof for the existence of a general relativistic shock wave inside the Hubble radius with a pure radiation equation of state on each side of the shock is also demonstrated in [1]. The fact that we can find global solutions that meet our physical bounds within and beyond the Hubble radius, that are qualitatively the same for all values of \(\sigma\in(0,1]\) and all initial shock positions, strongly suggests that such a shock wave would be the dominant wave in a large class of problems. In Section 2 we derive the FRW solution for constant \(\sigma\) and discuss the Hubble radius as a critical length scale. In Section 3 we state the general theorems in [11] for matching gravitational metrics across shock waves. In Section 4 we discuss the construction of the family of solutions in the case \(r_{*}=0\). In Section 5 we discuss the case \(r_{*}>0\), and in Section 6 we discuss the advances made since 2005, in particular, replacing the flat FRW solution with a family of self-similar perturbations and discussing how such waves exhibit an accelerated expansion. See [12, 19, 20, 1] for details. ## 2 The FRW Metric According to Einstein's Theory of General Relativity, all properties of the gravitational field are determined by a Lorentzian spacetime metric tensor \(g\), whose line element in a given coordinate system \(\mathbf{x}=(x^{0},...,x^{3})\) is given by \[ds^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}. \tag{4}\] We use the Einstein summation convention whereby repeated up-down indices are assumed to be summed from 0 to 3.
The components \(g_{\mu\nu}\) of the gravitational metric \(g\) satisfy the Einstein field equations, \[G^{\mu\nu}=\kappa T^{\mu\nu},\qquad\qquad\qquad\qquad T^{\mu\nu}=(\rho c^{2}+p)u^ {\mu}u^{\nu}+pg^{\mu\nu}, \tag{5}\] where we assume the stress-energy-momentum tensor \(T\) is that of a perfect fluid, making these equations the _Einstein-Euler_ equations. Here \(G\) is the Einstein curvature tensor, \[\kappa=\frac{8\pi{\cal G}}{c^{4}} \tag{6}\] is the coupling constant, \({\cal G}\) is Newton's gravitational constant, \(c\) is the speed of light, \(\rho c^{2}\) is the energy density, \(p\) is the pressure and \(\mathbf{u}=(u^{0},...,u^{3})\) is the fluid four-velocity [24]. We will also use the convention \(c=1\) and \({\cal G}=1\) when convenient. Putting the metric ansatz (1) into the Einstein-Euler equations (5) gives the equations for the components of FRW metric [24], \[H^{2}=\left(\frac{\dot{R}}{R}\right)^{2}=\frac{\kappa}{3}\rho-\frac{k}{R^{2}}, \tag{7}\] \[\dot{\rho}=-3(p+\rho)H. \tag{8}\] The unknowns \(R\), \(\rho\) and \(p\) are assumed to be functions of the FRW coordinate time \(t\) alone, with the dot denoting differentiation with respect to \(t\). To verify that the Hubble radius \(\bar{r}_{crit}=1/H\) is the limit for FRW-TOV shock matching outside a black hole, write the FRW metric (1) in standard Schwarzschild coordinates \(\bar{\mathbf{x}}=(\bar{t},\bar{r})\) where the metric takes the form \[ds^{2}=-B(\bar{t},\bar{r})d\bar{t}^{2}+\frac{1}{A(\bar{t},\bar{r})}d\bar{r}^{2 }+\bar{r}^{2}d\Omega^{2}, \tag{9}\] and the mass function \(M(\bar{t},\bar{r})\) is defined through the relation \[A=1-\frac{2M}{\bar{r}}. \tag{10}\] It is well known that a general spherically symmetric metric can be put in the form (9) by coordinate transformation [24, 5]. Substituting \(\bar{r}=Rr\) into (1) and diagonalising the resulting metric (see [19] for details), we obtain \[ds^{2}=\frac{1}{1-kr^{2}-H^{2}\bar{r}^{2}}\left(-\frac{1-kr^{2}}{\psi^{2}}d \bar{t}^{2}+d\bar{r}^{2}\right)+\bar{r}^{2}d\Omega^{2}, \tag{11}\] where \(\psi\) is an integrating factor that solves the equation \[\frac{\partial}{\partial\bar{r}}\left(\psi\frac{1-kr^{2}-H^{2}\bar{r}^{2}}{1- kr^{2}}\right)-\frac{\partial}{\partial t}\left(\psi\frac{H\bar{r}}{1-kr^{2}} \right)=0, \tag{12}\] and the time coordinate \(\bar{t}=\bar{t}(t,\bar{r})\) is defined by the exact differential \[d\bar{t}=\left(\psi\frac{1-kr^{2}-H^{2}\bar{r}^{2}}{1-kr^{2}}\right)dt+\left( \psi\frac{H\bar{r}}{1-kr^{2}}\right)d\bar{r}. \tag{13}\] Now using (10) in (7), it follows that \[M(t,\bar{r})=\frac{\kappa}{2}\int_{0}^{\bar{r}}\rho(t)s^{2}ds=\frac{1}{3} \frac{\kappa}{2}\rho\bar{r}^{3}. \tag{14}\] Since in the FRW metric \(\bar{r}=Rr\) measures arc-length along radial geodesics at fixed time, we see from (14) that \(M(t,\bar{r})\) has the physical interpretation as the total mass inside radius \(\bar{r}\) at time \(t\) in the FRW metric. Restricting to the case of critical expansion (\(k=0\)), we see from (7), (13) and (14) that \(\bar{r}=1/H\) is equivalent to \(\bar{r}=2M\), and so at fixed time \(t\), the following equivalences are valid: \[\bar{r}=\frac{1}{H}\iff\frac{2M}{\bar{r}}=1\iff A=0. \tag{15}\] We conclude that \(\bar{r}=1/H\) is the critical length scale for the FRW metric at fixed time \(t\) in the sense that \(A\) changes sign at \(\bar{r}=1/H\), and so the universe lies inside a black hole beyond \(\bar{r}=1/H\), as claimed above. 
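The equivalences in (15) are easy to spot-check symbolically. The following sympy sketch (purely illustrative, not taken from the original papers) substitutes (7) and (14) into (10) for \(k=0\) and confirms that \(A\) vanishes exactly at the Hubble radius \(\bar{r}=1/H\).

```python
import sympy as sp

kappa, rho, rbar = sp.symbols('kappa rho rbar', positive=True)

H = sp.sqrt(kappa * rho / 3)                         # Eq. (7) with k = 0
M = sp.Rational(1, 3) * (kappa / 2) * rho * rbar**3  # Eq. (14)
A = 1 - 2 * M / rbar                                 # Eq. (10)

print(sp.simplify(A - (1 - H**2 * rbar**2)))  # 0, so A = 1 - H^2 rbar^2
print(sp.solve(sp.Eq(A, 0), rbar))            # single positive root, equal to 1/H
```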
It is shown in [16] that the standard TOV metric outside the black hole cannot be continued into \(A=0\) except in the very special case \(\rho=0\), as it takes an infinite pressure to hold up a static configuration at the event horizon of a black hole. Thus to do shock matching beyond one Hubble length requires a metric of a different character, and for this purpose, in [18, 19] we introduce the TOV metric _inside the black hole_, that is, a metric of TOV form with \(A<0\) and whose fluid is co-moving with the timelike radial coordinate \(\bar{r}\). The Hubble radius \(\bar{r}_{crit}=c/H\) is also the critical distance at which the outward expansion of the FRW metric exactly cancels the inward advance of a radial light ray impinging on an observer positioned at the origin of a flat (\(k=0\)) FRW metric. Indeed, by (1), a light ray travelling radially inward toward the centre of an FRW coordinate system satisfies, \[c^{2}dt^{2}=R^{2}dr^{2}, \tag{16}\] so that \[\frac{d\bar{r}}{dt}=\dot{R}r+R\dot{r}=H\bar{r}-c=H\left(\bar{r}-\frac{c}{H} \right)>0, \tag{17}\] if and only if \(\bar{r}>c/H\). Thus the arc-length distance from the origin to an inward moving light ray at fixed time \(t\) in a flat FRW metric will actually increase as long as the light ray lies beyond the Hubble radius. An inward moving light ray will, however, eventually cross the Hubble radius and reach the origin in finite proper time due to the increase in the Hubble length with time. We now calculate the infinite redshift limit in terms of the Hubble length. It is well known that light emitted at \((t_{e},r_{e})\) at wavelength \(\lambda_{e}\) in an FRW spacetime will be observed at \((t_{0},r_{0})\) at wavelength \(\lambda_{0}\) if \[\frac{R_{0}}{R_{e}}=\frac{\lambda_{0}}{\lambda_{e}}. \tag{18}\] Moreover, the redshift factor \(z\) is defined by \[z=\frac{\lambda_{0}}{\lambda_{e}}-1. \tag{19}\] Thus, infinite redshift occurs in the limit \(R_{e}\to 0\), where \(R=0\), \(t=0\) is the Big Bang. Consider now a light ray emitted at the instant of the Big Bang and observed at the FRW origin at present time \(t=t_{0}\). Let \(r_{\infty}\) denote the FRW coordinate at time \(t\to 0\) of the furthermost objects that can be observed at the FRW origin before time \(t=t_{0}\). Then \(r_{\infty}\) marks the position of objects at time \(t=0\) whose radiation would be observed as infinitely redshifted (assuming no scattering). Note then that a shock wave emanating from \(\bar{r}=0\) at the instant of the Big Bang will be observed at the FRW origin before present time \(t=t_{0}\) only if its position \(r\) at the instant of the Big Bang satisfies \(r<r_{\infty}\). To estimate \(r_{\infty}\), note first that from (16) it follows that an incoming radial light ray in an FRW metric follows a null trajectory \(r=r(t)\) if \[r-r_{e}=-\int_{t_{e}}^{t}\frac{d\tau}{R(\tau)}, \tag{20}\] and thus \[r_{\infty}=\int_{0}^{t_{0}}\frac{d\tau}{R(\tau)}. \tag{21}\] Using this, the following theorem is proved in [19]. **Theorem 1**.: _If the pressure \(p\) satisfies the bounds_ \[0\leq p\leq\frac{1}{3}\rho, \tag{22}\] _then for any equation of state, the age of the Universe \(t_{0}\) and the infinite red shift limit \(r_{\infty}\) are bounded in terms of the Hubble length by:_ \[\frac{1}{2H_{0}}\leq t_{0}\leq\frac{2}{3H_{0}}, \tag{23}\] \[\frac{1}{H_{0}}\leq r_{\infty}\leq\frac{2}{H_{0}}. 
\tag{24}\] _where we have assumed that \(R=0\) when \(t=0\) and \(R=1\) when \(t=t_{0}\), \(H=H_{0}\)._ The next theorem gives closed form solutions of the FRW equations (7), (8) for constant \(\sigma\). As a special case, we recover the bounds in (23) and (24) from the cases \(\sigma=0\) and \(\sigma=\frac{1}{3}\). **Theorem 2**.: _Assume \(k=0\) and the equation of state \(p=\sigma\rho\), where \(0\leq\sigma\leq 1\) is constant. Then, for an expanding FRW universe (\(\dot{R}>0\)), the solution of system (7), (8) satisfying \(R=0\) at \(t=0\) and \(R=1\) at \(t=t_{0}\) is given by,_ \[\rho =\frac{4}{3\kappa(1+\sigma)^{2}}\frac{1}{t^{2}}, \tag{25}\] \[R =\left(\frac{t}{t_{0}}\right)^{\frac{2}{3(1+\sigma)}},\] (26) \[\frac{H}{H_{0}} =\frac{t_{0}}{t}. \tag{27}\] _Moreover, the age of the Universe \(t_{0}\) and the infinite redshift limit \(r_{\infty}\) are given explicitly in terms of the Hubble length by:_ \[t_{0} =\frac{2}{3(1+\sigma)}\frac{1}{H_{0}}, \tag{28}\] \[r_{\infty} =\frac{2}{1+3\sigma}\frac{1}{H_{0}}. \tag{29}\] From (29) we conclude that a shock wave will be observed at the FRW origin before present time \(t=t_{0}\) only if its position \(r\) at the instant of the Big Bang satisfies \(r<r_{\infty}\). Note that \(r_{\infty}\) ranges from one half to two Hubble lengths as \(\sigma\) ranges from \(1\) to \(0\), taking the intermediate value of one Hubble length at \(\sigma=\frac{1}{3}\). Note also that by using (25)-(26) in (14), with \(\bar{r}=R(t)r\), it follows that \[M=\frac{\kappa}{2}\int_{0}^{\bar{r}}\rho(t)s^{2}ds=\frac{2\bar{r}^{3}}{9(1+\sigma)^{2}t^{2}}=\frac{2r^{3}}{9(1+\sigma)^{2}t_{0}^{\frac{2}{1+\sigma}}}\frac{1}{t^{\frac{2\sigma}{1+\sigma}}}, \tag{30}\] so \(\dot{M}<0\) if \(\sigma>0\). It follows that if \(p=\sigma\rho\), and \(\sigma\) is a positive constant, then the total mass inside a radius of constant \(r\) decreases in time. ## 3 The General Theory of Shock Matching The matching of the FRW and TOV metrics in the next two sections is based on the following theorems that are derived in [11].6 Footnote 6: Theorems 3 and 4 apply to non-null shock surfaces. **Theorem 3**.: _Let \(\Sigma\) denote a smooth three-dimensional shock surface with spacelike normal vector \(\mathbf{n}\) relative to the spacetime metric \(g\), let \(K\) denote the second fundamental form on \(\Sigma\) and let \(G\) denote the Einstein curvature tensor. Assume that the components \(g_{\mu\nu}\) of the gravitational metric \(g\) are continuous up to the boundary on either side separately and Lipschitz continuous across \(\Sigma\) in some fixed coordinate system. Then the following statements are equivalent:_ 1. \([K]=0\) _at each point of_ \(\Sigma\)_._ 2. _The curvature tensors_ \(R^{\mu}_{\nu\sigma\tau}\) _and_ \(G_{\mu\nu}\)_, viewed as second-order operators on the metric components_ \(g_{\mu\nu}\)_, produce no delta function sources on_ \(\Sigma\)_._ 3. _For each point_ \(P\in\Sigma\) _there exists a_ \(C^{1,1}\) _coordinate transformation defined in a neighbourhood of_ \(P\)_, such that, in the new coordinates (which can be taken to be the Gaussian normal coordinates for the surface), the metric components are_ \(C^{1,1}\) _functions of these coordinates._ 4.
_For each_ \(P\in\Sigma\)_, there exists a coordinate frame that is locally Lorentzian at_ \(P\)_, and can be reached within the class of_ \(C^{1,1}\) _coordinate transformations._ _Moreover, if any one of these equivalencies hold, then the Rankine-Hugoniot jump conditions,_ \[[G^{\mu\nu}]n_{\mu}=0 \tag{31}\] _hold at each point on \(\Sigma\)._ The Rankine-Hugoniot jump conditions express the weak form of conservation of energy and momentum across \(\Sigma\) when \(G=\kappa T\). Here \([f]\) denotes the jump in the quantity \(f\) across \(\Sigma\), which is determined by the metric separately on each side of \(\Sigma\) since \(g_{\mu\nu}\) is only Lipschitz continuous across \(\Sigma\). The notation \(C^{1,1}\) denotes a function whose first derivatives are Lipschitz continuous. In the case of spherical symmetry, a stronger result holds. In this case, the jump conditions (31) are implied by the single condition \[[G^{\mu\nu}]n_{\mu}n_{\nu}=0 \tag{32}\] so long as the shock surface is not null and the areas of the spheres of symmetry match smoothly at the shock and change monotonically as the shock evolves. Note that in general, assuming that the angular variables are identified across the shock, we expect conservation to entail two conditions, one for the time and one for the radial components. The fact that the smooth matching of the spheres of symmetry reduces conservation to one condition can be interpreted as an instance of the general principle that directions of smoothness in the metric imply directions of conservation of the sources. **Theorem 4**.: _Assume that \(g\) and \(\bar{g}\) are two spherically symmetric metrics that match Lipschitz continuously across a three dimensional shock surface \(\Sigma\) to form the matched metric \(g\cup\bar{g}\). That is, assume that \(g\) and \(\bar{g}\) are Lorentzian metrics given respectively by_ \[ds^{2} =-a(t,r)dt^{2}+b(t,r)dr^{2}+c(t,r)d\Omega^{2}, \tag{33}\] \[d\bar{s}^{2} =-\bar{a}(\bar{t},\bar{r})dt^{2}+\bar{b}(\bar{t},\bar{r})d\bar{r }^{2}+\bar{c}(\bar{t},\bar{r})d\Omega^{2}, \tag{34}\] _and that there exists a smooth coordinate transformation \(\Psi:(t,r)\rightarrow(\bar{t},\bar{r})\), defined in a neighbourhood of a shock surface \(\Sigma\) given by \(r=r(t)\), such that the metrics agree on \(\Sigma\) (with the implicit assumption that \(\theta\) and \(\varphi\) are identified). Moreover, assume that_ \[c(t,r)=\bar{c}(\Psi(t,r)), \tag{35}\] _in an open neighbourhood of the shock surface \(\Sigma\), so that, in particular, the areas of the two-spheres of symmetry in the barred and unbarred metrics agree at the shock surface. Furthermore, assume that the shock surface \(r=r(t)\) in unbarred coordinates is mapped to the surface \(\bar{r}=\bar{r}(\bar{t})\) by \((\bar{t},\bar{r}(\bar{t}))=\Psi(t,r(t))\), that the normal \(\boldsymbol{n}\) to \(\Sigma\) is non-null, and that_ \[\boldsymbol{n}(c)\neq 0 \tag{36}\] _where \(\boldsymbol{n}(c)\) denotes the derivative of \(c\) in the direction of the vector \(\boldsymbol{n}\).7 Then the following are equivalent:_ Footnote 7: That is, we assume that the areas of the two-spheres of symmetry change monotonically in the direction normal to the surface. For example, if \(c=r^{2}\), then \(\frac{\partial c}{\partial t}=0\), so the assumption \(\boldsymbol{n}(c)\neq 0\) is valid except when \(\boldsymbol{n}=\frac{\partial}{\partial t}\), in which case the rays of the shock surface would be spacelike. 
Thus the shock speed would be faster than the speed of light if our assumption \(\boldsymbol{n}(c)\neq 0\) failed in the case \(c=r^{2}\). 1. _The components of the metric_ \(g\cup\bar{g}\) _in any Gaussian normal coordinate system are_ \(C^{1,1}\) _functions of these coordinates across the surface_ \(\Sigma\)_._ 2. \([G^{\mu\nu}]n_{\mu}=0\)_._ 3. \([G^{\mu\nu}]n_{\mu}n_{\nu}=0\)_._ 4. \([K]=0\)_._ _where \([f]=\bar{f}-f\) denotes the jump in the quantity \(f\) across \(\Sigma\), and \(K\) is the second fundamental form on the shock surface._ ## 4 Shock-Wave Solutions Inside the Hubble Radius - The Case \(r_{*}=0\) To construct the family of shock-wave solutions for parameter values \(0<\sigma\leq 1\) and \(r_{*}=0\), we match the explicitly defined solution (25)-(27) of the FRW metric (1) to the explicitly given TOV metric (2) outside the black hole, that is, with \(A>0\). In this case, we can bypass the problem of deriving and solving the ODE for the shock surface and constraints discussed above, by actually deriving the explicit solution of the Einstein-Euler equations of TOV form that meets these equations. This explicitly defined solution represents the general relativistic version of a static singular isothermal fluid sphere, that is, singular because it has an inverse square density profile and isothermal because the relationship between the density and pressure is \(\bar{p}=\bar{\sigma}\bar{\rho}\) with constant \(\bar{\sigma}\). Assuming the stress-energy-momentum tensor for a perfect fluid and assuming that the density and pressure depend only on \(\bar{r}\), the Einstein-Euler equations for the TOV metric (2) outside the black hole are equivalent to the Oppenheimer-Volkoff system: \[\frac{dM}{d\bar{r}} =4\pi\bar{r}^{2}\bar{\rho}, \tag{37}\] \[-\bar{r}^{2}\frac{d\bar{p}}{d\bar{r}} =\mathcal{G}M\bar{\rho}\left(1+\frac{\bar{p}}{\bar{\rho}}\right)\left(1+\frac{4\pi\bar{r}^{3}\bar{p}}{M}\right)\left(1-\frac{2\mathcal{G}M}{\bar{r}}\right)^{-1}. \tag{38}\] Integrating (37) we obtain the usual interpretation of \(M\) as the total mass inside radius \(\bar{r}\), \[M(\bar{r})=\int_{0}^{\bar{r}}4\pi\xi^{2}\bar{\rho}(\xi)d\xi. \tag{39}\] The metric component \(B\) is determined from \(\bar{\rho}\) and \(M\) through the equation \[\frac{1}{B}\frac{dB}{d\bar{r}}=-\frac{2}{\bar{p}+\bar{\rho}}\frac{d\bar{p}}{d\bar{r}}. \tag{40}\] Assuming \[\bar{p} =\bar{\sigma}\bar{\rho}, \tag{41}\] \[\bar{\rho} =\frac{\gamma}{\bar{r}^{2}}, \tag{42}\] for some constants \(\bar{\sigma}\) and \(\gamma\), and substituting this into (39), we obtain \[M(\bar{r})=4\pi\gamma\bar{r}. \tag{43}\] Putting (41)-(43) into (38) and simplifying yields the identity \[\gamma=\frac{1}{2\pi\mathcal{G}}\left(\frac{\bar{\sigma}}{1+6\bar{\sigma}+\bar{\sigma}^{2}}\right). \tag{44}\] From (3) and (43) we obtain \[A=1-8\pi\mathcal{G}\gamma<1. \tag{45}\] Applying (40) leads to \[B=B_{0}\left(\frac{\bar{\rho}}{\bar{\rho}_{0}}\right)^{-\frac{2\bar{\sigma}}{1+\bar{\sigma}}}=B_{0}\left(\frac{\bar{r}}{\bar{r}_{0}}\right)^{\frac{4\bar{\sigma}}{1+\bar{\sigma}}}. \tag{46}\] By rescaling the time coordinate, we can take \(B_{0}=1\) at \(\bar{r}_{0}=1\), in which case (46) reduces to \[B=\bar{r}^{\frac{4\bar{\sigma}}{1+\bar{\sigma}}}. \tag{47}\] We conclude that when (44) holds, (41)-(45) and (46) provide an explicit solution of the Einstein-Euler equations of TOV type for each \(0<\bar{\sigma}\leq 1\).8 By (45), these solutions are defined outside the black hole since \(\bar{r}>2M\).
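As a quick consistency check (an illustrative sympy sketch, not part of the original construction), one can verify that the ansatz (41)-(43) solves the Oppenheimer-Volkoff equation (38) precisely when \(\gamma\) is given by (44), and that (40) then reproduces the exponent appearing in (47).

```python
import sympy as sp

G, rbar, sigb = sp.symbols('G rbar sigmabar', positive=True)

gamma = sigb / (2 * sp.pi * G * (1 + 6 * sigb + sigb**2))   # Eq. (44)
rho = gamma / rbar**2                                       # Eq. (42)
p = sigb * rho                                              # Eq. (41)
M = 4 * sp.pi * gamma * rbar                                # Eq. (43)

# Oppenheimer-Volkoff equation (38)
lhs = -rbar**2 * sp.diff(p, rbar)
rhs = G * M * rho * (1 + p / rho) * (1 + 4 * sp.pi * rbar**3 * p / M) / (1 - 2 * G * M / rbar)
print(sp.simplify(lhs - rhs))    # 0: (44) is exactly what makes (38) hold

# Eq. (40) gives d(ln B)/d(ln rbar) = 4*sigmabar/(1 + sigmabar), the exponent in (47).
dlnB = -2 * rbar * sp.diff(p, rbar) / (p + rho)
print(sp.simplify(dlnB))         # 4*sigmabar/(sigmabar + 1)
```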
When \(\bar{\sigma}=\frac{1}{3}\), (44) yields \(\gamma=\frac{3}{56\pi\mathcal{G}}\) ([24], equation (11.4.13)). Footnote 8: In this case, an explicit solution of TOV type was first found by Tolman [23] and rediscovered in the case \(\bar{\sigma}=\frac{1}{3}\) by Misner and Zapolsky ([24], page 320). To match the explicitly given FRW solution (25)-(27) with equation of state \(p=\sigma\rho\) to the explicitly given TOV solution (41)-(47) with equation of state \(\bar{p}=\bar{\sigma}\bar{\rho}\) across a shock interface, we first set \(\bar{r}=Rr\) to match the spheres of symmetry and then match the timelike and spacelike components of the corresponding metrics in standard Schwarzschild coordinates. The matching of the \(d\bar{r}^{2}\) coefficients yields the conservation of mass condition that implicitly specifies the shock surface \(\bar{r}=\bar{r}(t)\), \[M(\bar{r})=\frac{4\pi}{3}\rho(t)\bar{r}^{3}. \tag{48}\] Using this together with (42) and (43) gives the following two relations that hold at the shock surface: \[\bar{r} =\sqrt{\frac{3\gamma}{\rho(t)}}, \tag{49}\] \[\rho =\frac{3}{4\pi}\frac{M}{\bar{r}(t)^{3}}=\frac{3\gamma}{\bar{r}(t) ^{2}}=3\bar{\rho}. \tag{50}\] Matching the \(d\bar{t}^{2}\) coefficients on the shock surface determines the integrating factor \(\psi\) (see Section 2) in a neighbourhood of the shock surface by assigning initial conditions for (12). Finally, the conservation constraint \([T^{\mu\nu}]n_{\mu}n_{\nu}=0\) leads to the single condition \[(1-A)(\rho+\bar{p})(p+\bar{\rho})^{2}+\left(1-\frac{1}{A}\right)(\bar{\rho}+ \bar{p})(\rho+p)^{2}+(p-\bar{p})(\rho-\bar{\rho})^{2}=0, \tag{51}\] which upon using \(p=\sigma\rho\) and \(\bar{p}=\bar{\sigma}\bar{\rho}\) is satisfied providing \(\sigma\) and \(\bar{\sigma}\) are related by \[\bar{\sigma}=\frac{1}{2}\sqrt{9\sigma^{2}+54\sigma+49}-\frac{3}{2}\sigma- \frac{7}{2}=:H(\sigma). \tag{52}\] Alternatively, we can solve for \(\sigma\) in (52) and write this relation as \[\sigma=\frac{\bar{\sigma}(\bar{\sigma}+7)}{3(1-\bar{\sigma})}. \tag{53}\] This guarantees that conservation holds across the shock surface. It thus follows from Theorem 4 that all of the equivalencies in Theorem 3 hold across the shock surface. Note that \(H(0)=0\) and to leading order \[\bar{\sigma}=\frac{3}{7}\sigma+O(\sigma^{2}) \tag{54}\] as \(\sigma\to 0\). Within the physical region \(0\leq\sigma,\bar{\sigma}\leq 1\), \(H^{\prime}(\sigma)>0\), \(\bar{\sigma}<\sigma\) and \[H\left(\frac{1}{3}\right) =\sqrt{17}-4\approx 0.1231,\] \[H(1) =\sqrt{28}-5\approx 0.2915.\] Using the formulas for the FRW metric in (25)-(27) and setting \(R_{0}=1\) at \(\rho=\rho_{0}\), \(t=t_{0}\), we obtain the following formulas for the shock position: \[\bar{r}(t) =\alpha t, \tag{55}\] \[r(t) =\frac{\bar{r}(t)}{R(t)}=\beta t^{\frac{1+3\sigma}{3+3\sigma}}, \tag{56}\] where \[\alpha =3(1+\sigma)\sqrt{\frac{\bar{\sigma}}{1+6\bar{\sigma}+\bar{ \sigma}^{2}}}, \tag{57}\] \[\beta =\alpha^{\frac{1+3\sigma}{3+3\sigma}}\left(\frac{3\gamma}{\rho_{0 }}\right)^{\frac{1}{3+3\sigma}}. \tag{58}\] It follows from (43) that \(A>0\) and from (56) that \(r_{*}=\lim_{t\to 0}r(t)=0\). The entropy condition that the shock wave be compressive follows from the fact that \(\bar{\sigma}=H(\sigma)<\sigma\). 
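These relations can also be spot-checked numerically. The sketch below (illustrative only, in units \(c=\mathcal{G}=1\)) evaluates \(\bar{\sigma}=H(\sigma)\) from (52), confirms that the conservation constraint (51) is then satisfied once (44), (45) and (50) are used, and computes the shock-position coefficient \(\alpha\) of (57).

```python
import numpy as np

def sigma_bar(sigma):
    """Eq. (52): TOV parameter matched to the FRW sound-speed parameter sigma."""
    return 0.5 * np.sqrt(9 * sigma**2 + 54 * sigma + 49) - 1.5 * sigma - 3.5

def jump_residual(sigma, rho_bar=1.0):
    """Residual of the conservation constraint (51), using p = sigma*rho,
    pbar = sigma_bar*rho_bar, rho = 3*rho_bar (Eq. 50) and A from (44)-(45)."""
    sb = sigma_bar(sigma)
    gamma = sb / (2 * np.pi * (1 + 6 * sb + sb**2))   # Eq. (44) with G = 1
    A = 1 - 8 * np.pi * gamma                         # Eq. (45)
    rho, p, pbar = 3 * rho_bar, 3 * sigma * rho_bar, sb * rho_bar
    return ((1 - A) * (rho + pbar) * (p + rho_bar)**2
            + (1 - 1 / A) * (rho_bar + pbar) * (rho + p)**2
            + (p - pbar) * (rho - rho_bar)**2)

for sigma in (1 / 3, 0.7, 1.0):
    sb = sigma_bar(sigma)
    alpha = 3 * (1 + sigma) * np.sqrt(sb / (1 + 6 * sb + sb**2))   # Eq. (57)
    print(f"sigma={sigma:.3f}  sigma_bar={sb:.4f}  "
          f"residual={jump_residual(sigma):.1e}  rbar(t)={alpha:.3f}*t")
    # the residual is zero up to floating-point error, confirming (51)
```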
Thus we conclude that for each \(0<\sigma\leq 1\), \(r_{*}=0\), the solutions constructed in (41)-(58) define a one-parameter family of shock-wave solutions that evolve everywhere outside the black hole, which implies that the distance from the shock wave to the FRW centre is less than one Hubble length for all \(t>0\). Using (55) and (56), one can determine the shock speed and check when the Lax characteristic condition [7] holds at the shock. The result is the following (see [12, 1] for details).9 Footnote 9: Note that even when the shock speed is larger than \(c\), only the wave, and not the sound speeds or any other physical motion, exceeds the speed of light. **Theorem 5**.: _Let_ \[\sigma_{1} =\frac{\sqrt{10}+1}{9}\approx 0.462,\] \[\sigma_{2} =\frac{\sqrt{5}}{3}\approx 0.745,\] _then the Lax characteristic condition holds at the shock if and only if \(0<\sigma\leq\sigma_{1}\) and the shock speed is subluminal (less than the speed of light) if and only if \(0<\sigma<\sigma_{2}\)._ The explicitly defined solution in the case \(r_{*}=0\) can be interpreted as the general relativistic version of a shock-wave explosion into a static singular isothermal fluid sphere, known in the Newtonian case as a simple model for star formation [16]. As the scenario goes, a star begins as a diffuse cloud of gas. The cloud slowly contracts under its own gravitational force by radiating energy out as gravitational potential energy is converted into kinetic energy. This contraction continues until the gas cloud reaches the point where the mean free path for transmission of light is small enough that light is scattered, instead of transmitted, through the cloud. The scattering of light within the gas cloud has the effect of equalising the temperature within the cloud, and at this point the gas begins to drift toward the most compact configuration of the density that balances the pressure when the equation of state is isothermal. This configuration is a static singular isothermal sphere, the general relativistic version of which is the explicitly given TOV solution beyond the shock wave when \(r_{*}=0\). This solution in the Newtonian case is also inverse square in the density and pressure and so the density tends to infinity at the centre of the sphere. Eventually, the high density at the centre ignites a thermonuclear reaction. The result is a shock-wave explosion emanating from the centre of the sphere, with this explosion signifying the birth of a star. The solutions when \(r_{*}=0\) represent the general relativistic version of such a shock-wave explosion. ## 5 Shock-Wave Solutions Beyond the Hubble Radius - The Case \(r_{*}>0\) When the shock wave is beyond one Hubble length from the FRW centre, we obtain a family of shock-wave solutions for each \(0<\sigma\leq 1\) and \(r_{*}>0\) by matching the FRW metric (1) to a TOV metric of the form (2) to form a shock wave under the assumption that \[A(\bar{r}) =1-\frac{2M(\bar{r})}{\bar{r}}=:1-N(\bar{r})<0. \tag{59}\] In this case, \(\bar{r}\) is the timelike variable. Assuming the stress-energy-momentum tensor \(T\) is taken to be that of a perfect fluid co-moving with the TOV metric, the Einstein equations \(G=\kappa T\) (inside the black hole) take the form [19], \[\bar{p}^{\prime} =\frac{\bar{p}+\bar{\rho}}{2}\frac{N^{\prime}}{N-1}, \tag{60}\] \[N^{\prime} =-\left(\frac{N}{\bar{r}}+\kappa\bar{p}\bar{r}\right),\] (61) \[\frac{B^{\prime}}{B} =-\frac{1}{N-1}\left(\frac{N}{\bar{r}}+\kappa\bar{\rho}\right). 
\tag{62}\] The system (60)-(62) defines the simplest class of gravitational metrics that contain matter, evolve inside the black hole, and are such that the mass function \(M(\bar{r})<\infty\) at each fixed time \(\bar{r}\). System (60)-(62) for \(A<0\) differs substantially from the TOV equations for \(A>0\) because, for example, the energy density \(T^{00}\) is equated with the timelike component \(G^{rr}\) when \(A<0\) but with \(G^{tt}\) when \(A>0\). In particular, this implies that inside the black hole the mass function \(M(\bar{r})\) does not have the interpretation of the total mass inside radius \(\bar{r}\) as it does outside the black hole. Equations (61), (62) do not have the same character as (37), (38), and the relation \(\bar{p}=\bar{\sigma}\bar{\rho}\) with constant \(\bar{\sigma}\) is inconsistent with (61), (62) together with the conservation constraint and the FRW assumption \(p=\sigma\rho\) for shock-wave matching. Thus, instead of looking for an explicit solution of (61), (62) ahead of time, as in the case \(r_{*}=0\), we assume the FRW solution (25)-(27) and derive the ODEs that describe the TOV metrics that match this FRW metric Lipschitz continuously across a shock surface, and then impose the conservation, entropy and equation of state constraints at the end. Matching a given flat (\(k=0\)) FRW metric to a TOV metric inside the black hole across a shock interface leads to the system of ODEs [19], \[\frac{du}{dN} =-\left(\frac{(1+u)}{2(1+3u)N}\right)\left(\frac{(3u-1)(\sigma-u) N+6u(1+u)}{(\sigma-u)N+(1+u)}\right), \tag{63}\] \[\frac{d\bar{r}}{dN} =-\frac{1}{1+3u}\frac{\bar{r}}{N}, \tag{64}\] with conservation constraint \[w=\frac{(\sigma-u)N-\sigma(1+u)}{(\sigma-u)N+(1+u)}, \tag{65}\] where \[u=\frac{\bar{p}}{\rho}, w=\frac{\bar{\rho}}{\rho}, \sigma=\frac{p}{\rho}. \tag{66}\] Here \(\rho\) and \(p\) denote the (known) FRW density and pressure and all variables are evaluated at the shock. Solutions of (63)-(65) determine the (unknown) TOV metrics that match the given FRW metric Lipschitz continuously across a shock interface such that conservation of energy and momentum hold across the shock and such that there are no delta function sources at the shock [6, 14]. Note that the dependence of (63)-(65) on the FRW metric is only through the variable \(\sigma\), and so the advantage of taking constant \(\sigma\) is that the whole solution is then determined by the non-autonomous scalar equation (63) alone. We take as the entropy constraint the condition that \[0<\bar{p}<p, 0<\bar{\rho}<\rho, \tag{67}\] and to ensure a physically reasonable solution, we impose the equation of state constraint on the TOV side of the shock,10 Footnote 10: This is equivalent to the dominant energy condition [2]. \[0<\bar{p}<\bar{\rho}. \tag{68}\] Condition (67) implies that outgoing shock waves are compressive. Inequalities (67) and (68) are both implied by the single condition [19], \[\frac{1}{N}<\left(\frac{1-u}{1+u}\right)\left(\frac{\sigma-u}{\sigma+u}\right). \tag{69}\] Since \(\sigma\) is constant, equation (63) uncouples from (64), and thus solutions of system (63)-(65) are determined by the scalar non-autonomous equation (63). Making the change of variable \(S=\frac{1}{N}\), which transforms the Big Bang \(N\to\infty\) over to a rest point at \(S\to 0\), we obtain, \[\frac{du}{dS}=\left(\frac{(1+u)}{2(1+3u)S}\right)\left(\frac{(3u-1)(\sigma-u) +6u(1+u)S}{(\sigma-u)+(1+u)S}\right). 
\tag{70}\] Note that the conditions \(N>1\) and \(0<\bar{p}<p\) restrict the domain of (70) to the region \(0<u<\sigma<1\), \(0<S<1\). The next theorem gives the existence of solutions for \(0<\sigma\leq 1\), \(r_{*}>0\) inside the black hole [18]. **Theorem 6**.: _For every \(0<\sigma<1\) there exists a unique solution \(u_{\sigma}(S)\) of (70) such that (69) holds for all \(0<S<1\). Moreover,_ \[0<u_{\sigma}(S)<\bar{u}, \lim_{S\to 0}u_{\sigma}(S)=\bar{u}, \lim_{S\to 1}\bar{p}=0, \lim_{S\to 1}\bar{\rho}=0, \tag{71}\] _where_ \[\bar{u}=\min\left\{\frac{1}{3},\sigma\right\}. \tag{72}\] _Furthermore, for each of the solutions \(u_{\sigma}(S)\), the shock position is determined by the solution of (64), which in turn is determined uniquely by an initial condition which can be taken to be the FRW radial position of the shock wave at the instant of the Big Bang,_ \[r_{*}=\lim_{S\to 0}r(S)>0. \tag{73}\] Concerning the the shock speed, we have the following theorem. **Theorem 7**.: _Let \(0<\sigma<1\). Then the shock speed \(s_{\sigma}(S):=s(u_{\sigma}(S))<1\) for all \(0<S\leq 1\) if and only if \(\sigma<\frac{1}{3}\), that is, the shock speed is subluminal if and only if \(\sigma<\frac{1}{3}\)._ For the shock speed near the Big Bang (\(S=0\)), we have the following theorem: **Theorem 8**.: _The shock speed at the Big Bang (\(S=0\)) is given by:_ \[\lim_{S\to 0}s_{\sigma}(S) =0, \sigma<\frac{1}{3}, \tag{74}\] \[\lim_{S\to 0}s_{\sigma}(S) =1, \sigma=\frac{1}{3},\] (75) \[\lim_{S\to 0}s_{\sigma}(S) =\infty, \sigma>\frac{1}{3}. \tag{76}\] Theorem 8 shows that the equation of state \(p=\frac{1}{3}\rho\) plays a special role in the analysis when \(r_{*}>0\), and only for this equation of state does the shock wave emerge at the Big Bang at a finite non-zero speed, the speed of light. Moreover, (72) implies that in this case, the correct relation \(\bar{p}=\bar{\sigma}\bar{\rho}\) is also achieved in the limit \(S\to 0\). The result (71) implies that (neglecting the pressure \(p\) at this time onward) the solution continues to a \(k=0\) Oppenheimer-Snyder solution outside the black hole for \(S>1\). It follows that the shock wave will first become visible at the FRW centre \(\bar{r}=0\) at the moment \(t=t_{0}\) (where \(R(t_{0})=1\)) when the Hubble length \(1/H_{0}:=1/H(t_{0})\) satisfies \[\frac{1}{H_{0}}=\frac{1+3\sigma}{2}r_{*}, \tag{77}\] where \(r_{*}\) is the FRW position of the shock at the instant of the Big Bang. At this time, the number of Hubble lengths \(\sqrt{N_{0}}\) from the FRW centre to the shock wave at time \(t=t_{0}\) can be estimated by \[1\leq\frac{2}{1+3\sigma}\leq\sqrt{N}_{0}\leq\frac{2}{1+3\sigma}e^{\sqrt{3 \sigma}\left(\frac{1+3\sigma}{1+\sigma}\right)}. \tag{78}\] Thus, in particular, the shock wave will still lie beyond the Hubble radius \(1/H_{0}\) at the FRW time \(t_{0}\) when it first becomes visible. Furthermore, the time \(t_{crit}>t_{0}\) at which the shock wave will emerge from the white hole, given that \(t_{0}\) is the first instant at which the shock becomes visible at the FRW centre, can be estimated by \[\frac{2}{1+3\sigma}e^{\frac{1}{4}\sigma} \leq\frac{t_{crit}}{t_{0}}\leq\frac{2}{1+3\sigma}e^{\frac{t_{crit }}{1+\sigma}}, 0<\sigma<\frac{1}{3}, \tag{79}\] \[e^{\frac{\sqrt{3}}{4}} \leq\frac{t_{crit}}{t_{0}}\leq e^{\frac{3}{2}}, \sigma=\frac{1}{3}. 
\tag{80}\] Inequalities (79), (80) imply, for example, that at the Oppenheimer-Snyder limit \(\sigma=0\): \[\sqrt{N_{0}}=2, \frac{t_{crit}}{t_{0}}=2\] and in the limit \(\sigma=\frac{1}{3}\): \[1<\sqrt{N_{0}}\leq 4.5, 1.8\leq\frac{t_{crit}}{t_{0}}\leq 4.5.\] We can conclude that the moment \(t=t_{0}\), when the shock wave first becomes visible at the FRW centre, the shock wave must lie within 4.5 Hubble lengths of the FRW centre. Throughout the expansion up until this time, the expanding universe must lie entirely within a white hole, that is, the Universe will eventually emerge from this white hole, but not until some later time \(t_{crit}\), where \(t_{crit}\) does not exceed \(4.5t_{0}\). ## 6 Self-Similar Extensions of FRW-TOV Shock-Waves Inside the Hubble Radius The previous two sections focused on constructing general relativistic shock waves with the explicitly known flat (\(k=0\)) FRW metric forming the interior expanding wave. In this section we consider a broader family of expanding waves and demonstrate how to impose conservation across a shock surface with a TOV spacetime on the exterior. This achieves three things: The first is the determination of all expanding waves that are regular at the radial centre (\(\bar{r}=0\)) that can be matched with conservation to a TOV spacetime inside the Hubble radius (outside the black hole), the second is the determination of the unique expanding wave with a pure radiation equation of state (\(p=\frac{1}{3}\rho\)) that can be matched to the pure radiation TOV spacetime, and the third is the determination of the accelerated expansion that results from a general relativistic explosion with a TOV exterior. Extending the flat FRW spacetime to a family of expanding waves is motivated by the fact that both the FRW and TOV spacetimes are self-similar in the variable \(\xi=r/t\), that is, the metric components and hydrodynamic variables depend only on the single variable \(\xi\). The self-similarity of the TOV spacetime means that to match an expanding wave to this spacetime with conservation across the shock and with an equation of state of the form \(p=\sigma\rho\), it must be the case that the expanding wave also be self-similar in \(\xi\)[3]. Furthermore, requiring that the expanding waves have a _regular centre_, leaves only one family of self-similar expanding waves, with these being the one-parameter family of _asymptotically FRW_ spacetimes described independently in [4] and [20]. Thus it is the case that the family of spherically symmetric self-similar perturbations of the FRW metric account for all physically admissible expanding waves that can be matched to a self-similar TOV metric with conservation across the shock. Any self-similar metric may be written, without loss of generality, in the self-similar Schwarzschild coordinate form \[ds^{2}=-B(\xi)dt^{2}+\frac{1}{A(\xi)}dr^{2}+r^{2}[d\theta^{2}+\sin^{2}(\theta) d\phi^{2}]. \tag{81}\] Under the assumption of spherical symmetry, the fluid four-velocity may also be written, without loss of generality, as \(\mathbf{u}=(u^{0},u^{1},0,0)\). Under the normalisation condition \(g(\mathbf{u},\mathbf{u})=-1\), the fluid four-velocity has only one independent component and can thus be fully specified through the _Schwarzschild coordinate velocity_, defined by \[v=\frac{1}{\sqrt{AB}}\frac{u^{1}}{u^{0}}. 
\tag{82}\] Together with \(A\), \(B\), \(\rho\) and \(p\), the Schwarzschild coordinate velocity \(v\) is one of five unknown variables that completely specify a solution to the self-similar Einstein-Euler equations. As there are only four independent components of the spherically symmetric Einstein-Euler equations, a barotropic equation of state of the form \(p=p(\rho)\) is used to close the system. However, spherical symmetry and self-similarity in the variable \(\xi\) restrict this equation of state to the form \(p=\sigma\rho\) for constant \(\sigma\)[3]. Following the development of Smoller and Temple in [20], substituting the metric ansatz and equation of state into the Einstein-Euler equations yields the system of nonlinear ODE: \[\xi\frac{dA}{d\xi} =-\frac{(3+3\sigma)(1-A)v}{\{\cdot\}_{S}}, \tag{83}\] \[\xi\frac{dG}{d\xi} =-G\left[\left(\frac{1-A}{A}\right)\frac{(3+3\sigma)[(1+v^{2})G-2v ]}{2\{\cdot\}_{S}}-1\right],\] (84) \[\xi\frac{dv}{d\xi} =-\left(\frac{1-v^{2}}{2\{\cdot\}_{D}}\right)\left[3\sigma\{ \cdot\}_{S}+\left(\frac{1-A}{A}\right)\frac{(3+3\sigma)^{2}\{\cdot\}_{N}}{4\{ \cdot\}_{S}}\right], \tag{85}\] in addition to the constraint \[\rho=\frac{3(1-v^{2})(1-A)G}{\kappa r^{2}\{\cdot\}_{S}}. \tag{86}\] The variable \(G\), not to be confused with the Einstein curvature tensor, is defined by \[G=\frac{\xi}{\sqrt{AB}} \tag{87}\] and the bracketed terms are given (using the form found in [1]) by: \[\{\cdot\}_{S} =3(G-v)-3\sigma v(1-Gv),\] \[\{\cdot\}_{N} =-3(G-v)^{2}+3\sigma v^{2}(1-Gv)^{2},\] \[\{\cdot\}_{D} =\frac{3}{4}(3+3\sigma)\left[(G-v)^{2}-\sigma(1-Gv)^{2}\right].\] Thus a solution is specified fully by the three variables \(A\), \(G\) and \(v\). The asymptotically FRW spacetimes are described by the family of solutions to (83)-(86) with the leading order form as \(\xi\to 0\)[20]: \[A(\xi) \approx 1-\frac{1}{4}a^{2}\xi^{2}+O(\xi^{4}),\] \[G(\xi) \approx\frac{1}{4}(3+3\sigma)\xi+O(\xi^{3}),\] \[v(\xi) \approx\frac{1}{2}\xi+O(\xi^{3}).\] The parameter \(a\) is referred to as the _acceleration parameter_, as changing this parameter value away from \(a=1\), which corresponds to the unperturbed FRW spacetime, changes the accelerated expansion of the spacetime as measured by an observer at the radial centre. This change is specified through the modified red-shift versus luminosity relation \[d_{l}=2ct_{0}\left(z+\frac{a^{2}-1}{2}z^{2}+\frac{(a^{2}-1)(5a^{2}+4)}{10}z^{ 3}+|a-1|O(z^{4})\right) \tag{88}\] where \(d_{l}\) is the luminosity distance, \(t_{0}\) is the time of observation of the radiation and \(z\) is the redshift factor [20, 1].11 It is also important to note that \(a\neq 1\) also breaks the spacial homogeneity of the spacetime, with this change becoming more apparent for larger \(|a-1|\) and less apparent closer to \(\xi=0\). Footnote 11: Note that (88) was originally derived in [20], but due to an error in the original paper this expression was corrected and reproduced in [1]. The asymptotically FRW spacetimes are not known explicitly away from \(\xi=0\), instead these solutions are described by the system of ODE (83)-(85) and constraint (86). In order to match an asymptotically FRW spacetime to a TOV spacetime with conservation holding across the shock, the following lemma is required [1]. **Lemma 1**.: _Let \((A,G,v)\) denote a solution to (83)-(85) and suppose there exists a \(\xi_{0}>0\) such that_ \[A(\xi_{0})=1-2M(\bar{\sigma}). 
\tag{89}\] _Then \((A,G,v)\) can be matched to the TOV spacetime (with equation of state \(\bar{p}=\bar{\sigma}\bar{\rho}\)) on the surface \(\xi=\xi_{0}\) and the Rankine-Hugoniot jump condition is given by_ \[\frac{[\sigma+v^{2}(\xi_{0})]G(\xi_{0})-(1+\sigma)G^{2}(\xi_{0})v(\xi_{0})}{[1+ \sigma v^{2}(\xi_{0})]G(\xi_{0})-(1+\sigma)v(\xi_{0})}=\bar{\sigma}. \tag{90}\] Thus if a solution satisfies (89) and (90), then by Lemma 1 this solution is a shock-wave solution. However, for this shock wave to be stable in the gas-dynamical sense, that is, for the fluid characteristics to impinge on the shock surface from both sides, the following theorem is needed [1]. This theorem establishes the _Lax entropy conditions_, also known as the _Lax characteristic conditions_. **Theorem 9**.: _Let \((A,G,v)\) denote a solution to (83)-(85). If there exists a \(\xi_{0}>0\) such that \((A,G,v)\) can be matched to the TOV spacetime (with equation of state \(\bar{p}=\bar{\sigma}\bar{\rho}\)) to form a shock-wave solution with a subluminal shock speed \((G(\xi_{0})<1)\), then the Lax characteristic conditions are satisfied if:_ 1. \(\sigma=\bar{\sigma}\)_, or_ 2. \(\sigma<\bar{\sigma}\) _and_ \(G(\xi_{0})>\sqrt{\bar{\sigma}}\)_, or_ 3. \(\sigma>\bar{\sigma}\) _and_ \(\{\cdot\}_{D}(\xi_{0})<0\)_._ The addition of the acceleration parameter \(a\) means it is possible to specify both the FRW and TOV equation of state independently, with the Rankine-Hugoniot jump condition then determining \(a\). We see from Theorem 9 that if we were to require both equations of state to model pure radiation (\(p=\frac{1}{3}\rho\)), that is, a uniform equation of state across the shock surface, then the entropy condition is automatically implied from the Rankine-Hugoniot jump condition. Indeed, numerical approximations of such a solution imply \(a\approx 2.58\)[1]. Such a solution, if taken as a cosmological model, would yield an accelerated expansion many orders of magnitude larger than what is currently observed, since the expected value of \(a\) in such a model would be \(a\approx 1\)[20]. However, because this model uses the TOV spacetime outside the black hole, the shock surface would be within the Hubble radius and thus expected to be visible at the present time. It remains an active area of research for Alexander and Temple to construct shock-wave solutions with shock surfaces beyond the Hubble radius and determine the accelerated expansion exhibited by these spacetimes. For the solutions constructed beyond the Hubble radius in Section 5, it is intriguing that both equations of state tend to \(p=\frac{1}{3}\rho\) in the limit of the Big Bang, further reinforcing the expectation that \(a\approx 1\). We finish with the following theorem from [1]. **Theorem 10**.: _There exists an \(a>1\) such that an asymptotically FRW spacetime can be matched to a TOV spacetime within the Hubble radius to form a pure radiation general relativistic shock wave that satisfies the Lax characteristic conditions._ ## 7 Conclusion We have delved into the mathematics behind general relativistic shock waves that admit asymptotically FRW spacetimes as the expanding wave behind the shock. By placing a TOV spacetime on the exterior, these shock waves model the general relativistic analogue of an explosion within a static singular isothermal fluid sphere. Whether such modifications to the FRW spacetime could provide an alternative to the Standard Model of Cosmology first depends on whether the shock surface lies within the Hubble radius. 
If it does, such a shock wave should be presently observable, which is not the case. This means that the TOV spacetime on the exterior must be modified to be within a black hole. We have seen that it is possible to construct a shock wave beyond the Hubble radius through this modification. Such a model has the notable property of containing finite total mass but does not account for the accelerated expansion observed in our Universe today. To introduce an accelerated expansion, it is necessary to extend the flat FRW spacetime to a family of self-similar perturbations. This extension provides an additional free parameter \(a\), which allows for the equation of state to be specified independently on each side of the shock, and in particular, allows both equations of state to model pure radiation. Such a shock wave would thus be applicable in the Radiation Dominated Epoch of the Early Universe. Constructing general relativistic shock waves within and beyond the Hubble radius with the explicitly known flat FRW spacetime behind the shock was considered in Sections 4 and 5 respectively. The extension of Section 4 to using self-similar perturbations of the flat FRW spacetime was considered in Section 6 and permitted the construction of general relativistic shock waves that induce an accelerated expansion. Section 6 resolved the picture within the Hubble radius, an area that has been under active research since Cahill and Taub's seminal paper on self-similar solutions in General Relativity [3]. The extension of Section 5 to considering self-similar perturbations of the flat FRW spacetime behind a shock lying beyond the Hubble radius remains an active area of research for Alexander and Temple.12 If the accelerated expansion induced by this shock wave matches the rate that observations suggest for the Radiation Dominated Epoch, then such a model would offer a mathematically independent mechanism for the accelerated expansion observed today without the need for a cosmological constant, and thus, without the need for dark energy.
2302.14728
Global Context-Aware Person Image Generation
We propose a data-driven approach for context-aware person image generation. Specifically, we attempt to generate a person image such that the synthesized instance can blend into a complex scene. In our method, the position, scale, and appearance of the generated person are semantically conditioned on the existing persons in the scene. The proposed technique is divided into three sequential steps. At first, we employ a Pix2PixHD model to infer a coarse semantic mask that represents the new person's spatial location, scale, and potential pose. Next, we use a data-centric approach to select the closest representation from a precomputed cluster of fine semantic masks. Finally, we adopt a multi-scale, attention-guided architecture to transfer the appearance attributes from an exemplar image. The proposed strategy enables us to synthesize semantically coherent realistic persons that can blend into an existing scene without altering the global context. We conclude our findings with relevant qualitative and quantitative evaluations.
Prasun Roy, Saumik Bhattacharya, Subhankar Ghosh, Umapada Pal, Michael Blumenstein
2023-02-28T16:34:55Z
http://arxiv.org/abs/2302.14728v1
# Global Context-Aware Person Image Generation ###### Abstract We propose a data-driven approach for context-aware person image generation. Specifically, we attempt to generate a person image such that the synthesized instance can blend into a complex scene. In our method, the position, scale, and appearance of the generated person are semantically conditioned on the existing persons in the scene. The proposed technique is divided into three sequential steps. At first, we employ a Pix2PixHD model to infer a coarse semantic mask that represents the new person's spatial location, scale, and potential pose. Next, we use a data-centric approach to select the closest representation from a pre-computed cluster of fine semantic masks. Finally, we adopt a multi-scale, attention-guided architecture to transfer the appearance attributes from an exemplar image. The proposed strategy enables us to synthesize semantically coherent realistic persons that can blend into an existing scene without altering the global context. We conclude our findings with relevant qualitative and quantitative evaluations. ## 1 Introduction Person image generation is a challenging yet necessary task for many recent computer vision applications. Though the problem has been majorly addressed by utilizing different generative algorithms, often, the generation quality does not meet the requirements of the practical applications. Moreover, the existing person image generation algorithms rely on two main factors. First, they heavily utilize the appearance and pose attributes of the target to generate the final image [4, 17, 18, 25, 28, 29, 34]. This approach indirectly demands intricate supervision from the users in the form of keypoints, masks, or text inputs [23, 33]. As these attributes are only associated with the person image being generated, we can assume them as _local attributes_ or _local contexts_. Secondly, the generation processes that rely heavily on local contexts often ignore global contextual information like background, camera perspective, or the presence of other people and objects in the scene. These over-simplified generation techniques result in target images that fail to blend into a complex natural scene. In this paper, we have addressed an exciting yet challenging task of person image generation considering the global context of the scene. The proposed method is entirely data-driven and does not require any local context from the user. We circumvent the necessity of user input by estimating the best possible local attributes for the transfer process using the available global attributes. The estimated local attributes are further refined to generate more realistic person images. The main contributions of the proposed work are as follows. * Unlike most existing methods, the proposed technique considers global attributes to generate person images. Thus, the proposed approach enables us to synthesize human images that can blend into a complex scene with multiple existing persons. * The proposed technique utilizes a data-driven refinement strategy which significantly improves the perceptual quality and visual realism of the generated images. * The data-driven approach provides crude control over the appearance attributes to achieve some extent of generation diversity. * The proposed approach achieves state-of-the-art results in most qualitative and quantitative benchmarks. The rest of the paper is organized as follows. We discuss the relevant literature in Sec. 2. The proposed approach is discussed in Sec. 3. 
Sec. 4 describes the dataset, experimental protocols, and evaluation metrics. The qualitative and quantitative results are analyzed in Sec. 5. A detailed ablation study is discussed in Sec. 6, followed by an analysis of the limitations of the proposed method in Sec. 7. Finally, we conclude the paper in Sec. 8 with a summary of the major findings, potential use cases, and future scopes. ## 2 Related Work Image generation is a complex yet intriguing task in computer vision. Generation of person images under different conditions is particularly important for tasks like pose transfer [19], virtual try-on [6], person re-identification [31] etc. With the advancement of Generative Adversarial Networks (GANs), person image generation algorithms have also seen new success. Most work on person image generation focuses on generating a person in a target pose given a source image and target pose attributes. The target pose attributes are given as keypoints [4, 17, 18, 25, 28, 29, 34], 3D mask [19], or text [23, 33]. In [17], the proposed generation framework consists of novel pose synthesis followed by image refinement. An UNet-based model is designed to generate an initial coarse image, which is refined in the second stage by another generative model. In [18], the authors propose a two-stage generation algorithm with the help of a multi-branched generation network using the target keypoints. Three mapping functions are learned adversarially to map Gaussian noise to the relevant embedding feature space for targeted manipulation of the generated person image. In [2], the authors have addressed the generation problem by synthesizing the keypoint-conditioned foreground and the background separately. Zhu et al. [34] have proposed a keypoint-based pose transfer method by incorporating a progressive attention transfer technique to divide the complex task of the generation into multiple repetitive simpler stages. Researchers have also explored the 3D mask as the conditional attribute in the person image generation pipeline. Li et al. [13] have estimated dense and intrinsic appearance flow between the poses to guide the pixels during the generation process. In [19], the authors propose an end-to-end model that incorporates surface-based pose estimation and a generative model to perform the pose transfer task. Although several algorithms are proposed for person image generation, they require extensive information about the target pose for the generation process. Moreover, most existing algorithms consider the local attributes in the process, which makes them unsuitable for complex scenes. Recently, in [5], the authors have considered both local and global attributes for the person insertion problem. While the algorithm exhibits some promising initial results, generating visually appealing scene-aware person images is a largely unexplored problem, with [5] being the only attempt in recent literature to the best of our knowledge. ## 3 Method We propose a three-stage sequential architecture to address the problem. In the first stage, we estimate the potential location and pose of the target person from the global geometric context of the existing persons in the scene. The generated coarse semantic map performs appreciably in providing an estimate of the target location and scale. However, such a crude semantic map performs extremely poorly while attempting to transfer appearance attributes from an exemplar to render the final target. 
To mitigate this issue, we have taken a data-driven refinement strategy in the second stage to retrieve a representative semantic map for the target from an existing knowledge base. Finally, we render the target semantic map in the third stage by transferring appearance attributes from an exemplar of the target person. We show an overview of the proposed architecture in Fig. 2. ### Coarse Generation Network We follow a similar approach to [5] to generate a rough estimate of the target person's position, scale and pose. This network performs an image-to-image translation from a semantic map \(S\) containing \(N\) persons to another semantic map \(T\) having the \((N+1)\)_-th_ person. The network aims to generate a coarse semantic map for a new person such that the new person is contextually relevant to the existing persons in the scene. We show a few examples of the coarse generation network in Fig. 3. Both \(S\) and \(T\) are single-channel semantic maps containing eight labels corresponding to eight regions of a human body. As mentioned by [5], this reduced set of label groups simplifies the semantic map generation while retaining sufficient information for high-quality image synthesis in the following stages. The reduced set of semantic label groups contains - background (0), hair (1), face (2), torso and upper limbs (3), upper body wear (4), lower body wear (5), lower limbs (6), and shoes (7). In [5], the authors also provide one channel for the face and another optional channel to specify the region boundary for the target. In contrast, we do not consider these additional channels due to our different approaches to refinement and rendering in later stages. The coarse generation network directly adopts the default encoder-decoder architecture of Pix2PixHD [20]. We use a spatial dimension of \(368\times 368\) for the semantic maps. The original semantic maps are resized while maintaining the aspect ratio and then padded with zero to have the desired square dimension. We use nearest-neighbor interpolation when resizing to preserve the number of label groups in the semantic maps. The only modification we apply to the default Pix2PixHD architecture is disabling the VGG feature-matching loss because it is possible to have a wide variation in the target person's location, scale, and pose, which leads to significant uncertainty in the generated semantic map. ### Data-Driven Refinement Strategy The rough semantic map provides a reasonable estimate for the target person, which is contextually coherent with the global semantics of the scene. While the spatial location and scale of the target are immediately usable to localize a new person into the scene, the semantic map itself is not sufficiently detailed to produce realistic results. In [5], the authors use a multi-conditional rendering network (MCRN) on the roughly estimated semantic map, followed by a face refinement network (FRN) on the rendered target. While this approach produces some decent results, it is limited in scope due to solely relying on the initially generated rough semantic map from the essence generation network (EGN). We notice two crucial issues in this regard. Firstly, the use of a coarse semantic map highly affects the visual realism of the generated image. Secondly, it is not easy to achieve control over the appearance of the generated target with a fixed semantic representation. For example, EGN may produce a semantic map that appears to be a man while the intended exemplar is a woman.
The subtle difference in core appearance attributes between the estimated semantic map and exemplar poses a significant challenge in practically usable generation results. We attempt to improve visual quality and appearance diversity in the generated results by introducing a data-driven refinement strategy with a clustered knowledge base. We collect a set of finely annotated semantic maps of high-quality human images to construct a small database having a diverse range of natural poses. This database Figure 3: Qualitative results of the coarse generation in stage 1. Semantic maps of existing persons are marked in gray, and the coarse estimation of the target semantic map is marked in purple. Figure 2: The architecture of the proposed method consists of three main stages. (a) Coarse semantic map estimation from the global scene context in stage 1. (b) Data-driven refinement of the initially estimated coarse semantic map in stage 2. (c) Rendering the refined semantic map by transferring appearance attributes from an exemplar in stage 3. works as a knowledge base for our method. To optimally split the knowledge base into several clusters, we first encode the individual semantic maps using a VGG-19 [26] model pretrained on ImageNet [3]. The semantic maps are resized to a square grid of size \(128\times 128\), maintaining the aspect ratio and using zero padding. The resampling uses nearest-neighbor interpolation. After passing the resized image through the VGG-19 network, the final feature extraction layer produces an output of dimension \(512\times 4\times 4\). To avoid too many features during clustering, we apply adaptive average pooling to map the feature space into a dimension of \(512\times 1\times 1\). The pooled feature space is flattened to a 512-dimensional feature vector. We perform K-means clustering on the encoded feature vectors corresponding to the samples in the knowledge base. From our ablation study in Sec. 6, we have found 8 clusters work best for our case. After the algorithm converges, we split the knowledge base by the algorithm-predicted class labels. During refinement, the coarse semantic map is center-cropped and resized to dimension \(128\times 128\), maintaining the aspect ratio. The resampling uses the same nearest-neighbor interpolation as earlier. The resized coarse semantic map is then similarly encoded and passed to the K-means algorithm for inference. After receiving a cluster assignment, we measure the cosine similarity between the encoded coarse semantic map and every sample previously classified as a cluster member. The refinement returns one or more existing samples by the similarity score-based ranking. The retrieved selection acts as the refined semantic map of the target person. ### Appearance Attribute Transfer and Rendering In [5], the authors train the rendering network on single instances extracted from multi-person images. In contrast, we impose the rendering task as a pose-transfer problem to transfer the appearance attributes conditioned on the pose transformation. Let us assume a pair of images \(I_{A}\) and \(I_{B}\) of the same person but with different poses \(P_{A}\) and \(P_{B}\), respectively. We aim to train the network such that it renders a realistic approximation \(\hat{I}_{B}\) (generated) of \(I_{B}\) (target) by conditioning the pose transformation \((P_{A},P_{B})\) on the appearance attributes of \(I_{A}\) (exemplar). 
We represent each pose with a semantic map consisting of 7 label groups - background (0), hair (1), face (2), skin (3), upper body wear (4), lower body wear (5), and shoes (6). For effective attribute transfer on different body regions, the semantic map \(P\) is converted into a 6-channel binary heatmap (0 for the background and 1 for the body part) \(H\) where each channel indicates one specific body region. We use a spatial dimension of \(3\times 256\times 256\) for \(I_{A}\), \(I_{B}\), and \(\hat{I}_{B}\). Consequently, the same for \(H_{A}\) and \(H_{B}\) is \(6\times 256\times 256\). We adopt a multi-scale attention-based generative network [21, 22] for rendering. The generator \(\mathcal{G}\) takes the exemplar \(I_{A}\) and the depth-wise concatenated heatmaps \((H_{A},H_{B})\) as inputs to produce an estimate \(\hat{I}_{B}\) for the target \(I_{B}\). The discriminator \(\mathcal{D}\) takes the channel-wise concatenated image pairs, either \((I_{A},I_{B})\) (real) or \((I_{A},\hat{I}_{B})\) (fake), to estimate a binary class probability map for \(70\times 70\) receptive fields (input patches). The generator \(\mathcal{G}\) has two separate but identical encoding pathways for \(I_{A}\) and \((H_{A},H_{B})\). At each branch, the input is first mapped to a \(64\times 256\times 256\) feature space by convolution (\(3\times 3\) kernel, stride=1, padding=1, bias=0), batch normalization, and ReLU activation. The feature space is then passed through 4 consecutive downsampling blocks, where each block reduces the spatial dimension by half while doubling the number of feature maps. Each block consists of convolution (\(4\times 4\) kernel, stride=2, padding=1, bias=0), batch normalization, and ReLU activation, followed by a basic residual block [7]. The network has a single decoding path that upsamples the combined feature space from both the encoding branches. We have 4 consecutive upsampling blocks in the decoder, where each block doubles the spatial dimension while compressing the number of feature maps by half. Each block consists of transposed convolution (\(4\times 4\) kernel, stride=2, padding=1, bias=0), batch normalization, and ReLU activation, followed by a basic residual block. We apply an attention mechanism at every spatial dimension to preserve both coarse and fine appearance attributes in the generated image. Mathematically, for the first decoder block at the lowest resolution, \(k=1\), \[I_{1}^{D}=D_{1}(I_{4}^{E}\ \odot\ \sigma(H_{4}^{E})) \tag{1}\] and for the subsequent decoder blocks at higher resolutions, \(k=\{2,3,4\}\), \[I_{k}^{D}=D_{k}(I_{k-1}^{D}\ \odot\ \sigma(H_{5-k}^{E})) \tag{2}\] where, \(I_{k}^{D}\) is the output from the \(k\)_-th_ decoder block, \(I_{k}^{E}\) and \(H_{k}^{E}\) are the outputs from the \(k\)_-th_ encoder blocks of image branch and pose branch respectively, \(\sigma\) denotes the _sigmoid_ activation function, and \(\odot\) denotes the Hadamard product. Finally, the resulting feature space goes through 4 consecutive basic residual blocks, followed by a convolution (\(1\times 1\) kernel, stride=1, padding=0, bias=0) and _tanh_ activation to project the feature maps into the final output image \(\hat{I}_{B}\) of size \(256\times 256\). The generator loss function \(\mathcal{L}_{\mathcal{G}}\) is a combination of three objectives. 
It includes a pixel-wise \(l_{1}\) loss \(\mathcal{L}_{1}^{\mathcal{G}}\), an adversarial discrimination loss \(\mathcal{L}_{GAN}^{\mathcal{G}}\) estimated using the discriminator \(\mathcal{D}\), and a perceptual loss \(\mathcal{L}_{VGG_{\mathcal{G}}}^{\mathcal{G}}\) estimated using a VGG-19 network pretrained on ImageNet. Mathematically, \[\mathcal{L}_{1}^{\mathcal{G}}=\left\|\hat{I}_{B}-I_{B}\right\|_{1} \tag{3}\] where \(\|.\|_{1}\) denotes the \(l_{1}\) norm or the mean absolute error. \[\mathcal{L}_{GAN}^{\mathcal{G}}=\mathcal{L}_{BCE}\left(\mathcal{D}(I_{A},\hat{ I}_{B}),1\right) \tag{4}\] where \(\mathcal{L}_{BCE}\) denotes the binary cross-entropy loss. \[\mathcal{L}_{VGG_{\rho}}^{\mathcal{G}}=\frac{1}{h_{\rho}w_{\rho}c_{\rho}}\sum_{i= 1}^{h_{\rho}}\sum_{j=1}^{w_{\rho}}\sum_{k=1}^{c_{\rho}}\left\|\phi_{\rho}(\hat{I }_{B})-\phi_{\rho}(I_{B})\right\|_{1} \tag{5}\] where \(\phi_{\rho}\) denotes the output of dimension \(c_{\rho}\times h_{\rho}\times w_{\rho}\) from the \(\rho\)_-th_ layer of the VGG-19 network pretrained on ImageNet. We incorporate two perceptual loss terms for \(\rho=4\) and \(\rho=9\) into the cumulative generator objective. Therefore, the final generator objective is given by \[\mathcal{L}_{\mathcal{G}}=\text{arg}\min_{G}\max_{D} \lambda_{1}\mathcal{L}_{1}^{\mathcal{G}}\ +\ \lambda_{2}\mathcal{L}_{GAN}^{\mathcal{G}}\] \[+\ \lambda_{3}\left(\mathcal{L}_{VGG_{4}}^{\mathcal{G}}\ +\ \mathcal{L}_{VGG_{9}}^{\mathcal{G}}\right) \tag{6}\] where \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\) are the tunable weights for the corresponding loss components. The discriminator \(\mathcal{D}\) is a generic PatchGAN [8] that operates on \(70\times 70\) receptive fields of the input. It takes the depth-wise concatenated image pairs, either \((I_{A},I_{B})\) or \((I_{A},\hat{I}_{B})\), as a real (1) or fake (0) image transition, respectively. The discriminator loss \(\mathcal{L}_{\mathcal{D}}\) has only a single component \(\mathcal{L}_{GAN}^{\mathcal{D}}\), calculated as the average BCE loss over real and fake transitions. Mathematically, \[\mathcal{L}_{GAN}^{\mathcal{D}}=\frac{1}{2}\left[\mathcal{L}_{BCE}(\mathcal{D }(I_{A},I_{B}),1)+\mathcal{L}_{BCE}(\mathcal{D}(I_{A},\hat{I}_{B}),0)\right] \tag{7}\] Therefore, the final discriminator objective is given by \[\mathcal{L}_{\mathcal{D}}=\text{arg}\min_{D}\max_{G}\ \ \mathcal{L}_{GAN}^{ \mathcal{D}} \tag{8}\] ## 4 Experimental Setup **Datasets:** We use the multi-human parsing dataset LV-MHP-v1 [11] to train the coarse generation network in stage 1. The dataset contains 4980 high-quality images, each having at least two persons (average is three), and the respective semantic annotations for every individual in the scene. The annotation includes 19 label groups - background (0), hat (1), hair (2), sunglasses (3), upper clothes (4), skirt (5), pants (6), dress (7), belt (8), left shoe (9), right shoe (10), face (11), left leg (12), right leg (13), left arm (14), right arm (15), bag (16), scarf (17), and torso skin (18). As discussed in Sec. 3.1, we reduce the original label groups to 8 by merging as - background + bag (0), hair (1), face (2), both arms + torso skin (3), hat + sunglasses + upper clothes + dress + scarf (4), skirt + pants + belt (5), both legs (6), both shoes (7). While training the coarse generation network, we select one random instance of a scene as the target person and the remaining instances as the input context. 
We prepare 14854 training pairs from 4945 images and 115 test pairs from the remaining 35 images. For data-driven refinement in stage 2 and rendering network in stage 3, we use the DeepFashion [16] dataset. The dataset contains high-quality single-person instances with wide pose and attire variations. A subset of the samples has color annotations for 16 semantic label groups. We reduce the number of label groups to 7 by merging multiple semantic regions as - background + bag (0), hair + headwear (1), face + eyeglass (2), neckwear + skin (3), top + dress + outer (4), skirt + belt + pants (5), leggings + footwear (6). We prepare 9866 images and corresponding semantic maps for creating our clustered database. We select 9278 image pairs for training and 786 image pairs for testing the rendering network. **Training details:** We train the coarse generation network with batch size 16 and VGG feature-matching loss disabled. All other training parameters are kept to defaults as specified by the authors of Pix2PixHD [20]. The clustering follows Lloyd's K-means algorithm with 8 clusters, a relative tolerance of \(1e^{-4}\), 1000 maximum iterations, and 10 random initializations for the centroids. For the rendering network, we set \(\lambda_{1}=5\), \(\lambda_{2}=1\), and \(\lambda_{3}=5\) in the generator objective. The parameters of both the generator and discriminator networks are initialized before optimization by sampling values from a normal distribution of mean = 0 and standard deviation = 0.02. We use the stochastic Adam optimizer [9] to update the parameters of both networks. We set the learning rate \(\eta=1e^{-3}\), \(\beta_{1}=0.5\), \(\beta_{2}=0.999\), \(\epsilon=1e^{-8}\), and weight decay = 0 for both optimizers. The network is trained with batch size 4. **Evaluation metrics:** Although quantifying visual quality is an open challenge in computer vision, researchers widely use a few quantifiable metrics to assess the perceptual quality of generated images. Following [4, 5, 17, 25, 28, 29, 34], we calculate Structural Similarity Index (SSIM) [30], Inception Score (IS) [24], Detection Score (DS) [15], PCKh [1], and Learned Perceptual Image Patch Similarity (LPIPS) [32] for quantitative benchmarks. SSIM considers image degradation as the perceived change in the structural information. IS estimates the KL divergence [10] between the label and marginal distributions for many images using the Inception network [27] as an image classifier. DS measures the visual quality as an object detector's target class recognition confidence. PCKh quantifies the shape consistency based on the fraction of correctly aligned keypoints. ## 5 Results We have performed an extensive range of experiments to explore and analyze the effectiveness of the proposed framework. In Fig. 4, we show a few qualitative results for person image insertion. The final modified scene containing a synthesized person is generated from the original scene and a given exemplar of the target person. It is important to note that no local attribute about the final rendered scene is provided to the generator. To analyze the overall generation quality of the rendering network, we perform a quantitative comparison against eight recently proposed person image generation algorithms [4, 5, 17, 25, 28, 29, 34]. As shown in Table 1, the proposed rendering method outperforms existing algorithms in most evaluation metrics. 
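To make the refinement step of Sec. 3.2 concrete, the sketch below outlines the feature encoding, clustering, and retrieval with the settings listed above. It is a simplified illustration rather than the exact implementation: `knowledge_base_maps`, `encode`, and `refine` are placeholder names, and the semantic maps are assumed to be rendered as 3-channel \(128\times 128\) tensors before encoding.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

# VGG-19 convolutional trunk + adaptive average pooling -> 512-d descriptor.
vgg = models.vgg19(weights="IMAGENET1K_V1").features.eval()
pool = nn.AdaptiveAvgPool2d(1)

@torch.no_grad()
def encode(maps):
    """maps: (B, 3, 128, 128) semantic maps rendered as 3-channel images."""
    feats = pool(vgg(maps))                 # (B, 512, 1, 1)
    return feats.flatten(1).cpu().numpy()   # (B, 512)

# Build the clustered knowledge base (hyperparameters from the training details).
kb_feats = encode(knowledge_base_maps)      # placeholder tensor of fine semantic maps
kmeans = KMeans(n_clusters=8, tol=1e-4, max_iter=1000, n_init=10).fit(kb_feats)

def refine(coarse_map, top_k=5):
    """Return indices and scores of the top-k knowledge-base maps for a coarse query."""
    q = encode(coarse_map.unsqueeze(0))                          # (1, 512)
    members = np.where(kmeans.labels_ == kmeans.predict(q)[0])[0]
    scores = cosine_similarity(q, kb_feats[members])[0]
    order = np.argsort(-scores)[:top_k]
    return members[order], scores[order]
```

Clustering first and then ranking by cosine similarity only within the assigned cluster keeps the nearest-neighbor search inexpensive even as the knowledge base grows.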
One of our method's main contributions is refining the initially estimated coarse semantic map to achieve highly detailed person image generation. As we perform a nearest-neighbor search in the semantic feature space of samples in pre-computed clusters, given a coarse semantic map, we can dynamically select a refined candidate for either _women_ or _men_ as per requirements. This step can be automated if the gender of the exemplar is either known or estimated using a trained classifier. In Fig. 5, we show top-5 matches for both _women_ and _men_ samples given a coarse semantic map as the query to the cluster. ## 6 Ablation Study We perform an extensive set of ablation experiments to optimize our generation pipeline. The ablation experiments and the observations are briefly discussed below. **Feature representation during clustering:** As mentioned \begin{table} \begin{tabular}{l|c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{SSIM \(\uparrow\)} & IS \(\uparrow\) & DS \(\uparrow\) & PCKh \(\uparrow\) & LFIPS \(\downarrow\) & LFIPS \(\downarrow\) \\ & & & & & (VGO) & (SparNe) \\ \hline PG\({}^{2}\)[17] & 0.773 & 3.163 & 0.951 & 0.89 & 0.523 & 0.416 \\ Deform [25] & 0.760 & 3.362 & 0.967 & 0.94 & - & - \\ VUNet [4] & 0.763 & **3.440** & 0.972 & 0.93 & - & - \\ PATN [34] & 0.773 & 3.209 & **0.976** & 0.96 & 0.299 & 0.170 \\ XingGAN [29] & 0.762 & 3.060 & 0.917 & 0.95 & 0.224 & 0.144 \\ BiGraphGAN [28] & 0.779 & 3.012 & 0.954 & 0.97 & 0.187 & 0.114 \\ WYW (KP) [5] & 0.788 & 3.189 & - & - & 0.271 & 0.156 \\ YWTH (DP) [5] & 0.793 & 3.346 & - & - & 0.264 & 0.149 \\ Ours & **0.845** & 3.351 & 0.968 & **0.97** & **0.124** & **0.064** \\ \hline Real Data & 1.000 & 3.687 & 0.970 & 1.00 & 0.000 & 0.000 \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative comparison of the rendering network with existing methods. \begin{table} \begin{tabular}{l|c c c|c c c} \hline \hline \multicolumn{1}{c}{Number of classes} & \multicolumn{3}{c|}{Average counter numbers of top-match} & \multicolumn{3}{c}{Average counter numbers of top-5 matches} \\ \hline & **Mex** & **Women** & **Overall** & **Mex** & **Women** & **Overall** \\ \hline K = 8 & **40.812** & **0.8319** & **0.8309** & **0.9731** & **0.8471** & **0.8325** \\ K = 16 & 0.8184 & 0.8307 & **0.8371** & **0.7941** & 0.8146 & 0.8272 \\ K = 22 & 0.8073 & 0.8313 & 0.8379 & 0.7264 & 0.8140 & 0.8325 \\ K = 64 & 0.7965 & 0.8260 & 0.8368 & 0.7715 & 0.8109 & 0.8328 \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study of clustering with VGG-encoded features. Figure 4: Qualitative results generated by the proposed method. Each set of examples shows – the original scene (**left**), an exemplar of the target person (**middle**), and the final generated scene (**right**). Figure 5: Qualitative results of refinement in stage 2. The first column shows a coarse semantic map as the query, and the following columns show the top-5 refined semantic maps retrieved for both genders. The cosine similarity score for each retrieval is shown below the respective sample. (Best viewed with 400% zoom) in Sec. 3.2, we use 512-dimensional VGG-encoded features to guide the refinement process. To evaluate the effectiveness of VGG features in the proposed refinement strategy, we consider the raw pixel features in the ablation study by converting the input image into a feature vector. 
The conversion process downscales (nearest-neighbor interpolation) the original \(176\times 256\) images to \(22\times 32\), keeping the aspect ratio intact, followed by flattening to a 704-dimensional feature vector. We evaluate both feature representation techniques for different numbers of clusters \((K=8,~{}16,~{}32,~{}64)\). As shown in Tables 2 & 3, for a particular number of clusters \(K\), the VGG-encoded feature representation outperforms the raw pixel-based representation in the average similarity score of top retrievals. As shown in Fig. 7, our strategy uses similarity score-based ranking for both genders. The VGG feature-based clustering provides a better resemblance between the query and retrieved semantic maps. From our ablation study, we find \(K=8\) works best for our data. **Attention mechanism:** We compare four variants of the attention design in the rendering network. We consider only one attention pathway in the second and third ablation settings. In the second variant (**HR only**), the attention operation is performed at the highest feature resolution only (just before the decoder block \(D_{4}\)). Similarly, in the third variant (**LR only**), the attention operation is performed at the lowest feature resolution only (just before the decoder block \(D_{1}\)). In the final setting (**Full**), we use the proposed attention mechanism as shown in Fig. 2 and described in Sec. 3.3. We train and evaluate all four variants on the same dataset splits while keeping all experimental conditions the same, as noted in Sec. 4. We show the evaluated metrics in Table 4 along with qualitative results in Fig. 8. We conclude from the analytical and visual results that the proposed attention mechanism provides the best generation performance. **Refinement:** We show the efficacy of the data-driven refinement on the final generation in Fig. 6 by comparing the rendered scene with and without applying the refinement strategy. ## 7 Limitations Although the proposed method can produce high-quality, visually appealing results for a wide range of complex natural scenes, there are a few occasions when the technique fails to generate a realistic outcome. Due to the disentangled multi-stage approach, these limiting cases may arise from different pipeline components. In our method, coarse generation in stage 1 provides the spatial location and scale of the target person. Therefore, an incorrect inference in this step leads to a misinterpretation of the position and scale in the final target. The refined semantic target map is retrieved from the pre-partitioned clusters based on encoded features of the coarse semantic map in stage 2. Consequently, an extremely rough generation in stage 1 or a misclassified outlier during clustering in stage 2 can lead to a generated person that does not blend well with the existing persons in the scene. Finally, due to the supervised approach used to train the renderer in stage 3, the appearance attribute transfer often struggles to generate high-quality outputs for both imbalanced and unconventional target poses. We show some of these limiting cases in Fig. 9. ## 8 Conclusions In this work, we propose a novel technique for scene-aware person image synthesis by conditioning the generative process on the global context. The method is divided into three independent stages for a concise focus on individual subtasks. First, we use a coarse generation network based on the existing Pix2PixHD architecture to estimate the target person's spatial and pose attributes.
While the spatial characteristics in the initial semantic map provide sufficient geometric information for the target, the semantic map itself does not preserve enough label group correctness, leading to improper attribute transfer in the rendering stage. We mitigate this issue through a data-driven distillation of the coarse semantic map by selecting candidate maps from a clustered knowledge base using a similarity score-based ranking. Finally, the appearance attributes from the exemplar are transferred to the selected candidate map using a generative renderer. The rendered instance is then injected into the original scene using the geometric information obtained during coarse generation. In our experiments, we achieve highly detailed realistic visual outcomes, which are further supported by relevant analytical evaluations. We also discuss an extensive ablation study and the limitations of our approach. We believe investigating a better way to model the global scene context and a robust end-to-end approach to the problem will benefit the potential future applications of the proposed method.
2309.11522
Modeling Current and Future High-Cadence Surveys of Repeating FRB Populations
In recent years, the CHIME (Canadian Hydrogen Intensity Mapping Experiment) interferometer has revealed a large number of Fast Radio Bursts (FRBs), including a sizable population that demonstrates repeating behavior. This transit facility, employing a real-time FRB search pipeline, continually scans the sky with declinations between $-10^{\circ}$ and $90^{\circ}$ for events with fluences $\gtrapprox 0.4$ Jy ms. We simulate a population of repeating FRBs by performing Monte Carlo simulations of underlying source populations processed through a mock CHIME/FRB observing pipeline. Assuming intrinsic repeater rates follow a Poisson distribution, we test assumptions about the burst populations of the repeater sample, and construct models of the FRB sample assuming various cosmological distributions. We infer the completeness of CHIME/FRB observations as a function of observing cadence and redshifts out to 0.5. We find that, if all simulated bursts have a fixed Poisson probability of repetition over their integrated time of observation, repeating burst detections across comoving volume should continue to grow near linearly on the order of decades. We predict that around 170 of the current CHIME/FRB one-off sources will ultimately repeat. We also make projections for FRB repeaters by future facilities and demonstrate that the number of repeaters they find could saturate on a $\sim$3 yr timescale.
Kyle McGregor, Duncan R. Lorimer
2023-09-19T21:49:54Z
http://arxiv.org/abs/2309.11522v2
# Modeling Current and Future High-Cadence Surveys of Repeating FRB Populations ###### Abstract In recent years, the CHIME (Canadian Hydrogen Intensity Mapping Experiment) interferometer has revealed a large number of Fast Radio Bursts (FRBs), including a sizable population that demonstrates repeating behavior. This transit facility, employing a real-time FRB search pipeline, continually scans the sky with declinations between \(-10^{\circ}\) and \(90^{\circ}\) for events with fluences \(\gtrapprox 0.4\) Jy ms. We simulate a population of repeating FRBs by performing Monte Carlo simulations of underlying source populations processed through a mock CHIME/FRB observing pipeline. Assuming intrinsic repeater rates follow a Poisson distribution, we test assumptions about the burst populations of the repeater sample, and construct models of the FRB sample assuming various cosmological distributions. We infer the completeness of CHIME/FRB observations as a function of observing cadence and redshifts out to 0.5. We find that, if all simulated bursts have a fixed Poisson probability of repetition over their integrated time of observation, repeating burst detections across comoving volume should continue to grow near linearly on the order of decades. We predict that around 170 of the current CHIME/FRB one-off sources will ultimately repeat. We also make projections for FRB repeaters by future facilities and demonstrate that the number of repeaters they find could saturate on a \(\sim 3\) yr timescale. Unified Astronomy Thesaurus concepts: Radio transient sources (2008) ## 1 Introduction Fast Radio Bursts (FRBs) are among the most enigmatic extragalactic sources of radiation. First discovered in 2007 using archival data from the Parkes (Murriyang) telescope (Lorimer et al., 2007) these dispersed highly-energetic millisecond-scale bursts of radio emission have been shown to be consistent with a cosmological origin (Thornton et al., 2013). While no conclusive progenitor mechanism for FRB emission has been widely accepted, modeling shows that directional magnetar flares have energy budgets similar to observed burst energies (see, e.g., Popov et al., 2018). With 672 one-off bursts currently known1(Xu et al., 2023), a recent data release from the Canadian Hydrogen Intensity Mapping Experiment (CHIME) collaboration has more than doubled the number of known repeating FRBs, currently standing at 63 (CHIME/FRB Collaboration et al., 2023). Searches led by the CHIME/FRB collaboration using this facility are carrying out a census of FRBs. As a transit instrument that each day observes the 200 square degrees flanking the north-south meridian of the sky at declinations between -10 and 90 degrees, CHIME repeatedly scans the sky and is therefore well suited to probe the repeating FRB population. Footnote 1: For a list, see [https://blinkverse.alkaidos.cn](https://blinkverse.alkaidos.cn). With a small known sample size compared to the inferred daily incidence of bursts from across the universe (expected to be on the order of thousands per day; Thornton et al., 2013), a Monte Carlo simulation is a powerful tool to infer properties of the underlying FRB population(s). It is also important to determine the limits of our facilities and find how many bursts can be found in a given cadence, as well as account for survey selection effects that may influence completeness. 
We present a model for an observed repeater population constructed from first principles, which can provide a prediction for future detection rates by CHIME/FRB. From this, we can determine a model census of bursts whose properties can be compared to the known sample revealed in (CHIME/FRB Collaboration et al., 2023). Inspired by the latest release of the repeater sample (CHIME/FRB Collaboration et al., 2023), we apply a Monte Carlo approach in this paper. In Section 2 we discuss how we build a model source population, in Section 3 we discuss our procedure for generating a mock CHIME/FRB observing campaign on this population. We compare these modeled populations to the observed population in Section 4, and discuss the relevance to previous and future work in Section 5 and present our conclusions in Section 6. ## 2 Model Source Properties In constructing a synthetic population of repeaters, we must first consider properties intrinsic to each source. These parameters include right ascension and declination on the sky (\(\alpha\), \(\delta\)), redshift (\(z\)), mean luminosity (\(L_{0}\)), intrinsic pulse width (\(w_{\rm int}\)), and host-galaxy dispersion measure (\(\rm DM_{Host}\)). As FRBs are a cosmological population, we assume a uniform distribution of locations on the celestial sphere. We draw from a uniform random deviate between 0-24 hr in right ascension. In declination space, however, following Chawla et al. (2022), we draw uniformly across \(-10^{\circ}<\delta<90^{\circ}\). The cosine dependence in solid angle for an isotropic population cancels the inverse cosine dependence in exposure. While not representing a uniform distribution on the celestial sphere, this does give a distribution proportional to the product of the declination-dependent sky coverage and exposure for CHIME/FRB which more accurately represents the underlying position distribution of detected sources (for details, see Chawla et al., 2022). It is currently unclear what the underlying redshift distribution of FRBs is in general, and there is debate as to the distribution probed by CHIME/FRB (see, e.g., Zhang and Zhang, 2022; Shin et al., 2023). In addition, the DM distributions for CHIME/FRB repeaters appears to be significantly different to those of the one-off bursts (CHIME/FRB Collaboration et al., 2023), which leads to further uncertainty in their underlying redshift distributions. For our models of repeating FRBs, we explore two scenarios for the redshift distributions: a population following a constant density in comoving volume (\(V_{c}\)) and a population following the cosmic star formation rate (SFR; Madau and Dickinson, 2014). For the former case, the probability density in redshift \[P_{\rm comoving}(z)\propto\left(\frac{1}{1+z}\right)\frac{dV_{c}}{d\Omega dz}, \tag{1}\] where \(\Omega\) is the solid angle and \(\frac{dV_{c}}{d\Omega dz}\) is the differential comoving volume element. For the latter case, \[P_{\rm SFR}(z)\propto\left(\frac{(1+z)^{2.7}}{1+\left[\frac{(1+z)}{2.9} \right]^{5.6}}\right)\left(\frac{1}{1+z}\right)\,\frac{dV_{c}}{d\Omega dz}. \tag{2}\] In both cases, as we discuss later, we restrict our maximum redshifts probed by the simulations to be \(z_{\rm max}=0.5\). More distant sources are not required to account for the currently observed sample of repeaters. For the mean luminosity of each burst, \(L_{0}\), we draw from a Schechter function employing a fit from Shin et al. (2023). We impose bounds between \(10^{39}\) and \(10^{45}\) erg s\({}^{-1}\). 
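To make these draws concrete, the two redshift densities of Eqs. (1) and (2) can be tabulated on a grid and sampled by inverting the cumulative distribution. The following minimal sketch assumes astropy's Planck18 cosmology and a grid out to \(z_{\rm max}=0.5\); the function names are illustrative rather than those of our actual pipeline.

```python
import numpy as np
from astropy.cosmology import Planck18 as cosmo

Z_MAX = 0.5  # maximum redshift probed by the simulations

def redshift_density(z, track_sfr=False):
    """Un-normalized P(z): constant comoving density (Eq. 1) or SFR-tracing (Eq. 2)."""
    dvc = cosmo.differential_comoving_volume(z).value  # dV_c / (dOmega dz) in Mpc^3 / sr
    p = dvc / (1.0 + z)
    if track_sfr:
        p *= (1.0 + z) ** 2.7 / (1.0 + ((1.0 + z) / 2.9) ** 5.6)
    return p

def draw_redshifts(n, track_sfr=False, rng=np.random.default_rng()):
    """Draw n redshifts by inverse-CDF sampling of the tabulated density."""
    z_grid = np.linspace(1e-4, Z_MAX, 2000)
    pdf = redshift_density(z_grid, track_sfr)
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    return np.interp(rng.uniform(size=n), cdf, z_grid)

# e.g., 1e5 mock repeaters following the star-formation-rate model:
z_sample = draw_redshifts(100_000, track_sfr=True)
```

The same inverse-CDF machinery can be reused for the mean-luminosity draw, with the bounds between \(10^{39}\) and \(10^{45}\) erg s\({}^{-1}\) quoted above applied to the tabulated Schechter density.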
For a 1 ms pulse, this is consistent with the inferred luminosity limits of observed bursts, consistent with the burst energy limits used by Chawla (2022). The Schechter luminosity distribution is commonly used to describe the energetics of extragalactic sources and takes the form \[P_{L_{0}}(L_{0})dL_{0}=\left(\frac{L_{0}}{L_{*}}\right)^{\alpha+1}\exp{\left( \frac{L_{0}}{L_{*}}\right)}d(\log{L_{0}}), \tag{3}\] where the characteristic luminosity \(L^{*}=2\times 10^{44}\) erg s\({}^{-1}\) and power law slope \(\alpha=-1.3\). Given that the current CHIME/FRB repeaters are known to vary in intensity (CHIME/FRB Collaboration et al., 2023), we dither the daily burst luminosities using a scheme described in Section 3. In simulating intrinsic pulse widths, we draw from a lognormal distribution as given in Luo et al. (2020). The dispersion measure (DM) of a burst is the integral of electron number density over the line of sight to the burst, which will include contributions from each source of propagation en route to Earth and thus serves as an observable proxy for distance. For our simulations, following previous authors (e.g., Luo et al., 2018), we computed the observed DM, \[\rm DM_{obs}=DM_{MW}(\textit{l},\textit{b})+DM_{IGM}(\textit{z})+\frac{DM_{host }}{1+z}, \tag{4}\] with the Galactic component, \(\rm DM_{MW}\), from the YMW16 model of Milky Way electron density (Yao et al., 2017), the intergalactic medium (IGM) component, \(\rm DM_{IGM}\), from the Macquart (\(z\)-DM) relation (James et al., 2022), and the repeater host galaxy contribution, \(\rm DM_{host}\), from a lognormal random deviate parametrized by Mo et al. (2023). We draw a catalog of \(10^{5}\) mock repeaters using Monte Carlo methods in scipy. This gives us a base of source repeater progenitors over which we can iterate to simulate an observation campaign with CHIME, the procedure for which is described in the next section. ## 3 Model Chime Observing Campaign While CHIME is a complex instrument, responsive but highly sensitive to on-site conditions that are prohibitive to model efficiently (see, e.g., Andersen et al., 2023), we can infer the observability of a given event by imposing a fluence threshold of 0.4 Jy ms. This lower limit is consistent across the CHIME Catalog 1 data (CHIME/FRB Collaboration et al., 2021) and the new repeater data (CHIME/FRB Collaboration et al., 2023), and used in similar population syntheses (Chawla, 2022). Our mock observation pipeline first iterates over each of the \(10^{5}\) simulated repeater and determines the daily burst incidence using a Poisson random deviate. Due to the shape of CHIME's primary beam, higher declinations will have a greater exposure over CHIME's operations, resulting in a more complete sample at high declinations. This declination dependence has been modeled in similar studies (see James, 2023). We choose to weigh the daily Poisson burst incidence by this exposure in our forward modeling, such that higher declinations are more likely to sound off on a given day to correct for the declination bias. The mean of the probability mass function for the number of observed bursts each day over the four CHIME cylinders, \[\lambda=\frac{4\times\mathrm{FWHM}_{E-W}}{360^{\circ}\cos(\delta)}\ \Gamma(R_{\mathrm{min}},R_{\mathrm{max}},\gamma), \tag{5}\] with \(\mathrm{FWHM}_{\mathrm{E-W}}=0.32^{\circ}\) the full width at half maximum of each of synthesized beam (Chawla, 2022). 
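A minimal sketch of how this exposure-weighted daily rate can be drawn is given below; the per-source burst-rate deviate \(\Gamma\) and its parameters are described in the text that follows, and the function names here are illustrative rather than those of our pipeline.

```python
import numpy as np

FWHM_EW_DEG = 0.32  # E-W full width at half maximum of a synthesized beam (deg)

def draw_intrinsic_rate(r_min, r_max, gamma, rng):
    """Power-law deviate Gamma(R_min, R_max, gamma) for a source's burst rate (bursts/day)."""
    u = rng.uniform()
    a = gamma + 1.0
    return (r_min ** a + u * (r_max ** a - r_min ** a)) ** (1.0 / a)

def daily_burst_count(rate, dec_deg, rng):
    """Poisson draw of bursts observable on one day from a source at declination dec_deg (Eq. 5)."""
    time_in_beam = 4.0 * FWHM_EW_DEG / (360.0 * np.cos(np.radians(dec_deg)))  # fraction of a day
    return rng.poisson(time_in_beam * rate)

# e.g., one simulated day for a single source (rate parameters as given in the text below):
rng = np.random.default_rng(0)
rate = draw_intrinsic_rate(1e-3, 1e-1, -2.2, rng)
n_bursts = daily_burst_count(rate, dec_deg=45.0, rng=rng)
```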
This corresponds to the product \(\lambda\ =\ (\mathrm{time\ in\ beam})\ \times\ (\mathrm{bursts\ day}^{-1})\) for each day of the simulation. Following James (2023), \(\Gamma(R_{\mathrm{min}},R_{\mathrm{max}},\gamma)\) is a power-law random deviate ranging between \(R_{\mathrm{min}}=10^{-3}\) and \(R_{\mathrm{max}}=10^{-1}\) bursts day\({}^{-1}\) with a power-law index \(\gamma=-2.2\). We choose \(R_{\mathrm{min}}\) to be approximately the reciprocal of the current survey length (3.3 yr) and models with \(R_{\mathrm{max}}>0.1\) day\({}^{-1}\) produce too many observable events. These are defined in terms of the total survey cadence, rather than the time in beam. In addition, we do not assume any correlation between burst rate and intrinsic luminosity, \(L_{0}\), over the range of \(L_{0}\) we model. The observed pulse width, \(w_{\mathrm{obs}}\), includes contributions from sources of propagation and instrumental parameters. It is given by the quadrature addition \[w_{\mathrm{obs}}=\sqrt{[(1+z)w_{\mathrm{int}}]^{2}+w_{\mathrm{scat}}^{2}+w_{ \mathrm{samp}}^{2}+w_{\mathrm{DM}}^{2}}\,, \tag{6}\] where \(w_{\mathrm{int}}\) is the intrinsic pulse width and the \((1+z)\) factor accounts for time dilation, \(w_{\mathrm{scat}}\) is the scatter broadening, \(w_{\mathrm{samp}}\) is the data sampling interval and \(w_{\mathrm{DM}}\) is the dispersion broadening across a finite frequency channel. To model the intrinsic width, following Luo et al. (2020), we draw from a log normal distribution so that \(\log_{10}\left(w_{\mathrm{int}}/\mathrm{ms}\right)=\mathrm{normal}(\mu_{W}, \sigma_{W})\). Here the normal distribution has a mean \(\mu_{W}=0.2\) and standard deviation \(\sigma_{W}=0.33\). The CHIME/FRB sampling time \(w_{\mathrm{samp}}=0.98\) ms (CHIME Collaboration et al., 2022). The scattering timescale is calculated alongside the DM\({}_{\mathrm{MW}}\) and DM\({}_{\mathrm{IGM}}\) contributions with the pygedm package (Price et al., 2021) using the electron density model developed by Yao et al. (2017). As mentioned above, some variation in the flux density of repeaters across observations is noted in the CHIME sample, which implies a mechanism for differing luminosity over consecutive bursts from the same repeater. While the sample size for most observed bursts is much too small to easily parametrize, we assume a normal distribution with a standard deviation \(\sigma=0.1L_{0}\). Using Eq. 8 of Macquart and Ekers (2018), we find that the observed fluence \[F_{\nu}=\frac{(1+z)\,w_{\mathrm{int}}}{4\pi D_{L}^{2}\Delta\nu}\ \mathrm{normal}(L_{0},0.1L_{0}), \tag{7}\] where \(D_{L}\) is the luminosity distance and the bandwidth for CHIME/FRB, \(\Delta\nu=400\) MHz. Notably here, for simplicity, we are being agnostic about any dependence of the fluence with frequency and implicitly assume that the FRBs are flat spectrum sources over the frequency ranges explored here (i.e., 400 MHz out to \((1+z_{\mathrm{max}})\,800\) MHz = 1200 MHz). As noted by Cordes and McLaughlin (2003), the signal-to-noise threshold for FRB detection can be computed through the radiometer noise considerations. Originally, we followed this approach and implemented a discretization of CHIME/FRB's beam to infer the observability of a given FRB using the radiometer equation. This method is unphysical in that it assumes a fiducial nature Figure 1: Fluence (top) and flux (bottom) distributions for the CHIME/FRB repeater sample (green histograms) showing the drop in the number of bursts around the threshold. 
Dashed and solid lines show polynomial fits to these tails in log and linear spaces, respectively. Normalization of these fits on the right to a probability scale allows us to implement these sensitivity roll offs in our simulated FRBs (see text). to CHIME's system-equivalent flux density, which in reality is calibrated in real-time as part of CHIME/FRB's bonsai FRB search algorithm. Following the detailed sensitivity determinations by Chawla (2022), however, after iterating over each day for each repeater in the population, we accept a burst as observable if its fluence is above the 0.4 Jy ms threshold. This gives us a population theoretically observable by CHIME, though selection corrected for various incompleteness factors compared to the CHIME survey populations. To accurately model CHIME's observing campaign, we must reintroduce these incompleteness factors which become important close to the detection threshold. It is known that CHIME is observationally biased against low-fluence events, which can only be detected at the highest gain regions of each synthesized beam, whereas higher fluence events can be readily detected across the primary beam and sidelobes (Lin et al., 2023). We account for this bias by weighting the probability of observation for each low-fluence and low-flux event with an independent Bernoulli trial, which will return a "success" or "failure" based on the exhausted probability of detection at each fluence and flux in the over-represented regime. We employ a polynomial fit for the drop-off following the empirical cutoff curve in the observed fluence/flux distributions between the 0.4 Jy ms lower fluence limit and the 5 Jy ms 95% completeness threshold identified in CHIME/FRB Collaboration et al. (2021). This is done during the filtering step after the mock observed catalog is collected in order to aid computational efficiency. The fourth-degree polynomial fits to the left edges of the fluence and flux distributions are shown in Fig. 1. We report the normalized fits as \(p(F_{\nu})=-0.002(F_{\nu}-5)^{4}+1\) for fluence and \(p(S_{\nu})=-240.8S_{\nu}^{4}+174.4S_{\nu}^{3}-30.85S_{\nu}^{2}+2.46S_{\nu}-0.045\) for flux density where the units of fluence and flux density at Jy ms and Jy, respectively. In Fig. 2 we show the application of this approach to the constant star formation model. The better agreement between the "Observability" and "Completeness" thresholds is clearly evident. ## 4 Results We ran the simulations above for a total of 7300 days (20 years), extracting the sample at 1095 days (three years) to match the timescale of the CHIME sample (CHIME/FRB Collaboration et al., 2023). Fig. 3 show cumulative distributions for the observed and model samples for the constant star formation and constant volume redshift distributions, respectively. For each model, we show the fluence, pulse width, DM, and declination distributions. While we do not take the approach of fitting models to the data in this paper, it is useful to quote quantitative metrics of comparison between the model and observed samples. Table 1 shows the results of the two-sided Kolmogorov-Smirnov (KS) tests (Kolmogorov, 1933; Smirnov, 1948) we ran on these distributions. Figure 2: Observed CHIME/FRB fluence distributions compared to those from our model population. The simulated distribution (orange) shows the constant start formation model with the simple 0.4 Jy ms fluence threshold applied. 
The selection-corrected distribution (blue) shows just those FRBs which survived the Bernoulli trials in flux and fluence (see text). ## 5 Discussion The goal of this work is to constrain some of the properties of the FRB repeater population as seen through regular observations with the CHIME/FRB survey described by CHIME/FRB Collaboration et al. (2023). In doing this, we have tried to strike a balance between a simple analytic model and a very detailed simulation which would also include Markov Chain Monte Carlo techniques for parameter estimation. In many respects, we have taken an approach similar to earlier studies of the pulsar population (see, e.g., Faucher-Giguere and Kaspi, 2006) where plausible models are developed that can mimic the main results. In the sections below, we discuss the main findings from this approach. ### Which model is better? As can be seen in Fig. 3, and summarized quantitatively in Table 1, the agreement between our model and observed FRB populations is generally good and we briefly comment on each of the distributions considered. While the model with the redshift distribution tracing the cosmic star formation matches the DM distribution slightly better than the constant volume model, the KS tests indicate that the two models are statistically very similar in their match to the observed CHIME/FRB repeater DM distribution. This is also reflected in the other distributions and KS test scores and is generally in lines with our expectations, since the redshift distributions we considered have a similar behaviour in the range \(0<z<0.5\). For the fluence distribution, we note that our model fluences are systematically higher than those observed by CHIME/FRB. We did not seek to optimize this further, given that the fluences in the CHIME/FRB catalog are lower limits due to uncertainties in the true source position within the CHIME beam. In the remain \begin{table} \begin{tabular}{l l r r r r} \hline \hline \multicolumn{1}{c}{ Model} & \multicolumn{1}{c}{\(F_{\nu}\)} & \multicolumn{1}{c}{\(w_{obs}\)} & \multicolumn{1}{c}{DM} & \multicolumn{1}{c}{Decl.} \\ \hline Eq. 1 & KS statistic & 0.383 & 0.189 & 0.112 & 0.154 \\ & \(p\)-value & \(2\times 10^{-8}\) & 0.030 & 0.398 & 0.112 \\ \hline Eq. 2 & KS statistic & 0.406 & 0.185 & 0.104 & 0.146 \\ & \(p\)-value & \(3\times 10^{-9}\) & 0.035 & 0.565 & 0.147 \\ \hline \end{tabular} \end{table} Table 1: KS-test results for the two redshift models considered in this study. For each model, we provide the KS statistic and associated \(p\)-value for fluence, observed pulse width, DM and declination, respectively. Figure 3: Cumulative distributions for the redshift distribution tracing the constant volume redshift distribution (Eq. 1; upper row) and the cosmic star formation rate (Eq. 2; lower row). Green lines show the observed distributions from CHIME/FRBs. The blue lines are our simulated observable populations. der of the discussion, we use the cosmic star formation redshift distribution (Eq. 2) as our reference model. ### Repeater detections versus time We have focused our attention on the cumulative distributions in Fig. 3, but note that our simulations naturally predict the detections as a function of time. As an example, to compare with Fig. 3 of CHIME/FRB Collaboration et al. (2023), in Fig. 4 we show a set of randomly selected repeaters from our reference model as a function of declination versus time. While we see a qualitative similarity to Fig. 3 of CHIME/FRB Collaboration et al. 
(2023), we note that (as anticipated from Fig. 3), our modeling approach results in an overabundance of sources at higher declinations than actually observed. As seen by CHIME/FRB Collaboration et al. (2023), it is notable that the higher declination sources benefit from a longer exposure time due to the unique optics of CHIME resulting in more bursts observed. Further modeling, including different burst rate distributions would be a logical extension of this work. One aspect we do consider in this paper, however, are some straightforward predictions for the number of repeaters we expect with future CHIME/FRB observations. The power-law burst rate distribution we have considered from James (2023) produces a good match for the data through CHIME/FRB Collaboration et al. (2023), which to date has reported a near-linear pattern of detection incidence. After scaling the number of sources to match the number of total number of cumulative detections by CHIME/FRB to date, our model predicts this almost linear trend will continue for the next decade, in agreement with James (2023). Any drop in this overall detection rate within the next few years would clearly challenge this model. ### One-off FRBs in the observed population An interesting question that our modeling approach can address is what the number of apparent one-off sources in the current CHIME sample are actually repeaters. Since our modeling is exclusively assuming a repeating population, we automatically keep track of any simulated FRBs for which only one pulse is recorded over any given timescale. For the constant star formation case, we find that, over a three-year period, for every repeater observed, there are 2.8 apparently one-off sources. In other words, for the 60 repeaters currently observed by CHIME/FRB Collaboration et al. (2023), our model predicts that around 170 of the currently one-off sources in the CHIME/FRB catalog will repeat. Based on the projection shown in Fig. 5, we anticipate that these sources are very likely to repeat within 5 yr. ### Predictions for other instruments Our simulations can also be used to provide interesting predictions for hypothetical FRB search campaigns with two forthcoming next-generation radio observatories, the Deep Synoptic Array (DSA-2000) and the Canadian Hydrogen Observatory and Radio-transient Detector (CHORD). These facilities, which are planned for construction and first light within the decade, are designed to have greater sensitivity than CHIME. DSA-2000 is projected to have a field of view FoV = 10.6 deg\({}^{2}\), with a fluence detection threshold for a 1 ms burst of 0.03 Jy ms and a bandwidth of 850 MHz in the 1-2 GHz band (Hallinan et al., 2019). CHORD is slated to be the direct successor to CHIME, with FoV = 130 deg\({}^{2}\), a fluence detection threshold for a 1 ms burst of 0.1 Jy ms, and a bandwidth of 1200 MHz in the 0.3-1.4 GHz band (Vanderlinde et al., 2019). Using these specifications, to illustrate the impacts of these sensitivity improvements, we proceed with two Figure 4: Declination versus time for 163 selected repeaters from the simulated population. Cyan represents bursts with \(F_{\nu}<4\) Jy ms, magenta \(4\;\mathrm{Jy\;ms}<F_{\nu}<40\) Jy ms, and yellow \(F_{\nu}>40\) Jy ms. cases acting as upper and lower limits for the daily time in beam (exposure) for each source for each instrument, respectively. 
As an upper limit, we consider a sample of sources constantly observed over the entire simulated campaign, corresponding to a circumpolar or never-set population always within the array's field of view, giving a daily exposure of 1 day. For a lower limit, we assume a population of same sources at the celestial equator, such that their daily exposure is \(\sqrt{\mathrm{FoV}}/360^{\circ}\) day. To enforce the completeness dropoff approaching the fluence limit, we scale the fit for \(p(F_{\nu})\) assuming an order of magnitude drop-off between the lower fluence limit and completeness limit (\(F_{\mathrm{lower}}\sim 10F_{\mathrm{complete}}\)) for each instrument. This is likely to be a conservative estimate as these facilities will be more sensitive to bursts within \(z=0.5\) than CHIME. We show the range between these limiting cases for exposure in Fig. 5.4. While we do not have a baseline for the size of population observed at higher sensitivities, as the scaling of the population discussed in section 5.2 Figure 5: Cumulative repeater incidence for CHIME/FRB along with our model projection for the next two decades. We scale the modeled detection incidence to the first 3.3 years of CHIME/FRB results. Inset: zoomed-in view of the current sample (CHIME/FRB Collaboration et al., 2023) which shows an approximately linear trend. Figure 6: Cumulative detection predictions for DSA2000 and CHORD compared to CHIME/FRB prediction from Fig. 5. For display purposes, the model projections have been smoothed by fitting their cumulative incidences to gamma distributions. The upper and lower limits are chosen to reflect two extremes of observational strategies for observing a specific field of view. is unique to a population observed with CHIME's sensitivity and unique declination-dependence not shared by these next-generation facilities, we can still make predictions for the detection rates by these observatories relative to the simulated population size. Fig. 5.4 shows the cumulative repeater detections as a fraction of the total population. Unlike the case for CHIME/FRB, the more sensitive instruments appear to show a saturation in the number of repeating FRBs observed starting at around 1000 days. While the details of the putative surveys with DSA-2000 and CHORD are overly simplistic, and are restricted to the repeating FRB population with \(z<0.5\), the results shown here highlight the potential for these and other future facilities with similar sensitivity to probe this population. In the meantime, we note that our predictions for CHIME are testable in the coming years and that CHIME's repeater sample is already a rich resource for studying the FRB population. ## 6 Conclusions We have described a simple simulation of the repeating FRB population that is based on the sample reported recently by CHIME/FRB Collaboration et al. (2023). We find that this sample of repeaters can be well described by simple model in which the sources are restricted to redshifts \(z<0.5\). Within this range, we cannot distinguish between a redshift distribution for the progenitors of FRB repeaters that follows the cosmic star formation, or is constant in comoving volume. As future CHIME/FRB observations are collected, as pointed out by James (2023), we anticipate future studies being more sensitive to the population of FRBs with \(z>0.5\). 
In spite of the daily cadence of the CHIME/FRB observations, our models predict that the number of observed repeaters will not saturate significantly in the coming years and that the sample will grow in size approximately at the current rate of 20 new repeaters per year. A caveat to this prediction is its dependence on our assumptions. Future data releases from CHIME/FRB will allow us to constrain the model parameters/assumptions from this study. Among the future discoveries our models also predict in the next 5 years are around 170 sources that are currently in the CHIME/FRB sample as "one-off" FRBs. Our simulation approach has tried to strike a balance between simplicity and rigor. We have attempted to incorporate the most important aspects of the population and detection process into our work, but have not tried to fine tune or do parameter estimation of the model parameters. Further modeling of the sample which explores the luminosity function, burst rate distribution and sky exposure are certainly warranted but beyond the scope of the current work. In particular, we have not accounted for the spectral behavior of the model FRBs, and the possibility of non-Poissonian burst distributions. Studies of this nature would be extremely valuable to better understand the growing population of repeating FRBs. This research was carried out during a 10-week Research Experience for Undergraduates (REU) program at West Virginia University. We gratefully acknowledge the National Science Foundation's support of the REU under award number 1950617, as well as the NASA West Virginia Space Grant Consortium for their support. We thank Emmanuel Fonseca, Clancy James, Marcus Merryfield and Vicky Kaspi for useful discussions. astropy (Astropy Collaboration et al., 2013), scipy(Virtanen et al., 2020), pygedm(Price et al., 2021)
2309.07340
Informative path planning for scalar dynamic reconstruction using coregionalized Gaussian processes and a spatiotemporal kernel
The proliferation of unmanned vehicles offers many opportunities for solving environmental sampling tasks with applications in resource monitoring and precision agriculture. Informative path planning (IPP) includes a family of methods which offer improvements over traditional surveying techniques for suggesting locations for observation collection. In this work, we present a novel solution to the IPP problem by using a coregionalized Gaussian process to estimate a dynamic scalar field that varies in space and time. Our method improves on previous approaches by using a composite kernel that accounts for spatiotemporal correlations and, at the same time, can be readily incorporated into existing IPP algorithms. Through extensive simulations, we show that our novel modeling approach leads to more accurate estimations when compared with formerly proposed methods that do not account for the temporal dimension.
Lorenzo Booth, Stefano Carpin
2023-09-13T22:32:17Z
http://arxiv.org/abs/2309.07340v1
Informative path planning for scalar dynamic reconstruction using coregionalized Gaussian processes and a spatiotemporal kernel ###### Abstract The proliferation of unmanned vehicles offers many opportunities for solving environmental sampling tasks with applications in resource monitoring and precision agriculture. Informative path planning (IPP) includes a family of methods which offer improvements over traditional surveying techniques for suggesting locations for observation collection. In this work, we present a novel solution to the IPP problem by using a coregionalized Gaussian processes to estimate a dynamic scalar field that varies in space and time. Our method improves previous approaches by using a composite kernel accounting for spatiotemporal correlations and at the same time, can be readily incorporated in existing IPP algorithms. Through extensive simulations, we show that our novel modeling approach leads to more accurate estimations when compared with formerly proposed methods that do not account for the temporal dimension. ## I Introduction Consider the task of modeling a soil property in an agricultural field with a point sensor. Whether the sensor is yielded by a human or an autonomous robot, the agent is tasked with deciding where to capture observations of the environment in order to inform the spatial interpolation. If the environmental properties are dynamic and can change over the course of the survey, the operator is also tasked with the option of updating an old measurement at a previously-visited site, or measuring an unvisited location. When sampling under practical constraints such as time and fuel, the operator must strategically choose sampling locations that allow for useful predictive ability in space and time, in order to arrive at a cohesive estimation of the system's state at the end of the survey. Thus, this task of _informative path planning_ (IPP) shares many elements with the task of _optimal sensor placement_ and can be formalized as a _constrained optimization_, where the agent must evaluate the best location to travel, to satisfy an objective function based in reconstructing a spatial process [23, 3]. Recently, there have been many improvements in approaches to the IPP task for various objectives including: map reconstruction with distributed agents, source position estimation for sound and contaminant plumes, and search and rescue [26]. Steady efforts have been directed toward sensing strategies for monitoring spatiotemporal processes [7]. The emergence of small, inexpensive mobile platforms points to a future where mobile sensors may be rapidly dispatched to model a dynamic phenomenon. However, to the best of our knowledge there have been limited investigations of informative planners that consider the _temporal dimension_ of information content, especially in an online planning approach. This is necessary to produce faithful representations of dynamic environments, as observations made early in the course of a survey may no longer represent the state of the system at the location at the end of the survey. Additionally, it may be desirable to infer the state of the system at arbitrary points in time, or into the future. To address this issue, we propose a novel sampling-based IPP framework that considers the information content of sensing locations in space and time. An overview of the framework is shown in Figure 1 and in the accompanying video. 
Inspired by the asymptotic optimality of IPP methods based on random trees [16][17] and advancements in large-scale, multiple-output Gaussian process modeling [15], our method combines an information-theoretic sampling-based planner with a spatiotemporal covariance function imple Fig. 1: An overview of our evaluation methodology. (a) shows the ground truth, and the vehicle in the replanning stage, with observation history enumerated. (b) shows the environment during the planning stage with the locations of previous observations. (c) Samples can be visualized along a path in a temporal dimension and (d) displays the final map estimate at all inducing points in the Gaussian process. mented as a separable kernel to access the information gain from the locations of candidate sensing locations both in space _and_ time. This also allows for both inference of the state and inference of model uncertainty for unexplored parts of the system and establishes a criterion for revisiting already-observed locations that no longer meaningfully reduce uncertainty of the system's current state. The contributions of this work are: * A framework for reasoning about the information content of observations in arbitrary dimensions reconciled to a metric appropriate for path planning * The integration of this spatiotemporal information function in a novel time-aware informative planner for terrestrial monitoring * Validation of the approach in the context of spatial and temporal priors with simulated and real-world dynamic scenarios inspired by common environmental dispersion processes * Exploration of interactions between the parameters governing the planner and the model Our work opens up several avenues for consideration: the continuous update of spatial and temporal priors through adaptive planning, extensions into multi-robot systems, combined sensing modalities for prediction in multiple dimensions (in a manner similar to Co-Kriging in the geostatistical literature), and extensions into different classes of multi-output Gaussian processes. Our framework will be open-sourced, for use in future investigations. This paper is organized as follows: Selected related work is presented in section II. The problem formulation is introduced in section III and our methods are discussed in section IV. In section V we experimentally evaluate our proposal and conclude in section VI. ## II Related Work This paper draws from a rich body of literature, surrounding the task of collecting observations by an autonomous agent for modeling the distribution of a variable of interest in the environment. IPP approaches have been extended to encompass different sensing modalities (e.g. altitude-dependent sensor models [26]). Notably, most IPP approaches consider the spatial phenomenon to be static or at steady-state, or they assume that the phenomenon does not change meaningfully during the duration of the survey. IPP for robotic planning is similar to methods which seek to optimize the placement or visitation of environmental sensors [21]. IPP problems that employ an adaptive planning approach re-compute vehicle trajectories as observations are collected. This approach can be framed under the category of problems which involve sequential decision-making with uncertainty, which in turn can be formally described as a Partially-observable Markov Decision Process (POMDP) [18]. As a constrained optimization problem, IPP shares may qualities with the orienteering problem [8]. 
Other methods leverage optimization techniques to determine the most informative route through a collection of candidate actions or locations. These approaches include Bayesian optimization [2], evolutionary algorithms [25], and reinforcement learning [27]. The asymptotic optimality of rapidly-exploring random trees (RRT) has been leveraged to solve IPP tasks in a computationally tractable manner, including exploration applications where the robot is tasked with monitoring an unknown parameter of interest [19]. Rapidly-exploring information gathering (RIG) algorithms approach the IPP task using incremental sampling with branch and bound optimization [16]. Our work builds on [17], which extended RIG with an information-theoretic utility function and a related stopping criterion. ## III Problem Formulation In this work, we consider the problem of reconstructing a dynamic scalar field given a limited number of observations, collected along a path. Paths are generated using a receding-horizon approach, alternating between planning and execution of the plan until the traveled distance exceeds the budget \(B\) or a prediction window \(t_{max}\). The task can be formulated as a constrained optimization problem, where information quantity is to be maximized subject to an observation cost. In [16], the task is specified follows: \[\mathcal{P}^{*}=\underset{\mathcal{P}\in\mathcal{V}}{\text{argmax}}\ I( \mathcal{P})\text{ s.t. }c(\mathcal{P})\leq B \tag{1}\] where \(\mathcal{P}^{*}\) is an optimal trajectory found in the space of possible trajectories \(\Psi\), for an individual or set of mobile agents such that the cost of executing the trajectory \(c(\mathcal{P})\) does not exceed an assigned motion budget, \(B\). \(I(\mathcal{P})\) is the information gathered along the trajectory \(\mathcal{P}\), and the movement budget can be any cost that constrains the effort used to collect observations (e.g., fuel, distance, time, etc.) This paper inherits the assumptions of the original RIG formulation and of prior sampling-based motion planning literature [19, 16] and adds the following assumptions with respect to time: 1. The state of the robots and the environment are modeled using discrete time dynamics 2. Movement of the sampling agent is anisotropic in the time dimension (see: section V) To quantify the information content of a trajectory, we employ a utility function that optimizes for a reduction in the posterior variance of the GP used to model the environment. This follows from framing the information gain of an observation as a reduction of map entropy or uncertainty. In [5], the authors present an approach for quantifying the information content of a map \(M\) as its entropy \(H\) and the information content of a new observation \(Z\) as the _mutual information_ between \(M\) and \(Z\), denoted as \(I(M;Z)\) and defined as follows: \[I(M;Z)=H(M)-H(M\mid Z) \tag{2}\] We take advantage of the submodularity of mutual information; that is, the information gained by adding an observation to a smaller set is more useful than adding the same observation to a larger (super-) set (See [22] for an analysis of the benefit of submodular information functions for informative sensing applications and [14] for the submodularity of mutual information.) From the perspective of the environmental modeling task, a useful survey is one that produces the most accurate representation of the environment, minimizing the expected error given field observations. This follows from equations (1) and (2). 
This assumption holds when the model is _well-calibrated_ with respect to the priors embodied in the model parameters 1. Our approach can be extended to an _adaptive planning_ scenario, where model hyperparameters are updated based on new measurements and future path plans leverage the updated model. In previous work, we have demonstrated how model priors can encode modeler intuition, resulting in sampling strategies that vary in the degree if exploration [4]. Footnote 1: Refer to Section V and Figure 3 for discussion of the consequences when this assumption does not hold ## IV Methods ### _Environmental Model_ We describe the spatial distribution of an unknown stochastic, dynamic environmental process occurring in a region \(\xi\subset\mathbb{R}^{2}\) as a function \(f\colon\mathcal{X}\to\mathbb{R}\) that is sampled and modeled at the discrete grid, \(\mathcal{X}\subset\mathbb{R}^{N_{t}\times N_{x,y}}\). Here \(N_{x,y}\) is a discretization of the spatial domain \(\xi\), while \(N_{t}\) is the temporal domain in which the spatial process evolves. The environmental map comprises this function \(f\) that describes our observations \(y_{i}\), plus some additive measurement noise \(\varepsilon_{i}\), i.e., \(y_{i}=f(x_{i})+\varepsilon_{i}\), where we assume that this noise follows an i.i.d. Gaussian distribution with zero mean and variance \(\sigma_{n}^{2}\): \(\varepsilon\sim\mathcal{N}\left(0,\sigma_{n}^{2}\right)\). We assume that \(f\) is a realization of a Gaussian process, represented as a probability distribution over a space of functions. Through marginalization, we can obtain the conditional density \(f\mid y=\mathcal{N}(\mu_{f\mid y},\Sigma_{f\mid y})\). The joint distribution of observations \(\mathbf{y}\), \(\{f(x_{1})+\varepsilon_{1},\ldots,f(x_{n})+\varepsilon_{n}\}\) and predictions \(\mathbf{f}\), \(\{f_{\star},\ldots,f_{\star^{n}}\}\) at indices \(\mathbf{X_{i}}\), \(\mathbf{t}\), \(\{x_{1,1}^{(st)},\ldots,x_{m,n}^{(st)}\}\) becomes: \[\begin{bmatrix}\mathbf{y}\\ f(x_{\star})\end{bmatrix}\sim\mathcal{N}\left(0,\begin{bmatrix}k(\mathbf{X}, \mathbf{X})+\sigma^{2}I_{N}&k\left(\mathbf{X},x_{\star}\right)\\ k\left(x_{\star},\mathbf{X}\right)&k\left(x_{\star},x_{\star}\right)\end{bmatrix}\right) \tag{3}\] where \(s\) and \(t\) denote spatial and temporal indices respectively. Here, environmental observations \(y\), are drawn from a training set \(\mathcal{D}\) of \(n\) observations, \(\mathcal{D}=(X,\mathbf{y})=\{(\mathbf{x}_{i,t},y_{i,t})\mid i=1,\ldots,n\}\). \(k\) is the covariance function (or kernel), \(\sigma_{n}^{2}\) is the variance of the observation noise, and input vectors \(\mathbf{x}\) and query points \(\mathbf{x}_{\star}\) of dimension \(D\), are aggregated in the \(D\times n\) design matrices \(X\) and \(X_{\star}\) respectively. From the Gaussian process, we can obtain estimations of both the expected value of the environmental field and the variance of each prediction. Noteworthy is the posterior variance, which takes the form: \[\sigma=\mathbb{V}\left[f_{\star}\right]=k\left(x_{\star},x_{\star} \right)-k\left(x_{\star},\mathbf{X}\right)\times \tag{4}\] \[\left[k(\mathbf{X},\mathbf{X})+\sigma_{n}^{2}\mathbf{I}_{n}\right]^ {-1}k\left(\mathbf{X},\mathbf{x}_{\star}\right)\] The differential entropy of a Gaussian random variable is a monotonic function of its variance, and can be used to derive the information content of a proposed measurement. We will show how this can be used to approximate information gain (equation (2)) in subsection IV-D. 
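As a concrete (and deliberately simplified) illustration, the sketch below builds such a GP over \((x,y,t)\) inputs in GPyTorch and queries its posterior variance at a set of map points; the product of a spatial Matern kernel and a temporal RBF kernel anticipates the spatiotemporal prior defined in the next subsection, and the input dimensions, sizes, and hyperparameters shown are placeholders rather than the values used in our experiments.

```python
import torch
import gpytorch

class SpatioTemporalGP(gpytorch.models.ExactGP):
    """Exact GP over inputs [x, y, t]; columns 0-1 are space, column 2 is time."""
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ZeroMean()
        spatial = gpytorch.kernels.MaternKernel(nu=1.5, active_dims=(0, 1))
        temporal = gpytorch.kernels.RBFKernel(active_dims=(2,))
        self.covar_module = gpytorch.kernels.ScaleKernel(spatial * temporal)

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x))

likelihood = gpytorch.likelihoods.GaussianLikelihood()
train_x = torch.rand(30, 3)   # 30 past observation locations (x, y, t)
train_y = torch.randn(30)     # sensor readings (their values do not affect the variance)
model = SpatioTemporalGP(train_x, train_y, likelihood)
# Hyperparameters are left at their defaults here; in practice they are fit or fixed a priori.

model.eval()
likelihood.eval()
query_x = torch.rand(500, 3)  # map query (inducing) points
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    post_var = likelihood(model(query_x)).variance  # per-point posterior variance, Eq. (4)
```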
It is important to note that for fixed kernels the variance does not depend on the value of the observation, allowing us to reason about the effectiveness of a proposed observation before traveling to the sampling location [23]. Also notable is the kernel \(k\), which establishes a prior over the covariance of any pair of observations. Separate priors can be established in spatial or temporal dimensions, leading to the opportunity to incorporate spatial and/or temporal domain knowledge into the planning process. ### _Spatiotemporal prior_ The modeling effort can be framed as a multi-task (or multi-output) prediction of correlated temporal processes at each spatial discretization \(N_{x,y}\). As we only have a finite set of sampling vehicles (one, in fact), we cannot observe all of the spatial "outputs" for a given time; however, we can establish a basis upon which they can be correlated [12]. Specifically, the Linear Model of Coregionalization (LMC) has been applied to GP regression where \(p\) outputs are expressed as linear combinations of independent random vector-valued functions \(f:\mathcal{T}\to\mathbb{R}^{p}\). If these input functions are GPs, it follows that the resulting model will also be a GP [1]. The multi-output GP (MOGP) can be described by a vector-valued mean function and a matrix-valued covariance function (see Equation (4)). A practical limitation of MOGPs has been their computational complexity. For making \(p\) predictions with \(n\) input observations \(y\left(t_{1}\right),\ldots,y\left(t_{n}\right)\in\mathbb{R}^{p}\), the complexity of inference is \(\mathcal{O}\left(n^{3}p^{3}\right)\) in time and \(\mathcal{O}\left(n^{2}p^{2}\right)\) in memory [6]. A variety of strategies exist to solve lighter, equivalent inference tasks under simplifying assumptions, such as expressing an output from linear combinations of latent functions that share the same covariance function, but are sampled independently [1]. Since our information function depends only on the posterior covariance, we can take advantage of fast approximations with complexity \(\mathcal{O}(k(n+p\log p))\) (see discussion in subsection IV-D). As mentioned earlier, the kernel \(k\) establishes a prior likelihood over the space of functions that can fit observed data in the regression task. For the regression of discretely-indexed spatiotemporal data, where space is indexed by \(s\) (e.g., latitude/longitude) and time is indexed by \(t\) (e.g., seconds), we build a composite kernel by multiplying a spatial and a temporal kernel: \[k((s,t),(s^{\prime},t^{\prime}))=k_{s}(s,s^{\prime})k_{t}(t,t^{\prime}) \tag{5}\] While other approaches to kernel composition are possible and encode different environmental priors, constructing a kernel that is separable along input dimensions affords considerable computational advantages. More generally, when \(k(\mathbf{x},\mathbf{x}^{\prime})=\prod_{d=1}^{D}k^{(d)}(\mathbf{x}^{(d)}, \mathbf{x}^{\prime(d)})\), the kernel (Gram) matrix \(K\) can be decomposed into smaller matrices \(K=K_{1}\otimes\cdots\otimes K_{D}\), which can be computed in \(\mathcal{O}(Dn^{\frac{D+1}{D}})\) time (see [31] and [9] for more on kernel composition for multidimensional regression). For the spatial relation, we use the Matern kernel with \(\nu=3/2\) and fixed hyperparameters.
Comprehensively described in [29], the Matern kernel is a finitely-differentiable function with broad use in the geostatistical literature for modeling physical processes due in part to its ability to resist over-smoothing natural phenomena with sharp discontinuities. It takes the form: \[K_{\text{Matern}}(X,X_{\star})=\sigma^{2}\frac{2^{1-\nu}}{\Gamma(\nu)}\left( \frac{\sqrt{2\nu}}{l}r\right)^{\nu}K_{\nu}\left(\frac{\sqrt{2\nu}}{l}r\right) \tag{6}\] where \(K_{\nu}\) is a modified Bessel function, \(\Gamma(\cdot)\) is the Gamma function, and \(r\) is the Euclidean distance between input points \(X\) and \(X_{\star}\). \(\nu>0\), \(l>0\), and \(\sigma^{2}>0\) are hyperparemeters representing smoothness, lengthscale, and observation variance respectively. We use a radial basis function kernel (RBF or squared-exponential) in the time dimension to smoothly capture diffusive properties that may fade in time. Note that the Matern kernel approaches the RBF as \(\nu\rightarrow\infty\). ### _Informative Planning_ In this work, we present a novel planner IIG-ST to address IPP task defined in equation (1). Our planner is built upon IIG-Tree, a sampling-based planner with an information-theoretic utility function and convergence criterion [17] and derived from the family of Rapidly-exploring Information Gathering (RIG) algorithms introduced by Hollinger and Sukhatme [16]. RIG inherits the asymptotic cost-optimality of the \(\mathrm{RRT}^{\star}\), \(\mathrm{RRG}\), and \(\mathrm{PRM}^{\star}\) algorithms [20], a conservative pruning strategy from the branch and bound technique [3], and an information-theoretic convergence criterion (see discussion in subsection IV-E). We add routines to consider the time dimension of samples in the tree and combine it with a hybrid covariance function and stopping criterion grounded in map accuracy. ### _Information Functions_ From equation (2), we established information gain as the reduction of map entropy \(H\) given a new observation \(Z\). If the map is modeled as a Gaussian Process where each map point (or query point) is a Gaussian random variable, we can approximate mutual entropy with differential entropy. For a Gaussian random vector of dimension \(n\), the differential entropy can be derived as \(h(X)=\frac{1}{2}\log\left((2\pi e)^{n}|\Sigma|\right)\). If we let \(X\sim\mathcal{N}\left(\mu_{X},\Sigma_{X}\right)\) and \(X\mid Z\sim\mathcal{N}\left(\mu_{X|Z},\Sigma_{X|Z}\right)\) be the prior and posterior distribution of the random vector \(X\), before and after incorporating observation \(Z\), then the mutual information becomes: \[I(X;Z)=\frac{1}{2}\left[\log\left(|\Sigma_{X}|\right)-\log\left(|\Sigma_{X|Z}| \right)\right] \tag{7}\] where \(\Sigma\) is the full covariance matrix. For a random vector \(\mathbf{X}=(X_{1},\ldots,X_{n})\) with covariance matrix \(\mathbf{K}\), the mutual information between \(\mathbf{X}\) and observations \(\mathbf{Z}\) can be approximated from equation (7) as: \[\hat{I}(X;Z)=\sum_{i=1}^{n}\frac{1}{2}\left[\log\left(\sigma_{X_{i}}\right)- \log\left(\sigma_{X_{i}|Z}\right)\right] \tag{8}\] Using marginalization, for every \(X_{i}\), it holds that \(\mathbb{V}\left[X_{i}\right]=K^{[i,i]}\). The expression becomes: \[\hat{I}^{[i]}\left(X_{i};Z\right)=\frac{1}{2}\left[\log\left(\sigma_{X_{i}} \right)-\log\left(\sigma_{X_{i}|Z}\right)\right] \tag{9}\] and can be computed as the sum of marginal variances at _i_: \(\hat{I}(X;Z)=\sum_{i=1}^{n}\hat{I}^{[i]}(X_{i};Z)\) (see [17] for a derivation). 
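In code, the approximation of Eqs. (8) and (9) reduces to a sum of per-point log-variance reductions; a minimal sketch (with illustrative array names) is:

```python
import numpy as np

def approx_information_gain(prior_var, posterior_var):
    """Approximate I(X; Z) as the sum of marginal variance reductions (Eqs. 8-9).

    prior_var, posterior_var: 1-D arrays of marginal variances at the map's
    query points, before and after conditioning on the proposed observation Z.
    """
    per_point = 0.5 * (np.log(prior_var) - np.log(posterior_var))  # Eq. (9)
    return per_point.sum()                                          # Eq. (8)
```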
The main motivation of using marginal variances at evaluation points (Equation (8)) is to avoid maintaining and updating (inverting) the full covariance matrix. This is of a particular concern for spatiotemporal modeling, because the number of inducing points grows on the order of \(m\times n\) for a spatial domain of \(m\) rows and \(n\) columns. Alternate GP formulations such as spatio-temporal sparse variational GPs (ST-SVGP) allow for computational scaling that is linear in the number of time steps [15] For computing the posterior variance at GP inducing points, we use LOVE (Lanczos Variance Estimates), for a fast, constant-time approximation of predictive variance [24, 11]. ``` 1:Proposed robot pose or location from RRT/RIG Steer \(p\), current map/state estimate \(\mathcal{M_{D}}\), covariance function \(k(\cdot,\cdot)\), prior map variance \(\sigma\), variance of observation noise \(\sigma_{n}^{2}\), near node information \(I_{\text{near}}\); 2:\(\bar{\sigma}\leftarrow\sigma\vartriangle\)Initialize updated map variance as the current map variance 3:if\(I_{\text{near}}\) is not empty then\(\vartriangle\)Initialize information gain 4:\(I\gets I_{\text{near}}\) 5:else 6:\(I\gets 0\) 7:\(z\leftarrow\)Propose a future measurement at location \(p\) and map \(\mathcal{M}\vartriangle\)Calculate posterior map variance at training and query points 8:\(\bar{\sigma}\leftarrow\textsc{LOVE}\left(X,X_{\star}\right)\) 9:for all\(i\in\mathcal{M_{D}}\)do 10:\(I\gets I+1/2\left[\mathrm{logdet}\left(\sigma^{[i]}\right)-\mathrm{logdet} \left(\bar{\sigma}^{[i]}\right)\right]\) 11:return\(I\) (total information gain), \(\bar{\sigma}\) (updated map variance) ``` **Algorithm 1** Information_G_PVR-ST() Algorithm 1 details the procedure for updating a node's information content. In lines 6-8, the location of a future measurement \(z\) at pose \(p\), is added to the set of past observations (training points) from the entire node graph. This is used to create a new map state containing the previous training points plus the new measurement and the preexisting query points where the GP is evaluated. Next, the posterior variance is calculated (lines 8) using LOVE (Lanczos Variance Estimates) [24, 11] to produce a posterior variance at the proposed locations of training points \(X\in\mathcal{M_{D}}\), query points \(X_{\star}\in\mathcal{M_{D}}\), and the variance of observation noise \(\sigma_{n}^{2}\). Finally, information content of the entire posterior map is updated and the information gain is returned as a marginal variance (lines 9-11). ### _Convergence criterion_ The closely related Incrementally-exploring Information Gathering (IIG) algorithm modifies RIG with an information-theoretic convergence criterion [17]. Specifically, IIG bases the stopping criterion around a _relative information contribution_ (RIC) criterion that describes the marginal information gain of adding a new observation relative to the previous state the RIG tree (see Equation 15 in [17] for a comprehensive discussion of the IIG algorithm and for a definition of the RIC). There, it was used as a tunable parameter that established a planning horizon for information gathering. In this paper, we use posterior map variance as a lower bound for mean-square error (MSE) (Equation (10)) at a arbitrary test location in the GP, given optimal hyperparameters \(\theta\) for the GP regression model. We replace the stopping criterion in IIG with a threshold established by the operator as the lower bound of expected prediction MSE. 
\[\mathrm{MSE}\left(\widehat{f}_{\star}\right)\geq\underbrace{\mathbb{V}\left[f_ {\star}\right]}_{=\sigma_{\star|y}^{2}(\theta)} \tag{10}\] It is important to note that this inequality holds for the hyperparameters \(\theta\) that produce an optimal predictor of \(f\) (see Result 1 in [30] for a proof of Equation (10) using the Bayesian Cramer-Rao Bound (BCRB).) In practice, \(\theta\) is learned from the data. For approximate (suboptimal) values of \(\theta\), the bound of Equation (10) will not hold, as additional error is introduced from the unknown model hyperparameters. However, when coupled with adaptive planning techniques to learn \(\theta\) from observations, then the posterior variance approaches the true lower bound of the MSE. A deeper analysis of the implications of this application is a target of future work. ### _Path selection and planning_ Once the planner terminates (either by the convergence criterion or after a fixed planning horizon), a path must be selected from the graph of possible sampling locations. We use a vote-based heuristic from [17] that ranks paths according to a similarity ratio and biases towards paths that are longer and more informative with a _depth-first search_. In the simulated environment, parameters are set for vehicle speed, sampling frequency, and replanning interval. The vehicle alternates between planning, executing, and replanning in a receeding-horizon fashion, such that 2-3 waypoints are visited in each planning interval. The path selection strategy is independent of the informative path planning algorithm and can be thought of as an orienteering problem within a tree of sampling locations. ## V Experimental Evaluation and Discussion In this section, we contrast our proposed spatiotemporal-informed planner (IIG-ST) against a traditional coverage survey strategy (see Figure 2), and an informed planner that does not consider temporal variation (IIG). We evaluate the accuracy of the final map representation at the end of the survey period under varying choices of spatial and temporal priors. We also consider the ancillary objective of making predictions of the state of environment at arbitrary points in time. This can be useful for objectives that wish to reconstruct the dynamics of a system, such as modeling a vector field. However, this is complicated by the fact that the survey envelope is anisotropic in the temporal dimension - the robot and sensor can only travel forward through time. ### _Experimental setting_ Our objective is to model the end-state of a spatial phenomenon that undergoes advection and diffusion in a 2D environment. This can represent the movement of a substance of interest in a fluid, a porous medium such as soil, or any number of similar natural processes. Two fluid parcels are initialized with inversely-proportional velocities, at opposite corners of a \(500\times 500\)-unit gridded environment. The fluid parcels advect and diffuse according to the Navier-Stokes equations for an incompressible fluid, implemented as a forward-differencing discretization without boundary conditions. We initialized the RIG-planner with fixed planning parameters: the vehicle can move a maximum of 100 map-units, every 5 time-units. Replanning is done every 10 time increments, and planning within each increment stops when estimated \(\mathbb{V}\left[f_{\star}\right]=0.15\). Sampling occurs once every 5 time increments. We set the time budget to be 100 units and compute the accuracy of the final representation of the map at \(t=50\) min. 
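We do not reproduce the incompressible-flow solver here, but the following simplified sketch illustrates the style of explicit forward-difference update used to evolve a dynamic scalar field on a grid; the velocity field, diffusivity, time step, and (periodic) boundary handling are illustrative assumptions rather than the settings of our simulator.

```python
import numpy as np

N, DT, DIFF = 500, 0.1, 0.5          # grid cells per side, time step, diffusivity (assumed)
VX, VY = 0.5, -0.5                   # constant advection velocities (assumed)

field = np.zeros((N, N))
field[10:60, 10:60] = 1.0            # tracer parcel near one corner

def step(c, vx=VX, vy=VY, dt=DT, diff=DIFF):
    """One explicit forward-difference advection-diffusion update of scalar field c."""
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4.0 * c)          # 5-point Laplacian
    dcdx = (np.roll(c, -1, 1) - np.roll(c, 1, 1)) / 2.0             # central differences
    dcdy = (np.roll(c, -1, 0) - np.roll(c, 1, 0)) / 2.0
    # np.roll wraps periodically, unlike the open boundaries of the actual simulation
    return c + dt * (diff * lap - vx * dcdx - vy * dcdy)

for _ in range(100):                 # evolve the ground-truth field over the mission time
    field = step(field)
```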
Map accuracy at different moments in mission time is presented in Figure 3. While the planner was not given a movement budget, the fixed speed of the vehicle and finite time-horizon resulted in consistent numbers of observations (\(M=21.0,SD=0.2\)) and path lengths (\(M=1236,SD=36\)) among the informative planners. The coverage baseline is given a proportional budget (21 observations, 1610 map units traveled). This is sufficient to complete a full tour of the environment with revisitation (see Figure 2). The full table of parameters set for the planner can be found in the accompanying video. We executed the experiments in a GNU/Linux environment on a 3.6 GHz Intel i7-4790 computer with 11 GB of RAM available. All procedures used single-threaded Python implementations for RRT sampling from [28], and multi-threaded posterior variance final map predictions were performed using implementations from GPyTorch [11] without GPU or TPU acceleration so as to simulate the resources available on an embedded system. Fig. 2: A visualization of the benchmark (coverage) sampling scenarios (top: fluid simulation, bottom: ocean sampling simulation). The posterior variance is depicted in the second panel, and the posterior mean in the third, with near-zero values filtered to show the underlying structure. The coverage planners are given a path budget and node budget equivalent to the median of the equivalent metrics among all runs of the informed planners. Observations are collected on a circular coverage in the synthetic environment and a lemniscatic coverage in the oceanic experiment. ### _Consequences of the temporal prior_ To demonstrate the consequences of incorporating a spatiotemporal prior on informative planning in dynamic fields, we use the composite covariance function given in equation (5) both in planning and for evaluating the accuracy of the final map representation. This is notable for the baseline comparisons: while the coverage planner follows a deterministic trajectory, different map accuracies and variance reductions are expected depending on the choice of spatiotemporal prior during the construction of the final map model. For the temporal relation, we use an RBF kernel with length scales of \(\ell_{t}=20,100,200\) time units. The spatial relation comprises a Matern kernel with \(\nu=3/2\) and length scales of \(\ell_{s}=100\) distance units. To verify that the robot solves the problem in section III, we evaluate the root-mean squared error between the map representation at \(t=100\) and the state of the field at the same time. As the planner only requires the posterior covariance, it is not necessary to produce continuous estimations of the map state, so the final representation is computed once the simulation has ended. 20 episodes are run for each hyperparameter combination, and summaries of average error, average posterior variance, and standard deviations are found in Table I. In Figure 3, we examine the effect of the choice of kernel hyperparameters on the performance of our planner. Optimal parameters were established offline using the baseline samples, a standard marginal log-likelihood objective, and the Adam optimizer in gpytorch (\(\ell_{t}=20\) and \(\ell_{s}=30\)). These serve as the basis of comparison in the top-left panel of Figure 3 and resulted in the spatiotemporal planner outperforming the temporally-naive and baseline planners, on average, throughout the entire mission duration.
Large lengthscales imply a greater degree of correlation across space or time, and result in a greater reduction of posterior variance. A reduction of model uncertainty should translate to higher map accuracy; however, this is not the case if the spatial priors are unrepresentative. For example, while the coverage planner had lower variance due to a longer path traveled and more dispersed observations, the resulting map accuracy was not better than that of the informative planners, leading to the conclusion that the spatiotemporal prior did not reflect the variation of the observed process. We want to emphasize that path planning algorithms based around variance reduction should also place the metric within a broader context of the practical objective: map accuracy. For informative planners, the effect is magnified, as the planner will move toward more dispersive sampling, thus missing high-frequency spatial phenomena entirely. This is demonstrated in the marginally improved accuracy and lower posterior variance for IIG-ST when given an unrepresentative spatial and temporal prior. In worst-case scenarios, a very unrepresentative temporal prior (\(\ell_{t}=200\)) can reduce the performance of the spatiotemporal planner _below_ the baseline (Figure 3, Col. 2). As the ultimate goal of informed robotic sensing _is_ model accuracy and not simply variance reduction, hyperparameter optimization must be a key component for accurate mapping and is a common practice in adaptive planning [10]. Furthermore, a time-varying kernel could be specified and optimized as observations of the environment are gathered. Future work will investigate the effect and performance of updating model priors during the course of a survey mission. The final map posterior is evaluated with the same spatiotemporal kernel in all cases, regardless of planning method, to ensure a fair comparison between the methods. Only the spatiotemporal planner (IIG-ST) is able to make use of temporal variance during replanning. Training observations are obtained from a point sensor model, where a "sample" is obtained by the simulated agent querying the ground-truth scalar field at a sample location. We use a sparse representation of posterior variance, evaluated at a 1/20 scale spatial resolution for a total of \(25\times 25\times 50\) query (inducing) points. Recent advancements in spatiotemporal GPs with separable kernels enable the computation to scale linearly in the temporal dimension, instead of cubically [15]. These and other recent developments are reducing the computational burden of large GPs and informative planning with spatiotemporal information at a large scale.
Fig. 3: [Advection/diffusion simulation] A comparison of map error and posterior variance (lower is better) at different locations in the mission time for different spatiotemporal priors. Optimal priors are chosen in the top left panel (\(\ell_{t}=20\) and \(\ell_{s}=30\)) and become increasingly suboptimal in other panels. IIG-ST (our planner) is compared to the same planner lacking time information (IIG) and a circular survey strategy. The error metric is expressed across the entire spatial domain at different time indices (denoted on the x-axis), and reflects the error between the estimated map and the state of the environment _at that time_. Y-axis scales are shared between rows.
### _Ocean particulate mapping scenario_ We demonstrate our spatiotemporal IPP approach in a synoptic-scale simulation using real-world ocean reflectance data.
The data was collected in an approximately 1500 x 1000 _km_ region off the west coast of California from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard NASA's Terra and Aqua earth observation satellites [13]. Rasters of weekly median reflectance from band 9 (443 _nm_ wavelength) were assembled for the calendar year of 2020. Backscattered light in this wavelength band is highly correlated with the concentration of suspended organic and inorganic particles (e.g. sediments) in the water. In terrestrial and oceanic waters, this can be used as an indicator of water quality, which can guide management decisions related to water diversion and treatment. We simulated an autonomous aquatic vehicle (AUV) with characteristics similar to the Wave Glider, which is an AUV capable of extended oceanic monitoring campaigns by using oceanic waves for propulsion. Based on the long-mission average speed of 1.5 knots, our simulated vehicle could cover a maximum of 330 km per week. We compare the performance of our informed planner against a fixed lemniscatic coverage pattern. As with the previous section, we evaluate the RMSE of the map representation, both at the final time step and at arbitrary temporal increments in the mission envelope. Summaries of average error, standard deviations, and posterior variance are presented in Table I and Figure 4. As with the previous experiment, posterior variance and map accuracy are evaluated at a 1/20 scale spatial resolution.
\begin{table} \begin{tabular}{l|c|c|c c|c c} & & & \multicolumn{2}{c|}{_RMSE_} & \multicolumn{2}{c}{_V_} \\ \hline planner & \(\ell_{s}\) & \(\ell_{t}\) & \(t_{max}\) & \(t_{all}\) & \(t_{max}\) & \(t_{all}\) \\ \hline \multirow{4}{*}{IIG} & \multirow{2}{*}{30} & 20 & 0.781 (_0.066_) & 1.123 (_0.072_) & 0.686 (_0.0_) & 0.664 (_0.001_) \\ & & 100 & 0.612 (_0.096_) & 1.035 (_0.098_) & 0.64 (_0.005_) & 0.626 (_0.005_) \\ & & 100 & 0.762 (_0.113_) & 1.288 (_0.179_) & 0.645 (_0.007_) & 0.547 (_0.004_) \\ & & 100 & 0.75 (_0.222_) & 1.093 (_0.116_) & 0.462 (_0.02_) & 0.413 (_0.025_) \\ \hline \multirow{4}{*}{IIG-ST} & \multirow{2}{*}{30} & 20 & 0.733 (_0.089_) & 1.092 (_0.064_) & 0.686 (_0.0_) & 0.665 (_0.001_) \\ & & 100 & 0.611 (_0.121_) & 1.028 (_0.135_) & 0.638 (_0.005_) & 0.624 (_0.005_) \\ & & 100 & 0.768 (_0.101_) & 1.3 (_0.238_) & 0.64 (_0.004_) & 0.547 (_0.005_) \\ & & 100 & 0.866 (_0.194_) & 1.114 (_0.117_) & 0.458 (_0.014_) & 0.414 (_0.017_) \\ \hline \multirow{4}{*}{coverage} & \multirow{2}{*}{30} & 20 & 0.777 & 1.132 & 0.671 & 0.658 \\ & & 100 & 0.697 & 1.099 & 0.639 & 0.638 \\ \cline{1-1} & & 20 & 0.718 & 1.173 & 0.552 & 0.491 \\ \cline{1-1} & & 100 & 0.721 & 1.19 & 0.398 & 0.394 \\ \hline \end{tabular} \end{table}
TABLE I: (L) [Advection/diffusion] Aggregated (\(n=20\)) map accuracy (RMSE) and posterior variance (mean, _std_) of the spatiotemporal planner (IIG-ST) compared to spatial-only and deterministic survey strategies for fixed length scales. (R) [Ocean dataset] Aggregated \(n=20\) map accuracy for the ocean water quality experiment (\(\ell_{t}=100\) for all runs). Lower numbers are better. Note: standard deviation values are not expressed for the deterministic planner.
Fig. 4: Example results from the ocean modeling experiments. (Top) Map error as a function of mission time (\(\ell_{t}=100\)). (L) Example trajectory, with path trace projected above a representation of the environment at \(t=0\). (R) Aggregated statistics from the figures in the top panel.
Also, as with the previous experiment, the performance of IIG-ST is sensitive to the choice of hyperparameters. ## VI Conclusion This work presented an approach for environmental modeling using a novel spatiotemporally-informed path planner. We presented a framework for quantifying the information gain of sampling locations based on their location and time, and for quantifying the operative outcome: map accuracy. We show that this informed strategy is computationally tractable with modern computational techniques and can outperform naive and conventional approaches, conditional on an appropriate spatiotemporal prior. Multiple avenues for future work lead from this effort. Adaptive planning can be used to revise the spatiotemporal prior as measurements are collected between replanning intervals. This approach can be extended to consider time-varying kernels, variable sensor models, and multi-robot systems.
2309.08073
Fermi surface and light quasi particles in hourglass nodal chain metal β-ReO2
Quantum oscillations in magnetic torque and electrical resistivity were measured to investigate the electronic structure of β-ReO2, a candidate hourglass nodal chain metal (Dirac loop chain metal). All the de Haas-van Alphen oscillation branches measured at 30 mK in magnetic fields of up to 17.5 T were consistent with first-principles calculations predicting four Fermi surfaces (FSs). The small-electron FS of the four FSs exhibited a very small cyclotron mass, 0.059 times that of the free electrons, which is likely to be related to the linear dispersion of the energy band. The consistency between the quantum oscillation results and band calculations indicates the presence of the hourglass nodal chain predicted for β-ReO2 in the vicinity of the Fermi energy.
Daigorou Hirai, Takahito Anbai, Takako Konoike, Shinya Uji, Yuya Hattori, Taichi Terashima, Hajime Ishikawa, Koichi Kindo, Naoyuki Katayama, Tamio Oguchi, Zenji Hiroi
2023-09-15T00:04:48Z
http://arxiv.org/abs/2309.08073v1
# Fermi Surface and Light Quasi Particles in Hourglass Nodal Chain Metal \(\beta\)-ReO\({}_{2}\) ###### Abstract Quantum oscillations in magnetic torque and electrical resistivity were measured to investigate the electronic structure of \(\beta\)-ReO\({}_{2}\), a candidate hourglass nodal chain metal (Dirac loop chain metal). All the de Haas-van Alphen oscillation branches measured at 30 mK in magnetic fields of up to 17.5 T were consistent with first-principles calculations predicting four Fermi surfaces (FSs). The small-electron FS of the four FSs exhibited a very small cyclotron mass, 0.059 times that of the free electrons, which is likely to be related to the linear dispersion of the energy band. The consistency between the quantum oscillation results and band calculations indicates the presence of the hourglass nodal chain predicted for \(\beta\)-ReO\({}_{2}\) in the vicinity of the Fermi energy. Keywords: nodal chain, quantum oscillation, topological semimetal, Dirac electron, \(\beta\)-ReO\({}_{2}\) + Footnote †: journal: ## 1 Introduction In recent years, considerable attention has been paid to topological semimetals that exhibit remarkable transport properties, responses to magnetic fields, and surface states[1]-[5]. Topological semimetals are characterised by band crossings near the Fermi energy (\(E_{\mathrm{F}}\)); Weyl and Dirac semimetals have doubly and quadruply degenerate band-crossing points, where the quasiparticles are described as Dirac and Weyl fermions. In nodal line semimetals, the band crossing is not a point in \(k\)-space but a one-dimensional loop--the so-called nodal line. In particular, nodal lines produce drumhead surface states[6]. This surface state has a high density of states, and surface superconductivity and ferromagnetism may manifest accordingly[7, 8]. Generally, spin-orbit interactions (SOIs) open an energy gap at the band crossings; however, in topological semimetals, band crossings are protected from the gap opening owing to the symmetry of the crystal structure[9]. For example, in the representative Dirac semimetals Na\({}_{3}\)Bi[10, 11] and Cd\({}_{3}\)As\({}_{2}\)[12, 13, 14], the Dirac point along the rotation axis is protected by the rotation symmetry. In the nodal-line semimetals Ca\({}_{3}\)P\({}_{2}\)[15, 16] and CaAsPX (X = P, As)[17], the nodal lines are protected within the mirror plane owing to the reflection/mirror symmetry. Reportedly, materials have band crossings that are protected by non-symmorphic symmetries including partial translations such as glide planes and screw axes, rather than symmorphic symmetries like rotation and mirror symmetry. In materials with glide symmetry, bands may exchange pairs within the glide plane, thereby forming band dispersions with an hourglass shape; the Dirac point at the neck-crossing point of the hourglass is protected against SOIs through glide symmetry[18, 19]. Such hourglass dispersions have been observed in KHgSb using angle-resolved photoemission spectroscopy (ARPES)[20]. Nodal lines protected by glide symmetry have also been suggested for SrIrO\({}_{3}\)[21], WHM (W = Zr, Hf, La; H = Si, Ge, Sn, Sb; M = O, S, Se, Te) [22], and IrO\({}_{2}\)[23]. In materials with multiple orthogonal glide planes, multiple nodal lines (Dirac loops) appear that are protected by each glide plane, which may be connected by a single point, forming a chain-like structure in \(k\)-space. 
Materials with such electronic structures are called nodal-chain (NC) metals: these materials represent a new category of topological semimetals [24]. Wang et al. have reported that in a material with an \(\alpha\)-PbO\({}_{2}\)-type crystal structure (space group \(Pbcn\)), two orthogonal nodal lines protected by \(n\)- and \(b\)-glide are connected at a single point and possess an NC along the \(k_{y}\) direction [25]. Notably, in \(\beta\)-ReO\({}_{2}\), the first-principles calculations have predicted NCs near the \(E_{F}\): this NC metal is called the "Dirac loop chain metal". Thus, unusual transport properties and surface states derived from NCs are expected for \(\beta\)-ReO\({}_{2}\). To date, only metallic electrical conduction has been reported for \(\beta\)-ReO\({}_{2}\) [26]; its magnetotransport and other properties are yet to be characterised. In addition, single crystals used in the previous study exhibited a fairly high residual resistivity of 10 \(\mu\Omega\) cm. To investigate the NC signature in \(\beta\)-ReO\({}_{2}\), we grew high quality single crystals with an extremely low residual resistivity of 206 n\(\Omega\) cm and measured their physical properties [27]. A large transverse magnetoresistance of 22,000% was observed at 2 K in a field of 10 T. In addition, quantum oscillations (QOs) were observed at high temperatures and low magnetic fields of 7 K and 7 T, indicating the existence of quasiparticles with cyclotron masses of 0.4 times the free-electron mass. A large magnetoresistance and the presence of light quasiparticles are common features of topological semimetals. Therefore, the presence of Dirac electrons in \(\beta\)-ReO\({}_{2}\) is strongly suggested. However, it is not clear whether the large magnetoresistance and light quasiparticles are derived from NCs, because there are several Fermi surfaces (FSs) other than those associated with NCs, as suggested by first-principles calculations [27]. To confirm the presence of NCs in \(\beta\)-ReO\({}_{2}\), the FSs must be experimentally measured, and the electronic structure must be determined. In this study, QOs in the magnetic torque and electrical resistivity were measured and compared with first-principles calculations to investigate the FSs of \(\beta\)-ReO\({}_{2}\) in detail. The observed QO frequencies corresponded to the extremal cross-sectional areas of all four FSs predicted by first-principles calculations, and the angular dependences of the QO branches between the calculations and experiments were consistent. The cyclotron mass of the quasiparticles estimated from the temperature dependence of the oscillation amplitude was approximately identical to that of free electrons for the three large FSs, whereas for the small electron pocket, the cyclotron mass was extremely light, 0.059 times the free-electron mass. The shape of the electron pocket surrounded by one of the two nodal lines forming the NC was determined with high accuracy, and first-principles calculations revealed that the NC was located extremely close to the \(E_{\rm F}\) in \(\beta\)-ReO\({}_{2}\). ## 2 Experimental Single crystals of \(\beta\)-ReO\({}_{2}\) were grown by the chemical vapor transport method using iodine as a transport agent as in previous reports [27]. Obtained crystals larger than \(1\times 0.3\times 0.3\) mm\({}^{3}\) were mostly twinned, while some of those smaller than 0.3 \(\times\) 0.1 \(\times\) 0.1 mm\({}^{3}\) were single-domain crystals.
Single crystal X-ray diffraction (XRD) experiments (R-AXIS RAPID II RIGAKU) were used to confirm the \(\alpha\)-PbO\({}_{2}\)-type crystal structure with space group \(Pbcn\), select single-domain crystals, and determine their crystal orientation. Magnetic torque measurements were performed by a piezo-micro-cantilever technique [28]. A single crystal was attached to the cantilever by silicon grease, and the torque signals were detected by a lock-in amplifier at a frequency of 15 Hz using a homemade bridge circuit. This technique allows for highly accurate measurements of small magnetic torque signals. The measurements were performed at the Tsukuba Magnet Laboratories of NIMS using a 20 T superconducting magnet and a dilution refrigerator. Electrical resistivity measurements were performed using a 17 T superconducting magnet and a \({}^{4}\)He gas-flow cryostat. Only single-domain portions cut from a large single crystal were used for resistivity measurements. The sample was confirmed to be a single domain by single-crystal XRD measurements. The electrical resistivity up to approximately 56 T was measured in the pulsed high magnetic field by a four terminal AC method at the frequency of 20 kHz. The voltage was recorded by a digital oscilloscope at the sampling rate of 1 MS/s and analyzed by the numerical phase detection technique [29]. The pulsed magnetic field with a duration of 40 milliseconds was generated by a multi-layer coil at the International MegaGauss Science Laboratory at ISSP. For structural analysis, powder XRD experiments were conducted at 150 K and X-ray energy of 20 keV using a quadruple PILATUS 100K detector at the BL5S2 of Aichi Synchrotron Radiation Center. The sample was prepared by crushing single crystals and sealing it in a Lindemann capillary of 0.1 mm diameter. RIETAN-FP was used for the Rietveld analysis [30]. First-principles electronic structure calculations were performed using the all-electron full-potential linearized augmented plane wave method [31, 32, 33] implemented in the HiLAPW code [34] with the Perdew-Burke-Ernzerhof generalized gradient approximation to the density functional theory [35]. The SOIs were self-consistently taken into account for the valence and core states by the second variation scheme [36]. The energy cutoffs of 20 and 160 Ry were used for wavefunction and potential expansions, respectively. The lattice constants and atomic coordinates were taken from experimental data obtained by a Rietveld refinement for the synchrotron powder X-ray diffraction pattern in this study, in contrast to the previous calculations [27] which used structural parameters optimised _via_ first-principles calculations[37]. Brillouin-zone sampling was made by the tetrahedron integration scheme with \(\Gamma\)-centered \(16\times 16\times 16\) (\(32\times 32\times 32\)) mesh points in the self-consistent-field (density of states) calculations. Fermi surfaces were drawn with the "FermiSurfer" program [38]. ## 3 Results ### dHvA oscillation in magnetic torque curve Figure 1a shows the magnetic-field dependence of the magnetic torque of \(\beta\)-ReO\({}_{2}\) at 30 mK and \(B\parallel[110]\). The magnetic torque is given by \(\boldsymbol{\tau}=\mathbf{M}\times\mathbf{B}\), where \(\mathbf{M}\) and \(\mathbf{B}\) are the magnetisation and magnetic field, respectively. The magnetic torque comprises a monotonically increasing and an oscillating components, the former originates from Pauli paramagnetic magnetisation, whereas the latter corresponds to the QO.
The QO component (upper curve) is extracted from the raw data by subtracting the Pauli paramagnetic component that is expressed by a polynomial function. The oscillating components exhibit typical de Haas-van Alphen (dHvA) oscillation behaviour, with amplitudes increasing with the magnetic field. Upon enlarging the high-field region, the oscillations are observed to exhibit a saw-tooth shape. This is caused by mixing of the harmonic components of the fundamental oscillation in high-purity crystals. Many dHvA peaks up to a relatively high frequency of 3300 T are present in the fast Fourier transform (FFT) spectrum of the QO component (Fig. 1b). Each dHvA peak is labelled from \(\alpha\) to \(\zeta\) and their composite components, starting from the lowest frequency. As expected from the oscillatory waveform, harmonic components up to the third order of the frequency of \(\delta\) are present herein. Based on the Onsager relation, the observed frequency \(F\) in the FFT spectrum yields the extremal cross-sectional area of the FS perpendicular to the magnetic field \(A_{\mathrm{F}}\): \(F=(\Phi_{0}/2\pi^{2})A_{\mathrm{F}}\), where \(\Phi_{0}\) is the magnetic flux quantum. The highest fundamental frequency labeled as \(\zeta\) is 3020 T, corresponding to the cross-sectional area of the FS \(A_{\mathrm{F}}=0.288\) Å\({}^{-2}\). This area is considerably large: 49% of the cross-sectional area perpendicular to [110] in the first Brillouin zone, 0.586 Å\({}^{-2}\). However, the lowest frequency, \(\alpha\), is approximately 70 T, which originates from a cross-sectional area of less than 1/40th of \(\zeta\). ### Angle dependence of dHvA branches The magnetic-field-angle dependence of dHvA oscillations in the magnetic-field orientations rotated around the \(c\) and \(b\) axes are displayed in Fig. 2: the angles are defined as \(\theta_{1}\) and \(\theta_{2}\), \(\theta_{1}=\theta_{2}=0^{\circ}\) for \(B\parallel a\), \(\theta_{1}=90^{\circ}\) for \(B\parallel b\), and \(\theta_{2}=90^{\circ}\) for \(B\parallel c\), respectively. Figure 2 shows only the fundamental frequency, excluding the composite and harmonic components. \(B\parallel[110]\) corresponds to \(\theta_{1}=49.6^{\circ}\), and the branches are labelled in Fig. 2 based on the assignment in Fig. 1. Most branches have a minimum at \(B\parallel a\), and the frequency increases towards both \(B\parallel b\) and \(B\parallel c\). This suggests the presence of a cylindrical or ellipsoidal Fermi surface extending parallel to the \(a\) axis; for a spherical surface, the extremal cross-sectional area is constant for all angles. The highest frequencies of \(\zeta\) assume maximum values of 3317 and 3555 T at \(\theta_{1}=64^{\circ}\) and \(\theta_{2}=54^{\circ}\), respectively. Figure 1: (a) Magnetic-field dependences of magnetic torque for a \(\beta\)-ReO\({}_{2}\) crystal at \(T=30\) mK and the magnetic field parallel to the [110] direction. The raw, background curve, and oscillatory component after subtracting the background component are depicted by the black and red lines in the lower panel and the black line in the upper panel, respectively. The inset shows a magnified range at high fields to demonstrate quantum oscillation. (b) Fast Fourier transform spectrum of the dHvA oscillation in the field range between 5 and 10 T. The observed peaks are assigned as \(\alpha\), \(\beta\), \(\gamma\), \(\delta\), \(\epsilon\), \(\zeta\), and their higher order summations.
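As a quick numerical illustration of the Onsager relation quoted above (a worked example added here, not part of the original paper), the \(\zeta\) frequency can be converted into a Fermi-surface cross-sectional area in a few lines of Python, reproducing the 0.288 Å\({}^{-2}\) value and the 49% fraction of the Brillouin-zone cross-section:

```python
from math import pi
from scipy.constants import e, hbar

# Onsager relation: A_F = 2*pi*e*F/hbar, with F in tesla and A_F in m^-2.
def area_from_frequency(F_tesla):
    """Extremal cross-sectional area in inverse square angstroms."""
    area_m2 = 2 * pi * e * F_tesla / hbar   # SI units (m^-2)
    return area_m2 * 1e-20                  # 1 m^-2 = 1e-20 angstrom^-2

A_zeta = area_from_frequency(3020)          # the zeta branch at B || [110]
print(f"A_F(zeta) = {A_zeta:.3f} per square angstrom")        # ~0.288
print(f"fraction of the BZ cross-section: {A_zeta / 0.586:.0%}")  # ~49%
```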
### Correspondence between the experiment and first-principles calculations First-principles calculations were performed to reveal the correspondence between the FSs and angular dependence of dHvA oscillations. The crystallographic parameters of \(\beta\)-ReO\({}_{2}\) used in the first-principles calculations were precisely determined through synchrotron X-ray powder diffraction experiments (see appendix). Upon comparing the previously reported parameters [39] with the results of this study, we revealed that the atomic coordinates of the oxygen atoms differed by approximately 10%. The lattice constants differed from those optimised _via_ first-principles calculations [37] by up to 1.6%. The FSs obtained through first-principles calculations are consistent with previous reports [27], as illustrated in Fig. 3, and four FSs are present: the largest electron FS #101 electron centred at the \(\Gamma\) point, a small electron pocket #103 electron, and two hole FSs, #97 hole and #99 hole, covering the \(R\)-\(T\) line. Figure 2 presents a comparison of the simulated dHvA oscillations from the calculated FSs with the experimental results. For the three relatively large FSs #101 electron, #97 hole, and #99 hole, both the experimental and simulated frequencies and the angular dependence are in good agreement. The agreement is particularly remarkable for \(B\parallel a\), and the deviation increases as the angle moves away from \(B\parallel a\). Considering the #101 electron, as shown in Fig. 3b, a simple orbit on the cylinder surrounding the \(\Gamma\) point labelled \(\zeta\) gives extremal cross-sectional area at \(B\parallel a\), where the calculation and experiment are in good agreement. However, as the magnetic field tilts away from \(B\parallel a\), the shape of the orbit giving extremal cross-sectional area increases in complexity owing to the influence of the part extending to the \(U\) point (Fig. 3c), and a small difference in the FS shape causes a large difference in the value of the extremal cross-sectional area. Consequently, the discrepancy between theory and experiment becomes large in the region where \(\theta_{1}\) and \(\theta_{2}\) are large. Based on the calculations, the dHvA oscillations originating from the smallest #103 electron are estimated to be less than 200 T. Previous magnetisation measurements have reported a QO with a frequency of 51 T at \(B\parallel a\), which is believed to originate from the #103 electrons [27]. However, in the magnetic torque experiments in this study, the peaks could not be separated because of experimental noise superimposed onto the low-frequency region, and the angular dependence could not be clarified. Therefore, as described in section 3.5, we measured the Shubnikov-de Haas (SdH) oscillations at higher temperatures to investigate the shape and cyclotron mass of the #103 electrons. Figure 2: Angular dependences of dHvA frequencies with rotation axes along the \(c\) axis (the angle \(\theta_{1}\)) and \(b\) axis (the angle \(\theta_{2}\)). The angle of the applied magnetic field is defined as \(0^{\circ}\) at \(B\parallel a\). Frequencies considered belonging to the same branch are plotted with the same symbol and color. 
The simulated angle dependence of the dHvA frequencies from the Fermi surface obtained through the first-principles calculations are displayed by lines of the same colour: the frequencies originating from #101 electron, #97 hole, and #99 hole are shown as dotted green, broken blue, and solid light blue lines, respectively. The inset image presents the schematic figure of the directions of magnetic fields. ### Electron masses of Fermi surfaces Figure 4 depicts the temperature dependence of the dHvA oscillation amplitude divided by the temperature for the representative branches at \(B\parallel a\). The peaks at 2120, 900, and 597 T correspond to the calculated values of 2185 T (#101 electron), 833 T (#99 hole), and 422 T (#97 hole), respectively. The cyclotron mass (\(m\)*) of the carrier was estimated by fitting the temperature dependence to the temperature decay factor \(R_{T}\) using the Lifshitz-Kosevich (LK) equation. The temperature dependences of the amplitudes are well fitted using the LK formula, in which the amplitude \(Amp\) is proportional to the thermal damping factor (\(R_{\rm T}\)) and Dingle damping factor (\(R_{\rm D}\)), as \(Amp\varpropto R_{\rm T}R_{\rm D}\); \(R_{\rm T}=(2\pi^{2}k_{\rm B}Tm^{*}/eB\hbar)/\sinh(2\pi^{2}k_{\rm B}Tm^{*}/eB\hbar)\) and \(R_{\rm D}=\exp(-2\pi^{2}k_{\rm B}T_{\rm D}m^{*}/eB\hbar)\), where \(k_{\rm B}\) is the Boltzmann constant and \(T_{\rm D}\) is the Dingle temperature [40]. The obtained cyclotron masses are \(1.74m_{0}\), \(1.35m_{0}\), and \(0.67m_{0}\) for the #101 electron, #99 hole, and #97 hole, respectively, where \(m_{0}\) is the mass of a free electron. The experimental values agree reasonably well with the values obtained from first-principles calculations: \(2.20m_{0}\), \(0.94m_{0}\), and \(0.60m_{0}\). The calculated results reproduce the electronic structure of \(\beta\)-ReO\({}_{2}\) not only in the shape of the FSs but also in the band dispersion. Under the same \(B\parallel a\) conditions, the cyclotron mass of #103 electron was determined to be \(0.23m_{0}\)[27]. Essentially, the three large Fermi surfaces are as heavy as or twice as heavy as the free electrons, whereas the quasiparticles from the small electron pocket have cyclotron masses of less than 1/4 those of the free electrons. ### SdH oscillations at high temperatures According to the LK equation, the larger the cyclotron mass, the larger will be the temperature decay of the QO amplitude. Therefore, at high temperatures, QOs originating from FSs with large cyclotron masses disappear, and only QOs originating from quasiparticles with small cyclotron masses are observed. We noted the difference in cyclotron masses among the four FSs and observed only QOs originating from the #103 electrons by performing measurements above 1 K. Figure 4: FFT amplitude of dHvA oscillations in the field range between 5 and 10 T divided by temperature (\(T\)) as a function of temperature. Solid black curves indicate the fitting to \(1/\sinh(\kappa m^{*}T/B)\). The cyclotron masses estimated from the temperature dampings (\(m\)*) and first-principles calculations (\(m_{\rm band}\)) are shown for each branch in the unit of free electron mass (\(m_{0}\)). Figure 3: (a) Calculated Fermi surfaces of \(\beta\)-ReO\({}_{2}\) consisting of four Fermi surfaces (#97 hole, #99 hole, #101 electron, and #103 electron).
Cross section of the Fermi surfaces crossing \(\Gamma\) point perpendicular to the directions where (b) \(\theta_{1}=0^{\circ}\) and \(\theta_{2}=0^{\circ}\) the [100] direction) and (c) \(\theta_{1}\to 22^{\circ}\) and \(\theta_{2}=0^{\circ}\). The labelles \(\beta\), \(\varepsilon\), \(\zeta_{\rm s}\) in (b) and (c) are assigned in the FFT spectrum of the dHvA oscillations Figure 5a plots the transverse magnetoresistance data obtained by applying a current in the \(a\) direction (\(I\parallel a\)) and a magnetic field in the \(c\) direction (\(B\parallel c\)). At 1.5 K, the transverse magnetoresistance increased steeply with a magnetic field up to 30 T and slowly above 30 T up to 60 T, reaching a value 1,900 times higher at 60 T than under the absence of the magnetic field. The spike structures at low fields come from experimental noise. This field dependence is in contrast to that of WTe\({}_{2}\), a Weyl semimetal, which exhibits an increase in transverse magnetoresistance proportional to \(B^{2}\) up to 60 T [41]. In the 1.5 K data, a distinct oscillation is superimposed on the magnetoresistance component above 10 T. This oscillation component, which increases in amplitude with an increasing magnetic field, is the SdH oscillation. The SdH oscillations exhibit a complex magnetic-field dependence at 1.5 K, which is a combination of oscillations with large periods and fine oscillation components. In contrast, at 4.2 K, the fine component disappears and only a large period remains. This indicates that the QOs originating from the FSs with large cyclotron masses disappear and only the components originating from the FS with small cyclotron masses are observed. After subtracting the monotonically increasing component of magnetoresistance using a polynomial function and performing an FFT, an oscillatory component at 100 T is observed up to temperatures above 4 K. The cyclotron mass obtained by fitting the temperature dependence of the amplitude from 1.5 to 4.2 K is 0.48\(m_{0}\) (Fig. 5b), which is in good agreement with the calculated value of 0.40\(m_{0}\). Figure 5: Magnetic-field dependence of transverse magnetoresistance (MR) measured with the electrical current \(I\) flowing along the \(a\) direction and magnetic field along the \(c\) direction. The inset depicts oscillatory components after subtracting background plotted against the inverse of magnetic field. (b) FFT amplitude of SdH oscillations in the field range between 15 and 50 T divided by temperature (_T_) (orange square) as a function of temperature shown with a fit to 1/sinh(_km*T_ / _B_) (black solid line). Cyclotron masses estimated from the temperature dampings (_m*_) and first-principles calculations (_m_\({}_{\rm band}\)) are shown in the unit of free electron mass (_m_\({}_{0}\)). Figure 6: (a) Magnetic-field dependence of resistance measured with the electrical current \(I\) flowing along the \(a\) direction and magnetic field along the \(b\) direction between 1.3 and 30 K. The curves are offset vertically for clarity. (b) FFT amplitude of SdH oscillations in the field range between 0.5 and 5.5 T divided by temperature (_T_) as a function of temperature. The solid black curve indicates the fitting to1/sinh(_km*T_ / _B_). The cyclotron masses estimated from the temperature dampings (_m*_) and first-principles calculations (_m_\({}_{\rm band}\)) are shown for each branch in the unit of free electron mass (_m_\({}_{0}\)). 
The inset depicts the fast Fourier transform of the oscillations of resistance in the range of magnetic fields of 1–14.5 T at various temperatures between 1.3 and 12 K. The transverse magnetoresistance data for \(I\parallel a\) and \(B\parallel b\) are plotted in Fig. 6a. Clear SdH oscillations are observed at 1.5 K as in the \(B\parallel c\) data. However, in this orientation, the QO is observed at a small magnetic field of 2 T. The oscillation period is also slow, suggesting that the QO originates from an extremely small FS. Oscillations are observed up to approximately 20 K, which is a high value for QO experiments. The FFT of the oscillations yields a frequency of 8.5 T, and a cyclotron mass of 0.059\(m_{0}\) is obtained by fitting the amplitude to the temperature dependence. The FS corresponding to this frequency is considered to be the #103 electron, and the calculated frequency and cyclotron mass are 48 T and 0.087\(m_{0}\), respectively. The difference of approximately 5.6 times between the experimental and calculated frequencies may be attributed to the insufficient accuracy of the calculations for an extremely small FS. ## 4 Discussion ### Relation between the small Fermi surface and the nodal chain The dHvA and SdH oscillation results are in good agreement with the four calculated FSs: three large FSs #101 electron, #97 hole, and #99 hole with relatively heavy cyclotron masses, and a small electron pocket #103 electron with an extremely light cyclotron mass, as small as 0.059\(m_{0}\). The #103 electron is an electron pocket around the \(U\) point in reciprocal space, which possesses a rugby-ball shape according to the calculations (Fig. 7a). Assuming #103 electron to be an ellipsoid, we estimated its actual shape using the cross sections obtained in the QO experiments: for \(B\parallel a\), \(b\), and \(c\), the frequencies are 51, 8.5, and 100 T, respectively, and the extremal cross-sectional areas are 4.8\(\times\)10\({}^{-3}\), 8.1\(\times\)10\({}^{-4}\), and 9.6\(\times\)10\({}^{-3}\) Å\({}^{-2}\), respectively. In particular, the cross-sectional area perpendicular to \(k_{y}\) is only 0.13% of the area of the first Brillouin zone. The ellipsoid is shown schematically in Fig. 7a, where the ratio of the radius lengths in the \(k_{x}\), \(k_{y}\), and \(k_{z}\) directions is 2:12:1, and the ellipsoid is long in the \(k_{y}\) direction and flattened in the \(k_{z}\) direction. In contrast, simulations based on first-principles calculations reveal that for \(B\parallel a\), \(b\), and \(c\), the frequencies are 164, 48, and 187 T, respectively, and the ratio of the radii in the \(k_{x}\), \(k_{y}\), and \(k_{z}\) directions is 1.1:4:1. Thus, although the length ratios differ, an elongated shape extending in the \(k_{y}\) direction is consistent. As shown in Fig. 7b, the #103 electron is formed through bands branching from the same point at the \(U\) point, 85 meV below the \(E_{\rm F}\). Therefore, a small change in the \(E_{\rm F}\) causes a large change in the volume of the #103 electron. As the experimental extremal cross-sectional area is smaller than the calculated value, the actual \(E_{\rm F}\) is considered to be located between 0 and \(-\)85 meV of the calculation. The reason for the anisotropic shape of #103 electron is that bands with remarkably different slopes extend from the same \(U\) point in the \(k_{x}\), \(k_{y}\), and \(k_{z}\) directions to form the FS. The slope of the band is reflected by the effective mass of the electrons.
The cyclotron mass of electrons from the bands perpendicular to the \(k_{y}\) direction, which have a steep slope and small \(k_{\rm F}\), is very small (0.059\(m_{0}\)), whereas the cyclotron mass of electrons from the bands perpendicular to the \(k_{z}\) direction, which have a gentle slope, is large (0.48\(m_{0}\)). The experimental and theoretical values are in good agreement, and their consistency further validates that the observed QOs originate from #103 electron. The shape of the FSs near the \(U\)-\(R\) line is closely related to the NC. \(\beta\)-ReO\({}_{2}\) has two types of nodal lines orthogonal in \(k\)-space (Fig. 7a): the first type is the nodal line 1 centred at the \(U\) point on the \(k_{z}=\pi\) plane (_ZURT_ plane), and the second type is the nodal line 2 centred at the \(R\) point on the \(k_{x}=\pi\) plane (_UXSR_ plane); the nodal lines 1 and 2 are protected by \(n\)- and \(b\)-glide symmetries, respectively. These two nodal lines intersect at a point (\(\pi\), 0.26\(\pi\), \(\pi\)) on the \(U\)-\(R\) line to form an NC extending in the \(k_{y}\) direction. The results of calculations in this study are consistent with the initial theoretical work which predicted NCs in \(\beta\)-ReO\({}_{2}\)[25], although there are small quantitative differences between them: for example, the energies of neck-crossing Dirac points differ by several tens of meV. According to the present _ab initio_ calculations, both nodal lines possess a considerably elongated shape: nodal line 1 crosses the \(U\)-\(Z\) line at \(k_{x}=0.984\pi\), and nodal line 2 crosses the \(R\)-\(S\) line at \(k_{z}\) = 0.982\(\pi\). In the band dispersion near the \(E_{\rm F}\), as shown in Fig. 7b, there exist hourglass-shaped band dispersions forming the nodal lines 1 and 2 on the \(U\)-\(Z\) and \(R\)-\(S\) lines, respectively, and their neck-crossing Dirac points lie at \(-\)105 meV and 136 meV. On the \(U\)-\(R\) line, the intersection of nodal lines 1 and 2, where the two hourglass dispersions overlap, lies at \(-\)73 meV. As the actual \(E_{\rm F}\) is estimated to be between 0 and \(-\)85 meV, the NC exists at energies approximate to the \(E_{\rm F}\). The calculations considering SOIs in this study showed no gap opening at the Dirac point, thereby validating the protection bestowed by the glide symmetry. Linear bands extending from the Dirac points originating from nodal lines 1 and 2 on the \(U\)-\(Z\) and \(R\)-\(S\) lines form the #101 electron and #99 hole (Fig. 7b). #101 electron and #99 hole are connected on the \(U\)-\(R\) line in the cross section of the FSs at \(k_{z}=\pi\) and \(k_{x}=\pi\), forming a chain-like structure extending in the \(k_{y}\) direction (Fig. 7c). In addition, #103 electron, which is surrounded by #101 electron in the \(k_{z}=\pi\) plane, possesses a more elongated shape than that calculated, suggesting that nodal line 1 also has an even more elongated ring than that calculated. If the NCs are present near the \(E_{\rm F}\), the Dirac electrons originating from the NCs must contribute to the transport properties. \(\beta\)-ReO\({}_{2}\) exhibits an extremely low electrical resistivity of 206 n\(\Omega\) cm at 2 K, which most likely results from the ultra-high mobility of Dirac electrons at low temperatures. In addition, \(\beta\)-ReO\({}_{2}\) is a metal with a large electron carrier density of 1\(\times\)10\({}^{22}\) cm\({}^{-3}\), and such metals usually do not exhibit a large transverse magnetoresistance[27].
Although the transverse magnetoresistance of many topological semimetals is explained through electron-hole compensation[41], such an explanation is not applicable for \(\beta\)-ReO\({}_{2}\), where the electron and hole carrier densities are not compensated. The origin of the extremely large transverse magnetoresistance in \(\beta\)-ReO\({}_{2}\) may be that the topological protection acting on Dirac electrons is broken by the magnetic field[42]. The geometry of the NCs may produce a large azimuthal dependence on the transverse magnetoresistance. \(B\parallel c\) and \(a\), in which the magnetic fields are applied perpendicular to nodal lines 1 and 2, should have a different effect on the Dirac electrons from \(B\parallel b\), in which the magnetic field is applied parallel to both nodal lines. Detailed measurements of the magnetic-field-orientation dependence of the transverse magnetoresistance are expected to clarify the correlation between NC and transport properties. Moreover, the \(E_{F}\) can be potentially tuned by controlling the oxygen content or chemical substitution to alter the contribution of nodal lines 1 and 2 or to maximise the transport properties derived from NCs. ### Comparison with other topological materials In topological semimetals, quasiparticles originating from Dirac or Weyl points have extremely light cyclotron masses. Table 1 compares the observed cyclotron masses with those of representative topological semimetals. The cyclotron mass of \(0.059m_{0}\) observed in \(\beta\)-ReO\({}_{2}\) is comparable to those of other topological semimetals, suggesting the presence of Dirac electrons. Notably, #103 electron is not formed in a band branched from the NC but is derived from a band crossing at the \(U\) point, which is a time-reversal-invariant momentum. Therefore, the extremely small cyclotron mass of the #103 electron may reflect the nature of the Dirac electrons originating from band crossing at the \(U\) point. Several candidate NC metals have been proposed; however, only a few candidates have been experimentally verified; TiB\({}_{2}\)[47, 48] and Co\({}_{2}\)MnGa[49] have been verified as NC metals based on ARPES experiments. The NC in TiB\({}_{2}\) is protected by space- and time-reversal symmetry, which opens a gap of approximately 20 meV at the band crossing point when the SOIs are considered. In contrast, the NC of \(\beta\)-ReO\({}_{2}\) protected by the glide symmetry is stable to SOIs and does not open a gap. The difference between the two materials is expected to appear in the transport properties, which are significantly affected by the electronic state near \(E_{F}\). A comparison of the two materials will yield a better understanding of the characteristics of NCs formed from different origins. Co\({}_{2}\)MnGa is a ferromagnet with a \(T_{\rm C}\) of 690 K[50], and its time-reversal symmetry is broken at room temperature. Thus, a comparative study of \(\beta\)-ReO\({}_{2}\) and Co\({}_{2}\)MnGa is expected to clarify the difference between Dirac and Weyl NCs. For both TiB\({}_{2}\) and Co\({}_{2}\)MnGa, theoretically predicted drumhead surface states have been observed[47, 49]. In \(\beta\)-ReO\({}_{2}\), which is considered to be an NC metal, a drumhead surface state has been also predicted by theory[25]. Further ARPES study will verify this prediction[25], and the transport properties and optical responses derived from the drumhead surface state are expected to be elucidated in the future.
Recently, magnetic torque measurements revealed that the development of magnetic susceptibility at low temperatures is attributed to the quadrupolar fluctuations[51]. In rhenium oxides, multipole ordering is induced by strong SOIs, in both the insulators and in metallic compounds [52, 53, 54, 55]. Moreover, determining whether the electron correlation effects observed in \(\beta\)-ReO\({}_{2}\) originate from the bulk bands or from a drumhead surface state with a high density of states is a key research interest.
\begin{table} \begin{tabular}{|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|} \hline Material & Topological state & cyclotron mass (\(m_{0}\)) & MR(\%) & Ref. \\ \hline Cd\({}_{3}\)As\({}_{2}\) & Dirac SM & 0.044 & 1.6\(\times\)10\({}^{3}\) & [43] \\ \hline NbP & Weyl SM & 0.076 & 8.5\(\times\)10\({}^{6}\) (1.85 K, 9 T) & [44] \\ \hline ZrSiS & Nodal line SM & 0.025-0.1 & 1.4\(\times\)10\({}^{6}\) & [43, 46] \\ \hline \(\beta\)-ReO\({}_{2}\) & Nodal chain metal & 0.059 & 2.2\(\times\)10\({}^{4}\) (2 K, 10 T) & This work \\ \hline \end{tabular} \end{table}
Table 1: Comparison of transport properties among various types of topological semimetals (SMs).
Figure 7: (a) Schematic figure of the geometric relation between the nodal chain and #103 electron pocket. At bottom left, cross-sections of the #103 electron pocket determined from quantum oscillations assuming a flattened rugby ball shape are also shown (orange ellipses). (b) Electronic band dispersions around the \(E_{F}\) along the \(Z\)–\(U\), \(U\)–\(R\), and \(R\)–\(S\) lines showing hourglass dispersions with fourfold degenerate Dirac points at the neck crossings (red circle for nodal line 1, green circle for nodal line 2, and light blue circle for the intersection of nodal lines). The two pairs of bands in different colors (blue and orange) belong to different irreducible representations. The four Fermi surfaces and corresponding bands are indicated by different colored squares. (c) Cross section of the Fermi surfaces on the \(k_{z}=\pi\) and \(k_{x}=\pi\) planes near the \(U\) and \(R\) points.
## 5 Conclusion In this study, quantum oscillation measurements were performed to determine the Fermi surface of \(\beta\)-ReO\({}_{2}\), an hourglass nodal-chain metal candidate. Clear quantum oscillations are observed in both the magnetic torque and resistivity measurements and show field-angle and temperature dependences that correspond well to the results of first-principles calculations using structural parameters determined through synchrotron XRD measurements. The temperature dependence reveals a significant difference in the cyclotron mass depending on the Fermi surface and orientation. The cyclotron mass of the large Fermi surfaces is approximately the same as that of the free electrons, whereas the small Fermi surface contained extremely light electrons with a minimum mass of 0.059 times that of the free electrons. According to the present calculation, the small electron pocket with light electrons is present near the nodal line, and thus the consistency between the calculation and experimental results suggests that the nodal chain exists at energies approximate to the Fermi energy. ## Acknowledgements PXRD experiments were conducted at the BL5S2 of Aichi Synchrotron Radiation Center, Aichi Science and Technology Foundation, Aichi, Japan (Proposals No. 202201033).
This work was partly supported by Japan Society for the Promotion of Science (JSPS) KAKENHI Grant Numbers JP20H01858, JP22H01173, JP22H01178, JP22K13996, JP23H04860, and JP22H04462 (Quantum Liquid Crystals). ## Appendix. Crystal structure analysis via synchrotron powder X-ray diffraction The structural parameters of \(\beta\)-ReO\({}_{2}\) were precisely determined for the first-principles calculations through synchrotron powder X-ray diffraction experiments. Since most of the single crystal samples were twinned, powder samples were used for the measurements. Although structural data have been reported from laboratory X-ray diffraction experiments in the past [39], precise measurement using high-intensity and high-energy synchrotron radiation is necessary to accurately determine the oxygen positions in \(\beta\)-ReO\({}_{2}\), which is composed of the heavy element Re and the light element oxygen. As shown in Fig. A1, the simulated pattern precisely reproduces the XRD pattern, including the small peaks originating from oxygen, and the goodness-of-fit parameter \(S\) is 1.07, indicating the validity of the obtained structural parameters. The obtained lattice constants are \(a=4.80691(9)\) Å, \(b=5.6400(1)\) Å, and \(c=4.59576(7)\) Å, which are close to previously reported experimental values \(a=4.812(1)\) Å, \(b=5.616(6)\) Å, and \(c=4.610(1)\) Å, with a difference of 0.1-0.4% (Table A). The theoretical study that pointed out the presence of the NC used structural parameters optimised _via_ first-principles calculations [37], which differ from the experimentally obtained crystal structure data [39]. The lattice constants obtained through first-principles calculations are larger for all orientations, \(a=4.8861\) Å, \(b=5.7028\) Å, and \(c=4.6298\) Å, which differ by 0.7-1.6%. Next, we compare the atomic position of Re, for which the only variable is the \(y\)-coordinate; in this study it is 0.1082(2). The differences from the previously reported value of 0.11 and from the first-principles value of 0.1075 are 1.7% and 0.6%, respectively, i.e. our result is closer to the first-principles calculation. The atomic coordinates of oxygen in this study are 0.248, 0.365, and 0.416, which differ by 9.9% in the \(z\)-coordinate compared to the values of 0.25, 0.36, and 0.375 reported previously. Compared to the values 0.243, 0.360, and 0.4086 optimised _via_ first-principles calculations, the difference is relatively small (1.4-2.0%). Figure A1: Synchrotron XRD pattern of a crushed single crystal sample of \(\beta\)-ReO\({}_{2}\) at 150 K and Rietveld fitting. Observed (red crosses), calculated (black solid line), and difference (lower blue solid line) XRD patterns are shown. Green tick marks indicate the position of allowed reflections.
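As a quick numerical cross-check of the percentage differences quoted in the Appendix (a small worked example added here, not part of the original analysis), the lattice constants reported in the text can be compared directly:

```python
# Lattice constants (angstroms) taken from the Appendix text.
this_work = {"a": 4.80691, "b": 5.6400, "c": 4.59576}   # this study, 150 K
previous  = {"a": 4.812,   "b": 5.616,  "c": 4.610}     # earlier experiment, Ref. [39]
dft_relax = {"a": 4.8861,  "b": 5.7028, "c": 4.6298}    # DFT-optimised, Ref. [37]

def percent_diff(reference, other):
    """Absolute percentage difference of each axis relative to the reference."""
    return {k: 100 * abs(other[k] - reference[k]) / reference[k] for k in reference}

print("previous experiment vs this work:", percent_diff(this_work, previous))
# -> roughly 0.1-0.4 % for a, b, c, as stated in the text
print("DFT-optimised vs this work:", percent_diff(this_work, dft_relax))
# -> roughly 0.7-1.6 % for a, b, c, as stated in the text
```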
2309.17153
The role of radial migration in open cluster and field star populations with Gaia DR3
The survival time of a star cluster depends on its total mass, density, and thus size, as well as on the environment in which it was born and in which lies. Its dynamical evolution is influenced by various factors such as gravitational effects of the Galactic bar, spiral structures, and molecular clouds. Overall, the factors that determine the longevity of a cluster are complex and not fully understood. This study aims to investigate if open clusters and field stars respond differently to the perturbations that cause radial migration. In particular, we aim at understanding the nature of the oldest surviving clusters. We compared the time evolution of the kinematic properties of two Gaia DR3 samples: the first sample is composed of $\sim$40 open clusters and the second one of $\sim$66,000 MSTO field stars. Both selected samples are composed of stars selected with the same quality criterion, belonging to the thin disc, in a similar metallicity range, located in the same Galactocentric region [7.5-9 kpc] and with ages >1 Gyr. We performed a statistical analysis comparing the properties of the samples of field stars and of open clusters. A qualitative comparison of kinematic and orbital properties reveals that clusters younger than 2-3 Gyr are more resistant to perturbations than field stars and they move along quasi-circular orbits. Conversely, clusters older than approximately 3 Gyr have more eccentric and inclined orbits than isolated stars in the same age range. Such orbits lead them to reach higher elevations on the Galactic plane, maximising their probability to survive several Gyr longer. A formal statistical analysis reveals that there are differences among the time evolution of most of the kinematic and orbital properties of field stars and open clusters. Our results suggest that oldest survived clusters are usually more massive and move on orbits with higher eccentricity.
Carlos Viscasillas VΓ‘zquez, Laura Magrini, Lorenzo Spina, GraΕΎina TautvaiΕ‘ienΔ—, Mathieu Van der Swaelmen, Sofia Randich, Giuseppe Germano Sacco
2023-09-29T11:41:59Z
http://arxiv.org/abs/2309.17153v1
# The role of radial migration in open cluster and field star populations with _Gaia_ DR3 ###### Abstract Context: The survival time of a star cluster depends on its total mass, density, and thus size, as well as on the environment in which it was born and in which it lies. Its dynamical evolution is influenced by various factors such as gravitational effects of the Galactic bar, spiral structures, and molecular clouds. Overall, the factors that determine the longevity of a cluster are complex and not fully understood. Aims: This study aims to investigate if open clusters and field stars respond differently to the perturbations that cause radial migration. In particular, we aim at understanding the nature of the oldest surviving clusters. Methods: We compared the time evolution of the kinematic properties of two _Gaia_ DR3 samples: the first sample is composed of \(\sim\)40 open clusters and the second one of \(\sim\)66,000 MSTO field stars. Both selected samples are composed of stars selected with the same quality criterion, belonging to the thin disc, in a similar metallicity range, located in the same Galactocentric region [7.5-9 kpc] and with ages \(>\)1 Gyr. We performed a statistical analysis comparing the properties of the samples of field stars and of open clusters. Results: A qualitative comparison of kinematic and orbital properties reveals that clusters younger than 2-3 Gyr are more resistant to perturbations than field stars and they move along quasi-circular orbits. Conversely, clusters older than approximately 3 Gyr have more eccentric and inclined orbits than isolated stars in the same age range. Such orbits lead them to reach higher elevations on the Galactic plane, maximising their probability to survive several Gyr longer. A formal statistical analysis reveals that there are differences among the time evolution of most of the kinematic and orbital properties of field stars and open clusters. However, the comparison between some properties (e.g. V\({}_{a}\) and \(L_{Z}\)) does not reach a sufficient statistical significance. Conclusions: Our results suggest that the oldest surviving clusters are usually more massive and move on orbits with higher eccentricity. Although they are still reliable tracers of the Galaxy's past composition, they do not reflect the composition of the place where they are currently found. Therefore, we cannot avoid considering kinematic properties when comparing data and models of chemical evolution, taking also into account the intrinsic differences between clusters and isolated stars. To validate the results, new studies that increase the sample of open clusters, especially at older ages, are needed. ## 1 Introduction Radial migration is due to interactions of stars with spiral arms or other non-axisymmetric structures in the Galactic potential (Sellwood & Binney, 2002). It produces changes in stellar orbits, which are initially circular. This results in a radial displacement of the stars with respect to the Galactocentric radius (R\({}_{\rm GC}\)) at which they formed. Since migration redistributes stellar populations to different parts of the Galactic disc, the abundances measured now in stars of different ages in a given Galactic location cannot be considered as completely representative of the past interstellar medium composition in that place. It is indeed necessary to consider the effect of radial migration for a comprehensive understanding of Galactic chemical evolution (e.g. Kubryk et al., 2013). For instance, Loebman et al.
(2016) found that radial migration has a significant impact on the shape and width of the metallicity distribution functions (MDFs) at different Galactocentric distances. The study of the chemo-dynamical properties of open clusters and field stars can provide valuable information about the impact of radial migration in the Milky Way disc. Open clusters are groups of coeval stars that formed together from the same molecular cloud, sharing the same chemical composition (see Renaud, 2018, for a review about star cluster formation and evolution in galactic and cosmological contexts). Since stars in clusters are gravitationally bound, they move together in the Galactic potential field and are subject to the same perturbations. Therefore, open clusters are expected to migrate as a coherent group. However, some perturbations might also cause individual stars to escape from the cluster (see Li et al., 2017, for numerical simulations capturing the importance of the small-scale, rapidly varying tidal component in altering the mass-loss of clusters) and become part of the field population (e.g. Moyano Loyola & Hurley, 2013) and _vice versa_ (e.g. Mieske & Baumgardt, 2007). Fukushige & Heggie (2000) showed that the escape time of a star from its parent cluster is related also to orbital parameters. Gravitational perturbations can also lead to cluster-cluster interactions (e.g. Khoperskov et al., 2018; de la Fuente Marcos et al. 2014), which may be important for the formation of the bar in disc galaxies (Yoon et al., 2019). Due to their different initial conditions, stars in open clusters and isolated stars are expected to be differently affected by migration. Both populations are impacted by gravitational interactions with the spiral arms and the Galactic bar, with other clusters and with molecular clouds. However, member stars of open clusters are also strongly influenced by the cluster's internal dynamics and we can consider each cluster, taken as a whole, as a more massive particle than a single star, and therefore we might hypothesize that its kinematics is impacted differently by gravitational interactions with respect to single stars. N-body simulations of gravitational interaction with particles of different masses would be needed to assess the amount of these differences. Seminal works, such as that of Terlevich (1987), investigated the effect of tidal heating and molecular cloud encounters in shaping the halo of clusters and determining their lifetime. More recent N-body simulations analysed the interactions of star clusters with spiral arms (Fujii & Baba, 2012). From an observational point of view, Spina et al. (2021), using data from the Galactic Archaeology with HERMES (GALAH; De Silva et al., 2015; Buder et al., 2021) and Apache Point Observatory Galactic Evolution Experiment (APOGEE-1 and APOGEE-2; Ahn et al., 2014; Jonsson et al., 2020) surveys, found that the open cluster population traces the distribution of chemical elements differently than field stars. The authors suggested that such a difference is a consequence of selection effects shaping the demography of the two populations in different ways. In fact, while field stars undergoing frequent interactions with the Galactic potential would simply migrate on different orbits, open clusters would also dissipate until they face their complete disruption. The effect of radial migration has been studied on some specific open clusters. A well-known example is the open cluster NGC 6791, one of the oldest and most metal-rich open clusters.
For this cluster, Jilkova et al. (2012) proposed a model suggesting its migration from the inner disc to its current location due to a strong influence of the bar and spiral-arm perturbations on its orbit. A systematic study of migration in a significant sample of open clusters was carried out by Chen & Zhao (2020). They used a sample of 146 open clusters to investigate the kinematics and metallicity distribution of open clusters in the Galactic disc, and found evidence for significant radial migration. Zhang et al. (2021) analysed the metallicity gradient of 225 open clusters, identifying three sequences of clusters that represent outward migrators, _in situ_ clusters, and inward migrators. Their study suggests that radial migration is an important process in the evolution of the Galactic disc and has a complex effect on the metallicity gradient. Overall, the survival of star clusters is a complex process that depends on a variety of factors. Some clusters may be more susceptible to disruption than others, and the exact conditions that determine their longevity are still not fully understood. Their structural parameters, such as mass, density, and size, are expected to play a crucial role in determining their survival time (de Grijs & Parmentier, 2007). More massive clusters are generally more tightly bound (Kruijssen, 2012), and therefore less subject to disruption, while less massive clusters are likely more easily disrupted. In addition, irrespective of their mass, more compact and denser clusters have a higher probability of surviving, as they have a greater gravitational binding energy and are less likely to be disrupted by external forces (see Angelo et al., 2023, for a discussion of the cluster compactness as a function of Galactocentric distance). The environment in which a cluster is located can also affect its survival (e.g. Grebel, 2000; Lamers et al., 2005). Thus, the dissolution mechanisms of the clusters (initial gas loss, stellar evolution, relaxation and external tidal perturbations) change over time and also depend on the position of the cluster in its parent galaxy (Baumgardt, 2009). Clusters that are located in denser regions of the Galaxy, such as the disc towards the Galactic Center or the spiral arms, are more likely to be subject to disruptive tidal forces caused by the bar, the spiral arms or molecular clouds (e.g. Portegies Zwart et al., 2002; Baumgardt & Makino, 2003; Gieles et al., 2006, 2007). On the other hand, clusters that are located in less dense regions, such as the Galactic halo, are usually more isolated and therefore less susceptible to disruption (Meng & Gnedin, 2022). Finally, the dynamical evolution of the cluster can also play a role in its survival. Over time, the cluster will undergo a process of mass segregation, where more massive stars sink to the center of the cluster and interact more strongly with each other (e.g. Allison et al., 2010). In the short term this can lead to the ejection of lower-mass stars from the cluster, and it can ultimately cause the cluster to dissolve. However, if the cluster is able to maintain a balance between the processes of mass segregation and two-body relaxation, it may be able to survive for a longer period of time. By comparing the properties of clusters and field stars, the present paper aims to shed light on the role of radial migration in shaping the distribution of stellar populations and of their metallicity in the Galactic disc. 
Furthermore, we aim at clarifying the nature of the surviving old clusters, a small number out of the total percentage of known clusters, and whether it is possible to use them as tracers of the past composition of the Galactic disc. The paper is structured as follows: Section 2 provides a description of the _Gaia_ DR3 open cluster and field star samples, how the sub-samples were selected, as well as how the ages of the latter were determined. Section 3 compares the kinematic properties of clusters and field stars, specifically their space velocities, orbits, and actions over time. Finally, Section 4 discusses the results and Section 5 draws our conclusions. ## 2 The samples ### The sample of open clusters in _Gaia_ dr3 We consider a sample of \(\sim\)300,000 member stars of \(\sim\)2700 open clusters with data from _Gaia_ DR3 (Gaia Collaboration et al., 2021), from which we select \(\sim\)8,000 member stars with available Gaia spectroscopic atmospheric parameters and abundances from the Gaia General Stellar Parametrizer from spectroscopy (GSPspec) (Recio-Blanco et al., 2022). To select a sample of high quality data, we used HQ (High Quality) and MQ (Medium Quality) indicators derived from a combination of _Gaia_ GSPspec flags and defined in Gaia Collaboration et al. (2023, see their Appendix B for a complete definition of the ranges of the used GSPspec flags to produce the HQ and MQ samples). The MQ sample defined in Gaia Collaboration et al. (2023) contains about \(\sim\)4,100,000 stars with a median uncertainty in [M/H] of about 0.06 dex and a median uncertainty in [\(\alpha\)/Fe] of about 0.04 dex, while the HQ sample stars (\(\sim\)2,200,000) have very low parameter uncertainties, in particular a median uncertainty in [M/H] of \(\sim\)0.03 dex and of \(\sim\)0.015 in [\(\alpha\)/Fe]. Through this selection we obtain a sample of \(\sim\)4,000 members of open clusters. The relationships between the calibrated GSPspec parameters ([Fe/H], \(T_{\rm eff}\), logg) for our sample of open cluster member stars, belonging to \(\sim\)1,000 OCs, are shown in Fig. 1. Since the logg determination is slightly biased in Gaia GSPspec, we use the calibrated values _logg_gspspec_calibrated_, _mh_gspspec_calibrated_ and _alphafe_gspspec_calibrated_ as presented in Recio-Blanco et al. (2022, Sect 9.1.1 and 9.1.2, Eqs. 1, 2 and 5). These corrections basically use fitted coefficients from literature trends to adjust logg, and a similar correction is suggested for metallicity and [\(\alpha\)/Fe] based on a fourth-degree polynomial fit of residuals against uncalibrated logg. The membership of stars in clusters, as well as general cluster parameters and their ages, are taken from Cantat-Gaudin et al. (2020). Relationships between the average metallicity, age and Galactocentric distance for our sample of open clusters are shown in Fig. 2. To analyse the effect of migration, we consider only clusters with an age \(\geq\)1 Gyr. Indeed, the orbits of the youngest open clusters are not expected to be strongly affected by migration given the limited number of encounters or disturbances they may have had in their short lives. This reduces the sample to 201 clusters (20% of the total), from which we extract those with a high probability (P\(>\)0.9) of belonging to the thin disc, estimated as indicated below. 
At this scope, we compute a thin-thick disc component separation and membership probability using the Support Vector Machines (SVMs) analysis (Boser et al., 1992), already adopted in Viscasillas Vazquez et al. (2022). We defined a training set based on the sample of Costa Silva et al. (2020), which has similar characteristics to our sample, with [\(\alpha\)/Fe]-[Fe/H] derived by Delgado Mena et al. (2017). We included the thin and thick disc populations, as well as a high-\(\alpha\) metal-rich population (h\(\alpha\)mr). We trained the SVM in the multiclass case with a Radial Basis Function (RBF) kernel, implemented using the scikit-learn package (Pedregosa et al., 2011). We calculated the membership probabilities calibrated using Platt scaling extended to multi-class classification (Wu et al., 2004), and we transferred the classification probability to the open cluster population. We obtained a final sample of 168 open clusters with a high probability of belonging to the thin disc (P \(>\) 0.9) in the metallicity range [Fe/H]=[\(-\)0.74, 0.45]. Their location in the Tinsley-Wallerstein diagram (TWD) is shown in Fig. 4, together with \(\sim\)200,000 main sequence turn off (MSTO) field stars potentially from the thin disc (see Section 2.2). Of these, 138 OCs (82%) are located in the Galactocentric interval 6 kpc \(<\) R\({}_{\rm GC}\)\(<\) 11 kpc. A similar analysis can be done using [Ca/Fe] instead of [\(\alpha\)/Fe] since the two ratios are very close (the CaII IR triplet is the dominant source of \(\alpha\)-element abundances in the _Gaia_ spectral range). In Van der Swaelmen et al. (2023), we performed the thin-thick disc separation using both [Ca/Fe] and [\(\alpha\)/Fe], finding very similar results for the open cluster population. This result is expected since, as mentioned above, in _Gaia_ the Ca abundance is the dominant contributor to [\(\alpha\)/Fe]. On the other hand, the use of other \(\alpha\) elements might give slightly different results, such as Mg, which shows a different growth than the other \(\alpha\) elements at super-solar metallicities (see, e.g. Magrini et al., 2017; Palla et al., 2022). Finally, we note that we did not include the treatment of the uncertainties in applying the SVM analysis to separate the two disc populations, since we are interested only in a statistical separation and it would make the analysis more difficult. The choice of a membership probability P \(>\) 0.9 implies that stars or clusters at the edge between the two populations are automatically excluded, and only those with a high probability of belonging to one disc or the other are considered (see Figure 3). This especially happens for field stars, as the majority of the clusters have a high probability of belonging to the thin disc. The properties of the final sample of clusters used throughout this study are included in Table 2 in the Appendix. Figure 1: Relationships of the calibrated GSPspec parameters for the \(\sim\)4,000 member stars of open clusters with HQ=1 and/or MQ=1 quality flags. Figure 2: Relationships of ages and Galactocentric distances for the \(\sim\)1,000 open clusters. The dashed red lines indicate the cutoffs in age and R\({}_{\rm GC}\). 
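To make the classification scheme described above more concrete, the following is a minimal sketch of a Platt-calibrated, RBF-kernel SVM separation with scikit-learn. The training abundances, class centroids, target values, and even the feature set are placeholders invented for illustration; they do not reproduce the actual training set of Costa Silva et al. (2020) with abundances from Delgado Mena et al. (2017), only the overall logic of the selection (multiclass SVM, calibrated probabilities, P > 0.9 thin-disc cut).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Mock training set in the ([Fe/H], [alpha/Fe]) plane:
# class 0 = thin disc, 1 = thick disc, 2 = high-alpha metal-rich (hamr).
thin  = rng.normal([-0.05, 0.05], 0.08, size=(200, 2))
thick = rng.normal([-0.45, 0.28], 0.08, size=(200, 2))
hamr  = rng.normal([ 0.20, 0.18], 0.05, size=(100, 2))
X_train = np.vstack([thin, thick, hamr])
y_train = np.repeat([0, 1, 2], [200, 200, 100])

# RBF-kernel SVM; probability=True enables Platt-scaled probabilities,
# which scikit-learn extends internally to the multiclass case.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_train, y_train)

# Apply to the target sample (field stars, or mean cluster abundances).
X_target = np.array([[0.05, 0.04], [-0.55, 0.30], [-0.20, 0.10]])
p_thin = clf.predict_proba(X_target)[:, 0]   # columns follow the sorted class labels

# Keep only objects with a high thin-disc membership probability.
selected = X_target[p_thin > 0.9]
```

The same trained classifier can then be applied to both field stars and cluster mean abundances, so that the two samples are separated with identical criteria.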
### The sample of field stars We select nearly 5 million stars included in the General Stellar Parametrizer (GSP)-Spec sample (Recio-Blanco et al., 2016) belonging to the catalogue of about 30 million stars in the Radial Velocity Spectrometer (RVS). Among them, we select those located around the MSTO, as shown in Fig. 5, since their ages are expected to be more accurate and reliable than those of stars in different evolutionary phases (e.g. Howes et al., 2019). To perform this selection, we consider stars that have logg between 3.8 and 4.3, and \(T_{\rm eff}\) between 5600 and 6900 K, as in Chen et al. (2022). This reduced the sample to about 900,000 stars (16% of the total sample). Fig. 5 shows the Kiel Diagram (KD) for the \(\sim\)5 million field stars with the \(\sim\)900,000 MSTO stars boxed. Of these \(\sim\)900,000 stars we selected \(\sim\)200,000 potentially belonging to the thin disc (P\(>\)0.9), using the same techniques and the same training set as for the open clusters. In this way, the field star sample covers a metallicity range ([-0.86, 0.59]) very similar to that of the open clusters. About 99% of the stars in the selected MSTO-thin disc sample are located between 7.5 kpc \(<\) R\({}_{\rm GC}\)\(<\) 9 kpc (expressed in the catalogue by the column R\({}_{\rm med\_geo}\)). From them, we extracted only stars whose ages were determined in Kordopatis et al. (2023) (see Sec. 2.2.1 for the age determination and selection) and we applied to them the HQ and MQ selection defined in Gaia Collaboration et al. (2023). The final sample consists of \(\sim\)66,000 stars, selected in terms of HQ and MQ in the same way as the open cluster member stars. Finally, we ensured that we have consistent samples in terms of positions in the Galaxy with an 'a posteriori' selection, i.e. after applying the quality selections, we reduced the sample of field stars to the thin disc and to the Galactocentric region that we also want to map with clusters. #### 2.2.1 Ages of field stars For a high-confidence age determination, we selected a subsample of stars that meets the following criterion: the standard deviation, \(\sigma\), of six age estimates, namely the five ages calculated in Kordopatis et al. (2023) considering different types of projection (with different combinations of the absolute magnitudes JHKG) and the age from the Final Luminosity Age Mass Estimator (FLAME; Andrae et al., 2018) provided by gaiadr3.astrophysical_parameters, must be less than 1 Gyr. We selected the average of the six determinations as our final age, and we used the standard deviation as its uncertainty. In addition, we consider, as for the open clusters, only stars with ages \(>\) 1 Gyr. It is important to note that the ages published in Kordopatis et al. (2023) are obtained from calibrated stellar parameters, while the FLAME ages are not. This may have a non-negligible effect on the ages of giants, for which the effect of the calibrated logg is larger, but it should be minimal in the case of MSTO stars. By construction, our sample of field stars contains stars with uncertainties in age of less than 1 Gyr. These values are comparable to those obtained for clusters. From the paper of Cantat-Gaudin et al. (2020), the uncertainty on the determination of log(age) ranges from 0.15 to 0.25 for young clusters and from 0.1 to 0.2 for old clusters. Considering that in our sample we have only 'old' clusters, with age \(>\) 1 Gyr, the typical uncertainties range from 0.2-0.3 Gyr for the youngest ones in our samples to about 1 Gyr for the oldest ones. They are, therefore, comparable and consistent with the uncertainties of the selected field star sample. 
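This age selection can be summarised in a few lines of pandas. In the sketch below, the column names (age_proj1 ... age_proj5, age_flame) and the numerical values are hypothetical stand-ins for the Kordopatis et al. (2023) projection-based ages and the FLAME age; only the logic (mean of the six estimates as the adopted age, their dispersion as its uncertainty, and the sigma < 1 Gyr and age > 1 Gyr cuts) follows the text.

```python
import pandas as pd

# Hypothetical age columns, all in Gyr.
age_cols = ["age_proj1", "age_proj2", "age_proj3", "age_proj4", "age_proj5", "age_flame"]

stars = pd.DataFrame({
    "source_id": [1, 2, 3],
    "age_proj1": [4.1, 2.0, 9.5],
    "age_proj2": [4.3, 2.2, 7.9],
    "age_proj3": [4.0, 2.1, 8.8],
    "age_proj4": [4.2, 1.9, 9.9],
    "age_proj5": [4.4, 2.0, 7.2],
    "age_flame": [4.1, 2.3, 9.0],
})

stars["age"]     = stars[age_cols].mean(axis=1)   # adopted age: mean of the six estimates
stars["age_err"] = stars[age_cols].std(axis=1)    # adopted uncertainty: their dispersion

# High-confidence subsample: sigma < 1 Gyr and, as for the clusters, age > 1 Gyr.
good = stars[(stars["age_err"] < 1.0) & (stars["age"] > 1.0)]
```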
Fig. 6 shows the relationships between age, Galactocentric distance R\({}_{\rm GC}\), and metallicity for the selected samples. Both clusters and field stars occupy a similar range of metallicity. However, the ages of field stars span a wider range than those of clusters, since clusters generally do not survive beyond 7 Gyr. On the other hand, the selected clusters sample a wider range of Galactocentric distances, since the cluster member stars are mostly luminous giants, i.e. they are easier to observe at large distances than the selected MSTO field stars. Finally, the difference in the types of stars observed in the field and in the clusters could generate observational biases, for example, in the derived metallicities. In parallel to the present paper, we are carrying out a work aiming to compare and validate _Gaia_ spectroscopic parameters by comparing them with the _Gaia_-ESO ones (Van der Swaelmen et al. 2023). We confirm that there is an excellent agreement between the calibrated spectroscopic metallicities and [\(\alpha\)/Fe] of both giants and dwarfs in _Gaia_-ESO and in _Gaia_. So, taking the _Gaia_-ESO survey as a reference, there are no systematic differences in _Gaia_'s calibrated metallicities and [\(\alpha\)/Fe] between giants and dwarfs. Figure 3: The \(\sim\)900,000 MSTO field stars in the [\(\alpha\)/Fe] vs [M/H] plane, colour coded by the probability of belonging to the thin disc. Figure 4: TWD for the \(\sim\)200,000 MSTO field stars (grey symbols) and \(\sim\)170 open clusters (pink symbols), both potentially from the thin disc and with similar characteristics. Clusters with a green edge represent those that are in the solar region (\(\sim\)40 open clusters). Figure 5: KD for the more than 5 million field stars of _Gaia_ DR3. The location of the \(\sim\)900,000 MSTO stars is indicated with a red box. ## 3 Comparing the kinematic properties of clusters and field stars In this section, we compare the evolution over time of the space velocities, orbital parameters, and orbital actions of a reduced sample of clusters and of a sample of field stars located in the same Galactocentric region. The radial distribution of the \(\sim\)70,000 field stars, expressed in quantiles (percentiles), is R\({}_{\rm GC}\) [Q01, Q10, Q90, Q99] = [7.53, 7.82, 8.69, 8.96] kpc. Thus, 98% of the field stars (\(\sim\)68,000) are located between 7.53 and 8.96 kpc, from which we select 66,000 HQ and MQ stars. We selected the corresponding 41 clusters (see Fig. 4) located in the region [7.5-9] kpc. The three-dimensional Galactocentric coordinates, (cylindrical) space velocities, and orbital parameters of the sample field stars are obtained from Gaia Collaboration et al. (2023). The Galactocentric coordinates X, Y, Z (in Cartesian coordinates), the Galactocentric distance (R\({}_{\rm GC}\)), and the cylindrical space velocities (radial V\({}_{R}\), azimuthal V\({}_{\phi}\), vertical V\({}_{Z}\)) in a right-handed frame are computed from the right ascension, declination, line-of-sight velocity, proper motions, and the EGDR3 geometric and photogeometric Bayesian line-of-sight distances from Bailer-Jones et al. (2021). They assumed a solar position (R, Z)\({}_{\odot}\)=(8.249, 0.0208) kpc and solar cylindrical velocity components (V\({}_{R}\), V\({}_{\phi}\), V\({}_{Z}\))\({}_{\odot}\)=(9.5, 250.7, 8.56) km s\({}^{-1}\) (GRAVITY Collaboration et al. 2020). Their orbital parameters were computed with the Galpy code (Bovy 2015), using the axisymmetric Galactic potential of McMillan (2017). For the clusters, we compute the orbits in a consistent way with galpy, using the clusters' mean parallaxes, radial velocities and distances from _Gaia_. 
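For concreteness, the sketch below shows how such an orbit integration could look with galpy. It assumes a recent galpy release in which the McMillan (2017) potential is exposed as galpy.potential.mwpotentials.McMillan17; the cluster astrometry, the adopted solar parameters, the integration time, and the Staeckel focal length delta are placeholder values, not the exact configuration used in Gaia Collaboration et al. (2023) or in this paper.

```python
import numpy as np
from astropy import units as u
from galpy.orbit import Orbit
from galpy.potential.mwpotentials import McMillan17   # McMillan (2017) axisymmetric potential
from galpy.actionAngle import actionAngleStaeckel

# One cluster, described by its mean Gaia astrometry (placeholder values):
# [RA (deg), Dec (deg), distance (kpc), pmRA (mas/yr), pmDec (mas/yr), RV (km/s)]
o = Orbit([290.22, 37.77, 4.2, -0.42, -2.27, -47.4], radec=True,
          ro=8.249 * u.kpc, zo=0.0208 * u.kpc, solarmotion="schoenrich")

# Integrate the orbit in the chosen potential and derive orbital parameters.
ts = np.linspace(0., 5., 2001) * u.Gyr
o.integrate(ts, McMillan17)
ecc, zmax = o.e(), o.zmax()

# Actions from the Staeckel approximation (delta is the focal length of the
# prolate spheroidal coordinates, in units of ro; 0.4 is a common default).
aAS = actionAngleStaeckel(pot=McMillan17, delta=0.4, c=True)
jr, lz, jz = aAS(o.R(), o.vR(), o.vT(), o.z(), o.vz())
```

The same call pattern, applied star by star or vectorised over the catalogue columns, yields the eccentricities, maximum heights, and actions compared in the following sections.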
Due to the large number of field stars and their high density, we use a point density function (gaussian_kde) to represent them (see Figs. 7 to 9), implemented using scipy.stats, determining the density of stars at each point and assigning that value to the colours in the colourmap. For an easier comparison, the \(\sim\)66,000 field stars are also shown in equally distributed bins using a 'Quantile-based discretization function', defining the bins using percentiles based on the distribution of the data. We divided the data into 14 quantiles (q) of approximately \(\sim\)5,000 stars each and computed the mean and dispersion for each bin. We also show regressions (linear and non-linear) applied to both samples using the Ordinary Least Squares (OLS) method and a nonparametric LOWESS model (locally weighted linear regression), respectively, implemented using statsmodels. For a better comparison we also apply Pearson and Spearman statistical correlation tests, computed using scipy.stats, to measure the strength and direction of the relationship (linear and monotonic) between variables. In Figures A.1 to A.10 in Appendix A we also show the cumulative distribution functions (CDFs) and the results of the two-sample Kolmogorov-Smirnov (K-S) test statistic (Kolmogorov 1933; Smirnov 1939), computed using scipy.stats. This allows us to analyse in more detail the distribution of the data of both samples and to find where the maximum absolute difference between the two cumulative distributions lies. 
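The following compact sketch, with synthetic stand-in data, illustrates the sequence of tools just described (scipy.stats gaussian_kde, quantile binning, statsmodels OLS and LOWESS, Pearson/Spearman coefficients, and the two-sample K-S test). Sample sizes, variable names, and numerical values are illustrative only and do not reproduce the catalogues used in the paper.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

# Toy stand-ins for the two samples: (age, V_phi) for field stars and clusters.
rng = np.random.default_rng(1)
age_f  = rng.uniform(1., 10., 5000)
vphi_f = 242. - 2.0 * age_f + rng.normal(0., 20., age_f.size)
age_c  = rng.uniform(1., 7., 41)
vphi_c = 249. - 3.9 * age_c + rng.normal(0., 15., age_c.size)

# Point-density value at each field star (used to colour the density plots).
xy = np.vstack([age_f, vphi_f])
density = stats.gaussian_kde(xy)(xy)

# Quantile-based binning: 14 bins of roughly equal population, with mean and dispersion.
df = pd.DataFrame({"age": age_f, "vphi": vphi_f})
df["bin"] = pd.qcut(df["age"], q=14)
binned = df.groupby("bin", observed=True)["vphi"].agg(["mean", "std"])

# Ordinary Least Squares linear fit (intercept c and slope m, as in Table 1).
ols = sm.OLS(vphi_c, sm.add_constant(age_c)).fit()
c_intercept, m_slope = ols.params

# Nonparametric LOWESS (locally weighted regression) for the cluster trend.
smoothed = sm.nonparametric.lowess(vphi_c, age_c, frac=0.6)

# Pearson / Spearman correlation coefficients with p-values.
pcc, p_pcc = stats.pearsonr(age_c, vphi_c)
scc, p_scc = stats.spearmanr(age_c, vphi_c)

# Two-sample Kolmogorov-Smirnov test comparing the two distributions.
ks_stat, ks_p = stats.ks_2samp(vphi_f, vphi_c)
```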
### Space Velocities over time Stars and open clusters orbit the Galactic Center on quasi-circular orbits. The distribution of velocities of stars in the thin disc of the Galaxy is a function of the age of the stars. Younger stars, which formed relatively recently, have tangential velocities that are close to those of the Galactic disc at the solar location and of the Sun (i.e., \(\sim\)240-250 km s\({}^{-1}\); Russeil et al. 2017; GRAVITY Collaboration et al. 2020). This is because they are on less perturbed orbits. Instead, older stars typically have smaller tangential velocities because they are likely on more elliptical orbits. Here we aim at comparing the velocity components of stars and clusters to investigate if there are substantial differences in their behaviour. In Fig. 7 we show the three components of the Galactic space velocities of star clusters compared to the ones of field stars, as a function of stellar ages. In Table 1 we show the coefficients of the linear regressions and the Pearson and Spearman correlation coefficients (PCC and SCC, respectively) with the associated p-values. We recall that with a p-value less than 0.05, the results are considered to be statistically significant, implying that the observed correlation is not simply the result of chance and is more likely to reflect a real relationship between the variables. If the p-value is greater than 0.05, the results are not statistically significant, which means that there is not enough evidence to reject the null hypothesis and it cannot be affirmed that there is a significant relationship between the variables.

| Sample | Param. | m | c | PCC | p-value | SCC | p-value |
|---|---|---|---|---|---|---|---|
| field stars | \(V_{R}\) | -0.162 | 0.899 | -0.009 | 0.020 | -0.007 | 0.067 |
| field stars | \(V_{\phi}\) | -1.999 | 239.419 | -0.177 | 0.000 | -0.151 | 0.000 |
| field stars | \(|V_{Z}|\) | -0.947 | 9.162 | +0.176 | 0.000 | +0.149 | 0.000 |
| field stars | \(R\) | -0.003 | 8.270 | -0.021 | 0.000 | -0.025 | 0.000 |
| field stars | \(e\) | +0.008 | 0.094 | +0.227 | 0.000 | +0.200 | 0.000 |
| field stars | \(Z_{max}\) | +0.007 | 0.350 | +0.058 | 0.000 | +0.069 | 0.000 |
| field stars | \(J_{R}\) | +3.330 | 14.316 | +0.220 | 0.000 | +0.196 | 0.000 |
| field stars | \(J_{Z}\) | +0.166 | 4.213 | +0.056 | 0.000 | +0.067 | 0.000 |
| field stars | \(L_{Z}\) | -17.197 | 1979.141 | -0.175 | 0.000 | -0.154 | 0.000 |
| open clusters | \(V_{R}\) | +4.910 | +10.597 | +0.189 | 0.237 | +0.192 | 0.229 |
| open clusters | \(V_{\phi}\) | -3.882 | +249.036 | -0.247 | 0.120 | -0.034 | 0.831 |
| open clusters | \(|V_{Z}|\) | +2.203 | +3.931 | +0.455 | 0.003 | +0.530 | 0.000 |
| open clusters | \(R\) | -0.003 | 8.284 | -0.059 | 0.713 | +0.023 | 0.887 |
| open clusters | \(e\) | +0.030 | 0.049 | +0.594 | 0.000 | +0.224 | 0.158 |
| open clusters | \(Z_{max}\) | +0.124 | 0.051 | +0.677 | 0.000 | +0.508 | 0.001 |
| open clusters | \(J_{R}\) | +12.608 | -4.284 | +0.654 | 0.000 | +0.211 | 0.185 |
| open clusters | \(J_{Z}\) | +2.821 | -2.375 | +0.703 | 0.000 | +0.505 | 0.001 |
| open clusters | \(L_{Z}\) | -33.421 | +2055.896 | -0.248 | 0.118 | -0.079 | 0.623 |

Table 1: Linear regression coefficients (slope m and intercept c) obtained using the least squares method, as well as Pearson and Spearman correlation coefficients and their p-values, for both the cluster and field star samples. Figure 6: Relationships between the properties of the \(\sim\)70,000 selected field stars: number of stars, ages and Galactocentric distances. The dashed red lines indicate the cutoffs in age and R\({}_{\rm GC}\). Figure 7: Space velocities (V\({}_{R}\), V\({}_{\phi}\) and \(|V_{Z}|\)) for \(\sim\)66,000 field stars from our selected sample and 41 open clusters in the solar region. The data are presented in equally distributed bins (q=14) for field stars (lime). In the background the field stars are also shown on a density plot which is encoded in the colourbar. The size of the symbols for clusters (blue circles) is proportional to the square root of their number of members (\(\sqrt{N}\)), shown in the legend with their total number of members. The straight lines represent the linear fits (green for field stars and blue for open clusters) and the curve (cyan) is a nonparametric LOWESS model (locally weighted linear regression) applied to the clusters' data. Figure 8: Orbital parameters (\(R\), \(e\) and \(Z_{max}\)) for \(\sim\)66,000 field stars from our selected sample and \(\sim\)40 open clusters in the solar region. The data are presented in equally distributed bins (q=14) for field stars (lime squares). In the background the field stars are also shown on a density plot which is encoded in the colourbar. Symbols and colours are as in Fig. 7. 
Field stars and open clusters show correlations in their trends of radial V\({}_{R}\), tangential V\({}_{\phi}\) and vertical V\({}_{Z}\) velocities as a function of stellar ages, which are in some cases different (see the coefficients of the linear regressions in Table 1). Some of these differences also have statistical support (low p-values), while others are statistically weaker, likely due to the limited number of old clusters and to the scatter of their properties. The average radial velocity (upper panel) of field stars is quite constant over time, with a scatter increasing in the older populations. Young clusters have a slightly lower radial component than field stars, and show significant scatter. Neither correlation is very strong or statistically significant (high p-values in Table 1). The tangential velocity component, V\({}_{\phi}\) (central panel), is close to that of the Milky Way disc rotation for objects moving along circular orbits at the Sun's location (Russeil et al. 2017). Several young clusters have higher V\({}_{\phi}\) than field stars in the same age range, and the intercept of the regression for star clusters is higher than that of field stars (249 km \(s^{-1}\) vs 239 km \(s^{-1}\)). As the ages of clusters and stars increase, the two trends tend to converge, although the number of old clusters decreases dramatically after 3 Gyr. This is well reflected in the LOWESS model, which is able to capture the change in behaviour between young and old clusters. In this case, the linear correlation is statistically significant for field stars (with very low p-values), while it is not for clusters. Concerning the absolute vertical velocity, \(|V_{Z}|\) (bottom panel), it increases in both populations over time. However, young clusters have a smaller vertical velocity component than field stars, reaching similar values only for ages above 4 Gyr. In this case, the linear correlations are statistically significant in both samples. The clusters show higher PCC and SCC than the field stars, indicating a stronger correlation (see Table 1). The combination of the three results, which are obviously interconnected, shows that during the first 1-3 Gyr star clusters remain more stably on nearly circular orbits than single stars, while older clusters typically have more perturbed velocity components than field stars. ### Orbital parameters and actions over time Using the velocity components and the distance of a star, and assuming a gravitational potential, the orbit of a star, characterised by its guiding radius \(R\), its eccentricity \(e\) and its inclination, parameterised by the maximum height reached above the Plane, \(Z_{max}\), can be derived. Circular orbits have eccentricities close to zero and reach low heights above the Plane. In Fig. 8, we show the Galactocentric radius and the orbital parameters \(e\) and \(Z_{max}\) as a function of stellar ages. In the upper panel, we present the distribution of R\({}_{\rm GC}\) as a function of stellar ages. As per sample selection, field stars and clusters are confined between 7.5 and 9 kpc. In the central panel, we show the relation between the eccentricity of the orbit and stellar ages. 
During the first 3 Gyr, clusters and field stars show a slightly different behaviour: on average the orbits of clusters have lower eccentricities (\(e<0.1\), on average), i.e. they are more circular and less perturbed than those of field stars. However, as time passes, clusters proceed faster towards more eccentric orbits (\(e>0.1\)), or, equivalently, open clusters with low eccentricity no longer exist (at least in our sample limited to the Solar neighbourhood). The correlations are significant in both cases, but the effect is more pronounced in clusters (higher PCC and SCC). Finally, in the bottom panel, we show the maximum height, \(Z_{max}\), above the Plane as a function of stellar ages. As for the eccentricity of the orbits, younger clusters are orbiting closer to the Galactic Plane than field stars of the same age. However, the situation changes for the surviving clusters beyond \(\sim\)3 Gyr, which reach greater heights above the Galactic Plane, while field stars experience similar but smoother changes over time. The correlations are statistically significant in both cases, but again, the increase is steeper in clusters (higher PCC and SCC). An equivalent way to the use of the orbital parameters is to describe the motion of a star or of a stellar cluster by its orbital actions, which are three fundamental quantities used to describe the motion of a particle (both star or cluster) in a rotating galaxy. The radial action (\(J_{R}\)) describes the component of a star's angular momentum in the direction of the Galactic Center, the vertical action (\(J_{z}\)) describes the component of a star's angular momentum perpendicular to the Galactic plane, and the azimuthal action (\(L_{Z}\), equivalent to \(J_{\phi}\)) describes the component of a star's angular momentum around the Galactic Center. In axisymmetric potentials, the orbital actions are used to quantify the amount of oscillation of the star along its orbit in the Galactocentric directions (R, \(\phi\), z; see Binney and Tremaine 2008). For the interpretation of the orbital actions, we follow Trick et al. 
(2019): the radial action \(J_{R}\) can be considered as a measure of the orbit eccentricity or of the radial extent of a disc orbit's in-plane epicyclic rosette; the azimuthal action, \(J_{\phi}\), is equivalent to the angular momentum in the z-direction, \(L_{z}\), and it describes the amount of rotation around the Galactic Center. Finally, \(J_{z}\), the vertical action, quantifies the displacement above and below the Galactic Plane. In Fig. 9 we show the orbital actions of the clusters compared to those of the field stars. In the upper panel, we show the radial action \(J_{R}\) over time. As said above, its behaviour is similar to that of the eccentricity: young clusters typically have lower \(J_{R}\) than field stars. The correlations are statistically significant in both cases, with a steeper growth for clusters (higher PCC and SCC). In the central panel, we present the vertical action \(J_{z}\), which indicates the displacement above and below the Galactic Plane. Also in this case, the younger clusters of our sample do not exhibit a large vertical excursion around the Plane, while the trend indicates that older clusters are more likely to explore regions far from the Plane due to their inclined orbits. As in the \(|V_{Z}|\) case, the correlations are statistically significant in both cases (p-values \(<\) 0.05), but stronger for the clusters (higher PCC and SCC). Finally, \(L_{Z}\) in young clusters is larger than in field stars, again indicating that the orbits of clusters are closer to circular than those of field stars. However, due to the large scatter in the cluster data, the relationship between \(L_{Z}\) and age has very low statistical significance, as confirmed by the high p-values (0.118 for PCC and 0.623 for SCC). ## 4 Discussion on the old surviving clusters There is some statistical evidence that correlations exist between the kinematic properties of clusters and field stars and their ages, and that, in some cases, such correlations might differ, indicating a different behaviour of field stars and clusters. In other cases, these differences do not have sufficient statistical significance. From the comparison of the kinematic and orbital properties of the cluster and field star populations (in particular V\({}_{Z}\), Z\({}_{\rm max}\) and J\({}_{Z}\), for which PCC and SCC have low p-values), we can conclude that the former are on average more resistant to perturbative effects up to an age of about 3 Gyr, moving on quasi-circular orbits close to the Galactic Plane. On the other hand, clusters older than 3 Gyr are quite scattered in the kinematical properties-age planes, with some of them having more eccentric and more inclined orbits, thus reaching greater heights above the Plane. The fact that several old clusters have eccentric orbits (\(e>\)0.15) is not a cause in itself, but rather it is likely a consequence of the passage of time, and a necessary condition to allow their survival (Cai et al., 2016). Several authors claimed that even in the first Myr interactions with molecular clouds are more disruptive for low-mass clusters (see, e.g. Gieles and Renaud 2016), and only massive clusters with peculiar orbits might survive the several interactions that happen in the Galactic disc in the following Gyr (Moitinho, 2010; Buckner and Froebrich, 2014). The reason why they now stand out might be related to a natural selection effect, since clusters located closer to the Galactic Plane would have more interactions and thus dissolve more rapidly. 
But why don't the interactions, which are proven to cause such drastic changes in the orbits, lead to the destruction of the clusters, as Friel (1995) pointed out? What are the particular physical properties that have made these clusters survive until today? Gustafsson et al. (2016) demonstrated that just a small fraction of massive clusters can survive for several Gyr, and that only 0.5% of all formed massive open clusters are predicted to end up at high altitude above the Plane. To seek answers to these questions, we can examine some examples of the surviving old clusters to get an idea of their general characteristics. Old open clusters are, indeed, rare as star clusters dissipate over time. We expect that only the most massive, dense, and well-placed ones can survive several Gyr (Boesgaard et al., 2015). In Fig. 10, we plot our sample of 41 clusters with symbols proportional to their number of members (as estimated from _Gaia_ by Cantat-Gaudin et al. 2020). Most clusters older than 3 Gyr have a high average number of members. Younger clusters have a more variable number of members, ranging from highly populated clusters, such as NGC 2477, to clusters with few members, such as UBC 139 (members from Cantat-Gaudin et al. 2020). Among the oldest clusters, there are some that stand out because they are more populous than the other clusters: NGC 6791, NGC 2682 (M67), and Trumpler 19. Their large number of members has also been confirmed using methods based on the DBSCAN algorithm, complementary to the kinematic methods (Gao et al., 2014). The common characteristics of these clusters are that they are currently high above the Galactic Plane, and still have a high density, a large number of members, and a relatively high metallicity. In particular, NGC 2682 is located in a low density region, and it has likely not experienced significant gravitational interactions that could have affected its structure (cf. Davenport and Sandquist 2010). Recent works revealed that it is more massive than previously believed (Carrera et al., 2019) and that it underwent a mass segregation process (e.g. Geller et al., 2015), which on long timescales could make the cluster more tightly bound and less likely to disperse. Finally, the oldest and most highly populated cluster in our sample is NGC 6791, which is among the most studied clusters due to its various peculiarities, such as its high eccentricity and maximum height above the Plane, coupled with a high metallicity. Indeed, NGC 6791 has an orbit more similar to that of a globular cluster or a dwarf galaxy than to that of a thin disc open cluster (Carraro and Chiosi, 1994; Jilkova et al., 2012). Gustafsson et al. (2016) suggested that NGC 6791 maintained so many members because several generations of stars have formed within it, given that the material expelled by AGB stars inside the cluster was retained within it. This would later be the seed for the birth of new generations of stars. However, there is no evidence of chemical anomalies or large abundance spreads in NGC 6791 (Carretta et al., 2007; Bragaglia et al., 2014). As seen in Viscasillas Vazquez et al. (2022), NGC 6791 is chemically mixed, not only for its higher metallicity, but also for its [\(\alpha\)/_slow-neutron capture_] element ratios, which do not agree with those of clusters found in the same region, hence emphasising the importance of taking migration into account in chemical evolution studies. Figure 10: Our selected sample of 41 open clusters in the thin disc and solar region, sized by the total number of members, coloured by age, and labelled by cluster name and number of member stars considered in this study. 
So, in conclusion, it appears that in the solar neighbourhood the oldest clusters that managed to survive have in common a large initial mass and fortuitous orbital conditions (cf. van den Bergh and McClure, 1980). However, in different parts of the Galaxy, like the outer Galaxy, our ability to observe clusters is linked to possible observational biases, which favour the detection of massive, distant clusters high above the Plane. ## 5 Summary and Conclusions We conducted a purely observational study using high-quality spectroscopic _Gaia_ DR3 data to identify the differences between the kinematics of the field star and open cluster populations. For a meaningful comparison between the kinematic and dynamical properties of clusters and field stars, we restricted our sample to the radial region [7.5-9] kpc. Furthermore, we restricted the sample to clusters older than 1 Gyr because our aim is to estimate the effect of migration, which is negligible for younger clusters. We selected the sample of field stars around the MSTO, so as to have a better determination of their ages. We compared the velocity components, the orbital parameters and the orbital actions of our sample of \(\sim\)66,000 field stars and of 41 open clusters. We conducted a statistical analysis to test the significance of the results. Open star clusters younger than 2-3 Gyr maintain circular orbits, with a tangential velocity component V\({}_{\phi}\) more dominant than in field stars of similar ages. This corresponds to more circular orbits (lower eccentricity and lower height from the Galactic Plane), and it is also reflected in the orbital actions. In particular, we observed lower \(J_{R}\) and \(J_{Z}\) for the clusters with ages below 2-3 Gyr than for field stars of the same age. On the other hand, older clusters are fewer in number, are characterised by more perturbed orbits, and are typically found higher above the Plane. These characteristics, together with being massive (or having been massive), seem to be essential to ensure their survival for several Gyr. Thus, the oldest clusters, although still chemical tracers of the Galaxy's past composition, do not reflect the composition of the place where they are currently found. As already noticed in Magrini et al. (2023), the radial metallicity gradient of clusters older than 3 Gyr shows a higher level of scatter, and it cannot be straightforwardly used to study the temporal evolution of the gradient unless kinematic constraints are also considered. ###### Acknowledgements. The authors thank the anonymous referee for her/his constructive and insightful suggestions which greatly improved the paper. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. This work also made use of SciPy (Virtanen et al. 2020), Astropy (Astropy Collaboration et al. 2018), Scikit-learn Machine Learning (Pedregosa et al. 2011), StatsModels (Seabold & Perktold 2010), Seaborn (Waskom 2021), TopCat (Taylor 2005), Pandas (pandas development team 2020) and Matplotlib (Hunter 2007). CVV and LS thank the EU programme Erasmus+ Staff Mobility for its support. 
CVV and GT acknowledge funding from the Lithuanian Science Council (LMTLT, grant No. P-MIP-23-24). LM thanks INAF for the support (MiniGrant Chees).
2308.02677
Metaverse for Industry 5.0 in NextG Communications: Potential Applications and Future Challenges
With the advent of new technologies and endeavors for automation in almost all day-to-day activities, the recent discussions on the metaverse life have a greater expectation. Furthermore, we are in the era of the fifth industrial revolution, where machines and humans collaborate to maximize productivity with the effective utilization of human intelligence and other resources. Hence, Industry 5.0 in the metaverse may have tremendous technological integration for a more immersive experience and enhanced communication. These technological amalgamations are suitable for the present environment and entirely different from the previous perception of virtual technologies. This work presents a comprehensive review of the applications of the metaverse in Industry 5.0 (so-called industrial metaverse). In particular, we first provide a preliminary to the metaverse and industry 5.0 and discuss key enabling technologies of the industrial metaverse, including virtual and augmented reality, 3D modeling, artificial intelligence, edge computing, digital twin, blockchain, and 6G communication networks. This work then explores diverse metaverse applications in Industry 5.0 vertical domains like Society 5.0, agriculture, supply chain management, healthcare, education, and transportation. A number of research projects are presented to showcase the conceptualization and implementation of the industrial metaverse. Furthermore, various challenges in realizing the industrial metaverse, feasible solutions, and future directions for further research have been presented.
B. Prabadevi, N. Deepa, Nancy Victor, Thippa Reddy Gadekallu, Praveen Kumar Reddy Maddikunta, Gokul Yenduri, Wei Wang, Quoc Viet Pham, Thien Huynh-The, Madhusanka Liyanage
2023-07-31T07:21:36Z
http://arxiv.org/abs/2308.02677v1
# Metaverse for Industry 5.0 in NextG Communications: Potential Applications and Future Challenges ###### Abstract With the advent of new technologies and endeavors for automation in almost all day-to-day activities, the recent discussions on the metaverse life have a greater expectation. Furthermore, we are in the era of the fifth industrial revolution, where machines and humans collaborate to maximize productivity with the effective utilization of human intelligence and other resources. Hence, Industry 5.0 in the metaverse may have tremendous technological integration for a more immersive experience and enhanced communication.These technological amalgamations are suitable for the present environment and entirely different from the previous perception of virtual technologies. This work presents a comprehensive review of the applications of the metaverse in Industry 5.0 (so-called industrial metaverse). In particular, we first provide a preliminary to the metaverse and industry 5.0 and discuss key enabling technologies of the industrial metaverse, including virtual and augmented reality, 3D modeling, artificial intelligence, edge computing, digital twin, blockchain, and 6G communication networks. This work then explores diverse metaverse applications in Industry 5.0 vertical domains like Society 5.0, agriculture, supply chain management, healthcare, education, and transportation. A number of research projects are presented to showcase the conceptualization and implementation of the industrial metaverse. Furthermore, various challenges in realizing the industrial metaverse, feasible solutions, and future directions for further research have been presented. Industry 5.0, the metaverse, the industrial metaverse, Virtual Reality, Augmented Reality, Virtual world. ## I Introduction The current industrial revolution, Industry 4.0, revolutionized manufacturing and allied sectors by bringing disruptive technologies such as cognitive computing, artificial intelligence (AI), cloud computing, and cyber-physical systems (CPS) to the forefront of manufacturing. Industry 4.0 allowed the machines to be intelligent, communicate with each other, and do much of the production work in factories, giving us the term "smart factories" [1]. "Mass personalization" is enabled by Industry 4.0, where customers can personalize the products they want to purchase online through mass personalization techniques. Mass personalization is realized through the Internet connectivity between the supply chains, robots involved in the manufacturing process, dealership ordering systems, and the customers [2]. While Industry 4.0 focused on the connectivity of CPS, the upcoming Industry 5.0 revolves around the relationship between man and machine. Industry 5.0 promises to integrate the creativity of humans with the precision of robots. The machinery enabled by digital technologies combined with the cognitive intelligence of humans is expected to enhance the communication and also increase the speed of production/manufacturing Table II represents the list of acronyms used in this work. As many Industry 5.0 applications are mission-critical and require real-time decisions, a platform that helps human experts in having access to the immersive experience of the situation before making a decision is mandated. It has immense potential in minimizing the losses in property and lives and provides customers better products. The metaverse is a perfect fit to bridge this gap in potential Industry 5.0 applications. 
The metaverse has been in the limelight recently with the claims from Microsoft and Facebook. The metaverse combines several technologies, such as video, augmented reality (AR), and virtual reality (VR), in which users can work, play with friends, and stay connected with their friends virtually through conferences, virtual trips, and concerts. The metaverse's expansion will allow humans to co-exist in a hyper-real alternate world [3]. Facebook is investing heavily in this technology as it envisions and foresees a virtual world in which digital avatars are connected through entertainment, travel, or work using VR headsets. Facebook believes that the metaverse could replace the Internet. The metaverse is expected to make the Internet medium more embodied and immersive, where people not just look at the Internet but also would experience it [4]. With its exciting features, the metaverse has immense potential to take the upcoming and futuristic Industry 5.0 to a new level. In the Metaverse, virtual replicas of the products in manufacturing can be created in the digital world, where experts can see the progress in manufacturing virtually, patients can be treated in the digital world by doctors virtually, and people can meet their peers and play exciting video games in the digital world, organizations can track the products throughout their supply chain life-cycle in the digital world, the governments can plan the infrastructure in the smart city [5, 6] by visualizing the smart city projects in the digital world and also they can be better equipped to deal and respond to the natural disasters, people can create digital assets, experience the products and also can do the shopping by experiencing the immersive technology and the list goes on. In this study, we aim to present a comprehensive review of the applications of the metaverse in the realization of the true potential of Industry 5.0. Due to the immense potential of Industry 5.0 and the metaverse in revolutionizing the industry and people's lifestyles, several researchers recently published surveys on both these technologies. In [7], the authors considered some of significant enabling technologies for Industry 5.0, such as the Internet of Things (IoT), Artificial Intelligence (AI), Augmented Reality (AR), Virtual Reality (VR), Big Data Analytics, Edge Computing, and 5G. The authors also presented industry 5.0 applications such as healthcare, Supply Chain Management (SCM), Smart Education, Cloud Manufacturing. The study in [8] presented a survey on applications of Industry 5.0 for COVID-19. The authors reviewed how humans and robots work together to perform jobs like surgery, treatment, and monitoring patients. This helps doctors and nurses to provide personalized care for patients. The collaboration among humans and machines in healthcare develop the quality of healthcare and patient outcomes. Another motivating study in [9] discussed key components of Industry 5.0 from a manufacturing perspective. The authors examined AI, robotics, intelligent machines, IoT, and big data analytics as significant elements of Industry 5.0 in the manufacturing industry. These technologies can increase throughput, effectiveness, quality, and cost reduction. The authors in [10] presented a detailed survey on how humans and robots can collaborate. The authors discuss how human-robot collaboration has the potential to benefit businesses and employees. 
The authors highlight some organizational challenges such as safety, cost, and change management, as well as human employee issues such as job security, trust, and social interaction. It is necessary to address all these issues so that organizations can achieve the benefits of human-robot collaboration. Another interesting study in [11] discussed how Industry 5.0 could impact education in engineering courses. According to the authors, the current engineering curriculum is insufficient to prepare engineers for the challenges of Industry 5.0. They propose four ways to improve engineering education: prioritize lifetime learning, prioritize transdisciplinary education, emphasize soft skills, and use active learning approaches. Following these ideas in engineering education prepares students for the challenges of Industry 5.0. The study in [12] presented a detailed discussion on how Industry 5.0 can revolutionize the manufacturing sector in the pharmaceutical industry. The authors identify several challenges that pharmaceutical companies must overcome in order to fully implement Industry 5.0. The authors provide solutions for pharmaceutical companies to overcome the challenges of meeting Industry 5.0 standards. In another interesting study, the authors in [13] presented a comprehensive survey on human-centric manufacturing in Industry 5.0. The authors highlight the importance of humans in the manufacturing process. Machines can help to automate processes, but they cannot replace the human touch. Humans are still required to make decisions, solve problems, and interact with customers. Human-centric manufacturing places humans at the center of manufacturing decision-making. Multiple technologies are used in this approach to improve the work environment and make it more efficient and productive. On the other hand, the study in [14] presented a comprehensive survey on the factors that support a viable and functional metaverse. The authors discussed how advances in hardware performance, such as faster CPUs and better graphics cards, allow users to have more immersive experiences. Connectivity and internet infrastructure are important for maintaining stable user communication. Advanced computer approaches and algorithms are required to create realistic simulations. User participation, user adoption, and content creation all contribute to the growth of the metaverse. Finally, the authors suggest that addressing regulatory and ethical concerns is important for maintaining a safe and trustworthy virtual environment. The study in [4] presented a detailed survey on the taxonomy, key components, potential applications, and open challenges of the metaverse. Even though several researchers presented review papers on Industry 5.0 and the metaverse separately, very few reviews exist on the fusion of these two exciting technologies. Table I explains the summary of related reviews on Industry 5.0 and the metaverse. The above observations have motivated us to conduct this comprehensive review on the applications of the metaverse for Industry 5.0. The main contributions of this study are: * We provide the definitions of the metaverse and enabling technologies of the metaverse and Industry 5.0. * The potential applications of the metaverse in several Industry 5.0 applications are presented. * Some of the key research and industry projects related to the applications of the metaverse in several Industry 5.0 verticals are discussed. 
* Several challenges that may arise in the fusion of the metaverse with Industry 5.0 applications are discussed. We also provide future research opportunities that drive researchers and industry towards future work in this interesting fusion.

The rest of the paper is organized as follows. Some of the important definitions and key enabling technologies of the metaverse and Industry 5.0, along with the motivation for the fusion of the metaverse with Industry 5.0, are discussed in Section 2. The potential applications of the metaverse in different verticals of Industry 5.0 are presented in Section 3. Section 4 discusses some of the important research and industry projects that are focused on the fusion of the metaverse and Industry 5.0. The key challenges, open issues, and future research directions towards the fusion of the metaverse and Industry 5.0 are discussed in Section 5. Finally, we conclude our study in Section 6.

## II Preliminaries of the Industrial Metaverse

This section presents a brief overview of the metaverse and enabling technologies of the industrial metaverse.

### _The Metaverse_

Neal Stephenson coined the term "metaverse" in his 1992 novel Snow Crash [4]. However, it attracted huge attention in October 2021, when Facebook officially changed its name to Meta [16, 17]. A fully functioning persistent metaverse does not yet exist. However, there are metaverse-like platforms such as Roblox, Decentraland, Axie Infinity, Illuvium, The Sandbox, Fortnite, and Second Life [15]. The metaverse is a shared virtual environment for multiple users that combines physical reality with the digital virtual world [18]. It is based on the fusion of cutting-edge technologies that enable multidimensional interactions. Hence, the metaverse is a platform of interconnected environments. It allows users to communicate with each other in real time and interact with digital artifacts. The metaverse will significantly impact society, agriculture, education, healthcare, and other sectors. AI, IoT, blockchain, digital twins, and AR will all be able to reach their full potential in enabling the metaverse.

### _Definitions_

**Definition 1:** "The metaverse is a digital reality that combines aspects of social media, online gaming, AR, VR, and cryptocurrencies to allow users to interact virtually [19]."

**Definition 2:** "The metaverse is a network of 3D virtual worlds focused on social connection. In futurism and science fiction, the term is often described as a hypothetical iteration of the Internet as a single, universal virtual world facilitated by virtual and augmented reality headsets [20]."

**Definition 3:** "The word "metaverse" is a portmanteau of the prefix "meta" (meaning beyond) and "universe"; the term is typically used to describe the concept of a future generation of the internet, made up of persistent, shared, 3D virtual spaces linked into a perceived virtual universe [21]."

**Definition 4**: "The metaverse is a persistent and immersive simulated world experienced in the first person who shares a strong sense of mutual presence. It can be fully virtual (i.e., a virtual metaverse), or it can exist as layers of virtual content overlaid on the real world (i.e., an augmented metaverse) [22]."

**Definition 5**: "The metaverse: a persistent, live digital universe that affords individuals a sense of agency, social presence, and shared spatial awareness, along with the ability to participate in an extensive virtual economy with profound societal impact [23]."
**Definition 6**: "An image of virtual everything: You attend work meetings as an avatar using the Quest VR headset and use a device on your wrist to secretly text friends [24]." **Definition 7**: "The metaverse is the sum of virtual and augmented realities concentrated on a super long "Street" through which people walk as avatars and can access using goggles and plugging into terminals [24]." **Definition 8**: "The convergence of virtually-enhanced physical reality and a physically persistent virtual space [25]." **Definition 9**: "The metaverse is an expansive network of persistent, real-time rendered 3D worlds and simulations [26]." ### _Key Enabling Technologies of the Industrial Metaverse_ The metaverse is a virtual world where users can socialise, do business, and play video games. As more individuals participate, the metaverse's popularity will grow exponentially. In addition to Meta, Microsoft, and Apple, other technology companies have begun investing in the metaverse. New users are attracted by cryptocurrencies, NFTs, and play-to-earn games. The metaverse is significant as it represents the future of social media. The metaverse is predicted to become a worldwide phenomenon because of its novel 3D world, socialising, and gaming capabilities. The realization of the industrial metaverse is feasible by the key enabling technologies depicted in Fig. 1. In this section, the key enabling technologies that can help the realization of the potential of the metaverse in Industry 5.0 are presented. #### Iii-C1 Virtual Reality VR is a 3D computer-rendered virtual environment. VR makes individuals feel immersed with surrounding scenes and objects as if one of the characters of the scene appears to be real [27]. The VR headset or helmet will be used to perceive the simulated environment allowing users to interact with the virtual objects through user interaction techniques such as head tracking or tangible controllers. Since it is separated from physical reality the users with the gadgets need to be more focused in the virtual environment. The recent advancements in VR led to the basis for the innovation of the metaverse. VR and the metaverse are not the same. However, VR is an element of the metaverse, which provides a virtual space for implanting multiple users by enabling interactions simultaneously [28]. In Industry 5.0, the metaverse will deliver humans the best virtual experiences. Industry 5.0 enables the collaboration between machines and humans in developing customized products. The metaverse with the assistance of VR will remove the barriers between reality and the virtual world [29]. The metaverse will allow the user to personalize the products using VR. Industry 5.0 with the blend of advanced technologies or tools like VR can produce efficient products with all specifications of the customer and also contribute to the business by incorporating smart automation, creativity, and problem-solving skills of the human/robot partners. #### Iii-C2 3D Modeling 3D modeling is a process of creating a 3D representation of any object or surface, which helps in creating 3D virtual environments. Every physical element will have a digitized aspect to it. Converting it into 3D elements will improve the essence of the representing real world in a virtual environment [30]. The metaverse is a shared virtual 3D environment which is interactive, immersive, and collaborative. 
3D modeling is a key component of the metaverse which helps in creating 3D avatars and 3D spaces enabling users to interact with each other or to perform operations in a virtual world. The metaverse is totally dependent on 3D captures and visualization. Computing devices need to understand the visual information of the user activities and their surroundings which helps in building a realistic 3D Virtual environment in the metaverse. Based on the activities of the user, automatic reconstruction of the 3D virtual environment will also take place without interrupting the operations, using efficient body and pose tracking algorithms in the metaverse. In Industry 5.0, it is essential to implement new methods to satisfy and understand customer needs and then into products and services. The recent advancement in 3D modeling will help in visualizing the transformation of the raw materials into finished goods. The metaverse with modern technologies will lead to mass customization of products by creating a relationship between humans and machines in Industry 5.0 [31]. #### Iii-C3 Artificial Intelligence Recently, several organizations started using AI for their business operations. AI is the ability of a computer or computer-controlled robots to perform the tasks like humans [32]. Users can use AI for decision-making or automation process. AI will assist the metaverse by including content analysis, supervised speech processing, and computer vision [33, 34]. AI also helps in transforming the role of entities from the physical world to the virtual environment automatically. The AI can help the metaverse in different ways. 1. AI can analyze the user images to create more realistic avatars with a variety of facial expressions, emotions, hairstyles, features brought on by aging, etc. 2. For creating AI-enabled nonplaying characters i.e., digital humans, to respond to the actions of avatars in a VR world. 3. Natural language understanding (NLU) and natural language generating (NLG) capabilities of AI enable seamless and multilingual communication in the metaverse. 4. Using AI techniques, the metaverse can be extended without the intervention of humans. AI can also assist in human-computer interactions. Automation and personalization of various services will help Industry 5.0 to enhance the user's experience. Mass customization of the products increases productivity by improving the efficiency of the collaborators between humans and machines using AI-based methods, such as machine learning, deep learning, convolutional neural networks, recurrent neural networks, reinforcement learning, generative adversarial networks, stacked autoencoders, graph neural networks, and meta-heuristic algorithms. AI-based techniques, provide a basic foundation for Industry 5.0 that can help in understanding and enhancing the requirements by enabling the concepts of Industry 5.0. #### Iii-A4 Augmented Reality AR is a technology that provides an interactive experience of a real-world environment using computer-generated information. Superimposing virtual objects into a real world based on the user's perspective will enhance the effectiveness of the users' interaction [35]. AR provides more realistic solutions by utilizing minimal hardware and also provides a solid foundation for the development of the metaverse. AR object detection techniques will help to build an efficient immersive 3D environment in the metaverse. 
Along with the existing technologies, the use of modern technologies such as blockchain in the metaverse will change the way users interact with the Internet by launching virtual ecosystems for various applications like virtual office spaces. The metaverse also opens new opportunities for tourism, education, entertainment, retail, engineering, and many more industries [36]. In Industry 5.0, the main focus is on mass customization and ultra-personalization, which requires the blend of the latest technologies like AR to establish collaboration between cognitive systems, robots, and humans [11]. AR connects the virtual world and the physical world by overlaying data on the physical world. AR in Industry 5.0 will guide technicians or workers in machine maintenance, i.e., performing service in real time, and the digital management of the workspace can be achieved by maintaining harmony between the business and manufacturing processes.

Fig. 1: The key enabling technologies of the industrial metaverse.

#### Iii-A5 Blockchain

In 2008, Satoshi Nakamoto published a white paper that served as the foundation for the blockchain concept. A blockchain, also known as a distributed ledger, is made up of a series of connected blocks, each of which is linked to the preceding one using the hash value of its header [37]. A block also contains a timestamp, nonce, and transaction data in addition to the cryptographic hash. All nodes must agree on the same consensus process in order to generate and validate new blocks of data. The data exchange in the metaverse is greatly aided by the advanced encoding information system of the blockchain [38]. The privacy of sensitive information is protected by the blockchain's authentication, access control, and consensus methods [39]. Individuals and organizations can verify all transactions due to the detailed audit trail provided by the blockchain. The quality of data in the metaverse will improve as a result of integrating blockchain with the metaverse.

The interaction between users in several applications of Industry 5.0 will lead to the exchange of sensitive information [40]. Multiple parties may also be involved in this transit. As a consequence, users need to have greater control over their data in Industry 5.0, so that their sensitive information does not leak into the wrong hands. In this case, Industry 5.0 will effectively deliver its services with the help of the metaverse. The metaverse will safeguard the integrity and transparency of information through transactions in a blockchain. This also ensures users' data privacy in Industry 5.0. Data stored in different applications of Industry 5.0 is very important, and attackers will try to exploit this storage. If the stored data is tampered with, it will directly affect the outcomes of the applications, which may lead to unnecessary complications. On the other hand, the metaverse will allow users to interact with organizations virtually and allow them to customize products [41]. The blockchain will protect this sensitive data with its consensus mechanism, which will only allow the modification of stored data based on the agreement of all the participating nodes in the blockchain [42]. This makes data storage in Industry 5.0 resistant to attacks. Man and machine collaboration requires secure data transfer across various applications of Industry 5.0.
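Before the smart-manufacturing example that follows, the chained-block structure just described can be made concrete with a minimal, illustrative Python sketch: blocks carry a timestamp, a nonce, and transaction data, and each block is linked to the preceding one through the hash of its header. This is only a toy model for exposition; consensus, networking, and access control, which real blockchains require, are deliberately omitted.

```python
import hashlib
import json
import time

class Block:
    """A minimal block: header fields plus transaction data."""
    def __init__(self, index, transactions, previous_hash, nonce=0):
        self.index = index
        self.timestamp = time.time()
        self.transactions = transactions      # e.g., customized product designs
        self.previous_hash = previous_hash    # link to the preceding block's header hash
        self.nonce = nonce

    def header_hash(self):
        """Cryptographic hash over the block header and payload."""
        header = json.dumps({
            "index": self.index,
            "timestamp": self.timestamp,
            "transactions": self.transactions,
            "previous_hash": self.previous_hash,
            "nonce": self.nonce,
        }, sort_keys=True).encode()
        return hashlib.sha256(header).hexdigest()

class Ledger:
    """A toy copy of the distributed ledger held by a single node."""
    def __init__(self):
        self.chain = [Block(0, ["genesis"], "0" * 64)]

    def append(self, transactions):
        block = Block(len(self.chain), transactions, self.chain[-1].header_hash())
        self.chain.append(block)

    def is_untampered(self):
        """Any modification of stored data breaks the hash links."""
        return all(self.chain[i].previous_hash == self.chain[i - 1].header_hash()
                   for i in range(1, len(self.chain)))

ledger = Ledger()
ledger.append([{"user": "customer_01", "design_id": "D-42"}])
print(ledger.is_untampered())  # True; altering any stored field makes this False
```

Tampering with any field of an earlier block changes its recomputed header hash and breaks the link to the next block, which is the property the discussion above relies on when it argues that blockchain-backed storage in Industry 5.0 resists modification made without the agreement of the participating nodes.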
For example, in smart manufacturing, the user may require to customize a product according to his choice and tries to send the data to the manufacturer. There is a chance that the data related to the customized design can be modified by the intruder in this process of data exchange. In this case, the metaverse will provide a virtual space for the user to create a design of their choice and will also share the same with the organization with the help of the advanced encoding system provided by the blockchain, which makes Industry 5.0 data exchange secure. The integration of blockchain and the metaverse will enable Industry 5.0 and its users with data privacy for their shared information, trust in the data stored, and also provide reliability in data exchange. #### Iii-C6 6g 6G is the sixth-generation standard for wireless communications technology that is currently in development. It is the successor to 5G and will be significantly faster. 6G networks will be even more heterogeneous than their predecessors and support VR/AR, real-time communications, ubiquitous intelligence, and the IoT [43]. Networks and communications are integral to everything from large-scale computation to enabling shared experiences among people. The metaverse will be greatly benefited by 6G networks. 6G has the advantages of speed, low latency, and low power consumption [44]. This will help interconnect objects in the real world with the metaverse. 6G will allow VR to reach its full potential [45, 46]. This will expand cooperation between man and machine by connecting the physical world with the virtual world, establishing the foundation for the metaverse. Numerous applications within Industry 5.0 will undergo a transformation as a result of 6G and the metaverse integration. Through the use of cutting-edge technologies like XR applications, the metaverse enables customers to do online shopping, professionals to attend meetings in their desired avatar, and will also allow students to learn about historical places virtually, with uninterruptible services provided by 6G [47]. The metaverse will enable high-quality services and experiences for users of Industry 5.0, where users can parallelly customize a product while interacting with the people of that particular organization. 6G will enable the metaverse with latency-free service where industry personnel can use holographic communication to discuss effective strategies with their peers in developing products. The integration of 6G and the metaverse will enable Industry 5.0 and its users to use high-quality services and experiences, parallel working capabilities, and latency-free communications. #### Iii-C7 Edge Computing Client-server communication is accelerated with the help of edge computing, resulting in reduced latency and less consumption of bandwidth [48, 49, 50]. Due to the distributed structure of edge networks, it is difficult to attack. If a breach occurs in a minor section of the network, the rest of the network will not be exposed. The edge computing architecture distributes processing tasks across the network, making it more resilient than other centralized systems [51]. Edge computing will aid in better synchronization of the physical world and the metaverse, as well as in the efficient transmitting of immersive virtual experiences in the metaverse [52]. 
Sensor data is critical in making real-time decisions, and edge computing will contribute to it by supplying uninterrupted and faster data services, allowing machines to function more efficiently in collaboration with humans [53]. In the metaverse, edge computing will also ensure reliable resource allocation, which will assist humans in making better decisions based on the outcomes of AI models. Industry 5.0 will greatly benefit from the integration of the metaverse and edge computing. The AI and XR technologies, which are supported by the metaverse, will help Industry 5.0 users make effective decisions, customize designs, and perform tasks from a remote location in real-time. The integration of edge computing and the metaverse will enable Industry 5.0 and its users to make effective decisions, customize designs, and perform tasks from a remote location. #### Iii-C8 Digital Twins A digital twin is a digital representation of an entity or system in the physical world. A digital twin consists virtual object or model that represents a real-world object, process, organization, person, or other abstraction [54, 55, 56]. In the metaverse, data from various digital twins is pooled for a holistic view of real-world entities and their associated operations. In Industry 5.0, digital twins are utilized by the maintenance sector to aid humans in better understanding of equipment and provide a thorough picture of various parts or components. A holistic view of machines helps organizations save time on machine maintenance by reducing the number of repairs and making it easier for humans to fix machines when they break down. Humans will have the necessary information about the machines in the metaverse with the help of digital twins, which will allow them to make better decisions in Industry 5.0. ### _Motivation behind the Integration of the Metaverse with Industry 5.0_ Industry 4.0 has led the manufacturing industries to focus on utilizing AI/ML algorithms to make effective predictive maintenance, production, and quality decisions. Furthermore, the advent of Industrial Information Integration Engineering integrates various methods from relevant disciplines like ICTs into industrial sectors to enable better industrial processes [57]. Chen and Yong have conducted a systematic review from 2006 to 2015 on how Industrial Information Integration Engineering effectively adopts different perspectives of various domains and usage of business process management, service-oriented architectures, and applications in the enterprise system and research opportunities [58]. They have also presented a systematic literature of industrial information integration for the period of 2016-2019 [59] which emphasizes mainly the integration of IoT, smart grids, CPS, and smart manufacturing with Industrial Information Integration Engineering. Various industrial informatics such as enterprise application integration, SOA, and grid computing, their usage in industrial information integration, and challenges were explored for better adoption [60]. The industrial metaverse can automate the entire environment, inclusive of the full supply chain, factory floors, production, retail and physical assets by replicating it digitally for an efficient remote view [61]. The problems at the factory floor or retail can be better viewed without physical presence. 
The advantage of the metaverse in Industry 4.0 is cost optimization as it provisions virtual prototyping, letting companies to create virtual representations of items, manufacturing facilities, and production, enabling efficient cost optimization without the necessity for physical resources. In Industry 4.0, the metaverse supports the virtual simulation and analysis of designs, hence lowering risks. In Industry 4.0, the metaverse provides a virtual interface for remote monitoring, allowing technicians to diagnose equipment issues in real-time. Industry 5.0 is the next generation of the industrial revolution and promises to integrate human creativity with efficient, intelligent, and precise technology of machines in order to create resource-efficient and user-friendly solutions [35]. Due to this new standard greater efficiency and adaptability can be achieved. Humans and robots can collaborate more efficiently and creatively in Industry 5.0. IoT, AI, digital twins, big data, and robotics will play a significant role in Industry 5.0, which assists humans in developing society, agriculture, education, healthcare, and other sectors [13]. In Industry 5.0, humans and machines work together to develop high-quality, high-speed products. The metaverse integration with Industry 5.0 will attract more customers through virtual factory tours and interactive sessions, which are enabled with the help of XR devices. The metaverse also allows virtual product presentations and aids in mass personalization. Payments can be securely processed between organizations and users through the use of blockchain technologies. AI-powered digital assistants are used in the metaverse which will improve customer service and the real-world customer experience. The benefits of integrating the metaverse in Industry 5.0 over Industry 4.0, which motivated us to perform this study, are shown in Fig. 2. #### Ii-B1 Better Human-Machine Collaboration Machines process information far faster than humans and can conduct essential computations with high accuracy and speed. Collaboration between humans and machines will result in outstanding outcomes in a variety of Industry 5.0 applications. Even the smallest inaccuracy can have catastrophic consequences when working with massive machinery. In this situation, the metaverse and its enabling technologies will provide humans with a virtual arena in which they can simulate tasks using XR applications and analyze the success rate and potential hazards using AI prior to executing the actual work. As a result, the margin for error will be kept to a minimum [62]. #### Ii-B2 Enabling Human in the Loop in Industry 5.0 In Industry 4.0, automated equipment and AI models are employed to develop products. AI models and automated robots may not have the same level of understanding of experience and emotions as humans. In Industry 5.0, by integrating a variety of sensors, the metaverse will enable people to experience and interact with products in ways that were previously impossible with existing technologies [63]. As a result, humans and machines will work together in Industry 5.0, with the help of the metaverse, to improve the quality of the product. For Example, medical students can benefit from the metaverse in their Education 5.0 by visiting a virtual anatomy lab and performing surgery on virtual patients in collaboration with trained professionals. 
#### Ii-B3 Reduction of Product Development Cost

In order to develop a product effectively and with the highest quality, designers need accurate information about the product from the client. The client may not be able to provide accurate information about the product's description and specifications. In this scenario, the organization will have to construct a prototype with the acceptance of the customer. Mass production of the product will then commence if the client is satisfied. This conventional procedure requires a lot of time, manpower, and cost even before the start of large-scale production. Here, the metaverse, with its supporting technologies like XR and digital twins, can help in reducing the stress on real-world systems and assist humans in making better decisions [64]. This strategy achieves maximum creativity and innovation in designing products in Industry 5.0.

#### Ii-B4 Mass Personalization with Human Touch

Traditionally, online customers rely on a product's visuals and description to make their purchase decisions. The customer will only be able to see the product after it has been delivered and will not be able to make any customization. The customer's dissatisfaction with the quality, design, or delay in delivery may prompt the customer to reject the product. This will result in a loss for the organization if it happens with several customers. Customers in the metaverse will be able to see in 3D how products are created, delivered, and sold in Industry 5.0's supply chain. Due to the improved transparency, customers will be able to see exactly how long it will take for products to arrive and how much it will cost to deliver them. Additionally, clients will be able to personalize their products through direct interaction with the machine [65]. If the customer delivers a more detailed design of the product, the error margin will be greatly reduced. This will substantially reduce consumer turnover and product return costs, which will benefit both organizations and customers.

Fig. 2: Motivation for Integration of the Metaverse with Industry 5.0.

## III The Metaverse for Different Verticals of Industry 5.0

The metaverse is a virtual environment that includes computer-generated 3D objects, avatars, and communication devices. Users from remote locations interact through this environment for a specific purpose in real time. It has been used in several domains, from education to entertainment. These domains started exploring the technology for different needs, such as providing actual data in the form of 3D maps that let users experience knowledge which cannot be replicated by any other model. Governments have started using the metaverse to build virtual models of weather conditions and natural calamities integrated with real-world data. This technology has also been applied in various applications of Industry 5.0, which are discussed below. This section explains how the metaverse is used in multiple Industry 5.0 applications.

### _The Metaverse in Society 5.0_

Several prototypes are being developed nowadays to integrate physical hardware devices with virtual worlds. The metaverse can play a significant role in the education sector. The facilities provided by the metaverse for education include research collaboration and meetings to provide various services to students in a virtual campus environment.
This technology can provide a platform for students and faculty to collaborate in a virtual environment for various requirements [66, 67]. Some of the benefits of the metaverse in Society 5.0 include increased communication and collaboration among humans from all over the world via virtual meetings and collaborative workspaces. Through virtual parties, gatherings, and other celebrations, the metaverse has the potential to socialize and connect with others [68]. Society 5.0 has several applications in the metaverse such as robot-assisted exam conduction, an assortment of energy for various purposes, and robotizing the agriculture sector which leads to an increase in production and advancement in sustainable industrial development [69]. Recently Japanese government has proposed a new research policy namely Society 5.0. This is defined as a humanistic society that helps to balance economic development with the intention to solve various social issues by developing a system that can integrate the physical environment with cyberspace. Human civilization is categorized into five phases by the Japanese government. The community of hunting, in which humans lived by hunting animals and gathering plants for their livelihood is called Society 1.0. The second phase is said to be an agarian society, in which the economy of the society is based on agriculture, namely, society 2.0. The third phase is Society 3.0, in which the agarian society is transformed into an industrial society. The fourth phase is society 4.0, in which information has become an important resource, called, information society. In Society 4.0, information is not shared and there is no mutual collaboration among various disciplines. This was solved by the Japanese government with the help of IoT [70], an emerging concept, called, Society 5.0. Society 5.0 aims to solve various social problems in the domain such as food, manufacturing, agriculture, healthcare, disaster management, and education by providing digital solutions with the help of advanced technologies such as AI, robotics, big data, and IoT [71]. One of the remarkable education networks of Japan, namely, KOSEN proposed a sharing system for scientific devices and information using Society 5.0. This education network has more than 50 colleges all over the country. They wanted to share the information and merge their physical infrastructural facilities among their KOSEN colleges. In order to facilitate research collaboration among the physical laboratory areas such as chemistry and material science, they have proposed a remote sharing system for scientific apparatus. Researchers should share the physical experimental equipment in real-time in order to use their skills and share their knowledge in the appropriate domain. In the proposed system, a virtual classroom was created for researchers, in which a general presentation was given to explain the details of the apparatus. Participants share their views through text-based messages and verbally. In order to have a realistic experience, classroom facilities with avatar movement were created to enhance the learning environment. Also, virtual objects representing the scientific equipment were created which resemble the physical objects. The researchers are allowed to use the apparatus remotely and perform the experiments in the metaverse environment [72]. Society 5.0 is viewed as a humanistic society where people can enjoy high sophisticated life. 
Along with technological innovations, there are a few limitations in the implementation of Society 5.0 in the metaverse environment. The implementation task is complex and expensive due to the utilization of VR and AR equipment. The technology is integrated with various IoT devices, which may produce a huge volume of confidential data and may lead to security breaches. Enhanced solutions are required to prevent cyber attacks [73]. Robotization in Society 5.0 may affect the employment of human manpower [74]. Fig. 3 depicts the metaverse in Society 5.0.

Fig. 3: The metaverse in Society 5.0

### _The Metaverse in Agriculture 5.0_

Conventional farming practices are being improved by the latest scalable technological advancements, such as IoT, AI, and machine learning (ML), which help to reduce risks, deliver intelligent decisions, and improve sustainability. Agriculture 5.0 is one of the applications of Industry 5.0, in which smart farms implement precision agriculture and utilize devices that include automated intelligent systems and unmanned vehicles. Agriculture 5.0 also utilizes robots along with AI techniques. Since the conventional farming system faced a shortage of manpower, Agriculture 5.0 came into existence by integrating agricultural robots with AI techniques. The robots can reap crops in huge amounts, significantly faster than human labor. This helps farmers increase crop yield and reduce operational costs [75].

The metaverse in Agriculture 5.0 provides numerous benefits, such as the creation of a virtual environment in which farmers can collaborate with trained professionals from around the world to share expertise and best practices. In Agriculture 5.0, the metaverse plays a vital role in training farmers to carry out harmful or dangerous tasks in a safe environment, for example, operating powerful machinery or handling chemical pesticides. By utilizing the metaverse, farmers in collaboration with machines can acquire the necessary skills and knowledge to reduce the risk of injuries and accidents [76]. The metaverse can simulate the complete development cycle of plants in a virtual environment, which helps users and robots acquire information about plant growth quickly [77, 78]. Users and robots can learn the process of cultivating crops in a virtual farm environment, from seedling to harvesting, so as to understand the various issues in crop cultivation. Users and robots can be trained in the metaverse by recreating a real working environment. Several smart farming helper applications have been developed by automating the entire smart farming process from cultivation to harvesting using AR, VR, and AI. Such a system automates the application of fertilizers and water by detecting the movement of pests and the soil moisture content, and offers functionalities such as plant disease diagnosis, field monitoring, crop yield analysis, and monitoring of water stress and soil erosion. The metaverse will play a vital role in Agriculture 5.0 to increase food production and obtain maximum profit in the forthcoming years. Developing and implementing the metaverse for Agriculture 5.0 is complex and challenging. High-quality, portable metaverse models are required for implementation. Highly skilled training is also required to make use of the metaverse, as it is developed using VR and AR technologies and devices [79]. Fig. 4 depicts the application of the metaverse in Agriculture 5.0.

Fig. 4: The metaverse in Agriculture 5.0
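As a rough illustration of the sensor-driven automation behind such smart-farming helper applications, the following is a minimal, hypothetical Python sketch of a rule that decides whether to irrigate or apply pesticide from soil-moisture and pest-detection readings. The class names, field names, and thresholds are assumptions made purely for illustration and are not taken from any cited system.

```python
from dataclasses import dataclass

@dataclass
class FieldReading:
    """One snapshot from the (virtual or physical) farm sensors."""
    soil_moisture: float   # volumetric water content on an assumed 0.0-1.0 scale
    pests_detected: int    # pest sightings reported by camera/trap analysis

def decide_actions(reading: FieldReading,
                   moisture_threshold: float = 0.25,
                   pest_threshold: int = 5) -> list:
    """Return the actions a helper application could recommend or automate."""
    actions = []
    if reading.soil_moisture < moisture_threshold:
        actions.append("irrigate")          # automated application of water
    if reading.pests_detected >= pest_threshold:
        actions.append("apply_pesticide")   # triggered by detected pest movement
    return actions or ["no_action"]

# Example: a dry plot with noticeable pest activity
print(decide_actions(FieldReading(soil_moisture=0.18, pests_detected=7)))
# -> ['irrigate', 'apply_pesticide']
```

In a metaverse setting, the same rule could first be exercised against the simulated crop-growth cycle described above, letting farmers and robots rehearse interventions virtually before applying them to a real field.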
### _The Metaverse in Supply Chain Management 5.0_

Supply chain management (SCM) helps industries meet requirements and deliver customized products to consumers in a short span of time. The supply chain in the entire industrial process must be streamlined with current enterprise modeling for better adoption of the industrial revolution and other advancements [80]. Digital twin (DT) technology is applied to create a digital copy of SCM processes such as inventories, warehouses, logistics, and assets. The DT helps across the complete lifespan of SCM, from the procurement stage to the development stage, and includes customer locations, suppliers, manufacturers, and factories. The simulation helps the DT acquire data from IoT devices, which AI can use to forecast the hurdles in the various stages of SCM. This helps manufacturers take preventive steps to reduce errors and losses in SCM. Collaborative robots (cobots) also play a vital role in SCM processes. Dangerous tasks such as lifting heavy items, material assembly, transportation, packaging, timely quality checks of products, and finally product delivery to customers can be done by the cobots [7].

The metaverse can improve the effectiveness of SCM, from customer requirements to product development, by enabling human and machine collaboration. Machinery, factory layouts, raw materials, and goods can be represented in virtual form using DTs. Developers can modify the shapes of products and analyze various materials and goods by collaborating with experts all over the world in real time. This technology could help in reducing production time. Manufacturers can share the 3D designs of various parts to identify the best material supplier. Warehouse workers use AR to trace the current position of products in the supply chain. VR is used to form realistic simulations of supply chain situations. For example, an expert driver, with the help of VR, trains warehouse staff how to correctly use a forklift truck. This helps prevent accidents and teaches staff how to use new tools. Machine learning algorithms are used to assess the day-to-day statistics of supply chain activities. Based on these statistics, data analysts apply their knowledge and expertise to reduce costs, improve customer satisfaction, and make optimal decisions. During the production process, virtual factories can be constructed using the metaverse to regulate the production plan and perform simulations in which machines and manpower work together in manufacturing the products. Manufacturers can also construct virtual production lines to train new employees through the metaverse [81]. A virtual warehouse can also be designed to plan the stock layout before it is transformed into a physical warehouse. The data in the physical warehouse can be integrated with the virtual zone so that employees can find the number of goods, the location of the products, and the orders placed so far. Hence, it is evident that the metaverse will enhance SCM in the industrial sectors. The major challenges of the metaverse in SCM are skilled labor, security, and privacy. Fig. 5 depicts the metaverse in SCM.

Fig. 5: The metaverse in Supply Chain Management 5.0

### _The Metaverse in Healthcare 5.0_

The advancements in the healthcare industry ensure humans' physical and mental well-being, which will promote a more significant contribution to a country's economic growth and industrialization [82].
Using AR/VR, telemedicine, AI, data analytics, and IoT technologies, the metaverse Healthcare 5.0 system aims to provide effective and real-time medical services. These technologies, in collaboration with doctors and machines, form an interconnected healthcare system that improves patient care through remote monitoring, improves healthcare services, and strengthens healthcare research and education. Robots can diagnose and treat patients through proper guidance and instructions from doctors. The upsurge of the metaverse has brought numerous prospects in almost all sectors. Some of the advantages of the metaverse in Healthcare 5.0 are as follows:

* Personalized treatments for patients through wearables
* A social communal space for the healthcare community
* Medical IoT assisted with VR and AR for holographic construction with intelligent processing
* Exercise rehabilitation for elderly and disabled patients
* VR-aided therapies for nervous system diseases, anxiety and fear disorders, and post-traumatic stress

Yang et al. considered the metaverse as the medical IoT aided with VR and AR glasses, which is expected to contribute widely to future computing platforms [83]. However, though the medical IoT has solved many problems, like virtually connecting rural doctors with expert doctors worldwide, these expert doctors are not available all the time to assist rural doctors. The expertise is also not always available to supervise research activities for clinical trials, and there is a lack of standards for real-time diagnosis through medical IoT. This paved the way for the metaverse in healthcare through medical IoT assisted with VR and AR, which can overcome these limitations through holographic construction with intelligent processing to holographic emulation [84]. In turn, this will improve the control of virtual-real world interconnection, guaranteeing graded and customized treatment, thereby balancing doctors' interaction on par with internal standards. Furthermore, the healthcare metaverse has been suggested as a good platform for exercise rehabilitation for elderly and disabled patients [85].

Likewise, Liu et al. have highlighted the integration of various technological advancements for the benefit of healthcare stakeholders based on a bibliometric analysis of the healthcare metaverse. They conducted bibliometric research on VR-aided therapy and investigated the major VR applications, namely nervous system diseases, anxiety and fear disorders, post-traumatic stress, and pain management, to attain Health 4.0 [86]. Thus, VR-aided therapies can alleviate physical therapy issues like anxiety and phobias. VR-aided treatments can also give healthcare experts better insights into patients' emotions through the integration of physical therapy and psychotherapy, thereby creating a better rapport between therapists and patients. As a result, VR-aided therapies are effective for the prevailing pandemic-like situation [86] and can be integrated with emerging advances to offer patient-centric customized therapy services. The metaverse has been employed in cardiovascular medicine, where the doctor meets patient avatars for treatment and diagnosis [82]. Furthermore, Chen and Zhang have carried out a bibliometric analysis of the health metaverse [87]. The health metaverse framework they suggest will promote a better imprint on the medical field and social governance.
As a result, the health metaverse has been recommended for remote health monitoring, clinical data monitoring, orthopedic surgery, and patients' physical fitness through a 3D immersive environment [88]. Although metaverse-based healthcare provides efficient healthcare solutions, data security (unauthorized exposure of wearables), data privacy (group and individual privacy), and users' mental health concerns around metaverse adoption are the challenges in its implementation. Another primary concern is scalability; when more doctors, patients, and other healthcare professionals use the system simultaneously, the response time and computational requirements will increase. Fig. 6 depicts the metaverse in healthcare.

Fig. 6: The metaverse in Healthcare 5.0

### _The Metaverse in Education 5.0_

The metaverse has shifted many of our daily interactions to the virtual environment, which has transformed human lifestyles and communities. The metaverse can help in various teaching and learning activities in settings such as classrooms, museums, and libraries. With this technology, students can navigate out of the conventional classroom into a virtual environment [89]. It is very similar to video gaming, which is familiar to students and can draw them closer to the learning process in a very interactive, entertaining, and enjoyable manner [90]. In metaverse-based Education 5.0, simulations can be created in the virtual world with actions performed by avatars to expand imagination and improve collaborative intelligence. Teachers and students can immerse themselves in the virtual learning environment. Students can explore history lessons and the different parts of a vehicle using AR gadgets. Using the gadgets, students can move around and view day-to-day happenings and historical incidents, through which they gain real-time experience and knowledge. When mixed reality is integrated with VR technology, students can experience field trips in which they can physically interact with virtual ancient buildings and monuments. Metaverse Education 5.0 provides personalized learning experiences for students. Medical students can benefit from the metaverse in their education by visiting a virtual anatomy lab, performing surgery on virtual patients, communicating with virtual patients, and practicing clinical skills in collaboration with trained professionals. AI and machine learning techniques can further improve the quality of the smart learning process in the metaverse environment. These technologies can be incorporated into digital avatars, which can answer questions asked by the students and provide a more engaging experience. The education metaverse can create virtual classrooms for a variety of subjects, such as science, English, and history. Students can enter the virtual world projected from their monitors using these gadgets. Teachers and students can easily transition to the virtual space because it simulates the original, comfortable environment for them. Some of the application areas in smart education where the metaverse can be implemented include learning subjects such as medical surgery, aircraft piloting, observing the internal parts of the human body, group educational visits to dangerous forests, and traveling to archaeological locations [91]. The metaverse can further improve the smart education environment by integrating various subjects into a unique virtual education platform for a comprehensive learning experience.
Some of the limitations of the metaverse in smart education include privacy violations concerning the data collected and processed. The freedom in the metaverse may provoke students to engage in unethical activities. Students in the metaverse immerse themselves in the virtual environment, which may cause confusion between the physical and artificial worlds [92]. Fig. 7 depicts the metaverse in smart education.

Fig. 7: The metaverse in Education 5.0

### _The Metaverse in Disaster Management 5.0_

Natural disasters like floods, landslides, droughts, storms, earthquakes, wildfires, extreme temperatures, and volcanic eruptions may cause hazardous consequences and human loss if not prepared for properly. The metaverse for Disaster Management 5.0 provides an interactive and collaborative platform in which humans as well as robots can work together to react to disasters quickly and effectively. AR/VR simulations help in creating realistic disaster exercises wherein trained experts guide the public and staff on how to react effectively to real disasters [93]. In actual disasters, this may assist them in enhancing their reaction speed and decision-making abilities. VR applications like Second Life can incorporate real-life scenarios where avatars are used to represent human actors and respond to events. In combination with trained humans, the metaverse can create a virtual environment where rescue workers from different organizations can collaborate and work together in a real-time crisis [94]. The AR-enabled metaverse can produce 3D maps by placing digital data on real-world images, which helps in estimating the level of destruction and finding survivors. For example, AR can assist in the creation of a 3D map of a destroyed building to help professional rescuers in their search for survivors. The AI-enabled metaverse analyzes sensor and camera data to identify areas in a disaster zone that require human assistance. For example, AI can analyze data from traffic cameras to identify traffic jams caused by waste, helping the government focus on maintenance work. Though VR-based simulations in collaboration with a skilled person can provide better disaster-response training, they still lack hands-on user experience and require more skill to adapt to. However, the metaverse can overcome these limitations by improving the user experience [93]. The primary concern in the deployment of the metaverse for disaster management is scalability (the number of participants in this environment is unpredictable). In addition, human empathy for different deadly scenarios must be handled with proper sensitivity. Fig. 8 depicts the metaverse in disaster management.

Fig. 8: The metaverse in Disaster Management 5.0

### _The Metaverse in Transportation 5.0_

The metaverse can connect the physical and virtual worlds, enabling people to accomplish all their needs in one place without physical presence. Transportation in Industry 5.0 is the biggest vertical and requires enormous adaptation from the traditional way of transportation. The metaverse will change all facets of transportation, such as aircrew transit, public transit, logistics, staff transit, supply chain, and intelligent transport systems. Fig. 9 depicts the metaverse in Transportation 5.0.
The metaverse can bring the following changes to transportation:

* Robot-based mobility in inanimate objects and community spaces, termed Meta-Mobility [95]
* Leisure traveling with reduced travel time, cost, and energy
* Autonomous vehicle fault detection, repair, and anti-theft systems
* Data-driven intelligent transportation
* Safer transport through advanced intelligence and governance

The primary concern in the metaverse is how transportation works. In the metaverse world, people can travel to different places without leaving their current location. In the metaverse, the transportation infrastructure changes, as people will not use traditional forms of transportation, such as cars and airplanes. Instead, they use virtual transportation methods, such as virtual cars and virtual airplanes, using AR/VR technology. AR/VR improves the passenger experience by entertaining passengers with games and videos. The integration of the metaverse with Industry 5.0 allows passengers to choose their virtual vehicles, virtual environments, and virtual locations. Customization improves passenger happiness, comfort, and convenience by adding a personal touch. Virtual reality simulations can help nervous passengers reduce flight anxiety and make their trip more relaxing. Some of the enabling technologies in Industry 5.0, such as AI, IoT, and big data analytics, help in the advancement of smart transportation systems. These enabling technologies support data processing, real-time data collection, and quick decision-making. The metaverse is a digital layer that integrates real-world transportation infrastructure, vehicles, and services in a virtual mode, allowing humans and machines to collaborate. In the metaverse, human-machine collaboration allows for human interaction and intelligent decision-making. The metaverse can be used as a training and simulation platform for autonomous vehicles, which improves transportation efficiency, safety, and user experience. The collaboration of humans and machines has huge potential for shaping the future of transportation in the metaverse.

The metaverse will have an impact on employment and layoffs in Transportation 5.0, particularly for blue-collar employees. The metaverse may lead to changes in employment duties and specifications in Transportation 5.0. For example, autonomous vehicles replace human drivers, decreasing the demand for drivers. It also provides a wide range of opportunities for technical professionals during the development and maintenance of metaverse-enabled transportation. With the help of advanced technologies and specialized skills, technical experts contribute to the development of transportation systems and play an important role in the successful integration of the metaverse in Transportation 5.0. Hyundai Motors released a press note at CES 2022 stating its future vision of "Expanding Human Reach" through robotics and the metaverse to realize humanity's unconstrained liberty of movement [96]. It also revealed the concept of Mobility of Things (MoT), enabling mobility in inanimate objects and community spaces through robotics (the Plug and Drive module). Mobile Eccentric Droid (MobED) and Boston Dynamics' Spot are some of the company's robotic products that can carry virtually anything towards attaining its brand vision of progress for mobility. Furthermore, the metaverse can play a vital role in intelligent transportation systems [97].
They have used two case studies namely Wayray's Metaverse on Wheels (holographic deep reality display in cars) and Nissan's invisible to visible, and discussed various challenges in different aspects of intelligent transportation. For the metaverse to replace today's transportation infrastructure, the economic losses and funds for a new infrastructural setup must be realized. The numerous job loss to people in the transportation sector must be replaced with equivalent job opportunities. In addition, the implementation constraints like security and scalability should be accounted for during the transit of physical goods. Fig. 9: The metaverse in Transportation 5.0 ### _The metaverse in Smart City 5.0_ The smart city is the digitization of modern cities through various Information and Communication Technologies (ICT). Everything is interconnected to share data to make effective decisions in improving the citizens' welfare and effectively imparting government policies. Smart city 5.0 aims at sustainable city management by utilizing modern technologies where human and artificially intelligent agents (robots) will be working collaboratively for optimized city life management to balance all spheres of different city actors harmoniously [98, 99, 100]. Smart cities collates the information from the various sources like cameras, sensors, social media for regular feedback services to the policymakers for the betterment of the services [5, 101]. Smart city 5.0 strives to ensure urban resilience for optimal utilization of resources [102]. Fig. 10 depicts the metaverse in smart city 5.0. The integration of the metaverse in Smart City 5.0 results in a modern urban environment that improves people's quality of life while also promoting environmental sustainability. The metaverse in smart city 5.0 enables citizens, businesses, and government bodies to virtually collaborate, communicate, and share information in collaboration with trained professionals. The integration of the metaverse and digital twins in Smart City 5.0 helps the architects to construct virtual prototypes, model designs, and improve plans before executing them in real time. Human collaboration enables the creation of effective and environmentally friendly city layouts, allowing executives to take efficient decisions and improve the overall urban environment. The metaverse provides virtual tours that let travelers from all over the world to sightsee the city's important places. The metaverse helps in monitoring city traffic, water waste, electricity, and public safety in smart city 5.0 [103]. To encourage tourism, the South Korean government launched a smart tourism city project in Incheon. The project's goal is to use the metaverse to improve people's tourism experiences, giving visitors a more pleasant and personalised experience [104]. Incheon Easy is a smart tourism application offering two metaverse services. ARIncheon provides a real-time tourist experience using AR via the smartphone camera sensor and IncheonCraft, which integrates Minecraft (a sandbox game) to give the tourists an Incheon experience. IncheonCraft is a virtual-based metaverse where the tourists can experience the tour as avatars. At the same time, ARIncheon is a real-based metaverse by providing the historical maps in digital display and offers operational Incheon incentive-based service for engaging the tourists. Lim et al. 
[105] have presented a case study on the metaverse cities like "Metaverse Seoul" with a collaborative edge-based framework for virtual city development in the metaverse. They have recommended that edge intelligence can be leveraged to attain the features of the metaverse with its low-latency communication and faster response [105]. The various products of meta for the smart city include Horizon Home (meta's social media platform), AR calls( holographic and video calls), Virtual gyms, Presence Platform (Meta development kit), Project Cambria (Virtual Headset), and Spark AR ( for meta creators community) [106]. The data privacy and security in sharing users' private data (such as civil services) must be strengthened. Also while experiencing real-time experience, the physical uncertainties must be mapped for a better experience. ### _The Metaverse in Cloud Manufacturing_ The cloud manufacturing (CMfg) is one of the most demanding networked manufacturing paradigms. CMfg is an integration of ICT with advanced manufacturing techniques throughout the manufacturing life cycle processes. CMfg is one of the key technologies of Industry 4.0, enabling the manufacturing with a single click of the user requirements. CMfg has a manufacturing cloud, resource layer, virtual service layer, application layer, interface layer and global service layer. The manufacturing cloud encompasses the different manufacturing service providers, their resources, and services to meet the needs of different clients on a demand basis. This export process in CMfg realizes a multi-agent collaborative environment in the manufacturing resource layer where different manufacturers can share their resources such as 3d printers, robotic arms, machines, design tools, simulation tools, and modeling software. Also, it enables sharing of the manufacturers' capabilities in rendering different services. The core support ensured by the CMfg will assure efficient resource management and effective search of resource needs in addition to manufacturing life cycle services encapsulation, through the virtual service layer. The dedicated collaborative applications are provided by the application layer of CMfg. The interface layer helps the consumers to seek different services like production as a service, experimentation as a service, design as a service, simulation as a service, manufacturing as a service, maintenance as a service, and integration as a service. The global service layer maps the consumer requirements with the multi-agent collaborative environment to offer a predictive and customized user experience. This enables the user to monitor their product throughout its development life cycle. All these are possible through integration technologies like IIoT, DTs, CPS, AI and wireless sensor networks. Some of the common issues reported by the practitioners and reporters of CMfg are variability in production planning with uncertain user demands, vulnerability to security threats, social acceptance, faster multi-agent collaboration, regulation compliance, and huge maintenance overhead [107]. 
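Before turning to how the metaverse assists cloud manufacturing, the layered service mapping described above can be made concrete with a minimal, hypothetical Python sketch of a global service layer matching consumer requests against the "X-as-a-service" catalogue exposed by shared provider resources. All class names, providers, and capabilities below are assumptions made for illustration only and are not part of any cited CMfg platform.

```python
from dataclasses import dataclass, field

# "X-as-a-service" offerings exposed through the CMfg interface layer
SERVICE_CATALOGUE = {
    "design", "simulation", "production", "experimentation",
    "manufacturing", "maintenance", "integration",
}

@dataclass
class ProviderResource:
    """A shared resource in the manufacturing resource layer (e.g., a 3D printer)."""
    name: str
    services: set = field(default_factory=set)   # services this resource can deliver

@dataclass
class ConsumerRequest:
    """A consumer requirement submitted through the interface layer."""
    consumer: str
    required_services: set = field(default_factory=set)

def global_service_layer(request, resources):
    """Map each requested (and catalogued) service to the providers able to deliver it."""
    mapping = {}
    for service in sorted(request.required_services & SERVICE_CATALOGUE):
        mapping[service] = [r.name for r in resources if service in r.services]
    return mapping

# A toy multi-agent collaborative environment
resources = [
    ProviderResource("printer_farm_A", {"production", "manufacturing"}),
    ProviderResource("cad_studio_B", {"design", "simulation"}),
    ProviderResource("service_team_C", {"maintenance", "integration"}),
]
request = ConsumerRequest("consumer_01", {"design", "production", "maintenance"})
print(global_service_layer(request, resources))
# -> {'design': ['cad_studio_B'], 'maintenance': ['service_team_C'], 'production': ['printer_farm_A']}
```

The sketch only mirrors the flow described above: consumer requirements enter through the interface layer, the global service layer resolves them against the shared resource pool of the multi-agent environment, and the resulting assignment is what an avatar-based metaverse front end could then let stakeholders inspect and monitor across the product life cycle.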
The metaverse in cloud manufacturing can assist in the following ways:

* Individual avatars can design and test customized components in the immersive environment
* Multi-stakeholder social communal spaces allow end-users, designers, manufacturers, and operations teams to discuss and collaborate through their avatars for operational efficiency [108]
* Faster and simultaneous design and production
* Forecasting inventory using on-demand sensing AI models
* Improved scalability under uncertain user demand using predictive analytics, thereby reducing lead time
* Low-latency response in data processing through an edge-enabled metaverse platform during periods of high user demand
* Digital twins in the metaverse can reduce quality-control risks

The major issues in incorporating the metaverse with cloud manufacturing would be security threats in hyper-communal connectivity and interoperability among heterogeneous service providers [109]. Furthermore, scalability in terms of CMfg consumers, as well as general metaverse community users, will be another hectic issue. Fig. 11 depicts the metaverse in cloud manufacturing.

Fig. 10: The metaverse in Smart City 5.0

Fig. 11: The metaverse in Cloud Manufacturing

### _The Metaverse in Robotic Automation_ As Industry 5.0 aims to bring the human touch back to the production floor and to development by teaming up humans and collaborative robots, robotic automation is one of the most important components of Industry 5.0. Though cobots depend on human intelligence, they must be automated in such a way as to cope with human commands without any consequences in the new production environment. Here the cobots are intended to do labor-intensive tasks and to ensure consistency with skilled humans' cognitive skills [10]. The metaverse utilizes robotic automation more effectively because of its smart functionalities. In the metaverse environment, human avatars (the human's virtual image) and cobots will collaborate to carry out production tasks. Cobots must be automated and trained to interact with human avatars as well as with real humans. The metaverse in robotic automation offers the following:

* Cobots and avatars collaborate to make metaverse life a reality on the production floor
* The metaverse supports the creation of 3D models of a patient's anatomy, while cobots assist doctors in carrying out an operation, improving patient safety, operation time, and patient satisfaction
* A cobot in collaboration with the metaverse creates a virtual training environment where staff members can practice new skills with no risk of injury
* The metaverse helps in remote maintenance and repairs by allowing professionals to remotely access machines, examine them, and then carry out virtual repairs

The real-time motion mapping of avatars or robots to that of humans is challenging. Spanlang et al. have suggested a technique for controlling the real-time motion streaming of avatars and robots. A case study on the embodiment of a person as an avatar in an immersive environment was presented to demonstrate the effective remapping of joint positions, bone lengths, and rotations of the human skeleton onto its avatar or robot [110]. Hyundai Motors has disclosed its decision on "meta-mobility" through robotic automation in the metaverse. China has introduced its first metaverse robot through the integration of IoT, AR, VR, AI, and other cutting-edge technologies.
The robotic skin by Meta was specially designed using a thin rubbery plastic with magnetic particles to sense touch. This special robotic skin, less than 3 mm thick and combined with AI, was developed in collaboration between Meta (Facebook) and Carnegie Mellon University [111]. The Unity platform, based in San Francisco, has used robotics to train and design the metaverse world [112]. Also, NVIDIA Omniverse and Microsoft have used robotics for connecting the two worlds (real and virtual) within the metaverse, using 3D assets and the creation of an enterprise metaverse, respectively [3]. The major challenges in robotic automation are vague use cases, continual upgrading and maintenance, improper governance, the lack of standard automation processes, attaining convincing expectations, and the need for skilled employees. Fig. 12 depicts the metaverse in robotic automation. The various technical requirements for the different verticals of Industry 5.0 and their coverage in the applications discussed in this section are summarized in Table III. The technical requirements in Table III are:

1. TechR 1 \(\leftarrow\) Computation Power
2. TechR 2 \(\leftarrow\) Memory Management
3. TechR 3 \(\leftarrow\) Scalability
4. TechR 4 \(\leftarrow\) Accessibility
5. TechR 5 \(\leftarrow\) Interoperability
6. TechR 6 \(\leftarrow\) Security and Privacy Issues
7. TechR 7 \(\leftarrow\) Legal Issues
8. TechR 8 \(\leftarrow\) Skilled Professionals
9. TechR 9 \(\leftarrow\) Brain-computer Interfaces
10. TechR 10 \(\leftarrow\) Interdependencies among different societal applications

The technical requirements include interdependencies among socio-economic applications, as all the smart applications or different verticals in Industry 5.0 will work collaboratively. Therefore, rather than interoperability among devices, interdependencies among these applications matter a lot in an industrial metaverse environment. In addition to the generic interfaces used in applications today, the metaverse environment requires interfaces assisting the gaming environment that promote the connection between brain and computer, referred to as brain-machine interfaces [113]. ## IV The Metaverse in Industry 5.0: Research projects This section highlights some of the key research projects related to the metaverse and Industry 5.0. ### _WayRay_ WayRay is a Transportation 5.0 based project that uses the metaverse to advance automotive transportation. Its AR heads-up display technology has the potential to make transportation more efficient, secure, and eco-friendly. The system helps drivers stay informed and alert by projecting crucial information, such as directions and traffic conditions, onto their windscreens. Moreover, the AR technology enables fully immersive experiences for travellers by supplying appropriate contextual data. WayRay's transportation technology keeps drivers safe with predictive warnings and alerts, keeps them productive with dynamic traffic updates, and tracks their fuel efficiency and vehicle emissions. WayRay has partnered with major manufacturers and is taking steps to get its technology to market. With the metaverse, WayRay can make automobiles more secure and efficient, bringing in a new era in transportation [116]. ### _Nikeland_ Nikeland is a project developed by Nike that integrates the metaverse with SCM 5.0. It provides a virtual environment for its users to shop for Nike merchandise, play games, and interact with other users.
Nikeland provides accountability and traceability of virtual objects by using features of blockchain technology. It allows users to virtually try on different varieties of shoes using AR technology, which enhances the buying experience. Nike will also benefit significantly by acquiring data insights about its product design and manufacturing process, which will help it improve its marketing strategies. Nikeland has transformed traditional retail and SCM by providing an engaging and data-driven environment [117]. ### _Minecraft Education Edition_ Minecraft Education Edition was developed by Xbox Game Studios and Mojang Studios mainly for use in educational environments. Minecraft Education Edition is an application that integrates the metaverse and Education 5.0. It offers students an immersive learning environment in which they can explore topics using virtual environments and interactive activities. The platform encourages collaborative learning, personalization, and global access for professionals. Minecraft Education Edition transforms education by encouraging creativity, engaging students, and preparing them for the digital era using cutting-edge technology and gamification [118]. ### _Landian_ Landian is a metaverse-based Agriculture 5.0 project, which allows the purchase of land and digital assets in the metaverse. It provides a platform for farmers to interact, share knowledge, and teach new farming techniques to each other. Landian uses the NEAR protocol, which is based on the Binance blockchain and specifically designed for the metaverse platform. Landian provides farmers with various benefits, including the ability to collaborate and share farming methods virtually. Farmers can also gather relevant educational resources related to farming and can exhibit their agricultural products in virtual marketplaces. These marketplaces also allow farmers to buy and sell agricultural products. Landian shows a huge potential to transform the agricultural sector by integrating the metaverse [119].

Fig. 12: The metaverse in Robotic Automation

### _HealthBlocks_ HealthBlocks is a startup with an aim to address inequality in global healthcare. It is a project which integrates the metaverse with Healthcare 5.0. It uses blockchain technology to provide decentralised and secured access to healthcare services. Users can communicate with doctors from anywhere, access medical information, and obtain doctors' prescriptions in a virtual environment provided by the metaverse. HealthBlocks will help increase the effectiveness and quality of patient treatment by making healthcare available and budget-friendly to everyone. Users will benefit greatly from its simple access to healthcare services from anywhere around the world. HealthBlocks looks like a promising solution that has huge potential to revolutionize the healthcare sector with the help of the metaverse [120]. ## V Challenges and Future directions In this section, we highlight the integration challenges of the metaverse for Industry 5.0 along with possible future directions. ### _Research Challenges_ Automation in everything will almost certainly raise issues such as higher computational requirements, data security, user privacy, health issues, and joblessness from the users' perspective.
But, on the other hand, from the developers' and technology seekers' perspectives, providing more dynamic and responsive human-machine interaction, the lack of proper standards, security vulnerabilities, and the adoption of technological advancements will be the more imperative challenges. This section highlights the research challenges, feasible solutions, benefits, and further directions for future research in implementing the metaverse. Challenges in the metaverse ecosystem and possible benefits of mitigating those challenges are shown in Table IV. #### V-A1 User Interaction The metaverse employs several interactive devices to immerse users into a virtual world. These devices are being developed to fulfill the user requirements. User-interaction devices that help users interact with the virtual world should be portable, lightweight, comfortable, and wearable. The medium of interaction should be transparent in such a way that the technology is invisible to the users, so that they can become absorbed in the virtual environment. Some of the interactive technologies and devices are VR, AR, and MR. VR is an artificially created virtual environment in which users can immerse themselves in the same way as they do in the real world. AR is a technology through which users interact with the virtual world; it depends on wearable devices that help merge digital objects with live video. MR is an advanced technology in which the physical environment performs interactions using the collected digital data. Most of these interactive devices are portable, lightweight, and comfortable. The high cost of the interactive devices is one of the major challenges in adopting virtual technology. Prolonged use of VR headsets can create psychological issues and neck and head tiredness. Apart from these hardware challenges, there are some issues in the quality of the developed metaverse models. Therefore, high-quality VR, AR, and MR gadgets and reliable models can help users interact better [91]. These issues can be avoided through realistic effects in visual rendering, enabling users to touch and feel effects like a ball bouncing or water rippling, although tactile feedback is still difficult [128]. To provide a sustainable interface, holograms and eye lenses can be used alongside head-mounted devices. Furthermore, it has been suggested that in addition to the human players in the virtual world, non-player characters' interaction with the users must be considered. To capture and learn about the different sensory actions of humans, multimodal pretrained learning models can be utilized to learn more about visual language movements [129]. This can help in avoiding the misinterpretation of human emotions by avatars. The integration of the metaverse with Industry 5.0 is challenging since decisions must be made by both humans and machines. Because humans cannot directly touch or feel the physical prototype provided by a machine, they may be misled while making judgements. Furthermore, the interaction between humans and machines must be uninterrupted, and network or bandwidth delays can lead to catastrophic situations. The devices or equipment now used for interaction may not be suitable in all verticals of Industry 5.0 and need customization based on the application requirements. Therefore, creating a perfect virtual replica of real industrial machinery is still a challenge to be addressed.
#### V-A2 Computing Resources One of the most prominent challenges in implementing the metaverse is the convergence of various heterogeneous technologies like IoT, VR, AR, AI, gaming, and so on, while accommodating their varying computational requirements. Also, colossal data accumulation may require higher and faster computing power with ultra-low latency. The metaverse is like a massively multiplayer gaming environment that allows multiple users to participate simultaneously, requiring high-end GPUs and HMDs to render an immersive virtual world balanced with the physical world. So, edge computing with cognitive edge services will be an essential candidate for the metaverse to serve ubiquitously. By leveraging AI, local decisions like positioning an avatar can be made at end devices (like the engine). Furthermore, the expensive foreground rendering (which requires lower latency and fewer graphics) can be done at edge servers instead of cloud servers to reduce end-to-end data latency [121, 130]. The aggregated data can later be communicated to cloud servers. At the same time, the more computationally intensive background rendering can be done at the cloud servers. Consequently, AI models can be utilized for resource allocation under these denser user distributions to improve the Quality of Experience. Therefore, AI-enabled edge computing would help to solve the computation overhead issue [49]. To guarantee an immersive experience in the virtual world comparable to that of the physical world, the metaverse imposes a wider set of requirements, such as reduced motion-to-photon latency, real-time rendering and control, and high-quality visual appearance. Mobile edge computing empowered by 6G will be an essential candidate to serve this requirement [131]. The latency requirements, network load, and computational resource allocation can be effectively managed through the convergence of the metaverse with mobile edge computing, blockchain, 6G technologies, and AI [132]. Mobile edge computing will be an essential part of any grander technological amalgamation for deployment on a larger scale. Thus, the integration of the metaverse with Industry 5.0 requires massive processing power to operate all of the enabling technologies and devices, which is a significant challenge to be addressed. To address the computational needs of the metaverse across multiple Industry 5.0 sectors, high-performance computing systems and massive cloud infrastructures are also required. A network with strong data transmission capabilities is also required for efficient usage of the computational systems. #### V-A3 Security and Privacy In the social virtual world of today's plural Internet with heterogeneous sources, users find the communities matching their preferences, thus preferring to have a single match for their choices. This search for a single online community of preferred choices is one of the main reasons for the emergence of the metaverse. On the other hand, the uniqueness of the metaverse is that it is itself a singleton, aggregating various technologies, services, and goods into a shared and centralized service point. Furthermore, security in the metaverse comes with two different verticals: data security and software security. So, this centralized metaverse may force new users (with peculiar interests) to participate in its environment, and many may behave weirdly to compete with other users (and some may exploit others).
AI singularity can make AIs in the metaverse life unaware of themselves and continuously improve to perform better [122]. The convergence of cyber-physical worlds with exponential technology growth has led to open security and privacy issues. Also, the highly immersive, interconnected, and interoperable environment may allow participants to trade virtual items online as non-fungible tokens (NFTs). These virtual items can be used in all other spaces of the metaverse [122]. This may even create unprecedented security vulnerabilities. Most people today rely on online shopping, leaving a footprint of their desires and helping the social network analyze and predict users' needs, thereby becoming the product of the Internet. In turn, these users and everything about them will be the product of the metaverse, and the meta platform will provide surplus information about all the users to content creators and similar businesses. As a result, users' privacy is in jeopardy in unsuspected ways. For instance, at a particular moment, the metaverse can closely monitor our body movements and brain responses, and can predict where the user will click, on what items, and how much time they will spend [133]. The users' personal information, behavior (physiological characteristics), and interactions are three perspectives of user privacy in the metaverse. Through doxing, users' data are already misused for online shaming [134]. Doxing in this strongly bonded physical-virtual world will offer exceptional opportunities for hackers to exploit the virtual world's immersive experiences and cause harm in the physical world. Like other social engineering attacks, stalking and spying [135] in the virtual environment will be experienced more in the metaverse. Other notable attacks are cyberbullying, shitstorming, video-call bombing, gender harassment (through sexting), and raiding. These forms of denial of service may ruin participants in the metaverse. Data integrity and user authentication are the most critical security issues to be cited when relying on automation and algorithms to maintain a virtual world. Much fabricated content and many unauthenticated users will be replicated in the metaverse. It has been envisaged that software-driven accounts can be fingerprinted or digitally reproduced in the social media network. Furthermore, advanced AI models can create more automated accounts without any algorithm noticing or detecting them. This will raise more fatal security threats in the immersive physical-virtual world environment. Blockchain-based solutions can be incorporated to solve these security, integrity, and user authentication issues [124]. Kwon et al. suggest that a quantum-based metaverse ensures faster and more secure metaverse applications through their proposed case scenario MetaQ, which utilizes quantum kernel ML [136]. As with any technological advancement, security and privacy remain a significant challenge upon the integration of the metaverse with Industry 5.0. The metaverse enables large user engagement across different Industry 5.0 sectors, and users have to share sensitive information with these applications to access the services. Any attack on such sensitive data could compromise user data privacy, which is a problem that must be addressed. One of the fundamental elements of the metaverse is user anonymity, which raises concerns regarding accountability and transparency, potentially affecting the concept of Industry 5.0.
This integration also opens up the possibility of creating new virtual assets depending on the Industry 5.0 sectors, and, in turn, securing them will be a further challenge that must be addressed. #### V-A4 Cyber Syndrome: Social Media Addiction and Cyberbullying There are many emotional challenges faced by the metaverse due to the fundamental technologies, such as VR and AR, on which it is being developed. These technologies influence the behaviors, emotions, and intelligence of the users. The diversion of users' attention in locality-based metaverse applications can end in destructive disasters. The most common health issues related to the virtual technologies are drowsiness, blurred vision, nausea, and motion sickness. In a recent survey, nearly 50% of adults were found to utilize social media in their day-to-day life during the pandemic period. Due to this advanced virtual technology, user obsession with the metaverse is inevitable in the future [137]. Users may depend on these virtual environments to escape from the physical world. The situation may become even more dangerous where AR and VR can redirect users to dangerous activities such as burglary, assault, and other criminal events. The other issues the metaverse developers need to address are mental disorder issues, sexual harassment, and other exploitations in virtual space such as sexual abuse. Unethical behavior may be more dangerous in the metaverse than in the physical world. VR can immerse users in a virtual environment where sexual abuse in the imaginary world can be sensed as a real experience. Sometimes users may face time syndrome, where they are unable to distinguish between the timing of the physical and virtual worlds. When children learn through the metaverse, there is also the possibility that intruders can provide false information to exploit them and engrave misconceptions. The enterprises who develop the metaverse platforms should apply strict security solutions to ensure that the virtual environment will not be attacked by external hackers and internal attackers [4]. A keypad application named the "catch a word" program was developed to safeguard young Indonesians from cyberbullying-type issues in social media [138]. This can be installed on users' smart devices to avoid rudeness in digital media communication. Shielding can be offered through the words or text rendered by the avatars on the scene. But predicting the behavior of an avatar is difficult, as avatars' behavior is diverse and delicate. The detection of malicious activity (bullying) by an avatar can be achieved by considering multiple factors, such as gestures, facial recognition, emotion, and social interaction [139]. The incorporation of the metaverse into Industry 5.0 presents consequential challenges in terms of work addiction. The metaverse provides a platform for experts or humans to interact with machines that is always accessible and available. Because of this seamless integration, individuals will find it difficult to stabilize their personal and professional lives, which consequently may lead to mental imbalance and may make them prone to health disorders. Therefore, better means of addressing these concerns upon industrial metaverse implementation must be found. The integration of the metaverse with Industry 5.0 will result in a highly competitive work culture, which will place enormous strain on employees and may lead to stress and health problems.
#### V-A5 Impact of Human-Computer Interfaces (HCIs) on Human Mental Workload The advancement of technologies and of HCIs with higher computing power has turned our lives into a technology-based ecosystem. The metaverse uses VR, AR, XR, or MR technologies to connect the physical world with the virtual world. The major challenge lies in reducing the human workload required to participate in the metaverse ecosystem, and in analyzing the metaverse environment's usability based on user involvement. The human workload can be determined by weighted perceptions of different factors such as physical demand, mental demand, performance, effort, temporal demand, and frustration [123]. The effect of XR on the six dimensions of human workload has been investigated in [140]. They observed that AR significantly impacts the mental demand and effort dimensions of the human workload. On the other hand, VR does not affect the human workload, as VR interfaces are as natural as current reality. Also, VR interfaces do not utilize more cognitive power of the human mind, and they provide an immersive user experience, thus making users feel the reality. At the same time, AR imposes various perceptual challenges as it receives visual cues from heterogeneous multimodal sources. However, the combination of VR and AR does not increase the user workload requirement, i.e., users need not expend significantly more effort. On the other hand, adult users, patients, and kids may experience a higher workload while using extended realities and require more tolerance of discomfort [141]. The psychological consequences for humans in the virtual world must be revealed in parallel while they engage in meta life. Hence, to mitigate these consequences, the special effects of emotional intelligence have been analysed, and an improved Web-based XR framework has been suggested to improve emotional intelligence in metaverse life [142]. Physiological signal technology can be adapted to learn in depth about human emotions through human body signals in the metaverse ecosystem [126]. Therefore, the metaverse models should create breaks in between the immersive experience so that the user is disengaged from using the gadgets for a specific period of time to avoid such discomforts. Thus the amalgamation of the metaverse into Industry 5.0 presents significant challenges in human mental workload. The metaverse delivers massive volumes of data to specialists, such as statistics, forecasts, and suggestions. This massive amount of information could cause cognitive overload in specialists and in turn decrease overall performance. Therefore, excessive data accumulation will be a hindrance that must be addressed. The metaverse enables people to multitask with machines, but a technological issue or an attack could place a tremendous amount of pressure on humans. #### V-A6 Ethical Issues The AR and VR devices used to access the metaverse environment in Industry 5.0 applications can capture the behaviour of the human brain and grab the intention of the user. A few applications can involve users in gathering personal data and storing it on a permanent storage medium like a blockchain. As it is common practice for most users to accept the privacy policy without reading the full content, to overcome this in the metaverse environment, laws and regulations can be imposed, and ethical restrictions should be defined globally to prevent such problems.
Being a next-generation technology, the virtual environment should frame good moral and ethical standards to preserve a safe metaverse environment. Some of the ethical issues in the metaverse are misuse of the virtual environment, unauthorised access, incorrect information, and copyright and intellectual property violations. The regulations of the metaverse should be enhanced by formulating suitable laws that are upgraded frequently as per industry requirements and safety concerns [4]. Ethical guidelines of conduct in the virtual world (including developers, participants/avatars, and designers) and the real world, based on the principles of generic consistency for universal rights, have been discussed in [143]. They have also suggested an ethical framework for the virtual world covering all virtual agents and their collaboration with real-world agents. The ethical issues concerned with the evolution and usage of gaming applications of the metaverse and Web3 were discussed in [144]. #### V-A7 Standardization The metaverse is a virtual environment associated with the physical world in various dimensions of Industry 5.0 applications. It is a next-generation Internet technology implemented using various concepts such as decentralization, immersion, automation, and proliferation. Several Industry 5.0 applications have started developing these technologies with open-source and standalone systems integrated with audio and visual experiences. In order to avoid facing legal issues with virtual technology, new standards and principles should be framed. The metaverse is a fantasy world where independence and liberty can lead to crimes and misconduct [145]. Proper standards must be in force to avoid negative consequences in the virtual world; otherwise the chances of abusive and criminal activities in the metaverse environment will increase [91]. An open platform should be developed that includes a common collection of devices and methodologies, where the virtual technology can be built with the available standard tools [127]. Some of the regulatory solutions for the metaverse, like standard restrictions on user monitoring, emotional analysis, virtual product promotions, and simulated personas, were discussed in [146]. Furthermore, a forum for metaverse standards ([https://metaverse-standards.org/](https://metaverse-standards.org/)) was formed, where research studies on open standards for the different regulatory requirements of the metaverse ecosystem and on interoperability are being conducted. This forum is constituted by leading standardization organizations like the World Wide Web Consortium, the XR Association, and other industry leaders [147]. ### _Future Directions_ Based on the challenges presented in the previous section, this section provides feasible solutions that can address those challenges. #### V-B1 Meta-blockchain with Sixth-Generation (6G) Networks One of the major concerns in the metaverse ecosystem is security and data privacy, as the ecosystem itself is a virtual social world. All the threats faced by social media users will penetrate even more in this socialized environment. Some of them are cyberbullying and cyber flashing, which are sometimes very dangerous. As users exchange their sensitive data over autonomous networks (possibly over untrusted channels), ensuring trust among the various users of the metaverse is a mandate needed for its existence.
Blockchain, the distributed, immutable, tokenized, and transparent ledger technology, with its capability for intelligent resource management and efficient maintenance of stored transactions, will be an essential candidate [148]. The combination of the metaverse and the blockchain, the meta-blockchain, will ensure security and privacy-preserving transactions. However, the meta-blockchain may incur a higher computational overhead in accommodating the virtual players and heterogeneous technologies. Therefore, 6G, with its higher frequencies, higher capacity, and sub-second latency response, can further boost the performance of the meta-blockchain. Also, since quality of experience and interactions (like holographic communication) are significant factors in the meta-blockchain, 6G will be the viable option [149, 150]. 6G networks can integrate diverse applications through their higher user data rate, terahertz frequency bands, and enabling technologies [151]. Edge computing can help in realizing faster response (0.1 ms RTT latency) and AR holographic support. Therefore, 6G with the meta-blockchain can secure applications like telesurgery, digital twinning, multiplayer games, 3D printing, and many more. Furthermore, the implementation challenges of 6G (such as access control attacks and multiplexing issues) and blockchain (scalability when transactions increase very dynamically, lack of standards and of efficient decentralization systems) must be considered upon their integration. #### V-B2 Federated Learning for the metaverse Federated learning (FL) is a collaborative learning model which trains statistical models using multiple local datasets without exchanging or uploading the raw data to a centralized server. Only the encrypted parameters of the trained local models (and not data samples) are shared, and an aggregated global model is developed at the centralized server. Thus, edge devices like smartphones can collaboratively learn an aggregated global prediction model while keeping the sensitive data in local device storage, which reduces breaches of users' sensitive personal data. Furthermore, this shared aggregated global model is integrated back into the local models. Therefore, FL will ensure continual learning without data breaches due to private data aggregation and will reduce the communication overhead [152], specifically for the larger models of complex scenarios with massive data [153]. Hence, FL with the metaverse can help the ecosystem be safe and serve better. FL has been employed in various applications with massive data collection, like medical institutions, AR, and digital-twin cities. The authors in [154] have proposed an FL-based solution for collaborating smart cities' digital twins, ensuring faster status updates through the digital twins' shared local strategy. Here the DTs share only the parameters of the locally trained model with each other, and the global model is constructed with accumulated insights from the shared local strategies. Furthermore, blockchain can be utilized for securing the aggregated model in FL. FL has been integrated with the blockchain in many applications like IDS, resource trading, intelligent transportation, and resource allocation [155]. Therefore, FL can be employed in the meta-blockchain ecosystem for privacy-preserving data sharing. In the FL-integrated meta-blockchain ecosystem, the metaverse users can train their models using local device data and share only their training parameters with the model creator; a minimal sketch of such a federated averaging loop is given below.
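The following is a minimal, illustrative sketch of the federated averaging idea described above; the linear model, the plain NumPy parameter vectors, the class and function names, and the weighting of clients by their local sample counts are assumptions made for this example rather than details taken from [152, 153, 154].

```python
import numpy as np

class Client:
    """A metaverse end device that keeps its raw data local (hypothetical example)."""

    def __init__(self, features, labels, lr=0.01):
        self.X, self.y, self.lr = features, labels, lr

    def local_update(self, global_params, epochs=5):
        # Train a simple linear model on local data only; only the updated
        # parameter vector (not the data) ever leaves the device.
        w = global_params.copy()
        for _ in range(epochs):
            grad = self.X.T @ (self.X @ w - self.y) / len(self.y)
            w -= self.lr * grad
        return w, len(self.y)

def federated_round(global_params, clients):
    # One communication round: aggregate locally trained parameters,
    # weighted by each client's sample count (federated averaging).
    updates = [c.local_update(global_params) for c in clients]
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    clients = []
    for _ in range(4):  # four devices, each with a private local dataset
        X = rng.normal(size=(50, 2))
        clients.append(Client(X, X @ true_w + 0.1 * rng.normal(size=50)))
    w = np.zeros(2)
    for _ in range(20):  # 20 aggregation rounds at the (edge) server
        w = federated_round(w, clients)
    print("aggregated global parameters:", w)
```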
Sharing only the trained parameters reduces the communication overhead, as the parameters are considerably smaller than the raw data. Also, the model aggregations can be carried out at the edge server to minimize the communication cost with the cloud [105]. #### V-B3 Quantum Computing in the metaverse Quantum computing (QC) applies principles of quantum physics to generate more powerful ways of computing. QC stores and processes data using individual ions, atoms, photons, or electrons, creating a faster and more powerful computer. QC's two principles are quantum entanglement (lack of independence) and superposition (allowing the different possible combinations of zeros and ones simultaneously). For example, QC is built from qubits, allowing the data to exist either as a 1 or a 0, or simultaneously as a 1 and a 0 [156]. However, the complexity lies in designing computers that operate in the world of quantum physics, and QC may be a significant threat to those applications relying on encryption [157]. Therefore, blockchain can be incorporated with QC for secured computing [158]. In addition, quantum technology has been employed in communication for encrypting data transmission, called quantum cryptography. This creates a unique quantum channel for data transmission, thus alleviating eavesdropping and other network attacks. As blockchain is one of the enabling technologies of the metaverse, a quantum blockchain can make computing faster and more efficient. Also, quantum machine learning has been used in several critical applications, and quantum-resistant security (quantum blockchain) solutions will bring the metaverse to life [159]. The metaverse ecosystem, with its more dynamic interactions, can be implemented effectively with these security solutions.

Fig. 13: Challenges, future directions and benefits of the metaverse in Industry 5.0.

#### V-B4 Hyperscale computing for the metaverse Scalability is one of the major concerns in any hyper-automation system where the number of users increases every minute. Hyper-scale computing allows scaling efficiently from a minimal number of servers to thousands of servers to attain massive scalability in a distributed computing environment. Furthermore, hyper-scale computing features horizontal scalability, redundancy, and high throughput, making it best suited for applications expecting enhanced performance, fault tolerance, and high availability [160]. On the other hand, the metaverse, the most substantial transformation of human life, requires greater IT infrastructure facilities. Therefore, more essential edge-computing services help make this transformation possible. The primary requirements for this transformation are hardware support, processing power for the massive computing, and software. Also, this massive computing transformation requires consistency (constant interaction among millions of user devices and servers), rapid response, and higher-bandwidth data transfers, multiple times more than the current processing capacity [161]. Hyper-scale edge data centers, with the capability to scale efficiently to accommodate the massive computing requirements of the metaverse, can be a possible solution. Therefore, the metaverse ecosystem will come to life with the 6G-based meta-blockchain, federated learning, quantum blockchain, and a hyper-scale computing environment. ## VI Conclusion This paper presents an extensive review of metaverse applications in Industry 5.0, i.e., the industrial metaverse.
The metaverse can help humans communicate with machines in a better way. Furthermore, it will help bring the human touch back into production, help reduce the cost of production, and realize mass personalization. In this review, the applications of the metaverse in different vertical domains of Industry 5.0, such as Society 5.0, agriculture, supply chain management, healthcare, smart education, disaster management, transportation, and the smart city, have been extensively discussed. Furthermore, the critical challenges in metaverse implementation, feasible solutions, and further directions for future research are presented. From the future research perspective, several organizations such as Facebook and Neuralink have been concerned with connecting the human nervous system and HCIs on the input and output sides, respectively. Future research can focus on developing integrated chips for better HCIs to realize the potential of the metaverse in Industry 5.0. Blockchain can be utilized for secure data transactions, thereby enabling security in all aspects of the metaverse. Furthermore, federated learning, quantum computing, and hyperscale computing are expected to play a vital role in the future research and development of the industrial metaverse.
2309.15022
Why most papers on filters are really trivial (including this one)
The aim of this note is to show that many papers on various kinds of filters (and related concepts) in (subreducts of) residuated structures are in fact easy consequences of more general results that have been known for a long time.
Paolo Aglianò
2023-09-26T15:44:45Z
http://arxiv.org/abs/2309.15022v1
# Why most papers on filters are really trivial (including this one) ###### Abstract The aim of this note is to show that many papers on various kinds of filters (and related concepts) in (subreducts of) residuated structures are in fact easy consequences of more general results that have been known for a long time. This paper is born out of frustration. Like most of my colleagues I am often asked to review papers submitted to journals in fuzzy logic or abstract algebraic logic; and a large majority of them deals with some kind of particular _filters_ on some particular structure. Of course we all know that (usually) these papers are very weak and mostly useless but they keep appearing, cluttering the field and forcing good people (who would love to do otherwise) to read them at least once and spend some precious time in writing a rejection note (not to mention the Editors who have to deal with this disgrace on a daily basis). This of course is far from being news; already ten years ago a very amusing paper was published on the subject [31] and the description of the phenomenon was so good that we must (shamelessly) borrow it. _"We do not want to increase the amount of papers about particular, artificial types of filters. We want to illuminate the triviality of the theory behind these papers. Proofs of presented general claims are short and clear in contrast to proofs of particular results for concrete special types of filters which are technical and they seem like "math exercises". We also want to provide a tool for reviewers who battle with dozens of papers dealing with unmotivated types of filters."_ In spite of the author's intentions the situation now is worse than 10 years ago; not only has the number of papers about filters increased, introducing more and more preposterous definitions, but this craziness has spilled over the boundary of residuated lattices, involving subreducts or other kinds of derived structures. Just a clarification: we do not mean that every paper dealing with "interesting subsets" of (subreducts of) residuated structures is trivial. However we believe that most of them lack a real mathematical motivation and that a good chunk of the majority of them consists of straightforward corollaries of a general theory that has been available (in respectable journals) for almost 30 years. In conclusion an update is due; we have chosen to treat the argument in the very general setting of universal algebra, in which a substantial theory of filters (or ideals) is already available. We stress that in this paper we will not produce any new mathematics; our aim is rather the opposite, i.e. to show that some "new" mathematics is not new at all. ## 1 What is an ideal? Given an algebra \({\bf A}\), an ideal of \({\bf A}\) is an "interesting subset" of the universe \(A\) that may or may not be a subalgebra of \({\bf A}\); an example of the first kind is a normal subgroup of a group and an example of the second kind is a (two-sided) ideal of a ring. Now defining what "interesting" means is largely a matter of taste; however there is a large consensus among the practitioners of the field that: * an ideal must have a simple algebraic definition; * ideals must be closed under arbitrary intersections, so that a closure operator can be defined in which the ideals are exactly the closed sets; this gives rise to an algebraic lattice whose elements are exactly the ideals; * ideals must convey meaningful information on the structure of the algebra.
The three points above are all satisfied by classical ideals on lattices and of course by ideals on a set \(X\). We have however to be careful here; an ideal on a set \(X\) is an ideal (in the lattice sense) on the Boolean algebra of subsets of \(X\). There is also a significant difference between ideals on lattices and ideals on Boolean algebras; in Boolean algebras an ideal is always the 0-class of a suitable congruence of the algebra (really, of exactly one congruence), while this is not true in general for lattices. As a matter of fact, identifying the class of (lower bounded) lattices in which every ideal is the 0-class of a congruence is a difficult problem which is still unsolved, to the best of our knowledge. Of course the same property is shared by normal subgroups of a group and (two-sided) ideals of a ring (since they are both congruence kernels). The problem of connecting ideals of general algebras to congruence classes has been foreshadowed in [17] but really tackled by A. Ursini in his seminal paper [29]. Later, from the late 1980's to the late 1990's, A. Ursini and the author published a long series of papers on the subject (see for instance [6] and the bibliography therein); the theory developed in those papers will constitute the basis of our investigation. ## 2 Ideals in universal algebra We postulated that an ideal must have a simple algebraic definition; as imprecise as this concept might be, in our context there is a natural path to follow. Given a type (a.k.a. a signature) \(\sigma\) we can consider the \(\sigma\)**-terms** (i.e. the elements of \({\bf T}_{\sigma}(\omega)\), the absolutely free countably generated algebra of type \(\sigma\)); a term is denoted by \(p(x_{1},\ldots,x_{n})\) to emphasize the variables involved and we will use the vector notation \(\vec{x}\) for \(x_{1},\ldots,x_{n}\). Let \(\Gamma\) be a set of \(\sigma\)-terms; we will divide the (finite) set of variables \(z_{1},\ldots,z_{n+m}\) of each term into two subsets \(\{x_{1},\ldots,x_{n}\}\) and \(\{y_{1},\ldots,y_{m}\}\) so that every term in \(\Gamma\) can be expressed as \(p(\vec{x},\vec{y})\) and we allow \(n=0\), while \(m\) must always be at least \(1\). If \({\bf A}\) has type \(\sigma\), a \(\Gamma\)-ideal of \({\bf A}\) is an \(I\subseteq A\) such that for any \(p\in\Gamma\), \(a_{1},\ldots,a_{n}\in A\) and \(b_{1},\ldots,b_{m}\in I\), \(p(\vec{a},\vec{b})\in I\). The following is a simple exercise. **Lemma 2.1**.: _Let \(\sigma\) be any type, \(\Gamma\) a set of \(\sigma\)-terms and \({\bf A}\) an algebra of type \(\sigma\). Then_ 1. _the_ \(\Gamma\)_-ideals of_ \({\bf A}\) _are closed under arbitrary intersections;_ 2. _the_ \(\Gamma\)_-ideal generated by_ \(X\subseteq A\)_, i.e. the intersection of all the_ \(\Gamma\)_-ideals containing_ \(X\)_, is_ \[\operatorname{Id}^{\Gamma}_{\bf A}(X)=\{p(\vec{a},\vec{b}):p\in\Gamma,\ \vec{a}\in A,\ \vec{b}\in X\};\] 3. _the_ \(\Gamma\)_-ideals of_ \({\bf A}\) _form an algebraic lattice_ \(\operatorname{Id}^{\Gamma}({\bf A})\)_._ At this level of generality we cannot say much more; if the type however contains a constant we can get a more focused definition. Let \({\sf V}\) be a variety whose type contains a constant which we will denote by \(0\); a \({\sf V},0\)**-ideal term** in \(y_{1},\ldots,y_{m}\) is a term \(p(\vec{x},\vec{y})\) such that \[{\sf V}\vDash p(\vec{x},0,\ldots,0)\approx 0.\] Let \(ID_{{\sf V},0}\) be the set of all \({\sf V},0\)-ideal terms in \({\sf V}\); a \({\sf V},0\)**-ideal** \(I\) of \({\bf A}\in{\sf V}\) is an \(ID_{{\sf V},0}\)-ideal of \({\bf A}\).
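As a standard illustration (a well-known example, not taken from [29] or [6]), consider the variety \({\sf V}\) of rings, pointed by the additive zero. The terms

\[p_{1}(y_{1},y_{2})=y_{1}+y_{2},\qquad p_{2}(y)=-y,\qquad p_{3}(x,y)=x\cdot y,\qquad p_{4}(x,y)=y\cdot x\]

are all \({\sf V},0\)-ideal terms, since each of them evaluates to \(0\) whenever its \(y\)-variables are set to \(0\); hence every nonempty \({\sf V},0\)-ideal is closed under sums, additive inverses and multiplication by arbitrary ring elements on either side, i.e. it is a two-sided ring ideal in the classical sense.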
If \({\sf V}={\bf V}({\bf A})\) we will simply say that \(I\) is a \(0\)**-ideal** of \({\bf A}\). As before the set \(\operatorname{Id}_{{\sf V},0}({\bf A})\) of \({\sf V},0\)-ideals of \({\bf A}\) and the set \(\operatorname{Id}_{0}({\bf A})\) of \(0\)-ideals of \({\bf A}\) are algebraic lattices and \(\operatorname{Id}_{0}({\bf A})\subseteq\operatorname{Id}_{{\sf V},0}({\bf A})\) (and the inclusion may be strict). It is also evident that for any \(\theta\in\operatorname{Con}({\bf A})\), \(0/\theta\) is a \({\sf V},0\)-ideal of \({\bf A}\): if \(p(\vec{x},\vec{y})\in ID_{{\sf V},0}\), \(\vec{a}\in A\) and \(\vec{b}\in 0/\theta\) then \[p(\vec{a},\vec{b})\;\theta\;p(\vec{a},\vec{0})=0.\] We say that \({\sf V}\) has **normal** \({\sf V},0\)**-ideals** if for all \({\bf A}\in{\sf V}\) and for all \(I\in\operatorname{Id}_{{\sf V},0}({\bf A})\) there is a \(\theta\in\operatorname{Con}({\bf A})\) with \(I=0/\theta\). If \({\sf V}\) has normal \({\sf V},0\)-ideals then of course \(\operatorname{Id}_{0}({\bf A})=\operatorname{Id}_{{\sf V},0}({\bf A})=\{0/ \theta:\theta\in\operatorname{Con}({\bf A})\}\) so we can simply talk about \(0\)-ideals of \({\bf A}\) without specifying the variety. Observe that the variety of pointed (by \(0\)) sets has normal \(0\)-ideals, so we can hardly expect any nice structural theorem for varieties with normal \(0\)-ideals. However something can be said and the interested reader can consult [4] for more information. We say that \(\mathsf{V}\) is \(0\)**-subtractive** or simply **subtractive** [30] if there is a binary term \(s(x,y)\) in the type of \(\mathsf{V}\) such that \[\mathsf{V}\vDash s(x,x)\approx 0\qquad\mathsf{V}\vDash s(x,0)\approx x.\] **Theorem 2.2**.: _For a variety \(\mathsf{V}\) the following are equivalent:_ 1. \(\mathsf{V}\) _is subtractive;_ 2. _for every_ \(\mathbf{A}\in\mathsf{V}\) _and_ \(\theta,\varphi\in\operatorname{Con}(\mathbf{A})\)_,_ \(0/(\theta\vee\varphi)=0/(\theta\circ\varphi)\) _(hence the congruences permute at_ \(0\)_);_ 3. _for every_ \(\mathbf{A}\in\mathsf{V}\) _the mapping_ \(\theta\longrightarrow 0/\theta\) _is a complete and onto lattice homomorphism from_ \(\operatorname{Con}(\mathbf{A})\) _to_ \(\operatorname{Id}_{\mathsf{V},0}(\mathbf{A})\)_._ For the proof, and for even more equivalences, the reader can look at Theorem 2.4 in [3]. As an immediate consequence we have **Corollary 2.3**.: _Let \(\mathsf{V}\) be a subtractive variety; then_ 1. _for every_ \(\mathbf{A}\in\mathsf{V}\)_,_ \(\operatorname{Id}_{\mathsf{V},0}(\mathbf{A})\) _is a modular lattice;_ 2. \(\mathsf{V}\) _has normal ideals._ Let \(\mathsf{V}\) be subtractive, \(\mathbf{A}\in\mathsf{V}\) and \(I\in\operatorname{Id}_{0}(\mathbf{A})\); let \[I^{\delta}=\bigwedge\{\theta\in\operatorname{Con}(\mathbf{A}):0/\theta=I\}\qquad I ^{\varepsilon}=\bigvee\{\theta\in\operatorname{Con}(\mathbf{A}):0/\theta=I\}.\] Then the interval \([I^{\delta},I^{\varepsilon}]\) in \(\operatorname{Con}(\mathbf{A})\) consists of all \(\theta\in\operatorname{Con}(\mathbf{A})\) such that \(I=0/\theta\). The properties of the mapping \(I\longmapsto I^{\varepsilon}\) have been investigated at length in [5] and it turns out (perhaps not surprisingly) that they are connected to abstract algebraic logic. However in this paper we do not need to deal with such intricacies; we simply need to consider the case in which the mapping is good enough to allow a connection between ideals and congruences.
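For orientation, two standard examples of subtraction witnesses (well-known facts, assumed here rather than quoted from [30]): in rings one can take \(s(x,y)=x-y\), and in Boolean algebras pointed by \(0\) one can take \(s(x,y)=x\wedge\neg y\), since

\[x-x=0,\quad x-0=x,\qquad\qquad x\wedge\neg x=0,\quad x\wedge\neg 0=x\wedge 1=x.\]

By contrast, the variety of pointed sets mentioned above is not subtractive: its only binary terms are \(x\), \(y\) and \(0\), and none of them satisfies both identities.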
A subtractive variety \(\mathsf{V}\) is **(finitely) congruential** [5] if there are binary terms \(d_{1},\ldots,d_{n}\) of \(\mathsf{V}\) such that \(\mathsf{V}\vDash d_{i}(x,x)\approx 0\) for \(i=1,\ldots,n\) and for all \(\mathbf{A}\in\mathsf{V}\) and \(I\in\operatorname{Id}_{0}(\mathbf{A})\) \[I^{\varepsilon}=\{(a,b):d_{i}(a,b)\in I\ \text{for}\ i=1,\ldots,n\}.\] **Theorem 2.4**.: _[_5_]_ _For a subtractive variety \(\mathsf{V}\) the following are equivalent:_ 1. \(\mathsf{V}\) _is finitely congruential witness_ \(d_{1},\ldots,d_{n}\)_;_ 2. _the mapping_ \(()^{\varepsilon}\) _is continuous, i.e. for_ \(\mathbf{A}\in\mathsf{V}\) _and every family_ \((I_{\gamma})_{\gamma\in\Gamma}\) _of_ \(0\)_-ideals of_ \(\mathbf{A}\)__ \[\Big{(}\bigcup_{\gamma\in\Gamma}I_{\gamma}\Big{)}^{\varepsilon}=\bigcup_{\gamma\in\Gamma}I_{\gamma}^{\varepsilon};\] 3. _there exist binary terms_ \(d_{1},\ldots,d_{n}\)_, an_ \(n+3\)_-ary term_ \(q\) _of_ \(\mathsf{V}\) _and for each basic operation_ \(f\) _of arity_ \(k\) _and_ \(i=1,\ldots,n\) _a_ \((2+n)k\)_-ary term_ \(r_{i,f}\) _such that_ \[\mathsf{V} \vDash d_{i}(x,x)\approx 0\qquad i=1,\ldots,n\] \[\mathsf{V} \vDash q(x,y,0,0,\ldots,0)\approx 0\] \[\mathsf{V} \vDash q(x,y,y,d_{1}(x,y),\ldots,d_{n}(x,y))\approx x\] \[\mathsf{V} \vDash r_{i,f}(\vec{x},\vec{y},0,\ldots,0)\approx 0\] \[\mathsf{V} \vDash d_{i}(f(\vec{x}),f(\vec{y}))\approx r_{i,f}(\vec{x},\vec{y},d_{1}(x_{1},y_{1}),\ldots,d_{1}(x_ {k},y_{k}),\ldots,d_{n}(x_{1},y_{1}),\ldots,d_{n}(x_{k},y_{k})).\] As a particular instance of being congruential we can consider the case in which the interval \([I^{\delta},I^{\varepsilon}]\) degenerates to a point, i.e. \(0/\theta=0/\varphi\) implies \(\theta=\varphi\). In this case it can be shown that the mapping \(I\longmapsto I^{\varepsilon}\) is in fact an isomorphism and that \(I^{\varepsilon}\) is the unique \(\theta\in\operatorname{Con}(\mathbf{A})\) with \(0/\theta=I\). In this case the variety \(\mathsf{V}\) is called \(0\)**-regular** and we have **Corollary 2.5**.: _[_19_]_ _For a pointed (at \(0\)) variety \(\mathsf{V}\) the following are equivalent:_ 1. \(\mathsf{V}\) _is subtractive and_ \(0\)_-regular;_ 2. \(\mathsf{V}\) _is subtractive and there is an_ \(n\in\mathbb{N}\) _and binary terms_ \(d_{1},\ldots,d_{n}\) _such that_ \[\mathsf{V} \vDash d_{i}(x,x)\approx 0\qquad i=1,\ldots,n\] \[\mathsf{V} \vDash\{d_{i}(x,y)\approx 0:i=1,\ldots,n\}\Rightarrow x\approx y.\] A pointed variety which is subtractive and \(0\)-regular is called **ideal determined** [19]; if \(\mathsf{V}\) is such a variety then for any algebra \(\mathbf{A}\in\mathsf{V}\), \(\operatorname{Id}_{0}(\mathbf{A})\cong\operatorname{Con}(\mathbf{A})\) and hence \(\mathsf{V}\) is congruence modular by Corollary 2.3. Clearly groups, rings, vector spaces, Boolean algebras and many other classical algebras are ideal determined and so are residuated lattices, \(\mathsf{FL}\)-algebras and many of their subreducts. They are also congruence permutable in most cases; however for instance implication algebras are not congruence permutable [21] but it is easy to check that they are ideal determined. There are also non-ideal-determined varieties to which Theorem 2.4 applies, such as the variety of pseudocomplemented semilattices ([5], Example 4.4). Because of Theorem 2.4 a finitely congruential subtractive variety \(\mathsf{V}\) has two features that are the prototype of many papers on filters on residuated structures.
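Before spelling out these two features, it may help to record what the witness terms of Corollary 2.5 look like in two familiar cases (a standard illustration, assumed here rather than quoted from [19]): in groups, pointed by the identity element, the single term \(d(x,y)=x\cdot y^{-1}\) works, since \(d(x,x)\) is the identity and \(d(x,y)\) equal to the identity forces \(x=y\); in any variety of \(\mathsf{FL}_{w}\)-algebras, pointed by \(1\), one may take

\[d_{1}(x,y)=x\backslash y,\qquad d_{2}(x,y)=y\backslash x,\]

since by integrality \(d_{i}(x,x)=1\), and \(d_{1}(x,y)=d_{2}(x,y)=1\) holds exactly when \(x\leq y\) and \(y\leq x\), i.e. when \(x=y\). This is the correspondence that will reappear, phrased in terms of filters, in Section 3.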
Let \(T\) be a set of terms of \(\mathsf{V}\); a \(0\)-ideal \(I\) of \(\mathbf{A}\in\mathsf{V}\) is \(T\)**-special** if for all \(\vec{a}\in A\), \(t(\vec{a})\in I\) for all \(t\in T\). It is obvious that the property of being \(T\)-special is upward hereditary on ideals; it is also obvious that the \(T\)-special ideals form an algebraic lattice. Moreover let \(\mathsf{V}_{T}\) be the subvariety of \(\mathsf{V}\) axiomatized by the equations \(t(\vec{x})\approx 0\) for \(t\in T\). **Lemma 2.6**.: _Let \(\mathsf{V}\) be a finitely congruential subtractive variety, witness \(d_{1},\ldots,d_{n}\). Then for any set \(T\) of terms of \(\mathsf{V}\) and \(\mathbf{A}\in\mathsf{V}\), \(\mathbf{A}/I^{\varepsilon}\in\mathsf{V}_{T}\) if and only if \(I\) is \(T\)-special._ Proof.: Suppose that \(I\) is \(T\)-special; then for all \(\vec{a}\in A\), \(t(\vec{a})\in I\) for all \(t\in T\). As each \(d_{i}(x,y)\) is an ideal term, we get that \(d_{i}(t(\vec{a}),0)\in I\) for \(i=1,\ldots,n\) and therefore \((t(\vec{a}),0)\in I^{\varepsilon}\) for all \(t\in T\). This implies that \(\mathbf{A}/I^{\varepsilon}\in\mathsf{V}_{T}\) as wished. Conversely, suppose that \(\mathbf{A}/I^{\varepsilon}\in\mathsf{V}_{T}\); then for all \(\vec{a}\in A\), \((t(\vec{a}),0)\in I^{\varepsilon}\) for all \(t\in T\). This implies that \(d_{1}(t(\vec{a}),0),\ldots,d_{n}(t(\vec{a}),0)\in I\) for all \(t\in T\); by Theorem 2.4(3) \[t(\vec{a})=q(t(\vec{a}),0,0,d_{1}(t(\vec{a}),0),\ldots,d_{n}(t(\vec{a}),0))\in I\] for all \(t\in T\) and thus \(I\) is \(T\)-special. So in a finitely congruential variety we have a potentially unlimited supply of \(T\)-special ideals; if the variety is also ideal determined, then even more is true. Let \(\mathsf{V}^{\prime}\) be a subvariety of \(\mathsf{V}\) and let \(J\) be a set of equations axiomatizing \(\mathsf{V}^{\prime}\) relative to \(\mathsf{V}\). If we set \[T=\{d_{i}(p,q):i=1,\ldots,n,\;p\approx q\in J\}\] then for any \(\mathbf{A}\in\mathsf{V}\) we have \(\mathbf{A}\vDash t(\vec{x})\approx 0\) for all \(t\in T\) if and only if \(\mathbf{A}\vDash p\approx q\) for all \(p\approx q\in J\). It follows that \(\mathsf{V}^{\prime}=\mathsf{V}_{T}\) and so any subvariety of \(\mathsf{V}\) can be taken as the base for defining some \(T\)-special ideals. The second consequence is the following; let \(\mathsf{V}^{+}\) be a pointed variety such that a class of subreducts \(\mathsf{V}\) of \(\mathsf{V}^{+}\) happens to be a finitely congruential subtractive variety. Certainly \(\mathsf{V}^{+}\) is subtractive as well; if for any "new" operation \(f\) in the type of \(\mathsf{V}^{+}\) we can find terms \(r_{i,f}\) satisfying (3) of Theorem 2.4, then \(\mathsf{V}^{+}\) is finitely congruential as well and the ideals in \(\mathsf{V}^{+}\) are exactly the \(\mathsf{V}\)-ideals that are closed under all the \(r_{i,f}\), where \(f\) is a new operation. A particularly easy case is the one in which the new operation is itself a _pure_ ideal term, i.e. \(f(0,\ldots,0)\approx 0\). ## 3 Variations on \(\mathsf{FL}\)-algebras A **residuated lattice** is an algebra \(\mathbf{A}=\langle A,\vee,\wedge,\cdot,/,\backslash,1\rangle\) where 1. \(\langle A,\vee,\wedge\rangle\) is a lattice; 2. \(\langle A,\cdot,1\rangle\) is a monoid; 3. \(/\) and \(\backslash\) are the left and right residua w.r.t. \(\cdot\), i.e. \(x\cdot y\leq z\) iff \(y\leq x\backslash z\) iff \(x\leq z/y\). Residuated lattices form a variety \(\mathsf{RL}\) and an axiomatization, together with the many equations holding in these very rich structures, can be found in [8].
A residuated lattice is **integral** if it satisfies \(x\leq 1\) and **commutative** if the monoidal operation is commutative. An \(\mathsf{FL}\)-algebra is an algebra \(\mathbf{A}=\langle A,\vee,\wedge,\cdot,/,\backslash,0,1\rangle\) where \(\langle A,\vee,\wedge,\cdot,/,\backslash,1\rangle\) is a residuated lattice and \(0\) is a constant. An \(\mathsf{FL}\)-algebra is * an \(\mathsf{FL}_{w}\)-algebra, if it is integral and satisfies \(0\leq x\), * an \(\mathsf{FL}_{e}\)-algebra, if it is commutative, * an \(\mathsf{FL}_{ew}\)-algebra, if it is both an \(\mathsf{FL}_{w}\)-algebra and an \(\mathsf{FL}_{e}\)-algebra. Residuated lattices are clearly ideal determined, hence so are \(\mathsf{FL}\)-algebras. Let \(\mathbf{A}\) be a residuated lattice and let \(A^{+}=\{a\in A:a\geq 1\}\); a **filter** of \(\mathbf{A}\) is a subset \(F\subseteq A\) such that 1. \(A^{+}\subseteq F\); 2. \(a,b/a\in F\) implies \(b\in F\); 3. \(a,b\in F\) implies \(a\wedge b\in F\). A filter \(F\) is **normal** if 1. \(a\in F\) and \(b\in A\) implies \(b\backslash ab,ba/b\in F\). Clearly in any commutative residuated lattice (or \(\mathsf{FL}_{e}\)-algebra) every filter is a normal filter. It is well-known that there is a one-to-one correspondence (which is in fact a lattice isomorphism) between the normal filters and the congruences of \(\mathbf{A}\) given by the mutually inverse maps \[\theta\longmapsto A^{+}/\theta\qquad\qquad F\longmapsto\theta_{F}=\{(a,b):a/b,b/a\in F\}.\] Now if \(\mathbf{A}\) is integral, then \(A^{+}=\{1\}\) and hence the normal filters are just the \(1\)-ideals; if \(\mathbf{A}\) is not integral then the \(1\)-ideals are not filters but rather the **convex normal subalgebras** of \(\mathbf{A}\). However there is a straightforward way to connect convex normal subalgebras and normal filters in such a way that the results of Section 2 can be transferred easily. We spare the technical details mainly because all the examples of "papers on filters" that we will introduce deal in fact with normal filters in integral residuated lattices (or \(\mathsf{FL}_{w}\)-algebras). Indeed in the majority of cases commutativity of the monoid operation is also present. We recall that subvarieties of \(\mathsf{FL}_{ew}\) have been studied extensively in the literature; examples are the variety of \(\mathsf{MV}\)-algebras [28], \(\mathsf{BL}\)-algebras [2] and \(\mathsf{MTL}\)-algebras [16]. From our considerations it follows that any theory of "special" normal filters in integral residuated lattices (or \(\mathsf{FL}_{w}\)-algebras) can be regarded as a special case of the theory of special ideals in pointed varieties. So if one wants to create a (bad) paper on filters of, say, \(\mathsf{BL}\)- or \(\mathsf{FL}_{ew}\)-algebras, all he has to do is to find some terms and come up with a fancy name for the filters that are \(T\)-special w.r.t. those equations. Then he can prove a bunch of results that are corollaries of the results in Section 2, but of course, without using the general theory, the proofs very often consist in long (and pointless) calculations. These results have been classified in [32], which is the "serious" counterpart of the more _tongue-in-cheek_ [31]; the list of these amenities is substantial and we believe there is no need to produce some more.
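To make the recipe entirely explicit in the integral case (this is merely Corollary 2.5 and Lemma 2.6 spelled out, not a new result): in any integral residuated lattice, now pointed at \(1\), the terms \(s(x,y):=y\backslash x\), \(d_{1}(x,y):=x\backslash y\) and \(d_{2}(x,y):=y\backslash x\) satisfy \[s(x,1)\approx x,\quad s(x,x)\approx 1,\qquad d_{i}(x,x)\approx 1,\qquad\{d_{1}(x,y)\approx 1,\ d_{2}(x,y)\approx 1\}\Rightarrow x\approx y,\] since \(x\backslash y\approx 1\) holds exactly when \(x\leq y\). Hence, if a subvariety \(\mathsf{V}^{\prime}\) of \(\mathsf{FL}_{w}\) is axiomatized by equations \(p\approx q\), the corresponding \(T\)-special filters are precisely the filters \(F\) such that \(p(\vec{a})\backslash q(\vec{a})\) and \(q(\vec{a})\backslash p(\vec{a})\) belong to \(F\) for all \(\vec{a}\), i.e., by Lemma 2.6, the filters \(F\) with \(\mathbf{A}/\theta_{F}\in\mathsf{V}^{\prime}\); this is exactly the recipe described above.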
However we would like to point out that the same trick has been applied to varieties which consist of ideal determined subreducts of varieties of \(\mathsf{FL}\)-algebras: the trick here is to take away some operation from the type of \(\mathsf{FL}\)-algebra, still keeping ideal determinacy. It is easily seen that if \(\mathsf{V}\) is any variety of \(\mathsf{FL}_{w}\)-algebras, then any class of subreducts of \(\mathsf{V}\) that contains \(\{/,\backslash,\wedge,1\}\) in its type is an ideal determined variety. Again let's deal with the integral case to make things simpler, i.e. \(\mathsf{FL}_{\mathsf{w}}\)-algebras. **Pseudo-\(\mathsf{BL}\)-algebras** are just the non commutative version of \(\mathsf{BL}\)-algebras; **pseudohoops** have been introduced and defined via equations in [18] but it is clear that they are just the \(0\)-less subreducts of pseudo-\(\mathsf{BL}\)-algebras and of course pseudohoops and pseudo-\(\mathsf{BL}\)-algebras are ideal determined. So for instance the entire [7] is a more or less trivial consequence of [20], which is a particular instance of [32], which is a consequence of the general ideal theory in Section 2. Now the reader can easily verify for instance that the more recent [9], [15] and [22] are just more of the same. ## 4 Adding operations Another nice trick is to add new operations to \(\mathsf{FL}\)-algebras, but the axioms for those new operations are carefully chosen in such a way that the _new_ filters are just the _old_ filters that are closed under those additional operations. Of course there is a very general way to do that (see [1]); here we will briefly show how to do it for commutative and integral residuated lattices (and hence for \(\mathsf{FL}_{ew}\)-algebras). Let \(\mathbf{A}\) be a commutative and integral residuated lattice; a unary operation \(h\) on \(\mathbf{A}\) is **normal** if for all \(a,b\in A\) * \(h(1)=1\); * \(h(a\to b)\leq h(a)\to h(b)\). Observe that by the second point above any normal operation is increasing: \(a\leq b\) implies \(h(a)\leq h(b)\) (indeed, by integrality \(a\leq b\) exactly when \(a\to b=1\), so \(1=h(1)=h(a\to b)\leq h(a)\to h(b)\), i.e. \(h(a)\leq h(b)\)). An \(n\)-ary operation \(f\) on \(A\) is **normal** if for all \(i\leq n\) and for all \(a_{1},\ldots,a_{i-1},a_{i+1},\ldots,a_{n}\in A\) \[f_{i}(x):=f(a_{1},\ldots,a_{i-1},x,a_{i+1},\ldots,a_{n})\] is normal. A **commutative integral residuated lattice with normal operators** is an algebra \(\mathbf{A}=\langle A,\vee,\wedge,\rightarrow,\cdot,1,f_{\lambda}\rangle_{\lambda\in\Lambda}\) such that \(\langle A,\vee,\wedge,\rightarrow,\cdot,1\rangle\) is a commutative and integral residuated lattice and \(f_{\lambda}\) is normal for any \(\lambda\in\Lambda\). **Theorem 4.1**.: _(see [1], Proposition 3.5) If \(\mathbf{A}=\langle A,\vee,\wedge,\rightarrow,\cdot,1,f_{\lambda}\rangle_{\lambda\in\Lambda}\) is a commutative integral residuated lattice with normal operators, then the \(1\)-ideals (which we may call the filters) of \(\mathbf{A}\) are exactly the filters of the commutative residuated lattice reduct that are closed under \(f_{\lambda}\) for any \(\lambda\in\Lambda\)._ If \(\mathbf{A}\) is as above, then we denote by \(P(\mathbf{A})\) the set of all unary polynomials of \(\mathbf{A}\) involving only the normal operators. If \(X\subseteq A\), let \(\operatorname{Fil}_{\mathbf{A}}(X)\) be the filter generated by \(X\) in \(\mathbf{A}\). **Lemma 4.2**.: _(see Proposition 3.6 in [1]) Let \(\mathbf{A}\) be a commutative integral residuated lattice with normal operators._ 1.
_If_ \(X\subseteq A\)_, then_ \(a\in\operatorname{Fil}_{\mathbf{A}}(X)\) _if and only if there are_ \(b_{1},\ldots,b_{n}\in X\) _and_ \(p_{1},\ldots,p_{n}\in\operatorname{P}(\mathbf{A})\) _such that_ \[(p_{1}(b_{1})\wedge 1)\ldots(p_{n}(b_{n})\wedge 1)\leq a.\] 2. _If_ \(F,G\) _are filters of_ \(\mathbf{A}\)_,_ \[F\lor G=\{c:ab\leq c,\text{ for some }a\in F,\text{ }b\in G\}.\] Now this very general result, when applied to specific cases, can be made as complex as we want. Take for instance [26]; there the author defines a **tense BL-algebra** as a BL-algebra with two _tense_ unary operators. As these operators are clearly normal, the general theory applies and many of the results in the paper are simply a straightforward consequence. I daresay that the general formulation is even clearer than the particular one (which is notationally heavy), but this is just a matter of opinion. Another popular topic is adding modal operators to residuated structures. There is nothing wrong with that of course; for instance in [12] the authors proposed a modal calculus with the two classical modalities \(\Box\) and \(\Diamond\) based on Hajek's Basic Logic. Its equivalent algebraic semantics consists of structures \(\langle\mathbf{A},\Box,\Diamond\rangle\) where \(\mathbf{A}\) is a BL-algebra and \(\Box,\Diamond\) are two unary operators satisfying certain equations. Filters are defined as filters (i.e. 1-ideals) of \(\mathbf{A}\) closed under \(\Box\), which is a normal operator. The axioms chosen basically imply that the congruences of \(\langle\mathbf{A},\Box,\Diamond\rangle\) coincide with the congruences of its reduct \(\langle\mathbf{A},\Box\rangle\) so the filters of \(\langle\mathbf{A},\Box,\Diamond\rangle\) coincide with 1-ideals and the general theory applies. Of course the same trick can be applied (to a certain extent) to every variety of \(\mathsf{FL}_{ew}\)-algebras; as usual one has simply to add enough axioms in such a way that the "new" filters are 1-ideals (see for instance [33]). ## 5 Changing the constant \(\mathsf{FL}\)-algebras have two constants, 0 and 1, and the 1-ideals are the filters; of course one might wonder what happens if we consider the 0-ideals. Since \(\mathsf{FL}\)-algebras are highly non symmetrical (unlike Boolean algebras) one might be led to believe that the situation may be different. And indeed it is: easy examples show that there are varieties of \(\mathsf{FL}\)-algebras that do not have normal 0-ideals. However there are some cases in which we can apply the general theory to the 0-ideals. In fact the variety of \(\mathsf{FL}_{ew}\)-algebras happens to be 0-subtractive witness the term \(s(x,y):=(y\to 0)x\); indeed \(s(x,0)=(0\to 0)x=1\cdot x=x\) and \(s(x,x)=(x\to 0)x\leq 0\), hence \(s(x,x)=0\) since \(0\) is the bottom. So in any variety of \(\mathsf{FL}_{ew}\)-algebras the 0-ideals coincide with the 0-classes of congruences. Varieties of \(\mathsf{FL}_{ew}\)-algebras are not 0-ideal determined but: **Lemma 5.1**.: _Let \(\mathbf{A}\) be an \(\mathsf{FL}_{\mathsf{ew}}\)-algebra and let \(I\) be a \(0\)-ideal of \(\mathbf{A}\); then_ \[I^{\varepsilon}=\{(a,b):(a\to 0)\cdot b,(b\to 0)\cdot a\in I\}.\] _Hence any variety of \(\mathsf{FL}_{ew}\)-algebras is finitely congruential w.r.t. \(0\)._ So the general theory of \(T\)-special ideals can be applied in this case as well and (for instance) most of the results in [11], [24], [25] and [34] follow from that. ## 6 Tweaking the operations Fantasy has its limits and eventually even the most preposterous way of defining filters and ideals over residuated lattices runs out of steam.
So one can start tweaking the operations a little bit to get different structures and more complicated algebraic proofs of exactly the same results. The first thing is to get rid of associativity of multiplication and study _residuated lattice ordered groupoids_; these are very interesting structures in their own right but the filter and the ideal theories do not present much novelty. For instance in [10] **non associative bounded residuated lattices** are introduced: in this case the binary operation is inspired by a non associative continuous \(t\)-norm on \([0,1]\) and therefore continuity forces \(1\) to be the groupoid identity. Then **non associative \(\mathsf{BL}\)-algebras** are defined as prelinear and divisible non associative residuated lattices and it turns out (not surprisingly) that the variety of non associative \(\mathsf{BL}\)-algebras is generated as a quasivariety by all the non associative \(\mathsf{BL}\)-algebras induced by non associative continuous \(t\)-norms. Clearly non associative residuated lattices are \(1\)-ideal determined and finitely congruential at \(0\) so the general theory applies again. At first I could not find any paper on special filters of such structures and I must confess I was slightly disappointed. However one should never underestimate creativity; in [27] the authors investigated \(t\)-norms on \([0,1]\) that are not only non associative, but also non unital, in the sense that, if \(\cdot\) is such a \(t\)-norm, then \(x\leq x1\) and the inequality may be strict. The introduction of these structures, called **inflationary general residuated lattices**, is well motivated and they turn out to be more interesting in that they are NOT ideal determined or subtractive in general. Of course some of the general theory of ideals can be recovered in that setting as well, but it is no straightforward business and there is no need to explain it here. We simply notice that a non associative bounded residuated lattice is simply an inflationary general residuated lattice in which \(x1\leq x\). Now in [35] the authors introduced a non commutative version of inflationary general residuated lattices with all the usual results; this is pointless enough but the authors cannot resist the temptation to study special filters [36]. Of course they run into trouble, since inflationary general residuated lattices do not have a straightforward theory of filters (or ideals): their solution is to go back to non associative residuated lattices but in a covert way. Look at Theorem 1 in [36]; since the statement "\(1\) is the groupoid identity" appears in both point (1) (implicitly) and point (2) (explicitly), this is really a statement about non associative residuated lattices. And it really says that if one quotients out a non associative residuated lattice by a \(T\)-special ideal, where \(T\) is any set of equations axiomatizing non associative \(\mathsf{BL}\)-algebras modulo non associative residuated lattices, then the result is a non associative \(\mathsf{BL}\)-algebra; and, of course, this is a consequence of the general theory of \(T\)-special ideals. Now that the king is naked the reader can go through [36] and have fun in discovering similar instances of this phenomenon. Another generalization that is slightly different but in the same spirit is to consider a _bounded q-lattice ordered residuated q-monoid_ as a basis for a residuated structure.
In this case we give some details that are helpful in understanding the context; a **q-lattice** is an algebra \(\langle A,\vee,\wedge\rangle\) such that \(\vee\) and \(\wedge\) are commutative and associative and for all \(a,b\in A\) 1. \(a\lor(b\wedge a)=a\lor a=a\wedge a=a\wedge(b\lor a)\); 2. \(a\lor b=a\vee(b\lor b)\), \(a\wedge b=a\wedge(b\wedge b)\). Clearly the relation \(a\leq b\) if \(a\lor a=a\lor b\) is a quasiordering. A **q-monoid** is an algebra \(\langle A,\cdot,1\rangle\) such that \(\cdot\) is associative and for all \(a,b\in A\) 1. \(a\cdot 1=1\cdot a\); 2. \(a\cdot b\cdot 1=a\cdot b\); 3. \(1\cdot 1=1\). A **quasi-\(\mathsf{FL}_{w}\)-algebra** is a structure \(\langle A,\vee,\wedge,\cdot,/,\backslash,0,1\rangle\) where 1. \(\langle A,\vee,\wedge\rangle\) is a q-lattice and \(0,1\) are the bottom and the top in the quasi ordering; 2. \((/,\cdot)\) and \((\backslash,\cdot)\) are left and right residuated pairs w.r.t. \(\leq\); 3. \(0\wedge 0=0\) and for all \(a\in A\), \(a\cdot 1=a\wedge a\); 4. for all \(a,b\in A\), \((b/a)\cdot 1=b/a\) and \((a\backslash b)\cdot 1=a\backslash b\). If \(\mathbf{A}\) is a quasi-\(\mathsf{FL}_{w}\)-algebra, an element \(a\in A\) is **regular** if \(a\cdot 1=a\). Let \(R_{\mathbf{A}}\) be the set of regular elements of \(\mathbf{A}\); then it is an easy exercise to check that * \(R_{\mathbf{A}}\) is the universe of a subalgebra \(\mathbf{R_{A}}\) of \(\mathbf{A}\) that is an \(\mathsf{FL}_{w}\)-algebra; * \(R_{\mathbf{A}}=\{a\cdot 1:a\in A\}\). A congruence \(\theta\in\mathrm{Con}(\mathbf{A})\) is **regular** if \((a\cdot 1,b\cdot 1)\in\theta\) implies \((a,b)\in\theta\). The proofs of the following two theorems are straightforward: **Theorem 6.1**.: _Let \(\mathbf{A}\) be a quasi-\(\mathsf{FL}_{w}\)-algebra and \(\theta\in\mathrm{Con}(\mathbf{A})\); then the following are equivalent:_ 1. \(\theta\) _is regular;_ 2. \(\mathbf{A}/\theta\) _is an_ \(\mathsf{FL}_{w}\)_-algebra;_ 3. \(1/\theta\cap R_{\bf A}\) _is a normal filter of_ \({\bf R_{A}}\)_;_ 4. \(1/\theta=\uparrow G\) _for some normal filter_ \(G\) _of_ \({\bf R_{A}}\)_;_ 5. \(\theta=\{(a,b):a/b,b/a\in 1/\theta\}\)_._ A **normal filter** of a quasi-\({\sf FL}_{w}\)-algebra \({\bf A}\) is a subset \(F\subseteq A\) such that * \(1\in F\); * \(a\in F\) and \(a\leq b\) implies \(b\in F\); * \(a\backslash b\in F\) if and only if \(b/a\in F\). **Theorem 6.2**.: _Let \({\bf A}\) be a quasi-\({\sf FL}_{w}\)-algebra; then_ 1. \(F\subseteq A\) _is a normal filter if and only if it is equal to_ \(1/\theta\) _for some regular_ \(\theta\in{\rm Con}({\bf A})\)_;_ 2. \(\theta\) _is a regular congruence of_ \({\bf A}\) _if and only if_ \[\theta=\{(a,b):a/b,b/a\in F\}\] _for some normal filter_ \(F\)_;_ 3. _the regular congruences of_ \({\bf A}\) _form an algebraic lattice_ \({\rm RCon}({\bf A})\) _and the normal filters form an algebraic lattice_ \({\rm NFil}({\bf A})\)_; they are isomorphic via the mapping_ \[\theta\longmapsto 1/\theta\qquad F\longmapsto\theta_{F}=\{(a,b):a/b,b/a\in F\};\] 4. \({\rm RCon}({\bf A})\) _is a complete sublattice of_ \({\rm Con}({\bf A})\)_;_ 5. \({\rm NFil}({\bf A})\cong{\rm RCon}({\bf A})\cong{\rm Con}({\bf R_{A}})\cong {\rm NFil}({\bf R_{A}})\)_._ It follows that the theory of normal filters in a quasi-\({\sf FL}_{w}\)-algebra \({\bf A}\) is equivalent to the theory of normal filters (i.e. the 1-ideals) of its associated \({\sf FL}_{w}\)-algebra \({\bf R_{A}}\).
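For the record, the second of the two "easy exercise" items above is a one-line computation: since \(\cdot\) is associative and \(1\cdot 1=1\), \[(a\cdot 1)\cdot 1=a\cdot(1\cdot 1)=a\cdot 1,\] so every element of the form \(a\cdot 1\) is regular, while a regular element \(a\) is by definition equal to \(a\cdot 1\); hence \(R_{\mathbf{A}}=\{a\cdot 1:a\in A\}\).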
At this point one can add suitable axioms to quasi-\({\sf FL}_{w}\)-algebras to get subvarieties that are the quasi-replica of subvarieties of \({\sf FL}_{w}\). And for each of those the theory of normal filters is equivalent to the theory of 1-ideals of the corresponding subvariety of \({\sf FL}_{w}\). Examples of this are **quasi-pseudo-\({\sf BL}\)-algebras** [14], **quasi-pseudo-\({\sf MV}\)-algebras** [13] and many others. We also stress that we have not exhausted all the possible variations; for instance the reader can have fun in dissecting [23]. ## 7 Conclusions First we want to emphasize that there are at least two topics that we have not touched. The first one deals with algebras whose type contains a binary operation that resembles the implication and a constant \(1\); the most famous (and serious) examples of algebras of this kind are BCK and BCI-algebras. Of course the poverty of the language allows the construction of many different algebraic structures with many non equivalent definitions of "filter" and the vast majority of them is totally pointless and utterly uninteresting. And those that might be of some interest are those for which the filter theory corresponds to the theory of \(1\)-ideals; the reader may want to look at [5], Example 4.5 to understand what we mean. The second topic we have not considered is the introduction of the so-called _fuzzy filters_ on residuated structures; we suspect that a lot can be said in that direction as well but we did not have the stomach for it. Finally let us give more explanations on the reasons why we have embarked on this enterprise. The main reason of course is that we believe that this is bad mathematics and should be avoided. But there are also more practical reasons, as this way of doing mathematics gives a bad reputation to residuated structures and, in the end, to "contemporary" algebraic logic. This is not an exaggeration: my original field is universal algebra and I have listened to many of my colleagues joking about it... Of course one might object that most of these papers end up in fourth rate journals or worse so no harm is done; but one should not forget that those journals are indexed in Scopus for instance, so they give metrics that are commonly accepted when evaluating a researcher. How this impacts on the credibility of our field is anybody's guess (and my guess should be, at this point, clear). In conclusion I believe that as a group we have the responsibility to police our field more effectively than we are doing now. Other fields have found ways of doing that; maybe it is time to think seriously about it.
2309.07426
At the borderline of shape coexistence: Mo and Ru
Background Even-even isotopes of Mo ($Z=42$) and Ru ($Z=44$) are nuclei close to the subshell closure at $Z=40$, where shape coexistence plays a significant role. As a result, their spectroscopic properties are expected to resemble those of Sr ($Z=38$) and Zr ($Z=40$). Exploring the evolution of these properties as they move away from the subshell closure is of great interest. Purpose The purpose of this study is to reproduce the spectroscopic properties of even-even $^{96-110}_{\phantom{961-}42}$Mo and $^{98-114}_{\phantom{961-}44}$Ru isotopes and to determine the influence of shape coexistence. Method We have employed the interacting boson model with configuration mixing as the framework to calculate all the observables for Mo and Ru isotopes. We have considered two types of configurations: 0-particle-0-hole and 2-particle-2-hole excitations. The model parameters have been determined using a least-squares fitting to match the excitation energies and the $B(E2)$ transition rates. Results We have obtained the excitation energies, $B(E2)$ values, two-neutron separation energies, nuclear radii, and isotope shifts for the entire chain of isotopes. Our theoretical results have shown good agreement with experimental data. Furthermore, we have conducted a detailed analysis of the wave functions and obtained the mean-field energy surfaces and the nuclear deformation parameter, $\beta$, for all considered isotopes. Conclusions Our findings reveal that shape coexistence plays a significant role in Mo isotopes, with the crossing of intruder and regular configurations occurring at neutron number $60$ ($A=102$), which induces a quantum phase transition. In contrast, in Ru isotopes, the intruder states have minimal influence, remaining at higher energies. However, at neutron number $60$, also a quantum phase transition occurs in Ru isotopes.
E. Maya-Barbecho, S. Baid, J. M. Arias, J. E. GarcΓ­a-Ramos
2023-09-14T04:47:40Z
http://arxiv.org/abs/2309.07426v1
# At the borderline of shape coexistence: Mo and Ru ###### Abstract **Background:** Even-even isotopes of Mo (\(Z=42\)) and Ru (\(Z=44\)) are nuclei close to the subshell closure at \(Z=40\), where shape coexistence plays a significant role. As a result, their spectroscopic properties are expected to resemble those of Sr (\(Z=38\)) and Zr (\(Z=40\)). Exploring the evolution of these properties as they move away from the subshell closure is of great interest. **Purpose:** The purpose of this study is to reproduce the spectroscopic properties of even-even \({}^{96-110}_{42}\)Mo and \({}^{98-114}_{44}\)Ru isotopes and to determine the influence of shape coexistence. **Method:** We have employed the interacting boson model with configuration mixing as the framework to calculate all the observables for Mo and Ru isotopes. We have considered two types of configurations: 0-particle-0-hole and 2-particle-2-hole excitations. The model parameters have been determined using a least-squares fitting to match the excitation energies and the \(B(E2)\) transition rates. **Results:** We have obtained the excitation energies, \(B(E2)\) values, two-neutron separation energies, nuclear radii, and isotope shifts for the entire chain of isotopes. Our theoretical results have shown good agreement with experimental data. Furthermore, we have conducted a detailed analysis of the wave functions and obtained the mean-field energy surfaces and the nuclear deformation parameter, \(\beta\), for all considered isotopes. **Conclusions:** Our findings reveal that shape coexistence plays a significant role in Mo isotopes, with the crossing of intruder and regular configurations occurring at neutron number 60 (\(A=102\)), which induces a quantum phase transition. In contrast, in Ru isotopes, the intruder states have minimal influence, remaining at higher energies. However, a quantum phase transition also occurs in Ru isotopes at neutron number 60. Mo isotopes, Ru isotopes, shape coexistence, intruder states, interacting boson model ## I Introduction The subshell closure at \(Z=40\) is known to exhibit rapid onset of deformation, particularly around neutron number 60, resulting from the filling of the neutron \(1g_{7/2}\) orbit, which interacts with the proton \(1g_{9/2}\) one. This phenomenon was well explained in [1; 2; 3], and more recently in [4], providing insights into the origin of deformation in this mass region. Generally, the shape of a nucleus arises from a delicate balance between pairing and quadrupole nuclear interactions, favoring spherical and deformed shapes, respectively. It is important not to overlook the role of the monopole interaction, which is responsible for the evolution of single-particle energies. The region around \(Z=40\) is also known for the presence of states with different shapes within a narrow energy range. This situation, known as shape coexistence, was initially proposed in nuclear physics by Morinaga [5] to explain the nature of the first excited state (\(0^{+}\)) in \({}^{16}\)O, which was assumed to be deformed while the ground state is obviously spherical due to its doubly magic nature. These experimental findings were theoretically confirmed in [6; 7; 8], where particle-hole excitations across the energy shell gap were allowed. A deformed band, originating at low energy due to the quadrupole part of the nucleon-nucleon interactions, emerged on top of the first excited \(0^{+}\) state.
The presence of additional effective valence nucleons is crucial in explaining the characteristic dropping of this deformed band, often referred to as the intruder band. Another notable example of shape coexistence is observed experimentally in \({}^{40}\)Ca and is effectively described through shell model calculations involving multi-particle multi-hole excitations [9; 10; 11]. The presence of shape coexistence is clearly manifested through experimental observations of nuclear radii and isotope shifts. The phenomenon was first identified in the odd-even staggering of the radii of mercury isotopes, which indicated the coexistence of states with significantly different degrees of deformation. Subsequent studies, such as the measurements of mercury radii in the neutron-deficient region well beyond the mid-shell [12], have confirmed the role of shape coexistence in explaining nuclear structure in various mass regions. This phenomenon is particularly prominent near shell or subshell closures in protons (neutrons) with neutrons (protons) around the mid-shell. Shape coexistence has been observed in light, medium, and heavy nuclei [13; 14; 15; 16]. From a theoretical perspective, shape coexistence is described using two complementary approaches: self-consistent methods based on Hartree-Fock (HF) or Hartree-Fock-Bogoliubov (HFB) theories, and the nuclear shell model. In the region around \(Z=40\), notable studies have been conducted using the relativistic interaction PC-PK1, focusing on Kr, Sr, Zr, and Mo isotopes near neutron number 60 [17]. These studies revealed a rapid evolution in Sr and Zr isotopes, while a more moderate evolution was observed in Mo and Kr isotopes. Prolate-oblate shape coexistence was observed in \({}^{98}\)Sr and \({}^{100}\)Zr. In another study [18], even-even Ru, Mo, Zr, and Sr isotopes were investigated using the HFB approach with a Gogny-D1M interaction. The spectroscopic properties were obtained by mapping the energy density functional into an interacting boson model with configuration mixing (IBM-CM) energy surface. The Ru isotopes exhibited a smooth shape evolution with no evidence of intruder bands. The Mo isotopes required two different particle-hole configurations, resulting in a good reproduction of the yrast band but with some deficiencies in describing the non-yrast band and inter-band transitions. State-of-the-art calculations within the HFB framework were carried out by Rodriguez-Guzman _et al._[19], allowing for the treatment of axial and triaxial degrees of freedom on an equal footing. These calculations, applied to Sr, Zr, and Mo isotopes, revealed an oblate shape for Mo isotopes at neutron number 58, gradually transitioning to a triaxial shape as the neutron number increased. An island of triaxiality was evident from neutron number 60 up to 68. Another study [20] investigated Mo and Ru isotopes using the relativistic-Hartree-Bogoliubov formalism with density-dependent zero- and finite-range nucleon-nucleon interactions, as well as a separable pairing. The results were in agreement with other mean-field calculations. Additionally, the study [21] employed the density-dependent meson exchange model DD-ME2 and density-dependent point coupling models DD-PC1 and DD-PCX to explore the shape evolution of Zr, Mo, and Ru isotopes, considering only axial situations. 
The predictions indicated a spherical shape for the lightest Ru isotopes, nearly degenerate prolate and oblate minima for \({}^{96-102}\)Ru, a prolate and an oblate degenerate minima in \({}^{104}\)Ru, and an oblate shape for the heaviest Ru isotopes. In the case of Mo isotopes, the lightest isotopes were predicted to have spherical shapes, with nearly degenerate prolate and oblate minima for \({}^{94-100}\)Mo. Moving into the shell-model framework, it is important to note that the description of the region around \(Z=40\) is influenced by the simultaneous occupation of neutron and proton spin-orbit partners. When the neutron \(1g_{7/2}\) orbital begins to be filled, the interaction with the proton \(1g_{9/2}\) orbital favors the existence of a deformed region in Zr and Sr nuclei with a neutron number larger than 58, and likely in Mo and Ru nuclei as well. This concept has been recently applied to the Zr region [4] using the Monte Carlo Shell Model Otsuka _et al._[22], Shimizu _et al._[23; 24], which has the capability to handle open shells. However, the idea was first introduced in the seminal works of Federman and Pittel [1; 2; 3] where the simultaneous occupation of neutron-proton spin-orbit partners was emphasized as crucial. Federman and their colleagues extensively explored this mass region using a reduced model space consisting of the \(3s_{1/2}\), \(2d_{3/2}\), and \(1g_{7/2}\) neutron orbits, and the \(2p_{1/2}\), \(1g_{9/2}\), and \(2d_{5/2}\) proton orbits [25; 26; 27; 28]. More recently, large-scale shell-model calculations have been performed for the same mass region using more realistic valence spaces, as demonstrated in studies such as [29] and [30]. In [31], a shell-model calculation was conducted for \({}^{100}\)Mo and \({}^{100}\)Ru, starting from a realistic nucleon-nucleon potential and deriving the effective shell-model Hamiltonian and decay operators within many-body perturbation theory, with a focus on studying neutrinoless double-\(\beta\) decay. In [32], the multi-quasiparticle triaxial projected shell model was used to investigate the band structures of \({}^{98-106}\)Ru isotopes, providing a consistent description. In [33], the odd-even and even-even isotopes of \({}^{95-102}\)Ru were studied using the nucleon pair approximation with a phenomenological pairing plus quadrupole interaction, yielding good agreement with experimental data. Other works that provide insights into the nature of this mass region include the following studies. In [34; 35], the authors analyzed \({}^{98}\)Ru within the framework of the IBM-2 [36] and concluded that a clear vibrational pattern is present. In [37], even-even and even-odd Ru isotopes were investigated using the IBM [38], revealing a transitional behavior. The g-factors of Ru and Pd nuclei were calculated using the IBM-2 in [39]. The even-even \({}^{98-110}\)Ru isotopes were studied using the affine \(\widehat{SU(1,1)}\) Lie Algebra in [40]. In [41], the \(A=100\) region was described using the IBM-1 with a single Hamiltonian featuring constant parameters. Although this work captured the overall trends well, it could not reproduce the fine details of the spectra, especially the rapid shape evolution observed around neutron number 60. In [42], the even-even Mo isotopes were investigated using a Bohr Hamiltonian with a sextic potential in the \(\beta\) direction, without dependence on \(\gamma\). 
This approach provided a good description of the spectra across the entire chain of isotopes, and it was concluded that \({}^{104}\)Ru is the closest nucleus to the critical point symmetry E(5) [43]. In [44], a large set of isotopes were studied to identify good candidates for vibrational-like behavior, i.e., U(5) nuclei. Among others, \({}^{100}\)Mo and \({}^{98-104}\)Ru were identified as suitable candidates. It is important to note that this work is relatively old, and with the present experimental knowledge, the conclusions may have evolved. The present work extends our previous analysis of the \(Z\approx 40\) and \(A\approx 100\) region [45; 46; 47; 48; 49], particularly focusing on the even-even Mo and Ru isotopes. Mo is an excellent candidate for studying the influence of intruder states on the onset of deformation for \(N\approx 60\) due to the rapid lowering of the energy of the \(2^{+}_{1}\) state, a significant increase in the ratio \(E(4^{+}_{1})/E(2^{+}_{1})\), or a sudden increase of the radius. However, Ru exhibits a smoother trend in these observables, and the influence of intruder states seems to be minimal. Nevertheless, the onset of deformation at \(N\approx 60\), specifically in \({}^{104}\)Ru, has been suggested to generate a relatively flat energy surface, reminiscent of the concept of critical point symmetry E(5) [50]. The paper is organized as follows. In Section II, the present experimental knowledge on Mo and Ru nuclei is reviewed. In Section III, the theoretical framework used in this work is presented, namely the IBM-CM (interacting boson model with configuration mixing). The procedure for obtaining the fitting parameters of the model will also be discussed. In Section IV, the correlation energy gain is studied. It plays a crucial role in understanding the nuclear structure and the interaction between nucleons. In Section V, a detailed comparison between theory and experiment for excitation energies and E2 transition rates is presented. This analysis will provide insights into the agreement between the theoretical predictions of the IBM-CM and the experimental data. In Section VI, the wave functions of the nuclear states are analyzed. The structure and configuration mixing of these states will be examined, allowing for a deeper understanding of the underlying nuclear dynamics. In Section VII, additional observables, including radii, isotopic shifts, and two-neutron separation energies, are studied. These quantities provide valuable information about the nuclear shapes, deformations, and binding energies. In Section VIII, a calculation of the IBM mean-field energy surfaces and an investigation of the deformations in Mo and Ru nuclei is presented. In Section IX, an analysis of the possible existence of a quantum phase transition in the studied nuclei is discussed. Finally, in Section X, the summary and the conclusions of the paper are presented. ## II Experimental data in the even-even Mo and Ru nuclei The energy systematics of isotopes \({}^{92-110}\)Mo below 3 MeV are presented in Fig. 1, which illustrates a significant increase in the density of states in the lower part of the spectrum as the mass increases. Additionally, a transformation from a vibrational-like pattern to a rotational one is observed, beginning at \({}^{102}\)Mo and continuing for heavier isotopes. Another important observation is the decrease in energy of the first excited \(0^{+}\) state, with the minimum also occurring at \({}^{102}\)Mo. 
Throughout the entire chain, the energies of the \(4^{+}_{1}\) and \(2^{+}_{2}\) states are relatively close, almost degenerate in the case of \({}^{108}\)Mo, indicating the presence of a certain degree of \(\gamma\) softness in the heavier isotopes. Fig. 2 depicts the energy systematics of \({}^{94-114}\)Ru isotopes. A clear vibrational pattern is evident for isotopes \({}^{94-102}\)Ru, which later evolves into a rotational structure with some degree of \(\gamma\) softness, as indicated by the close proximity of the \(4^{+}_{1}\) and \(2^{+}_{2}\) states. Similarly, a dropping \(0^{+}\) state is observed, with a minimum energy at \({}^{102}\)Ru. For the comparison with theoretical calculations, we will consider the evaluated experimental data from Nuclear Data Sheets publications for specific isotopes: \(A=96\)[51], \(A=98\)[52; 53], \(A=100\)[54], \(A=102\)[55], \(A=104\)[56], \(A=106\)[57], \(A=108\)[58], \(A=110\)[55], \(A=112\)[59], \(A=114\)[60]. In addition to these sources, we have incorporated the most up-to-date references for certain isotopes as described below. An excellent experimental overview of this mass region, including Mo and Ru isotopes, can be found in [16] with updated references. In [61], \(\gamma\gamma\) angular correlation experiments were conducted to study the low-lying states of \({}^{96-98}\)Mo, allowing the determination of angular momenta and multiple mixing ratios. Detailed Coulomb-excitation studies of \({}^{98}\)Mo and \({}^{100}\)Mo were performed in [62] and [63] respectively. In [64], \({}^{106-108-110}\)Mo nuclei were investigated through \(\beta\)-delayed \(\gamma\)-ray spectroscopy, and for the first time, the \(0^{+}_{2}\) band in \({}^{108-110}\)Mo was measured. The authors of [65] measured neutron and proton occupancy in \({}^{98}\)Mo and \({}^{100}\)Mo, revealing a clear change in the filling of the proton g\({}_{9/2}\) shell between the two isotopes. For Ru isotopes, in [66], the \(0^{+}_{2}\) and \(\gamma\) bands in \({}^{98}\)Ru were observed using \(\gamma\)-ray spectroscopy following the \(\beta\)-decay of \({}^{98}\)Rh, as well as via the \({}^{100}\)Ru(p, t) reaction. The \(0^{+}_{2}\) state is suggested to be an intruder state rather than a two-phonon vibrational state, although the mean-field calculation presented in the same work does not fully support this hypothesis. The lifetimes of states \(2^{+}_{1}\), \(2^{+}_{2}\), and \(4^{+}_{1}\) in \({}^{98}\)Ru were measured in [67] in an attempt to resolve discrepancies observed in the literature regarding the lifetime of the \(4^{+}_{1}\) state. In [68], the \(0^{+}_{2}\) band in \({}^{102}\)Ru was studied, along with an analysis of the mixing between the \(0^{+}_{1}\) and \(0^{+}_{2}\) states to understand the deformation evolution of the ground state in even-even Ru isotopes. Measurement of 28 E2 and 3 M1 matrix elements involving 17 low-lying excited states in \({}^{104}\)Ru was conducted using Coulomb excitation in [69]. The \(g\)-factor of the \(2^{+}_{1}\) state in \({}^{96-104}\)Ru was obtained in [70]. More recently, in [71] a Coulomb excitation experiment was conducted for \({}^{102}\)Ru obtaining a little larger E2 matrix elements than the evaluated ones for the transitions from \(2^{+}_{1}\) and \(2^{+}_{2}\) to the \(0^{+}_{1}\) state, and measuring for the first time the E2 matrix element of the transition \(2^{+}_{3}\to 0^{+}_{1}\). 
They concluded that the \(0^{+}_{2}\) state, with \(\beta\approx 0.18\) is a little less deformed than the ground state, with \(\beta\approx 0.24\). Figure 2: The same as Fig. 1 but for Ru isotopes. Figure 1: The experimental energy level systematics of low-lying positive parity states for the Mo isotopes are displayed, showing levels up to approximately 3 MeV in energy. Levels with dashed blue lines likely correspond to spherical shapes, while those in red represent deformed shapes (for more details refer to the information in Section II). ## III The interacting boson model with configuration mixing formalism ### The formalism The framework used in this work is the IBM-CM (interacting boson model with configuration mixing). This model is an extension of the original IBM (interacting boson model) proposed by Arima and Iachello [38]. The IBM-CM allows for the simultaneous treatment of multiple boson configurations corresponding to particle-hole excitations across a shell or subshell closure [72; 73]. In this version of the model, known as IBM-1, no distinction is made between proton and neutron bosons or particles and holes. For the study of Mo and Ru nuclei, we consider the closure for protons of the \(Z=40\) subshell, where the regular states correspond to a 0h-2p (0 holes and 2 particles) proton configuration for Mo and to a 0h-4p proton configuration for Ru, and the intruder states correspond to a 2h-4p proton configuration for Mo and to a 2h-6p proton configuration for Ru. The number of valence neutrons is determined considering a neutron closed shell at \(N=50\). Hence, the number of valence bosons, denoted as \(N\), will be half of the sum of the valence protons, which is 2 for Mo and 4 for Ru, plus half the number of valence neutrons. The intruder configuration will have additionally 2 bosons. Therefore, the regular and intruder spaces will form a \([N]\oplus[N+2]\) Hilbert space. The Hamiltonian of the system consists of two sectors: one corresponding to the regular part, \([N]\), and another corresponding to the intruder part, \([N+2]\). The total Hamiltonian is written as follows: \[\hat{H}=\hat{P}_{N}^{\dagger}\hat{H}_{\text{ecqf}}^{N}\hat{P}_{N}+\hat{P}_{N+ 2}^{\dagger}\left(\hat{H}_{\text{ecqf}}^{N+2}+\Delta^{N+2}\right)\hat{P}_{N+ 2}\ +\hat{V}_{\text{mix}}^{N,N+2}\, \tag{1}\] where \(\hat{P}_{N}\) and \(\hat{P}_{N+2}\) are projection operators onto the \([N]\) and the \([N+2]\) boson subspaces, respectively, \[\hat{H}_{\text{ecqf}}^{i}=\varepsilon_{i}\hat{n}_{d}+\kappa_{i}^{\prime}\hat{L }\cdot\hat{L}+\kappa_{i}\hat{Q}(\chi_{i})\cdot\hat{Q}(\chi_{i}) \tag{2}\] is the Hamiltonian of the extended consistent-Q formalism (ECQF), [74; 75] with \(i=N,N+2\), \(\hat{n}_{d}\) is the \(d\) boson number operator, \(\hat{L}=\sqrt{10}\left[d^{\dagger}\times\tilde{d}\right]^{(1)}\) is the angular momentum, and \(\hat{Q}(\chi)=\left[s^{\dagger}\times\tilde{d}+d^{\dagger}\times s\right]^{(2) }+\chi\left[d^{\dagger}\times\tilde{d}\right]^{(2)}\) is the quadrupole operator. Note that the ECQF corresponds to a simplified version of the general IBM Hamiltonian. The parameter \(\Delta^{N+2}\) represents the energy needed to excite two proton particles across the \(Z=40\) subshell gap, resulting in 2p-2h excitations. 
The operator \(\hat{V}_{\text{mix}}^{N,N+2}\) is the mixing between the \(N\) and the \(N+2\) configurations and is given by \[\hat{V}_{\text{mix}}^{N,N+2}=\omega_{0}^{N,N+2}(s^{\dagger}\times s^{\dagger}+s\times s)+\omega_{2}^{N,N+2}(d^{\dagger}\times d^{\dagger}+\tilde{d}\times\tilde{d})^{(0)}. \tag{3}\] In this study, we assume that \(\omega_{0}^{N,N+2}=\omega_{2}^{N,N+2}=\omega\), where \(\omega\) is a constant parameter. The \(E2\) transition operator is built with the same quadrupole operator that appears in the Hamiltonian (2). It is defined as the sum of two contributions that act separately in the regular and the intruder sectors without crossed contributions, \[\hat{T}(E2)_{\mu}=\sum_{i=N,N+2}e_{i}\hat{P}_{i}^{\dagger}\hat{Q}_{\mu}(\chi_{i})\hat{P}_{i}. \tag{4}\] The \(e_{i}\) (\(i=N,N+2\)) are the effective boson charges and the parameters \(\chi_{i}\) take the same values as in the Hamiltonian (2). Note that the operator cannot connect the regular with the intruder sector or vice versa. The free parameters associated with the above operators need to be determined in order to reproduce a set of excitation energies and transition rates, as described in Section III.2. This approach has been successfully employed in recent studies for Sr [48], Zr [46; 47], Pt [76; 77], Hg [78; 79] and Po isotopes [80; 81]. ### The fitting procedure: energy spectra and absolute \(B(E2)\) reduced transition probabilities In this section, we describe how the parameters of the Hamiltonian (1), (2), and (3), as well as the effective charges of the \(\hat{T}(E2)\) transition operator (4), were determined. We focus on studying the even-even isotopes \({}^{96-110}\)Mo and \({}^{98-114}\)Ru, covering a large portion of the neutron shell \(50-82\). We exclude nuclei very close to the neutron shell closure due to the limited reliability of IBM results for those cases. The goal of the fitting procedure is to achieve a satisfactory overall agreement with the available excitation energies and \(B(E2)\) reduced transition probabilities. A standard \(\chi^{2}\) method is used to determine the values of the parameters appearing in the Hamiltonian and the \(\hat{T}(E2)\) operator, following the approach described in [46; 76; 78; 80]. In general, there are 13 parameters involved, but the number may be smaller for most nuclei. We impose the constraint that the parameters should vary smoothly from one isotope to another. Additionally, we strive to keep as many parameters as possible at constant values, particularly the parameters \(\Delta^{N+2}\) and \(\omega\). The resulting parameter values for the IBM-CM Hamiltonian and \(\hat{T}(E2)\) operator are presented in Tables 1 and 2 for Mo and Ru, respectively. In these tables, certain parameters could not be determined unambiguously from the available experimental information, specifically the parameters corresponding to the intruder sector of most Ru isotopes and the parameters of the regular sector of \({}^{108-110}\)Mo. For Ru, only a few \(0^{+}\) states in \({}^{100}\)Ru could be identified as intruder members, but there is no strong evidence of other intruder states in the rest of the isotope chain. Therefore, we have to assume the same intruder parameters obtained for \({}^{100}\)Ru for the entire Ru chain. As a result, the description of Ru intruder states should be considered only as approximate. In the case of \({}^{108-110}\)Mo, no evidence of regular states exists, so the regular parameters of \({}^{106}\)Mo are used for those isotopes.
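To make the kind of least-squares search just described concrete, the following minimal Python sketch minimizes a \(\chi^{2}\) built from a few excitation energies and \(B(E2)\) values. It is purely illustrative and not the code used in this work: the function `model_observables`, the data values, and the adopted uncertainties are placeholders invented for the example, and in a real application that function would be replaced by the diagonalization of the IBM-CM Hamiltonian of Eqs. (1)-(3).

```python
# Sketch of a chi^2 fit of model parameters to excitation energies and B(E2) values.
from scipy.optimize import minimize

# Hypothetical experimental input (placeholders, not evaluated data).
exp_energies = {"2+1": 0.536, "4+1": 1.136, "0+2": 0.695}   # MeV
exp_be2      = {"2+1->0+1": 37.5, "4+1->2+1": 71.0}          # W.u.

def model_observables(params):
    """Stand-in for the IBM-CM calculation: maps a parameter vector to observables."""
    eps, kappa = params
    # Toy expressions, only to make the sketch runnable.
    energies = {"2+1": eps, "4+1": 2.2 * eps + kappa, "0+2": 1.3 * eps}
    be2 = {"2+1->0+1": 40.0 + 100.0 * kappa, "4+1->2+1": 70.0 + 150.0 * kappa}
    return energies, be2

def chi2(params):
    energies, be2 = model_observables(params)
    total = 0.0
    for key, exp_val in exp_energies.items():      # assumed 5% weight on energies
        total += ((energies[key] - exp_val) / (0.05 * exp_val)) ** 2
    for key, exp_val in exp_be2.items():           # assumed 10% weight on B(E2)
        total += ((be2[key] - exp_val) / (0.10 * exp_val)) ** 2
    return total

result = minimize(chi2, x0=[0.5, 0.0], method="Nelder-Mead")
print(result.x, result.fun)
```

In practice one would iterate such a minimization isotope by isotope while enforcing the smoothness constraints on the parameters mentioned above.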
It is worth noting the smooth variation or constancy of certain parameters, such as \(\chi_{N}=\chi_{N+2}=0\) and \(\kappa^{\prime}_{N+2}=0\) \begin{table} \begin{tabular}{c c c c c c c c} Nucleus & \(\varepsilon_{N}\) & \(\kappa_{N}\) & \(\kappa^{\prime}_{N}\) & \(\varepsilon_{N+2}\) & \(\kappa_{N+2}\) & \(e_{N}\) & \(e_{N+2}\) \\ \hline \({}^{98}\)Ru & 683.60 & -15.79 & -1.28 & 410.72\({}^{\rm a}\) & -24.08\({}^{\rm a}\) & 2.55 & 4.00\({}^{\rm a}\) \\ \({}^{100}\)Ru & 546.85 & -19.89 & 8.60 & 410.72 & -24.08\({}^{\rm a}\) & 2.34 & 4.00 \\ \({}^{102}\)Ru & 535.28 & -20.46 & 4.75 & 410.72\({}^{\rm a}\) & -24.08\({}^{\rm a}\) & 2.27 & 4.00\({}^{\rm a}\) \\ \({}^{104}\)Ru & 412.75 & -24.34 & 7.03 & 410.72\({}^{\rm a}\) & -24.08\({}^{\rm a}\) & 2.11 & 4.00\({}^{\rm a}\) \\ \({}^{106}\)Ru & 360.60 & -26.37 & 5.67 & 410.72\({}^{\rm a}\) & -24.08\({}^{\rm a}\) & 1.96 & 4.00\({}^{\rm a}\) \\ \({}^{108}\)Ru & 298.53 & -29.19 & 7.60 & 410.72\({}^{\rm a}\) & -24.08\({}^{\rm a}\) & 1.67 & 4.00\({}^{\rm a}\) \\ \({}^{110}\)Ru & 250.00 & -30.00 & 9.55 & 410.72\({}^{\rm a}\) & -24.08\({}^{\rm a}\) & 1.55 & 4.00\({}^{\rm a}\) \\ \({}^{112}\)Ru & 250.00 & -30.00 & 6.43 & 410.72\({}^{\rm a}\) & -24.08\({}^{\rm a}\) & 1.76 & 4.00\({}^{\rm a}\) \\ \({}^{114}\)Ru & 299.33 & -34.25 & 5.55 & 410.72\({}^{\rm a}\) & -24.08\({}^{\rm a}\) & 1.80 & 4.00\({}^{\rm a}\) \\ \end{tabular} \end{table} Table 2: Hamiltonian and \(\hat{T}(E2)\) parameters resulting from the study of Ru isotopes in the present work. All quantities have dimensions of energy (given in keV), except \(e_{N}\) and \(e_{N+2}\), which are given in units of \(\sqrt{\rm W.u}\).. It should be noted that the values \(\chi_{N}=\chi_{N+2}=0\), \(\kappa^{\prime}_{N+2}=0\) keV, \(\omega=15\) keV and \(\Delta^{N+2}=2200\) keV were fixed for all isotopes. \begin{table} \begin{tabular}{c c c c c c c c c c c} Nucleus & \(\varepsilon_{N}\) & \(\kappa_{N}\) & \(\chi_{N}\) & \(\kappa^{\prime}_{N}\) & \(\varepsilon_{N+2}\) & \(\kappa_{N+2}\) & \(\chi_{N+2}\) & \(\kappa^{\prime}_{N+2}\) & \(\omega\) & \(e_{N}\) & \(e_{N+2}\) \\ \hline \({}^{96}\)Mo & 695.84 & 0.00 & 1.50 & 15.00 & 191.56 & -9.96 & 1.12 & 13.69 & 45.0 & 2.48 & 4.00 \\ \({}^{98}\)Mo & 813.16 & 0.00 & -1.70 & -5.00 & 873.21 & -25.08 & 1.50 & -5.00 & 45.0 & 2.73 & -0.87 \\ \({}^{100}\)Mo & 517.29 & -2.00 & -1.49 & 10.00 & 408.52 & -21.98 & 0.05 & 4.34 & 45.0 & 2.24 & 2.88 \\ \({}^{102}\)Mo & 470.23 & -4.93 & -1.50 & -5.00 & 446.78 & -35.00 & 0.14 & 1.08 & 15.0 & 4.00 & -2.15 \\ \({}^{104}\)Mo & 150.00 & -10.00 & -1.34 & 7.02 & 450.04 & -35.00 & 0.49 & -0.63 & 15.0 & 2.11 & -2.05 \\ \({}^{106}\)Mo & 294.38 & -15.00 & -0.58 & 0.02 & 263.83 & -29.90 & 0.37 & 7.68 & 15.0 & 1.90 & 1.93 \\ \({}^{108}\)Mo & 294.38\({}^{\rm a}\) & -15.00\({}^{\rm a}\) & -0.58\({}^{\rm a}\) & 0.02\({}^{\rm a}\) & 203.58 & -28.85 & 0.12 & 8.98 & 15.0 & 1.90\({}^{\rm b}\) & 2.12\({}^{\rm c}\) \\ \({}^{110}\)Mo & 294.38\({}^{\rm a}\) & -15.00\({}^{\rm a}\) & -0.58\({}^{\rm b}\) & 0.02\({}^{\rm a}\) & 200.00 & -31.79 & 0.01 & 7.58 & 15.0 & 1.90\({}^{\rm b}\) & 2.12\({}^{\rm c}\) \\ \end{tabular} \end{table} Table 1: Hamiltonian and \(\hat{T}(E2)\) parameters resulting from the study of Mo isotopes in the present work. All quantities have dimensions of energy (given in keV), except \(\chi_{N}\) and \(\chi_{N+2}\), which are dimensionless, and \(e_{N}\) and \(e_{N+2}\), which are given in units of \(\sqrt{\rm W.u}\).. It should be noted that the value of \(\Delta^{N+2}=1500\) keV is fixed for all isotopes. keV in the Ru chain. 
Thus, both configurations in Ru exhibit a \(\gamma\)-unstable character. The value of \(\omega\) is constant in Ru and in the majority of Mo isotopes, with \(\omega=15\) keV, except for \({}^{96-100}\)Mo where \(\omega=45\) keV. A value of \(\Delta^{N+2}=1500\) keV is employed for the entire Mo chain, which is compatible with the values used for Zr (\(\Delta^{N+2}=820-3200\) keV) and Sr (\(\Delta^{N+2}=1360-1900\) keV). For the entire Ru chain, a value of \(\Delta^{N+2}=2200\) keV is considered, but it should be noted that this value is only constrained by the experimental information of \({}^{100}\)Ru. ## IV Correlation energy and unperturbed energy spectra The position of intruder states is generally expected to be at higher energy compared to regular states due to the creation of a 2p-2h excitation across the shell gap. The parameter \(\Delta^{N+2}=1500\) keV for Mo and \(\Delta^{N+2}=2200\) keV for Ru represents the energy needed for this excitation. However, in practice, the energy of the intruder states is corrected by the pairing energy gain resulting from the formation of two extra \(0^{+}\) pairs [82; 83]. The presence of extra bosons leads to a reduction in energy for the considered configuration. As a result, the lowest energies are expected to appear around the mid-shell, and the reduction in energy will be more significant for the intruder configuration due to the larger number of bosons (two units larger). Therefore, the actual energies of intruder states may differ from the initial expectation based solely on the shell gap energy. To gain a better understanding of the energy systematics in the regular and intruder configurations, we can examine the absolute energies of the lowest regular and intruder \(0^{+}\) states by suppressing the mixing term in the IBM-CM Hamiltonian. In this analysis, we consider pure states where the reference regular energy corresponds to 0, while the intruder energy corresponds to \(\Delta^{N+2}\). In Fig. 3, panel (a) depicts the energy curves for Mo isotopes, while panel (b) shows the energy curves for Ru isotopes. In the case of Mo isotopes, a notable observation is the crossing of the regular and intruder energies between \(A=100\) and \(A=102\), corresponding to a neutron number of 60. From \(A=102\) onwards, the intruder configuration becomes the ground state. The correlation energy of the regular configuration increases gradually, while the intruder configuration exhibits a much more rapid change. This behavior can be attributed to the larger number of bosons in the intruder configuration and to the fact that \(|\kappa_{N+2}|>|\kappa_{N}|\). The minima of both curves occur precisely at the mid-shell region. Turning to the case of Ru isotopes, we observe that the regular configuration remains the lowest energy state throughout the isotopic chain. Notably, the gain in correlation energy is more significant for the regular state. However, it is important to note that the Hamiltonian parameters for the intruder sector are kept constant and are the same as those used for \({}^{100}\)Ru. Therefore, these results should be interpreted with caution, considering the potential limitations of using the same Hamiltonian parameters in the intruder sector for all Ru isotopes. Figure 3: Absolute energy of the lowest unperturbed regular (red) and intruder (green) \(0^{+}_{1}\) states for \({}^{96-110}\)Mo (a) and \({}^{98-114}\)Ru (b). Thin dashed lines correspond to the reference lines for regular (red) and intruder (green) configurations (see text).
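Since the curves in Fig. 3 are obtained with the mixing term switched off, it may be useful to recall schematically how the mixing modifies the picture near the crossing. The following two-level sketch in Python is only illustrative; the energies and mixing strength (`E_reg`, `E_intr`, `v_mix`) are placeholders and do not correspond to the fitted parameters of this work.

```python
# Schematic two-configuration mixing for the lowest 0+ states: the regular and
# intruder unperturbed energies are coupled by an effective mixing matrix element,
# and the physical states are obtained by diagonalizing the 2x2 matrix.
import numpy as np

E_reg = 0.000   # MeV, unperturbed regular 0+ energy (placeholder)
E_intr = 0.200  # MeV, unperturbed intruder 0+ energy near the crossing (placeholder)
v_mix = 0.050   # MeV, effective mixing matrix element (placeholder)

H = np.array([[E_reg, v_mix],
              [v_mix, E_intr]])

eigval, eigvec = np.linalg.eigh(H)
ground, excited = eigval
intruder_fraction = eigvec[1, 0] ** 2  # intruder content of the perturbed ground state

print(f"perturbed 0+ energies: {ground:.3f}, {excited:.3f} MeV")
print(f"intruder content of the ground state: {intruder_fraction:.2f}")
```

Close to the crossing the two unperturbed states repel each other and exchange character, which is the mechanism behind the configuration inversion discussed above for Mo around \(A=102\).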
Overall, these energy correlation plots provide insights into the relative energies of the regular and intruder configurations in Mo and Ru isotopes, highlighting their different behaviors and the impact of correlation energy on their ordering. To gain an initial understanding of the distribution of intruder and regular states in the spectrum, we can examine the unperturbed spectra of Mo and Ru isotopes, with the regular ground state as the reference. Fig. 4 illustrates these spectra, with panel (a) representing Mo isotopes and panel (b) representing Ru isotopes. Full lines are used for the regular states and dashed lines for the intruder ones. In the case of Mo isotopes (panel (a)), we observe that the regular configuration exhibits a more vibrational character for the lighter isotopes. However, as the mass increases, it switches to a more rotational or O(6) behavior. This transition is evident from the reduction in excitation energies of the \(2^{+}\) and \(4^{+}\) states. On the other hand, the intruder configuration demonstrates a more prominent rotational character throughout the isotopic chain, which becomes more evident for isotopes with \(A>100\). In the case of Ru isotopes (panel (b)), the intruder configuration is considerably higher in energy, with a minimum occurring at \(A=98-102\). Beyond this region, the energy of the intruder configuration increases rapidly with increasing mass. The regular configuration, on the other hand, displays a clear vibrational pattern in the lighter isotopes. It is possible to easily identify the two-phonon triplet (\(4^{+}\), \(2^{+}\), \(0^{+}\)) or the three-phonon quintuplet (\(6^{+}\), \(4^{+}\), \(3^{+}\), \(2^{+}\), \(0^{+}\)). However, in the heavier isotopes, the presence of a \(\gamma\)-unstable structure becomes evident. The clear manifestation of this structure is seen in the seniority two doublet (\(2^{+}\) and \(4^{+}\)) and the seniority three quartet (\(6^{+}\), \(4^{+}\), \(3^{+}\), and \(0^{+}\)). Notably, at \(A=104\) (neutron number 60), there is a transition point where the system switches from one structure to the other. Overall, these unperturbed spectra provide valuable insights into the nature of intruder and regular states in Mo and Ru isotopes, highlighting the vibrational and rotational characteristics and the presence of clear structural patterns in the heavier isotopes. Figure 4: Energy spectra for the Mo isotopes (panel (a)) and Ru isotopes (panel (b)), obtained from the IBM-CM Hamiltonian presented in Tables 1 and 2, respectively. For these calculations, the mixing term in the Hamiltonian has been switched off. For each angular momentum, the energy levels of the two lowest-lying regular and intruder states are displayed. The regular states are represented by full lines, while the intruder states are shown with dashed lines. ## V Detailed comparison for energy spectra and \(B(E2)\) transition rates In this section, we compare the theoretical calculations with the experimental data up to an excitation energy of approximately 3 MeV. Fig. 5 presents a detailed comparison between the experimental excitation energies of Mo (panel (a)) and their corresponding theoretical values (panel (b)). Similarly, it compares the experimental excitation energies of Ru (panel (c)) with their theoretical counterparts (panel (d)). It is important to note that, throughout the entire chain, the theoretical \(2_{1}^{+}\) excitation energy closely matches the experimental data.
Consequently, we utilize this experimental energy as a reference to normalize the theoretical spectra (refer to Section 3.2 of Ref. [76] for more details). Beginning with the Mo isotopes, the spectra of the lighter isotopes exhibit the expected characteristics for nuclei near the neutron number 50 shell closure. Additionally, there is a sudden increase in the excitation energies of certain states attributed to the presence of the neutron number 56 subshell closure. Subsequently, the spectra evolve smoothly towards a more collective behavior, characterized by a compressed spectrum. The IBM-CM accurately reproduces all of these features, particularly the position of the \(0_{2}^{+}\) state along the entire chain. In the case of Ru isotopes, the theoretical energies quantitatively reproduce the experimental ones, clearly depicting a transition from a purely vibrational-like spectrum (where the one-, two-, and three-phonon states are easily identified) to an O(6) spectrum. However, it should be noted that the experimental levels are more scattered compared to the theoretical ones. There is a notable discrepancy observed for the \(0_{3}^{+}\) state in \({}^{106}\)Ru, which appears much higher in energy than predicted by the theoretical model. It is worth mentioning that this particular state was not included in the fitting procedure. Additionally, a slight discrepancy is observed for the \(0_{2}^{+}\) state in the case of \({}^{108-110}\)Ru, where its energy is slightly higher than the corresponding experimental value. Figure 5: Excitation energies (up to \(E\approx 3.0\) MeV) for Mo in panel (a) with the experimental data and in panel (b) with the IBM-CM theoretical results. Same information for Ru isotopes, in panel (c) experimental data and in panel (d) IBM-CM theoretical results. Only two excited states (if known experimentally) per angular momentum are plotted. The information regarding \(B(E2)\) transitions is presented through a series of tables and figures. Tables 3 and 4 provide a detailed comparison between the known experimental values and the corresponding theoretical ones for Mo and Ru, respectively. Additionally, Figures 6 and 7 illustrate selected intra- and interband transitions, respectively, highlighting the results for both Mo and Ru. Overall, there is a satisfactory agreement between the theoretical predictions and experimental data, with only a few specific discrepancies. Notably, in \({}^{98}\)Mo, the model predicts a smaller \(B(E2;2^{+}_{2}\to 2^{+}_{1})\) value than observed experimentally. Additionally, it is worth mentioning the nearly equal \(B(E2)\) values for transitions within the yrast band. Generally, in both chains of isotopes, the natural increase of \(B(E2)\) values with angular momentum, reaching a maximum at mid-shell, is accurately reproduced by the model. The reliable reproduction of the transition rates serves as a stringent test for the model. Therefore, the agreement obtained between theory and experiment demonstrates the reliability of the presented calculations, particularly in cases where experimental information is more abundant. Figure 6: Comparison of the absolute \(B(E2)\) transition probabilities along the yrast band, given in W.u. Panels (a) and (c) correspond to the known experimental data for Mo and Ru, respectively, and panels (b) and (d) to the theoretical IBM-CM results, also for Mo and Ru, respectively. Figure 7: Comparison of the few non-yrast intraband absolute \(B(E2)\) transition probabilities in Mo and Ru, given in W.u.
Panels (a) and (c) correspond to the known experimental data for Mo and Ru, respectively, panel (b) and (d) to the theoretical IBM-CM results, also for Mo and Ru, respectively. Figure 8: Experimental excitation energies and absolute \(B(E2)\) transition rates (given in W.u.) for selected states in \({}^{100-108}\)Mo. Figure 9: The same as Fig. 8 but for the theoretical IBM-CM results. \begin{table} \begin{tabular}{l l l l} Isotope & Transition & Experiment & IBM-CM \\ \hline \({}^{100}\)Mo & \(2^{+}_{1}\to 0^{+}_{1}\) & 20.7(4) & 20.7 \\ & \(0^{+}_{2}\to 2^{+}_{1}\) & 51(7) & 40 \\ & \(2^{+}_{2}\to 2^{+}_{1}\) & 16.4(24) & 27.5 \\ & \(2^{+}_{2}\to 0^{+}_{1}\) & 1.10(11) & 0.14 \\ & \(2^{+}_{2}\to 2^{+}_{1}\) & \(<28\) & 5 \\ & \(2^{+}_{3}\to 0^{+}_{1}\) & \(<0.18\) & 0.32 \\ & \(2^{+}_{4}\to 2^{+}_{1}\) & 0.43(20) & 0.07 \\ & \(2^{+}_{4}\to 0^{+}_{1}\) & 0.080(11) & 0.022 \\ & \(4^{+}_{4}\to 2^{+}_{1}\) & 41(7) & 34 \\ & \(4^{+}_{2}\to 4^{+}_{1}\) & 0.18(\(\begin{subarray}{c}+9\\ 10\end{subarray}\)) & 14 \\ & \(4^{+}_{2}\to 2^{+}_{2}\) & 22.0(\(\begin{subarray}{c}+6\\ 10\end{subarray}\)) & 56.4 \\ & \(4^{+}_{2}\to 2^{+}_{1}\) & 1.9(\(\begin{subarray}{c}+5\\ 9\end{subarray}\)) & 0.0 \\ & \(4^{+}_{2}\to 2^{+}_{1}\) & \(<72\) & 7 \\ & \(4^{+}_{3}\to 2^{+}_{1}\) & \(<0.72\) & 0.08 \\ \end{tabular} \end{table} Table 3: Comparison of the experimental absolute \(B(E2)\) values (given in W.u.) with the IBM-CM Hamiltonian results for Mo isotopes. Data are taken from the Nuclear Data Sheets [51; 52; 53; 54; 55; 56; 57; 58; 59; 60] complemented with references presented in section II. \begin{table} \begin{tabular}{l c c c} Isotope & Transition & Experiment & IBM-CM \\ \hline & \(2^{+}_{5}\to 3^{+}_{1}\) & \(140(^{+30}_{-40})\) & 115 \\ & \(2^{+}_{5}\to 2^{+}_{3}\) & \(4(^{+8}_{-4})\) & 12 \\ & \(6^{+}_{1}\to 4^{+}_{1}\) & \(<290\) & 79 \\ & \(6^{+}_{2}\to 4^{+}_{3}\) & \(<52\) & 96 \\ & \(6^{+}_{2}\to 4^{+}_{2}\) & \(<1.1\) & 1.8 \\ & \(6^{+}_{2}\to 4^{+}_{1}\) & \(<47\) & 0.5 \\ & \(6^{+}_{2}\to 6^{+}_{1}\) & \(<65\) & 17 \\ & \(3^{+}_{1}\to 2^{+}_{2}\) & \(<1.8\) & 25 \\ & \(3^{+}_{1}\to 2^{+}_{1}\) & \(<1.3\) & 0.06 \\ & \(5^{+}_{1}\to 4^{+}_{3}\) & \(<1700\) & 20 \\ & \(5^{+}_{1}\to 3^{+}_{1}\) & \(<2000\) & 69 \\ & \(5^{+}_{1}\to 4^{+}_{2}\) & \(<100\) & 3 \\ & \(3^{+}_{3}\to 4^{+}_{1}\) & \(2.7(^{+1.7}_{-2.7})\) & 0.1 \\ & \(3^{+}_{3}\to 2^{+}_{3}\) & \(4.1(^{+2.4}_{-4.1})\) & 0.1 \\ & \(3^{+}_{3}\to 2^{+}_{1}\) & \(0.19(^{+10}_{-19})\) & 0.00 \\ \hline \({}^{78}\)Mo & \(2^{+}_{1}\to 0^{+}_{1}\) & \(20.1(4)\) & 19 \\ & \(2^{+}_{1}\to 0^{+}_{2}\) & \(9.7(^{+10}_{-25})\) & 6.5 \\ & \(4^{+}_{1}\to 2^{+}_{1}\) & \(42.3(^{+9}_{-8})\) & 23.5 \\ & \(6^{+}_{1}\to 4^{+}_{1}\) & \(10.1(4)\) & 23.3 \\ & \(2^{+}_{2}\to 0^{+}_{1}\) & \(1.02(^{+15}_{-12})\) & 7.90 \\ & \(2^{+}_{2}\to 0^{+}_{2}\) & \(2.3(^{+5}_{-4})\) & 2.3 \\ & \(2^{+}_{2}\to 2^{+}_{1}\) & \(48(^{+9}_{-8})\) & 2 \\ & \(4^{+}_{1}\to 2^{+}_{2}\) & \(15.2(^{+33}_{-30})\) & 16.6 \\ & \(2^{+}_{3}\to 4^{+}_{1}\) & \(14(4)\) & 5 \\ & \(2^{+}_{3}\to 2^{+}_{2}\) & \(<22\) & 3.5 \\ & \(2^{+}_{3}\to 2^{+}_{1}\) & \(3.0(7)\) & 30.9 \\ & \(2^{+}_{3}\to 0^{+}_{2}\) & \(7.5(^{+6}_{-5})\) & 0.0 \\ & \(2^{+}_{3}\to 0^{+}_{1}\) & \(0.032(^{+7}_{-6})\) & 0.228 \\ \hline \({}^{100}\)Mo & \(2^{+}_{1}\to 0^{+}_{1}\) & \(37.6(9)\) & \(37.5\) \\ & \(0^{+}_{2}\to 2^{+}_{1}\) & \(89(3)\) & 76 \\ & \(4^{+}_{1}\to 2^{+}_{1}\) & \(69(6)\) & 71 \\ & \(6^{+}_{1}\to 4^{+}_{1}\) & \(94(^{+16}_{-12})\) & 100 \\ & \(2^{+}_{2}\to 0^{+}_{1}\) & \(0.62(6)\) & 0.03 \\ & 
\(2^{+}_{2}\to 0^{+}_{2}\) & \(5.7(^{+14}_{-11})\) & 2.2 \\ & \(2^{+}_{2}\to 2^{+}_{1}\) & \(52(7)\) & 70 \\ & \(2^{+}_{3}\to 4^{+}_{1}\) & \(36(^{+34}_{-20})\) & 42 \\ & \(2^{+}_{3}\to 0^{+}_{2}\) & \(15(^{+5}_{-3})\) & 70 \\ & \(2^{+}_{3}\to 2^{+}_{1}\) & \(0.28(^{+15}_{-9})\) & 0.11 \\ & \(4^{+}_{2}\to 2^{+}_{2}\) & \(30(^{+7}_{-2})\) & 21 \\ & \(8^{+}_{1}\to 6^{+}_{1}\) & \(122(^{+52}_{-17})\) & 121 \\ \hline \({}^{102}\)Mo & \(2^{+}_{1}\to 0^{+}_{1}\) & \(74(9)\) & 74 \\ & \(0^{+}_{3}\to 2^{+}_{1}\) & \(70(30)\) & 30 \\ & \(4^{+}_{1}\to 2^{+}_{1}\) & \(89(18)\) & 105 \\ \hline \({}^{104}\)Mo & \(2^{+}_{1}\to 0^{+}_{1}\) & \(92(6)\) & 92 \\ & \(4^{+}_{1}\to 2^{+}_{1}\) & \(110(4)\) & 133 \\ & \(6^{+}_{1}\to 4^{+}_{1}\) & \(109(4)\) & 144 \\ & \(8^{+}_{1}\to 6^{+}_{1}\) & \(81(4)\) & 138 \\ \hline \({}^{106}\)Mo & \(2^{+}_{1}\to 0^{+}_{1}\) & \(102.3(25)\) & 102.3 \\ & \(4^{+}_{1}\to 2^{+}_{1}\) & \(140\) (30) & 146 \\ & \(6^{+}_{1}\to 4^{+}_{1}\) & \(130(60)\) & 159 \\ & \(8^{+}_{1}\to 6^{+}_{1}\) & \(89(12)\) & 160 \\ & \(10^{+}_{1}\to 8^{+}_{1}\) & \(93(13)\) & 66 \\ \hline \({}^{108}\)Mo & \(2^{+}_{1}\to 0^{+}_{1}\) & \(140(90)\) & 140 \\ \hline \end{tabular} \end{table} Table 3: Comparison of the experimental absolute \(B(E2)\) values (given in W.u.) with the IBM-CM Hamiltonian results for Mo isotopes. Data are taken from the Nuclear Data Sheets [51; 52; 53; 54; 55; 56; 57; 58; 59; 60] complemented with references presented in section II. Figs. 8 and 9 display the detailed excitation energies up to approximately 3 MeV and the \(B(E2)\) transition rates for both experimental data and theoretical results. These figures focus on a selected set of Mo isotopes, namely \begin{table} \begin{tabular}{l l l l} Isotope & Transition & Experiment & IBM-CM \\ \hline \({}^{98}\)Ru & \(2_{1}^{+}\to 0_{1}^{+}\) & 29.8(10) & 29.8 \\ & \(0_{2}^{+}\to 2^{+}\) & 42 (\({}^{+12}_{-11}\)) & 33 \\ & \(4_{1}^{+}\to 2^{+}\) & 57(4) & 43 \\ & \(6_{1}^{+}\to 4_{1}^{+}\) & 12.8(\({}^{+17}_{-14}\)) & 40.9 \\ & \(2_{2}^{+}\to 0_{1}^{+}\) & 1.04(\({}^{+12}_{-14}\)) & 0.00 \\ & \(2_{2}^{+}\to 2_{1}^{+}\) & 46(\({}^{+7}_{-6}\)) & 43 \\ & \(8_{1}^{+}\to 6_{1}^{+}\) & 2.5(\({}^{+5}_{-31}\)) & 26 \\ & \(10_{1}^{+}\to 8_{1}^{+}\) & 1.27(\({}^{+2}_{-32}\)) & 0.43 \\ \hline \({}^{100}\)Ru & \(2_{1}^{+}\to 0_{1}^{+}\) & 35.7(3) & 35.8 \\ & \(0_{2}^{+}\to 2_{1}^{+}\) & 35(5) & 32 \\ & \(4_{1}^{+}\to 2_{1}^{+}\) & 52(4) & 52 \\ & \(2_{2}^{+}\to 2_{1}^{+}\) & 31(6) & 52 \\ & \(2_{2}^{+}\to 0_{1}^{+}\) & 2.0(4) & 0.0 \\ & \(2_{3}^{+}\to 4_{1}^{+}\) & 17(5) & 11 \\ & \(2_{3}^{+}\to 0_{2}^{+}\) & 37(\({}^{+8}_{-8}\)) & 23 \\ & \(2_{3}^{+}\to 2_{1}^{+}\) & 1.24(\({}^{+9,43}_{-0,53}\)) & 0.00 \\ & \(2_{3}^{+}\to 0_{1}^{+}\) & 0.43(10) & 0.02 \\ & \(2_{4}^{+}\to 0_{3}^{+}\) & 270(\({}^{+60}_{-50}\)) & 196 \\ & \(2_{4}^{+}\to 4_{1}^{+}\) & 1.9(\({}^{+0,6}_{-0.5}\)) & 0.2 \\ & \(2_{1}^{+}\to 0_{2}^{+}\) & 1.9(5) & 0.0 \\ & \(4_{2}^{+}\to 2_{2}^{+}\) & 41(\({}^{+27}_{-21}\)) & 29 \\ & \(4_{2}^{+}\to 4_{1}^{+}\) & 27(\({}^{+18}_{-14}\)) & 26 \\ & \(4_{2}^{+}\to 2_{1}^{+}\) & 1.9(\({}^{+13}_{-10}\)) & 0.0 \\ & \(4_{3}^{+}\to 2_{3}^{+}\) & 77(\({}^{+32}_{-29}\)) & 7 \\ & \(4_{3}^{+}\to 4_{1}^{+}\) & 1.8(\({}^{+8}_{-8}\)) & 0 \\ & \(4_{3}^{+}\to 2_{1}^{+}\) & 0.9(4) & 0.3 \\ & \(3_{1}^{+}\to 2_{2}^{+}\) & 9.6(\({}^{+46}_{-41}\)) & 39.2 \\ & \(3_{1}^{+}\to 4_{1}^{+}\) & 15(5) & 16 \\ & \(3_{1}^{+}\to 2_{1}^{+}\) & 3.9(\({}^{+13}_{-12}\)) & 0 \\ \hline \({}^{102}\)Ru & \(2_{1}^{+}\to 0_{1}^{+}\) & 44.6(7) & 44.6 \\ & \(0_{2}^{+}\to 
2_{1}^{+}\) & 35(6) & 36 \\ & \(4_{1}^{+}\to 2_{1}^{+}\) & 66(11) & 66 \\ & \(6_{1}^{+}\to 4_{1}^{+}\) & 68(25) & 74 \\ & \(2_{2}^{+}\to 0_{1}^{+}\) & 1.14(15) & 0.00 \\ & \(2_{2}^{+}\to 2_{1}^{+}\) & 32(5) & 66 \\ & \(8_{1}^{+}\to 6_{1}^{+}\) & 56(19) & 70 \\ & \(10_{1}^{+}\to 8_{1}^{+}\) & 57(21) & 58 \\ \hline \({}^{103}\)Ru & \(2_{1}^{+}\to 0_{1}^{+}\) & 57.9(11) & 57.1 \\ & \(0_{2}^{+}\to 2_{1}^{+}\) & 25(3) & 21 \\ & \(4_{1}^{+}\to 2_{1}^{+}\) & 83(9) & 81 \\ & \(2_{2}^{+}\to 0_{1}^{+}\) & 2.8(5) & 0.0 \\ & \(2_{2}^{+}\to 2_{1}^{+}\) & 55(6) & 81 \\ \hline \({}^{100}\)Ru & \(2_{1}^{+}\to 0_{1}^{+}\) & 66(10) & 66 \\ \hline \({}^{105}\)Ru & \(2_{1}^{+}\to 0_{1}^{+}\) & 62(6) & 62 \\ & \(4_{1}^{+}\to 2_{1}^{+}\) & 102(8) & 85 \\ & \(2_{2}^{+}\to 0_{1}^{+}\) & 0.5(4) & 0.0 \\ & \(2_{3}^{+}\to 4_{1}^{+}\) & 0.08(7) & 0.00 \\ & \(2_{3}^{+}\to 0_{1}^{+}\) & 0.005(3) & 0.000 \\ \hline \({}^{107}\)Ru & \(2_{1}^{+}\to 0_{1}^{+}\) & 66(5) & 66 \\ & \(4_{1}^{+}\to 2_{1}^{+}\) & 86(10) & 91 \\ & \(6_{1}^{+}\to 4_{1}^{+}\) & 120(50) & 101 \\ & \(2_{2}^{+}\to 0_{1}^{+}\) & 0.6(3) & 0.0 \\ \hline \({}^{112}\)Ru & \(2_{1}^{+}\to 0_{1}^{+}\) & 70(7) & 70 \\ & \(8_{1}^{+}\to 6_{1}^{+}\) & 82(13) & 106 \\ & \(10_{1}^{+}\to 8_{1}^{+}\) & 85(13) & 100 \\ & \(7_{1}^{+}\to 5_{1}^{+}\) & 83(12) & 68 \\ \hline \end{tabular} \end{table} Table 4: Same as Table 3 but for Ru isotopes. \({}^{100-108}\)Mo, where the coexistence of two bands is most evident. The separation into bands has been performed by first considering the yrast band and then grouping the remaining levels around the \(0^{+}\) or \(2^{+}\) bandheads, or according to an O(6) scheme. In the case of \({}^{100}\)Mo, the yrast band exhibits a predominantly vibrational character, which is accurately reproduced by the IBM-CM calculation. However, the remaining states cannot be easily grouped in any obvious manner. The main features of the \(B(E2)\) transition rates are generally well reproduced, except for the \(2^{+}_{3}\to 0^{+}_{2}\) transition, where the theoretical model predicts a larger value than observed experimentally. Moving on to \({}^{102}\)Mo, the yrast band begins to deviate from the harmonic behavior in terms of both energies and \(B(E2)\) values. Nevertheless, the energies and \(B(E2)\) values are correctly reproduced by the theoretical calculations. In the cases of \({}^{104}\)Mo and \({}^{106}\)Mo, the spectra appear quite similar, and the IBM-CM calculations accurately reproduce them. However, based on the analysis presented in Section VI, the \(0^{+}_{2}\) state is regular in the former case while is intruder in the latter, and the \(2^{+}_{2}\) state is fully mixed in the former case but is intruder in the latter. Lastly, in the isotope \({}^{108}\)Mo, part of the spectra exhibit an O(6) character, although there are no known experimental \(B(E2)\) values available for comparison. Table 3 provides a comparison between the relevant \(B(E2)\) values and their corresponding theoretical predictions, which complements the information provided in the former figures. Figs. 10 and 11 illustrate the experimental and theoretical spectra for the range of \({}^{100-108}\)Ru isotopes. The overall agreement between experiment and theory is remarkable, with no significant discrepancies observed. As demonstrated in Fig. 5, nuclei up to \({}^{104}\)Ru exhibit a vibrational-like structure, allowing for the identification of different members of one-, two-, three-, and even four-phonon multiplets. 
Starting from \({}^{106}\)Ru, a transition towards an O(6)-like structure becomes evident, with \({}^{104}\)Ru being the critical point of this transition. Another noteworthy feature is the absence of intruder states, except for the \(0^{+}_{4}\) and \(0^{+}_{5}\) states in \({}^{100}\)Ru (not shown in the figure). This highlights the consistency of the theoretical model in reproducing the experimental spectra. Table 4 provides a comparison between the relevant \(B(E2)\) values and their corresponding theoretical predictions for Ru isotopes, which complements the information provided in the former figures. where \(k\) refers to the different states with a given \(J\), while \(i\), and \(j\) run over the bases of the \([N]\) and \([N+2]\) sectors, respectively. The weight of the wave function contained within the \([N]\)-boson subspace, can then be defined as the sum of the squared amplitudes, \[w^{k}(J,N)\equiv\sum_{i}\mid a_{i}^{k}(J;N)\mid^{2}. \tag{6}\] Fig. 12 illustrates \(w^{k}(J)\) for the first two states of each angular momentum, with the full line representing the first state and the dashed line representing the second state. For Mo isotopes, the ground state undergoes a rapid transition from an almost entirely regular structure up to \(A=100\), to a fully intruder one from \(A=102\) onwards. This trend is also observed for the \(2_{1}^{+}\), \(4_{1}^{+}\), \(6_{1}^{+}\), and \(8_{1}^{+}\) states, with the exception of \(A=96\), where the \(6_{1}^{+}\) and \(8_{1}^{+}\) states correspond to intruder configurations. No clear trend can be observed for odd angular momenta, both for the first and second members. The second \(0^{+}\) state predominantly exhibits an intruder character in most cases, except for \(A=102-104\), where it is fully regular. On the other hand, the second \(2^{+}\) state shows significant mixing for \(A=96-100\) and \(104\), while being almost purely intruder for the remaining cases. In the case of Ru isotopes, the observed trend is relatively straightforward. For the first member of all angular Figure 11: The same as Fig. 10 but for theoretical results. Figure 12: Regular content of the Mo (panels (a) and (b)) and Ru (panels (c) and (d)) for the two lowest-lying states for each \(J\) value (full lines with closed symbols correspond with the first state while dashed lines correspond with the second state) resulting from the IBM-CM calculation. momentum states a regular wave function is predominant. However, for the second member, in some cases it undergoes a transition from an intruder character to a regular one starting in the majority of cases from \(A=104\) and onwards. The representation of Fig. 12 can become cumbersome, especially in cases like panel (b) where the levels cross each other. To provide a clearer visualization, it is more effective to combine the fraction of the wave function within the regular sector with the excitation energy of the states. In Fig. 13, we present the regular content of the first four states per angular momentum along with their corresponding excitation energies. The size of each dot associated with a state is proportional to the regular content of its wave function. To provide a reference point, the size of the dot for the \(0^{+}_{1}\) states in \({}^{98}\)Ru (panel (i)) corresponds to \(100\%\) of regular content. 
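To make the bookkeeping behind Figs. 12 and 13 concrete, a minimal sketch of how the regular content of Eq. (6) is evaluated from the mixed wave-function amplitudes is given below. The amplitude arrays are purely illustrative placeholders (they are not taken from the actual IBM-CM diagonalization), and the only assumption is that the full wave function is normalized over the \([N]\) and \([N+2]\) sectors together.

```python
import numpy as np

def regular_content(a_regular, b_intruder):
    """Weight of the wave function in the [N] (regular) sector, Eq. (6):
    w = sum_i |a_i|^2, assuming sum_i |a_i|^2 + sum_j |b_j|^2 = 1 after normalization."""
    a = np.asarray(a_regular)
    b = np.asarray(b_intruder)
    norm = np.sum(np.abs(a) ** 2) + np.sum(np.abs(b) ** 2)
    return np.sum(np.abs(a) ** 2) / norm

# Illustrative amplitudes for a strongly mixed 0+ state (hypothetical values).
a = [0.62, 0.31, -0.12]   # components in the [N] basis
b = [0.65, -0.27, 0.08]   # components in the [N+2] basis
print(f"regular content w = {regular_content(a, b):.2f}")  # ~0.50, i.e. strong mixing
```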
In the case of Mo isotopes, it is evident how the regular content of the \(0^{+}_{1}\) state transitions from the ground state in \({}^{96-100}\)Mo to the first and second excited \(0^{+}\) states in \({}^{102-104}\)Mo and \({}^{106-110}\)Mo, respectively. Similarly, for the \(2^{+}\) and \(4^{+}\) states, the regular content transitions from the first member in \({}^{96-102}\)Mo to the second or third member in \({}^{104-110}\)Mo. For angular momenta \(6^{+}\) and \(8^{+}\), the regular content is mainly concentrated in the second and third members throughout the isotopic chain. In contrast, the situation in Ru is much simpler, with the majority of states belonging to the regular sector, as indicated by their significant regular content. ## VII Study of other observables: radii, isotopic shifts, and two-neutron separation energies ### Radii and isotopic shifts The nuclear charge radii is an experimental observable and its analysis provides direct information on the presence of deformation in nuclei. In our case, we anticipate to obtain from such analysis some indication on the onset of deformation around neutron number 60. Specifically, we expect to observe a kink in the isotope shift at this point. In this section, we will compare the theoretical values for radii and isotope shifts predicted by the IBM-CM with the experimental data [84]. The value of the nucleus' radius calculated using the IBM is closely associated with the matrix element of the \(\hat{n}_{d}\) operator for the ground state. This value should be combined with a linear trend that depends on the number of Figure 13: The energy systematics of the four lowest states below 4 MeV for both Mo and Ru isotopes. The panels (a)-(h) represent Mo, while panels (i)-(p) correspond to Ru. Each panel corresponds to a specific angular momentum: (a) and (i) for \(J=0\), (b) and (j) for \(J=2\), (c) and (k) for \(J=3\), (d) and (l) for \(J=4\), (e) and (m) for \(J=5\), (f) and (n) for \(J=6\), (g) and (o) for \(J=7\), and (h) and (p) for \(J=8\). The size of each symbol is proportional to the fraction of the wave function lying in the regular sector. The dot for the state \(0^{+}_{1}\) in \({}^{98}\)Ru corresponds to \(100\%\). bosons, which can be easily explained in terms of the liquid drop model. Additionally, in the case of the IBM-CM, it is necessary to consider both the regular and intruder configurations. In summary, in the IBM-CM the nuclear radius can be expressed as, \[r^{2}=r_{c}^{2}+\hat{P}_{N}^{\dagger}(\gamma_{N}\hat{N}+\beta_{N}\hat{n}_{d})\hat {P}_{N}+\hat{P}_{N+2}^{\dagger}(\gamma_{N+2}\hat{N}+\beta_{N+2}\hat{n}_{d})\hat {P}_{N+2}. \tag{7}\] The appearing parameters are common for the entire chain of isotopes and are fixed to best reproduce the experimental data, which are referred to \({}^{108}\)Mo and \({}^{104}\)Ru, respectively. The resulting values for Mo are \(\gamma_{N}=0.221\) fm\({}^{2}\), \(\beta_{N}=-0.627\) fm\({}^{2}\), \(\gamma_{N+2}=0.215\) fm\({}^{2}\), and \(\beta_{N+2}=-0.024\) fm\({}^{2}\), while for Ru, the values are \(\gamma_{N}=0.248\) fm\({}^{2}\) and \(\beta_{N}=0.018\) fm\({}^{2}\). In the case of Ru, there is no dependence on the intruder part. This approach closely follows the methodology of a previous work [85], considering only a single configuration. In the case of Mo, the sudden increase in radius at \({}^{102}\)Mo is correctly captured, along with the overall trend. 
However, the experimental and theoretical results do not coincide within the error bars, similar to what has been observed in other studies [46; 48; 77; 78; 80]. As a matter of fact, the onset of deformation is predicted more strongly than observed. This tiny discrepancy could be connected with the prediction of an equal degree of deformation for \({}^{104-108}\)Ru while experimentally there is a more gradual variation. For Ru, where only the regular contribution is required, the linear trend of the radii is accurately reproduced, and no abrupt changes are observed. Note the different scale between panels (b) and (d). Figure 14: Charge mean-square radii for the Mo (a) and Ru nuclei (c) and isotopic shift for the Mo (b) and Ru nuclei (d). The data are taken from [84]. Lines with dots for theoretical values and dots with error bars for experimental data. For Mo, \(\langle r^{2}\rangle_{ref}\) is fixed to reproduce the value of \(\langle r^{2}\rangle\) in \({}^{96}\)Mo, while for Ru to reproduce \({}^{96}\)Ru. ### Two-neutron separation energies The definition of the S\({}_{2n}\) involves the value of the binding energy of two neighboring nuclei separated by two mass units, having the same value of Z, as expressed by the equation, \[S_{2n}(A)=BE(A)-BE(A-2). \tag{8}\] where \(BE\) represents the binding energy of the nucleus, considered as positive. In the case of the IBM, an additional contribution depending on the number of neutrons and the square of the number of neutrons needs to be added to the calculated binding energy (see [86]). This introduces an extra linear term into the S\({}_{2n}\) value. Therefore, the S\({}_{2n}\) can be expressed as: \[S_{2n}(A)=\mathcal{A}+\mathcal{B}A+BE^{lo}(A)-BE^{lo}(A-2), \tag{9}\] where \(BE^{lo}\) represents the "local" binding energy derived from the IBM Hamiltonian, and the coefficients \(\mathcal{A}\) and \(\mathcal{B}\) are assumed to be constant for an isotopic chain [86]. In the case of IBM-CM calculations, we anticipate that the effective number of bosons, or equivalently the mass number, for the ground state will be influenced by the presence of intruder states. To account for this effect, we propose as an _ansatz_, \[S_{2n}(A)=\mathcal{A}+\mathcal{B}(A+2(1-w))+BE^{lo}(A)-BE^{lo}(A-2), \tag{10}\] where \(w=w^{1}(0)\) (see Eq. (6)). The values of \(\mathcal{A}\) and \(\mathcal{B}\) are determined, once the \(BE^{lo}\)'s are known, through a least-squares fit to the experimental values of S\({}_{2n}\), as explained in detail in [86; 46; 78]. In our case, the obtained values are \(\mathcal{A}=55.2\) MeV and \(\mathcal{B}=-0.407\) MeV for Mo and \(\mathcal{A}=65.7\) MeV and \(\mathcal{B}=-0.499\) MeV for Ru. In Fig. 15, the comparison between experimental and theoretical results is presented, highlighting the excellent agreement observed in both isotopic chains around neutron number 60 (\(A=102\) in Mo and \(A=104\) in Ru). It is important to note that only the first portion of the neutron shell was utilized to determine the parameters \(\mathcal{A}\) and \(\mathcal{B}\). Figure 15: Comparison of experimental and theoretical two-neutron separation energies in Mo (panel (a)) and Ru (panel (b)) isotopes. For the Ru isotopes, a clear linear trend is observed, which is accurately reproduced by the model. In the case of the Mo isotopes, a slight flattening is observed at \(A=102\), corresponding to neutron number 60, although the IBM-CM calculation predominantly exhibits a linear trend. 
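As an illustration of how the ansatz of Eq. (10) is evaluated in practice, the sketch below combines the fitted linear coefficients quoted above for Mo (\(\mathcal{A}=55.2\) MeV, \(\mathcal{B}=-0.407\) MeV) with the regular content \(w\) of the ground state; the local binding energies \(BE^{lo}\) used in the call are hypothetical placeholders rather than values from the actual fit, and \(w=0.02\) is only meant to mimic the almost pure intruder ground state found beyond \(A=100\).

```python
def s2n_ibm_cm(mass_A, w, be_lo_A, be_lo_Am2, coef_a=55.2, coef_b=-0.407):
    """Two-neutron separation energy (MeV) following the ansatz of Eq. (10):
    S2n(A) = A_coef + B_coef*(A + 2*(1 - w)) + BE_lo(A) - BE_lo(A-2),
    with w the regular content of the ground state; the Mo coefficients are the defaults."""
    return coef_a + coef_b * (mass_A + 2.0 * (1.0 - w)) + be_lo_A - be_lo_Am2

# Illustrative call for A = 102; the local binding energies are placeholders.
print(f"S2n(102) ~ {s2n_ibm_cm(102, w=0.02, be_lo_A=1.3, be_lo_Am2=0.9):.1f} MeV")
```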
This specific point corresponds to the intersection of the intruder and regular configurations. It should be noted that in the Mo isotopes, the value \(A=108\) lies beyond the midpoint of the shell, even though the same linear portion as in the first half-shell is considered. The observed discrepancy in \({}^{110}\)Mo arises from the discussion on the values of the linear coefficients in the first and second parts of the shell, as outlined in [86]. ## VIII Nuclear deformation and mean-field energy surfaces One of the goals of this work is to analyze the onset of deformation around neutron number 60. However, it is important to note that deformation itself is not directly an observable. Nevertheless, the IBM provides various approaches for calculating the deformation of a nucleus. The most common approach is the IBM mean-field method, which allows for the calculation of an energy functional based on deformation parameters [87, 88, 89]. In the case of the IBM-CM, an expanded formalism was necessary to simultaneously describe both regular and intruder configurations, which involved the introduction of a matrix coherent-state method [90, 91, 92]. Detailed descriptions of the method and its application to Pt, Hg, Po, Zr, and Sr isotopes can be found in references [46, 48, 78, 80, 93]. Fig. 16 shows the axial energy surfaces and Fig. 17 shows the full \(\beta-\gamma\) plane energy surfaces for Mo isotopes. In the axial case, the unperturbed calculations for regular (red dotted line) and intruder (green dashed line) configurations are presented, along with the full calculations (black solid line). It is evident that the intruder configuration evolves from a spherical shape to a flat surface in \({}^{100}\)Mo, transitioning into an oblate deformed shape and eventually becoming nearly \(\gamma\)-unstable. This configuration gains correlation energy more rapidly than the regular configuration. On the other hand, the regular configuration evolves slowly from a spherical shape to a shallow prolate deformed energy Figure 16: Axial symmetry energy for \({}^{96-110}\)Mo, corresponding to the IBM-CM Hamiltonian provided in Table 1. The full configuration mixing calculation (full black line) is shown together with the unperturbed calculations for the regular sector (red dotted line) and for the intruder configuration (green dashed line). surface. The full calculation clearly depicts the transition from a regular to an intruder ground state. When analyzing the energy surfaces in the \(\beta-\gamma\) plane (Fig. 17), the transition from a spherical to an oblate shape can be observed in \({}^{102}\)Mo. A secondary prolate minimum appears in \({}^{104}\)Mo, and the nuclei become almost \(\gamma\)-unstable in \({}^{108-110}\)Mo. Previous HFB calculations using a Gogny interaction [18; 19] exhibit similar energy surfaces to the present results, although there \({}^{100}\)Mo is already prolate deformed. From \({}^{102}\)Mo onwards, the shape is predominantly oblate, with triaxial minima and shallow valleys in the \(\gamma\) direction. The coexistence of two minima, one oblate and the other prolate, is present in \({}^{104}\)Mo. Note that the situation is similar to the case of Sr where the coexistence of a prolate and an oblate minimum exists [48], but differs from the Zr nuclei where an spherical and a prolate minimum coexist [47]. In Fig. 18, the axial energy surfaces for Ru isotopes are shown and the full \(\beta-\gamma\) plane energy surfaces are displayed in Fig. 19. In the axial case (Fig. 
18), both the unperturbed and full calculations are presented. It is noteworthy that the intruder configuration remains well separated from the regular configuration throughout, and this is consistent with the full calculation. The nuclei start out as spherical but gradually evolve into flatter shapes, reaching full flatness at \({}^{104}\)Ru. Subsequently, they transform into \(\gamma\)-unstable shapes, with the deepest minimum occurring at \({}^{110}\)Ru. When examining the energy surfaces in the \(\beta-\gamma\) plane (Fig. 19), similar conclusions can be drawn. The flattest energy surface is observed at \({}^{104}\)Ru, which serves as a boundary between the spherical and \(\gamma\)-unstable shapes. In a previous study [18], it was found that the lightest Ru isotopes exhibit a slightly prolate shape, while a very shallow triaxial minimum is observed in \({}^{104-106}\)Ru. In \({}^{108-114}\)Ru, oblate minima are obtained, albeit with a very flat \(\gamma\) direction, and in certain cases, they become almost triaxial. To gain a clearer understanding on the evolution of nuclear shape, it is useful to represent the value of \(\beta\) corresponding to the minimum in the energy functional (indicated by red dots in Figs. 17 and 19). It is important to note that the \(\beta\) variable defined in the IBM does not directly correspond to the one used in the collective model. However, there exists a linear relationship [87] connecting both variables. Therefore, the variables presented should be understood as being proportional to the Bohr-Mottelson ones. Note that positive values corresponds to prolate shapes while negative to oblate. Fig. 20 illustrates the IBM \(\beta\) values for Mo and Ru isotopes in panels (a) and (b), respectively. The values correspond to the full calculation (IBM-CM) as well as the regular (\([N]\)) and intruder (\([N+2]\)) configurations, assuming no interaction exists between the two configurations. In the case of Mo isotopes (panel (a)), the regular configuration is spherical in the range \(A=96-102\), transitioning Figure 17: Matrix coherent-state calculation in the \(\beta-\gamma\) plane for \({}^{96-110}\)Mo, corresponding to the IBM-CM Hamiltonian provided in Table 1. The red dot marks the position of the absolute minimum. Figure 19: Same as Fig. 17 but for \({}^{98-114}\)Ru and Table 2. Figure 18: Same as Fig. 16 but for \({}^{98-114}\)Ru and Table 2. to a prolate shape for \(A=104-110\). On the other hand, the intruder configuration is spherical until \(A=100\), but then it rapidly increases its value, becoming oblate. The full calculation exhibits a similar behavior to the intruder configuration, transitioning rapidly from spherical to prolate deformation around \(A=102\). It is worth noting that although the obtained deformation for \(A=110\) is oblate, the potential is almost \(\gamma\)-unstable. Therefore, the sign of \(\beta\) for \({}^{110}\)Mo is unimportant. In both configurations, the deformation develops quite rapidly. Turning to the case of Ru isotopes (panel (b)), the regular configuration, as well as the full calculation, corresponds to spherical shapes for \({}^{98-102}\)Ru. However, they undergo a sudden transition to \(\gamma\)-unstable deformation for \({}^{104-114}\)Ru, with the maximum deformation occurring at the mid-shell. The intruder configuration exhibits a similar behavior, but it is important to remember that the intruder Hamiltonian remains fixed throughout the entire isotope chain. 
Lastly, we will also calculate the value of the deformation in an almost model-independent manner using the work of Kumar, Cline, and their colleagues [94; 95]. This method allows the extraction of deformation using the experimental information coming from Coulomb excitation, which is a powerful tool for accessing information about the shape of a nucleus. The key concept is to utilize the notion of the "equivalent ellipsoid" for a given nucleus. This ellipsoid is defined as uniformly charged with the same charge, possessing the same \(\left<r^{2}\right>\) and \(E2\) moments as the original nucleus characterized by a specific eigenstate. By analyzing measured data from various transitions obtained through Coulomb excitation techniques, it is possible to extract the values of collective model variables, namely \(\beta\) and \(\gamma\), for a given state. This approach provides valuable experimental insights into the deformation of nuclei. The procedure for calculating the deformation parameters involves the use of quadrupole shape invariants. Focusing specifically on the \(0^{+}\) states, the following equations define these shape invariants (see [96] for an application of the method), \[q_{2,i} = \sqrt{5}\left<0^{+}_{i}\left|[\hat{Q}\times\hat{Q}]^{(0)}\right| 0^{+}_{i}\right>, \tag{11}\] \[q_{3,i} = -\sqrt{\frac{35}{2}}\left<0^{+}_{i}\mid[\hat{Q}\times\hat{Q} \times\hat{Q}]^{(0)}\mid 0^{+}_{i}\right>. \tag{12}\] The deformation parameters are directly related to those of the triaxial rigid rotor, denoted as \(q\) and \(\delta\), respectively: \[q=\sqrt{q_{2}}, \tag{13}\] Figure 20: Value of \(\beta\) extracted from the IBM-CM energy surface for Mo (panel (a)) and Ru (panel (b)) isotopes. \([N]\), \([N+2]\), and IBM-CM correspond to a pure regular configuration, a pure intruder configuration, and to the complete calculation. \[\delta=\frac{60}{\pi}\arccos\frac{q_{3}}{q_{2}^{3/2}}, \tag{14}\] where \(\delta\) coincides with the parameter \(\gamma\) of the Bohr-Mottelson model up to a first order approximation. The deformation parameter \(\beta\) can also be obtained from the quadrupole shape invariant (11) (see, e.g., references [63; 69; 97]), \[\beta=\frac{4\pi\sqrt{q_{2}}}{3Zer_{0}^{2}A^{2/3}}, \tag{15}\] where \(e\) is the proton charge and \(r_{0}=1.2\) fm. The theoretical values of \(\beta\), \(\gamma\), \(q^{2}\), and the fraction of wave function belonging to the regular sector, \(w^{k}\) (see Eq. (6)), are presented in Table 5 for each \(0^{+}_{1}\), \(0^{+}_{2}\), and \(0^{+}_{3}\) state across the entire chains of Mo and Ru isotopes. This table reveals the coexistence of different deformations within the same nucleus, with the regular states typically exhibiting less deformation compared to the intruder states. In the case of Mo isotopes, the intruder states generally display a significant oblate deformation that increases with the mass number. However, an exception is observed in \({}^{98}\)Mo, where a low deformation is observed. 
This is consistent \begin{table} \begin{tabular}{c c c c c c|c c c c c} \hline State & Iso & \(q^{2}\) (\(e^{2}b^{2}\)) & \(\beta\) & \(\gamma\) (deg) & \(w^{k}\) & Iso & \(q^{2}\) (\(e^{2}b^{2}\)) & \(\beta\) & \(\gamma\) (deg) & \(w^{k}\) \\ \hline \(0^{+}_{1}\) & \({}^{96}\)Mo & 0.28 & 0.17 & 48 & 0.971 & \({}^{98}\)Ru & 0.40 & 0.20 & 30 & 0.997 \\ \(0^{+}_{2}\) & & 1.63 & 0.42 & 50 & 0.158 & & 0.35 & 0.18 & 30 & 0.988 \\ \(0^{+}_{3}\) & & 0.62 & 0.26 & 46 & 0.579 & & 2.05 & 0.45 & 30 & 0.012 \\ \hline \(0^{+}_{1}\) & \({}^{98}\)Mo & 0.37 & 0.20 & 14 & 0.881 & \({}^{100}\)Ru & 0.50 & 0.22 & 30 & 0.996 \\ \(0^{+}_{2}\) & & 0.12 & 0.11 & 14 & 0.122 & & 0.40 & 0.19 & 30 & 0.995 \\ \(0^{+}_{3}\) & & 0.49 & 0.23 & 0 & 0.730 & & 2.76 & 0.51 & 30 & 0.006 \\ \hline \(0^{+}_{1}\) & \({}^{100}\)Mo & 0.53 & 0.23 & 25 & 0.825 & \({}^{102}\)Ru & 0.64 & 0.24 & 30 & 0.994 \\ \(0^{+}_{2}\) & & 1.22 & 0.35 & 31 & 0.241 & & 0.54 & 0.22 & 30 & 0.992 \\ \(0^{+}_{3}\) & & 0.64 & 0.26 & 13 & 0.763 & & 3.60 & 0.57 & 30 & 0.010 \\ \hline \(0^{+}_{1}\) & \({}^{102}\)Mo & 1.16 & 0.34 & 26 & 0.018 & \({}^{104}\)Ru & 0.84 & 0.27 & 30 & 0.994 \\ \(0^{+}_{2}\) & & 1.48 & 0.39 & 18 & 0.968 & & 0.56 & 0.22 & 30 & 0.996 \\ \(0^{+}_{3}\) & & 0.90 & 0.30 & 23 & 0.020 & & 0.73 & 0.26 & 30 & 0.996 \\ \hline \(0^{+}_{1}\) & \({}^{104}\)Mo & 1.45 & 0.38 & 43 & 0.013 & \({}^{106}\)Ru & 1.00 & 0.29 & 30 & 0.994 \\ \(0^{+}_{2}\) & & 1.35 & 0.36 & 55 & 0.946 & & 0.62 & 0.23 & 30 & 0.995 \\ \(0^{+}_{3}\) & & 1.01 & 0.32 & 44 & 0.430 & & 0.86 & 0.27 & 30 & 0.996 \\ \hline \(0^{+}_{1}\) & \({}^{106}\)Mo & 1.61 & 0.39 & 43 & 0.010 & \({}^{108}\)Ru & 0.96 & 0.29 & 30 & 0.995 \\ \(0^{+}_{2}\) & & 1.16 & 0.33 & 30 & 0.297 & & 0.83 & 0.27 & 30 & 0.997 \\ \(0^{+}_{3}\) & & 0.99 & 0.31 & 24 & 0.694 & & 0.61 & 0.23 & 30 & 0.995 \\ \hline \(0^{+}_{1}\) & \({}^{108}\)Mo & 2.23 & 0.46 & 36 & 0.008 & \({}^{110}\)Ru & 1.05 & 0.30 & 30 & 0.996 \\ \(0^{+}_{2}\) & & 1.95 & 0.43 & 28 & 0.016 & & 0.92 & 0.28 & 30 & 0.997 \\ \(0^{+}_{3}\) & & 1.29 & 0.35 & 27 & 0.603 & & 0.71 & 0.24 & 30 & 0.995 \\ \hline \(0^{+}_{1}\) & \({}^{110}\)Mo & 1.91 & 0.42 & 30 & 0.013 & \({}^{112}\)Ru & 1.14 & 0.30 & 30 & 0.996 \\ \(0^{+}_{2}\) & & 1.68 & 0.39 & 30 & 0.012 & & 0.97 & 0.28 & 30 & 0.998 \\ \(0^{+}_{3}\) & & 0.96 & 0.30 & 21 & 0.910 & & 0.73 & 0.24 & 30 & 0.996 \\ \hline & & & & & & & & & & \\ \end{tabular} \begin{tabular}{c c c c c|c c c c} \hline State & Iso & \(q^{2}\) (\(e^{2}b^{2}\)) & \(\beta\) & \(\gamma\) (deg) & \(w^{k}\) & Iso & \(q^{2}\) (\(e^{2}b^{2}\)) & \(\beta\) & \(\gamma\) (deg) & \(w^{k}\) \\ \hline \(0^{+}_{1}\) & \({}^{96}\)Mo & 0.28 & 0.17 & 48 & 0.971 & \({}^{98}\)Ru & 0.40 & 0.20 & 30 & 0.997 \\ \(0^{+}_{2}\) & & 1.63 & 0.42 & 50 & 0.158 & & 0.35 & 0.18 & 30 & 0.988 \\ \(0^{+}_{3}\) & & 0.62 & 0.26 & 46 & 0.579 & & 2.05 & 0.45 & 30 & 0.012 \\ \hline \(0^{+}_{1}\) & \({}^{98}\)Mo & 0.37 & 0.20 & 14 & 0.881 & \({}^{100}\)Ru & 0.50 & 0.22 & 30 & 0.996 \\ \(0^{+}_{2}\) & & 0.12 & 0.11 & 14 & 0.122 & & 0.40 & 0.19 & 30 & 0.995 \\ \(0^{+}_{3}\) & & 0.49 & 0.23 & 0 & 0.730 & & 2.76 & 0.51 & 30 & 0.006 \\ \hline \(0^{+}_{1}\) & \({}^{100}\)Mo & 0.53 & 0.23 & 25 & 0.825 & \({}^{102}\)Ru & 0.64 & 0.24 & 30 & 0.994 \\ \(0^{+}_{2}\) & & 1.22 & 0.35 & 31 & 0.241 & & 0.54 & 0.22 & 30 & 0.992 \\ \(0^{+}_{3}\) & & 0.64 & 0.26 & 13 & 0.763 & & & 3.60 & 0.57 & 30 & 0.010 \\ \hline \(0^{+}_{1}\) with the "collapse" of the B(E2) values observed in its yrast band, possibly related to the closure of the neutron number 56 subshell. 
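As a quick numerical consistency check of Eq. (15), the snippet below recovers the deformation listed in Table 5 for the \(0^{+}_{1}\) state of \({}^{100}\)Mo from its quadrupole invariant \(q^{2}=0.53\;e^{2}b^{2}\); the unit conversion (1 b \(=100\) fm\(^{2}\)) and \(r_{0}=1.2\) fm follow the definitions given above, with the proton charge set to 1 since \(q\) already carries units of \(e\,\mathrm{fm}^{2}\).

```python
import math

def beta_from_q2(q2_e2b2, Z, A, r0=1.2):
    """Deformation parameter of Eq. (15): beta = 4*pi*sqrt(q2) / (3*Z*e*r0^2*A^(2/3)),
    with q2 given in e^2 b^2 and sqrt(q2) converted to e*fm^2 via 1 b = 100 fm^2."""
    q_efm2 = math.sqrt(q2_e2b2) * 100.0   # e fm^2
    return 4.0 * math.pi * q_efm2 / (3.0 * Z * r0 ** 2 * A ** (2.0 / 3.0))

print(f"beta(100Mo, 0+_1) = {beta_from_q2(0.53, Z=42, A=100):.2f}")  # ~0.23, as in Table 5
```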
The deformation of the first regular state steadily increases until \({}^{100}\)Mo, but for \({}^{102}\)Mo, there is a sudden increase, and the deformation remains relatively constant for the remaining isotopes. Regarding the value of \(\gamma\), clear conclusions can only be drawn in a few cases, such as \({}^{96}\)Mo and \({}^{98}\)Mo, where the states are oblate and prolate, respectively. In other cases, the deformation is compatible with triaxiality. Moving on to the Ru isotopes, the situation is relatively straightforward, with a steady increase in deformation for the regular states, with a fixed \(\gamma\) value of 30 degrees throughout. Only in two cases are intruder states observed, exhibiting a large deformation. As a complement to Table 5, the trend of the value of \(\beta\) for the first two \(0^{+}\) states in Mo and Ru isotopes is depicted in Fig. 21. In Ru isotopes (panel (b)), the behavior is relatively straightforward, with a constant increase in \(\beta\) up to around the mid-shell region, reaching a value of approximately 0.30. This trend is similar to the one shown in Fig. 20. Both states belong to the regular sector. In the case of Mo isotopes (panel (a)), the situation is more complex, especially for the \(0^{+}_{2}\) state. The \(0^{+}_{1}\) state exhibits a rapid increase in \(\beta\) at \(A=102\), rising from 0.23 to 0.34, and then showing a more gradual increase up to the mid-shell region where it reaches its maximum value of 0.42. In addition, it is also of interest to plot the value of \(\beta\) for the first \(0^{+}\) state belonging to the regular sector (\(0^{+}\)reg) and the first \(0^{+}\) state belonging to the intruder sector (\(0^{+}\)int) in Mo isotopes, as shown in panel (c) (note that in Ru only regular states are observed). It can be observed that the regular state exhibits a rapid increase in deformation at \(A=102\) and subsequently shows a steady decrease. On the other hand, the intruder state maintains a relatively high and constant value of \(\beta\), except for the aforementioned exception observed at \(A=98\). ## IX The quest of a quantum phase transition The potential relationship between the phenomenon of shape coexistence and the existence of Quantum Phase Transitions (QPTs) has been the subject of recent investigations [47; 98; 99; 100]. In brief, a QPT occurs in systems where the ground state's structure undergoes a sudden change when a control parameter varies slightly around a specific value [101]. It is important to note that this phenomenon occurs at absolute zero temperature. The presence of a QPT is generally associated with a combination of Hamiltonians possessing different symmetries. Specifically, we can consider a scenario where two Hamiltonians, each associated with a particular symmetry (A or B), are combined. Consequently, the Hamiltonian of the system can be expressed as follows: \[\hat{H}=(1-x)\hat{H}_{A}+x\hat{H}_{B}. \tag{16}\] This formulation allows us to investigate the interplay between different symmetries, A and B, by adjusting the parameter \(x\), which determines the relative weight or contribution of each symmetry to the overall Hamiltonian. In most cases, the symmetry involved in the QPT can be considered as a dynamical symmetry [102]. The QPT occurs at a critical value \(x=x_{c}\), where the wave function undergoes an abrupt transition from having symmetry A to Figure 21: Value of the deformation, \(\beta\), extracted from the value of the quadrupole shape invariants for Mo (panel(a)) and Ru (panel (b)). 
Panel (c) is also for Mo but here the \(\beta\) values for the first regular and the first intruder \(0^{+}\) states are shown. having symmetry B, even though the full Hamiltonian does not possess any specific symmetry except at \(x=0\) or 1. This phenomenon is closely connected to the concept of quasidynamical symmetry proposed by D. Rowe in [103]. The existence of a QPT also implies a sudden change in the so-called order parameter, which vanishes in the symmetric phase and takes a nonzero value in the broken phase [101]. Thus, the order parameter carries information about the symmetry of the system's ground state. QPTs can be classified in a manner similar to the phase transitions that occur in macroscopic systems at non-zero temperature, employing the Erhenfest classification [104]. Based on this classical classification, QPTs can be categorized as first-order and second-order (or continuous) QPTs [104]. In a first-order QPT (where the first derivative of the ground state energy with respect to the control parameter exhibits a discontinuity), there exists a narrow region around \(x_{c}\) where both states with different symmetries, A and B, coexist. In the case of a second-order QPT (according to the Erhenfest classification, where the second derivative of the ground state energy with respect to the control parameter displays a discontinuity), there is no coexistence of symmetries around the critical region \(x_{c}\). A crucial characteristic of a QPT is that it leads to the degeneracy of a set of states and the compression of the spectra [47; 105]. In fact, in the thermodynamic limit, the excitation energy of the first excited state approaches zero at the critical point. When studying QPTs in nuclear systems, one encounters challenges related to the finite size of the system and the approximate nature of the control parameter, often identified with the nuclear mass or neutron number. As a Figure 22: Values for key QPT/shape coexistence observables for Kr, Sr, Zr, Mo, and Ru isotopes as a function of neutron number. (a) \(E(2_{1}^{+})\), (b) \(E(0_{2}^{+})\), (c) \(E(4_{1}^{+})/E(2_{1}^{+})\), (d) S\({}_{2n}\), and (e) \(\langle r^{2}\rangle-\langle r_{ref}^{2}\rangle\) (the reference values are different for each isotope chain to better distinguish the data). result, all the main characteristics of a QPT can only be observed in an approximate manner within a nucleus, and the abrupt changes are typically smoothed out [102]. In Figure 22, we present experimental data for Kr, Zr, Sr, Mo, and Ru, including the values of \(E(2_{1}^{+})\), \(E(0_{2}^{+})\), \(E(4_{1}^{+})/E(2_{1}^{+})\), S\({}_{2n}\), and \(\langle r^{2}\rangle\).These quantities serve as indicators for the presence of a QPT or the existence of shape coexistence. These nuclei are located near the subshell closure at \(Z=40\) and are close to the neutron number 60, where a rapid onset of deformation is observed. In panel (a), it is evident that in all nuclei, the energy of the \(2_{1}^{+}\) state decreases, indicating the appearance of deformation, particularly from neutron number 60 onwards. However, notable differences exist between Sr and Zr, where the drop is very abrupt, Mo, which exhibits a slower decrease, and Ru or Kr, where the transition is smoother. The behavior in Sr and Zr has been interpreted in terms of the crossing of two different particle-hole configurations [46; 48] or as a first-order QPT [47; 106]. 
Moving to panel (b) and considering the energy systematics of the \(0_{2}^{+}\) state, a distinct minimum is observed for Sr and Zr at neutron number 60, while Ru and Mo exhibit a relatively flat energy trend around this neutron number. In Kr, it is not possible to extract a clean conclusion. The deep minimum in Sr and Zr again suggests the crossing of two configurations or the existence of a QPT, while the smoother behavior in Mo and Ru implies a slower evolution. Panel (c) depicts the ratio \(E(4_{1}^{+})/E(2_{1}^{+})\), which serves as a clear indicator of the onset of deformation. A value around 2 corresponds to a spherical nucleus, 2.5 to a \(\gamma\)-unstable rotor, and 3.3 to a rigid rotor. Sr and Zr clearly exhibit an evolution from sphericity to a rigid rotor, Ru and Kr indicate an evolution into a \(\gamma\)-unstable rotor, and Mo lies in between both cases. This observable can also be considered as an approximate order parameter, behaving as a first-order QPT in Sr and Zr, and potentially as a second-order QPT in Mo (less clear in Ru). In panel (d), the two-neutron separation energy, S\({}_{2n}\), is displayed. This observable, which can be understood as the derivative of the binding energy, is a smoking gun for the presence of a QPT. The sudden change in its slope around neutron number 60 in Sr and Zr suggests the existence of a first-order QPT, while Mo exhibits a small perturbation indicating a possible second-order QPT. No departure from the linear trend is observed in Ru and Kr. Finally, panel (e) presents the mean-square radii. Note that the origin has been shifted differently for each isotope for clarity. Once again, a sudden increase in the radius is observed in Sr and Zr around neutron number 60, while Mo shows a less drastic increase and Ru and Kr exhibit a linear trend. Based on the information presented in Fig. 22, it is reasonable to assume that Sr and Zr undergo a first-order QPT around neutron number 60, Mo undergoes a second-order QPT, while Ru exhibits a smooth evolution without any abrupt changes. In Sr, Zr, and Mo, the ratio \(E(4_{1}^{+})/E(2_{1}^{+})\) can be considered as an order parameter, indicating the presence of a QPT, while S\({}_{2n}\) points towards the existence of a discontinuity in the first or second derivative of the binding energy. Alternatively, all the observed features in Figure 22 can also be explained in terms of the crossing of two configurations. The increase in \(E(4_{1}^{+})/E(2_{1}^{+})\) and the decrease in \(E(2_{1}^{+})\) can be easily explained by the crossing of a spherical and a deformed configuration. The drop in the energy of the \(0_{2}^{+}\) state can be attributed to the presence of an intruder configuration that gains correlation energy more rapidly than the regular configuration. Finally, the deviation from the linear trend in S\({}_{2n}\) and mean-square radius can also be explained by the crossing of two configurations. Therefore, both the QPTs and the crossing of two configurations can provide explanations for the observed phenomena in the figure. In previous works, the observed QPT in Zr has been explained in terms of the crossing of two weakly interacting configurations [47; 99; 100], providing a straightforward explanation for the behavior of S\({}_{2n}\). A recent analysis has also been conducted for Sr nuclei [49] with similar conclusions. Moreover, in Zr, the different configurations, especially the intruder one, undergo their own QPT. 
The concept of "Type I" and "Type II" QPTs was introduced in [99; 100], referring to QPTs occurring within a single configuration or involving two configurations, respectively. In the case of Mo, it has been shown in Sec. IV that two configurations indeed cross, although the observed changes in S\({}_{2n}\) are less abrupt compared to Zr. This situation is reminiscent of the behavior observed in Pt [76], where a crossing of two configurations occurs without inducing a QPT. It is worth mentioning that the interaction between configurations was quite strong in Pt, which resulted in the suppression of the QPT. The mixing between configurations is controlled by the parameter \(\omega\), with values of 15 keV, 15 keV, and 50 keV for Zr, Mo, and Pt, respectively. However, the interaction between configurations depends on the matrix element \(\langle 0_{\rm{re}}^{+}|\tilde{V}_{\rm{mix}}|0_{\rm{int}}^{+}\rangle\), which has values around the region where the configurations cross, namely 80 keV, 250 keV, and 250 keV for Zr, Mo, and Pt, respectively. Therefore, it is evident that the case of Mo resembles Pt, where two configurations cross but a strong interaction between them hinders the presence of abrupt changes in the spectrum. Nevertheless, Mo still exhibits some key elements of a QPT (see Fig. 22). When considering the unperturbed configurations, both rapidly transition from a spherical to a deformed shape, either oblate (regular) or prolate (intruder). According to Figs. 16 and 20, the regular and intruder configurations undergo a "Type I" QPT, while the ground state undergoes a "Type II" QPT around \(A=102\). The situation in Ru isotopes is indeed different, as the evolution of the ground state is fully determined by a single configuration. According to Figs. 18 and 19, the energy surface of Ru isotopes is initially spherical for the lighter ones, but it starts to flatten and becomes fully flat at \(A=104\) (neutron number 60). From this point onwards, a \(\gamma\)-unstable deformation develops. Based on this behavior, Ru isotopes undergo a "Type I" QPT of second order, as indicated by Fig. 22 and as proposed in [50]. 
## X Summary and Conclusions

This work has focused on analyzing the even-even \({}^{96-110}\)Mo and \({}^{98-114}\)Ru isotopes.

###### Acknowledgments

ERDF/MINECO project UNHU-15CE-2848.
2308.00074
Using Kernel SHAP XAI Method to optimize the Network Anomaly Detection Model
Anomaly detection and its explanation are important in many research areas such as intrusion detection, fraud detection, and unknown attack detection in network traffic and logs. It is challenging to identify the cause or explanation of "why one instance is an anomaly?" and another is not, due to the unbounded nature of anomalies and the lack of supervisory information. The answer to this question is possible with the emerging technique of explainable artificial intelligence (XAI). XAI provides tools and techniques to interpret and explain the output and working of complex models such as Deep Learning (DL). This paper aims to detect and explain network anomalies with the XAI kernelSHAP method. The same approach is used to improve the network anomaly detection model in terms of accuracy, recall, precision and F-score. The experiment is conducted with the latest CICIDS2017 dataset. Two models are created (Model_1 and OPT_Model) and compared. The overall accuracy and F-score of OPT_Model (when trained in an unsupervised way) are 0.90 and 0.76, respectively.
Khushnaseeb Roshan, Aasim Zafar
2023-07-31T18:47:45Z
http://arxiv.org/abs/2308.00074v1
# Using Kernel SHAP XAI Method to optimize the Network Anomaly Detection Model ###### Abstract Anomaly detection and its explanation is important in many research areas such as intrusion detection, fraud detection, unknown attack detection in network traffic and logs. It is challenging to identify the cause or explanation of "why one instance is an anomaly?" and the other is not due to its unbounded and lack of supervisory nature. The answer to this question is possible with the emerging technique of explainable artificial intelligence (XAI). XAI provides tools and techniques to interpret and explain the output and working of complex models such as Deep Learning (DL). This paper aims to detect and explain network anomalies with XAI, kernelSHAP method. The same approach is used to improve the network anomaly detection model in terms of accuracy, recall, precision and f-score. The experiment is conducted with the latest CICIDS2017 dataset. Two models are created (Model_1 and OPT_Model ) and compared. The overall accuracy and F-score of OPT_Model (when trained in unsupervised way) are 0.90 and 0.76, respectively. Explainable AI; Autoencoder; Shapley Additive Explanation; Network Anomaly; Network Security. ## I Introduction Anomaly detection based on Machine Learning (ML) and Deep Learning (DL) is an active research area in many domains such as fraud detection [1], anomaly-based intrusion detection [2], network anomaly detection [3] and much more. Network anomalies are the unknown pattern of interest in network traffic and logs. Due to the lack of supervisory information and its unbounded nature, it is challenging to detect network anomalies and their explanation. DL models are giving tremendous results in anomaly detection but are still criticized due to their back-box nature and lack of interpretation of their outputs. The researchers have proposed so many approaches to interpret and explain the output of complex models (e.g. DL based models) over the years [4][5]. The purpose of this study focuses on so-called "unsupervised-anomaly detection" as well as its interpretation in the area of computer network traffic and logs. In the unsupervised DL algorithm, the autoencoder is used for this experimentation. Based on the autoencoder reconstruction error (RE); the normal and attack instance are separated. RE can explain anomalies but only up to some extent [6][7]. Hence, the kernelSHAP, a model agnostic approach [8], is used to explain anomalies with shapley values. Shapley values provides the true contribution of each feature based on the RE. The motivation of this work is driven by renewed attention in the field of explainable AI (XAI). XAI provide various method and tools to convert the black-box model into transparent, accountable and interpretable models [9]. If we understand the working of the complex ML and DL models, we can improve and explain its results up to some extent. The contribution of the proposed work are as follows: * A novel approach for selecting the best subset of features with shapley values without using the target class label is proposed. * Based on the attack instances, the shapley values are computed for each feature, providing the true contribution to the RE. * We selected the top features responsible for the anomalous behaviour of the attack instance and used only these features to build an improved model for network anomaly detection. * kernelSHAP method, which is a model agnostic XAI approach and the latest CICIDS2017 dataset is used for experimentation. 
This work also illustrates how XAI techniques can be used to improve the performance of complex models (DL models), as we did in this paper. The overall organization of the paper are as follows. Section II describes XAI. Section 0 includes related work. Section IV discusses the proposed approach, dataset and models. Section V discusses implementation and results. Finally, Section VI concludes this work. ## II Explainable Artificial Intelligence Understandability, Comprehensibility, Interpretability, Explainability, Transparency etc., are the nomenclature used by the XAI research community [9][10], as shown in Fig. 1. The term XAI is new, but the problem of explainability existed when the researcher's studied explaining the output and decision making procedure of the expert system [11]. This section focuses on various definitions of XAI (what?), the need for XAI (why?), various approaches related to XAI in brief (how?). There is no standard and globally accepted definition of XAI; different authors quote XAI differently. In [12], DARPA defined XAI as "produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and enable human users to understand, appropriately, trust, and effectively manage the emerging generation of artificially intelligent partners". In the Cambridge Dictionary of English Language [13], the revoked definition is as follows "Given a certain audience, explainability refers to the details and reasons a model gives to make its functioning clear or easy to understand." Another definition of XAI by the organizer of xML Challenge [14] is "an innovation towards opening up the black-box of ML" and as "a challenge to create models and techniques that both accurate and provide a good, trustworthy explanation that will satisfy customer's needs".In general, the goal of the XAI is centred around generating more transparent, responsible and accountable models without compromising their performance (prediction accuracy). The need for XAI becomes clear at the points where the decision-making process of the models needs to be explained, such as in life-changing decisions (medical diagnosis) [15], in big financial decisions, in criminal justification etc. [9]. The explainability of the complex models (DL models) gives the control, discover and improve the results of the black-box models. And this will further remove the barrier of AI in real-life applications where the explanation and the interpretation of the output is the core aspect for its adoption. And the other key aspects related to the need for XAI are Trustworthiness, Causality, Transferability, Informativeness, Confidence, Fairness, Accessibility, Interactivity of DL, ML and ensemble learning-based models [10]. The different points of view and taxonomy of the XAI methods existed, such as the type of data (text, image etc.), methods related to global or local explanation. These methods can be model-specific, model agnostics, or both. Fig. 2 shows the summarized mind map of XAI methods, including their purposes. These are broader aspects and essential to be considered by researchers and practitioners while developing solutions in the field of XAI [16]. We would highly recommend some of the review and survey papers by the authors of [5][9][10][16] for detailed analysis of various methods and application areas related to XAI. ## 3 Related Work Computer Network anomaly detection is a wide research domain. 
Many approaches based on ML and DL have been proposed [17, 18, 19, 20, 21, 22, 23] by the research community in this domain. However, not much work has been done that detects and explains the cause of the anomalous instances predicted by the models. Antwarg et al. [24] used an autoencoder and kernelSHAP [8] to explain the top anomalies detected by the model based on the RE. The model is trained in an unsupervised manner on benign data only (without the target class feature). Both real-world datasets such as KDDCUP 99 and artificial datasets are used for evaluation. A similar approach is also shown in [6] by Takeishi. The author used the kernelSHAP method to explain anomalies based on the RE with PCA and Shapley values. The study revealed that Shapley values provide the true contribution of features in explaining the anomalies. Goodall et al. [25] presented Situ, a scalable solution for network attack/anomaly detection with intuitive visualization capability. Situ can monitor and discover suspicious behaviour within network traffic logs and explain why an instance is anomalous. A case study in fraud detection and explanation was done by Collaris et al. [26]. The study revealed that different XAI techniques provided different results, yet all are valid and useful. Outliers are also considered interesting anomalies, and several authors have proposed methods for outlier explanation. Liu et al. [27] proposed a framework to explain outliers with a model-agnostic approach. The interpretation is based on three aspects, namely outlier score, abnormal features and outlier context. Micenkova et al. [28] proposed an approach to detect and explain outliers with subspace separability. Validation of the outlier results is provided by a subset of features for each outlier in which the point is well separable from the rest of the data.

Figure 1: XAI Word Cloud [9]

Figure 2: Summarized mind map of XAI techniques

## IV Proposed Approach

### _Unsupervised Feature Selection based on SHAP_

In this subsection, a feature-selection method that does not use the target class variable is proposed. The features are selected based on the RE evaluated on the attack background set. The purpose is to select only those features that actually contribute to or affect the RE, either increasing or decreasing it, with a large magnitude. Consequently, these features are the cause of major deviations of the RE and are more important for classifying attack instances with high accuracy and recall. An alternative to this method would be to select only those features that have large raw errors. However, just by looking at the raw feature error, one cannot identify the cause of the anomaly; for example, a large error on one feature can stem from the anomalous behaviour of other features [6]. Hence, Shapley values are the best solution to find the true contribution of the features to the RE for attack instances. These top contributing features are then selected to build the optimized version of the model, named OPT_Model. The kernelSHAP [8] method is used to build a simple explanation model of the actual autoencoder model, i.e. Model_1. The kernelSHAP method is a model-agnostic approach and requires access to the dataset and the model's (Model_1) prediction function. But in our case, we use an autoencoder that simply reconstructs the original input, so how should the prediction/value function for the explanation model be defined?
For this, the approach proposed by Takeishi [6] is used, as shown in Fig. 3. The author defined the value function based on the reconstruction error \(e(x)\) as in Eq. (1).

\[V(S)=\frac{1}{d}\,E_{p(x_{S^{c}}|x_{S})}[e(x)] \tag{1}\]

Here, \(e(x)\) is the reconstruction error of a single test data instance \(x\in R^{d}\), and \(x\) is the concatenation of \(x_{S^{c}}\in R^{d-|S|}\) and \(x_{S}\in R^{|S|}\), as shown in Eq. (2). \(S\) is a subset of the feature indices within \(d\), and \(S^{c}\) is the complement of \(S\).

\[x=\left[\begin{array}{c}x_{S^{c}}\\ x_{S}\end{array}\right] \tag{2}\]

The background set and the value function must be passed to the kernelExplainer function to compute the feature importances based on the RE of attack instances, as shown in Fig. 3. In this case, the background set consists of 200 attack instances that are further processed with k-means to speed up the overall computation. Fig. 4 shows the top forty features contributing to the RE of attack instances. Finally, this feature set is used to build the optimized version of the model, named OPT_Model.

Fig. 3: Unsupervised feature selection approach with shapley values

Fig. 4: Top forty features contributing to AE reconstruction error

### _Autoencoder_

The autoencoder (AE) is an unsupervised Artificial Neural Network (ANN) architecture first proposed by Rumelhart et al. [29]. The autoencoder consists of an input layer, a number of hidden layers and an output layer. The encoder and the decoder are its two main components: the encoder maps the input to the latent representation, and the decoder maps the latent representation back to the reconstructed output. The typical activation function used between the hidden layers is ReLU (rectified linear unit) [30]. The complete optimized architecture used in this experiment, such as the number of neurons in each layer, regularizer, learning rate, number of hidden layers etc., is shown in TABLE I. Furthermore, the Mean Squared Error (MSE), the most common function to measure the RE of an autoencoder, is used in this experiment. The RE is the difference between the input and the reconstructed output, as in Eq. (3). In general, the RE is lower on the benign data on which the autoencoder is trained and higher on the attack data. A threshold is required to separate anomalous and normal data based on the MSE predicted by the models on the testing dataset; for example, instances for which the MSE is less than the threshold are labelled as benign (0), otherwise as anomaly (1).

\[MSE=\frac{1}{N}\sum_{i=1}^{N}(x_{i}-\hat{x}_{i})^{2} \tag{3}\]

### _Evaluation Metrics_

Accuracy is a widely used performance metric for ML models, but it is not always a good choice for imbalanced datasets. Hence both models are evaluated based on a classification report consisting of Recall, Precision, F1-Score and Accuracy. The binary classification outcomes are separated into four groups known as the confusion matrix:

* True Negative (TN): correct prediction of the negative class.
* True Positive (TP): correct prediction of the positive class.
* False Negative (FN): incorrect prediction of the negative class.
* False Positive (FP): incorrect prediction of the positive class.

The metrics in Eqs. (4) to (9) are based on the above groups.
\[\text{Accuracy (ACC)}=\frac{TP+TN}{TP+TN+FP+FN} \tag{4}\]

\[\text{Precision (P)}=\frac{TP}{TP+FP} \tag{5}\]

\[\text{Recall (R)}=\frac{TP}{TP+FN} \tag{6}\]

\[\text{F-Score (F)}=\frac{2\times R\times P}{R+P} \tag{7}\]

\[\text{Specificity}=\frac{TN}{TN+FP} \tag{8}\]

\[\text{G-mean}=\sqrt{\text{Recall}\times\text{Specificity}} \tag{9}\]

In addition, the Receiver Operating Characteristics (ROC) curve and the G-mean are also used to select the best threshold for the final classification [31][32]. The G-mean [33] is the square root of the product of recall and specificity. The G-mean is considered an unbiased classification metric, with the optimal threshold selected based on the ROC curve [34].

## V Implementation and Results

The experiment is conducted on GPU-enabled Google Colaboratory with Python 3.7 and the Keras deep learning library.

### _CICIDS2017 Dataset and Preprocessing_

The Canadian Institute for Cybersecurity created the CICIDS2017 dataset [35] with the purpose of providing recent and realistic background network traffic. This dataset is available in both packet-based and flow-based formats. The complete dataset is split into eight different files, named Monday to Friday, containing different attack classes. The Monday file contains only benign data, and the other files contain both benign and attack data. There are a total of fourteen attack class labels, such as DDoS, FTP-Patator, SSH-Patator, Web Attack Brute Force and so on. However, in this experiment, a subset of the complete CICIDS2017 dataset is used. TABLE II shows the instance count of benign and attack class labels used in this experiment. The Monday file is used for training and validation of the models, and the Friday file is used as test data. Preprocessing is also done to replace all null and infinity values in the dataset. These values are replaced with the mean value, and then the scikit-learn StandardScaler function is used for feature scaling [36]. The CICIDS2017 dataset has a total of seventy-eight features. Model_1 is based on all features, and the second model (OPT_Model) is based on the forty features selected by the kernelSHAP method.

### _Instance Level Explanation_

The kernelExplainer method can provide both an instance-level (single instance) and an overall model explanation. The overall model evaluation is already discussed and shown in Fig. 4. In Fig. 5, a single normal and a single attack instance are explained in terms of which features cause the significant deviation in the RE. The red colour indicates feature values causing an increase in the RE, whereas the blue colour indicates that the corresponding feature is decreasing it. In Fig. 5(a), the RE of the benign instance is 0.02, and the base value of the explainer model (kernelExplainer) lies between 0.0 and 0.5 in this case. The base value or expected value E[f(x)] is defined as "the value that would be predicted if we did not know any feature value for the current output f(x)" [8]. The names of the features causing major deviations in the RE are also shown. Similarly, Fig. 5(b) shows an attack instance with an RE of 0.29 and the features responsible for this deviation. This way, one can explain anomalies based on the features causing the large RE.

### _Models' Comparison and Discussion_

Two models were built: Model_1 uses the full feature set, and the second model (OPT_Model) is based on the forty features selected as proposed in this paper.
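As a rough illustration of this Shapley-based selection step, the sketch below shows how per-feature contributions to the reconstruction error could be computed with the shap library. It is a simplified version of the procedure described in the feature-selection subsection above (it explains the scalar MSE directly, rather than the per-output decomposition used in [6][24]), and names such as `autoencoder` and `X_attack` are placeholders for this sketch, not code from this paper.

```python
import numpy as np
import shap  # provides the model-agnostic KernelExplainer (kernelSHAP)

# `autoencoder` is a trained Keras autoencoder and `X_attack` a (n, 78) array of
# scaled attack instances; both names are placeholders for this sketch.
def reconstruction_error(X):
    """Value function: per-sample MSE between the input and its reconstruction."""
    X_hat = autoencoder.predict(X, verbose=0)
    return np.mean((X - X_hat) ** 2, axis=1)

# Background set of attack instances, summarised with k-means to keep kernelSHAP tractable.
background = shap.kmeans(X_attack[:200], 10)
explainer = shap.KernelExplainer(reconstruction_error, background)

# Shapley values: contribution of every feature to the reconstruction error.
shap_values = explainer.shap_values(X_attack[:200], nsamples=100)

# Rank features by mean absolute contribution and keep the top forty.
importance = np.abs(shap_values).mean(axis=0)
top40 = np.argsort(importance)[::-1][:40]
```

The indices in `top40` would then define the reduced feature set used to train the optimized model.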
The top forty features that contribute most to the reconstruction error on the attack background set are selected based on the kernelExplainer function. This feature set is used to train and validate OPT_Model. The architecture remains the same in the training and validation procedure for Model_1 and OPT_Model, for comparison purposes. For both models, the MSE is predicted. Then, based on the ROC curves in Fig. 6, the optimal threshold is selected to evaluate the confusion matrix and classification report, as shown in Fig. 7 and Fig. 8. The G-mean based on the ROC curve, the optimal threshold for the predicted MSE and the area under the ROC curve (AUC) are shown in TABLE III. It is observed that the AUC for OPT_Model is 0.95, which is higher than the Model_1 AUC of 0.804. AUC is a good measure for class-imbalanced datasets and provides a single number to compare different models [31][37]. Finally, based on the threshold, the predicted MSE is converted into binary labels, i.e. "0" and "1". For example, the optimal threshold for Model_1 is 0.22; an MSE greater than or equal to 0.22 is labelled as "1", otherwise as "0". Fig. 7 and Fig. 8 show the comparison between Model_1 and OPT_Model based on the confusion matrix and classification report. The novelty of the proposed work is in how XAI can be used to improve the results of DL models by selecting the most appropriate features in an unsupervised manner. Shapley values provide the true importance of the features responsible for the anomalous behaviour, which clearly improved the performance of OPT_Model.

Fig. 5: (a) Features responsible for reconstruction error 0.02 for normal instance, (b) Features responsible for reconstruction error 0.29 for attack instance

Fig. 6: ROC curves for Model_1 and OPT_Model

Fig. 7: Confusion matrix for models (a) Model_1 and (b) OPT_Model

The drawback of the kernelSHAP method is its time complexity with respect to the background set for complex and high-dimensional datasets. If we increase the background set, the computation time increases further and may take a couple of hours to compute Shapley values for each feature. Selecting an appropriate background set from the complete dataset is also important and may affect the results of the models [24].

## VI Conclusion

In this paper, we build an autoencoder-based model that can detect and explain anomalies in computer network traffic. Two models were created on the latest CICIDS2017 dataset. The first model is based on all features, and the second is based on features selected with the kernelSHAP method, a model-agnostic XAI approach. The top forty contributing features according to Shapley values are selected to build OPT_Model, which outperformed the initial model (Model_1). This work also provides a brief introduction to the emerging XAI techniques in terms of "what?", "why?" and "how?". XAI plays an important role in explaining and interpreting the results of complex models, such as DL models, and model-intrinsic (XAI) methods help in understanding the internal working of complex models. A future extension of this work is to apply other model-agnostic or model-specific, global or local explanation techniques to explain and improve the results of unsupervised DL models for anomaly detection in computer network traffic and in other real-world applications as well.
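As a minimal sketch of the threshold-selection step described above (G-mean over the ROC curve, then conversion of the predicted MSE into binary labels), the following could be used; `mse_test` and `y_test` are placeholder names for the predicted reconstruction errors and the 0/1 test labels, not variables from the paper.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score, classification_report

# `mse_test`: predicted reconstruction errors on the test split (higher = more anomalous)
# `y_test`: ground-truth labels, 0 = benign, 1 = attack (placeholder names)
fpr, tpr, thresholds = roc_curve(y_test, mse_test)
gmeans = np.sqrt(tpr * (1 - fpr))            # G-mean = sqrt(recall * specificity), Eq. (9)
best_thr = thresholds[np.argmax(gmeans)]     # optimal operating point on the ROC curve

y_pred = (mse_test >= best_thr).astype(int)  # MSE >= threshold -> label "1" (anomaly)
print("AUC:", roc_auc_score(y_test, mse_test))
print(classification_report(y_test, y_pred, digits=2))
```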
2309.07432
SpatialCodec: Neural Spatial Speech Coding
In this work, we address the challenge of encoding speech captured by a microphone array using deep learning techniques with the aim of preserving and accurately reconstructing crucial spatial cues embedded in multi-channel recordings. We propose a neural spatial audio coding framework that achieves a high compression ratio, leveraging single-channel neural sub-band codec and SpatialCodec. Our approach encompasses two phases: (i) a neural sub-band codec is designed to encode the reference channel with low bit rates, and (ii), a SpatialCodec captures relative spatial information for accurate multi-channel reconstruction at the decoder end. In addition, we also propose novel evaluation metrics to assess the spatial cue preservation: (i) spatial similarity, which calculates cosine similarity on a spatially intuitive beamspace, and (ii), beamformed audio quality. Our system shows superior spatial performance compared with high bitrate baselines and black-box neural architecture. Demos are available at https://xzwy.github.io/SpatialCodecDemo. Codes and models are available at https://github.com/XZWY/SpatialCodec.
Zhongweiyang Xu, Yong Xu, Vinay Kothapally, Heming Wang, Muqiao Yang, Dong Yu
2023-09-14T05:28:05Z
http://arxiv.org/abs/2309.07432v2
# SpatialCodec: Neural Spatial Speech Coding

###### Abstract

In this work, we address the challenge of encoding speech captured by a microphone array using deep learning techniques, with the aim of preserving and accurately reconstructing crucial spatial cues embedded in multi-channel recordings. We propose a neural spatial audio coding framework that achieves a high compression ratio, leveraging a single-channel neural sub-band codec and SpatialCodec. Our approach encompasses two phases: (i) a neural sub-band codec is designed to encode the reference channel with low bit rates, and (ii) a SpatialCodec captures relative spatial information for accurate multi-channel reconstruction at the decoder end. In addition, we also propose novel evaluation metrics to assess the spatial cue preservation: (i) spatial similarity, which calculates cosine similarity on a spatially intuitive beamspace, and (ii) beamformed audio quality. Our system shows superior spatial performance compared with high-bitrate baselines and a black-box neural architecture. Demos are available at [https://xzwy.github.io/SpatialCodecDemo](https://xzwy.github.io/SpatialCodecDemo). Codes and models are available at [https://github.com/XZWY/SpatialCodec](https://github.com/XZWY/SpatialCodec).

Zhongweiyang Xu\({}^{\star}\), Yong Xu\({}^{\dagger}\), Vinay Kothapally\({}^{\dagger}\), Heming Wang\({}^{\sharp}\), Muqiao Yang\({}^{\ddagger}\), Dong Yu\({}^{\dagger}\)

\({}^{\dagger}\)Tencent AI Lab, \({}^{\star}\)University of Illinois at Urbana-Champaign, \({}^{\sharp}\)The Ohio State University, \({}^{\ddagger}\)Carnegie Mellon University

Footnote: This work was done while Z. Xu, H. Wang, and M. Yang were interns at Tencent AI Lab, Bellevue, USA.

## 1 Introduction

Audio and speech codecs aim at compressing signals into low-bitrate codes for efficient storage or network streaming applications. Traditionally, these coding schemes take advantage of signal models and psycho-acoustics. Speech codecs like CELP [1], SPEEX [2], OPUS [3], and EVS [4] all use linear predictive modeling for signal analysis. Likewise, audio or music codecs like MP3 [5] and OPUS [3] apply classic perceptual coding techniques [6] inspired by the perceptual masking effect of human hearing. In addition, conventional quantization and entropy coding methods [7] are applied for discretization and efficient coding, respectively, in these classic codecs. However, the performance of these conventional methods suffers at very low bit rates, e.g., at 6 kbps. While traditional codecs struggle to achieve high-quality, perceptually accurate reconstruction at extremely low bitrates, neural codecs are capable of overcoming this limitation. SoundStream [8] uses time-domain CNNs as encoding and decoding blocks with residual vector quantization (RVQ) to compress the intermediate latent representations. TF-Codec [9] uses a temporal linear predictive coding technique to further remove temporal redundancies. More recently, Encodec [10] has been proposed, which employs a transformer-based network to model the code distribution, an approach originally intended for arithmetic coding, in order to achieve improved compression. AudioDec [11] further proposes a two-stage training scheme such that the encoder and decoder can be more flexible and easily switched for different applications. HiFi-Codec [12] substitutes the RVQ block with grouped residual vector quantization (GRVQ) and achieves better performance.
These aforementioned neural approaches are primarily derived or adapted from VQ-VAE [13] and GAN-based vocoders [14, 15]. Thus, the encoded information from these codecs is also treated as learned representations for generative tasks [16, 17, 18]. Besides single-channel codecs, spatial audio codecs aim to compress multi-channel audio while preserving the spatial information [19]. Such multi-channel codecs are commonly designed for playback systems or multi-speaker systems, e.g., stereo coding [20], MPEG-H 3D Audio [21], MPEG-Surround [22]. Typically, a spatial audio codec adheres to a pipeline consisting of the following steps: (i) the multi-channel audio is downmixed into mono or stereo and coded with a traditional audio codec, (ii) sub-band spatial parameters are extracted from the multi-channel audio and coded channel-wise and band-wise, and (iii) the decoder then resynthesizes the multi-channel audio from the previous two components. Since these codecs are designed for specific playback systems, they do not consider microphone array spatial recording systems. Bao _et al._ [23] apply this scheme to microphone-array-recorded speech but without any reverberation (only the direct path exists). These methods do not use any neural networks and also do not fully exploit inter-channel or inter-band correlations. This means that, for a decent reconstruction, the system needs a reasonably large number of bands coded separately for each channel, resulting in high coding bit rates. Our SpatialCodec aims to address this high coding rate challenge with neural networks. Similar to the conventional methods, our design also has two branches: the first branch codes the reference channel audio, while the second branch codes the spatial information. On the decoder side, the first branch's decoder outputs the reconstructed reference channel; the second branch's decoder output and the reconstructed reference channel are then used jointly to synthesize all non-reference channels. We train and test our proposed codec on a synthesized multi-channel, spatially rich, reverberant dataset with speech from a single speaker. We also propose several novel metrics to evaluate spatial cue preservation. One is spatial similarity, which calculates the cosine similarity between the estimated and ground-truth spatial features. Spatial features are designed by beamforming towards a few fixed directions. We believe this is a more intuitive metric because the spatial features are directly related to real-world directions. We also propose to use beamforming performance as a metric to validate our SpatialCodec's ability to preserve both the spectral quality and the main directivity. Our proposed SpatialCodec with 12 kbps of bitrate performs significantly better than 96 kbps (8 channels x 12 kbps/channel) OPUS and other channel-independent neural codecs. We also design one black-box model for comparison.

## 2 Problem Formulation

We consider the challenge of compressing spatial audio recordings from an \(M\)-channel microphone array in a reverberant environment. Let \(s(t)\) and \(h_{i}(t)\) represent the clean speech from the speaker and the room impulse response (RIR) from the speaker to the \(i\)-th microphone.
The signals captured by an \(M\)-channel microphone array, \(\mathbf{x}(t)\) (termed the "mixture"), at time \(t\) are defined as:

\[\mathbf{x}(t)=\left[h_{1}(t),\ldots,h_{M}(t)\right]*s(t) \tag{1}\]

where \(*\) denotes the convolution operation, and \(\mathbf{x}(t)=\left[x_{1}(t),\ldots,x_{M}(t)\right]\) includes speech captured from the speaker's direct path as well as early and late reflections using the \(M\)-channel array. The goal of the proposed model is to compress \(\mathbf{x}\) to a low-bitrate representation \(\mathbf{C}\) such that the reconstructed multi-channel audio \(\hat{\mathbf{x}}\) at the decoder preserves all spatial cues. Our proposed model consists of an encoder (\(\Psi_{\mathrm{Enc}}\)), a quantizer (\(\Psi_{\mathrm{Quant}}\)), and a decoder (\(\Psi_{\mathrm{Dec}}\)), which are jointly optimized such that \(\hat{\mathbf{x}}\) approximates \(\mathbf{x}\) from both spectral (perceptual quality) and spatial (direct path, early, and late reflections) perspectives.

\[\hat{\mathbf{x}}=\Psi_{\mathrm{Dec}}(\mathbf{C});\quad\text{where}\quad\mathbf{C}=\Psi_{\mathrm{Quant}}\left(\Psi_{\mathrm{Enc}}(\mathbf{x})\right) \tag{2}\]

## 3 Method

The overall architecture of the proposed SpatialCodec is depicted in Fig. 1; it comprises two main branches: (i) a single-channel sub-band codec pre-trained to code the reference channel of the microphone array, and (ii) a SpatialCodec that codes spatial information to reconstruct the multi-channel audio signals.

### Single-Channel Sub-band Codec (First Branch)

The reference channel codec is a neural frequency-domain sub-band codec. The reason we design this instead of using an existing time-domain codec is that its structure aligns with the SpatialCodec in the frequency domain, which will be discussed in Section 3.2. The input \(x_{\text{ref}}\in\mathbb{R}^{2\times T\times F}\) is the STFT of the reference channel audio, where \(2\) corresponds to the real and imaginary components. The encoder and decoder are 2D CNNs with residual blocks, treating the real-imaginary pair as the channel dimension. The architecture is similar to HiFi-Codec [12], except that the 1-D CNNs become 2-D CNNs and downsampling in time becomes downsampling in frequency. The encoder and decoder each have six convolutional layers, each followed by a residual unit. For the encoder, the six layers' kernel and stride for the time dimension are always 3 and 1, respectively. The kernels and strides for the frequency dimension are \([5,3,3,3,3,4]\) and \([2,2,2,2,1]\), respectively. The output channel dimensions for all layers are \([16,32,64,128,128,256]\). We use a 640-point FFT, which means the encoder compresses the frequency dimension from 321 bins to 6 convolutional sub-bands. We then code these 6 sub-bands independently using residual vector quantization. The decoder is the mirrored transposed-convolution version of the encoder. Each residual unit contains two residual blocks, and each block contains three 2-D time-dilated CNN layers with skip connections. The first block's three layers' kernel and dilation sizes are \([(3,3),(3,5),(3,5)]\) and \([(1,1),(3,1),(5,1)]\), respectively, in (time, frequency) order. The second block's corresponding configurations are \([(7,3),(7,5),(7,5)]\) and \([(1,1),(3,1),(5,1)]\). Details can be checked in the source code.

### SpatialCodec (Second Branch)

The SpatialCodec has the same structure as the reference channel codec, except for the input, output, and channel dimensions of the six convolutional layers.
The input of the SpatialCodec is the reference channel STFT and the spatial covariance matrix, concatenated in the channel dimension. Given the \(M\)-channel STFT \(\mathbf{X}(t,f)\in\mathbb{C}^{M\times 1}\), the spatial covariance matrix \(\boldsymbol{\Phi}(t,f)\in\mathbb{C}^{M\times M}\) is defined as:

\[\boldsymbol{\Phi}(t,f)=\mathbf{X}(t,f)\mathbf{X}(t,f)^{\text{H}} \tag{3}\]

The real and imaginary parts of \(\boldsymbol{\Phi}(t,f)\) are then concatenated with the real and imaginary parts of the reference channel STFT, which gives a \(2(M^{2}+1)\)-dimensional real feature for each time-frequency bin. This feature is treated as the channel dimension when fed into the SpatialCodec. The output channel dimensions for all the layers in the encoder are \([128,128,128,128,256,256]\). Otherwise, the encoder and quantizer are the same as in the reference channel codec. For the spatial decoder, the output is \(M-1\) complex ratio filters (CRFs) [24], \(W_{m}(t,f)\in\mathbb{C}^{(2L+1)\times(2K+1)}\), \(m\in[1,...,M-1]\), for all the non-reference channels. The CRFs encode the spatial relative transfer functions. Assuming the decoded reference channel STFT is \(\hat{X}_{\text{ref}}\in\mathbb{C}^{T\times F}\), all non-reference channels for \(m\in[1,...,M-1]\) are:

\[\hat{X}_{m}^{\text{non\_ref}}(t,f)=\sum_{l=-L}^{L}\sum_{k=-K}^{K}W_{m}(t,f,l,k)\hat{X}_{\text{ref}}(t+l,f+k) \tag{4}\]

Thus the last layer of the spatial decoder has an output channel dimension of \(2\times(2L+1)\times(2K+1)\times(M-1)\), where the leading 2 corresponds to the real and imaginary parts.

### Training and Loss

The training loss of the first branch mostly follows the HiFi-Codec [12] strategy, using a reconstruction loss, an adversarial loss, and a codebook learning loss. We also add an additional time-domain SNR loss with weighting \(\lambda=5\). The SNR loss is defined as:

\[L_{\text{SNR}}(x,\hat{x})\triangleq-10\log_{10}\left(\frac{||x||^{2}}{||x-\hat{x}||^{2}}\right) \tag{5}\]

The second branch is trained separately from the first branch. During training, the complex ratio filter (CRF) [24] is applied to the original reference channel audio instead of the reconstructed reference channel from the first branch. The reason is that, even after pretraining, the first branch can only output reconstructed speech that is perceptually equivalent to the original speech; the underlying spectrogram or waveform does not match exactly. This means that if we applied the CRF to the reconstructed speech, the original non-reference channel audio could not be used as learning targets, because of this mismatch in the first branch. Thus, during training, in contrast to Eq. (4), we have

\[\hat{X}_{m}^{\text{non\_ref}}(t,f)=\sum_{l=-L}^{L}\sum_{k=-K}^{K}W_{m}(t,f,l,k)X_{\text{ref}}(t+l,f+k) \tag{6}\]

where \(\hat{X}_{\text{ref}}\) is replaced by the original reference channel signal \(X_{\text{ref}}\). We then simply use the time-domain SNR loss averaged over all non-reference channels:

\[L_{\text{all}}=\frac{1}{M-1}\sum_{m=1}^{M-1}L_{\text{SNR}}(\text{ISTFT}(X_{m}^{\text{non\_ref}}),\text{ISTFT}(\hat{X}_{m}^{\text{non\_ref}})) \tag{7}\]

Figure 1: An overview of the proposed SpatialCodec framework.

### Inference

At inference time, we obtain \(\hat{X}_{m}^{\text{non\_ref}}(t,f)\) using Eq. (4). The first branch's sub-band codec reconstructs the reference channel audio. The second branch's SpatialCodec then reconstructs the \(M-1\) complex ratio filters, which are applied to the reconstructed reference channel audio to reconstruct the \(M-1\) non-reference channels.
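As a rough, self-contained illustration of the covariance input feature (Eq. 3) and of how the decoded complex ratio filters are applied (Eq. 4), a minimal numpy sketch is given below. The array shapes, the choice of channel 0 as the reference, and the function names are assumptions made for this sketch, not code from the released implementation.

```python
import numpy as np

def spatial_covariance_feature(X):
    """X: complex STFT of shape (M, T, F). Returns a (2*(M**2+1), T, F) real feature
    built from the per-bin covariance Phi(t,f) = x x^H plus the reference channel."""
    M, T, F = X.shape
    phi = np.einsum('mtf,ntf->tfmn', X, np.conj(X))        # (T, F, M, M) outer products
    phi = phi.reshape(T, F, M * M).transpose(2, 0, 1)       # (M*M, T, F)
    ref = X[0][None]                                        # reference channel STFT
    return np.concatenate([phi.real, phi.imag, ref.real, ref.imag], axis=0)

def apply_crf(X_ref, W, L=4, K=1):
    """Apply complex ratio filters as in Eq. (4).
    X_ref: (T, F) reference STFT; W: (M-1, T, F, 2L+1, 2K+1) complex filters."""
    T, F = X_ref.shape
    Xp = np.pad(X_ref, ((L, L), (K, K)))                    # zero-pad time and frequency
    out = np.zeros((W.shape[0], T, F), dtype=complex)
    for dl in range(2 * L + 1):
        for dk in range(2 * K + 1):
            out += W[..., dl, dk] * Xp[dl:dl + T, dk:dk + F]
    return out                                              # (M-1, T, F) non-ref channels
```

The same filtering step would be used with the original \(X_{\text{ref}}\) during training (Eq. 6) and with the decoded \(\hat{X}_{\text{ref}}\) at inference time.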
## 4 Experiments and Dataset

We use the AISHELL-2 speech dataset [25] as the source of clean speech to synthesize the multi-channel reverberant dataset. Our array configuration comes from a real-world microphone array designed for the meeting scenario: an 8-channel linear non-uniform array where the distances between neighboring microphones are \([2,2,2,14,2,2,2]\) centimeters. A total of 10k multi-channel RIRs are generated with random room characteristics using the image-source method. The reverberation time lies in the range [0, 0.7s]. A total of 90k, 7.5k, and 2k utterances are generated for the 'Train', 'Dev', and 'Test' sets, respectively. We use a 16 kHz sampling rate. For our models, the STFT is applied with a 640-point Hann window and a 320-point hop size, resulting in 50 frames per second. For the complex ratio filter mentioned in Section 3.2, we use K=1 and L=4. For the vector quantization in the sub-band codec and the SpatialCodec in Section 3, all the codebooks have 1024 entries. We use 2 residual vector quantization layers, so that both the sub-band codec and the SpatialCodec have a bitrate of 50 x 6 x 2 x 10 = 6 kbps each, i.e. 12 kbps in total. For training the reference channel codec, we follow the training scripts in HiFi-Codec [12]1 and set the batch size to 8 and the segment length to 4 seconds. The training takes two million steps. The SpatialCodec is trained in a similar way. We use the Adam optimizer with a learning rate of 1e-4.

Footnote 1: [https://github.com/yangdongchao/AcademiCodec](https://github.com/yangdongchao/AcademiCodec)

## 5 Evaluation Metrics

This work focuses on spatial information preservation and reconstruction for multi-channel audio. For the reference channel codec, we use HiFi-Codec [12] and Encodec [10] as benchmark systems, which validate single-channel audio coding performance. We mainly evaluate our methods using several spatial metrics and beamforming performance. We first use the RTF (Relative Transfer Function) error [27] and the direction of arrival (DoA) estimation error with the classic MUSIC [28] algorithm. We then propose a more intuitive spatial metric called spatial similarity. Besides the spatial metrics, we also evaluate the beamforming performance towards the target DoA to double-check both the signal quality and the main directivity of the target source. For all metrics in this section, we use a 2048-point FFT, a 512-point hop size, and a 2048-point Hann window. Below are detailed descriptions of each metric.

### RTF Error

The RTF (Relative Transfer Function) error is defined in [27] as the angle between the ground-truth RTF \(a(f)\) and the estimated RTF \(\hat{a}(f)\), averaged over all frequency bins:

\[\text{RTF Error}=\frac{1}{F}\sum_{f}\arccos\,\text{Re}\left(\frac{\hat{a}(f)^{\text{H}}a(f)}{|\hat{a}(f)||a(f)|}\right) \tag{8}\]

where the ground-truth RTF \(a(f)\) and the estimated RTF \(\hat{a}(f)\) are extracted from the first principal component of the \(M\)-channel STFT matrices \(X(f)\in\mathbb{C}^{M\times T}\) and \(\hat{X}(f)\in\mathbb{C}^{M\times T}\), respectively.

### MUSIC DoA Error

The MUSIC DoA error is the DoA estimation error with respect to the ground-truth DoA. This is a very intuitive metric that roughly measures the ability of an algorithm to capture the direct-path component. We use the classic MUSIC [28] algorithm's implementation in pyroomacoustics [29].

### Spatial Similarity

We provide a spatially more intuitive metric which we call spatial similarity (SS).
We first define a \(B\)-dimensional spatial feature vector \(\mathbf{S}(f)\), which is simply the magnitudes of \(B\) fixed super-directive beamforming outputs averaged over time. Given a multi-channel input \(X\), the spatial feature is defined as:

\[Y_{b}(f)=\frac{1}{T}\sum_{t}|\text{Beamformer}_{b}(X(t,f))|,\quad b\in[1,...,B] \tag{9}\]

\[\mathbf{S}(f)=[Y_{1}(f),Y_{2}(f),Y_{3}(f),...,Y_{B}(f)] \tag{10}\]

The spatial feature is similar to the beamspace-domain concept in [30, 31], which uses a few spatially sampled beamformers to transform the multi-channel STFT from the RTF domain to the beamspace domain, which matches our spatial intuition. We uniformly sample the beamformers' directions in the inter-channel time difference domain, which means all pairs of neighboring beamforming directions have the same inter-channel time difference from the perspective of the linear array. The \(b\)-th beamformer's direction \(\theta_{b}\) follows:

\[\theta_{b}=\arccos\left(1-\frac{2b}{B}\right) \tag{11}\]

For the super-directive beamforming, we set the diagonal loading to 1e-2. Details can be found in [30] or in our code. We then define the spatial similarity based on these spatial features. After obtaining the spatial features of the original and reconstructed multi-channel audio, \(\mathbf{S}(f)\) and \(\mathbf{\hat{S}}(f)\) respectively, the spatial similarity is defined as:

\[\text{Spatial Similarity}=\frac{1}{F}\sum_{f}\frac{\mathbf{S}(f)^{T}\mathbf{\hat{S}}(f)}{||\mathbf{S}(f)||\cdot||\mathbf{\hat{S}}(f)||} \tag{12}\]

We believe this spatial similarity is a more intuitive metric to measure spatial cue preservation over all directions. In our case, it can capture the direction of arrival of the direct path and early reflections. We set B=50 when calculating the spatial similarity.

### Beamforming Performance

Besides beamforming towards a few fixed directions, we can also beamform towards the ground-truth direct path's direction and check the beamforming result. This also provides a perceptual listening opportunity to check whether the direct path's signal is preserved well or distorted in the source direction. We report PESQ, SNR, STOI, and the non-intrusive DNSMOS [26] raw scores (SIG, BAK, OVRL).

## 6 Results and Discussions

Our baselines include a few channel-independent coding methods (a single-channel codec is used to code each channel independently). They are OPUS [3] with two versions (6 and 12 kbps per channel), HiFi-Codec [12], Encodec [10], and our proposed sub-band codec (all 6 kbps per channel). HiFi-Codec and Encodec are retrained on our dataset. We also design a black-box multi-channel-in multi-channel-out end-to-end (MIMO E2E) model for comparison and analysis, which has the same architecture as the SpatialCodec except that the last decoder layer directly outputs the multi-channel STFT reconstruction. Its number of residual vector quantization layers is set to 4, so its bitrate is 12 kbps. For the SpatialCodec, we also substitute the first branch's sub-band codec with HiFi-Codec and Encodec as two other baselines. Note that HiFi-Codec and Encodec are time-domain models, so an STFT is needed before applying the complex ratio filter.

**Overall Comparisons**: Table 1 shows the evaluation results of all proposed and baseline models. Generally, we observe that our proposed SpatialCodec and the MIMO E2E model achieve much better performance in all metrics than the other, higher-bitrate baselines, while maintaining only 12 kbps of bitrate in total.
Also, we can see that the channel-independent coding baselines are not able to preserve spatial information well, even though each channel is coded at a decent coding rate. Lastly, the black-box MIMO E2E model is worse than our proposed two-branch approach in spatial performance, e.g., spatial similarity (SS) of 0.86 vs. 0.95, which shows it is hard to directly learn spatial preservation through such a black-box network. However, it performs better than SpatialCodec in beamforming performance because the main lobe of the beamformer is generally wide and not sharply focused on the target direction.

**Spatial performance**: For the **DoA error**, SpatialCodec achieves the best result when combined with Encodec, differing from the DoA error of the ground-truth reverberant clean signal by only 2 degrees. When combined with the sub-band codec or HiFi-Codec, SpatialCodec still achieves similar performance. All channel-independent baselines have over 22 degrees of DoA error, which is much worse than our proposed SpatialCodec methods. For the **RTF error**, our SpatialCodec has the best performance when combined with our sub-band codec. Overall, it achieves an RTF error of 0.73, which improves over the OPUS12 (8x12=96 kbps) baseline by over 18% and over the OPUS6 (8x6=48 kbps) baseline by over 43%. Again, the channel-independent methods all have RTF errors higher than 0.8, a comparatively large error. The black-box MIMO E2E model only achieves an error of 0.95, which is even worse than OPUS12. For **spatial similarity (SS)**, our SpatialCodec achieves a superior score of 0.95 when combined with any reference channel neural codec. Among the baseline methods, only OPUS12 and the sub-band codec have a spatial similarity of more than 0.9. The black-box MIMO E2E model again shows inferior results here. Figure 2 shows a visualization of the B=50 normalized spatial features for four methods: the original multi-channel audio, OPUS with two bitrate versions, and our proposed SpatialCodec (with the sub-band codec). We can see that our SpatialCodec aligns very closely with the ground-truth feature at both 1 and 3 kHz, while OPUS12 and MIMO E2E show quite some deviation from the ground-truth pattern. Overall, for spatial performance, the proposed SpatialCodec outperforms the other baselines when combined with any reference channel codec. Note that neither the black-box MIMO E2E model nor the high-bitrate channel-independent coding methods achieve performance similar to our two-branch methods.

**Beamforming Performance**: Our SpatialCodec and MIMO E2E both have promising performance in the intrusive and non-intrusive beamforming metrics, showing decent spectral reconstruction. The reconstructed multi-channel audio demos are also given on our website. Overall, MIMO E2E performs best, probably because it incorporates all channels to code the spectral information. The SpatialCodec performs slightly worse but still much better than all other baselines. For **PESQ**, MIMO E2E has a score of 3.02, while the best SpatialCodec achieves 2.92 (coupled with the sub-band codec). OPUS12 only achieves a score of 2.53 despite its high bitrate. The other neural channel-independent baselines achieve scores around 2.65, while the sub-band codec performs a bit better. The situation is similar for **STOI**, with MIMO E2E scoring 0.85 and SpatialCodec scoring above 0.8, while all other baselines score below 0.8. For **SNR**, the sub-band codec achieves the highest value of 9.66 dB, while our best SpatialCodec achieves 8.28 dB.
Then come OPUS12's 7.13 dB and MIMO E2E's 6.14 dB. SNR is probably not the best metric to show perceptual spectral performance, but it is a good sanity check. For the non-intrusive **DNSMOS** raw scores, the overall scores are a bit low for all systems because the target signal is reverberant. MIMO E2E again performs the best. Together with MIMO E2E, SpatialCodec + sub-band codec exceeds the performance of the ground-truth reverberant clean signal, because the original AISHELL-2 [25] speech data is not perfectly clean. Nevertheless, this shows our model has promising spectral performance. Furthermore, SpatialCodec performs significantly better than the channel-independent methods.

## 7 Conclusion and Future Work

This paper proposes a novel two-branch codec framework for neural spatial speech coding. The first branch is a reference channel codec that codes the source spectral information. The second branch extracts and codes the spatial information for multi-channel resynthesis. We also propose a few novel metrics to measure spatial and spectral quality, including spatial similarity and beamforming performance. Our 12 kbps approach performs much better than all baselines, including 96 kbps OPUS12. We also designed a black-box MIMO E2E model for comparison. Although we only test our system on single-speaker reverberant speech signals, this approach should generalize to more complicated scenarios like multi-speaker recordings, music sources, and moving sources. We leave these more challenging directions for future research.

Table 1: Evaluation results for 8-channel reverberant audio reconstruction on the test set. CI denotes Channel Independent. SS denotes spatial similarity. SB-CODEC denotes our sub-band codec.
DoA Error is in degrees while RTF Error is in radians. SIG, BAK, and OVRL are from DNSMOS [26] raw scores. MIMO E2E corresponds to our black-box end-to-end MIMO model as in Section 6.

Figure 2: Spatial Features Visualization (1kHz and 3kHz).
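For reference, a minimal numpy sketch of the spatial-similarity metric from Section 5 (Eqs. 9-12) could look as follows. It assumes the \(B\) super-directive beamformer weights have already been computed for the fixed directions of Eq. (11); the array shapes and names are placeholders rather than code from the released repository.

```python
import numpy as np

def spatial_feature(X, W_bf):
    """X: multi-channel STFT (M, T, F); W_bf: fixed beamformer weights (B, M, F).
    Returns S of shape (B, F): time-averaged beamformed magnitudes (Eqs. 9-10)."""
    Y = np.einsum('bmf,mtf->btf', np.conj(W_bf), X)   # B beamformer outputs
    return np.abs(Y).mean(axis=1)

def spatial_similarity(X_ref, X_est, W_bf, eps=1e-8):
    """Frequency-averaged cosine similarity between spatial features (Eq. 12)."""
    S, S_hat = spatial_feature(X_ref, W_bf), spatial_feature(X_est, W_bf)
    num = (S * S_hat).sum(axis=0)
    den = np.linalg.norm(S, axis=0) * np.linalg.norm(S_hat, axis=0) + eps
    return float(np.mean(num / den))
```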
2309.08701
Performance Metrics for Probabilistic Ordinal Classifiers
Ordinal classification models assign higher penalties to predictions further away from the true class. As a result, they are appropriate for relevant diagnostic tasks like disease progression prediction or medical image grading. The consensus for assessing their categorical predictions dictates the use of distance-sensitive metrics like the Quadratic-Weighted Kappa score or the Expected Cost. However, there has been little discussion regarding how to measure performance of probabilistic predictions for ordinal classifiers. In conventional classification, common measures for probabilistic predictions are Proper Scoring Rules (PSR) like the Brier score, or Calibration Errors like the ECE, yet these are not optimal choices for ordinal classification. A PSR named Ranked Probability Score (RPS), widely popular in the forecasting field, is more suitable for this task, but it has received no attention in the image analysis community. This paper advocates the use of the RPS for image grading tasks. In addition, we demonstrate a counter-intuitive and questionable behavior of this score, and propose a simple fix for it. Comprehensive experiments on four large-scale biomedical image grading problems over three different datasets show that the RPS is a more suitable performance metric for probabilistic ordinal predictions. Code to reproduce our experiments can be found at https://github.com/agaldran/prob_ord_metrics .
Adrian Galdran
2023-09-15T18:45:15Z
http://arxiv.org/abs/2309.08701v1
# Performance Metrics for Probabilistic Ordinal Classifiers

###### Abstract

Ordinal classification models assign higher penalties to predictions further away from the true class. As a result, they are appropriate for relevant diagnostic tasks like disease progression prediction or medical image grading. The consensus for assessing their categorical predictions dictates the use of distance-sensitive metrics like the Quadratic-Weighted Kappa score or the Expected Cost. However, there has been little discussion regarding how to measure the performance of probabilistic predictions for ordinal classifiers. In conventional classification, common measures for probabilistic predictions are Proper Scoring Rules (PSR) like the Brier score, or Calibration Errors like the ECE, yet these are not optimal choices for ordinal classification. A PSR named Ranked Probability Score (RPS), widely popular in the forecasting field, is more suitable for this task, but it has received no attention in the image analysis community. This paper advocates the use of the RPS for image grading tasks. In addition, we demonstrate a counter-intuitive and questionable behavior of this score, and propose a simple fix for it. Comprehensive experiments on four large-scale biomedical image grading problems over three different datasets show that the RPS is a more suitable performance metric for probabilistic ordinal predictions. Code to reproduce our experiments can be found at github.com/agaldran/prob_ord_metrics.

Keywords: Ordinal Classification, Proper Scoring Rules, Model Calibration, Uncertainty Quantification

## 1 Introduction and Related Work

The output of predictive machine learning models is often presented as categorical values, _i.e._ "hard" class membership decisions. Nonetheless, understanding the faithfulness of the underlying probabilistic predictions giving rise to such hard class decisions can be essential in some critical applications. Meaningful probabilities enable not only high model accuracy, but also more reliable decisions: a doctor may choose to order further diagnostic tests if a binary classifier gives a \(p=45\%\) probability of disease, even if the hard prediction is "healthy" [2]. This is particularly true for ordinal classification problems, _e.g._ disease severity staging [6, 7] or medical image grading [14, 21]. In these problems, predictions should be _as close as possible to the actual category_; further away predictions must incur heavier penalties, as they have increasingly worse consequences. There is a large body of research around performance metrics for medical image analysis [20]. Most existing measures, like accuracy or the F1-score, focus on assessing hard predictions in specific ways that capture different aspects of a problem. In ordinal classification, the recommended metrics are the Quadratic-Weighted Kappa and the Expected Cost [5, 16]. In recent years, measuring the performance of "soft" probabilistic predictions has attracted increasing research interest [12, 19]. For this purpose, the current consensus is to employ Calibration Errors like the ECE and Proper Scoring Rules like the Brier score [16]. We will show that other metrics can instead be a better choice for assessing probabilistic predictions in the particular case of ordinal classification problems. How to measure the correctness of probabilistic predictions is a decades-old question, naturally connected to forecasting, _i.e._ predicting the future state of a complex system [9].
A key aspect of forecasting is that, contrary to classifiers, forecasters do not output hard decisions, but probability distributions over possible outcomes. Weather forecasts do not tell us whether it will rain tomorrow or not; they give us a probability estimate about the likelihood of raining, leaving to us the decision of taking or not an umbrella, considering the personal cost of making such a decision. The same applies for financial investments or sports betting, where it is also the final user who judges risks and makes decisions based on probabilistic forecasts. In this context, Proper Scoring Rules (PSRs) have long been used by the forecasting community to measure predictive performance [10]. PSRs are the focus of this paper, and will be formally defined in Section 2.1.

Relation to Calibration: A popular approach to assess the quality of probabilistic predictions is measuring calibration. A model is well calibrated if its probabilistic predictions are aligned with its accuracy on average. PSRs and calibration are intertwined concepts: PSRs can be decomposed into a calibration and a resolution component [8]. Therefore, a model needs to be both calibrated and resolved (_i.e._ having _sharp_, or _concentrated_ probabilities) in order to have a good PSR value. For example, if a disease appears in 60% of the population, and our model is just "return p=0.6", in the long run the model is correct 60% of the time, and it is perfectly calibrated, as its confidence is fully aligned with its accuracy, despite having zero predictive ability. If the model predicted in a "resolved" manner with \(p=0.99\) the presence of the disease, but was correct only 70% of the time, then it would be overconfident, which is a form of miscalibration. Only when the model is simultaneously confident and correct can it attain a good PSR value.

The two most widely adopted PSRs are the Brier and the Logarithmic Score [1, 11]. Unfortunately, none of these is appropriate for the assessment of ordinal classification probabilities [3]. A third PSR, long used by forecasting researchers in this scenario, the Ranked Probability Score (RPS, [4]), appears to have been neglected so far in biomedical image grading applications. This paper first covers the definition and basic properties of PSRs, and then motivates the use of the RPS for ordinal classifiers. We also illustrate a counter-intuitive behavior of this score, and propose a simple modification to solve it. Our experiments cover two relevant biomedical image grading problems and illustrate how the RPS can better assess probabilistic predictions of ordinal classification models.

## 2 Methods

### Scoring Rules - Notation, Properties, Examples

We consider a \(K\)-class classification problem, and a classifier that takes an image \(\mathbf{x}\) and maps it into a vector of probabilities \(\mathbf{p}\in[0,1]^{K}\). Typically, \(\mathbf{p}\) is the result of applying a softmax operation on the output of a neural network. Suppose \(\mathbf{x}\) belongs to class \(y\in\{1,...,K\}\), and denote by \(\mathbf{y}\) its one-hot representation. A Scoring Rule (SR) \(\mathcal{S}\) is any function taking the probabilistic prediction \(\mathbf{p}\) and the label \(\mathbf{y}\) and producing a number \(\mathcal{S}(\mathbf{p},\mathbf{y})\in\mathbb{R}\) (a score). Here we consider negatively oriented SRs, which assign lower values to _better predictions_.
Of course, the above is an extremely generic definition, to which we must now attach additional properties in order to encode our understanding of what _better predictions_ means for a particular problem.

Property 1: A Scoring Rule (SR) is _proper_ if its value is minimal when the probabilistic prediction coincides with the ground-truth in expectation.

Example: The Brier Score [1] is defined as the sum of the squared differences between probabilities and labels:

\[\mathrm{Brier}(\mathbf{p},\mathbf{y})=\|\mathbf{p}-\mathbf{y}\|_{2}^{2}=\sum_{i=1}^{K}(p_{i}-y_{i})^{2}. \tag{1}\]

Since its value is always non-negative, and it decreases to 0 when \(\mathbf{p}=\mathbf{y}\), we conclude that the Brier Score is indeed proper.

Property 2: A Proper Scoring Rule (PSR) is _local_ if its value only depends on the probability assigned to the correct category.

Example: The Brier Score is non-local, as its value depends on the probability placed by the model on all classes. The Logarithmic Score [11], given by:

\[\mathcal{L}(\mathbf{p},\mathbf{y})=-\log(p_{c}) \tag{2}\]

where \(c\) is the correct category of \(\mathbf{x}\), rewards the model for placing as much probability mass as possible in \(c\), regardless of how the remaining probability is distributed. It is, therefore, a local PSR. The Logarithmic Score is also known, when taken on average over a dataset, as the Negative Log-Likelihood.

Property 3: A PSR is _sensitive to distance_ if its value takes into account the order of the categories, in such a way that probability placed in categories further away from the correct class is more heavily penalized.

Example: Both the Brier and the Logarithmic scores are insensitive to distance (shuffling \(\mathbf{p}\) and \(\mathbf{y}\) jointly won't affect the score). Sensitivity to distance is essential for assessing ordinal classifiers. Below we define the Ranked Probability Score (RPS) [4, 18], which has this property, and is therefore more suitable for our purposes.

### The Ranked Probability Score for Ordinal Classification

Consider a test sample \((\mathbf{x},\mathbf{y})\) in a 3-class classification problem, with label \(\mathbf{y}\) and two probabilistic predictions \(\mathbf{p}_{1},\mathbf{p}_{2}\):

\[\mathbf{y}=[\,1,0,0\,],\ \ \mathbf{p}_{1}=[\,\tfrac{1}{4},\tfrac{3}{4},0\,],\ \ \mathbf{p}_{2}=[\,\tfrac{1}{4},0,\tfrac{3}{4}\,] \tag{3}\]

In this scenario, both the Brier and the Logarithmic scores produce the same penalty for each prediction, whereas a user might prefer \(\mathbf{p}_{1}\) over \(\mathbf{p}_{2}\) due to the latter assigning more probability to the third category. Indeed, if we use the arg-max operator to generate a hard decision for this sample, we will obtain a prediction of class 2 and class 3 respectively, which could result in the second model declaring a patient as severely unhealthy, with serious consequences. In this context, we would like to have a PSR that takes into account distance to the true category, such as the Ranked Probability Score (RPS, [4]), given by:

\[\text{RPS}(\mathbf{p},\mathbf{y})=\frac{1}{K-1}\sum_{i=1}^{K-1}\left[\sum_{j=1}^{i}(p_{j}-y_{j})\right]^{2}=\frac{1}{K-1}\|\mathbf{P}-\mathbf{Y}\|_{2}^{2}. \tag{4}\]

The RPS is the squared \(\ell_{2}\) distance between the cumulative distributions \(\mathbf{Y}\) of the target label \(\mathbf{y}\) and \(\mathbf{P}\) of the probabilistic prediction \(\mathbf{p}\), discounting their last component (as they are both always one) and normalizing so that it varies in the unit interval.
In the above example, the RPS would give for each prediction a penalty of \(\text{RPS}(\mathbf{p}_{1},\mathbf{y})=\nicefrac{{1}}{{8}}\) and \(\text{RPS}(\mathbf{p}_{2},\mathbf{y})=\nicefrac{{1}}{{4}}\), as shown in Figure 1. Among many interesting properties, one can show that the RPS is proper [17], and reduces to the Brier score for \(K=2\). Despite the RPS dating back more than 50 years [4], and enjoying great popularity in the weather forecasting community, it appears to be much less known in the image analysis and computer vision areas, where we could not find any trace of it. The **first goal** of this paper is to bring to the attention of computer vision researchers this tool for measuring the performance of probabilistic predictions in ordinal classification.

Figure 1: The RPS is sensitive to distance, suitable for assessing probabilistic predictions on biomedical image grading problems. It is the difference between the cumulative probability distributions of the label and a probabilistic prediction.

### The Squared Absolute RPS

Our **second goal** in this paper is to identify and then fix certain failure modes of the RPS that might lead to counter-intuitive behaviors. First, in disease grading and other ordinal classification problems it is customary to assign penalties to mistakes that grow quadratically with the distance to the correct category. This is the reason why most works utilize the Quadratic-Weighted Kappa score (QWK) instead of the linearly weighted version of this metric. However, the RPS increases the penalty linearly, as can be quickly seen with a simple 3-class problem and an example \((\mathbf{x}_{1},\mathbf{y}_{1})\) of class 1 (\(\mathbf{y}_{1}=[\,1,0,0\,]\)):

\[\text{RPS}([\,1,0,0\,],\mathbf{y}_{1})=0,\ \ \text{RPS}([\,0,1,0\,],\mathbf{y}_{1})=1/2,\ \ \text{RPS}([\,0,0,1\,],\mathbf{y}_{1})=1. \tag{5}\]

Also, the RPS has a hidden preference for symmetric predictions. To see this, consider a second example \((\mathbf{x}_{2},\mathbf{y}_{2})\) in which the correct category is now the middle one (\(\mathbf{y}_{2}=[\,0,1,0\,]\)), and two probabilistic predictions: \(p_{\text{sym}}=[\,0.30,0.40,0.30\,]\), \(p_{\text{asym}}=[\,0.45,0.50,0.05\,]\). In principle, there is no reason to prefer \(p_{\text{sym}}\) over \(p_{\text{asym}}\), unless certain prior/domain knowledge tells us that symmetry is a desirable property. In this particular case, \(p_{\text{asym}}\) is actually more confident on the correct class than \(p_{\text{sym}}\), which is however the preferred prediction for the RPS:

\[\text{RPS}([\,0.30,0.40,0.30\,],\mathbf{y}_{2})=0.09<0.1025=\text{RPS}([\,0.45,0.50,0.05\,],\mathbf{y}_{2}). \tag{6}\]

Figure 2: The Ranked Probability Score displays some counter-intuitive behavior that the proposed sa-RPS can fix. Here, \(\mathbf{p}_{2}\) places more probability on the correct class but \(\mathbf{p}_{1}\) is preferred due to its symmetry.

In order to address these aspects of the conventional RPS, we propose to implement instead the Squared Absolute RPS (sa-RPS), given by:

\[\text{sa-RPS}(\mathbf{p},\mathbf{y})=\frac{1}{K-1}\left[\sum_{i=1}^{K}\left|\sum_{j=1}^{i}(p_{j}-y_{j})\right|\right]^{2} \tag{7}\]

Replacing the inner square in Eq. (4) by an absolute value, we manage to break the preference for symmetry of the RPS, and by squaring the overall result we build a metric that still varies in [0,1] but gives a quadratic penalty to further away predictions. This is illustrated in Fig. 2 above.
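Both scores are straightforward to implement. The sketch below is a plain transcription of Eqs. (4) and (7) into Python, applied to the example of Eq. (6); it is provided only for illustration and is not the evaluation code released with the paper.

```python
import numpy as np

def rps(p, y):
    """Ranked Probability Score, Eq. (4): squared l2 distance between the cumulative
    distributions of prediction p and one-hot label y, over the first K-1 components."""
    P, Y = np.cumsum(p), np.cumsum(y)
    return float(np.sum((P[:-1] - Y[:-1]) ** 2) / (len(P) - 1))

def sa_rps(p, y):
    """Squared Absolute RPS, Eq. (7): absolute cumulative differences are summed first,
    and the total is squared, removing the preference for symmetric predictions."""
    P, Y = np.cumsum(p), np.cumsum(y)
    return float(np.sum(np.abs(P - Y)) ** 2 / (len(P) - 1))

y2 = [0, 1, 0]                                       # the correct class is the middle one
p_sym, p_asym = [0.30, 0.40, 0.30], [0.45, 0.50, 0.05]
print(rps(p_sym, y2), rps(p_asym, y2))               # 0.09 vs 0.1025: RPS prefers p_sym
print(sa_rps(p_sym, y2), sa_rps(p_asym, y2))         # the ordering is reversed for sa-RPS
```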
### Evaluating Evaluation Metrics Our **third goal** is to demonstrate how the (sa-)RPS is useful for evaluating probabilistic ordinal predictions. In the next section we will show some illustrative examples that qualitatively demonstrate its superiority over the Brier and Logarithmic scores. However, it is hard to quantitatively make the case for one performance metric over another, since metrics themselves are what quantify modeling success. We proceed as follows: we first train a neural network to solve a biomedical image grading problem. We generate probabilistic predictions on the test set and apply distance-sensitive metrics to (arg-maxed) hard predictions (QWK and EC, as recommended in [16]), verifying model convergence. Here it is important to stress that, contrary to conventional metrics (like accuracy, QWK, or ECE), PSRs can act on an individual datum, without averaging over sets of samples. We exploit this property to design the following experiment: we sort the probabilistic predictions of the test set according to a score \(\mathcal{S}\), and then progressively remove samples that are of worst quality according to \(\mathcal{S}\). We take the arg-max on the remaining probabilistic predictions and compute QWK and EC. If \(\mathcal{S}\) prefers better ordinal predictions, we must see a performance increase on that subset. We repeat this process, each time removing more of the worse samples, and graph the evolution of QWK and EC for different scores \(\mathcal{S}\): a better score should result in a faster QWK/EC-improving trend. Lastly, in order to derive a single number to measure performance, we compute the area under the remaining samples vs QWK/EC curve, which we call Area under the Retained Samples Curve (AURSC). In summary: **What we expect to see:** As we remove test set samples considered as worse classified by RPS, we expect to more quickly improve QWK/EC on the resulting subsets. We measure this with the Area under the Retained Samples Curve (AURSC). ## 3 Experimental Results We now give a description of the data we used for experimentation, analyze performance for each considered problem, and close with a discussion of results. ### Datasets and Architecture Our experiments are on two different medical image grading tasks: **1)** the **TMED**-v2 dataset ([13], link) contains 17,270 images from 577 patients, with an aortic stenosis (AS) diagnostic label from three categories (none, early AS, or significant AS). The authors provide an official train/test distribution of the data that we use here. **2) Eyepacs** (link) contains retinal images and labels for grading Diabetic Retinopathy (DR) stage into five categories, ranging from healthy to proliferative DR. It has 35,126 images for training and 53,576 in the test set. We train a ConvNeXt [15], minimizing the CE loss with the Adam algorithm for 10 epochs, starting with a learning rate of \(l=10^{-4}\) and decaying it to zero over the training. We report average Area under the Retained Samples Curve (AURSC) for 50 bootstrap iterations in each dataset below, and also plot the evolution of performance as we remove more samples considered to be worse by four PSRs: the Brier score, the Logarithmic score (Neg-Log), RPS and sa-RPS. ### How is RPS useful? Qualitative Error Analysis The obvious application of RPS would be to train better ordinal classification models. But beyond this, RPS also enables improved, fine-grained error analysis. Let us see this through a simple experiment.
Since PSRs assess samples individually, we can sort our test set using RPS, NLL, and Brier score. The worst-scored items are the predictions that each scoring rule flags as most incorrect. The result of sorting predictions on the Eyepacs test set with the Brier, Neg-Log and RPS rules is shown in Fig. 3. We can see that the prediction identified as worst by the RPS does indeed violate more heavily the order of categories, placing more probability on class 5 for a sample of class 1. On the other hand, for the same test set and predictions, the Brier score finds worst a prediction with 99% of the probability on class 3 and a label of class 5, and the Neg-Log score identifies a sample of class 1 for which the model wrongly predicts class 2. Figure 3: For the same test set and predictions, the RPS finds wrong samples that are more incorrect from the point of view of ordinal classification. ### Quantitative Experimental Analysis Quantitative results of the experiment described in section 2.4, computing AURSC values for all PSRs, are shown in Table 1, with dispersion measures obtained from 50 bootstrapped performance measurements. We see that for the considered ordinal classification problems, distance-sensitive scores consistently outperform the Brier and Neg-Log scores. Also, the Squared Absolute Ranked Probability Score always outperforms the conventional Ranked Probability Score. It is worth stressing that when observing bootstrapped performance intervals, neither the Brier nor the Logarithmic scores manage to overlap the sa-RPS interval in either of the two datasets, and in the Eyepacs dataset not even the best RPS result reaches the performance of the worst sa-RPS result. \begin{table} \begin{tabular}{c c c c c} & \multicolumn{2}{c}{**TMED**} & \multicolumn{2}{c}{**Eyepacs**} \\ \cline{2-5} & **AURSC-QWK\(\uparrow\)** & **AURSC-EC\(\downarrow\)** & **AURSC-QWK\(\uparrow\)** & **AURSC-EC\(\downarrow\)** \\ \hline **Brier** & 13.46 \(\pm\) 0.35 & 3.76 \(\pm\) 0.21 & 17.36 \(\pm\) 0.04 & 2.84 \(\pm\) 0.07 \\ \hline **Neg-Log** & 13.56 \(\pm\) 0.35 & 3.62 \(\pm\) 0.2 & 17.44 \(\pm\) 0.04 & 2.67 \(\pm\) 0.07 \\ \hline **RPS** & **14.76 \(\pm\) 0.28** & **2.68 \(\pm\) 0.14** & **17.81 \(\pm\) 0.03** & **1.99 \(\pm\) 0.04** \\ \hline **sa-RPS** & **14.95 \(\pm\) 0.25** & **2.53 \(\pm\) 0.12** & **17.86 \(\pm\) 0.03** & **1.88 \(\pm\) 0.04** \\ \hline \end{tabular} \end{table} Table 1: Areas under the Retained Samples Curve for **TMED** and **Eyepacs**, with a **ConvNeXt**, for each PSR; **best** and **second best** values are marked. Figure 4: We sort probabilistic predictions in each test set using several PSRs: Brier, Neg-Log, RPS, sa-RPS. We progressively discard worse-scored samples, improving the metric of interest (only QWK shown). Removing worse samples according to RPS and sa-RPS leads to better QWK, implying that they both capture better ordinal classification performance at the probabilistic level. For a visual analysis, Fig. 4 shows the full Sample Retention Curves from which the AURSC-QWK values in Table 1 were computed. These curves show how PSRs can indeed take a single probabilistic prediction and return a score that is correlated to QWK, which is computed over sets of samples. This is because as we remove samples according to any PSR, performance in the remaining test set improves in all cases. The curves in Fig. 4 also tell a more complete story of how the two distance-sensitive scores outperform the Brier and Neg-Log scores, particularly for TMED and Eyepacs.
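For concreteness, the retention-curve protocol behind Table 1 and Fig. 4 can be sketched in a few lines. This is a toy reconstruction on synthetic predictions: the function names, the retained-fraction grid and the trapezoidal AURSC normalization are our choices, and the EC metric is omitted because its exact definition is not reproduced in this excerpt, so the resulting numbers are not comparable to Table 1.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def rps_per_sample(probs, labels):
    """Per-sample RPS for an (N, K) array of predictions and integer labels."""
    N, K = probs.shape
    onehot = np.eye(K)[labels]
    cum = np.cumsum(probs - onehot, axis=1)[:, :-1]
    return np.sum(cum ** 2, axis=1) / (K - 1)

def retention_curve(probs, labels, scores, fractions=np.linspace(1.0, 0.5, 11)):
    """Sort samples by a per-sample score (higher = worse), progressively drop the
    worst-scored ones, and recompute QWK on each retained subset."""
    order = np.argsort(scores)                    # best (lowest score) first
    qwk = []
    for f in fractions:
        keep = order[: max(2, int(round(f * len(order))))]
        y_hat = probs[keep].argmax(axis=1)
        qwk.append(cohen_kappa_score(labels[keep], y_hat, weights="quadratic"))
    return fractions, np.array(qwk)

def aursc(fractions, qwk):
    """Area under the retained-samples curve (trapezoidal rule, our normalization)."""
    return np.trapz(qwk[::-1], x=fractions[::-1])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, K = 2000, 5
    labels = rng.integers(0, K, N)
    logits = 2.0 * np.eye(K)[labels] + rng.normal(size=(N, K))   # noisy synthetic model
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    frac, qwk = retention_curve(probs, labels, rps_per_sample(probs, labels))
    print("AURSC-QWK:", aursc(frac, qwk))
```

Any per-sample score (Brier, Neg-Log, sa-RPS) can be plugged in place of `rps_per_sample` to reproduce the comparison between curves.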
Indeed, just by removing 5%-6% of the samples with the worst (highest) RPS, we manage to improve QWK and EC to a greater extent. ## 4 Conclusion and Future Work We have shown that Proper Scoring Rules are useful tools for diagnosing probabilistic predictions, but the standard Brier and Logarithmic scores should not be preferred in ordinal classification problems like medical image grading. Instead, the Ranked Probability Score, popular in the forecasting community, should be favoured. We have also proposed sa-RPS, an extension of the RPS that can better handle some pathological cases. Future work will involve using the RPS to learn ordinal classifiers, and investigating its impact in calibration problems. ## Acknowledgments This work was supported by a Marie Sklodowska-Curie Fellowship (No 892297).
2309.12759
Dust Emission and Dynamics
When viewed from Earth, most of what we observe of a comet is dust. The influence of solar radiation pressure on the trajectories of dust particles depends on their cross-section to mass ratio. Hence solar radiation pressure acts like a mass spectrometer inside a cometary tail. The appearances of cometary dust tails have long been studied to obtain information on the dust properties, such as characteristic particle size and initial velocity when entering the tail. Over the past two decades, several spacecraft missions to comets have enabled us to study the dust activity of their targets at much greater resolution than is possible with a telescope on Earth or in near-Earth space, and added detail to the results obtained by the spacecraft visiting comet 1P/Halley in 1986. We now know that the dynamics of dust in the inner cometary coma is complex and includes a significant fraction of particles that will eventually fall back to the surface. The filamented structure of the near-surface coma is thought to result from a combination of topographic focussing of the gas flow, inhomogeneous distribution of activity across the surface, and projection effects. It is possible that some larger-than-centimetre debris contains ice when lifted from the surface, which can affect its motion. Open questions remain regarding the microphysics of the process that leads to the detachment and lifting of dust from the surface, the evolution of the dust while travelling away from the nucleus, and the extent to which information on the nucleus activity can be retrieved from remote observations of the outer coma and tail.
Jessica Agarwal, Yoonyoung Kim, Michael S. P. Kelley, Raphael Marschall
2023-09-22T10:03:16Z
http://arxiv.org/abs/2309.12759v2
# "Dust Emission and Dynamics" ###### Abstract When viewed from Earth, most of what we observe of a comet is dust. The influence of solar radiation pressure on the trajectories of dust particles depends on their cross-section to mass ratio. Hence solar radiation pressure acts like a mass spectrometer inside a cometary tail. The appearances of cometary dust tails have long been studied to obtain information on the dust properties, such as characteristic particle size and initial velocity when entering the tail. Over the past two decades, several spacecraft missions to comets have enabled us to study the dust activity of their targets at much greater resolution than is possible with a telescope on Earth or in near-Earth space, and added detail to the results obtained by the spacecraft visiting comet 1P/Halley in 1986. We now know that the dynamics of dust in the inner cometary coma is complex and includes a significant fraction of particles that will eventually fall back to the surface. The filamented structure of the near-surface coma is thought to result from a combination of topographic focussing of the gas flow, inhomogeneous distribution of activity across the surface, and projection effects. It is possible that some larger-than-centimetre debris contains ice when lifted from the surface, which can affect its motion. Open questions remain regarding the microphysics of the process that leads to the detachment and lifting of dust from the surface, the evolution of the dust while travelling away from the nucleus, and the extent to which information on the nucleus activity can be retrieved from remote observations of the outer coma and tail. ## 1 Introduction Dust released from the surface is the most observationally accessible constituent of a comet. When a comet becomes visible to the naked eye, we see primarily sunlight scattered by dust particles in its coma and tail. Theories explaining the appearance and formation of comet tails date many centuries back. Our concepts to constrain dust properties like size distribution, ejection times and velocities from the shape of and brightness distribution in a comet tail have their origins in the 19th century and primarily exploit the size dependence of solar radiation pressure on dust (Section 2.4). These concepts are described in detail in the Comets II book chapter by _Fulle_ (2004) and mainly yield information on the properties of the dust as it leaves the sphere of influence of the nucleus. The strongest limit of this approach is set by the finite resolution of the telescope images available. For a ground-based telescope, the typical seeing-limited resolution is of the order of \(1^{\prime\prime}\), which corresponds to 725 km at a comet-observer distance of 1 au. Under exceptional circumstances, such as Hubble Space Telescope observations during a close (0.1 au) Earth flyby, a spatial resolution of a few kilometers can be achieved (e.g., _Li et al._, 2017). However, no contemporary Earth- or near-Earth-based telescope can resolve the nucleus of a comet and the coma immediately above its surface. The images returned by ESA's Giotto spacecraft from comet 1P/Halley in 1986 (_Keller et al._, 1986) were the first to resolve this innermost part of the coma. They showed the comet nucleus as a solid object, and the dust as it emerges from its surface. 
It became clear that cometary nuclei were highly irregular bodies, certainly in shape and potentially in composition, and also that the brightness of the innermost dust coma was spatially highly variable. It displayed bright linear features apparently emanating from the surface, embedded in a more diffuse background (Section 3.2.3). Since the publication of the Comets II book, major progress in understanding the motion of dust in the near environment of cometary nuclei was enabled by several space missions. Their returned images confirmed the 1P/Halley results of the detailed fine structure in the coma brightness distribution. Indications were found that the debris itself can be outgassing and subject to physical evolution, and that part of it falls back to the surface. But open questions remain in particular concerning how activity is distributed across the surface, why it is spatially and temporally variable, and which microphysical processes lead to the ejection of dust. The answers to all these questions are necessary to understand the interior structure and composition of cometary nuclei, and eventually their formation. ESA's Rosetta mission provided us with a comprehensive, 2-year body of data obtained with various complementary techniques, including imaging and spectroscopy of dust and the surface at various wavelengths, and in situ analysis of the composition, density and velocity of both gas and dust. These data represent the best constraints we have to understand how, where and when dust is released from a cometary surface and its subsequent journey back to the surface or to interplanetary space. In this chapter, we first review the forces considered relevant for this outward journey of a dust particle (Section 2). We outline the still rudimentary knowledge of how dust activity is distributed across the comet surface and what methods can help to address this question (Section 3.1). We next describe how the dust particles are accelerated in the gas flow (Section 3.2), how solar gravity and radiation pressure take over as the gas dilutes (Section 3.3) and finally address the motion of dust in the comet's tail and trail, and its transition to the zodiacal cloud (Section 3.4). Open questions and potential means to address them are discussed in Section 4. ## 2 Forces acting on dust ### Gravity and tidal forces The gravitational force of a massive body acting on a particle of mass \(m_{\rm d}\) is \[F_{\rm g}=m_{\rm d}g, \tag{1}\] where \(g\) is the gravitational acceleration of that body. At distance \(r\) from the nucleus center of mass, the gravitational acceleration by the nucleus mass, \(M\), is defined as \[g=\frac{GM}{r^{2}}, \tag{2}\] where \(G\) is the gravitational constant. In the immediate environment of the nucleus, the gravitational field cannot be approximated by that of a point mass as in Eq. 1. The spatial extent of the nucleus, in combination with its irregular shape and potentially inhomogeneous internal mass distribution, will necessitate to consider also higher orders of the gravitational potential for calculations of the dust motion (e.g., _Werner_ 1994). The solar gravitational force on a particle at heliocentric distance \(r_{\rm h}\) is \[F_{\rm g}=\frac{GM_{\odot}}{r_{\rm h}^{2}}m_{\rm d}, \tag{3}\] where \(M_{\odot}\) is the mass of the Sun. The sphere of gravitational influence (Hill sphere) of the comet nucleus is estimated as \[R_{\rm Hill}=r_{\rm h}\left(\frac{1}{3}\frac{M}{M_{\odot}}\right)^{1/3}. 
\tag{4}\] Thus, for a typical 2-km radius Jupiter-family comet with a bulk density \(\rho_{\rm n}\) = 500 kg m\({}^{-3}\), perihelion distance of 1 au, and aphelion at 6 au, the Hill radius will range from roughly 200 km at perihelion to 1300 km at aphelion. The radial velocity component required for an object at distance \(r\) from mass \(M\) to stop being gravitationally bound to this mass (escape speed) is given by \[v_{\rm esc}=\sqrt{\frac{2GM}{r}}. \tag{5}\] Typical escape speeds from the surfaces of km-sized bodies are of order 1 m s\({}^{-1}\). For the motion of a dust particle in a frame attached to the nucleus center of mass, the difference in solar gravity between the locations of the particle and of the nucleus (the tidal force) is more relevant than the absolute value of solar gravity. A particle of mass \(m_{\rm d}\) located on the Sun-nucleus line and at a distance \(r\) from the nucleus in either direction is subject to a tidal force directed away from the nucleus given by \[F_{\rm tr}=\frac{2GM_{\odot}}{r_{\rm h}^{3}}m_{\rm d}r, \tag{6}\] while the tidal force on a particle located above the terminator points towards the nucleus with the magnitude \[F_{\rm tl}=\frac{GM_{\odot}}{r_{\rm h}^{3}}m_{\rm d}r. \tag{7}\] ### Drag by surrounding gas coma The main force accelerating dust particles away from the nucleus surface comes from their interaction with the surrounding gas field. As illustrated in Fig. 1, the drag force is strongest within \(\sim\)10 nucleus radii, \(R_{\rm n}\). Figure 1: Sketch of the cometary dust environment which we divide into three main dynamical regions: 1) The coupled coma region where the dust dynamics is dominated by local forces connected to the nucleus (gas drag and nucleus gravity) (Section 3.2); 2) the transitional coma region where the dust has decoupled from the gas and small particles transition to being dominated by solar forces (gravity and radiation pressure) while large particles are bound in the gravitational field of the nucleus (Section 3.3); 3) the dust tail and trail within which the escaping dust particles are purely governed by solar forces (Section 3.4). The boundaries between these regions are complex 3D surfaces. The spatial and temporal scales given are rough estimates for a 67P-like comet at 1 au. \(R_{\rm n}\) is the nucleus radius, and \(R_{\rm Hill}\) the Hill radius. This figure is a reproduction of Fig. 4 in _Marschall et al._ (2020c). By that distance the gas densities have diluted significantly, making molecule-dust collisions rare and momentum transfer from the gas flow to the dust particles inefficient. The gas dynamics are discussed by Marschall et al. in this book. Once a dust particle is detached and lofted from the surface, the surrounding gas molecules collide with it and accelerate the particle. The main direction of gas expansion is away from the nucleus surface and therefore so is the net force on the dust particles. When dust densities are low enough that particles do not exert a back reaction onto the gas flow (e.g. deceleration and/or heating of the gas flow) they can be considered as test particles within the gas flow and thus be treated mathematically separately (_Tenishev et al._ 2011). This condition is given in most cases but there are exceptions. For example, in the event of a strong outburst, where the dust plume is optically thick, this condition is certainly not satisfied.
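Before turning to how the drag force is evaluated in detail, the gravitational quantities of Eqs. 2-7 can be checked numerically for the example comet quoted above (2 km radius, bulk density 500 kg m\({}^{-3}\)). The short script below is a minimal sketch with standard physical constants and our own variable names; it reproduces the quoted ~200-1300 km Hill radii and the ~1 m/s escape speed.

```python
import numpy as np

G     = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
AU    = 1.496e11         # m

R_n, rho_n = 2e3, 500.0                      # nucleus radius [m] and bulk density [kg m^-3]
M = 4.0 / 3.0 * np.pi * R_n**3 * rho_n       # nucleus mass (~1.7e13 kg)

def hill_radius(r_h):                        # Eq. 4
    return r_h * (M / (3.0 * M_SUN)) ** (1.0 / 3.0)

def v_escape(r):                             # Eq. 5
    return np.sqrt(2.0 * G * M / r)

def tidal_accel_sunward_line(r_h, r):        # Eq. 6, per unit dust mass
    return 2.0 * G * M_SUN / r_h**3 * r

print(f"Hill radius: {hill_radius(1*AU)/1e3:.0f} km at 1 au, "
      f"{hill_radius(6*AU)/1e3:.0f} km at 6 au")                 # ~210 km and ~1270 km
print(f"Escape speed from the surface: {v_escape(R_n):.2f} m/s")  # ~1 m/s
# At 100 km from the nucleus and 1 au from the Sun, the tidal acceleration is
# still an order of magnitude smaller than the nucleus gravity at that distance:
r = 100e3
print(tidal_accel_sunward_line(1*AU, r), G * M / r**2)
```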
For such cases of an optically thick dust plume, a multi-phase simulation needs to be adopted (e.g., _Shou et al._ 2017) which also takes into account inter-particle collisions. In the following, we assume the more common case where dust particles do not significantly influence the gas flow. In addition, in most cases, the mean free path of the molecules is much larger than the dust particle size, and therefore free molecular aerodynamics can be applied. In this scenario (_Finson and Probstein_ 1968; _Gombosi et al._ 1985; _Gombosi et al._ 1986; _Gombosi_ 1987; _Sengers et al._ 2014; _Marschall et al._ 2016) the drag force, \(F_{\rm D}\), on a spherical particle is \[\vec{F}_{\rm D}=\frac{1}{2}C_{\rm D}m_{\rm g}n_{\rm g}\sigma_{\rm d}\left|\vec{v}_{\rm g}-\vec{v}_{\rm d}\right|\left(\vec{v}_{\rm g}-\vec{v}_{\rm d}\right), \tag{8}\] where \(\sigma_{\rm d}\) is the geometric cross-section of the dust particle, and \(\vec{v}_{\rm d}\) is its velocity. The mass of a gas molecule is \(m_{\rm g}\), and \(n_{\rm g}\) and \(\vec{v}_{\rm g}\) are their number density and macroscopic velocity, respectively. \(C_{\rm D}\) is called the drag coefficient. For an equilibrium gas flow, and a mean free path of the molecules much larger than the dust size, the drag coefficient (_Bird_ 1994) is defined as \[\begin{split} C_{\rm D}=&\frac{2\zeta^{2}+1}{\sqrt{\pi}\zeta^{3}}e^{-\zeta^{2}}+\frac{4\zeta^{4}+4\zeta^{2}-1}{2\zeta^{4}}\mbox{erf}(\zeta)\\ &+\frac{2\left(1-\varepsilon\right)\sqrt{\pi}}{3\zeta}\sqrt{\frac{T_{\rm d}}{T_{\rm g}}},\end{split} \tag{9}\] with the gas temperature \(T_{\rm g}\), the dust particle temperature \(T_{\rm d}\), the fraction of specular reflection \(\varepsilon\), and the molecular speed ratio \[\zeta=\frac{\left|\vec{v}_{\rm g}-\vec{v}_{\rm d}\right|}{\sqrt{\frac{2k_{\rm b}T_{\rm g}}{m_{\rm g}}}}, \tag{10}\] where \(k_{\rm b}\) is the Boltzmann constant. Figure 2 shows the drag coefficient as a function of \(\zeta\). For large particles the dust speed is much smaller than the gas speed. At the surface, the gas speed is of the same order as the thermal speed (the denominator in Eq. 10) and therefore \(\zeta\sim 1\) and \(C_{\rm D}\sim 5\) (Fig. 2). On the other hand, for a typical comet (water dominated emission well within the snow line) at large cometocentric distances (e.g., at \(10R_{\rm n}\)), the temperature of the gas has cooled to a few tens of Kelvin but the gas speed has reached the order of 1 km/s. In this case slowly moving particles have \(\zeta\sim 10\) and thus \(C_{\rm D}\sim 2\) (Fig. 2). Very small, typically sub-micron, particles may attain a significant fraction of the gas speed already very close to the surface, such that \(\zeta<1\) and \(C_{\rm D}>5\) (Fig. 2). Because the drag coefficient asymptotically approaches 2 for most particle sizes, a size-independent \(C_{\rm D}=2\) is often assumed rather than the more complicated Eqs. 9 and 10. This choice implies that the acceleration of small particles, in particular, is underestimated. Figure 2: Drag coefficient \(C_{\rm D}\) as a function of the parameter \(\zeta\) as given by Eqs. 9 and 10 and assuming \(T_{\rm d}=T_{\rm g}\) and \(\varepsilon\)=0. Eq. 9 describes the idealized case of spherical particles in a gas flow when the mean free path of the gas molecules is larger than the dust particle size. Real dust particles are not spherical and, in particular, larger dust grains are porous and fluffy aggregates (_Kolokolova and Kimura_ 2010; _Schulz et al._ 2015; _Rotundi et al._ 2015; _Langevin et al._ 2016; _Bentley et al._ 2016; _Mannel et al._ 2016; _Levasseur-Regourd et al._ 2018). This affects the dynamics of the particles as shown by _Skorov et al._ (2016a, 2018).
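As a numerical check of Eqs. 9 and 10 (cf. Fig. 2), the following sketch (our own function names, assuming \(T_{\rm d}=T_{\rm g}\), \(\varepsilon=0\) and a water-dominated coma) reproduces the limiting values quoted above: \(C_{\rm D}\approx 5\) for \(\zeta\approx 1\), and \(C_{\rm D}\) approaching 2 for large \(\zeta\).

```python
import numpy as np
from scipy.special import erf

def drag_coefficient(zeta, Td_over_Tg=1.0, eps=0.0):
    """Free-molecular drag coefficient C_D(zeta) of Eq. 9."""
    z = np.asarray(zeta, dtype=float)
    term1 = (2 * z**2 + 1) / (np.sqrt(np.pi) * z**3) * np.exp(-z**2)
    term2 = (4 * z**4 + 4 * z**2 - 1) / (2 * z**4) * erf(z)
    term3 = 2 * (1 - eps) * np.sqrt(np.pi) / (3 * z) * np.sqrt(Td_over_Tg)
    return term1 + term2 + term3

def speed_ratio(v_rel, T_g, m_g=3.0e-26, k_b=1.38e-23):
    """Molecular speed ratio zeta of Eq. 10 (m_g defaults to a water molecule)."""
    return v_rel / np.sqrt(2 * k_b * T_g / m_g)

# Near the surface: gas-dust relative speed comparable to the thermal speed -> zeta ~ 1
print(drag_coefficient(1.0))                     # ~4.8, i.e. C_D ~ 5
# Far from the nucleus: cold gas (~30 K) streaming at ~1 km/s past a slow particle
z = speed_ratio(1000.0, 30.0)
print(z, drag_coefficient(z))                    # zeta ~ 6, C_D ~ 2.3, approaching 2
# Very small particles nearly comoving with the gas: zeta < 1 and C_D well above 5
print(drag_coefficient(0.3))
```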
_Skorov et al._ found that porous aggregates are accelerated to significantly higher speeds than their compact counterparts of the same radius. This behavior can be mimicked in the spherical particle paradigm described above by adjusting the masses and geometric cross-sections to the respective values of the porous particles. In this sense, the spherical particles can be understood as effective particles with the given mass and cross-section. Additionally, particles have been observed to rotate in the comae of comets 103P and 67P (_Hermalyn et al._ 2013; _Fulle et al._ 2015b), and the effect of oblate or prolate particle shapes on their dynamics was studied by _Ivanovski et al._ (2017a,b). Not only will non-spherical particles begin to rotate in the gas flow but they may also accelerate to higher speeds than spherical ones. Unless a particle rotates sufficiently fast, the influence of particle rotation on their dynamics cannot simply be parameterized into a spherical particle paradigm (Eq. 8) as described above for porous particles, because spherical particles only experience a force along the direction of the gas flow, while non-spherical, rotating particles also experience a force perpendicular to the flow. This component of the acceleration is not included in Eq. (8). ### Intrinsic outgassing Solid particles in the coma can contain volatile ices. When the grain temperature is sufficiently warm, the ices will sublimate. This loss of vapour accelerates the particle. In the ideal case of an isotropically outgassing spherical particle, the net acceleration is 0. However, coma particles are not spheres, and outgassing may not be uniform, especially from large particles that can sustain a temperature gradient across their surfaces. Rapid rotation of a particle does help distribute the absorbed sunlight across the surface, and smooths out the temperature gradient, but the spin axis may prevent illumination of some portions of the grain. As a result, the acceleration will typically have an anti-solar component. Whether or not a tangential component exists depends on the spin state, shape, porosity, and thermal characteristics of the materials. The calculation of the acceleration from outgassing is analogous to the non-gravitational acceleration of cometary nuclei (_Marsden et al._, 1973), except that the microphysics of the particle, e.g., thermal and radiative properties, are very different. Setting these complexities aside, the force from gas sublimation (also called "rocket force"), \(F_{\rm s}\), can be calculated from: \[F_{\rm s}=\sigma_{\rm d}m_{\rm g}Zv_{\rm th}\kappa f_{\rm ice}, \tag{11}\] where \(Z\) is the number sublimation rate of the ice per surface area, \(\kappa\) is the degree of asymmetry of outgassing (0 for isotropic, 1 for directly toward the Sun), \(f_{\rm ice}\) is the effective fractional surface area of the ice (cf. _Kelley et al._, 2013), and \[v_{\rm th}=\frac{\pi}{4}\sqrt{\frac{8k_{\rm b}T_{\rm d}}{\pi m_{\rm g}}} \tag{12}\] is the mean thermal expansion speed of the gas. The details hidden within \(\kappa\) may be complex, but Eq. 11 is still useful for estimating the potential order of magnitude of acceleration by outgassing. Since the force is generally directed away from the Sun, it works similarly to acceleration by radiation pressure, in that the particles are accelerated into the tail direction.
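An order-of-magnitude illustration of Eqs. 11 and 12 is sketched below. The sublimation rate \(Z\), the asymmetry \(\kappa\), the ice fraction and the grain temperature are assumed, purely illustrative inputs (they are not values taken from this chapter), and the resulting force scales linearly with \(Z\) and \(f_{\rm ice}\), which are poorly constrained in practice.

```python
import numpy as np

K_B   = 1.38e-23      # J/K
M_H2O = 3.0e-26       # kg, mass of a water molecule

def v_thermal(T_d, m_g=M_H2O):
    """Mean thermal expansion speed of the escaping gas (Eq. 12)."""
    return np.pi / 4.0 * np.sqrt(8.0 * K_B * T_d / (np.pi * m_g))

def rocket_force(a, Z, T_d, kappa, f_ice, m_g=M_H2O):
    """Outgassing ('rocket') force on a spherical grain of radius a (Eq. 11)."""
    sigma_d = np.pi * a**2                    # geometric cross-section
    return sigma_d * m_g * Z * v_thermal(T_d, m_g) * kappa * f_ice

# Assumed, illustrative inputs: a centimetre-sized chunk at ~200 K, a local
# sublimation rate of 1e21 molecules m^-2 s^-1, moderately asymmetric
# outgassing, and a 1% effective ice coverage.
a, Z, T_d, kappa, f_ice = 1e-2, 1e21, 200.0, 0.5, 0.01
print(f"v_th ~ {v_thermal(T_d):.0f} m/s")                  # a few hundred m/s
print(f"F_s  ~ {rocket_force(a, Z, T_d, kappa, f_ice):.2e} N")
```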
For example, _Reach et al._ (2009) examined the acceleration from outgassing water ice at 1 au and found that the resultant force, \(F_{\rm s}\), can be comparable to or much stronger than that from solar radiation \(F_{\rm r}\) (Section 2.4), and estimated \(F_{\rm s}/F_{\rm r}\sim 100\) for a 1-mm sized grain with \(f_{\rm ice}=0.0017\). Over small heliocentric distance ranges and when the relevant ice is undergoing free sublimation, the force may be approximated as equivalent to the \(\beta\)-parameter, i.e., a force \(\propto r_{\rm h}^{-2}\) that is addressed with a reduced-gravity solution (Section 2.4). Likely, the outgassing force is observable only for centimetre-sized and larger particles, because smaller particles lose their ice content too quickly and hence too close to the surface for the non-gravitational effect on their trajectories to be measured. This conclusion was reached by _Markkanen and Agarwal_ (2020) and _Davidsson et al._ (2021) from thermophysical modelling, and, complementary, by _Reach et al._ (2009) from studying the motion of particles in the debris trail of comet 73P/Schwassmann-Wachmann 3. It is possible that asymmetric outgassing changes the rotation rate of an ice-bearing comet fragment, which can lead to disintegration by centrifugal force on time scales of less than a day for a meter-sized fragment (_Jewitt et al._, 2016). _Steckloff and Jacobson_ (2016) propose that fragment disintegration by sublimation torques can lead to the formation of striae observed in the tails of some enigmatic, bright comets. Alternative explanations of striae formation invoke electromagnetic forces (Section 2.6). ### Solar radiation pressure The radiation force is proportional to the solar intensity multiplied by the cross-sectional area of the particle: \[F_{\rm r}=\frac{Q_{\rm pr}}{c}\left(\frac{L_{\odot}}{4\pi r_{\rm h}^{2}} \right)\sigma_{\rm d}, \tag{13}\] where \(L_{\odot}\) is the solar luminosity, \(c\) is the speed of light, and \(Q_{\rm pr}\) is the radiation pressure coefficient averaged over the solar spectrum (Burns et al., 1979). Dynamical models commonly use a simplified treatment of comet dust, assuming \(Q_{\rm pr}\) = 1. A more detailed treatment dependent on dust mineralogies and structures (compact or fluffy) is discussed in Kolokolova et al. in this volume. Since the solar radiation force opposes the solar gravity and both forces are proportional to \(1/r_{\rm h}^{2}\), the net force can be considered as reduced solar gravity, \[F_{\rm net}=F_{\rm r}-F_{\rm g}=(1-\beta)F_{\rm g}, \tag{14}\] where the \(\beta\) parameter is the ratio of the radiation force, \(F_{\rm r}\), to the solar gravitational force, \(F_{\rm g}\), \[\beta\equiv\frac{F_{\rm r}}{F_{\rm g}}=\frac{3L_{\odot}Q_{\rm pr}}{16\pi GM_{ \odot}c\rho_{\rm d}a} =C_{\beta}\frac{Q_{\rm pr}}{\rho_{\rm d}a}, \tag{15}\] with \(C_{\beta}=5.77\times 10^{-4}\,{\rm kg\,m^{-2}}\). Calculations show that silicate particles tend to have \(\beta<\)1 regardless of aggregate structure (_Silsbee and Draine_, 2016), while absorbing particles may have \(\beta>\)1 (_Kimura et al._, 2016). ### Poynting-Robertson drag Small particles in orbit about the Sun are influenced also by radiation pressure tangential to their motion (_Robertson_, 1937; _Wyatt and Whipple_, 1950). The resulting Poynting-Robertson force is given by \[F_{\rm PR}=\frac{a^{2}L_{\odot}}{4c^{2}}\sqrt{\frac{GM_{\odot}}{r_{\rm h}^{5}}}. \tag{16}\] The Poynting-Robertson effect causes mm-sized dust particles in the zodiacal cloud (Sec. 
3.4.6) to spiral into the Sun on timescales \(\tau_{\rm PR}\gtrsim 6\times 10^{5}\) yr (_Kasuga and Jewitt_, 2019). ### Electromagnetic forces A dust particle embedded in the cometary or solar wind plasma and interacting with solar ultraviolet radiation is subject to charging by electron and ion collection, and secondary electron and photoelectron emission. Over time, the dust particle will assume the potential at which the involved currents balance. This equilibrium potential depends on the properties of the plasma environment and the dust particle, such as composition and surface roughness (_Horanyi_, 1996). In interplanetary space, the dominant charging process is photoelectron emission, and typical dust potentials, \(U\), range between 0.5V and 14V (_Mukai_, 1981). A canonical value of \(U\)=5V is often used (e.g., _Sterken et al._, 2012; _Kramer et al._, 2014). Solar wind interaction with the plasma tail is discussed in Gotz et al. in this volume. For a given surface potential and volume, the shape dependence of a grain's charge can be described by the dimensionless parameter \(\kappa_{\rm e}>\)1 that is minimal for a sphere (\(\kappa_{\rm e}\)=1) and can reach values up to \(\kappa_{\rm e}\)=5 for fractal particles (_Auer et al._, 2007). The integrated charge of a grain can thus be described as \[q=4\pi\varepsilon_{0}a\kappa_{\rm e}U, \tag{17}\] where \(a\) is the radius of a volume-equivalent sphere, and \(\varepsilon_{0}\) is the electric permittivity in vacuum. In the presence of a magnetic field, \(\vec{B}\), a charged particle moving with velocity \(\vec{v}\) relative to the field is subject to the Lorentz force \[\vec{F}_{\rm L}=q(\vec{v}\times\vec{B}). \tag{18}\] Outside the immediate environment of the comet, the relevant field is the interplanetary magnetic field (IMF), and the velocity of the dust particle relative to this field can be approximated by the velocity of the solar wind (\(v_{\rm SW}\)=400-800 km/s radially outward from the Sun) that carries the magnetic field and is at least an order of magnitude faster than typical heliocentric velocities of comets. Splitting the IMF into a radial (\(B_{r}\)), an azimuthal (\(B_{\phi}\)) and a normal component (\(B_{\theta}\)=0), Eq. 18 reduces to (_Kramer et al._, 2014) \[F_{\rm L}=\pm qv_{\rm SW}B_{\phi}=qv_{\rm SW}B_{\phi,0}\frac{r_{0}}{r_{\rm h} }\cos\beta_{\rm hg}, \tag{19}\] where \(\beta_{\rm hg}\) is the heliographic latitude, \(r_{0}\)=1 au, and \(B_{\phi,0}\)= 3 nT is the azimuthal field strength at 1 au. Hence, the Lorentz force decreases with heliocentric distance as 1/\(r_{\rm h}\), less steeply than solar gravity and radiation pressure. For a given type of particle, the relative importance of the Lorentz force will increase with heliocentric distance (Fig. 5). _Kramer et al._ (2014) and _Hui et al._ (2019) reported that including the Lorentz force in a model of the dust motion significantly improved reproducing the orientation of the dust tails of comets Hale-Bopp and C/2010 U3 (Boattini) at heliocentric distances between 15 and 30 au. _Price et al._ (2019) find that changes in the appearance of striae in the tail of comet C/2006 P1 (McNaught) coincided with the comet crossing the heliospheric current sheet and infer that dust in the striae was charged and hence subject to the Lorentz force. Striae are linear features inside a comet's dust tail of unknown origin. They are only seen in comets with very high production rate, typically dynamically new comets. 
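Before returning to how striae may form, the scalings of Eqs. 13-19 can be illustrated numerically. The sketch below (our own variable names; canonical values \(U=5\) V, \(B_{\phi,0}=3\) nT, \(v_{\rm SW}=600\) km s\({}^{-1}\), \(Q_{\rm pr}=1\), \(\rho_{\rm d}=500\) kg m\({}^{-3}\), heliographic latitude 0) shows why the Lorentz force gains in relative importance far from the Sun: it falls off only as \(1/r_{\rm h}\), while radiation pressure falls off as \(1/r_{\rm h}^{2}\).

```python
import numpy as np

G, M_SUN, L_SUN, C = 6.674e-11, 1.989e30, 3.83e26, 3.0e8
AU, EPS0 = 1.496e11, 8.854e-12

def beta(a, rho_d=500.0, Q_pr=1.0):
    """Radiation-pressure-to-gravity ratio (Eq. 15)."""
    return 3.0 * L_SUN * Q_pr / (16.0 * np.pi * G * M_SUN * C * rho_d * a)

def f_radiation(a, r_h, Q_pr=1.0):                 # Eq. 13
    return Q_pr / C * L_SUN / (4.0 * np.pi * r_h**2) * np.pi * a**2

def f_poynting_robertson(a, r_h):                  # Eq. 16
    return a**2 * L_SUN / (4.0 * C**2) * np.sqrt(G * M_SUN / r_h**5)

def f_lorentz(a, r_h, U=5.0, kappa_e=1.0, v_sw=6.0e5, B_phi0=3.0e-9):  # Eqs. 17-19
    q = 4.0 * np.pi * EPS0 * a * kappa_e * U      # grain charge
    return q * v_sw * B_phi0 * (AU / r_h)

a = 10e-6                                          # 10-micron grain
print(f"beta(10 um) = {beta(a):.2f}")              # ~0.1 for rho_d = 500 kg/m^3
for r_h in (1 * AU, 10 * AU, 30 * AU):
    print(f"r_h = {r_h/AU:4.0f} au: F_rad = {f_radiation(a, r_h):.1e} N, "
          f"F_PR = {f_poynting_robertson(a, r_h):.1e} N, "
          f"F_L = {f_lorentz(a, r_h):.1e} N")
# The ratio F_L / F_rad grows linearly with r_h (and with decreasing grain size),
# consistent with the growing relative importance of the Lorentz force (Fig. 5).
```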
Alternative models of striae formation favour processes of instantaneous disintegration of, e.g., highly non-spherical grains (_Sekanina and Farrell_, 1980) or of large boulders fragmenting under outgassing-induced torques (_Steckloff and Jacobson_, 2016, see also Section 2.3). _Fulle et al._ (2015a) find that near the Rosetta spacecraft, fluffy dust particles of extremely low density may get charged by secondary electrons from the spacecraft and disintegrate, leading to the detection of particle swarms by the on-board dust instrument GIADA. In the vicinity of comet 67P, also charged water clusters smaller than 100 nm were detected (_Gombosi et al._, 2015). ### Electrostatic lofting Charged dust lofting and transport have been proposed to explain observations of airless bodies in the solar system, such as the lunar horizon glow (_Rennison and Criswell_, 1974), the "spokes" in Saturn's rings (_Morfill et al._, 1983), and dust ponds formed on asteroid Eros (_Colwell et al._, 2005). There is little published work on the relevance of this effect in the presence of outgassing (such as on an active comet), but _Nordheim et al._ (2015) modeled the electrostatic charging of the nucleus of comet 67P and showed that charged dust grains with radii \(<\)50 nm may be electrostatically ejected from the nucleus in situations of weak activity. The electrostatic force on a particle is \[F_{\rm ES}=qE, \tag{20}\] where \(E\) is the local electric field strength. Details on the particle charging equations are presented in _Zimmerman et al._ (2016) and _Wang et al._ (2016). We here summarize some aspects of dust charging on non-cometary airless bodies: Dust on the lunar surface is levitated due to electrostatic charge gradients resulting from uneven solar illumination. Because on km-sized asteroids gravity is much lower, dust can be electrostatically ejected from such bodies (_Lee_, 1996). Recent asteroid missions have observed rocky surfaces on asteroids Bennu and Ryugu, indicating a lack of regolith (_Jaumann et al._, 2019; _Lauretta et al._, 2019). These observations and the particle ejection events observed on Bennu may be partly caused by electrostatic dust lofting and escape (_Hartzell et al._, 2022; _Nichols and Scheeres_, 2022). ### Inter-particle cohesion Inter-molecular (e.g., Van der Waals) forces between the surfaces of neighbouring particles or grains are held responsible for the internal strength of dust aggregates, agglomerates, chunks and surfaces. The precise form and magnitude of these forces depends on the structure and composition of the material, which are not well known. For a lunar-type regolith surface with average grain size \(a\), _Sanchez and Scheeres_ (2014) derive a strength of \[F_{\rm reg}=C_{\rm reg}C_{\rm\#}\phi/a, \tag{21}\] where \(C_{\rm reg}=4.5\times 10^{-3}\,{\rm Nm}^{-1}\). \(C_{\rm\#}\) is the number of neighbouring particles that a given grain touches (the coordination number), and \(\phi\) is the volume filling factor of the dust layer, i.e. the fraction of the volume that is filled by matter. For a model surface composed of agglomerates of aggregates of dust grains, and using empirical relationships based on laboratory measurements, _Skorov and Blum_ (2012) deduce the following expression for the tensile strength of a dust surface: \[F_{\rm agg}=C_{\rm agg}\ \phi\ \left(\frac{a}{a_{0}}\right)^{-2/3}, \tag{22}\] with \(C_{\rm agg}\) = 1.6 Pa, and \(a_{0}\) = 1mm. Eqs. 
21 and 22 render values that differ by two orders of magnitude at \(a\)=100 \(\mu\)m, which illustrates the sensitive dependence on model assumptions and the lack of well-constrained parameters. ### Relative importance of forces The relative importance of the various forces discussed in Sections 2.1 - 2.8 depends mainly on the particle size and the distances from the Sun and comet, but also on the dust and gas properties. Figures 3 - 5 illustrate the key dependencies. Nucleus gravity is always several orders of magnitude weaker than solar gravity, but since both the nucleus and the dust are subject to the solar gravitational acceleration, the relevant quantity with which to compare the nucleus gravitational force is the tidal force. The nucleus distance where tidal force and nucleus gravitational force balance is given by the Hill radius. The Poynting-Robertson effect and the Lorentz force are weakest and therefore typically not considered in calculations of cometary dust dynamics. However, far from the Sun, the Lorentz force on small particles can become comparable to radiation pressure. If present, outgassing-induced "rocket" force tends to be stronger than solar radiation pressure. Inter-particle cohesion is the only force that decreases with increasing particle size. With the parameters and in the situations shown here, it surpasses all other forces, including gas drag, for particles smaller than 100 \(\mu\)m, which raises the currently unresolved question of how such particles can be lifted from the comet surface at all (see Section 3.1.3). Once lifted, the cohesion within a given particle does not directly affect its dynamics (unless the particle fragments). Hence the cohesion is indicated by dashed lines in Figure 3, and not shown in Figures 4 and 5. Once lifted from the nucleus surface, the initially dominant force on dust particles is gas drag (Eq. 8). It decreases roughly quadratically with increasing nucleus distance, as the gas dilutes, and at some distance (that depends on particle size and gas production rate), becomes weaker than solar gravity. We refer to this near-surface regime as the acceleration zone (Section 3.2) and to the dust velocity at its upper boundary as the terminal velocity. Once the influences of nucleus gravity and gas drag have diminished, the motion of the dust in the outer coma, tail and trail is largely driven by solar gravity and radiation pressure, and by the terminal velocity the dust acquired from the gas drag. In the outer coma (Section 3.3), dust that was initially ejected towards the Sun from the sunlit surface where water sublimation is strongest, will reverse its direction of motion under the influence of solar radiation pressure, such that eventually almost all dust is driven into the curved tail stretching in the anti-solar direction and along the negative heliocentric velocity vector of the comet, hence outside its orbit (Section 3.4). Generally, if particles contain sublimating ice, the forces on them can implicitly be affected (and become time-dependent) by their changing mass, cross section, and temperature. The same applies for fragmentation, spin-changes and changes of temperature. Often, a maximum liftable grain size is derived from the balance of gravity (Eq. 1) and gas pressure at the surface (Eq. 8).
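This balance can be evaluated directly. The sketch below (anticipating Eq. 23 in the next paragraph, and treating the local gas production rate per unit area as an assumed input, since it is not well constrained) gives the largest liftable grain radius for a 67P-like nucleus; function and variable names are ours.

```python
import numpy as np

G = 6.674e-11

def max_liftable_radius(Q_g, R_n=2e3, rho_n=500.0, rho_d=500.0,
                        C_D=2.0, m_g=3.0e-26, v_g=700.0):
    """Largest spherical grain for which surface gas drag (Eq. 8, with v_d = 0)
    exceeds nucleus gravity (Eqs. 1-2); Q_g is the local gas production rate
    in molecules m^-2 s^-1 (an assumed input)."""
    # Drag at rest:  0.5 * C_D * m_g * Q_g * v_g * pi * a^2
    # Gravity:       (4/3) pi a^3 rho_d * G * (4/3) pi R_n^3 rho_n / R_n^2
    return 9.0 * C_D * m_g * Q_g * v_g / (32.0 * np.pi * G * rho_d * rho_n * R_n)

for Q_g in (1e20, 1e21, 1e22):            # assumed local production rates
    print(f"Q_g = {Q_g:.0e} m^-2 s^-1 -> a_max ~ {max_liftable_radius(Q_g):.2f} m")
# Roughly 1 cm, 10 cm and 1 m respectively: the threshold scales linearly with
# the local Q_g, so the metre-scale value quoted below depends on what is
# assumed for the surface production rate.
```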
This balance can be formulated more generally as a minimum liftable cross-section-to-mass ratio: \[\left[\frac{\sigma_{\rm d}}{m_{\rm d}}\right]_{\rm min}=\frac{8\pi G}{3C_{\rm D}}\frac{\rho_{\rm N}R_{\rm N}}{m_{\rm g}v_{\rm g}Q_{\rm g}}, \tag{23}\] where \(Q_{\rm g}=n_{\rm g}v_{\rm g}\) is the surface gas production rate in molecules per unit time and area. Inserting the values from Table 1 and assuming spherical dust particles, the maximum radius liftable by water vapour alone at 1 au is about 1 m. Since gas production rates can be highly variable and hard to measure on a local scale, the actual maximum liftable grain size for a given situation is difficult to predict. ## 3 Dynamical regimes ### Dust emission from the surface Due to its low optical depth, the surface brightness of dust in the cometary coma is nearly always lower than that of the illuminated surface of the nucleus. With remote sensing methods that measure scattered sunlight or thermal radiation, dust can only be detected against the dark backgrounds of empty space or shadowed surface. It is, therefore, not straightforward to identify the source regions of dust on the surface, even when resolved images of the dust coma obtained by cameras on spacecraft show a considerable fine structure near the limb (cf. Section 3.2.3). Integrated gas production rates indicate that most comets emit only a small fraction of the gas that would be expected from a sublimating surface of pure ice. An exception to this are the so-called hyperactive comets (e.g., 46P/Wirtanen, 103P/Hartley 2, _A'Hearn et al._ (2011)), in which the global water production is higher than can be explained from pure surface sublimation. In the following, we describe some of the most common methods used to constrain the distribution of activity across a cometary surface and subsequently outline their findings. A review of local manifestations of cometary activity can be found in _Vincent et al._ (2019). \begin{tabular}{l l r} \hline \hline Quantity & Symbol & Value \\ \hline Nucleus radius & \(R_{\rm n}\) & 2 km \\ Nucleus density & \(\rho_{\rm n}\) & 500 kg m\({}^{-3}\) \\ Dust bulk density & \(\rho_{\rm d}\) & 500 kg m\({}^{-3}\) \\ Drag coefficient & \(C_{\rm D}\) & 2 \\ Global water production rate\({}^{\rm a}\) & \(Q_{\rm H_{2}O}(r_{\rm h})\) & 4\(\times 10^{34}\) molecules s\({}^{-1}\) (\(r_{\rm h}/1{\rm au}\))\({}^{-15}\) \\ Global CO\({}_{2}\) production rate\({}^{\rm b}\) & \(Q_{\rm CO_{2}}(r_{\rm h})\) & 4\(\times 10^{25}\) molecules s\({}^{-1}\) (\(r_{\rm h}/1{\rm au}\))\({}^{-2}\) \\ Gas speed in coma & \(v_{\rm g}\) & 700 m s\({}^{-1}\) \\ Gas speed (from icy chunks) & \(v_{\rm h}\) & 500 m s\({}^{-1}\) \\ Dust speed as a function of radius \(a\)\({}^{\rm c}\) & \(v_{\rm d}(a)\) & 300 m s\({}^{-1}\) \(\sqrt{1\,\mu{\rm m}/a}\) \\ Ice fraction in dust & \(f_{\rm ice}\) & 0.01 \\ Radiation pressure coefficient & \(Q_{\rm pr}\) & 1 \\ Dust potential & \(U\) & 5 V \\ Solar wind speed & \(v_{\rm SW}\) & 600 km s\({}^{-1}\) \\ Azimuthal IMF strength at 1 au & \(B_{\phi,0}\) & 3 nT \\ Heliographic latitude & \(\beta_{\rm hg}\) & 0\({}^{\circ}\) \\ Volume filling factor & \(\phi\) & 0.5 \\ \hline \end{tabular} Table 1: Representative parameter values adopted for the estimates in this section and in Figs. 3 - 5. Figure 4: Same as Fig. 3 for fixed particle sizes (left: 10\(\mu\)m, right: 1mm), 1 au from the Sun, and variable nucleocentric distance. The Hill radius (where tidal force and nucleus gravitational force balance) is indicated by a vertical line. Figure 5: Same as Fig.
3 for fixed particle sizes (left: 10\(\mu\)m, right: 1mm), nucleocentric distance of 100 km (hence outside the zone of acceleration by gas drag) and variable heliocentric distance. The gas drag force drops abruptly near 4 au, where we assume a steep drop of the water production rate. For the same reason, we assume that rocket force from intrinsic outgassing of water ice ceases near 4 au. Rocket force is again represented by a dashed line to indicate that both types of particles are too small to retain ice on dynamically significant timescales if mixed with dust. #### 3.1.1 Methods to locate activity sources a) Inversion/triangulation. - If a bright filament was observed at least twice from different perspectives (ideally 10\({}^{\circ}\)-30\({}^{\circ}\) of sub-observer latitude/longitude, _Vincent et al._ 2016b), its three-dimensional orientation and source point on the surface can be identified by triangulation: for each image, the projected central line of the filament and the camera position span a plane in three-dimensional space. For two images, the intersection line of the two planes describes the filament axis in three dimensions, and the intersection point of this axis with the nucleus surface (from a shape model) represents the source location (Fig. 6). If the filament was observed more than twice, all planes should intersect in the same line within the accuracy limits. This technique is called "direct inversion" by _Vincent et al._ (2016b) and relies on the assumptions (1) that the same coma structure can be identified in several images, (2) that at least within a certain distance near the surface, the filament can be described by a straight line, and (3) that its three-dimensional orientation does not change during the time covered by the observations. Without making the first two assumptions, source locations can still be identified by "blind inversion" as long as several images are available. In this approach, the intersection lines of the jet-observer planes with the nucleus surface are calculated for each filament in each image. When combining the surface intersection lines from the different images, lines corresponding to the same filament will intersect in the same point on the surface within the achievable accuracy. Hence the points having multiple intersections are interpreted as the source points of filaments (_Vincent et al._ 2016b). In the presence of many filaments, the results can also be a source density map rather than a map of individual sources. At comet 67P, this inversion technique has been used to trace the origins of both diurnally repeating coma structures by e.g. _Vincent et al._ (2016b); _Shi et al._ (2016); _Lai et al._ (2019) and of irregular events ("outbursts") by _Vincent et al._ (2016a). This technique has also been applied to ground-based coma images (e.g., _Farnham et al._ 2007; _Vincent et al._ 2010, 2013, see Section 3.2.3). If the lower part of a filament is brighter than the background nucleus surface, the source points can also be identified directly (e.g., _Agarwal et al._ 2017; _Fornasier et al._ 2019b). b) Backtracing of in situ data. - To constrain the distribution of gas or dust sources across the nucleus surface from in situ measurements aboard a spacecraft, the measured dust or gas density is often used as a proxy for the activity at the sub-spacecraft longitude and latitude at the time of observation (e.g., _Hoang et al._ 2017, 2019; _Della Corte et al._ 2015, 2016).
The underlying assumption is that the activity changes on timescales long compared to the traveling time of the material from the surface to the detector, and that this motion is radial. Since this assumption cannot a priori be taken as justified, in particular for the more slowly moving dust, some authors account for the travelling time by including the near-surface acceleration zone when linking the detection coordinates to the ejection point on the surface (_Longobardo et al._ 2019, 2020). These approaches help to understand broad regional and diurnal variations of activity but do not generally provide the spatial resolution to for example link coma material to specific landmarks on the surface. c) Forward modelling of the motion of dust embedded in the gas flow. - The source regions of coma dust can be constrained by forward modelling the motion of dust embedded in the gas flow field and iteratively fitting the predicted dust distribution to measurements. Models of the gas dynamics typically either follow a fluid dynamics approach or use the Direct Simulation Monte Carlo (DSMC) method to describe the motion of individual molecules (Marschall et al. in this book). The comparison of the modelled dust distribution to remote sensing observations requires additional assumptions about the light scattering and/or thermal emission properties of the dust (Kolokolova et al. in this book), and the projection of the three-dimensional dust distribution onto the image plane by line-of-sight integration. One boundary condition of all coma models is the distribution of gas and dust activity across the cometary surface, which is why they are discussed in the present context. The surface activity distribution needs to be defined such that it enables an optimal reproduction of the data in question. Generally, the obtained solution will not be unique, especially if only the local gas density is used to constrain the models (_Marschall et al._ 2020a), but fitting the same model to multiple data sets can reduce the degeneracy. Forward modelling approaches have been used both to describe temporally and spatially confined phenomena and to understand the global distribution of activity. An example of the former is the study of the influence of topography, local time and illumination conditions on filament structures emanating from the terminator region in _Shi et al._ (2018). Global models have for example been used to fit data from the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA), from the Optical, Spectroscopic, and Infrared Remote Imaging System (OSIRIS), and from the Visible and Infrared Thermal Imaging Spectrometer (VIRTIS), and to combinations of such data sets (for example _Marschall et al._ 2016, 2017; _Fougere et al._ 2016a,b; _Zakharov et al._ 2018a; _Combi et al._ 2020; _Davidsson et al._ 2022). _Kramer et al._ (2017) and _Lauter et al._ (2019, 2020) instead follow an inverse modelling approach, in which the gas emission from each surface element of the shape model is a free parameter, the global distribution of gas in the coma is calculated from the surface production rates, and is probed by the measurements (as a function of space and time) of the ROSINA instrument. The vector of the surface emission rates is connected to the vector of measurements through a matrix and is optimized in order to minimize the deviation between model and measurements. The obtained vector of surface production rates describes the geographical distribution of activity. 
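In spirit, this is a linear inverse problem: the measured local densities are (approximately) linear in the unknown per-facet surface production rates through a geometry kernel, and the rates can be recovered by non-negative least squares. The following is a deliberately simplified toy sketch of that idea; the spherical geometry, the cosine/inverse-square kernel and all names are invented for illustration and do not reproduce the actual models of the studies cited above.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Toy geometry: facet centres/normals on a unit-sphere "nucleus", plus
# measurement points on a spacecraft-like shell at 3 nucleus radii.
n_facets, n_meas = 100, 300
normals = rng.normal(size=(n_facets, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
facets = normals.copy()
probes = rng.normal(size=(n_meas, 3))
probes = 3.0 * probes / np.linalg.norm(probes, axis=1, keepdims=True)

# Toy forward kernel: each facet emits into its outward hemisphere; its
# contribution to the density at a probe falls off with distance squared and
# with the cosine of the angle between facet normal and facet-to-probe direction.
d = probes[:, None, :] - facets[None, :, :]        # (n_meas, n_facets, 3)
dist = np.linalg.norm(d, axis=2)
cosang = np.einsum('ijk,jk->ij', d, normals) / dist
A = np.clip(cosang, 0.0, None) / (4.0 * np.pi * dist**2)

q_true = np.where(normals[:, 2] > 0.3, 5.0, 0.5)   # "active" northern cap
m = A @ q_true * (1.0 + 0.05 * rng.normal(size=n_meas))   # noisy measurements

q_est, _ = nnls(A, m)                              # non-negative least squares
print("correlation, true vs recovered rates:", np.corrcoef(q_true, q_est)[0, 1])
```

With this idealized kernel the inversion is well behaved; with realistic gas kernels and sparse spacecraft coverage the problem is far more degenerate, which is the non-uniqueness issue noted above.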
d) Torques and non-gravitational forces. - An additional means of constraining the gas activity distribution across the surface arises from the reaction force that the sublimating gas exerts on the nucleus, similar to a rocket engine. The component of the force crossing the nucleus center of mass leads to an acceleration of the heliocentric orbit, while the component perpendicular to the rotation axis creates a torque that changes the rotation speed and axis orientation. For 67P, a first rough prediction of the change of spin rate during the perihelion passage was made by _Keller et al._ (2015), assuming the local activity to be driven by the illumination conditions and energy balance only. Models aiming to fit simultaneously the rotation state and heliocentric orbit of 67P require a more complex distribution of surface activity (_Kramer et al._, 2019; _Kramer and Lauter_, 2019; _Attree et al._, 2019, 2023), but a single model to fit both constraints has not yet been identified. A highly asymmetric outgassing has been invoked to explain a rapid decrease in rotation rate of comet 41P/Tuttle-Giacobini-Kresak (_Bodewits et al._, 2018). e) Surface changes. - At both comets 9P and 67P, changes on the surface were seen when the same spot was observed multiple times. In particular the Rosetta mission with its two-year coverage of almost the entire surface offered the possibility to study such changes. Surface changes include cliff collapses, receding scarps, formation and extension of fractures and cavities, displacement of boulders, changes in dust mantle thickness and the temporary appearance of bright spots (Pajola et al. and Filacchione et al. in this book). Some of these events, such as cliff collapses and the new exposure of bright surfaces, have been associated to transient dust emission with reasonable certainty (_Pajola et al._, 2017; _Agarwal et al._, 2017). For fractures and cavities, models have been proposed that connect their formation or deepening to the sudden emission of dust (e.g., _Vincent et al._, 2015; _Skorov et al._, 2016b), but observational proof of these models has not yet been found. The connection of moving boulders and scarps and of changes in the thickness of a local dust layer to the emission of dust remains likely but also unproven (e.g., _Thomas et al._, 2013; _El-Maarry et al._, 2017; _Fornasier et al._, 2019a). #### 3.1.2 "Regular" and "irregular" activity The appearance of the dust coma and the pattern of filament structures changes with time during a rotation of the comet, but quite accurately repeats with the diurnal cycle (_Vincent et al._, 2016a). This, together with the observation that most dust is emitted from the illuminated hemisphere (_Fink et al._, 2016; _Gerig et al._, 2020) and that the dust emission follows the subsolar latitude on seasonal timescales (_Vincent et al._, 2016b; _Della Corte et al._, 2016; _Lai et al._, 2019) indicates that direct solar irradiation is the prime driver of the diurnally repeating ("regular") activity. Insolation is, however, not the only factor determining the strength of local dust and gas activity, because models assuming that a constant fraction of solar energy input is consumed by water ice sublimation, and that the dust production is proportional to the outgassing rate, fail to reproduce the in situ measurements of ROSINA, the brightness pattern of coma dust (_Marschall et al._, 2016), and the non-gravitational forces and torques (_Attree et al._, 2019; _Kramer et al._, 2019). 
Various patterns of systematic activity enhancements have been proposed, such as enhanced activity from sinkholes (_Vincent et al._, 2015; _Prialnik and Sierks_, 2017), cliffs (_Vincent et al._, 2016b; _Marschall et al._, 2017), fractures (_Hofner et al._, 2017), and newly illuminated, frost-covered surfaces near receding shadows (_Prialnik et al._, 2008; _De Sanctis et al._, 2015; _Fornasier et al._, 2016; _Shi et al._, 2018). For none of these location types has the relative contribution to the global activity been firmly established. _Vincent et al._ (2019) point out that generally, activity from close-to-vertical surfaces (having high gravitational slope, i.e. a significant angle between the local surface normal and the negative gravitational acceleration vector) avoids being quenched by a dust mantle, which is an unresolved obstacle to explaining activity from surfaces with low gravitational slopes. Figure 6: Example of how the source region of a jet can be inferred from images obtained from at least two different perspectives (left and center panels). Each of these two observations renders a plane that contains the line of sight, the jet and its source. The right panel shows how the source region is identified as the point where the intersection line of the two planes crosses the nucleus surface. Image credit: Ian Lai, A&A, 630, A17, p.3, 2019, reproduced with permission © ESO. It is further possible that the activity from "pristine" surfaces (cf. Pajola et al. in this book) differs from that originating from terrains that are covered in debris that fell back from the coma. This fall-back material did not reach escape speed when accelerated by the gas and re-impacted at locations where the gas pressure was (at least seasonally) sufficiently low (_Thomas et al._, 2015; _Pajola et al._, 2017). Under conditions of higher seasonal irradiation, this material can again be lifted. This effect has been invoked to explain the strong activity from the neck region Hapi on comet 67P during autumn 2014 that was reported e.g. by _Lin et al._ (2015); _Pajola et al._ (2019); _Combi et al._ (2020). At this time, the comet's approach to the Sun near 3 au led to increased sublimation of water ice, and Hapi, located at high northern latitudes, was in local summer. A significant fraction of the dust cover removed from Hapi during this epoch was later re-supplied, during northern winter in 2015-2016 (_Cambianica et al._, 2020). A large-scale trend of enhanced activity from above-average bright and blueish surfaces has been attributed to enrichment of these surfaces with water ice (_Ciarniello et al._, 2015; _Filacchione et al._, 2016; _Fornasier et al._, 2016; _Filacchione et al._, 2020). On a smaller scale, a direct link between bright, water-ice rich spots on the surface (e.g., _Pommerol et al._, 2015; _Barucci et al._, 2016) and local activity has not been established. In addition to the diurnally repeating activity, many comets show sudden, short-lived events of increased dust emission that are often called "outbursts". Such irregular activity has been observed on a wide scale of magnitudes, ranging from small, local-scale dust plumes (e.g., _Agarwal et al._, 2017) to global events easily detectable with Earth-based telescopes (e.g., _Lin et al._, 2009). The processes triggering such events are not well understood, but a wide range of models have been proposed, and the Deep Impact and Rosetta missions have made it possible to study at which locations small outbursts occur.
Comet 67P has shown irregular activity during the whole comet phase of the Rosetta mission, from April 2014 (_Tubiana et al._, 2015) to September 2016 (_Altwegg et al._, 2017). The vast majority of these events were not detected from Earth, with the possible exception of one near perihelion (_Boehnhardt et al._, 2016). It has been suggested that outbursts occurred mainly near morphological boundaries and cliffs (_Vincent et al._, 2016; _Fornasier et al._, 2019), and some were also observed near pits (_Tenishev et al._, 2016) and circular features in the Imhotep region (_Knollenberg et al._, 2016; _Rinaldi et al._, 2018; _Agarwal et al._, 2017). One event has been directly linked to the break-off of a cliff face (_Pajola et al._, 2017). Temporal concentrations of outburst events have been reported for early morning and local afternoon (_Vincent et al._, 2016), but outbursts have also been observed from the deep nightside (_Knollenberg et al._, 2016; _Pajola et al._, 2017; _Rinaldi et al._, 2019). The relative contribution of irregular events to the total dust production rate is difficult to estimate, because it depends both on a complete knowledge of their frequency and on the amount of material emitted globally and by outburst events. Estimates indicate that individual events contribute no more than a few percent to the global dust production (_Tenishev et al._, 2016; _Lin et al._, 2017), such that the major part of the dust would be released by the diurnally repeating activity. #### 3.1.3 Processes driving the dust lifting Comets or their precursor planetesimals have resided for several billion years in the cold outer solar system: the Jupiter Family Comets in the Transneptunian disc at about 30K, and the Long Period and Dynamically New comets in the, even colder, Oort Cloud at the limits of the Sun's gravitational influence. When an object from one of these reservoirs enters on a trajectory through the region of the planets, the top layers of the surface get heated and begin to lose their volatile ices such as, initially, CO and N\({}_{2}\) (_Delsemme_, 1982; _Lauter et al._, 2019), then CO\({}_{2}\), and, finally, beginning at roughly 5 au from the Sun, H\({}_{2}\)O. Inside 3 au, water ice sublimation likely becomes the dominant driver of activity, but the more volatile ices like CO\({}_{2}\) keep playing an important role in activity (e.g., _A'Hearn et al._, 2011; _Combi et al._, 2020). The energy input from solar radiation is partially re-radiated to space and partially absorbed by the porous surface material. The absorbed energy is conducted and radiated to greater depths, where it can cause phase changes in the ices and the release of gas. This gas percolates through the porous material, re-condenses on colder surfaces or escapes eventually through the surface. Typically, there is a positive radial temperature gradient: the upper layers are warmer than those below. During dusk or in shadowed regions, the temperature gradient may be locally inverted leading to re-condensation of water in the dust mantle. This frost can help to start activity in the morning (_De Sanctis et al._, 2015; _Fornasier et al._, 2016; _Shi et al._, 2018). The global water production of most comets is much lower than expected from freely sublimating ice surfaces. This observation led _Kuehrt and Keller_ (1994) to conclude that the refractory component in the nucleus must be sufficiently abundant to allow a refractory mantle, depleted of volatiles, to form on the surface and quench activity over wide areas. 
This model implies that the cohesive Van der Waals forces stabilising the refractory material exceed the gas pressure at the surface by many orders of magnitude, such that the detachment of dust particles from this refractory matrix remains unexplained. _Kuehrt and Keller_ (1994) suggest that a heterogeneous surface composition might lead to activity of a small fraction of the surface that would have to be considerably enriched in ice. Laboratory experiments, too, demonstrated that insolation of a porous ice-dust mixture leads to sublimation of the ice, and that part of the gas and dust leave the surface (_Kölzer et al._ 1995). The non-lifted part of the freed dust builds up a porous dust mantle of about 10 Pa failure stress (_Grün et al._ 1993) that eventually quenches the gas emission. This may be similar on a comet. Recent estimates of the cohesive strength of cometary material (a few Pa on metre-scales (_Attree et al._ 2018) to kPa inside a dust aggregate (_Hornung et al._ 2016)) and of laboratory analog materials (4-20 kPa, _Gundlach et al._ 2018) are indeed larger than the sublimation pressure of water ice in the relevant temperature range (1 Pa at 210K and 10 kPa at 310K). _Gundlach et al._ (2015) point out that overcoming cohesion becomes easier with growing size of the particles constituting the cometary surface, and predict that the size of ejected particles should grow with heliocentric distance. But their given size ranges do not agree well with observed cometary activity, and the model does not explain the activity of very distant comets such as C/2017 K2, whose activity is thought to have begun near the orbit of Neptune (_Jewitt et al._ 2021). Recent models have tried to overcome the cohesion problem by ascribing a hierarchical structure to the surface, where the material is organized in "pebbles" or aggregates that in turn consist of refractory and optionally H\({}_{2}\)O and CO\({}_{2}\) ice particles (which in themselves may have a sub-structure and be composed of "grains"). On the other hand, the "pebbles" are clumped into "chunks" (_Gundlach et al._ 2020; _Fulle et al._ 2020). The ice content of the pebbles would decrease with time and increase with depth. The porosity of the architecture would prevent the formation of an impenetrable dust mantle but increase sub-surface pressure, while the small number of contact points between pebbles would make it easier to overcome their cohesion. Alternative or additional processes to counteract interparticle cohesion could be related to thermal fracturing or electrostatic charging (_Jewitt et al._ 2019). Reaching a consolidated understanding and consensus on the processes involved in triggering and maintaining the emission of dust from cometary surfaces remains one of the open topics of the field, and is hampered by a lack of constraining data. Laboratory experiments addressing these questions are presented in Poch et al., and the interior structure of comets is discussed in Guilbert-Lepoutre et al., both in this book. ### Acceleration zone The dust acceleration region in the general coma extends from the nucleus surface to roughly ten nucleus radii. At that distance molecule-dust collisions become rare and the dust flow decouples from the gas (Fig. 1). The brightness in the dust coma is often used as a proxy for the density of dust. 
Indeed, when we have an optically thin coma the reflectance, \(R\), of dust in a given pixel at a certain wavelength, \(\lambda\), and at a certain scattering angle, \(\Phi\), is \[R(\lambda,\Phi)=\int_{a_{\rm min}}^{a_{\rm max}}n_{\rm col}(a)\sigma_{\rm d}(a )q_{\rm eff}(a)\frac{p(a,\lambda,\Phi)}{4\pi}da, \tag{24}\] where the smallest and largest sizes are given by \(a_{\rm min}\), and \(a_{\rm max}\), the dust column density along the line of sight is \(n_{\rm col}\), and the geometric dust cross-section is \(\sigma_{\rm d}\). The scattering efficiency, \(q_{\rm eff}\), and phase function, \(p\), depend on the material properties of the dust. Eq. (24) illustrates that the brightness of the dust cannot, in general, be taken as a proxy for the dust density. There might be a dense part of the coma with particles that have a very low scattering efficiency and thus a low reflectance compared to an area with a smaller number but highly efficiently scattering particles which appear bright. To understand the structure of the dust coma in the acceleration zone, we will first discuss the radial outflow structure and then go into more detail about how 3D jet-like structures become manifest in the acceleration region. #### 3.2.1 The extent of the acceleration region To understand the radial structure of the dust coma let us first consider a simplified coma where the dust is not accelerated but rather flows out radially from the surface with a constant speed. In this case, the local dust densities decrease with the inverse square of the distance, \(r\). This is due to the fact that mass flux conservation (\(n_{\rm d}v_{\rm d}A={\rm const.}\), where \(A\) is the surface area) through closed surfaces around the nucleus is maintained. If one thinks of these surfaces as spherical shells then their surface areas scale with \(A\sim r^{2}\). Because the speed, \(v_{\rm d}\), is constant in this example, the number density, \(n_{\rm d}\), therefore needs to scale with \(1/r^{2}\) for the flux to be constant. The dust brightness measured by a remote observer is proportional to the column, not the local number, density. We can use the above behavior and find that the column density will scale as \(n_{\rm col}\sim 1/\rho\), where \(\rho\) is now the projected distance to the nucleus in the image plane, rather than \(r\). In other words, the column density, and by extension the brightness, \(R\), multiplied by \(\rho\) is constant for free radial outflow of dust. This is also the basis for the commonly used quantity _Af\({}_{\rho}\)_(_A'Hearn et al._ 1984) which - in the case of free radial outflow - is independent of where in the coma it is being measured. The quantity _Af_ stands for the product of albedo and filling factor (optical depth), and is therefore equivalent to \(R\). Here, being interested in the acceleration region, we want to study deviations from a constant value of the product \(R\rho\). Deviation from a constant \(R\rho\) can be caused by a multitude of processes. Sublimation and fragmentation of particles can either in- or decrease \(R\rho\) with increasing \(\rho\) depending on whether the resulting particles are more or less efficient scatterers (Fig. 7). 
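The free-radial-outflow baseline described above can be checked with a minimal numerical sketch (added here for illustration; the nucleus radius and density normalization are arbitrary placeholders): integrating a \(1/r^{2}\) local density along lines of sight reproduces a column density proportional to \(1/\rho\), so that the product of column density (and hence \(R\), for fixed scattering properties) and \(\rho\) is constant.

```python
import numpy as np

# Free radial outflow at constant speed: local dust density n(r) = n0 * (R_n / r)**2.
# The column density along a line of sight at projected distance rho is then
#   N_col(rho) = integral of n(sqrt(rho^2 + s^2)) ds = pi * n0 * R_n**2 / rho,
# so N_col * rho (and R * rho for fixed scattering properties) is independent of rho.

R_n = 2.0e3   # nucleus radius [m]; illustrative value only
n0 = 1.0      # dust number density at the surface [arbitrary units]

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def column_density(rho):
    # symmetric integral; grid scaled to rho so the peak of the integrand is resolved
    s = np.linspace(0.0, 2.0e4 * rho, 1_000_001)
    n = n0 * (R_n / np.sqrt(rho**2 + s**2)) ** 2
    return 2.0 * trapezoid(n, s)

for rho in (10 * R_n, 100 * R_n, 1000 * R_n):
    print(f"rho = {rho / R_n:6.0f} R_n   N_col * rho = {column_density(rho) * rho:.4e}"
          f"   (analytic pi*n0*R_n^2 = {np.pi * n0 * R_n**2:.4e})")
```

The discussion of the individual processes that cause departures from this constant product, summarized in Fig. 7, continues below.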
Deviations from a point source nucleus will decrease \(R\rho\), optical depth effects will increase \(R\rho\), and gravitationally bound particles can increase or decrease \(R\rho\) depending on the type of orbit they are on (Fig. 7).

Figure 7: Schematic of how different processes alter the behavior of the product \(R\rho\) of dust coma reflectance, \(R\), and projected distance to nucleus center, \(\rho\), as a function of the distance to the nucleus, \(r\). Deviations from a point source and non-radial flow (top left) will reduce \(R\rho\) on a length scale of \(\sim 5R_{\rm n}\), where \(R_{\rm n}\) is the nucleus radius. The sublimation of particles (top center) will also reduce \(R\rho\) but the length scale of this effect will depend on the properties of the icy particles (their ice content, size), their ejection speed and the heliocentric distance. The acceleration of particles (top right) will decrease \(R\rho\) and converge to a constant on a length scale of \(\sim 10R_{\rm n}\). Both fragmentation (bottom left) and optical depth effects (bottom center) will increase \(R\rho\). The scale on which these effects act depends on the details of the processes (i.e., the fragmentation rate). Finally, gravitationally bound particles (bottom right) will increase \(R\rho\) if they are on bound orbits while decreasing it when on ballistic trajectories. The scale for these gravitational effects is the Hill sphere. The figure was adapted from _Gerig et al._ (2018).

_Thomas and Keller_ (1989) found that the near-nucleus environment of comet 1P/Halley is dominated by optical depth effects. They observed a behavior similar to that shown in the bottom center panel of Fig. 7. Finally, the acceleration of particles will decrease \(R\rho\) (top right panel of Fig. 7) because as the speed of the particles increases, \(n_{\rm d}v_{\rm d}\) decreases more rapidly than with the \(1/r^{2}\) profile described above for free radial outflow. We can, therefore, use \(R\rho\) to determine at which point the dust flow transitions from an accelerated to a free radial outflow, corresponding to the outer edge of the acceleration region. As this is an asymptotic process there is no hard boundary. Theoretical calculations (_Zakharov et al._, 2018) showed the dust particles reach \(90\%\) of their terminal speed at around six nucleus radii. We would therefore stipulate that free radial outflow begins around ten nucleus radii. This is in very good agreement with observations at and numerical simulations for 67P, where _Gerig et al._ (2018) found the transition to a constant \(R\rho\) at around eleven nucleus radii. _Finson and Probstein_ (1968) derived an upper limit of 20 nucleus radii for the acceleration zone. #### 3.2.2 Terminal velocity and transition to the outer coma As discussed in the previous section, the acceleration region extends to roughly 10 nucleus radii. This is the interface to the part of the coma where the dust motion is controlled by solar radiation pressure and gravity rather than nucleus gravity and gas drag (Fig. 1). For numerical simulations, these two force regimes require different algorithmic approaches, such that, in practice, the two parts of the coma are most often treated separately. Strictly speaking, there is a transition region (Fig. 1) where nucleus gravity still plays a role (the Hill sphere) and large particles that do not reach escape speed will return to the surface. 
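The few-to-ten-nucleus-radii scale of the acceleration region quoted above can be recovered from a rough, assumption-laden sketch (not the numerical treatment of _Zakharov et al._ 2018): if the gas density falls as \(1/r^{2}\) and the gas speed and drag coefficient are taken as constant (the same simplification used to derive Eq. 26 below), the dust speed follows \(v_{\rm d}(r)=v_{\infty}\sqrt{1-R_{\rm n}/r}\), i.e. about 91% of the terminal value at \(6\,R_{\rm n}\) and about 95% at \(10\,R_{\rm n}\).

```python
import numpy as np

# Toy drag model: d(v_d^2)/dr = 2*A/r**2 (gas density ~ 1/r^2, constant gas speed and
# drag coefficient), which integrates to v_d(r) = v_inf * sqrt(1 - R_n/r) for v_d(R_n)=0,
# with terminal speed v_inf = sqrt(2*A/R_n).  Units are arbitrary (R_n = 1, v_inf = 1).

R_n, A = 1.0, 0.5
v_inf = np.sqrt(2.0 * A / R_n)

for x in (2, 6, 10, 20):
    print(f"r = {x:2d} R_n : v_d/v_inf = {np.sqrt(1.0 - 1.0 / x):.3f}")

# Cross-check by straightforward numerical integration of d(v^2)/dr out to 10 R_n.
r, w, dr = R_n, 0.0, 1.0e-4    # w = v_d^2
while r < 10.0 * R_n:
    w += 2.0 * A / r**2 * dr
    r += dr
print(f"integrated v_d(10 R_n)/v_inf = {np.sqrt(w) / v_inf:.3f}"
      f"  (analytic {np.sqrt(1.0 - 0.1):.3f})")
```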
In any case, it is possible to define a surface - not necessarily spherical - where the dust has reached terminal speed and decouples from the gas. This surface is the upper boundary of the acceleration region and provides the initial conditions to calculate the dust dynamics thereafter (Sec. 3.3). In situations where gas drag is the dominant cause of acceleration and where \(v_{\rm g}\gg v_{\rm d}\), a simplified dependency of the terminal speed, \(v_{\rm ej}\), on particle size, \(a\), and global gas number production rate, \(Q_{\rm g}\), can be derived analytically. For the given assumptions, Equation 8 simplifies to \[\dot{v}_{\rm d}\approx\frac{C_{\rm D}m_{\rm g}}{2}\frac{\sigma_{\rm d}}{m_{\rm d }}n_{\rm g}v_{\rm g}^{2}, \tag{25}\] where \(m_{\rm d}\) is the dust particle mass. Multiplying by \(v_{\rm d}\), assuming purely radial motion with nucleus center distance \(r\), describing gas density as \(n_{\rm g}(r)=Q_{\rm g}/(4\pi r^{2}v_{\rm g}(r))\), and integrating from the surface (\(r\)=\(R_{\rm n}\),\(v_{\rm d}\)=0) to the decoupling distance (\(r\)=\(r_{\rm max}\),\(v_{\rm d}\)=\(v_{\rm ej}\)), yields \[v_{\rm ej}^{2}=\frac{\sigma_{\rm d}}{m_{\rm d}}\frac{Q_{\rm g}}{4\pi}m_{\rm g }\int_{R_{\rm n}}^{r_{\rm max}}\frac{C_{\rm D}v_{\rm g}(r)}{r^{2}}dr. \tag{26}\] The quantities in the integral depend on the radial distribution of the gas speed and on gas and dust temperatures through \(C_{\rm D}\). The terminal speed is proportional to the square root of the cross-section-to-mass ratio, \(\sigma_{\rm d}/m_{\rm d}\), and equivalently to \(\sqrt{\beta}\) or - for size-independent density - to \(a^{-1/2}\), and to the square root of the gas production rate, \(Q_{\rm g}\). The left panel of Figure 8 shows the terminal dust speeds as a function of dust size and gas production rate as obtained by numerical simulations. When the dust is much slower than the gas and the dynamics are dominated by gas drag only, then the dust speed scales with \(a^{-1/2}\) and \(Q_{g}^{1/2}\), consistent with Equation 26. Deviations from this behavior are observed at very small and very large sizes. A small dust particle accelerates to almost the gas speed and asymptotically approaches it. The dynamics of large dust particles is significantly influenced by the nucleus gravity and thus their speeds are lower than predicted by Eq. 26. Above a certain size, the dust does not reach escape speed and will fall back to the surface. Even larger dust cannot be lifted. The right panel of Fig. 8 shows the phase angle dependency of the dust speed. In the numerical simulations shown there is no night side activity. Nevertheless, the gas flow and therefore also the dust flow is driven to the night side. This lateral flow from the day to the night side ensures that even in the absence of night side activity dust particles reach significant speeds at large phase angles. We would like to re-emphasize that speeds in Fig. 8 are based on spherical dust particles with the same bulk density as the nucleus of 67P. If particles are significantly fluffier or rotating, their speeds can increase beyond the values shown in Fig. 8. Dust in locally confined gas sources will attain slower speeds than if accelerated by a global gas field due to lateral gas expansion in the plume, for similar gas production rates per unit area (_Jewitt et al._, 2014). #### 3.2.3 Jets and other 3D structures in the acceleration region Reducing the acceleration region to a purely radial expansion would be an oversimplification. Observations from comet 1P/Halley (e.g. 
_Keller et al._, 1987), to comet 103P/Hartley 2 (e.g. _A'Hearn et al._, 2011), and comet 67P (e.g. _Lin et al._, 2015) have shown intricate dust filament structures. Whether or not these filaments, also referred to as "jets" (see longer discussion in _Vincent et al._, 2019), have clear source regions on the surface is a critical question. Three plausible mechanisms can result in the observed filaments: 1. jets with a clearly defined and confined gas and/or dust source on the surface; 2. topographically sculpted filaments that are products of the local topography shaping the dust emission through self-shadowing and/or the underlying convergence of the gas flow; 3. optical illusions, originating from a large area on the surface and appearing as narrow structures only from specific viewing geometries. The first mechanism encompasses outbursts or exposed icy surfaces with enhanced activity compared to the background. Many outbursts have been spatially and temporally resolved at e.g. 67P (e.g. _Vincent et al._, 2016; _Rinaldi et al._, 2018). They are characterized by a sudden increase of the dust emission, peaking after a few minutes, followed by a smooth decrease of the emission to the pre-outburst level. These events are among the few situations that can with high certainty be characterized as "jets" or "plumes" in the strictly physical sense (see also _Vincent et al._, 2019). The second mechanism is related to the irregular topography of the surface. _Crifo et al._ (2002) and _Crifo et al._ (2004) have pointed out that dust structures in the coma do not require sources on the surface. Non-spherical nucleus shapes are sufficient to produce such features dynamically. The uneven surface topography focuses the gas and hence the dust flows, resulting in higher density regions within the coma. More complex nucleus shapes can produce more intricate structures in a total absence of localized sources. For 67P, this was illustrated in _Marschall et al._ (2016, 2017); _Marschall et al._ (2019) and Fig. 9. This work would indicate that the dust filaments observed in the coma of comet 67P do not have a source area in the traditional sense. The third mechanism listed above is essentially a mirage. A prime example is the big fan-like structure originating from the northern neck of 67P. _Shi et al._ (2018) demonstrated that this feature likely originates from a sublimating frost front on the morning terminator deep in the neck of 67P. The out-flowing dust particles produce a kind of fan that, seen from perpendicular to the plane of the fan, is indistinguishable from the surrounding coma. But with the line of sight inside this plane, the dust column densities in the projected fan are high in contrast to the surrounding coma. The resulting fine filament structure, however, is partly an optical illusion rather than a "jet" in the physical sense. It appears that optical illusions and topographic sculpting are the rule rather than the exception to explain filament structures in the near-nucleus environment (_Shi et al._, 2018; _Marschall et al._, 2019; _Vincent et al._, 2019). The current state-of-the-art modeling suggests that no confined sources of these features are required. Rather, the much simpler assumption of a mostly homogeneous surface, the topography, and viewing geometry are sufficient to explain the filamentary inner coma environment. ### Outer coma The outer dust coma begins outside the acceleration zone and ends at the tail regime. 
These distances will vary by grain parameters, but for most active comets with grain radii \(\lesssim 1\) mm the outer coma spans from nuclear distances of a few nucleus radii to of order \(10^{4}\) km. The dominant forces acting on the outer coma are nucleus gravity, solar gravity, and radiation pressure. In the absence of grain fragmentation or outgassing, the fine-grained dust coma is typically in a radial outflow. Thus, the grain number density varies with \(r^{-2}\), which produces the canonical \(\rho^{-1}\) coma in telescopic observations, where \(\rho\) is the projected distance to the nucleus (Section 3.2.1). Within the Hill sphere, however, large particles may be bound to and orbiting the nucleus. #### 3.3.1 Coma size-\(\beta\)-speed relationship The size of the dust coma depends on the physical properties of the grains (size, mass, and optical properties), and the ejection speeds imparted on the dust by gas pressure in the inner coma, but also on fragmentation processes or intrinsic outgassing, if relevant. In the simple case, solar radiation pressure is the primary non-gravitational force accelerating the dust grains. The acceleration is continuous, so there is no trivial delineation between the coma and tail regimes. One commonly adopted parameter is the apparent turn-back distance, \(X\), which is the distance at which a grain ejected directly toward the Sun will reach a speed of 0 in the rest frame of the nucleus. It is derived by integrating the equation of motion of a grain accounting for radiation pressure, ejection speed, and projection onto the sky: \[X=\frac{v_{\rm ej}^{2}\sin\theta}{2a_{\rm r}}=\frac{r_{\rm h}^{2}v_{\rm ej}^{2}\sin\theta}{2GM_{\odot}\beta}=\frac{8\pi a\rho c\,r_{\rm h}^{2}v_{\rm ej}^{2}\sin\theta}{3\overline{Q}_{\rm pr}L_{\odot}}, \tag{27}\] where \(v_{\rm ej}\) is the terminal speed of the dust grain after leaving the gas acceleration zone, \(a_{\rm r}\) is the acceleration from radiation pressure (\(F_{\rm r}/m_{\rm d}\); Eq. 13), \(\theta\) is the phase (Sun-comet-observer) angle, and \(\beta\) is defined by Eq. 15. Equation 27 is relevant for heliocentric distances \(\gtrsim 4R_{\odot}\) (_Lamy_, 1974). For isotropic expansion, the turn-back distance traces a paraboloid on the sky (_Michel and Nishimura_, 1976). Based on Eq. 27, we can estimate an order-of-magnitude upper limit on the coma size. For \(v_{\rm ej}=200\) m s\({}^{-1}\), \(a=1\)\(\mu\)m, \(\rho=500\) kg m\({}^{-3}\), and \(\theta=90^{\circ}\) (i.e., no foreshortening due to phase angle), \(X=10,000\) km at 2 au from the Sun. However, \(v_{\rm ej}\) is a function of gas production rate and dust properties. In Section 3.2.2, we showed that when the dynamics are dominated by gas drag, dust terminal speeds tend to scale with \(a^{-1/2}\) for spherical grains of constant density. Under these assumptions, \(X\) is approximately independent of \(a\), but scales with the constant that relates \(v_{\rm ej}^{2}\) to \(a^{-1}\), or alternatively with \(Q_{\rm g}\).

Figure 8: The left panel shows the maximum dust speed as a function of dust radius, and gas production rate. The right panel shows the phase angle dependence of the dust speed. Both results shown have been calculated for comet 67P and figures adapted from _Marschall_ (2017). Speeds \(<\)2 m s\({}^{-1}\) for larger-than-centimetre particles have also been reported from 103P/Hartley 2 by _A'Hearn et al._ (2011).
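The order-of-magnitude estimate quoted above (\(X\approx 10^{4}\) km at 2 au for a 1 \(\mu\)m grain) can be reproduced with the short sketch below. It assumes \(\overline{Q}_{\rm pr}=1\) and spherical grains, and uses the standard expression \(\beta=3L_{\odot}\overline{Q}_{\rm pr}/(16\pi GM_{\odot}c\rho a)\) behind Eq. 15; the constants are standard SI values and the grain parameters are those quoted in the text.

```python
import numpy as np

# Standard constants (SI)
G, M_sun = 6.674e-11, 1.989e30
L_sun, c_light = 3.828e26, 2.998e8
au = 1.496e11

def beta(a, rho, Q_pr=1.0):
    """Ratio of radiation pressure to solar gravity for a spherical grain of
    radius a [m] and bulk density rho [kg m^-3] (standard form behind Eq. 15)."""
    return 3.0 * L_sun * Q_pr / (16.0 * np.pi * G * M_sun * c_light * rho * a)

def turn_back_distance(a, rho, v_ej, r_h, theta=np.pi / 2.0):
    """Apparent turn-back distance X of Eq. 27 [m]."""
    return r_h**2 * v_ej**2 * np.sin(theta) / (2.0 * G * M_sun * beta(a, rho))

a, rho, v_ej, r_h = 1.0e-6, 500.0, 200.0, 2.0 * au   # values quoted in the text
print(f"beta = {beta(a, rho):.2f}")
print(f"X    = {turn_back_distance(a, rho, v_ej, r_h) / 1.0e3:.0f} km")

# With v_ej ~ a^(-1/2) (Sec. 3.2.2), X is roughly independent of the grain size:
for a_test in (0.1e-6, 1.0e-6, 10.0e-6):
    v_test = v_ej * np.sqrt(a / a_test)
    print(f"a = {a_test * 1.0e6:5.1f} um  ->  "
          f"X = {turn_back_distance(a_test, rho, v_test, r_h) / 1.0e3:.0f} km")
```

The last loop illustrates the statement above that, for gas-drag-limited ejection speeds, \(X\) depends on the gas production rate rather than on the grain size.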
The dependence of coma size on heliocentric distance is mainly given by the factor \(r_{\rm h}^{2}Q_{\rm g}(r_{\rm h})\), where there is no strong consensus regarding the shape of \(Q_{\rm g}(r_{\rm h})\). It is often approximated by a power law \(r_{\rm h}^{k}\) at least in confined intervals of \(r_{\rm h}\). Calculating \(Q_{\rm g}\) from the balance of sublimation, thermal radiation and solar irradiation on a perpendicularly illuminated water ice surface gives \(Q_{g}\propto r_{\rm h}^{-2}\) inside 1 au (where radiation cooling is negligible compared to sublimation cooling). Beyond, the exponent, \(k\), of a locally fitted power law transitions smoothly to \(k=-4\) at 4 au, and becomes even steeper beyond. However, beyond 2 au, the sublimation of more volatile species makes a relevant contribution to the total gas production rate, flattening its heliocentric profile. Observations suggest a much steeper than \(r_{\rm h}^{-2}\) profile for water inside 3.5 au: for 67P, _Hansen et al._ (2016) find \(k=-\)5.3 before and \(k=-\)7.1 after perihelion, while _Marshall et al._ (2017) find \(k=-\)3.8 before and \(k=-\)4.3 after perihelion. Generally, the exponent \(k\) seems to be smaller than -2 in most situations, and hence, coma size should rather grow with decreasing heliocentric distance. The dependency of \(X\) on \(r_{\rm h}\) is further - through \(v_{\rm ej}\) - affected by the dependency of \(v_{\rm g}\) on \(r_{\rm h}\). Various approximations exist for \(v_{\rm g}(r_{\rm h})\): \(v_{\rm g}\sim r_{\rm h}^{-0.5}\)(_Tseng et al._, 2007), \(v_{\rm g}\sim r_{\rm h}^{-0.4}\) within \(r_{\rm h}\) = 7 au and \(v_{\rm g}\neq v_{\rm g}(r_{\rm h})\) beyond that distance (_Biver et al._, 2002), while hydrodynamic model calculations by _Muller_ (1999) suggest weak dependence of \(v_{\rm g}\) on \(r_{\rm h}\). Rather than deriving coma-size, Eq. 27 or related considerations are often applied to the measured coma size in order to estimate coma grain properties from assumptions on velocity and/or acceleration. A few recent comets can serve as examples of the variety of conclusions that can be drawn for this analysis. _Jewitt et al._ (2019a) measured a growth in the \(\rho^{-1}\) coma size of comet C/2017 K2 (PanSTARRS), and use it to estimate a timescale of activity. Combining the magnitude of radiation pressure acceleration with this timescale, they estimate from the absence of a detectable tail that the optically dominant dust grains must have \(\beta>0.003\). The paucity of small grains in the coma suggested the influence of particle cohesion, which prevents their release and favours the production of large particles (_Gundlach et al._, 2015). _Hsieh et al._ (2004) estimated ejection speeds \(\ll 45\) m s\({}^{-1}\), based on the lack of a resolved coma in images of 133P/Elst-Pizarro and an assumption of small, \(\beta=1\) particles. _Mueller et al._ (2013) measured the growth of a narrow dust feature in images of comet 103P/Hartley 2. They found that the source was most likely active for \(\approx 22\) hr, longer than the long-axis precession of the nucleus, which had consequences on their derived source location for the feature. Finally, _Kelley et al._ (2013) studied point-sources in the coma of comet 103P/Hartley 2, and with Eq. 27 concluded that the dynamics of these \(\gtrsim 1\)-cm sized particles were not governed by radiation pressure. 
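The heliocentric slopes quoted at the beginning of this discussion (\(k\approx-2\) well inside 1 au, steepening towards \(k\approx-4\) near 4 au) can be illustrated with the energy balance of a perpendicularly illuminated, freely subliming water-ice surface. The sketch below is indicative only: the albedo (0.05), emissivity (0.95) and the vapour-pressure parametrization \(p_{\rm v}(T)=3.56\times 10^{12}\exp(-6141.667/T)\) Pa are assumed values, and the exact crossover distance depends on these choices.

```python
import numpy as np

# Energy balance for a perpendicularly illuminated, freely subliming water-ice surface:
#   (1 - A) * S0 / r_h^2 = eps * sigma * T^4 + L_sub * Z(T),
# with sublimation flux Z(T) = p_v(T) * sqrt(m_H2O / (2*pi*k_B*T)).
# The local power-law slope k = d ln Z / d ln r_h is estimated by finite differences.

S0, sigma, k_B = 1361.0, 5.67e-8, 1.381e-23
m_H2O, L_sub = 2.99e-26, 2.84e6            # molecule mass [kg], latent heat [J/kg]
A_bond, eps = 0.05, 0.95                   # assumed albedo and emissivity

def Z(T):
    p_v = 3.56e12 * np.exp(-6141.667 / T)  # assumed vapour-pressure law [Pa]
    return p_v * np.sqrt(m_H2O / (2.0 * np.pi * k_B * T))

def sublimation_rate(r_h):
    """Solve the energy balance for T by bisection; return Z in kg m^-2 s^-1."""
    absorbed = (1.0 - A_bond) * S0 / r_h**2
    lo, hi = 20.0, 400.0
    for _ in range(80):
        T = 0.5 * (lo + hi)
        if eps * sigma * T**4 + L_sub * Z(T) < absorbed:
            lo = T
        else:
            hi = T
    return Z(0.5 * (lo + hi))

for r in (0.7, 1.0, 1.5, 2.5, 3.5, 4.5):
    z_lo, z_hi = sublimation_rate(0.95 * r), sublimation_rate(1.05 * r)
    k = (np.log(z_hi) - np.log(z_lo)) / (np.log(1.05) - np.log(0.95))
    print(f"r_h = {r:3.1f} au   Z = {sublimation_rate(r):.2e} kg m^-2 s^-1"
          f"   local slope k = {k:5.2f}")
```

With these assumptions the slope stays close to \(-2\) inside 1 au and steepens to roughly \(-4\) to \(-5\) between about 3.5 and 4.5 au, in qualitative agreement with the values discussed above.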
#### 3.3.2 Large particles in the coma Large particles or chunks of nucleus, i.e., centimeter-sized and larger, may be ejected from the nucleus or inner coma with very low speeds, and potentially placed into sub-orbital trajectories. Under the influence of an additional force, the particles can be placed into bound orbits around the nucleus. The force may arise from gas outflow anisotropies in the inner coma, or from outgassing of the large particles themselves. Evidence for centimeter-sized and larger particles may be found in cometary dust trails, meteor showers associated with comets, and in observations of comets at submillimeter to centimeter wavelengths, including radar. See Ye et al. in this book for a review of cometary meteor showers, and _Harmon et al._ (2004) for a review of radar observations of cometary comae. Dust trails are addressed in Section 3.4.6.

Figure 9: Left panel: Stretched and cropped OSIRIS-WAC image of the coma of comet 67P taken on 2015-05-05 06:28:54 UTC. Right panel: Modeled dust brightness from _Marschall et al._ (2019), but the nucleus is not overexposed as in the actual OSIRIS image.

Observations of individual particles in the coma, including those in bound orbits, are a more recent phenomenon. _A'Hearn et al._ (2011) presented images from the Deep Impact spacecraft of the inner coma of comet 103P/Hartley 2 containing thousands of point sources within a few kilometers from the nucleus. Such particles were not reported at 9P/Tempel 1. Depending on the light scattering properties of the particles, they may be as large as meter-sized. The presence of such large nuclear chunks is a potential solution to the comet's hyperactivity. The large chunks provide additional sublimating surface area, which enhances the water production rate of the comet (_Kelley et al._ 2013, 2015; _Belton_ 2017). Point sources were also seen by the Rosetta spacecraft upon its approach to comet 67P/Churyumov-Gerasimenko. _Rotundi et al._ (2015) estimated their sizes, assuming nucleus-like properties, and found the largest to be meter-sized. The particles are likely in bound orbits, and remnants from the comet's last perihelion passage. Particles seemed to fill the Hill sphere at the time of the observations (radius 318 km at 3.6 au from the Sun). Outgassing of the large particles seems to be an important dynamical process, at least for those that are freshly ejected from the nucleus. _Kelley et al._ (2013) showed that comet 103P/Hartley 2 had an asymmetry in its near-nucleus (\(\lesssim 10\) km) population of large particles, and concluded that acceleration by outgassing best accounted for their distribution and their speeds as measured by _Hermalyn et al._ (2013). These particles were also likely responsible for Hartley 2's OH-tail observed by _Knight and Schleicher_ (2013) and the tailward enhanced rotational temperature seen by _Bonev et al._ (2013). _Agarwal et al._ (2016) observed the acceleration of decimeter-sized particles in the vicinity of the nucleus of comet 67P/Churyumov-Gerasimenko. The acceleration was not strictly in the anti-sunward direction. They concluded that acceleration from outgassing and the ambient coma were the processes most likely to be responsible for the observed motions. On larger spatial scales, particle outgassing may be less important. 
_Reach et al._ (2009) argued, on the basis of the width of the trail of comet 73P/Schwassmann-Wachmann 3, that trail particles \(<10\) cm in radius are either ice-free after ejection from the nucleus, or are quickly devolatilized. A fraction of the particles or chunks in the centimetre-to-decimetre size class falls back to the surface, where it forms smooth layers of fallback material that were observed to cover wide regions of comet 67P. _Marschall et al._ (2020b) estimate that between 11% and 22% of the debris mass initially lifted off the surface falls back. The fallback material likely still contains substantial amounts of water ice (_Davidsson et al._ 2021). #### 3.3.3 Connecting spacecraft and remote observations of comae If the regular dust structures in the acceleration region do not have local sources in the canonical sense (Section 3.2.3) the question arises whether structures observed in the outer coma can be traced back to the surface or not. Generally, from telescope images of outer coma jet features alone, the physical processes causing these features cannot be inferred. But telescopic images have been used to infer the properties of cometary nuclei, including rotational state and number and distribution of active areas (_Vincent et al._ 2019). These results often rely on the presence of distinct jet-like features in the data. It has been demonstrated that one can reliably trace large scale dust coma structures back to a virtual surface that would be the outer edge of the acceleration zone. It seems possible to expand this inversion down to the nucleus surface at the cost of increased spatial uncertainty on the source location, if assuming that the emission is on average perpendicular to the surface and that the emission vector measured at the edge of the acceleration zone is essentially a weighted average of all contributions. From a purely geometrical point of view, the angle between the measured emission vector and the nucleus north pole defines the effective co-latitude of the source on the surface (accounting for large scale topography). This technique was successfully applied by Vincent et al. (2010, 2013) and Farnham et al. (2007, 2013) to infer the location of specific sources on the surfaces of comets 9P, 103P, and 67P from ground-based observations alone. The inverted source locations were confirmed by in-situ measurements from Deep Impact and Rosetta. Hence, the connection between dust coma morphology in ground-based observations may be further investigated when in situ spacecraft observations are available for comparisons. A rich set of dust features was observed in the coma of comet Halley during its 1986 perihelion passage, including shells, arcs, and nearly linear features, which were also observed in previous apparitions (_Larson et al._ 1987). _Sekanina and Larson_ (1986) interpreted the nearly linear features in ground-based images as repeated discrete ejection events, due to their alignment with dust synchrones. Images of the comet from the Giotto spacecraft (_Keller et al._ 1987) show regions of strong activity from small, approximately kilometer-sized regions (nucleus dimensions are \(7\times 7\times 15\) km; _Merényi et al._ 1990). The model of _Belton et al._ (1991), with 5 localized active areas on a nucleus with an excited spin state, successfully combined coma features seen in ground-based data with the inner-coma jets observed by the Giotto and Vega spacecraft. The next inner coma and nucleus to be imaged by spacecraft was comet 19P/Borrelly. 
The comet has a prominent asymmetry due to a jet-like feature in ground-based data (_Farnham and Cochran_ 2002). This feature is not aligned with the sunward or expected dust tail directions, indicating it is produced by directed emission rather than radiation pressure effects. Deep Space 1 images show that this jet-like feature is due to topography (_Soderblom et al._ 2004). This long bi-lobed nucleus has a rotational axis perpendicular to the long axis of the nucleus, and the telescopically observed jet is due to dust released from its long flat polar region. A similar effect was seen from the flat southern polar region of 9P/Tempel 1 _Farnham et al._ (2007). The inner coma of 81P/Wild 2, imaged by the Stardust spacecraft, presented many jet-like features and filaments (_Sekanina et al._ 2004). Ground-based images have shown two prominent dust coma asymmetries: (1) a broad fan directed to the north of the orbital plane, mainly active preperihelion; and, (2) a narrow jet-like feature, directed more sunward and to the south of the orbital plane, mainly active post-perihelion (_Sekanina_ 2003). The Stardust flyby was 98 days after perihelion, and source (1) was not active at the time. Source (2) should have been active, but _Farnham and Schleicher_ (2005) could not single out any of the Stardust-observed filaments as candidates for its source. They concluded that source (2) might have been temporarily inactive due to the diurnal rotation of the nucleus, or is the result of a combination of several filaments. This highlights one of the potential issues with remotely-observed dust features. As compared to gas, dust coma expansion speeds tend to be low, \(\lesssim 100\) m s\({}^{-1}\), and a broad range of ejection velocities may be imparted on the grains (e.g., Fig. 8). Thus material propagating outward from an active source on a rotating nucleus might trace out an arc or partial spiral for a unimodal speed distribution, but would be less pronounced, blurred, or lost altogether for a broad speed distribution (e.g., _Samarasinha_ 2000). In addition to the broad southern feature discussed above, enhanced ground-based images of comet 9P/Tempel 1 show other dust coma features (e.g., _Lara et al._ 2006). _Vasundhara_ (2009) used a spherical nucleus model and ground-based and Deep Impact data to derive four effective dust sources at the nucleus. _Vincent et al._ (2010) used a nucleus shape model based on Deep Impact flyby data to determine source locations for the features, finding six active regions in total with some compatibility with the _Vasundhara_ (2009) results. Deep Impact spacecraft data of comet 103P/Hartley 2 show a strong active area located on the small end of this bi-lobed nucleus (_A'Hearn et al._ 2011). Dust coma features in ground-based data were observed, e.g., by _Mueller et al._ (2013) and _Lin et al._ (2013). However, the nucleus is in an excited rotation state, which complicates connecting coma features seen remotely to the inner-coma features and the nucleus seen by Deep Impact. Regardless, _Mueller et al._ (2013) did compare images taken contemporaneously to the Deep Impact spacecraft's closest approach to the comet, and proposed that two secondary features in the coma originated from active areas found along the long axis of the nucleus and near the solar terminator at the time of the Deep Impact flyby. 
In contrast with all previous cometary spacecraft missions, the Rosetta mission enabled a much broader comparison to ground-based data, owing to its long (\(\sim\)2 yr) residence near the nucleus. Furthermore, the comet's apparent orbit-to-orbit stability in terms of activity (_Snodgrass et al._ 2017) enables comparisons beyond the 2015/2016 apparition. _Vincent et al._ (2013) studied the morphology of ground-based images of the dust coma in the 2003 and 2009 apparitions, and derived a pole orientation and the planetocentric locations of three active areas. Furthermore, _Knight et al._ (2017) found good agreement between their dust coma observations and the predictions of _Vincent et al._ (2013). They indicated one active area was the Hapi region (the "neck" of the bi-lobed nucleus), but concluded the other two were less obvious, with one possibly connected to the southern region, and the other to Imhotep (a flat smooth region on the largest lobe). #### 3.3.4 Dust size distributions The size or mass distribution of dust describes the relative abundance of particles of different sizes or masses, and is usually approximated by power-laws for defined size (mass) intervals, including "broken" power-laws that have different exponents for different size (mass) ranges. The employed power-laws describe either the differential or the cumulative distributions. The conversion between their respective exponents, and between size and mass distributions is given by the rules of differential calculus. For such conversions, usually the assumptions of size-independent bulk density and spherical particle shape are made (e.g., _Agarwal et al._ 2007). The exponent of the power-law determines whether the largest or smallest particles in the concerned interval dominate the mass and scattering cross-section of the particle ensemble. This exponent can change not only with size, but also with a comet's position relative to perihelion (_Fulle et al._ 2010, 2016; _Merouane et al._ 2017). Different measurements, being sensitive to different types of particles, can yield different exponents as well (e.g., _Blum et al._ 2017; _Rinaldi et al._ 2017). The size distribution of dust observed in the tail or outer coma will not generally correspond to the size distribution of dust lifted from the surface (due to back-falling or orbiting particles), and even less to that of material resting on the surface (e.g., due to particles that cannot be lifted, Eq. 23) or inside the deep interior. The size distribution of escaping dust may further be affected by fragmentation and sublimation of a potential volatile component. The dust size distribution in a comet may carry information about its building blocks and formation process, provided that any post-formation changes to that distribution are understood and accounted for. The unknown fraction of fall-back material also complicates attempts to infer the refractory-to-ice ratio in a cometary interior, even when the masses of the escaping dust and gas are known, as - at least integrated over a whole perihelion passage - is the case for comet 67P (_Choukroun et al._ 2020). It is possible that the dust size distribution, and especially the particle size containing most of the light scattering cross-section, varies between comets. For some prominent, bright, long-period comets, the particles dominating the interaction with light could be micron-sized. 
Assumed indicators of this are the presence of a strong silicate emission feature at wavelengths near 10 \(\mu\)m and a high maximum degree of linear polarisation of the scattered light (_Kolokolova et al._ 2004), although similar characteristics are also expected from aggregates of (sub-)micrometer grains. _Fulle_ (2004) reports a dominance of micron-sized particles in the scattering cross-section particularly of the long-period comets Hyakutake and Hale-Bopp, but also in the active Centaur Chiron. Dust instruments flying by comet 1P/Halley detected micron-sized dust with a size distribution that makes it the optically dominant component of Halley's dust (_McDonnell et al._ 1989). Also at comets 81P/Wild 2 (_Green et al._ 2004) and 9P/Tempel 1 (_Economou et al._ 2013) micron-sized dust was detected by in situ instruments during flybys. Micron-sized particles were also among those returned from comet Wild 2 to Earth by the Stardust spacecraft (_Brownlee_ 2014). In other comets, and measured with other methods, the main scattering cross-section seems rather to be in (sub-)millimeter-sized grains. One indicator is the absence of a prominent radiation-pressure swept tail in distant comets C/2014 B1 (_Jewitt et al._ 2019b), C/2017 K2 (_Jewitt et al._ 2019a) and the interstellar comet 2I/Borisov (_Kim et al._ 2020). _Reach et al._ (2007) investigated thermal infrared emission from the debris trails of some 30, mainly Jupiter-family comets and found that in most of them, the amount of millimeter-sized dust required to explain the brightness of the trails was also sufficient to explain the scattering cross-section in the coma, such that not much scattering should have been contributed by potential additional micron-sized particles in the coma. ### Tail, trail and dispersion into the zodiacal cloud #### 3.4.1 Key parameters The tail regime begins outside the Hill sphere (Fig. 1). The trajectory of a dust particle in the tail region and beyond is determined by the particle size (through the radiation pressure parameter \(\beta\) according to Equation 15) and the ejection terminal velocity \(v_{\rm ej}\) (_Finson and Probstein_ 1968). Most dynamical models use a simplified treatment of comet dust assuming \(Q_{\rm pr}\) = 1. A more detailed treatment dependent on dust mineralogies and structures (compact or fluffy) is discussed in Kolokolova et al. in this volume. The generally applicable form for particle ejection by gas drag is given by (_Whipple_ 1950, see also Section 3.2.2): \[v_{\rm ej}\propto\beta^{1/2}. \tag{28}\] After a cometary dust particle has been ejected from a nucleus and left the dust-gas coupling region (Fig. 1), its motion is mainly controlled by solar gravity and radiation pressure.

Figure 10: Synchrone-syndyne network for Comet C/2011 L4 (PANSTARRS) on 2013 March 21. The short near-vertical lines are trailed background stars. Image Credit: L. Comolli - Model Overlay: M. Fulle.

The size distribution of cometary dust has been inferred from both remote sensing images and in situ data. In general, it is assumed that the distribution of particle radii can be approximated by a power law so that the number of particles with radii ranging from \(a\) to \(a+da\) is \(n(a)da=\Gamma a^{-\alpha}da\). Table 2 provides a summary of \(\alpha\) values reported in recent literature. The mean \(\alpha\) value in Table 2 is \(\alpha\) = 3.7\(\pm\)0.2 and the median is \(\alpha\)\(\sim\) 3.4. 
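As an illustration of how the exponent controls which grains dominate (added here; the size range and the 10 \(\mu\)m split are arbitrary choices for illustration), the sketch below evaluates, for a differential distribution \(n(a)\propto a^{-\alpha}\) between 0.1 \(\mu\)m and 1 cm with the usual assumptions of spherical shape and constant bulk density, the fractions of geometric cross-section and mass carried by grains smaller than 10 \(\mu\)m, for a few values of \(\alpha\) bracketing those in Table 2.

```python
import numpy as np

# Differential size distribution n(a) ~ a**(-alpha): fractions of the geometric
# cross-section (~ a^2 n) and of the mass (~ a^3 n, constant bulk density) carried
# by grains smaller than a_split, for an assumed size range [a_min, a_max].

a_min, a_max, a_split = 1.0e-7, 1.0e-2, 1.0e-5   # metres: 0.1 um .. 1 cm, split at 10 um
a = np.logspace(np.log10(a_min), np.log10(a_max), 100_001)

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

for alpha in (3.4, 3.7, 4.1):
    n = a ** (-alpha)
    small = a <= a_split
    frac_area = trapezoid((a**2 * n)[small], a[small]) / trapezoid(a**2 * n, a)
    frac_mass = trapezoid((a**3 * n)[small], a[small]) / trapezoid(a**3 * n, a)
    print(f"alpha = {alpha}: grains < 10 um carry {100 * frac_area:5.1f}% of the"
          f" cross-section and {100 * frac_mass:5.1f}% of the mass")
```

For the mean exponent of Table 2, micrometre-sized grains dominate the scattering cross-section of such an ensemble, while most of the mass resides in the large grains, consistent with the discussion above.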
Many comets in the past were characterized by \(\alpha\) = 3.5, typical of particles in collisional equilibrium (_Dohnanyi_ 1969). #### 3.4.2 General landscape of a dust tail Assuming that \(v_{\rm ej}\) = 0, the loci of particles of a given \(\beta\) and different ejection times are defined as a syndyne curve (_Finson and Probstein_ 1968), and the loci of particles with different \(\beta\) but the same ejection time are defined as a synchrone curve. A specific example of a synchrone-syndyne network is shown in Figure 10. An online tool is available for generating synchrone-syndyne diagrams (_Vincent_ 2014)1. This two-dimensional model is used for simple analysis of comet tail morphology and has been employed to determine the \(\beta\) range and ejection times of dust, including by sporadic emission events. In reality, \(v_{\rm ej}\) is not zero. This leads to an expansion of the syndynamic tube whose width is given by the dust ejection velocity. _Fulle_ (2004) pointed out that syndyne analyses tend to yield misleading \(\beta\) values, and thus, a three-dimensional dynamical model is needed to consider non-zero ejection velocities. Footnote 1: [https://comet-toolbox.com/FP.html](https://comet-toolbox.com/FP.html) a few hundred kg s\({}^{-1}\) near perihelion, although there are variations between individual objects (_Ishiguro et al._ 2007; _Kelley et al._ 2008; _Moreno_ 2009; _Agarwal et al._ 2010). **Long-period Comets:** Long-period comets show a more diverse distribution of dust parameters than short-period comets. The scattering cross-sectional areas of several long-period cometary comae are dominated by micron-sized particles (_Fulle_ 2004; _Lisse et al._ 1998). However, recently observed distant comets show the absence of small particles, and their coma and tails are composed only of particles larger than a millimeter (_Jewitt et al._ 2019a,b). Dust speed and dust production rates as function of particle size also show large variance. **Interstellar Comets:** An accurate determination of the orbit of 1I/'Oumuamua revealed the existence of non-gravitational acceleration, for which the most straightforward explanation would be comet-like outgassing (_Micheli et al._ 2018). However, the outgassing required to supply the non-gravitational acceleration was predicted to be accompanied by a visible dust or gas coma, while 'Oumuamua was always observed as point-like. The morphology of 2I/Borisov is best reproduced by dust dynamical models if the coma is dominated by sub-millimeter and larger particles, emitted at \(\lesssim\)9 m s\({}^{-1}\) speeds, with total dust production rates estimated from imaging data \(\sim\)35 kg s\({}^{-1}\) (_Kim et al._ 2020; _Cremonese et al._ 2020). #### 3.4.5 Neckline The neckline is a substructure detected in dust tails on rare occasions and caused by dynamical effects (_Kimura and Liu_ 1977). A neckline consists of large particles emitted at a true anomaly of 180\({}^{\circ}\) before the observation (cf. Fig. 11) that were emitted with a non-zero velocity component perpendicular to the parent body's orbital plane. Their large sizes imply low \(\beta\) and low ejection speeds. Hence their orbits will overall be similar to that of the parent comet, but inclined with respect to it due to the non-zero perpendicular velocity component. 
After initially dispersing in the perpendicular direction, particles ejected at a given time will re-assemble in the orbital plane of the comet after 180\({}^{\circ}\) of true anomaly, and be observable as a thin, bright line of dust. Using neckline photometry and Monte Carlo models, it is possible to determine if there were significant dust emissions at any given time. _Fulle et al._ (2004) identified comet 67P/Churyumov-Gerasimenko to have a neck-line structure and concluded that the comet has significant dust production at 3.6 au pre-perihelion. Taking advantage of the neck-line effect, _Ishiguro et al._ (2016) succeeded in detecting the debris cloud ejected from the 2007 outburst of comet 17P/Holmes. #### 3.4.6 Trails and dispersion into the zodiacal cloud Comet debris trails consist of large particles that weakly interact with solar radiation pressure. They were first observed by the Infrared Astronomical Satellite (IRAS) (_Sykes et al._ 1986), and subsequently in both visible and infrared light (_Ishiguro et al._ 2002; _Reach et al._ 2007; _Sykes et al._ 2004; _Arendt_ 2014). Trails that intersect the Earth's orbit are observed as meteor showers. As of 2022 January, there are 44 known comets with remotely observed debris trails whose details are available at the website2 and 24 known comets with dust trails implied by meteor showers (Ye et al., this volume). Recent models of cometary meteoroid streams show that comet trails can also be observed by in situ dust detectors (e.g., _Kruger et al._ 2020). Footnote 2: ([https://www.astro.umd.edu/~msk/science/trails](https://www.astro.umd.edu/~msk/science/trails)) Comet debris trails contribute a significant input to the interplanetary dust particle (IDP) cloud complex (_Sykes et al._ 2004; _Reach et al._ 2007). _Dikarev et al._ (2004) and _Soja et al._ (2019) quantified that the interplanetary dust cloud at 1 au is sustained mainly by Jupiter-family comets (\(\sim\)90%), with additional contributions by asteroids (\(\sim\)10%), and Halley-type comets (\(<\)1%). _Nesvorny et al._ (2010) connected the vertical brightness profile of the observed mid-infrared zodiacal light with that of a numerical model, suggesting that \(\sim\)90% of the zodiacal cloud emission came from comets. _Yang and Ishiguro_ (2015) reached similar conclusions by comparing the observed optical properties of zodiacal light (i.e., albedo and optical spectral gradients) with those of other types of small bodies in the solar system. In contrast, a non-negligible fraction of asteroid particles in the IDP cloud is proposed by _Ipatov et al._ (2008) and _Kawara et al._ (2017). ## 4 Future perspectives In this chapter, we have attempted to describe our current understanding of how dust is released from a cometary surface and transported to interplanetary space. The vast amount of data returned by spacecraft missions and modern telescopes over the past 20 years, since the publication of the "Comets II" book, has shed light on the complexity of this process and left us with a number of open questions that we outline in the following. One question relates to how activity works at the surface level, that is, how dust and ice are mixed in the cometary nucleus, and which processes lead to the lifting of dust particles, overcoming cohesive forces (Section 3.1.3). We need a theoretical description of this process that is not in conflict with the observation that activity exists also and in particular far from the Sun. 
In our view, current theoretical efforts are limited by the quality of data that we have to constrain them. To characterize the surface composition, texture and structure, highly resolved remote sensing data would be needed, especially at mid-infrared wavelengths, where the maximum of thermal emission in the inner solar system occurs, and in polarized light. In situ analyses of the surface from a landed laboratory would also greatly help to understand the physical and chemical properties of the surface, and finally, experiments with analog materials in an Earth- or space-station based laboratory can provide good constraints as well (see Poch et al. in this book).

\begin{table} \begin{tabular}{l l l l l} \hline \hline Comet & Method & Radii (\(\mu\)m) & Index, \(\alpha\) & Reference \\ \hline 1P/Halley & In-Situ & \(>\)20 & 3.5\(\pm\)0.2 & _Fulle et al._ (1995) \\ 2P/Encke & Optical & \(>\) 1 & 3.2 to 3.6 & _Sarugaku et al._ (2015) \\ 22P/Kopff & Optical & \(>\) 1 & 3.1 & _Moreno et al._ (2012) \\ 26P/Grigg-Skjellerup & Optical & \(>\) 60 & 3.3 & _Fulle et al._ (1993) \\ 67P/Churyumov-Gerasimenko (coma) & In-Situ & \(>\) 0.01 & \(3.7^{+0.7}_{-0.1}\) & _Marschall et al._ (2020b) \\ 67P/Churyumov-Gerasimenko (trail) & Optical & \(>\) 100 & 4.1 & _Agarwal et al._ (2010) \\ 81P/Wild & Optical & \(>\) 1 & 3.45\(\pm\)0.1 & _Pozuelos et al._ (2014) \\ 103P/Hartley & In-Situ & \(>\) 10\({}^{4}\) & 4.7 to 6.6 & _Kelley et al._ (2013) \\ 103P/Hartley & Optical & \(>\) 1 & 3.35\(\pm\)0.1 & _Pozuelos et al._ (2014) \\ 209P/LINEAR & Optical & \(>\) 1 & 3.25\(\pm\)0.1 & _Ishiguro et al._ (2015) \\ \hline \end{tabular} \end{table} Table 2: Size Distribution Indices

Figure 11: Subaru telescope Hyper Suprime-Cam (HSC) image of comet 67P/Churyumov-Gerasimenko (UT 2016 March 8). The trail (parallel to the horizontal axis) and the neckline structure are projected to different sky position angles. Image courtesy of F. Moreno, original version published as _Moreno et al._ (2017).

The second question addresses the extent to which results obtained from spacecraft missions can be generalized to the wider comet population. On the one hand, this can be addressed by comparing results from the different missions we have had until now, searching for similarities, differences and repeating patterns. Since space missions are costly, the major bridge to the comet population in general will, however, be achieved through telescope observations. To link telescope observations and space missions, we need to understand which properties of the early, near nucleus dust dynamics are still reflected in the outer coma and tail and hence accessible to telescopes. Indications are that much information on the details of the activity distribution is lost in the outer coma (_Crifo and Rodionov_ 1997; _Fulle_ 2004), but remote telescope observations do reveal brightness variations in the outer coma that have not yet been linked to features in the inner coma (_Knight et al._ 2017). A possibility to establish and investigate this connection would be through dedicated modelling of the interface between inner coma and tail for those comets that have been visited by spacecraft. Modelling-wise, these two regions are typically treated separately according to the prevailing forces (gas drag vs. radiation pressure), but see Section 3.3.3 for examples of connecting spacecraft and telescope observations of cometary comae. 
A third complex of open questions concerns how dust evolves in its physical properties while it travels away from the nucleus. Potentially relevant processes include outgassing of embedded ice (affecting both dynamics and physical properties) and fragmentation, possibly induced by outgassing and/or fast rotation, and leading to a change in the dust size distribution. Data do not yet give clear evidence for or against any of these processes. High-resolution ground-based observations using complementary techniques such as visible light and thermal infrared spectroscopy and polarimetry could provide stronger constraints on the dust evolution at least in the outer coma. ## Acknowledgements. We thank Vladimir Zakharov, David Jewitt, Xian Shi and Felix Keiser, and the referees, Eberhard Grün and Jean-Baptiste Vincent, for their comments that significantly helped us to improve this manuscript. J.A. and Y.K. acknowledge funding by the Volkswagen Foundation. J.A.'s contribution was made in the framework of project CAstRA funded by the European Union's Horizon 2020 research and innovation program under grant agreement No. 757390. M.S.P.K. acknowledges support from NASA Grant 80NSSC20K0673.
2309.13447
Non-autonomous iteration of polynomials in the complex plane
We consider a sequence $(p_n)_{n=1}^\infty$ of polynomials with uniformly bounded zeros and $\deg p_1\geq 1$, $\deg p_n\geq 2$ for $n\geq 2$, satisfying certain asymptotic conditions. We prove that the function sequence $\left(\frac{1}{\deg p_n\cdot...\cdot \deg p_1}\log^+|p_n\circ...\circ p_1|\right)_{n=1}^\infty$ is uniformly convergent in $\mathbb{C}$. The non-autonomous filled Julia set $\mathcal{K}[(p_{n})_{n=1}^\infty]$ generated by the polynomial sequence $(p_{n})_{n=1}^\infty$ is defined and shown to be compact and regular with respect to the Green function. Our toy example is generated by $t_n=\frac{1}{2^{n-1}}T_n,\ n\in\{1,2,...\}$, where $T_n$ is the classical Chebyshev polynomial of degree $n$.
Marta Kosek, Malgorzata Stawiska
2023-09-23T18:27:02Z
http://arxiv.org/abs/2309.13447v2
# Non-autonomous Julia sets for sequences of polynomials satisfying Kalmar-Walsh theorem ###### Abstract. We consider a compact, polynomially convex, regular set \(K\subset\mathbb{C}\) and a sequence \((p_{n})_{n=1}^{\infty}\) of polynomials with uniformly bounded zeros and such that \(\lim_{n\to\infty}\left\|p_{n}\right\|_{K}^{1/(\deg\ p_{n})}=\mathrm{cap}(K)\), where \(\mathrm{cap}(K)\) is the logarithmic capacity of \(K\). Taking an arbitrary sequence \((d_{k})_{k=1}^{\infty}\) of integers greater than \(1\) we prove that there exists a nonempty set \(\mathcal{K}[(p_{d_{k}})_{k=1}^{\infty}]\), depending only on the sequence \((p_{d_{k}})_{k=1}^{\infty}\), such that for any compact polynomially convex regular set \(E\) the preimages \((p_{d_{k}}\circ...p_{d_{1}})^{-1}(E)\) converge in Klimek's metric to \(\mathcal{K}[(p_{d_{k}})_{k=1}^{\infty}]\). We call the set \(\mathcal{K}[(p_{d_{k}})_{k=1}^{\infty}]\) the non-autonomous filled Julia set generated by the polynomial sequence \((p_{d_{k}})_{k=1}^{\infty}\). Our toy example is generated by \(t_{n}=\frac{1}{2^{n-1}}T_{n},\ n\in\{1,2,...\}\), associated with \(K=[-1,1]\), where \(T_{n}\) is the classical Chebyshev polynomial of degree \(n\). Key words and phrases:Julia sets, polynomials, Green function, Kalmar-Walsh theorem, Chebyshev polynomials 2020 Mathematics Subject Classification: Primary 37F10; Secondary 30C15, 30E10, 31A15 ## 1. Introduction Behavior of iterates \(p^{n}:=p^{\circ n}\) of a polynomial \(p:\mathbb{C}\longrightarrow\mathbb{C}\) has been intensely studied since the nineteenth century. For an introduction to the subject see e.g. [11]. Much more recent (and much less advanced) is the study of sequences of maps of the type \(p_{n}\circ...\circ p_{1}\) where each \(p_{n}\), \(n\in\{1,2,...\}\), is a polynomial, but not necessarily the same one. Under the name of "generalized iteration", the works [2], [9] and [10] deal with such sequences when the coefficients of underlying polynomials satisfy some conditions. In this article we extend the dynamical study to sequences of polynomials without any concrete assumptions on their coefficients. We work instead with sequences of polynomials associated with a (fixed) compact set in the complex plane that have been widely studied in complex approximation and interpolation theory, often in connection with logarithmic potential theory ([35], [7], [13], [4]), namely sequences satisfying the Kalmar-Walsh condition. More specifically, let \(K\subset\mathbb{C}\) be compact and regular (i.e., the Green function of \(K\) exists and is continuous), and let \((p_{n})_{n=1}^{\infty}\) be a sequence of polynomials such that \(\deg\,p_{n}=n\). By the _Kalmar-Walsh condition_ we mean the following equality: \[\lim_{n\to\infty}\|p_{n}\|_{K}^{1/n}=\operatorname{cap}(K), \tag{1.1}\] where \(\operatorname{cap}(K)\) is the logarithmic capacity of \(K\). See subsection 2.2 for examples of such sequences of polynomials. Here is the example motivating our study of the behavior of compositions \(p_{n}\circ...\circ p_{1}\), where \((p_{n})_{n=1}^{\infty}\) is a sequence of polynomials satisfying (1.1). _Main Example._ Recall that the classical Chebyshev polynomials satisfy \(T_{d_{1}}\circ T_{d_{2}}=T_{d_{2}}\circ T_{d_{1}}=T_{d_{1}d_{2}}\). All zeros of \(T_{d}\) belong to the segment \(K=[-1,1]\). 
The polynomial \(t_{d}=\frac{1}{2^{d-1}}T_{d}\) is the minimal polynomial of degree \(d\) on \([-1,1]\), so \[\lim_{d\to\infty}\|t_{d}\|_{K}^{1/d}=\operatorname{cap}([-1,1])=\frac{1}{2}\] and \[\lim_{d\to\infty}\frac{1}{d}\log|t_{d}(z)|=g_{[-1,1]}(z)-\log 2\] locally uniformly in \(\mathbb{C}\setminus[-1,1]\). Here \(g_{[-1,1]}\) is the complex Green function of the segment \([-1,1]\). In consequence \(\lim_{d\to\infty}\frac{1}{d}\log|T_{d}(z)|=g_{[-1,1]}(z)\) locally uniformly in \(\mathbb{C}\setminus[-1,1]\). For a sequence of integers \((d_{n})_{n=1}^{\infty}\) not smaller than \(2\) we get \(T_{d_{n}}\circ...\circ T_{d_{1}}=T_{d_{n}...d_{1}}\), hence \((T_{d_{n}}\circ...\circ T_{d_{1}})_{n=1}^{\infty}\) is a subsequence of the sequence of all classical Chebyshev polynomials with increasing degrees and \[\lim_{n\to\infty}\frac{1}{d_{n}...d_{1}}\log|(T_{d_{n}}\circ...\circ T_{d_{1} })(z)|=g_{[-1,1]}(z)\] locally uniformly in \(\mathbb{C}\setminus[-1,1]\). By a more dynamical approach it was shown in [26] that \[\lim_{n\to\infty}\frac{1}{d_{n}...d_{1}}\log^{+}|(T_{d_{n}}\circ...\circ T_{d_{1 }})(z)|=g_{[-1,1]}(z)\] for any \(z\in\mathbb{C}\). Moreover, the convergence is uniform in the whole complex plane. The function \(\frac{1}{d_{n}...d_{1}}\log^{+}|(T_{d_{n}}\circ...\circ T_{d_{1}})|\) is the Green function of the preimage of the unit disk, \(g_{(T_{d_{n}}\circ...\circ T_{d_{1}})^{-1}(\overline{\mathbb{D}}(0,1))}\), and the convergence of functions can be thought of as the convergence in Klimek's metric of the sets \((T_{d_{n}}\circ...\circ T_{d_{1}})^{-1}\left(\overline{\mathbb{D}}(0,1)\right)\) to \([-1,1]\). It is natural to ask whether the convergence still holds if \([-1,1]\) is replaced by a (fixed) general compact, polynomially convex regular set \(K\), \((T_{n})_{n=1}^{\infty}\) is replaced by a sequence \((p_{n})_{n=1}^{\infty}\) of polynomials associated with \(K\) and \(\overline{\mathbb{D}}(0,1)\) is replaced by an arbitrary compact, polynomially convex regular set \(E\) (not necessarily equal to \(K\)). We show that this is indeed the case when the polynomials \(p_{n}\) satisfy (1.1) and their zeros are uniformly bounded (we call such polynomials _KW polynomials_). Our main result is the following (see Theorem 3.4.1): **Main Theorem**.: _Let \(K\subset\mathbb{C}\) be a regular polynomially convex compact set and let \((p_{n})_{n=1}^{\infty}\) be a sequence of KW polynomials associated with \(K\). Fix now a sequence \((d_{k})_{k=1}^{\infty}\) of integers greater than \(1\). Then, for every compact, polynomially convex and regular set \(E\), the sequence_ \[\left((p_{d_{k}}\circ...\circ p_{d_{1}})^{-1}(E)\right)_{k=1}^{\infty}\] _is convergent in Klimek's metric to the limit \(\mathcal{K}[(p_{d_{k}})_{k=1}^{\infty}]\), independent of \(E\)._ The set \(\mathcal{K}[(p_{d_{k}})_{k=1}^{\infty}]\) is the non-autonomous Julia set generated by the sequence \((p_{d_{k}})_{k=1}^{\infty}\). We establish basic properties of such Julia sets, showing that they are nonempty, compact, polynomially convex and regular. Our methods are primarily based on the approach developed in [24] (see also [25]). We further study in some more detail the non-autonomous filled Julia set generated by \(t_{d}=\frac{1}{2^{d-1}}T_{d},\ d\in\{1,2,...\}\), and the connections with the theory of best uniform approximation by polynomials on a compact set in \(\mathbb{C}\). We include a good amount of background material in our presentation to make it as self-contained as possible. 
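Before turning to the general theory, the convergence described in the Main Example can be observed numerically. The following short script is an illustrative sketch added here (it is not part of the original argument and assumes nothing beyond the closed formulas recalled above): it composes a few classical Chebyshev polynomials and compares \(\frac{1}{d_k\cdots d_1}\log^{+}\bigl|(T_{d_k}\circ\dots\circ T_{d_1})(z)\bigr|\) with \(g_{[-1,1]}(z)=\log\bigl|z+\sqrt{z^{2}-1}\bigr|\) at sample points off the segment; the specific degrees and sample points are arbitrary choices.

```python
import math, cmath

def cheb(d, z):
    # classical Chebyshev polynomial T_d via the three-term recurrence
    a, b = 1.0, z                      # T_0, T_1
    for _ in range(d - 1):
        a, b = b, 2 * z * b - a        # T_{k+1} = 2 z T_k - T_{k-1}
    return b

def green_segment(z):
    # Green function of [-1,1]: log|z + sqrt(z^2-1)|, choosing the root of modulus >= 1
    w = cmath.sqrt(z * z - 1.0)
    return math.log(max(abs(z + w), abs(z - w)))

degrees = [2, 3, 2, 3, 2]              # any degrees >= 2 will do
for z0 in (2.0 + 0.0j, 1.0j, -0.3 + 0.9j):
    print("z =", z0, " g(z) =", round(green_segment(z0), 6))
    val, total_deg = z0, 1
    for d in degrees:
        val, total_deg = cheb(d, val), total_deg * d
        approx = max(math.log(abs(val)), 0.0) / total_deg
        print("  total degree", total_deg, ":", round(approx, 6))
```

The printed values approach the Green function of the segment as the total degree grows, which is exactly the uniform convergence statement quoted from [26].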
Section 2 contains preliminaries from logarithmic potential theory and complex approximation theory, with quite a detailed discussion of Kalmar-Walsh theorem. Proposition 2.3.2 and Corollary 2.3.3 sharpen some previously available results, while Proposition 2.3.5 seems to be new. More new material appears in Section 3, including the definition of a non-autonomous filled Julia set, the proof of its existence and properties. The approach using Klimek's metric is explained. In Section 4 we explore the case of Chebyshev polynomials on a compact set. ## 2. The Green function, the logarithmic capacity and their approximations ### The Green function and related notions We will study certain sequences of polynomials associated with a compact subset \(K\) of \(\mathbb{C}\), namely polynomials occurring in approximating the Green function (with pole at infinity) of \(K\), provided such a function exists and is continuous. Let us first define the Green function and the logarithmic capacity of a set (mostly following [31, Section II.4]). **Definition 2.1.1**.: Let \(D\subset\mathbb{C}\) be an unbounded domain. Consider a function \(g:D\longrightarrow\mathbb{R}\) with the following properties: 1. \(g_{D}\) is harmonic and positive in \(D\); 2. \(g_{D}(z)\) tends to \(0\) as \(z\to\partial D\); 3. \(g_{D}(z)-\log|z|\) tends to a finite number \(\gamma\) as \(z\to\infty\). If such a function \(g_{D}\) exists, it is called the _Green function_ of \(D\) with pole at infinity. It can be proved that if the Green function exists, it is unique. Fix a compact subset \(K\) of \(\mathbb{C}\). Define the polynomially convex hull of \(K\) as \[\widehat{K}:=\left\{z\in\mathbb{C}:\ \forall p\ \text{polynomial}\ \left|p(z)\right|\leq\|p\|_{K}\right\},\] where \(\|p\|_{K}:=\sup_{z\in K}|p(z)|\). Let \(D_{\infty}^{K}\) denote the unbounded component of \(\mathbb{C}\setminus K\). We have \(\widehat{K}=\mathbb{C}\setminus D_{\infty}^{K}\). The set \(K\) is called _polynomially convex_ if \(K=\widehat{K}\). Note that polynomial convexity of \(K\) is equivalent to the connectedness of \(\mathbb{C}\setminus K\). **Definition 2.1.2**.: Let \(K\subset\mathbb{C}\) be such a compact set that the Green function \(g_{D_{\infty}^{K}}\) of \(D_{\infty}^{K}\) with pole at infinity exists. We say then that \(K\) is _regular_ and define _the Green function \(g_{K}:\mathbb{C}\longrightarrow\mathbb{R}\) of \(K\)_ via formula \[g_{K}(z)=\begin{cases}g_{D_{\infty}^{K}}(z),&\text{ if }z\in D_{\infty}^{K}\\ 0,&\text{ if }z\in\widehat{K}\end{cases}.\] The number \(\exp(-\gamma)\), where \(\gamma\) is the limit from the last item of Definition 2.1.1, will be then called the _logarithmic capacity_ of \(K\) and denoted by \(\operatorname{cap}(K)\). Note that if \(K\) is regular, then \(g_{K}=g_{\widehat{K}}\) and it is a continuous function in view of Definitions 2.1.1 and 2.1.2. Moreover \(g_{K}\) is subharmonic in \(\mathbb{C}\). _Example 2.1.3_.: Let \(a\in\mathbb{C}\), \(R>0\) and let \(K=\overline{\mathbb{D}}(a,R):=\{z:|z-a|\leq R\}\). Then \(g_{K}(z)=\log^{+}(|z-a|/R):=\max\{0,\log(|z-a|/R)\}\) and \(\operatorname{cap}(K)=R\). _Example 2.1.4_.: Let \(K=[-1,1]\). Then \(g_{K}(z)=\log\big{|}z+\sqrt{z^{2}-1}\big{|}\), with the square root branch defined in \(\mathbb{C}\setminus(-\infty,0)\) so that \(\sqrt{1}=1\), and \(\operatorname{cap}(K)=1/2\). _Example 2.1.5_.: Fix an \(R>1\). Let \(K=E_{R}\) be the ellipse with foci \(-1,+1\) and semiaxes \(a=\frac{1}{2}\big{(}R+\frac{1}{R}\big{)},\ b=\frac{1}{2}\big{(}R-\frac{1}{R} \big{)}\). 
Then \(g_{K}(z)=\log^{+}\big{(}\big{|}z+\sqrt{z^{2}-1}\big{|}/R\big{)}\) and \(\operatorname{cap}(K)=(a+b)/2=R/2\). It follows from Definitions 2.1.1 and 2.1.2 that \(\exists C\in\mathbb{R}\ \forall z\in\mathbb{C}:g_{K}(z)\leq C+\log^{+}|z|\). The class of all functions subharmonic in \(\mathbb{C}\) and satisfying such an inequality - the Lelong class - is denoted by \(\mathcal{L}\). A theorem of Siciak (see [21, Theorem 5.6.1]) characterizing the class \(\mathcal{L}\) says that for a function \(u\in\mathcal{L}\) there exists a sequence \((p_{n})_{n=1}^{\infty}\) of complex polynomials in \(\mathbb{C}\) such that \(\forall n\geq 1:\deg\ p_{n}\leq n\) and \(u=(\limsup_{n\to\infty}\frac{1}{n}\log|p_{n}|)^{*}\). Here \({}^{*}\) denotes the upper semicontinuous regularization of a function, \(v^{*}(x)=\limsup_{y\to x}v(y)\). We will consider such sequences of polynomials for \(g_{K}\). The following estimates will be used later: **Lemma 2.1.6**.: _Let \(K\subset\mathbb{C}\) be compact and regular. Then_ \[\exists M>0:\quad|z|\geq M\quad\Longrightarrow\quad 2<g_{K}(z)+\log \operatorname{cap}(K)<1+\log|z|.\] Proof.: In view of Definitions 2.1.1 and 2.1.2, \[\lim_{z\to\infty}\big{(}g_{K}(z)-\log|z|+\log\operatorname{cap}(K)\big{)}=0,\] so there exists \(\widetilde{M}>0\), such that \[|z|\geq\widetilde{M}\quad\Longrightarrow\quad-1+\log|z|<g_{K}(z)+\log\mathrm{cap} (K)<1+\log|z|.\] It suffices now to take \(M:=\max\left(\widetilde{M},e^{3}+1\right).\) **Definition 2.1.7**.: Let \(\varepsilon>0\) and let \(K\subset\mathbb{C}\) be compact and regular. Then the \(\varepsilon\)-sublevel set of \(g_{K}\) (also called the \(\varepsilon\)_-augmentation_ of \(K\)) is \(K_{\varepsilon}:=\{z\in\mathbb{C}:\;g_{K}(z)\leq\varepsilon\}.\) If in addition \(K\) is polynomially convex, the family \(\{K_{\varepsilon}\}_{\varepsilon>0}\) forms a neighbourhood base of the set \(K\) in \(\mathbb{C}\) as shown in [22, Corollary 1]. It was proved by M. Mazurek (published in [33, Proposition 5.11]) that \[g_{K_{\varepsilon}}=\max(0,g_{K}-\varepsilon). \tag{2.1}\] This result implies the following properties of the sublevel sets: **Proposition 2.1.8** (cf. [6, Proposition 2.3]).: _Let \(K\) be a regular compact subset of \(\mathbb{C}\). Then: \((i)\) For every \(\varepsilon>0\) the set \(K_{\varepsilon}\) is polynomially convex. \((ii)\) For every \(\varepsilon>0\) the set \(K_{\varepsilon}\) is regular. \((iii)\)\(K_{\varepsilon+\sigma}=(K_{\varepsilon})_{\sigma}\) for every \(\varepsilon,\sigma>0\). \((iv)\)\(\mathrm{cap}(K_{\varepsilon})=\exp\left(-\gamma+\varepsilon\right)\)._ _Example 2.1.9_.: From Examples 2.1.4 and 2.1.5 it follows that for the segment \(K=[-1,1]\) the \(\varepsilon\)-sublevel set of \(g_{K}\) is the (filled) ellipse with foci \(-1,+1\) and semiaxes \(a=\frac{1}{2}(e^{\varepsilon}+e^{-\varepsilon}),\ b=\frac{1}{2}(e^{ \varepsilon}-e^{-\varepsilon}).\) The well known Bernstein-Walsh inequality ([35, Lemma in Section 4.6]; for a new proof see also [32]) states that for every regular compact set \(K\subset\mathbb{C}\) and for any polynomial \(f\) of degree \(n\) \[\forall z\in\mathbb{C}:\quad\frac{|f(z)|}{\|f\|_{K}}\leq\exp(ng_{K}(z)).\] In particular \[\forall z\in\mathbb{C}:\qquad|f(z)|^{1/n}\leq\|f\|_{K}^{1/n}\exp(g_{K}(z)). \tag{2.2}\] Recall also the following polynomial transformation formula: Let \(f\) be a polynomial of degree \(n\geq 1\). Then \[g_{f^{-1}(K)}=\frac{1}{n}g_{K}\circ f. 
\tag{2.3}\] In particular, for any polynomial \(f\) of degree \(n\geq 1\) the preimage under \(f\) of a regular set, e.g., a closed disk, a line segment or an ellipse, is a regular set. ### Sequences of KW polynomials - definitions and examples Before formulating our main definition let us recall the known notion of Chebyshev polynomials. **Definition 2.2.1**.: Let \(E\subset\mathbb{C}\) be a compact set. A monic polynomial \(t_{n}\) of degree \(n\geq 1,\)\(t_{n}(z)=z^{n}+a_{n-1}z^{n-1}+...+a_{1}z+a_{0},\) is called the \(n\)_th Chebyshev polynomial_ (or the \(n\)_th minimal polynomial_) on \(E\) if \(\|t_{n}\|_{E}\leq\|q\|_{E}\) for any monic polynomial \(q\) of degree \(n.\) Let us recall a well known example of Chebyshev polynomials (cf. Main Example from the Introduction). _Example 2.2.2_.: Let \(E=[-1,1]\) and \(n\geq 1\). Then \(t_{n}:=\frac{1}{2^{n-1}}T_{n}\) is the \(n\)th minimal polynomial on \(E,\) where \(T_{n}\) satisfies the formula \(T_{n}\left(\frac{z+z^{-1}}{2}\right)=\frac{z^{n}+z^{-n}}{2}\). The polynomials \(T_{n}\) are called the _classical Chebyshev polynomials_. The following fact is due to Fejer [16], but can be also found e.g. in [27]. **Lemma 2.2.3**.: _All zeros of the Chebyshev polynomials on a compact set \(E\) lie in the convex hull \(\mathrm{conv}(E)\)._ The main definition of our article follows. **Definition 2.2.4**.: Let \(K\subset\mathbb{C}\) be a regular polynomially convex compact set. Consider a sequence of polynomials \((p_{n})_{n=1}^{\infty}\) such that \[p_{n}:\mathbb{C}\ni z\longmapsto\big{(}z-\zeta_{1}^{(n)}\big{)}...\big{(}z- \zeta_{n}^{(n)}\big{)}\in\mathbb{C},\quad n\in\{1,2,...\}.\] Thus all \(p_{n}\) are monic polynomials and \(\forall n:\ \deg p_{n}=n.\) We say that \((p_{n})_{n=1}^{\infty}\) is a _sequence of KW polynomials_ associated with \(K\) if the set \(\bigcup_{n\in\mathbb{N}}\Big{\{}\zeta_{1}^{(n)},...,\zeta_{n}^{(n)}\Big{\}}\) is bounded and \[\lim_{n\to\infty}\|p_{n}\|_{K}^{1/n}=\mathrm{cap}(K). \tag{2.4}\] Recall that (2.4) is the Kalmar-Walsh condition (1.1) from the Introduction. Before we comment extensively on this definition, let us note that Chebyshev polynomials (Definition 2.2.1) on a compact polynomially convex regular set \(K\) provide examples of polynomials satisfying Definition 2.2.4. We observe the following fact: **Corollary 2.2.5**.: _If \(K\subset\mathbb{C}\) is compact, polynomially convex and regular then the sequence of Chebyshev polynomials on \(K\) is a KW sequence associated with \(K\)._ Proof.: The condition on the zeros of the polynomials follows from Lemma 2.2.3. Moreover, in [17] it was proved that (2.4) holds for the sequence of Chebyshev polynomials on a compact set. The letters KW in Definition 2.2.4 refer to a very well known theorem. Namely, if \(\bigcup_{n\in\mathbb{N}}\left\{\zeta_{1}^{(n)},...,\zeta_{n}^{(n)}\right\}\subset K\), then the classical result of Kalmar and Walsh (see [35, Section 7.3, Theorem 3 and Section 7.4, Theorem 4] or [7, Theorem 1.4 and Theorem 1.5]; cf. [19, II.2.B, Theorem 1 and Lemma 1]) says that (2.4) is equivalent to the fact, that \[\lim_{n\to\infty}|p_{n}(z)|^{1/n}=\operatorname{cap}(K)\cdot\exp(g_{K}(z)) \tag{2.5}\] uniformly on compact subsets of \(\mathbb{C}\setminus K\), as well as to the following statement: for any function \(f\) holomorphic in a neighbourhood of \(K\) the sequence of its interpolation polynomials with nodes \(\zeta^{(n)}\) converge uniformly on \(K\) to \(f\). 
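As a concrete illustration of Definition 2.2.4 and of the Kalmar-Walsh condition, the following sketch (added for illustration, not part of the original text; it uses only the closed forms recalled above for the minimal polynomials \(t_{n}=\frac{1}{2^{n-1}}T_{n}\) on \(K=[-1,1]\)) checks numerically that \(\|t_{n}\|_{K}^{1/n}\to\operatorname{cap}(K)=\tfrac12\) and that \(|t_{n}(z)|^{1/n}\to\operatorname{cap}(K)\exp(g_{K}(z))\) at a point \(z\notin K\), in agreement with (2.4) and (2.5).

```python
import math, cmath

def T(n, z):
    # T_n(z) = (u^n + u^{-n}) / 2 with u = z + sqrt(z^2 - 1); valid for either root
    u = z + cmath.sqrt(z * z - 1.0)
    return (u ** n + u ** (-n)) / 2.0

cap = 0.5                                   # logarithmic capacity of K = [-1, 1]
z = 2.0 + 1.0j                              # a point outside K
w = cmath.sqrt(z * z - 1.0)
gK = math.log(max(abs(z + w), abs(z - w)))  # Green function g_K(z)

for n in (5, 10, 20, 40):
    sup_norm = 2.0 ** (1 - n)               # ||t_n||_K = 2^(1-n) for t_n = T_n / 2^(n-1)
    kw_left = sup_norm ** (1.0 / n)         # should tend to cap(K) = 1/2   (condition (2.4))
    t_n_z = T(n, z) / 2.0 ** (n - 1)
    bw_left = abs(t_n_z) ** (1.0 / n)       # should tend to cap(K)*exp(g_K(z))  (statement (2.5))
    print(n, round(kw_left, 4), round(bw_left, 4), round(cap * math.exp(gK), 4))
```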
Some examples of sequences \((p_{n})_{n=1}^{\infty}\) such that \(\bigcup_{n\in\mathbb{N}}\left\{\zeta_{1}^{(n)},...,\zeta_{n}^{(n)}\right\}\subset K\) and (2.4) is satisfied can be found in [19, II.2]. Thus more examples of KW polynomials are provided. Let us comment more on the situation when the Kalmar-Walsh condition (2.4) is equivalent to (2.5) or in other words to \[\lim_{n\to\infty}\frac{1}{n}\log|p_{n}(z)|=g_{K}(z)+\log\operatorname{cap}(K). \tag{2.6}\] The work [7] investigates, among other things, conditions sufficient for (2.4) (see [7, Theorem 1.5]), under the assumptions that \(K\) is polynomially convex and that the zeros of all \(p_{n}\) lie in \(K\). The equivalence between (2.4) and (2.5) is stated there as part of [7, Theorem 1.4]. For the proof the authors refer the reader to [35], where the topic is treated in [35, Section 7.3, Theorem 3 and Section 7.4, Theorem 4]. The assumptions in [35] are that \(K\) is a compact, regular and polynomially convex set and that the zeros of \(p_{n}\) do not have accumulation points in \(\mathbb{C}\setminus K\) (in particular, they do not have to lie all in \(K\)). The convergence in (2.6) is then shown to hold uniformly on compact subsets of \(\mathbb{C}\setminus K\). The equivalence between (2.4) and (2.6), with uniform convergence on compact subsets of \(\mathbb{C}\setminus K\) in (2.6), is also explicitly proved in [19, II.2.B, Theorem 1 and Lemma 1], under the assumption that \(K\) is a compact subset of \(\mathbb{C}\) whose complement \(\mathbb{C}\setminus K\) is a simply connected domain. Moreover, all zeros of all \(p_{n}\) are assumed to lie in \(K\). A proof of implication \[(2.4)\Longrightarrow(2.6)\] in [27] for Chebyshev (minimal) polynomials on a compact set \(K\) is easily generalized to any polynomial sequence satisfying (2.4). The zeros of \(p_{n}\) are assumed to be uniformly bounded but are allowed to have limit points in the unbounded component of \(\mathbb{C}\setminus K\) (as may be the case for Chebyshev polynomials). Let \(Z^{\prime}\) be the set of all limit points of zeros of \(p_{n}\) in the unbounded component of \(\mathbb{C}\setminus K\). Then, if (2.4) holds, (2.6) holds uniformly on compact subsets of \(\mathbb{C}\setminus(K\cup Z^{\prime})\). In [27] there is also an example of a set \(K\) and a sequence of polynomials \(p_{n}\) satisfying (1.1) for which (2.6) may fail at a point \(z\in Z^{\prime}\). There is no treatment of bounded components of \(\mathbb{C}\setminus K\), so there seems to be no loss in assuming that \(K\) is polynomially convex. Finally, we are assuming that the set of the zeros of all \(p_{n}\) is bounded. It can be shown that this assumption is not too restrictive. Namely, a polynomial sequence \((p_{n})_{n=1}^{\infty}\) such that (2.4) holds can be replaced by a polynomial sequence \((q_{n})_{n=1}^{\infty}\) such that (2.4) holds for \(q_{n}\) and the zeros of \(q_{n}\) are uniformly bounded. More precisely, the following proposition is true: **Proposition 2.2.6** ([18, Theorem 12]).: _Let \(K\subset\mathbb{C}\) be a compact set with \(\tau=\operatorname{cap}(K)>0\) and let \(\Gamma\) be a Jordan curve such that \(K\subset\operatorname{int}\widehat{\Gamma}\). Suppose that \((p_{n})_{n=2}^{\infty}\) is a sequence of polynomials such that \(p_{n}(z)=z^{n}+a_{n-1}^{(n)}z^{n-1}+...+a_{0}^{(n)}\) and \(\limsup_{n\to\infty}\|p_{n}\|_{K}^{1/n}\leq\tau\).
Let us decompose \(p_{n}\) as \(p_{n}(z)=q_{n-\sigma}(z)r_{\sigma}(z)\), where \(r_{\sigma}(z)=z^{\sigma}+...\) is a polynomial whose zeros are precisely the zeros of \(p_{n}\) not belonging to \(\widehat{\Gamma}\) (or \(r_{\sigma}\equiv 1\) if there are no such zeros)._ _Then we have: \(\sigma=\sigma(n)=o(n)\), \(\lim_{n\to\infty}\|q_{n-\sigma}\|_{K}^{1/(n-\sigma)}=\tau\) and \(\lim_{n\to\infty}\frac{1}{n-\sigma}\log|q_{n-\sigma}(z)|=\log\tau+g_{K}(z)\) uniformly on any compact subset of \(\mathbb{C}\setminus\widehat{\Gamma}\). Moreover, \(\lim_{n\to\infty}\|r_{\sigma}\|_{K}^{1/n}=1\)._ ### Properties of sequences of KW polynomials We will now establish some properties which will be crucial in the further investigation. We start with a lemma valid for all polynomials of degree at least \(2\). **Lemma 2.3.1**.: _If \(P:\mathbb{C}\longrightarrow\mathbb{C}\) is a polynomial of degree at least \(2\), then_ \[\forall\theta>1\ \exists R=R(\theta)>0:\quad|z|\geq R\quad\Longrightarrow\quad|P(z)|\geq\theta|z|. \tag{2.7}\] Proof.: Observe that \[\lim_{z\to\infty}\frac{|P(z)|}{|z|}=\infty,\] since \(\deg P\geq 2\). The relation (2.7) follows. A property of KW polynomials now follows. **Proposition 2.3.2**.: _Let \(K\subset\mathbb{C}\) be a regular polynomially convex compact set. Let further \((p_{n})_{n=1}^{\infty}\) be a sequence of KW polynomials associated with \(K\). Then_ \[\exists R>0\ \forall n\geq 2:\quad|z|\geq R\quad\Longrightarrow\quad|p_{n}(z)|\geq e|z|.\] Proof.: By Definition 2.2.4 there exists \(r>0\) such that all zeros of all \(p_{n}\) lie in \(\mathbb{D}(0,r)\supset K\). Put \(\gamma:=\log\operatorname{cap}(K)\). Let \(M>0\) be as in Lemma 2.1.6 and fix \(R_{0}>\max(M,r)\). We have \(\min\{g_{K}(z)+\gamma:\ |z|=R_{0}\}>2\) by Lemma 2.1.6. As \(n\to\infty\), since (2.4) implies (2.6), \[\frac{1}{n}\log|p_{n}(z)|-\frac{1}{n}\log|z|\longrightarrow g_{K}(z)+\gamma\] uniformly on compact subsets of \(\mathbb{C}\setminus\overline{\mathbb{D}}(0,r)\), so we can choose an integer \(N=N(R_{0})\geq 2\) such that \[\forall n\geq N:\quad\frac{1}{n}\log|p_{n}(z)|-\frac{1}{n}\log|z|>1, \tag{2.8}\] provided \(|z|=R_{0}\). Since all zeros of the polynomials \(p_{n}\) are contained in \(\mathbb{D}(0,r)\varsubsetneq\overline{\mathbb{D}}(0,R_{0})\), we may apply the Minimum Principle for harmonic functions, which implies that (2.8) is satisfied for all \(z\in\mathbb{C}\) such that \(|z|\geq R_{0}\). Therefore \[\forall n\geq N:\quad|z|\geq R_{0}\quad\Longrightarrow\quad|p_{n}(z)|\geq e|z|.\] Now, by Lemma 2.3.1 \[\forall j\in\{2,3,...,N-1\}\ \exists R_{j}>0:\quad|z|\geq R_{j}\quad\Longrightarrow\quad|p_{j}(z)|\geq e|z|.\] It suffices to take \(R:=\max\{R_{j}:j\in\{0,2,3,...,N-1\}\}\).
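For the toy sequence of minimal polynomials \(t_{n}\) on \([-1,1]\), the conclusion of Proposition 2.3.2 can be observed numerically. The sketch below is an added illustration (the radius \(R=3\), the grid on the circle and the use of the closed form \(T_{n}(z)=\tfrac12(u^{n}+u^{-n})\), \(u=z+\sqrt{z^{2}-1}\), are ad hoc choices, and a grid check is of course not a proof): it verifies that \(|t_{n}(z)|\geq e|z|\) on sampled points of the circle \(|z|=3\) for \(n=2,\dots,40\).

```python
import math, cmath

def t(n, z):
    # minimal polynomial t_n = T_n / 2^(n-1) on [-1,1], via T_n(z) = (u^n + u^{-n}) / 2
    u = z + cmath.sqrt(z * z - 1.0)
    return (u ** n + u ** (-n)) / 2.0 ** n

R = 3.0
thetas = [2.0 * math.pi * k / 2000 for k in range(2000)]
worst = min(
    min(abs(t(n, R * cmath.exp(1j * th))) / R for th in thetas)
    for n in range(2, 41)
)
print("min over n=2..40 and sampled |z|=3 of |t_n(z)|/|z| =", round(worst, 4),
      " (compare with e =", round(math.e, 4), ")")
```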
**Corollary 2.3.3**.: _If \(K\subset\mathbb{C}\) is a regular polynomially convex compact set and \((p_{n})_{n=1}^{\infty}\) is a sequence of KW polynomials associated with \(K\), then_ \[\exists\varrho>0\ \forall R\geq\varrho\ \forall n\geq 2:\quad p_{n}^{-1}\left(\overline{\mathbb{D}}(0,R)\right)\subset\overline{\mathbb{D}}(0,R).\] Proof.: It follows from the previous proposition that \[\exists\varrho>0\ \forall R\geq\varrho\ \forall n\geq 2:\quad|z|\geq R\quad\Longrightarrow\quad|p_{n}(z)|\geq e|z|,\] since \(|z|\geq R\Longrightarrow|z|\geq\varrho.\) In particular for a fixed \(R\geq\varrho\) \[|z|\geq R\quad\Longrightarrow\quad|p_{n}(z)|>R.\] _Remark 2.3.4_.: Proposition 2.3.2 and Corollary 2.3.3 have some similarities to [12, Proposition 3.3], where a different asymptotic condition on polynomials was assumed. We do not know of any examples of sequences of polynomials satisfying that condition which would not be KW polynomials. At the end of this section we want to note the following characterization of a compact polynomially convex set via KW polynomials associated with it. Recall that the set \(Z^{\prime}\), defined in the proposition below, appeared also in the previous subsection (in the context of the convergence in (2.6)). **Proposition 2.3.5**.: _Let \(K\subset\mathbb{C}\) be a regular polynomially convex compact set and let \((p_{n})_{n=1}^{\infty}\) be a sequence of KW polynomials associated with \(K\). Let \(Z^{\prime}\) be the set of the limit points of the set of the zeros of all \(p_{n}\) contained in \(\mathbb{C}\setminus K\). If \(Z^{\prime}=\emptyset\), then \(K=\bigcap_{n\geq 1}Z_{n},\) where_ \[Z_{n}=\{z\in\mathbb{C}:|p_{n}(z)|\leq\|p_{n}\|_{K}\},\quad n\in\{1,2,...\}.\] Proof.: The inclusion \(K\subset\bigcap_{n\geq 1}Z_{n}\) follows from polynomial convexity of \(K.\) Assume now that \(z\in\bigcap_{n\geq 1}Z_{n}\setminus K\). By the definition of \(Z_{n}\) \[\forall n\in\{1,2,...\}:\quad|p_{n}(z)|^{1/n}\leq\|p_{n}\|_{K}^{1/n}.\] By (2.4) and its consequence (2.5), taking the limit as \(n\to\infty\) we see that \(\mathrm{cap}(K)\cdot\mathrm{exp}(g_{K}(z))\leq\mathrm{cap}(K),\) hence \(g_{K}(z)\leq 0\). This is a contradiction, since \(g_{K}>0\) on \(\mathbb{C}\setminus K=\mathbb{C}\setminus\widehat{K}\) by the regularity and polynomial convexity of \(K\). ## 3. Polynomial Julia type sets ### Autonomous Julia sets **Definition 3.1.1**.: Let \(P:\mathbb{C}\longrightarrow\mathbb{C}\) be a polynomial of degree \(d\geq 2\). The _(autonomous) filled Julia set_ of \(P\) is \[\mathcal{K}[P]:=\{z\in\mathbb{C}:\ (P^{n}(z))_{n=1}^{\infty}\text{ is bounded}\}.\] It is well known that the filled Julia set is nonempty, compact, perfect, totally invariant under \(P\) (i.e., \(P(\mathcal{K}[P])=P^{-1}(\mathcal{K}[P])=\mathcal{K}[P]\)) and moreover \[\mathcal{K}[P]=\mathbb{C}\setminus\left\{z\in\mathbb{C}:\;\lim_{n\to\infty}P^{n}(z)=\infty\right\}. \tag{3.1}\] For more information, see [11]. _Example 3.1.2_.: \[\forall n\geq 2:\quad\mathcal{K}[z\longmapsto z^{n}]=\overline{\mathbb{D}}(0,1)\quad\text{ and }\quad\mathcal{K}[T_{n}]=[-1,1],\] where \(T_{n}\) is the \(n\)th classical Chebyshev polynomial for \(n\geq 2\). Note that polynomials of degree \(1\) could be included in Definition 3.1.1. However, then for \(z\longmapsto z\), we would obtain the whole complex plane and for \(z\longmapsto z+1\) the empty set as the filled Julia sets, so we would lose two nice properties of Julia sets. This is why the definition is restricted to polynomials of degree at least \(2\).
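Membership in a filled Julia set can be probed directly from Definition 3.1.1 by iterating and testing boundedness. The sketch below is an added, simplified illustration (the escape threshold \(10^{6}\) and the iteration count are ad hoc choices, so it is only a heuristic test); it reproduces the two cases of Example 3.1.2 for \(n=2\): points of modulus less than \(1\) have bounded orbits under \(z\mapsto z^{2}\), and points off \([-1,1]\) escape under \(T_{2}\).

```python
def stays_bounded(p, z, max_iter=200, escape_radius=1e6):
    # crude membership test for the filled Julia set K[p]:
    # iterate p and flag escape once the orbit leaves a large disk
    for _ in range(max_iter):
        z = p(z)
        if abs(z) > escape_radius:
            return False
    return True

square = lambda z: z * z            # K[z -> z^2] = closed unit disk
T2 = lambda z: 2 * z * z - 1        # K[T_2] = [-1, 1]

for z0 in (0.9, 1.1, 0.5 + 0.5j):
    print("z -> z^2 :", z0, stays_bounded(square, z0))
for z0 in (0.7, 1.2, 0.5 + 0.2j):
    print("T_2      :", z0, stays_bounded(T2, z0))
```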
Let us define the notion of escape radius of a polynomial, following [25, page 53] (a slightly different definition was proposed in [14]). **Definition 3.1.3**.: An _escape radius_ for a polynomial \(P:\mathbb{C}\longrightarrow\mathbb{C}\) is a number \(R>0\) with the following property \[|z|>R\quad\Longrightarrow\quad\lim_{n\to\infty}P^{n}(z)=\infty.\] **Lemma 3.1.4**.: \(R>0\) _is an escape radius for a polynomial \(P\) if and only if \(\mathcal{K}[P]\subset\overline{\mathbb{D}}(0,R).\) In particular, if \(R\) is an escape radius for \(P\) and \(\varrho>R\), then \(\varrho\) is an escape radius for \(P\) too._ Proof.: This follows from the definition and from (3.1). **Corollary 3.1.5**.: _If \(R>0\) is an escape radius for a polynomial \(P\), then_ \[\mathcal{K}[P]=\bigcap_{n=1}^{\infty}P^{-n}\left(\overline{\mathbb{D}}(0,R) \right).\] Proof.: Let \(R>0\) be such that \(\mathcal{K}[P]\subset\overline{\mathbb{D}}(0,R)\). Then for every \(n\geq 1\) we have \(\mathcal{K}[P]=P^{-n}(\mathcal{K}[P])\subset P^{-n}(\overline{\mathbb{D}}(0,R))\). Conversely, if \(z\in P^{-n}\left(\overline{\mathbb{D}}(0,R)\right)\) for every \(n\geq 1\), then \((P^{n}(z))_{n=1}^{\infty}\subset\overline{\mathbb{D}}(0,R)\), hence \(z\in\mathcal{K}[P]\). A sufficient condition for a positive number to be an escape radius for a polynomial was given in Lemma 2.3.1. ### Non-autonomous Julia sets We will consider now a generalization of the autonomous filled Julia set. As in Definition 3.1.1 here also usually only polynomials of degree greater than \(1\) are considered (cf. e.g. [10], [25]), however since we are interested in sequences of KW polynomials, we will allow one polynomial to be of degree \(1\). **Definition 3.2.1**.: Let \((d_{n})_{n=1}^{\infty}\) be a sequence of integers such that \(d_{1}\geq 1\) and \(\forall n\geq 2:d_{n}\geq 2.\) Let \((p_{n})_{n=1}^{\infty}\) be a sequence of polynomials with \(\deg p_{n}=d_{n}\). We define the _(non-autonomous) filled Julia set_ of the sequence \((p_{n})_{n=1}^{\infty}\) to be \[\mathcal{K}[(p_{n})_{n=1}^{\infty}]:=\{z\in\mathbb{C}:\ ((p_{n}\circ\cdots \circ p_{1})(z))_{n=1}^{\infty}\ \text{is bounded}\}.\] _Remark 3.2.2_.: It is straightforward that in the situation from the definition \[\mathcal{K}[(p_{n})_{n=1}^{\infty}]=p_{1}^{-1}\left(\mathcal{K}[(p_{n})_{n=2 }^{\infty}]\right).\] In particular if \(p_{1}:\mathbb{C}\ni z\longmapsto z\in\mathbb{C}\), then \(\mathcal{K}[(p_{n})_{n=1}^{\infty}]=\mathcal{K}[(p_{n})_{n=2}^{\infty}]\). Note that it follows from Definition 3.2.1 that \[\mathcal{K}[(p_{n})_{n=1}^{\infty}]=\bigcup_{r\in\mathbb{N}}\bigcap_{n\geq 1 }(p_{n}\circ...\circ p_{1})^{-1}\left(\overline{\mathbb{D}}(0,r)\right),\] hence \(\mathcal{K}[(p_{n})_{n=1}^{\infty}]\) is of \(F_{\sigma}\)-type. A non-autonomous filled Julia set may be finite, which is impossible for an autonomous one. As shown in [10], if we take \(p_{n}:\mathbb{C}\ni z\longmapsto n^{2^{n}}z^{2}\in\mathbb{C},n\geq 1\), then \(\mathcal{K}[(p_{n})_{n=1}^{\infty}]=\{0\}\). Similarly, if we just exchange the first polynomial taking \(q_{1}:\mathbb{C}\ni z\longmapsto z^{2}-1\in\mathbb{C}\) and \(\forall n\geq 2:q_{n}:=p_{n}\), then \(\mathcal{K}[(q_{n})_{n=1}^{\infty}]=\{-1,1\}\). If \((p_{n})_{n=1}^{\infty}\) in Definition 3.2.1 above is periodic (i.e., there exists an \(m\) such that \(p_{m+i}=p_{i}\) for every \(i\)), then we obtain the autonomous filled Julia set (see Definition 3.1.1) of the composition of the polynomials \(p_{1},...,p_{m}\). 
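The same boundedness test extends to Definition 3.2.1 by composing a different polynomial at every step. The sketch below (again an added illustration with ad hoc thresholds, not a statement from the paper) checks two of the cases mentioned above: for \(p_{n}(z)=z^{d_{n}}\) the compositions stay bounded exactly for \(|z|\leq 1\), while for the sequence \(p_{n}(z)=n^{2^{n}}z^{2}\) from [10] every starting point other than \(0\) blows up; the latter is tested in logarithmic scale to avoid overflow.

```python
import math

def composition_bounded(polys, z, escape_radius=1e6):
    # test whether the partial compositions p_n o ... o p_1 stay in a large disk
    for p in polys:
        z = p(z)
        if abs(z) > escape_radius:
            return False
    return True

# p_n(z) = z^{d_n} with d_n >= 2: the non-autonomous filled Julia set is the unit disk
powers = [lambda z, d=d: z ** d for d in (2, 3, 2, 4, 3, 2, 2, 3)]
for z0 in (0.95, 1.05, 0.6 + 0.7j):
    print("z^{d_n} sequence:", z0, composition_bounded(powers, z0))

# p_n(z) = n^{2^n} z^2 (the example from [10] with K = {0}); track log|z| to avoid overflow:
# log|p_n(w)| = 2^n log n + 2 log|w|, hence log|(p_n o ... o p_1)(z)| = 2^n (log|z| + log n!)
for z0 in (1e-3, 1e-9):
    L = math.log(z0)
    for n in range(1, 16):
        L = (2 ** n) * math.log(n) + 2 * L
    print("start", z0, ": log|composition after 15 steps| =", round(L, 1),
          "(positive, so the point escapes)")
```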
In particular for a constant sequence we also get an autonomous Julia set. The interesting case is when the sequence is not periodic, e.g., when each polynomial in the sequence has a different degree. _Example 3.2.3_.: If \((d_{n})_{n=1}^{\infty}\) is a sequence of integers not smaller than \(2\) and \(p_{n}:\mathbb{C}\ni z\longmapsto z^{d_{n}}\), then \(\mathcal{K}[(p_{n})_{n=1}^{\infty}]=\overline{\mathbb{D}}(0,1)\). Furthermore \(\mathcal{K}[(T_{d_{n}})_{n=1}^{\infty}]=[-1,1]\). This follows from Example 3.1.2 and the composition formulae \(p_{n}\circ p_{k}=p_{nk}\) and \(T_{n}\circ T_{k}=T_{nk}\). In particular \(\mathcal{K}[(z\longmapsto z^{n})_{n=1}^{\infty}]=\overline{\mathbb{D}}(0,1)\) and \(\mathcal{K}[(T_{n})_{n=1}^{\infty}]=[-1,1]\). The following example is a consequence of the previous one and of Remark 3.2.2. _Example 3.2.4_.: For any non-constant polynomial \(P:\mathbb{C}\longrightarrow\mathbb{C}\), the sets \(P^{-1}\left(\overline{\mathbb{D}}(0,1)\right)\) and \(P^{-1}\left([-1,1]\right)\) are (non-autonomous) filled Julia sets. The authors of [9] define non-autonomous Julia sets for special sequences of polynomials. Namely, they define a class \(\mathcal{B}\) of sequences \((f_{n})_{n=1}^{\infty}\), where \[f_{n}(z)=\sum_{j=0}^{d_{n}}a_{n,j}z^{j}\] and \(d_{n}\geq 2\) for any \(n\). The sequences in this class satisfy some conditions, in particular: (P2) there is a constant \(A\geq 0\) such that \(|a_{n,j}|\leq A|a_{n,d_{n}}|\) for \(j\in\{0,...,d_{n}\}\) and all integers \(n\). However, this condition does not hold for some important sequences of polynomials. _Remark 3.2.5_.: The sequence of Chebyshev polynomials on \([-1,1]\) does not belong to the class \(\mathcal{B}\). Proof.: Recall that \(T_{n}\) is the classical Chebyshev polynomial of degree \(n\) and the \(n\)th Chebyshev polynomial on \([-1,1]\) is \(\frac{1}{2^{n-1}}T_{n}\) (see Example 2.2.2). We may write \(T_{n}(z)=2^{n-1}z^{n}+a_{n-1}^{(n)}z^{n-1}+...+a_{0}^{(n)}\). Note that \(a_{n-1}^{(n)}=0\) for every \(n\geq 1\). First we will prove that \(a_{n-2}^{(n)}=-n2^{n-3}\) for \(n\in\{2,3,...\}\). Indeed, this is true for \(T_{2}(z)=2z^{2}-1\) and \(T_{3}(z)=4z^{3}-3z\). To argue by induction, let \(n\) be such that \(a_{k-2}^{(k)}=-k2^{k-3}\) for \(k\in\{2,3,...,n\}\). From the recurrence formula \(T_{n+1}(z)=2zT_{n}(z)-T_{n-1}(z)\) (valid for all \(n\geq 1\), if we set \(T_{0}(z)\equiv 1\)) we can express \(T_{n+1}(z)\) as \[2z\left(2^{n-1}z^{n}+(-n2^{n-3})z^{n-2}+a_{n-3}^{(n)}z^{n-3}+...+a_{0}^{(n)}\right)+\] \[-\left(2^{n-2}z^{n-1}+(-(n-1))2^{n-4}z^{n-3}+a_{n-4}^{(n-1)}z^{n-4}+...+a_{0}^{(n-1)}\right).\] Then \(a_{n-1}^{(n+1)}=-n2^{n-2}-2^{n-2}=-(n+1)2^{n-2}\), as claimed. Thus for the polynomial \(\frac{1}{2^{n-1}}T_{n}\) the coefficient corresponding to \(z^{n-2}\) is \(-n/4\). Hence condition \((P2)\) does not hold and \(\left(\frac{1}{2^{n-1}}T_{n}\right)_{n=1}^{\infty}\notin\mathcal{B}\). For the explicit expression of coefficients of \(T_{n}\) corresponding to the powers of the variable \(z\) see [30, T1.2 (40)]. Remark 3.2.5 shows that the approach from [9] cannot be used in the study of non-autonomous Julia sets for the sequences of Chebyshev polynomials on compact sets. Another approach was proposed in [25]. There all polynomials from the sequence \((p_{n})_{n=1}^{\infty}\) have to have a common escape radius \(R\) such that \[\sup_{n}\|p_{n}\|_{\overline{\mathbb{D}}(0,R)}<\infty.
\tag{3.2}\] In this case \(\mathcal{K}[(p_{n})_{n=1}^{\infty}]\) is compact, non-empty and has some better properties too. Let us however note that no proof is provided in [25] and one has to follow ideas from [24], which are presented in quite a specific case of polynomials of the same degree. Note first that not all sequences have the property of common escape radius. For instance there is no common escape radius for the sequence \((z\longmapsto z^{2}-n)_{n=1}^{\infty}\). One can namely check that the smallest escape radius for polynomial \(z\longmapsto z^{2}-n\) is \(\frac{1}{2}+\sqrt{\frac{1}{4}+n}\) (see also Example 4.1.7). On the other hand, even if there is a common escape radius, the condition (3.2) does not have to be satisfied, which can be seen in the following example. If \(p_{1}:\mathbb{C}\ni z\longmapsto z^{2}-2\in\mathbb{C}\), then \(\mathcal{K}[p_{1}]=[-2,2]\), therefore in view of Lemma 3.1.4 the smallest escape radius for \(p_{1}\) is \(2\). Take now \(p_{n}:\mathbb{C}\ni z\longmapsto z^{n}\in\mathbb{C}\) for \(n\geq 2\). It is obvious that \(R\) is a common escape radius for thus defined sequence \((p_{n})_{n=1}^{\infty}\) if and only if \(R\geq 2\). However \(\sup_{n}\|p_{n}\|_{\overline{\mathbb{D}}(0,2)}\geq\sup_{n}2^{n}=\infty\). It might be hence difficult to follow the idea from [25]. Moreover, in the case of polynomials of different degrees one has to totally rewrite the proof from [24]. Therefore we prefer to present our whole route, for the completeness of the article, without referring to that case. The following theorem is our first result about the non-autonomous Julia set defined with use of a sequence of KW polynomials. **Theorem 3.2.6**.: _Let \(K\subset\mathbb{C}\) be a regular polynomially convex compact set and \((p_{n})_{n=1}^{\infty}\) be a sequence of KW polynomials associated with \(K\). If \((d_{k})_{k=1}^{\infty}\) is a sequence of integers not smaller than \(2\), then \(\mathcal{K}[(p_{d_{k}})_{k=1}^{\infty}]\) is nonempty and compact._ _Moreover \(\mathcal{K}[(p_{n})_{n=1}^{\infty}]\) is nonempty and compact too._ Proof.: Let \(R\) be as in Proposition 2.3.2. We have \[|z|\geq R\quad\Longrightarrow\quad|(p_{d_{k}}\circ...\circ p_{d_{1}})(z)|\geq e ^{k}|z|\longrightarrow\infty,\ \text{if}\ k\rightarrow\infty.\] Therefore \(\mathcal{K}[(p_{d_{k}})_{k=1}^{\infty}]=\bigcap_{k=1}^{\infty}(p_{d_{k}}\circ...\circ p_{d_{1}})^{-1}\left(\overline{\mathbb{D}}(0,R)\right)\). It follows that \(\mathcal{K}[(p_{n})_{n=2}^{\infty}]\) is nonempty and compact too. For the additional assertion it suffices to use Remark 3.2.2. ### Klimek's metric In our further investigation we will need some results for the space with Klimek's metric. We use the following notation \[\mathcal{R} =\mathcal{R}(\mathbb{C})=\] \[:=\{K\subset\mathbb{C}:\ K\ \text{is compact, regular and polynomially convex}\}.\] For \(E,F\in\mathcal{R}\) Klimek defined in [22] their distance \[\Gamma(E,F):=\sup_{z\in\mathbb{C}}|g_{E}(z)-g_{F}(z)|=\max\left(\sup_{z\in E}g _{F}(z),\sup_{z\in F}g_{E}(z)\right)\] and showed that \((\mathcal{R},\Gamma)\) is a complete metric space. Note that a sequence \((E_{n})_{n=1}^{\infty}\) is convergent to \(F\) in \((\mathcal{R},\Gamma)\) if and only if \(g_{E_{n}}\rightrightarrows g_{F}\), i.e. the function sequence \((g_{E_{n}})_{n=1}^{\infty}\) is uniformly convergent to \(g_{F}\) in the whole complex plane. Fix now a polynomial \(P\) of degree \(d\geq 1\) and consider the following mapping: \[A_{P}:\mathcal{R}\ni K\longmapsto P^{-1}(K)\in\mathcal{R}. 
\tag{3.3}\] (2.3) yields that this mapping is an isometry if \(d=1\) (since \(P\) is bijective) and a contraction with contraction ratio \(1/d\) if \(d\geq 2\). In the latter case, since \((\mathcal{R},\Gamma)\) is a complete metric space, by the Banach Contraction Principle the mapping \(A_{P}\) has a unique fixed point. This fixed point is the above defined (Definition 3.1.1) filled Julia set \(\mathcal{K}[P]\) (see [22]). In particular \(\mathcal{K}[P]\in\mathcal{R}\). Moreover, by the classical proof of the Banach Contraction Principle \[\forall E\in\mathcal{R}:\quad\lim_{n\to\infty}P^{-n}(E)=\lim_{n\to\infty}(A_{P})^{n}(E)=\mathcal{K}[P].\] Once again using (2.3) we deduce that \[\forall E\in\mathcal{R}:\quad\frac{1}{d^{n}}g_{E}\circ P^{n}\rightrightarrows g_{\mathcal{K}[P]}.\] Now we would like to use Klimek's metric in our case of KW polynomials. The following proposition is an important step in proving that the compact sets \(\mathcal{K}[(p_{d_{k}})_{k=1}^{\infty}]\) and \(\mathcal{K}[(p_{n})_{n=1}^{\infty}]\) obtained in Theorem 3.2.6 are regular and polynomially convex. **Proposition 3.3.1**.: _Let \(K\subset\mathbb{C}\) be a regular polynomially convex compact set and \((p_{n})_{n=1}^{\infty}\) be a sequence of KW polynomials associated with \(K\). If \(E\in\mathcal{R}\), then_ \[\exists C>0\ \forall n\geq 1:\quad\Gamma(E,p_{n}^{-1}(E))\leq C.\] Proof.: Fix \(E\in\mathcal{R}\). Take \(\varrho\) from Corollary 2.3.3, fix \(R\geq\varrho\) big enough to satisfy \(E\subset\overline{\mathbb{D}}(0,R)\). By Corollary 2.3.3 \[p_{n}^{-1}(E)\subset p_{n}^{-1}\left(\overline{\mathbb{D}}(0,R)\right)\subset\overline{\mathbb{D}}(0,R)\quad\text{ for }n\geq 2.\] Hence \[\forall n\geq 2:\qquad\sup_{z\in p_{n}^{-1}(E)}g_{E}(z)\leq\sup_{z\in\overline{\mathbb{D}}(0,R)}g_{E}(z)=:C_{1}. \tag{3.4}\] Note that \(C_{1}\) is a non-negative number and does not depend on \(n\). By the properties of the Green function \(C_{2}:=\max_{z\in\overline{\mathbb{D}}(0,R)}g_{K}(z)\) is well defined and non-negative. By assumption \((p_{n})_{n=1}^{\infty}\) satisfies (2.4). Therefore there exists \(C_{3}>1\) such that \(\forall n\geq 1:\quad\|p_{n}\|_{K}^{1/n}\leq C_{3}\). Hence by (2.2) \[\forall n\geq 1\ \forall z\in E:\quad|p_{n}(z)|\leq C_{4}^{n},\] where \(C_{4}=C_{3}\exp(C_{2})>1\). In view of (2.3) we have \[\forall n\geq 1:\quad\sup_{z\in E}g_{p_{n}^{-1}(E)}(z) =\sup_{z\in E}\frac{1}{n}g_{E}(p_{n}(z))\leq\] \[\leq\sup_{z\in\overline{\mathbb{D}}(0,C_{4}^{n})}\frac{1}{n}g_{E}(z)=\sup_{z\in\partial\overline{\mathbb{D}}(0,C_{4}^{n})}\frac{1}{n}g_{E}(z), \tag{3.5}\] and the last equality follows from the Maximum Principle. By Lemma 2.1.6 there exists \(N_{1}>1\) such that for \(n\geq N_{1}\) \[\forall z\in\partial\overline{\mathbb{D}}(0,C_{4}^{n}):\quad g_{E}(z)\leq n\log C_{4}-\log\operatorname{cap}(E)+1.\] And furthermore there exists \(N_{2}\geq N_{1}\) such that for \(n\geq N_{2}\) \[\forall z\in\partial\overline{\mathbb{D}}(0,C_{4}^{n}):\quad\frac{1}{n}g_{E}(z)\leq 2+\log C_{4}.\] Combining that with (3.5) gives \[\forall n\geq N_{2}:\quad\sup_{z\in E}g_{p_{n}^{-1}(E)}(z)\leq C_{5}:=2+\log C_{4}. \tag{3.6}\] Put also \(C_{6}:=\max\left\{\Gamma(E,p_{n}^{-1}(E)):n\in\{1,...,N_{2}-1\}\right\}.\) We see that \[\forall n\geq 1:\quad\Gamma(E,p_{n}^{-1}(E))\leq\max\{C_{1},C_{5},C_{6}\}.\] Some properties of the filled Julia set of a polynomial of degree at least \(2\) (see Definition 3.1.1) followed from the Banach Contraction Principle. Now we need a generalization of this result.
**Theorem 3.3.2** (Enhanced version of Banach's Contraction Principle, [24, Lemma 4.5]).: _Let \((X,\rho)\) be a complete metric space and let \((H_{n})_{n=1}^{\infty}\) be a sequence of contractions of \(X\) with contraction ratios not greater than \(L<1\). If_ \[\forall x\in X:\quad\sup_{n\geq 1}\rho(H_{n}(x),x)<\infty,\] _then there exists a unique point \(c\in X\) such that the sequence \((H_{1}\circ...\circ H_{n})_{n=1}^{\infty}\) converges pointwise to \(c\)._ Let us also quote the following result. **Proposition 3.3.3** ([23, Proposition 1]).: _Let \(P_{n}:\mathbb{C}\longrightarrow\mathbb{C}\) be a polynomial of degree \(d_{n}\geq 2\) for \(n\in\{1,2,...\}\). Let \(E\in\mathcal{R}\) and define \(E_{n}:=(P_{n}\circ...\circ P_{1})^{-1}(E)\) for \(n\in\{1,2,...\}\). If_ \[\sum_{n=1}^{\infty}\frac{\Gamma(P_{n+1}^{-1}(E),E)}{d_{1}d_{2}\cdots d_{n}}<\infty, \tag{3.7}\] _then the sequence \((E_{n})_{n=1}^{\infty}\) is convergent in \((\mathcal{R},\Gamma)\) to a set \(F\). Any other choice of \(\widetilde{E}\in\mathcal{R}\) for which (3.7) is satisfied results in the same limit \(F\). If we assume that \(P_{n}^{-1}(E)\subset E\) for all \(n\), then the sequence \((E_{n})_{n=1}^{\infty}\) is decreasing and_ \[F=\bigcap_{n\geq 1}E_{n}=\{z\in E:\ (P_{n}\circ...\circ P_{1})(z)\in E\ \text{for all}\ n\geq 1\}.\] ### Julia sets of sequences of KW polynomials We will now apply the results from the previous subsection to a sequence of contractions of the type defined in (3.3). **Theorem 3.4.1**.: _Let \(K\subset\mathbb{C}\) be a regular polynomially convex compact set and \((p_{n})_{n=1}^{\infty}\) be a sequence of KW polynomials associated with \(K\). Fix now a sequence \((d_{k})_{k=1}^{\infty}\) of integers greater than \(1\). Then the sequence_ \[\Big{(}A_{p_{d_{1}}}\circ...\circ A_{p_{d_{k}}}\Big{)}_{k=1}^{\infty}\] _converges pointwise in \((\mathcal{R},\Gamma)\) to a constant mapping with the value \(\mathcal{K}[(p_{d_{k}})_{k=1}^{\infty}]\)._ _In particular \(\mathcal{K}[(p_{d_{k}})_{k=1}^{\infty}]\) and \(\mathcal{K}[(p_{n})_{n=1}^{\infty}]\) are polynomially convex and regular._ Proof.: By Theorem 3.3.2 and Proposition 3.3.1, for every \(E\in\mathcal{R}\) the sequence \[\left(A_{p_{d_{1}}}\circ...\circ A_{p_{d_{k}}}(E)\right)_{k=1}^{\infty}\] is convergent to the same set \(F\in\mathcal{R}\). Take \(\varrho>0\) from Corollary 2.3.3. Proposition 3.3.3 yields \[\forall R\geq\varrho:\quad F=\bigcap_{k=1}^{\infty}(p_{d_{k}}\circ...\circ p_{d_{1}})^{-1}\left(\overline{\mathbb{D}}(0,R)\right).\] Hence \(F=\mathcal{K}[(p_{d_{k}})_{k=1}^{\infty}]\) by Theorem 3.2.6. In order to get the assertion for \(\mathcal{K}[(p_{n})_{n=1}^{\infty}]\) it suffices to apply Remark 3.2.2 to \(\mathcal{K}[(p_{n})_{n=2}^{\infty}]\). **Corollary 3.4.2**.: _Let \(K\subset\mathbb{C}\) be a regular polynomially convex compact set and \((p_{n})_{n=1}^{\infty}\) be a sequence of KW polynomials associated with \(K\). Fix now a sequence \((d_{k})_{k=1}^{\infty}\) of integers greater than 1. Then_ \[\forall E\in\mathcal{R}:\quad g_{(p_{d_{n}}\circ...\circ p_{d_{1}})^{-1}(E)}\rightrightarrows g_{\mathcal{K}[(p_{d_{n}})_{n=1}^{\infty}]}.\] _In particular the function sequence_ \[\left(\frac{1}{d_{n}\cdot...\cdot d_{1}}\log^{+}|p_{d_{n}}\circ...\circ p_{d_{1}}|\right)_{n=1}^{\infty}\] _is uniformly convergent in \(\mathbb{C}\)._ Proof.: It follows directly from Theorem 3.4.1, Definition 3.2.1 of the filled Julia set \(\mathcal{K}\left[(p_{d_{n}})_{n=1}^{\infty}\right]\) and the definition of Klimek's metric \(\Gamma\).
The last assertion is a consequence of (2.3) and the formula for the Green function of the unit disk (cf. Example 2.1.3). We will now consider \(\mathcal{K}[(p_{n})_{n=1}^{\infty}]\). **Corollary 3.4.3**.: _Let \(K\subset\mathbb{C}\) be a regular polynomially convex compact set and \((p_{n})_{n=1}^{\infty}\) be a sequence of KW polynomials associated with \(K\). Then the function sequence_ \[\left(\frac{1}{n!}\log^{+}|p_{n}\circ...\circ p_{1}|\right)_{n=1}^{\infty}\] _is uniformly convergent in \(\mathbb{C}\) to \(g_{\mathcal{K}[(p_{n})_{n=1}^{\infty}]}\)._ Proof.: By the previous corollary \[g_{(p_{n}\circ...\circ p_{2})^{-1}(\overline{\mathbb{D}}(0,1))}\rightrightarrows g_{\mathcal{K}[(p_{n})_{n=2}^{\infty}]}.\] Recall the formula (2.3) and note that \(p_{1}:\mathbb{C}\longrightarrow\mathbb{C}\) is bijective. Since \(\deg p_{1}=1\), we have \[\frac{1}{n!}\log^{+}|p_{n}\circ...\circ p_{1}| =g_{(p_{n}\circ...\circ p_{1})^{-1}(\overline{\mathbb{D}}(0,1))}=\] \[=g_{(p_{n}\circ...\circ p_{2})^{-1}(\overline{\mathbb{D}}(0,1))}\circ p_{1}\rightrightarrows g_{\mathcal{K}[(p_{n})_{n=2}^{\infty}]}\circ p_{1}=\] \[=g_{p_{1}^{-1}(\mathcal{K}[(p_{n})_{n=2}^{\infty}])}=g_{\mathcal{K}[(p_{n})_{n=1}^{\infty}]}.\] The following approximation of the non-autonomous Julia set by the autonomous Julia sets of compositions can be easily shown (cf. [1, Proposition 5]). **Corollary 3.4.4**.: _Let \(K\subset\mathbb{C}\) be a regular polynomially convex compact set and \((p_{n})_{n=1}^{\infty}\) be a sequence of KW polynomials associated with \(K\). Fix now a sequence \((d_{k})_{k=1}^{\infty}\) of integers greater than \(1\). Then_ \[\lim_{k\rightarrow\infty}\Gamma\left(\mathcal{K}[p_{d_{k}}\circ...\circ p_{d_{1}}],\mathcal{K}[(p_{d_{k}})_{k=1}^{\infty}]\right)=0.\] We recall now another result due to Klimek. **Theorem 3.4.5** ([22, Corollary 5]).: \[\forall E,F\in\mathcal{R}:\quad|\log\operatorname{cap}(E)-\log\operatorname{cap}(F)|\leq\Gamma(E,F).\] _In particular the logarithmic capacity is continuous on \((\mathcal{R},\Gamma)\)._ Recall that \(\operatorname{cap}\left(\overline{\mathbb{D}}(0,1)\right)=1\), moreover because of (2.3) we have \[\operatorname{cap}\left(f^{-1}\left(\overline{\mathbb{D}}(0,1)\right)\right)=1 \tag{3.8}\] for any non-constant monic polynomial \(f\) too. **Corollary 3.4.6**.: _Let \(K\subset\mathbb{C}\) be a regular polynomially convex compact set and \((p_{n})_{n=1}^{\infty}\) be a sequence of KW polynomials associated with \(K\). Let \((d_{k})_{k=1}^{\infty}\) be a sequence of integers greater than \(1\)._ _Then \(\operatorname{cap}\left(\mathcal{K}[(p_{d_{k}})_{k=1}^{\infty}]\right)=\operatorname{cap}\left(\mathcal{K}[(p_{n})_{n=1}^{\infty}]\right)=1\)._ Proof.: By Theorem 3.4.1 \[\left(p_{d_{k}}\circ...\circ p_{d_{1}}\right)^{-1}\left(\overline{\mathbb{D}}(0,1)\right)\longrightarrow\mathcal{K}\left[(p_{d_{k}})_{k=1}^{\infty}\right]\quad(k\rightarrow\infty)\] with respect to \(\Gamma\). The assertion follows from (3.8) and Theorem 3.4.5. At the end of this subsection we give some information about our toy case of Chebyshev polynomials on \([-1,1]\). _Example 3.4.7_.: The following pictures (prepared by Maciej Klimek) show approximations of \(\mathcal{K}[(t_{n})_{n=1}^{\infty}]\), where \((t_{n})_{n=1}^{\infty}\) is the sequence of minimal polynomials on \([-1,1]\) (see Example 2.2.2).
The sets depicted here are, from left to right: * \((t_{8}\circ...\circ t_{2}\circ t_{1})^{-1}([-1,1]\times[-0.0005,0.0005])\) (which is used as an approximation of \((t_{8}\circ...\circ t_{2}\circ t_{1})^{-1}([-1,1])\)), * \((t_{5}\circ...\circ t_{2}\circ t_{1})^{-1}\left(\overline{\mathbb{D}}(0,1)\right)\), * \((t_{100}\circ...\circ t_{2}\circ t_{1})^{-1}\left(\overline{\mathbb{D}}(0,1)\right)\). Observe some simple geometric properties of the set \(\mathcal{K}[(t_{n})_{n=1}^{\infty}]\). Property 1: \(I=[-1,1]\subset\mathcal{K}[(t_{n})_{n=1}^{\infty}]\). In particular, \(\mathcal{K}[(t_{n})_{n=1}^{\infty}]\cap\{z:\mathrm{Im}z=0\}\neq\emptyset\). Proof.: The set \(I\) is totally invariant under every polynomial \(T_{n}\). Hence \(t_{1}(I)=T_{1}(I)=I\), \(t_{2}(I)=(1/2)T_{2}(I)=[-1/2,1/2]\subset I\), \(t_{n}(I)=(1/2^{n-1})T_{n}(I)=[-1/2^{n-1},1/2^{n-1}]\subset I\), and \((t_{n}\circ...\circ t_{2}\circ t_{1})(I)\subset I\) for every \(n\geq 1\). Property 2: If \(z\in\mathcal{K}[(t_{n})_{n=1}^{\infty}]\), then \(-z\in\mathcal{K}[(t_{n})_{n=1}^{\infty}]\) and \(\bar{z}\in\mathcal{K}[(t_{n})_{n=1}^{\infty}]\). Proof.: We have \(t_{1}(z)=z\) and \(t_{2}(-z)=t_{2}(z)\) for every \(z\in\mathbb{C}\). Hence \((t_{n}\circ...\circ t_{2}\circ t_{1})(-z)=(t_{n}\circ...\circ t_{2}\circ t_{1})(z)\) for every \(n\geq 2\) and every \(z\in\mathbb{C}\). Moreover, all \(t_{n}\) have real coefficients, so \((t_{n}\circ...\circ t_{1})(\overline{z})=\overline{(t_{n}\circ...\circ t_{1})(z)}\) for every \(n\geq 2\) and every \(z\in\mathbb{C}\). It follows that each of the sequences \(((t_{n}\circ...\circ t_{1})(\overline{z}))_{n=1}^{\infty}\) and \(((t_{n}\circ...\circ t_{1})(-z))_{n=1}^{\infty}\) is bounded if and only if \(((t_{n}\circ...\circ t_{1})(z))_{n=1}^{\infty}\) is. Property 3: \(\exists R\geq 2:\ \mathcal{K}[(t_{n})_{n=1}^{\infty}]\subset E_{R}\), where \(E_{R}\) is the filled ellipse with foci \(\pm 1\) and semiaxes \(a_{R}=\frac{1}{2}(R+\frac{1}{R}),\ b_{R}=\frac{1}{2}(R-\frac{1}{R})\). Proof.: When \(R>1\), such (filled) ellipses are sublevel sets (see Definition 2.1.7 and Example 2.1.9) of the Green function \(g_{I}\), which tends to infinity as \(|z|\to\infty\). Hence we have \(\mathbb{C}=\bigcup_{R>1}E_{R}\), and, by compactness, \(\exists R>1:\mathcal{K}[(t_{n})_{n=2}^{\infty}]\subset E_{R}\). The capacity of \(E_{R}\) is \(R/2\) (see Example 2.1.5). By monotonicity of capacity and Corollary 3.4.6, we need \(R\geq 2\) for the inclusion \(\mathcal{K}[(t_{n})_{n=2}^{\infty}]\subset E_{R}\). Property 4: \(\mathcal{K}[(t_{n})_{n=1}^{\infty}]\not\subset E_{2}\). Proof.: Note that \(E_{2}\) has the major semiaxis \(a_{2}=5/4\) and the minor semiaxis \(b_{2}=3/4\). We need to find a point in \(\mathcal{K}[(t_{n})_{n=1}^{\infty}]\setminus E_{2}\). Let us start with the following properties of the polynomials \(t_{n}\), \(n\in\{1,2,...\}\): (a) \(t_{n}(-x)=(-1)^{n}t_{n}(x),\ x\in\mathbb{R}\); (b) \(t_{n}\) is increasing in the interval \((1,+\infty)\); (c) \(\max_{z\in E_{2}}|t_{n}(z)|=|t_{n}(5/4)|=|t_{n}(-5/4)|=1+2^{-2n}\leq 5/4\). (a) and (b) are known. To prove (c), let us first compute \(\max_{w\in E_{2}}|T_{n}(w)|\) (cf. [15], [19]).
Recall that the classical Chebyshev polynomials satisfy the relation \[T_{n}\left(\frac{z+z^{-1}}{2}\right)=\frac{z^{n}+z^{-n}}{2},\qquad n\in\{1,2,...\}.\] For \(z=2e^{i\theta}\) with \(\theta\in[0,2\pi)\) we thus have \[T_{n}\left(\frac{z+z^{-1}}{2}\right) =\frac{2^{n}e^{in\theta}+2^{-n}e^{-in\theta}}{2}\] \[=\frac{1}{2}\left((2^{n}+2^{-n})\cos n\theta+i(2^{n}-2^{-n})\sin n\theta\right).\] Then \[\left|T_{n}\left(\frac{z+z^{-1}}{2}\right)\right|^{2}=\frac{1}{4}\left(2^{2n}+2\cos 2n\theta+2^{-2n}\right)\] achieves its maximum in particular when \(\theta=0\) or \(\theta=\pi\). Checking values for the corresponding \(z=2\) or \(z=-2\) we get \[\max_{w\in E_{2}}|T_{n}(w)|=\left|T_{n}\left(\frac{2+2^{-1}}{2}\right)\right|=\left|T_{n}\left(-\frac{2+2^{-1}}{2}\right)\right|=2^{n-1}+2^{-(n+1)}.\] Hence (c) is proved. Observe now that Property 1 together with (a), (b) and (c) implies that \([-5/4,5/4]\subset\mathcal{K}[(t_{n})_{n=1}^{\infty}]\). Indeed, for every \(n\geq 1\) we have \[t_{n}\left([-5/4,5/4]\right)=t_{n}([-5/4,-1]\cup[-1,1]\cup[1,5/4])\subset[-5/4,5/4],\] consequently \[(t_{n}\circ...\circ t_{1})([-5/4,5/4])\subset[-5/4,5/4]\] and we get the inclusion \([-5/4,5/4]\subset\mathcal{K}[(t_{n})_{n=1}^{\infty}]\). Consider the point \(z_{0}=4i/5\), which does not belong to \(E_{2}\) (since the minor semiaxis of \(E_{2}\) is \(b_{2}=3/4<4/5\)). Now, \[t_{2}(z_{0})=z_{0}^{2}-1/2=-57/50\in(-5/4,-1),\] hence, for every \(n\geq 2\) we have \((t_{n}\circ...\circ t_{2}\circ t_{1})(z_{0})\in[-5/4,5/4]\), and so \(z_{0}\in\mathcal{K}[(t_{n})_{n=1}^{\infty}]\). ## 4. Chebyshev polynomials on compact sets, revisited ### More notions and examples We will use Definition 2.2.1 of minimal polynomials. It is known that for a fixed infinite compact set \(E\) and for each \(n\) the \(n\)th Chebyshev polynomial on \(E\) is unique (see e.g. [29, Chapter II. Theorem 7] or [13, page 2]). Few explicit examples of Chebyshev polynomials are known. We already have Example 2.2.2. Recall also the following: _Example 4.1.1_.: Fix an \(R>0\). Then \(t_{n}(z)=z^{n}\) is the \(n\)th Chebyshev polynomial for \(\{z\in\mathbb{C}:\ |z|=R\}\) as well as for \(\overline{\mathbb{D}}(0,R):=\{z\in\mathbb{C}:|z|\leq R\}\). _Example 4.1.2_.: The polynomial \(t_{n}:=\frac{1}{2^{n-1}}T_{n}\) from Example 2.2.2 is also the \(n\)th Chebyshev polynomial on any ellipse \(E\) with foci \(-1,+1\) (see [15]). Recall that these ellipses are level sets of the Green function (cf. Definition 2.1.7) of the segment \([-1,1]\) (cf. Example 2.1.9). _Example 4.1.3_.: Let \(E\subset\mathbb{C}\) be a compact set, let \(p\) be the \(n\)th Chebyshev polynomial on \(E\) and let \(f\) be an arbitrary polynomial of degree \(m\geq 1\). Then \(p\circ f\) is the \((m\cdot n)\)th Chebyshev polynomial on \(f^{-1}(E)\) (see [28] or [8]). **Definition 4.1.4** ([28, Definition 1]).: Let \(E\subset\mathbb{C}\) be a compact set. The closed disc \(\overline{\mathbb{D}}_{C}:=\overline{\mathbb{D}}(a,r_{C})\) of the smallest radius which contains \(E\) is called the _Chebyshev disc_ of \(E\). Its center \(a\) is called the _Chebyshev center_ of \(E\); and its radius \(r_{C}\) is called the _Chebyshev radius_ of \(E\).
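The two numerical facts used in Property 4 of Example 3.4.7 above — the value of \(\max_{w\in E_{2}}|T_{n}(w)|\) and the boundedness of the orbit of \(z_{0}=4i/5\) under the compositions \(t_{n}\circ\dots\circ t_{1}\) — are easy to spot-check. The following sketch is an added illustration only (it samples the boundary ellipse on a finite grid and iterates a fixed number of steps, so it confirms rather than proves the claims).

```python
import math, cmath

def T(n, z):
    a, b = 1.0, z                       # T_0, T_1
    for _ in range(n - 1):
        a, b = b, 2 * z * b - a
    return b

# boundary of E_2: w = (2 e^{i s} + (1/2) e^{-i s}) / 2, s in [0, 2 pi)
ellipse = [(2 * cmath.exp(1j * s) + 0.5 * cmath.exp(-1j * s)) / 2
           for s in (2 * math.pi * k / 4000 for k in range(4000))]
for n in (1, 2, 5, 10):
    numeric = max(abs(T(n, w)) for w in ellipse)
    closed_form = 2 ** (n - 1) + 2 ** (-(n + 1))
    print("n =", n, " sampled max_{E_2}|T_n| =", round(numeric, 6), " formula:", closed_form)

# orbit of z_0 = 4i/5 under the compositions t_n o ... o t_1, with t_n = T_n / 2^(n-1)
z, worst = 0.8j, 0.0
for n in range(1, 31):
    z = T(n, z) / 2 ** (n - 1)
    worst = max(worst, abs(z))
print("max_n |(t_n o ... o t_1)(4i/5)| for n <= 30:", round(worst, 4), "(stays below 5/4)")
```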
**Lemma 4.1.5**.: _The point \(a\in\mathbb{C}\) is the Chebyshev center of a compact set \(E\subset\mathbb{C}\) if and only if \(t_{1}(z)=z-a\) is the first Chebyshev polynomial on \(E\)._ Proof.: By Definitions 2.2.1 and 4.1.4, if \(a\in\mathbb{C}\) is the Chebyshev center of \(E\), then \(\|t_{1}\|_{E}=r_{C}\) and conversely, if \(t_{1}(z)=z-a\) is the first Chebyshev polynomial on \(E\), then \(\overline{\mathbb{D}}(a,\|z-a\|_{E})\) is the Chebyshev disc of \(E\). Let us now point out a relation between an escape radius (recall Definition 3.1.3) and the Chebyshev radius for a filled Julia set. **Corollary 4.1.6**.: _If \(P:\mathbb{C}\longrightarrow\mathbb{C}\) is a polynomial of degree \(d\geq 2\) and \(\overline{\mathbb{D}}(a,r_{C})\) is the Chebyshev disc of \(\mathcal{K}[P]\), then \(r_{C}+|a|\) is an escape radius for \(P\)._ Proof.: By Definition 4.1.4 we have \(\mathcal{K}[P]\subset\overline{\mathbb{D}}(a,r_{C})\subset\overline{\mathbb{D}}(0,r_{C}+|a|)\). It suffices to apply Lemma 3.1.4. _Example 4.1.7_.: Consider now the special case \(P_{c}:\mathbb{C}\ni z\longmapsto z^{2}+c\in\mathbb{C}\) for \(c\in\mathbb{C}\). Note that the Chebyshev center of \(\mathcal{K}[P_{c}]\) is \(0\), since the set \(\mathcal{K}[P_{c}]\) is symmetric with respect to \(0\). If additionally \(c\in(-\infty,0]\), then the Chebyshev radius of \(\mathcal{K}[P_{c}]\) is \[r_{C}=\frac{1}{2}+\sqrt{\frac{1}{4}-c} \tag{4.1}\] (cf. [5]). Observe that in this case every escape radius of \(P_{c}\) is not smaller than \(\frac{1}{2}+\sqrt{\frac{1}{4}-c}\). Indeed, if \(R\) is an escape radius for \(P_{c}\), then \(\mathcal{K}[P_{c}]\subset\overline{\mathbb{D}}(0,R)\) by Lemma 3.1.4, and Definition 4.1.4 gives \(R\geq r_{C}\). Moreover, Corollary 4.1.6 yields that \(r_{C}\) given in (4.1) is itself an escape radius for \(P_{c}\). We will show now the uniform convergence of a function sequence defined with the use of Chebyshev polynomials (cf. Main Example from the Introduction and [26]). **Corollary 4.1.8**.: _Let \(K\subset\mathbb{C}\) be a regular polynomially convex compact set. If \((t_{n})_{n=1}^{\infty}\) is the sequence of Chebyshev polynomials on \(K\), then the function sequence_ \[\left(\frac{1}{n!}\log^{+}|t_{n}\circ...\circ t_{1}|\right)_{n=1}^{\infty}\] _is uniformly convergent in \(\mathbb{C}\)._ Proof.: We apply Corollary 2.2.5 and Corollary 3.4.3 to the sequence \((t_{n})_{n=1}^{\infty}\). ### Chebyshev polynomials on Julia sets In either case of Example 3.1.2, the \(n\)th Chebyshev polynomial for the filled Julia set coincides with the polynomial of degree \(n\) generating the Julia set. Sequences of Chebyshev polynomials on Julia sets were determined in [3] (for quadratic polynomials) and [20]. Chebyshev polynomials on level sets of Green functions for Julia sets were studied in [34], [36] and [37]. In [2] Chebyshev polynomials on non-autonomous Julia sets of sequences from class \(\mathcal{B}\) (defined in [9]) are discussed. Recall that in view of Remark 3.2.5 we cannot apply the results from [2] directly to our case. We can however prove an analogue of the main theorem there. **Theorem 4.2.1**.: _Let \(K\subset\mathbb{C}\) be a regular polynomially convex compact set and let \((p_{n})_{n=1}^{\infty}\) be a sequence of KW polynomials associated with \(K\).
If \((d_{k})_{k=1}^{\infty}\) is a sequence of integers not smaller than \(2\), then \(\forall k\geq 1\ \exists\tau_{k}\in\mathbb{C}:\ p_{d_{k}}\circ...\circ p_{d_{1}}- \tau_{k}\) is the \(d_{k}\cdot...\cdot d_{1}\)-th Chebyshev polynomial on \(\mathcal{K}[(p_{d_{k}})_{k=1}^{\infty}]\)._ _Moreover, \(\forall n\geq 2\ \exists\pi_{n}\in\mathbb{C}:\ p_{n}\circ...\circ p_{1}-\pi_{n}\) is the \(n!\)-th Chebyshev polynomial on \(\mathcal{K}[(p_{n})_{n=1}^{\infty}]\)._ Proof.: Let \(R\) be as in Proposition 2.3.2. We follow the lines of the proof of [2, Theorem 4] with this \(R\) for \((p_{d_{k}})_{k=1}^{\infty}\) and \((p_{n})_{n=2}^{\infty}\). We obtain the assertion for \(\mathcal{K}[(p_{d_{k}})_{k=1}^{\infty}]\), which yields also that \(p_{n}\circ...\circ p_{2}-\tau_{n}\) is the \(n!\)-th Chebyshev polynomial on \(\mathcal{K}[(p_{n})_{n=2}^{\infty}]\). To finish the proof it suffices to use Remark 3.2.2 and Example 4.1.3. _Example 4.2.2_.: Let \(t_{n}=\frac{1}{2^{n-1}}T_{n}\), where \(T_{n}\) is the classical Chebyshev polynomial, \(n\in\{1,2,...\}\). The Chebyshev polynomial of degree \(1\) on \(\mathcal{K}[(t_{n})_{n=1}^{\infty}]\) is \(p(z)=z\). Proof.: Property 2 in Example 3.4.7 shows that \(\mathcal{K}[(t_{n})_{n=1}^{\infty}]\) is symmetric with respect to \(0\). Hence its Chebyshev center is \(0\). It is enough now to apply Lemma 4.1.5. **Acknowledgements**.: Both authors thankfully acknowledge their participation in Thematic Research Programme "Modern holomorphic dynamics and related fields", Excellence Initiative - Research University programme at the University of Warsaw (a mini-semester in spring 2023). The paper was partially written thanks to the support from the programme. The second named author extends her thanks to the Faculty of Mathematics, Informatics and Mechanics of the University of Warsaw for supporting her participation in the thematic semester "Dynamical Systems. Topological, smooth and holomorphic dynamics, ergodic theory, fractals" in Stefan Banach International Mathematical Center at the Institute of Mathematics of the Polish Academy of Sciences in Warsaw (part of "Simons Semesters in Banach Center: 2020s vision.") and in the conference "Complex dynamics: connections to other fields", at the University of Warsaw Conference Center in Checiny, Poland, as well as to the Chair of Approximation, Institute of Mathematics, Jagiellonian University, Krakow, for hosting her in March - June 2023 during her study leave from the American Mathematical Society. We would also like to thank Maciej Klimek for preparing the figures for us.
2310.18317
Comparison of grain growth mean-field models regarding predicted grain size distributions
Mean-field models have the ability to predict grain size distribution evolution occurring through thermomechanical solicitations. This article focuses on a comparison of mean-field models under grain growth conditions. Different microstructure representations are considered and discussed, especially regarding the consideration of topology in the neighborhood construction. Experimental data obtained with a heat treatment campaign on a 316L austenitic stainless steel are used for material parameter identification and as a reference for model comparisons. Mean-field models are also confronted with both mono- and bimodal initial grain size distributions to investigate the interest of introducing neighborhood topology in microstructure prediction models. This article shows that improvements in the predictions are obtained in monomodal cases for topological models. In the bimodal test, no comparison with experimental data was performed as no data were available, but relative comparisons between models indicate few differences in predictions. Overall, accounting for neighborhood topology in grain growth mean-field models brings only small improvements over classical mean-field models when weighed against the added implementation complexity.
Marion Roth, Baptiste Flipon, Nathalie Bozzolo, Marc Bernacki
2023-09-19T07:32:28Z
http://arxiv.org/abs/2310.18317v1
# Comparison of grain growth mean-field models regarding predicted grain size distributions

###### Abstract

Mean-field models have the ability to predict grain size distribution evolution occurring through thermomechanical solicitations. This article focuses on a comparison of mean-field models under grain growth conditions. Different microstructure representations are considered and discussed, especially regarding the consideration of topology in the neighborhood construction. Experimental data obtained with a heat treatment campaign on a 316L austenitic stainless steel are used for material parameter identification and as a reference for model comparisons. Mean-field models are also confronted with both mono- and bimodal initial grain size distributions to investigate the interest of introducing neighborhood topology in microstructure prediction models. This article shows that improvements in the predictions are obtained in monomodal cases for topological models. In the bimodal test, no comparison with experimental data was performed as no data were available, but relative comparisons between models indicate few differences in predictions. Overall, accounting for neighborhood topology in grain growth mean-field models brings only small improvements over classical mean-field models when weighed against the added implementation complexity.

Mean-field model Grain growth Grain size distribution Topology Neighborhood description

## 1 Introduction

The phenomenon of grain growth takes place in metallic materials when they are submitted to a heat treatment. When grain growth is considered as the only phenomenon occurring, the material is assumed free of any stored energy (_i.e._ low dislocation density) and the driving pressure for grain boundary (GB) migration arises from the minimization of the GB surface energy, leading to a curvature flow problem. At the polycrystalline scale, the GB motion is generally described by \(v=M_{GB}\left|P\right|\) with \(v\) the velocity norm of the boundary [1], \(M_{GB}\) its mobility and \(P=-\gamma_{GB}\kappa\) with \(\gamma_{GB}\) the GB energy and \(\kappa\) the trace of the curvature tensor. For the past 70 years, models have been extensively developed in order to predict microstructure changes under thermomechanical treatments and their impact on macroscopic properties. At the polycrystalline scale, three types of models can be found in the literature: phenomenological, mean-field, and full-field models. Phenomenological approaches are classically based on an experimental database and correspond to the extraction of a mathematical trend from what is observed experimentally. Such models are restricted to the set of thermomechanical conditions and to the material experimentally investigated [2, 3]. The last two model types are based on a more generic approach, since they use physical equations to predict microstructure evolution. Full-field models, at the mesoscopic scale, give access to a complete description of the system where each individual grain and its topology are taken into account [4, 5, 6]. They have the advantage of being able to deal with local heterogeneities. However, their major drawback is that they are very costly in terms of computing time. Mean-field models, on the other hand, keep this physically based implementation with a similar set of equations, but the general description of the microstructure is simplified [7, 8, 9, 10, 11, 12, 13]. They also present competitive computation times when compared to full-field models.
The original definition of a mean-field model considers the evolution of a microstructure described by mean quantities. The work of Burke and Turnbull [14] provides the simplest version of such models. In this case, the microstructure is solely defined by its mean grain radius \(\bar{R}\). The mean grain size (MGS) quantity is commonly determined by the (arithmetic) mean of the equivalent diameter (\(\overline{ED}=2\bar{R}\)) where \(ED_{i}\) is defined in 2D, resp. in 3D, as the diameter of a circle, resp. a sphere, having the same area (\(A_{i}\)), resp. volume (\(V_{i}\)), as the considered grain \(G_{i}\): \[\bar{R}=\frac{\overline{ED}}{2}=\frac{1}{2N}\sum_{i=1}^{N}ED_{i},\ \text{i.e.}\ \bar{R}=\frac{1}{N}\sum_{i=1}^{N}\left(\frac{A_{i}}{\pi}\right)^{1/2}\ \text{in 2D, and}\ \bar{R}=\frac{1}{N}\sum_{i=1}^{N}\left(\frac{3V_{i}}{4\pi}\right)^{1/3}\ \text{in 3D,} \tag{1}\] with \(N\) the number of grains in the microstructure. In Burke and Turnbull (B&T) context, the GB velocity norm is considered proportional to the inverse of \(\bar{R}\). Hillert model [7] introduces the consideration of the grain size distribution (GSD) in mean-field models. The microstructure is sampled in a representative distribution, where each bin is assigned to a given ED value and a frequency. In the GSD, a bin is commonly called grain class, each of them being virtually composed of many grains indicated by the frequency. For every class, one can define a representative grain having the characteristics of the associated grain class. Hillert formalism represents the microstructure as a grain embedded in a Homogeneous Equivalent Medium (HEM) that is characterized by the MGS of the distribution. The size evolution of a considered class is deduced from the difference in curvature radius between the current class and that of the HEM. Later on, statistical models, as the one developed by Abbruzzese _et al._[8, 15] modify the definition of the HEM. Rather than being only associated with a mean grain radius, the HEM is replaced by contact surfaces defined for all grain classes of the microstructure. Each grain class is surrounded by a statistical medium (SM) composed of all the grain classes of the microstructure. The contact surface between a grain class and its surrounding neighbors is defined by a perimeter intersection probability. One of the latest topological mean-field models, developed by Maire _et al._[13] combines several formalisms in its neighborhood construction. This hybrid model uses both the statistical approach initiated by Abbruzzese _et al._ and a deterministic number of neighbors ruled by the imposed bijectivity of neighborhood assignment. To the authors knowledge, the interest of these different views concerning the neighborhood description was never discussed in the state of the art for GG modeling. The main purpose of this work is to compare mean-field models of different microstructure descriptions to determine if semi-topological approaches are of interest to better describe GSD evolution in the context of grain growth. This article is constructed as follows: first, mean-field GG models are introduced. A detailed description of the microstructure formalism in Hillert, Abbruzzese _et al._ and Maire _et al._ models is made. Then optimized data parameters for the use of these models are determined and discussed. The last section is dedicated to a distribution comparison of the Maire _et al._ model to the mean-field models of Hillert, Abbruzzese _et al._ and to experimental data. 
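As a small illustration of eq. (1), the sketch below computes the arithmetic mean grain radius from grain areas (2D) or volumes (3D). The numerical values are hypothetical toy data, not measurements from this work, and Python/NumPy is used here only for illustration.

```python
import numpy as np

def mean_grain_radius_2d(areas):
    """Mean grain radius from 2D grain areas A_i, eq. (1): R_i = (A_i / pi)**(1/2)."""
    return np.mean(np.sqrt(np.asarray(areas) / np.pi))

def mean_grain_radius_3d(volumes):
    """Mean grain radius from 3D grain volumes V_i, eq. (1): R_i = (3 V_i / (4 pi))**(1/3)."""
    return np.mean(np.cbrt(3.0 * np.asarray(volumes) / (4.0 * np.pi)))

# Hypothetical toy data (um^2 and um^3), not the 316L measurements of this study
print(mean_grain_radius_2d([120.0, 450.0, 80.0, 310.0]))   # mean radius in um
print(mean_grain_radius_3d([900.0, 4200.0, 650.0]))        # mean radius in um
```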
## 2 Mean-field models

This section recalls the main equations and details the microstructure description of the previously introduced models for the GG phenomenon.

### Burke and Turnbull model

B&T model [14] illustrates the original meaning of mean-field models as the microstructure description is reduced to mean geometric considerations. Indeed, the rate of GG (\(d\bar{R}/dt\)) is assumed proportional to \(1/\bar{R}\)[1] such that: \[d\bar{R}=M_{GB}P_{c}dt\text{, with }P_{c}=\frac{\alpha\left(d-1\right)\gamma_{GB}}{\bar{R}}, \tag{2}\] where \(\alpha\) is a constant, \(\gamma_{GB}\) is the GB energy, \(M_{GB}\) is the GB mobility, and \(d\) the space dimension. In the B&T analysis, \(\gamma_{GB}\) and \(M_{GB}\) are assumed to be constant in time and space. A parabolic expression can then be derived from eq. (2) and represents the time dependence of \(\bar{R}^{2}-\bar{R}_{0}^{2}\): \[\bar{R}^{2}-\bar{R}_{0}^{2}=2\left(d-1\right)\alpha M_{GB}\gamma_{GB}t, \tag{3}\] where \(\bar{R}_{0}\) is the initial mean grain radius. An extension of this law is classically preferred in the literature: \[\bar{R}^{2}-\bar{R}_{0}^{2}=\tilde{\alpha}M_{GB}\gamma_{GB}t^{n}, \tag{4}\] with \(n\) and \(\tilde{\alpha}\) two constants to be fitted thanks to experimental data.

### Hillert model

The GG mean-field model proposed by Hillert [7] relies on considering each grain class inside a common HEM. The microstructure description is illustrated in fig. 1 where the representative grain of a given grain class \(R_{i}\) is surrounded by a HEM defined by the mean grain radius of the microstructure \(\bar{R}\). Compared to B&T model, the driving pressure term is modified following a more local approach. \(P_{c}\), for the class \(i\), is defined as a function of the curvature radius of the grain class \(i\): \[dR_{i}=\frac{(d-1)}{2}M_{GB}\gamma_{GB}\left(\frac{1}{\bar{R}}-\frac{1}{R_{i}}\right)dt. \tag{5}\] Thus, the grain \(i\) will shrink when \(\bar{R}>R_{i}\), will grow if \(\bar{R}<R_{i}\) and will be stable when \(\bar{R}=R_{i}\).

Figure 1: Schematic representation of the microstructure in the Hillert model.

### Abbruzzese _et al._ model

In the statistical model proposed by Abbruzzese _et al._[8, 15] in 1992, the authors suggested an evolution of the Hillert model by introducing a statistically constructed medium (SM) composed of all grain classes of the microstructure as illustrated in fig. 2(a). The contribution of all neighbors is weighted by a statistical coefficient: the contact probability. In 2D, considering the grain class \(i\), the contact between \(i\) and its neighbor grain classes is defined by a fraction of the neighbor class perimeter. The probability of a grain class \(j\) to belong to the SM depends only on its own size (grain radius \(R_{j}\)) and will be the same for all grain classes \(i\) considered. This represents what would be the chances or probabilities that a grain of the \(j^{\text{th}}\) class in the microstructure intersects the representative grain \(i\), as schematized in fig. 2(b). The contact probability \(p_{j}\) is then given by a ratio between the \(j^{\text{th}}\) grain perimeter and the sum of all the grain perimeters of the microstructure. Along with the grain size, the number frequency of each grain class is also taken into account.
The expression of \(p_{j}\) is written as follows: \[p_{j}=\frac{N_{j}R_{j}}{\sum_{k=1}^{n}N_{k}R_{k}}, \tag{6}\] with \(n\) the total number of grain classes in the microstructure and, \(\forall k\in[1,n]\), \(N_{k}\) the number of grains belonging to the grain class \(k\). With such an explicit description of the surroundings, the GB migration can be achieved locally between the class \(i\) and each of its neighbors. In the context of 2D-GG (\(d=2\)), the difference of driving pressure between a grain class \(i\) and its neighbor \(j\) can be derived from the Hillert equation (eq. (5)) and is then given by: \[dR_{(i,j)}=\frac{1}{2}M_{GB}\gamma_{GB}\left(\frac{1}{R_{j}}-\frac{1}{R_{i}}\right)dt. \tag{7}\] The global variation of grain size undergone by the grain class \(i\) is the sum of the local variations with every \(j^{\text{th}}\) neighbor. The local GB migration can be generalized to all the \(j^{\text{th}}\) neighbors of the grain class \(i\) using the contact probability \(p_{j}\). The total rate of evolution with respect to time for the radius of the representative grain \(i\) can then be written as: \[dR_{i}=\sum_{j=1}^{n}p_{j}dR_{(i,j)}=\frac{1}{2}M_{GB}\gamma_{GB}\left(\sum_{j=1}^{n}\frac{p_{j}}{R_{j}}-\frac{1}{R_{i}}\right)dt. \tag{8}\]

Figure 2: Statistical neighborhood construction of the 2D GG model of Abbruzzese _et al._: (a) description of the statistical medium, and (b) illustration of the contact probability \(p_{j}\) concept.

### Maire _et al._ model

Maire _et al._[13] proposed a 3D mean-field model to simulate microstructure evolution under thermomechanical solicitations. Physical mechanisms such as GG, discontinuous dynamic recrystallization and post-dynamic recrystallization can be modeled. It is based on previous works of Bernard _et al._[11] and Beltran _et al._[16] who developed a mean-field model with two HEMs, one medium associated with non-recrystallized grains and the other with recrystallized ones, each of these media being characterized by their MGS. Maire _et al._ work focused on the introduction of neighborhood topology into the previously defined Bernard-Beltran formalism. A specific neighborhood is proposed for each grain class of the microstructure. Each class \(i\) is characterized, in the context of GG, by two main properties: a grain radius \(R_{i}\) and a number of grains belonging to this class \(N_{i}\). The microstructure and its evolution are described as detailed below.
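Before detailing this construction, the class-wise update rules of eqs. (5)-(8) can be summarized by the minimal sketch below. It performs a single explicit Euler step with uniform, hypothetical parameters; the removal of vanished classes and the time-step control used in practice are omitted, and the HEM radius is taken here as the number-weighted mean radius of the classes.

```python
import numpy as np

def hillert_step(R, N, m_gamma, dt, d=3):
    """One explicit Euler step of eq. (5) for all grain classes.
    R: class radii (m), N: grains per class, m_gamma: reduced mobility M_GB*gamma_GB (m^2/s)."""
    R_mean = np.average(R, weights=N)          # mean grain radius of the HEM
    return R + 0.5 * (d - 1) * m_gamma * (1.0 / R_mean - 1.0 / R) * dt

def abbruzzese_step(R, N, m_gamma, dt):
    """One explicit Euler step of the 2D statistical model, eqs. (6)-(8)."""
    p = N * R / np.sum(N * R)                  # contact probabilities, eq. (6)
    return R + 0.5 * m_gamma * (np.sum(p / R) - 1.0 / R) * dt   # eq. (8)

# Hypothetical grain classes and parameters (not the values identified for 316L)
R = np.linspace(2e-6, 40e-6, 50)               # 50 class radii, from 2 to 40 um
N = np.full(R.size, 20.0)                      # 20 grains per class
R_hillert = hillert_step(R, N, m_gamma=1e-13, dt=1.0)
R_statistical = abbruzzese_step(R, N, m_gamma=1e-13, dt=1.0)
# in both cases small classes shrink and large classes grow; classes reaching R <= 0
# would have to be removed from the distribution before the next step
```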
The following equation links the radius variation \(dR_{(i,j)}\) between two representative grains to the volume variation \(dV_{(i,j)}\) associated to the GB migration: \[dV_{(i,j)}=dR_{(i,j)}\times S_{c(i,j)}=dR_{(i,j)}\times p_{(i,j)}\times S_{R_{i}}, \tag{9}\] with \(dR_{(i,j)}=M_{GB}\gamma_{GB}\left(\frac{1}{R_{j}}-\frac{1}{R_{i}}\right)dt\), as \(d=3\) (cf. eq. (5)), \(S_{R_{i}}\) the surface of the representative (spherical) grain of the class \(i\), and \(S_{c(i,j)}=p_{(i,j)}\times S_{R_{i}}\) the contact surface shared with the neighbor class \(j\), \(p_{(i,j)}\) being the contact probability defined below. The neighborhood of a given class is built by going through the grain classes of the distribution, the contact surfaces being assigned bijectively between classes. In this way, as the current class goes up in the distribution, a part of the neighborhood has already been filled by injectivity with grains having a radius smaller than the current class. After update, the remaining surface of grain class \(j\) progressively decreases. A test is realized to check if \(S_{R_{j}}>0\): if true, grain class \(j\) can take new neighbors; and if not, grain class \(j\) neighborhood is complete. Class \(j\) is therefore not available anymore as a neighbor and will not take part in the construction of the following grain classes (class \(j\) will be seen as the red grain classes from fig. 4 for all other classes). Of course, it is important here to highlight that the order used to build the neighborhood of each class can have an impact on the global answer of the GG model, as the neighborhood determination will be different for example if the distribution is treated by a decreasing GSD order. This effect will be discussed below. Eq. (9) introduces the contact probability \(p_{(i,j)}\) between two grain classes of the microstructure. The idea of the Abbruzzese _et al._ contact probability has been conserved in the equation form, however the spatial dimension of contact used is elevated to a volumic consideration as described in the following equation: \[p_{(i,j)}=\frac{N_{j}R_{j}^{3}}{\sum_{k=1}^{n_{i}}N_{k}R_{k}^{3}}, \tag{13}\] with \(n_{i}\) the number of grain classes with an incomplete neighborhood when the \(i^{\text{th}}\) grain class neighborhood is constructed. As presented previously, this model is described in 3D, so quantities exchanged during GB migration are volumic. The volume variation seen by a grain is the sum of all signed volume variations with respect to its neighbors given by eq.
(9) such as: \[\Delta V_{i}=\sum_{j=1}^{n_{i}}dV_{(i,j)} \tag{14}\] with \(\eta_{i}\) the number of neighbors of the grain class \(i\). As \(dV_{(j,i)}=-dV_{(i,j)}\) owing to the imposed bijectivity, the volume conservation of the global system is ensured. Figure 4: Specific Neighborhood construction in the Maire _et al._ model for class \(i\). ## 3 Input data for mean-field modeling ### Material-dependent model parameters acquisition Experimental data and material parameters identification are necessary to calibrate a mean-field model for a given material and a given temperature range. This section will detail the material and experimental data used for model parameters identification. #### 3.1.1 Experimental data A single-phase austenitic stainless steel (316L) was selected for this study. The identification procedure of the reduced mobility (\(M_{GB}\gamma_{GB}\) product) presented in this work relies on a minimal campaign of nine thermal treatments [17, 18]. Fifteen are provided here and their conditions are detailed in table 1. Longer annealing times from 3 to 5 hours have been realized to validate the identification procedure at long durations. These heat treatments have been performed using a Carbolite furnace. A thermocouple was placed in the furnace near to the samples to control and record the temperature evolution. These samples have been prepared for electron back scattered diffraction (EBSD) analyses by cutting and selecting a centered observation area to avoid any effects of surface oxides on analyses. Classical first steps of polishing for stainless steel were realized using abrasive SiC papers, followed by polishing with a 3 \(\mathrm{\SIUnitSymbolMicro m}\) diamond suspension and final electropolishing for 25s at 10V with a solution of 10\(\%\) perchloric acid in ethanol. EBSD analyses were performed with a Carl Zeiss Supra 40 field emission gun scanning electron microscope (FEGSEM) coupled with a Bruker Quantax EBSD detector and the Esprit 2.3 software. A voltage of 20kV and a 120 \(\mathrm{\SIUnitSymbolMicro m}\) aperture were used. The post-processing of EBSD data was achieved with the MTEX Matlab toolbox [19]. The step and the cartography size have been targeted to get a representative number of grains in the observation area. Table 2 gathers the latter described parameters as well as the number of grains observed in each cartography. This number of grains is calculated without taking into account twin boundaries and using a misorientation of 15 to define a grain boundary. The minimal size to consider an entity as a grain is superior to 5 pixels. The process of entities removal under the set threshold and a grain boundaries smoothing were applied considering only indexed pixels. These data give also access to 2D GSD that will be transposed in 3D with the Saltykov algorithm (detailed in the following section) to be used as initial input distribution in the model but also as experimental data to compare with simulation results in section 4. Fig. 5 displays the IPF Z maps of some of the post-treated experimental results as well as the related histograms in number frequency. 
\begin{table} \begin{tabular}{|c|c|c|} \hline \(1000\,^{\circ}\mathrm{C}\) & \(1050\,^{\circ}\mathrm{C}\) & \(1100\,^{\circ}\mathrm{C}\) \\ \hline \(30\,\mathrm{min}\) & \(30\,\mathrm{min}\) & \(30\,\mathrm{min}\) \\ \hline \(1\,\mathrm{h}\) & \(1\,\mathrm{h}\) & \(1\,\mathrm{h}\) \\ \hline \(2\,\mathrm{h}\) & \(2\,\mathrm{h}\) & \(2\,\mathrm{h}\) \\ \hline \(3\,\mathrm{h}\) & \(3\,\mathrm{h}\) & \(3\,\mathrm{h}\) \\ \hline \(5\,\mathrm{h}\) & \(5\,\mathrm{h}\) & \(5\,\mathrm{h}\) \\ \hline \end{tabular} \end{table} Table 1: Heat treatments campaign conditions, with orange contoured conditions used as qualibration values, red conditions are used for both calibration and validation and green ones are only used for validation. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{3}{c|}{\(T=\)1000\({}^{\circ}\mathrm{C}\)} & \multicolumn{3}{c|}{\(T=\)1050\({}^{\circ}\mathrm{C}\)} & \multicolumn{3}{c|}{\(T=\)1100\({}^{\circ}\mathrm{C}\)} \\ \hline \(t\) & \(h\) (\(\mathrm{\SIUnitSymbolMicro m}\)) & \(L_{x}\times L_{y}\) (\(mm\times mm\)) & \#G & \(h\) (\(\mathrm{\SIUnitSymbolMicro m}\)) & \(L_{x}\times L_{y}\) (\(mm\times mm\)) & \#G & \(h\) (\(\mathrm{\SIUnitSymbolMicro m}\)) & \(L_{x}\times L_{y}\) (\(mm\times mm\)) & \#G \\ \hline Initial & 1.49 & 1.1\(\times\)0.85 & 980 & 1.49 & 1.1\(\times\)0.85 & 980 & 1.49 & 1.1\(\times\)0.85 & 980 \\ \hline 30min & 2.5 & 2\(\times\)1.4 & 2654 & 1.13 & 1\(\times\)0.7 & 534 & 3.3 & 3.7\(\times\)2.8 & 3509 \\ \hline 1h & 2.5 & 2\(\times\)1.4 & 2078 & 3 & 3\(\times\)2.2 & 1964 & 3.3 & 3.7\(\times\)2.8 & 3590 \\ \hline 2h & 1.13 & 1\(\times\)0.7 & 456 & 3 & 3\(\times\)2.2 & 1154 & 3.77 & 3.7\(\times\)2.8 & 2208 \\ \hline 3h & 1.13 & 1\(\times\)0.7 & 468 & 1.13 & 1\(\times\)0.7 & 300 & 3.77 & 3.7\(\times\)2.8 & 2263 \\ \hline 5h & 1.13 & 1\(\times\)0.7 & 243 & 1.13 & 1\(\times\)0.7 & 133 & 3.77 & 3.7\(\times\)2.8 & 2304 \\ \hline \end{tabular} \end{table} Table 2: Table gathering, for the different isothermal treatment temperatures and for the different holding times, the post-processing details composed of the EBSD step size (\(h\)), the dimensions in each direction (\(L_{x}\times L_{y}\)) of analyzed areas of the sample, and the number of grains represented in the EBSD images considering all the grains without taking into account the twins boundaries (#G). ### Use of Saltykov algorithm to obtain a 3D GSD Experimentally acquired EBSD data correspond to 2D slices of 3D polycrystals. In order to be consistent with mean-field simulations results and to enable comparisons, a 2D to 3D conversion is performed. To this end, the Saltykov method [20] is applied to experimental GSDs. Initially, the method had been developed to exhibit 2D section data from 3D granulometry data of spherical particles. The inverse Saltykov method gives the possibility to transform GSD data from a 2D histogram distribution to a 3D discrete distribution [20]. Several assumptions are present in this method which are compatible with the topology of the considered initial 316L microstructure. The Saltykov method has indeed been proven to be efficient on a similar equiaxed polycrystal [21]. No assumption is required regarding the shape of the input distribution, so multi-modal distributions can also be submitted to such a method [22]. The methodology ensures that the average of an infinite number of 2D-cuts of a polycrystal respecting the obtained 3D-discrete distribution will converge towards the imposed 2D-histogram distribution. 
However, the quality of the methodology is, of course, also linked to the statistical representativity of the input 2D GSD. This procedure is illustrated in fig. 5 to exhibit the 3D distribution evolution after the 2h thermal treatments at the different temperatures summarized in table 2. Moreover, fig. 6 illustrates for one particular microstructure (\(T=\)\(1050\,\mathrm{\SIUnitSymbolCelsius}\) for \(t=\)\(5\,\mathrm{h}\)) the comparison between the 2D-histogram distribution and the 3D obtained discrete distribution thanks to the inverse Saltykov transformation. Figure 5: In background: EBSD IPF Z maps of 316L microstructures of (a) the as-received material, after (b) 2h at 1000\(\,\mathrm{\SIUnitSymbolCelsius}\), (c) 2h at 1050\(\,\mathrm{\SIUnitSymbolCelsius}\) and (d) 2h at 1100\(\,\mathrm{\SIUnitSymbolCelsius}\). Black lines denote all grain boundaries. In foreground for each image: the corresponding 3D-GSD after an inverse Saltykov transformation. #### 3.2.1 GB mobility parameter identification A first approximation by the use of classical B&T lawThe first step of the GB mobility identification procedure is to find an initial approximation for the reduced mobility \((M_{GB}\gamma_{GB})\), in order to run a first mean-field computation. To this end, the historical form of the Burke and Turnbull law [1, 14] (eq. (4) with \(\tilde{\alpha}=1/2\) and \(n=1\)) is used to obtain a first approximated value of the reduced mobility for each considered temperature (see table 1). For each heat treatment temperature, the B&T law is plotted in order to obtain a linear dependence between \(\bar{R}^{2}-\bar{R}_{0}^{2}\) and the time \(t\). Fig. 7 illustrates the methodology on sets of experimental points for the three studied temperatures where the best linear fit directly provides a first rough value for \((M_{GB}\gamma_{GB})\) called \((M_{GB}\gamma_{GB})_{ini}\). Refined identificationThe values \((M_{GB}\gamma_{GB})_{ini}\) are then used in Hillett and Maire _et al._ models for comparison with experimental points (solid blue and red lines respectively in fig. 8). These first simulation results are then used to perform an optimization of the reduced mobility by calculating the \(L^{2}\) error on several points between the simulation and experimental results. A translation coefficient \(c_{fit}\) is defined by a least square method to shift the simulated curve in order to improve the correlation with experimental data so that: \[\left(M_{GB}\gamma_{GB}\right)_{fit}=c_{fit}\times\left(M_{GB}\gamma_{GB} \right)_{ini}. \tag{15}\] The cost function of the least square method is as followed: \[F(c)=\sum_{i=1}^{n}f_{i}^{2}(c_{i}), \tag{16}\] where \(n\) is the number of experimental points, \(c_{i}=\frac{t_{sim_{i}}}{t_{exp_{i}}}\) the translation coefficient between the interpolated curve of the simulation data and the experimental points and \(f_{i}(c_{i})\) compute the \(L^{2}\) errors between the simulated and experimental point for each value of \(c_{i}\): \[f_{i}(c_{i})=L_{i}^{2}(c_{i})=100\times\sqrt{\frac{\sum_{k=1}^{n}(\frac{t_{sim _{k}}}{c_{i}}-t_{exp_{k}})^{2}}{\sum_{k=1}^{n}t_{exp_{k}}^{2}}}\text{ with }\forall i\in[\![1,n]\!]. 
\tag{17}\] Reduced Mobility data Model-dependence of Reduced MobilitySince the intrinsic GB mobility of a material is hard to quantify experimentally, a common approach is to model the GB migration evolution (for instance using the \(v=M_{GB}P\) equation) Figure 6: Inverse Saltykov method illustrated on the 2D GSD of a sample heat treated at \(1050\,\mathrm{\SIUnitSymbolCelsius}\) for \(5\,\mathrm{h}\), the blue histogram corresponds to the 2D GSD and the orange discrete distribution corresponds to the obtained 3D results after the inverse Saltykov algorithm. and consider the mobility \(M_{GB}\) as a material-dependent model parameter. Depending on the definition used in the pressure term \(P\), mobility will also be a model-dependent parameter. Fig. 8 solid lines illustrate the difference in response from Hillert and Maire _et al._ models to an identical mobility value. When the mobility is identified for each model, the respectively colored dashed lines are obtained for both models in the same figure. Reduced mobility values at the temperature of \(1100\,^{\circ}\mathrm{C}\) are gathered in table 4. The GB migration equation in these models (cf. eq. (5), (7), (10)) is rather similar which explains that the identified reduced mobility are of the same order of magnitude. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Model & Hillert & Abbruzzese & Maire \\ \hline \(M_{GB}\gamma_{GB}\)(m\({}^{2}\,\mathrm{s}^{-1}\)) & 1.08e-13 & 1.27e-13 & 1.10e-13 \\ \hline \end{tabular} \end{table} Table 4: Identified reduced mobility values for the different models at \(1100\,^{\circ}\mathrm{C}\) for 316L. Figure 7: Use of the Burke & Turnbull law to obtain the first value of \((M_{GB}\gamma_{GB})_{ini}\) for 316L at (a) \(1000\,^{\circ}\mathrm{C}\),(b) \(1050\,^{\circ}\mathrm{C}\) and (c) \(1100\,^{\circ}\mathrm{C}\). ## 4 Results and discussion This section is dedicated to numerical parameters optimization and to the comparisons between results obtained using the different introduced mean-field models for 316L austenitic stainless steel in GG context. Different heat treatment conditions (temperature, duration) of table 1 are simulated and compared with experimental GSDs. ### Numerical parameters As described in section 2, the three mean-field models (Hillert, Abbruzzese _et al._ and Maire _et al._) are based on specific media or neighborhood. These latter may have an influence on several modeling parameters which requires a thorough study to either optimize their value or evaluate their impact on model predictions. #### 4.1.1 Convergence study concerning the number of grain classes introduced in the model An important common parameter to these models is the initial number of grain classes introduced at the start of a simulation. Statistical representativity is represented by two criteria: the minimum number of grain classes necessary for a GSD to be representative of an experimental microstructure and the representation necessity of the different grain populations existing in the material (detection of mono- or multimodal distributions). The convergence study performed here will focus on the first described criterion. In GG, the number of grain classes drops in time due to capillarity effects, the representativity of the microstructure is therefore impacted. Maire _et al._ neighborhood construction relies on a good statistical representativity, as it widens the choice of classes in the neighbors selection. 
A convergence study is performed on this parameter in order to determine the minimal initial number of grain classes necessary to obtain a reproducible final GSD. A thermal treatment of one hour at \(1100\,^{\circ}\mathrm{C}\) is used for this discussion and GSD results will be compared to a reference simulation for each model. More precisely, seven simulations from 25 to 2000 initial grain classes have been run and compared to a reference simulation where 5000 grain classes were considered. This reference is considered as representative of experimental values. Indeed, the number of grains in the sample area examined by EBSD analysis varies from 3500 to 130 grains with an average around 1400 grains for all studied conditions according to table 2. The use of 5000 initial grains classes perfectly covers the 980 grains observed experimentally. Also, a final number of 1300 classes is obtained with this simulation after 5h. To be able to define at which value convergence is reached, eq. (18) gives the relative error, at time \(t\) between the reference case and the tested GSDs: \[L^{2}(t)=100\times\sqrt{\frac{\sum_{i=1}^{n_{\text{max}}}\left(S_{i}-S_{i}^{ \prime}\right)^{2}}{\sum_{i=1}^{n_{\text{max}}}\left(S_{i}^{\prime}\right)^{2 }}}, \tag{18}\] Figure 8: Curve fitting of simulation points (obtained with \(\left(M_{GB}\gamma_{GB}\right)_{ini}\)) with respect to experimental points by minimizing \(L^{2}\) error for Maire _et al._ and Hillert models at \(1050\,^{\circ}\mathrm{C}\) for 316L. with \(n_{\text{bins}}\) the number of bins in the histograms used for comparison. This number \(n_{\text{bins}}\) is introduced to simplify the visual representation of histograms and have a meaningful comparison between simulations, a reduced number of histogram bins is then selected comparatively to the total number of grain classes. Typically, for the following histogram representations presented in this paper at exception of fig. 8(a), \(n_{\text{bins}}\) is set to 25. The corresponding bin width is computed from this number. This allows to recreate a unique ECD vector of 25 visual grain classes equally distanced from each other to rightfully compare GSDs. In fig. 8(a), to introduce the \(L^{2}\) comparison, a number of 9 bins is selected to simplify the visual explanation. In this figure \(S_{i}\) (resp. \(S_{i}^{\prime}\)) corresponds, for the Maire _et al._ model, to the \(i^{\text{th}}\) bin grain class area of the GSD at \(t=1\,\mathrm{h}\) (resp. of the model reference GSD). Fig. 8(b) plots the \(L^{2}(1\,\mathrm{h})\) evolution for the different initial number of grain classes in each model at \(1100\,\mathrm{\SIUnitSymbolCelsius}\). From 25 to 500 initial grain classes \(L^{2}(1\,\mathrm{h})\) error decreases drastically from above 100\(\%\) to 3\(\%\). 25 initial grain classes give rise to a \(L^{2}(1\,\mathrm{h})\) error superior to 100 \(\%\) for all models, traducing therefore a degradation of the statistical representation. A convergence threshold is set at 5\(\%\), considering that convergence is reached below that error. From 500 to 2000 initial grain classes, the simulations are therefore converging considering the threshold. For comparisons performed in section 4, an initial number of 1000 grains classes is chosen in order to assure consistency and computational time efficiency. 
This value is also in perfect accordance with the experimental initial number of grains of 980 as exposed in table 2.

Figure 9: (a) Histogram description of the \(L^{2}(1h)\) error method to analyse convergence of simulations at \(1100\,\mathrm{\SIUnitSymbolCelsius}\), (b) convergence study of the initial grain classes number by computing the \(L^{2}(1h)\) error at \(1100\,\mathrm{\SIUnitSymbolCelsius}\) for Hillert, Abbruzzese _et al._ and Maire _et al._ models on 316L and (c) same convergence study computing the \(L^{2}(1h)\) error for the three models using the final number of grain classes as x-axis.

The statistical representativity depends also on a second parameter which corresponds to the number of grain classes present at the end of a numerical computation. Fig. 8(c) illustrates the same convergence study as in fig. 8(b) but where the x-axis represents the number of final grain classes at the end of each simulation. This shows that a minimum number of 200 remaining grain classes is necessary to stay below a threshold of 5% of \(L^{2}(1h)\) error. As the model is working in grain classes, it can be considered that 200 classes can describe with good accordance the range of experimental number of grains from 130 to 3500 observed in EBSD maps for the different conditions. This figure also shows that, for an identical initial number of grain classes, Hillert model conserves a higher number of remaining grain classes and therefore a better statistical representativity. For forthcoming GSD comparisons, the statistical representativity for heat treatments from 2 to \(5\,\mathrm{h}\) at \(1100\,^{\circ}\mathrm{C}\) with an initial number of 1000 grain classes was also verified thanks to the Maire _et al._ model as illustrated in fig. 8(c) with the colored triangle icons. Indeed, one can see for the three thermal treatments that the threshold for representativity previously defined is well respected as the \(L^{2}(t)\) error (comparatively to the reference case at 5000 grain classes) remains below 5% and that the remaining number of classes after annealing time is above 200.

#### 4.1.2 Different spatial dimensions considered to define the contact probability

Description of the spatial dimensions. In the original work of Abbruzzese _et al._[8], the 2D contact probability is computed based on the perimeter of the neighbor class (cf. section 2.3). This formalism was extended to 3D by considering sphere surfaces instead of disk perimeters in Di Schino _et al._[23]. The contact probability construction suggested by Maire _et al._ is a generalization of the latter formalism. This section will focus on determining if the contact probability has a positive effect on refining the description of Maire _et al._ model simulated GSDs. Inspired by the Abbruzzese _et al._ work with eq. (6), four types of contact probability can be derived, from a probability in number to a volumic probability: \[p_{(i,j)}^{m}=\frac{N_{j}R_{j}^{m}}{\sum_{k=1}^{n_{i}}N_{k}R_{k}^{m}}\quad\text{ with }m\in[\![0,3]\!], \tag{19}\] where \(N_{j}\) is the number of grains belonging to the grain class \(j\) and \(n_{i}\) the number of grain classes with an incomplete neighborhood when \(p_{(i,j)}^{m}\) is computed for the grain class \(i\). The four contact probability definitions described by eq. (19) are defined for the neighborhood of the first grain class of the microstructure at the initial time t=\(0\,\mathrm{s}\), as plotted in fig. 10.
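A minimal sketch of eq. (19) is given below; for simplicity, the sum runs over all grain classes of a hypothetical distribution, whereas in the model it only involves the \(n_{i}\) classes whose neighborhood is still incomplete.

```python
import numpy as np

def contact_probabilities(R, N, m):
    """Generalized contact probabilities of eq. (19): p_j proportional to N_j * R_j**m
    (m = 0: number, m = 1: perimeter-based, m = 2: surface-based, m = 3: volume-based)."""
    w = N * R ** m
    return w / w.sum()

# Hypothetical discrete GSD: 25 classes with radii from 1 to 25 um, equal populations
R = np.arange(1.0, 26.0)
N = np.full(R.size, 40.0)
for m in range(4):
    p = contact_probabilities(R, N, m)
    # increasing m shifts the weight towards the large-radius classes (cf. fig. 10)
    print(f"m = {m}: cumulated weight of the 5 largest classes = {p[-5:].sum():.2f}")
```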
The probabilities are used in the context of Maire _et al._ model detailed in section 2.4 by modifying eq. (13). In eq. (19) with \(m=0\), the contact probability \(p_{(i,j)}^{0}\) is computed in terms of number with no influence of the grain size. In fig. 10, the orange curve shows that the majority of the weight is given to smaller neighbor grains as only the frequency of occurrence in the microstructure is taken into account. The \(p_{(i,j)}^{1}\) formulation uses the perimeter of the grain class to describe the contact probability. The latter takes its highest probability values on the first one-third of the grain classes, which means that small grain classes are still more represented in the neighborhood of the first class in this description. The \(p^{2}_{(i,j)}\) formulation considers the surface of the neighbor grain class \(j\) to build the contact probability. A shift toward the middle grain sizes is observed, leading to larger grains taking part in the neighborhood compared to the \(p^{0}_{(i,j)}\) and \(p^{1}_{(i,j)}\) definitions. Finally, the description \(p^{3}_{(i,j)}\) imposes a neighborhood based on the volume of the neighbor grain classes, which gives more weight to larger grain classes as highlighted by the green curve in fig. 10. Looking solely at neighborhood construction, none of these representations seems more justified than the others. Depending on the selected representation, an emphasis is put on certain groups of grains, as explained above.

Figure 10: Description of the different contact probabilities \(p_{(i,j)}^{m}\) with neighbor classes of the first class (\(i=0\)) of the microstructure at t=\(0\,\mathrm{s}\).

Impact on the distribution results. To select one of the above contact probability definitions for the study, grain size distribution results will be compared using a GG test case. For the three investigated temperatures, 1000, 1050 and 1100\({}^{\circ}\)C, an annealing of two hours is simulated with the four contact probabilities. The GSD results are compared to the data converted into a 3D GSD in fig. 11 using the Saltykov method described in section 3.2. The experimental data are represented by the discrete black histogram. At all three temperatures, \(p^{3}_{(i,j)}\) seems to provide a better fit of the tail of the distribution. This volume-based description brought by eq. (19) with \(m=3\) accentuates the topological effect of the neighborhood construction by giving more weight to bigger grains. If a homogeneous microstructure is composed solely of small grains, the contact probability description will give similar contact probabilities to these grains with a comparable volume. However, if a heterogeneous microstructure with large and small grains is considered, the volumic probability will bring a topological aspect by giving more weight to larger grains. Indeed, if bigger grains have a better representation in the neighborhood of a grain class, then the \(dV\) exchange achieved with the GB migration eq. (9) for these grain classes is increased. Therefore, they have statistically more chances to grow and be represented in the distribution in further time steps.

#### 4.1.3 Impact of the grain classes order in the neighborhood construction

The complexity of the neighborhood construction proposed by Maire _et al._ is influenced by the selecting order of the grain classes in the GSD. As mentioned in section 2.4, in Maire _et al._ original work [13] the ascending order has been arbitrarily selected for the GSD.
For Hillert and Abbruzzese _et al._ models, the MGS evolution with respect to time shows no changes no matter in which order the grain classes are selected, as no topology is involved in the computation of their surrounding media. To study the effect of the selecting order of the grain classes in the neighborhood construction on the results, two other types of selecting order are considered: GSD is either selected by decreasing grain sizes or randomly. In fig. 12a, the MGS evolution is strongly affected by the selection in descending sorting order for construction of the neigborhood as described by the orange solid line. The shuffle order of construction has a smaller impact on the MGS kinetic compared to the latter one. The reduced mobility values need to be re-identified to retrieve a good fit with the experimental data. The values for these specific selecting orders of construction in Maire _et al._ model are gathered in table 5 and the associated MGS evolutions are represented by the dashed lines in fig. 12a. The corresponding GSDs are presented in terms of number frequency and volume fraction in fig. 12b and 12c. When the reduced mobility is re-identified, predicted GSDs remain close to each other independently of the selected sorting order of construction. Ascending and descending construction orders tend to predict longer distribution tails than the shuffle order. However, if the mobility is not re-identified, the strong impact for MGS prediction can be explained by the difference in neighborhood construction for the same current grain class. The microstructure representations of these construction orders are schematically presented in fig. 13a and fig. 13b, describing two different construction patterns. In ascending sorting order, bigger grains of the microstructure are red grain classes, _i.e._ the ones that will not take part in the neighborhood of the current class \(i\). On the contrary, in the descending sorting order construction, red grain classes are the smaller ones. This will modify the GB migration volume exchanges between neighboring grains during GB migration. For the forthcoming comparisons, the original ascending GSD sorting order will be conserved, as in this case the reduced mobility value has a similar order of magnitude to the one of the other models. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Sorting order & Ascending & Descending & Shuffle \\ \hline \(M_{GB}\gamma_{GB}(\mathrm{m^{2}\,s^{-1}})\) & 2.19e-13 & 5.00e-13 & 2.30e-13 \\ \hline \end{tabular} \end{table} Table 5: Identified reduced mobility values for Maire _et al._ model for different selecting order at \(1100\,^{\circ}\mathrm{C}\) for 316L. ### Comparison of mean-field models using different initial microstructures In this section, initial mono- and bimodal distributions will be used to evaluate the impact of heterogeneities in the initial GSD. The monomodal test case use the initial experimental GSD. Thermal treatment simulations will be compared to experimental data. However, bimodal comparisons are confronted only between themselves as no experimental data were available for this case. In both analyses, Maire _et al._ model is employed with a volumic contact probability as detailed in section 4.1.2 and with an initial number of grain classes of 1000 as deduced from section 4.1.1. An ascending sorting order for the input distribution is considered as originally used in previous Maire _et al._ work [13]. #### 4.2.1 Comparison of mean-field models with a monomodal initial microstructure Fig. 
14 (a) to (h) illustrate mean-field GSDs predictions after different annealing times at \(1100\,^{\circ}\mathrm{C}\) in comparison with experimental data obtained thanks to EBSD. The tail of GSDs representing larger grains is better predicted by Maire _et al._ model, either considering the frequency in number or the volume fraction representation of the GSD. However, the volume fraction histograms show that none of the three models catches the entire experimental distribution tail for any of the studied annealing times. This may be due both to implementation hypotheses and to the experimental GSD statistical representativity. Hillert model seems to better predict the number frequency histograms and under-estimates the volumic predictions. However, for Abbruzzese _et al._ and Maire _et al._ models, the first part of the GSD, composed of small grain classes, shows good accordance with experimental data in volume fraction (fig. 14 (a), (c), (e), (g)). On the other hand, the number frequency predictions expose some divergence with respect to the experimental data for small grain sizes.

Figure 11: Comparison of the impact on the distribution of the \(p_{(i,j)}^{m}\) with \(m\in\llbracket 0,3\rrbracket\) for an annealing of \(2\,\mathrm{h}\) at different temperatures for 316L.

One common strong hypothesis in the models is considering spherical grains and spherical evolution with capillarity. Models also make an important approximation by considering grain boundary properties as isotropic. Indeed, the grain boundary energy \(\gamma_{GB}\) and mobility \(M_{GB}\) are both considered constant and identical for all grain classes used in the simulation. However, experimental microstructures have been proven [24, 25] to exhibit a dispersion in values of these properties. These hypotheses tend to smooth all microstructure heterogeneities at the beginning or during simulations, which may explain the GSD extremity differences. The radius variation exchange considered in the GB migration equation for the different microstructure descriptions also impacts the GSD prediction. Indeed, Hillert model, by employing an HEM, accounts for these variations through the choice of the MGS equation. Experimental factors can also play a role in these differences. The statistical representativity of experimental data can be limited and the inverse Saltykov method is not a deterministic one. As a result, no perfect reproducibility of the data is possible. This can explain the difficulty of models to predict the GSD tail. The discussed models provide a general tendency of the GSD evolution depending on the thermal conditions. Moreover, no special treatment has been done to consider twinning in this work, even if these special GB can be considered as partially taken into account in identified parameters of the models such as \(M_{GB}\).

Figure 12: Comparing different selecting orders for the neighborhood construction using the Maire _et al._ model. Test case selected here is a \(1\,\mathrm{h}\) annealing at \(1100\,\mathrm{\SIUnitSymbolCelsius}\). (a) MGS evolution with respect to time, GSD at the end of the heat treatment (t=\(1\,\mathrm{h}\)) considering (b) number frequency and (c) volume fraction.

For more quantified comparisons, \(L^{2}(t)\) error is computed with eq. (18), with \(S_{i}\) (resp. \(S^{\prime}_{i}\)) corresponding to the \(i^{th}\) bin grain class area of the simulated GSD at time \(t\) (resp. of the experimental GSD at time \(t\)). Fig.
(a)a and (b)b illustrate the comparison of the latter criterion for the four experimented times in both representations in number and volume fraction. High \(L^{2}(t)\) error values can be explained by the assumptions and experimental statistical representativity detailed above. However, these results provide a good relative comparison basis between models. As expected, Hillert model provide a good prediction rate in number frequency. But, its volumic predictions are low compared to the two others. Abbruzzese _et al._ and Maire _et al._ models provide a relatively constant prediction in both representations. In order to qualify a model prediction ability, good results in both representation are necessary. Maire _et al._ expose here the overall lowest \(L^{2}(t)\) values, making it the model with the highest accuracy compare to the others. In the case of GG, the simulation of an initial monomodal GSD is overall similarly predicted by the three models. Maire _et al._ model improves GSD predictions on larger grain sizes but Hillert gives satisfying results considering the simplicity of its description when only GG mechanism is at play. #### 4.2.2 Comparison of mean-field models on bimodal initial microstructure By developing a test case using a bimodal initial GSD, the neighborhood construction of the Maire _et al._ model is then tested for an initial heterogeneous GSD. An input microstructure is selected with a MGS of \(42\,\mathrm{\SIUnitSymbolMicro m}\) and a four ATSM grain size difference between the two selected grain populations to respect the bimodal definition of the ASTM standard E112 [26] as shown in fig. (a)a. The previously identified reduced mobility values are used. In this case, no experimental data are available, thus only a relative comparison between models will be made. The MGS evolution can be visually dissociated into two kinetics. The first one is appearing in the range of the first \(10\,\mathrm{min}\) then a more steady increasing kinetic until the end of the annealing treatment of \(2\,\mathrm{h}\) is observed in fig. (b)b. Three associated GSD at \(5\,\mathrm{min}\), \(10\,\mathrm{min}\), and \(2\,\mathrm{h}\) comparing the behavior of the models are given in fig. (c)c, (d)d and (e)e. For all cases, there are very few distinctions between models predictions and the bimodal aspect is rapidly smoothed after \(5\,\mathrm{min}\) with the apperance of a monomodal distribution, excepted Hillert model predictions that retained a small bimodal distribution at \(5\,\mathrm{min}\) and \(10\,\mathrm{min}\). The breaking point between the two kinetics in the MGS evolution corresponds to the point where the monomodal distribution is reached after about \(10\,\mathrm{min}\). Impact of the selecting order of neighborhood construction on the distribution predictionIn the same way, as in section 4.1.3, an observation of the use of a descending selecting order of neighborhood construction is performed. Figure 13: Specific neighborhood construction considering (a) the ascending and (b) the descending selecting order in the GSD. Similarly, the MGS evolution of Hillert and Abbruzzese _et al._ in fig. 17a remains unchanged and Maire _et al._ model kinetics is one more time slowed down. However, the associated GSDs at \(5\,\mathrm{min}\) and \(10\,\mathrm{min}\) show well the bimodality. 
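For readers wishing to reproduce this kind of test, a bimodal initial GSD can be assembled, for instance, as a mixture of two log-normal grain populations, as in the sketch below. The two populations are purely illustrative and do not reproduce the exact distribution (MGS of \(42\,\mathrm{\SIUnitSymbolMicro m}\), four ASTM numbers apart) used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two illustrative log-normal populations of equivalent diameters (um);
# the mode positions and population sizes are hypothetical.
small = rng.lognormal(mean=np.log(20.0), sigma=0.25, size=800)
large = rng.lognormal(mean=np.log(80.0), sigma=0.25, size=200)
ed = np.concatenate([small, large])            # equivalent diameters of all grains

mgs = ed.mean()                                # mean grain size, cf. eq. (1)
counts, edges = np.histogram(ed, bins=25)      # 25-bin GSD, as used for the comparisons
print("MGS (um):", round(mgs, 1))
print("number frequency per bin:", counts / counts.sum())
```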
Figure 14: GSD comparisons in (a,c,e,g) number frequency and (b,d,f,h) volume fraction for the Hillert, Abbruzzese _et al._ and Maire _et al._ models with experimental data at \(1100\,^{\circ}\mathrm{C}\) for (a,b) 1 h, (c,d) 2 h, (e,f) 3 h and (g,h) 5 h of thermal treatment on 316L.

The Maire _et al._ model seems to retard the smoothing effect conducting to the final Gaussian distribution, as shown in fig. 17 (b) and (c). Indeed, in both graphs, the small grain size population keeps a higher number frequency in the case of the Maire _et al._ model than for the other GG models. In addition to the construction impact due to the selecting order as described in fig. 13, this construction technique does not allow the same neighborhood diversity for all grain classes. As exposed by fig. 4, grain classes selected in the middle of the construction process will benefit from a more important number of neighbors, as part of it is built with blue grains having incomplete neighborhoods. As it gets to the end of the construction, fewer grain classes are available to be part of the neighborhood. Considering this point, the order of selection of the neighborhood construction has an impact on the GSD prediction even if the bijectivity is taken into account. This topological information is consistent with what is observed on the GSD evolution. In the bimodal case, it seems that the ascending order construction promotes the elimination of the small grain population of the distribution with a faster kinetic. Indeed, the curvature driving pressure of the GG phenomenon will tend to facilitate the disappearance of smaller grains considering the latter neighborhood construction. However, this elimination kinetic seems to be postponed when the descending construction order is employed. In this case, small grain classes will be better dispersed in the neighborhood construction and less favorable to disappearance. This leads to a higher MGS in the case of the ascending order distribution, around \(80\,\mu\mathrm{m}\), while the descending counterpart barely reaches \(66\,\mu\mathrm{m}\). Finally, if an analysis of the remaining number of grain classes at the end of the first phase, around \(1000\,\mathrm{s}\) of heat treatment, is done, a difference of 90 grain classes in the considered system is observed. The sorting order of the grain classes using the specific neighborhood construction of the Maire _et al._ model has a direct impact on the microstructure evolution and therefore on the GSD predictions.

## 5 Conclusion

Different GG mean-field models were investigated in this article, first by a detailed explanation of their equations and hypotheses and then by comparing heat treatment predictions on 316L steel. The neighborhood description in the Maire _et al._ model is based on the original work of Abbruzzese _et al._ [8] that described a 2D GG model with a statistical neighborhood based on contact probabilities involving the entire microstructure. It relies on a hybrid description by using the statistical approach of contact probabilities to define the neighbor surfaces in contact with grain classes, coupled with a deterministic number of neighbors ruled by the use of a bijectivity criterion. To optimize the accuracy of the discussed models, parameter analyses were performed to observe their impact on the GSD predictions. First, a convergence study was achieved to optimize the initial and final number of grain classes in order to ensure statistical representativity.
Then, the contact probability definition was identified to be a leverage quantity in the GSD description. Indeed, the latter impacts the distribution of the contact surface of the neighbor classes. Four contact probabilities have been tested, from a ratio in number not involving the size of the grains to a volume fraction. This means that for a contact probability in number, the neighbor grain size will only impact the grain evolution through its curvature radius in the GB migration. However, if a volumic contact probability is considered, a more important weight will be given to large grains, which emphasizes the effect of grain topology. The volumic contact probability has been selected for the previous reasons. In particular, this has proven to give better GSD predictions by improving the description of the tail of the GSD when compared to experimental data, as shown in section 4.

Figure 15: Comparison of \(L^{2}(t)\) error computed from GSDs for the three models and heat treatments from \(1\,\mathrm{h}\) to \(5\,\mathrm{h}\) at \(1100\,^{\circ}\mathrm{C}\).

Figure 16: Comparison of the GG predictions at \(1100\,^{\circ}\mathrm{C}\) starting from an initial bimodal distribution with a neighborhood construction selected in the ascending order for the Maire _et al._ model: (a) Initial bimodal distribution, (b) MGS evolution in time, and GSDs at (c) \(5\,\mathrm{min}\), (d) \(10\,\mathrm{min}\) and (e) \(2\,\mathrm{h}\).

And finally, the selected neighborhood construction order in the GSD strongly impacts the microstructure evolution in the Maire _et al._ model. For this model, reduced mobility values need to be re-identified in order to be predictive when a different selecting order is adopted. Once optimized parameters were determined, a focus was made on comparing the predicted GSD to EBSD experimental data for one to five hours of annealing. To enable the comparisons, the identification of the reduced mobility with respect to temperature and the use of the Saltykov method were necessary. The identified values of the reduced mobility ensure that the models exhibit a MGS evolution similar to the reference experimental data. On the other hand, the Saltykov method enables the conversion of 2D GSD histograms into 3D discrete GSDs, providing a way for 2D to 3D conversion of experimental EBSD data. From these comparative histograms, a good general accordance of all models with the experimental data was observed. However, the Maire _et al._ model gives more satisfying results in describing the distribution tails than the Hillert and Abbruzzese _et al._ ones. The \(L^{2}(t)\) computation also gives a reduced error for the Maire _et al._ model on the studied GG cases. When comparing implementation simplicity and GSD response, the Hillert model gives good predictions with a simpler medium description. We have discussed in this article the interest of the Maire _et al._ model solely in the frame of GG. Originally designed to model the discontinuous dynamic recrystallization phenomenon [13], its strength relies on accounting for topology in the considered grain neighborhoods. It is especially powerful when dealing with different types of grains.
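As a concrete illustration of the four contact-probability weightings recalled above, the snippet below assumes the generic form \(p_{j}\propto N_{j}R_{j}^{m}\) for \(m=0\) (number fraction) up to \(m=3\) (volume fraction); the grain classes are invented for the example and the bijectivity bookkeeping of the specific neighborhood construction is deliberately left out.

```python
import numpy as np

def contact_probabilities(radii, counts, m):
    """Contact probability of each grain class under an R^m weighting.

    m = 0 corresponds to a pure number fraction, m = 3 to a volume fraction;
    the generic form p_j ~ N_j * R_j^m is assumed here, without the
    bijectivity criterion of the Maire et al. construction.
    """
    w = counts * radii ** m
    return w / w.sum()

radii = np.array([5.0, 20.0, 60.0])     # three grain classes (um)
counts = np.array([500, 100, 5])        # grains per class
for m in range(4):
    p = contact_probabilities(radii, counts, m)
    print(f"m={m}: " + ", ".join(f"{x:.3f}" for x in p))
```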
By distinguishing RX grains from non-RX ones, the specific neighborhood construction provides better predictions of the GSD with respect to the experimental GSD. While this article was dedicated to validating the neighborhood construction in the case of GG, new developments aiming to improve GSD predictions during and after recrystallization are to follow.

Figure 17: Comparison of the GG predictions at \(1100\,^{\circ}\mathrm{C}\) starting from an initial bimodal distribution with a neighborhood construction selected in the descending order for the Maire _et al._ model: (a) MGS evolution in time, and GSDs at (b) \(5\,\mathrm{min}\), (c) \(10\,\mathrm{min}\) and (d) \(2\,\mathrm{h}\).

## Acknowledgements

The authors thank the ArcelorMittal, Aperam, Aubert & Duval, CEA, Constellium, Framatome, Safran and Transvalor companies and the ANR for their support through the DIGIMU consortium and the RealIMotion ANR industrial Chair (Grant No. ANR-22-CHIN-0003).
2309.08067
Evaluating Direct RF Sampling Performance for RFSoC-based Radio-frequency Astronomy Receivers
As the maximum RF input and output frequencies of the integrated data converters in RFSoC increase, it becomes practical to digitize and synthesize RF signals in the majority of C band directly without analogue up and down mixing circuits. The elimination of the mixer circuits can significantly simplify the architecture of the receivers or readouts for radio astronomy telescopes. For systems with large bandwidth or high channel counts, direct sampling can dramatically reduce the size and cost of the overall system. This paper focuses on summarising part of the preliminary characterization results for direct sampling with RFSoC data converters in higher order Nyquist zones.
Chao Liu, Larry Ruckman, Ryan Herbst
2023-09-14T23:52:49Z
http://arxiv.org/abs/2309.08067v1
# Evaluating Direct RF Sampling Performance for RFSoC-based Radio-frequency Astronomy Receivers

###### Abstract

As the maximum RF input and output frequencies of the integrated data converters in RFSoC increase, it becomes practical to digitize and synthesize RF signals in the majority of C band directly, without analogue up and down mixing circuits. The elimination of the mixer circuits can significantly simplify the architecture of the receivers or readouts for radio astronomy telescopes. For systems with large bandwidth or high channel counts, direct sampling can dramatically reduce the size and cost of the overall system. This paper focuses on summarising part of the preliminary characterization results for direct sampling with RFSoC data converters in higher order Nyquist zones.

## 1 Introduction

RF system-on-chip (RFSoC) devices have been widely used to develop receivers for radio astronomy applications since they were released by Xilinx. Some of the applications, such as C-band survey receivers [1], the readout for superconducting detectors of microwave SQUID multiplexers (umux) or microwave kinetic inductance detectors (MKIDs) for Cosmic Microwave Background (CMB) experiments [3], and millimetre-wavelength telescopes after first stage down-conversion [2], operate in the frequency range of 4-8 GHz, which falls in the higher order Nyquist zones of the integrated data converters in RFSoC. Therefore, those receivers require analogue down-conversion circuits to mix the frequency down to the first Nyquist zone of the data converters. Due to the high demand for direct RF sampling in the telecommunication industry, Xilinx has advanced both the sampling speed and the RF input frequency of the analogue-to-digital converter (ADC) integrated in RFSoC devices. From Gen 1 RFSoC devices to the latest DFE RFSoC devices, the maximum sampling frequency of the ADCs has been increased from 4.096 GHz to 5.9 GHz and the maximum RF input frequency has been extended from 4 GHz to 7.125 GHz [4]. Those improvements enable us to totally or partially eliminate the analogue down-conversion circuits, which can significantly simplify the architecture of the receiver systems and reduce the hardware cost of the systems, especially for systems with large channel counts. The RF signals in higher order Nyquist zones are folded back to the first Nyquist zone, so an RF signal in a higher order Nyquist zone can be sampled without down-mixing. Digital up-converters (DUCs) and digital down-converters (DDCs) are also included as part of the hardened radio frequency system in RFSoC. The integrated NCOs in DDCs and DUCs can be used to up- or down-convert the RF signal to the desired centre frequency. Therefore, the combination of those components in the hardened radio system in RFSoC can replace the analogue mixing required for some of the applications. In this paper, we present the wide-band performance evaluation results for direct RF sampling schemes with RF data converters and other parts of the hardened radio system with different generations of RFSoC devices. The performance of the integrated data converters sampling in the first Nyquist zone has been comprehensively discussed in [1]. The focus of this paper is the performance characterization of direct RF sampling at higher orders of Nyquist zones and the integrated DUCs and DDCs. The results can be used as a guideline for future system design and development for RFSoC-based radio-frequency receiver or readout systems with minimum analogue mixing circuits.
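The folding of higher-order Nyquist zones back into the first zone, which the direct sampling scheme relies on, can be summarised with a small helper. The sample rate and tone frequencies used below match the test cases described in the next section, but the snippet itself is only an illustrative sketch, not part of the measurement setup.

```python
def nyquist_zone(f_rf, fs):
    """Return the Nyquist zone (1-based) that an RF frequency falls in."""
    return int(f_rf // (fs / 2)) + 1

def folded_frequency(f_rf, fs):
    """Image of f_rf in the first Nyquist zone after direct sampling at fs."""
    f = f_rf % fs
    return fs - f if f > fs / 2 else f

fs = 4.9152e9                      # ADC sample rate used later in the paper
for f_rf in (4.25e9, 5.25e9):      # C-band test tones
    print(f"{f_rf/1e9:.2f} GHz -> zone {nyquist_zone(f_rf, fs)}, "
          f"image at {folded_frequency(f_rf, fs)/1e9:.3f} GHz")
```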
## 2 Characterization Test Setup

The characterization described in this paper is performed with the Xilinx Zynq UltraScale+ RFSoC ZCU208 Evaluation Kit, which carries an XCZU48DR-2FSVG1517E5184 RFSoC, a Gen 3 RFSoC device with eight 14-bit ADCs up to 5 GSPS and eight 14-bit DACs up to 10 GSPS.

Figure 1: Test setup used for single tone performance characterization. The architecture of the data converters remains the same when the DAC is generating a comb of tones, but the configuration of the datapath is changed for different test purposes.

For the targeted applications in this case, both ADCs and DACs are intended to be utilized beyond the first Nyquist zones. Therefore, the first test performed is a single tone test with a loop-back test setup as shown in Figure 1. The RFSoC integrates not only the data converters, but also the entire digital up and down conversion datapath, including digital mixers with NCOs and hardened decimation and interpolation blocks for applications with narrower bandwidths. As Figure 1 shows, the characterization is performed with full datapaths on both the ADC and DAC sides. In this case, the DAC is configured to sample at 6.88128 GHz, which is close to the 7 GHz upper limit when the DAC is in IQ mode. For the single tone test, a DC sequence is loaded into the interpolation block in IQ format and then up-mixed digitally to 4.25 or 5.25 GHz, which lies in the second Nyquist zone of the DAC. Therefore, the DAC generates a CW RF signal at those two frequencies. The RFSoC DAC has an RF mixed mode to concentrate the RF power in the second Nyquist zone, and it has been employed in this test. The output signal of the DAC is filtered with inline band-pass filters to attenuate the images of the RF signal, which can fold back to the first Nyquist zone and appear as spurs in the baseband after down conversion. The ADC sampling speed is 4.9152 GSPS and the digital down-mix is performed at the corresponding image frequency of the RF frequency in the first Nyquist zone. The 4.25 GHz RF signal is in the second order Nyquist zone of the ADC and 5.25 GHz in the third order zone. The bandwidth investigated in this case is 600 MHz, which covers the 500 MHz bandwidth requirement for one of our targeted applications.

## 3 Characterization Results

Two sets of critical characterization test results are discussed in this section. The first set is obtained with the exact setup described in Section 2, which can demonstrate the full spurious free dynamic range (SFDR) for the full loopback circuit. The second set is obtained with a similar DAC setup, but with the DAC generating a comb of tones, and the output signal from the DAC is measured by a high frequency spectrum analyzer.

### Tests with Single Tones

The single tone tests have been performed at 4.25 GHz and 5.25 GHz. The RF signal has been down-mixed and decimated by the ADC datapath. IQ components are captured at the end of the datapath and the spectrum is calculated offline in Matlab. Figures 2 and 3 show the single tone spectra at 4.25 GHz and 5.25 GHz respectively. As the baseband IQ components are used for the spectrum calculation, the 600 MHz bandwidth is centred close to DC, where the tones appear. The SFDR is approximately 79.6 dB at 4.25 GHz and 81.2 dB at 5.25 GHz. The most comparable characterization results listed in the Xilinx datasheet [4] are the SFDR measurements performed for the ADC of this device family at 4.9 GHz and 5.9 GHz with CW power at \(-\)10 dBFS, which are 75 and 74 dB respectively.
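The SFDR numbers above are obtained from captured IQ samples whose spectrum is computed offline; a minimal Python equivalent of that offline step is sketched below. The synthetic tone, spur and noise levels are placeholders standing in for a real capture, and the windowing and bin-exclusion choices are assumptions rather than the exact processing used for the reported figures.

```python
import numpy as np

def sfdr_db(iq, exclude_bins=5):
    """Spurious-free dynamic range of a complex baseband capture, in dB.

    The strongest FFT bin is taken as the carrier; `exclude_bins` bins on
    each side are masked before searching for the largest remaining spur.
    """
    spec = np.abs(np.fft.fft(iq * np.hanning(len(iq)))) ** 2
    carrier = np.argmax(spec)
    mask = np.ones(len(spec), bool)
    mask[np.arange(carrier - exclude_bins, carrier + exclude_bins + 1) % len(spec)] = False
    return 10 * np.log10(spec[carrier] / spec[mask].max())

# Synthetic stand-in for a captured tone: carrier + weak spur + noise.
fs, n = 614.4e6, 1 << 16                 # decimated IQ rate, illustrative
t = np.arange(n) / fs
iq = (np.exp(2j * np.pi * (fs / 64) * t)            # carrier on an exact bin
      + 1e-4 * np.exp(2j * np.pi * (fs / 16) * t)   # -80 dBc spur
      + 1e-5 * (np.random.randn(n) + 1j * np.random.randn(n)))
print(f"SFDR ~ {sfdr_db(iq):.1f} dB")    # the -80 dBc spur should dominate
```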
As the frequencies at which we measured the SFDR are lower than those of the test cases in the datasheet, and the datasheet results were obtained with a signal generator, the higher SFDR results are reasonable. Therefore, the DAC operated in the second order Nyquist zone and the ADC in the second and third order Nyquist zones can deliver the desired performance for most of the radio astronomy applications [1].

Figure 2: Single tone spectrum with digital up and down conversion at 4.25 GHz or the corresponding image frequency in the first Nyquist zone.

Figure 3: Single tone spectrum with NCOs at 5.25 GHz. The power of the spectrum is normalized to the power of the primary RF tone.

### Test with Comb of Tones

Most of our readout applications employ the frequency division multiplexing (FDM) technique, which requires a large number of tones at different frequencies to be generated by the DAC. The tones are used as probes to measure the phase shift introduced by the detector signal, so the phase noise is one of the most critical requirements. In this test, we use one of the integrated DACs to generate a comb of tones as shown in Figure 4, and the output of the DAC is measured by a Keysight EXA signal analyzer N9010B with bandwidth from 10 Hz to 26.5 GHz. As Figure 4 shows, the tones are generated in baseband from -1 GHz to 1 GHz with a step size of 2.4 MHz, which covers 2 GHz of bandwidth with 833 tones simultaneously. In this test the sampling rate of the DAC is 6.144 GHz and the frequency of the NCO for up-mixing is 5 GHz. The baseband sequence is generated at 3.072 GHz and then interpolated by a factor of 2. The spectrum of the DAC output captured by the spectrum analyzer is shown in Figure 5. The spectrum is centred at 5 GHz and has the 2 GHz bandwidth as expected. The power of the tones is reduced by about 3 dB from 4 to 6 GHz, which can be largely attributed to the frequency dependent insertion loss of the balun used for differential to single-ended conversion. The power level per tone of around -46 dBm is adequate for the target application and there is no significant intermodulation product or spur in the band of interest. There is a spur at 6.144 GHz in the spectrum, which is the sampling frequency of the DAC and does not affect the performance. A phase noise measurement is performed with the spectrum analyzer on one of the tones in the comb. The phase noise is measured at 30 kHz and 1 MHz offsets with respect to the frequency of the selected tone and the results are shown in Figures 6 and 7. The phase noise is about -88 dBc/Hz at 30 kHz offset and -91 dBc/Hz at 1 MHz offset. The result is about 10 dB lower than that of the system described in [3]. The system and measurement setup will be investigated and optimized to achieve better phase noise performance. There is no significant intermodulation product in the spectrum given the very high number of tones generated with a single DAC over 2 GHz of bandwidth.

Figure 4: Spectrum of the comb of tones generated in baseband and loaded to the integrated DAC in RFSoC.

Figure 5: Spectrum of the DAC output RF signal captured by the spectrum analyzer.

Figure 6: Phase noise measured at 30 kHz offset from one of the tones in the comb.

Figure 7: Phase noise measured at about 1 MHz offset from one of the tones in the comb.

## 4 Conclusion

The loopback characterization results demonstrate that the RFSoC integrated data converters and up and down-mixing datapath can deliver the required SFDR at higher order Nyquist zones.
This can eliminate a large fraction of the analogue RF components in the receiver or readout systems for the targeted applications and, therefore, simplify the architecture of the system, reduce the cost and offer a higher level of flexibility for system configuration. For Gen 3 RFSoC devices, the maximum input RF frequency for the ADC is 6 GHz, which limits the spectrum coverage over C band. However, the direct RF bandwidth has been extended to 7.125 GHz for the DFE RFSoC, and it can cover higher frequencies with analogue mixing. The test results with a comb of tones show the DAC's capability of generating a large number of tones with reasonable phase noise and no significant intermodulation noise, which are two of the major concerns for the targeted applications. The characterization results show the great potential of the integrated DAC in RFSoC to be utilized for realizing FDM readouts with a high multiplexing factor. As the system development progresses, more comprehensive characterization with optimizations will be performed and published at different stages.
2301.13503
Identical Bands Around the Isobaric Rare Earth Even-Even Nuclei with the Mass Number A = 164
Eight pairs of rare earth normally deformed nuclei around the isobaric nuclei with A = 164, which have identical values of F-spin, have been studied. These pairs of identical bands cover 16 mass units and are classified. We suggested a theoretical collective rotational formula containing three parameters (CRF3) as an extended version of the Bohr-Mottelson model to calculate the ground state positive parity excitation energies. Also, the sd-version of the interacting boson model (IBM) has been used to describe the nuclear shapes by using the intrinsic coherent state. The optimized model parameters for each nucleus are adjusted by using a simulation search program to minimize the root mean square deviation between the theoretical calculation and the experimental excitation energies. The best adopted model parameters of the CRF3 are used to calculate the rotational frequencies and the kinematic and dynamic moments of inertia, and the evolution of both moments of inertia with increasing rotational frequency is systematically analyzed. A smooth gradual increase in both moments of inertia was seen. The calculated results agree excellently with the experimental ones, which gives strong support to the suggested CRF3. The adopted IBM parameters are used to calculate the potential energy surfaces which describe the nuclear deformation. The correlation quantities which identify the identical bands are extracted; nuclei with identical values of these quantities exhibit identical excitation energies and energy ratios in their ground state rotational bands.
M. A. Abdelsalam, H. A. Ghanim, M. Kotb, A. M. Khalaf
2023-01-31T09:44:35Z
http://arxiv.org/abs/2301.13503v1
# Identical Bands Around the Isobaric Rare Earth Even-Even Nuclei with the Mass Number A = 164

###### Abstract

Eight pairs of rare-earth normally-deformed (ND) nuclei around the isobaric nuclei with A = 164, which have identical values of F-spin, \(\pm F_{0}\) and \(N_{p}N_{n}\) (\(N_{p}\) and \(N_{n}\) are the number of valence protons and valence neutrons respectively), have been studied. These pairs of identical bands (IB's) cover 16 mass units and are classified as (i) 3 pairs of nuclei separated by (2p,2n): \((^{162}Yb-^{166}Hf)\), \((^{162}Er-^{166}Yb)\), \((^{162}Dy-^{166}Er)\); (ii) 2 pairs of nuclei separated by (4p,4n): \((^{160}Dy-^{168}Yb)\), \((^{160}Er-^{168}Hf)\); (iii) 2 pairs of nuclei separated by (6p,6n): \((^{158}Er-^{170}W)\), \((^{158}Dy-^{170}Hf)\); and (iv) one pair of nuclei separated by (8p,8n): \((^{156}Dy-^{172}W)\). We suggested a theoretical collective rotational formula containing three parameters (CRF3) as an extended version of the Bohr-Mottelson model to calculate the ground state positive parity excitation energies. Also, the sd-version of the interacting boson model (IBM) has been used to describe the nuclear shapes by using the intrinsic coherent state. The optimized model parameters for each nucleus are adjusted by using a simulation search program to minimize the root mean square deviation between the theoretical calculation and the experimental excitation energies. The best adopted model parameters of the CRF3 are used to calculate the rotational frequencies \(\hbar\omega\) and the kinematic \(J^{(1)}\) and dynamic \(J^{(2)}\) moments of inertia, and the evolution of \(J^{(1)}\) and \(J^{(2)}\) with increasing \(\hbar\omega\) is systematically analyzed. A smooth gradual increase in both moments of inertia was seen. The calculated results agree excellently with the experimental ones, which gives strong support to the suggested CRF3. The adopted IBM parameters are used to calculate the potential energy surfaces (PES's) which describe the nuclear deformation. The PES's for our nuclei show two wells corresponding to the prolate and oblate sides, which indicates that these nuclei are deformed and have rotational behaviors. The correlation quantities which identify the IB's are extracted. It is found that the nuclei having identical \(N_{p}N_{n}/\triangle\) values, where \(\triangle\) is the average pairing gap, exhibit identical excitation energies and energy ratios in their ground state rotational bands.

**Keywords:** Interacting Boson Model (IBM) - Identical Bands - Potential Energy Surface

## 1 Introduction

The discovery of rotational bands in adjacent even-even and odd-mass superdeformed (SD) nuclei in which the \(\gamma\)-ray transition energies are nearly identical to within a few keV was an exotic and unexpected phenomenon in nuclear structure physics [1, 2, 3, 4, 5]. Since the identical bands (IB's) have essentially identical transition energies, the associated dynamical moments of inertia are thus identical. Several explanations were put forward [4, 5, 6, 7, 8, 9, 10, 11, 12] to understand the origin of the IB phenomenon, assuming the occurrence of such IB's to be a specific property of the SD states in nuclei.
The explanations of these IB's include: the Coriolis force, the particle alignment and pairing [13], the roles of special high-N orbitals of intruder configurations and band crossing [14, 15, 16, 17], the pseudo-spin in supersymmetry [7, 18, 19] and the supersymmetry with many-body interactions [20]. Soon the phenomenon of low-spin identical bands was found in pairs of even-even normally deformed (ND) nuclei [21], and in neighboring even-even and odd-mass nuclei in the rare-earth region where they have similar moments of inertia [22, 23]. It was noted that low-spin IB's are not limited to nearby nuclei but are widespread and found in pairs of even-even nuclei separated by as much as 24 mass units (like \({}^{156}Dy\), \({}^{180}Os\)) [24]. Attempts were made to understand the low-spin IB's in terms of some simple systematics of the moments of inertia in the rare-earth region [25, 26, 27, 28, 29, 30] or from several types of consideration [31]. For the description of normally deformed (ND) bands, some useful models were proposed. Bohr and Mottelson [32] pointed out that, under the adiabatic approximation, the rotational energy of an axially symmetric nucleus may be expanded for the \(K=0\) band as a power series in the I(I+1) term. The expansion for the \(K\neq 0\) band takes the same form, but includes a band head energy and the I(I+1) is replaced by \(\left[I(I+1)-K^{2}\right]\). Other useful models for nuclear rotational spectra are the particle-rotor model (PRM) [33], the variable moment of inertia (VMI) model [34, 35], the soft rotor model [36] and the interacting boson model [37]. In the concept of F-spin and its projection [38], any pair of conjugate nuclei with the same F-spin and \(F_{0}\) values in any F-multiplet will have the same \(N_{p}N_{n}\) [24, 39, 40], where \(N_{p}\) and \(N_{n}\) are respectively the number of valence protons and valence neutrons. The product \(N_{p}N_{n}\) was used in the classification of the changes that occur in nuclear structure [41, 42]. It was assumed [25, 43] that the moment of inertia and the P-factor depend also on the product \(N_{p}N_{n}\). The purpose of the present paper is (i) to analyse the excitation energies for even-even normally deformed nuclei in the rare-earth region in the framework of a suggested new collective rotational formula (CRF3), (ii) to exhibit the occurrence of IB's in eight pairs of nuclei in the rare-earth region, (iii) to present the parameters which characterize the appearance of IB's, and (iv) to use the sd version of the interacting boson model (sdIBM) to calculate the potential energy surfaces (PES's).

## 2 Outline of the Suggested Collective Rotational Formula with Three Parameters (CRF3)

Rotational states in normally deformed (ND) nuclei can be characterized by their excitation energies E(I) as a function of spin I, which generally lie low as compared to the single-particle excitations. In the strong coupling limit, the rotational ground state energy for an axially symmetric even-even nucleus obeys the I(I+1) rule, i.e., the levels form bands that fulfill the relation \[E(I)=\frac{\hbar^{2}}{2J}I(I+1)=\alpha\,\hat{I}^{2} \tag{1}\] where \(\alpha=\hbar^{2}/2J\) and \(\hat{I}^{2}=I(I+1)\). Relation (1) defines in addition the nuclear moment of inertia J as a constant for an ideal rotor. This simple rotational formula shows deviations from experimental data, so Bohr and Mottelson pointed out that the agreement was improved by adding to it a second term to yield
This simple rotational formula gives deviations from experimental data, So Bohr and Mottelson pointed out that agreement was improved by adding to it a second team to yield \[E(I) =\alpha I(I+1)+\beta[I(I+1)]^{2}\] \[=\alpha\,\hat{\rm I}^{2}+\beta\,\hat{\rm I}^{4}\] \[E(I) =\alpha\,\hat{\rm I}^{2}(1+\gamma\,\hat{\rm I}^{2}) \tag{2}\] where \(\gamma=\beta/\alpha\) Since the moment of inertia J increases on rotation of the nucleus, the observed deviations from the experiment were still more evident. According to the variable moment of inertia(VMI) model [34, 35], there is a gradual increase in moment of inertia J with increasing the spin I, so we suggest that the moment inertia J can be written as \[J=J(I)=J\,(1\,+\,\sigma\,\hat{\rm I}^{2}) \tag{3}\] Substituting in equation (2), yield \[E(I)=\alpha\,\hat{\rm I}^{2}\left(\frac{1+\gamma\,\hat{\rm I}^{2}}{1+\sigma\, \hat{\rm I}^{2}}\right) \tag{4}\] Therefore, the two-term Bohr-Mottelson formula becomes an extended new formula with three parameters. We denote formula (4) as the collective rotational formula with three parameters (CRF3). The parameters are \(\alpha,\beta,\gamma\). The suggested CRF3 is more general because it leads to the following three predictions: a) when \(\sigma=\gamma\) it gives pure rigid rotor equation(1) b) when \(\sigma=0\) it gives the two parameters Bohr-Mottelson equation (2) c) when \(\gamma=0\) it gives soft rotor model [36] \[E(I)=\frac{\hbar^{2}}{2J}\frac{I(I+1)}{1+\sigma(I+I^{2})} \tag{5}\] Two types of moments of inertia were suggested by Bohr-Mottelson which reflect two different aspects of nuclear dynamics. The first moment of inertia is the kinematic \(J^{(1)}\), it is equal to the inverse of the slope of the curve of energy E versus \(\hat{\mathfrak{l}}^{2}\) (or I(I+1)) times \(\hbar^{2}/2\), while the second moment of inertia is the dynamic \(J^{(2)}\), it is related to the curvature in the curve of E versus \(\hat{\mathfrak{l}}\) (or \(\sqrt{I(I+1)}\) ). 
The kinematic \(J^{(1)}\) and dynamic \(J^{(2)}\) moments of inertia are defined as: \[J^{(1)} =\frac{\hbar^{2}}{2}\left[\frac{dE}{dI(I+1)}\right]^{-1}=\hbar\frac{\sqrt{I(I+1)}}{\omega}\] \[=\frac{\hbar^{2}}{2}\left(\frac{dE}{d\hat{I}^{2}}\right)^{-1}=\hbar\frac{\hat{I}}{\omega} \tag{6}\] \[J^{(2)} =\hbar^{2}\left[\frac{d^{2}E}{d(\sqrt{I(I+1)})^{2}}\right]^{-1}=\hbar\frac{d\sqrt{I(I+1)}}{d\omega}\] \[=\hbar^{2}\left(\frac{d^{2}E}{d\hat{I}^{2}}\right)^{-1}=\hbar\frac{d\hat{I}}{d\omega} \tag{7}\] In the case of our CRF3, the two moments of inertia become \[J^{(1)}(I)=\frac{\hbar^{2}}{2\alpha}\frac{(1+\sigma\hat{I}^{2})^{2}}{[1+\gamma\hat{I}^{2}(2+\sigma\hat{I}^{2})]} \tag{8}\] \[J^{(2)}(I)=\frac{\hbar^{2}}{2\alpha}\frac{(1+\sigma\hat{I}^{2})^{3}}{[(1+6\gamma\hat{I}^{2})+\sigma\hat{I}^{2}(3\gamma\hat{I}^{2}+\sigma\gamma\hat{I}^{4}-3)]} \tag{9}\] Experimentally, \(\hbar\omega\), \(J^{(1)}\) and \(J^{(2)}\) are extracted in terms of the transition energy \(E_{\gamma}(I)=E(I)-E(I-2)\) as: \[\hbar\omega(I)=\frac{1}{4}[E_{\gamma}(I+2)+E_{\gamma}(I)]\hskip 42.679134pt(MeV) \tag{10}\] \[J^{(1)}(I)=\frac{2I-1}{E_{\gamma}(I)}\hskip 71.13189pt(\hbar^{2}MeV^{-1}) \tag{11}\] \[J^{(2)}(I)=\frac{4}{E_{\gamma}(I+2)-E_{\gamma}(I)}\hskip 71.13189pt(\hbar^{2}MeV^{-1}) \tag{12}\] As a special case, the lowest dynamical moment of inertia reads \[J^{(2)}_{lowest}=\frac{4}{E_{\gamma}(4_{1}^{+}\to 2_{1}^{+})-E_{\gamma}(2_{1}^{+}\to 0_{1}^{+})} \tag{13}\]

## 3 Determination of Ground State Band Properties of Even-Even Nuclei and the Physical Identical Parameters

In order to understand the behavior of the low lying states of axially symmetric normally deformed nuclei, it is insightful to examine some physical observables which exist in a pair of IB's. The observables include:

**1. The P-Factor, Structure Factor (SF), and Saturation Parameter (SP)**

Casten [43] introduced the P-factor \[P=\frac{N_{p}N_{n}}{N_{p}+N_{n}} \tag{14}\] where \(N_{p}\) and \(N_{n}\) are the numbers of valence protons and valence neutrons respectively, which are counted as particles or holes from the nearest closed shell \[N_{p} =min[(Z-50),(82-Z)] \tag{15}\] \[N_{n} =min[(N-82),(126-N)] \tag{16}\] The P-factor represents the average number of interactions of each valence nucleon with those of the other type. It can be viewed as the ratio of the number of valence p-n residual interactions to the number of valence like-nucleon pairing interactions, or, if the p-n and pairing interactions are orbit independent, then P is proportional to the ratio of the integrated p-n interaction strength to the integrated pairing interaction strength. The nuclear collectivity and deformation depend sensitively on the P-factor. The structure factor (SF) and the saturation parameter (SP) are given by \[SF =N_{p}N_{n}(N_{p}+N_{n}) \tag{17}\] \[SP =\left(1+\frac{SF}{SF_{max}}\right)^{-1} \tag{18}\] It is found that the lowest dynamical moment of inertia \(J_{lowest}^{(2)}\) is proportional to \(\sqrt{SF}\).

**2. The Concept of F-Spin**

A nucleus with \(N_{p}\) valence protons and \(N_{n}\) valence neutrons has a total boson number \[N_{B}=\frac{N_{p}+N_{n}}{2}=N_{\pi}+N_{\nu} \tag{19}\] The \(N_{\pi}\) proton bosons and \(N_{\nu}\) neutron bosons are assigned F-spin \(F=\frac{1}{2}\), with projection \(F_{0}=+\frac{1}{2}\) for proton bosons and \(F_{0}=-\frac{1}{2}\) for neutron bosons.
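The bookkeeping of eqs. (14)-(19) is straightforward to script. The sketch below counts the valence nucleons from the nearest closed shells and returns the P-factor, SF, SP and boson number for a given (Z, N); the value \(SF_{max}=6720\) is the maximum structure factor quoted later in the text for this mass region, and the example nuclei are taken from the studied pairs.

```python
def valence_numbers(Z, N):
    """N_p and N_n counted from the nearest closed shells (eqs. 15-16),
    for the 50 < Z < 82, 82 < N < 126 region studied here."""
    Np = min(Z - 50, 82 - Z)
    Nn = min(N - 82, 126 - N)
    return Np, Nn

def correlation_quantities(Z, N, sf_max=6720):
    """P factor, structure factor SF, saturation parameter SP and boson
    number N_B of eqs. (14), (17), (18) and (19); sf_max is the value
    quoted in the text for this mass region."""
    Np, Nn = valence_numbers(Z, N)
    P = Np * Nn / (Np + Nn)
    SF = Np * Nn * (Np + Nn)
    SP = 1.0 / (1.0 + SF / sf_max)
    NB = (Np + Nn) // 2
    return dict(Np=Np, Nn=Nn, P=round(P, 3), SF=SF, SP=round(SP, 4), NB=NB)

print(correlation_quantities(68, 94))   # 162Er
print(correlation_quantities(70, 96))   # 166Yb
```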
A given nucleus is characterized by two quantum numbers [38]: \(F=\frac{N_{\pi}+N_{\nu}}{2}\) and its projection \(F_{0}=\frac{N_{\pi}-N_{\nu}}{2}\) Squaring and subtracting, yield \[4(F^{2}-F_{0}^{2})=4N_{\pi}N_{\nu}=N_{p}N_{n} \tag{20}\] That is any pair of conjugate nuclei with the same F-spin and \(F_{0}\) values in any F-spin multiplet have identical \(N_{p}N_{n}\) values. In our chosen nuclei, the F-spin multiplet is given by: (A+4, Z+2), (A+8, Z+4), (A+12, Z+6) and (A+16, Z+8) for Dy, Er, Yb, Hf, and W isotopes. Any pair of nuclei which show identical excitation energies have nearly equal value of the product of their valence nucleon numbers \(N_{p}\) and \(N_{n}\)[41]. However, the analysis of experimental data shows that the converse is not true. The simple quantity \(N_{p}N_{n}\) helps also in the evolution of nuclear deformation and collectivity in nuclei [40]. On the other hand, the product \(N_{p}N_{n}\) or the P- Factor plays an important role in studying the orbit dependence, shell gaps, and intruder orbitals. **3. Pairing Interaction Energy** The pairing interaction energy \(\triangle\) in an even-even nucleus is the average pairing gap (\((\triangle_{p}+\triangle_{n})/2\) where \(\triangle_{p}\) and \(\triangle_{n}\) are respectively the proton and neutron pairing gaps which are determined from the difference in binding energies of the neighboring odd and even nuclei \[\triangle_{p} =\frac{1}{4}[B(N,Z-2)-3B(N,Z-1)+3B(N,Z)-B(N,Z+1)] \tag{21}\] \[\triangle_{n} =\frac{1}{4}[B(N-2,Z)-3B(N-1,Z)+3B(N,Z)-B(N+1,Z)] \tag{22}\] The pairing gaps \(\triangle_{p}\) and \(\triangle_{n}\) are determined empirically from the relation \[\triangle_{p}\simeq\triangle_{n}=\frac{12}{\sqrt{A}}\hskip 36.135pt(MeV) \tag{23}\] The average pairing gap of the nucleus is then \[\triangle=\frac{\triangle_{p}+\triangle_{n}}{2}=\frac{12}{\sqrt{A}}\;\;MeV \tag{24}\] It is observed that [39, 43] the even-even nuclei belong to different mass number having identical \((N_{p}N_{n}/\triangle)\) values exhibit identical excitation energies and identical energy ratios. **4. Quadrupole Transition Probabilities and Deformation Parameters** The quadrupole transition probability per unit time for the transition \(I_{i}\to I_{f}\) is given by \[T(E_{2})=\frac{4\pi}{75}\left(\frac{5}{\hbar}\right)\left(\frac{E_{2^{+}_{1}}} {\hbar c}\right)^{5}B(E_{2};I_{i}\to I_{f}) \tag{25}\] where \(B(E_{2})\) is the reduced transition probability and \(E_{2^{+}_{1}}\) is the energy of the \(2^{+}_{1}\) state. Experimentally \(T(E_{2})\) for transition \(2^{+}_{1}\to 0^{+}_{1}\) is obtained by \[T(E_{2},2^{+}_{1}\to 0^{+}_{1})=\frac{ln2}{(1+\alpha)T_{1/2}}=\frac{0.693}{(1+ \alpha)T_{1/2}} \tag{26}\] where \(\alpha\) is the total conversion coefficient taken from the tabulated values given by Rose [44] and \(T_{1/2}\) is the lifetime of the rotational level. The \(B(E_{2},2^{+}_{1}\to 0^{+}_{1})\) values carry important information about the collectivity of nuclear rotation and can be extracted from the equations (25,26). 
The relation between the intrinsic nuclear quadrupole moment \(Q_{0}\) and \(B(E_{2})\) is given by \[Q_{0}^{2}=\frac{16\pi}{e^{2}}B(E_{2},2^{+}_{1}\to 0^{+}_{1}) \tag{27}\] Practically, the most reliable method of determining the quadrupole deformation parameter \(\beta_{2}\) in the framework of the geometric collective model (GCM) is to extract \(\beta_{2}\) from \(Q_{0}\) according to the formula \[\beta_{2}(exp)=\frac{\sqrt{5\pi}}{3ZR_{0}^{2}}Q_{0} \tag{28}\] assuming a uniformly charged nucleus of spheroidal shape, where the nuclear radius has the value \(R_{0}=1.2A^{1/3}\)(fm) and \(Z\) is the nuclear charge number. The expression (28) for \(\beta_{2}\) is widely used to compare the quadrupole deformation of different nuclei. It is noticed that the \(B(E_{2},2^{+}_{1}\to 0^{+}_{1})\) values increase when going from the closed shell at N=82 toward midshell, where maximum values occur, while from midshell toward the shell closure at N=126 the values decrease. In a second way, especially where the \(B(E_{2},2^{+}_{1}\to 0^{+}_{1})\) value is not known, we estimate \(\beta\) by using the approximate empirical Grodzins relation [45]: \[E_{2^{+}_{1}}B(E_{2},2^{+}_{1}\to 0^{+}_{1})=2.5\times 10^{-3}\;\;\frac{Z^{2}}{A} \tag{29}\] where \[B(E_{2},2^{+}_{1}\to 0^{+}_{1})=\frac{1}{16\pi}e^{2}Q_{0}^{2}=\frac{9}{80\pi^{2}}e^{2}Z^{2}R_{0}^{4}\beta^{2}\;\;\;\;\;\;\;\mbox{(in units of $e^{2}b^{2}$)} \tag{30}\] We can relate \(\beta\) and \(E_{2^{+}_{1}}\) as: \[\beta_{G}^{2}=\frac{1224}{E_{2^{+}_{1}}A^{7/3}} \tag{31}\] where \(E_{2^{+}_{1}}\) is in MeV. Also, \(\beta_{2}\) can be determined by using the SU(3) rotational limit of the interacting boson model (IBM) [37]; the square of the deformation parameter \(\beta^{2}\) in a state of angular momentum I is given by [46]: \[\langle\beta^{2}\rangle_{I}=\frac{\alpha^{2}}{6(2N-1)}[I(I+1)+8N_{B}^{2}+22N_{B}-15] \tag{32}\] where \(N_{B}\) is the total number of valence bosons and \(\alpha\) is a normalization constant (\(\alpha=0.101\) for rare-earth nuclei). The expectation value of \(\beta^{2}\) in the ground state becomes \[\langle\beta^{2}\rangle_{0}=\alpha^{2}\frac{8N_{B}^{2}+22N_{B}-15}{6(2N-1)} \tag{33}\] which is an almost linearly increasing function of the boson number \(N_{B}\) and has the same value for nuclei having the same number of valence nucleons \[N=[N_{p}+N_{n}],N=[(N_{p}-1)+(N_{n}-1)] \tag{34}\] It is evident that \(\beta_{IBM}\) extracted from the IBM is much larger than \(\beta_{GCM}\) extracted from the GCM, because \(\beta_{GCM}\) refers to the deformation of all A nucleons while \(\beta_{IBM}\) describes only the 2N valence nucleons; the approximate relation between them is given by: \[\beta_{GCM}=1.18\left(\frac{2N}{A}\right)\beta_{IBM} \tag{35}\] The deformation parameter \(\beta\) reflects the equilibrium shape and structure of the nucleus; together with the energy ratio \(R_{4/2}=E(4_{1}^{+})/E(2_{1}^{+})\) and the reduced transition probability \(B(E_{2},2_{1}^{+}\to 0_{1}^{+})\), it is among the best indicators of the collective properties of even-even nuclei.

**5. Energy Ratios and Percentage Difference in Transition Energies**

The energy ratios and the percentage difference in transition energies characterize the evolution of the collectivity in the even-even nuclei.
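Before turning to the energy-ratio measures, a short numerical sketch of the two deformation estimates discussed above is given below: the Grodzins-type value of eq. (31) and the SU(3) ground-state value of eq. (33). The boson number is also used for N in the denominator of eq. (33), \(\alpha=0.101\) is the normalisation constant quoted for rare-earth nuclei, and the \(E(2_{1}^{+})\) value used for \({}^{162}Er\) is approximate and only illustrative.

```python
import math

def beta_grodzins(e2_plus_mev, A):
    """Quadrupole deformation from the Grodzins estimate, eq. (31)."""
    return math.sqrt(1224.0 / (e2_plus_mev * A ** (7.0 / 3.0)))

def beta_su3_ground(NB, alpha=0.101):
    """sqrt(<beta^2>_0) from the SU(3) limit, eq. (33); the boson number
    N_B is also used for N in the denominator, and alpha = 0.101 is the
    normalisation constant quoted for rare-earth nuclei."""
    beta2 = alpha ** 2 * (8 * NB ** 2 + 22 * NB - 15) / (6 * (2 * NB - 1))
    return math.sqrt(beta2)

# 162Er as an illustrative case: A = 162, N_B = 13, E(2_1^+) ~ 0.102 MeV.
print(f"beta (Grodzins, eq. 31) = {beta_grodzins(0.102, 162):.3f}")
print(f"beta (SU(3), eq. 33)    = {beta_su3_ground(13):.3f}")
```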
Only deformed nuclei show rotational levels, and in particular the even-even nuclei display a simple structure with energies proportional to I(I+1) and only even values of the spin I, considering that the moment of inertia is constant (rigid rotator); therefore the energy ratio is \(R_{4/2}=3.333\). The observed moment of inertia extracted from the experiment is only one-quarter to one-half of what one would expect from a rigid rotator, which means that not all the nucleons participate in the collective motion. On the other hand, for an ideal harmonic quadrupole spectrum of spherical nuclei, a system of equidistant states is formed by the composition of vibrational quanta. The first excited state is \(2_{1}^{+}\), followed by the degenerate \(0_{2}^{+},2_{2}^{+},4_{1}^{+}\) triplet, and so forth. Therefore the energy ratio is \(R_{4/2}=2\). To compare level spacings in two nuclei with masses \(A_{1}\) and \(A_{2}\), where \(A_{2}>A_{1}\), we define the percentage difference ratio in transition energies as: \[\delta=\frac{\triangle E_{\gamma}(I)}{E_{\gamma_{2}}(I)} \tag{36}\] where \[E_{\gamma}=E(I)-E(I-2) \tag{37}\] \[\triangle E_{\gamma}(I)=E_{\gamma_{1}}(I)-E_{\gamma_{2}}(I) \tag{38}\] so that \[E_{\gamma_{1}}=(1+\delta)E_{\gamma_{2}} \tag{39}\] For a rigid rotor, the ratio \[\delta_{R}=\left(\frac{A_{2}}{A_{1}}\right)^{5/3}-1 \tag{40}\] defines the fractional change in \(A^{5/3}\). The fractional change in transition energies \(\delta\) divided by the rigid rotor ratio \(\delta_{R}\) is denoted by \(\delta_{\gamma}\). If the spacings are identical, then \(\delta=0,\delta_{\gamma}=0\), and if they scale as \(A^{5/3}\) then \(\delta_{\gamma}=1\). Similarly, the percentage difference in the kinematic moment of inertia \(J^{(1)}\) is given by \[K=-\frac{\triangle J^{(1)}(I)}{J_{2}^{(1)}(I)} \tag{41}\] where \[J^{(1)}(I) =\frac{2I-1}{E_{\gamma}(I)} \tag{42}\] \[\triangle J^{(1)}(I) =J_{1}^{(1)}(I)-J_{2}^{(1)}(I) \tag{43}\] so that \[J_{2}^{(1)} =(1+K)J_{1}^{(1)} \tag{44}\] Substituting for \(J^{(1)}\) yields \(K=\delta\).

## 4 The Interacting Boson Model to Calculate the Potential Energy Surfaces and Electric Quadrupole Transition Probability

We consider the Hamiltonian of the first order U(5)-SU(3) quantum shape phase transition in the form \[H=\epsilon_{d}\hat{n}_{d}+a_{2}\hat{Q}^{(x)}\cdot\hat{Q}^{(x)} \tag{45}\] where \(\hat{n}_{d}\) and \(\hat{Q}^{(x)}\) are respectively the d-boson number operator and the quadrupole operator, defined as \[\hat{n}_{d}=\sum_{\mu}d_{\mu}^{\dagger}\stackrel{{ \sim}}{{d}}_{\mu} \tag{46}\] \[\hat{Q}^{(x)}=\left[d^{\dagger}s+s^{\dagger}\stackrel{{ \sim}}{{d}}\right]^{(2)}+x\left[d^{\dagger}\times\stackrel{{ \sim}}{{d}}\right]^{(2)} \tag{47}\] where \(\left(s^{\dagger},d^{\dagger}\right)\) and \(\left(s,\stackrel{{\sim}}{{d}}\right)\) are the boson creation and annihilation operators respectively, and \(x\) is the structure parameter of the quadrupole operator of the IBM (\(x\) for the pure rotational SU(3) limit is equal to \(-\sqrt{7}/2\)). Here \(\stackrel{{\sim}}{{d}}_{\mu}=(-1)^{\mu}d_{-\mu}\) and the standard notation of angular momentum coupling is used. To get the potential energy surface (PES) of the Hamiltonian, we introduce the intrinsic coherent frame in which the ground state of a nucleus with N bosons can be expressed as a boson condensate.
The bosonic intrinsic coherent state for the ground state band of a given even-even nucleus can be written in the form [47, 48, 49] \[|N\beta\gamma\rangle=\frac{1}{\sqrt{N!}}[b^{\dagger}(\beta,\gamma)]^{N}|0\rangle \tag{48}\] where \(|0\rangle\) is the boson vacuum and \(b^{\dagger}\) is the boson creation operator which acts in the intrinsic system and is given by: \[b^{\dagger}=\frac{1}{\sqrt{1+\beta^{2}}}[s^{\dagger}+\beta cos\gamma(d_{0}^{ \dagger})+\frac{1}{\sqrt{2}}\beta sin\gamma(d_{2}^{\dagger}+d_{-2}^{\dagger})] \tag{49}\] where \(\beta\) is the quadrupole deformation parameter which measures the axial deviation from spherical symmetry and the parameter \(\gamma\) controls the departure from axial symmetries. The ground state PES is the expectation value of the Hamiltonian in the intrinsic coherent state \[PES=\langle N\beta\gamma|H|N\beta\gamma\rangle \tag{50}\] The associated PES of the Hamiltonian (45) for \(x=-\sqrt{7}/2\) reads \[E(N,\beta,\gamma)=\epsilon_{d}\frac{N\beta^{2}}{1+\beta^{2}}+a_{2}\left[\frac {N}{1+\beta^{2}}(5+\frac{11}{4}\beta^{2})+\frac{N(N-1)}{(1+\beta^{2})^{2}}(4 \beta^{2}-2\sqrt{2}\beta^{3}cos3\gamma+\frac{1}{2}\beta^{4})\right] \tag{51}\] Equation (51) can be written in another form as \[E(N,\beta,\gamma)=g_{1}\frac{N\beta^{2}}{1+\beta^{2}}+\frac{N(N-1)}{(1+\beta^ {2})^{2}}[g_{2}\beta^{2}+g_{3}\beta^{3}cos3\gamma+g_{4}\beta^{4}]+c \tag{52}\] where the coefficients involve linear combination of the Hamiltonian parameters \[g_{1} =\epsilon_{d}-\frac{9}{4}a_{2},\hskip 28.452756ptg_{2}=4a_{2}\] \[g_{3} =2\sqrt{2}a_{2},\hskip 56.905512ptg_{4}=\frac{1}{2}a_{2},\hskip 28.452756ptc =5Na_{2}\] Also, equation (51) can be rewritten in general form as \[E(N,\beta,\gamma)=\frac{A_{2}\beta^{2}+A_{3}\beta^{3}cos3\gamma+A_{4}\beta^{4 }}{(1+\beta^{2})^{2}}+A_{0} \tag{53}\] where the coefficients read \[A_{2} =\left[\epsilon+\left(4N-\frac{25}{4}\right)a_{2}\right]N, \hskip 28.452756ptA_{3} =2\sqrt{2}a_{2}(N-1)N\] \[A_{4} =\left[\epsilon+\left(\frac{2N+5}{4}-4\right)a_{2}\right]N, \hskip 28.452756ptA_{0} =5a_{2}N\] For \(a_{2}=0\), we get the pure spherical vibrator U(5) limit and for \(\epsilon_{d}=0\), we get the pure deformed rotational Su(3) limit. Another important quantity that tests the nature of the shape phase transition of low lying collective states the reduced electric quadrupole transition probabilities \(B(E_{2})\). In IBM, the general form of the electric quadrupole operator is written in the form [50] \[T(E_{2})=eQ(sdIBM) \tag{54}\] The coefficient e is the boson's effective charge. The reduced electric quadrupole transition probabilities are given by \[B[E_{2},I_{i}\to I_{f}]=\frac{1}{2I_{i}+1}|\langle I_{f}||T(E_{2})||I_{i} \rangle|^{2} \tag{55}\] For rotational SU(3), yield \[B(E_{2},I+2\to I) =e^{2}\frac{3}{4}\frac{(I+2)(I+1)}{(2I+3)(2I+5)}(2N-1)(2N+I+3) \tag{56}\] \[Q(I) =-e\sqrt{\frac{16\pi}{40}}\frac{I}{2I+3}(4N+3) \tag{57}\] For the special case for I=0, we have \[B(E_{2},2_{1}^{+}\to 0_{1}^{+})=e^{2}\frac{1}{5}N(2N+3) \tag{58}\] ## 5 Numerical Calculations and Discussion In this section, we applied our formalism to eight pairs of nuclei having identical bands (IB's) in rare-earth region namely: \((^{162}Yb-^{166}Hf),(^{162}Er-^{166}Yb),(^{162}Dy-^{166}Er),(^{160}Dy-^{168}Yb ),(^{160}Er-^{168}Hf),\)\((^{158}Er-^{170}W),(^{158}Dy-^{170}Hf)\) and \((^{156}Dy-^{172}W)\). To calculate the ground state positive parity excitation energy E(I) for each nucleus, we suggested the CRF3. 
The parameters \(\alpha,\gamma,\sigma\) of the CRF3 have been determined by a fitting procedure using a computer-simulated search program to minimize the root mean square deviation of the calculated excitation energies from the experimental ones. The quality of the fitting is indicated by the standard common definition of \(\chi\) \[\chi=\sqrt{\frac{1}{N}\Sigma_{i}\left(\frac{E_{exp}(I_{i})-E_{cal}(I_{i})}{\delta E_{exp}(I_{i})}\right)^{2}}\] where N is the number of experimental data points entering the fitting procedure and \(\delta E_{exp}(I_{i})\) is the experimental error in the excitation energies. The experimental excitation energies are taken from [51]. The optimized best adopted values of the parameters for each of our studied nuclei are listed in Table (1).

\begin{table} \begin{tabular}{c||c|c|c|c|c} \hline Nuclide & \(\alpha\) (keV) & \(\gamma\) (\(10^{-3}\)) & \(\sigma\) (\(10^{-3}\)) & \(N_{p}\) & \(N_{n}\) \\ \hline Dy 156 & 22.96 & 6.964 & 14.54 & 16 & 8 \\ 158 & 16.48 & 2.163 & 4.339 & 16 & 10 \\ 160 & 14.49 & 0.8683 & 2.021 & 16 & 12 \\ 162 & 13.49 & 1.398 & 2.233 & 16 & 14 \\ Er 158 & 32.76 & 9.699 & 23.52 & 14 & 8 \\ 160 & 20.73 & 3.017 & 6.641 & 14 & 10 \\ 162 & 17.01 & 1.440 & 3.212 & 14 & 12 \\ 166 & 13.49 & 0.2573 & 1.188 & 14 & 16 \\ Yb 162 & 27.87 & 6.334 & 14.27 & 12 & 10 \\ 166 & 17.08 & 2.053 & 3.95 & 12 & 14 \\ 168 & 14.72 & 1.039 & 2.425 & 12 & 16 \\ Hf 166 & 26.60 & 5.565 & 12.67 & 10 & 12 \\ 168 & 20.58 & 3.116 & 6.849 & 10 & 14 \\ 170 & 15.92 & -0.00749 & 1.391 & 10 & 16 \\ W 170 & 26.44 & 5.714 & 13.55 & 8 & 14 \\ 172 & 20.68 & 3.944 & 9.279 & 8 & 16 \\ \hline \end{tabular} \end{table} Table 1: Values of the optimized best parameters \(\alpha,\gamma,\sigma\) of the collective rotational formula (CRF3) for ground state bands in our selected even-even rare-earth nuclei. \(N_{p}\) and \(N_{n}\) are the number of valence protons and the number of valence neutrons respectively.

Figure 1: Systematics of the calculated (solid curves) ground state energies for our selected even-even rare earth Dy, Er, Yb, Hf, W isotopes versus neutron number N and comparison with the experimental ones (dashed curves). The spin-parities are labeled by \(I^{\pi}\).

The systematics of the excitation energies of the low spin states as a function of neutron number N in the considered even-even Dy, Er, Yb, Hf, W isotopes in the mass region A = 156 - 172 of normally deformed nuclei are shown in Figure (1) and compared with the experimental ones. Only the ground state band levels of positive parity and spin \(I^{\pi}=2^{+},4^{+},6^{+},8^{+},10^{+},12^{+}\) have been indicated. We can see that the excitation energies decrease with increasing neutron number. Also, Figure (2) illustrates the calculated energy ratio \(R_{4/2}\) as a function of neutron number N for our studied nuclei. We observe that for each isotopic chain the value of \(R_{4/2}\) increases with increasing N (that is, the deformation increases), and the difference in \(R_{4/2}\) for all pairs of IB's ranges from 0.4 % to 2.5 %, except for the two pairs including the two isotopes \({}^{170,172}W\) (the difference is about 5%).

Figure 2: The calculated energy ratio \(R_{4/2}=E(4_{1}^{+})/E(2_{1}^{+})\) versus neutron number N characterizes the low lying spectrum in Dy, Er, Yb, Hf, and W isotopes. The symbols \(o,*,\Box,\triangle,\) and \(x\) denote \({}_{66}Dy\), \({}_{68}Er\), \({}_{70}Yb\), \({}_{72}Hf\), and \({}_{74}W\) respectively.
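As a usage example of the fitted parameters in Table 1, the snippet below evaluates eq. (4) for the identical-band pair \({}^{162}Er\) and \({}^{166}Yb\) and prints the resulting \(E(2_{1}^{+})\), \(E(4_{1}^{+})\) and \(R_{4/2}\). The two ratios should come out nearly equal, consistent with the identical-band assignment; the printed values are model outputs, not the evaluated experimental energies of [51].

```python
def crf3_energy(I, alpha, gamma, sigma):
    """E(I) of eq. (4) with x = I(I+1); alpha in keV."""
    x = I * (I + 1)
    return alpha * x * (1 + gamma * x) / (1 + sigma * x)

# Fitted CRF3 parameters of Table 1 for the identical-band pair 162Er / 166Yb.
pairs = {"162Er": (17.01, 1.440e-3, 3.212e-3),
         "166Yb": (17.08, 2.053e-3, 3.950e-3)}
for name, (a, g, s) in pairs.items():
    e2, e4 = crf3_energy(2, a, g, s), crf3_energy(4, a, g, s)
    print(f"{name}: E(2+) ~ {e2:.1f} keV, E(4+) ~ {e4:.1f} keV, "
          f"R4/2 ~ {e4 / e2:.3f}")
```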
For the eight pairs of IB's, the kinematic \(J^{(1)}\) and the dynamic \(J^{(2)}\) moments of inertia derived from the transition energies are plotted versus the rotational frequency \(\hbar\omega\) as shown in Figure (3). It can be seen that for all bands \(J^{(1)}\) is smaller than \(J^{(2)}\), a smooth gradual increase in both \(J^{(1)}\) and \(J^{(2)}\) with increasing \(\hbar\omega\) is seen, and the similarities between each pair of IB's are observed.

Figure 3: The calculated results of the kinematic \(J^{(1)}\) (dashed curves) and dynamic \(J^{(2)}\) (solid curves) moments of inertia plotted as a function of rotational frequency \(\hbar\omega\) for the studied eight pairs of identical bands in the rare-earth region. The \(*\) and \(o\) correspond to the lighter and heavier nucleus respectively.

The IB correlation quantities existing between the considered pairs of nuclei, which exhibit the same identical excitation energies in their ground state bands, are listed in Table (2). These quantities include the P-factor, the structure factor SF, the saturation parameter SP, the F-spin and its projection \(F_{0}\), the pairing gaps \(\triangle\), and the deformation parameter \(\beta\). The maximum structure factor for our region of nuclei is SF = 6720. It is seen that the ratio \(N_{p}N_{n}/\triangle\) rather than the product \(N_{p}N_{n}\) may be a better parameter for studying the IB's. Note that nuclei with symmetric \(\pm F_{0}\) values have identical \(N_{p}N_{n}\) values. For example, the pair (\({}^{160}Er\) and \({}^{168}Hf\)) have \((N_{p},N_{n})=(14,10)\) and \((10,14)\) respectively, so that \(N_{p}N_{n}=140\) and \(F_{0}=\pm 1\). Therefore, if any F-spin multiplet has \(F_{0}=|N_{p}-N_{n}|/4\), the pair of nuclei are similar in structure if they have identical \((|F_{0}|,N_{p}N_{n})\). The percentage difference ratios in transition energy \(\delta\) and the rigid rotor ratio \(\delta_{R}\) between pairs of levels in two nuclei are calculated and listed in Table (3) for our eight pairs of IB's. Although the parameters \(N_{p}N_{n}\), P, SF and SP are the same for the pair \((^{156}Dy,^{172}W)\), this pair is not really identical according to its high average percentage difference in transition energies (approximately 6.7%). For each nucleus in the isotopic chains of \({}_{66}Dy\), \({}_{68}Er\), \({}_{70}Yb\), \({}_{72}Hf\) and \({}_{74}W\), the values of the lowest dynamical moment of inertia \(J_{lowest}^{(2)}\) were calculated and displayed against the neutron number N in Figure (4). It can be seen that \(J_{lowest}^{(2)}\) increases with increasing neutron number N and the difference in \(J_{lowest}^{(2)}\) for each pair of IB's is very small (approximately a horizontal line). As an example of two nuclei that exhibit good IB's, the pair \({}^{162}_{68}Er\) (\(J_{lowest}^{(2)}=31.525\,\hbar^{2}MeV^{-1}\)) and \({}^{166}_{70}Yb\) (\(J_{lowest}^{(2)}=31.519\,\hbar^{2}MeV^{-1}\)) have nearly the same \(J_{lowest}^{(2)}\).
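The extraction of \(\hbar\omega\), \(J^{(1)}\), \(J^{(2)}\) and \(J^{(2)}_{lowest}\) from in-band transition energies, eqs. (10)-(13), can be scripted as below; the gamma-ray energies used are rounded illustrative values for a typical rare-earth ground-state band, not the evaluated data of [51].

```python
import numpy as np

def moments_of_inertia(spins, e_gamma_mev):
    """hbar*omega, J(1) and J(2) from in-band E2 transition energies,
    following eqs. (10)-(12); `spins` are those of the decaying levels."""
    I = np.asarray(spins, float)
    eg = np.asarray(e_gamma_mev, float)
    j1 = (2 * I - 1) / eg                          # eq. (11)
    hw = 0.25 * (eg[1:] + eg[:-1])                 # eq. (10)
    j2 = 4.0 / (eg[1:] - eg[:-1])                  # eq. (12)
    return hw, j1, j2

# Illustrative gamma-ray cascade (MeV): 2->0, 4->2, 6->4, 8->6, 10->8.
spins = [2, 4, 6, 8, 10]
eg = [0.102, 0.227, 0.343, 0.449, 0.543]
hw, j1, j2 = moments_of_inertia(spins, eg)
print("J(2)_lowest =", round(4.0 / (eg[1] - eg[0]), 2), "hbar^2/MeV")  # eq. (13)
print("J(2) along the band:", np.round(j2, 1))
```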
\begin{table} \begin{tabular}{c||c|c|c|c|c|c} \hline & \(N_{p}N_{n}\) & P & SF & SP & \(|\delta|\%\) & \(|k|\%\) \\ \hline \((^{158}Er\ -\ ^{170}W)\) & 112 & 5.090 & 2464 & 0.7317 & 1.28 & 1.27 \\ \((^{162}Yb\ -\ ^{166}Hf)\) & 120 & 5.4545 & 2640 & 0.7179 & 2.94 & 2.45 \\ \((^{156}Dy\ -\ ^{172}W)\) & 128 & 5.333 & 3072 & 0.6862 & 6.73 & 6.28 \\ \((^{160}Er\ -\ ^{168}Hf)\) & 140 & 5.833 & 3360 & 0.6666 & 1.35 & 1.22 \\ \((^{158}Dy\ -\ ^{170}Hf)\) & 160 & 6.1538 & 4160 & 0.6176 & 1.28 & 1.27 \\ \((^{162}Er\ -\ ^{166}Yb)\) & 168 & 6.6461 & 4368 & 0.6060 & 0.22 & 0.20 \\ \((^{160}Dy\ -\ ^{168}Yb)\) & 192 & 6.6857 & 5376 & 0.5555 & 0.10 & 0.30 \\ \((^{162}Dy\ -\ ^{166}Er)\) & 224 & 7.466 & 6720 & 0.5 & 1.29 & 1.26 \\ \hline \end{tabular} \end{table} Table 2: The identical band quantities of our eight pairs of nuclei.

We classified our selected pairs of IB's into four multiplets, (A+4, Z+2), (A+8, Z+4), (A+12, Z+6), and (A+16, Z+8), and the percentage differences in transition energies \(\delta=\triangle E_{\gamma}/E_{\gamma_{2}}\) as a function of spin I (up to I=10) have been calculated and illustrated in Figure (5). It is seen that the pairs of IB's have approximately similar \(\delta\) (less than 2.5 %) except for the two pairs which include the tungsten isotopes \({}^{170,172}W\), where the value of \(\delta\) reaches \(\sim 6-10\)% although they have the same \(N_{p}N_{n}\) value (\(N_{p}N_{n}=112\) for \({}^{158}Er,^{170}W\) and \(N_{p}N_{n}=128\) for \({}^{156}Dy,^{172}W\)). For further investigation of the IB's, we used the SU(3) rotational limit of the IBM to extract the quadrupole deformation \(\beta_{IBM}\) for each nucleus. The calculated \(\beta_{IBM}\) is plotted against the ratio \(N_{\nu}/N_{\pi}\) (where \(N_{\nu}\) and \(N_{\pi}\) are the numbers of valence neutron and valence proton bosons respectively) in Figure (6). It is seen that \(\beta_{IBM}\) is the same for each pair of IB's (horizontal line).

\begin{table} \begin{tabular}{c||c|c|c} \hline Identical pairs & \(|\delta|=\frac{\triangle E_{\gamma}}{E_{\gamma_{2}}}\) (\%) & \(\delta_{R}\) (\%) & \(R_{\delta}=\delta/\delta_{R}\) \\ \hline \((^{162}Yb\ -\ ^{166}Hf)\) & 2.964 & 4.149 & 0.714 \\ \((^{162}Er\ -\ ^{166}Yb)\) & 0.415 & 4.149 & 0.100 \\ \((^{162}Dy\ -\ ^{166}Er)\) & 1.297 & 4.149 & 0.312 \\ \((^{160}Er\ -\ ^{168}Hf)\) & 1.352 & 8.471 & 0.159 \\ \((^{160}Dy\ -\ ^{168}Yb)\) & 1.131 & 8.471 & 0.133 \\ \((^{158}Er\ -\ ^{170}W\ )\) & 10.826 & 12.976 & 0.834 \\ \((^{158}Dy\ -\ ^{170}Hf)\) & 1.765 & 12.976 & 0.136 \\ \((^{156}Dy\ -\ ^{172}W)\) & 7.410 & 17.671 & 0.419 \\ \hline \end{tabular} \end{table} Table 3: The percentage difference ratios in transition energies \(\delta\), the rigid rotor ratio \(\delta_{R}\), and the ratio \(R_{\delta}=\delta/\delta_{R}\) for the eight pairs of identical bands.

Figure 4: The lowest dynamical moment of inertia \(J^{(2)}_{lowest}\) against the neutron number N for the eight pairs of identical bands. The solid line connects each pair and the symbols \(o,*,\triangle,\Box,\) and \(\diamondsuit\) denote \({}_{66}Dy\), \({}_{68}Er\), \({}_{70}Yb\), \({}_{72}Hf\), and \({}_{74}W\) respectively.

Figure 5: Percentage difference in transition energies \(\delta=\triangle E_{\gamma}/E_{\gamma_{2}}\) for the eight pairs of the multiplets (A+4, Z+2), (A+8, Z+4), (A+12, Z+6), and (A+16, Z+8) for Dy, Er, Yb, Hf, and W isotopes. The dashed curve represents the ratio of the rigid rotor.
Figure 6: The quadrupole deformation parameter \(\beta_{IBM}\) calculated from the SU(3) limit of the IBM as a function of \(N_{\nu}/N_{\pi}\) for our eight pairs of identical bands.

For each nucleus, by using the IBM Hamiltonian of equation (45) and its eigenvalue equation (53), the PES's have been calculated as a function of the deformation parameter \(\beta\) along the axial trajectory \(\gamma\) = 0\({}^{\circ}\), 60\({}^{\circ}\). The results are illustrated in Figure (7) and the corresponding calculated parameters of the PES's \(A_{2}\), \(A_{3}\), \(A_{4}\) and \(A_{0}\), which are linear combinations of the original parameters \(\epsilon_{d}\) and \(a_{2}\), are listed in Table (4). From the graphs presented in Figure (7), we observe the similarity of the PES's for each pair of IB's. All studied nuclei are deformed and have rotational characters; the prolate deformation is deeper than the oblate deformation.

Figure 7: Sketch of the potential energy surface (PES) calculated from the U(5)-SU(3) shape phase transition of the IBM with the intrinsic coherent state versus the deformation parameter \(\beta\) for the eight pairs of even-even nuclei having identical bands.

## 6 Conclusion

By using a novel three-parameter collective rotational formula (CRF3), the positive parity ground state excitation energies are calculated for sixteen nuclei in the rare-earth region. The optimized three parameters are deduced by using a computer-simulated search program in order to obtain a minimum root mean square deviation of the calculated excitation energies from the measured ones. The potential energy surfaces are calculated by using the sd-version of the interacting boson model. The problem of low-spin identical bands in normally deformed nuclei in the rare-earth region is treated. We have exhibited identical bands in eight pairs of conjugate even-even nuclei widely dispersed, spanning as much as sixteen mass units. Each pair with the same F-spin and projection \(\pm F_{0}\) values has identical products of the valence proton and neutron numbers \(N_{p}N_{n}\). Also, the values of the dynamical moments of inertia for each identical band pair are approximately the same. We extracted all the identical band symmetry parameters, like the P-factor, the saturation parameter, and the structure factor, which all depend on \(N_{p}\) and \(N_{n}\). The pairing interaction energy, the quadrupole transition probabilities, and the energy ratios are also treated.
2309.10628
Symmetric conformity functions make decision-making processes independent of the distribution of learning strategies
Two main procedures characterize the way in which social actors evaluate the qualities of the options in decision-making processes: they either seek to evaluate their intrinsic qualities (individual learners), or they rely on the opinion of the others (social learners). For the latter, social experiments have suggested that the mathematical form of the probability of adopting an option, called the conformity function, is symmetric in the adoption rate. However, the literature on decision-making includes models where social learners employ either symmetric or nonsymmetric conformity functions. We generalize a particular case studied in a previous work, and we show analytically that if the conformity function is symmetric, the details of the probability distribution of the propensity of the agents to behave as a social or an individual learner do not matter, only its expected value influences the determination of the steady state. We also show that in this case, the same steady state is reached for two extreme dynamical processes: one that considers propensities as idiosyncratic properties of the agents (each agent being an individual learner always with the same probability), and the opposite one, which allows them to change their propensity during the dynamics. This is not the case if the conformity function is nonsymmetric. This fact can inspire experiments that could shed light on the debate about mathematical properties of conformity functions.
Arkadiusz Jędrzejewski, Laura Hernández
2023-09-19T14:07:11Z
http://arxiv.org/abs/2309.10628v2
Symmetric conformity functions make decision-making processes independent of the distribution of learning strategies ###### Abstract Two main procedures characterize the way in which social actors evaluate the qualities of the options in decision-making processes: they either seek to evaluate their intrinsic qualities (individual learners) or they rely on the opinion of the others (social learners). For the latter, social experiments have suggested that the mathematical form of the probability of adopting an option, called the _conformity function_, is symmetric in the adoption rate. However, the literature on decision making includes models where social learners employ either symmetric and non-symmetric conformity functions. Here, we generalize previous models and show analytically that when symmetric conformity functions are considered, the form of the probability distribution of the individual strategies (behaving as a social or an individual learner) does not matter: only the expected value of this distribution influences the determination of the steady state. Moreover, we show that a dynamics that considers strategies as idiosyncratic properties of the agents and another that allows them to change in time lead to the same result in the case of symmetric conformity functions, while the results differ in the case of non-symmetric ones. This fact can inspire experiments that could shed light on the debate about this point. ## I Introduction Decision making is an individual task that benefits from a detailed knowledge about the possible options. The vast literature addressing the way in which different species of animals, and in particular humans, acquire this knowledge is pluri-disciplinary and targets different aspects of the problem [1; 2; 3]. Social actors are usually classified according to their learning strategies as _individual learners_, those who search to identify the intrinsic merits of the options without suffering any peer pressure, or _social learners_, those who simply follow their peers' choice. However, this is a rough classification as each class entails a variety of cognitive processes that are very difficult to disentangle experimentally. Early studies on decision making were challenged by new experimental techniques [4; 5]. A question that raised strong debates concerns the transmission of learning abilities, in the light of natural selection. As social learning (in any of its forms) is considered less costly than individual learning, it is supposed to enhance individual fitness and then prevail [6]. However, A. Rogers showed that this may not be the case if the environment is subject to changes. In this case, if social learners are selected, their proportion in society increases, and the probability that they obtain a wrong information about the environment by copying other social learners with "old" information increases, and therefore, their fitness diminishes. This is known as the Rogers' paradox [7]. Rogers' paradox does not mean that social learning--thus culture--prevents social agents from adapting to the environment, it just points out that a model that only evaluates the cost-benefit of the chosen strategies is not enough to account for the observations. Rogers himself had suggested to introduce some biases in the social learning like to copy preferentially high fitness individuals, or as proposed by Boyd and Richerson, just copy individual learners. 
Neither of the two lifted the paradox, and the fitness of the group decreases with generations because the strategy of social learners is frequency dependent while that of individual learners is not [8]. Other modifications introduce the possibility for the strategies of an individual to evolve according to different situations (cost of individual learning, changing environment, fitness of the neighbours, etc.). These modifications may or may not lift Rogers' paradox, depending on the details of the parameters [9; 10]. All this shows that the problem of how learning strategies are transmitted goes beyond a cost-benefit problem and that flexibility in the learning strategies is essential in order to maintain a high fitness of the population.

Another aspect of the problem is to ask how a given generation composed of individual and social learners reaches a collective decision. In this case, the particular ways in which both social and individual learners acquire new knowledge are studied. For example, one can consider that social learners conform to an option because of peer pressure. Individual learners, who seek information on their own about the options, may also take advantage of the knowledge about the choices of others, but in a different way, as the merit of an option may, in some cases, be correlated with the number of individuals that have already chosen it, either positively or negatively [11]. Let us consider, for example, the usage of electric cars. Individual learners may consider that part of the merit of this option resides in the fact that it limits \(CO_{2}\) emissions, but they may also be interested in learning that a large fraction of the population has already adopted it, because this will enhance the development of recharging stations, increasing the merit of this option further. On the contrary, if they were to decide about choosing public transportation, which also reduces \(CO_{2}\) emissions, they may also evaluate the fact that if a large fraction of others already chose this option, its merit diminishes because of the discomfort of crowded transportation. In this sense, the merit of the options is not a constant but may depend on the frequency of adoption. It is interesting to notice here that this point also addresses the main ingredient of Rogers' paradox [8].

Recently, a dynamical system model has shown that social learners may, when numerous, impair collective performance [12]. This model considers the proportion of social and individual learners, as well as the value of the merit, as fixed parameters and chooses a symmetric _conformity function_ to describe social learning. However, the specific form of the social learning function is still a subject of study. McElreath et al. have detailed different heuristics for social learning which lead to either symmetric or non-symmetric functional forms [13]. Experiments, both in the laboratory and in real settings, show different ways in which individuals learn from peers [14], and whether there is a general mathematical form for the social learning function is far from clear [15; 16]. These experiments also observed subjects changing their strategies during the experiment [16], alternating the ways in which they gather information (either by learning individually or by getting information from their peers).
In this work, we present a general theoretical model that allows us to explore analytically the possible outcomes of dynamics based either on fixed (quenched dynamics) or alternating learning strategies (annealed dynamics), and where the social learning function may be symmetric, as the one considered in Ref. [12], or non-symmetric, as in the well-known \(q\)-voter model [17]. The model also allows for a general distribution of such strategies, \(f_{p}(p_{i})\), where \(p_{i}\) is the probability that a given agent acts as an individual learner. We show analytically that a symmetric learning function has important consequences in the phase diagram of the system, which does not depend on the distribution \(f_{p}(p_{i})\), but only on its first moment, \(\bar{p}\). Moreover, in this situation, the phase diagrams of the quenched and annealed dynamics are identical. On the contrary, for the non-symmetric learning function the resulting phase diagrams for the quenched and annealed dynamics are different, even with variations in the presence of discontinuous or continuous transitions depending on the parameters. Our results confirm and extend those presented in Ref. [12] which is a particular case of our model for a given set of parameters. ## II Model and methods We study the situation where individuals (agents) of the society have to make a binary decision--choosing a product, adopting a given behaviour, or a social norm--where the options are called \(A\) and \(B\). Those who behave as individual learners evaluate the options' properties by themselves, while social learners rely on what has been chosen by others. Each agent \(i\) chooses to behave as an individual learner with probability \(p_{i}\) and the distribution of such probabilities over the population may follow a general distribution \(f_{p}(p_{i})\), as illustrated in the left part of Fig. 1. There are two possible dynamics: either the agents choose their \(p_{i}\) from the start and stay with them throughout the entire dynamical learning process (quenched dynamics), or the agents choose their \(p_{i}\) at each step of the dynamics (annealed dynamics). An agent that follows the individual learner dynamics will evaluate the probability of choosing one of the options, let's say \(A\), using the function \(f_{I}(a)\), where \(a\) represents the fraction of the population that has chosen that option. Therefore, the probability of selecting option \(B\) through the same process is given by the complementary probability \(1-f_{I}(a)\). At a difference with previous works, we use an individual learning function, instead of a constant parameter, to account for the situation where the merit of the option may increase or decrease according to the fraction of the population that has already adopted it [11]. Following the Rasch model [18], we use the logit function to define \(f_{I}(a)\). Thus, the natural logarithm of the odds of choosing \(A\) is proportional to the fraction of individuals favoring this option over the fraction \(a_{m}\), which gives the equal probability to both options: \[\ln\left[\frac{f_{I}(a)}{1-f_{I}(a)}\right]=k(a-a_{m}). \tag{1}\] The parameter \(k\) accounts for the tendency and the strength of the likelihood of choosing option \(A\), whereas \(a_{m}\) is the midpoint of \(f_{I}(a)\), i.e., \(f_{I}(a_{m})=1/2\), which we set here as \(a_{m}=1/2\). The left hand side of Eq. (1) can be interpreted as a trade-off between an individual's attitudes towards option \(A\) and its adoption difficulties [19; 20]. 
In our model, this trade-off depends explicitly on the number of adopters. From Eq. (1), we get the following form of the individual learning probability: \[f_{I}(a)=\frac{1}{1+e^{-k(a-a_{m})}}. \tag{2}\] The right panel of Fig. 1 illustrates the shape of \(f_{I}(a)\) for different values of \(k\). Notice that the model studied in Ref. [12] is a particular case of this one for \(k=0\) and quenched dynamics.

On the other hand, social learners will be directly influenced by the choice of their peers. The form of this influence is given by a _conformity function_ \(f_{S}(x)\) that returns the probability of _changing to_ the option favored by a fraction \(x\) of individuals. This function accounts for the average social influence felt by the agent, which grows with the fraction of adopters of the option. Table 1 summarises the way individual and social learning functions are used to update the agents' options. The mathematical properties of the conformity function are still a subject of debate. In simple models of frequency-dependent bias [6], the conformity function is taken to be an increasing function, symmetric around a midpoint \(f_{S}(0.5)=0.5\). Some experimental results tend to support this hypothesis [13; 14; 21]. For such a conformity function, the following dependency holds: \[f_{S}(x)+f_{S}(1-x)=1 \tag{3}\] for all \(x\in[0,1]\). Other works have represented social influence through non-symmetric conformity functions [6; 22; 23]. There is also experimental evidence for such a form, in particular concerning what is known as frequency-dependent direct bias in social learning [6; 13; 16]. Thus, we study both cases in this work. For the symmetric case, and for the sake of comparison, we use the same form as in Ref. [12]: \[f_{S}(x)=\begin{cases}\frac{1}{2}(2x)^{q}&\text{if }0\leq x<0.5,\\ 1-\frac{1}{2}\left[2(1-x)\right]^{q}&\text{if }0.5\leq x\leq 1.\end{cases} \tag{4}\] For the non-symmetric conformity function, we assume a simple mathematical form inspired by the _non-linear q-voter model_ [17; 23; 24]: \[f_{S}(x)=x^{q}. \tag{5}\] Both functions are parameterized by \(q\), which reflects their degree of non-linearity, where \(q=1\) corresponds to a linear response.

## III Results

We study two dynamical scenarios where \(p_{i}\) is either a quenched or an annealed property of agent \(i\) [23; 24]. In this section, we present the general equations for these two dynamics without imposing any specific form of the learning strategy distribution, \(f_{p}(p_{i})\). We also illustrate the behaviour of the system in the particular case treated in Ref. [12], where the choice between the two strategies follows a simple Bernoulli distribution. These general results, along with their specification for the individual and social learning functions given by Eq. (2), Eq. (4), and Eq. (5), are summarized below in Table 2.

### General results

Let \(f_{p}(x)\) be an arbitrary distribution with mean \(\bar{p}=\int xf_{p}(x)dx\).
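Before proceeding with the general derivation, a concrete numerical illustration of the learning functions defined in Eqs. (2), (4), and (5) may be useful. The short sketch below (Python, an illustration rather than the published code) implements the individual learning probability and the two conformity functions, and checks that only the symmetric form satisfies the relation of Eq. (3).

```python
import numpy as np

def f_I(a, k, a_m=0.5):
    """Individual learning probability, Eq. (2)."""
    return 1.0 / (1.0 + np.exp(-k * (a - a_m)))

def f_S_symmetric(x, q):
    """Symmetric conformity function, Eq. (4)."""
    x = np.asarray(x, dtype=float)
    return np.where(x < 0.5, 0.5 * (2 * x) ** q, 1.0 - 0.5 * (2 * (1 - x)) ** q)

def f_S_nonsymmetric(x, q):
    """Non-symmetric (q-voter-like) conformity function, Eq. (5)."""
    return np.asarray(x, dtype=float) ** q

x, q = np.linspace(0.0, 1.0, 11), 3
# Eq. (3): f_S(x) + f_S(1-x) = 1 holds only for the symmetric form.
print(np.allclose(f_S_symmetric(x, q) + f_S_symmetric(1 - x, q), 1.0))        # True
print(np.allclose(f_S_nonsymmetric(x, q) + f_S_nonsymmetric(1 - x, q), 1.0))  # False
# For k < 0, individual learning disfavors widely adopted options.
print(f_I(np.array([0.1, 0.5, 0.9]), k=-15))
```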
\begin{table} \begin{tabular}{c|c c|c c} \multirow{2}{*}{Option before learning} & \multicolumn{4}{c}{Option after} \\ \cline{2-5} & \multicolumn{2}{c|}{Individual learning} & \multicolumn{2}{c}{Social learning} \\ \cline{2-5} & \(A\) & \(B\) & \(A\) & \(B\) \\ \hline \(A\) & \(f_{I}(a)\) & \(1-f_{I}(a)\) & \(1-f_{S}(1-a)\) & \(f_{S}(1-a)\) \\ \(B\) & \(f_{I}(a)\) & \(1-f_{I}(a)\) & \(f_{S}(a)\) & \(1-f_{S}(a)\) \\ \end{tabular} \end{table} Table 1: Probabilities that an agent with a given option remains it or changes it to the alternative one using a corresponding learning strategy. Figure 1: Main elements of the model. (left panel) Model’s diagram: agent \(i\) has a personal inclination towards individual learning, represented by the probability \(p_{i}\) of choosing this strategy. Its value is drawn from distribution \(f_{p}(p_{i})\). In the quenched dynamics a \(p_{i}\) value is assigned to agent \(i\) at the beginning of the process; alternatively in the annealed dynamics, the tendency to individual learning may change at each time step. Both learning strategies depend on the fraction of opinion \(A\) followers, \(a\). (right panel) Individual learning as a function of the fraction of option \(A\) followers for a few values of \(k\). Annealed dynamics At each time step, each agent is assigned a probability of being individual learner from the distribution \(f_{p}(p_{i})\). It should be noticed that the probability distribution itself does not change in time. The time evolution of the fraction of adopters of choice A, \(a\), is given by: \[\frac{da}{dt}=P_{B\to A}(1-a)-P_{A\to B}a, \tag{6}\] where \(P_{B\to A}\) and \(P_{A\to B}\) are the transition probabilities. The transition probabilities from one option to the other are formed at each step by those agents that learnt about the options either individually or socially: \[\begin{split} P_{B\to A}=&\int xf_{I}(a)f_{p}(x) dx+\int(1-x)f_{S}(a)f_{p}(x)dx,\\ P_{A\to B}=&\int x\left[1-f_{I}(a)\right]f_{p}(x)dx \\ &+\int(1-x)f_{S}(1-a)f_{p}(x)dx.\end{split} \tag{7}\] Having integrated the above formulas, we get: \[\begin{split} P_{B\to A}&=\bar{p}f_{I}(a)+(1-\bar{p}) f_{S}(a),\\ P_{A\to B}&=\bar{p}\left[1-f_{I}(a)\right]+(1-\bar{p}) f_{S}(1-a).\end{split} \tag{8}\] From Eqs. (6) and (8), we get that \[\frac{da}{dt}=\bar{p}\left[f_{I}(a)-a\right]+(1-\bar{p})\left[(1-a)f_{S}(a)- af_{S}(1-a)\right]. \tag{9}\] Calling \(a^{*}\) the fixed points that make: \[\frac{da}{dt}\bigg{|}_{a^{*}}=0. \tag{10}\] We get \(a^{*}=1/2\) for any value of \(\bar{p}\), and also those satisfying the following equation: \[\bar{p}=\frac{a^{*}\left[f_{S}(a^{*})+f_{S}(1-a^{*})\right]-f_{S}(a^{*})}{a^{ *}\left[f_{S}(a^{*})+f_{S}(1-a^{*})\right]-f_{S}(a^{*})+f_{I}(a^{*})-a^{*}}. \tag{11}\] If the conformity function, \(f_{S}(a)\), is symmetric, we can use Eq. (3) to simplify the above formula. As a result, Eq. (11) becomes: \[\bar{p}=\frac{a^{*}-f_{S}(a^{*})}{f_{I}(a^{*})-f_{S}(a^{*})}. \tag{12}\] #### ii.1.2 Quenched dynamics The probability of being an individual learner is assigned for each individual at the beginning of the dynamics, and each agent keeps the same probability value during all the evolution. Let \(a_{x}\) denote the fraction of agents that choose to act as individual learners with probability \(p_{i}=x\) and who favor option \(A\) and \(1-a_{x}\) those who being individual learners, with the same probability, favour option \(B\). 
For each value of the probability \(x\) of being individual learner, we have the following rate equation for the adopters of \(A\): \[\frac{da_{x}}{dt}=P_{B\to A}^{x}(1-a_{x})-P_{A\to B}^{x}a_{x}, \tag{13}\] where \(P_{B\to A}^{x}\) and \(P_{A\to B}^{x}\) are the transition probabilities for the group of agents that are individual learners with probability \(p_{i}=x\) (and therefore, social learners with probability \(1-x\)): \[\begin{split} P_{B\to A}^{x}&=xf_{I}(a)+(1-x)f_{S}(a ),\\ P_{A\to B}^{x}&=x\left[1-f_{I}(a)\right]+(1-x)f_{S}(1-a). \end{split} \tag{14}\] We look for the fractions of adopters \(a_{x}^{*}\) of option \(A\) among agents with \(p_{i}=x\) that make the evolution of all the populations stationary: \[\frac{da_{x}}{dt}\bigg{|}_{\{a_{x}^{*}\}}=0. \tag{15}\] We see that \(a_{x}^{*}=1/2\) is a fix point for all values of \(x\), in this case, Eq. (15) is satisfied for any distribution \(f_{p}(x)\). The remaining fixed points are determined by combining Eqs. (13), (14), and (15): \[a_{x}^{*}=\frac{xf_{I}(a^{*})+(1-x)f_{S}(a^{*})}{x+(1-x)\left[f_{S}(a^{*})+f_{ S}(1-a^{*})\right]}, \tag{16}\] where \[a^{*}=\int a_{x}^{*}f_{p}(x)dx. \tag{17}\] The fixed values of \(a\) are obtained by solving Eq. (17) with the obtained formulas for \(a_{x}^{*}\). If the conformity function is non-symmetric, we cannot perform further calculations without knowing the exact distribution \(f_{p}(x)\). However, if the conformity function is symmetric, we can perform the integration in Eq. (17) without imposing any special form of \(f_{p}(x)\) since we can use Eq. (3) to simplify Eq. (16). In such a case, we get \[a^{*}=\bar{p}f_{\text{I}}(a^{*})+(1-\bar{p})f_{\text{S}}(a^{*}), \tag{18}\] so the fixed points depend only on the mean of the distribution \(\bar{p}=\int xf_{p}(x)dx\). By transforming Eq. (18), we get \[\bar{p}=\frac{a^{*}-f_{\text{S}}(a^{*})}{f_{\text{I}}(a^{*})-f_{\text{S}}(a^{* })}. \tag{19}\] Comparing Eq. (19) with Eq. (12), we get the main general result of this work: if the conformity function is symmetric, the fixed points of the quenched and the annealed dynamics are the same and only depend on the first moment of the distribution \(f_{p}(p_{i})\) and not on the details of this distribution. ### Particular case: Bernoulli distribution of the learning strategies Let us consider a simple case where \(p_{i}\) is Bernoulli distributed, so \(\forall i=1,...,N\), \(f_{p}(p_{i}=1)=p\) and \(f_{p}(p_{i}=0)=1-p\), and \(p\) is the mean of the distribution. Note that agents with \(p_{i}=1\) certainly behave as individual learners, whereas those with \(p_{i}=0\) certainly behave as social learners. #### iii.2.1 Annealed dynamics Each agent is assigned a particular learning strategy at each time step: individual learning (\(p_{i}=1\)) and social learning (\(p_{i}=0\)) with probability \(p\) and \(1-p\), respectively. To describe such a system, we simply use equations from Section III.1.1 with \(\bar{p}=p\) since \(p\) is the mean of the Bernoulli distribution in this case. Notice that for \(k=0\) and a non-symmetric conformity function given by Eq. (5), we get a particular case of the non-linear noisy voter model [25] or the \(q\)-voter model with independence [26]. 
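The annealed fixed points discussed above can also be found by simply integrating the mean-field rate equation, Eq. (9), forward in time until the dynamics settles. The fragment below is an illustrative Python sketch (not the published code) that does this for given \(\bar{p}\), \(q\), and \(k\) and either type of conformity function.

```python
import math

def steady_state(p_bar, q, k, a0=0.9, dt=0.05, t_max=1000.0, symmetric=True):
    """Forward-Euler integration of the annealed rate equation, Eq. (9)."""
    f_I = lambda a: 1.0 / (1.0 + math.exp(-k * (a - 0.5)))                     # Eq. (2)
    if symmetric:                                                              # Eq. (4)
        f_S = lambda x: 0.5 * (2 * x) ** q if x < 0.5 else 1.0 - 0.5 * (2 * (1 - x)) ** q
    else:                                                                      # Eq. (5)
        f_S = lambda x: x ** q
    a = a0
    for _ in range(int(t_max / dt)):
        da = p_bar * (f_I(a) - a) + (1 - p_bar) * ((1 - a) * f_S(a) - a * f_S(1 - a))
        a += dt * da
    return a

# Steady-state fraction of A-adopters for q = 3, k = -15, starting from a0 = 0.9
# (an initial majority for A); compare with the diagrams in Fig. 2.
for p_bar in (0.1, 0.3, 0.5, 0.7):
    print(p_bar, round(steady_state(p_bar, q=3, k=-15), 3))
```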
\begin{table} \begin{tabular}{c|c|c} Conformity function & Annealed dynamics & Quenched dynamics \\ \hline Symmetric & Annealed and quenched dynamics lead to the same fixed points that do not depend on the full distribution of learning strategies, but only on its mean: & \\ & \[\bar{p}=\frac{a^{*}-f_{S}(a^{*})}{f_{I}(a^{*})-f_{S}(a^{*})},\text{ where }\bar{p}=\int xf_{p}(x)dx.\] & For a special case of symmetric conformity function given by Eq. (4), we have & \\ & \[\bar{p}=\begin{cases}\frac{2a^{*}-(2a^{*})^{q}}{2f_{I}(a^{*})-(2a^{*})^{q}}& \text{if }0\leq a^{*}<0.5,\\ \frac{2(a^{*}-1)+[2(1-a^{*})]^{q}}{2\left[f_{I}(a^{*})-1\right]+[2(1-a^{*})]^{ q}}&\text{if }0.5\leq a^{*}\leq 1.\end{cases}\] \\ \hline Non-symmetric & Fixed points depend on the mean of the distribution of learning strategies: & Fixed points depend on the whole shape of the distribution of learning strategies: & \\ & \[\bar{p}=\frac{a^{*}\left[f_{S}(a^{*})+f_{S}(1-a^{*})\right]-f_{S}(a^{*})}{a^{*} \left[f_{S}(a^{*})+f_{S}(1-a^{*})-1\right]+f_{I}(a^{*})-f_{S}(a^{*})},\] & \[a^{*}=\int a^{*}_{x}f_{p}(x)dx,\text{ where }\\ & \[a^{*}_{x}=\frac{xf_{I}(a^{*})+(1-x)f_{S}(a^{*})}{x+(1-x)\left[f_{S}(a^{*})+f_{ S}(1-a^{*})\right]}.\] \\ \end{tabular} \end{table} Table 2: Summary of the results. Formulas for the fixed points with majoritarian option of the models with different dynamics and types of conformity functions. Figure 2: Stable (solid lines) and unstable (dashed lines) fixed points for the model with \(q=3\) and \(k=-15\) for (a) a symmetric conformity function and (b) a non-symmetric conformity function. For the symmetric conformity function, annealed and quenched approaches produce the same diagrams. Symbols represent the results from the simulations of the model with \(\bullet\) annealed and \(\blacktriangle\) quenched dynamics. Detailed information about the simulations can be found in the Supporting Information. #### ii.1.2 Quenched dynamics Each agent is assigned a particular learning strategy from the Bernoulli distribution only once, at the start of the dynamics. This means that eventually, we have two groups of agents. One group consists of individual learners (\(p_{i}=1\)), and the other group consists of social learners (\(p_{i}=0\)). In such a system, individual learners represent a fraction \(p\) of the total population, wheres social learners represent the remaining fraction \(1-p\). Notice that for \(k=0\), we obtain a particular case presented in Ref. [11] or [27] depending on the considered type of the conformity function, symmetric for the former and non-symmetric for the latter. In this case, Eq. (17) becomes \[a=(1-p)a_{0}+pa_{1}, \tag{20}\] where \(a_{0}\) and \(a_{1}\) are the fractions of individual who favor \(A\) among social learners (\(p_{i}=0\)) and individual learners (\(p_{i}=1\)), respectively. The rate equations resulting from Eqs. (13) and (14) are the following: \[\begin{split}\frac{da_{0}}{dt}&=f_{S}(a)(1-a_{0})- f_{S}(1-a)a_{0},\\ \frac{da_{1}}{dt}&=f_{I}(a)-a_{1}.\end{split} \tag{21}\] Thus, the fixed points satisfy: \[p=\frac{a^{*}\left[f_{S}(a^{*})+f_{S}(1-a^{*})\right]-f_{S}(a^{*})}{f_{I}(a^{*} )\left[f_{S}(a^{*})+f_{S}(1-a^{*})\right]-f_{S}(a^{*})}. \tag{22}\] Figure 4: Phase diagram for the model with a symmetric conformity function. In this case, annealed and quenched dynamics produce the same fixed point diagrams. In white area, only continuous phase transitions take place. In red area, discontinuous phase transitions between ordered phases additionally occur. 
In blue area, discontinuous phase transitions between ordered and disordered phases are also possible. Insets illustrate fixed point diagrams in given regions. More detailed versions of these figures are presented in the Supporting Information. Figure 3: Phase diagrams for the model with a non-symmetric conformity function and (a) annealed (b) quenched dynamics. White and blue areas indicate regions of parameter space where only continuous and discontinuous transitions take place, respectively. Insets illustrate fixed point diagrams in given regions. More detailed versions of these figures are presented in the Supporting Information. In order to illustrate the fixed point equations, we need to choose a particular form for the conformity and individual learning functions. Figure 2 shows the steady state \(a^{*}\) as a function of \(p\) for the particular case of \(q=3\) and \(k=-15\). Negative \(k\) values correspond to the situation where the probability of adoption through individual learning diminishes with the fraction of adopters and is therefore in competition with the conformity function. As in Ref. [12], we find that the final state of adoption depends on the fraction of social learners, but the results are very different if one considers symmetric or non-symmetric conformity functions. For symmetric conformity functions, Fig. 2(a) shows that both quenched and annealed curves are indistinguishable. On the contrary, Fig. 2(b) shows that when non-symmetric conformity functions are considered, the phase-plots are qualitatively different for exactly the same parameters, to the extreme of showing continuous transitions for the quenched dynamics and discontinuous for the annealed one. The dotted lines of these figures denote the unstable solutions obtained from the stability analysis detailed in the Supporting Information. Figure 3 shows that the full phase diagrams obtained for the annealed and quenched versions of the model are very different in the case of non-symmetric conformity functions. The solid black line divides the parameter space in a white region of continuous transitions, and the blue one, where discontinuous transitions may occur, as it is illustrated by the fixed point diagrams shown in the insets for the set of parameters indicated by a black dot in the corresponding region. Figure 4 shows a different situation. When the conformity function is symmetric, annealed and quenched dynamics lead to the same phase diagram. Moreover, in this case, we find three different regions in the phase diagram. As for the non-symmetric case, the one indicated in white corresponds to continuous transitions, and the blue one corresponds to a region of discontinuous ones. In these two regions, the transitions occur between phases of _order_ (a majority of the population adopts a given option) and _disorder_ (half of the population adopting each option). However, in the region marked as red, the transition occurs between two different _ordered_ phases, where two unbalanced fractions of the population may choose a given option. The cusp-like shape of the fix-point plots in the insets, which are absent from Fig. 3, are a consequence of the piece-wise form of the studied symmetric conformity function. ## IV Discussion In spite of numerous theoretical and empirical studies aimed at understanding the rules by which individuals conform to the opinions of others [13; 14; 15; 16; 28], the general form of the conformity functions is still an open question. 
An important debate revolves around whether it is symmetric or non-symmetric with respect to its midpoint. Here, we present exact analytical results along with numerical simulations of large but finite populations, showing that the symmetry of the conformity function has important consequences for the outcomes of the dynamical process. If the conformity function is symmetric, the shape of the distribution of learning strategies is irrelevant for determining the fixed point diagram, and only the mean of this distribution counts. Interestingly, in Ref. [12], where a particular case of symmetric functions and quenched dynamics is studied, it was found, on the contrary, that the fixed point diagram is very different if the distribution of learning strategies is right skewed. The reason for this finding is that the skewed distribution they chose has a different mean than the other two. Another important consequence of symmetric conformity functions is that the timescale at which individuals choose a learning strategy does not matter for determining the fixed points of the dynamics. We have shown that the fixed point diagrams are identical whether the individuals keep the same strategy all along the dynamical process (quenched dynamics) or change it on the same timescale as the dynamical variables (annealed dynamics). On the contrary, if the conformity function is non-symmetric, the outcomes of the dynamics become dependent on the timescale at which the individuals change their learning strategies. This is an important point, as it has been shown that individuals tend to change their learning strategies [16; 3]. If they do so frequently, on a timescale that is comparable to that of the dynamical variables measuring the adoption rate, as in the annealed dynamics, the fixed points still depend, as for the symmetric conformity function case, on the mean of the distribution of strategies but not on its whole shape. However, if the individuals are persistent with their choices, as in the quenched dynamics, the whole distribution of learning strategies enters into the determination of the fixed point diagrams. In this case, more effort should be put into modelling various possible distributions and estimating them in empirical studies.

## V Conclusions

In this article, we study analytically and numerically the outcomes of a decision-making process in a population of agents who may choose to learn individually or socially. We consider both symmetric and non-symmetric conformity functions along with annealed and quenched dynamics for the learning strategies, and we show that, depending on the chosen case, the steady-state solutions are very different. Our results suggest an experimental protocol that differs from those commonly used to decide whether the conformity functions are symmetric or not. Instead of trying to fit the naturally noisy data observed in experiments to different mathematical functions in order to decide about the symmetry properties of the conformity function [13; 14; 16; 29], one could try to identify the type of conformity function used by the population in the experiment by observing the outcomes of the global choice made by it, for a given set of controlled parameters. Hopefully, the results presented here may inspire an experimental design that could help to clarify the debate about the symmetry properties of conformity functions.
## Acknowledgments This project has received funding from the European Union's Horizon 2020 research and innovation program under the Maria Sklodowska-Curie grant agreement number 945380. * ## Appendix A Abstract In the main text, we show that in general if the conformity function is symmetric, the shape of the distribution of the learning strategies does not enter in the fixed point equations, and only its mean counts. Moreover, this equation is the same for the annealed and quenched dynamics. On the contrary, in order to plot fixed point and phase diagrams one needs to specify the particular functions that describe the individual and social learning procedures, and in the case of non-symmetric conformity functions, the distribution of learning strategies \(f_{p}(x)\). Herein, we present the details of the calculations of the particular cases used to compute the phase diagrams presented as examples in the main text. In these examples, \(f_{p}(x)\) is a Bernoulli distribution, and the individual learners' function as well as the symmetric and non symmetric conformity functions are those discussed in the main text. ## Appendix A Analytical calculations: stability analysis ### Symmetric conformity function We have chosen \[f_{S}(x)=\begin{cases}\frac{1}{2}(2x)^{q}&\text{if }0\leq x<0.5,\\ 1-\frac{1}{2}\left[2(1-x)\right]^{q}&\text{if }0.5\leq x\leq 1,\end{cases}\] (A.1) as our representative of symmetric conformity functions in order to compare with the particular case studied in Ref. [12]. In the case of symmetric conformity functions, annealed and quenched dynamics lead to the same final result. This result depends only on the mean of the distribution of learning strategies \(f_{p}(x)\), i.e., \(\bar{p}=\int xf_{p}(x)dx\). However, since the calculations that lead to this result are different for annealed and quenched dynamics, we cover them separately in the next subsections. #### a.1.1 Annealed dynamics The rate equation takes the following form: \[\frac{da}{dt}=\begin{cases}\bar{p}\left[f_{I}(a)-a\right]+(1-\bar{p})\left[ \frac{1}{2}(2a)^{q}-a\right]&\text{if }0\leq a<0.5,\\ \bar{p}\left[f_{I}(a)-a\right]+(1-\bar{p})\left\{1-a-\frac{1}{2}\left[2(1-a) \right]^{q}\right\}&\text{if }0.5\leq a\leq 1.\end{cases}\] (A.2) We have two groups of fixed points. The first group is given by \(a^{*}=1/2\) and any value of \(\bar{p}\), whereas the second group satisfies the following formula: \[\bar{p}=\begin{cases}\frac{2a^{*}-(2a^{*})^{q}}{2f_{I}(a^{*})-(2a^{*})^{q}}& \text{if }0\leq a^{*}<0.5,\\ \frac{2(a^{*}-1)+[2(1-a^{*})]^{q}}{2\left[f_{I}(a^{*})-1]+[2(1-a^{*})]^{q}}& \text{if }0.5\leq a^{*}\leq 1.\end{cases}\] (A.3) To check the stability of the derived fixed points, let us first denote the right hand side of Eq. (A.2) by \(F(a)\) and let \[F^{\prime}(a^{*})=\left.\frac{dF(a)}{da}\right|_{a^{*}}.\] (A.4) The fixed point is stable if \(F^{\prime}(a^{*})<0\) and unstable if \(F^{\prime}(a^{*})>0\)[30]. In this case, \[F^{\prime}(a^{*})=\begin{cases}\bar{p}\left[f^{\prime}_{I}(a^{*})-1\right]+(1- \bar{p})\left[q(2a^{*})^{q-1}-1\right]&\text{if }0\leq a^{*}<0.5,\\ \bar{p}\left[f^{\prime}_{I}(a^{*})-1\right]+(1-\bar{p})\left\{q\left[2(1-a^{* })\right]^{q-1}-1\right\}&\text{if }0.5\leq a^{*}\leq 1,\end{cases} \tag{10}\] where \[f^{\prime}_{I}(a^{*})=\left.\frac{df_{I}(a)}{da}\right|_{a^{*}}=\frac{ke^{-k(a ^{*}-1/2)}}{\left[1+e^{-k(a^{*}-1/2)}\right]^{2}}. \tag{11}\] For \(a^{*}=1/2\), we can determine the stability analytically. 
We have \[f^{\prime}_{I}(1/2)=\frac{k}{4}, \tag{12}\] so \[F^{\prime}(1/2)=\frac{k}{4}\bar{p}+q(1-\bar{p})-1. \tag{13}\] Consequently, the point at which the stability of \(a^{*}=1/2\) changes is given by \[\bar{p}^{*}=\frac{4\left(q-1\right)}{4q-k}. \tag{14}\] If \(q>1\) and \(k<4\), the fixed point \(a^{*}=1/2\) is unstable for \(\bar{p}<\bar{p}^{*}\), and it is stable for \(\bar{p}>\bar{p}^{*}\), whereas if \(k>4\), \(a^{*}=1/2\) is unstable for all \(\bar{p}\). If \(0<q<1\) and \(k<4\), \(a^{*}=1/2\) is stable for all \(\bar{p}\), whereas if \(k>4\), \(a^{*}=1/2\) is stable for \(\bar{p}<\bar{p}^{*}\), and it is unstable for \(\bar{p}>\bar{p}^{*}\). The stability of the remaining fixed points, given by Eq. (10), is determined numerically. #### a.2.2 Quenched dynamics In this case, the rate equations have the following forms \[\frac{da_{0}}{dt} =\begin{cases}\frac{1}{2}(2a)^{q}-a_{0}&\text{if }0\leq a<0.5,\\ 1-a_{0}-\frac{1}{2}\left[2(1-a)\right]^{q}&\text{if }0.5\leq a\leq 1,\end{cases} \tag{15}\] \[\frac{da_{1}}{dt} =f_{I}(a)-a_{1}, \tag{16}\] where \[a=(1-p)a_{0}+pa_{1}. \tag{17}\] The first group of fixed points is given by \((a_{0}^{*},a_{1}^{*})=(1/2,1/2)\) and any value of \(\bar{p}\), whereas the second group satisfies the following formulas: \[a_{0}^{*} =\begin{cases}\frac{1}{2}(2a^{*})^{q}&\text{if }0\leq a^{*}<0.5,\\ 1-\frac{1}{2}\left[2(1-a^{*})\right]^{q}&\text{if }0.5\leq a^{*}\leq 1,\end{cases} \tag{18}\] \[a_{1}^{*} =f_{I}(a^{*}), \tag{19}\] where \[a^{*}=(1-\bar{p})a_{0}^{*}+\bar{p}a_{1}^{*}. \tag{20}\] Consequently, we have \[\bar{p}=\begin{cases}\frac{2a^{*}-(2a^{*})^{q}}{2f_{I}(a^{*})-(2a^{*})^{q}}&\text {if }0\leq a^{*}<0.5,\\ \frac{2(a^{*}-1)+[2(1-a^{*})]^{q}}{2\left[f_{I}(a^{*})-1\right]+[2(1-a^{*})]^{q} }&\text{if }0.5\leq a^{*}\leq 1.\end{cases}\] (A.16) Note that the same result was obtained for the annealed dynamics, see Eq. (A.3). To check the stability of the derived fixed points, let us denote the right hand side of Eqs. (A.10) and (A.11) by \(F_{0}(a_{0},a_{1})\) and \(F_{1}(a_{0},a_{1})\), respectively. The stability is determined by the determinant and trace of the following Jacobian matrix: \[\mathbf{J}(a_{0}^{*},a_{1}^{*})=\begin{bmatrix}\dfrac{\partial F_{0}}{ \partial a_{0}}&\dfrac{\partial F_{0}}{\partial a_{1}}\\ \dfrac{\partial F_{1}}{\partial a_{0}}&\dfrac{\partial F_{1}}{\partial a_{1 }}\end{bmatrix}_{(a_{0}^{*},a_{1}^{*})},\] (A.17) where \[\frac{\partial F_{0}}{\partial a_{0}} =\begin{cases}q(1-\bar{p})(2a)^{q-1}-1&\text{if }0\leq a<0.5,\\ q(1-\bar{p})\left[2(1-a)\right]^{q-1}-1&\text{if }0.5\leq a\leq 1,\end{cases}\] (A.18) \[\frac{\partial F_{0}}{\partial a_{1}} =\begin{cases}q\bar{p}(2a)^{q-1}&\text{if }0\leq a<0.5,\\ q\bar{p}\left[2(1-a)\right]^{q-1}&\text{if }0.5\leq a\leq 1,\end{cases}\] (A.19) \[\frac{\partial F_{1}}{\partial a_{0}} =(1-\bar{p})f_{I}^{\prime}(a),\] (A.20) \[\frac{\partial F_{1}}{\partial a_{1}} =\bar{p}f_{I}^{\prime}(a)-1.\] (A.21) The state is stable if \(\det\left[\mathbf{J}(a_{0}^{*},a_{1}^{*})\right]>0\) and \(\operatorname{tr}\left[\mathbf{J}(a_{0}^{*},a_{1}^{*})\right]<0\)[30]. For \((a_{0}^{*},a_{1}^{*})=(1/2,1/2)\), we can determine the stability analytically. 
In this case, we have \[\frac{\partial F_{0}}{\partial a_{0}}\bigg{|}_{(1/2,1/2)} =q(1-\bar{p})-1,\] (A.22) \[\frac{\partial F_{0}}{\partial a_{1}}\bigg{|}_{(1/2,1/2)} =q\bar{p},\] (A.23) \[\frac{\partial F_{1}}{\partial a_{0}}\bigg{|}_{(1/2,1/2)} =\frac{k}{4}(1-\bar{p}),\] (A.24) \[\frac{\partial F_{1}}{\partial a_{1}}\bigg{|}_{(1/2,1/2)} =\frac{k}{4}\bar{p}-1.\] (A.25) Hence, the determinant and the trace are the following: \[\det\left[\mathbf{J}(1/2,1/2)\right] =1-\frac{k}{4}\bar{p}-q(1-\bar{p}),\] (A.26) \[\operatorname{tr}\left[\mathbf{J}(1/2,1/2)\right] =\frac{k}{4}\bar{p}+q(1-\bar{p})-2=-\left(\det\left[\mathbf{J}( 1/2,1/2)\right]+1\right).\] (A.27) As a result, the point at which the stability of \((a_{0}^{*},a_{1}^{*})=(1/2,1/2)\) changes is given by \[\bar{p}^{*}=\frac{4\left(q-1\right)}{4q-k},\] (A.28) which is the same formula as obtained for the annealed dynamics, see Eq. (A.9), and we have the same stability conditions. The stability of the remaining fixed points, given by Eq. (A.16), is determined numerically. #### iii.1.3 Results Figure 5 illustrates the behavior of the model with the symmetric conformity function. In this case, the annealed and quenched dynamics produce the same fixed-point diagrams. In the parameter space presented in Fig. 5(a), we can identify three areas separated by two curves: \(\widetilde{k}(q)\), the red one, and \(\widetilde{k}(q)\), the black one. These curves are determined numerically. For \(k>\widetilde{k}(q)\), the system exhibits continuous transitions between a phase where one option dominates over the other (i.e., ordered phase for \(\bar{p}<\bar{p}^{*}\)) to a phase without the majoritarian option (i.e., disordered phase for \(\bar{p}>\bar{p}^{*}\)), see Fig. 5(b). For \(\bar{k}(q)<k<\widetilde{k}(q)\), additional discontinuous transitions between phases with the majoritarian options appear, see Fig. 5(d). Finally, for \(k<\bar{k}(q)\), discontinuous transitions between phases with and without the majoritarian options are possible, see Fig. 5(f). Figure 5: Behavior of the model with symmetric conformity function, where the annealed and quenched dynamics produce the same diagrams. (a) Phase diagram. The blue and the white regions correspond to the zones of the parameter space where transitions between an ordered and a disordered phase are discontinuous or continuous, respectively. The intermediate red region also presents discontinuous transitions but between two ordered phases with different fraction of adopters. The letters indicate the parameter regions of the following fixed-point diagrams (b)-(f), which present stable (solid lines) and unstable (dashed lines) fixed points for the model with \(q=4\) and (b) \(k=-20\), (c) \(k=\widetilde{k}(q=4)\approx-24.3\), (d) \(k=-27\), (e) \(k=\bar{k}(q=4)\approx-29.1\), (f) \(k=-35\). Symbols represent the results from the simulations of the model with \(\bullet\) annealed and \(\blacktriangle\) quenched dynamics. ### Non-symmetric conformity function We have chosen \[f_{S}(x)=x^{q} \tag{101}\] as our representative of non-symmetric conformity functions as this form is commonly used in models of opinion dynamics [23] that originate from the nonlinear \(q\)-voter model [17]. In the case of non-symmetric conformity functions, annealed and quenched dynamics lead to different results. We cover them separately in the next subsections. 
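Before turning to the non-symmetric case, note that the stability threshold obtained above for the symmetric conformity function, \(\bar{p}^{*}=4(q-1)/(4q-k)\), is straightforward to evaluate and to check against the sign of \(F^{\prime}(1/2)\). The fragment below is a small illustrative check in Python (an assumption, not the published analysis scripts).

```python
# Illustrative stability check for a* = 1/2 (symmetric conformity function,
# annealed dynamics); not the published analysis scripts.

def p_bar_star(q, k):
    """Mean propensity at which a* = 1/2 changes stability (symmetric case)."""
    return 4.0 * (q - 1.0) / (4.0 * q - k)

def F_prime_half(p_bar, q, k):
    """Derivative of the rate equation at a* = 1/2; negative means stable."""
    return (k / 4.0) * p_bar + q * (1.0 - p_bar) - 1.0

q, k = 4.0, -20.0
pc = p_bar_star(q, k)
print("p_bar* =", pc)
# For q > 1 and k < 4, the disordered state a* = 1/2 is unstable below the
# threshold and stable above it.
print(F_prime_half(pc - 0.05, q, k) > 0, F_prime_half(pc + 0.05, q, k) < 0)
```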
#### a.2.1 Annealed dynamics The transition rates take the forms: \[P_{B\to A} =\bar{p}f_{I}(a)+(1-\bar{p})a^{q}, \tag{102}\] \[P_{A\to B} =\bar{p}\left[1-f_{I}(a)\right]+(1-\bar{p})(1-a)^{q}, \tag{103}\] which results in the following rate equation \[\frac{da}{dt}=\bar{p}\left[f_{I}(a)-a\right]+(1-\bar{p})\left[(1-a)a^{q}-a(1-a )^{q}\right]. \tag{104}\] The first group of fixed points is given by \(a^{*}=1/2\) and any value of \(\bar{p}\), and the second group satisfies the following formula: \[\bar{p}=\frac{a^{*}(1-a^{*})^{q}-(1-a^{*})(a^{*})^{q}}{a^{*}\left[(a^{*})^{q} +(1-a^{*})^{q}-1\right]+f_{I}(a^{*})-(a^{*})^{q}}. \tag{105}\] To check the stability of the derived fixed points, let us denote the right hand side of Eq. (104) by \(F(a)\). The stability is determined by the sign of \[F^{\prime}(a^{*})=\bar{p}\left[f^{\prime}_{I}(a^{*})-1\right]+(1-\bar{p}) \left[q(1-a^{*})(a^{*})^{q-1}+qa^{*}(1-a^{*})^{q-1}-(a^{*})^{q}-(1-a^{*})^{q} \right], \tag{106}\] where \[f^{\prime}_{I}(a^{*})=\frac{ke^{-k(a^{*}-a_{0})}}{\left[1+e^{-k(a^{*}-a_{0})} \right]^{2}}. \tag{107}\] For \(a^{*}=1/2\), we can determine the stability analytically. In this case, we have \[f^{\prime}_{I}(1/2)=\frac{k}{4}, \tag{108}\] and \[F^{\prime}(1/2)=\bar{p}\left[\frac{k}{4}-1\right]+(1-\bar{p})\frac{q-1}{2^{q- 1}}. \tag{109}\] Consequently, the point at which the stability of \(a^{*}=1/2\) changes is given by \[\bar{p}^{*}=\frac{q-1}{q-1+2^{q-1}\left(1-\frac{k}{4}\right)}. \tag{110}\] If \(q>1\) and \(k<4\), the fixed point \(a^{*}=1/2\) is unstable for \(\bar{p}<\bar{p}^{*}\), and it is stable for \(\bar{p}>\bar{p}^{*}\), whereas if \(k>4\), \(a^{*}=1/2\) is unstable for all \(\bar{p}\). If \(0<q<1\) and \(k<4\), \(a^{*}=1/2\) is stable for all \(\bar{p}\), whereas if \(k>4\), \(a^{*}=1/2\) is stable for \(\bar{p}<\bar{p}^{*}\), and it is unstable for \(\bar{p}>\bar{p}^{*}\). The stability of the remaining fixed points, given by Eq. (105), is determined numerically. At the fixed point \((a^{*},\bar{p})=(1/2,\bar{p}^{*})\), a pitchfork bifurcation takes place. This bifurcation changes its type from subcritical to supercritical in the parameter space \((k,q)\) along the curve \(k^{*}(q)\) defined by the equation: \[(k^{*})^{3}+8(k^{*}-4)(q-5)q=0. \tag{111}\] The bifurcation is subcritical for \(k<k^{*}(q)\), while it becomes supercritical for \(k>k^{*}(q)\). Note that this model for \(k=0\) corresponds to the \(q\)-voter model with independence [26; 27; 31] or the non-linear noisy voter model [25]. Quenched dynamics In this case, the rate equations have the following forms: \[\frac{da_{0}}{dt} =(1-a_{0})a^{q}-a_{0}(1-a)^{q}, \tag{100}\] \[\frac{da_{1}}{dt} =f_{I}(a)-a_{1}, \tag{101}\] where \[a=(1-p)a_{0}+pa_{1}. \tag{102}\] The first group of fixed points is given by \((a_{0}^{*},a_{1}^{*})=(1/2,1/2)\) and any value of \(\bar{p}\), whereas the second group satisfies the following formulas: \[a_{0}^{*} =\frac{(a^{*})^{q}}{(a^{*})^{q}+(1-a^{*})^{q}}, \tag{103}\] \[a_{1}^{*} =f_{I}(a^{*}), \tag{104}\] where \[a^{*}=(1-\bar{p})a_{0}^{*}+\bar{p}a_{1}^{*}. \tag{105}\] As a result, we have \[\bar{p}=\frac{a^{*}(1-a^{*})^{q}-(1-a^{*})(a^{*})^{q}}{f_{I}(a^{*})\left[(a^{* })^{q}+(1-a^{*})^{q}\right]-(a^{*})^{q}}. \tag{106}\] To check the stability of the derived fixed points, let us denote the right hand side of Eqs. (100) and (101) by \(F_{0}(a_{0},a_{1})\) and \(F_{1}(a_{0},a_{1})\), respectively. The stability is determined by the use of the Jacobian matrix given by Eq. 
(100) where \[\frac{\partial F_{0}}{\partial a_{0}} =q(1-\bar{p})\left[a_{0}(1-a)^{q-1}+(1-a_{0})a^{q-1}\right]-a^{q} -(1-a)^{q}, \tag{107}\] \[\frac{\partial F_{0}}{\partial a_{1}} =q\bar{p}\left[a_{0}(1-a)^{q-1}+(1-a_{0})a^{q-1}\right],\] (108) \[\frac{\partial F_{1}}{\partial a_{0}} =(1-\bar{p})f_{I}^{\prime}(a),\] (109) \[\frac{\partial F_{1}}{\partial a_{1}} =\bar{p}f_{I}^{\prime}(a)-1. \tag{110}\] For \((a_{0}^{*},a_{1}^{*})=(1/2,1/2)\), we can determine the stability analytically. In this case, we have \[\frac{\partial F_{0}}{\partial a_{0}}\bigg{|}_{(1/2,1/2)} =\frac{1}{2^{q-1}}\left[q(1-\bar{p})-1\right], \tag{111}\] \[\frac{\partial F_{0}}{\partial a_{1}}\bigg{|}_{(1/2,1/2)} =\frac{1}{2^{q-1}}q\bar{p},\] (112) \[\frac{\partial F_{1}}{\partial a_{0}}\bigg{|}_{(1/2,1/2)} =\frac{k}{4}(1-\bar{p}),\] (113) \[\frac{\partial F_{1}}{\partial a_{1}}\bigg{|}_{(1/2,1/2)} =\frac{k}{4}\bar{p}-1. \tag{114}\] Thus, the determinant and the trace are the following: \[\det\left[\mathbf{J}(1/2,1/2)\right] =\frac{1}{2^{q-1}}\left[1-\frac{k}{4}\bar{p}-q(1-\bar{p})\right], \tag{115}\] \[\operatorname{tr}\left[\mathbf{J}(1/2,1/2)\right] =\frac{1}{2^{q-1}}\left[q(1-\bar{p})-1\right]+\frac{k}{4}\bar{p}-1. \tag{116}\] As a result, the point at which the stability of \((a_{0}^{*},a_{1}^{*})=(1/2,1/2)\) changes is \[\bar{p}^{*}=\frac{4\,(q-1)}{4q-k}. \tag{100}\] If \(q>1\) and \(k<4\), the fixed point \((a_{0}^{*},a_{1}^{*})=(1/2,1/2)\) is unstable for \(\bar{p}<\bar{p}^{*}\), and it is stable for \(\bar{p}>\bar{p}^{*}\), whereas if \(k>4\), \((a_{0}^{*},a_{1}^{*})=(1/2,1/2)\) is unstable for all \(\bar{p}\). If \(0<q<1\) and \(k<4\), \((a_{0}^{*},a_{1}^{*})=(1/2,1/2)\) is stable for all \(\bar{p}\), whereas if \(k>4\), \((a_{0}^{*},a_{1}^{*})=(1/2,1/2)\) is stable for \(\bar{p}<\bar{p}^{*}\), and it is unstable for \(\bar{p}>\bar{p}^{*}\). The stability of the remaining fixed points, given by Eq. (101), is determined numerically. At the fixed point \((a^{*},\bar{p})=(1/2,\bar{p}^{*})\), a pitchfork bifurcation takes place. This bifurcation changes its type from subcritical to supercritical in the parameter space \((k,q)\) along the curve \(k^{*}(q)\) defined by the equation: \[(k^{*})^{3}+16(4-k^{*})(q+1)q=0 \tag{101}\] The bifurcation is subcritical for \(k<k^{*}(q)\), while it becomes supercritical for \(k>k^{*}(q)\). Note that this model for \(k=0\) corresponds to the \(q\)-voter model with independence under the quenched approach from Ref. [27]. #### a.2.3 Results Figure 6 illustrates the behavior of the model with the non-symmetric conformity function for (a)-(d) annealed and (e)-(h) quenched dynamics. In the parameter space presented in Fig. 6(a) and 6(e), we can identify two areas separated by the black curve, \(k^{*}(q)\), given by Eqs. (104) and (101). For \(k>k^{*}(q)\), the system exhibits continuous transitions between a phase where one option dominates over the other (i.e., ordered phase for \(\bar{p}<\bar{p}^{*}\)) to a phase without the majoritarian option (i.e., disordered phase for \(\bar{p}>\bar{p}^{*}\)), see Figs. 6(b) and 6(f). At \(k^{*}(q)\), the system still exhibits continuous phase transitions, see Figs. 6(c) and 6(g). However, crossing this curve results in a change of the phase transition type. Consequently, for \(k<k^{*}(q)\), the transitions between ordered and disordered phases are discontinuous, see Figs. 6(d) and 6(h). Figure 6: Behavior of the model with non-symmetric conformity function for (a)-(d) annealed and (e)-(h) quenched dynamics. (a) and (e) Phase diagrams. 
The blue and the white regions correspond to the zones of the parameter space where transitions between an ordered and a disordered phase are discontinuous or continuous, respectively. The letters indicate the parameter regions of the following fixed-point diagrams (b)-(d) and (f)-(h), which present stable (solid lines) and unstable (dashed lines) fixed points for the model with \(q=4\) and (b) \(k=-4\), (c) \(k=k^{*}(q=4)\approx-7.1\), (d) \(k=-10\), (f) \(k=-10\), (g) \(k=k^{*}(q=4)\approx-19.6\), (h) \(k=-30\). Symbols represent the results from the simulations of the model.

## Appendix A Simulations

### Simulation details

In addition to the analytical results discussed in the main text, which correspond to the thermodynamic limit, we simulate the dynamical equations for a large but finite population of \(N=10^{5}\) agents. We consider one time step of this dynamics to have elapsed when \(N\) agents have been updated, or in other words when, on average, all the agents have been updated once, in analogy with the notion of _Monte Carlo step per site_ (MCS/s). In the simulations, we trace the fraction of adopters of the most common option, i.e., \[\alpha=\max\{a,b\}, \tag{10}\] where \(a\) and \(b=1-a\) are the fractions of adopters of the options \(A\) and \(B\), respectively. In the figures, we show the mean value of \(\alpha\), \(\left[\left\langle\alpha\right\rangle_{t}\right]_{s}\). The angle brackets represent the average over time. We discarded the first 900 MCS to let the system reach the stationary state and performed the time average over the next 100 MCS. The square brackets represent the sample average, which was performed over at most 20 independent simulations (in a metastable region, the average is performed over those simulations that ended up in the same phase). In all the simulations, all the agents are initialized with option \(A\). Standard errors are of the order of the marker size. When simulating the quenched dynamics, instead of randomly assigning the learning strategies, which would lead to some fluctuations in \(\bar{p}\) between simulations, we assign them deterministically. We choose the first \(pN\) agents to be individual learners and the rest of them to be social learners. In this way, we have exactly the same value of \(\bar{p}\) in all the simulations that we average over. The problem with the fluctuations can also be overcome by keeping the random assignment but increasing the number of agents in the system, since the fluctuations in \(\bar{p}\) diminish with the system size at a rate of \(1/\sqrt{N}\).

### Source code

The model is implemented in C++ using object-oriented programming. Python and Matlab are used for data analysis and numerical calculations. The code files can be found in the following GitHub repositories:

* [https://github.com/arkadiusz-jedrzejewski/norm-formation-abm](https://github.com/arkadiusz-jedrzejewski/norm-formation-abm),
* [https://github.com/arkadiusz-jedrzejewski/norm-formation-abm-py](https://github.com/arkadiusz-jedrzejewski/norm-formation-abm-py),
* [https://github.com/arkadiusz-jedrzejewski/norm-formation-m](https://github.com/arkadiusz-jedrzejewski/norm-formation-m).
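As a complement to the repositories above, the following is a minimal agent-based sketch of the annealed dynamics with Bernoulli-distributed strategies and the symmetric conformity function (written in Python for illustration; the published implementation is the C++ code linked above, and details such as the random sequential update used here are simplifying assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

def f_I(a, k):  # individual learning probability, Eq. (2) with a_m = 1/2
    return 1.0 / (1.0 + np.exp(-k * (a - 0.5)))

def f_S(x, q):  # symmetric conformity function, Eq. (4)
    return 0.5 * (2 * x) ** q if x < 0.5 else 1.0 - 0.5 * (2 * (1 - x)) ** q

def simulate(N=2_000, p=0.3, q=3, k=-15, mcs=200):
    """Annealed dynamics: each selected agent redraws its learning strategy."""
    opinions = np.ones(N, dtype=int)   # all agents start with option A (coded as 1)
    n_A = N                            # running count of A-adopters
    for _ in range(mcs):               # one MCS corresponds to N single-agent updates
        for _ in range(N):
            i = rng.integers(N)
            a = n_A / N
            if rng.random() < p:       # individual learner: adopt A with prob. f_I(a)
                new = 1 if rng.random() < f_I(a, k) else 0
            elif opinions[i] == 1:     # social learner holding A (see Table 1)
                new = 0 if rng.random() < f_S(1 - a, q) else 1
            else:                      # social learner holding B
                new = 1 if rng.random() < f_S(a, q) else 0
            n_A += new - opinions[i]
            opinions[i] = new
    a = n_A / N
    return max(a, 1 - a)               # fraction adopting the majority option, alpha

print(simulate())                       # a short run for illustration
```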
2309.05972
Self-supervised Extraction of Human Motion Structures via Frame-wise Discrete Features
The present paper proposes an encoder-decoder model for extracting the structures of human motions represented by frame-wise discrete features in a self-supervised manner. In the proposed method, features are extracted as codes in a motion codebook without the use of human knowledge, and the relationship between these codes can be visualized on a graph. Since the codes are expected to be temporally sparse compared to the captured frame rate and can be shared by multiple sequences, the proposed network model also addresses the need for training constraints. Specifically, the model consists of self-attention layers and a vector clustering block. The attention layers contribute to finding sparse keyframes and discrete features as motion codes, which are then extracted by vector clustering. The constraints are realized as training losses so that the same motion codes can be as contiguous as possible and can be shared by multiple sequences. In addition, we propose the use of causal self-attention as a method by which to calculate attention for long sequences consisting of numerous frames. In our experiments, the sparse structures of motion codes were used to compile a graph that facilitates visualization of the relationship between the codes and the differences between sequences. We then evaluated the effectiveness of the extracted motion codes by applying them to multiple recognition tasks and found that performance levels comparable to task-optimized methods could be achieved by linear probing.
Tetsuya Abe, Ryusuke Sagawa, Ko Ayusawa, Wataru Takano
2023-09-12T05:43:13Z
http://arxiv.org/abs/2309.05972v1
# Self-supervised Extraction of Human Motion Structures via Frame-wise Discrete Features

###### Abstract

The present paper proposes an encoder-decoder model for extracting the structures of human motions represented by frame-wise discrete features in a self-supervised manner. In the proposed method, features are extracted as codes in a motion codebook without the use of human knowledge, and the relationship between these codes can be visualized on a graph. Since the codes are expected to be temporally sparse compared to the captured frame rate and can be shared by multiple sequences, the proposed network model also addresses the need for training constraints. Specifically, the model consists of self-attention layers and a vector clustering block. The attention layers contribute to finding sparse keyframes and discrete features as motion codes, which are then extracted by vector clustering. The constraints are realized as training losses so that the same motion codes can be as contiguous as possible and can be shared by multiple sequences. In addition, we propose the use of causal self-attention as a method by which to calculate attention for long sequences consisting of numerous frames. In our experiments, the sparse structures of motion codes were used to compile a graph that facilitates visualization of the relationship between the codes and the differences between sequences. We then evaluated the effectiveness of the extracted motion codes by applying them to multiple recognition tasks and found that performance levels comparable to task-optimized methods could be achieved by linear probing.

Keywords: Human motion analysis, Discrete latent space, Self-supervised learning, Visualization, Self-attention, VQ-VAE

## 1 Introduction

Improved recognition of human behaviors will provide important progress toward realizing advanced technologies, such as human-robot interactions. However, one of the difficulties with action recognition can be traced to the continuity of human motions because even just a few seconds of motion will contain several smoothly connected actions. In order to minimize the effects of this complexity, most existing human action recognition methods use motion data with annotations added for specific motion segments. In efforts aimed at learning segmented motion data, recent research has successfully extracted motion features that are useful for action recognition [1; 2; 3]. These methods, which assume that the given motions have the same level of granularity, seek to identify recurring or similar motion sequence patterns in latent space. The extracted features are usually computed by convolution over the whole motion sequence. However, the effectiveness of such feature representations strongly depends on the granularity of the motion annotations that are applied during the training phase. For example, when a sequence of walking data is given, the entire sequence is converted to a feature that can represent "walking", but its meaning cannot be subdivided in latent space into "one step with the left foot" or "one step with the right foot". Since this fixed-granularity problem is found in the human motion features of all of these methods, their generalization to applications that require different levels of representation granularity is limited. Human motions consist of multiple stages, each of which is influenced by the characteristics of the individual human or the contexts of specific motions.
Therefore, it is necessary to extract representations of multiple spatial and temporal components in order to effectively recognize these motions. Taken further, if it were possible to extract the unique representations of a specific individual, this would be very useful for understanding the characteristics of his or her human motions. For example, this could help to explain the differences in the motions between beginners and experts. However, since each individual possesses unique representations that are not shared among others, the actions of one person are not expected to completely match the shared knowledge of others in a manner similar to natural language. Therefore, first among the three primary issues to be resolved is finding a way to extract a unique representation without using preexisting knowledge. Typically, human motions are expressed as multi-dimensional continuous quantities, such as joint angles at each sampling moment. However, when we consider the case of generating a new motion, specifying all joint angles at each moment is difficult. Therefore, in order to make it feasible to specify such values, representations which are a finite number of components or parameters are required. This is the second representation issue that must be addressed. Since the components that make up a motion have temporal relationships with each other, the recognition of the motion is equivalent to identifying those relationships. In addition, since any human motion will have a relationship with motions that took place several seconds or more previously, the temporal receptive field for the recognition should be wider than the dependency. Furthermore, since it is necessary for the recognition of human motions, the receptive field should be wider than several hundred frames. Therefore, the third issue is determining a recognition method that can be used with a wide receptive field. With the above motivations in mind, our goal is to extract identifiable features that can be used to represent human motions without using preexisting knowledge (annotated motion data). The contributions of the present study are as follows. We show how an intermediate representation of human motion can be generated in latent space based on an encoder-decoder model without using preexisting knowledge (annotated motion data). The representation consists of a finite number of components that are obtained by discretizing the latent space, and the proposed method realizes a wide temporal receptive field by using an attention-based network to extract relationships in a long sequence. ## 2 Related work The process of understanding human behavior is typically categorized into several tasks, such as action recognition and action segmentation. Action recognition is defined as the task of finding areas of correspondence between input data and action labels. In this task, the system basically identifies answers with actions that match the input data. Hence, in supervised approaches, action labels are assigned, and the target actions to be detected are fixed. As examples, previously methods proposed [4; 5; 6] determine actions based on supervised learning directly from video footage used as input data. Another approach by which to determine actions is to use skeletal information acquired by detecting human poses or by using a motion capture system [7; 8; 9]. Some methods based on skeletal information [10; 11; 12; 13; 14; 15] define the problem as a translation between motion and language. 
However, since the cost of obtaining input data, such as videos with action labels, is high, unsupervised approaches, such as pre-training, have been used to learn representations from video footage [16]. In these methods, latent spaces are learned from skeletal information based on self-supervised approaches [17; 1]. Action segmentation is the task of segmenting a temporal sequence of input data into multiple actions. A number of methods [18; 19; 20] determine multiple action types and their boundaries in examined video frames based on supervised learning. Unsupervised approaches to action segmentation that work by using clustering to find the same actions [21; 22; 23] and aligning phases [24] have also been proposed. Separately, since numerous actions are said to have hierarchical structures, which means multi-level action labels can be defined, other previous methods based on supervised learning [25; 26; 27; 28; 29] have proposed the use of fine-grained action labels to explain coarse-level actions. For example, a method proposed in a previous study [30] learns the latent space of sub-actions by means of an unsupervised approach based on clustering. Using skeletal information to learn the representations of human motion based on unsupervised learning is a method that can also be used for other tasks. For example, when examining video footage, the joint angles of the next frame have been predicted using previous frame data [31; 32; 33; 34]. One of the major approaches used to explain human actions is to find relationships between the motions and the words of natural languages. Since words are common knowledge, the parts of an action can be readily understood by users. Hence, various methods of facilitating mutual translations between motions and language have been studied. These approaches can achieve a useful intermediate representation of motion by finding the relationship between a motion and a language. In one example, a motion language model encoded by a hidden Markov model (HMM) was used as an intermediate representation to express the relationships between motions and language [11]. Separately, a context vector encoded by a recurrent neural network (RNN) was used to provide an intermediate representation of motion [12], which allowed human motions to be generated from the input text by learning the intermediate representation as the output of an RNN based on a generative adversarial network (GAN) in [13], while an autoencoder for motion and language that calculates two latent vectors has also been proposed [14]. In the latter method, the difference between the latent vectors is minimized in training and then used as the intermediate representation. Joint embedding of the latent vectors has been used for an autoencoder [15], which reproduces the motion from the joint vector, and the latent vector of the language encoder is only used in inference time. However, in all of these approaches, the intermediate representations of the motion are supervised by the annotations using language labels, which means representing unique characteristics that cannot be expressed by language is difficult. Furthermore, while these supervised or self-supervised methods with annotated (segmented) motion data can extract the motion features for action recognition, the latent space obtained in this manner is mediated by human knowledge that segments the motion data, which means that these methods require motion data with an appropriate level of granularity. 
In order to avoid this problem, we propose a method that can extract feature representations without preexisting human knowledge (segmented motion data with annotation labels). More specifically, the proposed method is based on a variational autoencoder that generates a discrete latent space, such as a vector quantized-variational autoencoder (VQ-VAE) [35] or a dynamical variational autoencoder (dVAE) [36]. Temporal relationships between frames are found based on self-attention, as proposed in the Transformer [37]. The great benefit of a discrete representation is its ability to find the discontinuous points of motion data and thus help in detecting actions in non-segmented motion data. ## 3 Proposed method The proposed method extracts the discrete motion feature of each frame in a long sequence. Once extracted, the motion feature is regarded as a motion code in a motion codebook, which is a set of components used to explain various motions. The purpose of such frame-wise feature extraction is to obtain motion codes independently of existing knowledge. However, since the proposed method does not use segmented motion data, the limited receptive field of a convolutional network is unsuitable. Accordingly, the proposed method uses a self-attention architecture and a vector-quantized framework [35] to find discrete representations among motion data by considering temporal relationships over a wide range of frames. The temporal dependency of human motions is unknown but is considered to be more than several seconds. For example, the receptive field of the network used along the time axis should be several hundred frames wide if the motion is captured at 100 Hz. ### Encoder-decoder with discrete latent space The proposed method generates a discrete latent space that describes the structure of a human motion using a network consisting of an encoder and a decoder combined with a block of clustering latent vectors that are used to extract discrete motion codes. Vector clustering is realized as a quantization process that maps the encoder output to the nearest embedding vector in a motion codebook. In the present study, this quantization is implemented based on VQ-VAE [35], and the encoder and decoder are realized by self-attention layers [37] in order to find relationships between frames. The architecture of the proposed model is shown in Fig. 1. Figure 1: Proposed model consisting of an encoder, a decoder, and vector clustering, which has two outputs: \(\mathbf{z}_{q}\) and \(\bar{\mathbf{z}}_{e}\). The former is a motion code replaced from the encoder output, and the latter is the mean vector of a segment of the same motion code. The input data of a motion consist of \(n_{f}\) frames of \(n_{j}\)-dimensional vectors at each frame, and the encoder converts the input vector \(\mathbf{v}_{i}\) at each frame to a feature vector \(\mathbf{z}_{e}\). The vector clustering has two outputs: one output replaces \(\mathbf{z}_{e}\) with the vector \(\mathbf{z}_{q}\), which is the nearest neighbor in the codebook based on Euclidean distance in the latent space. Note that the codebook itself consists of 512 kinds of embedding vectors \(\mathbf{z}_{q}\) in latent space. The other output replaces an encoded vector with the mean vector \(\bar{\mathbf{z}}_{e}\) of each segment, which consists of consecutive frames of the same motion code \(\mathbf{z}_{q}\). 
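As a concrete illustration of the vector-clustering step just described, the following is a minimal sketch (PyTorch is assumed; the codebook size of 512 and the tensor names follow the text, while everything else is an illustrative simplification rather than the authors' implementation):

```python
import torch

def vector_clustering(z_e: torch.Tensor, codebook: torch.Tensor):
    """z_e: (n_f, D) encoder outputs; codebook: (K, D), e.g. K=512, D=256.

    Returns z_q (nearest codebook vector per frame), z_e_bar (the mean encoder
    output over each run of consecutive frames sharing the same code), and the
    per-frame code indices, matching the two outputs of the model in Fig. 1.
    """
    # Nearest neighbour in Euclidean distance
    dists = torch.cdist(z_e, codebook)           # (n_f, K)
    codes = dists.argmin(dim=1)                   # motion code id per frame
    z_q = codebook[codes]                         # (n_f, D)

    # A segment is a maximal run of identical consecutive codes
    z_e_bar = torch.empty_like(z_e)
    start = 0
    for t in range(1, len(codes) + 1):
        if t == len(codes) or codes[t] != codes[start]:
            z_e_bar[start:t] = z_e[start:t].mean(dim=0, keepdim=True)
            start = t
    return z_q, z_e_bar, codes
```

As in VQ-VAE [35], gradients would normally be passed through the quantizer with a straight-through estimator during training; that detail is omitted from this sketch.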
The difference between \(\mathbf{z}_{q}\) and \(\bar{\mathbf{z}}_{e}\) is considered to be the parameter of a segment that represents variations in the same cluster. Since \(\mathbf{z}_{q}\) does not contain this parameter, we refer to it as the default parameter. The decoder reconstructs the outputs \(\mathbf{v}_{o}\) and \(\bar{\mathbf{v}}_{o}\) using \(\mathbf{z}_{q}\) and \(\bar{\mathbf{z}}_{e}\), respectively. The outputs are frame-wise human motions that correspond to the input. If the input is motion, then the model is designed as an autoencoder. However, the input can also be other modalities, such as first-person video captured by the subject. In the present paper, it is assumed that a high-dimensional video input has already been encoded frame by frame into a feature vector, and the temporal relationships are then extracted by the proposed model. ### Layered causal self-attention for a long sequence The input sequence length can be more than a thousand frames if, for example, it is captured for more than one minute at 30 frames per second (fps). Because identifying every combination of frames by self-attention is not feasible due to time and memory limitations, the attention matrix is only calculated for a portion of a sequence even if the multiple self-attention layers proposed in [37] are used. Therefore, for example, the attention is calculated for \(M(<N)\) frames as the attention width, even though the sequence has a total of \(N\) frames. In this section, the use of layered causal self-attention is proposed to overcome this limitation. In the human motion extraction task, it can be assumed that the output motion is causal only with respect to the past input and output. However, since simply masking future frames will not decrease computational costs, the proposed approach only calculates the attention between each frame and its preceding \(M-1\) frames. The output \(\mathbf{z}\) of self-attention is calculated using the following equation: \[\mathbf{z}=\text{softmax}\left(\frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{D}}\right)\mathbf{V} \tag{1}\] where \(\mathbf{Q},\mathbf{K}\), and \(\mathbf{V}\) are the query, key, and value vectors, and \(D\) is the dimension of the feature vector. Fig. 2(a) is an input sequence and Fig. 2(b) is the attention mask for each frame. The dimensions are the batch size \(B\), the number of value vectors, and the number of query/key vectors. The vectors are rearranged so that the attention is calculated between a single value vector and the vectors of the preceding \(M-1\) frames, as shown in Fig. 2(c). The value vector is the feature of the last frame of each row in Fig. 2(c), and output \(\mathbf{z}\) is obtained with attention width \(M\) for every frame except the beginning of the sequence. If the number of attention layers is \(N_{SA}\), then the total receptive field size is \(N_{SA}M\) frames for every frame. For frames with fewer than \(M\) preceding frames, the query and key vectors are padded with zero vectors. Since the attention is calculated for the last frame, the positional encoding is given as a position relative to the value vector frame. Therefore, the positional encoding values are the same in each column in Fig. 2(c). In the present study, the method used to calculate the position is the same as that used in the original Transformer. The output is rearranged again, as shown in Fig. 2(e), and then used as input for the feed-forward network in the same manner as proposed in the Transformer. 
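The attention pattern described above can also be expressed with a banded causal mask. The sketch below (PyTorch, hypothetical tensor names) reproduces the windowed attention of Eq. (1) with width \(M\); note that it materializes the full \(N\times N\) score matrix and therefore does not include the rearrangement trick of Fig. 2 that the authors use to keep memory cost proportional to \(M\):

```python
import torch
import torch.nn.functional as F

def windowed_causal_attention(q, k, v, M: int):
    """q, k, v: (N, D) per-frame query/key/value vectors.
    Each frame attends only to itself and its M-1 preceding frames."""
    N, D = q.shape
    scores = q @ k.T / D ** 0.5                        # Eq. (1) logits, (N, N)
    idx = torch.arange(N)
    delta = idx[:, None] - idx[None, :]                # query frame - key frame
    # allowed iff 0 <= delta <= M-1 (causal, within the attention width)
    mask = (delta < 0) | (delta >= M)
    scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v               # (N, D)
```

Stacking \(N_{SA}\) such layers widens the effective receptive field to \(N_{SA}M\) frames, which is the point of layering the causal attention.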
### Loss for extracting motion codes shared by sequences If the network is optimized to reconstruct the output \(\mathbf{v}_{o}\) without using any constraint to share motion codes, then each sequence will be reconstructed using unique codes. However, if the latent space forms a structure of human motions, a latent vector extracted by the proposed method is regarded as a motion code that is expected to be shared by multiple sequences. Hence, the following loss \(L\) is proposed to encourage motion code sharing: \[L=\sum_{k}^{n_{f}}\alpha L_{\text{reconst}}+L_{\text{latent}} \tag{2}\] Figure 2: Attention is calculated only for preceding frames. By rearranging the vectors, a wide receptive field is realized with a narrow attention width for each layer. The sizes indicate the number of vectors or vector dimensions: Batch \(\times\) Value \(\times\) Query/Key \(\times\) Feature where \(L_{\text{reconst}}\) and \(L_{\text{latent}}\) are the losses of reconstruction and latent space, respectively, for the \(k\)-th frame of a motion sequence of \(n_{f}\) frames, and \(\alpha\) is a user-defined weight of the reconstruction loss, as are \(\beta\) and \(\gamma\) in equations presented later. #### 3.3.1 Reconstruction loss The reconstruction loss is calculated with the parameter of each segment and the default parameter. In the proposed method, it is assumed that the default parameter of a motion code indicates temporally local motion. The loss with the constraint of local motion is defined as follows: \[L_{\text{reconst}}=L_{\text{P}}+L_{\text{V}} \tag{3}\] \[L_{\text{P}}=\parallel\mathbf{v}_{ik}-\bar{\mathbf{v}}_{ok}\parallel^{2} \tag{4}\] \[L_{\text{V}}=\parallel\hat{\mathbf{v}}_{ik}-\hat{\mathbf{v}}_{ok}\parallel^{2} \tag{5}\] where \(L_{\text{P}}\) is the reconstruction loss with the parameter of each segment, and \(L_{\text{V}}\) is that of a local motion. In addition, \(L_{\text{P}}\) is calculated as the difference of the output vectors, and \(L_{\text{V}}\) is calculated as the difference of the temporal derivative of these output vectors. If the output vectors refer to the position, the default parameter constraint is imposed on the velocity of the vectors. #### 3.3.2 Latent space loss The second part of loss \(L\) is the latent space loss, which is defined as follows: \[L_{\text{latent}}=L_{\text{VC}}+L_{\text{tV}} \tag{6}\] \[L_{\text{VC}}=\parallel\text{sg}[\mathbf{z}_{qk}]-\mathbf{z}_{ek}\parallel^{2}+\beta \parallel\mathbf{z}_{qk}-\text{sg}[\mathbf{z}_{ek}]\parallel^{2} \tag{7}\] \[L_{\text{tV}}=\gamma\parallel\hat{\mathbf{z}}_{qk}\parallel_{1} \tag{8}\] where \(L_{\text{VC}}\) is the loss of vector clustering, and \(L_{\text{tV}}\) is that of the temporal change of quantized latent vectors. Here, \(L_{\text{VC}}\) is based on the vector quantization [35] used to make the encoded vector \(\mathbf{z}_{ek}\) easy to quantize, and \(\mathbf{z}_{qk}\) is an entry in the motion codebook. The function sg is the stop-gradient operator that is defined as an identity at the forward computation time and has zero derivatives. Moreover, \(L_{\text{tv}}\) is the constraint that ensures the same motion code continues for as long as possible and is realized by minimizing the total variation defined by \(L^{1}\) norm. #### 3.3.3 Restricting motion codes Let \(S\) be a subset of input sequences that includes all types of motion in the dataset, and let \(Z_{q}\) be the subset of motion codes used to encode the sequences in \(S\). 
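To make the training objective concrete, the following is a compact sketch of the losses in Eqs. (2)-(8), assuming PyTorch tensors with the names used above; the stop-gradient is realized with `detach()` as in VQ-VAE, and everything else is an illustrative simplification rather than the authors' code:

```python
import torch

def motion_code_loss(v_i, v_o_bar, v_o, z_e, z_q, alpha, beta, gamma):
    """v_i: input motion (n_f, n_j); v_o_bar: output decoded from the
    segment-mean vectors; v_o: output decoded from the motion codes;
    z_e, z_q: encoder outputs and quantized codes, each (n_f, D)."""
    # Eq. (4): reconstruction with the parameter of each segment
    L_P = ((v_i - v_o_bar) ** 2).sum(dim=1)
    # Eq. (5): temporal-derivative (velocity) term for the default parameter
    dv_i = v_i[1:] - v_i[:-1]
    dv_o = v_o[1:] - v_o[:-1]
    L_V = torch.zeros_like(L_P)
    L_V[1:] = ((dv_i - dv_o) ** 2).sum(dim=1)
    L_reconst = L_P + L_V                                      # Eq. (3)

    # Eq. (7): vector-clustering loss with stop-gradient (sg -> detach)
    L_VC = ((z_q.detach() - z_e) ** 2).sum(dim=1) \
         + beta * ((z_q - z_e.detach()) ** 2).sum(dim=1)
    # Eq. (8): total variation of the quantized latents along time
    L_tV = torch.zeros_like(L_P)
    L_tV[1:] = gamma * (z_q[1:] - z_q[:-1]).abs().sum(dim=1)
    L_latent = L_VC + L_tV                                     # Eq. (6)

    return (alpha * L_reconst + L_latent).sum()                # Eq. (2)
```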
In this case, all sequences in the input dataset must be reconstructed using the subset \(Z_{q}\). Since different subsets \(Z_{qj}(j=1,\ldots,J)\) can be taken from the input dataset with \(J\) sequences, the loss is minimized by restricting the codebook to a subset \(Z_{qj}\) that is randomly chosen. In the first epoch, all codes are used for encoding, and one of the subsets is used to encode each sequence after the second epoch. Restricting the set of motion codes ensures that the codes are shared. ### Visualizing the structure of motion codes The decoder generates human motion from motion codes. Therefore, the attention weight for a frame indicates the frames referenced during generation. By considering multiple layers of attention and residual connections, the weight matrix \(\mathbf{W}\) is calculated from the attention weight \(\mathbf{W}_{i}(i=1,\ldots,N_{SA})\) of the \(i\)-th decoder attention layer as follows: \[\mathbf{W}=(\mathbf{I}+\mathbf{W}_{N_{SA}})\cdots(\mathbf{I}+\mathbf{W}_{2})(\mathbf{I}+\mathbf{W}_{1}) \tag{9}\] where \(\mathbf{I}\) is the identity matrix. Since the length \(N\) of input sequences is longer than the number of frames \(M\) for the attention layers, \(\mathbf{W}_{i}\) is calculated by concatenating the attention weights of partial sequences. High-weight frames, which are important for reconstructing motion, can be considered as keyframes. We have observed that the quantized latent space defined using motion codes generates high attention weights more sparsely, as compared to the continuous latent space defined by the basic use of a variational autoencoder (VAE) [38]. Fig. 3 shows an example of the sum of attention weights for each query/key frame. These two graphs show the same part of a sequence decoded by a basic VAE model (Fig. 3(a)) and by the proposed model (Fig. 3(b)). This example is one of the sequences in the JHU-ISI gesture and skill assessment working set (JIGSAWS) [39], and the background color indicates the action labels given in the dataset. Although the distribution of the weights is dense in Fig. 3(a), the proposed model generates a sparse distribution in Fig. 3(b). Since high weights can be observed for the frames close to the label boundaries, the weights appear to have some semantic meaning. In order to extract sparse keyframes, we propose counting the top-1 frame weight for each frame. Fig. 4(a) shows the top-1 frame weight for each value vector, and Fig. 4(b) is the number of top-1 frames larger than one, which are used as keyframes. The motion code transitions assigned to the keyframes are shown in Fig. 4(c). By counting these transitions, the relationships between them can be used to form a graph, as shown in Fig. 5. The positions of the motion codes within the graph are calculated by the Fruchterman-Reingold force-directed algorithm [40] implemented in NetworkX [41]. The motion codes that correspond to three of the annotation labels are enclosed by dotted lines. The labels that occur subsequently (G2 and G3) share the motion codes, while the labels that occur at the different phases of the motion (G4) use separate motion codes. ## 4 Experiments As a dataset that contains multiple components in each sequence of human motion, the JIGSAWS dataset [39], which contains video and kinematic data for robotic surgical tasks performed by operators with different skill levels, is first used in our experiments. In the present study, the suturing task, with 39 trials produced using eight subjects, is evaluated. 
For annotation purposes, 10 labels are used to describe actions at each frame, and the skill level (novice, intermediate, or expert) is given for each subject. The split of training and test sequences is the same as that of the setting in [42] for cross-validation. In the kinematic data provided in JIGSAWS, six-dimensional poses for the grippers and the gripper angles of the two robot arms at each frame are used as the input/output data, which means that 14 variables are calculated for each frame. Each recorded sequence is between one and three minutes long and is captured at 30 fps at \(640\times 480\) pixels. First, the kinematic data are used as the input of the proposed method. The hyperparameters of the model in the present paper are as follows. The number of attention layers is \(N_{SA}=6\) for both the encoder and decoder. The motion codebook has 512 vectors of \(D(=256)\) dimensions, and the attention widths are \(M=100\) and \(M=10\) for the encoder and decoder, respectively. Figure 3: Sum of attention weights for each query/keyframe: (a) basic variational autoencoder (VAE) model, (b) proposed model. Differences between the subjects can be visualized by comparing their motion code structures, which are shown in Fig. 6. Here, half of the motion codes are shared by both subjects, and the others are only used by one of the subjects, which indicates that the motion codes can express both the similarities and uniqueness of the subjects. ### Evaluating the latent space by linear probing of multiple tasks For quantitative evaluation of the latent space generated by the proposed method, the task of recognizing annotated labels was tested by using the motion codes as the input. The purpose was to evaluate whether the generated motion codes, which are trained by reconstructing the output vectors, contain useful information. Since fine-tuning the trained network to a specific task is not an appropriate method by which to evaluate the usefulness of motion codes, linear probing with a simple linear layer, in which the backbone network that generates the latent space is fixed, was applied. Since the motion codes have sufficient structure to allow them to reconstruct human motion, the generated motion codes are expected to be applicable to multiple tasks without optimization to specific tasks. In the present study, two tasks, action segmentation and skill classification, are tested with the JIGSAWS dataset. Figure 4: (a) Top-1 frames for each value vector, (b) Number of times counted as top-1 attention frames, which is used to extract keyframes, (c) Motion code transitions assigned to the keyframes. The former task assigns action labels to each frame, and the latter involves classifying each sequence into one of the skill levels. The head blocks are trained for these tasks by a single linear layer. As the baseline methods, the network for skill classification by Ismail et al. [42] and temporal convolutional networks (TCNs) [43] for action segmentation are compared. As shown in Fig. 7, the backbone and head blocks for the designed tasks are trained end-to-end for each method, and the head for the other task is trained by linear probing. Except for the input dimension, the head block architectures are the same for the three methods. The input vector of the linear layers is the quantized feature vector of the motion codes. The linear layer of action segmentation uses a part of the sequence as the input and is implemented as a 1D convolution. 
The 1D convolution of the head for action segmentation uses 500 frames around each frame. Figure 5: Graph of keyframe motion codes is constructed by the transition. The numbers indicate the IDs of motion codes. The relationship between the annotation labels is visualized in the graph Figure 6: Keyframe motion code graphs for two subjects for different sets of motion codes Tables 1 and 2 show the quantitative results of action segmentation and skill classification, respectively. The accuracy is the percentage of correctly labeled frames, and the edit score is the segmental edit distance [43] to measure the correctness of the temporal ordering of actions. The micro average accuracy and the macro average recall [44] are computed as the average of total correct predictions across all classes and the mean of true positive rates for each class, respectively. The compared methods that are optimized to one of the two tasks show good results for the optimized tasks, but the results for the other task by linear probing are degraded. Although the proposed method is not optimized for each task, the results are comparable to the method optimized for each task. This proves that the motion codes extract effective information to understand the temporal structure of motion and to explain the static characteristics between motions. \begin{table} \begin{tabular}{c|c c} \hline & Accuracy & Edit score \\ \hline Ismail 2018[42] & 64.9 & 55.9 \\ TCN[43] & 80.3 & 85.6 \\ MotionCode (Proposed) & 82.6 & 65.7 \\ \hline \end{tabular} \end{table} Table 1: Results of action segmentation for JIGSAWS kinematic inputs \begin{table} \begin{tabular}{c|c c} \hline & Micro average & Macro average \\ & accuracy & recall \\ \hline Ismail 2018[42] & 99.4 & 99.6 \\ TCN[43] & 59.0 & 46.7 \\ MotionCode (Proposed) & 94.9 & 94.9 \\ \hline \end{tabular} \end{table} Table 2: Results of skill classification for JIGSAWS kinematic inputs Figure 7: Evaluation of motion codes by linear probing with JIGSAWS kinematic inputs ### Evaluation with video/3D skeleton inputs #### Extracting motion codes from video The next experiment is extracting motion codes from the video in the JIGSAWS dataset, which means that the output modality is different from that of the input. In this experiment, each frame in a video is encoded as a feature frame by frame using an image encoding block, and the feature is used as the input of the proposed encoder-decoder model, as shown in Fig. 8. The image encoding is implemented by Vision transformer [45], and the parameters are fine-tuned to predict the kinematic data of each frame. The dimension of the feature vector used as input is 768. Since no implementation is available to test the tasks of action segmentation and skill classification from video, the proposed method is compared with the methods tested under the same condition. The results obtained by MsM-CRF [44] and 3D ConvNet [46] are shown for comparison in Tables 3 and 4 for action segmentation and skill classification, respectively. Although the recognition by the proposed method is linear probing without fine-tuning, the results of the proposed method are comparable with those of the methods that are optimized for the respective tasks. 
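The linear-probing protocol described above (a frozen backbone with a single linear head, implemented as a 1D convolution over roughly 500 frames for the segmentation task) can be sketched as follows; the method name `encode_quantized` and the default argument values are hypothetical placeholders rather than the authors' interface:

```python
import torch
import torch.nn as nn

class SegmentationProbe(nn.Module):
    """Linear probe for frame-wise action segmentation on top of a
    frozen motion-code backbone (encoder + vector clustering)."""

    def __init__(self, backbone, feat_dim=256, n_classes=10, window=501):
        super().__init__()
        self.backbone = backbone.eval()
        for p in self.backbone.parameters():
            p.requires_grad = False                       # backbone stays fixed
        # single linear layer realized as a 1D convolution over ~500 frames
        self.head = nn.Conv1d(feat_dim, n_classes, kernel_size=window,
                              padding=window // 2)

    def forward(self, motion):                            # motion: (B, n_f, n_j)
        with torch.no_grad():
            z_q = self.backbone.encode_quantized(motion)  # (B, n_f, D), assumed
        logits = self.head(z_q.transpose(1, 2))           # (B, C, n_f)
        return logits.transpose(1, 2)                     # (B, n_f, C)
```

Only the parameters of `self.head` are updated during probing, so the quality of the predictions directly reflects the information content of the frozen motion codes.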
\begin{table} \begin{tabular}{c|c c} \hline & Micro average & Macro average \\ & accuracy & recall \\ \hline 3D ConvNet [46] & 100 & 100 \\ MotionCode (Proposed) & 94.9 & 96.5 \\ \hline \end{tabular} \end{table} Table 4: Results of skill classification for JIGSAWS video inputs Figure 8: Evaluation of motion codes by linear probing with JIGSAWS video inputs #### Extracting motion codes from a 3D skeleton The next experiment is extracting the motion codes from a 3D skeleton dataset (HuGaDB) [47], as shown in Fig. 9. The dataset contains the motions of the lower half of the body, such as walking, taking stairs up or down, and sitting down, with segmentation and annotation. The input skeleton consists of six joints with a three-axis accelerometer and three-axis gyroscope data at each joint. The motions are captured at 60Hz for 18 subjects, and their lengths are 300 frames to 12,000 frames. The split of training and test sequences is the same as the setting in [48]. The results are evaluated by accuracy and F1@50. The former is sample-wise accuracy, as described above, and the latter is the F1-score of classifying segments by 50% intersection over union (IoU) overlap with respect to the corresponding expert annotation [27]. Table 5 shows the results compared with the methods tested in [48]. The accuracy of the proposed method is comparable to those of the other methods that optimized for the segmentation task, although the F1@50 score is lower than those of the others, which is the case because no device is provided to avoid over-segmentation in linear probing, as is the edit score in Table 1. ### Ablation study #### Generating motion codes without restriction Restricting motion codes in the training introduced in Section 3.3.3 is expected to encourage motion code sharing. Fig. 10 shows a comparison of the results \begin{table} \begin{tabular}{c|c c} \hline & Accuracy & F1@50 \\ \hline Bi-LSTM & 86.1 & 81.5 \\ TCN & 88.3 & 56.8 \\ ST-GCN & 88.7 & 67.7 \\ MS-TCN & 86.8 & 89.9 \\ MS-GCN & 90.4 & 93.0 \\ MotionCode (Proposed) & 87.5 & 58.5 \\ \hline \end{tabular} \end{table} Table 5: Results of action segmentation for the 3D skeleton dataset (HuGaDB) Figure 9: Evaluation of motion codes by linear probing with 3D skeleton inputs of training with and without restriction. In the restricted case, most of the motion codes used by subjects C and F are shared, which indicates that the motions are translatable to each other between the subjects. On the other hand, in the non-restricted case, most of the motion codes are not shared between subjects C and F. The reconstruction loss decreases more easily by using different motion codes than by sharing motion codes, however, using split codes is not desirable for translatability. The result shows the need for restrictions to share motion codes between subjects. #### 4.3.2 Effect of the attention width of the decoder Attention width \(M\) of the decoder attention layers defines how many preceding frames are referred to generate motion from motion codes. The number of reference frames is expected to affect the frame range of high attention weight to a keyframe. In this experiment, how the sequence of extracted motion codes changes is tested if the reference frames are wide. The attention width \(M\) is changed to 100 while \(M=10\) in Fig. 4. Fig. 11 shows the entire sequence of keyframe motion codes with (a) \(M=10\) and with (b) \(M=100\). 
Since the total attention widths of six attention layers are 60 and 600, respectively, the attention is concentrated on sparse keyframes and the number of keyframes is reduced in the latter case. Since the boundaries between different motion codes are still close to those of the annotated labels, the motion codes have a relationship to the understanding of motion by humans. Tables 6 and 7 shows the results of action segmentation and skill classification by attention width of Figure 10: Motion code graphs for the results trained with / without restricting motion codes the decoder, respectively. Since the performance of action segmentation with \(M=100\) is similar to the case with \(M=10\), the motions can be recognized in the case of sparse keyframes. On the other hand, the result of skill classification is degraded with \(M=100\) from the case with \(M=10\). The reason is considered to be because, as shown in Fig. 12, in the case of \(M=100\), most motion codes are shared by subjects compared to the case of \(M=10\). Consequently, the granularity of motion codes can be controlled by the parameter of the attention width, but the side effect of sharing motion codes is an issue to be investigated in a future study. Figure 11: Transition of keyframe motion codes with (a) \(M=10\) and (b) \(M=100\) Figure 12: Motion code graphs with \(M=10\) and \(M=100\) ## 5 Conclusion In the present paper, we proposed an encoder-decoder model that extracts frame-wise motion codes as discrete components in order to provide intermediate representations of human motions. The motion codes are extracted in a self-supervised manner without using any manual annotations. We found that generating a discrete representation contributes to extracting sparse keyframes and visualizing the relationship between the components. We then evaluated the effectiveness of the motion codes by applying them to multiple recognition tasks. Since the motion codes extracted by the proposed method yield results comparable to those that have features optimized by supervised learning, the results of the present study show that the motion codes contain sufficient information to effectively understand human motions. One of the issues is optimizing the granularity of the motion codes for various tasks. When the attention width is narrow, the granularity is smaller than that of annotation labels, which makes it difficult to find one-to-one correspondence with a user-defined label. Furthermore, since various levels of granularity can be considered when attempting to explain human behavior, one of our future areas of study will be to generate a hierarchical structure of motion codes. More specifically, since structures can be extracted in a self-supervised manner from the dataset used, the goal will be to construct a hierarchical motion code structure without the need for hand-crafted level explanations of human behavior. In addition, since the advantages of the sparse and discrete features make them easily specified by users, another direction of future study will be to use motion codes to generate new motions that may be difficult to explain by user-defined labels. Such motions can be expected to be useful for robotics and computer graphics applications. Acknowledgments.This work was supported by JSPS JP22H00545, JP22H05002 and NEDO JPNP20006 (New Energy and Industrial Technology Development Organization) in Japan. 
\begin{table} \begin{tabular}{c|c c} \hline Attention width \(M\) & Accuracy & Edit score \\ \hline 10 & 82.6 & 65.7 \\ 100 & 82.7 & 64.3 \\ \hline \end{tabular} \end{table} Table 6: Results of action segmentation for JIGSAWS kinematic inputs based on the attention width of the decoder \begin{table} \begin{tabular}{c|c c} \hline \hline Attention width \(M\) & \begin{tabular}{c} Micro average \\ accuracy \\ \end{tabular} & \begin{tabular}{c} Macro average \\ recall \\ \end{tabular} \\ \hline 10 & 94.9 & 94.9 \\ 100 & 88.5 & 88.2 \\ \hline \end{tabular} \end{table} Table 7: Results of skill classification for JIGSAWS kinematic inputs based on the attention width of the decoder ## Declarations **Conflict of interest** The authors declare that they have no conflict of interest. ## Appendix A Keyframe motion code graph In our experiments, we tested three types of input data. Figs. 13, 14, 15 show the keyframe motion code graphs of all individual subjects for JHU-ISI gesture and skill assessment working set (JIGSAWS) kinematic inputs, JIGSAWS video inputs and 3D skeleton inputs (HuGaDB), respectively. As described in the main manuscript, there are differences in the codes used between the subjects. Online Resource 1 (ESM_1.mp4) shows the synchronized visualization of the input video and the keyframe motion codes for the two sequences of the JIGSAWS kinematic inputs, Suturing_C001 of Subject C and Suturing_F001 of Subject F. One of the clear differences is the IDs of motion codes used for the annotation, "Pushing needle through tissue". The IDs, 49 and 236, are used by one of the two subjects, but not by the other subject. The difference occurs repeatedly, and the reason for the difference is considered to be caused by the difference of the gripper pose. Since we do not have expert knowledge of robotic surgery, we cannot tell whether it comes from the skill or habit of a subject, but the similarities and uniqueness are visualized without the knowledge. Online Resource 2 (ESM_2.mp4) shows a comparison of changing the decoder attention width. If the attention width is large (\(M=100\)), the motion is represented by a smaller number of keyframe motion codes than in the case of small attention width (\(M=10\)). Since each segment becomes long, the granularity of the motion codes is coarse. The control of the granularity is one of our future areas of study. Online Resource 3 (ESM_3.mp4) shows the synchronized visualization of the input video and the keyframe motion codes for the two sequences of the JIGSAWS video inputs, Suturing_C001 of Subject C and Suturing_F001 of Subject F. The difference between subjects C and F can be also observed, even if the input is changed to videos. For example, the codes used in the annotation, "Pushing needle through tissue", are different. ## Appendix B Action segmentation Fig. 16 gives a visual overview of the action segmentation results for JIGSAWS kinematic input. The left-hand column in the figure shows the results of the test split with the highest accuracy in quantitative evaluation, whereas the right-hand column shows the results of the test split of the worst case. The action segmentation is accomplished with high quality for most of the sequences. The sequences Suturing_H001 and Suturing_I001 are examples of the worst case. 
The reason for their difference from the ground truth is considered to be due to insufficient data because "Loosening more suture" appears in only one sequence in this training split (and appears in only three sequences in the total dataset). In addition, since the subjects repeatedly try and fail the operation in the sequences of Suturing_B001 and Suturing_G001, over-segmentation occurs in these cases and results in low edit scores. Fig. 17 gives a visual overview of the action segmentation results for JIGSAWS video input. The left-hand column in the figure shows the results for the test split with the highest accuracy in quantitative evaluation, whereas the right-hand column shows the results for the test split of the worst case. The segmentation quality of the video input is slightly more accurate than the result for the kinematic input. The results of video input are believed to be more accurate because the input and output are close to the condition of annotation, which is given by watching videos. The difference from the ground truth is considered to be due to the same reason in the case of kinematic input, insufficient training examples and over-segmentation of repeated trials in an operation. Fig. 18 gives a visual overview of the action segmentation results for HuGaDB. The sequences in the figure are from the test dataset, excluding duplicate combinations of actions. The action segmentation is accomplished with high accuracy for most of the sequences. Although the proposed method is based on reconstructing acceleration and angular velocity, these factors do not have significant differences between certain annotations, for example, between "Walking" and "Going up/down" and between "Standing" and "Up/down by elevator", which are considered to be the reason for the incorrect classification found in 02_00, 03_05, and 04_13. Figure 13: Keyframe motion code graphs for all subjects. The kinematic data are used as input
2305.00507
(pseudo)Scalar mesons in a self-consistent NJL model
In this study, we investigate the mass spectrum of $\pi$ and $\sigma$ mesons at finite chemical potential using the self-consistent NJL model and the Fierz-transformed interaction Lagrangian. The model introduces an arbitrary parameter $\alpha$ to reflect the weights of the Fierz-transformed interaction channels. We show that when $\alpha$ exceeds a certain threshold value, the chiral phase transition transforms from a first-order one to a smooth crossover, which is evident from the behaviors of the chiral condensates and meson masses. Additionally, at high chemical potential, the smaller the value of $\alpha$, the higher the masses of the $\pi$ and $\sigma$ mesons become. Moreover, the Mott and dissociation chemical potentials both increase with the increase in $\alpha$. Thus, the meson mass emerges as a valuable experimental observable for determining the value of $\alpha$ and investigating the properties of the chiral phase transition in dense QCD matter.
Xiaozhu Yu, Xinyang Wang
2023-04-30T15:25:52Z
http://arxiv.org/abs/2305.00507v2
# (pseudo)Scalar mesons in a self-consistent NJL model ###### Abstract In this study, we investigate the mass spectrum of \(\pi\) and \(\sigma\) mesons at finite chemical potential using the self-consistent NJL model and the Fierz-transformed interaction Lagrangian. The model introduces an arbitrary parameter \(\alpha\) to reflect the weights of the Fierz-transformed interaction channels. We show that when \(\alpha\) exceeds a certain threshold value, the chiral phase transition transforms from a first-order one to a smooth crossover, which is evident from the behaviors of the chiral condensates and meson masses. Additionally, at high chemical potential, the smaller the value of \(\alpha\), the higher the masses of the \(\pi\) and \(\sigma\) mesons become. Moreover, the Mott and dissociation chemical potentials both increase with the increase in \(\alpha\). Thus, the meson mass emerges as a valuable experimental observable for determining the value of \(\alpha\) and investigating the properties of the chiral phase transition in dense QCD matter. ## I Introduction Exploring the properties of strongly interacting matter is a fundamental question in high energy nuclear physics. In particular, from a theoretical point of view, it is very important to study the breaking and restoration of the chiral symmetry in order to understand such kind of matter. As we know, the (approximate) chiral symmetry is believed to be a good symmetry in the light quark sector. Unfortunately, the perturbative method becomes unavailable for quantum chromodynamics (QCD) in low energy regime, since the strong coupling constant is no longer small enough. Besides, the lattice QCD can not handle the numerical calculations at finite chemical potential because of the famous sign problem. Hence, the effective theories and models are needed to investigate the QCD matter. Especially, based on the chiral symmetry and chiral symmetry breaking of QCD, the Nambu-Jona-Lasinio (NJL) model [1; 2] is one of the most useful tools to study the properties of strongly interacting matter, such as the dynamical breaking/restoration of the chiral symmetry and the masses of light mesons. One of the uncertainties of the NJL model is the way of dealing with the mean field approximation, and this issue has been well emphasized for a few decades [3]. Mathematically, the Fierz transform of NJL model Lagrangian should be of equal importance, as compared with the original NJL model Lagrangian, but they are treated unequally when applying the mean field approximation. Therefore, we rewrite the Lagrangian as \(\mathcal{L}_{R}=(1-\alpha)\mathcal{L}+\alpha\mathcal{L}_{F}\) by introducing an arbitrary weighting parameter \(\alpha\), where \(\mathcal{L}\) is the original NJL Lagrangian and \(\mathcal{L}_{F}\) is the Fierz transform of \(\mathcal{L}\). It has been discussed that there are no physical requirements for the choice of \(\alpha\) value. And the value of \(\alpha\) could be determined by astronomy observations, i.e., the properties of compact stars [4; 5; 6; 7] which impose constraints on the QCD equations of state. In this paper, we will discuss a possible alternative way of predicting the value of \(\alpha\) by the properties of light mesons. As we know, the mass spectra of pseudoscalar meson \(\pi\) and scalar meson \(\sigma\) have been studied in the NJL type model since a few dozen years ago [8]. 
Since \(\pi\) and \(\sigma\) mesons are chiral partners, the mass difference between them carries the information of the chiral symmetry breaking and restoration. Therefore, apart from the indirect measurement of the equation of state of compact stars, the measurement of the meson masses in heavy ion collision experiments is an alternative method to extract the information of the weighting parameter \(\alpha\), as well as that of the chiral phase transition in dense QCD matter. This paper is organized as follows. We begin with the general formalism in Sec. II, and then the corresponding numerical results are presented in Sec. III. Finally, the conclusions are given in Sec. IV. ## II Formalism The redefined Lagrangian--the combination of the original Lagrangian \(\mathcal{L}\) in the two-flavor NJL model and the corresponding Fierz transformed Lagrangian \(\mathcal{L}_{F}\)--is given by [4]: \[\mathcal{L}_{R}=(1-\alpha)\mathcal{L}+\alpha\mathcal{L}_{F}, \tag{1}\] where \[\mathcal{L}=\bar{\psi}(i\not{\partial}-m)\psi+G\left[(\bar{\psi}\psi)^{2}+\left(\bar{\psi}i\gamma^{5}\tau\psi\right)^{2}\right] \tag{2}\] and \[\mathcal{L}_{F} = \bar{\psi}(i\not{\partial}-m)\psi+\frac{G}{8N_{c}}\left[2(\bar{\psi}\psi)^{2}+2\left(\bar{\psi}i\gamma^{5}\tau\psi\right)^{2}-2\left(\bar{\psi}\tau\psi\right)^{2}-2\left(\bar{\psi}i\gamma^{5}\psi\right)^{2}\right. \tag{3}\] \[\left.-4\left(\bar{\psi}\gamma^{\mu}\psi\right)^{2}-4\left(\bar{\psi}i\gamma^{\mu}\gamma^{5}\psi\right)^{2}+\left(\bar{\psi}\sigma^{\mu\nu}\psi\right)^{2}-\left(\bar{\psi}\sigma^{\mu\nu}\tau\psi\right)^{2}\right],\] along with the quark current masses \(m=\mathrm{diag}(m_{u},m_{d})\). By applying the mean field approximation to the Lagrangian and dropping the irrelevant part, we get the effective Lagrangian \[\left\langle\mathcal{L}_{R}\right\rangle_{eff} = \bar{\psi}(i\not{\partial}-M)\psi+G\left(1-\alpha+\frac{\alpha}{4N_{c}}\right)\sigma^{2}+\frac{\alpha G}{2N_{c}}n^{2}, \tag{4}\] where \(\sigma=\left\langle\bar{\psi}\psi\right\rangle\) is the quark condensate, and the quark number density is \(n=\left\langle\psi^{\dagger}\psi\right\rangle\). Besides, we have introduced the constituent quark mass \(M=m-2G\left(1-\alpha+\frac{\alpha}{4N_{c}}\right)\sigma\) and the renormalized chemical potential \(\mu_{r}=\mu-\frac{\alpha G}{N_{c}}n\). Thus, the corresponding thermodynamic potential density takes the form \[\Omega = -\frac{T}{V}\ln Z \tag{5}\] \[= G\left(1-\alpha+\frac{\alpha}{4N_{c}}\right)\sigma^{2}-\frac{\alpha G}{2N_{c}}n^{2}\] \[-\frac{N_{c}N_{f}}{\pi^{2}}\int_{0}^{\Lambda}dp\,p^{2}\left\{E(M,p)+T\ln\left[1+\exp\left(-\frac{E(M,p)+\mu_{r}}{T}\right)\right]+T\ln\left[1+\exp\left(-\frac{E(M,p)-\mu_{r}}{T}\right)\right]\right\}.\] Here, the energy dispersion relation is \(E(M,p)=\sqrt{M^{2}+p^{2}}\), and the Fermi-Dirac distribution functions are \[n(p,\mu)=\frac{1}{1+\exp\left(\frac{E(M,p)-\mu}{T}\right)},\hskip 28.452756pt\bar{n}(p,\mu)=\frac{1}{1+\exp\left(\frac{E(M,p)+\mu}{T}\right)}. \tag{6}\] Then, the gap equations are determined by \(\frac{\partial\Omega}{\partial M}=\frac{\partial\Omega}{\partial\mu_{r}}=0\), which can be written in the explicit form \[\sigma+\frac{N_{c}N_{f}M}{\pi^{2}}\int_{0}^{\Lambda}\frac{dp\,p^{2}}{E(M,p)}\left[1-n(p,\mu_{r})-\bar{n}(p,\mu_{r})\right]=0 \tag{7a}\] and \[n-\frac{N_{c}N_{f}}{\pi^{2}}\int_{0}^{\Lambda}dp\,p^{2}\left[n(p,\mu_{r})-\bar{n}(p,\mu_{r})\right]=0. 
\tag{7b}\] On the other hand, in order to calculate the meson masses, we obtain the dispersion relations for \(\pi\) and \(\sigma\) mesons in the random-phase approximation (RPA), \[1-2\left(1-\alpha+\frac{\alpha}{4N_{c}}\right)G\frac{N_{c}N_{f}}{\pi^{2}}P\int_{0}^{\Lambda}\frac{p^{2}}{E(M,p)}\left(1-\frac{M_{\pi}^{2}}{M_{\pi}^{2}-4E(M,p)^{2}}\right)\left(1-n(p,\mu_{r})-\bar{n}(p,\mu_{r})\right)dp=0, \tag{8a}\] and \[1-2\left(1-\alpha+\frac{\alpha}{4N_{c}}\right)G\frac{N_{c}N_{f}}{\pi^{2}}P\int_{0}^{\Lambda}\frac{p^{2}}{E(M,p)}\left(1-\frac{M_{\sigma}^{2}-4M^{2}}{M_{\sigma}^{2}-4E(M,p)^{2}}\right)\left(1-n(p,\mu_{r})-\bar{n}(p,\mu_{r})\right)dp=0. \tag{8b}\] ## III Numerical results In this section, we calculate the numerical results for the pole masses of the \(\pi\) and \(\sigma\) mesons. Firstly, by fitting the physical pion mass \(M_{\pi}=137\) MeV, the decay constant \(f_{\pi}=93\) MeV and the quark condensate \(\langle\bar{u}u\rangle=-(247\,\mathrm{MeV})^{3}\), we obtain the current mass of light quarks \(m=5.5\) MeV, the three-momentum hard cutoff \(\Lambda=631\) MeV and the coupling constant \(g=5.074\times 10^{-6}\) MeV\({}^{-2}\) in the conventional NJL model [8]. The new coupling constant \(G\) in the modified NJL model is then given by \[G=\frac{1+\frac{1}{N_{c}}}{1-\alpha+\frac{\alpha}{4N_{c}}}g. \tag{9}\] The constituent quark mass \(M\) as a function of the quark chemical potential \(\mu\) at zero temperature but with different weighting constants \(\alpha\) is plotted in Fig. 1. It is easy to find that, within our model parameters, the order of the chiral phase transition changes from first order to crossover as \(\alpha\) increases from zero: explicitly, they are first-order phase transitions at \(\alpha=0,0.5\), while they are crossover transitions at \(\alpha=0.925,1.009\). Hence, there must be a threshold value \(\alpha_{c}\) between 0.5 and 0.925 where the termination of the first-order transition happens. This has also been discussed in Ref. [4] by using different NJL parameters and will be further investigated in Ref. [9] with the same parameters. Moreover, the (pseudo)critical chemical potential \(\mu_{c}\) at zero temperature is found to be located at about 354, 360, 397 and 474 MeV, respectively, by analyzing the data in Fig. 1 numerically for these four different values of \(\alpha\) from 0 to 1.009. Another point worth mentioning is that, when \(\mu\lesssim 340\) MeV, the constituent quark mass \(M\) remains constant for all different values of \(\alpha\) at zero temperature. As for \(\mu\gtrsim 340\) MeV, we can see that the smaller \(\alpha\) is, the more rapidly the constituent quark mass decreases. Therefore, the (approximate) chiral symmetry will be fully restored at a larger chemical potential when \(\alpha\) becomes larger. The pole masses of the \(\pi\) and \(\sigma\) mesons as functions of the quark chemical potential at zero temperature are given in Figs. 2 and 3, which are obtained from Eqs. (8a) and (8b) with different weighting constants \(\alpha\). Note that, when \(\alpha=0\), our results coincide with the previous results in the conventional mean field approximation [8]. Similar to the constituent quark mass, the meson masses remain constant as long as the chemical potential is smaller than 340 MeV. When the chemical potential reaches 340 MeV, the pion mass begins to increase with \(\mu\), while the \(\sigma\) meson mass decreases first and then increases. In the region \(\mu\gtrsim 340\) MeV, the smaller \(\alpha\) is, the faster the meson masses change. 
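These curves can be reproduced with a short numerical script. The sketch below (Python/SciPy) solves the \(T=0\) gap equations (7a)-(7b) for \(M\) and \(n\) at a given \(\mu\) and then finds the pion pole mass from Eq. (8a). The parameter values follow the text; the zero-temperature limit (Fermi functions replaced by step functions), the solver choices and the function names are illustrative assumptions rather than the authors' code, and the root search for \(M_{\pi}\) is restricted to below the \(2E\) threshold, i.e. before the Mott point, so that the principal-value integral reduces to an ordinary one:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve, brentq

# Parameters quoted in the text (MeV units); g in MeV^-2
m, Lam, g, Nc, Nf = 5.5, 631.0, 5.074e-6, 3, 2
alpha = 0.5
G = (1 + 1 / Nc) / (1 - alpha + alpha / (4 * Nc)) * g      # Eq. (9)
Geff = G * (1 - alpha + alpha / (4 * Nc))                   # G(1 - a + a/4Nc)

E = lambda M, p: np.sqrt(M * M + p * p)

def gap_residuals(x, mu):
    """T=0 gap equations (7a)-(7b); x = (M, n)."""
    M, n = x
    mu_r = mu - alpha * G * n / Nc
    pF = np.sqrt(max(mu_r**2 - M**2, 0.0)) if mu_r > abs(M) else 0.0
    sigma = (m - M) / (2 * Geff)                # from M = m - 2 Geff * sigma
    I1 = quad(lambda p: p**2 / E(M, p), pF, Lam)[0]
    r1 = sigma + Nc * Nf * M / np.pi**2 * I1                # Eq. (7a)
    r2 = n - Nc * Nf * pF**3 / (3 * np.pi**2)               # Eq. (7b) at T=0
    return [r1, r2]

def pion_mass(M, mu_r):
    """Root of Eq. (8a); valid below the 2E threshold (before the Mott point)."""
    pF = np.sqrt(max(mu_r**2 - M**2, 0.0)) if mu_r > abs(M) else 0.0
    def F(Mpi):
        integ = quad(lambda p: p**2 / E(M, p)
                     * (1 - Mpi**2 / (Mpi**2 - 4 * E(M, p)**2)), pF, Lam)[0]
        return 1 - 2 * Geff * Nc * Nf / np.pi**2 * integ
    upper = 2 * E(M, pF) - 1e-3                 # stay just below the pole
    return brentq(F, 1.0, upper)

mu = 300.0                                      # quark chemical potential (MeV)
M, n = fsolve(gap_residuals, x0=[300.0, 0.0], args=(mu,))
mu_r = mu - alpha * G * n / Nc
print(M, pion_mass(M, mu_r))
```

Scanning `mu` over a few hundred MeV and repeating the two solves reproduces the qualitative behavior described above: a flat region at low chemical potential followed by a rise of \(M_{\pi}\) whose steepness depends on \(\alpha\).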
There is also a jump at \(\mu_{c}\) for small \(\alpha\), where a first-order chiral phase transition happens. Figure 1: The constituent quark mass \(M\) as a function of the chemical potential \(\mu\) at \(T=0\) and \(\alpha~{}=~{}0,~{}0.5,~{}0.925\) and 1.009, respectively. The pole masses of the \(\pi\) and \(\sigma\) mesons as well as twice the constituent quark mass for some selected \(\alpha\) are plotted in Fig. 4. Note that the Mott chemical potential is defined by \(M_{\pi}(\mu_{Mott})=2M(\mu_{Mott})\). The Mott transition marks the point where pions dissociate into unbound resonance states. As discussed in Ref. [10], when the chiral phase transition is a smooth crossover, instead of the definition of the pseudo-critical point, the Mott transition point is the better way to characterize the chiral crossover. In our case, when \(\alpha\) is small, \(\mu_{Mott}\) and \(\mu_{c}\) are almost identical. When \(\alpha\) is large, \(\mu_{Mott}>\mu_{c}\), and the difference is larger when \(\alpha\) is larger (\(\mu_{Mott}=354,~{}360,~{}420\) and \(519\) MeV for \(\alpha=0,\ 0.5,\ 0.925\) and \(1.009\), respectively). Figure 2: The pole mass of pion as a function of chemical potential at \(T=0\) and \(\alpha~{}=~{}0,~{}0.5,~{}0.925\) and \(1.009\). Figure 3: The pole mass of \(\sigma\) meson as a function of chemical potential at \(T=0\) and \(\alpha~{}=~{}0,~{}0.5,~{}0.925\) and \(1.009\). The smoother the phase transition (i.e., the larger \(\alpha\) is in our model), the larger the difference between \(\mu_{Mott}\) and \(\mu_{c}\). For the \(\sigma\) meson, the dissociation chemical potential is defined as \(m_{\sigma}(\mu_{d})=2m_{\pi}(\mu_{d})\). From our data, \(\mu_{d}=354,\ 360,\ 402,\ 483\) MeV for \(\alpha=0,\ 0.5,\ 0.925\) and \(1.009\), respectively. The difference between \(\mu_{d}\) and \(\mu_{c}\) is also enhanced when \(\alpha\) becomes larger, but the difference is tiny even when \(\alpha\) is larger than 1. At very large chemical potential, the pole masses of the \(\sigma\) meson and the pion are almost identical (they are the same in the chiral limit, while for a finite light quark current mass \(M_{\pi}^{2}=M_{\sigma}^{2}-4m^{2}\)). Of course, the larger \(\alpha\) is, the larger the chemical potential needed to reach this feature. ## IV Discussion and conclusion In this paper, we study the pole masses of the \(\pi\) and \(\sigma\) mesons as functions of the chemical potential \(\mu\) at \(T=0\) by using the NJL model with a new self-consistent mean field approximation method. As expected, the mass spectrum of the mesons reflects the information on the chiral phase transition of QCD matter. The chiral phase transition is first order when \(\alpha\) is small, while it is a crossover when \(\alpha\) is large enough, as indicated by the behaviors of the meson masses and the constituent quark mass as functions of the chemical potential. Besides, we also find that the difference between the Mott transition and the chiral phase transition becomes larger when \(\alpha\) is larger. In a high density environment, e.g., in the cores of compact stars, the chemical potential of quark matter should reach several hundred MeV. The weighting constant \(\alpha\) in our model is therefore an important quantity for studying dense matter, complementary to the indirect measurements that constrain the EOS of neutron stars. 
Our results of the \(\pi\) and \(\sigma\) mesons show the significant differences in meson properties with different \(\alpha\). Thus, the measurement of meson properties in the heavy ion collisions with large chemical potential or in compact stars should be a good way to confirm \(\alpha\) and study the properties of phase transition of QCD matter. Moreover, the future lattice calculations on the meson mass spectrum in the high chemical potential regime is also a possible method to determine \(\alpha\) in our model. Figure 4: The pole mass of \(\sigma\) meson, pion and twice constituent quark mass as functions of chemical potential at \(T=0\) and \(\alpha\ =\ 0,\ 0.5,\ 0.925\) and \(1.009\). ###### Acknowledgements. The work of X.W. is supported by the start-up funding No. 4111190010 of Jiangsu University and NSFC under Grant No. 12147103.
2309.11907
Learning to Recover for Safe Reinforcement Learning
Safety controllers are widely used to achieve safe reinforcement learning. Most methods that apply a safety controller construct it from handcrafted safety constraints. However, when the environment dynamics are sophisticated, handcrafted safety constraints become unavailable. It is therefore worth studying how to construct safety controllers with learning algorithms. We propose a three-stage architecture for safe reinforcement learning, namely the TU-Recovery architecture. A safety critic and a recovery policy are learned before task training; together they form a safety controller that ensures safety during task training. We then describe a phenomenon induced by disagreement between the task policy and the recovery policy, called the adversarial phenomenon, which reduces learning efficiency and model performance. An auxiliary reward is proposed to mitigate the adversarial phenomenon while helping the task policy learn to recover from high-risk states. A series of experiments is conducted in a robot navigation environment. The experiments demonstrate that TU-Recovery outperforms its unconstrained counterpart in both reward gaining and constraint violations during task training, and that the auxiliary reward further improves the reward-to-cost ratio of TU-Recovery by significantly reducing constraint violations.
Haoyu Wang, Xin Yuan, Qinqing Ren
2023-09-21T09:17:38Z
http://arxiv.org/abs/2309.11907v1
# Learning to Recover for Safe Reinforcement Learning

###### Abstract

Safety controllers are widely used to achieve safe reinforcement learning. Most methods that apply a safety controller rely on handcrafted safety constraints to construct it. However, when the environment dynamics are sophisticated, handcrafted safety constraints become unavailable, so it is worth investigating how to construct safety controllers with learning algorithms. We propose a three-stage architecture for safe reinforcement learning, namely the TU-Recovery Architecture. A safety critic and a recovery policy are learned before task training; together they form a safety controller that ensures safety during task training. We then describe a phenomenon induced by disagreement between the task policy and the recovery policy, called the adversarial phenomenon, which reduces learning efficiency and model performance. An auxiliary reward is proposed to mitigate the adversarial phenomenon while helping the task policy learn to recover from high-risk states. A series of experiments is conducted in a robot navigation environment. The experiments demonstrate that TU-Recovery outperforms its unconstrained counterpart in both reward gaining and constraint violations during task training, and that the auxiliary reward further improves the reward-to-cost ratio of TU-Recovery by significantly reducing constraint violations.

## Introduction

Reinforcement learning has been widely believed to be a key path to general artificial intelligence. In the past decade, deep reinforcement learning has made great achievements in many fields. However, most successful reinforcement learning applications have been confined to virtual domains, such as video games, board games, and virtual robot control, while the application of reinforcement learning in the real world remains a difficult problem worthy of attention. Safety is one of the main obstacles to the real-world application of reinforcement learning. In the traditional reinforcement learning framework, agents are encouraged to explore environments, which is crucial for performance improvement. What prevents reinforcement learning algorithms from being safe is exactly this exploration procedure, because any freely exploring action can put an agent into catastrophe if the environment contains dangerous areas. Therefore, the exploratory nature of reinforcement learning contradicts the safety requirements of many real-world applications, and how to balance exploration and safety - that is, how to make RL agents gain performance improvements continuously while meeting environment constraints - is a tough but critical problem for safe RL. Recently, in high-safety areas such as autonomous driving, AI medical treatment, and robot navigation, researchers have shown growing interest in safe reinforcement learning. Typically, safe reinforcement learning is achieved by adding constraints to the original optimization problem. These constraints are usually characterized by handcrafted features. However, handcrafted features have their own disadvantages. The first disadvantage is that constraints constructed with handcrafted features are difficult, or even impossible, to establish when the environment dynamics operate in a complex way. The second disadvantage is that human-level prior knowledge is required to design handcrafted features; in some cases, no prior knowledge exists, which makes building handcrafted features infeasible. In these scenarios, building safety constraints through learning algorithms is necessary.
It is worth noting that safety constraints need not be explicit formulas. One can use a safe controller to supervise the agent's exploration procedure, so that the safety constraints are implicitly expressed by the safe controller. This work mainly focuses on dealing with complex safety constraints. We first train a safety critic and use this critic to train a task-unaware safe controller. Then task training is performed under the supervision of the safe controller. We find that task actions, which are proposed by the task agent in order to maximize expected reward, and safe actions, which are proposed by the safe controller in order to ensure safety, sometimes work against each other. We call this phenomenon the "adversarial phenomenon". We propose to use auxiliary rewards to handle the adverse effects brought by this phenomenon. The contributions of this paper are listed below:

* We propose the TU-Recovery Architecture for safe reinforcement learning, in which safety constraints and safety controllers are learned in advance of the task learning procedure and are used to guide the exploration of task agents.
* We propose to utilize auxiliary rewards to mitigate the adversarial phenomenon during task training. These auxiliary rewards also help task agents learn to take safe actions when situations get bad.
* We test the proposed methods in a robot navigation environment and show that TU-Recovery outperforms unconstrained RL algorithms. Furthermore, we show that the auxiliary reward improves TU-Recovery by significantly reducing constraint violations.

## Preliminaries

The Markov Decision Process (MDP) is an important model in the field of RL. An MDP can be represented by a six-tuple, \((\mathcal{S},\mathcal{A},P,r,\rho_{0},\gamma)\), where \(\mathcal{S}\) is the space of all possible states, \(\mathcal{A}\) is the space of all possible actions, \(P:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]\) is the transition probability function that determines the probability of transitioning to another state after taking an action from one state, \(r:\mathcal{S}\rightarrow\mathbb{R}\) is the reward function that indicates the reward of being in a state, \(\rho_{0}:\mathcal{S}\rightarrow[0,1]\) is the distribution of initial states, and \(\gamma\) is a discount factor. A stationary policy in this MDP can be represented by \(\pi:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\), where \(\pi(a|s)\) is the probability of taking action \(a\) in state \(s\). In the traditional RL framework, an agent needs to learn a policy that maximizes the expected discounted reward. That is, the agent aims to solve the following optimization problem: \[\underset{\pi}{argmax}\quad\operatorname*{\mathbb{E}}_{\tau\sim\pi}\left[\sum_{t=1}^{\infty}\gamma^{t}r_{t}\right],\] where \(\tau=(s_{0},a_{0},s_{1},a_{1},...)\) represents one trajectory generated by the interaction between the agent and the environment, \(r_{t}\) is a simplified notation for \(r(s_{t})\), and by \(\tau\sim\pi\) we mean that the distribution of trajectories follows the policy \(\pi\). Value functions are important notions in many RL algorithms. One value function, known as the state value function, is denoted as \(V_{\pi}(s)=\operatorname*{\mathbb{E}}_{\tau\sim\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}r_{t}|s_{0}=s\right]\). The other, known as the state-action value function, is denoted as \(Q_{\pi}(s,a)=\operatorname*{\mathbb{E}}_{\tau\sim\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}r_{t}|s_{0}=s,a_{0}=a\right]\).
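To make these definitions concrete, a minimal sketch is given below that computes the discounted return of one sampled trajectory and a Monte Carlo estimate of \(V_{\pi}\) for a fixed start state; the reward sequences and discount factor are toy values, not quantities from this paper.

```python
from typing import List

def discounted_return(rewards: List[float], gamma: float) -> float:
    """sum_t gamma^t * r_t for one sampled trajectory."""
    g = 0.0
    for t, r in enumerate(rewards):
        g += (gamma ** t) * r
    return g

def mc_value_estimate(trajectories: List[List[float]], gamma: float) -> float:
    """Monte Carlo estimate of V_pi(s_0): average discounted return
    over trajectories that all start from the same initial state."""
    returns = [discounted_return(tr, gamma) for tr in trajectories]
    return sum(returns) / len(returns)

# Toy usage with made-up reward sequences.
trajs = [[0.0, 1.0, 0.0, 1.0], [1.0, 0.0, 0.0, 0.0]]
print(mc_value_estimate(trajs, gamma=0.99))
```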
Different from traditional RL, safe RL is usually modeled as a Constrained Markov Decision Process (CMDP), which is an extension of the MDP. A CMDP can be represented by a seven-tuple of the form \((\mathcal{S},\mathcal{A},P,r,c,\rho_{0},\gamma)\), where the definitions of \(\mathcal{S}\), \(\mathcal{A}\), \(P\), \(r\), \(\rho_{0}\) and \(\gamma\) are the same as in the MDP, and \(c:\mathcal{S}\rightarrow\mathbb{R}\) is the cost function, which indicates the cost of being in a state. In the context of safe RL, an agent considers both maximizing the expected discounted reward and satisfying safety constraints. Typically, a safety constraint can be characterized by the expected discounted cost, \(\operatorname*{\mathbb{E}}_{\tau\sim\pi}\left[\sum_{t=1}^{\infty}\gamma^{t}c_{t}\right]\). Therefore, the agent aims to solve the following constrained optimization problem: \[\underset{\pi}{argmax}\quad\operatorname*{\mathbb{E}}_{\tau\sim\pi}\left[\sum_{t=1}^{\infty}\gamma^{t}r_{t}\right],\] \[s.t.\quad\operatorname*{\mathbb{E}}_{\tau\sim\pi}\left[\sum_{t=1}^{\infty}\gamma^{t}c_{t}\right]\leq d,\] where \(d\) is a hyperparameter. In a CMDP, cost value functions can be defined in the same form as value functions. In other words, we define the state cost value function \(V_{\pi}^{c}(s)\) and the state-action cost value function \(Q_{\pi}^{c}(s,a)\) in analogy to \(V_{\pi}(s)\) and \(Q_{\pi}(s,a)\), respectively, using cost instead of reward in the definition. We assume that the cost function is an indicator function. The state space is split into two disjoint subsets, the safe state set and the unsafe state set, denoted as \(\mathcal{S}_{safe}\) and \(\mathcal{S}_{unsafe}\) respectively. The cost function is written as: \[c(s)=\mathbf{1}_{\mathcal{S}_{unsafe}}(s) \tag{1}\] One way to achieve safe RL is to utilize a safety controller. Figure 1 shows a general framework for safety-controller-guided safe RL. In this framework, the agent does not directly interact with the environment; actions proposed by the agent are input into a safety controller, which always outputs safe actions. Suppose the agent proposes an action, \(a_{task}\), which only considers maximizing the expected discounted reward. The safety controller can be thought of as a function that maps the entire action space to the safe action space, denoted as \(\Phi\). So the actual action to be performed is \(a_{safe}=\Phi(a_{task})\). The design of the safety controller is a crucial part of this framework. A good design of the safety controller can greatly improve the performance in gaining reward and satisfying safety constraints.

## Related Work

Recently, there has been some research on using controllers for safe reinforcement learning. The idea is that the exploration process is modified and the scope of exploration is limited to the safe area by the safety controller; this limitation of the exploration area can prevent the agent from entering dangerous areas. Shielding RL [1] is one of the algorithms that use a safety controller. They use prior knowledge, modeled by an abstract MDP, to construct a shield, and use this shield to supervise the agent's exploration. In fact, a shield is one kind of safety controller. RL-CBF [1] makes the assumption that the environment dynamics are a linear combination of nominal dynamics and unknown dynamics, with the unknown dynamics modeled as a Gaussian process. Optimization problems are constructed using barrier functions.
They define a controller that projects unsafe actions into the safe action space by solving these optimization problems. Also, a safety layer [1] has been proposed to play the role of the safety controller: the cost function is approximated to first order in the action, and the safety layer then solves a quadratic program to project actions into the safe action space.

Figure 1: A Typical Safe Reinforcement Learning Framework with Safety Controller

SEditor (Yu, Xu, and Zhang, 2022) is an extended version of the safety layer. In their work, the projection from the whole action space to the safe action space is implemented by a learned policy. There are also algorithms whose ideas are similar to implementing a safety controller, but which do not explicitly define one. Leave-no-Trace (Eysenbach et al., 2017) learns a task policy and a reset policy simultaneously. The reset policy is used to provide safety aborts and restore the agent to initial states when the task policy is about to enter dangerous states. Recovery RL (Thananjeyan et al., 2020) also learns two policies - a task policy and a recovery policy. If an action proposed by the task policy is considered dangerous, the agent executes the action proposed by the recovery policy. DESTA (Mguni et al., 2021) has a similar idea to Recovery RL - it utilizes a task policy and a safety policy, and an impulse controller is used to choose between the task action and the safety action. Our safe RL architecture is greatly inspired by Leave-no-Trace and Recovery RL, but with the recovery policy learned before task training. Furthermore, we train the recovery policy in a task-unaware way, which means the learned recovery policy is a general, task-free guiding policy. There has been some work on utilizing augmented rewards for safe reinforcement learning, for example RCPO (Tessler, Mankowitz, and Mannor, 2019). However, the idea of auxiliary rewards is far more widely used in other RL fields than in safe RL. In this work, the idea of using an auxiliary reward is mainly inspired by Episodic Curiosity (Savinov et al., 2019), although the purpose differs: in their work, the auxiliary reward is used to deal with the sparse reward problem, while here it is used to cope with the adversarial phenomenon.

## Safe Reinforcement Learning Architecture

The safe RL architecture proposed in this paper is shown in figure 2. We define a three-stage workflow for our architecture, namely the exploration stage, the recovery learning stage, and the task training stage. Environments in the first two stages are safety-oriented, while environments in the last stage are task-oriented. In other words, the agent receives only safety-related signals (cost signals) and no task-related signals (reward signals) in the first two stages, while it receives task-related signals in the last stage. Although the agent need not receive any cost signal in the last stage, we design the task-oriented environments to return cost signals for recording safety violations and evaluating algorithm performance. The recovery policy and safety critic are trained in a task-unaware way, so we refer to this architecture as the TU-Recovery Architecture.

### Exploration Stage

The exploration stage is shown in the left part of figure 2. The purpose of the exploration stage is to learn a safety critic. An exploratory policy \(\pi_{exp}\) is used to interact with the environment.
During the interaction, we learn the state-action value function of the exploratory policy according to the following Bellman equation: \[Q^{c}_{exp}(s_{t},a_{t})=c_{t}+(1-c_{t})\,\gamma\operatorname*{\mathbb{E}}_{\tau\sim\pi_{exp}}\left[Q^{c}_{exp}(s_{t+1},a_{t+1})\,|\,s_{t},a_{t}\right]. \tag{2}\] In practice, we use sampled trajectories to approximate the expectation and train \(Q^{c}_{exp}\) by minimizing the MSE loss between the LHS and the RHS. This equation is used in the same way in Recovery RL and its preceding work (Thananjeyan et al., 2020; Srinivasan et al., 2020). After training, the learned function \(Q^{c}_{exp}\) is used as a safety critic. A larger value of \(Q^{c}_{exp}(s,a)\) means a larger probability that the agent will enter dangerous areas after taking action \(a\) in state \(s\).

Figure 2: Architecture of Safe Reinforcement Learning with Safety Critic and Recovery Policy – TU-Recovery Architecture.

Another part that needs to be specified in the exploration stage is the exploratory policy. In practice, we use a random policy as the exploratory policy. The reasons for choosing a random policy are: first, a random policy brings strong exploration, which helps to train a good safety critic; second, a random policy is easy to implement and saves computing space and time.

### Recovery Learning Stage

The middle part of figure 2 illustrates how the recovery learning stage works. In this stage, a recovery policy is trained to minimize the safety critic, which was trained to convergence in the previous stage. This stage is like a traditional RL procedure, in which an agent interacts with a safety-oriented environment, except that the per-step reward is not directly given by the environment but by the safety critic. Suppose the recovery policy takes an action \(a\) in state \(s\); it then receives a reward of \(-Q^{c}_{exp}(s,a)\). Note that we use the negative of the critic as the per-step reward. This allows us to use traditional RL algorithms to optimize the recovery policy, because traditional RL algorithms learn to maximize rewards, which is equivalent to minimizing the safety critic in this case.

### Task Training Stage

As shown in the right part of figure 2, there are four parts in the task training stage - a task policy, a recovery policy, an action decider, and a task-oriented environment. The basic idea is to train the task policy under the supervision of the recovery policy and the action decider. Suppose the agent is in a state \(s\); the task policy proposes a task action, \(a_{task}\), and the recovery policy proposes a recovery action, \(a_{rec}\). The action decider chooses between \(a_{task}\) and \(a_{rec}\). Intuitively, the decider should choose the task action when it is considered to be safe, and otherwise choose the recovery action. We follow the implementation scheme of Recovery RL, which uses the safety critic \(Q^{c}_{exp}\) to make the decision: \[a=\begin{cases}a_{task},&Q^{c}_{exp}(s,a_{task})\leq d\\ a_{rec},&else\end{cases} \tag{3}\] where \(d\) is a hyperparameter threshold. From the view of the task agent, the recovery policy and the action decider can be considered part of the environment, i.e., they change the dynamics of the environment with which the task agent interacts. This procedure corresponds to figure 1, where the safety controller is composed of the recovery policy and the action decider. Furthermore, according to equation 3, the controller only changes the proposed task action when the risk is beyond the threshold.
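A minimal sketch of how the safety critic target of equation 2 and the action decider of equation 3 can be implemented is given below; the critic network `q_safety`, the threshold `d`, and the batch layout are hypothetical placeholders rather than the paper's actual implementation.

```python
import torch

def safety_critic_target(q_safety, c_t, s_next, a_next, gamma):
    """Bootstrapped target of equation 2:
    c_t + (1 - c_t) * gamma * Q_c(s_{t+1}, a_{t+1})."""
    with torch.no_grad():
        q_next = q_safety(s_next, a_next)
    return c_t + (1.0 - c_t) * gamma * q_next

def critic_loss(q_safety, batch, gamma):
    """MSE between Q_c(s_t, a_t) and the bootstrapped target of equation 2."""
    s, a, c, s_next, a_next = batch
    target = safety_critic_target(q_safety, c, s_next, a_next, gamma)
    return torch.nn.functional.mse_loss(q_safety(s, a), target)

def decide_action(q_safety, s, a_task, a_rec, d):
    """Action decider of equation 3: keep the task action while it is judged
    safe, otherwise fall back to the recovery action."""
    risky = q_safety(s, a_task).item() > d
    return a_rec if risky else a_task
```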
This decision rule can be thought of as a kind of _hard intervention_, which replaces the task actions with recovery actions in highly dangerous areas in order to get out of these areas quickly.

## Learning Recovery Actions through Auxiliary Reward

### Motivation

According to the definitions in the previous section, the task policy learns to maximize reward, while the recovery policy helps to recover from high-risk areas. It can be observed that these two policies sometimes work against each other, which can cause the agent to get stuck in a small range of states. We refer to this phenomenon as the _adversarial phenomenon_. Figure 3 shows an example of the adversarial phenomenon, where a robot (shown as red points) is supposed to reach the target area (shown as green circles) while avoiding collisions with an obstacle (shown as blue circles). Light orange in the figure shows the recovery zone - an agent in this zone takes recovery actions instead of task actions. Task actions and recovery actions are shown as black arrows and green arrows, respectively. Note that the task action and the recovery action are opposite in direction, which makes the agent move back and forth repeatedly at the boundary of the recovery zone. The adversarial phenomenon usually happens in relatively hard tasks. For example, as shown in figure 3, the agent has to navigate around the obstacle, which lies in the middle of the straight line between the agent and the target. However, the situation can be totally different in a simple task. As figure 4 shows, when an obstacle is placed in the opposite direction from the target and there are no obstacles on the way to the target, the task action and the recovery action "agree with" each other. Both actions meet the purposes of both policies - moving towards the target and moving away from the high-risk area. We refer to this as the _collaborative phenomenon_, because task actions and recovery actions help each other in this case.

Figure 3: Adversarial Phenomenon near the Border of the Recovery Zone

Figure 4: Collaborative Phenomenon near the Border of the Recovery Zone

The adversarial phenomenon can cause great performance degradation during the task training stage. It makes the agent stuck around one point without further movement. In experiments, we find that this can happen even when the policies are trained to convergence. On the other hand, we consider the collaborative phenomenon an advantage, because it accelerates convergence. Therefore, the goal is to mitigate the impact of the adversarial phenomenon while enhancing the impact of the collaborative phenomenon. We propose to utilize an auxiliary reward to meet this goal, as described in the following.

### Auxiliary Reward

Consider the interactions between the agent and the environment during the task training stage. At any time step \(t\), the agent receives a reward signal \(r_{t}\). An auxiliary reward \(b_{t}\) is added to \(r_{t}\), resulting in an augmented reward: \[\hat{r}_{t}=r_{t}+b_{t}. \tag{4}\] The task policy is trained with augmented rewards rather than the original rewards. The idea behind the auxiliary reward is to force the task policy to learn recovery actions in high-risk areas. If the agent is in a high-risk state, the auxiliary reward should give a high value when \(a_{task}\) proposed by the task policy is close to \(a_{rec}\) proposed by the recovery policy, and a small value otherwise.
However, when the agent is in a low-risk state, which means the agent is far from dangerous areas, the auxiliary reward should not affect the augmented reward, because finishing the task is the only thing the agent needs to consider when it is in safe areas. Based on the above idea, we consider an auxiliary reward of the following form: \[b(s,a)=\alpha f(D^{c}(s))k(a,a_{rec}), \tag{5}\] where \(\alpha\) is a positive scaling parameter, \(D^{c}\) is a safety critic indicating the risk of a state (the higher the risk of a state, the greater the value of \(D^{c}\)), \(f(\cdot)\) is a monotonically increasing function on \(\mathbb{R}\) with range \([0,1]\), and \(k(\cdot,\cdot)\) is a measure of how close two actions are. Ideally, \(f\) can be chosen as the indicator function: \[f(x)=\begin{cases}1,&x>d\\ 0,&x\leq d.\end{cases} \tag{6}\] Under suitable assumptions, a theoretical result about the one-step optimization of the augmented reward is given as follows.

**Proposal 1**.: _Suppose that a policy interacts with an environment, with the augmented reward defined by equation 4. The auxiliary reward is defined by equation 5, where \(f(\cdot)\) is defined by equation 6, and \(k(\cdot,a_{rec})\) is a kernel function maximized at \(a=a_{rec}\). Then, as \(\alpha\) tends to infinity, for any state \(s\) with \(D^{c}(s)>d\), the action that maximizes the one-step augmented reward is given by \(a^{*}=a_{rec}\); for any state \(s\) with \(D^{c}(s)\leq d\), the action that maximizes the one-step augmented reward is given by \(a^{*}=\underset{a}{argmax}\;\;r(s,a)\)._

Proof.: First, consider a state \(s\) with \(D^{c}(s)>d\). From equation 6, the augmented reward can be written as \(\hat{r}(s,a)=r(s,a)+\alpha k(a,a_{rec})\). Note that maximizing this augmented reward is equivalent to solving a multi-objective maximization problem, with \(r(s,a)\) and \(k(a,a_{rec})\) as objectives, using the additive weighting method. So \(\alpha\) can be considered the weight of \(k(a,a_{rec})\) in the problem. When \(\alpha\) tends to infinity, the problem degenerates into maximizing \(k(a,a_{rec})\). Second, consider a state \(s\) with \(D^{c}(s)\leq d\). It is easy to deduce that maximizing \(\hat{r}(s,a)\) is equivalent to maximizing \(r(s,a)\) in this case, because the auxiliary reward is always zero according to equation 6.

Proposal 1 gives an insight into how the auxiliary reward affects a policy's behavior. If the agent is in a high-risk state and \(\alpha\) is big enough, the action that maximizes the one-step augmented reward will tend to be close to the recovery action; if the agent is in a low-risk state, the action that maximizes the one-step augmented reward will be the same as the one that maximizes the one-step original reward. Although Proposal 1 is about one-step optimization, which only applies to a greedy policy, it is reasonable to believe that the augmented reward can also help to enhance the performance of a long-term-concerned policy. In practice, we use \(Q^{c}_{exp}(s,a)\) instead of \(D^{c}(s)\) in equation 5. We will show in experiments that a state-action based function is a better safety critic than a state based function. \(\langle a,a_{rec}\rangle\) is used as \(k(a,a_{rec})\) in equation 5. The dot product between two actions indicates the "agreement" of these actions with each other. An action that points in the same direction as \(a_{rec}\) shows strong agreement with \(a_{rec}\) and therefore tends to give a large positive auxiliary reward, whereas an action that points in the opposite direction shows strong disagreement and tends to give a large negative auxiliary reward.
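The following is a minimal sketch of the auxiliary reward in equations 4-6 with the indicator gate and the dot-product kernel; as noted above, the risk score may equally be taken from \(Q^{c}_{exp}(s,a)\) instead of a state-based \(D^{c}(s)\), and the numeric values below are illustrative only.

```python
import numpy as np

def indicator_gate(risk: float, d: float) -> float:
    """Equation 6: the gate is 1 in high-risk states (risk > d), else 0."""
    return 1.0 if risk > d else 0.0

def auxiliary_reward(risk, a_task, a_rec, alpha, d, gate=indicator_gate):
    """Equation 5: b = alpha * f(risk) * <a_task, a_rec>."""
    return alpha * gate(risk, d) * float(np.dot(a_task, a_rec))

def augmented_reward(r, risk, a_task, a_rec, alpha, d):
    """Equation 4: r_hat = r + b."""
    return r + auxiliary_reward(risk, a_task, a_rec, alpha, d)

# Toy usage: a task action opposed to the recovery action is penalized in a
# risky state, and the reward is left untouched in a safe one.
a_task, a_rec = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
print(augmented_reward(0.5, risk=0.9, a_task=a_task, a_rec=a_rec, alpha=2.0, d=0.3))
print(augmented_reward(0.5, risk=0.1, a_task=a_task, a_rec=a_rec, alpha=2.0, d=0.3))
```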
Moreover, continuous and smoothed versions of the indicator function, rather than the original indicator function, are used as \(f\). We consider the following two functions:

* A sigmoid function with a linear transformation, with hyperparameters \(a\) and \(b\): \[f(x)=\frac{1}{1+exp(-(ax+b))}\] (7)
* A piecewise Gaussian and constant function, with \(\sigma\) as a hyperparameter: \[f(x)=\begin{cases}1,&x>d\\ exp(-\frac{(x-d)^{2}}{\sigma^{2}}),&x\leq d.\end{cases}\] (8)

Figure 5: Example function graph of \(f\). Left: SL function. Right: GC function.

For simplicity, we refer to the first function as the _SL function_ and to the second as the _GC function_. Figure 5 shows examples of the two functions. The main reason for using these functions rather than the indicator function is to avoid abrupt value changes around some states and to make the learning procedure more stable. Auxiliary rewards can be thought of as a kind of _soft intervention_: they change the task policy's behavior in a gradual manner, which is different from how a hard intervention affects the task policy.

## Experiments

### Environment

The experiment environment is illustrated in figure 6. In practice, we construct this environment based on safety-gym [1]. It is worth emphasizing that the start positions and target positions are randomly initialized during the training stage.

### Metrics

The randomness in start positions and target positions may result in high-variance results, so the metrics for algorithm performance should be carefully chosen. Rewards and costs are the values that most directly show the performance of algorithms. We use cumulative rewards and costs per 1000 steps as basic metrics. For two algorithms, if one is greater in rewards while lower in costs than the other, we say that one _dominates_ the other. However, sometimes there is no dominance between two algorithms. In these cases, we compare two algorithms by the cumulative **R**eward-to-**C**ost ratio, abbreviated as _RC-ratio_. Over a long training procedure, the ratio between the **M**aximum cumulative **R**eward and the maximum cumulative **C**ost during training, abbreviated as _MRC-ratio_, is used. The ratio between the **A**verage cumulative **R**eward and the average cumulative **C**ost during training, abbreviated as _ARC-ratio_, is also used as a metric.

## Results

### Results

**a. Results of Task Training Stage.** We test four algorithms - an unconstrained method, TU-Recovery, and TU-Recovery with the SL and GC auxiliary rewards. The learning curves for cumulative rewards and costs are shown in figure 7. We also report the MRC-ratio and ARC-ratio, as shown in table 1. All policies are trained using the SAC algorithm [1], and all results are averages over 5 runs. It can easily be seen from figure 7 that the TU-Recovery methods (including the ones with auxiliary rewards) significantly reduce constraint violations compared to the unconstrained method. The results in table 1 also indicate that our methods outperform the unconstrained method. Furthermore, table 1 shows that auxiliary rewards can improve performance, especially the GC reward (4th row in table 1), with which the algorithm outperforms the one without an auxiliary reward (2nd row in table 1) in both MRC-ratio and ARC-ratio.
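A small helper for the RC-style metrics defined above is sketched below; the epsilon guard against division by zero and the toy numbers are our own additions, not details from the paper.

```python
def mrc_ratio(cum_rewards, cum_costs, eps=1e-8):
    """MRC-ratio: max cumulative reward / max cumulative cost over training."""
    return max(cum_rewards) / max(max(cum_costs), eps)

def arc_ratio(cum_rewards, cum_costs, eps=1e-8):
    """ARC-ratio: average cumulative reward / average cumulative cost."""
    avg_r = sum(cum_rewards) / len(cum_rewards)
    avg_c = sum(cum_costs) / len(cum_costs)
    return avg_r / max(avg_c, eps)

# Toy usage with made-up per-window cumulative rewards and costs.
rew, cost = [10.0, 14.0, 18.0], [12.0, 9.0, 7.0]
print(mrc_ratio(rew, cost), arc_ratio(rew, cost))
```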
**b. How Do Auxiliary Rewards Improve the Task Policy?** To give an insight into how auxiliary rewards affect the behavior of the task policy, we test the trained task policies and measure their performance by cumulative reward, cumulative cost, and RC-ratio. In this experiment, interventions by the recovery policy are disabled, so that all results reflect the pure performance of the trained task policies. Results are shown in table 2. For each task policy, 10 different seeds are used to conduct 10 runs, so these results are averages over 10 runs. It is clear that the task policies trained with an auxiliary reward (3rd and 4th rows in table 2) incur significantly fewer constraint violations, while gaining slightly less reward, compared to the task policy trained with the original reward (2nd row in table 2), resulting in a higher RC-ratio (as shown in the 4th column).

\begin{table} \begin{tabular}{l l l} \hline Algorithm & MRC-ratio & ARC-ratio \\ \hline Unconstrained & 0.800 & 1.321 \\ TU-Recovery & 1.232 & 2.564 \\ TU-Recovery + SL Reward & 1.054 & **2.675** \\ TU-Recovery + GC Reward & **1.421** & 2.669 \\ \hline \end{tabular} \end{table} Table 1: Task Training Results for Unconstrained Method and Our Methods.

Figure 6: Environment Used in Experiments. Left: Obstacles layout. Right: Environment layout at one moment. Obstacles are represented by blue pillars, the robot is represented by a red sphere, and the target area is represented by green pillars.

Figure 7: Cumulative rewards and costs for the unconstrained method and our methods. Left: Learning curves of cumulative rewards. Right: Learning curves of cumulative costs.

### Ablation Experiments

**a. The Reason for Using the Exploratory Policy.** To justify using the Q function of the exploratory policy as the safety critic, we conduct an experiment to compare different safety critics. We consider two other safety critics in addition to \(Q^{c}_{exp}\), as described in the following. A simple plan for building a safety critic is to directly train the recovery policy to minimize the expected discounted cost, and then use the Q function of this recovery policy, denoted by \(Q^{c}_{d\_rec}\), as the safety critic. One benefit of this plan is that the exploration stage is combined with the recovery learning stage, so the three-stage training workflow is simplified to a two-stage procedure. We also consider using the distance from the robot to an obstacle as a safety critic. There is more than one obstacle in the environment, so the minimum distance from the robot to these obstacles, denoted by \(D^{c}_{min}\), is used. Note that this is a handcrafted safety critic, and normally the robot has no knowledge of this value, so this critic only acts as an idealized state-based safety critic. When using this safety critic, only the recovery learning stage and the task training stage need to be executed. We compare the task learning curves of TU-Recovery with the three different safety critics, as shown in figure 8. The conclusion is that the one using the exploratory policy dominates the one using the minimum distance, and the one using the minimum distance dominates the one using the directly trained recovery policy. We give an insight into this conclusion by providing the heatmaps of the three safety critics, as figure 9 shows. All heatmaps of Q functions are drawn by masking the actions to zeros.
By comparing the heatmaps of \(Q^{c}_{exp}\) (left part of figure 9) and \(Q^{c}_{d\_rec}\) (middle part of figure 9), it makes sense that \(Q^{c}_{exp}\) is a better safety critic than \(Q^{c}_{d\_rec}\): \(Q^{c}_{exp}\) decreases homogeneously in all directions as the distance from the obstacle increases, while \(Q^{c}_{d\_rec}\) does not. From this point of view, it would seem that \(D^{c}_{min}\) should be a better safety critic than \(Q^{c}_{exp}\). Why do the results from figure 8 contradict this view? We argue that this is because \(Q^{c}_{exp}\) is a state-action based critic, while \(D^{c}_{min}\) is a state based critic. A state-action based critic takes the actions into account and tends to give more reasonable values than a state based critic. Therefore, the results also justify using the Q function as the safety critic.

**b. Necessity of Using Hard Intervention.** We conduct an experiment to see whether hard intervention is necessary. TU-Recovery with both hard intervention and soft intervention (auxiliary reward) is compared to its soft-intervention-only counterpart. The result is shown in figure 10. It can be seen that soft intervention helps the task policy learn to be safe gradually, resulting in better performance than the unconstrained method, but hard intervention is still necessary to ensure safety throughout task training.

## Conclusion

We propose a three-stage framework for safe reinforcement learning, named the TU-Recovery Architecture. The framework constructs safety constraints by learning, avoiding handcrafted safety constraints. It is demonstrated that our framework outperforms its unconstrained counterpart in task training. The adversarial phenomenon may degrade performance during task training, so auxiliary rewards are proposed to mitigate this issue. Experiments show that auxiliary rewards can efficiently help the task policy learn recovery actions.

## Acknowledgements

We express our gratitude to Zhejiang University and to all the people who provided us with technical support and helpful suggestions.

\begin{table} \begin{tabular}{l l l l} \hline \hline Algorithm & Reward & Cost & RC-ratio \\ \hline Unconstrained & 22.136 & 15.84 & 1.397 \\ TU-Recovery & **23.682** & 8.0 & 2.960 \\ TU-Recovery + SL Reward & 22.212 & **4.32** & **5.142** \\ TU-Recovery + GC Reward & 22.590 & 5.8 & 3.895 \\ \hline \hline \end{tabular} \end{table} Table 2: Performance of The Trained Task Policies.

Figure 8: Learning Curves of TU-Recovery with Different Safety Critics. Left: Cumulative reward curves. Right: Cumulative cost curves.

Figure 9: Heatmaps of Different Safety Critics. Left: Q function of exploratory policy. Middle: Q function of directly trained recovery policy. Right: Minimum distance from current position to an obstacle.

Figure 10: Ablations of Hard Intervention. Left: Cumulative reward curves. Right: Cumulative cost curves.
2306.00057
Exploring Large-Scale Entanglement in Quantum Simulation
Entanglement is a distinguishing feature of quantum many-body systems, and uncovering the entanglement structure for large particle numbers in quantum simulation experiments is a fundamental challenge in quantum information science. Here we perform experimental investigations of entanglement based on the entanglement Hamiltonian, as an effective description of the reduced density operator for large subsystems. We prepare ground and excited states of a 1D XXZ Heisenberg chain on a 51-ion programmable quantum simulator and perform sample-efficient `learning' of the entanglement Hamiltonian for subsystems of up to 20 lattice sites. Our experiments provide compelling evidence for a local structure of the entanglement Hamiltonian. This observation marks the first instance of confirming the fundamental predictions of quantum field theory by Bisognano and Wichmann, adapted to lattice models that represent correlated quantum matter. The reduced state takes the form of a Gibbs ensemble, with a spatially-varying temperature profile as a signature of entanglement. Our results also show the transition from area to volume-law scaling of Von Neumann entanglement entropies from ground to excited states. As we venture towards achieving quantum advantage, we anticipate that our findings and methods have wide-ranging applicability to revealing and understanding entanglement in many-body problems with local interactions including higher spatial dimensions.
Manoj K. Joshi, Christian Kokail, Rick van Bijnen, Florian Kranzl, Torsten V. Zache, Rainer Blatt, Christian F. Roos, Peter Zoller
2023-05-31T18:00:01Z
http://arxiv.org/abs/2306.00057v1
# Exploring Large-Scale Entanglement in Quantum Simulation ###### Abstract Entanglement is a distinguishing feature of quantum many-body systems, and uncovering the entanglement structure for large particle numbers in quantum simulation experiments is a fundamental challenge in quantum information science. Here we perform experimental investigations of entanglement based on the entanglement Hamiltonian, as an effective description of the reduced density operator for large subsystems. We prepare ground and excited states of a 1D XXZ Heisenberg chain on a 51-ion programmable quantum simulator and perform sample-efficient 'learning' of the entanglement Hamiltonian for subsystems of up to 20 lattice sites. Our experiments provide compelling evidence for a local structure of the entanglement Hamiltonian. This observation marks the first instance of confirming the fundamental predictions of quantum field theory by Bisognano and Wichmann, adapted to lattice models that represent correlated quantum matter. The reduced state takes the form of a Gibbs ensemble, with a spatially-varying temperature profile as a signature of entanglement. Our results also show the transition from area to volume-law scaling of Von Neumann entanglement entropies from ground to excited states. As we venture towards achieving quantum advantage, we anticipate that our findings and methods have wide-ranging applicability to revealing and understanding entanglement in many-body problems with local interactions including higher spatial dimensions. + Footnote †: The first three coauthors contributed equally. ## I Introduction Entanglement is the crucial ingredient that sets apart the quantum world from its classical counterpart. It is a fundamental concept that has garnered intense research interest due to its implications for various aspects of quantum physics, from foundational aspects to quantum computation to condensed matter systems and quantum chemistry [1]. In quantum many-body problems, entanglement leads to an exponential scaling of complexity with system size. While classical simulations struggle to capture this complexity, quantum simulation experiments have the ability to naturally represent large-scale entanglement - being quantum systems themselves. Recent years have seen tremendous progress in large-scale quantum simulation experiments touching upon the boundaries of what is classically simulatable [2; 3; 4; 5; 6; 7; 8; 9]. The quantum many-body systems in such experiments are typically only locally interacting. That is, operators appearing in the system Hamiltonian act only on local clusters of adjacent particles, with important consequences for the eigenstates of the system and leading to universal features of the entanglement structure contained in them. Investigations of bipartite entanglement start with considering a partition of the system of interest into a subsystem \(A\) and its complement \(\bar{A}\) (see Fig. 1 (a)). For a system prepared in a many-body state \(\ket{\Psi}\), entanglement between the two can be quantified via the von Neumann entanglement entropy (EE) \(S_{A}^{\text{VN}}=-\text{Tr}(\rho_{A}\log\rho_{A})\) where \(\rho_{A}=\text{Tr}_{A}\ket{\Psi}\bra{\Psi}\) describes the reduced density matrix of subsystem A. For many-body ground states of locally interacting systems, one expects a sub-extensive _area-law_ scaling, where the EE only grows with the size of the boundary \(\partial A\) of the subsystem [10]. 
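As a small numerical illustration of these definitions (a toy example, not the experiment described here), the sketch below computes \(\rho_{A}\) for a few-qubit pure state by a partial trace and evaluates \(S_{A}^{\text{VN}}\); the GHZ state used is only a stand-in.

```python
import numpy as np

def reduced_density_matrix(psi, n_qubits, keep):
    """rho_A = Tr_{A-bar} |psi><psi| for a pure state of n_qubits qubits."""
    psi = psi.reshape([2] * n_qubits)
    # Move the kept qubits to the front, flatten the rest, and contract.
    psi_mat = np.moveaxis(psi, keep, list(range(len(keep)))).reshape(2 ** len(keep), -1)
    return psi_mat @ psi_mat.conj().T

def von_neumann_entropy(rho):
    """S = -Tr(rho log rho), dropping numerically zero eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log(evals)).sum())

# Toy example: a 4-qubit GHZ state; any bipartition gives S = log(2) ~ 0.693.
n = 4
psi = np.zeros(2 ** n, dtype=complex)
psi[0] = psi[-1] = 1 / np.sqrt(2)
rho_A = reduced_density_matrix(psi, n, keep=[0, 1])
print(von_neumann_entropy(rho_A))
```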
This area law scaling lies at the heart of efficient tensor-network approximations in classical simulations of many-body ground states [10; 11]. In contrast, generic excited states with energies well above the ground state will exhibit _volume-law_ scaling, a growth of the EE with the subsystem size reminiscent of the extensive behavior of thermodynamic entropy [12]. However, extracting such information about entanglement from large-scale experiments remains challenging, predominantly because of the difficulty of performing tomography of \(\rho_{A}\) on large subsystems [13] for the purpose of evaluating \(S_{A}^{\text{VN}}\). Moreover, identifying quantifiers that capture the entanglement pattern, beyond a simple scalar value \(S_{A}^{\text{VN}}\), and in relation to the geometry and topology of the subsystem has proven to be a difficult task [14]. Here we engage these challenges in an experimental setting by considering the Entanglement (or Modular) Hamiltonian (EH) \(\tilde{H}_{A}\), which describes the reduced density matrix of a subsystem \(A\) through \(\rho_{A}\sim e^{-\tilde{H}_{A}}\). For ground states of many-body Hamiltonians with local interactions, this entanglement Hamiltonian is conjectured to have a simple operator structure. According to fundamental predictions in quantum field theory (QFT) by Bisognano and Wichmann [15; 16], the entanglement Hamiltonian can be expressed as a spatial deformation of the system Hamiltonian, i.e., it is composed of the same local operators appearing in the translationally invariant system Hamiltonian, but acquiring a spatially dependent prefactor (see also below, Eq. (2)). Rather than performing full subsystem tomography of \(\rho_{A}\), we instead 'learn' a local entanglement Hamiltonian from experimental data. This allows us to study entanglement properties of large subsystems, and experimentally investigate predictions from QFT. Moreover, the entanglement Hamiltonian provides a unique insight into entanglement patterns by offering an interpretation of \(\hat{\rho}_{A}\) as a Gibbs state with a locally varying inverse temperature, or 'entanglement temperature', quantifying how subregions of the subsystem are entangled with the outside world [17; 18; 19]. In this work, we prepared ground and excited states of the 1D Heisenberg XXZ model with \(N=51\) spins in a trapped-ion platform, and extracted entanglement Hamiltonians for subsystem sizes up to \(L_{A}=20\) lattice sites. We find the first experimental evidence for an entanglement Hamiltonian in the form of a deformation of the system Hamiltonian, in line with a fundamental prediction by Bisognano and Wichmann (BW) [15; 16] and its extension to conformal field theories (CFTs) [20; 21]. In addition, our results provide us with the Von Neumann entanglement entropy displaying the area-to-volume law transition [12]. We verify the accuracy of the fitted local entanglement Hamiltonians from independent experimental data, i.e. we assign fidelities to the results without having to resort to theoretical simulations. ## II Locality of the entanglement Hamiltonian Our study primarily focuses on the entanglement Hamiltonian (EH). In the following discussion, we high Figure 1: _Learning the entanglement structure of variationally prepared ground and heated quantum many-body states._ (a) We study the XXZ model with Hamiltonian \(\hat{H}\) given in Eq. (3). This defines our EH ansatz \(\hat{H}_{A}(\mathbf{\beta})\) according to Eq. 
(2) with \(\mathbf{\beta}\) the entanglement temperature profile on a subsystem of length \(L_{A}\) in the 51-ion chain. (b) Experimental procedure for state preparation and data analysis. A variational quantum circuit (see Appendix G) first prepares correlated quantum many-body states. In the second step, we collect frequencies of bit strings sampled in different Pauli bases (see Appendix D). The data are subjected to an entanglement Hamiltonian tomography (EHT) procedure which finds the optimal EH \(\hat{H}_{A}(\mathbf{\beta})\) best reproducing the experimentally obtained frequencies (see Appendix D). (c) Entanglement properties of variationally prepared many-body states on 51 ions, obtained via EHT for different anisotropies \(\Delta=1\) and \(\Delta=1.7\), respectively. Blue squares show the results for the VQE ground states, while red diamonds show data for heated states. The energy spectrum on the left indicates VQE ground and excited state energies as fractions of the entire spectral range. The upper panels show the von Neumann entropies as a function of subsystem size \(L_{A}\) which we obtain from the learned EH via \(S_{A}^{\rm VN}=\langle\hat{H}_{A}(\mathbf{\beta})\rangle+\log[Z_{A}(\mathbf{\beta})]\). Dashed lines show the corresponding theoretical curves obtained from fitting the ansatz \(\rho_{A}(\mathbf{\beta})\) to an MPS simulation of the experiment. Heated-up states indicate a clear volume-law scaling of the entanglement entropy \(S_{A}^{\rm VN}\sim L_{A}\) as opposed to area-law scaling for the VQE ground states \(S_{A}^{\rm VN}\sim\text{const}\). Lower panels depict the optimal EH parameters for \(L_{A}=12\) determined from EHT (see AppendixD). Transparent lines show the results of all connected 12-site subsystems from the central 27 ions of the 51-ion chain. Solid marked lines represent the mean over all these subsystems. light the remarkable observation that the EH exhibits a local structure in the ground states of quantum field theories (QFTs), as demonstrated in [15; 16; 20; 21; 22]. Moreover, this fundamental finding has been extended to lattice models [22], providing insights into the operator content of the EH in locally interacting quantum many-body systems. This allows for efficient protocols to comprehend the EH, which serves as the foundation for measuring entanglement in large subsystems within our quantum simulation experiments [23]. Relativistic quantum field theory (RQFT) in \(d+1\) dimensional Minkowski space makes predictions about the entanglement structure of the vacuum (ground) state \(\ket{\Omega}\). The Bisognano and Wichmann theorem [15; 16] states that the reduced density operator of the ground state when partitioned into a semi-infinite half-space \(A=\{\mathbf{x}\in\mathbb{R}^{d}\mid x_{1}>0\}\) and its complement \(\tilde{A}\), takes the form of a Gibbs state, \[\rho_{A}\equiv\operatorname{Tr}_{\tilde{A}}\ket{\Omega}\bra{ \Omega}=\frac{1}{Z_{A}}e^{-\int_{A}\mathbb{d}^{d}x\,\beta(\mathbf{x})\mathscr{H}( \mathbf{x})}\equiv\frac{1}{Z_{A}}e^{-\tilde{H}_{A}} \tag{1}\] Here, \(\mathscr{H}(\mathbf{x})\) is the Hamiltonian density of the RQFT, and \(\beta(\mathbf{x})\sim x_{1}\) is the inverse temperature profile, that follows a linear ramp with \(x_{1}\) being the coordinate perpendicular to the plane \(\partial A\) cutting the infinite half-spaces. 
This result generalizes to conformal field theories (CFTs) [20; 21], predicting in particular that for a region \(A\) as a ball of radius \(R\) and radial coordinate \(r=|\mathbf{x}|\), the inverse temperature profile has the form of a parabola, \(\beta(\mathbf{x})\sim(R^{2}-r^{2})/2R\) (see Appendix for details). Notably, Eq. (1) implies that the entanglement Hamiltonian \(\tilde{H}_{A}\) is local and shares the same operator structure as the system Hamiltonian but is spatially deformed according to the profile \(\beta(\mathbf{x})\) defining an 'entanglement temperature' \(T(\mathbf{x})=1/\beta(\mathbf{x})\). This temperature decreases with increasing distance from the cut, indicating that the dominant contributions to the entanglement, due to low-lying eigenfunctions of \(\tilde{H}_{A}\), are supported close to the cut. By adapting and applying BW arguments to the EH for ground states of strongly interacting lattice models with local interactions, a conjecture has emerged that generalizes Eq. (1) [22]. This conjecture has been supported by both numerical and analytical investigations across various many-body models. To elaborate, let's consider a spatial deformation of a lattice Hamiltonian: \[\hat{H}=\sum_{j}\hat{h}_{j}\quad\xrightarrow{\text{deform}}\quad\tilde{H}_{A }=\sum_{j\in A}\beta_{j}\hat{h}_{j}+\dots, \tag{2}\] where \(\hat{h}_{j}\) represents few-body terms that act on a neighborhood of lattice sites \(j\), and \(\beta_{j}\) denotes the deformation profile. According to the conjecture, the ground state of \(\hat{H}\) gives rise to a reduced density operator \(\rho_{A}\) for a connected subsystem \(A\), which assumes the form described in Eq. (1), with \(\tilde{H}_{A}\) derived from Eq. (2). It is expected that this statement holds true for states that can be effectively described by a continuum field theory, to which the BW theorem can be applied, with minor non-universal corrections indicated by the dots. In the current context, these arguments suggest a general operator structure for the entanglement Hamiltonian that can be experimentally explored. By parametrizing \(\tilde{H}_{A}\) as in (2) and testing for potential deviations, we employ a learning protocol called EHT [23; 24], owing to the locality of \(\tilde{H}_{A}\). In other words, the number of samples required to learn the coefficients \(\beta_{j}\) within a given error scales polynomially with the number of terms \(\hat{h}_{j}\). Simultaneously, this procedure enables the direct measurement of the Von Neumann entanglement entropy (EE), given by \(S_{A}^{\text{VN}}=\operatorname{Tr}(\rho_{A}\tilde{H}_{A})+\log Z_{A}\). Furthermore, this methodology can also be applied to more general excited or thermal states. In the latter case, a Gibbs state with a flat inverse temperature profile \(\beta=\beta_{j}\) is expected (with boundary corrections [25]). Extracting the EH for states at various energies, spanning from the lowest to the middle of the spectrum of \(\hat{H}\), facilitates the experimental observation of the transition from an area law to a volume law for the EE. ## III Experimental setup and model A programmable trapped-ion quantum simulator serves as our experimental platform for studying the entanglement structure in correlated quantum many-body states. In our setup, a linear chain of \(N=51\)\({}^{40}\)Ca\({}^{+}\) ions is held in a linear Paul trap using highly anisotropic confining potentials. 
The spin states are encoded into long-lived electronic states \(\ket{\downarrow}=\ket{S_{1/2},m=+1/2}\) and \(\ket{\uparrow}=\ket{D_{5/2},m=+5/2}\), defining the computational basis. Global entangling operations are realized via quench dynamics of an XY model with controllable long-range interactions, which is engineered via exploiting the spin and motional degrees of freedom of the trapped ions (see Appendix B). Our system allows spatially-resolved addressing and detection, enabling arbitrary single-qubit rotations and high-fidelity readout (see Appendix C). To study universal features of the entanglement structure on the trapped-ion platform, we focus on realizing low-energy states of the Heisenberg XXZ model with open boundary conditions \[\hat{H}_{\text{XXZ}}=J\sum_{j=1}^{N-1}\left(\hat{S}_{j}^{x}\hat{S}_{j+1}^{x}+ \hat{S}_{j}^{y}\hat{S}_{j+1}^{y}+\Delta\hat{S}_{j}^{z}\hat{S}_{j+1}^{z}\right), \tag{3}\] where \(\hat{S}_{j}^{\alpha}\) denote spin-1/2 operators acting on lattice sites \(j\). For an anisotropy parameter \(-1<\Delta\leq 1\), the critical regime, this model is gapless and its low-energy physics is described by a CFT with central charge \(c=1\)[26]. Preparing low-energy states in this regime allows us not only to experimentally study lattice analogs of the BW theorem on half partitions, but also finite bulk intervals. We further present results analyzing the change of the temperature profile outside the critical regime \(\Delta>1\) as well as for highly excited states of the XXZ chain. ## IV Results Our main experimental procedure consists of variational state preparation followed by Entanglement Hamiltonian Tomography as outlined in Fig. 1 (b). We performed experimental preparations of approximate ground and excited states for an XXZ model (cf. Eq. (3)) by optimizing quantum circuits generated through quench dynamics in a variational quantum eigensolver (VQE) feedback loop [28]. This resulted in variational states that correspond to superpositions of eigenstates from finite energy windows within the XXZ model (indicated by the blue and red brackets in the energy bar of Fig. 1 (c)). We successfully prepared states with an energy distance from the true ground state that corresponds to approximately 2% of the entire spectral range (see Appendix G). Furthermore, states with high average energy, located in the middle of the spectrum, can be prepared efficiently by quenching the initial state using the native interaction Hamiltonian of the ion chain (see Appendix B), and subsequently applying the same VQE circuit utilized for preparing low-energy state. We will refer to the states prepared using this method as 'VQE heated states'. Subsequently, we analyze the entanglement properties of the prepared many-body states via EHT, using experimental samples from 243 Pauli bases as base data. In each of these Pauli bases, we collect 200 samples from quantum projective measurements. EHT is performed from an ansatz of the EH of the form \(\tilde{H}_{A}(\mathbf{\beta})=\sum_{j}\beta_{j}\hat{h}_{j}\) with operator components \(\hat{h}_{j}\) defined in Fig. 1 (a). The EHT procedure is independently verified via cross-fidelity check with respect to independent experimental data sets and theoretical simulations (see below). Area-law and volume-law scaling of the entanglement entropyOur main results are summarized in Fig. 1 (c), in which we analyze the entanglement structure of VQE ground and heated states for different values of the anisotropy \(\Delta\). 
For subsystems \(A\) in the bulk of the chain, we observe the anticipated distinct behavior for the low-energy and excited states. While the former exhibits an approximately constant EE, consistent with an area law of entanglement, the heated-up states exhibit a characteristic growth of the EE, which is consistent with a linear, volume law, scaling \(S_{A}\propto L_{A}\) with subsystem size \(L_{A}=2,\ldots,12\). This behavior is intimately related to the characteristic shape of the EH temperature profiles \(\beta_{j}\), displayed in the lower panels of Fig. 1 (c). The parabolic shape of the profiles in the VQE ground state results in spins near the boundary of \(A\) providing the dominant contribution to the entanglement with the environment \(\tilde{A}\), thus capturing the essential feature of area-law entanglement. In contrast, the profiles for the excited states \(\beta_{j}\) exhibit a relatively flat plateau within the bulk of the subsystem, with only small differences observed between the boundary and bulk spins. In either case, we find a smooth profile for the learned parameters, consistent with expectations from CFT. Furthermore, our results provide indications of a distinctive behavior between temperature profiles of the VQE ground states in the critical and non-critical regimes. As can be seen in the lower right panel of Fig. 1 (c) (\(\Delta=1.7\)), the flanks of the profile \(\beta_{j}\) exhibit an approximately constant slope, to be contrasted to the parabolic profile in the critical regime (\(\Delta=1\)). Our findings are consistent with the analytical results of free-particle systems [29], where EH parameters of non-critical chains follow a triangular deformation. This suggests that the different shapes (parabolic vs. triangular) of the EH parameters reflect the distinct functional behavior in the decay of correlation functions (power-law vs. exponential). Scaling behavior of entanglement temperature profiles with \(L_{A}\)We now turn to analyzing the entanglement structure in more detail, by studying the behavior of the EH as a function of subsystem size for bulk and boundary regions at the critical anisotropy \(\Delta=1\). In Fig. 2 (a), we display normalized variance and mean energies for variationally prepared ground and the excited states for which we summarise the results below. Fig. 2 (b) summarizes the resulting entanglement temperature profiles \(\beta_{j}\), encoding the information of subsystem density matrices of the 51-ion chain up to \(L_{A}=20\) sites. For the VQE ground state, the profiles exhibit a parabolic shape for all subsystem sizes \(L_{A}\) whose height grows approximately linearly with \(L_{A}\), i.e. individual spins that are close to the subsystem's edges contribute dominantly to the entanglement. Although the original BW predictions are made for ground states, we find that even for the approximate ground states, which are superpositions of states spanning a finite energy range of the spectrum, the parabolic profile remains robust. In Appendix H.2, we show numerically for the lowest 200 states that each individual eigenstate exhibits a parabolically deformed EH. The heated-up states however, are much higher in energy and here the inverse entanglement temperature profile flattens and forms a plateau, whose height stays approximately constant as a function of subsystem size, resulting in a linear scaling of entanglement entropy (see Fig. 1 (c)) reminiscent of a locally thermalized state. 
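For readers who want to see how an entropy is obtained from a learned entanglement Hamiltonian, the following exact-diagonalization sketch builds a \(\mathbf{\beta}\)-deformed XXZ entanglement Hamiltonian on a few sites and evaluates \(S_{A}^{\rm VN}=\langle\hat{H}_{A}(\mathbf{\beta})\rangle+\log Z_{A}(\mathbf{\beta})\). The subsystem size and the parabolic \(\beta_{j}\) values are illustrative placeholders, not the experimentally learned profiles.

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def two_site_term(op, j, L):
    """Kronecker product placing `op` on sites j and j+1 of an L-site chain."""
    mats = [I2] * L
    mats[j], mats[j + 1] = op, op
    return reduce(np.kron, mats)

def deformed_xxz_eh(beta, delta, L):
    """H_A(beta) = sum_j beta_j (SxSx + SySy + Delta SzSz) on the subsystem."""
    H = np.zeros((2 ** L, 2 ** L), dtype=complex)
    for j in range(L - 1):
        H += beta[j] * (two_site_term(sx, j, L) + two_site_term(sy, j, L)
                        + delta * two_site_term(sz, j, L))
    return H

def entropy_from_eh(H):
    """S = <H_A> + log Z for rho_A = exp(-H_A) / Z."""
    evals = np.linalg.eigvalsh(H)
    weights = np.exp(-evals)
    Z = weights.sum()
    p = weights / Z
    return float((p * evals).sum() + np.log(Z))

# Toy example: a parabolic beta profile on a 6-site subsystem (illustrative values).
L = 6
beta = np.array([(j + 1) * (L - 1 - j) for j in range(L - 1)], dtype=float)
print(entropy_from_eh(deformed_xxz_eh(beta, delta=1.0, L=L)))
```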
The BW theorem predicts a linear slope of the temperature profile \(\beta_{j}\sim j\) for a bi-partition of an infinite system into two halves, and we expect this profile to bend over in the presence of a boundary [23]. These expectations are confirmed by our EH learning procedure for the VQE ground state as illustrated for subsystems at the edge, mimicking a half-infinite subsystem, of the ion chain in Fig. 2 (b), in agreement with exact ground state simulations (red dashed-dotted line). The profiles close to the boundary of the ion chain, however, reveal that VQE state preparation is less accurate there, which we attribute to finite-size artifacts due to the low depth of our variational circuit. In order to quantify this effect, we compute the average fidelity of 7-qubit reduced density matrices compared to the corresponding subsystems in the exact ground state (see lower right panel of Fig. 2 (b)), which shows a clear deviation from the exact ground state in the boundary region. We note that the primary contribution to the density matrix stems from spins located in close proximity to the entanglement cut, which clarifies the slight drop in geometric mean fidelity (see Appendix F) at the boundary. Verification.We perform direct experimental verification of the reconstructed density matrices, by employing protocols similar to those described in Refs. [30, 31]. The procedure we employ works as follows. Having obtained \(\rho_{A}(\mathbf{\beta})\) for a given subsystem \(A\), we further split the subsystem \(A\) into 2 subintervals \(A_{1}\) and \(A_{2}\), and compute reduced density matrices for the subinterval \(A_{1}\) via \(\rho_{A_{1}}(\mathbf{\beta})=\mathrm{Tr}_{A_{2}}\left[\rho_{A}(\mathbf{\beta})\right]\). We then cross-verify the model \(\rho_{A_{1}}(\mathbf{\beta})\) against an independently taken data set via a Hilbert-Schmidt fidelity estimation (see [30] and Appendix F for details) and average the resulting fidelity over all connected subintervals \(A_{1}\) of \(A\). Fig. 2 (c), summarizes the results of this verification procedure up to \(L_{A}=12\) sites, for subinterval sizes \(N_{A_{1}}=5\). For both VQE ground state and heated state, the geometric mean fidelity \(\mathcal{F}_{\mathrm{mean}}\) is significantly above 90 %, while the max fidelity \(\mathcal{F}_{\mathrm{max}}\) in the heated state approaches \(\sim\)85% (as defined in Appendix F). Moreover, we perform the same verification procedure not only with respect to independent data sets from the experiment but also to data from theoretical simulations. To this end, we simulate the variational circuits using a time-dependent variational principle with Matrix Product States (MPS) on 51 spins and compute reduced density matrices of subintervals \(\rho_{A_{1}}^{\mathrm{MPS}}\) from the MPS wave function, and with these, we perform direct fidelity estimation with the experimental data. The results, denoted as \(\mathcal{F}_{\mathrm{sim}}\) in Fig. 2 (c), demonstrate that the maximum fidelities between \(\rho_{A_{1}}(\mathbf{\beta})\) and \(\rho_{A_{1}}^{\mathrm{MPS}}\) are consistent with the experimental fidelities within the error bars. Entanglement structure of disjoint subsystems.So far, we have focused on a single connected subsystem and demonstrated the universal applicability of the BW arguments. We now investigate _disconnected_ subsystems \(A\cup B\) with regions \(A\) and \(B\) that are separated by a distance \(d_{12}\), where no universal predictions are available. 
Figure 2: _Entanglement temperature profiles for different subsystem sizes at the critical point \(\Delta=1\)_ (a) Energy distribution of VQE states calculated from the mean and variance of \(\hat{H}\) in the corresponding MPS wave functions. (b) Local inverse temperatures \(\beta_{j}\) for different subsystem sizes in the bulk and at the boundary of the 51-ion chain obtained from EHT (see Appendix). The lower panels (blue curves) show the results for the VQE ground state up to \(L_{A}=20\) sites. Temperature profiles up to \(L_{A}=12\) sites are obtained from an EH ansatz with local fit parameters \(\beta_{j}\) (blue squares) and operator components \(\hat{h}_{j}\) as defined in Fig. 1 (a). For \(L_{A}>12\), we describe the temperature profile with a second-order polynomial \(\beta_{j}=q_{0}+q_{1}j+q_{2}j^{2}\), introducing 3 global fit parameters \(\{q_{m}\}_{m=0}^{2}\). The orange pentagons plotted in the lower right panel of (b) shows the Uhlmann fidelity of the learned \(\rho_{A}(\mathbf{\beta})\) with respect to the corresponding \(\rho_{A}\) from the exact ground state for subsystems of size \(L_{A}=7\) that we sweep through the ion chain. The observed fidelity drop at the edges of the chain causes the learned coefficients \(\beta_{j}\) to deviate from the ones of the exact ground state (red dash-dotted lines) in a boundary region. The coefficients \(\beta_{j}\) observed in the heated state (red diamonds) are consistent with the uniform temperature profile of a thermal Gibbs state with a notable decrease of the local inverse temperature compared to the VQE ground state. (c) Verification of the EHT procedure via cross-fidelity estimation (see Appendix F). Reduced density matrices of size \(L_{A^{\prime}}=5\) are computed from the learned \(\rho_{A}(\mathbf{\beta})\) and cross-verified against independent data taken from the experiment. Circles represent the maximum fidelity \(\mathcal{F}_{\mathrm{max}}^{\mathrm{sim}}\) with respect to theoretical simulations. Error bars have been obtained via Jack-knifing and are smaller than symbols if not shown.

We find that the reduced density operator is well captured by an EH of the form \[\tilde{H}_{A\cup B}=\sum_{ij\in A\cup B}\beta_{ij}\ \hat{\mathbf{S}}_{i}\cdot\hat{\mathbf{S}}_{j}\;, \tag{4}\] where \(\hat{\mathbf{S}}=\left(\hat{S}^{x},\hat{S}^{y},\sqrt{\Delta}\hat{S}^{z}\right)^{T}\) and \(\beta_{ij}=\delta_{i,j-1}\beta_{i}\) whenever \(i\) and \(j\) are within the same sub-subsystem. We analyze disconnected subsystems for the \(\Delta=1\) dataset. As shown in Fig. 3 (a), the intra-subsystem profiles \(\beta_{j}\) approach the expected parabolic shape for large separations, indicating that \(A\) and \(B\) become statistically independent. For short distances, these profiles are modified and acquire an asymmetry, in qualitative agreement with analytic predictions for specific CFTs [32, 33] and reminiscent of an entropic force [27]. We further quantify this effect by calculating the mutual information \(I_{AB}\) shown in Fig. 3 (c), which increases for small \(d_{12}\) and approaches a small constant value for large \(d_{12}\), in agreement with our theoretical simulations. The correctness of the ansatz (4) is verified by again computing the fidelity \(\mathcal{F}_{\text{max}}\), which exceeds \(90\%\), for all values of \(d_{12}\). Omitting the additional terms in the ansatz that connect the subsystems A and B leads to markedly lower fidelities (see Fig. 3 (c)). The values of all necessary fit parameters \(\beta_{ij}\) are shown in Fig.
3 (b), where the emergence (vanishing) of inter-subsystem coupling with decreasing (increasing) distance becomes apparent. To summarize, despite the small size of the subsystems studied here, our findings provide the first experimental evidence in favor of bi-local generalization of Eq. (2), compatible with a CFT calculation for a massless Dirac field [32, 33]. We again emphasize that a general prediction for the EH for disconnected subsystems is presently not available. Applying our approach to different models with a known effective CFT description can therefore help to improve our understanding of entanglement properties for general CFTs.

Figure 3: _Entanglement Hamiltonian of disjoint 5-site subsystems._ (a) Local temperature profiles of disconnected subsystems with separations \(d_{12}=1\) (orange) and \(d_{12}=8\) (blue). The inset indicates the increased inverse temperatures \(\beta_{j}\) of subsystems with separations \(d_{12}=1\) as opposed to subsystems with a separation \(d_{12}=8\), resulting in the decrease of mutual information as demonstrated in panel (b). The inset further illustrates an asymmetry of the temperature profiles which we interpret as a temperature gradient (as numerically observed in [27] in the context of free-fermion models). (b) Colormaps of the EH parameters \(\beta_{ij}\) for different subsystem separations. (c) Upper panel: mutual information \(I_{AB}=S_{A}^{\text{VN}}+S_{B}^{\text{VN}}-S_{AB}^{\text{VN}}\) as a function of subsystem separation \(d_{12}\) together with theoretical predictions from the MPS wave function. Lower panel: maximum fidelity with respect to an independent data set as a function of subsystem separation \(d_{12}\), averaged over many subsystems. The orange lines depict the fidelity of an ansatz \(\tilde{H}_{A}(\mathbf{\beta})\) which includes cross-links \(\beta_{ij}\) with \(i\in A\) and \(j\in B\), while the blue line uses the standard ansatz for two independent subsystems. For small separations, cross-links \(\beta_{ij}\) lead to a significant boost in fidelity. Error bars are smaller than symbols.

## V Conclusions and Outlook The entanglement Hamiltonian provides a powerful tool to study entanglement in correlated quantum matter governed by local Hamiltonians. In case the EH is local, it is not only efficiently learnable from experimental data, but it also provides a readily interpretable 'entanglement temperature' profile providing insights into the entanglement structure of the underlying quantum many-body state. Our work presents the first experimental observation of a local EH in strongly interacting lattice models, as an extension of predictions from BW originally made in the context of ground states of QFTs. Interestingly, we observe that the local structure of the EH is robust and persists over a large range of low-energy states. We have studied low and high energy states of the Heisenberg model with 51 spins in a trapped ion quantum simulator. The local operator structure of the learned EHs is verified by direct fidelity estimation from independent data, as well as numerical simulations providing excellent agreement. Our methods also enable clear observation of the transition from an area law of entanglement to a volume law in excited states, with entanglement temperature profiles transitioning from strongly deformed to near-uniform distributions.
We anticipate that the entanglement characteristics of ground states explained by BW arguments apply to a broad class of many-body systems with local Hamiltonians including higher spatial dimensions and fermionic systems. Furthermore, the toolset used for measuring the operator structure of the entanglement Hamiltonian can be applied to all present-day programmable quantum simulation platforms. These advancements provide a framework for exploring entanglement-related phenomena in experiments systematically. For instance, the approach can be used to recognize topologically ordered phases of matter through entanglement spectroscopy [34; 35; 36], or it can be used to test new concepts providing further insights into entanglement structure [37; 38]. Furthermore, the Ryu-Takayanagi conjecture [39] quantitatively relates entanglement properties of CFTs to the geometry of a dual gravitational theory, enabling the indirect study of gravity through the holographic principle, where recent quantum simulation experiments [40] provide the necessary programmability of interactions. A shift of focus to the EH as the central object of study in investigations of entanglement in many-body systems thus opens the door to a broad class of new physics to be explored on programmable quantum simulators. ## Acknowledgements CK and PZ thank Dries Sels for discussions. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101113690 (PASQuanS2.1). CK, RvB, TVZ, and PZ were supported by the US Air Force Office of Scientific Research (AFOSR) via IOE Grant No. FA9550-19-1-7044 LASCEM, the Austrian Research Promotion Agency (FFG) contract 884471 (ELQO, RvB), and by the Simons Collaboration on Ultra-Quantum Matter, which is a grant from the Simons Foundation (651440, PZ). MJ, FK, CFR, and RB acknowledge the financial support for the experiment from the Austrian Science Fund through the SFB BeyondC (F7110), and the Institut fur Quanteninformation GmbH. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. The computational results presented have been achieved (in part) using the HPC infrastructure LEO of the University of Innsbruck. Simulations were performed using iTensor [41]. ## Author Contribution MJ and FK developed and conducted the experiment under the guidance of RB and CR. CK, RvB, TVZ, and PZ proposed the research and developed the quantum protocols. CK, RvB, and MJ performed the data analysis. CK, RvB, TVZ, and PZ wrote the manuscript, and MJ contributed texts on experimental setups. All authors contributed to the discussion of the results. ## Appendix A Characterizing bi-partite entanglement The main text considers bipartite entanglement properties of the ground and excited states. For a quantum system in a pure state \(\ket{\Psi}\), the entanglement properties with respect to a bipartition \(A\colon\bar{A}\) are specified by the Schmidt decomposition, \(\ket{\Psi}=\sum_{\alpha=1}^{\infty_{A}}\lambda_{\alpha}\ket{\Phi_{A}^{\alpha} }\otimes\ket{\Phi_{A}^{\alpha}}\). Here, \(\lambda_{\alpha}\) are Schmidt coefficients, and the Schmidt rank \(\chi_{A}\) serves as a proxy of entanglement. Defining a reduced density matrix \(\rho_{A}=\mathrm{Tr}\left[\ket{\Psi}\bra{\Psi}\right]\), and an EH \(\tilde{H}_{A}\) as in Eq. (1), we identify the Schmidt vectors \(\ket{\Phi_{A}^{\alpha}}\) with the eigenvectors of \(\rho_{A}\) and \(\tilde{H}_{A}\). 
The entanglement spectrum (ES) is defined via \(\lambda_{A}^{2}=e^{-\xi_{\alpha}}\). Thus, knowledge of \(\rho_{A}\), or equivalently \(\tilde{H}_{A}-\) as provided by sample-efficient learning of the entanglement Hamiltonian in Appendix D - fully specifies bipartite entanglement. Area and volume law scaling are defined via the bipartite Von Neumann entanglement entropy defined as \(S_{A}^{\mathrm{VN}}=-\mathrm{Tr}\left(\rho_{A}\log\rho_{A}\right)\), or in terms of the EH as \(S_{A}^{\mathrm{NN}}=\mathrm{Tr}(\rho_{A}\tilde{H}_{A})+\log Z_{A}\). Area law behavior, as is characteristic for many-body ground states, is identified as the scaling \(S_{A}^{\mathrm{VN}}\propto L_{A}^{d-1}\) (or \(\log L_{A}\) for \(d=1\) at critical points), where \(L_{A}\) is the linear extent of the subsystem \(A\) and \(d\) denotes the number of spatial dimensions. In contrast, volume law scaling is given by \(S_{A}\propto V_{A}=L_{A}^{d}\), as expected, e.g. for a thermodynamic entropy. For the \(d=1\) Heisenberg model, area and volume law scaling are observed for (approximate) ground and excited states, respectively, in Fig. 1 (c). ## Appendix B Approximated power-law interactions The experimental platform is discussed in previous references [2; 42]. In our trapped-ion quantum simulator, an approximated power-law type spin-spin interaction is engineered by laser fields driving the electronic and transverse motional degrees of freedom of the trapped ions, allowing us to perform entangling steps in the variational optimization loop. The interaction is described by \[\hat{H}_{\mathrm{XY}}=\sum_{i,j>i}J_{ij}(\hat{\sigma}_{i}^{+}\hat{\sigma}_{j} ^{-}+\hat{\sigma}_{i}^{-}\hat{\sigma}_{j}^{+}), \tag{1}\] with \(J_{ij}\simeq J_{0}/|i-j|^{\alpha}\). The power-law exponent \(\alpha\) is controlled by adjusting the laser frequency of a three-tone laser field that manipulates electronic and transverse motional degrees of freedom of the ion chain [43]. For the current study, the exponent is tuned to \(\alpha\approx 0.82\). In the numerical simulations, we used a spin-spin coupling matrix constructed from the experimental measurements to compare the experiment and theory. In the previous studies [2; 42], the influence of the third tone, which is used for compensating the light-induced shifts on the spectator electronic levels, on the spin-spin coupling for long ion strings was neglected. Here, we revise the calculation of the \(J_{ij}\) matrix and compare the experimentally measured spin-spin coupling matrix with the calculated one. For this calculation, we use the following expression after taking all motional modes (\(2N\)) and their respective coupling to each frequency component of the laser beam into account. Here, the coupling between the \(i^{\text{th}}\) and \(j^{\text{th}}\) ion is described by \[J_{ij} =\frac{\hbar k^{2}}{4m}\sum_{n=1}^{2N}M_{i}^{(n)}M_{j}^{(n)}\Bigg{[} \frac{\Omega_{i}^{(\text{B}/\text{R})}\Omega_{j}^{(\text{B}/\text{R})}}{\omega _{\text{B}/\text{R}}^{2}-\omega_{n}^{2}}+\frac{1}{2}\frac{\Omega_{i}^{(\text{ C})}\Omega_{j}^{(\text{C})}}{\omega_{\text{C}}^{2}-\omega_{n}^{2}}\Bigg{]}\,, \tag{3}\] where \(M_{i}^{(n)}\) is the normalised mode amplitude of the \(i^{\text{th}}\) ion and the \(n^{\text{th}}\) motional mode with mode frequency \(\omega_{n}\)[44]. \(\omega_{\text{R}/\text{B}/\text{C}}\) and \(\Omega_{\text{R}/\text{B}/\text{C}}\) are the detunings and Rabi frequencies of the laser beam from the two-level atomic transition. 
Here, subscripts R, B, and C denote three frequencies of the laser beam that contains red-detuned, blue-detuned, and compensation beams. \(k\) is the wavenumber of the 729 nm laser beam and \(m\) is the mass of the calcium ion. In our experiments, the laser is detuned by \(\pm(\omega_{\text{COM}}+2\pi\times 25\text{ kHz})\) from the carrier transition, while the center-of-mass mode frequency is \(\omega_{\text{COM}}=2\pi\times 2.93\text{ MHz}\). Experimentally measured nearest-neighbor terms of the engineered spin-spin interaction are shown in Fig. 4 (a), where solid lines are theory results. The engineered spin-spin interaction is then used to experimentally examine the quench dynamics of a single spin initialized to spin up in the middle of the ion chain while having all others in the spin-down state. The flip-flop interaction coherently drives the excitation to other locations of the ion chain while keeping the total magnetization conserved. The experimental results are shown in Fig. 4 (b), where solid lines are numerical results. The full numerically simulated \(J_{ij}\) matrix for the experimentally measured parameters is shown in Fig. 4 (c). ## Appendix C System characterization In our experimental platform, the input state is prepared with a global \(X(\pi/2)\) gate over all ions, followed by an operation with a far-detuned laser beam that is tightly addressed over even ion sites out of all 51 ions while performing light shift gates. The unaddressed ions are prepared to spin down, while addressed ions are prepared to spin up after applying another global \(X(-\pi/2)\) gate. The state preparation is subjected to inevitable dephasing processes, thus a spin-echo sequence is employed while splitting the sequential addressing operation into two parts to improve state preparation. The state preparation fidelity \(\mathcal{F}=|\bra{\downarrow\uparrow\downarrow\uparrow...\ket{\psi_{0}}}|^{2}\) is measured to be 0.75(7) for the whole ion chain, which corresponds to a single particle state preparation fidelity of 0.994(2). In the fidelity estimation, the error bars account for fluctuations over different days of measurement outcomes. An important remark: here, we perform direct fluorescence measurements (i.e. the measurements on the \(z\) basis) and detect the individual ion on the EMCCD and compare the measured bit-string to the ideal state to calculate the fidelity. From our independent measurements, the detection error is measured to be smaller than \(10^{-3}\) per particle, thus the drop in the present fidelity can be assigned to the state preparation. In addition to the state preparation error discussed above, there are also measurement errors in our experimental system: here, we discuss the results of two-qubit tomography performed for all nearest-neighbor pairs of the Neel state (the input state for the variational optimization). The average fidelity of the reconstructed two-qubit state is estimated to be \(\mathcal{F}_{\text{two-qubit}}=0.980(5)\), i.e. \(\mathcal{F}_{\text{single-qubit}}=0.989(5)\). This estimated fidelity also accounts for the state preparation discussed previously. To examine the major source of error in the measurements, we perform direct single-qubit tomography of the same input state while utilizing only global \(x\), \(y\), and \(z\) base measurements, in contrast to the two-qubit case where local rotations are performed to account for all 9 bases measurements. 
Here, the average single-qubit state fidelity is estimated to be 0.994(3), which is higher than the estimates from the two-qubit tomography reconstruction. In summary, these analyses imply that in our system the leading errors, while carrying out measurements into different bases, arise from the local rotations which are performed with the help of a tightly focused beam. ## Appendix D Entanglement Hamiltonian tomography and error mitigation To reconstruct a reduced density matrix \(\hat{\rho}_{A}\) from experimental data, and to gain insight into its entanglement structure, we use the procedure of entanglement Hamiltonian tomography (EHT) as introduced in Ref. [23]. Here, the reduced density matrix is assumed to be of the form \[\rho_{A}(\mathbf{\beta})=\exp\left(-\sum_{j\in A}\beta_{j}\hat{h}_{j}\right)/Z_{A}( \mathbf{\beta}). \tag{4}\] Figure 4: _Effective interactions and spin dynamics for a 51-ion chain_ (a) Experimentally measured nearest-neighbor interaction terms \(J_{i,i+1}\), compared to theoretical calculations (solid line). (b) The quench dynamics of a single spin initialized to a spin-up state in the middle of the ion chain, while other spins initialized to a spin-down state, under the engineered flip-flop type interaction plotted in discs and solid lines are numerical results. (c) Theoretically calculated interaction matrix for the experimental parameters. The parameters \(\beta_{j}\) are free variables to be fitted to the data, and \(Z(\mathbf{\beta})=\mathrm{Tr}[\exp(-\sum_{j\in A}\beta_{j}\hat{h}_{j})]\) is a constant that ensures trace normalization. The operators \[\hat{h}_{j}=\frac{J}{2}(\hat{S}_{j}^{+}\hat{S}_{j+1}^{-}+\mathrm{H.c.})+\Delta \hat{S}_{j}^{z}\hat{S}_{j+1}^{z} \tag{10}\] are associated with a link between two adjacent sites \(j,j+1\in A\), and form a decomposition of the Heisenberg Hamiltonian, i.e., such that \(\hat{H}=\sum_{j}\hat{h}_{j}\). State preparation and measurement errors in the experiment can be accounted for by applying a quantum operation consisting of a depolarizing map with rate \(p_{1}\), and spontaneous emission from \(\ket{\uparrow}\) to \(\ket{\downarrow}\) with rate \(p_{2}\), of the form \[\mathcal{D}_{p}[\rho_{A}(\mathbf{\beta})]=\prod_{i=1}^{N_{A}}\mathcal{D}_{p}^{(i) }\rho_{A}(\mathbf{\beta}), \tag{11}\] where \(\mathcal{D}_{p}^{(i)}\) is a quantum operation, applied to particle \(i\) of a density matrix \(\hat{\rho}\): \[\mathcal{D}_{p}^{(i)}\hat{\rho}=\sum_{k}E_{k}\hat{\rho}E_{k}^{\dagger}, \tag{12}\] with the following \(E_{k}\) acting on particle \(i\): \[E_{0} =\sqrt{1-3p_{1}/4}\ket{\downarrow}\bra{\downarrow}+\sqrt{1-3p_{1 }/4-p_{2}}\ket{\uparrow}\bra{\uparrow} \tag{13}\] \[E_{1} =\sqrt{\frac{p_{1}}{4}}\hat{\sigma}^{x},\ \ E_{2}=\sqrt{\frac{p_{1}}{4}}\hat{ \sigma}^{y},\ \ E_{3}=\sqrt{\frac{p_{1}}{4}}\hat{\sigma}^{z},\] (14) \[E_{4} =\sqrt{p_{2}}\hat{\sigma}^{-}=\sqrt{p_{2}}\ket{\downarrow}\bra{ \uparrow}. \tag{15}\] The rates \(p_{1}\) and \(p_{2}\) are calibrated through an analysis of the total magnetization of the system after applying the state preparation circuit. Since the circuit conserves the magnetization, any deviations from the intended initial state magnetization can be attributed to the decoherence channel, and the values of \(p_{1}\) and \(p_{2}\) can be determined from the variance and mean of the magnetization. 
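As a minimal illustration of how the ansatz \(\rho_{A}(\mathbf{\beta})\) and the single-site error channel \(\mathcal{D}_{p}\) defined above combine (this is not the fitting code used for the experimental data), the following sketch constructs \(\rho_{A}(\mathbf{\beta})\) from the XXZ link operators and applies the product channel \(\prod_{i}\mathcal{D}_{p}^{(i)}\). The 4-site subsystem, the chosen \(\beta_{j}\) values, and the rates \(p_{1}=0.02\), \(p_{2}=0.01\) are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

# Basis order per site: (|up>, |down>)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
sminus = np.array([[0, 0], [1, 0]], dtype=complex)   # |down><up|
id2 = np.eye(2)
Sx, Sy, Sz = sx / 2, sy / 2, sz / 2                  # spin-1/2 operators

def embed(op, j, L):
    mats = [id2] * L
    mats[j] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def rho_ansatz(betas, delta=1.0):
    """rho_A(beta) = exp(-sum_j beta_j h_j) / Z_A with XXZ link operators h_j."""
    L = len(betas) + 1
    H = sum(b * (embed(Sx, j, L) @ embed(Sx, j + 1, L)
                 + embed(Sy, j, L) @ embed(Sy, j + 1, L)
                 + delta * embed(Sz, j, L) @ embed(Sz, j + 1, L))
            for j, b in enumerate(betas))
    rho = expm(-H)
    return rho / np.trace(rho)

def kraus_ops(p1, p2):
    """Single-site Kraus operators: depolarization (rate p1) plus decay |up> -> |down> (rate p2)."""
    E0 = np.diag([np.sqrt(1 - 3 * p1 / 4 - p2), np.sqrt(1 - 3 * p1 / 4)]).astype(complex)
    return [E0, np.sqrt(p1 / 4) * sx, np.sqrt(p1 / 4) * sy,
            np.sqrt(p1 / 4) * sz, np.sqrt(p2) * sminus]

def apply_channel(rho, p1, p2, L):
    """Apply the product channel prod_i D_p^(i) to an L-site density matrix."""
    for i in range(L):
        rho = sum(embed(E, i, L) @ rho @ embed(E, i, L).conj().T
                  for E in kraus_ops(p1, p2))
    return rho

betas = [2.0, 4.0, 2.0]                              # illustrative parabola on a 4-site subsystem
rho = rho_ansatz(betas)
rho_noisy = apply_channel(rho, p1=0.02, p2=0.01, L=4)
print("trace preserved:", np.isclose(np.trace(rho_noisy).real, 1.0))
print("purity without / with noise:",
      np.trace(rho @ rho).real, np.trace(rho_noisy @ rho_noisy).real)
```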
Experimental data is collected as bit strings measured in the computational (Z) basis after applying unitary basis rotations \[U^{(\mathbf{\alpha})}=\bigotimes_{j\in A}u_{j}^{(\alpha_{j})} \tag{16}\] where \(\hat{u}_{j}^{(\alpha_{j})}\) are single particle basis rotations applied at site \(j\), rotating from the \(\alpha_{j}=x,y,z\) basis to the \(Z\) basis. We use \(3^{5}\) different measurement settings \(\mathbf{\alpha}\), corresponding to a tomographically complete basis set for all contiguous 5-site subsystems of the 51 ion chain (and in particular of subsystem A). While these measurements are not tomographically complete for larger subsystems, they still yield enough information to reliably fit the restricted ansatz Eq. (10). In each basis, we take \(10^{2}\) measurements, each measurement comprising a single bit string in the computational basis. The free parameters \(\beta_{j}\) of the ansatz (10) are subsequently fitted to the acquired data in a least-squares sense, i.e., minimizing a cost function \[\chi^{2} =\!\sum_{\mathbf{\alpha}}\sum_{\mathbf{s}}\left[P_{\mathbf{s}}^{(\mathbf{\alpha} )}\!-\!\mathrm{Tr}\left(\mathcal{D}_{p}[\rho_{A}(\mathbf{\beta})]U^{(\mathbf{\alpha} )}\ket{\mathbf{s}}\bra{\mathbf{s}}U^{(\mathbf{\alpha})\dagger}\right)\right]^{2}, \tag{17}\] where \(P_{\mathbf{s}}^{(\mathbf{\alpha})}\) is the experimentally observed probability for measuring bit string \(\mathbf{s}\) after applying the basis transformation \(\mathbf{\alpha}\). This cost function represents the difference between the observed frequencies of occurrence of the bit strings, and the corresponding expectation values from the density matrix ansatz \(\mathcal{D}_{p}[\rho_{A}(\mathbf{\beta})]\) for a particular choice of parameters \(\mathbf{\beta}\). Finally, we note that the EHT procedure effectively provides a built-in method for error mitigation, i.e., it is possible to filter out the effects of decoherence. Namely, since the contributions of decoherence are explicitly fitted in the form of the quantum operation \(\mathcal{D}_{p}\), we are thus able to isolate the coherent part \(\rho_{A}(\mathbf{\beta})\) from the experimental data. ## Appendix E Data post-processing The XXZ-model studied in the main text exhibits a global \(\mathbb{Z}_{2}\)-symmetry \[[\hat{H},\hat{\mathcal{P}}]=0\text{ with }\hat{\mathcal{P}}=\bigotimes_{i}\hat{S }_{i}^{x} \tag{18}\] resulting in the situation that eigenstates with opposite total magnetization \(\ket{\Phi_{n}}\) and \(\hat{\mathcal{P}}\ket{\Phi_{n}}\) are degenerate. The variational circuits used in the experiment (see Appendix G) conserve total magnetization, hence starting with an initial state \(\ket{\Psi_{0}}\) of given magnetization, the circuit is only able to prepare approximations to one of the states \(\ket{\Psi_{G}}\) or \(\hat{\mathcal{P}}\ket{\Psi_{G}}\). The original prediction of BW, however, is only valid for systems with a _unique_ ground state. The ground state (GS) degeneracy of the XXZ model, on the other hand, depends on the total number of sites \(N\). For even \(N\) the GS is unique with magnetization \(M=0\), while for odd \(N\) the GS is two-fold degenerate with \(M=\pm 1\). In the limit \(N\to\infty\), this distinction vanishes in the sense that the corresponding reduced density matrices \(\rho_{A}\) converge to a single result. 
In order to test the BW prediction for the experimental system with an _odd_ number of sites, we approximate a pure ground state with mean magnetization given by the superposition \(\ket{\Psi_{G}}+\hat{P}\ket{\Psi_{G}}\), where \(\ket{\Psi_{G}}\) is one of the \(M=\pm 1\) ground states. Explicitly, we calculate observables instead for the mixture \(\ket{\Psi_{G}}\bra{\Psi_{G}}+\hat{\mathcal{P}}\ket{\Psi_{G}}\bra{\Psi_{G}} \hat{\mathcal{P}}\). Independent of the experimental analysis, we have numerically confirmed that this procedure converges to the correct expectation values in the bulk in the limit \(N\to\infty\). ## Appendix F Verification To verify the learning procedure, we determine a (mixed-state) fidelity between the experimental quantum state under study, described by the density matrix \(\rho_{\mathrm{exp}}\equiv\rho_{1}\), and the reconstructed density matrix from EHT, \(\rho_{A}(\mathbf{\beta})\equiv\rho_{2}\). In particular, we analyze the reconstructed density matrix in terms of two different Hilbert-Schmidt fidelities defined in [45], given by the _maximum_ fidelity \[\mathcal{F}_{\mathrm{max}}(\rho_{1},\rho_{2})=\frac{\mathrm{Tr}(\rho_{1}\rho_{2} )}{\max\{\mathrm{Tr}(\rho_{1}^{2}),\mathrm{Tr}(\rho_{2}^{2})\}}, \tag{19}\] and the _geometric mean_ fidelity \[\mathcal{F}_{\mathrm{mean}}(\rho_{1},\rho_{2})=\frac{\mathrm{Tr}(\rho_{1}\rho_ {2})}{\sqrt{\mathrm{Tr}(\rho_{1}^{2})\mathrm{Tr}(\rho_{2}^{2})}}, \tag{20}\] which both measure the overlap between \(\rho_{1}\) and \(\rho_{2}\), normalized by their purities. As shown in Ref. [30], terms of the form \(\mathrm{Tr}(\rho_{i}\rho_{j})\) for \(i,j=1,2\), as occurring in Eqs. (11)-(12), can be evaluated from outcomes of measurements performed in sufficiently many measurement bases. Whereas Ref. [30] suggests the use of randomized measurement bases [31], we use here the set of tomographically complete Pauli measurements for all contiguous subsystems of size \(5\), i.e. the same measurement bases used for the EHT protocol described above. Specifically, we denote by \(P_{\mathbf{\alpha}}^{(1)}(\mathbf{s})\) the frequency of having observed a particular bitstring \(\mathbf{s}\) in the experiment (where \(\rho_{\mathrm{exp}}\) is realized and Pauli basis rotation \(\hat{U}^{(\alpha)}\) has been applied) and \(P_{\mathbf{\alpha}}^{(2)}(\mathbf{s})=\mathrm{Tr}\left(\rho_{A}(\mathbf{\beta})U^{(\alpha )}\left|\mathbf{s}\right\rangle\left\langle\mathbf{s}\right|U^{(\alpha)\dagger}\right)\), i.e. the expectation value of observing bitstring \(\mathbf{s}\) in the reconstructed state \(\rho_{A}(\mathbf{\beta})\). Then, we obtain the overlap \(\mathrm{Tr}(\rho_{i}\rho_{j})\) for \(i=1,j=2\) and purities \(\mathrm{Tr}(\rho_{i}\rho_{j})\) for \(i=j=1,2\) via [30] \[\mathrm{Tr}(\rho_{i}\rho_{j})=\frac{2^{N_{A}}}{N_{\mathbf{\alpha}}}\sum_{\mathbf{ \alpha}}\sum_{\mathbf{s},\mathbf{s}^{\prime}}(-2)^{-\mathcal{D}[\mathbf{s},\mathbf{s}^{\prime }]}P_{\mathbf{\alpha}}^{(i)}(\mathbf{s})P_{\mathbf{\alpha}}^{(j)}(\mathbf{s}^{\prime}), \tag{13}\] where the Hamming distance \(\mathcal{D}[\mathbf{s},\mathbf{s}^{\prime}]\) between two strings \(\mathbf{s}\) and \(\mathbf{s}^{\prime}\) is defined as the number of entries where \(s_{k}\neq s^{\prime}_{k}\), i.e. \(\mathcal{D}[\mathbf{s},\mathbf{s}^{\prime}]\equiv\#\left\{k\in\{1,\ldots,N_{A}\} \,|\,s_{k}\neq s^{\prime}_{k}\right\}\) (see also Refs. [46; 47]). Eq. 
(13) provides a direct experimental verification of the reconstructed density matrix via the Hilbert-Schmidt fidelity, requiring no further theory input such as simulations, and can be evaluated from the same type of measurements employed for EHT. Importantly, however, the measurements used for fidelity estimation should be independent from those used in EHT, to avoid false correlations and biasing. We, therefore, split the total dataset for each quantum state into two, where one-half of the data is used to reconstruct \(\rho_{A}(\mathbf{\beta})\), and the other half is subsequently used to evaluate the fidelity (11). To evaluate the purity \(\mathrm{Tr}(\rho_{1}\rho_{1})\), the verification data set is split once more into two sets to perform an unbiased estimation. That is, in Eq. (13), we evaluate \(P_{\mathbf{\alpha}}^{(j)}(\mathbf{s})\) from one dataset, and \(P_{\mathbf{\alpha}}^{(j)}(\mathbf{s}^{\prime})\) from the other. Finally, since the measurement basis set is only tomographically complete for contiguous subsystems of size \(5\), and since we have taken only a limited number of measurements, we have found that the fidelity estimation becomes inaccurate for subsystems larger than \(5\). For subsystems of size \(L_{A}>5\) we therefore compute the averaged \(5\)-site fidelity for all contiguous sub-subsystems contained in the subsystem, as a measure of the fidelity of the total subsystem. ## Appendix G Variational state preparation The experiment prepares variational quantum states [48] by alternatingly applying two types of unitaries. The first type of operation consists of an entangling operation of the form \(\hat{U}_{XY}(\theta)=\exp(-\mathrm{i}\theta\hat{H}_{XY})\), which applies for a variable duration \(\theta\) the native interaction Hamiltonian described in Eq. (11). The second type of operation consists of single particle rotations, applied to each second site, of the form \(\hat{U}_{Z}(\theta)=\exp\left(-\mathrm{i}\sum_{k=1}^{\left\lfloor N/2\right\rfloor }\hat{\sigma}_{k}^{2}\theta/2\right)\). Starting from an initial Neel state \(\ket{\psi_{0}}=\ket{\downarrow\uparrow\downarrow\ldots}\), the variational quantum states are thus of the form \[\ket{\Psi(\mathbf{\theta})}=\hat{U}_{Z}(\theta_{M})\hat{U}_{XY}(\theta_{M-1}) \cdots\hat{U}_{Z}(\theta_{2})\hat{U}_{XY}(\theta_{1})\ket{\psi_{0}}. \tag{14}\] The parameters \(\mathbf{\theta}\) of the variational state are optimized in a feedback loop with a classical computer running an optimization algorithm that attempts to minimize the expectation value of the energy of the state under the Heisenberg Hamiltonian \(\hat{H}\). From 30 measurements in each of the \(X,Y,Z\) bases, the quantity \(\bra{\Psi(\mathbf{\theta})}\hat{H}\ket{\Psi(\mathbf{\theta})}\) is estimated, serving as a cost function for the classical optimizer. We use a variant of the SPSA algorithm [49], enhanced with a Gaussian process surrogate model [50]. The search is warm-started with an initial guess consisting of optimal parameters from a numerical exact optimization for \(13\) particles. The variational optimization on the experiment serves to refine these parameters. We reach energies, normalized to the total spectral range, of \(0.048(2)\) for \(\Delta=1\) and \(0.056(8)\) for \(\Delta=1.7\). For the 'heated states' discussed in the main text, we apply one additional quench with the native entangling operation \(\hat{H}_{XY}\). This quench is applied to the initial state, before running the variational circuit with optimal parameters. 
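The structure of this variational ansatz can be made concrete with a small state-vector sketch. In the snippet below, the chain length \(N=8\), the power-law exponent \(\alpha=0.82\), the random layer angles, and the reading of the single-particle rotations as \(\hat{\sigma}^{z}\) rotations on every second site are illustrative assumptions; the experiment uses 51 ions and classically optimized parameters.

```python
import numpy as np
from scipy.linalg import expm

# Basis order per site: (|up>, |down>); site 0 is the most significant factor of the kron product.
N, alpha, J0, delta = 8, 0.82, 1.0, 1.0
sp = np.array([[0, 1], [0, 0]], dtype=complex)       # sigma^+
sm = sp.conj().T                                     # sigma^-
sz = np.diag([1.0, -1.0]).astype(complex)
Spin = {'x': np.array([[0, 1], [1, 0]], dtype=complex) / 2,
        'y': np.array([[0, -1j], [1j, 0]]) / 2,
        'z': sz / 2}
id2 = np.eye(2)

def embed(op, j):
    mats = [id2] * N
    mats[j] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Native flip-flop interaction with approximate power-law couplings J_ij = J0 / |i-j|^alpha
H_xy = sum((J0 / abs(i - j) ** alpha)
           * (embed(sp, i) @ embed(sm, j) + embed(sm, i) @ embed(sp, j))
           for i in range(N) for j in range(i + 1, N))

# Target XXZ Hamiltonian whose energy serves as the variational cost function
H_xxz = sum(c * embed(Spin[a], j) @ embed(Spin[a], j + 1)
            for j in range(N - 1) for a, c in (('x', 1), ('y', 1), ('z', delta)))

Z_half = sum(embed(sz, j) for j in range(1, N, 2))   # sigma^z on every second site

def vqe_state(thetas):
    """Alternating layers U_XY(theta) and U_Z(theta) applied to the Neel state."""
    psi = np.zeros(2 ** N, dtype=complex)
    psi[int('10' * (N // 2), 2)] = 1.0               # |down up down up ...>
    for layer, th in enumerate(thetas):
        gen = H_xy if layer % 2 == 0 else Z_half / 2
        psi = expm(-1j * th * gen) @ psi
    return psi

rng = np.random.default_rng(1)
psi = vqe_state(rng.uniform(0.0, 0.5, size=6))       # 6 layers with random (unoptimized) angles
print("<H_XXZ> =", np.real(psi.conj() @ (H_xxz @ psi)).round(4))
```

In an optimization loop, the printed energy would be estimated from measurements in the \(X\), \(Y\), and \(Z\) bases and minimized over the angles, as described above.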
We found that quenches to the initial state are more effective at heating the state than similar length quenches applied _after_ executing the circuit. The duration of the heating quenches is \(1.87\) ms and \(1.5\) ms for \(\Delta=1\) and for \(\Delta=1.7\), respectively. For these quenches, the measured energies on the quantum simulator are \(0.37(1)\) and \(0.197(4)\). We note that there also exist deterministic state preparation protocols for eigenstates of the XXZ chain, making use of the integrability of the model, in terms of so-called algebraic Bethe circuits [51]. Given the requirement of a universal gate set to implement these circuits, we have employed simpler short-depth variational circuits here. ## Appendix H Bisognano-Wichmann theorem and extensions ### The Entanglement Hamiltonian in Quantum Field Theories Entanglement properties of quantum many-body systems can rarely be described analytically. A notable exception is given by the Bisognano-Wichmann (BW) theorem [52; 15; 16], which applies to the ground state \(\ket{\Omega}\) of any relativistic quantum field theory (RQFT) in \((d+1)\)-dimensional Minkowski space. Given the underlying Hamiltonian \(\hat{H}=\int\mathsf{d}^{d}x\mathscr{H}(x)\) where \(\mathscr{H}(x)=T^{00}(x)\) is determined by the energy-momentum tensor \(T^{\mu\nu}(x)\), the reduced density operator as given in Eq. (1) of the main text can be calculated exactly for the special case of a bi-partition of space into two halves, \(\mathbb{R}^{d}=A\cup B\) with \(A=\{x=(x_{1},\ldots,x_{d})\in\mathbb{R}^{d}|x_{1}>0\}\). The corresponding EH takes the form of a 'deformation' of \(\hat{H}\), \[\tilde{H}_{A}=\int_{x\in A}\mathsf{d}^{d}x\,\beta(x)\mathscr{H}(x)-F\;, \tag{15}\] with a linearly increasing 'local inverse temperature' \[\beta(x)=\beta^{\mathrm{BW}}(x)=2\pi x_{1}, \tag{16}\] and a normalization constant \(F\). The classic result of BW can be extended when the QFT exhibits conformal invariance [20; 21], which allows to map the half-space into other regions \(A\subset\mathbb{R}^{d}\). In particular, for a solid sphere of radius \(R\), i.e. \(A=\{x\in\mathbb{R}^{d}|x^{2}\leq R^{2}\}\), the EH is again local with \[\beta(x)=\beta^{\text{CFT}}(x)=2\pi\frac{(R^{2}-x^{2})}{2R}\;. \tag{11}\] We emphasize that the predictions in Eqs. (10) and (11) are valid in arbitrary spatial dimensions \(d\) and only require Lorentz or conformal invariance of the QFT, respectively. Translating these analytical results from the continuum to a spatial lattice suggests approximate linear and parabolic deformations such as \[\beta^{\text{BW}}(x) \rightarrow \beta^{\text{BW}}_{n}\sim n\;,\qquad\qquad\qquad n=0,1,2,\ldots\;, \tag{12}\] \[\beta^{\text{BW}}(x) \rightarrow \beta^{\text{CFT}}_{n}\sim n(N-n)\;,\quad n=-N,\ldots N \tag{13}\] written here for \(d=1\) for simplicity [see also Eq. (2)]. For disconnected intervals as studied in the main text, no generally applicable extension of the BW theorem is presently known. For reference, we state here the result for a massless Dirac field on a line with \(A=A_{+}\cup A_{-}\subset\mathbb{R}\) and two intervals \(A_{\pm}=(\pm a,\pm b)\)[32; 33] \[\tilde{H}_{A}=\int_{x\in A}dx\left[\beta_{\text{loc.}}(x)\mathscr{H}(x)+ \beta_{\text{bi-loc.}}(x)\mathscr{H}_{\text{bi-loc.}}(x,x_{\text{c}}(x))\right]\;. 
\tag{14}\] Here, \(\mathscr{H}_{\text{bi-loc.}}(x,x_{c})\) is a bi-local operator that connects every \(x\in A_{\pm}\) with a conjugate point \(x_{c}(x)=-ab/x\in A_{\mp}\), and the spatial deformations take the form \[\beta_{\text{loc.}}(x) =\frac{(b^{2}-x^{2})(x^{2}-a^{2})}{2(b-a)(ab+x^{2})}\;, \tag{15}\] \[\beta_{\text{bi-loc.}}(x) =\frac{ab}{x(ab+x^{2})}\beta_{\text{loc.}}(x)\;. \tag{16}\] Note that the local function \(\beta_{\text{loc.}}(x)\) interpolates between the expected parabolic shapes for a single subsystem when \(A_{+}\) and \(A_{-}\) touch (\(b=R\) with \(a\to 0\)) and two independent subsystems when \(A_{+}\) and \(A_{-}\) are far away (\(b=a+2R\) with \(a\rightarrow\infty\)), while the bi-local function \(\beta_{\text{bi-loc.}}(x)\) vanishes in both limits. ### Applicability to general low-energy states Predictions about the spatial structure of the Entanglement Hamiltonian (EH) from RQFT and Conformal Field Theory (CFT) apply to _ground states_ of local theories (see main text). In the present paper, we study the EH for subsystems of variationally prepared quantum many-body states on a trapped-ion quantum simulator. We find that these states can be represented as superpositions of a finite number of low-lying eigenstates of the target model. While numerical verification of the local structure of the EH has been established for ground states of quantum lattice models [22], it is less clear if such findings hold true for low-lying excited states or their superpositions. In the following, we study the EH's spatial profile in low-lying eigenstates of a 51-site XXZ chain and verify its local structure using fidelity estimations with respect to the exact density matrices. Specifically, we compute excited states of the 51-site XXZ model using the Density Matrix Renormalization Group (DMRG) with matrix product states (MPS). In particular, for calculating the \(k^{\text{th}}\) excited state, we modify the XXZ Hamiltonian \(\hat{H}\) by adding projectors on the \(k-1\) pre-computed eigenstates \(\ket{\phi_{k}}\), with a constant weight factor \(w\): \[\hat{H}\rightarrow\hat{H}+w\sum_{q<k}\ket{\phi_{q}}\bra{\phi_{q}}. \tag{17}\] Within this scheme, DMRG minimizes the energy of \(\hat{H}\) while simultaneously minimizing the overlap with all previously computed excited states. Using this strategy, we compute MPS representations for the lowest 200 excited states of the XXZ model. In Fig. 5 (a) we performed EHT for subsystems of \(L_{A}=9\) sites for the lowest 200 excited states. As can be seen, each of the individual eigenstates exhibits a parabolically deformed EH, where the temperature profile for higher-lying states acquires a flat plateau in the bulk of the subsystem. In order to verify the validity of the local temperature profiles, in Fig. 5 (b) we compute the Uhlmann fidelity \(\mathcal{F}(\rho_{1},\rho_{2})=\left(\text{Tr}\sqrt{\sqrt{\rho_{1}}\rho_{2} \sqrt{\rho_{1}}}\right)^{2}\) between the density matrices \(\rho_{A}(\mathbf{\beta})\), obtained from EHT, and the exact density matrices \(\rho_{A}=\text{Tr}_{\hat{A}}\left(\ket{\Phi_{k}}\bra{\Phi_{k}}\right)\). While the overall fidelities consistently exceed \(\sim 94\%\) for the lowest 200 eigenstates, Fig. 5 (b) reveals that the spread in fidelity increases for higher-lying excited states. However, even at the highest energies investigated, eigenstates can be found where the reduced density matrix can be described by a local EH with up to \(\sim 99\%\) fidelity. 
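Returning to the analytic deformations quoted above for the massless Dirac field, the interpolation between the touching and far-separated limits can be traced directly from Eqs. (15)-(16). In the short sketch below, the choice \(b=a+2R\) with fixed half-width \(R=1\) and the sampled values of \(a\) are illustrative assumptions used only to show how the local deformation approaches the single-interval parabola and how the bi-local weight vanishes in both limits.

```python
import numpy as np
import matplotlib.pyplot as plt

def beta_local(x, a, b):
    """Local deformation for the two intervals A_+- = (+-a, +-b), Eq. (15)."""
    return (b**2 - x**2) * (x**2 - a**2) / (2 * (b - a) * (a * b + x**2))

def beta_bilocal(x, a, b):
    """Bi-local deformation coupling x to its conjugate point x_c = -a*b/x, Eq. (16)."""
    return a * b / (x * (a * b + x**2)) * beta_local(x, a, b)

R = 1.0                                   # fixed interval half-width (illustrative)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
for a in (0.05, 0.5, 3.0):                # from nearly touching to well separated
    b = a + 2 * R
    x = np.linspace(a + 1e-3, b - 1e-3, 400)   # points inside the right interval A_+
    ax1.plot(x - a, beta_local(x, a, b), label=f"a = {a}")
    ax2.plot(x - a, beta_bilocal(x, a, b), label=f"a = {a}")
ax1.set_title("beta_loc(x)")
ax2.set_title("beta_bi-loc(x)")
for ax in (ax1, ax2):
    ax.set_xlabel("x - a")
    ax.legend()
plt.tight_layout()
plt.show()
```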
## Appendix I Numerical justification of the data post-processing procedure As discussed in the main text, in our experiment we prepare approximate ground states \(\ket{\Psi_{G}}\) of the XXZ model for \(N=51\) sites in different parameter regimes. These states have good quantum numbers with respect to total magnetization \(\hat{M}=\sum_{j}\hat{S}_{j}^{z}\) but are not eigenstates of the global \(\mathbb{Z}_{2}\) symmetry operator \(\hat{\mathcal{P}}=\bigotimes_{j}\hat{S}_{j}^{x}\). This is because our variational circuits prepare states with a fixed finite magnetization \(M\neq 0\), while the global \(\mathbb{Z}_{2}\) operation changes the magnetization \(M\) to \(-M\). In Appendix H, we discuss a data post-processing technique that effectively simulates the presence of a state of the form \(\ket{\Psi_{G}^{\text{sym}}}=\frac{1}{\sqrt{2}}\left(\ket{\Psi_{G}}+\hat{ \mathcal{P}}\ket{\Psi_{G}}\right)\) in experimental measurements, restoring the global \(\mathbb{Z}_{2}\) symmetry and facilitating the investigation of the BW predictions through an analysis of a unique ground state (see Methods). This is achieved via an approximation, modifying the experimentally obtained sample data to make them appear as if they originate from a state of the form \[\rho_{A}^{\text{mod}}=\frac{1}{2}\left(\rho_{A}+\hat{\mathcal{P}}_{A}\rho_{A} \hat{\mathcal{P}}_{A}\right), \tag{18}\] with \(\hat{\mathcal{P}}_{A}=\bigotimes_{j\in A}\hat{S}_{j}^{x}\) and \(\rho_{A}\) the reduced density matrix of a subsystem of \(\ket{\Psi_{G}}\). In the following, we justify this procedure via numerical simulations and show that it converges to the 'correct' density matrix \[\rho_{A}^{\text{sym}}=\text{Tr}_{\hat{A}}\left(\ket{\Psi_{G}^{\text{sym}}} \bra{\Psi_{G}^{\text{sym}}}\right), \tag{19}\] in the limit \(L\to\infty\), where \(L\) denotes the total number of particles. To verify the procedure, we compute \(\rho_{A}^{\rm sym}\) of a bulk region of the superposition state \(\ket{\Psi_{G}^{\rm sym}}\) and compare it to the modified density matrix \(\rho_{A}^{\rm mod}\) of Eq. (11). Expanding the reduced density matrices of a subsystem \(A\) in a basis of Pauli strings \(\hat{\sigma}_{j}^{\alpha_{j}}\hat{\sigma}_{j+1}^{\alpha_{j}+1}\dots\hat{\sigma }_{j+L_{A}-1}^{\alpha_{j}+L_{A}-1}\), we can find an analytical expression for the difference between \(\rho_{A}^{\rm sym}\) and \(\rho_{A}^{\rm mod}\), given by \[\rho_{A}^{\rm sym}-\rho_{A}^{\rm mod} =\sum_{\{\alpha_{j}\}}\bra{\Psi_{G}}\left[\hat{\sigma}_{j}^{ \alpha_{j}}\hat{\sigma}_{j+1}^{\alpha_{j+1}}\dots\hat{\sigma}_{j+L_{A}-1}^{ \alpha_{j+L_{A}-1}},\hat{\mathcal{P}}\right]\ket{\Psi_{G}}\] \[\times\hat{\sigma}_{j}^{\alpha_{j}}\hat{\sigma}_{j+1}^{\alpha_{j +1}}\dots\hat{\sigma}_{j+L_{A}-1}^{\alpha_{j+L_{A}-1}} \tag{13}\] where \(\{\cdot,\cdot\}\) denotes the anticommutator and \(\hat{\sigma}_{j}^{\alpha_{j}}\in\mathbb{P}[\hat{\sigma}_{j},\hat{\sigma}_{j}^ {\alpha_{j}},\hat{\sigma}_{j}^{\alpha_{j}}]\). The expectation values appearing in Eq. (13) contain Pauli strings of the form \(\hat{\sigma}_{1}^{x}\dots\hat{\sigma}_{j-1}^{x}\hat{\sigma}_{j}^{\alpha_{j}} \hat{\sigma}_{j+1}^{\alpha_{j+1}}\dots\hat{\sigma}_{j+L_{A}-1}^{\alpha_{j+L_{A }-1}}\hat{\sigma}_{j+L_{A}}^{x}\dots\hat{\sigma}_{L}^{x}\) which span the whole spin chain. Intuitively, expectation values of large Pauli strings vanish for \(L\gg\), which we numerically verify in Fig. 6. In particular, Fig. 
6 illustrates different density matrix fidelities (see Appendix F) between \(\rho_{A}^{\rm sym}\) and \(\rho_{A}^{\rm mod}\) for density matrices computed for a subsystem of size \(L_{A}=5\) from the center of the spin chain as a function of the total system size \(L\). As can be seen, \(\rho_{A}^{\rm sym}\) and \(\rho_{A}^{\rm mod}\) become approximately identical at \(L\sim 10^{3}\), indicating the vanishing of expectation values of large Pauli strings as \(L\to\infty\). Contributions to the density matrix \(\rho_{A}^{\rm sym}\) that stem from off-diagonal coherences in \(\ket{\Psi^{\rm sym}}\) thus vanish for \(L\to\infty\). This allows us to replace the coherent superposition state \(\ket{\Psi^{\rm sym}}\) by an incoherent mixture for large system sizes, for which reduced density matrices take the form of Eq. (11). ## Appendix J Spatial structure of entanglement eigenstates In this section we provide further evidence for the interpretation of the deformation parameters \(\beta_{j}\) in the EH as a "local inverse (entanglement) temperature". To be explicit, we consider again the EH \(\tilde{H}_{A}=\sum_{j\in A}\beta_{j}h_{j}\) with a finite subsystem \(A\) in the bulk for the ground state \(\ket{\Psi_{G}}\) of \(\hat{H}_{\rm XXZ}=\sum_{j}h_{j}\) as in the main text. The fact that \(\rho_{A}\propto e^{-\tilde{H}_{A}}\) is a Gibbs state with \(\beta_{j}\) small close to the entanglement cut and larger towards the middle of the subsystem intuitively suggests that the "dominant contributions to the entanglement live close to the boundary". To make this intuition more precise, we write \[\rho_{A}=\frac{1}{Z_{A}}e^{-\tilde{H}_{A}}=\sum_{\alpha}e^{-\xi_{\alpha}}\ket{\Phi_{A}^{\alpha}}\bra{\Phi_{A}^{\alpha}} \tag{14}\] with \(\xi_{\alpha}\) and \(\ket{\Phi_{A}^{\alpha}}\) the eigenvalues and -states of \(\tilde{H}_{A}\), i.e. the entanglement spectrum and the Schmidt vectors, respectively. The dominant contribution to the entanglement is thus attributed to the Schmidt vectors with the smallest eigenvalues. We expect that these vectors carry excitations that are dominantly supported close to the entanglement cut. Figure 5: _Numerical analysis of the Entanglement Hamiltonian of the lowest 200 eigenstates of a 51-site XXZ chain_ (a) Local temperature profiles of the EH for subsystems of \(L_{A}=9\) sites in the center of the spin chain for the individual eigenstates. The inverse temperature profiles are colored according to their eigenenergies, which are measured in units of the total spectral range. (b) Verification of the local structure of the EH via computing Uhlmann fidelities between the density matrices \(\rho_{A}(\mathbf{\beta})\), reconstructed from EHT, and the exact density matrices \(\rho_{A}\) obtained via performing a Schmidt decomposition on the individual eigenstates \(\ket{\Phi_{k}}\) with \(k=1\dots 200\). GS denotes the temperature profile as well as the fidelity for the ground state wave function. Figure 6: Uhlmann fidelity \(\mathcal{F}\), maximum fidelity \(\mathcal{F}_{\rm max}\) and geometric-mean fidelity \(\mathcal{F}_{\rm mean}\) (see Appendix F) of the 'correct' density matrix \(\rho_{A}^{\rm sym}\) with respect to the modified density matrix \(\rho_{A}^{\rm mod}\) as a function of total system size \(L\), for a 5-site subsystem from the center of the spin chain. The plot illustrates that contributions to the density matrix \(\rho_{A}^{\rm sym}\) originating from off-diagonal coherences in the state \(\ket{\Psi^{\rm sym}}\) vanish in the limit \(L\to\infty\).
We reveal this anticipated spatial structure by calculating the average energy density \(\langle\hat{h}_{j}\rangle\) for \(j\in A\) (see Fig. 7). In Fig. 7a, we show the energy density for \(|\Psi_{G}\rangle\), which exhibits a homogeneous profile (up to a 2-site unit cell effect) as expected for a translationally invariant system. In contrast, the energy density in the low-lying Schmidt vectors is strongly inhomogeneous as demonstrated in Fig. 7 (b), where we plot the absolute difference in energy density in \(|\Phi_{A}^{\alpha}\rangle\) w.r.t. \(|\Psi_{G}\rangle\). We interpret the fact that this difference is largest close to the entanglement cut as localized excitations in the dominant Schmidt vectors which are supported in this boundary region. This spatial "localization" of entanglement finds its most prominent manifestation in the context of topological order. As originally pointed out by Li and Haldane [35], the low-lying entanglement spectrum for a subsystem of a topologically ordered state carries a fingerprint of the structure associated with the edge state CFT, which is again due to dominant Schmidt vectors contributing excitations that are mainly supported close to the entanglement cut. Our findings suggest that such interpretations also apply in a more general context.
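A minimal exact-diagonalization sketch of this analysis illustrates the same qualitative picture; here, \(N=10\) sites, an edge subsystem of \(L_{A}=5\) sites, and \(\Delta=1\) are illustrative assumptions in place of the 51-ion bulk subsystems used above. The entanglement spectrum \(\xi_{\alpha}\) follows from the Schmidt values via \(\lambda_{\alpha}^{2}=e^{-\xi_{\alpha}}\), and the excess energy density of the dominant Schmidt vectors concentrates on the link adjacent to the entanglement cut.

```python
import numpy as np

N, L_A, delta = 10, 5, 1.0                            # illustrative chain and subsystem sizes
S = {'x': np.array([[0, 1], [1, 0]], dtype=complex) / 2,
     'y': np.array([[0, -1j], [1j, 0]]) / 2,
     'z': np.diag([0.5, -0.5]).astype(complex)}
id2 = np.eye(2)

def embed(op, j, L):
    mats = [id2] * L
    mats[j] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def link(j, L):
    """XXZ link operator h_j on sites (j, j+1)."""
    return sum(c * embed(S[a], j, L) @ embed(S[a], j + 1, L)
               for a, c in (('x', 1), ('y', 1), ('z', delta)))

# Ground state of the full chain
H = sum(link(j, N) for j in range(N - 1))
psi0 = np.linalg.eigh(H)[1][:, 0]

# Schmidt decomposition across the cut between sites L_A - 1 and L_A
M = psi0.reshape(2 ** L_A, 2 ** (N - L_A))            # site 0 is the most significant bit
U, lam, _ = np.linalg.svd(M, full_matrices=False)     # columns of U are the Schmidt vectors
xi = -2 * np.log(np.clip(lam, 1e-16, None))           # entanglement spectrum, lambda^2 = exp(-xi)

# Excess energy density of the dominant Schmidt vectors on the links inside A
e_gs = [np.real(psi0.conj() @ (link(j, N) @ psi0)) for j in range(L_A - 1)]
for k in range(3):
    phi = U[:, k]
    excess = [np.real(phi.conj() @ (link(j, L_A) @ phi)) - e_gs[j] for j in range(L_A - 1)]
    print(f"Schmidt vector {k}: xi = {xi[k]:.2f}, |excess energy| per link =",
          np.round(np.abs(excess), 3))
```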
2305.00513
Tailoring polarisation of attosecond pulses via co-rotating bicircular laser fields
The present work introduces a robust way to generate attosecond pulses with tunable ellipticity via high-order harmonic generation by co-rotating $\omega - 2\omega$ bicircular laser fields. The total electric field of the laser fields exhibits an absence of rotational symmetry, which leads to the generation of high harmonics of the same helicity across a broad range of spectral bandwidth. High-harmonics with the same helicity offer the opportunity to synthesize attosecond pulses with tunable ellipticity. The polarisation properties of the generated harmonics are robust against the variations in driving fields' parameters, such as wavelength, intensity ratio, and the sub-cycle phase between $\omega-2\omega$ fields. Our work opens an avenue to study chiral-sensitive light-matter ultrafast processes on their intrinsic timescale.
Rambabu Rajpoot, Amol R. Holkundkar, Navdeep Rana, Gopal Dixit
2023-04-30T15:52:22Z
http://arxiv.org/abs/2305.00513v1
# Tailoring polarisation of attosecond pulses via co-rotating bicircular laser fields ###### Abstract The present work introduces a robust way to generate attosecond pulses with tunable ellipticity via high-order harmonic generation by co-rotating \(\omega-2\omega\) bicircular laser fields. The total electric field of the laser fields exhibits an absence of rotational symmetry, which leads to the generation of high harmonics of the same helicity across a broad range of spectral bandwidth. High-harmonics with the same helicity offer the opportunity to synthesize attosecond pulses with tunable ellipticity. The polarisation properties of the generated harmonics are robust against the variations in driving fields' parameters, such as wavelength, intensity ratio, and the sub-cycle phase between \(\omega-2\omega\) fields. Our work opens an avenue to study chiral-sensitive light-matter ultrafast processes on their intrinsic timescale. ## I Introduction High-harmonic generation (HHG) is one of the non-perturbative nonlinear processes of laser-matter interaction. HHG has been intensively used for a tabletop coherent light source in extreme ultraviolet and soft x-ray energy regimes with attosecond temporal resolution [1]. HHG in a gaseous medium proceeds via a three-step process [2; 3], wherein an intense laser pulse librates an electron via tunnel ionisation as a first step. The liberated electron gains energy in the presence of driving laser as it propagates in the continuum and, eventually, is driven back to recollide with the parent ion. The kinetic energy acquired by the electron is emitted in the form of higher-order harmonics of the driving laser field. HHG not only plays a paramount role in producing attosecond pulses, but also offers a wide array of applications by unraveling ultrafast electron dynamics in matter with atomic-scale spatial-temporal resolutions [4; 5; 6; 7; 8; 9; 10]. Owing to the rapid development on both the theoretical and experimental fronts, researchers have focused on controlling the polarization of the emitted harmonics. Usually, the polarization of the emitted harmonics is controlled by utilizing various forms of the counter-rotating \(\omega-2\omega\) bicircular fields configuration, such as fields having different intensity ratios, relative phase, and ellipticities [11; 12; 13; 14; 15; 16; 17; 18; 19; 20], non-collinear mixing of combining pulses [21; 22], adding a seed pulse [23], as well as utilizing the plasmonic field enhancement [24], to name a few. However, due to the three-fold symmetry restriction enforced by the counter-rotating \(\omega-2\omega\) fields, harmonics with alternating helicity are generated. Such circularly polarized harmonics with alternating helicity could generate linearly polarized attosecond pulses with each subsequent pulse rotated by \(120^{\circ}\) in space [25]. Thus, it is crucial to desire harmonics of the same helicity across a range of spectral bandwidths to produce attosecond pulse with tunable ellipticity. In this work, we introduce a robust scheme to generate attosecond pulse with tunable ellipticity using co-rotating \(\omega-2\omega\) circularly polarized fields. In the following, we will demonstrate that the harmonics with the same helicity are produced owing to the absence of rotational symmetry in the driving co-rotating \(\omega-2\omega\) bicircular fields. The robustness of our scheme is tested with respect to the variations in the driving fields' parameters. 
It is found that highly elliptical attosecond pulse can be generated as the scheme is insensitive to any variations in the parameters. The generated attosecond pulses with circular or elliptical polarization are desirable to probe various chiral-sensitive light-matter phenomena [26; 27; 28; 29; 30; 31; 32]. Recently, Lu and co-workers have applied corotating bicircular field configurations with 1:3 ratio to discuss the role of Coriolis-force effect in the generation of high harmonics [33]. Moreover, the superiority of the molecular target over atomic in the context of corotating bicircular fields setup is discussed in Ref. [34]. Solids and plasma targets, apart from gaseous systems, have also attracted attentions for HHG via co-rotating driving fields [35; 36; 37]. The paper is organized as follows. Details of the numerical methods are discussed in Sec. II, followed by the results and discussion in Sec. III. The concluding remarks and future directions are discussed in Sec. IV. ## II Numerical methods Time-dependent Schrodinger equation in two-dimensions, within single-active-electron approximation, for helium is numerically solved as discussed in Refs. [16; 38]. The harmonic spectrum is obtained by performing the Fourier transform of the dipole acceleration as \[S_{\kappa}(\Omega)=\Big{|}\frac{1}{\sqrt{2\pi}}\int a_{\kappa}(t)e^{-i\Omega t }dt\Big{|}^{2}=\big{|}a_{\kappa}(\Omega)\big{|}^{2}, \tag{1}\] where \(\kappa\) stands for the \(x\) or \(y\) components of the time-dependent dipole acceleration. To describe the polarization of the harmonics, the intensity of the left- and right-rotating components is obtained as \(D_{\pm}=\big{|}a_{\pm}(\Omega)\big{|}^{2}\) with \(a_{\pm}(\Omega)=[a_{x}(\Omega)\pm i\alpha_{y}(\Omega)]/\sqrt{2}\). Ellipticity of the emitted harmonics is calculated using \[\epsilon=\frac{|a_{+}(\Omega)|-|a_{-}(\Omega)|}{|a_{+}(\Omega)|+|a_{-}(\Omega)|}. \tag{2}\] The parameter \(\epsilon\) varies in the interval from \(-1\) to \(+1\), and the sign of \(\epsilon\) defines the helicity of the harmonics. The harmonics rotating in a counter-clockwise direction have positive helicity while those rotating in a clockwise direction have negative helicity [39; 40]. The temporal profile of an attosecond pulse is constructed by filtering the desired frequency range with an appropriate window function \(w(\Omega)\) and then performing an inverse Fourier transform as [41]: \[\mathcal{E}_{\kappa}(t)=\frac{1}{\sqrt{2\pi}}\int a_{\kappa}(\Omega)w(\Omega)e ^{i\Omega t}d\Omega. \tag{3}\] Here, \(w(\Omega)=\Theta(\Omega-\Omega_{1})\Theta(\Omega_{2}-\Omega)\), wherein \(\Omega_{1}\leq\Omega\leq\Omega_{2}\) is the frequency range to be filtered, and \(\Theta(x)\) is standard step function. The total electric field corresponding to the \(\omega-2\omega\) co-rotating configuration is written as \[\begin{split}\mathbf{E}(t)=f(t)\big{\{}& E_{1}\ [\cos(\omega_{1}t)\hat{\mathbf{e}}_{x}+\sin(\omega_{1}t)\hat{\mathbf{e}}_{y}] +\\ & E_{2}\ [\cos(\omega_{2}t+\phi)\hat{\mathbf{e}}_{x}+\sin(\omega_{2}t+ \phi)\hat{\mathbf{e}}_{y}]\big{\}},\end{split} \tag{4}\] where \(\omega_{j}\) and \(E_{j}\) are the frequency and the electric field amplitude of the \(j^{\text{th}}\) component of the bicircular field, respectively. \(\phi\) defines the sub-cycle phase between the two fields. The temporal pulse envelope \(f(t)=\sin^{2}(\pi t/\tau)\) with total duration \(\tau=5T_{1}\), where \(T_{1}=2\pi/\omega_{1}\) is the period of the fundamental field. 
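As a minimal illustration of the helicity decomposition underlying Eqs. (1)-(2), the following sketch applies it to the co-rotating driving field of Eq. (4) itself rather than to the dipole acceleration obtained from the TDSE; the scaled units \(\omega_{1}=1\), \(E_{1}=E_{2}=1\), \(\phi=0\), and the five-cycle \(\sin^{2}\) envelope are assumptions of the sketch. Since both field components rotate counter-clockwise, the band-integrated ellipticity of the fundamental comes out close to \(+1\).

```python
import numpy as np

# Scaled units: omega_1 = 1, E_1 = E_2 = 1, phi = 0, five-cycle sin^2 envelope
w1, w2, E1, E2, phi = 1.0, 2.0, 1.0, 1.0, 0.0
tau = 5 * 2 * np.pi / w1
t = np.linspace(0.0, tau, 4096, endpoint=False)
f = np.sin(np.pi * t / tau) ** 2

# Co-rotating omega - 2omega field, cf. Eq. (4)
Ex = f * (E1 * np.cos(w1 * t) + E2 * np.cos(w2 * t + phi))
Ey = f * (E1 * np.sin(w1 * t) + E2 * np.sin(w2 * t + phi))

# Left/right rotating (helicity) components, cf. the definition below Eq. (1)
Ex_w, Ey_w = np.fft.rfft(Ex), np.fft.rfft(Ey)
Omega = 2 * np.pi * np.fft.rfftfreq(t.size, d=t[1] - t[0])
a_plus = (Ex_w + 1j * Ey_w) / np.sqrt(2)    # counter-clockwise (positive helicity)
a_minus = (Ex_w - 1j * Ey_w) / np.sqrt(2)   # clockwise (negative helicity)

# Band-integrated ellipticity around the fundamental, cf. Eq. (2)
band = (Omega > 0.5 * w1) & (Omega < 1.5 * w1)
P_plus, P_minus = np.abs(a_plus[band]).sum(), np.abs(a_minus[band]).sum()
eps = (P_plus - P_minus) / (P_plus + P_minus)
print("ellipticity of the fundamental band:", round(float(eps), 3))   # close to +1
```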
We have considered the spatial simulation domain of \(\pm 150\) a.u. along both \(x\) and \(y\) directions. The value of parameter \(r_{\text{abs}}=\pm 142\) a.u. is considered. The spatial step \(\Delta x=\Delta y\approx 0.29\) a.u. is used and the simulation time step \(\Delta t=0.01\) a.u. is considered, which is well within the criterion \(\Delta t\lesssim 0.5(\Delta x)^{2}\). The convergence is tested with respect to the spatial grid as well as space and time steps. Our simulation utilizes the widely used _Armadillo_ library for linear algebra purposes [42]. In the following sections, we discuss the harmonic generation by co-rotating bicircular laser pulses and the production of highly elliptically polarized attosecond pulses.

## III Results and Discussions

The harmonic spectra of helium driven by co-rotating \(\omega-2\omega\) fields are presented in Fig. 1. The absence of the dynamical rotational symmetry of the total electric field results in the generation of even and odd-order harmonics with a regular plateau structure, as evident from the figure. Additionally, the harmonic component co-rotating with the driving field has higher contrast and dominates in the cutoff region. The present observations are consistent with the propensity rules as discussed in Ref. [43]. This is in contrast with the counter-rotating \(\omega-2\omega\) configuration, which results in harmonic doublets with alternating helicity.

Figure 1: High-harmonic spectrum of helium driven by co-rotating \(\omega-2\omega\) fields. \(D_{+}\) harmonic component (blue line) is co-rotating with the \(\omega\) field, whereas the red line represents the counter-rotating \(D_{-}\) harmonic component. Inset shows Lissajous figure corresponding to the total co-rotating \(\omega-2\omega\) electric fields for one cycle of \(\omega_{1}\) field. \(\lambda_{1}=1064\) nm, \(I_{1}=I_{2}=5\times 10^{13}\) W/cm\({}^{2}\), and \(\phi=0^{\circ}\) are used to simulate the spectrum.

Before we discuss the generation of attosecond pulses with tunable ellipticity from the spectrum shown in Fig. 1, let us explore the robustness of the features in the spectrum with respect to various laser parameters. Figure 2 presents harmonic spectra corresponding to the co-rotating \(\omega-2\omega\) configuration having different fundamental wavelengths and intensity ratios. The overall nature of the spectrum remains insensitive with respect to the changes in the intensity ratio as the ratio is tuned from 1:1 to 1:1.25 and 1.25:1, as evident from Figs. 2(a) and 2(b), respectively. A similar observation can be made when the wavelength is increased from 1064 nm to 1200 nm with the same intensity ratios [see Figs. 2(c) and 2(d)]. Thus, the higher yield of the co-rotating harmonic component in blue is insensitive with respect to the variations in the wavelength as well as intensity ratio of the co-rotating fields. In all cases, the subcycle phase between \(\omega-2\omega\) fields, \(\phi\), is zero.

Figure 2: Same as Fig. 1 with different wavelengths, and intensity ratios. The wavelength is considered 1064 nm in (a-b) and 1200 nm in (c-d). Here, value 1 corresponds to the intensity \(5\times 10^{13}\) W/cm\({}^{2}\).

At this juncture, it is natural to envision how different values of \(\phi\) affect the nature of the harmonic spectra discussed so far. Figure 3 presents harmonic spectra for different values of \(\phi\) for 1200 nm wavelength of the \(\omega\)-field with 1:1 intensity ratio. As the value of \(\phi\) is tuned from \(\phi=0^{\circ}\) to \(\phi=30^{\circ}\), the spectrum shown in Fig. 3(a) displays high contrast of the \(D_{+}\) harmonic component, co-rotating with the driving field, compared to the counter-rotating \(D_{-}\) harmonic component in the cutoff region. The insensitivity of the features of the harmonic spectra can be understood by analysing the Lissajous figure. The orientation of the Lissajous figure rotates by changing the \(\phi\) value, but dynamical rotational symmetry is still absent, as visible from the inset. As a result, the polarization and intensity of the emitted harmonics remain unaffected for different values of \(\phi\), as evident from the spectra shown in Figs. 3(b) - 3(d).

Figure 3: Sensitivity of the harmonic spectra with respect to the subcycle phase, \(\phi\), between \(\omega-2\omega\) fields. (a) \(30^{\circ}\), (b) \(45^{\circ}\), (c) \(60^{\circ}\), and (d) \(90^{\circ}\). Insets show Lissajous figures corresponding to co-rotating driving fields (solid purple line) with the Lissajous figure for \(\phi=0^{\circ}\) in a black dotted line for comparison purposes. \(\lambda_{1}=1200\) nm, and \(I_{1}=I_{2}=5\times 10^{13}\) W/cm\({}^{2}\) are used to simulate the spectra.

We have also simulated the harmonic spectra when the helicity of the co-rotating driving fields is reversed. In this case, the co-rotating harmonic component dominates in the cutoff region regardless of the rotation direction of the driving field, as can be seen in Fig. 4. This offers the possibility of generating attosecond pulses with desired handedness by simply changing the rotation direction of the driving laser pulses.

Figure 4: Same as Fig. 2(b) except the rotation direction of combining fields is reversed. This implies that the \(D_{-}\) harmonic component represented by the red line is now co-rotating with the driving field, while the blue line represents the counter-rotating \(D_{+}\) harmonic component.

From the analysis of Figs. 1 - 3, it is established that the essential features of the harmonic spectra, such as domination of the co-rotating \(D_{+}\) component, are robust with respect to the variations in the laser parameters. This eliminates the need for precise adjustments of the intensity ratio or the relative phase between the driving fields or a specific choice of wavelength. Thus, the co-rotating \(\omega-2\omega\) scheme can be utilized to synthesize attosecond pulses with controlled polarization.

To illustrate the feasibility of generating attosecond pulses with tunable polarization via HHG driven by the co-rotating \(\omega-2\omega\) field configuration, let us focus on the harmonic spectra in the energy range \(44-51\) eV. Figure 5 shows the temporal profile of the synthesized attosecond pulse with its \(x\)-component, \(y\)-component and the Lissajous figure of the total electric field. The pulse shown in Fig. 5(a) has ellipticity as high as \(0.88\) with \(\sim 630\) attoseconds pulse duration. This elliptical pulse is synthesized by superposing the harmonics in the energy range \(44-51\) eV from the spectra shown in Fig. 1. The high ellipticity is a direct consequence of unequal intensities of the two co- and counter-rotating harmonic components, i.e., \(D_{+}\) and \(D_{-}\). In contrast to the co-rotating driving fields, if one considers the counter-rotating driving fields to generate high-harmonics, the resultant attosecond pulse exhibits ellipticity as low as \(\sim 0.13\) [see Fig. 6]. Thus, co-rotating \(\omega-2\omega\) bicircular fields are a potential way to generate highly elliptical attosecond pulses. To demonstrate the robustness and universality, we have also synthesized pulses from the spectra shown in Figs. 2(a), 2(b), and 2(d); the corresponding attosecond pulses are displayed in Figs. 5(b) - 5(d), respectively. The filtered harmonic window is considered near the cutoff region of the respective harmonic spectrum. In all cases, the range of the pulse duration and the ellipticity of the synthesized pulses are \(\sim 550-600\) attoseconds and \(\sim 0.77-0.88\), respectively. As expected from the harmonic spectra, the helicity of the generated attosecond pulses is the same as that of the driving laser field. The high ellipticity of the synthesized attosecond pulses proves the potential of the co-rotating \(\omega-2\omega\) fields scheme. Moreover, the absence of rotational symmetry in the co-rotating field configuration translates into the robustness of the generated HHG spectra, wherein small changes in the parameters of the driving fields leave the spectra unaltered.

## IV Summary and conclusions

In summary, we have successfully demonstrated the generation of attosecond pulses with tunable ellipticity via HHG driven by co-rotating \(\omega-2\omega\) bicircular fields. The absence of the dynamical rotational symmetry in the co-rotating \(\omega-2\omega\) fields translates into the generation of harmonics with the same helicity, which leads to elliptically polarised attosecond pulses. It is found that the essential features of the generated harmonics, via the co-rotating \(\omega-2\omega\) field configuration, remain insensitive against variations in laser parameters, such as the fundamental driving wavelength, intensity ratio, and the subcycle phase between the two fields. Moreover, the effect of focal averaging is expected to be negligible as the attosecond pulses are synthesised using harmonics in the near-cutoff region [33]. Thus, it avoids the need for precise adjustments of laser parameters from the experimental perspective. Moreover, the dependence of the polarization properties of the harmonics on the driving fields' parameters provides opportunities to shape the polarization of the generated attosecond pulses. The generated chiral attosecond pulses can be employed to study chiral-sensitive dynamics on their intrinsic timescales [26; 27; 28; 29; 30]. Furthermore, our work can be extended to various scenarios, such as current-carrying molecular states, which can further increase the ellipticity of the pulse [44; 45], or different pulse shaping techniques to extend the harmonic cutoffs [46; 47; 48]. ###### Acknowledgements. A. R. H. acknowledges support from Science and Engineering Research Board (SERB) India (CRG/2020/001020). G. D. acknowledges support from SERB India (Project No. MTR/2021/000138).
2309.09184
Search for the production of dark gauge bosons in the framework of Einstein-Cartan portal in the simulation of proton-proton collisions at $\sqrt{s} = 13.6$ TeV
In the present work, we study the possible production of the heavy neutral dark gauge boson (A$^{\prime}$) candidates, which originated from a simplified model based on the Einstein-Cartan gravity, in association with dark matter. This study has been performed by studying events with dimuon plus missing transverse energy produced in the simulated proton-proton collisions at the Large Hadron Collider, at 13.6 TeV center of mass energy and integrated luminosity of 52 fb$^{-1}$ corresponding to the LHC RUN III circumstances. We provide upper limits, in case no new physics has been discovered, on the masses for various particles in the model as, spin-1 (A$^{\prime}$), as well as the heavy mediator (torsion field).
S. Elgammal
2023-09-17T07:17:26Z
http://arxiv.org/abs/2309.09184v6
Search for the production of dark gauge bosons in the framework of Einstein-Cartan portal in the simulation of proton-proton collisions at \(\sqrt{s}=13.6\) TeV ###### Abstract In the present work, we study the possible production of the heavy neutral dark gauge boson (\(\mathrm{A^{\prime}}\)) candidates, which are originated from a simplified model based on the Einstein-Cartan gravity, in association with dark matter. This study has been performed by studying events with dimon plus missing transverse energy produced in the simulated proton-proton collisions at the Large Hadron Collider, at 13.6 TeV center of mass energy and integrated luminosity of 52 fb\({}^{-1}\) corresponding to the LHC RUN III circumstances. These provide the most stringent upper limits on the masses for various particles in the model as, spin-1 (\(\mathrm{A^{\prime}}\)), as well as the heavy mediator (torsion field). ## I Introduction The Standard Model of particle physics SM has been tested during more than 40 years [1], and its predictions agree very well with all experimental observations. However, the SM is nowadays considered as a low energy manifestation of other theories realized at high energy, generically known as BSM (Beyond the Standard Model) theories [2]. One motivation for BSM physics is to have a unified theory for the electromagnetic, weak and strong interactions, in a unique Grand Unified Theory (GUT) [3]. The Super-Symmetry (SUSY) attempts to also include gravitation lead to models with extra spatial dimensions. These BSM models typically predict the existence of new dark particles at the TeV scale and higher. The existence of heavy neutral bosons (\(\mathrm{Z^{\prime}}\)) is a feature of many extensions of the Standard Model. They arise in extended gauge theories, including grand unified theories (GUT) [4], and other models like left-right symmetric models (LRM) [5]. A specific case is the sequential standard model (SSM), in which the \(\mathrm{Z^{\prime}}\) boson has the same coupling as the SM \(\mathrm{Z^{\prime}}\)[6]. Model of extra dimensions like Randall and Sundrum model (RS) [7] predicts the existence of heavy Kaluza-Klein gravitons. Searches for these heavy dark neutral gauge bosons have been performed at the CMS and ATLAS experiments, at the Large Hadron Collider (LHC), with no evidences of their existence using the full RUN II period of the LHC data taking [8; 9]. Another alternative for Randall and Sundrum model could be achieved through the Einstein-Cartan portal [10; 11; 12; 13; 14; 15; 16]. At which gravity (represented by torsion field) can couple to the SM particles in addition to dark sector fermions, it provides a mechanism of producing the dark sector particles and allows a chance for probing dark gauge boson (\(\mathrm{A^{\prime}}\)), which corresponds to a \(U(1)_{D}\) symmetry, at LHC [17]. In this theory, the torsion mass is in the TeV-scale regime, so that the \(\mathrm{A^{\prime}}\) can be produced with the high boost and missing transverse energy (\(E_{T}^{miss}\)) from dark-sector fermions. The search for the \(\mathrm{A^{\prime}}\) could be achieved at the LHC via its decay to dilepton (i.e. \(\mathrm{A^{\prime}}\to l^{+}l^{-}\)) and large \(E_{T}^{miss}\). Many searches for DM have been performed via analysing the data collected by the CMS experiment during RUN II. 
These searches rely on the production of a visible object "X", which recoils against the large missing transverse energy from the dark matter particles leaving a signature of (\(\mathrm{X}+E_{T}^{miss}\)) in the detector [18]. The visible particle could be a SM particle like W, Z bosons or jets [19], photon [20] or SM Higgs boson [21]. In this analysis, we present a search for dark neutral gauge bosons (\(\mathrm{A^{\prime}}\)), which are originated in a simplified model in Einstein-Cartan portal, at the LHC simulated proton-proton collisions with 13.6 TeV center of mass energy corresponding to the LHC RUN III circumstances [22]. The topology of the studied simulated events is dimon, from the decay of \(\mathrm{A^{\prime}}\), plus large missing transverse energy which attributes to dark matter. Similar search for dark matter in this channel has been performed at the CMS experiment at the LHC with the visible particle being a Z boson decaying to dimon at \(\sqrt{s}=13\) TeV [23]. In the following section II, the theoretical formalism of the \(U(1)_{D}\) simplified model based on the Einstein-Cartan gravity and its free parameters are presented. Then the simulation techniques used for events generation for the signal and SM backrounds samples are displayed in section III. Afterwards, the selection cuts and the strategy of the analysis are explained in section IV. Finally, the results and the summary of this analysis are discussed in sections V and VI respectively. ## II The simplified model in the framework of Einstein-Cartan gravity The analyzed simplified model is based on the Einstein-Cartan gravity, which has been discussed in [17], assumes the production of dark matters from proton-proton collisions at the LHC in addition to a new heavy neutral dark gauge boson \(\mathrm{A^{\prime}}\). The proposed dark gauge boson (A\({}^{\prime}_{\mu}\)) can be produced through the process of pair annihilation of two quarks \(q\bar{q}\) mediated by the heavy torsion field (\(S_{\rho}\), which is the axial-vector part of the torsion tensor \(T^{\lambda}_{\mu\nu}\)[17]), which then undergoes two dark matter particles (\(\chi\)). Dark matter is heavy enough to decay to a A\({}^{\prime}_{\mu}\) and another dark matter (\(\chi\)) as shown in figure 1. The interaction terms, in the effective Lagrangian, between the torsion field and Dirac fermion (\(\psi\)), is given by [17] \[\bar{\psi}i\gamma^{\mu}(\partial_{\mu}+i\mathsf{g}_{\eta}\gamma^{5}S_{\mu}+... )\psi,\] where \(\mathsf{g}_{\eta}\) is the coupling of torsion field to Dirac fermions. While the term in the effective Lagrangian at which the torsion field couples to the dark matter, and between dark gauge boson (A\({}^{\prime}_{\mu}\)) and dark matter are given by [17] \[\bar{\chi}(i\gamma^{\mu}D_{\mu}-M_{\chi})\chi,\] where \(D_{\mu}=\partial_{\mu}+i\mathsf{g}_{\eta}\gamma^{5}S_{\mu}+i\mathsf{g}_{D}A^{ \prime}_{\mu}\), \(M_{\chi}\) is the dark matter mass and \(\mathsf{g}_{D}\) is the coupling of dark gauge boson to dark matter. The neutral dark gauge boson (A\({}^{\prime}\)) decays to the SM fermion pairs, in our case we choose the muonic decay of A\({}^{\prime}\). The highest significant branching ratio of A\({}^{\prime}\rightarrow\mu^{+}\mu^{-}\) could be reached if the following mass assumption [17] is satisfied, \[M_{A^{\prime}}<2M_{\chi}. 
\tag{1}\] In this model, there are many free parameters including the torsion field mass (\(M_{ST}\)), the dark gauge boson mass (\(M_{A^{\prime}}\)), the mass of dark matter (\(M_{\chi}\)) and the coupling constants (\(\mathsf{g}_{\eta}\) and \(\mathsf{g}_{D}\)). In this analysis, the values of these couplings are taken to be \(\mathsf{g}_{\eta}=0.2\) and \(\mathsf{g}_{D}=1.2\), these values have been chosen based on the results presented in [17]. Since we are interested by studying the possible production of heavy neutral dark gauge boson at the LHC, with \(M_{A^{\prime}}>100\) GeV, we have fixed the mass of dark matter to be \(M_{\chi}=500\) GeV in order to satisfy the mass condition given in equation 1. In addition, a similar analysis [23] has shown that, for axial-vector mediators, DM masses less than 300 GeV are excluded. The typical signature of this process consists of a pair of opposite sign muons from the decay of A\({}^{\prime}\) plus a large missing transverse energy due to the stable dark matter \(\chi\). Since the CMS detector has been optimized to this decay channel, which is a clean channel with respect to SM backgrounds. So that our studied events are with the following topology (\(\mu^{+}\mu^{-}+E_{T}^{miss}\)). ## III Simulation of signal samples and SM backgrounds The SM background processes yielding muon pairs in the signal region are Drell-Yan (DY \(\rightarrow\mu^{+}\mu^{-}\)) production, the production of top quark pairs (\(\mathrm{t}\bar{\mathrm{t}}\rightarrow\mu^{+}\mu^{-}+2b+2\nu\)) and production of diboson (\(W^{+}W^{-}\rightarrow\mu^{+}\mu^{-}+2\nu\), \(ZZ\rightarrow\mu^{+}\mu^{-}+2\nu\) and \(W^{\pm}Z\rightarrow\mu^{\pm}\mu^{+}\mu^{-}+\nu\)). The second type of background is the jets background, which comes from the misidentification of jets as muons, where a jet or multiple pass the muons selection criteria. This kind of background originates from two processes: W+jet and QCD multijet. The contamination of single and multijet background in data is usually estimated from data using a so called data driven method which is explained in [8], nevertheless they are irrelevant for our study because our analysis is based on MC simulations only. The signal samples, for the simplified model (based on Einstein-Cartan gravity), and the corresponding SM processes have been generated using MadGraph5_aMC@NLO [24] interfaced to Pythia 8 for parton shower model and hadronization [25], and DELPHES [26] for a fast detector simulation of CMS experiment. They were generated from proton-proton collisions at the Large Hadron Collider at 13.6 TeV center of mass energy, which corresponds to the circumstances of RUN III, with muon \(p_{T}>10\) GeV and \(|\eta|<3\) rad. For the simplified model, with the use of mass assumption given in equation 1, table 1 indicates the cross section measurements times branching ratios calculated for different sets of the dark gauge boson (A\({}^{\prime}\)) and torsion field (\(ST\)) masses. The simulated signals, used in this analysis, are private production samples, at which we used the matrix element event generator MadGraph5 aMC@NLO v2.6.7 [24]. We are grateful to Cao H. Nam, the authors of [17], for sharing with us the Universal FeynRules Output (UFO) for the model. All Monte Carlo samples used in this analysis and their corresponding cross sections were calculated at next-to-leading order. 
Thus, the contributions of the signal samples and the SM background processes have been estimated from the Monte Carlo simulations, at which they are normalized to their corresponding cross section and integrated luminosity of 52 fb\({}^{-1}\)[22]. The detector related systematic uncertainty is originated from the evaluation of the integrated luminosity of the 2022-data, that are recorded by the CMS during RUN III, was estimated to be 2.2% [27]. Figure 1: Feynman diagram for the simplified model based on Einstein-Cartan gravity; for the production of dark gauge boson (A\({}^{\prime}\)) in association to dark matter (\(\chi\)) pair [17]. ## IV Event selection The selection of event, for the analysis, has been designed to reconstruct a final state with two high transverse momentum (\(p_{T}\)) muons in association with missing transverse energy accounting for the DM candidate. The selection is made in the form of cuts applied on different kinematic parameters. Each of the two muons should pass the following preliminary selection: \(\bullet\)\(p_{T}^{\mu}\) (GeV) \(>\) 30, \(\bullet\)\(\eta^{\mu}\) (rad) \(<\) 3, \(\bullet\) IsolationVarRhoCorr \(<\) 0.1, "IsolationVarRhoCorr" represents the isolation cut in DELPHES software in order to reject muons produced inside jets. In this cut, it is required that the scalar \(p_{T}\) sum of all muon tracks within a cone of \(\Delta R=0.5\) around the muon candidate, excluding the muon candidate itself, should not exceed 10% of the \(p_{T}\) of the muon. This cut has been corrected for pileup effect. Thus, each event has been selected with two opposite charge muons, and the invariant mass of the dimuon is bigger than 60 GeV, since we are looking for a resonance in the high mass regime. Figure 2 shows the distribution of the dimuon invariant mass; the cyan histogram represents the Drell-Yan background, the yellow histogram stands for the vector boson pair backgrounds (WW, WZ and ZZ) and the \(t\bar{t}\) background is represented by the red histogram. These histograms are stacked. While the signals of the simplified model in the framework of Einstein-Cartan gravity, which have been generated with different masses of the neutral dark gauge boson A\({}^{\prime}\) with fixed values of the torsion field mass (\(M_{ST}=2000\) GeV) and dark matter mass (\(M_{\chi}=500\) GeV), are represented by different colored lines, and are overlaid. The corresponding distribution of the missing transverse energy is presented in figure 3. It is clearly shown from these figures that, the signal samples are overwhelmed by the backgrounds. So that, it is necessary to apply a more tighter set of cuts to discriminate signals from SM backgrounds as will be explained in the next paragraph. In addition to the preliminary selection, extra tighter cuts have been applied. These tight cuts are based on three variables: the first variable is related to the invariant mass of the dimuon, at which we restricted the invariant mass of the dimuon to a small range around the mass of the dark gauge boson A\({}^{\prime}\), such that \((0.9\times M_{A^{\prime}})<M_{\mu^{+}\mu^{-}}<(M_{A^{\prime}}+25)\). The second is the relative difference between the transverse energy of dimuon (\(E_{T}^{\mu^{+}\mu^{-}}\)) and the missing transverse energy (\(E_{T}^{\rm miss}\)), it has been selected to be less than 0.4. (i.e. \(|E_{T}^{\mu^{+}\mu^{-}}-E_{T}^{\rm miss}|/E_{T}^{\mu^{+}\mu^{-}}<0.4\)). 
The third one is \(\Delta\phi_{\mu^{+}\mu^{-},E_{T}^{\rm miss}}\), which is defined as difference in the azimuth angle between the dimuon direction and the missing transverse energy direction (i.e. \(\Delta\phi_{\mu^{+}\mu^{-},E_{T}^{\rm miss}}=|\phi^{\mu^{+}\mu^{-}}-\phi^{miss}|\) ), it has been selected \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \(M_{A^{\prime}}\) & 1250 & 1500 & 1750 & 1800 & 1970 & 2000 & 3000 & 4000 & 5000 & 6000 & 7000 \\ \hline 200 & 0.00043 & 0.00863 & 0.0158 & 0.017 & 0.0180 & 0.0181 & 0.00895 & 0.0026 & 0.00061 & 0.00012 & \(2.32\times 10^{-5}\) \\ \hline 300 & \(9.76\times 10^{-7}\) & 0.0040 & 0.011 & 0.012 & 0.0142 & 0.0145 & 0.00874 & 0.0027 & 0.00066 & 0.00014 & \(2.64\times 10^{-5}\) \\ \hline 400 & \(5.43\times 10^{-7}\) & 0.0010 & 0.0065 & 0.0075 & 0.0101 & 0.0105 & 0.0079 & 0.0026 & 0.00067 & 0.00014 & \(2.80\times 10^{-5}\) \\ \hline 500 & \(3.29\times 10^{-7}\) & \(8.1\times 10^{-7}\) & 0.0032 & 0.0041 & 0.0066 & 0.00695 & 0.00688 & 0.0019 & 0.00066 & 0.00014 & \(2.85\times 10^{-5}\) \\ \hline 600 & \(2.10\times 10^{-7}\) & \(3.48\times 10^{-7}\) & 0.0011 & 0.0017 & 0.0039 & 0.0042 & 0.0058 & 0.0023 & 0.00063 & 0.00014 & \(2.84\times 10^{-5}\) \\ \hline 700 & \(1.37\times 10^{-7}\) & \(1.9\times 10^{-7}\) & 0.00012 & 0.00041 & 0.0019 & 0.0022 & 0.0048 & 0.0020 & 0.00059 & 0.00014 & \(2.77\times 10^{-5}\) \\ \hline 800 & \(9.04\times 10^{-8}\) & \(1.22\times 10^{-7}\) & \(1.28\times 10^{-7}\) & \(3.22\times 10^{-7}\) & 0.00071 & 0.00091 & 0.0039 & 0.0018 & 0.00054 & 0.00013 & \(2.68\times 10^{-5}\) \\ \hline 900 & \(5.99\times 10^{-8}\) & \(7.80\times 10^{-8}\) & \(1.22\times 10^{-7}\) & \(1.40\times 10^{-7}\) & 0.00011 & 0.00021 & 0.0030 & 0.0016 & 0.00050 & 0.00012 & \(2.57\times 10^{-5}\) \\ \hline \end{tabular} \end{table} Table 1: The simplified model (based on Einstein-Cartan gravity) cross section measurements times branching ratios (in pb) calculated for different sets of the masses \(M_{A^{\prime}}\) (in GeV), and \(M_{ST}\) (in GeV), for the mass assumption given in equation 1, with dark matter mass (\(M_{\chi}=500\) GeV), the following couplings constants \(\mathsf{g}_{\eta}=0.2,\ \mathsf{g}_{D}=1.2\) and at \(\sqrt{s}=13.6\) TeV. Figure 2: The measured dimuon invariant mass spectrum, after applying preliminary cuts, for the estimated SM backgrounds and for different choices of dark gauge boson (A\({}^{\prime}\)) masses generated based on the simplified model, with mass of torsion field (\(M_{ST}=2000\) GeV) and dark matter mass (\(M_{\chi}=500\) GeV). to be greater than 2.6 rad. For dimuon events, with each muon passes the preliminary cuts, we present in figure 4 the distributions of \(|E_{T}^{\mu^{+}\mu^{-}}-E_{T}^{\rm miss}|/E_{T}^{\mu^{+}\mu^{-}}\) (a) and \(\Delta\phi_{\mu^{+}\mu^{-},\bar{E}_{T}^{\rm miss}}\) (b) for the signal presentation of the simplified model corresponding to Einstein-Cartan gravity, which was generated with masses of dark gauge boson \(M_{A^{\prime}}=200\) GeV, the torsion field mass \(M_{ST}=2000\) GeV and dark matter mass \(M_{\chi}=500\) GeV and SM backgrounds. These distributions are scaled to one. In these plots, the vertical dashed lines correspond to the chosen cut value per each variable. These tight cuts have been applied in order to strongly decrease the SM backgrounds. \(ZZ\) background has been fully suppressed by the mass window cut \((0.9\times M_{A^{\prime}})<M_{\mu^{+}\mu^{-}}<(M_{A^{\prime}}+25)\). 
## V Results The shape-based analysis has been used based on the missing transverse energy distributions (\(E_{T}^{\rm miss}\)), which are good discriminate variable, since the signals distributions are characterized by relatively large \(E_{T}^{\rm miss}\) values compared to the SM backgrounds. The distribution of the missing transverse energy, after the application of the final event selection, is illustrated in figure 5. The event yields passing the analysis final selection, for each of the SM backgrounds and the signal of simplified model, which was generated with masses of dark gauge boson \(M_{A^{\prime}}=200\) GeV, the torsion field mass \(M_{ST}=2000\) GeV and dark matter mass \(M_{\chi}=500\) GeV; corresponding to an integrated luminosity of 52 fb\({}^{-1}\) are presented in table 2. Uncertainties include both statistical and systematic components, summed in quadrature. In order to make a statistical interpretation for our results, we preformed a statistical test based on the profile likelihood method, with the use of the modified frequentist construction CLs [28; 29] used in the asymptotic approximation [30] to derive exclusion limits on the product of signal cross sections and branching fraction Br(\(A^{\prime}\)\(\rightarrow\mu\mu\)) at 95% confidence level. The 95% upper limit on the cross section times the branching ratio versus the mass of torsion field \(M_{ST}\), for the simplified model based on Einstein-Cartan gravity, is presented in figure 6, with the muonic decay of the A\({}^{\prime}\) and coupling constant values of \(\mathbf{g}_{\eta}=0.2\) and \(\mathbf{g}_{D}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{ g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{ g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{ g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{ g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= 
\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_ {\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_ {\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_ {\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{ \eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{ \eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_ {\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_ {\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{ \eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{ \eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}=\mathbf{g}_{\eta}= \mathbf{g}_{\eta}=\mathbf 1.2 and dark matter mass \(M_{\chi}=500\) GeV. The black solid curve represents the simplified model for \(M_{A^{\prime}}=200\) GeV. Based on figure 6, we exclude the torsion field (\(ST\)) production in the mass range between 1375 - 6450 GeV as shown from expected median. For the simplified model (based on Einstein-Cartan gravity), the cross section times the branching ratio limit is presented in figure 7 as a function of the mediator's masses \(M_{ST}\) and the masses of the dark neutral gauge boson \(M_{A^{\prime}}\). The region between the respective pair of the expected 95% dotted line is excluded. The results from the inclusive signal regions exclude expected values of up to \(1375<M_{ST}<6645\) GeV. Figure 5: The distribution of the missing transverse energy, after final analysis selection cuts, for the expected SM background and one signal benchmark corresponding to the Einstein-Cartan gravity with \(M_{A^{\prime}}=200\) GeV is superimposed. Figure 6: 95% CL upper limits on the cross section times the branching ratio (expected), as a function of the mediator’s mass (\(M_{ST}\)) based on Einstein-Cartan model, with the muonic decay of the A\({}^{\prime}\). 
The black line represents the Einstein-Cartan gravity with \(M_{A^{\prime}}=200\) GeV. Figure 7: The 95% CL upper limits on the product of the cross section and branching fraction from the inclusive search, for variations of pairs of the simplified model parameters (\(M_{ST}\) and \(M_{A^{\prime}}\)). The filled region indicates the upper limit. The dotted black curve indicates the expected exclusions for the nominal A\({}^{\prime}\) cross section. Summary A search for dark neutral gauge bosons (A\({}^{\prime}\)) produced in association with dark matter (\(\chi\)), in the framework of \(U(1)_{D}\) simplified model based on the Einstein-Cartan gravity, has been presented, using the simulated proton-proton collisions corresponding to the LHC RUN III 13.6 TeV center of mass energy, for an integrated luminosity of 52 fb\({}^{-1}\). Results from muonic decay mode of A\({}^{\prime}\) are discussed, with fixing the values of the coupling constants to be \(g_{D}=1.2\), \(g_{\eta}=0.2\), and dark matter mass (\(M_{\chi}=500\) GeV). We have considered the variations of several parameters of the signal model: the torsion field mass (\(M_{ST}\)) and the dark neutral gauge boson mass(\(M_{A^{\prime}}\)). The general version of the search, which uses only event-level kinematic variables, excludes models with \(1375<M_{ST}<6645\) GeV at 95% confidence level (CL). ###### Acknowledgements. The author of this paper would like to thank Cao H. Nam, the author of [17], for his useful discussions about the theoretical models, and sharing with us the Universal FeynRules Output (UFO) for the model that were used for the events generation.
2309.11913
Spatial-Temporal Transformer based Video Compression Framework
Learned video compression (LVC) has witnessed remarkable advancements in recent years. Similar as the traditional video coding, LVC inherits motion estimation/compensation, residual coding and other modules, all of which are implemented with neural networks (NNs). However, within the framework of NNs and its training mechanism using gradient backpropagation, most existing works often struggle to consistently generate stable motion information, which is in the form of geometric features, from the input color features. Moreover, the modules such as the inter-prediction and residual coding are independent from each other, making it inefficient to fully reduce the spatial-temporal redundancy. To address the above problems, in this paper, we propose a novel Spatial-Temporal Transformer based Video Compression (STT-VC) framework. It contains a Relaxed Deformable Transformer (RDT) with Uformer based offsets estimation for motion estimation and compensation, a Multi-Granularity Prediction (MGP) module based on multi-reference frames for prediction refinement, and a Spatial Feature Distribution prior based Transformer (SFD-T) for efficient temporal-spatial joint residual compression. Specifically, RDT is developed to stably estimate the motion information between frames by thoroughly investigating the relationship between the similarity based geometric motion feature extraction and self-attention. MGP is designed to fuse the multi-reference frame information by effectively exploring the coarse-grained prediction feature generated with the coded motion information. SFD-T is to compress the residual information by jointly exploring the spatial feature distributions in both residual and temporal prediction to further reduce the spatial-temporal redundancy. Experimental results demonstrate that our method achieves the best result with 13.5% BD-Rate saving over VTM.
Yanbo Gao, Wenjia Huang, Shuai Li, Hui Yuan, Mao Ye, Siwei Ma
2023-09-21T09:23:13Z
http://arxiv.org/abs/2309.11913v1
# Spatial-Temporal Transformer based Video Compression Framework ###### Abstract Learned video compression (LVC) has witnessed remarkable advancements in recent years. Similar as the traditional video coding, LVC inherits motion estimation/compensation, residual coding and other modules, all of which are implemented with neural networks (NNs). However, within the framework of NNs and its training mechanism using gradient backpropagation, most existing works often struggle to consistently generate stable motion information, which is in the form of geometric features, from the input color features. Moreover, the modules such as the inter-prediction and residual coding are independent from each other, making it inefficient to fully reduce the spatial-temporal redundancy. To address the above problems, in this paper, we propose a novel Spatial-Temporal Transformer based Video Compression (STT-VC) framework. It contains a Relaxed Deformable Transformer (RDT) with Uformer based offsets estimation for motion estimation and compensation, a Multi-Granularity Prediction (MGP) module based on multi-reference frames for prediction refinement, and a Spatial Feature Distribution prior based Transformer (SFD-T) for efficient temporal-spatial joint residual compression. Specifically, RDT is developed to stably estimate the motion information between frames by thoroughly investigating the relationship between the similarity based geometric motion feature extraction and self-attention. MGP is designed to fuse the multi-reference frame information by effectively exploring the coarse-grained prediction feature generated with the coded motion information. SFD-T is to compress the residual information by jointly exploring the spatial feature distributions in both residual and temporal prediction to further reduce the spatial-temporal redundancy. Experimental results demonstrate that our method achieves the best result with 13.5% BD-Rate saving over VTM and 68.7% BD-Rate saving over the baseline without the proposed modules. Ablation study validates the effectiveness of each proposed module. Transformer, Inter-prediction, Learned video compression ## I Introduction Video data has experienced an exponential growth with the proliferation of video-sharing platforms and increasing high-resolution videos, resulting in the urgent need of more efficient video compression. Video compression is to reduce the spatial and temporal redundancy with intra/inter prediction, transform coding, quantization, and entropy coding, in order to achieve high compression ratios while maintaining perceptual quality. Traditional block-based video coding approaches have been widely studied with predefined intra/inter predictions such as the angular intra prediction and motion estimation/compensation, predefined transform coding such as the DCT, and predefined entropy coding such as the CABAC [1]. Such block-based video coding architecture with pre-defined modules has enjoyed a great success in the past decades and widely adopted in the industry. With the rapid development of deep learning, there has been a growing interest in exploring new video compression methods based on deep neural networks. These methods aim to leverage the powerful representation learning capabilities of deep models to learn adaptive transforms instead of using predefined ones, in order to achieve higher compression efficiency. 
Various approaches have been proposed [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20] including deep learning enhanced video coding by replacing some modules in the traditional video coding method and learned video coding approach with a whole neural network to compress a video. This paper focuses on the learned video coding, especially learned inter-frame video coding. Existing learned inter-frame video coding approach generally takes a similar process as the traditional video coding, including inter-frame prediction based on motion estimation/compensation, residual compression and entropy coding. While many learned methods [3, 21] have been developed and achieve state-of-the-art performance as traditional video coding method, there still exist two key problems in the exploration of the inter-frame information. Firstly, motion information as geometric information, used to align the reference frame to the current frame in order to perform the inter-frame prediction, is difficult to be stably transformed from the color space, i.e., the image frame and its corresponding feature. Current alignment methods mainly use optical flow and offsets as motion, and both are dense geometric transformation representations. Learning from color representations based on gradient backpropagation to generate such geometric feature is usually not stable as illustrated in [2]. Moreover, existing methods mostly only employ the immediate previous frame for prediction without fully exploring the multiple reference frames. Secondly, after the inter-frame prediction, the residual is compressed independently from the inter-frame prediction information, neglecting the useful spatial information in the prediction. To be specific, other than subtracting the corresponding point-to-point prediction information as a temporal prediction, the spatial relationship embedded in the prediction can also assist the spatial compression of the residual. To address the above problems, a spatial-temporal Transformer based inter-frame video coding framework is proposed in this paper. First, a Multi-Granularity Prediction generation with proposed Relaxed Deformable Transformer is developed, where the multi-reference frame information is fully explored. Then a Spatial Feature Distribution prior based Transformer (SFD-T) is proposed to utilize the spatial feature distribution prior embedded in the temporal prediction feature to reduce the remaining spatial redundancy in the residual. The contributions of this paper can be summarized as follows. * We propose a Relaxed Deformable Transformer (RDT) based motion estimation and compensation module, where the RDT transforms the color feature with the spatial position embedding to generate the geometric motion alignment information and produces a coarse prediction feature. The mechanism of using RDT for producing motion between two frames is thoroughly investigated, and the deformable transformer is relaxed to the deformable convolution with their relationship carefully examined. * We propose a Multi-Granularity Prediction (MGP) based multi-reference prediction refinement module. The multi-reference frame information is explored in the manner of video denoising with the coarse prediction feature as anchor, since it is obtained with coded motion information and thus contains most information of the current frame. 
* We propose a Spatial Feature Distribution prior based Transformer (SFD-T) module to compress the residuals by further exploring the spatial feature distribution information embedded in the temporal prediction. The above modules are all constructed in the form of transformer and comprises a complete Transformer based video coding framework. Extensive experiments demonstrate that our method achieves state-of-the-art results, and ablation studies have also been conducted to validate the proposed modules. The rest of this paper is organized as follows. Section II presents the related works in learned image and video compression. Section III describes the proposed method in details. Experimental results with ablation studies are provided in Section IV and conclusions are drawn in Section V. ## II Related Work In this section, a brief review of the related work in the field of learned video compression [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20] is presented. Considering that image compression usually serves to code the first frame of a video and the spatial motion and residual used in the video coding, some related learned image compression methods are reviewed first. ### _Learned Image Compression_ Existing learned image compression methods are mostly derived from Variational Autoencoder (VAE) [22], which uses an Encoder-Decoder architecture with the quantization process mimicked as the variational process. Most of the encoders and decoders adopt the convolutional neural networks (CNNs) [23] with down-sampling and up-sampling in the encoder and decoder, respectively. The input is first transformed into the latent representation with the encoder, and quantized by rounding which is replaced by adding a uniform-distributed noise for gradient backpropagation at training [24]. The quantized residual is then encoded into the bitstream with the entropy coding and transmitted to the decoder [2, 22, 25]. To enhance the performance of entropy coding, hyperprior and context based entropy coding methods [26, 27, 25] were developed and widely used in the following learned image and video compression. With the rapid development of Transformers [28, 29], Transformer based image compression has also been studied [26, 25, 27]. A CNN-Transformer hybrid framework of image compression was proposed in [25], where the Swin Transformer blocks are inserted between convolutional layers without changing the overall architecture. In [26], a symmetric Transformer framework is used by replacing all CNN layers in the encoder-decoder architecture with Transformer layers. In addition to exploring Transformer for the encoder and decoder, there are also works such as ContextFormer [27] and Entroformer [30] investigating using Transformer for entropy model. These researches focus on reducing the spatial redundancy within an image, and can also be used in video coding to compress the motion vectors (if exist) and residuals. However, these models are developed without considering the temporal information and cannot directly using them for video coding. ### _Learned Video Compression_ Compared with the image compression, learned video compression focuses more on the inter-frame predictive coding. Most of the existing methods adopt a similar procedure as the conventional hybrid video coding, and consist of motion estimation and compensation [4, 5, 6, 8], residual coding and filtering [7, 19]. 
Deep video compression (DVC) [31] first proposed a fully learned video compression, using optical flow and warping to represent motion and perform motion compensation to generate prediction frame. Then residual is obtained by subtracting the current frame with the prediction frame, and further coded with an autoencoder (similarly as the image compression). Many methods have been developed with a similar procedure. In [2], FVC proposed to perform the inter-prediction in the feature domain using deformable convolution to reduce the artifacts around edges brought by the optical flow. Deep Contextual Video Compression (DCVC) [20], and its variants, including DCVC-TCM [18], DCVC-HEM [21] and DCVC-DC [3], were proposed to use context models for spatial compression. It directly processes the concatenated prediction and the current frame without explicitly obtaining the residual, using the property of conditional entropy no larger than the entropy of the residual. In addition to the single-reference based prediction methods described above, there also exist methods using multi-scale or multi-reference frame strategy [4, 5, 6] to help with prediction generation. In [7], the multi-reference frames are first gradually warped to the current frame with the coded motion vectors at each time step, and then the warped multi-reference frames are fused together to generate the final prediction. 3D convolution is used in [8] to fuse the initial prediction feature and the multi-reference frames without temporal warping. Other than explicitly fusing multi-reference frames at each frame, implicit temporal information aggregation with neural architectures such as LSTM [32] have also been investigated for video coding [9]. In [10], Conv-LSTM is used together with the U-net architecture to directly learn the space-time difference between video frames. LSTM have also been explored to construct the context models for the hyperprior based entropy coding [7, 11]. The above methods all employ the CNNs based encoding architecture. On the other hand, while there are some Transformer based spatial-temporal modelling methods, there only exist few Transformer based video coding methods. VCT [12] directly used Transformer to model temporal dependencies and predict the distribution of current frame without using explicit motion representation. Autoregressive prediction within each frame is also used similarly as the autoregressive context model [33]. Some methods also adopt the self-attention module in part of the video coding framework such as in encoder and decoder [13] and quality enhancement [14, 19]. In the area of Transformer based temporal modelling, several representative modelling methods are briefly described in the following. ViVit [15] proposed a Transformer based video processing method by factorizing the model into spatial and temporal processing. For the temporal processing, patches of the same spatial position are grouped with a temporal position embedding to synthesize three-dimensional tokens and thus can be processed in the same way as the spatial Transformer. Video SwinTransformer [16] extended the Swin-T [34] to three dimensions for video processing, where a 3D shifted window over a video is used for the multi-head self-attention calculation. CRAFT [35] was developed for optical flow estimation based on cross-attention between two frames. 
VSRT [17] proposed a spatially integrated convolutional self-attention layer to enhance the extraction of spatial locality information and use an optical flow based temporal alignment to enhance temporal information modelling. Deformable Transformer was first proposed for spatial object detection [36], where the offsets and the weights are generate by a query feature and then the sampled features are fused together similarly as the deformable convolution. Then Deformable Attention Transformer (DAT) [37] was proposed, where, after the offsets are generated, the sampled features are fused together using the self-attention model. However, such Transformer based temporal modelling methods do not concern the special needs of video coding such as the balance between rate and distortion, and cannot be directly used for video coding. ## III Proposed Method ### _Overview_ This paper proposes a learned inter-frame video coding method to generate a high-quality prediction and a compactly coded residual, with a Transformer based framework. It mainly consists of three components, including Relaxed Deformable Transformer (RDT) for motion estimation and compensation, Multi-Granularity Prediction (MGP) for prediction refinement, and Spatial Feature Distribution prior based Transformer (SFD-T) for residual compression. The framework of the proposed method is illustrated in Fig. 1. The current frame \(X_{t}\) and reference frame \(\hat{X}_{t-1}\) are first transformed to features \(F_{t}\) and \(\hat{F}_{t-1}\), respectively, and the following prediction and residual compression are performed in the feature domain. 1) RDT first uses Uformer to conduct the motion estimation (shown in the light blue box of Fig. 1), which is then coded with a motion vector codec and finally produces the motion information \(\widehat{m}_{off}\) and attention scores/confidence mask \(\hat{M}_{c}\). Then the deformable Transformer based value feature fusion process is relaxed to the deformable convolution to generate a coarse-grained prediction feature \(F_{c-pre}\). 2) MGP is further used to refine the prediction feature to explore the multi-reference frame information (shown in the light green box of Fig. 1). MGP applies RDT to align the multi-reference frame features to the coarse-grained prediction feature \(F_{c-pre}\), and fuses them through a spatial-channel Fig. 1: Framework of the proposed STT-VC. attention to generate enhanced prediction feature \(F_{m-pre}\). The residuals \(F_{resi}\) are generated by subtracting the enhanced prediction feature \(F_{m-pre}\) from the current frame feature \(F_{t}\). 3) SFD-T compresses the residual \(F_{resi}\) (shown in the light gray box of Fig. 1), by exploring the enhanced prediction feature \(F_{m-pre}\) in the attention calculation process as a spatial feature distribution prior. Finally, the decoded residual feature \(\hat{F}_{resi}\) is added with the enhanced prediction feature \(F_{m-pre}\) to reconstruct input feature \(\hat{F}_{t}\), which is then further transformed back to pixel domain as the reconstructed frame \(\hat{X}_{t}\). The three proposed modules RDT, MGP and SFD-T are presented in detail in the following subsections. ### _Relaxed Deformable Transformer (RDT) for Motion Estimation and Compensation_ To perform inter-frame prediction, the motion information between the current frame and the reference frame needs to be estimated first, known as motion estimation. 
Then the reference frame is aligned to the current frame with the estimated motion information to generate prediction, known as motion compensation. Currently, warping with optical flow and using deformable convolution [38] with offsets are the two main approaches for motion estimation and compensation. It is known that utilizing optical flow for feature alignment during motion compensation often results in artifacts around the edges of objects. Therefore, in this paper, offsets with the deformable convolution (DConv) is used as the base to represent the motion and perform motion compensation between the reference frame and the current frame. The existing offsets and DConv based motion estimation and compensation methods usually estimate the offsets by gradually convolving the features of the reference and current frames using a CNN. However, the offsets, in the form of the geometric information, is difficult to be directly obtained from the color features via gradient backpropagation. On the other hand, motion estimation and compensation estimates the motion changes between the reference frame and the current frame based on their similarity, and then aligns the reference frame with the motion to ensure its similarity to the current frame. Essentially this process is conducted based on the similarity between the reference frame and the current frame. Therefore, to overcome the problem of stably obtaining similarity based geometric motion information from color features, a relaxed deformable transformer (RDT) is developed. A Uformer is first used to estimate the offsets and attention/confidence of each offsets based on the similarity, and a deformable convolution is then used as a relaxed deformable Transformer to fuse the value features according to the corresponding offsets and attention/confidence. First, the current frame and the reconstructed reference frame are transformed into the feature domain through a feature extraction module. Taking the processing of the current frame \(X_{t}\) as an example, it can be represented by \[F_{conv}=ReLU(Conv_{5\times 5}(X_{t}))\] \[F_{t}=ResBlocks(F_{conv})+F_{conv} \tag{1}\] where \(F_{t}\) represents the final feature of \(X_{t}\). \(ResBlocks\) represent three Resblocks, each of which consists two convolution modules and a skip connection. To calculate the similarity between the features of the reference frame and current frame, instead of using cross-attention [35], we find that simply concatenating the features and processing them together with the self-attention can better cover the similarity calculation among different positions of the two features and within each feature. Specifically, the two frame features \(F_{t}\) and \(\hat{F}_{t-1}\) are concatenated together, and processed with a \(1\times 1\) convolution to fuse the information and reduce the channels as \[F_{c}=Conv_{1\times 1}\langle F_{t},\hat{F}_{t-1}\rangle \tag{2}\] where \(\langle.\rangle\) is used to represent the concatenation operation for simplicity. Then the spatial position embedding is incorporated into the features and processed by the self-attention model as \[F_{s}=W_{MSA}(LN(F_{c}+F_{pos}))\] \[W_{MSA}=softmax(QK^{T}/\sqrt{d}+B)V \tag{3}\] where \(W_{MSA}\) represents the calculation of window-based multi-head self-attention [34], and \(LN\) represents the normalization layer. \(F_{pos}\) represents the spatial position embedding, which is obtained by embedding the absolute position within a block. 
\(Q,K,V\) represent the query, key and value features in the calculation of multi-head self-attention, obtained by conducting linear projection to features at each layer (\(Q,K,V=\ Linear(LN(F_{c}+F_{pos}))\)), and \(B\) represents relative position bias as in [34]. Since the concatenated feature \(F_{c}\) contains information from both \(F_{t}\) and \(F_{t-1}\), the calculation of attention scores \((Q\cdot K)\) before normalization between features of two positions \((x,y)\) and \((x+\Delta x,y+\Delta y)\) can be formulated as the following Eq. (4) by substituting \(F_{c}\) with Eq. (2) and ignoring the position embedding first. \[F(x, y)\cdot F(x+\Delta x,y+\Delta y)=[f_{a}(F_{t}(x,y))+f_{b}(F_{t-1}(x,y))]\] \[\cdot[f_{a}(F_{t}(x+\Delta x,y+\Delta y))+f_{b}(F_{t-1}(x+\Delta x,y+\Delta y))] \tag{4}\] Fig. 2: (a) Structure of the proposed RDT based motion estimation and compensation. (b) Details of the Lewin transformer layer with W-MSA [34] used in RDT. where \(f_{a}\) and \(f_{b}\) represent the information mixing of \(F_{t}\) and \(F_{t-1}\) in the \(1\times 1\) convolution in Eq. (2). Note that here we do not concern the detailed function of \(f_{a}\) and \(f_{b}\) and simply use them to represent the information fusion process. Eq. (4) can be further turned into \[F\left(x,y\right)\cdot F(x+\Delta x,y+\Delta y)=\] \[f_{a}(F_{t}(x,y))\cdot f_{a}(F_{t}(x+\Delta x,y+\Delta y))\] \[+f_{a}(F_{t}(x,y))\cdot f_{b}(F_{t-1}(x+\Delta x,y+\Delta y))\] \[+f_{b}(F_{t-1}(x,y))\cdot f_{a}(F_{t}(x+\Delta x,y+\Delta y))\] \[+f_{b}(F_{t-1}(x,y))\cdot f_{b}(F_{t-1}(x+\Delta x,y+\Delta y)) \tag{5}\] It can be seen that the self-attention score calculation on the fused features covers the estimation of the similarity not only within each feature but also between features, which is important for motion estimation. On the other hand, position embedding introduces the spatial geometric information into the features. In the above self-attention score calculation, with the position embedding, it can learn to also consider the geometric distance among different positions in addition to the feature similarity. More importantly, it helps directly transfer the color features into geometric information. After the attention score calculation, the value features (\(V\)) are fused together based on the attention score. Since \(V\) is obtained with the position embedding, the final feature can be represented as \[F_{s}=\sum_{i\in B}\alpha_{i}V(F_{c}+F_{pos}) \tag{6}\] where \(i\in B\) represents the block (\(\left\{8\times 8\right\}\) in the experiments) of the self-attention calculation. It can be seen that by incorporating the similarity information contained in the attention score and the geometric information contained in the value feature, the output feature directly contains the desired geometric position information based on the similarity among features. This agrees with the motion estimation based on the similarity as discussed in the beginning of this subsection. Thus, the motion offsets between two features can be stably obtained by the above self-attention. To obtain the motion information which distributes in a large range from small motion of a fractional pixel to large motion of a few pixels, the Uformer architecture [39] is used. It directly calculates the self-attention based on pixel-wise features within a block instead of patchifying them, thus able to obtain detailed motion. 
On the other hand, to capture large motion based on global information, a U-shaped structure is used by down-sampling and up-sampling the features, and by concatenating the encoder feature to the output with a skip connection. In each layer of the Uformer, in addition to the self-attention calculation as in Eq. (3), the input feature is also added to the output feature, \(F_{a}=F_{s}+F_{c}\). A few convolutional layers are then used to strengthen the local processing of the features, known as locally-enhanced window (LeWin) Transformer layers [39]. \[F_{l}=Conv_{1\times 1}(DWConv_{3\times 3}(Conv_{1\times 1}(LN(F_{a}))))+F_{a} \tag{7}\] where \(DWConv\) represents depth-wise convolution. By using LeWin Transformer layers, the relationship between pixels in the reference frame and the current frame is effectively modeled within each window. The other layers in the U-shaped architecture take the same form and produce the final output feature \(F_{o}\), as shown in Fig. 2. Finally, a \(1\times 1\) convolution can be used to transform the output feature into motion offsets between the current frame and the reference frame. Since in video coding the motion information needs to be compressed into the bitstream and transmitted to the decoder, the final feature of the Uformer, instead of the motion offsets, is compressed with an encoder-decoder codec and then transformed into the motion offsets from the decoded feature. This motion encoding process can be expressed as \[F_{mv-o}=Enc(F_{o})\] \[b_{mv}=Entropy(f_{Q}(F_{mv-o}))\] \[\widehat{mv}_{off},\hat{M}_{c}=Conv_{1\times 1}(Dec(\hat{F}_{mv-o})) \tag{8}\] where \(Enc\) and \(Dec\) represent the encoder and decoder in the motion codec, respectively. Any motion codec can be used; here, for simplicity, the motion codec in FVC [2] is adopted. \(f_{Q}\) represents the quantization process generating the quantized feature \(\hat{F}_{mv-o}\), and \(Entropy\) refers to the entropy coding model. Finally, the decoded features are processed with a \(1\times 1\) convolution to generate the reconstructed motion offsets \(\widehat{mv}_{off}\). Unlike the conventional deformable transformer that uses the deformed features indicated by the motion offsets to enhance the current feature with the self-attention operation [37], here the deformed features are fused together to generate a prediction of the current feature. Therefore, an attention mask \(\hat{M}_{c}\) is obtained directly together with the motion offsets as a relaxed version of self-attention. \[F_{c-pre}=\sum_{i\in\left\{3\times 3\right\}}\hat{M}_{c}(i) Conv_{1\times 1}(\hat{F}_{t-1}((x_{i},y_{i})+\widehat{mv}_{off}(i)))\] \[\xrightarrow{relax}DConv_{3\times 3}(\hat{F}_{t-1},\widehat{mv}_{off},\hat{M}_{c}) \tag{9}\] where \(\hat{F}_{t-1}((x_{i},y_{i})+\widehat{mv}_{off}(i))\) represents the deformed (bilinearly interpolated) reference frame features, and \(Conv_{1\times 1}\) is a \(1\times 1\) convolution that generates the value feature. Summing the value features weighted by the attention \(\hat{M}_{c}\left(i\right)\) over the block produces the final prediction feature. This process is equivalent to a deformable convolution (DConv) with shared filter weights over the \(3\times 3\) locations, where \(\widehat{mv}_{off}\) and \(\hat{M}_{c}\) represent the offsets and confidence mask, respectively, and the multi-head number in the transformer is the group number in the DConv.

Fig. 3: (a) Structure of the proposed MGP. (b) Details of the fusion model used in MGP.
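As a hedged illustration of the compensation in Eq. (9), torchvision's modulated deformable convolution can play the role of the (relaxed) DConv that fuses the deformed reference features with the decoded offsets and confidence mask. The group number, channel count, and the sigmoid normalization of the mask are illustrative assumptions, not the configuration used in this paper.

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class RelaxedDeformableCompensation(nn.Module):
    """Warp and fuse reference features with decoded offsets and a confidence mask (cf. Eq. 9)."""
    def __init__(self, ch=64, kernel=3, groups=8):
        super().__init__()
        self.kernel = kernel
        self.groups = groups  # plays the role of the multi-head number in the transformer
        # learned (non-shared) value-projection weights, i.e. the "relaxed" deformable convolution
        self.weight = nn.Parameter(torch.randn(ch, ch // groups, kernel, kernel) * 0.01)
        self.bias = nn.Parameter(torch.zeros(ch))

    def forward(self, ref_feat, mv_off, mask):
        # ref_feat: (B, C, H, W)                     decoded reference feature  \hat{F}_{t-1}
        # mv_off:   (B, 2*groups*k*k, H, W)          decoded offsets            \widehat{mv}_off
        # mask:     (B, groups*k*k, H, W)            confidence                 \hat{M}_c
        return deform_conv2d(ref_feat, mv_off, self.weight, self.bias,
                             padding=self.kernel // 2, mask=torch.sigmoid(mask))
```

Here the offsets and mask would come from the \(1\times 1\) convolution on the decoded motion feature in Eq. (8); the sigmoid simply keeps the confidence in a valid range and could equally be replaced by a softmax over the \(3\times 3\) positions.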
For generality, we further relax the shared value-projection weights to non-shared weights, as in a convolution, thus turning the operation into a deformable convolution. This process, a deformable convolution with offsets and a confidence mask obtained from a Uformer, is thus coined the relaxed deformable transformer (RDT), unifying motion estimation and compensation in the realm of Transformers. ### _Multi-granularity prediction (MGP) feature generation based on Multi-frame Enhancement_ After motion estimation and compensation, a coarse-grained prediction feature is generated. However, using only the immediately previous frame cannot provide an accurate prediction of the current frame, especially for moving areas and occluded background. Predicting such information requires long-range temporal information, which can be partly provided by multiple reference frames. With the pixel-wise motion representation used for prediction generation in the existing LVC framework, it is difficult to directly generate a prediction by mixing the motion from multiple reference frames, or to exploit the multiple reference frames without significantly increasing the rate of the motion representation. On the other hand, the motion representation is obtained under rate-distortion optimization (RDO), where, instead of obtaining a high-rate motion representation that provides the best-quality prediction feature, a motion representation with the smallest RD cost is used to provide a decent-quality prediction at a relatively small rate. Therefore, the motion representation is suboptimal when considering only the quality of the prediction feature. To solve the above problems, a multi-granularity prediction generation method is proposed to fully exploit the multiple reference frames. It is developed based on the observation that the coarse-grained prediction feature, generated with the motion offsets encoded into the bitstream, already contains much information about the current frame and can be considered a noisy version of it. Unlike previous multi-reference prediction methods that focus on predicting the motion vectors of multiple reference frames or on directly fusing the past prediction features based on the past noisy motion information, we propose to take the coarse-grained prediction feature as an approximation of the current frame feature. Accordingly, the multiple reference frames are exploited in the manner of video denoising, where the motion information used in the process can be obtained at both the encoder and the decoder without being coded into the bitstream. The features of the multiple reference frames, including the immediately previous frame feature \(\hat{F}_{t-1}\), are first temporally aligned to the prediction feature \(F_{c-pre}\) and then fused with it to improve its quality. Taking \(\hat{F}_{t-1}\) as an example, the temporal alignment process is similar to the coarse-grained prediction feature generation above: \[F_{f-pre1}=RDT(\langle F_{c-pre},\hat{F}_{t-1}\rangle) \tag{10}\] where \(RDT\) represents the Relaxed Deformable Transformer described above. Note that here the motion representation is not coded with quantization and entropy coding, since the coarse-grained prediction feature \(F_{c-pre}\) and the reference frame feature \(\hat{F}_{t-1}\) are available at both the encoder and the decoder.
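Because both inputs of Eq. (10) are reconstructed features, the fine-grained alignment can be recomputed identically on both sides without writing anything to the bitstream. The schematic sketch below makes this explicit; the `rdt` callable stands for the motion estimation and compensation block of the previous subsection and is assumed, not spelled out.

```python
# Schematic, bit-free multi-reference enhancement (cf. Eq. 10), assuming an `rdt` callable
# that maps (target_feature, reference_feature) -> aligned reference feature.
def multi_reference_alignment(rdt, f_c_pre, ref_feats):
    """Align every reference feature to the coarse-grained prediction F_{c-pre}.

    rdt       : callable implementing the relaxed deformable transformer (assumed)
    f_c_pre   : coarse-grained prediction feature, available at encoder and decoder
    ref_feats : list of decoded reference features, e.g. [F_{t-1}, F_{t-2}, F_{t-3}]
    """
    fine_preds = []
    for ref in ref_feats:
        # no quantization or entropy coding here: both inputs are reconstructed features,
        # so the derived motion is reproducible at the decoder
        fine_preds.append(rdt(f_c_pre, ref))
    return fine_preds  # F_{f-pre1}, F_{f-pre2}, F_{f-pre3}
```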
Therefore, the motion representation encoded under RDO in the coarse-grained prediction feature generation only needs to describe the coarse motion to save bits, while the detailed motion representation and prediction feature can be further generated with this bit-free temporal alignment and enhancement process. The same process is applied to the other reference features to exploit long-range temporal information. In the experiments, three reference frames are used, with features \(\hat{F}_{t-1}\), \(\hat{F}_{t-2}\) and \(\hat{F}_{t-3}\), and the corresponding features enhanced with RDT are denoted by \(F_{f-pre1}\), \(F_{f-pre2}\) and \(F_{f-pre3}\), respectively. Such enhanced features can be considered fine-grained prediction features, which are generated without any rate cost for the motion representation. The final prediction feature is obtained by fusing the coarse-grained prediction feature and the multi-reference fine-grained prediction features. It is known that frames that are closer in the temporal dimension tend to have higher similarity. As a result, each reference feature contributes differently to the prediction feature, i.e., the importance of \(F_{f-pre(i)}\) varies. Moreover, in the spatial domain, the distribution of image details differs between flat regions and sharp edges, and also between moving regions and background, leading to variations in the spatial fusion of the different features. To address this, a spatial-channel attention similar to CBAM [40] is used to fuse the features with different channel and spatial weights. This fusion process is illustrated in Fig. 3 and can be described as \[F_{enh\_cat} =\langle F_{c-pre},F_{f-pre1},F_{f-pre2},F_{f-pre3}\rangle\] \[F_{attn\_ch} =C\_attn(F_{enh\_cat})\cdot F_{enh\_cat}\] \[F_{enh\_conv} =ReLU(Conv_{1\times 1}(F_{attn\_ch}))\] \[F_{attn\_sp} =S\_attn(F_{enh\_conv})\cdot F_{enh\_conv}\] \[F_{m-pre} =F_{c-pre}+F_{attn\_sp} \tag{11}\] where \(C\_attn\) and \(S\_attn\) represent the channel and spatial attention in CBAM [40], respectively. The \(1\times 1\) convolution reduces the channel number of \(F_{attn\_ch}\) to that of \(F_{c-pre}\).

Fig. 4: Structure of the proposed SFD-Transformer based residual compression.

The final prediction feature \(F_{m-pre}\) is obtained by adding the fused feature to the coarse-grained prediction feature, since the quality of the coarse-grained prediction feature is rather stable given the estimated and encoded motion representation. Finally, the residual feature is obtained by subtracting the enhanced prediction feature from the current frame feature, \(F_{resi}=F_{t}-F_{m-pre}\). ### _Spatial Feature Distribution (SFD) Prior based Residual Compression_ After the residual is generated by inter-prediction, residual compression is further performed to remove the spatial redundancy within the residual. Existing LVC methods perform residual compression entirely in the spatial dimension, neglecting the inherent spatial feature distribution information that may be contained in the prediction frame. Taking an image with spatially repetitive patterns as an example, the spatial feature distribution in the residual, including the feature similarity at different locations, still resembles the distribution of the prediction features.
In other words, the current inter-prediction only removes the pixel-to-pixel temporal redundancy with the subtraction operation or a locally processed context operation, while the redundancy of the spatial feature distribution between temporal frames is ignored and can be further reduced. Therefore, a Spatial Feature Distribution prior based Transformer (SFD-Transformer) is developed for residual compression, where the spatial prior present in the prediction feature is used to guide the self-attention model in compressing the spatial residual features. The framework of the proposed SFD-Transformer is shown in Fig. 4. An encoder-decoder Transformer architecture is used for residual encoding and decoding. Firstly, at the encoder, the residual is divided into patches and embedded through a linear layer. Then the SFD prior based self-attention is calculated. Specifically, in the calculation of the self-attention scores, the relationship among the prediction features is also considered via a feature distance as \[\mathrm{Attn}_{st-r}=softmax(Q_{R}K_{R}+\mathrm{pos}_{b}+Q_{p}K_{p}+mod)V_{R} \tag{12}\] where \(Q_{R}\) and \(K_{R}\) represent the query and key features of the current frame, respectively, while \(Q_{P}\) and \(K_{P}\) represent those of the corresponding prediction features, obtained with the same linear embedding functions. The similarity \(Q_{P}K_{P}\) serves as the spatial feature distribution prior. \(\mathrm{pos}_{b}\) and \(mod\) represent the two-dimensional relative position coding and the learnable adjustment modulator, respectively. This updated self-attention calculation is used in all the Transformer layers at both the encoder and the decoder. \(Q_{p}\) and \(K_{p}\) at the decoder are the same as at the encoder, providing the same spatial feature distribution information for decoding the residual. Following the configuration of the existing Transformer based image compression model [26], the number of self-attention blocks in each layer of the encoder is set to [2, 2, 6, 2]. The resolution is reduced by down-sampling after each stage. This can be expressed as \[F_{resi(i)}=f_{PM}(\mathrm{Attn}_{st-r(i-1)}(F_{resi(i-1)},F_{m-pre(i-1)})) \tag{13}\] where \(\mathrm{Attn}_{st-r(i-1)}()\) refers to the SFD prior based self-attention calculation of the \((i-1)\)th layer. \(f_{PM}\) represents the patch merging that down-samples the feature in spatial resolution [34], halving the patch grid in size and doubling the channels. Accordingly, \(Q_{p}\) and \(K_{p}\) are also down-sampled layer by layer with a linear projection to match the size of the corresponding residual feature maps, so that the SFD prior fits into the self-attention calculation of each layer. Taking \(Q_{p(i)}\) as an example, \[Q_{p(i)}=LN(Linear(Reshape(Q_{p(i-1)},\frac{1}{2}))) \tag{14}\] where \(Reshape(Q_{p(i-1)},\frac{1}{2})\), as in [45], represents reshaping the input \(Q_{p(i-1)}\) to the size \(\frac{HW}{4}\times(4C)\), and \(Linear()\) is a linear projection that reduces the channel number. \(K_{p(i)}\) is processed in the same way. The encoded feature after the encoder then undergoes entropy encoding to generate the bitstream. \[b_{resi}=Entropy(f_{Q}(F_{resi-o})) \tag{15}\] The bitstream is transmitted to the decoder and entropy decoding is performed to generate the initial residual feature.
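Both the encoder and the decoder layers use the SFD prior based attention of Eq. (12); a minimal sketch of the score computation is given below. The \(\sqrt{d}\) scaling and the handling of heads are simplifications, and all names are illustrative rather than the paper's implementation.

```python
import torch

def sfd_prior_attention(q_r, k_r, v_r, q_p, k_p, pos_b=0.0, mod=0.0):
    """Self-attention on residual tokens, biased by the prediction-feature similarity (cf. Eq. 12).

    q_r, k_r, v_r : (B, N, d) query/key/value of the residual feature
    q_p, k_p      : (B, N, d) query/key of the co-located prediction feature F_{m-pre}
    pos_b, mod    : relative position bias and learnable modulator (scalars or (N, N) tensors)
    """
    d = q_r.size(-1)
    score_r = q_r @ k_r.transpose(-2, -1) / d ** 0.5   # residual self-similarity
    score_p = q_p @ k_p.transpose(-2, -1) / d ** 0.5   # spatial feature distribution prior
    attn = torch.softmax(score_r + score_p + pos_b + mod, dim=-1)
    return attn @ v_r
```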
Then SFD prior based Transformer layers are used to further decode the initial residual features and reconstruct the residual feature at the original resolution, \(\hat{F}_{resi}\). The prediction feature \(F_{m-pre}\) is then added back to reconstruct the input feature. \[\widetilde{F}_{t}=F_{m-pre}+\hat{F}_{resi} \tag{16}\] Similarly to FVC [2], a non-local attention mechanism is further used to enhance the reconstructed feature with the multi-reference reconstructed features. Finally, the enhanced reconstructed feature is transformed back to the pixel domain to generate the reconstructed frame \(\hat{X}\) with a few ResBlocks, in the same way as in FVC. \begin{table} \begin{tabular}{l r r r r r r r} \hline \hline Method & UVG & MCL-JCV & HEVC B & HEVC C & HEVC D & HEVC E & Average \\ \hline VTM-17.0 [41] & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ HM-16.25 [42] & 36.4 & 41.5 & 38.8 & 36.0 & 33.7 & 44.0 & 38.4 \\ ECM-5.0 [43] & -10.0 & -12.2 & **-11.5** & -13.4 & **-13.5** & -10.9 & -11.92 \\ CANF-VC [44] & 73.0 & 70.8 & 64.4 & 76.2 & 63.1 & 118.0 & 77.6 \\ DCVC [20] & 166.1 & 121.6 & 123.2 & 143.2 & 98.0 & 266.1 & 153.0 \\ DCVC-TCM [18] & 44.1 & 51.0 & 40.2 & 66.3 & 37.0 & 82.7 & 53.6 \\ DCVC-HEM [21] & 1.1 & 8.6 & 5.1 & 22.2 & 2.4 & 20.5 & 10.0 \\ FVC [2] & 155.0 & 171.6 & 176.5 & 182.41 & 164.7 & 273.9 & 171.4 \\ \hline PROPOSED & **-20.4** & **-16.4** & -10.1 & **-16.0** & -2.2 & **-15.8** & **-13.5** \\ \hline \hline \end{tabular} \end{table} TABLE I: Result comparison in terms of BD-Rate (%) measured with PSNR. The anchor is VTM. The loss function of the proposed method is \[L\ =R+\lambda D=R_{mv}+\ R_{resi}+\lambda d(X_{t},\hat{X}_{t}) \tag{17}\] where \(R_{mv}\) and \(R_{resi}\) represent the bits introduced by compressing the offsets map \(\mathrm{mv}_{off}\) and the residual feature \(F_{resi}\), respectively. \(d(X_{t},\hat{X}_{t})\) is the distortion between the original frame \(X_{t}\) and the reconstructed frame \(\hat{X}_{t}\), and \(\lambda\) is the corresponding Lagrange multiplier. For the first 15 epochs, the distortion \(d(X_{m-pre},X_{t})\) between the enhanced prediction frame \(X_{m-pre}\) (generated from the enhanced prediction feature \(F_{m-pre}\) with a simple convolutional layer) and the original frame \(X_{t}\) is also used to accelerate the training of the RDT and MGP prediction modules. After that, Eq. (17) is used to continue training. ## IV Experiments ### _Experimental Settings_ #### IV-A1 Dataset The Vimeo-90K dataset [46] is used as the training dataset, as in existing LVC methods [2, 18, 21]. It contains 89,800 video sequences, and each sequence includes 7 frames. The sequences are randomly cropped to a resolution of 256 Γ— 256 as input for training. The first frame in a GOP, namely the I frame, is compressed by Cheng2020 in CompressAI [47]. Video sequences from three sets of test data are used to evaluate the performance of our model, including the HEVC B-E class sequences [48], UVG [49], and MCL-JCV [50]. #### IV-A2 Training and Testing Details The PyTorch platform is used for implementation. The Adam [51] optimizer is used with the batch size set to 8.
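For clarity, the rate-distortion objective of Eq. (17), together with the auxiliary prediction distortion used during the first 15 epochs, can be sketched as below. The per-pixel rate normalization, MSE distortion, and all tensor names are illustrative assumptions standing in for the actual entropy-model outputs.

```python
import torch.nn.functional as F

def rd_loss(x_t, x_hat, bits_mv, bits_resi, lam, num_pixels,
            x_m_pre=None, warmup=False):
    """L = R_mv + R_resi + lambda * d(X_t, X_hat)   (cf. Eq. 17).

    bits_mv, bits_resi : estimated bits for the offsets map and the residual feature
    num_pixels         : B * H * W, used to express the rate in bits per pixel
    x_m_pre            : enhanced prediction frame, only supervised during warm-up
    """
    rate = (bits_mv + bits_resi) / num_pixels
    distortion = F.mse_loss(x_hat, x_t)
    loss = rate + lam * distortion
    if warmup and x_m_pre is not None:
        # extra term d(X_{m-pre}, X_t) used for the first 15 epochs to train RDT and MGP
        loss = loss + lam * F.mse_loss(x_m_pre, x_t)
    return loss
```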
The hyperparameter \(\lambda\) is set to four different values (\(\lambda\) = 256, 512, 1024, 2048), corresponding to four models at different rates. To construct the multi-reference frame structure in the proposed MGP, when the reference frame buffer has fewer than three frame features, the feature of the latest frame is duplicated until there are enough reference frames. Peak Signal-to-Noise Ratio (PSNR) and Multi-Scale Structural Similarity (MS-SSIM) are used to evaluate the distortion between the reconstructed video and the ground-truth video. BD-Rate savings over both PSNR and MS-SSIM are adopted for evaluation, and all evaluations are performed in the RGB space, where the YUV videos are converted to RGB with FFMPEG. We evaluate 96 frames for each video in the test sets, with an intra period of 32 frames. The other settings, including the low-delay configurations, are the same as in [3].

\begin{table} \begin{tabular}{l r r r r r r r} \hline \hline Method & UVG & MCL-JCV & HEVC B & HEVC C & HEVC D & HEVC E & Average \\ \hline VTM-17.0 [41] & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ HM-16.25 [42] & 31.1 & 38.8 & 36.6 & 35.2 & 33.0 & 41.1 & 36.0 \\ ECM-5.0 [43] & -9.1 & -11.1 & -10.2 & -11.7 & -11.0 & -9.9 & -10.5 \\ CANF-VC [44] & 46.5 & 26.0 & 43.5 & 30.9 & 17.9 & 173.0 & 56.3 \\ DCVC [20] & 64.9 & 27.5 & 54.4 & 39.7 & 15.2 & 210.4 & 68.7 \\ DCVC-TCM [18] & 1.0 & \(-\)10.8 & \(-\)11.7 & \(-\)15.2 & \(-\)29.0 & 16.7 & 8.85 \\ DCVC-HEM [21] & \(-\)25.2 & \(-\)36.3 & \(-\)38.0 & \(-\)38.3 & \(\mathbf{-48.1}\) & \(-\)25.8 & \(-\)35.3 \\ FVC [2] & 144.8 & 151.9 & 150.8 & 119.9 & 116.2 & 244.7 & 154.7 \\ \hline PROPOSED & **-36.0** & **-46.5** & **-43.1** & **-47.5** & -34.4 & **-36.8** & **-40.7** \\ \hline \hline \end{tabular} \end{table} TABLE II: Result comparison in terms of BD-Rate (%) measured with MS-SSIM. The anchor is VTM.

Fig. 5: Rate-distortion curve comparisons under PSNR and MS-SSIM, respectively.

### _Comparison with State-of-the-art Methods_ To evaluate the performance, the proposed method is compared with existing state-of-the-art methods, including HM-16.25 [42], VTM-17.0 [41], ECM-5.0 [43], FVC [2], DCVC [3], DCVC-HEM [21], and DCVC-TCM [18]. Among these methods, HM, VTM and ECM are traditional block-based video compression methods, while the others are LVC methods. The results are shown in Table I and Table II for the PSNR and MS-SSIM based BD-Rate comparisons, respectively. The result of VTM is used as the anchor for the BD-Rate calculation. From Table I, it can be seen that the proposed method achieves better performance than the existing LVC methods. It obtains an average bitrate saving of 13.5% compared to the anchor, while the other LVC methods perform worse than (or similarly to) VTM. The RD curve comparisons are shown in Fig. 5, where our method performs the best among the compared methods. ### _Ablation Study_ Ablation experiments are further conducted to validate the effectiveness of each proposed module in our method. The baseline without the proposed modules is the FVC model [2], and the HEVC dataset is used for evaluation. **RDT based motion estimation and compensation.** On top of the baseline FVC, RDT is first used to replace the CNN-based motion estimation and compensation component. The result is shown in Table III. It can be seen that, compared with the baseline FVC, a significant improvement in terms of BD-Rate, an 18.9% reduction on average, is achieved with our RDT module.
This indicates that the proposed RDT based motion estimation method can obtain higher-quality motion information from the color space. As shown in Fig. 6, a comparison between Fig. 6c and Fig. 6d shows that the ball in our coarse-grained prediction picture obtained with RDT is clearer and more complete than in the FVC prediction. This further demonstrates the advantage of the proposed RDT in capturing motion details and stably transforming features between the color space and the geometric space. **MGP based on multi-frame enhancement**. MGP is further added on top of FVC and RDT to validate the effectiveness of using multiple reference frames to enhance the prediction generation. As shown in Table III, the performance is further improved by 24.6% in terms of BD-Rate. This demonstrates the effectiveness of using MGP to exploit the multi-reference information for prediction frame refinement. With sufficient temporal information, the quality of the prediction frame is further improved. A visual comparison of Fig. 6d and Fig. 6e illustrates that the prediction frame with our MGP module contains more temporal motion details and sharper edges than the FVC prediction and the coarse-grained prediction.

Fig. 6: Visual results comparison.

\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline RDT & MGP & SFD-T & HEVC B & HEVC C & HEVC D & HEVC E & Average \\ \hline βœ— & βœ— & βœ— & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ βœ“ & βœ— & βœ— & -17.8 & -15.3 & -18.2 & -24.2 & -18.9 \\ βœ“ & βœ“ & βœ— & -41.4 & -44.0 & -35.3 & -53.3 & -43.5 \\ βœ“ & βœ“ & βœ“ & -67.0 & -71.8 & -61.1 & -75.1 & -68.7 \\ \hline \hline \end{tabular} \end{table} TABLE III: Ablation study results, in terms of BD-Rate (%) comparison measured with PSNR.

**SFD-T based residual compression.** The contribution of the proposed SFD-T module can be observed by comparing the result of RDT & MGP with that of the full model in the last two rows of Table III. It can be observed that the proposed SFD-T provides a significant 25.2% BD-Rate gain over the CNN based residual compression in FVC. This demonstrates that the SFD-T module effectively reduces the redundancy in the spatial feature distribution embedded in the temporal prediction, improving the residual compression efficiency. Notably, on the HEVC test sets, the proposed method with all the modules surpasses the baseline FVC by an average of 68.7% BD-Rate saving, demonstrating the effectiveness of our method. ## V Conclusion In this paper, we propose a novel Spatial-Temporal Transformer based video compression (STT-VC) framework, containing RDT, MGP and SFD-T, designed for inter-frame coding. RDT is developed for high-quality motion estimation and compensation by producing a stable feature transformation from the color space to the geometric motion space. MGP further enhances the prediction feature by exploiting multi-reference frame information; it takes full advantage of the coarse-grained prediction that is generated by RDT and characterized by the coded motion information. SFD-T is then designed to improve the residual compression efficiency by leveraging joint temporal-spatial feature distribution priors to further reduce the spatial redundancy in the residuals. Experimental results demonstrate that the proposed STT-VC framework outperforms VTM by 13.5% on average over all tests and achieves the best performance. Ablation studies confirm the effectiveness of each proposed module in the framework, with a coding gain of 68.7% BD-Rate saving against the baseline.
2310.20475
Linked Papers With Code: The Latest in Machine Learning as an RDF Knowledge Graph
In this paper, we introduce Linked Papers With Code (LPWC), an RDF knowledge graph that provides comprehensive, current information about almost 400,000 machine learning publications. This includes the tasks addressed, the datasets utilized, the methods implemented, and the evaluations conducted, along with their results. Compared to its non-RDF-based counterpart Papers With Code, LPWC not only translates the latest advancements in machine learning into RDF format, but also enables novel ways for scientific impact quantification and scholarly key content recommendation. LPWC is openly accessible at https://linkedpaperswithcode.com and is licensed under CC-BY-SA 4.0. As a knowledge graph in the Linked Open Data cloud, we offer LPWC in multiple formats, from RDF dump files to a SPARQL endpoint for direct web queries, as well as a data source with resolvable URIs and links to the data sources SemOpenAlex, Wikidata, and DBLP. Additionally, we supply knowledge graph embeddings, enabling LPWC to be readily applied in machine learning applications.
Michael FΓ€rber, David Lamprecht
2023-10-31T14:09:15Z
http://arxiv.org/abs/2310.20475v1
# Linked Papers With Code: The Latest in Machine Learning as an RDF Knowledge Graph ###### Abstract In this paper, we introduce _Linked Papers With Code_ (LPWC), an RDF knowledge graph that provides comprehensive, current information about almost 400,000 machine learning publications. This includes the tasks addressed, the datasets utilized, the methods implemented, and the evaluations conducted, along with their results. Compared to its non-RDF-based counterpart _Papers With Code_, LPWC not only translates the latest advancements in machine learning into RDF format, but also enables novel ways for scientific impact quantification and scholarly key content recommendation. LPWC is openly accessible at [https://linkedpaperswithcode.com](https://linkedpaperswithcode.com) and is licensed under CC-BY-SA 4.0. As a knowledge graph in the Linked Open Data cloud, we offer LPWC in multiple formats, from RDF dump files to a SPARQL endpoint for direct web queries, as well as a data source with resolvable URIs and links to the data sources SemOpenAlex, Wikidata, and DBLP. Additionally, we supply knowledge graph embeddings, enabling LPWC to be readily applied in machine learning applications. Scholarly Data, Open Science, Ontology Engineering, Machine Learning By incorporating FAIR principles that focus on the availability and reuse of research data and artifacts, we expect LPWC to improve the discoverability and applicability of machine learning research results. We make the code used for knowledge graph creation and embedding generation available online ([https://github.com/davidlamprecht/linkedpaperswithcode](https://github.com/davidlamprecht/linkedpaperswithcode)). In the following, we present LPWC in detail. ## 2 Linked Papers with Code **Linked Papers With Code Ontology.** First, we develop an ontology that adheres to the best practices of ontology engineering and incorporates as much existing vocabulary as possible. Given that the PWC data dump is sourced directly from the PWC website and thus lacks a standardized schema and comprises diverse JSON objects, it was infeasible to directly model it within an OWL/RDF framework. Consequently, we construct a novel semantic schema to model the data. An overview of the entity types, object properties, and data type properties can be found in Figure 1. The LPWC ontology encompasses _13 entity types_ and _47 relationship types_. In addition to the ontology, which is available as an OWL file, we provide a VoID file, following the Linked Open Data good practices for describing linked datasets. **Linked Papers With Code Knowledge Graph.** PWC provides access to its data via a user-friendly, human-readable website. In addition, it offers daily JSON data dumps.1 However, there are several aspects that currently make using the data difficult: 1. There is a lack of semantic interoperability. Entities, such as authors or AI models, are represented as strings without unique IDs. This prevents effective linking of data and creation of knowledge graphs. 2. Due to the complexity of the data, modeling in JSON format proves difficult, especially when processing or querying the data. This issue becomes particularly apparent with the evaluation tables, which are nested within a JSON structure up to 19 levels deep. This results in significant data redundancy within the file. In contrast, a graph representation provides a more intuitive and manageable way of modeling.
3. The data, originally designed for a human-readable interface, uses markdown for natural language descriptions of entities, which may not be optimal when the data is processed with NLP methods or displayed outside of the website. Footnote 1: See [https://github.com/paperswithcode/paperswithcode-data](https://github.com/paperswithcode/paperswithcode-data) _Data Transformation._ To overcome these limitations, we convert the JSON files from the PWC data dump into an RDF knowledge graph based on the developed ontology. This requires major changes in the data formatting and data modeling. In the transformation process we, among other steps, (1) assign unique HTTP URIs to all entities, (2) convert all markdown text to plain text, and (3) link the entities to other scholarly data sources in the LOD cloud. _Author Name Disambiguation._ The disambiguation of author names given as strings is a crucial step on top of the pure data transformation. Specifically, we develop an efficient two-step method to link the 1,471,006 authors in LPWC to entities in SemOpenAlex, a massive RDF dataset modeling the academic landscape with its publications, authors, sources, and institutions, via its public SPARQL endpoint [3]. We leverage LPWC author names and paper titles for the disambiguation. The first step involves exact name matching and publication title substring comparison. If no match is found, the second step employs a variant search of LPWC paper titles in SemOpenAlex works, and author matching based on fuzzy similarity techniques. This process yields 947,709 links to SemOpenAlex entities. The remaining 523,297 author names are represented in LPWC using the lpwc:authorName property.

Figure 1: _Schema of Linked Papers With Code._

_Creating owl:sameAs statements_. We further link all conferences modeled in LPWC to DBLP. Moreover, we successfully map 267,314 papers (71% of all papers in LPWC) to SemOpenAlex works, utilizing variations of the LPWC paper titles. Lastly, we are able to create 158 mappings (2% of all datasets) between datasets modeled in LPWC and datasets modeled in Wikidata. **Key Statistics**. Our knowledge graph's SPARQL endpoint enables the direct computation of interesting statistics. For instance, Table 1 shows the frequency of entities across entity types. Additionally, Figure 2 illustrates how to compare conferences (here: NAACL, EMNLP, ACL) based on the evaluation metrics used in their papers. **Knowledge Graph Embeddings.** To enable additional use cases, we compute knowledge graph embeddings for LPWC. Embeddings have proven to be valuable as implicit knowledge representations in various scenarios. We train the embeddings based on state-of-the-art embedding techniques such as TransE, DistMult, ComplEx, and RotatE [4, 5]. The training process involves a maximum of 900 epochs, implementing early stopping based on the mean rank calculated on the validation sets at intervals of 300 epochs. Among the evaluated techniques, TransE shows the best results. Therefore, we provide the TransE-based embedding vectors for all entities and relations online, together with all our evaluation results in our repository. Notably, our provided embeddings are in line with state-of-the-art results on benchmark datasets with similar characteristics in terms of the number of relations, triples, and entities [4, 5]. **Use Case Examples.** LPWC can enhance existing use cases while also enabling the development of new ones. In the following, we highlight some potential use cases:
1. _Machine Learning Data Analysis_: LPWC is a novel scientific knowledge graph covering the current field of machine learning. Complex analyses, such as comparing conferences or detecting new research topics, become possible in this way. 2. _Scholarly LOD Cloud Enrichment_: LPWC is highly integrated with the LOD cloud and connected to multiple data sources such as SemOpenAlex, Wikidata, and DBLP. This enables efficient data integration and enhanced research data management in alignment with the FAIR principles. 3. _Academic Recommender Systems:_ Given the information overload in science, scientific recommender systems are becoming increasingly important. LPWC and the provided knowledge graph embeddings can be used directly to build state-of-the-art recommender systems for key scientific content. With LPWC, these systems can also recommend items such as datasets, methods, and conferences. ## 3 Conclusion In this paper, we presented _Linked Papers with Code_, the first RDF knowledge graph with detailed information about the machine learning landscape, consisting of close to 8 million RDF triples. We outlined the creation process of this dataset, discussed its characteristics, and examined the procedure for training state-of-the-art knowledge graph embeddings. In future work, we aim to leverage the extensive interconnectivity between LPWC and SemOpenAlex to facilitate large-scale key content extraction from publications.
2310.20313
Perturbing Masses: A Study of Centered Co-Circular Configurations in Power-Law n-Body Problems
This research investigates centered co-circular central configurations in the general power-law potential $n$-body problem. Firstly, there are no such configurations when all masses are equal, except for two; secondly, unless all masses are equal, no such configurations exist when masses can be divided into two sets of equal masses. We adapt Wang's criterion and incorporate insights on cyclic quadrilaterals, alongside mathematical induction.
Zhengyang Tang, Shuqiang Zhu
2023-10-31T09:38:40Z
http://arxiv.org/abs/2310.20313v1
# Perturbing masses: a study of centered co-circular configurations in power-law n-body problems ###### Abstract. This research investigates centered co-circular central configurations in the general power-law potential \(n\)-body problem. Firstly, no such configurations exist when all masses are equal except for two; secondly, unless all masses are equal, no such configurations exist when the masses can be divided into two sets of equal masses. We adapt Wang's criterion and incorporate insights on cyclic quadrilaterals, alongside mathematical induction. **Keywords:** Centered co-circular central configurations; Cyclic polygon; Power-law \(n\)-body problem. **2020 AMS Subject Classification**: 70F10, 70F15. ## 1. Introduction In the Newtonian \(n\)-body problem, there is a well-known conjecture that the regular \(n\)-gon with equal masses is the unique co-circular central configuration whose center of mass is the center of the circle. We consider this conjecture in the general power-law potential \(n\)-body problem for systems with mixed mass distributions. Our findings reveal that, when all masses are equal except for two, or when the masses can be divided into two groups of equal masses and are not all equal, no co-circular central configuration with the center of mass at the circle's center exists. This result marks a new step towards affirming the conjecture. The Newtonian \(n\)-body problem involves characterizing the dynamic behavior of solutions to Newton's equations, \(m_{k}\ddot{q}_{k}=\frac{\partial U}{\partial q_{k}},k=1,2,\cdots,n\), where \(U=\sum_{i<j}\frac{m_{i}m_{j}}{|q_{i}-q_{j}|}\). Though initially addressed by Newton and explored by mathematicians over the centuries, the problem remains largely unsolved for \(n>2\). Central configurations, which by definition are specific arrangements of the particles, have emerged as pivotal in understanding the dynamics of the \(n\)-body problem. They are relevant in various respects, including homographic solutions, the analysis of collision orbits, and the bifurcation of integral manifolds (cf. [12, 13, 17]). The main focus in the study of central configurations is the following problem: Is the number of relative equilibria (planar central configurations) finite, in the Newtonian \(n\)-body problem, for any choice of positive real numbers \(m_{1},\cdots,m_{n}\) as the masses? It was proposed by Chazy [3] and Wintner [17], and was listed by Smale as the sixth problem on his list of problems for the 21st century [14]. Euler and Lagrange solved this finiteness question for \(n=3\). Hampton and Moeckel [9] gave an affirmative answer for the case of \(n=4\). Albouy and Kaloshin [2] gave an important partial answer to the question for \(n=5\). We refer the reader to the excellent reviews of this problem by Hampton and Moeckel [9] and by Albouy and Kaloshin [2]. In this paper, the co-circular central configuration whose center of mass is the center of the circle will be called a _centered co-circular central configuration_, following the terminology in [8]. It is easy to see that any regular polygon with equal masses makes a centered co-circular central configuration, [11, 15].
In order to answer the question, _Do there exist planar choreography solutions whose masses are not all equal?_, Chenciner proposed another question in [4]: _Is the regular polygon with equal masses the unique centered co-circular central configuration?_ The question was also included in the well-known list of open problems on the classical \(n\)-body problem compiled by Albouy, Cabral and Santos [1]. Hampton's work in [7] provided a positive answer for the case of \(n=4\). The case of \(n=5\) was addressed in [10]. Wang's recent research in [16] confirmed a positive answer for \(n=5\) and \(n=6\). This intriguing question, like many others in celestial mechanics, has also been explored in the context of the general power-law potential \(n\)-body problem, where the potential takes the form: \[U_{\alpha}=\sum_{i<j}\frac{m_{i}m_{j}}{|q_{i}-q_{j}|^{\alpha}}.\] Notably, \(\alpha=1\) corresponds to the Newtonian \(n\)-body problem, and the limiting case \(\alpha=0\) corresponds to the \(n\)-vortex problem. Indeed, for the limiting case of \(\alpha=0\), Cors, Hall, and Roberts in [6] have established an affirmative answer to Chenciner's question for any \(n\). For \(\alpha>0\), Wang's work in [16] gave a positive answer for \(n=3\) and \(n=4\), and furthermore, it introduced a valuable criterion for determining the existence of centered co-circular central configurations. Another interesting approach to Chenciner's question was initiated by Hampton in [8], where he proved that there are no centered co-circular central configurations formed by \(n\) equal masses plus one infinitesimal mass in the case of \(\alpha=1\); in other words, he proved the nonexistence of such configurations for masses in the form of \(n+\epsilon\). This result was subsequently expanded upon by Corbera and Valls in [5] to general power-law potentials and masses in the form of \(n+1\), i.e., \(n\) equal masses plus one arbitrary mass. The goal of this paper is to study the existence of centered co-circular central configurations for masses in the form of \(n+1+1\) and \(n+k\). More precisely, we show: **Theorem 1**.: _In the general power-law potential \(n\)-body problem, no centered co-circular central configurations exist where all masses are equal except for two._ **Theorem 2**.: _In the general power-law potential n-body problem, when masses can be grouped into two sets of equal masses, no centered co-circular central configurations exist unless all masses are equal._ Our method involves refining and extending Wang's criterion [16], along with an original result concerning cyclic quadrilaterals (see Lemma 4). Notably, our approach also incorporates the use of mathematical induction. The paper is structured as follows. In Section 2, we briefly review the notion of centered co-circular central configurations and list several useful lemmas. In Section 3, we prove Theorem 1 and Theorem 2. ## 2. Basic settings and useful lemmas Suppose that there are n positive masses represented by \(\mathbf{m}=(m_{1},m_{2},\ldots,m_{n})\) placed around a unit circle centered at the origin in the complex plane. Their positions are given by \(\mathbf{q}=(q_{1},q_{2},\ldots,q_{n})\) in \(\mathbb{C}^{n}\), with each position defined as \(q_{j}=e^{\sqrt{-1}\theta_{j}}=\cos\theta_{j}+\sqrt{-1}\sin\theta_{j}\).
Without loss of generality, assume that \(\theta_{j}\) falls within the range \((0,2\pi]\), and \[0<\theta_{1}<\theta_{2}<\cdots<\theta_{n}\leq 2\pi.\] We also denote the positions by \(\theta=(\theta_{1},\ldots,\theta_{n}).\) In this way, the mass vector determines the order of the masses on the circle. Now, the potential \(U_{\alpha}\) is \[U_{\alpha}(\mathbf{m},\theta)=\sum_{j<k}\frac{m_{j}m_{k}}{r_{jk}^{\alpha}},\] where the distance between masses \(j\) and \(k\) is given by \(r_{jk}\): \[r_{jk}=\left|2\sin\frac{\theta_{j}-\theta_{k}}{2}\right|=\sqrt{2-2\cos\left( \theta_{j}-\theta_{k}\right)}.\] It is a centered co-circular central configuration if \[\sum_{j\neq k}\frac{m_{j}(q_{j}-q_{k})}{r_{jk}^{\alpha+2}}+\frac{\lambda}{ \alpha}q_{k}=0,\quad k\in\{1,\ldots,n\}.\] Projecting the equations on \((-\sin\theta_{k},\cos\theta_{k})\) and \((\cos\theta_{k},\sin\theta_{k})\), [6], an equivalent form is found as \[\frac{\partial}{\partial\theta_{k}}U_{\alpha}=0,\ \ \frac{\partial}{\partial m _{k}}U_{\alpha}=\sum_{j\neq k}\frac{m_{j}}{r_{jk}^{\alpha}}=\frac{2\lambda}{\alpha},\ k=1, \ldots,n. \tag{1}\] The central configuration equations are invariant under rotation. To remove the symmetry, we specify that \(\theta_{n}=2\pi\). Let \(\mathcal{K}_{0}=\left\{\theta:0<\theta_{1}<\theta_{2}<\ldots<\theta_{n}=2\pi \right\},\) \(\mathcal{CC}_{0}=\left\{\left(\mathbf{m},\theta\right)\ \text{satisfying}\ (1),\ \theta\in\mathcal{K}_{0}\right\}.\) **Lemma 1** ([6]).: _For any \(\mathbf{m},\) there is a unique point in \(\mathcal{K}_{0}\) satisfying \(\frac{\partial}{\partial\theta_{k}}U_{\alpha}=0,k=1,\ldots,n.\) Moreover, the critical point is a minimum, denoted by \(\theta_{\mathbf{m}}\)._ The dihedral group, \(D_{n}\), acts on the set \(\mathbb{R}_{+}^{n}\times\mathcal{K}_{0}\) as follows. Denote \[P=\left(\begin{array}{cccccc}0&1&0&\ldots&0&0\\ 0&0&1&\ldots&0&0\\.&.&.&\ldots&.&.\\ 0&0&0&\ldots&0&1\\ 1&0&0&\ldots&0&0\end{array}\right),\ \ S=\left(\begin{array}{cccccc}0&0&\ldots&0&1&0\\ 0&0&\ldots&1&0&0\\.&.&\ldots&.&.&.\\ 1&0&\ldots&0&0&0\\ 0&0&\ldots&0&0&1\end{array}\right),\] \[\mathcal{P}=\left(\begin{array}{cccccc}-1&1&0&\ldots&0&0\\ -1&0&1&\ldots&0&0\\.&.&.&\ldots&.&.\\ -1&0&0&\ldots&0&1\\ 0&0&0&\ldots&0&1\end{array}\right),\ \ \mathcal{S}=\left(\begin{array}{cccccc}0&0& \ldots&0&-1&1\\ 0&0&\ldots&-1&0&1\\.&.&\ldots&.&.&.\\ -1&0&\ldots&0&0&1\\ 0&0&\ldots&0&0&1\end{array}\right).\] The action of \(D_{n}\) on \(\mathbb{R}_{+}^{n}\) is by the matrix group generated by \(P,S\), and the action of \(D_{n}\) on \(\mathcal{K}_{0}\) is by the matrix group generated by \(\mathcal{P},\mathcal{S}\). For any \(g=P^{h}S^{l}\in D_{n}\), letting \(\hat{g}=\mathcal{P}^{h}\mathcal{S}^{l}\), define the action of \(D_{n}\) on \(\mathbb{R}_{+}^{n}\times\mathcal{K}_{0}\) by \[g\cdot(\mathbf{m},\theta)=(g\mathbf{m},\hat{g}\theta).\] **Lemma 2**.: _Assume that \(\left(\mathbf{m},\theta_{\mathbf{m}}\right)\in\mathcal{CC}_{0}\) is a centered co-circular central configuration. Then_ (1) _for any_ \(g\in D_{n}\)_,_ \(g\cdot(\mathbf{m},\theta_{\mathbf{m}})\in\mathcal{CC}_{0}\)_;_ (2)
\(U_{\alpha}(\mathbf{m},\theta_{\mathbf{m}})=U_{\alpha}(g\mathbf{m},\hat{g}\theta_{\mathbf{m}})\leq U_{\alpha}(g\mathbf{m},\theta_{\mathbf{m}})\) _and_ \(\hat{g}\theta_{\mathbf{m}}=\theta_{g\mathbf{m}}\)_;_ (3) \(\mathbf{m}=g\mathbf{m}\) _implies_ \(\hat{g}\theta_{\mathbf{m}}=\theta_{\mathbf{m}}\)_._ Proof.: Since equations (1) and \(U_{\alpha}\) are invariant under the group \(O(2)\) and \(D_{n}\) is a discrete subgroup of \(O(2)\), we see part (1) holds, and \(U_{\alpha}(\mathbf{m},\theta_{\mathbf{m}})=U_{\alpha}(g\mathbf{m},\hat{g} \theta_{\mathbf{m}})\). The uniqueness of the minimum implies \[\hat{g}\theta_{\mathbf{m}}=\theta_{g\mathbf{m}},\ U_{\alpha}(g\mathbf{m},\hat{ g}\theta_{\mathbf{m}})\leq U_{\alpha}(g\mathbf{m},\theta_{\mathbf{m}}).\] So part (2) is proved. If \(\mathbf{m}=g\mathbf{m}\), then the uniqueness of the minimum implies the equation of part (3). Let us elaborate on the second part of the above lemma. Assume that \((\mathbf{m},\theta_{\mathbf{m}})\in\mathcal{CC}_{0}.\) Consider the symmetric matrix \(H_{\mathbf{m}}\), determined by \(\mathbf{m}\) and the corresponding \(\theta_{\mathbf{m}}\), with \((H_{\mathbf{m}})_{ij}=1/r_{ij}^{\alpha}\) when \(i\neq j\), and \((H_{\mathbf{m}})_{ii}=0.\) When considered as a quadratic form, we can write \[U_{\alpha}(\mathbf{m},\theta_{\mathbf{m}})=H_{\mathbf{m}}(\mathbf{m})=\mathbf{ m}^{T}H_{\mathbf{m}}\mathbf{m}. \tag{2}\] The gradient of \(U_{\alpha}\) with respect to \(\mathbf{m}\) is \(H_{\mathbf{m}}\mathbf{m}.\) Note that \((\mathbf{m},\theta_{\mathbf{m}})\) also satisfies \(\frac{\partial}{\partial m_{k}}U_{\alpha}=\frac{2\lambda}{\alpha},k=1,\ldots,n,\) so \(H_{\mathbf{m}}\mathbf{m}=\frac{2\lambda}{\alpha}\mathbf{1}\). Since \(g\mathbf{m}-\mathbf{m}\in\mathbf{1}^{\perp}\), where \(\mathbf{1}^{\perp}=\{(x_{1},\ldots,x_{n})\in\mathbb{R}^{n},\sum x_{i}=0\}\), then \[U_{\alpha}(g\mathbf{m},\theta_{\mathbf{m}})=U_{\alpha}(\mathbf{m},\theta_{ \mathbf{m}})+0+H_{\mathbf{m}}(g\mathbf{m}-\mathbf{m}). \tag{3}\] **Lemma 3**.: _Given \(\mathbf{m}\) and the corresponding \(\theta_{\mathbf{m}}\), if there is some \(g\in D_{n}\) such that \(H_{\mathbf{m}}(g\mathbf{m}-\mathbf{m})<0\), then \((\mathbf{m},\theta_{\mathbf{m}})\notin\mathcal{CC}_{0}\)._ **Remark 1**.: The above two observations are essentially due to Wang [16]. While the object of our study is the potential \(U_{\alpha}\), his attention is on functions of the form \(U_{\alpha}+\frac{U_{-2}}{K}\), where \(K\geq\frac{2^{3+\alpha}}{\alpha}\). A co-circular configuration can be viewed as a _cyclic polygon_, which is by definition a polygon with vertices upon which a circle can be circumscribed. The following fact about cyclic quadrilaterals is important for the study of \(H_{\mathbf{m}}(g\mathbf{m}-\mathbf{m}).\) **Lemma 4**.: _Consider a cyclic quadrilateral with vertices A, B, C, D, ordered counterclockwise. See Figure 1. Then for any \(\alpha>0,\) it holds that_ \[\frac{1}{AC^{\alpha}}+\frac{1}{BD^{\alpha}}-(\frac{1}{AD^{\alpha}}+\frac{1}{ BC^{\alpha}})<0.\] Proof.: Set \(\angle CDB=\beta\), \(\angle CDA=\delta\), \(\angle CAD=\gamma\). Then \(0<\beta<\delta,\beta+\gamma<\delta+\gamma<\pi.\) By the law of sines, \(\frac{AC}{\sin\delta}=\frac{BD}{\sin(\beta+\gamma)}=\frac{AD}{\sin(\pi-\delta- \gamma)}=\frac{BC}{\sin\beta}=2R,\) where \(R\) is the radius of the circumcircle.
Hence \[\begin{array}{ll}\frac{1}{AC^{\alpha}}+\frac{1}{BD^{\alpha}}-\left(\frac{1}{AD^ {\alpha}}+\frac{1}{BC^{\alpha}}\right)&=\frac{1}{(2R)^{\alpha}}\left[\frac{1}{ \sin^{\alpha}\delta}+\frac{1}{\sin^{\alpha}(\beta+\gamma)}-\frac{1}{\sin^{ \alpha}(\delta+\gamma)}-\frac{1}{\sin^{\alpha}\beta}\right]\\ &=-\frac{1}{(2R)^{\alpha}}\left[\frac{1}{\sin^{\alpha}(\delta+\gamma)}-\frac{1} {\sin^{\alpha}\delta}-\left(\frac{1}{\sin^{\alpha}(\beta+\gamma)}-\frac{1}{\sin^ {\alpha}\beta}\right)\right]\\ &=-\frac{\alpha}{(2R)^{\alpha}}\left[\int_{\delta}^{\delta+\gamma}-\frac{ \cos(x)}{\sin^{\alpha+1}(x)}dx-\int_{\beta}^{\beta+\gamma}-\frac{\cos(x)}{\sin^{ \alpha+1}(x)}dx\right]\\ &=-\frac{\alpha}{(2R)^{\alpha}}\left[\int_{0}^{\gamma}\frac{\cos(x+\beta)}{ \sin^{\alpha+1}(x+\beta)}-\frac{\cos(x+\delta)}{\sin^{\alpha+1}(x+\delta)}dx\right]<0. \end{array}\] In the last step, we employ the fact that \(f(x)=\frac{\cos x}{\sin^{\alpha+1}x}\) is a decreasing function on \((0,\pi)\). Indeed, \(f^{\prime}(x)=-\frac{1+\alpha\cos^{2}(x)}{\sin^{\alpha+2}(x)}\). ## 3. Proof of the main results The main idea is to utilize the criterion of Lemma 3. If we can find some \(g\in D_{n}\) such that \(H_{\mathbf{m}}(g\mathbf{m}-\mathbf{m})\) is negative, then we can conclude the nonexistence of centered co-circular central configurations. Theorem 1 is proved in Subsection 3.1. In Theorem 2, we consider the case that the masses consist of two groups of equal masses. We use induction on the cardinality of the second group. To make the explanation more readable, we first show in Subsection 3.2 that if the second group has cardinality 3, there are no centered co-circular central configurations unless all masses are equal. In the last subsection, we prove the general case of Theorem 2. ### Proof of Theorem 1 In [16], Corollary 3.8, Wang has proved the result in the case that the total number of particles is odd and all the masses are equal except two. We only need to discuss the case of the total number of particles being even. For completeness, we include the discussion of the case that the total number of particles is odd.

Figure 1. One cyclic quadrilateral. The dashed lines correspond to the positive terms, while the solid black lines correspond to the negative terms.

Proof.: **I. When \(n\) is odd.** Without loss of generality, suppose the mass vector is \[\mathbf{m}=(1,\ldots,1,m_{k},1,\ldots,1,m_{n}),m_{k}\neq 1,m_{n}\neq 1.\] Note that \(S\mathbf{m}-\mathbf{m}=\pm(m_{k}-1)(0,\ldots,0,1,0,\ldots,0,-1,0,\ldots,0,0).\) Obviously, \(H_{\mathbf{m}}(S\mathbf{m}-\mathbf{m})<0.\) **II. When \(n\) is even.** Without loss of generality, suppose the mass vector is \[\mathbf{m}=(1,\ldots,1,m_{j},1,\ldots,1,m_{n}),m_{j}\neq 1,m_{n}\neq 1.\] There are three subcases: (1) \(j\neq\frac{n}{2}\); (2) \(j=\frac{n}{2}\) and \(m_{j}\neq m_{n}\); (3) \(j=\frac{n}{2}\) and \(m_{j}=m_{n}\). **II-1.** \(j\neq\frac{n}{2}\). Similar to the case when \(n\) is odd, we have \(H_{\mathbf{m}}(S\mathbf{m}-\mathbf{m})<0.\) **II-2.** \(j=\frac{n}{2}\) and \(m_{j}\neq m_{n}.\) Let \(g=P^{\frac{n}{2}}.\) Then \(P^{\frac{n}{2}}\mathbf{m}-\mathbf{m}=(m_{n}-m_{j})(0,\ldots,0,1,0,\ldots,0,-1).\) Obviously, \(H_{\mathbf{m}}(g\mathbf{m}-\mathbf{m})<0.\) **II-3.** \(j=\frac{n}{2}\) and \(m_{j}=m_{n}.\) Let \(g=P.\) Then \[g\mathbf{m}-\mathbf{m}=(m_{n}-1)(0,0,\ldots,1,-1,0,0,\ldots,1,-1),\ m_{n}-1 \neq 0.\] We will show that the inequality \(H_{\mathbf{m}}(g\mathbf{m}-\mathbf{m})<0\) holds again.
Note that \[H_{\mathbf{m}}(g\mathbf{m}-\mathbf{m})=2(m_{n}-1)^{2}\Big(\frac{1}{r_{\frac{n}{2}-1,n-1}^{\alpha}}+\frac{1}{r_{\frac{n}{2},n}^{\alpha}}-\frac{1}{r_{\frac{n}{2}-1,\frac{n}{2}}^{\alpha}}-\frac{1}{r_{\frac{n}{2},n-1}^{\alpha}}-\frac{1}{r_{n-1,n}^{\alpha}}-\frac{1}{r_{n,\frac{n}{2}-1}^{\alpha}}\Big).\] Set \(a=r_{\frac{n}{2}-1,\frac{n}{2}},b=r_{\frac{n}{2},n-1},c=r_{n-1,n},d=r_{n,\frac {n}{2}-1},e=r_{\frac{n}{2}-1,n-1}\) and \(f=r_{\frac{n}{2},n}.\) See Figure 2. Then it suffices to show that \[F=\frac{1}{e^{\alpha}}+\frac{1}{f^{\alpha}}-(\frac{1}{a^{\alpha}}+\frac{1}{b^ {\alpha}}+\frac{1}{c^{\alpha}}+\frac{1}{d^{\alpha}})<0.\] The inequality is obvious by Lemma 4.

Figure 2. The dashed lines correspond to the positive terms, while the solid lines correspond to the negative terms.

When \(\alpha\leq 1\), we would like to provide another interesting proof. Ptolemy's theorem says that \(ef=ac+bd.\) Then \[F= \frac{e^{\alpha}+f^{\alpha}}{(ef)^{\alpha}}-\big{(}\frac{a^{\alpha}+ c^{\alpha}}{(ac)^{\alpha}}+\frac{b^{\alpha}+d^{\alpha}}{(bd)^{\alpha}}\big{)}\] \[< \frac{a^{\alpha}+b^{\alpha}+c^{\alpha}+d^{\alpha}}{(ac+bd)^{\alpha }}-\big{(}\frac{a^{\alpha}+c^{\alpha}}{(ac)^{\alpha}}+\frac{b^{\alpha}+d^{ \alpha}}{(bd)^{\alpha}}\big{)}\] \[= (a^{\alpha}+c^{\alpha})\big{[}\frac{1}{(ac+bd)^{\alpha}}-\frac{1} {(ac)^{\alpha}}\big{]}+(b^{\alpha}+d^{\alpha})\big{[}\frac{1}{(ac+bd)^{\alpha }}-\frac{1}{(bd)^{\alpha}}\big{]}<0.\] In summary, in all cases, we have shown that there is some \(g\in D_{n}\) such that \(H_{\mathbf{m}}(g\mathbf{m}-\mathbf{m})<0\). By Lemma 3, there is no centered co-circular central configuration. ### A special case of Theorem 2 The main aim of this subsection is to discuss a special case of Theorem 2. For it, we first discuss a geometric inequality of cyclic polygons. Recall that a cyclic polygon is a polygon with vertices upon which a circle can be circumscribed. There is much interesting research on the areas of cyclic polygons. What we are interested in is one inequality involving the sides of an arbitrary cyclic polygon with \(2n\) vertices (cyclic \(2n\)-gons, for short). We will also study the corresponding inequality for some sub cyclic polygons. Let us fix some notation first. We will always assume that the vertices of a cyclic \(2n\)-gon are ordered counterclockwise as \(1,2,\ldots,2n.\) We refer to the polygon as \(G\left\{1,2,\ldots,2n\right\}.\) A sub cyclic \(2k\)-gon consisting of vertices \(i_{1}\),\(i_{2},\ldots,i_{2k},\) where \[\left\{i_{1},i_{2},\ldots,i_{2k}\right\}\subset\left\{1,2,\ldots,2n\right\},\ i_{1}<i_{2}<\ldots i_{2k},\] is referred to as \(G\left\{i_{1},i_{2},\ldots,i_{2k}\right\}.\) For a cyclic \(2n\)-gon \(G\left\{1,2,\ldots,2n\right\},\) define \[R\left(G\left\{1,2,\ldots,2n\right\}\right)=\sum_{p=s(mod2)}\frac{1}{r_{ps}^{ \alpha}}-\sum_{p\neq s(mod2)}\frac{1}{r_{ps}^{\alpha}}.\] Similarly, for a cyclic \(2k\)-gon \(G\left\{i_{1},i_{2},\ldots,i_{2k}\right\},\) define \[R\left(G\left\{i_{1},i_{2},\ldots,i_{2k}\right\}\right)=\sum_{p=s(mod2)}\frac{ 1}{r_{ip_{i}i_{s}}^{\alpha}}-\sum_{p\neq s(mod2)}\frac{1}{r_{ip_{i}i_{s}}^{ \alpha}}.\] For a cyclic quadrilateral \(G\left\{i_{1},i_{2},i_{3},i_{4}\right\},\) define \[S\left(G\left\{i_{1},i_{2},i_{3},i_{4}\right\}\right)=\frac{1}{r_{i_{1}i_{3}}^ {\alpha}}+\frac{1}{r_{i_{2}i_{4}}^{\alpha}}-\left(\frac{1}{r_{i_{2}i_{3}}^{ \alpha}}+\frac{1}{r_{i_{1}i_{4}}^{\alpha}}\right).\] We are interested in the sign of the functions defined above.
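As an illustrative numerical sanity check of these sign claims (Lemma 4 for \(S\) and, later, Lemma 6 for \(R\)), one can sample random cyclic polygons on the unit circle and evaluate both quantities with the chord distance \(r_{jk}=|2\sin\frac{\theta_{j}-\theta_{k}}{2}|\) from Section 2. The sketch below is only a check of the claims, not part of the proofs, and the sample sizes are arbitrary.

```python
import itertools
import numpy as np

def chord(theta_j, theta_k):
    # chord distance on the unit circle, r_{jk} = |2 sin((theta_j - theta_k) / 2)|
    return abs(2.0 * np.sin((theta_j - theta_k) / 2.0))

def S(theta, alpha):
    """S(G{i1,i2,i3,i4}) for a cyclic quadrilateral with sorted angles theta[0..3]."""
    t1, t2, t3, t4 = theta
    return (chord(t1, t3) ** -alpha + chord(t2, t4) ** -alpha
            - chord(t2, t3) ** -alpha - chord(t1, t4) ** -alpha)

def R(theta, alpha):
    """R(G{1,...,2n}) for a cyclic 2n-gon with sorted angles theta."""
    total = 0.0
    for p, s in itertools.combinations(range(len(theta)), 2):
        sign = 1.0 if (p - s) % 2 == 0 else -1.0   # same parity of indices -> positive term
        total += sign * chord(theta[p], theta[s]) ** -alpha
    return total

rng = np.random.default_rng(0)
for alpha in (0.5, 1.0, 2.0):
    s_max = max(S(np.sort(rng.uniform(0, 2 * np.pi, 4)), alpha) for _ in range(1000))
    r_max = max(R(np.sort(rng.uniform(0, 2 * np.pi, 8)), alpha) for _ in range(1000))
    print(alpha, s_max, r_max)   # both maxima are expected to be negative by Lemmas 4 and 6
```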
For instance, we have seen in Lemma 4 that \(S\left(G\left\{i_{1,}i_{2,}i_{3,}i_{4}\right\}\right)<0\) for any cyclic quadrilateral \(G\left\{i_{1,}i_{2,}i_{3,}i_{4}\right\}\). It is also clear that \(R\left(G\left\{i_{1,}i_{2,}i_{3,}i_{4}\right\}\right)<0\) by the proof of Theorem 1, or just by Lemma 4. This can be extended to cyclic hexagons. **Lemma 5**.: _For any cyclic hexagon \(G\left\{1,2,3,4,5,6\right\}\) and any \(\alpha>0,\) it holds that_ \[R\left(G\left\{1,2,3,4,5,6\right\}\right)= \frac{1}{r_{13}^{\alpha}}+\frac{1}{r_{15}^{\alpha}}+\frac{1}{r_{24 }^{\alpha}}+\frac{1}{r_{26}^{\alpha}}+\frac{1}{r_{35}^{\alpha}}+\frac{1}{r_{46 }^{\alpha}}\] \[-\left(\frac{1}{r_{12}^{\alpha}}+\frac{1}{r_{14}^{\alpha}}+\frac{ 1}{r_{16}^{\alpha}}+\frac{1}{r_{23}^{\alpha}}+\frac{1}{r_{25}^{\alpha}}+\frac{ 1}{r_{34}^{\alpha}}+\frac{1}{r_{36}^{\alpha}}+\frac{1}{r_{45}^{\alpha}}+\frac{1}{r_{56}^{\alpha}}\right) <0.\] Proof.: The idea is to decompose \(R\left(G\left\{1,2,3,4,5,6\right\}\right)\) as indicated in Figure 3. Note that \[R\left(G\left\{1,2,3,4,5,6\right\}\right)=R\left(G\left\{3,4,5,6\right\} \right)+S\left(G\left\{1,2,3,4\right\}\right)+S\left(G\left\{1,2,5,6\right\} \right)-\frac{1}{r_{12}^{\alpha}}.\] Then by Lemma 4, the inequality \(R\left(G\left\{1,2,3,4,5,6\right\}\right)<0\) holds for any cyclic hexagon. **Theorem 3**.: _In the general power-law potential n-body problem, assume that the masses can be divided into two groups of equal masses, and that the cardinality of the second group is 3. There is no centered co-circular central configuration unless all the masses are equal._

Figure 3. The dashed lines correspond to the positive terms, while the solid lines correspond to the negative terms.

Proof.: Without loss of generality, assume that each mass of the first group is \(1\) and each mass of the second group is \(m.\) Firstly, we consider the case when the three \(m\)'s are nonadjacent. Without loss of generality, assume the mass vector is \[\mathbf{m}=(1,\ldots,1,m,1,\ldots,1,m,1,\ldots,1,m),\] where the three \(m\)'s are located at the \(i\)-th, \(j\)-th and \(n\)-th positions. Since the three positions are nonadjacent, we have \(1<i,i+1<j,j+1<n.\) Let \(g=P.\) Then \[g\mathbf{m}-\mathbf{m}=(0,\ldots,0,m-1,1-m,0,\ldots,0,m-1,1-m,0,\ldots,0,m-1,1-m).\] The \((m-1)\)-terms are at the \((i-1)\)-th, the \((j-1)\)-th and the \((n-1)\)-th coordinates, while the \((1-m)\)-terms are at the \(i\)-th, the \(j\)-th and the \(n\)-th coordinates. Then, \[H_{\mathbf{m}}(g\mathbf{m}-\mathbf{m})=2(m-1)^{2}R\left(G\left\{i-1,i,j-1,j,n-1,n\right\} \right),\] where \(G\left\{i-1,i,j-1,j,n-1,n\right\}\) is the cyclic hexagon formed by the six co-circular vertices \(i-1,i,j-1,j,n-1,n\), and the co-circular \(n\) vertices \(1,2,\ldots,n\) are obtained from \(\theta_{\mathbf{m}}\) of Lemma 1. By Lemma 5, we have \(H_{\mathbf{m}}(g\mathbf{m}-\mathbf{m})<0.\) Secondly, assume that two of the three \(m\)'s are adjacent. Without loss of generality, assume the mass vector is \[\mathbf{m}=(1,\ldots,1,m,1,\ldots,1,m,m),\] with the \(m\)'s located at the \(i\)-th, \((n-1)\)-th and \(n\)-th positions. So \(1<i,i+1<n-1.\) Let \(g=P.\) Then \[g\mathbf{m}-\mathbf{m}=(0,\ldots,0,m-1,1-m,0,\ldots,0,m-1,0,1-m).\] Then, \[H_{\mathbf{m}}(g\mathbf{m}-\mathbf{m})=2(m-1)^{2}R\left(G\left\{i-1,i,n-2,n\right\} \right),\] where \(G\left\{i-1,i,n-2,n\right\}\) is the cyclic quadrilateral formed by the four co-circular vertices \(i-1,i,n-2,n\), and the co-circular \(n\) vertices \(1,2,\ldots,n\) are obtained from \(\theta_{\mathbf{m}}\) of Lemma 1.
By Lemma 4, we have \(H_{\mathbf{m}}(g\mathbf{m}-\mathbf{m})<0.\) Thirdly, assume that the three \(m\)'s are adjacent. Without loss of generality, assume the mass vector is \[\mathbf{m}=(1,\ldots,1,m,m,m).\] Let \(g=P.\) Then \[g\mathbf{m}-\mathbf{m}=(0,\ldots,0,m-1,0,0,1-m).\] Then obviously, \(H_{\mathbf{m}}(g\mathbf{m}-\mathbf{m})<0.\) In summary, in all cases, we have shown that there is some \(g\in D_{n}\) such that \(H_{\mathbf{m}}(g\mathbf{m}-\mathbf{m})<0\). By Lemma 3, there is no centered co-circular central configuration. ### The general case of Theorem 2 We first generalize Lemma 5 to all cyclic \(2n\)-gons, via induction. **Lemma 6**.: _For any cyclic \(2n\)-gon \(G\left\{1,2,\ldots,2n\right\}\), it always holds that_ \[R\left(G\left\{1,2,\ldots,2n\right\}\right)=\sum_{p=s(mod2)}\frac{1}{r_{ps}^{\alpha}}-\sum_{p\neq s(mod2)}\frac{1}{r_{ps}^{\alpha}}<0.\] Proof.: We proceed by induction on the number \(n\). First, note that it is true for \(n=2\) by Lemma 4, and for \(n=3\) by Lemma 5. Now assume that it holds for all numbers less than \(n\). For the case of \(n\), decompose \(R\left(G\left\{1,2,\ldots,2n\right\}\right)\) as indicated in Figure 4. Note that \[R\left(G\left\{1,2,\ldots,2n\right\}\right)= R\left(G\left\{3,4,\ldots,2n\right\}\right)+S\left(G\left\{1,2,3,4\right\}\right)\] \[+S\left(G\left\{1,2,5,6\right\}\right)+\ldots+S\left(G\left\{1,2,2n-1,2n\right\}\right)-\frac{1}{r_{12}^{\alpha}}.\] Then by Lemma 4 and the induction hypothesis, the inequality \(R\left(G\left\{1,2,\ldots,2n\right\}\right)<0\) holds for any cyclic \(2n\)-gon. Figure 4. The dashed lines correspond to the positive terms, while the solid lines correspond to the negative terms. Proof of Theorem 2.: Without loss of generality, assume that each mass of the first group is \(1\) and each mass of the second group is \(m\). Suppose that the cardinality is \(n\) for the first group and is \(k\) for the second, and \(k\leq n\). Firstly, we consider the case when the \(k\) \(m\)'s are nonadjacent. Without loss of generality, assume that the \(k\) \(m\)'s are located at the \(i_{1},\ldots,i_{k}\)-th positions and \(i_{k}=n\). Then \(1<i_{1},i_{s}+1<i_{s+1}\) for \(1\leq s\leq k-1\) and \(i_{k}=n\). Similar to the proof of Theorem 3, for \(g=P\), the vector \(g\mathbf{m}-\mathbf{m}\) has \(n-k\) zeros. Neglecting those zeros, and dividing it by \(m-1\), the vector \(g\mathbf{m}-\mathbf{m}\) consists of \(k\) \(1\)'s and \(k\) \(-1\)'s. The \(1\)'s and \(-1\)'s appear consecutively. Then \[H_{\mathbf{m}}(g\mathbf{m}-\mathbf{m})=2(m-1)^{2}R\left(G\left\{i_{1}-1,i_{1},i_{2}-1,i_{2},\ldots,i_{k}-1,i_{k}\right\}\right),\] where \(G\left\{i_{1}-1,i_{1},i_{2}-1,i_{2},\ldots,i_{k}-1,i_{k}\right\}\) is the cyclic \(2k\)-gon formed by the \(2k\) co-circular vertices \(i_{1}-1,i_{1},i_{2}-1,i_{2},\ldots,i_{k}-1,i_{k}\), and the co-circular \(n+k\) vertices \(1,2,\ldots,n+k\) are obtained from \(\theta_{\mathbf{m}}\) of Lemma 1. By Lemma 6, we have \(H_{\mathbf{m}}(g\mathbf{m}-\mathbf{m})<0\). Secondly, assume that some of the \(k\) \(m\)'s are adjacent. Similar to the proof of Theorem 3, it always holds that \(H_{\mathbf{m}}(P\mathbf{m}-\mathbf{m})<0\), and we do not repeat the proof. In summary, in all cases, we have shown that there is some \(g\in D_{n}\) such that \(H_{\mathbf{m}}(g\mathbf{m}-\mathbf{m})<0\). By Lemma 3, there is no centered co-circular central configuration.
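Again as an illustrative aside (not part of the proof), Lemma 6 can be probed numerically for random cyclic \(2n\)-gons. The sketch below assumes vertices on the unit circle (the sign of \(R\) is scale-invariant) and arbitrary sampling ranges.

```python
import numpy as np

def R(thetas, alpha):
    # +1/r^alpha over same-parity vertex pairs, -1/r^alpha over opposite-parity pairs,
    # for vertices at the given angles on the unit circle (ordered counterclockwise)
    total, m = 0.0, len(thetas)
    for p in range(m):
        for s in range(p + 1, m):
            r = 2.0 * abs(np.sin((thetas[p] - thetas[s]) / 2.0))
            total += (1.0 if (p - s) % 2 == 0 else -1.0) / r ** alpha
    return total

rng = np.random.default_rng(1)
worst = -np.inf
for _ in range(5_000):
    n = int(rng.integers(2, 8))                            # a cyclic 2n-gon, n = 2,...,7
    thetas = np.sort(rng.uniform(0.0, 2.0 * np.pi, size=2 * n))
    alpha = float(rng.uniform(0.1, 4.0))
    worst = max(worst, R(thetas, alpha))
print("largest R observed:", worst)                        # negative, as Lemma 6 asserts
```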
## Acknowledgment The first author would like to thank the Innovation Training Program for College Students of Southwestern University of Finance and Economics for its support in all aspects of the project. He was able to complete the work smoothly because of the school's platform and organization.
2305.19724
A Surrogate Model Framework for Explainable Autonomous Behaviour
Adoption and deployment of robotic and autonomous systems in industry are currently hindered by the lack of transparency, required for safety and accountability. Methods for providing explanations are needed that are agnostic to the underlying autonomous system and easily updated. Furthermore, different stakeholders with varying levels of expertise, will require different levels of information. In this work, we use surrogate models to provide transparency as to the underlying policies for behaviour activation. We show that these surrogate models can effectively break down autonomous agents' behaviour into explainable components for use in natural language explanations.
Konstantinos Gavriilidis, Andrea Munafo, Wei Pang, Helen Hastie
2023-05-31T10:31:36Z
http://arxiv.org/abs/2305.19724v1
# A Surrogate Model Framework for Explainable Autonomous Behaviour ###### Abstract Adoption and deployment of robotic and autonomous systems in industry are currently hindered by the lack of transparency, required for safety and accountability. Methods for providing explanations are needed that are agnostic to the underlying autonomous system and easily updated. Furthermore, different stakeholders with varying levels of expertise, will require different levels of information. In this work, we use surrogate models to provide transparency as to the underlying policies for behaviour activation. We show that these surrogate models can effectively break down autonomous agents' behaviour into explainable components for use in natural language explanations. Explainable Agents, Human-In-The-Loop Application, Surrogate Model, Feature Contribution. ## I Introduction Robotic and autonomous systems are at the stage where we are seeing them being adopted more frequently in a variety of environments, such as ground vehicles for inspection of disaster sites, or underwater for pipeline inspection. It is important that humans are kept in the loop in these operations and are able to intervene as necessary. However, this comes with challenges, such as underwater vehicles having limited bandwidth to broadcast updates [1]. Transparency and the ability to explain actions and decisions are key factors for safety, accountability and adoption [2]. However, these are non-trivial to implement, given the complexity of autonomous systems and the 'blackbox' nature of neural-based models. Platform-specific explanation interfaces normally require a basic understanding of an agent's behaviour space (\(B\)), possible states (\(S\)) and decision-making (\(D\)) to comprehend its capabilities and what could be described to operators. Furthermore, user studies are necessary to recognise which behaviours may be certainly valid and appropriate but might perhaps confuse the operator. This can lead to mission aborts and inaccurate mental models [3]. These user studies and explanation methods also need to adapt to the stakeholder. The IEEE Standard for Transparency of Autonomous Systems (P7001) [2], defines a number of stakeholder groups from the expert operator, to the general public and lawmakers. They all require different types of information and level of detail to be included in the explanation. Furthermore when working with commercial entities, they are continuously developing their autonomous models, adding new behaviours or states as required for new use cases and customers. If we are to provide accurate, up-to-date explanations, the explanation module will also need continuous updating, requiring considerable time and effort. Here, we propose a generic method using surrogate models for generating autonomy-agnostic explanations that can be used without a deep understanding of the underlying autonomy and can be easily updated. Specifically addressing the following research questions: * **RQ1:** How robust are surrogate models in approximating a complex deterministic agent's policy for behaviour activation? * **RQ2:** Can these surrogate models be used to effectively generate explanations? * **RQ3:** How is the performance affected when going from simulated data to real trials with real vehicles tested in a realistic environment? 
For the remainder of this paper, in Section II we mention previous work, which has impacted our approach and in Section III we describe our use case and the explanation types that our framework provides. In Section IV, we describe the functionality of components, such as the surrogate models or the feature contribution estimators in the pipeline architecture. Finally, in Section V, we report the performance of the surrogate models in simulations and how it is affected during the trial or when new behaviours are incremented. Fig. 1: Unmanned Surface Vehicles (USVs) Heron (left) and Philos (right) used during the trials on Charles River in Boston. ## II Related Work **Explanation Frameworks:** Explainable agency has been introduced as a trait of robots to define what properties a transparent robot should have conceptually. Among these traits are the ability to explain (i) plan generation, (ii) executed actions and (iii) replanning in a user-friendly fashion [4]. Looking at previous work on explainable agency, the whole process can be broken down into three main parts, _explanation generation_, _explanation communication_ and _explanation reception_[5]. For explanation communication, previous studies have looked into how robots could explain themselves as people do by including reasons for intentional and causes for unintentional behaviours [6]. Further studies investigate the desired verbosity of explanations in different scenarios [7, 8] and various types of explanations, which can be provided to a user from an explainable planning perspective [9]. **Explainable Artificial Intelligence:** The right of users to receive explanations about a robot's behaviour is supported by government regulations and the recommended direction is the development of transparent robotics [2], e.g. through the use of interpretable models [10]. Machine learning models differ in terms of simulatability, decomposability and algorithmic transparency [11]. In the case of opaque models, explanation methods should be applied to them to disambiguate their functionality. Explanation methods fall into two categories depending on the way they are applicable to a black-box model, namely _Model-specific_ and _Model-agnostic_[12]. Surrogate models can belong to either category, depending on their intended use and are useful for deriving the causality behind any prediction. LIME [13] accomplishes this by locally approximating the model around a given prediction. SHAP [14] generates Shapley values, which indicate the contribution of each feature to the difference between the initial belief of a model and its actual prediction. Another option to highlight causal relationships is through counterfactuals, where several feature contributions can help the user understand the relationship between feature values and the corresponding predictions [15]. However, surrogate models are not always consistent and, as a result, robustness has been introduced as a metric that represents how stable explanations are when inputs are slightly modified [16]. **Explainable Robotics:** In Sakai and Nagai [17], the relationship between XAI and explanations for transparent robotics is defined. Existing work has examined the use of both algorithmically transparent models and the combination of opaque models with posthoc explanation methods to explain robotic failures [18].
The decision-making of reinforcement learning agents has also been explained either with the use of surrogate models [19, 20] or the generation of Shapley values to explain robot grasping [21]. Focusing more on explanation communication and reception, the studies in Thielstrom et al. [22] and Robb et al. [23] present videos of robotic failures along with explanations to users. Meanwhile, in Das et al. [24], natural language explanations are generated with a Neural Translation Network to improve human assistance in fault recovery. Each approach performed a corresponding user study to evaluate how explanations affect the mental model of users. Continuing our effort in [25], we have developed a framework that retrieves data from deterministic agents and, with model selection, finds the optimal classifier for behaviour prediction. Depending on the transparency of the intermediate model, we capture the causality behind behaviours either by directly analysing the model or with the application of a posthoc explainer. **Knowledge Representation and Verbalisation:** Knowledge representations have always played an important role in the unification or the completion of knowledge for autonomous agents. In Li et al. [26], an ontology is used to tackle the issue of information heterogeneity and to facilitate collaboration between underwater robots. Additionally, in Gavriilidis et al. [27] an ontology is used to relate sensor readings to hardware errors and make a new plan, while a ROS listener retrieves and verbalises these outcomes with a surface realiser. Furthermore, Suh et al. [28] use a multilayered ontology to complement the perception of a household robot for object recognition and assist with its localisation. At the same time, knowledge representations play an important role in Natural Language Generation. In [29], a Neural Language Model efficiently transforms Wikipedia infoboxes into biography summaries, while in [30] a fine-tuned T5 model generates sentences just by connecting plain utterances from concept sets. On the other hand, Ghosal [31] collected a dialogue reasoning dataset, where additional context is incorporated into utterances to teach a T5 model to make more intuitive transitions in dialogue. This type of data-driven natural language generation is out of scope for the work described here, but is clearly an area for future use, particularly with the advent of more advanced large language models such as GPT-4 [32]. Fig. 2: Empirical decision tree for behaviour activation derived by a domain expert. ## III Use Case and Explanation Types The use case examined here is a hybrid autonomy that combines a ROS-based deterministic agent with a reactive agent, prioritising behaviours through multi-objective optimization for the maritime domain. Specifically, our scenarios focus on _Unmanned Surface Vehicles_ (USV) and _Autonomous Underwater Vehicles_ (AUV), as illustrated in Figure 1. This work was done in collaboration with industry partner SeeByte Ltd, who have developed an autonomous agent for driving such vehicles for a variety of maritime applications, such as inspection. To have an initial understanding of the autonomous agent and how it selects a behaviour, we interviewed in-depth a domain expert from the company. From this interview, we derived an abstract definition of behaviour decision-making in a tree format. This _empirical decision tree_ is illustrated in Figure 2. We then investigated if this behaviour tree covers aspects of a mission in simulation ahead of the real trial.
The simulation scenario involves a restricted coastal area on the River Charles in Boston, where two vehicles collaborate to complete a mission. The versatile USV Heron performs each objective according to the uploaded plan, while USV Philos inspects the area to detect obstacles and notifies the other vehicle if something out of the plan is found. Each mission contains a launch and recovery objective and a number of survey or target objectives, where the vehicle needs to hold its position for a default amount of time. Additionally, the obstacles are either static (with locations specific to each mission but there for the duration), or dynamic (i.e. appearing during the mission). In terms of capabilities, both vehicles support 6 behaviours _B = (wait, transit, survey, hold_position, replanned_transit, avoid_obstacle)_ and they both run two autonomy models simultaneously (one for each vehicle) in a master-slave architecture. The following explanation types were derived in consultation with the expert and captured in the empirical decision tree in Figure 2 and are listed here. **E.1 Behaviour Causality** describes how a robot selects its current behaviour or action. Especially for operators, it can be difficult to comprehend how a robot closely observes objects around it, updates its world model and acts according to its goals. The utterances of this explanation usually entail the name of the behaviour, its use and the cause of activation. _Answers question: Why did you do that?_ **E.2 Replanning Clarification** complements the previous category and covers cases where unexpected outcomes arise and force the autonomy to alter its plan. Some indicative examples are obstacle avoidance and platform integrity, where for safety reasons the robot has to make a stop in a new location. _Answers question: Why do I need to replan at this point?_ **E.3 Counterfactual Explanation** allows the operator to ask the autonomy how it would react if its internal state changed in a specific way. With this functionality, the user can learn about alternative outcomes at any given point and to better comprehend the underlying logic of the autonomous agent. _Answers question: What if?_ ## IV Method The overall pipeline architecture that goes from the autonomous vehicle to the explanation interface is illustrated in Figure 3. Its aim is to act as a wrapper application that does not disrupt the existing autonomy but clearly conveys the approximated policy to users. For the ROSListening component, we found the relevant ROS topics that provide the vehicle states needed to predict exhibited behaviours (per the behaviour definition in Figure 2). We then created a listener with two uses in mind: (i) data collection in simulation and (ii) online behaviour prediction during plan execution. Using the acquired data, we trained a number of classifier models as surrogate autonomy models. These models predict which of the 6 behaviours the vehicle is exhibiting. We investigated a number of classifiers with varying transparency and compared the accuracy of the models. If the highest accuracy model is transparent, we would directly extract the feature contribution for each behaviour prediction. Otherwise, a post-hoc explanation method would be applied to the opaque model to derive feature contribution. Here, we examine both of these options. 
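A minimal sketch of such a listener follows; it assumes a standard rospy environment, and the topic names, message type, sampling rate, and output file are placeholders rather than the actual SeeByte interface.

```python
import csv
import rospy
from std_msgs.msg import String

# placeholder topics for the five vehicle states and the active behaviour
TOPICS = ["/heron/ready_plan", "/heron/current_objective", "/heron/progress_type",
          "/heron/same_objective", "/heron/obstacle_found", "/heron/behaviour"]
latest = {t: None for t in TOPICS}

def make_callback(topic):
    def callback(msg):
        latest[topic] = msg.data            # keep only the most recent value per topic
    return callback

def log_row(event):
    # one (vehicle state, behaviour) training instance per timer tick
    with open("behaviour_causality.csv", "a", newline="") as f:
        csv.writer(f).writerow([latest[t] for t in TOPICS])

rospy.init_node("behaviour_listener")
for t in TOPICS:
    rospy.Subscriber(t, String, make_callback(t))
rospy.Timer(rospy.Duration(1.0), log_row)   # sample the cached states once per second
rospy.spin()
```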
Finally, a knowledge representation that contains this information is generated and fed into a rule-based natural language explanation generator that conveys the same content in a user-friendly format. These components are described in detail below and represented in Algorithm 1. ### _The Data_ We collected a Behaviour Causality Dataset from 10 simulations of missions. For each simulation, we monitored eight ROS topics with a listener module corresponding to five vehicle states where S = {_ready_plan, current_objective, progress_type, same_objective, obstacle_found_} along with the corresponding behaviour. Each mission lasted 22.5 minutes on average and resulted in a dataset of 5056 data instances with 5 categorical features and a target value. ### _Surrogate Model Training and Selection_ After data collection, we made a comparison of various models to decide on the most suitable option for Behaviour Classification. Specifically, we tested three algorithmically explainable models, which are robust for classification with categorical features (K-Nearest Neighbours (KNN) [33], Categorical Naive Bayes (CategoricalNB) [34] and Decision Tree [35]) and we also included two more complex models (Support Vector Machine (SVM) [36], Multilayer Perceptron (MLP) [37]) to check if there is a significant performance difference. A total of 5 categorical features were given as input to each model (ready_plan, current_obj, progress_type, same_obj, obstacle_found) to predict the current behaviour of the vessel (wait, transit, survey, hold_position, replanned_transit, obstacle_avoidance). Nested cross-validation was used to select the best combination of hyperparameters for each model and to retrieve unbiased metrics indicative of each model's performance [38]. From the results in Table I, for the transparent models, it is clear that the Decision Tree and Categorical Naive Bayes algorithms outperform KNN. With regard to the more opaque algorithms, SVM achieved similar performance to MLP but with the latter performing slightly better at correctly classifying behaviours. As for model training and evaluation, the time needed for transparent models to do both was much shorter than for Neural Networks. Thus going forward, we decided to use the decision tree (_max_depth = 8_, _max_leaf_nodes = 15_) given that it has similar accuracy to more complex models. Furthermore, its high transparency means that it can be used to verify the validity of the surrogate framework for more opaque models, in case these are used in future use cases and datasets. Figure 4 provides a confusion matrix of the predictions of the decision tree per behaviour. Fig. 4: Confusion matrix indicating classification performance per behaviour with a Decision Tree during simulations. Fig. 3: Illustration of the proposed pipeline architecture, where a Surrogate Model approximates agent policy and Feature Contribution is estimated to detect behaviour causality. The output of this framework is an explanation with content which originates from a Contextualised Concept Set. For **transit**, **hold_pos** and **survey** behaviours, there are some false classifications due to some inconsistency between the progress_type feature and the corresponding behaviour, which indicates that an internal autonomy state could be missing.
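A compact sketch of this model-selection step is given below, assuming scikit-learn; the data are synthetic placeholders for the logged vehicle states, and the hyperparameter grid is only illustrative.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(1000, 5))          # 5 categorical vehicle-state features
y = (X[:, 1] + 2 * X[:, 2]) % 6                 # stand-in for the 6 behaviour classes

param_grid = {"max_depth": [4, 6, 8], "max_leaf_nodes": [10, 15, 20]}
inner = KFold(n_splits=5, shuffle=True, random_state=0)   # inner loop: hyperparameter search
outer = KFold(n_splits=5, shuffle=True, random_state=1)   # outer loop: unbiased accuracy estimate

search = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid, cv=inner)
scores = cross_val_score(search, X, y, cv=outer)
print(f"nested-CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```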
As for false classifications between **hold_pos**, **survey** and **obs_avoid** behaviours, we noticed that even though replanning is triggered and an obstacle is found, the vehicle finds a way to perform its objective; however, the explanation framework misses this fact, probably because an internal autonomy state is once again missing. ### _Explanation Layer_ Once a trained surrogate model is in place to predict the corresponding behaviour of a vehicle state, the feature contribution for the classification of the behaviour is used as a basis for the causal reasoning explanation. One way to do feature contribution is to examine the trained surrogate model itself [13]. This is feasible for transparent models, such as the decision tree chosen here, but not so for more complex models such as Neural Networks. Opaque models such as Neural Networks may be needed in future applications as the complexity of the autonomy increases and the datasets grow in size. For these more complex models, an alternative is to use _Shapley Values_ [14], which has been shown to be a reliable and descriptive approach. Here, we follow this latter method as a proof of concept. Each model initially has a prior belief about what the expected value will be and a Shapley value describes how a specific feature creates the difference between the expected and actual values (\(E(x)-f(x)\)). ### _Knowledge Representation and Explanation Generation_ The final two components of the pipeline use, as input, the prediction of the surrogate model along with the Shapley values estimated by the feature contribution estimator. Based on the importance of each feature towards a prediction, behaviour causality is inferred and this knowledge is represented with contextualised concept sets. Contextualisation is incorporated with the use of key-value pairs, as opposed to simple triplets, to indicate the role of each value. The end result is a knowledge base with \((vessel,behaviour,causality,time)\) sets, which describe the sequence of behaviours exhibited by the robot in JSON format. An example of a contextualised concept set can be found in Figure 3, where the current behaviour (Transit) and its trigger (Obstacle) can be distinguished. This entry indicates that the current transit behaviour has a modified trajectory that goes around the obstacle to avoid collision. With regards to natural language generation, for each new entry in the Knowledge Base, the key-value pairs are passed to a _Surface Realiser_, which produces an explanation that has been syntactically checked with SimpleNLG [39]. ## V Results and Discussion With regard to **RQ1** and **RQ2**, as discussed above and in Table I, the surrogate models have accuracies for behaviour prediction of around 90%. This could be further improved by training on more data both in simulation and with real vehicles. Even with this accuracy, we observed accurate explanations, as illustrated in Figure 5. In this figure, we present four continuous scenarios from a single mission along with the explanations which were generated. _Scenario 1_ involves the vessel moving to the launch point to retrieve the relative positions of the objective areas and then beginning work on each objective. In this case, the surrogate model correctly predicts the current behaviour by using the **progress_type**, **current_obj** and **same_obj** features. To validate the results, we also calculated the Shapley values, which highlight the **current_obj** as the main contributor.
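For reference, a minimal sketch of how such Shapley attributions can be obtained for the decision-tree surrogate is shown below, assuming the open-source `shap` package; the data are synthetic placeholders, and the exact shape of the returned attributions depends on the library version.

```python
import numpy as np
import shap
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(500, 5))                 # stand-in for the five vehicle states
y = (X[:, 0] + X[:, 3]) % 3                           # stand-in behaviour labels

clf = DecisionTreeClassifier(max_depth=8, max_leaf_nodes=15).fit(X, y)
explainer = shap.TreeExplainer(clf)
phi = explainer.shap_values(X[:5])                    # per-feature contributions for 5 samples
# Each contribution measures how much a feature moves the prediction away from the
# explainer's base value; the largest one is taken as the behaviour's main trigger.
print(np.shape(phi))
```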
In _Scenario 2_, while the vehicle is moving from the launch point to the survey area, it encounters an obstacle and avoids it by changing its trajectory. Behaviour prediction was also successful in this case, with the model utilising **progress_type**, **current_obj**, **same_obj** and **obstacle_found** features. In _Scenario 3_, a false explanation is generated, due to the value of **progress_type** even though the survey has already started. Here, the features used by the surrogate model are **progress_type**, **current_obj** and **same_obj**, while SHAP only attributes this prediction to the current objective. As for _Scenario 4_, where the vessel performs a survey, the surrogate model can immediately detect the new behaviour thanks to its unique **progress_type**, while SHAP adds to the causality both **current_obj** and **same_obj**, which seem reasonable causes in this case. An informal evaluation has been conducted but a formal subject evaluation is future work. ### _Going from Simulation to Real Trial_ For **RQ3**, a real trial took place with two Unmanned Surface Vehicles collaborating to complete a survey, while obstacles appeared in the dynamic environment. As a result, we tested the surrogate model on a separate trial test set consisting of 1331 instances with 5 features and corresponding behaviours. Looking at the overall performance, an accuracy of 99% was achieved, showing the capability of the surrogate model to comprehend the autonomous behaviour activations. See Figure 6 for the confusion matrix for this test. The only errors observed are false classifications between the **survey** and **transit** behaviours, which we attribute to a potential missing vehicle state since the **progress_type** could not fully indicate the transition between these two behaviours. Finally, the **hold_position** behaviour is missing from Figure 6, because this objective was not used during the trial for practical reasons. ## VI Conclusion and Future Work With this work, a framework for approximating behaviour activations and replanning of an autonomous agent with classification models has been introduced. Our approach is capable of discovering the causality of autonomous decisions with the estimation of feature contribution for each action prediction. The main advantage of this framework is the storage of information in generic knowledge representations such as concept sets which can be later leveraged to produce user-friendly modalities such as natural language explanations. Another advantage is that the framework is agnostic to the autonomy model, making it reusable in different domains. Moving forward, we plan on extending the functionality of this framework and investigating data-driven language explanations such as large language models in order to stochastically map knowledge representations to informative natural language explanations about robotic behaviour. Further evaluation of explanations is also required to examine the capacity of our approach to disambiguate robotic behaviours. ## Acknowledgment We would like to thank MIT's AUV Lab and Laurence Boe from SeeByte Ltd for their assistance with the simulator and the real trial. This work was also funded and supported by the EPSRC Prosperity Partnership (EP/V05676X/1), the UKRI Node on Trust (EP/V026682/1), EPSRC CDT on Robotics and Autonomous Systems (EP/S023208/1), and Scottish Research Partnership in Engineering.
Fig. 5: Four continuous events from a single mission along with their behaviour predictions and the corresponding explanations. Scenarios 1, 2 and 4 contain correctly predicted behaviours, while Scenario 3 demonstrates a false prediction that has been encountered. Fig. 6: Confusion matrix indicating classification performance per behaviour with a Decision Tree during the real trial.
2309.16932
Symmetry Induces Structure and Constraint of Learning
Due to common architecture designs, symmetries exist extensively in contemporary neural networks. In this work, we unveil the importance of the loss function symmetries in affecting, if not deciding, the learning behavior of machine learning models. We prove that every mirror-reflection symmetry, with reflection surface $O$, in the loss function leads to the emergence of a constraint on the model parameters $\theta$: $O^T\theta =0$. This constrained solution becomes satisfied when either the weight decay or gradient noise is large. Common instances of mirror symmetries in deep learning include rescaling, rotation, and permutation symmetry. As direct corollaries, we show that rescaling symmetry leads to sparsity, rotation symmetry leads to low rankness, and permutation symmetry leads to homogeneous ensembling. Then, we show that the theoretical framework can explain intriguing phenomena, such as the loss of plasticity and various collapse phenomena in neural networks, and suggest how symmetries can be used to design an elegant algorithm to enforce hard constraints in a differentiable way.
Liu Ziyin
2023-09-29T02:21:31Z
http://arxiv.org/abs/2309.16932v2
# Symmetry Leads to Structured Constraint of Learning ###### Abstract Due to common architecture designs, symmetries exist extensively in contemporary neural networks. In this work, we unveil the importance of the loss function symmetries in affecting, if not deciding, the learning behavior of machine learning models. We prove that every mirror symmetry of the loss function leads to a structured constraint, which becomes a favored solution when either the weight decay or gradient noise is large. As direct corollaries, we show that rescaling symmetry leads to sparsity, rotation symmetry leads to low rankness, and permutation symmetry leads to homogeneous ensembling. Then, we show that the theoretical framework can explain the loss of plasticity and various collapse phenomena in neural networks and suggest how symmetries can be used to design algorithms to enforce hard constraints in a differentiable way. ## 1 Introduction Modern neural networks are so large that they often contain an astronomical number of neurons and connections layered in a highly structured manner. This design of modern architectures and loss functions means that there are a lot of redundant parameters in the model and that the loss functions are often invariant to hidden, nonlinear, and nonperturbative transformations of the model parameters. We call these invariant transformations the "symmetries" of the loss function. In physics, symmetries are regarded as fundamental organizing principles of nature, and systems with symmetries exhibit rich and hierarchical behaviors (Anderson, 1972). However, the role of symmetries in affecting the learning of neural networks remains largely a mystery. Since we will also discuss stochastic aspects of learning, we focus on studying a generic twice-differentiable non-negative _per-sample loss function_: \[\ell_{\gamma}=\ell_{0}(\theta,x)+\gamma\|\theta\|^{2}, \tag{1}\] where \(x\) is a minibatch or a single data point of arbitrary dimension and sampled from a training set. \(\theta\) is the model parameter, and \(\gamma\) is the weight decay. \(\ell_{0}\) assumes the definition of the model architecture and is the data-dependent part of the loss. Training with stochastic gradient descent (SGD), we sample a set of \(x\) and compute the gradient of the averaged per-sample loss over the set. The per-sample loss averaged over the training set is the empirical risk: \(L_{\gamma}(\theta):=\mathbb{E}_{x}[\ell_{\gamma}]\). Training with gradient descent (GD), we compute the gradient with respect to \(L_{\gamma}\). All the results we derive for \(\ell_{\gamma}\) directly carry over to \(L_{\gamma}\). We first study the effect of three specific types of symmetry one often encounters in deep learning: (1) rescaling symmetry, (2) rotation symmetry, and (3) permutation symmetry. See Figure 1 for an illustration. We then identify a general class of symmetry, _the mirror reflection symmetry_, that treats all three types of symmetry in a coherent framework and proves a general theorem showing that _every mirror symmetry leads to a structured constraint_. Section 4 discusses the related works in detail and the connections of our results to them. All the proofs are given in Appendix B. ## 2 Consequences of Common Symmetries While all the theorems in this section can be proved as corollaries of the general theorem 4, we give independent proofs of them to bring some concreteness to the general theorem. 
### Rescaling Symmetry Leads to Sparsity The simplest type of symmetry in deep learning is the rescaling symmetry (Dinh et al., 2017; Saxe et al., 2013; Neyshabur et al., 2014; Tibshirani, 2021). Consider a loss function \(\ell_{0}\) for which the following equality holds for any \(x\), arbitrary vectors \(u,\ w\) and \(\rho\in\mathbb{R}\setminus\{0\}\): \[\ell_{0}(u,w,x)=\ell_{0}(\rho u,\rho^{-1}w,x). \tag{2}\] For the rescaling symmetry and for all the problems we discuss below, it is also possible for \(\ell_{0}\) to contain other parameters \(v\) that are irrelevant to the symmetry: \(\ell_{0}=\ell_{0}(u,w,v)\). Since having such \(v\) or not does not change our result, we only show \(v\) explicitly when necessary. Also, because the symmetries we consider in this work hold for any \(x\), we also omit writing \(x\) unless necessary. The following theorem states that this symmetry leads to sparsity in the parameters. **Theorem 1**.: _Let \(\ell_{0}(u,w)\) have the rescaling symmetry in Eq. (2). Then, for any \(x\),_ 1. _if_ \(u=0\) _and_ \(w=0\)_, then_ \(\nabla_{u}\ell_{\gamma}=0\) _and_ \(\nabla_{w}\ell_{\gamma}=0\)_;_ 2. _for any fixed_ \(u\)_,_ \(w\)_, there exists_ \(\gamma_{0}\) _such that for all_ \(\gamma>\gamma_{0}\)_,_ \(\ell_{\gamma}(0,0)<\ell_{\gamma}(u,w)\)_._ Two parts of the theorem statement convey different insights. Part 1 shows that the learning dynamics are constrained - namely, GD or SGD will not leave the condition \((u,w)=(0,0)\) once entered. Part 2 shows that such constrained solutions can be locally favored for a large regularization. Additionally, symmetry has strong implications for the structure of the Hessian of the loss function and global properties of the loss landscapes; we delay their presentation and discussion until after Theorem 4. Figure 1: When loss function symmetries are present, the model converges to structurally constrained solutions at a high weight decay or gradient noise. **Left**: A vanilla linear regression trained with SGD does not converge to sparse solutions for any learning rate. When we introduce redundant rescaling symmetry to every parameter, sparser solutions are favored at higher learning rates (\(\lambda\)). **Mid**: Vanilla 200 dimensional matrix factorization trained with SGD prefers lower-rank solutions when the gradient noise is strong due to the rotation symmetry. The inset shows that the model always stays full-rank if we remove the rotation symmetry by introducing residual connections. **Right**: Correlation of the pre-activation value of neurons in the penultimate layer of ResNet18. After training, the neurons are grouped into homogeneous blocks when weight decay is present. The inset shows that such block structures are rare when there is no weight decay. Also, the patterns are similar for post-activation values, which further supports the claim that the block structures are due to the symmetry, not because of linearity. See Appendix A for the experimental details and more results. This symmetry usually manifests itself in ReLU networks or when part of the parameters is linearly connected. Previous works have used this property to either understand the inductive bias of neural networks or design efficient training algorithms. When the model is a fully connected ReLU network, Neyshabur et al. (2014) showed that having \(L_{2}\) regularization is equivalent to \(L_{1}\) constraints on the weights.
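To make the sparsity effect concrete, here is a small self-contained sketch (not from the paper; the data and hyperparameters are arbitrary): a linear regression whose weights are reparameterized as the elementwise product \(u\odot v\), which introduces the rescaling symmetry, is driven to a sparse solution by plain gradient descent once weight decay is present, in line with Theorem 1.

```python
# Toy sketch: rescaling symmetry plus weight decay yields sparsity. The regression
# weights are reparameterized as w = u * v (elementwise), which is invariant under
# u -> rho*u, v -> v/rho; with weight decay, the symmetric solution u_i = v_i = 0
# absorbs the weakly informative coordinates.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 20
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]                      # sparse ground truth
y = X @ w_true + 0.1 * rng.normal(size=n)

u = 0.1 * rng.normal(size=d)
v = 0.1 * rng.normal(size=d)
lr, gamma = 1e-2, 1e-1                             # step size and weight decay (arbitrary)
for _ in range(20000):
    err = X @ (u * v) - y
    grad_w = X.T @ err / n                         # gradient w.r.t. the product w = u*v
    gu, gv = grad_w * v + gamma * u, grad_w * u + gamma * v
    u, v = u - lr * gu, v - lr * gv

print("effectively nonzero weights:", int(np.sum(np.abs(u * v) > 1e-3)))  # typically 3
```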
Ziyin and Wang (2023) designed an algorithm to compress neural networks by transforming a parameter vector \(v\) to \(u\odot w\), where \(\odot\) is the Hadamard product. ### Rotation Symmetry Leads to Low-Rankness A more involved but common type of symmetry is the rotation symmetry, which also appears in a few slightly different forms in deep learning. This type of symmetry appears in matrix factorization problems, where it is a main cause of the emergence of saddle points (Li et al., 2019). It also appears in Bayesian deep learning (Tipping and Bishop, 1999; Kingma and Welling, 2013; Lucas et al., 2019; Wang and Ziyin, 2022), self-supervised learning (Chen et al., 2020; Ziyin et al., 2023b), and transformers in the form of key-query matrices (Vaswani et al., 2017; Dong et al., 2021). Now, we show that rotation symmetry in the landscape leads to low rankness. We use the word "rotation" in a broad sense, including all orthogonal transformations. There are two types of rotation symmetry common in deep learning. In the first kind, we have for any \(W\), \[\ell_{0}(W)=\ell_{0}(\Omega W) \tag{3}\] for any orthogonal matrix \(\Omega\) such that \(\Omega\Omega^{T}=I\) and \(W\) is a set of weights viewed as a matrix or vector whose left dimension matches the right dimension of \(R\). **Theorem 2**.: _Let \(\ell_{0}\) satisfy the rotation symmetry in Eq. (3). Then, for any index \(i\), vector \(n\) and \(x\),_ 1. _if_ \(n^{T}W=0\)_, then_ \(n^{T}\nabla_{W}\ell_{\gamma}=0\)_;_ 2. _for any fixed_ \(W\)_, there exists_ \(\gamma_{0}\) _such that for all_ \(\gamma>\gamma_{0}\)_,_ \(\ell_{\gamma}(W_{fi})<\ell_{\gamma}(W)\)_;_1__ Footnote 1: The notation \(W_{fi}\) denotes the matrix obtained by setting the \(i\)-th singular value of \(W\) to be zero. Part 1 of the statement deserves a closer look. \(n^{T}W=0\) implies that \(W\) is low-rank and \(n\) is a left eigenvector of \(W\). That the gradient vanishes in this direction means that once the weight matrix becomes low-rank, it will always be low-rank for the rest of the training. A more common symmetry is a simultaneous rotation symmetry, where \(\ell_{0}\) depends on two matrices \(U\) and \(W\) and satisfies \(\ell_{0}(U,W)=\ell_{0}(UR,R^{T}W)\), for any orthogonal matrix \(R\) and any \(U\) and \(W\). Namely, the loss function is invariant if we simultaneously rotate two different matrices with the same rotation. In this case, one can show something similar: \(n^{T}W=0\) and \(Un=0\) for some fixed direction \(n\) is the favored solution.2 Footnote 2: The theorem does not imply that \(Un=0\) and \(n^{T}W=0\) must happen simultaneously. To see this, consider any loss function that only depends on the norm of \(U\) and \(W\). ### Permutation Symmetry Leads to Homogeneous Ensembling The most common type of symmetry in deep learning is permutation symmetry. It shows up in virtually all architectures in deep learning. A primary and well-studied example is that in a fully connected network, the training objective is invariant to any pairwise exchange of two neurons in the same hidden layer. We refer to this case as the "special permutation symmetry" because it is a special case of the permutation symmetry we study here. Many recent works are devoted to understanding the special permutation symmetry (Simsek et al., 2021; Entezari et al., 2021; Hou et al., 2019). Notably, Entezari et al. (2021) empirically showed that neural networks under SGD likely converge to the same type of solution if we take permutation symmetry into consideration. 
Here, we study a more general and abstract type of permutation symmetry. The loss function has a permutation symmetry between parameter subsets \(\theta_{a}\) and \(\theta_{b}\) if, for any \(\theta_{a}\) and \(\theta_{b}\),3 Footnote 3: As an example, consider a hidden layer of a network; let \(w_{a}\) and \(u_{a}\) be the input and output weights of neuron \(a\), and \(w_{b}\), \(u_{b}\) be the input and output weights of neuron \(b\). We can thus let \(\theta_{a}:=(w_{a},u_{a})\) and \(\theta_{b}:=(w_{b},u_{b})\). \[\ell_{0}(\theta_{a},\theta_{b})=\ell_{0}(\theta_{b},\theta_{a}). \tag{4}\] When there are multiple pairs that satisfy this symmetry, one can combine this pairwise symmetry to generate arbitrary permutations. In this perspective, permutation symmetries appear far more common than is recognized. For example, a convolutional neural network is invariant to a pairwise exchange of two filters, which is rarely studied. A scalar rescaling symmetry can also be regarded as a special case of permutation symmetry. Here, we show that the permutation symmetry tends to make the neurons become identical copies of each other (namely, encouraging \(\theta_{a}\) to be as close to \(\theta_{b}\) as possible). **Theorem 3**.: _Let \(\ell_{0}\) satisfy the permutation symmetry in Eq. (4). Then, for any \(x\),_ 1. _if_ \(\theta_{a}-\theta_{b}=0\)_, then_ \(\nabla_{\theta_{a}}\ell_{\gamma}=\nabla_{\theta_{b}}\ell_{\gamma}\)_;_ 2. _for any_ \(\theta_{a}\neq\theta_{b}\)_, there exists_ \(\gamma_{0}\) _such that for all_ \(\gamma>\gamma_{0}\)_,_ \(\ell_{\gamma}((\theta_{a}+\theta_{b})/2,(\theta_{a}+\theta_{b})/2)<\ell_{\gamma}(\theta_{b},\theta_{a})\)_;_ This theorem implies that a permutation symmetry can be seen as a generalized form of ensembling smaller submodels.4 Special cases of this result have been proved previously. For a fully connected network, Fukumizu & Amari (2000) showed that the solutions of subnetworks are also solutions of the larger network, and Chen et al. (2023) demonstrated that these subnetwork solutions of fully connected networks can be attractive when the learning rate is large. Our result is more general because it does not restrict to the special permutation symmetry induced by fully connected networks. A novel application is that the networks have block-wise neurons and activation patterns whenever weight decay is present. See Figure 1. Footnote 4: One might suspect the origin is always favored when a mirror symmetry exists: this is not true. Let us consider a simple reparametrized linear regression problem: \(L_{\gamma}(w_{1},w_{2})=[(w_{1}+w_{2})x-y]^{2}+\gamma(w_{1}^{2}+w_{2}^{2})\). A permutation symmetry exists between \(w_{1}\) and \(w_{2}\). The condition \(\theta_{a}-\theta_{b}=0\) is satisfied for all solutions of the loss whenever \(\gamma>0\), however small \(\gamma\) is. Meanwhile, for a finite \(\gamma\), no solution satisfies \(\theta_{a}=\theta_{b}=0\). Therefore, achieving such solutions does not imply we have reached a trivial solution. ## 3 Every Mirror Symmetry Leads to a Structured Constraint A remarkable aspect of Theorems 1, 2 and 3 is that their proofs only require the symmetry, and no details of the architecture or loss function need to be specified. This means that these results are more general than the previous literature, which often specializes in a given architecture (such as a fully connected network) that happens to have a type of symmetry.
The observation that only knowing the symmetry alone can help us deduce so much about the behavior of these systems hints at some underlying universal principle. Let us first define a general type of symmetry called mirror reflection symmetry. **Definition 1**.: _A per-sample loss function \(\ell_{0}(w)\) is said to have the simple mirror (reflection) symmetry with respect to a unit vector \(n\) if, for all \(w\), \(\ell_{0}(w)=\ell_{0}((I-2nn^{T})w)\)._ Note that the vector \((I-2nn^{T})w\) is the reflection of \(w\) with respect to the plane orthogonal to \(n\). Also, the \(L_{2}\) regularization term itself satisfies this symmetry for any \(n\) because reflection is norm-preserving. An important quantity is the average of the two reflected solutions: \(\bar{w}=(I-nn^{T})w\), where \(\bar{w}\) is the fixed point of this transformation and can be called a "symmetric solution." This mirror symmetry can be generalized to the case where the loss function is invariant only when multiple mirror reflections are made. **Definition 2**.: _Let \(O\) consist of columns of orthonormal vectors: \(O^{T}O=I\), and \(R=I-2OO^{T}\). A loss function \(\ell_{0}(w)\) is said to have the \(O\)-mirror symmetry if, for all \(w\), \(\ell_{0}(w)=\ell_{0}(Rw)\)._ By construction, \(OO^{T}\) and \(I-OO^{T}\) are projection matrices, and \(I-2OO^{T}\) is an orthogonal matrix. There are a few equivalent ways to see this symmetry. First of all, it is equivalent to requiring the loss function to be invariant only after multiple simple mirror symmetry transformations. Let \(m\) be a unit vector orthogonal to \(n\). Reflections to both \(m\) and \(n\) give \[(I-2mm^{T})(I-2nn^{T})=I-2(nn^{T}+mm^{T}). \tag{5}\] The matrix \(nn^{T}+mm^{T}\) is a projection matrix and, thus, an instantiation of \(OO^{T}\). Secondly, because the composition of orthogonal unit vectors spans the space of projection matrices, \(OO^{T}\) is nothing but a generic projection matrix \(P\). Thus, this symmetry can be equivalently defined with respect to \(P\) such that \(\ell_{0}(w)=\ell_{0}((I-2P)w)\). If we let \(O\) or \(P\) be rank-1, the symmetry reduces to the simple mirror symmetry in Definition 1. We also make a reasonable smoothness assumption, which is only needed for part 4 of the theorem.5 Footnote 5: It is difficult to imagine what kind of network has a negatively diverging Hessian eigenvalue. Even if such an example exists, it goes away if we constrain the parameters in a bounded space. **Assumption 1**.: _The smallest eigenvalue of the Hessian of \(\ell_{0}\) is lower-bounded by a (possibly negative) constant \(\lambda_{\min}\)._ With these definitions, we are ready to prove the following theorem. **Theorem 4**.: _Let \(\ell_{0}(w)\) satisfy the \(O\)-mirror symmetry. Then,_ 1. _for any_ \(\gamma\)_, if_ \(O^{T}w=0\)_, then_ \(O^{T}\nabla_{w}\ell_{\gamma}=0\)_;_ 2. _if_ \(O^{T}w=0\)_, a subset of the eigenvector of_ \(\nabla_{w}^{2}\ell_{0}(w)\) _spans_ \(\ker(O^{T})\)_, and the rest spans_ \(\operatorname{im}(OO^{T})\)_;_ 3. _if_ \(O^{T}w\neq 0\)_, there exists_ \(\gamma_{0}\) _such that for all_ \(\gamma>\gamma_{0}\)_,_ \(\ell_{\gamma}((I-OO^{T})w)<\ell_{\gamma}(w)\)_;_ 4. _there exists_ \(\gamma_{1}\) _such that for all_ \(\gamma>\gamma_{1}\)_, all minima of_ \(\ell_{\gamma}\) _satisfy_ \(O^{T}w=0\)_._ Parts 1 and 2 are statements regarding the local gradient geometry, regardless of the weight decay. Parts 3 and 4 are local and global statements regarding the role of weight decay. 
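As an illustration (not from the paper), part 1 of Theorem 4 can be checked numerically on any loss built to have an \(O\)-mirror symmetry. The toy loss below is an arbitrary smooth function of \((I-P)w\) and \(\|Pw\|^{2}\), with \(P=OO^{T}\), so it is invariant under \(w\to(I-2P)w\) by construction; the dimensions and random seed are arbitrary choices.

```python
# Numerical check of Theorem 4, part 1: for an O-mirror-symmetric loss, the gradient
# has no component along im(OO^T) whenever O^T w = 0.
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 3
O, _ = np.linalg.qr(rng.normal(size=(d, k)))     # O^T O = I_k
P = O @ O.T

def loss(w):
    a = (np.eye(d) - P) @ w                      # symmetric part of the parameters
    return np.sum(a ** 2) * np.cos(np.sum(a)) + np.sin(w @ P @ w)

def num_grad(w, eps=1e-6):
    g = np.zeros_like(w)
    for i in range(d):
        e = np.zeros(d); e[i] = eps
        g[i] = (loss(w + e) - loss(w - e)) / (2 * eps)
    return g

w = (np.eye(d) - P) @ rng.normal(size=d)         # a point satisfying O^T w = 0
print("||O^T w||      :", np.linalg.norm(O.T @ w))
print("||O^T grad l|| :", np.linalg.norm(O.T @ num_grad(w)))   # ~0, as claimed
```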
It is instructive to show how Theorems 1, 2 and 3 are corollaries of Theorem 4. The simplest application is to the rescaling symmetry. When the rescaling symmetry exists between two scalars \(u\) and \(w\), there are two planes of mirror symmetry: \(n_{1}=(1,1)\) and \(n_{2}=(1,-1)\). Here, \(n_{1}\) symmetry implies that \(u=-w\) is a symmetry solution, and \(n_{2}\) symmetry implies that \(u=w\) is a symmetry solution. Applying Theorem 4 to these two mirrors implies that \(u=0\) and \(w=0\) is a symmetry solution and obeys Theorem 1. When \(u\in\mathbb{R}^{d_{1}}\) and \(w\in\mathbb{R}^{d_{2}}\) are vectors of arbitrary dimensions and have the rescaling symmetry, one can identify the implied mirror symmetry as \(O=-I\), and so \(I-2P=-I\): the loss function is symmetric to a simultaneous flip of all the signs of \(u\) and \(w\). Applying Theorem 4 to this mirror again allows us to derive Theorem 1. For permutation symmetry in \(\ell_{0}(\theta_{1},\theta_{2})\) with \(\theta_{i}\in\mathbb{R}^{d}\), we can identify the projection as \[P=\frac{1}{2}\begin{bmatrix}I_{d}&-I_{d}\\ -I_{d}&I_{d}\end{bmatrix}. \tag{6}\] Let \(\theta=(\theta_{1},\theta_{2})\) denote a vector combination of both sets of the parameters. The permutation symmetry thus implies the mirror symmetry: \(\ell_{0}(\theta)=\ell_{0}((I-2P)\theta)\). The symmetry solution is \(\theta_{1}=\theta_{2}\), and applying the master theorem to this mirror allows us to obtain Theorem 3. For rotation symmetry, we note that for any projection matrix \(\Pi\), the matrix \(I-2\Pi\) is an orthogonal matrix because \((I-2\Pi)(I-2\Pi)^{T}=(I-2\Pi)^{2}=I\). Therefore, the rotation symmetry already implies that for any \(\Pi\) and \(W\), \(\ell_{0}((I-2\Pi)W)=\ell_{0}(W)\). To apply the theorem, we need to view \(W\) as a vector, and the corresponding reflection matrix is \(\operatorname{diag}(I-2\Pi,...,I-2\Pi)\), a block-wise repetition of the matrix \(I-2\Pi\), where each block corresponds to a column of \(W\). By construction, \(P\) is also a projection matrix. Since this holds for an arbitrary \(\Pi\), one can choose \(\Pi\) to be the plane that matches the desired plane in Theorem 2, which can then be proved by invoking Theorem 4. Therefore, all three main types of symmetry we study are consequences of the general theorem. ## 4 Applications ### Absorbing States and Stationary Conditions **Definition 3**.: _For an arbitrary function \(f\), \(f(\theta)=0\) is a **stationary condition** of \(L(\theta)\) if \(f(\theta_{t})=0\) implies \(f(\theta_{t+1})=0\), where \(\theta_{t}\) is the \(t\)-th step parameter under (stochastic) gradient descent._ A stationary condition can be seen as a special case of an absorbing state, which is a major theme in the study of Markov processes and is associated with complex phase-transition-like behaviors (Norris, 1998; Dickman and Vidigal, 2002; Hinrichsen, 2000). Part 1 of Theorem 4 implies the following corollary. **Corollary 1**.: _Every \(O\)-mirror symmetry implies a linear stationary condition: \(O^{T}\theta=0\)._ Alternatively, a stationary condition can be seen as a generalization of a stationary point because every stationary point in the landscape implies the existence of a stationary condition - but not vice versa. For example, some functions of the parameters might reach stationarity before the whole model reaches stationarity.
The existence of such conditions implies that there are special subspaces in the landscape such that the dynamics of gradient descent within these subspaces will not leave them. See Appendix Figure 4 for an illustration of the stationary conditions. ### Structure of the Hessian Part 2 of Theorem 4 has very important implications for the local geometry of the loss and the dynamics of SGD. Let \(H\) denote the Hessian of the loss \(L\) or that of the per-sample loss \(\ell\). Part 2 states that \(H\) close to symmetry solutions is _partitioned_ by the symmetry condition \(I-2P\) into two subspaces: one part aligns with the image of \(P\), and the other part must be orthogonal to it. Namely, one can transform the Hessian into a two-block form, \(H_{\perp}\) and \(H_{\parallel}\), with \(O\).6 Note that the parameters might also contain other symmetries, so \(H_{\parallel}\) and \(H_{\perp}\) may also consist of multiple sub-blocks. This implies that close to the symmetric solutions, the Hessian of the loss will take a highly structured form simultaneously for all data points or batches. See Figure 2. Footnote 6: Let \(\bar{O}\) be any orthogonal matrix whose basis includes all the eigenvectors of \(O\). Then, \(\bar{O}^{T}H\bar{O}\) will be a two-block matrix. That the Hessian of neural networks after training takes a similar structure is supported by empirical works. For example, the illustrative Hessian in Figure 2 is similar to that computed in (Sagun et al., 2016). That the actual Hessians after training are well approximated by smaller blocks is supported by (Wu et al., 2020). Blockwise Hessian matrices can also be related to the existence of gaps in the Hessian spectrum, which is widely observed (Sagun et al., 2017; Ghorbani et al., 2019; Wu et al., 2020; Papyan, 2018). It is instructive to consider the special case where \(O=n^{T}\) is rank-1. Part 2 implies that \(n\) must be an eigenvector of the Hessian whenever the model is at a symmetry solution. For illustration, we consider a two-layer linear network with scalar input and outputs. The loss function can always be written as \(\ell(w,u)=\frac{1}{2}\left(x\sum_{i}^{d}u_{i}w_{i}-y\right)^{2}\). For each index \(i\), \(u_{i}w_{i}\) contains the rescaling symmetry and is thus subject to two symmetries with mirrors \((1,1)\) and \((1,-1)\). Therefore, the theory predicts that when \(u\approx w\approx 0\), the Hessian consists of \(d\) \(2\times 2\) symmetric matrices with \((1,1)\) and \((1,-1)\) being the eigenvectors. This can be compared with a direct computation. When \(w=u=0\), the nonvanishing terms of the Hessian are \(\frac{\partial^{2}}{\partial w_{i}\partial u_{i}}\ell=-xy\): \[H=\begin{bmatrix}0&-xy&&&\\ -xy&0&&&\\ &&\ddots&&\\ &&&0&-xy\\ &&&-xy&0\end{bmatrix}. \tag{7}\]
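A direct numerical check of this block structure is straightforward (illustrative only; the values of \(x\), \(y\), and \(d\) below are arbitrary): a finite-difference Hessian of the two-layer loss at \(u=w=0\) reproduces the \(2\times 2\) blocks of Eq. (7).

```python
# Quick check of Eq. (7): for l(u, w) = 0.5*(x*sum_i u_i w_i - y)^2 at u = w = 0,
# the Hessian splits into 2x2 blocks [[0, -xy], [-xy, 0]], whose eigenvectors are
# exactly the mirror directions (1, 1) and (1, -1) of the rescaling symmetry.
import numpy as np

d, x, y = 3, 1.3, -0.7
theta = np.zeros(2 * d)                     # parameters ordered (u_1, w_1, ..., u_d, w_d)

def loss(t):
    u, w = t[0::2], t[1::2]
    return 0.5 * (x * np.sum(u * w) - y) ** 2

def hessian(f, t, eps=1e-5):
    m = len(t)
    H = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            tpp = t.copy(); tpp[i] += eps; tpp[j] += eps
            tpm = t.copy(); tpm[i] += eps; tpm[j] -= eps
            tmp = t.copy(); tmp[i] -= eps; tmp[j] += eps
            tmm = t.copy(); tmm[i] -= eps; tmm[j] -= eps
            H[i, j] = (f(tpp) - f(tpm) - f(tmp) + f(tmm)) / (4 * eps ** 2)
    return H

print(np.round(hessian(loss, theta), 3))    # 2x2 blocks [[0, -xy], [-xy, 0]] on the diagonal
```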
In fact, one can view the training loss \(\ell_{\gamma}\) or \(L_{\gamma}\) as a function of \(s\), which we denote as \(\tilde{L}(s)\), and this analysis implies that the loss landscape close to \(s=0\) takes a rather universal geometry. See Figure 2. This allows us to characterize the dynamics of SGD in the symmetry directions: \[Pw_{t+1}=Pw_{t}-\lambda HPw_{t}, \tag{9}\] where \(\eta\) is the learning rate. Previously, this type of critical point is shown to exist at interpolation minima of wide networks (Wu et al., 2018). Our result implies that this type of solution is far more common than previously understood and exists whenever symmetries are present. Let us first consider GD. The largest negative eigenvalue of \(\mathbb{E}_{x}[H]\), \(\xi^{*}\), thus gives the speed at which SGD escapes the stationary condition: \(Pw_{t}\propto\exp(-\xi^{*}t)\). When weight decay is present, all the eigenvalues of \(H\) will be positively shifted by \(\gamma\), and, therefore, if and only if \(\xi^{*}+\gamma>0\), GD will be attracted to these symmetric solutions. In this sense, \(\xi^{*}\) gives a critical weight decay value at which a symmetry-induced constraint is favored. For SGD, the dynamics is qualitatively different. Naively, when using SGD, the model will escape the stationary condition faster due to the noise. However, this is the opposite of the truth. The existence of the SGD noise due to minibatch sampling makes these stationary conditions more attractive. The stability of the type of dynamics in Eq. (9) can be analyzed by studying the condition for convergence in probability of the solution \(Pw=0\)(Ziyin et al., 2023). One can show that \(Pw\) converges to \(0\) in probability if and only if the Lyapunov exponent of the process \(\Lambda\) is negative, which is possible even if this critical point is a strict saddle.7 When does a subspace of \(Pw\) converge (or collapse) to zero? One can derive a satisfactory Figure 2: When symmetries exist, the stationary conditions correspond to highly structured Hessians. **Left**: the symmetry mirror \(O\) partitions \(H\) into two blocks: one block parallel to surfaces in \(OO^{T}\), and the other orthogonal to it. When an extra symmetry exists, these two blocks can be decomposed into additional subblocks. **Mid-Right**: the loss function around a symmetric solution has a universal geometry. Here, \(s\) is the component of the parameters along a direction of the \(O\)-symmetry. The competition between the signal in the dataset and the regularization strength determines the local landscape. approximate learning rate by making the commutation approximation, which assumes that \(H(x)\) commutes with \(H(x^{\prime})\) for all \(x,\ x^{\prime}\) in the training set. In this case, each subspace of \(H(x)\) has its own Lyapunov exponent and can be analytically computed. Let \(\xi(x)\) denote the eigenvalue of \(H(x)\) in this subspace. Then, this subspace collapses when \(\Lambda=\mathbb{E}_{x}[\log|1-\lambda(\xi(x)+\gamma)|]<0\), which is negative for a large learning rate (see Appendix B for a formal treatment). The meaning of this condition becomes clear by expanding to the second order in \(\lambda\) to obtain: \[\lambda>\frac{-2\mathbb{E}[\xi+\gamma]}{\mathbb{E}[(\xi+\gamma)^{2}]}. \tag{10}\] The numerator is the eigenvalue of the empirical loss, and the denominator can be identified as the minibatch noise effect (Wu et al., 2018), which becomes larger if the batch size is small or if the dataset is noisy. 
Therefore, this phenomenon happens due to the competition between the signal and noise in the gradient. This example shows that at a large learning rate, the stationary conditions are favored solutions of SGD, even if they are not favored by GD. From a Markovian perspective, this critical learning rate is when the Markov process becomes an absorbing Markov chain.8 Also, convergence to these symmetry-induced saddles is not a unique feature of SGD but happens for Adam-type dynamics as well (Ziyin et al., 2021, 2023a). Footnote 8: Alternatively, similar problems can also be analyzed using a continuous-time approximation and show that when gradient noise is strong, these points are attractive (Vivien et al., 2022; Chen et al., 2023). Two novel applications of this analysis are to learning a sparse model and a low-rank model. See Figure 1. We first apply it to a linear regression with rescaling symmetry. It is known that when both weight decay and rescaling symmetries are present, the solutions are sparse and identical to lasso (Ziyin and Wang, 2023). Our result shows that even without weight decay, the solutions are sparse at a large learning rate. Then, we consider a matrix factorization problem. Classical results show that the solutions are low-rank when weight decay is present (Srebro et al., 2004). Our result shows that even if there is no weight decay, SGD at a large learning rate or gradient noise converges to these low-rank saddles. The fact that these constrained structures disappear completely when the symmetry is removed supports our claim that symmetry is the cause of them. A strong piece of evidence for the relevance of the theory to real neural networks is that after training, the Hessian of the loss function is observed to contain many small negative eigenvalues, which hints at the convergence to saddle points (Sagun et al., 2016, 2017; Ghorbani et al., 2019; Alain et al., 2019; Sankar and Balasubramanian, 2017). Another related phenomenon is that of pathological Fisher information. From a Bayesian perspective, the matrix \(J:=\mathbb{E}_{x}[\nabla_{w}\ell\nabla_{w}^{T}\ell]\) is the Fisher information of the system (Amari and Nagaoka, 2007). Our result implies that the Fisher information is singular close to any symmetry solutions. To see this, note that \(O^{T}\nabla_{w}\ell(w,x)=0\) for a symmetry solution and any \(x\). Therefore, \(O^{T}J=0\) implies that the Fisher information has a zero eigenvalue along the directions orthogonal to any mirror symmetry. Many previous works have demonstrated that the learning of neural networks passes through regions of singular Fisher information, where the learning dynamics is prohibitively slow (Wei et al., 2008; Cousseau et al., 2008; Fukumizu, 1996; Karakida et al., 2019, 2019). Therefore, the Fisher information having flat directions is also strong evidence that the symmetry solutions are reached after training. ### Loss of Plasticity and Neural Collapses Our theory implies that the commonly observed loss of plasticity problem in continual and reinforcement learning (Lyle et al., 2023; Abbas et al., 2023; Dohare et al., 2023) is attributable to symmetries in the model. For a given task, weight decay or a finite learning rate makes the model converge to symmetry solutions, which tend to be low-capacity constrained solutions. If we train on an additional task, the capacity of the model can only decrease because the symmetry solutions are also stationary conditions, which SGD cannot escape. 
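To make this attraction concrete, here is a minimal numerical sketch (toy data, initialization, and hyperparameters are illustrative assumptions, not an experiment from the paper): full-batch GD on the rescaling-symmetric model \(f(x)=x\,uw\) escapes the symmetric point and fits the data when weight decay is weak, but is pulled onto the constrained solution \(u=w=0\) once the weight decay is large enough, illustrating the critical weight decay discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy regression data (illustrative values, not from the paper).
x = rng.normal(size=200)
y = 0.5 * x + 0.3 * rng.normal(size=200)

def train(weight_decay, lr=0.05, steps=4000):
    u, w = 0.5, 0.5                        # two-layer linear model f(x) = x * u * w
    for _ in range(steps):
        err = x * u * w - y
        gu = np.mean(err * x) * w + 2 * weight_decay * u   # d/du of 0.5*mean(err^2) + wd*(u^2 + w^2)
        gw = np.mean(err * x) * u + 2 * weight_decay * w
        u, w = u - lr * gu, w - lr * gw
    return u * w

print("weight decay 0.05 ->", round(train(0.05), 3))   # fits a (shrunken) nonzero slope
print("weight decay 0.5  ->", train(0.5))              # collapses onto the symmetric solution u = w = 0
```

In a continual-learning setting, this is exactly the mechanism behind the loss of plasticity described above: once a subspace has collapsed onto the symmetric solution, it is also a stationary condition and cannot be left.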
Fortunately, our theory suggests at least two ways to fix this problem: (1) use an alternative parameterization that explicitly removes the symmetry and/or (2) inject additive noise into the gradient to eliminate the stationary conditions. There are many ways to achieve (1). An easy way is to bias every (symmetry-relevant) parameter by a random bias: \(w_{i}\to w_{i}+\beta_{i}\), where \(\beta_{i}\) is a small fixed random variable. See Figure 3 and Appendix A for experimental details. A related phenomenon that symmetry can explain is the collapse of neural networks. The most common type of collapse is when the learned representation of a neural network spans a low-rank subspace of the entire available space, often leading to reduced expressive power and, thus, degraded performance. In Bayesian deep learning, a posterior collapse happens when the stochastic latent variables are low-rank (Dai and Wipf, 2019; Alemi et al., 2018; Lucas et al., 2019; Wang and Ziyin, 2022). This can be attributed to the double rotation symmetry of the encoder's last layer weight and the decoder's first layer weight. In self-supervised learning, a dimensional collapse happens when the representation of the last layer is low-rank (Tian, 2022), which has been found to be explained by the rotation symmetry of the last layer weight that is often present in common self-supervised learning loss functions. This also explains why many self-supervised learning methods focus on introducing a term that removes the symmetry (Bardes et al., 2021). The rank collapse that happens in self-attention may also be relevant. Previous works attributed the rank collapse to the use of the softmax after the key-query matrices (Dong et al., 2021). Our theory offers a possible alternative explanation: the rank collapse happens due to the double rotation symmetry between the key and query matrices. However, more empirical evidence is needed to decide which explanation is more likely. In supervised learning, the "neural collapse" happens when the learned representation of the penultimate layer becomes low-rank, which happens when weight decay is present (Papyan et al., 2020; Galanti et al., 2021; Rangamani and Banburski-Fahey, 2022; Rangamani et al., 2021). Figure 1 shows that such a phenomenon can be attributed to the permutation symmetry in the fully connected layer. In the past, collapses in different scenarios have often been treated differently. Our result, in contrast, provides a unified perspective of the collapse phenomenon: collapses are caused by symmetries in the loss function. Our theory also suggests that these collapse phenomena have a natural interpretation as "phase transitions" in theoretical physics. A collapsed solution corresponds to a symmetric state where the "order parameter" \(O^{T}w=0\), and a normal solution corresponds to a nonsymmetric solution where \(O^{T}w\neq 0\). ### \(L_{1}\) Equivalence of Mirror Symmetries Parts 3 and 4 of Theorem 4 imply that constrained solutions are favored when weight decay is used. These results can be stated in an alternative way: that _every mirror symmetry plus weight decay has an \(L_{1}\) equivalent_. To see this, let the loss function \(L_{0}(w)\) be \(O\)-symmetric, and \(P=OO^{T}\). Let \(w\) be an arbitrary weight, which we decompose as \(w=w^{\prime}+sPw/\|Pw\|\), where we define \(s=\|Pw\|\). Let us define an equivalent loss function \(\tilde{L}_{0}(w^{\prime},Pw/\|Pw\|,s^{2}):=L_{0}(w)\). By definition,
\[L_{0}(w)+\gamma\|w\|^{2}=\tilde{L}_{0}(w^{\prime},Pw/\|Pw\|,s^{2})+\gamma(\|w^{\prime}\|^{2}+s^{2})=\tilde{L}_{0}(w^{\prime},Pw/\|Pw\|,|z|)+\gamma(\|w^{\prime}\|^{2}+|z|),\] where we introduced \(|z|=s^{2}\). We have thus constructed the \(L_{1}\) equivalent of the original loss: along the symmetry-breaking direction, the loss function has an equivalent \(L_{1}\) form. One can also show that \(\tilde{L}_{0}\) is well defined as an \(L_{1}\)-constrained loss function. If \(L_{0}\) is differentiable, \(\tilde{L}_{0}\) is differentiable except at \(s=0\). Thus, it suffices to show that the right derivative of \(\tilde{L}_{0}\) with respect to \(z\) exists at \(z=0_{+}\). As we have discussed, at \(z=0\), the expansion of \(L_{0}\) is second order in \(s\). This means that the leading order term of \(\tilde{L}_{0}\) is first order in \(z\), and so the \(L_{1}\) penalty is well-defined for this loss function. Lastly, it is worth commenting that there is a crucial difference between the original loss with the symmetry and the equivalent \(L_{1}\): the \(L_{1}\) equivalent is not differentiable, and so one cannot optimize it with GD. In contrast, the original loss function is generally differentiable and can be efficiently optimized with standard deep-learning training routines.

Figure 3: Loss of plasticity in continual learning in a vanilla linear regressor (**dashed**) and linear regressors with rescaling symmetry (**solid**). Vanilla regression has no symmetry and does not suffer plasticity loss, whereas having symmetries leads to the loss of plasticity. One can fix the problem with one of the two suggested methods, either by removing the symmetry in the model or removing the absorbing states by injecting noise.

### An Algorithm for Differentiable Constraint Sparsity and low-rankness are typical structured constraints that practitioners often want to incorporate into their models (Tibshirani, 1996; Meier et al., 2008; Jaderberg et al., 2014). However, the known methods of achieving these structured constraints tend to be tailored for specific problems and based on nondifferentiable operations. Our theory shows that incorporating symmetries is a general and scalable way to introduce such constraints into deep learning. Consider solving the following constrained problem: \(\min_{\theta}L(\theta)\)_s.t. as many elements of \(P\theta\) are zero as possible._ Here, \(P=OO^{T}\) is a projection matrix. This constraint is rather general and includes common structured constraints such as parameter sparsity, group sparsity, and low-rankness. Our theory implies an algorithm for enforcing such constraints in a differentiable way: introducing an artificial \(O\)-symmetry to the loss function encourages the constraint \(O^{T}\theta=0\), which can be achieved by running GD on the following loss function: \[\min_{w,u,v}L(T(w,u,v))+\alpha(\|w\|^{2}+\|u\|^{2}), \tag{11}\] where \(w,\ u,\ v\) have the same dimension as \(\theta\) and \(T(w,u,v)=(I-P)v+(Pw)\odot(Pu)\), where \(\odot\) denotes the Hadamard product. We call the algorithm _DCS_, standing for differentiable constraint by symmetry. This parameterization introduces the mirror symmetry for which \(O^{T}T(w,u,v)=0\) is a stationary condition. By Theorem 4, a sufficiently large \(\alpha\) ensures that \(O^{T}T(w,u,v)=0\) is an energetically favored solution. Also, note that this parametrization is a "faithful" parametrization in the sense that it is always true that \(\min_{w,u,v}L(T(w,u,v))=\min_{\theta}L(\theta)\).
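The parameterization of Eq. (11) is easy to try. Below is a minimal sketch of DCS for element-wise sparsity, taking \(P=I\) so that \(T(w,u,v)=w\odot u\); the data, dimensions, and hyperparameters are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 20
A = rng.normal(size=(n, d))
theta_true = np.zeros(d)
theta_true[:3] = (2.0, -1.5, 1.0)          # sparse ground truth (assumed for illustration)
b = A @ theta_true + 0.1 * rng.normal(size=n)

def L(theta):                               # the original, symmetry-free objective
    return 0.5 * np.mean((A @ theta - b) ** 2)

def grad_L(theta):
    return A.T @ (A @ theta - b) / n

# DCS with P = I: reparameterize theta = w * u (Hadamard product) and add the
# weight decay alpha*(|w|^2 + |u|^2) of Eq. (11); the v-part vanishes since (I - P)v = 0.
alpha, lr, steps = 0.02, 0.05, 20000
w = rng.normal(scale=0.5, size=d)
u = rng.normal(scale=0.5, size=d)
for _ in range(steps):
    g = grad_L(w * u)                       # dL/d(theta) evaluated at theta = w * u
    gw = g * u + 2 * alpha * w              # chain rule + weight decay
    gu = g * w + 2 * alpha * u
    w, u = w - lr * gw, u - lr * gu

theta_hat = w * u
print(np.round(theta_hat, 3))               # most irrelevant coordinates collapse to ~0
print("nonzeros:", int(np.sum(np.abs(theta_hat) > 1e-6)), " data loss:", round(L(theta_hat), 4))
```

Group sparsity or low-rankness follow the same recipe with a different choice of \(P\) (and hence of the artificial symmetry).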
## 5 Discussion In this work, we studied the implications of loss function symmetries for the gradient-based learning of models. We have shown that every mirror symmetry leads to a structured constraint of learning. This statement is examined from two different angles: (1) such solutions are favored when \(L_{2}\) regularizations are applied; (2) they are favored when the gradient noise is strong (which can happen when the learning rate is large, the batch size is small, or the data is noisy). We showed that the theory can be used to analyze and understand common structures such as sparsity and low-rankness. We also discussed a variety of specific problems and phenomena in a unified manner. Our result is universal in that it only relies on the existence of the specified symmetries and does not rely on the properties of the loss function, model architectures, or data distributions. In itself, symmetry and its associated constraint are both good and bad. On the bad side, it limits the expressivity of the network and its approximation power. On the good side, it leads to more condensed models and representations, tends to ignore noisy features, and can thereby improve generalization. Understanding symmetry systematically can help us avoid its negative side and utilize it to our advantage.
2309.09812
R2GenGPT: Radiology Report Generation with Frozen LLMs
Large Language Models (LLMs) have consistently showcased remarkable generalization capabilities when applied to various language tasks. Nonetheless, harnessing the full potential of LLMs for Radiology Report Generation (R2Gen) still presents a challenge, stemming from the inherent disparity in modality between LLMs and the R2Gen task. To bridge this gap effectively, we propose R2GenGPT, which is a novel solution that aligns visual features with the word embedding space of LLMs using an efficient visual alignment module. This innovative approach empowers the previously static LLM to seamlessly integrate and process image information, marking a step forward in optimizing R2Gen performance. R2GenGPT offers the following benefits. First, it attains state-of-the-art (SOTA) performance by training only the lightweight visual alignment module while freezing all the parameters of LLM. Second, it exhibits high training efficiency, as it requires the training of an exceptionally minimal number of parameters while achieving rapid convergence. By employing delta tuning, our model only trains 5M parameters (which constitute just 0.07\% of the total parameter count) to achieve performance close to the SOTA levels. Our code is available at https://github.com/wang-zhanyu/R2GenGPT.
Zhanyu Wang, Lingqiao Liu, Lei Wang, Luping Zhou
2023-09-18T14:35:35Z
http://arxiv.org/abs/2309.09812v2
# R2GenGPT: Radiology Report Generation with Frozen LLMs ###### Abstract Large Language Models (LLMs) have consistently showcased remarkable generalization capabilities when applied to various language tasks. Nonetheless, harnessing the full potential of LLMs for Radiology Report Generation (R2Gen) still presents a challenge, stemming from the inherent disparity in modality between LLMs and the R2Gen task. To bridge this gap effectively, we propose R2GenGPT, which is a novel solution that aligns visual features with the word embedding space of LLMs using an efficient visual alignment module. This innovative approach empowers the previously static LLM to seamlessly integrate and process image information, marking a step forward in optimizing R2Gen performance. R2GenGPT offers the following benefits. First, it attains state-of-the-art (SOTA) performance by training only the lightweight visual alignment module while freezing all the parameters of LLM. Second, it exhibits high training efficiency, as it requires the training of an exceptionally minimal number of parameters while achieving rapid convergence. By employing delta tuning, our model only trains 5M parameters (which constitute just 0.07% of the total parameter count) to achieve performance close to the SOTA levels. Our code is available at [https://github.com/wang-zhangyu/R2GenGPT](https://github.com/wang-zhangyu/R2GenGPT). R2GenGPT: Radiology Report Generation Large Language Models LLAMA ## 1 Introduction The landscape of radiological imaging data is experiencing exponential growth that far surpasses the availability of trained readers, resulting in a significant and unsustainable surge in radiologists' workloads. This surge in both the volume and complexity of cases places big pressure on radiologists to interpret more studies within increasingly tight timeframes. Consequently, radiologists are faced with extended working hours and a heightened risk of reading fatigue, all of which significantly contribute to diagnostic errors. Notably, the situation is particularly precarious during on-call hours for emergency radiology studies. As a result, the demand for automated radiographic report generation has soared, as it promises to alleviate the burden on radiologists, mitigate diagnostic errors, and expedite the clinical workflow. Automated radiographic report generation (R2Gen) is a complex AI task. It aims to produce a coherent paragraph that captures the observations and findings depicted in a given radiology image. There are different R2Gen approaches based on whether the report generation is structured and whether it is template-based. This paper focuses on unstructured multi-sentence report generation. Given its critical clinical relevance, the field of medical report generation has been garnering increasing attention. Most methodologies are inspired by image/video captioning and adopt the encoder-decoder paradigm [15, 38, 44, 48, 32, 33], with specific improvements tailored to the unique characteristics of the R2Gen task. In summary, recent works in the R2Gen task mainly aim to tackle two major challenges. The first challenge lies in **long text generation**. Unlike the image captioning task which generates a single sentence description, medical report generation requires detailed and coherent paragraph-long descriptions. This requires the model to have a robust capacity for learning long-range dependencies. To address this, many solutions have been proposed [13, 46, 42, 4, 3]. 
For instance, some research works [13, 46, 42] have employed hierarchically structured LSTM which first produces topic vectors using a sentence LSTM and then creates a description for each generated topic with a word LSTM. Another type of work R2Gen [4] introduced a memory-driven Transformer that can record key information of the generation process, enhancing the model's ability to produce long texts. The second challenge lies in the **bias in visual and textual data**. Due to an over-representation of normal samples in the training data, the model's learning process was biased towards these samples, limiting its ability to effectively detect abnormalities and anomalies within the dataset. Some works [42, 47, 39] have addressed this issue by aligning image and text/report features, such as work Self-boost [42] incorporating an image-text matching branch to enhance the model's capability to capture the anomalous features in the image. Other research works mitigate the effects of data bias by incorporating external knowledge, such as medical tags [13, 41], and knowledge graphs [17, 50, 21, 45]. For instance, PPKED [21] utilizes a knowledge graph and introduces the Posterior-and-Prior Knowledge Exploring-and-Distilling framework. Despite many efforts and solutions putting forth, the aforementioned two challenges remain significant issues in this field. Recently, large language models (LLMs) (e.g., [5, 35]) have demonstrated excellent capabilities to perform tasks with zero in-domain data, conduct logical reasoning, and apply commonsense knowledge in NLP tasks [16, 43]. This leads us to ponder whether we can apply large language models to medical report generation tasks, as pre-trained large language models seem to inherently possess the ability to address the two challenges mentioned above. As for long text generation, LLMs are equipped with an inherent understanding of grammar, syntax, and semantic coherence, making them well-suited for tasks requiring extended text generation, such as medical reporting. Furthermore, their proficiency in context modeling allows them to maintain consistency and relevance throughout a lengthy report. As for the bias stemming from an over-representation of normal samples in medical datasets, LLMs can serve as potential correctives due to their extensive knowledge base. Having been exposed to vast amounts of data, LLMs demonstrate robustness and are less susceptible to the effects of imbalanced datasets. They are even capable of handling numerous zero-shot tasks. Moreover, current methods mitigating bias entail the incorporation of external knowledge, whereas pre-trained LLMs inherently possess a wealth of informative knowledge. However, applying LLMs to R2Gen tasks poses challenges due to the fundamental disparity between visual and textual modalities. The crucial step in applying LLMs to R2Gen is to bridge the gap between visual information and textual generation. In this paper, we present R2GenGPT and explore three methods for aligning visual features with large language models. We first process chest x-ray images using a Visual Encoder to obtain visual embeddings. These embeddings are then mapped to the LLM's feature space via a Visual Mapper, ensuring uniform dimensions. 
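As a concrete picture of the Visual Encoder \(\rightarrow\) Visual Mapper pipeline just described, the following minimal sketch (illustrative only, not the released implementation; the dimensions are assumptions based on a Swin-base encoder and Llama2-7B) projects frozen visual features into the LLM's word-embedding space with a single linear Visual Mapper and concatenates them with the prompt embeddings.

```python
import torch
import torch.nn as nn

# Illustrative dimensions (assumptions): Swin-base grid features of width 1024
# over a 7x7 grid, and a 4096-dimensional word-embedding space as in Llama2-7B.
batch, num_patches, swin_dim, llm_dim = 2, 49, 1024, 4096

visual_features = torch.randn(batch, num_patches, swin_dim)  # stand-in for Z_v = g(X_v)
visual_mapper = nn.Linear(swin_dim, llm_dim)                 # the trainable projection W_m

visual_tokens = visual_mapper(visual_features)               # H_v: one token per patch
prompt_embeds = torch.randn(batch, 16, llm_dim)              # stand-in for the embedded prompt

# The frozen LLM then consumes [visual tokens ; prompt tokens ; report tokens].
llm_inputs = torch.cat([visual_tokens, prompt_embeds], dim=1)
print(llm_inputs.shape)                                      # torch.Size([2, 65, 4096])
```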
To identify the most efficient method of aligning visual features with the LLM, we've crafted three alignment modules: 1) shallow alignment, where only the Visual Mapper is trained and other parameters remain fixed; 2) deep alignment, where both the visual encoder and the Visual Mapper are trained simultaneously; and 3) Delta alignment, where the Visual Mapper and a limited set of incremental parameters from the visual encoder are trained, ensuring both effectiveness and efficiency. Our main contributions are summarized as follows. * We propose a novel LLMs-based Radiology report generation (R2Gen) framework, dubbed R2GenGPT. This marks the first instance of harnessing pre-trained large language models (LLMs) for the R2Gen task with comprehensive comparisons conducted on two frequently employed benchmark datasets. * We explored three methods with varying levels of trainable parameters to connect image modalities to large language models, namely: shallow alignment, delta alignment, and deep alignment, enabling the LLM to effectively process visual information. * Our approach exhibits promising and robust performance on two widely recognized benchmark datasets--IU-Xray and MIMIC-CXR. In comparison to multiple state-of-the-art methods, our framework consistently demonstrates its efficacy, affirming its potential in the field of R2Gen. ## 2 Relate Works **Radiology report generation** Radiology report generation (R2Gen) has gained significant attention in recent years, with many models being developed based on the encoder-decoder architecture initially used for image captioning tasks [38, 44, 26]. However, R2Gen poses additional challenges compared to image captioning, as medical reports are typically longer and clinical abnormalities in medical images are harder to detect than natural objects due to the data bias existed in the training set. To address these challenges, researchers have proposed various methods. In [42], Wang et al. introduced an image-text matching branch to facilitate report generation, utilizing report features to augment image characteristics and consequently minimize the impact of data bias. They also employed a hierarchical LSTM structure for the generation of long-form text. Chen et al. [4] and Wang et al. [41] introduced additional memory modules to store past information, which can be utilized during the decoding process to improve long-text generation performance. Another type of work aims to mitigate data bias by incorporating external knowledge information, with the most representative approach being the integration of knowledge graphs [17, 50, 21, 45, 18, 9]. Zhang et al. [50] and Liu et al. [21] combined pre-constructed graphs representing relationships between diseases and organs using graph neural networks, enabling more effective feature learning for abnormalities. Li et al. [18] developed a dynamic approach that updates the graph with new knowledge in real-time. Huang et al. [9] incorporated knowledge from a symptom graph into the decoding stage using an injected knowledge distiller. Apart from knowledge graphs, another method for integrating external knowledge involves incorporating semantic information to assist report generation through multi-task learning, such as multi-label classification [13, 41, 39, 12, 34]. Wang et.al. [41] extracted 768 high-frequency medical terms from RadGraph [11] and trained a multi-label classification network. 
The prediction results from classification were then incorporated as semantic information input to the decoder, assisting the generation of the report. Jin et al. [12] make use of the diagnostic results from the classification via prompts to explicitly guide the generation process. All the above methods employing an encoder-decoder architecture in a traditional way, typically assign equal importance to both the encoder and the decoder, with a comparable number of trainable parameters. In these methods, the output from the encoder serves as the key and value for cross-attention computation in the decoder. In contrast, our approach based on LLM deviates significantly from this traditional encoder-decoder framework. Firstly, the number of parameters in the decoder significantly exceeds that in the encoder. Secondly, the encoder functions more like a "visual tokenizer", converting images into visual tokens that are fed into LLM. The attention mechanism employed within this framework remains self-attention rather than cross-attention. With this innovative approach, our paper pioneers the use of a "decoder-centric" architecture for the task of medical report generation. **Large language Models** Recently, there has been a surge of interest in Large Language Models (LLMs) due to their superior efficacy in a wide array of Natural Language Processing (NLP) tasks. This began with transformer models like BERT [31], GPT [28], and T5 [29], each designed with distinct pre-training objectives. The introduction of GPT-3 [2] marked a significant shift, demonstrating the model's impressive zero-shot generalization capabilities, owed to a scaled-up parameter and data volume, which allowed it to excel in tasks it had not previously encountered. This catalyzed the development of several LLMs such as OPT [49], BLOOM [30], PaLM [5], and LLaMA [35], heralding the triumph of LLMs. In a parallel endeavor, Ouyang et al. [25] introduced InstructGPT, which brought human instruction and feedback into alignment with GPT-3. These advancements have been leveraged by applications like ChatGPT, which enables human-like dialogue interactions by responding to a vast spectrum of complex and nuanced questions and instructions. ## 3 Methodology **Overview** As illustrated in Figure 1, R2GenGPT comprises a Visual Encoder, a Visual Mapper, and an LLM (Large Language Model) component. The visual encoder is employed to extract information from chest x-ray images, while the visual mapper serves to project low-dimensional image features into the high-dimensional feature space of the LLM. Utilizing the visual features derived from the chest x-ray images, the LLM generates corresponding diagnostic reports. **Feature Alignment** For an input chest xray image \(\mathbf{X}_{v}\), we consider the pre-trained Swin Transformer [23] as visual encoder, which provides the visual feature \(\mathbf{Z}_{v}=g(\mathbf{X}_{v};\theta_{v})\), where \(\theta_{v}\) is the parameters of the Swin Transformer. The grid features of the last transformer layer is utilized in our experiments. We consider a simple linear layer as the Visual Mapper to connect image features into the LLM's word embedding space. Specifically, we apply a trainable projection matrix \(\mathbf{W}_{m}\) to convert \(\mathbf{Z}_{v}\) into language embedding tokens \(\mathbf{H}_{v}\), which have the same dimensionality of the word embedding space in the large language model. 
\[\mathbf{H}_{v}=\mathbf{W}_{m}\mathbf{Z}_{v},\quad\text{with }\mathbf{Z}_{v}=g( \mathbf{X}_{v}) \tag{1}\] Thus we have a sequence of visual tokens \(\mathbf{H}_{v}\). Following the extraction of visual tokens \(\mathbf{H}_{v}\), we propose the following three distinct training strategies to identify the most efficient aligning method by varying the level of trainable parameters. a) Shallow Alignment: In this mode, we fix the parameters of the pre-trained Swin Transformer and train only the linear Visual Mapper, represented by \(\mathbf{W}_{m}\). b) Deep Alignment: For this approach, both the Swin Transformer and the Visual Mapper are jointly fine-tuned. Specifically, parameters from both the Visual Encoder (Swin Transformer) and the Visual Mapper, denoted as \(\theta_{v}\) and \(\mathbf{W}_{m}\) respectively, are updated. c) Delta Alignment: As the Swin Transformer utilized in this paper was originally trained on natural images, the shallow alignment approach hinders the model's ability to capture high-quality radiographic image features. On the other hand, adopting deep alignment substantially impacts the model's training efficiency. Therefore, we propose delta alignment, parameter-efficiently fine-tuning the Swin Transformer model using LoRA [8]. Specifically, for a pre-trained weight matrix \(\mathbf{W}_{0}\) within \(\theta_{v}\), LoRA constrains its update with two smaller matrices using a low-rank decomposition \(\mathbf{W}_{0}+\Delta\mathbf{W}_{0}=\mathbf{W}_{0}+\mathbf{BA}\), where \(\mathbf{W}_{0}\in\mathbb{R}^{d\times k}\), \(\mathbf{B}\in\mathbb{R}^{d\times r}\), \(\mathbf{A}\in\mathbb{R}^{r\times k}\), and the rank \(r\ll min(d,k)\). It is noted that in our implementation, we only adjust the query and value projections within the swin transformer to prioritize a simple yet efficient model. The trained parameters are denoted as \(\Delta\theta_{v}\), and both \(\Delta\theta_{v}\) and \(\mathbf{W}_{m}\) are trained in this mode. Large Language ModelsWe adopt Llama2-7B model for the large language model component. The Llama2-7B stands out for its remarkable capabilities and robustness. Designed with a massive 7-billion-parameter architecture, it encapsulates a rich knowledge base derived from extensive pre-training on diverse datasets. One of its key strengths lies in its extraordinary ability to understand and generate complex language structures, making it particularly well-suited for intricate tasks such as radiology report generation. Given an chest xray image \(\mathbf{X}_{v}\) and its corresponding report \(\mathbf{X}_{r}\), the detailed prompt inputted into Llama2 is as follows. Human: <Img>X\({}_{v}\)</Img>, X\({}_{p}\) vn Assistant: X\({}_{r}\) </s>. Here \(\mathbf{X}_{p}\) is our designed instruction prompt specific to the R2Gen task. In our current implementation, \(\mathbf{X}_{p}\) = "Generate a comprehensive and detailed diagnosis report for this chest xray image.". For this prompt, before inputting it into LLAMA2 for computation, \(\mathbf{X}_{v}\) will be replaced by visual tokens \(\mathbf{H}_{v}\) processed using Equ. 1 while all other text is tokenized into word tokens using LLAMA's tokenizer. Loss FunctionWe perform instruction-tuning of the LLM only on the report tokens, using its original auto-regressive training objective. Specifically, for a report of length \(L\), conditioned on visual information \(\mathbf{X}_{v}\) and instruction prompt Figure 1: An overview of our proposed R2GenGPT. 
The input tokens for the Large Language Model (LLM) are sequentially concatenated, consisting of visual tokens, prompt tokens, and report tokens. A token mask of -100 indicates that those particular tokens are excluded from auto-regressive training, while a mask of 1 signifies inclusion in auto-regressive training. \(\mathbf{X}_{p}\), our loss function, captured as the negative log likelihood, is formulated as: \[\mathcal{L}(\theta;\mathbf{X}_{r},\mathbf{X}_{v},\mathbf{X}_{p})=-\sum_{i=1}^{L} \log p_{\theta}(x_{i}|\mathbf{X}_{v},\mathbf{X}_{p},\mathbf{X}_{r,<l}), \tag{2}\] where \(\theta\) is the trainable parameters, \(\mathbf{X}_{r,<i}\) is the report tokens before the current prediction token \(x_{i}\). ## 4 Experiments ### Data Collection We evaluated performance using two datasets: a widely-used benchmark IU-Xray [7] and the currently largest dataset MIMIC-CXR [14] for medical report generation. **IU-Xray**: Indiana University Chest X-ray Collection (IU-Xray) [7] is the most widely used publicly accessible dataset in medical report generation tasks. It contains 3,955 fully de-identified radiology reports, each of which is associated with frontal and/or lateral chest X-ray images, and 7,470 chest X-ray images in total. Each report is comprised of several sections: Impression, Findings, Indication, etc. In this work, we adopt the same data set partitioning as [4] for a fair comparison, with a train/test/val set by 7:1:2 of the entire dataset. All evaluations are done on the test set. **MIMIC-CXR**: This largest publicly available dataset encompasses both chest radiographs and unstructured textual reports. This comprehensive dataset comprises a total of 377,110 chest X-ray images and 227,835 corresponding reports sourced from 64,588 patients who underwent examination at the Beth Israel Deaconess Medical Center between 2011 and 2016. To ensure equitable comparisons, we adhered to MIMIC-CXR's official partitioning as outlined in [4], resulting in 270790 samples designated for training, while allocating 2130 and 3,858 samples for validation and testing, respectively. ### Experimental Settings **Evaluation Metrics** Adhering to the established evaluation protocol 1, we employ the prevalent metrics for assessment, namely BLEU scores [27], ROUGE-L [20], METEOR [1] and CIDEr [37], to gauge the quality of the generated textual reports. To measure the accuracy of descriptions for clinical abnormalities, we follow [4, 3, 22] and further report clinical efficacy metrics. Specifically, we employ CheXpert [10] for annotating the generated reports, which are subsequently compared against ground truth annotations across 14 distinct categories related to thoracic Figure 2: Three proposed alignment methods. (a) Shallow Alignment: Training only the Linear Layer. (b) Deep Alignment: Training both the Linear Layer and all parameters of the Visual Encoder. (c) Delta Alignment: Training the Linear Layer and a small subset of incremental parameters of the Visual Encoder. diseases and support devices. We use precision, recall, and F1 to evaluate model performance for clinical efficacy metrics. **Implementation Details** In this work, we leveraged the LLAMA2-7B model 2 as the large language model and the base version of the Swin Transformer 3 as the Visual Encoder. Within the parameters of LoRA, we configured the Lora attention dimension to 16, and the alpha parameter for Lora scaling was also set at 16. 
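For concreteness, the low-rank update \(\mathbf{W}_{0}+\mathbf{B}\mathbf{A}\) behind delta alignment can be sketched by hand as follows (an illustrative stand-in, not the authors' implementation; it only mirrors the rank \(r=16\) and scaling \(\alpha=16\) quoted above).

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen Linear layer with a trainable low-rank update W0 + (alpha/r) * B @ A."""
    def __init__(self, base: nn.Linear, r: int = 16, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                   # W0 (and its bias) stay frozen
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, r))  # zero init: output starts equal to the base layer
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Example: wrap a stand-in query projection of one Swin attention block.
query_proj = LoRALinear(nn.Linear(1024, 1024), r=16, alpha=16)
x = torch.randn(2, 49, 1024)
print(query_proj(x).shape)                                                  # torch.Size([2, 49, 1024])
print(sum(p.numel() for p in query_proj.parameters() if p.requires_grad))   # 2 * 16 * 1024 = 32768
```

Wrapping only the query and value projections of the visual encoder in such modules keeps the number of trainable parameters in the millions, which is the source of the efficiency reported for R2GenGPT (Delta).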
The training process was conducted on four NVIDIA A100 40GB GPUs using mixed precision for 3 epochs for MIMIC-CXR and 15 epochs for IU-Xray dataset, with a mini-batch size of 6 and a learning rate of 1e-4. During the testing phase, we employed a beam search strategy with a beam size set to 3. \begin{table} \begin{tabular}{l|l|c c c c c c} \hline \hline Dataset & Methods & BLEU-1 & BLEU-2 & BLEU-3 & BLEU-4 & ROUGE & METEOR & CIDEr \\ \hline \multirow{8}{*}{IU-Xray} & Show-Tell & 0.243 & 0.130 & 0.108 & 0.078 & 0.307 & 0.157 & 0.197 \\ & Att2in & 0.248 & 0.134 & 0.116 & 0.091 & 0.309 & 0.162 & 0.215 \\ & AdaAtt & 0.284 & 0.207 & 0.150 & 0.126 & 0.311 & 0.165 & 0.268 \\ & Transformer & 0.372 & 0.251 & 0.147 & 0.136 & 0.317 & 0.168 & 0.310 \\ & M2transformer & 0.402 & 0.284 & 0.168 & 0.143 & 0.328 & 0.170 & 0.332 \\ & R2Gen\({}^{\dagger}\) & 0.470 & 0.304 & 0.219 & 0.165 & 0.371 & 0.187 & - \\ & R2GenCMN\({}^{\dagger}\) & 0.475 & 0.309 & 0.222 & 0.170 & 0.375 & 0.191 & - \\ & MSAT\({}^{\dagger}\) & 0.481 & 0.316 & 0.226 & 0.171 & 0.372 & 0.190 & 0.394 \\ & METransformer\({}^{\dagger}\) & 0.483 & **0.322** & 0.228 & 0.172 & **0.380** & 0.192 & 0.435 \\ \hline \multirow{8}{*}{IU-Xray} & R2GenGPT (Shallow) & 0.466 & 0.301 & 0.211 & 0.156 & 0.370 & 0.202 & 0.405 \\ & R2GenGPT (Delta) & 0.470 & 0.299 & 0.213 & 0.162 & 0.369 & 0.211 & 0.419 \\ & R2GenGPT (Deep) & **0.488** & 0.316 & **0.228** & **0.173** & 0.377 & **0.211** & **0.438** \\ \hline \multirow{8}{*}{MIMIC-CXR} & Show-Tell & 0.308 & 0.190 & 0.125 & 0.088 & 0.256 & 0.122 & 0.096 \\ & Att2in & 0.314 & 0.198 & 0.133 & 0.095 & 0.264 & 0.122 & 0.106 \\ & AdaAtt & 0.314 & 0.198 & 0.132 & 0.094 & 0.267 & 0.128 & 0.131 \\ & Transformer & 0.316 & 0.199 & 0.140 & 0.092 & 0.267 & 0.129 & 0.134 \\ & M2Transformer & 0.332 & 0.210 & 0.142 & 0.101 & 0.264 & 0.134 & 0.142 \\ & R2Gen\({}^{\dagger}\) & 0.353 & 0.218 & 0.145 & 0.103 & 0.277 & 0.142 & - \\ & R2GenCMN\({}^{\dagger}\) & 0.353 & 0.218 & 0.148 & 0.106 & 0.278 & 0.142 & - \\ & PPKED\({}^{\dagger}\) & 0.36 & 0.224 & 0.149 & 0.106 & 0.284 & 0.149 & 0.237 \\ & GSK\({}^{\dagger}\) & 0.363 & 0.228 & 0.156 & 0.115 & 0.284 & - & 0.203 \\ & MSAT\({}^{\dagger}\) & 0.373 & 0.235 & 0.162 & 0.120 & 0.282 & 0.143 & 0.299 \\ & METransformer\({}^{\dagger}\) & 0.386 & 0.250 & 0.169 & 0.124 & 0.291 & 0.152 & **0.362** \\ \hline \multirow{2}{*}{R2GenGPT (Shallow)} & 0.365 & 0.237 & 0.163 & 0.117 & 0.277 & 0.136 & 0.145 \\ & R2GenGPT (Delta) & 0.380 & 0.244 & 0.167 & 0.119 & 0.281 & 0.145 & 0.195 \\ \cline{1-1} & R2GenGPT (Deep) & **0.411** & **0.267** & **0.186** & **0.134** & **0.297** & **0.160** & 0.269 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison on IU-Xray (upper part) and MIMIC-CXR datasets (lower part). \(\dagger\) indicates the results are quoted from their respective papers. For the methods without \(\dagger\), their results are obtained by re-running the publicly released codebase [19] on these two datasets using the same training-test partition as our method. ### Results and Discussion **Comparison with SOTA** Table 1 showcases a performance comparison between the state-of-the-art methods and our R2GenGPT model variants on the IU-Xray and MIMIC-CXR dataset. In terms of standard image captioning methods, the table considers Show-Tell [38], Att2in [44], AdaAtt [24], Transformer [36], and M2Transformer [6]. Furthermore, medical report generation methods such as R2Gen [4], R2GenCMN [3], MSAT [41], METransformer [40], and other methods marked with \(\dagger\) in Table 1 are considered. 
From Table 1, it is evident that our R2GenGPT model variants, especially R2GenGPT (Deep), outperform the compared methods across nearly all evaluation metrics. In the MIMIC-CXR dataset, apart from CIDEr, we significantly outperform the latest METransformer [40] method across all metrics. For instance, our BLEU_4 score is improved from 0.124 to 0.134, marking an 8.1% increase. However, we achieved a CIDEr score of 0.269, which is lower than METransformer's 0.362. This discrepancy is because METransformer employs an expert voting strategy similar to an ensemble approach to enhance the CIDEr metric. In comparison to methods without this enhancement, such as R2Gen [4] and PPKED [21], we also hold a distinct advantage in terms of the CIDEr metric. It is also noteworthy that our R2GenGPT (Shallow), with only 4.2M trainable parameters, has been able to achieve a performance in par with the well-known R2Gen model [4], if not even better. Model Efficiency and Clinical Efficacy AnalysisIn Table 2, we have presented both model efficiency and clinical efficacy metrics. It's evident that our model exhibits higher training efficiency compared to METransformer. For instance, R2GenGPT (Deep) requires training with only 90.9 million parameters, which is significantly less than METransformer's 152 million. Furthermore, R2GenGPT (Delta) achieves comparable performance with just 5 million parameters. To assess the model's training efficiency, we conducted evaluations on four A100 40G GPUs and recorded \begin{table} \begin{tabular}{c|c c c|c c c c} \hline \hline \multirow{2}{*}{Models} & \multicolumn{3}{c|}{Trainable Components} & \multicolumn{3}{c|}{Scale and Efficiency} & \multicolumn{3}{c}{Clinical Efficacy} \\ & Mapper & Encoder & LoRA & Trainable Parameter & Time & Precision & Recall & F1 \\ \hline Shallow & βœ“ & & & 4.2M & 1.75h/epo & 0.341 & 0.312 & 0.325 \\ Delta & βœ“ & & βœ“ & 5.0M & 1.83h/epo & 0.366 & 0.350 & 0.358 \\ Deep & βœ“ & βœ“ & & 90.9M & 2.75h/epo & **0.392** & **0.387** & **0.389** \\ \hline METransformer & - & - & - & 152M & 3.62h/epo & 0.364 & 0.309 & 0.334 \\ \hline \hline \end{tabular} \end{table} Table 2Evaluation of Model Efficiency and Clinical Efficacy on MIMIC-CXR dataset. Figure 3: Examples of the generated report on MIMIC-CXR dataset. We compared the results generated by the three alignment methods proposed in R2GenGPT. For better illustration, the key medical information in the reports are highlighted using different colors. the time required for one training epoch. Notably, R2GenGPT Shallow, Delta, and Deep each completed this epoch in just 1.75, 1.83, and 2.75 hours, respectively, compared to METransformer's 3.62 hours, highlighting our model's superior training efficiency. In terms of clinical efficacy metrics, it can be observed that our R2GenGPT(deep) and R2GenGPT(delta) achieved F1 scores of 0.389 and 0.358, respectively, surpassing the current SOTA method, METransformer, with a score of 0.334. This demonstrates the ability of R2GenGPT to generate crucial clinical information. Qualitative resultsIn Figure 3, we compare the reports generated by the three alignment methods of R2GenGPT. To provide a better visualization, we highlight key medical information in both the ground truth and generated reports using different colors. From the figure, it can be observed that reports generated using the Shallow Alignment method are notably inferior to those generated using the Delta Alignment and Deep Alignment methods. 
For instance, in the first example (top), the Shallow Alignment method erroneously identifies a sample with mild pulmonary edema as a normal sample, whereas Delta Alignment and Deep Alignment methods can accurately identify it. ## 5 Conclusions In this paper, we present R2GenGPT, an innovative framework at the forefront of Radiology Report Generation (R2Gen) that capitalizes on the capabilities of Large Language Models (LLMs). Through a comprehensive exploration of three alignment methods--shallow, delta, and deep-- this research highlights the game-changing potential of LLMs in elevating the R2Gen landscape. R2GenGPT not only attains competitive SOTA performance but also achieves a remarkable reduction in computational complexity. This dual achievement positions R2GenGPT as a promising solution to automate and improve radiology reporting.
2309.11085
Hecke action on tamely ramified Eisenstein series over $\mathbb{P}^1$
We study the space of automorphic functions for the rational function field $\mathbb{F}_q(t)$ tamely ramified at three places. Eisenstein series are functions induced from the maximal torus. The space of Eisenstein series generates a trimodule for the affine Hecke algebra. We conjecture a generators and relations description of this module and prove the conjecture when $G=\mathrm{PGL}(2)$ and $\mathrm{SL}(3)$.
Tahsin Saffat
2023-09-20T06:28:39Z
http://arxiv.org/abs/2309.11085v2
# Hecke action on tamely ramified Eisenstein series over \(\mathbb{P}^{1}\) ###### Abstract. We study the space of automorphic functions for the rational function field \(\mathbb{F}_{q}(t)\) tamely ramified at three places. Eisenstein series are functions induced from the maximal torus. The space of Eisenstein series generates a trimodule for the affine Hecke algebra. We conjecture a generators and relations description of this module and prove the conjecture when \(G=\operatorname{PGL}(2)\) and \(\operatorname{SL}(3)\). ###### Contents * 1 Introduction * 1.1 Problem Setup * 1.2 Geometric Formulation * 1.3 Main Result * 1.4 Acknowledgements * 2 Background on Group Theory, Hecke Operators, and Eisenstein Series * 2.1 Operations with Functions * 2.2 Group Theory * 2.3 Affine Hecke Algebra * 2.4 Hecke Operators * 2.5 Pseudo-Eisenstein Series * 3 Example: G=PGL(2) * 3.1 Finite Hecke Action * 3.2 Finite Hecke Trimodule Structure * 3.3 Hecke Trimodule Structure * 4 Proof of Theorem 1 * 5 Some Formulas for Algebraic Eisenstein Module * 6 Example: G=SL(3) * 6.1 Proof of Proposition 6.1 * 6.2 Proof of Proposition 6.2 * 7 Directions: Functional Equation, Many Points * 7.1 Many Points of Tame Ramification * 7.2 Reflection Relation as a Functional Equation * 8 Appendix: Proof of Equation 18 * 8.1 Geometric Interpretation of Equation 18 * 8.2 Proof of Equation 18 ## 1. Introduction ### Problem Setup Let \(\mathbb{P}^{1}\) denote the projective line over \(\mathbb{F}_{q}\). We denote by \(\mathbb{F}\), \(\mathbb{A}\), and \(\mathbb{O}\), its function field, ring of adeles, and the subring of integral adeles, respectively. Let \(S=\{0,1,\infty\}\subset\mathbb{P}^{1}(\mathbb{F}_{q})\) and fix \(G\) a reductive group over \(\mathbb{F}_{q}\), \(T\) a split maximal torus split, and \(B\) a Borel subgroup containing \(T\). Define \(K_{S}:=\prod_{x\in\mathbb{P}^{1}|}K_{x}\), where \(K_{x}=I\subset G(\mathbb{F}_{x})\) is the Iwahori subgroup for \(x\in S\) and \(K_{x}=G(\mathbb{F}_{x})\) otherwise. Consider the vector space of compactly supported, complex valued automorphic functions \[C_{Aut}:=C\left[K_{S}\backslash G(\mathbb{A})/G(\mathbb{F})\right].\] This vector space has an action of the affine Hecke algebra at \(x\in S\) and the spherical Hecke algebra at \(x\in|\mathbb{P}^{1}|\setminus S\). Given a function \(\phi:\Lambda\to\mathbb{C}\), the Eisenstein series \(\mathrm{Eis}_{\phi}\) is a (\(G(\mathbb{O})\) invariant) function on \(G(\mathbb{A})/G(\mathbb{F})\) defined by \[\mathrm{Eis}_{\phi}(g):=\sum_{\gamma\in G(\mathbb{F})/B(\mathbb{F})}\phi(g\gamma)\] For \(\lambda\in\Lambda\), define \(\mathrm{Eis}_{\lambda}=\mathrm{Eis}_{\underline{1}_{\lambda}}\), where \(\underline{1}_{\lambda}\) is the function taking value \(1\) on \(\lambda\) and \(0\) elsewhere. We are interested in the space \(C_{Eis}\), which is the closure under all Hecke operators of \[\mathrm{span}\{\mathrm{Eis}_{\lambda}\}\subset C\left[K_{S}\backslash G( \mathbb{A})/G(\mathbb{F})\right].\] We explicitly determine the action of all Hecke operators on \(C_{Eis}\). In order to state the main result, we first reformulate the problem. ### Geometric Formulation Let \(\mathrm{Bun}_{G}(\mathbb{P}^{1},S)\) denote the moduli stack over \(\mathbb{F}_{q}\) of \(G\) bundles on \(\mathbb{P}^{1}\) with \(B\) reduction along \(S\). For example, for \(G=GL_{n}\), it classifies the data \((\mathcal{E},\{F_{s}\}_{s\in S})\) where \(\mathcal{E}\) is a rank \(n\) vector bundle on \(\mathbb{P}^{1}\) and \(F_{s}\) is a flag in \(\mathcal{E}|_{s}\). 
Then, \(K_{S}\backslash G(\mathbb{A})/G(\mathbb{F})\) is identified with the \(\mathbb{F}_{q}\) points of \(\mathrm{Bun}_{G}(\mathbb{P}^{1},S)\). There is an induction diagram \[\mathrm{Bun}_{T}(\mathbb{P}^{1})\stackrel{{ p}}{{\leftarrow}} \mathrm{Bun}_{B}(\mathbb{P}^{1})\xrightarrow{q}\mathrm{Bun}_{G}(\mathbb{P}^{1},S),\] Moreover, \(\mathrm{Bun}_{T}(\mathbb{P}^{1})\) is identified with \(\Lambda\otimes\mathrm{Pic}(\mathbb{P}^{1})\) and each \(\lambda\in\Lambda\) corresponds to a component of \(\mathrm{Bun}_{T}(\mathbb{P}^{1})\). The Eisenstein series corresponding to \(\lambda\in\Lambda\) is constructed as \(\mathrm{Eis}_{\lambda}=q_{i}p^{*}(\underline{1}_{\lambda})\), where \(\underline{1}_{\lambda}\) is the function taking value \(1\) along the component corresponding to \(\lambda\) and \(0\) elsewhere. The pushforward is integration along fibers relative to the motivic (or weighted counting) measure. The action of spherical Hecke operators on \(C_{Eis}\) at any \(x\in|\mathbb{P}^{1}|\setminus S\) can be expressed in terms of the Hecke operators at \(s\in S\) through the central homomorphism \(\mathcal{H}^{sph}\to\mathcal{H}\). We denote by \(\mathcal{H}^{\otimes S}\) the algebra generated by Hecke operators at all \(s\in S\). For \(s\in S\), picking a uniformizer in the local ring \(\mathbb{F}_{s}\) identifies the algebra of Hecke operators at \(s\), \(\mathcal{H}^{s}\), with the algebra of compactly supported functions on the \(\mathbb{F}_{q}\) points of \(I\backslash G((t))/I\). For \(w\in W^{aff}\), define \(T_{w}\) as the corresponding function on \(I\backslash G((t))/I\). There is an injective homomorphism \(\mathbb{C}[\Lambda]\to\mathcal{H}\) defined by \(\lambda\mapsto T_{\lambda}\) for \(\lambda\) antidominant. Let \(J_{\lambda}\) denote the image of \(\lambda\) under this homomorphism. For an operator \(A\in C[I\backslash G((t))/I]\) and \(s\in S\), let \(A^{s}\) be its image under the identification \[C[I\backslash G((t))/I]\cong\mathcal{H}^{s}\] ### Main Result Let us introduce a mild technical restriction on \(G\). Let \(\Lambda^{\vee}:=\mathrm{Hom}(T,\mathbb{G}_{m})\) denote the lattice of weights of \(T\). Assume that for all roots \(\check{\alpha}\in\Lambda^{\vee}\), the map \(\Lambda\to\mathbb{Z}\) given by \(\lambda\mapsto\langle\check{\alpha},\lambda\rangle\) is surjective. For example, \(\mathrm{PGL}_{2}\) and \(\mathrm{SL}_{3}\) satisfy this condition, but \(\mathrm{SL}_{2}\) does not. The adjoint form of a group will always satisfy this condition. We conjecture the following. **Conjecture 1.1**.: \(C_{Eis}\) is the \(\mathcal{H}^{\otimes S}\) module generated by \(\mathrm{Eis}_{0}\) with the following relations 1. (Translation Relation) For any \(\lambda\in\Lambda\) \[J_{\lambda}^{0}\mathrm{Eis}_{0}=J_{\lambda}^{1}\mathrm{Eis}_{0}=J_{\lambda}^{ \infty}\mathrm{Eis}_{0}\] 2. (Reflection Relation) For any simple reflection, \(s_{\alpha}\in W\) \[(1+T_{s_{\alpha}}^{0})(1+T_{s_{\alpha}}^{1})\mathrm{Eis}_{0}=(1+T_{s_{\alpha}}^ {0})(1+T_{s_{\alpha}}^{\infty})\mathrm{Eis}_{0}=(1+T_{s_{\alpha}}^{1})(1+T_{s_{ \alpha}}^{\infty})\mathrm{Eis}_{0}\] There is a natural generalization of this, Conjecture 7.1, to arbitrary tame ramification \(S\subset\mathbb{P}^{1}(\mathbb{F}_{q})\). When \(S\) consists of one or two points, the conjecture follows from the Radon transform which identifies \(C_{Eis}\) with the regular bimodule for the Hecke algebra. 
In this paper, we prove Conjecture 1.1 when \(G=\mathrm{PGL}(2)\) or \(\mathrm{SL}(3)\) (Theorems 4 and 6) as well as in the following generic sense. Let \(\widetilde{C}\) be the quotient of \(\mathcal{H}^{\otimes S}\) by the left ideal generated by the Translation and Reflection relations. **Theorem 1**.: There is a surjective map \(\widetilde{C}\to C_{Eis}\) of \(\mathcal{H}^{\otimes S}\) modules such that rationalizing the action of translation operators at \(0\) yields an isomorphism \[\operatorname{Frac}(\mathbb{C}[\Lambda])\otimes_{\mathbb{C}[\Lambda]}\widetilde{C}\xrightarrow{\cong}\operatorname{Frac}(\mathbb{C}[\Lambda])\otimes_{\mathbb{C}[\Lambda]}C_{Eis}.\] The proof of Theorem 1 is given in Section 3.3.2. ### Acknowledgements I thank David Nadler for suggesting this problem and for providing extremely generous support. I also thank Zhiwei Yun for helpful discussions and for suggesting the connection to the functional equation for Eisenstein series. This work was partially supported by NSF grant DMS-1646385. ## 2. Background on Group Theory, Hecke Operators, and Eisenstein Series ### Operations with Functions In this paper we will compute pushforwards and pullbacks of functions on rational points of stacks. The set of rational points of an Artin stack, \(X\), over \(\mathbb{F}_{q}\), has a natural measure, given by \(\mu(\mathcal{E})=|\operatorname{Aut}(\mathcal{E})|^{-1}\). This endows the space, \(C(X)\), of (compactly supported) functions on the rational points with an inner product. Given a map \(f:X\to Y\), there is a pullback, \(f^{*}:C(Y)\to C(X)\), given by \(f^{*}F(x)=F(f(x))\). Pushforward \(f_{!}:C(X)\to C(Y)\) is the adjoint of pullback with respect to the inner product \[\int_{X(\mathbb{F}_{q})}d\mu_{X}F_{1}(x)f^{*}F_{2}(x)=\int_{Y(\mathbb{F}_{q})}d\mu_{Y}f_{!}F_{1}(y)F_{2}(y)\] **Example**.: A group homomorphism \(H\hookrightarrow G\) induces a map \(f:\operatorname{pt}/H\to\operatorname{pt}/G\). Identifying \(C(\operatorname{pt}/H)\cong\mathbb{C}\cong C(\operatorname{pt}/G)\), we have \(f^{*}=1\in\operatorname{End}(\mathbb{C})\) and \(f_{!}=[G(\mathbb{F}_{q}):H(\mathbb{F}_{q})]\in\operatorname{End}(\mathbb{C})\). **Example** (Base Change).: Given a Cartesian diagram, \(g^{*}v_{!}=f_{!}u^{*}\). A special case of the base change formula is that given \(f:X\to Y\), \[f_{!}F_{1}\otimes F_{2}=f_{!}(F_{1}\otimes f^{*}F_{2})\] where \(\otimes\) denotes pointwise multiplication of functions. **Example** (Finite Hecke Algebra).: For \(G\) a split reductive group over \(\mathbb{F}_{q}\) with Borel subgroup \(B\subset G\), the convolution product on \(C(B\backslash G/B)\) is realized through the following correspondence: \[B\backslash G/B\xleftarrow{\ \pi_{1}\ }B\backslash G\times_{B}G/B\xrightarrow{\ \pi_{2}\ }B\backslash G/B,\qquad\operatorname{prod}:B\backslash G\times_{B}G/B\to B\backslash G/B.\] For functions \(F_{1},F_{2}\in C(B\backslash G/B)\), \[F_{1}\cdot F_{2}=\operatorname{prod}_{!}(\pi_{1}^{*}F_{1}\otimes\pi_{2}^{*}F_{2})\] ### Group Theory Fix a reductive group \(G\) over \(\mathbb{F}_{q}\) and assume \(G\) has a split torus. Fix a choice of Borel \(B\subset G\) and let \(N\subset B\) be the unipotent radical and \(T=B/N\) the universal Cartan. \(\Lambda=\operatorname{Hom}(\mathbb{G}_{m},T)\) is the coweight lattice with \(R_{+}\subset\Lambda\) the positive coroots, and \(\Lambda^{\vee}=\operatorname{Hom}(T,\mathbb{G}_{m})\) the weight lattice with \(R_{+}^{\vee}\subset\Lambda^{\vee}\) the positive roots.
\(\rho\in\Lambda\) is half the sum of the elements of \(R_{+}\). \(\Lambda_{+}\subset\Lambda\) is the dominant cone. Let \(W\) denote the Weyl group and \(W^{aff}\cong W\ltimes\Lambda\) the (extended) affine Weyl group. Let \(\mathcal{B}\cong G/B\) denote the flag variety. \(G((t)):=\operatorname{Map}(\operatorname{Spec}\mathbb{F}_{q}((t)),G)\) is the loop group and \(G[[t]]:=\operatorname{Map}(\operatorname{Spec}\mathbb{F}_{q}[[t]],G)\) is the arc group. \(I\subset G[[t]]\) is the Iwahori subgroup, defined as the preimage of \(B\) under evaluation at zero \(\operatorname{ev}_{0}:G[[t]]\to G\). ### Affine Hecke Algebra The affine Hecke algebra is the following algebra of compactly supported functions under convolution. Normalize the Haar measure on \(G((t))\) so that \(I\) has unit measure. \[\mathcal{H}:=C\left[I\backslash G((t))/I\right].\] The points of \(I\backslash G((t))/I\) are indexed by elements of \(W^{aff}\). Given \(w\in W^{aff}\), let \(T_{w}\) denote the corresponding element of \(\mathcal{H}\). \(I\backslash G((t))/I\) classifies pairs of bundles on the formal disk \(\operatorname{Spec}(\mathbb{F}_{q}[[t]])\) along with an isomorphism of their restrictions away from zero. **Definition 2.1** (Translation Operator).: For \(\lambda\in\Lambda\), define the _translation operator_ \(J_{\lambda}=(T_{-\lambda_{1}})^{-1}T_{-\lambda_{2}}\), where \(\lambda=\lambda_{1}-\lambda_{2}\), with \(\lambda_{i}\in\Lambda_{+}\). **Remark**.: The definition does not depend on the choice of \(\lambda_{1},\lambda_{2}\). **Theorem 2** (Bernstein's Relations).: The operators \(T_{w}\), \(w\in W\), form a basis for the subalgebra \(\mathcal{H}^{fin}\), called the finite Hecke algebra. The relations are as follows: * \(T_{w_{1}}T_{w_{2}}=T_{w_{1}w_{2}}\) if \(\ell(w_{1}w_{2})=\ell(w_{1})+\ell(w_{2})\) * \(T_{s_{\alpha}}^{2}=(q-1)T_{s_{\alpha}}+q\) if \(s_{\alpha}\in W\) is simple The operators \(T_{s_{\alpha}}\) and \(J_{\lambda}\) for \(s_{\alpha}\in W\) simple and \(\lambda\in\Lambda\) satisfy \[J_{\lambda}T_{s_{\alpha}}=q^{-\check{\alpha}(\lambda)}T_{s_{\alpha}}J_{s_{\alpha}(\lambda)}+(q-1)\frac{J_{\lambda}-q^{-\check{\alpha}(\lambda)}J_{s_{\alpha}(\lambda)}}{1-qJ_{\alpha}}\] Proof.: See Proposition 3.6 of [3] for the original proof by Lusztig based on unpublished work of Bernstein. [2], [1], and [5] were also helpful references for the author. It follows that \(\lambda\mapsto J_{\lambda}\) is an injective homomorphism \(\mathbb{C}[\Lambda]\to\mathcal{H}\). Its image, \(A\), is a maximal commutative subalgebra. The geometric basis elements, \(\{T_{w}\}_{w\in W^{aff}}\), are partially ordered by the length function on \(W^{aff}\). Both the sets \(\{J_{\lambda}T_{w}\}_{w\in W,\lambda\in\Lambda}\) and \(\{T_{w}J_{\lambda}\}_{w\in W,\lambda\in\Lambda}\) are upper triangular with respect to the geometric basis, by Theorem 2. In particular, they are bases. ### Hecke Operators For \(s\in S\), the Hecke operators \(\mathcal{H}^{s}\) are constructed as follows. We construct a left action of \(\mathcal{H}\) on \(C_{Aut}\). Picking a uniformizer in the completed local ring at \(s\) defines a map \(\operatorname{Spec}(\mathbb{F}_{q}[[t]])\to\mathbb{P}^{1}\) sending the closed point to \(s\).
Consider the following correspondence: \[\operatorname{Bun}_{G}(\mathbb{P}^{1},S)\xleftarrow{\ \pi_{1}\ }\operatorname{Corr}^{s}\xrightarrow{\ \pi_{2}\ }\operatorname{Bun}_{G}(\mathbb{P}^{1},S),\qquad\operatorname{res}:\operatorname{Corr}^{s}\to I\backslash G((t))/I.\] \(\operatorname{Corr}^{s}\) classifies the data of a triple \((\mathcal{E}_{1},\mathcal{E}_{2},T)\) where \(\mathcal{E}_{1},\mathcal{E}_{2}\) are parabolic \(G\)-bundles on \(\mathbb{P}^{1}\) and \(T\) is an isomorphism of their restrictions away from \(s\). res is the restriction of the bundles along the map \(\operatorname{Spec}(\mathbb{F}_{q}[[t]])\to\mathbb{P}^{1}\). For \(A\in\mathcal{H}\), the Hecke operator \(A^{s}:C[\operatorname{Bun}_{G}(\mathbb{P}^{1},S)]\to C[\operatorname{Bun}_{G}(\mathbb{P}^{1},S)]\) is defined as \[A^{s}f=\pi_{2!}(\operatorname{res}^{*}A\otimes\pi_{1}{}^{*}f)\] The operator \(A^{s}\) is independent of the choice of uniformizer. For \(w\in W^{aff}\), the following is true: \[T_{w}^{s}f=\pi_{2!}\pi_{1}{}^{*}f,\] where \(\pi_{1},\pi_{2}\) denote the projections from \(\operatorname{Corr}_{w}^{s}\), the subspace of \(\operatorname{Corr}^{s}\) where \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\) restricted to a formal disk around \(s\) are in relative position \(w\). **Definition 2.2** (Simultaneous Modification at Marked Points).: For \(A\in\mathcal{H}\) and \(R\subset S\), \(A^{R}\) will denote the product of the Hecke operators \(A^{s}\) for \(s\in R\). \[A^{R}:=\prod_{s\in R}A^{s}\] This is a product of commuting operators so the order of the product does not matter. #### 2.4.1. Reflection Operators **Definition 2.3** (Reflection Operator).: For simple reflections \(s_{\alpha}\in W\) and \(s\in S\), define the _reflection operator_ \(\operatorname{Avg}^{s}_{s_{\alpha}}:=1+T^{s}_{s_{\alpha}}\). \(\operatorname{Avg}^{s}_{s_{\alpha}}\) has the following interpretation. Let \(P_{s_{\alpha}}\) denote the almost minimal parabolic corresponding to the simple coroot \(\alpha\). Let \(\operatorname{Bun}_{G}(\mathbb{P}^{1},S,s,s_{\alpha})\) be the moduli stack of \(G\)-bundles on \(\mathbb{P}^{1}\) with Borel reduction at \(S\setminus\{s\}\) and \(P_{s_{\alpha}}\) reduction at \(s\). For example, for \(G=GL_{n}\), it classifies pairs \((\mathcal{E},\{F_{p}\}_{p\in S})\), * \(\mathcal{E}\) is a rank \(n\) vector bundle on \(\mathbb{P}^{1}\) * For \(p\neq s\), \(F_{p}\) is a full flag in the fiber \(\mathcal{E}|_{p}\) * \(F_{s}\) is an almost full flag in the fiber \(\mathcal{E}|_{s}\), consisting of a space of each dimension except the one corresponding to \(s_{\alpha}\). There is a map \(\pi:\operatorname{Bun}_{G}(\mathbb{P}^{1},S)\to\operatorname{Bun}_{G}(\mathbb{P}^{1},S,s,s_{\alpha})\). For \(F\in C_{Aut}\), \[\operatorname{Avg}^{s}_{s_{\alpha}}\cdot F=\pi^{*}\pi_{!}F\] ### Pseudo-Eisenstein Series Given a compactly supported function \(f:\Lambda\to\mathbb{C}\), the pseudo-Eisenstein series \(\operatorname{Eis}_{f}\) is defined by the following induction diagram. \[\Lambda\otimes\operatorname{Pic}(\mathbb{P}^{1})\cong\operatorname{Bun}_{T}(\mathbb{P}^{1})\stackrel{{ p}}{{\leftarrow}}\operatorname{Bun}_{B}(\mathbb{P}^{1})\stackrel{{ q}}{{\to}}\operatorname{Bun}_{G}(\mathbb{P}^{1},S)\] \[\operatorname{Eis}_{f}=q_{!}p^{*}f\] \(p\) is the map associating the induced \(T\)-bundle to a \(B\)-bundle. \(q\) is the map that associates the induced \(G\)-bundle and remembers the \(B\) structure along \(S\). Define \(\operatorname{Bun}_{B}^{\lambda}(\mathbb{P}^{1})\) as the preimage of the component \(\lambda\in\operatorname{Bun}_{T}(\mathbb{P}^{1})\) and \(q_{\lambda}:\operatorname{Bun}_{B}^{\lambda}(\mathbb{P}^{1})\to\operatorname{Bun}_{G}(X,S)\) the restriction of \(q\).
Then \[\operatorname{Eis}_{\lambda}:=\operatorname{Eis}_{\underline{1}_{\lambda}}=q_{\lambda!}\underline{1}\] Pseudo-Eisenstein series form a subspace of \(C_{Aut}\). It is closed under spherical Hecke operators at \(p\notin S\) but not under affine Hecke operators.

**Definition 2.4** (Eisenstein Module).: The Eisenstein module, \(C_{Eis}\), is the subspace of \(C_{Aut}\) generated by the action of all affine Hecke operators on all pseudo-Eisenstein series.

The space of Eisenstein series is also closed under spherical Hecke operators.

#### 2.5.1. Compatibility of Eisenstein series and Translation

We describe the standard compatibility of Eisenstein induction with Hecke operators.

**Theorem 3**.: For compactly supported \(f:\Lambda\to\mathbb{C}\) and \(\mu\in\Lambda\), \[J^{s}_{\mu}\cdot\operatorname{Eis}_{f}=\operatorname{Eis}_{\mu\cdot f}\] where \(\mu\cdot f(\lambda)=f(\lambda-\mu)\).

Proof.: It suffices to show \(J^{s}_{\mu}\operatorname{Eis}_{\lambda}=\operatorname{Eis}_{\lambda+\mu}\) for \(\mu,\lambda\in\Lambda\) with \(\mu\) anti-dominant. In this case \(J_{\mu}=T_{\mu}\). We show that there is a diagram, where the left square is Cartesian and the upper left horizontal arrow is a homeomorphism. Assuming such a diagram exists, \[J_{\mu}^{s}\cdot\operatorname{Eis}_{\lambda}=\pi_{2!}\pi_{1}^{*}q_{\lambda!}\underline{1}=\pi_{2!}t_{2!}t_{1}^{*}\underline{1}=q_{\lambda+\mu!}\underline{1}=\operatorname{Eis}_{\lambda+\mu}\] The existence of such a diagram is shown in Lemma 2.4.4 of [4].

## 3. Example: G=PGL(2)

Fix \(G=\mathrm{PGL}(2)\). Identify \(\Lambda\cong\mathbb{Z}\) by \((t\mapsto\mathrm{diag}(t^{k},1))\mapsto k\). First, we describe the geometry of \(\mathrm{Bun}_{G}(\mathbb{P}^{1},S)\) and compute the finite Hecke action on \(C_{Aut}\) in the geometric basis of points of the moduli space. Then, we calculate the structure of \(C_{Aut}\) as an \(\mathcal{H}^{fin}\) trimodule. In this case \(C_{Aut}=C_{Eis}\). Finally, we will prove Theorem 4 characterizing \(C_{Aut}\) as an \(\mathcal{H}\) trimodule and confirm Conjecture 1.1 for \(\mathrm{PGL}(2)\).

### Finite Hecke Action

There is a unique simple reflection. \(\mathcal{H}^{fin}\) is generated by the operator \(\mathrm{Avg}=1+T_{s_{\alpha}}\), which satisfies the quadratic relation \(\mathrm{Avg}\cdot\mathrm{Avg}=(q+1)\mathrm{Avg}\). We compute the action of \(\mathcal{H}^{fin}\) at \(0\in S\). The formulas for the action at other points are completely analogous. We organize the calculation according to the following maps, given by forgetting parabolic structure. \[\mathrm{Bun}_{G}(\mathbb{P}^{1},S)\stackrel{{\pi^{0}}}{{\longrightarrow}}\mathrm{Bun}_{G}(\mathbb{P}^{1},\{1,\infty\})\to\mathrm{Bun}_{G}(\mathbb{P}^{1})\] Recall that \(\mathrm{Avg}^{0}=(\pi^{0})^{*}\pi^{0}{}_{!}\). We list the rational points of \(\mathrm{Bun}_{G}(\mathbb{P}^{1},S)\) and record the fibers of the map \(\pi^{0}\), so that we can compute the operator \(\mathrm{Avg}^{0}\). We organize the information by fibers of the projection to \(\mathrm{Bun}_{G}(\mathbb{P}^{1})\). There is a short exact sequence, \(1\to\mathbb{G}_{m}\to\mathrm{GL}_{2}\to G\to 1\), so by the vanishing of the Brauer group of a curve, \[\mathrm{Vect}_{2}(\mathbb{P}^{1})/\mathrm{Pic}(\mathbb{P}^{1})\cong\mathrm{Bun}_{G}(\mathbb{P}^{1})\] An object of \(\mathrm{Bun}_{G}(\mathbb{P}^{1})\) is represented by a rank 2 vector bundle, \(\mathcal{E}\), up to tensoring with a line bundle.
An object of \(\mathrm{Bun}_{G}(\mathbb{P}^{1},R)\), for \(R\subset S\), is represented by a rank 2 vector bundle, \(\mathcal{E}\), up to tensoring with a line bundle, and a line \(\ell_{s}\) in the fiber \(\mathcal{E}|_{s}\), for \(s\in R\).

#### 3.1.1. \(\mathcal{E}\cong\mathcal{O}\oplus\mathcal{O}\)

The first column records the automorphism group of the object. The next two columns record the poset of points of \(\mathrm{Bun}_{G}(\mathbb{P}^{1},S)\) and \(\mathrm{Bun}_{G}(\mathbb{P}^{1},\{1,\infty\})\), respectively. \(x\to y\) means \(y\) lies in the closure of \(x\). The fibers of \(\pi^{0}\) are indicated by color. [Table of points omitted.] Identify the fibers \(\mathcal{E}|_{s}\) for \(s\in S\). For \(R\subset S\), \(c_{0}(R)\) denotes the locus where \(\ell_{s}\) coincide for \(s\in R\). Similarly, for \(R\subset\{1,\infty\}\), \(c_{0}^{0}(R)\) denotes the locus where \(\ell_{s}\) coincide for \(s\in R\). \(\mathrm{Avg}^{0}\) acts as follows.

\[\mathrm{Avg}^{0}\underline{1}_{c_{0}(S)}=(\pi^{0})^{*}\pi_{!}^{0}\underline{1}_{c_{0}(S)}=(\pi^{0})^{*}\underline{1}_{c_{0}^{0}(1\infty)}\frac{|\mathrm{Aut}(c_{0}^{0}(1\infty))|}{|\mathrm{Aut}(c_{0}(S))|}=\underline{1}_{c_{0}(S)}+\underline{1}_{c_{0}(1\infty)}\]

\[\mathrm{Avg}^{0}\underline{1}_{c_{0}(1\infty)}=(\pi^{0})^{*}\pi_{!}^{0}\underline{1}_{c_{0}(1\infty)}=(\pi^{0})^{*}\underline{1}_{c_{0}^{0}(1\infty)}\frac{|\mathrm{Aut}(c_{0}^{0}(1\infty))|}{|\mathrm{Aut}(c_{0}(1\infty))|}=q\underline{1}_{c_{0}(S)}+q\underline{1}_{c_{0}(1\infty)}\]

\[\mathrm{Avg}^{0}\underline{1}_{c_{0}(01)}=(\pi^{0})^{*}\pi_{!}^{0}\underline{1}_{c_{0}(01)}=(\pi^{0})^{*}\underline{1}_{c_{0}^{0}(\emptyset)}\frac{|\mathrm{Aut}(c_{0}^{0}(\emptyset))|}{|\mathrm{Aut}(c_{0}(01))|}=\underline{1}_{c_{0}(01)}+\underline{1}_{c_{0}(0\infty)}+\underline{1}_{c_{0}(\emptyset)}\]

\[\mathrm{Avg}^{0}\underline{1}_{c_{0}(0\infty)}=(\pi^{0})^{*}\pi_{!}^{0}\underline{1}_{c_{0}(0\infty)}=(\pi^{0})^{*}\underline{1}_{c_{0}^{0}(\emptyset)}\frac{|\mathrm{Aut}(c_{0}^{0}(\emptyset))|}{|\mathrm{Aut}(c_{0}(0\infty))|}=\underline{1}_{c_{0}(01)}+\underline{1}_{c_{0}(0\infty)}+\underline{1}_{c_{0}(\emptyset)}\]

\[\mathrm{Avg}^{0}\underline{1}_{c_{0}(\emptyset)}=(\pi^{0})^{*}\pi_{!}^{0}\underline{1}_{c_{0}(\emptyset)}=(\pi^{0})^{*}\underline{1}_{c_{0}^{0}(\emptyset)}\frac{|\mathrm{Aut}(c_{0}^{0}(\emptyset))|}{|\mathrm{Aut}(c_{0}(\emptyset))|}=(q-1)\underline{1}_{c_{0}(01)}+(q-1)\underline{1}_{c_{0}(0\infty)}+(q-1)\underline{1}_{c_{0}(\emptyset)}\]
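The coefficients \(1\), \(q\), and \(q-1\) above are ratios of automorphism group orders. The short sketch below is a sanity check rather than part of the argument; the assignment of \(B(\mathbb{F}_{q})\), \(T(\mathbb{F}_{q})\), and the trivial group to the various loci is an assumption of the sketch, made to match the table of automorphism groups.

```python
from sympy import symbols, simplify

q = symbols('q')

# assumed orders of automorphism groups for PGL(2) over F_q
aut = {
    'B': q * (q - 1),   # Borel: stabilizer of a single line, trivial bundle
    'T': q - 1,         # torus: stabilizer of two distinct lines
    '1': 1,             # trivial: stabilizer of three distinct lines
}

def coeff(image, source):
    # groupoid pushforward coefficient: |Aut(image point)| / |Aut(source point)|
    return simplify(aut[image] / aut[source])

print(coeff('B', 'B'))   # Avg^0 on 1_{c_0(S)}       -> 1
print(coeff('B', 'T'))   # Avg^0 on 1_{c_0(1 inf)}   -> q
print(coeff('T', 'T'))   # Avg^0 on 1_{c_0(01)}      -> 1
print(coeff('T', '1'))   # Avg^0 on 1_{c_0(empty)}   -> q - 1
```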
#### 3.1.2. \(\mathcal{E}\cong\mathcal{O}(1)\oplus\mathcal{O}\)

We use the same conventions as before. [Table of points over \(\mathcal{E}\cong\mathcal{O}(1)\oplus\mathcal{O}\), the \(\mathrm{Avg}^{0}\) formulas for this case, and the statement of Proposition 3.1 with its relation (1); its relation (2) reads:]

\[\mathrm{Avg}^{s}\underline{1}_{c_{1}(\emptyset)}=\mathrm{Avg}^{s}T_{s_{\alpha}}^{S\setminus\{s\}}\underline{1}_{c_{1}(S)},\text{ for }s\in S \tag{2}\]

Proof.: For \(k\geq 2\) and \(R\subset S\), \[T_{s_{\alpha}}^{R}\underline{1}_{c_{k}(S)}=\underline{1}_{c_{k}(S\setminus R)}.\] In particular, \(C_{Aut}^{k}\) is freely generated by \(c_{k}(S)\).
\(C_{Aut}^{0}\) is generated by \(\underline{1}_{c_{0}(S)}\). Check that \(\mathrm{Avg}^{\{0,1\}}\underline{1}_{c_{0}(S)}\) is the constant function on the locus where the bundle is trivial. Relation 1 follows. It is easy to see that there are no other relations. We check that \(c_{1}(*)\) and \(c_{1}(S)\) generate \(C_{Aut}^{1}\), with a relation given by Equation 2. First, check that Equation 2 is true using the calculations from the previous section. To see that the relations are sufficient, observe that \(\underline{1}_{c_{1}(S)}\) generates a free rank one \((\mathcal{H}^{fin})^{\otimes S}\) submodule of \(C_{Aut}^{1}\) consisting of functions, \(f\), satisfying \(f(c_{1}(*))=f(c_{1}(\emptyset))\). This submodule has codimension one in \(C_{Aut}^{1}\).

### Hecke Trimodule Structure

We state and prove Theorem 4, confirming Conjecture 1.1 in this case. First we identify the Eisenstein functions.

#### 3.3.1. Eisenstein Objects

**Proposition 3.2** (Eisenstein Objects).: \(\mathrm{Eis}_{k}=\underline{1}_{c_{k}(S)}\) for \(k\geq 0\) and \(\mathrm{Eis}_{-1}=\underline{1}_{c_{1}(\emptyset)}\).

Proof.: Recall the induction diagram \[\mathrm{Bun}_{T}(\mathbb{P}^{1})\leftarrow\mathrm{Bun}_{B}(\mathbb{P}^{1})\rightarrow\mathrm{Bun}_{G}(\mathbb{P}^{1},S)\] Objects of \(\mathrm{Bun}_{B}(\mathbb{P}^{1})\) are represented by pairs \((\mathcal{L},\mathcal{E})\), \(\mathcal{E}\) a rank \(2\) vector bundle, and \(\mathcal{L}\subset\mathcal{E}\) a rank \(1\) subbundle, up to tensoring with a line bundle. The fiber above \(k\in\Lambda\), \(\mathrm{Bun}_{B}^{k}(\mathbb{P}^{1})\), is the locus of pairs \((\mathcal{L},\mathcal{E})\), where \(\mathcal{L}\cong\mathcal{O}(k)\) and \(\mathcal{E}/\mathcal{L}\cong\mathcal{O}\). \(\mathrm{Eis}_{k}\) is the pushforward of the constant function on \(\mathrm{Bun}_{B}^{k}(\mathbb{P}^{1})\). Fix \(k\geq-1\). We show that the image of \(\mathrm{Bun}_{B}^{k}(\mathbb{P}^{1})\) in \(\mathrm{Bun}_{G}(\mathbb{P}^{1},S)\) is a single point. Suppose we have a short exact sequence of vector bundles, \[\mathcal{O}(k)\rightarrow\mathcal{E}\rightarrow\mathcal{O},\] \(\mathrm{Ext}^{1}(\mathcal{O}(-k),\mathcal{O})=0\), so the short exact sequence splits. Therefore, \(\mathrm{Bun}_{B}^{k}(\mathbb{P}^{1})\) has a single point. If \(k\geq 0\) the image of that point in \(\mathrm{Bun}_{G}(\mathbb{P}^{1},S)\) is \(c_{k}(S)\), and if \(k=-1\), the image is \(c_{1}(\emptyset)\). There are three cases.

1. For \(k>0\), comparing stabilisers, we find \(\mathrm{Aut}((\mathcal{L},\mathcal{E}))\cong\mathbb{G}_{m}\ltimes\mathbb{G}_{a}^{k+1}\cong\mathrm{Aut}(c_{k}(S))\). It follows that \(\mathrm{Eis}_{k}=\underline{1}_{c_{k}(S)}\).
2. For \(k=0\), \(\mathrm{Aut}((\mathcal{L},\mathcal{E}))\cong B\cong\mathrm{Aut}(c_{0}(S))\). It follows that \(\mathrm{Eis}_{0}=\underline{1}_{c_{0}(S)}\).
3. Finally, for \(k=-1\), \(\mathrm{Aut}((\mathcal{L},\mathcal{E}))\cong T\cong\mathrm{Aut}(c_{1}(\emptyset))\). \(\mathrm{Eis}_{-1}=\underline{1}_{c_{1}(\emptyset)}\).

It is not necessary for the following calculations but one can calculate \(\mathrm{Eis}_{k}\) for \(k\leq-2\). For example, \[\mathrm{Eis}_{-2}=\underline{1}_{c_{0}(\emptyset)}+\underline{1}_{c_{2}(\emptyset)}.\] In general, \(\mathrm{Eis}_{k}\) for \(k\leq-2\) is nonzero only on points of moduli space where the bundle is \(\mathcal{E}\cong\mathcal{O}(r)\oplus\mathcal{O}\) with \(0\leq r\leq-k\) of the same parity as \(k\).

#### 3.3.2. Main Theorem
**Theorem 4**.: \(C_{Aut}\) is the \(\mathcal{H}^{\otimes S}\) module generated by \(\mathrm{Eis}_{0}\) with the relations \[J_{k}^{0}\mathrm{Eis}_{0}=J_{k}^{1}\mathrm{Eis}_{0}=J_{k}^{\infty}\mathrm{Eis}_{0}\text{ for }k\in\Lambda \tag{3}\] \[\mathrm{Avg}^{\{0,1\}}\mathrm{Eis}_{0}=\mathrm{Avg}^{\{0,\infty\}}\mathrm{Eis}_{0}=\mathrm{Avg}^{\{1,\infty\}}\mathrm{Eis}_{0} \tag{4}\]

Proof.: By Proposition 3.1, \(C_{Aut}\) is generated by Eisenstein functions under Hecke operators. By Theorem 3 all Eisenstein functions are generated by \(\mathrm{Eis}_{0}\) under translation Hecke operators. Therefore, \(C_{Aut}\) is generated by \(\mathrm{Eis}_{0}\). We check that the stated relations hold. Equation 3 is a consequence of Theorem 3 on compatibility of translation Hecke operators with Eisenstein induction. By Proposition 3.2, \(\mathrm{Eis}_{0}=\underline{1}_{c_{0}(S)}\), so Equation 4 follows from Equation 1 of Proposition 3.1. We show that there are no other relations. Let \(\widetilde{C}\) denote the quotient of \(\mathcal{H}^{\otimes S}\) by the left ideal generated by the relations stated in Equations 3 and 4. There is a surjection of \(\mathcal{H}^{\otimes S}\) modules \[\widetilde{C}\to C_{Aut}\] We will show that this is an injective map of \((\mathcal{H}^{fin})^{\otimes S}\) modules. Let \(\widetilde{C}_{+}\subset\widetilde{C}\) be the \((\mathcal{H}^{fin})^{\otimes S}\) submodule generated by \(\{J_{k}^{0}\}_{k\geq-1}\). By Proposition 3.1 and Proposition 3.2, it suffices to show that the following are true in \(\widetilde{C}\): \[\mathrm{Avg}^{s}J_{-1}^{0}=\mathrm{Avg}^{s}T_{s_{\alpha}}^{S\setminus\{s\}}J_{1},\text{ for }s\in S \tag{5}\] \[J_{k}\in\widetilde{C}_{+}\text{ for }k\leq-2 \tag{6}\] We have omitted the superscript, \(s\in S\), on the operators \(J_{k}\) because of the defining relations of \(\widetilde{C}\). The formulas follow from Proposition 5.1.

## 4. Proof of Theorem 1

For this section, assume \(G\) is such that \(\rho\in\Lambda\).

**Definition 4.1**.: The _algebraic_ Eisenstein module, \(\widetilde{C}\), is the quotient of \(\mathcal{H}^{\otimes S}\) by the left ideal generated by the relations \[J_{\lambda}^{0}=J_{\lambda}^{1}=J_{\lambda}^{\infty}\text{ for }\lambda\in\Lambda\] \[\mathrm{Avg}_{s_{\alpha}}^{\{0,1\}}=\mathrm{Avg}_{s_{\alpha}}^{\{0,\infty\}}=\mathrm{Avg}_{s_{\alpha}}^{\{1,\infty\}}\text{ for simple }s_{\alpha}\in W\]

[Delete definition of \(C_{Eis}\) everywhere; maybe put definition in background section?]

**Theorem 5**.: There is a surjective map of \(\mathcal{H}^{\otimes S}\) modules, \(\widetilde{C}\to C_{Eis}\), given by \(1\mapsto\mathrm{Eis}_{0}\).

Proof.: This is equivalent to checking the Translation and Reflection relations on \(C_{Eis}\). By Theorem 3, \(J_{\lambda}^{s}\mathrm{Eis}_{0}=\mathrm{Eis}_{\lambda}\) for any \(s\in S\) and \(\lambda\in\Lambda\). In particular, \(J_{\lambda}^{s}\mathrm{Eis}_{0}\) is independent of \(s\). Fix a simple coroot \(\alpha\). Let \(P_{s_{\alpha}}\) be the almost minimal parabolic associated with \(s_{\alpha}\). For \(R\subset S\), let \(\mathrm{Bun}_{G}(\mathbb{P}^{1},S,R,s_{\alpha})\) denote the moduli stack of \(G\)-bundles on \(\mathbb{P}^{1}\) with Borel reductions at \(S\setminus R\) and \(P_{s_{\alpha}}\) reduction at \(R\). There is a map \(\pi:\mathrm{Bun}_{G}(\mathbb{P}^{1},S)\to\mathrm{Bun}_{G}(\mathbb{P}^{1},S,\{0,1\},s_{\alpha})\) forgetting parabolic structure at \(0\) and \(1\).
Consider the following diagram, where the square is Cartesian. For \(F\in C_{Aut}\), \[\mathrm{Avg}_{s_{\alpha}}^{\{0,1\}}F=\mathrm{Avg}_{s_{\alpha}}^{1}\mathrm{Avg}_{s_{\alpha}}^{0}F=\pi_{1}^{*}\pi_{1!}\pi_{0}^{*}\pi_{0!}F=\pi^{*}\pi_{!}F\] There is a point \(\mathrm{pt}/B\to\mathrm{Bun}_{G}(\mathbb{P}^{1},S)\) classifying trivial bundles with the same Borel reduction at all points of \(S\). There is also a point \(\mathrm{pt}/B\to\mathrm{Bun}_{G}(\mathbb{P}^{1},S,\{0,1\},s_{\alpha})\) classifying trivial bundles with the same parabolic structure at all points of \(S\) (and the unique, up to automorphism, further reduction of the structure group to \(B\) at \(\infty\)). The following diagram commutes: \[\mathrm{pt}/B\to\mathrm{Bun}_{G}(\mathbb{P}^{1},S)\xrightarrow{\ \pi\ }\mathrm{Bun}_{G}(\mathbb{P}^{1},S,\{0,1\},s_{\alpha}),\] that is, the composite is the point \(\mathrm{pt}/B\to\mathrm{Bun}_{G}(\mathbb{P}^{1},S,\{0,1\},s_{\alpha})\) just described. Therefore, \[\pi_{!}\mathrm{Eis}_{0}=\pi_{!}\underline{1}_{\mathrm{pt}/B}=\underline{1}_{\mathrm{pt}/B}\] The fiber above \(\mathrm{pt}/B\) of \(\pi\) is the locus where the bundle is trivial and the Borel reductions at the points of \(S\) have the same \(P_{s_{\alpha}}\) reduction. \(\mathrm{Avg}_{s_{\alpha}}^{\{0,1\}}\mathrm{Eis}_{0}=\pi^{*}\underline{1}_{\mathrm{pt}/B}\) is the constant function on this locus. By symmetry, we see that \(\mathrm{Avg}_{s_{\alpha}}^{S\setminus\{s\}}\mathrm{Eis}_{0}\) is independent of \(s\).

\(\widetilde{C}\) and \(C_{Eis}\) are \(\mathcal{H}^{\otimes S}\) modules. By restriction of scalars through \(\mathbb{C}[\Lambda]\to\mathcal{H}^{0}\), these become modules over the algebra of translation operators at \(0\). We make some observations about these modules.

**Proposition 4.1**.: \(C_{Eis}\) is finitely generated over \(\mathbb{C}[\Lambda]\).

Proof.: We show that \(C_{Eis}\) is generated by the \(|W|^{3}\) elements \(T_{w_{0}}^{0}T_{w_{1}}^{1}T_{w_{\infty}}^{\infty}\mathrm{Eis}_{0}\) for \(w_{0},w_{1},w_{\infty}\in W\). Recall that \(\{T_{w}J_{\lambda}\}_{w\in W,\lambda\in\Lambda}\) is a basis for \(\mathcal{H}\). Therefore, the Eisenstein module is spanned by the elements \[T_{w_{0}}^{0}J_{\lambda_{0}}^{0}T_{w_{1}}^{1}J_{\lambda_{1}}^{1}T_{w_{\infty}}^{\infty}J_{\lambda_{\infty}}^{\infty}\mathrm{Eis}_{0},\] \(w_{0},w_{1},w_{\infty}\in W\) and \(\lambda_{0},\lambda_{1},\lambda_{\infty}\in\Lambda\). By the translation relation \[T_{w_{0}}^{0}J_{\lambda_{0}}^{0}T_{w_{1}}^{1}J_{\lambda_{1}}^{1}T_{w_{\infty}}^{\infty}J_{\lambda_{\infty}}^{\infty}\mathrm{Eis}_{0}=T_{w_{0}}^{0}T_{w_{1}}^{1}T_{w_{\infty}}^{\infty}J_{\lambda}^{0}\mathrm{Eis}_{0},\] where \(\lambda=\lambda_{0}+\lambda_{1}+\lambda_{\infty}\). Because \(\{J_{\lambda}T_{w}\}_{w\in W,\lambda\in\Lambda}\) is another basis for \(\mathcal{H}\), \(C_{Eis}\) is spanned by functions \[J_{\lambda}^{0}T_{w_{0}}^{0}T_{w_{1}}^{1}T_{w_{\infty}}^{\infty}\mathrm{Eis}_{0}.\] In particular, \(C_{Eis}\) is generated over \(\mathbb{C}[\Lambda]\) by the functions \(T_{w_{0}}^{0}T_{w_{1}}^{1}T_{w_{\infty}}^{\infty}\mathrm{Eis}_{0}\).

**Proposition 4.2**.: \(C_{Eis}\) contains a free \(\mathbb{C}[\Lambda]\) submodule of rank \(|W|^{2}\).

Proof.: We claim that the \(|W|^{2}\) elements \(T_{w_{1}}^{1}T_{w_{\infty}}^{\infty}\mathrm{Eis}_{0}\) are independent over \(\mathbb{C}[\Lambda]\). Suppose there is some nontrivial finite linear combination \[\sum_{i}c_{i}J_{\lambda_{i}}^{0}T_{w_{1,i}}^{1}T_{w_{\infty,i}}^{\infty}\mathrm{Eis}_{0}=0\] Pick \(\mu\in\Lambda\) such that \(\mu+\lambda_{i}-\rho\in\Lambda_{+}\) for all \(i\).
\(J_{\mu}\) is invertible, so \[\sum_{i}c_{i}J_{\lambda_{i}}^{0}T_{w_{1,i}}^{1}T_{w_{\infty,i}}^{\infty}\mathrm{Eis}_{0}=0\iff\sum_{i}c_{i}J_{\mu+\lambda_{i}}^{0}T_{w_{1,i}}^{1}T_{w_{\infty,i}}^{\infty}\mathrm{Eis}_{0}=0\iff\sum_{i}c_{i}T_{w_{1,i}}^{1}T_{w_{\infty,i}}^{\infty}\mathrm{Eis}_{\mu+\lambda_{i}}=0\] In particular, it suffices to show that the functions \(T_{w_{1}}^{1}T_{w_{\infty}}^{\infty}\mathrm{Eis}_{\lambda}\), for \(w_{1},w_{\infty}\in W\) and \(\lambda-\rho\in\Lambda_{+}\), are linearly independent. Identify isomorphism classes of \(G\)-bundles on \(\mathbb{P}^{1}\) with \(W\backslash W^{aff}/W\cong\Lambda_{+}\). If \(\mathcal{E}_{\lambda}\) is a \(G\) bundle corresponding to \(\lambda\in\Lambda_{+}\) such that \(\lambda-\rho\in\Lambda_{+}\), then there is a \(B\)-bundle \(\mathcal{E}_{B,\lambda}\), stable under \(\mathrm{Aut}(\mathcal{E}_{\lambda})\). In particular, for \(s\in S\) there is a flag \(F_{s}\subset\mathcal{E}_{\lambda}|_{s}\) that is stable under \(\mathrm{Aut}(\mathcal{E}_{\lambda})\). The function \(T_{w_{1}}^{1}T_{w_{\infty}}^{\infty}\mathrm{Eis}_{\lambda}\) is supported only on points of the locus classifying parabolic bundles \((\mathcal{E},\{F_{s}^{\prime}\}_{s\in S})\) where \(\mathcal{E}\cong\mathcal{E}_{\lambda}\) and the parabolic structure \(F_{s}^{\prime}\) at \(s\in\{1,\infty\}\) is in relative position \(w_{s}\) to \(F_{s}\).

**Remark**.: If \(\lambda-2\rho\in\Lambda_{+}\), then the isomorphism class of an object in \(\mathrm{Bun}_{G}(\mathbb{P}^{1},S)\) with underlying bundle \(\mathcal{E}_{\lambda}\) is determined by the relative positions, for \(s\in S\), of the parabolic structure \(F_{s}^{\prime}\) to \(F_{s}\). We haven't proved this observation as it isn't needed for any of our results.

**Conjecture 4.1**.: \(\widetilde{C}\) is free of rank \(|W|^{2}\) over \(\mathbb{C}[\Lambda]\).

**Example**.: Conjecture 4.1 is true for \(G=\mathrm{PGL}(2)\). \(C_{Eis}\) is generated over \(\mathbb{C}[\Lambda]\) by the following functions: \[\mathrm{Eis}_{0},T^{1}\mathrm{Eis}_{0},T^{\infty}\mathrm{Eis}_{0},T^{\{1,\infty\}}\mathrm{Eis}_{0},T^{0}\mathrm{Eis}_{0}\] The first four generators are independent over \(\mathbb{C}[\Lambda]\). One can check that \[(q^{2}J_{2}^{0}-1)(T^{1}T^{\infty}-qT^{0})=(q-1)(T^{1}-q)(T^{\infty}-q) \tag{7}\] Therefore, \(\{\mathrm{Eis}_{0},T^{1}\mathrm{Eis}_{0},T^{\infty}\mathrm{Eis}_{0},(T^{\{1,\infty\}}-qT^{0})\mathrm{Eis}_{0}\}\) is a basis over \(\mathbb{C}[\Lambda]\).

Conjecture 4.1 together with Propositions 4.1 and 4.2 implies that \(\widetilde{C}\to C_{Eis}\) is an isomorphism. We show that Conjecture 4.1 is generically true over \(\mathbb{C}[\Lambda]\).

**Proposition 4.3**.: \(\mathrm{Frac}(\mathbb{C}[\Lambda])\otimes_{\mathbb{C}[\Lambda]}\widetilde{C}\) has dimension \(|W|^{2}\) over \(\mathrm{Frac}(\mathbb{C}[\Lambda])\).

Let us postpone the proof of Proposition 4.3 briefly. It will be easier to filter the vector space \(\widetilde{C}\) and work with the associated graded vector space. \(\mathcal{H}\) is filtered by length \(\ell:W\to\mathbb{Z}_{\geq 0}\). For \(i\in\mathbb{Z}_{\geq 0}\), the \(i\)th filtered component \(F^{i}(\mathcal{H})\subset\mathcal{H}\) is spanned by \(T_{w}J_{\lambda}\) for \(w\in W\) with \(\ell(w)\leq i\) and \(\lambda\in\Lambda\). \(F^{0}(\mathcal{H})\cong\mathbb{C}[\Lambda]\) is the subalgebra of translation operators. Note that \(F^{i}(\mathcal{H})\) is also spanned by \(J_{\lambda}T_{w}\) for \(w\in W\) with \(\ell(w)\leq i\) and \(\lambda\in\Lambda\).
In general, \[F^{i}(\mathcal{H})\cdot F^{j}(\mathcal{H})\subset F^{i+j}(\mathcal{H})\ \forall\ i,j\in\mathbb{Z}_{\geq 0}\] We filter \(\widetilde{C}\) so that the following are true:

1. \(\widetilde{C}\) is a filtered module for the filtered algebra \(\mathcal{H}^{0}\) of Hecke operators at \(0\). That is, \[F^{i}(\mathcal{H}^{0})\cdot F^{j}(\widetilde{C})\subset F^{i+j}(\widetilde{C}),\ \forall\ i,j\in\mathbb{Z}_{\geq 0}\]
2. The filtration on \(\widetilde{C}\) is preserved by Hecke operators at \(S\setminus\{0\}\). That is, for \(s\in S\setminus\{0\}\), \[F^{i}(\mathcal{H}^{s})\cdot F^{j}(\widetilde{C})\subset F^{j}(\widetilde{C}),\ \forall\ i,j\in\mathbb{Z}_{\geq 0}\]

Note that a consequence of the first requirement is that the filtered components of \(\widetilde{C}\) are modules for the algebra of translation operators at \(0\), \(\mathbb{C}[\Lambda]\).

**Definition 4.2** (Filtration of \(\widetilde{C}\)).: The \(i\)th filtered component \(F^{i}(\widetilde{C})\subset\widetilde{C}\) is spanned by \(A^{0}T^{1}_{w_{1}}T^{\infty}_{w_{\infty}}\), for \(w_{1},w_{\infty}\in W\) and \(A\in F^{i}(\mathcal{H})\). Alternatively, it is spanned by \(T^{0}_{w_{0}}T^{1}_{w_{1}}T^{\infty}_{w_{\infty}}J_{\lambda}\) for \(\lambda\in\Lambda\) and \(w_{0},w_{1},w_{\infty}\in W\) with \(\ell(w_{0})\leq i\).

The first requirement on the filtration of \(\widetilde{C}\) is automatically satisfied by construction. The second condition is also satisfied because of the translation relation. Now we prove Proposition 4.3.

Proof of Proposition 4.3.: After rationalization, we have the isomorphism \[\mathrm{Frac}(\mathbb{C}[\Lambda])\otimes_{\mathbb{C}[\Lambda]}\widetilde{C}\cong\bigoplus_{i}\mathrm{Frac}(\mathbb{C}[\Lambda])\otimes_{\mathbb{C}[\Lambda]}\mathrm{Gr}^{i}(\widetilde{C})\] \(\mathrm{Gr}^{i}(\widetilde{C})\) is generated as a \(\mathbb{C}[\Lambda]\otimes\mathcal{H}^{\otimes\{1,\infty\}}\) module by \(T^{0}_{w}\), for \(w\in W\) with \(\ell(w)=i\). Therefore, we need only show that for \(w\in W\) of length \(\ell(w)=i\), there is \(A\in\mathbb{C}[\Lambda]\) such that \[A^{0}\cdot T^{0}_{w}\in F^{i-1}(\widetilde{C}).\] Pick a simple reflection \(s_{\alpha}\) such that \(\ell(ws_{\alpha})=i-1\). Pick \(\lambda\in\Lambda\) such that \(\langle\check{\alpha},\lambda\rangle=1\). Start with the equation from Proposition 5.2, \[T^{0}_{s_{\alpha}}(J_{\lambda}-J_{s_{\alpha}(\lambda)})\in F^{0}(\widetilde{C})\] \[J^{1}_{-s_{\alpha}(\lambda)}T^{0}_{s_{\alpha}}(J_{\lambda}-J_{s_{\alpha}(\lambda)})\in F^{0}(\widetilde{C})\] \[T^{0}_{s_{\alpha}}(J_{\alpha}-1)\in F^{0}(\widetilde{C})\] \[T^{0}_{w}(J_{\alpha}-1)\in F^{i-1}(\widetilde{C})\] Observe that for some integer \(n\), \(T_{w}J_{\alpha}-q^{n}J_{w\cdot\alpha}T_{w}\in\mathbb{C}[\Lambda]\), so \[(q^{n}J^{0}_{w\cdot\alpha}-1)T^{0}_{w}\in F^{i-1}(\widetilde{C})\]

**Remark**.: \(w\cdot\alpha\) is always a negative coroot. \(n\) is given by the explicit formula \(n=\langle\check{\rho},w\cdot\alpha\rangle-1\). For Proposition 4.3 we only needed to invert the polynomial \[\prod_{\alpha\in\mathcal{R}_{+}}\left(q^{\langle\rho,\alpha\rangle+1}J_{\alpha}-1\right)\] This is not the standard discriminant polynomial. In particular, \(q^{\langle\rho,\alpha\rangle+1}J_{\alpha}-1\) is not homogeneous with respect to the natural \(q\)-twisted \(\mathbb{G}_{m}\) action on \(\mathbb{C}[\Lambda]\).

Theorem 1 follows from Propositions 4.1, 4.2, and 4.3.

## 5.
Some Formulas for Algebraic Eisenstein Module In this section we prove some formulas that hold in the module \(\widetilde{C}\) formally generated over \(\mathcal{H}^{\otimes S}\) by one generator subject to the translation and reflection relations. We have postponed these calculations to this section as they don't fit the flow of the arguments where they are used. It is helpful to first understand the \(G=\mathrm{PGL}(2)\) example. **Proposition 5.1** (Functional Equation for Algebraic Eisenstein Module).: Let \(\widetilde{C}\) be the quotient of \(\mathcal{H}^{\otimes S}\) by the left ideal generated by relations: \[J^{0}_{\lambda}=J^{1}_{\lambda}=J^{\infty}_{\lambda}\text{ for }\lambda\in\Lambda\] \[\mathrm{Avg}^{\{0,1\}}_{s_{\alpha}}=\mathrm{Avg}^{\{0,\infty\}}_{s_{\alpha}}= \mathrm{Avg}^{\{1,\infty\}}_{s_{\alpha}}\text{ for simple }s_{\alpha}\in W\] Assume that the map \(\check{\alpha}:\Lambda\to\mathbb{Z}\) given by \(\lambda\mapsto\langle\check{\alpha},\lambda\rangle\) is surjective. Then, for any simple reflection \(s_{\alpha}\), \[\mathrm{Avg}^{s}_{s_{\alpha}}J_{\lambda}=\mathrm{Avg}^{s}_{s_{\alpha}}T^{S \setminus\{s\}}_{s_{\alpha}}J_{s_{\alpha}(\lambda)}\text{ if }\langle\check{\alpha}, \lambda\rangle=-1\] \[J_{\lambda}\in\mathrm{Span}_{(\mathcal{H}^{fin})^{\otimes S}}\{J_{\mu}\}_{ \mu\in R(\lambda,\alpha)}\text{ if }\langle\check{\alpha},\lambda\rangle\leq-2\] where \(R(\lambda,\alpha)\subset\Lambda\) consists of coweights \(\mu\), such that \(\mu-\lambda\) is an integral multiple of \(\alpha\) and \(-1\leq\langle\check{\alpha},\mu\rangle\leq-\langle\check{\alpha},\lambda\rangle\). Proof.: Fix the simple coroot \(\alpha\) and let \(T:=T_{s_{\alpha}}\), \(\mathrm{Avg}:=\mathrm{Avg}_{s_{\alpha}}\).Fix \(\lambda\) so that \(\langle\check{\alpha},\lambda\rangle=1\). Start with the reflection relation \[T^{1}\mathrm{Avg}^{0}=T^{\infty}\mathrm{Avg}^{0}\] \[T^{1}J^{1}_{\lambda}T^{1}\mathrm{Avg}^{0}=T^{1}J^{1}_{\lambda}T^{\infty} \mathrm{Avg}^{0}\] Observe that \(TJ_{\lambda}T=J_{\lambda-\alpha}\). \[\mathrm{Avg}^{0}J_{\lambda-\alpha}=\mathrm{Avg}^{0}T^{1\infty}J_{\lambda}\] This proves the first part of the proposition. Continuing with the previous equality \[T^{0}J^{0}_{\lambda}\mathrm{Avg}^{0}J_{\lambda-\alpha}=T^{0}J^{0}_{\lambda} \mathrm{Avg}^{0}T^{1\infty}J_{\lambda}\] \[J_{2\lambda-2\alpha}+T^{0}J_{2\lambda-\alpha}=T^{S}J_{2\lambda}+T^{1\infty}J_{ 2\lambda-\alpha}\] \[J_{2\lambda-2\alpha}=T^{S}J_{2\lambda}+(T^{1\infty}-T^{0})J_{2\lambda-\alpha}\] Now, let \(\lambda^{\prime}\in\Lambda\) be such that \(\langle\check{\alpha},\lambda^{\prime}\rangle\leq-2\). Define \(\mu:=\lambda^{\prime}-2\lambda\) and \(n:=-\langle\check{\alpha},\mu\rangle\in\mathbb{Z}_{\geq 0}\). \[J_{\lambda^{\prime}}=J_{\mu}^{0}T^{S}J_{2\lambda}+J_{\mu}^{0}(T^{1\infty}-T^{0})J_{ 2\lambda-\alpha}\in\operatorname{Span}_{(\mathcal{H}^{fin})^{\otimes S}}\{J_{ \lambda^{\prime}+k\alpha}\}_{k=1}^{n}\] The second part of the proposition follows by induction on \(n\). **Proposition 5.2**.: Let \(\widetilde{C}\) be as in Proposition 5.1. If \(\lambda\in\Lambda\) such that \(\langle\check{\alpha},\lambda\rangle=1\), then the following is true in \(\widetilde{C}\): \[T_{s_{\alpha}}^{0}(J_{\lambda}-J_{s_{\alpha}(\lambda)})=-T_{s_{\alpha}}^{\{1,\infty\}}(J_{\lambda}-q^{-1}J_{s_{\alpha}(\lambda)})-(1+T_{s_{\alpha}}^{1}+T_{ s_{\alpha}}^{\infty})q^{-1/2}J_{s_{\alpha}(\lambda)}\] Proof.: For ease of notation, let \(T:=T_{s_{\alpha}}\). 
Introduce the operator \(D\in\mathcal{H}\), \[D:=q^{1/2}J_{\lambda}-q^{-1/2}J_{s_{\alpha}(\lambda)}\] Observe that \[DT=-TD+\frac{2(q-1)D}{1-qJ_{\alpha}}=-TD-2(q-1)q^{-1/2}J_{s_{\alpha}(\lambda)} \tag{8}\] Start with the reflection relation. \[(1+T^{1})(T^{0}-T^{\infty})=0\] \[\implies(D^{1}-D^{0})(1+T^{1})(T^{0}-T^{\infty})=0\] Using Equation 8 to move all \(D\) operators to the right and simplifying we obtain the following. In light of the translation relation, the superscript is omitted from all translation that appear as the rightmost term of an expression. \[(T^{0}+T^{\{1,\infty\}})D=(1+T^{1}+T^{\infty}-T^{0})\frac{(q-1)D}{1-qJ_{\alpha}}\] \[\implies T^{0}\left(D+\frac{(q-1)D}{1-qJ_{\alpha}}\right)=-T^{1}T^{\infty}D+(1+ T^{1}+T^{\infty})\frac{(q-1)D}{1-qJ_{\alpha}}\] \[\implies q^{1/2}T^{0}(J_{\lambda}-J_{s_{\alpha}(\lambda)})=-T^{1}T^{\infty}D+( 1+T^{1}+T^{\infty})\frac{(q-1)D}{1-qJ_{\alpha}}\] ## 6. Example: G=SL(3) This section has been included to provide some evidence that the conjecture is true and give some intuition for the general structure of \(C_{Eis}\). Fix \(G=\operatorname{SL}(3)\) for this section. We will prove Theorem 6 verifying Conjecture 1.1 in this case. **Theorem 6**.: Conjecture 1.1 is true when \(G=\operatorname{SL}(3)\). Our approach is to study the Eisenstein module as a finite Hecke trimodule. It is not expected that this approach will generalize to arbitrary \(G\). Identify the coweight lattice \[\Lambda\cong\{(k_{1},k_{2},k_{3})\in\mathbb{Z}^{3}:\ k_{1}+k_{2}+k_{3}=0\}\] by \((t\mapsto\operatorname{diag}(t^{k_{1}},t^{k_{2}},t^{k_{3}}))\mapsto(k_{1},k_{2 },k_{3})\). There are two simple coroots, \(\alpha_{1}=(1,-1,0)\) and \(\alpha_{2}=(0,1,-1)\). \(\rho=(1,0,-1)\). The Weyl group is identified \(W\cong S_{3}\) with it's standard action on \(\mathbb{Z}^{3}\). \(s_{\alpha_{i}}\) is identified with the standard generator \(s_{i}\in S_{3}\). Reflection normal to the long root is identified with \(s_{3}\in S_{3}\), \(s_{3}=s_{1}s_{2}s_{1}=s_{2}s_{1}s_{2}\). To simplify notation, define \(T_{i}\) and \(\operatorname{Avg}_{i}\in\mathcal{H}\), for \(i=1,2\) as \(T_{i}=T_{s_{\alpha_{i}}}\) and \(\operatorname{Avg}_{i}=\operatorname{Avg}_{s_{\alpha_{i}}}\). Let \(\widetilde{C}\) be the algbraic Eisenstein module as in Definition 4.1. By Theorem 5 there is a surjective map of \(\mathcal{H}^{\otimes S}\) modules \(\widetilde{C}\to C_{Eis}\). By Proposition 5.1, \(\widetilde{C}\) is generated over \((\mathcal{H}^{fin})^{\otimes S}\) by \(J_{\lambda}\) for \(\lambda\in\Lambda\) such that \(\lambda+\rho\) is dominant. Further, the following relations hold amongst the generators (see Figure 1): 1. \(\lambda=0\) (Principal Orbit) (9) \[\operatorname{Avg}_{i}^{\{0,1\}}J_{0}=\operatorname{Avg}_{i}^{\{0,\infty\}}J_{0}= \operatorname{Avg}_{i}^{\{1,\infty\}}J_{0}\text{ for }i\in\{1,2\}\] 2. \(\lambda\in W\cdot\rho\) (10) \[\operatorname{Avg}_{1}^{s}J_{\alpha_{2}}=\operatorname{Avg}_{1}^{s}T_{1}^{S \setminus\{s\}}J_{\rho}\text{ for }s\in S\] (11) \[\operatorname{Avg}_{2}^{s}J_{\alpha_{1}}=\operatorname{Avg}_{2}^{s}T_{2}^{S \setminus\{s\}}J_{\rho}\text{ for }s\in S\] (12) \[\operatorname{Avg}_{i}^{s}J_{-\rho}\in\operatorname{Span}_{(\mathcal{H}^{fin} )^{\otimes S}}\{J_{0},J_{\alpha_{1}},J_{\alpha_{2}},J_{\rho}\}\text{ for }i\in\{1,2\},\ s\in S\] Figure 1. Lattice of coweights of \(\operatorname{SL}(3)\); depicts the structure of \(\widetilde{C}\) as a \((\mathcal{H}^{fin})^{\otimes S}\) module. The module is generated by the shifted dominant cone \(-\rho+\Lambda_{+}\). 
The generator \(0\) satisfies the reflection relation. Generators (colored yellow) along a wall satisfy the reflection relations for only one simple root. Dashed red arrow indicated generators are related as \(\operatorname{Eis}_{-1}\) and \(\operatorname{Eis}_{1}\) (see PGL(2) example). 3. \(\langle\tilde{\alpha_{i}},\lambda\rangle=0\), \(\lambda\neq 0\) (walls of dominant cone) (13) \[\operatorname{Avg}_{i}^{\{0,1\}}J_{\lambda}=\operatorname{Avg}_{i}^{\{0,\infty\} }J_{\lambda}=\operatorname{Avg}_{i}^{\{1,\infty\}}J_{\lambda}\] 4. \(\langle\tilde{\alpha_{i}},\lambda\rangle=-1\), \(\lambda\neq-\rho\) (walls of \(-\rho\) shifted dominant cone) (14) \[\operatorname{Avg}_{i}^{s}J_{\lambda}=\operatorname{Avg}_{i}^{s}\!T_{i}^{ \nabla\setminus\{s\}}J_{s_{i}\cdot\lambda}\text{ for }s\in S\] We want to show that \(\widetilde{C}\to C_{Eis}\) given by \(1\mapsto\operatorname{Eis}_{0}\) is an isomorphism. We study the map over map \((\mathcal{H}^{fin})^{\otimes S}\). For \(\lambda\in\Lambda_{+}\), define the the \((\mathcal{H}^{fin})^{\otimes S}\) submodules \(\widetilde{C}^{\lambda}\subset\widetilde{C}\) and \(C^{\lambda}_{Eis}\subset C_{Eis}\) as follows. \[\widetilde{C}^{\lambda}:=\operatorname{Span}_{(\mathcal{H}^{fin})^{\otimes S }}\{J_{w\cdot\lambda}:w\in W,\ w\cdot\lambda\in-\rho+\Lambda_{+}\}\] \[C^{\lambda}_{Eis}:=\operatorname{Span}_{(\mathcal{H}^{fin})^{\otimes S}}\{ \operatorname{Eis}_{w\cdot\lambda}:w\in W,w\cdot\ \lambda\in-\rho+\Lambda_{+}\}\] It suffices to show that \[C_{Eis}\cong\oplus_{\lambda\in\Lambda_{+}}C^{\lambda}_{Eis} \tag{15}\] \[\dim_{\mathbb{C}}(C^{\lambda}_{Eis})\geq\dim_{\mathbb{C}}(\widetilde{C}^{ \lambda})\text{ for }\lambda\in\Lambda_{+} \tag{16}\] These are established by Propositions 6.1 and 6.2. **Proposition 6.1**.: Equation 15 is true and \[\dim_{\mathbb{C}}(C^{\lambda}_{Eis})=\begin{cases}69&\lambda=0\\ 6^{3}+3^{3}+3^{3}+1^{3}&\lambda=\rho\\ 3^{3}\cdot 5&\langle\tilde{\alpha_{i}},\lambda\rangle=0,\ \lambda\neq 0\\ 3^{3}\cdot(2^{3}+1)&\langle\tilde{\alpha_{i}},\lambda\rangle=1,\ \lambda\neq\rho\\ 6^{3}&\lambda\in 2\rho+\Lambda_{+}\end{cases}\] **Proposition 6.2**.: \[\dim_{\mathbb{C}}(\widetilde{C}^{\lambda})\leq\begin{cases}69&\lambda=0\\ 6^{3}+3^{3}+3^{3}+1^{3}&\lambda=\rho\\ 3^{3}\cdot 5&\langle\tilde{\alpha_{i}},\lambda\rangle=0,\ \lambda\neq 0\\ 3^{3}\cdot(2^{3}+1)&\langle\tilde{\alpha_{i}},\lambda\rangle=1,\ \lambda\neq\rho\\ 6^{3}&\lambda\in 2\rho+\Lambda_{+}\end{cases}\] ### Proof of Proposition 6.1 To prove Proposition 6.1, we first describe the geometry of the fibers \(\operatorname{Bun}_{G}(\mathbb{P}^{1},S)\to\operatorname{Bun}_{G}(\mathbb{P}^ {1})\) and identify the Eisenstein objects \(\operatorname{Eis}_{\lambda}\) for \(\lambda\in-\rho+\Lambda_{+}\). For \(\lambda\in\Lambda_{+}\), let \(\operatorname{Bun}_{G}^{\lambda}(\mathbb{P}^{1},S)\) denote the fiber above \(\lambda\in\Lambda_{+}\). We will find that except for \(\operatorname{Eis}_{-\rho}\), all these Eisenstein objects are nonzero only on a single point, which lies in \(\operatorname{Bun}_{G}^{\tilde{\lambda}}(\mathbb{P}^{1},S)\), where \(\tilde{\lambda}\in\Lambda_{+}\) is in the \(W\) orbit of \(\lambda\). We will also find that for \(\lambda\in\Lambda_{+}\setminus\{0,\rho\}\), \(C^{\lambda}_{Eis}\) is equal to the space of all automorphic functions taking nonzero values only on points of \(\operatorname{Bun}_{G}^{\lambda}(\mathbb{P}^{1},S)\). 
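Before turning to the geometry, the numerical values in the dimension tables of Propositions 6.1 and 6.2 are small enough to evaluate directly. The following sketch is pure arithmetic (the labels in the comments are only mnemonic paraphrases of the cases above).

```python
# right-hand sides of the dimension counts in Propositions 6.1 and 6.2 (|W| = 6 for SL(3))
print(69)                          # lambda = 0
print(6**3 + 3**3 + 3**3 + 1**3)   # lambda = rho                                  -> 271
print(3**3 * 5)                    # lambda on a wall of the dominant cone          -> 135
print(3**3 * (2**3 + 1))           # pairing 1 with a simple coroot, lambda != rho  -> 243
print(6**3)                        # lambda in 2*rho + dominant cone                -> 216

# the lambda = 0 entry also appears as 36 + 33 nonempty loci in Section 6.1.1
# and as 73 - 4 monomials in the proof of Proposition 6.2
assert 36 + 33 == 73 - 4 == 69
```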
Objects of \(\operatorname{Bun}_{G}(\mathbb{P}^{1})\) are represented by rank 3 vector bundles \(\mathcal{E}\), whose determinant bundle is trivial. Objects of \(\operatorname{Bun}_{G}(\mathbb{P}^{1},S)\) are represented by \(\mathcal{E}\in\operatorname{Bun}_{G}(\mathbb{P}^{1})\) with flags \(F_{s}=(\ell_{s},p_{s})\), \(\ell_{s}\subset p_{s}\subset\mathcal{E}|_{s}\). #### 6.1.1. \(\mathcal{E}\cong\mathcal{O}(0)\) \(\operatorname{Bun}_{G}^{0}(\mathbb{P}^{1},S)\) is identified with the orbits of the triple flag variety \(G\backslash\mathcal{B}^{S}\). The generic configuration is when the flags are pairwise transverse and the following two conditions are satisfied: * The lines \(\ell_{s}\) are not coplanar. * The planes \(p_{s}\) are not concurrent. For \((p,q)\in S\times S\) with \(p\neq q\), there is a map \(\pi_{p,q}:G\backslash\mathcal{B}^{S}\to G\backslash(\mathcal{B}\times \mathcal{B})\). Identify the points of \(G\backslash(\mathcal{B}\times\mathcal{B})\cong B\backslash G/B\) with \(W\) by relative position of flags. Explicitly, for \(w\in W\), \((F_{1},F_{2})\), is in relative position \(w\), * \(w=1\) if \(\ell_{1}=\ell_{2}\) and \(p_{1}=p_{2}\) * \(w=s_{1}\) if \(\ell_{1}\neq\ell_{2}\) and \(p_{1}=p_{2}\) * \(w=s_{2}\) if \(\ell_{1}=\ell_{2}\) and \(p_{1}\neq p_{2}\) * \(w=s_{2}s_{1}\) if \(\ell_{2}\in p_{1}\), \(\ell_{1}\notin p_{2}\) * \(w=s_{1}s_{2}\) if \(\ell_{2}\notin p_{1}\), \(\ell_{1}\in p_{2}\) * \(w=s_{3}\) if \(\ell_{2}\notin p_{1}\), \(\ell_{1}\notin p_{2}\) If \((F_{0},F_{1})\) are in relative position \(w\) and \((F_{1},F_{\infty})\) are in relative position \(w^{\prime}\), then the possible relative positions of \((F_{0},F_{\infty})\) are exactly those \(w^{\prime\prime}\in W\) such that \(T_{w^{\prime\prime}}\) has a nonzero coefficient in \(T_{w^{\prime}}T_{w}\in\mathcal{H}^{fin}\). In particular, if \(\ell(w)+\ell(w^{\prime})=\ell(w^{\prime}w)\), then the relative position of \((F_{0},F_{\infty})\) must be \(w^{\prime}w\). The other cases are * \(w=w^{\prime}=s_{i}\). \(T_{w^{\prime}}T_{w}=(q-1)T_{s_{i}}+q\). * \(w=s_{i}\), \(w^{\prime}=s_{j}s_{i}\). \(T_{w^{\prime}}T_{w}=(q-1)T_{s_{j}s_{i}}+qT_{s_{j}}\). * \(w=s_{i}\), \(w^{\prime}=s_{3}\). \(T_{w^{\prime}}T_{w}=(q-1)T_{s_{3}}+qT_{s_{i}s_{j}}\). * \(w=s_{i}s_{j}\), \(w^{\prime}=s_{i}\). \(T_{w^{\prime}}T_{w}=(q-1)T_{s_{i}s_{j}}+qT_{s_{j}}\). * \(w=s_{i}s_{j}\), \(w^{\prime}=s_{j}s_{i}\). \(T_{w^{\prime}}T_{w}=(q-1)T_{s_{3}}+q(q-1)T_{s_{j}}+q^{2}\) * \(w=w^{\prime}=s_{i}s_{j}\). \(T_{w^{\prime}}T_{w}=(q-1)T_{s_{3}}+qT_{s_{j}s_{i}}\) * \(w=s_{i}s_{j}\), \(w^{\prime}=s_{3}\). \(T_{w^{\prime}}T_{w}=(q-1)^{2}T_{s_{3}}+q(q-1)T_{s_{j}s_{i}}+q(q-1)T_{s_{i}s_{j }}+q^{2}T_{s_{i}}\). * \(w=s_{3}\), \(w^{\prime}=s_{i}\). \(T_{w^{\prime}}T_{w}=(q-1)T_{s_{3}}+qT_{s_{j}s_{i}}\). * \(w=s_{3}\), \(w^{\prime}=s_{j}s_{i}\). \(T_{w^{\prime}}T_{w}=(q-1)^{2}T_{s_{3}}+q(q-1)T_{s_{j}s_{i}}+q(q-1)T_{s_{i}s_{j }}+q^{2}T_{s_{i}}\). * \(w=s_{3}\), \(w^{\prime}=s_{3}\). \(T_{w^{\prime}}T_{w}=(q-1)(q^{2}-q+1)T_{s_{3}}+q(q-1)^{2}T_{s_{i}s_{j}}+q(q-1)^ {2}T_{s_{j}s_{i}}+q^{2}(q-1)T_{s_{i}}+q^{2}(q-1)T_{s_{j}}+q^{3}\) \(s_{i}\) is one simple reflection, \(s_{j}\) is the other. Let \(\pi:G\backslash(\mathcal{B}^{S})\rightarrow(B\backslash G/B)^{3}\) be given by \((\pi_{0,1},\pi_{1,\infty},\pi_{0,\infty})\). Let \(c_{0}(w,w^{\prime},w^{\prime\prime})\) be the preimage of \((w,w^{\prime},w^{\prime\prime})\). From the above calculation, we find that \(c_{0}(w,w^{\prime},w^{\prime\prime})\) is nonempty exactly for the following triples: 1. 
\((w,w^{\prime},w^{\prime\prime})\), with \(w^{\prime\prime}=w^{\prime}w\). There are exactly \(36\) such triples. 2. \((w,w^{\prime},w^{\prime\prime})\) is one of the following \(33\) triples: \[(s_{1},s_{1},s_{1}),(s_{2},s_{2},s_{2}),(s_{1},s_{2}s_{1},s_{2}s_{1}),(s_{2},s_{ 1}s_{2},s_{1}s_{2}),(s_{1},s_{3},s_{3}),(s_{2},s_{3},s_{3}),(s_{1}s_{2},s_{1}, s_{1}s_{2}),(s_{2}s_{1},s_{2},s_{2}s_{1}),(s_{1}s_{2},s_{2}s_{1},s_{3}),\] \[(s_{1}s_{2},s_{2}s_{1},s_{1}),(s_{2}s_{1},s_{1}s_{2},s_{3}),(s_{2} s_{1},s_{1}s_{2},s_{2}),(s_{1}s_{2},s_{1}s_{2},s_{3}),(s_{2}s_{1},s_{2}s_{1},s_{3}),(s_{ 2}s_{1},s_{2}s_{1},s_{3}),(s_{1}s_{2},s_{3},s_{3}),(s_{1}s_{2},s_{3},s_{1}s_{2}),(s_{1}s_{2},s_{3},s_{2}s_{1}),\] \[(s_{2}s_{1},s_{3},s_{3}),(s_{2}s_{1},s_{3},s_{2}s_{1}),(s_{2}s_{1},s_{3},s_{1}s_{2}),(s_{3},s_{1},s_{3}),(s_{3},s_{2},s_{3}),(s_{3},s_{2}s_{1}, s_{3}),(s_{3},s_{2}s_{1},s_{2}s_{1}),(s_{3},s_{2}s_{1},s_{1}s_{2}),\] \[(s_{3},s_{1}s_{2},s_{3}),(s_{3},s_{1}s_{2},s_{1}s_{2}),(s_{3},s_{1} s_{2},s_{2}s_{1}),(s_{3},s_{3},s_{1}),(s_{3},s_{3},s_{2}),(s_{3},s_{3},s_{1}s_{2}),(s_{3},s_{3},s_{2}s_{1}),(s_{3}, s_{3},s_{3})\] One can check that each of these loci \(c_{0}(w,w^{\prime},w^{\prime\prime})\) has exactly one isomorphism class of objects, except for the locus \(c_{0}(s_{3},s_{3},s_{3})\), classifying triples of pairwise transverse flags. This locus is as follows: \(c_{0}(s_{3},s_{3},s_{3};\{s_{1}\})\)\(c_{0}(s_{3},s_{3},s_{3};\{s_{2}\})\)\(c_{0}(s_{3},s_{3},s_{3};\{s_{1},s_{2}\})\)\(c_{0}(s_{3},s_{3},s_{3};\{s_{1},s_{2}\})\)\(c_{0}(s_{3},s_{3},s_{3};\{s_{1},s_{2}\})\)\(c_{0}(s_{3},s_{3},s_{3};\{s_{1},s_{2}\})\)\(c_{0}(s_{3},s_{3},s_{3};\{s_{1},s_{2}\})\)\(c_{0}(s_{3},s_{3},s_{3};\{s_{1},s_{2}\})\)\(c_{0}(s_{3},s_{3},s_{3};\{s_{1},s_{2}\})\)\(c_{0}(s_{3},s_{3},s_{3};\{s_{1},s_{2}\})\)\(c_{0}(s_{3},s_{3},s_{3};\{s_{1},s_{2}\})\) is the configuration where \(\ell_{s}\) are coplanar and \(p_{s}\) are concurrent. \(c_{0}(s_{3},s_{3},s_{3};\{s_{1}\})\) is the configuration where \(\ell_{s}\) are not coplanar and \(p_{s}\) are concurrent. \(c_{0}(s_{3},s_{3},s_{3};\{s_{2}\})\) is the generic configuration. The loci of flags in generic configuration forms a space isomorphic to the complement of three points in \(\mathbb{P}^{1}\); the remaining subloci of \(c_{0}(s_{3},s_{3},s_{3})\) have a unique point. **Definition 6.1**.: \(C^{0}_{loc}\subset C_{Aut}\) is the subspace of functions supported on points where the bundle is trivial and constant along the generic locus \( Proof.: We describe the equations cutting out \(C^{0}_{Eis}\) in \(C^{0}_{loc}\). Let \(c_{0}(*)\) denote the locus where \(\ell_{s}\) are not coplanar and \(p_{s}\) are not concurrent. The following are subloci of \(c_{0}(*)\) organized so that \(x\to y\) means \(y\) is contained in the closure of \(x\). For \(R\subset S\), \(c^{L}_{0}(R)\subset c_{0}(*)\) is the sublocus where \(\ell_{s}\in p_{L(s)}\) if and only if \(s\in R\), where \(L(s)\) denotes the predecessor of \(s\) in the cyclic ordering \(0\to 1\to\infty\to 0\). \(c_{0}(R)\subset c_{0}(*)\) is the sublocus where \(\ell_{s}\in P_{R(s)}\) if and only if \(s\in R\), where \(R(s)\) denotes the successor of \(s\) in the same cyclic ordering. In the previous notation 1. \(c^{L}_{0}(S)=c_{0}(s_{2}s_{1},s_{2}s_{1},s_{1}s_{2})\) 2. \(c^{L}(01)=c_{0}(s_{2}s_{1},s_{3},s_{1}s_{2}),c^{L}(0\infty)=c_{0}(s_{3},s_{2}s _{1},s_{1}s_{2}),c^{L}_{0}(1\infty)=c_{0}(s_{2}s_{1},s_{2}s_{1},s_{3})\) 3. 
\(c^{L}(0)=c_{0}(s_{3},s_{3},s_{1}s_{2}),c^{L}(1)=c_{0}(s_{2}s_{1},s_{3},s_{3}),c ^{L}_{0}(\infty)=c_{0}(s_{3},s_{2}s_{1},s_{3})\) 4. \(c^{R}_{0}(S)=c_{0}(s_{1}s_{2},s_{1}s_{2},s_{2}s_{1})\) 5. \(c^{R}(01)=c_{0}(s_{1}s_{2},s_{1}s_{2},s_{3}),c^{R}(0\infty)=c_{0}(s_{1}s_{2},s _{3},s_{2}s_{1}),c^{R}_{0}(1\infty)=c_{0}(s_{3},s_{1}s_{2},s_{2}s_{1})\) 6. \(c^{R}(0)=c_{0}(s_{1}s_{2},s_{3},s_{3}),c^{R}(1)=c_{0}(s_{3},s_{1}s_{2},s_{3}), c^{R}_{0}(\infty)=c_{0}(s_{3},s_{3},s_{2}s_{1})\) \(C^{0}_{Eis}\subset C^{0}_{loc}\) is the subspace of functions, \(f\), such that \[f(c_{0}(s_{3},s_{3},s_{3};\emptyset))-f(c_{0}(s_{3},s_{3},s_{3};\{s_{1}\}))-f (c_{0}(s_{3},s_{3},s_{3};\{s_{2}\}))+f(c_{0}(s_{3},s_{3},s_{3};\{s_{1},s_{2}\}) )=0\] \[f(c_{0}(s_{3},s_{3},s_{3};\emptyset))+\sum_{R\subset S;R\neq\emptyset}(-1)^{ |R|}f(c^{L}_{0}(R))=0\] \[f(c_{0}(s_{3},s_{3},s_{3};\emptyset))+\sum_{R\subset S;R\neq\emptyset}(-1)^{ |R|}f(c^{R}_{0}(R))=0\] \(f(c_{0}(s_{3},s_{3},s_{3};\emptyset))\) is the common value of \(f\) on any point of the generic locus \(c_{0}(s_{3},s_{3},s_{3};\emptyset)\). #### 6.1.2. Interlude on Bundles with a Positive Splitting The following is a special case. The general principle will be elaborated upon in a future document. Suppose that \(\mathcal{E}\cong\mathcal{O}(\lambda)\) admits a _positive_ splitting \(\mathcal{E}\cong\mathcal{E}_{1}\oplus\mathcal{E}_{2}\), which means that \(\operatorname{Hom}(\mathcal{E}_{1},\mathcal{E}_{2})=0\). For example, if \(\mathcal{E}_{1}\cong\mathcal{O}(m)\oplus\mathcal{O}(n)\) and \(\mathcal{E}_{2}\cong\mathcal{O}(k)\) then the positivity condition is \(m,n\geq k+1\). Let \(P\supset B\) be the parabolic subgroup corresponding to the splitting. If \(\mathcal{E}_{1}\) is rank two, then \(P=P_{s_{1}}\) and if \(\mathcal{E}_{1}\) is rank one, then \(P=P_{s_{2}}\). The subbundle \(\mathcal{E}_{1}\) is stable under \(\operatorname{Aut}(\mathcal{E})\), so there is a subspace \(\mathcal{E}^{\operatorname{stab}}_{s}\subset\mathcal{E}|_{s}\) given by restriction of \(\mathcal{E}_{1}\). Let \(\operatorname{fib}_{s}:\operatorname{Bun}^{\lambda}_{G}(\mathbb{P}^{1},S)\to P \backslash G/B\) be given by relative position of \((\mathcal{E}^{\operatorname{stab}}_{s},F_{s})\). For example, if \(\mathcal{E}_{1}\) is rank two, the relative position, \(w\) is given by: * \(w=1\) if \(p_{s}=\mathcal{E}^{\operatorname{stab}}_{s}\) * \(w=s_{2}\) if \(\ell_{s}\subset\mathcal{E}^{\operatorname{stab}}_{s}\) but \(p_{s}\neq\mathcal{E}|_{s}\) * \(w=s_{1}s_{2}\) if \(\ell\notin\mathcal{E}^{\operatorname{stab}}_{s}\) There is also a map \(\operatorname{Bun}^{\lambda}_{G}(\mathbb{P}^{1})\to\operatorname{Bun}^{\lambda _{1}}_{L}(\mathbb{P}^{1})\), given by \(\mathcal{E}\mapsto\mathcal{E}_{1}\oplus\mathcal{E}/\mathcal{E}_{1}\), where \(L\subset P\) is the Levi subgroup and \(\mathcal{E}_{1}\cong\mathcal{O}(\lambda_{1})\). At the level of rational points, this can be lifted to included parabolic structure: \[\operatorname{split}:\operatorname{Bun}^{\lambda}_{G}(\mathbb{P}^{1},S)\to \operatorname{Bun}^{\lambda_{1}}_{L}(\mathbb{P}^{1},S)\] For example, if \(\mathcal{E}_{1}\) is rank two, the parabolic structure for \(\mathcal{E}_{1}\) at \(s\) is given by \(p_{s}\cap\mathcal{E}^{\operatorname{stab}}_{s}\) if \(p_{s}\) is transverse to \(\mathcal{E}^{\operatorname{stab}}_{s}\) and otherwise by \(\ell_{s}\). The splitting map is not continuous on the underlying moduli spaces. 
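Since \(\dim\operatorname{Hom}_{\mathbb{P}^{1}}(\mathcal{O}(a),\mathcal{O}(b))=\max(0,\,b-a+1)\), the positivity condition above reduces to the stated inequality \(m,n\geq k+1\) in the displayed example. A minimal sketch of this check follows; the function names are ad hoc and the sketch is only an illustration of the criterion.

```python
def dim_hom(a, b):
    """dim Hom_{P^1}(O(a), O(b)) = dim H^0(P^1, O(b - a))."""
    return max(0, b - a + 1)

def is_positive_splitting(e1_degrees, e2_degrees):
    """Hom(E_1, E_2) = 0 for split bundles E_1, E_2 on P^1."""
    return all(dim_hom(a, b) == 0 for a in e1_degrees for b in e2_degrees)

# E_1 = O(m) + O(n), E_2 = O(k): positive exactly when m, n >= k + 1
k = 0
assert is_positive_splitting([1, 1], [k])        # m = n = 1 >= k + 1
assert not is_positive_splitting([1, 0], [k])    # n = 0 = k fails the condition
```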
Suppose further that the splitting \(\mathcal{E}\cong\mathcal{E}_{1}\oplus\mathcal{E}_{2}\) is _very positive_, which means \(\operatorname{Hom}(\mathcal{E}_{1},\mathcal{E}_{2}\otimes\omega_{\mathbb{P}^{1} }(S))=0\). For example, if \(\mathcal{E}_{1}\cong\mathcal{O}(m)\oplus\mathcal{O}(n)\) and \(\mathcal{E}_{2}\cong\mathcal{O}(k)\) then the condition is \(m,n\geq k+2\). Calculating the action of \(\operatorname{Aut}(\mathcal{E})\) on \(\prod_{s\in S}\mathcal{E}|_{s}\) shows that the product of the splitting map and \(\operatorname{fib}:=\prod_{s\in S}\operatorname{fib}_{s}\) is a bijection on points. \[(P\backslash G/B)^{S}\leftarrow\operatorname{Bun}_{G}^{\lambda}(\mathbb{P}^{ 1},S)\rightarrow\operatorname{Bun}_{L}^{\lambda_{1}}(\mathbb{P}^{1},S)\] 6.1.3. \(\mathcal{E}\cong\mathcal{O}(\rho).\) The \(B\)-bundle \(\mathcal{O}(1)\subset\mathcal{O}(1)\oplus\mathcal{O}\subset\mathcal{E}\) is stable under \(\operatorname{Aut}(\mathcal{E})\). For \(s\in S\), there is a flag \(F_{s}^{\operatorname{stab}}=(\ell_{s}^{\operatorname{stab}},p_{s}^{ \operatorname{stab}})\subset\mathcal{E}|_{s}\), given by restriction of the stable \(B\)-bundle, also invariant under \(\operatorname{Aut}(\mathcal{E})\). There is a map \(\operatorname{fib}_{s}:\operatorname{Bun}_{G}^{\rho}(\mathbb{P}^{1},S) \to B\backslash G/B\) given by the relative position \((F_{s}^{\operatorname{stab}},F_{s})\) of the flag \(F_{s}\) in the fiber at \(s\) to the stable \(B\)-bundle. Let \(c_{\rho}(w_{0},w_{1},w_{\infty})\) denote the locus where the relative position of \(F_{s}\) to the stable flag is \(w_{s}\in W\). There are two splitting maps, for \(i=1,2\): \[\operatorname{split}_{i}:\operatorname{Bun}_{G}^{\rho}(\mathbb{P}^{1},S) \rightarrow\operatorname{Bun}_{\operatorname{PGL}(2)}^{1}(\mathbb{P}^{1},S)\] The parabolic structure at \(s\in S\) for \(\operatorname{split}_{1}\) is given by the distinguished line \(\ell_{s}^{\operatorname{dist},1}\subset p_{s}^{\operatorname{stab}}\) defined as \(\ell_{s}^{\operatorname{dist},1}=p_{s}\cap p_{s}^{\operatorname{stab}}\) if \(F_{s}\) is transverse to \(p_{s}^{\operatorname{stab}}\) and \(\ell_{s}\) otherwise. The parabolic structure for \(\operatorname{split}_{2}\) is given by the distinguished plane \(\ell_{s}^{\operatorname{dist},2}\subset\mathcal{E}|_{s}/\ell_{s}^{\operatorname {stab}}\) given by \((\ell_{s}\oplus\ell_{s}^{\operatorname{stab}})/\ell_{s}^{\operatorname{stab}}\) if \(F_{s}\) is transverse to \(\ell_{s}^{\operatorname{stab}}\) and \(p_{s}/\ell_{s}^{\operatorname{stab}}\), otherwise. Explicitly, the points of \(c_{\rho}(w_{0},w_{1},w_{\infty})\) are as follows. 1. If for each \(i=1,2\), there is at least one \(s\in S\) such that \(\ell(w_{s}s_{1})>\ell(w_{s})\), then the locus consists of a single point. 2. \(\ell(w_{s}s_{1})<\ell(w_{s})\) for all \(s\in S\), but there is at least one \(s^{\prime}\in S\) such that \(\ell(w_{s^{\prime}}s_{2})>\ell(w_{s^{\prime}})\). This locus consists of two points. The generic configuration, \(c_{\rho}(w_{0},w_{1},w_{\infty};\emptyset)\) is where the distinguished lines \(\ell_{s}^{\operatorname{dist},1}\) are not contained in the image of a map \(\mathcal{O}\rightarrow\mathcal{O}(1)\oplus\mathcal{O}\). The degenerate locus, \(c_{\rho}(w_{0},w_{1},w_{\infty};\{s_{1}\})\) is where there is such a map. 3. \(\ell(w_{s}s_{2})<\ell(w_{s})\) for all \(s\in S\), but there is at least one \(s^{\prime}\in S\) such that \(\ell(w_{s^{\prime}}s_{1})>\ell(w_{s^{\prime}})\). 
The generic configuration, \(c_{\rho}(w_{0},w_{1},w_{\infty};\emptyset)\) is where the distinguished lines \(\ell_{s}^{\operatorname{dist},2}\) are not contained in the image of a map \(\mathcal{O}(-1)\rightarrow\mathcal{E}/\mathcal{O}(1)\). The degenerate locus, \(c_{\rho}(w_{0},w_{1},w_{\infty};\{s_{2}\})\) is where there is such a map. 4. \(w_{0}=w_{1}=w_{\infty}=s_{3}\). This locus has four points \[c_{\rho}(s_{3},s_{3},s_{3};\{s_{1}\})\] \[c_{\rho}(s_{3},s_{3},s_{3};\{s_{1},s_{2}\})\] For \(\delta\subset\{s_{1},s_{2}\}\)\(c_{\rho}(s_{3},s_{3},s_{3};\delta)\) is the locus where the distinguished lines \(\ell_{s}^{\operatorname{dist},1}=p_{s}\cap p_{s}^{\operatorname{stab}}\) are contained in the image of a map \(\mathcal{O}\rightarrow\mathcal{O}(1)\oplus\mathcal{O}\) if and only if \(s_{1}\in\delta\) and the distinguished lines \(\ell_{s}^{\operatorname{dist},2}=p_{s}/\ell_{s}^{\operatorname{stab}}\) is contained in the image of a map \(\mathcal{O}(-1)\rightarrow\mathcal{E}/\mathcal{O}(1)\) if and only if \(s_{2}\in\delta\). The Eisenstein objects in \(C_{Eis}^{\rho}\) are \[\operatorname{Eis}_{\rho}=\underline{1}_{c_{\rho}(1,1,1)}\] \[\operatorname{Eis}_{s_{1}\cdot\rho}=\underline{1}_{c_{\rho}(s_{1},s_{1},s_{1}; \{s_{1}\})}\] \[\operatorname{Eis}_{s_{2}\cdot\rho}=\underline{1}_{c_{\rho}(s_{2},s_{2},s_{2}; \{s_{2}\})}\] \[\operatorname{Eis}_{-\rho}=\underline{1}_{c_{\rho}(s_{3},s_{3},s_{3};\{s_{1},s_{ 2}\})}+\underline{1}_{c_{0}(s_{3},s_{3},s_{3};\emptyset)}\] [Check the last calculation] The finite Hecke module generated by \(\operatorname{Eis}_{\rho}\) consists of all functions on the points \(\operatorname{Bun}_{G}^{\rho}(\mathbb{P}^{1},S)\) constant along the loci \(c_{\rho}(w_{0},w_{1},w_{\infty})\). \(\operatorname{Eis}_{s_{i}\cdot\rho}\) generates, under finite Hecke modification, the constant function on points \(c_{\rho}(w_{0},w_{1},w_{\infty};\{s_{i}\})\) for \((w_{0},w_{1},w_{\infty})\neq(s_{3},s_{3},s_{3})\) as well as the function \[\underline{1}_{c_{\rho}(s_{3},s_{3},s_{3};\{s_{1},s_{2}\})}+\underline{1}_{c_{ \rho}(s_{3},s_{3},s_{3};\{s_{i}\})}.\] Therefore, \(C^{\rho}_{Eis}\) consists of functions, \(f\), vanishing away from the points of the loci \(\mathrm{Bun}^{\rho}_{G}(\mathbb{P}^{1},S)\) and \(c_{0}(s_{3},s_{3},s_{3};\emptyset)\) that are constant along \(c_{0}(s_{3},s_{3},s_{3};\emptyset)\) and satisfy \[f(c_{0}(s_{3},s_{3},s_{3};\emptyset))=f(c_{\rho}(s_{3},s_{3},s_{3};\{s_{1},s_{2 }\}).\] It follows that \[\mathrm{dim}_{\mathbb{C}}(C^{\rho}_{Eis})=\left|\mathrm{Bun}^{\rho}_{G}( \mathbb{P}^{1},S)\right|=6^{3}+3^{3}+3^{3}+1^{3}.\] 1.4. \(\mathcal{E}\cong\mathcal{O}(\lambda)\), \(\langle\dot{\alpha_{i}},\lambda\rangle=0,\ \lambda\neq 0\) Without loss of generality, assume \(\langle\dot{\alpha_{1}},\lambda\rangle=0\). Then \(\mathcal{E}\cong\mathcal{O}(k)\oplus\mathcal{O}(k)\oplus\mathcal{O}(-2k)\), for some \(k\geq 1\). There is a bijection of points \[\mathrm{Bun}^{\lambda}_{G}(\mathbb{P}^{1},S)\leftrightarrow(P_{s_{1}}\backslash G /B)^{S}\times\mathrm{Bun}^{0}_{\mathrm{PGL}(2)}(\mathbb{P}^{1},S)\] \(\mathrm{Eis}_{\lambda}\) is the constant function on the point corresponding to \(c_{0}(S)\times(1,1,1)\). By the \(\mathrm{PGL}(2)\) calculation, for every point, pt, of \(\mathrm{Bun}^{0}_{\mathrm{PGL}(2)}(\mathbb{P}^{1},S)\), \(C^{\lambda}_{Eis}\) contains the constant function on the point corresponding to \(\mathrm{pt}\times(1,1,1)\). 
Furthermore, for \(w_{0},w_{1},w_{\infty}\in\{1,s_{2},s_{1}s_{2}\}\cong P_{s_{1}}\backslash G/B\), \[T^{0}_{w_{0}}\,T^{1}_{w_{1}}T^{\infty}_{w_{\infty}}\underline{1}_{\mathrm{pt} \times(1,1,1)}=\underline{1}_{\mathrm{pt}\times(w_{0},w_{1},w_{\infty})}.\] Therefore, \(C^{\lambda}_{\mathrm{Eis}}\) consists of all functions taking nonzero value only on the points of \(\mathrm{Bun}^{\lambda}_{G}(\mathbb{P}^{1},S)\), so \[\mathrm{dim}_{\mathbb{C}}(C^{\lambda}_{Eis})=\left|(P_{s_{1}}\backslash G/B)^ {S}\times\mathrm{Bun}^{0}_{\mathrm{PGL}(2)}(\mathbb{P}^{1},S)\right|=3^{3}\cdot 5\] 1.5. \(\mathcal{E}\cong\mathcal{O}(\lambda)\), \(\langle\dot{\alpha_{i}},\lambda\rangle=1,\ \lambda\neq\rho\) Without loss of generality, assume \(\langle\dot{\alpha_{1}},\lambda\rangle=1\). Then \(\mathcal{E}\cong\mathcal{O}(k+1)\oplus\mathcal{O}(k)\oplus\mathcal{O}(-2k-1)\), for some \(k\geq 1\). There is a bijection of points \[\mathrm{Bun}^{\lambda}_{G}(\mathbb{P}^{1},S)\leftrightarrow(P_{s_{1}} \backslash G/B)^{S}\times\mathrm{Bun}^{1}_{\mathrm{PGL}(2)}(\mathbb{P}^{1},S)\] \(\mathrm{Eis}_{\lambda}\) is the constant function on the point corresponding to \(c_{1}(S)\times(1,1,1)\) and \(\mathrm{Eis}_{s_{1}(\lambda)}\) is the constant function on the point corresponding to \(c_{1}(\emptyset)\times(1,1,1)\). By the \(\mathrm{PGL}(2)\) calculation, for every point, pt, of \(\mathrm{Bun}^{1}_{\mathrm{PGL}(2)}(\mathbb{P}^{1},S)\), \(C^{\lambda}_{Eis}\) contains the constant function on the point corresponding to \(\mathrm{pt}\times(1,1,1)\). Furthermore, for \(w_{0},w_{1},w_{\infty}\in\{1,s_{2},s_{1}s_{2}\}\cong P_{s_{1}}\backslash G/B\), \[T^{0}_{w_{0}}\,T^{1}_{w_{1}}T^{\infty}_{w_{\infty}}\underline{1}_{\mathrm{pt} \times(1,1,1)}=\underline{1}_{\mathrm{pt}\times(w_{0},w_{1},w_{\infty})}.\] Therefore, \(C^{\lambda}_{\mathrm{Eis}}\) consists of all functions taking nonzero value only on the points of \(\mathrm{Bun}^{\lambda}_{G}(\mathbb{P}^{1},S)\), so \[\mathrm{dim}_{\mathbb{C}}(C^{\lambda}_{Eis})=\left|(P_{s_{1}}\backslash G/B)^ {S}\times\mathrm{Bun}^{1}_{\mathrm{PGL}(2)}(\mathbb{P}^{1},S)\right|=3^{3} \cdot(2^{3}+1)\] #### 6.1.6. \(\mathcal{E}\cong\mathcal{O}(\lambda)\), \(\lambda\in 2\rho+\Lambda_{+}\) There is \(B\)-bundle stable under \(\mathrm{Aut}(\mathcal{E})\). There is a map \(\mathrm{fib}_{s}:\mathrm{Bun}^{\lambda}_{G}(\mathbb{P}^{1},S)\to B\backslash G/B\) given by the relative position \((F^{\mathrm{stab}}_{s},F_{s})\) of the flag \(F_{s}\) in the fiber at \(s\) to the stable \(B\)-bundle. The points of the locus \(\mathrm{Bun}^{\lambda}_{G}(\mathbb{P}^{1},S)\) are identified with \((B\backslash G/B)^{S}\). Moreover, \(\mathrm{Eis}_{\lambda}\) is identified with \(\underline{1}_{\{1,1,1\}}\) and \(T^{0}_{w_{0}}\,T^{1}_{w_{1}}T^{\infty}_{w_{\infty}}\mathrm{Eis}_{\lambda}\) is identified with \(\underline{1}_{\{w_{0},w_{1},w_{\infty}\}}\). Therefore, there is an isomorphism of \((\mathcal{H}^{fin})^{\otimes S}\) modules \((\mathcal{H}^{fin})^{\otimes S}\to C^{\lambda}_{Eis}\) given by \(1\mapsto\mathrm{Eis}_{\lambda}\). \(\mathrm{dim}_{\mathbb{C}}(C^{\lambda}_{Eis})=|W|^{3}\). #### 6.1.7. Proof of Equation 15 First, check that \(C^{0}_{Eis}\cap C^{\rho}_{Eis}=0\). Indeed, every function in \(C^{0}_{Eis}\) takes nonzero values only on points of \(\mathrm{Bun}^{0}_{G}(\mathbb{P}^{1},S)\), but every nontrivial function in \(C^{\rho}_{Eis}\) takes nonzero value on some point of \(\mathrm{Bun}^{\rho}_{G}(\mathbb{P}^{1},S)\). 
Then, observe that the spaces \(\{C^{\lambda}_{Eis}\}_{\lambda\in\Lambda_{+}\backslash\{0,\rho\}}\cup\{C^{0}_ {Eis}\oplus C^{\rho}_{Eis}\}\) are pairwise orthogonal. This is because functions in \(C^{0}_{Eis}\oplus C^{\rho}_{Eis}\) take nonzero values only on points of \(\mathrm{Bun}^{0}_{G}(\mathbb{P}^{1},S)\cup\mathrm{Bun}^{\rho}_{G}(\mathbb{P}^{1},S)\), whereas functions in \(C^{\lambda}_{Eis}\) for \(\lambda\in\Lambda_{+}\setminus\{0,\rho\}\) only take nonzero value on points of \(\mathrm{Bun}^{\lambda}_{G}(\mathbb{P}^{1},S)\). **Remark.** The space of cusp forms \(C_{cusp}\subset C_{Aut}\) is the space orthogonal to \(C_{Eis}\). We see that cusp forms are functions taking nonzero values only on the generic locus of \(c_{0}(s_{3},s_{3},s_{3};\emptyset)\), as well as the following points of \(\mathrm{Bun}^{0}_{G}(\mathbb{P}^{1},S)\cup\mathrm{Bun}^{\rho}_{G}(\mathbb{P}^{1},S)\): 1. \(c_{0}(s_{3},s_{3},s_{3};\delta)\) for \(\delta\subset\{s_{1},s_{2}\}\) nonempty 2. \(c_{0}^{\lambda}(R)\) for \(R\subset S\) nonempty 3. \(c_{0}^{R}(R)\) for \(R\subset S\) nonempty 4. \(c_{\rho}(s_{3},s_{3},s_{3};\{s_{1},s_{2}\})\) The space of cusp forms is given by the following equations. \[f(c_{\rho}(s_{3},s_{3},s_{3};\{s_{1},s_{2}\})=-\sum_{\text{pt}\in c_{0}(s_{3},s_ {3},s_{3};\{s_{1},s_{3}\})}f(\text{pt})\] \[f(c_{0}^{L}(S))=-(q-1)f(c_{0}^{L}(0))=-(q-1)f(c_{0}^{L}(1))=-(q-1)f(c_{0}^{L}( \infty))=(q-1)^{2}f(c_{0}^{L}(01))=(q-1)^{2}f(c_{0}^{L}(0\infty))\] \[=(q-1)^{2}f(c_{0}^{L}(1\infty))\] \[f(c_{0}^{R}(S))=-(q-1)f(c_{0}^{R}(0))=-(q-1)f(c_{0}^{R}(1))=-(q-1)f(c_{0}^{R}( \infty))=(q-1)^{2}f(c_{0}^{R}(01))=(q-1)^{2}f(c_{0}^{R}(0\infty))\] \[=(q-1)^{2}f(c_{0}^{R}(1\infty))\] \[f(c_{0}(s_{3},s_{3},s_{3};\{s_{1},s_{2}\})=-(q-1)f(c_{0}(s_{3},s_{3},s_{3};\{s _{1}\})=-(q-1)f(c_{0}(s_{3},s_{3},s_{3};\{s_{2}\})\] \[\sum_{\text{pt}\in c_{0}(s_{3},s_{3},s_{3};\{s_{1},s_{3}\})}f(\text{pt})+f(c_{0 }(s_{3},s_{3},s_{3};\{s_{1}\})+f(c_{0}^{L}(01))+f(c_{0}^{R}(01))=0\] Counting points and constraints shows \(\dim_{\mathbb{C}}(C_{cusp})=q\). ### Proof of Proposition 6.2 2.1. \(\lambda=0\). \(\widetilde{C}^{0}\) is generated over \((\mathcal{H}^{fin})^{\otimes S}\) by \(J_{0}\) Using Equation 9 we can always write any monomial \(T_{w_{0}}^{0}T_{w_{1}}^{1}T_{w_{\infty}}^{\infty}\), \(w_{s}\in W\), as a sum of monomials where for any \(i\in\{1,2\}\) \[\ell(w_{\infty}s_{i})<\ell(w_{\infty})\implies\ell(w_{0}s_{i})>\ell(w_{0}),\ \ell(w_{1}s_{i})>\ell(w_{1}).\] Let us list the triples \((w_{0},w_{1},w_{\infty})\) that satisfy this condition. 1. \((w_{0},w_{1},1),\ w_{0},w_{1}\in W\) 2. \((w_{0},w_{1},w_{\infty}),\ w_{\infty}\in\{s_{1},s_{2}s_{1}\},\ w_{0},w_{1}\in\{1,s_{2},s_{1}s_{2}\}\) 3. \((w_{0},w_{1},w_{\infty}),\ w_{\infty}\in\{s_{2},s_{1}s_{s}\},\ w_{0},w_{1}\in\{1,s_{1},s_{2}s_{1}\}\) 4. \((1,1,s_{3})\) There are \(|W|^{2}+2\cdot 3^{2}+2\cdot 3^{2}+1=73\) such triples. Let \(M\) be the set of 69 monomials formed from excluding the following four from the 73 listed monomials: \[T_{s_{1}}^{0}T_{s_{1}}^{1}T_{s_{1}s_{2}}^{\infty},T_{s_{2}s_{1}}^{0}T_{s_{1}}^ {1}T_{s_{1}s_{2}}^{\infty},T_{s_{1}}^{0}T_{s_{2}s_{1}}^{1}T_{s_{1}s_{2}}^{ \infty},T_{s_{2}s_{1}}^{0}T_{s_{2}s_{1}}^{1}T_{s_{1}s_{2}}^{\infty}\] \(\widetilde{C}^{0}\) is spanned over \(\mathbb{C}\) by \(M\). This follows from two Lemmas. 
**Lemma 2**.: \(T_{s_{1}}^{0}T_{s_{1}}^{1}T_{s_{1}s_{2}}^{\infty}\in\operatorname{Span}_{ \mathbb{C}}(M)\)__ Proof.: Explicitly, \[T_{s_{1}}^{0}T_{s_{1}}^{1}T_{s_{1}s_{2}}^{\infty}=-T_{s_{1}s_{2}}^{\infty}-T_{s_ {1}}^{0}T_{s_{1}s_{2}}^{\infty}-T_{s_{1}}^{1}T_{s_{1}s_{2}}^{\infty}+q^{-1}(T_{ s_{1}s_{2}}^{0}+T_{s_{2}}^{0})(T_{s_{1}s_{2}}^{1}+T_{s_{2}}^{1})(T_{s_{2}s_{1}}^{ \infty}+T_{s_{1}}^{\infty})-q^{-1}(T_{s_{2}s_{1}}^{0}+T_{s_{3}}^{0})(T_{s_{2}s_{ 1}}^{1}+T_{s_{3}}^{1}) \tag{17}\] To prove Equation 17, observe that it rearranges to Equation 18, which we prove in Section 8.2. \[\operatorname{Avg}_{1}^{01}(T_{s_{1}s_{2}}^{\infty}+q^{-1}T_{s_{2}}^{01}(T_{s_{ 1}}^{01}-T_{s_{1}}^{\infty})-q^{-1}T_{s_{2}}^{S}T_{s_{1}}^{\infty})=0 \tag{18}\] **Lemma 3**.: \(\operatorname{Span}_{\mathbb{C}}(M)\) is closed under \(T_{s_{2}}^{0}\) and \(T_{s_{2}}^{1}\).__ Proof.: It is sufficient to check closure under \(T_{s_{2}}^{0}\). Consider a monomial \(m=T_{w_{0}}^{0}T_{w_{1}}^{1}T_{w_{\infty}}^{\infty}\in M\). \(T_{s_{2}}^{0}m\in M\) unless \((w_{0},w_{\infty})\) is one of the following * \((1,s_{2}),(1,s_{1}s_{2}),(1,s_{3})\) * \((s_{1}s_{2},s_{1}),(s_{1}s_{2},s_{2}s_{1})\) In each case, it is straighforward calculation to check that \(T^{0}_{s_{2}}\in\mathrm{Span}_{\mathbb{C}}(M)\). 2.2. \(\lambda=\rho\). \(\widetilde{C}^{\rho}\) is generated over \((\mathcal{H}^{fin})^{\otimes S}\) by \(J_{\rho},J_{\alpha_{1}},J_{\alpha_{2}},J_{-\rho}\). We filter \(\widetilde{C}^{\rho}\) by subsets of \(\{s_{1},s_{2}\}\). \(F^{\emptyset}(\widetilde{C}^{\rho})\) is the submodule generated by \(J_{\rho}\). For simple reflection \(s_{i}\), \(F^{\{s_{i}\}}\) is the submodule generated by \(J_{\rho}\) and \(J_{s_{i}\cdot\rho}\). \(F^{\{s_{1},s_{2}\}}=\widetilde{C}^{\rho}\). By Equations 10, 11, and 12, the following are true in the associated graded: \[\mathrm{Avg}^{s}_{1}J_{\alpha_{2}}=\mathrm{Avg}^{s}_{2}J_{\alpha_{1}}=0\text{ for }s\in S\] \[\mathrm{Avg}^{s}_{i}J_{-\rho}=0\text{ for }i\in\{1,2\},\ s\in S\] Therefore, \[\dim_{\mathbb{C}}(\mathrm{Gr}^{\emptyset}(\widetilde{C}^{\rho})\leq\dim_{ \mathbb{C}}((\mathcal{H}^{fin})^{\otimes S})=|W|^{3}\] \[\dim_{\mathbb{C}}(\mathrm{Gr}^{\{s_{1}\}}(\widetilde{C}^{\rho})\leq\dim_{ \mathbb{C}}((\mathcal{H}^{fin})^{\otimes S}/\langle\mathrm{Avg}_{1}\rangle_{ s\in S})=\dim_{\mathbb{C}}((\mathcal{H}^{fin}/\mathrm{Avg}_{1})^{\otimes S})=3^{3}\] \[\dim_{\mathbb{C}}(\mathrm{Gr}^{\{s_{2}\}}(\widetilde{C}^{\rho})\leq\dim_{ \mathbb{C}}((\mathcal{H}^{fin})^{\otimes S}/\langle\mathrm{Avg}_{2}\rangle_{ s\in S})=\dim_{\mathbb{C}}((\mathcal{H}^{fin}/\mathrm{Avg}_{2})^{\otimes S})=3^{3}\] \[\dim_{\mathbb{C}}(\mathrm{Gr}^{\{s_{1},s_{2}\}}(\widetilde{C}^{\rho})\leq \dim_{\mathbb{C}}((\mathcal{H}^{fin})^{\otimes S}/\langle\mathrm{Avg}_{1}, \mathrm{Avg}_{2}\rangle_{s\in S})=1^{3}\] \[\dim_{\mathbb{C}}(\widetilde{C}^{\rho})\leq 6^{3}+3^{3}+3^{3}+1^{3}\] **Remark**.: For \(\delta\subset\{s_{1},s_{2}\}\), pick an additive character \[\psi_{\delta}:N(\mathbb{F}_{q})/[N(\mathbb{F}_{q}),N(\mathbb{F}_{q}]\cong \oplus_{\{s_{1},s_{2}\}}\mathbb{F}_{q}\rightarrow\mathbb{C}^{\times},\] that is generic in the arguments \(\delta\). We can identify the graded component of \(\widetilde{C}^{\rho}\) with the Whittaker module for the finite Hecke algebra. \[\mathrm{Gr}^{\delta}(\widetilde{C}^{\rho})\cong(C^{(N,\psi_{\delta})}[ \mathcal{B}])^{\otimes S}.\] The Whittaker module is the the space of \((N(\mathbb{F}_{q}),\psi_{\delta})\) equivariant functions on the points of the flag variety. 
It is a finite Hecke module by convolution after identifying \(\mathcal{B}\cong G/B\). 2.3. \(\langle\check{\alpha_{i}},\lambda\rangle=0\), \(\lambda\neq 0\). \(\widetilde{C}^{\lambda}\) is generated over \((\mathcal{H}^{fin})^{\otimes S}\) by \(J_{\lambda}\). Using Equation 13 we can always write any monomial \(T^{0}_{w_{0}}T^{1}_{w_{1}}T^{\infty}_{w_{\infty}}J_{\lambda}\), \(w_{s}\in W\), as a sum of monomials where \[\ell(w_{\infty}s_{i})<\ell(w_{\infty})\implies\ell(w_{0}s_{i})>\ell(w_{0}),\ \ell(w_{1}s_{i})>\ell(w_{1})\] Let us count how many triples \((w_{0},w_{1},w_{\infty})\) satisfy this condition. There are three \(w\in W\) such that \(\ell(ws_{i})<\ell(w)\) and three such that \(\ell(ws_{i})>\ell(w)\). The set of \(s\in S\) such that \(\ell(w_{s}s_{i})<\ell(w)\) is exactly one of the following five: \(\emptyset,\{0\},\{1\},\{\infty\},\{0,1\}\). 2.4. \(\langle\check{\alpha_{i}},\lambda\rangle=1\), \(\lambda\neq\rho\). \(\widetilde{C}^{\lambda}\) is generated over \((\mathcal{H}^{fin})^{\otimes S}\) by \(J_{\lambda}\) and \(J_{s_{i}\cdot\lambda}\). Let \(F^{0}\) be the submodule generated by \(J_{\lambda}\). By Equation 14, in the quotient \(\widetilde{C}^{\lambda}/F^{0}\), \(\mathrm{Avg}_{i}J_{s_{i}\cdot\lambda}=0\). Therefore, \[\dim_{\mathbb{C}}(\widetilde{C}^{\lambda})\leq\dim_{\mathbb{C}}(F^{0})+\dim_{ \mathbb{C}}(\widetilde{C}^{\lambda}/F_{0}))\leq\dim_{\mathbb{C}}((\mathcal{H} ^{fin})^{\otimes S})+\dim_{\mathbb{C}}((\mathcal{H}^{fin}/\langle\mathrm{Avg} _{i}\rangle)^{\otimes S})=|W|^{3}+3^{3}\] 2.5. \(\lambda=\in 2\rho+\Lambda_{+}\). \(\widetilde{C}^{\lambda}\) is generated by \(J_{\lambda}\) under \((\mathcal{H}^{fin})^{\otimes S}\), so \(\dim_{\mathbb{C}}(\widetilde{C}^{\lambda})\leq\dim_{\mathbb{C}}((\mathcal{H} ^{fin})^{\otimes S})=|W|^{3}\) ## 7. Directions: Functional Equation, Many Points ### Many Points of Tame Ramification We state the following natural generalization of Conjecture 1.1 to \(\mathbb{P}^{1}\) with several points of tame ramification \(S\subset\mathbb{P}^{1}(\mathbb{F}_{q})\), \(S\neq\emptyset\). **Conjecture 7.1**.: If \(\rho\) is integral then \(C_{Eis}\) is the \(\mathcal{H}^{\otimes S}\) module generated by \(\mathrm{Eis}_{0}\) with the following relations 1. (Translation Relation) For any \(\lambda\in\Lambda\) and \(p,q\in S\), \[(J^{p}_{\lambda}-J^{q}_{\lambda})\mathrm{Eis}_{0}=0\] 2. (Reflection Relation) For any simple reflection, \(s_{\alpha}\in W\) and \(p,q\in S\) \[\left(\prod_{s\in S\setminus\{p\}}\mathrm{Avg}^{*}_{s_{\alpha}}-\prod_{s\in S \setminus\{q\}}\mathrm{Avg}^{*}_{s_{\alpha}}\right)\mathrm{Eis}_{0}=0\] When \(S\) consists of two points, the quotient of \(\mathcal{H}^{\otimes S}\) by the translation and reflection relation is identified with the regular bimodule of \(\mathcal{H}\). In this case, Conjecture 7.1 amounts to identifying \(C_{Eis}\) with the regular bimodule. This is done in the categorical geometric setting in Section 2.6 of [4]. A similar argument works in the arithmetic function field setting. The author is not aware of any reference but would be grateful to be referred to one. ### Reflection Relation as a Functional Equation We propose that the reflection relation from Conjecture 1.1 could be related to the functional equation for Eisenstein series. 
Let \(\mathrm{Bun}_{T}(\mathbb{P}^{1},S)\) be the space classifying pairs \((\mathcal{E},\{(V_{s},F^{0}_{s},F^{1}_{s},\tau_{s})\}_{s\in S})\), where \(\mathcal{E}\) is a \(T\)-bundle on \(\mathbb{P}^{1}\), \(V_{s}\) is a vector space, and \(F^{0}_{s},F^{1}_{s}\subset V_{s}\) are flags with an identification \(\tau_{s}:\mathrm{Gr}(F^{0}_{s})\cong\mathcal{E}|_{s}\). The constant term space is the space of compactly supported functions on the rational points of \(\mathrm{Bun}_{T}(\mathbb{P}^{1},S)\). \[\mathrm{CT}:=C[\mathrm{Bun}_{T}(\mathbb{P}^{1},S)]\] The functional equation for Eisenstein series expresses that parabolic induction \(\mathrm{Eis}:\mathrm{CT}\to C_{Aut}\) intertwines an action of the Weyl group on the constant term space. \(\mathrm{CT}\) is identified with the quotient of \(\mathcal{H}^{\otimes S}\) by the translation relation. The constant term space is free of rank \(|W|^{|S|}\) over \(\mathbb{C}[\Lambda]\). In light of the functional equation it is natural to conjecture that \(C_{\mathrm{Eis}}\) is free of rank \(|W|^{|S|-1}\).
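To fix ideas, here is the arithmetic behind these rank statements in the running case treated above, namely three points of tame ramification \(S=\{0,1,\infty\}\) and a Weyl group of order \(|W|=6\) (these values are read off from the \(|W|^{3}\) counts and the operators \(T^{0},T^{1},T^{\infty}\) in Section 6; the display below is only an illustrative numerical instance of the formulas just stated, not an additional result):
\[\operatorname{rank}_{\mathbb{C}[\Lambda]}(\mathrm{CT})=|W|^{|S|}=6^{3}=216,\qquad\operatorname{rank}_{\mathbb{C}[\Lambda]}(C_{\mathrm{Eis}})\overset{?}{=}|W|^{|S|-1}=6^{2}=36,\]
where the question mark records that the second equality is the conjectural statement above rather than a proven one.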
2309.17392
Instability cascade of strongly nonlinear gravity waves in a vertically sheared atmosphere
Although internal gravity waves are generally recognized as an important mechanism to distribute energy through the atmosphere, their dynamics near the instability is only partially understood to date. Many types of instabilities, notably the classical modulational instability, a novel point spectrum modulational instability, the triadic resonant instability, the shear instability and the static instability have been studied mostly in idealized settings and mostly isolated from one another. Here, we identify the instability cascade of a quasi one-dimensional and stationary internal gravity wave modulated by a vertically sheared mean flow. We find indicators of various interdependent instability mechanisms which partly compete for dominance and partly follow one another. A key finding is that the particular dynamics of the local cascade depends on the sign of the background shear.
Georg Sebastian Voelker, Mark Schlutow
2023-09-29T16:56:01Z
http://arxiv.org/abs/2309.17392v1
# Instability cascade of strongly nonlinear gravity waves in a vertically sheared atmosphere ###### Abstract Although internal gravity waves are generally recognized as an important mechanism to distribute energy through the atmosphere, their dynamics near the instability is only partially understood to date. Many types of instabilities, notably the classical modulational instability, a novel point spectrum modulational instability, the triadic resonant instability, the shear instability and the static instability have been studied mostly in idealized settings and mostly isolated from one another. Here, we identify the instability cascade of a quasi one-dimensional and stationary internal gravity wave modulated by a vertically sheared mean flow. We find indicators of various interdependent instability mechanisms which partly compete for dominance and partly follow one another. A key finding is that the particular dynamics of the local cascade depends on the sign of the background shear. **Abbreviations:** internal gravity wave (IGW), triadic resonant instability (TRI), point spectrum modulational instability (PSMI), Internal gravity waves, wave instability, instability cascade CONTACT G. S. Voelker. Email: [email protected] ## 1 Introduction It has been widely recognized that gravity waves affect the global circulation of Earth's atmosphere. Usually excited in the troposphere and stratosphere, gravity waves carry energy vertically as well as laterally, they drag the mean flow and lead to mixing of trace gases [8, 13, 26]. They are associated with the anomalous summer temperature minimum in the mesopause, the Quasi-Biannual Oscillation and the residual mean-flow circulation [4, 5, 20] Gravity waves are ubiquitous [11]. A pivotal role in understanding the interaction of the waves with the mean flow is wave dissipation, i.e. the dynamics of a wave that becomes unstable, overturns, breaks and vanishes into chaotic turbulence eventually. This work is concerned with the very first step of this chain: the instability onset. Several instability mechanisms were identified that cause an infinitesimal perturbation of a wave to grow exponentially with time. One of the problems with the theories governing the instability mechanisms lies in the fact that they only predict instability growth rates for specific instability mechanisms in an idealized setting. In nature, however, waves are not isolated and instabilities may coincide as they differ in location or in scale. For instance, convective and shear instabilities appear on comparable spatio-temporal scales but at distinct positions relative to the wave's period. Convective instabilities grow where the buoyancy is at its minimum and the shear instability can be found where the wind shear has its maximum [18, 19]. In contrast to convective and shear, modulational instabilities occur on scales that are much larger than the period [6]. They are excited on scales that are comparable with the typical variation of the mean flow, the synoptic scale or the mesoscale. Another class of instability that may be found on comparable scales to the wave itself is Triadic Resonant Instability and in particular the parametric subharmonic instability [9, 10]. Not only do these instability mechanisms coincide, they may also trigger each other. Modulational instabilities, for example, are able to amplify waves locally which causes an increased buoyancy amplitude which causes convective instability. 
Consequently, waves can undergo an entire cascade of instabilities before they dissipate and, conversely, it is usually impossible to identify one single instability mechanism responsible for the dissipation of a wave. We argue that only if one understands the instability cascade, one is able to predict gravity wave dissipation. Naturally, waves become unstable when their amplitude is large. In contrast to internal gravity waves in the ocean, atmospheric gravity waves tend to have as large wind amplitudes as the background or mean-flow wind, respectively. This is an indirect effect of air's compressibility that causes an exponentially decreasing ambient density with altitude. Due to energy conservation, atmospheric gravity waves gain in amplitude when they propagate upwards. Therefore, amplitudes become easily so large that neither linear nor weakly nonlinear theories are applicable. The former requires infinitesimal small and the latter finite but still small amplitudes in comparison with the mean flow [27]. Strongly nonlinear waves with large amplitudes are, in conclusion, rather the rule than the exception in the atmosphere and therefore we want to focus, in particular, on this class of waves in this study. Theoretical studies on gravity wave instabilities often assume homogeneous background atmosphere, i.e. constant stratification and constant ambient wind. A more realistic (but still highly idealized) scenario arises when sheared ambient wind is considered adding a substantial layer of complexity to the problem. Schlutow and Voelker [28] (hereafter SV20) carried out a theoretical investigation of the modulation equations, which describe the temporal evolution of the wave parameters, such as wave number and amplitude, in an ideal atmosphere with a very thin shear layer. This scenario is a common occurrence in the actual atmosphere, such as when a mountain wave encounters the tropospheric jet [12]. SV20 found a novel instability that is generally similar to the canonical modulation instability [27, 31] in unsheared backgrounds. The difference lies in the operator representing the linearized equations. Whereas the canonical modulation instability comes from the essential (continuous) spectrum of the operator, the novel type of instability is generated by the point (matrix-like) spectrum or in other words the set of discrete eigenvalues of the operator. In particular, the latter occurs only at a lower edge of a spatially slowly varying jet and at sufficiently large amplitudes of the stationary wave. This novel instability type is thus an addition to the many known instability mechanisms [13]. It remains, however, largely unclear how these various mechanisms interdepend and are thus linked to one another. With all these complications in mind, the aim of this study is to examine the instability cascade--from the first unstable perturbation to the knock-out mode--of strongly nonlinear gravity waves that interact with a layer of sheared background wind. This paper is structured as follows. In Section 2 we state the problem of the stationary refracted wave in a sheared background flow in terms of the Bretherton-Grimschaw modulation equations [7, 15]. A theoretical account on the manifold instability mechanisms is given; we review the classical (essential spectrum) modulation instability as well as the novel point-spectrum instability, then static, and shear instabilities. A brief discussion on triadic resonant instabilities concludes this section. 
The numerical model, that we utilize to simulate the refracted wave, is described in Section 3. Our simulation results together with analyses with respect to the various instabilities, that we found, are shown in Section 4. The concluding Section 5 summarizes our results and gives some final thoughts. ## 2 Instabilities of the stationary refracted wave solution In general an internal wave mode may encounter a wide range of known and individually studied instabilities. To highlight the most important instability types of a Boussinesq nonlinear internal gravity wave in a sheared background we consider the Bretherton-Grimschaw modulation equations in two dimensions as follows [3, 28, 29] \[\begin{split} 0&=\partial_{t}k_{z}+\partial_{z} \omega,\\ 0&=\rho\partial_{t}a+\partial_{z}\left(c_{gz}\rho a \right),\\ 0&=\rho\partial_{t}u+\partial_{z}\left(c_{gz}k_{x} \rho a\right).\end{split} \tag{1}\] While the wave action, \(a\), the wave vector, \((k_{x},k_{z})\), the intrinsic frequency, \(\hat{\omega}\), and the vertical group velocity, \(c_{gz}=\partial_{k_{z}}\hat{\omega}\), are wave parameters, \(u\) represents the horizontal mean flow velocity including a background flow and a wave induced part. Note that the Doppler shifted extrinsic frequency is denoted by \(\omega=\hat{\omega}+k_{x}u\). The background density, \(\rho\), and the buoyancy frequency, \(N\), are assumed constant for simplicity. The mean-flow velocity then decomposes as follows \[u(z,t) =k_{x}a(z,t)+U(z) \tag{2}\] \[=k_{x}a(z,t)+U_{1}+\frac{U_{2}-U_{1}}{2}\tanh\frac{z-z_{0}}{h}. \tag{3}\] Where we have imposed a background flow with a shape of a _tangens hyperbolicus_ changing continuously from a lower level with \(U\approx U_{1}\) to an upper layer with \(U\approx U_{2}\) on a length scale \(h\). To analyze the stability of such a refracted wave SV20 make use of an asymptotic WKBJ approach with a spatially slowly varying wave amplitude and mean flow. Since the vertical wavelength and the transition length scale, \(h\), are similar in magnitude the transition between \(U_{1}\) and \(U_{2}\) appears as a jump in the background velocity on the slowly varying scale. Additionally, we assume a stationary primary wave. Such a solution then yields vertical wavenumbers given by \(K_{z,j}=-(N^{2}/U_{j}^{2}-K_{x}^{2})^{\frac{1}{2}}\) with \(j\in\{1,\,2\}\) and a boundary condition between the two layers, \(A_{2}c_{gz,2}=A_{1}c_{gz,1}\). Here we define upper case variables as explicit solutions for the steady state primary wave. For convenience one may also define the relative frequency square, \(J_{j}=N^{2}/K_{x}^{2}U_{j}^{2}\), and the relative wave amplitude, \(\alpha=|\mathcal{B}K_{z}|/N^{2}\), where \(\mathcal{B}\) denotes the buoyancy amplitude of the wave. Note that in such a notation \(\alpha=1\) marks the threshold for static instability from linear theory [e.g. 2, 32]. In the theoretical analysis we thus assume a stable incident wave with \(\alpha<1\) everywhere. Demanding a transient wave solution, one finds the condition for non-evanescence \[J_{j}=\frac{N^{2}}{K_{x}^{2}U_{j}^{2}}>1, \tag{4}\] or equivalently \(|\hat{\omega}|<N\). Performing an instability analysis on the above solution one may find a description of the classical modulational instability as well as the point spectrum modulational instability (PSMI). Additionally but without reproducing the theory here we consider the triadic resonant instability, the shear instability, and the static instability mechanisms. 
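As a concrete illustration of these quantities, the short Python sketch below evaluates the layer-wise vertical wavenumber \(K_{z,j}\), the relative frequency square \(J_{j}\), and the non-evanescence condition of Eq. (4). It is a minimal sketch written for this exposition, not an excerpt from the theory or from the simulation code; the numerical values are the buoyancy frequency, horizontal wavelength, and wind speeds that appear later in Table 1, used here purely for illustration.

```python
import numpy as np

# Minimal sketch: layer-wise K_z, relative frequency square J, and the
# non-evanescence condition of Eq. (4). Parameter values follow Table 1.
N = 0.01                        # buoyancy frequency [1/s]
K_x = 2.0 * np.pi / 5.0e3       # horizontal wavenumber for a 5 km wavelength [1/m]
U_layers = {1: -4.5, 2: -5.5}   # wind outside (1) and inside (2) the jet [m/s]

for j, U_j in U_layers.items():
    J_j = N**2 / (K_x**2 * U_j**2)              # relative frequency square
    if J_j <= 1.0:
        print(f"layer {j}: J = {J_j:.2f} <= 1, evanescent (Eq. 4 violated)")
        continue
    K_z_j = -np.sqrt(N**2 / U_j**2 - K_x**2)    # stationary-wave vertical wavenumber
    print(f"layer {j}: J = {J_j:.2f}, K_z = {K_z_j:.2e} 1/m, "
          f"|K_z|/K_x = {abs(K_z_j) / K_x:.2f}")
```

With these numbers both layers satisfy Eq. (4), and the wavenumber ratios come out between roughly 1.05 and 1.46, consistent with the initial conditions quoted in Section 3.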
### Classical Modulational Instability Following Kapitula and Promislow [17], Schlutow [27] and SV20 one may use a perturbation ansatz to find both the continuous (essential) spectrum and the point (matrix-like) spectrum of the operator resulting from linearizing Eq. (1). The former, then, provides us with the classical modulational stability criterion \[J_{j}>\frac{3}{2}. \tag{5}\] Rather than adding another study to the often considered classical modulational instability [14, 31] we aim at including the PSMI by choosing the initial conditions accordingly. The latter is ensured through demanding the wavenumber aspect ratio \(1/\sqrt{2}<|K_{z}|/K_{x}\). ### Point spectrum modulational instability In addition, the point spectrum yields another type of modulational instability. It predicts a statically stable primary wave to be unstable due to the point spectrum but stable according to the essential spectrum if \[1>\alpha_{1}^{2}>\frac{2}{J_{1}}\frac{(J_{1}-1)^{2}}{2J_{1}-3}\qquad\text{and} \qquad|U_{1}|<|U_{2}|. \tag{6}\] Hence, \(\alpha_{1}\) and \(\alpha_{2}\) correspond to the amplitudes outside and inside the jet, respectively and follow the relationship \(\alpha_{1}>\alpha_{2}\). It is furthermore worth mentioning that the instability criterion (Eq. 6) requires a normalized wave amplitude as large as \(\alpha_{1}>\sqrt{8/9}\) for the instability to occur. One may argue that such large amplitudes are rare events. With respect to the atmosphere, where the present Boussinesq analysis may be regarded as a local approximation, the anelastic amplification however makes large amplitudes a common phenomenon. For more details we refer the interested reader to SV20 and the references therein. ### Triadic Resonant Instability Triadic resonant instabilities (TRI) are considered one of the most important mechanism for spectral (and non-local) energy transfer between internal gravity waves. TRI general occurs when there are three spectral wave components which approximately fulfill the resonance conditions [10, 22, 36] \[\mathbf{k}_{1} =\mathbf{k}_{2}+\mathbf{k}_{3}, \tag{7}\] \[\hat{\omega}_{1} =\hat{\omega}_{2}+\hat{\omega}_{3}.\] It is worth mentioning that the instability may also grow when one of the three spectral components has a zero amplitude, thus generating a third triad member. It is however generally not applicable for an incidentally monochromatic wave. In such a case--as considered here--only a perturbation with a corresponding spectral component may trigger the instability. Such a perturbation could for instance be generated through another instability mechanism and subsequently act as a triad member of a resonant or near-resonant triad. Most attention has been given to perturbations which are very close to the initial wave component leading to interactions commonly classified as parametric subharmonic instability [22, 36]. Considering a stationary incident wave the resonance conditions (Eq. 7) predict the growth of a vertically propagating wave mode with a horizontal wavenumber \(k_{x}^{\prime}\approx 2K_{x}\). Following recent insights into modulated TRI modulation through wind shear may reduce the growth rate of a generated mode through effectively narrowing the spectral interaction window for near-resonant interactions [33]. ### Static and Shear Instabilities Gravity waves may become unstable with respect to shear instability when the local Richardson number \(\mathrm{Ri}\) falls below a quarter [Kelvin-Helmholtz instability; 16, 23]. 
The Richardson number is canonically defined as \[\mathrm{Ri}=\frac{N^{2}+g\theta_{0}^{-1}\partial_{z}\theta^{\prime}}{\left[ \partial_{z}(u^{\prime}+u)\right]^{2}} \tag{8}\] which comprises the ratio of stratification to wind shear. An introduction on shear instabilities of gravity waves is given in [24, p 142]. Note that the Richardson criterion is only a necessary condition for shear instability and only applies for horizontally parallel flows. Gravity waves are shear waves, i.e. the velocity field is sheared along the direction of propagation. Strictly speaking, the assumption of horizontally parallel flow is only valid for hydrostatic gravity waves. For strongly nonlinear, non-hydrostatic gravity waves, however, the critical Richardson number may be modified [21]. The local Richardson number may be written in terms of the phase \(\phi=k_{x}x+k_{z}z\) as \[\mathrm{Ri}=\frac{|\mathbf{k}|^{2}}{k_{z}^{2}}\frac{1-\alpha\sin(\phi)}{\alpha^{2 }\cos^{2}(\phi)} \tag{9}\] according to Lelong and Dunkerton [19]. Studying Eq. (9), we learn that the Richardson criterion depends on the phase and the relative wave amplitude. Only when \(\alpha\) exceeds unity, the Richardson criterion, modified or not, can be fulfilled. In other words, a shear unstable wave is also unstable with respect to static instability. With regard to the phase, the wave becomes statically unstable where \(\phi=\pi/2\) or where the buoyancy field of the perturbation has its maximum. The Kelvin-Helmholtz instability appears consequently where the shear is maximized at \(\phi=0\). As a final remark of this section we want to point out that the initial conditions of our simulations are neither statically nor dynamically unstable as we assume initially \(\alpha<1\). ## 3 Model description With the above described instability mechanisms in mind we consider a nonlinear stationary wave in a sheared background flow. Such a solution is in principle stable under the assumptions of linear theory but will prove to exhibit a range of growing modes forming a cascade of instabilities which ultimately leads to the breakdown of the stationary parent wave. Here, we perform the analysis using the Large Eddy Simulation code _PincFlow_ with a second order MUSCL scheme and a MC flux limiter [25, 30, 34]. To accommodate the assumption of an incompressible flow we utilize the Boussinesq mode and integrate with an explicit third-order Runge-Kutta scheme [35]. Given the conditions by the above theoretical considerations we choose the parameters summarized in Tbl. 1. To accommodate a modulated stationary initial mode we set up the model with periodic boundary conditions and a jet embedded in a background flow as follows \[U(z)=\begin{cases}U_{1},&z<z_{1}-5h,\\ U_{1}+\frac{U_{2}-U_{1}}{2}\tanh\frac{z-z_{1}}{h}&z\in[z_{1}-5h,z_{1}+5h)\,,\\ U_{2},&z\in[z_{1}+5h,z_{2}-5h)\,,\\ U_{2}+\frac{U_{1}-U_{2}}{2}\tanh\frac{z-z_{2}}{h}&z\in[z_{2}-5h,z_{2}+5h)\,,\\ U_{1},&z\geq z_{2}+5h.\end{cases} \tag{10}\] We thus embed a jet with velocity \(U_{2}\) within a background with velocity \(U_{1}\) with smooth edges on the top and the bottom. Correspondingly, \(z_{1}\), and \(z_{2}\) denote the lower and upper jet edges, and \(h\) is the transition scale as before. The vertical wavenumber, \(K_{z}\), is then \[K_{z,j}(z)=-\sqrt{\frac{N^{2}}{U(z)^{2}}-K_{x}^{2}}, \tag{11}\] such that the wavenumber ratio is fulfills \(|K_{z}|/K_{x}\in(1.05,1.46)\) and does not permit modulational instability in the initial conditions. 
Moreover we ensure a conserved wave action flux by setting \(c_{gz}A=\text{const.}\) and choosing the amplitude \(\alpha\) accordingly. Finally, we integrate the phase of the wave and choose a domain size, \((x_{max},z_{max})\), such that the wave phase is continuous at the periodic vertical boundaries. The Brunt \begin{table} \begin{tabular}{l l r r} \hline parameter & description & \multicolumn{2}{c}{**value**} \\ \(\Delta x\) & hor. grid spacing & \(161.3\) & \(m\) \\ \(\Delta z\) & vert. grid spacing & \(39.0\) & \(\underline{m}\) \\ \(z_{\text{max}}\) & vert. domain extent & \(79924.34\) & \(m\) \\ \(x_{\text{max}}\) & hor. domain extent & \(5000\) & \(\underline{m}\) \\ \(N\) & buoyancy frequency & \(0.01\) & \(s^{-1}\) \\ \(\lambda_{x}=2\pi/K_{x}\) & hor. wavelength & \(5000\) & \(\underline{m}\) \\ \(z_{1}\) & lower jet edge & \(20,000\) & \(m\) \\ \(z_{2}\) & upper jet edge & \(60,000\) & \(\underline{m}\) \\ \(U_{1}\) & wind outside jet & \(-4.5\) & \(m\,s^{-1}\) \\ \(U_{2}\) & wind within jet & \(-5.5\) & \(\underline{m}\,s^{-1}\) \\ \(h\) & transition height & \(1000\) & \(m\) \\ \end{tabular} \end{table} Table 1: Summary of relevant model parameters. Vaisala frequency, \(N\), is constant. In this setup we can thus compare the behavior of the wave at both the lower as well as the upper jet edge. To ensure that the two edge regions evolve approximately independently, the heights of the jet edges are set such that they are well separated from each other and the periodic domain boundaries by approximately \(40h\). Seeking instability mechanisms in numerical experiments it is important to consider that numerical errors can trigger instability mechanisms in marginally stable initial conditions. To avoid this effect we rely on the variational diminishing discretization of _PincFlow_ which inhibits spurious oscillations to propagate through the numerical solution. ## 4 Observed instabilities Using the model setup described above one may expect instabilities to develop and break the very large amplitude incident wave. It is that initial phase of the simulation in which the instabilities develop that we are particularly interested in. For illustration we show snapshots of the vertical velocities in the full domain for the initial conditions and after the instabilities have developed (Fig. 1). One possible way to identify that deviation from the initial state is quantifying the \(L_{2}\)-norm of the vertical perturbations (cf. SV20). Integrating over the domains near the upper and lower jet edges we find that the perturbations do indeed grow exponentially but show distinct growth rates until they are reaching a saturation state at around \(1.5-2\,h\) (Fig. 2). What is more, we find that instabilities at the upper jet grow faster and earlier compared to the lower edge. After approximately \(2\,h\) the initial wave is broken and the solution is transitioning to turbulent behavior. As for the spatial structure of the perturbation-induced mean-wind we find mostly stationary structures at the lower edge and strong transient features at the upper edge (Fig. 3). This suggests that different instability mechanisms or combinations of instabilities govern the two spatial regimes of interest. To further illustrate the two distinct instability cascades we analyze the simulation results with respect to various indicators of the previously mentioned instability mechanisms. 
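Before turning to the individual mechanisms, the following Python sketch illustrates two of the diagnostics used in this section: the phase-dependent local Richardson number of Eq. (9) and an exponential growth-rate fit to the \(L_{2}\)-norm of the vertical wind near a jet edge. It is a minimal illustration under the stated parameter values, not an excerpt from _PincFlow_ or from our analysis scripts; the time series passed to the fit in the last two lines is synthetic and serves only to demonstrate the estimator.

```python
import numpy as np

# Minimal sketch of two diagnostics used below (not PincFlow code):
# (i) the phase-dependent local Richardson number of Eq. (9), and
# (ii) an exponential growth-rate estimate from the L2-norm of w near a jet edge.

def local_richardson(phi, alpha, K_x, K_z):
    """Ri(phi) = (|k|^2 / K_z^2) * (1 - alpha*sin(phi)) / (alpha^2 * cos(phi)^2), Eq. (9)."""
    prefactor = (K_x**2 + K_z**2) / K_z**2
    return prefactor * (1.0 - alpha * np.sin(phi)) / (alpha**2 * np.cos(phi)**2)

# With the initial amplitude alpha = 0.975 < 1, Ri stays above 1/4 at every phase,
# i.e. the incident wave is not shear unstable initially (cf. Sec. 2.4).
phi = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
print("min Ri:", local_richardson(phi, 0.975, 2.0 * np.pi / 5.0e3, -1.83e-3).min())

def growth_rate(t, w_l2):
    """Fit w_l2 ~ exp(lambda * t) over a time window; returns lambda in 1/s."""
    slope, _ = np.polyfit(t, np.log(w_l2), 1)
    return slope

t = np.linspace(0.0, 3600.0, 50)                               # synthetic diagnostics
print("lambda:", growth_rate(t, 1e-3 * np.exp(4.0e-4 * t)))    # recovers 4.0e-4 1/s
```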
### Point spectrum modulational instability First, we would like to highlight that the non-dimensional amplitude \(\alpha=0.975\) satisfies the condition of the predicted PSII (SV20) and may thus be observed at the lower jet edge. Also, we would like to remind the reader that the instability is predicted to be vertically localized exhibiting its maximum amplitude at the jet edge with a horizontal wavelength equal to the primary wave (cf. section 2.2 and SV20). Thus, such instabilities would be expected to be visible in the horizontal mean wind (Fig. Figure 1: Snapshots of the vertical wind in the full domain of the simulation at the initial conditions (a) and after \(1.4\,h\) (b) simulated time. Figure 4: Wavenumber-frequency spectra of the potential temperature deviations near the lower jet edge (a) and the upper jet edge (b). The spectra are computed using Hanning windows in time and the vertical axis. The resulting spectra are then horizontally averaged. The red vertical lines indicate the vertical wavenumbers of the primary wave inside and outside the jet. Additionally, the white functions show the theoretical spectral range of the PSMI. Figure 3: Hovmoeller plots of the horizontally averaged horizontal wind deviations, \(\left\langle u(x,z,t)-u(x,z,0)\right\rangle_{x}\), at the lower jet edge (a) and the upper jet edge (b). The two regimes show distinct generation of stationary (a and b) and transient structures (b). Figure 2: Normalized L\({}^{2}\)-norm of \(w\) integrated over the intervals \([z_{1}-5h,z_{1}+5h]\) (solid line) and \([z_{2}-5h,z_{2}+5h]\) (dashed line). The perturbation growth rates associated to the two jet edges show distinct values suggesting different growth mechanisms to be dominant there. 3) and the periodogram for the horizontal mode-1 (Fig. 5a and c, also see Sec. 2.3). Indeed, upon visual inspection of the perturbation-induced mean wind one may find localized and temporally stationary structures at the lower jet edge (Fig. 3, left panel). Similarly, the horizontal mode-1 perturbation signal reveals dominantly stationary structures similar to the expected instability at tht lower jet edge (Fig. 5c). However, the induced mean-wind's properties are not easily identified and associated with the PSII. Moreover, the theory according to SV20 predicts a linear relationship between the temporal growth rate, \(\lambda\), and the exponential spatial decay rate, \(\sigma\), with a proportionality coefficient given by the wave properties and the background wind. It should be noted that the theory permits both the growth and the spatial decay rates to have an imaginary component that would correspond to oscillatory behavior. We thus compute wavenumber-frequency spectra and insert the range of expected growth rates (white lines, Fig. 4)). The dominant signal is, as one might expect, the primary wave at the corresponding wavenumbers and zero frequency. Moreover, we observe that the spectra are asymmetric with respect to the sign of the wavenumber of the growing perturbations. While these perturbations exhibit spectral energy within the expected range for the PSII it may be difficult to interpret the spectra for various reasons. Firstly, we expect this type of instability to only exist at the lower jet edge (Fig. 4a). However, the upper jet edge (Fig. 4b) exhibits energy at similar wavenumber and frequency ranges, albeit in a more broad spectral region. Secondly, the spectral energy within the expected range does not dominate the perturbation spectrum. 
The PSII may thus occur, if present, in combination with other instabilities. Ultimately, we may not unambiguously identify this novel type of instability but conclude that the present numerical experiments permit the instability to be embedded in the cascade of instabilities leading to the breakdown of the primary wave. ### Triadic resonant interaction Given the quasi monochromatic incident wave and the resonance conditions (Eq. 7) we expect the TRI to generate vertically propagating wave components with a horizontal wavenumber \(k^{\prime}_{x}\approx 2K_{x}\). Utilizing the horizontally periodic boundary conditions we Figure 5: Power spectral densities for horizontal modes with \(k_{x}=K_{x}\) (a, c) and \(k_{x}=2K_{x}\) (b, d) from horizontal periodograms of \(u(x,z,t)-u(x,z,0)\). While the upper jet edge is dominated by transient mode-1 and mode-2 wave components, the lower jet edge mostly shows a stationary mode-1 perturbation. employ horizontal periodograms of the perturbation signal, \(u-u(t=0)\), to identify the growing spectral components (cf. Fig. 5). We find that at the upper jet edge dominantly transient components with horizontal wavenumbers \(K_{x}\) and \(2K_{x}\) are generated. At the lower jet edge, however, we find that a dominantly stationary component with horizontal wavenumber \(K_{x}\) is generated. At the same time the generation of transient mode-2 waves seems greatly reduced with respect to the upper jet edge. In general, \(2K_{x}\) components may also be introduced through the generation of higher harmonics associated to the primary wave [e.g. 1]. These higher harmonics, being coupled to a stationary primary wave with a zero extrinsic frequency, \(\omega=0\), would be set to have a vanishing extrinsic frequency and stationary phases as well. Finding mostly transient wave structures in the horizontal mode-2 components we conclude that the TRI may be the dominant process generating aforementioned mode-2 components therein. ### Shear and static instabilities Albeit being stable initially with respect to shear instability it may develop as a part of the cascade of instabilities during the break down of the incident gravity wave. That is, modulation of the stationary parent wave may induce shear instabilities through changes in the vertical wavenumbers. Also, growing perturbations may become non-linear and eventually exhibit shear instabilities themselves. As noted in Sec. 2.4, the necessary condition for shear instability to occur is fulfilled where the local Richardson number falls below one quarter, \(\mathrm{Ri}\leq 1/4\) (Eq. 9). In particular, we observe that well after the onset of first instabilities local areas develop flow characterized by a Richardson number smaller \(1/4\) (Figs. 2 and 6). To further understand the detail of the instability cascade we distinguish between wavelengths equivalent to the incident wave (Fig. 6, blue volumes) and perturbations associated to higher horizontal modes (Fig. 6, red volumes). While the instability of the horizontal mode-1 structures is mostly stationary and occurs at both the lower and the higher jet edge we find transient regions of small Richardson numbers in the higher modes only above the jet. This is consistent with the growth of horizontal mode-2 structures due to the TRI as discussed above. From these observations we may conclude that stationary structures with equal wave vector as the incident wave grow over time and eventually become unstable with respect to shear instability. 
What is more, higher modes generated through TRI may become unstable themselves over time posing an efficient pathway of the wave energy to small scales. The latter mechanism is observed at the upper jet edge only. Here we would like to remind the reader that both the conditions for the shear and the static instabilities coincide with respect to the relative wave amplitude, \(\alpha>1\), which is not fulfilled in the initial conditions. We thus interpret the observed shear instabilities as mechanisms occurring further downstream in the instability cascade. Figure 6: Hovmoeller diagrams of zero total vorticity and \(\mathsf{Ri}=1/4\) for the lower jet edge (a) and the upper jet edge (b). Blue and red isosurfaces correspond to winds and potential temperatures associated to the horizontal mode 1 and higher modes, respectively. Green isosurfaces show manifolds of zero total vorticity. ## 5 Conclusions In this study we have simulated a stationary internal gravity wave modulated by a sheared jet with amplitudes close to static instability in order to identify the cascade of both known and novel instability types. While some instability types like the classical modulational instability are excluded by choice many others show clear indications of occurrence. In particular, we find growing modes associated to the triadic resonant instability (TRI) mechanism as well as evidence of growing stationary structures with the same wave characteristic as the incident wave. While the transient TRI generated modes are dominant at the upper jet edge they are barely observed below. In contrast, the stationary structures occur at both the upper and lower edges of the jet albeit with an earlier onset and larger amplitude at the lower edge. These structures might be associated to the point spectrum modulational instability (PSMI) as proposed by Schlutow and Voelker [28], however it could not be identified unambiguously. Finally, the growing structures become subject to both shear and static instabilities ultimately leading to the breakdown of the incident wave and the transition to a turbulent regime. We conclude with the remark that the breakdown of a modulated gravity wave is not associated to a single instability but a zoo of mechanisms occurring partly in parallel (competing for dominance) and in a cascade following one another or even breaking down the growing modes. Over all, this highlights that although many instability mechanisms are well known and understood many questions remain open. As an example the growth of stationary instabilities could not be uniquely associated to a specific mechanism in the present study. Thus we need more investigations to strengthen our understanding of the mechanics of gravity wave breaking. ## Acknowledgements The authors like to thank the Center for Scientific Computing of the Goethe University Frankfurt. All calculations for this research were conducted on the provided Goethe-HLR Cluster. Moreover the authors thank Ulrich Achatz for his support and constructive feedback. ## Funding This paper is a contribution to the project W01 (Gravity-wave parameterization for the atmosphere) and S02 (Improved Parameterizations and Numerics in Climate Models) of the Collaborative Research Centre TRR 181 "Energy Transfers in Atmosphere and Ocean" funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Projektnummer 274762653.
2309.07181
The Grand Illusion: The Myth of Software Portability and Implications for ML Progress
Pushing the boundaries of machine learning often requires exploring different hardware and software combinations. However, the freedom to experiment across different tooling stacks can be at odds with the drive for efficiency, which has produced increasingly specialized AI hardware and incentivized consolidation around a narrow set of ML frameworks. Exploratory research can be restricted if software and hardware are co-evolving, making it even harder to stray away from mainstream ideas that work well with popular tooling stacks. While this friction increasingly impacts the rate of innovation in machine learning, to our knowledge the lack of portability in tooling has not been quantified. In this work, we ask: How portable are popular ML software frameworks? We conduct a large-scale study of the portability of mainstream ML frameworks across different hardware types. Our findings paint an uncomfortable picture -- frameworks can lose more than 40% of their key functions when ported to other hardware. Worse, even when functions are portable, the slowdown in their performance can be extreme and render performance untenable. Collectively, our results reveal how costly straying from a narrow set of hardware-software combinations can be - and suggest that specialization of hardware impedes innovation in machine learning research.
Fraser Mince, Dzung Dinh, Jonas Kgomo, Neil Thompson, Sara Hooker
2023-09-12T22:11:55Z
http://arxiv.org/abs/2309.07181v1
# The Grand Illusion: The Myth of Software Portability and Implications for ML Progress. ###### Abstract Pushing the boundaries of machine learning often requires exploring different hardware and software combinations. However, the freedom to experiment across different tooling stacks can be at odds with the drive for efficiency, which has produced increasingly specialized AI hardware and incentivized consolidation around a narrow set of ML frameworks. Exploratory research can be restricted if software and hardware are co-evolving, making it even harder to stray away from mainstream ideas that work well with popular tooling stacks. While this friction increasingly impacts the rate of innovation in machine learning, to our knowledge the lack of portability in tooling has not been quantified. In this work, we ask: _How portable are popular ML software frameworks?_ We conduct a large-scale study of the portability of mainstream ML frameworks across different hardware types. Our findings paint an uncomfortable picture - frameworks can lose more than 40% of their key functions when ported to other hardware. Worse, even when functions are portable, the slowdown in their performance can be extreme and render performance untenable. Collectively, our results reveal how costly straying from a narrow set of hardware-software combinations can be - and suggest that specialization of hardware impedes innovation in machine learning research. ## 1 Introduction The field of machine learning (ML) has made significant strides in recent years, thanks in large part to advances in hardware and software (Chowdhery et al., 2022; Zhang et al., 2022; Kaplan et al., 2020). However, the pursuit of efficiency has led to the creation of increasingly specialized AI hardware and the consolidation of ML frameworks around a narrow set of tools (Hooker, 2021). This specialization has limited the ability of researchers to experiment with different hardware and software combinations, hindering the rate of innovation in the field. The portability challenge has been amplified by the ever more heterogeneous landscape of hardware and software (Reddi et al., 2020). In particular, differences in hardware create a vexing problem for software: how to allow portability while maximizing performance (Hooker, 2021; Lee et al., 2011; Barham and Isard, 2019). Many commercial hardware suppliers purport to support a variety of popular ML libraries, however qualitative evidence from machine learning researchers suggests that this is often far from a straightforward process that requires significant changes to the code before it can be transferred successfully (Johansen et al., 2014). In this work, we ask how has the _increasingly fragmented and specialized hardware and software landscape impacted the portability of research?_ To our knowledge, there has been no prior work that has sought to quantify the ease of portability between hardware types. In this work, we seek to address this gap, by explicitly quantifying the portability of popular mainstream ML libraries, TensorFlow (Abadi et al., 2015), PyTorch (Paszke et al., 2019), and JAX (Bradbury et al., 2018), that are used by millions of developers across different hardware types. We embark on extensive data collection and annotation to hand-curate representative tests for each library and subsequently benchmark transferability and latency across different hardware types. 
Our results reveal highly uneven portability, suggesting that there will be increasingly uneven gains from progress in computer science research. Exploration in ML research appears to be hindered by failing functions and dismal performance. While some operations benefit from portability across devices, there are large gaps in coverage for widely used software frameworks. We find that there are frustrating differences in the subset of software operations supported on different types of hardware which prevent the portability of algorithms across hardware types. Even where there is portability, significant gaps exist in performance between each framework. Software kernels are often overly optimized for a specific type of hardware which causes huge lags in efficiency when used with a different type of hardware (Hennessy & Patterson, 2019). Our main contributions can be enumerated as follows: * We gather a human-curated and annotated collection of functions from popular ML libraries that can be benchmarked across hardware types. We open source this dataset for use in future benchmarking at the provided repo. * We find that PyTorch and TensorFlow, in particular, have portability issues. On GPUs, 22% of the TensorFlow benchmark functions fail partially or completely. On TPUs, a remarkable 44% of PyTorch benchmark functions partially or completely fail. * with both unexpected speedups and slowdowns moving functions between the GPU and the TPU. For example, 81.4% of functions in PyTorch exhibit more than a 10x slowdown when transferring functions from GPU to TPU. * We illustrate that certain software libraries are locked into a particular tooling stack. JAX was co-designed with TPUs in mind, and this is reflected in its performance. In all, 91.8% of our JAX function set is faster on the TPU. * We compare how software portability has evolved over time by comparing different versions of GPUs and TPUs. Specifically, we run experiments on both GPUs T4 and A100 and observe that the portability remains the same for PyTorch while it differs by only up to 1% for TensorFlow and JAX. Moreover, we observe that 28.07% and 9.09% of PyTorch functions achieve a 1.5X speed improvement when operating newer GPU and TPU versions, respectively. Hence, although newer generations of hardware have not improved software portability, they have yielded modest speed enhancements for certain frameworks. **Importance of this work:** This paper presents an evaluation framework at the beginning of a time when hardware and software specialization is growing, and thus where comparative evaluations will become ever more important. The economics of chip specialization have dramatically changed over the last decade or so (Thompson and Spanuth, 2021), leading Hennessy and Patterson to term this a _new golden age for computer architecture_ in their Turing lecture (Hennessy and Patterson, 2019). Specialization carries with it radical changes in performance, and disparities will only increase, as will the importance of co-designing implementations to those chips. Thus, we should expect that the type of quantitative portability analyses that we do in this paper will only become more important in the coming years to aid the design of tooling that is both efficient and portable. ## 2 Methodology We are interested in quantifying the portability of mainstream Python libraries used for machine learning workloads. 
We define portability as the _ease with which a machine learning workload (code, data, and models) can transfer between different hardware types._ We consider several types of failure: 1. **Complete failure to run**: If the function does not run on the device at all. 2. **Partial failure to run**: Some but not all the benchmark tests for a given function fail to run. 3. **Intolerable latency**: High latencies may be prohibitively inefficient, which may impair usability even if the function technically is able to run on multiple hardware types. Our goal is to benchmark the portability of libraries that claim to be portable across hardware types, and which are widely adopted. Hence, we evaluate the portability of JAX version 0.4.8, PyTorch version 1.12.0, and TensorFlow version 2.11.0. To avoid overfitting to specific machine learning workloads that may not capture future machine learning research directions, we **evaluate the portability of functions and not scripts**. A major concern when overly focusing on popular architectures or tasks is the sidelining the diverse range of code and ideas that researchers are exploring, some of which might not have reached popularity or high optimization levels. In addition, choosing to analyze workloads instead of functions would have posed several challenges for fairly comparing frameworks: \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multicolumn{6}{c}{**Comparison of TPU and GPU Failure and Success Rates**} \\ \hline \hline & \multicolumn{2}{c}{**GPUs**} & \multicolumn{2}{c}{**TPUs**} \\ \hline & **Success** & **Failure** & **Success** & **Failure** \\ & Pass & Partial & Complete & Pass & Partial & Complete \\ \cline{2-6} TensorFlow & 78\% & 8\% & 14\% & 71\% & 15\% & 14\% \\ PyTorch & 92\% & 3\% & 5\% & 57\% & 27\% & 17\% \\ JAX & 98\% & 0\% & 2\% & 97\% & 0\% & 3\% \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of portability success and failure rates of a random stratified sample of TensorFlow, PyTorch, and JAX functions across TPUs and GPUs. 1. **Analysis can be difficult:** For example, if we have input \(x\), go through three functions \(F\), \(G\), and \(H\) in the order \(F(G(H(x)))\). If the middle function, which is \(G\) in this case, fails because it is not portable, we will not be able to test the function \(F\). 2. **Different workloads use different framework versions:** If we use a deprecated function, we might face (1). 3. **Privileging common workloads introduces bias:** The function X might work on a common task, but it might not work in a more niche case. Therefore, the function sampling is much more thorough and thus more suitable for extrapolation. 4. **Operations are the building blocks of ML workloads:** The performance and portability of the operations directly impact the workloads that use them. ### Data collection **Function sampling procedure**: To obtain a full list of all functions, we iterate through the module structure of PyTorch, TensorFlow, and JAX to enumerate all functions and classes contained in the library. This process results in 2718 TensorFlow functions, 2898 PyTorch functions, and 1091 JAX functions. **Sampling procedure**: To have a representative view of each libraries performance, we do stratified sampling, including 1) **the top 20** functions as ranked by frequency of use, and 2) **5 random functions** from each decile of all functions ranked by frequency of use for each library (JAX, PyTorch, TensorFlow). 
The random sample allows us to capture a variety of different engineering Figure 1: Comparison of average execution time on Log scale for TensorFlow, PyTorch, and JAX functions on GPU versus TPU. In total, there are 51 functions in TensorFlow, 43 functions in PyTorch, and 61 functions in JAX. The number of data points is lower than the overall count of functions because we have excluded all subtests that failed on either device. This exclusion was to ensure a valid comparison. use cases and not overfit to scripts that may only rely on a small subset of the library. Benchmarking the top 20 functions measures how the frequency of use of a given function impacts portability - our expectation at the outset was that more frequently used functions would be prioritized for support across hardware types. To identify the top 20 functions and the decile samples, we measure the frequency of how often these PyTorch, TensorFlow, and JAX functions appear in scripts in the CodeParrot-clean dataset1. CodeParrot-clean is a deduplicated version of the CodeParrot dataset2, which is a collection of approximately 22 million Python files used originally to build a code generation model as part of the O'Reilly Transformers book (Tunstall et al., 2022). This was created by collecting Python files from the Github Google BigQuery dataset 3. We filtered to restrict to files that string matched import statements from either library. In the Appendix Section 9 section, we provide more details about the filtering procedure used. Footnote 1: [https://huggingface.co/datasets/codeparrot/codeparrot-clean](https://huggingface.co/datasets/codeparrot/codeparrot-clean) Footnote 2: [https://huggingface.co/datasets/transformersbook/codeparrot](https://huggingface.co/datasets/transformersbook/codeparrot) Footnote 3: [https://cloud.google.com/blog/topics/public-datasets/github-on-bigquery-analyze-all-the-open-source-code](https://cloud.google.com/blog/topics/public-datasets/github-on-bigquery-analyze-all-the-open-source-code) Thus, for each framework, we sample approximately 70 functions, 50 random decile functions, and the 20 most-used.4 The total number of samples per framework balanced the need for coverage with the time-intensive process need to human annotate and procure the minimal test for each function, to add in code to track latency to each script, and to modify and verify that each test is only run on the hardware in question. We intend to release this as a dataset that can be used by others for benchmarking portability. Footnote 4: This number is approximate due to needing to exclude some functions from the list. More details are in the next paragraph. In all, we include 63 unique functions from PyTorch, 65 unique functions from TensorFlow, and 63 functions for JAX. **Human annotation and curation of tests**: Our goal is to benchmark as conservatively as possible the expected behavior of the function. To do so, we rely where possible on the test included in the library for a given function. We manually match each function to its associated tests in the TensorFlow, PyTorch, and JAX libraries. Given these are widely used and well-maintained libraries, our expectation is that the tests within the formal library reasonably represent the minimal code needed to validate expected function behavior under different conditions. For all tests that failed, we Figure 2: TensorFlow, PyTorch, and JAX time densities. ensured that all failed tests were due to errors in the function being tested. 
Once tests are identified, we manually modify the test files to ensure that 1) only the relevant tests and the initialization code needed for running were preserved, 2) the code was run on the device we were evaluating, and 3) the code needed to record the latency in a consistent way for each script was added.

**Top 20 test exclusion:** Within the top 20 functions, there were occasions when it was not possible to test a function:

1. **Overlapping functions**: Due to the inherent randomness of our sampling and the static nature of our top 20, there are some overlaps between the deciles and the overall top 20: 4, 0, and 4 overlapping functions in PyTorch, TensorFlow, and JAX, respectively. We excluded these from testing, since functions in the top 20 cannot be replaced.
2. **Functions without a relevant test**: For some functions in the top 20 there was either no relevant test, or testing them would be somewhat nonsensical, such as int, which shows up in PyTorch but is a type rather than an operation. It is not clear what should be tested in this case, so we decided to exclude such functions.

**Replacement criteria**: After completing the sampling procedure, we found that a subset of functions was not viable for benchmarking. In total, we found 14 functions in TensorFlow, 13 functions in PyTorch, and 13 functions in JAX that needed replacement by resampling at random from the same decile. Our criteria for replacing a function are detailed below:

1. No test is present in the respective test suite. For example, arctan within PyTorch was not tested in the PyTorch open-sourced test suite. There were 12, 12, and 13 such functions for PyTorch, TensorFlow, and JAX, respectively.
2. The tests are designed to validate the error handling of the functions, so latency measurements are not meaningful in this case. For example, when testing the batch_norm function in PyTorch, the test focuses solely on checking whether the function correctly raises errors for unreasonable inputs; the core functionality of the method is not tested, just error throwing. Only one function in PyTorch fell into this case.
3. The functions operate on an older version of the framework. For instance, the test for TensorFlow's raw_rnn is only compatible with TensorFlow version 1, yet we conduct tests in TensorFlow version 2. There were two functions in TensorFlow that fit this case.

For functions that needed to be resampled, we sampled a new function at random from the same frequency decile as the original function.

### Hardware evaluation and device running procedures

**Types of Hardware Evaluated**: We primarily ran test suites on a T4 GPU and a v3-8 TPU (Jouppi et al., 2017). For certain analyses, we utilized an A100 GPU and a v2-8 TPU, and we specifically indicate such instances in the charts and tables. Unless otherwise indicated, readers should assume the use of a T4 GPU and a v3-8 TPU.

**Ensuring operations executed on correct device**: To ensure that PyTorch, TensorFlow, and JAX tests ran on the right hardware, we provided a device environment variable, which we then referred to in test helpers and startup code to force tests onto the correct device. This ensures that operations are not split between multiple devices but instead run on a single device. This was necessary because many tests specifically exercise transferring values between the CPU and another device, whereas our goal is to establish the viability of running a function on a single device.
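As an illustration of this setup, the sketch below shows how a harness might read the device choice from an environment variable and time a single operation with an explicit synchronization point (the latency-measurement procedure is described next). The variable name `BENCH_DEVICE`, the toy matmul workload, and the overall structure are our own assumptions, not the paper's released code; TPU runs additionally require the usual TPU runtime initialization.

```python
import os
import time

DEVICE = os.environ.get("BENCH_DEVICE", "gpu")  # hypothetical env var: "gpu" or "tpu"

def timed(fn, sync):
    """Run fn(), force pending asynchronous work to finish via sync(), return seconds."""
    start = time.perf_counter()
    out = fn()
    sync(out)
    return time.perf_counter() - start

# --- PyTorch: place tensors explicitly on the chosen device ---
import torch
if DEVICE == "gpu":
    dev, sync = torch.device("cuda"), lambda _: torch.cuda.synchronize()
else:
    import torch_xla.core.xla_model as xm  # PyTorch/XLA backend for TPUs
    dev, sync = xm.xla_device(), lambda _: xm.mark_step()
x = torch.randn(1024, 1024, device=dev)
print("torch matmul:", timed(lambda: x @ x, sync))

# --- JAX: block on the result so dispatch time is not mistaken for execution time ---
import jax.numpy as jnp
y = jnp.ones((1024, 1024))
print("jax matmul:", timed(lambda: y @ y, lambda r: r.block_until_ready()))

# --- TensorFlow: pin the op to a device scope ---
import tensorflow as tf
with tf.device("/GPU:0" if DEVICE == "gpu" else "/TPU:0"):
    z = tf.ones((1024, 1024))
    print("tf matmul:", timed(lambda: tf.matmul(z, z), lambda r: r.numpy()))
```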
We include more details in the appendix Section 10 about the technical implementation of ensuring functions are only run on the device of interest. **Latency measuring procedure**: For every script and each framework we wrap the relevant operation with time.perf_counter(). Before recording the ending time, we include a synchronization point. This will synchronize asynchronous XLA and Cuda operations, allowing the operations to finish before we take the time. We include more details in the appendix in Section 11 about how we implement the synchronization points. We record 3 runs for every test, framework, and device combination. Unless indicated otherwise, results are reported as the average of the 3 runs. ## 3 Results and discussion ### Portability of functions across hardware types **Overall failure and partial failure rates**: We observe rates of failure for all libraries we benchmark across hardware types. However, the rate of failure differs between frameworks. As seen in Table 1, on GPUs TensorFlow had the highest failure rate with a total of 21.54% complete and partial failures. On TPUs PyTorch has the highest failure rate with a remarkable total of 44.44% complete and partial failures. Across both platforms, we observe the lowest rate of failure for JAX with 1.59% complete failure on the GPU and 3.17% complete failure on the TPU. In particular, PyTorch's TPU failure rates stand out, as double the failure rate of TensorFlow and the highest failure rate overall. **Failure rate across different percentiles**: One of the questions we wanted to explore was whether portability was impacted by the frequency of use of functions. Our expectation was that the more heavily used a function was, the more portable it would be given the incentive to support the top use cases. However, as shown in Figure 1, there is a fairly consistent failure and partial Figure 3: Percentage of functions faster on A100 GPU vs v3-8 TPU. failure rate across deciles. This holds for all libraries, which suggests that frequency of use has not had a significant effect on the prioritization of support across hardware types. **Failure rate of top-20 functions**: To further explore whether the frequency of use has influenced \begin{table} \begin{tabular}{l l r r r} \hline \hline & Function & GPU & TPU & TPU/GPU \\ \hline Tensorflow & tf.linalg.svd & 0.931 & 112.843 & 121.206 \\ & tf.math.reduce\_logsumexp & 13.028 & 474.586 & 36.428 \\ \cline{2-5} & tf.estimator.LoggingTensorHook & 0.042 & 0.038 & 0.905 \\ & tf.compat.v1.Session.run & 5.722 & 3.804 & 0.665 \\ \hline PyTorch & torch.argsort & 0.157 & 948.210 & 6039.554 \\ & torch.optim.Adamax & 0.069 & 392.712 & 5691.478 \\ \cline{2-5} & torch.cuda & 0.041 & 0.061 & 1.488 \\ & torch.nn.Conv2d & 46.053 & 67.081 & 1.457 \\ \hline JAX & jax.named\_call & 0.007 & 0.012 & 1.714 \\ & jax.numpy.array & 0.435 & 0.638 & 1.467 \\ \cline{2-5} & jax.numpy.cos & 172.002 & 26.102 & 0.152 \\ & jax.numpy.sqrt & 98.118 & 13.860 & 0.141 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of the latency in milliseconds for the two functions with the greatest and least increase in latency in TensorFlow, PyTorch, and JAX on GPU and TPU. The table is ordered by the ratio GPU/TPU in descending order, and the top two biggest ratio functions are highlighted. Note that values are rounded to 3 decimal places. Figure 4: Complete and partial failure rates by percentile bucket for TensorFlow, PyTorch, and JAX functions on GPU and TPU. Note that charts include functions across deciles. 
portability, we directly compare rates of portability for the top 20 functions vs. other deciles. In Table 4, we observe that some libraries like JAX have 0% failure rates in the top-20 and low overall failure rates across all functions. However, surprisingly on TPUs, PyTorch actually presents slightly higher failure rates in the top 20 functions than across all functions (46% vs 44%). We also observe that on GPUs, TensorFlow also presents a considerably higher rate of failure in the top-20 functions (33% vs 22%). Across the board, we observe the rates of error between the deciles and the top 20 are quite similar showing even the most used functions do not benefit from greatly increased portability. **Comparing GPUs generations:** When analyzing the portability success and failure rates of TensorFlow, PyTorch, and JAX functions across T4 and A100 GPUs, we observe surprisingly similar trends between the two hardware generations, differing by only up to 1% for TensorFlow and JAX as shown in Table 3. Success rates remain consistently high for all frameworks on both GPUs, indicating robust compatibility. The percentages of Partial and Complete Failures also exhibit comparability across the GPUs and frameworks. This is concerning as it indicates that the advancements in A100 architecture have minimal influence on the overall portability. **First class citizen effect on different hardware**: One noted effect we see in our results could Figure 5: Complete and partial failure rates for the top 20 functions in TensorFlow, PyTorch, and JAX functions on GPU and TPU. \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multicolumn{5}{c}{**Comparison of GPU A100 and T4 Failure and Success Rates**} \\ \hline \hline & \multicolumn{2}{c}{**T4**} & \multicolumn{2}{c}{**A100**} \\ \hline & \multicolumn{2}{c}{**Success**} & \multicolumn{2}{c}{**Failure**} & \multicolumn{2}{c}{**Success**} & \multicolumn{2}{c}{**Failure**} \\ & \multicolumn{2}{c}{Pass} & \multicolumn{2}{c}{Partial} & Complete & Pass & Partial & Complete \\ \cline{3-6} TensorFlow & 78\% & 8\% & 14\% & 79\% & 9\% & 12\% \\ PyTorch & 92\% & 3\% & 5\% & 92\% & 3\% & 5\% \\ JAX & 98\% & 0\% & 2\% & 97\% & 0\% & 3\% \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of portability success and failure rates of a random stratified sample of TensorFlow, PyTorch, and JAX functions across T4s and A100s. be described as a _first class citizen effect_. Or simply frameworks built for a device or compilation target perform much better in that environment. The most striking example of this is in JAX on TPUs. As seen in Table 1 in JAX, we see a much lower rate of errors on TPUs when compared to other frameworks with only 3% of functions failing. This is likely due to JAX being built with XLA as a target in mind. We see a similar but less pronounced effect with TensorFlow when compared to PyTorch. TensorFlow was one of the original targets for XLA and thus performed decently well on them with 29% of functions failing when compared to PyTorch which has 44% of functions failing. The first-class citizen effect is less pronounced in TensorFlow, likely due to the age of the framework and the newer JAX giving the teams at Google a chance to rethink what XLA as a compilation target looks like. Compare both of these to PyTorch, and you can see a significant difference. PyTorch is a framework where XLA support was tacked on in a separate library, and it very much shows. Figure 6: Percentage of failure categories per framework device pair. 
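The failure-rate breakdowns discussed above (per decile, for the top-20 subset, and per device) reduce to a simple aggregation over the raw benchmark outcomes. The sketch below assumes a hypothetical results table with one row per (framework, function, device) and an outcome of pass, partial, or fail; the column names and example rows are illustrative only.

```python
import pandas as pd

# Hypothetical raw outcomes, one row per benchmarked (framework, function, device).
results = pd.DataFrame([
    {"framework": "pytorch", "function": "torch.argsort", "device": "tpu",
     "decile": 1, "in_top20": True, "outcome": "pass"},
    {"framework": "pytorch", "function": "torch.linalg.qr", "device": "tpu",
     "decile": 4, "in_top20": False, "outcome": "partial"},
    # ... one row per function/device combination ...
])

def failure_rate(df, group_cols):
    """Fraction of entries in each group that failed partially or completely."""
    failed = df["outcome"].isin(["partial", "fail"])
    return failed.groupby([df[c] for c in group_cols]).mean().rename("failure_rate")

print(failure_rate(results, ["framework", "device", "decile"]))             # per-decile rates
print(failure_rate(results[results["in_top20"]], ["framework", "device"]))  # top-20 rates
```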
**Reason for failure**: We observe rates of failure for all libraries we benchmark across hardware types. To understand better what impacts hardware portability, we annotate failures into categories. We briefly describe each below: * **Type failure**: Some types are not implemented in a given device. For instance, in PyTorch, while a TPU tensor might claim to be a double, it will always be represented by a Float instead. Another example of this is in the case of the PyTorch function SmoothL1Loss, which on GPUs we attempt to call on the bFloat16 type. However, this is not implemented and fails. * **Not implemented**: Some operations do not have kernels implemented on TPUs or GPUs at all or for specific categories of inputs. For example, on the TensorFlow numpy_function, the kernel PyFuncStateless is not implemented. * **Timeout**: We set a threshold of 15 minutes for each test. If a test goes above that, we automatically kill it and record it as a timeout failure. Given the minimal test code and the size of the test inputs (designed to run quickly), we believe 15 minutes was conservatively long. * **Memory issue**: Captures all cases where memory was attempted to be allocated or accessed as part of the operation and failed. For example, PyTorch Dataset attempted to use pin_memory, but this does not work on a TPU. * **Float precision error**: TPUs have a special float class called bFloat16, which has fewer mantissa bits and more bits for the exponent. This allows for much smaller and larger values but at the cost of floating point precision. This can break assertions in the tests. As shown in Figure 6, the most common reason for failure across all frameworks is the Not Implemented error, which is pronounced in TensorFlow and JAX, accounting for over 60% of failures. Moreover, PyTorch has a distinctive rise in Type Failures, contributing to more than 30% of its failures, a rate noticeably higher than the almost negligible or at most nearly 10% in other frameworks. Both TensorFlow and PyTorch exhibit a relatively low failure rate due to Memory Issues and Type Failures. As expected, the Float Precision error is unique to the TPU, representing around 20% of the failures for both TensorFlow and PyTorch. ### Efficiency cost to switching tooling As depicted in Figure 1 and Figure 3, 96% and 100% of TensorFlow and PyTorch functions experience significant deceleration when migrating from GPU to TPU, respectively. Specifically, within TensorFlow and PyTorch, a clear latency gap emerges when functions previously operating on the GPU are transferred to the TPU. As seen in Table 2, the lowest observed latency gap is 0.665 times. This gap tends to be more pronounced for slower operations on the GPU, reaching a maximum latency gap of 121.206 times. In PyTorch, the latency gap is even more prominent, with speed reductions of up to 6039 times when migrating from GPU to TPU. The latency densities also follow this trend, as shown in Figure 2. In contrast, most functions perform faster on TPU in JAX. When comparing the two devices, there is a minimal latency gap in JAX for both quick and slow operations. The ratio of performance in the migration from GPU to TPU in JAX remains minor, ranging from 0.141 to 1.714 times. In all, we see slowdowns on 100% functions for PyTorch, 96% of functions for TensorFlow, and 8% functions on JAX while moving to the TPU. 
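For concreteness, the per-function latency ratios of the kind reported in Table 2 can be computed from the recorded runs as sketched below. The data layout (three timed runs per function and device, in milliseconds) is an assumption for illustration; the example values loosely echo Table 2 and are not exact measurements.

```python
from statistics import mean

# Hypothetical timings: function -> {device -> [latency in ms for each of 3 runs]}.
timings = {
    "torch.argsort":  {"gpu": [0.16, 0.15, 0.16], "tpu": [950.1, 947.0, 947.5]},
    "jax.numpy.sqrt": {"gpu": [98.3, 98.0, 98.1], "tpu": [13.9, 13.8, 13.9]},
}

rows = []
for fn, runs in timings.items():
    gpu, tpu = mean(runs["gpu"]), mean(runs["tpu"])
    rows.append((fn, gpu, tpu, tpu / gpu))  # ratio > 1 means the function is slower on the TPU

for fn, gpu, tpu, ratio in sorted(rows, key=lambda r: r[3], reverse=True):
    print(f"{fn:20s} GPU {gpu:9.3f} ms   TPU {tpu:9.3f} ms   TPU/GPU {ratio:8.3f}")
```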
There are unique circumstances here that might make this different from using these frameworks in real-life situations (a standard training run involves long-running processes, whereas we run simple functions that finish quickly), but it is clear that the benefits of switching to specialized hardware can be uneven and variable.

**Discussion of Differences in Latency** While an in-depth analysis of differences between GPU and TPU kernels of functions is beyond the scope of our paper, we wanted to categorize some high-level reasons for the differences as a starting point for discussion. We note that these are anecdotal observations, but they may be of interest to the reader. Broadly, we expect slowdowns to be attributed to one of the following categories:

1. **Misalignment between workload and implementation:** Frameworks and hardware may assume certain usage patterns (e.g., uniform input sizes) that are mismatched with actual workloads.
2. **Memory architectures:** The substantially different memory architecture choices made by TPU and GPU architects advantage particular data structures and operations, making framework optimizations uneven in their effectiveness (Zhu et al., 2020).
3. **Bottlenecks:** Unimplemented features in some frameworks can create data transfer bottlenecks that prevent the hardware's full performance from being exploited.

**PyTorch Latency Differences** For TPUs, we observe that **long data transfer times** between TPU memory and the CPU can be a bottleneck. This problem is much worse in PyTorch than in TensorFlow due to the lack of an infeed processing implementation. TensorFlow specifically runs input processing on the CPU while TPU operations take place; PyTorch has chosen not to implement this, which makes TPUs slower than GPUs for PyTorch. Another contributing bottleneck for TPUs is **kernel recompilation on dynamic shapes**, which can lead to slower results for tests that use dynamic shapes when running on TPUs.

**TensorFlow Latency Differences** Kernel recompilation on dynamic shapes is also a contributing factor for TensorFlow; it leads to our greatest latency difference in TensorFlow, on the SVD function. **Data transfer pauses:** While input processing is implemented in TensorFlow, data transfer remains a bottleneck. In some cases, this data preparation and transfer can take longer than the XLA process itself. **Unequal Speedups Due to Specialization:** The largest benefits of TPUs will be on operations involving matrix multiplication. For other operations, large speed-ups are not ensured.

**Performance Comparison Across Hardware Versions:** Referring to Figure 7, 9.09% of TensorFlow functions exhibit a 1.5X performance enhancement when transitioning from a T4 GPU to an A100 GPU. Additionally, 28.07% and 9.09% of PyTorch functions achieve a 1.5X speed improvement when running on the newer GPU and TPU versions, respectively. In contrast, JAX functions display minimal gains of just 0.05% on the GPU and 0.02% on the TPU.

## 4 Related work

**Deep learning frameworks**: The rapid adoption and commercial success of deep learning has spurred the development of software frameworks tailored to deep neural network workloads. Many of the most widely used libraries for machine learning workloads are Python libraries like TensorFlow (Abadi et al., 2015), Theano (Team et al., 2016), Chainer (Tokui et al., 2019), MXNet (Chen et al., 2015), PyTorch (Paszke et al., 2019) and JAX (Bradbury et al., 2018).
Despite the variety in frameworks, there has been no study to our knowledge of the difficulty of porting these frameworks between different types of hardware. **Narrowing of AI research**: The specialization of hardware to create efficiencies for machine learning workloads has created concerns about a narrowing in research contributions. Recent work (Hooker, 2021; Barham and Isard, 2019) suggests that inflexible high-performance kernels and limited programming abstractions are hindering innovative machine learning research. (Hooker, 2021) argues that the availability of accelerator hardware determines the success of ML algorithms potentially more than their intrinsic merits - that the success of ideas hinges on alignment with hardware on software. (Klinger et al., 2022) analyzes arXiv papers and finds that AI research has stagnated in recent years and that AI research involving the private sector tends to be less diverse and more influential than research in academia. Several works (Ahmed and Wahed, 2020) point to the growing compute divide, which impacts accessibility to research and ideas. **Portability of software frameworks**: Different designs for technology are possible, and some designs are more desirable from an innovation standpoint than others (David et al., 2009). However, circumstances such as chance events, shortsightedness, and lack of coordination can lead to a situation where an inferior design becomes dominant and difficult to transition away from, even after its limitations become evident (Arthur, 1994; David, 1985). In the face of uncertainty regarding the advantages and risks associated with different technologies and the desire to avoid getting stuck with an inferior design prematurely, it might be sensible to implement policies that maintain diversity in the technological landscape (David et al., 2009). A third and final reason to preserve technological mobility and reduce the cost of exploration: innovation involves the creative recombination of ideas, and unusual mixes are often an important source of radical and transformative innovations (Arthur, 2011). Figure 7: Percentage of functions faster on new GPU/TPU compared with old ones. ## 5 Limitations While our work does a great deal to quantify existing gaps in portability, it has some important limitations. Firstly we recorded latency calculations and failure categories on two types of GPUs (A100s and T4s) and two types of TPUs (v2-8 and v3-8). We believe the similar error rates between types of GPUs show that at least for failure rates there is a good deal of consistency between types of GPUs. Worthwhile extensions of this work would include adding more device types to get a more robust view of overall portability and its trend. Secondly, this paper does not explore in depth why these portability gaps exist. We provide some broad hypotheses on why there might be differences in Section 3.2, but we leave it to future work to pinpoint why these differences exist. One reason for our limitation is due to the lack of access to CUDA internals as it is not completely open source. Understanding the differences in kernels between devices and framework implementations is a daunting task and outside of the scope of this work. ## 6 Conclusion We benchmark three widely used and adopted machine learning libraries to evaluate the ease of portability across different hardware types. We find large differences in the subset of software operations supported on different types of hardware. 
We find that PyTorch and TensorFlow, in particular, have pronounced portability issues. On GPUs, 22% of the TensorFlow benchmark functions fail partially or completely. On TPU, a remarkable 44% of PyTorch benchmark functions partially or completely fail. Even where there is portability, significant gaps exist in performance between each framework. We observe that when transferring functions from GPU to TPU, 81.4% of functions in PyTorch exhibit more than 10x slowdown. Significant work remains to ensure portability and performance between device types. Currently, major slowdowns and broken operations are the norms, and widely used frameworks have over-promised when it comes to portability. This lack of portability has costs to innovation: incentivizing researchers to stick with the tooling they have, which often makes it harder to stray off the beaten path of research ideas. Innovation often occurs when it is cheaper to explore, but our tooling stacks have clearly quantifiable friction that deters exploration. Valuable future work includes developing standardized approaches to machine learning tooling that enable greater portability between different hardware types, and benchmarking additional hardware types. We hope by releasing our benchmark dataset, we can spur greater visibility into what frameworks need more support. \begin{table} \begin{tabular}{l l r r r} \hline \hline & & TensorFlow & PyTorch & JAX \\ \hline GPU & Top 20 & 33\% & 0\% & 0\% \\ & All Functions & 22\% & 10\% & 2\% \\ \hline TPU & Top 20 & 27\% & 46\% & 0\% \\ & All Functions & 30\% & 44\% & 4\% \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of failure rates between functions in the top 20 and the overall failure rate across all deciles.
2306.17517
Universal properties of repulsive self-propelled particles and attractive driven particles
Motility-induced phase separation (MIPS) is a nonequilibrium phase separation that has a different origin from equilibrium phase separation induced by attractive interactions. Similarities and differences in collective behaviors between these two types of phase separation have been intensely discussed. Here, to study another kind of similarity between MIPS and attraction-induced phase separation under a nonequilibrium condition, we perform simulations of active Brownian particles with uniaxially anisotropic self-propulsion (uniaxial ABPs) in two dimensions. We find that (i) long-range density correlation appears in the homogeneous state, (ii) anisotropic particle configuration appears in MIPS, where the anisotropy removes the possibility of microphase separation suggested for isotropic ABPs [X.-Q. Shi et al., Phys. Rev. Lett. 125, 168001 (2020)], and (iii) critical phenomena for the anisotropic MIPS presumably belong to the universality class for two-dimensional uniaxial ferromagnets with dipolar long-range interactions. Properties (i)-(iii) are common to the well-studied randomly driven lattice gas (RDLG), which is a particle model that undergoes phase separation by attractive interactions under external driving forces, suggesting that the origin of phase separation is not essential for macroscopic behaviors of uniaxial ABPs and RDLG. Based on the observations in uniaxial ABPs, we construct a coarse-grained Langevin model, which shows properties (i)-(iii) and corroborates the generality of the findings.
Hiroyoshi Nakano, Kyosuke Adachi
2023-06-30T10:06:12Z
http://arxiv.org/abs/2306.17517v2
# Universal properties of repulsive self-propelled particles and attractive driven particles ###### Abstract Motility-induced phase separation (MIPS) is a nonequilibrium phase separation that has a different origin from equilibrium phase separation induced by attractive interactions. Similarities and differences in collective behaviors between these two types of phase separation have been intensely discussed. Here, to study another kind of similarity between MIPS and attraction-induced phase separation under a nonequilibrium condition, we perform simulations of active Brownian particles with uniaxially anisotropic self-propulsion (uniaxial ABPs) in two dimensions. We find that (i) long-range density correlation appears in the homogeneous state, (ii) anisotropic particle configuration appears in MIPS, where the anisotropy removes the possibility of microphase separation suggested for isotropic ABPs [X.-Q. Shi _et al._, Phys. Rev. Lett. 125, 168001 (2020)], and (iii) critical phenomena for the anisotropic MIPS presumably belong to the universality class for two-dimensional uniaxial ferromagnets with dipolar long-range interactions. Properties (i)-(iii) are common to the well-studied randomly driven lattice gas (RDLG), which is a particle model that undergoes phase separation by attractive interactions under external driving forces, suggesting that the origin of phase separation is not essential for macroscopic behaviors of uniaxial ABPs and RDLG. Based on the observations in uniaxial ABPs, we construct a coarse-grained Langevin model, which shows properties (i)-(iii) and corroborates the generality of the findings. ## I Introduction Liquid-gas or liquid-liquid phase separation is a typical collective phenomenon that has been observed in a wide range of systems from polymer solution [1] to biological materials [2; 3]. Basically, equilibrium phase separation is caused by attractive interactions between molecules or particles [1], and the corresponding critical phenomena have been considered to belong to the Ising universality class [4; 5; 6]. In contrast, in nonequilibrium systems, depending on how the detailed balance is broken, the critical exponents for phase separation can deviate from the Ising model values [7], and phase separation can emerge from different mechanisms such as chemical reactions [8] and coupling to multiple heat baths [9]. A comprehensive understanding of the seemingly broad spectrum of nonequilibrium phase separation requires theoretical studies from a unified viewpoint. For attractively interacting particles that undergo phase separation, one of the ways to break the detailed balance is external driving with bulk fields or boundary reservoirs, which generically changes the density correlation to long-ranged [7; 10; 11; 12; 13; 14; 15; 16] and leads to nonequilibrium critical phenomena [7; 17; 18; 19; 20; 21; 22]. The driven lattice gas (DLG) [18; 23] and randomly driven lattice gas (RDLG) [24; 25; 26; 27; 28; 29; 30] are prototypical models of nonequilibrium phase separation, in which particles stochastically move with short-range attractive interactions under external driving forces. Unidirectional and uniaxial driving forces are assumed in DLG and RDLG, respectively, which makes the difference in symmetry between these two models. In DLG and RDLG, spatial anisotropy of the driving force causes long-range density correlation [25] and critical phenomena that do not belong to the Ising universality class [31; 32; 33; 34]. 
In particular, the universality class for RDLG has been considered as that for uniaxial ferromagnets with dipolar long-range interactions [35; 36], according to the renormalization group (RG) analysis [27; 32]. Self-propulsion is another way to break the detailed balance [37; 38; 39]. In a crowd of self-propelled particles, or active matter, collective phenomena ranging from giant number fluctuations [40; 41] to active turbulence [42] have been found using biological [43; 44; 45; 46; 47] and artificial [48; 49; 50; 51; 52; 53; 54; 55; 56] systems. In particular, as shown in simulations [57; 58; 59; 60; 61] and experiments [62], self-propelled particles with repulsive interactions can undergo phase separation, which is called motility-induced phase separation (MIPS) [63]. No attractive interactions are necessary for MIPS, which is distinct from equilibrium phase separation or nonequilibrium phase separation under external driving. MIPS has been studied in comparison with equilibrium phase separation, and similarities and differences between them have been reported [see Figs. 1(a) and (c)]. For example, the global phase diagrams for MIPS [64; 65] and equilibrium phase separation are similar if we exchange the axis of self-propulsion strength for MIPS with that of attractive interaction strength for equilibrium phase separation. In addition, the lever rule [1], which is common to equilibrium phase separation, holds for MIPS in particle models [66], and consistently, effective free energy has been proposed based on coarse-grained models [67; 68]. In contrast, it is still unclear whether the critical phenomena for MIPS belong to the Ising universality class [69; 70; 71; 72]. Furthermore, as a unique feature of MIPS, the nucleation of persistent gas bubbles that can lead to microphase separation has been found [66; 73; 74; 75]. In the previous work [76], one of the authors has proposed another kind of similarity between the anisotropic version of MIPS and attraction-induced phase separation under external driving. Briefly, it has been found that a lattice gas model with spatially anisotropic self-propulsion exhibits a variety of collective behaviors: long-range density correlation, anisotropic phase separation, and critical phenomena with the universality class expected to be the same as that for uniaxial dipolar ferromagnets. All these behaviors have also been seen in RDLG, which indicates a connection between repulsively interacting particles with anisotropic self-propulsion and attractively interacting particles under external driving. However, the generality of such observations is still unclear beyond the considered lattice gas model. In particular, though persistent gas bubbles have been observed in active Brownian particles (ABPs) [66], a prototypical model of MIPS [59], the fate of gas bubbles under spatial anisotropy has not been investigated. More broadly, systematic studies of the effect of spatial anisotropy on active matter are still scarce [77; 78; 79; 80; 81]. In this paper, toward a comprehensive understanding of the relation between the anisotropic MIPS and attraction-induced phase separation under external driving, we consider ABPs with anisotropic self-propulsion. In Fig. 1, we show typical particle configurations obtained from model simulations for the above-mentioned four types of phase separation: attraction/motility-induced phase separation with isotropic/anisotropic dynamics. In each panel of Fig. 
1, we also schematically show the single particle motion and typical configuration of small clusters, which can grow up to a macroscopic scale and lead to phase separation. Our present focus is on the relation between the two types of anisotropic phase separation in the right panels of Fig. 1. We perform simulations of ABPs with uniaxially anisotropic self-propulsion (uniaxial ABPs). We find that, as expected from the previous study [76], uniaxial anisotropy dramatically changes the collective behaviors and causes long-range correlation, anisotropic phase separation, and critical phenomena that are presumably in the same universality class as that for uniaxial dipolar ferromagnets. Furthermore, uniaxial anisotropy suppresses the growth of gas bubbles in MIPS [66] and stabilizes macroscopic phase separation. Developing a coarse-grained model for particles with anisotropic self-propulsion, we corroborate the generality of the observed phenomena. ## II Microscopic models In this section, we explain the numerical implementation of uniaxial ABPs and RDLG, which are anisotropic extensions of the isotropic ABPs and equilibrium lattice gas, respectively. We also present phase diagrams for the two models, which provide preliminary insights into collective behaviors. ### Active Brownian particles with uniaxial anisotropy For uniaxial ABPs, \(N\) particles are confined in \([0,L_{x}]\times[0,L_{y}]\) with periodic boundary conditions. The state of the \(i\)th particle is specified by position \(\mathbf{r}_{i}\) and polarity angle \(\theta_{i}\). The Figure 1: Four types of phase separation. The row and column correspond to the type of phase separation (attraction- or motility-induced) and the type of dynamics (isotropic or anisotropic), respectively. In each panel, a typical particle configuration obtained from model simulations is shown with schematic figures of the single particle motion and small cluster formation. (a) Brownian particles follow overdamped dynamics with attractive interactions (wavy lines) and random forces. (b) In RDLG, particles stochastically move with attractive interactions (wavy lines) and external driving force (red arrow) along an axis (i.e., \(y\)-axis in the figure). (c) ABPs show self-propelled motion (red arrow) with repulsive interactions and random forces. (d) Uniaxial ABPs show anisotropic self-propelled motion favored along an axis (i.e., \(x\)-axis in the figure) with repulsive interactions and random forces similar to (c) [see Eq. (1) for the detail]. time evolution of \((\mathbf{r}_{i},\,\theta_{i})\) is governed by \[\left\{\begin{array}{l}\frac{d\tau_{i}^{a}}{dt}=\mu_{i}^{ab}\left[F_{0}n_{i}^{b} -\sum_{\vec{r}\neq i}\frac{\partial V(|\mathbf{r}_{i}-\mathbf{r}_{j}|)}{\partial r_{i}^ {b}}\right]+\eta_{i}^{a}\\ \frac{d\theta_{i}}{dt}=-\epsilon\frac{\partial U(\theta_{i})}{\partial\theta_{ i}}+\sqrt{2\tau\mu_{0}}\xi_{i}^{\theta},\end{array}\right. \tag{1}\] where \(a\in\{x,y\}\), \(\mu_{i}^{ab}:=\mu_{i}\eta_{i}^{a}n_{i}^{b}+\mu_{\perp}(\delta^{ab}-n_{i}^{a}n _{i}^{b})\), \(\eta_{i}^{a}:=\sqrt{2\tau\mu_{0}}\xi_{i}^{\eta}n_{i}^{a}+\sqrt{2\tau\mu_{1}} \xi_{i}^{\pm}(\mathbf{n}_{i}\times\mathbf{\hat{z}})^{\eta}\), and \(\mathbf{n}_{i}:=(\cos\theta_{i},\sin\theta_{i})\). Also, \(\xi_{i}^{\pm}\), \(\xi_{i}^{\pm}\), and \(\xi_{i}^{a}\) are Gaussian white noises with zero mean and unit variance. We assume the two-body interaction as \(V(r)=(k/2)(\sigma-r)^{2}\) for \(r<\sigma\) and \(V(r)=0\) otherwise. 
The potential for the polarity angle, \(U(\theta)\), is added to model the effect of spatial anisotropy on self-propulsion, and \(\epsilon\) (\(\geq 0\)) represents the strength of anisotropy. In this work, we use a simple potential function, \(U(\theta)=-\cos(2\theta)\), which enhances the alignment of polarity along the \(x\)-axis (i.e., \(\theta=0\) or \(\pi\)). Note that the polarity angle of each particle can take any value between \(0\) and \(2\pi\), in contrast to the previous model [76], in which the polarity angle is restricted to \(0\) or \(\pi\). We also stress that we consider anisotropy of the self-propulsion direction, not of the particle shape. Throughout the numerical study, we set \(\sigma=1\), \(k=20\), and \(\tau=0.01\). The controlled parameters are system size \((L_{x},L_{y})\), particle density \(\rho:=N/(L_{x}L_{y})\), mobilities \((\mu_{1},\mu_{\perp},\mu_{0})\), magnitude of anisotropy \(\epsilon\), and the Peclet number, \(\mathrm{P}_{\mathrm{c}}:=F_{0}\sigma/\tau\), which represents the dimensionless strength of self-propulsion. The simulations are performed using LAMMPS [82, 83]. The time integration is performed by the Euler method with timestep \(dt=0.02\). Figure 2(a) displays snapshots with two sets of parameters, which show that this model undergoes anisotropic phase separation. We stress that there is no attractive interaction in uniaxial ABPs, just like isotropic ABPs. As suggested in Fig. 1(d), this phase separation originates from the self-propulsion of each particle. We also present the phase diagram in Fig. 2(c); phase separation emerges for large Pe, which is also the same as in isotropic ABPs. Thus, this phase separation is regarded as the anisotropic extension of isotropic MIPS [see Fig. 1(c)]. ### Randomly driven lattice gas For RDLG, we consider \(N\) particles on a square lattice with system size \((L_{x},L_{y})\) in units of the lattice constant. The state of the \(i\)th site is specified by occupation number \(n_{i}\), and the set of \(n_{i}\) represents the configuration of the whole system. We assume exclusion between particles so that each site can be occupied by at most one particle, i.e., \(n_{i}\in\{0,1\}\). We also consider attractive interaction between neighboring particles, which is represented by the following Hamiltonian: \[H=-J\sum_{(i,j)}n_{i}n_{j}. \tag{2}\] The state of the system is updated in three steps: 1. We randomly choose two adjacent sites, \((i,j)\), and calculate the energy difference \((\Delta H)\) between the original configuration and the new configuration obtained by exchanging the state of the \(i\)th site with the state of the \(j\)th site. Figure 2: Phase behaviors of uniaxial ABPs and RDLG. (a) Typical snapshots of uniaxial ABPs. The parameters are \((L_{x},L_{y})=(64,64)\), \(\rho=0.71\) (\(N=2908\)), \((\mu_{1},\mu_{\perp},\mu_{0})=(1,0.25,2.75)\), \(\epsilon=0.01\), Pe = 2.5 (left), and Pe = 37.5 (right). (b) Typical snapshots of RDLG. The parameters are \((L_{x},L_{y})=(64,64)\), \(\rho=0.5\) (\(N=2048\)), \(E=100\), \(\beta=0.1\) (left), and \(\beta=0.40\) (right). (c) Phase diagram of uniaxial ABPs. The parameters are the same as those for (a). (d) Phase diagram of RDLG. The parameters are the same as those for (b). In (c) and (d), the signs (+/\(\times\)) indicate the parameter sets where the left/right panels of (a) and (b) are calculated, respectively. The triangle (\(\vartriangle\)) represents the estimated critical point, the properties of which are discussed in Sec. V.2. 2. 
If sites \((i,j)\) are located along the \(x\)-axis, the new configuration is accepted with probability \(\min(1,e^{-\beta\Delta H})\). 3. If sites \((i,j)\) are located along the \(y\)-axis, the new configuration is accepted with probability \(\min(1,e^{-\beta(\Delta H+E\nu)})\), where \(E\) is the strength of the external field, and \(\eta\) is a random number drawn from a Gaussian distribution with zero mean and unit variance. For step 3, the random driving force is applied along the \(y\)-axis. We basically set the parameters to \(J=4\) and \(E=100\) and control \(\beta\) and \(\rho:=N/L_{x}L_{y}\). Here, \(E=100\) is practically equivalent to the limiting case with \(E=\infty\), where the configuration is updated regardless of the value of \(\Delta H\). This limiting case has been commonly used in simulations of DLG and RDLG [7]. Figure 2(b) displays snapshots with two sets of parameters. This model undergoes phase separation induced by the attractive interaction [Fig. 1(b)] though the motion of each particle is affected by the random driving force. We present the phase diagram in Fig. 2(d); phase separation is controlled by inverse temperature \(\beta\), just like in equilibrium particle systems with attractive interactions [see Fig. 1(a)]. ### Orientation of phase separation Self-propulsion is favored along the \(x\)-axis in uniaxial ABPs, while the driving force is applied along the \(y\)-axis in RDLG. Despite this difference in the direction of the enhanced particle motion, the dense and dilute regions are segregated along the \(x\)-axis in both uniaxial ABPs and RDLG [see Figs. 2(a) and (b)]. Such a coincidence of the collective behavior can be interpreted from a microscopic viewpoint as follows. For uniaxial ABPs, self-propulsion induces persistent collision of particles along the \(x\)-axis, leading to effective adhesion between particles along the \(x\)-axis. Since this type of collision is less probable along the \(y\)-axis, particles can move more freely along the \(y\)-axis. Thus, particle clusters that are caused by the effective adhesion should be elongated along the \(y\)-axis, which results in the segregation along the \(x\)-axis [see Fig. 1(d)]. Note that similar cluster patterns have been recently found in simulations of ABPs with anisotropic self-propulsion [81]. For RDLG, the driving force enhances the free motion of particles along the \(y\)-axis. Thus, particle clusters caused by the attractive interaction should be elongated in the \(y\) direction, leading to the segregation along the \(x\)-axis [see Fig. 1(b)]. See Appendix A for further comparisons between uniaxial ABPs and RDLG. ## III Properties of homogeneous state Hydrodynamic descriptions are helpful in understanding the collective behavior of particles. For RDLG, homogeneous state properties have been studied using a linear coarse-grained model [84; 27]: \[\partial_{t}\phi+\mathbf{\nabla}\cdot\mathbf{j}=0 \tag{3}\] with \[\left\{\begin{array}{l}j_{x}=-\partial_{x}(a_{x}-K_{xx}\partial_{x}^{2}-K_{ xy}\partial_{y}^{2})\phi+\sqrt{2D_{x}}\xi_{x}\\ j_{y}=-\partial_{y}(a_{y}-K_{xx}\partial_{x}^{2}-K_{yy}\partial_{y}^{2})\phi+ \sqrt{2D_{y}}\xi_{y}.\end{array}\right.\] Here, \(\phi(\mathbf{r},t)\) is the density fluctuation field, \(\mathbf{\xi}(\mathbf{r},t)\) is a Gaussian noise with \(\langle\xi_{a}(\mathbf{r},t)\rangle=0\) and \(\langle\xi_{a}(\mathbf{r},t)\xi_{b}(\mathbf{r}^{\prime},t^{\prime})\rangle=\delta_{ab} \delta(\mathbf{r}-\mathbf{r}^{\prime})\delta(t-t^{\prime})\). 
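As a small numerical illustration of this linear model, the snippet below evaluates the steady-state structure factor it implies (the expression derived as Eq. (10) below) on a grid of wavevectors; the \(\mathbf{k}\to 0\) limit depends on the direction of approach, which is the discontinuity discussed in the next subsection. The coefficient values are placeholders, not the fitted values reported later.

```python
import numpy as np

# Steady-state structure factor of the linear model, Eq. (3):
#   S_lin(k) = (Dx kx^2 + Dy ky^2) / (ax kx^2 + ay ky^2 + Kxx kx^4 + 2 Kxy kx^2 ky^2 + Kyy ky^4)
ax, ay = 0.10, 0.08            # placeholder coefficients
Kxx, Kxy, Kyy = 1.0, 0.5, 0.15
Dx, Dy = 0.03, 0.006

k = np.linspace(-1.0, 1.0, 201)
kx, ky = np.meshgrid(k, k, indexing="ij")
num = Dx * kx**2 + Dy * ky**2
den = ax * kx**2 + ay * ky**2 + Kxx * kx**4 + 2 * Kxy * kx**2 * ky**2 + Kyy * ky**4
S = np.divide(num, den, out=np.full_like(num, np.nan), where=den > 0)

# Direction-dependent k -> 0 limits (cf. Eq. (9)): Dx/ax along kx, Dy/ay along ky.
print(Dx / ax, Dy / ay)
```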
In the isotropic limit (\(K_{xx}=K_{xy}=K_{yx}=K_{yy}=K\), \(a_{x}=a_{y}=a\), and \(D_{x}=D_{y}=D\)), Eq. (3) is reduced to the so-called model B [85], \[\partial_{t}\phi=\mathbf{\nabla}^{2}\frac{\delta\mathcal{H}}{\delta\phi}-\sqrt{2 D}\mathbf{\nabla}\cdot\mathbf{\xi}, \tag{4}\] where \(\mathcal{H}\) is a coarse-grained Hamiltonian: \[\mathcal{H}=\int d^{2}\mathbf{r}\left[\frac{a}{2}\phi^{2}+\frac{K}{2}(\mathbf{\nabla }\phi)^{2}\right]. \tag{5}\] Thus, Eq. (3) is regarded as an extension of model B to an anisotropic system that respects the symmetry of particle dynamics in RDLG. In the following, we demonstrate that the homogeneous states of uniaxial ABPs and RDLG exhibit the same type of long-range correlation as a generic feature of the nonequilibrium collective dynamics, which can be explained by Eq. (3). In Appendix D, using the well-known correspondence between RDLG and uniaxial dipolar ferromagnets [32; 27], we Figure 3: Singular structure factors in the homogeneous states. (a, b) Heatmap of structure factor \(S(\mathbf{k})\) for (a) uniaxial ABPs and (b) RDLG. (c, d) Density correlation functions \(C(x,0)\) (yellow) and \(C(0,y)\) (purple) for (c) uniaxial ABPs and (d) RDLG. In the insets, the absolute value is plotted on the log-log scale. The parameters used for (a, c) and (b, d) are the same as those for the left panels of Figs. 2(a) and (b), respectively. The system size is set to \(L_{x}=L_{y}=360\) for both models. further establish the connection between uniaxial ABPs and dipolar ferromagnets. ### Long-range density correlation The steady-state long-range correlation of a conserved quantity has been recognized as a general feature of nonequilibrium systems with anisotropic dynamics [11; 13]. Specifically, the fluctuation of a conserved quantity, which we denote as \(\delta A(\mathbf{r})\) here, decays as \[\langle\delta A(\mathbf{r})\delta A(\mathbf{r}^{\prime})\rangle\sim c_{\rm eq}e^{-|\bm {r}-\mathbf{r}^{\prime}|/\xi}+\frac{c_{\rm neq}}{|\mathbf{r}-\mathbf{r}^{\prime}|^{\alpha}}, \tag{6}\] where \(\langle\cdot\rangle\) is an ensemble average in the steady state, and \(c_{\rm eq}\) and \(c_{\rm neq}\) are constants. The first term represents an exponential decay that also appears in equilibrium systems, while the second term is a nonequilibrium correction that leads to the long-range correlation with a power-law decay. The presence of long-range correlation (i.e., \(c_{\rm neq}\neq 0\)) is ubiquitous in nonequilibrium systems with spatial anisotropy. In uniaxial ABPs and RDLG, the self-propulsion and driving force violate the detailed balance in a spatially anisotropic way, respectively. Thus, the long-range correlation of the density field, which is a locally conserved field, is expected to appear in both systems. Though RDLG has been known to show the long-range correlation [84; 11; 27], for completeness, we explain the results for uniaxial ABPs and RDLG in parallel. Assuming small self-propulsion Pe in uniaxial ABPs and low inverse temperature \(\beta\) in RDLG [see the plus sign (+) in Figs. 2(c) and (d)], we focus on typical homogeneous states [Figs. 2(a) and (b)]. We calculate the structure factor and the two-point correlation function, which are defined as \[S(\mathbf{k}):=\frac{1}{L_{x}L_{y}}\langle|\delta\tilde{\rho}(\mathbf{k})|^{2}\rangle \tag{7}\] and \[C(\mathbf{r}):=\big{\langle}\delta\rho(\mathbf{r})\delta\rho(\mathbf{0})\big{\rangle}, \tag{8}\] respectively. 
Here, \(\rho(\mathbf{r}):=\sum_{i=1}^{N}\delta(\mathbf{r}-\mathbf{r}_{i})\), \(\delta\rho(\mathbf{r}):=\rho(\mathbf{r})-\langle\rho(\mathbf{r})\rangle\), and \(\tilde{\rho}(\mathbf{k})\) is the Fourier transform of \(\rho(\mathbf{r})\). We show the heatmaps of \(S(\mathbf{k})\) for uniaxial ABPs and RDLG in Figs. 3(a) and (b), respectively, both of which exhibit owl-like or butterfly-like patterns [84]. Analytically, the observed pattern of \(S(\mathbf{k})\) can be characterized by the discontinuity at the origin in Fourier space, i.e., \[\lim_{k_{x}\to 0}S(k_{x},k_{y}=0)\neq\lim_{k_{y}\to 0}S(k_{x}=0,k_{y}). \tag{9}\] This discontinuity of \(S(\mathbf{k})\) reflects the power-law decay of \(C(\mathbf{r})\) in real space [84]. As shown in Figs. 3(c) and (d), the correlation function [\(C(x,y=0)\) (yellow) and \(C(x=0,y)\) (purple)] indeed shows a power-law decay as \(\sim r^{-2}\), which implies the long-range density correlation. The negative correlation observed in \(C(x,y=0)\) suggests the formation of transient clusters elongated along the \(y\)-axis. This orientation of clusters is consistent with the configurations in phase separation shown in Figs. 2(a) and (b) (see Sec. II.3).

### Linear coarse-grained model

According to previous studies, the owl-like pattern of the structure factor observed in RDLG [Fig. 3(b)] can be reproduced by the linear coarse-grained model [Eq. (3)] [84]. The similar pattern observed in uniaxial ABPs [Fig. 3(a)] suggests that uniaxial ABPs and RDLG share the same macroscopic dynamics described by Eq. (3). To confirm the validity of Eq. (3) for both uniaxial ABPs and RDLG, we examine the structure factor for the coarse-grained density fluctuation, \(S_{\rm lin}(\mathbf{k}):=\langle|\tilde{\phi}(\mathbf{k})|^{2}\rangle/(L_{x}L_{y})\), where \(\tilde{\phi}(\mathbf{k})\) is the Fourier transform of \(\phi(\mathbf{r})\). From Eq. (3), we can obtain [84; 7] \[S_{\rm lin}(\mathbf{k})=\frac{D_{x}{k_{x}}^{2}+D_{y}{k_{y}}^{2}}{a_{x}{k_{x}}^{2}+a_{y}{k_{y}}^{2}+K_{xx}{k_{x}}^{4}+2K_{xy}{k_{x}}^{2}{k_{y}}^{2}+K_{yy}{k_{y}}^{4}}. \tag{10}\] For uniaxial ABPs, we fit the simulation data of \(S(\mathbf{k})\) for \(\mathbf{k}\in[2\pi/L_{x},20\pi/L_{x}]\times[2\pi/L_{y},20\pi/L_{y}]\) with Eq. (10), using \(D_{x}\), \(D_{y}\), \(a_{x}\), \(a_{y}\), \(K_{xy}\), and \(K_{yy}\) as fitting parameters with \(K_{xx}=1\). The fitting results are as follows: \[D_{x}=0.0287,\ D_{y}=0.00600,\ a_{x}=0.0990,\ a_{y}=0.0778,\] \[K_{xy}=0.525,\ K_{yy}=0.145. \tag{11}\] Figure 4: Quantitative comparison between the simulated structure factor and the theoretical expression [Eq. (10)], where the same data as plotted in Fig. 3 is used. (a, b) Structure factor \(S(k_{x},k_{y})\) with \(k_{y}=4\pi/L_{y}\), \(8\pi/L_{y}\), and \(12\pi/L_{y}\) for (a) uniaxial ABPs and (b) RDLG. (c, d) Structure factor \(S(k_{x},k_{y})\) with \(k_{x}=4\pi/L_{x}\), \(8\pi/L_{x}\), and \(12\pi/L_{x}\) for (c) uniaxial ABPs and (d) RDLG. In all figures, the colored dots represent the simulation results, and the black lines represent the theoretical expression with the best-fit parameters. In Figs. 4(a) and (c), we plot the observed \(S(\mathbf{k})\) (with dots) and the fitted \(S_{\rm lin}(\mathbf{k})\) (with lines). The results show that Eq. (10) quantitatively reproduces the observed behavior of the structure factor for small \(|\mathbf{k}|\), which reflects the long-wavelength density fluctuation. We also fit the simulation data of RDLG in the same way as used for uniaxial ABPs.
The fitting results are as follows: \[D_{x}=1.37,\ D_{y}=1.00,\ a_{x}=1.41,\ a_{y}=4.52,\] \[K_{xy}=0.609,\ K_{yy}=-0.0899. \tag{12}\] In Figs. 4(b) and (d), we compare the observed \(S(\mathbf{k})\) and the fitted \(S_{\rm lin}(\mathbf{k})\), which show quantitative agreement as expected. As discussed in previous studies of DLG and RDLG [7], we can derive the asymptotic behavior of the long-range part of the correlation function, \(C_{\rm lin}(\mathbf{r})\), which is the inverse Fourier transformation of \(S_{\rm lin}(\mathbf{k})\). From Eq. (10) we can obtain \[C_{\rm lin}(x,0)\sim-x^{-2},\ C_{\rm lin}(0,y)\sim y^{-2}\ \ \ (r\to\infty), \tag{13}\] which is also consistent with the power-law decay of \(C(\mathbf{r})\) observed in uniaxial ABPs [Fig. 3(c)] and RDLG [Fig. 3(d)]. ## IV Phase separation properties As briefly explained in Sec. II, uniaxial ABPs and RDLG undergo anisotropic phase separation (Fig. 2). In this section, we investigate the properties of phase separation of uniaxial ABPs in more detail. We focus on the nucleation of persistent gas bubbles and the possibility of microphase separation, which have been found in recent studies [66]. ### Anisotropy-induced removal of gas bubbles In Fig. 5(a), we show typical density fields in the phase-separated states for three different values of \(\epsilon\). The detailed procedure for drawing this figure is given in Appendix E. From this figure, we find that for \(\epsilon=0\), numerous gas bubbles are nucleated within the liquid phase. Throughout this paper, we use a "gas bubble" to refer to a connected region of the gas phase surrounded by the largest liquid phase. Note that we regard the largest gas phase as the gas reservoir and not as the gas bubble (see Appendix E.2 for the method to detect gas bubbles). As \(\epsilon\) increases, the number of gas bubbles decreases. For sufficiently large values of \(\epsilon\) (e.g., \(\epsilon=0.02\)), the presence of gas bubbles becomes less evident. To quantitatively characterize this observation, we define the bubble fraction as \[f_{b}:=\frac{S_{\rm bubble}}{S}, \tag{14}\] where \(S:=L_{x}L_{y}\) and \(S_{\rm bubble}\) is the total area occupied by gas bubbles. We plot \(f_{b}\) as a function of \(\epsilon\) in Fig. 5(b), which shows that the fraction of gas bubbles monotonically decreases as \(\epsilon\) increases. For sufficient large \(\epsilon\), \(f_{b}\) reaches zero, indicating the absence of gas bubbles. This observation demonstrates that the uniaxial self-propulsion prevents the nucleations of gas bubbles. In isotropic ABPs (i.e., \(\epsilon=0\)), the nucleation of gas bubbles has been examined in Ref. [66], which has revealed a connection between the existence of gas bubbles and a novel type of phase separation called microphase separation [73]. To briefly explain the previous results in Ref. [66], we focus on the size distribution of gas bubbles divided by the total liquid area, Figure 5: Persistent gas bubbles in the phase-separated state of uniaxial ABPs. (a) Typical snapshots in the steady state at \((L_{x},L_{y})=(1440,720)\) for three values of \(\epsilon\). The colors represent the particle density from \(0\) (blue) to \(1.5\) (red). (b) Bubble fraction \(f_{b}\) as a function of \(\epsilon\) for \((L_{x},L_{y})=(1440,720)\). (c, d) Bubble size distribution divided by the total liquid area, \(n(a)/S_{\rm liq}\), for (c) isotropic (\(\epsilon=0\)) and (d) anisotropic (\(\epsilon=0.002\)) systems. 
(e) \(n(a)/S_{\rm liq}\) for three values of \(\epsilon\) for \((L_{x},L_{y})=(2880,1440)\). In all figures, the parameters are chosen as \(\rho=0.765\), \((\mu_{1},\mu_{\perp},\mu_{\theta})=(1,0.3125,2.75)\), and \(\text{Pe}=100\). \(n(a)/S_{\rm{\,iq}}\), where \(a\) is the area of a single bubble. In Fig. 5(d), we plot \(n(a)/S_{\rm{\,iq}}\) for isotropic ABPs. We find that \(n(a)/S_{\rm{\,iq}}\) for large \(a\) fits well with the power-law decay observed in the reduced bubble model [66]: \[\frac{n(a)}{S_{\rm{\,iq}}}\sim a^{\alpha}\ \ \ (\alpha=-1.77). \tag{15}\] Considering that the bubble fraction, \(f_{b}\), and the size distribution, \(n(a)\), are related as [86] \[f_{b}=\frac{1}{S}\int_{0}^{S_{\rm{\,pu}}}an(a)da, \tag{16}\] we can derive the system size dependence of \(f_{b}\) as \[f_{b}\sim\chi_{\rm{\,iq}}\chi_{\rm{\,gas}}^{a+2}S^{a+2}. \tag{17}\] Here, \(\chi_{\rm{\,iq}}:=S_{\rm{\,iq}}/S\) and \(\chi_{\rm{\,gas}}:=1-\chi_{\rm{\,iq}}\) represent the area fractions of the liquid and gas phases, respectively, and are nearly independent of the system size, \(S\). Thus, as \(S\) increases, \(f_{b}\) is expected to increase until it reaches the area fraction of the gas phase, \(\chi_{\rm{\,gas}}\). This implies that the whole gas phase exists as persistent gas bubbles surrounded by the liquid phase. This state has been defined as the microphase-separated state [66]. As seen in Fig. 5(a), we find that gas bubbles are still observed for small but finite \(\epsilon\). We consider whether the size distribution of such gas bubbles can show the power-law decay as observed in isotropic ABPs (i.e., \(\epsilon=0\)). In Fig. 5(d), we plot \(n(a)/S_{\rm{\,iq}}\) for \(\epsilon=0.002\). In contrast to the isotropic case, the bubble size distribution does not show the power-law behavior. Note that this result is not attributed to the finite-size effect since \(n(a)/S_{\rm{\,iq}}\) for different system sizes fall on a universal curve. More specifically, \(n(a)/S_{\rm{\,iq}}\) for \(\epsilon=0.002\) decays faster than \(a^{-2}\). From Eq. (17), \(f_{b}\) is expected to converge to zero in the large system size limit, implying that uniaxial ABPs undergo macroscopic phase separation rather than microphase separation. Thus, we confirm that the type of phase separation significantly changes by the anisotropic self-propulsion. We also plot the \(\epsilon\) dependence of \(n(a)/S_{\rm{\,iq}}\) for \((L_{x},L_{y})=(2880,1440)\) in Fig. 5(e), which shows that the functional form of \(n(a)/S_{\rm{\,iq}}\) is changed by a small amount of \(\epsilon\). This suggests that microphase separation can be prohibited even for extremely small \(\epsilon\) (e.g., \(\epsilon=0.0005\)), though we need a more detailed finite-size scaling analysis to draw a conclusion. We comment on possible gas bubbles in RDLG. Note that previous studies on RDLG have not reported any possibility of microphase separation. As shown in Fig. 6(a), the nucleation of gas bubbles is hardly observed in typical snapshots for large systems, and macroscopic phase separation is expected to appear regardless of the strength of anisotropy. The bubble fraction, \(f_{b}\), plotted in Fig. 6(b) suggests that the nucleation of gas bubbles is suppressed by anisotropic external field \(E\) in a similar way to uniaxial ABPs. ### Nonlinear coarse-grained model Though the linear coarse-grained model [Eq. (3)] succeeds in explaining the homogeneous state far from the critical point as discussed in Sec. 
III, it cannot describe phase separation since nonlinear terms are not included. In previous studies on isotropic ABPs [66], the qualitative features of microphase separation and the mechanism behind the observed persistent gas bubbles have been demonstrated using a coarse-grained model called Active Model B+ (AMB+) [73; 87]. To discuss the observed suppression of gas bubbles by the anisotropic self-propulsion from a general perspective, we consider an anisotropic extension of AMB+: \[\partial_{t}\phi= a_{x}\partial_{x}{}^{2}\phi+a_{y}\partial_{z}\phi+\mathbf{\nabla}^{2}( b\phi^{3}-K\mathbf{\nabla}^{2}\phi+K^{\prime}\mathbf{\nabla}^{4}\phi)\] \[+\lambda\mathbf{\nabla}^{2}(\mathbf{\nabla}\phi)^{2}-\zeta\mathbf{\nabla} \cdot[(\mathbf{\nabla}^{2}\phi)\mathbf{\nabla}\phi]-\sqrt{2D}\mathbf{\nabla}\cdot\mathbf{ \xi}, \tag{18}\] which is also regarded as a nonlinear extension (i.e., adding the \(b\), \(\lambda\), and \(\zeta\) terms) of Eq. (3). The \(b\) term can be derived from a coarse-grained Hamiltonian, and the \(\lambda\) and \(\zeta\) terms reflect the violation of the time-reversal symmetry [73]. To improve numerical stability, the higher-order gradient term with a small \(K^{\prime}\) is also introduced. This term is irrelevant in the RG sense (see Appendix G.2 for the detail) and is not expected to affect the qualitative phase behavior. For simplicity, the effect of anisotropy is minimally retained in the difference between \(a_{x}\) and \(a_{y}\). Throughout the numerical study of Eq. (18), we set \(a_{x}=-0.25\), \(b=0.25\), \(K=1\), \(K^{\prime}=0.2\) and \(D=0.5\). We take \((\phi_{0},\lambda,\zeta)=(-0.1,0.5,5)\) and \((0.4,1,4)\) as low- and high-density cases, respectively, where \(\phi_{0}\) is the spatial average of \(\mathbf{\phi(r,t)}\). The strength of anisotropy is controlled by \(a_{y}\) (\(\geq a_{x}\)). Considering the periodic boundary conditions along both axes Figure 6: Absence of gas bubbles in RDLG. (a) Typical snapshots in the steady state for three values of \(E\). The colors represent the particle density from 0 (blue) to 1 (red). (b) Bubble fraction \(f_{b}\) as a function of \(E\). In all figures, the parameters are chosen as \(\rho=0.5\), \(\beta=0.556\), and \((L_{x},L_{y})=(720,360)\). \([\phi(x+L_{x},y,t)=\phi(x,y+L_{y},t)=\phi(x,y,t)]\), we perform numerical integration of Eq. (18) by the explicit Euler method (see Appendix G.1 for the detail). We regard the regions with \(\phi<0\) and \(\phi>0\) as the gas and liquid phases, respectively. We explain the isotropic limit (\(a_{x}=a_{y}\)) with the present parameter set. In the low-density case, we observe phase separation with persistent gas bubbles [Fig. 7(a), left], which is similar to the behavior of uniaxial ABPs [Fig. 5(a), left]. In the high-density case, we observe microphase separation, where gas bubbles are present throughout the system [Fig. 7(b), left]. Such phase behaviors are consistent with the previous observations in the isotropic AMB+ [73]. We consider the effect of anisotropy on phase separation with gas bubbles [Fig. 7(a)]. Similarly to the observation in uniaxial ABPs [Fig. 5(b)], we find the suppression of bubble fraction \(f_{b}\) as shown in Fig. 7(c). This suggests that the minimal extension of AMB+ (i.e., \(a_{x}\neq a_{y}\)) is sufficient to explain the qualitative behavior of uniaxial ABPs. We next examine the effect of anisotropy on microphase separation [Fig. 7(b)]. 
We find that microphase separation discontinuously changes into macroscopic phase separation, indicated by the abrupt change in \(f_{b}\) [Fig. 7(d)]. In addition, we define an order parameter for macroscopic phase separation along the \(x\)-axis as \(m:=S(k_{x}=2\pi/L_{x},0)\), where the structure factor is defined as \(S(\mathbf{k}):=\langle|\tilde{\phi}(\mathbf{k})|^{2}\rangle\left/(L_{x}L_{y}\right)\) with \(\tilde{\phi}(\mathbf{k}):=\int d^{2}\mathbf{r}\,e^{-i\mathbf{k}\cdot\mathbf{r}}\phi(\mathbf{r})\). As shown in the inset of Fig. 7(d), the discontinuous change in \(m\) also suggests the discontinuous transition between microphase separation and macroscopic phase separation. Let us focus on the case with \(a_{x}<0<a_{y}\) [see the right panels of Figs. 7(a) and (b)] to consider why strong anisotropy suppresses gas bubbles and stabilizes macroscopic phase separation. We neglect the noise term in Eq. (18) by the mean-field approximation, which has been used in the previous studies [67; 68; 73]. Then, the linearized equation for \(\phi-\phi_{0}\) is obtained in the Fourier space as \[\partial_{t}\tilde{\phi}(\mathbf{k},t)=-(a_{x}{k_{x}}^{2}+a_{y}{k_{y}}^{2}+K|\mathbf{ k}|^{4}+K^{\prime}|\mathbf{k}|^{6})\tilde{\phi}(\mathbf{k},t). \tag{19}\] From \(a_{x}<0<a_{y}\), \(K>0\), and \(K^{\prime}>0\), we see that the most unstable wavevector is along the \(k_{x}\)-axis. Thus, we approximately neglect the modulation in the \(y\) direction and replace Eq. (18) by \(\partial_{t}\phi=\partial_{x}{}^{2}\mu\), where \(\mu(x,t):=a_{x}\phi+b\phi^{3}-K\partial_{x}{}^{2}\phi+K^{\prime}\partial_{x}{} ^{4}\phi+(\lambda-\zeta/2)(\partial_{x}\phi)^{2}\). Here, chemical potential \(\mu\) is a local quantity, in contrast to the isotropic limit (\(a_{x}=a_{y}\)), where nonlocality of chemical potential can lead to phase separation with gas bubbles and microphase separation [73]. Thus, macroscopic phase separation is expected to appear for \(a_{x}<0<a_{y}\). ## V Critical properties Since uniaxial ABPs and RDLG share the common properties in the homogeneous and phase-separated states (see Secs. III and IV), we expect that the critical point for anisotropic phase separation in each model belongs to the same universality class. In the following, we support this expectation using the RG analysis of the coarse-grained model [Eq. (18)] and the finite-size scaling analysis of simulation data for uniaxial ABPs. ### Renormalization group analysis of coarse-grained model We consider the critical phase transition between the homogeneous and phase-separated states in the coarse-grained model [Eq. (18)] under sufficiently large anisotropy with \(a_{x}<a_{y}\). We first review the previous RG analyses of Eq. (18) for \(K^{\prime}=\lambda=\zeta=0\)[26; 27; 31; 32]. Retaining only the relevant variables in the RG sense, we can obtain a model that is equivalent to a coarse-grained model of uniaxial dipolar ferromagnets, which have dipolar long-range interactions [26; 27; 31; 32] (see Appendix D for the detail). At the two-loop level, the critical exponents for the coarse-grained model of uniaxial dipolar ferromagnets have been obtained [26; 27; 32] as \[\beta=0.315\,\ v_{x}=0.626\ \ \text{(Two-loop RG)}. \tag{20}\] Figure 7: Suppression of gas bubbles by anisotropy in the coarse-grained model. 
For a low-density condition [\((\phi_{0},\lambda,\zeta)=(-0.1,0.5,5)\)], we show (a) typical snapshots for \((L_{x},L_{y})=(256,128)\) and (c) bubble fraction \(f_{b}\) as a function of anisotropy strength \(a_{y}\) for the system lengths \(L_{x}\) (\(=2L_{y}\)). For a high-density condition [\((\phi_{0},\lambda,\zeta)=(0.4,1.4)\)], we show (b) typical snapshots for \((L_{x},L_{y})=(192,192)\) and (d) bubble fraction \(f_{b}\) as in (c). In the inset of (d), we plot the \(a_{y}\) dependence of \(m\), an order parameter for macroscopic phase separation. Here, \(\beta\) is the exponent for the onset of the order parameter, and \(\nu_{x}\) and \(\nu_{y}\) (\(\simeq 2\nu_{x}\)) are the exponents for the divergent correlation lengths along the \(x\)- and \(y\)-axes, respectively. For RDLG, the finite-size scaling analysis of simulation data has been performed to obtain the critical exponents [27] as \[\beta=0.33(2)\,\ \nu_{x}=0.62(3)\ \ \ (\text{RDLG}). \tag{21}\] These values coincide with the RG results [Eq. (20)] within the numerical error, suggesting that the critical point for anisotropic phase separation in RDLG belongs to the universality class of uniaxial dipolar ferromagnets. Considering nonzero \(\lambda\) and \(\zeta\) to discuss the phase behavior of uniaxial ABPs (see Sec. IV), we can show that \(\lambda\) and \(\zeta\) are irrelevant variables in the RG sense (see Appendix G.2 for the detail). This suggests that the introduction of small \(\lambda\) or \(\zeta\) does not affect the critical properties of anisotropic phase separation, and the critical exponents remain the same as those given in Eq. (20). Thus, like RDLG, the critical point for anisotropic phase separation in uniaxial ABPs is expected to belong to the universality class of uniaxial dipolar ferromagnets. Note that the irrelevance of \(\lambda\) or \(\zeta\) is further supported by the suppression of gas bubbles under strong anisotropy (see Fig. 7). ### Connection to uniaxial dipolar ferromagnets To study the critical point for anisotropic phase separation in uniaxial ABPs, we perform simulations with a fixed strength of anisotropy, \(\epsilon=0.01\). Here, we assume that the critical exponents are not affected by the specific value of \(\epsilon\). First, assuming the law of rectilinear diameter [6, 88], we estimate the critical density as \(\rho_{c}=0.71\) (see Appendix F.1 for the detail). Next, we perform simulations with \(\rho=\rho_{c}=0.71\) to identify the universality class of the critical point using the anisotropic finite-size scaling analysis, which has been widely applied to critical phenomena in externally driven systems [7, 22, 89, 90]. Since the liquid and gas phases are separated along the \(x\)-axis for large Pe [Fig. 2(a)], the degree of phase separation can be measured by an order parameter, \[\hat{m}:=\frac{1}{L_{x}L_{y}}\sum_{j=1}^{N}e^{-i2\pi x_{j}/L_{x}}. \tag{22}\] The finite-size scaling hypotheses for \(\langle\hat{m}\rangle\) and the Binder ratio, \(U:=\langle\hat{m}^{2}\rangle^{2}/\langle\hat{m}^{4}\rangle\), are given as \[\langle\hat{m}\rangle=L_{x}{}^{-\beta/\nu_{x}}\mathcal{M}(L_{x}{}^{1/\nu_{x}} \tau,L_{y}/L_{x}{}^{\nu_{y}/\nu_{x}};\epsilon,\rho) \tag{23}\] and \[U=\mathcal{U}(L_{x}{}^{1/\nu_{x}}\tau,L_{y}/L_{x}{}^{\nu_{y}/\nu_{x}}; \epsilon,\rho), \tag{24}\] respectively. Here, \(\tau:=\text{Pe}-\text{Pe}_{c}\) is the distance from the critical point, and \(\mathcal{M}\) and \(\mathcal{U}\) are scaling functions. 
Equations (23) and (24) are extensions of the scaling hypotheses for isotropic systems with \(\nu_{x}=\nu_{y}\)[22], and the values of \(\nu_{x}\) and \(\nu_{y}\) can be different in anisotropic systems such as uniaxial ABPs and RDLG. For \(\nu_{x}\neq\nu_{y}\), to perform the finite-size scaling analysis, we need to vary the system size with \(L_{y}/L_{x}{}^{\nu_{y}/\nu_{x}}\) fixed. Though \(\nu_{y}/\nu_{x}\) should be determined in principle by the finite-size scaling analysis, we choose \(\nu_{y}/\nu_{x}=2\), which has been commonly used for RDLG based on the RG analysis [26, 27]. Following this choice, we perform simulations with five different system sizes satisfying \(L_{y}/L_{x}{}^{2}=1/24^{2}\): \((L_{x},L_{y})=(180,56.25)\), \((210,76.5625)\), \((240,100)\), \((300,156.25)\), and \((360,225)\). Figure 8: Finite-size scaling analysis for uniaxial ABPs. The parameters are chosen as \(\rho=0.71\), \((\mu_{1},\mu_{\perp},\mu_{\mu})=(1,0.25,1.5)\), and \(\epsilon=0.01\). (a) The Binder ratio \(U\) as a function of Pe for different system sizes. (b) \(U\) and (c) the rescaled order parameter \(\langle\hat{m}\rangle\) as functions of the rescaled Pe with the best-fitted critical exponents (\(\beta/\nu_{x}=0.540\), \(1/\nu_{x}=1.54\)). (d) \(\partial U/\partial\text{Pe}\) and (e) \(\langle\hat{m}\rangle\) against \(L_{x}\) near the critical point in the log-log plot. In (d) and (e), the red dashed lines represent \(\partial U/\partial\text{Pe}\propto L_{x}{}^{1/\nu_{x}}\) and \(\langle\hat{m}\rangle\propto L_{x}{}^{-\beta/\nu_{x}}\), respectively, with the critical exponents used in (b) and (c), and the blue dashed lines are counterparts for the expected universality class [Eq. (20)] based on the RG analysis. The results of the finite-size scaling analysis are summarized in Fig. 8 (See Appendix F.2 for the detailed procedure). Varying Pe from 11.5 to 13.0, we find that \(U\) as a function of Pe for different system sizes approximately crosses at a unique point [Fig. 8(a)], which suggests the presence of the critical point, \(\mathrm{Pe}_{c}\). By fitting \(U(\tau,L_{x})\) and \(\langle\hat{n}\rangle(\tau,L_{x})\) with second-order polynomials, we obtain \(\mathrm{Pe}_{c}\) as \[\mathrm{Pe}_{c}=12.408(5) \tag{25}\] and the critical exponents as \[\beta=0.35(4)\,,\;\nu_{x}=0.65(6)\;\;\;\text{(uniaxial ABPs)}. \tag{26}\] Using these obtained values, we find that the rescaled plots of \(U\) and \(\langle\hat{m}\rangle\) collapse onto universal curves [Figs. 8(b) and (c)], which validates the anisotropic finite-size scaling hypotheses given by Eqs. (23) and (24). The obtained \(\beta\) and \(\nu_{x}\) [Eq. (26)] agree with the RG result for the coarse-grained model [Eq. (20)] and the simulation result of RDLG [Eq. (21)] within the error margin. This indicates that the critical phenomena in uniaxial ABPs belong to the universality class of uniaxial dipolar ferromagnets, as expected from the RG analysis (see Sec. V.1). To check the consistency of the obtained values of \(\beta\) and \(\nu_{x}\), we plot the \(L_{x}\) dependence of \(\partial U/\partial\mathrm{Pe}\) and \(\langle\hat{m}\rangle\) at \(\mathrm{Pe}=12.415\) (\(\approx\mathrm{Pe}_{c}\)) in Figs. 8(d) and (e). According to Eqs. (23) and (24), the slopes of \(\partial U/\partial\mathrm{Pe}\) and \(\langle\hat{m}\rangle\) on the logarithmic scale are \(1/\nu_{x}\) and \(-\beta/\nu_{x}\), respectively. Indeed, Figs. 8(d) and (e) show that the slopes are comparable to the counterparts for the two-loop RG result [Eq. (20)]. 
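The polynomial fits used above to extract \(\mathrm{Pe}_{c}\), \(\beta\), and \(\nu_{x}\) can be carried out with a standard least-squares routine. The following is a minimal Python sketch of that procedure (a joint second-order polynomial fit of \(U\) and the rescaled \(\langle\hat{m}\rangle\) in the variable \(L_{x}^{1/\nu_{x}}\tau\)); the function and array names, the starting values, and the use of `scipy.optimize.least_squares` are illustrative and not the actual analysis code.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_critical_point(pe, lx, u, m):
    """Joint second-order polynomial fit of U(Pe, Lx) and <m>(Pe, Lx) around tau = 0,
    following the anisotropic finite-size scaling forms with L_y/L_x^2 held fixed.
    pe, lx, u, m are 1d arrays with one entry per (Pe, system size) data point."""
    pe, lx, u, m = map(np.asarray, (pe, lx, u, m))

    def residuals(p):
        pec, nu_x, beta_over_nu = p[:3]
        cu, cm = p[3:6], p[6:9]                 # polynomial coefficients for U and <m>
        s = lx**(1.0 / nu_x) * (pe - pec)       # rescaled distance from criticality
        u_fit = cu[0] + cu[1] * s + cu[2] * s**2
        m_fit = (cm[0] + cm[1] * s + cm[2] * s**2) * lx**(-beta_over_nu)
        return np.concatenate([u_fit - u, m_fit - m])

    # starting values motivated by the RDLG exponents and the crossing in Fig. 8(a)
    p0 = np.array([12.4, 0.63, 0.5, 0.6, -0.1, 0.0, 1.0, -0.1, 0.0])
    res = least_squares(residuals, p0)
    pec, nu_x, beta = res.x[0], res.x[1], res.x[2] * res.x[1]
    return pec, nu_x, beta
```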
## VI Discussion In this paper, to investigate the relation between MIPS and nonequilibrium phase separation caused by attractive interactions, we have studied the collective properties of 2D uniaxial ABPs, in which self-propulsion along the \(x\)-axis is favored. Performing simulations, we have found three distinctive features of uniaxial ABPs: (i) generic long-range density correlation in the homogeneous state, (ii) anisotropic phase separation with suppressed nucleation of gas bubbles in contrast to isotropic ABPs, and (iii) critical phenomena that presumably belong to the universality class of 2D uniaxial ferromagnets with dipolar long-range interactions. Since properties (i)-(iii) are common to RDLG, in which phase separation is induced by attractive interactions under external driving, we have established the connection between collective behaviors of uniaxial ABPs and RDLG. Additionally, we have constructed a nonlinear coarse-grained model [Eq. (18)] and substantiated the generality of properties (i)-(iii). The critical exponents for the models related to this study are summarized in Table 1, which points out that the critical behaviors of 2D uniaxial ABPs are close to those of the 3D Ising model rather than the 2D Ising model. This property is consistent with the previous study concerning 2D uniaxial ferromagnets with dipolar long-range interactions [35; 36]. For 2D uniaxial dipolar ferromagnets, the effective increase in dimensionality has been attributed to the consequence of the long-range correlation caused by the dipolar interactions. For 2D uniaxial ABPs, the long-range density correlation arising from the anisotropic nonequilibrium dynamics (see Sec. III) effectively increases the dimensionality from two to three, according to the analogy with uniaxial dipolar ferromagnets (see Appendix D for the detail). Our results suggest that the origin of phase separation (i.e., self-propulsion or attractive interaction) is not essential for the collective behaviors of particles with anisotropic dynamics [Figs. 1(b) and (d)]. In contrast, for isotropic systems [Figs. 1(a) and (c)], the collective phenomena of self-propelled particles can be distinct from those of attractively interacting particles. Specifically, in 2D isotropic ABPs, persistent gas bubbles or microphase separation can appear (see Sec. IV) [73; 66], and the universality class for critical phenomena can be different from the 2D Ising class [87]. Further studies are required to elucidate the condition for such differences in isotropic systems. Recently, a wide range of active matter phases has been realized using biological [42; 43; 44; 45; 46; 47] and artificial [48; 49; 50; 51; 52; 53; 54; 55; 56] systems, especially under anisotropic conditions [91]. The connection between uniaxial ABPs and RDLG suggests that active matter can serve as a platform for materializing the properties predicted for externally driven systems. Though we have focused on uniaxial anisotropy in this study, it will be interesting to examine whether the collective behaviors of the standard DLG can be observed in ABPs with unidirectional anisotropy, which can be relevant to biological systems with chemical gradients. ###### Acknowledgements. We thank Kyogo Kawaguchi and Hiroshi Watanabe for the scientific discussions. We also thank Yohsuke T. Fukai, Yoshihiro Michishita, and Yuki Nagai for their comments on programming and Rory Cerbus and Hiroshi Noguchi for their helpful comments. 
The computations in this study were performed using the facilities of the Supercomputer Center at the Institute for Solid State Physics, the University of Tokyo. This work was supported by JSPS KAKENHI Grant Numbers JP21J00034, JP22K13978 (to H.N.), and JP20K14435 (to K.A.). ## Appendix A Comparison of microscopic dynamics between uniaxial ABPs and RDLG We compare the microscopic implementation of uniaxial ABPs and RDLG. In terms of single-particle dynamics, the fundamental aspects of the microscopic implementation are similar. Both models are based on overdamped dynamics, and the motion of particles is enhanced along the direction of polarity/external field. However, by carefully comparing the microscopic implementation, we notice three distinct differences, which are summarized in Tab. 2. They involve the direction of polarity, persistence time, and interparticle interaction: 1. Uniaxial ABPs allow a full 360-degree rotation of polarity, whereas RDLG restricts the angle of the driving field to either \(\theta=0\) or \(\pi\). 2. The persistence time of uniaxial ABPs is finite, similar to that of isotropic ABPs. In contrast, for RDLG, the direction of the external field changes randomly, indicating that RDLG is characterized by the zero persistence time \(\tau_{p}=0\) 3. RDLG contains both short-range attractive interaction and excluded volume interaction, whereas uniaxial ABPs involve only excluded volume interaction. Here, the persistence time \(\tau_{p}\) is typically defined in ABPs as from correlation of the polarity \(\mathbf{n}_{i}\). For example, in isotropic ABPs, the correlation of \(\mathbf{n}_{i}\) is calculated as \[\langle\mathbf{n}_{i}(s)\cdot\mathbf{n}_{i}(0)\rangle=e^{-\tau_{p}\mu_{0}s}, \tag{10}\] and consequently the persistence time is given by \(\tau_{p}=1/\tau\mu_{\theta}\). In the comparison mentioned above, the concept of persistence time is extended to RDLG by considering the external field as the equivalence of the polarity. Similarly, we present the relationship between isotropic ABPs and equilibrium LG in Tab. 2. Notably, the polarity is not defined in the equilibrium LG. However, an important point is that the passive Brownian particles, an off-lattice version of the equilibrium LG, correspond to the zero persistence time limit of isotropic ABPs. Then, by focusing on the aspects of the persistence time and intrinsic attractive force, we can interpret uniaxial ABPs and RDLG as one anisotropic extension of isotropic ABPs and equilibrium LG. Due to the second point, RDLG cannot be linked to uniaxial ABPs with continuous changes of parameters such as the zero persistence time limit. To understand this point, we focus on the probability distribution of polarity angle, \(P(\theta)\) for uniaxial ABPs, which is calculated as \[P(\theta)=\frac{1}{Z}\exp\left(-\epsilon\,\frac{U(\theta)}{\tau\mu_{\theta}} \right), \tag{11}\] where \(Z\) is a normalization constant. To localize the polarity at \(\theta=0\) or \(\pi\), we must change \(\mu_{\theta}\rightarrow+0\) with \(\epsilon\) fixed or \(\epsilon\rightarrow+\infty\) with \(\mu_{\theta}\) fixed. Clearly, under these continuous changes, the polarity cannot climb the potential barrier between \(\theta=0\) and \(\theta=\pi\). ## Appendix B Procedure for constructing phase diagram In this Appendix, we explain the procedure for drawing the phase diagram [Figs. 2(c) and (d)] in details. As the initial state, we prepare the half-filling state by placing the particles in the right-half. 
After the relaxation run, we determine the high and low-density regions based on the fact that the center of the mass of the systems \(x_{\text{com}}\) coincides with the center of the high-density region. Specifically, we identify the high-density region as \[\left[x_{\text{com}}-\frac{L_{x}}{10},x_{\text{com}}+\frac{L_{x}}{10}\right] \times[0,L_{y}]. \tag{12}\] Also, from the fact that the center of the low-density region is the farthest from the center of the high-density region, we identify the low-density region as \[\left[x_{\text{com}}+\frac{L_{x}}{2}-\frac{L_{x}}{10},x_{\text{com}}+\frac{L _{x}}{2}+\frac{L_{x}}{10}\right]\times[0,L_{y}]. \tag{13}\] We then observe the density \(\rho_{l}\) and \(\rho_{h}\) in the high- and low-density regions. In the phase-separated state, the values of \(\rho_{l}\) and \(\rho_{h}\) give the coexisting (binodal) curve, which is drawn in Fig. 2(c) and (d). \begin{table} \begin{tabular}{c c c c c} \hline model & direction of polarity & persistence time & intrinsic attractive force & anisotropy \\ \hline \hline uniaxial ABPs & \(0\sim 2\pi\) & finite & no & yes \\ RDLG & \(0\) or \(\pi\) & zero & yes & yes \\ isotropic ABPs & \(0\sim 2\pi\) & finite & no & no \\ equilibrium LG & not defined & zero limit & yes & no \\ \hline \hline \end{tabular} \end{table} Table 2: Basic features in microscopic implementation of uniaxial ABPs, RDLG, isotropic ABPs, and equilibrium LG. \begin{table} \begin{tabular}{c c c} \hline model & \(\beta\) & \(\nu_{x}\) \\ \hline 2D uniaxial dipolar ferromagnet (two-loop RG) & 0.315 & 0.626 \\ 2D RDLG (Monte Carlo) & 0.33(2) & 0.62(3) \\ 2D uniaxial ABPs (our study) & 0.35(4) & 0.65(6) \\ 2D Ising model (Monte Carlo) & 0.125 & 1.0 \\ 3D Ising model (Monte Carlo) & 0.326 & 0.630 \\ \hline \end{tabular} \end{table} Table 1: Comparison with the critical exponents of related models. ## Appendix C Parameter details of Figs. 3 and 4 We set the simulation box to \(L_{x}=L_{y}=360\). The particle number is set to \(N=92016\) for uniaxial ABPs and \(N=64800\) for RDLG, which respectively correspond to the density of \(0.710\) and \(0.50\). We start from the initial state in which the particles are randomly located with zero overlaps. We perform the relaxation run for \(10^{8}\) time steps (i.e., time \(=10^{8}dt=2.0\times 10^{6}\)) for uniaxial ABPs and for \(4.0\times 10^{6}\) Monte Carlo steps for RDLG. After that, we observe the structure factor \(S(\mathbf{k})\). The real-space density correlation \(\langle\rho(\mathbf{r})p(\mathbf{0})\rangle\) is calculated by the inverse Fourier transformation of the structure factor \(S(\mathbf{k})\). We take the time average in the steady state and the ensemble average over different noise realizations. For uniaxial ABPs, the ensemble average is performed over \(28\) different noise realizations, and the time average is performed over \(400\) samples obtained every \(10^{6}\) time steps (i.e., time \(=10^{6}dt=20000\)). For RDLG, the ensemble average is performed over \(96\) different noise realizations, and the time average is performed over \(400\) samples obtained every \(20000\) Monte Carlo steps. ## Appendix D Relation to equilibrium uniaxial dipolar ferromagnet For RDLG, it is known that the specific patterns of structure factor \(S(\mathbf{k})\) involving the long-range correlations are analogous to the long-range nature of the uniaxial dipolar system. 
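Before turning to that analogy, we illustrate the measurement of \(S(\mathbf{k})\) described in Appendix C. The structure factor and the real-space correlation can be estimated from a single particle configuration by binning the positions onto a grid and applying the FFT, and then averaging over snapshots and noise realizations as stated above. The sketch below is a minimal NumPy version; the bin size and normalization conventions are illustrative.

```python
import numpy as np

def structure_factor(x, y, lx, ly, nbin=360):
    """Estimate S(k) = |rho_k|^2 / (Lx*Ly) and the real-space correlation from one snapshot."""
    # density field on a regular grid (here one bin per unit length)
    rho, _, _ = np.histogram2d(x, y, bins=[nbin, nbin], range=[[0, lx], [0, ly]])
    dx, dy = lx / nbin, ly / nbin
    rho = rho / (dx * dy)                              # number density per unit area
    # continuum Fourier convention: rho_k ~ FFT(rho) * dx * dy; the mean removes the k=0 peak
    rho_k = np.fft.fft2(rho - rho.mean()) * dx * dy
    s_k = np.abs(rho_k) ** 2 / (lx * ly)
    # real-space density correlation as the inverse Fourier transform of S(k)
    c_r = np.fft.ifft2(s_k).real / (dx * dy)
    return s_k, c_r
```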
Here, we give the definition of uniaxial dipolar ferromagnet [35] and briefly discuss the analogy between the density correlation of uniaxial ABPs and the spin correlation of uniaxial dipolar ferromagnet. We start with the Heisenberg model with the short-range exchange interaction and long-range dipolar interaction. The Heisenberg spin \(S_{\mathbf{R}}\) is defined on the two-dimensional square lattice \(\{\mathbf{R}=(n_{x},n_{y})\mid n_{x},n_{y}=0,\pm 1,\pm 2,\cdots\}\), where the lattice constant is set to \(1\). The Hamiltonian \(\mathcal{H}\) of this model consists of the short-range exchange interaction and long-range dipolar interaction, which is expressed as \[\mathcal{H}= -G\sum_{\mathbf{R}\neq\mathbf{R}}\sum_{\alpha,\beta}\biggl{(}-\frac{ \delta_{\alpha\beta}}{|\mathbf{R}-\mathbf{R}^{\prime}|^{2}}+\frac{(R_{\alpha}-R_{ \alpha}^{\prime})(R_{\beta}-R_{\beta}^{\prime})}{|\mathbf{R}-\mathbf{R}^{\prime}|^{4}} \biggr{)}S_{\mathbf{R}}^{\alpha}S_{\mathbf{R}^{\prime}}^{\beta}\] \[-\frac{1}{2}J\sum_{\mathbf{R}}\sum_{\mathbf{\delta}}S_{\mathbf{R}}\cdot S_{\bm {R}+\mathbf{\delta}}, \tag{10}\] where \(\sum_{\mathbf{R}}\sum_{\mathbf{\delta}}\) runs over all nearest-neighbor pairs. Let us impose the uniaxial condition where the Heisenberg spin \(S_{\mathbf{R}}\) is restricted to pointing in the direction of the \(y\)-axis: \(S_{\mathbf{R}}=(0,S_{\mathbf{R}},0)\). The model reduces to the Ising model with anisotropic interaction: \[\mathcal{H}= -G\sum_{\mathbf{R}\neq\mathbf{R}}\biggl{(}-\frac{1}{|\mathbf{R}-\mathbf{R}^{ \prime}|^{2}}+\frac{(R_{y}-R_{y}^{\prime})^{2}}{|\mathbf{R}-\mathbf{R}^{\prime}|^{4}} \biggr{)}S_{\mathbf{R}}S_{\mathbf{R}^{\prime}}\] \[-\frac{1}{2}J\sum_{\mathbf{R}}\sum_{\mathbf{\delta}}S_{\mathbf{R}}S_{\mathbf{R}+ \mathbf{\delta}}. \tag{11}\] This model is called the uniaxial dipolar ferromagnet. In the Fourier space, the dipolar part of the Hamiltonian is expanded near the \(\mathbf{k}=\mathbf{0}\) as \[-G\sum_{\mathbf{R}\neq\mathbf{R}}\biggl{(}-\frac{1}{|\mathbf{R}-\mathbf{R}^{ \prime}|^{2}}+\frac{(R_{y}-R_{y}^{\prime})^{2}}{|\mathbf{R}-\mathbf{R}^{\prime}|^{4}} \biggr{)}\\ =a_{1}\Bigl{(}\frac{k_{y}}{k}\Bigr{)}^{2}-a_{2}k_{y}^{2}-(a_{3}+ a_{4}\mathbf{k}^{2})+\cdots, \tag{12}\] where \(\{a_{i}\}_{i=1,\cdots,4}\) is a set of numerical constants depending on the lattice structure. By expanding the short-range part of Hamiltonian in the same way, we rewrite the Hamiltonian as \[\mathcal{H}= -\int\frac{d^{2}\mathbf{k}}{(2\pi)^{2}}\biggl{(}r_{0}+\mathbf{k}^{2}-h_{0 }k_{y}^{2}+g_{0}\frac{k_{y}^{2}}{\mathbf{k}^{2}}\biggr{)}S_{\mathbf{k}}S_{\mathbf{k}}\] \[-u_{0}\int\frac{d^{2}\mathbf{k}_{1}}{(2\pi)^{2}}\int\frac{d^{2}\mathbf{k} _{2}}{(2\pi)^{2}}\int\frac{d^{2}\mathbf{k}_{3}}{(2\pi)^{2}}S_{\mathbf{k}_{1}}S_{\mathbf{k }_{2}}S_{\mathbf{k}_{3}}S_{\mathbf{-k}_{1}\mathbf{-k}_{2}\mathbf{-k}_{3}}, \tag{13}\] where we ignore the higher-order terms in \(S_{\mathbf{k}}\). The values of the numerical factor are given in Ref. [35]. The equilibrium state of this system is described by the canonical ensemble. In the disordered state, the linear approximation leads to the static spin-spin correlation: \[\langle S(\mathbf{k})S(\mathbf{k}^{\prime})\rangle=C(\mathbf{k})\delta(\mathbf{k}+\mathbf{k}^{ \prime}) \tag{14}\] with \[C(\mathbf{k})=\frac{T\mathbf{k}^{2}}{r_{0}k_{x}^{2}+(r_{0}+g_{0}k_{y}^{2}-h_{0}k_{y}^ {2}\mathbf{k}^{2}+\mathbf{k}^{4}}. \tag{15}\] This form is the special case of Eq. (10), indicating that uniaxial ABPs acquire dipolar-like long-range natures. 
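Reading the denominator of the last expression as \(r_{0}k_{x}^{2}+(r_{0}+g_{0})k_{y}^{2}-h_{0}k_{y}^{2}\mathbf{k}^{2}+\mathbf{k}^{4}\), the small-\(\mathbf{k}\) limit of \(C(\mathbf{k})\) depends on the direction along which \(\mathbf{k}\to\mathbf{0}\) is taken,

\[\lim_{k_{x}\to 0}C(k_{x},0)=\frac{T}{r_{0}},\qquad\lim_{k_{y}\to 0}C(0,k_{y})=\frac{T}{r_{0}+g_{0}},\]

so the spin correlation is discontinuous at \(\mathbf{k}=\mathbf{0}\). This is the same kind of direction-dependent small-\(\mathbf{k}\) behavior found for the density structure factor of uniaxial ABPs and RDLG in the homogeneous state.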
As discussed in Appendix G.2, this feature determines the universality class of critical phenomena. ## Appendix E Supplemental information of Fig. 5 The microscopic simulation in the phase-separated phase of uniaxial ABPs is performed to examine the nucleation of bubbles, whose results are summarized in Fig. 5. Here, we explain how to draw them. ### Relaxation run for observing gas bubbles The simulation box is a rectangle with the ratio of \(L_{x}:L_{y}=2:1\). We prepare an initial configuration by placing the particles in the region \(0<x<L_{x}/2\) and perform the relaxation run. In Fig. 9, we present the relaxation process of the different system sizes for \(\epsilon=0\) and \(0.002\), where the simulation data is averaged over \(2\sim 8\) different noise realizations. From this figure, we immediately notice that the relaxation time is significantly longer for the isotropic systems (\(\epsilon=0\)) compared to the anisotropic system. Additionally, the relaxation time increases as the system size becomes larger. According to this observation, we basically perform the relaxation run for \(2.0\times 10^{8}\) time steps (i.e., time \(=2.0\times 10^{8}dt=4.0\times 10^{6}\)), and after that, perform the observation run for \(3.0\times 10^{8}\) time steps (i.e., time \(=3.0\times 10^{8}dt=6.0\times 10^{6}\)). There is one exceptional case, specifically when \((L_{x},L_{y})=(4320,2160)\) with \(\epsilon=0.000\), where the relaxation time is notably longer. In this specific case, we perform the relaxation run for \(9.0\times 10^{8}\) time steps (i.e., time \(=9.0\times 10^{8}dt=18.0\times 10^{6}\)). ### Numerical procedure to detect gas bubbles After a sufficiently long relaxation run, we observe the bubble fraction, \(f_{b}\), and the size distribution of gas bubbles, \(n(a)\). For this observation, we divide the simulation box into square cells with a width of \(\delta\), and calculate the density field as the collection of the local density. Figure 5(a) draws the density field obtained using a bin size of \(\delta=2.0\). The liquid and gas phases are distinguished based on the local density \(\rho(\mathbf{r})\). The gas phase is designated by \(\rho(\mathbf{r})<0.765\), while the liquid phase is designated by \(\rho(\mathbf{r})>0.765\). As mentioned in the main text, the gas bubble is defined as the connected region of gas inside the liquid phase. The largest gas bubble is regarded as the gas reservoir and not as the gas bubble. To detect the connected regions of the gas phase, we used a Julia package Julialmages.jl, which identifies the region connected to each other along the \(x\) or \(y\)-axis. The bubble fraction \(f_{b}\) is defined by Eq. (14). In the numerical procedure, we calculate the bubble fraction \(f_{b}\) by rewriting Eq. (14) as \[f_{b}=\frac{S_{\text{gas}}-a_{\text{max}}}{S}, \tag{12}\] where \(S_{\text{gas}}\) is the total area of the gas phase and \(a_{\text{max}}\) is the maximum area of the gas phase (i.e. the area of the gas reservoir). In Fig. 5(b)-(e), to calculate \(f_{b}\) and \(n(a)/S_{\text{liq}}\), we use the density field obtained with the bins \(\delta=2.0\) for \((L_{x},L_{y})=(720,360)\) and \((L_{x},L_{y})=(1440,720)\), \(\delta=4.0\) for \((L_{x},L_{y})=(2880,1440)\), and \(\delta=6.0\) for \((L_{x},L_{y})=(4320,2160)\). We take the time average over 3000 samples obtained every \(10^{5}\) time steps (i.e., time \(=10^{5}dt=2000\)) and the ensemble average over \(2-8\) noise realizations. ## Appendix F Supplemental information of Fig. 
8 The simulations near criticality are performed to examine the universality class, whose results are summarized in Fig. 8 of the main text. In this appendix, we elaborate on the procedure for obtaining the critical properties such as the position of the critical point and the universality class. ### Rough estimation of critical density We first estimate the critical density by calculating the rectilinear diameter \((\rho_{l}+\rho_{g})/2\) for various Pe, where \(\rho_{l}\) and \(\rho_{g}\) are the densities in the liquid and gas phases, respectively. Here, we summarize the supplemental information of this simulation. The system size is set to \((L_{x},L_{y})=(480,400)\) and \((720,900)\). The density is set to \(\rho=0.765\), which corresponds to the particle numbers \(N=146880\) and \(495720\), respectively. We prepare an initial configuration by placing the particles in the region \(0<x<200\) for \((L_{x},L_{y})=(480,400)\) and \(0<x<300\) for \((L_{x},L_{y})=(720,900)\). In all simulations, we perform the relaxation run for \(7.5\times 10^{7}\) time steps (i.e., \begin{table} \begin{tabular}{c c c} \hline Pe & \((480,400)\) & \((720,900)\) \\ \hline \hline 10.0 & 0.764 & 0.763 \\ 11.0 & 0.760 & 0.765 \\ 12.0 & 0.776 & 0.771 \\ 13.0 & 0.723 & 0.708 \\ 14.0 & 0.713 & 0.702 \\ 15.0 & 0.700 & 0.701 \\ \hline \hline \end{tabular} \end{table} Table 3: Rectilinear diameter \((\rho_{l}+\rho_{g})/2\) near the critical point for \((L_{x},L_{y})=(480,400)\) and \((720,900)\). time \(=7.5\times 10^{7}dt=1.5\times 10^{6}\)), and the observation run for \(2.5\times 10^{7}\) time steps (i.e., time \(=2.5\times 10^{7}dt=0.5\times 10^{6}\)). The simulation result is presented in Tab 3. In the large system size limit, the rectilinear diameter in the homogeneous state is equal to the global density of \(0.765\), while at the critical point, it coincides with critical density \(\rho_{c}\). From Tab. 3, we observe a distinct change in the rectilinear diameter between \(\mathrm{Pe}=12.0\) and \(\mathrm{Pe}=13.0\). Specifically, at \(\mathrm{Pe}=12.0\), it closely matches the expected value of \(0.765\), whereas at \(\mathrm{Pe}=13.0\) it significantly deviates from this value. Based on this observation, we can infer that the critical Peclet number, \(\mathrm{Pe}_{\mathrm{c}}\), lies between \(12.0<\mathrm{Pe}_{\mathrm{c}}<13.0\) and the critical density, \(\rho_{c}\), is estimated as \(\approx 0.708\). ### Estimation of critical exponents Based on the estimation of the critical density in the previous section, we set the density to \(\rho=0.710\) and change the Peclet number, \(\mathrm{Pe}\), from \(\mathrm{Pe}=11.5\) to \(\mathrm{Pe}=13.0\). As explained in main text, we set the system sizes to \((L_{x},L_{y})=(180,56.25)\), \((210,76.5625)\), \((240,100)\), \((300,156.25)\), and \((360,225)\). We show the typical time evolution of the ensemble average of the order parameter for \((L_{x},L_{y})=(240,100)\), \((300,156.25)\), and \((360,225)\) in Fig. 10. This figure confirms that our simulation achieves the steady state after a sufficiently long relaxation run. Using the data within the red region, we take the time and ensemble averages for the order parameter \(\langle\hat{m}\rangle\) and the Binder Parameter \(U:=\langle\hat{m}^{2}\rangle^{2}/\langle\hat{m}^{4}\rangle\). 
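Both quantities are simple functions of the particle \(x\)-coordinates in a snapshot. A minimal NumPy sketch of the estimators is given below, assuming that the modulus of the complex sum in Eq. (22) is taken; the function names are illustrative.

```python
import numpy as np

def order_parameter(x, lx, ly):
    """|m| for one snapshot, Eq. (22), taking the modulus of the complex sum."""
    return np.abs(np.sum(np.exp(-2j * np.pi * np.asarray(x) / lx))) / (lx * ly)

def binder_ratio(m_samples):
    """U = <m^2>^2 / <m^4> from order-parameter values of many snapshots/realizations."""
    m = np.asarray(m_samples)
    return np.mean(m**2) ** 2 / np.mean(m**4)
```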
The ensemble average is taken over \(800\) different noise realizations for \((L_{x},L_{y})=(300,156.25)\) and \((360,225)\), and \(500\) different noise realizations for \((L_{x},L_{y})=(180,56.25)\), \((210,76.5625)\), and \((240,100)\). The time average is performed by using the data every \(4.0\times 10^{6}\) time steps (i.e., time \(=4.0\times 10^{6}dt=8.0\times 10^{4}\)) for \((L_{x},L_{y})=(360,225.5)\), \(2.0\times 10^{6}\) time steps (i.e., time \(=2.0\times 10^{6}dt=4.0\times 10^{4}\)) for \((L_{x},L_{y})=(300,156.25)\), \(1.0\times 10^{6}\) time steps (i.e., time \(=1.0\times 10^{6}dt=2.0\times 10^{4}\)) for \((L_{x},L_{y})=(240,100)\), \(0.5\times 10^{6}\) time steps (i.e., time \(=0.5\times 10^{6}dt=1.0\times 10^{4}\)) for \((L_{x},L_{y})=(210,76.5625)\), and \(0.25\times 10^{6}\) time steps (i.e., time \(=0.25\times 10^{6}dt=0.5\times 10^{4}\)) for \((L_{x},L_{y})=(210,76.5625)\), and \(0.25\times 10^{6}\) time steps (i.e., time \(=0.25\times 10^{6}dt=0.5\times 10^{4}\)) for \((L_{x},L_{y})=(180,56.25)\). To estimate the critical exponents, we use the anisotropic finite-size scaling hypothesis Eqs. (23) and (23). We refer to Ref. [22] for a more detailed discussion of the anisotropic finite-size scaling. Since the scaling functions \(\mathcal{M}\) and \(\mathcal{U}\) are analytic, we can expand \(\langle\hat{m}\rangle(\tau,L_{x})\) and \(U(\tau,L_{x})\) around \(\tau=0\) as \[\langle\hat{m}\rangle(\tau,L_{x})=\sum_{n=0}^{\infty}\frac{\partial^{n} \mathcal{M}}{\partial\tau^{n}}\Big{|}_{\tau=0}(L_{y}/L_{x}{}^{\nu_{y}/\nu_{z}}; \epsilon,\rho)L_{x}{}^{(n-\beta)/\nu_{z}}\tau^{n}, \tag{11}\] \[U(\tau,L_{x})=\sum_{n=0}^{\infty}\frac{\partial^{n}\mathcal{U}}{\partial\tau^ {n}}\Big{|}_{\tau=0}(L_{y}/L_{x}{}^{\nu_{y}/\nu_{z}};\epsilon,\rho)L_{x}{}^{n /\nu_{z}}\tau^{n}. \tag{12}\] According to these expansions, we fit the simulation data \(\langle\hat{m}\rangle\) and \(U\) to the second-order polynomials to obtain the critical point \(\mathrm{Pe}_{c}=12.408(5)\) and the critical exponents \(\beta=0.35(4)\) and \(\nu_{x}=0.65(6)\). For this fitting, the data within \(-2000.0<{L_{x}}^{1/0.65}(\mathrm{Pe}-12.408)<2000.0\) are used. ## Appendix G Coarse-grained model ### Numerical simulation For simulations of the coarse-grained model [Eq. (18)], \[\partial_{t}\phi= a_{x}\partial_{x}{}^{2}\phi+a_{y}\partial_{y}{}^{2}\phi+\mathbf{ \nabla}^{2}(b\phi^{3}-K\mathbf{\nabla}^{2}\phi+K^{\prime}\mathbf{\nabla}^{4}\phi)\] \[+\lambda\mathbf{\nabla}^{2}(\mathbf{\nabla}\phi)^{2}-\zeta\mathbf{\nabla} \cdot[(\mathbf{\nabla}^{2}\phi)\mathbf{\nabla}\phi]-\sqrt{2D}\mathbf{\nabla}\cdot\mathbf{\xi}, \tag{13}\] Figure 11: Initial states used in simulations of the coarse-grained model. We show the initial states for (a) \(\phi_{0}=-0.1\) with \((L_{x},L_{y})=(256,128)\) and (b) \(\phi_{0}=0.4\) with \((L_{x},L_{y})=(192,192)\), which correspond to Figs. 7(a) and (b), respectively. with \(\langle\xi_{a}(\mathbf{r},t)\rangle=0\) and \(\langle\xi_{a}(\mathbf{r},t)\xi_{b}(\mathbf{r}^{\prime},t^{\prime})\rangle=\delta_{ab} \delta(\mathbf{r}-\mathbf{r}^{\prime})\delta(t-t^{\prime})\), we discretize time as \(t=n\Delta t\) and spatial coordinates as \(x=i\Delta x\) and \(y=j\Delta y\) with periodic boundary conditions. 
Accordingly, we replace \(\phi(x,y,t)\) by \(\phi_{i,j}^{n}\) and \(\xi_{a}(x,y,t)\) by \((\Delta x\Delta y\Delta t)^{-1/2}\xi_{a,i,j}^{n}\), where \(\xi_{a,i,j}^{n}\) is a Gaussian noise with \(\langle\xi_{a,i,j}^{n}\rangle=0\) and \(\langle\xi_{a,i,j}^{n}\xi_{b,i^{\prime},j^{\prime}}^{n}\rangle=\delta_{ab} \delta_{i^{\prime}}\delta_{j^{\prime}}\delta_{m^{\prime}}\). Using the explicit Euler method, we replace Eq. (63) by \[\phi_{i,j}^{n+1}=\phi_{i,j}^{n}+[F(\phi)]_{i,j}^{n}\Delta t, \tag{64}\] where \([F(\phi)]_{i,j}^{n}\) is the discretized form of the right-hand side of Eq. (63). To determine \([F(\phi)]_{i,j}^{n}\) we use the second-order central finite difference for the differential operators that appear in Eq. (63) (i.e., \(\partial_{x}\), \(\partial_{y}\), \(\partial_{x}^{2}\), and \(\partial_{y}^{3}\)), such as \([\partial_{x}f]_{i,j}^{n}=(f_{i+1,j}^{n}-f_{i-1,j}^{n})/(2\Delta x)\) and \([\partial_{x}^{3}f]_{i,j}^{n}=(f_{i+1,j}^{n}-2f_{i,j}^{n}+f_{i-1,j}^{n})/ \Delta x^{2}\). The discretization parameters are chosen as \(\Delta t=0.1\) and \(\Delta x=\Delta y=1\), and the model parameters are fixed as \(a_{x}=-0.25\), \(b=0.25\), \(K=1\), \(K^{\prime}=0.2\), and \(D=0.5\) throughout the numerical study. The other parameters are \((\lambda,\zeta)=(0.5,5)\) for \(\phi_{0}=-0.1\) (low-density case) and \((\lambda,\zeta)=(1,4)\) for \(\phi_{0}=0.4\) (high-density case), where \(\phi_{0}\) is the spatial average of \(\phi(\mathbf{r},t)\). As the initial state for all the simulations, we use a phase-separated state, \(\phi_{\rm ini}(\mathbf{r}):=-2{\rm sgn}(\phi_{0})\exp[-(x-L_{x}/2)^{4}/(L_{x}/4)^{4}] -C\), where \(C\) is a constant to set the spatial average of \(\phi_{\rm ini}(\mathbf{r})\) to \(\phi_{0}\) (Fig. 11). We define the liquid and gas phases as the spatial regions satisfying \(\phi(\mathbf{r})>0\) and \(\phi(\mathbf{r})<0\), respectively. In the same way as applied to uniaxial ABPs (see Appendix E.2), a Julia package (Julialmages.jl) is used to detect the connected regions of the gas phase. The size of each gas phase, \(a\), is defined as the area of the regions that satisfy \(\phi(\mathbf{r})<0\) and are connected to each other along the \(x\) or \(y\)-axis. The bubble fraction, \(f_{b}\), which is plotted in Figs. 7(c) and (d), is calculated as \(f_{b}:=\langle S_{\rm gas}-a_{\rm max}\rangle/(L_{x}L_{y})\), where \(S_{\rm gas}\) and \(a_{\rm max}\) are the total and maximum areas of the gas phase, respectively, and \(\langle\cdots\rangle\) means the average over samples. To characterize the steady state using bubble fraction \(f_{b}\) and order parameter \(m\), independent samples are taken with different noise realizations. For the low-density condition with \(\phi_{0}=-0.1\), which is used for Figs. 7(a) and (c), 1152 independent samples are taken with \(10^{7}\) time steps (i.e., total time \(=10^{7}\Delta t=10^{5}\) for each sample) for \((L_{x},L_{y})=(64,32)\), 1152 independent samples with \(4\times 10^{7}\) time steps for \((L_{x},L_{y})=(128,64)\), and 24 independent samples with \(1.6\times 10^{8}\) time steps for \((L_{x},L_{y})=(256,128)\). For the high-density condition with \(\phi_{0}=0.4\), which is used for Figs. 7(b) and (d), 1152 independent samples are taken with \(10^{7}\) time steps for \((L_{x},L_{y})=(64,64)\), 288 independent samples with \(4\times 10^{7}\) time steps for \((L_{x},L_{y})=(128,128)\), and 128 independent samples with \(9\times 10^{7}\) time steps for \((L_{x},L_{y})=(192,192)\). 
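A minimal NumPy transcription of this scheme is sketched below: the second-order central stencils act on a periodic square grid with \(\Delta x=\Delta y=1\), the conserved noise is added with the discretization stated above, and connected gas regions are labeled with `scipy.ndimage` as an alternative to the JuliaImages.jl labeling mentioned in the text. The helper names are illustrative, the default parameters correspond to the low-density set, and periodic wrapping of gas clusters across the box edges is not stitched in this sketch.

```python
import numpy as np
from scipy import ndimage

def d2x(f, dx=1.0):  # second-order central second derivative along x (axis 0), periodic
    return (np.roll(f, 1, axis=0) - 2.0 * f + np.roll(f, -1, axis=0)) / dx**2

def d2y(f, dx=1.0):  # same along y (axis 1)
    return (np.roll(f, 1, axis=1) - 2.0 * f + np.roll(f, -1, axis=1)) / dx**2

def lap(f, dx=1.0):
    return d2x(f, dx) + d2y(f, dx)

def grad(f, dx=1.0):  # second-order central first derivatives, periodic
    fx = (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2.0 * dx)
    fy = (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2.0 * dx)
    return fx, fy

def div(fx, fy, dx=1.0):
    return ((np.roll(fx, -1, axis=0) - np.roll(fx, 1, axis=0))
            + (np.roll(fy, -1, axis=1) - np.roll(fy, 1, axis=1))) / (2.0 * dx)

def euler_step(phi, ay, rng, ax=-0.25, b=0.25, K=1.0, Kp=0.2,
               lam=0.5, zeta=5.0, D=0.5, dt=0.1, dx=1.0):
    """One explicit Euler update of the anisotropic coarse-grained field equation."""
    lphi = lap(phi, dx)
    gx, gy = grad(phi, dx)
    det = (ax * d2x(phi, dx) + ay * d2y(phi, dx)
           + lap(b * phi**3 - K * lphi + Kp * lap(lphi, dx), dx)
           + lam * lap(gx**2 + gy**2, dx)
           - zeta * div(lphi * gx, lphi * gy, dx))
    # conserved noise: -sqrt(2D) div(xi), with xi -> (dx*dy*dt)^(-1/2) * N(0,1)
    amp = np.sqrt(2.0 * D / (dx * dx * dt))
    noise = div(amp * rng.standard_normal(phi.shape),
                amp * rng.standard_normal(phi.shape), dx)
    return phi + dt * (det - noise)

def bubble_fraction(phi, threshold=0.0):
    """f_b = (S_gas - a_max)/S: total gas area minus the largest gas cluster (reservoir)."""
    gas = phi < threshold
    labels, nlab = ndimage.label(gas)     # 4-connected regions along the x or y axis
    if nlab == 0:
        return 0.0
    areas = ndimage.sum(gas, labels, index=np.arange(1, nlab + 1))
    return float(areas.sum() - areas.max()) / gas.size
```

Iterating `euler_step` (with `rng = np.random.default_rng()`) from the phase-separated profile \(\phi_{\rm ini}(\mathbf{r})\) defined above and recording `bubble_fraction` over the last half of the run yields the kind of \(a_{y}\) dependence shown in Figs. 7(c) and (d).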
To obtain the expectation values, we take the average over independent samples as well as the time average over 51 points in the last half of the total time. We show the typical time evolution of \(\phi\) in the liquid and gas phases (\(\phi_{\rm ini}\) and \(\phi_{\rm gas}\), respectively), averaged over space and independent samples [Figs. 12(a-c) and (e-g)]. The points in the red region in Figs. 12(a-c) and (e-g) are used in time averaging to obtain the \(a_{y}\) dependence of \(\phi_{\rm ini}\) and \(\phi_{\rm gas}\), which is plotted in Figs. 12(d) and (h). Similarly, we show the typical time evolution of \(f_{b}\) in Fig. 13, in which the points in the red region are used to obtain the \(a_{y}\) dependence of \(f_{b}\) [Figs. 7(c) and (d)]. Note that, near the isotropic limit [\(a_{x}=a_{y}\) (\(=-0.25\))] for the low-density case (\(\phi_{0}=-0.1\)), the relax ation is slow as seen from Fig. 13(a), and thus the values of \(f_{b}\) plotted in Fig. 7(c) can be underestimated around \(a_{y}=-0.25\), which is not essential for the current study. ### Renormalization group analysis Assuming anisotropic systems with \(a_{y}>0\), we consider the critical phase transition between the homogeneous state and anisotropic phase separation that occurs as \(a_{x}\) is changed. Applying the approach by Martin, Siggia, Rose, Janssen, and de Dominicis (MSRJD) [92; 93; 94; 95] to Eq. (61), we can obtain the probability density for a dynamical path of configurations \(\{\phi(\mathbf{r},t)\}_{t\in[0,T]}\) as \[P[\phi]=\int D(i\bar{\phi})\exp(-S[\phi,\bar{\phi}]). \tag{63}\] Here, dynamical action \(S[\phi,\bar{\phi}]\) is given as \[S[\phi,\bar{\phi}]=\int_{0}^{T}dt\int d^{2}\mathbf{r}\,[\bar{\phi}[ \partial_{t}\phi-a_{x}\partial_{x}{}^{2}\phi-a_{y}\partial_{y}{}^{2}\phi-b_{x }\partial_{x}{}^{2}\phi^{3}\] \[-b_{y}\partial_{y}{}^{2}\phi^{3}+K_{xx}\partial_{x}{}^{4}\phi+K _{xy}\partial_{x}{}^{2}\partial_{y}{}^{2}\phi+K_{yy}\partial_{y}{}^{4}\phi-K ^{\prime}_{xxx}\partial_{x}{}^{6}\phi\] \[-K^{\prime}_{xxy}\partial_{x}{}^{4}\partial_{y}{}^{2}\phi-K^{ \prime}_{xxy}\partial_{x}{}^{2}\partial_{y}{}^{4}\phi-K^{\prime}_{yy}\partial _{x}{}^{6}\phi-\lambda_{xx}\partial_{x}{}^{2}(\partial_{x}\phi)^{2}\] \[-\lambda_{xy}\partial_{x}{}^{2}(\partial_{x}\phi)^{2}-\lambda_{ xy}\partial_{y}{}^{2}(\partial_{x}\phi)^{2}-\lambda_{yy}\partial_{y}{}^{2}( \partial_{y}\phi)^{2}\] \[+\zeta_{xy}\partial_{x}(\partial_{y}{}^{2}\phi\,\partial_{x}\phi )+\zeta_{yx}\partial_{y}(\partial_{x}{}^{2}\phi\,\partial_{y}\phi)]+D_{x}\bar{ \phi}\partial_{x}{}^{2}\bar{\phi}+D_{y}\bar{\phi}\partial_{y}{}^{2}\bar{\phi}], \tag{64}\] where we generalize the coupling constants, which are related to the original ones as \(b_{x}=b_{y}=b\), \(K_{xx}=K_{yy}=K\), \(K_{xy}=2K\), \(K^{\prime}_{xxx}=K^{\prime}_{yyy}=K^{\prime}\), \(K^{\prime}_{xxy}=K^{\prime}_{xxy}=3K^{\prime}\), \(\lambda_{xx}=\lambda_{yy}=\lambda-\zeta/2\), \(\lambda_{xy}=\lambda_{yx}=\lambda\), \(\zeta_{xy}=\zeta_{yx}=\zeta\), and \(D_{x}=D_{y}=D\). Considering the tree-level renormalization group analysis of Eq. (64), we perform the scale transformation as \(x\to c^{-1}x\) (\(c>1\)). 
Requiring the invariance of \(a_{y}\), \(K_{xx}\), and \(D_{x}\) to consider the criticality of anisotropic phase separation, we can obtain the scaling of the other quantities: \(y\to c^{-2}y\), \(t\to c^{-4}t\), \(\phi\to c^{1/2}\phi\), \(\bar{\phi}\to c^{5/2}\bar{\phi}\), \(a_{x}\to c^{2}a_{x}\), \(b_{x}\to cb_{y}\), \(b_{y}\to c^{-1}b_{y}\), \(K_{xy}\to c^{-2}K_{xy}\), \(K_{xy}\to c^{-4}K_{yy}\), \(K^{\prime}_{xxxx}\to c^{-2}K^{\prime}_{xxx}\), \(K^{\prime}_{xxy}\to c^{-4}K^{\prime}_{xy}\), \(K^{\prime}_{yyy}\to c^{-8}K^{\prime}_{yyy}\), \(\lambda_{xx}\to c^{-1/2}\lambda_{xx}\), \(\lambda_{xy}\to c^{-5/2}\lambda_{xy}\), \(\lambda_{xx}\to c^{-5/2}\lambda_{xy}\), \(\lambda_{xy}\to c^{-5/2}\lambda_{xy}\), \(\lambda_{xy}\to c^{-9/2}\lambda_{xy}\), \(\zeta_{xy}\to c^{-5/2}\zeta_{xy}\), \(\zeta_{xx}\to c^{-5/2}\zeta_{yx}\), and \(D_{y}\to c^{-2}D_{y}\). Thus, \(a_{x}\) and \(b_{x}\) are relevant variables, the former of which works as a control parameter for the critical phase transition. The other coupling constants, especially \(K^{\prime}\), \(\lambda\), and \(\zeta\), are irrelevant variables. Neglecting all the irrelevant variables, we can obtain the effective action for the critical dynamics of Eq. (61): \[S_{\rm eff}[\phi,\bar{\phi}]=\int_{0}^{T}dt\int d^{2}\mathbf{r}\,[ \bar{\phi}(\partial_{t}\phi-a_{x}\partial_{x}{}^{2}\phi-b_{x}\partial_{x}{}^{2} \phi^{3}\] \[+K_{xx}\partial_{x}{}^{4}\phi)+D_{x}\bar{\phi}\partial_{x}{}^{2}\bar{ \phi}], \tag{65}\] which coincides with the effective action for the randomly driven lattice gas [26; 27; 31; 32].
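These exponents follow from power counting of the action once the invariance of \(a_{y}\), \(K_{xx}\), and \(D_{x}\) fixes \(y\to c^{-2}y\), \(t\to c^{-4}t\), \(\phi\to c^{1/2}\phi\), and \(\bar{\phi}\to c^{5/2}\bar{\phi}\). The bookkeeping can be automated as in the short sketch below; the script is only an illustration of the counting, with the derivative and field content of each term read off from the action above.

```python
# Tree-level power counting of the dynamical action, assuming
# x -> c^{-1}x, y -> c^{-2}y, t -> c^{-4}t, phi -> c^{1/2}phi, phibar -> c^{5/2}phibar.
DIM = {"x": 1.0, "y": 2.0, "t": 4.0, "phi": 0.5, "phibar": 2.5}
MEASURE = -(DIM["t"] + DIM["x"] + DIM["y"])          # from the integral dt dx dy

def exponent(nx, ny, n_phi, n_phibar=1):
    """g -> c^e g for a term g * phibar^n_phibar * (d_x)^nx (d_y)^ny * phi^n_phi."""
    term = n_phibar * DIM["phibar"] + nx * DIM["x"] + ny * DIM["y"] + n_phi * DIM["phi"]
    return -(MEASURE + term)

couplings = {
    "a_x": (2, 0, 1), "a_y": (0, 2, 1), "K_xx": (4, 0, 1), "K_xy": (2, 2, 1),
    "K_yy": (0, 4, 1), "K'_xxx": (6, 0, 1), "K'_xxy": (4, 2, 1), "K'_xyy": (2, 4, 1),
    "K'_yyy": (0, 6, 1), "b_x": (2, 0, 3), "b_y": (0, 2, 3),
    "lam_xx": (4, 0, 2), "lam_xy": (2, 2, 2), "lam_yx": (2, 2, 2), "lam_yy": (0, 4, 2),
    "zeta_xy": (2, 2, 2), "zeta_yx": (2, 2, 2),
    "D_x": (2, 0, 0, 2), "D_y": (0, 2, 0, 2),
}
for name, powers in couplings.items():
    print(f"{name:8s} -> c^{exponent(*powers):+g}")
```

By construction \(a_{y}\), \(K_{xx}\), and \(D_{x}\) come out marginal, while \(a_{x}\) and \(b_{x}\) are relevant and all remaining couplings are irrelevant.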
2309.14843
$t\bar{t}jj$ -- NLO QCD corrections to top quark pair production and decays at the LHC
In these proceedings we present the calculation of NLO QCD corrections to $pp\to t\bar{t}jj$ in the dilepton decay channel. The narrow width approximation is used to model the decays of the top quark pair preserving spin correlations. Jet radiation and QCD corrections are consistently included in the production and decay of the top quarks. We discuss the size of NLO QCD corrections and the main theoretical uncertainties of fiducial cross sections at the integrated and differential level. In addition, we examine the contributions of jet radiation in the production and decay of the top-quark pair, as well as of a mixed configuration where jet radiation is present simultaneously in the production and decay stages.
Daniel Stremmer
2023-09-26T11:17:18Z
http://arxiv.org/abs/2309.14843v1
# \(t\bar{t}jj\) - NLO QCD corrections to top quark pair production and decays at the LHC

###### Abstract: In these proceedings we present the calculation of NLO QCD corrections to \(pp\to t\bar{t}jj\) in the dilepton decay channel. The narrow width approximation is used to model the decays of the top quark pair preserving spin correlations. Jet radiation and QCD corrections are consistently included in the production and decay of the top quarks. We discuss the size of NLO QCD corrections and the main theoretical uncertainties of fiducial cross sections at the integrated and differential level. In addition, we examine the contributions of jet radiation in the production and decay of the top-quark pair, as well as of a mixed configuration where jet radiation is present simultaneously in the production and decay stages.

P3H-23-061, TTK-23-23

+ Footnote †: Speaker
## 1 Introduction

Although the Higgs production in association with a top-quark pair (\(t\bar{t}H\)) is only 1% of the total Higgs cross section at the LHC, this process is of high relevance for the Higgs program at the LHC. Due to its structure, it is a direct probe of the top-quark Yukawa coupling (\(Y_{t}\)) already at the tree level. In 2018 the observation of \(t\bar{t}H\) was reported by the ATLAS and CMS collaborations [1, 2], and recently even single-channel observations in the \(H\to\gamma\gamma\) decay channel were possible [3, 4] despite its small branching ratio. In contrast, in the Higgs decay channel with the largest branching ratio, \(H\to b\,\bar{b}\), such precise measurements are not yet possible due to the large irreducible background from the prompt production of a bottom-quark pair (\(t\bar{t}b\bar{b}\)) and the enormous reducible background from top-quark production with two additional light jets (\(t\bar{t}jj\)), leading to significant systematic uncertainties. In addition, the study of the cross-section ratios \(R_{b}=\sigma_{t\bar{t}b\bar{b}}/\sigma_{t\bar{t}jj}\) and \(R_{c}=\sigma_{t\bar{t}c\bar{c}}/\sigma_{t\bar{t}jj}\) can be used for powerful tests of the efficiency of \(b/c\) tagging algorithms in a complex environment with many jets from different production mechanisms. Such a measurement was already performed by the CMS collaboration, and differences of up to \(2.5\sigma\) have been found between theoretical predictions and measurements of \(R_{b}\) [5]. While for \(t\bar{t}b\bar{b}\) theoretical predictions with matrix elements accurate at NLO QCD in both the production and decay of the top-quark pair are available in the literature [6, 7, 8], the situation is less advanced for \(t\bar{t}jj\), where NLO QCD corrections are only known for stable top quarks [9, 10]. In these proceedings we discuss the first steps towards a more realistic description of \(t\bar{t}jj\) accurate at NLO QCD in both the production and decay of the top-quark pair. We consider the dilepton decay channel and perform the decays of the top quarks and \(W\) gauge bosons in the narrow-width approximation (NWA), i.e. in the limit \(\Gamma/m\to 0\). We consistently include NLO QCD corrections as well as jet radiation in both the production and decay of the top-quark pair. Furthermore, we discuss the effects of jet radiation in the production and decay of the top-quark pair, as well as of a mixed configuration where light radiation is present in both decay stages simultaneously.

## 2 Setup of the calculation

In this section we discuss the main points of the computational setup for the calculation of NLO QCD corrections to \(t\bar{t}jj\) in the dilepton decay channel. The full setup can be found in Ref. [11].
As already discussed in the introduction, the decays of the top quarks and \(W\) gauge bosons are performed in the NWA leading to the following decay chain at LO at the order \(\mathcal{O}(\alpha_{s}^{4}\alpha^{4})\) \[pp\to t\bar{t}(jj)\to W^{+}W^{-}\,b\bar{b}jj\to\ell^{+}\nu_{\ell}\,\ell^{-} \bar{\nu}_{\ell}\,b\bar{b}\,jj+X, \tag{1}\] with \(\ell^{\pm}=\mu^{\pm},e^{\pm}\) and where the brackets indicate that the light jets can be emitted from both the production and decay of the top-quark pair. At this order the process can be uniquely divided into three resonant contributions based on the origin of the light jets according to \[d\sigma_{t\bar{t}jj}^{\rm LO}=\Gamma_{t}^{-2}\overbrace{d\sigma_{t\bar{t}jj} ^{\rm{Prod.}}\,d\Gamma_{t\bar{t}}^{\rm{LO}}}^{\rm{Prod.}}+\overbrace{d\sigma_ {t\bar{t}}^{\rm{LO}}\,d\Gamma_{t\bar{t}jj}^{\rm{Decay}}}^{\rm{Decay}}+ \overbrace{d\sigma_{t\bar{t}jj}^{\rm{Mix}}\,d\Gamma_{t\bar{t}j}^{\rm{LO}}}^ {\rm{Mix}}. \tag{2}\] In particular, we have the _Prod._ (_Decay_) contribution where light jets are emitted only in the production (decay) of the top-quark pair, and finally we have the _Mix_ contribution where jet radiation is present in both decay stages. Example diagrams for the three resonant contributions are shown in Figure 1. In order to include NLO QCD corrections of the order \(\mathcal{O}(\alpha_{s}^{5}\alpha^{4})\), Eq. (2) has to be extended in the following way \[\begin{split} d\sigma^{\text{NLO}}_{\tilde{t}\tilde{t}\tilde{t}jj}& =\Gamma_{t}^{-2}\overbrace{\left(d\sigma^{\text{LO}}_{\tilde{t} \tilde{t}j}+d\sigma^{\text{virt}}_{\tilde{t}\tilde{t}j}+d\sigma^{\text{real}}_ {\tilde{t}\tilde{t}j}\right)\,d\Gamma^{\text{LO}}_{\tilde{t}\tilde{t}}}^{\text {Prod.}}+\overbrace{d\sigma^{\text{LO}}_{\tilde{t}\tilde{t}j}\,\left(d\Gamma^{ \text{LO}}_{\tilde{t}\tilde{t}j}+d\Gamma^{\text{virt}}_{\tilde{t}\tilde{t}j}+d \Gamma^{\text{real}}_{\tilde{t}\tilde{t}j}\right)}^{\text{Decay}}\\ &+\underbrace{d\sigma^{\text{LO}}_{\tilde{t}j}\,d\Gamma^{\text{LO} }_{\tilde{t}\tilde{t}j}+d\sigma^{\text{LO}}_{\tilde{t}\tilde{t}j}\,d\Gamma^{ \text{virt}}_{\tilde{t}\tilde{t}}+d\sigma^{\text{virt}}_{\tilde{t}\tilde{t}j} \,d\Gamma^{\text{LO}}_{\tilde{t}\tilde{t}j}+d\sigma^{\text{virt}}_{\tilde{t} \tilde{t}j}\,d\Gamma^{\text{LO}}_{\tilde{t}\tilde{t}j}+d\sigma^{\text{LO}}_{ \tilde{t}\tilde{t}j}\,d\Gamma^{\text{virt}}_{\tilde{t}\tilde{t}j}+d\sigma^{ \text{real}}_{\tilde{t}\tilde{t}j}\,d\Gamma^{\text{real}}_{\tilde{t}\tilde{t} j}}+d\sigma^{\text{real}}_{\tilde{t}\tilde{t}j}\,d\Gamma^{\text{real}}_{ \tilde{t}\tilde{t}j}\right)}_{\text{Mix}},\end{split} \tag{3}\] where we directly have split the full calculation into the three finite resonant contributions. We note that this equation directly implies a mixing between the different resonant configurations at NLO QCD. The mixing is induced from the fact that QCD splittings in the production or decay of the top-quark pair from different Born resonant configurations can lead to the same set of real corrections. Such real corrections are included in the _Mix_ contribution at NLO QCD where unresolved particles are allowed both in the production and decay of the top-quark pair but the total number of unresolved particles is still limited to one. In the next sections we will discuss this issue in more detail and present an alternative way to quantify the effects from jet radiation and NLO QCD corrections in top-quark decays. 
The calculation is performed within the Helac-Nlo framework [13], consisting of the two programs Helac-1Loop [14, 15, 16, 17, 18] and Helac-Dipoles [19]. The virtual corrections are cross-checked with the matrix element generator Recola [20, 21, 22], and for the real corrections we employ two independent subtraction schemes, the Catani-Seymour [23, 24] and the Nagy-Soper scheme [25], to ensure the correctness of our results. Both subtraction schemes have been extended to handle arbitrary QCD splittings inside decay processes. We closely follow the event selection from a measurement of top-quark pair production with additional light jets by the CMS collaboration [26]. The full event selection and all input parameters can be found in Ref. [11]. A key point of this setup is the choice of the cut \(\Delta R_{jb}>0.8\), which is used to suppress light-jet radiation from the top-quark decays. Because of this, we additionally considered a second, inclusive setup with a reduced cut of \(\Delta R_{jb}>0.4\) to investigate the size of the three resonant contributions in more detail. Figure 1: Representative Feynman diagrams for the Prod., Mix and Decay contributions at LO with suppressed \(W\) gauge boson decays. Feynman diagrams were produced with the help of the FeynGame program [12]. We employ the NNPDF3.1 NLO PDF set [27] via the LHAPDF interface [28] at LO and NLO QCD. The renormalisation (\(\mu_{R}\)) and factorisation (\(\mu_{F}\)) scales are set to a common scale (\(\mu_{0}\)) given by \[\mu_{R}=\mu_{F}=\mu_{0}=\frac{H_{T}}{2}=\frac{1}{2}\left(\sum_{i=1}^{2}p_{T\ell_{i}}+\sum_{i=1}^{2}p_{Tj_{i}}+\sum_{i=1}^{2}p_{Tb_{i}}+p_{T}^{miss}\right), \tag{4}\] and scale uncertainties are obtained from a 7-point scale variation around the central value. ## 3 Integrated fiducial cross sections In Table 1 we show the integrated fiducial cross section at LO and NLO QCD for the full calculation and divided into the three resonant regions for \(\Delta R_{jb}>0.8\). Scale uncertainties as well as statistical uncertainties from the phase space integration are also displayed. We find that at LO the full result is completely dominated by the _Prod._ contribution, which amounts to 97% of the full calculation, and therefore the _Mix_ and _Decay_ configurations can be safely neglected compared to the scale uncertainties of 60% at this order. At LO the \(gg\) production channel is the largest one with about 65% of the full calculation, followed by the \(gq\) channel with 31%, while the purely quark-induced channel \(qq^{\prime}\) is the smallest one with only 4%. For the full calculation we find NLO QCD corrections at the level of 40%, which are well within the LO scale uncertainties. In addition, the theoretical uncertainties obtained from scale variation are reduced by a factor of 4 to 14%. The _Mix_ contribution becomes more relevant at NLO QCD due to the mixing of the different resonant contributions. In particular, its sign changes and its relative size increases in absolute value from 3% to 19%. However, the large negative contribution of _Mix_ is mainly induced by the NLO QCD corrections to the top-quark decays of the _Prod._ configuration at LO. Finally, we have recalculated the integrated fiducial cross section with the MSHT20 [29] and CT18 [30] PDF sets. We find differences in the central value between these two PDF sets and our default one, NNPDF3.1, of about \(1\%-3\%\). These differences are of the same size as the internal PDF uncertainties of the three PDF sets.
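As an aside, the dynamical scale of Eq. (4) and the 7-point variation used for the uncertainty bands can be illustrated with the small sketch below; the event content and all function names are hypothetical and only serve to spell out the definition.

```python
# Sketch of the central scale mu_0 = H_T/2 of Eq. (4) and of a standard 7-point
# scale variation (mu_R, mu_F rescaled by 1/2, 1, 2, dropping the two opposite
# extreme combinations).  The event record below is purely hypothetical.

def central_scale(event):
    """mu_0 = 0.5 * (sum pT(leptons) + sum pT(light jets) + sum pT(b-jets) + pT_miss)."""
    ht = (sum(event["pt_leptons"]) + sum(event["pt_light_jets"])
          + sum(event["pt_b_jets"]) + event["pt_miss"])
    return 0.5 * ht

def seven_point_scales(mu0):
    """Return the seven (mu_R, mu_F) pairs around the central scale."""
    factors = [(1, 1), (2, 2), (0.5, 0.5), (2, 1), (1, 2), (0.5, 1), (1, 0.5)]
    return [(fr * mu0, ff * mu0) for fr, ff in factors]

event = {"pt_leptons": [85.0, 40.0], "pt_light_jets": [120.0, 60.0],
         "pt_b_jets": [95.0, 55.0], "pt_miss": 70.0}          # momenta in GeV

mu0 = central_scale(event)
print(f"mu_0 = H_T/2 = {mu0:.1f} GeV")
for mu_r, mu_f in seven_point_scales(mu0):
    print(f"  mu_R = {mu_r:6.1f} GeV,  mu_F = {mu_f:6.1f} GeV")
```

The scale uncertainty quoted in the tables would then correspond to the envelope of the cross sections evaluated at these seven scale pairs.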
\begin{table} \begin{tabular}{l l l l l} \hline \(i\) & \(\sigma^{\rm LO}\) [fb] & \(\sigma^{\rm NLO}\) [fb] & \(\sigma^{\rm LO}_{i}/\sigma^{\rm LO}_{\rm Full}\) & \(\sigma^{\rm NLO}_{i}/\sigma^{\rm NLO}_{\rm Full}\) \\ \hline Full & \(868.8(2)\,^{+60\%}_{-35\%}\) & \(1225(1)\,^{+1\%}_{-14\%}\) & \(1.00\) & \(1.00\) \\ Prod. & \(843.2(2)\,^{+60\%}_{-35\%}\) & \(1462(1)\,^{+12\%}_{-19\%}\) & \(0.97\) & \(1.19\) \\ Mix & \(25.465(5)\) & \(-236(1)\) & \(0.029\) & \(-0.19\) \\ Decay & \(0.2099(1)\) & \(0.1840(8)\) & \(0.0002\) & \(0.0002\) \\ \hline \end{tabular} \end{table} Table 1: Integrated fiducial cross section at LO and NLO QCD for the \(pp\to t\bar{t}jj\) process with \(\Delta R_{jb}>0.8\). Results are shown for the full calculation and for the three resonant contributions _Prod._, _Decay_ and _Mix_. Table was taken from [11]. As already motivated in the last section, we have performed the same calculation for a second setup by reducing the cut \(\Delta R_{jb}>0.8\) to \(\Delta R_{jb}>0.4\). The integrated fiducial cross section for this setup is shown in Table 2. \begin{table} \begin{tabular}{l l l l l} \hline \(i\) & \(\sigma^{\rm LO}\) [fb] & \(\sigma^{\rm NLO}\) [fb] & \(\sigma^{\rm LO}_{i}/\sigma^{\rm LO}_{\rm Full}\) & \(\sigma^{\rm NLO}_{i}/\sigma^{\rm NLO}_{\rm Full}\) \\ \hline Full & \(1074.5(3)\,^{+60\%}_{-35\%}\) & \(1460(1)\,^{+1\%}_{-13\%}\) & \(1.00\) & \(1.00\) \\ Prod. & \(983.1(3)\,^{+60\%}_{-35\%}\) & \(1662(1)\,^{+11\%}_{-18\%}\) & \(0.91\) & \(1.14\) \\ Mix & \(89.42(3)\) & \(-205(1)\) & \(0.083\) & \(-0.14\) \\ Decay & \(1.909(1)\) & \(2.436(6)\) & \(0.002\) & \(0.002\) \\ \hline \end{tabular} \end{table} Table 2: Same as in Table 1 but for \(\Delta R_{jb}>0.4\). Table was taken from [11]. First, we notice only a small dependence on this cut for _Prod._, since this contribution increases only slightly, by 16%. However, for _Mix_ and _Decay_ the dependence is much stronger, as these contributions increase by 250% and 810%, respectively. Still, both contributions amount to only 8.3% and 0.2% of the full calculation at LO and are thus significantly smaller than the scale uncertainties. Due to the mixing at NLO QCD we find that the relative size of the _Mix_ contribution is reduced in absolute value to 14% from 19% in the default setup. An alternative way to quantify the effects of NLO QCD corrections and jet radiation in top-quark decays is the comparison of the full calculation with the _Prod._ contribution with LO top-quark decays, which we call _Prod. LOdecay_ and which simply amounts to a rescaling of the _Prod._ result to the LO top-quark width. We find for our default setup with \(\Delta R_{jb}>0.8\) that this approximation leads to the same central value of the integrated fiducial cross section as the full calculation. For the second setup with \(\Delta R_{jb}>0.4\) this approximation underestimates the full calculation by 5%. In addition, the scale uncertainties of the full calculation are reduced at the level of 5% compared to this approximation for both setups. ## 4 Differential fiducial cross sections In the next step we have performed a comparison between the full calculation (orange) and the _Prod. LOdecay_ approximation (green) at NLO QCD also at the differential level, shown in Figure 2 for the two observables \(p_{T,\,j_{1}j_{2}}\) and \(\Delta\phi_{j_{1}j_{2}}\) with \(\Delta R_{jb}>0.4\). For the first observable \(p_{T,\,j_{1}j_{2}}\) we
find shape distortions between the two predictions of up to 20% at the beginning of the spectrum, which are reduced towards the tail, where the two results become identical. Figure 2: Differential cross-section distributions for the observables \(p_{T,\,j_{1}j_{2}}\) and \(\Delta\phi_{j_{1}j_{2}}\) employing the full calculation (orange) and the Prod. contribution with LO top-quark decays (green) with \(\Delta R_{jb}>0.4\). Figures were taken from [11]. Similar to the integrated level discussed in the previous section, the scale dependence in the full calculation is reduced by 5% for \(p_{T,\,j_{1}j_{2}}<300\) GeV. Also for angular distributions like \(\Delta\phi_{j_{1}j_{2}}\), shape distortions of up to 15% at large azimuthal angle differences are possible between the full calculation and the _Prod. LOdecay_ approximation, and again the scale dependence is reduced in the full calculation by 5%. Finally, we discuss the size of NLO QCD corrections and of the different resonant contributions at the differential level for the observables \(H_{T}^{had}=\sum_{i=1}^{2}p_{Tj_{i}}+\sum_{i=1}^{2}p_{Tb_{i}}\) and \(\Delta R_{j_{1}j_{2}}\), shown in Figures 3 and 4. On the left side we display the LO (blue) and NLO QCD (orange) predictions and on the right side we show the relative size of the different resonant contributions _Prod._ (orange), _Mix_ (green) and _Decay_ (purple) for \(\Delta R_{jb}>0.8\) (solid lines) and \(\Delta R_{jb}>0.4\) (dashed lines). Figure 3: _Left: Differential cross-section distribution for the observable \(H_{T}^{had}\) at LO and NLO QCD with \(\Delta R_{jb}>0.8\). Right: Relative size of the three resonant contributions, Prod., Mix and Decay, at NLO QCD for \(\Delta R_{jb}>0.8\) (solid line) and \(\Delta R_{jb}>0.4\) (dashed line). Figures were taken from [11]._ We find for \(H_{T}^{had}\) NLO QCD corrections of about \(30\%-60\%\), which are within the LO uncertainty band. The scale uncertainties are reduced from 60% to 15%. The relative size of _Mix_ is the same for both setups in the tail, at the level of \(-20\%\), while in the beginning of the spectrum this contribution obtains large shape distortions (enhanced for \(\Delta R_{jb}>0.4\)) and even its sign changes. Also for angular distributions we have in general NLO QCD corrections of moderate size, as shown for the observable \(\Delta R_{j_{1}j_{2}}\), of about \(30\%-50\%\). Again the NLO prediction lies completely within the LO scale uncertainty band. The relative size of the _Mix_ contribution is rather flat for \(\Delta R_{jb}>0.8\) over the entire range, at the level of \(20\%-25\%\), while for the inclusive setup (\(\Delta R_{jb}>0.4\)) larger shape distortions in the range of \(7\%-25\%\) are present in the back-to-back region at \(\Delta R_{j_{1}j_{2}}\approx 3\). Figure 4: Same as in Figure 3 but for the observable \(\Delta R_{j_{1}j_{2}}\). Figures were taken from [11]. ## 5 Cross section ratios Finally, we present in Table 3 results for the cross section ratios \(\mathcal{R}_{n}=\sigma_{t\bar{t}+nj}/\sigma_{t\bar{t}+(n-1)j}\) for \(n=1,2\) in the dilepton decay channel with \(\Delta R_{jb}>0.8\). We employ the central scale \(\mu_{0}=H_{T}/2\) given in Eq. (4) for all processes entering the cross section ratios, where we restrict the summation of \(p_{Tj_{i}}\) to the number of light jets present in the corresponding Born process. Scale uncertainties are calculated in a correlated way by simultaneously varying the scale in the numerator and denominator. We obtain for both ratios, \(\mathcal{R}_{1}\) and \(\mathcal{R}_{2}\), NLO QCD corrections of \(4\%-5\%\), which are in agreement within the LO scale uncertainties of \(11\%-12\%\).
\begin{table} \begin{tabular}{l c c c} \hline \(\mathcal{R}_{n}\) & \(\mathcal{R}^{\rm LO}\) & \(\mathcal{R}^{\rm NLO}\) & \(\mathcal{R}^{\rm NLO}_{\rm exp}\) \\ \hline \(\mathcal{R}_{1}=\sigma_{t\bar{t}j}/\sigma_{t\bar{t}}\) & \(0.3686\,^{+12\%}_{-10\%}\) & \(0.3546\,^{+40\%}_{-5\%}\) & \(0.3522\,^{+0\%}_{-3\%}\) \\ \(\mathcal{R}_{2}=\sigma_{t\bar{t}jj}/\sigma_{t\bar{t}j}\) & \(0.2539\,^{+11\%}_{-9\%}\) & \(0.2660\,^{+0\%}_{-5\%}\) & \(0.2675\,^{+0\%}_{-2\%}\) \\ \hline \end{tabular} \end{table} Table 3: LO and (expanded) NLO cross section ratios for the \(pp\to t\bar{t}+nj\) processes. Results are given for the default cuts with \(\Delta R_{jb}>0.8\). Scale uncertainties are also shown. Table was taken from [11]. These uncertainties are reduced at NLO to \(5\%\). In the last column we present a consistent expansion in \(\alpha_{s}\) of this ratio at NLO QCD, labeled as \(\mathcal{R}_{\rm exp}^{\rm NLO}\). By this expansion the scale dependence is reduced to \(2\%-3\%\), and the central value differs by less than \(1\%\) with respect to \(\mathcal{R}^{\rm NLO}\). The internal PDF uncertainties of the NNPDF3.1 PDF set are at the level of \(0.5\%\) and thus significantly smaller than the theoretical uncertainties obtained by scale variation. ## 6 Summary In these proceedings we have presented the calculation of NLO QCD corrections to the \(pp\to t\bar{t}jj\) process in the dilepton decay channel at the LHC. Both NLO QCD corrections as well as jet radiation have been consistently included in the production and decay of the top-quark pair. At LO the full calculation was completely dominated by the \(\it{Prod.}\) contribution, such that \(\it{Mix}\) and \(\it{Decay}\) can be safely neglected at this order. However, at NLO we have found that the different resonant contributions start to mix, and because of that the \(\it{Mix}\) contribution changes its sign and increases in absolute value to about \(20\%\), thus becoming non-negligible. NLO QCD corrections at the level of \(40\%\) have been found, which are well within the LO scale uncertainties. In addition, these theoretical uncertainties are reduced by a factor of 4 to 14% at NLO QCD and still remain dominant compared to the internal PDF uncertainties, which vary between \(1\%-3\%\) for different PDF sets. ## Acknowledgments This work was supported by the Deutsche Forschungsgemeinschaft (DFG) under grant 396021762 - TRR 257: _P3H - Particle Physics Phenomenology after the Higgs Discovery_.
2309.14857
Cluster Exploration using Informative Manifold Projections
Dimensionality reduction (DR) is one of the key tools for the visual exploration of high-dimensional data and uncovering its cluster structure in two- or three-dimensional spaces. The vast majority of DR methods in the literature do not take into account any prior knowledge a practitioner may have regarding the dataset under consideration. We propose a novel method to generate informative embeddings which not only factor out the structure associated with different kinds of prior knowledge but also aim to reveal any remaining underlying structure. To achieve this, we employ a linear combination of two objectives: firstly, contrastive PCA that discounts the structure associated with the prior information, and secondly, kurtosis projection pursuit which ensures meaningful data separation in the obtained embeddings. We formulate this task as a manifold optimization problem and validate it empirically across a variety of datasets considering three distinct types of prior knowledge. Lastly, we provide an automated framework to perform iterative visual exploration of high-dimensional data.
Stavros Gerolymatos, Xenophon Evangelopoulos, Vladimir Gusev, John Y. Goulermas
2023-09-26T11:35:25Z
http://arxiv.org/abs/2309.14857v3
# Cluster Exploration using Informative Manifold Projections ###### Abstract Dimensionality reduction (DR) is one of the key tools for the visual exploration of high-dimensional data and uncovering its cluster structure in two- or three-dimensional spaces. The vast majority of DR methods in the literature do not take into account any prior knowledge a practitioner may have regarding the dataset under consideration. We propose a novel method to generate informative embeddings which not only factor out the structure associated with different kinds of prior knowledge but also aim to reveal any remaining underlying structure. To achieve this, we employ a linear combination of two objectives: firstly, contrastive PCA that discounts the structure associated with the prior information, and secondly, kurtosis projection pursuit which ensures meaningful data separation in the obtained embeddings. We formulate this task as a manifold optimization problem and validate it empirically across a variety of datasets considering three distinct types of prior knowledge. Lastly, we provide an automated framework to perform iterative visual exploration of high-dimensional data. ## 1 Introduction Data exploration focuses on identifying informative patterns to discover new insight and knowledge about a collection of data. The often high-dimensional nature of such data renders the visual exploration process intractable for the human eye, and therefore specialized data manipulation of the original samples is essential in practice. Dimensionality reduction methods have been at the forefront of this challenge Bishop (2006) aiming to recover lower-dimensional embeddings of the original data that facilitate the identification of underlying data cohorts and help understand better the problem at hand. One of the most well known dimensionality reduction approaches perhaps is principal component analysis (PCA) Hotelling (1933), an efficient linear method aiming to maximizing the variance along the projection vectors, which in practice appears insufficient for meaningful separation of cohorts. A variety of non-linear methods have also been proposed that conversely focus on locally preserving the structure of the data such as Isomap Tenenbaum et al. (2000), LLE Roweis and Saul (2001), t-SNE van der Maaten and Hinton (2008), UMAP McInnes and Healy (2018), TriMap Amid and Warmuth (2019) and LargeVis Tang et al. (2016), etc. Projection pursuit (PP) Friedman and Tukey (1974), Caussinus and Ruiz-Gazen (2010) defines a family of dimensionality reduction methods that can enable various embedding effects depending on a suitably selected criterion. The kurtosis index Chiang et al. (2001) is one specific PP example that specializes in identifying "interesting" projections. Its minimization particularly penalizes the normality of the data distribution, promoting thus more meaningful separability when searching for clusters. The above approaches nevertheless share the same attribute of offering a single static projection that does not consider any prior knowledge a practitioner may have regarding the high-dimensional latent structure. Such projections can be uninformative as they tend to illustrate the most evident features which are often already known by the reader. 
In practise, it has been shown Cavallo and Demiralp (2019) that an interactive or dynamic exploration of the available data can capture better their high-dimensional structure, especially when knowledge from a continual analysis of the data or cohort distribution can be factored into the analysis. Recently, a number of such methods were introduced to dynamically generate new embeddings guided by the user Senanayake et al. (2019). Sometimes, we want to obtain projections that remove variations with respect to some specific data samples. Contrastive PCA (cPCA) Abid et al. (2017) can be useful for this as it generates data projections which reveal structures that are more enriched in one data set relative to other data. Our work focuses on computing informative data projections that factor out different types of prior knowledge and reveal any previously unknown high-dimensional structure. We achieved this by jointly minimizing a projection pursuit objective with a background variance-based objective. Here, prior knowledge reflects some information a practitioner has about the data and informativeness is interpreted as data separation which unveils unknown underlying structure. Our method can be implemented on three different prior (background) knowledge cases. In each case, the prior knowledge is represented by a dataset whose structure we wish to remove from the obtained embeddings. More specifically, the cases are: * **Attribute-based prior**. In this case, the prior data consist of a subset of attributes of the high-dimensional dataset. We want to obtain embeddings that reveal the structure associated with the remaining attributes by discounting the structure of the attributes of the prior dataset. * **Sample-based prior**. We wish to visually explore a complex dataset that consists of the combination of a reference dataset (e.g. Fashion-MNIST) and a background dataset (e.g. MNIST). The background data modify and/or corrupt the reference dataset and we want to obtain embeddings that remove the background structure and disclose the reference one. * **Subset-based prior**. The prior dataset consists of a subset of the original high-dimensional samples which are known to be similar to each other. By removing the structure associated with these samples, we can learn embeddings that reveal the structure of the remaining datapoints. Iterative visual exploration can take place in this setting by repeatedly updating the prior to include one of the clusters. We specifically propose an efficient and effective optimization on the Stiefel manifold Stiefel (1935/36), which appears to empirically perform better compared to other related methods and helps circumvent numerical issues that are common in practice. Our main contributions are as follows: * A novel objective function which when optimised computes projections that factor out different types of prior knowledge while also revealing previously unknown underlying structure. * Manifold optimisation modeling of the complex loss function to achieve numerical stability and fast convergence to a desirable solution. * An iterative framework that can be applied to produce multiple informative projections for the visual data exploration of high-dimensional data. The rest of the paper is organised as follows: Section 2 expands upon related literature and in Section 3 we give a detailed description of our proposed method. Section 4 provides a quantitative and qualitative analysis of our results and comparison with related approaches. 
## 2 Related Work A few works on interactive DR have been proposed in the last few years. Contrastive PCA Abid et al. (2017) computes data projections which highlight the salient structure of some reference data while discarding the structure of some background data. Conditional t-SNE (ct-SNE) Kang et al. (2021) is a generalisation of t-SNE which considers some prior knowledge in order to construct informative 2D embeddings. Prior knowledge corresponds to some information that the user is already aware of and can be represented by a set of labels assigned to the data samples. The labels are either available before any analysis or can be inferred by clustering a set of embeddings. To discount the known factor of the labels, new embeddings are then generated which can provide insight into any underlying unknown structure. Unlike ct-SNE, which produces non-linear embeddings, SIDE Puolamaki et al. (2018) is a linear approach which takes as input some prior knowledge in terms of a background distribution of points. These points are known to be similar to each other either _a priori_ or after some analysis. Projections that promote the maximal difference between the data and the background distribution are then computed. Another recently published method Puolamaki et al. (2021) allows a user to guide the examination procedure according to their own exploration interests. The users can formulate their prior knowledge as well as their specific interests in terms of relations among a subset of samples and a subset of attributes. These are then introduced to the model for computing the projections. Finding lower-dimensional embeddings over matrix manifolds has recently become quite popular, mostly due to the flexibility the constraint-free manifold optimization offers Absil et al. (2007). The (compact) Stiefel manifold Stiefel (1935/36), i.e., the set of all k-tuples of orthonormal vectors, has been employed in various dimensionality reduction applications Afsari and Krishnaprasad (2004); Theis et al. (2009), as well as in general machine learning ones Tompkins and Wolfe (2007); Cetingul and Vidal (2009). More recently, the Grassmann manifold has been employed for lower-dimensional embedding of 3D point clouds Haitman et al. (2021). ## 3 Methodology In this section, we first present the optimization details of our method and then introduce our framework for structure extraction and knowledge update, with which we can perform iterative visual exploration of high-dimensional data. ### Objective formulation and Optimization Let \(\mathbf{X}\in\mathbb{R}^{n\times d}\) be the data matrix with each row \(\{\mathbf{x}_{i}\}_{i=1}^{n}\) corresponding to the coordinates of the \(i^{\text{th}}\) data point. Our goal is to generate a set of low-dimensional embedded points \(\{\mathbf{q}_{i}\}_{i=1}^{n}\in\mathbb{R}^{k}\) (with \(k\ll d\)) that render meaningful data separation based on prior information to maximize data cohort informativeness. Kurtosis Chiang et al. (2001) is a measure of non-normality that tends to reveal informative data cohorts within a dataset. For univariate data projections, the kurtosis is defined as: \[\kappa=\frac{n\sum_{i=1}^{n}\left(\mathbf{v}^{\top}\mathbf{x}_{i}\mathbf{x}_{i}^{\top}\mathbf{v}\right)^{2}}{\left(\mathbf{v}^{\top}\mathbf{X}^{\top}\mathbf{X}\mathbf{v}\right)^{2}} \tag{1}\] where \(\mathbf{v}\in\mathbb{R}^{d}\) is the projection vector. We propose combining the kurtosis index with the cPCA Abid et al. (2017) reconstruction loss term.
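As a concrete illustration of the kurtosis index in Eq. (1), a minimal NumPy sketch (ours, assuming mean-centred data) is given below; the toy example shows that a projection with bimodal structure scores lower than a purely Gaussian one.

```python
import numpy as np

def kurtosis_index(X, v):
    """Kurtosis projection-pursuit index of Eq. (1) for a single projection vector v.

    X : (n, d) mean-centred data matrix, v : (d,) projection vector.
    Small values indicate a less Gaussian (more 'interesting') projection.
    """
    z = X @ v                              # projected samples, shape (n,)
    n = X.shape[0]
    numerator = n * np.sum(z ** 4)         # n * sum_i (v^T x_i x_i^T v)^2
    denominator = np.sum(z ** 2) ** 2      # (v^T X^T X v)^2
    return numerator / denominator

# Toy usage: a direction separating two blobs has lower kurtosis than a Gaussian one.
rng = np.random.default_rng(0)
blob = np.vstack([rng.normal(-3, 1, (200, 1)), rng.normal(3, 1, (200, 1))])
noise = rng.normal(0, 1, (400, 1))
X = np.hstack([blob, noise])
X = X - X.mean(axis=0)
print(kurtosis_index(X, np.array([1.0, 0.0])))   # bimodal direction -> roughly 1.4
print(kurtosis_index(X, np.array([0.0, 1.0])))   # Gaussian direction -> roughly 3
```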
cPCA can efficiently maintain directions of high variance while discarding uninteresting directions of low variance. In our case, we wish to factor out from the projections the directions which are associated with our prior knowledge. However, cPCA (like PCA) is not explicitly designed to provide low-dimensional projections that exhibit meaningful data segregation. As a result, it often happens that cPCA embeddings are not informative. Jointly optimizing the kurtosis term ensures improved data separation that can reveal underlying structure. Let us assume that \(\{\mathbf{y}_{i}\}_{i=1}^{m}\) are some data associated with our prior knowledge. We formulate the above requirements in the following optimization and term our proposed method IMAPCE (Informative MAnifold Projections for Cluster Exploration) throughout the rest of the manuscript: \[\mathbf{V}^{\star}=\operatorname*{arg\,min}_{\mathbf{V}\in\mathbb{R}^{d\times k}}f(\mathbf{V})\triangleq\|\mathbf{X}-\mathbf{X}\mathbf{V}\mathbf{V}^{\top}\|_{F}^{2}-\alpha\|\mathbf{Y}-\mathbf{Y}\mathbf{V}\mathbf{V}^{\top}\|_{F}^{2}+\mu n\sum_{i=1}^{n}\left[\mathbf{x}_{i}^{\top}\mathbf{V}(\mathbf{V}^{\top}\mathbf{X}^{\top}\mathbf{X}\mathbf{V})^{-1}\mathbf{V}^{\top}\mathbf{x}_{i}\right]^{2}\quad s.t.\;\;\;\mathbf{V}^{\top}\mathbf{V}=\mathbf{I}, \tag{2}\] where \(\mu\) is a scaling parameter, \(\alpha\) (as in cPCA) regulates the trade-off between having a high target-data variance and a low background-data variance, and \(\mathbf{V}\in\mathbb{R}^{d\times k}\) with \(k<d\) is the projection matrix to be computed. Setting \(\alpha=0\) corresponds to assuming no prior information (and therefore no prior data). Due to its quartic form, the kurtosis index can have multiple local minima and thus its optimization is a challenging task. Quasi-power methods have recently emerged Hou and Wentzell (2011); Driscoll et al. (2019); Siyuan and Wentzell (2014) as a feasible and efficient alternative to gradient-based approaches. Nevertheless, they can be less stable when the covariance matrix \(\mathbf{X}^{\top}\mathbf{X}\) is singular. To alleviate this issue, a dimensionality reduction of the original samples, such as SVD, is required, and the kurtosis is then optimized on the newly embedded points, which can often lead to weak representations. To avoid this issue, we instead optimize Eq. (2) directly over the Stiefel manifold Stiefel (1935/36), \(St(k,d)\triangleq\{\mathbf{M}\in\mathbb{R}^{d\times k}:\mathbf{M}^{\top}\mathbf{M}=\mathbf{I}\}\), which is a subset of the Euclidean space \(\mathbb{R}^{d\times k}\). The optimization can be carried out using any gradient-based solver, such as steepest descent Absil et al. (2007), without the need to perform any preprocessing of the original data due to singularity issues. The gradient of \(f(\mathbf{V})\) is given as \[\nabla_{\mathbf{V}}f(\mathbf{V})=2\big{(}\alpha\mathbf{Y}^{\top}\mathbf{Y}-\mathbf{X}^{\top}\mathbf{X}\big{)}\mathbf{V}+4\mu n\sum_{i=1}^{n-m}\big{(}\mathbf{x}_{i}^{\top}\mathbf{V}\mathbf{A}^{-1}\mathbf{V}^{\top}\mathbf{x}_{i}\big{)}\big{[}(\mathbf{x}_{i}\mathbf{x}_{i}^{\top})\mathbf{V}\mathbf{A}^{-1}-\big{(}\mathbf{X}^{\top}\mathbf{X}\big{)}\mathbf{V}\mathbf{A}^{-1}\big{(}\mathbf{V}^{\top}\mathbf{x}_{i}\mathbf{x}_{i}^{\top}\mathbf{V}\big{)}\mathbf{A}^{-1}\big{]}, \tag{3}\] where \(\mathbf{A}=\mathbf{V}^{\top}\mathbf{X}^{\top}\mathbf{X}\mathbf{V}\).
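A minimal sketch of how Eq. (2) can be optimized is shown below. It is our own simplified illustration: it evaluates the objective with NumPy, uses a finite-difference gradient instead of the closed form in Eq. (3), and takes plain descent steps followed by a QR retraction back onto the Stiefel manifold, rather than the Pymanopt solver used in the actual implementation; all names, data and step sizes are placeholders.

```python
import numpy as np

def imapce_objective(V, X, Y, alpha, mu):
    """Objective of Eq. (2): cPCA-style reconstruction terms plus the kurtosis term."""
    n = X.shape[0]
    rec_x = np.linalg.norm(X - X @ V @ V.T) ** 2
    rec_y = np.linalg.norm(Y - Y @ V @ V.T) ** 2
    A_inv = np.linalg.inv(V.T @ X.T @ X @ V)
    proj = X @ V                                   # (n, k) projected samples
    kurt = n * np.sum(np.einsum("ij,jk,ik->i", proj, A_inv, proj) ** 2)
    return rec_x - alpha * rec_y + mu * kurt

def stiefel_step(V, grad, lr=1e-3):
    """One descent step followed by a QR retraction back onto V^T V = I."""
    Q, R = np.linalg.qr(V - lr * grad)
    return Q * np.sign(np.diag(R))                 # fix column signs of the QR factor

def numerical_gradient(V, X, Y, alpha, mu, eps=1e-6):
    """Finite-difference Euclidean gradient (Eq. (3) gives the closed form)."""
    G = np.zeros_like(V)
    for idx in np.ndindex(V.shape):
        E = np.zeros_like(V); E[idx] = eps
        G[idx] = (imapce_objective(V + E, X, Y, alpha, mu)
                  - imapce_objective(V - E, X, Y, alpha, mu)) / (2 * eps)
    return G

# Toy usage with random data standing in for the target X and prior Y:
rng = np.random.default_rng(1)
X, Y = rng.normal(size=(100, 6)), rng.normal(size=(40, 6))
V = np.linalg.qr(rng.normal(size=(6, 2)))[0]
for _ in range(50):
    V = stiefel_step(V, numerical_gradient(V, X, Y, alpha=1.0, mu=1.0))
print(imapce_objective(V, X, Y, alpha=1.0, mu=1.0))
```

In practice the authors report using Pymanopt's Riemannian solvers, which additionally project the gradient onto the tangent space of the manifold; the QR retraction above is only meant to convey the idea of constraint-free optimization on \(St(k,d)\).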
For visualization purposes we usually set \(k=2,3\), and in practice therefore \(\mathbf{A}\in\mathbb{R}^{k\times k}\) is of small size, rendering its condition number \(\kappa(\mathbf{A})=\|\mathbf{A}\|_{F}\,\|\mathbf{A}^{-1}\|_{F}\) relatively small, so that \(\mathbf{A}\) is well-conditioned. By jointly minimizing both objectives of Eq. (2) on the Stiefel manifold we avoid any singularity issues and at the same time obtain more informative separability of cohorts, as we will empirically demonstrate later in the experiments. IMAPCE has a limitation that derives from the optimisation of kurtosis. More specifically, minimisation of kurtosis was found to produce cluster artifacts for datasets whose size is very close to their dimensionality Hou and Wentzell (2011). As a result, the same holds for IMAPCE, and preprocessing (e.g. PCA or SVD) is essential for such datasets. ### Iterative Visual Exploration In the case of a subset-based prior, we are _a priori_ aware that a subset of samples of some high-dimensional data are similar to each other (e.g. they share the same class) and we wish to explore the cluster structure of the remaining data points. To achieve this, we calculate and optimise the kurtosis term over the remaining (unexplored) samples, which are defined as \(\{\mathbf{z}_{i}\}_{i=1}^{(n-m)}=\{\mathbf{x}_{i}\}_{i=1}^{n}\backslash\{\mathbf{y}_{i}\}_{i=1}^{m}\). As a result, the obtained projections provide meaningful data separation of the unexplored samples and reveal their cluster structure. To extract the structure of the unexplored samples \(\mathbf{Z}\), we perform the clustering of their embeddings. While the obtained clusters unveil some previously unknown structure, some of them are not as informative as others. We argue that the cluster which is most separated from the rest is the most informative and refer to it as the _most distinct_. We extract this cluster because it is expected to have the greatest probability of consisting of very similar points. Subsequently, we dynamically update our prior data \(\mathbf{Y}\) to include the points of this cluster, while also removing them from the unexplored subset \(\mathbf{Z}\). Optimisation of Eq. (2) then takes place to compute new 2D embeddings that exhibit data separation of the updated unexplored data, where cluster extraction can again be performed. This process continues iteratively until no more data remain unexplored, and it efficiently provides a gradual exploration of the unknown underlying structure of high-dimensional data. To cluster the embeddings we used the Bayesian infinite Gaussian mixture model Rasmussen (1999), Anderson (1991), Neal (2000), commonly referred to as the Dirichlet Process Gaussian Mixture Model (DPGMM), as it does not require a predefined number of clusters (but rather a maximum cluster number) and because of its clustering quality. More information about the DPGMM is provided in the Dirichlet Process Gaussian Mixture Model Appendix. Given the mean \(\mathbf{m}_{l}\) and covariance matrix \(\mathbf{C}_{l}\) for each cluster \(l\), we can extract the most distinct cluster by calculating all pairwise distances of cluster centers and their respective distributions using the Mahalanobis distance Mahalanobis (1936) \[\delta_{lj}=\sqrt{(\mathbf{m}_{j}-\mathbf{m}_{l})^{\top}\mathbf{C}_{l}^{-1}(\mathbf{m}_{j}-\mathbf{m}_{l})}. \tag{4}\] To avoid selecting a small group of outliers as a cluster, we define a minimum acceptable cluster size.
All clusters with fewer points than this threshold are discarded as outliers and their distances are not considered. From Eq. (4) we define a symmetric pairwise distance matrix for all clusters as \(D_{lj}=(\delta_{lj}+\delta_{jl})/2\) and the most distinct cluster is given by \[c^{\star}=\arg\max\mathbf{D}\mathbf{1}, \tag{5}\] where \(\mathbf{1}\) is the vector of all ones. As a special case, if only two clusters of acceptable size are detected, then both are chosen as most distinct. Algorithm 1 outlines the major steps of our proposed iterative framework. ### Hyperparameter tuning Hyperparameters \(\alpha\) and \(\mu\) have to be chosen for IMAPCE. We need to select \(\alpha\) if there are prior data (otherwise it is set to zero), while \(\mu\) has to be selected regardless of the availability of prior data. As a rule of thumb, setting \(\alpha=1\) empirically provides embeddings with a desirable trade-off between high original data variance and low prior data variance. In practice we observe that the kurtosis term is not greater than 10-20. After computing the cPCA (PCA if there are no prior data) reconstruction error of our original data, \(\mu\) is selected as one or two orders of magnitude below the cPCA reconstruction error. In this way, kurtosis can influence the optimization. As for \(s\), which needs to be selected for the iterative visual exploration, it denotes the minimum acceptable size of a cluster and is used for discarding outliers as well as for stopping the exploration process. Its choice depends on the size and dimensionality of the original data. ## 4 Experimental Setup To showcase that IMAPCE can efficiently factor out different types of prior knowledge, we ran experiments on several datasets for all previously mentioned prior knowledge types. We provide both quantitative and qualitative results and compare the performance of IMAPCE with cPCA and ct-SNE. We compare with these methods because they have the same goal as IMAPCE, which is to generate embeddings that promote some underlying structure of the data while removing any prior knowledge. For cPCA, a few projection matrices are computed for a fixed user-selected number of \(\alpha\)'s (trade-off hyperparameter) and spectral clustering is applied to them. The projection matrix which corresponds to the cluster medoid is used to compute the projections. For ct-SNE, the label information which we wish to remove from the embeddings is given as prior data. Details about the selection of its hyperparameters are provided for each dataset. We implemented our method in Python and used Pymanopt Townsend et al. (2016), which is a Python toolbox for optimization on Riemannian manifolds. The DPGMM used for clustering is implemented via the scikit-learn library Pedregosa et al. (2011). IMAPCE and cPCA implementations are given in [https://github.com/StavGer/IMPACE](https://github.com/StavGer/IMPACE), while for ct-SNE we used the official implementation. We organise the experimental section by prior type. ### Attribute-based prior The task in this case is to generate embeddings that factor out the structure of one or more selected attributes of the high-dimensional data in order to reveal the structure of the remaining attributes. We ran experiments using IMAPCE, ct-SNE and cPCA and compare their performance on both synthetic and real-world data. #### 4.1.1 Synthetic data The synthetic dataset Heiter et al. (2023) consists of 1500 ten-dimensional points.
All points are assigned to one of two clusters (with centers sampled from \(\mathcal{N}(0,25)\)) in the first four dimensions and one of three clusters (with centers sampled from \(\mathcal{N}(0,1)\)) in dimensions 5-6. For each point we add noise from \(\mathcal{N}(0,0.01)\). The last four dimensions correspond to samples from \(\mathcal{N}(0,1)\). To run IMPACE (we set \(\alpha=1\), \(\mu=200\)) and cPCA, we define as prior data the first four dimensions and wish to generate embeddings that reveal the complementary cluster structure of dimensions 5-6. With the same goal, we implement ct-SNE (by selecting its hyperparameters according to Heiter et al. (2023)) using the cluster labels of the first four dimensions as prior knowledge. The generated embeddings are visualised in figures 1(a)-1(c) where prior data information is encoded according to shape and complementary structure according to color. We observe that cPCA embeddings are clustered with respect to their shape, indicating that the structure of prior data is not removed. On the contrary, both ct-SNE and IMAPCE compute embeddings that factor out the prior information as there is mix of points with different shapes. However, ct-SNE fails to reveal the complementary cluster structure since embeddings with different colors are mixed. On the other hand, IMAPCE clearly groups the embeddings according to their colors, unveiling the complementary structure. The normalised Laplacian score was proposed Kang et al. (2021) in order to quantify the presence of some prior label information on a set of embeddings. This score takes values in [0, 1] and measures the label homogeneity within a user-selected neighborhood in an embedding set. If the embeddings remove the structure associated with some prior data labels, we expect the Laplacian to be large when computed with respect to these labels. To quantitatively compare how well cPCA, ct-SNE and IMAPCE embeddings factor out the prior information, we calculated the Laplacian scores on the prior information of dimensions 1-4. We provide the scores for a range of different neighborhood sizes (hyperparameter of Laplacian score) in Figure 1(d). IMAPCE achieved higher Laplacian scores (lower homogeneity) than cPCA and ct-SNE, indicating that it removes the prior information more effectively than them (as was observed qualitatively). Given that the kurtosis term makes the difference between cPCA and IMAPCE, we can infer that its inclusion in IMAPCE achieves the meaningful data separation that reveals some unknown structure which is missed by cPCA. #### 4.1.2 UCI Adult data We sampled 1000 data points from the UCI Adult dataset Becker and Kohavi (1996) which consists of six features. Age, education level, and work hours per week are numeric ones while ethnicity (white/other), gender (male/female) and income (\(>\) 50k) are binary ones. Using the ethnicity feature as prior, we obtain cPCA, ct-SNE (selecting its hyperparameters according to the original work Kang et al. (2021)) and IMAPCE (\(\alpha=1,\mu=150\)) embeddings as shown in figures 1(e) - 1(g). cPCA provides embeddings with mixed ethnicity, gender and income features, failing to exhibit any clear cluster formation. On the contrary, ct-SNE contains clusters of different gender and income points but does not remove the prior information as there are also clusters of different ethnicities. 
Finally, IMAPCE embeddings remove the prior information by having mixed ethnicities while they also reveal the cluster structure of both gender (red-green colors) and income (filled-unfilled markers) attributes. Similar to the synthetic data experiments, we compared the Laplacian scores of cPCA, ct-SNE and IMAPCE embeddings evaluated on the ethnicity prior as shown in Figure 1(h). IMAPCE has larger Laplacian scores and thus discounts the ethnicity prior information more effectively than both cPCA and ct-SNE. Equivalent experiments using gender attribute as well as the combination of gender and income attributes as priors are provided in the UCI Adult Data Embeddings Appendix. ### Sample-based prior We created some complex data by combining instances from MNIST (which we consider as background data) and Fashion-MNIST (which we consider as reference data) datasets. The task in this case is to compute two-dimensional embeddings of complex data that remove the information associated with the MNIST data and provide separation according to the complementary structure defined by the Fashion-MNIST labels. We construct a series of complex data (6000 samples each) by choosing samples from two-specific Fashion-MNIST ground truth classes and superimposing them with randomly selected MNIST instances, as shown in figure 1(i). The superimposed 28x28 images, as well as the MNIST images serving as the background, are flattened to 784 dimensional vectors before their processing. As background data we select 1000 MNIST samples which are not necessarily used when constructing the complex data. While we can employ these prior data for both IMAPCE and cPCA, ct-SNE is limited to label priors. To remove the MNIST information using ct-SNE, we provide it with the ground truth labels of the MNIST instances that were used for the complex data construction. We selected its hyperparameters according to the suggestions of the authors. Finally, for IMAPCE we set \(\alpha=1\) and \(\mu=10^{5}\). Figures 1(j)-1(l) show complex data embeddings (with 'Bag', 'Ankle-boot' Fashion MNIST ground truth labels) generated by IMAPCE, cPCA and ct-SNE. We observe that ct-SNE completely fails to separate the embeddings according to their Fashion-MNIST class. While cPCA factors out to some extent the MNIST structure, its embeddings exhibit significant overlap in terms of their Fashion-MNIST classes. IMAPCE computes embeddings that effectively remove the MNIST information and thus achieve the clearest separation with respect to their Fashion-MNIST class. Therefore, optimisation of kurtosis term provides enhanced and interpretable data segregation which is not achieved by cPCA. To verify our observations, we trained and tested an SVM classifier (75% training, 25% test) on the classification of the calculated 2D embeddings with respect to their Fashion-MNIST labels. By doing this, we can quantify the separability according to the complementary structure which we wish to unveil. The test-set accuracy is averaged over 10 random train-test splits and given in Table 1. The very poor performance of the ct-SNE embeddings as well as the superior performance achieved by the IMAPCE ones confirm our observations. Experiments for a few more complex data are given in the Complex Data Embeddings Appendix. ### Subset-based prior In this setup, prior data are subsets of the original data that share the same class (and are thus similar). 
The goal is to generate embeddings that reveal the cluster structure of the remaining samples (unexplored subset) within these datasets. By employing our structure extraction framework with IMAPCE or cPCA, we can dynamically update the prior data after each set of embeddings. As a result, we sequentially obtain new sets of informative embeddings that gradually unveil the underlying cluster structure of all high-dimensional data. In this case, we did not compare with ct-SNE because the prior data are subsets of the original data. Thus, it is non-trivial to define some cluster labels for all high-dimensional samples (which is essential for ct-SNE). We performed the iterative visual exploration of Image Segmentation data from the UCI machine learning repository Dua and Graff (2017) under various prior data assumptions. This dataset consists of 2310 samples and 19 attributes and includes 330 instances from 7 different classes, namely "sky", "grass", "path", "foliage", "cement", "brickface", "window". Assuming no prior data and selecting \(\alpha=1\), \(\mu=10^{5}\), \(s=75\), we sequentially implement IMAPCE on UCI Image segmentation data. The obtained projections are shown in Figure 2, where each subfigure consists of three subplots. The upper subplot illustrates the IMAPCE data projections for a specific iteration of the process. Grey points correspond to the prior data, while black points correspond to the unexplored subset of samples. The middle plot shows the results of a DPGMM clustering on the unexplored points where the most distinct cluster is marked with a black frame. The lower subplot shows the unexplored points coloured according to their ground truth class. The first data projection is shown in Figure 2(a). The upper plot consists of solely black points as there are no prior data. All data are clustered in the middle subplot and the green cluster is the most distinct. Its points are considered very similar and are stored for the evaluation stage. Subsequently, these points define the prior data and are removed from the unexplored data. Afterwards, the second iteration takes place and new informative embeddings are calculated and shown in Figure 2(b). Separation of data samples that were previously overlapping is now observed and indicative of an informative data projection. The grey points of the upper subplot correspond to the prior data while the black ones refer to the unexplored points. The unexplored points are clustered in the middle subplot and blue is the most distinct cluster. Its points are then incorporated in the prior data while removed from the unexplored data. This iterative process is repeated until no clusters are formed (only outliers are left). Gradual exploration of the whole dataset contributes to the extraction of new and meaningful underlying structure. For brevity and demonstration reasons, the rest of the exploration analysis is omitted while the first four iterations are illustrated in Figures 2(a)-2(d). To quantitatively compare IMAPCE and cPCA, we ran several experiments on UCI Image Segmentation (\(\alpha=1\), \(\mu=10^{5}\), \(s=75\)) with different initial subsets of prior data. Performance evaluation took place after the exploration of a dataset has finished. During the evaluation stage, the quality of the most distinct clusters (which are stored along the exploration process) is measured with respect to their ground truth labels using the Jaccard Jaccard (1901) and NMI scores. 
Both scores are highly used in the literature for the evaluation of clusters' quality. Detailed results for IMAPCE and cPCA are given in Table 2. We consider the case of no prior data, while we also experiment by setting as prior subsets, data samples from every ground truth class. \begin{table} \begin{tabular}{|c c|} \hline Method & Accuracy \\ \hline cPCA & \(0.78\pm 0.007\) \\ ct-SNE & \(0.10\pm 0.006\) \\ IMAPCE & \(\mathbf{0.98}\pm 0.003\) \\ \hline \end{tabular} \end{table} Table 1: Accuracy scores for SVM classification on the 2D embeddings of IMAPCE, cPCA and ct-SNE. Overall, IMAPCE clearly outperforms cPCA on both scores under all prior data assumptions. The superior Jaccard and NMI scores of IMAPCE indicate that it promotes enhanced cluster segregation in comparison to cPCA. This is achieved due to the optimisation of the kurtosis term. We provide equivalent quantitative results for 10000 randomly selected MNIST instances in the Iterative Exploration of MNIST appendix. ## 5 Conclusion To sum up, in this work we proposed IMAPCE to generate low-dimensional embeddings that filter out three different types of prior knowledge while also revealing any previously unknown underlying structure. To ensure numerical stability and fast convergence, we performed the optimisation over the Stiefel manifold. Additionally, we introduced an iterative framework that can be employed with IMAPCE to sequentially compute multiple embeddings of high-dimensional data. Finally, we ran experiments on diverse datasets for different prior knowledge types and provided both quantitative and qualitative results as well as comparisons with related approaches. Figure 1: Top row shows synthetic data experiments with information of dimensions 1-4 as prior. Middle row illustrates UCI adult data experiments with ethnicity feature as prior. Bottom row shows complex data experiments using MNIST data as prior. (a) cPCA embeddings are clustrerred w.r.t. dimensions 1-4 labels. (b) ct-SNE embeddings are mixed w.r.t. labels of dimensions 1-4 but not clustered w.r.t labels of dimensions 5-6 (complementary structure). (c) IMAPCE embeddings are clustered w.r.t. labels of dimensions 5-6 (with mixed labels of dimensions 1-4). (d) Laplacian scores for synthetic embeddings using labels of dimensions 1-4. (e) cPCA embeddings have mixed ethnicity, gender and income labels. (f) ct-SNE clusters w.r.t. gender and income and ethnicity. (g) IMAPCE clusters w.r.t. gender and income (revealing complementary structure). (h) Laplacian scores for UCI adult embeddings using ethnicity labels. (i) Complex data generation. (j) cPCA embeddings highly overlap w.r.t. to their Fashion-MNIST class. (k) ct-SNE embeddings are completely mixed w.r.t. to their Fashion-MNIST class. (l) IMAPCE embeddings exhibit clear segregation w.r.t. Fashion-MNIST class. ## 6 Acknowledgments Xenophon Evangelopoulos acknowledges financial support from the Leverhulme Trust via the Leverhulme Research Centre for Functional Materials Design. The work was also supported by a studentship from the School of Electrical Engineering, Electronics and Computer Science, at the University of Liverpool, UK.
2307.16534
Single-rotating Five-dimensional Near-horizon Extremal Geometry in General Relativity
The geometries with SL$(2,\mathbb{R})$ and some axial U$(1)$ isometries are called ``near-horizon extremal geometries" and are found usually, but not necessarily, in the near-horizon limit of the extremal black holes. We present a new member of this family of solutions in five-dimensional Einstein-Hilbert gravity that has only one nonzero angular momentum. In contrast with the single-rotating Myers-Perry extremal black hole and its near-horizon geometry in five dimensions, this solution may have a nonvanishing and finite entropy. Although there is a uniqueness theorem that prohibits the existence of such single-rotating near-horizon geometries in five-dimensional general relativity, this solution has a curvature singularity at one of the poles, which breaks the smoothness conditions in the theorem.
Kamal Hajian
2023-07-31T10:00:25Z
http://arxiv.org/abs/2307.16534v2
###### Abstract The geometries with \({\rm SL}(2,\mathbb{R})\) and some axial \({\rm U}(1)\) isometries are called "near-horizon extremal geometries" and are found usually, but not necessarily, in the near-horizon limit of the extremal black holes. We present a new member of this family of solutions in five-dimensional Einstein-Hilbert gravity that has only one non-zero angular momentum. In contrast with the single-rotating Myers-Perry extremal black hole and its near-horizon geometry in five dimensions, this solution has a non-vanishing and finite entropy. Although there is a uniqueness theorem that prohibits the existence of such single-rotating near-horizon geometries in five-dimensional general relativity, this solution has a curvature singularity at one of the poles, which breaks the smoothness conditions in the theorem. **Single-rotating Five-dimensional Near-horizon Extremal Geometry in General Relativity** Kamal Hajian _Institute of Physics, University of Oldenburg, P.O.Box 2503, D-26111 Oldenburg, Germany_ _Department of Physics, Middle East Technical University, 06800, Ankara, Turkey_ + Footnote †: e-mail: [email protected] ## 1 Introduction Recent observations of black holes [1, 2, 3, 4] via electromagnetic and gravitational waves have boosted theoretical research on these mysterious celestial objects. One of their challenging properties is the indication of thermodynamic behavior, such as obeying the four laws of thermodynamics [5, 6, 7, 8]. Specifically, a coherent and well-accepted statistical mechanics for black holes is still missing. In spite of many innovative proposals and calculations, the universal identification of black hole microstates is a long-standing question that is still waiting to be resolved. An interesting feature of such an identification would be non-vanishing entropy at zero temperature. This is a general property of extremal black holes, i.e., the holes at zero temperature. In practice, if a black hole is more isometric, i.e., if it has more Killing vectors, it is easier to study. In this regard, extremal black holes have a feature that makes them suitable for microstate investigations: their near-horizon geometries enjoy more isometries than the black holes themselves; the time translation isometry of a stationary extremal black hole is enhanced to an \({\rm SL}(2,\mathbb{R})\) one. So, the isometries of such a black hole near-horizon geometry with \(n\) number of \({\rm U}(1)\) axial Killings are enhanced to the \({\rm SL}(2,\mathbb{R})\)\(\times\)\({\rm U}(1)^{n}\). The near-horizon of the extremal Kerr black hole is one of the simplest examples of the mentioned geometries. It has one U(1) Killing vector, i.e., the axial rotating symmetry generator, to which the angular momentum is associated as a conserved charge. This geometry was first found by Bardeen and Horowitz in 1999 [9]. Since then, there have been many interesting studies on such near-horizon geometries, including microstate counting of extremal black holes (e.g., via Kerr/CFT correspondence [10, 11] or symplectic symmetries [12, 13, 14]). The classification of these geometries in some gravitational theories and dimensions has been worked out by Kunduri and Lucietti [15, 16]. Pedagogical reviews on the classification and explicit examples can be found in [16] and [17, 18] respectively.
The SL\((2,\mathbb{R})\times\)U\((1)^{n}\) isometry, which was alluded to above, is such a restrictive property that it yields uniqueness theorems for the NHEGs in certain dimensions and theories [15, 16, 19]. However, such uniqueness theorems generically assume some smoothness conditions. As a result, especially when the horizon of the black hole is singular, there can be the possibility of some singular solutions. One of the simplest examples to study such singular horizons is the single-rotating extremal Myers-Perry (MP) black hole in five dimensions [20]. This black hole has an infinitely long bar-shaped horizon with zero area, sometimes called the extremal vanishing horizon (EVH). The near-horizon of EVH, which is obtained by first the single rotation limit and then the near-horizon limit [21, 22], has been studied extensively [23, 24, 25, 26, 27, 28, 29, 30], and it is a static solution without any rotation. The geometry is singular at one of the poles, which will be discussed at the end. In this paper, we present a single-rotating NHEG, which is derived by first taking the near-horizon limit of a double-rotating extremal MP black hole, and then taking the (appropriately prepared) single-rotation limit. We fix the period of the axial coordinates such that the resulting solution has one non-zero angular momentum while obtaining a non-vanishing entropy. Besides, similar to the near-horizon of the EVH, it has a curvature singularity at one of the poles. This new vacuum solution is the first known single-rotating NHEG in five-dimensional general relativity (putting the trivial NHEG of the extremal Kerr-String solution aside, which is simply the embedding of the extremal Kerr into five dimensions, and is the only single-rotating solution that is allowed by the uniqueness theorem 4.5 in [16]). The paper is arranged as follows: In section 2 a short review of NHEGs is provided. In section 3 the promised single-rotating NHEG solution is presented, and in section 4 its thermodynamic properties are investigated. The derivation from the extremal MP black hole and the differences with the near-horizon of the EVH solution are worked out in sections 5 and 6 respectively. ## 2 Review: near-horizon extremal geometries Given a Lagrangian density \(\mathcal{L}\) in \(D\) dimensions, near-horizon extremal geometries (NHEGs) are solutions to the equation of motion that have an SL\((2,\mathbb{R})\) and at most a \(D-3\) number of U\((1)\) isometries. As their name suggests, these geometries are usually found in the near-horizon limit of extremal black holes. Any NHEG with maximum \(D-3\) axial isometry can be written in a coordinate system \((t,r,\theta,\varphi^{1},\ldots,\varphi^{D-3})\) that makes the SL\((2,\mathbb{R})\) symmetry manifest, \[{\rm d}s^{2}=\Gamma(\theta)\left[-r^{2}{\rm d}t^{2}+\frac{{\rm d}r^{2}}{r^{2}}+ \alpha{\rm d}\theta^{2}+\sum_{i,j=1}^{D-3}\gamma_{ij}(\theta)({\rm d}\varphi^{i }+k^{i}r{\rm d}t)({\rm d}\varphi^{j}+k^{j}r{\rm d}t)\right], \tag{1}\] where \[t\in(-\infty,+\infty),\qquad r\in\{r<0\}\ {\rm or}\ \{r>0\},\qquad\theta\in[0, \theta_{Max}],\qquad\varphi^{i}\sim\varphi^{i}+2\pi, \tag{2}\] \(k^{i}\) are constants over spacetime, and are determined by the equations of motion. The constant \(\alpha\) is conventional and determines the domain of the coordinate \(\theta\) by fixing the \(\theta_{Max}\). The \(\Gamma\) and \(\gamma_{ij}\) as functions of the coordinate \(\theta\) are determined by the equations of motion. 
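The \({\rm SL}(2,\mathbb{R})\) part of the algebra in Eq. (4) can be verified directly from the generators in Eq. (3); the short SymPy check below is our own and specialises to a single U\((1)\) direction with constant \(k\).

```python
import sympy as sp

t, r, phi = sp.symbols("t r varphi")
k = sp.symbols("k")

# A vector field xi = a*d_t + b*d_r + c*d_phi is stored as its coefficient tuple (a, b, c).
def lie_bracket(u, v, coords=(t, r, phi)):
    """Commutator [u, v] of two vector fields given by coefficient tuples."""
    return tuple(
        sum(u[j] * sp.diff(v[i], coords[j]) - v[j] * sp.diff(u[i], coords[j])
            for j in range(len(coords)))
        for i in range(len(coords)))

xi_m = (sp.Integer(1), 0, 0)                                        # xi_-  = d_t
xi_0 = (t, -r, 0)                                                   # xi_0  = t d_t - r d_r
xi_p = (sp.Rational(1, 2) * (t**2 + 1 / r**2), -t * r, -k / r)      # xi_+  (single U(1), k^i -> k)

checks = {
    "[xi_0, xi_-] + xi_-": [sp.simplify(a + b) for a, b in zip(lie_bracket(xi_0, xi_m), xi_m)],
    "[xi_0, xi_+] - xi_+": [sp.simplify(a - b) for a, b in zip(lie_bracket(xi_0, xi_p), xi_p)],
    "[xi_-, xi_+] - xi_0": [sp.simplify(a - b) for a, b in zip(lie_bracket(xi_m, xi_p), xi_0)],
}
for name, residual in checks.items():
    print(name, "=", residual)        # each residual should be [0, 0, 0]
```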
The first two terms in the metric (1) form an AdS\({}_{2}\) in the Poincare patch with \(r=0\) as the Poincare horizon. In these coordinates, the SL\((2,\mathbb{R})\times\)U\((1)^{D-3}\) isometry generators are \[\xi_{-}=\partial_{t}\,,\qquad\xi_{0}=t\partial_{t}-r\partial_{r},\qquad\xi_{+ }=\frac{1}{2}(t^{2}+\frac{1}{r^{2}})\partial_{t}-tr\partial_{r}-\frac{1}{r}k^ {i}\partial_{\varphi^{i}},\quad{\rm m}_{i}=\partial_{\varphi^{i}} \tag{3}\] with the commutation relations \[[\xi_{0},\xi_{-}]=-\xi_{-},\qquad[\xi_{0},\xi_{+}]=\xi_{+},\qquad[\xi_{-},\xi_ {+}]=\xi_{0}\,,\qquad[\xi_{\rm a},{\rm m}_{i}]=0,\quad{\rm a}\in\{-,0,+\}. \tag{4}\] ## 3 Five-dimensional single-rotating solution In this section, we present a new five-dimensional NHEG solution to the vacuum gravity with a single angular momentum, which is the main result of this paper. Its derviation from the MP solution will be carried out in section 5. Let us consider the Einstein-Hilbert action and Einstein equation in five dimensions, \[{\cal L}=\frac{1}{16\pi G}R,\qquad R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}=0, \tag{5}\] in which \(R_{\mu\nu}\) and \(R\) are Ricci tensor and scalar respectively. Then, the following single-rotating NHEG in the coordinates \((t,r,\theta,\varphi^{1},\varphi^{2})\) is a solution. \[{\rm d}s^{2}=\Gamma(\theta)\left[-r^{2}{\rm d}t^{2}+\frac{{\rm d }r^{2}}{r^{2}}+4{\rm d}\theta^{2}+\sum_{i,j=1}^{2}\gamma_{ij}(\theta)({\rm d} \varphi^{i}+k^{i}r{\rm d}t)({\rm d}\varphi^{j}+k^{j}r{\rm d}t)\right], \tag{6}\] \[\Gamma=\frac{a^{2}}{4}\cos^{2}\theta,\quad\gamma_{11}=\frac{4\sin ^{2}\theta}{\cos^{4}\theta},\quad\gamma_{22}=4,\quad\gamma_{12}=0,\quad k^{1} =0,\quad k^{2}=\frac{1}{2}. \tag{7}\] Explicitly in the more convenient notation \((\varphi^{1},\varphi^{2})\to(\varphi,\psi)\), it is \[{\rm d}s^{2}=a^{2}\cos^{2}\theta\left(\frac{{\rm d}r^{2}}{4r^{2}}+{\rm d} \theta^{2}+\frac{\sin^{2}\theta}{\cos^{4}\theta}{\rm d}\varphi^{2}+{\rm d} \psi^{2}+r{\rm d}t{\rm d}\psi\right). \tag{8}\] The coordinate \(\theta\) takes the values in \([0,\frac{\pi}{2}]\), while \(\varphi\sim\varphi+2\pi\) and \(\psi\sim\psi+2\pi\). We have chosen the solution to rotate in the \(\psi\) azimuth direction. The case of a single non-zero rotation in the \(\varphi\) direction is achieved by the change of the roles of \(\varphi\leftrightarrow\psi\) and \(\cos\theta\leftrightarrow\sin\theta\) in (8). The free parameter \(a\) is related to the angular momentum of the geometry. One can use different methods for conserved charge calculation in gravity (we used the covariant formulation of charges [31, 32, 33, 34, 35, 36, 37, 38, 39] on the solution phase space [40]) to find the angular momenta as the charges of the Killing vectors \(-\partial_{\varphi}\) and \(-\partial_{\psi}\) respectively, \[J_{\varphi}=0,\qquad J_{\psi}=\frac{\pi a^{3}}{2G}. \tag{9}\] We observe that the three-dimensional manifold, which is parameterized by the \((t,r,\psi)\) coordinates, is a self-dual orbifold of the AdS\({}_{3}\) geometry, which appears in the near horizon of an extremal BTZ black hole [41]. Such a geometry can also be found in the near horizon of EVHs [25, 26, 27, 42]. Another feature of the metric (8) is that it is singular at \(\theta=\frac{\pi}{2}\). To observe this, we can calculate the Kretschmann scalar \[R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta}=\frac{72}{a^{4}\cos^{8}\theta}, \tag{10}\] which diverges if \(\theta\rightarrow\frac{\pi}{2}\). For the equivalent case of \(J_{\varphi}\neq 0\) and \(J_{\psi}=0\) the singularity is at \(\theta\to 0\). 
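As a quick cross-check of the solution in Eq. (8), the SymPy sketch below (our own) integrates the induced volume element of a constant-\((t,r)\) surface and applies the Bekenstein-Hawking area law used for the NHEG entropy in the next section; it reproduces the entropy and the relation \(S/2\pi=k^{\psi}J_{\psi}\) quoted there.

```python
import sympy as sp

a, G, theta = sp.symbols("a G theta", positive=True)

# Induced metric on a constant-(t, r) surface of Eq. (8), diagonal in (theta, phi, psi):
g_thth = a**2 * sp.cos(theta)**2
g_phph = a**2 * sp.cos(theta)**2 * sp.sin(theta)**2 / sp.cos(theta)**4
g_psps = a**2 * sp.cos(theta)**2

# Volume element sqrt(det h); positive branch a^3 sin(theta) cos(theta) on 0 < theta < pi/2.
sqrt_det = a**3 * sp.sin(theta) * sp.cos(theta)
assert sp.simplify(sqrt_det**2 - g_thth * g_phph * g_psps) == 0

area = sp.integrate(sqrt_det, (theta, 0, sp.pi / 2)) * (2 * sp.pi)**2   # phi, psi periods
S = sp.simplify(area / (4 * G))          # Bekenstein-Hawking entropy A/(4G)
J_psi = sp.pi * a**3 / (2 * G)           # angular momentum from Eq. (9)

print(S)                                                          # pi**2*a**3/(2*G)
print(sp.simplify(S / (2 * sp.pi) - sp.Rational(1, 2) * J_psi))   # 0, i.e. S/(2*pi) = k^psi J_psi
```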
We note that, in spite of the curvature singularity, the solution has finite conserved charges and satisfies the NHEG thermodynamic relations, which will be discussed in the next sections. ## 4 Thermodynamic properties For a generic thermodynamic system, the first law of thermodynamics is a universal law, i.e., independent of the details of the system, which relates the variations of the thermodynamic quantities of the system at non-zero temperature [8]. At zero temperature, most thermodynamic systems have zero entropy, and the only way to vary the entropy is to excite the system to a non-zero temperature, a process governed by the expansion of the first law around zero temperature. However, in black hole thermodynamics, there is the possibility of non-vanishing entropy at zero temperature. So, it is possible to change the entropy while keeping the temperature zero, i.e., by deforming one extremal black hole into an adjacent extremal one. In this case, the first law (neither its standard form at non-zero temperature, nor its expansion around \(T=0\) without implementing an extra assumption [43]) does not capture the behavior of \(\delta S\), simply because with \(T=0\) and \(\delta S\neq 0\) the \(T\delta S\) term drops out of this law. As a result, the remaining parts of the first law would be just the extremality condition between the mass and other charges of the black hole, with no relation to the entropy. In parallel with the standard first law and Smarr formula, extremal black holes enjoy two extra relations that govern \(\delta S\) and \(S\). These relations have been found and proved in the context of NHEGs [44, 45] (following the pioneering work [10]). The essential point in deriving these extra relations is to note that although an NHEG does not have an event horizon, it admits an infinite number of Killing horizons with identical thermodynamic properties. Explicitly, any surface of constant \(t\) and \(r\), denoted \(\mathcal{H}\) with \(t=t_{\mathcal{H}}\) and \(r=r_{\mathcal{H}}\), is the bifurcation surface of the two null surfaces \[t+\frac{1}{r}=t_{\mathcal{H}}+\frac{1}{r_{\mathcal{H}}},\qquad t-\frac{1}{r}=t_{\mathcal{H}}-\frac{1}{r_{\mathcal{H}}}, \tag{11}\] with the following Killing vector as the generator of the horizon \[\zeta_{\mathcal{H}}=n^{\mathrm{a}}_{\mathcal{H}}\,\xi_{\mathrm{a}}+k^{\mathrm{i}}\mathrm{m}_{i}. \tag{12}\] In the definition of \(\zeta_{\mathcal{H}}\), summation over \(\mathrm{a}\in\{-,0,+\}\) and \(i\in\{1,2,\ldots,D-3\}\) is understood, and the \(n^{\mathrm{a}}\) are \[n^{-}=-\frac{t^{2}r^{2}-1}{2r},\qquad n^{0}=tr,\qquad n^{+}=-r. \tag{13}\] Geometrically, \(n^{\mathrm{a}}\) is the unit vector pointing from the center of a three-dimensional flat space to the \((t,r)\) point of the AdS\({}_{2}\) geometry immersed in it (see the details in section 5 of [18]). The Killing vector \(\zeta_{\mathcal{H}}\) is null on the horizon in (11) and vanishes at the bifurcation surface \(\mathcal{H}\). Similar to the celebrated Wald entropy formulation [35, 36], the NHEG entropy is defined as the conserved charge of this Killing vector calculated on the bifurcation surface \(\mathcal{H}\) [44]. For Einstein-Hilbert gravity, the result turns out to be the Bekenstein-Hawking entropy [5] \[S=\frac{A_{\mathcal{H}}}{4G}. \tag{14}\] Thanks to the \(\mathrm{SL}(2,\mathbb{R})\) isometry, \(\mathcal{H}\) can be any one of the constant-\((t,r)\) surfaces, with the same result for \(S\). 
With a well-defined entropy for the NHEGs and following steps similar to Wald's proof of the first law in [36], the NHEG thermodynamic laws are found, which for vacuum solutions read \[\frac{\delta S}{2\pi}=k^{\mathrm{i}}\delta J_{\mathrm{i}},\qquad\quad\text{and}\qquad\quad\frac{S}{2\pi}=k^{\mathrm{i}}J_{\mathrm{i}}. \tag{15}\] The NHEG solution in (6) and (7) satisfies these two relations; one can use different methods for conserved charge calculation in gravity (we used the covariant formulation of charges [32, 33, 34, 35, 36, 37, 38, 39] on the solution phase space [40]) to find its entropy and angular momenta as the charges of \(\zeta_{\mathcal{H}}\), \(-\mathrm{m}_{1}=-\partial_{\varphi^{1}}\) and \(-\mathrm{m}_{2}=-\partial_{\varphi^{2}}\), \[S=\frac{\pi^{2}a^{3}}{2G},\qquad J_{1}=0,\qquad J_{2}=\frac{\pi a^{3}}{2G}. \tag{16}\] With \(k^{1}=0\) and \(k^{2}=\frac{1}{2}\), the NHEG thermodynamic laws (15) are satisfied. ## 5 Derivation from extremal Myers-Perry black hole The higher-dimensional analogues of the Kerr black hole are the Myers-Perry black holes. The five-dimensional MP solution is characterized by its mass and two angular momenta, with the metric \[\mathrm{d}s^{2}=-(\frac{-\Delta+a^{2}\sin^{2}\theta+b^{2}\cos^{2}\theta+\frac{a^{2}b^{2}}{\hat{r}^{2}}}{\rho^{2}})\mathrm{d}\hat{t}^{2}+\frac{\rho^{2}}{\Delta}\mathrm{d}\hat{r}^{2}+\rho^{2}\mathrm{d}\theta^{2}\] \[\qquad\quad+2\left(\Delta-(\hat{r}^{2}+a^{2})-\frac{b^{2}(\hat{r}^{2}+a^{2})}{\hat{r}^{2}}\right)\frac{a\sin^{2}\theta}{\rho^{2}}\,\mathrm{d}\hat{t}\,\mathrm{d}\chi^{1}\] \[\qquad\quad+2\left(\Delta-(\hat{r}^{2}+b^{2})-\frac{a^{2}(\hat{r}^{2}+b^{2})}{\hat{r}^{2}}\right)\frac{b\cos^{2}\theta}{\rho^{2}}\,\mathrm{d}\hat{t}\,\mathrm{d}\chi^{2}\] \[\qquad\quad+\left(-\Delta a^{2}\sin^{2}\theta+(\hat{r}^{2}+a^{2})^{2}+\frac{b^{2}(\hat{r}^{2}+a^{2})^{2}\sin^{2}\theta}{\hat{r}^{2}}\right)\frac{\sin^{2}\theta}{\rho^{2}}\,\mathrm{d}\chi^{1}\mathrm{d}\chi^{1}\] \[\qquad\quad+\left(-\Delta b^{2}\cos^{2}\theta+(\hat{r}^{2}+b^{2})^{2}+\frac{a^{2}(\hat{r}^{2}+b^{2})^{2}\cos^{2}\theta}{\hat{r}^{2}}\right)\frac{\cos^{2}\theta}{\rho^{2}}\,\mathrm{d}\chi^{2}\mathrm{d}\chi^{2}\] \[\qquad\quad+2\left(-\Delta+\frac{(\hat{r}^{2}+a^{2})(\hat{r}^{2}+b^{2})}{\hat{r}^{2}}\right)\frac{ab\sin^{2}\theta\cos^{2}\theta}{\rho^{2}}\,\mathrm{d}\chi^{1}\mathrm{d}\chi^{2}\,, \tag{17}\] where \[\rho^{2}\equiv\hat{r}^{2}+a^{2}\cos^{2}\theta+b^{2}\sin^{2}\theta,\qquad\Delta\equiv\frac{(\hat{r}^{2}+a^{2})(\hat{r}^{2}+b^{2})}{\hat{r}^{2}}+2m\,. \tag{18}\] The range of the spherical coordinates is \(\theta\in[0,\frac{\pi}{2}]\) and \(\chi^{i}\in[0,2\pi]\). There are three free parameters \(m\), \(a\) and \(b\) in this metric. Extremality constrains the mass to be a function of the angular momenta via \(2m=(a+b)^{2}\). The near-horizon limit of the extremal MP black hole is achieved by the coordinate transformation \[\hat{t}=\frac{\alpha_{0}r_{h}t}{\epsilon},\qquad\hat{r}=r_{h}(1+\epsilon r),\qquad\chi^{i}=\varphi^{i}+\frac{\Omega^{i}\alpha_{0}r_{h}t}{\epsilon}, \tag{19}\] in which \[r_{h}=\sqrt{ab},\qquad\alpha_{0}=\frac{(a+b)^{2}}{4r_{h}^{2}},\qquad\Omega^{1}=\Omega^{2}=\frac{1}{(a+b)}, \tag{20}\] followed by the limit \(\epsilon\to 0\). 
The NHEG that is found has the general form of (6) with \[\Gamma=\frac{1}{4}(a+b)(a\cos^{2}\theta+b\sin^{2}\theta),\qquad k^{1}=\frac{1}{2}\sqrt{\frac{b}{a}},\qquad k^{2}=\frac{1}{2}\sqrt{\frac{a}{b}},\] \[\gamma_{ij}=\frac{4}{(a\cos^{2}\theta+b\sin^{2}\theta)^{2}}\begin{pmatrix}a(a+b\sin^{2}\theta)\sin^{2}\theta&ab\cos^{2}\theta\sin^{2}\theta\\ ab\cos^{2}\theta\sin^{2}\theta&b(b+a\cos^{2}\theta)\cos^{2}\theta\end{pmatrix}. \tag{21}\] The entropy and angular momenta of this solution are functions of the two free parameters \(a\) and \(b\), \[\frac{S}{2\pi}=\frac{\pi\sqrt{ab}(a+b)^{2}}{4G},\qquad J_{1}=\frac{\pi a(a+b)^{2}}{4G},\qquad J_{2}=\frac{\pi b(a+b)^{2}}{4G}, \tag{22}\] which coincide with the entropy and angular momenta of the initial extremal MP black hole and satisfy the NHEG laws in (15). However, to make one of the angular momenta vanish, \(a\) or \(b\) should be set to zero. In this limit, not only does one of the angular momenta vanish, but the entropy is also zero, as expected from the original extremal MP solution. Nonetheless, such a limit is not well-behaved, because it makes some of the metric components in (1) with (21) blow up. Although the limiting procedure above does not succeed in creating a single-rotating NHEG, it can be modified such that it yields a new solution with different conserved charges, which is the solution in equations (6) and (7). Let us assume that we want \(b\to 0\). To this end, by redefining the coordinate \[\varphi^{2}\to\frac{\varphi^{2}}{2k^{2}}, \tag{23}\] and then taking the limit \(b\to 0\) in (21), the single-rotating solution in section 3 is found. The period of the new axial coordinate (which corresponds to \(\psi\) in (8)) can be set to \(2\pi\) in order to keep the non-vanishing angular momentum intact, i.e., equal to \(\frac{\pi a^{3}}{2G}\). However, given the singularity at \(\theta=\frac{\pi}{2}\), this period is an arbitrary parameter. In particular, if the solution is considered to be strictly derived from the MP black hole, then in the limit \(b\to 0\) the constant \(k^{2}\) in equation (23) diverges and forces a vanishing period for this axial direction. Notice that the solution (6) and (7), which is found by the limits above and the adjustment of the periods, has different properties in comparison with the NHEG of the MP black hole. Specifically, in the \(b\to 0\) limit, \(J_{2}\) remains finite and non-zero while \(J_{1}\) vanishes as in (16), which is the reverse of the extremal MP angular momenta in (22). Moreover, the entropy is finite and non-zero, and so it is different from the zero entropy of the single-rotating extremal MP case in (22). ## 6 Comparison with near-horizon of EVH solution If one of the two angular momenta of the extremal MP black hole is set to zero, then the horizon becomes singular in the shape of an infinitely long bar with zero area. Such an extremal geometry with a vanishing horizon area is called an extremal vanishing horizon (EVH) geometry, whose entropy vanishes. The near-horizon geometry of such EVHs has been studied in detail [23, 24, 25, 26, 27, 28, 29, 30]. To see how it differs from the single-rotating solution in this paper, we review its derivation. To this end, we can set \(a\) or \(b\) in equation (17) equal to zero and then take the near-horizon limit. 
Conventionally, if we set \(b=0\), the near-horizon limit of the EVH is taken by the transformation \[\hat{t}=\frac{t}{\epsilon},\qquad\hat{r}=\epsilon r,\qquad\chi^{1}=\varphi^{1}+\frac{\Omega^{1}t}{\epsilon},\qquad\chi^{2}=\frac{\varphi^{2}}{\epsilon}, \tag{24}\] followed by the \(\epsilon\to 0\) limit [42]. The result is \[\mathrm{d}s^{2}=\cos^{2}\theta\,\left(-\frac{r^{2}}{a^{2}}\mathrm{d}t^{2}+\frac{a^{2}}{r^{2}}\mathrm{d}r^{2}+r^{2}(\mathrm{d}\varphi^{2})^{2}\right)+a^{2}\cos^{2}\theta\mathrm{d}\theta^{2}+a^{2}\tan^{2}\theta(\mathrm{d}\varphi^{1})^{2}. \tag{25}\] This metric has a (pinching \(\varphi^{2}\sim\varphi^{2}+2\pi\epsilon\)) AdS\({}_{3}\) sector in the parentheses, is static, and has zero angular momenta. The latter property makes the metric (25) different from our new solution in section 3. However, it has the same singularity at \(\theta\rightarrow\frac{\pi}{2}\), i.e., the Kretschmann scalar diverges exactly as in (10). In this respect, the two solutions share similar behaviors. ## 7 Conclusion In this work, we presented the first non-trivial single-rotating NHEG in five-dimensional vacuum gravity. Although the solution suffers from a curvature singularity, it has well-defined conserved charges and satisfies the NHEG thermodynamic relations. In addition, as a new feature, it has a self-dual orbifold of AdS\({}_{3}\) in a part of the metric. We also showed that one way to derive this solution is to take the (adjusted) single-rotation limit of the NHEG of the MP black hole. However, the solution has different entropy and angular momenta in comparison with the original single-rotating extremal black hole and can be considered an independent solution. The generalization of such an analysis to higher-dimensional EVHs, as well as the study of the pinching of the adjusted axial direction, is postponed to future work. **Acknowledgements:** I am very thankful for the kind support from Jutta Kunz and Bayram Tekin at Oldenburg University and METU. I would also like to thank them, as well as Eugen Radu, Shahin Sheikh-Jabbari, and Mohammad H. Vahidinia, for useful discussions and comments. This work has been supported by TUBITAK international researchers program No. 2221.
2309.17386
The ALMA REBELS survey: obscured star formation in massive Lyman-break galaxies at z = 4-8 revealed by the IRX-$Ξ²$ and $M_{\star}$ relations
We investigate the degree of dust obscured star formation in 49 massive (${\rm log}_{10}(M_{\star}/{\rm M}_{\odot})>9$) Lyman-break galaxies (LBGs) at $z = 6.5$-$8$ observed as part of the ALMA Reionization Era Bright Emission Line Survey (REBELS) large program. By creating deep stacks of the photometric data and the REBELS ALMA measurements we determine the average rest-frame UV, optical and far-infrared (FIR) properties which reveal a significant fraction ($f_{\rm obs} = 0.4$-$0.7$) of obscured star formation, consistent with previous studies. From measurements of the rest-frame UV slope, we find that the brightest LBGs at these redshifts show bluer ($\beta \simeq -2.2$) colours than expected from an extrapolation of the colour-magnitude relation found at fainter magnitudes. Assuming a modified blackbody spectral-energy distribution (SED) in the FIR (with dust temperature of $T_{\rm d} = 46\,{\rm K}$ and $\beta_{\rm d} = 2.0$), we find that the REBELS sources are in agreement with the local ''Calzetti-like'' starburst Infrared-excess (IRX)-$\beta$ relation. By reanalysing the data available for 108 galaxies at $z \simeq 4$-$6$ from the ALPINE ALMA large program using a consistent methodology and assumed FIR SED, we show that from $z \simeq 4$-$8$, massive galaxies selected in the rest-frame UV have no appreciable evolution in their derived IRX-$\beta$ relation. When comparing the IRX-$M_{\star}$ relation derived from the combined ALPINE and REBELS sample to relations established at $z < 4$, we find a deficit in the IRX, indicating that at $z > 4$ the proportion of obscured star formation is lower by a factor of $\gtrsim 3$ at a given a $M_{\star}$. Our IRX-$\beta$ results are in good agreement with the high-redshift predictions of simulations and semi-analytic models for $z \simeq 7$ galaxies with similar stellar masses and SFRs.
R. A. A. Bowler, H. Inami, L. Sommovigo, R. Smit, H. S. B. Algera, M. Aravena, L. Barrufet, R. Bouwens, E. da Cunha, F. Cullen, P. Dayal, I. de Looze, J. S. Dunlop, Y. Fudamoto, V. Mauerhofer, R. J. McLure, M. Stefanon, R. Schneider, A. Ferrara, L. Graziani, J. A. Hodge, T. Nanayakkara, M. Palla, S. Schouws, D. P. Stark, P. P. van der Werf
2023-09-29T16:46:44Z
http://arxiv.org/abs/2309.17386v2
The ALMA REBELS survey: obscured star formation in massive Lyman-break galaxies at z = 4-8 revealed by the IRX-\(\beta\) and M\({}_{\star}\) relations ###### Abstract We investigate the degree of dust obscured star formation in 49 massive (\(\log_{10}(M_{\star}/\mathrm{M}_{\odot})\)\(>9\)) Lyman-break galaxies (LBGs) at \(z=6.5\)-8 observed as part of the ALMA Reionization Era Bright Emission Line Survey (REBELS) large program. By creating deep stacks of the photometric data and the REBELS ALMA measurements we determine the average rest-frame UV, optical and far-infrared (FIR) properties which reveal a significant fraction (\(f_{\mathrm{obs}}=0.4\)-0.7) of obscured star formation, consistent with previous studies. From measurements of the rest-frame UV slope, we find that the brightest LBGs at these redshifts show bluer (\(\beta\simeq-2.2\)) colours than expected from an extrapolation of the colour-magnitude relation found at fainter magnitudes. Assuming a modified blackbody spectral-energy distribution (SED) in the FIR (with dust temperature of \(T_{\mathrm{d}}=46\) K and \(\beta_{\mathrm{d}}=2.0\)), we find that the REBELS sources are in agreement with the local "Calzetti-like" starburst Infrared-excess (IRX)-\(\beta\) relation. By reanalysing the data available for 108 galaxies at \(z\simeq 4\)-6 from the ALPINE ALMA large program using a consistent methodology and assumed FIR SED, we show that from \(z\simeq 4\)-8, massive galaxies selected in the rest-frame UV have no appreciable evolution in their derived IRX-\(\beta\) relation. When comparing the IRX-\(M_{\star}\) relation derived from the combined ALPINE and REBELS sample to relations established at \(z<4\), we find a deficit in the IRX, indicating that at \(z>4\) the proportion of obscured star formation is lower by a factor of \(\gtrsim 3\) at a given a \(M_{\star}\). Our IRX-\(\beta\) results are in good agreement with the high-redshift predictions of simulations and semi-analytic models for \(z\simeq 7\) galaxies with similar stellar masses and SFRs. keywords: galaxies: high-redshift - galaxies: evolution - ISM: dust, extinction ## 1 Introduction The onset of dust creation represents a milestone in the history of the Universe, as it relies on the adequate enrichment of the galaxy with metals, formation of sufficient dust particles in high-redshift supernova and inter-stellar medium (ISM) properties conducive to the survival (and growth) of dust (e.g. Draine, 2003; Mancini et al., 2016; Gall & Hjorth, 2018; Lesniewska & Michalowski, 2019; Graziani et al., 2020; Dayal et al., 2022; Di Cesare et al., 2023). The presence of dust within galaxies can be detected through the reddening of the rest-frame UV and optical light in addition to emission in the mid and far-infrared (FIR). The measurement of the rest-frame FIR modified blackbody emission provides a direct signal of the presence of dust, whereas changes in the rest-frame UV colours can be attributed to other properties of the galaxy such as older ages and an increased metallicity. The majority of the highest-redshift galaxies found within deep optical to near-infrared (NIR) surveys have been shown to have blue rest-frame UV slopes (parameterised as \(f_{\mathrm{d}}\propto\lambda^{\beta}\), \(\beta\simeq-2\)), leading to the inference of young ages and low dust content (Dunlop et al., 2012; Bouwens et al., 2014). In the past decade however, the direct detection of dust continuum emission in individual or small samples of \(z\gtrsim 7\) galaxies (e.g. 
Tamura et al., 2019; Wong et al., 2022; Hygate et al., 2023; Hashimoto et al., 2023) has revealed the presence of dust within galaxies less than 800 Myr after the Big Bang. Observations of star-forming galaxies in the rest-frame FIR have demonstrated the key importance of considering dust obscured star-formation in galaxy evolution, with more than half of ongoing star formation being obscured at cosmic noon (\(z\simeq 3\); see review by Madau & Dickinson, 2014). There is evidence that obscured star formation continues to be important, and potentially dominates the total cosmic SFR density (CSFRD) in the range \(3<z<6\), from measurements based on rest-frame UV selected samples (e.g. Novak et al., 2017; Khusanova et al., 2021) and highly dust-obscured galaxies including serendipitous objects (e.g. Gruppioni et al., 2020; Talia et al., 2021; Loiacono et al., 2021), as well as deep ALMA and radio surveys (e.g. Zavala et al., 2021; van der Vlugt et al., 2022). Recent results extending these measurements to \(z\simeq 7\) from Barrufet et al. (2023) have shown that dust obscured star-formation contributes at least 10 percent of the cosmic star-formation rate density, showing that it remains significant even into the Epoch of Reionization. These results have revealed a strong stellar mass dependence of the obscuration (e.g. Pannella et al., 2009, 2015; Bouwens et al., 2016; Whitaker et al., 2017), with Dunlop et al. (2017) demonstrating that at \(z\simeq 2\) the fraction of obscured SFR rises from \(\lesssim 0.5\) at log\({}_{10}(M_{\bullet}/\mathrm{M}_{\odot})\)\(<9\) up to 0.99 at log\({}_{10}(M_{\bullet}/\mathrm{M}_{\odot})\)\(>10\), an effect which appears to extend to \(z\simeq 7\) (although with a lower normalisation; Algera et al., 2023). Direct detections of the dust continuum emission from galaxies at \(z>6.5\) have been made in galaxies selected from galaxies representing a wide range of intrinsic rest-frame UV luminosities for example fainter sources from lensing fields (e.g. Watson et al., 2015; Laporte et al., 2017; Tamura et al., 2019; Bakx et al., 2021; Hashimoto et al., 2023) and brighter galaxies from wide-area ground-based follow-up (e.g. Bowler et al., 2018, 2022; Schouws et al., 2022; Inami et al., 2022; Wistok et al., 2022). The obscured fraction derived from these works depends on the assumed FIR spectral-energy distribution (e.g. typically the dust temperature; \(T_{\mathrm{d}}\) and the emissivity index; \(\beta_{\mathrm{d}}\)), however in general these detections reveal a significant fraction \(\simeq 0.2\)-\(0.8\) of obscured star-formation at \(z\simeq 7\) for galaxies of log\({}_{10}(M_{\bullet}/\mathrm{M}_{\odot})\)\(\simeq 9.5\)(Dayal et al., 2022; Algera et al., 2023). The attenuation curve, which dictates how an intrinsic spectrum is reduced in the rest-frame UV and optical as a function of wavelength for a given optical depth, depends on the detailed properties of the dust grains and their geometric distribution (see Salim & Narayanan, 2020 for a review). To directly measure the attenuation curve requires a handle on the intrinsic stellar spectra before the effect of dust, a technique that has been successfully employed at \(z=2\)-\(5\)(e.g. Cullen et al., 2018; Shivaei et al., 2020). 
An alternative method is to compare the rest-frame UV slope, \(\beta\), to the ratio of the FIR to UV luminosity (infrared-excess \(=IRX=\) log\({}_{10}(L_{\mathrm{FIR}}/L_{\mathrm{UV}})\)) as the steepness of the attenuation (or extinction) curve changes the relation between IRX and \(\beta\) to maintain energy balance. The so called "IRX-\(\beta\)" diagram for local starburst galaxies shows a strong correlation presented originally in Meurer et al. (1999) and then further refined in Calzetti et al. (2000). Whether this canonical Calzetti-relation holds at higher redshifts (\(z\gtrsim 2\)) has been the topic of debate over the past decade. An alternative to the Calzetti-like attenuation curve is the steeper relation that has been found for the Small Magellanic Cloud (SMC). Note that the SMC relation is an _extinction_ as opposed to an _attenuation_ curve, and there is an ongoing discussion on whether it is expected that observations of galaxies will be consistent with an SMC extinction curve when the likely complex geometry of dust is taken into account (e.g. see discussion in Cullen et al. (2018). In this case, the same column density of dust can provide an increased reddening effect in the rest-frame UV and hence a deficit from the Calzetti-like relation. Initial observations of \(z=5\)-\(6\) galaxies with ALMA suggested such a deficit was found (e.g. Capak et al., 2015; Barisic et al., 2017), however other studies (Bowler et al., 2018, 2022; Schouws et al., 2022) found results consistent with the Calzetti-like IRX-\(\beta\). Note that the discrepancy between these studies is reduced if we consider that the first works typically assumed dust temperatures of \(T_{\mathrm{d}}=25\)-\(45\) K, while later works tended to use higher temperatures (\(T_{\mathrm{d}}\approx 50\) K). However the observation of several galaxies at \(z\simeq 5\) that lie below the SMC prediction is still present with higher temperatures as shown by Faisst et al. 2017. Furthermore, individual sources at \(z\simeq 7\) have been found to show significant scatter both above and below a Calzetti-like relation (Smit et al., 2018; Hashimoto et al., 2019; Bakx et al., 2020), while fainter (and likely lower mass) sources appear to show an upper limit that is even below the prediction of an SMC-like extinction curve (Fujimoto et al., 2016; Bouwens et al., 2016). One key uncertainty in the measurement of the IRX-\(\beta\) and IRX-\(M_{\bullet}\) relations at increasingly high redshifts is that even at \(z\gtrsim 2\) there are very few individual detections of dust continuum emission from galaxies at log\({}_{10}(M_{\bullet}/\mathrm{M}_{\odot})\)\(<10\)(e.g. Dunlop et al., 2017; Bouwens et al., 2020). Because of this many studies at \(z\gtrsim 2\) rely on stacking analyses of large numbers of individually undetected galaxies within rest-frame FIR survey data (e.g. from SCUBA-2; Koprowski et al., 2018) or alternatively small samples of often inhomogeneously detected samples from multiple follow-up programs with ALMA. In addition to the reliance on stacking or small samples, there are several systematic uncertainties that have precluded a deeper understanding of the IRX-\(\beta\) relation at high redshift. The first is the uncertain FIR spectral-energy distribution (SED) in the galaxies of interest, as the majority of early studies rely on measurements in the rest-frame FIR at typically one frequency. 
The derived FIR luminosity is strongly dependent on the assumed SED (\(L_{\mathrm{IR}}\propto T_{\mathrm{d}}^{4+\beta_{\mathrm{d}}}\); see discussion in e.g. Behrens et al., 2018; Liang et al., 2019; Sommovigo et al., 2020) and where individual dust temperature measurements have been made a wide variation has been found in the derived \(T_{\mathrm{d}}=20\)-\(70\) K (Witstok et al., 2023). Second, the position of a galaxy with respect to the IRX-\(\beta\) relation depends sensitively on the geometry of the dust and stars, as has been shown in theoretical works (e.g. Popping et al., 2017; Narayanan et al., 2018; Ferrara et al., 2022; Pallottini et al., 2022; Vijayan et al., 2023). Early resolved observations have shown that there appears to be an anti-correlation between the position of the rest-frame UV and FIR emission, suggesting a complex geometry that could impact the observed IRX-\(\beta\)(Faisst et al., 2017; Carniani et al., 2017; Bowler et al., 2018; Inami et al., 2022; Hashimoto et al., 2023; Tamura et al., 2023). The result of these studies is an uncertain picture of how the commonly observed rest-frame UV emission in LBGs is connected to any obscured star formation at \(z\gtrsim 7\)(see Hodge & da Cunha, 2020 for a recent summary). To make progress in understanding obscured star formation in LBGs what is required is a statistical survey of homogeneously selected galaxies with deep observations probing the dust continuum. In this work we utilise a comprehensive survey of 49 rare, massive (log\({}_{10}(M_{\bullet}/\mathrm{M}_{\odot})\)\(\gtrsim 9\)) galaxies at \(z=6.5\)-\(8.5\) observed as part of the ALMA REBELS large program (Bouwens et al., 2022). The majority of these sources were selected from wide-area, ground-based data over 7 deg\({}^{2}\) and probe bright rest-frame UV magnitudes \(M_{\mathrm{UV}}<-21\) and hence the bright-end of the rest-frame UV luminosity function at this epoch (e.g. Bowler et al., 2017; Harikane et al., 2022; Varadaraj et al., 2023). We also perform a consistent analysis of the ALMA ALPINE large program (Le Fevre et al., 2020; Bethermin et al., 2020; Faisst et al., 2020) to provide a measurement of the evolving IRX-\(\beta\) and IRX-\(M_{\bullet}\) relations from \(z=4\) to \(z=8\) using the most comprehensive homogeneous samples of \(z>4\) galaxies observed with ALMA. REBELS provides a unique sample of galaxies with direct dust detections (or strong upper limits) to constrain the IRX-\(\beta\) and IRX-\(M_{\bullet}\) relation within the Epoch of Reionization (EoR). This work builds upon the previous observational REBELS papers from Inami et al. (2022), Algera et al. (2023), and Barrufet et al. (2023) and theoretical analyses tailored specifically to describe the REBELS observations from Dayal et al. (2022), Sommovigo et al. (2022) and Ferrara et al. (2022). The structure of this paper is as follows. In Section 2 we describe our sample from REBELS and ALPINE, presenting the ALMA observations in addition to the archival optical and NIR data that we utilize. We present the methods and results in Section 3, in particular the stacking analysis and SED fitting. In Section 4 we present the resulting colour-magnitude relation, physical properties from SED fitting and the IRX-\(\beta\) relation. In Section 5 we discuss our results and present a new derivation of the IRX-\(M_{\bullet}\) relation from \(z=4\)-8, and we compare our ALPINE + REEBELS results to the predictions from simulations in Section 6. We end with our conclusions in Section 7. 
Throughout this work we present magnitudes in the AB system (Oke and Gunn, 1983). The standard concordance cosmology (Planck Collaboration et al., 2020) is assumed, with \(H_{0}=70\,\rm km\,s^{-1}\,Mpc^{-1}\), \(\Omega_{\rm m}=0.3\) and \(\Omega_{\Lambda}=0.7\). ## 2 Data In this work we combine rest-frame UV, optical and FIR measurements to understand the dust properties of \(z=4\)-8 galaxies. The rest-frame UV and optical information is provided by deep degree-scale extragalactic survey observations that have the required wavelength coverage from multiple photometric bands to select these galaxies via the redshifted Lyman break. The rest-frame FIR measurements come from targeted ALMA programs to follow-up these bright sources and provide a direct detection or upper limit on the dust-continuum emission. ### Rebels The REBELS survey is a Cycle 7 ALMA large program that observed 40 LBGs with the primary goal to measure the [CII] 158 \(\mu\)m or [OIII] 88 \(\mu\)m emission line. The sources were found within the ground-based Cosmological Evolution Survey (COSMOS; Scoville et al., 2007) and the _XMM-Newton_ Large Scale Structure (XMM-LSS; Pierre et al., 2004) surveys, with the addition of two sources from _HST_ surveys (REBELS-16 and REBELS-40). We also include 9 sources that were observed as part of the REEBELS pilot programs as presented in Smit et al. (2018) and Schousys et al. (2022), resulting in a final sample containing 49 galaxies. The primary selection criterion was that the source redshift lay securely at \(z>6.5\) as determined by three independent SED fitting codes. The galaxies are bright in the rest-frame UV, with absolute magnitudes (measured at 1500A in the rest-frame) in the range \(-23.0<M_{\rm UV}<-21.3\). REEBELS observed each source using between two and six spectral tunings to cover the frequency range of likely FIR line emission given the photometric redshift probability distribution. As presented in Bouwens et al. (2022), Inami et al. (2022), Schouws et al. (2023, in prep), and van Leeuwen et al. (2023, in prep), 25 galaxies have been spectroscopically confirmed via the [CII] line (with no [OIII]) transformations to-date). In addition, these observations simultaneously allowed a measurement of the dust-continuum emission, with 18 sources detected in the continuum at \(>3.3\sigma\) by Inami et al. (2022). The [CII] line (if detected) was masked in the continuum images. The typical continuum depth of the Band 6 or 7 data (approximately 240 GHz and 350 GHz depending on exact line scan frequencies) was \(\sigma_{\rm rms}=10\)-\(20\,\mu\)Jy with a beam of 1.2-\(1.6^{\prime\prime}\) full width at half maximum (FWHM). ### Alpine The ALPINE survey is an ALMA large program awarded in Cycle 5 that aimed at detecting the [CII] line and dust continuum emission in 118 galaxies at \(z=4\)-6 (Le Fevre et al., 2020; Bethermin et al., 2020). The sources were selected from a large red-optical spectroscopic survey of "normal" star-forming galaxies within the COSMOS and Extended-_Chandra_ Deep Field South (ECDFS) fields. The resulting sample consisted of 67 sources with spectroscopic redshifts from Lyman-\(\alpha\) emission or rest-frame UV absorption features between \(z=4.4\)-\(4.6\) and 51 objects at \(z=5.1\)-\(5.9\)(Faissi et al., 2020). Both Lyman-break selection and narrow-band selections were utilized (with more narrow-band sources in the \(z>5\) sub-sample), leading to a relatively high average rest-frame EW of Ly\(\alpha\) of \(\simeq 5\)-100A (Cassata et al., 2020). 
In comparison to REEBELS, where none of the sources had spectroscopic redshifts prior to the ALMA program, this leads to a different selection function for galaxies, which we discuss further in Section 5. As presented in Fudamoto et al. (2020), 23 galaxies were detected at \(>3.5\sigma\) significance in the dust continuum in the original ALPINE survey (8 sources at \(z>5\)). Several sources have been further followed-up in multiple bands (e.g. HZ4 and HZ6; see Faisi et al., 2020). The typical depth of the ALPINE Band 7 data (275-373 GHz) was \(\sigma_{\rm rms}=30(50)\,\mu\)Jy for the \(z=5.5\) (4.5) samples, with an average beam of \(1.1^{\prime\prime}\) FWHM (Bethermin et al., 2020). ### Optical and near-infrared imaging To measure the rest-frame UV slopes of the REBELS and ALPINE galaxies, in addition to physical properties such as stellar mass, we exploited the wealth of available optical and NIR imaging data. The details of the photometry for the REBELS sample is presented in Bouwens et al. (2022) and Stefanon et al. (in prep), however we briefly describe the relevant data here. In the COSMOS (XMM-LSS) field we used the UltraVISTA (VIDEO) survey from VISTA which provided imaging in the NIR \(YJHK_{S}\)-bands (McCracken et al., 2012; Jarvis et al., 2013). A subset of (fainter) galaxies were additionally located within a \(1\,\rm deg^{2}\) sub-region of the XMM-LSS field that has deeper observations in the \(JHK\) from the UK Infrared Deep Sky Survey (Lawrence et al., 2007) Ultra-Deep Survey (UDS). _Spitzer_/Infrared Array Camera (IRAC) photometry was extracted from the deep mosaics presented in Stefanon et al. (in prep.), in particular from the _Spitzer_ Extended Deep Survey (SEDS; Ashby et al., 2013) and the _Spitzer_ Matching Survey of the UltraVISTA Ultra-deep Stripes (SMUVS; Ashby et al., 2018). Photometry was extracted using \(0.6^{\prime\prime}\) diameter apertures (\(0.9^{\prime\prime}\) for IRAC) on images where the neighbouring sources had been subtracted using MOPHONGO (Labbe et al., 2015). The aperture flux was corrected to total according to the MOPHONGO model for the galaxy. As several of the very bright \((M_{\rm UV}<-22.5)\) REBELS sources were resolved in the ground-based data (Bowler et al., 2017), this step was important in deriving accurate absolute magnitudes and physical properties. Errors on the photometry were derived from empty aperture measurements on the data. For the ALPINE sample we used the photometry provided in the COSMOS2020 catalogue (Weaver et al., 2022) which provided point-spread function (PSF) matched flux measurements for the full suite of optical and NIR filters available in the field. In comparison to the COSMOS2015 catalogue that was utilised in the original ALPINE papers (as presented in Faisi et al., 2020), the COSMOS2020 catalogue has deeper data in a range of filters. Particularly for this work, the optical (from Subaru) and near-infrared (from UltraVISTA DR4) data are up to 1 magnitude deeper, providing significantly improved measurements of the rest-frame UV slope and derived stellar masses. We used the 'Classic' catalogue that provides photometry measured in \(2^{\prime\prime}\) diameter apertures. In the COSMOS2020 catalogue this aperture photometry is corrected to a total flux by a constant factor determined from the Source Extractor MAG_AUTO. From the full ALPINE catalogue of 118 sources, ten sources lie within the ECDFS. 
To provide a sample with uniform photometry from the COSMOS2020 catalogue, we excluded these ten sources from further analysis, leaving a final ALPINE sample of 108 galaxies. ## 3 Methods The primary goal of this work is to measure the rest-frame UV and FIR properties of the 49 bright \(z=6\)-8 LBGs observed as part of the REBELS survey. We also include for comparison a consistent analysis of the ALPINE sample in the COSMOS field, to provide a base-line reaching down to \(z\simeq 4\). We measure the properties of individual sources from both surveys, but also perform a stacking analysis to derive average properties within bins of \(M_{\bullet}\) and \(M_{\rm UV}\). Due to the relatively low fraction of sources that are directly detected in the dust continuum (0.43 for REBELS, 0.19 for ALPINE), stacking is a key tool to understand the dust continuum properties of galaxies within this sample. Here we describe the key methods used in this work. Where possible we used identical approaches for the REBELS and ALPINE analysis to provide a direct comparison between galaxies spanning \(z\simeq 4\) to \(z\simeq 8\). ### Stacking analysis The ALPINE and REBELS samples span a range of redshifts leading to different rest-frame features being observed in our available optical and NIR data. We therefore separate our sample into two redshift bins in ALPINE and three within REBELS. The two ALPINE bins correspond to \(z=4.0\)-\(4.5\) and \(z=4.8\)-\(5.4\), which are the two main redshift groupings within the sample. All ALPINE sources were observed in the same ALMA band (Band 7). In REBELS we split the sample into three bins, by increasing redshift as detailed below. We excluded the _HST_ selected sources REBELS-16 and REBELS-40 from the stacks as they have different rest-frame optical and NIR filters available. The first bin included galaxies at \(z=6.5\)-\(6.9\) (20 galaxies), with the second bin included galaxies in the range \(z=6.9\)-\(7.7\) (20 galaxies). This separation at \(z=6.9\) was chosen as it is the point at which the Lyman break starts to move into the VISTA \(Y\)-band and is also when the H\(\beta\)+[OIII] \(\lambda\)44959, 5007 rest-frame optical lines move from the \([3.6\mu\)m] to the \([4.5\mu\)m] band. The 40 sources within these two bins had observations in ALMA Band 6. The third and final REBELS bin included the seven galaxies that have \(z>7.7\). Four of these sources have ALMA Band 7 observations that were designed to target the [OIII] line. These sources have photometric redshifts in the range \(z=7.7\)-\(7.8\)-\(6\), however none have been spectroscopically confirmed to-date. Due to the wide range of photometric redshifts in this bin, and the differing ALMA measurement bands, we do not create a stack from this sub-sample. However, we present their individual IRX-\(\beta\) properties for comparison with our stacks. We further choose to split the sample using \(M_{\rm UV}\) as this provided the greatest dynamic range in the derivation of the IRX-\(\beta\) and IRX-\(M_{\bullet}\) relation, while also not suffering from biases (as the \(\beta\) and \(M_{\bullet}\) values have significant statistical errors, scatter between bins can lead to biases; see e.g. McLure et al., 2018). The bins we used for REBELS are shown in Table 1. We split the sub-samples by \(M_{\rm UV}=-22.0\) (\(M_{\rm UV}=-22.5\)) at \(z\simeq 6.7\) (\(z\simeq 7.2\)) to provide roughly equal sources with the brighter/fainter absolute magnitude bins. 
For ALPINE we split by \(M_{\rm UV}=-22.0\) for both redshift bins, and in addition we separate the sample into two stellar mass bins to take into account the wider range of \(M_{\bullet}\) values within the sample. The ALPINE bins are detailed in Table 1. By restricting the \(M_{\rm UV}\) and \(M_{\bullet}\) ranges slightly, we obtain a final ALPINE sample of 54 galaxies within the \(z\simeq 4.5\) bin and 32 within the \(z\simeq 5.5\) bin. We performed weighted mean stacking of the ALMA continuum data in the specified bins in the image plane. Our results are unchanged if we instead use a median stacking procedure. Because the majority of sources in the REBELS and ALPINE samples are undetected in the ALMA data, we stack at the position of the observed rest-frame UV emission. We note that this could cause an underestimate of the peak ALMA flux if there exist significant offsets between the rest-frame UV and FIR emission (e.g. as simulated in Bowler et al., 2018). Given the large beam of the REBELS and ALPINE observations, and the relatively small absolute offsets found for these samples (e.g. Le Fevre et al., 2020; Inami et al., 2022), we expect the effect to be small. Indeed, as described in Section 3.4, only the most massive ALPINE stack shows evidence for extended emission, which we attribute partially to offsets between the rest-frame UV and FIR positions. We take this into account in the flux measurement by using a Gaussian fit to the stack.

Figure 1: The average optical and NIR photometry and corresponding best-fitting SED model for the REBELS samples at \(z<6.9\) (top) and \(z>6.9\) (bottom). In each plot we show the observed fluxes and errors from the bright and faint stacks as the blue and red filled circles respectively. The six points correspond to the VISTA \(YJHK_{s}\) and _Spitzer_/IRAC [3.6\(\mu\)m] and [4.5\(\mu\)m] bands. The open circles correspond to the synthetic photometry as derived from the best-fitting SED model from BAGPIPES, shown as the solid line. The observed change in the [3.6\(\mu\)m] to [4.5\(\mu\)m] colour with redshift is due to the transit of the strong H\(\beta\)+[OIII] emission lines between these filters (e.g. Smit et al., 2014).

Foreground sources were masked based on a map of objects derived from PyBDSF (Mohan and Rafferty, 2015), excluding pixels within a radius of \(1.5\arcsec\) from the rest-frame UV centroid of the galaxy. The majority of sources did not have any neighbours at the depth of the ALMA imaging. Errors on the stacked flux measurements were determined by remaking each stack using bootstrap resampling with replacement. To determine the mean optical and NIR photometric measurements we averaged the fluxes from the REBELS and COSMOS2020 catalogues (Section 2.3). We also experimented with stacking the images themselves, however we concluded that it was not possible to improve on the flux stacking results, due to close neighbours contaminating the flux measurements. This contamination was taken into account in our catalogue creation, with nearby sources being subtracted prior to aperture photometry (see Section 2.3). For ALPINE we use the COSMOS2020 'Classic' aperture photometry measurements where basic subtraction of neighbours is performed. We visually checked the ALPINE sources in the COSMOS optical and NIR imaging, but found that they are all sufficiently isolated for the catalogue fluxes to be robust. 
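The image-plane stacking and bootstrap error estimate described above can be summarised in a few lines. The snippet below is a minimal sketch: the array names and the inverse-variance weighting are illustrative assumptions of this sketch rather than the exact REBELS/ALPINE pipeline.

```python
import numpy as np

def stack_cutouts(cutouts, rms, n_boot=1000, seed=0):
    """Weighted-mean stack of ALMA continuum cutouts centred on the rest-frame
    UV positions, with bootstrap (resampling with replacement) uncertainties.

    cutouts : (N, ny, nx) array of continuum cutout images [Jy/beam]
    rms     : (N,) per-image rms noise, used here as inverse-variance weights
    """
    cutouts = np.asarray(cutouts)
    w = 1.0 / np.asarray(rms) ** 2
    stack = np.tensordot(w, cutouts, axes=(0, 0)) / w.sum()

    rng = np.random.default_rng(seed)
    boot = np.empty((n_boot,) + cutouts.shape[1:])
    for i in range(n_boot):
        idx = rng.integers(0, len(cutouts), len(cutouts))   # resample the sources
        boot[i] = np.tensordot(w[idx], cutouts[idx], axes=(0, 0)) / w[idx].sum()
    return stack, boot.std(axis=0)

# Example usage: peak flux and its bootstrap error at the stack centre
# stack, err = stack_cutouts(cutouts, rms)
# ny, nx = stack.shape
# peak, peak_err = stack[ny // 2, nx // 2], err[ny // 2, nx // 2]
```

A median stack (used above as a cross-check) would simply replace the weighted mean with a median along the first axis.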
### SED fitting We fit the photometric data for the individual REBELS and ALPINE sources (and the derived stacks) using BAGPIPES (Carnall et al., 2018) to provide a best-fitting model with which to measure the rest-frame UV slope. The fitting also provides physical properties for the stacks, which we include in particular for measuring the IRX-\(M_{\bullet}\) relation. We fix the redshift to the spectroscopic redshift when available (28 sources in REBELS, all of the sources in ALPINE), and for the stacked photometry we fix the redshift to the average redshift. We found that using a luminosity-weighted redshift instead of an average had no effect on our results, as the difference was \(\delta z\leq 0.03\). We include bands above the Lyman-break reaching to the \([4.5\mu\)m] filter, beyond which the resolution and depth decreases dramatically. We also exclude bands that contain the Lyman-break in the fitting of the stacked photometry, as the small differences in break position within the band lead to tensions within the fitting. Hence for REBELS, we fit to the \(YJHK_{s}[3.6\mu\)m\(][4.5\mu\)m] bands for the \(z=6.5\)-\(6.9\) sub-sample stack, and to the \(JHK_{s}[3.6\mu\)m\(][4.5\mu\)m] bands for the \(z>6.9\) stack. The resulting photometry and best-fitting SED models for the REBELS stacks are shown in Fig. 1. For ALPINE, we fit to the \(IzYJHK_{s}[3.6\mu\)m\(][4.5\mu\)m]bands for the \(z=4.5\) stack, and to the \(zYJHK_{s}[3.6\mu\)m\(][4.5\mu\)m]bands for the \(z=5.5\) stack. A delayed-\(\tau\) model was assumed (\(\Phi(t)\propto te^{-(t/\tau)}\)) in which the timescale of the decline was allowed to vary in the range \(\tau=[0.3,10.0]\) Gyr and the age from 1 Myr up to the age of the Universe at that redshift. The metallicity was fixed to \(0.2Z_{\odot}\), and the Calzetti et al. (2000) dust law was assumed with the attenuation in the \(V\)-band constrained to the range \(A_{\rm V}=[0,2]\). We allowed the nebular ionization parameter to vary in the range \(\log_{10}(U)=[-2,-4]\). Uniform priors were assumed for all of the fitted parameters. These parameters resulted in acceptable fits to the REBELS sources as seen in Fig. 1, with no evidence for truncation of the resulting corner plots. Assuming a different SFH (e.g. constant or \(\tau\)) or metallicity only marginally affected the \(M_{\bullet}\) by at most 0.1 dex and the derived \(\beta\) values by \(<0.05\). Note that assuming a non-parametric SFH for the REBEL sample as presented in Topping et al. (2022) can increase the derived stellar masses by on average \(\simeq 0.5\) dex and in some cases \(\gtrsim 1.5\) dex. To provide a closer comparison to previous literature measurements of the IRX-\(M_{\bullet}\) we primarily consider the \(M_{\bullet}\) values derived with standard parametric SFHs, however we note where relevant how our results would change with an assumed alternate SFH. For the ALPINE sample, Faisst et al. (2020) found that the \(\beta\)-value derived depended on the assumed dust law in the fitting. We also recover this trend in our sample, with the derived \(\beta\)-slopes being redder by around 0.1 when fitting with an SMC dust law in comparison to a Calzetti law. ### Rest-frame UV luminosity and slope determination The monochromatic rest-frame UV luminosity was derived at 1500 A using a top-hat filter of width 100 A applied to the best-fitting SED model from BAGPIPES for both the REBELS and ALPINE stacked photometry. 
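As an illustration of the \(M_{\rm UV}\) measurement just described, the following minimal sketch applies a 100 A wide top-hat at rest-frame 1500 A to a best-fitting model SED. The function name, the input conventions, and the use of astropy's FlatLambdaCDM with the cosmology quoted in Section 1 are assumptions of this sketch.

```python
import numpy as np
from astropy import units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # cosmology adopted in this work

def absolute_uv_magnitude(wave_obs, fnu_obs_jy, z, lam0=1500.0, width=100.0):
    """M_UV from a 100 A wide top-hat centred on rest-frame 1500 A.

    wave_obs   : observed-frame wavelength of the model SED [Angstrom]
    fnu_obs_jy : observed-frame flux density of the model SED [Jy]
    """
    sel = np.abs(wave_obs / (1.0 + z) - lam0) <= width / 2.0
    fnu = np.mean(fnu_obs_jy[sel]) * u.Jy
    m_ab = -2.5 * np.log10((fnu / (3631.0 * u.Jy)).decompose().value)
    # K-correction appropriate for an f_nu-defined AB magnitude
    return m_ab - cosmo.distmod(z).value + 2.5 * np.log10(1.0 + z)
```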
We note that the aperture photometry for both the REBELS catalogue and COSMOS2020 has been corrected to a total flux accounting for the full extent of the galaxy, and hence the derived \(M_{\rm UV}\) can be considered a total absolute magnitude. We found a systematic offset brightwards of \(\Delta M_{\rm UV}=-0.1\) mag between the ALPINE absolute magnitudes presented in Faisst et al. (2020) and those found in our analysis, with some sources having considerable offsets reaching \(>0.5\) mag. Further inspection reveals this to be due to the improved photometry between the COSMOS2015 and COSMOS2020 catalogues. The rest-frame UV slope is historically defined from a series of windows in the continuum from \(\lambda_{\rm rest}=1268\)-\(2580\) A (Calzetti et al., 1994). Different methods for measuring \(\beta\) from the available photometric data in high-redshift galaxies have been extensively discussed, including the fitting of a power law, or a power law with a Lyman-break, to the photometry directly or to the fit SED model (e.g. Dunlop et al., 2012; Rogers et al., 2013). In this work we measure \(\beta\) from the best-fitting SED model derived from BAGPIPES (see Section 3.2), excluding regions that are outside of the Calzetti windows, to avoid strong absorption or emission features. Errors were derived using a bootstrap analysis, where we restacked the photometry and re-fit using BAGPIPES. Comparing the derived \(\beta\) values to those presented in the original ALPINE analysis (Faisst et al., 2020), we find on average a very mild bias to bluer slopes by 0.05 in our analysis, when comparing results obtained by fitting assuming the same dust attenuation law. The scatter between individual objects can be large (up to \(\delta\beta=0.5\)), but is within the errors of the derived \(\beta\) values.

Figure 2: The ALMA Band 6 (\(\lambda_{\rm rest}\approx 150\)\(\mu\)m) stacks for REBELS. The stacks in the redshift range \(6.5<z<6.9\) (\(6.9<z<7.7\)) are shown in the upper (lower) row. The left and right columns show the brighter and fainter \(M_{\rm UV}\) stacks respectively, with the brighter stacks containing 7 (6) sources and the fainter stack containing 13 (14) sources at \(z\simeq 6.7\) (\(z=7.2\)). The stamps are 10 arcsec on a side, with N to the top and E to the left. The colour scale is saturated beyond the range \([-2.0,10.0]\)\(\sigma\) and contours are shown at \(1\sigma\) intervals. The average beam for the data included in the stack is shown as the grey ellipse, with the position angle determined as the mode of the input image values. The stacks are consistent with being unresolved at the resolution of the data (1.2-1.6 arcsec FWHM).

### Rest-frame FIR luminosity derivation Using PyBDSF (Mohan and Rafferty, 2015) we measured both the peak flux and the flux derived from a Gaussian fit for the individual sources and the stacked ALMA data. We found that our stacked results were consistent with being unresolved for log\({}_{10}\)(\(M_{\bullet}\)/M\({}_{\odot}\))\(<10\) (i.e. the full REBELS sample and the low-mass sub-sample of ALPINE). We define a source as unresolved if the measured major and minor axes from PyBDSF are consistent with the beam size within the 1\(\sigma\) error (again derived within PyBDSF). Hence we used the peak flux measurement in these cases. For the ALPINE sample at log\({}_{10}\)(\(M_{\bullet}\)/M\({}_{\odot}\))\(>10\) we instead used the Gaussian flux measurement, as the derived sizes from PyBDSF were significantly resolved. 
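For reference, the UV continuum slope measurement described above (a power-law fit to the best-fitting model over the Calzetti et al. 1994 range) can be sketched as follows. Using the full 1268-2580 A interval in place of the individual Calzetti windows is a simplification of this sketch, and the function name is a placeholder.

```python
import numpy as np

def uv_slope(wave_rest, flux_lambda, lam_min=1268.0, lam_max=2580.0):
    """Power-law fit f_lambda ~ lambda^beta to the best-fitting model spectrum.

    wave_rest   : rest-frame wavelength [Angstrom]
    flux_lambda : f_lambda of the model SED (arbitrary units)
    """
    sel = (wave_rest >= lam_min) & (wave_rest <= lam_max) & (flux_lambda > 0)
    beta, _ = np.polyfit(np.log10(wave_rest[sel]), np.log10(flux_lambda[sel]), 1)
    return beta

# Errors: re-stack the photometry, re-fit with BAGPIPES, collect uv_slope() for
# each bootstrap realisation, and take the scatter (as described in the text).
```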
From the single data-point in the observed mm-regime from ALMA in Band 6 for ALPINE and Band 6 or 7 for REBELS we determined the total FIR luminosity by assuming a modified blackbody SED. We corrected for the effect of the Cosmic Microwave Background following da Cunha et al. (2013), which results in an increase in the \(L_{\rm IR}\) by 10 percent for the REBELS sample. While some sources within the two surveys have been observed in multiple ALMA bands (Algera et al., 2023), we choose here to provide a uniform measure of \(L_{\rm IR}\) from the single main band that is available for the full REBELS + ALPINE samples. For the analysis presented in this work we assumed a single fixed dust temperature of \(T=46\) K, with an opacity fixed to \(\beta_{\rm d}=2.0\) (consistent with that recently measured by Wristotk et al., 2023). This dust temperature was derived by the model of Sommovigo et al. (2022) and was used by Inami et al. (2022). When showing the IRX-\(\beta\) relation we illustrate with an arrow the uncertainty introduced by this assumption. We keep the dust temperature constant between the ALPINE and REBELS analysis, following the \(T_{\rm d}\) analysis presented in Sommovigo et al. (2022, 2022) who found that the dust temperatures between the samples were consistent at 46 K despite the different redshifts. This finding was not expected given that other studies have found a redshift evolution in \(T_{\rm d}\), however the exact form of the relation is still under debate especially at \(z>4\)(e.g. Sommovigo et al., 2022; Wristotk et al., 2023; Jones and Stanway, 2023). For example the trend found by Schreiber et al. (2018) up to \(z=4\) would predict a change of around 5 K between the redshifts of the ALPINE and REBELS results. As we discuss further in Section 4.4 even this small temperature difference can have an appreciable effect on the derived IRX and hence IRX-\(\beta\) and IRX-\(M_{\bullet}\) relations. The dust temperature constraints that exist for the individual ALPINE (Faisst et al., 2017) and REBELS (Wristotk et al., 2022; Algera et al., 2023) galaxies are consistent with our chosen \(T_{\rm d}\) within the (substantial) errors, and show best-fit values from 20 K to 90 K. Due to the lack of \(T_{\rm d}\) measurements for the vast majority of our sample, we are unable to account for this in our analysis and leave it to a future work. ## 4 Results In Fig. 1 we present the stacked photometry and best-fitting SED model for the REBELS sample, split into the two main redshift (and further two \(M_{\rm UV}\) bins) as shown in Table 1. The results of stacking the ALMA data in these bins are shown in Fig. 2. We find a significant (7-10\(\sigma\)) detection in the fainter \(M_{\rm UV}\)-bin for both the \(z\simeq 6.7\) and the \(z\simeq 7.2\) stacks. In the brighter stacks we find marginal detections in the dust continuum, at 4\(\sigma\) and 2.5\(\sigma\) for the \(z\simeq 6.7\) and the \(z\simeq 7.2\) stack respectively. The fluxes we derive are consistent with that found in the independent analysis of the REBELS sample by Algera et al. (2023), who used a Monte Carlo stacking analysis to measure a correlation of \(L_{\rm IR}\) with stellar mass. From these ALMA detections we then proceeded to compute the \(L_{\rm IR}\) and combine this with the rest-frame UV information (\(L_{\rm UV}\), rest-frame UV slope) and the stellar mass as derived from BAGPIPES as detailed below. 
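As a concrete illustration of the \(L_{\rm IR}\) estimate described in Section 3.4 (a single-band, optically thin modified blackbody with \(T_{\rm d}=46\) K and \(\beta_{\rm d}=2.0\), with a CMB correction of the da Cunha et al. 2013 form), the following is a minimal sketch. The integration limits (rest-frame 8-1000 \(\mu\)m), the grid resolution and all function and variable names are assumptions of this sketch rather than details taken from the REBELS pipeline.

```python
import numpy as np
from astropy import constants as const, units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
T_CMB0 = 2.7255 * u.K

def planck(nu, T):
    """Planck function B_nu(T); the constant steradian factor cancels below."""
    x = (const.h * nu / (const.k_B * T)).decompose().value
    return 2 * const.h * nu**3 / const.c**2 / np.expm1(x)

def lir_single_band(s_obs, nu_obs, z, T_d=46.0 * u.K, beta_d=2.0):
    """L_IR (rest-frame 8-1000 um) from one continuum flux density, assuming an
    optically thin modified blackbody and a da Cunha et al. (2013)-style CMB
    correction; all specific numerical choices here are assumptions."""
    nu_rest = nu_obs.to(u.Hz) * (1 + z)

    # CMB heating of the dust and dimming of the observed flux against the CMB
    p = 4.0 + beta_d
    T_dz = (T_d.value**p + T_CMB0.value**p * ((1 + z)**p - 1)) ** (1 / p) * u.K
    f_cmb = 1 - planck(nu_rest, T_CMB0 * (1 + z)) / planck(nu_rest, T_dz)
    s_int = s_obs.to(u.Jy) / f_cmb

    # Normalise nu^beta * B_nu(T) at the measured frequency and integrate
    nu_lo = (const.c / (1000 * u.um)).to(u.Hz)
    nu_hi = (const.c / (8 * u.um)).to(u.Hz)
    nu = np.linspace(nu_lo.value, nu_hi.value, 5000) * u.Hz
    sed = s_int * (nu / nu_rest) ** beta_d * planck(nu, T_dz) / planck(nu_rest, T_dz)
    flux_int = np.trapz(sed, nu)                     # integrated flux [Jy Hz]

    d_l = cosmo.luminosity_distance(z)
    return (4 * np.pi * d_l**2 * flux_int / (1 + z)).to(u.Lsun)

# e.g. lir_single_band(60 * u.uJy, 240 * u.GHz, 7.0) for a REBELS-like stack
```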
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \(z\) bin & N & \(z_{\rm mean}\) & \(M_{\rm UV}\) & \(F_{\rm peak}\) & \(F_{\rm Gauss}\) & \(L_{\rm IR}\) & IRX & \(\beta_{\rm SED}\) \\ & & & /mag & /\(\mu\) Jy & /\(\mu\) Jy & /\(10^{11}\) L\({}_{\odot}\) & & \\ \hline 6.5 \(<\) z \(<\) 6.9 & 7 & 6.76 & \(-22.34\pm 0.14\) & \(30.3\pm 7.9\) (5.6) & \(31.1\pm 9.9\) & \(1.4\pm 0.4\) & \(-0.11^{+0.16}_{-0.18}\) & \(-2.10^{+0.14}_{-0.14}\) \\ 6.5 \(<\) z \(<\) 6.9 & 13 & 6.67 & \(-21.71\pm 0.12\) & \(44.5\pm 10.4\) (6.5) & \(64.0\pm 15.3\) & \(2.1\pm 0.5\) & \(0.31^{+0.14}_{-0.16}\) & \(-1.79^{+0.08}_{-0.08}\) \\ \hline 6.9 \(<\) z \(<\) 7.7 & 6 & 7.11 & \(-22.75\pm 0.18\) & \(<36.0\) (2.0) & \(--\) & \(<1.9\) & \(<-0.14\) & \(-2.19^{+0.10}_{-0.10}\) \\ 6.9 \(<\) z \(<\) 7.7 & 14 & 7.26 & \(-21.97\pm 0.25\) & \(36.3\pm 9.6\) (10.1) & \(56.0\pm 8.5\) & \(2.0\pm 0.5\) & \(0.18^{+0.22}_{-0.23}\) & \(-2.10^{+0.07}_{-0.07}\) \\ \hline \hline \end{tabular} \end{table} Table 1: The measured FIR fluxes and derived properties of the four REBELS stacks at \(z=6.5\)-7.7. The equivalent results for the ALPINE analysis are presented in Table 1. The top (bottom) two rows show the lower (higher) redshift stack, with the stacks ordered by \(M_{\rm UV}\). Columns 1 and 2 detail the redshift and number of sources included in each stack. The average redshift and \(M_{\rm UV}\) of each stack are shown in Columns 3 and 4. Column 5 presents the measured peak ALMA flux, with the corresponding S/N shown in brackets. The flux measured using a Gaussian fit is shown in Column 6. The derived FIR luminosity (assuming \(T_{\rm d}=46\) K, \(\beta_{\rm d}=2.0\)) and the resulting IRX value are shown in Columns 7 and 8. The \(L_{\rm IR}\) was determined from the peak flux for the REBELS results. Finally the rest-frame UV slope \(\beta\) is presented in Column 9, as measured from the best-fitting SED model.

### Physical properties The splitting of the bright REBELS sample into two redshift bins separated at \(z=6.9\) allows us to provide high-S/N stacks of the rest-frame UV and optical emission where the strong H\(\beta\) + [OIII] \(\lambda\lambda 4959,5007\) lines sit within a single [3.6\(\mu\)m] or [4.5\(\mu\)m] band. As shown in Fig. 1 we find that the REBELS sources are blue in the rest-frame UV, as probed by the \(YJHK_{s}\) bands, with strong [3.6\(\mu\)m]-[4.5\(\mu\)m] colours evident. As we move to \(z>6.9\) we see a change in the IRAC colour indicative of H\(\beta\)+[OIII] \(\lambda\lambda 4959,5007\) moving into the [4.5\(\mu\)m]-band. The derived SED fitting parameters from this photometry, including stellar mass and age, are presented in Table 2. As expected from the individual results for REBELS presented in Bouwens et al. (2022), the galaxies are massive, with \(M_{\star}\geq 10^{9}\,\mathrm{M}_{\odot}\), and have moderate ages of the order of 40-130 Myr. The derived dust attenuation is relatively low, as expected from the fact that the sample is rest-frame UV selected and shows blue \(\beta\) slopes (see Section 4.2). Our most significantly FIR detected stack has the reddest \(\beta=-1.8\) and strongest \(A_{\mathrm{V}}=0.65\pm 0.05\). We measure the unobscured SFR from the rest-frame UV emission using the luminosity at 1500A and the conversion of Madau & Dickinson (2014) for a constant SFR in the previous 100 Myr and a fixed metallicity of \(Z=0.1\,Z_{\odot}\). 
The SFR from the FIR was derived using the conversion based on the same assumptions on the SFH from Madau & Dickinson (2014), which provides an identical calibration to that used in the previous REBELS work by Algera et al. 2023b. Both calibrations were adjusted to a Chabrier (2003) initial mass function (IMF). The total SFR measured as the sum of these two components (SFR\({}_{\mathrm{UV}}\) + SFR\({}_{\mathrm{IR}}\)) is in good agreement with that derived from the SED fitting. ### Colour-magnitude relation at \(\mathbf{z\simeq 7}\) In Fig. 3 we present the rest-frame UV slopes of the REBELS sources, and the stacks, in comparison to colour-magnitude relations from recent literature measurements. The REBELS sample allows us to measure the colour-magnitude relation up to \(M_{\mathrm{UV}}\simeq-23\), which is considerably brighter than the typical galaxies found and studied previously using _HST_ or _JWST_ data that are typically dominated by sources at \(M_{\mathrm{UV}}\gtrsim-21.\) We find that the REEBELS galaxies show a range of rest-frame UV slopes, with \(-2.7<\beta<-1.0\), with many measured \(\beta\)-slopes dominated by large errors (\(\Delta\beta>0.5\)). Reassuringly, our \(\beta\) values derived from the stacked photometry follow the distribution of individual values. We compare our results to the extrapolated relations from fainter sources derived at \(z\simeq 7\) using _HST_ data by Bouwens et al. (2014), and the two recent _JWST_ results by Topping et al. (2023) at \(z=7.3\) and by Cullen et al. (2023) at \(z=8\)-12 (here we use the redshift evolution applied to the slope found Rogers et al. 2014). These studies computed \(\beta\) by fitting a power law the available photometry probing the rest-frame UV. At \(z\simeq 6.7\) we find good agreement with these relations in our fainter bin, however we see that our brighter stack has a significantly bluer \(\beta\)-slope than expected from the extrapolated colour-magnitude relations from previous studies at fainter magnitudes. At \(M_{\mathrm{UV}}<-22\) the offset bluewards from the colour-magnitude relations is between \(\Delta\beta\simeq 0.3\)-0.7 depending on the study. Looking at the slightly higher redshift bin at \(z\simeq 7.2\) we find an offset to bluer \(\beta\) values by \(\Delta\beta=0.4\) in both the brighter and fainter stack. Our results support a flattening, and potentially even a turn-over, of the colour-magnitude relationship at \(M_{\mathrm{UV}}\lesssim-22\), with these galaxies showing a mean colour of \(\beta=-2.1\) in contrast to the predicted colour of \(\beta\simeq-1.4\) to \(-1.7\) from the relations extrapolated from fainter LBGs. As we discuss further in Section 5, this turn over can be explained by the effect of scatter in the obscuration when considering sources that have a steeply declining number density. In the measurement of the colour-magnitude relationship we must consider any effect of sample selection and \(\beta\)-measurement bias in the results we obtain. It is possible that we could be missing redder \(z\simeq 7\) galaxies due to the requirement that the sources show good high-redshift fits and poorer quality fits to (typically redder) low-redshift galaxy contaminants. As shown in Fig. 
3 we are able to measure \(\beta\)-values as red as \(\beta\simeq-1.2\) for the sources in our sample, even at the faint end, whereas we do not find significant numbers of the brightest sources to be as red (even though the increased S/N should make bright, red, sources easier to identify than similarly red, fainter sources). The REBELS sample selection is not only based on the rest-frame UV bands but also includes the \([3.6\mu\mathrm{m}]\) and \([4.5\mu\mathrm{m}]\) bands in the SED fitting. The _Spitzer_/IRAC colour aids in the selection of robust \(z\simeq 7\) galaxies due to the specific colours produced by the rest-frame optical nebular emission lines in the \([3.6\mu\mathrm{m}]\) and \([4.5\mu\mathrm{m}]\) bands (Smit et al. 2015). Using these filters could be biasing our sample towards bluer slopes by potentially selecting young galaxies with stronger nebular emission. We discount this however, as the distribution of the EW\({}_{0}\)(H\(\beta\) + [OIII]) of the REBELS sample is in excellent agreement with \(z\simeq 7\) samples that are selected only based on a strong Lyman break (see figure 18 of Bouwens et al., 2022). In fact, because these colours are challenging to reproduce by low-redshift galaxy contaminants it can aid in the recovery of good high-redshift galaxy fits to sources with redder rest-frame UV slopes (e.g. in Endsley et al., 2021; see Stefanon in prep. for individual SED fits). Hence we conclude that our measurements of the rest-frame UV slope of the REBELS LBGs are unlikely to be significantly biased, with the caveat that we only select sources that are bright in the rest-frame UV, and hence will be incomplete to the most obscured galaxies (with the extreme situation being fully 'UV-dark' galaxies as found in e.g. Fudamoto et al., 2021).

Figure 3: The REBELS galaxies at \(z=6.5\)–\(6.9\) (upper) and \(6.9<z<7.7\) (lower) in comparison to the colour-magnitude relation found by previous studies. In each plot the individual galaxy measurements of the rest-frame UV slope and \(M_{\mathrm{UV}}\) are shown as the grey open points, while the stacked results (from the fits shown in Fig. 1) are shown as the blue filled points. The relationships derived from fainter studies are shown as the black solid, dotted and dashed lines from Cullen et al. (2023), Topping et al. (2023) and Bouwens et al. (2014) respectively. Slight differences between the relations in the upper and lower plot are due to the derived evolution of the relation from these studies. We also show the data points from the same three works (at a fixed rather than evolving redshift), note in particular that the redshift range of the data from the Cullen et al. (2023) study is at \(z>8\).

### \(\mathbf{IRX-\beta}\) relation at \(\mathbf{z\simeq 7}\) from REBELS

In Fig. 4 we present the IRX-\(\beta\) relation derived from the REBELS sample in the redshift range \(6.5<z<7.7\). We also show the derived values for individual sources, of which there are 18 detections at \(>3.3\sigma\) from Inami et al. (2022). These results (both individual galaxies and for the stacks) were computed with the assumption of a modified blackbody FIR SED, with an assumed dust temperature of \(T=46\) K and \(\beta_{\rm d}=2.0\) (Section 3.4). The individual results show a large scatter horizontally on the plot as a result of the large errors in individual measurements of the rest-frame UV slope. 
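To make this assumption concrete, the sketch below is a minimal, optically thin version of the \(L_{\rm IR}\) and IRX calculation from a single continuum flux density (no CMB correction, Planck18 cosmology, and an illustrative Band 6 observing frequency); it is not our exact measurement pipeline.

```python
import numpy as np
import astropy.units as u
from astropy.constants import c, h, k_B
from astropy.cosmology import Planck18

def greybody(nu_hz, t_dust=46.0, beta_d=2.0):
    """Optically thin modified blackbody, nu^beta_d * B_nu(T), arbitrary normalisation."""
    x = h.cgs.value * nu_hz / (k_B.cgs.value * t_dust)
    return nu_hz ** (3.0 + beta_d) / np.expm1(x)

def lir_and_irx(s_nu_ujy, nu_obs_ghz, z, m_uv, t_dust=46.0, beta_d=2.0):
    """L_IR (rest-frame 8-1000 um, in Lsun) from one ALMA flux density, and IRX = L_IR/L_UV."""
    c_cgs = c.cgs.value
    d_l = Planck18.luminosity_distance(z).to(u.cm).value
    nu_rest = nu_obs_ghz * 1e9 * (1.0 + z)
    l_nu_rest = 4.0 * np.pi * d_l ** 2 * (s_nu_ujy * 1e-29) / (1.0 + z)          # erg/s/Hz
    nu_grid = np.logspace(np.log10(c_cgs / 0.1), np.log10(c_cgs / 8e-4), 3000)   # 1000-8 um
    l_ir = l_nu_rest * np.trapz(greybody(nu_grid, t_dust, beta_d), nu_grid) \
        / greybody(nu_rest, t_dust, beta_d)                                      # erg/s
    # L_UV taken as nu*L_nu at rest-frame 1500 A from the absolute AB magnitude
    l_nu_uv = 4.0 * np.pi * (10.0 * 3.0857e18) ** 2 * 10.0 ** (-0.4 * (m_uv + 48.6))
    irx = l_ir / ((c_cgs / 1500e-8) * l_nu_uv)
    return l_ir / 3.828e33, irx

# Example: the fainter z~6.7 stack (44.5 uJy, M_UV = -21.71); 240 GHz is illustrative
lir, irx = lir_and_irx(44.5, 240.0, 6.67, -21.71)
print(f"L_IR ~ {lir:.1e} Lsun,  log10(IRX) ~ {np.log10(irx):.2f}")
```

For the fainter \(z\simeq 6.7\) stack this returns \(L_{\rm IR}\approx 2\times 10^{11}\,{\rm L}_{\odot}\) and \(\log_{10}({\rm IRX})\approx 0.3\), close to the values in Table 1; the exact numbers shift slightly with the adopted observing frequency and cosmology.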
We find that the majority of this range in observed rest-frame UV slopes in the sample can be explained with statistical scatter, with the intrinsic variation as a function of \(M_{\rm UV}\) derived to be of the order of \(\Delta\beta=0.1\)-0.3 for REBELS (see Table 1). Rather the scatter can be explained simply due to the large errors on the individual \(\beta\) measurements, which we have demonstrated via a simple simulation assuming the sample is drawn from a constant input \(\beta=-2.0\) with the same \(\beta\) measurement errors. With our assumed rest-frame FIR SED, based on the work of Sommovigo et al. (2022), we do not confirm any sources significantly below the SMC-like IRX-\(\beta\) relation as found by previous high-redshift studies (e.g. Barisic et al., 2017; Faisst et al., 2017; Smith et al., 2018; note that these works assumed a lower \(T_{\rm d}\simeq 30\)-40). Although at the depths of our observations, 31 of the 49 sources in REBELS are undetected in the dust continuum and hence the IRX values represent upper limits. We see one source that is significantly in excess of the others, with IRX \(=1.2\pm 0.2\). This is the unusually FIR bright object REBELS-25 that is discussed further in Hygate et al. (2023). This source is included in our stacks, however our results are unchanged if it is removed. These data are shown in comparison to the expected relation for a Calzetti et al. (2000) dust attenuation and SMC dust extinction law, assuming an intrinsic \(\beta\)-slope, \(\beta_{0}=-2.3\). This intrinsic slope is consistent with that found in detailed SED fitting of comparable mass sources at \(z=3\)(McLure et al., 2018) and similar to that found in simulations of galaxies at \(z\simeq 5\)(e.g. Cullen et al., 2018 found \(\beta_{0}=-2.4\)). We present a fit to the IRX-\(\beta\) relation, and fits from previous works at higher redshift, that include a steeper intrinsic \(\beta\) in Section 4.4. We find no strong correlation between the offset from a Calzetti-like relation and \(A_{\rm V}\), \(M_{\star}\), or spatial offset between the rest-frame UV and FIR flux (from Inami et al., 2022). There is a weak trend that the FIR brightest galaxies tend to be above the relation, such as REBELS-25 (as has been seen for ULIRGS; see discussion below). Turning to the stacked results, we present four individual points corresponding to the two different redshift and \(M_{\rm UV}\) bins. 
For the \(6.5<z<6.9\) sub-sample we see that the brighter and fainter stacks show significantly different rest-frame UV slopes, with the brighter stack (\(M_{\rm UV}<-22\)) appearing bluer, while simultaneously being fainter in the FIR. The brighter stack also shows a lower ALMA detected flux, and hence a lower IRX both from a higher \(L_{\rm UV}\) and a reduced \(L_{\rm IR}\).

\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline \(z\) bin & \(M_{\rm UV}\) bin & SFR\({}_{\rm UV}\) & SFR\({}_{\rm IR}\) & \(f_{\rm obs}\) & SFR\({}_{\rm SED}\) & log\({}_{10}\)(\(M_{\star}\)/M\({}_{\odot}\)) & Age & A\({}_{V}\) & log\({}_{10}\)(\(U\)) \\ & & \(\rm/M_{\odot}/yr\) & \(\rm/M_{\odot}/yr\) & & \(\rm/M_{\odot}/yr\) & & /Myr & /mag & \\ \hline \(6.5<z<6.9\) & \(-23.5<M_{\rm UV}<-22.0\) & \(23^{+3}_{-3}\) & \(17^{+4}_{-4}\) & \(0.42\pm 0.11\) & \(42^{+12}_{-16}\) & \(9.6^{+0.2}_{-0.2}\) & \(70^{+60}_{-40}\) & \(0.36^{+0.07}_{-0.09}\) & \(-3.1^{+0.4}_{-0.4}\) \\ \(6.5<z<6.9\) & \(-22.0<M_{\rm UV}<-20.5\) & \(13^{+2}_{-2}\) & \(25^{+5}_{-5}\) & \(0.66\pm 0.16\) & \(30^{+17}_{-12}\) & \(9.4^{+0.2}_{-0.2}\) & \(40^{+30}_{-20}\) & \(0.65^{+0.05}_{-0.05}\) & \(-2.7^{+0.4}_{-0.3}\) \\ \hline \(6.9<z<7.7\) & \(-23.5<M_{\rm UV}<-22.5\) & \(34^{+6}_{-5}\) & \(<24\) & \(<0.41\) & \(48^{+15}_{-12}\) & \(9.8^{+0.2}_{-0.3}\) & \(130^{+130}_{-50}\) & \(0.21^{+0.13}_{-0.12}\) & \(-3.0^{+0.7}_{-0.6}\) \\ \(6.9<z<7.7\) & \(-22.5<M_{\rm UV}<-20.5\) & \(17^{+4}_{-3}\) & \(24^{+6}_{-6}\) & \(0.59\pm 0.20\) & \(35^{+10}_{-10}\) & \(9.6^{+0.2}_{-0.2}\) & \(110^{+140}_{-70}\) & \(0.31^{+0.10}_{-0.13}\) & \(-2.9^{+0.6}_{-0.6}\) \\ \hline \hline \end{tabular} \end{table} Table 2: The physical properties of the REBELS sources as derived from the stacked photometry. We employ both SFR calibrations and SED fitting using BAGPIPES, where the best-fitting models are shown in Fig. 1. Each row corresponds to a different stack in redshift and absolute UV magnitude as shown in Columns 1 and 2 respectively. In Columns 3 and 4 we present the SFR derived from the rest-frame UV and FIR (see Section 4.1), with Column 5 showing the obscured SFR fraction derived from these quantities. Columns 6 to 10 show the SFR, \(M_{\star}\), age, \(A_{\rm V}\) and ionization parameter as derived from BAGPIPES assuming an SFH following a delayed \(\tau\) model and a fixed metallicity of \(Z=0.2Z_{\odot}\).

Figure 4: The IRX-\(\beta\) relation as derived from the REBELS sample. The individual dust continuum detected galaxies are shown as the filled dark grey points, with undetected galaxies shown as the lighter grey upper limits. The results from our stacking analysis are shown as the dark blue circles (light blue squares) at \(z\simeq 6.7\) (\(z\simeq 7.2\)). Within each redshift bin the fainter \(M_{\rm UV}\) stack is found to have a redder \(\beta\) and a larger IRX. We assume a dust temperature of 46 K and emissivity of \(\beta_{\rm d}=2.0\), with the arrow shown in the lower right of the plot illustrating the systematic uncertainty we would obtain assuming \(\pm 15\) K. The expected relation for Calzetti-like dust assuming \(\beta_{0}=-2.3\) is shown as the black solid line, with the expected relation from an SMC extinction curve shown as the dashed line. The right axis shows the dust attenuation in magnitudes corresponding to the IRX according to the Calzetti dust law with a screen geometry.
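The obscured fractions in Column 5 of Table 2 follow directly from the two SFR estimates; a minimal version with symmetrised Gaussian error propagation (a simplification of the asymmetric uncertainties actually quoted) is shown below.

```python
import numpy as np

def obscured_fraction(sfr_uv, err_uv, sfr_ir, err_ir):
    """f_obs = SFR_IR / (SFR_UV + SFR_IR) with simple Gaussian error propagation."""
    total = sfr_uv + sfr_ir
    f_obs = sfr_ir / total
    err = np.hypot(sfr_uv * err_ir, sfr_ir * err_uv) / total ** 2
    return f_obs, err

# The two z = 6.5-6.9 stacks from Table 2, with symmetrised uncertainties
for sfr_uv, err_uv, sfr_ir, err_ir in [(23.0, 3.0, 17.0, 4.0), (13.0, 2.0, 25.0, 5.0)]:
    f_obs, err = obscured_fraction(sfr_uv, err_uv, sfr_ir, err_ir)
    print(f"f_obs = {f_obs:.2f} +/- {err:.2f}")
```

The central values reproduce those in Table 2, while the uncertainties quoted in the table are somewhat larger than this simple symmetric propagation.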
The same trend is seen for the galaxies in the \(6.9<z<7.7\) sub-sample, however here as both stacks are blue in the rest-frame UV (and the brighter stack is undetected in the FIR), we have a reduced dynamic range in \(\beta\). Overall, we find, somewhat counter-intuitively to the consensus colour-magnitude relation (Section 4.2 and Fig. 3), that the rest-frame UV _brightest_ galaxies in REBELS are bluer than the sources at slightly fainter magnitudes. As expected by the canonical IRX-\(\beta\) relation, we find the bluer sources show a reduced IRX, and this is driven primarily due to a reduced \(L_{\rm IR}\) (although as we have previously described, note that galaxy age, dust SED and star-dust geometry can alter the expected relation; e.g. Popping et al., 2017). #### 4.3.1 Comparison to previous studies at \(z\simeq 7\) In Fig. 5 we compare our REBELS results with those from previous studies at \(z\simeq 7\). We find that our stacked results are in good agreement with the previous measurements derived from luminous LBGs in Schouws et al. (2022) and Bowler et al. (2022). There are a handful of sources with redder rest-frame UV slopes and low IRX-values found within the study of Wristok et al. (2022), although we note that the measured \(\beta\)-slopes are relatively uncertain in these cases. Molyneux et al. (2022) found a red rest-frame UV slope and an upper limit on the dust continuum emission in the \(z=6.8\) galaxy A1703-zD1. These studies all assumed a dust temperature of 50 K and \(\beta_{\rm d}=1.5\)-1.6 and hence we expect no appreciable offset to the results of this work due to differences in the chosen rest-frame FIR SED. As shown in Fig. 8 and further discussed in Section 4.4, although the dust temperature in these cases is higher than we assume in this work, the lower \(\beta_{\rm d}\) compensates almost exactly. We additionally show two galaxies where there is a confirmed ALMA detection and robust rest-frame UV slope determination. The recent study of the \(z=7.13\) lensed galaxy A1689-zD1 from Bakx et al. (2021) found an IRX value that is in excess of the majority of the other points and the canonical Calzetti-like relation. The \(L_{\rm IR}\) of this source was derived with the observed best-fitting dust temperature of \(T_{\rm d}=40\) K, with a fixed \(\beta_{\rm d}=2.03\). If a higher dust temperature was assumed (to make the FIR analysis consistent with this work, and the other studies shown in this plot) this would increase the IRX by 0.2 dex (Fig. 8) resulting in an even greater excess. To show this source on the IRX-\(\beta\) relation we take the rest-frame UV colour derived by Watson et al. (2015). Knudsen et al. (2017) argue that A1689-zD1 could be a massive starburst due to the observed [CII] deficit, large \(L_{\rm IR}\) (given the stellar mass) and disturbed morphology. The fact that A1689-zD1, as well as REBELS-25 in Fig. 4, appear in the upper left region of the IRX-\(\beta\) diagram could be due to a spatial offset between the regions emitting in the rest-frame UV and FIR. The FIR emission in this case would be dominated by optically thick emission, while the rest-frame UV colour is measured from unobscured stars leading to an unusually blue colour for the observed IRX (as seen in ULIRGS; Casey et al. (2014), and predicted in theoretical works e.g. Popping et al., 2017; Behrens et al., 2018; Liang et al., 2019; Sommovigo et al., 2020; Ferrara et al., 2022). We also show the galaxy COS-87259 from Endsley et al. 
(2023) that was found within the COSMOS field using an LBG selection, but has been confirmed to be a highly star forming and dust obscured radio-loud AGN at \(z=6.853\). This source is very red, but it has a high derived IRX placing it slightly above the prediction of a Calzetti-like IRX-\(\beta\) relation. #### 4.3.2 Individual REBELS galaxies at \(z>7.7\) In Fig. 6 we show the IRX-\(\beta\) results we derive for the seven galaxies in REBELS that have photometric redshifts at \(z>7.7\). These galaxies were not included in our stacking analysis due to four sources having Band 7 observations, and the relatively uncertain photometric redshifts derived for these sources at the very high-redshift end of the REBELS sample. We compare to the Hashimoto et al. (2023) work that spectroscopically confirmed a group of galaxies (nicknamed RIOA) at \(z=7.88\), with three of the components showing detections in the rest-frame FIR from ALMA. Other \(z\gtrsim 7.5\) sources have been observed with ALMA (e.g. MACS0416_Y1 at \(z=8.31\) and MACS0416-JD at \(z=9.11\); Hashimoto et al., 2018; Bakx et al., 2020) however these other objects do not have published rest-frame UV slopes. Two of the REBELS sources at \(z>7.7\) are detected in the dust continuum (REBELS-4 and REBELS-37; also called XMM-355 and UVISTA-1212 respectively in Bowler et al., 2020), while the other five are not. REBELS-4 is in good agreement with our stacked results at slightly lower redshift, however REBELS-37 shows a redder rest-frame UV colour and deficit in IRX from both the Calzetti and SMC relations shown. We note that the \(\beta\)-slope value is more uncertain at these redshifts, due to the few bands (\(H,K_{s}\)) available for fitting, and the broader uncertainty in photometric redshift leading to a degeneracy between redshift and slope. Excluding REBELS-37, we find good agreement within the (large) errors with our \(z\simeq 7\) stacks, although we note that the majority are upper limits on the dust continuum. Figure 5: The stacked IRX–\(\beta\) points from our analysis of the REBELS sample in comparison to previous results for LBGs at \(z\simeq 7\). We show the stacked results from Schouws et al. (2022) as the orange diamonds. The six individual galaxy results of Bowler et al. (2022) are shown as the open grey circles. Further sources from Wristock et al. (2022), Bakx et al. (2021) and Molyneux et al. (2022) are shown as the purple diamonds, red square and purple diamond respectively. We show the radio-loud AGN identified initially as a bright \(z\simeq 7\) LBG by Endsley et al. (2023) as the black star in the upper right. ### IRX-\(\beta\) from z = 4 - 8 from ALPINE and REBELS To provide a consistent comparison to the \(z>6.5\) results from REBELS, we performed the same analysis on the ALPINE sample in the COSMOS field (see Section 2). We present the ALPINE results compared to the individual object measurements in Fig. 11 with the values presented in Table 11. As the ALPINE sample is both larger and has a broader range of measured stellar masses, we additionally binned in \(M_{\star}\). In our computation of the \(L_{\rm IR}\) and hence IRX for the two samples we assumed the same rest-frame FIR SED, with a dust temperature of \(T_{\rm d}=46\) K and emissivity of \(\beta_{\rm d}=2.0\) following the work of Sommovigo et al. (2022a,b) (see Section 3.4). We compare the ALPINE results with those from REBELS in Fig. 7. As was found for the REBELS sources, the brighter rest-frame UV stacks have lower IRX and appear bluer. 
We also see a strong dependence on stellar mass, with the galaxies at \(\log_{10}(M_{\star}/\rm M_{\odot})\)\(>10\) showing considerably redder colours with \(\beta\gtrsim-1.75\) and a higher IRX by \(\simeq 0.75\). We note here that due to the selection methodology of the initial ALPINE sample, a larger fraction of the sources in the \(z\simeq 5.5\) bin were selected to be Lyman-\(\alpha\) emitters. Taking a rest-frame EW of \(>50\)(25) A as the separation between LBGs and LAEs, 8 (30) percent of the \(z=4.5\) sub-sample are LAEs in comparison to 38 (57) percent of the \(z=5.5\) sub-sample. As LAEs have in general been found to show lower dust attenuation (e.g. Schaerer et al., 2015) this could explain the small offset we see between these two redshift bins. As we discuss further in the next section, we find consistent results for the derived IRX-\(\beta\) relation between the REBELS and ALPINE samples in our analysis, when we use the same modified blackbody SED fitting analysis and \(\beta\) measurement procedure. The data appears to agree with the local starburst relation of Calzetti et al. (2000), with no evidence from our stacked results for a deficit in the relation that could be consistent with SMC-like dust, given our assumptions on FIR SED. By combining the results from the two surveys we are able to measure the IRX-\(\beta\) relationship across a wide redshift range from \(z=4\)-8 from the largest sample of \(\log_{10}(M_{\star}/\rm M_{\odot})\)\(>9\) galaxies available with deep ALMA follow-up. Taking the stacked detections for ALPINE and REBELS, we fit the slope of the IRX-\(\beta\) relation with a given intrinsic rest-frame UV slope, \(\beta_{0}\), according to the formalism presented in McLure et al. (2018) as:

\[{\rm IRX}=1.71\times\left(10^{\,0.4\,({\rm d}A_{1600}/{\rm d}\beta)\,(\beta-\beta_{0})}-1\right) \tag{1}\]

In this formalism, the Calzetti (SMC)-like relation has a slope of \({\rm d}A_{1600}/{\rm d}\beta=1.97(0.91)\) and the 1.71 pre-factor is a constant set by the bolometric correction between the total rest-frame UV emission available to heat the dust and that characterised by \(L_{\rm UV}\). The pre-factor can change if we break the assumption of a dust screen, however for this analysis we keep it constant. For our combined ALPINE and REBELS results we find a best-fitting slope of \({\rm d}A_{1600}/{\rm d}\beta=2.11\pm 0.13\) when assuming \(\beta_{0}=-2.3\) or a shallower slope of the IRX-\(\beta\) relation of \({\rm d}A_{1600}/{\rm d}\beta=1.38\pm 0.09\) when assuming \(\beta_{0}=-2.5\). The intrinsic rest-frame UV slope of our sample is not known, however from BAGPIPES SED fitting analysis we find it to be between \(\beta_{0}=-2.3\) and \(-2.5\) and hence present the results of both fits. As can be seen in Fig. 7, both \(\beta_{0}\) assumptions provide a good description of the data over a broad range in measured rest-frame UV slope. Our results are in general in excess of the previously derived IRX-\(\beta\) relations at \(z>4\) (e.g. from Fudamoto et al., 2020; Schouws et al., 2022).

#### 4.4.1 Comparison to previous results from the ALPINE survey

Our conclusions on the IRX-\(\beta\) relation at \(z=4\)-8 are different to those found in the previous ALPINE analysis presented in Fudamoto et al. (2020), particularly at \(\log_{10}(M_{\star}/\rm M_{\odot})\)\(>10\) where we find a higher IRX by around 0.5 dex when comparing stacks across the same \(M_{\star}\) range. The later studies of Burgarella et al. (2022) and Boquien et al. 
(2022) found similar conclusions to the Fudamoto et al. (2020) study with further analysis of subsets of the ALPINE sample. Fudamoto et al. (2020) present stacked IRX-\(\beta\) relations using bins in \(\beta\) and \(M_{\star}\), finding the results to be consistent. In the following discussion we compare to the \(M_{\star}\) binning results of Fudamoto et al. (2020) as this has been shown to be the least biased estimator of IRX-\(\beta\) (e.g. McLure et al., 2018). This provides the most natural comparison as our points are already stacked in \(M_{\star}\), however we additionally stack in \(M_{\rm UV}\) bins. Hence in the following discussion we combine our \(M_{\rm UV}\) bins at a given \(M_{\star}\). This leads to points that lie mid-way between the two \(M_{\rm UV}\) bins per \(M_{\star}\) bin, as expected.

Figure 6: The individual IRX–\(\beta\) points for the seven REBELS galaxies with photometric redshifts at \(z>7.7\) (open grey diamonds) in comparison to our stacked results at \(z=6.5\)–\(6.9\) (navy circles) and \(z=6.9\)–\(7.7\) (blue squares). We also compare to the \(z=7.88\) galaxy group found within the Abell cluster A2744 lensing field, that have been detected in the dust continuum by Hashimoto et al. (2023). The Calzetti and SMC-like relations are shown as described in the caption of Fig. 4.

To identify the cause of this offset we first directly compared the derived \(\beta\)-slopes, \(M_{\rm UV}\) values and ALMA fluxes for individual objects. We find that our rest-frame UV slopes are on average bluer than those derived in ALPINE by \(\Delta\beta=0.1\), with around half of this difference attributed to the dust law that we assume in SED fitting (we assume Calzetti, whereas Faisst et al. (2020) took the average between results with an SMC and Calzetti dust law). The \(M_{\rm UV}\) values are found to be offset slightly brighter (0.1 mag) in our analysis, which used the COSMOS2020 catalogue instead of the COSMOS2015 data analysed in Faisst et al. (2020), however this has a negligible effect on the derived IRX. For the 20 percent of the ALPINE sample that have dust continuum detections we find good agreement between our raw flux measurements. Both our study and that of Fudamoto et al. (2020) take into account the fact that the dust continuum emission may be extended in the higher mass (\(\log_{10}(M_{\star}/\rm M_{\odot})\)\(>10\)) stacks by using a Gaussian fit to the ALMA data. On closer inspection, the extension found in these stacked images is due to both an intrinsic extension (i.e. higher mass sources have an extended dust distribution) and an artificial extension introduced in the stacking process due to offsets between the rest-frame UV and FIR centroid. In the binning analysis, we determine the \(\beta\) slopes from SED fitting to the stacked optical/NIR photometry, while Fudamoto et al. 2020b take the median \(\beta\) in each bin. However, despite the different method, when comparing the same bins in \(M_{\star}\) we find only a 0.1 difference between the resulting \(\beta\)-slopes (0.2 for the lower mass bin at \(z\simeq 4.5\)), and 0.05 of the difference can be accounted for by the different assumed dust law in the fitting (Section 3). Assuming that the fluxes in the stacks are consistent, the main difference is in the FIR SED assumed in the derivation of the \(L_{\rm IR}\). Fudamoto et al. (2020b) used a scaling factor to compute \(L_{\rm IR}\) that was derived from an empirical FIR template created by stacking _Herschel_ data in the COSMOS field. 
An expanded sample of photometrically selected galaxies over a similar redshift in the ALPINE sample was used in the creation of this template (Bethermin et al., 2020), and it can be approximated with a modified blackbody with a fixed \(\beta_{\rm d}=1.8\) of temperature \(T_{\rm d}=41\pm 1\) K and \(T_{\rm d}=43\pm 5\) K at \(z=4\)-5 and \(z=5\)-6 respectively. While only 3-5 K lower than that assumed in this work, the difference is enough to account for a 0.25 dex difference in the resulting IRX given the same input flux measurement. We mark this offset as an arrow in Fig. 7. This is illustrated in Fig. 8, where we show the offset in IRX expected for changes in \(T_{\rm d}\) and \(\beta_{\rm d}\). This difference, and the slightly bluer rest-frame UV slopes we find, can account for 0.3 dex of the observed difference between our analysis of ALPINE and that presented previously in Fudamoto et al. (2020b). As can be seen in Fig. 11, the ALPINE sample consisted of 80 percent upper limits on the dust continuum emission and hence the exact stacking process could contribute to the remaining difference. Despite not knowing exactly the dust SED for the ALPINE and REBELS samples, we have shown that the two samples have consistent IRX relations when the ALMA measurements are fit with the same modified blackbody assumptions. If the ALPINE sample does indeed show a lower dust temperature to that of REBELS (as expected from the \(T_{\rm d}\)-redshift relation in e.g. Schreiber et al., 2018), then we would recover a lower IRX-\(\beta\) relation for the ALPINE dataset by 0.25 dex. ## 5 Discussion This work presents a consistent analysis of the two most substantial ALMA surveys measuring the dust continuum emission from \(z>4\) LBGs. We have computed the IRX-\(\beta\) relation at \(z\simeq 7\) from REBELS, and compare this to a consistent analysis of the \(z=4.5\) and \(z=5.5\) samples from ALPINE. These surveys targeted galaxies with stellar masses of \(\log_{10}(M_{\star}/{\rm M}_{\odot})>9\) and hence probe the high-mass end of the known galaxy distribution at these redshifts. Even before considering the results from the ALMA observations themselves, we find that the REBELS galaxies are surprisingly blue in the rest-frame UV as compared to the predicted colour of galaxies from the extrapolated colour-magnitude relation found for fainter sources (Fig. 3). Despite being some of the most massive galaxies known at \(z\simeq 7\), the REBELS sources show a flatting, or a potential turnover, in the colour-magnitude relation for galaxies at \(M_{\rm UV}<-21.5\). As discussed in Section 4.2, we do not think this behaviour is due to our sample selection in part due to the fact that predominantly blue \(\beta\)-slopes are found even in the most luminous (and hence highest S/N) galaxies where redder slopes would be robustly measured. Following the arguments presented in e.g. Bowler et al. 2015, 2020 amongst other studies (e.g. Salim and Lee 2012), when considering galaxies found at the bright-end of the rest-frame UV luminosity function (UVLF), the effect of scatter must be considered. If we consider an underlying stellar mass function (SMF) with an exponential cut-off above the characteristic mass, and estimate from this the expected UVLF, given the scatter between \(M_{\star}\) and the \(L_{\rm UV}\), we expect to see a shallower decline in the number density to brighter galaxies. 
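This argument can be illustrated with a toy Monte Carlo: draw masses from a Schechter-like SMF with an exponential cut-off, assign each galaxy a scattered UV attenuation, and compare the attenuation of the massive galaxies that survive a UV-bright cut with that of the full massive population. All numbers below are placeholders chosen only to illustrate the selection effect; they are not fits to a measured \(z\simeq 7\) SMF or attenuation distribution.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2_000_000

# Toy Schechter-like SMF with an exponential cut-off (placeholder parameters).
logm_grid = np.linspace(8.0, 12.0, 161)
log_mchar, alpha = 10.5, -1.8
phi = 10.0 ** ((logm_grid - log_mchar) * (alpha + 1.0)) \
      * np.exp(-(10.0 ** (logm_grid - log_mchar)))
logm = rng.choice(logm_grid, size=n, p=phi / phi.sum())

# Toy mass-to-M_UV mapping: a dust-free relation plus a log-normally scattered
# UV attenuation whose typical value grows with mass (placeholder numbers).
muv_dust_free = -19.5 - 2.0 * (logm - 9.0)
a_uv = rng.lognormal(np.log(0.4 + 0.35 * np.clip(logm - 9.0, 0.0, None)), 0.6)
muv_observed = muv_dust_free + a_uv

massive = logm > 10.3
uv_bright = muv_observed < -22.0
print("median A_UV, all log(M*)>10.3 galaxies:      ",
      round(np.median(a_uv[massive]), 2))
print("median A_UV, log(M*)>10.3 and M_UV<-22 only: ",
      round(np.median(a_uv[massive & uv_bright]), 2))
```

The UV-bright subset returns a markedly lower median attenuation than the massive population as a whole, which is exactly the selection effect described here.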
Because of the steepness of the SMF, galaxies that are found to be bright in the rest-frame UV are necessarily the sources that have low dust attenuation. Galaxies of the same stellar mass, but with a higher than average attenuation would instead be scattered fainter on the UVLF and be lost within the large population of fainter sources. The majority of the REBELS sources were selected from wide-area ground-based data and they represent a very rare population of galaxies that sit brightward of the knee of the UVLF. Hence from the arguments detailed above, we would expect them to show a relatively low dust obscuration given their stellar mass (e.g. as found in Algera et al. 2023b), and hence a lower IRX and bluer \(\beta\). An additional factor that could contribute to the unexpectedly blue colours we observe for the REBELS sources is a geometric offset between the dust and stars within the galaxy (e.g. as predicted by Popping et al. 2017). The UV variability model of Shen et al. (2023) predicts that the rest-frame UV brightest galaxies should be blue, due to their young ages and clearance of dust in the star-burst phase. A clumpy morphology with an offset between the observed young stars stars and dust, leading to relatively unobscured regions, has already been observed in higher-resolution ALMA observations of REBELS galaxies in Bowler et al. (2022) supporting this hypothesis. Figure 7: The IRX–\(\beta\) relation derived in this study from the ALPINE and REBELS surveys at \(z\simeq 4\)–8. The ALPINE points at \(z=4.5\) (\(z=5.5\)) are shown as the orange (red) open squares, while the REBELS points at \(z=6.7\) (\(z=7.3\)) are shown as the blue (light blue) circles. Our best fitting IRX–\(\beta\) relation to these data are shown as the solid blue (light blue) lines for an assumed intrinsic rest-frame UV slope of \(\beta_{0}=-2.3(-2.5)\). We also show the best-fitting IRX–\(\beta\) relation from Schouws et al. (2022) (assuming a \(\beta_{0}=-2.23\)) as the blue dotted line. The best-fitting IRX–\(\beta\) relations assuming a \(\beta_{0}=-2.62\) from Fudamoto et al. (2020b) are shown as the orange (red) dashed lines for \(z=4.5\) (\(z=5.5\)). The difference between our ALPINE results and those of Fudamoto et al. (2020b) can be mostly explained by the different assumed FIR SED. Fudamoto et al. (2020b) assumed \(T_{\rm d}=41\) K, in comparison to the \(T_{\rm d}=46\) K assumed in this work, leading to the difference illustrated in the bottom right as the red arrow. ### Implications for the dust attenuation curve at \(\mathbf{z>4}\) We find that the IRX-\(\beta\) relation derived from a stacking analysis of the ALPINE and REBELS samples are consistent with a Calzetti-like relationship as found at \(z=0\), with our assumed rest-frame FIR SED for both samples. We find no evidence that the sources _on average_ lie below the relation, which could indicate a different attenuation law such as an SMC-like relation (discounting the effect of geometry which tends to make the observed _attenuation_ law appear shallower even in the case of a steeper _extinction_ law). As shown in Fig 7, our results differ from those found using the ALPINE dataset by Fudamoto et al. (2020), who found a deficit in IRX for a given rest-frame UV slope and concluded that an SMC-like attenuation curve was preferred. As discussed in Section 4, the difference between our results and those of Fudamoto et al. 
(2020) is primarily due to the assumed rest-frame FIR SED (with this work using a higher dust temperature by 5 K), with a minor effect of our analysis deriving bluer rest-frame UV slopes from the stacked photometry (\(\Delta\beta=0.1\)). The best-fit relation presented in Fudamoto et al. (2020) assumed an intrinsic \(\beta_{0}=-2.62\) (fits were also presented for redder \(\beta_{0}\)) and showed a tentative evolution in the normalisation, such that the IRX is decreasing at a given \(\beta\)-slope from \(z=4.5\) to \(z=5.5\). We instead find little evolution in the IRX-\(\beta\) relation from \(z\simeq 4\) to \(z\simeq 7.2\) from our analysis. This relies on our assumption that the FIR SED (and intrinsic rest-frame UV slope) remains approximately constant between the samples and redshift ranges, as found by Sommovigo et al. (2022, 2023) which motivated the \(T_{\rm d}\) and emissivity coefficient used in this work for the ALPINE and REBELS datasets. If instead the dust temperature differs between the two samples at above and below \(z=6\), then we would conclude that the REBELS sources lie above the ALPINE galaxies on the IRX-\(\beta\) plane by approximately 0.3 dex. This would potentially indicate a different selection function between the surveys (e.g. we already know that ALPINE targeted more Lyman-\(\alpha\) emitters than REBELS; see Section 5.3). Our results are also higher (particularly at redder \(\beta\)) than the best-fit derived by Schouws et al. (2022) (assuming an intrinsic \(\beta_{0}=-2.23\), at \(T_{\rm d}=50\) K). However the stacked results from that work are consistent within the errors with our findings from REBELS (see Fig. 5). In the range of stellar mass between \(\log_{10}(M_{\bullet}/{\rm M}_{\odot})\)= 9-10, galaxies in both REBELS and ALPINE appear blue (\(\beta\simeq-2\)) and the obscured fraction is 0.4-0.6 (Table 2). For the higher mass galaxies found within ALPINE (as REBELS contains very few galaxies at \(\log_{10}(M_{\bullet}/{\rm M}_{\odot})\)\(>10\) assuming a parametric SFH; although see Topping et al. 2022) we find redder slopes (\(\beta\simeq-1.5\)) and obscured fractions approaching 0.9. Comparing our results to studies that targeted lower mass galaxies there is some evidence for a difference in IRX with galaxy luminosity (or stellar mass). For example the stacking analysis of Bouwens et al. (2016) and Bouwens et al. (2020) derived from the deep ALMA data covering the _Hubble_ Ultra Deep Field showed only upper limits over the broad redshift range \(z=4\)-10. Bouwens et al. (2020) also found evidence for an SMC-like attenuation curve at \(\log_{10}(M_{\bullet}/{\rm M}_{\odot})\)\(<9.25\), while a Calzetti like curve was preferred at higher masses. These results are consistent with our analysis, which supports a Calzetti-like attenuation curve for \(\log_{10}(M_{\bullet}/{\rm M}_{\odot})\)\(>9\) from \(z=4\)-8. Our find that there is a lack of evolution in the attenuation curve from local starburst galaxies up to \(z\gtrsim 7\) could suggest that the conditions required to form Calzetti-like dust are present already 800 Myr after the Big Bang. As shown by various theoretical studies however, taking the observed IRX-\(\beta\) relation as evidence for a particular dust attenuation curve can be problematic due to the myriad of other factors that can influence the observed relation (e.g. Popping et al. 2017; Narayanan et al. 2018). 
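For reference, the Calzetti- and SMC-like curves discussed throughout follow equation (1), and the slope fits quoted in Section 4.4 amount to a one-parameter least-squares fit of that form to the stacked detections. A minimal sketch is given below; the first three data points echo the REBELS stacks in Table 1, while the remaining points and all uncertainties are illustrative placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def irx_of_beta(beta, slope, beta0=-2.3):
    """Equation (1): IRX = 1.71 * (10**(0.4*slope*(beta - beta0)) - 1)."""
    return 1.71 * (10.0 ** (0.4 * slope * (beta - beta0)) - 1.0)

beta_grid = np.linspace(-2.3, -1.0, 50)
irx_calzetti = irx_of_beta(beta_grid, 1.97)   # Calzetti-like slope
irx_smc = irx_of_beta(beta_grid, 0.91)        # SMC-like slope

# Fit the slope to stacked detections (beta, IRX, IRX error); values are placeholders.
beta_obs = np.array([-2.10, -1.79, -2.10, -1.75, -1.55])
irx_obs = np.array([0.78, 2.04, 1.51, 2.3, 4.1])
irx_err = np.array([0.30, 0.60, 0.90, 0.7, 1.2])
popt, pcov = curve_fit(irx_of_beta, beta_obs, irx_obs, p0=[2.0], sigma=irx_err)
print(f"dA1600/dbeta = {popt[0]:.2f} +/- {np.sqrt(pcov[0, 0]):.2f} (for beta_0 = -2.3)")
```

With \(p_{0}\) of length one only the slope is fitted and \(\beta_{0}\) keeps its assumed value, mirroring the two fixed-\(\beta_{0}\) fits presented in Section 4.4.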
The fact that we do not see a significant excess above the Calzetti relation further suggests that on average the REBELS and ALPINE galaxies are not dominated by significant optically thick regions contributing to the \(L_{\rm IR}\), that would boost the measured IRX values (e.g. as seen in the case of ULIRGs; Casey et al. 2014).

#### 5.1.1 Evidence for a high gas-phase metallicity?

Further insight into the origin of our results for massive galaxies can be gained from the works of Shivaei et al. (2020) who found that the dust attenuation curve correlated most strongly with gas-phase metallicity from a detailed study of sources at \(z=2\)-2.5. They found that at \(12+\log{\rm(O/H)}>8.5\) a Calzetti-like attenuation curve was consistent with the data, with a steeper attenuation curve found for lower metallicity sources. Qualitatively our results agree with these \(z\simeq 2\) studies, although the difference in methodology makes it non-trivial to compare quantitatively. In particular, Shivaei et al. (2020) remove sources with very strong emission line strengths ([OIII] \(\lambda\lambda 4959,5007>630\) A) that we inferred to be present in the majority of the REBELS sources (Bouwens et al. 2022). In addition, they compute the IR luminosity using a template from Rieke et al. (2009) for the more massive sources in their sample. These templates are stated to be equivalent to a greybody curve with \(T_{\rm d}=38\)-64 K and \(\beta_{\rm d}=0.7\)-1, which as shown in Fig. 8 would give lower values of the IRX by 0.3-0.4 dex.

Figure 8: The modified blackbody fitting parameters from previous observations and models. We show the dust temperature against the emissivity coefficient, for an optically thin model. The results for the theoretical analysis by Sommovigo et al. (2022a,b) are shown as the purple circle (square) for REBELS (ALPINE). In the ALPINE survey, Bethermin et al. (2020) measured an empirical SED that had equivalent modified blackbody parameters as shown by the blue squares (the \(z=4\) point has the smaller error bar, with the \(z=5\) point showing a higher temperature). For both these studies the \(\beta_{\rm d}\) value was fixed. The contours show the offset computed in the derived IRX value from the parameters used in this work (\(T_{\rm dust}=46\) K, \(\beta_{\rm d}=2.0\)), such that the Bethermin et al. (2020) parameterisation would give a lower IRX by \(0.25\) dex. The results of fitting with a self-consistent (not optically thin) model to \(z=4\)-8 galaxies with multiple FIR data points in Witstok et al. (2023) are shown as the grey open diamonds, demonstrating the current uncertainties and potential intrinsic scatter on these parameters at high redshift.

With these caveats in mind, these previous results would suggest that the ALPINE and REBELS galaxies 
Recent results from early _JWST_ analysis for lower mass galaxies have revealed similar FMRs as at \(z\simeq\) 2 (Nakajima et al., 2023) as well as evidence for a deficit from the FMR in galaxies at \(z\simeq 7\)(Curti et al., 2023). If this deficit is found to hold for galaxies at \(\log_{10}(M_{\star}/\mathrm{M_{\odot}})\ga 9.5\), this would imply a lower metallicity by \(\sim 0.3\) dex for galaxies in our sample given their \(M_{\star}\). Regardless of which of these metallicity calibrations we consider, the derived values of average metallicity quantitatively disagree with the results of Shivaei et al. (2020) that would predict that these sources should show an SMC-like dust attenuation curve at \(12+\log(\mathrm{O/H})<8.5\). Despite this, our results are qualitatively in agreement with a picture where the ALPINE and REBELS sources have higher metallicities than lower-mass/fainter sources at the same redshifts, and hence a Calzetti-like dust attenuation law remains the preferred fit even up to \(z\simeq 7\). With upcoming NIRSpec observations we will be able to directly determine the extension of the FMR relation at \(z=7\) to higher masses, and test whether the sources that show significant dust detections are metal rich in comparison to lower mass (\(\log_{10}(M_{\star}/\mathrm{M_{\odot}})<9\)) galaxies. ### The IRX-\(M_{\star}\) relation at z = 4-8 In principle the IRX-\(M_{\star}\) relation provides a more fundamental physical relationship than the IRX-\(\beta\) plane, where the latter can be affected by large errors on the rest-frame UV slope measurement and geometric effects (e.g. see Faissi et al., 2017; Liang et al., 2019; Sommovigo et al., 2020; Ferrara et al., 2022). The stellar mass represents an integral of all past star-formation activity in the galaxy, and hence is expected at least qualitatively to be correlated with the production of dust (e.g. via stellar dust production and the overall metal enrichment of the ISM; Dayal et al., 2022). Dunlop et al. (2017) showed that stellar mass is a strong predictor of a FIR detection (and thus \(L_{\mathrm{IR}}\); see also McLure et al., 2018; Bouwens et al., 2020). Up to \(z\simeq 3\) there has been in general a good agreement between previous studies of the IRX-\(M_{\star}\) relation, with the results found to be approximately consistent to the local relation (e.g. McLure et al., 2018; Koprowski et al., 2018; Alvarez-Marquez et al., 2019; although see Reddy et al., 2018). At higher redshift, Fudamoto et al. (2020) found a steeper slope of the IRX-\(M_{\star}\) relation at \(z=2.5\)-4 implying a reduced obscured SFR fraction and total SFR than the relations recovered at lower redshifts (at \(\log_{10}(M_{\star}/\mathrm{M_{\odot}})<11\) when the relations cross), a trend that was also recovered in the ALPINE sample (Fudamoto et al., 2020). In Fig. 9 we show the measured IRX-\(M_{\star}\) points from our stacking analysis of the ALPINE and REBELS samples. We find an offset in the measurements from the \(z=3\) results of Koprowski et al. (2018) and Alvarez-Marquez et al. (2019). These studies use empirical templates in their derivation of the FIR luminosity, with the best-fitting templates used in Koprowski et al. (2018) showing a temperature of 40 K. Given the observed relation of \(T_{\mathrm{d}}\) with redshift, and the lack of information on the FIR SED at \(z>6\) and observed changes in the FIR SED with physical properties (e.g. 
\(M_{\star}\), SFR; Alvarez-Marquez et al., 2019), it is not trivial to compare IRX values over the redshift range \(z=2\)-8 and be fully confident of the exact offsets between studies. Nevertheless, we recover an offset of between 0.5-1.0 dex when comparing our \(z>4\) results to those at \(z\simeq 3\), depending on the lower redshift relation assumed. Fitting these data points we derive the following IRX-\(M_{\star}\) relation: \[\log_{10}(\mathrm{IRX})=0.69(\pm 0.12)\log_{10}\left(\frac{M_{\star}}{10^{10} \,\mathrm{M_{\odot}}}\right)+0.40\ (\pm 0.05). \tag{2}\] While the derived slope is only weakly constrained due to the small dynamic range in stellar mass that we probe with the ALPINE and REBELS samples, it is consistent with the \(z\simeq 3\) relations. Further analysis of galaxies at \(\log_{10}(M_{\star}/\mathrm{M_{\odot}})<9\) will be required to confirm if the slope does steepen at \(z\ga 5\). Due to the strong \(M_{\star}\) dependence of obscuration however, these measurements are extremely challenging and must rely on stacking (for example the ASPECS program only found 18 ALMA detections from a sample of 1362 galaxies at \(z=1.5\)-10, with the highest redshift being at \(z=3.7\); Bouwens et al. 2020). Regardless of the exact form of the relation, the offset we observe in the derived IRX for a given \(M_{\star}\) shows that between \(z\simeq 2\) and \(z=4\)-8 the degree of obscured star formation has dropped by a factor of \(\sim 3\)-10. This is despite the sources showing good agreement with the galaxy "main-sequence" (Topping et al., 2022; Algera et al., 2023) and hence they do not show a deficit in total SFR. This finding is consistent with previous results that have derived a lower obscured fraction as a function of \(M_{\star}\) at \(z\ga 5\)(Fudamoto et al., 2020; Gruppioni et al., 2020; Algera et al., 2023), however we do not recover the steepness of these relations. We note that recent results on the dust obscuration as a function of stellar mass from Shapley et al. (2023) using the Balmer decrement measured with _JWST_ have not found evidence for evolution in the degree of dust obscuration between \(z\simeq 2\)-6, in contrast to our results. One explanation for these differing conclusions is that there is an evolution in the relation between continuum (as we measure in this work using the rest-frame UV slope) and the nebular reddening (as measured by Shapley et al., 2023). Future rest-frame FIR observations of samples of massive \(z>7\) galaxies, such as those selected via upcoming wide-area NIR imaging observations from _Euclid_ and _Roman_, will be required to confirm if the reduction in dust obscured star-formation continues to even higher redshifts. ### Caveats A key assumption in this work is that of a fixed dust SED with a dust temperature of \(T_{\mathrm{d}}=46\) K and \(\beta_{\mathrm{d}}=2.0\). These parameters were utilized within the REBELS survey as the fiducial rest-frame FIR SED and were derived by Sommovigo et al. (2022). The dust temperature at \(z>5\) remains uncertain, and recent results have suggested that it may vary between galaxies from \(T_{\mathrm{d}}=35\)-\(90\) K (e.g. Hashimoto et al., 2019; Bakx et al., 2020; Algera et al., 2023). For the emissivity index, Wristok et al. (2023) found a best-fitting value of \(\beta_{\mathrm{d}}=1.8\pm 0.3\), consistent with our assumed model. These results would suggest that a different dust temperature would be required for each source. 
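The sensitivity to these assumptions can be quantified directly: for a fixed measured flux density near rest-frame \(160\,\mu\)m, changing \((T_{\rm d},\beta_{\rm d})\) simply rescales the bolometric correction of the greybody. The sketch below computes the resulting shift in \(\log_{10}({\rm IRX})\) relative to our fiducial \((46\,{\rm K}, 2.0)\); the alternative parameter pairs are illustrative.

```python
import numpy as np
from scipy import constants as sc

def lir_over_lnu(t_dust, beta_d, lam_rest_um=160.0):
    """L_IR(8-1000 um) / L_nu(lam_rest) for an optically thin modified blackbody."""
    def mbb(nu):
        return nu ** (3.0 + beta_d) / np.expm1(sc.h * nu / (sc.k * t_dust))
    nu = np.logspace(np.log10(sc.c / 1e-3), np.log10(sc.c / 8e-6), 3000)  # 1000-8 um
    return np.trapz(mbb(nu), nu) / mbb(sc.c / (lam_rest_um * 1e-6))

ref = lir_over_lnu(46.0, 2.0)   # fiducial SED adopted in this work
for t_d, b_d in [(41.0, 1.8), (50.0, 1.6), (60.0, 2.0)]:
    shift = np.log10(lir_over_lnu(t_d, b_d) / ref)
    print(f"T_d = {t_d:4.1f} K, beta_d = {b_d:3.1f}: Delta log10(IRX) = {shift:+.2f} dex")
```

For the Bethermin et al. (2020)-like parameters (41 K, \(\beta_{\rm d}=1.8\)) this returns a shift of roughly \(-0.25\) dex relative to our fiducial SED, consistent with the offsets quoted in Section 4.4.1 and shown in Fig. 8; warmer SEDs move the inferred IRX in the opposite direction.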
However, given the large errors for individual sources, and the lack of these measurements for the full sample, this analysis is unfeasible at this time. Another important caveat is that the ALPINE and REBELS samples were selected in different ways, and for example the \(z\simeq 5.5\) sample from ALPINE includes a larger fraction of Lyman-\(\alpha\) emitters than REBELS. The REBELS survey has no confirmed Lyman-\(\alpha\) emission stronger than \(EW_{0}=25\) A (Endsley et al., 2022), while in the \(z=5.5\) bin of ALPINE 57 percent of the sources have \(EW_{0}>25\)A. This comparison is complicated by the impact of the neutral IGM as we approach \(z\simeq 7\), however as galaxies selected as Lyman-\(\alpha\) emitters have been shown to have lower dust content than LBGs (e.g. Schaerer et al., 2015; Matthee et al., 2017) this difference could cause a bias to lower IRX values in the ALPINE sample. There is also evidence at lower redshifts that the dust attenuation curve varies between galaxies of the same stellar mass (e.g. Salim & Boquien, 2019; see the review by Salim & Narayanan, 2020). Using a subset of the ALPINE sample, Boquien et al. (2022) derived a range of best-fitting attenuation curves also suggesting that the attenuation law varies significantly between individual galaxies. Narayanan et al. (2018) have predicted using a simulation that the variation in attenuation curves between galaxies decreases with increasing redshift, however further observations are needed to understand the properties of dust within high-redshift galaxies. Another factor to consider, with reference to our inferences about stellar mass dependence, is that we assumed a certain parametric star-formation history in computing these parameters. Topping et al. (2022) has shown that these measurement may underestimate the mass by on average 0.5 dex. This would cause an increased deficit (up to around 0.35 dex, given the best-fit relation) in the IRX-\(M_{\bullet}\) relation to previous studies, however care must be taken to compare similar methodologies in determining \(M_{\bullet}\). Future observations of the REBELS galaxies with _JWST_ (e.g. through program PID1626, PI Stefanon) will constrain the SFH and hence stellar masses with greater accuracy. Finally, we consider the potential effects of sample incompleteness. Due to the rest-frame UV based Lyman-break selection (or Lyman-\(\alpha\) selection for some ALPINE sources), our samples will be incomplete to any highly obscured sources. Such sources have been identified with ALMA, for example the 'UV-dark' galaxies found serendipitously via their [CII] emission only 40-60 pkpc from the central (rest-UV bright) REBELS source in Fudamoto et al. (2021). It is challenging to place these types of galaxies on the IRX-\(\beta\) or IRX-\(M_{\bullet}\) relation as their rest-frame UV and optical are undetected, however it is likely that they are extremely red and massive (log\({}_{10}(M_{\bullet}/\mathrm{M}_{\odot})\)\(\lesssim\) 10-10.5) and with a high IRX. Further study is needed to understand this population of extremely obscured galaxies at \(z\simeq 7\). ## 6 Comparison to Models The attenuation curve and the IRX-\(\beta\) relation have been the subject of many theoretical and simulation analyses (e.g. see recent review by Salim & Narayanan, 2020). Here we discuss the handful of results that focus specifically on predictions for the high-redshift Universe. In a work that was based on the REBELS sample, Dayal et al. 
(2022) provided the first theoretical comparison to the dust properties of the REBELS galaxies. Using the semi-analytic model (SAM) DELPHI, they were able to match the observed dust masses while also reconciling the intrinsic UV luminosity function with the observed (attenuated) luminosity function at \(z=7\). Ferrara et al. (2022) further identified that some REBELS sources show inconsistencies between the rest-frame UV properties (\(\beta\)) and FIR emission, suggestive of spatially offset regions within the galaxies. We recover this result in the form of sources that lie above the typical Calzetti-like IRX-\(\beta\) relation - showing bluer than expected colours for their measured \(L_{\rm IR}\). While Ferrara et al. (2022) argue that this makes the IRX-\(\beta\) relation difficult to utilize for these galaxies, in this work we have found that on _average_ there is a correlation in the IRX-\(\beta\) plane for galaxies in the REBELS sample (that is consistent with a Calzetti-like relation). In Fig. 10 we present the most recent analysis of dust emission from the DELPHI SAM in comparison to our derived IRX-\(\beta\) results from REBELS and ALPINE. We took the output of the model presented in Mauerhofer & Dayal (2023) and computed the rest-frame UV slope using an identical method to that used on the ALPINE and REBELS stacks, by fitting a power law to the resulting predicted SED. Sources were extracted that had \(-24<M_{\rm UV}<-20\), and nebular continuum emission is included in the modelling of the SED. As is evident from the figure, comparing the DELPHI model of Mauerhofer & Dayal (2023) to the results of this work we find a good agreement with the model predictions. The predicted change in IRX and \(\beta\) with increasing stellar mass matches very well with what we find in the data, with the observed galaxies in ALPINE at log\({}_{10}(M_{\bullet}/\mathrm{M}_{\odot})\)\(>10\) lying on top of the model predictions for this mass range. Furthermore, the bluer \(\beta\) values are comparable to those found in the data for galaxies at log\({}_{10}(M_{\bullet}/\mathrm{M}_{\odot})\)\(\simeq 9\), and the lack of evolution we see between \(z=4\)-8 is also recovered in DELPHI. As shown in Fig. 10, the SAM predicts that the IRX-\(\beta\) should increase with redshift, due to the increased optical depth within compact galaxies at the highest redshifts. While there are assumptions made in the computation of the IRX-\(\beta\) in both the models (e.g. through escape fraction of UV photons, dust heating etc.) and in the observations (with the systematic uncertainties in the \(L_{\rm IR}\)determination), it is reassuring that the general trends are present in this basic comparison. Turning now to works that consider hydrodynamical simulations, Narayanan et al. (2018) used the MAFUSA model to predict the attenuation law at \(z=6\). They found a relatively grey attenuation curve, with the scatter between different galaxies becoming reduced at higher redshift due to more consistent stellar ages between sources and an increase in complexity in the star-dust geometry. The predicted curve is very close to that of the Calzetti et al. (2000) curve, however with a prominent 2175A bump. Vijayan et al. (2023) used the FLARES simulation to investigate the expected dust attenuation curve for galaxies at \(z=5\)-10, with a particular focus on examining the effect of star-dust geometry. 
Figure 9: The derived IRX–\(M_{\star}\) values from our consistent analysis of the REBELS and ALPINE datasets. The results from our REBELS analysis are shown as the dark blue (light blue) circles at \(z\approx 6.7\) (\(z\simeq 7.2\)). The ALPINE results are shown as orange (red) open squares for the \(z\simeq 4.5\) (\(z\simeq 5.5\)) sub-samples. The best-fitting relation to these data is shown as the black solid line with the grey shading indicating the \(1\sigma\) confidence regime. Relations from previous studies at \(z=3\) from Koprowski et al. (2018) and Alvarez-Marquez et al. (2019) are shown as the dotted blue and solid light blue lines respectively. The \(z\simeq 4\)–6 relation found by Fudamoto et al. (2020b) is shown as the dashed red line.

For galaxies with a similar SFR to the REBELS and ALPINE sources, they find a dust law with a similar slope to that of the Calzetti et al. (2000) relation, despite the input of an SMC-like _extinction_ law to their simulations. In FLARES, galaxies with a higher SFR show a larger degree of clumpiness, which in turn leads to a larger range in \(A_{\rm V}\) over the galaxy surface and hence a greyer attenuation curve when integrated across the source. The zoom-in cosmological SERRA simulations presented in Pallottini et al. (2022) produce galaxies at \(z\simeq 7.7\) that are in excess of the majority of observations at high redshift (and in excess of the Calzetti-relation). Pallottini et al. (2022) attribute the differences to first, the uncertain FIR SED and hence systematic errors in the derived \(L_{\rm IR}\) from observations (they suggest that a higher dust temperature should be assumed, exceeding 90 K), and second, to potential inaccuracies in the feedback prescription in the modelling and an insufficient spatial resolution to fully resolve the molecular clouds where the majority of the FIR luminosity is produced. Finally, Liang et al. (2019) used the cosmological simulation MassiveFIRE to model the IRX-\(\beta\) relation of \(z=2\)-6 galaxies. They found a relation similar to that found locally in the Calzetti-relation (although as a Milky Way dust attenuation curve was input, this is perhaps not surprising). Liang et al. (2019) directly predict the IRX-\(\beta\) values for the galaxies in their simulations, finding objects that are blue (\(\beta<-1.5\)) and occupy the lower left region of the diagram. They conclude the scatter around the relation is dominated by different intrinsic \(\beta_{0}\) slopes. In conclusion, in general these models predict a dust attenuation law that is similar to the local Calzetti et al. (2000) relationship for galaxies at \(z>6\) with similar stellar masses and SFRs to the REBELS sample. When the models have predicted the IRX-\(\beta\) relation this has been in agreement, or in excess of, what we observe for our combined ALPINE and REBELS samples (Liang et al., 2019; Pallottini et al., 2022; Mauerhofer & Dayal, 2023). We therefore conclude that current models and observational constraints from this work favour a distribution in the IRX-\(\beta\) plane that is consistent with predictions for a Calzetti-like dust attenuation law at \(z=4\)-8, with no evidence for a deficit from the local relation towards what would be predicted by a screen of SMC-like dust.

## 7 Conclusions

In this study we present an analysis of the ALMA large program REBELS, which provided observations of the dust continuum emission for a sample of 49 galaxies at \(z=6.5\)-8. 
We also perform a consistent analysis of the ALPINE large program that targeted galaxies in the redshift range of \(z=4\)-6, to provide a key comparison to the REBELS results and investigate any evolution in the dust emission properties with redshift. The main conclusions of this work are as follows: * When compared to the expected colour from the extrapolated colour-magnitude relation for fainter galaxies, the REBELS sources are bluer than expected by up to \(\Delta\beta=0.5\)-1.0 (depending on the extrapolated relation). These results point to a flattening, or potential turnover, of the colour magnitude relation bright-ward of \(M_{\rm UV}\)= \(-21.5\), that can be understand as a consequence of scatter on an underlying steep galaxy stellar mass function. In this scenario, the REBELS galaxies represent the sub-set of sources at a given \(M_{\star}\) that have low dust attenuation and hence have bright rest-frame UV magnitudes and a blue colour. * When stacking the REBELS sources in the ALMA Band 6 data we find detections at \(>5\sigma\) significance for all but the highest redshift and brightest \(M_{\rm UV}\) bins. We derive the \(L_{\rm IR}\) and IRX for the sample from these results, assuming a modified blackbody curve with a dust temperature of 46 K and \(\beta_{\rm d}=2.0\) as derived in the model of Sommovigo et al. (2022). The IRX-\(\beta\) relation we derive for the REBELS sample with this assumed FIR SED is as expected for a Calzetti-like attenuation law (with an intrinsic rest-frame UV slope of \(\beta_{0}=-2.3\)). In comparison to other studies at \(z\simeq 7\), we find no strong evidence for a systematic deviation below this relation, although large scatter is found between individual sources. * By assuming the same FIR SED for stacks of the ALPINE data (as motivated by the study of Sommovigo et al. 2022), we find that the IRX-\(\beta\) results at \(z\simeq 4.5\) and \(z\simeq 5.5\) produce values that are consistent with our REBELS findings. We therefore find negligible evolution in the IRX-\(\beta\) relation from \(z=4\)-8, and conclude that there is little evidence for a deficit in the IRX-\(\beta\) relation at \(z>4\) as compared to the local starburst results of e.g. Calzetti et al. (2000). Comparisons to previous studies at \(z>4\) again highlight that the assumed FIR SED has a dramatic affect on the derived \(L_{\rm IR}\)and care must be taken in comparing different studies. * We compute the IRX-\(M_{\star}\) relation from our combined ALPINE and REBELS samples, finding a similar slope to previous analyses at \(z=3\), but with a 0.5 dex offset to lower IRX at a given \(M_{\star}\)for our assumed FIR SED. These results corroborate previous findings (e.g. Schouws et al. 2022; Algera et al. 2023) that show that for a given stellar mass, the proportion of obscured star-formation is reduced at \(z>4\) by a factor of \(\gtrsim 3\). * In comparison to models of dust attenuation in \(z\gtrsim 6\) galaxies, we find that in general a Calzetti-like attenuation curve is predicted. In simulations that compute the IRX-\(\beta\) relation we find that the results are in good agreement with our combined ALPINE and REBELS analysis, with few studies predicting very low 'SMC-like' relations. 
Figure 10: The predictions of the DELPHI semi-analytic model of Mauerhofer and Dayal (2023) for the IRX-\(\beta\) relation, in comparison to our results from REBELS and ALPINE. The data points and lines are as shown in Fig. 11. The observed points from ALPINE that are found at \(\beta>-1.75\) correspond to the log\({}_{10}(M_{\star}/M_{\odot})=10\)–11 stacks of these data. The REBELS sources show log\({}_{10}(M_{\star}/M_{\odot})\simeq 9.5\) (Table 2). The results from DELPHI from \(z=4.6\) to \(z=7.9\) are shown as the small dots, with the colour scale corresponding to log\({}_{10}(M_{\star}/M_{\odot})\) as derived in the model. There is a slight evolution within the redshift range, which we highlight with the solid and dotted grey lines from \(z=7.9\) and \(z=4.6\) respectively.

In a detailed comparison to the predictions of the DELPHI SAM, we find very good agreement between this model and our observations over the stellar mass range of \(\log_{10}(M_{\star}/\mathrm{M_{\odot}})=9\)-11. These results indicate that despite complexities and caveats in both the modelling and observation of high-redshift galaxies, qualitatively (and quantitatively for the DELPHI model) there is good agreement between the predicted and measured trends of rest-frame UV colour, dust-attenuated star-formation and stellar mass at \(z=4\)-8. To understand the origin of the effects seen in this work and others, a more detailed view of the measured properties of galaxies (such as more precise \(M_{\star}\), \(\beta\) and gas-phase metallicity) is required. Cycle 1 _JWST_ observations of 12 of the REBELS sample are being obtained as part of the General Observer program PID1626 (PI Stefano). This program will target the sources with the NIRSpec Integral Field Unit, providing gas-phase properties from the rest-frame optical emission lines for this unique sample. The data will also provide refined (and resolved) rest-frame UV slope and \(M_{\star}\) measurements. For the ALPINE survey, Cycle 2 observations have been approved for a subset of the most massive 18 galaxies for NIRSpec IFU from GO program 3045 (PI Faisst). The results of these programs, coupled with ongoing additional follow-up with ALMA to determine the dust temperature (via additional bands) and the spatial distribution of the gas, dust and stars (via higher spatial resolution ALMA observations), will provide additional insights into the evolution of the IRX-\(\beta\) and IRX-\(M_{\star}\) relations at \(z=4\)-8. ## Acknowledgements RAAB acknowledges support from an STFC Ernest Rutherford Fellowship [grant number ST/T003596/1]. RJB acknowledges support from NWO grants 600.065.140.11N211 (vrijcompetitie) and TOPI TOP1.16.057. RS acknowledges support from an STFC Ernest Rutherford Fellowship [grant number ST/S004831/1]. YF acknowledges support from NAOJ ALMA Scientific Research Grant number 2020-16B. YF further acknowledges support from JSPS KAKENHI Grant Number JP19X23419. MA acknowledges support from FONDECYT grant 1211951, ANID+PCI+REDES 190194 and ANID BASAL project FB210003. FC acknowledges support from a UKRI Frontier Research Guarantee Grant [grant reference EP/X021025/1]. JSD acknowledges the support of the Royal Society through a Royal Society University Research Professorship. This work was supported by NAOJ ALMA Scientific Research Grant Code 2021-19A (HI and HABA). HI acknowledges support from JSPS KAKENHI Grant Number JP19K23462. IDL and MP acknowledge support from ERC starting grant 851622 DustOrigin. 
MS acknowledges support from the ERC Consolidator Grant 101088789 (SFEER), from the CIDEGENT/2021/059 grant, and from project PID2019-109592GB-I00/AEI/10.13039/501100011033 from the Spanish Ministerio de Ciencia e Innovacion - Agencia Estatal de Investigacion. JH acknowledges support of the ERC Consolidator Grant 101088676 (VOYAJ) and the VIDI research programme with project number 639.042.611, which is (partly) financed by the Netherlands Organisation for Scientific Research (NWO). EdC gratefully acknowledges the Australian Research Council as the recipient of a Future Fellowship (project FT150100079) and the ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D; project CE170100013). This paper makes use of the following ALMA data: ADS/JAO.ALMA#2017.1.01634.L, ADS/JAO.ALMA#2017.1.00604.S, ADS/JAO.ALMA#2018.1.00236.S, ADS/JAO.ALMA#2018.1.00085.S ADS/JAO.ALMA#2018.1.00085.S ADS/JAO.ALMA#2018.A.00022.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. PD & VM acknowledge support from the NWO grant 016.VIDI.189.162 ("ODIN"). PD warmly acknowledges support from the European Commission's and University of Groningen's CO-FUND Rosalind Franklin program. ## Data Availability The REBELS and ALPINE datasets used in this work have been publicly released, as have all of the optical and NIR imaging utilized. The COSMOS2020 catalogue is public (Weaver et al., 2022). ## References * Algera et al. (2023) Algera H. S., et al., 2023b, MNRAS, 518, 6142 * Alvarez-Marquez et al. (2019) Alvarez-Marquez J., Burgeralla D., Buat V., Ilbert O., Perez-Gonzalez P. G., 2019, A&A, 630, A153 * Ashby et al. (2013) Ashby M. L. N., et al., 2013, ApJ, 769, 80 * Ashby et al. (2018) Ashby M. L. N., et al., 2018, ApJS, 237, 39 * Bakx et al. (2020) Bakx T. J. L. C., et al., 2020, MNRAS, 493, 4294 * Bakx et al. (2021) Bakx T. J. L. C., et al., 2021, MNRAS, 508, L58 * Barisic et al. (2017) Barisic I., et al., 2017, ApJ, 845, 41 * Barrulet et al. (2023) Barrulet L., et al., 2023, MNRAS, 522, 3926 * Behrens et al. (2018) Behrens C., Pallottini A., Ferrara A., Gallerani S., Vallini L., 2018, MNRAS, 477, 552 * Behremin et al. (2020) Behremin M., et al., 2020, A&A, 643, A2 * Boquien et al. (2022) Boquien M., et al., 2022, A&A, 663, A50 * Bouwens et al. (2014) Bouwens R. J., et al., 2014, ApJ, 793, 115 * Bouwens et al. (2016) Bouwens R. J., et al., 2016, ApJ, 833, 72 * Bouwens et al. (2020) Bouwens R., et al., 2020, ApJ, 902, 112 * Bouwens et al. (2022) Bouwens R. J., et al., 2022, ApJ, 931, 160 * Bowler et al. (2015) Bowler R. A. A., et al., 2015, MNRAS, 452, 1817 * Bowler et al. (2017) Bowler R. A. A., Dunlop J. S., McLure R. J., McLeod D. J., 2017, MNRAS, 466, 3612 * Bowler et al. (2018) Bowler R. A. A., Bourne N., Dunlop J. S., McLure R. J., McLeod D. J., 2018, MNRAS, 481, 1631 * Bowler et al. (2020) Bowler R. A. A., Jarvis M. J., Dunlop J. S., McLure R. J., McLeod D. J., Adams N. J., Milwang-Jensen B., McCracken H. J., 2020, MNRAS, 493, 2059 * Bowler et al. (2022) Bowler R. A. A., Cullen F., McLure R. J., Dunlop J. S., Avison A., 2022, MNRAS, 510, 5088 * Burgeralla et al. (2022) Burgeralla D., et al., 2022, A&A, 664, A73 * Calzetti et al. (1994) Calzetti D., Kinney A. L., Storchi-Bergmann T., 1994, ApJ, 429, 582 * Calzetti et al. (2000) Calzetti D., Armus L., Bohlin R. 
C., Kinney A. L., Koornneef J., Storchi-Bergmann T., 2000, ApJ, 533, 682 * Capak et al. (2015) Capak P. L., et al., 2015, Nature, 522, 455 * Carnali et al. (2018) Carnali A. C., McLure R. J., Dunlop J. S., Dave R., 2018, MNRAS, 480, 4379 * Carniani et al. (2017) Carniani S., et al., 2017, A&A, 605, A42 * Casey et al. (2014) Casey C. M., et al., 2014, ApJ, 796, 95 * Cassata et al. (2020) Cassata P., et al., 2020, A&A, 643, A6 * Chabrier (2003) Chabrier G., 2003, PASP, 115, 763 * Cullen et al. (2018) Cullen F., et al., 2018, MNRAS, 476, 3218 * Cullen et al. (2023) Cullen F., et al., 2023, MNRAS, 520, 14 * Curti et al. (2020) Curti M., Mannucci F., Cresci G., Maiolino R., 2020, MNRAS, 491, 944 * Curti et al. (2023) Curti M., et al., 2023, arXiv e-prints, p. arXiv:2304.08516 * Dayal et al. (2022) Dayal P., et al., 2022, MNRAS, 512, 989 * Di Cesare et al. (2023) Di Cesare C., Graziani L., Schneider R., Ginolfi M., Venditti A., Santini P., Hunt L. K., 2023, MNRAS, 519, 4632 * Draine (2003) Draine B. T., 2003, ARA&A, 41, 241 Dunlop J. S., McLure R. J., Robertson B. E., Ellis R. S., Stark D. P., Cirasuolo M., de Ravel L., 2012, MNRAS, 420, 901 * Dunlop et al. (2017) Dunlop J. S., et al., 2017, MNRAS, 466, 861 * Endsley et al. (2021) Endsley R., Stark D. P., Chevallard J., Charlot S., 2021, MNRAS, 500, 5229 * Endsley et al. (2022) Endsley R., et al., 2022, MNRAS, 512, 4248 * Endsley et al. (2023) Endsley R., et al., 2023, MNRAS, 520, 4609 * Faist et al. (2017) Faist A. L., et al., 2017, ApJ, 847, 21 * Faist et al. (2020a) Faist A. L., et al., 2020a, ApJS, 247, 61 * Faist et al. (2020b) Faist A. L., Fudamoto Y., Oesch P. A., Scoville N., Riechers D. A., Pavesi R., Capak P., 2020b, MNRAS, 498, 4192 * Ferrara et al. (2022) Ferrara A., et al., 2022, MNRAS, 512, 58 * Fudamoto et al. (2020a) Fudamoto Y., et al., 2020a, MNRAS, 491, 4724 * Fudamoto et al. (2020b) Fudamoto Y., et al., 2020b, A&A, 643, A4 * Fudamoto et al. (2021) Fudamoto Y., et al., 2021, Nature, 597, 489 * Fujimoto et al. (2016) Fujimoto S., Ouchi M., Ono Y., Shibuya T., Ishigaki M., Nagai H., Momose R., 2016, ApJS, 222, 1 * Gall et al. (2018) Gall C., Hjorth J., 2018, ApJ, 868, 62 * Graziani et al. (2020) Graziani L., Schneider R., Ginolfi M., Hunt L. K., Maio U., Glatzle M., Ciardi B., 2020, MNRAS, 494, 1071 * Gruppioni et al. (2020) Gruppioni C., et al., 2020, A&A, 643, A8 * Harikane et al. (2022) Harikane Y., et al., 2022, ApJS, 259, 20 * Hashimoto et al. (2018) Hashimoto T., et al., 2018, Nature, 557, 392 * Hashimoto et al. (2019) Hashimoto T., et al., 2019, PASJ, 71, 71 * Hashimoto et al. (2023) Hashimoto T., et al., 2023, arXiv e-prints, p. arXiv:2305.04741 * Hodge & de Cunha (2020) Hodge J. A., de Cunha E., 2020, Royal Society Open Science, 7, 200556 * Hygate et al. (2023) Hygate A. P. S., et al., 2023, MNRAS, 524, 1775 * Inami et al. (2022) Inami H., et al., 2022, MNRAS, 515, 3126 * Jarvis et al. (2013) Jarvis J. I., et al., 2013, MNRAS, 428, 1281 * Jones & Stanway (2003) Jones G. T., Stanway E. R., 2003, MNRAS, 525, 5720 * Khusanova et al. (2021) Khusanova Y., et al., 2021, A&A, 649, A152 * Knudsen et al. (2017) Knudsen K. K., Watson D., Frayer D., Christensen L., Gallazzi A., Michalowski M. J., Richard J., Zavala J., 2017, MNRAS, 466, 138 * Koprowski et al. (2018) Koprowski M. P., et al., 2018, MNRAS, 479, 4355 * Labbe et al. (2015) Labbe I., et al., 2015, ApJS, 221, 23 * Laporte et al. (2017) Laporte N., et al., 2017, ApJ, 837, L21 * Lawrence et al. (2007) Lawrence A., et al., 2007, MNRAS, 379, 1599 * Le Fevre et al. 
(2020) Le Fevre O., et al., 2020, A&A, 643, A1 * Lesisowska & Michalowski (2019) Lesisowska A., Michalowski M. J., 2019, A&A, 624, L13 * Liang et al. (2019) Liang L., et al., 2019, MNRAS, 489, 1397 * Loiacon et al. (2021) Loiacono F., et al., 2021, A&A, 646, A76 * Madau & Dickinson (2014) Madau P., Dickinson M., 2014, ARA&A, 52, 415 * Mancini et al. (2016) Mancini M., Schneider R., Graziani L., Valiante R., Dayal P., Maio U., Ciardi B., 2016, MNRAS, 462, 3130 * Matthee et al. (2017) Matthee J., et al., 2017, ApJ, 851, 145 * Mauerhofer & Dayal (2023) Mauerhofer V., Dayal P., 2023, arXiv e-prints, p. arXiv:2305.01681 * McCracken et al. (2012) McCracken H. J., et al., 2012, A&A, 544, A156 * McLure et al. (2018) McLure R. J., et al., 2018, MNRAS, 476, 3991 * Meurer & Heckman (1999) Meurer G. R., Heckman T. M., Calzetti D., 1999, ApJ, 521, 64 * Mohan & Rafferty (2015) Mohan N., Rafferty D., 2015, PyDSF: Python Blo Detection and Source Finder, Astrophysics Source Code Library, record asc1:502.007 (ascl:1502.007) * Molyneux et al. (2022) Molyneux S. J., et al., 2022, MNRAS, 512, 535 * Nakajima et al. (2023) Nakajima K., Ouchi M., Isobe Y., Harikane Y., Zhang Y., Ono Y., Umeda H., Oguri M., 2023, arXiv e-prints, p. arXiv:2301.12825 * Narayanan et al. (2018) Narayanan D., Conroy C., Dave R., Johnson B. D., Popping G., 2018, ApJ, 869, 70 * Novak et al. (2017) Novak M., et al., 2017, A&A, 602, A5 * Oke & Gunn (1983) Oke J. B., Gunn J. E., 1983, ApJ, 266, 713 * Pallottini et al. (2022) Pallottini A., et al., 2022, MNRAS, 513, 5621 * Pannella et al. (2009) Pannella M., et al., 2009, ApJ, 698, L116 * Pannella et al. (2015) Pannella M., et al., 2015, ApJ, 807, 141 * Pierre et al. (2004) Pierre M., et al., 2004, J. Cosmology Astropart. Phys., 2004, 011 * Planck Collaboration et al. (2020) Planck Collaboration et al., 2020, A&A, 641, A6 * Popping et al. (2017) Popping G., Puglisi A., Norman C. A., 2017, MNRAS, 472, 2315 * Reddy et al. (2018) Reddy N. A., et al., 2018, ApJ, 853, 56 * Rieke et al. (2009) Rieke G. H., Alonso-Herrero A., Weiner B. J., Perez-Gonzalez P. G., Blaylock M., Donley J. L., Marcillac D., 2009, ApJ, 692, 556 * Rogers et al. (2013) Rogers A. B., McLure R. J., Dunlop J. S., 2013, MNRAS, 429, 2456 * Rogers et al. (2014) Rogers A. B., et al., 2014, MNRAS, 440, 3714 * Salim & Boquien (2019) Salim S., Boquien M., 2019, ApJ, 872, 23 * Salim & Lee (2012) Salim S., Lee J. C., 2012, ApJ, 758, 134 * Salim & Narayanan (2020) Salim S., Narayanan D., 2020, ARA&A, 58, 529 * Sanders et al. (2021) Sanders R. L., et al., 2021, ApJ, 914, 19 * Schaerer et al. (2015) Schaerer D., Boone F., Zamoiski M., Stagnhu J., Dessauges-Zavadsky M., Finkelstein S., Combes F., 2015, A&A, 574, A19 * Schouwis et al. (2022) Schouwis S., et al., 2022, ApJ, 928, 31 * Schreiber et al. (2018) Schreiber C., Elbaz D., Pannella M., Ciesla L., Wang T., Franco M., 2018, A&A, 609, A30 * Scoville et al. (2007) Scoville N., et al., 2007, ApJS, 172, 1 * Shapley et al. (2023) Shapley A. E., Sanders R. L., Reddy N. A., Topping M. W., Brammer G. B., 2023, arXiv e-prints, p. arXiv:2301.03241 * Shen et al. (2023) Shen X., Vogelsberger M., Boylan-Kolchin M., Tacchella S., Kannan R., 2023, MNRAS, 513, 2 derive errors on the stacked ALMA flux, we also find evidence for an increased scatter within the bins than found in the REBELS sample. ## Appendix B Rebels Individual Values In Table 1 (continued in 2) we present the properties of the individual galaxies in the REBELS sample, as derived for our IRX-\(\beta\) and IRX-\(M_{\bullet}\) analyses.
2306.17503
Correlation-driven non-trivial phases in single bi-layer Kagome intermetallics
Bi-layer Kagome compounds provide an exciting playground where the interplay of topology and strong correlations can give rise to exotic phases of matter. Motivated by recent first principles calculation on such systems (Phys. Rev. Lett. 125, 026401), reporting stabilization of a Chern metal with topological nearly-flat band close to Fermi level, we build minimal models to study the effect of strong electron-electron interactions on such a Chern metal. Using appropriate numerical and analytical techniques, we show that the topologically non-trivial bands present in this system at the Fermi energy can realize fractional Chern insulator states. We further show that if the time-reversal symmetry is restored due to destruction of magnetism by low dimensionality and fluctuation, the system can realize a superconducting phase in the presence of strong local repulsive interactions. Furthermore, we identify an interesting phase transition from the superconducting phase to a correlated metal by tuning nearest-neighbor repulsion. Our study uncovers a rich set of non-trivial phases realizable in this system, and contextualizes the physically meaningful regimes where such phases can be further explored.
Aabhaas Vineet Mallik, Adhip Agarwala, Tanusri Saha-Dasgupta
2023-06-30T09:30:36Z
http://arxiv.org/abs/2306.17503v1
# Correlation-driven non-trivial phases in single bi-layer Kagome intermetallics ###### Abstract Bi-layer Kagome compounds provide an exciting playground where the interplay of topology and strong correlations can give rise to exotic phases of matter. Motivated by recent first principles calculation on such systems (Phys. Rev. Lett **125**, 026401), reporting stabilization of a Chern metal with topological nearly-flat band close to Fermi level, we build minimal models to study the effect of strong electron-electron interactions on such a Chern metal. Using appropriate numerical and analytical techniques, we show that the topologically non-trivial bands present in this system at the Fermi energy can realize fractional Chern insulator states. We further show that if the time-reversal symmetry is restored due to destruction of magnetism by low dimensionality and fluctuation, the system can realize a superconducting phase in the presence of strong local repulsive interactions. Furthermore, we identify an interesting phase transition from the superconducting phase to a correlated metal by tuning nearest-neighbor repulsion. Our study uncovers a rich set of non-trivial phases realizable in this system, and contextualizes the physically meaningful regimes where such phases can be further explored. ## I Introduction The Kagome lattice - built out of corner sharing triangles - presents a rather interesting situation where both the itinerancy of the electrons as well as the effect of electron-electron interactions, can be frustrated. The frustration of the itinerant electrons is manifested through the characteristic flat / nearly-flat bands in the various short range tight-binding models on the Kagome lattice. When the Fermi energy lies in one of these bands of narrow band-width then the electron-electron interactions are expected to play a crucial role in determining the ground state properties of the system. Together with this effect, the presence of spin-orbit coupling in the nearly-flat non-interacting band at the Fermi energy may lead to a non-trivial band topology [1; 2; 3; 4; 5; 6]. This interplay of electron-electron interactions and band topology poses outstanding challenges and has been of much interest, for example, in the context of fractionally filled Landau levels in the quantum Hall systems [7; 8; 9] and Moire graphene [10; 11] more recently. Interestingly, a plethora of recently discovered metallic systems based on the Kagome motif [12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32] provide a wide material basis to realize and explore this physics further. A particularly interesting family of Kagome based metallic systems occur in the binary intermetallics \(M_{3}\)Sn\({}_{2}\), where \(M=\) Mn, Fe, Ni, Cu, Co represents a transition metal[12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35]. These materials form a three dimensional layered structure with the basic motif being a bi-layer Kagome of the \(M\) atoms, as shown in Fig. 1. The sought-after flat bands have been observed in iron compounds [21], in cobalt systems [36; 37], and most recently in manganese based intermetallics [38; 39]. 
The interplay of strong correlations and topological properties is manifested in a wide range of observed exotic phenomena, such as the co-existence of magnetism and anomalous Hall effect seen recently in manganese systems (along with rare earths) [40], and an exotic charge density wave order and superconductivity seen in related antimony compounds.

Figure 1: _Bi-layer Kagome lattice:_ A Kagome bi-layer lattice has a tripartite structure with a unit-cell containing six identical atoms labelled (1-3) (bottom-layer) and (1'-3') (top-layer) and shown by different colors. Inter and intra-unit cell nearest-neighbor hoppings can have different hopping strengths due to breathing anisotropy. The lattice vectors are given by \(\mathbf{d_{1}}=\left\{1,0\right\},\ \ \mathbf{d_{2}}=\left\{\frac{1}{2},\frac{\sqrt{3}}{2}\right\}\). The reciprocal lattice vectors are given by \(\mathbf{b_{1}}=\left\{\frac{\sqrt{3}}{2},-\frac{1}{2}\right\}\frac{4\pi}{\sqrt{3}},\ \ \mathbf{b_{2}}=\left\{0,1\right\}\frac{4\pi}{\sqrt{3}}\).
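To make the flat-band statement above concrete, the short NumPy sketch below diagonalizes the simplest spinless nearest-neighbour tight-binding model on a *single* Kagome layer, using the lattice and reciprocal vectors quoted in the Fig. 1 caption. It deliberately omits the breathing anisotropy, the inter-layer hopping and the spin-orbit coupling that enter the paper's minimal models, so it only illustrates why the Kagome motif hosts a perfectly flat band; the hopping strength \(t=1\) is an arbitrary choice.

```python
import numpy as np

t = 1.0                                                           # NN hopping amplitude (assumed)
d1, d2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])    # lattice vectors (Fig. 1)
b1 = (4 * np.pi / np.sqrt(3)) * np.array([np.sqrt(3) / 2, -0.5])  # reciprocal lattice vectors
b2 = (4 * np.pi / np.sqrt(3)) * np.array([0.0, 1.0])
bonds = [d1 / 2, d2 / 2, (d2 - d1) / 2]                           # half-vectors linking the 3 sublattices

def h_kagome(k):
    """Bloch Hamiltonian of a single Kagome layer in the cosine gauge."""
    c = [np.cos(k @ v) for v in bonds]
    return -2.0 * t * np.array([[0.0, c[0], c[1]],
                                [c[0], 0.0, c[2]],
                                [c[1], c[2], 0.0]])

# diagonalise on a grid that tiles the Brillouin zone
n = 60
ks = [f1 * b1 + f2 * b2
      for f1 in np.linspace(0.0, 1.0, n, endpoint=False)
      for f2 in np.linspace(0.0, 1.0, n, endpoint=False)]
bands = np.array([np.linalg.eigvalsh(h_kagome(k)) for k in ks])   # shape (n*n, 3), sorted per k

for i in range(3):
    print(f"band {i}: min = {bands[:, i].min():+.3f}, width = {np.ptp(bands[:, i]):.2e}")
# the highest band comes out with (numerically) zero width: the Kagome flat band at E = +2t
```

Adding the breathing anisotropy, the second layer and spin-orbit terms broadens this flat band slightly and can endow it with a non-zero Chern number, which is the regime the paper's interacting models target.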
2309.07195
Diffusion models for audio semantic communication
Directly sending audio signals from a transmitter to a receiver across a noisy channel may absorb considerable bandwidth and be prone to errors when trying to recover the transmitted bits. On the contrary, the recent semantic communication approach proposes to send the semantics and then regenerate semantically consistent content at the receiver without exactly recovering the bitstream. In this paper, we propose a generative audio semantic communication framework that frames the communication problem as an inverse problem, therefore being robust to different corruptions. Our method transmits lower-dimensional representations of the audio signal and of the associated semantics to the receiver, which generates the corresponding signal with a particular focus on its meaning (i.e., the semantics) thanks to the conditional diffusion model at its core. During the generation process, the diffusion model restores the received information from multiple degradations at the same time including corruption noise and missing parts caused by the transmission over the noisy channel. We show that our framework outperforms competitors in a real-world scenario and with different channel conditions. Visit the project page to listen to samples and access the code: https://ispamm.github.io/diffusion-audio-semantic-communication/.
Eleonora Grassucci, Christian Marinoni, Andrea Rodriguez, Danilo Comminiello
2023-09-13T13:54:07Z
http://arxiv.org/abs/2309.07195v1
# Diffusion Models for Audio Semantic Communication ###### Abstract Directly sending audio signals from a transmitter to a receiver across a noisy channel may absorb consistent bandwidth and be prone to errors when trying to recover the transmitted bits. On the contrary, the recent semantic communication approach proposes to send the semantics and then regenerate semantically consistent content at the receiver without exactly recovering the bitstream. In this paper, we propose a generative audio semantic communication framework that faces the communication problem as an inverse problem, therefore being robust to different corruptions. Our method transmits lower-dimensional representations of the audio signal and of the associated semantics to the receiver, which generates the corresponding signal with a particular focus on its meaning (i.e., the semantics) thanks to the conditional diffusion model at its core. During the generation process, the diffusion model restores the received information from multiple degradations at the same time including corruption noise and missing parts caused by the transmission over the noisy channel. We show that our framework outperforms competitors in a real-world scenario and with different channel conditions. Visit the project page to listen to samples and access the code: [https://ispamm.github.io/diffusion-audio-semantic-communication/](https://ispamm.github.io/diffusion-audio-semantic-communication/). Eleonora Grassucci, Christian Marinoni, Andrea Rodriguez, and Danilo Comminiello Dept. of Information Eng., Electronics and Telecom., Sapienza University of Rome, Italy Audio Restoration, Generative Semantic Communication, Audio Inverse Problems, Diffusion Models ## 1 Introduction Audio communication is the task of transmitting an audio signal from a sender over a noisy channel that can degrade and corrupt the information up to a receiver that should then retrieve the received content. However, sending the complete signal may absorb considerable bandwidth and recovering the complete bitstream at the receiver may be error-prone. This has always been considered a very tight constraint in wireless communications. Recently, with the upcoming rise of 6G communications, semantic communication frameworks have replaced classical wireless systems. The promising aspects of semantic communication lie in the ability to regenerate content preserving the meaning of the transmission (i.e., the semantics) without necessarily recovery the exact bit sequence [1, 2]. In recent years, few audio semantic communication frameworks have been proposed, encoding the semantics of speech signals with the help of neural networks [3, 4, 5]. Concurrently, generative models have been demonstrated to be powerful and robust tools to enhance semantic communication frameworks [6] due to their ability to generate content from the received semantic information [7], even when extremely degraded and corrupted [8]. In this paper, we address the problem of audio communication over a noisy channel, formulating it as an inverse problem in which the transmission deteriorates and corrupts data while the model tries to restore the original audio or its semantic aspects. Under this formulation, the problem moves to solving an audio inverse problem in which diffusion models excel [9, 10, 11]. To do so, we define a novel audio semantic communication framework, whose core is a latent diffusion model conditioned on textual semantics to enhance the generation results. 
The sender transmits lower-dimensional latent representations of the audio and of its caption to the receiver. The latter solves the inverse problem by restoring the audio from the channel noise and inpainting the missing parts that have been lost in the transmission over the channel. This is done by leveraging the range-null space decomposition that ensures consistency with the inverse problem formulation and realness according to the data distribution [12]. While doing this, the diffusion model leverages the textual semantic information to ensure semantically consistent outputs and improve the quality of generation. We conduct an experimental evaluation on a real-world dataset and we show that the proposed framework is able to denoise speeches and real-world sounds or audio scenes in the case of heavily corrupted received information. Moreover, the proposed framework inpaints meaningful speeches and sounds in audio clips with missing parts although the received semantic information may be corrupted by the noisy channel. Summarizing our contributions: i) To the best of our knowledge, we propose the first diffusion model-based framework for audio semantic communication; ii) We design a reverse sampling procedure to perform multiple restorations at the same time, such as denoising and inpainting even in the case of highly degraded channel conditions; iii) We show the effectiveness of the proposed framework in real-world scenarios, including both speeches and sounds proving its superiority with respect to state-of-the-art comparisons. Figure 1: Results of the proposed framework on the denoising and inpainting tasks performed on low-dimensional representations of audio signals and semantics corrupted by a communication channel. The rest of the paper is organized as follows: in Sec. 2 we formulate the problem setting and derive the proposed framework. Section 3 shows the experimental evidence of the proposed method, while we draw conclusions in Sec. 4. ## 2 Audio Semantic Communication ### Problem Formulation Real-world communication systems face physical challenges due to the communication channel that may distort, corrupt, and lose portions of the transmitted signal. A quantitative way to characterize the amount of noise added to the transmitted content due to the channel conditions is the PSNR, as: \[\text{PSNR}=10\log\frac{P}{\sigma_{c}^{2}}, \tag{1}\] where \(P\) is the signal power and \(\sigma_{c}^{2}\) the channel variance. Lower values of the PSNR represent bad channel conditions and potential heavy data corruption, while high values of the PSNR stand for good transmissions. In addition to the corruption due to the noise, there may be missing portions of the received content due to losses in the case of bad channel conditions. In this scenario, the receiver should be able to fill the gap with semantically-consistent content. Therefore, we can formulate the transmission of a content \(\mathbf{z}\) over the channel as \(\mathbf{y}=\mathbf{A}\mathbf{z}+\mathbf{n}\), where \(\mathbf{n}\sim\mathcal{N}(0,\sigma_{c}^{2}\mathbf{I})\) is the noise added by the channel, and \(\mathbf{A}\) the matrix of the corruption, indicating the missing portions of the transmitted content. The received content \(\mathbf{y}\) is therefore a noisy and masked version of the original transmitted content \(\mathbf{z}\). Consequently, we can handle such communication formulation as an inverse problem and try to solve it with diffusion models. 
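To make the corruption model \(\mathbf{y}=\mathbf{A}\mathbf{z}+\mathbf{n}\) concrete, the following NumPy sketch builds a toy channel that masks a random fraction of the latent entries and adds Gaussian noise whose variance follows the target PSNR of Eq. (1). The latent shape, the masking pattern and the base-10 logarithm are assumptions made for illustration only.

```python
import numpy as np

def transmit(z, psnr_db, lost_frac=0.1, rng=None):
    """Toy channel y = A z + n: A masks a random fraction of entries (lost content),
    n is Gaussian noise with variance set by PSNR = 10 log10(P / sigma_c^2)."""
    if rng is None:
        rng = np.random.default_rng(0)
    p_signal = np.mean(z ** 2)                                  # signal power P
    sigma_c2 = p_signal / 10 ** (psnr_db / 10.0)                # channel noise variance
    mask = (rng.random(z.shape) > lost_frac).astype(z.dtype)    # diagonal of A: 1 kept, 0 lost
    noise = rng.normal(0.0, np.sqrt(sigma_c2), size=z.shape)
    return mask * z + noise, mask

rng = np.random.default_rng(1)
z = rng.normal(size=(8, 256, 16)).astype(np.float32)   # stand-in for a VAE latent (C, L/r, F/r)
y, mask = transmit(z, psnr_db=15.0)                     # a heavily degraded channel condition
```

Lower `psnr_db` values reproduce the harsher channel conditions explored later in Section 3.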
### Audio Semantic Communication Framework We develop the proposed audio semantic communication framework on top of a text-to-audio latent diffusion model [13]. At the sender side, we first extract the mel-spectrogram of the audio waveform, then we encode it by means of the VAE encoder into the latent space. Simultaneously, the text encoder extracts the latent representation of the audio caption. The two lower-dimensional representations are transmitted over the communication channel to the receiver. Figure 2 shows the proposed framework. **Audio Encoder and Vocoder.** The VAE encoder squeezes the mel-spectrogram into a lower-dimensional latent representation \(\mathbf{z}_{0}\in\mathbb{R}^{C\times L/r\times F/r}\), in which \(C\) is the number of channels, \(L\) the time dimension, \(F\) the frequency dimension and \(r\) the compression factor. Using the pretrained VAE [14], we consider the best setting of \(C=8\) and \(r=4\), in which the residual U-Net blocks of the encoder-decoder structure have been trained to maximize the evidence lower bound while minimizing the adversarial loss. **Textual Encoder.** To encode textual captions into latent representations that can be transmitted over the channel and then leveraged by the diffusion model, we employ the pretrained LLM FLAN-T5-Large [15], following [13]. FLAN-T5 has 780M parameters and has been trained on a large-scale chain-of-thought (CoT) and instruction-based dataset. **Latent Diffusion Model.** The core of our framework is a latent diffusion model [16] that receives the corrupted information and solves the inverse problems. The forward diffusion is a Gaussian Markov chain that destroys the data distribution into a standard Gaussian distribution in \(T\) steps with a predefined noise schedule \(0<\beta_{1}<...<\beta_{T}<1\), following the transition probabilities: \[\begin{split} q(\mathbf{z}_{t}|\mathbf{z}_{t-1})=\mathcal{N}(\sqrt{1-\beta_{t}}\mathbf{z}_{t-1},\beta_{t}\mathbf{I}),\\ q(\mathbf{z}_{t}|\mathbf{z}_{0})=\mathcal{N}(\sqrt{\bar{\alpha}_{t}}\mathbf{z}_{0},(1-\bar{\alpha}_{t})\mathbf{I}).\end{split} \tag{2}\] Note that \(\alpha_{t}=1-\beta_{t}\) and that \(\bar{\alpha}_{t}=\prod_{i=1}^{t}\alpha_{i}\). The process output is \(\mathbf{z}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). The underlying model is the U-Net backbone of StableDiffusion, having four encoder blocks, a middle block and four decoder blocks. We adopt the TANGO pretrained latent diffusion model [13] and replace the standard conditional reverse process with a process able to solve inverse problems. We describe it in the next subsection. ### Solving Audio Semantic Communication Inverse Problems Recall that, from the range-null space decomposition, given a linear operator \(\mathbf{A}\), any sample \(\mathbf{z}\) can be decomposed into two parts: i) the range space of \(\mathbf{A}\), and ii) the null space of \(\mathbf{A}\) [17, 18]. Therefore, the sample \(\mathbf{z}\) can be written as \[\mathbf{z}=\mathbf{A}^{\dagger}\mathbf{A}\mathbf{z}+(\mathbf{I}-\mathbf{A}^{\dagger}\mathbf{A})\mathbf{z}. \tag{3}\] Let us consider a generic inverse problem formulation \(\mathbf{y}=\mathbf{A}\mathbf{z}\); the solution to this problem is an audio latent vector \(\hat{\mathbf{z}}\) that satisfies the two constraints: \[\mathbf{A}\hat{\mathbf{z}}=\mathbf{y} \tag{4}\] \[\hat{\mathbf{z}}\sim q(\mathbf{z}), \tag{5}\] which are, respectively, consistency and **realness**. 
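As a quick sanity check of this decomposition, the snippet below verifies Eq. (3) and the consistency constraint of Eq. (4) numerically for a simple masking operator; the operator, its size and the random inputs are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
A = np.diag((rng.random(n) > 0.3).astype(float))   # masking operator: keeps roughly 70% of the entries
A_pinv = np.linalg.pinv(A)                         # its pseudo-inverse
I = np.eye(n)

z = rng.normal(size=n)        # "true" latent
z_bar = rng.normal(size=n)    # arbitrary candidate for the null-space content
y = A @ z                     # noiseless observation

# Eq. (3): range-space part + null-space part reproduces z exactly
assert np.allclose(A_pinv @ A @ z + (I - A_pinv @ A) @ z, z)

# Any z_hat = A^+ y + (I - A^+ A) z_bar satisfies the consistency constraint A z_hat = y (Eq. 4),
# regardless of z_bar; the diffusion model's job is to pick a z_bar that also makes z_hat realistic.
z_hat = A_pinv @ y + (I - A_pinv @ A) @ z_bar
assert np.allclose(A @ z_hat, y)
```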
Figure 2: Proposed diffusion model for audio semantic communication framework. The original audio and the corresponding semantics are encoded and transmitted over the channel. The receiver restores the audio according to its semantics.

Recalling the sample formulation in (3) and applying the operator \(\mathbf{A}\), the range space becomes \(\mathbf{y}\), while the null space becomes \(\mathbf{0}\), since \(\mathbf{A}\mathbf{z}=\mathbf{A}\mathbf{A}^{\dagger}\mathbf{A}\mathbf{z}+\mathbf{A}(\mathbf{I}-\mathbf{A}^{\dagger}\mathbf{A})\mathbf{z}=\mathbf{A}\mathbf{z}+\mathbf{0}=\mathbf{y}\). Therefore, for any inverse problem of this form, we can formally build the solution \(\hat{\mathbf{z}}=\mathbf{A}^{\dagger}\mathbf{y}+\left(\mathbf{I}-\mathbf{A}^{\dagger}\mathbf{A}\right)\bar{\mathbf{z}}\) that satisfies the consistency constraint, whatever \(\bar{\mathbf{z}}\) is. The choice of \(\bar{\mathbf{z}}\), however, determines whether the solution satisfies the realness constraint too. The goal of training is therefore to find the \(\bar{\mathbf{z}}\) such that \(\hat{\mathbf{z}}\sim q(\mathbf{z})\), and the diffusion model can be trained to generate the proper null space \(\left(\mathbf{I}-\mathbf{A}^{\dagger}\mathbf{A}\right)\bar{\mathbf{z}}\) for the range space \(\mathbf{A}^{\dagger}\mathbf{y}\). However, intermediate states \(\mathbf{z}_{t}\) of the reverse process are noisy and this can break the harmony between the range and the null space [12]. To avoid this misalignment, the mean and the variance of the intermediate state \(p(\mathbf{z}_{t-1}|\mathbf{z}_{t},\mathbf{z}_{0})\) can be reparameterized to arrive at the desired output \(\mathbf{z}_{0}\sim q(\mathbf{z})\) as \[\mu_{t}(\mathbf{z}_{t},\mathbf{z}_{0}) =\frac{\sqrt{\bar{\alpha}_{t-1}}\beta_{t}}{1-\bar{\alpha}_{t}}\mathbf{z}_{0}+\frac{\sqrt{\alpha_{t}}(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_{t}}\mathbf{z}_{t} \tag{6}\] \[\sigma_{t}^{2} =\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_{t}}\beta_{t}. \tag{7}\] We can reverse (2) to estimate \(\mathbf{z}_{0}\) from \(\mathbf{z}_{t}\) and from the predicted noise \(\epsilon_{t}=\mathcal{Z}_{\theta}(\mathbf{z}_{t},t)\), and formulate the estimated \(\mathbf{z}_{0}\) as \[\mathbf{z}_{0|t}=\frac{1}{\sqrt{\bar{\alpha}_{t}}}\left(\mathbf{z}_{t}-\sqrt{1-\bar{\alpha}_{t}}\,\epsilon_{t}\right). \tag{8}\] Finally, the corrected estimate \(\hat{\mathbf{z}}_{0|t}\) is computed as \[\hat{\mathbf{z}}_{0|t}=\mathbf{A}^{\dagger}\mathbf{y}+\left(\mathbf{I}-\mathbf{A}^{\dagger}\mathbf{A}\right)\mathbf{z}_{0|t}=\mathbf{z}_{0|t}-\mathbf{A}^{\dagger}(\mathbf{A}\mathbf{z}_{0|t}-\mathbf{y}). \tag{9}\] However, in the case of noisy inverse problems as formulated in Subsection 2.1 for communications, a further noisy term \(\mathbf{A}^{\dagger}\mathbf{n}\) would be introduced in (9), producing final noisy samples. 
Therefore, we can introduce two parameters in the reverse process to adapt the formulation to noisy inputs: \[\hat{\mathbf{z}}_{0|t}=\mathbf{z}_{0|t}-\Sigma_{t}\mathbf{A}^{\dagger}(\mathbf{A}\mathbf{z}_{0|t}-\mathbf{y}), \tag{10}\] \[\hat{p}(\mathbf{z}_{t-1}|\mathbf{z}_{t},\hat{\mathbf{z}}_{0|t})=\mathcal{N}(\mu_{t}(\mathbf{z}_{t},\hat{\mathbf{z}}_{0|t}),\Phi_{t}\mathbf{I}), \tag{11}\] in which \(\Sigma_{t}\) scales the range space correction \(\mathbf{A}^{\dagger}(\mathbf{A}\mathbf{z}_{0|t}-\mathbf{y})\) and \(\Phi_{t}\) scales the noise \(\sigma_{t}\epsilon\) in \(p(\mathbf{z}_{t-1}|\mathbf{z}_{t},\hat{\mathbf{z}}_{0|t})\). The two terms need to satisfy some constraints: i) \(\Sigma_{t}\) has to tend to the identity matrix so as to maximize the consistency through the range space correction \(\mathbf{A}^{\dagger}(\mathbf{A}\mathbf{z}_{0|t}-\mathbf{y})\), while ii) \(\Phi_{t}\) has to guarantee that the noise variance in \(\mathbf{z}_{t-1}\) is equal to \(\sigma_{t}^{2}\) so that it can be removed by the pre-trained model that estimates the noise through \(\mathcal{Z}_{\theta}\). We can approximate \(\mathbf{A}^{\dagger}\mathbf{n}\sim\mathcal{N}(\mathbf{0},\sigma_{\mathbf{y}}^{2}\mathbf{I})\), where \(\sigma_{\mathbf{y}}^{2}\) is the variance of the noise in the received latent vector \(\mathbf{y}\), which is the variance \(\sigma_{c}^{2}\) of the channel noise rescaled according to the original data range. With this approximation, we can simplify \(\Sigma_{t}=\lambda_{t}\mathbf{I}\) and \(\Phi_{t}=\gamma_{t}\mathbf{I}\) [12]. Given that the intermediate state \(\mathbf{z}_{t-1}\) is equal to \[\mathbf{z}_{t-1}=\frac{\sqrt{\bar{\alpha}_{t-1}}\beta_{t}}{1-\bar{\alpha}_{t}}\hat{\mathbf{z}}_{0|t}+\frac{\sqrt{\alpha_{t}}(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_{t}}\mathbf{z}_{t}+\sigma_{t}\epsilon, \tag{12}\] we can satisfy constraint i) by setting: \[\gamma_{t}=\sigma_{t}^{2}-\left(\frac{\sqrt{\bar{\alpha}_{t-1}}\beta_{t}}{1-\bar{\alpha}_{t}}\lambda_{t}\sigma_{\mathbf{y}}\right)^{2}, \tag{13}\] \[\lambda_{t}=\begin{cases}1,\qquad\sigma_{t}\geq\frac{\sqrt{\bar{\alpha}_{t-1}}\beta_{t}}{1-\bar{\alpha}_{t}}\sigma_{\mathbf{y}}\\ \sigma_{t}/\sigma_{\mathbf{y}},\qquad\sigma_{t}<\frac{\sqrt{\bar{\alpha}_{t-1}}\beta_{t}}{1-\bar{\alpha}_{t}}\sigma_{\mathbf{y}}\end{cases} \tag{14}\] and constraint ii) with: \[\left(\frac{\sqrt{\bar{\alpha}_{t-1}}\beta_{t}}{1-\bar{\alpha}_{t}}\lambda_{t}\sigma_{\mathbf{y}}\right)^{2}+\gamma_{t}=\sigma_{t}^{2}. \tag{15}\] With the above formulation, the only parameter that has to be set is \(\sigma_{\mathbf{y}}\), on which the denoising ability of the sampling procedure depends. We discuss it in the next subsection. 
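The sketch below assembles Eqs. (8), (10) and (12)-(15) into a single reverse-diffusion step. It is a schematic NumPy rendering rather than the authors' implementation: the degradation operator and its pseudo-inverse are passed as callables, the noise-predicting backbone is abstracted into `eps_pred`, and in the branch where \(\sigma_t\) is small we pick \(\lambda_t\) so that the extra noise term \(\gamma_t\) vanishes, which is one consistent way of satisfying the two constraints above.

```python
import numpy as np

def reverse_step(z_t, t, eps_pred, y, A, A_pinv, alphas, alphas_bar, sigma_y, rng):
    """One reverse step with the noisy range-null-space correction (sketch of Eqs. 8, 10-15).
    A / A_pinv: callables applying the degradation operator and its pseudo-inverse.
    eps_pred: noise predicted by the text-conditioned diffusion backbone for (z_t, t)."""
    a_t, ab_t = alphas[t], alphas_bar[t]
    ab_prev = alphas_bar[t - 1] if t > 0 else 1.0
    beta_t = 1.0 - a_t
    sigma_t = np.sqrt((1.0 - ab_prev) / (1.0 - ab_t) * beta_t)          # Eq. (7)

    z0 = (z_t - np.sqrt(1.0 - ab_t) * eps_pred) / np.sqrt(ab_t)         # Eq. (8)

    coef = np.sqrt(ab_prev) * beta_t / (1.0 - ab_t)                     # weight of z0 in Eq. (12)
    if sigma_t >= coef * sigma_y:                                       # Eqs. (13)-(14)
        lam, gamma = 1.0, sigma_t**2 - (coef * sigma_y)**2
    else:
        lam, gamma = sigma_t / (coef * sigma_y), 0.0                    # choice that sends gamma to 0

    z0_hat = z0 - lam * A_pinv(A(z0) - y)                               # Eq. (10), range-space correction

    mean = coef * z0_hat + np.sqrt(a_t) * (1.0 - ab_prev) / (1.0 - ab_t) * z_t   # Eqs. (6)/(12)
    return mean + np.sqrt(max(gamma, 0.0)) * rng.normal(size=z_t.shape)
```

Iterating this step from \(t=T\) down to \(t=1\), with \(\epsilon_t\) supplied by the pretrained, caption-conditioned U-Net, yields the restored latent that is finally decoded by the VAE decoder and the vocoder.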
Therefore, we propose to automatically compute the optimal value \(\sigma_{\mathbf{y}}^{\star}\) that adaptive changes depending on the range and on the standard deviation of the received data \(\mathbf{y}\) following: \[\sigma_{\mathbf{y}}^{\star}=(\max\left(\mathbf{y}\right)-\min\left(\mathbf{y} \right))\cdot\sigma_{\mathbf{y}}. \tag{16}\] Equipped with this formulation, the proposed method at the receiver side can automatically compute the optimal value for denoising without requiring any human feedback or knowledge about the channel conditions. This transforms the proposed method in an end-to-end method robust to different and unknown channel conditions. ## 3 Experimental Evaluation We perform the experimental evaluation on AudioCaps [19], a real-world large-scale dataset of about \(46\)k audio clips with human-collected text pairs starting from the AudioSet dataset [20]. We resample all samples to \(16\) kHz and standardize the length to be \(10\) seconds long. We perform two sets of experiments, in a denoising-only scenario and with the inpainting task, both under different channel conditions with PSNR values in the set \([15,17.5,20,30]\). ### Denoising The denoising scenario faces the case in which both the low-dimensional latent representation of the audio signal and the semantics are heavily affected by the noise coming from the communication channel. To mimic the behavior of the channel, we apply Gaussian noise that adheres to the predefined PSNR values. When using our method, we set \(\sigma_{\mathbf{y}}^{\star}\) as defined in Section 2.4, resulting in higher values for lower PSNRs. For instance, \(\sigma_{\mathbf{y}}^{\star}\) is, on average, equal to \(68\) when the PSNR equals \(15\). Moreover, we employ \(1k\) steps in the diffusion process and a guidance scale equal to \(3\). We \begin{table} \begin{tabular}{l|c c|c c|c c|c c} \hline \hline PSNR\(\rightarrow\) & \multicolumn{2}{c|}{15} & \multicolumn{2}{c|}{17.5} & \multicolumn{2}{c|}{20} & \multicolumn{2}{c}{30} \\ Model & SNR\(\uparrow\) & FAD\(\downarrow\) & SNR\(\uparrow\) & FAD\(\downarrow\) & SNR\(\uparrow\) & FAD\(\downarrow\) & SNR\(\uparrow\) & FAD\(\downarrow\) & SNR\(\uparrow\) & FAD\(\downarrow\) \\ \hline N2N & -8.08 & 22.07 & -6.81 & 20.42 & -5.16 & 18.25 & **1.74** & 11.04 \\ Ours & **-2.88** & **21.24** & **-2.63** & **10.87** & **-2.74** & **8.38** & -2.57 & **3.75** \\ \hline \hline \end{tabular} \end{table} Table 1: Denoising results according to SNR and FAD. The proposed framework better denoises received samples according to both metrics in every test we conduct. compare our solution with a U-Net-based approach that extends Noise2Noise (N2N) to the speech denoising task [21]. In particular, we take the original architecture and, to simulate a deteriorated channel, we retrain it on a noisy version of the AudioCaps data set till convergence. Compared to our approach, this method has two principal differences. First, N2N operates directly on the input data, while our model forges on the lower-dimensional latent space and crucially has, therefore, lesser bandwidth requirements. Second, N2N does not employ the semantic information provided by captions to guide the generation process, as ours does instead. We evaluate the approaches with two metrics, the Signal-to-Noise Ratio (SNR) and the Frechet Audio Distance (FAD) [22], in the four levels of PSNR. The SNR quantifies the ratio of the power of the desired signal to the strength of the unwanted noise. 
A higher SNR value indicates a better-denoised audio signal. However, reducing noise is only one side of the coin, as the denoising process can introduce distortions. To account for this fact, we also consider FAD, a reference-free metric that correlates more closely with human perception. As shown in Tab. 1, our approach provides the best results, both in terms of SNR and FAD, accounting for the semantics of the audio samples. Moreover, our method leads to lower band occupancy. An example of the result of the denoising (first row) and then inpainting (second row) tasks is depicted in Fig. 1. ### Inpainting Another scenario we can encounter during transmission through a communication channel is losing part of the information. In this scenario, a receiver equipped with the proposed method can regenerate the missing content in a semantically consistent way. More formally, the receiver obtains the latent representation of the audio signal with a missing portion to retrieve by solving the related inverse problem. The sender also transmits the corresponding semantics, subject to channel noise, which the receiver can leverage to guide the generation process. To reproduce the behavior of the channel, we apply additive white Gaussian noise (AWGN) according to the chosen PSNR values to the semantics, and we mask a 1-second-long section of the audio latent representation to simulate a loss of information. While \(\sigma_{\mathbf{y}}\) should be equal to zero when dealing with non-noisy inverse problems, here we set \(\sigma_{\mathbf{y}}\) according to (16) computed on the caption embeddings. Indeed, we notice that noisy conditioning can, in turn, introduce unknown noise in the reverse diffusion process, producing dirty samples. Therefore, we propose to treat this task as a (slightly) noisy inverse problem as well and jointly perform denoising while inpainting the missing part. We compare our model with two state-of-the-art approaches: Tango [13] + RePaint [23] and AudioLDM [14]. The first consists of replacing the reverse diffusion process of TANGO with one inspired by RePaint [23], meaning that this method shares the same architecture as ours but presents a different sampling procedure. The second comparative method is AudioLDM, a text-to-audio system designed to learn continuous audio representations from CLAP [24] embeddings and capable of performing zero-shot audio inpainting and style transfer. In this case, the architecture is different from ours, thus making it possible to assess the performance of a distinct framework on this task. Since it is not performed natively by AudioLDM, we apply noise to the embeddings used for conditioning the sampling process following the same four PSNR levels. We evaluate the three approaches with the Frechet Audio Distance (FAD) on the entire duration of the audio sample (10 seconds) and on the masked section only (1 second). We refer to these as _All_ and _Imp_ FAD. Indeed, we notice that calculating the metrics focusing on the inpainted part of the audio allows a better estimation of the effectiveness of the methods analysed on the inpainting task. Complementarily, the _All_ FAD takes into account any distortions introduced by the models on known parts. Table 2 reports FAD values associated with the corresponding four selected PSNR values. Our method achieves the best results in all the most challenging configurations, still being able to compete with the state of the art for higher PSNR values. Moreover, we perform a semantic evaluation of the inpainted audio. 
We apply Whisper Audio Captioning V2 [25] to generate captions for audio samples generated with our model (with PSNR=20) and analyse its impact on the semantics. We repeat this process to produce the captions associated with the original sound, thus enabling a fair comparison between our samples and the unmasked ones. Figure 3 shows random captions of original vs. inpainted audio by our method and highlights the consistency of our results. Indeed, our method produces audio with congruous captions with respect to original uncorrupted audio, thus proving that the proposed framework preserves semantics in restored samples. ## 4 Conclusion In this paper, we present a novel generative audio semantic communication framework that addresses the problem of denoising or inpainting the lower-dimensional latent representation of audio samples with the help of semantics. Our solution, which provides better results on the metrics considered, has two remarkable features: (1) it allows efficient use of the communication channel thanks to a reduced amount of information the sender needs to transmit to the receiver; (2) it allows efficient estimation of the original transmitted data by exploiting semantics, even when the channel suffers from high noise or when part of the content is lost. \begin{table} \begin{tabular}{l|c c|c c|c c|c c} \hline \hline PSNR\(\rightarrow\) & \multicolumn{2}{c}{15} & \multicolumn{2}{c}{17.5} & \multicolumn{2}{c}{20} & \multicolumn{2}{c}{30} \\ Model & All\(\downarrow\) & Imp\(\downarrow\) & All\(\downarrow\) & Imp\(\downarrow\) & All\(\downarrow\) & Imp\(\downarrow\) & All\(\downarrow\) & Imp\(\downarrow\) \\ \hline AudioLDM [14] & 2.23 & 14.89 & 2.25 & 14.13 & 2.29 & 13.95 & 2.32 & 12.11 \\ Repaint [23] & 4.98 & 21.83 & 3.02 & 19.84 & 2.95 & 16.21 & 2.44 & 15.01 \\ Ours & **2.14** & **11.95** & **2.16** & **12.52** & **1.98** & **10.37** & **2.08** & **10.33** \\ \hline \hline \end{tabular} \end{table} Table 2: Inpainting results as measured by FAD metrics (the lower the better) on the whole audio (All\(\downarrow\)) and on the inpainted part only (Imp\(\downarrow\)). Our method provides the best results, especially in the case of bad channel conditions. Average over multiple runs. Figure 3: Captions generated by applying the Whisper Audio Captioning model. The left column shows captions on clean audio samples. On the right, the captions are derived from the same captioning model applied to audio samples generated with our approach.
2304.00160
Secure Federated Learning against Model Poisoning Attacks via Client Filtering
Given the distributed nature, detecting and defending against the backdoor attack under federated learning (FL) systems is challenging. In this paper, we observe that the cosine similarity of the last layer's weight between the global model and each local update could be used effectively as an indicator of malicious model updates. Therefore, we propose CosDefense, a cosine-similarity-based attacker detection algorithm. Specifically, under CosDefense, the server calculates the cosine similarity score of the last layer's weight between the global model and each client update, labels malicious clients whose score is much higher than the average, and filters them out of the model aggregation in each round. Compared to existing defense schemes, CosDefense does not require any extra information besides the received model updates to operate and is compatible with client sampling. Experiment results on three real-world datasets demonstrate that CosDefense could provide robust performance under the state-of-the-art FL poisoning attack.
Duygu Nur Yaldiz, Tuo Zhang, Salman Avestimehr
2023-03-31T22:49:01Z
http://arxiv.org/abs/2304.00160v2
# Secure Federated Learning against Model Poisoning Attacks via Client Filtering ###### Abstract Given the distributed nature, detecting and defending against the backdoor attack under federated learning (FL) systems is challenging. In this paper, we observe that the cosine similarity of the last layer's weight between the global model and each local update could be used effectively as an indicator of malicious model updates. Therefore, we propose CosDefense, a cosine-similarity-based attacker detection algorithm. Specifically, under CosDefense, the server calculates the cosine similarity score of the last layer's weight between the global model and each client update, labels malicious clients whose score is much higher than the average, and filters them out of the model aggregation in each round. Compared to existing defense schemes, CosDefense does not require any extra information besides the received model updates to operate and is compatible with client sampling. Experiment results on three real-world datasets demonstrate that CosDefense could provide robust performance under the state-of-the-art FL poisoning attack. ## 1 Introduction The gist of Federated Learning (FL) is to train a model coordinated by a server while preserving the clients' data privacy Zhang et al. (2021). However, this substantial property introduces new challenges. Since the server does not have access to the client data due to privacy concerns, FL is vulnerable to data or model poisoning attacks, in which the attacker send corrupted updates and contaminates the global model. Given the distributed nature of FL, it is challenging to detect and correct these failures under the vanilla FL framework McMahan et al. (2016); Zhang et al. (2022). Several solutions have been proposed to defend the server from model poisoning attacks to relax the security challenge for the FL framework. Server-side robust aggregation approaches aim to detect outliers by inspecting the client updates, and filtering the malicious updates before model aggregation such as Blanchard et al. (2017). Besides completely filtering out before model aggregation, approaches proposed by Xu and Lyu (2021); Cao et al. (2021); Regatti et al. (2021); Fung et al. (2020); Prakash et al. (2020) diminish the aggregation coefficients of the clients that are likely to be malicious. However, existing approaches have some critical drawbacks or unfeasible assumptions as we summarize in Table 1. In this work, we propose CosDetect, a cosine similarity based outlier detection algorithm, to tackle the fundamental issues of existing defense methods. We provide an intriguing finding that the weight inside the last layer of the local model update is more sensitive to the local data distribution than other layers. Based on this crucial observation, we propose that the last layer of local updates from the malicious clients should be outliers compared to the ones from the benign clients. By calculating the cosine similarity of the last layer between each collected model update and the last global model, it is possible to filter the poisoning updates before model aggregation. As shown in Table 1, the proposed cosine similarity based outlier detection scheme, though simple, has equipped CosDetect with multiform merits compared to prior strategies: (1) CosDetect does not require representative benign data at the server to distinguish the malicious updates. It is only built on accessing the global model parameters and the clients' updates. 
(2) CosDetect does not need the precise number of malicious clients per communication round in advance for robust performance. (3) CosDetect does not rely on the previous clients' records to cluster the malicious clients. Instead, CosDetect eliminates the outliers only based on the current round information, which makes our algorithm compatible with client selection and brings more flexibility for the implementation. ## 2 Approach Overview ### Layer-wise Cosine Similarity for Model Weights We first focus on a critical question: _how would the attack be reflected on the server side during FL training?_ Under the vanilla FL setting McMahan et al. (2016), the only information the server holds during the training is the collected model updates and the resulting aggregated model. A previous study Zhao et al. (2020) notes that label information for the training data can be computed analytically from the gradients of the last layer inside the machine learning model under the centralized setting. Following this direction, one intriguing finding from our empirical study is that, compared to other layers, the last layer's weight is more sensitive to the input data distribution. We quantify the similarity of model weights by cosine similarity, which is the dot product of the two vectors divided by the product of their individual norms, as \(cos(\alpha)=\frac{<x,y>}{||x||\cdot||y||}\) where \(\alpha\) is the angle between vectors \(x\) and \(y\). Figure 1 shows the average cosine similarity for each layer in the model across independent clients. In this experiment, ten clients train a four-layer CNN-based model independently without any model synchronization on the MNIST dataset, which has been non-iid partitioned (see Section 3.1 for details). We observe that, as the number of iterations increases, the input-side layers show higher similarity than the output-side layers, and the last layer has the lowest similarity score, because the local data distribution among all clients varies mainly in the label distribution. These observations provide the critical insight that the local data label distribution could be efficiently reflected in the last layer's weight compared to the other layers. ### CosDefense: Filtering the Malicious Model Update via Cosine Similarity Based on the crucial observation we described in the previous section, we propose that the last layer of the local models from the attackers should be outliers compared to the ones from the benign models. Therefore, it is possible to filter the malicious model update on the server by calculating the cosine similarity between each collected model update and the last global model.

\begin{table} \begin{tabular}{l|c c c c} \hline \hline & **Compatibility with** & **Validation Data** & **Information of** & **Client Score** \\ & **Client Sampling** & **at the Server** & **Attacker Number** & **Maintenance** \\ \hline Krum Blanchard et al. (2017) & \(\surd\) & \(\times\) & \(\surd\) & \(\times\) \\ Multi-Krum Blanchard et al. (2017) & \(\surd\) & \(\times\) & \(\surd\) & \(\times\) \\ Median Yin et al. (2018) & \(\surd\) & \(\times\) & \(\times\) & \(\times\) \\ RFFL Xu \& Lyu (2021) & \(\times\) & \(\times\) & \(\times\) & \(\surd\) \\ FoolsGold Fung et al. (2020) & \(\surd\) & \(\times\) & \(\times\) & \(\surd\) \\ FLTrust Cao et al. (2021) & \(\surd\) & \(\surd\) & \(\times\) & \(\times\) \\ ByGARS Regatti et al. (2021) & \(\surd\) & \(\surd\) & \(\times\) & \(\surd\) \\ SageFlow Park et al. (2021) & \(\surd\) & \(\surd\) & \(\times\) & \(\times\) \\ **CosDefense (Our Method)** & **✓** & **✓** & **✓** & **✓** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of CosDefense with existing backdoor defense FL methods.

Figure 1: The average cosine similarity of model weight in various layers between one client and the other nine clients. All 10 models are trained independently for 1000 iterations without synchronization.
When calculating the cosine similarity between the global model weights and a local model update, we have the following: \[cos(\alpha_{i})=\frac{<\theta_{t},g^{i}_{t+1}>}{||\theta_{t}||\cdot||g^{i}_{t+1}||}=\frac{<\theta_{t},\theta^{i}_{t+1}-\theta_{t}>}{||\theta_{t}||\cdot||\theta^{i}_{t+1}-\theta_{t}||}=\frac{<\theta_{t},\theta^{i}_{t+1}>-<\theta_{t},\theta_{t}>}{||\theta_{t}||\cdot||\theta^{i}_{t+1}-\theta_{t}||} \tag{1}\] where \(\alpha_{i}\) denotes the angle between the global model weights (\(\theta_{t}\)) and the local model update of client \(i\) (\(g^{i}_{t+1}\)). If all the participating clients are benign nodes, as the communication round \(t\) goes to infinity, the global model should converge to an optimal point based on the received local model updates. Therefore, the cosine similarity between the global model weights and the local model updates tends to decrease during the training, as the difference between \(\theta_{t}\) and \(\theta^{i}_{t+1}\) (the local model of client \(i\)) becomes smaller for each client. However, the cosine similarity scores from malicious clients do not follow this trend. As their aim is to prevent convergence, their local optimization direction differs from that of the benign client models or the global model. As a result, the difference between \(\theta_{t}\) and \(\theta^{i}_{t+1}\) for malicious nodes becomes larger than for benign nodes, making the cosine similarity score between a malicious client update and the global weights tend to increase during the training. To verify this intuition, we trace the average cosine score of the last-layer local updates and that of the global model under benign and poisoning attack cases (Figure 2). In both cases, the total client number is 100, and the sampling rate is 0.1 in each round. For the poisoning attack case, we launch the IPM attack Xie et al. (2019) at round 200, with 30% of the participants being malicious clients. For a better illustration, Figure 2 is smoothed by a moving average filter with window size 40. The average cosine similarity increases sharply as the attack happens and is much higher than the scores under the all-benign case, which confirms our speculation. We observed that the cosine similarity between the global model parameters and the benign local model updates tends to become smaller during the training, which indicates that the direction of benign updates is less than perpendicular to the global model, making it move toward a different direction in each round. However, the average cosine similarity scores increase after the attack, indicating that the cosine similarity scores from malicious clients are much larger than the scores from benign clients. The large cosine similarity values indicate that malicious client updates are aligned with the global model parameters and do not update the global model in different directions but keep it still. As a result, if there are enough malicious clients in the system, their updates considerably diminish the benefit of benign client updates, preventing the convergence of the global model. 
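A minimal sketch of this last-layer indicator is given below: it computes the Eq. (1) score for each sampled client and flags those whose score sits far above the round average, which is the filtering idea formalized as CosDefense in the next section. The flattening of the last layer, the z-score threshold and the flag criterion are illustrative assumptions, not the authors' exact clustering rule (described in their Appendix C).

```python
import numpy as np

def last_layer_scores(global_last_w, client_updates_last_w):
    """Eq. (1) applied to the last layer only: cosine similarity between the current
    global weights theta_t and each client's update g_{t+1}^i = theta_{t+1}^i - theta_t."""
    g = global_last_w.ravel()
    scores = []
    for upd in client_updates_last_w:
        u = upd.ravel()
        scores.append(float(g @ u / (np.linalg.norm(g) * np.linalg.norm(u) + 1e-12)))
    return np.array(scores)

def filter_clients(scores, z_thresh=1.0):
    """Keep clients whose score is not far above the round average (assumed threshold)."""
    mu, sd = scores.mean(), scores.std() + 1e-12
    return np.where((scores - mu) / sd < z_thresh)[0]   # indices passed on to any robust aggregator

# toy round: 10 sampled clients, 3 of them send updates aligned with the global weights
rng = np.random.default_rng(0)
theta_last = rng.normal(size=(10, 128))                              # last-layer global weights
updates = [rng.normal(size=(10, 128)) for _ in range(7)]             # benign-looking updates
updates += [theta_last + 0.05 * rng.normal(size=(10, 128)) for _ in range(3)]  # suspicious ones
print("kept clients:", filter_clients(last_layer_scores(theta_last, updates)))
```

Because the flagged clients are simply dropped before aggregation, this kind of filter can be placed in front of FedAvg or any other aggregation rule, as the next section notes.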
Following these directions, we propose CosDefense, a server-side clustering-based defense method, as shown in Appendix C. In each round \(t\), after the server receives all the updates from the sampled clients, it calculates the cosine similarity between the last layer's global weights and the last layer's local update for each client. Then, the server clusters clients based on their cosine similarity scores as either malicious or benign. If the cosine similarity score of a client is much higher than the others, the server labels that client as malicious for round \(t\) and excludes that client from aggregation. CosDefense only detects malicious clients before aggregation and excludes them, allowing the server to perform any aggregation method with the benign client updates. Hence, the CosDefense algorithm is compatible with any aggregation method. ## 3 Evaluations ### Experiment Setup We conduct the evaluations on three datasets and two models: MNIST LeCun et al. (1998) and Fashion-MNIST Xiao et al. (2017) with a four-layer CNN-based model following the previous related work Li et al. (2022), and CIFAR-10 Krizhevsky (2009) with the ResNet-18 model He et al. (2015). We generate the non-iid partition for all datasets following previous FL works Li et al. (2022); Fang et al. (2019) with default \(q\) as 0.5, where a higher \(q\) represents a higher non-iid degree. Figure 2: The trace of the average cosine similarity score of the last layer's weight between all received local updates and the global model. We compare our defense method with two representative robust aggregation rules: Krum Blanchard et al. (2017) and Clipping-Median. We do not include baselines such as FLTrust Cao et al. (2021) or RFFL Xu & Lyu (2021) because they either need validation data on the server or are not compatible with client sampling. Further details of the baseline implementation and parameter selection can be found in Appendix D. ### Defense Performance with the State-of-the-Art Model Poisoning Attack To evaluate the defense performance, we evaluate CosDefense against the state-of-the-art model poisoning attack method, the Inner Product Manipulation (IPM) attack Xie et al. (2019). In the following experiments, we let the attacker control 30% of the participants as malicious nodes, and the attack is launched at round 200. The server randomly samples 10 clients from 100 for local model training in each communication round. **Evaluation Results:** Figure 3 shows the accuracy curve over 1000 communication rounds for Krum, Clipping-Median, and CosDefense on MNIST, Fashion-MNIST, and CIFAR-10. The detailed numerical results and an ablation study on the non-iid degree are shown in Appendix E. We have three observations from the results. (1) As Krum and Clipping-Median use all layers of the updates, the results of CosDefense indicate that the last layer's similarity between the collected model update and the global model is more sensitive to attacks, which can effectively be used to filter out malicious model updates. (2) After the IPM attack begins, all the defense strategies drop significantly in accuracy. However, the proposed CosDefense strategy has a much faster convergence speed and a shorter recovery time to resume learning compared to Krum and Clipping-Median.
(3) For the most challenging dataset, CIFAR-10, after the IPM attack happens, CosDefense not only recovers the best accuracy achieved before round 200 but also shows an increasing trend in the curve, while the other defense baselines cannot even reach their best pre-attack accuracy. **Impact of the number of attackers.** Some previous works on untargeted model poisoning assume that there is a large fraction of attackers among the participants Fang et al. (2019); Xie et al. (2019). Therefore, we also investigate the impact of the number of attackers on defense performance. In this section, we conduct experiments on non-iid partitioned MNIST with \(q=0.5\), and vary the number of attackers from 10% to 40% of the total participants. Results of this study are provided in Figure 4, showing the final global model accuracy of the defense methods under various settings. We observe that the two defense baselines remain robust when the ratio of attackers is less than 30%. As the ratio reaches 30%, both Krum and Clipping-Median collapse in accuracy and fail to protect model convergence, while CosDefense still provides robust accuracy performance. One notable point is that CosDefense achieves the best accuracy compared to the other baselines across all ratios of attackers. The experiment results demonstrate that CosDefense is robust and reliable under both small-scale and large-scale attack scenarios. Figure 4: Accuracy performance on the MNIST dataset under different numbers of attackers performing IPM attacks. Figure 3: Accuracy performance over 1000 rounds for Krum, Clipping-Median, CosDefense, and FedAvg on MNIST (left), FMNIST (middle), and CIFAR-10 (right) under IPM attack. ## 4 Conclusion We presented CosDefense, a cluster-based defense scheme that filters malicious updates out of the model aggregation on the server. Experiment results on real-world datasets demonstrate that CosDefense can provide robust performance under the state-of-the-art FL poisoning attack. We discuss the limitations and potential future work in Appendix F. ## 5 Acknowledgement This material is based upon work supported by Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0156, ARO award W911NF1810400, ONR Award No. N00014-16-1-2189. The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
2309.14102
Normalization of direct citations in publication-level networks: Evaluation of six approaches
Clustering of publication networks is an efficient way to obtain classifications of large collections of research publications. Such classifications can be used to, e.g., detect research topics, normalize citation relations, or explore the publication output of a unit. Citation networks can be created using a variety of approaches. Best practices to obtain classifications using clustering have been investigated, in particular the performance of different publication-publication relatedness measures. However, evaluation of different approaches to normalization of citation relations has not been explored to the same extent. In this paper, we evaluate five approaches to normalization of direct citation relations with respect to clustering solution quality in four data sets. A sixth approach is evaluated using no normalization. To assess the quality of clustering solutions, we use three measures. (1) We compare the clustering solution to the reference lists of a set of publications using the Adjusted Rand Index. (2) Using the Silhouette width measure, we quantify the extent to which publications have relations to other clusters than the one they have been assigned to. (3) We propose a measure that captures publications that have probably been inaccurately assigned. The results clearly show that normalization is preferred over unnormalized direct citation relations. Furthermore, the results indicate that the fractional normalization approach, which can be considered the standard approach, causes inaccurate assignments. The geometric normalization approach has a similar performance as the fractional approach regarding Adjusted Rand Index and Silhouette width but leads to fewer inaccurate assignments. We therefore believe that the geometric approach may be preferred over the fractional approach.
Peter SjΓΆgΓ₯rde, Per Ahlgren
2023-09-25T12:50:58Z
http://arxiv.org/abs/2309.14102v3
# Normalization of direct citations in publication-level networks: Evaluation of six approaches ###### Abstract Clustering of publication networks is an efficient way to obtain classifications of large collections of research publications. Such classifications can be used to, e.g., detect research topics, normalize citation relations, or explore the publication output of a unit. Citation networks can be created using a variety of approaches. Best practices to obtain classifications using clustering have been investigated, in particular the performance of different publication-publication relatedness measures. However, evaluation of different approaches to normalization of citation relations have not been explored to the same extent. In this paper, we evaluate five approaches to normalization of direct citation relations with respect to clustering solution quality in four data sets. A sixth approach is evaluated using no normalization. To assess the quality of clustering solutions, we use three measures. (1) We compare the clustering solution to the reference lists of a set of publications using the Adjusted Rand Index. (2) Using the Sihouette width measure, we quantity to which extent the publications have relations to other clusters than the one they have been assigned to. (3) We propose a measure that captures publications that have probably been inaccurately assigned. The results clearly show that normalization is preferred over unnormalized direct citation relations. Furthermore, the results indicate that the fractional normalization approach, which can be considered the standard approach, causes inaccurate assignments. The geometric normalization approach has a similar performance as the fractional approach regarding Adjusted Rand Index and Silhouette width but leads to fewer inaccurate assignments. We therefore believe that the geometric approach may be preferred over the fractional approach. \({}^{\text{\textdagger}}\)Health Informatics Centre, Department of Learning, Informatics, Management and Ethics, Karolinska Institutet, Stockholm, Sweden \({}^{\text{\textdagger}}\)University library, Karolinska Institutet, Stockholm, Sweden \({}^{\text{\textdagger}}\)Department of Statistics, Uppsala University, Uppsala, Sweden ORCID: \({}^{\text{\textdagger}}\)[https://orcid.org/0000-0003-4442-1360](https://orcid.org/0000-0003-4442-1360) \({}^{\text{\textdagger}}\)[https://orcid.org/0000-0003-0229-3073](https://orcid.org/0000-0003-0229-3073) Email: [email protected]; [email protected] Corresponding author: Peter Sjogarde, University Library, Karolinska Institutet, 17177 Stockholm, Sweden ## 1 Introduction Constructing classifications of research publications by the use of clustering in citation networks is an efficient way to detect research topics or, at a more aggregate level, research specialties in very large publication collections. Such classifications provide possibilities to study the research landscape. Bibliometric studies of citation networks have a rather long history, starting over half a century ago (e.g., de Solla Price, 1965; Garfield et al., 1964). Large publication-level classifications have been around for about 10 years. Several papers have investigated best practices for clustering of publications (e.g., Ahlgren et al., 2020; Ahlgren & Jarneving, 2008; Boyack & Klavans, 2010, 2020; Klavans & Boyack, 2017; Sjogarde & Ahlgren, 2018, 2020; Velden et al., 2017; Waltman et al., 2020). 
However, to our best knowledge, no evaluation of normalization approaches to direct citations has been performed. Evaluation of such approaches is the focus of this paper. In 2012, Waltman and van Eck proposed a methodology to construct an hierarchical publication-level classification of research publications in a large citation network (Waltman & van Eck, 2012). The development of modularity-based optimization algorithms and improved computational capacity had made such approaches possible (Newman, 2004; Newman & Girvan, 2004). The initial modularity-based approaches have been improved during the last decade, both regarding efficiency and the quality of the obtained clustering solutions (Traag et al., 2011, 2019; Waltman & van Eck, 2013). In the field of scientometrics, quite a lot of research has been devoted to comparing clustering solutions obtained using different publication-publication relatedness measures. Direct citations have been compared to indirect approaches that use co-citations and bibliographic coupling (Boyack et al., 2020; Boyack & Klavans, 2020; Klavans & Boyack, 2017; Waltman et al., 2020, 2020). Expanding the direct citation approach using citations external to a publication set of interest has been shown to increase the quality of clustering solutions (Ahlgren et al., 2020; Boyack & Klavans, 2014). There are also indications that global models perform better than local models (Boyack, 2017). However, none of these studies investigate the use of different approaches to normalization of raw citation-based measures (like number of co-citations). Normalization of citation relations has mostly been discussed in the context of journal citation networks and indirect citation relations. It was early recognized that the size of journals influences the number of relations (Narin et al., 1972). Leydesdorff (1987) clustered journals using Pearson's \(r\) to normalization of co-citation relations. The use of Pearson's \(r\) was, though, criticized by Ahlgren et al. (2003), who pointed out drawbacks of this approach. Boyack et al. (2005) compared different relatedness measures in a journal citation network using the Web of Science subject categories as a baseline. With respect to inter-citation frequencies, they preferred the Jaccard normalization approach based on its scalability, the resemblance of the resulting clusters with the Web of Science subject categories and an assessment of visualizations created by the use of different normalization approaches. However, they underscore that the cosine approach to normalization performed just as well as the Jaccard normalization in statistical terms. In publication-level networks, normalization of direct citations has not been much discussed, and to our best knowledge, no study has (as indicated above) evaluated the use of different normalization approaches to direct citations in large-scale networks of this kind. Waltman and van Eck (2012) proposed a normalization approach that normalize each citation relation with the total number of relations of the publication (see Section 4.1.2). This approach has been used in several studies (Ahlgren et al., 2020; Boyack & Klavans, 2020; Sjogarde, 2022; Sjogarde & Ahlgren, 2018, 2020). However, it has been recognized that clustering methodologies sometimes create loosely connected clusters and results that are less intuitive or even undesirable (Held, 2022; Held et al., 2021; Held & Velden, 2022). In this contribution, we restrict the analysis to direct citations. 
Other approaches such as extended direct citations, bibliographic coupling or textual similarity would probably be preferable in a real analysis setting of the datasets used by us, because of the modest sizes of the datasets and the high number of publications with few relations (cf. Section 3 below). However, the sole purpose of our analysis is to compare the performance of different approaches to normalization of direct citations, and we are particularly interested in how the approaches perform in sparse areas of the networks. We evaluate six approaches to normalization of direct citations with respect to clustering solution quality. The paper is organized as follows. In the next section, we explain the purpose of normalizing direct citation relations when clustering publications. Furthermore, we describe the most commonly used approach, and introduce an observed problem related to this approach. In Section 3, we present the data used in the study. The methods section (Section 4), treats the investigated normalization approaches, the clustering approach used in this study, and the evaluation methodology. We present the results of the study in Section 5. In the last section, we discuss the results and present some conclusions. ## 2 Direct citations: the need for normalization, and a normalization problem Clustering of publications in citation networks is influenced by some properties of citations. First of all, a citation is a directed relation between two publications. A citation occurs when one (newer) publication refers to an (older) publication. Older publications generally have more citations than newer publications, since they have had more time to be cited. Consequently, older publications generally have more citation relations in a direct citation network. Secondly, the distribution of citations over publications is highly skewed, meaning that a low share of the publications receive a high share of the citations, and a large share of the publications receive few or no citations. Lastly, the referencing practices vary between fields, regarding both the average quantity of references in the publications and the age of the referenced literature. These variations result in different density of citation relations among fields. If no normalization is performed, it is likely that clustering procedures are biased towards old publications, highly cited publications, and fields with high density of citation relations (Sjogarde, 2023). Normalization has been performed to correct for the biases indicated above. Waltman and van Eck (2012) proposed a normalization procedure that normalizes the citation relation between two publications (say \(i\) and \(j\)) to the total number of relations of \(i\). We refer to this approach as "Fractional" approach (for further description, see Section 4.1.2). The fractional approach is probably the most widely used approach for normalization of direct citation relations within the field of scientometrics. Nonetheless, to the best of our knowledge, it has not been empirically assessed and compared to other approaches. Furthermore, we have reasons to believe that this approach may not be the best alternative, at least in some circumstances. In the following, we will describe such a circumstance and illustrate a problem, which is related to the fractional normalization approach and which we address in this paper. Figure 1 shows a cluster belonging to a clustering solution obtained by Sjogarde (2022). 
The fractional approach to normalization of direct citations and the Leiden clustering algorithm, the latter in combination with the Constant Potts Model quality function, were used to obtain the solution in a large direct citation network of publications. Nodes represent publications, and node size corresponds to total number of citation relations (including relations outside the cluster). Edges represent citation relations, and edge thickness corresponds to edge weights, here based on normalized citation relations. Consider the node in red color in the top of the figure, say \(i\). This node has 51 relations in total, but only one of these within the cluster. The relation within the cluster has a very high weight, however, because the node related to the red one, say \(j\), has only a few relations, namely four of which three falls within the cluster. In the fractional approach, the edge weight for \(i\) and \(j\) is equal to (1/4+1/51)/2 \(\simeq\) 0.13, i.e., the average of the two normalized citation relations1. Further, the other 50 nodes related to \(i\) have 23 to 3,437 relations. Let us denote the node with 23 relations as \(k\). The edge weight for \(i\) and \(k\) is approximately 0.03 ((1/23+1/51)/2). This means that the edge weight for \(i\) and \(j\) is about four times higher than the weight between \(i\) and \(k\). In the clustering process, the high relation weight for \(i\) and \(j\) yields that the clustering algorithm is rewarded (with respect to the quality function) for assigning \(i\) and \(j\) to the same cluster. We find it concerning, though, that cluster membership of publications with many relations can be determined by one or a few publications with a small number of relations. It can be noted that 12 of the publications related to \(i\) belong to another cluster. This cluster may be a better alternative for the assignment of \(i\). Now, one way to reduce relative differences of the indicated kind would be to normalize a direct citation against the geometric mean of the total number of relations of the two publications. With regard to the nodes \(i,j\) and \(k\), this yields an edge weight equal to \(1/\sqrt{4\times 51}\)\(\approx\) 0.07 for \(i\) and \(j\), whereas the corresponding weight for \(i\) and \(k\) is equal to \(1/\sqrt{23\times 51}\)\(\approx\) 0.03. By using a geometric mean approach, the edge weight for \(i\) and \(j\) is about 2.3 times higher than the corresponding weight for \(i\) and \(k\), a substantial reduction from four. Indeed, the geometric mean approach is one of the approaches to normalization of direct citations that we evaluate in this work. We need to point out that the cluster in Figure 1 should not be seen as representative, but as an illustration of a problem that seem to occur in some of the small clusters. However, the extent of this problem is hard to estimate since it is not easily measured. _Figure 1: The fractional approach. An example of a cluster with a node (in red color) with many relations outside the cluster and exactly one relation within the cluster._ ## 3 Data We used four sets of publications for the evaluation of approaches to normalization of direct citations between publications. Each set was retrieved by searching the in-house version of PubMed/MEDLINE at Karolinska Institutet for a Medical Subject Heading (MeSH). The MeSH terms were selected from different branches in the MeSH tree, and we aimed to pick terms with high semantic dissimilarity. 
Publications were retrieved for each MeSH term, including their sub-terms. If the MeSH term was located in several places in the tree, we used sub-terms from all branches. We only considered terms as "major topics" (terms marked with asterisks in the PubMed web interface). The following MeSH terms were used: "Psychology, Social" (we write "Social Psychology" in the remainder of this paper), "Autoimmune Diseases", "Metabolism" and "Stem Cells". We restricted each search to the publication years 1995-2021. Each of the four terms retrieves a rather large set of publications, ranging from about 160,000-440,000. The NIH Open Citation Collection (iCite et al., 2019) was used to retrieve citation relations between the publications in each set. We considered the relations as undirected, and we removed duplicates: if publication \(i\) cites publication \(j\) and \(j\) cites \(i\), we only took one of these relations into account. Table 1 shows descriptive statistics for the four datasets. The distribution of relation counts over publications is highly skewed in all four datasets (Table 2). Most publications have few relations and a small proportion of the publications have more than 100 relations. Only a few publications have more than 1,000 relations. Figure 2 shows the distribution of relations in the four datasets as box plots with violin wrapping. In the Social Psychology set, there is a dense concentration of publications with only 1-5 relations. The datasets Autoimmune Diseases and Stem Cells are not as highly skewed as the two other data sets, and the publications in these sets generally have more citation relations. \begin{table} \begin{tabular}{l c c c c} \hline **Dataset** & **1-10 relations** & **11-100 relations** & **101-1000 relations** & \(>\)**1000 relations** \\ \hline Social Psychology & 174,392 & 94,999 & 1,486 & 2 \\ \hline Autoimmune & 93,933 & 166,242 & 9,254 & 57 \\ Diseases & & & & \\ \hline Metabolism & 226,869 & 166,447 & 4,000 & 12 \\ \hline Stem Cells & 42,013 & 102,547 & 7,320 & 70 \\ \hline \end{tabular} \end{table} Table 2: Number of publications with 1-10, 11-100, 101-1000 and \(>\)1000 relations respectively. \begin{table} \begin{tabular}{l c c c c} \hline **Dataset** & **\# Publications** & **\# Publications with at least 1 relation** & **\# Citation relations** & **Avg. relations/ Publication** \\ \hline Social Psychology & 386,204 & 271,879 & 1,595,274 & 4.1 \\ \hline Autoimmune Diseases & 298,157 & 269,486 & 3,692,263 & 12.4 \\ \hline Metabolism & 437,717 & 397,328 & 2,955,214 & 6.8 \\ \hline Stem Cells & 162,093 & 151,950 & 2,527,699 & 15.6 \\ \hline \end{tabular} \end{table} Table 1: Descriptive statistics for the four datasets. ## 4 Methods In this section, we first describe the six approaches to normalization of direct citations. We then briefly describe the clustering of the publications, and finally we present our evaluation methodology. ### Investigated approaches The six normalization approaches used in this study give rise to corresponding publication-publication relatedness measures. In the following six subsections, we describe the approaches through the definitions of their corresponding relatedness measures. The seventh subsection puts forward edge weight examples. #### 4.1.1 Unnormalized The unnormalized relatedness of two publications, \(i\) and \(j\), is defined as (Ahlgren et al., 2020): \[r_{ij}=\max\left(c_{ij},c_{ji}\right) \tag{1}\] where \(c_{ij}\) is 1 if \(i\) cites \(j\), 0 otherwise. 
Thus, if a citation relation exists from either \(i\) to \(j\) or from \(j\) to \(i\), then \(r_{ij}\) is 1, otherwise 0. Note that \(r_{ij}\) is undirected. #### 4.1.2 Fractional For the fractional approach, we used the definition provided by Waltman and van Eck (2012). The normalized relatedness of \(i\) with \(j\) is defined as: \[a_{ij}=\frac{r_{ij}}{\sum_{k}r_{ik}} \tag{2}\] where \(\sum_{k}r_{ik}\) is the total number of relations of \(i\). However, since the network is undirected, we also considered the normalized relatedness of \(j\) with \(i\) to calculate the edge weight. We use the average of \(a_{ij}\) and \(a_{ji}\) for the edge weight between \(i\) and \(j\), i.e. as the normalized relatedness of \(i\) and \(j\), which we denote by \(r_{ij}^{frac}\). \(r_{ij}^{frac}\) ranges from 0 to 1. #### 4.1.3 Geometric mean The geometric mean approach is similar to the fractional approach. However, in this approach we divide \(r_{ij}\) by the geometric mean of the total number of relations of \(i\) and \(j\). The normalized relatedness of \(i\) and \(j\) is defined as \[r_{ij}^{geom}=\frac{r_{ij}}{\sqrt{\sum_{k}r_{ik}\times\sum_{k}r_{jk}}} \tag{3}\] \(r_{ij}^{geom}\) ranges from 0 to 1. #### 4.1.4 Geometric mean-limitN This approach is similar to the geometric mean approach but uses a restriction of the minimum value of \(\sum_{k}r_{ik}\) and \(\sum_{k}r_{jk}\) in the calculation. Geometric mean-limitN reduces the edge weight of relations for publications with less than \(N\) relations. The normalized relatedness of \(i\) and \(j\) is defined as: \[r_{ij}^{geom-limN}=\frac{r_{ij}}{\sqrt{d_{i}\times d_{j}}} \tag{4}\] where \(d_{i}=\sum_{k}r_{ik}\) if \(\sum_{k}r_{ik}\geq N\), otherwise \(d_{i}=N\), and \(d_{j}\) is defined analogously. We used \(N=5\), which yields that \(r_{ij}^{geom-limit5}\) ranges from 0 to 0.2. #### 4.1.5 Directional-fractional The directional-fractional approach, as well as the directional-geometric approach defined below, differs from the other approaches in that the direction of the citation relation is taken into consideration when calculating the edge weight (Yun et al., 2020). The normalized relatedness of \(i\) and \(j\) is defined as \[r_{ij}^{prob-frac}=\begin{cases}0\text{ if }r_{ij}=0\\ \left(\frac{r_{ij}}{i_{ref}}+\frac{r_{ij}}{j_{cit}}\right)/2\text{ if }r_{ij}=1\text{ and }i\text{ cites }j\\ \left(\frac{r_{ij}}{i_{cit}}+\frac{r_{ij}}{j_{ref}}\right)/2\text{ if }r_{ij}=1\text{ and }j\text{ cites }i\end{cases} \tag{5}\] where \(i_{ref}\) is the number of references in \(i\) and \(j_{cit}\) is the number of citations to \(j\). The probability that a citation exists from \(i\) to \(j\) increases with an increasing number of references in \(i\) and an increasing number of citations to \(j\). However, the probability of a citation from \(i\) to \(j\) is not affected by an increasing number of references in \(j\) or citations to \(i\). Therefore, the number of references in \(j\) and the number of citations to \(i\) are disregarded in the calculation of \(r_{ij}^{prob-frac}\). \(r_{ij}^{prob-frac}\) ranges from 0 to 1. Even though direction is considered in the definition of the measure, the citation relation is used as undirected, as for the other considered measures. #### 4.1.6 Directional-geometric The directional-geometric approach is basically the same as the bidirectional normalization used by Yun et al. (2020).
The difference between directional-fractional and directional-geometric is analogous to the difference between the fractional and the geometric approach. Here, the normalized relatedness of \(i\) and \(j\) is defined as \[r_{ij}^{prob-geom}=\begin{cases}0\text{ if }r_{ij}=0\\ \dfrac{r_{ij}}{\sqrt{i_{ref}\times j_{cit}}}\text{ if }r_{ij}=1\text{ and }i\text{ cites }j\\ \dfrac{r_{ij}}{\sqrt{i_{cit}\times j_{ref}}}\text{ if }r_{ij}=1\text{ and }j\text{ cites }i\end{cases} \tag{6}\] \(r_{ij}^{prob-geom}\) ranges from 0 to 1. #### 4.1.7 Edge weights across four of the approaches: examples Table 3 illustrates, for some example values of the total number of relations of \(i\) and \(j\), how the edge weight varies across the four approaches that do not take direction into consideration.\({}^{2}\) Note that the geometric mean approach, but not the geometric mean-limit5 approach, results in the same weight as the fractional approach if \(\sum_{k}r_{ik}=\sum_{k}r_{jk}\), i.e. when \(i\) and \(j\) have the same number of relations. However, the edge weight is lower using \(r_{ij}^{geom}\) compared to \(r_{ij}^{frac}\) when the total number of relations differs between \(i\) and \(j\). The variation of edge weights is much smaller for the geometric mean-limit5 approach than for the geometric mean approach and for the fractional approach. Footnote 2: To keep Table 3 as simple and illustrative as possible, we have left out the directional approaches. ### Clustering For each approach and dataset, we obtained a series of clustering solutions using the Leiden algorithm (Traag et al., 2019). The Leiden algorithm was used to maximize the Constant Potts Model as the quality function (Traag et al., 2011; Waltman and van Eck, 2012). This model is resolution-limit-free, which means that it can be used to detect clusters at granular levels. We used the following values of the resolution parameter \(\gamma\) to obtain the clustering solutions: 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001. \begin{table} \begin{tabular}{r r r r r r} \hline \(\sum_{k}r_{ik}\) & \(\sum_{k}r_{jk}\) & Unnormalized & Fractional & Geometric & Geometric \\ & & & & mean & mean-limit5 \\ \hline 1 & 1 & 1.00 & 1.00 & 1.00 & 0.20 \\ 10 & 10 & 1.00 & 0.10 & 0.10 & 0.10 \\ 100 & 100 & 1.00 & 0.01 & 0.01 & 0.01 \\ 1 & 10 & 1.00 & 0.55 & 0.32 & 0.14 \\ 1 & 100 & 1.00 & 0.51 & 0.10 & 0.04 \\ 10 & 100 & 1.00 & 0.06 & 0.03 & 0.03 \\ \hline \end{tabular} \end{table} Table 3: Examples of the variation of edge weight using the different normalization approaches. Only approaches that do not take direction into consideration are covered. ### Evaluation methodology We compared the six normalization approaches described in Section 4.1 with respect to clustering solution quality. For this, the following three measures were used to evaluate the clustering solutions: (1) the Adjusted Rand Index (ARI), (2) Silhouette width, and (3) number of probably inaccurate assignments. The evaluation measures are described in Section 4.3.1, whereas result visualization is treated in Section 4.3.2. #### 4.3.1 Evaluation measures In this section, we describe the three evaluation measures. We doubt that there exists a ground truth for clustering solutions. Our intention is, though, to capture different aspects of clustering solution quality. #### ARI ARI is a measure of the similarity between two classifications of the same set of objects (Hubert & Arabie, 1985). The measure takes values on the interval \([0,1]\).
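As a brief illustration of how ARI can be computed in practice, the following sketch uses scikit-learn's adjusted_rand_score (an assumed tool choice; the paper does not specify its implementation) on hypothetical cluster and baseline-class labels for the same set of publications:

```python
from sklearn.metrics import adjusted_rand_score

# Hypothetical labels for the same six publications: baseline classes derived
# from reference lists vs. the cluster ids of a clustering solution.
baseline_classes = [0, 0, 0, 1, 1, 2]
cluster_ids = [1, 1, 0, 0, 0, 2]

print(adjusted_rand_score(baseline_classes, cluster_ids))
# A value of 1.0 means the two partitions agree perfectly up to a relabeling of clusters.
```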
We used ARI to compare a baseline solution to clustering solutions obtained using the different normalization approaches. The baseline was created in a similar manner as in Boyack and Klavans (2017) and Sjogarde and Ahlgren (2018). In each of the four datasets of publications, we retrieved those publications that had more than 100 references in total and a minimum of 50% of the references within the dataset. The restriction to 50% is more inclusive than in Sjogarde and Ahlgren (2018), because we used a much smaller dataset. Furthermore, we restricted the set to publications published from 2019 or later. We regard the retrieved set of publications as baseline classes and their references as the items of these classes. The baseline classes were used as proxies for research topics. The classes should not be seen as perfect representations of topics. However, we assume that a large proportion of the references in each baseline publication is likely to be connected to the same topic. We therefore believe that baseline classes can be used as proxies for topics and for comparative purposes. We followed the same procedure as in Sjogarde and Ahlgren (2018) in order to fulfill requirements on the properties of the baseline classes and the clustering solutions. To avoid having more than one baseline class addressing the same topic, we used the following procedure. Bibliographic coupling was used to calculate the similarity between baseline classes. If the items of two classes had an overlap of 30% or more of their references, we regarded the two classes as addressing the same topic. Based on these same-topic relations, we created groups of connected classes. From each group, we selected one class at random and used this class as a baseline class and excluded the other classes in the same group. A second requirement on the properties of the baseline classes is that the classes must be pairwise disjoint. To fulfill this requirement, we referred each item of the baseline classes to one class only. Each item was referred to the class to which it had the highest frequency of citation relations. As a final step, we delimited each clustering solution (obtained using a dataset and a normalization approach) to the publications represented in the corresponding baseline classes. #### Silhouette width The silhouette width of an observation, in our case a publication, quantifies how well the observation has been clustered (Rousseeuw, 1987). This is done by contrasting coherence to separation: within-cluster dissimilarity is compared to between-cluster dissimilarity. Let \(i\) and \(j\) be publications such that \(i\) and \(j\) belong to clusters in a given clustering solution. We define the _dissimilarity between \(i\) and \(j\)_ as 1 if \(r_{ij}=0\), and as 0 if \(r_{ij}=1\) (see Section 4.1.1 for the definition of \(r_{ij}\)). Informally, the dissimilarity between \(i\) and \(j\) is 1 if there is no citation relation between \(i\) and \(j\), and the dissimilarity between \(i\) and \(j\) is 0 if there is such a relation between \(i\) and \(j\). For a publication \(i\) and a clustering solution containing \(i\), let \(A\) be the cluster to which \(i\) has been assigned, and let \(d(i,C)\) be the average dissimilarity of \(i\) to all publications of \(C\), where \(C\) is a cluster of the solution and \(C\neq A\).
The _silhouette width_ for \(i\), \(s(i)\), is then defined as: \[s(i)=\frac{b(i)-a(i)}{\max\left\{a(i),b(i)\right\}} \tag{7}\] where \(a(i)\) is the average dissimilarity of \(i\) to all other publications of \(A\), and \(b(i)=\min_{C\neq A}d(i,C)\). The silhouette width takes values on the interval [-1, 1]. As a clustering solution quality measure, we used the average \(s(i)\) across all publications in a dataset. #### Probably inaccurate assignments We designed a measure, _probably inaccurate assignments_ (PIA), with the intention to quantify the extent of the problem of the fractional approach indicated in Section 2. Recall that the problem concerns nodes with many citation relations outside their clusters and only a few relations inside their clusters. PIA is defined, with respect to a given clustering solution, as the number of publications \(i\) in the solution that satisfy the following three conditions: * \(i\) has at least 20 citation relations, * \(i\) has less than 10% of its citation relations within its cluster, and * \(i\) has a negative silhouette width. If the three conditions are satisfied, one may state that (i) \(i\) has sufficiently many citation relations to be classified in a proper way, (ii) only a small proportion of the citation relations of \(i\) are within the cluster of \(i\), and (iii) there is at least one other and more proper cluster to which \(i\) could have been assigned (cf. the definition of silhouette width above). Notice that the node in red color in the example of Section 2 stands for a publication that satisfies the three PIA conditions, given that we assume that its silhouette width is negative. #### 4.3.2 Granularity and granularity-quality plots We define the _granularity_ of a clustering solution as the number of publications divided by the sum of the squared cluster sizes (Waltman et al., 2020). Ideally, and for fairness reasons, clustering solutions compared with regard to the three evaluation measures should have exactly the same granularity. For the five approaches where normalization of direct citations is used, the granularity requirement can be assumed to be approximately satisfied by solutions obtained using different approaches but associated with the same value of the resolution parameter \(\gamma\). However, the granularity of the clusters obtained from the unnormalized approach deviates somewhat from the granularity of the other approaches. This should be taken into account when the results are interpreted. We visualize the results by using granularity-quality plots, inspired by earlier, related studies in which granularity-accuracy (GA) plots have been used (Ahlgren et al., 2020; Boyack and Klavans, 2020; Waltman et al., 2020). The use of granularity-quality plots is a way to counteract the difficulty that the granularity requirement referred to in the preceding paragraph is only approximately satisfied. In a granularity-quality plot, the horizontal axis represents granularity (as defined above), whereas the vertical axis represents \(M\), where \(M\) is one of the three evaluation measures used in this study. For a given normalization approach, like fractional, a point in the plot represents the \(M\) value and granularity of a clustering solution, obtained using a certain resolution value of \(\gamma\). Further, a line approximates the points of the normalization approach, where \(M\) values for granularity values between points are estimated. In this way, the performance of the approaches can be compared at a given granularity level.
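To make the silhouette width of Eq. (7) and the PIA conditions concrete, the following is a minimal sketch (Python; the dictionary-based graph representation and helper names are illustrative assumptions, not the code used in the study):

```python
import numpy as np

def silhouette(i, cluster_of, related):
    """Silhouette width s(i) of Eq. (7) with 0/1 dissimilarities: 0 if a citation
    relation exists between i and j, 1 otherwise. `related[i]` is the set of
    publications related to i; `cluster_of` maps publication ids to cluster ids."""
    A = cluster_of[i]
    members = {}
    for j, c in cluster_of.items():
        if j != i:
            members.setdefault(c, []).append(j)
    def avg_dissim(pubs):
        return float(np.mean([0.0 if j in related[i] else 1.0 for j in pubs]))
    a = avg_dissim(members[A]) if members.get(A) else 1.0
    b = min(avg_dissim(members[C]) for C in members if C != A)  # assumes >= 2 clusters
    return (b - a) / max(a, b)

def probably_inaccurate(i, cluster_of, related):
    """The three PIA conditions: at least 20 relations, less than 10% of them
    inside i's own cluster, and a negative silhouette width."""
    deg = len(related[i])
    if deg < 20:
        return False
    inside = sum(1 for j in related[i] if cluster_of.get(j) == cluster_of[i])
    return inside / deg < 0.10 and silhouette(i, cluster_of, related) < 0
```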
In our case, the lines are xsplines, i.e. curves drawn relative to control points (Blanc & Schlick, 1995). ## 5 Results In this section we first present plots showing the skewness of the cluster size distributions resulting from the six normalization approaches in each of the four data sets (Section 5.1). We then present granularity-quality plots for the three evaluation measures (Sections 5.2-5.3). ### Skewness The unnormalized approach results in cluster size distributions that are much more skewed than when normalization is performed (Figure 3). The fractional approach results in the least skewed distribution in most data sets. However, when the granularity increases, the differences between the normalization approaches are very small. _Figure 3: Skewness of the cluster size distribution (y-axis) by granularity level (x-axis) for the six normalization approaches in the four data sets._ ### ARI The directional-fractional approach resulted in the highest ARI values (Figure 4), but the values do not differ much from those of the rest of the normalization approaches. If no normalization is performed, the ARI value is notably lower at most granularity levels. The unnormalized approach has higher ARI values for the most granular clustering solutions, but its maximal ARI value is lower compared to the maximal ARI values of the other approaches. It should be noted that the unnormalized approach contains high numbers of very small clusters in the most granular clustering solutions. For example, in the most granular clustering solution in "Stem cells", about 13 thousand out of 17 thousand clusters have less than 10 publications. Such a feature is unwanted in many practical applications, e.g. if clusters are used for normalization of citation counts. The ARI value reaches its highest point at a granularity level of about 0.005 in three of the four fields, and even lower for "Social Psychology". Thus, the higher granularity levels capture clusters that have a narrower scope than what is covered in review papers. Such small clusters are probably not very useful in practical applications. _Figure 4: ARI values (y-axis) by granularity level (x-axis) for the six normalization approaches in the four data sets._ ### Silhouette width Also in the case of silhouette width (Figure 5), the values are rather similar for all of the normalization approaches. The fractional approach has the highest value in most fields and at most granularity levels. The unnormalized approach has lower values at most granularity levels in most fields. The values of the normalization approaches are generally rather close to 1, which suggests that many publications have rather strong connections to another cluster than the one they have been assigned to. This may be an indication of the overlapping nature of research fields. _Figure 5: Silhouette width (y-axis) by granularity level (x-axis) for the six normalization approaches in the four data sets._ ### PIA The unnormalized approach results in the lowest PIA values, i.e., the lowest numbers of inaccurate assignments (Figure 6). The PIA value increases with higher granularity. The two fractional approaches have the highest PIA values, while the geometric and geometric-lim5 approaches have substantially lower values.
The results indicate that the geometric approach reduces the problem with inaccurate assignments related to the fractional approach, a problem caused by the high normalized relatedness between a publication with few relations and another publication that it cites or that cites it. _Figure 6: PIA (y-axis) by granularity level (x-axis) for the six normalization approaches in the four data sets._ ## 6 Discussion and conclusions The performance of the different normalization approaches is rather similar regarding ARI and silhouette width. The fractional approach, which can be considered as the standard approach for normalization of direct citation relations, performs as well as the other approaches regarding these evaluation measures. The fractional approach also results in cluster size distributions that are among the least skewed. However, the fractional approach has been shown to result in high PIA values. Recall that the PIA measure captures publications assigned to clusters to which they have few relations, despite the fact that they have enough relations to be properly assigned and there exist other clusters to which they have relatively more relations. The geometric approach and geometric-lim5 approach have lower PIA values (especially at higher granularity levels), compared to the fractional, directional-fractional and directional-geometric approaches. The former two approaches may be used to reduce the problem of inaccurate assignment of publications with a modest number of citation relations. We do not believe that changing from the fractional normalization approach will result in a clustering solution free from poorly connected clusters. Poorly connected clusters may also be a consequence of the clustering algorithm. Park et al. (2023) show in a recent preprint that poorly connected clusters are produced by several of the commonly used community detection algorithms, including the Leiden algorithm for maximizing the Constant Potts Model. They propose a method to remediate poorly connected clusters to improve the connectedness of the clusters in a clustering solution. Such an approach may be combined with a geometric approach for normalization to further reduce the problems of poorly connected clusters and inaccurate assignments. Furthermore, our results support reassignment of publications belonging to small clusters (Waltman and van Eck, 2012). The PIA value increases with higher granularity, indicating that the problem of inaccurate assignments grows with smaller clusters. Reassigning publications in small clusters, i.e. clusters with fewer publications than a threshold value, is likely to reduce the problem of poorly connected clusters. Nonetheless, to accurately assign publications with few citation relations (or even no citation relations), it is necessary to make use of more information. Publications with few citation relations are inevitably difficult to assign to an appropriate cluster. Combining the direct citation approach with a textual-based approach may increase the density in sparse areas of the network. However, such combined approaches have not been shown to perform substantially better than a standalone use of direct citations, or extended direct citations, in a couple of previous studies (Ahlgren et al., 2020; Boyack and Klavans, 2020). Furthermore, combined approaches may make interpretation of the clustering solution more difficult in that it becomes less obvious what clusters represent.
A direct citation relation implies that the citing authors are aware of the cited publication and explicitly mention this publication in their texts. On the other hand, a textual similarity of two publications occurs when two publications use the same terms in, for example, titles and abstracts, which may happen without awareness of each other's work. This exemplifies the different natures of citation-based and textual similarity. Combining the approaches in one single network may therefore make it unclear how publications in a cluster are related to each other. Future work may focus on how to address the problem with sparse areas of citation networks. An alternative would be to initially disregard publications with few relations and create a clustering solution including publications with a substantial amount of citation relations. This would create a clustering solution in which clusters represent dense areas of formal communication represented by citations (Sjogarde, 2023). Publications with few citation relations could then be assigned to clusters based on a textual based approach. Future work may also address the performance of clustering approaches that provide overlapping clustering solutions. Such approaches may perform differently in terms of the evaluation measures used in the present work. In this study, we have compared six approaches to normalization of direct citations with respect to clustering solution quality in four data sets. We conclude that the geometric approach has a similar performance as the fractional approach regarding ARI and silhouette width. However, the results indicate that the geometric approach reduces the problem of inaccurate assignments, and therefore we believe that the geometric approach may be preferred over the fractional approach. ## Data and code availability Data analyzed in this study is openly available in Zenodo at [https://zenodo.org/record/8343758](https://zenodo.org/record/8343758). Code used for data analysis in this study is openly available in GitHub at [https://github.com/petersiogarde/papers/tree/main/normalization](https://github.com/petersiogarde/papers/tree/main/normalization) dc evaluation. ## Author contributions Peter Sjogarde: Conceptualization; methodology; software; formal analysis; writing--original draft; writing--review & editing; visualization. Per Ahlgren: Conceptualization; methodology; formal analysis; writing--original draft; writing--review & editing. ## Competing interests The authors declare no competing interests. ## Funding information Peter Sjogarde was funded by The Foundation for Promotion and Development of Research at Karolinska Institutet.
2309.12573
Earth Tomography with ICAL at INO
Observing matter effects in atmospheric neutrinos travelling through the entire mantle and core of the Earth is a promising way of enhancing our understanding of Earth's density structure. In that context we study the prospects of Earth tomography with the ICAL detector at the India-based Neutrino Observatory. While this experiment is smaller in size in comparison to some of the other bigger detectors being proposed, it is the only neutrino experiment with charge-identification sensitivity. In particular, ICAL can see matter effects separately in neutrinos and antineutrinos. This has been seen to enhance ICAL's sensitivity to earth matter effects and hence the mass ordering sensitivity for both normal and inverted mass orderings. It is therefore, pertinent to see if the ICAL sensitivity to earth tomography is competitive or better with respect to other experiments, especially for the inverted mass ordering, where other experiments suffer reduced sensitivity. We present the sensitivity of ICAL to earth tomography by taking into consideration both the Earth's mass constraint as well as the hydrostatic equilibrium constraints.
Deepak Raikwal, Sandhya Choubey
2023-09-22T02:00:25Z
http://arxiv.org/abs/2309.12573v1
# Earth Tomography with ICAL at INO ###### Abstract Observing matter effects in atmospheric neutrinos travelling through the entire mantle and core of the Earth is a promising way of enhancing our understanding of Earth's density structure. In that context we study the prospects of Earth tomography with the ICAL detector at the India-based Neutrino Observatory. While this experiment is smaller in size in comparison to some of the other bigger detectors being proposed, it is the only neutrino experiment with charge-identification sensitivity. In particular, ICAL can see matter effects separately in neutrinos and antineutrinos. This has been seen to enhance ICAL's sensitivity to earth matter effects and hence the mass ordering sensitivity for both normal and inverted mass orderings. It is therefore, pertinent to see if the ICAL sensitivity to earth tomography is competitive or better with respect to other experiments, especially for the inverted mass ordering, where other experiments suffer reduced sensitivity. We present the sensitivity of ICAL to earth tomography by taking into consideration both the Earth's mass constraint as well as the hydrostatic equilibrium constraints. ## I Introduction Knowing and understanding our own planet remains one of mankind's biggest challenges. In particular, there is a lot of on-going effort to understand the density structure of the Earth. Today the best understanding in this area comes from seismology - the study of earthquakes and the corresponding seismic waves that they create. The motion of the seismic waves depends crucially on the density structure of the Earth, hence, seismology has managed to provide us with a reasonable understanding of Earth's interior. Data is collected at several seismographic stations accross the surface of the Earth. Since the speed of the seismic waves depends on the distance between these stations as well as the density of matter that they cross, a careful comparative analysis of this collective data can be directly used to obtain Earth's density profile. The best density model of the Earth obtained till-date using seismological data is called the PREM (Preliminary Reference Earth Model) density profile [1]. This model divides the Earth into various regions or zones. Broadly, Earth can be divided into the core, mantle and crust, with finer layers in each of these divisions. The density in each of these layers is given by the PREM profile. However, there remains uncertainties in these density estimates, with the uncertainty in density in some layers being significantly higher than some others. In general, the density in the deeper layers has a larger uncertainty than the outer layers. While efforts are on to get a better understanding of Earth's density profile via seismology, it is pertinent to ask if one could cross-check Earth's density profile using other complementary methods. Neutrino physics can provide such a complementary approach. Neutrinos are a probe for the density of matter through which they travel in two different ways. Neutrinos interact with matter via \(W\) and \(Z\) boson mediated weak interactions. The neutrino interaction cross-section increases with neutrino energy, becoming sizeable at the TeV energy scale. Interactions of high energy neutrinos with the ambient Earth matter results in the attenuation of the neutrino flux. 
Since the resultant attenuation depends on the density of matter, measurement of this attenuation can be used to measure the average Earth matter density along the neutrino path length. This method can be used at neutrino telescopes such as IceCube [2] and KM3NeT ARCA [3] to determine Earth's density profile. The second way neutrinos can probe the Earth density is via neutrino oscillations. This is the subject of study in this work. Neutrino oscillations have been observed and confirmed in a myriad of neutrino experiments. The neutrino oscillation parameters have been measured to a reasonable precision. Only the few last remaining pieces of this puzzle remain to be discovered and/or confirmed. Existence of CP violation in the lepton sector is one of the most important missing puzzle pieces. The other missing piece is the octant of the mixing angle \(\theta_{23}\), _ie._ whether \(\theta_{23}<\pi/4\) (called lower octant (LO) solution) or \(\theta_{23}>\pi/4\) (called upper octant (UO) solution). Finally, the last puzzle piece is the neutrino mass ordering (MO), _ie._ whether the atmospheric mass squared difference \(\Delta m^{2}_{31}>0\) (called normal ordering (NO)) or \(\Delta m^{2}_{31}<0\) (called inverted ordering (IO)). Bigger and better experiments are being proposed and built to find these remaining pieces of the puzzle. Amongst these are the next generation atmospheric neutrino experiments such as IceCube (PINGU) [4], ORCA [5; 3] and ICAL@INO [6]. These atmospheric neutrino experiments are particularly suited to measure the neutrino MO via their ability to observe Earth matter effects in neutrino oscillations. Matter effects in neutrino oscillations depend on the MO as well as density of matter. Hence, the flavor oscillations of atmospheric neutrinos while traveling inside the Earth become sensitive to the density of matter. Therefore, precise measurement of matter effects in neutrino oscillations can be used to verify the Earth matter density profile. Prospects of Earth tomography using atmospheric neutrinos in multi-megaton class detectors has been performed earlier for IceCube (PINGU) [7; 8] and ORCA [9; 8] detectors. In [8; 10] the author studies the prospects of earth tomography in PINGU and ORCA and concludes that the density measurements in the lower mantle region can be performed to a few percent level. Ref. [9] further improves this analysis in the context of the ORCA detector and makes a thorough sensitivity study of Earth tomography. They include Earth mass constraints and hydrostatic equilibrium and show that the density in the outer core (mantle) can be measured to -18%/+15% (-6%/+8%) level. They also look at the impact of systematic uncertainties on the sensitivity of ORCA. In Ref. [11; 12] the authors look at the prospects of confirming the existence of the core using atmospheric neutrinos in ICAL@INO [6]. In this work we look at the prospect of performing Earth tomography using the atmospheric neutrino experiment ICAL@INO. We take the PREM profile as the reference density model of the Earth and study the sensitivity of ICAL@INO to deviations from PREM. We present our results as a function of the percentage change that ICAL@INO can confirm with respect to the PREM profile. We consider three different cases in our studies. We start with a simple approach where we calculate the sensitivity of ICAL@INO to Earth mass density in the mantle and core regions without any other constraint imposed. 
We next study the case where the Earth mass constraint is imposed by compensating an increase (decrease) in a given layer of Earth by a corresponding decrease (increase) in all the other layers such that the total mass of the Earth does not change. Finally, we take into account the fact that not all layers are equally uncertain and constrain the compensation accordingly. In all the cases we study the impact of systematic uncertainties on the sensitivity of ICAL@INO. We also take into account the condition for hydrostatic equilibrium of the Earth. The paper is organised as follows. We begin in section II by discussing the Earth density model and the three different cases that we consider for analysing it. In this section we also present the impact of the variations to the PREM profile on the relevant neutrino oscillation probabilities. In section III we present in detail our analysis methodology, where we provide details of the ICAL@INO experiment, the atmospheric neutrino fluxes, the simulation tools, the oscillation framework and the statistical analysis method. In section IV we present our results. Finally, we summarize our results and conclude in section V, along with a comparison with other experiments. ## II Earth density model The neutrino oscillation probabilities are calculated by using the Hamiltonian \[\mathcal{H}=UMU^{\dagger}+\mathcal{V}_{e}\,, \tag{1}\] where \(U\) is the PMNS mixing matrix [13], \(M\) is the 3\(\times\)3 neutrino mass squared matrix \(M=diag(0,\Delta m^{2}_{21},\Delta m^{2}_{31})\), and \(\mathcal{V}_{e}\) is the 3\(\times\)3 matrix containing the effective matter potential coming from coherent forward scattering of neutrinos with electrons in the ambient matter, with \(\mathcal{V}_{e}=diag(\pm\sqrt{2}G_{F}N_{A}\,\rho,0,0)\), where \(N_{A}\) is Avogadro's number and \(\rho\) is the density of matter. For neutrinos the matter potential takes the positive sign, while for antineutrinos the matter potential comes with the negative sign. In this work we use the convention \(\Delta m^{2}_{ij}=m^{2}_{i}-m^{2}_{j}\). As a result, neutrino oscillations are modified when neutrinos travel in matter. The effective potential depends on the density of matter in which the neutrinos are traveling. Neutrinos (and antineutrinos) with energy around 5-10 GeV traveling through the Earth undergo large matter effects which change their oscillation probabilities. The size of these changes depends on the density of matter through which the neutrinos travel and on the energy of the neutrinos, and the changes are opposite for neutrinos and antineutrinos. Atmospheric neutrinos and antineutrinos span an energy range of 100 MeV to 10 TeV and come from all directions, _ie._, all zenith angles. Neutrinos from different zenith angles traverse different density layers of the Earth and hence experience different matter effects. ICAL@INO can separately see \(\nu_{\mu}\) (\(\mu^{-}\)) and \(\bar{\nu}_{\mu}\) (\(\mu^{+}\)), as a function of muon energy and muon angle. In addition, it can observe the corresponding hadrons produced in the charged current event. This enables ICAL@INO to observe earth matter effects rather efficiently. Thus, measurement of the earth matter effects as a function of energy and zenith angle at ICAL@INO can be effectively used to study the density structure of our Earth. For our reference density structure of the Earth we use the PREM (Preliminary Reference Earth Model) [1] profile.
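As a rough numerical illustration of the size of the matter term, the following sketch evaluates the standard charged-current potential and the corresponding 1-3 resonance energy for representative mantle-like and outer-core-like densities. The electron fraction \(Y_e=0.5\), the numerical prefactor, and the oscillation parameter values are standard approximations assumed here, not numbers quoted in this paper.

```python
import math

# sqrt(2)*G_F*n_e expressed as eV per unit of (Y_e * rho[g/cm^3]); standard value.
V_PER_YE_RHO = 7.63e-14  # eV

def matter_potential(rho, Ye=0.5):
    """Charged-current potential for nu_e in matter of density rho [g/cm^3];
    the sign flips for antineutrinos, as stated in the text."""
    return V_PER_YE_RHO * Ye * rho  # eV

def resonance_energy(rho, dm2_31=2.5e-3, theta13_deg=8.6, Ye=0.5):
    """Energy (GeV) of the 1-3 MSW resonance for a constant density rho."""
    V = matter_potential(rho, Ye)
    return dm2_31 * math.cos(2.0 * math.radians(theta13_deg)) / (2.0 * V) / 1e9

for label, rho in [("mantle-like", 4.5), ("outer-core-like", 11.0)]:
    print(f"{label}: V = {matter_potential(rho):.2e} eV, E_res ~ {resonance_energy(rho):.1f} GeV")
# Prints roughly 7 GeV and 3 GeV respectively, consistent with the few-GeV window
# where the text notes the most prominent changes in the oscillation probabilities.
```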
In the PREM model, Earth is broadly divided into seven major parts: (1) crust \(d<30\), (2) lower lithosphere \(30<d<400\), (3) upper mesosphere \(400<d<600\), (4) transition zone \(600<d<800\), (5) lower mesosphere \(800<d<2890\), (6) outer core \(2890<d<5150\), and (7) inner core \(5150<d<6371\), where \(d\) is the depth inside Earth, measured in km. The density varies within each of these layers and in our analysis we have taken that density variation into account by subdividing Earth into 26 layers, each with a fixed density given by the PREM profile. This gives an excellent simulation of the full PREM profile. It is known [8; 9] and we have checked that the sensitivity of atmospheric neutrinos to the density of the crust, lower lithosphere, upper mesosphere, transition zone and inner core is rather poor. Therefore, throughout this work we discuss density changes in two major Earth zones - the _outer core_ (\(2890<d<5150\)) and the lower mesosphere which we will call broadly _mantle_ (\(800<d<2890\)), where \(d\) is the depth inside Earth measured in km. We use our simulation code, as described in the next section, to simulate the "data" in ICAL corresponding to the reference PREM profile. This simulated "data" is then fitted by changing the Earth's density in the theory. We work with three different scenarios for the theory, which we label as Case I, Case II and Case III, described below. For Cases II and III we also impose the condition of hydrostatic equilibrium of the Earth by demanding that the following inequalities should always hold. \[\rho_{man}^{max}\leq\rho_{OC}^{min}\text{ and }\rho_{OC}^{max}\leq\rho_{IC}^{ min}\,, \tag{2}\] where \(\rho_{man}^{max}\) and \(\rho_{OC}^{max}\) are the maximum density inside the mantle (man) and outer core (OC), respectively, while \(\rho_{OC}^{min}\), \(\rho_{IC}^{min}\) are the minimum density inside the outer core and inner core (IC), respectively. ### Case I We begin with the simplest case where we change the density in a given region, outer core or mantle, by a constant factor \(x\%\) without any other consideration. In particular, we do not take into account the fixed mass of the Earth and/or conditions needed for its hydrostatic stability. This naive case, even though not absolutely correct, is meant to give us an understanding of how density changes in a given individual layer independently affect the neutrino oscillation probabilities. In the left panel of Fig. 1 we show the density profile of the Earth for this case as a function of the radial depth \(d\) in km. The green line is for the reference PREM profile while the blue and the red lines are the density profiles with \(-10\%\) and \(+10\%\) change of density in the mantle, while the density of the inner and outer core are kept fixed at their reference PREM values. The middle (for neutrinos and normal ordering (NO)) and right (for antineutrinos and inverted ordering (IO)) panels of Fig. 1 show how the survival probability \(P_{\mu\mu}\) changes when we change the density in the mantle by \(+10\%\). The change \(\Delta P_{\mu\mu}\) in these panels is shown as a function of the neutrino path length \(L\) in km and neutrino energy \(E\) in GeV. Figure 1: Top panel shows the change in density of mantle according to Case I. Middle panel shows the effect of density change on the survival probability via \(\Delta P_{\nu_{\mu}\nu_{\mu}}=P_{\nu_{\mu}\nu_{\mu}}^{PREM}-P_{\nu_{\mu}\nu_{\mu}}^{newPREM}\) for NO and neutrinos, where \(newPREM\) corresponds to the density-modified case. 
Lower panel is the same as panel (b) but for IO and antineutrinos. Figure 2: The panels shown here are the same as in Fig. 1 but for density change in OC for Case I. \(L\) is related to the zenith angle \(\theta_{z}\) by the following relation \[L=\sqrt{(R+L_{0})^{2}-(R\sin\theta_{z})^{2}}-R\cos\theta_{z}\,, \tag{3}\] where \(R\) is the radius of earth and \(L_{0}\) is the height of atmosphere. Note that the largest changes in the probability occur for neutrinos and antineutrinos with longer trajectories, _ie._, those that cross the core as well as the mantle. We find that these changes occur for all energies less than 15 GeV, but the most prominent change happens around 2-10 GeV. The black dashed vertical line shows the \(L\) corresponding to the neutrino trajectory that touches the boundary between the outer core and the mantle, while the red dashed vertical line shows the \(L\) corresponding to the neutrino trajectory that crosses the boundary between the lower mesosphere and transition region. The three panels of Fig. 2 show similar results, but this time for reference PREM and \(\pm 10\%\) change of density in the outer core, keeping the density in all other layers fixed at their reference PREM value. We note that the color map shows significantly more "islands" in the middle and right panels for this case. This implies that better energy and angle resolutions and finer binning of data would be more crucial for this case as compared to the case for the mantle. ### Case II When we change the layer density by \(\pm x\%\) with respect to the PREM profile, we effectively change the mass of the Earth. The mass of the Earth is known to much better precision as compared to its density profile. In our analysis here for this case, we take the Earth mass to be fixed. Therefore, when we increase (decrease) the density in any given layer, we must decrease (increase) the density in some other layers such that the mass of the Earth remains constant. In order to quantify this we do the following. We consider that the density in the three layers - inner core, outer core and mantle could change, while the density of all the other layers of the Earth is kept fixed. So the Earth mass is given by \[M_{Earth}=\sum_{i}V_{i}\rho_{i}+\sum_{j}V_{j}\rho_{j}+\sum_{k}V_{k}\rho_{k}+M_ {\rm fixed}, \tag{4}\] where \(i\), \(j\) and \(k\) are labels for the three regions, \(\rho_{i}\), \(\rho_{j}\) and \(\rho_{k}\) are the densities of the individual layers in each of these regions and \(V_{i}\), \(V_{j}\) and \(V_{k}\) are the corresponding volumes of the individual layers. The last term \(M_{\rm fixed}\) is the mass of the Earth in the layers whose density is taken as fixed. If we change the density by \(x\%\) in any one region, say the region marked by \(i\), it increases the Earth mass. In order to make the Earth mass constant we need to decrease the density of the other two regions. For simplicity we assume that the density change in the Figure 4: The panels shown here are the same as in Fig. 3 but for density change in OC for Case II. Figure 3: Top panel shows the change in density of mantle according to Case II. Middle panel shows the effect of density change on the survival probability via \(\Delta P_{\nu_{\mu}\nu_{\mu}}=P_{\nu_{\mu}\nu_{\mu}}^{PREM}-P_{\nu_{\mu}\nu_{ \mu}\nu_{\mu}}^{newPREM}\) for NO and neutrinos, where \(newPREM\) corresponds to density modified case. Lower panel is the same as panel (b) but for IO and antineutrinos. 
other two regions are the same, given by \(y\%\), according to the following rule, \[M_{Earth}=(1+x)\sum_{i}V_{i}\rho_{i}+(1+y)\bigg{(}\sum_{j}V_{j}\rho_{j}+\sum_{k}V _{k}\rho_{k}\bigg{)}+M_{\rm fixed}. \tag{5}\] This gives \(y\) in terms of \(x\) as \[y=-\frac{x\sum_{i}V_{i}\rho_{i}}{\sum_{j}V_{j}\rho_{j}+\sum_{k}V_{k}\rho_{k}}. \tag{6}\] In Figs. 3 and 4 we show plots similar to Figs. 1 and 2 but for Case II, where we keep the Earth mass fixed. In the left panel of both figures we can see that the densities in all layers of the Earth get altered. In the left panel of Fig. 3, when the mantle density decreases (increases) the inner and outer core densities increase (decrease). Similarly, in the left panel of Fig. 4, when the outer core density decreases (increases) the mantle and inner core densities increase (decrease). One can also note from Fig. 3 that a small density change in the mantle induces large density changes in the core. On the other hand, Fig. 4 shows that a density change in the outer core induces a larger density change in the inner core and a smaller density change in the mantle. The middle and right panels of Figs. 3 and 4 should be compared with the middle and right panels of Figs. 1 and 2. We see that compared to Case I, for Case II the effect of the density change on the oscillation probabilities has increased. For instance, comparing the middle panels of Figs. 1 and 3 shows that there are two ways in which the effect of the density change on the probability has increased - first, the probability change now extends over a much wider range of zenith angle and second, even in the zenith angle range \(-1<\cos\theta<-0.4\) the \(|\Delta P_{\nu_{\mu}\nu_{\mu}}|\) is larger. The same is true for the right panels, and for the outer core case. ### Case III Finally, in Case III we again keep the total Earth mass fixed, but restrict the compensating density change to the inner layers of the Earth, namely the inner core, the outer core and the inner mantle. Hence, in this case when we change the density in a given region by \(x\%\), we compensate for the fixed Earth mass by changing the density elsewhere by \(y\%\), but now only the layers in the inner core, outer core and inner mantle are changed. The relation between \(x\) and \(y\) is still given by a relation similar to Eq. 6, but where the three relevant layers for compensation are only the inner core, outer core and inner mantle. We show the density change for the inner mantle and outer core in the left panels of Figs. 5 and 6, respectively. A comparison of the Case III figures with the ones presented earlier for Cases I and II will come in handy when interpreting the expected sensitivity plots for ICAL@INO. ## III Analysis Method ### Experimental setup and simulation details We give below a brief overview of our simulation framework. The proposed India-based Neutrino Observatory (INO) will house a 50 kton magnetized Iron CALorimeter (ICAL) [6]. In this work we refer to this detector as ICAL@INO. The design proposal for ICAL@INO is to have a layered structure, with 5.6 cm thick iron slabs interlaced with RPCs (Resistive Plate Chambers) [6] as the active detector elements. The detector will be placed inside a 1.5 T magnetic field, making ICAL@INO magnetised. This gives ICAL@INO its charge identification sensitivity, allowing it to observe particles and anti-particles separately. Therefore, this detector will independently and efficiently observe \(\nu_{\mu}\) and \(\bar{\nu}_{\mu}\) produced in the Earth's atmosphere. The atmospheric \(\nu_{\mu}\) (\(\bar{\nu}_{\mu}\)) will interact with the iron producing \(\mu^{-}\) (\(\mu^{+}\)) and hadron(s). 
The \(\mu^{-}\) (\(\mu^{+}\)) produce long track in the detector, while the hadron(s) produce a hadronic shower. Both the muon track and the hadron shower can be observed at ICAL@INO. The magnetic field at ICAL@INO bends the \(\mu^{-}\) and \(\mu^{+}\) in opposite directions, allowing the experiment to record the \(\mu^{-}\) and \(\mu^{+}\) track events separately. The length, curvature and direction of the track can be used to reconstruct the energy and zenith angle of the muon. The hadronic shower can be used to measure the energy of the hadron. We use the Honda 3D atmospheric neutrino fluxes computed for the Theni site in India [14]. Atmospheric neutrino events in ICAL are generated using the GENIE MC [15] tailored for the ICAL detector [16]. Events are generated for 1000 years of ICAL running to reduce MC errors and then normalised to 25 years for our analysis. Events from GENIE are generated for unoscillated neutrino fluxes. Relevant neutrino oscillation probabilities are then included via the re-weighting algorithm [17; 18]. On these raw Genie events we next implement the event reconstruction efficiencies, charge identification efficiency, energy and angle resolutions on muon events [19] and energy resolution on hadron events [20]. The detector efficiencies and resolutions that we use have been obtained by the INO collaboration using the ICAL detector simulator based on the Geant4 simulation code [21]. This gives us events in terms of their reconstructed energy and reconstructed zenith angle. The muon data is then binned in reconstructed muon energy and reconstructed muon zenith angle bins, while the hadron data is binned in reconstructed hadron energy bins only. Therefore, we have a three-pronged binned data and the binning scheme used in this work is shown in Table 1. ### Oscillation parameters The assumed true values for oscillation parameters used for simulating the prospective ICAL data are given in Table 2. These values are compatible with the current best-fit values obtained from global analysis of neutrino oscillation data [22]. Since the value of \(\theta_{23}\) and even its true octant is not yet known with any significance, we show results for three different possible true values \(\theta_{23}\), \(\theta_{23}=42^{\circ}\), \(45^{\circ}\) and \(49^{\circ}\). We also show results for both mass ordering (NO and IO). We used \(\Delta m^{2}_{eff}\) in our analysis, defined as \[\Delta m^{2}_{eff}=\Delta m^{2}_{31}-\bigg{(}\cos^{2}\theta_{12}-\cos\delta_{ CP}\sin\theta_{13}\sin 2\theta_{12}\tan\theta_{23}\bigg{)}\Delta m^{2}_{21}. \tag{7}\] The \(\chi^{2}\) defined below is minimised over \(\theta_{23}\) in the range \(40^{\circ}\) to \(51^{\circ}\) and \(\Delta m^{2}_{eff}\) in the range given in the Table. We have taken \(\delta_{CP}=0^{\circ}\) and kept it fixed in the analysis since the ICAL data is very weakly dependent on \(\delta_{CP}\). \(\Delta m^{2}_{21}\), \(\theta_{13}\) and \(\theta_{12}\) are also kept fixed in our analysis at the their values given in the Table. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Observable & Range & Bin width & No. 
of bins \\ \hline \hline \(E_{\mu}^{\rm obs}\) (GeV) (15 bins) & [0.5,4] & 0.5 & 7 \\ & [4,7] & 1 & 3 \\ & [7,11] & 4 & 1 \\ & [11,12.5] & 1.5 & 1 \\ & [12.5,15] & 2.5 & 1 \\ & [15,25] & 5 & 2 \\ \(\cos\theta_{\mu}^{\rm obs}\) (21 bins) & [-1.0,-0.98] & 0.02 & 1 \\ & [-0.98,-0.43] & 0.05 & 11 \\ & [-0.43,-0.4] & 0.03 & 1 \\ & [-0.40,-0.2] & 0.10 & 2 \\ & [-0.2,1.0] & 0.2 & 6 \\ \(E_{\rm had}^{\rm obs}\) (GeV) (4 bins) & [0,2] & 1 & 2 \\ & [2,4] & 2 & 1 \\ & [4,15] & 11 & 1 \\ \hline \end{tabular} \end{table} Table 1: The binning scheme in the three observables \(E_{\mu}^{\rm obs}\), \(\cos\theta_{\mu}^{\rm obs}\) and \(E_{\rm had}^{\rm obs}\) used in the analysis. ### The \(\chi^{2}\) Formula We generate data for the PREM profile of the Earth. Then we fit the generated data by a theory, where we modify the Earth density profile within the schemes discussed in the previous section. For statistical analysis of the data we define the following test statistic \[\chi^{2}=\chi^{2}_{\mu^{-}}+\chi^{2}_{\mu^{+}}\,, \tag{8}\] where \[\chi^{2}_{\mu^{\pm}}=\sum_{i=1}^{N_{E_{\mu}}}\sum_{j=1}^{N_{\theta_{\mu}}}\sum_{k=1}^{N_{E_{H}}}2\bigg{[}\bigg{(}T^{\pm}_{ij(k)}-D^{\pm}_{ij(k)}\bigg{)}-D^{\pm}_{ij(k)}\ln\bigg{(}\frac{T^{\pm}_{ij(k)}}{D^{\pm}_{ij(k)}}\bigg{)}\bigg{]}+\sum_{l^{\pm}=1}^{5}\xi^{2}_{l^{\pm}}\,, \tag{9}\] where the sum is over muon energy bins (\(i=1\) to \(N_{E_{\mu}}\)), muon zenith angle bins (\(j=1\) to \(N_{\theta_{\mu}}\)) and hadron energy bins (\(k=1\) to \(N_{E_{H}}\)), \(D^{\pm}_{ij(k)}\) is the simulated \(\mu^{\pm}\) data binned in muon energy, muon zenith angle and hadron energy bins, and \(T^{\pm}_{ij(k)}\) is the corresponding systematic uncertainty weighted prediction for a given theoretical model in the same bin, given as \[T^{\pm}_{ij(k)}=T^{0\pm}_{ij(k)}\bigg{(}1+\sum_{l^{\pm}=1}^{5}\pi^{l^{\pm}}_{ij(k)}\xi_{l^{\pm}}\bigg{)}\,, \tag{10}\] where \(T^{0\pm}_{ij(k)}\) gives the number of \(\mu^{\pm}\) events in the theory without systematic errors. We consider five kinds of systematic errors in the muon and antimuon data separately, giving a total of 10 systematic uncertainties (\(\pi^{l^{\pm}}\)) and pulls (\(\xi_{l^{\pm}}\)). The systematic uncertainties considered are a 20% flux normalization error, 10% cross-section error, 5% tilt error, 5% zenith angle error and 5% overall systematic, in both the \(\mu^{+}\) and \(\mu^{-}\) channels. ## IV Results In this section we present our numerical results and quantify how well ICAL is able to resolve the density profile of the Earth. We present the results for the three Cases mentioned before and separately for the mantle and outer core. In each case the data is generated for the standard PREM density profile. This data is then fitted by changing the density for a given Case by a given percentage and the corresponding \(\chi^{2}\) is plotted as a function of this percentage change. ### Case I We start with considering Case I, which is the simplest case where we consider a percentage density variation in a given layer of the Earth, say the mantle or the outer core, without putting any other constraint on the density. #### iv.1.1 Mantle Fig. 7 illustrates the expected sensitivity of ICAL@INO to the density of the mantle in Case I, where no constraint is placed on the Earth's mass. We show results with no systematic errors (upper plots) and with systematic errors (bottom plots). The figures depict the \(\chi^{2}\) as a function of the percentage density variation in the mantle. The results shown are for NO (blue line) and IO (red line) in each panel. In Fig. 
7, the left panels are for \(\theta_{23}=42^{\circ}\), the middle panels are for \(\theta_{23}=45^{\circ}\), and the right panels are for \(\theta_{23}=49^{\circ}\). In Fig. 7, we see that the \(\chi^{2}\) curve is symmetric for small percentage changes in mantle density. However, as the density change increases, the curves become asymmetric with respect to zero. This asymmetry is seen to be largest for \(\theta_{23}=49^{\circ}\). We also notice that the sensitivity for NO is generally higher than for IO because the neutrino data is statistically stronger than the antineutrino data. Finally, a comparison of upper and lower panels reveals that the sensitivity gets slightly worse with systematic uncertainties. In Table 8 we show the percentage uncertainty expected in the density in mantle for Case 1 within \(1\sigma\), \(2\sigma\) and \(3\sigma\) C.L. For example, the first column shows that if \(\theta_{23}=42^{\circ}\), then ICAL@INO can determine the density of mantle with \(-15.5\%\) and \(+16.4\%\) uncertainty at \(2\sigma\) C.L. #### iv.1.2 Outer Core (OC) Fig. 8 depicts the sensitivity of ICAL@INO to the density of the OC for Case I. Similar to Fig. 7, we show our results for both without systematic errors (upper plots) and with systematic errors (bottom plots), for NO and IO as well as for 3 choices of \(\theta_{23}\). The figures show \(\chi^{2}\) as a function of percentage density variation in OC density without any constraint on the Earth's mass. We notice features that rather different as compared to the case with the Mantle. In particular, we see that for the case of OC, the value of \(\chi^{2}\) does not monotonically increase with the percentage change in density. Instead, the value reaches a maximum and then falls, oscillating with the percentage change in density. We see that both the magnitude of \(\chi^{2}\) and the position of its maxima depends on the value of \(\theta_{23}\). The above is true for both NO and IO cases. We also notice that, unlike in the case \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{\(\theta_{23}=42^{\circ}\)} & \multicolumn{2}{c|}{\(\theta_{23}=45^{\circ}\)} & \multicolumn{2}{c|}{\(\theta_{23}=49^{\circ}\)} \\ \hline C.L. & NO & IO & NO & IO & NO & IO \\ \hline \(1\sigma\) & -6.6/7.1 & -8.5/8.4 & -5.4/6.3 & -7.6/8 & -5.7/6.1 & -7.5/6.8 \\ \(2\sigma\) & -15.5/16.4 & -21/22 & -13/15 & -20/20 & -13.3/13.4 & -18.5/17.5 \\ \(3\sigma\) & -26/29.6 & -46/49 & -24/28 & -46/44 & -23.5/22 & -50/34 \\ \hline \end{tabular} \end{table} Table 3: The range of density variation values for which ICAL@INO is sensitivity to the Mantle density at \(1\sigma\), \(2\sigma\) and \(3\sigma\), for Case I. We show these ranges for 3 choices of \(\theta_{23}\) and for both NO and IO. of the mantle, the \(\chi^{2}\) is now very asymmetric for positive and negative variation in density. Indeed, the \(\chi^{2}\) is significantly lower for positive variations as compared to negative variations in OC density, and never even reaches \(\chi^{2}=4\) for any of the \(\theta_{23}\) cases and for both NO and IO. The \(\chi^{2}\) is seen to be lower for IO as compared to NO for all plots. Table 4 gives the range of density variation values for which ICAL@INO is sensitive to the OC density at \(1\sigma\), \(2\sigma\) and \(3\sigma\), for Case I. We show these ranges for 3 choices of \(\theta_{23}\) and for both NO and IO. ### Case II For the Case II which we described in Sec 3.2, we put the constraint that the total mass of the Earth is fixed. 
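Since the defining feature of Case II is this fixed-mass compensation, the following minimal sketch (not the analysis code; the layer volumes and densities are placeholders rather than PREM values) illustrates how the compensating change \(y\) of Eq. (6) follows from a given change \(x\) once the total mass of Eq. (4) is held fixed.

```python
# Illustrative sketch of the fixed-mass compensation of Eq. (6): an x% density
# change in one region is balanced by a common y% change in the other regions,
# keeping the total mass of Eq. (4) unchanged.  Volumes/densities are placeholders.
def compensating_change(x_percent, changed, regions):
    """regions maps a region name to a list of (volume, density) layer pairs."""
    mass = lambda layers: sum(v * rho for v, rho in layers)
    m_changed = mass(regions[changed])
    m_others = sum(mass(layers) for name, layers in regions.items() if name != changed)
    return -x_percent * m_changed / m_others      # Eq. (6)

regions = {                                       # placeholder numbers, arbitrary units
    "inner core": [(1.0, 13.0)],
    "outer core": [(10.0, 11.0)],
    "mantle":     [(50.0, 5.0)],
}
print(compensating_change(10.0, "mantle", regions))      # roughly -20 %
print(compensating_change(10.0, "outer core", regions))  # roughly -4 %
```

With these placeholder numbers a \(+10\%\) change in the mantle must be balanced by roughly a \(-20\%\) change in the core regions, echoing the earlier observation that a small mantle variation induces a large compensating variation in the core; for Case III the denominator would run only over the inner core, outer core and inner mantle.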
This constraint therefore, propagates the effect of change of density in one layer to all other density layers of the Earth. #### iv.2.1 Mantle In Fig. 9 we present results on the sensitivity of ICAL@INO to density of the mantle when Earth total mass constraint is implemented and the mantle density variation is compensated by corresponding change in all other layers of the Earth. We consider a simplistic scenario where we make the compensation by taking equal percentage changes in rest of the layers. As before, we show results for without systematic uncertainties and with systematic uncertainties, for NO and IO, as well as for three choices of \(\theta_{23}\). In our analysis, the constraint on earth mass alters the shape of the \(\chi^{2}\) plots, as shown in Fig. 9. For mantle, we lose symmetry in IO plots and a little asymmetry also comes in NO plots. The \(\chi^{2}\) values shows marked improvement as compared to Case I. Table 5 gives the \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{\(\theta_{23}=42^{\circ}\)} & \multicolumn{2}{c|}{\(\theta_{23}=45^{\circ}\)} & \multicolumn{2}{c|}{\(\theta_{23}=49^{\circ}\)} \\ \hline C.L. & NO & IO & NO & IO & NO & IO \\ \hline \(1\sigma\) & -8.1/8.7 & -11.6/10.4 & -7.5/9.1 & -9.6/9.4 & -8.3/7.4 & -9.9/7.4 \\ \(2\sigma\) & -29/ & -39/ & -26/ & -34/ & -23/ & -33/ \\ \(3\sigma\) & -44/ & -/ & -39/ & -47/ & -37/ & -46/ \\ \hline \end{tabular} \end{table} Table 4: The range of density variation values for which ICAL@INO is sensitive to the OC density at \(1\sigma\), \(2\sigma\) and \(3\sigma\), for Case I. We show these ranges for 3 choices of \(\theta_{23}\) and for both NO and IO. Figure 7: The \(\chi^{2}\) as a function of percentage change in density in the mantle for Case I. Red lines are for NO and blue lines are for IO. Panel (a) is for \(\theta_{23}=42^{\circ}\), (b) for \(\theta_{23}=45^{\circ}\) and (c) for \(\theta_{23}=49^{\circ}\). Upper panels are for no systematic uncertainties in the analysis while the lower panels show the \(\chi^{2}\) including systematic uncertainties. range of density variation values for which ICAL@INO is sensitive to the mantle density at \(1\sigma\), \(2\sigma\) and \(3\sigma\), for Case II. We show these ranges for 3 choices of \(\theta_{23}\) and for both NO and IO. We can see the improvement in the expected sensitivity of ICAL@INO in comparison to Case I. #### iv.2.2 Oc In Fig. 10 we present results on the sensitivity of ICAL@INO to OC density when Earth total mass constraint is implemented and OC density variation is compensated with corresponding change in all other layers of earth with equal percentage. Upper panels of Fig. 10 are without systematic uncertainties while the lower panels are with systematic uncertainties in the \(\chi^{2}\) analysis. Shown as curves for both NO and IO for three choices of \(\theta_{23}\). Examination of the plots in Fig. 10 shows that the \(\chi^{2}\) increases significantly for both positive and negative density changes of density in OC. The oscillatory part of the positive side has turned into a steady increasing curve. The reason for this is that increasing OC density decreases mantle density, and we have seen that decreasing mantle density gives a sharp increase in the \(\chi^{2}\), so adding these effects gives us a continued increase for positive OC density change. A decrease in OC density also results in a sharp and steady increase in \(\chi^{2}\). 
Because of the contribution from other layers, particularly the mantle, overall \(\chi^{2}\) values are higher than in the previous case. Table 6 gives the range of density variation values for which ICAL@INO is sensitive to the outer core density at \(1\sigma\), \(2\sigma\) and \(3\sigma\), for Case II. We show these ranges for 3 choices of \(\theta_{23}\) and for both NO and IO. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{\(\theta_{23}=42^{\circ}\)} & \multicolumn{2}{c|}{\(\theta_{23}=45^{\circ}\)} & \multicolumn{2}{c|}{\(\theta_{23}=49^{\circ}\)} \\ \hline \hline \(\mathrm{Matle}\) & NO & IO & NO & IO & NO & IO \\ \hline \(1\sigma\) & -47.7/3.1 & -6.5/4.4 & -3.9/3.8 & -57.4/4.4 & -3.7/3.9 & -4.25/4.2 \\ \(2\sigma\) & -11.9/8.7 & -19.2/13.9 & -10.9/8.8 & -17/12.5 & -9.9/8 & -15/11.4 \\ \(3\sigma\) & -25.5/15.5 & -43/21.7 & -24.2/14.6 & -43/20.6 & -20.5/13.5 & -39.6/19.7 \\ \hline \end{tabular} \end{table} Table 5: The range of density variation values for which ICAL@INO is sensitivity to the Mantle density at \(1\sigma\), \(2\sigma\) and \(3\sigma\), for Case II. We show these ranges for 3 choices of \(\theta_{23}\) and for both NO and IO. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{\(\theta_{23}=42^{\circ}\)} & \multicolumn{2}{c|}{\(\theta_{23}=45^{\circ}\)} & \multicolumn{2}{c|}{\(\theta_{23}=49^{\circ}\)} \\ \hline \hline C.L. & NO & IO & NO & IO & NO & IO \\ \hline \(1\sigma\) & -7.2/6.7 & -8.5/8.5 & -6.7/6.4 & -7.4/7.4 & -6.4/5.9 & -6.9/5.9 \\ \(2\sigma\) & -16.8/19.6 & -24/34 & -15.8/17.7 & -22/29 & -14.7/16.5 & -20.1/24.5 \\ \(3\sigma\) & -28/46 & -37/ & -26/41 & -35/ & -24.5/37.4 & -34.5/ \\ \hline \end{tabular} \end{table} Table 6: The range of density variation values for which ICAL@INO is sensitivity to the OC density at \(1\sigma\), \(2\sigma\) and \(3\sigma\), for Case II. We show these ranges for 3 choices of \(\theta_{23}\) and for both NO and IO. Figure 8: Same as Fig. 7 but for percentage density change in OC. ### Case III In this case we take into account the Earth mass constraint by compensating for the density change in any given layer by suitable changes to layers only in the "inner part" of the Earth. In particular, we change the density of only the inner core, outer core and inner mantle corresponding to \(d>2200\) km. #### iv.3.1 Mantle We start by studying the expected sensitivity of ICAL@INO to the change in density in the mantle region for this case. The results are shown in Fig. 11 and Table 7. We notice that the \(\chi^{2}\) expected for this case is considerably lower than that for Case II but mildly higher than for Case I. Comparing the results in the lower and upper panels of the figure we see that improvement in systematics is not expected to bring any drastic improvement to the sensitivity. #### iv.3.2 Oc The expected sensitivity of ICAL@INO to the density of the OC for Case III is presented in Fig. 12 and IX. A comparison of Figs. 8, 10 and 12 shows that compensation due to Earth mass constraint has much less effect in Case III as compared to Case II. This is because when we \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{\(\theta_{23}=42^{\circ}\)} & \multicolumn{2}{c|}{\(\theta_{23}=45^{\circ}\)} & \multicolumn{2}{c|}{\(\theta_{23}=49^{\circ}\)} \\ \hline C.L. 
& NO & IO & NO & IO & NO & IO \\ \hline \(1\sigma\) & -8.8/7.7 & -10.5/10.8 & -7.8/7.1 & -9.9/9 & -8.7/6.8 & -9.4/7.8 \\ \(2\sigma\) & -19.7/ & -25.5/ & -18/ & -24/ & -18.7/ & -24.5/ \\ \(3\sigma\) & -31.6/ & -/- & -30/ & -59/ & -29.6/ & -58/ \\ \hline \end{tabular} \end{table} Table 8: The range of density variation values for which ICAL@INO is sensitivity to the Mantle density at \(1\sigma\), \(2\sigma\) and \(3\sigma\), for Case I. We show these ranges for 3 choices of \(\theta_{23}\) and for both NO and IO. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{\(\theta_{23}=42^{\circ}\)} & \multicolumn{2}{c|}{\(\theta_{23}=45^{\circ}\)} & \multicolumn{2}{c|}{\(\theta_{23}=49^{\circ}\)} \\ \hline Mantle & NO & IO & NO & IO & NO & IO \\ \hline \(1\sigma\) & -27/30 & -41/34 & -26/30 & -34/33 & -24.0/30 & -32/33 \\ \hline \end{tabular} \end{table} Table 7: The range of density variation values for which ICAL@INO is sensitive to the Mantle density at \(1\sigma\), for Case III. We show these ranges for 3 choices of \(\theta_{23}\) and for both NO and IO. Figure 9: The \(\chi^{2}\) as a function of percentage change in density in the mantle for Case II. Red lines are for NO and blue lines are for IO. Panel (a) is for \(\theta_{23}=42^{\circ}\) (b) for \(\theta_{23}=45^{\circ}\) and (c) for \(\theta_{23}=49^{\circ}\). Upper panels are for no systematic uncertainties in the analysis while the lower panels show the \(\chi^{2}\) including systematic uncertainties. change the density in the OC, compensation to preserve Earth mass happens mainly in IC for Case III, while for Case II we have compensation from both IC as well as mantle. As stated before, density changes in IC do not change the probabilities much and therefore the resulting \(\chi^{2}\) is also lower. ## V Conclusion Earth tomography is an important field in science. While the best estimates of Earth's density profile comes from seismology, it is pertinent to check if complementary information and/or cross-checks can be informed elsewhere. Neutrino experiments offer a promising complementary approach to tomography. Neutrinos traveling through Earth can get affected by the ambient Earth matter in two ways. Very high energy neutrinos can undergo substantial inelastic scattering via weak interactions with the ambient particles in Earth leading to an attenuation of the neutrino flux. Neutrino telescopes can use this as a signal for determining the density through which the neutrinos travel before reaching the detector. The second way in which neutrinos can probe Earth matter density is via matter effects in their flavor oscillations. This method can be effectively used in atmospheric neutrino experiments, since atmospheric neutrinos come from all zenith angles, crossing the Earth from all directions, and also experience large matter effects. In this work we quantified, for the first time, the potential of the ICAL@INO atmospheric neutrino experiment towards Earth tomography. In this work we used the PREM profile as the reference density structure for Earth matter density. This essentially means that we simulated the atmospheric neutrino "data" at ICAL@INO for the PREM profile. Values of oscillation parameters compatible with the current best-fit solutions were taken. Data was generated for 25 years of running of ICAL. We then statistically analysed this data with a theory where the density was allowed to be different from the PREM profile by a given percentile. 
The corresponding \(\chi^{2}\) obtained was plotted as a function \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{\(\theta_{23}=42^{\circ}\)} & \multicolumn{2}{c|}{\(\theta_{23}=45^{\circ}\)} & \multicolumn{2}{c|}{\(\theta_{23}=49^{\circ}\)} \\ \hline OC & NO & IO & NO & IO & NO & IO \\ \hline \(1\sigma\) & -7.5/8.5 & -10/ & -7.6/7.8 & -9.8/ & -7.7/7.3 & -9.5/9.0 \\ \(2\sigma\) & -19.7/ & -25.8/ & -19.6/ & -26/ & -18.6/ & -25.3/ \\ \(3\sigma\) & -33.7/ & / & -32.6/ & / & -32/ & 55.6/ \\ \hline \end{tabular} \end{table} Table 9: The range of density variation values for which ICAL is sensitivity to the OC density at \(1\sigma\), \(2\sigma\) and \(3\sigma\), for Case III. We show these ranges for 3 choices of \(\theta_{23}\). Figure 10: All panels show plots of \(\chi 2\) as a function of percentage change in OC density for Case II. Panel (a) is for \(\theta_{23}=42^{\circ}\), (b) for \(\theta_{23}=45^{\circ}\), and (c) for \(\theta_{23}=49^{\circ}\), both without (upper plots) and with (bottom plots) systematic uncertainties in the \(\chi^{2}\) analysis. Blue lines are for NO and red lines are for IO. of the percentile change in density. The change in density was done for either the mantle or the outer core. We checked explicitly that ICAL@INO was not sensitive to density changes in the inner core, and hence this was not presented. We performed this study for three different cases. We started with showing the effect of density on the survival probability for each of these three cases and then went on to show the expected sensitivity of ICAL@INO to density measurements. Case I corresponded to the situation when the density in a given layer was changed without any other constraint on the analysis. This case helped us understand how sensitive the experiment will be to change in any given layer, independent of constraints coming from density changes in other layers. We found that ICAL@INO can be sensitive to density changes within \(-5.4\%/+6.3\%\) in the mantle at \(1\sigma\) for \(\theta_{23}=45^{\circ}\) and NO. For the outer core the corresponding values are \(-8.3\%/+7.4\%\) at \(1\sigma\) for \(\theta_{23}=45^{\circ}\) and NO. The sensitivity was seen to depend on the value of \(\theta_{23}\) as well as the mass ordering. Case II corresponded to the situation when the density in a given layer was changed with the constraint that the total mass of the Earth is constant. This implies that when the density, say in the mantle was changed by \(x\%\), then one needs a change in density in all the other layers of the Earth by \(y\%\), such that the mass of the Earth would still be the same. The sensitivity of ICAL@INO to density in both the mantle as well as outer core improved in this case as compared to Case I since in this case a given change in density in any layer was being accompanied by corresponding density changes in the the other layers in order to compensate for the constant total Earth mass. We found that at \(1\sigma\) ICAL@INO can measure the mantle density to within \(-3.9\%/+3.8\%\) for \(\theta_{23}=45^{\circ}\) and NO. For the outer core the corresponding values are \(-6.7\%/+6.4\%\) at \(1\sigma\) for \(\theta_{23}=45^{\circ}\) and NO. Finally, we considered a softened version of Earth mass compensation in Case III, where we allowed compensatory density changes only in the inner regions of the Earth. 
In particular, density changes were allowed only in layers for which \(d>2200\) km, where \(d\) is the radial depth of the layer from the surface of the Earth. For this case the sensitivity of ICAL@INO was seen to be intermediate between Case I and Case II. In particular, we showed that the density in the mantle could be measured within \(-7.8\%/+7.1\%\) at \(1\sigma\) for \(\theta_{23}=45^{\circ}\) and NO. The corresponding expected sensitivity for the outer core was shown to be \(-x.x\%/+x.x\%\) at \(1\sigma\) for \(\theta_{23}=45^{\circ}\) and NO. The reason for the lower sensitivity in this case as compared to Case II was discussed. For all the cases we studied the effect of systematic uncertainties on the expected sensitivity. We also considered the condition for hydrostatic equilibrium. Figure 11: All three panels show the \(\chi^{2}\) as a function of the percentage change in the density of the mantle for Case III (Earth mass constraint imposed through the inner layers only). Panel (a) is for \(\theta_{23}=42^{\circ}\), (b) for \(\theta_{23}=45^{\circ}\) and (c) for \(\theta_{23}=49^{\circ}\), without (upper plots) and with (bottom plots) systematic uncertainties in the \(\chi^{2}\) analysis. The blue line is for NO and the red line for IO, with the MO assumed known. With 25 years of data taking, ICAL@INO would be competitive with the large neutrino telescopes IceCube-PINGU and ORCA. In Table 1 we present the comparative \(1\sigma\) expected sensitivity from ICAL@INO (this work), IceCube-PINGU [7] and ORCA [9]. We can see that ICAL@INO can be competitive despite its smaller size. In particular, we can see that while the expected sensitivity of both goes down significantly for IO, the sensitivity of ICAL@INO is similar for both mass orderings, with NO being only slightly better. The expected sensitivity for ORCA also seems to be comparable for both mass orderings. This feature is true for both the mantle and the outer core. Note that for the outer core, PINGU has rather poor sensitivity for the IO case. However, the expected sensitivities of ICAL@INO (and ORCA) are good even for this case. The main reason why ICAL@INO can perform at a level comparable to PINGU and ORCA, especially for IO, is its extremely good charge identification capability. This gives ICAL@INO very good sensitivity to Earth matter effects for both mass orderings and hence the good expected sensitivity.
2309.09635
Temporal Analysis of Dark Patterns: A Case Study of a User's Odyssey to Conquer Prime Membership Cancellation through the "Iliad Flow"
Dark patterns are ubiquitous in digital systems, impacting users throughout their journeys on many popular apps and websites. While substantial efforts from the research community in the last five years have led to consolidated taxonomies of dark patterns, including an emerging ontology, most applications of these descriptors have been focused on analysis of static images or as isolated pattern types. In this paper, we present a case study of Amazon Prime's "Iliad Flow" to illustrate the interplay of dark patterns across a user journey, grounded in insights from a US Federal Trade Commission complaint against the company. We use this case study to lay the groundwork for a methodology of Temporal Analysis of Dark Patterns (TADP), including considerations for characterization of individual dark patterns across a user journey, combinatorial effects of multiple dark patterns types, and implications for expert detection and automated detection.
Colin M. Gray, Thomas Mildner, Nataliia Bielova
2023-09-18T10:12:52Z
http://arxiv.org/abs/2309.09635v1
# Temporal Analysis of Dark Patterns: A Case Study of a User's Odyssey to Conquer Prime Membership Cancellation through the "Iliad Flow" ###### Abstract. Dark patterns are ubiquitous in digital systems, impacting users throughout their journeys on many popular apps and websites. While substantial efforts from the research community in the last five years have led to consolidated taxonomies of dark patterns, including an emerging ontology, most applications of these descriptors have been focused on analysis of static images or as isolated pattern types. In this short paper, we present a case study of Amazon Prime's "Iliad Flow" to illustrate the interplay of dark patterns across a user journey, grounded in insights from a US Federal Trade Commission complaint against the company. We use this case study to lay the groundwork for a methodology of Temporal Analysis of Dark Patterns (TADP), including considerations for characterization of individual dark patterns across a user journey, multiplicative effects of multiple dark pattern types, and implications for expert detection and automated detection. dark patterns, temporal analysis, detection, methodology 
[MISSING_PAGE_POST] individual dark pattern types into high-, meso-, and low-level patterns to allow easier access and adaption within and outside this community. In the past two years, regulators and policymakers have taken action to address issues of technology manipulation and deceptive design practices, resulting in legislative frameworks and regulatory sanctions that aim to protect users from dark patterns' harms. Legislation such as the EU's Digital Service Act (DSA) [6] and California's Privacy Rights Act (CPRA) [2], alongside guidance from governmental bodies such as the Organization for Economic Co-operation and Development (OECD) [30], UK Consumer and Markets Authority (CMA) [4], and the US Federal Trade Commission (FTC) [1] have supported the development of regulatory frameworks related to dark patterns, with the goal of bringing more transparency into digital environments and protecting users' autonomy to make informed decisions. Currently, lawsuits and other sanctions are leveraging these new regulations and, thus, demonstrate the effectiveness of policies where HCI and law work side-by-side to protect end users. However, existing scholarship often focuses on static dark patterns, driven by sharing screenshots as artifacts as evidence of dark patterns latent in the UI [18]. While contemporary dark patterns scholars often acknowledge aspects of temporal complexity, including feedforward, repetition of actions such as nagging, or actions that are part of a larger sequence, no expert evaluation or automated methods have been proposed that comprehensively support the inspection of an entire user journey. This lack of support for specific methods to support the temporal experience is particularly odd, given that Brignull (the originator of the term "dark patterns" and founder of darkpatterns.org) shared an annotated journey map including the kinds of details mentioned in 2016 for a brief time (Figure 1). As Brignull notes on this archived page: _"A journey map is a simple diagram to illustrate users go through in engaging with a webpage, whether it is an online experience, a product, retail, service or any combination. Usually when there are many touchpoints it means the experience is more complex. 
In this case, we located the Dark Patterns as touchpoints--ideally the map should be clean."_ (From darkpatterns.org, 2016) We use Brignull's diagram as a source of inspiration and starting point to propose components of a disciplined and rigorous methodology to characterize dark patterns experienced over time. This kind of temporal complexity has been primarily addressed in the dark patterns literature at present through application audits conducted with specific sets Figure 1: A journey map taken from a version of darkpatterns.org in 2016. This diagram demonstrates users’ interactions when engaging with a website, including experiences of multiple dark patterns. of user goals in mind (Han et al., 2018; Gray et al., 2018), while other scholarship has focused on automated or semi-automated detection across elements of the user journey (Han et al., 2018; Gray et al., 2018; Gray et al., 2018). Recent work from Mildner et al. (2018; Gray et al., 2018), echoing prior work from Gray et al. (2018), Luguri and Strahilevitz (2018), and guidance from the OECD (Mollner et al., 2018), suggests that not only do dark patterns often occur together in single moments of a user journey, but they can also produce multiplicative or amplified effects both in isolation and across a user journey. We advance this line of research in this short paper, building a foundation for a method of Temporal Analysis of Dark Patterns (TADP) and consider attributes of this method through a case study from the legal literature. To that end, we make two contributions to the HCI and dark patterns literature. First, we illustrate aspects of temporal complexity that enhance the impact of dark patterns on user behavior through a case study of the Amazon Prime "Iliad Flow," identifying the kinds of user interactions over time that should be considered and characterized by researchers, regulators, and legal scholars. Second, we assess components of a TADP methodology that should be considered when studying the effects dark patterns have on users and identify how these components might be taken up through expert evaluation, automated detection, and human-in-the-loop detection. ## 2. Problematizing Dark Patterns Experienced Over Time: A Case Study of Amazon Prime's "Iliad Flow" A legal complaint filed by the US Federal Trade Commission (FTC) in June 2023 against Amazon is a recent example of enforcement action that includes detailed references to dark patterns (Bradner et al., 2018). This case follows multiple other cases (Gray et al., 2018; Gray et al., 2018; Gray et al., 2018) in the past two years by the FTC and other government bodies that have used the presence of dark patterns as a central form of evidence that user autonomy was not respected. We present the Amazon Prime cancellation process as an explanatory case study (Mollner et al., 2018) to identify how dark patterns are inscribed into the user experience, how these dark patterns relate to each other on specific screens and over time, and what elements of the overall user experience would be useful for scholars to focus on when analyzing other experiences for the presence of dark patterns. The center of a recent enforcement action by the FTC is Amazon Prime's cancellation process, which gained notoriety for being obstructive to users and led to the adoption of a two-click cancellation option in July 2022--but only for EU consumers (Bradner et al., 2018). 
A Norwegian Consumer Council report from 2021 demonstrated how dark patterns were used in Amazon's cancellation process to frustrate consumers, leaving them "[...] faced with a large number of hurdles, including complicated navigation menus, skewed wording, confusing choices, and repeated nudging. Throughout the process, Amazon manipulates users through wording and graphic design, making the process needlessly difficult and frustrating to understand." (Gray et al., 2018). In the FTC complaint against Amazon, these same allegations were exposed in further detail, building on evidence that showed how Amazon's design teams were complicit in making this process more difficult than it needed to be: * [...] the primary purpose of the Prime cancellation process was not to enable subscribers to cancel, but rather to thwart them. Fittingly, Amazon named that process "Iliad," which refers to Homer's epic about the long, arduous Trojan War. Amazon designed the Iliad cancellation process ("Iliad Flow") to be labyrinthine, and Amazon and its leadership [...] slowed or rejected user experience changes that would have made Iliad simpler for consumers because those changes adversely affected Amazon's bottom line. (Bradner et al., 2018, p. 3) Notably, legal frameworks such as those used by the FTC or other regulatory bodies rarely require proof of intent in order to produce sanctions. However, in this case, not only were elements of the user experience clearly obstructive, but Amazon's own naming of the user flow indicated their goal of making the process as difficult as possible. The complaint features an exhaustive description of the user journey, supported by screenshots. Several different aspects of the interactive system are included in the complaint, including the process to subscribe to Amazon Prime, different ways to enter the "Iliad Flow" to cancel Amazon Prime, and the component interactions required to cancel the membership. ### Identifying Dark Patterns in the "Iliad Flow" The FTC complaint included explicit analysis that demonstrated and named the presence of multiple dark patterns across the "Iliad Flow". To characterize these dark patterns in more detail, we leveraged Mildner et al.'s [(29)] approach to identify dark patterns in interfaces, using the software Atlas.ti [(16)] to analyze the complaint through open coding. We used a deductive codebook containing dark patterns from Gray et al.'s [(19)] ontology and Mildner et al.'s [(29)] work, thereby analyzing both the text and visual elements of the complaint using a qualitative content analysis approach. One author performed the initial coding work, leveraging dark patterns noted in the complaint and previous sanctions alongside their own expertise from previous studies on dark patterns. A second author who also had prior experience conducting studies on dark patterns confirmed the application of codes. After the document was fully coded, we connected the different interface stages in the form of a journey map including co-occurring and amplification of dark patterns on the one hand and their sequential dependency on the other. Amazon's "Iliad Flow" describes the user journey leading to the option for cancelling a Prime membership. In our analysis, we not only focused on the "Iliad Flow" but also considered membership creation and the required steps to cancel the service. 
The cancellation process itself includes three pages, however, there are multiple ways to enter the "Iliad Flow" and even more interactions that terminate the flow without successfully canceling the membership. Figure 2 shows the user journey described in the complaint including membership creation, finding the "Iliad Flow," and three pages users have to successfully navigate to find the option to cancel the membership. _Becoming an Amazon Prime Member._ Although not directly a part of the "Iliad Flow," it is noteworthy to demonstrate the ease through which Amazon recruits new members to its Prime program. Options to subscribe Amazon Prime are presented continuously through Amazon's services on both mobile and desktop modalities, including Amazon Music, Amazon Prime Video, and anytime an item is being purchased from Amazon--the service seemingly utilizes every opportunity to offer its Prime membership to users. In doing so, Amazon uses multiple dark patterns that manipulate users' understanding of the choice architecture. Although both Amazon Music and Video offers include stand-alone and cheaper alternatives, the service provider exploits _Interface Interference_ (a high-level dark pattern) to promote its Prime membership as a superior subscription, including all of Amazon's premium features but at a higher cost. Consequently, Figure 2. This flowchart demonstrates the user journeys for becoming an Amazon Prime member, finding the β€œIliad Flow”, and canceling a subscription. users are being tricked into more expensive subscriptions through the _Bait and Switch_ dark pattern and deploying the _Roach Motel_ pattern through the existence of the "Iliad Flow". _Entering the "Iliad Flow"_. While it is relatively easy to subscribe to Amazon Prime, the complaint describes a complex and labyrinthine procedure to cancel it. In total, Amazon offers users three possible paths to enter the "Iliad Flow". First, customers can use a search function on the website. However, the complaint describes how users have to be highly precise in their choosing of words to be presented with a link to enter the cancellation process. Alternatives refer customers to other settings or help services but do not present quick access. Second, customers can reach out to customer service, which itself requires customers to navigate through multiple options before being able to actually enter a query. Third, customers can enter the "Iliad Flow" by first navigating to Amazon's "Account & Lists," finding the "Manage Membership" option, and selecting the "End Membership" option. Contrary to its name, this option will not end a customer's membership but rather forward them to begin the "Iliad Flow." Together, these three options contain multiple instances of _Interface Interference_ and _Obstruction_, for instance, in the form of _Labyrinthine Navigation_ or _Misdirection_. The complaint suggests that customers have to take a minimum of two actions to even enter the "Iliad Flow." _Navigating the "Iliad Flow"_. Once a customer finds themselves in the cancellation flow process, they have to successfully navigate three pages before being able to end their Amazon Prime membership. As Figure 3 demonstrates, the pages repeatedly feature alternative options that, if clicked, remove the user from the "Iliad Flow", exiting the process. Thus, customers have to begin to find and enter the "Iliad Flow" again if they consider one of the alternatives presented on each screen. 
Each screen of the "Iliad Flow" includes a variety of dark patterns in a labyrinthine interface path deceiving customers in their attempt to cancel their membership. Moreover, the interface emotionally manipulates customers by reminding them about personalized features or contemporary offers that become unavailable once they terminate their membership. Only after customers reach the third page of the "Iliad Flow" are they able to end their subscription immediately. However, Amazon still aims to keep users connected to their service by offering the alternatives to pause a membership or terminate it at a different time. Collectively, customers have to navigate through a plethora of dark patterns in various stages before being able to end their membership. ### Characterizing the Complexity of the "Iliad Flow" In this section, we describe the findings of our temporal analysis based on Amazon's "Iliad Flow," with a summary of our findings shown in Figure 3. For the sake of brevity, we simplified the "Iliad Flow" in terms of displayed dark patterns and overall complexity to allow a high-level view of both the strategies deployed on individual screens and discrete UI elements and across the entire flow experience. In its original form, the "Iliad Flow" affords users multiple scrolling actions, as options to proceed were otherwise outside the visible frame. Moreover, additional visual and text in the original experience added further _Social Engineering_ tactics. #### 2.2.1. Instances of Dark Patterns Our temporal analysis of the "Iliad Flow" revealed a plethora of dark patterns customers encounter throughout their attempt to cancel their Amazon Prime membership. In their complaint, the FTC named seven dark patterns specifically: (1) _Forced Action_; (2) _Interface Interference_; (3) _Obstruction_; (4) _Misdirection_; (5) _Sneaking_; and (6) _Confirmshaming_. While our analysis confirms instances of these dark patterns, we extend the FTC's findings by also identifying multiple instances of 22 dark pattern types, including high-level, meso-level, and low-level mapped to Gray et al.'s (Gray et al., 2018) ontology. Notably, the "Iliad Flow" itself comprises three linked screens on which we counted 70 instances of dark patterns across 22 types. Most prominently and at the highest level of abstract, we identified _Obstruction_ (\(n=25\)) and _Interface Interference_ (\(n=14\)) dark patterns. Other lower-level types that were frequently found in the screens included _Labyrinthine Navigation_ (\(n=10\)), _Exploiting Errors_ (\(n=10\)), and _Redirective Condition_ (\(n=6\)). #### 2.2.2. Dark Pattern Co-Occurrence & Amplification Aside from the variety of dark patterns deployed in the "Iliad Flow," our analysis further shows how multiple dark patterns often occur together. As shown in Figure 3, high-level patterns of _Sneaking_ and _Obstruction_ pervaded the entire interaction sequence, supported by _Social Engineering_ in Figure 3. A summary of our temporal analysis of dark patterns in Amazon’s β€œIliad Flow.” For brevity, we simplified the interface complexity but maintained key options including three screens users have to navigate to be able to cancel their membership. Vertically underneath each page, we summarized co-occurring and amplifying dark patterns. Horizontally, we follow the sequential impacts and dependency of dark patterns. 
Additionally, these high-level patterns were supported--and even amplified--by numerous lower-level patterns that drew on the higher-level parent types. For instance, all three screens used manipulation of the visual hierarchy (a meso-level pattern) to confuse users about the interactive differences and feedforward between the three options, making the options to keep the membership, continue to cancel, or be reminded later appear in parallel. In parallel, _Social Engineering_ strategies such as personalization were used to amplify the interface interference effects by providing specific amounts of media the user might lose access to or by providing options to choose a different payment plan that would appear more affordable.

Table 1: All 22 dark patterns identified in the analysis of the "Iliad Flow", the number of instances (N) in which each dark pattern was identified, and their definitions.

| Code | Dark pattern | N | Definition |
|---|---|---|---|
| 1 | Aesthetic Manipulation | 7 | "Any manipulation of the user interface that deals more directly with form than function. This includes design choices that focus the user's attention on one thing to distract them from or convince them of something else." |
| 2 | Confirmshaming [10] | 7 | "Guilting users into opting into something. The option to decline is worded to shame the user into compliance." |
| 3 | Confusion [14] | 3 | "Asking the user questions or providing information that they do not understand. Asking a novice user if they would like to change their default browser, use of double, triple, or quadruple negatives." |
| 4 | Decision Uncertainty | 1 | "This dark pattern confuses users by diminishing their ability to assess situations, leaving them clueless as to what is expected of them or what options are available." |
| 5 | Exploiting Errors [14] | 10 | "Taking advantage of user errors to facilitate the interface designer's goals. E.g. mistyped URL brings up advertisement instead of assistance." |
| 6 | Forced Action [21] | 5 | "This strategy describes dark patterns that require the user to perform a certain action to access (or continue to access) certain functionality." |
| 7 | Hard to cancel [26] | 3 | "The pattern does not disclose important information upfront to the user that canceling a subscription or membership could not be completed in the same manner they signed up with." |
| 8 | Hidden Costs [10] | 2 | "You get to the last step of the checkout process, only to discover some unexpected charges have appeared." |
| 9 | Hidden Information | 5 | "This dark pattern describes options or actions relevant to the user but not made immediately or readily accessible. It may manifest as options or content hidden in fine print, discolored text, or a product's terms and conditions statement." |
| 10 | Interface Interference [18] | 14 | "This strategy describes dark patterns that manipulate the user interface, privileging certain actions over others, thereby confusing the user or limiting discoverability of important action possibilities." |
| 11 | Labyrinthine Navigation [29] | 10 | "This dark pattern describes nested interfaces that are easy to get lost in, disabling users from choosing preferred settings. This pattern is often seen in social media settings menus." |
| 12 | Manipulate Navigation [14] | 2 | "Information architectures and navigation mechanisms that guide the user towards the interface designer's goal. E.g. making the free version of an application far more difficult to find than the commercial version on a consumer firewall vendor's website." |
| 13 | Misdirection [10] | 3 | "The design purposefully focuses your attention on one thing in order to distract your attention from another." |
| 14 | Nagging [18] | 2 | "This strategy describes dark patterns that redirect expected functionality, persisting beyond one or more interactions." |
| 15 | Obfuscation [14] | 4 | "Hiding desired information and interface elements. E.g. reducing contrast of close/stop buttons on video advertisements." |
| 16 | Obstruction [21] | 25 | "This strategy describes dark patterns with intentions of making a process more difficult than it needs to be, with the intent of dissuading certain action(s)." |
| 17 | Redirective Condition [29] | 6 | "Dark patterns of this type contain choice limitations that force users to overcome unnecessary obstacles before being able to achieve their goals." |
| 18 | Roach Motel [10] | 4 | "You get into a situation very easily, but getting out is difficult (occurs in subscriptions)." |
| 19 | Sneaking [21] | 2 | "Dark patterns following this strategy attempt to hide, disguise, or delay the divulging of information that is relevant to the user." |
| 20 | Social Engineering [21] | 8 | "Social Engineering is a strategy which presents options or information that causes a user to be more likely to perform a specific action based on their individual and/or social cognitive biases, thereby leveraging a user's desire to follow expected or imposed social norms." |
| 21 | Toying With Emotions [18] | 2 | "[T]his dark pattern includes any use of language, style, color, or other similar elements to evoke an emotion in order to persuade the user into a particular action." |
| 22 | Visual Interference [26] | 7 | "This dark pattern uses style and visual presentation to influence users into making certain choices over others." |
| | **Total** | **70** | |

Notably, while some patterns are easily traceable to one or more specific UI elements, the interactions among the different types of dark patterns are more nuanced. For instance, the first screen layers choice architecture manipulation and emotional manipulation (_Interface Interference_) and urgency (_Social Engineering_) in a direct way, leaving the roach motel (_Sneaking_) to be realized across the entire user journey. Similarly, the use of labyrinthine navigation (_Obstruction_) applies to the entire user journey as opposed to one discrete UI element or screen.

#### 2.2.3. Sequential Impact & Dependency of Dark Patterns

While co-occurrence between dark pattern types provides insights into the interplay between specific forms of manipulation, deception, and coercion, these types also benefit from each other on a sequential level. To understand their intertwined effects, we considered the dark patterns across the interactions and how they helped maintain deceptive and manipulative pressure on customers.
As a customer sets out to end their membership, they constantly face distractions and _Sneaking_ strategies to keep them from proceeding. As Figure 3 depicts, each screen contains multiple options deflecting from the goal to end a membership. The screens are visually designed to appear engaging through the _Interface Interference_ and _Social Engineering_ high-level dark patterns--being both highly visible in their focus and emotionally pressuring. Importantly, engagement with any of the options other than the undifferentiated buttons indicated in the figure instantly exits the customer from the "Iliad Flow" and requires them to begin again. Thus, the combination and sequencing of dark patterns deployed ensures that most consumers will fail at their goal of cancelling the service--particularly the first time they navigate the gauntlet of dark patterns. ## 3. Foundations for a Temporal Analysis of Dark Patterns (TADP) Methodology Building on the case study we have presented, in this section we outline key characteristics that a Temporal Analysis of Dark Patterns methodology should consider, along with how these characteristics might be supported by expert evaluation, automated analysis, and human-in-the-loop automated analysis. 1. **Identify which dark patterns are being used, in what combination or sequence, and of what type(s).** This stage requires the use of a standardized source of pattern types and definitions, such as the emergent ontology of dark patterns by Gray and colleagues (Gray and colleagues, 2017). Identification of dark patterns should include high, meso, and low-level characterization where possible, although novel dark patterns might only be characterized by high and meso-level, with a low-level characterization leading to the definition of a new potential pattern type. This stage of analysis takes into account: readable text; layout; relative size and positioning of UI elements; use of color, typography, or text decoration; feedforward or other forms of feedback to the user; task flows or other relations between UI elements and screens; and the context or medium of use. 2. **Identify which UI element(s) are implicated in the use of dark patterns, and how these concentrations of elements within the interface might lead to the user's experience of dark patterns.** This stage requires connections between the presence of a dark pattern and its manifestation in UI or system. This stage of analysis takes into account the relationship between: one or more dark patterns to one or more UI elements; one or more dark patterns to the lack of visible UI elements; or one or more dark patterns to transitions between screens or across the entire user journey. Different levels of dark pattern characterization may allow characterization of high- and meso-level patterns on the screen or journey level that are then inscribed into one or more specific UI elements. 3. **Describe interactions between dark patterns, co-occurrence of dark patterns types, and/or potential amplification effects.** This stage requires knowledge of which dark pattern types appear and in which combination, both on a specific screen and over time. This stage of analysis takes into account the: combinations of dark patterns that appear in discrete moments of the user journey and over time; the co-occurrence of patterns with shared or differing high- or meso-level parents; the strategies or cognitive biases the patterns exploit; and the causal or other interactive relationship between patterns on a screen or over time. 
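To make these stages concrete, the sketch below shows one possible way of recording such an analysis as structured data. It is purely illustrative: the class and field names are our own assumptions rather than part of Gray et al.'s ontology or the FTC complaint, but they cover the pattern type and level (stage 1), the implicated UI elements and screens (stage 2), and co-occurrence on a screen or across the whole journey (stage 3).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DarkPatternInstance:
    pattern_type: str                  # e.g. "Obstruction" or "Labyrinthine Navigation"
    level: str                         # "high", "meso", or "low"
    screen: int                        # index of the screen in the user journey
    ui_elements: List[str] = field(default_factory=list)  # implicated UI elements, if any
    spans_journey: bool = False        # True when the pattern only emerges over time
    notes: str = ""

@dataclass
class TemporalAnalysis:
    journey: List[str]                 # ordered screen identifiers for the flow
    instances: List[DarkPatternInstance] = field(default_factory=list)

    def co_occurring(self, screen: int) -> List[DarkPatternInstance]:
        """All pattern instances recorded on the same screen (stage 3)."""
        return [i for i in self.instances if i.screen == screen]

    def journey_level(self) -> List[DarkPatternInstance]:
        """Patterns only visible across the whole flow, e.g. a roach motel."""
        return [i for i in self.instances if i.spans_journey]
```

Annotations of this kind could be produced manually by an expert or suggested by automated detection and then confirmed, which is the kind of division of labor considered next.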
Based on these proposed stages for a TADP methodology, we can consider which components are best suited for manual expert review, which can be fully automated, and which type of automation may augment expert analysis in a human-in-the-loop system. When detecting dark patterns automatically, several researchers have implicitly recorded temporal interactions with web services in order to reveal the presence of dark patterns; however, the need for temporal detection was not explicitly stated. We list below the most recent advancements in automatic detection of dark patterns in websites and mobile applications, demonstrating where technical approaches might be leveraged in relation to our overall methodology aims. 1. **Web applications** The foundational work of Mathur et al. in e-commerce websites [27] scaled the detection of dark patterns by (1) automatising the process of product acquisition and capturing HTTP Archive (HAR) [8] files for each crawled page containing HTTP headers and full website response content, (2) detecting visible HTML elements in website content and further automatically clustering them, and (3) using expert analysis to evaluate the occurrence of dark patterns using additional context and surrounding information. This expert analysis included insights from the temporal dimension. For example, researchers found _sneak into basket_ instances by noting that no such product was explicitly added earlier thus requiring to observe several steps of the purchase process [26, Fig.3a]; additionally, a _countdown timer_ was found on a website where the same offer remained on a day-to-day basis requiring the website to be recorded over several days [26, Fig.4a]. Bouhoula et al. [9] also conducted research on consent banners, automatically detecting dark patterns on these banners using natural language processing (NLP) applied to HTML elements. These researchers also used a temporally grounded two-step process to detect such elements on the first and second layer of the banner, anticipating how a user would interact with these elements in real life. These examples demonstrate a technical foundation that could support automated detection of dark patterns in digital systems by collecting and evaluating HTML element information over time, while also indicating places where expert analysis is needed to characterize what kinds of data are collected, in what time frame(s), and how these data are processed or evaluated. 2. **Mobile applications** Several scholars have also detected dark patterns in mobile applications, which differ in accessibility as compared to website HTML code. Koch et al. [23] proposed a new solution to download the Android Package (APK) and iOS and iPadOS application archive file (IPA) files to be able to further analyse Android and iOS applications. They also targeted consent banners, extracting app elements that contain visible text, grouping them to detect accept/reject/settings options, and automatically interacting with the options to observe the hidden data flows in each scenario. Chen et al. [11] took a different approach and based their analysis on computer vision and NLP to automatically detect dark patterns in mobile apps, however only using static UI screenshots. 
These examples demonstrate different technical approaches to identifying and evaluating dark patterns on mobile applications, revealing opportunities for both code-based auditing of APK or IPA files that simulate temporal interaction and scaling up of computer vision or NLP techniques that could be applied to videos of interactions to better characterize temporal characteristics of dark patterns. We anticipate that future scholarship can productively advance this intersection of automated and expert evaluation techniques to support the temporal analysis of dark patterns, facilitating descriptions of dark patterns on both websites and mobile applications. However, each context presents challenges relating to what level or type(s) of patterns can be detected that future work should consider. In general, low-level patterns _may_ be detectable if they can be abstracted in a way that can be supported by web crawlers; however, this detectability is limited by the concreteness of the pattern and the need for human intelligence to detect instances where a pattern is deployed through many different combinations of HTML elements that may require interpretation (as in (Kraus et al., 2019)). For instance, a pattern that manipulates the visual choice architecture might be quite straightforward to detect since this pattern often relates to specific form fields or buttons that can be identified and evaluated in a straightforward manner (as in (Bergmann et al., 2019; Gray et al., 2020)). However, other patterns--particularly those that involve sneaking or obstruction--will be more difficult to detect in a fully automated manner. In these cases, augmentation technologies may be useful to amplify the abilities of the evaluator, creating an audit trail and also potentially supporting further detection efforts at scale in the future. For instance, an evaluator might manually tag dozens of examples of dark patterns across multiple screens of an interface, indicating where types are present in both static and temporal forms with labels and links to HTML elements or interactive components of the system; these mappings may then be used in combination to train detection systems that can suggest the presence of dark patterns that can then be evaluated and confirmed by an expert. We envision a future TADP methodology that brings together the strengths of both technical detection and expert evaluation, supporting the identification of dark patterns statically and over time in relation to specific UI elements and aspects of the overall user experience. ## 4. Conclusion In this short paper, we present a case study of the Amazon Prime "Iliad Flow" to characterize the complexity of dark patterns as they are experienced over time. We used this case to demonstrate how dark patterns exist in combination and over time, supporting the foundation for an analysis methodology for Temporal Analysis of Dark Patterns (TADP). We identify key stages that this methodology should include and identification for components that could be automated or augment expert analysis in future work. ###### Acknowledgements. This work is funded in part by the National Science Foundation under Grant No. 1909714 and the ANR 22-PECY-0002 IPO (Interdisciplinary Project on Privacy) project of the Cybersecurity PEPR. The research of this work was partially supported by the Klaus Tschira Stiftung gGmbH.
2309.07618
Estimating mutual information for spike trains: a bird song example
Zebra finch are a model animal used in the study of audition. They are adept at recognizing zebra finch songs and the neural pathway involved in song recognition is well studied. Here, this example is used to illustrate the estimation of mutual information between stimulus and response using a Kozachenko-Leonenko estimator. The challenge in calculating mutual information for spike trains is that there are no obvious coordinates for the data. The Kozachenko-Leonenko estimator does not require coordinates, it relies only on the distance between data points. In the case of bird song, estimating the mutual information demonstrates that the information content of spiking does not diminish as the song progresses.
Jake Witter, Conor Houghton
2023-09-14T11:27:35Z
http://arxiv.org/abs/2309.07618v1
# Estimating mutual information for spike trains: a bird song example

###### Abstract

Zebra finch are a model animal used in the study of audition. They are adept at recognizing zebra finch songs and the neural pathway involved in song recognition is well studied. Here, this example is used to illustrate the estimation of mutual information between stimulus and response using a Kozachenko-Leonenko estimator. The challenge in calculating mutual information for spike trains is that there are no obvious coordinates for the data. The Kozachenko-Leonenko estimator does not require coordinates, it relies only on the distance between data points. In the case of bird song, estimating the mutual information demonstrates that the information content of spiking does not diminish as the song progresses.

## 1 Introduction

The mutual information between two random variables \(X\) and \(Y\) is often conveniently described using an information diagram in which the whole rectangle represents the entropy \(H(X,Y)\) of the joint variable \((X,Y)\). This is, in general, less than the sum of \(H(X)\) and \(H(Y)\) because \(X\) and \(Y\) are not independent. In this diagram, the purple and green regions together are intended to represent \(H(X)\) and the green and yellow regions \(H(Y)\). The purple region on its own represents \(H(X|Y)\), the entropy remaining, on average, when the value of \(Y\) is known. In the same way the yellow region represents \(H(Y|X)\). The mutual information is then represented by the green section. It is

\[I(X,Y)=H(X)-H(X|Y)=H(Y)-H(Y|X) \tag{1}\]

or, by substitution,

\[I(X,Y)=\mathbb{E}\log_{2}\left[\frac{p_{X|Y}(x|y)}{p_{X}(x)}\right]=\mathbb{E}\log_{2}\left[\frac{p_{Y|X}(y|x)}{p_{Y}(y)}\right] \tag{2}\]

Here, for illustrative purposes, mutual information is described relative to a specific example: the neural response of cells in the zebra finch auditory pathway to zebra finch song. This is both an interesting neuroscientific example and an example which is typical of a broad set of neuroscience problems. The zebra finch is a model animal used to study both auditory processing and learning; the male finch sings, and he has a single song which begins with a series of introductory notes, followed by two or three repetitions of the motif: a series of complex frequency stacks known as syllables, separated by pauses. Syllables are about 50 ms long, with songs lasting about two seconds. The songs have a very rich structure and both male and female zebra finch can distinguish one zebra finch song from another. Here we use a data set consisting of spike trains recorded while the bird is listening to one of a set of songs and we provide an estimate for the mutual information between the song identity and spike trains recorded from cells in the auditory pathway.

This is an interesting and non-trivial problem. Generally, calculating mutual information is costly in terms of data because it requires the estimation of probabilities such as \(p_{Y}(y)\) and \(p_{Y|X}(y|x)\). For this reason, some measure of correlation is often used when quantifying the relationship between two random variables. However, not all data types have a correlation: calculating the correlation assumes algebraic properties of the data that are not universal. As an example, calculating the correlation between \(X\) and \(Y\) requires the calculation of \(\mathbb{E}[XY]\) which in turn assumes that it makes sense to multiply \(x\) and \(y\) values.
This is not the case for the typical neuroscience example considered here, where the set of outcomes for \(X\) is song identities and for \(Y\), spike trains. To circumvent this, spike trains are often replaced with something else, spike counts for example. However, this involves an implicit assumption about how information is coded. This is likely to be inappropriate in many cases. Indeed, the approach taken to calculating mutual information can involve making very strong assumptions about information coding, the very thing that is being studied. The purpose of this review paper is to demonstrate a different approach: there is a metric-space version of the Kozachenko-Leonenko estimator [7, 8] introduced in [13, 3, 4] and inspired by [15]. This approach has been tested on simulated data, for example in [4], and this shows it to be promising. However, it is important to also test it on real data. Here it is applied in the zebra finch example. ## 2 Materials and Methods Let \[\mathcal{D}=\{(x_{1},y_{1}),(x_{2},y_{2}),\...\,(x_{n},y_{n})\} \tag{3}\] be a data set, in our case the \(x_{i}\) are the labels for songs in the set of stimuli, with each \(x_{i}\in\{1,\ldots,n_{s}\}\); \(n_{s}\) is the number of different songs. For a given trial, \(y_{i}\) is the spiking response. This will be a point in "the space of spike trains". What exactly is meant by the space of spike trains is less clear, but for our purposes here, the important point is that this can be regarded as a metric space, with a metric that gives a distance between any two spike trains, see [16, 10], or, for a review, [5]. Given the data, the mutual information is estimated by \[I(X,Y)\approx\frac{1}{n}\sum_{i=1}^{n}\log_{2}\left[\frac{p_{Y|X}(y_{i}|x_{i}) }{p_{Y}(y_{i})}\right] \tag{4}\] where the particular choice of which conditional probability to use, \(p_{Y|X}\) rather than \(p_{X|Y}\), has been made for later convenience. Thus, the problem of estimating mutual information is one of estimating the probability mass functions \(p_{Y|X}\) and \(p_{Y}\) at the data points in \(\mathcal{D}\). In our example there is no challenge to estimating \(p_{X}\); since each song is presented an equal number of times during the experiment \(p_{X}(x_{i})=1/n_{s}\) for all \(x_{i}\) and, in general \(p_{X}(x_{i})\) is known from the experiment design. However, estimating \(p_{Y|X}\) and \(p_{Y}\) is more difficult. In a Kozachenko-Leonenko approach this is done by first noting that for a small volume \(R_{i}\) containing the point \(y_{i}\) \[p_{Y}(y_{i})\approx\frac{1}{\text{vol}(R_{i})}\int_{R_{i}}p_{Y}(y)\,dy \tag{5}\] with the estimate becoming more-and-more exact for smaller regions \(R_{i}\). If the volume of \(R_{i}\) were reduced towards zero \(p_{Y}(y)\) would be constant in the resulting tiny region. Here \(\text{vol}(R_{i})\) denotes the volume of \(R_{i}\). Now the integral \(\int_{R_{i}}p_{Y}(y)\,dy\) is just the probability mass contained in \(R_{i}\) and so it is approximated by the number of points in \(\mathcal{D}\) that are in \(R_{i}\): \[\int_{R_{i}}p_{Y}(y)\,dy\approx\frac{|\{y_{j}\in R_{i}\}|}{n}. \tag{6}\] It should be noted at this point that this approximation becomes more-and-more exact as \(R_{i}\) becomes bigger. Using the notation \[k_{i}=|\{y_{j}\in R_{i}\}| \tag{7}\] this means \[p_{Y}(y_{i})\approx\frac{k_{i}}{n\mathrm{vol}(R_{i})}. \tag{8}\] This formula provides an estimate for \(p_{Y}(y_{i})\) provided a strategy is given for choosing the small regions \(R_{i}\) around each point \(y_{i}\). 
As will be seen, a similar formula can be derived for \(p_{Y|X}(y_{i}|x_{i})\), essentially by restricting the points to \(\mathcal{D}_{i}=\{(x_{j},y_{j})\in\mathcal{D}|x_{j}=x_{i}\}\):

\[p_{Y|X}(y_{i}|x_{i})\approx\frac{h_{i}}{n_{c}\mathrm{vol}(R_{i})} \tag{9}\]

where \(h_{i}\) is the number of points in \(R_{i}\) with label \(x_{i}\) and \(n_{c}\) is the total number of points with label \(x_{i}\). In the example here \(n_{c}=n/n_{s}\). Once the probability mass functions are estimated, it is easy to estimate the mutual information. However, there is a problem: the estimates also require the volume of \(R_{i}\). In general, a metric space does not have a volume measure. Furthermore, while many everyday metric spaces also have coordinates providing a volume measure, this measure is not always appropriate since the coordinates are not related to the way the data is distributed. However, the space that the \(y_{i}\) belong to is not simply a metric space; it is also a space with a probability density, \(p_{Y}(y)\). This provides a measure of volume:

\[\mathrm{vol}(R_{i})=\int_{R_{i}}p_{Y}(y)dy \tag{10}\]

In short, the volume of a region can be measured as the amount of probability mass it contains. This is useful because this quantity can in turn be estimated from data, as before, by counting points:

\[\mathrm{vol}(R_{i})\approx\frac{k_{i}}{n}. \tag{11}\]

The problem with this, though, is that it gives a trivial estimate of the probability. Substituting back into the estimate for \(p_{Y}(y_{i})\), Equation 8, gives \(p_{Y}(y_{i})=1\) for all points \(y_{i}\). This is not as surprising as it might at first seem: probability density is a volume-measure-dependent quantity, which is what is meant by calling it a density, and this is the reason that entropy is not well defined on continuous spaces. There is always a choice of coordinates that trivializes the density. However, it is not the entropy that is being estimated here. It is the mutual information, and this is well defined: its value does not change when the volume measure is changed. The mutual information uses more than the probability density \(p_{Y}(y)\) on the space; in addition to \(p_{Y}(y_{i})\) it involves the conditional probabilities \(p_{Y|X}(y|x)\). Using the measure defined by \(p_{Y}(y)\) does not make these conditional probability densities trivial.

The idea behind the metric space estimator is to use \(p_{Y}(y)\) to estimate volumes. This trivializes the estimates for \(p_{Y}(y_{i})\) but it does allow us to estimate \(p_{Y|X}(y|x)\) and use this to calculate an estimate of the mutual information. In this way the volume of \(R_{i}\) is estimated from the probability that a data point is in \(R_{i}\) and this, in turn, is estimated by counting points. Thus, to fix the volume \(\mathrm{vol}(R_{i})\), a number \(h\) of data points is specified and for each point the \(h-1\) nearest data points are identified, giving \(h\) points in all when the "seed point" is included. This is equivalent to expanding a ball around \(y_{i}\) until it has an estimated volume of \(h/n\). This defines the small region \(R_{i}\). The conditional probability is then estimated by counting how many points in \(R_{i}\) are points with label \(x_{i}\), that is, are points in \(\mathcal{D}_{i}\). In fact, this just means counting how many of the \(h\) points that have been identified are in \(\mathcal{D}_{i}\), or, put another way, it means counting how many of the \(h-1\) nearest points to the original seed point are from the same stimulus as the seed point.
In summary, the small region consists of \(h\) points; to estimate \(p_{Y|X}(y_{i}|x_{i})\), the number of points in the small region corresponding to label \(x_{i}\) is counted. This is referred to as \(h_{i}\), so

\[h_{i}=|\{y_{j}\in R_{i}\,|\,x_{j}=x_{i}\}|=|R_{i}\cap\mathcal{D}_{i}|. \tag{12}\]

This is substituted into the formula for the density estimator, Equation 6, to get

\[p_{Y|X}(y_{i}|x_{i})\approx\frac{n}{n_{c}}\frac{h_{i}}{h} \tag{13}\]

where, as before, \(n_{c}\) is the total number of trials for each song. It is assumed that each song is presented the same number of times. It would be easy to change this to allow for different numbers of trials for each song, but this assumption is maintained here for notational convenience. Substituting back into the formula for the estimated mutual information, Equation 4, gives

\[I_{0}=\frac{1}{n}\sum_{i=1}^{n}\log_{2}\frac{n_{s}h_{i}}{h} \tag{14}\]

The calculation of \(I_{0}\) is illustrated in Figure 1. The subscript zero has been added in order to preserve the unadorned \(I\) for the information itself and \(\tilde{I}\) for the debiased version of the estimator; this is discussed below.

Figure 1: **The calculation of \(I\) and the spiking data**. **A** illustrates how the estimator is calculated. The circles and triangle are data points and red and blue represent two labels. The dashed line is the small region around the seed point in the center marked by a triangle \(\blacktriangle\). Here \(h=7\) so the ball has been expanded until it includes seven points. It contains four red points, the colour of the central point, so \(h_{\blacktriangle}=4\). For illustration the points have been drawn in a two-dimensional space, but this can be any metric space. **B** describes the data. The spiking responses of a typical neuron to each presentation of a song are plotted as a raster plot, with a mark for each spike. The trials are grouped by song, so the ten responses in each group correspond to repeated presentations of a single stimulus. Stimulus onset is aligned at 0, with the shortest song lasting 1.65 seconds.

This estimate is biased and it gives a non-zero value even if \(X\) and \(Y\) are independent. This is a common problem with estimators of mutual information. One advantage of the Kozachenko-Leonenko estimator described here is that the bias at zero mutual information can be calculated exactly. Basically, for the estimator to give a value of zero would require \(h_{i}=h/n_{s}\) for every \(i\). In fact, while this is the expected value if \(X\) and \(Y\) are independent, \(h_{i}\) has a probability distribution which can be calculated as a sort of urn problem. As detailed in [17], doing this calculation gives the debiased estimator

\[I\approx\tilde{I}=I_{0}-I_{b} \tag{15}\]

where \(I_{b}\), the bias, is

\[I_{b}=\sum_{r=1}^{h}\sum_{c=1}^{n_{s}}\frac{n_{c}}{n}u(r-1;n_{c}-1,h-1,n-n_{c})\log_{2}\frac{n\,r}{n_{c}\,h} \tag{16}\]

and \(u\) is the probability mass function for the hypergeometric distribution. Using the parameterization used by distributions.jl1

Footnote 1: juliastats.org/Distributions.jl/v0.14/univariate.html

\[u(k;s,m,f)=\left.\binom{s}{k}\binom{f}{m-k}\right/\binom{s+f}{m}\equiv\text{Hypergeometric}(s,m,f) \tag{17}\]

Obviously the estimator relies on the choice of the smoothing parameter \(h\). Recall that for small \(h\) the counting estimates for the number of points in the small region and for the volume of the small regions are noisy. For large \(h\) the assumption that the probability density is constant in the small region is poor. These two countervailing approximations affect \(I_{0}\) and \(I_{b}\) differently. It seems that a good strategy for picking \(h\) for real data is to maximize \(\tilde{I}(h)\) over \(h\). This is the approach that will be adopted here.
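As a minimal sketch of the estimator just described (our own illustration, not code from the paper), the function below computes \(I_{0}\) from a precomputed matrix of pairwise spike-train distances and subtracts the bias, evaluated as the expectation of the \(I_{0}\) summand when \(h_{i}-1\) follows the hypergeometric distribution of Equation 17; equal numbers of trials per song are assumed.

```python
import numpy as np
from scipy.stats import hypergeom

def kl_mutual_information(D, labels, h):
    """Debiased metric-space Kozachenko-Leonenko estimate of I(stimulus; response).

    D      : (n, n) array of pairwise spike-train distances (any metric).
    labels : length-n array of stimulus identities, each presented equally often.
    h      : smoothing parameter, the number of points in each small region.
    """
    labels = np.asarray(labels)
    n = len(labels)
    classes, counts = np.unique(labels, return_counts=True)
    n_s, n_c = len(classes), counts[0]          # number of songs, trials per song

    # I_0: expand a ball of h points around each seed point and count how many
    # of those points, seed included, share the seed's stimulus label.
    I0 = 0.0
    for i in range(n):
        region = np.argsort(D[i])[:h]           # the seed is at distance zero, so included
        h_i = np.sum(labels[region] == labels[i])
        I0 += np.log2(n_s * h_i / h)
    I0 /= n

    # Bias at zero mutual information: the expected value of the I_0 summand when
    # h_i - 1 is hypergeometric (h - 1 draws, n_c - 1 same-label points among n - 1).
    r = np.arange(1, h + 1)
    pmf = hypergeom.pmf(r - 1, n - 1, n_c - 1, h - 1)
    Ib = np.sum(pmf * np.log2(n_s * r / h))
    return I0 - Ib
```

In practice \(h\) would be chosen by evaluating this for a range of values and keeping the one that maximizes the debiased estimate, as described above.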
### Data

As an example we will use a data set recorded from zebra finch and made available on the Collaborative Research in Computational Neuroscience data sharing website2[12]. This data set contains a large number of recordings from neurons in different parts of the zebra finch auditory pathway. The original analysis of these data is described in [2, 1]. The data set includes different auditory stimuli; here, though, only the responses to zebra finch song are considered. There are 20 songs, so \(n_{s}=20\), and each song is presented ten times, \(n_{c}=10\), giving \(n=200\). The zebra finch auditory pathway is complex and certainly does not follow a single track, but for our purposes it looks like

\[\text{auditory nerve}\rightarrow\text{CN}\rightarrow\text{MLd}\rightarrow\text{OV}\rightarrow\text{Field L}\rightarrow\text{HVc} \tag{18}\]

where CN is the cochlear nuclei, MLd is mesencephalicus lateralis pars dorsalis, analogous to the mammalian inferior colliculus, OV is nucleus ovoidalis, Field L is the primary auditory pallium, analogous to mammalian A1, and, finally, HVc is regarded as the locus of song recognition. The mapping of the auditory pathway and our current understanding of how best to associate features of this pathway with features of the mammalian brain is derived from, for example, [6, 14, 9, 18, 1]. In the data set there are 49 cells from each of MLd and Field L and here the mutual information is calculated for all 98 of these cells.

## 3 Results

Our interest in considering the mutual information for bird song was to check whether or not the early part of the spike train was more informative about the song identity. It seemed possible that the amount of information later in the spike train would be less than in the earlier portion. This does not seem to be the case.

There are a number of spike train metrics that could be used. Although these differ markedly in the mechanics of how they calculate a distance, it does appear that the more successful among them are equally good at capturing the information content. In Figure 2**A** the total mutual information between song identity and spike train is plotted. Here the Victor-Purpura (VP) metric [16], the spike count, the earth mover distance (EMD) [11] and the van Rossum metric [10] are considered. The Victor-Purpura metric and van Rossum metric both include a parameter which can be tuned, roughly corresponding to the precision of spike timing. Here the optimal value for each case has been used, chosen to maximize the average information. These values are \(q=32.5\) Hz for the VP metric and \(\tau=15\) ms for the vR metric.
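For concreteness, the sketch below gives the standard dynamic-programming form of the Victor-Purpura distance (the usual textbook recursion, not the authors' code): a spike can be deleted or inserted at cost 1, or moved by \(\Delta t\) at cost \(q\Delta t\), and the distance is the minimum total cost of turning one train into the other.

```python
import numpy as np

def victor_purpura(t1, t2, q):
    """Victor-Purpura distance between two spike trains.

    t1, t2 : sorted arrays of spike times (in seconds).
    q      : cost per second of moving a spike; 1/q sets the timing precision,
             and q = 0 reduces the distance to the difference in spike counts.
    """
    n, m = len(t1), len(t2)
    G = np.zeros((n + 1, m + 1))
    G[:, 0] = np.arange(n + 1)                  # delete all spikes of t1
    G[0, :] = np.arange(m + 1)                  # insert all spikes of t2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i, j] = min(G[i - 1, j] + 1,      # delete spike t1[i-1]
                          G[i, j - 1] + 1,      # insert spike t2[j-1]
                          G[i - 1, j - 1] + q * abs(t1[i - 1] - t2[j - 1]))  # move a spike
    return G[n, m]
```

The van Rossum distance, computed by filtering each spike train with an exponential kernel and taking the L2 difference of the filtered traces, could be used in the same way.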
The mutual information estimator uses the metric to order the points: each small region contains the \(h-1\) points nearest the seed point, so the estimator does not depend on the distances themselves, just the order. Indeed, the estimated mutual information is not very sensitive to the choice of \(q\) or \(\tau\). This is demonstrated in Figure 2**B**, where the mutual information is calculated as a function of \(q\), the parameter for the VP metric. The Victor-Purpura metric and van Rossum metric clearly have the highest mutual information and are very similar to each other. This indicates that the estimator is not sensitive to the choice of metric, provided the metric is one that can capture features of the spike timing as well as the overall rate. The spike count does a poor job, again indicating that there is information contained in spike timing as well as in the firing rate. Similar results were seen in [19] and in [5], though a different approach to evaluating the performance of the metrics was used there.

Figure 2: **Information content according to different distances**. **A** shows the mean mutual information (MI) among the 98 neurons from both regions according to different distance metrics: the Victor-Purpura metric, the firing rate, the earth mover distance and the van Rossum metric. To calculate the mutual information 1.65 s of spike train is used, corresponding to the length of the shortest song. **B** shows how that mean MI varies according to the \(q\) parameter for the Victor-Purpura metric. In both cases blue corresponds to MLd and red to Field L. In **B** the translucent band corresponds to the middle 20% of data points; there is substantial variability in information across cells.

The cells from MLd have higher mutual information, on average, than the cells from Field L. Since Field L is further removed from the auditory nerve than MLd, this is to be expected from the information processing inequality. This inequality stipulates that away from the source of information, information can only be lost, not created.

In Figure 3 the information content of the spike trains as a function of time is considered. To do this the spike trains are sliced into 100 ms slices and the information is calculated for each slice. The songs have variable length, so the mutual information becomes harder to interpret after the end of the shortest song, marked by a dashed line. Nonetheless, it is clear that the rate of information, and the information per spike, is largely unchanged through the song.

## 4 Discussion

As well as demonstrating the use of the estimator for mutual information, we were motivated here by an interest in the nature of coding in spike trains in a sensory pathway. It is clear that the neurons in MLd and Field L are not "grandmother" neurons, responding only to a specific song and only through the overall firing rate. The firing rate contains considerably less information than was measured using the spike metrics. The spike metrics, in turn, give very similar values for the mutual information; this appears to indicate that the crucial requirement of a spike train metric is a "fuzzy" sensitivity to spike timing. This demonstrates the need for an estimator such as the KL estimator used here. Approaches that do not incorporate spike timings underestimate the mutual information, but histogram methods, which do include timings, are computationally impractical for modest amounts of data. A pioneering paper, [19], also examines mutual information for zebra finch song, but using a histogram approach. The substantive conclusion there was similar to the conclusion here: there was evidence that spike timings are important. However, it seems likely that this early paper was constrained in its estimates by the size of the data set. This is suggested by the way the amount of information measured increased monotonically as the bin-width in the temporal discretization was reduced, a signature of a data-constrained estimate. Finally, it is observed that it is not the case that the precision of spiking diminishes as the song continues.
Since the song can often be identified from the first few spikes of the response, it might be expected that the neuronal firing would become less precise; precision is metabolically costly. However, although the firing rate falls slightly, the information remains constant on a per-spike basis.

Figure 3: **Information content per time**. These figures show the time-resolved mutual information, calculated for the spiking response over 0.1 s slices; the centres of the slices, \(T\), are plotted against the mean mutual information. **A** shows how this varies over time, with a vertical line showing the ending of the shortest stimulus. **B** shows the mean information per spike; although **A** shows a small decrease, **B** seems to indicate that this corresponds to a reduction in firing rate, not in the information contained in each spike. In both cases the metric is the VP metric with \(q=30\) Hz.

**Author contributions:** Both authors contributed to conceptualization, methodology and writing.

**Funding:** JW is supported by EPSRC DTP (EP/T517872/1). CH is a Leverhulme Research Fellow (RF-2021-533).

**Acknowledgements:** We are very grateful to Theunissen, F.E.; Gill, P.; Noopur, A.; Zhang, J.; Woolley, S.M.N. and Fremouw, T. for making their data available on CRCNS.org.

**Conflicts of interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
2309.10364
Is it possible to separate baryonic from dark matter within the $Ξ›$-CDM formalism?
We found general solutions of matter stress-energy (non-)conservation in scalar-tensor FLRW-type cosmological models by extending the logotropic formalism to the case of non-minimal coupling between the scalar field and new dark fluid candidates. The energy conditions expressed by the generating function are introduced. Next, we investigate the possibility of separating baryonic from dark matter and explain their ratio as a chameleon effect in the presence of non-minimal coupling. To answer the question affirmatively we analyze simple extensions of the $\Lambda$-CDM model by adding a non-minimally coupled scalar field in the Einstein frame. Two scenarios involving either a scalaron (quintessence) or a phantom (ghost) are numerically solved and compared. As a result, it is shown that in both cases LCDM model can be reproduced with a high accuracy in the region covered by observations. As expected, in the case of the phantom (ghost) field the Big-Bang scenario is replaced by the (matter) Bounce.
Andrzej Borowiec, Marcin Postolak
2023-09-19T06:58:01Z
http://arxiv.org/abs/2309.10364v3
# Is it possible to separate baryonic from dark matter within the \(\Lambda\)-CDM formalism? ###### Abstract We found general solutions of matter stress-energy (non-)conservation in scalar-tensor FLRW-type cosmological models by extending the logotropic formalism to the case of non-minimal coupling between the scalar field and new dark fluid candidates. The energy conditions expressed by the generating function are introduced. Next, we investigate the possibility of separating baryonic from dark matter and explain their ratio as a chameleon effect in the presence of non-minimal coupling. To answer the question affirmatively we analyze simple extensions of \(\Lambda\)-CDM model by adding a non-minimally coupled scalar field in the Einstein frame. Two scenarios involving either a scalaron (quintessence) or a phantom (ghost) are numerically solved and compared. As a result, it is shown that in both cases LCDM model can be reproduced with a high accuracy in the region covered by observations. As expected, in the case of the phantom (ghost) field the Big-Bang scenario is replaced by the (matter) Bounce. modified cosmology, scalar-tensor cosmology, modified gravity, baryonic matter, dark matter, LCDM model, energy conditions, non-minimal coupling, chameleon mechanism, Einstein frame, quintessence, phantom, ghost, scalar field, bounce cosmology, matter bounce. ## I Introduction Modern relativistic cosmology is one of the most rapidly evolving branches of physics and science in general. The discovery of the late-time accelerating expansion of the Universe [1; 2] is one of the most fundamental challenges facing modern theoretical physics. These questions are partially addressed by the \(\Lambda\)-CDM (LCDM) model [3]12. The next step was the proposal of an inflation mechanism3[8; 9; 10] designed to explain the flatness and horizon problem. This mechanism, together with the reference model, constitute a kind of paradigm, largely consistent with observational data, such as those involving CMB analysis associated with the _Planck mission_[11; 12; 13] and measurements of type Ia supernovae in _Pantheon+_[14; 15]. However, it is worth noting that this approach also has its weaknesses, such as the fine tuning, the unclarified nature of the inflaton, and the lack of a unified and consistent description of the dark sector of the Universe - dark matter [16; 17; 18; 19] and dark energy [20]4. Footnote 1: The reader will find more details in the textbooks [4; 5]. Footnote 2: A systematic and chronological overview of the evolution of physical cosmology throughout the 20th and early 21st centuries can be found in [6]. Footnote 3: For a more extensive discussion of various inflationary scenarios, see [7]. Footnote 4: For an overview of the problems posed to the LCDM model, see [21]. The current description of dark matter is plagued by several important inaccuracies in galactic scales, such as: cuspy halo problem, dwarf galaxy problem, satellite disk problem, galaxy morphology problem and one of the most relevant issues from the cosmological point of view - high redshift galaxies (such as _JADES-GS-z13-0_ observed by the James Webb Space Telescope at \(z=13.20^{+0.04}_{-0.07}\)[22]). Despite the obvious advantages and triumphs associated with the combination of the \(\Lambda\)-CDM model and the inflationary mechanism, one of the most rapidly growing branches of physics is an approach that searches for modifications and deviations from the classical general theory of relativity and cosmological inflation [23]. 
Scientists are making every effort to improve the paradigm, or to propose new hypotheses describing our Universe (for current papers on this topic, see [24; 25; 26]). Note, also, that the motivation for introducing brand new theories in general also extends to much more fundamental questions about gravity and cosmology itself5. The issue of modified gravity has received a significant amount of attention in many review papers [28; 29; 30; 31; 32; 33; 34], and many tests of the compatibility of these proposals with observational data have been carried out (e.g. [35])6.

Footnote 5: A discussion of these issues can be found in [27].

Footnote 6: A collective description of many aspects of modified gravity is provided in [36].

Among the most widespread and recognized attempts to modify GR are the \(f(R)\) theories [37; 38; 39; 40], which replace the standard term in the action associated with the curvature scalar by a function depending on it. Meanwhile, these theories can be related to another class of modified gravity theories - scalar-tensor theories (STT) [41; 42; 43]. In certain situations these approaches are equivalent to each other; however, it is not always possible to link the two formalisms (for more details, see [42; 44]). Moreover, even within STT we are dealing with so-called conformal frames, since the scalar field that is an additional degree of freedom in the theory can be non-minimally coupled to the gravitational sector (Jordan frame) or to the matter part (Einstein frame) within the action of the theory. Neither of these frames is a priori the physical one, and opinions on this question are sharply divided among cosmologists [45; 46; 47; 48].

Another popular approach in recent years is the attempt to unify dark matter and dark energy within a so-called _dark fluid_. Under this formalism, the dark sector of the Universe is assumed to be a single physical phenomenon. At galactic scales it reproduces the behavior of dark matter, while at cosmological scales it reconstructs the evolution of dark energy. In many cases, equations of state motivated by specific examples from solid state physics provide a good basis for formulating dark fluid equations of state7, e.g. a recent model with a cosmological fluid reproducing the _Murnaghan_ EoS [49]8 of the following form [50]:

Footnote 7: The reader can find more specific examples in Proposition 2.1.

Footnote 8: The Murnaghan EoS models the behavior of matter under conditions of high pressure and states that at \(T=\) const the bulk modulus of incompressibility \(K=-V\left(\frac{\partial p}{\partial V}\right)_{T}\) is a linear function of pressure:

\[p=-\frac{A_{*}}{\alpha}\left[\left(\frac{\rho_{*}}{\rho}\right)^{\alpha}-1\right]\propto\rho^{-\alpha} \tag{1}\]

corresponding to Chaplygin-like behavior. More details about the motivation and mathematical structure of the above formalism can be found in [51; 52; 53].

The purpose of our paper is twofold:

1. Study of stress-energy non-conservation in ST FLRW cosmological models in the context of the so-called chameleon mechanism, providing a general solution to this problem.

2. Proposing toy models: relatively simple modifications of the LCDM model obtained by adding a non-minimally matter-coupled scalar field as an object that effectively describes the dark matter phenomenon and is able to explain the dark-to-baryonic matter ratio. The segment related to dark energy (cosmological constant) remains unaffected, so this article does not aim to explain this feature.
In Section 1.1, following [54; 55] we recall the formalism associated with ST cosmology in the most generic case including non-minimal coupling (NMC) between gravity and matter as a realization of the chameleon mechanism [56; 57] and a corresponding stress-energy non-conservation. Furthermore, we propose general solution to a comoving fluid non-conservation in the FLRW background in terms of arbitrary generating function describing energy density and pressure. In addition, we reformulate the standard energy conditions known from relativistic cosmology in terms of generating function. In Section 3, we propose two toy cosmological models: the first corresponding to the minimal extension of the LCDM model with _scalaron_ field (with the presence of an initial singularity) and the second being a concrete realization of the alternative to cosmic inflation - _matter bounce_ scenario. Based on the numerical solutions, we analyze the time evolution of two models and try to gain the possible physical outcomes. ### Scalar-tensor gravity & FLRW cosmology Our starting point is the most general action for scalar-tensor theories of gravity which can be defined as follows (see e.g., [54; 55; 58] for the same convention): \[\begin{split}& S[g_{\mu\nu},\Phi,\chi]=\frac{1}{2\kappa^{2}}\int d ^{4}x\sqrt{-g}\Big{[}\mathcal{A}(\Phi)R-\mathcal{B}(\Phi)g^{\mu\nu}\\ &\partial_{\mu}\Phi\partial_{\nu}\Phi-\mathcal{V}(\Phi)\Big{]}+S _{\rm matter}\left[e^{2\alpha(\Phi)}g_{\mu\nu},\chi\right],\end{split} \tag{2}\] where: \(\{\mathcal{A}(\Phi),\mathcal{B}(\Phi),\mathcal{V}(\Phi),\alpha(\Phi)\}\) are the four arbitrary functions so-called frame parameters. Usually \(\mathcal{A}(\Phi)\) is a positive function of the scalar field \(\Phi\), which is referred to as the so-called effective gravitational constant and which is referred to as the non-minimal coupling of \(\Phi\) to gravity. The \(\mathcal{B}(\Phi)\) function describes a non-canonical kinetic term associated with the scalar field and \(\mathcal{V}(\Phi)\) is the self-interaction potential of the scalar field itself. \(\alpha(\Phi)\) in turn, is responsible for yet another non-minimal coupling of the scalar field \(\Phi\) to the matter fields \(\chi\). All of them appear naturally when one passes from \(f(R)\) gravity into its scalar-tensor representation. Particularly, in the Einstein frame one finds that \(\alpha^{\prime}(\Phi)\neq 0\) (e.g. [54]). 
By performing a variation of the action (2) with respect to the metric tensor \(g_{\mu\nu}\) and the scalar field \(\Phi\), we obtain the field equations describing our theory: \[\begin{split}&\mathcal{A}G_{\mu\nu}+\left(\frac{1}{2}\mathcal{B}+ \mathcal{A}^{\prime\prime}\right)g_{\mu\nu}g^{\alpha\beta}\partial_{\alpha}\Phi \partial_{\beta}\Phi-\left(\mathcal{B}+\mathcal{A}^{\prime\prime}\right) \partial_{\mu}\Phi\partial_{\nu}\Phi\\ &+\mathcal{A}^{\prime}(g_{\mu\nu}\Box-\nabla_{\mu}\nabla_{\nu}) \Phi+\frac{1}{2}\mathcal{V}g_{\mu\nu}=\kappa^{2}T_{\mu\nu},\end{split} \tag{3a}\] \[\begin{split}& 2\Big{[}3(\mathcal{A}^{\prime})^{2}+2 \mathcal{A}\mathcal{B}\Big{]}\Box\Phi+\Big{[}2(\mathcal{A}\mathcal{B}^{ \prime}+\mathcal{A}^{\prime}(\mathcal{B}+3\mathcal{A}^{\prime\prime}))\Big{]} (\partial\Phi)^{2}\\ &+2(2\mathcal{A}^{\prime}\mathcal{V}-\mathcal{A}\mathcal{V}^{ \prime})=2\kappa^{2}T(\mathcal{A}^{\prime}-2\alpha^{\prime}\mathcal{A}),\end{split} \tag{3b}\] where: \(()^{\prime}\equiv\frac{d}{d\Phi}\), \(\Box=g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}\) and \(T_{\mu\nu}=-\frac{2}{\sqrt{-g}}\frac{\delta S_{m}}{\delta g^{\mu\nu}}\) denotes stress-energy tensor representing an external matter source with \(T=g^{\mu\nu}T_{\mu\nu}\). Some relevant observations can be deduced from relation (3). First, when \(3(\mathcal{A}^{\prime})^{2}+2\mathcal{A}\mathcal{B}<0\) then the scalar field is a ghost9. Second, in the case \(\mathcal{A}^{\prime}-2\alpha^{\prime}\mathcal{A}=0\) our scalar field \(\Phi\) is minimally coupled to matter otherwise it is generated by matter itself. In addition, the condition for the matter stress-energy to be covariantly conserved is the vanishing of derivative from the \(\alpha(\Phi)\) function, i.e. \(\alpha^{\prime}(\Phi)=0\). In general, from the field equations, it follows that the matter stress-energy tensor is not conserved unless \(\alpha^{\prime}(\phi)=0\) (see, [54; 58] for details): \[\nabla_{\mu}T^{\mu\nu}=\alpha^{\prime}(\Phi)\,\partial^{\nu}\Phi\,T\,. \tag{4}\] This is a manifestation of the so-called Chameleon mechanism as introduced in [59]. If this derivative takes nonzero values then the matter particles follow the geodesics of conformally transformed metric: \[\tilde{g}_{\mu\nu}=e^{2\alpha(\Phi)}g_{\mu\nu} \tag{5}\] and this generates deviations from the geodesics of \(g_{\mu\nu}\) due to the presence of the so-called "fifth force" associated with the existence of \(\Phi\)[42]. For cosmological applications, one takes the FLRW metric with the scale factor \(a(t)\): \[g_{\mu\nu}=\text{diag}\left(-N(t)^{2},\frac{a(t)^{2}}{1-kr^{2}},a(t)^{2}r^{2}, a(t)^{2}r^{2}\sin^{2}\theta\right), \tag{6}\] where: \(k=\{\pm 1;0\}\) denotes spatial curvature of the Universe. The laps function \(N(t)\) allows control of the reparametrization of the time variable and plays an important role in minisuperspace formulation. The case \(N=1\) determines the physical cosmic time. An external matter source is assumed in a perfect fluid form: \[T_{\mu\nu}=(p+\rho)u_{\mu}u_{\nu}+pg_{\mu\nu}, \tag{7}\] where: \[u^{\mu}=\left(N^{-1};0;0;0\right) \tag{8}\] represents comoving 4-velocity with conventional normalization \(u^{\mu}u_{\mu}=-1\). If \(\alpha^{\prime}(\Phi)=0\) then the conservation principle of the matter stress-energy tensor (4) satisfies the usual expression: \[\nabla_{\mu}T^{\mu\nu}=0, \tag{9}\] that in terms of time-dependent energy density \(\rho(t)\) and the pressure \(p(t)\) takes very well-known form: \[\dot{\rho}+3H\left(p+\rho\right)=0. 
\tag{10}\]

Here \(\dot{(\ )}\equiv\frac{d}{dt}\) denotes differentiation w.r.t. any local coordinate time and \(H=\dot{a}/a\) is the Hubble parameter as measured by a local observer. This means that the time dependence appears only implicitly, as a consequence of the time-rescaling invariance: \(dt\mapsto N(t)dt=d\hat{t}\). Consequently, solutions \(\rho=\rho(a),p=p(a)\) are functions of the scale factor. Moreover, the energy density \(\rho(a)\) determines the pressure:

\[p(a)=-\frac{a}{3}\frac{d\rho(a)}{da}-\rho(a). \tag{11}\]

Similarly, in the presence of non-minimal coupling (\(\alpha^{\prime}(\Phi)\neq 0\)) the matter stress-energy non-conservation (4) takes, in the FLRW background, the following time-reparametrization-invariant form:

\[\dot{\rho}+3H\left(p+\rho\right)=-\dot{\alpha}\left(3p-\rho\right) \tag{12}\]

and, moreover, it is expected to have solutions as explicit functions \(\rho(a,\Phi),p(a,\Phi)\) (see Proposition 2.1). Now, substituting back into the field equations (3) and assuming a spatially flat Universe with \(N=1\), we obtain a closed (over-determined) system of second-order ODEs for the two functions \(a(t)\) and \(\Phi(t)\):

\[3H^{2}=\frac{\kappa^{2}\,\rho(a,\Phi)}{\mathcal{A}(\Phi)}+\frac{\mathcal{B}(\Phi)}{2\,\mathcal{A}(\Phi)}\dot{\Phi}^{2}-3\frac{\mathcal{A}^{\prime}(\Phi)}{\mathcal{A}(\Phi)}H\dot{\Phi}+\frac{\mathcal{V}(\Phi)}{2\mathcal{A}(\Phi)}\,, \tag{13a}\]

\[2\dot{H}+3H^{2}=-\frac{\kappa^{2}\,p(a,\Phi)}{\mathcal{A}(\Phi)}-\frac{\mathcal{B}(\Phi)+2\mathcal{A}^{\prime\prime}(\Phi)}{2\,\mathcal{A}(\Phi)}\dot{\Phi}^{2}+\frac{\mathcal{V}(\Phi)}{2\,\mathcal{A}(\Phi)}-\frac{\mathcal{A}^{\prime}(\Phi)}{\mathcal{A}(\Phi)}\left(2H\dot{\Phi}+\ddot{\Phi}\right)\,, \tag{13b}\]

\[\left(3(\mathcal{A}^{\prime}(\Phi))^{2}+2\mathcal{A}(\Phi)\mathcal{B}(\Phi)\right)\ddot{\Phi}=-3\left(3(\mathcal{A}^{\prime}(\Phi))^{2}+2\mathcal{A}(\Phi)\mathcal{B}(\Phi)\right)H\dot{\Phi}-\left((\mathcal{A}(\Phi)\mathcal{B}(\Phi))^{\prime}+3\mathcal{A}^{\prime}(\Phi)\mathcal{A}^{\prime\prime}(\Phi)\right)\dot{\Phi}^{2}+\left(2\mathcal{V}(\Phi)\mathcal{A}^{\prime}(\Phi)-\mathcal{V}^{\prime}(\Phi)\mathcal{A}(\Phi)\right)+\kappa^{2}(\rho-3p)(a,\Phi)\left[\mathcal{A}^{\prime}(\Phi)-2\alpha^{\prime}(\Phi)\mathcal{A}(\Phi)\right]\,. \tag{13c}\]

This system can then be solved and compared with the LCDM model, at least numerically, after choosing the required frame functions \(\{\mathcal{A}(\Phi),\mathcal{B}(\Phi),\mathcal{V}(\Phi),\alpha(\Phi)\}\) and imposing initial conditions that respect the zero Hamiltonian energy constraint (13a). More exactly, the system (13a)-(13c) can be equivalently recast into the form of a constrained conservative two-dimensional Hamiltonian system of classical mechanics, the so-called minisuperspace (MSS) formalism. Cosmological "initial conditions" (Cauchy data) are, in fact, related to the present-day values of cosmological parameters. Normalizing the scale factor (\(a_{0}=1,\dot{a}_{0}=H_{0}\)) and assuming that the scalar field has no observable dynamics today (\(\dot{\Phi}_{0}=0\)), we obtain [55]:

\[3H_{0}^{2}=\frac{\kappa^{2}\,\rho(a_{0},\Phi_{0})}{\mathcal{A}(\Phi_{0})}+\frac{\mathcal{V}(\Phi_{0})}{2\mathcal{A}(\Phi_{0})}. \tag{14}\]

The algebraic relation between \(\Phi_{0}\) and \(H_{0}\) does not depend on the kinetic term \(\mathcal{B}(\Phi)\). Its solutions provide some cosmological scenarios which can be realized in the form of numerical solutions \((a(t),\Phi(t))\).
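As a toy illustration of how the constraint (14) fixes \(\Phi_{0}\) once \(H_{0}\) and the matter content are specified (the frame functions below are our own illustrative assumptions, not the models studied in this paper), one can pick an Einstein-frame-like setup with \(\mathcal{A}=1\), a constant potential playing the role of \(2\Lambda\), a linear coupling \(\alpha(\Phi)=\lambda\Phi\), and dust dressed by a chameleon factor, and solve for \(\Phi_{0}\) with a one-dimensional root find.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative, assumed choices (units with kappa^2 = 1 and H0 = 1):
H0, Omega_m, Omega_L, lam = 1.0, 0.3, 0.7, 0.1
A = lambda Phi: 1.0                              # minimal coupling to gravity
V = lambda Phi: 2.0 * 3.0 * H0**2 * Omega_L      # constant potential playing the role of 2*Lambda
alpha = lambda Phi: lam * Phi                    # non-minimal matter coupling
rho = lambda a, Phi: 3.0 * H0**2 * Omega_m * np.exp(alpha(Phi)) * a**-3  # chameleon-dressed dust

def constraint(Phi0, a0=1.0):
    """Residual of Eq. (14); the kinetic terms are absent since dPhi/dt = 0 today."""
    return 3.0 * H0**2 - (rho(a0, Phi0) + V(Phi0) / 2.0) / A(Phi0)

Phi0 = brentq(constraint, -10.0, 10.0)
print(Phi0)   # for these particular (assumed) choices the root is Phi0 = 0
```

Such a root, together with \(a_{0}=1\), \(\dot{a}_{0}=H_{0}\) and \(\dot{\Phi}_{0}=0\), supplies the Cauchy data needed to integrate the system (13a)-(13c) numerically.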
This is enough to compare numerically with the well-known LCDM scenario and calculate the baryonic to dark matter ratio, provided \(\alpha^{\prime}(\Phi)\neq 0\). ## II Engineering of stress-energy (non-)conservation in FLRW models In fact, the continuity equation (10) as well as the discontinuity one (12) can be solved in terms of an a priori arbitrary differentiable function \(f(x)\) which for physical reasons we can assume, e.g., to satisfy the positive energy density condition10: Footnote 10: Some energy conditions allow negative energy densities, see Proposition 2.5. \[f(x)\geq 0\qquad\text{for}\qquad x\geq 0. \tag{15}\] These generate a huge class of potentially _unphysical_ solutions whose usefulness can be controlled by additional criteria, e.g. energy conditions11 or the speed of sound. On the other hand, treating STT as an effective description one may expect that some of them are _physically reasonable_ from the point of view of a yet unknown, more fundamental theory and/or additional fields. Now, by direct calculation, one can verify the validity of the following Proposition, c.f. (11): Footnote 11: More comprehensive context regarding energy conditions in the case of relativistic cosmology can be found in [60; 61; 62]. **Proposition 2.1**.: 1. _Let_ \(f(x)\) _be some differentiable function, which will be called a generating function. We set:_ \[g(x)=xf^{\prime}(x)-f(x),\] (16) _where_ \({}^{\prime}\equiv\frac{d}{dx}\)_. Then the pair:_ \[\begin{cases}\dfrac{\rho(a)}{\rho_{0}}=f(a^{-3})\\ \dfrac{p(a)}{\rho_{0}}=g(a^{-3})\end{cases}\] (17) _is a solution of (_10_) for any dimensionful constant_ \(\rho_{0}\)_._12__ Footnote 12: We assume the speed of light \(c=1\). Here, the constant \(\rho_{0}\) is for dimensional reasons and will be further omitted. 2. _Conversely, if_ \(g(x)\) _is given, then the generating function can be reconstructed by:_ \[f(x)=x\int\frac{g(x)}{x^{2}}dx\] (18) _such that the pair:_ \[\rho(a)=f(x)|_{x=a^{-3}},p(a)=g(x)|_{x=a^{-3}}\] (19) _is a solution of (_10_). Moreover, both transformations are inverse to each other._ 3. _If functions_ \(\rho=\rho(a)\) _and_ \(p=p(a)\) _are solutions of (_10_) then:_ \[\begin{cases}\rho(a)\mapsto\rho(a,\Phi)=e^{4\alpha(\Phi)}\rho\left(a\,e^{ \alpha(\Phi)}\right)\\ p(a)\mapsto p(a,\Phi)=e^{4\alpha(\Phi)}\,p\left(ae^{\alpha(\Phi)}\right)\end{cases}\] (20) _are solutions of (_12_)._ Particularly, within ST gravity, it is always possible to find a conformally related frame in which the stress-energy tensor is conserved, e.g. the Jordan frame in \(f(R)\) gravity models. The formulae relating the functions \(f(x)\) and \(g(x)\) are linear, as expected, therefore allowing for the creation of composite multi-component objects as linear combinations of given ones. The first part of the Proposition generalizes the results of [63]. The second one (20) is, to the best of our knowledge, new. For the special case of barotropic fluids it has been explored in [55]. It allows one to study the chameleon mechanism [56] as an effect of non-minimal coupling between gravity and matter. The generalization to the FLRW spacetime in arbitrary Lorentzian dimension \(n+1\) is also possible: \[\begin{cases}\rho(a)\mapsto\rho(a,\Phi)=e^{(n+1)\alpha(\Phi)}\rho\left(a\,e^ {\alpha(\Phi)}\right)\\ p(a)\mapsto p(a,\Phi)=e^{(n+1)\alpha(\Phi)}\,p\left(ae^{\alpha(\Phi)}\right), \end{cases} \tag{21}\] where: \[\rho(a)=f(x)|_{x=a^{-n}}\,,\quad p(a)=g(x)|_{x=a^{-n}}\,.
\tag{22}\] The formula (21) is a solution of the stress-energy non-conservation: \[\dot{\rho}+n\,H\left(p+\rho\right)=-\dot{\alpha}\left(n\,p-\rho\right) \tag{23}\] provided (22) satisfies the case \(\dot{\alpha}=0\). It should be remarked that a similar expression to (16) is used for the definition of a self-interacting potential: \[V(R)=Rf^{\prime}(R)-f(R)\,, \tag{24}\] see e.g. Appendix 1 in [54], when changing from \(f(R)\)-gravity to its ST equivalent, where \(R\) is a Ricci scalar. The difference is that above \(R=R(\Phi)\) is understood as an inverse function to \(\Phi=f^{\prime}(R)\) while in (16) we do not use the Legendre transformation, c.f. [64]. We finish general consideration by providing few examples. **Example 2.2**.: _In fact, every choice \(f(x)\) generates two parameter extensions \(f_{A,B}(x)=f(A+Bx)\) with \(g_{A,B}(x)=Bxf^{\prime}(A+Bx)-f(A+Bx)\), \(B\neq 0\). For one of the simplest cases:_ \[f_{A,B}(x)=(A+Bx)^{(1+\omega)} \tag{25}\] one gets:_ \[g_{A,B}(x)=(\omega\,Bx-A)(A+Bx)^{\omega}, \tag{26}\] _which for \(A=0\) provides a barotropic fluid with the barotropic equation of state (EoS) parameter \(\omega\). Furthermore, for \(\omega=-1\) one gets a cosmological constant. The case \(\omega=0\) provides dust matter. Thus:_ \[\rho_{0,1,\omega}(a,\Phi)=\rho_{0\omega}\,e^{(1-3\omega)\alpha(\Phi)}\,a^{-3( 1+\omega)} \tag{27}\] _as already obtained in [55]. It shows that the chameleon factor depends, in fact, on a barotropic parameter \(\omega\)._ Originally, the chameleon mechanism has been proposed as an effect of non-minimal coupling between gravity and matter within scalar-tensor gravity formalism [57; 59] as a being related with the change from Jordan to Einstein's frame. In some approaches, it is ad-hoc assumed that it manifests itself as a multiplicative factor in front of the matter Lagrangian that depends on the scalar field. As we can see from expression (27) such an assumption is valid only in the case of barotropic EoS. In addition, the multiplicative factor \(e^{(1-3\omega)\alpha(\Phi)}\) heavily depends on the barotropic coefficient \(\omega\). In particular, it does change the fluid representing dark energy, \(\omega=-1\), but does not change the radiation term, \(\omega=\frac{1}{3}\). However, for more general fluids with nonlinear relations between \(\rho(a,\phi)\) and \(p(a,\Phi)\) the situation is different. **Example 2.3**.: _More generally, considering:_ \[\begin{split} f(x)&=x^{(\omega+1)}\left[A+Bx^{-( \beta+1)(\omega+1)}\right]^{\frac{1}{\beta+1}}\\ &=\left[B+Ax^{(\beta+1)(\omega+1)}\right]^{\frac{1}{\beta+1}}\,, \end{split} \tag{28}\] _one obtains:_ \[g(x)=\omega f(x)+(1+\omega)\frac{B}{f(x)^{\beta}}, \tag{29}\] _which provides a generalized Chaplygin gas, see e.g. [65]. In particular, for \(B=0\) one recovers barotropic fluid and for \(\beta=1,\omega=0\) the standard Chaplygin gas [66]._ **Example 2.4**.: _The choice:_ \[f(x)=\frac{x^{\alpha}}{(\alpha-1)^{2}}(\ln x^{\alpha-1}-1),\qquad\alpha\neq 1 \tag{30}\] _implying_ \[g(x)=x^{\alpha}\ln x \tag{31}\] _is known a generalized logotropic case [67]. The Anton-Schmidt fluid, see [68; 69; 70], is obtained for_ \[f(x)=\frac{x}{2}\ln^{2}x\,,\quad g(x)=x\ln x\,. \tag{32}\] ### Energy conditions and other applications In this subsection, we discuss some constraints on the generating function \(f(x)\) that comes from its possible physical interpretation as an energy density. 
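As a side illustration (not part of the original derivation), the generating-function construction of Proposition 2.1 and the examples above can be prototyped symbolically. In the sketch below the helper `fluid_from_f` and all variable names are our own choices, and the same routine can be reused to inspect the quantities and energy conditions discussed in this subsection.

```python
import sympy as sp

x, Phi, a = sp.symbols('x Phi a', positive=True)

def fluid_from_f(f, alpha=None):
    """Given a generating function f(x), return (rho, p) following Proposition 2.1:
    g(x) = x f'(x) - f(x), with x = a^{-3}; optionally dressed with the chameleon
    rescaling of Eq. (20) for a coupling function alpha(Phi)."""
    g = x * sp.diff(f, x) - f
    if alpha is None:
        return f.subs(x, a**-3), g.subs(x, a**-3)
    arg = (a * sp.exp(alpha))**-3
    return (sp.simplify(sp.exp(4*alpha) * f.subs(x, arg)),
            sp.simplify(sp.exp(4*alpha) * g.subs(x, arg)))

# Example 2.2: barotropic fluid f(x) = x^(1+w) with A = 0, B = 1 (here: radiation).
w = sp.Rational(1, 3)
rho_r, p_r = fluid_from_f(x**(1 + w))
print(sp.simplify(p_r / rho_r))            # -> 1/3, the barotropic EoS parameter

# Example 2.3: standard Chaplygin gas (beta = 1, w = 0): f(x) = sqrt(B + A x^2).
A, B = sp.symbols('A B', positive=True)
rho_c, p_c = fluid_from_f(sp.sqrt(B + A * x**2))
print(sp.simplify(p_c + B / rho_c))        # -> 0, i.e. p = -B/rho

# Chameleon-dressed dust (w = 0) with alpha(Phi) = gamma*ln(Phi), cf. Eq. (27).
gamma = sp.symbols('gamma', positive=True)
rho_d, _ = fluid_from_f(x, alpha=gamma * sp.log(Phi))
print(sp.simplify(rho_d))                  # -> Phi**gamma / a**3
```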
Firstly, notice that choosing \(g(x)\) as a generating function, one gets: \[\frac{\rho(x)}{\rho_{0}}=Dx+Cx\int\frac{g(A+x)}{x^{2}}dx \tag{33}\] allowing, as observed in [63], to infer the first law of thermodynamics: \[xd\rho=(\rho+p)dx=Bxf^{\prime}(A+Bx)\big{|}_{x=a^{-3}}\,. \tag{34}\] Further, we can introduce two quantities characterizing the physical properties of a fluid; an effective barotropic factor \(w(f)\), also known as the equation of state (EoS) parameter and (effective) speed of sound \(c_{s}(f)\) as functions of the scale factor by the following expressions: \[w(f)=\frac{p}{\rho}=-1+\frac{xf^{\prime}(x)}{f(x)}\bigg{|}_{x=a^{-3}}, \tag{35}\] \[c_{s}^{2}(f)=\frac{dp}{d\rho}=\frac{xf^{\prime\prime}(x)}{f^{\prime}(x)}\bigg{|} _{x=a^{-3}}\,. \tag{36}\] Particularly, \(c_{s}^{2}(f)=1\) for \(f(x)=x^{2}\), i.e. for a stiff matter. Matter-dominated era \(\omega(f)\approx 0\) means that \[xf^{\prime}(x)\approx f(x) \tag{37}\] for the wide range of \(x\), i.e. a dust matter. Further constraints on the generating function \(f(x)\) can be imposed by the so-called energy-conditions: \[\text{DEC}\subset\text{WEC}\subset\text{NEC}\supset\text{SEC}\,. \tag{38}\] More exactly, under the reasonable assumption that: \[0\leq x=a^{-3}<\infty, \tag{39}\] one gets the following13: Footnote 13: Energy conditions can also be understood pointwise, as limiting spacetime regions, which in the context of FLRW cosmology means restricting the time variable. **Proposition 2.5**.: _Energy conditions in terms of generating function:_ 1. _Dominant energy condition (DEC):_ \[\rho\geq|p|\] (40) \[2f(x)\geq xf^{\prime}(x)\geq f(x)\geq 0\] (41) _or_ \[f(x)\geq xf^{\prime}(x)\geq 0.\] (42) _The first equality_ \(2f(x)=xf^{\prime}(x)\) _holds for a stiff matter_ \(f(x)=Ax^{2}\) _while the second_ \(f(x)=xf^{\prime}(x)\) _holds for a dust_ \(f(x)=Ax\)_,_ \(A>0\)_._ 2. _Weak energy condition (WEC):_ \[\begin{cases}\rho\geq 0\\ \rho+p\geq 0\end{cases}\] (43) \[f(x)\geq 0\wedge f^{\prime}(x)\geq 0.\] (44) _is satisfied, for instance, if the cosmological constant is positive. More generally, the function_ \(f(x)\) _should not decrease for_ \(x\geq 0\)_._ 3. _Null energy condition (NEC):_ \[\rho+p\geq 0\] (45) \[f^{\prime}(x)\geq 0.\] (46) _is e.g, satisfied by positive and negative cosmological constants._ 4. _Strong energy condition (SEC):_ \[\begin{cases}\rho+p\geq 0\\ \rho+3p\geq 0\end{cases}\] (47) \[f^{\prime}(x)\geq 0\wedge f^{\prime}(x)\geq\frac{2}{3}\frac{f(x)}{x}.\] (48) _Here, the equality holds for positive spatial curvature (cosmic strings):_ \[f(x)=A\,x^{2/3}\] (49) _with_ \(A>0\)_._ It can be noticed that NEC (46) and SEC (48) admit negative energy densities, and therefore, negative values for the generating function \(f(x)\) for \(x\geq 0\). However, it is not entirely clear how this fact could manifest itself in physical reality14 (perhaps it would be a footprint of higher-dimensional/quantum gravity theories). Also energy density \(\rho(a)\) observed by a co-moving observer can be negative15. In STT one should take into account the scalar field dynamics which also contribute to the overall energy balance (see below). Footnote 14: An attempt to answer this question has been included in the paper [71]. Footnote 15: General background regarding negative energy densities in terms of quantum effects related to gravity in the case of Hawking radiation can be found in [72]. **Remark 2.6**.: _Except the barotropic EoS (27), the formulas (35) and (36) are changed when non-minimal coupling is active, i.e. 
when \(x\mapsto a^{-3}e^{-3\alpha(\Phi)}\). Similarly for the energy conditions listed above._ **Remark 2.7**.: _As follows from Proposition 2.1, the Friedmann equation in the form:_ \[H^{2}=\kappa^{2}\rho_{f0}\,f(a^{-3})\equiv\kappa^{2}\rho_{f} \tag{50}\] _where \(f(x)\) is, in principle, any differentiable function, is consistent with the Einstein equation (\(E_{11}\) component):16_ Footnote 16: The dimensional integration constant \(\rho_{f0}\) and the gravitational constant \(\kappa^{2}=8\pi G\) are necessary for dimensional reasons. Further we work with geometric units \(\kappa=c=1\). \[3H^{2}+2\dot{H}=-\kappa^{2}\rho_{f0}\left(a^{-3}f^{\prime}(a^{-3})-f(a^{-3}) \right)\equiv-\kappa^{2}p_{f} \tag{51}\] _through the conservation law (10). Furthermore, assuming:_ \[f(x)=\sum_{n\geq 0}f_{n}x^{n} \tag{52}\] _to be analytic gives:_ \[g(x)=-f_{0}+\sum_{n\geq 2}(n-1)f_{n}x^{n}. \tag{53}\] _It contains the terms with EoS parameter \(w_{n}=n-1\geq 0\). In this way one can get only late acceleration provided by the cosmological constant \(f_{0}>0\). Nevertheless, non-analytic functions turn out to be more useful, for example:_ \[f(x)=\begin{cases}Ae^{-B/x};\text{ for }x>0\\ 0;\text{ otherwise},\end{cases} \tag{54}\] _with: \(A,B>0\). Moreover, the choice:_ \[H^{2}=\Lambda+A\,e^{-Ba^{3}}\,, \tag{55}\] _enforces an early de Sitter era by \(\Lambda\mapsto\Lambda+A\), leaving the late era governed by \(\Lambda\)._ ### Einstein frame action, equations of motion and conservation law In the Einstein frame the action takes a very simple form: \[\begin{split} S^{\text{\tiny{e}}}[g_{\mu\nu},\Phi,\chi]=& \frac{1}{2\kappa^{2}}\int d^{4}x\sqrt{-g}\left[R-\epsilon g^{\mu\nu}\partial_{ \mu}\Phi\partial_{\nu}\Phi-\mathcal{V}(\Phi)\right]\\ &+S_{\text{\tiny{m}}}\left[e^{2\alpha(\Phi)}\,g_{\mu\nu},\chi \right]\end{split} \tag{56}\] From the point of view of \(f(R)\)-gravity the parameter \(\epsilon=\{\pm 1;0\}\) separates three cases: Palatini (\(\epsilon=0\)), metric (\(\epsilon=1\)) and hybrid (\(\epsilon=-1\)), see [55]. On the other hand, the value of \(\epsilon\) changes the character of the scalar field itself. For \(\epsilon=\gamma=0\) the scalar field has no dynamics and undergoes algebraic constraints, \(\epsilon=1\) corresponds to a scalaron (quintessence) field, see e.g. [73; 74; 75], while \(\epsilon=-1\) is known as a phantom (ghost) field [76; 77; 78; 79]. In our work, we will refer to the model with \(\epsilon=1\) as _Einstein Frame Scalar-Tensor Scalaron_ (EFSTS), and the model with \(\epsilon=-1\) as _Einstein Frame Scalar-Tensor Ghost_ (EFSTG). For cosmological applications, we rewrite the action (56) in the more convenient form of a two-dimensional conservative mechanical system known as the minisuperspace (MSS) formulation: \[S^{\epsilon}_{\rm MSS}\left[a,\Phi\right]=\int dt\Big{[}-6a\dot{a}^{2}+\epsilon a^{3} \dot{\Phi}^{2}-a^{3}\Big{(}{\cal V}(\Phi)+2\kappa^{2}\rho_{f}\Big{)}\Big{]}, \tag{57}\] where the matter density: \[\rho_{f}=\rho_{f}(a,\Phi)=\rho_{f0}\,e^{4\alpha(\Phi)}\,f\left(a^{-3}\,e^{-3 \alpha(\Phi)}\right) \tag{58}\] is taken in its most general form allowed by an implementation of the chameleon mechanism. The corresponding pressure is explicitly expressed as: \[\frac{p_{f}}{\rho_{f0}}=e^{\alpha(\Phi)}\,a^{-3}f^{\prime}\left(a^{-3}\,e^{-3 \alpha(\Phi)}\right)-e^{4\alpha(\Phi)}\,f\left(a^{-3}\,e^{-3\alpha(\Phi)} \right)\,.
\tag{59}\] From the relation (58), one can easily obtain partial derivatives of the energy density: \[\partial_{a}\rho_{f}=-3a^{-1}\left(\rho_{f}+p_{f}\right)\,,\quad\partial_{\Phi }\rho_{f}=\left(\rho_{f}-3p_{f}\right)\alpha^{\prime}(\Phi) \tag{60}\] that are necessary for obtaining MSS equations of motion. The zero Hamiltonian energy condition constraining the system can be recast into the Friedmann equation (13a): \[H^{2}=\frac{\rho_{\Phi}}{3}+\frac{\kappa^{2}\,\rho_{f}(a,\Phi)}{3}, \tag{61}\] where the scalar field energy density and pressure are defined as follows: \[\begin{cases}\rho_{\Phi}=\frac{1}{2}\Big{(}\epsilon\dot{\Phi}^{2}+{\cal V}( \Phi)\Big{)}\\ p_{\Phi}=\frac{1}{2}\Big{(}\epsilon\dot{\Phi}^{2}-{\cal V}(\Phi)\Big{)}\,.\end{cases} \tag{62}\] By performing a variation of the action (57) with respect to the scale factor and the scalar field, respectively, we obtain the following explicit form of the system of dynamical equations: \[3H^{2}+2\dot{H}= -\,\kappa^{2}p_{f}-p_{\Phi} \tag{63a}\] \[\epsilon\left(\ddot{\Phi}+3H\dot{\Phi}\right)+\,\frac{1}{2}{\cal V }^{\prime}(\Phi)\,=\,-\kappa^{2}(\rho_{f}-3p_{f})\alpha^{\prime}(\Phi)\,. \tag{63b}\] From the equations (61)-(62) it immediately follows that the conservation law (10) for a total or effective energy density: \[\rho_{\rm eff}=\rho_{\Phi}+\rho_{f}\,,\quad p_{\rm eff}=p_{\Phi}+p_{f}\,. \tag{64}\] is always fulfilled. The last equation (63b) instead can be rewritten in the form: \[\dot{\rho}_{\Phi}+3H(\rho_{\Phi}+p_{\Phi})\,=\,-\kappa^{2}\alpha^{\prime}(\Phi )\dot{\Phi}\,\partial_{\Phi}\rho_{f}\,. \tag{65}\] Thus conservation of \(\rho_{\Phi}\) is equivalent to the conservation of \(\rho_{f}\), which holds when non-minimal coupling is absent. In this case, according to the Proposition (2.1) \(\rho_{\Phi}\) can be emulated as perfect fluid (\(\epsilon\Box\Phi=-\frac{1}{2}{\cal V}^{\prime}(\Phi)\)): \[\rho_{\Phi}\,=\,\rho_{h}\equiv h(a^{-3}) \tag{66}\] form some function \(h(x)\). It means that the time dependence \(\rho_{\Phi}(t)\) for solutions of field equations can be replaced by \(h(a^{-3}(t))\). In fact, an explicit form of the function \(\Phi(a)\) can be deduced from a solution of differential equation presented in [55] (see, section 2.4). For the special choice, \({\cal V}(\Phi)={\cal V}_{0}=2\Lambda={\rm const}\), one gets: \[\frac{\epsilon}{2}\dot{\Phi}^{2}=A\,a^{-6}\,, \tag{67}\] where \(A\) is an integration constant. Thus in such case \(\rho_{\Phi}\) effectively consists of gravitational constant and stiff matter. This observation gives some inside into the nature of scalar field dynamics and its role in ST FLRW-type cosmology. ## III Toy models mimicking \(\Lambda\)-CDM with baryonic and dark matter separated In this section we investigate cosmological models taking advantage of the mechanism of non-minimal coupling between the scalar field and matter part of the Universe as a way to distinguish between dark and baryonic matters. Both substances, according to LCDM philosophy, are emulated as a cosmic dust. The mathematical formalism introduced so far provides a way to carry out such a procedure. Jordan frame considerations have been proposed in [55]. ### The models As an illustrative example we propose to analyze two toy models that, in a sense, minimally extend well-known \(\Lambda\)-CDM model by adding a scalar field either with positive or negative kinetic energy term and reduce self-interaction potential to a cosmological constant \(\Lambda\). 
Therefore, comparing with the previous section we specialize the potential: \[{\cal V}(\Phi)\equiv V_{\rm DE}=2\Lambda \tag{68}\] which, unlike a dark energy fluid \(\omega=-1\), is independent of the chameleon effect, c.f. (27). Also, the matter part containing dust and radiation is taken to be the same as in \(\Lambda\)-CDM: \[\rho=\rho(a,\Phi)=\rho_{\rm R0}a^{-4}+\rho_{\rm BM0}\,\Phi^{\gamma}\,a^{-3}\,. \tag{69}\] However, our additional modification assumes a non-minimal coupling between the scalar field and the matter that is controlled by the function: \[\alpha(\Phi)=\gamma\ln\Phi. \tag{70}\] The last term expresses our hypothesis that non-minimal coupling can provide the correct _dark-to-baryonic matter ratio_ (radiation term remains unaffected, c.f. (27)). In this way, we utilize the chameleon mechanism: dust matter is described not by the original FLRW metric \(g_{\mu\nu}\) but by a new conformally re-scaled "dark metric": \[\tilde{g}_{\mu\nu}=\Phi^{2\gamma}g_{\mu\nu}, \tag{71}\] while the baryonic matter is related to the original one. The zero Hamiltonian energy condition: \[\frac{H^{2}}{H_{0}^{2}}=\epsilon\frac{\dot{\Phi}^{2}}{6H_{0}^{2}}+\Omega_{ \Lambda}+\Omega_{\rm R0}a^{-4}+\Omega_{\rm BM0}\,\Phi^{\gamma}\,a^{-3} \tag{72}\] at any instant of time \(t\), where dimensionless densities: \[\Omega_{\Lambda}=\frac{\Lambda}{3H_{0}^{2}}\,,\quad\Omega_{w\,0}=\frac{ \kappa^{2}\rho_{w\,0}}{3H_{0}^{2}} \tag{73}\] are defined in a standard way, and \(H_{0}\) denotes the current value of the Hubble parameter. Subsequently, equations of motion take the following form: \[3H^{2}+2\dot{H} = 3\Omega_{\Lambda}-\Omega_{\rm R0}a^{-4}-\frac{\epsilon}{2}\dot{ \Phi}^{2} \tag{74a}\] \[\epsilon\left(\ddot{\Phi}+3H\dot{\Phi}\right) = -3\gamma\Omega_{\rm BM0}a^{-3}\Phi^{\gamma-1}\,. \tag{74b}\] For \(\epsilon=0\) the equation (74a) can be integrated to the Friedmann equation of the \(\Lambda\)-CDM model17. More generally, assuming only \(\gamma=0\) or \(\Omega_{\rm BM0}=0\), the equation (74b) admits solution \(\dot{\Phi}=\dot{\Phi}_{0}a^{-3}\), c.f. (67). In this case (74a) integrates to the Friedmann equation with an additional stiff matter term: Footnote 17: In fact, \(\epsilon=0\) forces \(\gamma=0\), c.f. (74b). \[\frac{H^{2}}{H_{0}^{2}}=\epsilon\frac{\dot{\Phi}_{0}^{2}}{6H_{0}^{2}}\,a^{-6} +\Omega_{\Lambda}+\Omega_{\rm R0}a^{-4}+\Omega_{\rm BM0}\,a^{-3}. \tag{75}\] It contributes proportionally to \(\epsilon\) and the current value of \(\dot{\Phi}_{0}^{2}\). In what follows we assume: \(\dot{\Phi}_{0}=0\), \(\gamma\neq 0\) and \(\Omega_{\rm BM0}\neq 0\) (\(\dot{\Phi}_{0}=\gamma=0\) reproduces again \(\Lambda\)-CDM). Thus controlling the term \(3\gamma\Omega_{\rm BM0}\Phi^{\gamma-1}\approx 0\) for \(a\approx 1\) we do not interfere in the Friedmann equation for most of the observational data regardless of the value of \(\epsilon\)18. Keeping all these in mind, an example of the numerical solution of the above system of ODE equations will form the basis for further analysis of both toy models. Footnote 18: The problems of the CMB spectrum (\(a\approx 10^{-3}\)) and structure formation should be discussed separately. ### Numerical analysis: large scale Our purpose in this section is to analyze more deeply numerical solutions of the above system of ODE in order to get more inside into these models and test our hypotheses19. Before doing this we want to make clear that _numerical solutions should be treated with some care_. 
Universe evolution, as described by these models, is a particular trajectory in the phase space of some autonomous (Hamiltonian) dynamical system whose stability is not yet clarified: small deviations in the initial conditions could have a large impact on other stages of the evolution. For this reason, some claims, especially the one concerning an early universe, have rather a speculative and preliminary character, even if they agree with a piece of common knowledge. Future research should use more advanced methods, such as a Markov chain Monte Carlo as well as dynamical systems analysis and cosmological perturbations. Footnote 19: All of the diagrams were made in Wolfram Mathematica [80]. First, we notice that the model contains four numerical parameters: \(\{\Omega_{\rm BM0},\Omega_{\rm R0},\Omega_{\Lambda},\gamma\}\) as well as one discreet \(\epsilon=\pm 1\) which are explicitly present in the equations of motion (74a)-(74b) we want to solve. The first two, ie. baryonic matter and radiation densities: \[\Omega_{\rm BM0}=4.86\times 10^{-2}\,,\quad\Omega_{\rm R0}=5\times 10^{-4} \tag{76}\] are taken from the Planck mission data [11]. The initial (or, in fact,present day) conditions are taken in the most natural way \(a_{0}=1,\dot{a}_{0}=H_{0}=1\), \(\Phi_{0}\) is to be determined while \(\dot{\Phi}_{0}=0\) in order to exclude stiff matter in the limit \(\epsilon=0\). We also have to take into account the Hamiltonian constraints (72) imposed on the data, cf. (14)20: Footnote 20: Here, we work with the normalized scale factor \(a_{0}=1\) and normalized cosmic time \(T=H_{0}^{-1}=1\). \[1=\Omega_{\Lambda}+\Omega_{\rm R0}+\Omega_{\rm BM0}\Phi_{0}^{\gamma}\,, \tag{77}\] that relates \(\Omega_{\Lambda},\Phi_{0}\) and \(\gamma\). Now, looking for the values, giving a good fit to the \(\Lambda\)-CDM model, we successfully find: \[\Omega_{\Lambda}\approx 0.739\,,\gamma\approx 0.2450\,,\Phi_{0}\approx 946.507\,. \tag{78}\] The quality of the fitting is shown on Figs. 1 and 2, where \(\Lambda\)-CDM plot is based on Planck data [11]. This good accuracy obeys Pantheon supernovae data and extends beyond the current epoch. In both cases, as expected, the parameters values are independent of \(\epsilon\). It should be also noted that \(\Omega_{\Lambda}\) is not much different from Planck value \(\Omega_{\rm{APlanck}}=0.6847\). Accordingly, the calculated value of dust to baryonic matter ratio is also not much different from the Planck one: \[\frac{\rho_{\rm{dust}}}{\rho_{\rm{BM0}}}=\Phi_{0}^{\gamma}\approx 5.3598\,. \tag{79}\] Moreover, the baryonic matter density: \[\rho_{\rm{BM}}\equiv\rho_{\rm{BM0}}\,a^{-3} \tag{80}\] is conserved while the total dust density: \[\rho_{\rm{dust}}\equiv\rho_{\rm{BM0}}\,\Phi^{\gamma}\,a^{-3} \tag{81}\] is not conserved and satisfies the equation: \[\dot{\rho}_{\rm{dust}}+3H\rho_{\rm{dust}}=\gamma\frac{\dot{\Phi}}{\Phi}\,\rho_ {\rm{dust}}. \tag{82}\] This makes the _chameleon dark matter_: \[\rho_{\rm{DM}}\equiv\rho_{\rm{dust}}-\rho_{\rm{BM0}} \tag{83}\] an _emergent quantity_ whose density is _not conserved_. Furthermore, using effective density and pressure (64) one can easily define an expression describing the evolution of effective EoS parameter, c.f. (61), (63a): \[\omega_{\rm{eff}}(t)\equiv\frac{p_{\rm{eff}}(t)}{\rho_{\rm{eff}}(t)}=-\frac{2 }{3}\frac{\dot{H}}{H^{2}}-1=\frac{1}{3}(2q-1)\,, \tag{84}\] where \(q=\frac{\dot{H}}{H^{2}}-1\) is a deceleration parameter. This is shown on Fig. 3. 
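A schematic way to reproduce this numerical setup is sketched below in Python (our own illustrative code, not the Mathematica notebooks used for the figures; only the parameter values quoted in (76) and (78) are taken from the text). It integrates (74a)-(74b) from the present-day data \(a_{0}=1\), \(H_{0}=1\), \(\dot{\Phi}_{0}=0\), with \(\Phi_{0}\) fixed by the constraint (77), and evaluates \(\omega_{\rm eff}\) from (84). Near the singularity or bounce the integration becomes delicate and tighter tolerances (and the extra digits of \(\gamma\) mentioned below) are required, so the sketch is meant only for the large-scale regime.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameter values quoted in Eqs. (76) and (78); units with H0 = 1, a0 = 1.
Om_BM0, Om_R0 = 4.86e-2, 5e-4
Om_L, gamma = 0.739, 0.2450
eps = -1.0                                   # eps = -1: EFSTG, eps = +1: EFSTS

# Present-day Phi_0 from the Hamiltonian constraint (77) with dot{Phi}_0 = 0.
Phi0 = ((1.0 - Om_L - Om_R0) / Om_BM0) ** (1.0 / gamma)
print(f"Phi_0 ~ {Phi0:.1f}, dust-to-baryon ratio Phi_0^gamma ~ {Phi0**gamma:.3f}")  # cf. (79)

def rhs(t, y):
    """y = (a, H, Phi, dPhi); Eqs. (74a)-(74b) rewritten as first-order ODEs."""
    a, H, Phi, dPhi = y
    dH = 0.5 * (3.0*Om_L - Om_R0*a**-4 - 0.5*eps*dPhi**2 - 3.0*H**2)      # (74a)
    ddPhi = -3.0*H*dPhi - 3.0*gamma*Om_BM0*a**-3*Phi**(gamma - 1.0)/eps   # (74b)
    return [a*H, dH, dPhi, ddPhi]

y0 = [1.0, 1.0, Phi0, 0.0]      # a0 = 1, H0 = 1, Phi0 from (77), dPhi0 = 0
past = solve_ivp(rhs, (0.0, -0.5), y0, rtol=1e-10, atol=1e-12, dense_output=True)
future = solve_ivp(rhs, (0.0, 2.0), y0, rtol=1e-10, atol=1e-12, dense_output=True)

def w_eff(sol, t):
    """Effective EoS parameter (84): w_eff = -2*dot(H)/(3 H^2) - 1."""
    a, H, Phi, dPhi = sol.sol(t)
    dH = rhs(t, [a, H, Phi, dPhi])[1]
    return -2.0*dH/(3.0*H**2) - 1.0

print("w_eff(today)    =", w_eff(future, 0.0))   # ~ -Om_L for these parameters
print("w_eff(t = -0.5) =", w_eff(past, -0.5))    # matter contribution more important
print("w_eff(t = +2.0) =", w_eff(future, 2.0))   # approaches -1 (Lambda domination)
```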
\(q=0\) corresponds to \(\omega_{\rm{eff}}=-\frac{1}{3}\), which marks the transition from the deceleration to the acceleration era. The evolution of this parameter also suggests consistency with recent Planck mission data, for which a _mid-point redshift of reionization_ has been estimated to be \(z_{\rm{re}}=7.68\pm 0.79\) [11]. This compatibility leaves the door open to the possibility of creating a large-scale structure in such models (keeping in mind the necessity to carry out a perturbation analysis).
Figure 1: Evolution of the scale factors for both models reproduces over a large range of time the behavior typical of the LCDM. A cosmic bounce scenario can be observed for the EFSTG model (\(\epsilon=-1\)).
Figure 2: Evolution of the Hubble parameters for both cosmological scenarios compared with the LCDM model, c.f. (72).
Figure 3: Evolutions of the effective equation of state (84). The differences among models seem to be insignificant from this perspective.
### Small scale analysis: \(t<10^{-3}T_{0}\) One of the most important aspects of cosmological models is their behavior during the earliest periods of time, if such an initial time exists at all. Solving the equations of motion (74a)-(74b), we get, in fact, two scenarios which appear on a much shorter scale, still accessible for numerical calculations. Now, the "fine-tuned" value \(\gamma=0.24500002\), with a larger number of significant digits, is needed in order to unveil these effects21. Footnote 21: For a smaller number of significant digits, the evolution period to the value \(a\sim 0\) (EFSTG) is slightly increased. This makes a comparison more tricky. In the first scenario, for \(\epsilon=1\), we get a model with an initial singularity (in fact, very similar to the scenario considered in [75]). In the case of \(\epsilon=-1\) we are dealing with the existence of a phase of a (plausibly non-singular) _cosmological bounce_ (Fig. 4). This phenomenon occurs in models related to the so-called Bounce Cosmology22. Footnote 22: For review papers on this subject, see [81; 82; 83]. In the case of the EFSTS model, we have a classic example of a model of the \(\Lambda\)-CDM type, i.e., the case where a necessary aspect of solving the initial singularity problem is to implement an external mechanism (e.g. cosmological inflation, quantum gravity corrections). On the other hand, the EFSTG model is an example of the so-called _matter bounce_ scenario [23], in which it is the exotic matter (in this case, the kinetic energy of the phantom field) that leads to the contraction phase. Then a bounce occurs, followed by a period of accelerated expansion of the Universe with the presence of a possible reheating epoch (see Fig. 9 and the description below it). In the later epochs of the Universe evolution, the models show pretty good agreement with the reference model, i.e. the magnitude of \(H(t)\) is a decreasing function aiming at a constant positive value (Fig. 2), which can be regarded as the era of cosmological constant dominance, for which the future value of the Hubble parameter is constant over time. In contrast, when it comes to the behavior of the Hubble parameter near \(t=0\), the EFSTG model is more challenging. That case involves a _matter bounce_ scenario [23]. The phase of the cosmological bounce is illustrated in Fig. 5. The Hubble parameter slowly decreases in value until it reaches a minimum, then passes through a value equal to zero (bounce moment). The last stage is to reach a maximum value and gradually decrease in value.
This is standard behavior for models with a matter-dominated cosmological bounce [23]. At this point, we can also consider an important quantity that can indicate the nature of the bounce, namely the Hubble horizon, which is the inverse of the Hubble function: \(R_{\rm H}=H^{-1}(t)\). In Fig. 6 one can see that the wavelength of the fluctuation mode, represented by the scale factor (i.e., \(\lambda\propto a\)), enters under the horizon shortly before the bounce and exits shortly after it. According to our numerical solutions, this phase lasts about 279,000 years. As was mentioned earlier, the transition phase between contraction and expansion for the EFSTG model (or the contraction in the case of EFSTS) is dominated by the kinetic energy of the ghost field. As shown in Fig. 7, the kinetic energy of the scalaron field is a decreasing function of time, asymptotically going to zero. In contrast, in the second model we are considering, we are dealing (most likely) with a singular behavior of the kinetic energy of the field at the very moment of the cosmic bounce. This energy also asymptotically tends to zero, however, in this case in the limit \(t\rightarrow\pm T_{0}\).
Figure 4: Behavior of the scale factors near the LCDM BB initial singularity: standard (with a singularity) for EFSTS and non-standard (with a bounce) for EFSTG.
Figure 5: Behavior of the Hubble parameters near the LCDM BB initial singularity. One sees non-singular evolution for the \(\epsilon=-1\) case.
Figure 6: Hubble horizon for the EFSTG model. The scale factor entering below the horizon at moments marked on the graph represents the wavelength of the fluctuation mode - this is typical behavior for the matter bounce phase [84].
The most significant differences are seen in the early periods of the Universe evolution. In contrast to the LCDM model, a positive maximum was obtained for the model with \(\epsilon=-1\), while for the scalaron field model the parameter of the equation of state [...] but in the early period also exhibits a positive-value character (again, Fig. 3). For both models, the behavior of the equations of state converges to LCDM-like early in the evolution of the Universe (about 13.8 mln years after the Big Bang/Bounce). Also in the case of the effective energy density concept (84), one can see significant differences in the earliest period of the evolution of the Universe. The scalaron field model deviates slightly from the case of the reference model, while the ghost field model reveals the tangible presence of the epoch of dominance of non-standard matter (kinetic energy of the phantom field) (Fig. 8). The maximum of the energy density before the bounce corresponds to the moment when the effective negative density of the ghost (stiff matter) field turns on; during the slow contraction the effective density decreases (probably) to zero (the moment of the bounce) and then increases to a new maximum of the same value as before. When the kinetic energy of the phantom field becomes no longer dominant we return to standard evolution, i.e. the energy density decreases as the size of the Universe increases. As in the case of the reference model (\(\Lambda\)-CDM), the approximate influence of individual components on the global evolution can be represented in a simple way by formulating the effective (Newtonian) potential.
Such a potential is expressed, in the case of our models, as follows: \[U_{\rm eff}\left(a\right)=-\frac{1}{2}\left(\frac{\epsilon}{6}\dot{\Phi}^{2}a ^{2}+\Omega_{\Lambda}a^{2}+\Omega_{\rm R}a^{-2}+\Omega_{\rm BM}a^{-1}\Phi^{ \gamma}\right). \tag{85}\] With the help of Fig. 9, a significant conclusion can be drawn. Namely, the local minimum of the Newtonian-type potential for the EFSTG model could represent the _reheating_ phase of the Universe evolution. This takes place in our case for a redshift approximately equal to \(z=267.75\), which corresponds to the epoch of large-scale structure formation. However, in the case of the model with \(\epsilon=-1\), it should be borne in mind that the scenarios proposed so far for obtaining the large-scale structure of the Universe are based on the condensation effect (the Universe obtains a critical temperature), in which structures are formed "immediately" and not as a result of a long-term process in time [85; 86; 87]. The non-standard new segment in the expression (85) is a component that depends on the type of kinetic energy of the scalar field (canonical or ghost) mimicking dark matter. The crucial characteristic of stiff matter23 is that its energy density dilutes more slowly as the universe expands and thickens more slowly as the universe contracts compared to other forms of matter/energy. Footnote 23: More details on stiff matter cosmology can be found at [88].
Figure 7: Kinetic energy of the scalar field as a decisive factor for the earliest stages of the evolution of the Universe. In the case of the scalaron field, it shows a positive character as for the canonical scalar field and a negative character typical for the ghost fields.
Figure 8: The typical evolution of the energy density function (84) for models with an initial singularity (EFSTS) and the evolution with two maxima for a model with a bounce phase (EFSTG) [23].
An unusual aspect of the effective potential for the EFSTG model is the presence of a positive contribution from the kinetic energy of the scalar field. This results in the possibility of contraction and then, as a result of the decay of part of the energy of the scalar field, accelerated expansion of the Universe. This process can be regarded as the presence of a reheating mechanism in models that are effective approximations of the unknown (now or never?) epoch of quantum gravity. It may also indicate the possibility of the existence of currently unknown states of matter/types of energy in the Universe. ### Cosmological aspects regarding scalaron and phantom (ghost) scalar field Phantom energy characterized by the equation of state parameter \(\omega<-1\) appears in individual braneworld models [89; 90] or Brans-Dicke theory [91]. The simplest possible approach to get this effect is to introduce a ghost24 scalar field with a negative kinetic term in the action [93]. Such a scalar field naturally appears in effective theories originating from type IIA String Theory [94; 95] and the low-energy limit of F-theory reformulated in a 12-D type IIB action [96]. Footnote 24: For a more general discussion from the QFT perspective, see [92]. The phantom (ghost) field in cosmology shows quite interesting properties, e.g. its value of the energy density \(\rho_{\Phi}\) increases with time in the period after the cosmological bounce (Fig.
10), the speed of sound equals the speed of light (although there are models with subluminal values of this speed, for instance [97; 98; 99]), there is also a correlation between such types of fields and the de Sitter-CFT [100]. The fundamental difference in the description of the two fields we are considering is the different form of the equation of state describing the evolution of the scalaron and phantom (ghost). The scalaron field involves the standard form known from the canonical scalar field scenario in cosmology (i.e. \(\epsilon=1\)). This, in the case of our generalization, can be described as follows: \[\omega_{\Phi}^{(\epsilon)}\equiv\frac{p_{\Phi}}{\rho_{\Phi}}=\frac{\frac{1}{2 }\left(\epsilon\dot{\Phi}^{2}-V(\Phi)\right)}{\frac{1}{2}\left(\epsilon\dot{ \Phi}^{2}+V(\Phi)\right)}=1-\frac{2V(\Phi)}{\epsilon\dot{\Phi}^{2}+V(\Phi)}. \tag{86}\] On the other hand, for the "non-canonical" ghost field, we are dealing with a specific form of the equation (86) introducing the changed (negative sign) up front of the kinetic term, namely \(\epsilon=-1\). One can see from the expression (86) that the equation of state for the phantom (ghost) field is indeterminate when the kinetic energy of the scalar field is equal to its potential energy (cosmological constant). Most likely, at this point, a kind of phase transition occurs and the dark matter described by this field goes from effective stiff matter in the epoch of the matter bounce to an effective phantom phase asymptotically transformed into an effective factor incorporated into dark energy (Fig. 11). In the case of the scalaron field, the situation is different. In the earliest period of the evolution, the field also exhibits behavior that characterizes stiff matter (\(\omega_{\Phi}^{\rm(sc)}\simeq 1\)), but then goes through a dust epoch (\(\omega_{\Phi}^{\rm(sc)}\simeq 0\)) and then mimics the behavior as for a cosmological constant (\(\omega_{\Phi}^{\rm(sc)}\simeq-1\)). Thus, it may also represent a kind of _quintessence_ field known from the attempt to unify dark matter and dark energy [101] (again Fig. 11). By considering the evolution of the equation of state for the obtained models with a scalar field depending on the scale factor, it is possible to discuss the problem of observations of galaxies with a high-redshift value, which pose a challenge for the currently recognized theory of the large-scale structure formation of the Universe (alternatively, it may be a contribution to the discussion regarding the age of the Universe/duration of the epoch after a possible cosmological bounce such as in EFSTG model). Figure 11: The scalar field equation of state parameter (86) as a criterion to characterize the different effective (matter) phases of the \(\Phi\) (DM) behavior. PDL stands for Phantom Divide Line. Figure 10: In the case of the matter bounce phase, the value of the energy density and pressure exhibited by the phantom field ((86)) tends to a constant (finite) negative value. In contrast, for the scalaron field (also (86)), we observe a decrease in both values over time. The values of energy density and pressure for both scalar fields tend asymptotically to the same values. As one can see in Fig. 12, the transition between negative and positive value of the \(\omega_{\Phi}\) parameter (for both models) occurs for the redshift of \(z_{0}=20.9238\) (about 168 mlm years after the Big Bang/Bounce). Possible future observational data from the JWST may confirm or dismiss this concept. 
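The different effective phases of \(\Phi\) follow directly from (86). The minimal sketch below (illustrative only; it assumes \(\mathcal{V}=2\Lambda\) expressed in \(H_{0}=1\) units and our fitted \(\Omega_{\Lambda}\)) shows the stiff-matter limit for kinetic domination, the cosmological-constant limit for potential domination, and the point \(\dot{\Phi}^{2}=\mathcal{V}\) where (86) becomes indeterminate for the ghost field.

```python
import numpy as np

# Scalar-field EoS parameter, Eq. (86): w_Phi = (eps*dPhi^2 - V) / (eps*dPhi^2 + V).
# Illustrative values only: V = 2*Lambda with Lambda = 3*H0^2*Om_L and H0 = 1.
Om_L = 0.739
V = 6.0 * Om_L

def w_phi(dPhi, eps):
    kin = eps * dPhi**2
    return (kin - V) / (kin + V)

for eps in (+1, -1):
    print(f"eps = {eps:+d}")
    print("  kinetic-dominated (|dPhi| >> sqrt(V)):", w_phi(1e3, eps))   # ~ +1 (stiff matter)
    print("  potential-dominated (dPhi -> 0)      :", w_phi(1e-3, eps))  # ~ -1 (Lambda-like)

# For the ghost field (eps = -1) the denominator vanishes at dPhi^2 = V, where
# Eq. (86) is indeterminate -- the 'phase transition' point discussed above.
print("critical |dPhi| for eps = -1:", np.sqrt(V))
```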
However, one should also remember about the potential problems associated with phantom (ghost) scalar field. One of the main aspects is the so-called _UV instability_. To this group, we can include: vacuum instability problems due to particle production caused by the lack of a minimum energy density \(\rho_{\Phi}\) constraint, and the presence of a _MeV cut-off_ to avoid producing excess gamma radiation [102]. ## IV Discussion and perspectives In this paper, we performed a detailed study of matter stress-energy non-conservation in ST FLRW universe and its relation with the chameleon mechanism. An explicit solution to this problem is proposed for any kind of dark fluid implemented by an arbitrary generating function \(f\). Two toy models mimicking well LCDM one with the same predictions for the late universe (including supernovae data) and providing a realistic ratio between baryonic and dark matter were analyzed. Differences appear for the early universe (\(a<10^{-1}\)) which, particularly in the CMB epoch (\(a\sim 10^{-3}\)), requires studying of the cosmological perturbations. Preliminary results obtained here by numerical methods show that the scalaron case (\(\epsilon=1\)) suffers the same problems as LCDM: lack of internal inflationary mechanism. This can be cured by modification of self-interaction potential \({\cal V}(\Phi)\neq\) const and considering massive scalar field. In contrast, the phantom (ghost) case (ie. \(\epsilon=-1\)), as expected, offers (matter) bounce scenario instead of big bang. In the Section 2, we also pointed out the need to revise the description of the chameleon mechanism, in which in the general (non-barotropic) case we are not dealing only with a factor that multiplies the energy density of the material content of the Universe. With the help of the so-called generating function, we pointed out the general relations for generating different types of the so-called dark fluid, which treats dark matter and dark energy as manifestations of a single physical phenomenon. The aspect of these dark fluids (e.g., logotropic [67] and Murnaghan [50] fluids) regarding the chameleon mechanism seems worth considering in the future. Reformulations of standard energy conditions for FLRW type cosmological models provided in the Subsection 2.1 may provide an interesting scope for the study of energy conditions per se, as well as for uncommon yet physically interesting proposals of new forms of functions for energy densities in cosmological applications. These conditions are, in principle, a kind of assumptions and not constraints derived from fundamental principles in physics. For this reason, consideration of these conditions could lead to new interesting physical systems and discovering "new" laws of physics. Of the two toy models we have obtained, the more promising seems to be the model belonging to the family of so-called Bounce Cosmology - EFSTG (Einstein Frame Scalar-Tensor Ghost) characterized by a matter (phantom (ghost) scalar field) bounce phase. It constitutes a particular example of an alternative hypothesis to the widely accepted model of cosmological inflation. However, the EFSTS model may be a realization of the description of the dark sector of the Universe known as the quintessence, which, when the more complicated scalar field potential is taken into account, may also exhibit interesting properties. As it has been demonstrated in [103], the only known _supersmoother25_ in modern relativistic cosmology is the phase of slow contraction (ekpyrotic phase [105]). 
When it comes to inflation, the inflaton field quantum fluctuations generate growing mode curvature fluctuations which, consequently, do not allow for the homogeneity of the Universe in the broadest sense. In order for the inflation mechanism to fulfill its primary purpose, we must deal with a strong narrowing of the possible values of the free parameters associated with the self-interaction scalar field potential and a narrow range of allowed inflaton initial velocities. As a result, we face the problem of _fine-tuning_ and _initial conditions_ [106]. In contrast, in models with a non-singular cosmological bounce, neither issue arises. Instead of an initial singularity, we have an ekpyrotic (ultra-slow contraction) phase (geodesic completeness in the mathematical sense), followed by a non-singular bounce, and then there is a release of some of the scalar field energy consumed in the reheating process - this aspect regarding the models introduced in our work should be further examined through the methods of perturbation theory (e.g. via the quasi-static approximation in STT [75]) and dynamical systems, especially regarding the formation of the large-scale structure of the Universe in terms of SFDM models [85; 86; 87].
Figure 12: Dependence of the \(\omega_{\Phi}\) parameter versus the scale factor with an apparent transition between positive and negative values of the equation of state (86).
Furthermore, such an evolution could generate a nearly scale-invariant spectrum of nearly Gaussian density perturbations [104; 107] as in the inflationary scenario. This is one of the crucial observational criteria regarding modern cosmological models. In addition, from a thermodynamics perspective, the cyclic scenario of the Bounce Cosmology [108] could avoid the well-known Tolman entropy problem [109] plaguing earlier attempts to describe a cyclic Universe scenario. It is also a challenge to try to introduce a non-singular (for \(H=0\)) description of entropy during a cosmological bounce (e.g., [110]), which would allow an attempt to gain a deeper understanding of alternatives to cosmological inflation within the foundations of thermodynamics. These issues regarding the formalism proposed in our work should also be addressed in future research. ## Acknowledgements This article is based upon work from COST Action CA21136 - "Addressing observational tensions in cosmology with systematics and fundamental physics (CosmoVerse)", supported by COST (European Cooperation in Science and Technology). AB is supported by the project UMO-2022/45/B/ST2/01067 from the Polish National Science Center (NCN). MP would like to express his sincerest appreciation to Roksana Szwarc for her very helpful and accurate remarks on numerical and symbolic calculations in Wolfram Mathematica, and to Alexander Kozak for inspiring discussions regarding scalar-tensor theories of gravity. We would also like to express our sincere appreciation to Orlando Luongo for his valuable comments regarding dark fluid models and his interest in our article.
2301.13729
Low-rank LQR Optimal Control Design over Wireless Communication Networks
This paper considers a LQR optimal control design problem for distributed control systems with multi-agents. To control large-scale distributed systems such as smart-grid and multi-agent robotic systems over wireless communication networks, it is desired to design a feedback controller by considering various constraints on communication such as limited power, limited energy, or limited communication bandwidth, etc. In this paper, we focus on the reduction of communication energy in an LQR optimal control design problem on wireless communication networks. By considering the characteristic of wireless communication, i.e., Radio Frequency (RF) signal can spread in all directions in a broadcast way, we formulate a low-rank LQR optimal control model to reduce the communication energy in the distributed feedback control system. To solve the problem, we propose an Alternating Direction Method of Multipliers (ADMM) based algorithm. Through various numerical experiments, we demonstrate that a feedback controller designed using low-rank structure can outperform the previous work on sparse LQR optimal control design, which focuses on reducing the number of communication links in a network, in terms of energy consumption, system stability margin against noise and error in communication.
Myung Cho, Abdallah Abdallah, Mohammad Rasouli
2023-01-31T16:10:08Z
http://arxiv.org/abs/2301.13729v1
# Low-rank LQR Optimal Control Design over Wireless Communication Networks ###### Abstract This paper considers a LQR optimal control design problem for distributed control systems with multi-agents. To control large-scale distributed systems such as smart-grid and multi-agent robotic systems over wireless communication networks, it is desired to design a feedback controller by considering various constraints on communication such as limited power, limited energy, or limited communication bandwidth, etc. In this paper, we focus on the reduction of communication energy in an LQR optimal control design problem on wireless communication networks. By considering the characteristic of wireless communication, i.e., Radio Frequency (RF) signal can spread in all directions in a broadcast way, we formulate a low-rank LQR optimal control model to reduce the communication energy in the distributed feedback control system. To solve the problem, we propose an Alternating Direction Method of Multipliers (ADMM) based algorithm. Through various numerical experiments, we demonstrate that a feedback controller designed using low-rank structure can outperform the previous work on sparse LQR optimal control design, which focuses on reducing the number of communication links in a network, in terms of energy consumption, system stability margin against noise and error in communication. optimal control, LQR, least quadratic regulator, low rank optimal control, distributed control system, feedback matrix design ## I Introduction Design of feedback control systems has been studied for several decades and applied to various applications in autonomous vehicles, power plants, and robots to name a few. Optimal control design methods can be used optimizing various objective functions (e.g., Linear Quadratic Regulator (LQR), \(\mathcal{H}_{2}\) norm or \(\mathcal{H}_{\infty}\) norm) to meet some design criteria in a feedback controller. Unlike optimal control design for conventional systems that are studied in [1, 2, 3] and references therein, recent control systems can be very different from the previous ones in various aspects. Recent systems are larger in scale, distributed, ubiquitous, and connected via wireless communication network. The evolution of wireless communication devices such as cellular and Internet of Things (IoTs) devices significantly contribute to recent changes in control systems. Thus, recent control systems may have multi-agents distributed in large-scale topology, which communicate over wireless communication networks. With the new paradigm of distributed systems, we face new challenges including the communication energy overhead, response time and delay, privacy and security issues, etc. To address the new challenges and issues in distributed multi-agents control systems, especially reducing communication burden, several research studies have been conducted. The main focus has been on the network connectivity aspect of designing a feedback control system. More specifically, in [4, 5, 6, 7, 8, 9, 10, 11, 12, 13] and the references therein, LQR control designs with predetermined structure of network topologies were studied. 
In order to reduce the number of communication links in a distributed multi-agents system, the proposed solutions in [14, 15, 16, 17] took into account sparse LQR control design models which simultaneously minimize LQR cost as well as sparsity level of the network topology by considering the sparsity condition on the topology as a regularization term or as a constraint in optimization problems. The research studies conducted so far raise the following research question: "Is reducing the number of communication links among agents helping to reduce the total energy consumed in both control and communication operations?" For example, with the reduced number of communication links, we may reduce the communication energy, but, what if we need to spend more energy, e.g., LQR cost, in control? Then, reducing the communication links may not result in reducing the total energy spent in the whole process. In this paper, we attempt to answer this question in a distributed control system with multi-agents connected via a wireless communication network. The idea is that when wireless nodes run in a broadcast mode, the increase in network coverage may help to reduce communication energy and delay with minimal impact on the LQR cost, compared to the standard LQR optimal control design and sparse LQR control design. Therefore, we formulate low-rank LQR optimal control design problems, and propose algorithms to solve the problems. The contribution of this paper is three-fold. First, we introduce new LQR optimal control problems, which we call "low-rank LQR" control design problems. The sparse LQR control design applies sparseness to the structure of a feedback matrix, while in the low-rank LQR control design problems, we consider low-rankness on the feedback matrix, which can be interpreted as a controller with communication in a broadcast mode. Secondly, to solve these novel optimization problems, we introduce Alternating Direction Method of Multipliers (ADMM) algorithms. Finally, we demonstrate that under various wireless communication scenarios, our proposed method outperforms the previous solutions that utilize standard LQR control and sparse LQR control designs. The rest of the paper is organized as follows. Section II introduces the problem statement for the optimal control design that minimizes the LQR cost with a low-rank constraint on communication networks, and describes how this structure of a wireless network is interpreted in control. In Section III, we briefly review the previous research on standard LQR and sparse LQR control designs. In Section IV, we describe the merit of the low-rank LQR control design against the standard LQR and the sparse LQR control designs, and propose the ADMM based algorithm to solve the low-rank LQR optimal control problems. In Section V, we provide numerical experiment results demonstrating the performance of our proposed work against the standard LQR control and the sparse LQR control designs under various communication scenarios. Finally, Section VI concludes the paper and introduce possible future research directions. **Notations**: \(\mathbb{R}\) and \(\mathbb{C}\) are reserved for the sets of real numbers and complex numbers respectively. We denote a scalar, a vector, and a matrix as a non-bold letter, a bold small letter, and a bold capital letter respectively, e.g., \(x\) or \(X\) for a scalar, \(\mathbf{x}\) for a vector, \(\mathbf{X}\) for a matrix. We denote \(\mathrm{Re}(\cdot)\) as the real part of a complex value. 
We use the super-script \(T\) for transpose. For a matrix \(\mathbf{A}\in\mathbb{R}^{m\times n}\), we use Frobenius norm as \(||\mathbf{A}||_{F}\), element-wise \(\ell_{1}\) norm as \(\|\mathbf{A}\|_{1}\), i.e., \(\|\mathbf{A}\|_{1}=\sum_{i,j}|A_{i,j}|\), and nuclear norm as \(\|\mathbf{A}\|_{*}\), i.e., sum of singular values of \(\mathbf{A}\), respectively. We reserve \(\mathbf{I}\) for the identity matrix. A feasible set of feedback matrices \(\mathbf{K}\)'s with asymptotic stability is denoted as \(\mathcal{F}\), i.e., \(\mathcal{F}:=\{\mathbf{K}\mid\max(\mathrm{Re}(\lambda(\mathbf{A}-\mathbf{B}_{1}\mathbf{K}))) <0\}\), where \(\lambda(\mathbf{A}-\mathbf{B}_{1}\mathbf{K})\) represents the eigenvalue of the closed-loop state matrix \(\mathbf{A}-\mathbf{B}_{1}\mathbf{K}\). For a matrix \(\mathbf{Q}\), \(\mathbf{Q}\succeq 0\) and \(\mathbf{Q}\succ 0\) represent a symmetric positive semidefinite matrix and a symmetric positive definite matrix respectively. \(\otimes\) represents the Kronecker product. ## II Problem Statement In this paper, we consider a distributed control system with multiple agents where we need a communication network to share feedback signals among the agents to stabilize the whole system in a feedback loop. This system can be expressed as the following state space representation: \[\dot{\mathbf{x}}(t) =\mathbf{A}\mathbf{x}(t)+\mathbf{B}_{1}\mathbf{u}(t)+\mathbf{B}_{2}\mathbf{w}(t),\;(\mathbf{x }(0)=\mathbf{B}_{2}\mathbf{w}(0)),\] \[\mathbf{y}(t) =\mathbf{C}\mathbf{x}(t)+\mathbf{D}\mathbf{u}(t),\] \[\mathbf{u}(t) =-\mathbf{K}\mathbf{x}(t), \tag{1}\] where \(\mathbf{x}(t)\), \(\dot{\mathbf{x}}(t)\), and \(\mathbf{u}(t)\) are the state vector, its derivative with respect to time, and the input vector respectively. We organize the system state \(\mathbf{x}(t)\) (resp. its derivative) by stacking the states (resp. its derivatives) of each agent in a vector as shown in Fig. 1(b). The system input \(\mathbf{u}(t)\) at time \(t\) is also organized by stacking the inputs of agents in the system as shown in Fig. 1(b). \(\mathbf{w}(t)\) is disturbance at time \(t\) with i.i.d. Gaussian distribution \(\mathcal{N}(0,\mathbf{I})\). Also, \(\mathbf{y}(t)\) is the output of the system at time \(t\). Correspondingly, a state matrix, input matrix, and disturbance are denoted by \(\mathbf{A}\in\mathbb{R}^{n\times n}\), \(\mathbf{B}_{1}\in\mathbb{R}^{n\times m}\) and \(\mathbf{B}_{2}\in\mathbb{R}^{n\times l}\) respectively. \(\mathbf{C}\) and \(\mathbf{D}\) are output matrices. \(\mathbf{K}\in\mathbb{R}^{m\times n}\) represents a feedback matrix. Throughout the paper, we assume that \((\mathbf{A},\mathbf{B}_{1})\) is stabilizable and \((\mathbf{A},\mathbf{Q}^{1/2})\) is detectable. The goal here is to find a feedback matrix \(\mathbf{K}\) that makes not only the whole system asymptotically stable but also satisfies certain conditions, e.g., low LQR cost, low-rank, and sparsity, etc. The state space representation (II) can be restated as \[\dot{\mathbf{x}}(t) =(\mathbf{A}-\mathbf{B}_{1}\mathbf{K})\mathbf{x}(t)+\mathbf{B}_{2}\mathbf{w}(t),\;(\mathbf{x }(0)=\mathbf{B}_{2}\mathbf{w}(0)),\] \[\mathbf{y}(t) =(\mathbf{C}-\mathbf{D}\mathbf{K})\mathbf{x}(t). \tag{2}\] Due to the way how we organize the system input \(\mathbf{u}(t)\) and the system state \(\mathbf{x}(t)\) as shown in Fig. 1(b), non-zero entries in off-diagonal (or off-block-diagonal) of the feedback matrix \(\mathbf{K}\) pertain to communication links, i.e., dotted arrows in Fig. 1(a), among agents in the distributed system. 
For instance, if \(u_{i}(t)\) and \(x_{i}(t)\) are single-output functions of \(t\), for all \(i\), and we have a non-zero entry in the \(i\)-th row and the \(j\)-th column of the feedback matrix \(\mathbf{K}\), then there needs to be communication from agent \(\mathcal{A}_{j}\) to agent \(\mathcal{A}_{i}\). Basically, the structure of the feedback matrix \(\mathbf{K}\) can be related to the communication links in a distributed system with multiple agents. From (II), the LQR cost in the infinite time domain is defined as follows: \[J_{0}(\mathbf{K}) :=\int_{t=0}^{\infty}\mathbf{x}(t)^{T}\mathbf{Q}\mathbf{x}(t)+\mathbf{u}(t)^{T} \mathbf{R}\mathbf{u}(t)\;dt\] \[=\int_{t=0}^{\infty}\mathbf{x}(t)^{T}(\mathbf{Q}+\mathbf{K}^{T}\mathbf{R}\mathbf{K}) \mathbf{x}(t)\;dt \tag{3}\] where \(\mathbf{Q}\succeq 0\in\mathbb{R}^{n\times n}\) and \(\mathbf{R}\succ 0\in\mathbb{R}^{m\times m}\) are given performance weight matrices. If \(\mathbf{Q}\) and \(\mathbf{R}\) are identity matrices, then the LQR cost can be simply understood as the energy of a control system expected to be consumed in infinite time. Remark that the squared term on \(\mathbf{x}(t)\) can be related to the power of the signal \(\mathbf{x}(t)\), and the integral of the power over time can be interpreted as an energy-related cost.
Fig. 1: Illustration of a distributed system with four agents denoted by \(\mathcal{A}_{1}\), \(\mathcal{A}_{2}\), \(\mathcal{A}_{3}\) and \(\mathcal{A}_{4}\), where a dotted arrow represents a feedback signal from one agent to another or an internal feedback signal. In (b), \(\mathbf{u}_{i}(t)\) and \(\mathbf{x}_{i}(t)\) represent input and state vectors of the \(i\)-th agent, and each agent has its derivative \(\dot{\mathbf{x}}_{i}(t)\) and integration part inside, which are omitted in (a).
With the introduction of a matrix \(\mathbf{P}\in\mathbb{R}^{n\times n}\) such that \(\frac{d}{dt}\mathbf{x}(t)^{T}\mathbf{P}\mathbf{x}(t)=-\mathbf{x}(t)^{T}(\mathbf{Q}+\mathbf{K}^{T}\mathbf{R} \mathbf{K})\mathbf{x}(t)\), we can express the expectation of \(J_{0}(\mathbf{K})\), denoted by \(J(\mathbf{K})\), over the disturbance as follows: \[J(\mathbf{K}) :=\mathbb{E}\bigg{[}\int_{t=0}^{\infty}-\frac{d}{dt}\mathbf{x}(t)^{T} \mathbf{P}\mathbf{x}(t)dt\bigg{]}=\mathbb{E}\bigg{[}\mathbf{x}(0)^{T}\mathbf{P}\mathbf{x}(0)\bigg{]}\] \[=\mathrm{Tr}(\mathbf{B}_{2}^{T}\mathbf{P}\mathbf{B}_{2}), \tag{4}\] where \(\lim_{t\to\infty}\mathbf{x}(t)=\mathbf{0}\) by the assumption of an asymptotically stable feedback system, and the final equality is obtained from the fact that \(\mathbf{x}(0)=\mathbf{B}_{2}\mathbf{w}(0)\) and \(\mathbb{E}[\mathbf{w}(0)\mathbf{w}(0)^{T}]=\mathbf{I}\). Since we have \(\frac{d}{dt}\mathbf{x}(t)^{T}\mathbf{P}\mathbf{x}(t)=\dot{\mathbf{x}}(t)^{T}\mathbf{P}\mathbf{x}(t)+ \mathbf{x}(t)^{T}\mathbf{P}\dot{\mathbf{x}}(t)\), where \(\dot{\mathbf{x}}(t)=(\mathbf{A}-\mathbf{B}_{1}\mathbf{K})\mathbf{x}(t)+\mathbf{B}_{2}\mathbf{w}(t)\) in (2) and \(\mathbb{E}[\mathbf{w}(t)]=\mathbf{0}\), we have the following well-known Lyapunov equation over \(\mathbf{K}\in\mathbb{R}^{m\times n}\) and \(\mathbf{P}\in\mathbb{R}^{n\times n}\): \[(\mathbf{A}-\mathbf{B}_{1}\mathbf{K})^{T}\mathbf{P}+\mathbf{P}(\mathbf{A}-\mathbf{B}_{1}\mathbf{K})+\mathbf{Q}+\bm {K}^{T}\mathbf{R}\mathbf{K}=\mathbf{0}, \tag{5}\] where \(\mathbf{P}\) needs to be strictly positive definite.
This Lyapunov equation can also be restated as \((\mathbf{I}\otimes(\mathbf{A}-\mathbf{B}_{1}\mathbf{K})^{T}+(\mathbf{A}-\mathbf{B}_{1}\mathbf{K})^{T}\otimes\mathbf{I})\operatorname{vec}(\mathbf{P})=-\operatorname{vec}(\mathbf{Q}+\mathbf{K}^{T}\mathbf{R}\mathbf{K})\), where \(\operatorname{vec}(\cdot)\) is the vectorization operator that stacks the columns of a matrix. From this equation, it is also seen that if the feedback matrix \(\mathbf{K}\) is in \(\mathcal{F}\), then all eigenvalues of the feedback system matrix \(\mathbf{A}-\mathbf{B}_{1}\mathbf{K}\) have negative real parts. In that case, the sum of any two eigenvalues of the feedback system matrix is non-zero, which implies that \((\mathbf{I}\otimes(\mathbf{A}-\mathbf{B}_{1}\mathbf{K})^{T}+(\mathbf{A}-\mathbf{B}_{1}\mathbf{K})^{T}\otimes\mathbf{I})\) is non-singular [18]. This indicates that for a given feedback matrix \(\mathbf{K}\in\mathcal{F}\), there exists a unique matrix \(\mathbf{P}\). Hence, we consider \(\mathbf{P}\) as a function of \(\mathbf{K}\), denoted by \(\mathbf{P}(\mathbf{K})\). Then, we introduce the LQR minimization problem with a regularization term for \(\mathbf{K}\) as follows: \[\underset{\mathbf{K}}{\text{minimize}} J(\mathbf{K})+\gamma G(\mathbf{K})\] \[\text{subject to} \mathbf{K}\in\mathcal{F}, \tag{6}\] where \(J(\mathbf{K})=\mathrm{Tr}(\mathbf{B}_{2}^{T}\mathbf{P}(\mathbf{K})\mathbf{B}_{2})\), \(\mathbf{P}(\mathbf{K})\) needs to satisfy the Lyapunov equation (5), \(G(\mathbf{K})\) is a regularization term for the structure of the feedback matrix \(\mathbf{K}\), and \(\gamma\geq 0\) is a tuning parameter weighting the regularization term. Since the off-diagonal (or off-block-diagonal) part of the feedback matrix \(\mathbf{K}\) is related to the communication links, let us decompose the feedback matrix \(\mathbf{K}\) into the sum of a diagonal matrix and a low-rank matrix as follows: \[\mathbf{K}=\mathbf{K}_{diag}+\mathbf{K}_{low}. \tag{7}\] Then, we propose the low-rank LQR control design problem as follows: \[\underset{\mathbf{K},\mathbf{K}_{low},\mathbf{K}_{diag}}{\text{minimize}} J(\mathbf{K})+\gamma\|\mathbf{K}_{low}\|_{*}\] \[\text{subject to} \mathbf{K}\in\mathcal{F},\] \[\mathbf{K}=\mathbf{K}_{diag}+\mathbf{K}_{low}, \tag{8}\] where the nuclear norm is used for the regularization term. By instead imposing the rank of \(\mathbf{K}_{low}\) as a constraint, we have \[\underset{\mathbf{K},\mathbf{K}_{low},\mathbf{K}_{diag}}{\text{minimize}} J(\mathbf{K})\] \[\text{subject to} \mathbf{K}\in\mathcal{F},\] \[\mathbf{K}=\mathbf{K}_{diag}+\mathbf{K}_{low},\] \[rank(\mathbf{K}_{low})=r, \tag{9}\] where \(rank(\cdot)\) represents the rank of a matrix. Our goal in this paper is to find a feedback matrix \(\mathbf{K}\) whose decomposition is expressed as a (block) diagonal matrix plus a low-rank matrix by solving the low-rank LQR optimal control design problem (8) or (9). In the next section, we will introduce the standard LQR control design, which can provide the minimum LQR cost but with heavy communication links, and the sparse LQR control design, which can provide a trade-off solution between the LQR cost and the number of communication links in the control of a distributed system. ## III Previous Research on LQR Control Design By setting \(\gamma\) to 0 in (6), the optimization problem (6) becomes the standard LQR optimal control design problem. The standard LQR optimal control design problem has been studied for several decades.
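For reference, the \(\gamma=0\) baseline of (6), i.e., the standard LQR design, can be sketched in Python via the continuous-time algebraic Riccati equation; this is an illustrative stand-in (an assumption of this sketch, not the authors' code) for the Matlab `lqr` routine used later for initialization in the experiments.

```python
import numpy as np
from scipy.linalg import solve_continuous_are


def standard_lqr_gain(A, B1, Q, R):
    """Dense LQR gain for the gamma = 0 case of (6), i.e., the standard LQR design."""
    # Solve the continuous-time algebraic Riccati equation
    #   A^T P + P A - P B1 R^{-1} B1^T P + Q = 0,
    # then the minimizing feedback is K = R^{-1} B1^T P.
    P = solve_continuous_are(A, B1, Q, R)
    return np.linalg.solve(R, B1.T @ P)
```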
Since there is no regularization term for \(\mathbf{K}\), the standard design can provide a feedback matrix \(\mathbf{K}\) with the minimum LQR cost. However, the feedback matrix obtained from this standard LQR control design is normally a dense matrix. From the perspective of the communication network, this normally implies heavy communication links among agents in the control of a distributed system, which requires a large amount of communication. To reduce the number of communication links in a network, the sparse LQR optimal control design problem or its variations have been studied in previous research such as [14, 15, 16, 17] and references therein. In order to obtain a sparse feedback matrix, which corresponds to a reduced number of communication links in a network, the element-wise \(\ell_{1}\) norm or its variations were considered for the regularization term \(G(\mathbf{K})\), e.g., \(G(\mathbf{K})=\|\mathbf{K}\|_{1}\), the \(\ell_{1}\) norm of the off-diagonal part of \(\mathbf{K}\), or a column-wise \(\ell_{1}\) norm. To solve the sparse LQR optimal control problem, previous research considered the ADMM technique in [14], the Iterative Shrinkage Thresholding Algorithm (ISTA) in [16], and Gradient Support Pursuit (GraSP) in [15], which can successfully provide a trade-off solution between the LQR cost and the level of sparsity of the feedback matrix \(\mathbf{K}\). However, it is questionable whether reducing the number of communication links is always beneficial for reducing the total energy consumed in a distributed system. To answer this question, in the next sections we explain why the low-rank LQR optimal control design can play an important role in reducing the total energy consumption compared with the standard and the sparse LQR control designs, especially in a distributed system over a wireless communication network. ## IV Low-rank LQR Optimal Control Design In this section, we introduce the interpretation of the low-rank \(\mathbf{K}_{low}\) in the control of a distributed system. Before the introduction, it is noteworthy that we decompose the feedback matrix \(\mathbf{K}\) into \(\mathbf{K}_{diag}\) and \(\mathbf{K}_{low}\), where \(\mathbf{K}_{diag}\) has non-zero entries only on the diagonal (or block diagonal), which can be linked to a feedback loop inside each individual agent, i.e., internal feedback, and \(\mathbf{K}_{low}\) is related to the communication links among agents, which we would like to reduce. Then, in order to see the physical meaning of the low-rank feedback matrix \(\mathbf{K}_{low}\) in communication, let us consider the rank-1 case first, i.e., \(rank(\mathbf{K}_{low})=1\). In this case, we can express \(\mathbf{K}_{low}\in\mathbb{R}^{m\times n}\) as follows: \[\mathbf{K}_{low}=\begin{bmatrix}a_{1}\\ a_{2}\\ \vdots\\ a_{m}\end{bmatrix}\begin{bmatrix}b_{1}&b_{2}&\cdots&b_{n}\end{bmatrix}, \tag{10}\] where \(a_{i},b_{j}\), \(i=1,...,m\), \(j=1,...,n\), are arbitrary numbers.
Then, the feedback signal \(\mathbf{u}(t)\) is expressed as follows: \[\mathbf{u}(t) =-\mathbf{K}\mathbf{x}(t)=-(\mathbf{K}_{diag}+\mathbf{K}_{low})\mathbf{x}(t)\] \[=-\mathbf{K}_{diag}\mathbf{x}(t)-\begin{bmatrix}a_{1}\\ a_{2}\\ \vdots\\ a_{m}\end{bmatrix}\begin{bmatrix}b_{1}&b_{2}&\cdots&b_{n}\end{bmatrix}\mathbf{x}(t)\] \[=-\underbrace{\mathbf{K}_{diag}\mathbf{x}(t)}_{\text{Internal feedback}}-\underbrace{\begin{bmatrix}a_{1}\\ a_{2}\\ \vdots\\ a_{m}\end{bmatrix}b_{1}x_{1}(t)}_{(A)}-\cdots-\underbrace{\begin{bmatrix}a_{1}\\ a_{2}\\ \vdots\\ a_{m}\end{bmatrix}b_{n}x_{n}(t)}_{(B)},\] where \(\mathbf{x}(t)=[x_{1}(t),x_{2}(t),\cdots,x_{n}(t)]^{T}\), and the terms \((A)\) and \((B)\) represent the external feedback signals sent to every agent from agent \(1\) and agent \(n\) respectively. After solving (6), we can obtain \(\mathbf{K}_{low}\). At the initial stage (at time 0), each node can share the scale information, i.e., the column vector in \((A)\) or \((B)\), just one time. Then, each agent has the scale information from the other agents and can use it through the infinite time period. Since sharing the scale information for each agent is a one-time operation performed as part of the initialization, the communication burden for sharing the scale information is limited when compared to the communication burden of the control operation over the infinite time horizon. In the case of Fig. 2, where \(m=4\) and \(\mathbf{K}_{low}\) is rank-1, the number of scale values to share in the initial stage is just three: \(a_{2}\), \(a_{3}\), and \(a_{4}\). When we have a distributed control system over a wireless network, agents share communication channels in a shared-medium broadcast pattern, where signals spread in every direction over space. Therefore, for the term \((A)\), agent 1, i.e., \(\mathcal{A}_{1}\), can share its scaled state \(b_{1}x_{1}(t)\) at time \(t\) in a broadcast manner, as shown in Fig. 2. Namely, we do not need to send the state of agent 1, i.e., \(x_{1}(t)\), to each agent one by one, which would cause severe communication delay in control over wireless and could lead to performance deterioration and/or instability of the system due to the delay [19, 20, 21, 22, 23]. In terms of the energy consumed in communication, since the power density of a wireless signal is proportional to the inverse square of the distance [24], the transmission power can be determined by the maximum distance. In the case of Fig. 2, this is the distance between agent 1, \(\mathcal{A}_{1}\), and agent 3, \(\mathcal{A}_{3}\), and this distance can be related to the energy consumption in communication. Since, in the rank-1 structure, agent 1 can broadcast its state once to every other agent with the power required to reach agent 3, every other node on the wireless communication network can receive the state of agent 1. In contrast, in the standard or the sparse LQR control design, it is required to separately send the state of an agent to the other agents. Therefore, in the standard or the sparse LQR control design, the energy consumption and delay in communication can be much larger than those of the low-rank LQR control design on a wireless network. Basically, thanks to the low-rank structure of the feedback matrix \(\mathbf{K}_{low}\), we can reduce the communication delay as well as the communication energy compared to the case of separately sharing the state information from one agent to another.
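To make the broadcast interpretation concrete, the following sketch (illustrative; \(\mathbf{a}\) and \(\mathbf{b}\) denote the rank-1 factors of \(\mathbf{K}_{low}=\mathbf{a}\mathbf{b}^{T}\), assumed already computed) shows that each agent \(j\) only needs to broadcast the single scalar \(b_{j}x_{j}(t)\), after which every agent forms its input from locally stored quantities.

```python
import numpy as np


def rank1_feedback(K_diag, a, b, x):
    """u(t) = -K_diag x(t) - a (b^T x(t)) for rank-1 K_low = a b^T.

    broadcast_scalars[j] = b[j] * x[j] is the only value agent j sends over the air;
    agent i then combines the received sum with its locally stored scale a[i].
    """
    broadcast_scalars = b * x                # one scalar per agent, broadcast once per step
    external = a * broadcast_scalars.sum()   # external feedback, the terms (A), ..., (B) above
    internal = -K_diag @ x                   # internal feedback inside each agent
    return internal - external
```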
Additionally, in a wireless network, the communication delay caused by channel sharing can be minimized by using orthogonal carrier signals among agents. In the case of a rank-\(r\) \(\mathbf{K}_{low}\), we can express \(\mathbf{K}_{low}\) as \[\mathbf{K}_{low}=\sum_{k=1}^{r}\mathbf{a}_{k}\mathbf{b}_{k}^{T}, \tag{11}\] where \(\mathbf{a}_{k}\in\mathbb{R}^{m\times 1}\) and \(\mathbf{b}_{k}\in\mathbb{R}^{n\times 1}\) are arbitrary vectors. For the feedback signal \(\mathbf{u}(t)\), we have \[\mathbf{u}(t) =-\mathbf{K}\mathbf{x}(t)=-(\mathbf{K}_{diag}+\mathbf{K}_{low})\mathbf{x}(t)\] \[=-\mathbf{K}_{diag}\mathbf{x}(t)-\sum_{k=1}^{r}\mathbf{a}_{k}\mathbf{b}_{k}^{T}\mathbf{x}(t)\] \[=-\underbrace{\mathbf{K}_{diag}\mathbf{x}(t)}_{\text{Internal feedback}}-\underbrace{\sum_{k=1}^{r}\mathbf{a}_{k}b_{k,1}x_{1}(t)}_{(A)}-\cdots-\underbrace{\sum_{k=1}^{r}\mathbf{a}_{k}b_{k,n}x_{n}(t)}_{(B)},\] where \(b_{k,i}\) represents the \(i\)-th element of the vector \(\mathbf{b}_{k}\), and \((A)\) and \((B)\) are again the external feedback signals. In this case, even though the amount of scale information to share at the initial stage is increased, with a low-rank feedback matrix the data to share at this early stage remains limited, in a manner similar to the rank-1 case, when compared to the communication energy over the infinite time horizon. The number of scale values to share is \(O(mr)\), where \(m\) is the number of agents. In the next subsection, we introduce the ADMM-based algorithm to solve the low-rank LQR control design problems introduced in (8) and (9). Fig. 2: Illustration of the external feedback signals, due to \(\mathbf{K}_{low}\), from agent 1 to the other agents at time \(t\) with a scalar factor \(b_{1}\). ### _ADMM-based Algorithm to Solve Low-rank LQR Optimal Control Design Problem_ In order to solve the low-rank LQR optimal control problem (8), we can use the ADMM technique [25]. Since the objective function is already separable between \(J(\mathbf{K})\) and \(\|\mathbf{K}_{low}\|_{*}\), from (8), we have the augmented Lagrangian \(L_{a}(\mathbf{K},\mathbf{K}_{diag},\mathbf{K}_{low},\mathbf{\Lambda})\), where \(\mathbf{\Lambda}\) is a dual variable, as follows: \[L_{a}(\mathbf{K},\mathbf{K}_{diag},\mathbf{K}_{low},\mathbf{\Lambda})\] \[=J(\mathbf{K})+\gamma\|\mathbf{K}_{low}\|_{*}+\langle\mathbf{K}-\mathbf{K}_{diag}-\mathbf{K}_{low},\mathbf{\Lambda}\rangle\] \[\qquad+\frac{\rho}{2}\|\mathbf{K}-\mathbf{K}_{diag}-\mathbf{K}_{low}\|_{F}^{2}. \tag{12}\] Then, with the augmented Lagrangian, we have the following steps for updating the variables in ADMM: \[\mathbf{K}^{(t+1)}=\underset{\mathbf{K}\in\mathcal{F}}{\text{argmin}}\ L_{a}(\mathbf{K},\mathbf{K}^{(t)}_{diag},\mathbf{K}^{(t)}_{low},\mathbf{\Lambda}^{(t)}) \tag{13}\] \[\mathbf{K}^{(t+1)}_{diag}=\underset{\mathbf{K}_{diag}}{\text{argmin}}\ L_{a}(\mathbf{K}^{(t+1)},\mathbf{K}_{diag},\mathbf{K}^{(t)}_{low},\mathbf{\Lambda}^{(t)})\] (14) \[\mathbf{K}^{(t+1)}_{low}=\underset{\mathbf{K}_{low}}{\text{argmin}}\ L_{a}(\mathbf{K}^{(t+1)},\mathbf{K}^{(t+1)}_{diag},\mathbf{K}_{low},\mathbf{\Lambda}^{(t)})\] (15) \[\mathbf{\Lambda}^{(t+1)}=\mathbf{\Lambda}^{(t)}+\rho(\mathbf{K}^{(t+1)}-\mathbf{K}^{(t+1)}_{diag}-\mathbf{K}^{(t+1)}_{low}), \tag{16}\] where the superscripts \((t)\) and \((t+1)\) indicate the \(t\)-th and \((t+1)\)-th iterations respectively.
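A high-level sketch of the iteration (13)-(16) is shown below (an illustrative skeleton under the assumption of a plain diagonal \(\mathbf{K}_{diag}\), not the authors' implementation); the three `update_*` callables stand for the sub-problem solutions detailed in the remainder of this subsection, and the loop terminates on small primal and dual residuals as in the stopping criteria given next.

```python
import numpy as np


def low_rank_lqr_admm(K0, update_K, update_K_diag, update_K_low, rho,
                      max_iter=500, eps_pri=1e-4, eps_dual=1e-4):
    """Skeleton of the ADMM updates (13)-(16) for the low-rank LQR design problem (8)."""
    K = K0.copy()
    K_diag = K0 * np.eye(*K0.shape)          # start from the diagonal part of K0
    K_low = K - K_diag
    Lam = np.zeros_like(K)                   # dual variable Lambda
    for _ in range(max_iter):
        K = update_K(K_diag, K_low, Lam)                   # step (13)
        K_diag_new = update_K_diag(K, K_low, Lam)          # step (14)
        K_low_new = update_K_low(K, K_diag_new, Lam)       # step (15)
        Lam = Lam + rho * (K - K_diag_new - K_low_new)     # step (16)
        primal = np.linalg.norm(K - K_diag_new - K_low_new)
        dual = np.linalg.norm((K_diag_new + K_low_new) - (K_diag + K_low))
        K_diag, K_low = K_diag_new, K_low_new
        if primal < eps_pri and dual < eps_dual:
            break        # stability of K_diag + K_low should also be verified
    return K, K_diag, K_low
```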
We run the aforementioned updating steps until the following stopping criteria are met: \[\|\mathbf{K}^{(t+1)}-\mathbf{K}^{(t+1)}_{diag}-\mathbf{K}^{(t+1)}_{low}\|_{F}\leq\epsilon_{pri}, \tag{17}\] \[\|\mathbf{K}^{(t+1)}_{diag}+\mathbf{K}^{(t+1)}_{low}-\mathbf{K}^{(t)}_{diag}-\mathbf{K}^{(t)}_{low}\|_{F}\leq\epsilon_{dual},\] (18) \[\mathbf{K}_{diag}+\mathbf{K}_{low}\in\mathcal{F}, \tag{19}\] where \((\mathbf{K}^{(t+1)}-\mathbf{K}^{(t+1)}_{diag}-\mathbf{K}^{(t+1)}_{low})\) and \((\mathbf{K}^{(t+1)}_{diag}+\mathbf{K}^{(t+1)}_{low}-\mathbf{K}^{(t)}_{diag}-\mathbf{K}^{(t)}_{low})\) represent the primal residual and the dual residual at the \((t+1)\)-th iteration, and \(\epsilon_{pri}\) and \(\epsilon_{dual}\) are small feasibility tolerances for the primal and dual residuals respectively. In detail, for (13), we solve the following optimization problem: \[\underset{\mathbf{K}\in\mathcal{F},\mathbf{P}}{\text{minimize}}\ \ \mathrm{Tr}(\mathbf{B}_{2}^{T}\mathbf{P}\mathbf{B}_{2})+\langle\mathbf{K},\mathbf{\Lambda}^{(t)}\rangle+\frac{\rho}{2}\|\mathbf{K}-\mathbf{K}^{(t)}_{diag}-\mathbf{K}^{(t)}_{low}\|_{F}^{2} \tag{20}\] \[\text{subject to}\quad(\mathbf{A}-\mathbf{B}_{1}\mathbf{K})^{T}\mathbf{P}+\mathbf{P}(\mathbf{A}-\mathbf{B}_{1}\mathbf{K})+\mathbf{Q}+\mathbf{K}^{T}\mathbf{R}\mathbf{K}=0.\] In order to obtain the gradient with respect to \(\mathbf{K}\), we introduce a new variable \(\mathbf{L}\) which needs to satisfy the following Lyapunov equation: \[(\mathbf{A}-\mathbf{B}_{1}\mathbf{K})\mathbf{L}+\mathbf{L}(\mathbf{A}-\mathbf{B}_{1}\mathbf{K})^{T}+\mathbf{B}_{2}\mathbf{B}_{2}^{T}=0. \tag{21}\] Then, the gradient of the objective function is given as follows [1]: \[\nabla_{\mathbf{K}}L_{a}(\mathbf{K},\mathbf{K}^{(t)}_{diag},\mathbf{K}^{(t)}_{low},\mathbf{\Lambda}^{(t)}) \tag{22}\] \[=2[\mathbf{R}\mathbf{K}-\mathbf{B}_{1}^{T}\mathbf{P}(\mathbf{K})]\mathbf{L}(\mathbf{K})+\mathbf{\Lambda}^{(t)}+\rho(\mathbf{K}-\mathbf{K}^{(t)}_{diag}-\mathbf{K}^{(t)}_{low}),\] where \(\mathbf{P}(\mathbf{K})\) and \(\mathbf{L}(\mathbf{K})\) are functions of \(\mathbf{K}\) satisfying the Lyapunov equations (5) and (21) respectively. By considering the first-order condition at an optimal solution, i.e., that the gradient at an optimal solution needs to be zero, we can obtain an optimal solution for \(\mathbf{K}^{(t+1)}\) by using a well-known fixed-point iterative method, the so-called Anderson-Moore algorithm [1]. For \(\mathbf{K}^{(t+1)}_{diag}\), by taking into account the first-order condition at \(\mathbf{K}^{(t+1)}_{diag}\), we can find an optimal solution for \(\mathbf{K}^{(t+1)}_{diag}\) satisfying the following first-order condition: \[\mathbf{\Lambda}^{(t)}+\rho(\mathbf{K}^{(t+1)}-\mathbf{K}^{(t+1)}_{diag}-\mathbf{K}^{(t)}_{low})=\mathbf{0}.\] By considering the diagonal (or block-diagonal) structure of \(\mathbf{K}_{diag}\), we have \[\mathbf{K}^{(t+1)}_{diag}=\text{diag}_{trim}\bigg{(}\frac{1}{\rho}\mathbf{\Lambda}^{(t)}+\mathbf{K}^{(t+1)}-\mathbf{K}^{(t)}_{low}\bigg{)}, \tag{23}\] where \(\text{diag}_{trim}(\cdot)\) is an operator that forms a diagonal matrix by keeping only the diagonal elements and setting the off-diagonal elements to zero.
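Evaluating the gradient in (22) only requires the two Lyapunov solves (5) and (21); a minimal Python sketch (placeholder names, not the authors' code) is given below.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov


def grad_augmented_lagrangian(A, B1, B2, Q, R, K, K_diag, K_low, Lam, rho):
    """Gradient of L_a with respect to K, using P(K) from (5) and L(K) from (21)."""
    A_cl = A - B1 @ K
    P = solve_continuous_lyapunov(A_cl.T, -(Q + K.T @ R @ K))   # Lyapunov equation (5)
    L = solve_continuous_lyapunov(A_cl, -B2 @ B2.T)             # Lyapunov equation (21)
    grad_J = 2.0 * (R @ K - B1.T @ P) @ L                       # gradient of the LQR cost J(K)
    return grad_J + Lam + rho * (K - K_diag - K_low)
```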
Then, for \(\mathbf{K}^{(t+1)}_{low}\), we can solve the following optimization problem: \[\mathbf{K}^{(t+1)}_{low} \tag{24}\] \[=\underset{\mathbf{K}_{low}}{\text{argmin}}\ \gamma\|\mathbf{K}_{low}\|_{*}+\frac{\rho}{2}\|\mathbf{K}^{(t+1)}-\mathbf{K}^{(t+1)}_{diag}+\frac{1}{\rho}\mathbf{\Lambda}^{(t)}-\mathbf{K}_{low}\|_{F}^{2}.\] The level of the rank in the optimal solution \(\mathbf{K}^{(t+1)}_{low}\) is determined by the ratio parameter \(\rho/\gamma\). If \(\rho/\gamma\) is large, then, in order to reduce the misfit error in the Frobenius norm, the optimal solution will be close to \((\mathbf{K}^{(t+1)}-\mathbf{K}^{(t+1)}_{diag}+\frac{1}{\rho}\mathbf{\Lambda}^{(t)})\), and will possibly not have low rank. If \(\rho/\gamma\) is small enough, then, by allowing more misfit error in the Frobenius norm, we can have a low-rank solution. Basically, the ratio parameter \(\rho/\gamma\) determines the level of the rank in the optimal solution \(\mathbf{K}^{(t+1)}_{low}\). The low-rank solution \(\mathbf{K}^{(t+1)}_{low}\) is expressed as follows [26]: \[\mathbf{K}^{(t+1)}_{low}=\mathbf{U}\mathcal{S}_{\gamma/\rho}(\mathbf{\Sigma})\mathbf{V}^{T}, \tag{25}\] where \(\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{T}\) is the Singular Value Decomposition (SVD) of \((\mathbf{K}^{(t+1)}-\mathbf{K}^{(t+1)}_{diag}+\frac{1}{\rho}\mathbf{\Lambda}^{(t)})\), and \(\mathcal{S}_{\gamma/\rho}(\mathbf{\Sigma})\) is the shrinkage-threshold operator \(\mathbb{R}^{m\times n}\rightarrow\mathbb{R}^{m\times n}\) stated as follows: \[\mathcal{S}_{\gamma/\rho}(\mathbf{\Sigma})_{i,j}=\max\{\Sigma_{i,j}-\gamma/\rho,0\}. \tag{26}\] Notice that \(\mathbf{\Sigma}\) is a diagonal matrix with the singular values on its diagonal. We refer to (26) as the soft-thresholding operation. Additionally, in order to have \(\mathbf{K}^{(t+1)}_{low}\) in rank-\(r\), we can calculate the SVD of \((\mathbf{K}^{(t+1)}-\mathbf{K}^{(t+1)}_{diag}+\frac{1}{\rho}\mathbf{\Lambda}^{(t)})\); by taking the \(r\) largest singular values and the corresponding singular vectors, we can obtain a \(\mathbf{K}^{(t+1)}_{low}\) matrix in rank-\(r\), using \[\mathcal{H}_{r}(\mathbf{\Sigma})=\text{diag}([\sigma_{1},\sigma_{2},...,\sigma_{r}]), \tag{27}\] where \(\sigma_{i}\) is the \(i\)-th singular value of \((\mathbf{K}^{(t+1)}-\mathbf{K}^{(t+1)}_{diag}+\frac{1}{\rho}\mathbf{\Lambda}^{(t)})\) in descending order, and \(\text{diag}(\cdot)\) denotes the operator that forms a diagonal matrix from a given vector by placing the vector elements on the diagonal. We refer to (27) as the hard-thresholding operation. The rank-\(r\) matrix \(\mathbf{K}^{(t+1)}_{low}\) is then obtained as \[\mathbf{K}^{(t+1)}_{low}=\mathbf{U}_{1:r}\mathcal{H}_{r}(\mathbf{\Sigma})\mathbf{V}_{1:r}^{T}, \tag{28}\] where \(\mathbf{U}_{1:r}\) and \(\mathbf{V}_{1:r}\) consist of the first \(r\) left and right singular vectors of \((\mathbf{K}^{(t+1)}-\mathbf{K}^{(t+1)}_{diag}+\frac{1}{\rho}\mathbf{\Lambda}^{(t)})\).
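Both update rules reduce to an SVD of \(\mathbf{K}^{(t+1)}-\mathbf{K}^{(t+1)}_{diag}+\frac{1}{\rho}\mathbf{\Lambda}^{(t)}\) followed by thresholding of its singular values; a Python sketch of the two operators (illustrative only, with the soft threshold written as \(\gamma/\rho\) to match (25)-(26)) is:

```python
import numpy as np


def soft_threshold_svd(M, tau):
    """Singular value soft-thresholding, cf. (25)-(26): shrink singular values by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt


def hard_threshold_svd(M, r):
    """Rank-r hard thresholding, cf. (27)-(28): keep only the r largest singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]


# K_low update (24): with M = K - K_diag + Lam / rho,
#   nuclear-norm penalty:   K_low = soft_threshold_svd(M, gamma / rho)
#   fixed-rank constraint:  K_low = hard_threshold_svd(M, r)
```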
### _Optimality of ADMM-based Algorithm to Solve the Low-rank LQR Optimal Control Design Problem_ Let us now analyze the ADMM-based algorithm for solving the low-rank LQR optimal control design problem. In particular, for the optimality of the ADMM-based algorithm, an optimal solution needs to satisfy the following primal feasibility condition: \[\mathbf{K}^{\star}-\mathbf{K}^{\star}_{diag}-\mathbf{K}^{\star}_{low}=\mathbf{0}, \tag{29}\] where the superscript \(\star\) represents the optimal solution to the problem in (8). With the given augmented Lagrangian (12), for the dual feasibility conditions over \(\mathbf{K}^{\star}\) and \(\mathbf{K}^{\star}_{low}\), the subgradient of the objective function of (8) at the optimal point needs to contain zero. Thus, we have \[\mathbf{0}\in\partial J(\mathbf{K}^{\star})+\mathbf{\Lambda}^{\star}, \tag{30}\] \[\mathbf{0}\in\partial\|\mathbf{K}^{\star}_{low}\|_{*}-\mathbf{\Lambda}^{\star}, \tag{31}\] where \(\partial\) represents the subdifferential operator [25]. In the ADMM step for \(\mathbf{K}^{(t+1)}_{low}\), \(\mathbf{K}^{(t+1)}_{low}\) minimizes \(L_{a}(\mathbf{K}^{(t+1)},\mathbf{K}^{(t+1)}_{diag},\mathbf{K}_{low},\mathbf{\Lambda}^{(t)})\). Hence, we have the following condition for \(\mathbf{K}^{(t+1)}_{low}\): \[\mathbf{0}\in\partial\|\mathbf{K}^{(t+1)}_{low}\|_{*}-\mathbf{\Lambda}^{(t)}-\rho(\mathbf{K}^{(t+1)}-\mathbf{K}^{(t+1)}_{diag}-\mathbf{K}^{(t+1)}_{low})\] \[=\partial\|\mathbf{K}^{(t+1)}_{low}\|_{*}-\mathbf{\Lambda}^{(t+1)},\] where the equality is obtained from the ADMM step in (16). This indicates that, with \(\mathbf{K}^{(t+1)}_{low}\) and \(\mathbf{\Lambda}^{(t+1)}\), the dual feasibility condition (31) always holds. Then, in order to check the optimality conditions (29) and (30) for \(\mathbf{K}^{(t+1)}\), which minimizes \(L_{a}(\mathbf{K},\mathbf{K}^{(t)}_{diag},\mathbf{K}^{(t)}_{low},\mathbf{\Lambda}^{(t)})\), we have \[\mathbf{0}\in\partial J(\mathbf{K}^{(t+1)})+\mathbf{\Lambda}^{(t)}+\rho(\mathbf{K}^{(t+1)}-\mathbf{K}^{(t)}_{diag}-\mathbf{K}^{(t)}_{low})\] \[=\partial J(\mathbf{K}^{(t+1)})+\mathbf{\Lambda}^{(t)}+\rho(\mathbf{K}^{(t+1)}-\mathbf{K}^{(t)}_{diag}-\mathbf{K}^{(t)}_{low})\] \[\qquad\qquad+\rho(\mathbf{K}^{(t+1)}_{diag}+\mathbf{K}^{(t+1)}_{low})-\rho(\mathbf{K}^{(t+1)}_{diag}+\mathbf{K}^{(t+1)}_{low})\] \[=\partial J(\mathbf{K}^{(t+1)})+\mathbf{\Lambda}^{(t+1)}\] \[\qquad\qquad+\rho(\mathbf{K}^{(t+1)}_{diag}+\mathbf{K}^{(t+1)}_{low}-\mathbf{K}^{(t)}_{diag}-\mathbf{K}^{(t)}_{low}),\] where the term \(\rho(\mathbf{K}^{(t+1)}_{diag}+\mathbf{K}^{(t+1)}_{low}-\mathbf{K}^{(t)}_{diag}-\mathbf{K}^{(t)}_{low})\) can be thought of as a residual that needs to go to zero as the iterations proceed. Additionally, for the primal feasibility condition (29), we check the primal residual, defined as \(\mathbf{K}^{(t+1)}-\mathbf{K}^{(t+1)}_{diag}-\mathbf{K}^{(t+1)}_{low}\), in our stopping criteria. The residuals converge to zero through the ADMM updating steps, i.e., minimizing the augmented Lagrangian and updating the dual variable. Therefore, with small feasibility tolerances \(\epsilon_{pri}\) and \(\epsilon_{dual}\), the estimated solution obtained from the ADMM updating steps can be considered to be close to an optimal solution. Additionally, the low-rank solution, i.e., \(\mathbf{K}_{diag}+\mathbf{K}_{low}\), needs to be in the feasible set \(\mathcal{F}\) as another primal feasibility condition. Hence, we check this condition as part of our stopping criteria. ## V Numerical experiments In the numerical experiments, we simulate various distributed multi-agent control system models where each agent has \((x,y)\) coordinates as its location on a plane. Considering the characteristics of wireless communication, namely that the communication power density is inversely proportional to the square of the distance and that a wireless signal can spread in every direction, we run numerical experiments to compare the low-rank LQR optimal control design against the sparse and the standard LQR optimal control designs on a distributed multi-agent control system model.
For the distributed multi-agent control system model, we deal with the following second-order system model, in which each agent is coupled with the other agents through an exponentially decaying function of the Euclidean distance between any two nodes: \[\begin{bmatrix}\dot{x}_{i}(t)_{1}\\ \dot{x}_{i}(t)_{2}\end{bmatrix}= \begin{bmatrix}1&1\\ 1&3\end{bmatrix}\begin{bmatrix}x_{i}(t)_{1}\\ x_{i}(t)_{2}\end{bmatrix}\] \[+\sum_{j\neq i}e^{-d(i,j)}\begin{bmatrix}x_{j}(t)_{1}\\ x_{j}(t)_{2}\end{bmatrix}+\begin{bmatrix}0\\ 1\end{bmatrix}\bigg{(}w_{i}(t)+u_{i}(t)\bigg{)}, \tag{32}\] where the subscript \(i\) represents the \(i\)-th agent having two states \(x_{i}(t)_{1}\) and \(x_{i}(t)_{2}\), with \(i=1,2,...,N\), \(w_{i}(t)\) and \(u_{i}(t)\) are the disturbance and input signals of the \(i\)-th agent respectively, and \(d(i,j)\) represents the Euclidean distance between the \(i\)-th agent and the \(j\)-th agent. We vary the number of agents \(N\) from 10 to 20 and choose the locations of the agents on a \(10\times 10\) plane uniformly at random. For both \(\mathbf{Q}\) and \(\mathbf{R}\), we use identity matrices. As the initial point of the algorithms for both the low-rank LQR design and the sparse LQR design, we use the Linear-Quadratic Regulator (LQR) Matlab function to obtain the standard LQR control design solution, which normally provides a dense feedback matrix \(\mathbf{K}\) but with the minimum LQR cost; we denote this cost by \(J_{\text{stand}}\). ### _Communication scenario 1: Fixed communication power_ In this scenario, we consider a case where each agent transmits or broadcasts its states at each time instant with fixed transmission power. We assume that all agents are reachable from each other on the wireless network with this transmission power. Since the communication power of an agent at each time instant is the same for all agents, for the communication burden we take into account the total communication energy as power \(\times\) time, which can be estimated by the number of communication attempts. For the controller based on the low-rank LQR design, we choose \(\mathbf{K}_{low}\) to be rank-1. With the feedback controller \(\mathbf{K}=\mathbf{K}_{diag}+\mathbf{K}_{low}\in\mathbb{R}^{m\times n}\), if there is no communication error and no zero column vector in \(\mathbf{K}_{low}\), it requires \(n\) communication attempts. To have the same minimum number of communication attempts in the feedback matrix based on the sparse LQR design for comparison purposes, we adjust the parameter \(\gamma\) in (6) so that there are \(n\) communication links. Thus, in the case of no error in communication, we can expect the same communication burden for the low-rank LQR design and the sparse LQR design. Hence, we compare the LQR cost increment of the two designs relative to the standard LQR cost, i.e., \(J_{\text{stand}}\), by varying the number of agents in the system. We run this simulation for one hundred trials with randomly chosen nodes. The number of nodes, i.e., agents, is varied from 10 to 20. Fig. 3 shows the LQR cost increment over the standard LQR design, computed as \(J(\mathbf{K})/J_{\text{stand}}\), where \(\mathbf{K}\) is determined by the feedback controller design. Red solid and blue dotted lines represent the low-rank LQR design and the sparse LQR design respectively. A vertical line represents the minimum and maximum LQR cost increment among the 100 random trials.
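A sketch of how the coupled test system (32) can be assembled is given below (illustrative code with assumed helper names, not the authors' implementation; agent positions are drawn on the \(10\times 10\) plane described above, and the disturbance is assumed to enter through the same channel as the input, as in (32)).

```python
import numpy as np


def build_coupled_system(n_agents, seed=0):
    """Assemble A, B1, B2 for the second-order multi-agent model (32)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 10.0, size=(n_agents, 2))      # agent locations on a 10 x 10 plane
    A_local = np.array([[1.0, 1.0],
                        [1.0, 3.0]])
    A = np.zeros((2 * n_agents, 2 * n_agents))
    for i in range(n_agents):
        A[2 * i:2 * i + 2, 2 * i:2 * i + 2] = A_local
        for j in range(n_agents):
            if i != j:
                d = np.linalg.norm(pos[i] - pos[j])
                # exponentially decaying coupling with the Euclidean distance
                A[2 * i:2 * i + 2, 2 * j:2 * j + 2] = np.exp(-d) * np.eye(2)
    b_local = np.array([[0.0], [1.0]])                     # input enters the second state
    B1 = np.kron(np.eye(n_agents), b_local)
    B2 = B1.copy()                                         # disturbance uses the same channel
    return A, B1, B2
```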
As shown in Fig. 3, under the same communication burden, the increase in LQR cost for the low-rank feedback controller design is significantly smaller than that for the sparse feedback controller design. It is noteworthy that the larger the LQR cost increment, the more energy consumption in control is expected. Fig. 3: Comparison of the LQR cost increment between the low-rank LQR design and the sparse LQR design in terms of the standard LQR cost, denoted by \(J_{\text{stand}}\), under the fixed communication power scenario. Unfortunately, communication errors normally occur. Denoting the probability of error in communication by \(P_{e}\), the total error probability in the rank-1 feedback controller design is \(1-(1-P_{e})^{mn}\), while the total error probability in the sparse feedback controller design having \(n\) communication links is \(1-(1-P_{e})^{n}\). Because of the factor \(m\), the total error probability in communication for the low-rank feedback controller can be larger than that of the sparse feedback controller. This is because, in the rank-1 low-rank LQR design, we have \(m\) times as many communication links as in the sparse LQR design. The simulation shown in Fig. 3 is a special case with \(P_{e}=0\). This is one drawback of the low-rank LQR design. In order to reflect the cases of communication error and noise, we introduce the next simulation scenarios. ### _Communication scenario 2: System endurance against communication noise_ In this scenario, we check the system endurance against communication noise, since communication noise is inevitable. We consider communication noise following an i.i.d. Gaussian distribution \(\mathcal{N}(0,\sigma^{2})\) in each communication link. With the two feedback matrices obtained from the low-rank LQR design and the sparse LQR design, at each transmission of states from one agent to another, we add the Gaussian noise to the transmitted signals. To model this, we generate Gaussian noise for each off-diagonal element of a feedback matrix and add the noise to the feedback matrix. We then check whether the feedback matrix is in \(\mathcal{F}\) or not. In the next round of the random test, we generate new noise for each off-diagonal element of the feedback matrix and check the stability of the feedback matrix. For each feedback matrix, we run one hundred trials with randomly chosen noise and count the occurrences in which the noise-corrupted feedback matrix is in \(\mathcal{F}\); each such occurrence is considered a success. We run the trials for 100 different feedback matrices for the low-rank LQR design and the sparse LQR design respectively. Therefore, for each parameter setup shown in Fig. 4, we calculate the probability of success over \(100\times 100\) random cases. Black and white boxes represent probabilities of 0 and 1 respectively. The noise variance \(\sigma^{2}\) is varied from \(0.1\) to \(0.9\). As shown in Fig. 4, the low-rank LQR design has robustness against noise similar to that of the standard LQR design, and is more robust than the sparse LQR design. Fig. 4: Probability of success that the feedback matrix corrupted by noise is in \(\mathcal{F}\). Comparison among (a) the standard LQR design, (b) the sparse LQR design, and (c) the low-rank LQR design in terms of system stability endurance against noise in communication. ### _Communication scenario 3: Security under cyber attack_ Security is another critical issue to be considered in the control of distributed systems. Considering a cyber attack scenario, in this simulation we forcibly remove some communication links and compare the system stability of the low-rank LQR control design and the sparse LQR control design. Hence, through this simulation, we compare the performance of the low-rank LQR control design against that of the sparse LQR control design in terms of system stability endurance against cyber attack.
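The stability tests behind scenarios 2 and 3 amount to perturbing the off-diagonal part of \(\mathbf{K}\) and re-checking membership in \(\mathcal{F}\); a simplified Monte Carlo sketch (a hypothetical helper written for single-state agents, so that off-diagonal entries coincide with communication links) is:

```python
import numpy as np


def noise_success_probability(A, B1, K, sigma, n_trials=100, seed=0):
    """Fraction of random off-diagonal perturbations of K that keep A - B1 K Hurwitz.

    Scenario 2 adds N(0, sigma^2) noise to the off-diagonal entries; scenario 3 is
    analogous but zeroes randomly chosen off-diagonal entries instead.
    """
    rng = np.random.default_rng(seed)
    off_diag = ~np.eye(K.shape[0], K.shape[1], dtype=bool)
    successes = 0
    for _ in range(n_trials):
        noise = rng.normal(0.0, sigma, size=K.shape) * off_diag
        K_perturbed = K + noise
        if np.max(np.linalg.eigvals(A - B1 @ K_perturbed).real) < 0:
            successes += 1
    return successes / n_trials
```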
More specifically, under the assumption that no data or signal can go through communication links under attack, we measure the stability margin of the feedback control system by varying the number of communication links under attack from 1 to 10. The communication links under attack are randomly chosen among the required communication links. Hence, through this scenario, we investigate the system robustness against cyber attack. From the noise perspective, this scenario can be thought of as a hard noise case, i.e., communication fails completely for some links, while Communication scenario 2 deals with a soft noise case. We run simulations and compare the robustness against cyber attack of the low-rank LQR control design and the sparse LQR control design. The robustness against cyber attack is evaluated in terms of the maximum number of communication links that can be removed while the system remains stable. With the two feedback matrices obtained from the low-rank LQR design and the sparse LQR design, we assume that some communication links are under attack during the transmission of states from one agent to another. The number of links under attack, \(l\), is varied from 1 to 10. We choose \(l\) off-diagonal elements of a feedback matrix uniformly at random as the links under cyber attack, and set the chosen elements to zero. We then check whether the feedback matrix is in \(\mathcal{F}\) or not. In the next round of the random test, we again choose \(l\) elements uniformly at random and check the stability of the feedback matrix after setting the chosen elements to zero. For each feedback matrix, we run one hundred random attack trials and count the occurrences in which the modified feedback matrix is in \(\mathcal{F}\); each such occurrence is considered a success. We repeat the same scenario for one hundred different feedback matrices. Therefore, for each parameter setup shown in Fig. 5, we run \(100\times 100\) trials and report the probability of success, where black and white boxes represent success probabilities of 0 and 1 respectively. As shown in Fig. 5, the low-rank LQR control design can have a larger stability margin against cyber attack than the sparse LQR control design, and a stability margin similar to that of the standard LQR design with much lower communication energy. ### _Communication scenario 4: Limited communication energy_ This scenario considers applications such as a distributed multi-drone, i.e., Unmanned Aerial Vehicle (UAV), system or a distributed multi-robot system, where communication energy is limited because the agents are battery-powered; namely, each agent has limited energy. We further assume that all agents are reachable from each other and that each communication attempt spends the same amount of power. Under this assumption, we compare the ratio of the LQR costs between the sparse LQR design and the low-rank LQR design by matching the number of communication links at a critical node. Namely, in the sparse LQR control design, the node having the largest number of communication links among all nodes is chosen as the critical node, because this node needs to send its states to other nodes through its communication links.
Therefore, in a UAV scenario, the operation time of this node, i.e., agent, will be the shortest. In contrast, in the low-rank LQR design, every node has balanced energy consumption in communication. To compare the two designs, we match the number of communication links at the critical node. Since each agent needs to transmit its two states \(x(t)_{1}\) and \(x(t)_{2}\) in (32), for a rank-\(r\) \(\mathbf{K}_{low}\) we need \(2\times r\) communication transmissions, which are shown as dotted lines in Fig. 6(a). Considering this scenario, we evaluate the low-rank LQR optimal control design against the sparse LQR control design in terms of LQR cost. As shown in Fig. 6(b), for the rank-1 \(\mathbf{K}_{low}\) design, the LQR cost of the sparse LQR design is increased by 71% compared to the LQR cost of the low-rank LQR design. Intuitively, a feedback controller based on the sparse LQR control design can have a critical node, i.e., agent. Hence, we can anticipate that the energy of the critical node will be consumed much faster than that of the other agents, since the critical node needs to continuously send its states to other nodes, which can easily jeopardize the whole distributed control system. In contrast, in the low-rank LQR control design, the wireless signal is transmitted in a broadcast manner. Therefore, even though an agent may have many communication links to other agents, the number of communication transmissions at the agent is not proportional to the number of communication links, but is instead more related to the communication error. However, in the sparse LQR controller, the number of communication transmissions is linearly proportional to the number of communication links, because the agent needs to separately transmit its states to the connected agents. ## VI Conclusion and discussion In this paper, we consider an LQR optimal control design problem with a constraint on communication for optimally controlling distributed systems with multiple agents, which can find applications in smart-grid and multi-agent robot systems. In particular, by considering wireless communication networks and taking advantage of their characteristics, i.e., that an electromagnetic signal can be spread in all directions in a broadcast manner, we propose the low-rank LQR optimal control design problem and an ADMM-based algorithm to solve it. The low-rank LQR control design provides a trade-off solution between the LQR cost and the communication energy in a distributed control system with feedback loops. Through various numerical experiments in different communication scenarios, we demonstrate that the low-rank LQR control design can provide a better feedback controller than the sparse LQR optimal control design in terms of energy consumption and system stability margin against noise and errors in communication. We introduce possible future research directions for low-rank LQR optimal control design as follows: * Designing fast algorithms to solve the proposed low-rank LQR control design problems can be a possible future research topic.
* Finding a way to express the proposed low-rank LQR control design problem as a convex optimization problem, such as a semidefinite programming (SDP) formulation, is interesting, as has been done in [27, 28, 29, 30, 4] for the standard LQR optimal control design problem. With such a convex formulation, we could use off-the-shelf solvers, e.g., CVX [31], and have a globally optimal solution guaranteed. * For large-scale distributed control systems, running algorithms to solve the low-rank LQR optimal control problems within a limited time can be computationally challenging. Therefore, a data-driven approach to designing a feedback controller can also be an interesting topic.
2303.18005
Artificial Intelligence in Ovarian Cancer Histopathology: A Systematic Review
Purpose - To characterise and assess the quality of published research evaluating artificial intelligence (AI) methods for ovarian cancer diagnosis or prognosis using histopathology data. Methods - A search of PubMed, Scopus, Web of Science, CENTRAL, and WHO-ICTRP was conducted up to 19/05/2023. The inclusion criteria required that research evaluated AI on histopathology images for diagnostic or prognostic inferences in ovarian cancer. The risk of bias was assessed using PROBAST. Information about each model of interest was tabulated and summary statistics were reported. PRISMA 2020 reporting guidelines were followed. Results - 1573 records were identified, of which 45 were eligible for inclusion. There were 80 models of interest, including 37 diagnostic models, 22 prognostic models, and 21 models with other diagnostically relevant outcomes. Models were developed using 1-1375 slides from 1-776 ovarian cancer patients. Model outcomes included treatment response (11/80), malignancy status (10/80), stain quantity (9/80), and histological subtype (7/80). All models were found to be at high or unclear risk of bias overall, with most research having a high risk of bias in the analysis and a lack of clarity regarding participants and predictors in the study. Research frequently suffered from insufficient reporting and limited validation using small sample sizes. Conclusion - Limited research has been conducted on the application of AI to histopathology images for diagnostic or prognostic purposes in ovarian cancer, and none of the associated models have been demonstrated to be ready for real-world implementation. Key aspects to help ensure clinical translation include more transparent and comprehensive reporting of data provenance and modelling approaches, as well as improved quantitative performance evaluation using cross-validation and external validations.
Jack Breen, Katie Allen, Kieran Zucker, Pratik Adusumilli, Andy Scarsbrook, Geoff Hall, Nicolas M. Orsi, Nishant Ravikumar
2023-03-31T12:26:29Z
http://arxiv.org/abs/2303.18005v2
# Artificial Intelligence in Ovarian Cancer Histopathology: A Systematic Review ###### Abstract To characterise and assess the quality of published research evaluating artificial intelligence (AI) methods for ovarian cancer diagnosis or prognosis using histopathology data. A search of PubMed, Scopus, Web of Science, Cochrane Central Register of Controlled Trials, and WHO International Clinical Trials Registry Platform was conducted up to 01/12/2022. The inclusion criteria required that research evaluated AI on histopathology images for diagnostic or prognostic inferences in ovarian cancer, including primary tumours of the ovaries, fallopian tubes, and peritoneum. Reviews and non-English language articles were excluded. The risk of bias was assessed for every model that met the inclusion criteria using the Prediction model Risk Of Bias ASessment Tool (PROBAST). Information about each model of interest was tabulated and summary statistics were reported. Based on the results, we provided recommendations to improve study design and reporting to reduce the risk of bias and improve the reproducibility of future research in the field. The study protocol was registered on PROSPERO (CRD42022334730). PRISMA 2020 reporting guidelines were followed. ## Results A total of 1434 research articles were identified, of which 36 were eligible for inclusion. These studies reported 62 models of interest, including 35 classifiers, 14 survival prediction models, 7 segmentation models, and 6 regression models. Models were developed using 1-1375 slides from 1-664 ovarian cancer patients. A wide array of outcomes were predicted, including overall survival (9/62), histological subtypes (7/62), stain quantity (6/62), malignancy (5/62), primary cancer (4/62), and tumour region (4/62). Older studies used traditional machine learning (ML) models with hand-crafted features, while newer studies typically employed deep learning (DL) to automatically learn features and predict the outcome(s) of interest. All models were found to be at high or unclear risk of bias overall, with most research having a high risk of bias in the analysis and a lack of clarity regarding participants and predictors in the study. Research was frequently limited by insufficient reporting, small sample sizes, and insufficient validation, with external validation being particularly rare. ## Conclusion Limited research has been conducted on the application of AI to histopathology images for diagnostic or prognostic purposes in ovarian cancer, and none of the associated models have been demonstrated to be ready for real-world implementation. Recommendations are provided addressing underlying biases and flaws in study design, which should help inform higher-quality reproducible future research. Key aspects to help ensure clinical translation include more transparent and comprehensive reporting of data provenance and modelling approaches, as well as improved quantitative performance evaluation using cross-validation and external validations. ## Funding UKRI Engineering and Physical Sciences Research Council and The Tony Bramall Charitable Trust ## Introduction Ovarian cancer is the eighth most common malignancy in women worldwide [1]. It is notoriously difficult to detect and diagnose, with ineffective screening [2] and vague symptoms similar to those caused by menopause [3]. 
Encompassing primary malignant tumours of the ovaries, fallopian tubes, and peritoneum, the disease has often started to spread within the abdomen at the time of diagnosis (FIGO [4] Stage 3). This typical late stage at diagnosis makes ovarian cancer a particularly deadly disease, with the 314,000 new cases diagnosed each year translating to 207,000 deaths a year globally [1]. Most ovarian cancers are carcinomas (cancers of epithelial origin) which predominantly fall into five histological subtypes: high-grade serous, low-grade serous, clear cell, endometrioid, and mucinous. Non-epithelial ovarian cancers are rare and include germ cell, sex cord-stromal, and mesenchymal tumours. Ovarian cancer subtypes differ morphologically and prognostically and have varying treatment options [5]. High-grade serous carcinoma is by far the most common form of ovarian cancer, accounting for approximately 70% of all cases [6]. Histopathology, the examination of tissue specimens at the cellular level, is the gold standard for ovarian cancer diagnosis. Pathologists typically interpret tissue stained with haematoxylin and eosin (H&E), where haematoxylin stains cell nuclei blue and eosin stains other cellular structures, such as cytoplasm and cell membranes, varying shades of pink and red. The interpretation of H&E slides can be a subjective, time-consuming process, with some tasks having a high level of inter-observer variation [7, 8, 9]. In the assessment of difficult cases, general pathologists may seek assistance from subspecialty gynaecological pathology experts, and/or use ancillary tests, such as immunohistochemical (IHC) stains. IHC stains indicate the presence of specific antigens and are often used to aid pathologists in identifying the primary tissue of origin or to make subtype diagnoses where there are specific phenotypic profiles [5]. Referrals and ancillary testing can be essential to the accuracy of the diagnostic process but come at the cost of making it longer and more expensive. Worldwide, pathologists are in much greater demand than supply, with significant disparities in the number of pathologists between countries [10], and even better-supplied countries unable to meet demand [11]. Traditionally, pathologists have analysed glass slides using a light microscope. However, the implementation of a digital workflow, where pathologists review scanned whole slide images (WSIs) using a computer, is becoming more common. While digital pathology uptake has likely been driven by efficiency benefits [12], it has created an opportunity for the development of automated tools to assist pathologists. These tools often aim to improve the accuracy, efficiency, objectivity, and consistency of diagnosis. Such tools could help to alleviate the global workforce shortage of pathologists, increasing diagnostic throughput and reducing the demand for referrals and ancillary tests. This is an increasingly active area of research [13] and, for some malignancies, these systems are starting to achieve clinical utility [14]. In this study, we systematically reviewed all literature in which artificial intelligence (AI) techniques (comprising both traditional machine learning (ML) and deep learning (DL) methods) were applied to digital pathology images for the diagnosis or prognosis of ovarian cancer. This included research which focused on a single diagnostic factor such as histological subtype, and studies that performed computer-aided diagnostic tasks such as tumour segmentation. 
The review characterises the state of the field, describing which diagnostic and prognostic tasks have been addressed, and assessing factors relevant to the clinical utility of these methods, such as the risks of bias. Despite ovarian cancer being a particularly difficult disease to detect and diagnose, and the shortage of available pathologists, AI models have not yet been implemented in clinical practice for this disease. This review aims to provide insights and recommendations based on published literature to improve the clinical utility of future research, including reducing risks of bias, improving reproducibility, and increasing generalisability. ## Methods ### Literature Search Searches were conducted in three research databases, PubMed, Scopus and Web of Science, and two trial registries, Cochrane Central Register of Controlled Trials (CENTRAL) and the World Health Organisation International Clinical Trial Registry Platform (WHO-ICTRP). The initial searches were performed on 25/04/2022 and were repeated on 01/12/2022. The search strategy was composed of three distinct aspects - artificial intelligence, ovarian cancer, and histopathology. For each aspect, multiple relevant terms were combined using the _OR_ operator (e.g. "artificial intelligence" OR "machine learning"), and then these were combined using the _AND_ operator to ensure that retrieved research met all three aspects. The widest possible set of search fields was used for each search engine except for Scopus, where restrictions were imposed to avoid searching within the citation list of each article, which is not an available field in the other search engines. The terms 'ML' and 'AI' were restricted to specific fields due to the diversity of their possible meanings. To ensure the most rigorous literature search possible, no restrictions were placed on the publication date or article type during searching. Many AI approaches build on statistical models, such as logistic regression, which can blur the lines between disciplines. When conducting searches, a previously reported methodology was adopted [15] whereby typical AI approaches were searched by name (e.g. neural networks), and other methods were searched by whether the authors described their work as _artificial intelligence_. Full details of the search implementation for each database are provided in Appendix A. The review protocol was registered with PROSPERO before the search results were screened for inclusion (CRD42022334730). ### Literature Selection One researcher (JB) manually removed duplicate papers with the assistance of the referencing software _EndNote X9_. Two researchers (JB, KA) then independently screened articles for inclusion in two stages, the first based on title and abstract, the second based on full text. Disagreements were discussed and arbitrated by a third researcher (NR). Trials in WHO-ICTRP do not have associated abstracts, so for these studies, only titles were available for initial screening. The inclusion criteria required that research evaluated the use of at least one AI approach to make diagnostic or prognostic inferences on human histopathology images from suspected or confirmed cases of ovarian cancer. Studies were only included where AI methods were applied directly to the digital pathology images, or to features which were automatically extracted from the images. 
Fundamental tasks such as segmentation and cell counting were considered to be diagnostic tasks because these could be used by pathologists for computer-assisted diagnosis. Only conventional light microscopy images were considered, with other imaging modalities, such as fluorescence and hyperspectral imaging, excluded. Publications which did not include primary research were excluded (such as review papers). Non-English language articles and research where a full version of the manuscript was not accessible were excluded. ### Risk of Bias Analysis The risk of bias of models in the accepted literature was assessed using the Prediction model Risk Of Bias ASessment Tool (PROBAST) [16]. This tool includes 20 questions which are answered as either _yes_, _probably yes_, _probably no_, _no_, or _no information_. These questions are categorised into four domains (participants, predictors, outcome, and analysis), which are summarised as high-risk, low-risk, or unclear. An overall score is calculated by aggregation of these domain-specific scores, with a single high-risk domain being sufficient for an overall high-risk score. Each model was analysed by three independent researchers (any of JB, KA, NR, KZ, NMO), with at least one computer scientist and one clinician involved in the risk of bias assessment for each model. The PROBAST applicability of research analysis was not implemented as it is unsuitable for such a diverse array of possible research questions. ### Data Synthesis Data extraction was performed independently by two researchers (JB, KA) using a form containing 81 fields within the categories _Overview_, _Data_, _Methods_, _Results_, and _Miscellaneous_. Several of these fields were added or clarified during data extraction with the agreement of both researchers and retroactively applied to all accepted literature. The final data extraction form is available at www.github.com/scjjb/OvCaReview, with a summary included in Appendix B. Information was sought from full-text articles, as well as references and supplementary materials where appropriate. Inferences were made only when both researchers were confident that this gave the correct information, with disagreements resolved through discussion. Fields which could not be confidently completed were labelled as being _unclear_. Information was extracted regarding each outcome reported in a paper for which the corresponding model met the inclusion criteria. Where multiple models were compared for the same outcome, data was only extracted for the newly proposed model, with the best performing model during validation taken if this was unclear. Models used to predict different outcomes in the same study were assessed independently even if the methods were similar. Data synthesis excluded any model which was not applied to ovarian cancer digital pathology slides, such as repeats of the same methodology applied to different malignancies. Models that met the inclusion criteria are referred to as _models of interest_. All extracted data are summarised in two tables, one each for study-level and model-level characteristics, with the model-level table grouped by outcome type. The data synthesis did not include any meta-analysis due to the diversity of included methods and outcomes. ## Results As shown in Figure 1, the literature searches returned a total of 1434 records, of which 496 were duplicates. 
866 records were excluded during the screening of titles and abstracts, and 36 were excluded based on full paper screening, including 2 records for which full articles could not be obtained. The remaining 36 studies were included in the review, of which 11 were conference papers and 25 were journal papers. All accepted studies were originally identified through searches of research databases, with no records from trial registries meeting the inclusion criteria. While the searches returned literature from as early as 1949, all of the research which met the inclusion criteria was published since 2010, and over half of the included literature was published since 2020. Study characteristics are shown in Table 2. The 36 accepted articles contained 62 models of interest, details of which are shown in Table 3. ### Risk of Bias Analysis The results of the PROBAST assessments are shown in Table 1. While some studies contained multiple models of interest, none of these contained models with different risk of bias scores for any section of the PROBAST assessment, so we only present one risk of bias analysis per paper. All models showed either a high overall risk of bias (30/36) or an unclear overall risk of bias (6/36). Every high-risk model had a high-risk score in the analysis section (30/36), with several also being at high risk for participants (5/36), predictors (10/36), or outcomes (11/36). Only half of the studies achieved a low risk of bias in any domain (18/36), with most low risks being found in the outcomes (14/36) and predictors (8/36) sections. Nearly all of the papers had an unclear risk of bias in at least one domain, most commonly the participants (29/36) and predictors (18/36) domains. Qualitative summaries are presented in Figure 2. Figure 1: PRISMA 2020 flowchart of the study identification and selection process for the systematic review. Records were screened on titles and abstracts alone, and reports were assessed based on the full-text content. 
\begin{table} \begin{tabular}{|c|c c c c c|} \hline **Publication** & **Participants** & **Predictors** & **Outcome** & **Analysis** & **Overall** \\ \hline Dong 2010(a) [17] & High & High & High & High & High \\ Dong 2010(b) [18] & High & High & High & High & High \\ Signolle 2010 [19] & Unclear & Unclear & High & High & High \\ Janowczyk 2011 [20] & Unclear & Unclear & Low & High & High \\ Janowczyk 2012 [21] & Unclear & High & Unclear & High & High \\ Kothari 2012 [22] & Unclear & Low & Low & Unclear & Unclear \\ Poruthoor 2013 [23] & Unclear & High & High & High & High \\ BenTaieb 2015 [24] & Unclear & Unclear & Low & High & High \\ BenTaieb 2016 [25] & Unclear & High & Unclear & High & High \\ BenTaieb 2017 [26] & Unclear & Unclear & Low & High & High \\ Lorsakul 2017 [27] & Unclear & Unclear & High & High & High \\ Du 2018 [28] & Unclear & Unclear & Unclear & Unclear & Unclear \\ Heindl 2018 [29] & Unclear & Low & Low & High & High \\ Kalra 2020 [30] & Unclear & Low & Low & High & High \\ Levine 2020 [31] & Unclear & Low & Low & Unclear & Unclear \\ Yaar 2020 [32] & Unclear & Unclear & Low & High & High \\ Yu 2020 [33] & Unclear & Low & Low & High & High \\ Gentles 2021 [34] & High & Unclear & High & High & High \\ Ghoniem 2021 [35] & Unclear & Unclear & Unclear & High & High \\ Jiang 2021 [36] & High & High & Unclear & High & High \\ Laury 2021 [37] & Low & High & High & High & High \\ Paijens 2021 [38] & Low & High & Unclear & High & High \\ Shin 2021 [39] & Unclear & Unclear & Unclear & High & High \\ Zeng 2021 [40] & Unclear & Unclear & Low & High & High \\ Boehm 2022 [41] & Unclear & High & Unclear & High & High \\ Boschman 2022 [42] & Unclear & Low & Low & High & High \\ Elie 2022 [43] & Unclear & Low & High & High & High \\ Farahani 2022 [44] & Unclear & Unclear & Low & Unclear & Unclear \\ Hu 2022 [45] & Unclear & Unclear & Unclear & Unclear & Unclear \\ Jiang 2022 [46] & Unclear & Unclear & High & High & High \\ Kasture 2022 [47] & High & High & High & High & High \\ Kowalski 2022 [48] & Unclear & Unclear & Unclear & High & High \\ Liu 2022 [49] & Unclear & Unclear & Unclear & Unclear & Unclear \\ Nero 2022 [50] & Unclear & Low & High & High & High \\ Salguero 2022 [51] & Unclear & Unclear & Low & High & High \\ Wang 2022 [52] & Unclear & Unclear & Low & High & High \\ \hline \end{tabular} \end{table} Table 1: PROBAST risk of bias assessment results for the 36 papers included in this review. This is presented as one row for each paper because every paper that contained multiple models of interest was found to have the same risk of bias for every model. 
Table 2: Study characteristics of the 36 included papers: publication, ovarian cancer data source, number of models of interest, outcome categories, outcomes, and published code availability.

Figure 2: PROBAST risk of bias results summarised for the 36 papers included in this review.

### Data in Included Literature

The number of participants in internal datasets varied by orders of magnitude, with each study including 1 to 664 ovarian cancer patients, and one study including over 10,000 total patients across a range of 32 malignancies [30]. Only the five most common subtypes of ovarian carcinoma were used, with no study reporting the inclusion of less common carcinomas or non-epithelial ovarian cancers. Only one study explicitly included any prospective data collection, and this was only for a small subset which was not used for external validation [41]. As shown in Figure 3, the number of pathology slides used was often much greater than the number of patients included, with three studies using over 1000 slides from ovarian cancer patients [22, 33, 49]. Most of the studies used WSIs for model development (27/36), with others using tissue microarrays (TMAs) (4/36) or pre-cropped digital pathology images (2/36). Most studies used H&E-stained tissue (27/36) and the others used a variety of IHC stains (9/36), with no two papers reporting the use of the same IHC stains. Some studies included multi-modal approaches, using genomics [23, 32, 35, 40, 41], proteomics [23, 40], transcriptomics [40], and radiomics [41] data alongside histopathological data. The most commonly used data source was The Cancer Genome Atlas (TCGA) (14/36), a project from which over 30,000 digital pathology images from 33 malignancies are publicly available.
The ovarian cancer subset, TCGA-OV [53], contains 1481 WSIs from 590 cases of ovarian serous carcinoma (mostly, but not exclusively, high-grade), with corresponding genomic, transcriptomic, and clinical data. This includes slides from eight data centres in the United States, with most slides containing frozen tissue sections (1374/1481) rather than formalin-fixed, paraffin-embedded (FFPE) sections. Other recurring data sources were the University of British Columbia Ovarian Cancer Research Program (OVCARE) repository [31, 42, 44], the Transcanadian study [24, 25], and the Mayo Clinic records [36, 46], each of which was used in multiple publications by a single research group. All other researchers either used a unique data source (11/36) or did not report the provenance of their data (8/36). TCGA-OV, OVCARE, and the Transcanadian study are all multi-centre datasets. Aside from these, few studies reported the use of multi-centre data [38, 39, 40, 41, 44]. Only two studies reported the use of multiple slide scanners, with every slide scanned on one of two available scanners [42, 44]. The countries from which data were sourced included Canada, China, Finland, France, Italy, the Netherlands, South Korea, Taiwan, the United Kingdom, and the United States of America.

Figure 3: Histograms showing the number of ovarian cancer patients and slides used in model development. Many of these values are uncertain due to incomplete reporting, as reflected in Table 3.

### Methods in Included Literature

There was a total of 62 models of interest in the 36 included papers, with each paper containing 1 to 8 such models. These models consisted of 35 classifiers, 14 survival prediction models, 7 segmentation models, and 6 regression models. A variety of classification outcomes were assessed - histological subtype (7/35), malignancy (5/35), primary cancer type (4/35), genetic mutation status (3/35), stain intensity (3/35), tumour grade (2/35), tissue type (2/35), cell type (2/35), microsatellite instability (2/35), transcriptomic subtype (2/35), stage (1/35), epithelial-mesenchymal transition status (1/35), and treatment response (1/35). Most survival models measured overall survival (9/14), while others measured progression-free survival (2/14), platinum-free interval (2/14) and symptom-free interval (1/14). Segmentation models were split between tumour segmentation (4/7) and stain segmentation (3/7). The regression models also quantified staining but were formulated as regression tasks rather than segmentation. A variety of models were used, with the most common types being convolutional neural network (CNN) (21/62), support vector machine (SVM) (10/62), and random forest (9/62). CNN architectures included GoogLeNet [28], VGG16 [33], VGG19 [31, 44], InceptionV3 [39], ResNet18 [42, 45], ResNet50 [50], and MaskRCNN [46]. Novel CNNs typically used multiple standardised blocks involving convolutional, normalization, activation, and/or pooling layers [32, 47, 48], with one study also including attention modules in these blocks [49]. One study generated their novel architecture by using a topology optimization approach on a standard VGG16 [35]. Most researchers split their original images into patches to be separately processed, with patch sizes ranging from 60x60 to 2048x2048 pixels, the most common being 256x256 pixels (6/36) and 512x512 pixels (5/36).
A range of feature extraction techniques were employed, with a nearly even split between hand-crafted/pre-defined features (26/62) and features that were automatically learned by the model (30/62). Hand-crafted features included a plethora of textural, chromatic, and cellular and nuclear morphological features. Hand-crafted features were commonly used as inputs to classical ML methods, such as SVM and random forest models. Learned features were typically extracted using a CNN, which was often also used for classification. Despite the common use of patches, most models made predictions at the WSI level (25/62) or patient level (11/62), requiring aggregation of patch-level information. Two distinct aggregation approaches were used, one aggregating before modelling and one aggregating after modelling. The former approach requires the generation of slide-level features before modelling, the latter requires the aggregation of patch-level model outputs to make slide-level predictions. Slide-level features were generated using averaging [23, 40], attention-based weighted averaging [45, 49, 50], concatenating [25, 30], as well as more complex embedding approaches using Fisher vector encoding [24] and k-means clustering [26]. Patch-level model outputs were aggregated to generate slide-level predictions by taking the maximum [32] or average [35], using voting strategies [42, 52], or using a random forest classifier [44]. These approaches are all examples of _multiple instance learning_ (MIL), though few models of interest were reported using this terminology [32, 45, 50]. Despite attention-based approaches having been applied to other malignancies for several years [55, 56], they were only seen in the most recent ovarian cancer studies [44, 45, 49, 50, 52], and none of the methods included self-attention, an increasingly popular method for other malignancies [57]. Most models were deterministic, though hidden Markov trees [19], probabilistic boosting trees [20], and Gaussian mixture models [43] were also used. Tissue was typically analysed at a single resolution, with only four papers including multi-magnification techniques in their models of interest. Two of these combined features from different resolutions for modelling [24, 26], and the other two used low-resolution images to determine areas of interest in high-resolution images [25, 52]. Out of the papers for which it could be determined, the most common modelling magnifications were 20x (26/31) and 40x (7/31). Few models integrated histopathology data with other modalities (8/62). Multi-modal approaches included the concatenation of separately extracted uni-modal features before modelling [23, 35, 40], the amalgamation of uni-modal predictions from separate models [41], and a teacher-student approach where multiple modalities were used in model training but only histopathology data was used for prediction [32]. #### Analysis in Included Literature Analyses were limited, with less than half of the outcomes being evaluated with cross-validation (24/62) and/or external validation on independent ovarian cancer datasets (7/62) despite small internal cohort sizes. Cross-validation methods included k-fold (11/24) with 4 to 10 folds, Monte Carlo (8/24) with 3 to 15 repeats, and leave-one-patient-out cross-validations (5/24). Some other papers included cross-validation on the training set to select hyperparameters but used only a small unseen test set from the same data source for evaluation. 
Externally validated models were all trained with WSIs, with validations either performed on TMAs (4/7) or WSIs from independent data sources (3/7), with two of these explicitly using different scanners to digitize internal and external data [42, 44]. Some papers included external validation with different malignancies, but none of these included ovarian cancer data in any capacity. Most classification models were evaluated using accuracy, balanced accuracy, and/or area under the receiver operating characteristic curve (AUC), with one exception where only a p-value was reported measuring the association between histological features and transcriptomic subtypes based on a Kruskal-Wallis test [33]. Some models were also evaluated using the F1-score, which we chose not to tabulate (in Figure 3) as the other metrics were reported more consistently. Survival model performance was reported using AUC, p-value, accuracy and hazard ratios. Segmentation models were almost all evaluated differently from each other, with different studies reporting AUC, accuracy, Dice coefficient, sensitivity, specificity, and qualitative evaluations. Regression models were all evaluated using the coefficient of determination (\(R^{2}\)-statistic). The variability of model performance was not frequently reported (20/78), and when it was reported it was often incomplete. This included cases where it was unclear what the intervals represented (95% confidence interval, one standard deviation, variation, etc.), or not clear what the exact bounds of the interval were due to results being plotted but not explicitly stated. Within the entire review, there were only two examples in which variability was reported during external validation [39, 42], one of which did not clearly state either the bounds or the type of the interval. No studies performed any Bayesian form of uncertainty quantification. Reported results are shown in Table 3, though direct comparisons between the performance of different models should be treated with caution due to the diversity of data and validation methods used to evaluate different models, the lack of variability measures, the consistently high risks of bias, and the heterogeneity in reported metrics. ## Discussion The vast majority of published research on AI for diagnostic/prognostic purposes in ovarian cancer histopathology was found to be at a high risk of bias due to issues within the analyses performed. Researchers often used a limited quantity of data or did not include sufficient validation to account for overfitting and model optimism (cross-validation, bootstrapping, external validation) within their study methodology. While data quantity may have been limited by technical and financial constraints, the lack of thorough validation is a key issue which can be corrected regardless of other limitations through improved study design. The more robust analyses included one study in which several relevant metrics were evaluated using 10 repeats of Monte Carlo cross-validation on a set of 406 WSIs, with standard deviations reported for each metric [31]. Another positive example included the use of both an internal five-fold cross-validation, and an external validation for the same outcome, giving a more rigorous analysis [52]. While external validations were uncommon, those which were conducted offered a real insight into model generalisability, with a clear reduction in performance on all external validation sets except one [44]. 
The only study which demonstrated high generalisability included the largest training set out of all externally validated approaches, included more extensive data labelling than many similar studies, and implemented a combination of three colour normalisation approaches, indicating that these factors may benefit generalisability. Studies frequently had an unclear risk of bias within the participants (29/36) and predictors (18/36) domains of PROBAST, with published work rarely reporting information about patient recruitment and inclusion, especially when using open-access datasets. Only two papers were found to be at low risk of bias for participants, with these including clear and reasonable patient recruitment strategies and selection criteria, which can be seen as positive examples for other researchers [37, 38]. Information about the predictors (histopathology images and features derived thereof) was generally better reported, but still often missed key details which meant that it was unclear whether all tissue samples were processed similarly to avoid risks of bias from visual heterogeneity. It was found that when patient characteristics were reported, they often showed a high risk of bias. Many studies included very small numbers of patients with specific differences from the majority - for example, a minority where specimens were processed with a different staining protocol, leading to variable image appearance. This can be a source of bias because the minority subgroup may be correlated with the outcome of interest by chance, so a model can make predictions based on a surrogate marker which may only be useful in one specific dataset, and is not generalisable to the wider population. Such a surrogate marker may have little to do with the outcome of interest, being the result of a spurious correlation in the data learned by the model. Larger population subgroups can also cause bias, though this is less likely to be caused by random chance and more likely to be influenced by structural confounding factors. One paper was also found to have major discrepancies between the reported data, the study design, and the data that was available through a link in the paper, indicating a significant risk of bias [47]. In this case, it was reported that TCGA-OV data was used for multi-class subtyping, despite this dataset only including high-grade serous and low-grade serous carcinomas. ### Limitations of the Review The main limitation of this review is the restriction to the English language - AI research is a global field, and relevant literature has likely been published in other languages, making this review incomplete. While most of the review process was completed by multiple independent researchers, the duplicate detection was performed by only a single researcher, raising the possibility of errors in this step of the review process, resulting in incorrect exclusions. Due to the significant time gap between the initial and final literature searches (approximately 7 months), there may have been inconsistencies in interpretations, both for data extraction and risk of bias assessments. Finally, this review focused only on light microscopy images of human histopathology samples relating to ovarian cancer, so may have overlooked useful literature outside of this domain. ### Development of the Field The field of AI in ovarian cancer histopathology diagnosis is rapidly growing, with more research published since the start of 2020 than in all preceding years combined. 
The earliest research, published between 2010-2013, used hand-crafted features to train classical ML methods such as SVMs. These models were used for segmentation [17, 18, 19, 20, 21], malignancy classification [22], grading [23], and survival prediction [23]. Most of these early studies focused on IHC-stained tissue (5/7), which would be much less commonly used in subsequent research (4/29). The field was relatively dormant in the following years, with only 6 papers published between 2014-2019, half of which had the same primary author [24, 25, 26]. These models still used traditional ML classifiers, though some used learned features rather than the traditional hand-crafted features. The models developed were used for histological subtyping [24, 25, 26] and cellular/tissue classification [27, 28, 29]. Since 2020 there has been a much greater volume of research published, most of which has involved the use of deep neural networks for automatic feature extraction and classification. Recent research has investigated a broader array of outcomes, including the classification of primary cancer type [30], mutation status [40, 50], transcriptomic subtypes [33, 40], microsatellite instability [40], epithelial-mesenchymal transition status [45], and treatment response prediction [52]. Three additional survival outcomes have also been predicted in more recent literature - symptom-free interval [32], platinum-free interval [33, 37], and progression-free survival [41, 50]. Despite progress within a few specific outcomes, there was no obvious overall trend in the sizes of datasets used over time, either in terms of the number of slides or the number of participants. Similarly, there was no evidence that recent research included more rigorous internal validations, though external validations have been increasing in frequency - no research before 2021 included any external validation with ovarian cancer data, but two papers published in 2021 [39, 40] and three published in 2022 [42, 44, 52] did. These external validations were typically limited to small quantities of data from a single external data centre or of a different data type (TMA rather than WSI). However, the inclusion of any external validation demonstrates progress from previous research. Such validations are essential to the clinical utility of these models as real-world implementation will require robustness to different sources of visual heterogeneity, with variation occurring across different data centres and within data centres over time. As this field continues to mature, we hope to see more studies conduct thorough validations with larger, high-quality independent datasets, including clearly reported protocols for patient recruitment and selection, pathology slide creation, and digitization. This will help to reduce the biases, limited reproducibility, and limited generalisability identified in most of the existing research in this domain. ### Current Limitations and Future Recommendations A large proportion of published work did not provide sufficient clinical and pathological information to assess the risk of bias. Common types of missing information included where the patients were recruited, how many patients were included, how many samples/images were used, whether any patients/images were excluded, and the methods by which tissue was processed and digitized. The latter includes details about the fixing, staining, and scanning of tissue, processes which are likely causes of visual heterogeneity in pathology slides. 
This heterogeneity can lead to confounding or bias in models when not properly accounted for, especially when using small datasets where random correlations between unrelated factors are more likely to occur. When using sufficiently large datasets and rigorous methodologies to account for confounding, visual heterogeneity can be beneficial as models can be trained to account for these variations. To understand the effects of heterogeneity it is important that AI researchers thoroughly report data provenance. Researchers may find it useful to refer to reporting checklists, such as _transparent_ reporting of a multivariable prediction model for individual prognosis or diagnosis_ (TRIPOD), to ensure that they have understood and reported all relevant details of their studies. Reporting was particularly sparse in studies which used openly accessible data, possibly indicating that AI-focused researchers were not taking sufficient time to understand these datasets and ensure their research was clinically relevant. For example, many of the researchers who used TCGA data included frozen tissue sections without commenting on whether this was appropriate, despite the fact that pathologists do not consider them to be of optimal diagnostic quality. One paper handled TCGA data more appropriately, with a clear explanation of the positives and negatives of the dataset, and entirely separate models for FFPE and frozen slides [30]. AI researchers should seek to understand the clinical context of their data before undertaking research to reduce bias and increase clinical utility. Ideally, this should involve regular interactions with expert clinicians, including histopathologists and oncologists. Many researchers reported results from only a single train-test split of their data, which raises questions about the reliability of results, especially with small datasets. We recommend that researchers should always conduct more thorough analyses, using cross-validation, bootstrapping, and/or external validations to ensure that results are robust and truly reflect the ability of their model(s) to generalise to unseen data, and are not simply caused by chance. It is also beneficial to report the variability of results (typically in a 95% confidence interval), especially when comparing multiple models, where confidence intervals can help to distinguish whether one model is genuinely better than another or whether the difference is due to chance. Statistical tests can also be beneficial for these evaluations. Another option for capturing variability is Bayesian uncertainty quantification, which can be used to separate aleatoric (inherent) and epistemic (modelling) uncertainty. The incomplete reporting observed in many studies makes them much less reproducible. As well as the previously mentioned factors around patient recruitment and data processing, there was often missing information about AI methodology and analysis approaches. The negative effect that incomplete reporting has on reproducibility can be significantly mitigated by publishing code and data. Only 14 of the 36 included papers made any attempt to share code, with some of these appearing to be incomplete or inaccessible. The better code repositories included detailed documentation to aid reproducibility, including environment set-up information [33, 42], overviews of included functions [41], and code examples used to generate reported results [29]. 
It is relatively easy to publish code and generate documentation to enhance usability, and there are few drawbacks to doing so when publishing research. Making data available is more difficult due to data security requirements and the potential storage costs, but it can provide benefits beyond the primary research of the original authors. Digital pathology research in ovarian cancer is currently limited by the lack of openly accessible data, leading to over-dependence on TCGA, and causing many researchers to painstakingly collate similar but distinct datasets. These datasets often contain little of the heterogeneity seen in multi-centre, multi-scanner data, making it difficult for researchers to train robust models or assess generalisability. Making more data openly accessible, with detailed protocols describing data creation, would allow future researchers to conduct more thorough analyses and subsequently improve model generalisability and clinical implementability. Current literature in this field can be largely characterised as model prototyping with homogeneous retrospective data. Studies rarely consider the reality of human-machine interaction, perhaps believing that these models are a drop-in replacement for pathologists. However, these models perform narrow tasks within the pathology pipeline and have no understanding of context beyond their limited training datasets. We believe these models would be more beneficial (and more realistic to implement) as assistive tools for pathologists, providing secondary opinions or novel ancillary information. While current research is typically focused on assessing model accuracy without any pathologist input, different study designs could be employed to better assess the real-world utility of these models as assistive tools. For example, usability studies could investigate which models are most accessible and most informative to pathologists in practice, and prospective studies could quantify any benefits to diagnostic efficiency and patient outcomes, and investigate the robustness of models in practice. Understanding the effects of AI on the efficiency of diagnosis is particularly important given the limited supply of pathologists worldwide. As such, this type of research will significantly benefit clinical translation. ### Summary of recommendations * Understand data and ensure planned research is clinically relevant before modelling, ideally involving clinicians throughout the project. * Consider different study designs, including usability studies and/or prospective studies * Clearly report the context of any histopathology data, including how patients were recruited/selected, and how tissue specimens were processed to generate digital pathology images. * Conduct thorough analyses using cross-validation, external validation, and/or bootstrapping. * Make all code openly accessible (and data if possible). ## Acknowledgments There was no direct funding for this research. JB is supported by the UKRI Engineering and Physical Sciences Research Council (EPSRC) [EP/S024336/1]. KA, PA are supported by the Tony Bramall Charitable Trust. AS is supported by Innovate UK via the National Consortium of Intelligent Medical Imaging (NCIMI) [104688], Cancer Research UK [C19942/A28832] and Leeds Hospitals Charity [9R01/1403]. The funders had no role in influencing the content of this research. 
For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission. ## Author Contributions JB created the study protocol with feedback and contributions from all other authors. JB, KA, KZ, NMO, and NR performed the risk of bias assessments. JB and KA performed data extraction. JB analysed extracted data and wrote the manuscript, with feedback and contributions from all other authors. ## Competing Interests GH receives research funding from IQVIA. NMO receives research funding from 4D Path. All other authors declare no conflicts of interest.
2309.12215
Regionally Additive Models: Explainable-by-design models minimizing feature interactions
Generalized Additive Models (GAMs) are widely used explainable-by-design models in various applications. GAMs assume that the output can be represented as a sum of univariate functions, referred to as components. However, this assumption fails in ML problems where the output depends on multiple features simultaneously. In these cases, GAMs fail to capture the interaction terms of the underlying function, leading to subpar accuracy. To (partially) address this issue, we propose Regionally Additive Models (RAMs), a novel class of explainable-by-design models. RAMs identify subregions within the feature space where interactions are minimized. Within these regions, it is more accurate to express the output as a sum of univariate functions (components). Consequently, RAMs fit one component per subregion of each feature instead of one component per feature. This approach yields a more expressive model compared to GAMs while retaining interpretability. The RAM framework consists of three steps. Firstly, we train a black-box model. Secondly, using Regional Effect Plots, we identify subregions where the black-box model exhibits near-local additivity. Lastly, we fit a GAM component for each identified subregion. We validate the effectiveness of RAMs through experiments on both synthetic and real-world datasets. The results confirm that RAMs offer improved expressiveness compared to GAMs while maintaining interpretability.
Vasilis Gkolemis, Anargiros Tzerefos, Theodore Dalamagas, Eirini Ntoutsi, Christos Diou
2023-09-21T16:16:22Z
http://arxiv.org/abs/2309.12215v1
# Regionally Additive Models: Explainable-by-design models minimizing feature interactions

###### Abstract

Generalized Additive Models (GAMs) are widely used explainable-by-design models in various applications. GAMs assume that the output can be represented as a sum of univariate functions, referred to as components. However, this assumption fails in ML problems where the output depends on multiple features simultaneously. In these cases, GAMs fail to capture the interaction terms of the underlying function, leading to subpar accuracy. To (partially) address this issue, we propose Regionally Additive Models (RAMs), a novel class of explainable-by-design models. RAMs identify subregions within the feature space where interactions are minimized. Within these regions, it is more accurate to express the output as a sum of univariate functions (components). Consequently, RAMs fit one component per subregion of each feature instead of one component per feature. This approach yields a more expressive model compared to GAMs while retaining interpretability. The RAM framework consists of three steps. Firstly, we train a black-box model. Secondly, using Regional Effect Plots, we identify subregions where the black-box model exhibits near-local additivity. Lastly, we fit a GAM component for each identified subregion. We validate the effectiveness of RAMs through experiments on both synthetic and real-world datasets. The results confirm that RAMs offer improved expressiveness compared to GAMs while maintaining interpretability.

Keywords: Explainable AI, Generalized Additive Models, x-by-design

## 1 Introduction

Generalized Additive Models (GAMs) [1] are a popular class of explainable by design (x-by-design) models [14, 15]. Their popularity stems from their inherent interpretability. GAMs represent an aggregation of univariate functions, where the overall model can be expressed as \(f(\mathbf{x})=c+\sum_{s=1}^{D}f_{s}(x_{s})\). Due to this structure, each individual univariate function (component) can be visualized and interpreted independently. Consequently, understanding the behavior of the overall model simply requires visualizing all components, each with a one-dimensional plot. However, GAMs have limitations, especially in cases where the outcome depends on multiple features simultaneously, i.e., when the unknown predictive function includes terms that combine multiple features. Therefore, a number of methods [1, 13, 16, 15] extend the traditional GAMs in multiple directions. The most prominent direction involves selecting the most important higher-order interactions. GA\({}^{2}\)Ms [15] first introduced this line of research, extending the traditional GAMs by adding pairwise interactions to their formulation, i.e., \(\sum_{s_{1}=1}^{D}\sum_{s_{2}\neq s_{1}}f_{s_{1}s_{2}}(x_{s_{1}},x_{s_{2}})\). GA\({}^{2}\)Ms are also x-by-design, because the user can visualize both the first-order (\(1D\) plots) and second-order (\(2D\) plots) components. As the number of features increases, the number of second-order interactions grows quadratically, making it impractical for users to interpret a large number of two-dimensional plots. Therefore, methods like GA\({}^{2}\)Ms aim to automatically select the most significant interaction terms. Both GAMs and GA\({}^{2}\)Ms have limitations in modeling interactions of more than two features, and the main reason behind this limitation is that it is difficult to visualize three or more features on a single plot.
Therefore, an approach like that would violate the x-by-design principle. To address this limitation, we propose a new class of x-by-design models called Regionally Additive Models (RAMs). Since, in the general case, it is infeasible to visualize terms with more than two variables, RAMs focus on learning terms with structure: \(f(x_{s_{1}}|\mathbbm{1}_{x_{c_{1}}},\mathbbm{1}_{x_{c_{2}}},\cdots)\) for first-degree interactions and \(f(x_{s_{1}},x_{s_{2}}|\mathbbm{1}_{x_{c_{1}}},\mathbbm{1}_{x_{c_{2}}},\cdots)\) for second-degree interactions. The symbol \(\mathbbm{1}_{x_{c_{1}}}\) denotes the condition that the feature \(x_{c_{1}}\) takes a specific value or belongs to a specific range. To better grasp the idea, consider a prediction task where the outcome depends, among others, on a combination \(f(x_{1},x_{2},x_{3})\) of three features: \(x_{1}\in[20,80]\) (age), \(x_{2}\in[0,40]\) (years in work), and \(x_{3}\in\{True,False\}\) (married). Both GAM and GA\({}^{2}\)M would fail to accurately learn this term of the underlying predictive function. However, the three-feature effect can be decomposed into two sets of second-degree conditional terms based on the marital status: \(f_{1}(x_{1},x_{2}|x_{3}=True)\) and \(f_{2}(x_{1},x_{2}|x_{3}=False)\). In this way, RAM can accurately represent \(f\) by learning two second-degree conditional terms, one for each marital status. Furthermore, the two sets of terms can be visualized and interpreted using two-dimensional plots. It is worth noting that the conditional terms can also include numerical features. For example, it could be more accurate to learn instead a set of four first-degree terms, conditioned on the marital status and the years in work: \(f_{1}(x_{1}|x_{2}<10,x_{3}=True)\), \(f_{2}(x_{1}|x_{2}\geq 10,x_{3}=True)\), \(f_{3}(x_{1}|x_{2}<10,x_{3}=False)\), and \(f_{4}(x_{1}|x_{2}\geq 10,x_{3}=False)\), which can be visualized and interpreted as four one-dimensional plots. To adhere to the x-by-design principle, RAMs should be able to automatically identify the most significant conditional terms. As the number of these terms increases, it becomes difficult for users to retain and interpret numerous plots associated with each feature or pair of features. Therefore, RAMs use Regional Effect Plots [10] to identify a small set of conditional terms that have the greatest impact in minimizing feature interactions. The RAM framework consists of three key steps. First, a black-box model is fitted to capture all high-order interactions. Then, the subregions where the black-box model exhibits near-local additivity are identified using Regional Effect Plots. Finally, a GAM component is fit to each identified subregion. The main contributions of this paper are as follows:

* We formulate a new class of x-by-design models called Regionally Additive Models (RAMs).
* We propose a generic framework for learning RAMs and a novel method for identifying the most significant conditional terms.
* We demonstrate the effectiveness of RAMs in modeling high-order interactions on a synthetic toy example and two real-world datasets.

## 2 Motivation

Consider the black-box function \(f(\mathbf{x})=8x_{2}\mathbbm{1}_{x_{1}>0}\mathbbm{1}_{x_{3}=0}\) with \(x_{1},x_{2}\sim\mathcal{U}(-1,1)\) and \(x_{3}\sim\text{Bernoulli}(0.5)\). Although very simple, GAM and GA\({}^{2}\)M would fail to learn this mapping due to the three-feature interaction term.
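This failure mode is easy to reproduce numerically. The following minimal sketch (our illustration, assuming only NumPy; it is not part of the original implementation) estimates the best single additive slope for \(x_{2}\) over the whole input space and then inside and outside the active subregion:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy setup of Section 2: x1, x2 ~ U(-1, 1), x3 ~ Bernoulli(0.5)
x1 = rng.uniform(-1, 1, n)
x2 = rng.uniform(-1, 1, n)
x3 = rng.integers(0, 2, n)
y = 8 * x2 * (x1 > 0) * (x3 == 0)

# Best purely additive (here, linear) effect of x2 alone:
# slope = Cov(x2, y) / Var(x2) -> 8 * P(x1 > 0, x3 = 0) = 2
slope = np.cov(x2, y)[0, 1] / np.var(x2)
print(f"marginal slope of x2: {slope:.2f}")            # ~2.0

# Conditioning on the two subregions recovers the true local slopes.
active = (x1 > 0) & (x3 == 0)
slope_in = np.cov(x2[active], y[active])[0, 1] / np.var(x2[active])
slope_out = np.cov(x2[~active], y[~active])[0, 1] / np.var(x2[~active])
print(f"slope inside R21: {slope_in:.2f}, outside: {slope_out:.2f}")  # ~8.0, ~0.0
```

The marginal slope of roughly \(2\) is exactly the compromise a global GAM settles for, while the per-subregion slopes recover the true local effects of \(8\) and \(0\).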
As we see in Figure 1 (left), a GAM misleadingly learns that \(\hat{f}(\mathbf{x})\approx 2x_{2}\), because in \(\frac{1}{4}\) of the cases (\(x_{1}>0\text{ and }x_{3}=0\)) the impact of \(x_{2}\) on the output is \(8x_{2}\), and in the remaining \(\frac{3}{4}\) of the cases the impact of \(x_{2}\) on the output is \(0\). However, if we split the input space into two subregions, we observe that \(f\) is additive in each one (regionally additive): \[f(\mathbf{x})=\begin{cases}8x_{2}&\text{if }x_{1}>0\text{ and }x_{3}=0\\ 0&\text{otherwise}\end{cases} \tag{1}\] Therefore, if we knew the appropriate subregions, namely, \(\mathcal{R}_{21}=\{x_{1}>0\text{ and }x_{3}=0\}\) and \(\mathcal{R}_{22}=\{x_{1}\leq 0\text{ or }x_{3}=1\}\), we could split the impact of \(x_{2}\) appropriately and fit the following model to the data: \[f^{\texttt{RAM}}(\mathbf{x})=f_{1}(x_{1})+f_{21}(x_{2})\mathbbm{1}_{(x_{1},x_{3})\in\mathcal{R}_{21}}+f_{22}(x_{2})\mathbbm{1}_{(x_{1},x_{3})\in\mathcal{R}_{22}}+f_{3}(x_{3}) \tag{2}\] Equation (2) represents a Regionally Additive Model (RAM), which is simply a GAM fitted on each subregion of the feature space. Importantly, RAM's enhanced expressiveness does not come at the expense of interpretability. As we observe in Figure 1 (middle and right), we can still visualize and comprehend each univariate function in isolation, exactly as we would do with a GAM, with the only difference being that we have to consider the subregions where each univariate function is active. The key challenge of RAMs is to appropriately identify the subregions where the black-box function is (close to) regionally additive. For this purpose, as we will see in Section 4.2, we propose a novel algorithm that is based on the idea of regional effect plots.

Figure 1: The left image showcases the global GAM which erroneously learns an approximation of \(f(\mathbf{x})\approx 2x_{2}\). In contrast, the middle and right images demonstrate the RAM's ability to identify two distinct subregions where \(f\) exhibits regional additivity. By fitting a GAM to each subregion, the RAM accurately captures the true function \(f\) while retaining interpretability.

## 3 RAM formulation

Notation. Let \(\mathcal{X}\subseteq\mathbb{R}^{D}\) be the \(D\)-dimensional feature space, \(\mathcal{Y}\) the target space and \(f(\cdot):\mathcal{X}\rightarrow\mathcal{Y}\) the black-box function. We use index \(s\in\{1,\dots,D\}\) for the feature of interest and \(/s=\{1,\dots,D\}\setminus\{s\}\) for the rest. For convenience, we use \((x_{s},\mathbf{x}_{/\mathbf{s}})\) to refer to \((x_{1},\cdots,x_{s},\cdots,x_{D})\) and, equivalently, \((X_{s},X_{/s})\) instead of \((X_{1},\cdots,X_{s},\cdots,X_{D})\) when we refer to random variables. The training set \(\mathcal{D}=\{(\mathbf{x}^{i},y^{i})\}_{i=1}^{N}\) is sampled i.i.d. from the distribution \(\mathbb{P}_{X,Y}\).
Note that the number of non-overlapping regions \(T_{s}\) can be different for each feature; the regions \(\{\mathcal{R}_{st}\}_{t=1}^{T_{s}}\) are disjoint and their union covers the entire feature space \(\mathcal{X}_{/s}\). The primary objective is to identify regions in which the impact of the \(s\)-th feature on the output is _relatively independent_ of the values of the other features \(\mathbf{x}_{/\mathbf{s}}\). To make this concrete, if we decompose the impact of the \(s\)-th feature on the output \(y\) into two terms, \(f_{s}(x_{s},\mathbf{x}_{/\mathbf{s}})=f_{s,ind}(x_{s})+f_{s,int}(x_{s},\mathbf{x}_{/\mathbf{s}})\), where \(f_{s,ind}(\cdot)\) represents the independent effect and \(f_{s,int}(\cdot)\) the interaction effect, then the objective is to identify regions \(\{\mathcal{R}_{st}\}_{t=1}^{T_{s}}\) such that the interaction effect is minimized. Regionally Additive Models (RAM) formulate the mapping \(\mathcal{X}\rightarrow\mathcal{Y}\) as: \[f^{\texttt{RAM}}(\mathbf{x})=c+\sum_{s=1}^{D}\sum_{t=1}^{T_{s}}f_{st}(x_{s})\mathbb{1}_{\mathbf{x}_{/\mathbf{s}}\in\mathcal{R}_{st}},\quad\mathbf{x}\in\mathcal{X} \tag{3}\] In the above formulation, \(f_{st}(\cdot)\) is the component of the \(s\)-th feature which is active on the \(t\)-th region. RAM can be viewed as a GAM with \(T_{s}\) components per feature, where each component is applied to a specific region \(\mathcal{R}_{st}\). To facilitate this interpretation, we can define an enhanced feature space \(\mathcal{X}^{\texttt{RAM}}\) as: \[\begin{split}\mathcal{X}^{\texttt{RAM}}&=\{x_{st}|s\in\{1,\dots,D\},t\in\{1,\dots,T_{s}\}\}\\ x_{st}&=\begin{cases}x_{s},&\text{if }\mathbf{x}_{/\mathbf{s}}\in\mathcal{R}_{st}\\ 0,&\text{otherwise}\end{cases}\end{split} \tag{4}\] and then define RAM as a typical GAM on the extended feature space \(\mathcal{X}^{\texttt{RAM}}\): \[f^{\texttt{RAM}}(\mathbf{x})=c+\sum_{s,t}f_{st}(x_{st}),\quad\mathbf{x}\in\mathcal{X}^{\texttt{RAM}} \tag{5}\] Equations 3 and 5 are equivalent. To better understand the formulations, consider the toy example described in Section 2. To minimize the impact of feature interactions, we need to divide feature \(x_{2}\) into two subregions, \(\mathcal{R}_{21}=\{x_{1}>0\text{ and }x_{3}=0\}\) and \(\mathcal{R}_{22}=\{x_{1}\leq 0\text{ or }x_{3}=1\}\). Using Eq. 3, the RAM formulation is: \(f^{\texttt{RAM}}(\mathbf{x})=f_{1}(x_{1})+f_{21}(x_{2})\mathbb{1}_{x_{1}>0\text{ and }x_{3}=0}+f_{22}(x_{2})\mathbb{1}_{x_{1}\leq 0\text{ or }x_{3}=1}+f_{3}(x_{3})\). Using Eq. 4, we first define the augmented feature space \(\mathcal{X}^{\texttt{RAM}}=(x_{1},x_{21},x_{22},x_{3})\), where \(x_{21}=x_{2}\mathbb{1}_{x_{1}>0\text{ and }x_{3}=0}\) and \(x_{22}=x_{2}\mathbb{1}_{x_{1}\leq 0\text{ or }x_{3}=1}\), and then the RAM formulation is: \(f^{\texttt{RAM}}(\mathbf{x})=f_{1}(x_{1})+f_{21}(x_{21})+f_{22}(x_{22})+f_{3}(x_{3})\).

## 4 RAM framework

### First step: Fit a black-box function

In the initial step of the pipeline, we fit a black-box function \(f(\cdot)\) to the training set \(\mathcal{D}=\{(\mathbf{x}^{i},y^{i})\}_{i=1}^{N}\) to accurately learn the underlying mapping \(f(\cdot):\mathcal{X}\rightarrow\mathcal{Y}\). While any black-box model can in principle be employed at this stage, the DALE approximation used in the next step requires the function to be differentiable.
Recent advancements have demonstrated that differentiable Deep Learning models, specifically designed for tabular data [1], are capable of achieving state-of-the-art performance, making them a suitable choice for this step.

### Second step: Find subregions

To identify the regions of the input space where the impact of feature interactions is reduced, we have developed a regional effect method influenced by the research conducted by Herbinger et al. [2023] and Gkolemis et al. [2023a]. Herbinger et al. [2023] introduced a versatile framework for detecting such regions, where one of the proposed methods builds on Accumulated Local Effects (ALE) [1]. We have adopted their approach with two notable modifications. First, instead of using the ALE plot, we employ the Differential ALE (DALE) method introduced by Gkolemis et al. [2023a], which provides considerable computational advantages when the underlying black-box function is differentiable. Second, we utilize variable-size bins instead of the fixed-size ones in DALE, because they result in a more accurate approximation, as shown by Gkolemis et al. [2023b].

DALE. DALE gets as input the black-box function \(f(\cdot)\) and the dataset \(\mathcal{D}=\{(\mathbf{x}^{i},y^{i})\}_{i=1}^{N}\), and returns the effect (impact) of the \(s\)-th feature on the output \(y\): \[\hat{f}^{\texttt{DALE}}(x_{s})=\Delta x\sum_{k=1}^{k_{x}}\underbrace{\frac{1}{|\mathcal{S}_{k}|}\sum_{i:\mathbf{x}^{i}\in\mathcal{S}_{k}}\frac{\partial f}{\partial x_{s}}(\mathbf{x}^{i})}_{\hat{\mu}(z_{k-1},z_{k})} \tag{6}\] For more details on the DALE method, please refer to the original paper [14]. In the above equation, \(k_{x}\) is the index of the bin such that \(z_{k_{x}-1}\leq x_{s}<z_{k_{x}}\), and \(\mathcal{S}_{k}\) is the set of the instances of the \(k\)-th bin, i.e., \(\mathcal{S}_{k}=\{\mathbf{x}^{i}:z_{k-1}\leq x_{s}^{i}<z_{k}\}\). In short, DALE computes the average effect (impact) of the feature \(x_{s}\) on the output by, first, dividing the range of \(x_{s}\) into \(K\) equally-sized bins with edges \(z_{0},\ldots,z_{K}\); second, computing the average effect in each bin \(\hat{\mu}(z_{k-1},z_{k})\) (bin-effect) as the average of the instance-level effects inside the bin; and, finally, aggregating the bin-level effects.

DALE for feature interactions. In cases where there are strong interactions between the features, the instance-level effects inside each bin deviate from the average bin-effect (bin-deviation). We can measure such deviation using the standard deviation of the instance-level effects inside each bin: \[\hat{\sigma}^{2}(z_{k-1},z_{k})=\frac{1}{|\mathcal{S}_{k}|-1}\sum_{i:\mathbf{x}^{i}\in\mathcal{S}_{k}}\left(\frac{\partial f}{\partial x_{s}}(\mathbf{x}^{i})-\hat{\mu}(z_{k-1},z_{k})\right)^{2} \tag{7}\] The bin-deviation is a measure of the interaction between the feature \(x_{s}\) and the rest of the features inside the \(k\)-th bin. Therefore, we can measure the global interaction between the feature \(x_{s}\) and the rest of the features along the whole \(s\)-th dimension with the aggregated bin-deviation: \[\mathcal{H}_{s}=\sqrt{\sum_{k=1}^{K}(z_{k}-z_{k-1})^{2}\hat{\sigma}^{2}(z_{k-1},z_{k})} \tag{8}\] Eq. (8) outputs values in the range \([0,\infty)\), with zero indicating that \(x_{s}\) does not interact with any other feature, i.e., the underlying black-box function can be written as \(f(\mathbf{x})=f_{s}(x_{s})+f_{/s}(x_{/s})\). In all other cases, \(\mathcal{H}_{s}\) is greater than zero, and the higher the value, the stronger the interaction.
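To make Eqs. (6)-(8) concrete, the following minimal sketch (our illustration, assuming NumPy; the analytic gradient of the toy function of Section 2 stands in for the Jacobian of a fitted differentiable model, and the helper name is ours) computes the bin-effects, bin-deviations, and the interaction measure \(\mathcal{H}_{s}\) for \(x_{2}\), both globally and inside the two subregions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 20_000, 20

# Toy example of Section 2; the gradient of f w.r.t. x2 is known analytically,
# standing in for the Jacobian column of a trained differentiable model.
X = np.column_stack([rng.uniform(-1, 1, n),
                     rng.uniform(-1, 1, n),
                     rng.integers(0, 2, n)])
grad_x2 = 8.0 * (X[:, 0] > 0) * (X[:, 2] == 0)   # df/dx2 at each instance

def dale_and_interaction(x_s, grads, K):
    """Bin-effects (Eq. 6), bin-deviations (Eq. 7) and H_s (Eq. 8)."""
    z = np.linspace(x_s.min(), x_s.max(), K + 1)
    idx = np.clip(np.digitize(x_s, z[1:-1]), 0, K - 1)   # bin index per instance
    mu = np.array([grads[idx == k].mean() for k in range(K)])
    sigma2 = np.array([grads[idx == k].var(ddof=1) for k in range(K)])
    widths = np.diff(z)
    effect = np.cumsum(widths * mu)              # accumulated effect, Eq. (6)
    H = np.sqrt(np.sum(widths ** 2 * sigma2))    # interaction measure, Eq. (8)
    return effect, H

_, H_all = dale_and_interaction(X[:, 1], grad_x2, K)
in_R21 = (X[:, 0] > 0) & (X[:, 2] == 0)
_, H_R21 = dale_and_interaction(X[in_R21, 1], grad_x2[in_R21], K)
_, H_R22 = dale_and_interaction(X[~in_R21, 1], grad_x2[~in_R21], K)
print(H_all, H_R21, H_R22)   # clearly positive globally, (near-)zero per subregion
```

Globally, \(\mathcal{H}_{2}\) is clearly positive, whereas inside each subregion the instance-level effects are constant, so the measure (nearly) vanishes; this is exactly the signal the subregion search exploits.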
A final detail is that, in order to have a more robust estimation of the bin-effect and the bin-deviation, we use variable-size bins instead of the fixed-size ones in DALE. In particular, we start with a dense fixed-size grid of bins and we iteratively merge neighboring bins with similar bin-effect and bin-deviation until all bins have at least a minimum number of instances. In this way, we can have a more accurate approximation of the bin-effect and the bin-deviation.

Subregions as an optimization problem. In the same way that we can estimate the feature effect (Eq. (6)) and the feature interactions (Eq. (8)) for the \(s\)-th feature in the whole input space, we can also estimate the effect and the interactions in a subregion of the input space \(\mathcal{R}_{st}\subset\mathcal{X}_{/s}\). We denote the equivalent regional quantities as \(\hat{f}_{st}^{\text{DALE}}(x_{s})\) and \(\mathcal{H}_{st}\). They are defined exactly as in Eq. (6) and Eq. (8), respectively, with the only difference that, to compute the regional bin-effect \(\hat{\mu}_{st}(z_{k-1},z_{k})\) and the regional bin-deviation \(\hat{\sigma}_{st}^{2}(z_{k-1},z_{k})\), we use \(\mathcal{D}_{st}\) instead of the whole dataset \(\mathcal{D}\), where \(\mathcal{D}_{st}\) includes only the instances that belong to the subregion \(\mathcal{R}_{st}\), i.e., \(\mathcal{D}_{st}=\{\mathbf{x}^{i}:x_{s}^{i}\in\mathcal{S}_{k}\wedge\mathbf{x}_{/s}^{i}\in\mathcal{R}_{st}\}\). Therefore, in order to minimize the interactions of a particular feature \(s\), we search for a set of regions \(\{\mathcal{R}_{st}\}_{t=1}^{T_{s}}\) that minimizes the following objective: \[\underset{\{\mathcal{R}_{st}\}_{t=1}^{T_{s}}}{\text{minimize}}\;\;\mathcal{L}_{s}=\sum_{t=1}^{T_{s}}\frac{|\mathcal{D}_{st}|}{|\mathcal{D}|}\mathcal{H}_{st}\quad\text{subject to}\quad\bigcup_{t=1}^{T_{s}}\mathcal{R}_{st}=\mathcal{X}_{/s},\qquad\mathcal{R}_{st}\cap\mathcal{R}_{s\tau}=\emptyset\;\;\forall t\neq\tau \tag{9}\] In Eq. (9), the objective function is the weighted sum of the regional interactions \(\mathcal{H}_{st}\), where the weight of each subregion is the fraction of training instances it contains. In this way, we give more importance to the subregions that contain more instances. The first constraint ensures that the subregions cover the whole input space and the second constraint ensures that the subregions are disjoint.

Proposed solution. The core of the method is outlined in Algorithm 1. First, we fit a differentiable black-box model to the data (Step 1) and we compute the Jacobian matrix w.r.t. the input features (Step 2). Then we search for a set of subregions by minimizing the objective of Eq. (9) for each feature \(s\) independently (Steps 3-5). Based on the optimal subregions, we define the extended feature space (Step 6) and we fit a GAM in the extended feature space (Step 7). For solving Eq. (9), we have developed a tree-based algorithm based on the approach proposed by [10], which we describe in detail in Algorithm 2. To describe the algorithm, we define some additional notation: \(\mathcal{R}_{s}^{l}\) is the set of optimal subregions of the \(s\)-th feature at level \(l\) of the tree. Since at each level of the tree we divide the input space into two subregions, at level \(l\) we have \(2^{l}\) subregions, i.e., \(\mathcal{R}_{s}^{l}=\{\mathcal{R}_{st}\}_{t=1}^{2^{l}}\). Equivalently, \(\mathcal{L}_{s}^{l}\) is the optimal objective value of Eq. (9) at level \(l\) of the tree.
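A minimal, self-contained sketch of one level of this search (our illustration in NumPy, with hypothetical helper names, reusing the analytic gradient of the toy example) evaluates the weighted objective of Eq. (9) for a set of candidate conditions on the other features:

```python
import numpy as np

rng = np.random.default_rng(2)
n, K = 20_000, 20
X = np.column_stack([rng.uniform(-1, 1, n),      # x1
                     rng.uniform(-1, 1, n),      # x2
                     rng.integers(0, 2, n)])     # x3
grad_x2 = 8.0 * (X[:, 0] > 0) * (X[:, 2] == 0)   # df/dx2, as before

def interaction(x_s, grads, K=20):
    """Aggregated bin-deviation H (Eq. 8) of one feature on a subset of data."""
    z = np.linspace(-1, 1, K + 1)
    idx = np.clip(np.digitize(x_s, z[1:-1]), 0, K - 1)
    sigma2 = np.array([grads[idx == k].var() if np.any(idx == k) else 0.0
                       for k in range(K)])
    return np.sqrt(np.sum(np.diff(z) ** 2 * sigma2))

# One level of the split search for feature x2 (Eq. 9): try candidate conditions
# on the other features and keep the one with the lowest weighted interaction.
candidates = [(f"x1 <= {p:.1f}", X[:, 0] <= p) for p in np.linspace(-0.8, 0.8, 9)]
candidates.append(("x3 == 0", X[:, 2] == 0))
best_loss, best_name = np.inf, None
for name, mask in candidates:
    w = mask.mean()
    loss = (w * interaction(X[mask, 1], grad_x2[mask])
            + (1 - w) * interaction(X[~mask, 1], grad_x2[~mask]))
    if loss < best_loss:
        best_loss, best_name = loss, name
print(best_name, best_loss)   # "x1 <= 0.0" and "x3 == 0" are (near-)ties here
```

In the toy example the conditions \(x_{1}\leq 0\) and \(x_{3}=0\) are equally good first splits; applying the remaining one at the second level drives the objective to (numerically) zero, recovering the subregions of Eq. (1).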
Although the algorithm can search for an arbitrary number of subregions per feature, in order to preserve the smooth interpretation of the method, we limit the maximum depth of the tree to \(L=3\) levels, which corresponds to a maximum of \(T=2^{L}=8\) subregions per feature. In general, the user can control the trade-off between the interpretability and the accuracy of the method by changing the maximum depth of the tree. Note that with three splits, we already have an interaction term of four (\(f(x_{s}|\mathbb{1}_{x_{c_{1}}},\mathbb{1}_{x_{c_{2}}},\mathbb{1}_{x_{c_{3}}})\)) or five (\(f(x_{s_{1}},x_{s_{2}}|\mathbb{1}_{x_{c_{1}}},\mathbb{1}_{x_{c_{2}}},\mathbb{1}_{x_{c_{3}}})\)) features. To describe how the algorithm finds the optimal splits at each level \(l\), let's consider the illustrative example of Section 2. For feature \(s=2\), the algorithm starts with \(/s=\{1,3\}\) as candidate split-features for the first level of the tree. For each candidate split-feature, the algorithm determines the candidate split positions. Since \(x_{1}\) is a continuous feature, the candidate split positions are a linearly spaced grid of \(P\) points within the range of the feature, i.e., \([-1,1]\), where \(P\) is a hyperparameter of the algorithm, set to 10 in the experiments. Therefore, the candidate positions are \(p\in\{-1,-0.8,-0.6,\ldots,0.8,1\}\), each one defining two subregions, \(\mathcal{R}_{21}=\{(x_{1},x_{3}):x_{1}\leq p\}\) and \(\mathcal{R}_{22}=\{(x_{1},x_{3}):x_{1}>p\}\). As for \(x_{3}\), being a categorical feature, the candidate split points are its unique values, i.e., \(\{0,1\}\), and the corresponding subregions are \(\mathcal{R}_{21}=\{(x_{1},x_{3}):x_{3}=0\}\) and \(\mathcal{R}_{22}=\{(x_{1},x_{3}):x_{3}\neq 0\}\). Each candidate position creates a corresponding dataset \([\mathcal{D}_{21},\mathcal{D}_{22}]\), and the algorithm computes the weighted level of interactions \(\mathcal{H}_{21}\) and \(\mathcal{H}_{22}\) for each dataset. After iterating over all features and all candidate positions for each feature, it selects the split point that minimizes the weighted level of interactions. In the illustrative example, the optimal first-level split is based on \(x_{3}\) and the optimal split point is \(p=0\). The algorithm next proceeds to the second level, where the only remaining candidate feature is \(x_{1}\). In this step, the first split is considered fixed, so the optimal second split is applied to the subregions \(\mathcal{R}_{21}\) and \(\mathcal{R}_{22}\), creating four subregions in total. The algorithm continues in a similar manner until it reaches the maximum depth \(L\) or the drop in the weighted level of interactions is below a threshold \(\epsilon\) (set to a 20% drop in the experiments).

```
Input  : A dataset (X, y) and a maximum depth L
Output : A trained RAM model f^RAM
1  Train a differentiable black-box model f using (X, y)
2  Compute the Jacobian w.r.t. the features x, J = grad_x f(x)
3  for s in {1, ..., D} do
4      {R_st : t = 1, ..., T_s} = DetectSubregions(X, J, L, s)
5  end for
6  Create the extended feature space X^RAM using all R_st, as in Eq. (4)
7  Fit a GAM on X^RAM            // i.e., train each f_st using only the data in R_st
   return f^RAM(x) = c + sum_{s,t} f_st(x_st),  x in X^RAM
```
**Algorithm 1** Regionally Additive Model (RAM) training
**Computational Complexity.** Algorithm 2 has a computational complexity of \(\mathcal{O}((D-1)\cdot P\cdot L\cdot N)\), as it iterates over all candidate split-features and all candidate split positions, and performs indexing operations on the data (splitting the dataset and computing the level of interactions). Algorithm 2 is applied to each feature \(s\) independently, so the computational complexity of the entire algorithm is \(\mathcal{O}(D\cdot(D-1)\cdot P\cdot L\cdot N)\). However, in practice, \(P\) and \(L\) are small numbers. Therefore, the computational complexity of the proposed method simplifies to \(\mathcal{O}(D^{2}\cdot N)\), making it suitable for large datasets, heavy models, and reasonably high-dimensional data. The key point is that the use of DALE eliminates the need to compute the Jacobian matrix for each split, which is the most computationally expensive step: the Jacobian matrix is computed only once for the entire dataset, and is then used as a lookup table for computing the level of interactions for each split. This makes the proposed method applicable to heavy models.

```
Input  : Dataset \(X\), gradients \(J\), maximum depth \(L\), feature \(s\)
Output : Subregions \(\{\mathcal{R}_{st}\}_{t=1}^{T_{s}}\), where \(T_{s}\leq 2^{L}\)
1   Compute \(\mathcal{H}_{s}^{0}\);  // the level of interactions before any split
2   \(T_{s}=0\);  // initialize the number of subregions for feature \(s\)
3   for \(l=1\) to \(L\) do
4       if \(\mathcal{H}_{s}^{l-1}=0\) then
5           break;
6       end if
        /* Find the best split feature \(c_{s}^{l}\) at point \(p_{s}^{l}\), leading to loss \(\mathcal{H}_{s}^{l}\), using the regions of the previous level */
7       Find \(\mathcal{H}_{s}^{l},c,p\) of the optimal split based on \(\mathcal{R}_{s}^{l-1}\);
8       if \(1-\frac{\mathcal{H}_{s}^{l}}{\mathcal{H}_{s}^{l-1}}<\epsilon\) then
9           break;
10      end if
11      \(T_{s}=2^{l}\);  // update the number of subregions for feature \(s\)
12  end for
    return \(\{\mathcal{R}_{st}\}_{t=1}^{T_{s}}\)
```
**Algorithm 2** DetectSubregions

### Third step: Fit a GAM in each subregion

Once the subregions are detected, any Generalized Additive Model (GAM) family can be fitted to the augmented input space \(\mathcal{X}^{\texttt{RAM}}\). Recently, several methods have been proposed to extend GAMs and enhance their expressiveness. These methods can be categorized into two main research directions. The first direction focuses on representing the main components of a GAM \(\left\{f_{i}(x_{i})\right\}\) using novel models. For example, [1] introduced an approach that employs an end-to-end neural network to learn the main components. The second direction aims to extend GAMs to model feature interactions. Examples of such extensions include Explainable Boosting Machines (EBMs) [11] or Node-GAMs [10]. These models are generalized additive models that incorporate pairwise interaction terms. It is worth noting that the RAM framework can be used on top of both these research directions to further enhance the expressiveness of the models while maintaining their interpretability. In our experiments, we use Explainable Boosting Machines (EBMs).
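As one possible realization of this third step, the sketch below builds an extended feature space from a set of subregion indicators and fits an EBM on it, in the spirit of the illustrative example above. The subregion definitions, the column naming, and the simplification of zeroing a feature outside its subregion are our assumptions; `ExplainableBoostingRegressor` from the `interpret` package is used here simply as one available GAM backend.

```python
import numpy as np
import pandas as pd
from interpret.glassbox import ExplainableBoostingRegressor

def extend_features(X, subregions):
    """Build an extended feature space with one column per (feature, subregion) pair.
    `subregions` maps a feature name to a list of (label, in_region) pairs, where
    in_region returns a boolean mask. Outside its subregion a column is set to 0,
    a simplifying assumption of this sketch (the paper fits each term only on
    in-region data)."""
    cols = {}
    for feat, regions in subregions.items():
        for label, in_region in regions:
            mask = in_region(X)
            cols[f"{feat}|{label}"] = np.where(mask, X[feat].to_numpy(), 0.0)
    return pd.DataFrame(cols, index=X.index)

# Illustrative data: the effect of x2 flips depending on the binary feature x3.
rng = np.random.default_rng(1)
X = pd.DataFrame({"x1": rng.uniform(-1, 1, 2000),
                  "x2": rng.uniform(-1, 1, 2000),
                  "x3": rng.integers(0, 2, 2000)})
y = X["x1"] ** 2 + np.where(X["x3"] == 0, X["x2"], -X["x2"]) + rng.normal(0, 0.1, 2000)

subregions = {"x1": [("all", lambda d: np.ones(len(d), dtype=bool))],
              "x2": [("x3=0", lambda d: d["x3"].to_numpy() == 0),
                     ("x3!=0", lambda d: d["x3"].to_numpy() != 0)]}

X_ram = extend_features(X, subregions)
ebm = ExplainableBoostingRegressor()          # one possible GAM backend
ebm.fit(X_ram, y)
print(X_ram.columns.tolist())
```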
## 5 Experiments

We evaluate the proposed approach on two typical tabular datasets: the Bike-Sharing Dataset [13] and the California Housing Dataset [14].

**Bike-Sharing Dataset.** The Bike-Sharing dataset contains the hourly bike rentals in Washington, D.C. over the years 2011 and 2012. The dataset contains a total of 14 features, out of which 11 are selected as relevant for the purpose of prediction. The majority of these features involve measurements related to environmental conditions, such as \(X_{\texttt{month}}\), \(X_{\texttt{hour}}\), \(X_{\texttt{temperature}}\), \(X_{\texttt{humidity}}\) and \(X_{\texttt{windspeed}}\). Additionally, certain features provide information about the type of day, for example, whether it is a working day (\(X_{\texttt{workingday}}\)) or not. The target value \(Y_{\texttt{count}}\) is the number of bike rentals per hour, which has mean value \(\mu_{\texttt{count}}=189\) and standard deviation \(\sigma_{\texttt{count}}=181\). As a black-box model, we train a fully-connected neural network with 6 hidden layers for 60 epochs, using the Adam optimizer with a learning rate of 0.001. The model attains a root mean squared error of \(0.39\cdot 181\approx 70\) counts on the test set. Subsequently, we extract the subregions, searching for splits up to a maximum splitting depth of \(L=3\). Following the postprocessing step, we find that the only split that substantially reduces the level of interactions within the subregions is based on the feature \(X_{\texttt{hour}}\). This feature is divided into two subgroups: \(X_{\texttt{hour}}|\mathbb{1}_{X_{\texttt{workingday}}\neq 1}\) and \(X_{\texttt{hour}}|\mathbb{1}_{X_{\texttt{workingday}}=1}\). Figure 2 clearly illustrates that the impact of the hour of the day on bike rentals varies significantly depending on whether it is a working day or a non-working day. Specifically, during working days, there is higher demand for bike rentals in the morning and afternoon hours, which aligns with the typical commuting times (Figure 2b). On the other hand, during non-working days, bike rentals peak in the afternoon as individuals engage in leisure activities (Figure 2c). The proposed RAM method effectively captures and detects this interaction by establishing two distinct subregions, corresponding to working days and non-working days, respectively. Subsequently, the EBM that is fitted to each subregion successfully learns these patterns, achieving a root mean squared error of approximately \(0.56\cdot 181\approx 101\) counts on the test set. It is noteworthy that RAM not only preserves the interpretability of the model, but it also enhances the interpretation of the underlying modeling process. By identifying and highlighting the interaction between the hour of the day and the day type, RAM provides valuable insights into the relationship between these variables and their influence on bike rentals. In contrast, the GAM model (Figure 2a) is not able to capture this interaction and achieves a root mean squared error of \(0.73\cdot 181\approx 132\) counts on the test set. Finally, in Table 1, we also observe that the RA\({}^{2}\)M, i.e., RAM with second-order interactions, outperforms the equivalent GA\({}^{2}\)M model in terms of predictive performance. Specifically, the RA\({}^{2}\)M model achieves a root mean squared error of \(0.41\cdot 181\approx 74\) counts, while the GA\({}^{2}\)M model achieves \(0.44\cdot 181\approx 80\) counts on the test set.
It is worth noting that the RA\({}^{2}\)M model's accuracy is comparable to the black-box model's accuracy.

**California Housing Dataset.** The California Housing dataset consists of approximately \(20,000\) housing blocks situated in California. Each housing block is described by eight numerical features, namely \(X_{\texttt{lat}}\), \(X_{\texttt{long}}\), \(X_{\texttt{median\_age}}\), \(X_{\texttt{total\_rooms}}\), \(X_{\texttt{total\_bedrooms}}\), \(X_{\texttt{population}}\), \(X_{\texttt{households}}\), and \(X_{\texttt{median\_income}}\). The target variable, \(Y_{\texttt{value}}\), is the median house value in dollars for each block. The target value ranges in the interval \([15,500]\cdot 10^{3}\), with a mean value of \(\mu_{Y}\approx 201\cdot 10^{3}\) and a standard deviation of \(\sigma_{Y}\approx 110\cdot 10^{3}\).

\begin{table} \begin{tabular}{l|c|c c c c} & **Black-box** & \multicolumn{4}{c}{**x-by-design**} \\ \hline \hline & all orders & \multicolumn{2}{c}{\(1^{\texttt{st}}\) order} & \multicolumn{2}{c}{\(2^{\texttt{nd}}\) order} \\ \hline \hline & **DNN** & **GAM** & **RAM** & **GA\({}^{2}\)M** & **RA\({}^{2}\)M** \\ \hline Bike Sharing (MAE) & 0.254 & 0.549 & 0.430 & 0.298 & 0.278 \\ Bike Sharing (RMSE) & 0.389 & 0.734 & 0.563 & 0.438 & 0.412 \\ \hline California Housing (MAE) & 0.373 & 0.600 & 0.553 & 0.554 & 0.533 \\ California Housing (RMSE) & 0.533 & 0.819 & 0.754 & 0.774 & 0.739 \\ \end{tabular} \end{table} Table 1: The table compares the Mean Absolute Error (MAE) and the Root Mean Square Error (RMSE) of DNN, GAM, RAM, GA\({}^{2}\)M, and RA\({}^{2}\)M (representing 2nd order interactions), on two datasets: Bike-Sharing and California Housing. Lower values indicate better performance. RAM consistently outperforms GAM and approaches DNN performance.

As a black-box model, we train a fully-connected neural network with 6 hidden layers for 45 epochs, using the Adam optimizer with a learning rate of 0.001. The model achieves a root mean square error (RMSE) of about 58K dollars on the test set. Subsequently, we perform subregion extraction by searching for splits up to a maximum depth of \(L=3\). After the postprocessing step, we discover that several splits significantly reduce the level of interactions, resulting in an expanded input space consisting of 16 features, as we show in Table 2. Out of them, we randomly select and illustrate in Figure 3 the effect of the feature \(X_{\texttt{long}}\). As we observe, for the house blocks located in the southern part of California (\(X_{\texttt{lat}}\leq 34.9\)), the house value decreases in an almost linear fashion as we move eastward (\(X_{\texttt{long}}\) increases). In contrast, for the house blocks located in the northern part of California (\(X_{\texttt{lat}}>34.9\)), the house value decreases rapidly (non-linearly) as we move eastward (\(X_{\texttt{long}}\) increases). We also observe that although the EBM fitted to each subregion captures the general trend, it does not align perfectly with the regional effect. As in the Bike-Sharing example, the RMSE of the RAM model, i.e., \(0.75\cdot 110\approx 82.5\)K dollars on the test set, is lower than that of the GAM model, i.e., \(0.82\cdot 110\approx 90\)K dollars. These results indicate that the RAM model provides superior predictions compared to the GAM model.
The same conclusion holds when comparing the RA\({}^{2}\)M and the GA\({}^{2}\)M models, where the former achieves an RMSE of \(0.74\cdot 110\approx 81\)K dollars, while the latter achieves \(0.77\cdot 110\approx 85\)K dollars.

Figure 2: Comparison of different models' predictions for bike rentals based on the hour of the day. Subfigure (a) depicts the generalized additive model (GAM), while subfigures (b) and (c) illustrate the RAM model's predictions for different day types: non-working days \(f(X_{\texttt{hour}})\mathbb{1}_{X_{\texttt{workingday}}\neq 1}\) and working days \(f(X_{\texttt{hour}})\mathbb{1}_{X_{\texttt{workingday}}=1}\), respectively. The RAM model successfully captures the interaction between the hour of the day and the day type, leading to improved predictions and enhanced interpretability.

\begin{table} \begin{tabular}{c|c} Feature & Subregions \\ \hline \multirow{2}{*}{\(X_{\text{long}}\)} & \(X_{\text{long}}\mathbbm{1}_{X_{\text{lat}}\leq 34.9}\) \\ & \(X_{\text{long}}\mathbbm{1}_{X_{\text{lat}}>34.9}\) \\ \hline \multirow{2}{*}{\(X_{\text{lat}}\)} & \(X_{\text{lat}}\mathbbm{1}_{X_{\text{long}}<-120.31}\) \\ & \(X_{\text{lat}}\mathbbm{1}_{X_{\text{long}}>-120.31}\) \\ \hline \multirow{2}{*}{\(X_{\text{total\_rooms}}\)} & \(X_{\text{total\_rooms}}\mathbbm{1}_{X_{\text{total\_bedrooms}}\leq 449.37}\) \\ & \(X_{\text{total\_rooms}}\mathbbm{1}_{X_{\text{total\_bedrooms}}>449.37}\) \\ \hline \multirow{4}{*}{\(X_{\text{total\_bedrooms}}\)} & \(X_{\text{total\_bedrooms}}\mathbbm{1}_{X_{\text{households}}\leq 411}\mathbbm{1}_{X_{\text{total\_bedrooms}}\leq 647}\) \\ & \(X_{\text{total\_bedrooms}}\mathbbm{1}_{X_{\text{households}}\leq 411}\mathbbm{1}_{X_{\text{total\_bedrooms}}>647}\) \\ & \(X_{\text{total\_bedrooms}}\mathbbm{1}_{X_{\text{households}}>411}\mathbbm{1}_{X_{\text{total\_bedrooms}}\leq 647}\) \\ & \(X_{\text{total\_bedrooms}}\mathbbm{1}_{X_{\text{households}}>411}\mathbbm{1}_{X_{\text{total\_bedrooms}}>647}\) \\ \hline \multirow{2}{*}{\(X_{\text{population}}\)} & \(X_{\text{population}}\mathbbm{1}_{X_{\text{households}}\leq 411.5}\) \\ & \(X_{\text{population}}\mathbbm{1}_{X_{\text{households}}>411.5}\) \\ \hline \multirow{2}{*}{\(X_{\text{households}}\)} & \(X_{\text{households}}\mathbbm{1}_{X_{\text{total\_bedrooms}}\leq 630.57}\) \\ & \(X_{\text{households}}\mathbbm{1}_{X_{\text{total\_bedrooms}}>630.57}\) \\ \end{tabular} \end{table} Table 2: California Housing: Subregions Detected by RAM

Figure 3: Comparison of different predictions for housing prices in California based on the longitude. Subfigure (a) showcases the generalized additive model (GAM), while subfigures (b) and (c) demonstrate the RAM components for different latitude ranges: \(f(X_{\text{long}})\mathbbm{1}_{X_{\text{lat}}\leq 34.89}\) and \(f(X_{\text{long}})\mathbbm{1}_{X_{\text{lat}}>34.89}\), respectively. We observe that although the EBM model is able to capture the overall trend in the data, it also exhibits a large amount of variance.

## 6 Conclusion and Future Work

In this paper we have introduced the Regionally Additive Models (RAM) framework, a novel approach for learning accurate x-by-design models from data. RAMs operate by decomposing the data into subregions, where the relationship between the target variable and the features exhibits an approximately additive nature. Subsequently, Generalized Additive Models (GAMs) are fitted to each subregion and combined to create the final model. Our experiments on two standard
regression datasets have shown promising results, indicating that RAMs can provide more accurate predictions compared to GAMs while maintaining the same level of interpretability. Nevertheless, there are still several unresolved questions that require attention and further experimentation. Firstly, it is essential to systematically evaluate the performance of RAMs on a larger set of datasets to ensure that the observed improvements are not specific to particular datasets. Secondly, we need to explore different approaches for each step of the RAM framework. For the initial step, we should experiment with various black-box models. Regarding the subregion detection step, we can explore alternative clustering algorithms. Finally, in the last step, we should investigate different types of GAM models to fit within each subregion. Another important area of investigation involves exploring the impact of second-order effects within the RAM framework. While our experimentation demonstrated that even with the current subregion detection, RA\({}^{2}\)Ms outperform GA\({}^{2}\)Ms, it may be the case that for second-order models the optimal subregions are not necessarily those that maximize the additive effect of individual features, but rather those that maximize the additive effect of feature pairs.
2309.05769
Tortoise: An Authenticated Encryption Scheme
Given the open nature of the Internet, there is a need for authentication schemes to address inherent trust issues. We present Tortoise, an experimental nonce-based authenticated encryption scheme modeled on the Synthetic Counter-in-Tweak framework. This paper demonstrates a generalizable plug-and-play framework for converting any block cipher into Authenticated Encryption with Associated Data. As part of this work, we utilized an XOR procedure for constructing a generic tweakable cipher. Finally, we support two modes: nonce-respecting and nonce-misuse-resistant. Source code available at https://github.com/kenluck2001/cipherResearch/tree/main/src/tortoise.
Kenneth Odoh
2023-09-11T18:55:07Z
http://arxiv.org/abs/2309.05769v2
# Tortoise: An Authenticated Encryption Scheme

###### Abstract.

We present Tortoise, an experimental nonce-based authenticated encryption scheme modeled on the Synthetic Counter-in-Tweak framework to convert any block cipher into Authenticated Encryption with Associated Data. Our work supports two modes: nonce-respecting and nonce-misuse-resistant. **Source code** available at [https://github.com/kenluck2001/cipherResearch/tree/main/src/tortoise](https://github.com/kenluck2001/cipherResearch/tree/main/src/tortoise)

Cryptography, Privacy, Security, Authentication
**Definition 1** [(12)]. Let \(C,tag=E_{A}(P,K,A,N)\) be an authenticated encryption function, and \(P=D_{A}(C,K,A,tag,N)\) be an authenticated decryption function, where \(P\): plaintext, \(C\): ciphertext, \(K\): key, \(A\): associated data, and \(N\): nonce, respectively. Decryption succeeds only when the tags are valid and match, implying no forgery.

### Specification

Our work shares similarities with Deoxys [(8)] in using the same SCT [(13)] formulation, but it also differs in important ways. Deoxys [(8)] follows the Tweakey framework for tweaking a block cipher, with a dependency on a key expansion scheme suited only to AES-like ciphers, which limits generalizability to other ciphers, e.g., quantum-safe ciphers. In contrast, we have opted for an XOR procedure with a 2-universal hashing function [(12)]. Reusing a nonce is a common issue in real-world cryptosystems. It is unavoidable in some cases, as it may be impossible to keep track of used values when system restarts reset the state. Once a nonce is repeated in nonce-respecting mode, a security vulnerability results.

### Tweakable Cipher

A tweak is desirable when changing the key is more expensive than changing the tweak. The tweak in our formulation has similar characteristics to an initialization vector in CBC mode: a tweakable block cipher remains secure even if the adversary knows the chosen tweak.

* The underlying base cipher is as follows: \(c=E(p,k)\), \(p=D(c,k)\), where \(c\): ciphertext, \(p\): plaintext, \(k\): key.
* The supported operations of the tweakable cipher are as follows: \(C=E_{n}(K,T,P)\) is an encryption function that accepts a key, \(K\), a tweak, \(T\), and a plaintext, \(P\), and outputs a ciphertext, \(C\). \(P=D_{n}(K,T,C)\) is a decryption function that accepts a key, \(K\), a tweak, \(T\), and a ciphertext, \(C\), and outputs a plaintext, \(P\).

Our construction of a tweakable cipher for Tortoise follows Theorem 1, with the proof shown in [(12; 14)].

**Theorem 1** [(12)]. Let \(E_{n}(K,T,P)=E(P\oplus h(T),K)\oplus h(T)\), where \(h\) is chosen from an \(\epsilon\)-\(AXU_{2}\) family. Note that we used SHAKE128 [(2)] as our hashing function, \(h\).
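The sketch below illustrates the XOR construction of Theorem 1 in Python, assuming AES-128 (via the `cryptography` package) as the base block cipher \(E\), SHAKE128 as \(h\), and 16-byte blocks and tweaks. These concrete choices, the tweak formatting, and the helper names are our illustrative assumptions and do not describe Tortoise's exact wire format.

```python
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 16  # AES block size in bytes; we assume tweaks are also 16 bytes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def h(tweak: bytes) -> bytes:
    """Tweak hash: SHAKE128 truncated to one block (the hash function named in the text)."""
    return hashlib.shake_128(tweak).digest(BLOCK)

def aes_encrypt_block(key: bytes, block: bytes) -> bytes:
    """Raw single-block AES call (ECB mode is used only to expose the block cipher)."""
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

def tweakable_encrypt(key: bytes, tweak: bytes, plaintext: bytes) -> bytes:
    """E_n(K, T, P) = E(P xor h(T), K) xor h(T), the XOR construction of Theorem 1."""
    mask = h(tweak)
    return xor(aes_encrypt_block(key, xor(plaintext, mask)), mask)

key = bytes(16)                                   # demo key (all zeros); use a random key in practice
tweak = b"\x00" * 12 + (1).to_bytes(4, "big")     # an arbitrary 16-byte tweak for the demo
print(tweakable_encrypt(key, tweak, b"sixteen byte msg").hex())
```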
### Implementation

In our formulation, the tweak block size must match the message block size, which is a deviation from Deoxys [(8)]. We retain the 4-bit prefix used in Deoxys [(8)] and the same modes (nonce-respecting and nonce-misuse-resistant). The presented algorithms are influenced by the routine in Deoxys [(8)], which follows Definition 1. The nonce-respecting procedures are available in Algorithm 1 and Algorithm 2.

```
Input  : \(P\): plainText, \(K\): key, \(A\): associated data, and \(N\): nonce, where
         \(l_{a}\): length of associated data \(A\) in blocks of size \(n\), i.e., \(|A_{i}|=n\),
         \(l_{p}\): length of plainText \(P\) in blocks of size \(n\), i.e., \(|P_{i}|=n\)
Output : \(C\): cipherText, i.e., \(|C_{i}|=n\), and \(tag\)

// Processing associated data
\(auth=0\)
for all \(i\in l_{a}\) do
    \(auth=auth\oplus E_{n}(K,0010|i,A_{i})\)
endfor
// Processing plaintext data
\(checksum=0^{n}\)
for all \(j\in l_{p}\) do
    \(checksum=checksum\oplus P_{j}\)
    \(C_{j}=E_{n}(K,0000|N|j,P_{j})\)
endfor
\(fTag=E_{n}(K,0001|N|l_{p},checksum)\)
// Tag generation
\(tag=fTag\oplus auth\)
```
**Algorithm 1** Encryption Algorithm, \(E_{A}(P,K,A,N)\)

The nonce-misuse-resistant procedures are shown in Algorithm 3 and Algorithm 4, respectively.

```
Input  : \(P\): plainText, \(K\): key, \(A\): associated data, and \(N\): nonce, where
         \(l_{a}\): length of associated data \(A\) in blocks of size \(n\), i.e., \(|A_{i}|=n\),
         \(l_{p}\): length of plainText \(P\) in blocks of size \(n\), i.e., \(|P_{i}|=n\)
Output : \(C\): cipherText, i.e., \(|C_{i}|=n\), and \(tag\)

// Processing associated data
\(auth=0\)
for all \(i\in l_{a}\) do
    \(auth=auth\oplus E_{n}(K,0010|i,A_{i})\)
endfor
// Processing plaintext data
\(tag=auth\)
for all \(j\in l_{p}\) do
    \(tag=tag\oplus E_{n}(K,0000|N|j,P_{j})\)
endfor
// Tag generation
\(tag=E_{n}(K,0001|0^{4}|N,tag)\)
// Message encryption
for all \(j\in l_{p}\) do
    \(C_{j}=P_{j}\oplus E_{n}(K,tag\oplus j,0^{8}|N)\)
endfor
```
**Algorithm 3** Encryption Algorithm, \(E_{A}(P,K,A,N)\), as shown in Definition 1

```
Input  : \(C\): cipherText, \(K\): key, \(A\): associated data, \(tag\), and \(N\): nonce, where
         \(l_{a}\): length of associated data \(A\) in blocks of size \(n\), i.e., \(|A_{i}|=n\),
         \(l_{c}\): length of cipherText \(C\) in blocks of size \(n\), i.e., \(|C_{i}|=n\)
Output : \(P\): plainText, i.e., \(|P_{i}|=n\), and \(\hat{tag}\)

// Message decryption
for all \(j\in l_{c}\) do
    \(P_{j}=C_{j}\oplus E_{n}(K,tag\oplus j,0^{8}|N)\)
endfor
// Processing associated data
\(auth=0\)
for all \(i\in l_{a}\) do
    \(auth=auth\oplus E_{n}(K,0010|i,A_{i})\)
endfor
// Processing plaintext data
\(\hat{tag}=auth\)
for all \(j\in l_{c}\) do
    \(\hat{tag}=\hat{tag}\oplus E_{n}(K,0000|N|j,P_{j})\)
endfor
// Tag generation
\(\hat{tag}=E_{n}(K,0001|0^{4}|N,\hat{tag})\)
// Tag verification
if \(\hat{tag}==tag\) then
    return plaintext, \(P\)
else
    // Do nothing
endif
```
**Algorithm 4** Decryption Algorithm, \(D_{A}(C,K,A,tag,N)\), as shown in Definition 1
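To make the data flow of the nonce-respecting mode (Algorithm 1) easier to follow, the sketch below traces it in Python. The tweakable cipher here is a deliberately toy stand-in (a SHAKE128 keystream keyed by \(K\) and the tweak) rather than the AES-based construction, and the 16-byte tweak layout (one prefix nibble, a 7-byte nonce, an 8-byte counter) is purely our assumption; the sketch is for illustration only and must not be used as real encryption.

```python
import hashlib

BLOCK = 16

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def En(key: bytes, tweak: bytes, block: bytes) -> bytes:
    """Toy stand-in for the tweakable cipher of Theorem 1: XOR with a SHAKE128
    keystream derived from key||tweak. It keeps the data flow visible but is
    NOT the AES-based construction described above."""
    return xor(block, hashlib.shake_128(key + tweak).digest(BLOCK))

def tweak_bytes(prefix: int, nonce: bytes, counter: int) -> bytes:
    """Assumed tweak layout: 4-bit prefix in the top nibble, 7-byte nonce
    (zero-padded; empty for associated data), 8-byte big-endian counter."""
    return bytes([prefix << 4]) + nonce.ljust(7, b"\x00")[:7] + counter.to_bytes(8, "big")

def encrypt(P_blocks, key, A_blocks, nonce):
    """Nonce-respecting encryption following the flow of Algorithm 1."""
    auth = bytes(BLOCK)
    for i, A in enumerate(A_blocks):                      # processing associated data
        auth = xor(auth, En(key, tweak_bytes(0b0010, b"", i), A))
    checksum, C = bytes(BLOCK), []
    for j, Pj in enumerate(P_blocks):                     # processing plaintext data
        checksum = xor(checksum, Pj)
        C.append(En(key, tweak_bytes(0b0000, nonce, j), Pj))
    fTag = En(key, tweak_bytes(0b0001, nonce, len(P_blocks)), checksum)
    return C, xor(fTag, auth)                             # tag = fTag xor auth

C, tag = encrypt([b"A" * 16, b"B" * 16], b"k" * 16,
                 [b"ad-block-0".ljust(16, b"\x00")], b"nonce07")
print(tag.hex())
```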
## 5. Security Analysis

AES has been subjected to extensive real-world security analysis. Hence, we can claim a security level of \(2^{128}\) for AES-128. Our underlying tweakable AES is reasonably resilient to differential and linear cryptanalysis. However, it may be vulnerable to meet-in-the-middle attacks. There is also more work to be done to evaluate resilience to related-key attacks.

## 6. Limitations and Future Work

We have identified some limitations and areas for improvement. These ideas are discussed as follows:

* We hope to improve this work in the future by providing a hardware implementation.
* Based on our formulation, generic AEADs such as quantum-safe authenticated ciphers can be developed.
* Future work includes relaxing the notion of MRAE in the form of Online AE (OAE), which requires only a single pass for nonce-misuse-resistant ciphers. This planned update can use ideas from Romulus-M, allowing us to create an improved nonce-misuse-resistant cipher.

## 7. Conclusions

We have released Tortoise as a general-purpose scheme for converting any block cipher into AEAD. As a result of this work, we have created a custom XOR procedure for the construction of a generic tweakable cipher.
2301.10220
Exact solutions to Euler's equations for rigid body motion with application to detumbling satellites
Exact solutions are found for Euler's equations of rigid body motion for general asymmetrical bodies under the influence of torque by using Jacobi elliptic functions. Differential equations are determined for the amplitudes and the parameters of the elliptic functions. The solution is then applied to the detumbling of a satellite with arbitrary initial rotation rates where numerical solutions are seen to be in agreement with the analytical solution. The body fixed frame solution is then transformed to the inertial frame by use of a quaternion rotation matrix to depict the motion in figures and in animations within a Mathematica notebook which is openly published on the Wolfram community.
Christian Peterson
2022-11-21T17:48:31Z
http://arxiv.org/abs/2301.10220v1
Exact solutions to Euler's equations for rigid body motion with application to detumbling satellites ###### Abstract Exact solutions are found for Euler's equations of rigid body motion for general asymmetrical bodies under the influence of torque by using Jacobi elliptic functions. Differential equations are determined for the amplitudes and the parameters of the elliptic functions. The solution is then applied to the detumbling of a satellite with arbitrary initial rotation rates where numerical solutions are seen to be in agreement with the analytical solution. The body fixed frame solution is then transformed to the inertial frame by use of a quaternion rotation matrix to depict the motion in figures and in animations within a Mathematica notebook which is openly published on the Wolfram community. ## I Introduction The rotational motion about the center of mass in the coordinate frame that is fixed to the body is described by Euler's equations of motion. The system of three ordinary differential equations is coupled and non-linear, determining the dynamics of the angular velocities in the direction of the principal axes of the body. Rigid body motion has attracted the attention of many investigations due to its practical applicability to the attitude control of space vehicles and aircraft. Several studies have investigated analytical solutions to special cases of Euler's equations under torque, such as by assuming symmetric or near-symmetric bodies or by assuming one of the angular velocities is near zero[1; 2; 3; 4]. A first-order approximation of a general rigid body subjected to torques is provided by Longuski and Tsiotras [5]. Panayotounakos et al.[6] present a complete analytical solution for an asymmetric body by reducing Euler's equations to Abel differential equations of the second kind of the normal form. In this paper, an analytical solution to Euler's equations is presented by assuming a Jacobi elliptic function form for the angular velocities. The solution to Euler's equations for torque free motion is known to have Jacobi elliptic function solutions where the eccentricity and the amplitudes of oscillation are constant[7; 8; 9]. In the current investigation the parameters of the
2309.03318
Fitness Approximation through Machine Learning
We present a novel approach to performing fitness approximation in genetic algorithms (GAs) using machine-learning (ML) models, through dynamic adaptation to the evolutionary state. Maintaining a dataset of sampled individuals along with their actual fitness scores, we continually update a fitness-approximation ML model throughout an evolutionary run. We compare different methods for: 1) switching between actual and approximate fitness, 2) sampling the population, and 3) weighting the samples. Experimental findings demonstrate significant improvement in evolutionary runtimes, with fitness scores that are either identical or slightly lower than that of the fully run GA -- depending on the ratio of approximate-to-actual-fitness computation. Although we focus on evolutionary agents in Gymnasium (game) simulators -- where fitness computation is costly -- our approach is generic and can be easily applied to many different domains.
Itai Tzruia, Tomer Halperin, Moshe Sipper, Achiya Elyasaf
2023-09-06T18:58:21Z
http://arxiv.org/abs/2309.03318v2
# Fitness Approximation through Machine Learning

###### Abstract

We present a novel approach to performing fitness approximation in genetic algorithms (GAs) using machine-learning (ML) models, focusing on evolutionary agents in Gymnasium (game) simulators--where fitness computation is costly. Maintaining a dataset of sampled individuals along with their actual fitness scores, we continually update a fitness-approximation ML model throughout an evolutionary run. We compare different methods for: 1) switching between actual and approximate fitness, 2) sampling the population, and 3) weighting the samples. Experimental findings demonstrate significant improvement in evolutionary runtimes, with fitness scores that are either identical or slightly lower than those of the fully run GA--depending on the ratio of approximate-to-actual-fitness computation. Our approach is generic and can be easily applied to many different domains.

genetic algorithm, machine learning, fitness approximation, regression, agent simulation.

## I Introduction

A genetic algorithm (GA) is a population-based meta-heuristic optimization algorithm that operates on a population of candidate solutions, referred to as individuals, iteratively improving the quality of solutions over generations. GAs employ selection, crossover, and mutation operators to generate new individuals based on their fitness values, computed using a fitness function [13]. GAs have been widely used for solving optimization problems in various domains, such as telecommunication systems [15], energy systems [21], and medicine [11]. Further, GAs can be used to evolve agents in game simulators. For example, Garcia-Sanchez et al. [9] employed a GA to enhance agent strategies in Hearthstone, a popular collectible card game, and Elyasaf et al. [8] evolved top-notch solvers for the game of FreeCell. Algorithm 1 outlines the pseudocode of a canonical GA, highlighting the main fitness-selection-crossover-mutation loop. An accurate evaluation of a fitness function is often computationally expensive, particularly in complex and high-dimensional domains, such as games. In fact, a GA spends most of its time in line 3 of Algorithm 1: computing fitness. To mitigate this cost, fitness approximation techniques have been proposed to estimate the fitness values of individuals based on a set of features or characteristics. This paper focuses on performing fitness approximation in genetic algorithms using machine learning (ML) models.

```
Input: problem to solve
1: generate initial population of candidate solutions to problem
2: while termination condition not satisfied do
3:     compute fitness value of each individual in population
4:     perform parent selection
5:     perform crossover between parents
6:     perform mutation on resultant offspring
```
**Algorithm 1** Genetic Algorithm.

Specifically, we propose to maintain a dataset of individuals and their actual fitness values, and to learn a fitness-approximation model based on this dataset. We analyze several options for: 1) sampling the search space for creating the dataset, 2) switch conditions between using the actual fitness function and the approximate one, and 3) weighting the samples in the dataset. We evaluate our approach on two games implemented by Gymnasium, a framework designed for the development and comparison of reinforcement learning (RL) algorithms. We only use Gymnasium's game implementations, called environments, for evaluating fitness--we do not use the framework's learning algorithms.
The next section surveys relevant literature on fitness approximation. Section III provides brief backgrounds on linear ML models and Gymnasium. Section IV introduces the problems being solved herein: Blackjack and Frozen Lake. Section V describes the proposed framework in detail, followed by experimental results in Section VI. Section VII presents two extensions to our method, involving novelty search and hidden fitness scores. We end with concluding remarks and future work in Section VIII.

## II Fitness Approximation: Previous Work

Fitness approximation is a technique used to estimate the fitness values of individuals without performing the computationally expensive fitness evaluation for each individual. By using fitness approximation, the computational cost of evaluating fitness throughout an evolutionary run can be significantly reduced, enabling the efficient exploration of large search spaces. There has been much interest in fitness approximation over the years, and providing a full review is beyond the scope herein. We focus on a number of works we found to be of particular relevance, some of which we compare to our proposed approach in Section VI.

**Fitness inheritance.** Smith et al. [28] suggested the use of fitness inheritance, where only part of the population has its fitness evaluated, and the rest inherit the fitness values from their parents. Their work proposed two fitness-inheritance methods: 1) averaged inheritance, wherein the fitness score of an offspring is the average of its parents; and 2) proportional inheritance, wherein the fitness score of an offspring is a weighted average of its parents, based on the similarity of the offspring to each of its parents. Chen et al. [4] examined the use of fitness inheritance in Multi-Objective Optimization, with and without fitness sharing. Fitness sharing is a method that penalizes the fitness score of similar individuals in order to increase their diversity. Results showed that fitness inheritance could lead to a speed-up of 3.4 without fitness sharing and a speed-up of 1.25 with fitness sharing and parallelism.

**Fitness approximation using ML.** Jin [16] discussed various fitness-approximation methods involving ML models with offline and online learning, both of which are included in our approach. Schmidt and Lipson [26] simultaneously coevolved solutions and predictors for a symbolic-regression problem, a popular task in the field of Genetic Programming (GP), wherein solutions are represented in various forms, including trees [17], two-dimensional grids of computational nodes [22], grammars [23], and more. Their results showed significant runtime improvement and reduced the size of the trees, leading to faster fitness computations. Dias et al. [6] used ML-based fitness approximation to solve a beam-angle optimization problem for cancer treatments, using neural networks as surrogate models. Their results were superior to an existing treatment type. They concluded that integrating surrogate models with genetic algorithms is an interesting research direction. Guo et al. [10] proposed a hybrid GA with an Extreme Learning Machine (ELM) fitness approximation to solve the two-stage capacitated facility location problem. ELM is a fast, non-gradient-based, feed-forward neural network that contains one hidden layer, with random constant hidden-layer weights and analytically computed output-layer weights.
The hybrid algorithm included offline learning for the initial population and online learning through sampling a portion of the population in each generation. This algorithm achieved adequate results in a reasonable runtime. Yu and Kim [32] examined the use of Support Vector Regression [7], Deep Neural Networks [25], and Linear Regression models trained offline on sampled individuals, to approximate fitness scores in GAs. Specifically, the use of Linear Regression achieved adequate results when solving the One-Max and Deceptive problems. Zhang et al. [33] used a deep neural network with online training to reduce the computational cost of the MAP-Elites (Multi-dimensional Archive of Phenotypic Elites) algorithm for constructing a diverse set of high-quality card decks in Hearthstone. Their work achieved state-of-the-art results. Livne et al. [20] compared two approaches for fitness approximation, because a full approximation in their case would require 50,000 training processes of a deep contextual model, each taking about 1 minute: 1) training a multi-layer perception sub-network instead, which takes approximately five seconds; 2) a pre-processing step involving the training of a robust single model. The latter improved training from 1 minute to 60ms. ## III Preliminaries ### _Linear ML_ Linear ML models are a class of algorithms that learn a linear relationship between the input features and the target variable(s). We focus on two specific linear models, namely Ridge (also called Tikhonov) regression [12] and Lasso regression (least absolute shrinkage and selection operator) [31]. These two models strike a balance between complexity and accuracy, enabling efficient estimation of fitness values for individuals in the GA population. Ridge and Lasso are linear regression algorithms with an added regularization term to prevent overfitting. Their loss functions are given by: \[L_{1}:||y-Xw||_{2}^{2}+\alpha*||w||_{1}\,,\] \[L_{2}:||y-Xw||_{2}^{2}+\alpha\|w\|_{2}^{2}\,,\] where \(L_{1}\) is for Lasso, \(L_{2}\) is for Ridge, \(X\) represents the feature matrix, \(y\) represents the target variable, \(w\) represents the coefficient vector, and \(\alpha\) represents the regularization parameter. To demonstrate the use of linear ML, we provide an example implementation using scikit-learn [24]. In Listing 1 we train a Ridge regressor on a regression dataset, and then evaluate its performance in predicting the target variable. A major advantage of linear models with respect to our framework is that they are super-fast, enabling us to treat model-training time as virtually zero (with respect to fitness-computation time in the simulator). Thus, we could retrain a model at whim. As recently noted by James et al. [14]: "Historically, most methods for estimating \(f\) have taken a linear form. In some situations, such an assumption is reasonable or even desirable." ### _Gymnasium_ Gymnasium (formerly OpenAI Gym) [3] is a framework designed for the development and comparison of reinforcement learning (RL) algorithms. It offers a variety of simulated environments that can be utilized to evaluate the performance of AI agents. Gymnasium offers a plethora of simulators, called environments, from different domains, including robotics, games, cars, and more. Each environment defines state representations, available actions, observations, and how to obtain rewards during gameplay. A Gymnasium simulator can be used for training an RL agent or as a standalone simulator. 
Herein, we take the latter approach, using these simulators to test our novel fitness-approximation method for an evolutionary agent system.

## IV Problems: Blackjack and Frozen Lake

This section provides details on the two problems from Gymnasium that we will tackle: Blackjack and Frozen Lake (Figure 1).

### _Blackjack_

Blackjack is a popular single-player card game played between a player and a dealer. The objective is to obtain a hand value closer to 21 than the dealer's hand value--without exceeding 21 (going bust). We follow the game rules defined by Sutton and Barto [30]. Each face card counts as 10, and an ace can be counted as either 1 or 11. The Blackjack environment of Gymnasium represents a state based on three factors: 1) the sum of the player's card values, 2) the value of the dealer's face-up card, and 3) whether the player holds a usable ace. An ace is usable if it can count as 11 points without going bust. Each state allows two possible actions: stand (refrain from drawing another card) or hit (draw a card). We represent an individual as a binary vector, where each cell corresponds to a game state from which an action can be taken; the cell value indicates the action taken when in that state. As explained by Sutton and Barto [30], there are 200 such states, therefore the size of the search space is \(2^{200}\). The actual fitness score of an individual is computed by running 100,000 games in the simulator and then calculating the difference between the number of wins and losses. We normalize fitness by dividing this difference by the total number of games. The ML models and the GA receive the normalized results (i.e., scores \(\in[-1,1]\)), but we will display the non-normalized fitness scores for easier readability. Given the inherent advantage of the dealer in the game, it is expected that the fitness scores will mostly be negative.

### _Frozen Lake_

In this game, a player starts at the top-left corner of a square board and must reach the bottom-right corner. Some board tiles are holes. Falling into a hole leads to a loss, and reaching the goal leads to a win. Each tile that is not a hole is referred to as a frozen tile. Due to the slippery characteristics of the frozen lake, the agent might move in a direction perpendicular to the intended one. For instance, suppose the agent attempts to move right; the agent then has an equal probability of \(\frac{1}{3}\) of moving right, up, or down. This adds a stochastic element to the environment and introduces a dynamic element to the agent's navigation. For consistency and comparison, all simulations will run on the 8x8 map presented in Figure 1. In this map, the Frozen Lake environment represents a state as a number between 0 and 63. There are four possible actions in each state: move left, move right, move up, or move down. Our GA thus represents a Frozen Lake agent as an integer vector with a cell for each frozen tile on the map, except for the end-goal state (since no action can be taken from that state). Similarly to Blackjack, each cell dictates the action being taken when in that state. Since there are 53 frozen tiles excluding the end-goal, the size of the search space is \(4^{53}\). The fitness function is defined as the percentage of wins out of 2000 simulated games. Again, we will list non-normalized fitness scores.
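For concreteness, the following sketch shows how an actual fitness score for a Blackjack individual can be obtained from the Gymnasium environment described above. The state-to-gene mapping, the always-hit rule below a sum of 12, and the reduced number of games (1,000 instead of 100,000) are illustrative assumptions rather than the paper's exact implementation; Frozen Lake fitness can be computed analogously by counting wins over simulated episodes.

```python
import gymnasium as gym
import numpy as np

def state_index(obs):
    """Map a Blackjack observation (player_sum, dealer_card, usable_ace) to one of the
    200 genes; this particular ordering is our assumption, valid for player_sum >= 12."""
    player_sum, dealer_card, usable_ace = obs
    return (player_sum - 12) * 20 + (dealer_card - 1) * 2 + int(usable_ace)

def blackjack_fitness(individual, n_games=1000):
    """Normalized (wins - losses) / games over simulated episodes."""
    env = gym.make("Blackjack-v1")
    wins = losses = 0
    for _ in range(n_games):
        obs, _ = env.reset()
        terminated = truncated = False
        while not (terminated or truncated):
            # Below a sum of 12 the player can never bust, so always hit (Sutton & Barto);
            # otherwise the individual's gene for this state decides: 0 = stand, 1 = hit.
            action = 1 if obs[0] < 12 else int(individual[state_index(obs)])
            obs, reward, terminated, truncated, _ = env.step(action)
        wins += reward > 0
        losses += reward < 0
    env.close()
    return (wins - losses) / n_games

rng = np.random.default_rng(0)
print(blackjack_fitness(rng.integers(0, 2, 200)))   # a random 200-bit individual
```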
## V Proposed Method

This section presents our proposed method for fitness approximation in GAs using ML models. We outline the steps involved in integrating Ridge and Lasso regressors into the GA framework, and end with a discussion of advantages and limitations of the new method.

### _Population Dataset_

Our approach combines both offline and online learning, as depicted in Figure 2. The algorithm begins in _evolution mode_, functioning as a regular GA, where a population evolves over successive generations. However, each time a fitness score is computed for an individual, we update a dataset whose features are the encoding vector of the individual and whose target value is the respective fitness score. An illustration of the dataset is presented in Table I. The initial population will always be evaluated using the simulator since the population dataset is empty at this stage of the run. After a predefined _switch condition_ is met the algorithm transitions from _evolution mode_ to _prediction mode_. In prediction mode, actual (in-simulator) fitness scores are computed only for a sampled subset of the population, while for the rest of the population the GA assigns approximate fitness values using a learned ML model that was trained on the existing population dataset. The algorithm can switch back and forth between evolution mode and prediction mode, enabling dynamic adaptation to the evolutionary state.

Fig. 1: Gymnasium environments we use for actual fitness-score evaluation.

Specifically, in evolution mode, the following happens:

1. Actual (in-simulator) fitness scores of the entire population are computed.
2. The ML dataset is updated with the population's individuals and their actual fitnesses.
3. A new ML model is fitted to the updated dataset.

In prediction mode, the following happens:

1. A subset of the population is sampled per generation.
2. Actual (in-simulator) fitness scores of the sampled individuals are computed.
3. The ML model predicts (approximate) fitness scores for the rest of the population.
4. The ML dataset is updated with the sampled individuals and their actual (in-simulator) fitness.
5. A new ML model is fitted to the updated dataset to be used by the next generation.

Witness the interplay between the dynamic _switch condition_ and the static (pre-determined) _sample rate_--a hyperparameter denoting the percentage of the population being sampled. In cases where lower runtimes are preferred, using a relatively lenient switch condition is better, resulting in a higher fitness-approximation rate coupled with reduced runtime--at some cost to fitness quality. On the contrary, in cases where accurate fitness scores are preferred, the use of a strict switch condition is advisable, to allow ML fitness approximation only when model confidence is high. Note that the number of actual, in-simulator fitness computations performed is ultimately determined dynamically by the coaction of _switch condition_ and _sample rate_. In stochastic domains such as ours, the same individual may receive different (actual) fitness scores for every evaluation, and thus appear in the population dataset multiple times--with different target values. This can be prevented by keeping a single copy of each individual in the dataset, or by computing an average fitness score of all evaluations of the same individual (or possibly some other aggregate measure).
However, since these solutions greatly interfere with the sample weighting mechanism (described in Section V-D), we decided to remove identical duplicate rows only (i.e., with both equal representations and fitness scores) while keeping individuals with equal representation and different fitness scores in the dataset.

### _Switch condition_

The switch condition plays a crucial role in determining when the GA transitions from evolution mode to prediction mode (and vice-versa). Our approach defines the switch condition based on a predefined criterion. Once this criterion is met, the algorithm switches its focus from evolving based entirely on full fitness computation to obtaining approximate fitness values through the model. The switch condition can be defined in various ways depending on the specific problem and requirements. It may involve measuring the accuracy of the model's predictions, considering a predefined threshold, or other criteria related to the state of the population and the model. In situations where the model's accuracy falls below the desired threshold, the algorithm can revert back to evolution mode until the condition for switching to prediction mode is met once again. Determining an appropriate switch condition is crucial for balancing the trade-off between the accuracy of fitness approximation and the computational efficiency of the algorithm. It requires tuning to find the optimal configuration for a given problem domain. Overall, the switch condition serves as a pivotal component in our approach, enabling a smooth transition from evolution mode to prediction mode based on a predefined criterion.

Fig. 2: Flowchart of proposed method. In evolution mode the algorithm functions as a regular GA. When the switch condition is met the algorithm shifts to prediction mode: actual (in-simulator) fitness values are calculated only for a sampled subset of the population, while the rest are assigned approximate fitnesses from the ML model. This latter is retrained before moving to the next generation.

We defined and tested four different switch conditions, each having a hyperparameter called _switch_threshold_:

1. **Dataset size.** The simplest solution entailed performing regular evolution until the dataset reaches a certain size threshold, and then transitioning to prediction mode indefinitely. Although simple, this switch condition is less likely to adapt to the evolutionary state due to its inability to switch back to evolution mode.
2. **Plateau.** Wait for the best fitness score to stabilize before transitioning to prediction mode. We consider the best fitness score as stable if it has not changed much (below a given threshold) over the last \(P\) generations, for a given \(P\). This method posed a problem, as the model tended to maintain the evolutionary state without significant improvement throughout the run.
3. **CV error.** Evaluate the model's error using cross-validation on the dataset. We switch to prediction mode when the error falls below a predetermined threshold, and vice versa. We will demonstrate the use of this switch condition in the Frozen Lake scenario.
4. **Cosine similarity.** Cosine similarity is a metric commonly used in Natural-Language Processing to compare different vectors representing distinct words. We use this metric to compare the vectors in the GA population with those in the ML dataset.
The underlying idea is that the model will yield accurate predictions if the current population closely resembles the previous populations encountered by the model, up to a predefined threshold. Our method utilizes this switch condition in the Blackjack scenario. ### _Sampling strategy_ As mentioned in Section V-A, during prediction mode, a subset of the population is sampled in each generation. There are several sampling strategies, and choosing the right strategy can greatly impact the quality of the population dataset. 1. **Random sampling.** The most straightforward sampling strategy is to randomly pick a subset of the population and compute their actual fitness scores while approximating the fitness scores for the rest of the population. This strategy is useful for domains where similar individual representations do not necessarily imply similar fitness scores. 2. **Similarity sampling.** Another approach is choosing the individuals that are the least similar to the individuals that already exist in the dataset. Using this method will improve the diversity of the dataset and hence improve the ability of the model to generalize better to a wider volume of the search space. This strategy is useful for domains where individuals with similar representations receive similar fitness scores, such as our domains. The similarity metric we chose is the cosine similarity, discussed above. Our generic method allows for the seamless integration of additional, more-sophisticated strategies, such as ones with a dynamic sample rate according to the evolutionary state, strategies that sample less frequently than every generation, etc. ### _Sample weights_ In this section,'sample' refers to a row in the dataset (as is customary in ML)--not to be confused with'sampling' in'sampling strategy', introduced in the previous section. During the training of the ML model on the dataset, each individual typically contributes equally. However, individuals tend to change and improve over the course of evolution. To account for this, we track the generation in which each individual is added to the dataset and assign weights accordingly. The weight assigned to an individual increases with the generation number. After experimenting with various weighting functions, we established a square root relationship between the generation number and its corresponding weight: \(weight=\sqrt{gen}\). We note that the algorithms we used do not require the weights to sum to one. ### _Advantages and limitations_ Our proposed method offers several advantages. It can potentially reduce the computational cost associated with evaluating fitness scores in a significant manner. Rather than compute each individual's fitness every generation, the population receives an approximation from the ML model at a negligible cost. The use of models like Ridge and Lasso helps avoid overfitting by incorporating regularization. This improves the generalization capability of the fitness-approximation model. Additionally, our approach allows for continuous learning, by updating the dataset and retraining the model during prediction mode. The continual retraining is possible because the ML algorithms are extremely rapid and the dataset is fairly small. There are some limitations to consider. Linear models assume a linear relationship between the input features and the target variable. Therefore, if the fitness landscape exhibits non-linear behavior, the model may not capture it accurately. 
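To make the sampling strategy and sample weights concrete, the fragment below shows a hedged illustration on top of the dataset sketch given earlier, using scikit-learn's cosine_similarity and Ridge; the +1 inside the weight (to guard against a generation counter that starts at 0) is our own small assumption, not something stated in the text.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics.pairwise import cosine_similarity

def similarity_sample(population_X, dataset_X, k):
    """Pick the k individuals least similar (by cosine similarity) to the rows
    already present in the population dataset."""
    sims = cosine_similarity(population_X, dataset_X).max(axis=1)  # closest dataset row
    return np.argsort(sims)[:k]                                    # least similar first

def fit_weighted_model(dataset_X, dataset_y, dataset_gen):
    """Weight each row by the square root of the generation in which it was added."""
    weights = np.sqrt(np.asarray(dataset_gen, dtype=float) + 1.0)
    model = Ridge(alpha=1.0)
    model.fit(dataset_X, dataset_y, sample_weight=weights)  # weights need not sum to 1
    return model
```

As noted above, a fit like this is purely linear and will not capture non-linear behavior in the fitness landscape.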
In such cases, alternative models capable of capturing non-linear relationships may be more appropriate; we plan to consider such models in the future. Further, the performance of the fitness-approximation model heavily relies on the quality and representativeness of the training dataset. If the dataset does not cover the entire search space adequately, the model's predictions may be less accurate. Careful consideration should be given to dataset construction and sampling strategies to mitigate this limitation. An additional limitation is the choice of the best individual to be returned at the end of the run. Since a portion of the fitness values is approximate, the algorithm might return an individual with a good predicted fitness score, but with a bad actual fitness score. To address this issue, we first tried to keep the best individual in each generation (whether in evolution or prediction mode), compute the actual fitness values of these individuals at the end of the run, and return the best of them. Due to the approximate fitness scores being less accurate, this approach was found lacking. Instead, we found a simpler solution: Return the individual with the best fitness from the population dataset (which always holds actual fitness values). This solution did not require an actual fitness computation at the end of the run, as with the previous approach, and also improved the results by returning better fitness scores on average. ## VI Experiments and Results To assess the efficacy of the proposed approach, we carried out a comprehensive set of experiments aimed at solving the two problems outlined in Section IV. Our objective was to compare the performance of our method against a "full" GA (computing all fitnesses), considering solution quality and computational efficiency as evaluation criteria. Experiments were conducted using the EC-KitY software [27] on a cluster of 96 nodes and 5408 CPUs (the most powerful processors are AMD EPYC 7702P 64-core, although most have lesser specs). 10 CPUs and 3 GB RAM were allocated for each run. Since the nodes in the cluster vary in their computational power, and we had no control over the specific node allocated per run, we measured computational cost as number of actual (in-simulator) fitness computations performed, excluding the initial-population fitness computation. The average duration of a single fitness computation was 21 seconds for Blackjack and 6 seconds for Frozen Lake. The source code for our method and experiments can be found in our GitHub repository. Both fitness-approximation runs and full-GA runs included the same genetic operators with the same probabilities: tournament selection [1], two-point crossover [29], bit-flip mutation for Blackjack and uniform mutation for Frozen Lake [19]. The specific hyperparameters utilized in the experiments and their chosen values are detailed in Table II. We performed 20 replicates per sample rate, and assessed statistical significance by running a 10,000-round permutation test, comparing the mean scores between our proposed method and the full GA (with full fitness computation). The results are shown in Table III. Examining the results reveals an observable rise in fitness scores, along with an increase in the number of fitness computations, as sample rate increases. This is in line with the inherent trade-off within our method, wherein the quality of the results and the runtime of the algorithm are interconnected. 
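The 10,000-round permutation test mentioned above is easy to reproduce; the sketch below shows the standard two-sided difference-of-means variant, which we assume here because the text says the mean scores of the two groups were compared (the exact test statistic is otherwise not spelled out).

```python
import numpy as np

def permutation_test(scores_a, scores_b, rounds=10_000, seed=0):
    """Two-sided permutation test on the difference of group means."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(scores_a, float), np.asarray(scores_b, float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    hits = 0
    for _ in range(rounds):
        rng.shuffle(pooled)  # relabel the scores at random
        diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        hits += diff >= observed
    return (hits + 1) / (rounds + 1)  # add-one smoothing avoids a p-value of exactly 0
```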
Further, there is a strong correlation between the sample rate and the relative number of fitness computations. Notably, as the relative fitness-score computation approaches the sample rate, the frequency of individuals with approximate fitness scores increases. In the Blackjack scenario, fitness-computation ratios closely approximate the sample rates, indicating a strong dominance of prediction mode. In contrast, computation ratios for Frozen Lake are relatively close to 100%, with the exception of the 20% sample rate, signifying a prevalence of evolution mode in the majority of generations. Note that even in this case we attained significant savings at virtually zero cost. These observations shed light on the impact of the switch condition and its predefined threshold hyperparameters on the behavior of the algorithm in approximating fitness scores. Boldfaced results in Table III are those that are statistically identical in performance to the full GA, i.e., _p-value_\(>0.05\). _We observe that results indistinguishable from the full GA can be obtained with a significant reduction in fitness computation._ Table IV shows the performance of three methods discussed in Section II: HEA/FA [10], Averaged Fitness Inheritance [28], and Proportional Fitness Inheritance [28]. We contacted the authors for the implementation of the papers (unfortunately, they are not available on GitHub) but received no reply--so we implemented the methods ourselves. HEA/FA algorithm produced unsatisfactory results, whereas the other two performed relatively well, especially for the Frozen Lake problem. However, statistical insignificance (meaning, same as full GA) was only attained at 80% sample rate in Frozen Lake (and not at all in Blackjack)--whereas our method achieved statistical insignificance for both problems and for lower sample rates as well, as can be seen in Table III. In summary: Compare the boldfaced lines (or lack thereof) of Table III and Table IV. to the baseline method, the calculation of the actual fitness scores and model training in prediction mode is independent of the evolutionary process when fitness scores are hidden from the GA. As a result, fitness computation and model training can be executed as separate processes in parallel, significantly decreasing runtime. By our calculations this parallelization could result in a speedup of up to 60% in the runtime of our existing method. We plan to perform concrete experiments on this approach in the future. ## VIII Concluding Remarks and Future Work In this paper we presented a generic method to integrate machine learning models in fitness approximation. Our approach is useful for domains wherein the fitness score calculation is computationally expensive, such as running a simulator. We used Gymnasium simulators for evaluating actual fitness scores, and Ridge and Lasso models for learning the fitness-approximation models. Our analysis includes a rigorous comparison between different methods for: 1) switching between actual and approximate fitness, 2) sampling the population, and 3) weighting the samples. _Our results show a significant reduction in GA runtime, with a small price in fitness for low sample rates, and no price for high sample rates._ Further enhancements can be incorporated into our method by employing more complex ML models, such as Random Forest [2], XGBoost [5], or Deep Networks [25]. 
While these models have the potential to improve fitness approximation, it is worth noting that they are typically computationally intensive and may not be suitable for domains with limited fitness computation time. Additionally, our method can be refined by leveraging domain-specific knowledge or advanced data science concepts, to improve the generality of the population dataset and, consequently, the accuracy of the model. These approaches have the potential to enhance the overall performance of our solution. ## Acknowledgment This research was partially supported by the following grants: grant #2714/19 from the Israeli Science Foundation; Israeli Smart Transportation Research Center (ISTRC); Israeli Council for Higher Education (CHE) via the Data Science Research Center, Ben-Gurion University of the Negev, Israel.
2303.18011
Exploiting Multilingualism in Low-resource Neural Machine Translation via Adversarial Learning
Generative Adversarial Networks (GAN) offer a promising approach for Neural Machine Translation (NMT). However, feeding multiple morphologically rich languages into a single model during training reduces the NMT's performance. In GAN, similar to bilingual models, multilingual NMT only considers one reference translation for each sentence during model training. This single reference translation limits the GAN model from learning sufficient information about the source sentence representation. Thus, in this article, we propose the Denoising Adversarial Auto-encoder-based Sentence Interpolation (DAASI) approach to perform sentence interpolation by learning the intermediate latent representation of the source and target sentences of multilingual language pairs. Apart from latent representation, we also use the Wasserstein-GAN approach for the multilingual NMT model by incorporating the model-generated sentences of multiple languages for reward computation. This computed reward optimizes the performance of the GAN-based multilingual model in an effective manner. We demonstrate the experiments on low-resource language pairs and find that our approach outperforms the existing state-of-the-art approaches for multilingual NMT with a performance gain of up to 4 BLEU points. Moreover, we use our trained model on zero-shot language pairs under an unsupervised scenario and show the robustness of the proposed approach.
Amit Kumar, Ajay Pratap, Anil Kumar Singh
2023-03-31T12:34:14Z
http://arxiv.org/abs/2303.18011v1
# Exploiting Multilingualism in Low-resource Neural Machine Translation via Adversarial Learning ###### Abstract Generative Adversarial Networks _(GAN)_ offer a promising approach for Neural Machine Translation _(NMT)_. However, feeding multiple morphologically rich languages into a single model during training reduces the NMT's performance. In GAN, similar to bilingual models, multilingual NMT only considers one reference translation for each sentence during model training. This single reference translation limits the GAN model from learning sufficient information about the source sentence representation. Thus, in this article, we propose the Denoising Adversarial Auto-encoder-based Sentence Interpolation _(DAASI)_ approach to perform sentence interpolation by learning the intermediate latent representation of the source and target sentences of multilingual language pairs. Apart from latent representation, we also use the Wasserstein-GAN approach for the multilingual NMT model by incorporating the model-generated sentences of multiple languages for reward computation. This computed reward optimizes the performance of the GAN-based multilingual model in an effective manner. We demonstrate the experiments on low-resource language pairs and find that our approach outperforms the existing state-of-the-art approaches for multilingual NMT with a performance gain of up to \(4\) BLEU points. Moreover, we use our trained model on zero-shot language pairs under an unsupervised scenario and show the robustness of the proposed approach. Neural Machine Translation, Adversarial Training, Multilingual, Denoising Auto-encoder. ## I Introduction Neural machine translation _(NMT)_ has established a number of cutting-edge benchmarks in machine translation tasks [1, 2]. It is an encoder-decoder framework based on a sequence-to-sequence prediction model where the encoder generates the context vector by taking a source-side word as input, and the decoder decodes this generated context vector into target sequences. Recent works ([3, 4, 5]) have extended the NMT approach to support multilingual translation, i.e., training a single model that can translate between multiple languages. There are several reasons for shifting researchers' interest from bilingual to multilingual machine translation. First, training a single model for a large number of languages makes a multilingual model more cost-effective than multiple bilingual models. Another advantage is transfer learning, i.e., training low-resource languages in combination with high-resource languages improves the translation quality of low-resource languages [6]. A notable example of transfer-learning-based NMT in the multilingual scenario is Zero-Shot Translation _(ZST)_ [2]. ZST is an unsupervised approach in which pretrained translation models are tested on related unseen language pairs. A significant amount of work has been done on multilingual machine translation, with the majority of it focusing on translation between language pairs with English as a prominent language [7, 8, 9]. However, very few works on non-English language pairs exist [2, 10]. One of the reasons for the scarcity of multilingual systems for non-English language pairs is insufficient training data for such pairs. Multilingual models' translation quality is often not as good as that of bilingual models, because combining multiple languages increases data sparseness. However, the use of multiple languages during training yields good systems for zero-resource languages [11].
Multilingual training significantly improves the translation quality of low and zero-resource languages. Although fitting multiple morphologically rich language pairs into a single model suffers from representation learning bottlenecks and affects the generalisation capabilities, limiting the benefits of multilingual on translation quality [12]. Despite a large amount of data fed into models with many parameters, translation performance in all language directions has not improved due to representation learning bottlenecks problems. Therefore, more research is needed for better data selection and representation, network architectures, and learning algorithms in low-resource multilingual model. To solve the problems of representation learning and generalisation, we present Denoising Adversarial Auto-encoder-based Sentence Interpolation (_DAASI_) approach for low-resource multilingual machine translation. DAASI exploits the monolingual data by adversarial-based denoising autoencoder and generates the augmented data for parallel corpus. A denoising autoencoder is a neural network architecture that corrupts data and attempts to recreate it from corrupted samples, allowing it to perform well even when the inputs are noisy [13]. Then it uses sentence interpolation to generate the latent representation of data between two different languages and uses the interpolated data to train the NMT model relying on Generative Adversarial Network_(GAN)_[14]. We train the NMT based on Wasserstein Generative Adversarial Network (WGAN) [15], consists of generator and critic, on newly constructed interpolated data. In generator, pre-trained NMT model produces translated sentence given a source sentence, while critic model takes sentence pairs as input, tries to learn the Wasserstein distance between them and judge whether they are real or fabricated based on the distances between the sentences. Unlike the reward computed in the regular GAN model, DAASI generates the reward on each test set of multiple languages. We conducted the experiments on five low-resource language pairs: Gujarati (GU)\(\leftrightarrow\)Hindi (HI), Nepali (NE)\(\leftrightarrow\)Hindi (HI), Punjabi (PA)\(\leftrightarrow\)Hindi (HI), Maithil (MAI)\(\leftrightarrow\)Hindi (HI), Urdu (UR)\(\leftrightarrow\)Hindi (HI) to demonstrate the robustness of the proposed DAASI approach. We also use our pre-trained multilingual model on zero-shot language pairs (Bhojpuri (BHO)\(\leftrightarrow\)Hindi (HI) and Magahi (MAG)\(\leftrightarrow\)Hindi (HI)) under an unsupervised scenario. Particularly, the contributions of the paper are summarised as follows: 1. Propose _DAASI_ approach based on denoising adversarial auto-encoder for low-resource multilingual machine translation. 2. Perform sentence interpolation on source-target language pairs to generate the intermediate latent representation of sentence that covers the diverse context of sentences in different languages. 3. Optimise the GAN for the multilingual machine translation model by incorporating WGAN and multi-language references to compute the reward. 4. Proposed approach outperforms existing state-of-the-art techniques up to 4 BLEU points in all translation tasks. The rest of the paper is organized as follows. Section II discusses the closely related works. Section III describes the proposed solutions. Corpus statistics, the experimental setup and results conducted are reported in Section IV. Finally, paper is summarized in Section V with future aspects of work. 
## II Related Works Limited availability of resources for low-resource machine translation could be improved under a multilingual training scenario. Projecting multiple languages in the same dimensional space helps the NMT systems utilize information from high-resource language pairs and enhance the translation quality of low-resource languages through a transfer learning approach. Recent studies on denoising autoencoders reveals few improvements in the NMT model under multilingual scenario. In this section, we closely review the multilingual machine translation systems in terms of denoising autoencoder. ### _Multilingual NMT without denoising autoencoder_ The main objective of multilingual NMT is to build a model that can assist translation between more than one language pair. Multilingual NMT is of three types: one-to-many [3], many-to-one [4] and many-to-many [5]. The learning objective for multilingual NMT is to maximize the log-likelihood of all training examples for all language pairs. In [7], authors proposed architecture for NMT by incorporating an intermediate attention bridge between all languages and receiving multilingual sentence representations. In [8], authors conducted experiments on training massively multilingual NMT models, incorporating up to 103 different languages and 204 translation directions at the same time, and investigated various setups for training such models as well as analysing the trade-offs between translation quality and different modelling decisions. In [16], authors employed multilingual and multi-way neural machine translation approaches for morphologically rich languages, such as Estonian and Russian. In [17], authors addressed the rare word problem in multilingual MT models for low-resource language pairs. In [9], authors proposed a multilingual lexicon encoding framework designed exclusively to smartly share lexical-level information without involving any heuristic data preprocessing. In [18], authors incorporated a language-aware interlingua into the Encoder-Decoder architecture. The incorporated interlingual network enables the model to learn a language-independent representation from the semantic spaces of different languages. In [19], authors investigated methods for improving massively multilingual NMT, particularly on ZST, and demonstrated that multilingual NMT has limited capacity, which they propose to improve by deepening the Transformer and developing language-aware neural models. In [20], authors developed a model that divides languages into groups and trains one multilingual model for each group. In [21], authors proposed a training method based on a contrastive learning scheme and data augmentation for a single unified multilingual translation model. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **Paper** & **A** & **B** & **C** & **D** & **E** & **F** \\ \hline [3] & βœ— & βœ— & βœ— & βœ“ & βœ— & βœ— \\ \hline [4] & βœ— & βœ— & βœ“ & βœ— & βœ— \\ \hline [5] & βœ— & βœ— & βœ“ & βœ— & βœ— \\ \hline [7] & βœ— & βœ— & βœ— & βœ“ & βœ— & βœ— \\ \hline [8] & βœ— & βœ— & βœ— & βœ“ & βœ— \\ \hline [9] & βœ— & βœ— & βœ“ & βœ— & βœ— \\ \hline [16] & βœ— & βœ— & βœ“ & βœ— & βœ— \\ \hline [17] & βœ— & βœ— & βœ“ & βœ— & βœ— \\ \hline [18] & βœ— & βœ— & βœ“ & βœ— & βœ— \\ \hline [19] & βœ— & βœ— & βœ“ & βœ— & βœ— \\ \hline [20] & βœ— & βœ— & βœ“ & βœ— & βœ— \\ \hline [21] & βœ— & βœ— & βœ“ & βœ— & βœ— \\ \hline [22] & βœ— & βœ— & βœ“ & βœ— & βœ— \\ \hline [23] & βœ“ & βœ— & βœ— & βœ— & βœ— \\ \hline [24] & βœ“ & βœ— & βœ— & βœ— & βœ— \\ \hline [25] & βœ“ & βœ— & βœ— & βœ— & βœ— \\ \hline [26] & βœ— & βœ— & βœ— & βœ— & βœ— \\ \hline [DAASI] & βœ— & βœ“ & βœ“ & βœ“ \\ \hline \end{tabular} * Note. **A**: Denoising Auto-Encoder, **B**: Wasserstein Generative Adversarial Network, **C**: WX [27], **D**: Multilingual, **E**: Sentence interpolation, **F**: Multi-language reward. \end{table} Table I: Comparison of existing works ### _Multilingual NMT with denoising autoencoder_ In [22], authors proposed a multilingual unsupervised NMT framework that trains different languages simultaneously with a shared encoder and multiple decoders relying on denoising autoencoding of each language and back-translation between English and numerous non-English languages. In [23], authors presented BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by corrupting text with random noise and letting the model reconstruct the original text. In [24], authors presented mBART--a sequence-to-sequence denoising auto-encoder pre-trained on large-scale monolingual corpora in many languages using the BART objective. In [25], authors proposed an approach called NMT-Adapt, which combines denoising autoencoding, back-translation and adversarial objectives to utilize monolingual data for low-resource adaptation. ### _Shortcomings of existing methods_ The existing low-resource multilingual translation model (such as [5, 9, 18]) mainly focused on leveraging the high resource language to improve translation quality via a transfer learning approach. There is a need to maintain the coherent latent space between the sentences that helps the model to learn better context. Our approach uses sentence interpolation for generating intermediate latent sentence representation based on adversarial autoencoder and train the multilingual translation model based on Wasserstein-GAN, which optimizes the model performance. ## III Proposed Approach This section discusses the proposed DAASI approach, which improves language generalisation in low-resource multilingual machine translation and trains the translation model with modified multilingual-reward via WGAN. DAASI consists of three components: Denoising Adversarial Auto-encoder _(DAAE)_[13], sentence interpolation, and WGAN-based translation model; described in the following: ### _DAAE for multilingual model_ We introduce DAAE (Fig. 1) in the multilingual NMT model for pseudo-corpus generation and encourages the model to implicitly learn a similar latent representation of sentences. DAAE is an encoder-decoder framework consisting of a deterministic encoder \(E_{RNN}\), a probabilistic decoder \(D_{RNN}\) and a discriminator \(Q_{FFN}\). Both \(E_{RNN}\) and \(D_{RNN}\) are Recurrent Neural Network _(RNN)_. 
\(E_{RNN}\) takes input sequence \(m\) and uses the final RNN hidden state as its encoding \(z\). \(D_{RNN}\) generates \(m\) autoregressively. \(Q_{FFN}\) is a feed-forward network that calculates the likelihood of \(z\) coming from the prior rather than the encoder. Deterministic encoder \(E_{RNN}\): \(\mathcal{M}{\rightarrow}\mathcal{Z}\) models data space to latent space. Probabilistic decoder \(D_{RNN}\): \(\mathcal{Z}{\rightarrow}\mathcal{M}\) generates sequences from latent representations. Discriminator \(Q_{FFN}\): \(\mathcal{Z}{\rightarrow}[0,1]\) attempts to distinguish between encodings of data \(E_{RNN}(m)\) and samples from \(p(z)\). First, we encode the sequence \(m\) from multilingual monolingual corpus into a common roman script via WX-transliteration as follows: \[m_{trans}=enc_{trans}(m), \tag{1}\] where \(m_{trans}\) and \(enc_{trans}\) represent transliterated sequence and WX-encoder, respectively. Then we exploit the transliterated monolingual data by adding noise to each sentence, passing the noised sentence to the encoder, training the model, and recovering the original sentence from the noised sentence using an adversarial autoencoder. We corrupt the data by introducing local m-perturbations in a sequence \(m_{trans}\) as follows: \[m_{e}=perturb(m_{trans}), \tag{2}\] \begin{table} \begin{tabular}{|c|c|} \hline **Symbol** & **Description** \\ \hline \(M,Z\) & Sentence space, latent space \\ \hline \(m,z\) & Sequence, Generated latent variable \\ \hline \(B\) & Class labels \\ \hline \(C,m_{trans}\) & Corpus, Transliterated sequence \\ \hline \(E_{RNN}\) & Deterministic encoder in DAAE \\ \hline \(D_{RNN}\) & Probabilistic decoder in DAAE \\ \hline \(Q_{FFN}\) & Discriminative part of DAAE \\ \hline \(E_{enc}\) & Reconstruction loss in DAAE \\ \hline \(L_{adv}\) & Adversarial loss in DAAE \\ \hline \(E(m)\) & Encoded sequence \(m\) \\ \hline \(m_{e}\) & Perturbed m \\ \hline \(s,t\) & Source sentence, Target sentence \\ \hline \(s^{\prime},t^{\prime}\) & Interpolated source sentence, Interpolated target sentence \\ \hline \(z_{s}\) & Latent variable for source sentence \\ \hline \(z_{t}\) & Latent variable for target sentence \\ \hline \(s_{nn}\) & Synthetic source sentence \\ \hline \(t_{syn}\) & Synthetic target sentence \\ \hline \(t_{syn}\) & Predicted target sequence \\ \hline \(t_{syn}\) & NNT generated sentence \\ \hline \(t_{syn}\) & Reference sentence \\ \hline \(L_{i}\) & \(t^{n}\) language \\ \hline \(\alpha\) & Interpolation factor \\ \hline \(\lambda\) & Training hyperparameter in DAAE \\ \hline \(h\) & Feature map \\ \hline \(G_{NMT}\) & Generator part of translation model \\ \hline \(Q_{CNN}\) & Discriminator part of translation model \\ \hline \(R_{i}\) & Reward gained by \(i^{\prime}\) language pair \\ \hline \(R_{MMT}\) & Reward for translation model \\ \hline \(\theta\) and \(\phi\) & Gradient descent for DAAE and DAASI \\ \hline \(\beta\) & Interpolation hyperparameter \\ \hline \(I_{r}\) & Learning rate \\ \hline \(P_{r}\) & Probability distribution for human translated sentence \\ \hline \(\bar{P}_{g}\) & Probability distribution for machine generated sentence \\ \hline \(\mathcal{L}_{gen}\) & Generator loss function in WGAN \\ \hline \(\mathcal{L}_{critic}\) & Critic loss function in WGAN \\ \hline \end{tabular} \end{table} Table II: Symbol description Figure 1: DAAE for multilingual model. where \(perturb\) and \(m_{e}\) represent perturbation process and the corrupted sequence, respectively. 
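The exact form of the local perturbation perturb(·) is not specified above, so the toy function below simply drops or swaps neighbouring tokens with a small probability; it is a stand-in used only to make the corruption step concrete, not the authors' noise model.

```python
import random

def perturb(tokens, p_noise=0.1, seed=None):
    """Toy local perturbation: with probability p_noise, drop a token or swap it
    with its right-hand neighbour. Stand-in for the paper's perturb(.)."""
    rng = random.Random(seed)
    out = list(tokens)
    i = 0
    while i < len(out) - 1:
        if rng.random() < p_noise:
            del out[i]                                # local deletion
        elif rng.random() < p_noise:
            out[i], out[i + 1] = out[i + 1], out[i]   # local swap
            i += 2
        else:
            i += 1
    return out

# Example on a whitespace-tokenized, WX-transliterated sentence:
noisy = perturb("merA nAma rAma hE".split(), seed=7)
```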
We employ the two types of losses i.e., reconstruction loss (\(\mathcal{L}_{rec}\)) and adversarial loss (\(\mathcal{L}_{adv}\)) to train the DAAE module described as follows [13]: \[\begin{split}\mathcal{L}_{rec}(\theta_{E_{RNN}},\theta_{D_{RNN}})= \\ \mathbb{E}_{p(m,m_{e})}\big{[}-\log p_{D_{RNN}}(m|E_{RNN}(m_{e})) \big{]},\end{split} \tag{3}\] \[\begin{split}\mathcal{L}_{adv}(\theta_{E_{RNN}},\theta_{Q_{FFN}})= &\mathbb{E}_{p(z)}\big{[}-\log Q_{FFN}(z)\big{]}+\\ \mathbb{E}_{p(m_{e})}\big{[}1-\log Q_{FFN}(E_{RNN}(m_{e})) \big{]},\end{split} \tag{4}\] where, \[p(m,m_{e})=p_{data}(m)p(m|m_{e}), \tag{5}\] \[p(m_{e})=\sum_{m}p(m,m_{e}). \tag{6}\] Both reconstruction and adversarial loss are weighted via hyperparameter \(\lambda\)\(>\)0 during training as follows: \[\begin{split}\min_{E_{RNN},D_{RNN}}\max_{Q_{FFN}}\mathcal{L}_{ rec}(&\theta_{E_{RNN}},\theta_{D_{RNN}})\\ -&\lambda\mathcal{L}_{adv}(\theta_{E_{RNN}},\theta_ {Q_{FFN}}).\end{split} \tag{7}\] With perturbation process \(perturb\), the posterior distributions of the latent representations are of the form: \[p(z|m_{i})=\sum_{m_{e_{i}}}p_{perturb}(m_{e_{i}}|m_{i})p_{E_{RNN}}(z|m_{e_{i}}). \tag{8}\] We describe the DAAE training procedure in the Algorithm 1 for better understanding of the model. First, encode the multilingual monolingual sentence into WX-representation and then corrupt the sentence using the perturbation process (lines 1-2). Then train the \(E_{RNN}\) and \(D_{RNN}\) by keeping \(Q_{FFN}\) fixed (lines 4-6). Sample a batch from corrupted monolingual data and generate \(z\sim p(m_{e_{i}})\) (lines 4-5). Next, we reconstruct \(m_{i}\) from \(m_{e_{i}}\) and compute \(\mathcal{L}_{rec}\) via Eq. (3) (line 6). Then train the \(Q_{FFN}\) by keeping \(E_{RNN}\) and \(D_{RNN}\) fixed (lines 7-9). Generate \(z\)\(\sim p(z|m_{i})\) from original data and compute \(\mathcal{L}_{adv}\) via Eq. (4) (lines 8-9). Finally, jointly train the \(E_{RNN}\), \(D_{RNN}\) and \(Q_{FFN}\) autoregressively via Eq. (7) until the model gets converged (lines-3-11). ``` Input: Sentence (\(m\)),\(\gamma\)\(m\)\(\in\)C Output: DAAE-model 1\(m_{trans}\gets enc_{trans}(m)\); 2\(m_{e}\leftarrow\) perturb(\(m_{trans}\)); 3while\(\theta\) has not convergeddo 4// Update (\(E_{RNN}\), \(D_{RNN}\)) and keep \(Q_{FFN}\) fixed sample \(\{m_{e_{i}}\}_{i=1}^{i}\); a batch from corrupted monolingual data 5 Generate \(z\sim p(m_{e_{i}})\); 6Reconstruct (from \(m_{e_{i}}\) and compute \(\mathcal{L}_{rec}\) via Eq. (3); 7// Update (\(E_{RNN}\), \(Q_{FFN}\)) and keep \(D_{RNN}\) fixed 8 sample \(\{m_{i}\}_{i=1}^{i}\); a batch from monolingual data 9 Generate \(z\sim p(z|m_{i})\); 10 Compute \(\mathcal{L}_{adv}\) via Eq. (4); 11 Perform min-max using Eq. (7); // Train the DAAE via min-max algorithm 12\(\theta\leftarrow\theta\) - lr\(\frac{\partial\mathcal{L}_{adv}}{\partial\theta}\); // Update the parameter ``` **Algorithm 1**DAAE training procedure ### _Sentence interpolation_ We perform sentence interpolation between source \(s\) and target \(t\) language pairs by traversing the latent space of the text auto-encoder. DAASI encodes the source-target language pair to \(z_{s}\) and \(z_{t}\), and decode from \(\alpha z_{s}+(1-\alpha)z_{t}\) (0\(\leq\)\(\alpha\)\(\leq\)1) into interpolated form \(s^{\prime}\) or \(t^{\prime}\). We perform sentence interpolation on both the source and target monolingual data. To generate \(s^{\prime}\), we train the DAAE on source-side monolingual data and perform the sentence interpolation between source and target parallel data. 
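Given a trained DAAE, the interpolation step itself is just a convex combination in latent space. The PyTorch-style sketch below assumes a daae object exposing encode(token_ids) -> z and generate(z) -> token_ids (placeholder method names standing in for \(E_{RNN}\) and \(D_{RNN}\), not the authors' API) and is meant only to illustrate the mixing \(\alpha z_{s}+(1-\alpha)z_{t}\).

```python
import torch

@torch.no_grad()
def interpolate_pair(daae, src_ids, tgt_ids, alphas=(0.25, 0.5, 0.75)):
    """Decode candidate sentences from convex combinations of the two latents."""
    z_s = daae.encode(src_ids)   # latent code of the source sentence
    z_t = daae.encode(tgt_ids)   # latent code of the target sentence
    candidates = []
    for alpha in alphas:
        z = alpha * z_s + (1.0 - alpha) * z_t   # intermediate latent representation
        candidates.append(daae.generate(z))
    return candidates            # later filtered by chrF2 similarity to the source
```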
After performing interpolation, we get many sentences as output. We select the sentence with a high degree of similarity to the source sentence based on the chrF2 [28] measure. Similarly, we train the DAAE on target side monolingual data and perform the sentence interpolation between source and target parallel data to get \(t^{\prime}\). We merge the generated corpus with original training data to create synthetic parallel corpus (\(s_{syn}\),\(t_{syn}\)) for further training of model as shown in Fig. 2. The generated corpus is semantically similar to the original one but contains more information that helps the model learn better context between sentences. We use the sentence interpolation in DAASI to generate the synthetic sentences close to both source and target sentences. These give some intermediate latent representations of sentences that are beneficial for learning the context of relatedness between the languages. Denoising helps produce higher-quality sentence interpolations, suggesting better linguistic continuity in its latent space. Figure 2: Illustration of proposed architecture. ### _Translation model_ We have trained the translation model on the generated semantic parallel corpora (\(s_{syn}\),\(t_{syn}\)) using WGAN, as shown in Fig. 2. It is made up of generator (\(G_{MMT}\)) and critic (\(Q_{CNN}\)) parts that use the NMT and Convolutional Neural Network (CNN) architectures, respectively [29]. The generator objective minimises the Wasserstein distance between data distribution of machine-generated translated sentences and human translated references. The Wasserstein distance is a distance metric between two probability distributions on a given metric space. Mathematically, we define the Wasserstein distance \(W\) for the translation model between the probability distributions of machine-generated translated sentences \(\mathbb{P}_{g}\) and the human translated references \(\mathbb{P}_{r}\) (\(\mathbb{P}_{r}\) and \(\mathbb{P}_{g}\) belong to embedding space \(\mathcal{X}\)) as follows [15]: \[W(\mathbb{P}_{r},\mathbb{P}_{g})=\inf_{\gamma\in\prod(\mathbb{P}_{r},\mathbb{ P}_{g})}\mathbb{E}_{(t^{\prime}_{syn},t_{syn})\sim\gamma}[||t^{\prime}_{syn}-t_{ syn}||], \tag{9}\] where \(\prod(\mathbb{P}_{r},\mathbb{P}_{g})\) represents the set of all joint distributions over human translated references \(t_{syn}\) and machine translated sentences \(t^{\prime}_{syn}\) such that the marginal distributions are equal to \(\mathbb{P}_{r}\) and \(\mathbb{P}_{g}\), and \(\gamma(t^{\prime}_{syn},t_{syn})\) is the distance that must be moved from \(t^{\prime}_{syn}\) to \(t_{syn}\) to transform \(\mathbb{P}_{r}\) to \(\mathbb{P}_{g}\), respectively. In generator, we use the NMT architecture to train the model but employ the following objective function instead of cross-entropy loss: \[\begin{split} L_{gen}(w)=\min_{\phi}\mathbb{E}_{t_{syn}\sim \mathbb{P}_{r}}[Q_{CNN}(t_{syn})]\\ -\mathbb{E}_{t^{\prime}_{syn}\sim p(t^{\prime}_{syn})}[Q_{CNN}(t^ {\prime}_{syn})].\end{split} \tag{10}\] In the second part of the model (critic), the objective is to estimate the Wasserstein distance between data distribution of machine-generated translated sentences and human-translated references. For the critic part, we create an image-like representation \(h^{(0)}\) by simply appending the embedding vectors of words in \(s_{syn}\) and \(t_{syn}\) based on [14]. 
Therefore, for \(i^{th}\) word \(u_{i}\) in the source sentence \(s_{syn}\) and \(j^{th}\) word \(v_{j}\) in the target sentence \(t_{syn}\), we have the following feature map [14]: \[h^{(0)}_{i,j}=[u^{T}_{i},v^{T}_{j}]^{T}. \tag{11}\] Based on such an image-like representation, we perform convolution on each 3 X 3 window in order to capture the correspondence between segments in \(s_{syn}\) and segments in \(t_{syn}\) using the following feature map of type \(f\): \[h^{(1,f)}_{i,j}=\sigma(W^{(1,f)}\hat{h}^{(0)}_{i,j}+b^{(1,f)}). \tag{12}\] where \(\hat{h}^{(0)}_{i,j}=[h^{(0)}_{i-1:i+1,j-1:i+1}]\) is the 3 \(\times\) 3 window and \(\sigma(x)=\frac{1}{(1+exp(-x))}\) represents sigmoid function. After that we perform a max-pooling in non-overlapping 2 \(\times\) 2 window: \[h^{(2,f)}_{i,j}=\max(h^{(1,f)}_{2i-1,2j-1},h^{(1,f)}_{2i-1,2j},h^{(1,f)}_{2i,2 j-1},h^{(1,f)}_{2i,2j}). \tag{13}\] This type of 2D architecture helps in modelling the semantic relationship between the two sentences more accurately. Based on this 2D architecture, we train the critic part using the objective function described as follows: \[\begin{split}\mathcal{L}_{critic}=\max_{w\in W}\mathbb{E}_{t_{ syn}\sim\mathbb{P}_{r}}[Q_{CNN}(t_{syn})]\\ -\mathbb{E}_{t^{\prime}_{syn}\sim p(t^{\prime}_{syn})}[Q_{CNN}(t^ {\prime}_{syn})].\end{split} \tag{14}\] \(Q_{CNN}\) is updated synchronously with \(G_{NMT}\) during model training. We have defined the training for translation model as follows: \[\begin{split}\min_{G_{NMT}}\max_{Q_{CNN}}\mathbb{E}_{t_{syn}\sim \mathbb{P}_{r}}[Q_{CNN}(t_{syn})]\\ -\mathbb{E}_{t^{\prime}_{syn}\sim p(t^{\prime}_{syn})}[Q_{CNN}(t^ {\prime}_{syn})].\end{split} \tag{15}\] ### _Multilingual reward_ In multilingual-NMT, we attach the \(<\)\(SRC\)\(>\) tag at the beginning of each source sentence, merge parallel corpora from different languages, and train the translation model. However, in GAN, for the multilingual model, using the merged way of reference translation in different languages for reward computation may negatively affect the model's performance due to the diversity of languages. To handle this problem of language diversity, we have proposed a multilingual reward based on the Wasserstein distance between the sentences, as shown in Fig. 3. Given a synthetic source sentence \(s_{i}\), a critic based on CNN is proposed to distinguish between the model translation results \(t_{gen}\) and the reference translation \(t_{ref}\). The translation matching of a (source, target) sentence pair must be measured to accomplish this task. We use the Wasserstein distance between real and machine-generated sentences as the objective of reward measurement in one language. The objective \(R_{i}\) calculates the reward for \(i^{th}\) language pair, which measures the Wasserstein distance between real \(t_{ref}\) and generated sentence \(t_{gen}\) as follows: \[R_{i}=\inf_{\gamma\in\prod(P_{r},P_{g})}\mathbb{E}_{(t_{gen},t_{ref})\sim \gamma}[||t^{i}_{gen}-t^{i}_{ref}||]. \tag{16}\] The presence of multiple morphologically rich languages in multilingual NMT impedes the generator's ability to learn effective parameters from WGAN model training. To effectively train the WGAN-based multilingual model, we extend the critic of WGAN for MMT by incorporating multiple references of different language pairs for reward calculation during model training. For reward computation in the multilingual-NMT model, we divide the test set language pairs into \(K\) sets such that each set has pairs of different source languages. 
For a better-optimized model, we compute the reward on the same number of different language pairs. We compute the reward \(R_{MMT}\) for multilingual-GAN as follows: \[R_{MMT}=\frac{\sum_{i=L_{1}}^{L_{m}}\inf_{\gamma\in\prod(P_{r},P_{g})}\mathbb{E}_{( t_{gen},t_{ref})\sim\gamma}[||t^{i}_{gen}-t^{i}_{ref}||]}{n}, \tag{17}\] where, \(n\) is the total number of language pairs in one set of \(K\). ### _Model training_ In order to make our work reproducible, we describe the training details of our proposed approach in Algorithm 2. First, we pretrain the DAAE on the monolingual source \(s\) and target \(t\) languages using Algorithm 1 (line 1). Then, generate a semantic corpus for parallel source and target sentences via the sentence interpolation approach and merge the generated semantic corpus with the original multilingual training corpus(lines 2-4). Pretrain the \(G_{\theta}\) and \(Q_{w}\) on real training data (line 5). We Update the critic model five times more than the generator for each iteration (lines 7-15). We sample the batch from real data and the machine-generated data for each critic and compute the Wasserstein loss (lines 8-10). Then, compute the reward and if the reward is less than or equal to the reward in the previous iteration, return the generator model; otherwise, update the parameters and perform clipping (lines 11-15). Sample the batch of machine-generated data for the generator and compute the generator's Wasserstein loss (lines 16-17). Finally, update the generator parameters and Train the generator and critics autoregressively until the model gets converged (lines 6-18). ## IV Performance Study In this section, we discuss the corpus statistics, experimental setup and result analysis on various parameters to execute the experiments. ### _Datasets_ We evaluate our proposed method on five language pairs in totally 10 directions: Gujarati\(\leftrightarrow\)Hindi, Nepali\(\leftrightarrow\)Hindi, Urdu\(\leftrightarrow\)Hindi, Maithili\(\leftrightarrow\)Hindi and Punjab\(\leftrightarrow\)Hindi. We also evaluate our method on zero-shot language pairs (Bhojupuri\(\leftrightarrow\)Hindi and Magahi\(\leftrightarrow\)Hindi) under unsupervised scenario. For \(Gujarati\leftrightarrow Hindi\) translation task, we extract the bilingual datasets from CVIT-PIB which consists of 15000 training, 1973 test and 1973 validation sentence pairs [30]. For \(Nepali\leftrightarrow\)_Hindi_ translation task, training, test, and validation data of about 133000, 3000, 3000 respectively collected from WAT19 task [31], Opus [32], and TDIL repositories [33]. For \(Punjabi\leftrightarrow\)_Hindi_, \(Maithili\leftrightarrow\)_Hindi_, and \(Urdu\leftrightarrow\)_Hindi_, we gather the bilingual data from Opus only [32]. For unsupervised experiments, test set of Bhojupuri\(\leftrightarrow\) Hindi and Magahi \(\leftrightarrow\) Hindi are collected from zero-shot translation task at LoResMT2020 [34]. In Punjab\(\leftrightarrow\)Hindi and Maithili\(\leftrightarrow\)Hindi translation task, we use 200000 and 93000 training sentences for Punjab\(\leftrightarrow\)Hindi and Maithili\(\leftrightarrow\)Hindi respectively. Testing and validation are done on 7000 sentences for Punjab\(\leftrightarrow\)Hindi and 3000 sentences for Maithili\(\leftrightarrow\)Hindi translation tasks. For Urdu\(\leftrightarrow\)Hindi, we train the model on 100000 sentences. Testing and validation are done on 3000 sentences. 
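Returning to the multilingual reward of Eq. (17), the only ingredient it adds on top of the per-pair reward of Eq. (16) is an average over the language pairs in one set. A minimal sketch is given below; the Wasserstein distance itself is abstracted into a user-supplied callable, since in the paper's setup that estimate is produced by the trained critic.

```python
import numpy as np

def multilingual_reward(batches_by_language, wasserstein_estimate):
    """Average the per-language rewards over the n language pairs in one set (Eq. 17).
    batches_by_language maps a language-pair id to a (generated, reference) pair of
    sentence batches; wasserstein_estimate(gen, ref) is any estimator of the distance
    between the two batches (e.g. the critic's output difference)."""
    rewards = [wasserstein_estimate(gen, ref)
               for gen, ref in batches_by_language.values()]
    return float(np.mean(rewards))   # n = number of language pairs in the set
```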
``` Input: Parallel(\(s\),\(t\)) \(\forall\)\(s\)Source language, \(\forall\)\(t\)\(\in\)Target language, \(\rho\) = 0.00005, \(c\) = 0.01, \(n_{critic}\) = 5, \(G_{NMT}\) and \(Q_{CNN}\) with parametric function denoted as \(G_{\phi}\) and \(Q_{w}\), respectively. Output: DAASI-based multilingual NMT model (\(G_{\phi}\)) 1 DAAE(\(s\)) and DAAE(\(t\)); 2 s' \(\leftarrow\)Interpolate(\(s\),\(t\)) via DAAE(\(s\)); // Perform interpolation on source. 3 t' \(\leftarrow\)Interpolate(\(s\),\(t\)) via DAAE(\(t\)); // Perform interpolation on target. 4 {\(s_{syn}\), \(t_{syn}\)} \(\leftarrow\) merge(\(\{s^{\prime}\), \(t\), \(s\), \(t\), \(s^{\prime}\)}, {\(s^{\prime}\),\(t^{\prime}\)}); 5 Pretrain \(G_{\phi}\) and \(Q_{w}\) on real data; 6while\(\phi\) has not converged do 7 for\(b\) = 0,..., \(n_{critic}\)do 8 sample {\(t_{syn}^{\prime}\)}\({}_{i=1}^{n}\)\(\sim\)\(\triangleright\)\(p_{r}\); // a batch from synthetic data 9 sample {\(t_{syn}^{\prime}\)}\({}_{i=1}^{n}\)\(\sim\)\(\triangleright\)(\(t_{syn}|_{syn}\)); // a batch of machine generated samples 10 \(G_{w}\)\(\leftarrow\)\(\nabla_{w}\) [\(\frac{1}{n}\sum_{i=1}^{n}Q_{w}(t_{syn})^{i}-\frac{1}{n}\) \(\sum_{i=1}^{n}Q_{w}(G_{\phi}(t_{syn}^{\prime}))\)]; 11 Compute \(R_{MMT}\); 12 If\(R_{MMT_{b}}\)\(\leftarrow\)\(R_{MMT}\)then 13 return \(G_{\phi}\); 14\(w\)\(\leftarrow\)\(w\) + \(\rho\). RMSProp(\(w\),\(G_{w}\)); // Update the critic parameters 15\(w\)\(\leftarrow\) clip(\(w\),\(c\)); 16 17 sample {\(t_{syn}^{\prime}\)}\({}_{i=1}^{n}\)\(\sim\)\(\triangleright\)(\(t_{syn}|_{syn}\)); // a batch of machine generated samples 18\(G_{\phi}\)\(\leftarrow\) - \(\nabla_{\phi}\)\(\frac{1}{n}\sum_{i=1}^{n}Q_{w}(G_{\phi}(t_{syn}^{\prime}))\); 19\(\phi\)\(\leftarrow\)\(\phi\) + \(\rho\). RMSProp(\(\phi\),\(G_{\phi}\)); // Update the generator parameters ``` **Algorithm 2**DAASI training procedure ### _Settings_ In this section, we discuss the different framework and hyper-parameter used to train the models. Our proposed DAASI approach consists of two components: DAAE and a Multilingual-based reward for the NMT model. We have trained DAAE using the framework and settings discussed in [13]. For multilingual NMT, we have built a critic model based on CNN as described in [14], and for generator, we use a transformer based on Fairseq framework [35], by changing the loss function with Wasserstein loss and executed the experiments on PARAM Shivay supercomputer Figure 3: Multilingual reward generation. with an Nvidia V100 GPU. We pre-process the data with \(SentencePiece\) library [36]. Our model trained on the 5 number of the decoder and encoder layers with a 512 embedding dimension for each and learns the joint vocabulary by sharing dictionary and embedding space. The feed-forward network has encoder and decoder embedding dimensions equal to 2048. The number of attention heads used for the decoder and encoder is 2. We use weight decay, label smoothing and dropout for regularisation, with the corresponding hyper-parameters to 0.0001, 0.2 and 0.4, respectively. We use Adam optimizer for the model having \(\beta 1=0.9\) and \(\beta 2=0.98\), and keep the _patience_ value equal to 10. To demonstrate the effectiveness of our approach, we compare with the following baseline models: LSTM and TransformerWe have trained the baseline NMT model with the LSTM and Transformer architecture using Fairseq, a sequence modelling toolkit and executed the experiments on an Nvidia V100 GPU [35]. 
Adversarial-NMTWe have trained the Adversarial-based NMT model using the architecture and settings described in [14]. ### _Results and analysis_ To measure the performance of our proposed model, we use BLEU [37] as evaluation metric. This evaluation metric judges the model's performance by focusing on semantic, syntactic, morphological and fluency factors of generated hypothesis. In the following section, we describe the results and analyze the proposed approach under multilingual and unsupervised conditions. #### Iv-C1 Multilingual effect of different methods Tables III and IV contain the results of different methods performed under multilingual settings. Compared with the traditional multilingual NMT model (LSTM and Transformer), the Adversarial-based multilingual NMT approach achieved better performance due to its advantage on optimisation of training objective of NMT- to force the translation results to be as similar as ground-truth translation generated by a human. However, due to its optimisation limit, the Adversarial-NMT method failed to outperform the DAAE-based NMT. DAAE-based NMT, in addition to adversarial, works better due to its de-corrupting text features. Therefore, it outperforms the Adversarial-based multilingual NMT method. Existing methods only take de-corrupting text as input features in model training, which could not better model contexts for rich morphology languages. In addition, existing models only apply a single reference when distinguishing between model-generated translation and human translation. Our proposed model incorporating sentence interpolation with DAAE features for generating sentences with better context for morphological languages and multiple reference translation in the existing best multilingual-NMT model achieved the best performance of upto 4 BLEU points on all translation language tasks. #### Iv-C2 Effect of different components The proposed DAASI model consists of DAAE-based sentence interpolation and GAN-based multilingual training. From Tables III and IV, we have observed that removing any components hurt the model performance with a sufficient gap in evaluation scores. Using only denoising auto-encoder gives good results compared to vanilla NMT, but adding sentence interpolation improves the model performance. The reason behind such performance gain is the better context learning of the model by sentence interpolation approach. We have also observed that using a multilingual reward for training the model improves the model performance up to 3 BLUE points. The proposed DAASI approach achieved the best performance, incorporating both features from two components. #### Iv-C3 Effect of language similarity We compute the perplexity score of languages to assess their similarity to one another. Each language has been trained and tested for perplexity. The perplexity of each language corpus is computed as follows: \[PP(C)=\sqrt[]{\frac{1}{P(x_{1},x_{2},...,x_{n})}}, \tag{18}\] where \(P(x_{1},x_{2},...,x_{n})\) is probability of a sequence of sentences{\(x_{1}\),\(x_{2}\),...,\(x_{n}\)} in a corpus \(C\), computed as follows: \[P(x_{1},x_{2},...,x_{n})=\prod_{i=1}^{m}p(x_{i}). 
\tag{19}\] \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **Model** & **HI \(\rightarrow\) GU** & **HI \(\rightarrow\) NE** & **HI \(\rightarrow\) PA** & **HI \(\rightarrow\) MAI** & **HI \(\rightarrow\) UR** \\ \hline **LSTM** & 20.6 & 31.0 & 58.7 & 64.1 & 11.6 \\ \hline **Transformer-NMT** & 21.8 & 30.1 & 59.3 & 66.2 & 12.5 \\ \hline **Adversarial-NMT** & 22.2 & 30.8 & 60.6 & 66.9 & 12.7 \\ \hline **DAAE+NMT** & 23.1 & 31.2 & 60.9 & 66.8 & 13.2 \\ **DAASI** & 26.4 & 34.5 & 62.1 & 68.1 & 14.8 \\ \hline \end{tabular} \end{table} Table IV: Results on Hindi \(\rightarrow\) X \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **Model** & **GU \(\rightarrow\) HI** & **NE \(\rightarrow\) HI** & **PA \(\rightarrow\) HI** & **MAI \(\rightarrow\) HI** & **UR \(\rightarrow\) HI** \\ \hline **LSTM** & 21.1 & 26.9 & 57.6 & 62.3 & 13.1 \\ \hline **Transformer-NMT** & 20.8 & 28.5 & 57.1 & 64.3 & 13.4 \\ \hline **Adversarial-NMT** & 21.3 & 28.9 & 58.5 & 65.1 & 13.1 \\ \hline **DAAE+NMT** & 21.7 & 29.6 & 58.8 & 65.1 & 14.5 \\ \hline **DAASI** & 24.2 & 32.1 & 59.2 & 65.9 & 17.4 \\ \hline \end{tabular} \end{table} Table III: Results on X \(\rightarrow\) Hindi Table V lists the perplexity-based scores of demonstrated languages with each other. The values in Table V indicate how closely languages are related to one another. Languages with lower perplexity scores between them are more similar to each other. The degree of similarity between languages decreases as perplexity increases. This similarity between languages empirically justifies the relatedness between languages that show highly co-relation with the results obtained in Tables III and IV. We have observed that languages having better similarity score between them perform better. For example, NE+HI shows wide range of improvement compared to other language pairs. #### V-B4 Effect of morphological complexity between languages Our study primarily includes morphologically diverse languages. To correlate our findings with the morphological richness of languages, we have used corpus-based complexity scales as discussed below. Word-entropy of languagesThe average information content of words is represented by Entropy. This metric would be higher in languages with a greater variety of word forms, i.e. languages that acquire more information into word structure rather than phrase or sentence structure. Let \(C\) be a text drawn from a vocabulary \(V=\{v_{1},v_{2},\ldots,v_{k}\}\) of size \(k\). Furthermore, let word type probabilities are distributed according to \(p(v)=P_{r}(v\in C)\) for \(v\in Z\). The average information content of the word types is calculated by Shannon [38] method as follows: \[H(C)=-\sum_{j=1}^{k}p(v_{j})\log_{2}(p(v_{j})). \tag{20}\] Type-to-Token Ratio (TTR) of languagesTo calculate morphological complexity, we consider the ratio of word types over word tokens [39]. The spectrum of word forms is expanded by using productive morphological markers. As a result, higher TTR value implies higher morphological complexity. Given a text \(C\) drawn from a vocabulary of word types \(V=\{v_{1},v_{2},\ldots,v_{k}\}\), the measure is written as follows: \[TTR(C)=\frac{k}{\sum_{j=1}^{k}f(q_{j})}, \tag{21}\] where, \(f(q_{j})\) is the token frequency of the \(j^{th}\) type. Entropy and TTR with higher values indicate language having high lexical richness as shown in Table VI. We have observed that translation of language pairs from low to high morphological complexity gives better score than high to low. 
For example, GU\(\rightarrow\)HI gives better BLEU scores than HI\(\rightarrow\)GU language pairs. #### V-B5 Using multilingual model for zero-shot language pairs Tables VII and VIII list the result of experiments performed under unsupervised settings. For evaluating the model on unsupervised conditions, we demonstrate the experiments on BHO\(\leftrightarrow\)HI and MAG\(\leftrightarrow\)HI zero-shot language pairs. We use the multilingual pretrained model to evaluate the results on zero-shot language pairs. The reason for opting these language pairs is closely relatedness between all the multilingual language pairs and the zero-shot language pairs. Our models under unsupervised conditions also succeed in improving upto 5 BLEU points. The reason behind better performance is that the speakers of these closely related languages are crossing the borders for a long period of time, leading to the sharing of linguistic and phonetic features between the languages. ## V Conclusion In this paper, we proposed the _DAASI_ approach based on denoising adversarial auto-encoder that performed sentence interpolation by learning the intermediate latent representation of the source and target sentence of multilingual language pairs. Apart from denoising the adversarial autoencoder, we also modified the reward for multilingual NMT with WGAN. The experiments performed on Gujarati\(\leftrightarrow\)Hindi, Nepali\(\leftrightarrow\)Hindi, Punjabi\(\leftrightarrow\)Hindi, Maithili\(\leftrightarrow\)Hindi and Urdu\(\leftrightarrow\)Hindi translation tasks demonstrated the effectiveness of our method. In future, we will work on achieving new state-of-the-art performance for the NMT system by fully exploiting the knowledge representation of languages at different granularity levels. ## Acknowledgment The support and the resources provided by PARAM Shivay Facility under the National Supercomputing Mission, Government of India at the Indian Institute of Technology, Varanasi are gratefully acknowledged.
2309.12201
Electroencephalogram Sensor Data Compression Using An Asymmetrical Sparse Autoencoder With A Discrete Cosine Transform Layer
Electroencephalogram (EEG) data compression is necessary for wireless recording applications to reduce the amount of data that needs to be transmitted. In this paper, an asymmetrical sparse autoencoder with a discrete cosine transform (DCT) layer is proposed to compress EEG signals. The encoder module of the autoencoder has a combination of a fully connected linear layer and the DCT layer to reduce redundant data using hard-thresholding nonlinearity. Furthermore, the DCT layer includes trainable hard-thresholding parameters and scaling layers to give emphasis or de-emphasis on individual DCT coefficients. Finally, the one-by-one convolutional layer generates the latent space. The sparsity penalty-based cost function is employed to keep the feature map as sparse as possible in the latent space. The latent space data is transmitted to the receiver. The decoder module of the autoencoder is designed using the inverse DCT and two fully connected linear layers to improve the accuracy of data reconstruction. In comparison to other state-of-the-art methods, the proposed method significantly improves the average quality score in various data compression experiments.
Xin Zhu, Hongyi Pan, Shuaiang Rong, Ahmet Enis Cetin
2023-09-15T21:55:56Z
http://arxiv.org/abs/2309.12201v1
Electroencephalogram Sensor Data Compression Using an Asymmetrical Sparse Autoencoder With a Discrete Cosine Transform Layer ###### Abstract Electroencephalogram (EEG) data compression is necessary for wireless recording applications to reduce the amount of data that needs to be transmitted. In this paper, an asymmetrical sparse autoencoder with a discrete cosine transform (DCT) layer is proposed to compress EEG signals. The encoder module of the autoencoder has a combination of a fully connected linear layer and the DCT layer to reduce redundant data using hard-thresholding nonlinearity. Furthermore, the DCT layer includes trainable hard-thresholding parameters and scaling layers to give emphasis or de-emphasis on individual DCT coefficients. Finally, the one-by-one convolutional layer generates the latent space. The sparsity penalty-based cost function is employed to keep the feature map as sparse as possible in the latent space. The latent space data is transmitted to the receiver. The decoder module of the autoencoder is designed using the inverse DCT and two fully connected linear layers to improve the accuracy of data reconstruction. In comparison to other state-of-the-art methods, the proposed method significantly improves the average quality score in various data compression experiments. Xin Zhu\({}^{*}\) Hongyi Pan\({}^{\dagger}\) Shuaiang Rong\({}^{*}\) Ahmet Enis Cetin\({}^{*}\)+\({}^{*}\)Department of Electrical and Computer Engineering, University of Illinois Chicago, USA \({}^{\dagger}\)Machine & Hybrid Intelligence Lab, Northwestern University, USA EEG signal sensor data compression, asymmetrical sparse autoencoder, discrete cosine transform, transform domain layer Footnote †: This work was supported by NSF IDEAL 2217023. ## 1 Introduction Electroencephalogram (EEG) plays a crucial role in neurological diagnosis, including epileptic illness diagnosis, brain inflammation, and dementia [1]. In existing literature, advanced feature extraction and classification algorithms have been designed for EEG analysis. Most of these algorithms gain insights into disease diagnosis by leveraging the abundance of EEG data detected by the sensor, which requires compression algorithms with high efficiency to establish high-capacity storage, fast transmission, and real-time analysis [2]. The current methods for compressing EEG signals are primarily categorized into three groups: traditional signal transform methods, neural network-based methods, and transform-based learning techniques. In [3], the discrete wavelet transform is applied to compress EEG data. It employed an optimization strategy to calculate the optimal control parameters which minimize the distortion and keep power consumption under a determined threshold. The work in [4] developed a lossy compression model by utilizing the characteristics of epileptic EEG signals based on adaptive arithmetic encoding and discrete wavelet transform. However, these transform methods can not achieve a high reconstruction accuracy. Neural network-based methods provide an alternative data compression approach by training models to learn the underlying patterns in the EEG signals. In [5], a convolutional autoencoder (CAE) is proposed to compress EEG data by employing convolutional layers and max-pooling layers to reduce the redundant data. Additionally, to retain more important information during the compression process, a new convolutional autoencoder is designed using e dynamic time warping (DTW) approximation as a loss function [6]. 
Transform-based learning methods have attracted considerable attention in the field of EEG compression over the past few decades [2]. A near-lossless compression algorithm is developed based on discrete cosine transform (DCT) and multilayer perceptron (MLP) in [7]. The energy concentration properties of DCT are leveraged to effectively reduce redundant data, while the MLP is utilized to compress the main DCT coefficients. Additionally, this approach calculates the reconstruction error to improve the accuracy. However, it does not have a good generalization ability in the transfer learning experiment. To overcome the limitations of low reconstruction accuracy in transform-based methods and low compression efficiency in neural network-based methods, this paper proposes an asymmetrical sparse autoencoder with a DCT layer for EEG sensor data compression. We introduced the DCT and Hadamard transform domain layers into neural networks in a number of applications [8, 9, 10]. In this paper, the key idea is to perform elementwise multiplications in the transform domain as convolutions in the time domain and use the well-known soft and hard-thresholding units [11] as the key nonlinearity of the network instead of the RELU. In EEG data compression trainable thresholding units not only remove the noise in the data but also improve the data compression efficiency. A single fully connected or convolutional layer cannot be trained together with a soft-thresholding or hard-thresholding nonlinearity. However, by adding the fixed DCT after a fully connected layer we can not only train the thresholds but also train the fully connected layer which adapts the data and improves the data compaction capability of DCT. Furthermore, the DCT layer has trainable scaling parameters and a one-by-one convolutional layer. Scaling parameters approximately perform filtering in the transform domain and the one-by-one convolution layer reduces the dimension and produces the latent space where the EEG data is compressed. The decoder module reconstructs the EEG waveform from compressed data by using Inverse DCT (IDCT) and two fully connected linear layers because the EEG decoder systems have more computational power compared to the encoder module. The proposed model achieves the best compression efficiency and reconstruction accuracy compared with other autoencoder-based models. Experimental results show the proposed model outperforms other models on two EEG datasets: the BCI [12] and the Bonn University [13] datasets in terms of quality score. Additionally, the proposed model has low computation cost at the encoder side, which is suitable for implementation at the edge in sensors. ## 2 Preliminaries Our autoencoder structure has a DCT layer as shown in Fig. 1. The DCT [14] is a real-valued transform, which utilizes cosine functions as its basis functions. In this work, type-III DCT is employed because of its convolution property. Given an input vector \(\mathbf{x}=[x_{0},x_{1},\ldots,x_{N-1}]\), its orthogonal type-III DCT \(\mathbf{X}=[X_{0},X_{1},\ldots,X_{N-1}]\) is defined as: \[X_{k}=\sqrt{\frac{1}{N}}x_{0}+\sqrt{\frac{2}{N}}\sum_{n=1}^{N-1}x_{n}\text{cos} \left[\frac{\pi}{N}\left(k+\frac{1}{2}\right)n\right], \tag{1}\] Because of its good energy concentration properties, it has been widely used in many prevalent compression algorithms, including JPEG and MPEG algorithms [15, 16], where the DCT coefficients are weighted according to their importance before quantization. 
The weights are experimentally determined in these old standards. In this article, we will determine them using the backpropagation algorithm and call them scaling parameters. In fact, applying weights to DCT coefficients is somewhat similar to performing convolution via the Fourier transform, because elementwise multiplication in the Fourier domain corresponds to circular convolution in the time domain [17]. The DCT convolution theorem similarly states that elementwise multiplication in the DCT domain corresponds to symmetric convolution in the time domain [18]. Formally, it can be written as: \[\mathbf{x}\odot_{s}\mathbf{w}=\mathscr{D}^{-1}(\mathscr{D}(\mathbf{x}\circ \mathbf{f})\circ\mathscr{D}(\mathbf{w}\circ\mathbf{f}))\circ\mathbf{g}, \tag{2}\] where \(\mathbf{x}\in\mathbb{R}^{N}\) and \(\mathbf{w}\in\mathbb{R}^{N}\) are input vectors. \(\circ\) represents the element-wise multiplication. \(\odot_{s}\) denotes the symmetric convolution [19]. \(\mathscr{D}(\cdot)\) and \(\mathscr{D}^{-1}(\cdot)\) stand for the orthogonal type-III DCT and IDCT, respectively. \(\mathbf{g}\) is a constant vector: \[\mathbf{g}[n]=\begin{cases}1/(2\sqrt{N}),&n=0,\\ \sqrt{1/(2N)},&n>0,\end{cases} \tag{3}\] where \(0\leq n\leq N-1\) and \(\mathbf{f}[n]=1/\mathbf{g}[n]\). We experimentally observed that we do not need to perform symmetric convolutions to take advantage of the elementwise multiplications in the transform domain. We were able to train both the DCT domain weights and the thresholding parameters using the back-propagation algorithm by inserting a fully connected layer before the DCT. We also observed that hard-thresholding performs better than soft-thresholding in EEG data compression.
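As a concrete illustration of the transform-domain operations above, the short sketch below applies an orthogonal type-III DCT as in Eq. (1), multiplies the coefficients elementwise by a weight vector, and inverts the result; the random signal and weights are placeholder assumptions, and the pre/post scaling by f and g needed for the exact symmetric-convolution identity of Eq. (2) is deliberately omitted, in line with the observation that the symmetric convolution itself is not required.

```python
import numpy as np
from scipy.fft import dct, idct

N = 64
x = np.random.randn(N)   # placeholder EEG window of length N
w = np.random.randn(N)   # placeholder transform-domain weights ("scaling" vector)

# Orthogonal type-III DCT as in Eq. (1) and its inverse.
X = dct(x, type=3, norm='ortho')
x_rec = idct(X, type=3, norm='ortho')
assert np.allclose(x, x_rec)

# Elementwise weighting of DCT coefficients, i.e. approximate filtering in the transform domain.
Y = X * w
y = idct(Y, type=3, norm='ortho')
print(y[:5])
```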
The hard-thresholding operator is defined as: \[\widetilde{X}_{i,k}=\mathcal{S}_{\widetilde{\mathcal{I}_{i}}}\left(X_{k}\right) +T_{i,k}\cdot\text{sign}\left(\mathcal{S}_{\widetilde{\mathcal{I}_{i}}}\left( X_{k}\right)\right), \tag{4}\] where \[\mathcal{S}_{\widetilde{\mathcal{I}_{i}}}\left(X_{k}\right)=\text{sign}\left( X_{k}\right)\cdot\left(\left|X_{k}\right|-T_{i,k}\right)_{+} \tag{5}\] is the soft-thresholding function [11] and \(T_{i,k}\) is a trainable threshold parameter for \(0\leq i\leq C\), \(0\leq k\leq N\); the subscript \(C\) stands for the number of channels; \((\cdot)_{+}\) is the rectified linear unit (ReLU) function. Although the soft-thresholding function can also perform transform domain denoising, it may also reduce the energy of large DCT coefficients. In EEG compression we found out that hard-thresholding produces better results. Unlike the hand-crafted quantization matrix used in the JPEG and MPEG-type standards [15], our network has trainable scaling vectors to assign suitable weights for different DCT coefficients. The DCT coefficients \(\widetilde{X}_{i,k}\) are element-wise multiplied by the scaling parameters: \[\widehat{X}_{i,k}=\widetilde{X}_{i,k}\cdot V_{i,k}, \tag{6}\] for \(0\leq k\leq N-1\). Another interpretation of the scaling layer is related to the DCT convolution theorem, i.e., \(C\) scaling channels correspond to \(C\) distinct convolution kernels in the time domain, and their combination forms a filter bank to enhance the ability to extract features. The one-by-one convolutional layer [21] is used to reduce the number of channels and forms the latent space where the EEG data is compressed. After the scaling layer, tanh is employed to introduce a nonlinearity between the scaling layer and the one-by-one convolution layer. The overall operation in the DCT layer can be summarized in the Algorithm 1. We observed that \(C=3\) produces the best results in EEG data compression (to be discussed in Section 4). Another observation that we had is that we obtained better coding results when we changed the order of scaling the hard-thresholding nonlinearity in our network as shown in Fig.1. _Decoder Part of the ASAEDCT_: Encoder output data is in the DCT domain and it is quantized as discussed in Subsection 3.1. Once the decoder receives the quantized data \(\mathbf{\widehat{X}}\) in the DCT domain, the decoder first computes its IDCT \(\mathbf{\hat{x}}\). After this stage, the output of the network is obtained using two fully connected layers as shown in Fig. 1. Compared with the encoder, more linear fully connected layers are included in the decoder to enhance the reconstruction ability. #### 3.0.1 Sparsity Penalty Based Cost Function In this subsection, we describe the cost function used in training the proposed ASAEDCT system. Sparsity constraints [22] are introduced to keep the feature map sparse, which enhances the compression efficiency of the DCT layer. Suppose the output of the one-by-one convolutional layer is \(\mathbf{y}\in\mathbb{R}^{N}\), the activity of \(y_{j}\) is defined as: \[\hat{y}_{j}=\sigma(\mathbf{y})_{j}=\frac{e^{|y_{j}|}}{\sum_{k=0}^{N-1}e^{|y_{k }|}},i=0,1,\cdots,N-1, \tag{7}\] where \(\sigma(\cdot)\) stands for the softmax function. Next, the Kullback-Leibler divergence (KLD) [22] is utilized as the sparsity penalty term because it can measure the difference between two probability distributions. 
The KLD is defined as \[\sum_{j=0}^{N-1}\mathrm{KL}(\alpha||\hat{y}_{j})=\sum_{j=0}^{N-1}\alpha\log \frac{\alpha}{\hat{y}_{j}}+(1-\alpha)\log\frac{1-\alpha}{1-\hat{y}_{j}}, \tag{8}\] where \(\alpha\) is a sparsity parameter. Therefore, the overall loss function \(\mathcal{L}\) is composed of the linear combination of the mean squared error and the KLD: \[\mathcal{L}=\frac{1}{N}\sum_{i=0}^{N-1}\left(x_{i}-z_{i}\right)^{2}+\lambda \sum_{j=0}^{N-1}\mathrm{KL}\left(\alpha||\sigma(\mathbf{y})_{j}\right), \tag{9}\] where \(\lambda\) is the weight of the sparsity penalty term. As shown in Fig. 1, \(x_{i}\) and \(z_{i}\) represent the input and reconstructed signals, respectively. In the training phase, a threshold \(\xi\) is employed to adjust the compression ratio. When the proportion of 0's in \(\mathbf{y}\) is lower than \(\xi\), the training is terminated. In this manner, small entries (redundant information) are eliminated by being set to 0. The setting of \(\xi\) is presented in Section 4. ### Data Encoding and Storage As is shown in Fig. 1, the data transmission module consists of data conversion and hybrid coding. The hybrid coding algorithm is developed using a combination of Run-Length Encoding (RLE) [23] and the Lempel-Ziv-Markov chain algorithm (LZMA) [24]. Each double-precision floating-point value requires 64 bits of storage in memory, whereas an integer only needs 32 bits [25]. Therefore, floating-point numbers are converted to integers. The data conversion is formulated as: \[\mathbf{\widehat{y}}=\text{Round}(10^{\theta}\times\mathbf{y}/\phi), \tag{10}\] where \(\text{Round}(\cdot)\) is the integer rounding function; \(\theta\) and \(\phi\) are both integers. They are used to adjust the compression ratio. This data is encoded into a bitstream for wireless transmission or storage in an ambulatory device using the RLE and LZMA. The RLE is first employed to eliminate consecutive repeated zeros in the latent space. Then, LZMA is applied to the output of the RLE. At the decoder, this bitstream is converted back into integers and processed by the decoder part of ASAEDCT for signal reconstruction in the host computer. Figure 1: Block diagram of the Asymmetrical sparse autoencoder with a DCT layer. ## 4 Experimental Results In this work, the BCI2 dataset [12] and the Bonn University dataset [13] are used to evaluate the performance of the proposed EEG compression scheme. (1) The BCI2 dataset [12] is collected from a normal subject using 28 EEG channels. The sampling frequency is 100 Hz. It contains 316 training sets and 100 testing sets. Each channel includes 50 samples. (2) The Bonn University dataset [13] includes five records, _i.e._, F, N, O, Z, S. To evaluate the generalization ability of the model, we use record S, which is collected from an epileptic subject. All data sets are partitioned into blocks with a size of \(N=64\). The AdamW optimizer [26] is utilized in the training process. We choose \(\theta\), \(\phi\) and \(\xi\) as 4, 5 and 0.6 because this achieves a balance between compression efficiency and reconstruction accuracy. Additionally, the batch size is 16, and the learning rate is 0.001. \(\lambda\) is 10. Moreover, we use the compression ratio (CR) [27] and percent root-mean-square difference (PRD) [28] as the performance metrics. The compression efficiency increases as the CR increases, and the difference between the reconstructed signal and the original signal decreases as the PRD decreases.
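As a reference for how these metrics (and the quality score QS reported in the result tables) can be computed, the snippet below uses the standard definitions: CR as the ratio of original to compressed bit counts, PRD as the percent root-mean-square difference between the original and reconstructed windows, and QS as CR divided by PRD. The exact bit-accounting conventions of the paper are not spelled out in this excerpt, so treat this as an illustrative sketch rather than the authors' evaluation code.

```python
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    """CR: size of the raw signal in bits over the size of the encoded bitstream."""
    return original_bits / compressed_bits

def prd(x, x_rec):
    """Percent root-mean-square difference between the original x and reconstruction x_rec."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def quality_score(cr, prd_value):
    """QS: compression ratio divided by PRD; higher is better."""
    return cr / prd_value

# Illustrative usage with a placeholder window and a slightly distorted reconstruction.
x = np.random.randn(64)
x_rec = x + 0.05 * np.random.randn(64)
cr = compression_ratio(64 * 64, 640)   # hypothetical bit counts
print(cr, prd(x, x_rec), quality_score(cr, prd(x, x_rec)))
```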
To validate the compression performance of the proposed model, we compare our model with the ANN+DCT [7] which achieved the best result on the BCI2 and Bonn datasets so far. Additionally, we consider the transform-based method [28], the neural networks-based method [22] and the transform-based learning method [29] for further comparison as they both have a low complexity encoder. Table 1 provides the detailed data compression performance of the proposed methods versus the other four state-of-the-art algorithms on the BCI2 datasets. In comparison to sparse AE, the proposed approach increases CR from 21.38 to 25.66 and decreases PRD from 8.51 to 5.09. Additionally, our model is superior to DCT-DOST because it has a higher CR and a lower PRD. It indicates that ASAEDCT which combines the orthogonal DCT transform and a neural network can achieve a better compression performance. Compared with other transform-based learning compression methods [7, 29], ASAEDCT still provides a better quality score. This is because the DCT layer improves compression efficiency and asymmetrical structure enhances the reconstruction ability in the decoder. Moreover, to validate the robustness of the model, the model trained using the BCI2 dataset is also tested on the Bonn University dataset. It is observed that the AAESDCT with \(C=3\) channels outperforms other methods as it achieves the best QS. Therefore, the model has a good generalization capability in data compression tasks. For visual evaluation of the compression performance of the proposed methods, the original signals and reconstruction signals are depicted in Fig 2. It is observed that the original signals and reconstruction signals are similar to each other as the differences between them are limited to a small range. Table 2 presents the ablation study of each module in ASAEDCT. When the scaling layer, sparsity penalty term, or nonlinearity are removed, the CR decreases and PRD increases, leading to a lower QS. Additionally, the three-channel \((C=3)\) model is superior to one-channel, two-channel, and four-channel models. When we use only the DCT to compress the EEG data, the QS decreases. When hard thresholds are replaced with soft thresholds, the performance degrades. Lastly, an analysis of the computational cost related to the proposed method is conducted. For an input with a duration of 6.4s, the compression time of the ASAEDCT network is only around 0.016s. Although this experiment is executed on the Intel Core i7-12700H CPU, the fast compression time indicates that the proposed method can be used in real-time compression of the EEG signals. ## 5 Conclusion This study presents an asymmetrical autoencoder with a DCT layer that contains hard-thresholding nonlinearity for EEG sensor data compression. Since the encoder module uses the combination of one fully connected linear layer and the DCT layer the threshold values and DCT domain weights can be trained using a backpropagation type algorithm. The hard-thresholding nonlinearity and scaling layers not only enhance the data compaction capability of the DCT layer but also denoise the EEG signal. The decoder module based on the IDCT and two linear layers reconstructs the original signal. The proposed model achieves the best compression efficiency and reconstruction accuracy compared to other methods in BCI2 and Bonn EEG datasets. The computational load of the compression part of the ASAEDCT network is low, therefore, it can be implemented in the sensor for EEG data compression. 
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Test Data** & **Algorithm** & **CR** & **PRD (\%)** & **QS** \\ \hline \multirow{6}{*}{BCI2} & AAN+DCT [7] & 21.57 & 6.38 & 3.38 \\ & Sparse AE [22] & 21.38 & 8.51 & 2.51 \\ & DCT-DOST [28] & 21.67 & 10.40 & 2.08 \\ & AE-DCT [29] & 21.95 & 9.18 & 2.39 \\ & **ASAEDCT** & **25.66** & **5.09** & **5.05** \\ \hline \multirow{6}{*}{Bonn} & AAN+DCT [7] & 4.45 & 7.22 & 0.62 \\ & Sparse AE [22] & 6.76 & 6.18 & 1.09 \\ \cline{1-1} & DCT-DOST [28] & 7.56 & 11.09 & 0.68 \\ \cline{1-1} & AE-DCT [29] & 6.84 & 6.65 & 1.03 \\ \cline{1-1} & **ASAEDCT** & **8.40** & **5.89** & **1.42** \\ \hline \end{tabular} \end{table} Table 1: Compression experimental results. All methods are trained on the BCI2 dataset. \begin{table} \begin{tabular}{|l|c c c|} \hline **Algorithm** & **CR** & **PRD (\%)** & **QS** \\ \hline No nonlinearity & 25.12 & 5.17 & 4.86 \\ No scaling & 25.46 & 11.79 & 2.16 \\ No penalty & 24.51 & 5.26 & 4.66 \\ Only DCT & 25.44 & 7.05 & 3.61 \\ With soft-threshold & 25.34 & 11.62 & 2.18 \\ One-channel (C=1) & 25.07 & 6.00 & 4.18 \\ Two-channel (C=2) & 24.90 & 5.39 & 4.62 \\ **ASAEDCT(C=3)** & **25.66** & **5.09** & **5.05** \\ Four-channel (C=4) & 25.01 & 5.31 & 4.71 \\ \hline \end{tabular} \end{table} Table 2: Ablation experimental result. Figure 2: Comparison of the time domain between the original data and the reconstructed data on the Bonn dataset.
2309.14670
DONNAv2 -- Lightweight Neural Architecture Search for Vision tasks
With the growing demand for vision applications and deployment across edge devices, the development of hardware-friendly architectures that maintain performance during device deployment becomes crucial. Neural architecture search (NAS) techniques explore various approaches to discover efficient architectures for diverse learning tasks in a computationally efficient manner. In this paper, we present the next-generation neural architecture design for computationally efficient neural architecture distillation - DONNAv2 . Conventional NAS algorithms rely on a computationally extensive stage where an accuracy predictor is learned to estimate model performance within search space. This building of accuracy predictors helps them predict the performance of models that are not being finetuned. Here, we have developed an elegant approach to eliminate building the accuracy predictor and extend DONNA to a computationally efficient setting. The loss metric of individual blocks forming the network serves as the surrogate performance measure for the sampled models in the NAS search stage. To validate the performance of DONNAv2 we have performed extensive experiments involving a range of diverse vision tasks including classification, object detection, image denoising, super-resolution, and panoptic perception network (YOLOP). The hardware-in-the-loop experiments were carried out using the Samsung Galaxy S10 mobile platform. Notably, DONNAv2 reduces the computational cost of DONNA by 10x for the larger datasets. Furthermore, to improve the quality of NAS search space, DONNAv2 leverages a block knowledge distillation filter to remove blocks with high inference costs.
Sweta Priyadarshi, Tianyu Jiang, Hsin-Pai Cheng, Sendil Krishna, Viswanath Ganapathy, Chirag Patel
2023-09-26T04:48:50Z
http://arxiv.org/abs/2309.14670v1
# DONNAv2 - Lightweight Neural Architecture Search for Vision tasks ###### Abstract With the growing demand for vision applications and deployment across edge devices, the development of hardware-friendly architectures that maintain performance during device deployment becomes crucial. Neural architecture search (NAS) techniques explore various approaches to discover efficient architectures for diverse learning tasks in a computationally efficient manner. In this paper, we present the next-generation neural architecture design for computationally efficient neural architecture distillation - DONNAv2. Conventional NAS algorithms rely on a computationally extensive stage where an accuracy predictor is learned to estimate model performance within search space. This building of accuracy predictors helps them predict the performance of models that are not being finetuned. Here, we have developed an elegant approach to eliminate building the accuracy predictor and extend DONNA to a computationally efficient setting. The loss metric of individual blocks forming the network serves as the surrogate performance measure for the sampled models in the NAS search stage. To validate the performance of DONNAv2 we have performed extensive experiments involving a range of diverse vision tasks including classification, object detection, image denoising, super-resolution, and panoptic perception network (YOLOP). The hardware-in-the-loop experiments were carried out using the Samsung Galaxy S10 mobile platform. Notably, DONNAv2 reduces the computational cost of DONNA by 10x for the larger datasets. Furthermore, to improve the quality of NAS search space, DONNAv2 leverages a block knowledge distillation filter to remove blocks with high inference costs. ## 1 Introduction Computer vision algorithms are being widely deployed on edge devices for several real-world applications including medicine, XR-VR technology, visual perception, and autonomous driving. However, computer vision algorithms based on deep learning require significant computational resources. Therefore, efficient search for deep learning architecture has attracted a lot of attention. Most of these NAS efforts are agnostic to the requirements of resource-constrained edge devices. Further, current NAS methods that operate over large search spaces are computationally very expensive to generate the optimized models. NAS based on block knowledge distillation (BKD) [2, 20, 10] scales well over large search spaces in a computationally efficient manner. In this work, we leverage BKD for hardware-aware NAS. The core process of our NAS approach based on BKD consists of building replacement blocks, building accuracy predictors, predicting the accuracy of models, and based on their cost(flops, parameters, latency on hardware), the models are picked based on the trade-off between predicted accuracy and cost. In multiple studies, it appears that the stage of building the accuracy predictor is the most expensive bottleneck of the pipeline. Researchers have worked on making the accuracy predictor stage efficient by utilizing regression or ranking methods. Nevertheless, the majority of the computation time for the NAS pipeline is still taken up by the accuracy predictor stage. Our work DONNAv2 aims at reducing the search space by identifying the redundant blocks and eliminating them from the search space. We have defined this method as the Blockwise Knowledge Distillation filtering stage. 
Furthermore, we aimed at removing the accuracy predictor stage, which was by far the most computationally expensive component of the DONNA pipeline. Our work, DONNAv2, brings a more sophisticated method of approximating network losses from blockwise losses. Many NAS studies have focused on optimizing models for hardware-agnostic complexity metrics like flops (MACs). However, some analyses [9] indicate that flops do not always translate linearly to the latency or the power of the model. To find the best architecture for a given use case and a given hardware platform, it is important to specifically optimize models to minimize latency and energy consumption for on-device performance. Many NAS methods rely on a lookup table that reports per-layer latency, which is summed to approximate the full-model latency. Here, the assumption is that the linear sum of per-layer latencies equals the model latency, which does not always hold true. We keep the hardware in the loop to optimize models for a given hardware platform. But unlike many expensive methods, DONNAv2 tends to provide optimal neural networks at a lower complexity for a similarly diverse search space. In our work, we have compared the time complexity saved by pivoting to the approximation method using the mean square error (MSE) loss rather than training an accuracy predictor model. We describe the DONNAv2 pipeline, which comprises Block Knowledge Distillation (BKD), BKD filtering, evolutionary search, and finetuning for a Galaxy S10 mobile platform. Finally, we extend our paper to cover five vision tasks to show how DONNAv2 leads to an optimal compressed model without losing accuracy. The vision tasks highlighting the benefits of DONNAv2 include, but are not limited to, image classification, object detection, super-resolution, image denoising, and a multitask network. ## 2 Related Work The historical progression of NAS traces its evolution from initially computationally expensive methods involving diverse search spaces [24, 31, 32] to low-computation methods with very small search spaces [3, 23]. DONNA [20] explored approaches to reduce the computational burden using a block-based search space. A recent study [5] has also validated the efficacy of the NAS approach developed in DONNA. Here, with DONNAv2, we aim to further reduce the computation time of the search while keeping the search space similar to DONNA. Mobile neural architecture search (MNAS) [24] is an expensive method that requires around 40,000 epochs to perform a single search. Other attempts at NAS include differentiable architecture search methods such as DARTS [17], FBNet [28], FBNetV2 [27], ProxylessNAS [4], AtomNAS [18] and Single-Path NAS [23] that simultaneously optimize the weights of a large super-net and its architectural parameters. However, in these cases the granularity of the search level suffers and the methods need to be repeated in every scenario or when the search space changes. There have been studies [1, 11, 19, 10] that construct proxies for ranking models in the search space. These include attempts based on a zero-shot proxy [1] and one-shot proxy NAS [11]. A similar approach, LANA [19], also leverages the loss function as a proxy to rank models. A recent work [10] explores a hardware-aware search by translating the multi-objective optimization problem into a combinatorial optimization problem. However, this approach assumes chain-structured NAS and is not readily applicable to more general architectures.
Our DONNAv2 builds on the idea of using the loss function as the proxy with the following enhancements: * enables hardware-aware search in a diverse search space with the hardware in the loop for latency measurements. Earlier studies leveraged a linear sum of the pre-computed feature layer latencies to estimate the latency of a deep learning model. However, this does not capture the true latency when compilers leveraged a depth-first search. * DONNAv2 is scalable when the search is expanded or the hardware platform changes. * DONNAv2 converges 9x faster during the finetuning stage while achieving a similar accuracy compared to training-from-scratch ([14]). ## 3 DONNAv2 - Lightweight NAS DONNAv2 follows the steps in DONNA, while eliminating the accuracy predictor stage and introducing a block-wise knowledge distillation (BKD) filtering stage. In DONNAv2, we start by defining a search space and then building a BKD library which gets further filtered out by a BKD filter. These filtered blocks from the BKD library are utilized by the evolutionary search phase to find the Pareto-optimal network architectures for any specific scenario using the loss metric of individual blocks. Finally, the predicted Pareto-optimal architectures are fine-tuned to full accuracy for deployment. ### Search Space Search Space in DONNAv2 follows a block-level architecture and only parameters within the blocks are varied. A collection of blocks to build candidate networks are generated based on user-defined blocks and the associated parameters. To determine a suitable search space, we include diverse macro-architectural network parameters such as layer types, attention mechanisms, and channel widths. Furthermore, micro-architectural parameters such as cell repeats within a block, kernel sizes of layers within cells, and in-cell expansion rates were also utilized. In our experiments, the cardinality of the search space was of the order of 1e14. The larger the search space, the higher the chances of identifying hardware-friendly and performance-achieving networks. ### Blockwise Knowledge Distillation Blockwise Knowledge Distillation (BKD) is the first building block of the lightweight NAS, DONNAv2. Unlike DONNA, DONNAv2 uses BKD not for building an accuracy predictor, but as an input to produce a surrogate metric to generate the Pareto optimal curve. The BKD stage generates a Block Library with pre-trained weights and loss metrics for each of the option blocks \(B_{n,m}\) that is used as the replacement. To build the BKD library, each block option \(B_{n,m}\) is replaced in the blocks of the mothernet and trained as a student model using the mothernet block \(B_{n}\) as a teacher. The MSE loss between the teacher's output feature map \(Y_{n}\) and the student's output feature map \(\bar{Y}_{n,m}\) is used as the surrogate metric in the evolutionary search stage and the BKD filtering stage. One epoch of complete dataset training is employed at this stage for building the BKD library and is denoted as 1e. The pre-trained weights at this stage help in faster convergence while finetuning the model. ### Blockwise Knowledge Distillation Filtering Blockwise Knowledge Distillation Filtering method aims to identify and drop the inefficient blocks based on the optimization strategy. Here, the optimization strategy is defined as the cost of the optimized model in terms of flops, latency, power consumption, etc. 
The BKD filtering stage retains only blocks with a minimum cost ratio with respect to the associated blocks in the reference model. Blocks with minimum cost ratio will retain performance-achieving efficient candidate models during the evolutionary search. The cost ratio of a block is estimated for a given loss metric. Retaining only blocks with the best cost ratio reduces the number of blocks and thereby the cardinality of the search space. However, it is important to note that block filtering does not eliminate good models in the sample space. We have validated this with experiments across several learning tasks. In Figure 3, the legend id tells the blocks of a particular layer we are filtering and the blue dots represent the blocks that are retained and grey dots represent the blocks that would be dropped. In Algorithm 1, we have described the steps in detail. ``` 0: BKD library, threshold = D. \(B_{(n,m)}\) is the \(m^{th}\) potential replacement out of M choices for block \(B_{n}\) in the mothernet model. for i do = 1 to m do \(L_{(n,m)}\) = Calculate the block \(B_{(n,m)}\) inference cost of model \(MSE_{(n,m)}\) = Calculate the block \(B_{(n,m)}\) MSE loss of model w.r.t mothernet \(C_{(n,m)}\) = Calculate the ratio of \(L_{(n,m)}\) w.r.t mothernet block inference cost Plot cost ratio vs MSE on a plot Discard the blocks at each MSE loss with higher inference cost based on the threshold D. OUTPUT: Obtain new BKD filtered library endfor ``` **Algorithm 1** BKD filtering ### Evolutionary Search Evolutionary search utilizes the MSE loss metric as a surrogate measure. Here, in contrast to DONNA, we lack Figure 1: DONNAv2 Pipeline - includes Stage A which is composed of search space definition and Block Knowledge Distillation(BKD), and Stage B which includes BKD Filtering, Evolutionary search(hardware in the loop), and Finetuning. Figure 2: BKD Filtering - The x-axis is the surrogate loss metric and the y-axis is the cost ratio(flops/latency on device) between the replacement blocks and the mothernet the predicted accuracy of the candidate models from the Pareto front. The performance of the model is approximated as the sum of the MSE of the blocks constituting the model. We built the Pareto optimal curve by using this surrogate measure of the model. Given the MSE loss metric of blocks from the block library and latency of the models formed by using the block options, the NSGA-II evolutionary algorithm is leveraged to find Pareto-optimal architectures that minimize the model loss and cost. The cost utilized could be scenario-agnostic measures such as the number of operations (MAC) or the number of parameters in the network (params). The scenario-aware cost includes on-device latency, cycles of operation, and energy. In our experiments, we have utilized on-device latency as a cost function by using direct hardware measurements in the optimization loop. After obtaining the Pareto-optimal models, we selected the model with appropriate latency and finetuned the candidate model to obtain the final model. ### Finetuning Empirically it has been observed that the final architectures from the Pareto front curve converge faster than training from scratch when pre-trained with weights obtained from the BKD stage. It has been shown that EfficientNet-style models can converge in around 50 epochs as opposed to 450 epochs when trained from scratch. ## 4 Experiments & Results In this section we will discuss the detailed experimental evaluation of DONNAv2 : across a set of diverse computer vision tasks. 
The performance of DONNAv2 for image classification, object detection, and super-resolution tasks was quantified with the Samsung mobile hardware platform in the loop. The performance of DONNAv2 for image denoising and the multitask network was validated using the number of operations (MACs). Importantly, all experiments demonstrated significant model compression with minimal performance degradation on an edge device. The performance of DONNAv2 is captured in terms of accuracy, on-device latency/MACs, and the number of epochs (defined as the sum of the number of epochs for training the accuracy predictor, the number of epochs used for finetuning, and building the block library (1e)). It is important to note that very few NAS methodologies have worked across diverse vision tasks. Most research focuses on NAS methods for individual vision tasks, whereas we focus on key components that remain the same across a wide range of vision tasks and deliver a single method that can be applied to all of them. Details of each of the experiments are described below: ### Search Algorithm In this section, we summarize the overall DONNAv2 setup in algorithm format, as shown in Algorithm 2. It provides step-by-step setup details to perform the BKD-based search. ``` Begin with a baseline or mothernet network. Split the baseline into stem, head and N blocks. \(B_{(n,m)}\) is the \(m^{th}\) potential replacement out of M choices for block \(B_{n}\) in the baseline model. for i = 1 to m do BKD: Replace block \(B_{n}\) with \(B_{(n,i)}\) and train the new architecture for 1 epoch of the complete dataset. Complete the step for all blocks and all options and construct a BKD library with the MSE loss of replacement blocks w.r.t the mothernet. endfor BKD Filtering: Perform BKD filtering to remove redundant blocks as explained in Section 3.2 Evolutionary Search: Input: population size = E, number of search steps = T, BKD library for t = 1 to T do Randomly sample E networks Ft from networks composed of \(B_{(n,m)}\) blocks Compute inference cost & MSE of the sampled models Ft Retain models with the lowest MSE loss in each iteration at different computation costs. endfor OUTPUT: Pareto optimal curve of models at different latencies Pick model X and finetune. ``` **Algorithm 2** DONNAv2 search ### Image Classification We present experiments for the DONNA search space for ImageNet [8] classification that was earlier discussed in DONNA [20]. The mothernet chosen here was an EfficientNet-B0 [25] style architecture with 6 blocks instead of 7. We searched over 5 blocks of the mothernet numbered 1 to 6 using the DONNA search space. The DONNA search space had a choice out of M=192 options: kernel size \(k\in\{3,5\}\); expansion ratio \(expand\in\{2,3,4\}\); \(depth\in\{1,2,3,4\}\); layer type \(\in\) {grouped, depthwise inverted residual bottleneck}; and channel scaling \(\in\{0.5,1.0\}\). The search space can be expanded or arbitrarily constrained to known efficient architectures for a device. Each of these \(5*192=960\) alternative blocks is trained using BKD to complete the Block Library. At this point, we perform BKD filtering to obtain 768 blocks, thus removing the remaining 192 redundant blocks. After preparing the filtered BKD library, we perform the NSGA-II [7] algorithm-based search with a population size of 100 and 50 steps to obtain the Pareto optimal curve. We first show that networks found by DONNAv2 in the DONNA search space [20] outperform the network found by DONNA at similar latency1. DONNAv2 achieves similar accuracy at 10X less computational time.
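To make the search procedure in Algorithm 2 more concrete, the sketch below shows one way the surrogate score and cost of a sampled candidate could be computed from the BKD library: the candidate's loss is approximated as the sum of the per-block MSE values recorded during BKD, and its cost is a latency estimate. The dictionary-based library layout, the per-block latency summation, and the random sampling routine are illustrative assumptions rather than the released implementation (NSGA-II would evolve candidates instead of sampling them independently, and latency is measured on-device in our experiments).

```python
import random

# Hypothetical BKD library: bkd_library[block_index][option_index] = (mse_loss, latency_ms),
# populated during the BKD stage and pruned by BKD filtering.
def surrogate_score(candidate, bkd_library):
    """Approximate a candidate's quality as the sum of its blocks' BKD MSE losses."""
    return sum(bkd_library[n][m][0] for n, m in enumerate(candidate))

def candidate_cost(candidate, bkd_library):
    """Scenario-aware cost; a simple sum of per-block latencies stands in here for
    an on-device measurement with the hardware in the loop."""
    return sum(bkd_library[n][m][1] for n, m in enumerate(candidate))

def sample_candidates(bkd_library, population_size):
    """Randomly sample candidate networks, one option index per block."""
    return [[random.randrange(len(options)) for options in bkd_library]
            for _ in range(population_size)]

# Illustrative usage with a toy 3-block library.
bkd_library = [[(0.10, 1.2), (0.18, 0.8)], [(0.05, 2.0), (0.09, 1.1)], [(0.20, 0.5)]]
for cand in sample_candidates(bkd_library, 4):
    print(cand, surrogate_score(cand, bkd_library), candidate_cost(cand, bkd_library))
```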
The table3 shows that the number of epochs for DONNAv2 is significantly lower than DONNA since there is no computation expended for training accuracy predictor. DONNAv2 reduces inference latency as well as model search cost. The model search cost reduction is significant for DONNAv2 since 2500 epochs on ImageNet would cost several GPU hours. Further, in Figure 5, we can see that DONNAv2 can identify efficient architectures across a similar latency range as DONNA. Table1 captures the comparison of our methodology against the popular NAS methods and it can be observed that our methodology DONNAv2 has lowered the computation cost drastically compared to other methods making it more usable by the research community to find more hardware friendly efficient models. The latency numbers reported in the table 3 are conducted on Samsung Galaxy S10 mobile platform. Figure 4, describes the efficacy of the block filtering step and compares models in the Pareto front for DONNA and DONNA v2. The left y-axis is the accuracy predictor stage and the right y-axis is the loss surrogate metric. The figure shows that DONNA v2 search, similar to DONNA, identifies wide range DNN models across the satisfying varying accuracy latency trade-offs. The diversity of models as shown in Figure 4 is similar for DONNAv2 and DONNA. Footnote 1: Latency numbers could vary by changing the SNPE SDK version. Here we compute the latency of baseline models with a given SDK version and perform NAS with this particular version to observe the compression in latency. #### 4.2.1 Performance analysis for classification task Here, we attempt to leverage centralized kernel alignment (CKA), [13], to visualize the DONNAv2 optimized models. Further, we relate interaction between layers of DONNAv2 optimized models using CKA and the surrogate loss. The feature map similarities of CNNs have a block structure. Layers in the same block group (i.e. at the same feature map scale) are more similar than layers in different block groups. DONNAv2 surrogate loss leverages the loss metric of individual blocks forming the network as the performance predictor for the sampled models. CKA analysis of the models from the Pareto optimal curve for the image classification task is shown in figure (4). In Figure 4, we can observe that the heatmap of layers of the mothernet shows a checkerboard pattern displaying the local block level similarity. The similarity measure for the mothernet is confined to the local blocks. However, as we start pruning the layers using DONNAv2, we can observe that for the model (d), a large big yellow box demonstrates similarity in representation across several layers. The fine-tuned accuracy of the model (d) also indicates the performance is saturated. Further, the fine-tuned accuracy of the DONNAv2 optimized models shown in the table 2 correlates with DONNAv2 surrogate loss. The CKA similarity shown in figure(4) also correlates with DONNAv2 surrogate loss. ### Object Detection Object detection is one of the dense vision tasks, on which extensive neural architecture search is performed. Here, we have identified NAS optimized model EfficientDet-D0 [26] as the baseline model to further optimize this model in terms of latency and accuracy. The search space identified here has been inspired by the image classification task, as we profiled the object detection model and identified that the majority of the latency of the model resides in the backbone contributing almost 60% of the end-to-end model. 
The backbone of the EfficientDet-D0 model is EfficientNet-B0. Hence, our search space for the object detection task includes kernel sizes \(k\in\{3,5\}\); \(expand\in\{2,3,4,6\}\); \(depth\in\{1,2,3,4\}\); layer type \(\in\) {grouped, depthwise inverted residual bottleneck}; and channel scaling \(\in\{0.5,1.0\}\). The search space options were expanded for the 7 blocks of the EfficientNet-B0 [25] model, making the total search complexity \(128*7=896\) blocks. Here, we performed the evolutionary search based on the NSGA-II algorithm with a population size of 100 for 30 steps. The architecture search for object detection was based entirely on the MSCOCO [16] dataset without any ImageNet [8] pretraining. In Table 4, we can observe a reduction in computation cost of around 30% with an improvement in mAP when compared to the mothernet we started with. This shows that the DONNAv2 method can be extended to complex vision tasks like object detection, using a loss-proxy scoring system to obtain optimized models from an already compressed NAS-searched model like EfficientDet-D0. The latency numbers for object detection were measured on the Samsung Galaxy S10 mobile platform, yielding a highly efficient, hardware-friendly model with better performance than the mothernet we started with. \begin{table} \begin{tabular}{|l|c|c|c|} \hline **Model** & **Accuracy** & **Latency(in ms)** & **Loss surrogate** \\ \hline Model a & 78.43 & 1.6 & 0.171 \\ \hline Model b & 77.8 & 1.47 & 0.188 \\ \hline Model c & 76.36 & 1.21 & 0.221 \\ \hline Model d & 74.26 & 1.08 & 0.267 \\ \hline \end{tabular} \end{table} Table 2: Performances of DONNAv2 optimized models on Image Classification Figure 4: Here, the Mothernet has a checkerboard heat map displaying local similarity and the model learns different representations across layers. The compressed models (Model a, Model b, Model c and Model d) demonstrate progressively increasing similarity across multiple layers. This suggests that Model a is the best-compressed model in terms of learning distinct representations across layers. This correlates with the performance of the trained model as well as the surrogate loss used in this work. \begin{table} \begin{tabular}{|l|c|c|c|} \hline **Model** & **Accuracy** & **Latency(in ms)** & **Cost/Scenario** \\ \hline DONNA & 77.8 & 1.6 & 2500 + 50 + 1e \\ \hline **DONNAv2 (ours)** & **77.8** & **1.47** & **50 + 1e** \\ \hline EfficientNet-B0 & 77.5 & 2.0 & NA \\ \hline \end{tabular} \end{table} Table 3: Performances of DONNAv2 optimized models on Image classification ### Super-Resolution The EDSR baseline has a ResNet-like [12] backbone for the image super-resolution task. The search space for this model comprised two blocks based on a ResNet bottleneck-style architecture and one head. The search space options for the ResNet-style blocks comprised \(depth\in\{1,2,3,4,6,8\}\) along with input channels and bottleneck channels. The search space options for the head were different, comprising kernel sizes and upscaling options. This experiment highlights that the blocks we chose to optimize need not be similar in architecture to be searched over. We support varying macro-architectural parameters such as layer types, activations and attention mechanisms, as well as micro-architectural parameters such as block repeats, kernel sizes and expansion rates. Efficient model search for EDSR using DONNA and DONNAv2 resulted in models with comparable performance.
However, with DONNAv2 arrived at the efficient EDSR model with 30% reduction in computational cost. REDS [21] is a small dataset and the finetuning requires 3125 epochs. To estimate an accuracy predictor for Donna, we subsample and finetune 34 candidate models. DONNAv2 avoids finetuning models to estimate the performance of candidate models in the search space. Super-resolution is also one of the dense vision tasks with varying block architectures that was able to converge to optimal models using DONNAv2 search algorithm. ### Image Denoising For image denoising tasks, one of the most popular architectures is the UNet [22]. To demonstrate the capability of DONNAv2 on image denoising, we chose to optimize a UNet-based multi-stage model NAFNet [6]. NAFnet is one of the state-of-the-art models for image denoising. The evolutionary search over architecture search space with hardware agnostic metrics (Macs count) helped in identifying efficient denoising models with minimal performance degradation. This also demonstrates the efficacy of DONNAv2 for flops-based model search. The optimization strategy could be varied based on use-cases and this is one of the examples proving that DONNAv2 can be performed for a flops-based search strategy as well. Note that NAFNet itself is a lightweight design which added to the difficulty of compressing the model furthermore using NAS. But still, DONNAv2 was able to achieve almost 40% MAC reduction with only about 0.3 PSNR degradation. It is also one of the complexed vision tasks on which very few NAS methodologies have been applied and proved their efficacy against. ### Multi-Task Network: YOLOP For many vision applications, multi-task networks are being deployed and one such widely used model in the autonomous driving industry is YOLOP [29]. YOLOP has three tasks: traffic light detection, driving area segmentation, and lane segmentation. The architecture of the model consists of an encoder model which forms the backbone of \begin{table} \begin{tabular}{|l|c|c|c|} \hline **Model** & **GMACs** & **PSNR** & **Cost\(/\)Scenario** \\ \hline **DONNAv2 (ours)** & **40.173** & **39.9895** & 540+ \\ \hline Stage-1 NAFNet & 63.6 & 40.3045 & NA \\ \hline \end{tabular} \end{table} Table 6: Performances of DONNAv2 optimized models on Image Denoising \begin{table} \begin{tabular}{|l|c|c|c|} \hline **Model** & **mAP** & **Latency(in ms)** & **Cost\(/\)Scenario** \\ \hline DONNA & 35.1 & 1.98 & 2500 + 310 + 1e \\ \hline **DONNAv2 (ours)** & **34.8** & **1.98** & **310 + 1e** \\ \hline EfficientDet-D0 & 33.4 & 2.792 & NA \\ \hline \end{tabular} \end{table} Table 4: Performances of DONNAv2 optimized models on Object Detection Figure 5: Imagenet Classification Pareto optimal curve of DONNA vs DONNAv2. The red plot which is the left Y-axis is representative of predicted accuracy vs latency as described in DONNA [20] and the green plot is the surrogate MSE loss vs latency **(our proposed method)**. Both the measurements of latency are performed on Samsung Galaxy S10 mobile platform. DONNAv2 search identifies models across a similar latency spread as in the case of DONNA. 
\begin{table} \begin{tabular}{|l|c|c|c|} \hline **Model** & **PSNR(in dB)** & **Latency(in ms)** & **Cost\(/\)Scenario** \\ \hline DONNA & 28.36 & 8.68 & 106250 + 3125 + 1e \\ \hline **DONNAv2 (ours)** & **28.44** & **7.5** & **3125 + 1e** \\ \hline EDSR & 28.6 & 16.7 & NA \\ \hline \end{tabular} \end{table} Table 5: Performances of DONNAv2 optimized models on Super-Resolution the network and three heads for each of the tasks. The backbone of the YOLOP model comprises five BottleneckCSP blocks, the object detection head comprises three BottleneckCSP blocks, and the segmentation heads comprise two BottleneckCSP blocks each. This shows that the computational complexity of the model is spread throughout the model. Here, we explored two approaches to find an efficient compressed model. In the first approach, we compressed only the backbone, and in the second approach, the NAS search covered both the backbone and the heads. In both experiments, DONNAv2 helps us come up with networks that are 20-35% compressed without significant performance degradation across tasks. The dataset used for the YOLOP network is the BDD100K [30] dataset. When we compare the compression approaches, as expected, compressing the backbone alone degrades performance across all three tasks when compared with jointly compressing the backbone and the heads. This highlights that when both the backbone and the heads were searched over, the backbone retained the components needed for the accuracy boost and compression was obtained from the heads as well. This is one of the most complex models to be searched, and it also shows that DONNAv2 can search over segmentation tasks along with multiple task heads. ## 5 Conclusion In this paper, we have explored the efficacy of the surrogate measure and demonstrated a ten-fold reduction in computational complexity for NAS across widely varying learning tasks. It is of great advantage to researchers to be able to perform NAS searches utilizing very few GPU resources. Furthermore, it is important to note that DONNAv2 came up with efficient models while maintaining accuracy across all the learning tasks we have explored. DONNAv2 was tested extensively across a wide range of complex and dense vision tasks, and our experimental studies have shown that DONNAv2 has a significant computational advantage for large ImageNet-scale training data. In summary, DONNAv2 provides an efficient NAS approach built on a surrogate performance measure and introduces a novel block filtering approach to improve the quality of models obtained in the evolutionary search step. DONNAv2 has introduced a reliable proxy method that not only makes the NAS faster but can also be applied across a wide range of tasks. The limitation of this paper lies in the fact that the surrogate metric is only empirically found to perform on par with or better than the accuracy predictor. In future work, we would like to evaluate any limitations of the metric.
2309.08259
BROW: Better featuRes fOr Whole slide image based on self-distillation
Whole slide image (WSI) processing is becoming part of the key components of standard clinical diagnosis for various diseases. However, the direct application of conventional image processing algorithms to WSI faces certain obstacles because of WSIs' distinct property: the super-high resolution. The performance of most WSI-related tasks relies on the efficacy of the backbone which extracts WSI patch feature representations. Hence, we proposed BROW, a foundation model for extracting better feature representations for WSIs, which can be conveniently adapted to downstream tasks without or with slight fine-tuning. The model adopts a transformer architecture, pretrained using a self-distillation framework. To improve the model's robustness, techniques such as patch shuffling have been employed. Additionally, the model leverages the unique properties of WSIs, utilizing WSI's multi-scale pyramid to incorporate an additional global view, thereby further enhancing its performance. We used both private and public data to make up a large pretraining dataset, containing more than 11000 slides, over 180M extracted patches, encompassing WSIs related to various organs and tissues. To assess the effectiveness of BROW, we run a wide range of downstream tasks, including slide-level subtyping, patch-level classification and nuclei instance segmentation. The results confirmed the efficacy, robustness and good generalization ability of the proposed model. This substantiates its potential as a foundation model for WSI feature extraction and highlights promising prospects for its application in WSI processing.
Yuanfeng Wu, Shaojie Li, Zhiqiang Du, Wentao Zhu
2023-09-15T09:11:09Z
http://arxiv.org/abs/2309.08259v1
# BROW: Better featuRes fOr Whole slide image based on self-distillation ###### Abstract Whole slide image (WSI) processing is becoming part of the key components of standard clinical diagnosis for various diseases. However, the direct application of conventional image processing algorithms to WSI faces certain obstacles because of WSIs' distinct property: the super-high resolution. The performance of most WSI-related tasks relies on the efficacy of the backbone which extracts WSI patch feature representations. Hence, we proposed BROW, a foundation model for extracting better feature representations for WSIs, which can be conveniently adapted to downstream tasks without or with slight fine-tuning. The model takes transformer architecture, pretrained using self-distillation framework. To improve model's robustness, techniques such as patch shuffling have been employed. Additionally, the model leverages the unique properties of WSIs, utilizing WSI's multi-scale pyramid to incorporate an additional global view, thereby further enhancing its performance. We used both private and public data to make up a large pretraining dataset, containing more than 11000 slides, over 180M extracted patches, encompassing WSIs related to various organs and tissues. To assess the effectiveness of BROW, we run a wide range of downstream tasks, including slide-level subtyping, patch-level classification and nuclei instance segmentation. The results confirmed the efficacy, robustness and good generalization ability of the proposed model. This substantiates its potential as foundation model for WSI feature extraction and highlights promising prospects for its application in WSI processing. ## I Introduction Deep learning is a technique based on artificial neural networks, which has achieved great success in many fields in recent years. It has the ability to process data with large amount, various dimensions and multi modalities, and learn complex features and patterns from data automatically, without the need for manual feature design, thereby improving the performance of data processing. In the field of medical imaging, there also have been many successful applications. [1] proposed a fully automated pipeline to detect and segment tumors in non-small cell lung cancer (NSCLC) CT images based on deep learning and region growing algorithm. [2] presented a multi-task weakly supervised learning approach for anomaly detection in whole-body FDG-PET/CT images. [3] designed a transfer learning approach for prediction of lymph node metastasis in papillary thyroid carcinoma by leveraging radiomics features from other medical datasets. These deep learning based approaches are making impressive progress in numerous tasks, nevertheless, achieved limited success in whole slide image (WSI) analysis. With the emergence of digital pathology, WSI is becoming part of the key components of routine procedure for clinical diagnosis of many diseases. Traditional image processing algorithms face a number of challenges in this area. First, WSIs typically exhibit extremely high resolution to depict the details of cells and tissues. The substantial computational demands arising from high-resolution render the direct use of traditional convolution methods impractical. Many recent works are struggling with the trade-off between accuracy and computational efficiency due to the difficulty in processing large-scale gigapixel WSIs. 
Some approaches have employed multiple-instance learning (MIL) to partition the entire WSI into smaller patches for processing, followed by sophisticated aggregation. For example, a Transformer based MIL framework was proposed by [4] to explore both morphological and spatial information between different instances in the aggregation operator. [5] developed a MIL-based method to jointly learn both instance- and bag-level embeddings for making the final prediction. These methods all rely on a proficient patch-level feature extractor as a foundation, indicating the significance of constructing a well-performing extractor. However, most of these works have to retrain a task-specific model from scratch or use a model pretrained on natural image datasets. Second, models with insufficient parameters may struggle to effectively handle the abundant details and complex structures present in pathology images. Each pathology image possesses its specific morphological features and presentation patterns. The processes of collection, staining, scanning, and others can introduce image variations due to factors such as lighting conditions and equipment differences. Conventional algorithms are often designed for specific tasks, lacking robustness and generalization ability. Recently, large models have emerged as a viable option for addressing these problems, garnering increasing attention. After achieving considerable success in natural language processing (NLP), large models are steadily gaining attention and gradually expanding into broader domains such as images and videos. For instance, [6] proposed the CLIP framework, which achieved excellent performance by pretraining the model to predict which caption corresponds to which image. [7] utilized a dataset of over one billion noisy image-text pairs to expand visual and visual-linguistic representation learning. Compared with task-specific small models, these large models offer richer representation learning and better generalization, which are also key requirements in WSI processing. These works inspired us to integrate large models with WSI analysis to provide better feature representations, which can be further used across a wide range of downstream tasks. On the other hand, with techniques like prompt engineering and few-shot learning, large models are conveniently deployable to downstream tasks. This gives the trained model a broader range of application possibilities, fostering advancements in the corresponding research. In this work, we propose a self-supervised learning approach to train a large-scale model for WSI feature extraction. There are three important elements for training the large-scale model: an appropriate architecture, a large dataset and a suitable training method. Here, we integrated the vision transformer architecture as the backbone to extract feature representations from WSI slides. As for training data, we collected over 180M patches from more than 11000 WSI slides as a large dataset with both public and private data. The dataset contains slides stained with different methods. The slides are related to a variety of tissues and organs, such as kidney, lung and breast. As for the training method, our approach adapts the self-distillation framework to WSIs' special properties. To leverage the multi-scale input of WSIs, we utilize their hierarchical structure as an additional global view to provide information at different scales.
By adding views of color-augmented and patch-shuffled images, we motivate the model to learn transform-invariant features, ensuring better generalization ability. We also integrate the masked image modeling (MIM) technique into training to enhance semantic learning. The evaluation is conducted on three downstream tasks over 10 datasets with easy adaptation. The experiment results verified that the proposed model can be used as a foundation backbone to extract better feature representations for WSIs in many analysis tasks. The main contributions of this work are listed as follows: 1. We established a large-scale foundation model for extracting better feature representations of WSIs. 2. By leveraging WSIs' properties, we integrated color augmentation, patch shuffling, masked image modeling (MIM) and multi-scale input to add extra views into the self-distillation framework. 3. For pretraining the model, both private and public data were collected to make up a large dataset, containing more than 11000 slides and encompassing images of various organs and tissues. 4. Comprehensive downstream experiments, including the slide-level subtyping task, the patch-level classification task and the nuclei segmentation task, are performed on more than 10 datasets in total. 5. With easy adaptation, the downstream experiment results demonstrate the superiority and robustness of the proposed method, indicating its promising potential to be used as a backbone for extracting WSI feature representations. The remainder of this paper is organized as follows. Section II briefly introduces related work. Section III elaborates on the methods. The experiment settings and results are presented in Section IV. The discussion and conclusion are given in Sections V and VI. ## II Related Work ### Deep Learning Deep learning has been employed as one of the solutions in the domain of medical image processing and has achieved remarkable breakthroughs and successes. [8] proposed an end-to-end lung cancer screening method based on 3D deep learning, using low-dose chest CT images for automatic detection and diagnosis. By extracting features and classifying predictions, the method can accurately identify lung lesions with high sensitivity and specificity. [9] developed a weakly supervised deep learning model for aortic valve malformation classification from unlabeled cardiac MRI sequences. By using large-scale, imperfect training labels, the model outperforms traditional supervised learning methods in performance. Deep learning based methods have also achieved some success in the WSI processing field. [10] utilized a coarse-to-fine analysis of the localized characteristics in pathology images for WSI analysis to overcome the problems caused by the large image size and rich tissue information. The first step of the analysis includes the extraction of spatially localized features. [11] designed a recalibrated multi-instance deep learning method for whole slide gastric image classification. The method is based on a localization network to extract features of the discriminating patches. These WSI processing works rely highly on the quality of the extracted features, but still have to train a task-specific model from scratch or use a model pretrained on the ImageNet dataset [12]. This motivates us to pretrain a model on a WSI dataset to provide a domain-specific foundation, which can be used as a better initialization for extracting WSI features.
### Large Model With the emergence of self-supervised learning (SSL) and the vision transformer (ViT), studies have revealed that increasing the parameter size of models or scaling up the training dataset often leads to enhanced model capabilities in downstream tasks, and even makes emergent capabilities appear on many complex problems [13]. In the field of computer vision, many works have been conducted. In order to expand the training dataset, ALIGN [7] uses a dataset of more than one billion noisy image-text pairs to expand visual and language representation learning, without applying complex data filtering and post-processing to clean the data. This study shows that downstream tasks can benefit from large datasets. [14] proposed a large-scale model, DALL\(\cdot\)E, which is a generative model capable of generating corresponding images based on given textual descriptions. It utilizes the generative pretrained transformer architecture and the self-attention mechanism, allowing it to capture contextual information from textual descriptions and translate it into guidance for image generation. Recently, [15] proposed SAM, which made great progress in the field of natural image segmentation. It develops a large-scale prompt-based image segmentation model pretrained using a large dataset and achieves progress on a wide range of segmentation tasks. Overall, a growing body of research demonstrates the advantages of large-scale models for computer vision tasks. Inspired by these works, we expand the model and dataset to build a well-pretrained large model for better WSI feature representation learning. ### Self-supervised Training Constrained by insufficient annotated WSI data, self-supervised learning (SSL) is a feasible paradigm in visual representation learning. SSL is a deep learning method that does not require manual labeling of data. It extracts meaningful feature representations by learning the relationships and patterns within the data itself, without label information. There are two mainly used self-supervised representation learning techniques: contrastive learning and generative learning. The classic works BERT [16] and the GPT series [17][18][19] in NLP are typical examples of generative pretraining. BERT randomly masks words and generates the missing words during training. The GPT series, on the other hand, uses next-token prediction for pretraining. Recently, generative learning has also been developed in the computer vision area; for example, MAE [20] and SimMIM [21] adopted the ViT architecture and proposed MIM-based training methods. These studies make it possible to train large computer vision models. Compared with the generative learning approach, contrastive learning is more dominant for computer vision tasks. The contrastive learning frameworks proposed by SimCLR [22] and MoCo [23][24] make large-scale pretraining possible by avoiding feature collapse with a large number of asymmetric positive and negative sample pairs. They require a large number of negative sample pairs during the training process, making training difficult. BYOL [25] simplified the training process by removing the memory bank. Later, SimSiam [26] further removed the momentum encoder, making training more convenient.
Inspired by these studies, DINO [27] combines the advantages of the ViT architecture with contrastive learning and self-distillation to achieve excellent performance in representation learning, obtaining strong results in downstream classification and segmentation tasks. We adopt DINO as the base framework because it is convenient to scale up the model and there is no need to construct positive-negative pairs. ## III Methods We adopt the self-distillation framework to learn feature representations. In order to address the inherent problems of WSIs and leverage their distinct properties, we designed several methods, including color augmentation, masked image modeling (MIM), patch shuffling and utilizing the multi-scale input, to improve the robustness and quality of the feature representations. The overall network framework is shown in Fig.1. **Base Framework** For pretraining, we adopt the self-distillation paradigm, which trains a student network \(g_{\theta_{s}}\) to match the output of a teacher network \(g_{\theta_{t}}\). For self-supervised training of WSIs, we first construct a set \(V\) of different views, containing images extracted from the same source image using different distortion methods and crop strategies. This set consists of two parts: the local views and the global views. The whole set of views is passed through the student network, giving the probability distribution \(P_{s}(x)\) after normalization by a softmax function, while only the global views are put into the teacher network, giving the distribution \(P_{t}(x)\). Specifically, given an input view \(x\), the probability distribution of the student network output is calculated as: \[P_{s}(x)^{(i)}=\frac{\exp(g_{\theta_{s}}(x)^{(i)}/\tau_{s})}{\sum_{k=1}^{K}\exp(g_{\theta_{s}}(x)^{(k)}/\tau_{s})},\quad i=1,2,...,K, \tag{1}\] where \(K\) is the dimension of the model output and \(\tau_{s}\) denotes a temperature parameter that controls the sharpness of the output distribution. A similar formula holds for the teacher output \(P_{t}\), with temperature \(\tau_{t}\). To utilize WSIs' pyramid structure and improve the performance when dealing with images at different scales, we also use the corresponding multi-scale input as an additional global view. Moreover, in addition to the original approaches for obtaining the views, we use color augmentation, MIM and patch shuffling to obtain supplementary local views, in order to enhance the robustness of the model. The main loss is: \[L_{m}=\sum_{x\in V_{g}}\sum_{\begin{subarray}{c}x^{\prime}\in V\\ x^{\prime}\neq x\end{subarray}}H(P_{t}(x),P_{s}(x^{\prime})), \tag{2}\] where \(V_{g}\subset V\) denotes the subset of global views and \(H(a,b)=-a\log b\). **Color Augmentation** To ensure the robustness of the model when facing data with color variation, including data from multiple centers or data processed with different staining methods, we use extra color augmentation to generate an additional local view, driving the model to focus more on learning color-invariant representations. The augmentations include changing the brightness, saturation and color space of images within the batch, among others. Given the color-augmented image \(x_{c}\), the loss \(L_{c}\) of this view is calculated in the same way as the main loss: \[L_{c}=H(P_{t}(x),P_{s}(x_{c})). \tag{3}\] **MIM** Many works, such as MAE [20] and SimMIM [21], have shown that reconstructing masked patches is a meaningful self-supervised task that drives the network to learn latent representations of the image.
We make a random mask \(m\) for one input view of the student model and do not do this for the teacher model; specifically, \(x_{m}=m\odot x\). Instead of calculating the difference between the prediction and the unmasked patch at the image level, we add a cross-entropy loss between the output embeddings of both networks. Hence, the loss \(L_{mim}\) has the same computation methodology as \(L_{m}\), specifically, \[L_{mim}=H(P_{t}(x),P_{s}(x_{m})), \tag{4}\] which can be conveniently integrated. **Patch Shuffling** [28] proposed PIRL to learn representations that are invariant to the transformation and retain semantic information by using a commonly used pretext task, solving jigsaw puzzles. [29] introduced a similar approach into a contrastive learning framework, learning invariant representations while shuffling the input patches and improving the semantic quality. There are many trivial factors that bring variation to WSIs. Inspired by these works, we use patch shuffling to generate another local view for learning transform-invariant representations, improving the semantic quality and robustness. To encourage the representation of the shuffled image to be similar to that of its original counterpart, we use a cross-entropy loss \(L_{ps}\), which can be defined as: \[L_{ps}=H(\text{Softmax}(e),\text{Softmax}(f(e_{t}))), \tag{5}\] where \(f(\cdot)\) represents a multi-layer perceptron projector, Softmax represents the softmax function, and \(e\) and \(e_{t}\) denote the feature embeddings of the original and shuffled images, respectively. The whole loss can be defined as: \[L=L_{m}+L_{c}+L_{mim}+L_{ps}. \tag{6}\] The student network is updated with the calculated gradients, while the teacher network is updated using the exponential moving average (EMA) paradigm on the student weights, specifically, \[\theta_{t}\leftarrow\lambda\theta_{t}+(1-\lambda)\theta_{s}, \tag{7}\] with \(\lambda\) following a cosine schedule from 0.996 to 1 during training. ## IV Experiments and Results ### Pretraining #### iv.1.1 Pretraining Datasets We used both private and public data to make up the large pretraining dataset, including two parts from The Cancer Genome Atlas (TCGA) dataset [30] (TCGA-RCC and TCGA-NSCLC), Camelyon17 [31] and three private datasets. There are more than 11000 WSI slides. Each slide was cropped into patches of size 256\(\times\)256 for training, without using the label information. Over 180 million patches were extracted. Details can be found in Table 1. Figure 1: The overall network framework. From the multi-scale pyramid of the WSI, local views and global views are generated. Both global and local views are fed into the student network, while only the global ones are processed by the teacher network. The probability distributions are normalized with the softmax function to calculate the loss. The student network is updated with gradients, while the teacher network is updated using an exponential moving average (EMA) of the student weights. #### iii.1.2 Pretraining Setting The pretraining was performed on NVIDIA A100 GPUs in distributed mode. The code was developed based on the PyTorch framework. The batch size was 1024 and the base learning rate was 0.0005, decayed with a cosine schedule. The models were trained for 100 epochs with 10 warm-up epochs.
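Since the pretraining is implemented in PyTorch, the following simplified sketch illustrates the training objective of Eqs. (1)-(7): the temperature-sharpened distributions, the cross-view cross-entropy, and the EMA teacher update. It is an illustration only; view generation, the projection head, and DINO's centering of the teacher outputs are omitted, and all function and tensor names are ours rather than from the released code.

```python
import torch
import torch.nn.functional as F


def sharpen(logits, tau):
    # Eq. (1): temperature-scaled softmax over the K output dimensions
    return F.softmax(logits / tau, dim=-1)


def self_distillation_loss(student_out, teacher_out, tau_s=0.1, tau_t=0.04):
    """Eq. (2): cross-entropy H(P_t, P_s) summed over (global view, other view) pairs.

    student_out: list of [B, K] logits, one per view (global views first, then local views)
    teacher_out: list of [B, K] logits, one per global view only
    """
    loss, n_terms = 0.0, 0
    for g_idx, t_logits in enumerate(teacher_out):
        p_t = sharpen(t_logits, tau_t).detach()                   # teacher target, no gradient
        for v_idx, s_logits in enumerate(student_out):
            if v_idx == g_idx:                                    # skip comparing a view with itself
                continue
            log_p_s = F.log_softmax(s_logits / tau_s, dim=-1)
            loss = loss + torch.sum(-p_t * log_p_s, dim=-1).mean()  # H(a, b) = -a log b
            n_terms += 1
    return loss / max(n_terms, 1)


@torch.no_grad()
def ema_update(teacher, student, momentum):
    # Eq. (7): theta_t <- lambda * theta_t + (1 - lambda) * theta_s
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)
```

The extra terms \(L_{c}\), \(L_{mim}\) and \(L_{ps}\) reuse the same cross-entropy, so in practice the color-augmented, masked and shuffled crops can simply be appended to the list of student views.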
### Downstream Tasks #### iii.2.1 Downstream Datasets To evaluate the downstream task performance, we validate the pretrained models under three tasks: slide-level subtyping task, patch-level classification task and nuclei instance segmentation task. For slide-level subtyping task, we use CAMELYON16 [32], TCGA-NSCLC, PANDA [33], TCGA-RCC and TCGA-BRCA datasets. For patch-level image classification task, we use SIPaKMeD [34] and Herlev [35] datasets. For nuclei instance segmentation task, we use CoNSeP [36], TNBC [37], Kumar [38] and Lizard [39] datasets. A brief summary is posted in Table2. CAMELYON16 is a public challenge dataset of sentinel lymph node biopsy of early-stage breast cancer, which consists of 399 H&E stained WSIs (160 tumor, 239 normal). We split the training set by 9:1 into training and validation, respectively, then test on the official test set. The TCGA-NSCLC dataset includes two subtypes of lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC). We randomly selected 1000 WSIs, including 500 LUAD and LUSC each, and divided them into training, validation and test sets with a ratio of 8:1:1 for cross-validation. The PANDA dataset is used for the classification of prostate cancer, consisting of 7,724 cancer samples and 2,891 non-cancer samples. They were split into training, validation and test sets according to the ratio of 7:1:2 for cross-validation. From TCGA-RCC, 896 WSIs were selected, of which the amounts of three subtypes were 90, 518 and 288. The distribution was kept unchanged and divided into training, validation, and test sets with a ratio of 8:1:1. The TCGA-BRCA dataset includes two subtypes: invasive ductal carcinoma (IDC) and invasive lobular carcinoma (ILC). 973 WSIs were selected, of which 774 IDCs and 199 ILCs were split into training, validation and testing sets, respectively, with a ratio of 8:1:1. SIPaKMeD includes 4,049 images of single cells with five classes, including Dyskeratotic, Koilocytotic, Metaplastic, Parabasal, Superficial-Intermediate, with 813, 825, 793, 787 and 831 samples, respectively. The Herlev dataset consists of 917 images of single cells. Within the dataset, there are 675 abnormal cells and 242 normal cells. Both SIPaKMeD and Herlev were split into train/val/test sets with a ratio of 3:1:1 for cross-validation. The segmentation model is trained on CoNSeP dataset, and tested with the official test splits. TNBC and Kumar were used for external tests. Here, we used the whole set of TNBC and the test set of Kumar split by [36]. The part of Lizard dataset with colon tissues was used as additional data discussed in SectionIV.3.3. #### iii.2.2 Downstream Experiments Setting For slide-level subtyping task, we adopted the MIL framework with two advanced aggregators: CLAM by [40] and DTFD by [41]. For nuclei instance segmentation, we used the structure of Hover-Net [36] to separate the nuclei from the background. When performing cross-validation, the final results were calculated by averaging the metrics across all folds. Hyper-parameters are determined by the results on the validation set, while the performance was evaluated on the test set. #### iii.2.3 Evaluation metrics For both slide-level subtyping task and patch-level classification task, we employed accuracy (ACC) and area under the curve (AUC) as evaluation metrics. 
For the nuclei instance segmentation task, we adopted five metrics, as in [36]: DICE, Aggregated Jaccard Index (AJI) [38], Detection Quality (DQ), Segmentation Quality (SQ) and Panoptic Quality (PQ). \begin{table} \begin{tabular}{c c c c c} \hline Dataset & Classes & Slides/Patches & Organs & Task \\ \hline CAMELYON16 & 2 & 399 & Breast & Slide-sub \\ TCGA-NSCLC & 2 & 1000 & Lung & Slide-sub \\ PANDA & 2 & 10,615 & Prostate & Slide-sub \\ TCGA-RCC & 3 & 896 & Kidney & Slide-sub \\ TCGA-BRCA & 2 & 973 & Breast & Slide-sub \\ SIPaKMeD & 5 & 4,049 & Cervix & Patch-cls \\ Herlev & 2 & 917 & Cervix & Patch-cls \\ CoNSeP & / & 41 & Colon & Nuclei-seg \\ TNBC & / & 50 & Breast & Nuclei-seg \\ Kumar & / & 14 & Multi & Nuclei-seg \\ Lizard & / & 238 & Colon & Nuclei-seg \\ \hline \end{tabular} \end{table} Table 2: Datasets for downstream tasks. \begin{table} \begin{tabular}{c c c c} \hline Dataset & Slides & Organs & Stain \\ \hline TCGA-RCC & 3269 & Kidney & H\&E \\ TCGA-NSCLC & 3220 & Lung & H\&E \\ Camelyon17 & 1000 & Breast & H\&E \\ Private Data1 & 1263 & Cervix & Pap \\ Private Data2 & 1295 & Breast & IHC \\ Private Data3 & 1159 & Digestive Tract & H\&E \\ \hline \end{tabular} \end{table} Table 1: Datasets for pretraining. H&E denotes Hematoxylin and Eosin Stain, IHC denotes Immunohistochemistry Stain, Pap denotes Papanicolaou Stain. DICE is defined as: \[\text{DICE}=2\times|X\cap Y|/(|X|+|Y|), \tag{8}\] where \(X\) denotes the ground truth and \(Y\) is the prediction. AJI computes the ratio of an aggregated intersection cardinality and an aggregated union cardinality between \(X\) and \(Y\). PQ is defined as: \[\text{PQ}=\underbrace{\frac{|TP|}{|TP|+\frac{1}{2}|FP|+\frac{1}{2}|FN|}}_{DQ} \times\underbrace{\frac{\sum_{(x,y)\in TP}IoU(x,y)}{|TP|}}_{SQ}, \tag{9}\] where \(x\) denotes a ground truth (GT) segment, \(y\) denotes a prediction segment and IoU denotes intersection over union. TP, FN and FP denote matched pairs, unmatched GT segments and unmatched prediction segments, respectively. ### Results #### ii.3.1 Slide-level Subtyping Task To demonstrate the significance of the data and model scale, we ran slide-level subtyping experiments on 5 different datasets, CAMELYON16, TCGA-NSCLC, PANDA, TCGA-RCC and TCGA-BRCA, employing two popular MIL frameworks: CLAM and DTFD. In this task, the pretrained model can be used as the feature extractor directly, without any further fine-tuning. The feature aggregators were trained on features from extractors pretrained on different domains (WSIs and natural images from ImageNet) and at different scales (small: ResNet50; large: ViT-b). The results are displayed in Table 3. By pretraining with data from the specific domain, regardless of using Res50 or ViT-b as the feature extractor, there is a significant improvement in subtyping performance. For Res50, when pretrained on WSIs instead of ImageNet, the model performance improved most on the CAMELYON16 dataset, with +3.3% accuracy and +0.1 AUC, using CLAM. For ViT-b, when pretrained on WSIs instead of ImageNet, the model performance improved most on the TCGA-RCC dataset, with +4.1% accuracy and +0.01 AUC, using DTFD. By pretraining a larger model, regardless of whether it is pretrained on ImageNet or WSIs, there is prominent progress in subtyping performance. When pretrained with ImageNet, using ViT-b as the feature extractor instead of Res50 yields improvements of up to +7.5% accuracy and +0.1 AUC on the CAMELYON16 dataset using CLAM.
When pretrained with WSIs, using ViT-b as the feature extractor instead of Res50 leads to improvements of up to +6.4% accuracy and +0.1 AUC on the TCGA-BRCA dataset using DTFD. The upward trend is conspicuously apparent in the line chart, as shown in Fig.2. The performance enhancements obtained by changing the pretraining dataset and expanding the model solidify the significance of training with domain-specific data and the promising potential of applying larger models. It is worth pointing out that there are two outliers when using Res50 trained on WSIs as the feature extractor, on the BRCA and RCC datasets. The decline in performance may be caused by worse convergence, limited by the model parameter scale. Compared with the ImageNet dataset, the WSI dataset is more than ten times larger. The incongruity between the model size and the dataset size leads to the underperformance. However, a larger model, like ViT-b, did benefit from the large WSI dataset and further improved the subtyping performance, indicating the potential of using larger models as the backbone. The bottom row of Table 3 shows the results of the proposed model BROW. By integrating the multi-scale inputs and other methods, like MIM and color augmentation, the proposed model showed robust and competitive results on the five datasets, further enhancing the performance. **Comparison with Advanced Works** There are many works trying to improve the performance of slide-level subtyping tasks. Most of them adopt the MIL framework and focus on exploring the correlation between instances and the whole slide. For instance, CLAM [40] used attention-based learning to identify valuable sub-regions and instance-level clustering to constrain the feature space. Later, DTFD [41] proposed a double-tier MIL framework to effectively use the intrinsic features by introducing the concept of pseudo-bags. Some works also pay attention to the feature extractor; for example, SCL-WC [42] proposed task-agnostic self-supervised feature pre-extraction and task-specific weakly-supervised feature refinement and aggregation for WSI-level prediction. Here, we made a comparison between BROW and these advanced works, including CLAM, DTFD, TransMIL [4], ZOOMMIL [43], DSMIL [44] and SCL-WC. Because CAMELYON16 has official set splits, we ran experiments on this dataset. Figure 2: The experiment results using different scale models pretrained on different domains of data, taking accuracy as the comparison metric. From the results shown in Fig.3, by using a carefully designed aggregator or feature extractor, the models obtained good results. DTFD, DSMIL and SCL-WC all achieved accuracy values near 0.9. However, by pretraining a large-scale model on the large WSI dataset and directly using it as the feature extractor, BROW achieved the best performance with CLAM as the aggregator, previously the aggregator with the worst performance. The results indicate that the slide-level subtyping task can benefit greatly from using the pretrained WSI foundation model as the feature extractor. It can also provide a better benchmark for developing aggregators in the MIL framework. #### iv.2.2 Patch-level Classification We ran experiments to test the models' performance on the patch-level classification task on two datasets, SIPaKMeD and Herlev. Here, in addition to adapting the deep learning models to the task directly, we also use a model fusion method to further improve the classification performance.
Previous works, [45] and [46], have shown the benefits of model integration, which can effectively combine the strengths of individual models, resulting in higher accuracy and improved stability compared to a single model. We run experiments following the design mentioned in [46], employing four distinct models, including Inception V3 [47], MobileNet V2 [48], Inception ResNet V2 [49], and the proposed BROW. Confidence scores were extracted from trained models, and an ensemble method based on fuzzy distance was employed to aggregate these scores for final image category prediction. The results are shown in Table4. From the results we found that there are fluctuations of other models' independent performance across two datasets. Inception V3 got competitive results on SIPaKMed, but underperformed others by a large margin on Herlev. However, BROW achieved the best and most robust results on both datasets, with accuracy of 0.9783 and 0.9663, respectively. By fusing the three models, Inception V3, MobileNet V2 and Inception ResNet V2, there are enhancements on both two datasets, demonstrating the benefits of model fusion. When integrating the proposed model BROW into model fusion, the performance got further improved. By replacing the model in the fusion with the worst independent performance, the fused model obtained the best results, with accuracy of 0.9800 and 0.9718, respectively. \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline & & \multicolumn{3}{c}{CAMELYON16} & \multicolumn{3}{c}{TCGA-NSCLC} & \multicolumn{3}{c}{PANDA} & \multicolumn{3}{c}{TCGA-RCC} & \multicolumn{3}{c}{TCGA-BRCA} \\ \cline{3-11} & & ACC & AUC & ACC & AUC & ACC & AUC & ACC & AUC & ACC & AUC \\ \hline \multirow{3}{*}{DINO\_Res50\_ImgNets} & CLAM & 0.8248 & 0.8408 & 0.8060 & 0.9058 & 0.8974 & 0.9632 & 0.8911 & 0.9657 & 0.7856 & 0.8121 \\ & DTFD & 0.8574 & 0.8609 & 0.8244 & 0.8947 & 0.9036 & 0.9557 & 0.9044 & 0.9674 & 0.7927 & 0.8175 \\ \hline \multirow{3}{*}{DINO\_Res50\_WSIs} & CLAM & 0.8574 & 0.9381 & 0.8365 & 0.9199 & 0.8984 & 0.9662 & 0.9100 & 0.9828 & 0.8021 & 0.8352 \\ & DTFD & 0.8744 & 0.9522 & 0.8470 & 0.9222 & 0.9115 & 0.9629 & 0.8956 & 0.9788 & 0.8134 & 0.7689 \\ \hline \multirow{3}{*}{DINO\_ViT-b\_ImgNets} & CLAM & 0.9000 & 0.9402 & 0.8500 & 0.9325 & 0.9176 & 0.9682 & 0.9256 & 0.9860 & 0.8577 & 0.8838 \\ & DTFD & 0.9078 & 0.9244 & 0.8719 & 0.9433 & 0.9264 & 0.9718 & 0.9144 & 0.9792 & 0.8485 & 0.8747 \\ \hline \multirow{3}{*}{DINO\_ViT-b\_WSIs} & CLAM & 0.9109 & 0.9662 & 0.8726 & 0.9488 & 0.9315 & 0.9763 & 0.9422 & 0.9895 & 0.8639 & 0.8962 \\ & DTFD & 0.9124 & 0.9730 & 0.8889 & 0.9476 & 0.9372 & 0.9770 & 0.9556 & 0.9903 & 0.8773 & 0.8933 \\ \hline \multirow{3}{*}{BROW\_ViT-b\_WSIs} & CLAM & **0.9535** & 0.9756 & 0.8818 & **0.9606** & 0.9407 & 0.9802 & 0.9511 & **0.9942** & **0.8897** & **0.9224** \\ & DTFD & 0.9419 & **0.9787** & **0.9007** & 0.9574 & **0.9434** & **0.9813** & **0.9611** & 0.9917 & 0.8804 & 0.9148 \\ \hline \hline \end{tabular} \end{table} Table 3: Results of slide-level multiclass subtyping task. The underline indicates the best result using the same framework, while the best results of two frameworks among all models were emphasized in bold. Figure 3: The experiment results of advanced models on CAMELYON16. 
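The score-level fusion used for the patch-level classification experiments above is based on a fuzzy-distance rule whose exact form is not reproduced here, so the following sketch should be read as a generic stand-in: it only shows where the fusion of per-model confidence scores happens, using a plain average as a placeholder, and all names are illustrative.

```python
import numpy as np


def fuse_confidence_scores(score_list):
    """score_list: list of [N, C] softmax confidence arrays, one per trained model.

    Returns fused class predictions for the N images. A simple average is used here
    as a placeholder for the fuzzy-distance-based aggregation described in the text.
    """
    fused = np.mean(np.stack(score_list, axis=0), axis=0)  # [N, C] fused confidences
    return fused.argmax(axis=1)                            # predicted class per image
```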
\begin{table} \begin{tabular}{c c c c c c} \hline \hline & & \multicolumn{2}{c}{SIPaKMeD} & \multicolumn{2}{c}{Herlev} \\ \cline{3-6} & ACC & AUC & ACC & AUC \\ \hline \multirow{3}{*}{single model} & MobileNet & 0.9585 & 0.9971 & 0.9476 & 0.9812 \\ & InceptionV3 & 0.9592 & 0.9970 & 0.9037 & 0.9658 \\ & Inception ResNet V2 & 0.9657 & 0.9984 & 0.9397 & 0.9842 \\ & BROW & **0.9783** & **0.9992** & **0.9663** & **0.9944** \\ \hline \multirow{3}{*}{fused model} & mix & 0.9738 & 0.9488 & / & 0.9488 & / \\ & mix+BROW & 0.9793 & / & 0.9652 & / \\ & mix*+BROW & **0.9800** & / & **0.9718** & / \\ \hline \hline \end{tabular} \end{table} Table 4: Results of patch-level classification experiments. In fused model part, mix means the fusion of MobileNet, Inception V3 and Inception ResNet V2. Mix+BROW denotes introducing BROW into the fusion. Mix*+BROW denotes using BROW to replace the model performed worst in the former three. The fusion is realized based on fuzzy distance, details can be found at Section IV.3.2. **Data Efficient Deployment** The proposed model is convenient to be adapted to this patch classification task with few shot learning. Inspired by the work of [50], we build the classification head by combining the advantages of parameter efficient fine-tuning and adapter which leverages a key-value cache model from the few shot training set. Given the support set with \(K\) labeled images in each of \(N\) categories, denoted as \(I_{K}\), with their labels \(L_{N}\), the feature representations of the images are extracted by BROW first. The feature vector \(\text{F}_{sup}\) and its corresponding label vector \(\text{L}_{sup}\) can be defined as: \[\text{F}_{sup}=\text{BROW}(I_{K}),\quad\text{F}_{sup}\in\mathbb{R}^{NK\times D}, \tag{10}\] \[\text{L}_{sup}=\text{OneHot}(L_{N}),\quad\text{L}_{sup}\in\mathbb{R}^{NK\times N}, \tag{11}\] where \(D\) is the dimension of extracted features. For the key-value cache, we treat \(\text{F}_{sup}\) as keys, while the corresponding label vector \(\text{L}_{sup}\) are their values. For a query image from test set, the feature is \(f_{test}\in\mathbb{R}^{1\times D}\). The affinities \(A\) between the query and keys can be estimated as: \[A=\exp(-\beta(1-f_{test}\text{F}_{sup}^{T})),\quad A\in\mathbb{R}^{1\times NK}, \tag{12}\] where \(\beta\) is a hyper-parameter for controlling the sharpness and \(\exp(\cdot)\) denotes the exponential function. The output logits of the test image by adapter are then calculated as \(A\text{L}_{sup}\). The original method also fine-tuned the adapter via SGD to mitigate the data gap. We used method different from that. By parameter efficient fine-tuning methods, here we used LoRA [51], the model can be adapted into the downstream task with limited data and computation. The new feature vector \(\text{F}_{sup}^{{}^{\prime}}\), \(f_{test}^{{}^{\prime}}\) and affinities \(A^{{}^{\prime}}\) can be calculated with the fine-tuned model in same way. Then we combine these two terms as final logits to capture the downstream task specific features while preserving the universal characteristics: \[\text{logits}=\alpha A\text{L}_{sup}+\alpha^{{}^{\prime}}A^{{}^{\prime}}\text {L}_{sup}, \tag{13}\] where \(\alpha\) and \(\alpha^{{}^{\prime}}\) are parameters to balance these two terms. The experiment results are shown in Table5. Without any training, the model already got promising outcomes as shown in the top row. Training with data from 1% to 10%, the performance ascended rapidly. 
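To make the cache-model adaptation of Eqs. (10)-(13) concrete, the sketch below builds the key-value cache from the support features and combines the frozen-backbone branch with the LoRA-fine-tuned branch. It assumes the features are L2-normalized so that \(f\text{F}^{T}\) behaves as a cosine similarity; the function names, \(\beta\) and the balancing weights are illustrative rather than taken from the released code.

```python
import torch
import torch.nn.functional as F


def cache_logits(f_test, F_sup, L_sup, beta=5.5):
    # Eq. (12): A = exp(-beta * (1 - f_test F_sup^T)); Eq. (13) term: A L_sup
    f_test = F.normalize(f_test, dim=-1)                  # [1, D] query feature
    F_sup = F.normalize(F_sup, dim=-1)                    # [N*K, D] support features (keys)
    A = torch.exp(-beta * (1.0 - f_test @ F_sup.t()))     # [1, N*K] affinities
    return A @ L_sup                                      # [1, N] class logits (values = one-hot labels)


def adapted_logits(backbone, backbone_ft, image, F_sup, L_sup, F_sup_ft,
                   alpha=1.0, alpha_ft=1.0):
    """Combine the frozen branch (universal features) with the LoRA-fine-tuned branch."""
    with torch.no_grad():
        f = backbone(image)        # Eq. (10): features from the frozen pretrained model
        f_ft = backbone_ft(image)  # features from the parameter-efficiently fine-tuned model
    return alpha * cache_logits(f, F_sup, L_sup) + alpha_ft * cache_logits(f_ft, F_sup_ft, L_sup)
```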
After fine-tuning with limited labeled data, the proposed model is able to obtain competitive results that are close to the performance of the model trained with all the data. Especially on the Herlev dataset, with only 10% of the data the model can beat the performance of other models trained with all the data. Moreover, combining the task-specific features with the universal characteristics is beneficial for improving the performance; as shown in the bottom row, this combination got the best results. **Visual Results** Fig.4 displays the feature distributions provided by these models. In the scatter plots of pre-extracted features, the proposed model provides the best representation quality on both datasets. The quality of the representation is consistent with the final prediction performance. The experiments demonstrate the promising and robust performance of the proposed model, both individually and as part of a hybrid ensemble. The proposed model can be easily adapted to the patch-level classification task with only limited labeled data and efficient adaptation. #### iii.2.3 Nuclei Instance Segmentation Segmentation of cells and nuclei is a significant first step towards automatic analysis of WSIs. Trained models are becoming important tools to assist pathologists. There have been several works that introduced SSL methods into WSI pretraining using vision transformer architectures to address various problems. We adapt the trained models of some of these advanced works to this task. [52] proposed a hierarchical image pyramid transformer (HIPT) using a self-distillation framework. [53] also provided a pretrained ViT-based model using DINO, aiming to establish a benchmark for SSL on pathology datasets. No abbreviation is given in that paper; for simplicity, we call it B-DINO here. [54] proposed a hybrid model, called CTransPath, pretrained in the MoCo V3 manner to learn universal feature representations for WSIs. We adopted Hover-Net [36] as the framework for the nuclei segmentation task. There are two stages in this framework: in the linear stage we freeze the parameters of the feature extractor and only fine-tune the linear segmentation head; in the fine-tune stage we fine-tune the whole model with downstream data. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{SIPaKMeD (97.83)} & \multicolumn{3}{c}{Herlev (96.63)} \\ \cline{2-7} & \multicolumn{3}{c}{using data ratio} & \multicolumn{3}{c}{using data ratio} \\ \hline methods & 1\% & 5\% & 10\% & 1\% & 5\% & 10\% \\ \hline adapter & 76.49 & 87.38 & 92.82 & 86.19 & 88.40 & 90.06 \\ adapter\_F & 76.73 & 90.10 & 93.81 & **86.74** & 89.50 & 91.71 \\ LoRA & 76.73 & 89.48 & 95.30 & 86.19 & 90.61 & 95.58 \\ mix & **78.59** & **90.35** & **95.67** & **86.74** & **91.16** & **96.13** \\ \hline \hline \end{tabular} \end{table} Table 5: Experiment results for data-efficient adaptation. Adapter and adapter_F are the few-shot learning adapter without/with fine-tuning. LoRA represents the model fine-tuned using low-rank adaptation. Mix denotes combining the two terms. The number in brackets is the result trained with the full data. Accuracy is used here as the comparison metric. Figure 4: The feature distributions provided by different models. To integrate the transformer into Hover-Net as the backbone, we use the feature map from the final layer of the transformer to build a feature pyramid, following an approach validated in [55]. The main experiments were performed on the CoNSeP dataset.
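To give a sense of how the final-layer transformer features can be turned into a pyramid for the segmentation decoder, in the spirit of the approach from [55] mentioned above, here is a hedged sketch: patch tokens are reshaped into a 2D map and rescaled to several strides with deconvolution and pooling. The layer choices, channel widths and stride names (which assume a ViT patch size of 16) are illustrative, not the exact decoder used in our experiments.

```python
import torch
import torch.nn as nn


class SimpleFeaturePyramid(nn.Module):
    """Build multi-scale maps from a single ViT feature map (illustrative sketch)."""

    def __init__(self, dim=768):
        super().__init__()
        self.up4 = nn.Sequential(                          # 4x upsampling for fine nuclear boundaries
            nn.ConvTranspose2d(dim, dim // 2, kernel_size=2, stride=2),
            nn.GELU(),
            nn.ConvTranspose2d(dim // 2, dim // 4, kernel_size=2, stride=2),
        )
        self.up2 = nn.ConvTranspose2d(dim, dim // 2, kernel_size=2, stride=2)
        self.down2 = nn.MaxPool2d(kernel_size=2, stride=2)

    def forward(self, patch_tokens, grid_hw):
        # patch_tokens: [B, N, dim] tokens from the last ViT block (CLS token removed); N = H * W
        B, N, D = patch_tokens.shape
        H, W = grid_hw
        fmap = patch_tokens.transpose(1, 2).reshape(B, D, H, W)
        return {
            "stride_4": self.up4(fmap),
            "stride_8": self.up2(fmap),
            "stride_16": fmap,
            "stride_32": self.down2(fmap),
        }
```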
**Comparison with Advanced Works** To maintain consistency with the compared models, we use ViT-small as the backbone size to train our model, denoted BROWs. Table 6 presents the quantitative results and Fig.5 displays the visual segmentation results. As shown in Table 6, in both the linear evaluation and the fine-tuning evaluation, BROWs achieved the best results. In the linear test, HIPT, B-DINO and BROWs outperform CTransPath by a large margin. When fine-tuning the network initialized with the pretrained weights, the performance of all the models improved, especially CTransPath. BROWs still maintained better results than the competing models over all the metrics. BROW has the best ability to distinguish nuclear pixels from the background, verified by the high DICE score. CTransPath obtained sub-optimal results, suffering from low DQ values. This indicates a weakness in detecting instances, leading to some missed detections. As shown in Fig.5, CTransPath overlooks the large instance at the top left corner (marked by the blue box). HIPT and B-DINO have low DICE and AJI scores, penalized by wrongly segmented areas, indicating their poor segmentation performance. As shown in Fig.5, they made wrong predictions at the position marked by the yellow box. In Table 6, we also list the results fine-tuned with limited data to evaluate the adaptation efficiency. Acquiring high-quality annotations demands a significant investment of time and a high level of medical expertise, so data efficiency is important in this task. When fine-tuning with increasing amounts of data, the performance rapidly improved. The DICE score of BROWs fine-tuned on only five images is already close to that of HIPT fine-tuned with all the data. Considering the PQ metric, which provides both accurate quantification and interpretability, the proposed model can be efficiently adapted to this segmentation task by fine-tuning on five images, outperforming HIPT and B-DINO trained with all the data. These experiments demonstrate that the proposed model can be efficiently adapted to this segmentation task with limited annotated data. **Comparison with External Datasets** To verify the generalization ability, we ran experiments on two extra datasets: TNBC [37] and Kumar [38]. We took the models trained on the CoNSeP dataset and independently tested them on TNBC and Kumar, in order to evaluate their ability to generalize to new organs and different sources. For Kumar, we used the test set split by Hover-Net [36]. For TNBC, we used the whole dataset as the test set. As shown in Table 7, BROW generalized best on both datasets. HIPT showed poor performance on unseen data, being less effective at detecting nuclear pixels, as reflected by the low DICE score. CTransPath and B-DINO achieved performance competitive with BROW on the SQ score, indicating close segmentation quality. However, SQ is calculated only within true positive segments and should therefore be observed together with DQ. Thus, the overall segmentation performance of BROW is superior. **Comparison with Models of Different Parameter Scales** We ran segmentation experiments using models trained with ViT-small and ViT-base architectures.
\begin{table} \begin{tabular}{c c c c c c c} \hline & & DICE & AJI & DQ & SQ & PQ \\ \hline \multirow{4}{*}{Linear} & HIPT & 0.7789 & 0.4235 & 0.5138 & 0.7002 & 0.3617 \\ & B-DINO & 0.7773 & 0.4204 & 0.4905 & 0.6802 & 0.3349 \\ & CTransPath & 0.7374 & 0.3384 & 0.4124 & 0.6664 & 0.2762 \\ & BROWs & **0.7862** & **0.4456** & **0.5426** & **0.7024** & **0.3825** \\ \hline \multirow{4}{*}{Fine-tune} & HIPT & 0.7886 & 0.4273 & 0.5075 & 0.7030 & 0.3588 \\ & B-DINO & 0.7902 & 0.4352 & 0.5213 & 0.7085 & 0.3707 \\ & CTransPath & 0.8061 & 0.4693 & 0.5677 & 0.7232 & 0.4119 \\ & BROWs\_1 & 0.6302 & 0.1577 & 0.2242 & 0.6749 & 0.1511 \\ & BROWs\_2 & 0.6837 & 0.1736 & 0.2554 & 0.6685 & 0.1708 \\ & BROWs\_5 & 0.7881 & 0.4517 & 0.5534 & 0.7138 & 0.3962 \\ & BROWs & **0.8122** & **0.4824** & **0.5870** & **0.7286** & **0.4292** \\ \hline \end{tabular} \end{table} Table 6: Comparative experiment results of nuclei instance segmentation on CoNSep dataset. BROWs_n represents fine-tuning with n shots of downstream data. Figure 5: Visual segmentation results. GT denotes the ground truth. \begin{table} \begin{tabular}{c c c c c c c} \hline & \multicolumn{2}{c}{DICE} & AJI & DQ & SQ & PQ \\ \hline \multirow{4}{*}{Linear} & BROWs & 0.7862 & 0.4456 & 0.5426 & 0.7024 & 0.3825 \\ & BROWb & **0.7982** & **0.4621** & **0.5488** & **0.7034** & **0.3873** \\ \hline \multirow{4}{*}{Fine-tune} & BROWs & **0.8122** & **0.4824** & **0.5870** & **0.7286** & **0.4292** \\ & BROWb & 0.8106 & 0.4766 & 0.5797 & 0.7268 & 0.4228 \\ \hline \end{tabular} \end{table} Table 8: Comparative experiments with models at different scales. \begin{table} \begin{tabular}{c c c c c c c} \hline & \multicolumn{2}{c}{TNBC} & & & & \\ \hline & DICE & AJI & DQ & SQ & PQ \\ \hline HIPT & 0.7546 & 0.5460 & 0.6845 & 0.7428 & 0.5121 \\ B-DINO & 0.7668 & 0.5736 & 0.7290 & 0.7463 & 0.5476 \\ CTransPath & 0.7610 & 0.5801 & **0.7284** & 0.7443 & 0.5468 \\ BROWs & **0.7678** & **0.5819** & 0.7283 & **0.7504** & **0.5511** \\ \hline \multicolumn{6}{c}{Kumar} \\ \hline & DICE & AJI & DQ & SQ & PQ \\ \hline HIPT & 0.7811 & 0.4696 & 0.5456 & 0.7012 & 0.3874 \\ B-DINO & 0.7706 & 0.5094 & 0.6153 & 0.7098 & 0.4411 \\ CTransPath & 0.7469 & 0.4867 & 0.5799 & 0.7112 & 0.4159 \\ BROWs & **0.7905** & **0.5367** & **0.6423** & **0.7198** & **0.4653** \\ \hline \end{tabular} \end{table} Table 7: Comparative experiment results on external test set. The results are shown in TableVIII. According to the linear evaluation results, ViT-b based model obtained better results, indicating its better generalization performance. However, after fine-tuning the full parameters, the ViT-b based model was outperformed by ViT-s based model. One possible interpretation could be the over-fitting caused by the mismatching between limited data and large scale of parameters. To verify the hypothesis and address the over-fitting problem, we run experiments in two ways: using efficient fine-tuning methods and adding more data. To make a comparison among different fine-tuning methods, we adopted: 1) directly freezing part of the blocks; 2) using LoRA [51] to reduce the number of trainable parameters by injecting trainable rank decomposition matrices into each layer of the Transformer architecture; 3) using ViT adapter [56] to efficiently transfer large pre-trained vision transformer models to downstream tasks. We also post the full fine-tuning results as a baseline. The results are shown in TableIX. 
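As background for the LoRA variant compared above, the following is a minimal, generic sketch of low-rank adaptation: a frozen linear layer is augmented with trainable rank-decomposition matrices so that only about r x (d_in + d_out) parameters are updated per layer. The rank and scaling values are illustrative, not the configuration used in our experiments.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable low-rank update: W x + (B A) x * scaling."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                              # freeze the pretrained weight
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.lora_A.t() @ self.lora_B.t()) * self.scaling
```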
The performance improved clearly, indicating the significance of fine-tuning methods in adapting a universal model to downstream tasks. By mitigating the over-fitting problem, our proposed model BROW achieved competitive results in the nuclei instance segmentation task. To add more data to the fine-tune training, we used the Lizard dataset [39]. It is a large-scale dataset for colonic nuclear instance segmentation and classification. We leverage the parts of the dataset from three different sources: GlaS, CRAG, and DigestPath. This additional dataset contains 238 patches in total. We gradually add them into the fine-tune training and test the performance on the original test set. As shown in Fig.6, when adding more data to the fine-tune training, even though the data come from different sources, the model benefited from the extra data and improved the segmentation performance on the original dataset. ### Ablation Study The nuclei segmentation experiment has both a linear stage and a fine-tune stage, which makes it suitable for testing the model's robustness and final performance. We ran ablation studies on this task to test the proposed modules. The results are shown in Table 10. The linear stage and the fine-tuning stage were performed on the CoNSeP dataset, while the external test was run on TNBC. From the results, we found that in the linear stage, where the backbone is frozen and only the linear head is fine-tuned, the proposed full model achieved the best performance. Removing any of the modules leads to a decline in performance, which confirms that these modules improve the model's robustness. Removing the local view made by color augmentation led to the smallest decrease in the results. This meets our expectations, since we use color jitter as a base transform in the other local views; this view uses additional color augmentation techniques to further increase robustness to color variation. Removing the additional global view built from the multi-scale input and removing the MIM view result in the biggest performance declines. This indicates that the multi-scale pyramid of the WSI can provide valuable information and that MIM can improve the quality of the feature representations. In the fine-tuning stage, where all parameters are fine-tuned, the performance gap among the models was reduced; in this stage, the data influenced the performance more. But the model with all modules still obtained the best results. Then we ran an external test on the TNBC dataset without any further fine-tuning. The model without MIM got the worst results, indicating that MIM is significant for extracting universal image representations.
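For completeness, the patch-shuffling operation ablated above (column P in Table 10) amounts to permuting a grid of non-overlapping tiles within a crop; a minimal sketch follows, in which the tile size is an assumed parameter rather than the value used in our training.

```python
import torch


def shuffle_patches(img, tile=32):
    """Randomly permute non-overlapping tile x tile blocks of an image tensor [C, H, W]."""
    C, H, W = img.shape
    gh, gw = H // tile, W // tile
    tiles = img.unfold(1, tile, tile).unfold(2, tile, tile)   # [C, gh, gw, tile, tile]
    tiles = tiles.reshape(C, gh * gw, tile, tile)
    tiles = tiles[:, torch.randperm(gh * gw)]                 # shuffle the grid positions
    tiles = tiles.reshape(C, gh, gw, tile, tile).permute(0, 1, 3, 2, 4)
    return tiles.reshape(C, gh * tile, gw * tile)
```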
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & C & P & M & G & DICE & AJI & DQ & SQ & PQ \\ \hline \multirow{4}{*}{Linear} & βœ“ & βœ“ & βœ“ & βœ“ & **0.7513** & **0.3494** & **0.4408** & **0.6865** & **0.3042** \\ & βœ“ & βœ“ & βœ“ & 0.7459 & 0.3383 & 0.4148 & 0.6777 & 0.2830 \\ & βœ“ & βœ“ & βœ“ & 0.7393 & 0.3295 & 0.4239 & 0.6729 & 0.2863 \\ & βœ“ & βœ“ & βœ“ & 0.7361 & 0.3305 & 0.3939 & 0.6701 & 0.2661 \\ & βœ“ & βœ“ & βœ“ & 0.7334 & 0.3195 & 0.4047 & 0.6755 & 0.2752 \\ \hline \multirow{4}{*}{Fine-tune} & βœ“ & βœ“ & βœ“ & **0.7944** & **0.4355** & **0.5233** & 0.7090 & **0.3728** \\ & βœ“ & βœ“ & βœ“ & 0.7930 & 0.4236 & 0.5026 & 0.7045 & 0.3560 \\ & βœ“ & βœ“ & βœ“ & 0.7860 & 0.4314 & 0.5145 & 0.7009 & 0.3620 \\ & βœ“ & βœ“ & βœ“ & 0.7928 & 0.4299 & 0.5064 & **0.7097** & 0.3611 \\ & βœ“ & βœ“ & βœ“ & 0.7891 & 0.4307 & 0.5214 & 0.7076 & 0.3708 \\ \hline \multirow{4}{*}{External Test} & βœ“ & βœ“ & βœ“ & **0.6791** & **0.4244** & **0.5492** & 0.6816 & **0.3765** \\ & βœ“ & βœ“ & βœ“ & 0.6686 & 0.4160 & 0.5395 & 0.6747 & 0.3714 \\ \cline{1-1} & βœ“ & βœ“ & βœ“ & 0.6721 & 0.3840 & 0.5186 & **0.6907** & 0.3601 \\ \cline{1-1} & βœ“ & βœ“ & βœ“ & 0.6143 & 0.3869 & 0.4763 & 0.6259 & 0.3188 \\ \cline{1-1} & βœ“ & βœ“ & βœ“ & 0.6632 & 0.4243 & 0.5286 & 0.6746 & 0.3606 \\ \hline \hline \end{tabular} \end{table} Table 10: Experiments for exploring modules’ influence on CoNSep and TNBC datasets. In the table, C denotes using color augments to add a local view. P means Patch shuffling. M denotes using MIM. G represents leveraging multi-scale input to build extra global view. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline & Methods & DICE & AJI & DQ & SQ & PQ \\ \hline \multirow{4}{*}{Fine-tune} & Full fine-tune & 0.8106 & 0.4766 & 0.5797 & 0.7268 & 0.4228 \\ & freeze blocks & 0.8153 & 0.5031 & 0.6163 & **0.7362** & 0.4550 \\ & LoRA & 0.8181 & **0.5104** & 0.6200 & 0.7343 & 0.4568 \\ & Adapter & **0.8183** & 0.5085 & **0.6224** & 0.7327 & **0.4575** \\ \hline \hline \end{tabular} \end{table} Table 9: Experiment results with different fine-tuning methods. Figure 6: Segmentation performance with additional data. Discussion From the results of experiments and ablation study, the model achieved good performance on several downstream tasks, showing promising potential to be used as the backbone for extracting feature representations in WSI processing. The model can provide a better benchmark compared with models pre-trained on natural image datasets and improve the performance of tasks with good generalization ability and transferability, which has the potential to facilitate WSI processing related researches. By efficient adaptation, the foundation model can be easily deployed for different downstream tasks. Pretraining on a large dataset nurtures bountiful data diversity. The large-scale model provide the ability to process the big dataset and extract universal feature representations. By ablation studies, the additional local views building with color augmentations, MIM and patch shuffling enhance the robustness of the model. These methods drive the model to learn coloraugment-, mask- and transform-invariant features. With the leveraging of multi-scale input to build the extra global view, the model is able to better process WSI and have better generalization ability. The input pyramid introduces extra global view at different scale, providing various fields of view. The experiments in this study is diverse and comprehensive. 
Three kinds of downstream tasks are included: slide-level subtyping, patch-level classification and nuclei segmentation. These tasks use data at different scales: patch-level WSI images, which can be processed directly with the encoder, and slide-level WSIs, which need to be split first and then processed with a specific framework such as MIL. In the experiments, different downstream adaptation strategies are adopted. Whether freezing the parameters of the pretrained model or fine-tuning it on downstream tasks, the proposed model achieved competitive results in the corresponding tasks. To choose the adaptation strategy, we should first consider the task property. For example, the pretrained model can be used directly as the feature extractor in the MIL framework when fine-tuning the extractor is difficult due to the large number of patches in each slide. Second, the data amount also constrains the strategies. For small downstream datasets, it is better to use few-shot learning methods or parameter-efficient learning approaches, such as LoRA. With these techniques, the pretrained model can be efficiently adapted to the downstream tasks with limited data and computation resources. However, during the experiments, there were also some bottlenecks restricting the research. Here, we list some factors that may facilitate future study: **More Data.** More than 180M patches extracted from over 11000 slides make up the large dataset. As mentioned before, the experiment results demonstrate the benefits of pretraining on a large dataset, as shown in Fig.2 and Table 4. This raises a question: is the dataset large enough? Though the dataset has improved the models' performance in various downstream tasks, there still remains the necessity to expand the dataset. Compared with the well-known natural image dataset ImageNet-22k, which contains about 14M images, our dataset is quite large. However, some factors constrain the efficacy of the dataset. First, the distribution of different staining techniques is unbalanced. Various staining methods are employed in WSI analysis for different purposes; for example, H&E allows the observation of pathological alterations, while IHC enables the assessment of protein expression within the tissues. To make the pretrained model a better foundation model for WSI, we need to find more data using different staining methods, considering the currently unbalanced distribution. Second, we need more data to enhance the class diversity. Compared with ImageNet-22k, which has 22k classes of images, clinical WSI data may not have so many classes. But enlarging the dataset's class diversity, for example by adding data from different organs with various symptoms, is a feasible approach to improve the generalization ability. Third, compared with natural images, WSIs exhibit high mutual similarity, which hinders the efficacy of large datasets. Hence, more data should be collected to construct a better 'large' dataset. **More Downstream Tasks.** Like the ImageNet challenge for comparing models' performance in the natural image field, there are some well-known public challenges in the WSI area, such as CAMELYON16, on which the proposed BROW got competitive results. Some datasets are widely used, like TCGA-NSCLC, but there is no official selection of the data or of the set splits for cross-validation. This setting influences the performance a lot during reproduction.
Some challenges have already been addressed with fairly good results because of factors such as the lack of diversity or the inherent intricacy of the task. More standard downstream tasks for the WSI field are needed, not only to provide more convincing results, but also to save researchers' time in reproducing prior works. **More Flexible Model.** The multi-scale structure is a distinct property of WSIs. Each scale contains a WSI image at a different resolution. We leverage this structure to construct the extra global view. A more flexible model that can better use this property may further improve the efficiency and efficacy. For example, clinical experts can make a fast detection using coarse images and a further diagnosis using higher-resolution images, and this workflow can be integrated into an auto-diagnosis pipeline to expedite prediction. Task-specific models always focus on the maximum magnification. Some researchers have paid more attention to the contextual details; for example, [57] introduced concentric patches at multiple resolutions for high-resolution semantic segmentation. The pyramid structure provides images at different resolutions, which is potentially beneficial for dealing with data from different centers with various magnifications. Therefore, a model with more flexibility to deal with the multi-scale input is needed. ## VI Conclusion This study proposed a large-scale foundation model, called BROW, for WSI processing. With the help of a large dataset, a scaled-up model size and an appropriate training framework, the proposed model is able to extract better feature representations for WSI images. Whether directly integrated into the original framework or efficiently adapted with downstream data, the model achieved competitive results on the slide-level subtyping task, the patch-level classification task and the nuclei instance segmentation task. Through a comprehensive experimental comparison and analysis over ten datasets, the proposed model demonstrated its superiority, robustness and generalization ability.
2306.00080
AI Imagery and the Overton Window
AI-based text-to-image generation has undergone a significant leap in the production of visually comprehensive and aesthetic imagery over the past year, to the point where differentiating between a man-made piece of art and an AI-generated image is becoming more difficult. Generative Models such as Stable Diffusion, Midjourney and others are expected to affect several major industries in technological and ethical aspects. Striking the balance between raising human standard of life and work vs exploiting one group of people to enrich another is a complex and crucial part of the discussion. Due to the rapid growth of this technology, the way in which its models operate, and gray area legalities, visual and artistic domains - including the video game industry, are at risk of being taken over from creators by AI infrastructure owners. This paper is a literature review examining the concerns facing both AI developers and users today, including identity theft, data laundering and more. It discusses legalization challenges and ethical concerns, and concludes with how AI generative models can be tremendously useful in streamlining the process of visual creativity in both static and interactive media given proper regulation. Keywords: AI text-to-image generation, Midjourney, Stable Diffusion, AI Ethics, Game Design, Digital Art, Data Laundering
Sarah K. Amer
2023-05-31T18:01:04Z
http://arxiv.org/abs/2306.00080v2
# AI Imagery and the Overton Window ###### Abstract AI-based text-to-image generation has undergone a significant leap in the production of visually comprehensive and aesthetic imagery over the past year, to the point where differentiating between a man-made piece of art and an AI-generated image is becoming more difficult. Generative Models such as Stable Diffusion, Midjourney and others are expected to affect several major industries in technological and ethical aspects; striking the balance between raising the human standard of life and work vs exploiting one group of people to enrich another is a complex and crucial part of the discussion. Due to the technology's rapid growth, the way in which its models operate, and "gray area" legalities, visual and artistic domains - including the video game industry - are at risk of being taken over from creators by AI infrastructure owners. This paper is a literature review examining the concerns facing both AI developers and users today, including identity theft, data laundering and more. It discusses legalization challenges and ethical concerns, and concludes with how AI generative models can be tremendously useful in streamlining the process of visual creativity in both static and interactive media given proper regulation. AI text-to-image generation, Midjourney, Stable Diffusion, AI Ethics, Game Design, Digital Art, Data Laundering ## I Introduction Text-to-image generation is an AI model that uses neural networks, taking natural language input (a prompt) from the user and generating an image based on that written description. These models are generally composed of two parts: a language model that transforms the input text into a latent representation, and an image generation model. The technology is evolving at a rapid rate, and both researchers and industry professionals are racing to keep up with the constant changes [1]. The ethical ramifications of AI image generation entered the mainstream conversation when a game designer from the United States submitted an AI-generated image to an art competition without disclosing that the 'artwork' was not created by himself, and went on to win the first prize [2]. The participant submitted an image that was not created by himself through artistic ability or effort, but by inputting keywords and fine-tuning the prompt to best utilize the AI tool's method of operation to produce an aesthetically pleasing picture. Figure 1 shows the winning image generated using Midjourney. The event caused a stir amongst artistic communities and various fields, including the game industry, because of its ethical concerns: not only was the winning image devoid of artistic ability when compared with other art created by artists through years of dedicated practice and education, but even the text prompt given to the AI generator cannot be categorized as 'artistic' the way fictional writing or poetry are [3]. These text prompts can be optimized into templates to produce imagery in more predictable ways [4]. As such, these prompts will soon not need a human to write them - a language model could do it faster [5]. The result becomes a fully automated process devoid of the human element of artistry and expression, of which image generation is only the output rather than the purposeful intent. There are significant economic ramifications to allowing a small number of AI infrastructure owners to monopolize industries, further widening the technological gap between developed and developing nations [6][7].
That is, however, only one point of concern. There are several ethical issues that AI image generators pose, including _how_ they generate their imagery. _Data Laundering_ is the obtaining of data without the consent of the original owner; the data is then reused, transformed and sold for profit by legitimate companies. This also applies to educational institutions, which collect data under the claim of research purposes, but then the data can be silently re-purposed for a commercial AI model [8]. This paper focuses on text-to-image AI models and their effect on visual media and its creators, especially in the video game industry. Section II provides an overview of current AI text-to-image models, the difference between AI imagery and man-made digital art, and the visual development process in game production. Section III discusses both the benefits of this AI technology and the dangers that directly affect people economically and socially. Section IV outlines non-legislative action taken by independent institutions to mitigate the risks of these models' use. Finally, Section V states the conclusion and future work. ## II Background & Related Work ### _Text-to-Image Diffusion Models_ Text-to-image AI generators use machine learning on a dataset comprised of billions of images. The model learns and then replicates patterns of data while considering the relationship between text and picture to create unique new combinations. The original imagery in the dataset is man-made, scraped from the internet regardless of aspects such as the nature of the image or copyright. The model's output is a new image based on the content it has been trained on within a frame of parameters and constraints. As deep neural networks advanced, so did accuracy in correctly identifying subjects in images, given the unprecedentedly colossal size of the training data used. The Generative Adversarial Network (GAN) is one such approach to image generation AI [9]. Its goal is to generate unique imagery from a dataset (e.g. LAION-5B) by learning the underlying statistical data distribution across billions of images, recognizing shapes and subject material and associating them with specific words. One method of doing so is to designate one component of the model to act as the student (Generator) and the other as the teacher (Discriminator). The model keeps learning until it reaches the point when the Generator successfully bypasses the Discriminator, meaning the resulting image is nearly indiscernible from the dataset's statistical data distribution. Figure 2 shows an illustration of the architecture [10][11][12]. The model eventually reaches training convergence - the state where data loss settles within an error range around the final value. Further training and larger datasets will not improve the model. To resolve this, _diffusion_ models were introduced [114][115]. Current tools like Stable Diffusion [13], DALL-E 2 and Midjourney are improved versions of this model. The training data has Gaussian noise added to it, then the process is reversed and the noise is removed from the image (see Figure 3) to teach the model to recognize the subject matter and, based on that, remove the noise correctly. This process of learning is applied to random seeds to generate images similar in content but different in composition and style [14][15][16]. In earlier image generation models, the generation cost was very high, making them impractical to adopt at a real-world level.
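As a minimal illustration of the noising step described above (a generic DDPM-style forward process, not the exact training code of Stable Diffusion or Midjourney), the sketch below adds Gaussian noise to a clean image tensor at a chosen timestep; the variable names and the linear beta schedule are assumptions made for clarity.

```python
import torch

def forward_noise(x0: torch.Tensor, t: int, num_steps: int = 1000) -> torch.Tensor:
    """Add Gaussian noise to a clean image x0 at diffusion timestep t (DDPM-style)."""
    betas = torch.linspace(1e-4, 0.02, num_steps)        # assumed linear noise schedule
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t]     # cumulative signal retention at step t
    noise = torch.randn_like(x0)
    # Noisy sample: sqrt(alpha_bar) * x0 + sqrt(1 - alpha_bar) * noise
    return alpha_bar.sqrt() * x0 + (1.0 - alpha_bar).sqrt() * noise
```

A denoising network is then trained to predict and remove the added noise; sampling from pure noise with different random seeds is what yields images that are similar in content but different in composition and style.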
Diffusion-GAN approximates several steps of the denoising process, and this training leads to the generator mapping the noise to a generated sample in only one step [17]. This in turn allows the GAN to learn a faster process of denoising and allows for more diverse sampling and more stable training in both image and video processing [18]. It is important to note, however, that even with diffusion models, the latent image produced from the training is not 100% identical to the original, but highly and unmistakably similar (see Figure 4). The newer iterations of these models use Generative Adversarial Networks (GANs) [19] linked with Neural Style Transfer (NST) to generate imagery in the art _style_ of another artist. A user can therefore generate an image of a particular subject or composition mimicking a particular artist's art style and capturing the textural information from a completely different image sample in the dataset. Several layers of this manipulation can occur within the network, containing the correlations between different filter responses over the spatial extent of the feature maps [20][21][22][23][24][25]. Figure 5 is a representation of this functionality - an AI-generated image for the science fiction movie _Bladerunner_, in the 19th century post-impressionism style of renowned artist Vincent Van Gogh [26][27]. Note that Van Gogh himself never created such an image; his style is transferred entirely from his existing works. Images often look logically sound at first glance, but subtle flaws in realism and anatomical accuracy become evident; the results are not as nuanced as a human artist's. Illogical elements such as too many fingers on a hand or unnatural postures and bone placement can appear in the results. The more complex the subject matter and composition, the more problems arise. As mentioned earlier, the algorithms used in these AI models do not produce the same results twice, hence writing the exact same prompt will produce different results in terms of composition and perspective every time. To prevent this from happening and focus on making changes to a specific image output, a _seed_ parameter must be utilized. This is accessible in commercial image generation tools such as Midjourney (Version 5). The user can generate image outputs that are different, but retain the same general composition, perspective and directional value [28]. A prompt is input followed by a seed keyword and a set of 4-5 numbers in a form as seen below: `/imagine erupting volcano --seed 67854` This allows the user to access the exact same set of images and generate more variations of that set within seconds. See example in Figure 6. Fig. 4: Left: original image by human photographer; right: latent image generated from AI training [49]. Fig. 5: AI-generated image by a Reddit user in Van Gogh's artistic style [26]. ### _Artist Vs Algorithm_ Visual conveyance and artistic expression have been used as a tool of communication and storytelling across all civilizations. Even in today's technologically-dominated society, the word 'art' invokes thought of this creative human endeavor in the traditional sense - painting on canvas, drawing with charcoal, paper and other types of media. In many cases, the artist creates a work out of nothing, such as drawing a detailed portrait where only a blank piece of paper existed (see Figure 7). The artistic community is bound by ethical obligations and respect for the craft.
Copying another artistic creator's work with the purpose of benefiting financially or socially, especially when done without the permission or knowledge of the original artist, is plagiarism. Being accused of such can prove harmful to the plagiarising artist's career, for it is deemed disrespectful to the years of hard work of the plagiarised artist, and possibly dangerous if committed for the purpose of identity theft or slander. In the 1980s, the term _Digital Art_ was introduced, and this new medium of art creation rapidly grew in popularity from the 90s onward with the introduction of powerful tools such as Adobe Photoshop and Painter [29][30]. These tools allow the artist to execute the brush motion, penmanship and jitteriness on a digital screen, with added benefits such as the ability to correct mistakes inexpensively and create infinite copies of the artwork, thus enabling affordable solutions for people seeking to purchase art within a limited budget. Although digital painting excludes some traditional aspects, such as manually mixing paints, it still requires a lot of artistic talent, and years of training, coordination and time to produce quality artwork. The paper and pencil are replaced with a graphics tablet and a digital pen respectively, but everything else remains the same (see Figure 8). Solid understanding of drawing fundamentals, composition, perspective, color theory, lighting and much more is still required to produce work of any quality [31]. Fig. 6: Top row: first-time prompt input into Midjourney; bottom row: second-time prompt input using the same seed [28]. Fig. 7: Traditional sketching with pencil and paper (Loish Van Baarle) [87]. Fig. 8: Digital painting in Photoshop using a graphics tablet (Ross Tran) [88]. Figure 9 is an example that illustrates the similar accumulated knowledge and artistry required in both mediums. ### _Art in Game Design_ Many video game designers and artists combine both traditional and digital aspects in their work, such as creating inkwork on their computer, then adding detail to the digital canvas with traditional textures scanned from the real world. Further advancements in hardware led to the availability of painting software portably, such as on the iPad tablet, which is home to digital painting software such as Procreate and Mental Canvas amongst others. The quality of these tools improved to the point where they can be used for commercial game asset design [32] (see Figure 10). As technology advanced further and 3D software became commercially available, tools like Blender became popular industry go-tos for game artists to create character models, props and environments to build original, immersive game worlds. As with 2D asset creation, 3D modelers still need to have an understanding of the fundamentals and how things exist on a 3-dimensional plane (see Figure 11). Working on a single asset or artwork can take hours, or even days of work. Visual assets required for game creation are not limited to the end game product the player sees upon purchase. The following subsection explains the game production process in brief, and the substantial amount of visual media and level of detail required to create a successful end product. ### _Game Production Process_ The general process of video game creation can be summarized into 4 main phases, as illustrated in Figure 12. They are: conceptualization, pre-production, production, then post-production.
Game Designers are typically the primary team working during the conceptualization and pre-production phases. Game Artists join during the latter phase in which a lot of visual exploration, conceptual art and storyboarding must be created in order to hone and find the most suitable art direction that serves the game's genre, gameplay mechanics and storytelling objectives. As explained in the previous subsections, artists create these assets using their skill and knowledge [33]. Fig.11: 3D art and modelling for an indie game (Jonas Manke) [92] Fig.12: Game production process Fig.9: Left: traditional portrait in watercolors (Artist: Dahye Song), right: digital portrait painted in Painter II (Te Hu) [89][90] The pre-production phase is arguably the most important in the process, as a well-researched, well-planned pipeline, realistic timeline and manageable milestones with a refined focus on the core gameplay mechanics, features and overall feel of the game will drastically improve the project's success and reduce expensive mistakes down the line. The main output of this stage is a Game Design Document, the reference guide all project members refer to in order to stay on track and avoid scope creep. An MVP prototype of the game may also be produced with all the teams working together, with very polished art assets and presentation to sell the concept. The production phase is where work on the full commercial game begins. Game Designers work on the entirety of the game's user experience, progression, and information architecture. They communicate with the Game Artists to make sure the artistic style from the environmental assets and character designs all the way to the UI elements and their placements are cohesive and within the agreed upon milestones. Some artist roles required for game creation include: * _Concept artist:_ develops the look and feel of the game. Creates quick and/or detailed drawings of environments, characters, vehicles and game world props * _Splash artist:_ creates art for the game's loading screens and promotional material * _Storyboard artist:_ develops a visual telling of the game's story narrative and camera work * _Character artist:_ designs characters and their movement, wardrobes and tools * _Background artist:_ creates assets for game maps, backdrops and environments * _Texture artist:_ creates the textures and skins for character and non-character assets * _Interface artist:_ creates intuitive interfaces that easily communicate information to the gamer * _Art director:_ develops and maintains the overall creative vision and narrative style It is important for Game Designers to communicate with the development team to make sure certain artistic elements can be manipulated and translated in code logically. The Game Development team works as the implementer and problem solver, building the game using the assets and design provided to them [34]. Logic and design elements may be revised and redone during this phase to improve user experience (UX) where necessary. Testing is iterative to validate all possible scenarios of gameplay and to intercept bugs and unforeseen exceptions. It continues into the post-production phase; code and artwork is complete, save for final edits and changes. Marketing and advertising for the game is at its peak in this phase, as the game is being prepared for release. 
While varying to some degree from one studio to another, the game production process is a complex one that combines the creativity and originality of the design and art teams with the problem solving skills of the development team. ### _The First AI Game_ A video game designed using AI imagery was released as a browser version in 2022 for free use [35]. The game, named Shoon, is a simple sci-fi 2D shooter where all visual game assets, from the playable spaceship and foes to the post-apocalyptic background scenery, were generated in Midjourney by entering word prompts and then importing the images into a game engine without artist or designer involvement. A screenshot of the game can be seen in Figure 13. The imagery produced is highly detailed and textured, and a solid understanding of the AI generators' parameters allows the end user a higher level of control over each output image, especially in the case of using and accessing the same seed. However, the final result lacks the cohesion necessary for a visually consistent and aesthetically pleasing game. There is no consideration for user experience design, and the information architecture is absent [36]. ## III Discussion Rapid advancement in the generation of high-resolution imagery using text-to-image AI models enabled it to fully break into the mainstream discussion. Professionals in various industries discussed the many uses this technology has in improving and automating aspects of the development lifecycle in terms of speed and efficiency [37]. Uses include: A/B testing, ideation and prototyping, data search and referencing, animation in-betweening and many more. However, this also opens the door to serious discussions on responsible use pertaining to economic, social and ethical arenas. One opinion in favor of AI image generation technology in visual industries - including the advertising, animation, and game development industries - argues that it is no different than using digital painting software to create art. That is, however, incorrect. As explained in Section II, there is no incentive for the AI model user to have knowledge of art fundamentals, nor to dedicate time or employ skill to produce results. Unconditional support of the tool in its current form does not consider the legal (and moral) ramifications of scraping human-made content without the creators' consent for the monetary gain or social clout of the AI model user, particularly when results generated from such models directly affect the personal and professional life of the original artist creator. In informal debate, an analogy is used to better explain the issue to people with no understanding of either field: claiming an AI model user is an artist by inputting keywords into a generator is not dissimilar to claiming someone who places an order and heats a pre-cooked meal is a chef. The real chef is the person who _created_ the meal - the one who ordered and heated it is the customer. Another argument made to pose AI-generated imagery as comparable to human art (and thus capable of replacing human artists in a business setting) is the claim that the result is transformative, and no different from a human artist looking at reference photos before starting their own piece. That is an inaccurate comparison, as using references to create a new artwork by hand for a new purpose is legally recognized as a transformative work under copyright laws worldwide [38].
A _transformative work_ by definition cannot replace nor undermine the referenced work's value or intent, nor does it infringe upon other creators' intellectual property, reputation, or personal and professional wellbeing [39]. Transformative work further explains why every artist has their own visual style and method of operation. These two factors are directly affected by the individual artist's knowledge, life experience, belief system and other humanistic factors. Much of the value in the end work includes the artist's _interpretation_ of the subject matter, not merely how accurately they can copy from life. Therefore, training an AI model on an artist's creations, which incorporate the interpretative voice, identity and style they are known for, in order to directly compete with the artist's identity, livelihood, reputation and social safety, goes against the definition of transformative work. In the United States and other regions, intellectual property laws protect a creator's rights to his or her work by granting them the legal right to exclusively profit from that work. These laws exist in several variations in order to protect both individual and institutional intellectual properties. However, current laws as they are do not provide adequate protection for the original creators in this context, as they do not include AI models in their definition. The fact that current AI models can only reach their present level of quality by training on man-made art and photography, and that one artist's style can be copied and applied to another image, calls for a deeper discussion of how this affects creators. Another important topic to discuss is the emergent profit-shifting crisis - profits shifting from artists and creators to large corporations who own the infrastructure of the AI models [40]. ### _Legal Landscape_ Commercial text-to-image AI generators such as Midjourney and Stable Diffusion use a large dataset called LAION-5B. Two of the ethical issues this dataset poses are the scraping of original content without consent, and the use of data such as personal photos, artwork, game assets, and even medical records [41]. Artistic output is what is subject to copyright by law; the process of creation itself is not, nor is the style of expression [42]. As such, this leaves AI image generators in a legal gray area, as legal professionals are divided over whether or not the result of an image generator can be owned by a particular person or entity. For example, copyright law in the United States indicates that only a work created by a human being can be copyrighted [43]. Lawsuits have been filed against the AI models' infrastructure owners in the United States [44][45] by both individual artists and large corporations [46][47]. It is likely that copyright laws will be revisited and amended in the upcoming years, given the rapid advancement and complexity of the matter. As of this publication's release, the input artwork used to train the AI models is the legal property of the artist that created it, but the AI image has no legal owner, yet can still be used commercially with no compensation to the artists whose work was used to produce such imagery. The lawsuit is expected to be a complicated one filled with technicalities that may serve as legal escape clauses for the AI infrastructure owners.
One such technicality is that the dataset used to train the commercial version of a model like Stable Diffusion [48] contains not the original copies of the scraped training images, but rather the latent copies of those images that were generated during the training process [49]. Therefore, the legal loophole expected to be exploited is the claim that these latent images are _derivative works_, not originals. It is unsurprising that, should the AI infrastructure owners indeed compensate every artist whose work was taken into training the model, the model would not generate income, and would operate at a loss. The current models and legal landscape do not protect or improve artist creators' quality of life, but enable corporations to achieve monopoly and encourage employee layoffs - whilst retaining the employees' talent and skill without consent. An important note to add is that the results of models like Midjourney are skewed towards producing imagery in the styles of some of the best artists in the world, both living and deceased. In fact, this is such a vital characteristic of the model's appeal to its customers that, upon learning of the lawsuits against Midjourney and Stable Diffusion, many took to the internet to voice their displeasure, as these models would not have the commercial value they currently do without being fed skilled artists' styles. In March 2023, the US Copyright Office issued a new policy, reiterating that only human-authored work can be copyrighted, and imagery produced by a machine cannot, especially since the model's user has no way of knowing what the final result will be, and a prompt is insufficient to be considered human artistry. A prompt is merely an instruction to a machine [95], and what the prompt writer has in their head may look completely different from what the computer generates. In such a setting, a novel written by a human but illustrated with AI imagery would have to be categorized differently; the text belongs to the author, but the art does not, hence the author has no legal claim should his novel's illustrations be taken into another project without compensation. However, the ethics of using artists' copyrighted works as training data to make the models as sophisticated as they currently are has still not been addressed. In the European Union, legal discussions have begun whilst involving independent experts to evaluate the models' robustness and the kind of training data they use [96]. It is unclear if new versions of the models will resolve this issue, but as it stands, artists are not given the right to opt out of them, and spokespeople for the models prioritize their paying customers, assuring them that they did not remove the artists' work from the data [50][51]. There are people for and against this school of thought, with people standing to directly profit from these AI models campaigning to normalize their use in game development and other industries. This is arguably another example of the Overton Window theory in effect, and is discussed further in Subsection K. ### _Employer Work Theft_ The statement 'Democratizing art' [52] was used as a form of advertisement for AI models like Midjourney and Stable Diffusion. It (falsely) implies that artistic skill is a natural resource that is hoarded by an elite few and should be re-distributed, rather than the reality that art is a teachable skill whose mastery is acquired over a life-long journey of personal dedication and thousands of hours of labor.
This narrative, if left unchecked, usurps ownership from actual creators to infrastructure owners. As discussed in Subsection A, a large segment of the text-to-image AI generators customer base are pushing for this technology to be competent enough to replace the very artists without which the imagery would not attain a professional standard, thus using artists' own skill against them. The video game industry is known for recurring unsustainable work environment practices, such as the 'crunch time' issue, where employees may find themselves working 80\(+\) hours per week for little to no overtime. Artists are commonly underpaid for their level of skill. In the animation industry, worker exploitation was so severe for decades that unions had to be created to protect their basic rights [53]. In loosely-regulated economies, profit growth may lead to the disregard of ethical issues as the employer's objective becomes to replace as many creatives as possible with machines, even if at the detriment of quality requirements such as visual direction. ### _Devaluation of the Mastery Concept_ Another widely-discussed topic is the effect of AI tools on people's perception of how long it realistically takes for high-quality work to be done, and how painstaking it is to master a craft of any kind. Several studies have been performed on people from different backgrounds, with results indicating that having tools that produce work that typically takes hours or weeks in just seconds creates a warped perspective on the world. People become impatient and entitled to others' hard work for no compensation. [98][106] Furthermore, it breeds a disinterest in learning or mastering a skill. This raises concerns about the state of critical jobs in future generations. For example, GPT-4 - an AI language model text generator, has been tested on a U.S medical license exam, and passed with flying colors. Given the fact that student plagiarism has skyrocketed in many educational institutions with the availability of AI image and text generators, many have raised concerns on the state of high-risk services in the future [109]. If a medical student can pass his exams using AI rather than study biology, it begs the question what their real skill and knowledge would be like when there is a real patient under their scalpel [99][107]. It also raises the question regarding the possible degeneration of basic daily skills, such as clear writing and critical thinking [110][111]. Online Art communities, contests and asset stores have been facing backlash from game designers and artists for not having proper measures in place to detect when imagery being uploaded to their galleries is AI-generated and not truly the uploaders' own work. Simultaneously, AI users have taken to the internet to explain how they use these tools to create passive income, whilst showing the process of how they take artists' work from online galleries and feed them into their models to create imagery, then proceed to sell it online. Visual social media platforms such as Instagram have become popular destinations to share AI imagery without disclosing its nature, and with that came an increase in viewers' demands urging the uploaders to show a process video or work-in-progress to prove they are indeed their own creations. ### _Data Laundering_ Large datasets like LAION-5B are composed of billions of images, assets and artwork scraped from the internet without the consent of their owners. 
Data laundering occurs when that data is reused, transformed and sold for profit by companies. The images in these datasets include artist portfolios, stock photography, game assets and more [54]. It is important to note that the research teams behind the AI models do acknowledge the copyright and ethical issues these models pose, but make the statement that since it is for academic and research purposes, they are legally in the clear. How data laundering occurs can be simplified in the short steps below: **STEP 1**: Visual media (pictures, art, illustration, logos, etc.) is scraped from the internet **STEP 2**: Scraped media is stored in a dataset or group of datasets **STEP 3**: Scraped media is used to train AI text-to-image models using GAN and Diffusion architecture **STEP 4**: Training produces and stores latent images based off of the original media **STEP 5**: New imagery is later generated from the stored latent image bases by an AI end user to sell Copyright law in the United States generally does allow such use of data to an extent. However, it is tech corporations that fund these academic/non-profit entities to create the datasets and develop the AI models. Afterwards, the corporation takes over the developed tool to generate profit. This effectively creates a pipeline from the academic non-profit environment into the corporate for-profit environment, bypassing copyright laws and evading legal accountability. For example, Stable Diffusion, despite being now owned by Stability AI, was originally created by _Machine Vision & Learning_, a research group at the Ludwig Maximilian University in Munich that relied on the LAION-5B dataset. This allows the data to be laundered and then re-licensed under a legitimate entity for commercial use [55]. A lack of control and regulation on the types of data and images scraped leads to the collection of harmful and illegal content in these datasets [56], as explained in Subsection J. ### _Intellectual Property Violation_ Even though copyright laws vary significantly from country to country, each region has its own set of regulations due to the universally-accepted fact that intellectual property - while not always tangible - is the legal property of its creator. The rapid and unregulated adoption of AI image generators complicates people's ability to protect their creations. As explained in Subsections A and D, there currently exist methods through which AI architecture owners exploit legal loopholes to further train their models. The easiest targets for these text-to-image generators are smaller creators and independent artists. Small creators, regardless of talent, generally do not have the resources to firmly protect their intellectual properties. However, large corporations do, and as such, could use the AI image generation tools to launder small creators' work into their own productions, and not only that, but also be able to patent and protect said productions as part of their assets [57][58]. Known artists' work is already being fed into these models without consent. A U.S.-based illustrator working for known corporations such as Disney Studios is one of many artists whose copyrighted work was scraped from the internet without her or her employers' consent to be made into an AI model that specifically produces imagery in her artistic style [54]. The artist's original art and the AI-generated imagery made to compete directly with her can be seen in Fig. 14.
Fig. 14: Top row: Original artwork by illustrator Hollie Mengert; bottom row: Stable Diffusion AI-generated images in the artist's style [54]. Game artists are facing similar issues; renowned concept artist Greg Rutkowski is one of the most well-known victims of these models [93]. His fantastical game art is commonly used without consent to train AI models and re-create images in a similar style, as in Figure 15. Non-living artists did not escape such infringements, either. Renowned Korean artist Kim Jung Gi had his work fed into an AI model within days of his death and published online [59]. ### _Cross-industry Monopoly & Unemployment_ Although AI text-to-image generators have major benefits in streamlining artists' work pipeline, current commercialized models such as Midjourney & Stable Diffusion are owned by private companies that allow customers to create an enormous number of images within seconds for a small monthly fee with no repercussions for copyright issues or data misuse - with some subscriptions being as low as $10 a month. As can be gleaned from Subsections B and C, claiming 'democratization' of art by training models to directly compete with creators (using their own work, no less) creates infeasible expectations of how long a work takes to be created and of the level of knowledge and skill required. Set against the possible monopolization of several industries, the resulting economic problems, and widespread layoffs of the very people whose skill made the AI models effective in the first place, this is a stark juxtaposition. These models may also lead to the widening of the gap between junior-level game designers and established senior-level ones, with the former struggling to find work or learn the craft of their industries as large AI infrastructure owners do away with their job positions altogether and invest in AI subscriptions for a fraction of the price it takes to hire a game designer or artist. The ramifications are already in effect - freelance illustrators and concept artists report struggling to find work, as models like Midjourney produce visually complex pieces within seconds where even an experienced artist would take at least a few hours to do the same. The chasm of socio-economic inequality will widen as a result [97]. Monopoly fatalism is also a concern; many policy makers and economists point out that tech companies - the likes of Google, Meta, Microsoft and more - are monopolies and thus have the power to take advantage of their consumer base, and even non-consumers [60]. Tech giants' economies of scale, data collection, privacy invasion, and network domination create unfair competition and eliminate smaller competitors before they can exist. This sets the stage for predatory pricing, less motivation to innovate, and lower quality of customer experience and support. Furthermore, these firms' domination over the market is deepened by a psychological monopoly status where many people fail to name substitutes or decent competitors even if they exist. Occasionally, people who stand to make significant profits from text-to-image AI tools have expressed fatalistic attitudes and disregard for the ethical issues discussed prior. Brusseau points out the view that ethical boundaries may be seen not only as a hindrance to technological progress, but as merely subjective opinions that can be bent [61]. This view holds little concern for the creator's fundamental right to ownership of their creation [62].
Fig. 15: Top row: Original concept art by game artist Greg Rutkowski; bottom row: Stable Diffusion AI-generated images in the artist's style [93]. ### _Usurp of Ownership & the Death of Second-hand Markets_ With the rapid advent of the 'subscription model' throughout various industries and across all types of products, there is the ethical concern over the consequence of powerful entities owning the majority of the globe's resources and information, while the average human owns nothing, only gaining access to products or services via a borrow system. In such a system, the individual pays money every week or month to keep using a product they should be able to obtain with a one-time purchase. The ethical and economic ramifications are serious. Video games are one such product, where the model is shifting from owning a physical copy of the game to buying a license (permission) to play the game. Should the publisher, for any reason, remove the game from their store, the user has lost access to it as well, even if they had paid thousands of dollars to keep playing it. This model erodes personal ownership and the second-hand market, and takes away individuals' right to privacy, personal sovereignty over possessions, and the right to re-sell their assets. All are universal human rights [63]. Visual creators who take part in the creation of these games may find their work usurped into an AI model that keeps using it to regenerate more work at a fraction of the price, without the artists retaining the right to re-use their own work, due to legal loopholes and legislation. Highly-influential businessmen and political leaders have the power to lobby and influence copyright legislation to suit their business objectives [64]. Without a solid stance, it becomes a concern that creators' work may be taken from them and then re-sold with said creators retaining no access to their own works. ### _Mass Surveillance_ When it comes to the human right to privacy, there is much to be learned from previous experience. Data laundering associated with AI machine learning has been used before in another rapidly-growing technology - facial recognition. In the past decade, researchers from the University of Washington scraped photos from the internet to create a dataset. Personal photos were not excluded from the scrape. The data was collected, laundered, then sold for use by commercial companies such as Clearview AI, and is even used for mass surveillance by the Chinese Government nationwide, further extending economic, social and political domination over the public. The methods by which these technologies are normalized and used to discreetly collect people's private data are cause for concern. Examples of this are corporations such as Tencent and Prisma AI, which have developed AI image generators prompting people to upload their personal photos to be converted into stylized art as a harmless tool of entertainment. Unsurprisingly, this aids tech giants like Tencent in developing the facial recognition technology used to recognize people in demonstrations or riots [65][66][67]. Facial recognition is a successful example of financially-lucrative data laundering, and may be used as a template to protect the AI text-to-image model creators from legal recourse if no action is taken [68]. ### _Information Bias and Narrative Control_ Several researchers and industry professionals have showcased the danger of using AI tools for social manipulation, especially in highly-critical industries and settings.
This bias is due to several reasons, mainly the imbalance within the dataset itself. The curation of the scraped data relies on the personal preferences of the developers, intentionally or unintentionally [73]. This is expected if there is a lack of cultural and social diversity within the AI development teams. In addition to that, the nature of data scraped from the internet to populate the dataset carries ideas on race, gender and other issues [74]. Human beings are biased, and thus the information collected in the datasets is a reflection of people's sentiments. A case in point is the information bias within the revolutionary AI text generator _ChatGPT_ [69]. As with AI imagery, this chatbot relies on a massive dataset for its machine learning, scraped from the internet with emphasis on generating text that sounds humanlike. Whilst the tool is useful for research and everyday writing, it can produce incorrect and even biased information output. A biased AI tool in a critical setting such as law enforcement will cause problems such as racial profiling [70]. Another example - a hiring process using biased AI may systematically choose a specific gender over the other regardless of qualifications [71][72]. The list goes on. University of California professor Steven Piantadosi publicly shares some results of the tests performed using the aforementioned ChatGPT, such as Figures 16 and 17. If the data is tainted with bias (which is inevitable), the content produced could be dangerous to human life if believed at face value at large scale [75]. Although OpenAI, the creators of ChatGPT, added safeguards in an attempt to reduce the likelihood of people triggering the AI to produce problematic content, testers have found ways around them. In a world that relies heavily on technology and automation, there is less incentive for the regular individual to do their own research and find the objective truth. If AI tools are employed in sensitive arenas such as policing and healthcare without strict regulation, it can lead to dangerous outcomes. The same problems exist for AI text-to-image generators, which have been tested and shown to produce formulaic and potentially problematic images leaning towards certain racial or cultural stereotypes [76]. This, unregulated, is a tool of large-scale narrative control, where a set of preferences and ideas is presented as the only objective truth, whilst suppressing and even villainizing opposing views [77]. Narrative control is when a position of influence dictates a specific telling or opinion of an event, and intentionally leaves out parts or ideas they do not want known. This leash on free thought brings the risk of creating an 'echo chamber' and polarizing communities, where only one opinion is regarded as valid. Many AI ethicists across various industries call for the proposal of policies to actively narrow bias during dataset curation [78]. ### _Impersonation, Identity Theft & Slander_ One of the consequences of AI image generators - one that is already occurring as of this study - is slander. Apart from the risk of impersonation and identity theft, the advancement in this technology leads to the creation of problematic art, deep fakes and other harmful material (e.g. artistic fraud, promotion of hate propaganda, adult content, violence, etc.). This material may then be falsely attributed to the artist whose work and art style was scraped without their knowledge to train the AI model.
Stable Diffusion has already been used to create pornographic images of public figures [79][80]. The technology has advanced to the point of creating hyper-realistic human faces that are near indiscernible from reality [81]. Personal privacy, the right to anonymity and individual ownership have been important topics held in high regard for many years, seen as fundamental human rights in democratic nations. However, as the years progressed, the internet became more mainstream, and use of AI in aspects such as those mentioned in this subsection and prior became more accepted or at least tolerated. Fig. 16: Example of racial profiling produced by ChatGPT [94]. Fig. 17: Example of racial profiling in Python produced by ChatGPT [94]. ### _The Overton Window_ In consideration of the topics discussed prior, the persistence to incorporate AI imagery into every possible domain is seen by some as an example of the Overton Window theory in effect [82]. Also known as the Window of Discourse, the Overton Window is a political theory concept that represents a scale of a society's position on a public issue. The position on said issue may range from popular/acceptable to radical/dangerous. The scale moves in both directions, usually towards radically liberal or radically conservative ideologies [83]. An entity, a system or otherwise with a goal to achieve - and the power to inform decision making at a large scale - may attempt to move the Overton Window up or down the scale towards their desired objective. If done abruptly, the public will firmly reject the proposed concept as they will deem it radical. Therefore, the window is moved very slightly, opening the public to a small and seemingly-insignificant change that has no immediate tangible effect. When the 'new normal' becomes acceptable and no longer questioned, the Overton Window is moved again [84]. This process, when left uncontested, may eventually lead towards adopting previously-unthinkable concepts that would have never been normalized if abruptly suggested [108]. This is a simplified example: imagine a suburban city where there are car lanes, and bicycle lanes. The city's population is rapidly growing, and so there are attempts to reduce the number of bicycle lanes in order to widen the streets and replace them with bus lanes. A lot of people will immediately object to the outright removal of bicycle lanes, as cycling is not only healthier and more environment-friendly, but also cheaper than motor vehicles. So the change would be incremental: 1. Retain bicycle lanes in popular pro-cycling neighborhoods and communities 2. Merge bicycle lanes with car lanes whilst keeping signage and painted lines 3. Prohibit schoolchildren from riding bicycles on public roads with safety as a justification 4. Slowly and methodically remove the bicycle signage and infrastructure 5. Repurpose remaining lanes and infrastructure to better suit buses 6. A generation later, younger people will find the idea of having lanes specifically for bicycles a strange and perhaps unnecessary concept Discussions across various industries raise the concern surrounding data laundering becoming another reality gradually imposed on the public for the benefit of a few. As with the controversy of mass surveillance, the blurring or suppression of human rights using AI image generation technology rather than elevating the human condition is a valid point of debate.
If mass surveillance can be legitimized and normalized in some regions of the world at the cost of the human right to privacy, with justifications such as maintaining safety and enforcing morals on the masses, it is entirely possible to incrementally bend the narrative so that data laundering is portrayed as a necessity and the contrary opinion becomes the new 'radical'. Fig. 18: The Overton Window. ## IV Individual Approaches There are emerging movements (independent of legislation) to regulate the use of AI text-to-image models with a focus on protecting artists' creations and addressing some of the ethical concerns discussed in the previous section. _Ethical Sourcing:_ Adobe proposed a new tool named Firefly [100], a text-to-image generator that ethically sources its training data from Adobe's own stock website, and only includes data that artists and photographers have explicitly given permission to be used. The tool has proven a success with beta testers, with potential for massive growth, and contains more customization options than exist in Midjourney, Stable Diffusion and other commercial models. Firefly provides users with fast solutions to create publishable design work and quick concept generation to speed up the creative brainstorming process. The goal is to make the artist the center of the creative work process, with full copyright ownership of the final results. _Cloaking Technology:_ Researchers have been developing tools to enable artists to protect their work from being scraped by rendering their art unusable through various means of data scrambling. One such tool is Glaze, developed by a PhD research team at the University of Chicago, to protect artists' property from data scraping and misuse [101][102]. The tool is designed to interfere with the pattern recognition algorithm used by text-to-image models by 'cloaking' the original artist's image, thus prohibiting the model from recognizing distinct elements in the artist's unique style. The cloak is often a cover of another known historical artist's style (e.g. Van Gogh). When the model attempts to replicate an artist's work, the end result is highly different from the artist's, both in style and content. Another tool is Mist, developed by a group of researchers and developers to be highly robust to noise purification of various kinds. Hence, even actions like taking a screenshot of the original image, resizing, scaling, and so on are ineffective [112][113]. These tools still require constant updates to counter AI users' attempts to circumvent the protective cloaking of the data they intend to scrape. They serve as an alternative solution until longer-term legal measures are put in effect to protect artists' intellectual property. _Community Support:_ ArtStation, a well-known art platform within the game industry, does not outright ban AI imagery, but gives artists the option to filter out any results from these models so they do not appear to the user. Artists are given the right to tag their work so that the HTML meta tag for the page displaying their art assets includes 'NoAI', making legal recourse possible in case the artist's work is scraped without consent [104]. Furthermore, AI imagery uploaded to the site must be given a mandatory tag disclosing that it was in fact created by a machine and not an artist. Other artistic platforms have taken a stricter stance against AI-produced imagery. Ko-fi is a tip-jar style platform designed to support artists and ease communication between them and their patrons.
Their updated regulations strictly prohibit any scraping of art from their platform, under threat of legal action [103]. _Contractual Obligation:_ Several video game companies have added stipulations in their contracts obligating game designers and artists not to use AI in their work, or to disclose if they have done so within very limited parameters [105]. ## V Conclusion Text-to-image AI models are rapidly evolving and have tremendous uses that can greatly speed up and improve workflows in game development and other visually-intensive endeavors. Much like photography, AI image generation can also become its own modality that co-exists with human artistry and aids it. However, it must be regulated and handled ethically so as not to normalize moral transgressions towards creators or the general public. This can be achieved by robust lawmaking that not only clearly understands the industries affected by this technology, but also has no political or economic stake in its legislation. There remains a disconnect between academics and industry professionals raising concerns on one side, and corporations whose goal is maximizing profit on the other. In the form they are used now, commercial image generators are laundry machines of intellectual property. Dataset creators can resolve this issue by ethically sourcing public domain and non-copyrighted imagery for training, and artists by default must be granted the right to exclude their work from datasets. Every individual has the human right to privacy and ownership of creation, and in the absence of binding ethics, allowing a powerful few to freely use such technology across industries would inevitably lead to exploitation. In any debate regarding the consequences of a new technology on human quality of life, the intrinsic value of a human will always win. AI technology is no different - it exists to serve that end, not replace it. It exists to ease people's lives, protect them from dangerous tasks, and free up their time for more mentally and spiritually fulfilling roles. Technology does not exist to take agency away. So it is a pivotal conversation to be had, when some try to automate the creation of art, one of the most human endeavors of all.
2306.07937
Gibbs-Duhem-Informed Neural Networks for Binary Activity Coefficient Prediction
We propose Gibbs-Duhem-informed neural networks for the prediction of binary activity coefficients at varying compositions. That is, we include the Gibbs-Duhem equation explicitly in the loss function for training neural networks, which is straightforward in standard machine learning (ML) frameworks enabling automatic differentiation. In contrast to recent hybrid ML approaches, our approach does not rely on embedding a specific thermodynamic model inside the neural network and corresponding prediction limitations. Rather, Gibbs-Duhem consistency serves as regularization, with the flexibility of ML models being preserved. Our results show increased thermodynamic consistency and generalization capabilities for activity coefficient predictions by Gibbs-Duhem-informed graph neural networks and matrix completion methods. We also find that the model architecture, particularly the activation function, can have a strong influence on the prediction quality. The approach can be easily extended to account for other thermodynamic consistency conditions.
Jan G. Rittig, Kobi C. Felton, Alexei A. Lapkin, Alexander Mitsos
2023-05-31T07:36:45Z
http://arxiv.org/abs/2306.07937v2
**Gibbs-Duhem-Informed Neural Networks for Binary Activity Coefficient Prediction** ## Abstract We propose Gibbs-Duhem-informed neural networks for the prediction of binary activity coefficients at varying compositions. That is, we include the Gibbs-Duhem equation explicitly in the loss function for training neural networks, which is straightforward in standard machine learning (ML) frameworks enabling automatic differentiation. In contrast to recent hybrid ML approaches, our approach does not rely on embedding a specific thermodynamic model inside the neural network and corresponding prediction limitations. Rather, Gibbs-Duhem consistency serves as regularization, with the flexibility of ML models being preserved. Our results show increased thermodynamic consistency and generalization capabilities for activity coefficient predictions by Gibbs-Duhem-informed graph neural networks and matrix completion methods. We also find that the model architecture, particularly the activation function, can have a strong influence on the prediction quality. The approach can be easily extended to account for other thermodynamic consistency conditions. ## 1 Introduction Predicting activity coefficients of mixtures with machine learning (ML) has recently attracted great attention, outperforming well-established thermodynamic models. Several ML methods such as graph neural networks (GNN), matrix completion methods (MCM), and transformers have shown great potential for predicting a wide variety of thermophysical properties with high accuracy. This includes both pure component and mixture properties such as solvation free energies (Vermeire and Green, 2021), liquid densities (Felton et al., 2023) and viscosities (Bilodeau et al., 2023), vapor pressures (Felton et al., 2023; Lansford et al., 2023), solubilities (Vermeire et al., 2022), and fuel ignition indicators (Schweidtmann et al., 2020). A particular focus has recently been placed on using ML for predicting activity coefficients of mixtures due to their high relevance for chemical separation processes. Here, activity coefficients at infinite dilution (Jirasek et al., 2020; Jirasek and Hasse, 2021; Sanchez Medina et al., 2022), varying temperature (Damay et al., 2021; Chen et al., 2021; Winter et al., 2022; Rittig et al., 2023; Sanchez Medina et al., 2023; Damay et al., 2023), and varying compositions (Felton et al., 2022; Qin et al., 2023; Winter et al., 2023), while considering a wide spectrum of molecules, have been targeted with ML, consistently outperforming well-established models such as UNIFAC (Fredenslund et al., 1975) and COSMO-RS (Klamt, 1995; Klamt et al., 2010). Given the high accuracy achieved, ML will therefore play an increasingly important role in activity coefficient prediction. To further advance ML for activity coefficients and bring it into practical application, accounting for thermodynamic consistency is of great importance: by enforcing consistency, the amount of required training data is minimized and the quality of the predictions is improved. Putting this prior information into the data-driven model results in a hybrid model. In the context of activity coefficient prediction, several hybrid model forms have recently emerged. The hybrid models connect ML and mechanistic models in a sequential or a parallel fashion, and integrate ML into mechanistic models and vice versa (see, e.g., the reviews in (Carranza-Abaid et al., 2023; Jirasek and Hasse, 2023)). For example, Focke (Focke, 2006) proposed a hybrid neural network structure that embeds the Wilson model (Wilson, 1964).
Developing hybrid ML structures following thermodynamic models such as Wilson (Wilson, 1964) or nonrandom two-liquid (NRTL) (Renon and Prausnitz, 1968) was further investigated in (Argatov and Kocherbitov, 2019; Toikka et al., 2021; Carranza-Abaid et al., 2023; Di Caprio et al., 2023). A recent prominent example covering a diverse mixture spectrum is the sequential hybrid ML model by Winter et al. (Winter et al., 2023), who combined a transformer with the NRTL model (Renon and Prausnitz, 1968) (i.e., the transformer predicting NRTL parameters) called SPT-NRTL. As the NRTL model fulfills the Gibbs-Duhem equation, the hybrid SPT-NRTL model by design exhibits thermodynamic consistency for the composition-dependency of the activity coefficients. However, using a specific thermodynamic model also introduces predictive limitations. For example, the NRTL model suffers from high correlation of the pure-component liquid interaction parameters (Gebreyohannes et al., 2014), which results in poor modeling of highly interactive systems (Hanks et al., 1978). In general, approaches imposing a thermodynamic model are restricted by the theoretical assumptions and corresponding limitations. Therefore, we herein focus on a physics-informed ML approach that does not rely on a specific thermodynamic model; rather, thermodynamic consistency is imposed in the training. Physics-informed ML provides a hybrid approach that integrates mechanistic knowledge as a regularization term into the loss function for training an ML model (Karniadakis et al., 2021; von Rueden et al., 2021). A prominent example are physics-informed neural networks (PINNs) (Raissi et al., 2019) that are typically employed to predict solutions of partial differential equations (PDEs). In PINNs, gradient information of the network's output with respect to the input(s) is obtained via automatic differentiation and added as a regularization term to the loss function accounting for the PDE. In this way, PINNs learn to predict solutions that are consistent with the governing PDE. Note that, in contrast to hybrid models that embed mechanistic equations, PINNs do not necessarily yield exact mechanistic consistency as it needs to be learned and may be in trade-off with learning the provided data. On the other hand, the flexibility of neural networks is preserved, and no modeling assumptions are imposed, as in the aforementioned hybrid thermodynamic models. Utilizing differential thermodynamic relationships, the concept of PINNs has been applied to molecular and material property prediction (Teichert et al., 2019; Masi et al., 2021; Hernandez et al., 2022; Rosenberger et al., 2022; Monroe et al., 2023). For instance, Masi et al. (Masi et al., 2021) proposed thermodynamics-based artificial neural networks building on the idea that material properties can be expressed as differential relationships of the Helmholtz free energy and the dissipation rate, which can be directly integrated into the network structure and allows for training with automatic differentiation. Similarly, Rosenberger et al. (Rosenberger et al., 2022) utilized differential relationships of thermophysical properties to the Helmholtz free energy to fit equations of states with thermodynamic consistency. 
They showed that predicting properties such as pressure or chemical potential by training neural networks to model the Helmholtz free energy and use its differential relationships to the target properties is advantageous over learning these properties directly, for both accuracy and consistency. However, using PINN-based models for predicting thermodynamic mixture properties for a wide molecular spectrum, particularly activity coefficients, has not been investigated so far. We introduce Gibbs-Duhem-informed neural networks that are inspired by PINNs and learn thermodynamic consistency of activity coefficient predictions. We add a regularization term related to the Gibbs-Duhem equation to the loss function during the training of a neural network, herein GNNs and MCMs. Specifically, we use automatic differentiation to calculate the gradients of the respective binary activity coefficient predictions by a neural network with respect to the mixture's input composition. We can then evaluate the Gibbs-Duhem consistency and add the deviation to the loss function. The loss that typically contains the prediction error on the activity coefficient value only is thus extended by thermodynamic insights, inducing the neural network to consider and utilize known thermodynamic relations in the learning process. We emphasize that our approach allows for the integration of further thermodynamic insights that can be described by (differential or algebraic) relations to the activity coefficient; herein, we use the Gibbs-Duhem equation as a prime example. Our results show that Gibbs-Duhem-informed neural networks can effectively increase Gibbs-Duhem consistency at high prediction accuracy. The manuscript is structured as follows: First, we present the concept of Gibbs-Duhem-informed neural network training including a data augmentation strategy in Section 2. In Section 3, we then test our approach on two neural network architectures, GNNs and MCMs, using a database of 280,000 binary activity coefficients that consists of 40,000 mixtures covering pair-wise combinations of 700 molecules at 7 different compositions and was calculated with COSMO-RS (Klamt, 1995; Klamt et al., 2010) by Qin et al. (Qin et al., 2023). We analyze and compare the prediction accuracy and thermodynamic consistency of GNNs and MCMs trained without (Section 3.1) and with Gibbs-Duhem loss (Section 3.2). This also includes studying corresponding vapor-liquid equilibrium predictions (Section 3.2.2). We further analyze generalization capabilities to new compositions (Section 3.2.3) and mixtures (Section 3.2.4). The last Section 4 concludes our work. ## 2 Methods & Modeling In this section, we introduce Gibbs-Duhem-informed neural networks, propose a data augmentation strategy to facilitate training, and then describe GNNs and MCMs to which we apply our training approach. A schematic overview of the Gibbs-Duhem-informed GNNs and MCMs is provided in Figure 1. We further provide insights on the data set used for training/testing and the implementation with corresponding model hyperparameters. ### Gibbs-Duhem-informed training Our approach for Gibbs-Duhem-informed training combines prediction accuracy with thermodynamic consistency in one loss function. The approach is inspired by PINNs (Raissi et al., 2019; Karniadakis et al., 2021), that is, utilizing physical knowledge as a regularization term in the loss. 
For the application of composition-dependent activity coefficients, we can calculate the gradients of the predicted logarithmic activity coefficient value, denoted by \(\ln(\hat{\gamma_{i}})\), with respect to the compositions of the mixture, \(x_{i}\), as illustrated in Figure 1. We can then use this gradient information to evaluate the consistency of the Gibbs-Duhem differential constraint, which has the following form for binary mixtures at constant temperature \(T\) and pressure \(p\): \[x_{1}\cdot\left(\frac{\partial\ln(\hat{\gamma_{1}})}{\partial x_{1}}\right)_{T,p}+x_{2}\cdot\left(\frac{\partial\ln(\hat{\gamma_{2}})}{\partial x_{1}}\right)_{T,p}=0 \tag{1}\] Figure 1: Schematic model structure and loss function of Gibbs-Duhem-informed GNN and MCM for predicting composition-dependent activity coefficients. Please note that Equ. 1 can equivalently be formulated for the partial derivative with respect to \(x_{2}\) and can also be described analogously by using \(dx_{1}=-dx_{2}\). We propose to add the deviation from the Gibbs-Duhem differential constraint as a term to the loss function. The loss function for training a neural network on activity coefficient prediction typically accounts for the deviation of the predicted value, \(\ln(\hat{\gamma_{i}})\), from the data, \(\ln(\gamma_{i})\); often the mean squared error (MSE) is used. By adding the deviation from the Gibbs-Duhem equation (cf. Equ. 1) in the form of the MSE, the loss function for Gibbs-Duhem-informed training of a mixture's binary activity coefficients at a specific composition \(k\) equals \[\begin{split}\text{LOSS}^{k}=&\left(\ln(\hat{\gamma_{1}}^{k})-\ln(\gamma_{1}^{k})\right)^{2}+\left(\ln(\hat{\gamma_{2}}^{k})-\ln(\gamma_{2}^{k})\right)^{2}\\ &+\lambda\cdot\left(x_{1}^{k}\cdot\frac{\partial\ln(\hat{\gamma_{1}}^{k})}{\partial x_{1}^{k}}+x_{2}^{k}\cdot\frac{\partial\ln(\hat{\gamma_{2}}^{k})}{\partial x_{1}^{k}}\right)^{2},\end{split} \tag{2}\] with \(\lambda\) being a weighting factor to balance the prediction and the Gibbs-Duhem loss. The logarithmic activity coefficient is typically used in the loss function for normalization purposes. We also include the infinite dilution case, which is formally defined for compositions \(x_{i}\to 0\) and \(x_{j}\to 1\) with the infinite dilution activity coefficient \(\gamma_{i}\to\gamma_{i}^{\infty}\) of the solute and activity coefficient of the solvent \(\gamma_{j}\to 1\). Herein, we use \(x_{i}=0\) and \(x_{j}=1\) to represent infinite dilution, similarly to other recent publications (Qin et al., 2023; Winter et al., 2023). We stress that compositions of \(0\) and \(1\) are only used for the infinite dilution case and that the Gibbs-Duhem consistency also needs to be satisfied for this case. Note that in thermodynamics some properties are problematic for \(x\to 0\), e.g., the infinite derivative of the ideal mixing enthalpy with respect to the mole fraction; however, since we directly predict activity coefficients, we do not run into any numerical issues. The proposed Gibbs-Duhem-informed loss function can directly be integrated into standard ML frameworks. Since modern neural network frameworks enable automatic differentiation and \(\ln(\gamma_{i})\) is the output and \(x_{i}\) is one input of the network, the partial derivatives in Equ. 2 can directly be calculated in the backpropagation pass. Therefore, the practical application of Gibbs-Duhem-informed training is straightforward.
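In practice, Equ. 2 translates into a few lines of standard deep learning code. The following is a minimal PyTorch sketch (not the reference implementation of this work); the `model(mixture, x1)` interface, the tensor shapes, and the helper name `gibbs_duhem_loss` are assumptions made for illustration, and the sketch assumes that each prediction in a batch depends only on its own composition entry.

```python
import torch

def gibbs_duhem_loss(model, mixture, x1, ln_gamma_data, lam=1.0):
    # Sketch of Equ. 2: prediction loss + lambda * Gibbs-Duhem residual.
    # Assumes model(mixture, x1) returns (ln_gamma1_hat, ln_gamma2_hat) per sample
    # and ln_gamma_data has shape [batch, 2] with the data values.
    x1 = x1.clone().requires_grad_(True)   # track d/dx1 via automatic differentiation
    x2 = 1.0 - x1
    ln_g1_hat, ln_g2_hat = model(mixture, x1)

    # partial derivatives of ln(gamma_i) w.r.t. x1 (create_graph allows backprop through them)
    dg1_dx1 = torch.autograd.grad(ln_g1_hat.sum(), x1, create_graph=True)[0]
    dg2_dx1 = torch.autograd.grad(ln_g2_hat.sum(), x1, create_graph=True)[0]

    # prediction error on ln(gamma_1) and ln(gamma_2)
    pred_loss = ((ln_g1_hat - ln_gamma_data[:, 0]) ** 2
                 + (ln_g2_hat - ln_gamma_data[:, 1]) ** 2).mean()

    # Gibbs-Duhem residual: x1 * dln(gamma_1)/dx1 + x2 * dln(gamma_2)/dx1 = 0
    gd_loss = ((x1 * dg1_dx1 + x2 * dg2_dx1) ** 2).mean()

    return pred_loss + lam * gd_loss
```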
When applying the presented Gibbs-Duhem-informed training approach, thermodynamic consistency is only induced for the mixture compositions for which activity coefficient data is readily available. To facilitate learning at compositions for which no data is available, we present a data augmentation strategy in the next section. ### Data augmentation for Gibbs-Duhem-informed training We propose a data augmentation strategy for training Gibbs-Duhem-informed neural networks by randomly perturbing the mixtures' compositions between \(0\) and \(1\). We create additional data samples that consist of the binary mixtures in the training data set but at other (arbitrary) compositions \(x\in[0,1]\); we use random sampling from a uniform distribution in \(x\). Indeed, the activity coefficients for these compositions are not known. Yet, we can evaluate the Gibbs-Duhem consistency of the model predictions at these compositions and add only the Gibbs-Duhem error to the loss during training. That is, for training data samples created with the data augmentation, we only consider the second term of the loss function, the Gibbs-Duhem loss. We can therefore use additional data for training Gibbs-Duhem-informed neural networks on compositions of mixtures for which no experimental data is available. When using data augmentation, it is important to consider that additional training data results in an increased expense of calculating the loss and its derivative, i.e., requires more training resources. Further, adding too many augmented data samples to the training can result in an imbalanced loss focusing too much on the Gibbs-Duhem term and neglecting the prediction accuracy. We therefore set the amount of augmented data to equal the number of data points in the training set for which activity coefficient data are available. ### Machine learning property prediction methods We investigate the thermodynamic consistency and test the Gibbs-Duhem-informed training approach for two different machine learning methods: GNNs and MCMs. Both methods have recently been investigated in various studies for thermodynamic property prediction of mixtures (Jirasek et al., 2020; Damay et al., 2021; Felton et al., 2022; Sanchez Medina et al., 2022; Rittig et al., 2023a). While transformers, a third ML method that works on string representations of molecules, have also very recently been utilized for predicting mixture properties with very promising results (Winter et al., 2022, 2023), they typically require extensive pretraining with millions of data points, which is out of the scope of this work. The structure of Gibbs-Duhem-informed GNNs and MCMs for activity coefficient prediction at different compositions is shown in Figure 1. GNNs utilize a graph representation of molecules and learn to encode the structure of two molecular graphs within a binary mixture to a vector representation that can be mapped to the activity coefficients. In contrast, MCMs learn directly from the property data without further information about the molecular structures. Rather, a matrix representation is used in which the rows and columns each represent a molecule in the binary mixture as a one-hot encoding and the matrix entries correspond to the activity coefficients. With the available activity coefficient data filling some entries of the matrix, MCMs learn to predict the missing entries.
For further details about GNNs and MCMs, we refer to the reviews in (Gilmer et al., 2017; Rittig et al., 2022; Reiser et al., 2022) and (Jirasek and Hasse, 2021, 2023). We herein use a GNN based on the model architecture developed by Qin et al. (Qin et al., 2023) for predicting activity coefficients of binary mixtures at different compositions, referred to as SolvGNN. The GNN first employs graph convolutional layers to encode the molecular graph of each component into a molecular embedding vector - often referred to as molecular fingerprint. Then, a mixture graph is constructed: Each node represents a component and includes the corresponding molecular embedding and composition within the mixture; each edge represents interactions between components using Hydrogen-bond information as additional features. The mixture graph passes a graph convolutional layer such that each molecular embedding is updated based on the presence of other components in the mixture, thereby accounting for intermolecular interactions. Each updated molecular embedding is then passed through hidden layers of a multilayer perceptron (MLP) which predicts the logarithmic activity coefficient \(\ln(\gamma_{i})\) of the respective components present in the mixture; the same MLP is applied for all components. The GNN's model structure can be trained end-to-end, i.e., from the molecular graphs to the activity coefficients. For the MCM model, we use a neural network structure that was recently proposed by Chen et al. (Chen et al., 2021) and further investigated in our work for prediction of infinite dilution activity coefficients of solutes in ionic liquids (Rittig et al., 2023a). The MCM model employs several hidden layers to map the one-hot encoding of the components to a continuous vector representation - analogous to the molecular embedding/fingerprint in GNNs. The resulting mixture vector is then concatenated with the composition and enters two MLPs to obtain the respective predictions for the logarithmic activity coefficients \(\ln(\gamma_{1})\) and \(\ln(\gamma_{2})\). It is important to note, that in contrast to GNNs, the MCM inherently does not preserve permutation invariance with respect to the representation order of the components in the mixture. For example, the predictions for 90% ethanol- 10% water and 10% water - 90% ethanol are not necessarily identical when using the MCM, whereas the GNN results in the same activity coefficient values. To address the permutation variance of the MCM, future work could consider data augmentation, i.e., training on the same mixture with different order of the components (cf. (Winter et al., 2023)), or an extension of the model structure by a permutation invariant operator as used in GNNs. We also note that further formulations of MCMs, e.g., based on Bayesian inference, are frequently investigated, cf. (Jirasek et al., 2020; Damay et al., 2021). We herein focus on neural architectures, also referred to as neural collaborative filtering (He et al., 2017; Chen et al., 2021). In future work, it would be interesting to investigate if our Gibbs-Duhem-informed approach is also transferable to other MCM formulations. ### Data set and splitting We use the data set of binary activity coefficients at different compositions and a constant temperature of 298 K calculated with COSMO-RS (Klamt, 1995; Klamt et al., 2010) for 40,000 different binary mixtures and covering 700 different compounds, which was created by Qin et al. (Qin et al., 2023). 
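To make the composition-dependent MCM structure described above more concrete, a minimal sketch is given below. It is an illustrative, hypothetical implementation: the class name `CompositionMCM` is invented here, and the layer counts and sizes are placeholders rather than the hyperparameters of Chen et al. (Chen et al., 2021).

```python
import torch
import torch.nn as nn

class CompositionMCM(nn.Module):
    # Sketch of a composition-dependent matrix completion model:
    # one-hot component encodings -> shared embedding MLP -> concat x1 -> two prediction MLPs.
    def __init__(self, n_components, emb_dim=64, hidden_dim=64):
        super().__init__()
        self.embed = nn.Sequential(                 # shared molecular embedding MLP
            nn.Linear(n_components, hidden_dim), nn.Softplus(),
            nn.Linear(hidden_dim, emb_dim), nn.Softplus(),
        )
        def head():                                 # one prediction MLP per component
            return nn.Sequential(
                nn.Linear(2 * emb_dim + 1, hidden_dim), nn.Softplus(),
                nn.Linear(hidden_dim, 1),
            )
        self.head1, self.head2 = head(), head()

    def forward(self, onehot_1, onehot_2, x1):
        # concatenate both component embeddings and the composition x1
        emb = torch.cat([self.embed(onehot_1), self.embed(onehot_2)], dim=-1)
        inp = torch.cat([emb, x1.unsqueeze(-1)], dim=-1)
        return self.head1(inp).squeeze(-1), self.head2(inp).squeeze(-1)
```

As noted above, such a naive formulation is not permutation-invariant with respect to the order of the two components.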
The activity coefficients were calculated at seven different compositions: \(\{0,0.1,0.3,0.5,0.7,0.9,1\}\), thus including infinite dilution (cf. Section 2.1). Thus, the total number of data points amounts to 280,000. Since COSMO-RS was used for data generation, all data points are Gibbs-Duhem-consistent, thereby providing a solid basis for testing our approach. We consider three evaluation scenarios when splitting our data: Composition interpolation (comp-inter) and composition extrapolation (comp-extra) as well as mixture extrapolation (mixture-extra). _Comp-inter_ refers to the case of predicting the activity coefficient of a specific binary mixture at a composition not used in training for this mixture but for other mixtures. This evaluation scenario was also used by Qin et al. (Qin et al., 2023); in fact, we use the same 5-fold stratified split based on the polarity features of individual mixtures (i.e., 5 different splits into 80% training and 20% test data, cf. SI (Qin et al., 2023)). Comp-inter thus allows us to evaluate if the models can learn the composition-dependency of the activity coefficient for a mixture from other mixtures in the data with thermodynamic consistency. _Comp-extra_ describes the case of predicting the activity coefficient of a specific binary mixture at a composition that was not used in training for any of the mixtures. We specifically exclude the data for the compositions of a respective set of \(x\in\{\{0.0,\,1.0\}\), \(\{0.1,\,0.9\}\), \(\{0.3,\,0.7\}\), \(\{0.5\}\}\) from training and use it as a test set. This results in four different comp-extra splits, one for each excluded set of \(x\). With the comp-extra splits, we can evaluate whether the models can extrapolate to compositions not present in the training data at all, referred to as generalization, thereby capturing the underlying composition-dependency of the activity coefficient. _Mixture-extra_ aims to test the capability of a prediction model to generalize to binary mixtures not seen during training but consisting of molecules that occurred in other combinations, i.e., in other binary mixtures, during training. We separate the data set into training and test sets of unique binary mixtures by using a 5-fold stratified split based on polarity features (cf. (Qin et al., 2023)). In contrast to comp-inter, where only individual compositions of mixtures were excluded from the training data for testing, mixture-extra excludes all available compositions of a mixture for testing and thus allows us to test generalization to new mixtures. ### Evaluation metrics for prediction accuracy and consistency To evaluate the predictive quality of models, we consider both the prediction accuracy and the thermodynamic consistency. The prediction accuracy is calculated based on the match between predicted values and the data values for the test set. We consider standard metrics for the prediction accuracy, i.e., root mean squared error (RMSE), mean absolute error (MAE), and coefficient of determination (R\({}^{2}\)). Thermodynamic consistency is assessed by calculating the deviation of the Gibbs-Duhem differential equation from zero.
We refer to the Gibbs-Duhem root mean squared error (GD-RMSE) for predictions \(\hat{\gamma_{i}}^{k}\) of the test data by \[\text{GD-RMSE}_{\text{test}}=\sqrt{\frac{1}{N_{\text{test}}}\cdot\sum_{k}^{N_{\text{test}}}\left(x_{1}^{k}\cdot\frac{\partial\ln(\hat{\gamma_{1}}^{k})}{\partial x_{1}^{k}}+x_{2}^{k}\cdot\frac{\partial\ln(\hat{\gamma_{2}}^{k})}{\partial x_{1}^{k}}\right)^{2}} \tag{3}\] Since the Gibbs-Duhem equation can be evaluated at any composition in the range between 0 and 1 without requiring activity coefficient data, we further test the thermodynamic consistency for compositions outside the data set (cf. Section 2.4) in 0.05 steps, i.e., \(x_{i,\text{test-ext}}\in\{0.05,\,0.15,\,0.2,\,0.25,\,0.35,\,0.4,\,0.45,\,0.55,\,0.6,\,0.65,\,0.75,\,0.8,\,0.85,\,0.95\}\), which we refer to as "test-ext". ### Implementation & Hyperparameters We implement all models and training and evaluation scripts in Python using PyTorch and provide our code openly accessible at (Rittig et al., 2023b). The GNN implementation is adapted from Qin et al. (Qin et al., 2023) using the Deep Graph Library (DGL) (Wang et al., 2019) and RDKit (Landrum, 2023). We use the same model hyperparameters as in the original implementation, i.e., two shared graph convolutional layers are applied for the molecule embedding, then the compositions are concatenated, followed by a single-layer GNN for the mixture embedding and a prediction MLP with two hidden layers. For the MCM, we use the re-implementation of the architecture by Chen et al. (Chen et al., 2021) from our previous work (Rittig et al., 2023a). We take the hyperparameters from the original model, but we adapt the model structure to allow for composition-dependent prediction. The MCM has a shared molecular embedding MLP with four hidden layers, after which the compositions are concatenated and two subsequent prediction MLPs constituting two hidden layers are applied. All training runs are conducted with the ADAM optimizer, an initial learning rate of 0.001, and a learning rate scheduler with a decay factor of 0.8 and a patience of 3 epochs based on the training loss. We train all models for 100 epochs and a batch size of 100, as in Qin et al. (Qin et al., 2023); we could robustly reproduce their results for the GNN. The quality of the final models is then assessed based on the test set. We executed all runs on the High Performance Computing Cluster of RWTH Aachen University using one NVIDIA Tesla V100-SXM2-16GB GPU. ## 3 Results & Discussion We first investigate the Gibbs-Duhem consistency of GNNs and MCMs trained in a standard manner, i.e., on the prediction loss only, in Section 3.1. Then, in Section 3.2, we present the results with Gibbs-Duhem-informed training. This includes a comparison of different model architectures and activation functions trained with Gibbs-Duhem loss to those trained on the prediction loss only. We also analyze the effects of Gibbs-Duhem-informed training on vapor-liquid equilibria predictions in Section 3.2.2. Lastly, we test the generalization capabilities of Gibbs-Duhem-informed neural networks to unseen compositions in Section 3.2.3 as well as to unseen mixtures in Section 3.2.4. ### Benchmark: Evaluation of Gibbs-Duhem consistency with standard training We first evaluate the prediction accuracy and Gibbs-Duhem consistency of GNNs and MCMs for predicting activity coefficients of a binary mixture at a specific composition with the comp-inter split (cf. Section 2.4).
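As a side note on the evaluation, the GD-RMSE of Equ. 3 on a composition grid such as "test-ext" needs no activity coefficient data and can be obtained with automatic differentiation alone. The sketch below is illustrative only (it is not the published evaluation script) and assumes the same hypothetical `model(mixture, x1)` interface as in the loss sketch above:

```python
import torch

def gd_rmse(model, mixture, x1_values):
    # Gibbs-Duhem RMSE (Equ. 3) evaluated on an arbitrary composition grid,
    # e.g. the "test-ext" grid {0.05, 0.15, ..., 0.95}; no labels are required.
    x1 = torch.as_tensor(x1_values, dtype=torch.float32).requires_grad_(True)
    ln_g1, ln_g2 = model(mixture, x1)
    dg1 = torch.autograd.grad(ln_g1.sum(), x1, retain_graph=True)[0]
    dg2 = torch.autograd.grad(ln_g2.sum(), x1)[0]
    residual = x1 * dg1 + (1.0 - x1) * dg2
    return residual.detach().pow(2).mean().sqrt()
```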
The models are trained with a standard approach, i.e., minimizing the deviation of predicted versus data activity coefficients and not using the Gibbs-Duhem loss. Fig. 2 shows the error distribution of the absolute prediction errors and absolute Gibbs-Duhem errors for the GNN (2a) and MCM (2b) model. We also report the errors for specific compositions according to the composition intervals in the data set (cf. Section 2.4) for both prediction accuracy (2c) and Gibbs-Duhem (2d) consistency. Figure 2: Absolute prediction error and absolute deviation from Gibbs-Duhem differential equation are illustrated in histograms (a,b) and composition-dependent plots (c,d) for the GNN and the MCM trained with a standard loss function based on the prediction error and MLP activation function: ReLU. The outlier thresholds (a,b) are determined based on the top 1 % of the highest errors for the GNN. Fig. 2(a) shows high prediction accuracy of the GNN, with the MCM model performing slightly worse but still at a high level. The low MAEs of 0.03 and 0.04 and high \(R^{2}\) values of 0.99 and 0.98 for the GNN and the MCM, respectively, indicate strong prediction capabilities. Please note that the GNN prediction results are a reproduction of the study by Qin et al. (Qin et al., 2023), who reported an MAE of 0.03 and an RMSE of 0.10, which are very similar to our results. The composition-dependent errors shown in Fig. 2(c) highlight that activity coefficient predictions for solvents with lower compositions have higher errors, which is expected. Infinite dilution activity coefficients with \(x_{i}\to 0\) represent the limiting case with MAEs of 0.077 for the GNN and 0.093 for the MCM. In contrast, at high compositions \(x_{i}\to 1\), the activity coefficient converges to 1 for all solvents, which is well captured by the GNN with an MAE of 0.002 and the MCM with an MAE of 0.006. Overall, we find strong prediction quality for both models. For the Gibbs-Duhem consistency shown in Fig. 2(b), the GNN again performs better than the MCM. Notably, the distribution for the GNN is more left-skewed than the MCM distribution and shows a peak fraction of deviations close to 0, i.e., with high Gibbs-Duhem consistency. However, it can also be observed that both models have many errors significantly greater than 0, with an MAE of about 0.1 for the GNN and 0.14 for the MCM. Considering the composition-dependent Gibbs-Duhem consistency illustrated in Fig. 2(d), we can observe similar behavior for the GNN and the MCM: At the boundary conditions, i.e., infinite dilution, the models yield slightly higher consistencies than at intermediate compositions, with the GNN overall resulting in a slightly favorable consistency. Interestingly, we find the opposite behavior when changing the structure of the prediction MLP to be a single MLP with two outputs, i.e., predicting both activity coefficients with one MLP at the same time (cf. SI). Without any form of regularization, we find that the predictions from both models often exhibit Gibbs-Duhem inconsistencies. To further analyze the Gibbs-Duhem deviations, we show activity coefficient predictions and composition-dependent gradients with the corresponding thermodynamic consistency for exemplary mixtures in Figure 3(a) for the GNN and Figure 3(b) for the MCM. We selected mixtures that have different activity coefficient curves, contain well-known solvents, and for which Antoine parameters are readily available (cf. Section 3.2.2).
Specifically, we show the predictions and Gibbs-Duhem consistency with the gradient information for three mixtures that were included in the training (1-3) and three mixtures that were not included in the training at all (4-6). Here, the predictions of the five models trained in the cross-validation of comp-inter are averaged, referred to as ensemble model (cf. (Breiman, 1996, 2006; Dietterich, 2000)). Note we can calculate the Gibbs-Duhem consistency of the ensemble by first averaging the five models' partial derivatives of the logarithmic activity coefficients with respect to the composition and then applying Equ. 1. Further ensemble features like the variance are not considered. For the exemplary mixtures in Fig. 3, the predictions exhibit a high level of accuracy but also striking thermodynamic inconsistencies. For the first two mixtures as part of the training set, the predictions are at high accuracy. However, particularly for chloroform-hexane, the prediction curves for each component show some significant changes in their slope at varying compositions, causing high thermodynamic inconsistencies. For example, the \(\ln(\gamma_{2})\)-curve for the GNN at \(x_{1}=0.2\) or for the MCM at \(x_{1}=0.4\) exhibits a step-like behavior, with the \(\ln(\gamma_{1})\)-curve not changing the slope at these compositions, yielding a high Gibbs-Duhem error. This behavior is also reflected in the gradients, which highly fluctuate and have a discontinuous curve over the composition. Notably, within some composition ranges, the gradient is a constant value, e.g., for chloroform-hexane for \(\ln(\gamma_{2})\) from \(x_{1}\) between 0 and 0.4 and for \(\ln(\gamma_{1})\) from \(x_{1}\) between 0.7 to 1. For the mixture of 2-thiabutane and butyleneoxide, discontinuities in the gradients causing high Gibbs-Duhem errors are even more prominent. We additionally find the prediction curves both have either positive or negative gradients for specific compositions, i.e., both increasing or both decreasing, which strictly violates thermodynamic principles. For two of the mixtures not used in the training at all, i.e., chloroform-acetone and ethanol-water, both models overall match the data but also show prediction errors at low compositions of the respective component. Especially for the GNN predictions of the chloroform-acetone mixture, the \(\ln(\gamma_{2})\)-curve exhibits a change in the gradient within the composition range from 0.6 to 0.8 which is not reflected in \(\ln(\gamma_{1})\). For the last mixture, ethanol-benzene, also not being in the training set, the predictions match the data values well, but for both models, Gibbs-Duhem deviations occur at low compositions of the respective component and for the MCM also at intermediate compositions. The gradient curves of the three mixtures not being part of the training set are again discontinuous, resulting in further thermodynamic inconsistencies. Figure 3 further shows that the magnitude of the activity coefficient values for a specific system influences the metrics of Gibbs-Duhem consistencies. Since mixtures with large absolute activity coefficient values naturally tend to have higher gradients, they often show larger absolute deviations from the Gibbs-Duhem differential equation than mixtures with low absolute activity coefficients. 
Future work could consider weighting Gibbs-Duhem deviations for individual mixtures based on the magnitude of the activity coefficients, e.g., dividing the Gibbs-Duhem error by the sum of the absolute values of \(\ln(\gamma_{1})\) and \(\ln(\gamma_{2})\), which was out of the scope of our investigations. We additionally show the results of the individual models in the SI, where the thermodynamic inconsistencies become even more prominent and visible. In fact, for the ensemble model results shown in Fig. 3, some inconsistencies partly average out. Using ensembles can thus, in addition to higher prediction accuracy (Sanchez Medina et al., 2022; Rittig et al., 2023), also increase thermodynamic consistency. It would thus be interesting to systematically study ensemble effects in combination with Gibbs-Duhem-informed neural networks, which we leave for future work. Overall, we find the ML models with standard training on the prediction loss to provide highly accurate activity coefficient predictions, but they also exhibit notable thermodynamic inconsistencies, which can be related to the ML model structure. Particularly, we find the gradient curves of the activity coefficient with respect to the composition to be discontinuous, resulting in high Gibbs-Duhem errors. Figure 3: Activity coefficient predictions and their corresponding gradients with respect to the composition with the associated Gibbs-Duhem deviations for exemplary mixtures by (a) the GNN ensemble and (b) MCM ensemble trained with a standard loss function based on the prediction error and MLP activation function: ReLU. Results are averaged from the five model runs of the comp-inter split. The discontinuities of the gradients are inherent to the non-smooth activation functions typically used in ML models, e.g., ReLU. Specifically, the gradient of ReLU changes from 1 for inputs \(>0\) to 0 for inputs \(<0\), which we find to yield non-smooth gradients of the \(\ln(\gamma_{i})\)-curves, thereby promoting violations of the Gibbs-Duhem consistency. This motivates us to investigate the incorporation of the thermodynamic consistency into the training of ML models with different activation functions and an adapted loss function accounting for the Gibbs-Duhem equation, which we refer to as Gibbs-Duhem-informed neural networks. ### Proposal: Gibbs-Duhem-informed training We apply Gibbs-Duhem-informed training according to Equ. 2 for the GNN and MCM models. Since, in the previous section, we found the non-smoothness of the ReLU activation to have an impact on the thermodynamic consistency of the predictions, we investigate two additional activation functions, namely ELU and softplus. In contrast to ReLU, ELU exhibits first-order continuity and softplus is smooth. The smoothness of softplus has already been utilized in models for molecular modeling by Schuett et al. (Schutt et al., 2020). In addition, we investigate an adapted GNN architecture, which we refer to as \(\text{GNN}_{\text{xMLP}}\), where we concatenate the composition to the output of the mixture embedding instead of the input of the mixture embedding, cf. Section 2.3. Using the composition after the mixture embedding and applying a smooth activation function for the prediction MLP results in a smooth relation between the activity coefficient predictions and the compositions. It also has computational benefits since we avoid calculating gradients through the graph convolutional layers used for mapping molecular to mixture embeddings. Furthermore, we investigate the proposed data augmentation strategy (cf.
Section 2.2) by adding pseudo gradient data at random compositions to the Gibbs-Duhem-informed training. #### 3.2.1 Effect on predictive quality and thermodynamic consistency Table 1 shows the results of Gibbs-Duhem-informed training for the GNN, MCM, and \(\text{GNN}_{\text{xMLP}}\), aggregated over the five comp-inter splits. We compare different activation functions in the MLP and different weighting factors \(\lambda\) of the Gibbs-Duhem loss (cf. Equ. 2), with \(\lambda=0\) representing training without Gibbs-Duhem loss, i.e., standard training on the prediction error from the previous Section 3.1. We also indicate whether data augmentation is applied. First comparing the prediction accuracy and thermodynamic consistency of the activation functions without Gibbs-Duhem-informed training, i.e., \(\lambda=0\), in Table 1, we find comparable prediction accuracies for the GNN, \(\text{GNN}_{\text{xMLP}}\), and MCM, with softplus being slightly favorable for the MCM. For the thermodynamic consistency calculated by GD-RMSE, we can observe a consistent improvement from ReLU over ELU to softplus across all models for the test data. We thus find the choice of the activation function to highly influence the thermodynamic consistency, with ELU and softplus being favorable over ReLU. Now, we consider the results of Gibbs-Duhem-informed neural networks using different weighting factors \(\lambda\) in Table 1. We observe that for all cases except the MCM and the \(\text{GNN}_{\text{xMLP}}\) with ReLU activation, Gibbs-Duhem-informed training increases the thermodynamic consistency. Higher \(\lambda\) factors generally lead to lower GD-RMSE. The prediction accuracy mostly stays at a similar level for the Gibbs-Duhem-informed neural networks when using \(\lambda\) factors of 0.1 and 1. For higher \(\lambda\) factors, i.e., 10 and 100, the prediction accuracy starts to decrease consistently, indicating an imbalanced loss with too much focus on thermodynamic consistency. Generally, we observe that \(\lambda=1\) yields a significant increase in thermodynamic consistency compared to training without Gibbs-Duhem loss, e.g., for the GNN with softplus the GD-RMSE\({}_{\text{test}}\) decreases from 0.140 to 0.061. The prediction accuracy stays at a similar level, sometimes even slightly improving: For the example of the GNN with softplus, we observe an \(\text{RMSE}_{\text{test}}\) of 0.89 vs. 0.83 with and without Gibbs-Duhem loss, respectively, thereby indicating a suitable balance between accuracy and consistency. Notably, for the cases of the MCM and the \(\text{GNN}_{\text{xMLP}}\) with ReLU activation and the Gibbs-Duhem loss, we observe high prediction errors. For these cases, we find the loss not improving after the first epochs during training and the gradients being mostly constant for all compositions (equal to 0 for high \(\lambda\)). Interestingly, the GNN, which, in contrast to the MCM and \(\text{GNN}_{\text{xMLP}}\), employs graph convolutions after adding the compositions, does not suffer from these training instabilities. Future work should further investigate this phenomenon, e.g., by considering the dying ReLU problem and second-order vanishing gradients that can occur when using gradient information in the loss function, cf. (Masi et al., 2021). For ELU and softplus, Gibbs-Duhem-informed training results in higher thermodynamic consistency for all models. In fact, Gibbs-Duhem-informed neural networks with softplus lead to the most consistent improvement of thermodynamic consistency with high prediction accuracy across all models.
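Swapping the activation function compared in Table 1 is a small configuration change in most frameworks. A hypothetical helper for the two-hidden-layer prediction MLP could look as follows (layer sizes are placeholders, not the hyperparameters used in this work):

```python
import torch.nn as nn

ACTIVATIONS = {"relu": nn.ReLU, "elu": nn.ELU, "softplus": nn.Softplus}

def prediction_mlp(in_dim, hidden_dim=64, activation="softplus"):
    # Two hidden layers; softplus/ELU give smoother d ln(gamma)/dx1 than ReLU,
    # which is exactly what the Gibbs-Duhem loss rewards.
    act = ACTIVATIONS[activation]
    return nn.Sequential(
        nn.Linear(in_dim, hidden_dim), act(),
        nn.Linear(hidden_dim, hidden_dim), act(),
        nn.Linear(hidden_dim, 1),
    )
```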
Lastly, we analyze the effect of data augmentation by considering the GD-RMSE\({}_{\text{test}}^{\text{ext}}\), i.e., the Gibbs-Duhem consistency evaluated at compositions that are not used in training for any mixture at all, which indicates the generalization for thermodynamic consistency. Table 1 shows that without data augmentation the Gibbs-Duhem error on the external test set is significantly higher than on the test set. We show the errors at specific compositions in the SI, where we find that the largest errors occur at low compositions, which is expected since the corresponding gradients naturally tend to be higher. The model thus learns thermodynamic consistency for compositions present in the training but does not transfer this consistency to other compositions. When using data augmentation, as shown for \(\lambda\) factors of 1 and 10, the GD-RMSE\({}_{\text{test}}^{\text{ext}}\) decreases to the same level as the GD-RMSE\({}_{\text{test}}\). Data augmentation additionally reduces the GD-RMSE\({}_{\text{test}}\) in most cases, thus further increasing thermodynamic consistency in general. Data augmentation without the requirement of further activity coefficient data (cf. Section 2.2) therefore effectively increases the generalization capabilities of Gibbs-Duhem-informed neural networks for thermodynamic consistency. Overall, Gibbs-Duhem-informed neural networks can significantly increase the thermodynamic consistency of the predictions. Using the softplus activation function, a \(\lambda\) factor of 1, and employing data augmentation leads to the most consistent improvement of thermodynamic consistency with high prediction accuracy across all Gibbs-Duhem-informed neural network models. Hence, we focus on the models with these settings in the following. Comparing the three different models, we find similar prediction accuracies and consistencies for the GNN and the \(\text{GNN}_{\text{xMLP}}\), with the \(\text{GNN}_{\text{xMLP}}\) reaching the highest consistency. The MCM exhibits comparable consistency but a slightly lower prediction accuracy compared to the GNNs. Interestingly, the Gibbs-Duhem-informed MCM shows higher prediction accuracy compared to the standard MCM. The runtimes averaged over the five training runs of the comp-inter split are 231 minutes for the GNN, 108 minutes for the MCM, and 177 minutes for the \(\text{GNN}_{\text{xMLP}}\). Hence, we find the \(\text{GNN}_{\text{xMLP}}\) to be computationally more efficient than the GNN. The MCM, which has the simplest architecture without any graph convolutions, shows the highest computational efficiency. Figure 4: Activity coefficient predictions and their corresponding gradients with respect to the composition and the associated Gibbs-Duhem deviations for exemplary mixtures by (a) \(\mathrm{GNN_{xMLP}}\) ensemble and (b) MCM ensemble trained with Gibbs-Duhem-informed (GDI) loss function and following hyperparameters: MLP activation function: softplus, weighting factor \(\lambda=1\), data augmentation: true. Results are averaged from the five model runs of the comp-inter split. In Figure 4, we further show the predictions for the same mixtures as in Figure 3 for the \(\text{GNN}_{\text{xMLP}}\), which exhibits the highest thermodynamic consistency, and the MCM; further results for the GNN and the individual model runs can be found in the SI.
We now observe smooth predictions and gradients of \(\ln(\gamma_{i})\) induced by the softplus activation, which results in significantly reduced GD-deviations from zero in comparison to the standard training shown in Figure 3. We also find notably fewer fluctuations and smaller changes of the gradients, e.g., for 2-thiabutane and butyleneoxide the prediction curves are visibly more consistent. For some mixtures, slight inconsistencies are still noticeable, e.g., for the MCM predicting ethanol-water at high \(x_{1}\) compositions. Regarding accuracy, the match of the predictions and the data remains at a very high level for the presented mixtures. We also find prediction improvements for some mixtures, e.g., the \(\text{GNN}_{\text{xMLP}}\) model now predicts \(\ln(\gamma_{2})\) for the ethanol-water mixtures at high accuracy. The exemplary mixtures thus highlight the overall highly increased thermodynamic consistency of the activity coefficient predictions with high accuracy by Gibbs-Duhem-informed neural networks. #### 3.2.2 Effect on vapor-liquid equilibrium predictions We further study the effect of Gibbs-Duhem-informed neural networks on estimated vapor-liquid equilibria (VLE). To calculate VLEs, we use modified Raoult's law, with vapor pressures estimated by using Antoine parameters obtained from the National Institute of Standards and Technology (NIST) Chemistry webbook (Linstrom and Mallard, 2001), similar to Qin et al. (Qin et al., 2023; Contreras, 2019). Figure 5 shows the isothermal VLEs at 298 K for the exemplary mixtures investigated in the two previous sections. Specifically, the VLEs for the GNN (a) and MCM (c) trained with ReLU activation and standard loss (cf. Section 3.1) and the Gibbs-Duhem-informed (GDI-) \(\text{GNN}_{\text{xMLP}}\) (b) and MCM (d) with softplus activation, \(\lambda=1\), and data augmentation (cf. Section 3.2) are illustrated. For the models without Gibbs-Duhem loss, we observe abrupt changes in the slopes of the bubble and dew point curves caused by the non-smooth gradients of the \(\ln(\gamma_{i})\) predictions, cf. Section 3.1. For both the GNN and MCM, these inconsistent slope changes are particularly visible for 2-thiabutane and butyleneoxide and for chloroform and acetone, and can also be observed, for example, for \(x_{1}\) compositions between 0.1 and 0.4 for ethanol-benzene. The thermodynamic inconsistencies in the activity coefficient predictions are therefore reflected in the VLEs. Comparing the \(\text{GDI-GNN}_{\text{xMLP}}\) and \(\text{GDI-MCM}\) to the standard GNN and MCM, we observe that the consistency of the bubble and dew point curves is vastly improved; in fact, we do not find visible inconsistencies. Gibbs-Duhem-informed ML models therefore also show notably increased consistency in VLEs. Our results so far show that Gibbs-Duhem-informed training of GNNs and MCMs with smooth activation functions such as softplus greatly increases the thermodynamic consistency of activity coefficient predictions compared to standard training on the basis of prediction loss only, while prediction accuracy remains at a similar, very high level. The higher consistency is also reflected in the predicted VLEs. Next, we investigate whether the increase of thermodynamic consistency also transfers to higher generalization capability of Gibbs-Duhem-informed neural networks.
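For reference, the VLE construction used for Figure 5 amounts to a few lines once the activity coefficients are predicted. The sketch below illustrates modified Raoult's law combined with the Antoine equation; the Antoine parameter form \(\log_{10}(P^{\text{sat}})=A-B/(C+T)\) and its units depend on the parameter source, and the function names are invented for this sketch.

```python
import numpy as np

def antoine_psat(T, A, B, C):
    # Antoine equation, log10(P_sat) = A - B / (C + T); units follow the parameter source.
    return 10.0 ** (A - B / (C + T))

def isothermal_vle(x1, ln_g1, ln_g2, T, antoine_1, antoine_2):
    # Modified Raoult's law: P = x1*g1*P1_sat + x2*g2*P2_sat,  y1 = x1*g1*P1_sat / P.
    # ln_g1, ln_g2 are the (model-predicted) log activity coefficients at compositions x1.
    g1, g2 = np.exp(ln_g1), np.exp(ln_g2)
    p1_sat = antoine_psat(T, *antoine_1)
    p2_sat = antoine_psat(T, *antoine_2)
    P = x1 * g1 * p1_sat + (1.0 - x1) * g2 * p2_sat   # bubble-point pressure
    y1 = x1 * g1 * p1_sat / P                          # vapor-phase composition
    return P, y1
```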
#### 3.2.3 Generalization to unseen compositions We first test the generalization to unseen compositions, representing an extreme case of predicting the binary activity coefficient at compositions that are not present for any mixture in the training data. Specifically, we use the comp-extra split (cf. Section 2.4), i.e., in each run, the data for the compositions of a respective set of \(x\in\{\{0.0,\,1.0\}\), \(\{0.1,\,0.9\}\), \(\{0.3,\,0.7\}\), \(\{0.5\}\}\) is excluded from training and used for testing. The results for the respective runs of the ML models without and with Gibbs-Duhem loss are shown in Table 2. The thermodynamic consistency evaluated by the \(\text{GD-RMSE}_{\text{test}}\) is generally higher for all models trained with Gibbs-Duhem loss. Particularly, if data augmentation is used, the consistency is significantly increased, often in the order of one magnitude with respect to the RMSE. Interestingly, we find for low and high compositions, i.e., excluding \(x_{i}\in\{0.1,0.9\}\) and \(x_{i}\in\{0,1\}\), that models trained with Gibbs-Duhem loss but without data augmentation sometimes do not result in higher consistency, which indicates that the model is not able to transfer consistency learned from other compositions, hence overfits. For these cases, data augmentation is particularly effective. Figure 5: Isothermal vapor-liquid-equilibrium plots at 298 K based on activity coefficient predictions by (a) GNN and (c) MCM trained with standard loss based on the prediction error and MLP activation function: ReLU; (b) GDI-GNN\({}_{\text{xMLP}}\) ensemble and (d) GDI-MCM ensemble trained with Gibbs-Duhem-informed (GDI) loss function and following hyperparameters: MLP activation function: softplus, weighting factor \(\lambda=1\), data augmentation: true. Results are averaged from the five model runs of the comp-inter split. For the prediction accuracy, we first observe higher RMSEs for more extreme compositions, which is expected, cf. Section 3.1. Notably, for all runs, the Gibbs-Duhem-informed models achieve a higher accuracy than models trained only on the prediction loss. We find the strongest increase in accuracy for the case of excluding \(x_{i}\in\{0.1,0.9\}\), e.g., the GNN with ReLU activation and without Gibbs-Duhem loss has an RMSE of 0.302, hence failing to predict the activity coefficients with high accuracy, whereas the Gibbs-Duhem-informed GNN with softplus and data augmentation shows an RMSE of 0.075 corresponding to an accuracy increase by a factor of 4. For these compositions, the gradients of the activity coefficient with respect to the compositions tend to be relatively high, and thus accounting for these insights during training seems to be very valuable for prediction. For the boundary conditions, i.e., \(x_{i}\in\{0,1\}\) the accuracy increase is rather minor considering that the overall RMSE of approximately 0.3 is at a high level. Since the Gibbs-Duhem differential constraint is not sensitive to the gradient at \(x_{i}\to 0\), the regularization has less effect on the network predictions at infinite dilution. Hence, predicting the infinite dilution activity coefficient thus benefits less from Gibbs-Duhem information and remains a challenging task. Providing further thermodynamic insights for infinite dilution activity coefficients would thus be interesting for future work. Overall, we find Gibbs-Duhem-informed neural networks to increase generalization capabilities for unseen compositions. 
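The comp-extra splits used in this section can be reproduced with a simple composition filter. The sketch below is illustrative; the column name `x1` and the data frame layout are assumptions about how the COSMO-RS data set is stored:

```python
import pandas as pd

# Held-out composition sets defining the four comp-extra splits (cf. Section 2.4).
HELD_OUT_SETS = [{0.0, 1.0}, {0.1, 0.9}, {0.3, 0.7}, {0.5}]

def comp_extra_split(df: pd.DataFrame, held_out: set, x_col: str = "x1"):
    # All samples at the held-out compositions go to the test set,
    # the remaining compositions of every mixture stay in the training set.
    test_mask = df[x_col].round(1).isin(held_out)
    return df[~test_mask], df[test_mask]
```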
#### 3.2.4 Generalization to unseen mixtures For computer-aided molecular and process design applications, predicting the activity coefficients of new mixtures, i.e., for which no data is readily available, is highly relevant. We thus systematically investigate the generalization to unseen mixtures by Gibbs-Duhem-informed neural networks, beyond the exemplary mixtures from Figures 3 and 4. Specifically, we now consider the mixture-extra split (cf. Section 2.4), where we exclude all data samples for a set of mixtures from the training set and use them for testing. Table 3 shows the results for different ML models trained without and with Gibbs-Duhem loss, aggregated from the five mixture-extra splits. \begin{table} \begin{tabular}{l l l|c c c|c c c|c c c} \hline \hline \multicolumn{3}{c|}{model setup} & \multicolumn{3}{c|}{GNN} & \multicolumn{3}{c|}{MCM} & \multicolumn{3}{c}{GNN\({}_{\text{xMLP}}\)} \\ MLP act. & \(\lambda\) & data augm. & RMSE\({}_{\text{test}}\) & GD-RMSE\({}_{\text{test}}\) & GD-RMSE\({}_{\text{test}}^{\text{ext}}\) & RMSE\({}_{\text{test}}\) & GD-RMSE\({}_{\text{test}}\) & GD-RMSE\({}_{\text{test}}^{\text{ext}}\) & RMSE\({}_{\text{test}}\) & GD-RMSE\({}_{\text{test}}\) & GD-RMSE\({}_{\text{test}}^{\text{ext}}\) \\ \hline \hline ReLU & 0.0 & False & 0.114 & 0.206 & 0.311 & 0.148 & 0.249 & 0.274 & 0.117 & 0.227 & 0.277 \\ softplus & 0.0 & False & 0.114 & 0.214 & 0.210 & 0.125 & 0.140 & 0.142 & 0.117 & 0.146 & 0.125 \\ softplus & 1.0 & False & 0.108 & 0.036 & 0.197 & 0.123 & 0.040 & 0.095 & 0.114 & 0.031 & 0.073 \\ softplus & 1.0 & True & 0.105 & 0.040 & 0.038 & 0.120 & 0.039 & 0.036 & 0.113 & 0.035 & 0.030 \\ \hline \hline \end{tabular} \end{table} Table 3: Prediction accuracies and thermodynamic consistencies measured by root mean squared error (RMSE) for the mixture-extra split, i.e., generalization to unseen mixtures, by the GNN, MCM, and \(\text{GNN}_{\text{xMLP}}\). The models are trained with different hyperparameters: MLP activation function, Gibbs-Duhem loss weighting factor \(\lambda\), and data augmentation. We observe that Gibbs-Duhem-informed neural networks using data augmentation yield notably higher thermodynamic consistency for all models. The prediction accuracy remains at a mostly similar, in some cases slightly higher, level. In comparison to the comp-inter split (cf. Table 1), the prediction accuracy decreases from about 0.08 RMSE to 0.11 RMSE, which is expected, since predicting activity coefficients for new mixtures is more difficult than predicting the values of a known mixture at a different composition. Overall, the prediction quality remains at a very high level. Therefore, Gibbs-Duhem-informed neural networks also provide high accuracy and greatly increased thermodynamic consistency when predicting activity coefficients for new mixtures. The generalization studies emphasize that Gibbs-Duhem-informed neural networks enable high prediction accuracies with significantly increased thermodynamic consistency, cf. Section 3.2. Additionally, generalization capabilities for unseen compositions can be enhanced. We therefore demonstrate that using thermodynamic insights for training neural networks for activity coefficient prediction is highly beneficial. Including further thermodynamic relations, next to the Gibbs-Duhem equation, is thus very promising for future work. ## 4 Conclusion We present Gibbs-Duhem-informed neural networks that learn to predict composition-dependent activity coefficients of binary mixtures with Gibbs-Duhem consistency.
Recently developed hybrid ML models focused on enforcing thermodynamic consistency by embedding thermodynamic models in ML models. We herein propose an alternative approach: utilizing constraints of thermodynamic consistency as regularization during training. We present the results for the choice of the Gibbs-Duhem differential constraint, as this has particular significance. We also present a data augmentation strategy in which data points are added to the training set for evaluation of the Gibbs-Duhem equation at unmeasured compositions, hence without the need to collect additional activity coefficient data. Gibbs-Duhem-informed neural networks strongly increase the thermodynamic consistency of activity coefficient predictions compared to models trained on prediction loss only. Our results show that GNNs and MCMs trained with a standard loss, i.e., on the prediction error only, exhibit notable thermodynamic inconsistencies. For instance, \(\gamma_{1}\) and \(\gamma_{2}\) both increase for changing compositions, or the derivatives of the activity coefficients with respect to the composition exhibit discontinuities caused by the ReLU activation. By using the Gibbs-Duhem loss during training with the proposed data augmentation strategy and employing a smooth activation function, herein softplus, the thermodynamic consistency effectively increases for both model types at the same level of prediction accuracy and is therefore highly beneficial. The higher consistency is also reflected in the predicted vapor-liquid equilibria. Furthermore, we test the generalization capability by respectively excluding specific mixtures and compositions from training and using them for testing. We find that Gibbs-Duhem-informed GNNs and MCMs allow for generalization to new mixtures with high thermodynamic consistency and a similar level of prediction accuracy as standard GNNs and MCMs. They further enable generalization to new compositions with higher consistency, additionally enhancing the prediction accuracy. Future work could extend Gibbs-Duhem-informed neural networks by including other relations for thermodynamic consistency, e.g., the Gibbs-Helmholtz relation for the temperature-dependency of the activity coefficient, cf. (Damay et al., 2021; Sanchez Medina et al., 2023). Since our investigations are based on activity coefficients obtained from COSMO-RS by (Qin et al., 2023), it would also be interesting to fine-tune our models on experimental databases, e.g., the Dortmund Data Bank (Dortmund Data Bank, 2023). Further ML model types such as transformers (Winter et al., 2022) or MCMs based on Bayesian inference (Jirasek et al., 2020) could also be extended by Gibbs-Duhem insights using our approach. Furthermore, additional thermodynamic constraints could be added to the loss function for regularization, which might also enable transferring the concept of Gibbs-Duhem-informed neural networks to predicting further thermophysical properties with increased consistency. ## Acknowledgments This project was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 466417970 - within the Priority Programme "SPP 2331: Machine Learning in Chemical Engineering". This work was also performed as part of the Helmholtz School for Data Science in Life, Earth and Energy (HDS-LEE). K.C.F. acknowledges funding from BASF SE and the Cambridge-Trust Marshall Scholarship. Simulations were performed with computing resources granted by RWTH Aachen University under project "rwth1232".
We further gratefully acknowledge Victor Zavala's research group at the University of Wisconsin-Madison for providing the SolvGNN implementation and the COSMO-RS activity coefficient data openly accessible. ## Authors contributions J.G.R. developed the concept of Gibbs-Duhem-informed neural networks, implemented them, set up and conducted the computational experiments including the formal analysis and visualization, and wrote the original draft of the manuscript. K.C.F. supported the development of the computational experiments and the analysis of the results, provided additional COSMO-RS calculations, and edited the manuscript. A.A.L. and A.M. acquired funding, provided supervision, and edited the manuscript.
2309.06274
ELRA: Exponential learning rate adaption gradient descent optimization method
We present a novel, fast (exponential rate adaption), ab initio (hyper-parameter-free) gradient based optimizer algorithm. The main idea of the method is to adapt the learning rate $\alpha$ by situational awareness, mainly striving for orthogonal neighboring gradients. The method has a high success and fast convergence rate and does not rely on hand-tuned parameters, giving it greater universality. It can be applied to problems of any dimension n and scales only linearly (of order O(n)) with the dimension of the problem. It optimizes convex and non-convex continuous landscapes providing some kind of gradient. In contrast to the Ada-family (AdaGrad, AdaMax, AdaDelta, Adam, etc.) the method is rotation invariant: optimization path and performance are independent of coordinate choices. The impressive performance is demonstrated by extensive experiments on the MNIST benchmark data-set against state-of-the-art optimizers. We name this new class of optimizers after its core idea Exponential Learning Rate Adaption - ELRA. We present it in two variants c2min and p2min with slightly different control. The authors strongly believe that ELRA will open a completely new research direction for gradient descent optimizers.
Alexander Kleinsorge, Stefan Kupper, Alexander Fauck, Felix Rothe
2023-09-12T14:36:13Z
http://arxiv.org/abs/2309.06274v1
# ELRA: Exponential learning rate adaption gradient descent optimization method ###### Abstract We present a novel, fast (exponential rate adaption), ab initio (hyper-parameter-free) gradient based optimizer algorithm. The main idea of the method is to adapt the learning rate \(\alpha\) by situational awareness, mainly striving for orthogonal neighboring gradients. The method has a high success and fast convergence rate and does not rely on hand-tuned parameters giving it greater universality. It can be applied to problems of any dimensions \(n\) and scales only linearly (of order \(O(n)\)) with the dimension of the problem. It optimizes convex and non-convex continuous landscapes providing some kind of gradient. In contrast to the Ada-family (AdaGrad, AdaMax, AdaDelta, Adam, etc.) the method is rotation invariant: optimization path and performance are independent of coordinate choices. The impressive performance is demonstrated by extensive experiments on the MNIST benchmark data-set against state-of-the-art optimizers. We name this new class of optimizers after its core idea **E**xponential **L**earning **R**ate **A**daption - **ELRA**. We present it in two variants c2min and p2min with slightly different control. The authors strongly believe that ELRA will open a completely new research direction for gradient descent optimizers. ## Introduction Numerical optimization of functions obviously relies on information obtained from the function \(f(x)\) landscape. One key problem is that usually we are lacking meaningful global information about \(f(x)\) making it necessary to rely instead on local information. Approaches using this local information range from using the function value in physics-inspired relaxation approaches[1] to algorithms using directly the topographical structure of the function landscape such as gradient descent-like approaches to biology inspired algorithms such as swarm optimization[2]. Among these, the gradient descent-like methods have the longest history and are in high dimensional problems (e.g. DNN) the only practically applicable algorithms (due to their linear scaling with the problems dimension). In these approaches the gradient \(G=\nabla f(x)\) of the function \(f(x)\) is computed and thus also the best descent direction \(-G\). However, while the idea of going downhill is obviously reasonable in the first place, no information whatsoever can be obtained about how long an optimal step should be, e.g. the optimal step-length \(\lambda(x)=||\alpha\cdot\nabla(f(x))||\) is unknown. The parameter \(\alpha\) is referred to as the step size or learning rate. Most current gradient based algorithms use a learning rate \(\alpha\) independent of \(x\) (but sometimes dependent on the time or step count \(t\)), estimated by some initial try and error test runs. This holds in particular for the Ada-family1 of optimizers, widely used for training neural networks. To eliminate the initial tuning of \(\alpha\), there are some modern approaches which adapt the learning rate \(\alpha\) such as AdaDelta or the algorithms described in Prodigy2023[3] or DogSgd2023[4]. Yet they perform not better then the currently best Ada-optimizer Adam with optimal \(\alpha\) and ultimately they converge in most cases to constant \(\alpha\). The use of a fixed learning rate \(\alpha\) is in part due to the fact that it allows for precise mathematical analysis, guaranteeing or almost surely guaranteeing (for SGD) a lower bound on convergence rates (e.g. see Nesterov[5], 1.2.3). 
The method (ELRA) to be proposed in this work is paradigm-changing because it estimates in each step self-consistently a near-optimal learning rate \(\alpha\) for the next step from low-cost local knowledge of the function, thereby achieving a jump close to the next minimum along the gradient direction. In particular, the learning rate approaches a problem-specific good scale exponentially fast. Therefore, we propose to name this class of optimizers **E**xponential **L**earning **R**ate **A**daption - **ELRA**. Depending on the problem, the adaption leads to continual substantial changes of \(\alpha\). Footnote 1: Such as: AdaGrad, RMSProp, AdaDelta, Adam, etc., they all scale the gradient components individually (precondition-like) Recent articles indicate that large variations of \(\alpha\) might be very beneficial. In LongSteps2023[6], it is for the first time mathematically proven that (periodically) varying step sizes lead to much better convergence rates, which is confirmed by our experimental results. In Truong2021[7] it is shown that estimating the best \(\alpha\) via backtracking using Armijo's condition (Nesterov[5], 1.2.3) can lead to faster convergence than the Ada-family. However, each backtracking step needs a separate and expensive function value. Hence, backtracking more than once is seldom justified by the speed gained. Our algorithms do not suffer from this computational conundrum, as we provide two low-cost estimators for the best \(\alpha\), thereby retaining the benefit of a good \(\alpha\) without losing speed. Note that such a strongly adaptive \(\alpha\) completely eliminates the need for finding by hand a good constant \(\alpha\) for a particular problem. Moreover, most modern training schemes rely on decreasing \(\alpha\) over time to achieve faster convergence. The best timing is a priori unknown and often determined by an educated guess. We believe that a strongly adaptive \(\alpha\) also needs no external timing. The third advantage is that our algorithms are invariant under orthogonal transformations of the arguments \(x\), such as rotations, unlike the Ada-family. Such an invariance is preferable not only for geometric optimization (see VectorAdam2022[8]) but also important near saddle points (see Results). ## Orthogonal gradients are optimal Let us briefly explain how we estimate the best \(\alpha\). All gradient descent methods for minimizing a function \(f\) boil down to the update scheme \(x_{t}=x_{t-1}-\alpha((1-\beta)G_{t-1}+\beta M_{t-1})\) for the argument \(x\) of \(f\), where \(G_{t-1}=\nabla f(x_{t-1})\) is the gradient at \(x_{t-1}\), \(M_{t-1}\) the momentum, and \(\beta\) the ratio between \(G_{t-1}\) and \(M_{t-1}\) (possibly zero). For the Ada-family, \(\alpha\) is essentially constant while \(G_{t-1}\) is not actually the gradient, but a component-wise modification, which is dynamically adapted. However, this leads to a dependency on the coordinate system, and the speed of the algorithm depends heavily on the concrete representation of the data (see Figure 0(a)). Moreover, \(\alpha\) has to be chosen with care, either using past results or initial trial-and-error runs. We provide a completely new approach which overcomes many of these problems. 
We can show that for the optimal learning rate \(\alpha\), which leads locally to the smallest value \(f(x_{t})\), it holds that the new and previous gradients \(G_{t}\) and \(G_{t-1}\) are orthogonal to each other (see Methods, equation (1)) or, equivalently, that the cosine of the angle between the gradients \(\cos_{t}:=\cos(\angle(G_{t},G_{t-1}))\) is zero. Moreover, we could even show that for \(\cos_{t}<0\) the step size \(\alpha\) should be decreased while for \(\cos_{t}>0\) it should be increased. Figuratively speaking: If we see zig-zag or anti-parallel steps we should decelerate, while for primarily parallel steps we should accelerate. This is computationally much cheaper than Armijo's condition, as we need no extra gradient/function values. We use two competing approaches to implement this idea to update \(\alpha\). The first variant is \(\alpha_{t}=\alpha_{t-1}(1+\cos_{t}/2)\). This formula for \(\alpha\) is the core of our cosine-based optimizer **c2min**. The second variant is \(\alpha_{t}=\alpha_{t-1}\big{(}1+\frac{\cos_{t}}{||G_{t-1}||/||G_{t}||-\cos_{t}}\big{)}\). This version requires no momentum (\(\beta=0\)). The update minimizes a parabola through \(x_{t-1}\) and \(x_{t}\) with slopes \(-||G_{t-1}||\) and \(-\langle G_{t-1},G_{t}\rangle\). This is the update formula for our parabola-based optimizer **p2min**. Note that for c2min, the updated step size \(\alpha_{t}\) is bounded by \(0.5\cdot\alpha_{t-1}\) and \(1.5\cdot\alpha_{t-1}\), while it can be arbitrary between \(-\infty\) and \(+\infty\) for p2min. We prevent this potentially catastrophic behaviour by imposing bounds of the form \(0<\alpha_{t}/\alpha_{t-1}<\gamma_{MAX}\), where \(\gamma_{MAX}>1\) can be chosen at will, e.g. \(\gamma_{MAX}\sim 10^{6}\). Moreover, we found that it is beneficial to set fixed bounds for \(\alpha\). We use at the moment \(10^{-8}<\alpha<10^{6}\). These are additional hyper-parameters, yet sufficient for almost all applications. Note that an initial \(\alpha_{0}>0\) still has to be chosen. However, the specific choice matters only marginally, as both algorithms adapt \(\alpha\) exponentially fast. We chose \(\alpha_{0}\) very small (e.g. \(\alpha_{0}\sim 10^{-5}\)) to prevent initial instabilities (explosions of \(f(x)\)). This leads to a negligible fixed number of initial extra steps to increase \(\alpha\) to the right magnitude (see Figure 2). Note that both approaches are by construction rotation invariant2, as they use only Euclidean norms and cosines between vectors. Moreover, their computation is relatively cheap (effort linear in dimension \(n\), \(O(n)\) time and space), as computing the norm and the cosine (or the scalar product) for vectors is relatively cheap. Footnote 2: Actually they are even invariant under orthogonal transformations. ## Results We have a mathematical justification for our approach (see Methods, equation (1)). Yet, giving estimates on guaranteed/expected convergence rates for our proposed optimizers is intractable using state-of-the-art methods (even if restricted to convex landscapes), due to the adaptive nature of the learning rate \(\alpha\). Thus we rely on experiments to show the usefulness of our optimizers. All DNN experiments are executed for multiple starting points/initializations as gradient descent methods show partially chaotic behaviour3, i.e. even small changes in initialization/batching can lead to drastically different optimization paths and minima. 
However, for cost reasons (limitations of an academic budget) we restrict ourselves to 5-25 different initializations per experiment and provide graphics which include the mean/median. Footnote 3: It appears to the authors that parts of the deep learning community are not fully aware of this fact. ### Mathematical 2D experiments As a proof of concept and to explore certain standard situations/problems in gradient descent, we first show results on 2-dimensional standard problems such as saddle points, bowls/parabolas and the Rosenbrock function. #### Saddle points Saddle points (where \(\nabla f(x)=0\) but \(f(x)\) is not a local max/min) can pose problems in gradient descent methods, as the gradient becomes arbitrarily small near them, which might lead to catastrophic speed loss. Generically, for a suitable choice of coordinates, these saddle points look locally like \(x=(0,0)\) for \(f(x_{1},x_{2})=x_{1}^{2}-x_{2}^{2}\) (see Math. Suppl., equation (4)). However, for a given data representation, it is more likely that the coordinates near a saddle are rotated! We looked at the performance of the optimizers AdaDelta, Adam (with \(\alpha=0.01,\beta_{1}=0.9,\beta_{2}=0.999\)), c2min and p2min near the standard saddle \(f(x)=x_{1}^{2}-x_{2}^{2}\) starting at \(x_{0}=(1,10^{-9})\) and the problem rotated by \(45^{\circ}\). Figure 0(a) shows the value of \(f\) over steps \(t\). The dashed lines belong to the rotated situation. The fastest are c2min and p2min, and for each only one graph is visible (rotation invariance). AdaDelta and Adam are slower and suffer significantly from the \(45^{\circ}\)-rotation, as it makes the component-wise modification of the Ada-family completely useless. Figure 0(b) illustrates the paths in the \(x_{1}\)-\(x_{2}\) plane chosen by the different optimizers. One sees that c2min and p2min quickly follow the gradient direction, while the Ada-family either try to avoid the saddle directly (unrotated situation) or follow the gradient direction only slowly. This shows one drawback of conditioning individual axis weights within the Ada-family. It also illustrates that the different optimizers often find different local/global minima. Noteworthy: c2min (green) shows visible oscillations around the \(x_{2}\)-axis, which we use by design to decelerate. #### Bowls and Rosenbrock As a second class of mathematical experiments, we considered higher-dimensional parabolas (so-called bowls), i.e. functions of the form \(f(x)=\sum_{i}c_{i}\cdot x_{i}^{2}\), and the infamous Rosenbrock function \(f(x_{1},x_{2})=(1-x_{1})^{2}+100(x_{2}-x_{1}^{2})^{2}\). Bowls provide the simplest non-trivial functions for convex optimization, while the Rosenbrock function with its curved valley is a difficult standard optimization problem. Here, we used for Adam \(\alpha=0.05,\beta_{1}=0.8,\beta_{2}=0.9\) and for RMSprop \(\alpha=0.05\). Tables 1 and 2 give the minimal number of steps \(t\) needed for the different optimizers to reach a certain threshold for \(f(x_{t})\). One sees that for these examples (together with the saddle from above) p2min is by far the fastest and, for Rosenbrock with bigger starting points, it is the only optimizer that produces any meaningful results. The bad performance of c2min for Rosenbrock might be due to the constant momentum update. We hope to improve upon this result in the near future (see Future work). Figure 1: Performance of optimizers near saddle and effect of \(45^{\circ}\)-rotation. p2min (blue) and c2min (green) are fastest (only 8 and 32 steps resp. 
to leave plot-region). AdaDelta (red) and Adam (orange) are slower in all cases, especially for rotated axes (dashed). Plot (b) illustrates the different paths of the optimizers (\(\boldsymbol{+}\simeq 0^{\circ}\), \(\boldsymbol{\times}\simeq 45^{\circ}\)). Note that 4 out of the 8 steps of p2min (blue) are indistinguishable near origin \((0,0)\). ### Neural networks We conducted experiments on the MNIST data set for recognizing handwritten single digits from pictures consisting of 28x28 pixels. We used a simple fully connected network with 1 hidden layer (10 neurons) with ReLU-activation functions. This small design is to reduce computational costs. We also conducted a few tests with 2 hidden layers (16+16 neurons), giving similar results. We tested our two optimizers against the standard optimizers AdaDelta and Adam. First, we performed 25 short runs over 1..4 epochs for each optimizer and batch-size to find an optimal global learning rate \(\alpha\) for Adam (for short runs it is \(\alpha=0.01\)) and to test performance on different batch-sizes (256 (Fig. 2), 512 (Fig. 7), 1024). We see that c2min performs faster than the standard Ada-optimizers, while p2min's performance is comparable, if one looks at the mean (dark blue curve). The plateau phase at the beginning of c2min comes from the very small initial learning rate \(\alpha_{0}=10^{-5}\). This requires a fixed number of steps to increase \(\alpha\) to the right magnitude, due to exponential adaptation. The other optimizer p2min does not show this behaviour, as it adapts much faster. The spikes in the graph of p2min are due to dramatically bad estimations of \(\alpha\), due to its aggressive adaptation of the learning rate. In these situations, we use a kind of soft restart (described below) correcting this behaviour. Secondly, we conducted long-run experiments spanning 40 epochs to determine the best test loss results. Here, we found that \(\alpha=0.001\) is the best learning rate for Adam. We conducted 8 training sessions with different (but same for every optimizer) initializations and training data shuffle after each epoch and batch size 256. This is not the best option for p2min. Therefore, we also conducted training sessions with batch size 64. For batch size 256, we find at first glance a similar performance for AdaDelta, Adam and c2min (see Figure 2(a)). Yet c2min reaches very good results fastest and finds the best results (green arrows). However, c2min later enters a phase of oscillating test losses. Therefore, we plotted in Figure 2(b) the minimal test losses from the first to the current epoch. This ignores potential later deterioration of the results. Figures 3(a) and 3(b) show the results for micro-batches (size=64). Apparently, c2min fails here completely, while p2min now produces the best results after 30 epochs. Note that c2min with batch-size 256 still produces better and faster results. ## Conclusion We presented a novel, simple, mainly self-consistent, robust and fast optimizing method with linear dimensional scaling and rotational invariance, realized in two algorithms. Typical runs on mathematical standard problems and statistical tests on a neural network for the MNIST data set with several initializations showed better performance than the best state-of-the-art optimizer Adam with hand-tuned optimal parameters(!). We think that our algorithms still leave much room for improvement. 
Finding better control systems for \(\alpha\), momentum, and soft restarts promises huge performance gains and increased universality (see Future work below). Meta-learning could also lead to further improvement. Moreover, sometimes c2min or p2min fail by decreasing \(\alpha\) too much, essentially stopping the optimization midway. We believe we know the cause of these issues, yet finding a universal solution requires more time. The authors thought about why nobody tried steep and fast \(\alpha\)-variations before and see a couple of reasons: for small dimensions good solvers exist (often using matrix inversions, e.g. the Levenberg-Marquardt algorithm), mathematical optimizers strive for provability (which until recently was restricted to constant \(\alpha\): compare Nesterov [5] and LongSteps2023[6]), and previous conditions (Armijo) for updating \(\alpha\) are often too expensive in high dimensions.
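To make the two update rules discussed earlier concrete, here is a minimal sketch (ours, not the authors' reference implementation) of the c2min and p2min learning-rate adaptions applied with plain gradient steps to a simple bowl. Momentum, the soft-restart mechanism, and all other details of the paper are omitted, and the example function and variable names are illustrative assumptions.

```python
import numpy as np

def c2min_alpha(alpha, g_prev, g_curr):
    # c2min: alpha_t = alpha_{t-1} * (1 + cos_t / 2)
    cos_t = g_prev @ g_curr / (np.linalg.norm(g_prev) * np.linalg.norm(g_curr) + 1e-12)
    return np.clip(alpha * (1.0 + 0.5 * cos_t), 1e-8, 1e6)   # fixed global bounds on alpha

def p2min_alpha(alpha, g_prev, g_curr, gamma_max=1e6):
    # p2min: alpha_t = alpha_{t-1} * (1 + cos_t / (||G_{t-1}||/||G_t|| - cos_t)),
    # with the ratio alpha_t/alpha_{t-1} kept in (0, gamma_max) as described in the text.
    n_prev, n_curr = np.linalg.norm(g_prev), np.linalg.norm(g_curr)
    cos_t = g_prev @ g_curr / (n_prev * n_curr + 1e-12)
    ratio = 1.0 + cos_t / (n_prev / (n_curr + 1e-12) - cos_t)
    ratio = min(max(ratio, 1e-12), gamma_max)
    return np.clip(alpha * ratio, 1e-8, 1e6)

# Example: a 2D bowl f(x) = x1^2 + 10*x2^2 with gradient [2*x1, 20*x2].
grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])
x, alpha = np.array([1.0, 1.0]), 1e-5            # very small alpha_0, as recommended
g_prev = grad(x)
for _ in range(200):
    x = x - alpha * g_prev                        # plain gradient step (beta = 0)
    g_curr = grad(x)
    alpha = c2min_alpha(alpha, g_prev, g_curr)    # or p2min_alpha(alpha, g_prev, g_curr)
    g_prev = g_curr
print(x, alpha)
```

In this sketch the step size grows geometrically from the deliberately tiny \(\alpha_{0}\) as long as consecutive gradients stay roughly parallel, which corresponds to the initial plateau phase described for c2min above.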
2308.16428
On the topology of the Milnor Boundary for real analytic singularities
We study the topology of the boundaries of the Milnor fibers of real analytic map-germs $f: (\mathbb{R}^M,0) \to (\mathbb{R}^K,0)$ and $f_{I}:=\Pi_{I}\circ f : (\mathbb{R}^M,0) \to (\mathbb{R}^I,0)$ that admit Milnor's tube fibrations, where $\Pi_{I}:(\mathbb{R}^K,0)\to (\mathbb{R}^{I},0)$ is the canonical projection for $1\leq I<K.$ For each $I$ we prove that the Milnor boundary $\partial F_{I}$ is given by the double of the Milnor tube fiber $F_{I+1}.$ We prove that if $K-I\geq 2$, then the pair $(\partial F_{I},\partial F_{f})$ is a generalized $(K-I-1)$-open-book decomposition with binding $\partial F_{f}$ and page $F_{f} \setminus \partial F_{f}$ - the interior of the Milnor fibre $F_{f}$ (see the definition below). This allows us to prove several new Euler characteristic formulae connecting the Milnor boundaries $\partial F_{f},$ $\partial F_{I},$ with the respective links $\mathcal{L}_{f}, \mathcal{L}_{I},$ for each $1\leq I<K,$ and a L\^e-Greuel type formula for the Milnor boundary.
R. AraΓΊjo dos Santos, A. Menegon, M. Ribeiro, J. Seade, I. D. Santamaria GuarΓ­n
2023-08-31T03:31:57Z
http://arxiv.org/abs/2308.16428v1
# On the topology of the Milnor boundary for real analytic singularities ###### Abstract. We study the topology of the boundaries of the Milnor fibers of real analytics map-germs \(f:(\mathbb{R}^{M},0)\to(\mathbb{R}^{K},0)\) and \(f_{I}:=\Pi_{I}\circ f:(\mathbb{R}^{M},0)\to(\mathbb{R}^{I},0)\) that admit Milnor's tube fibrations, where \(\Pi_{I}:(\mathbb{R}^{K},0)\to(\mathbb{R}^{I},0)\) is the canonical projection for \(1\leq I<K.\) For each \(I\) we prove that the Milnor boundary \(\partial F_{I}\) is given by the double of the Milnor tube fiber \(F_{I+1}.\) We prove that if \(K-I\geq 2,\) then the pair \((\partial F_{I},\partial F_{f})\) is a generalized \((K-I-1)\)-open-book decomposition with binding \(\partial F_{f}\) and page \(F_{f}\setminus\partial F_{f}\) - the interior of the Milnor fibre \(F_{f}\) (see the definition below). This allows us to prove several new Euler characteristic formulae connecting the Milnor boundaries \(\partial F_{f},\)\(\partial F_{I},\) with the respectives links \(\mathcal{L}_{f},\mathcal{L}_{I},\) for each \(1\leq I<K,\) and a Le-Greuel type formula for the Milnor boundary. ## 1. Introduction One of the most active and challenging areas in singularity theory is the study of non-isolated singularities of complex spaces. For instance, if \(f:(\mathbb{C}^{n},0)\to(\mathbb{C},0)\) is a holomorphic germ of function with non-isolated critical point, the degeneration process of the non-critical levels to the non-isolated singularity hypersurface defined by \(f\) is still not well-understood, unlike the isolated singularity case. One approach to this problem is to study such degeneration over a small sphere around the origin. In other words, one tries to understand the topology of the boundary of the Milnor fiber and how it degenerates to the link of \(f\). This problem has been attacked by several authors like Siersma [40, 41], Nemethi-Szilard [33], Michel-Pichon [29, 30, 31], Bobadilla-Menegon [16], Menegon-Seade [28] and Aguilar-Menegon-Seade [1]. The corresponding understanding for real analytic singularities is still very poor. Although one can define a Milnor fibration for many classes of real analytic germs of mapping \(f:(\mathbb{R}^{M},0)\to(\mathbb{R}^{K},0),\) not much is known about the topology of the corresponding Milnor fiber or the link of \(f\) (see [34, 20, 28, 25, 21] for some results), and even less about the boundary of such objects. The first part of this paper aims to introduce a new perspective to deal with such problem, inspired mainly by [14, 3, 27]. The idea is to relate the topology of the boundary of the Milnor fiber of \(f\), denoted by \(\partial F_{f},\) with the boundary of the Milnor fiber of the composition \(f_{I}\) of \(f\) with some projection \((\mathbb{R}^{K},0)\to(\mathbb{R}^{I},0),\) which we denote by \(\partial F_{I}.\) As a result, in Section 3 we prove that for \(K-I\geq 2\) there is a generalized open-book decomposition \[\frac{f_{K-I}}{\|f_{K-I}\|}:\partial F_{I}\setminus\partial F_{f}\to S^{K-I-1}\,,\] where \(f_{K-I}\) is the composition of \(f\) with the projection \((\mathbb{R}^{K},0)\to(\mathbb{R}^{K-I},0)\). The particular case \(0\leq K-I\leq 1\) and the case of a complex ICIS \((\mathbb{C}^{M+K},0)\to(\mathbb{C}^{K},0)\) are analyzed. On the other hand, the understanding of the topology of the boundary of the Milnor fiber of the function-germ \(f_{I}\) also provides a tool to better understanding the topology of the Milnor fiber of the map-germ \(f\) itself. 
In fact, in Section 4 we use the aforementioned open-book decomposition to obtain some formulae relating the Euler characteristics of \(\partial F_{I}\), \(F_{f}\) and the link \(\mathcal{L}_{I}\) of \(f_{I}\), for \(I=1,\ldots,K\). Finally, in the last section of the article we use those Euler characteristic formulae to get a hint on the possible topological behaviour of real analytic map-germs on an odd number of variables and how similar or different it can be when compared with the complex setting. ## 2. Notations and basics definitions Let \(f:(\mathbb{R}^{M},0)\to(\mathbb{R}^{K},0),f=(f_{1},\ldots,f_{K})\) be an analytic map germ and consider the following diagram (1) where the projections \(\Pi_{I}(y_{1},\ldots,y_{K})=(y_{1},\ldots,y_{I})\) and \(\Pi_{K-I}(y_{1},\ldots,y_{K}):=(y_{I+1},\ldots,y_{K})\), \(f_{I}=\Pi_{I}\circ f\) and \(f_{K-I}=\Pi_{K-I}\circ f\). **Basic notations and definitions:** The **zero locus** of \(f\) is defined and denoted by \(V(f):=\{f=0\}\), respectively, \(V(f_{I})=\{f_{I}=0\}\) and \(V(f_{K-I})=\{f_{K-I}=0\}\). Hence, \[V(f_{I})\supseteq V(f)\subseteq V(f_{K-I}).\] The **singular set** of \(f\), denoted by \(\operatorname{Sing}f\), is defined to be the set of points \(x\in(\mathbb{R}^{M},0)\) such that the rank of the Jacobian matrix \(df(x)\) is lower than \(K\). Analogously, we define the singular sets \(\operatorname{Sing}f_{I}\) and \(\operatorname{Sing}f_{K-I}\) of \(F_{I}\) and \(F_{K-I}\), respectively. The **discriminant set** of \(f\) is then defined by \[\operatorname{Disc}f:=f(\operatorname{Sing}f)\,.\] The **polar set** of \(f\) relative to \(g(x):=\|x\|^{2}\) is defined and denoted by \(\operatorname{Sing}\left(f,g\right)\). Analogously, we define \(\operatorname{Sing}\left(f_{I},g\right)\) and \(\operatorname{Sing}\left(f_{K-I},g\right)\). The next diagram relates the singular and the polar sets: **Definition 2.1**.: We say that a map germ \(f:(\mathbb{R}^{M},0)\rightarrow(\mathbb{R}^{K},0),f=(f_{1},\ldots,f_{K})\) is _tame_, or satisfies the transversality condition at the origin if \[\overline{\operatorname{Sing}\left(f,g\right)\setminus V(f)}\cap \operatorname{Sing}f\subseteq\{0\}\] as a germ of set at the origin. **Lemma 2.2**.: _Let \(1\leq I\leq K-1.\) If \(f\) is tame, then \(f_{I}\) and \(f_{K-I}\) are tame as well._ It is well known that the tameness conditions for \(f,\)\(f_{I}\) and \(f_{K-I}\) induce the following fibrations on the boundary of the closed ball \(S_{\epsilon}^{M-1}:=\partial B_{\epsilon}^{M}\): \[f_{|}:S_{\epsilon}^{M-1}\cap f^{-1}(B_{\eta_{1}}^{K}\setminus\{0\}) \to B_{\eta_{1}}^{K}\setminus\{0\} \tag{3}\] \[f_{I|}:S_{\epsilon}^{M-1}\cap f_{I}^{-1}(B_{\eta_{2}}^{I}\setminus\{0\}) \to B_{\eta_{2}}^{I}\setminus\{0\} \tag{4}\] \[f_{K-I|}:S_{\epsilon}^{M-1}\cap f_{K-I}^{-1}(B_{\eta_{3}}^{K-I}\setminus\{0\}) \to B_{\eta_{3}}^{K-I}\setminus\{0\} \tag{5}\] Moreover, under the extra conditions \(\operatorname{Disc}f=\{0\}\) there also exists the **Milnor tube's fibration** is the following sense: there exists \(\epsilon_{0}>0\) small enough such that for all \(0<\epsilon\leq\epsilon_{0}\) there exists \(0<\eta_{1}\ll\epsilon\) such that the restriction map \[f_{|}:B_{\epsilon}^{M}\cap f^{-1}(B_{\eta_{1}}^{K}\setminus\{0\}) \to B_{\eta_{1}}^{K}\setminus\{0\} \tag{6}\] is a locally trivial smooth fibration, where \(B_{\epsilon}^{M}\), respectively \(B_{\epsilon}^{K}\), stand for the closed ball in \(\mathbb{R}^{M}\) with radius \(\epsilon\), centered at origin, respectively in \(\mathbb{R}^{K}\) with radius \(\eta\). 
Hence, for the same reason, we conclude the existence of the Milnor tube fibrations for \(f_{I}\) and \(f_{K-I}:\) \[f_{I|}:B_{\epsilon}^{M}\cap f_{I}^{-1}(B_{\eta_{2}}^{I}\setminus\{0\}) \to B_{\eta_{2}}^{I}\setminus\{0\} \tag{7}\] \[f_{K-I|}:B_{\epsilon}^{M}\cap f_{K-I}^{-1}(B_{\eta_{3}}^{K-I}\setminus\{0\})\to B_{ \eta_{3}}^{K-I}\setminus\{0\} \tag{8}\] From now on denote by \(F_{f}\), \(F_{I}\) and \(F_{K-I}\) the Milnor fibers of the fibrations (6), (7) and (8), respectively, by \(\partial F_{f}\), \(\partial F_{I}\) and \(\partial F_{K-I}\) the fibers of (3), (4) and (5). Consider the Milnor tube fibration \(f_{I|}:B_{\epsilon}^{M}\cap f_{I}^{-1}(B_{\eta_{2}}^{I}\setminus\{0\})\to B_{ \eta_{2}}^{I}\setminus\{0\}\) and \(z\in B_{\eta_{2}}^{I}\setminus\{0\}.\) Thus the fiber \(F_{I}=f^{-1}(\Pi_{I}^{-1}(z))\). Denote by \(D^{K-I}=\Pi_{I}^{-1}(z)\cap(B_{\eta_{1}}^{K}\setminus\{0\})\) the \((K-I)-\)dimensional closed disc given by the intersection of the fiber of projection \(\Pi_{I}:\mathbb{R}^{K}\to\mathbb{R}^{I}\) with the closed ball \(B_{\eta_{1}}^{K}\setminus\{0\}\) on the target space of fibration (6). Thus \(F_{I}=f^{-1}(D^{K-I})\) and one may consider the restriction map \(f:F_{I}\to D^{K-I}\) which is a smooth surjective proper submersion, and hence a smooth trivial fibration. Therefore, the following homeomorphism follows \(F_{I}\approx F_{f}\times D^{K-I}\), which is a diffeomorphism after smoothing the corners. On the boundary of the Milnor fiber the following diffeomorphism holds true: \[\partial F_{I}\approx(\partial F_{f}\times D^{K-I})\cup(F_{f}\times S^{K-I-1}). \tag{9}\] We remark that the next Proposition is in the same vein as [12, Corollary 4]. **Proposition 2.3**.: _Let \(f:(\mathbb{R}^{M},0)\to(\mathbb{R}^{K},0),\)\(M>K\geq 2,\) be a tame map germ with \(\operatorname{Disc}f=\{0\}\) and for \(1\leq I<K\) consider the composition map \(f_{I}=\Pi_{I}\circ f\) where \(\Pi_{I}:\mathbb{R}^{K}\to\mathbb{R}^{I}\) is the projection map. Then the boundary \(\partial F_{I}\) of the Milnor fiber \(F_{I}\) is obtained (up to homeomorphism) by the gluing together two disjoint copies of the Milnor fiber \(F_{I+1}\) along the common boundary \(\partial F_{I+1}\)._ Proof.: The proof follows from the composition where \(\widehat{\Pi}_{I}(y_{1},\ldots,y_{I+1})=(y_{1},\ldots,y_{I})\) and the fact that \(f\) being tame implies the same to \(f_{I+1}\) and \(f_{I}.\) In the same manner \(\operatorname{Disc}f=\{0\}\) implies \(\operatorname{Disc}f_{I+1}=\{0\}\), \(\operatorname{Disc}f_{I}=\{0\}\). Hence, \(F_{I}\approx F_{I+1}\times[-1,1]\) and the boundary \[\partial F_{I}\approx(\partial F_{I+1}\times[-1,1])\cup(F_{I+1}\times\{-1,1\}) \tag{10}\] Now it is easy to see that the closed manifold \(\partial F_{I}\) is obtained by the gluing the two disjoint copies of \(F_{I+1}\) given by \(F_{I+1}\times\{-1\}\cup F_{I+1}\times\{1\}\) along the boundaries of the cylinder \(\partial F_{I+1}\times[-1,1].\) See the Figure (1). Therefore the result follows. ## 3. Fibration structure on the boundary of the Milnor fiber From now on we will consider \(f\) tame, \(\operatorname{Disc}f=\{0\}\), and \(V(f)\neq\{0\}\). One may adjust the radii \(\eta_{1}\), \(\eta_{2}\) and \(\eta_{3}\) in the fibrations (3), (4) and (5) such that \(\partial F_{I}\subset S_{\epsilon}^{M-1}\cap f^{-1}(B_{\eta_{1}}^{K}\setminus\{ 0\})\) and the restriction map \(f_{K-I}:\partial F_{I}\to\mathbb{R}^{K-I}\) is well defined, for any \(1\leq I<K\). 
**Lemma 3.1**.: _The restriction map \(f_{K-I}:\partial F_{I}\to\mathbb{R}^{K-I}\) is a smooth submersion._ Proof.: For \(y\in B_{\eta_{2}}^{I}\setminus\{0\}\) consider \(\partial F_{I}=S_{\epsilon}^{M-1}\cap f_{I}^{-1}(y)\) and the matrix \[A(x):=\left[\begin{array}{c}\mathrm{d}f_{I}(x)\\ \mathrm{d}f_{K-I}(x)\\ \mathrm{d}g(x)\end{array}\right].\] By the tameness of \(f\) we have that for all \(x\in\partial F_{I}\) the rank of \(A(x)\) is maximal. Hence \(f_{K-I}\) is a smooth submersion. Now, since \(0\in\mathbb{R}^{K-I}\) is a regular value of \(f_{K-I}:\partial F_{I}\to\mathbb{R}^{K-I}\), by the compactness of \(\partial F_{I}\) one may choose \(\tau>0\) small enough and a closed disc \(D_{\tau}^{K-I}\subset\mathbb{R}^{K-I}\) centered at the origin \(0\in\mathbb{R}^{K-I}\), such that every \(y\in D_{\tau}^{K-I}\) is a regular value of the restriction map \(f_{K-I}\). Hence the restriction map \(f_{K-I}:\partial F_{I}\cap f_{K-I}^{-1}(D_{\tau}^{K-I})\to D_{\tau}^{K-I}\) is a smooth surjective submersion, and hence a trivial fibration with the fiber diffeomorphic to \(\partial F_{f}=\partial F_{I}\cap f_{K-I}^{-1}(0)\). Therefore, \[\partial F_{I}\cap f_{K-I}^{-1}(D_{\tau}^{K-I})\approx\partial F_{f}\times D_{\tau}^{K-I}. \tag{11}\] Denote by \(T_{\tau}(\partial F_{f}):=\partial F_{f}\times D_{\tau}^{K-I}\) the closed tubular neighbourhood of the embedded submanifold \(\partial F_{f}\longleftrightarrow\partial F_{I}\). See the Figure (2). Then, by (9) it follows that the complement \[\partial F_{I}\setminus int(T_{\tau}(\partial F_{f}))\approx F_{f}\times S^{K-I-1} \tag{12}\] In this section we will prove that the embedded submanifold \(\partial F_{f}\longleftrightarrow\partial F_{I}\) yields on the boundary \(\partial F_{I}\) an interesting structure. For that, let \(f:(\mathbb{R}^{M},0)\rightarrow(\mathbb{R}^{K},0),M>K\geq 2\) and take \(1\leq I\leq K-2\) as in the beginning of section 2. We are now ready to introduce the main result of this section. Before that, we introduce an appropriate definition which fits with the type of fibration structure we are able to prove on the boundary of the Milnor fibration. Following H. Winkelnkemper in [42], A. Ranicki [39]1, E. Looijenga [18], see also [15] and [4, section 3], given a smooth manifold \(M\) and \(N\subset M\) a submanifold of codimension \(k\geq 2\) in \(M,\) suppose that for some trivialization \(t:T(N)\to N\times B^{k}\) of a tubular neighbourhood \(T(N)\) of \(N\) in \(M,\) the fiber bundle defined by the composition \(\pi\circ t\) in the diagram below Footnote 1: including the appendix "The history and applications of open books", by H. E. Winkelnkemper where \(\pi(x,y):=\dfrac{y}{\|y\|},\) extends to a smooth locally trivial fiber bundle \(p:M\setminus N\to S^{k-1};\) i.e., \(p_{|_{T(N)\setminus N}}=\pi\circ t.\) In such a case the pair \((M,N)\) above will be called a _generalized \((k-1)\)-open-book decomposition on \(M\) with binding \(N\)_ and _page_ the fiber \(p^{-1}(y),y\in S^{k-1}.\) The main result of this section is: **Theorem 3.2**.: _Let \(f:(\mathbb{R}^{M},0)\rightarrow(\mathbb{R}^{K},0),\)\(M>K\geq 2,\) be a tame map germ with \(\operatorname{Disc}f=\{0\}.\) Then, for each \(1\leq I<K\) such that \(K-I\geq 2,\) the pair \((\partial F_{I},\partial F_{f})\) is a 
generalized \((K-I-1)\)-open-book decomposition, with binding \(\partial F_{f}\) and page \(F_{f}\setminus\partial F_{f}\) - the interior of the Milnor fibre \(F_{f}\)._ Proof.: Consider the restriction map \(f_{K-I}:\partial F_{I}\to\mathbb{R}^{K-I}.\) It follows by Lemma 3.1 that we may adjust the radii of the fibrations (3), (4) and (5) such that for a small enough radius \(\tau\) in the diagram below (13) the projection \(\dfrac{f_{K-I}}{\|f_{K-I}\|}\) is a (trivial) locally trivial fiber bundle, where \(\Pi_{R}\) is the radial projection. It induces the trivial fibration on the diagonal projection (14) Applying Lemma 3.1 again in the diagram below we get that the horizontal map is a locally trivial fibration over its image; and thus, the diagonal projection is again a locally trivial smooth fibration. (15) Now we may glue the fibrations (13) and (15) along the fibration (14) to get a smooth projection of a locally trivial fiber bundle \[\dfrac{f_{K-I}}{\|f_{K-I}\|}:\partial F_{I}\setminus\partial F_{f}\to S^{K-I-1}. \tag{16}\] Now see that the diffeomorphism of (11) says that the trivialization in the horizontal map of the diagram (9) is given by \[\partial F_{I}\cap f_{K-I}^{-1}(D_{\tau}^{K-I}-\{0\})\approx\partial F_{f}\times(S_{\tau}^{K-I-1}\times(0,\tau]).\] Hence the fiber of the diagonal fibration in the diagram (13) should be diffeomorphic to \(\partial F_{f}\times(0,\tau]\). On the other hand, the diffeomorphism (12) assures that the fiber in the diagonal projection of the diagram (15) should be diffeomorphic to \(F_{f}.\) The fiber of the boundary trivial fibration (14) is clearly diffeomorphic to \(\partial F_{f}.\) Therefore, we conclude that the fiber of fibration (16) must be diffeomorphic to the gluing (using the identity diffeomorphism on the boundary) \(F_{f}\cup_{\partial F_{f}}(\partial F_{f}\times(0,\tau])=F_{f}\setminus\partial F_{f}.\) See Figure (3) and the proof is finished for \(K-I-1\geq 1,\) i.e., \(K-I\geq 2.\) Remark 3.3.: Notice that for \(K-I=2,\) a generalized open-book decomposition is an open-book in the usual sense. We also remark that in Theorem 3.2 we assumed \(K-I\geq 2;\) if we consider the case \(K=I\) then by convention \(f_{K-I}:=f_{0}\equiv 0\) and \(\mathbb{R}^{0}=\{0\},\) so there is nothing to be said. For \(I=K-1,\) the study of the restriction function \(f_{K-I}:\partial F_{I}\to\mathbb{R}\) reduces to that of Proposition 2.3 and the construction above leads to a "fibration" over \(S^{0}=\{-1,1\}.\) ### The case of ICIS holomorphic map germ Let us consider now a holomorphic map germ \(f=(f_{1},\ldots,f_{K}):(\mathbb{C}^{M+K},0)\to(\mathbb{C}^{K},0),K\geq 2.\) For \(1\leq I<K\) consider the complex projections \(\Pi_{I}:(\mathbb{C}^{K},0)\to(\mathbb{C}^{I},0),\Pi_{I}(z_{1},\ldots,z_{K})=(z_{1},\ldots,z_{I}),\) and \(\Pi_{K-I}:(\mathbb{C}^{K},0)\to(\mathbb{C}^{K-I},0),\Pi_{K-I}(z_{1},\ldots,z_{K})=(z_{I+1},\ldots,z_{K}).\) Thus, the compositions as in the diagram (1) become \(f_{I}:=\Pi_{I}\circ f=(f_{1},\ldots,f_{I})\) and \(f_{K-I}:=\Pi_{K-I}\circ f=(f_{I+1},\ldots,f_{K}).\) If we assume further that \(f\) is ICIS, it is known that \(\operatorname{Disc}f:=f(\operatorname{Sing}f)\) is a complex hypersurface in \(\mathbb{C}^{K},0\) and then, for all \(\delta>0\) small enough, the space \(B^{2K}_{\delta}\setminus\operatorname{Disc}f\) is a connected space, where \(B^{2K}_{\delta}\) stands for the open ball in \(\mathbb{C}^{K}\equiv\mathbb{R}^{2K}.\) 
In fact, it was proved by H. Hamm, Le D. Trang and by E. Looijenga in [17] that there exists \(\epsilon_{0}>0\) small enough such that for each \(0<\epsilon\leq\epsilon_{0}\) there exists \(0<\delta\ll\epsilon\) such that the projection map \[f:\overline{B}^{2M+2K}_{\epsilon}\setminus f^{-1}(\operatorname{Disc}f)\to B ^{2K}_{\delta}\setminus\operatorname{Disc}f\] is a smooth locally trivial fibration. Thus, by the connectedness property of the base space the Milnor fiber \(F_{f}\) is uniquely defined, up to diffeomorphism. Figure 3. In addition to that, the ICIS condition is equivalent to the condition \(\operatorname{Sing}f\cap V_{f}=\{0\}.\) Hence, \(f\) is tame according to the Definition 2.1. We may also use the argument of Looijenga in [17] for a "good representative" to guarantee that, up to a linear coordinate change in \(\mathbb{C}^{K},\) it follows that either map germ \(f_{I}\) and \(f_{K-I}\) are ICIS as well. The following result is an interesting application of our Theorem 3.2 and also it provides an extension of [7, Proposition 3.2, p. 481]. See also [33, Chapter 3] and compare with [7, Proposition 3.2, p. 481]. **Proposition 3.4**.: _Let \(f:(\mathbb{C}^{M+K},0)\to(\mathbb{C}^{K},0),K>I\geq 1,\) be a germ of ICIS such that \(f_{I}\) and \(f_{K-I}\) are ICIS as well. Then, the Milnor projection \(\dfrac{f_{K-I}}{\|f_{K-I}\|}:\partial F_{I}\backslash\partial F_{f}\to S^{2K- 2I-1}\) induces a generalized \((2K-2I-1)\)-open-book decomposition on the boundary \(\partial F_{I}\) with binding \(\partial F_{f}.\)_ Proof.: Since \(K>I\) then \(K-I\geq 1\) and the dimension of the sphere on the target space is \(2(K-I)-1\geq 1.\) Hence, the same ideas in the proof of Theorem 3.2 work in this case. Remark 3.5.: 1. We recall that if one has an isolated complex hypersurface singularity germ, then its link has a canonical contact structure which is Stein fillable (see for instance [35]). These statements extend naturally to the setting we envisage in Proposition 3.4. 2. Still the complex ICIS above, the generalized open-book decomposition also extends with the same proof for the pair links \((\mathcal{L}_{I},\mathcal{L}_{f}),\) where \(\mathcal{L}_{I}:=f_{I}^{-1}(0)\cap S_{\epsilon}^{M-1}\) and \(\mathcal{L}_{f}:=f^{-1}(0)\cap S_{\epsilon}^{M-1},\) for all \(\epsilon>0\) smal enough. ## 4. The Euler characteristic formulae Let \(f:(\mathbb{R}^{M},0)\to(\mathbb{R}^{K},0),\)\(M>K\geq 2,\) be an analytic map-germ. 
We will assume along the section that \(f\) is tame and \(\operatorname{Disc}f=\{0\}.\) Thus, \(\dim F_{f}=M-K\) and \(\dim\partial F_{f}=M-K-1.\) Denote by \(\widehat{F_{f}}=F_{f}\cup_{\partial F_{f}}F_{f}\) the closed manifold built by gluing two copies of \(F_{f}\) along the boundary \(\partial F_{f}\) using the identity diffeomorphism on \(\partial F_{f}.\) By the additive property of the Euler characteristic we have that \(\chi(\widehat{F_{f}})=2\chi(F_{f})-\chi(\partial F_{f}).\) Hence \[\chi(\partial F_{f})=\begin{cases}0,&\text{if M-K is even}.\\ 2\chi(F_{f}),&\text{if M-K is odd}.\end{cases} \tag{17}\] Applying again the additive Euler characteristic to the diffeomorphism (9) we get \[\chi(\partial F_{I})=\chi(\partial F_{f}\times D^{K-I})+\chi(F_{f}\times S^{ K-I-1})-\chi(\partial F_{f}\times S^{K-I-1})=\] \[=\chi(\partial F_{f})+\chi(F_{f}).\chi(S^{K-I-1})-\chi(\partial F_{f}).\chi( S^{K-I-1}).\] Thus, together with (17) it reduces to \[\chi(\partial F_{I})=\begin{cases}\chi(F_{f}).\chi(S^{K-I-1}),&\text{if M-K is even.}\\ \chi(F_{f}).\chi(S^{K-I}),&\text{if M-K is odd.}\end{cases} \tag{18}\] We may consider the convention \(\chi(S^{-1})=\chi(\emptyset)=0\) and the fact that for the \(0-\)dimensional sphere \(\chi(S^{0})=\chi(\{-1,1\})=2.\) Then all the above discussion, including the special cases of \(I=1\) and \(I=K\) may be summarized as below. By convention, for \(I=1\) we denote \(F_{1}:=F_{f}.\) **Theorem 4.1**.: _Let \(f:(\mathbb{R}^{M},0)\rightarrow(\mathbb{R}^{K},0),\)\(M>K\geq 2,\) be an analytic map-germ, tame with \(\operatorname{Disc}f=\{0\}.\) Then:_ 1. \(\chi(\partial F_{I})=\chi(F_{f}).\chi(S^{M-I-1}),\) _for any_ \(1\leq I\leq K.\)__ 2. **Le-Greuel's type formula:**__\(\chi(\partial F_{I+1})-\chi(\partial F_{I})=2(-1)^{M-I}\chi(F_{f}),\) _for any_ \(1\leq I<K.\)__ 3. \(\chi(\partial F_{I})=\chi(\partial F_{I+2}),\) _for any_ \(1\leq I<K-1.\)__ Proof.: The item 1) follows from the identity (18). To prove the item 2) just exchange \(I\) by \(I+1\) in the item 1) and take the difference. The item 3) is immediate from item 1). ### Relating the Euler characteristic of the links Consider again \(f:(\mathbb{R}^{M},0)\rightarrow(\mathbb{R}^{K},0),\)\(M>K\geq 2,\) a tame polynomial map-germ with \(\operatorname{Disc}f=\{0\}.\) For each \(1<I\leq K\) the map \(f_{I}:(\mathbb{R}^{M},0)\rightarrow(\mathbb{R}^{I},0)\) admits the Milnor tube fibrations (4) and by restriction it induces the fibrations \(f_{I}:B_{\epsilon}^{M}\cap f_{I}^{-1}(S_{\eta}^{I-1})\to S_{\eta}^{I-1}\) with Milnor fibers \(F_{I}\) and the fibration \(f_{I}:S_{\epsilon}^{M-1}\cap f_{I}^{-1}(S_{\eta}^{I-1})\to S_{\eta}^{I-1}\) with fiber \(\partial F_{I}.\) Denote by \(T_{\eta}(F_{I}):=B_{\epsilon}^{M}\cap f_{I}^{-1}(S_{\eta}^{I})\) the Milnor tube of \(f_{I}\) and by \(\mathcal{L}_{I}:=f_{I}^{-1}(0)\cap S_{\epsilon}^{M-1}\) the respective link. We may consider \(\eta\) small enough such that the sphere \(S_{\epsilon}^{M-1}\) is homeomorphic to the gluing \(T_{\eta}(f_{I})\cup_{\partial T_{\eta}(f_{I})}N_{\eta}(f_{I}),\) where \(\mathcal{L}_{I}\subset N_{\eta}(f_{I}):=f_{I}^{-1}(B_{\eta}^{I})\) is a semi-algebraic neighbourhood that retract to the link \(\mathcal{L}_{I},\) as proved by A. Durfee in [13]. 
Thus, \[\chi(S_{\epsilon}^{M-1})=\chi(T_{\eta}(f_{I}))+\chi(N_{\eta}(f_{I}))-\chi( \partial T_{\eta}(f_{I}))=\chi(F_{I})\chi(S^{I-1})+\chi(\mathcal{L}_{I})- \chi(\partial F_{I})\chi(S^{I-1}).\] Hence, \[\chi(\mathcal{L}_{I})=\chi(S^{M-1})-\chi(F_{f})\chi(S^{I-1})+\chi(\partial F_ {I})\chi(S^{I-1}) \tag{19}\] **Lemma 4.2**.: _The following holds true:_ \[\chi(\mathcal{L}_{I})=\chi(S^{M-1})+(-1)^{M-I-1}\chi(F_{f})\chi(S^{I-1}).\] Proof.: The proof follows from Proposition 18 and equation (19). The next result provides in particular a second proof of [14, Proposition 7.1, p. 4861]. **Proposition 4.3**.: _Let \(f:(\mathbb{R}^{M},0)\rightarrow(\mathbb{R}^{K},0),\)\(M>K\geq 2,\) be a tame polynomial map-germ with \(\operatorname{Disc}f=\{0\}.\) Then:_ 1. \(\chi(\mathcal{L}_{I+1})-\chi(\mathcal{L}_{I})=2(-1)^{M-I}\chi(F_{f}),\) _for each_ \(1\leq I<K.\)__ 2. \(\chi(\mathcal{L}_{I+2})=\chi(\mathcal{L}_{I}),\) _for each_ \(1\leq I<K-1.\)__ Proof.: In the equation (19) just exchange \(I\) by \(I+1\) and take the difference. Then we have \[\chi(\mathcal{L}_{I+1})-\chi(\mathcal{L}_{I})=\chi(F_{f})\chi(S^{I-1})-\chi( \partial F_{I})\chi(S^{I-1})-\chi(F_{f})\chi(S^{I})+\chi(\partial F_{I+1}) \chi(S^{I}).\] Now we may apply Proposition 4.1, item 1), to get \[\chi(\mathcal{L}_{I+1})-\chi(\mathcal{L}_{I})=(-1)^{M-I}\chi(F_{f})\chi(S^{I- 1})+(-1)^{M-I}\chi(F_{f})\chi(S^{I})=2(-1)^{M-I}\chi(F_{f}).\] This ends the proof of item 1). Item 2) is trivial. Remark 4.4.: 1. We point out that the Le-Greuel type formula obtained in the Theorem 4.1, item (2), is somehow similar to that obtained in [11, Theorem 1, p. 3], but with the difference that in [11] the authors worked with the Euler number of the Milnor fibers, instead of its boundary. 2. In view of the Proposition 4.1 and the Proposition 4.3, we can see that for all \(1\leq I<K\) we have \(\chi(\partial F_{I+1})-\chi(\partial F_{I})=\chi(\mathcal{L}_{I+1})-\chi( \mathcal{L}_{I}).\) Thus, \(\chi(\partial F_{I+1})-\chi(\mathcal{L}_{I+1})=\chi(\partial F_{I})-\chi( \mathcal{L}_{I})=\cdots=\chi(\partial F_{2})-\chi(\mathcal{L}_{2})=\chi( \partial F_{1})-\chi(\mathcal{L}_{1}).\) Hence, it suggests the following definition. **Definition 4.5**.: Let \(f:(\mathbb{R}^{M},0)\rightarrow(\mathbb{R}^{K},0),\)\(M>K\geq 2,\) be a tame polynomial map-germ with \(\operatorname{Disc}f=\{0\}\). The degree of _degeneracy on the Milnor boundary of \(f\)_ is defined as the number \[DB(f):=\chi(\partial F_{1})-\chi(\mathcal{L}_{1})\,.\] Clearly, if \(f\) has an isolated singularity at the origin one has that \(DB(f)=0.\) ## 5. On the boundaries of the Milnor fibers and the links on each stage I. In the real setting we do not expect to prove theorems regarding the degree of connectivity of the Milnor fibers, its boundaries nor the respective links of \(f_{I},\) on each stage \(I.\) Notwithstanding, in the case where the dimension \(M\) of the source space is even, for all \(I,\)\(1\leq I\leq K,\) we may write \(\chi(\partial F_{I})=\chi(S^{I+1})\chi(F_{I})\) and as an application of Lemma 4.2 we conclude that \(\chi(\partial F_{I})=\chi(\mathcal{L}_{I}),\) and hence \(DB(f)=0.\) However, if the source space \(M\) is odd-dimensional some interesting relations between the boundaries of the Milnor fiber, the links of the singularities and the Milnor fibers on the Milnor tubes come up on each stage \(I,\) and it provides a way to distinguish between the homotopy type of the Milnor boundary and the link of the singularities \(f_{I}\) for each \(1\leq I\leq K,\) as described below. 
We first recall that for odd dimension \(M\geq 2\) the equation (19) becomes \[(*):\ \chi(\mathcal{L}_{I})=2-\chi(F_{f})\chi(S^{I-1})+\chi(\partial F_{I})\chi(S^{I-1}).\] This allows us to prove the result below, whose proof we leave as an exercise. **Lemma 5.1**.: _Let \(M\) be odd and \(I\) such that \(M>K\geq I\geq 2.\) Then, for each \(I\) the following conditions hold for the links \(\mathcal{L}_{I}\) and the boundaries \(\partial F_{I}\) of the Milnor fibers \(F_{I}:\)_ 1. _if_ \(I\) _is even then_ \(\chi(\mathcal{L}_{I})=2,\) _by equation_ \((*).\) _Moreover, since_ \(\dim F_{I}=M-I\) _is odd then_ \(\chi(\partial F_{I})=2\chi(F_{I})=2\chi(F_{f});\)__ 2. _if_ \(I\) _is odd then_ \(\chi(\mathcal{L}_{I})=2-2\chi(F_{f}),\) _by equation_ \((*).\) _Moreover, since_ \(\dim\partial F_{I}=M-I-1\) _is odd then_ \(\chi(\partial F_{I})=0.\)__ Now we are ready to state the main result of this section. **Theorem 5.2**.: _Consider \(M\) odd, \(M>K\geq I\geq 2.\) Let \(f:(\mathbb{R}^{M},0)\to(\mathbb{R}^{K},0)\) and \(f_{I}:(\mathbb{R}^{M},0)\to(\mathbb{R}^{I},0)\) be real analytic map germs as in the diagram (1). Then, \(\chi(F_{f})=1\) if and only if \(\chi(\partial F_{I})=\chi(\mathcal{L}_{I})\) in some stage \(I.\) Moreover, if the last equality holds true in any stage \(I\) it will also hold true in all stages \(I,\)\(2\leq I\leq K<M.\)_ Proof.: The proof follows from Lemma 5.1. For the "if" case, we can see that \(\chi(F_{I})=\chi(F_{f})=1\) implies that \(\chi(\partial F_{I})=\chi(\mathcal{L}_{I})\) in either case of Lemma 5.1. Moreover, the equality in some stage \(I\) clearly implies it in all stages \(I,\)\(2\leq I\leq K<M.\) For the "only if" case, if we suppose that in some stage (even or odd) \(I\) the equality \(\chi(\partial F_{I})=\chi(\mathcal{L}_{I})\) holds true, then again by Lemma 5.1 we conclude that \(2\chi(F_{I})=2.\) Therefore, \(\chi(F_{f})=1.\) The next result provides a natural class of map germs where one of the two conditions above holds true. Besides that, it also provides another proof of [3, Proposition 3, item ii), p. 71]. **Corollary 5.3**.: _Let \(f:(\mathbb{R}^{M},0)\to(\mathbb{R}^{K},0)\), with \(M>K\geq 2\) and \(M\) odd, be a real analytic map-germ with an isolated critical point at the origin. Then, for each \(I,\)\(M>K\geq I\geq 1,\) we have that \(\chi(F_{I})=1.\)_ Proof.: The proof might be left as an exercise, but we will sketch it below for the sake of convenience. Since \(f\) has an isolated singular point at the origin, one may apply the diagram (2) to get that \(\operatorname{Sing}\left(f_{I}\right)\subseteq\{0\}\) for each fixed \(I.\) It is enough to consider the case \(\operatorname{Sing}\left(f_{I}\right)=\{0\},\) because in the case \(\operatorname{Sing}\left(f_{I}\right)=\emptyset\) the result follows as an easy application of the Inverse Function Theorem version for map germs. Now, if we assume further that \(M\) is odd then for each \(I\) the link \(\mathcal{L}_{I}\) must be non-empty, and it is in fact a smooth manifold diffeomorphic to \(\partial F_{I}\), and thus \(\chi(\partial F_{I})=\chi(\mathcal{L}_{I}).\) Therefore, one may apply Theorem 5.2 and conclude that \(\chi(F_{I})=1,\) for each \(1\leq I\leq K<M.\) Remark 5.4.: For the existence of map germs \((\mathbb{R}^{M},0)\to(\mathbb{R}^{K},0),\) \(M\) odd, \(M>K\geq 2,\) with an isolated critical point at the origin, the reader may consult [4, section 5.2, p. 101]. **Corollary 5.5**.: _Let \(M>K\geq I\geq 1\) and \(f\) be as in Theorem 5.2. 
If \(\chi(F_{f})\neq 1\) then at all stages \(I,\) the Milnor boundary \(\partial F_{I}\) and the respective link \(\mathcal{L}_{I}\) of \(f_{I}\) cannot be homotopically equivalent._ Proof.: It is now trivial because \(\chi(F_{I})=\chi(F_{f})\neq 1\), on each stage \(I.\) Therefore, by Theorem 5.2 the respectives Milnor boundary \(\partial F_{I}\) and the link \(\mathcal{L}_{I}\) can not be homotopically equivalent. The next example shows that for odd-dimension \(M\) it is easy to construct a family of map germ where the Euler characteristic of the Milnor fiber is not equal to one. Example 5.6.: Let \(f:(\mathbb{R}^{M},0)\rightarrow(\mathbb{R}^{K},0)\) be an analytic map germ \(M>K\geq 2\), with \(\operatorname{Sing}f=\{0\}\), and \(g:(\mathbb{R}^{K},0)\rightarrow(\mathbb{R}^{K},0)\) be an analytic ramified covering map branched along \(\{0\}\) with t-sheets, \(t\geq 2.\) Then, for all fixed \(z\in(\mathbb{R}^{K},0)\), \(0<\|z\|\ll 1\), and all \(x\in g^{-1}(z)\) the map \(g\) is a local diffeomorphism and the fiber \(g^{-1}(z)\) consists of a finite number of points, and we set \(t:=\#g^{-1}(z).\) Thus, \(\operatorname{Sing}g=\{0\}\) and the composition map germ \(h=g\circ f:(\mathbb{R}^{M},0)\rightarrow(\mathbb{R}^{K},0)\) satisfies that \(\operatorname{Sing}h=f^{-1}(0)\subseteq V_{h}.\) Since \(f\) is tame, it is not hard to see that \(h\) is tame as well. Then the map \(h\) admits a Milnor tube fibration with Milnor fiber \(F_{h}=\sqcup_{i=1}^{t}F_{f}\) (t-disjoint copies of \(F_{f}\)). Therefore we have that \(\chi(F_{h})=t.\chi(F_{f})=t\geq 2\), where we use that \(\chi(F_{f})=1\) by Corollary 5.3.
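To make the Euler characteristic formulae above concrete, here is a short consistency check (our own illustrative computation, not taken from the paper) of Theorem 4.1 for a hypothetical tame map-germ with \(\operatorname{Disc}f=\{0\},\) \(M=7\) and \(K=3,\) so that \(M-I-1=6-I.\) Item 1) gives \[\chi(\partial F_{1})=\chi(F_{f})\chi(S^{5})=0,\qquad\chi(\partial F_{2})=\chi(F_{f})\chi(S^{4})=2\chi(F_{f}),\qquad\chi(\partial F_{3})=\chi(F_{f})\chi(S^{3})=0;\] item 2) is verified since \[\chi(\partial F_{2})-\chi(\partial F_{1})=2\chi(F_{f})=2(-1)^{7-1}\chi(F_{f})\quad\text{and}\quad\chi(\partial F_{3})-\chi(\partial F_{2})=-2\chi(F_{f})=2(-1)^{7-2}\chi(F_{f});\] and item 3) holds because \(\chi(\partial F_{1})=\chi(\partial F_{3})=0.\) These values also agree with Lemma 5.1 for odd \(M\): the even stage \(I=2\) gives \(\chi(\partial F_{2})=2\chi(F_{f}),\) while the odd stage \(I=3\) gives \(\chi(\partial F_{3})=0.\)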
2308.16400
Channel Estimation for XL-MIMO Systems with Polar-Domain Multi-Scale Residual Dense Network
Extremely large-scale multiple-input multiple-output (XL-MIMO) is a promising technique to enable versatile applications for future wireless communications. To realize the huge potential performance gain, accurate channel state information is a fundamental technical prerequisite. In conventional massive MIMO, the channel is often modeled by the far-field planar-wavefront with rich sparsity in the angular domain that facilitates the design of low-complexity channel estimation. However, this sparsity is not conspicuous in XL-MIMO systems due to the non-negligible near-field spherical-wavefront. To address the inherent performance loss of the angular-domain channel estimation schemes, we first propose the polar-domain multiple residual dense network (P-MRDN) for XL-MIMO systems based on the polar-domain sparsity of the near-field channel by improving the existing MRDN scheme. Furthermore, a polar-domain multi-scale residual dense network (P-MSRDN) is designed to improve the channel estimation accuracy. Finally, simulation results reveal the superior performance of the proposed schemes compared with existing benchmark schemes and the minimal influence of the channel sparsity on the proposed schemes.
Hao Lei, Jiayi Zhang, Huahua Xiao, Xiaodan Zhang, Bo Ai, Derrick Wing Kwan Ng
2023-08-31T02:11:08Z
http://arxiv.org/abs/2308.16400v2
# Channel Estimation for XL-MIMO Systems with Polar-Domain Multi-Scale Residual Dense Network ###### Abstract Extremely large-scale multiple-input multiple-output (XL-MIMO) is a promising technique to enable versatile applications for future wireless communications. In conventional massive MIMO, the channel is often modeled by the far-field planar-wavefront with rich sparsity in the angular domain that facilitates the design of low-complexity channel estimation. However, this sparsity is not conspicuous in XL-MIMO systems due to the non-negligible near-field spherical-wavefront. To address the inherent performance loss of the angular-domain channel estimation schemes, we first propose the polar-domain multiple residual dense network (P-MRDN) for XL-MIMO systems based on the polar-domain sparsity of the near-field channel by improving the existing MRDN scheme. Furthermore, a polar-domain multi-scale residual dense network (P-MSRDN) is designed to improve the channel estimation accuracy. Finally, simulation results reveal the superior performance of the proposed schemes compared with existing benchmark schemes and the minimal influence of the channel sparsity on the proposed schemes. Near-field communication, XL-MIMO, channel estimation, deep learning. ## I Introduction To significantly improve the required ultra-low access latency and ultra-high data rate of the sixth-generation (6G) wireless communications, extremely large-scale multiple-input multiple-output (XL-MIMO) has been considered as one of the promising techniques [1, 2]. The general idea of XL-MIMO is to deploy another order-of-magnitude antenna number (e.g., \(512\) or more) at the base station (BS), compared with only \(64\) or \(128\) antennas in those conventional massive MIMO (mMIMO) systems [1, 2]. In this way, XL-MIMO can provide higher spatial degrees-of-freedom (DoF) [1] and spectral efficiency (SE) [3] compared with mMIMO systems. To achieve the desired performance in XL-MIMO networks in practice, several challenges must be addressed, e.g., accurate channel modeling, low-complexity signal processing, spatial non-stationary characteristics, etc. Among these challenges, channel estimation (CE) is a critical one as accurate channel state information (CSI) is a fundamental requirement for effective signal processing. First of all, the channel between the BS and its user equipment (UE) is with an extremely high dimensionality due to the deployment of large-scale antennas. Secondly, the operating region of XL-MIMO shifts from far-field to near-field [4], where the boundary between near-field and far-field is defined by the Rayleigh distance. With a high carrier frequency, the Rayleigh distance can be extended to hectometre-range or even longer, [4, 5], which means that XL-MIMO channels should be modeled by near-field spherical waves. The difference in electromagnetic (EM) characteristics results in different sparsity properties. Thus, the direct application of angular-domain CE schemes will inevitably suffer from a degradation in normalized mean square error (NMSE) performance for XL-MIMO [6, 7]. Recently, several works have focused on CE problems in XL-MIMO systems [6]-[11]. For instance, the authors in [6] proposed a polar-domain representation for the XL-MIMO channel that fully captures the near-field spherical wave characteristics. It is noteworthy that compressed sensing (CS) algorithms can be exploited to perform CE in XL-MIMO systems with acceptable NMSE performance based on the polar-domain channel sparsity [6]-[8]. 
Additionally, CS-based CE schemes have shown reasonable performance for XL-MIMO networks with spatial non-stationary properties [9, 10]. Moreover, a reduced-subspace least-squares (RS-LS) CE scheme was proposed based on the least-squares (LS) estimator by utilizing the compact eigenvalue decomposition of the spatial correlation matrix [11]. Although various efforts have been devoted to CE, satisfactory estimation accuracy is yet to be achieved due to the limited available resources. In recent years, deep learning (DL)-based CE algorithms have gained significant momentum. For instance, in reconfigurable intelligent surface (RIS)-aided mMIMO systems, a multiple residual dense network (MRDN) was designed for CE with high estimation accuracy by exploiting the angular-domain channel sparsity [12]. Also, the authors in [13] proposed a U-shaped multilayer perceptron (U-MLP) network to estimate the near-field channel with spatial non-stationary properties by capturing the long-range dependency of channel features. In addition, deep learning networks have been exploited to extract the parameters of near-field channels, which can be adopted to reconstruct the channel matrices [14], [15]. Moreover, the authors in [16] formulated a near-field CE problem as a compressed sensing problem and then proposed a sparsifying dictionary learning-learning iterative shrinkage and thresholding algorithm (SDL-LISTA) by formulating the sparsifying dictionary as a neural network layer. Despite these advances, there is still a lack of sufficient investigations to reveal the impact of inherent near-field channel sparsity in different domains on DL-based CE schemes. In this paper, we propose a polar-domain multiple residual dense network (P-MRDN)-based CE scheme to explicitly exploit the polar-domain channel sparsity in XL-MIMO systems and evaluate the NMSE performance of the MRDN and P-MRDN-based CE schemes. However, the existing DL-based CE schemes for the near-field did not consider the multi-scale feature, which generally limits their accuracy. To address this issue, inspired by [17], the notion of atrous spatial pyramid pooling (ASPP), which adopts parallel atrous convolution layers with different rates to capture the multi-scale information [18], is incorporated into the proposed P-MRDN to further improve the CE accuracy. It is worth noting that our proposed schemes are expected to outperform the state-of-the-art CE schemes in NMSE due to the tailor-made approach. The main contributions can be summarized as follows. * We propose a P-MRDN-based CE scheme for XL-MIMO systems by exploiting the polar-domain channel sparsity. More importantly, we reveal the impact of the channel sparsity in different domains on DL-based CE schemes. * Atrous spatial pyramid pooling-based residual dense network (ASPP-RDN) is also proposed by exploiting ASPP as a parallel branch of RDN. Then, a polar-domain multi-scale residual dense network (P-MSRDN)-based CE scheme is proposed to further improve the estimation accuracy based on ASPP-RDN. * Numerical results demonstrate that the proposed schemes can significantly outperform existing state-of-the-art CE schemes1. Footnote 1: Simulation codes are provided to reproduce the results in this paper: [https://github.com/BJTU-MIMO](https://github.com/BJTU-MIMO). _Notation_: Boldface lowercase letters \(\mathbf{a}\) and boldface uppercase letters \(\mathbf{A}\) denote column vectors and matrices, respectively. Transpose is denoted by \((\cdot)^{T}\). 
We denote the \(M\times N\) complex-valued matrix and \(M\times N\) real-valued matrix by \(\mathbb{C}^{M\times N}\) and \(\mathbb{R}^{M\times N}\), respectively. We adopt \(\mathbb{E}\{\cdot\}\) to denote the expectation operator. The circularly symmetric complex Gaussian distribution with covariance \(\sigma^{2}\) and the uniform distribution between \(a\) and \(b\) are denoted by \(\mathcal{CN}(0,\sigma^{2})\) and \(\mathcal{U}(a,b)\), respectively. The Euclidean norm is denoted by \(\|\cdot\|\). ## II System Model We consider an uplink time division duplexing (TDD) XL-MIMO system. As illustrated in Fig. 1, we consider a uniform linear array (ULA)-based BS with \(N\) antennas and one single-antenna UE for the XL-MIMO system2. The antenna spacing is denoted by \(\Delta=\lambda/2\), where \(\lambda\) is the carrier wavelength. We denote the channel between the UE and the BS by \(\mathbf{h}\in\mathbb{C}^{N\times 1}\), where \(N\) is the number of antennas at the BS. By assuming that the UE sends a predefined pilot sequence, set as 1 for simplicity, we can represent the received signal \(\mathbf{y}\in\mathbb{C}^{N\times 1}\) at the BS as Footnote 2: As for multi-UE scenarios, the channels between the BS and its different UEs can be modeled separately by the spherical-wave assumption. \[\mathbf{y}=\sqrt{p}\mathbf{h}+\mathbf{n}, \tag{1}\] where \(p\) is the transmit power of the UE, \(\mathbf{n}\in\mathbb{C}^{N\times 1}\) denotes the receiver noise with independent \(\mathcal{CN}(0,\sigma^{2})\) entries, and \(\sigma^{2}\) denotes the noise power. To further unveil the channel sparsity in XL-MIMO, the channel modelings for far-field and near-field are reviewed as follows: \(\bullet\)_Far-field channel modeling_: In conventional far-field region, the channel is modeled by the planar wave, which can be expressed as \[\mathbf{h}^{\rm far-field}=\sqrt{\frac{N}{L}}\sum_{l=1}^{L}\beta_{l}e^{-jkr_ {l}}\mathbf{a}(\theta_{l}), \tag{2}\] where \(k=\frac{2\pi}{\lambda}\) is the wave number. We assume that there is one line-of-sight (LoS) path and \(L-1\) non-line-of-sight (NLoS) paths [6]-[8]. Moreover, we denote the angle, the distance, and the complex path gain of the \(l\)-th path by \(\theta_{l}=\sin\upsilon_{l}\), \(r_{l}\), and \(\beta_{l}\), respectively. The steering vector \(\mathbf{a}(\theta_{l})\) can be represented as \[\mathbf{a}(\theta_{l})=\frac{1}{\sqrt{N}}\Big{[}1,e^{j\pi\theta_{l}},\cdots,e^ {j(N-1)\pi\theta_{l}}\Big{]}^{T}. \tag{3}\] More interestingly, to exploit the angular-domain channel sparsity, the corresponding angular-domain representation \(\mathbf{h}_{\rm A}^{\rm far-field}\) can be derived from the channel \(\mathbf{h}^{\rm far-field}\) as [6] \[\mathbf{h}^{\rm far-field}=\mathbf{F}\mathbf{h}_{\rm A}^{\rm far-field}, \tag{4}\] where \(\mathbf{F}=[\mathbf{a}(\theta_{0}),\cdots,\mathbf{a}(\theta_{N-1})]\in \mathbb{C}^{N\times N}\) is the Fourier transform matrix with \(\theta_{n}=\frac{2n-N+1}{N},n=0,1,\cdots,N-1\). Based on the angular-domain sparsity, several CS-based CE schemes have been proposed for far-field applications, e.g., [19], [20]. However, the angular-domain sparsity is not remarkable in XL-MIMO. The reason is that the Rayleigh distance, \(Z=2D^{2}/\lambda\), can be in the range of hectometre or longer in XL-MIMO, where \(D\) is the array aperture. For instance, the Rayleigh distance is around \(67\) meters with the array aperture of \(1\) meters and the carrier frequency of \(10\) GHz. 
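To make the far-field model of (2)-(4) and the Rayleigh-distance argument more concrete, the following NumPy sketch may help. It is an illustrative aid only, not the simulation code released with the paper; the random path gains and angles are placeholders, and the path phases \(e^{-jkr_{l}}\) are absorbed into the gains for brevity.

```python
import numpy as np

def steering_vector(theta, N):
    """Far-field ULA steering vector a(theta) of Eq. (3); theta = sin(upsilon)."""
    n = np.arange(N)
    return np.exp(1j * np.pi * n * theta) / np.sqrt(N)

def dft_matrix(N):
    """Transform matrix F whose columns are a(theta_n), theta_n = (2n - N + 1)/N."""
    thetas = (2 * np.arange(N) - N + 1) / N
    return np.column_stack([steering_vector(t, N) for t in thetas])

def rayleigh_distance(aperture_m, wavelength_m):
    """Z = 2 D^2 / lambda: boundary between the near-field and far-field regions."""
    return 2 * aperture_m ** 2 / wavelength_m

N, lam = 128, 0.03                       # 10 GHz carrier -> lambda = 3 cm
D = (N - 1) * lam / 2                    # aperture of a half-wavelength-spaced ULA
print(rayleigh_distance(1.0, lam))       # ~66.7 m for the 1-m aperture example
print(rayleigh_distance(D, lam))         # considerably larger for a 128-element ULA

# A far-field channel with L paths and its (sparse) angular-domain representation
L = 6
betas = (np.random.randn(L) + 1j * np.random.randn(L)) / np.sqrt(2)   # phases folded in
thetas = np.random.uniform(-1, 1, L)
h = np.sqrt(N / L) * sum(b * steering_vector(t, N) for b, t in zip(betas, thetas))
F = dft_matrix(N)
h_A = np.linalg.solve(F, h)              # angular-domain coefficients, h = F h_A
```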
Therefore, we should consider the scenario in which the UE is located in the near-field of the XL-MIMO system. \(\bullet\)_Near-field channel modeling_: Based on the exact spherical wave, the near-field channel can be expressed as [6] \[\mathbf{h}^{\rm near-field}=\sqrt{\frac{N}{L}}\sum_{l=1}^{L}\beta_{l}e^{-jkr_{l}}\mathbf{b}(\theta_{l},r_{l}). \tag{5}\] Fig. 1: Illustration of the XL-MIMO system with a uniform linear array (ULA)-based BS, where the UE and the scatterer are located in the near-field. The figure depicts the angle \(\upsilon_{l}\) and the distance \(r_{l}^{(n)}\) between the \(n\)-th BS antenna, \(\forall n\in\{1,\cdots,N\}\), and the scatterer or the UE for the \(l\)-th path, where \(\Delta\) represents the antenna spacing. It is worth noting that the near-field steering vector \(\mathbf{b}(\theta_{l},r_{l})\) is expressed as \[\mathbf{b}(\theta_{l},r_{l})\!=\!\frac{1}{\sqrt{N}}\bigg{[}1,e^{-jk\left(r_{l}^{(2)}-r_{l}^{(1)}\right)},\cdots,e^{-jk\left(r_{l}^{(N)}-r_{l}^{(1)}\right)}\bigg{]}^{T}\!. \tag{6}\] We denote the distance between the \(n\)-th BS antenna and the UE or scatterer by \(r_{l}^{(n)}\), as shown in Fig. 1. Without loss of generality, we set the coordinate of the \(n\)-th antenna as \((0,n\Delta)\) such that \(r_{l}^{(n)}=\sqrt{\left(r_{l}^{(1)}\sqrt{1-\theta_{l}^{2}}-0\right)^{2}+\left(r_{l}^{(1)}\theta_{l}-n\Delta\right)^{2}}=\sqrt{\left(r_{l}^{(1)}\right)^{2}+n^{2}\Delta^{2}-2r_{l}^{(1)}\theta_{l}n\Delta}\), where \(\theta_{l}=\sin\upsilon_{l}\in[-1,1]\) denotes the spatial angle. Note that the angular field distribution is independent of the distance under the planar-wave assumption in the far-field, as shown in (3). By contrast, the EM waves in XL-MIMO systems should be modeled by the spherical-wave assumption, showing a distance-dependent angular field distribution, as shown in (6). To fully leverage the near-field channel sparsity, a polar-domain transform matrix \(\mathbf{D}\) was proposed in [6], which is denoted by \(\mathbf{D}=[\mathbf{D}_{Q_{1}},\mathbf{D}_{Q_{2}},\cdots,\mathbf{D}_{Q_{N}}]\) with \(\mathbf{D}_{Q_{n}}=[\mathbf{b}\left(\theta_{n},r_{n}^{1}\right),\cdots,\mathbf{b}\left(\theta_{n},r_{n}^{Q_{n}}\right)]\), where \(Q_{n}\) denotes the number of sampled distances at the sampled angle \(\theta_{n}\). Besides, we denote the total number of sampled distances by \(Q=\sum_{n=1}^{N}Q_{n}\). Based on the matrix \(\mathbf{D}\in\mathbb{C}^{N\times Q}\), the near-field channel is given by [6] \[\mathbf{h}^{\rm near-field}=\mathbf{D}\mathbf{h}_{\rm P}^{\rm near-field}. \tag{7}\] Similar to the angular-domain sparsity of far-field channels, near-field channels exhibit certain sparsity in the polar domain. Thus, CE can be performed with acceptable NMSE performance based on the polar-domain channel sparsity [6]-[8]. ## III Proposed Polar-Domain Channel Estimation In this section, we introduce the MRDN, P-MRDN, and P-MSRDN for CE in XL-MIMO systems. The MRDN architecture is introduced as the fundamental component of our proposed CE schemes. Then, we propose the P-MRDN, which can effectively exploit the polar-domain channel sparsity in XL-MIMO. In addition, we highlight the differences between the MRDN and P-MRDN caused by the channel sparsity in different domains. Finally, the proposed P-MSRDN combines the MRDN and the ASPP to further improve the CE accuracy. ### _MRDN Architecture_ As shown in Fig. 2 (a), the residual dense network (RDN) and the convolutional block attention module (CBAM) [21] are the building modules of the MRDN [12].
#### III-A1 Input Layer By assuming that the real and imaginary parts of the signal \(\mathbf{y}\in\mathbb{C}^{N\times 1}\) are independent, we structure them into a matrix \(\mathbf{Y}\in\mathbb{R}^{N\times 2}\). Thus, the matrix \(\mathbf{Y}\) can be treated as a two-dimensional image and serves as the input of our schemes. #### III-A2 Basic Structure We denote _Convolution_ and _Rectified Linear Units_ by "_Conv_" and "_ReLU_", respectively. The "_Conv_" and "_ReLU_" layer functions are denoted by \(*\) and max, respectively. Then, as shown in Fig. 3 (a), the model of the \(n\)-th residual block is a combination of two cascaded functions: \[\mathbf{r}_{-1}=W_{n,r}*\mathbf{x}+b_{n,r}, \tag{8}\] \[\mathbf{r}_{0}=\max(0,\mathbf{r}_{-1}), \tag{9}\] where \(\{W_{n,r},b_{n,r}\}\), \(n\in\{1,2,\cdots,M\}\), denote the weight and bias matrices, respectively. As shown in Fig. 3 (a), \(M\) is the number of layers of the RDN. We denote the input and output of the residual block by \(\mathbf{x}\) and \(\mathbf{r}\), respectively. #### III-A3 RDN Structure As shown in Fig. 2 (a), for the \(n\)-th residual block, with \(f_{n}\) denoting the recursion function, the recurrence relation is \(\mathcal{F}_{1}=f_{1}(\mathbf{x})\) and \[\mathcal{F}_{n}=f_{n}(\mathcal{F}_{n-1},\cdots,\mathcal{F}_{1},\mathbf{x}),\forall n\in\{2,\cdots,M\}\,. \tag{10}\] Fig. 3: Comparison between (a) the RDN and CBAM system models and (b) the ASPP-RDN system model. The most significant modification in the ASPP-RDN is that we take advantage of the ASPP as a parallel branch of the RDN. The main idea of the improvement is to integrate multi-scale features of its input and serve them as one of the inputs of the final "_Conv_" layer in the RDN. Fig. 2: Comparison between (a) the MRDN-based channel estimation scheme and (b) the P-MRDN-based channel estimation scheme. The significant difference is the FFT in the MRDN and the polar-domain transform (PT) in the P-MRDN. #### III-A4 CBAM The recurrence relation of the CBAM is \[\mathbf{r}_{-1}=W_{-1,c}*\mathbf{x}+b_{-1,c}, \tag{11}\] \[\mathbf{r}_{0}=\max(0,\mathbf{r}_{-1}), \tag{12}\] \[\mathbf{r}_{1}=W_{1,c}*\mathbf{r}_{0}+b_{1,c}, \tag{13}\] where \(\{W_{-1,c},b_{-1,c},W_{1,c},b_{1,c}\}\) are the weight and bias matrices, respectively. The input and output of the CBAM are denoted by \(\mathbf{x}\) and \(\mathbf{r}_{1}\), respectively. #### III-A5 MRDN Structure Assuming that the MRDN consists of \(B\) RDNs and a CBAM, the recurrence relation of the MRDN is \[\mathcal{F}(\mathbf{x})=\mathcal{F}_{M,B}\star\mathcal{F}_{M,B-1}\star\cdots\star\mathcal{F}_{M,1}(\mathbf{x}), \tag{14}\] \[\mathcal{M}(\mathbf{x})=\mathcal{F}\star C(\mathbf{x}), \tag{15}\] where the operator \(\star\) denotes function composition and \(C(\mathbf{x})\) denotes the recursion function of the CBAM. #### III-A6 Output Layer The matrix \(\mathbf{\hat{H}}\in\mathbb{R}^{N\times 2}\) can produce the estimated channel \(\mathbf{\hat{h}}\in\mathbb{C}^{N\times 1}\) by reversing the combining in the input layer. Besides, the loss function is given by \[\mathcal{L}=\left\|\mathbf{\hat{h}}-\mathbf{h}\right\|^{2}. \tag{16}\] ### _P-MRDN-Based Channel Estimation Scheme_ In conventional mMIMO systems, the operating region is the far-field, i.e., the distance between the BS and the UE is longer than the Rayleigh distance. In this case, the channel can be modeled by the planar wave that solely depends on the angular information, resulting in sparsity in the angular domain.
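Stepping back briefly to the architecture of Section III-A before continuing with the angular-domain processing: the building blocks of Eqs. (8)-(15) can be rendered as the minimal PyTorch-style sketch below. This is an illustrative reconstruction rather than the implementation used in the paper; the channel width, kernel sizes, and the simplified CBAM-like attention tail are assumptions made purely for exposition.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """One residual block of Eqs. (8)-(9): a Conv layer followed by ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
    def forward(self, x):
        return torch.relu(self.conv(x))

class RDN(nn.Module):
    """M densely connected residual blocks, Eq. (10): block n receives the
    input together with the outputs of all previous blocks."""
    def __init__(self, channels=32, M=6):
        super().__init__()
        self.blocks = nn.ModuleList(
            [ResBlock(channels * (n + 1), channels) for n in range(M)])
        self.fuse = nn.Conv2d(channels * (M + 1), channels, kernel_size=1)
    def forward(self, x):
        feats = [x]
        for blk in self.blocks:
            feats.append(blk(torch.cat(feats, dim=1)))
        return self.fuse(torch.cat(feats, dim=1)) + x   # local residual path

class CBAMLike(nn.Module):
    """Simplified attention tail following Eqs. (11)-(13): Conv -> ReLU -> Conv.
    (A stand-in for the full CBAM of [21], kept minimal here.)"""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1))
    def forward(self, x):
        return self.net(x)

class MRDN(nn.Module):
    """Cascade of B RDNs composed with the attention module, Eqs. (14)-(15).
    The input Y (real/imag parts of y) is treated as a 1-channel N x 2 image."""
    def __init__(self, channels=32, B=8, M=6):
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)
        self.body = nn.Sequential(*[RDN(channels, M) for _ in range(B)])
        self.attn = CBAMLike(channels)
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)
    def forward(self, y):
        return self.tail(self.attn(self.body(self.head(y))))

y = torch.randn(4, 1, 128, 2)   # a batch of received signals Y in R^{N x 2}
print(MRDN()(y).shape)          # torch.Size([4, 1, 128, 2])
```

The polar-domain variants described next differ only in the transform applied to \(\mathbf{Y}\) before and after such a denoising network, not in the network itself.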
Moreover, the angular-domain representation \(\mathbf{h}_{\mathrm{A}}\) can be obtained from the channel \(\mathbf{h}\) via the Fast Fourier Transform (FFT), as discussed above. As shown in Fig. 2 (a), the MRDN-based CE scheme aims to fully exploit the channel sparsity in the angular domain by transforming the matrix \(\mathbf{Y}\) into the angular domain via the FFT. The estimated channel matrix \(\mathbf{\hat{H}}\) is then obtained via the Inverse Fast Fourier Transform (IFFT) from the matrix \(\mathbf{\hat{H}}_{\mathrm{A}}=\mathbf{Y}_{\mathrm{A}}-\mathcal{M}(\mathbf{Y}_{\mathrm{A}})\), which produces the estimated channel \(\mathbf{\hat{h}}\). To leverage the polar-domain channel sparsity in XL-MIMO systems, the proposed P-MRDN-based channel estimation scheme adopts the polar-domain transform (PT) to transform the matrix \(\mathbf{Y}\) into its polar-domain counterpart (i.e., \(\mathbf{Y}_{\mathrm{P}}\)) through the matrix \(\mathbf{D}\), analogous to the angular-domain transformation. As shown in Fig. 2 (b), the estimated channel \(\mathbf{\hat{h}}\) is constructed from the estimated channel matrix \(\mathbf{\hat{H}}\), which is obtained via the inverse polar-domain transform (IPT) from the matrix \(\mathbf{\hat{H}}_{\mathrm{P}}=\mathbf{Y}_{\mathrm{P}}-\mathcal{M}(\mathbf{Y}_{\mathrm{P}})\). Once again, the key difference between the MRDN and P-MRDN lies in their approaches to exploiting the inherent channel sparsity. The MRDN-based CE scheme transforms the channel to the angular domain, exploiting the angular-domain sparsity in the far-field. In contrast, the P-MRDN-based CE scheme transforms the channel to the polar domain, leveraging the polar-domain sparsity in the near-field of XL-MIMO. ### _P-MSRDN-Based Channel Estimation Scheme_ To further improve the channel estimation accuracy, we define a parallel combination of the ASPP and the RDN3, named ASPP-RDN, as shown in Fig. 3 (b). By incorporating the notion of ASPP into the proposed P-MRDN, the new CE scheme can achieve better NMSE performance, as the ASPP can integrate multi-scale features of its input. Footnote 3: By incorporating the notion of ASPP into the proposed P-MRDN-based CE scheme inspired by [17], advanced deep learning structures can be exploited to improve the performance of the proposed CE schemes. In addition, other similar structures, e.g., Encoder-Decoder and Encoder-Decoder with Atrous Conv [18], can also be employed to improve the performance of the proposed CE schemes. It is crucial to note that introducing more advanced structures is vital in optimizing the complexity, fitting, and generalization capabilities of the proposed schemes. These aspects deserve further investigation in the future. #### III-C1 Atrous Spatial Pyramid Pooling Structure We denote the single recursion functions of the pooling layer, the "_Conv_" layer, and the "_Conv_ \(rate=i\)" layer by \(p\), \(c\), and \(c_{i}\), respectively. Then, the recurrence relation of the ASPP is given by \[A(\mathbf{x})=c(p(\mathbf{x}),c(\mathbf{x}),c_{6}(\mathbf{x}),c_{12}(\mathbf{x})), \tag{17}\] where \(\mathbf{x}\) denotes the input of the ASPP and \(A\) is the mapping function for the ASPP. #### III-C2 ASPP-RDN Structure Based on the ASPP and RDN structures, the recurrence relation of the proposed ASPP-RDN is \[\mathcal{X}(\mathbf{x})=f_{M}(A(\mathbf{x}),\mathcal{F}_{M-1},\cdots,\mathcal{F}_{1},\mathbf{x}), \tag{18}\] where \(M\) denotes the number of layers of the RDN and \(\mathbf{x}\) denotes the input of the ASPP-RDN.
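A rough sketch of Eqs. (17)-(18) is given below, again as an illustrative reconstruction rather than the authors' code: parallel dilated convolutions with rates 6 and 12 plus a pooled branch form the ASPP, whose output is appended to the dense features before the final fusion Conv of the RDN, as described for Fig. 3 (b). The layer widths are assumed values.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Parallel branches of Eq. (17): global pooling, a 1x1 Conv, and dilated
    Convs with rates 6 and 12, concatenated and fused by a final Conv."""
    def __init__(self, channels=32):
        super().__init__()
        self.pool = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(channels, channels, 1))
        self.conv1 = nn.Conv2d(channels, channels, 1)
        self.conv_r6 = nn.Conv2d(channels, channels, 3, padding=6, dilation=6)
        self.conv_r12 = nn.Conv2d(channels, channels, 3, padding=12, dilation=12)
        self.fuse = nn.Conv2d(4 * channels, channels, 1)
    def forward(self, x):
        pooled = nn.functional.interpolate(self.pool(x), size=x.shape[-2:])
        return self.fuse(torch.cat(
            [pooled, self.conv1(x), self.conv_r6(x), self.conv_r12(x)], dim=1))

class ASPPRDN(nn.Module):
    """RDN whose final fusion layer also receives the ASPP branch, cf. Eq. (18)."""
    def __init__(self, channels=32, M=6):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Conv2d(channels * (n + 1), channels, 3, padding=1),
                           nn.ReLU()) for n in range(M)])
        self.aspp = ASPP(channels)
        self.fuse = nn.Conv2d(channels * (M + 2), channels, 1)  # +1 for the ASPP branch
    def forward(self, x):
        feats = [x]
        for blk in self.blocks:
            feats.append(blk(torch.cat(feats, dim=1)))
        feats.append(self.aspp(x))                 # multi-scale parallel branch
        return self.fuse(torch.cat(feats, dim=1)) + x
```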
#### III-C3 P-MSRDN Structure The proposed P-MSRDN jointly makes full use of the key ideas of the ASPP and the MRDN, which are summarized as follows: * The MRDN is an extended and versatile RDN architecture built by cascading multiple RDNs. The superiority of the MRDN for CE has been demonstrated due to the similarity between CE and image noise reduction. The MRDN is utilized, with modifications, as the main component of our proposed P-MSRDN. * The ASPP has shown excellent performance in reconstructing texture details while removing the embedded noise. We utilize the ASPP as a parallel branch of the RDN to take advantage of its multi-scale feature integration capabilities, which further enhances the accuracy of CE. * Assuming that there are \(B\) ASPP-RDNs and a CBAM in the proposed P-MSRDN, the recurrence relation of the P-MSRDN is \[\mathcal{Z}(\mathbf{x})=\mathcal{X}_{B}\star\mathcal{X}_{B-1}\star\cdots\star\mathcal{X}_{1}(\mathbf{x}), \tag{19}\] \[\mathcal{Y}(\mathbf{x})=\mathcal{Z}\star C(\mathbf{x}), \tag{20}\] where \(\mathcal{Y}\): \(\mathbb{R}^{N\times 2}\rightarrow\mathbb{R}^{N\times 2}\) is the mapping function for the P-MSRDN. The final estimated channel \(\mathbf{\hat{h}}\) can be obtained by reversing the combining in the input layer, as shown in Fig. 2 (b). ### _Computational Complexity_ The computational complexities of orthogonal matching pursuit (OMP) and polar-domain orthogonal matching pursuit (P-OMP) are \(\mathcal{O}(L^{3}N^{2})\) and \(\mathcal{O}(L^{3}NQ)\), respectively [7]. On the other hand, the computational complexity of the running phase of the MRDN is given by [12] \[\mathcal{O}\left(BMN^{2}K^{2}E^{2}\right), \tag{21}\] where \(K^{2}\) is the kernel size of the "_Conv_" layers and \(E\) denotes the number of features of the MRDN. In addition, we assume that the number of features of the P-MRDN and P-MSRDN is also \(E\). The computational complexities of the running phase of the P-MRDN and P-MSRDN are \[\mathcal{O}\left(BMNQK^{2}E^{2}\right), \tag{22}\] and \[\mathcal{O}\left(B(M+4)NQK^{2}E^{2}\right). \tag{23}\]
In comparison, the performance gaps between the P-MSRDN and MRDN are \(0.55\), \(0.63\), and \(1.09\) dB, respectively. The reason for this improvement is that the P-MSRDN can effectively exploit the polar-domain channel sparsity and captures more features compared with the MRDN and P-MRDN. Furthermore, it is worth noting that all the schemes achieve their convergence within \(300\) epochs. Fig. 5 compares the NMSE performance of our proposed schemes with the MRDN-based and CS-based schemes. OMP provides the worst NMSE performance, while P-OMP has a significantly better NMSE performance compared to OMP, with a \(3.39\) dB and \(4.34\) dB increase when the SNR is \(20\) dB and \(30\) dB, respectively. In addition, the proposed P-MRDN outperforms the MRDN by \(2.38\) dB when the SNR is \(30\) dB. These reveal the superiority of the polar-domain schemes, which can effectively exploit the rich polar-domain channel sparsity in XL-MIMO for lowering the NMSE in CE. An interesting finding is that the channel sparsity has minimal influence on the DL-based schemes compared with the CS-based schemes. The possible reason could be that the MRDN has already learned implicitly a portion of the unstructured near-field information. It is worth noting that the proposed P-MRDN provides a \(12.89\) dB and \(19.51\) dB NMSE performance gain over the P-OMP when the SNR is \(20\) dB and \(30\) dB, respectively. More importantly, the proposed P-MSRDN can achieve better NMSE performance compared to other schemes, with a gain of \(1.49\) dB and \(3.54\) dB over the MRDN when the SNR is \(20\) dB and \(30\) dB, respectively. We can also find that the NMSE performance gaps between the P-MSRDN and other schemes become larger with the increase of SNR. Indeed, the exploitation of the ASPP to capture multi-scale features of the polar-domain channel is the main reason for this improvement. Overall, the proposed P-MSRDN produces the best quality of CE compared to the other schemes. The computational complexity and running time of all the schemes are compared in TABLE I. In particular, we normalize Fig. 4: Convergence of the three considered channel estimation schemes (i.e., the MRDN, P-MRDN, and P-MSRDN) under different training SNRs. Fig. 5: NMSE performance comparison of the P-MRDN and P-MSRDN with MRDN and CS schemes. the running time of all the schemes by the one obtained by OMP for comparison. As discussed above, OMP and P-OMP achieve the lowest computational complexity at the cost of high estimation error. In addition, all the schemes have the same order of magnitude of running time. More importantly, the P-MSRDN achieves better NMSE performance compared to the MRDN and P-MRDN, but only requires a similar computational complexity. This is due to the fact that the P-MSRDN can capture the multi-scale features of the polar-domain channel through the exploitation of the ASPP. ## V Conclusion We proposed the P-MRDN and P-MSRDN-based CE schemes for XL-MIMO systems, building on the conventional MRDN structure. More specifically, the proposed P-MSRDN can achieve superior generalization capabilities by utilizing several techniques, e.g., exploiting the near-field channel sparsity in the polar domain, deep residual learning, and extracting features at multi-scale resolutions. By transforming the channel into the polar domain, the proposed P-MRDN and P-MSRDN schemes can effectively exploit the sparsity in the polar domain that outperform the MRDN scheme and the conventional CS schemes. 
As for potential future works, the topics could be the CE for hybrid-field scenario of XL-MIMO, where various UEs are in near-field and others are in far-field.
2309.09609
Comparing Performance and Portability between CUDA and SYCL for Protein Database Search on NVIDIA, AMD, and Intel GPUs
The heterogeneous computing paradigm has led to the need for portable and efficient programming solutions that can leverage the capabilities of various hardware devices, such as NVIDIA, Intel, and AMD GPUs. This study evaluates the portability and performance of the SYCL and CUDA languages for one fundamental bioinformatics application (Smith-Waterman protein database search) across different GPU architectures, considering single and multi-GPU configurations from different vendors. The experimental work showed that, while both CUDA and SYCL versions achieve similar performance on NVIDIA devices, the latter demonstrated remarkable code portability to other GPU architectures, such as AMD and Intel. Furthermore, the architectural efficiency rates achieved on these devices were superior in 3 of the 4 cases tested. This brief study highlights the potential of SYCL as a viable solution for achieving both performance and portability in the heterogeneous computing ecosystem.
Manuel Costanzo, Enzo Rucci, Carlos GarcΓ­a SΓ‘nchez, Marcelo Naiouf, Manuel Prieto-MatΓ­as
2023-09-18T09:26:46Z
http://arxiv.org/abs/2309.09609v2
Comparing Performance and Portability between CUDA and SYCL for Protein Database Search on NVIDIA, AMD, and Intel GPUs ###### Abstract The heterogeneous computing paradigm has led to the need for portable and efficient programming solutions that can leverage the capabilities of various hardware devices, such as NVIDIA, Intel, and AMD GPUs. This study evaluates the portability and performance of the SYCL and CUDA languages for one fundamental bioinformatics application (Smith-Waterman protein database search) across different GPU architectures, considering single and multi-GPU configurations from different vendors. The experimental work showed that, while both CUDA and SYCL versions achieve similar performance on NVIDIA devices, the latter demonstrated remarkable code portability to other GPU architectures, such as AMD and Intel. Furthermore, the architectural efficiency rates achieved on these devices were superior in 3 of the 4 cases tested. This brief study highlights the potential of SYCL as a viable solution for achieving both performance and portability in the heterogeneous computing ecosystem. oneAPI, SYCL, GPU, CUDA, Performance portability + Footnote †: publication: Β©2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The final authenticated version is available online at [https://doi.org/10.110](https://doi.org/10.110) 9/SBAC-PAD59825.2023.00023 ## I Introduction In the last decade, the quest to improve the energy efficiency of computing systems has fueled the trend toward heterogeneous computing and massively parallel architectures [1]. Nowadays, GPUs can be considered the dominant accelerator, and Nvidia, Intel, and AMD are the most prominent manufacturers. In the 4th quarter of 2022, Intel and AMD had 9% of the market, with Nvidia dominating the discrete graphics card market at 82%. Moreover, considering also the integrated and embedded graphics, Intel had 71% quote, Nvidia 17%, and AMD 12% 1. This poses a significant challenge for researchers who use GPUs for their experiments and simulations. The critical question is how to use this growing computational capacity transparently without having to pay attention to the programming models, hardware support, or mandatory software ecosystem. Footnote 1: [https://www.pcgamer.com/intel-is-already-matching-amd-for-gaming-gra-phics-market-share/](https://www.pcgamer.com/intel-is-already-matching-amd-for-gaming-gra-phics-market-share/) Focusing on the programming aspect, CUDA is still the most popular programming language for GPUs, although it is a proprietary language only valid for NVIDIA devices. Fortunately, other open initiatives have contemplated the programming of GPUs or even other accelerators generically. In particular, SYCL is one of the most promising recent initiatives, which is an open standard from the Khronos Group. One noteworthy feature of SYCL is its status as a cross-platform abstraction layer, enabling programmers to adhere to the fundamental principle of "write code once and run it anywhere". In this sense, the same SYCL code can run not only on multiple vendor GPUs but also on different hardware platforms, including CPUs and FPGAs. 
SYCL capitalizes on programming productivity by reducing the effort required during development tasks and minimizing maintenance costs. The concept of _performance portability_ becomes fundamental in this context. Specifically, performance portability encompasses two key aspects: (1) enabling the execution of a single application on various hardware platforms, and (2) achieving a desired level of performance across these diverse platforms [2]. This paper aims to address the previous issue by exploring the SYCL programming paradigm in the field of Bioinformatics and Computational Biology. These research areas have been leveraging GPUs for over two decades [3] and numerous of their implementations are based on CUDA, imposing significant limitations on portability across a wide range of heterogeneous architectures. For that reason, this study evaluates the portability and performance of the SYCL and CUDA languages for one fundamental bioinformatics application (Smith-Waterman biological sequence alignment) across different GPU architectures, considering single and multi-GPU configurations from multiple vendors. Hence, we select the _SW#_ suite [4, 5]: a CUDA-based, memory-efficient implementation for biological sequence alignment, that has been recently migrated to SYCL [6]. Our main contributions can be summarized as: * An adaptation and extension of the performance model from [7]. This performance model is adapted to the features of the _SW#_ suite and also extended to include AMD and Intel GPUs (both discrete and integrated types). * A functional and performance portability study for _SW#_ applications across different GPU architectures, considering single and multi-GPU configurations from multiple vendors. To the best of our knowledge, no previous study has considered such a diverse and large set of GPUs. The rest of the paper is organized as follows. Section II introduces the background for this research. Section III describes the case-study applications and also the adapted and extended performance model. Section IV presents the functional and performance portability results. Finally, Section V discusses some related works, and Section VI presents the conclusions and possible lines for future work. ## II Background ### _GPUs and Programming Models_ In 2007, Nvidia introduced CUDA [8] alongside the Tesla GPU, to enable general-purpose programming on GPUs. CUDA is a programming model and parallel computing platform specifically designed for general computing on GPUs. While CUDA has become the most popular low-level programming model for general-purpose GPU computing, its main limitation is that it only supports NVIDIA devices. In the opposite sense, OpenCL [9] gained prominence because it can be used in several devices and vendors requiring a similar abstraction level as CUDA. High-Level Programming initiatives such as OpenMP [10], OpenACC [11, 12], and SYCL [13] have played significant roles in the field of parallel computing in GPU scenarios. OpenMP initially focused on multi-core CPU computing but later expanded its support to include accelerators like GPUs with the release of v4.0. While OpenACC [14] (Open Accelerators) emerged as one of the earliest high-level approaches for GPU programming through the use of directive-based programming, OpenMP has even started overshadowing it by incorporating most of their features. Currently, one of the most promising initiatives in the GPU programming ecosystem is SYCL [13]. 
It enables developers to write code for heterogeneous processors using standard ISO C++. It incorporates host and kernel code in a single source file and utilizes templates and lambda functions for generic programming. Moreover, SYCL supports various acceleration APIs, such as OpenCL, enabling seamless integration with lower-level code. Multiple SYCL implementations are available nowadays: Codeplay's ComputeCpp [15], oneAPI by Intel [16], triSYCL [17] led by Xilinx, and OpenSYCL [18] (previously denoted as hipSYCL) led by Heidelberg University. In particular, Intel oneAPI can be considered the most mature developer suite. Among the main features of oneAPI, we can find that is an open, cross-industry project that aims to provide an efficient, high-performance programming model. It eliminates the concept of separate code bases for host and device such as in OpenCL. Moreover, multiple programming languages and different tools for each architecture are supported. Data Parallel C++ (DPC++) is oneAPI's core language for programming accelerators and multiprocessors [16], which integrates SYCL and OpenCL standards without additional extensions. Additionally, oneAPI facilitates interoperability with optimized libraries such as oneCCL, oneDAL, oneDNN, oneMKL, oneTBB, and oneVPL, catering to diverse parallel application domains. ### _Smith-Waterman Algorithm_ The SW algorithm is widely used to obtain the optimal local alignment between two sequences [19]. This method is based on a dynamic programming approach and is highly sensitive since it explores all possible alignments between the sequences. Given two sequences \(Q\) and \(D\) of length \(|Q|=m\) and \(|D|=n\), the recurrence relations for the SW algorithm with the modification of Gotoh [20] are defined as follows: \[H_{i,j}=max\begin{cases}0\\ H_{i-1,j-1}+SM(Q[i],D[j])\\ E_{i,j}\\ F_{i,j}\end{cases} \tag{1}\] \[E_{i,j}=max\begin{cases}H_{i,j-1}-G_{o}\\ E_{i,j-1}-G_{e}\end{cases} \tag{2}\] \[F_{i,j}=max\begin{cases}H_{i-1,j}-G_{o}\\ F_{i-1,j}-G_{e}\end{cases} \tag{3}\] The similarity score \(H_{i,j}\) is computed to identify a common subsequence; \(H_{i,j}\) contains the score for aligning the prefixes \(Q[1..i]\) and \(D[1..j]\). Moreover, \(E_{i,j}\) and \(F_{i,j}\) correspond to the scores of prefix \(Q[1..i]\) and \(D[1..j]\) aligned to a gap, respectively. _SM_ denotes the _scoring matrix_ and defines the match/mismatch scores between residues. Last, \(G_{o}\) and \(G_{e}\) refer to the gap open and gap extension penalties, respectively. First of all, \(H\), \(E\) and \(F\) must be initialized with 0 when \(i=0\) or \(j=0\). Then, the recurrences should be calculated with \(1\leq i\leq m\) and \(1\leq j\leq n\). The highest value in the \(H\) matrix (\(S\)) corresponds to the optimal local alignment score between \(Q[1..i]\) and \(D[1..j]\). If required, the optimal local alignment is finally obtained by following a traceback procedure whose starting point is \(S\). From a computational point of view, it is important to highlight the computational dependencies of any \(H\) element. Any cell can be calculated only after the values of the upper, left, and upper-left neighbors are known; imposing restrictions on how \(H\) can be processed. **SW in practice and parallelization issues.** The SW algorithm can be used to compute: (a) pairwise alignments (one-to-one); usually associated with long DNA sequences; or (b) database similarity searches (one-to-many), usually associated with protein sequence alignment. 
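Before discussing parallelization, the recurrences (1)-(3) can be illustrated with a short sequential Python sketch. This is a plain scalar version written only for exposition (it is not the SW# CUDA/SYCL kernel), and the toy scoring matrix and sequences at the bottom are made-up examples; it computes the optimal local alignment score while storing only a couple of rows of \(H\) and \(F\).

```python
def sw_score(q, d, sm, g_open, g_ext):
    """Optimal local alignment score with Gotoh affine gaps (Eqs. 1-3), linear space."""
    n = len(d)
    h_prev = [0] * (n + 1)          # H values of row i-1 (row 0 is all zeros)
    f_prev = [0] * (n + 1)          # F values of row i-1
    best = 0
    for i in range(1, len(q) + 1):
        h_curr = [0] * (n + 1)
        f_curr = [0] * (n + 1)
        e = 0                        # E_{i,0}
        for j in range(1, n + 1):
            e = max(h_curr[j - 1] - g_open, e - g_ext)              # Eq. (2)
            f_curr[j] = max(h_prev[j] - g_open, f_prev[j] - g_ext)  # Eq. (3)
            diag = h_prev[j - 1] + sm[(q[i - 1], d[j - 1])]
            h_curr[j] = max(0, diag, e, f_curr[j])                  # Eq. (1)
            best = max(best, h_curr[j])                             # track S
        h_prev, f_prev = h_curr, f_curr
    return best

# Toy example: +2/-1 match/mismatch scores and the 10/2 gap penalties used later
alphabet = "ACGT"
sm = {(a, b): (2 if a == b else -1) for a in alphabet for b in alphabet}
print(sw_score("ACACACTA", "AGCACACA", sm, g_open=10, g_ext=2))
```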
Although the processing nature of the SW algorithm, with the data dependencies in the computation of \(H_{i,j}\), is very challenging from the point of view of parallelism exploitation, both approaches have been studied in the literature exploiting SIMD capabilities. In case (a), a single matrix is calculated and all Processing Elements (PEs) work collaboratively (_intra-task parallelism_). Due to the inherent data dependencies, neighboring PEs communicate to exchange border elements. In approach (b), while the intra-task scheme can also be used, a better parallel scheme consists in simultaneously calculating multiple matrices without communication between the PEs (_inter-task parallelism_) [21]. Fig. 1 illustrates both approaches. The SW algorithm runs in quadratic time and space to compute the optimal alignment. However, computing the optimal alignment score does not require storing the full similarity matrix and can be done with linear space complexity. Similarity database search takes advantage of this feature, since the optimal alignment only makes sense for very similar sequences. Therefore, all alignment scores are calculated first, and optimal alignments are computed only for top-ranked database sequences. ### _Performance portability_ According to Penycook et al. [23], _performance portability_ refers to _"A measurement of an application's performance efficiency for a given problem that can be executed correctly on all platforms in a given set"_. These authors define two different performance efficiency metrics: _architectural efficiency_ and _application efficiency_. The former denotes the capacity of an application to effectively utilize hardware resources, measured as a proportion of the theoretical peak performance. The latter signifies the application's ability to select the most suitable implementation for each platform, representing a fraction of the highest observed performance achieved. The metric for performance portability presented by Penycook et al. [23] was later reformulated by Marowka [2] to address some of its flaws. Formally, for a given set of platforms \(H\) from the same architecture class, the performance portability \(\bar{\Phi}\) of a case-study application \(\alpha\) solving problem \(p\) is: \[\bar{\Phi}(\alpha,p,H)=\begin{cases}\frac{\sum_{i\in H}e_{i}(\alpha,p)}{|H|}&\text{if $i$ is supported $\forall i\in H$}\\ \text{not applicable (NA)}&\text{otherwise}\end{cases}\] where \(e_{i}(\alpha,p)\) corresponds to the performance efficiency of case-study application \(\alpha\) solving problem \(p\) on platform \(i\). The _performance portability_ concept emphasizes the capability to write code that can efficiently utilize the available computing resources, such as CPUs, GPUs, or specialized accelerators, while maintaining high performance regardless of the specific hardware configuration. With performance portability, developers can write code once and have it deliver optimal performance on various target platforms. This eliminates the need for extensive manual code optimizations or platform-specific modifications, reducing development time and effort.
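Expressed as code, the Marowka-style metric reduces to an arithmetic mean guarded by a support check; the helper below is a minimal illustration with purely hypothetical efficiency values and platform names.

```python
def performance_portability(efficiencies, platforms):
    """Arithmetic-mean performance portability; returns None (NA) if the
    application is not supported on every platform of the set."""
    if any(p not in efficiencies for p in platforms):
        return None                       # not applicable
    return sum(efficiencies[p] for p in platforms) / len(platforms)

# Hypothetical architectural efficiencies (fractions of theoretical peak)
eff_sycl = {"rtx3090": 0.40, "radeon6700xt": 0.45, "uhd770": 0.50}
eff_cuda = {"rtx3090": 0.41}              # CUDA runs only on the NVIDIA device
gpus = ["rtx3090", "radeon6700xt", "uhd770"]
print(performance_portability(eff_sycl, gpus))   # average efficiency over the set
print(performance_portability(eff_cuda, gpus))   # None: unsupported on two GPUs
```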
## III Case-Study Applications and Performance Model ### _Case-Study Applications_ Two GPU-accelerated implementations of \(p=\) protein database search were considered for the performance portability evaluation: * CUDA: this version corresponds to the _SW#_ suite, a CUDA-based, memory-efficient implementation for biological sequence alignment, which can be used either as a stand-alone application or as a library. It can compute pairwise alignments as well as database similarity searches, for both protein and DNA sequences; and it allows configuring the alignment method (including SW), the open/extension penalties, and the scoring matrix. _SW#_ combines CPU and GPU computation for optimal efficiency. It dynamically balances the workload between the CPU and GPU based on sequence lengths, aiming to minimize idle threads. From a parallelization point of view, _SW#_ uses both inter-task and intra-task parallelism, but primarily on the GPU side. The GPU divides the workload into two partitions: a "short kernel" processes the shortest database sequences using the inter-task scheme, while a "long kernel" aligns the longest sequences using the intra-task strategy. When utilizing multiple GPUs, _SW#_ follows a flexible approach: if the number of query sequences to be aligned is fewer than the number of available GPU devices, all devices align the same query sequence against a different database partition in a synchronized manner. Conversely, if the number of query sequences is greater than the number of GPUs, each GPU aligns a different one against the complete database [4, 5]. * SYCL: this code is based on the implementation presented in [6], representing a SYCL equivalent. The migration of the _SW#_ suite was performed using dpct (the Data Parallel Compatibility Tool available in the oneAPI suite) and some hand-coding modifications.
Fig. 1: Parallelization approaches in similarity matrix computations (adapted from [22]). Each color indicates the cells that can be computed together in a SIMD manner.
### _Performance Model_ Peak theoretical hardware performance must be estimated for all selected GPUs in this study to compute the performance portability metric. This step requires considering both hardware and algorithm features. Fortunately, the previous work from Lan et al. [7] can be used as a basis for this task; in that paper, the computing capability of different devices (including accelerators based on NVIDIA GPUs, Intel CPUs, and the discontinued Intel Xeon Phis) can be estimated using Eq. 4: \[Capability=Clock\_Rate\times Throughput\times Lanes \tag{4}\] where _Clock_Rate_ refers to the clock frequency, _Throughput_ refers to the instruction count that the device can execute in one clock cycle, and _Lanes_ refers to the number of SIMD vector lanes. Then, the number of instructions issued in each cell update of the similarity matrix should be counted. In the sequence alignment context, the most popular metric for measuring performance is related to the number of Cell Updates Per Second (CUPS). Thus, the theoretical peak performance of any device can be modeled using Eq. 5: \[Theo\_peak=\frac{Capability}{Instruction\_count\_one\_cell\_update} \tag{5}\] Even though this study only considers GPUs, these equations can serve as a basis to estimate the theoretical peak performance of other devices such as CPUs or FPGAs.
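As a small illustration of how Eqs. 4 and 5 combine, the helper below converts the three hardware factors and the per-cell instruction count into a theoretical peak in GCUPS. The numbers in the example call are placeholders chosen for readability, not values from Table I.

```python
def theoretical_peak_gcups(clock_ghz, throughput, lanes, instr_per_cell=12):
    """Eqs. (4)-(5): Capability = clock * throughput * lanes (giga-instructions/s);
    the peak GCUPS is that capability divided by the instructions per cell update."""
    capability = clock_ghz * throughput * lanes
    return capability / instr_per_cell

# Placeholder example: a GPU with 80 SIMT cores of 32 lanes each, running at
# 1.5 GHz, issuing 2 int32 add/sub/max instructions per lane per cycle.
print(theoretical_peak_gcups(clock_ghz=1.5, throughput=2, lanes=80 * 32))  # 640.0
```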
For this work, the previous performance model from [7] is adapted to the features of the _SW#_ algorithm and also extended to other GPUs vendors such as AMD and Intel GPUs (both discrete and integrated types). Table I summarized the theoretical peak performance of selected GPUs using the Eq. 5. More details can be found in the rest of this section. #### Iii-B1 SW# core instructions _SW#_ computes the similarity matrix using 32-bit integers and performs 12 instructions per cell update. Algorithm 1 presents the snippet of cell update in similarity matrix as in Eq. 1, Eq. 2, and Eq. 3. Just adding, subtracting, and maximum instructions are required to perform a single-cell update. #### Iii-B2 Architectural features on NVIDIA's GPU The # Cores in an NVIDIA GPU refers to the number of Streaming Multiprocessors. CUDA does not strictly follow a SIMD execution model but it adopts a similar one denoted as the SIMT model. A _warp_ is composed of a group of 32 threads that execute the same instruction stream. According to [7], "a _warp_ in SIMT is equivalent to a _vector_ in SIMD, and a _thread_ in SIMT is equivalent to a _vector lane_ in SIMD". The instruction throughput depends on the CUDA Compute Capability (CC) of each NVIDIA GPU 2. Footnote 2: [https://docs.nvidia.com/cuda/cuda-c-programming-guide/#maximize-instruction-throughput](https://docs.nvidia.com/cuda/cuda-c-programming-guide/#maximize-instruction-throughput) #### Iii-B3 Architectural features on AMD's GPU In the RDNA2.0 architecture, the # Cores represent the number of Compute Units (CUs), which are grouped in pairs into Workgroup Processors (WP). On its behalf, AMD calls _wavefront_ and _work-item_ the equivalent of NVIDIA's _warp_ and _thread_, respectively. RDNA2.0 supports both wavefront sizes of 32 and 64 work items but the former is prioritized. Each CU contains two SIMD32 vector units, being able to compute 64 add/subtract/max instructions per cycle (Int32). This means that the instruction throughput is 2 for each work item. #### Iii-B4 Architectural features on Intel's GPU On the discrete segment (dGPUs), Intel has a quite different GPU design philosophy than NVIDIA and AMD. The fundamental block of the Intel Xe microarchitecture is the Xe Core, each of which has 16 Xe Vector Engines (XVEs) 3 that can execute 8 add/subtract/max instructions per cycle (Int32). Thus, Xe Cores and XVEs map to # Cores and # Lanes, respectively, in the proposed model. Footnote 3: Also known as Executions Units (EUs) On the integrated segment (iGPUs), both Gen9 and Gen12 microarchitectures are similar from a design perspective, differing mainly in the amount of computational resources. In these microarchitectures, the fundamental block is the Subslice, each of which has 8 Execution Units (EUs) that can execute 8 add/subtract/max instructions per cycle (Int32). Thus, Subslices and EUs refer to # Cores and # Lanes, respectively, in the proposed model. ## IV Experimental Results ### _Experimental Design_ The experiments were carried out on a set of 10 GPUs, including 6 NVIDIA dGPUs, 1 AMD dGPU, 2 Intel iGPUs, and 1 Intel dGPU. The specific details of these GPUs can be found in Table I. The oneAPI and CUDA versions used were 2022.1.0 and 11.7, respectively. For both CUDA and SYCL, the optimization flag -O3 was used during compilation. 
To run SYCL code on NVIDIA and AMD GPUs, several modifications had to be made to the build process, as SYCL is not supported by default on these platforms4, although Codeplay has recently announced free binary plugins5 to support it. After these modifications, it was possible to run DPC++ code on an NVIDIA GPU using the Clang++ compiler (16.0). Footnote 4: [https://intel.github.io/llvm-docs/GetStartedGuide.html](https://intel.github.io/llvm-docs/GetStartedGuide.html) Footnote 5: [https://codeplay.com/portal/blogs/2022/12/16/bringing-nvidia-and-amd-s](https://codeplay.com/portal/blogs/2022/12/16/bringing-nvidia-and-amd-s) _SW#_ was configured with BLOSUM62 as the substitution matrix and 10/2 as the insertion/extension gap penalties. The flag T=0 was also used to remove the impact of the CPU on the final performance (all sequence alignments are computed entirely on the GPU). The performance evaluation was carried out by searching 20 query protein sequences against the well-known Environmental Non-Redundant database (Env. NR) (2021_04 Release), which contains 995210546 amino acid residues in 4789355 sequences, with a maximum length of 16925. Query sequences were selected from the Swiss-Prot database6, with lengths ranging from 144 to 5478. The access numbers for these queries are: P02232, P05013, P14942, P07327, P01008, P03435, P42357, P21177, Q38941, P27895, P07756, P04775, P19096, P28167, POC6B8, P20930, P08519, Q7TMA5, P33450, and Q9UKN1. Footnote 6: Swiss-Prot: [https://www.uniprot.org/downloads](https://www.uniprot.org/downloads) In order to minimize fluctuations, the tests were executed 20 times for each set, and the performance was determined based on the average of these multiple runs. ### _Single-GPU Performance and Portability Results_ A primary comparison was conducted between the performance of CUDA and SYCL on NVIDIA GPUs (see Fig. 2). As can be seen, both programming models achieve practically the same GCUPS values. On the one hand, the largest performance difference in favor of SYCL was observed on the Tesla V100 (3.4%). On the other hand, the CUDA implementation did its part on the GTX 980, outperforming SYCL by 4.6%. Thus, both CUDA and SYCL are capable of delivering comparable performance for this case study on NVIDIA GPUs.
Fig. 2: Performance comparison between CUDA and SYCL on single NVIDIA GPUs.
Table II presents a more detailed comparison of the performance and architectural efficiency of the CUDA and SYCL codes on NVIDIA, AMD, and Intel GPUs. For each platform, this table shows the peak theoretical performance, the achieved performance for both CUDA and SYCL, and the corresponding architectural efficiency. On NVIDIA GPUs, CUDA and SYCL demonstrated comparable performance and efficiency values, as was already noted in the analysis of Fig. 2. As expected, more powerful GPUs are able to achieve higher GCUPS values. As for the architectural efficiency values, they are in the range of 37%-52%. It is important to note that, although the highest GCUPS value is presented by the RTX 3090 GPU, the most efficient one turns out to be the RTX 2070 GPU. For AMD and Intel GPUs, only the results for SYCL are shown, since CUDA only supports NVIDIA GPUs. This fact highlights the already mentioned greater portability of SYCL over CUDA. It can be said that the results of the SYCL version on these GPUs are generally good. On the one hand, SYCL matches its best efficiency rate on NVIDIA GPUs when running on AMD GPUs.
On the other hand, SYCL beats that mark on the 2 integrated GPUs, achieving up to +23.1% architectural efficiency. The only negative aspect is SYCL's performance on Intel's Arc A770, where performance drops to 23.3% of architectural efficiency. This value represents its lowest performance and the cause could be related to Intel's discrete GPU design philosophy, which differs from NVIDIA and AMD. However, we plan to profile the code to learn more about this issue. The performance portability of both CUDA and SYCL codes is evaluated in Table III, where it can be noted that aggregated results are consistent with those observed on an individual basis before. For NVIDIA GPUs, the performance portability of both is quite similar, with values of 42% and 42.2%, respectively. As seen before, this indicates that both programming models can deliver a consistent level of performance across the different NVIDIA GPUs used in the tests. In the case of Intel GPUs, SYCL demonstrated very good architectural efficiency values on the iGPUs, in contrast to the lower efficiency exhibited on the dGPU. Moreover, when considering the combination of AMD and Intel GPUs, SYCL achieves the highest performance portability of the middle set. However, the performance portability decreases when NVIDIA GPUs are also included (last set), as SYCL performance is lower on these devices. Building on the previous analysis, SYCL consistently outperforms CUDA in terms of performance portability in this study. To be more precise, SYCL achieved nearly the same architectural efficiency as CUDA considering 6 NVIDIA GPUs with 5 different microarchitectures. Moreover, SYCL was not only able to run on multiple vendor GPUs (AMD and Intel), but its architectural efficiency was superior in 3 of the 4 cases tested. This demonstrates not only SYCL's broad compatibility but also its capability to improve performance across a diverse range of GPUs for this application. ### _Multi-GPU Performance and Portability Results_ To complement the previous single-GPU analysis, a performance comparison was carried out between CUDA and SYCL using different multiple NVIDIA GPUs (see Fig. 3). As is the single-GPU case, the two programming models achieve practically the same GCUPS values when NVIDIA devices are used, for both homogeneous and heterogeneous multi-GPU configurations. While CUDA outperforms SYCL when using 2\(\times\)GTX1080 by approximately 1%, SYCL achieves the best performance in all other cases, achieving up to 5% higher GCUPS. Therefore, it can be noted that SYCL does not imply additional overhead when multiple GPUs are used. Table IV presents a more detailed comparison of the performance and architectural efficiency of CUDA and SYCL codes on 5 different multi-GPU configurations. It can be seen that for NVIDIA multi-GPUs, the efficiency rates achieved when using 2 GPUs combined are a bit lower than when using a single GPU. This behavior occurs in 3 of the 4 configurations tested (the exception is when using 2\(\times\)Tesla V100) and can be explained by 2 reasons. On the one hand, it is usual that the efficiency decreases when fixing the problem size and increasing the amount of computational resources. On the other hand, the workload distribution strategy of _SW#_ is very simple, since it distributes the query sequences among the GPUs and does not consider each GPU computing power. Because these sequences do not have the same length, load imbalance can occur between GPUs, reducing performance. 
Finally, SYCL once again demonstrates its increased functional portability with Intel's multi-GPU case. While the performance is not good for the aforementioned reasons, it is interesting to note how SYCL allows using 2 Intel GPUs of different types at the same time: an iGPU and a dGPU. ## V Related Works Some preliminary studies have compared the performance between SYCL and CUDA in different domains. In [24], the authors employed ADEPT, a GPU-accelerated short-read alignment kernel, as a case study. They found that the SYCL implementation runs approximately \(2\times\) slower than its CUDA counterpart in all experiments when using an NVIDIA V100 GPU. The authors attribute this discrepancy to CUDA's superior utilization of memory cache and SYCL's greater reliance on register usage. Additionally, the authors verified SYCL's code portability on an Intel P630 GPU. In [25], the authors delve into the process of migrating a CPU+GPU application for epistasis detection from CUDA to SYCL, founding that the highest performance of both versions is comparable on an NVIDIA V100 GPU. However, it is important to remark that some hand-tuning was required in the SYCL implementation to reach its maximum performance. When investigating the PTX code, the authors noted that SYCL does not perform the same optimizations as CUDA, such as loop unrolling. In [26], the authors identified performance gaps in several bioinformatics applications. The study involved the selection of open-source applications that had been migrated from CUDA to SYCL, followed by a comprehensive evaluation of their performance on an NVIDIA V100 GPU. Through profiling analysis, the authors found that the SYCL compiler lacks certain optimizations that the CUDA version does, including memory management, instruction vectorization, and loop unrolling, among others. In [27], a performance comparison is carried out between SYCL and CUDA in the context of AI models. The authors extend the SYCL-DNN library to include support for NVIDIA GPUs using DPC++ and evaluate its performance against the optimized cuDNN library. Initially, they observed that the non-optimized SYCL-DNN is approximately 50% slower than cuDNN due to a poorly optimized implementation of SYCL for local memory. However, after using SYCL-BLAS, a significant speedup of up to 90% of cuDNN's performance is achieved. The remaining 10% difference is attributed to hand-written, optimized implementations in CUDA. In [28], the authors compare two CUDA and SYCL versions of the AutoDock-GPU molecular docking application on an Intel Xeon Platinum 8360Y CPU, an NVIDIA A100 GPU, and an Intel Max 1550 GPU. On the A100 GPU, SYCL exhibits slower performance compared to CUDA in some cases, with performance ratios ranging from 1.24\(\times\) to 2.38\(\times\). However, in the small test cases, SYCL outperforms CUDA by 1.09\(\times\). The authors attribute the lower ratios to the synchronization effort required in compute-intensive regions like the scoring function and gradient calculation. They highlight the need for deeper performance analysis and suggest further optimization, particularly in compute-intensive areas, to improve SYCL performance. In [29], the authors analyze the performance of mini-apps that have been created in both SYCL and CUDA, running on an NVIDIA V100 GPU. Even though there are some features not fully supported, SYCL performance is comparable to that of CUDA. Moreover, the performance differences largely stem from variations in memory access patterns. 
In [30], the author evaluates the gap between performance and code portability in HPC accelerators using the well-known k-means algorithm, comparing SYCL with CUDA and OpenMP. The SYCL implementation reports higher performance on Intel GPUs and CPUs, equivalent performance on NVIDIA GPUs, and offers potential multi-vendor compatibility. Unlike the previous works, and beyond the results obtained, this performance portability study has considered different GPU architectures, including single and multi-GPU configurations from multiple vendors. To the best of our knowledge, no study has considered such a diverse and large set of GPUs.
Fig. 3: Performance comparison between CUDA and SYCL on multiple NVIDIA GPUs.
## VI Conclusions and Future Work In the field of heterogeneous computing, ensuring functional portability is not trivial for a programming language, and thus providing performance portability represents an even greater challenge. In this study, we address this issue by assessing the portability and performance of the SYCL and CUDA languages for the Smith-Waterman protein database search across different GPU architectures from multiple vendors. The experimental results show that CUDA and SYCL are capable of delivering comparable performance for this case study on NVIDIA GPUs, including single and multi-GPU configurations. When moving to AMD and Intel GPUs, SYCL was not only able to run on these devices, but its architectural efficiency was superior in 3 of the 4 cases tested. This demonstrates not only SYCL's broad compatibility but also its capability to improve performance across a diverse range of GPUs for this application. Since SYCL is still an immature programming model, the positive results found here cannot be generalized; performance will largely depend on the characteristics of the application and the capabilities of the compilers. However, they are a sample of the promising opportunities that SYCL can offer for heterogeneous computing. Future work will focus on: * Optimizing the SYCL code to reach its maximum performance. In particular, the original _SW#_ suite does not consider some known optimizations for SW alignment [22], such as instruction reordering to reduce the instruction count and the use of lower-precision integers to increase parallelism7. Additionally, improving the workload distribution strategy when using more than one GPU. These improvements will lead to higher efficiency rates. Footnote 7: It is important to note that at the time of _SW#_'s development, most CUDA-enabled GPUs did not support efficient arithmetic on 8-bit vector data types. * Running the SYCL code on other architectures (such as CPUs and CPUs+GPUs) and also considering other SYCL implementations (such as OpenSYCL and ComputeCpp), as well as other programming models like Kokkos8 and RAJA9, to strengthen the current performance portability study. Footnote 8: [https://github.com/kokkos/kokkos](https://github.com/kokkos/kokkos) Footnote 9: [https://github.com/LLNL/RAJA](https://github.com/LLNL/RAJA)
2303.00110
A P Systems Variant for Reasoning about Sequential Controllability of Boolean Networks
A Boolean network is a discrete dynamical system operating on vectors of Boolean variables. The action of a Boolean network can be conveniently expressed as a system of Boolean update functions, computing the new values for each component of the Boolean vector as a function of the other components. Boolean networks are widely used in modelling biological systems that can be seen as consisting of entities which can be activated or deactivated, expressed or inhibited, on or off. P systems on the other hand are classically introduced as a model of hierarchical multiset rewriting. However, over the years the community has proposed a wide range of P system variants including diverse ingredients suited for various needs. In this work, we propose a new variant -- Boolean P systems -- specifically designed for reasoning about sequential controllability of Boolean networks, and use it to first establish a crisp formalization of the problem, and then to prove that the problem of sequential controllability is PSPACE-complete. We further claim that Boolean P systems are a demonstration of how P systems can be used to construct ad hoc formalisms, custom-tailored for reasoning about specific problems, and providing new advantageous points of view.
Artiom Alhazov, Vincent Ferrari-Dominguez, Rudolf Freund, Nicolas Glade, Sergiu Ivanov
2023-02-28T22:25:31Z
http://arxiv.org/abs/2303.00110v1
# A P Systems Variant for Reasoning about Sequential Controllability of Boolean Networks+ ###### Abstract A Boolean network is a discrete dynamical system operating on vectors of Boolean variables. The action of a Boolean network can be conveniently expressed as a system of Boolean update functions, computing the new values for each component of the Boolean vector as a function of the other components. Boolean networks are widely used in modelling biological systems that can be seen as consisting of entities which can be activated or deactivated, expressed or inhibited, on or off. P systems on the other hand are classically introduced as a model of hierarchical multiset rewriting. However, over the years the community has proposed a wide range of P system variants including diverse ingredients suited for various needs. In this work, we propose a new variant--Boolean P systems--specifically designed for reasoning about sequential controllability of Boolean networks, and use it to first establish a crisp formalization of the problem, and then to prove that the problem of sequential controllability is PSPACE-complete. We further claim that Boolean P systems are a demonstration of how P systems can be used to construct ad hoc formalisms, custom-tailored for reasoning about specific problems, and providing new advantageous points of view. Introduction Membrane computing and P systems are a paradigm of massively parallel computing introduced more than two decades ago by Gh. Paun [27], and inspired by the structure and the functioning of the biological cell. Following the example of the cell, a membrane (P) system is a hierarchical membrane structure with compartments containing multisets of objects, representing in an abstract sense the biochemical species. Multiset rewriting rules are attached to every membrane to represent the reactions. Over the last two decades, a considerable number of variants of P systems have been introduced, inspired by various aspects of cellular life, or capturing specific computing properties. For comprehensive overviews we refer the reader to [14, 28]. Even though P systems are directly inspired by the biological cell, their use for actual cellular modelling has encountered relatively little success. On the other hand, Boolean networks have been quite successful recently, despite their relative dissimilarity to biological structures--a Boolean network is a set of Boolean variables equipped with Boolean update functions, describing how to compute the new value of the variables from their current values. We refer the reader to [1] for a more detailed impression. One application of interest of Boolean networks is controllability--the problem of deciding whether externally modifying some parameters of a system can make it reach a particular state, and finding the necessary modifications [6, 12, 25, 30, 31]. A variant of this problem which has attracted particular attention is sequential controllability: instead of looking for a particular combination of control inputs, find a _sequence_ of control inputs to guide the system to a given state [17, 18, 19, 20, 22, 24]. Sequential controllability is promising because it may allow reducing the total number of control actions, or may even drive the Boolean network along trajectories which would otherwise be inaccessible. On the other hand, sequential controllability is \(\mathsf{PSPACE}\)-hard [24], making it a difficult problem to tackle. 
The goal of this paper is to show how to combine the modelling power of Boolean networks with the richness of P systems to reason about and prove some properties of sequential controllability of Boolean networks. We construct a P system variant to satisfy the following two properties simultaneously: 1. represent sequential controllability of Boolean control networks via simple syntax transformations, 2. have \(\mathsf{PSPACE}\)-complete reachability. This formalization of sequential controllability allows us to complete the complexity result from [24] by proving that this problem is \(\mathsf{PSPACE}\)-complete. We would like to use this construction to promote P system variants as a general tool for building ad hoc formalisms specifically tailored for tackling particular problems. This paper is structured as follows. Section 2 briefly recalls all the necessary preliminaries: linear bounded automata, P systems, Boolean networks, sequential controllability. Section 3 introduces the specific P system variant for tackling sequential controllability: Boolean P systems. Section 4 shows how Boolean P systems can directly simulate Boolean networks. Section 5 introduces composition of Boolean P systems in the spirit of automata theory, and Section 6 shows how composite Boolean P systems can capture a Boolean network together with the master dynamical system emitting the control inputs. In Section 7, we show that the reachability problem for Boolean P systems is \(\mathsf{PSPACE}\)-complete, and we use this result in Section 8 to show that sequential controllability of Boolean networks is \(\mathsf{PSPACE}\)-complete as well. Finally, in Section 9 we extensively discuss the obtained technical results concerning sequential controllability, the features of Boolean P systems, and the general methodology of designing ad hoc formalisms custom-tailored to specific problems. ## 2 Preliminaries In this section, we briefly recall the necessary preliminaries, in particular deterministic linear bounded automata, P systems, Boolean networks, Boolean Control Networks (BCN), and sequential controllability of BCN. Given two sets \(A\) and \(B\), we denote by \(B^{A}\) the set of all functions \(f:A\to B\). We denote by \(2^{A}\) the set of all subsets of \(A\) (the power set of \(A\)) and by \(|A|\) the cardinality of the set \(A\). An indicator function of a subset \(C\subseteq A\) is the function \(i_{C}:A\rightarrow\{0,1\}\) with the property that \(C=\{a\mid i_{C}(a)=1\}\). In this paper, we will often use the same symbol to refer to a subset and to its indicator function. 
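A minimal Python sketch (not from the paper) of this subset/indicator-function identification, with illustrative set names, makes the convention used throughout the paper concrete:

```python
# Identify a subset C of A with its indicator function i_C : A -> {0, 1}.
A = {"x", "y", "z"}
C = {"x", "z"}

i_C = {a: int(a in C) for a in A}              # subset -> indicator function
C_back = {a for a, bit in i_C.items() if bit}  # indicator function -> subset
assert C_back == C
```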
### Deterministic Linear Bounded Automata (LBA) A deterministic linear bounded automaton (deterministic LBA or simply LBA) \(\mathcal{M}\) is a construct \[\mathcal{M}=(Q,V,T_{1},T_{2},\delta,q_{0},q_{1},Z_{l},B,Z_{r}),\] where: * \(Q\) is a finite set of states, * \(V\) is the finite tape alphabet, * \(T_{1}\subseteq V\setminus\{Z_{l},B,Z_{r}\}\) is the input alphabet, * \(T_{2}\subseteq V\setminus\{Z_{l},B,Z_{r}\}\) is the output alphabet, * \(\delta:Q\times V\to Q\times V\times\{L,R,S\}\) is the transition function, * \(q_{0}\) is the initial state, * \(q_{1}\) is the final state, * \(Z_{l}\in V\) is the left boundary marker, * \(B\in V\) is the blank symbol, * \(Z_{r}\in V\) is the right boundary marker, We restrict the transition function such that the automaton can never write over the boundary markers or exceed them, more precisely: \[\forall q\in Q:\delta(q,Z_{l})\in Q\times\{Z_{l}\}\times\{R,S\}, \text{ and}\] \[\forall q\in Q:\delta(q,Z_{r})\in Q\times\{Z_{r}\}\times\{L,S\}.\] A configuration of the automaton will be written as \(Z_{l}u\,q\underline{a}\,vZ_{r}\), where \(a\in V\setminus\{Z_{l},Z_{r}\}\), \(u,v\in(V\setminus\{Z_{l},Z_{r}\})^{*}\). The state \(q\) is written to the left of the underlined tape symbol \(a\) on which the head of the automaton currently stands. Suppose the LBA is in state \(q\) and reads the symbol \(a\) on the tape. If \(\delta(q,a)=(p,b,D)\), one of the following transitions occurs, depending on the value of \(D\in\{L,R,S\}\): \[Z_{l}uc\,q\underline{a}\,vZ_{r} \Rightarrow Z_{l}u\,p\underline{c}\,bvZ_{r}, \text{if }D=L,\text{ where }c\in V,\] \[Z_{l}u\,q\underline{a}\,cvZ_{r} \Rightarrow Z_{l}ub\,p\underline{c}\,vZ_{r}, \text{if }D=R,\text{ where }c\in V,\] \[Z_{l}u\,q\underline{a}\,vZ_{r} \Rightarrow Z_{l}u\,p\underline{b}\,vZ_{r}, \text{if }D=S.\] Due to the restriction of the transition function, the accessible part of the tape is limited to the input plus the two delimiters \(Z_{l}\) and \(Z_{r}\). Another model of LBA consists in restricting the size of the accessible part of the tape to a linear function of the input, which is the origin of the name _linear_ bounded automaton. The two models have the same computational power [13]. An LBA accepts the input \(x\in V^{*}\) if starting with the configuration \(Z_{l}q_{0}xZ_{r}\) it reaches a configuration of the form \(Z_{l}q_{1}\{B\}^{*}Z_{r}\). Given an LBA \(\mathcal{M}\) and an input \(x\), the LBA-ACCEPTANCE problem consists in deciding whether \(\mathcal{M}\) accepts \(x\). This problem is PSPACE-complete [13]. ### P Systems In this subsection, we give a general overview of P systems. For more details, we refer the reader to [14, 28]. A P system is a construct \[\Pi=(O,T,\mu,w_{1},\ldots,w_{n},R_{1},\ldots R_{n},h_{i},h_{o}),\] where \(O\) is the alphabet of objects, \(T\subseteq O\) is the alphabet of terminal objects, \(\mu\) is the membrane structure injectively labelled by the numbers from \(\{1,\ldots,n\}\) and usually given by a sequence of correctly nested brackets, \(w_{i}\) are the multisets giving the initial contents of each membrane \(i\) (\(1\leq i\leq n\)), \(R_{i}\) is the finite set of rules associated with membrane \(i\) (\(1\leq i\leq n\)), and \(h_{i}\) and \(h_{o}\) are the labels of the input and the output membranes, respectively (\(1\leq h_{i}\leq n\), \(1\leq h_{o}\leq n\)). Quite often, the rules associated with membranes are multiset rewriting rules (or special cases of such rules). 
Multiset rewriting rules have the form \(u\to v\), with \(u\in O^{\circ}\setminus\{\mathbf{0}\}\) and \(v\in O^{\circ}\), where \(O^{\circ}\) is the set of multisets over \(O\), and \(\mathbf{0}(a)=0\), for all \(a\in O\). If \(|u|=1\), the rule \(u\to v\) is called non-cooperative; otherwise it is called cooperative. In communication P systems, rules are additionally allowed to send symbols to the neighbouring membranes. In this case, for rules in \(R_{i}\), \(v\in(O\times\mathit{Tar}_{i})^{\circ}\), where \(\mathit{Tar}_{i}\) contains the symbols _out_ (corresponding to sending the symbol to the parent membrane), _here_ (indicating that the symbol should be kept in membrane \(i\)), and \(\mathit{in}_{h}\) (indicating that the symbol should be sent into the child membrane \(h\) of membrane \(i\)). When writing out the multisets over \(O\times\mathit{Tar}_{i}\), the indication _here_ is often omitted. In P systems, rules are often applied in a maximally parallel way: in one derivation step, only a non-extendable multiset of rules can be applied. The rules are not allowed to consume the same instance of a symbol twice, which creates competition for objects and may lead to non-deterministic choice between the maximal collections of rules applicable in one step. A computation of a P system is traditionally considered to be a sequence of configurations it can successively visit, stopping at the halting configuration. A halting configuration is a configuration in which no rule can be applied any more, in any membrane. The result of a computation of a P system \(\Pi\) as defined above is the contents of the output membrane \(h_{o}\) projected over the terminal alphabet \(T\). Example 1: Figure 1 shows the graphical representation of the P system formally given by \[\begin{array}{l}\Pi\ =(\{a,b,c,d\},\{a,d\},[_{1}[_{2}]_{2}]_{1},d,ab,R_{1},R_{2},1,1), \\ R_{2}\ =\{a\to aa,b\to b\,(c,out)\},\\ R_{1}\ =\emptyset.\end{array}\] In the maximally parallel mode, the inner membrane 2 of \(\Pi\) will apply as many instances of the rules as possible, thereby doubling the number of \(a\), and ejecting a copy of \(c\) into the surrounding (skin) membrane at each step. The symbol \(d\) in the skin membrane is not used. Therefore, after \(k\) steps of evolution, membrane 2 will contain the multiset \(a^{2^{k}}b\) and membrane 1 the multiset \(c^{k}d\). Since all rules are always applicable in \(\Pi\), this P system never halts. Figure 1: An example of a simple P system. ### Boolean Networks A Boolean variable is a variable which may only have values in the Boolean domain \(\{0,1\}\). Let \(X\) be a finite set of Boolean variables. A state of these variables is any function \(s:X\rightarrow\{0,1\}\), \(s\in\{0,1\}^{X}=S_{X}\), assigning a Boolean value to every single variable. An update function is a Boolean function computing a Boolean value from a state: \(f:S_{X}\rightarrow\{0,1\}\). A Boolean network over \(X\) is a function \(F:S_{X}\to S_{X}\), in which the update function for a variable \(x\in X\) is computed as a projection of \(F\): \(f_{x}(s)=F(s)_{x}\). A Boolean network \(F\) computes trajectories on states by updating its variables according to a (Boolean) mode \(M\subseteq 2^{X}\), defining the variables which should be updated together in a step. Typical examples of modes are the synchronous mode \(\mathit{syn}=\{X\}\) and the asynchronous mode \(\mathit{asyn}=\{\{x\}\ |\ x\in X\}\). 
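The following minimal Python sketch (not from the paper) spells out how successor states are computed under these two modes; it uses the two-variable network of Example 2 below, and all names are illustrative.

```python
# Successor states of a Boolean network under a Boolean mode: each element of
# the mode is a block of variables that are updated together in one step.
def successors(state, update_fns, mode):
    result = set()
    for block in mode:
        new = dict(state)
        for v in block:
            new[v] = update_fns[v](state)   # new values are computed from the old state
        result.add(tuple(sorted(new.items())))
    return result

F = {"x": lambda s: int(not s["x"] and s["y"]),   # f_x = not x and y
     "y": lambda s: int(s["x"] and not s["y"])}   # f_y = x and not y

syn = [set(F)]             # synchronous mode: update all variables at once
asyn = [{v} for v in F]    # asynchronous mode: update one variable at a time

s01 = {"x": 0, "y": 1}
print(successors(s01, F, syn))    # one successor: 10
print(successors(s01, F, asyn))   # two successors: 11 (update x) or 00 (update y)
```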
A trajectory \(\tau\) of a Boolean network under a given mode \(M\) is any finite sequence of states \(\tau=(s_{i})_{0\leq i\leq n}\) such that \(F\) can derive \(s_{i+1}\) from \(s_{i}\) under the mode \(M\). Remark 1: The definitions of modes and evolution are quite different in P systems and Boolean networks. The asynchronous mode in Boolean networks only allows updating one variable at a time, while the asynchronous mode in P systems generally allows any combinations of updates. Furthermore, no halting conditions are generally considered in Boolean networks, and the asymptotic behavior is often looked at as the important part of the dynamics. Example 2: Consider the set of variables \(X=\{x,y\}\) with the corresponding update functions \(f_{x}(x,y)=\bar{x}\wedge y\) and \(f_{y}(x,y)=x\wedge\bar{y}\). Figure 2 shows the possible state transitions of this network under the synchronous and the asynchronous modes. The states are represented as pairs of binary digits, e.g. \(01\) stands for the state in which \(x=0\) and \(y=1\). We notice that, under the synchronous mode, this network exhibits three kinds of behaviors. If initialized to \(00\), it will stay in this state forever--\(00\) is a stable state. If initialized to \(11\), the network will directly converge to \(00\). Finally, if it is initialized to any one of the states \(01\) or \(10\), it will oscillate between them. The synchronous mode yields deterministic behavior. The state transitions are quite different under the asynchronous mode, under which only one variable may be updated at a time. While state \(00\) remains stable, states \(01\) and \(10\) can now oscillate to \(11\), but not directly between them. Moreover, these states can also converge to \(00\), but \(11\) cannot anymore. Figure 2: The synchronous (left) and the asynchronous (right) dynamics of the Boolean network in Example 2. ### Boolean Control Networks (BCN) Boolean networks are often used to represent biological networks in the presence of external perturbations: environmental hazards, drug treatments, etc. (e.g., [5, 6, 24]). To represent network reprogramming, an extension of Boolean networks can be considered: Boolean control networks (BCN) [6]. Informally, a BCN is a parameterized Boolean network template; assigning a Boolean value to every single one of its parameters yields a Boolean network. Formally, a Boolean control network is a function \(F_{U}:S_{U}\rightarrow(S_{X}\to S_{X})\), where the elements of \(U\), \(U\cap X=\emptyset\), are called the control inputs. To every valuation of control inputs, \(F_{U}\) associates a Boolean network. A control \(\mu\) of \(F_{U}\) is any Boolean assignment to the control inputs: \(\mu:U\to\{0,1\}\). While this definition of BCNs is very general, in practice one restricts the impact the control inputs may have on the BCN to some biologically relevant classes. One particularly useful class is that of freeze perturbations, in which a variable in \(X\) is temporarily frozen to \(0\) or to \(1\), independently of its normal update function. These actions are meant to model gene knock-outs and knock-ins. When Boolean update functions are written as propositional formulae, freeze control inputs can be written directly in the formulae. Consider for example a Boolean network \(F\) over \(X=\{x_{1},x_{2}\}\) with the update functions \(f_{1}=x_{1}\wedge x_{2}\) and \(f_{2}=x_{2}\). 
To allow for freezing of \(x_{1}\), we introduce the control variables \(U=\{u_{1}^{0},u_{1}^{1}\}\) into the Boolean formula of \(f_{1}\) in the following way: \(f_{1}^{\prime}=(x_{1}\wedge x_{2})\wedge u_{1}^{0}\vee\overline{u_{1}^{1}}\). Setting \(u_{1}^{0}\) to \(0\) and \(u_{1}^{1}\) to \(1\) freezes \(x_{1}\) to \(0\), independently of the values of \(x_{1}\) and \(x_{2}\). Symmetrically, setting \(u_{1}^{1}\) to \(0\) and \(u_{1}^{0}\) to \(1\) freezes \(x_{1}\) to \(1\). Setting both \(u_{1}^{0}\) and \(u_{1}^{1}\) to \(0\) is generally disallowed. In this paper, we will use two notations to indicate which control inputs correspond to which controlled variable. In the simplest examples in which the variables have no indices, e.g. \(x\) or \(y\), we will directly specify the name of the variable in the subscript of the corresponding control inputs, like so: \(u_{x}^{0}\), \(u_{x}^{1}\), \(u_{y}^{0}\), or \(u_{y}^{1}\). In more general cases, we will refer to the variables by indexed names \(x_{i}\), and we will only specify the respective index as the subscript of the corresponding control inputs: \(u_{i}^{0}\) and \(u_{i}^{1}\). ### Sequential Controllability of BCN In many situations, perturbations of biological networks do not happen once, but rather accumulate or evolve over time [9, 16, 24]. In the language of Boolean control networks, this accumulation can be represented by sequences of controls \((\mu_{1},\ldots,\mu_{n})\). More precisely, take a BCN \(F_{U}\) with the variables \(X\) and the control inputs \(U\), as well as a sequence of controls \(\mu_{[n]}=(\mu_{1},\ldots,\mu_{n})\), with \(\mu_{i}:U\to\{0,1\}\), i.e. \(\mu_{i}\in S_{U}\). This gives rise to a sequence of Boolean networks \((F_{U}(\mu_{1}),\ldots,F_{U}(\mu_{n}))\). Fix a mode \(M\) and consider a sequence of trajectories \((\tau_{1},\ldots,\tau_{n})\) of these Boolean networks. Such a sequence is an evolution of \(F_{U}\) under the sequence of controls \(\mu_{[n]}\) if the last state of every \(\tau_{i}\) is the first state of \(\tau_{i+1}\). In this case, we can speak of the trajectory of the BCN \(F_{U}\) under the control sequence \(\mu_{[n]}\) as the concatenation of the individual trajectories \(\tau_{i}\), in which the last state of every single \(\tau_{i}\) is glued together with the first state of \(\tau_{i+1}\). Given the \(3\)-tuple \((F_{U},S_{\alpha},S_{\omega})\), where \(F_{U}\) is a BCN, \(S_{\alpha}\) is a set of starting states, and \(S_{\omega}\) is a set of target states, the sequence inference problem is the problem of inferring a control sequence driving \(F_{U}\) from each state in \(S_{\alpha}\) to any state in \(S_{\omega}\). This problem was called the CoFaSe problem in [24] and was extensively studied. In particular, it was shown that CoFaSe is PSPACE-hard. Example 3: Consider again the Boolean network from Example 2, with \(X=\{x,y\}\) and the update functions \(f_{x}=\bar{x}\wedge y\) and \(f_{y}=x\wedge\bar{y}\). As mentioned before, a convenient way to express freezing controls is by explicitly including the control inputs into the update functions in the following way: \[f^{\prime}_{x} = (\bar{x}\wedge y)\wedge u^{0}_{x}\vee\overline{u^{1}_{x}},\] \[f^{\prime}_{y} = (x\wedge\bar{y})\wedge u^{0}_{y}\vee\overline{u^{1}_{y}}.\] Notice how setting \(u^{0}_{x}\) to \(0\) essentially sets \(f^{\prime}_{x}=0\), and setting \(u^{1}_{x}\) to \(0\) essentially sets \(f^{\prime}_{x}=1\), independently of the actual value of \(x\) or \(y\). 
Consider now the following \(3\) controls: \[\mu_{1} = \{u^{0}_{x}\gets 1,u^{1}_{x}\gets 1,u^{0}_{y}\gets 1,u^{1}_{y}\gets 1\},\] \[\mu_{2} = \{u^{0}_{x}\gets 0,u^{1}_{x}\gets 1,u^{0}_{y}\gets 1,u^{1}_{y}\gets 1\},\] \[\mu_{3} = \{u^{0}_{x}\gets 1,u^{1}_{x}\gets 1,u^{0}_{y}\gets 1,u^{1}_{y}\gets 0\}.\] Informally, \(\mu_{1}\) does not freeze any variables, \(\mu_{2}\) freezes \(x\) to \(0\), and \(\mu_{3}\) freezes \(y\) to \(1\). Consider now the BCN \(F_{U}\) with the variables \(X=\{x,y\}\) and the controlled update functions \(f^{\prime}_{x}\) and \(f^{\prime}_{y}\). Fix the synchronous update mode. A trajectory of this BCN under the control \(\mu_{1}\)--i.e. a trajectory of \(F_{U}(\mu_{1})\)--is \(\tau_{1}:01\to 10\to 01\). A trajectory of \(F_{U}(\mu_{2})\) is \(\tau_{2}:01\to 00\to 00\); remark that \(00\) is still a stable state of \(F_{U}(\mu_{2})\). A trajectory of \(F_{U}(\mu_{3})\) is \(\tau_{3}:00\to 01\to 11\). We can now glue together the trajectories \(\tau_{1}\), \(\tau_{2}\), and \(\tau_{3}\) by identifying their respective ending and starting states, and we obtain the following trajectory of the BCN \(F_{U}\) under the control sequence \(\mu_{[3]}=(\mu_{1},\mu_{2},\mu_{3})\): \[\tau:01\to 10\to 01\to 00\to 00\to 01\to 11.\] It follows from this construction that \(\mu_{[3]}\) is a solution for the CoFaSe problem \((F_{U},\{01\},\{11\})\). Remark that \(11\) is not reachable from \(01\) under the synchronous mode in the uncontrolled case, as Figure 2 illustrates. Remark 2: We follow the approach from [24] which decorrelates the length of the control sequence from the length of the trajectories it yields. Thus, \(\mu_{[3]}\) can yield trajectories of different lengths greater than or equal to \(3\). From the modeling standpoint, this represents the fact that the time scale on which control inputs are emitted is not necessarily the same as the time scale of the controlled system. ## 3 Boolean P Systems In this section we introduce a new variant of P systems--Boolean P systems--tailored specifically to capture sequential controllability of Boolean networks with as little descriptional overhead as possible. We further tackle the differences between evolution modes in Boolean networks and P systems by introducing quasimodes. Rather than trying to be faithful to the original model of P systems as recalled in Section 2, we here invoke the intrinsic flexibility of the domain to design a variant fitting to our specific use case. ### Formalism Boolean P systems are set rewriting systems. A Boolean state \(s:X\rightarrow\{0,1\}\) is represented as the subset of \(X\) obtained by considering \(s\) as an indicator function: \(\{x\in X\mid s(x)=1\}\). By abuse of notation, we will sometimes use the symbol \(s\) to refer both to the Boolean state and to the corresponding subset of \(X\). A Boolean P system is the following construct: \[\Pi=(V,R),\] where \(V\) is the alphabet of symbols, and \(R\) is a set of rewriting rules with propositional guards. A rule \(r\in R\) is of the form \[r:A\to B\mid\varphi,\] where \(A,B\subseteq V\) and \(\varphi\) is the guard--a propositional formula with variables from \(V\). The rule \(r\) is applicable to a set \(W\subseteq V\) if \(A\subseteq W\) and \(W\in\varphi\), where by abuse of notation we use the same symbol \(\varphi\) to indicate the set of subsets of \(V\) which satisfy \(\varphi\). 
Formally, for \(W\subseteq V\), we denote by \(\varphi(W)\) the truth value of the formula obtained by replacing all variables appearing in \(W\) by \(1\) in \(\varphi\), and by \(0\) all variables from \(V\setminus W\). Then the set of subsets satisfying \(\varphi\) is \(\varphi=\{W\subseteq V\mid\varphi(W)\equiv 1\}\). Applying the rule \(r:A\to B\mid\varphi\) to a set \(W\) results in the set \((W\setminus A)\cup B\). Applying a finite set of separately applicable rules \(\{r_{i}:A_{i}\to B_{i}\mid\varphi_{i}\}\) to \(W\) results in the new set \[\left(W\setminus\bigcup_{i}A_{i}\right)\cup\bigcup_{i}B_{i}.\] Note how this definition excludes competition between the rules, as only individual applicability is checked. Further note that applying a rule multiple times to the same configuration has exactly the same effect as applying it once. In P systems, the set of multisets of rules of \(\Pi\) applicable to a given configuration \(W\) is usually denoted by \(\mathit{Appl}(\Pi,W)\)[11]. Since in Boolean P systems multiple applications of rules need not be considered, we will only look at the set of _sets_ of rules applicable to a given configuration \(W\) of a Boolean P system \(\Pi=(V,R)\), and use the same notation \(\mathit{Appl}(\Pi,W)\). A mode \(M\) of \(\Pi\) will then be a function assigning to any configuration \(W\) of \(\Pi\) a set of sets of rules applicable to \(W\): \(M(W)\subseteq\mathit{Appl}(\Pi,W)\). If \(|M(W)|\leq 1\) for any \(W\subseteq V\), the mode \(M\) is called deterministic6. Otherwise it is called non-deterministic. Footnote 6: More precisely, this is the definition of strong determinism, see [3]. An evolution of \(\Pi\) under the mode \(M\) is a sequence of states \((W_{i})_{0\leq i\leq k}\) with the property that \(W_{i+1}\) is obtained from \(W_{i}\) by applying one of the sets of rules \(R^{\prime}\in M(W_{i})\) prescribed by the mode \(M\) in state \(W_{i}\). This is usually written as \(W_{i}\stackrel{{ R^{\prime}}}{{\longrightarrow}}W_{i+1}\). If no rules are applicable in state \(W_{k}\), it is called the halting state, and \((W_{i})_{0\leq i\leq k}\) is called a halting evolution. Example 4: Take \(V=\{a,b\}\) and consider the following rules \(r_{1}:\{a,b\}\rightarrow\{a\}\mid\mathbf{1}\) and \(r_{2}:\{a\}\rightarrow\emptyset\mid\bar{b}\), where \(\mathbf{1}\) is the Boolean tautology. Construct the Boolean P system \(\Pi=(V,\{r_{1},r_{2}\})\). Informally, \(r_{1}\) removes \(b\) from a configuration which contains \(a\) and \(b\), and \(r_{2}\) removes \(a\) from the configuration which does not already contain \(b\). A possible trajectory of \(\Pi\) under the maximally parallel mode--which applies non-extendable applicable sets of rules--is \(\{a,b\}\rightarrow\{a\}\rightarrow\emptyset\). Note that only \(r_{1}\) is applicable in the first step, since \(r_{2}\) requires the configuration to not contain \(b\). Remark 3: Boolean P systems as defined here are very close to other set rewriting formalisms, and in particular to reaction systems [8]. A reaction system \(\mathcal{A}\) over a set of species \(S\) is a set of reactions (rules) of the form \(a:(R_{a},I_{a},P_{a})\), in which \(R_{a}\subseteq S\) is called the set of reactants, \(I_{a}\subseteq S\) the set of inhibitors, and \(P_{a}\subseteq S\) the set of products. For \(a\) to be applicable to a set \(W\), it must hold that \(R_{a}\subseteq W\) and \(I_{a}\cap W=\emptyset\). Applying such a reaction to \(W\) yields \(P_{a}\), i.e. 
the species which are not explicitly sustained by the reactions disappear. We claim that despite their apparent similarity and tight relationship with Boolean functions, reaction systems are not so good a fit for reasoning about Boolean networks as Boolean P systems. In particular: 1. Reaction systems lack modes and therefore non-determinism, which may appear in Boolean networks under the asynchronous Boolean mode. 2. The rule applicability condition is more powerful in Boolean P systems, and closer to Boolean functions than in reaction systems. 3. Symbols in reaction systems disappear unless sustained by a rule, which represents the degradation of species in biochemistry, but which makes reaction systems harder to use to directly reason about Boolean networks. We recall that our main goal behind introducing Boolean P systems is reasoning about Boolean networks in a more expressive framework. This means that zero-overhead representation of concepts from Boolean networks is paramount. Remark 4: Reaction systems [8] are intrinsically interesting for discussing controllability, because they are defined as open systems from the start, via the explicit introduction of context. Note however that contexts only allow adding symbols to the configuration, not removing them. We refer to [15] for an in-depth discussion of controllability of reaction systems. ### Quasimodes An update function in a Boolean network can always be computed, but a rule in a Boolean P system need not always be applicable. This is the reason behind the difference in the way modes are defined in the two formalisms: in Boolean networks a mode is essentially a set of subsets of update functions, while in Boolean P systems a mode is a function incorporating applicability checks. This means in particular that Boolean network modes are not directly transposable to Boolean P systems. To better bridge the two different notions of modes, we introduce quasimodes. A _quasimode_ \(\tilde{M}\) of a P system \(\Pi=(V,R)\) is any set of sets of rules: \(\tilde{M}\subseteq 2^{R}\). The mode \(M\) corresponding to the quasimode \(\tilde{M}\) is derived in the following way: \[M(W)=\tilde{M}\cap\mbox{\it Appl}(\Pi,W).\] Given a configuration \(W\) of \(\Pi\), \(M\) picks only those sets of rules from \(\tilde{M}\) which are also applicable to \(W\). Thus, instead of explicitly giving the rules to be applied to a given configuration of a P system \(W\), a quasimode advises the rules to be applied. In the rest of the paper, we will say "evolution of \(\Pi\) under the quasimode \(\tilde{M}\)" to mean "evolution of \(\Pi\) under the mode derived from the quasimode \(\tilde{M}\)". ## 4 Boolean P Systems Simulate Boolean Networks Consider a Boolean network \(F\) over the set of variables \(X\), and take a variable \(x\in X\) with its corresponding update function \(f_{x}\). The update function \(f_{x}\) can be simulated by two Boolean P system rules: the rule corresponding to setting \(x\) to \(1\), i.e. introducing \(x\) into the configuration, and the rule corresponding to setting \(x\) to \(0\), i.e. erasing \(x\) from the configuration: \[R_{x}=\{\ \emptyset\rightarrow\{x\}\mid f_{x},\ \ \{x\}\rightarrow\emptyset \mid\overline{f_{x}}\ \ \}.\] Consider now the following Boolean P system: \[\Pi(F)=\left(X,\bigcup_{x\in X}R_{x}\right).\] We claim that \(\Pi(F)\) faithfully simulates \(F\). Theorem 1: _Take a Boolean network \(F\) and a Boolean mode \(M\). 
Then the Boolean P system \(\Pi(F)\) constructed as above and working under the quasimode \(\tilde{M}=\left\{\bigcup_{x\in m}R_{x}\mid m\in M\right\}\) faithfully simulates \(F\): for any evolution of \(F\) under \(M\) there exists an equivalent evolution of \(\Pi(F)\) under \(\tilde{M}\), and conversely, for any evolution of \(\Pi(F)\) under \(\tilde{M}\) there exists an equivalent evolution of \(F\) under \(M\)._ Proof: Take two arbitrary states \(s\) and \(s^{\prime}\) of \(F\) such that \(s^{\prime}\) is reachable from \(s\) by the update prescribed by an element \(m\in M\). Consider now the subsets of variables \(W,W^{\prime}\subseteq X\) defined by \(s\) and \(s^{\prime}\) taken as the respective indicator functions. It follows from the construction of \(\tilde{M}\) that it contains an element \(\tilde{m}\) including the update rules for all the variables of \(m\): \(\tilde{m}=\bigcup_{x\in m}R_{x}\). Therefore, \(\Pi(F)\) can derive \(W^{\prime}\) from \(W\) under the quasimode \(\tilde{M}\). Conversely, consider two subsets of variables \(W,W^{\prime}\subseteq X\) such that \(\Pi(F)\) can derive \(W^{\prime}\) from \(W\) under the update prescribed by an element \(\tilde{m}\in\tilde{M}\). By construction of \(\tilde{M}\), there exists a subset \(m\subseteq X\) such that \(\tilde{m}=\bigcup_{x\in m}R_{x}\). Take now the indicator functions \(s,s^{\prime}:X\rightarrow\{0,1\}\) describing \(W\) and \(W^{\prime}\) respectively. Then \(F\) can derive \(s^{\prime}\) from \(s\) by updating the variables in \(m\). We conclude that the transitions of \(\Pi(F)\) exactly correspond to the transitions of \(F\), which proves the statement of the theorem. Example 5: Consider the Boolean network \(F\) from Example 2: \[\begin{array}{l}f_{x}=\bar{x}\wedge y,\\ f_{y}=x\wedge\bar{y}.\end{array}\] This Boolean network can be translated to the Boolean P system \(\Pi=(V,R)\) with \(V=\{x,y\}\) and the following rules: \[R =R_{x}\cup R_{y},\] \[R_{x}=\{\,\emptyset\rightarrow\{x\}\mid\bar{x}\wedge y,\;\{x\}\rightarrow\emptyset\mid\overline{\bar{x}\wedge y}\;\},\] \[R_{y}=\{\,\emptyset\rightarrow\{y\}\mid x\wedge\bar{y},\;\{y\}\rightarrow\emptyset\mid\overline{x\wedge\bar{y}}\;\}.\] The first rule in \(R_{x}\) ensures that \(x\) is introduced whenever the current state satisfies \(\bar{x}\wedge y=f_{x}\), and the second rule ensures that \(x\) is removed whenever the current state does not satisfy \(\bar{x}\wedge y\). Similarly, the rules in \(R_{y}\) introduce or remove \(y\) depending on whether the current state satisfies \(f_{y}\). To simulate \(F\) under the Boolean synchronous mode, \(\Pi\) should run under the quasimode \(\tilde{M}_{syn}=\{R\}\), i.e. the quasimode allowing all rules in \(R\) to be applied at all times. To simulate \(F\) under the Boolean asynchronous mode, \(\Pi\) should run under the quasimode \(\tilde{M}_{asyn}=\{R_{x},R_{y}\}\), i.e. the quasimode allowing the application of either both rules in \(R_{x}\), or both rules in \(R_{y}\), but not all 4 rules at a time. Remark 5: Incidentally, Boolean P systems also capture reaction systems (see also Remarks 3 and 4). Indeed, consider a reaction \(a=(R_{a},I_{a},P_{a})\) with the reactants \(R_{a}\), inhibitors \(I_{a}\), and products \(P_{a}\). It can be directly simulated by the Boolean P system rule \(\emptyset\to P_{a}\mid\varphi_{a}\), where \(\varphi_{a}=\bigwedge_{x\in R_{a}}x\wedge\bigwedge_{y\in I_{a}}\bar{y}\). 
The degradation of the species in reaction systems can be simulated by adding a rule \(\{x\}\rightarrow\emptyset\mid\mathbf{1}\) for every species \(x\), where \(\mathbf{1}\) is the Boolean tautology. ## 5 Composition of Boolean P Systems In this section, we define the composition of Boolean P systems in the spirit of automata theory. Consider two Boolean P systems \(\Pi_{1}=(V_{1},R_{1})\) and \(\Pi_{2}=(V_{2},R_{2})\). We will call the union of \(\Pi_{1}\) and \(\Pi_{2}\) the Boolean P system \(\Pi_{1}\cup\Pi_{2}=(V_{1}\cup V_{2},R_{1}\cup R_{2})\). Note that the alphabets \(V_{1}\) and \(V_{2}\), as well as the rules \(R_{1}\) and \(R_{2}\) are not necessarily disjoint. To talk about the evolution of \(\Pi_{1}\cup\Pi_{2}\), we first define a variant of Cartesian product of two sets of sets \(A\) and \(B\): \(A\mathbin{\dot{\times}}B=\{a\cup b\mid a\in A,b\in B\}\). We remark now that \[\forall W\subseteq V_{1}\cup V_{2}:\mathit{Appl}(\Pi_{1}\cup\Pi_{2},W)= \mathit{Appl}(\Pi_{1},W)\mathbin{\dot{\times}}\mathit{Appl}(\Pi_{2},W).\] Indeed, since the rules of Boolean P systems do not compete for resources among them, the applicability of any individual rule is independent of the applicability of the other rules. Therefore, the applicability of a set of rules of \(\Pi_{1}\) to a configuration \(W\) is independent of the applicability of a set of rules of \(\Pi_{2}\) to \(W\). For a mode \(M_{1}\) of \(\Pi_{1}\) and a mode \(M_{2}\) of \(\Pi_{2}\), we define their product as follows: \[(M_{1}\times M_{2})(W)=M_{1}(W)\,\dot{\times}\,M_{2}(W).\] The union of Boolean P systems \(\Pi_{1}\cup\Pi_{2}\) together with the product mode \(M_{1}\times M_{2}\) implement parallel composition of the two P systems. In particular, if the alphabets of \(\Pi_{1}\) and \(\Pi_{2}\) are disjoint, the projection of any evolution of \(\Pi_{1}\cup\Pi_{2}\) under the mode \(M_{1}\times M_{2}\) on the alphabet \(V_{1}\) will yield a valid evolution of \(\Pi_{1}\) under \(M_{1}\) (modulo some repeated states), while the projection on \(V_{2}\) will yield a valid evolution of \(\Pi_{2}\) under the mode \(M_{2}\) (modulo some repeated states). Note that this property may not be true if the two alphabets intersect \(V_{1}\cap V_{2}\neq\emptyset\). Quasimodes fit naturally with the composition of modes, as the following lemma shows. Lemma 1: _If the mode \(M_{1}\) can be derived from the quasimode \(\tilde{M}_{1}\) and \(M_{2}\) from the quasimode \(\tilde{M}_{2}\), then the product mode \(M_{1}\times M_{2}\) can be derived from \(\tilde{M}_{1}\,\dot{\times}\,\tilde{M}_{2}\):_ _where a dashed arrow \(\,\dasharrow\,\) from a quasimode to a mode indicates that the mode is derived from the quasimode, and the arrows \(\,\dasharrow\,\) are the respective projections._ Proof: Consider the mode \(M_{12}\) derived from \(\tilde{M}_{1}\,\dot{\times}\,\tilde{M}_{2}\): \[M_{12}(W)=\left(\tilde{M}_{1}\,\dot{\times}\,\tilde{M}_{2}\right)\cap\mbox{ Appl}(\Pi,W).\] Pick an arbitrary element \(m_{12}\in M_{12}(W)\) and remark that it can be seen as a union \(m=m_{1}\cup m_{2}\) where \(m_{1}\) is a subset of applicable rules with the property that \(m_{1}\in\tilde{M}_{1}\), and \(m_{2}\) is a subset of applicable rules with the property that \(m_{2}\in\tilde{M}_{2}\). 
Thus \(m_{1}\in\tilde{M}_{1}\cap\mbox{Appl}(\Pi,W)\) and \(m_{2}\in\tilde{M}_{2}\cap\mbox{Appl}(\Pi,W)\), implying that \[M_{12}(W)\subseteq\left(\tilde{M}_{1}\cap\mbox{Appl}(\Pi,W)\right)\dot{\times}\left(\tilde{M}_{2}\cap\mbox{Appl}(\Pi,W)\right).\] Consider on the other hand an arbitrary \(m_{1}\in\tilde{M}_{1}\cap\mbox{Appl}(\Pi,W)\) and an arbitrary \(m_{2}\in\tilde{M}_{2}\cap\mbox{Appl}(\Pi,W)\). By definition of \(\dot{\times}\), \(m_{1}\cup m_{2}\in\tilde{M}_{1}\,\dot{\times}\,\tilde{M}_{2}\). Remark that every rule in \(m_{1}\) and \(m_{2}\) is individually applicable, meaning that they are also applicable together and that \(m_{1}\cup m_{2}\in\mbox{Appl}(\Pi,W)\). Combining this observation with the reasoning from the previous paragraph we finally derive: \[M_{12}(W)=\left(\tilde{M}_{1}\cap\mbox{Appl}(\Pi,W)\right)\dot{\times}\left(\tilde{M}_{2}\cap\mbox{Appl}(\Pi,W)\right)=M_{1}(W)\,\dot{\times}\,M_{2}(W),\] which implies that \(M_{12}=M_{1}\times M_{2}\) and concludes the proof. ## 6 Boolean P Systems for Sequential Controllability Underlying sequential controllability of Boolean control networks (Section 2.5) is the implicit presence of a master dynamical system emitting the control inputs to the network and thereby driving it. This master system is external with respect to the controlled BCN. The framework of Boolean P systems is sufficiently general to capture both the master system and the controlled BCN in a single homogeneous formalism. In this section, we show how to construct such Boolean P systems for dealing with questions of controllability. Any BCN \(F_{U}:S_{U}\rightarrow(S_{X}\to S_{X})\) can be written as a system of propositional formulae over \(X\cup U\). First, note that a control \(\mu\in S_{U}\) can be described by the conjunction \(\bigwedge_{u\in\mu}u\wedge\bigwedge_{v\in U\setminus\mu}\bar{v}\). Now fix an \(x\in X\) and consider the formula \[\bigvee_{\mu\in S_{U}}\mu\wedge F_{U}(\mu)_{x}, \tag{1}\] in which \(\mu\) enumerates all the conjunctions corresponding to the controls in \(S_{U}\) and \(F_{U}(\mu)_{x}\) is the propositional formula of the update function which \(F_{U}(\mu)\) associates to \(x\). Using (1), we can translate any BCN \(F_{U}:S_{U}\rightarrow(S_{X}\to S_{X})\) into the system of Boolean functions \(F^{\prime}:S_{X\cup U}\to S_{X}\) and use the set \(R_{x}\) from Section 4 to further translate the individual components of \(F^{\prime}\) to pairs of Boolean P system rules. Denote by \(\Pi=(X\cup U,R)\) the Boolean P system whose set of rules is precisely the union of the sets \(R_{x}\) mentioned above, for \(x\in X\). Finally, construct the Boolean P system \(\Pi_{U}=(U,R_{U})\) with the following rules whose guards are always true: \[R_{U} =R_{U}^{0}\cup R_{U}^{1},\] \[R_{U}^{0} =\{\ \{u\}\rightarrow\emptyset\mid\mathbf{1}\ \mid u\in U\,\},\] \[R_{U}^{1} =\{\ \emptyset\rightarrow\{u\}\mid\mathbf{1}\ \mid u\in U\,\}.\] Suppose now that the original BCN \(F_{U}\) runs under the mode \(M\), and consider the corresponding quasimode \(\tilde{M}=\big{\{}\bigcup_{x\in m}R_{x}\mid m\in M\big{\}}\), as well as the quasimode \[\tilde{M}_{U}=\{R_{U}^{0}\}\,\dot{\times}\,2^{R_{U}^{1}}.\] Every element \(m_{U}\in\tilde{M}_{U}\) is a union of \(R_{U}^{0}\) and a subset of \(R_{U}^{1}\), meaning that \(m_{U}\) enables all rules removing the control inputs, and enables _some_ of the rules adding back control inputs. 
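As an illustration of how such rule sets act on configurations, the following minimal Python sketch (not from the paper) implements the guarded set-rewriting step of Section 3 and replays the erase-then-reintroduce behaviour of \(R_{U}^{0}\) and \(R_{U}^{1}\) on a single configuration; the symbol names are illustrative only.

```python
# A guarded rule (A, B, guard) is applicable to W iff A <= W and guard(W) holds;
# the rules of an advised set that are applicable map W to (W - union A) | union B.
def step(W, rules):
    fired = [(A, B) for (A, B, g) in rules if A <= W and g(W)]
    erased = set().union(*(A for A, _ in fired))
    added = set().union(*(B for _, B in fired))
    return (W - erased) | added

always = lambda W: True                      # the guard "1" (tautology)
U = {"u0", "u1"}                             # illustrative control symbols
R_U0 = [({u}, set(), always) for u in U]     # {u} -> {} | 1 : erase every control
reintroduce = [(set(), {"u0"}, always)]      # {} -> {u0} | 1 : put one control back

W = {"x", "u1"}
print(step(W, R_U0 + reintroduce))           # {'x', 'u0'}: u1 erased, u0 reintroduced
```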
We claim that the Boolean P system \(\Pi\cup\Pi_{U}\) running under the quasimode \(\tilde{M}\,\dot{\times}\,\tilde{M}_{U}\) faithfully simulates the BCN \(F_{U}\) running under the mode \(M\). The following theorem formalizes this claim. Theorem 2: _Consider a BCN \(F_{U}\) running under the mode \(M\). Then the Boolean P system \(\Pi\cup\Pi_{U}\) constructed as above and running under the quasimode \(\tilde{M}\,\dot{\times}\,\tilde{M}_{U}\) faithfully simulates \(F_{U}\):_ 1. _For any evolution of_ \(F_{U}\) _under_ \(M\) _there exists an equivalent evolution of_ \(\Pi\cup\Pi_{U}\) _under_ \(\tilde{M}\,\dot{\times}\,\tilde{M}_{U}\)_;_ 2. _For any evolution of_ \(\Pi\cup\Pi_{U}\) _under_ \(\tilde{M}\dot{\times}\tilde{M}_{U}\) _there exists an equivalent evolution of_ \(F_{U}\) _under_ \(M\)_._ Proof: _(1)_ Consider two states \(s,s^{\prime}\in S_{X}\) and a control \(\mu\in S_{U}\) such that \(F_{U}(\mu)\) reaches \(s^{\prime}\) from \(s\) in one step. Take \(W,W^{\prime}\subseteq X\) and \(W_{U}\subseteq U\) by respectively taking \(s,s^{\prime}\), and \(\mu\) as indicator functions. Then, as in Theorem 1, there exists an \(\tilde{m}\in\tilde{M}\) such that \(\Pi\) reaches \(W^{\prime}\cup W_{U}\) from \(W\cup W_{U}\) in one step. This follows directly from the construction of the rules in \(\Pi\) and from the fact that \(W_{U}\) contains exactly the symbols corresponding to the control inputs activated by \(\mu\). Take now \(\tilde{M}\dot{\times}\tilde{M}_{U}\) and remark that its elements are of the form \(\tilde{m}\cup\tilde{m}_{U}\), where \(\tilde{m}_{U}=\tilde{m}_{U}^{1}\cup R_{U}^{0}\) and \(\tilde{m}_{U}^{1}\subseteq R_{U}^{1}\). Under such an element \(\tilde{m}\cup\tilde{m}_{U}\), \(\Pi\cup\Pi_{U}\) reaches a state \(W^{\prime}\cup W_{U}^{\prime}\) from \(W\cup W_{U}\) in one step, where \(W_{U}^{\prime}\) contains the symbols from \(U\) introduced by the rules selected by \(\tilde{m}_{U}^{1}\). Further note that all elements of \(W_{U}\) are always erased by the rules \(R_{U}^{0}\), but may be immediately reintroduced by \(\tilde{m}_{U}^{1}\). Suppose now that \(F_{U}(\mu)\) reaches \(s^{\prime}\) from \(s\) in multiple steps. Then \(\Pi\) reaches \(W^{\prime}\cup W_{U}\) from \(W\cup W_{U}\) in the same number of steps, provided that \(\tilde{m}_{U}^{1}\) is always chosen such that the rules it selects reintroduce exactly the subset \(W_{U}\). If \(F_{U}\) reaches \(s^{\prime}\) from \(s\) in multiple steps, but the control evolves as well, it suffices to choose \(\tilde{m}_{U}^{1}\) such that it introduces the correct control inputs before each step. Finally, the control \(\mu_{0}\) applied in the first step of a trajectory of \(F_{U}\) must be introduced by setting the starting state of \(\Pi\cup\Pi_{U}\) to \(W\cup W_{U}^{0}\), where \(W\) corresponds to the initial state of the trajectory of \(F_{U}\). _(2)_ The converse construction is symmetric. A state \(W\cup W_{U}\) of \(\Pi\cup\Pi_{U}\) is translated into the state \(s\in S_{X}\) and the control \(\mu\in S_{U}\) corresponding to \(W_{U}\). A step of \(\Pi\cup\Pi_{U}\) under \(\tilde{m}\cup\tilde{m}_{U}\) is translated to applying \(\mu\) to \(F_{U}\) and updating the variables corresponding to the rules activated by \(\tilde{m}\). In this way, for any trajectory of \(\Pi\cup\Pi_{U}\) under the quasimode \(\tilde{M}\dot{\times}\tilde{M}_{U}\) there exists a corresponding trajectory in the controlled dynamics of \(F_{U}\). 
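The rule pairs \(R_{x}\) underlying this construction (and the uncontrolled one of Section 4) are easy to exercise directly. The following minimal Python sketch (not from the paper) builds them for the update functions of Example 5 and reproduces one synchronous step; representing guards as Python functions is an assumption of the sketch.

```python
# Translate an update function f_x into the pair of guarded rules
#   {} -> {x} | f_x      and      {x} -> {} | not f_x .
def rules_for(x, f_x):
    return [(set(), {x}, f_x),
            ({x}, set(), lambda W: not f_x(W))]

R_x = rules_for("x", lambda W: "x" not in W and "y" in W)   # f_x = not x and y
R_y = rules_for("y", lambda W: "x" in W and "y" not in W)   # f_y = x and not y

W = {"y"}                                                   # the state 01
fired = [(A, B) for (A, B, g) in R_x + R_y if A <= W and g(W)]
W1 = (W - set().union(*(A for A, _ in fired))) | set().union(*(B for _, B in fired))
print(W1)   # {'x'}: the state 10, matching 01 -> 10 of the synchronous dynamics
```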
We now give an extensive example showing how the composite system \(\Pi\cup\Pi_{U}\) from the proof above is constructed for a concrete BCN, and detailing how \(\Pi\cup\Pi_{U}\) simulates its sequentially controlled trajectories. Example 6: Consider the BCN \(F_{U}\) from Example 3 with the following update functions modified to include the control inputs: \[f_{x}^{\prime} =(\bar{x}\wedge y)\wedge u_{x}^{0}\vee\overline{u_{x}^{1}},\] \[f_{y}^{\prime} =(x\wedge\bar{y})\wedge u_{y}^{0}\vee\overline{u_{y}^{1}},\] and recall that \(X=\{x,y\}\) and \(U=\{u_{x}^{0},u_{x}^{1},u_{y}^{0},u_{y}^{1}\}\). Since the control inputs are already explicitly present in the propositional formulae, we can put these together directly to obtain \(F^{\prime}:S_{X\cup U}\to S_{X}\), bypassing equation 1. _Construction of \(\Pi\cup\Pi_{U}\)._ First construct the Boolean P system \(\Pi=(X\cup U,R)\) with the following rules: \[R =R_{x}\cup R_{y},\] \[R_{x} =\{\;\emptyset\rightarrow\{x\}\;|\;f^{\prime}_{x},\;\{x\} \rightarrow\emptyset\;|\;\overline{f^{\prime}_{x}}\;\},\] \[R_{y} =\{\;\emptyset\rightarrow\{y\}\;|\;f^{\prime}_{y},\;\{y\} \rightarrow\emptyset\;|\;\overline{f^{\prime}_{y}}\;\}.\] Now, define \(\Pi_{U}=(U,R_{U})\) with the following rules: \[R_{U} =R_{U}^{0}\cup R_{U}^{1},\] \[R_{U}^{0} =\{\;\{u_{x}^{0}\}\rightarrow\emptyset\;|\;\mathbf{1},\;\{u_{x}^ {1}\}\rightarrow\emptyset\;|\;\mathbf{1},\;\{u_{y}^{0}\}\rightarrow\emptyset \;|\;\mathbf{1},\;\{u_{y}^{1}\}\rightarrow\emptyset\;|\;\mathbf{1}\;\},\] \[R_{U}^{1} =\{\;\emptyset\rightarrow\{u_{x}^{0}\}\;|\;\mathbf{1},\; \emptyset\rightarrow\{u_{x}^{1}\}\;|\;\mathbf{1},\;\emptyset\rightarrow\{u_{ y}^{0}\}\;|\;\mathbf{1},\;\emptyset\rightarrow\{u_{y}^{1}\}\;|\;\mathbf{1}\;\}.\] Suppose that \(F_{U}\) runs under the synchronous mode. This is translated into the quasimode \(\tilde{M}_{syn}=\{R\}\) for the Boolean P system \(\Pi\). The quasimode \(\tilde{M}_{U}\) for \(\Pi_{U}\) will be as follows: \[\tilde{M}_{U}=\{R_{U}^{0}\cup\tilde{m}_{U}^{1}\;|\;\tilde{m}_{U}^{1}\subseteq R _{U}^{1}\}.\] Finally, the composite P system \(\Pi\cup\Pi_{U}\) will run under the following quasimode: \[\tilde{M}\,\dot{\times}\,\tilde{M}_{U}=\{R\cup R_{U}^{0}\cup\tilde{m}_{U}^{1} \;|\;\tilde{m}_{U}^{1}\subseteq R_{U}^{1}\}.\] _Simulation of sequential control._ The 3 controls introduced in Example 3 can be written as sets in the following way: \[\mu_{1} =\{u_{x}^{0},u_{x}^{1},u_{y}^{0},u_{y}^{1}\},\] \[\mu_{2} =\{\;\;\;\;\;u_{x}^{1},u_{y}^{0},u_{y}^{1}\},\] \[\mu_{3} =\{u_{x}^{0},u_{x}^{1},u_{y}^{0}\;\;\;\;\;\}.\] The trajectory \(\tau_{1}:01\to 10\to 01\) of \(F_{U}(\mu_{1})\) will be simulated as the following evolution of \(\Pi\cup\Pi_{U}\): \[\{y\}\cup\mu_{1}\rightarrow\{x\}\cup\mu_{1}\rightarrow\{y\}\cup\mu_{1},\] where the rules to be applied in each transition are picked from the set \(R\cup R_{U}^{0}\cup R_{U}^{1}\in\tilde{M}\,\dot{\times}\,\tilde{M}_{U}\). Note how \(\mu_{1}\) is explicitly included as a set of symbols in the configuration of the composite Boolean P system \(\Pi\cup\Pi_{U}\). Similary, the trajectory \(\tau_{2}:01\to 00\to 00\) of \(F_{U}(\mu_{2})\) will be simulated as follows: \[\{y\}\cup\mu_{2}\rightarrow\emptyset\cup\mu_{2}\rightarrow\emptyset\cup\mu_{2},\] where the rules to be applied in each transition are picked from the set \(R\cup R_{U}^{0}\cup\{\,\emptyset\rightarrow\{u\}\;|\;\mathbf{1}\;\;|\;u\in \mu_{2}\}\in\tilde{M}\,\dot{\times}\,\tilde{M}_{U}\). 
Note how all symbols corresponding to control inputs are removed at every step, and then specifically the control inputs from \(\mu_{2}\) are reintroduced. Finally, the trajectory \(\tau_{3}:00\to 01\to 11\) of \(F_{U}(\mu_{3})\) will be simulated as follows by \(\Pi\cup\Pi_{U}\): \[\emptyset\cup\mu_{3}\rightarrow\{y\}\cup\mu_{3}\rightarrow\{x,y\}\cup\mu_{3}.\] To simulate the final trajectory under the control sequence \(\mu_{[3]}=(\mu_{1},\mu_{2},\mu_{3})\), we glue together the final and the initial states of the above simulations, always anticipating the control from the subsequent simulation: \[\{y\}\cup\mu_{1}\rightarrow\{x\}\cup\mu_{1}\rightarrow\underline{\{y\}\cup\mu_{2}}\rightarrow\emptyset\cup\mu_{2}\rightarrow\underline{\emptyset\cup\mu_{3}}\rightarrow\{y\}\cup\mu_{3}\rightarrow\{x,y\}\cup\mu_{3}.\] Underlined elements are the states in which the control inputs change. Thus, the transition \(\{x\}\cup\mu_{1}\rightarrow\underline{\{y\}}\cup\mu_{2}\) for example is governed by the set of rules \(R\cup R^{0}_{U}\cup\{\emptyset\rightarrow\{u\}\mid\mathbf{1}\mid u\in\mu_{2}\}\in\tilde{M}\,\dot{\times}\,\tilde{M}_{U}\) already, instead of \(R\cup R^{0}_{U}\cup R^{1}_{U}\) which was used in the first step. The component \(\Pi_{U}\) in the composite P system of Theorem 2 and Example 6 is an explicit implementation of the master dynamical system driving the evolution of the controlled system \(\Pi\). The setting of this theorem captures the situation in which the control can change at any moment, but \(\Pi_{U}\) can be designed to implement other kinds of control sequences. We give the construction ideas for the kinds of sequences introduced in [24]: * _Total Control Sequence (TCS):_ all controllable variables are controlled at all times. The quasimode of \(\Pi_{U}\) will be correspondingly defined to always freeze the controlled variables: \(\tilde{M}_{U}=\{R^{0}_{U}\}\,\dot{\times}\,2^{P^{1}_{U}}\), where \(P^{1}_{U}\subseteq R^{1}_{U}\) with the property that for every \(x_{i}\in X\) every set \(p\in P^{1}_{U}\) either introduces \(u^{0}_{i}\) or \(u^{1}_{i}\), but not both. * _Abiding Control Sequence (ACS):_ once controlled, a variable stays controlled forever, but the value to which it is controlled may change. The rules of \(\Pi_{U}\) will be constructed to never erase the control symbols which have already been introduced, but will be allowed to change the value to which the corresponding controlled variable will be frozen: \(R_{U}=R^{1}_{U}\cup P_{U}\), with the new set of rules defined as follows: \[P_{U}=\left\{\ \{u^{a}_{i}\}\rightarrow\{u^{b}_{i}\}\mid\mathbf{1}\ \mid x_{i}\in X,\,a,b\in\{0,1\}\right\}.\] \(\Pi_{U}\) will be able to rewrite some of the control symbols, or to introduce new control symbols: \(\tilde{M}_{U}=2^{R_{U}}\). ## 7 Reachability in Boolean P Systems In this section we focus on reachability in Boolean P systems, which we define in the following way: given a Boolean P system \(\Pi\), a mode \(M\) (or a quasimode \(\tilde{M}\)), a set of starting states \(S_{\alpha}\) and a set of target states \(S_{\omega}\), decide whether an evolution of \(\Pi\) exists under the mode \(M\) (or the quasimode \(\tilde{M}\)) driving it from each state in \(S_{\alpha}\) to some state in \(S_{\omega}\). We refer to such a decision problem by the 4-tuple \((\Pi,\mathcal{M}^{\dagger},S_{\alpha},S_{\omega})\), where \(\mathcal{M}^{\dagger}\) may be a mode or a quasimode. In the rest of the paper, we will mainly deal with reachability under quasimodes. 
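For small instances, the reachability question just defined can be explored directly. The following minimal Python sketch (not from the paper) decides it by breadth-first search under a quasimode, firing in each advised set of rules exactly its individually applicable rules, as in the simulations of Sections 4 and 6; unlike the polynomial-space procedure used later in the proof of Lemma 3, it stores all visited configurations and therefore uses exponential space in \(|V|\).

```python
# Brute-force reachability (Pi, M~, S_alpha, S_omega) for small Boolean P systems.
# Rules are triples (A, B, guard); a quasimode is a list of lists of rules.
from collections import deque

def step(W, rules):
    # Fire exactly those rules of the advised set that are applicable to W.
    fired = [(A, B) for (A, B, g) in rules if A <= W and g(W)]
    erased = set().union(*(A for A, _ in fired))
    added = set().union(*(B for _, B in fired))
    return (W - erased) | added

def reachable(quasimode, S_alpha, S_omega):
    targets = {frozenset(t) for t in S_omega}
    for start in S_alpha:
        seen, queue, hit = {frozenset(start)}, deque([frozenset(start)]), False
        while queue:
            W = queue.popleft()
            if W in targets:
                hit = True
                break
            for advised in quasimode:
                nxt = frozenset(step(set(W), advised))
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        if not hit:
            return False          # some start state cannot reach S_omega
    return True
```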
Remark 6: Unlike the CoFaSe problem in which the synchronous mode is implicitly assumed, we explicitly include here the mode or the quasimode into the reachability problem. Indeed, the size of the quasimode may be as much as exponential in the number of symbols, while the complexity of a mode may be even bigger, since it depends on the current configuration. Furthermore, the mode choice impacts the answer of the problem. For example, the problem under the quasimode \(\tilde{M}=\emptyset\) has a solution if and only if \(S_{\alpha}\subseteq S_{\omega}\). In this section we will show that the reachability problem for Boolean P systems is PSPACE-complete. We start by showing that this reachability problem is at least as hard as LBA-ACCEPTANCE. Lemma 2: LBA-ACCEPTANCE _is reducible in polynomial time to reachability for Boolean P systems._ Proof: We will first show how to construct a Boolean P system simulating a given LBA, and will then evaluate the size complexity of the construction. Construction: Let \(\mathcal{M}=(Q,V,T_{1},T_{2},\delta,q_{0},q_{1},Z_{l},B,Z_{r})\) be an LBA and \(x\in T_{1}^{*}\) an input word of length \(n\). We construct in polynomial time a Boolean P system \(\Pi=(\tilde{V},R)\) that simulates the computation of \(\mathcal{M}\) on the input \(x\). The alphabet of \(\Pi\) contains the following symbols: \[\tilde{V}=\{A_{v,j}\mid v\in V,\;0\leq j\leq n+1\}\cup\{C_{q,j}\mid q\in Q,\;0\leq j\leq n+1\},\] where the symbols \(A_{v,j}\) describe which symbols appear in which tape cells of \(\mathcal{M}\) and the symbols \(C_{q,j}\) describe the position and the state of the LBA head. More precisely: * \(A_{v,j}\) represents the situation in which cell \(j\) contains the symbol \(v\), * \(C_{q,j}\) represents the situation in which the head is on cell \(j\) and in state \(q\). We construct the rules of \(\Pi\) as the union \(R=\bigcup_{\rho\in\delta}R_{\rho}\), where each instruction \(\rho=(q,X;p,Y,D)\) of \(\mathcal{M}\) is simulated by a set of Boolean P system rules in the following way, depending on the direction of the movement of the head: \[\begin{array}{l}D=R:R_{(q,X;p,Y,R)}=\{\,\{A_{X,j},C_{q,j}\}\to\{A_{Y,j},C_{p,j+1}\}\mid\mathbf{1}\;\mid 0\leq j\leq n\,\},\\ D=S:R_{(q,X;p,Y,S)}=\{\,\{A_{X,j},C_{q,j}\}\to\{A_{Y,j},C_{p,j}\}\mid\mathbf{1}\;\mid 0\leq j\leq n+1\,\},\\ D=L:R_{(q,X;p,Y,L)}=\{\,\{A_{X,j},C_{q,j}\}\to\{A_{Y,j},C_{p,j-1}\}\mid\mathbf{1}\;\mid 1\leq j\leq n+1\,\}.\end{array}\] The evolution of \(\Pi\) is governed by the quasimode \(\tilde{M}=\{R\}\). Due to the form of the left-hand sides of the rules above, if the current state contains exactly one state symbol of the form \(C_{q,j}\), at most one rule in \(R\) will be applicable. We finally define the singleton set of target states: \[S_{\omega}=\{\{A_{B,j}\;\mid 1\leq j\leq n\}\cup\{C_{q_{1},0},A_{Z_{l},0},A_{Z_{r},n+1}\}\}.\] The only state appearing in \(S_{\omega}\) therefore corresponds to the halting configuration of \(\mathcal{M}\) in which all tape cells are blank except cells \(0\) and \(n+1\) which contain the left and right end delimiters \(Z_{l}\) and \(Z_{r}\) respectively, and the head is on cell \(0\) and in state \(q_{1}\). It is a direct consequence of the definition of the rules in \(R\) that the LBA \(\mathcal{M}\) accepts a word \(x=v_{1}v_{2}\ldots v_{n}\) of length \(n\) if and only if the reachability problem \((\Pi,\tilde{M},\{s_{x}\},S_{\omega})\) has a solution, where \(s_{x}=\{A_{v_{j},j}\mid 1\leq j\leq n\}\cup\{A_{Z_{l},0},A_{Z_{r},n+1},C_{q_{0},1}\}\). 
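To make the rule families above concrete, here is a minimal Python sketch (not part of the proof) that generates \(R_{\rho}\) for a right-moving instruction \(\rho=(q,X;p,Y,R)\); the symbol naming scheme is an assumption of the sketch.

```python
# One LBA instruction (q, X; p, Y, R) yields n+1 guarded rules, one per head
# position j = 0..n; the guard is always true, as in the construction above.
def rules_right_move(q, X, p, Y, n):
    always = lambda W: True
    return [({f"A_{X},{j}", f"C_{q},{j}"},
             {f"A_{Y},{j}", f"C_{p},{j + 1}"},
             always)
            for j in range(n + 1)]

for A, B, _ in rules_right_move("q", "a", "p", "b", 3):
    print(sorted(A), "->", sorted(B))
# e.g. ['A_a,0', 'C_q,0'] -> ['A_b,0', 'C_p,1']  ...  ['A_a,3', 'C_q,3'] -> ['A_b,3', 'C_p,4']
```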
Complexity.The number of symbols in \(\Pi\) is \(|\tilde{V}|=(n+2)(|V|+|Q|)\) and the number of rules is \(|R|=\mathcal{O}(n|V||Q|)\), so the Boolean P system \(\Pi\) can be constructed in time \(\mathcal{O}(n|V||Q|)\). Since \(\tilde{M}\) is a singleton and its only element is of cardinal \(|R|=\mathcal{O}(n|V||Q|)\), the quasimode can be constructed in time \(\mathcal{O}\left(n|V||Q|\cdot\log(n|V||Q|)\right)\)--roughly, the number of rules times the number of bits necessary to describe a rule. Because there is only one starting state and one target state, and since a state can be described by a sequence of \(n+3\) symbols (\(n+2\) for the tape and \(1\) for the state of the head), the whole description \((\Pi,\tilde{M},S_{\alpha},S_{\omega})\) can be constructed in the following time: \[\mathcal{O}\left(n|V||Q|\cdot\log(n|V||Q|)\right)=\mathcal{O}\left((n|V||Q|)^ {2}\right).\] This expression is polynomial in the size of the specification of \(\mathcal{M}\) and in the length \(n\) of the input \(x\), which concludes the proof. We will now show the symmetrical statement that reachability in Boolean P systems is at most as hard as LBA-ACCEPTANCE. Lemma 3: _Reachability for Boolean P systems is in_ PSPACE_._ Proof: We will prove that reachability for Boolean P systems is in NPSPACE, which implies the required statement by Savitch's theorem [29]. Let \((\Pi,\tilde{M},S_{\alpha},S_{\omega})\), with \(\Pi=(V,R)\), be an instance of the reachability problem. Algorithm 1 is a non-deterministic algorithm that solves this problem in polynomial space. The function \(\textit{UPDATE}_{\Pi}\) takes a state \(s\) of \(\Pi\) and an element of a quasimode \(m\in\tilde{M}\), and returns the state updated according to the rules \(R\) of \(\Pi\) and the chosen element of the quasimode, as defined in Section 3.2. Since the number of possible states of \(\Pi\) is \(2^{|V|}\), the shortest evolution between two states is of length at most \(2^{|V|}\), if it exists. Algorithm 1 therefore non-deterministically tests all possible evolutions of length at most \(2^{|V|}\), starting from all states in \(S_{\alpha}\). At the end _Reachable_ gets the value _true_ if and only if a state in \(S_{\omega}\) can be reached from every state in \(S_{\alpha}\), which ensures the correctness of the algorithm. This algorithm runs in polynomial space in the size of the reachability problem. Note that several states, a counter up to \(2^{|V|}\), and \(|S_{\alpha}|\) Boolean flags are stored, all of which takes up \(\mathcal{O}(|V|+|S_{\alpha}|)\) space. Furthermore, the function \(\textit{UPDATE}_{\Pi}\) can be evaluated in polynomial space. Indeed, to determine the set of applicable rules in a state \(s\), one needs to check for each rule if the guard is true and if the left part of the rule is present in \(s\). Both operations, the evaluation of a Boolean function and a comparison, can be carried out in polynomial space with respect to \(|V|\). Only the rules in \(m\) are then applied, and these applications can be carried out in polynomial space with respect to \(|V|\) and \(|R|\). Remark 7: The argument of Lemma 3 focuses on reachability under quasimodes. This argument can be trivially extended to modes derivable from quasimodes, and more generally to any mode for which non-deterministically picking a set \(m\) of rules to apply can be done in polynomial space. The following theorem brings together Lemmas 2 and 3 to show the main result with respect to the complexity of reachability. 
Theorem 3: _Reachability for Boolean P systems is \(\mathsf{PSPACE}\)-complete._ ## 8 Complexity of Sequential Controllability In this section we first extend the CoFaSe problem with some additional details necessary to properly reason about its complexity, and then show that sequential controllability of BCN is \(\mathsf{PSPACE}\)-complete. ### CoFaSe and Control Modes Theorem 2 shows that Boolean P systems can directly simulate Boolean networks together with the master control system, and Theorem 3 shows that reachability for Boolean P systems is \(\mathsf{PSPACE}\)-complete. Nevertheless, we cannot immediately conclude that CoFaSe is \(\mathsf{PSPACE}\)-complete because of the role modes and quasimodes play in evaluating the size of the problem. Consider a BCN \(F_{U}\) with the variables \(X\) and the control inputs \(U\), and recall that the CoFaSe problem is given by the triple \((F_{U},S_{\alpha},S_{\omega})\), where \(S_{\alpha},S_{\omega}\subseteq S_{X}\) are the sets of starting and target states respectively. The simulating Boolean P system \(\Pi\cup\Pi_{U}\) constructed in Theorem 2 uses the quasimode \[\tilde{M}_{U}=\{R_{U}^{0}\}\,\dot{\times}\,2^{R_{U}^{1}},\] for which \(|\tilde{M}_{U}|=2^{|U|}\), meaning that the size of the reachability problem for \(\Pi\cup\Pi_{U}\) is always exponential in the size of \(U\), independently of the sizes of the individual elements of the triple \((F_{U},S_{\alpha},S_{\omega})\)7. As a consequence, directly combining Theorems 2 and 3 is not guaranteed to yield a polynomial bound on space in terms of the size of the CoFaSe problem \((F_{U},S_{\alpha},S_{\omega})\). Footnote 7: In general, the description of \(F_{U}\) is of size \(\mathcal{O}(2^{|X||U|})\), because some Boolean functions may require an exponential number of Boolean connectors \(\land\), \(\lor\), \(\bar{\cdot}\) to be represented. \(S_{\alpha}\) and \(S_{\omega}\) are of size \(\mathcal{O}(|X|)\) by their definition. In practice, however, the sizes of these entities are often well under the respective upper bounds [6, 24]. We believe that the correct way to deal with this issue is to include a specification of the master system emitting the controls into the description of the problem of sequential controllability. Indeed, CoFaSe is formulated for the situation in which the control can change at any moment [24], and this information is not explicitly included in its definition, while it is explicitly present in the P system \(\Pi\cup\Pi_{U}\) from Theorem 2. We propose to describe the possible changes in controls by defining a relation on \(2^{U}\)--the control mode. A _control mode_ for a BCN \(F_{U}\) is a relation \(\mathcal{R}_{U}\subseteq 2^{U}\times 2^{U}\) describing the possible evolutions of control inputs. More precisely, consider the following trajectory of the BCN \(F_{U}\): \[s_{1}\xrightarrow{F_{U}(\mu_{1})}s_{2}\xrightarrow{F_{U}(\mu_{2})}s_{3}\xrightarrow{F_{U}(\mu_{3})}\ldots\xrightarrow{F_{U}(\mu_{n})}s_{n+1}.\] This trajectory complies with the control mode \(\mathcal{R}_{U}\) if and only if \((\mu_{i},\mu_{i+1})\in\mathcal{R}_{U}\), for every \(1\leq i<n\). Example 7: Control modes naturally capture the types of control sequences given at the end of Section 6 and initially discussed in [23].
To streamline the definitions of the corresponding control modes, we introduce the following helper function: \[idx:2^{U}\to 2^{\{1,\ldots,|U|\}},\quad\textit{idx}(\mu)=\{i\mid u_{i}^{\star}\in\mu,\star\in\{0,1\}\}.\] In other words, _idx_ produces the set of control input indices appearing in a control \(\mu\), irrespective of the nature of the control input (freeze to \(0\) or freeze to \(1\)). We can now define the control mode \(\mathcal{R}_{U}^{\,\mathit{TCS}}\) capturing Total Control Sequences (TCS) as follows: \[\forall\mu,\nu\in 2^{U}:(\mu,\nu)\in\mathcal{R}_{U}^{\,\mathit{TCS}}\iff\textit{idx}(\mu)=\textit{idx}(\nu)=\textit{idx}(U).\] Informally, \(\mathcal{R}_{U}^{\mathit{TCS}}\) includes all those pairs of controls which act on every single controlled variable by activating one of the corresponding control inputs. Similarly, in the case of Abiding Control Sequences (ACS), the control mode \(\mathcal{R}_{U}^{\mathit{ACS}}\) can be defined as follows: \[\forall\mu,\nu\in 2^{U}:(\mu,\nu)\in\mathcal{R}_{U}^{\mathit{ACS}}\iff\mathit{idx}(\mu)\subseteq\mathit{idx}(\nu).\] Thus, \(\mathcal{R}_{U}^{\mathit{ACS}}\) only allows a transition from \(\mu\) to \(\nu\) if \(\nu\) acts at least on the same controlled variables as \(\mu\). Note that \(\nu\) is allowed to change the value to which a controlled variable \(x_{i}\) is frozen by replacing \(u_{i}^{0}\) by \(u_{i}^{1}\) or vice versa. We now define an extension of CoFaSe to capture sequential controllability of BCN in a more general framework. The \(\mathsf{SEQ}\)-\(\mathsf{CONTROL}\) problem is given by the 5-tuple \((F_{U},M,\mathcal{R}_{U},S_{\alpha},S_{\omega})\) and consists in deciding whether for every starting state in \(S_{\alpha}\) there exists an initial control \(\mu_{0}\) and a trajectory of the BCN \(F_{U}\) under the mode \(M\) and the control mode \(\mathcal{R}_{U}\) ending up in a target state from \(S_{\omega}\). \(\mu_{0}\) must appear as the first term in at least one pair of \(\mathcal{R}_{U}\): \(\exists\nu\subseteq U:(\mu_{0},\nu)\in\mathcal{R}_{U}\). Example 8: Consider the Boolean network \(F_{U}\) described in Figure 3, as well as the controls \(\mu_{110}=\{u_{1}^{1},u_{2}^{1}\}\), freezing both \(x_{1}\) and \(x_{2}\) to \(1\), and \(\mu_{\emptyset}=\emptyset\). If \((\mu_{110},\mu_{110}),(\mu_{110},\mu_{\emptyset})\in\mathcal{R}_{U}\) then the following trajectory is possible: \[000\xrightarrow{F_{U}(\mu_{110})}110\xrightarrow{F_{U}(\mu_{110})}111\xrightarrow{F_{U}(\mu_{\emptyset})}001.\] Suppose now that only freezing \(x_{1}\) or \(x_{2}\) separately is permitted, i.e. \(i\in\mathit{idx}(\mu)\cap\{1,2\}\implies\mathit{idx}(\mu)=\{i\}\), for any \(\mu\) appearing in a pair in \(\mathcal{R}_{U}\). In this case, \(F_{U}\) can reach \(100\) or \(010\) from \(000\) by respectively controlling \(x_{1}\) or \(x_{2}\) to \(1\). The following 3 scenarios are possible afterwards: 1. maintain the control of \(x_{1}\) or \(x_{2}\) and stay in the same state; 2. freeze the other variable--\(x_{2}\) if \(x_{1}\) was controlled or \(x_{1}\) if \(x_{2}\) was controlled, and switch to the other state--\(010\) or \(100\) respectively; 3. release all controls and go back to \(000\). In any of these cases, \(F_{U}\) is not able to reach \(001\) with the above restriction on the control mode. Figure 3: The update functions of the Boolean network from Example 8 (left) as well as its uncontrolled synchronous dynamics (right).
Finally, suppose that once \(\mu_{110}\) is employed, it must be maintained for the rest of the trajectory, i.e. \((\mu_{110},\nu)\in\mathcal{R}_{U}\implies\nu=\mu_{110}\). The previous paragraph shows that the only way for \(F_{U}\) to leave the connected component consisting of the states \(\{000,100,010\}\) while starting from \(000\) is to apply \(\mu_{110}\). On the other hand, since \(\mu_{110}\) cannot be deactivated once applied, this means that \(F_{U}\) cannot reach \(001\) from \(000\) with this restriction on the control mode. ### Seq-Control and CoFaSe Are \(\mathsf{PSPACE}\)-complete We start by combining Theorems 2 and 3 to characterize the complexity of \(\mathsf{SEQ}\)-CONTROL. Theorem 4: \(\mathsf{SEQ}\)-CONTROL _is \(\mathsf{PSPACE}\)-complete._ Proof: \(\mathsf{SEQ}\)-CONTROL is \(\mathsf{PSPACE}\)-hard, since by taking \(U=\emptyset\) it is reduced to the problem of reachability for Boolean networks, known to be \(\mathsf{PSPACE}\)-complete [7, 23]. Let now \((F_{U},M,\mathcal{R}_{U},S_{\alpha},S_{\omega})\) be an instance of \(\mathsf{SEQ}\)-CONTROL and consider the following set of rules: \[R_{U}=\{\,\mu_{1}\to\mu_{2}\mid\mathbf{1}\,\mid(\mu_{1},\mu_{2})\in\mathcal{R}_{U}\,\},\] as well as the quasimode \(\tilde{M}_{U}=\{\{r\}\mid r\in R_{U}\}\). The Boolean P system \(\Pi_{U}=(U,R_{U})\) running under the quasimode \(\tilde{M}_{U}\) will therefore simulate the changes in controls allowed by the control mode \(\mathcal{R}_{U}\). We can now construct the reachability problem \((\Pi\cup\Pi_{U},\tilde{M}\dot{\times}\tilde{M}_{U},S_{\alpha},S_{\omega})\) in the same way as in Theorem 2. The entire construction, including that of \(R_{U}\), happens in polynomial time with respect to the size of the initial instance of \(\mathsf{SEQ}\)-CONTROL. This allows us to conclude the proof by invoking the fact that reachability in Boolean P systems is \(\mathsf{PSPACE}\)-complete (Theorem 3). As explained in the previous section, \(\mathsf{SEQ}\)-CONTROL being \(\mathsf{PSPACE}\)-complete does not immediately imply that CoFaSe is \(\mathsf{PSPACE}\)-complete, since translating from CoFaSe to \(\mathsf{SEQ}\)-CONTROL may require an exponential increase in space. However, it is possible to directly prove that CoFaSe is in \(\mathsf{PSPACE}\) by using a variation of Algorithm 1 from Lemma 3. Theorem 5: _CoFaSe is \(\mathsf{PSPACE}\)-complete._ Proof: Similarly to the proof of Lemma 3, we show here a non-deterministic polynomial-space algorithm solving the instance of CoFaSe given by the triple \((F_{U},S_{\alpha},S_{\omega})\): Algorithm 2. Algorithm 2 has very similar properties to Algorithm 1. Note that no requirements on the values of control inputs are imposed in the CoFaSe problem, meaning that only the state space \(S_{X}\) needs to be explored, excluding the control inputs. Since \(|S_{X}|=2^{|X|}\), exploring trajectories of length at most \(2^{|X|}\) is sufficient to conclude about the reachability of a state in \(S_{\omega}\) for all states in \(S_{\alpha}\). Algorithm 2 stores a constant number of intermediate states and controls, a counter up to \(2^{|X|}\), and \(|S_{\alpha}|\) Boolean flags, all of which take up \(\mathcal{O}(|X|+|U|+|S_{\alpha}|)\) space. Furthermore, \(F_{U}\) can be computed in polynomial space in \(|X|\) and \(|U|\), meaning that Algorithm 2 requires polynomial space in the size of the triple \((F_{U},S_{\alpha},S_{\omega})\). Finally, we conclude the proof by invoking Savitch's theorem [29], stating that \(\mathsf{NPSPACE}=\mathsf{PSPACE}\).
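Before moving to the conclusion, the following minimal Python sketch illustrates the \(\mathsf{SEQ}\)-CONTROL question in an explicit-state manner. It is not Algorithm 2 (which is a non-deterministic polynomial-space search): this brute-force exploration stores all visited (state, control) pairs and therefore uses exponential space. The representation of \(F_{U}\) as a callable, the encoding of control inputs as tuples, and the restriction to trajectories of length at least one are our own illustrative assumptions.

```python
def idx(mu):
    """Indices of controlled variables in a control mu given as a set of ('u', i, b) tuples."""
    return {i for (_, i, _) in mu}

def acs_pair(mu, nu):
    """(mu, nu) belongs to the ACS control mode iff idx(mu) is a subset of idx(nu)."""
    return idx(mu) <= idx(nu)

def seq_control_holds(F_U, R_U, S_alpha, S_omega):
    """F_U(state, mu) -> next state; R_U: set of control pairs (mu, nu), with controls and
    states hashable (e.g. frozensets); S_alpha, S_omega: sets of states."""
    first = {mu for (mu, _) in R_U}                     # admissible initial controls mu_0
    succ = {}                                           # mu -> controls allowed after mu
    for (mu, nu) in R_U:
        succ.setdefault(mu, set()).add(nu)
    for s0 in S_alpha:
        frontier = {(s0, mu) for mu in first}
        seen = set(frontier)
        reached = False
        while frontier and not reached:
            nxt = set()
            for (s, mu) in frontier:
                s1 = F_U(s, mu)                          # one controlled update step
                if s1 in S_omega:
                    reached = True
                    break
                for nu in succ.get(mu, ()):
                    if (s1, nu) not in seen:
                        seen.add((s1, nu))
                        nxt.add((s1, nu))
            frontier = nxt
        if not reached:
            return False
    return True
```

Control modes such as TCS or ACS are encoded here simply by which pairs are placed into `R_U`, e.g. via the `acs_pair` predicate above.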
## 9 Conclusion and Discussion We structure the conclusion into three subsections, focusing on three main take-aways and future research directions stemming from this paper. ### Complexity of Sequential Controllability The central technical result of this work is proving that sequential controllability of Boolean control networks (BCN) is \(\mathsf{PSPACE}\)-complete, thereby closing the question left open in [24]. One important intuition that this result yields is that sequential controllability of BCN is not in fact computationally harder than simple reachability, in spite of the much heftier two-level setup with a master dynamical system driving the Boolean network. While no explicit construction is given, it is to be expected that the evolution of a BCN under a control sequence may be simulated by a Boolean network, modulo a polynomial transformation. This implies that reasoning about sequential controllability is as hard as reasoning about pure reachability in Boolean networks, opening a promising direction of future work about using the most permissive semantics [26] for sequential controllability of BCN. We stress nevertheless that sequential controllability and reachability being in the same complexity class does not necessarily imply that the techniques for efficiently solving reachability in practical situations can be immediately transposed to controllability. Exploring such possibilities is an important direction for future research on sequential controllability of BCN. While we extensively deal with CoFaSe in this work, it should be noted that the ConEvs semantics explored in [24] is not treated. The ConEvs semantics of the control sequence constrains the moments at which the control may change to the stable states of the driven Boolean network. This places the master system in a feedback loop with the driven network and changes the architecture substantially. In particular, the computational complexity of sequential controllability under the ConEvs semantics still remains to be characterized. ### Boolean P Systems Most of the technical results presented in this paper are obtained via Boolean P systems, a framework specifically designed for dealing with sequential controllability in Boolean networks. We particularly emphasize one of our central goals: designing ad hoc formalisms very tightly suited for a specific problem and thereby giving new relevant viewpoints. One of the advantages in relying on Boolean P systems is that the language of individual rules is more flexible than that of propositional formulae in Boolean networks. In particular, having set rewriting directly available allows for naturally expressing the notions of adding, removing, or depending on resources, while the propositional guards allow for easy checking of Boolean conditions whenever necessary. These two ingredients shine in Section 6, in which we show how a Boolean P system can capture both the BCN and the master dynamical system emitting the control inputs. On the other hand, we construct Boolean P systems without indulging too much into computationally expensive ingredients, which keeps the complexity of reachability in PSPACE. We would like to dwell specifically on the difference between Theorems 4 and 5, in particular on the fact that the latter directly shows that CoFaSe is in PSPACE, completely eliding Boolean P systems.
First, remark that Theorem 4 showing that SEQ-CONTROL is PSPACE-complete is in fact more general, as it holds for any mode and for any control mode, incorporating _en passant_ different kinds of control sequences such as TCS, ACS, etc. Secondly, remark that Algorithm 2 in Theorem 5 is directly derived from (and is a special case of) Algorithm 1 in Lemma 3, which arguably needed some general framework like Boolean P systems to be conceived. Going back to the ConEvs semantics mentioned in the previous subsection, we expect that considering it in the framework of Boolean P systems will bring new valuable insight both concerning the characterization of its complexity and its other properties, as well as possible optimizations for specific use cases. Observe that ConEvs cannot be captured as a control mode, because it introduces a backward dependency of the control sequence on the state of the BCN. Boolean P systems on the other hand should allow this feedback to be expressed elegantly, since the master system \(\Pi_{U}\) and the driven system \(\Pi\) are both part of the same composite system \(\Pi\cup\Pi_{U}\) (Theorem 2), and can therefore communicate both ways. In fact, just from this informal reasoning we can make a conjecture with respect to the upper bound on the complexity of sequential controllability under the ConEvs semantics. Conjecture 1: Sequential controllability of BCN under the ConEvs semantics is in \(\mathsf{PSPACE}\). Finally, we stress once again the point of Remark 3: while Boolean P systems are very closely related to reaction systems [8], they have distinctive features which make them a much better fit for reasoning about sequential controllability--specifically, explicit Boolean guards and permanency of the resources. ### Lineage of (Polymorphic) P Systems, Homoiconicity, and Lisp As we have already insisted, one central point that we bring forward with this work is conceiving ad hoc formalisms specialized for solving particular problems. This approach is partially inspired by the venerable Lisp family of programming languages, and more particularly by language-oriented programming--a methodology proposing to start solving problems by developing specifically-tailored programming languages--domain-specific languages or DSLs [10, 32]. When adopting this approach, it is important that such bespoke constructions be done within a particular general framework, lest the design costs grow too high and the new formalisms too obscure. In this paper, we promote P systems as such a general framework. The community around this model of computing has been producing a wide spectrum of variants, a far-from-exhaustive glimpse of which can be seen in [2, 14, 21, 28]. The rich body of literature provides many ingredients and various tools for easily assembling different new formalisms. This is why we believe that P systems are particularly well suited for the ad hoc formalism methodology. We conclude this work by underlining that Boolean P systems are far from being a frontier of how far one can go in designing specialized formalisms. We recall as an example polymorphic P systems [4], in which the rules are given by pairs of membranes rather than being part of the static description of the system, as is classically done in automata and language theory. Polymorphic P systems thus implement a form of homoiconicity--code-as-data, similarly to the Lisp languages.
A lot more can be done in terms of customizing P systems, and we expect to see and invest further effort into the ad hoc formalism methodology. ### Acknowledgements All authors are grateful to Laurent Trilling from Université Grenoble Alpes for fruitful discussions. Artiom Alhazov acknowledges project 20.80009.5007.22 "Intelligent information systems for solving ill-structured problems, processing knowledge and big data" by the National Agency for Research and Development.
2309.15220
New solvable two-matrix model and BKP tau function
We present exactly solvable modifications of the two-matrix Zinn-Justin-Zuber model and write it as a tau function. The grand partition function of these matrix integrals is written as the fermion expectation value. The perturbation theory series is written out explicitly in terms of series in strict partitions. The related string equations are presented.
E. N. Antonov, A. Yu. Orlov
2023-09-26T19:39:03Z
http://arxiv.org/abs/2309.15220v2
# New solvable two-matrix model and BKP tau function ###### Abstract We present exactly solvable modifications of the two-matrix Zinn-Justin-Zuber model and write it as a tau function. The grand partition function of these matrix integrals is written as the fermion expectation value. The perturbation theory series is written out explicitly in terms of series in strict partitions. The related string equations are presented. ## 1 Introduction This note was initiated by the work on the generalized Kontsevich model [35] and further discussions with the authors. The perturbation theory series for the partition function of this model was written in a very compact form as a sum over strict partitions of a pair of projective Schur functions. It was followed by the work [1] where a similar series appeared for a different model (the BGW model). The sources of interest to series in projective Schur functions can be found in [22], [52], [31], [1], [37], [3], [34], [32], [36], [33]. There are some earlier works on the series, see [51], [40], [44], [47], [27], [12]. On the projective Schur functions and the representation theory of the supersymmetric group \(q(N)\) and the symmetric group \(S_{n}\), see [46], [16], [9], [48]. The appearance of these functions in the integrable models was presented in [54] and [38]. If a matrix integral is a tau function as a function of its coupling constants, we call it solvable. As far as we know, the first solvable (in this sense) matrix model was presented in the preprint of [10]; see other examples in [19], [20] and [21]. Then we should point out the work [25]. Here, we present and compare two families of solvable matrix integrals. The second family is completely new and is related to the KP hierarchy on the root system B (the BKP hierarchy, which was introduced in [8]). ## 2 Models of two unitary matrices ### Standard and modified models of two unitary matrices In [55] the following integral over two unitary matrices was studied \[I_{1}(\mathbf{t},\mathbf{t}^{*})=C\int_{\mathbb{U}_{N}\times\mathbb{U}_{N}}e^{c\,\mathrm{tr}\left(U_{1}^{-1}U_{2}^{-1}\right)+\sum_{n>0}\frac{1}{n}(t_{n}\mathrm{tr}U_{1}^{n}+t_{n}^{*}\mathrm{tr}U_{2}^{n})}d_{*}U_{1}dU_{2}^{*} \tag{1}\] where \(d_{*}U\) is the Haar measure of the unitary group \(\mathbb{U}_{N}\). The number \(c\) and the sets \(\mathbf{t}=(t_{1},t_{2},\dots)\), \(\mathbf{t}^{*}=(t_{1}^{*},t_{2}^{*},\dots)\) play the role of coupling constants in the model, and \(C\) is a normalization constant: \(CI(0,0)=1\). It is shown that \(I_{1}\) can be written explicitly as a series, over partitions, of Schur polynomials in the coupling constants as follows: \[I_{N}(\mathbf{t},\mathbf{t}^{*})=\sum_{\lambda}s_{\lambda}(\mathbf{t})s_{\lambda}(\mathbf{t}^{*})\prod_{(i,j)\in\lambda}\frac{c}{(N-i+j)!} \tag{2}\] where \((i,j)\) are the coordinates of the node of the Young diagram \(\lambda\). The sum ranges over all Young diagrams whose height does not exceed \(N\), that is, \(j=1,2,\dots\) and \(i\leq N\). It can be shown that this series is an example of the KP tau functions, also known as hypergeometric tau functions [21], [41], which admit relatively simple determinant forms. Modified family.
In [39] (see also Appendix A in [11]), the following generalization of the integral (1) was introduced, namely, one can make a replacement for the term responsible for the interaction between matrices \(U_{1}\) and \(U_{2}\): \[e^{c\,\mathrm{tr}U_{1}^{-1}U_{2}^{-1}}\,\to\,\tau\left(N;c\,U_{1}^{-1}U_{2}^{- 1};f\right) \tag{3}\] where \(\tau\left(c\,U_{1}^{-1}U_{2}^{-1};f\right)\) is defined by the choice of the function \(f\) as follows: \[\tau\left(N;c\,U_{1}^{-1}U_{2}^{-1};f\right):=\sum_{\lambda}\prod_{i<j\leq N}( \lambda_{i}-\lambda_{j}-i+j)s_{\lambda}\left(c\,U_{1}^{-1}U_{2}^{-1}\right) \prod_{i\leq N}f(\lambda_{i}-i+N) \tag{4}\] Examples: \[\text{if }f(x)=\frac{1}{\Gamma(x+1)}\text{ then }\tau\left(N;c\,U_{1}^{-1}U_{ 2}^{-1};f\right)=e^{c\,\mathrm{tr}\left(U_{1}^{-1}U_{2}^{-1}\right)} \tag{5}\] \[\text{if }f(x)=\frac{\Gamma(x+a)}{\Gamma(a)\Gamma(x+1)}\text{ then }\tau\left(N;c\,U_{1}^{-1}U_{2}^{-1};f\right)=\det\left(1-c\,U_{1}^{-1}U_{2}^{- 1}\right)^{-a} \tag{6}\] The wonderful property of such tau function is the following determinantal representation: \[\int_{\mathbb{U}_{N}}\tau\left(c\,UU_{1}^{-1}U^{-1}U_{2}^{-1};f\right)d_{*}U =\frac{\det[\tau(1;c\,u_{i}^{-1}v_{j}^{-1};f)]_{i,j\leq N}}{\prod_{i<j\leq N}( u_{i}^{-1}-u_{j}^{-1})(v_{i}^{-1}-v_{j}^{-1})} \tag{7}\] where \[\tau(1;c\,u^{-1}v^{-1};f)]=1+f(1)u^{-1}v^{-1}+f(2)u^{-2}v^{-2}+\cdots \tag{8}\] which allows us writing down the perturbation series for the generalized model in the form of another matrix integral: \[I_{N}(\mathbf{t},\mathbf{t}^{*};f)=C\int_{\mathbb{U}_{N}\times\mathbb{U}_{N} }\tau\left(N;c\,U_{1}^{-1}U_{2}^{-1};f\right)e^{\sum_{n>0}\frac{1}{n}(t_{n} \mathrm{tr}U_{1}^{n}+t_{n}^{*}\mathrm{tr}U_{2}^{n})}d_{*}U_{1}dU_{2}^{*} \tag{9}\] \[=\sum_{\lambda\atop\ell(\lambda)\leq N}s_{\lambda}(\mathbf{t})s_{\lambda}( \mathbf{t}^{*})\prod_{i}f(\lambda_{i}-i+N)c^{\lambda_{i}} \tag{10}\] which is the Toda lattice [30] tau function [53], [49], [50] as presented in [21] and carefully studied in [42]. In [11], the integral (9) was written as the fermionic vacuum expectation value. See Appendix D, where the linear equations (sometimes known as string ones) for (1) and (9) are written down. ## 3 New models of two unitary matrices Let us consider a modification of the models mentioned above. Let us consider the following model: \[J_{N}(\mathbf{p}^{(1)},\mathbf{p}^{(2)})=C_{N}\int_{\mathbb{U}_{N}\times \mathbb{U}_{N}}e^{c\,\mathrm{tr}\left(U_{1}^{-2}U_{2}^{-2}\right)+\sum_{n=1, 3,5,\dots}\frac{2}{n}\left(p_{n}^{(1)}\mathrm{tr}U_{1}^{n}+p_{n}^{(2)} \mathrm{tr}U_{2}^{n}\right)}d\mu_{1}(U_{1})d\mu_{2}(U_{2}) \tag{11}\] where \[d\mu_{i}(U)=\det\left(U^{-\frac{1}{2}(N^{2}-N)+\kappa_{i}}\right)d_{*}U_{i}, \quad i=1,2 \tag{12}\] For the sake of simplicity, we choose \(\kappa_{1}=\kappa_{2}=0\) (all calculations below can be reproduced also in case \(\kappa_{i}\neq 0\), but all formulas take up noticeably more space in this case). Here, the constant \(C_{N}\) is chosen to ensure the normalization \(J_{N}(0,0)=1\). In this model, the parameters \(\mathbf{p}^{(1)}=(p_{1}^{(1)},p_{3}^{(1)},\dots)\), \(\mathbf{p}^{(2)}=(p_{1}^{(2)},p_{3}^{(2)},\dots)\) play the role of coupling constants. To study (11) we apply the Harish-Chandra-Itzykson-Zuber relation: \[\int_{\mathbb{U}_{N}}e^{\mathrm{tr}\left(UAU^{-1}B\right)}d_{*}U=\frac{\det\left[ e^{a_{k}b_{l}}\right]_{1\leq k,l\leq N}}{\prod_{k<l}(a_{k}-a_{l})(b_{k}-b_{l})} \tag{13}\] where \(a_{i}\) and \(b_{i}\), \(i=1,\dots,N\) are the eigenvalues of matrices \(A\) and \(B\). 
Then we rewrite \(J_{N}\) as the integral in eigenvalues, also using the symmetry property of the integrand: \[J_{N}(\mathbf{p}^{(1)},\mathbf{p}^{(2)})=\tilde{C}_{N}\oint\dots\oint\prod_{i <j}\frac{(u_{i}-u_{j})(v_{i}-v_{j})}{(u_{i}+u_{j})(v_{i}+v_{j})}\prod_{i=1}^{N} e^{c\,\mathrm{tr}\left(u_{i}^{-2}v_{i}^{-2}\right)+\sum_{n=1,3,5,\dots}\frac{2}{n} \left(p_{n}^{(1)}u_{i}^{n}+p_{n}^{(2)}v_{i}^{n}\right)}\frac{du_{i}}{u_{i}} \frac{dv_{i}}{v_{i}} \tag{14}\] To obtain this form, we use (13) as well as the well-known explicit form of the expression for the Haar measure \(\mathbb{U}_{N}\), which contains the factors \(d_{*}U_{1}\sim\prod_{i<j}|u_{i}-u_{j}|^{2}\prod_{k}\frac{du_{k}}{u_{k}}\) and \(d_{*}U_{2}\sim\prod_{i<j}|v_{i}-v_{j}|^{2}\prod_{k}\frac{dv_{k}}{v_{k}}\). For the next step, we use the following analogue of the Cauchy-Littlewood relation [29]: \[e^{\sum_{n=1,3,\dots}\frac{2}{n}p_{n}\mathrm{tr}\left(X^{n}\right)}=\sum_{ \stackrel{{\alpha\in D_{P}}}{{\ell(\alpha)\leq N}}}Q_{\alpha}( \mathbf{p})Q_{\alpha}\left(\mathbf{p}(X)\right)\prod_{i=1}^{\ell(\alpha)}2^{- \alpha_{i}} \tag{15}\] where \(X\) is a matrix and where \(Q_{\alpha}\) is the so-called projective Schur function, see Appendix. This is a polynomial in the variables \(p_{1},p_{3},\dots\) and \[\alpha=(\alpha_{1},\alpha_{2},\dots,\alpha_{k}),\,\alpha_{1}>\dots>\alpha_{k} \geq 0,\quad k=1,2,\dots\] is the multi index of this multivariable polynomial. Such sets are called strict partitions, and the set of all strict partitions (or the same: the set of Young diagrams with distinct lengths of rows \(\alpha_{1}>\alpha_{2}>\dots\)) we denote \(DP\) as in [29]. The notation \(Q_{\alpha}\left(\mathbf{p}(X)\right)\) means that here the arguments \(p_{1},p_{3},\dots\) are not free parameters but chosen to be equal Newton sums of the eigenvalues of the matrix \(X\): \[p_{n}=p_{n}(X):=\mathrm{tr}\left(X^{n}\right),\quad n\,\mathrm{is\,odd} \tag{16}\] Thus, the polynomial \(Q_{\alpha}\left(\mathbf{p}(X)\right)\) is a symmetric function in the eigenvalues of \(X\). For the sake of simplicity, we shall write \(Q_{\alpha}(X)\) instead of \(Q_{\alpha}\left(\mathbf{p}(X)\right)\) having in mind that a capital letter serves for a matrix. At last, one can prove (see Appendix C) that \[\frac{1}{(2\pi i)^{2N}}\oint\dots\oint Q_{\alpha}(U_{1})Q_{\beta}(U_{2})\prod _{i<j}\frac{(u_{i}-u_{j})(v_{i}-v_{j})}{(u_{i}+u_{j})(v_{i}+v_{j})}\prod_{i=1}^ {N}2^{cu_{i}^{-2}v_{i}^{-2}}\frac{du_{i}}{u_{i}}\frac{dv_{i}}{v_{i}} \tag{17}\] \[=\begin{cases}\delta_{\alpha,\beta}2^{2\ell(\alpha)}\prod_{i=1}^{\ell(\alpha)} \frac{c^{\alpha_{i}}}{(\frac{1}{2}\alpha_{i})!},\,\text{if each}\,\alpha_{i}\, \text{is even}\\ 0\quad\text{otherwise}\end{cases}\] see Appendix. From (14),(15) and (18) we obtain \[J_{N}(\mathbf{p}^{(1)},\mathbf{p}^{(2)})=\sum_{\stackrel{{\alpha \in DP}}{{\ell(\alpha)\leq N}}}Q_{2\alpha}(\mathbf{p}^{(1)})Q_{2\alpha}( \mathbf{p}^{(2)})\prod_{i=1}^{\ell(\alpha)}c^{2\alpha_{i}}f(\alpha_{i}), \tag{18}\] where \[f(\alpha_{i})=\frac{1}{\alpha_{i}!} \tag{19}\] compare to (1). The series in the right-hand side can be obtained by a limiting procedure from the hypergeometric BKP tau function studied in [40]. If we generalize integral (11) with the help of replacement (3), namely \[J_{N}(\mathbf{p}^{(1)},\mathbf{p}^{(2)};f)=C\int_{\mathbb{U}_{N}\times\mathbb{ U}_{N}}\tau(N;cU_{1}^{-2}U_{2}^{-2};f)e^{\sum_{n=1,3,5,\dots}\frac{2}{n} \left(p_{n}^{(1)}\mathrm{tr}U_{1}^{n}+p_{n}^{(2)}\mathrm{tr}U_{2}^{n}\right)} d\mu_{1}(U_{1})d\mu_{2}(U_{2}) \tag{20}\] we get (18). 
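As a purely combinatorial illustration of the series (18), the following short Python sketch enumerates the strict partitions \(\alpha\in DP\) with \(\ell(\alpha)\leq N\) (positive distinct parts, bounded part size) and assembles the corresponding weights \(\prod_{i}c^{2\alpha_{i}}/\alpha_{i}!\); the evaluation of the projective Schur functions \(Q_{2\alpha}(\mathbf{p})\) themselves is not implemented here and would have to be supplied separately.

```python
from math import factorial

def strict_partitions(max_part, max_len):
    """Yield strict partitions alpha_1 > ... > alpha_k > 0 with k <= max_len and parts <= max_part
    (the empty partition is included)."""
    def rec(largest, length):
        yield ()
        if length == 0:
            return
        for a in range(largest, 0, -1):
            for tail in rec(a - 1, length - 1):
                yield (a,) + tail
    yield from rec(max_part, max_len)

def series_weights(c, N, max_part):
    """Weights prod_i c^(2*alpha_i)/alpha_i! of the truncated sum (18), one per strict partition
    alpha with l(alpha) <= N; each term would still be multiplied by Q_{2 alpha}(p^(1)) Q_{2 alpha}(p^(2))."""
    for alpha in strict_partitions(max_part, N):
        w = 1.0
        for a in alpha:
            w *= c ** (2 * a) / factorial(a)
        yield alpha, w
```

At \(\mathbf{p}^{(1)}=\mathbf{p}^{(2)}=0\) only the empty partition survives (every \(Q_{2\alpha}\) with \(\alpha\neq\emptyset\) is a homogeneous polynomial of positive degree), which is consistent with the normalization \(J_{N}(0,0)=1\).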
Fermionic form for the grand partition function. The grand partition function for the sum (18) is \[J({\bf p}^{(1)},{\bf p}^{(2)},\zeta)=\sum_{N}J_{N}({\bf p}^{(1)},{\bf p}^{(2)})\zeta^{N}=\sum_{N}\zeta^{N}\sum_{\stackrel{{\alpha\in DP}}{{\ell(\alpha)\leq N}}}Q_{2\alpha}({\bf p}^{(1)})Q_{2\alpha}({\bf p}^{(2)})\prod_{i=1}^{\ell(\alpha)}\frac{c^{2\alpha_{i}}}{\alpha_{i}!}, \tag{21}\] where we insert (18),(19) for the model (11). This series can be written as a fermionic vacuum expectation value: \[J({\bf p}^{(1)},{\bf p}^{(2)},\zeta)=\langle 0|\mathbb{T}{\bf e}^{\zeta F({\bf p}^{(1)},{\bf p}^{(2)})}|0\rangle,\quad F({\bf p}^{(1)},{\bf p}^{(2)})=-\frac{1}{4\pi^{2}}\oint\oint e^{cu^{-2}v^{-2}}\phi^{(1)}(u,{\bf p}^{(1)})\phi^{(2)}(v,{\bf p}^{(2)})\frac{dudv}{uv} \tag{22}\] where \[\phi^{(i)}(z,{\bf p})=e^{\sum_{n=1,3,5,\dots}\frac{2}{n}p_{n}z^{n}}\phi^{(i)}(z),\quad i=1,2, \tag{23}\] are neutral fermions: \[[\phi^{(i)}(z_{1}),\phi^{(i)}(z_{2})]_{+}=\sum_{n\,\mbox{\scriptsize odd}}(-1)^{n}\frac{z_{1}^{n}}{z_{2}^{n}},\quad[\phi^{(1)}(z_{1}),\phi^{(2)}(z_{2})]_{+}=0 \tag{24}\] (Fermi fields \(\phi^{(i)}(z,{\bf p})\) obey the same anticommutation relations.) Here \(\mathbb{T}\) is the chronological ordering of fermions in 2D Euclidean space in the Taylor series of the exponential (that is, the ordering of the absolute values of the arguments, \(|z_{a}|>|z_{b}|\) if \(\phi^{(i)}(z_{a})\) is placed to the left of \(\phi^{(i)}(z_{b})\), with the sign factor taken into account). In what follows, we shall omit the symbol \(\mathbb{T}\), keeping it in mind as a standard convention. This expression is rather similar to the vacuum expectation value written down in [6] for the Fateev-Frolov-Schwarz instanton sum in the \(\sigma\)-model. Fermi modes defined by \(\phi^{(i)}(z)=\sum_{j\in\mathbb{Z}}\phi^{(i)}_{j}z^{j}\) obey \[[\phi^{(i_{1})}_{j_{a}},\phi^{(i_{2})}_{j_{b}}]_{+}=(-1)^{j_{a}}\delta_{i_{1},i_{2}}\delta_{j_{a}+j_{b}} \tag{25}\] We notice that \(F(0,0)=\sum_{k\geq 0}\frac{c^{k}}{k!}\phi^{(1)}_{2k}\phi^{(2)}_{2k}\). Note that we use the approach of Kac-van de Leur [18], where the vacuum vectors are chosen as follows: \[\phi^{(i)}_{0}|0\rangle=\frac{1}{\sqrt{2}}|0\rangle,\quad\langle 0|\phi^{(i)}_{0}=\frac{1}{\sqrt{2}}\langle 0|,\quad i=1,2 \tag{26}\] \[\phi_{n}|0\rangle=0,\quad\langle 0|\phi_{-n}=0,\quad n<0 \tag{27}\] This means that \[\langle 0|\phi(z,{\bf p})|0\rangle=\frac{1}{\sqrt{2}}e^{\sum_{n=1,3,\dots}\frac{2}{n}p_{n}z^{n}} \tag{28}\] and for \(N=1,2,3,\dots\) we get \[\langle 0|\phi^{(i)}(z_{1},{\bf p})\cdots\phi^{(i)}(z_{N},{\bf p})|0\rangle=2^{\frac{N}{2}}\prod_{i<j\leq N}\frac{z_{i}-z_{j}}{z_{i}+z_{j}}e^{\sum_{m=1,3,\dots}\sum_{i=1}^{N}\frac{2}{m}p_{m}z_{i}^{m}} \tag{29}\] \[=2^{\frac{N}{2}}\prod_{i<j\leq N}\frac{z_{i}-z_{j}}{z_{i}+z_{j}}\sum_{\stackrel{{\alpha\in DP}}{{\ell(\alpha)\leq N}}}2^{-\ell(\alpha)}Q_{\alpha}({\bf p})Q_{\alpha}(Z) \tag{30}\] where \(z_{1},\dots,z_{N}\) are eigenvalues of an \(N\times N\) matrix \(Z\). The first factor in (30) is obtained by the Wick theorem, and the second factor in (30) follows from (15). However, it goes back to the work [54], which, in turn, was based on [8]. The vacuum expectation value (22) is an example of the two-component BKP tau function introduced in [8], [18]. A by-product of this statement is a set of bilinear equations (Hirota equations) satisfied by the integral (11). The Hirota equations for the two-component BKP tau function can be found in [18].
The grand partition function (22) solves these bilinear equations, as does any other BKP tau function. The special tau function is selected by constraints written in the form of linear differential equations; see the paragraph on string equations below. Neutral Fermi fields \(\phi^{(i)}(z)\) can be realized as anticommuting operators \(V(z,\hat{\bf p}^{(i)})\) acting in the bosonic Fock space \(\cal F\) formed by polynomials in the variables \({\bf p}^{(1)},{\bf p}^{(2)}\) multiplied by \(\frac{1}{2}(1+\eta_{1})(1+\eta_{2})\), where \(\eta_{i},\,i=1,2\) are odd Grassmannian numbers: \(\eta_{i}^{2}=0\). The (right) vacuum vector is \(\frac{1}{2}(1+\eta_{1})(1+\eta_{2})\); creation operators are \(p_{n}^{(1)}\) and \(p_{n}^{(2)}\), \(n\) odd, and annihilation operators are equal to the derivatives \(n\partial_{p_{n}^{(1)}}\) and \(n\partial_{p_{n}^{(2)}}\). Let \[V(z,\hat{\bf p}^{(i)})=\frac{\eta_{i}+\frac{\partial}{\partial\eta_{i}}}{\sqrt{2}}e^{\sum_{m\in\mathbb{Z}_{odd}^{+}}\frac{2}{m}z^{m}p_{m}^{(i)}}e^{-\sum_{m\in\mathbb{Z}_{odd}^{+}}z^{-m}\frac{\partial}{\partial p_{m}^{(i)}}},\quad|z|=1, \tag{31}\] be the vertex operator as it was introduced in [18]. The symbol \(\hat{\bf p}^{(i)}\) denotes the set of two collections: \(\eta_{i},p_{1}^{(i)},p_{3}^{(i)},p_{5}^{(i)},\dots\) and \(\frac{\partial}{\partial\eta_{i}},\frac{\partial}{\partial p_{1}^{(i)}},\frac{\partial}{\partial p_{3}^{(i)}},\dots\). One can verify that the vertex operators \(V^{(i)}(z)\) satisfy relations (24). The bosonization formulae result in the following representation for \(J({\bf p}^{(1)},{\bf p}^{(2)},\zeta)\): \[J({\bf p}^{(1)},{\bf p}^{(2)},\zeta)=g^{\rm Bos}(\hat{\bf p}^{(1)},\hat{\bf p}^{(2)},\zeta)\cdot 1, \tag{32}\] and \[g^{\rm Bos}(\hat{\bf p}^{(1)},\hat{\bf p}^{(2)},\zeta)=e^{\zeta\oint\oint e^{cu^{-2}v^{-2}}V(u,\hat{\bf p}^{(1)})V(v,\hat{\bf p}^{(2)})\frac{du\,dv}{uv}} \tag{33}\] String equations. The general construction of the algebra \(W^{B}_{1+\infty}\) is presented in [23], [24]. Following the standard procedure, one can expand the product of the two vertex operators in the generators of the \(W^{B}_{1+\infty}\) algebra: \[\frac{1}{2}V(ze^{\frac{y}{2}},\hat{\bf p})V(-ze^{-\frac{y}{2}},\hat{\bf p})-\frac{1}{2}\frac{1+e^{-y}}{1-e^{-y}}=\frac{1}{4}\frac{e^{y}+1}{e^{y}-1}\left(\vdots e^{\theta(ze^{\frac{y}{2}})+\theta(-ze^{-\frac{y}{2}})}\vdots-1\right)= \tag{34}\] \[=\frac{1}{4}\frac{e^{y}+1}{e^{y}-1}\vdots\sum_{k>0}\frac{1}{k!}\left(\sum_{m\in\mathbb{Z}_{odd}}\theta_{m}(\hat{\bf p})z^{m}\left(e^{\frac{my}{2}}-e^{-\frac{my}{2}}\right)\right)^{k}\vdots=:\sum_{m\in\mathbb{Z},n\geq 0}\frac{1}{n!}y^{n}z^{m}\Omega_{\sf B}(m,n,\hat{\bf p}) \tag{35}\] where \(\vdots\ \vdots\) denotes the bosonic normal ordering (all derivatives \(\partial_{p_{i}}\) are moved to the right of \(p_{i}\)) and where \[2\theta(z,\hat{\bf p}):=\sum_{m\in\mathbb{Z}_{odd}^{+}}\frac{2}{m}z^{m}p_{m}-\sum_{m\in\mathbb{Z}_{odd}^{+}}z^{-m}\frac{\partial}{\partial p_{m}} \tag{36}\] Equivalently, one can write \[\Omega_{\sf B}(m,n,\hat{\bf p})=-\mathop{\rm res}_{z}V(-z,\hat{\bf p})z^{-\frac{m}{2}}\left(D^{n}z^{-\frac{m}{2}}V(z,\hat{\bf p})\right)\frac{dz}{z},\quad D=z\frac{\partial}{\partial z} \tag{37}\] As follows from the left-hand side of this formula, \(\Omega_{mn}\) vanishes when \(n\) and \(m\) have the same parity.
In particular, we have \[\Omega_{\sf B}(0,1,\hat{\bf p})=\sum_{n>0}np_{n}\partial_{n}\] \[\Omega_{\sf B}(0,3,\hat{\bf p})=\frac{1}{2}\sum_{n>0}n^{3}p_{n}\partial_{n}+ \frac{1}{2}\sum_{n>0}np_{n}\partial_{n}+4\sum_{n_{1},n_{2},n_{3}\,{\rm odd}}p_ {n_{1}}p_{n_{2}}p_{n_{3}}(n_{1}+n_{2}+n_{3})\partial_{n_{1}+n_{2}+n_{3}}+ \tag{38}\] \[+3\sum_{n_{1}+n_{2}=n_{3}+n_{4}\,{\rm odd}}p_{n_{1}}p_{n_{2}}n_{3}n_{4} \partial_{n_{3}}\partial_{n_{4}}+\sum_{n_{1},n_{2},n_{3}\,{\rm odd}}p_{n_{1}+n_ {2}+n_{3}}\partial_{n_{1}}\partial_{n_{2}}\partial_{n_{3}} \tag{39}\] The fermionic counterpart of (35) is much simpler: \[\frac{1}{2}:\phi(ze^{\frac{y}{2}})\phi(-ze^{-\frac{y}{2}}):=\frac{1}{2}\sum_{m,j\in\mathbb{Z}}z^{m}e^{\frac{y}{2}(m+2j)}(-1)^{j}:\phi_{m+j}\phi_{-j}:=\sum_{ \stackrel{{ m\in\mathbb{Z}}}{{n\geq 0}}}\frac{1}{n!}y^{n}z^{m}\Omega_{ \mathbb{F}}(m,n) \tag{40}\] where \(:a:\) serves for the fermionic normal ordering (namely, in formula (40) one can replace each \(:a:\) by \(a-\langle 0|a|0\rangle\)). Again, as follows from the left-hand side of this formula, \(\Omega_{mn}=0\) when \(n\) and \(m\) have the same parity. One gets \[\Omega_{\mathbb{F}}(m,n)=\frac{1}{2}\sum_{j\in\mathbb{Z}}\left(\frac{m}{2}+j \right)^{n}(-1)^{j}:\phi_{m+j}\phi_{-j}: \tag{41}\] \[=\mathop{\rm res}\limits_{z}\left(z^{-\frac{m}{2}}\cdot D^{n}\cdot z^{-\frac{m}{2}} \cdot\phi(z)\right)\phi(-z)\frac{dz}{z},\quad D=z\frac{\partial}{\partial z}. \tag{42}\] Among operators (41), we have the Virasoro ones, whose fermionic form is \[L^{\rm r}_{m}:=\Omega_{\rm r}(-2m,1)=\frac{1}{2}\sum_{j\in\mathbb{Z}}\left(j-m \right)(-1)^{j}\phi_{j-2m}\phi_{-j} \tag{43}\] and operators \[\Omega_{\rm r}(0,n)=\frac{1}{2}\sum_{j\in\mathbb{Z}}(-1)^{j}j^{n}:\phi_{j}\phi _{-j}:=\sum_{j=1,3,\dots}(-1)^{j}j^{n}\phi_{j}\phi_{-j},\quad n\,{\rm odd}. \tag{44}\] It is known that \[\Omega_{\rm n}(0,n,\hat{\bf p})Q_{\alpha}({\bf p})=e_{\alpha}Q_{\alpha}({\bf p }),\quad e_{\alpha}=\sum_{i}\alpha_{i}^{n},\quad n=1,3,5,\dots\] and therefore we get the following string equation: \[\left(\Omega_{\rm n}(0,n,\hat{\bf p}^{(1)})-\Omega_{\rm n}(0,n,\hat{\bf p}^{( 2)})\right)J_{N}({\bf p}^{(1)},{\bf p}^{(2)})=0,\quad n=1,3,5,\dots \tag{45}\] Each of the equations (45) characterizes the series (18), where both of the projective Schur functions in the products of pairs are labeled with the same strict partition, but in no way does it characterize the prefactor \(\prod_{i}\frac{2^{-2\alpha_{i}}c^{2\alpha_{1}}}{\alpha_{i}!}\) in the right-hand side of (18). To do it, we will write down constraints for the grand partition function, which can be obtained by the usage of different \(BW_{1+\infty}\) elements. We describe this procedure in short. First, for each given integer \(m\), we introduce a special combination of \(\Omega_{\rm r}(2m,n),\,n=1,3,\dots\) as follows: \[{\cal M}^{(i)}_{\rm r}(2m,G)=\frac{1}{2}\sum_{j\in\mathbb{Z}}y_{m,G}\left(j+m \right)(-1)^{j}:\phi_{2m+j}^{(i)}\phi_{-j}^{(i)}: \tag{46}\] \[=\mathop{\rm res}\limits_{z}:\left(z^{-m}\cdot y_{m,G}(D)\cdot z^{-m}\cdot\phi ^{(i)}(z)\right)\phi^{(i)}(-z):\frac{dz}{z} \tag{47}\] where we define \(y_{m,G}\) as the following polynomial function of odd degree: \[y_{m,G}(x)=x\left(x^{2}-m^{2}\right)\left(x^{2}-(m+1)^{2}\right)\cdots\left(x ^{2}-(2m-1)^{2}\right)G\left(x^{2}\right),\quad x=m+j \tag{48}\] where \(G(x^{2})\) is an arbitrary-chosen polynomial of \(x^{2}\). 
This combination of \(BW_{1+\infty}\) elements is chosen to provide the property \[{\cal M}^{(i)}_{\rm r}(2m,G)|0\rangle=0,\quad i=1,2 \tag{49}\] for any integer \(m\) and for any polynomial \(G(x^{2})\). Indeed, if \(m\) is negative, then each of \(\phi_{j+2m}\phi_{-j}\) from the sum in the right-hand side of (46) annihilates \(|0\rangle\). If \(m>0\), the situation is as follows: we have \[\frac{1}{2}\sum_{j\in\mathbb{Z}}(-1)^{j}\phi_{2m+j}^{(i)}\phi_{-j}^{(i)}|0 \rangle=\phi_{2m}^{(i)}\phi_{0}^{(i)}|0\rangle+\cdots+(-1)^{m-1}\phi_{m+1}^{(i )}\phi_{m-1}^{(i)}|0\rangle\neq 0,\] however \({\cal M}^{(i)}_{\rm f}(2m,G)\) annihilates the vacuum vector due to the vanishing of \(y_{m,G}(m+i)\) at points \(i=0,i=1,\dots,i=m-1\), see (48). Next, by (25) one can also verify that for \(m\geq 0\) we obtain \[[{\cal M}^{(1)}_{\rm r}(2m,G)-{\cal M}^{(2)}_{\rm r}(-2m,\tilde{G}),F(0,0)]=0 \tag{50}\] where \(\tilde{G}(x^{2})\) is also a polynomial in \(x^{2}\) and defined as follows: \[\tilde{G}(x^{2})=G(x^{2})c_{\rm even}(x),\quad c(x)=\frac{\Gamma(x+1+m)}{ \Gamma(x+1)}=c_{\rm even}(x)+c_{\rm odd}(x) \tag{51}\] where polynomial \(c_{\rm even}(x)=c_{\rm even}(-x)\)\(c_{\rm odd}(x)=-c_{\rm odd}(-x)\). Say, for \(m=1\) we have \(c(x)=x+1\), thus \(c_{\rm even}=1\) and \(y_{1,G}(x)=y_{-1,\tilde{G}}(x)=x(x^{2}-1)G\). Clearly, thanks to the symmetry between \(\phi^{(1)}\) and \(\phi^{(2)}\) in (21) one can also write \[[\mathcal{M}_{\textsc{f}}^{(2)}(2m,G)-\mathcal{M}_{\textsc{f}}^{(1)}(-2m,\tilde{ G}),F(0,0)]=0\] The properties (49)-(50) and the bosonization procedure result in (string) equations on the grand partition function: \[\left(\mathcal{M}_{\textsc{h}}(-2m,\tilde{G},\hat{\mathbf{p}}^{(2)})- \mathcal{M}_{\textsc{h}}(2m,G,\hat{\mathbf{p}}^{(1)})\right)J(\mathbf{p}^{(1) },\mathbf{p}^{(2)},\zeta)=0 \tag{52}\] with \[\mathcal{M}_{\textsc{h}}(2m,G,\hat{\mathbf{p}}^{(1,2)})=\underset{z}{\text{res }}\left(z^{-m}\cdot y_{m,G}(D)\cdot z^{-m}\cdot V(z,\hat{\mathbf{p}}^{(1,2)}) \right)V(-z,\hat{\mathbf{p}}^{(1,2)})\frac{dz}{z} \tag{53}\] \[\mathcal{M}_{-\textsc{h}}(-2m,Gc_{\text{even}},\hat{\mathbf{p}}^{(2,1)})= \underset{z}{\text{res}}\left(z^{m}\cdot y_{-m,Gc_{\text{even}}}(D)\cdot z^{m }\cdot V(z,\hat{\mathbf{p}}^{(2,1)})\right)V(-z,\hat{\mathbf{p}}^{(2,1)}) \frac{dz}{z} \tag{54}\] On the Pfaffian form. Let us write \(\mathbf{p}\) instead of \(\mathbf{p}^{(1)}\). Let us choose \(p_{n}^{(2)}=p_{n}(y):=\sum_{i=1}^{2k}y_{i}^{n}\), \(n=1,3,\dots\). For this choice one can apply the bosonization formula and the Wick theorem and get the following pfaffian representation: \[J(\mathbf{p},\mathbf{p}(y),\zeta)=\prod_{i<j}^{2k}\left(\frac{y_{i}+y_{j}}{y_ {i}-y_{j}}\right)\text{Pf}\left[J\left(\mathbf{p}^{(1)},\mathbf{p}(y_{i},y_{ j}),\zeta\right)\right]_{i,j} \tag{55}\] where \[J\left(\mathbf{p},\mathbf{p}(y_{i},y_{j}),\zeta\right)=\sum_{N}\int_{U_{1} \times U_{1}}e^{e\text{tr}\left(U_{1}^{-2}U_{2}^{-2}\right)+\sum_{n=1,3,\dots }\frac{2}{n}\left(p_{n}^{(i)}\text{tr}U_{1}^{n}\right)}\text{det}\frac{(1-y_{ i}U_{2})(1-y_{j}U_{2})}{(1+y_{i}U_{2})(1+y_{j}U_{2})}d_{*}U_{1}dU_{2}^{*} \tag{56}\] And by the same method, one can write down the answers as the Pfaffian of a block \(2k+2l\times 2k+2l\) matrix in case \(p_{n}^{(1)}=\sum_{i=1}^{2k}y_{i}^{n}\), \(p(2)_{n}=\sum_{i=1}^{2l}z_{i}^{n}\)\(n=1,3,\dots\). We omit this specious expression and leave it as quite a tedious exercise for the reader, see [13] where such calculations are made. 
## 4 Cauchy-type interaction Next, we consider the integral \[K_{N}(\mathbf{p}^{(1)},\mathbf{p}^{(2)})=C\int_{\mathbb{U}_{N}\times\mathbb{ U}_{N}}\text{det}\left(1-cU_{1}^{-2}U_{2}^{-2}\right)^{-a}e^{\sum_{n=1,3,5, \dots}\frac{2}{n}\left(p_{n}^{(i)}\text{tr}U_{1}^{n}+p_{n}^{(2)}\text{tr}U_{2 }^{n}\right)}d_{*}U_{1}d_{*}U_{2} \tag{57}\] This is a different version of the Cauchy matrix model [39], [11] and [7]. Instead of (14), we obtain \[K_{N}(\mathbf{p},\mathbf{p}^{*})=\tilde{C}\oint\dots\oint\prod_{i<j}\frac{(u_ {i}-u_{j})(v_{i}-v_{j})}{(u_{i}+u_{j})(v_{i}+v_{j})}\prod_{i=1}^{N}\left(1-cu_ {i}^{-2}v_{i}^{-2}\right)^{N-1-a}e^{\sum_{n=1,3,5,\dots}\frac{2}{n}\left(p_{n}u _{i}^{n}+p_{n}^{*}v_{i}^{n}\right)}\frac{du_{i}}{u_{i}}\frac{dv_{i}}{v_{i}} \tag{58}\] where \(C\) and \(\tilde{C}\) provide the condition \(K_{N}(0,0)=1\). Similarly to the formula (18) in the previous case, we get \[K_{N}(\mathbf{p},\mathbf{p}^{*})=\sum_{\alpha\in DP\atop\ell(a)\leq N}Q_{2 \alpha}(\mathbf{p})Q_{2\alpha}(\mathbf{p}^{*})\prod_{i=1}^{\ell(\alpha)} \frac{2^{-2\alpha_{i}}c^{2\alpha_{i}}(1-c\alpha_{i})^{-a}}{\alpha_{i}!}, \tag{59}\] In this case, the formulas (22) and (32) for the grand partition function are the same; however, in Cauchy's case, we have \[F(\mathbf{p},\mathbf{p}^{*})=-\frac{1}{4\pi^{2}}\oint\oint(1-cuv)^{-a}\phi^{(1 )}(u,\mathbf{p})\phi^{(2)}(v,\mathbf{p}^{*})\frac{dudv}{uv} \tag{60}\] \[F^{\rm Bos}(\hat{\bf p},\hat{\bf p}^{*})=-\frac{1}{4\pi^{2}}\oint\oint(1-cuv)^{-a}V^ {(1)}(u,\hat{\bf p})V^{(2)}(v,\hat{\bf p}^{*})\frac{dudv}{uv} \tag{61}\] Formula (55) where \[J\left({\bf p},{\bf p}(y_{i},y_{j}),\zeta\right)=\sum_{N}\int_{U_{1}\times U_{ 1}}e^{\sum_{n>0}\frac{1}{n}(p_{n}{\rm tr}U_{1}^{n})}{\rm det}\left(1-cU_{1}^{- 2}U_{2}^{-2}\right)^{-a}{\rm det}\frac{(1-y_{i}U_{2})(1-y_{j}U_{2})}{(1+y_{i}U _{2})(1+y_{j}U_{2})}d_{*}U_{1}dU_{2}^{*}, \tag{62}\] is still correct. String equations are the same, namely, (45) and (52),(53) and (54), which are still true, however, now in (51) we have \[c(x)=\frac{\Gamma(x+m+1)\Gamma(x+a)}{\Gamma(x+1)\Gamma(x+m+a)}\frac{\Gamma(a+ m)}{\Gamma(a)}=c_{\rm even}(x)+c_{\rm odd}(x) \tag{63}\] For \(m=1\), \(G\equiv 1\), and \(a=1\), we get \(c_{\rm even}(x)=1\). Thus, \(y_{1,1}=x^{3}-x=y_{-1,1}\). _Acknowledgements._ The authors are grateful A. Alexandrov, A.Morozov, A. Mironov for attracting attention to [3], [35], and special thanks to Andrey Mironov for fruitful discussions. The work was supported by the Russian Science Foundation (Grant No.23-41-00049).
2310.00028
Fundamental scaling limits and bandwidth shaping of frequency-modulated combs
Frequency-modulated (FM) combs based on active cavities like quantum cascade lasers have recently emerged as promising light sources in many spectral regions. Unlike passive modelocking, which uses amplitude modulation to generate amplitude modulation, FM combs use phase modulation to generate phase modulation. They can therefore be regarded as a phase-domain version of passive modelocking. However, while the ultimate scaling laws of passive modelocking have long been known -- Haus showed in 1975 that pulses have a bandwidth proportional to effective gain bandwidth -- the limits of FM combs have been much less clear. Here, we show that FM combs are governed by the same fundamental limits, producing combs whose bandwidths are linear in the effective gain bandwidth. Not only do we show theoretically that the diffusive effect of gain curvature limits comb bandwidth, we also show experimentally how this limit can be increased. By adding carefully designed resonant-loss structures that are evanescently coupled to the cavity of a terahertz laser, we reduce the curvature and increase the effective gain bandwidth of the laser, demonstrating bandwidth enhancement. Our results give a new degree of freedom for the creation of active chip-scale combs and can be applied to a wide array of cavity geometries.
Mithun Roy, Zhenyang Xiao, Sadhvikas Addamane, David Burghoff
2023-09-29T02:05:47Z
http://arxiv.org/abs/2310.00028v2
# Diffusive loss shaping of quantum cascade laser frequency combs ###### Abstract Integrated optical frequency combs based on active cavities such as quantum cascade lasers (QCLs) have emerged as promising light sources in the mid-infrared and terahertz spectral regions. Their bandwidths are limited by two separate, yet equally important effects: dispersion and diffusion. However, while dispersion has been extensively engineered, diffusion--a phenomenon originating from gain variation--has not. We show theoretically and experimentally that the addition of carefully-engineered diffusive loss can enhance the bandwidth of QCL combs. Adding resonant loss to the cavity of a terahertz QCL can counteract the diffusive effect of the gain medium and allows broader bandwidth combs to form, fully exploiting the bandwidth and dynamic range of the gain medium. These results are well-explained by active cavity mean-field theory. Interestingly, this strategy also permits our structures to generate soliton-esque pulsed states of light. Our results give a new degree of freedom for the creation of active chip-scale combs, and can be applied to a wide array of cavity geometries. ## I Introduction In recent years, there has been significant interest in the generation of equidistant frequency combs formed in cavities with distributed gain. In particular, quantum cascade lasers (QCLs)--semiconductor lasers that can operate both in the mid-infrared and terahertz regions and are capable of producing watt-level output power--are of interest due to their broadband and compact nature, especially for spectroscopic applications [1, 2, 3], radiometry [4], metrology, and quantum information science [5]. While QCLs have gone through tremendous improvements with respect to output power, operating temperature [6], and frequency range, precise comb formation remains challenging. For many years, it has been understood that intriguing phase-locking mechanisms can occur [7], but pulse formation by conventional passive mode-locking remained difficult [8]. One of the most salient approaches is one that was observed to occur spontaneously inside QCL Fabry-Perot cavities [9, 10, 11], one that was originally believed to relate to a large third-order nonlinearity in intersubband systems but is now recognized as a broader emergent phenomenon that arises from the motion of the gain grating. Briefly, gain saturation combines with the asymmetry in the field at the facets to create an effective quasi-\(\chi^{(3)}\) nonlinearity [12, 13]. This nonlinearity creates a phase modulation dependent on the phase of the field itself, and this causes the dynamics of the system to be governed by a phase-driven nonlinear Schrodinger equation [13]. This is in stark contrast to Kerr combs (which use amplitude modulation to generate phase modulation) or passive mode-locking (which uses amplitude modulation to generate amplitude modulation). Thus, the natural state of these combs is a frequency-modulated (FM) mode of operation, where the frequency is strongly modulated but the amplitude is not. FM-like states have been observed in many different semiconductor laser systems including but not limited to QCLs [14, 15, 16, 17], and has been verified using a number of techniques, including SWIFTS [18, 19], FACE [20], and upconversion sampling [21, 22]. However, in contrast to pulse-forming mechanisms, the ultimate bandwidth limits of FM modes of operation are not well-understood. 
It is known that FM combs require low but nonzero dispersion in order to maintain stability, but even if the optimal dispersion is engineered [11, 23], this is not enough. By modifying and generalizing the Lugiato-Lefever equation to describe Fabry-Perot lasers [13], we previously showed that the primary limiting factor for these combs is not dispersion, but gain curvature (variation in the gain at its peak). The fundamental FM comb--referred to as an extendon--can have infinite bandwidth without gain curvature. Even when the laser is far above threshold and has ample gain across a broad bandwidth, that is not a guarantee that the FM comb will actually be able to utilize all this gain. As the instantaneous frequency detunes from the center of the gain peak, the instantaneous intensity falls in concert (Fig. 1a). Other frequencies will be able to lase in this window instead, and the competition between these two processes leads to chaotic multimode behavior. Alternatively, if the gain is low, the diffusive effect of gain curvature can instead cause the laser to produce continuous wave (CW) light. In this work, we introduce the concept of _diffusive loss shaping_ to balance the effects of gain curvature. By adding carefully-designed resonant loss structures that are evanescently coupled to the cavity of a terahertz QCL and controlling their size and distance (Fig. 1b), we are able to flatten the effective gain of the laser and broaden the comb bandwidth that can be achieved. We show theoretically and experimentally that this strategy can produce combs whose bandwidth is close to the absolute gain bandwidth limit of a medium--the ability to make a single-mode laser. We verify the coherence of our combs and, using SWIFTS, demonstrate that our devices can produce not only FM combs but also pulses. The ability to engineer both dispersion and diffusion enables new exciting prospects for integrated combs, as the diffusive loss and gain have not been well-explored in integrated active cavities and provide new degrees of freedom. ## II Basic concept and theory Gain curvature is intrinsic to any laser. The peak of a gain medium is set by the transition frequency and falls as the frequency is detuned from it, giving rise to a negative \(\frac{\partial^{2}g}{\partial\omega^{2}}\). For a homogeneously-broadened transition, this curvature is related to the dephasing time. FM-type combs form in semiconductor lasers (such as QCLs) due to a combination of gain saturation and the asymmetry in the field at the facets. An FM comb in the presence of gain curvature can most simply be described by a phase-driven nonlinear Schrodinger equation as [13] \[\frac{\partial E}{\partial t}=\frac{i}{2}\left(\beta-i\frac{D_{g}}{2}\right)\frac{\partial^{2}E}{\partial z^{2}}+i\gamma|E|^{2}\,(\angle E)\,E-r(|E|^{2}-P_{0}),\] where \(\beta\) represents dispersion, \(D_{g}\) represents gain curvature, the middle term represents the nonlinear phase potential, and the final term resists amplitude modulation. Gain curvature is diffusive and therefore acts as imaginary dispersion. For pulsed lasers, it primarily serves to broaden pulses, but for FM combs the effect is more subtle. As the solution is already maximally-chirped, diffusion serves primarily to modulate the output, converting the self-phase modulation into amplitude modulation. However, the core requirement that allows for stable extendon solutions to form is that the amplitude variation remain small.
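To see concretely why gain curvature acts as diffusion, consider only the linear part of the equation above in the spatial-frequency domain: for a Fourier component at wavenumber \(k\), the exact one-step propagator is \(\exp[(-i\beta/2-D_{g}/4)\,k^{2}\,\Delta t]\), so \(\beta\) produces a pure phase rotation while \(D_{g}\) damps components detuned from \(k=0\) (the gain peak). The minimal sketch below verifies this numerically; the grid, parameter values, and initial field are arbitrary illustrative choices, not the devices or simulations of this work.

```python
# Linear part of the phase-driven NLSE above: beta rotates phases, D_g damps detuned modes.
import numpy as np

Nz, L = 256, 1.0
dz = L / Nz
z = np.arange(Nz) * dz
k = 2 * np.pi * np.fft.fftfreq(Nz, d=dz)          # k_m = 2*pi*m/L for bin m

beta, D_g, dt, steps = 0.05, 2e-3, 1e-3, 400      # normalized, illustrative values
m = 5                                             # detuned mode index
E = 1.0 + 0.5 * np.exp(1j * k[m] * z)             # on-peak component + detuned component

prop = np.exp((-0.5j * beta - 0.25 * D_g) * k**2 * dt)   # exact linear step in k-space
for _ in range(steps):
    E = np.fft.ifft(prop * np.fft.fft(E))

spec = np.abs(np.fft.fft(E)) / Nz
print("on-peak amplitude (should stay ~1):", spec[0])
print("detuned mode decay (observed)     :", spec[m] / 0.5)
print("detuned mode decay (expected)     :", np.exp(-0.25 * D_g * k[m]**2 * dt * steps))
```

The detuned component decays at the rate \(D_{g}k^{2}/4\) while the on-peak component is untouched, which is exactly the spectral-filtering (diffusive) action described above.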
If the gain curvature grows too large, this will lead to unwanted amplitude variation (Fig. 1a). Even when the dispersion is chosen optimally, gain curvature limits the comb bandwidths that can be achieved. For example, Fig. 2a shows the maximum comb bandwidth that can be achieved as a function of effective gain bandwidth (simulated using mean-field theory), demonstrating a linear relationship between the two. For typical QCL parameters this implies a maximum comb bandwidth of approximately a third of the full-width-half-maximum (FWHM) of the gain medium, which is usually much narrower than the range over which the gain medium can lase. Ideally, a comb should be able to lase at any frequencies whose gain exceeds the threshold. Gain curvature can have deleterious effects even if a comb forms. This can be dramatically observed in the case of double-peaked gain media (Fig. 2b), which are frequently created as an accidental byproduct of the material growth process of THz QCLs. Even just 10% of gain modulation can lead to combs whose amplitude varies by _two orders of magnitude_. Without gain curvature, there is no theoretical limit to the bandwidth of an FM comb. The bandwidth of an extendon is inversely proportional to dispersion and can be made arbitrarily large simply by choosing smaller dispersion values or adding more gain [13; 24]. For example, Fig. 2c shows the spectrum and temporal profile of a THz QCL comb biased well above threshold and without gain curvature. A constant-amplitude, linear chirp forms (except at the jump point, where a pulse forms), and the spectrum is highly coherent. However, when gain curvature is enabled (Fig. 2d), severe amplitude fluctuations develop and the spectrum becomes chaotic. No comb forms. When the gain is lowered but curvature is left on (Fig. 2e), the laser instead enters a CW mode, as the diffusion counteracts the cross-steepening nonlinearity responsible for FM comb formation. Not only does gain curvature limit the bandwidth, but it also limits the dynamic range over which combs can form. In order to remove the deleterious effects of gain curvature, in this work we engineer diffusive loss structures that add _positive_ gain curvature. We introduce small disks on both sides of a laser cavity, covering almost the entire length of the cavity. These disks are made of the gain medium, and they introduce resonant loss. Therefore, through an appropriate design, it is possible to reduce the net gain curvature of the overall gain and increase comb bandwidth. Figure 1: a. Effect of gain curvature on FM comb formation. For an FM comb with gain curvature, frequency modulation gives rise to intensity modulation that destabilizes the comb. b. By introducing diffusive loss shaping, in which resonant structures near the cavity reduce the gain curvature and mitigate amplitude fluctuations, unwanted amplitude fluctuations are suppressed. In contrast to the heterogeneous gain medium concept, our strategy is less susceptible to material growth uncertainties, as we are able to tailor our cavities electromagnetically and even for a particular wafer. It can also account for unwanted curvature in homogeneous wafers as well (such as double-peaked media). For this work, we demonstrated our concept on terahertz quantum cascade laser combs made in the metal-metal waveguide platform. Disks are introduced on both sides of a Fabry-Perot (FP) laser cavity. Each of the disks is, in fact, the QCL gain medium in a metal-metal waveguide.
Due to the proximity of these disks to the cavity, light couples into these disks, which results in cavity loss. The loss has resonance characteristics, which can be controlled by varying the disk parameters. Figure 3a shows a schematic view of a unit cell of our design, in which the whole structure is obtained by simply replicating the unit cell periodically. The structure is symmetric with respect to the plane bisecting the FP cavity, which mitigates scattering of the fundamental lateral mode of the FP cavity into higher-order asymmetric modes. ## Results The active region of the QCL we chose for our work uses a regrown version of the same design as that used in Ref. [25]. This is a photon-phonon [26] design that has low threshold and broadband operation, and when processed into tunable VECSELs was able to achieve lasing over a range of 880 GHz. However, as this operation did not all occur at the same bias, it should be considered an upper bound for the maximum possible bandwidth of the structure. To design our shaping structures, we assumed that the unsaturated gain was a Lorentzian with a peak value of 40 cm\({}^{-1}\) and had a FWHM of 0.67 THz [27]. For most of these structures, we seek to add a relatively small amount of engineered loss, ranging from 2 to 8 cm\({}^{-1}\). Though the overall effect on the gain profile is therefore relatively minor (Fig. 3b), this can be sufficient to eliminate the curvature at the top of the gain peak. Figure 3c shows the individual and total loss obtained by simulating a structure (COMSOL) that has two disk pairs. Since the FWHM of the total loss depends on the radii of the disks used, one might include more disk pairs of varying radii to increase the width of the total loss. Moreover, by varying the distance between the disks and the laser cavity (i.e., the coupling distance), losses of different amplitude can be introduced. To verify the efficacy of our approach, we designed diffusive loss shapers with varying amplitudes (2, 4, and 8 cm\({}^{-1}\) of loss), and these were interspersed with Fabry-Perots. All of these structures were designed with double-chirped mirrors to eliminate dispersion (designed to compensate for dispersion of 0.1 ps\({}^{2}\)/mm) [11]. The fabrication of these devices was done following a standard metal-metal waveguide process. Figure 2: Gain curvature theory. a. Comb bandwidths that can be achieved given a gain bandwidth, provided the optimum dispersion is chosen. The linear relationship demonstrates the necessity of low gain curvature. b. Effect of gain inhomogeneity on FM comb spectra. Even if a coherent comb can form, a minor change in the gain can lead to an enormous spectral modulation. c. Effect of gain curvature on simulated THz QCLs. Without gain curvature, a laser biased well above threshold can still form broadband extendon combs, with a linear chirp and an amplitude pulse. d,e. With gain curvature, either chaos or continuous wave operation is observed (at high and low gain values, respectively). ## Discussion In Fig. 4, we show the spectra and beatnote map for a Fabry-Perot (indicated by 0 cm\({}^{-1}\)) and three diffusive devices with 2, 4, and 8 cm\({}^{-1}\) of diffusive loss, respectively. The spectra shown here are the broadest spectra measured with narrow beatnotes for the respective devices, and the corresponding current biases are indicated on the map (vertical red line).
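Before turning to the measurements, the rough numerical sketch below illustrates the curvature-cancellation idea behind these 2-8 cm\({}^{-1}\) designs. It assumes the engineered loss can be approximated by a single Lorentzian resonance centered at the gain peak; the loss linewidth (0.2 THz) and the gain center frequency used here are placeholder values for illustration only, not the simulated disk response or the actual wafer.

```python
# Net gain = Lorentzian gain (peak 40 cm^-1, FWHM 0.67 THz, as assumed for the design)
# minus a narrower Lorentzian loss resonance; report the curvature at the gain peak.
import numpy as np

f = np.linspace(2.8, 3.6, 4001)          # frequency axis (THz)
f0, g0, fwhm_g = 3.2, 40.0, 0.67         # assumed gain center, peak (cm^-1), FWHM (THz)
fwhm_l = 0.2                             # assumed loss FWHM (THz), co-located with the gain peak

def lorentzian(freq, fc, peak, fwhm):
    return peak / (1.0 + ((freq - fc) / (fwhm / 2.0)) ** 2)

for loss_peak in (0.0, 2.0, 4.0, 8.0):
    net = lorentzian(f, f0, g0, fwhm_g) - lorentzian(f, f0, loss_peak, fwhm_l)
    i0 = np.argmin(np.abs(f - f0))
    curv = np.gradient(np.gradient(net, f), f)[i0]   # d^2(net gain)/df^2 at the gain peak
    print(f"{loss_peak:4.1f} cm^-1 loss -> curvature at peak: {curv:9.1f} cm^-1/THz^2")
```

With these placeholder numbers, a few cm\({}^{-1}\) of resonant loss is enough to flatten (and eventually flip the sign of) the net curvature at the peak, in line with the intuition that only a small amount of shaping is needed.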
Spectra were measured with a nitrogen-purged FTIR and pyroelectric detector, so atmospheric absorption lines are apparent and prevent measurement above 3.6 THz, where a deep absorption line is present. The devices tested are 3.9 mm long, 20 \(\mu\)m wide, and contain DCMs with the same parameters. The beatnote map shows that the FP and devices with 2 and 4 cm\({}^{-1}\) losses operate as frequency combs for certain bias ranges. Comparing the spectra for these devices, we find that the comb bandwidth for the 4 cm\({}^{-1}\) loss device is the broadest, about 200 GHz (30%) broader than that for the device without any disks. In addition, the beatnote map of this device is by far the cleanest, operating as a comb essentially across the full dynamic range of the laser (without any additional tuning). A minor dip in the spectrum appears at the designed loss frequency (3.21 THz), but the overall flatness, bandwidth, and dynamic range make this comb much better than the reference Fabry-Perot. The spectrum of the 8-cm\({}^{-1}\)-loss device is too strongly affected by the addition of loss, and as a result it has a large hole in the middle of the spectrum. Additionally, it no longer possesses any stable comb regimes, only incoherent multimode regimes. In order to more fully characterize these devices and to understand their dynamics, coherence and temporal measurements were performed using SWIFTS [11; 18; 19]. A room-temperature Schottky mixer (WM-57 from Virginia Diodes, with response beyond 4 THz) was used to detect the optical beatnote, and high-signal-to-noise ratio (SNR) beatnotes (40 dB) could be achieved (Fig. 5a). Despite the high SNR in the intermediate frequency (IF) chain, the DC-coupled monitor signal is not intended to be low-noise and had limited SNR. Nevertheless, in the broadest-bandwidth comb regime the device has a coherence spectrum that matches the normal spectrum product well at the frequency ranges where the spectrum product is above noise (Fig. 5b). To verify the ultimate bandwidth of the comb and compare with the best theoretical performance, the spectrum was measured with a higher-dynamic-range superconducting bolometer (Fig. 5c). The spectrum is flat at the top and has SWIFTS temporal traces consistent with FM modes of operation. Although the blue side of the spectrum is difficult to precisely confirm owing to the strong atmospheric absorption around 3.6 THz, our results establish a comb bandwidth of at least 700 GHz, which is approximately 80% of the 880 GHz range over which a nominally identical wafer was able to lase [25]. While this comparison is not ideal--even nominally identical wafers can differ, and a single-mode laser has the benefit of bias tuning--it illustrates that diffusive loss shaping can achieve combs that are nearly as broad as the bandwidth of the gain spectrum that is above threshold. Interestingly, diffusively-shaped devices possess unexpected pulsed modes of operation as well. While at most biases the spectra are primarily FM-like (Fig. 5d), when the comb is biased to very near an unstable regime clear pulses form instead (Fig. 5e). The pulses are strikingly large--approximately 3-4 times larger than the FM portion of the wave--and would generally be considered more characteristic of solitons than of extendons. Though QCLs do not generally form pulses readily, THz QCLs are more amenable to pulse formation on account of their longer gain recovery times [28; 29; 30; 31; 8].
Though the limited SNR makes precise evaluation of the origin challenging, it is likely that it originates from the boundary pulse occasionally observed in FM combs [14; 16]. This boundary pulse appears in simulation as well (e.g., see the top-right panel of Fig. d), but is usually not so pronounced. For example, in Fig. d the boundary pulse is twice the height of the FM portion of the wave, not 3-4 times larger. While a full analytical theory that exactly captures the effect of the boundary pulse has not yet been developed, it is known from numerics that multimode behavior arises at the boundary and that the boundary pulse is largest at the edge of stability (which is the case here).

Figure 3: Diffusive loss shaping of terahertz QCL combs. a. Basic concept of the design, in which a periodically-arranged series of disks is evanescently coupled to the laser. By adjusting the radius and distance of the disks, the losses can be precisely controlled. b. Target design of gain medium for a structure designed to add 4 cm\({}^{-1}\) of loss. c. Multiple ring types can be combined to achieve broader loss spectra and to more finely tune the profile. d. Structures that were designed to demonstrate efficacy of the diffusive shaping. FP lasers were interspersed with shaping structures designed to add 2, 4, and 8 cm\({}^{-1}\) of loss.

Going forward, our results imply new degrees of freedom in the engineering of active cavity combs, especially QCLs. Up until now, the development of novel comb states has primarily focused on the dispersion degree of freedom, and to the extent that gain has been engineered it has been through the creation of broadband heterogeneous designs. While heterogeneous designs will likely be critical for achieving octave-spanning combs, achieving sufficiently flat gain spectra through MBE growth alone would be a monumental task. The very act of combining multiple stacks naturally leads to gain variations, and even minor (few cm\({}^{-1}\)) variations can lead to enormous changes in performance. However, loss shaping allows for the possibility of trimming these variations, not just using a fixed fabricated design but even dynamically. Adding bias to these structures would allow the resonant loss to become resonant gain, giving even finer control of the comb spectrum. This strategy is also compatible with every type of cavity, including enhanced Fabry-Perots [32] and rings [33, 34, 35, 36].

## Conclusion

In conclusion, we have demonstrated theoretically and experimentally the pivotal role of diffusion in integrated QCL combs. By introducing the concept of diffusive loss shaping--employing resonant structures closely coupled to the laser cavity--we eliminated gain curvature and achieved comb bandwidths near the intrinsic limit of the medium (80% of the maximum lasing bandwidth of a single-mode laser). We verified the coherence of the combs produced, and surprisingly showed that these devices could yield not just frequency-modulated comb states, but also pulsed states. This research unveils the enormous potential harbored by the simultaneous engineering of both dispersion and diffusion, and the introduction of diffusive loss in active cavities paves the way for more robust, versatile, and advanced chip-scale frequency comb systems.

Figure 4: Effect of diffusive loss shaping on terahertz QCL combs, with spectra shown on the left, corresponding beatnotes shown in the insets, and beatnote maps shown on the right.
Starting from the top, the structures have 0, 2, 4, and 8 cm\({}^{-1}\) of loss shaping. The spectrum and beatnote map improve significantly for the device with 4 cm\({}^{-1}\) of shaping, but the final device adds too much loss: the spectrum bifurcates and no stable comb regime was found.

## Acknowledgments

D.B. acknowledges support from ONR grant N00014-21-1-2735, AFOSR grant no. FA9550-20-1-0192, and NSF grant ECCS-2046772; this research is funded in part by the Gordon and Betty Moore Foundation through Grant GBMF11446 to the University of Texas at Austin to support the work of D.B. This work was performed, in part, at the Center for Integrated Nanotechnologies, an Office of Science User Facility operated for the US Department of Energy (DOE) Office of Science. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly-owned subsidiary of Honeywell International, Inc., for the US DOE's National Nuclear Security Administration under contract DE-NA-0003525. The views expressed in the article do not necessarily represent the views of the US DOE or the United States Government.
2309.13889
Resilient State Estimation for Nonlinear Discrete-Time Systems via Input and State Interval Observer Synthesis
This paper addresses the problem of resilient state estimation and attack reconstruction for bounded-error nonlinear discrete-time systems with nonlinear observations/ constraints, where both sensors and actuators can be compromised by false data injection attack signals/unknown inputs. By leveraging mixed-monotone decomposition of nonlinear functions, as well as affine parallel outer-approximation of the observation functions, along with introducing auxiliary states to cancel out the effect of the attacks/unknown inputs, our proposed observer recursively computes interval estimates that by construction, contain the true states and unknown inputs of the system. Moreover, we provide several semi-definite programs to synthesize observer gains to ensure input-to-state stability of the proposed observer and optimality of the design in the sense of minimum $\mathcal{H}_{\infty}$ gain.
Mohammad Khajenejad, Zeyuan Jin, Thach Ngoc Dinh, Sze Zheng Yong
2023-09-25T06:03:15Z
http://arxiv.org/abs/2309.13889v1
Resilient State Estimation for Nonlinear Discrete-Time Systems via Input and State Interval Observer Synthesis ###### Abstract This paper addresses the problem of resilient state estimation and attack reconstruction for bounded-error nonlinear discrete-time systems with nonlinear observations/constraints, where both sensors and actuators can be compromised by false data injection attack signals/unknown inputs. By leveraging mixed-monotone decomposition of nonlinear functions, as well as affine parallel outer-approximation of the observation functions, along with introducing auxiliary states to cancel out the effect of the attacks/unknown inputs, our proposed observer recursively computes interval estimates that by construction, contain the true states and unknown inputs of the system. Moreover, we provide several semi-definite programs to synthesize observer gains to ensure input-to-state stability of the proposed observer and optimality of the design in the sense of minimum \(\mathcal{H}_{\infty}\) gain. ## I Introduction State estimation and unknown input reconstruction are indispensable in various engineering applications such as aircraft tracking, fault detection, attack detection and mitigation in cyber-physical systems (CPS) and urban transportation [1, 2, 3]. Particularly, set-membership approaches have been proposed for bounded-error systems to provide hard accuracy bounds, which is especially useful for obtaining robustness guarantees for safety-critical systems. Moreover, since attackers may be strategic in adversarial settings, the ability to simultaneously estimate states and inputs without imposing any assumptions on the unknown inputs/attack signals is desirable and often crucial. _Literature review._ Numerous studies in the literature have investigated _secure estimation_, i.e., how to accurately estimate the states of a system when it is under attack or subject to adversarial signals. For instance, secure state estimation and control problem was addressed in the presence of false data injection attacks on both the actuators and sensors in [4], in which a \(\chi^{2}\) detector was proposed to detect malicious attacks. The research in [5] proposed a sliding-mode observer to simultaneously estimate system states and attacks, while the work in [6] provided a projected sliding-mode observer-based estimation approach to reconstruct system states. Further, the work in [7] reconstructed attack signals from the equivalent output injection signal using a sliding-mode observer, while in [8], an attack was considered as an auxiliary state and estimated by employing a robust switching Luenberger observer assuming sparsity. However, all the aforementioned works considered stochastic/Gaussian noise and hence do not apply to the bounded-error setting we consider in this paper, where noise/disturbance signals are assumed to be distribution-free and bounded. A related body of literature that could be applied to resilient state estimation in the bounded-error setting is that of unknown input interval observers. Particularly, the works in [9, 10, 11] considered the problem of designing unknown input interval observers for continuous-time linear parameter varying (LPV), uncertain linear time-invariant (LTI) and discrete-time switched linear systems, respectively, where the authors in [9] formulated the necessary Metzler property as part of a semi-definite program. A similar problem was considered for nonlinear continuous-time systems with linear observations in [12]. 
However, these approaches are not suitable for general discrete-time nonlinear systems and the unknown input signal does not affect the output/measurement equation (needed for representing false data injection attacks on the sensors) in either of the works in [9, 10, 11, 12]. On the other hand, while our previous works [13, 14] do consider the design of state and unknown input interval observers for nonlinear discrete-time systems with nonlinear observations, no stabilizing gains were synthesized in [13, 14]. We aim to address this shortcoming in this paper. _Contributions._ By leveraging a combination of mixed-monotone decomposition of nonlinear functions [15, 16] and parallel affine outer-approximation of observation functions [17], we synthesize a resilient interval observer, i.e., a discrete-time dynamical system that by construction, _simultaneously_ returns interval-valued estimates of states and unknown inputs (representing false data injection signals on both the actuators and sensors) for a broad range of nonlinear discrete-time systems with nonlinear observations. Our proposed design is a significant improvement to our previous input and state interval observer designs [13, 14], in which no stabilizing gains were considered and so the stability of the previous observer designs only hinged upon some dynamical systems properties. Moreover, in contrast to many unknown input (interval) observer designs in the literature, our design considers arbitrary unknown input signals with no assumptions of _a priori_ known intervals, being stochastic with zero mean (as is often assumed for noise) or bounded. Further, we provide sufficient conditions for the input-to-state-stability of the proposed observer, which at the same time ensures the optimality of the design in the sense of minimum \(\mathcal{H}_{\infty}\) gain by solving semi-definite programs. ## II Preliminaries _Notation._\(\vee\) denotes the logical disjunction (the OR truth-functional operator). \(\mathbb{R}^{n},\mathbb{R}^{n\times p},\mathbb{D}_{n},\mathbb{N},\mathbb{N}_{n}, \mathbb{R}_{\geq 0}\) and \(\mathbb{R}_{>0}\) denote the \(n\)-dimensional Euclidean space and the sets of \(n\) by \(p\) matrices, \(n\) by \(n\) diagonal matrices, natural numbers (including 0), natural numbers from 1 to \(n\), non-negative and positive real numbers, respectively, while \(\mathbb{M}_{n}\) denotes the set of all \(n\) by \(n\) Metzler matrices, i.e., square matrices whose off-diagonal elements are non-negative. Euclidean norm of a vector \(x\in\mathbb{R}^{n}\) is denoted by \(\|x\|_{2}\triangleq\sqrt{x^{\top}x}\). For \(M\in\mathbb{R}^{n\times p}\), \(M_{ij}\) denotes \(M\)'s entry in the \(i\)'th row and the \(j\)'th column, \(M^{\oplus}\triangleq\max(M,\mathbf{0}_{n,p})\), \(M^{\oplus}=M^{\oplus}-M\) and \(|M|\triangleq M^{\oplus}+M^{\ominus}\), where \(\mathbf{0}_{n,p}\) is the zero matrix in \(\mathbb{R}^{n\times p}\), while \(\mathrm{sgn}(M)\in\mathbb{R}^{n\times p}\) is the element-wise sign of \(M\) with \(\mathrm{sgn}(M_{ij})=1\) if \(M_{ij}\geq 0\) and \(\mathrm{sgn}(M_{ij})=-1\), otherwise. \(M\succ 0\) and \(M\prec 0\) (or \(M\succeq 0\) and \(M\preceq 0\)) denote that \(M\) is positive and negative (semi-)definite, respectively. Further, a function \(f:S\subseteq\mathbb{R}^{n}\rightarrow\mathbb{R}\), where \(0\in S\), is positive definite if \(f(x)>0\) for all \(x\in S\backslash\{0\}\), and \(f(0)=0\). 
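As a small illustration of how the \(M^{\oplus}\)/\(M^{\ominus}\) notation introduced above is used throughout the paper to bound linear maps over intervals, the following sketch (with assumed example matrices, not taken from the paper) numerically spot-checks the elementary box bound \(Mx\in[M^{\oplus}\underline{x}-M^{\ominus}\overline{x},\ M^{\oplus}\overline{x}-M^{\ominus}\underline{x}]\) for all \(x\in[\underline{x},\overline{x}]\).

```python
import numpy as np

# Interval propagation through a linear map using the M^+/M^- split: for
# xl <= x <= xu (componentwise), M x lies in [M^+ xl - M^- xu, M^+ xu - M^- xl].
# The matrix and interval below are assumed examples for illustration only.
M = np.array([[1.0, -2.0], [0.5, 3.0]])
Mp = np.maximum(M, 0.0)        # M^+  (elementwise max with zero)
Mm = Mp - M                    # M^-  so that M = M^+ - M^- and |M| = M^+ + M^-

xl, xu = np.array([-1.0, 0.0]), np.array([1.0, 2.0])
lo = Mp @ xl - Mm @ xu
hi = Mp @ xu - Mm @ xl

# Spot-check with random points inside the interval.
rng = np.random.default_rng(1)
xs = xl + (xu - xl) * rng.random((1000, 2))
ys = xs @ M.T
print(np.all(ys >= lo - 1e-12) and np.all(ys <= hi + 1e-12))  # True
```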
Finally, an interval \(\mathcal{I}\triangleq[\underline{z},\overline{z}]\subset\mathbb{R}^{n}\) is the set of all real vectors \(z\in\mathbb{R}^{n_{z}}\) that satisfies \(\underline{z}\leq z\leq\overline{z}\) (component-wise), where \(\|\overline{z}-\underline{z}\|_{\infty}\triangleq\max_{i\in\{1,\cdots,n_{z}\}}| z_{i}|\) is the interval width of \(\mathcal{I}\). Next, we review some related results and definitions. **Proposition 1** (Jacobian Sign-Stable Decomposition [15, Proposition 2]).: _If a mapping \(f:\mathcal{Z}\subset\mathbb{R}^{n_{z}}\rightarrow\mathbb{R}^{p}\) has Jacobian matrices satisfying \(J^{f}(x)\in[\underline{J}^{f},\overline{J}^{f}]\), \(\forall z\in\mathcal{Z}\), where \(\underline{J}^{f},\overline{J}^{f}\in\mathbb{R}^{p\times n_{z}}\) are known matrices, then the mapping \(f\) can be decomposed into an additive remainder-form:_ \[\forall z\in\mathcal{Z},f(z)=Hz+\mu(z), \tag{1}\] _where the matrix \(H\in\mathbb{R}^{p\times n_{z}}\) satisfies_ \[\forall(i,j)\in\mathbb{N}_{p}\times\mathbb{N}_{n_{z}},H_{ij}=\underline{J}^{f }_{ij}\ \lor H_{ij}=\overline{J}^{f}_{i,j}, \tag{2}\] _and \(\mu(\cdot)\) and \(Hz\) are nonlinear and linear Jacobian sign-stable (JSS) mappings, respectively, i.e., the signs of each element of their Jacobian matrices do not change within their domains (\(J^{\nu}_{ij}(\cdot)\geq 0\) or \(J^{\nu}_{ij}(\cdot)\leq 0\), \(\nu(z)\in\{\mu(z),Hz\}\))._ **Definition 1** (Mixed-Monotonicity and Decomposition Functions).: _[_18_, Definition 1]__, [19, Definition 4] Consider the discrete-time dynamical system \(x_{k+1}=g(x_{k})\), with initial state \(x_{0}\in\mathcal{X}_{0}\triangleq[\underline{x}_{0},\overline{x}_{0}]\subset \mathbb{R}^{n}\). Furthermore, \(g:\mathcal{X}\subset\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is the vector field, and \(\mathcal{X}\) is the entire state space. A function \(g_{d}:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}^{n}\) is a discrete-time mixed-monotone decomposition mapping for the vector field \(g\) if it satisfies the following conditions: i) \(g_{d}(x,x)=g(x)\), ii) \(g_{d}\) is monotone increasing in its first argument, i.e., \(\hat{x}\geq x\Rightarrow g_{d}(\hat{x},x^{\prime})\geq g_{d}(x,x^{\prime})\), and iii) \(g_{d}\) is monotone decreasing in its second argument, i.e., \(\hat{x}\geq x\Rightarrow g_{d}(x^{\prime},\hat{x})\leq g_{d}(x^{\prime},x)\)._ **Proposition 2** (Tight and Tractable Decomposition Functions for JSS Mappings).: _[_15_, Proposition 4 & Lemma 3]_ _Suppose \(\mu:\mathcal{Z}\subset\mathbb{R}^{n_{z}}\rightarrow\mathbb{R}^{p}\) is a JSS mapping on its domain. 
Then, for each \(\mu_{i}\), \(i\in\mathbb{N}_{p}\), its tight decomposition function is:_ \[\mu_{d,i}(z_{1},z_{2})=\mu_{i}(D^{i}z_{1}+(I_{n}-D^{i})z_{2}), \tag{3}\] _for any ordered \(z_{1},z_{2}\in\mathcal{Z}\), with a binary diagonal matrix \(D^{i}\) that is determined by the vertex of the interval \([z_{1},z_{2}]\) that minimizes the function \(\mu_{i}\) (if \(z_{1}<z_{2}\)) or the vertex of the interval \([z_{2},z_{1}]\) that maximizes \(\mu_{i}\) (if \(z_{2}\leq z_{1}\)), i.e.,_ \[D^{i}=\mathrm{diag}(\max(\mathrm{sgn}(\overline{J}^{\mu}_{i}),\mathbf{0}_{1,n_{z }})).\] _Moreover, if the JSS mapping \(\mu\) is a remainder term of a JSS decomposition of a function \(f\) as discussed in Proposition 1, then for any interval domain \(\underline{z}\leq z\leq\overline{z}\), with \(z,\underline{z},\overline{z}\in\mathcal{Z}\) and \(\underline{\varepsilon}\triangleq\overline{z}-\underline{z}\), the following inequality holds: \(\delta^{\underline{\varepsilon}}_{d}\triangleq\mu_{d}(\overline{z},\underline{z} )-\mu_{d}(\underline{z},\overline{z})\leq\overline{F}_{\mu}\varepsilon\), with \(\overline{F}_{\mu}\triangleq 2\max(\overline{J}_{f}-H,\mathbf{0}_{p,n_{z}})-\underline{J}_{f}+H\) and \(H\in\mathbb{R}^{p\times n_{z}}\) given in Proposition 1._ Consequently, by applying Proposition 2 to the Jacobian sign-stable decomposition obtained using Proposition 1, a tight and tractable decomposition function can be obtained (cf. details in [15]). Furthermore, in the case that the mapping is not JSS, a tractable algorithm has been introduced in [20, Algorithm 1] to compute _tight remainder-form decomposition functions_ for a very broad class of nonlinear functions. **Definition 2** (Embedding System).: _[_16_, Definition 6]_ _For a discrete-time dynamical system \(x_{k+1}=g(x_{k})\) defined over mapping \(g:\mathcal{X}\subset\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) with a corresponding decomposition function \(g_{d}(\cdot)\), its embedding system is a \(2n\)-dimensional system with initial condition \(\left[\overline{x}_{0}^{\top}\ \ \underline{x}_{0}^{\top}\right]^{\top}\) defined as \(\left[\underline{x}_{k+1}^{\top}\ \overline{x}_{k+1}^{\top}\right]^{\top}=\left[ \underline{g}_{d}^{\top}(\underline{x}_{k},\overline{x}_{k})\ \overline{g}_{d}^{\top}(\underline{x}_{k},\underline{x}_{k})\right]\.\)_ Note that according to [20, Proposition 3], the embedding system in Definition 2 with decomposition function \(g_{d}\) corresponding to the dynamics \(x_{k+1}=g(x_{k})\) has a _stateramer property_, i.e., its solution is guaranteed to frame the unknown state trajectory \(x_{k}\), i.e., \(\underline{x}_{k}\leq x_{k}\leq\overline{x}_{k}\) for all \(k\in\mathbb{N}\). Next, we will briefly restate our previous result in [17], tailoring it specifically for intervals to help with computing affine bounding functions for our functions. **Proposition 3**.: _[_17_, Affine Outer-Approximation]_ _Consider the function \(g(.):\mathcal{B}\subset\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\), where \(\mathcal{B}\) is an interval with \(\overline{x},\underline{x},\mathcal{V}_{\mathcal{B}}\) being its maximal, minimal and set of vertices, respectively. 
Suppose \(\overline{A}_{\mathcal{B}},\underline{A}_{\mathcal{B}},\overline{e}_{\mathcal{B }},\underline{e}_{\mathcal{B}},\theta_{\mathcal{B}}\) is a solution of the following linear program (LP):_ \[\min_{\theta,\overline{A},\underline{A}\underline{x},\underline{ \varepsilon}} \theta \tag{4}\] \[s.t\ \underline{A}x_{s}+\underline{\varepsilon}+\sigma\leq g(x_{s}) \leq\overline{A}x_{s}+\overline{\varepsilon}-\sigma,\] \[(\overline{A}-\underline{A})x_{s}+\overline{\varepsilon}- \underline{\varepsilon}-2\sigma\leq\theta\mathbf{1}_{m},\ \forall x_{s}\in\mathcal{V}_{\mathcal{B}},\] _where \(\mathbf{1} ## III Problem Formulation _System Assumptions._ Consider the nonlinear discrete-time system with unknown inputs and bounded noise \[\begin{array}{rl}x_{k+1}&=f(x_{k})+Ww_{k}+Gd_{k},\\ y_{k}&=h(x_{k})+Vv_{k}+Hd_{k},\end{array} \tag{5}\] where at time \(k\in\mathbb{N}\), \(x_{k}\in\mathcal{X}\subset\mathbb{R}^{n}\), \(d_{k}\in\mathbb{R}^{p}\) and \(y_{k}\in\mathbb{R}^{l}\) are the state vector, unknown input vector, and measurement vector, respectively. The process and measurement noise signals \(w_{k}\in\mathbb{R}^{n}\) and \(v_{k}\in\mathbb{R}^{l}\) are assumed to be bounded, i.e., \(w_{k}\in\mathcal{W}\triangleq[\underline{w},\overline{w}]\), \(v_{k}\in\mathcal{V}\triangleq[\underline{v},\overline{v}]\) with known lower and upper bounds, \(\underline{w}\), \(\overline{w}\) and \(\underline{v}\), \(\overline{v}\), respectively. We also assume that lower and upper bounds for the initial state, \(\underline{x}_{0}\) and \(\overline{x}_{0}\), are available, i.e., \(\underline{x}_{0}\leq x_{0}\leq\overline{x}_{0}\). The functions \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\), \(h:\mathbb{R}^{n}\rightarrow\mathbb{R}^{l}\) and matrices \(W\), \(V\), \(G\) and \(H\) are known and of appropriate dimensions, where \(G\) and \(H\) encode the _locations_ at which the unknown input (or attack) signal can affect the system dynamics and measurements. Note that no assumption is made on \(H\) to be either the zero matrix (no direct feedthrough), or to have full column rank when there is direct feedthrough (in contrast to [13]). _Unknown Input (or Attack) Signal Assumptions._ The unknown inputs \(d_{k}\) (representing false data injection attack signals) are not constrained to follow any model nor to be a signal of any type (random or strategic), hence no prior 'useful' knowledge of the dynamics of \(d_{k}\) is available (independent of \(\{d_{\ell}\}\)\(\forall k\neq\ell\), \(\{w_{\ell}\}\) and \(\{v_{\ell}\}\)\(\forall\ell\)). We also do not assume that \(d_{k}\) is bounded or has known bounds and thus, \(d_{k}\) is suitable for representing adversarial attack signals. Next, we briefly introduce a similar system transformation as in [3], which will be used later in our observer structure. _System Transformation._ Let \(p_{H}\triangleq\mathrm{rk}(H)\). Similar to [3], by applying singular value decomposition, we have \(H=\begin{bmatrix}U_{1}&U_{2}\end{bmatrix}\begin{bmatrix}\Xi&0\\ 0&0\end{bmatrix}\begin{bmatrix}E_{1}^{\top}\end{bmatrix}\) with \(E_{1}\in\mathbb{R}^{p\times p_{H}}\), \(E_{2}\in\mathbb{R}^{p\times(p-p_{H})}\), \(\Xi\in\mathbb{R}^{p_{H}\times p_{H}}\) (a diagonal matrix of full rank; so we can define \(S\triangleq\Xi^{-1}\)), \(U_{1}\in\mathbb{R}^{l\times p_{H}}\) and \(U_{2}\in\mathbb{R}^{l\times(l-p_{H})}\). Then, since \(D\triangleq\begin{bmatrix}E_{1}&E_{2}\end{bmatrix}\) is unitary: \[d_{k}=E_{1}d_{1,k}+E_{2}d_{2,k},\ d_{1,k}=E_{1}^{\top}d_{k},\ d_{2,k}=E_{2}^{ \top}d_{k}. 
\tag{6}\] Finally, by defining \(T_{1}\triangleq U_{1}^{\top},T_{2}\triangleq U_{2}^{\top}\), the output equation can be decoupled, by which system (5) can be rewritten as: \[\begin{array}{rl}x_{k+1}&=f(x_{k})+Ww_{k}+G_{1}d_{1,k}+G_{2}d_{2,k},\\ z_{1,k}&=h_{1}(x_{k})+V_{1}v_{k}+\Xi d_{1,k},\\ z_{2,k}&=h_{2}(x_{k})+V_{2}v_{k},\end{array} \tag{7}\] where \(h_{i}(x_{k})=T_{i}h(x_{k})\), \(\forall i\in\{1,2\}\) and \(K_{i}\triangleq T_{i}K_{i},\forall K\in\{V,G\},\forall i\in\{1,2\}\). Moreover, we assume the following, which is satisfied for a broad range of nonlinear functions [21]: **Assumption 1**.: _Functions \(f,h\) have bounded Jacobians over the state space \(\mathcal{X}\) with known/computable Jacobian bounds._ **Assumption 2**.: _The JSS decomposition of \(h_{2}(x_{k})\) via Proposition 1 given by \(h_{2}(x_{k})=C_{2}x_{k}+\psi_{2}(x_{k})\) is such that \(\psi_{2}\) is JSS and further, \(C_{2}G_{2}\) has full column ranka. Consequently, there exists \(M_{2}\triangleq(C_{2}G_{2})^{\dagger}\) such that \(M_{2}C_{2}G_{2}=I\)._ Footnote a: In the special case that \(G=0\), we would require \(G_{2}\) to be empty (and this does happen when \(H\) has full rank), in which case \(C_{2}G_{2}\) being full rank is satisfied by assumption. _Assumption 3_.: _(Only needed when the observations are nonlinear, i.e., if \(\psi_{2}(x_{k})\neq 0\)) The entire state space \(\mathcal{X}\subset\mathbb{R}^{n}\) is bounded. Moreover, \(A_{g}\) is invertible, where \(A_{g}\in\mathbb{R}^{n\times n}\) is the parallel affine outer-approximation slope (cf. Proposition 3 and Corollary 1) of the function \(g(x)\triangleq x+G_{2}M_{2}\psi_{2}(x)\) over the entire state space._ Further, we formally define the notions of _framers_, _correctness_ and _stability_ that are used throughout the paper. **Definition 3** (Interval Framers).: _Given the nonlinear plant (5) (equivalently (7)), the sequences \(\{\overline{x}_{k},\underline{x}_{k}\}_{k=0}^{\infty}\subset\mathbb{R}^{n}\) and \(\{\overline{d}_{k},\underline{d}_{k}\}_{k=0}^{\infty}\subset\mathbb{R}^{p}\) are called upper and lower frames for the states and inputs of the system in (5), respectively, if_ \[\forall k\in\mathbb{N},\forall w_{k}\in\mathcal{W},\forall v_{k}\in\mathcal{V},\ \underline{\nu}_{k}\leq\nu_{k}\leq\overline{\nu}_{k},\forall\nu\in\{x,d\}.\] _In other words, starting from the initial interval \(\underline{x}_{0}\leq x_{0}\leq\overline{x}_{0}\), the true state of the system in (5), \(x_{k}\), and the unknown input \(d_{k}\) are guaranteed to evolve within the interval flow-pipe \([\underline{x}_{k},\overline{x}_{k}]\) and bounded within the interval \([\underline{d}_{k},\overline{d}_{k}]\), for all \((k,w_{k},v_{k})\in\mathbb{N}\times\mathcal{W}\times\mathcal{V}\), respectively. Finally, any dynamical system (i.e., tractable algorithm) that returns upper and lower frames for the states and unknown inputs of system 5 is called a resilient interval planner for (5)._ **Definition 4** (Framer Error).: _Given state and input frames \(\{\underline{x}_{k}\leq\overline{x}_{k}\}_{k=0}^{\infty}\) and \(\{\underline{d}_{k}\leq\overline{d}_{k}\}_{k=1}^{\infty}\), the sequences \(\{e_{k}^{x}\triangleq\overline{x}_{k}-\underline{x}_{k}\}_{k=0}^{\infty}\) and \(\{e_{k}^{x}\triangleq\overline{d}_{k}-\underline{d}_{k}\}_{k=1}^{\infty}\) are called the state and input framer errors, respectively. 
It easily follows from Definition 3 that \(e_{k}^{e}\geq 0,\forall k\in\mathbb{N},\forall\nu\in\{x,d\}\)._ **Definition 5** (Input-to-State Stability and Interval Observer).: _An interval framer is input-to-state stable (ISS), if the framer state error (cf. Definition 4) is bounded as follows:_ \[\forall k\in\mathbb{N},\ \|e_{k}^{x}\|_{2}\leq\beta(\|e_{0}^{x}\|_{2},k)+\alpha(\| \delta\|_{\ell_{\infty}}), \tag{8}\] _where \(\delta\triangleq[(\delta^{w})^{\top}\ (\delta^{v})^{\top}]^{\top}\triangleq[( \overline{w}-\underline{w})^{\top}\ (\overline{v}-\underline{v})^{\top}]^{\top}\), \(\beta\) and \(\alpha\) are functions of classesb\(\mathcal{K}\mathcal{L}\) and \(\mathcal{K}_{\infty}\), respectively, and \(\|\delta\|_{\ell_{\infty}}\triangleq\sup_{k\in\mathbb{N}}\|\delta_{k}\|_{2}=\| \delta\|_{2}\) is the \(\ell_{\infty}\) signal norm. An ISS resilient interval framer is called a resilient interval observer._ Footnote b: A function \(\alpha:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\) is of class \(\mathcal{K}\) if it is continuous, positive definite, and strictly increasing and is of class \(\mathcal{K}_{\infty}\) if it is unbounded. Moreover, \(\lambda:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\) is of class \(\mathcal{K}\mathcal{L}\) if for each fixed \(t\geq 0\), \(\lambda(\cdot,t)\) is of class \(\mathcal{K}\) and for each fixed \(s\geq 0\), \(\lambda(s,t)\) decreases to zero as \(t\rightarrow\infty\). **Definition 6** (\(\mathcal{H}_{ ### _Interval Framer Design_ Our strategy for designing resilient interval observers in the presence of unknown inputs has three steps. First, we obtain an equivalent representation of the system in (5) by introducing some auxiliary state variables, such that the equivalent system is not affected by the attack signal. Then, inspired by our previous work on synthesizing interval observers for nonlinear systems [15, 16] we will design embedding systems (cf. Definition 2) for the equivalent system representation, which returns state frames. Finally, we obtain input framers (with a one-step delay since \(d_{2,k}\) does not appear in the measurements \(z_{1,k}\) and \(z_{2,k}\) in (7)) as functions of the computed state framers. First, note that from (7) and with \(S\triangleq\Xi^{-1}\), \(d_{1,k}\) can be computed as a function of the state at current time as follows: \[d_{1,k}=S(z_{1,k}-h_{1}(x_{k})-V_{1}v_{k}). \tag{9}\] Next, we introduce an auxiliary state variable as: \[\xi_{k}\triangleq x_{k}-N(z_{2,k}-V_{2}v_{k}-\psi_{2}(x_{k}))=(I-NC_{2})x_{k}, \tag{10}\] where the equality follows from (7) and Assumption 2. Moreover, \(N\in\mathbb{R}^{n\times(l-\bar{p})}\) is a to-be-designed gain to cancel out the effect of the unknown input in the state equation. This is done through the following lemma. **Lemma 1**.: _Suppose Assumption 2 holds and let \(N=G_{2}M_{2}=G_{2}(C_{2}G_{2})^{\dagger}\) and \(S\triangleq\Xi^{-1}\). Then, the value of the auxiliary state \(\xi_{k}\) at time step \(k+1\) can be computed as:_ \[\xi_{k+1}\!\!=\!\!(I\!-\!NC_{2})(f(x_{k})\!+\!G_{1}S(z_{1,k}\!-\!h_{1}(x_{k})\! -\!V_{1}v_{k})\!+\!\!Ww_{k}). \tag{11}\] Proof.: By plugging \(d_{1,k}\) from (9) into (7), we obtain \[x_{k+1}\!\!=\!\!f(x_{k})\!\!+\!\!G_{1}S(z_{1,k}\!-\!h_{1}(x_{k})\! -\!V_{1}v_{k})\!\!+\!\!Ww_{k}\!\!+\!\!G_{2}d_{2,k}. \tag{12}\] This, together with the second equality in (10) and the above choice of \(N\) such that \((I\!-\!NC_{2})G_{2}=0\), returns (11). 
The evolution of the auxiliary state \(\xi_{k}\) in (11) is independent of the unknown input and hence, we can compute propagated framers for \(\xi_{k}\) leveraging embedding systems (cf. Proposition 2). However, we do not have a way of directly retrieving the propagated framers for the original states, i.e., \(\{\underline{x}_{k},\overline{x}_{k}\}\) in terms of \(\{\underline{\xi}_{k},\overline{\xi}_{k}\}\) from the second equality of (10), since \(I-NC_{2}=I-G_{2}(C_{2}G_{2})^{\dagger}C_{2}\) can be shown to be not invertible. To overcome this difficulty, given Assumption 3, we introduce a new auxiliary state: \[\gamma_{k}\triangleq x_{k}-\Lambda(N(z_{2,k}-V_{2}v_{k})-\epsilon_{k}), \tag{13}\] with \(\Lambda\triangleq A_{q}^{-1}\), where \(A_{g}\) and \(\epsilon_{k}\in[\underline{\epsilon},\overline{\epsilon}]\) are parallel affine outer-approximation slope and approximation error of the mapping \(g(x)\triangleq x+G_{2}M_{2}\psi_{2}(x)\) on the entire space \(\mathcal{X}\) (cf. Proposition 3, Corollary 1 and Assumption 3). **Proposition 4**.: _Given Assumption 3, the two auxiliary states \(\gamma_{k}\) and \(\xi_{k}\) are linearly related as:_ \[\gamma_{k}=\Lambda\xi_{k}. \tag{14}\] Proof.: Computing parallel affine outer-approximation of the mapping \(g(x_{k})=A_{g}x_{k}+\epsilon_{k}\) and applying (10), we obtain \[g(x_{k})\triangleq x_{k}+N\psi_{2}(x)=\xi_{k}+N(z_{2,k}-V_{2}v_{k})\] \[\Rightarrow A_{g}x_{k}=\xi_{k}+N(z_{2,k}-V_{2}v_{k})-\epsilon_{k}, \quad\epsilon_{k}\in[\underline{\epsilon},\overline{\epsilon}],\] from which and given Assumption 3 (that \(A_{g}\) is invertible, with \(\Lambda=A_{g}^{-1}\)), we have \[x_{k}=\Lambda(\xi_{k}+N(z_{2,k}-V_{2}v_{k})-\epsilon_{k}),\quad\epsilon_{k}\in[ \underline{\epsilon},\overline{\epsilon}]. \tag{15}\] Plugging \(x_{k}\) from (15) into (13) returns the results. We are now ready to propose an input and state resilient interval framer, i.e., the following discrete-time dynamical system (16)-(18), which by construction, outputs/returns framers for the original states \(\{x_{k}\}_{k=0}^{\infty}\) and the unknown input signal \(\{d_{k}\}_{k=1}^{\infty}\) of system (5). The details of the framer construction/design will be provided in the proof of Theorem 1. The proposed resilient interval framer is as follows: \[\begin{array}{l}\underline{\gamma}_{k+1}\!\!=\!\!(A\!-\!LC_{2})^{\oplus} \underline{\gamma}_{k}\!-\!(A\!-\!LC_{2})^{\ominus}\overline{\gamma}_{k}\!+\!\! \rho_{d}(\underline{x}_{k},\overline{x}_{k})\\ \!+D^{\ominus}\underline{\epsilon}\!-\!D^{\ominus}\underline{\epsilon}\!+\!\!L^{ \ominus}\psi_{2,d}(\underline{x}_{k},\overline{x}_{k})\!-\!L^{\ominus}\psi_{2, d}(\overline{x}_{k},\underline{x}_{k})\\ \!+\!V^{\ominus}\underline{v}\!-\!V^{\ominus}\overline{v}\!+\!W^{\ominus} \underline{w}-\!W^{\ominus}\overline{w}+\hat{z}_{k},\\ \!\overline{\gamma}_{k+1}\!\!=\!\!(A\!-\!LC_{2})^{\oplus}\overline{\gamma}_{k}\! 
-\!(A\!-\!LC_{2})^{\ominus}\underline{\gamma}_{k}\!+\!\rho_{d}(\overline{x}_{k },\underline{x}_{k})\\ \!+\!D^{\ominus}\overline{\epsilon}\!-\!D^{\ominus}\underline{\epsilon}\!+\!\!L^{ \ominus}\psi_{2,d}(\overline{x}_{k},\underline{x}_{k})\!-\!L^{\ominus}\psi_{2, d}(\underline{x}_{k},\overline{x}_{k})\\ \!+\!\hat{V}^{\ominus}\overline{v}-\hat{V}^{\ominus}\underline{v}+\hat{W}^{ \oplus}\overline{w}-\hat{V}^{\ominus}\underline{w}\!+\!\hat{z}_{k},\\ \underline{x}_{k}\!\!=\!\!\underline{\gamma}_{k}\!+\!\Lambda Nz_{2,k}\!+\!\Lambda^{\ominus} \underline{\epsilon}\!-\!\Lambda^{\ominus}\overline{v}\!+\!(ANV_{2})^{\oplus} \!\underline{v}\!-\!(ANV_{2})^{\oplus}\overline{v},\\ \overline{x}_{k}\!\!=\!\overline{\gamma}_{k}\!+\!\Lambda Nz_{2,k}\!+\!\Lambda^{\ominus} \underline{\epsilon}\!-\!\Lambda^{\ominus}\underline{\epsilon}\!+\!(ANV_{2})^{ \ominus}\overline{v}\!-\!(ANV_{2})^{\oplus}\!\underline{v},\\ \underline{d}_{k-1}\!\!=\!\Phi^{\oplus}\underline{x}_{k}-\Phi^{\ominus}\overline{x}_ {k}+\kappa_{d}(\underline{x}_{k-1},\overline{x}_{k-1})\!+\!A_{z}z_{1,k-1}\\ \!+A_{\ominus}^{\oplus}\overline{v}-A_{\ominus}^{\ominus}\overline{v}+\Phi^{ \ominus}\underline{w}-\Phi^{\ominus}\overline{w},\\ \overline{d}_{k-1}\!\!=\!\Phi^{\oplus}\overline{x}_{k}-\Phi^{\ominus}\underline{x}_ {k}+\kappa_{d}(\overline{x}_{k-1},\underline{x}_{k-1})+A_{z}z_{1,k-1}\\ \!+A_{\ominus}^{\ominus}\overline{v}-A_{\ominus}^{\ominus}\underline{v}+\Phi^{ \ominus}\overline{w}-\Phi^{\ominus}\underline{w},\end{array} \tag{16}\] where \(S\triangleq\Xi^{-1}\), \(N=G_{2}M_{2}\) and \(\Lambda\triangleq A_{g}^{-1}\). Furthermore, \(L\in\mathbb{R}^{n\times(l-\bar{p}_{H})}\) is an arbitrary matrix (observer gain) which will be designed later in Theorem 2 to yield stability and optimality of the proposed framers. Moreover, \(A\in\mathbb{R}^{n\times n}\) and \(\rho:\mathcal{X}\subset\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) are obtained by applying JSS decompositions (cf. Proposition 1) on the mapping \(\tilde{f}(x)\triangleq\Lambda(I-NC_{2})(f(x)-G_{1}Sh_{1}(x))\), while \(\psi_{2,d}\) and \(\rho_{d}\) are tight decomposition functions for the JSS mappings \(\rho\) and \(\psi_{2}\), respectively, computed through Proposition 1. Further, \[\begin{array}{l}\hat{V}\!\triangleq(A-LC_{2})ANV_{2}+LV_{2}+\Lambda(I-NC_{2})G_ {1}SV where \(\hat{V}\triangleq\Lambda(I-NC_{2})G_{1}SV_{1}+LV_{2}\), \(\hat{W}\triangleq\Lambda(I-NC_{2})W\) and \(\tilde{z}_{k}\triangleq\Lambda(I-NC_{2})G_{1}Sz_{1,k}+Lz_{2,k}\). Then, by computing \(x_{k}\) in terms of \(\gamma_{k}\) from (13) and plugging it back into the linear terms in the right-hand side of (20), we obtain \[\begin{array}{rl}\gamma_{k+1}&=(A-LC_{2})\gamma_{k}+\rho(x_{k})-L\psi_{2}(x_{ k})\\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad- \hat{V}v_{k}+\hat{W}w_{t}-D\epsilon_{k}+\hat{z}_{k},\end{array} \tag{21}\] with \(\hat{V},D,\hat{W}\) and \(\hat{z}_{k}\) given in (19). Next, by applying Proposition 2 and [22, Lemma 1], we construct the embedding system (16) for (21), which implies \(\underline{\gamma}_{k}\leq\gamma_{k}\leq\overline{\gamma}_{k},\forall k\in\mathbb{N}\), by construction. Further, the results in (17) follow from applying [22, Lemma 1] on (13) to compute framers of \(x_{k}\) in terms of the framers of \(\gamma_{k}\). To obtain input framers, note that multiplying both sides of (12) by \(M_{2}C_{2}\) together with Assumption 2 yields \(d_{2,k-1}=M_{2}C_{2}(x_{k}-f(x_{k-1})+G_{1}Sh_{1}(x_{k-1})+G_{1}S(V_{1}v_{k-1}- z_{1,k-1})-Ww_{k-1})\). 
This, along with (6) and (9), leads to \[\begin{array}{l}d_{k-1}=\Phi x_{k}\mbox{+}\kappa(x_{k-1})\mbox{+}A_{z}z_{1,k- 1}\mbox{+}A_{v}v_{k-1}\mbox{-}\Phi Ww_{k-1}.\end{array} \tag{22}\] The input framers in (18) are obtained by leveraging [19, Theorem 1] to compute a decomposition function for the nonlinear function \(\kappa\), as well as applying [22, Lemma 1] to bound the linear terms in the right-hand side of (22). ### _ISS and \(\mathcal{H}_{\infty}\)-Optimal Interval Observer Synthesis_ Next, we provide sufficient conditions to guarantee the stability of the proposed framers, i.e., we seek to synthesize the observer gain \(L\) to ensure input-to-state stability (ISS) of the observer state error, \(e_{k}^{x}\triangleq\overline{x}_{k}-\underline{x}_{k}\) in the sense of Definition 4, while ensuring that the design is optimal in the sense of minimizing the \(\mathcal{H}_{\infty}\) gain (cf. Definition 5). First, we derive the observer error dynamics as follows. **Lemma 2**.: _Consider the nonlinear system (5) and suppose all assumptions in Theorem 2 hold. Then, the state frame error dynamics of the resilient interval observer (16)-(18) and its nonlinear comparison system are as follows:_ \[\begin{array}{rl}e_{k+1}^{x}&=|A-LC_{2}|e_{k}^{x}+\delta_{k}^{\rho}+|L|\delta _{k}^{\psi_{2}}+|\hat{W}|\delta^{w}\\ &+(|V_{a}-LV_{b}|-|A-LC_{2}||\Lambda NV_{2}|+|\Lambda NV_{2}|)\delta^{v}\\ &+(|\Lambda|+|D_{a}-LD_{b}|-|A-LC_{2}||\Lambda|)\delta^{\epsilon}\\ &\leq(|A-LC_{2}|+\overline{F}_{\rho}+|L|\overline{F}_{\psi_{2}}e_{k}^{x}+| \hat{W}|\delta^{w}\\ &+(|V_{a}-LV_{b}|-|A-LC_{2}||\Lambda V_{2}|+|\Lambda NV_{2}|)\delta^{v}\\ &+(|\Lambda|+|D_{a}-LD_{b}|-|A-LC_{2}||\Lambda|)\delta^{\epsilon},\end{array} \tag{23}\] _where \(\delta_{k}^{\zeta}\triangleq\zeta_{d}(\overline{x}_{k},\underline{x}_{k})- \zeta_{d}(\underline{x}_{k},\overline{x}_{k}),\forall\zeta\in\{\psi_{2},\rho\}\), \(\delta^{s}\triangleq\overline{s}-\underline{s},\forall s\in\{w,v,\epsilon\}\), and \(\overline{F}_{\zeta},\forall\zeta\in\{\psi_{2},\rho\}\) are computed through Proposition 2. Moreover,_ \[\begin{array}{rl}V_{a}&\triangleq AANV_{2}+\Lambda(I-NC_{2})G_{1}SV_{1},\\ V_{b}&\triangleq(C_{2}\Lambda N-I)V_{2},D_{a}\triangleq A\Lambda,D_{b} \triangleq C_{2}\Lambda.\end{array} \tag{24}\] Proof.: It follows from (16) that the dynamics of \(e_{k}^{x}\triangleq\overline{\gamma}_{k}-\underline{\gamma}_{k}\) is given by \(e_{k+1}^{\gamma}=|A-LC_{2}|e_{k}^{x}+\delta_{k}^{\rho}+|L|\delta_{k}^{\psi_{2 }}+|\hat{V}|\delta^{w}+|\hat{W}|\delta^{w}+|D|\delta^{\epsilon}\). This, combined with \(e_{k}^{x}=e_{k}^{\gamma}+|\Lambda|\delta^{\epsilon}+|\Lambda V_{2}|\delta^{ \epsilon}\) (followed from (17)) results in the equality in (23), which together with the facts that \(\delta_{k}^{\zeta}\leq\overline{F}_{\zeta}e_{k}^{x},\forall\zeta\in\{\rho, \psi_{2}\}\) (cf. Proposition 2), yields the inequality in (23). Further, by leveraging slightly different approaches to derive an upper _linear_ comparison system for the _nonlinear_ error comparison system (23), we derive different sets of sufficient conditions to guarantee the ISS property of the proposed observer, as well as to ensure the optimality of the design in the sense of minimum \(\mathcal{H}_{\infty}\) gain, as follows. **Theorem 2** (ISS & \(\mathcal{H}_{\infty}\)-Optimal Resilient Interval Observer Synthesis).: _Consider system (5) (equivalently the transformed system (7)) and suppose Assumptions 1-3 hold. 
Moreover, suppose there exist matrices \(\mathbb{R}^{n\times n}\ni P^{*}\succ\mathbf{0}_{n,n},\Gamma^{*}\in\mathbb{R}_{ \geq 0}^{n\times(1-p_{H})}\) and \(\eta^{*}\in\mathbb{R}_{>0}\) such that \(-P^{*}\in\mathbb{M}_{n}\) and the tuple \((P^{*},\Gamma^{*},\eta^{*})\) solves the following problem:_ \[\begin{array}{l}\min\limits_{\{\eta,\tilde{P},\Gamma\}}\eta\\ s.t.\begin{bmatrix}P&P\tilde{A}-\Gamma\tilde{C}&P\tilde{B}-\Gamma\tilde{D}&0\\ *&P&0&I\\ *&*&\eta I&0\\ *&*&*&\eta I\end{bmatrix}\succ&0,(P,\Gamma)\in\mathbf{C},\end{array} \tag{25}\] _where the matrices \(\tilde{A},\tilde{B},\tilde{C},\tilde{D}\), as well as the corresponding additional set of constraints \(\mathbf{C}\) can be either of the following:_ 1. \(\mathbf{C}\!=\!\{(P,\Gamma)\mid P\left[A\;\;V_{a}\;D_{a}\right]\!-\!\Gamma \left[C_{2}\;\;V_{b}\;D_{b}\right]\geq 0\}\)_, if:_ \(\tilde{A}=A+\overline{F}_{\rho},\;\tilde{C}=C_{2}-\overline{F}_{\psi_{2}},\)__ \(\tilde{B}=\left[V_{a}+(I-A)|\Lambda NV_{2}|\;\;|\hat{W}|\;\;D_{a}+(I-A)|\Lambda| \right],\)__ \(\tilde{D}=\left[V_{b}-C_{2}|\Lambda NV_{2}|\;\;0\;D_{b}-C_{2}|\Lambda|\right].\)__ 2. \(\mathbf{C}=\{(P,\Gamma)\mid\Gamma\left[C_{2}\;\;V_{b}\;D_{b}\right]\geq 0\}\)_, if_ \(\tilde{A}=|A|+\overline{F}_{\rho},\;\tilde{C}=-C_{2}-\overline{F}_{\psi_{2}},\)__ \(\tilde{B}=\left[|V_{a}|\mbox{+}(I\mbox{--}|A|)|\Lambda NV_{2}|\;\;|\hat{W}|\;\;(I \mbox{--}|A|)|\Lambda|\mbox{+}|D_{a}|\right],\)__ \(\tilde{D}=\left[C_{2}|\Lambda NV_{2}|-V_{b}\;\;0\;\;C_{2}|\Lambda|-D_{b} \right].\)__ 3. \(\mathbf{C}=\{(P,\Gamma)\mid P-\Gamma C_{2}\geq 0\}\)_, if:_ \(\tilde{A}=A+\overline{F}_{\rho},\;\tilde{C}=C_{2}-\overline{F}_{\psi_{2}},\;\tilde{D}= \left[-V_{2}\;\;0\;\;0\right],\)__ \(\tilde{B}=\left[|\Lambda(I\mbox{--}NC_{2})G_{1}SV_{1}|\mbox{+}|\Lambda NV_{2}|\;\;|\hat{W}|\;\;| \Lambda|\right].\)__ _Then, the proposed resilient interval frame (16)-(18) with the corresponding gain \(L=(P^{*})^{-1}\Gamma^{*}\), is a resilient ISS input and state interval observer in the sense of Definition 5 and also is \(\mathcal{H}_{\infty}\)-optimal (cf. Definition 6). Finally, in any of the above cases, the LMI in (25) is feasible only if the linear comparison system \((\tilde{A},\tilde{B},\tilde{C},\tilde{D})\) is detectable._ Proof.: We will show that in each of the cases (i)-(iii), given the corresponding constraint set \(\mathbf the comparison system (26) is 0-stable (0-GAS), which in addition to the AG property above is equivalent to the ISS property for (26) by [24, Theorem 1-e]. Thus, the designed observer is also ISS. So, what remains to complete the proof is to show that the comparison system (26) can indeed be computed in each of the cases as follows. **Case** (i). Consider the nonlinear comparison system in (23). By satisfying the constraint set \(\mathbf{C}\), we enforce \(-P\) to be Metzler, as well as \(P\tilde{A}-\Gamma\tilde{C},PV_{a}-\Gamma V_{b}\) and \(PV_{a}-\Gamma V_{b}\) to be non-negative. Also, \(\Gamma\) is non-negative by assumption. Consequently, since \(P\) is positive definite, it becomes a non-singular M-matrix, i.e., a square matrix whose negation is Metzler and whose eigenvalues have non-negative real parts, and hence is inverse-positive [25, Theorem 1], i.e., \(P^{-1}\geq 0\). Therefore, \(L=P^{-1}\Gamma\geq 0\), \(A-LC_{2}=P^{-1}(PA-\Gamma C_{2})\geq 0\), \(V_{a}-LV_{b}=P^{-1}(PV_{a}-\Gamma V_{b})\geq 0\) and \(D_{a}-LD_{b}=P^{-1}(PD_{a}-\Gamma D_{b})\geq 0\), because they are matrix products of non-negative matrices. 
So, \(|L|=L,|A-LC_{2}|=A-LC_{2},|V_{a}-LV_{b}|=V_{a}-LV_{b}\) and \(|D_{a}-LD_{b}|=D_{a}-LD_{b}\), which turns (23) into the form of (26). **Case** (ii). By applying the triangle inequality, the comparison system in (23) can get upper bounded again as \[\begin{array}{l}e_{k+1}^{x}\leq(|A|+|LC_{2}|+\overline{F}_{\rho}+|L| \overline{F}_{\psi_{2}})e_{k}^{x}+|W|\delta^{w}\\ +(|V_{a}|+|LV_{b}|-|LC_{2}||\Lambda NV_{2}|+(I-|A|)|\Lambda NV_{2}|)\delta^{x} \\ +((I-|A|)|\Lambda|+|D_{a}|+|LD_{b}|-|LC_{2}||\Lambda|)\delta^{x}.\end{array} \tag{27}\] By a similar argument as in Case (i), enforcing \(-P\) to be Metzler along with the constraints set \(\mathbf{C}\) results in \(|LC_{2}|=LC_{2},|LV_{b}|=LV_{b}\) and \(|LD_{b}|=LD_{b}\), and hence turns (27) into the form of (26). **Case** (iii). Note that by the triangle inequality, \(|V_{a}-LV_{b}|=|(A-LC_{2})\Lambda NV_{2}+LV_{2}+\Lambda(I-NC_{2})G_{1}SV_{1}| \leq|(A-LC_{2})||\Lambda NV_{2}|+|L||V_{2}|+|\Lambda(I-NC_{2})G_{1}SV_{1}|\), and \(|D_{a}-LD_{b}|=|(A-LC_{2})\Lambda|\leq|(A-LC_{2})||\Lambda|\). These two combined with (23) yield \[\begin{array}{l}e_{k+1}^{x}\leq(|A-LC_{2}|+\overline{F}_{\rho}+|L| \overline{F}_{\psi_{2}})e_{k}^{x}+|\hat{W}|\delta^{w}+|\Lambda|\delta^{x}\\ +(|L||V_{2}|+|\Lambda NV_{2}|+|\Lambda(I-NC_{2})G_{1}SV_{1})\delta^{w}.\end{array} \tag{28}\] The rest of the proof is to enforce that \(A-LC_{2}\) and \(L\) are non-negative to turn (28) into the form of (26), which is similar to the the proofs of the previous two cases. ## V Illustrative Example We now illustrate the effectiveness of our proposed resilient observer using a three-area power system [2, Figure 1], where each control area consists of a generator and load buses with transmission lines between areas. The nonlinear continuous-time model of the buses is slightly modified based on [26], with the subscript \(i\) being the bus number: \[\begin{array}{l}\dot{f}_{1}(t)\ =-\frac{1}{m_{1}}(\phi_{i}(t)-(P_{M_{1}}(t)+d_{1 }(t)))+w_{2,1}(t),\\ \dot{f}_{i}(t)\ =-\frac{1}{m_{i}}(\phi_{i}(t)-P_{M_{i}}(t))+w_{2,i}(t),\ i\in\{2,3\},\\ \dot{\theta}_{i}(t)\ =f_{i}(t)+w_{1,i}(t),\
2303.18039
Laying the foundation of the effective-one-body waveform models SEOBNRv5: improved accuracy and efficiency for spinning non-precessing binary black holes
We present SEOBNRv5HM, a more accurate and faster inspiral-merger-ringdown gravitational waveform model for quasi-circular, spinning, nonprecessing binary black holes within the effective-one-body (EOB) formalism. Compared to its predecessor, SEOBNRv4HM, the waveform model i) incorporates recent high-order post- Newtonian results in the inspiral, with improved resummations, ii) includes the gravitational modes (l, |m|) = (3, 2), (4, 3), in addition to the (2, 2), (3, 3), (2, 1), (4, 4), (5, 5) modes already implemented in SEOBNRv4HM, iii) is calibrated to larger mass-ratios and spins using a catalog of 442 numerical-relativity (NR) simulations and 13 additional waveforms from black-hole perturbation theory, iv) incorporates information from second-order gravitational self-force (2GSF) in the nonspinning modes and radiation-reaction force. Computing the unfaithfulness against NR simulations, we find that for the dominant (2, 2) mode the maximum unfaithfulness in the total mass range $10-300 M_{\odot}$ is below $10^{-3}$ for 90% of the cases (38% for SEOBNRv4HM). When including all modes up to l = 5 we find 98% (49%) of the cases with unfaithfulness below $10^{-2} (10^{-3})$, while these numbers reduce to 88% (5%) when using SEOBNRv4HM. Furthermore, the model shows improved agreement with NR in other dynamical quantities (e.g., the angular momentum flux and binding energy), providing a powerful check of its physical robustness. We implemented the waveform model in a high-performance Python package (pySEOBNR), which leads to evaluation times faster than SEOBNRv4HM by a factor 10 to 50, depending on the configuration, and provides the flexibility to easily include spin-precession and eccentric effects, thus making it the starting point for a new generation of EOBNR waveform models (SEOBNRv5) to be employed for upcoming observing runs of the LIGO-Virgo-KAGRA detectors.
Lorenzo Pompili, Alessandra Buonanno, HΓ©ctor EstellΓ©s, Mohammed Khalil, Maarten van de Meent, Deyan P. Mihaylov, Serguei Ossokine, Michael PΓΌrrer, Antoni Ramos-Buades, Ajit Kumar Mehta, Roberto Cotesta, Sylvain Marsat, Michael Boyle, Lawrence E. Kidder, Harald P. Pfeiffer, Mark A. Scheel, Hannes R. RΓΌter, Nils Vu, Reetika Dudi, Sizheng Ma, Keefe Mitman, Denyz Melchor, Sierra Thomas, Jennifer Sanchez
2023-03-31T13:20:10Z
http://arxiv.org/abs/2303.18039v1
# Laying the foundation of the effective-one-body waveform models SEOBNRv5: ###### Abstract We present SEOBNRv5HM, a more accurate and faster inspiral-merger-ringdown gravitational waveform model for quasi-circular, spinning, nonprecessing binary black holes within the effective-one-body (EOB) formalism. Compared to its predecessor, SEOBNRv4HM, the waveform model i) incorporates recent high-order post-Newtonian results in the inspiral, with improved resummations, ii) includes the gravitational modes \((\ell,|m|)=(3,2),(4,3)\), in addition to the \((2,2),(3,3),(2,1),(4,4),(5,5)\) modes already implemented in SEOBNRv4HM, iii) is calibrated to larger mass-ratios and spins using a catalog of 442 numerical-relativity (NR) simulations and 13 additional waveforms from black-hole perturbation theory, iv) incorporates information from second-order gravitational self-force (2GSF) in the nonspinning modes and radiation-reaction force. Computing the unfaithfulness against NR simulations, we find that for the dominant \((2,2)\) mode the maximum unfaithfulness in the total mass range 10-300\(M_{\odot}\) is below \(10^{-3}\) for 90% of the cases (38% for SEOBNRv4HH), When including all modes up to \(\ell=5\) we find 98% (49%) of the cases with unfaithfulness below \(10^{-2}\) (\(10^{-3}\)), while these numbers reduce to 88% (5%) when using SEOBNRv4HM. Furthermore, the model shows improved agreement with NR in other dynamical quantities (e.g., the angular momentum flux and binding energy), providing a powerful check of its physical robustness. We implemented the waveform model in a high-performance Python package (pySEOBNR), which leads to evaluation times faster than SEOBNRv4HM by a factor 10 to 50, depending on the configuration, and provides the flexibility to easily include spin-precession and eccentric effects, thus making it the starting point for a new generation of EOBNR waveform models (SEOBNRv5) to be employed for upcoming observing runs of the LIGO-Virgo-KAGRA detectors. ## I Introduction Gravitational-wave (GW) astronomy has rapidly advanced since the first detection of GWs from a binary black-hole (BBH) merger in 2015 [1], recording about ten events in the initial and second observing runs [2; 3] and about one hundred events in the third observing run [4; 5; 6; 7; 8] of the LIGO-Virgo detectors [9; 10; 11; 12; 13]. With upcoming upgrades of existing detectors and new facilities on the ground, such as Einstein Telescope [14] and Cosmic Explorer [15; 16], and the space-based mission LISA [17], it is expected that the merger rates of compact binaries will significantly increase. Accurately modeling the GWs emitted by binary systems is essential to take full advantage of the discovery potential of ever more sensitive GW detectors, enriching our knowledge of astrophysics, cosmology, gravity and fundamental physics. Numerical relativity (NR) simulations [18; 19; 20] can provide the most accurate waveforms, but they are computationally expensive, which makes it important to develop waveform models that combine analytical approximation methods with NR results. The most commonly used approaches to build complete inspiral-merger-ringdown (IMR) waveform models of compact binaries are the NR surrogate, phenomenolog
2309.14658
Improvements on Scalable Stochastic Bayesian Inference Methods for Multivariate Hawkes Process
Multivariate Hawkes Processes (MHPs) are a class of point processes that can account for complex temporal dynamics among event sequences. In this work, we study the accuracy and computational efficiency of three classes of algorithms which, while widely used in the context of Bayesian inference, have rarely been applied in the context of MHPs: stochastic gradient expectation-maximization, stochastic gradient variational inference and stochastic gradient Langevin Monte Carlo. An important contribution of this paper is a novel approximation to the likelihood function that allows us to retain the computational advantages associated with conjugate settings while reducing approximation errors associated with the boundary effects. The comparisons are based on various simulated scenarios as well as an application to the study the risk dynamics in the Standard & Poor's 500 intraday index prices among its 11 sectors.
Alex Ziyu Jiang, Abel RodrΓ­guez
2023-09-26T04:28:58Z
http://arxiv.org/abs/2309.14658v2
# Improvements on Scalable Stochastic Bayesian Inference Methods for Multivariate Hawkes Process ###### Abstract Multivariate Hawkes Processes (MHPs) are a class of point processes that can account for complex temporal dynamics among event sequences. In this work, we study the accuracy and computational efficiency of three classes of algorithms which, while widely used in the context of Bayesian inference, have rarely been applied in the context of MHPs: stochastic gradient expectation-maximization, stochastic gradient variational inference and stochastic gradient Langevin Monte Carlo. An important contribution of this paper is a novel approximation to the likelihood function that allows us to retain the computational advantages associated with conjugate settings while reducing approximation errors associated with the boundary effects. The comparisons are based on various simulated scenarios as well as an application to the study the risk dynamics in the Standard & Poor's 500 intraday index prices among its 11 sectors. _Keywords--_ Hawkes Processes; Stochastic Optimization; Variational inference; EM Algorithm; Langevin Monte Carlo; Bayesian Inference ## 1 Introduction The multivariate Hawkes process (MHP) model (Hawkes, 1971; Liniger, 2009) is a class of temporal point process models that can capture complex time-event dynamics among multiple objects. Specifically, MHPs demonstrate the _self_- and _mutually-exciting_ properties in multidimensional event sequences, where an event occurrence in a certain dimension leads to a higher likelihood of future events appearing in the same or other dimensions. This feature of the models makes MHPs attractive in a wide range of applications, including earth sciences (Ogata, 1988), finance (Bacry et al., 2015) and social media analysis (Rizoiu et al., 2017). Computational methods for maximum likelihood inference in Hawkes process models include direct maximization of the likelihood function (e.g., see Ozaki, 1979) and the expectation-maximization (EM) algorithm (e.g., see Veen and Schoenberg, 2008 and Lewis and Mohler, 2011). In the context of Bayesian inference, some of the algorithms that have been proposed include Markov Chain Monte Carlo algorithms (MCMC) (Rasmussen, 2013; Mohler, 2013; Holbrook et al., 2021, 2022), variational approximations (Xu and Zha, 2017; Malem-Shinitski et al., 2022), sequential Monte Carlo (Linderman et al., 2017), and the maximum _a posteriori_ probability estimation using the Expectation-Maximization algorithm (EM) (Zhang et al., 2018). One key challenge associated with all these computational approaches is that they do not scale well to large datasets. Specifically, the double summation operation needed to carry out a single likelihood evaluation is typically of time complexity \(\mathcal{O}(KN^{2})\), where \(K\) is the number of dimensions and \(N\) is the number of total events. Even in situations where careful implementation can reduce the time complexity to \(\mathcal{O}(KN)\) (e.g., for exponential excitation functions), the cost of this operation can be prohibitive for moderately large datasets. Furthermore, for methods that utilize the branching structure of MHPs, the space complexity is \(\mathcal{O}(N^{2})\) in all cases. An additional complication is that the calculation of the so-called "compensator" term in the likelihood function might limit our ability to exploit potential conjugacy in the model structure. 
Standard approximations to the compensator, which are well-justified when maximizing the full likelihood, can have a more serious impact when applied to small datasets. Algorithms inspired by stochastic optimization (Robbins and Monro, 1951) ideas, which approximate the gradient of the objective function through noisy versions evaluated on subsamples, offer an alternative for Bayesian inference on large datasets. Examples of such algorithms include stochastic gradient EM algorithms for finding the posterior mode of a model (e.g., see Chen et al., 2018), stochastic gradient variational algorithms (e.g., see Hoffman et al., 2013) and stochastic gradient Hamiltonian Monte Carlo methods (e.g., see Nemeth and Fearnhead, 2021 and references therein). The use of stochastic gradient methods in the context of MHP models is, nonetheless, limited. Exceptions include Linderman and Adams (2015), who consider the use of stochastic gradient variational inference in the context of a discretized MHP, and Nickel and Le (2020), who discuss stochastic gradient methods to directly maximize the observed data likelihood. In this paper, we discuss the efficient implementation of stochastic gradient EM, stochastic gradient variational approximations, and stochastic gradient Langevin diffusion methods in the context of parametric MHP models, and evaluate various aspects of their performance using both simulated and real datasets. Not only is the literature on stochastic gradient methods for Bayesian inference in MHP models limited, but the trade-offs between computational speed and accuracy are not well understood in this context. For _full-batch_ methods (i.e., when using gradients based on the whole dataset rather than subsamples) Zhou et al. (2020) compares the estimation properties for EM, variational and random-walk MCMC algorithms. Our work extends this comparative evaluation to algorithms based on stochastic gradient methods. A key contribution is an investigation of a novel approximation technique for the likelihood of the subsamples based on a first-order Taylor expansion of the compensator term of the MHP models. We show that this novel approximation can lead to improvements in both point and interval estimation accuracy. For illustration purposes, we focus on intensity functions with exponential excitation functions. However, the insights gained from our experiments can be useful when working with other excitation functions that are proportional to density functions for which a conjugate prior on the unknown parameters is tractable. ## 2 Multivariate Hawkes process models Let \(\mathbf{X}=\{(t_{i},d_{i}):i=1,\ldots,n\}\) be a realization from a marked point process where \(t_{i}\in\mathbb{R}^{+}\) represents the time at which the \(i\)-th event occurs and \(d_{i}\in\{1,\ldots,K\}\) is a mark that represents the dimension in which the event occurs. For example, \(t_{i}\) might represent the time at which user \(d_{i}\) makes a social media post, or the time at which the price of stock \(d_{i}\) drops below a certain threshold. Also, let \(n\) be the total number of events in the sequence and let \(n_{k}\) be the number of events in dimension \(k\). Similarly, let \(\mathcal{H}_{t}=\{(t_{i},d_{i}):t_{i}<t,t_{i}\in\mathbf{X}\}\) be the set of events that happened up until time \(t\), and \(N^{(k)}(t)\) be the number of events in dimension \(k\) that occurred on \([0,t]\).
A sequence \(\mathbf{X}\) follows a multivariate Hawkes process if the conditional intensity function on dimension \(\ell\) has the following form: \[\lambda_{\ell}(t)\equiv\lim_{h\to 0}\frac{\mathbb{E}[N^{(\ell)}(t+h)-N^{(\ell)}(t)\mid\mathcal{H}_{t}]}{h}=\mu_{\ell}+\sum_{k=1}^{K}\sum_{t_{i}<t,d_{i}=k}\phi_{k,\ell}\left(t-t_{i}\right), \tag{1}\] where \(\mu_{\ell}>0\) is the background intensity for dimension \(\ell\), and \(\phi_{k,\ell}(\cdot):\mathbf{R}^{+}\rightarrow\mathbf{R}^{+}\) is the excitation function that controls how previous events in dimension \(k\) affect the occurrence of new events in dimension \(\ell\). For illustration purposes, we consider in this paper the case of an exponential decay function, where \(\phi_{k,\ell}(\Delta)=\alpha_{k,\ell}\beta_{k,\ell}e^{-\beta_{k,\ell}\Delta}\) for \(\Delta\geq 0\). The parameter \(\alpha_{k,\ell}\) controls the importance of events from dimension \(k\) on the appearance of events in dimension \(\ell\), and \(\beta_{k,\ell}\) controls the magnitude of exponential decay of the instant change associated with a new event. We let \(\boldsymbol{\theta}=(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\mu})\) denote the vector of all model parameters. Using standard theory for point processes (e.g., see Daley and Vere-Jones, 2008), the observed log-likelihood associated with a Hawkes process can be written as \[\mathcal{L}(\mathbf{X}\mid\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\mu}) =\sum_{\ell=1}^{K}\sum_{d_{i}=\ell}\log\lambda_{\ell}\left(t_{i}\right)-\sum_{\ell=1}^{K}\int_{0}^{T}\lambda_{\ell}(s)ds \tag{2}\] \[=\sum_{\ell=1}^{K}\sum_{d_{i}=\ell}\log\left(\mu_{\ell}+\sum_{k=1}^{K}\sum_{\begin{subarray}{c}j<i\\ d_{j}=k,d_{i}=\ell\end{subarray}}\alpha_{k,\ell}\beta_{k,\ell}e^{-\beta_{k,\ell}(t_{i}-t_{j})}\right)-\sum_{\ell=1}^{K}\mu_{\ell}T\] \[\qquad\qquad\qquad\qquad\qquad-\sum_{k=1}^{K}\sum_{\ell=1}^{K}\alpha_{k,\ell}\left[n_{k}-\sum_{d_{i}=k}\exp\left(-\beta_{k,\ell}\left(T-t_{i}\right)\right)\right].\]
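To make the notation concrete, the following is a minimal sketch (in Python with NumPy; the paper's own scripts are in R) of the conditional intensity and the observed-data log-likelihood (2) for the exponential-kernel MHP. The function name and array layout are illustrative, and the double loop is the naive \(\mathcal{O}(KN^{2})\) evaluation discussed in the introduction.

```python
import numpy as np

def mhp_loglik(times, dims, mu, alpha, beta, T):
    """Observed-data log-likelihood (2) for an exponential-kernel MHP (sketch).

    times : (n,) increasing event times on [0, T]
    dims  : (n,) integer marks in {0, ..., K-1}
    mu    : (K,) background rates; alpha, beta : (K, K) arrays whose entry
            [k, l] describes the effect of dimension k on dimension l.
    """
    times, dims = np.asarray(times), np.asarray(dims)
    K = len(mu)
    loglik = 0.0
    # first term: log-intensities at the observed events (naive double loop)
    for i in range(len(times)):
        l = dims[i]
        lam = mu[l]
        for j in range(i):
            k = dims[j]
            lam += alpha[k, l] * beta[k, l] * np.exp(-beta[k, l] * (times[i] - times[j]))
        loglik += np.log(lam)
    # compensator: sum_l mu_l * T + sum_{k,l} alpha_{k,l} [n_k - sum_i exp(-beta_{k,l}(T - t_i))]
    loglik -= np.sum(mu) * T
    for k in range(K):
        tk = times[dims == k]
        for l in range(K):
            loglik -= alpha[k, l] * (len(tk) - np.exp(-beta[k, l] * (T - tk)).sum())
    return loglik
```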
The MHP can also be obtained as a multidimensional Poisson cluster process in which each point is considered an "immigrant" or an "offspring" (Hawkes and Oakes, 1974; Marsan and Lengline, 2008; Zhou et al., 2013; Rasmussen, 2013). We use the lower-triangular \(n\times n\) binary matrix \(\mathbf{B}\) to represent the latent branching structure of the events, where each row contains one and only one non-zero entry. For the strictly lower-triangular entries of the matrix, \(B_{ij}=1\) indicates that the \(i\)-th event can be viewed as an offspring of the \(j\)-th event. On the other hand, for the diagonal entries of the matrix, \(B_{ii}=1\) indicates that the \(i\)-th event is an immigrant. Each immigrant independently generates a cluster of offspring that can further generate offspring of newer generations. The branching structure, which is typically latent and unobservable, allows us to decouple the complex observed likelihood into factorizable terms and design simpler computational algorithms. The complete data log-likelihood, defined as the joint log-likelihood of the observed data and the branching structure \(\mathbf{B}\), has the following form: \[\begin{split}\mathcal{L}(\mathbf{X},\mathbf{B}\mid\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\mu})=\sum_{\ell=1}^{K}|I_{\ell}|\log\mu_{\ell}+\sum_{k=1}^{K}\sum_{\ell=1}^{K}&\left[|O_{k,\ell}|\left(\log\alpha_{k,\ell}+\log\beta_{k,\ell}\right)-\beta_{k,\ell}\sum_{\begin{subarray}{c}j<i\\ d_{j}=k,d_{i}=\ell\end{subarray}}B_{ij}\left(t_{i}-t_{j}\right)\right]\\ &-\sum_{\ell=1}^{K}\mu_{\ell}T-\sum_{k=1}^{K}\sum_{\ell=1}^{K}\alpha_{k,\ell}\left[n_{k}-\sum_{d_{i}=k}\exp\left(-\beta_{k,\ell}\left(T-t_{i}\right)\right)\right],\end{split} \tag{3}\] where \(|I_{\ell}|=\sum_{\begin{subarray}{c}1\leq i\leq n\\ d_{i}=\ell\end{subarray}}B_{ii}\) is the number of immigrants for dimension \(\ell\), and \(|O_{k,\ell}|=\sum_{\begin{subarray}{c}j<i\\ d_{j}=k,d_{i}=\ell\end{subarray}}B_{ij}\) is the number of events in dimension \(\ell\) that are offspring of an event in dimension \(k\). ### Approximation for the data likelihood The expressions for the observed and complete data likelihood in (2) and (3) share the same term \[\sum_{\ell=1}^{K}\int_{0}^{T}\lambda_{\ell}(s)ds=\sum_{\ell=1}^{K}\mu_{\ell}T+\sum_{k=1}^{K}\sum_{\ell=1}^{K}\alpha_{k,\ell}\left[n_{k}-\sum_{d_{i}=k}\exp\left(-\beta_{k,\ell}\left(T-t_{i}\right)\right)\right]. \tag{4}\] The integral \(\int_{0}^{T}\lambda_{\ell}(s)ds\) is known as the compensator for the conditional intensity function \(\lambda_{\ell}(t)\). The compensator term accounts for the (infinitely many) instants at which no event occurs on the finite temporal region \([0,T]\) (Mei and Eisner, 2017). The form of the compensator causes a number of computational challenges for designing scalable inference algorithms for MHP models (for a discussion see Lewis and Mohler, 2011, Schoenberg, 2013, Holbrook et al., 2021). A common approach used to avoid these challenges is to use the approximation technique introduced in Lewis and Mohler (2011): \[\alpha_{k,\ell}\left[n_{k}-\sum_{d_{i}=k}\exp\left(-\beta_{k,\ell}\left(T-t_{i}\right)\right)\right]\approx\alpha_{k,\ell}n_{k}, \tag{5}\] for all \(k,\ell=1,\ldots,K\). The approximation above is based on the observation that most events are far away from the boundary and most exponential terms are therefore close to zero. The approximation is therefore most accurate for large datasets. For small datasets, this approximation can introduce edge effects, which can be especially problematic in the implementation of stochastic gradient methods. An alternative motivation for the approximation in (5) is as a zero-order Taylor expansion of the exponential function. This motivates a novel approximation in which we divide \(\mathbf{X}\) into two parts, based on whether the data points are observed within a predetermined threshold \((T-\delta,T]\) where \(0<\delta<T\). For all observations outside the threshold, we follow the previous method and approximate the exponential with zero. For the ones within the threshold, we apply the first-order Taylor expansion, evaluated at \(t=T\): \[\alpha_{k,\ell}\left[n_{k}-\sum_{d_{i}=k}\exp\left(-\beta_{k,\ell}\left(T-t_{i}\right)\right)\right]\approx\alpha_{k,\ell}\left[n_{k}-\sum_{0\leq T-t_{i}<\delta,d_{i}=k}\left[1-\beta_{k,\ell}(T-t_{i})\right]\right], \tag{6}\] for all \(k,\ell=1,\ldots,K\).
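A small sketch may help contrast the two approximations. Under the same illustrative array conventions as above, the standard approximation (5) simply drops the boundary exponentials, while the boundary-corrected version (6) applies a first-order Taylor expansion to the events falling within \((T-\delta,T]\); all function names here are hypothetical.

```python
import numpy as np

def excitation_compensator_exact(times, dims, alpha, beta, T):
    """Exact excitation part of the compensator in (4)."""
    K = alpha.shape[0]
    out = 0.0
    for k in range(K):
        tk = times[dims == k]
        for l in range(K):
            out += alpha[k, l] * (len(tk) - np.exp(-beta[k, l] * (T - tk)).sum())
    return out

def excitation_compensator_standard(dims, alpha):
    """Zero-order approximation (5): keep only alpha_{k,l} * n_k."""
    K = alpha.shape[0]
    return sum(np.sum(dims == k) * alpha[k, :].sum() for k in range(K))

def excitation_compensator_corrected(times, dims, alpha, beta, T, delta):
    """Boundary-corrected approximation (6): first-order Taylor expansion of the
    exponential for events in (T - delta, T], zero for all earlier events."""
    K = alpha.shape[0]
    out = 0.0
    for k in range(K):
        tk = times[dims == k]
        near = tk[(T - tk) < delta]        # events close to the boundary
        for l in range(K):
            out += alpha[k, l] * (len(tk) - np.sum(1.0 - beta[k, l] * (T - near)))
    return out
```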
One key advantage of the boundary-corrected approximation in (6) is that it allows us to exploit conjugacies in the definition of the model while providing a more accurate approximation for observations close to \(T\). Please see Section 3 for additional details. ### Prior distributions Bayesian inference for the MHP requires that we specify priors for the unknown parameters \((\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\mu})\). For the baseline intensities we set \[\mu_{\ell}\mid a_{\ell},b_{\ell}\stackrel{{ i.i.d}}{{\sim}}\operatorname{Gamma}\left(a_{\ell},b_{\ell}\right),\] which is conditionally conjugate given the branching structure \(\mathbf{B}\). Similarly, under the exponential decay functions we use \[\alpha_{k,\ell}\mid e_{k,\ell},f_{k,\ell}\stackrel{{ i.i.d}}{{\sim}}\operatorname{Gamma}\left(e_{k,\ell},f_{k,\ell}\right),\] \[\beta_{k,\ell}\mid w_{k,\ell},s_{k,\ell}\stackrel{{ i.i.d}}{{\sim}}\operatorname{Gamma}\left(w_{k,\ell},s_{k,\ell}\right)\] which are also conditionally conjugate. ## 3 Computational methods ### Preliminaries In this section, we describe three stochastic gradient algorithms for MHP models based on the EM algorithm, variational inference, and a Markov chain Monte Carlo algorithm based on Langevin dynamics, respectively. Before delving into the details of each algorithm, we discuss three issues that are relevant to the design of all three. The first issue refers to how to define the subsamples used to compute the gradient at each iteration. A common approach for regression models is to randomly select independent observations. However, the temporal dependence in event sequences makes this approach inappropriate for MHP models. Instead, our subsamples consist of all observations contained in the random interval \([T_{0},T_{0}+\kappa T]\), where we uniformly sample \(T_{0}\) on \([0,(1-\kappa)T]\) and \(\kappa\in(0,1]\) corresponds to the relative size of the subsample. Similar strategies have been applied to develop stochastic gradient variational algorithms for hidden Markov models (Foti et al., 2014) and stochastic block models (Gopalan et al., 2012). The second issue relates to the selection of the learning rate \(\rho_{r}\) for the algorithms, which controls how fast the information from the stochastic gradient accumulates. It is well known (e.g., see Robbins and Monro, 1951) that the following conditions lead to convergence towards a local optimum: \[\sum_{r=1}^{\infty}\rho_{r}=\infty, \sum_{r=1}^{\infty}\rho_{r}^{2}<\infty. \tag{7}\] In the following analysis, we apply the commonly used update schedule for \(\rho_{r}\), outlined in Welling and Teh (2011): \[\rho_{r}=\rho_{0}(r+\tau_{1})^{-\tau_{2}}, \tag{8}\] where \(\rho_{0}\) is a common scaling factor, \(\tau_{2}\in(0.5,1]\) is the forgetting rate that controls the exponential decay rate, and \(\tau_{1}\geq 0\) is the delay parameter that downweights early iterations. In our numerical experiments, we investigate the impact of specific choices of \(\tau_{1}\) and \(\tau_{2}\) on the results. The third issue relates to the use of approximation techniques. We will only approximate the likelihood for the stochastic gradient EM and variational inference algorithms, while we can derive efficient update formulas for the stochastic gradient Langevin dynamics algorithm using the exact likelihood. For simplicity, we will refer to the algorithm variants that use the approximation in (6) as their 'boundary-corrected' versions, and we only show the update formulas based on the common approximation approach (5) in this section.
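The two design choices above (interval subsamples and a Robbins-Monro step size) are easy to express in code. The sketch below is illustrative only; shifting the subsample to start at zero is one way to match the \(\kappa T\) terms that appear in the update formulas of the following sections, and the default step-size values mirror the settings reported later for SGEM and SGVI in the simulation study.

```python
import numpy as np

rng = np.random.default_rng(0)

def subsample_interval(times, dims, T, kappa):
    """Contiguous subsample on [T0, T0 + kappa*T], T0 ~ Uniform(0, (1 - kappa)*T).

    The retained times are shifted to start at zero, so the subsample lives on
    [0, kappa*T]; the likelihood evaluated on it is then rescaled by 1/kappa.
    """
    T0 = rng.uniform(0.0, (1.0 - kappa) * T)
    keep = (times >= T0) & (times < T0 + kappa * T)
    return times[keep] - T0, dims[keep]

def step_size(r, rho0=0.02, tau1=1.0, tau2=0.51):
    """Robbins-Monro learning-rate schedule (8)."""
    return rho0 * (r + tau1) ** (-tau2)
```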
Additionally, we want to point out that the exponential decay function that we are using allows us to update \(\mathbf{\mu}\) and \(\mathbf{\alpha}\) using the exact likelihood formula, and we will only consider the approximation when we update \(\mathbf{\beta}\). ### Stochastic gradient EM algorithm for posterior mode finding The expectation-maximization (EM) algorithm (Dempster et al., 1977) is an iterative maximization algorithm that is commonly used for latent variable models, especially in cases where knowledge of the latent variables simplifies the likelihood function. For Bayesian models, it can be used for maximum _a posteriori_ probability estimation for the model parameters. Let \(\mathbf{X}\) be the observed dataset of size \(N\), \(\mathbf{\theta}\) be the set of model parameters to be estimated, and \(\mathbf{B}\) be the set of latent branching structure variables, and denote \((\mathbf{X},\mathbf{B})\) as the complete dataset. We further assume that the distribution of the complete dataset belongs to the following exponential family: \[l(\mathbf{X},\mathbf{B}\mid\mathbf{\theta})=A(\mathbf{X},\mathbf{B})\exp\left(\mathbf{\phi}(\mathbf{\theta})^{\intercal}\mathbf{s}(\mathbf{X},\mathbf{B})-\psi(\mathbf{\theta})\right), \tag{9}\] where \(\mathbf{s}(\mathbf{X},\mathbf{B})\) is the vector of sufficient statistics for the complete data model, \(\mathbf{\phi}(\mathbf{\theta})\) is the canonical form of the vector of parameters, and \(\mathbf{\phi}(\cdot)\) represents a one-to-one transformation. In the context of Bayesian models, the EM algorithm can be used to obtain the maximum _a posteriori_ (MAP) estimate (e.g., see Logothetis and Krishnamurthy, 1999). Starting from an initial guess of the model parameters \(\mathbf{\theta}^{(0)}\), the EM algorithm alternately carries out the following two steps until convergence: * In the 'E-step', the algorithm estimates the "marginal" sufficient statistics based on the expected value \(\hat{\mathbf{s}}^{(r)}:=\mathrm{E}_{\mathbf{B}|\mathbf{X},\mathbf{\theta}^{(r)}}\left[\mathbf{s}(\mathbf{X},\mathbf{B})\right]\). * In the 'M-step', the algorithm updates the model parameter as the maximizer of the \(Q\) function: \[\mathbf{\theta}^{(r+1)}=\arg\max_{\mathbf{\theta}}\left[\mathbf{\phi}(\mathbf{\theta})^{\intercal}\hat{\mathbf{s}}^{(r)}+\log p(\mathbf{\theta})\right],\] where \(p(\mathbf{\theta})\) denotes the prior on \(\mathbf{\theta}\). Note that the expectation calculation in the E-step update requires a pass through the whole dataset. As we discussed in the introduction, this can be challenging in very large datasets. The stochastic gradient EM (SGEM) algorithm (Cappe and Moulines, 2009) addresses this challenge by approximating the marginal sufficient statistics with an estimate based on randomly sampled mini-batches. We let \(\mathbf{X}^{(r)}\) denote a subsample of size \(n\) (and respectively, we let \(\mathbf{B}^{(r)}\) be the set of branching structure variables that corresponds to the selected subsample). For the stochastic E-step, the SGEM updates the estimated sufficient statistics \(\hat{\mathbf{s}}^{(r+1)}\) as a linear combination of the previous update and a new estimate of the sufficient statistics based on the random subsample and the current model parameter: \[\hat{\mathbf{s}}^{(r+1)}=(1-\rho_{r})\hat{\mathbf{s}}^{(r)}+\rho_{r}\kappa^{-1}\mathrm{E}_{\mathbf{B}^{(r+1)}|\mathbf{X}^{(r+1)},\boldsymbol{\theta}^{(r)}}[\mathbf{s}(\mathbf{X}^{(r+1)},\mathbf{B}^{(r+1)})],\] where \(\rho_{r}\) is given in (8).
Because of the way we select the subsamples, \(\kappa^{-1}\mathrm{E}_{\mathbf{B}^{(r+1)}|\mathbf{X}^{(r+1)},\boldsymbol{\theta}^{(r)}}[\mathbf{s}(\mathbf{X}^{(r+1)},\mathbf{B}^{(r+1)})]\) is an unbiased estimate of the sufficient statistics of the model based on the whole dataset. In the following M-step, the SGEM algorithm maximizes the \(Q\) function \[\boldsymbol{\theta}^{(r+1)}=\arg\max_{\boldsymbol{\theta}}\left[\boldsymbol{\phi}(\boldsymbol{\theta})^{\intercal}\hat{\mathbf{s}}^{(r+1)}+\log p(\boldsymbol{\theta})\right].\] In the case of the MHP model with exponential excitation functions, \(\boldsymbol{\theta}^{(r)}=(\boldsymbol{\mu}^{(r)},\boldsymbol{\alpha}^{(r)},\boldsymbol{\beta}^{(r)})\) and the expectation in the E-step is computed with respect to the probabilities \[p_{i,j}^{(r)}:=p\left(\mathbf{B}_{i,j}^{(r)}=1,\mathbf{B}_{i,-j}^{(r)}=0\mid\boldsymbol{\mu}^{(r)},\boldsymbol{\alpha}^{(r)},\boldsymbol{\beta}^{(r)},\mathbf{X}^{(r)}\right)\propto\begin{cases}\mu_{d_{i}}^{(r)}&\text{if }j=i,\\ \alpha_{d_{j},d_{i}}^{(r)}\beta_{d_{j},d_{i}}^{(r)}\exp(-\beta_{d_{j},d_{i}}^{(r)}(t_{i}-t_{j}))&\text{if }j<i,\\ 0&\text{if }j>i.\end{cases} \tag{10}\] for \(i=2,\ldots,n\), where the negative subindex denotes all entries of the row other than the one indicated, and \(p_{1,1}^{(r)}:=1\). Then, the vector of expected sufficient statistics of the complete data likelihood evaluated at iteration \(r\) \[\left(s_{\mu,\ell,1}^{(r)},s_{\mu,\ell,2}^{(r)},s_{\alpha,k,\ell,1}^{(r)},s_{\alpha,k,\ell,2}^{(r)},s_{\beta,k,\ell,1}^{(r)},s_{\beta,k,\ell,2}^{(r)}\right),\] is updated as \[s_{\mu,\ell,1}^{(r+1)} =(1-\rho_{r})s_{\mu,\ell,1}^{(r)}+\rho_{r}\kappa^{-1}\sum_{d_{i}=\ell}p_{i,i}^{(r)},\] \[s_{\mu,\ell,2}^{(r+1)} =T,\] \[s_{\alpha,k,\ell,1}^{(r+1)} =(1-\rho_{r})s_{\alpha,k,\ell,1}^{(r)}+\rho_{r}\kappa^{-1}\sum_{d_{i}=\ell}\sum_{\begin{subarray}{c}d_{j}=k\\ j<i\end{subarray}}p_{i,j}^{(r)},\] \[s_{\alpha,k,\ell,2}^{(r+1)} =(1-\rho_{r})s_{\alpha,k,\ell,2}^{(r)}+\rho_{r}\kappa^{-1}\left(n_{k}^{(r)}-\sum_{d_{j}=k}\exp\left(-\beta_{k,\ell}^{(r)}\left(\kappa T-t_{j}\right)\right)\right),\] \[s_{\beta,k,\ell,1}^{(r+1)} =(1-\rho_{r})s_{\beta,k,\ell,1}^{(r)}+\rho_{r}\kappa^{-1}\sum_{d_{i}=\ell}\sum_{\begin{subarray}{c}d_{j}=k\\ j<i\end{subarray}}p_{i,j}^{(r)},\] \[s_{\beta,k,\ell,2}^{(r+1)} =(1-\rho_{r})s_{\beta,k,\ell,2}^{(r)}+\rho_{r}\kappa^{-1}\sum_{d_{i}=\ell}\sum_{\begin{subarray}{c}d_{j}=k\\ j<i\end{subarray}}p_{i,j}^{(r)}\left(t_{i}-t_{j}\right),\] where \(n_{k}^{(r)}\) denotes the number of events on dimension \(k\) in \(\mathbf{X}^{(r)}\). Finally, in the M-step, the value of the parameters is updated as: \[\alpha_{k,\ell}^{(r+1)} =\frac{s_{\alpha,k,\ell,1}^{(r+1)}+e_{k,\ell}-1}{s_{\alpha,k,\ell,2}^{(r+1)}+f_{k,\ell}}, \beta_{k,\ell}^{(r+1)} =\frac{s_{\beta,k,\ell,1}^{(r+1)}+w_{k,\ell}-1}{s_{\beta,k,\ell,2}^{(r+1)}+s_{k,\ell}}, \mu_{\ell}^{(r+1)} =\frac{s_{\mu,\ell,1}^{(r+1)}+a_{\ell}-1}{s_{\mu,\ell,2}^{(r+1)}+b_{\ell}}.\] We repeat the steps above until the convergence criterion is reached.
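The recursions above can be summarized in a short sketch of one SGEM sweep for the exponential-kernel MHP, combining the branching probabilities (10), the Robbins-Monro update of the sufficient statistics, and the closed-form M-step. All names are illustrative; the subsample is assumed to have been drawn and shifted to \([0,\kappa T]\) as described in the Preliminaries, `state` is a dictionary holding the running statistics (keys "mu1", "a1", "a2", "b1", "b2"), and `prior` holds the Gamma hyperparameters as arrays.

```python
import numpy as np

def sgem_step(times, dims, state, mu, alpha, beta, kappa, rho, T, prior):
    """One stochastic-gradient EM iteration (illustrative sketch)."""
    K = len(mu)
    n = len(times)
    # E-step: branching probabilities (10) on the subsample
    P = np.zeros((n, n))
    for i in range(n):
        li = dims[i]
        P[i, i] = mu[li]
        for j in range(i):
            kj = dims[j]
            dt = times[i] - times[j]
            P[i, j] = alpha[kj, li] * beta[kj, li] * np.exp(-beta[kj, li] * dt)
        P[i, :i + 1] /= P[i, :i + 1].sum()
    # subsample-based estimates of the expected sufficient statistics
    new = {"mu1": np.zeros(K), "a1": np.zeros((K, K)), "a2": np.zeros((K, K)),
           "b1": np.zeros((K, K)), "b2": np.zeros((K, K))}
    for i in range(n):
        li = dims[i]
        new["mu1"][li] += P[i, i]
        for j in range(i):
            kj = dims[j]
            new["a1"][kj, li] += P[i, j]
            new["b1"][kj, li] += P[i, j]
            new["b2"][kj, li] += P[i, j] * (times[i] - times[j])
    for k in range(K):
        tk = times[dims == k]
        for l in range(K):
            new["a2"][k, l] = len(tk) - np.exp(-beta[k, l] * (kappa * T - tk)).sum()
    # stochastic E-step: Robbins-Monro averaging of the rescaled statistics
    for key in state:
        state[key] = (1.0 - rho) * state[key] + rho * new[key] / kappa
    # M-step: posterior-mode updates under the conjugate Gamma priors
    mu = (state["mu1"] + prior["a"] - 1.0) / (T + prior["b"])
    alpha = (state["a1"] + prior["e"] - 1.0) / (state["a2"] + prior["f"])
    beta = (state["b1"] + prior["w"] - 1.0) / (state["b2"] + prior["s"])
    return state, mu, alpha, beta
```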
### Stochastic Gradient Variational Inference Variational inference (Wainwright et al., 2008) is an approximate inference method that replaces the posterior distribution with an approximation that belongs to a tractable class. More specifically, the variational approximation \(q_{\boldsymbol{\eta}}(\boldsymbol{\theta},\mathbf{B})\), \(\boldsymbol{\eta}\in H\), to the posterior distribution \(p(\mathbf{\theta},\mathbf{B}\mid\mathbf{X})\) is obtained through maximizing the evidence lower bound (ELBO), which is equivalent to setting \[\mathbf{\eta}=\arg\max_{\mathbf{\eta}\in H}\mathrm{E}_{q_{\mathbf{\eta}}}\log\left\{\frac{p(\mathbf{\theta},\mathbf{B},\mathbf{X})}{q_{\mathbf{\eta}}(\mathbf{\theta},\mathbf{B})}\right\}. \tag{11}\] The class of variational approximations most used in practice is the class of mean-field approximations (Bishop and Nasrabadi, 2006), where model parameters are taken to be independent from each other under the variational distribution, i.e., \(q_{\mathbf{\eta}}(\mathbf{\theta},\mathbf{B})=\prod_{j}q_{\mathbf{\eta}_{\mathbf{\theta}_{j}}}(\theta_{j})\prod_{i}q_{\mathbf{\eta}_{\mathbf{B}_{i}}}(\mathbf{B}_{i})\). If both the full conditional posterior distributions and the corresponding variational distribution belong to the same exponential family, e.g., if \[p\left(\theta_{j}\mid\mathbf{\theta}_{-j},\mathbf{B},\mathbf{X}\right)=A\left(\theta_{j}\right)\exp\left\{\theta_{j}s_{j}\left(\mathbf{\theta}_{-j},\mathbf{B},\mathbf{X}\right)-\psi\left(\mathbf{\theta}_{-j},\mathbf{X}\right)\right\},\qquad q_{\mathbf{\eta}_{\mathbf{\theta}_{j}}}(\theta_{j})=A(\theta_{j})\exp\left\{\theta_{j}s_{j}\left(\mathbf{\eta}_{\mathbf{\theta}_{j}}\right)-\psi\left(\mathbf{\eta}_{\mathbf{\theta}_{j}}\right)\right\},\] Blei and Jordan (2006) showed that the coordinate ascent algorithm for the mean-field variational inference updates the variational parameters by setting \(\mathbf{\eta}_{\mathbf{\theta}_{j}}^{(r+1)}=\mathrm{E}_{q_{\mathbf{\eta}^{(r)}}}\left[s_{j}\left(\mathbf{\theta}_{-j},\mathbf{B},\mathbf{X}\right)\right]\). A similar result applies to the updates of the variational parameters \(\mathbf{\eta}_{B_{i}}\). Stochastic gradient variational inference (SGVI) (Hoffman et al., 2013) is a variant of variational inference that replaces the gradient computed over the whole sample with the one calculated over a random subsample \(\mathbf{X}^{(r)}\) of size \(n\) selected during iteration \(r\). Under conjugacy, SGVI then updates the vector \(\mathbf{\eta}_{\mathbf{B}}\) in iteration \(r\) by setting \[\eta_{\mathbf{B}_{i}^{(r)}}^{(r+1)}=\mathrm{E}_{q_{\mathbf{\eta}^{(r)}}}\left[\tilde{s}_{i}\left(\mathbf{B}_{-i}^{(r)},\mathbf{\theta},\mathbf{X}^{(r)}\right)\right],\] where \(\tilde{s}_{i}\left(\mathbf{B}_{-i}^{(r)},\mathbf{\theta},\mathbf{X}^{(r)}\right)\) is the sufficient statistics associated with the block \(\mathbf{B}_{i}\), and \(\mathbf{\eta}_{\mathbf{\theta}}\) through the recursion \[\eta_{\mathbf{\theta}_{j}}^{(r+1)}=(1-\rho_{r})\eta_{\mathbf{\theta}_{j}}^{(r)}+\rho_{r}\hat{\eta}_{\mathbf{\theta}_{j}}^{(r+1)},\] where \(\hat{\eta}_{\mathbf{\theta}_{j}}^{(r+1)}=\mathrm{E}_{q_{\mathbf{\eta}^{(r+1)}}}\left[s_{j}(\mathbf{\theta}_{-j},\mathbf{B}^{(r)},\mathbf{X}^{(r)})\right]\).
In the specific case of the MHP with exponential excitation functions we have \(\mathbf{\theta}=(\mathbf{\mu},\mathbf{\alpha},\mathbf{\beta})\), \(\mathbf{\eta}=\left(\mathbf{\eta}_{\mathbf{\alpha}},\mathbf{\eta}_{\mathbf{\beta}},\mathbf{\eta}_{\mathbf{\mu}},\mathbf{\eta}_{\mathbf{B}}\right)\) and \[q_{\mathbf{\eta}}(\mathbf{\alpha},\mathbf{\beta},\mathbf{\mu},\mathbf{B})=\prod_{i=1}^{N}q_{\mathbf{\eta}_{\mathbf{B}_{i}}}(\mathbf{B}_{i})\prod_{k=1}^{K}q_{\mathbf{\eta}_{\mathbf{\mu}_{k}}}(\mu_{k})\prod_{k=1}^{K}\prod_{\ell=1}^{K}q_{\mathbf{\eta}_{\mathbf{\alpha}_{k,\ell}}}(\alpha_{k,\ell})q_{\mathbf{\eta}_{\mathbf{\beta}_{k,\ell}}}(\beta_{k,\ell}),\] where \(\alpha_{k,\ell}\sim\mathrm{Gamma}(\eta_{\alpha,k,\ell,1},\eta_{\alpha,k,\ell,2})\), \(\beta_{k,\ell}\sim\mathrm{Gamma}(\eta_{\beta,k,\ell,1},\eta_{\beta,k,\ell,2})\), \(\mu_{\ell}\sim\mathrm{Gamma}(\eta_{\mu,\ell,1},\eta_{\mu,\ell,2})\), \(\mathbf{B}_{i}\) denotes the \(i\)-th row of the matrix \(\mathbf{B}\), and \(\mathbf{B}_{i}\) follows a categorical distribution with parameter \(\mathbf{\eta}_{\mathbf{B}_{i}}\). Hence, each iteration of the SGVI algorithm starts by updating the variational parameter for the local branching structure through the following formula: \[\eta_{B_{ij}^{(r)}}\propto\begin{cases}\exp\left\{\psi\left(\eta_{\mu,d_{i},1}^{(r)}\right)-\log\left(\eta_{\mu,d_{i},2}^{(r)}\right)\right\}&j=i\\ \exp\left\{\Psi_{ij}-\log\left(\eta_{\alpha,d_{j},d_{i},2}^{(r)}\right)-\log\left(\eta_{\beta,d_{j},d_{i},2}^{(r)}\right)\right\}&j<i,\\ 0&j>i,\end{cases}\] where \(\Psi_{ij}=\psi\left(\eta_{\alpha,d_{j},d_{i},1}^{(r)}\right)+\psi\left(\eta_{\beta,d_{j},d_{i},1}^{(r)}\right)-\frac{\eta_{\beta,d_{j},d_{i},1}^{(r)}}{\eta_{\beta,d_{j},d_{i},2}^{(r)}}\left(t_{i}^{(r)}-t_{j}^{(r)}\right)\). In this expression, \(\psi(x)=\frac{\mathrm{d}}{\mathrm{d}x}\ln\Gamma(x)\) denotes the digamma function, and \((t_{i}^{(r)},t_{j}^{(r)})\) represents the \(i\)-th and \(j\)-th event in \(\mathbf{X}^{(r)}\). Then, we update the rest of the variational parameters as: \[\eta_{\alpha_{k,\ell,1}}^{(r+1)} =(1-\rho_{r})\eta_{\alpha_{k,\ell,1}}^{(r)}+\rho_{r}\left(\kappa^{-1}\sum_{d_{i}=\ell}\sum_{\begin{subarray}{c}d_{j}=k\\ j<i\end{subarray}}\eta_{B_{ij}^{(r)}}+e_{k,\ell}\right),\] \[\eta_{\alpha_{k,\ell,2}}^{(r+1)} =(1-\rho_{r})\eta_{\alpha_{k,\ell,2}}^{(r)}+\rho_{r}\left(\kappa^{-1}\left(n_{k}^{(r)}-\sum_{d_{j}=k}\left(1+\frac{\kappa T-t_{j}}{\eta_{\beta_{k,\ell,2}}^{(r+1)}}\right)^{-\eta_{\beta_{k,\ell,1}}^{(r+1)}}\right)+f_{k,\ell}\right),\] \[\eta_{\beta_{k,\ell,1}}^{(r+1)} =(1-\rho_{r})\eta_{\beta_{k,\ell,1}}^{(r)}+\rho_{r}\left(\kappa^{-1}\sum_{d_{i}=\ell}\sum_{\begin{subarray}{c}d_{j}=k\\ j<i\end{subarray}}\eta_{B_{ij}^{(r)}}+r_{k,\ell}\right),\] \[\eta_{\beta_{k,\ell,2}}^{(r+1)} =(1-\rho_{r})\eta_{\beta_{k,\ell,2}}^{(r)}+\rho_{r}\left(\kappa^{-1}\sum_{d_{i}=\ell}\sum_{\begin{subarray}{c}d_{j}=k\\ j<i\end{subarray}}\eta_{B_{ij}^{(r)}}(t_{i}^{(r)}-t_{j}^{(r)})+s_{k,\ell}\right),\] \[\eta_{\mu_{\ell,1}}^{(r+1)} =(1-\rho_{r})\eta_{\mu_{\ell,1}}^{(r)}+\rho_{r}\left(\kappa^{-1}\sum_{d_{i}=\ell}\eta_{B_{ii}^{(r)}}+a_{\ell}\right),\] \[\eta_{\mu_{\ell,2}}^{(r+1)} =T+b_{\ell}.\] These updates are repeated until convergence.
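To make the local update concrete, the following is a minimal sketch of the variational branching-structure step. It is illustrative only: the array layout is an assumption, and the exponent uses the variational mean of \(\beta\), i.e. \(\eta_{\beta,\cdot,\cdot,1}/\eta_{\beta,\cdot,\cdot,2}\). The global Gamma parameters would then be updated through the stochastic recursion displayed above.

```python
import numpy as np
from scipy.special import digamma

def update_branching(times, dims, eta_mu, eta_alpha, eta_beta):
    """SGVI update of the branching-structure variational parameters (sketch).

    eta_mu : (K, 2); eta_alpha, eta_beta : (K, K, 2) Gamma variational
    parameters stored as (shape, rate). Returns an (n, n) lower-triangular
    matrix whose i-th row is the categorical parameter for B_i.
    """
    n = len(times)
    eta_B = np.zeros((n, n))
    for i in range(n):
        li = dims[i]
        logw = np.full(i + 1, -np.inf)
        # immigrant term: E_q[log mu_{d_i}]
        logw[i] = digamma(eta_mu[li, 0]) - np.log(eta_mu[li, 1])
        for j in range(i):
            kj = dims[j]
            mean_beta = eta_beta[kj, li, 0] / eta_beta[kj, li, 1]
            logw[j] = (digamma(eta_alpha[kj, li, 0]) - np.log(eta_alpha[kj, li, 1])
                       + digamma(eta_beta[kj, li, 0]) - np.log(eta_beta[kj, li, 1])
                       - mean_beta * (times[i] - times[j]))
        w = np.exp(logw - logw.max())        # stabilized normalization
        eta_B[i, :i + 1] = w / w.sum()
    return eta_B
```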
### Stochastic Gradient Langevin Dynamics Unlike the previous two sections, here we focus on inference methods that are based on the observed data likelihood (2) instead of the complete data likelihood (3). Specifically, we consider simulation methods that rely on Langevin dynamics (LD) (Neal, 2011), a class of MCMC methods that are based on the discretization of a continuous-time stochastic process whose equilibrium distribution is the desired posterior distribution. Compared to simple random walk MCMC algorithms, LD algorithms explore the parameter space much more efficiently because they use information about the gradient of the likelihood to guide the direction of the random walk. In particular, LD methods propose new values for the parameter according to \[\boldsymbol{\theta}^{*}=\boldsymbol{\theta}^{(r)}-\frac{\rho}{2}\left.\nabla_{\boldsymbol{\theta}}U\left(\boldsymbol{\theta}\mid\mathbf{X}\right)\right|_{\boldsymbol{\theta}=\boldsymbol{\theta}^{(r)}}+\sqrt{\rho}\epsilon_{r+1}, \tag{12}\] where \(\rho\) is the step size used to discretize the Langevin diffusion, \(U\left(\boldsymbol{\theta}\mid\mathbf{X}\right)=-\log p(\mathbf{X}\mid\boldsymbol{\theta})-\log p(\boldsymbol{\theta})\) is the negative logarithm of the unnormalized posterior of interest, and \(\epsilon_{r+1}\) is drawn from a standard multivariate normal distribution. If no discretization of the Langevin diffusion was involved, then this proposed value would come from the correct stationary distribution. However, the introduction of the discretization means that a correction is required. Hence, values proposed according to (12) are accepted with probability \[\min\left\{1,\frac{\exp\left\{-U(\boldsymbol{\theta}^{*}\mid\mathbf{X})\right\}}{\exp\left\{-U(\boldsymbol{\theta}^{(r)}\mid\mathbf{X})\right\}}\right\}. \tag{13}\] If accepted, then \(\boldsymbol{\theta}^{(r+1)}=\boldsymbol{\theta}^{*}\). Otherwise, \(\boldsymbol{\theta}^{(r+1)}=\boldsymbol{\theta}^{(r)}\). The stochastic gradient Langevin dynamics (SGLD) algorithm (Welling and Teh, 2011; Chen et al., 2014) replaces the likelihood computed over the whole sample with (an appropriately rescaled) likelihood evaluated on a random subsample \(\mathbf{X}^{(r)}\). SGLD also uses a decreasing stepsize \(\rho_{r}\) to construct the discretization of the Langevin diffusion in step \(r\) of the algorithm and ignores the correction step in (13). This leads to updates of the form \[\boldsymbol{\theta}^{(r+1)}=\boldsymbol{\theta}^{(r)}-\frac{\rho_{r}}{2}\left.\nabla_{\boldsymbol{\theta}}\tilde{U}(\boldsymbol{\theta}\mid\mathbf{X}^{(r)})\right|_{\boldsymbol{\theta}=\boldsymbol{\theta}^{(r)}}+\sqrt{\rho_{r}}\epsilon_{r+1}, \tag{14}\] where \(\tilde{U}(\boldsymbol{\theta}\mid\mathbf{X}^{(r)})=-\kappa^{-1}\log p\left(\mathbf{X}^{(r)}\mid\boldsymbol{\theta}\right)-\log p\left(\boldsymbol{\theta}\right)\). In the case of the MHP model with exponential excitation functions, we perform a logarithmic transformation on the model parameters before implementing the SGLD, so that \(\mathbf{\xi}_{\mathbf{\alpha}}=\log\mathbf{\alpha}\), \(\mathbf{\xi}_{\mathbf{\beta}}=\log\mathbf{\beta}\) and \(\mathbf{\xi}_{\mathbf{\mu}}=\log\mathbf{\mu}\).
Then, the gradients become: \[\nabla^{(r)}_{\xi_{\alpha_{k,\ell}}}U\left(\mathbf{\xi}\right) =-\sum_{d_{i}=\ell}\frac{\alpha^{(r)}_{k,\ell}\beta^{(r)}_{k,\ell }\sum_{d_{j}=k,j<i}\exp\left(-\beta^{(r)}_{k,\ell}\left(t^{(r)}_{i}-t^{(r)}_{j }\right)\right)}{\mu^{(r)}_{\ell}+\alpha^{(r)}_{k,\ell}\beta^{(r)}_{k,\ell}\sum _{d_{j}=k,j<i}\exp\left(-\beta^{(r)}_{k,\ell}\left(t^{(r)}_{i}-t^{(r)}_{j} \right)\right)}\] \[+\alpha^{(r)}_{k,\ell}\left(n^{(r)}_{k}-\sum_{d_{j}=k}\exp\left(- \beta^{(r)}_{k,l}\left(\kappa T-t_{j}\right)\right)+f_{k,\ell}\right)-e_{k, \ell},\] \[\nabla^{(r)}_{\xi_{\beta_{k,\ell}}}U\left(\mathbf{\xi}\right) =-\sum_{d_{i}=\ell}\frac{\alpha^{(r)}_{k,\ell}\beta^{(r)}_{k, \ell}\sum_{d_{j}=k,j<i}\left(1-\beta^{(r)}_{k,\ell}\left(t^{(r)}_{i}-t^{(r)}_{ j}\right)\right)\exp\left(-\beta^{(r)}_{k,\ell}\left(t^{(r)}_{i}-t^{(r)}_{j} \right)\right)}{\mu^{(r)}_{\ell}+\alpha^{(r)}_{k,\ell}\beta^{(r)}_{k,\ell} \sum_{d_{j}=k,j<i}\exp\left(-\beta^{(r)}_{k,\ell}\left(t^{(r)}_{i}-t^{(r)}_{j} \right)\right)}\] \[+\sum_{d_{j}=k}\alpha^{(r)}_{k,l}(\kappa T-t_{j})\exp\left(-\beta ^{(r)}_{k,l}\left(\kappa T-t_{j}\right)\right)-r_{k,\ell}+s_{k,\ell}\beta^{(r) }_{k,\ell},\] \[\nabla^{(r)}_{\xi_{\mu_{k,\ell}}}U\left(\mathbf{\xi}\right) =-\sum_{d_{i}=\ell}\frac{\mu^{(r)}_{\ell}}{\mu^{(r)}_{\ell}+ \alpha^{(r)}_{k,\ell}\beta^{(r)}_{k,\ell}\sum_{d_{j}=k,j<i}\exp\left(-\beta^{( r)}_{k,l}\left(t^{(r)}_{i}-t^{(r)}_{j}\right)\right)}+\mu^{(r)}_{\ell}(b_{\ell}+ \kappa T)-a_{\ell}.\] Note that SGLD does not require approximating the observed data likelihood. ## 4 Simulation studies In this section, we conduct a set of simulations to understand the performance of the algorithms with and without time budget constraints. Compared with small-scale learning problems, large-scale problems are subject to a qualitatively different tradeoff involving the computational complexity of the underlying algorithm (Bottou and Bousquet, 2007), making evaluation under time constraints key. We also investigate the model fitting performance of all algorithms under different subsampling ratios. ### Experimental setting Data generation mechanism.We consider the multivariate Hawkes process model presented in section 2 with \(K=3\) dimensions and the following parameter settings: \[\mathbf{\alpha} =\begin{bmatrix}0.3&0.3&0.3\\ 0.3&0.3&0.3\\ 0.3&0.3&0.3\end{bmatrix}, \mathbf{\beta} =\begin{bmatrix}4&4&4\\ 4&4&4\\ 4&4&4\end{bmatrix}, \mathbf{\mu} =\begin{bmatrix}0.5\\ 0.5\\ 0.5\end{bmatrix}.\] Algorithms to be compared.We compare the performances of SGEM, SGVI, SGLD and the boundary-corrected versions for the first two algorithms (SGEM-c and SGVI-c). Also, as a 'gold-standard' that does not involve subsampling, we implemented full MCMC and its boundary-corrected version (MCMC-c). Parameters.For the model hyperparameters from Section 2.2, we let \(a_{\ell}=2,b_{\ell}=4,e_{k,\ell}=2,f_{k,\ell}=4,r_{k,\ell}=2,s_{k,\ell}=0.5\) for \(k,\ell=1,\dots,K\). We simulate \(K_{d}=50\) datasets for \(T=1000\). For every dataset, we start all algorithms at 16 different initial points to minimize the risk of convergence to a local optimum. For the tuning hyperparameters in stochastic optimization algorithms, we consider several subsampling ratios of \(\kappa=\{0.01,0.05,0.1,0.2,0.3,0.4\}\) and let \(\tau=1,\kappa=0.51\). For SGEM and SGVI, we let \(\rho_{0}=0.02\), and for SGLD we let \(\rho_{0}=\frac{0.1}{T_{k}}\). We chose \(\delta=0.25\) as the threshold for boundary-corrected methods. 
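As a complement to the data generation mechanism described above, the following is one possible way to simulate event sequences of this kind using the standard Ogata thinning construction (this is an assumption about the simulation procedure; the helper names are illustrative). The naive intensity recomputation makes the sketch \(\mathcal{O}(n^{2})\), so a shorter horizon than the paper's \(T=1000\) is used in the usage example to keep it fast.

```python
import numpy as np

rng = np.random.default_rng(1)

def intensities(t, times, dims, mu, alpha, beta):
    """Vector of conditional intensities (1) at time t given the history."""
    lam = mu.copy()
    for tj, kj in zip(times, dims):
        lam += alpha[kj, :] * beta[kj, :] * np.exp(-beta[kj, :] * (t - tj))
    return lam

def simulate_mhp(mu, alpha, beta, T):
    """Simulate an exponential-kernel MHP on [0, T] by Ogata's thinning."""
    times, dims = [], []
    t = 0.0
    while True:
        # between events the intensities only decay, so the current total
        # intensity is a valid upper bound for the thinning step
        lam_bar = intensities(t, times, dims, mu, alpha, beta).sum()
        t += rng.exponential(1.0 / lam_bar)
        if t > T:
            break
        lam = intensities(t, times, dims, mu, alpha, beta)
        u = rng.uniform(0.0, lam_bar)
        if u < lam.sum():                      # accept the candidate point
            dims.append(int(np.searchsorted(np.cumsum(lam), u)))
            times.append(t)
    return np.array(times), np.array(dims, dtype=int)

# settings of the simulation study: K = 3, alpha = 0.3, beta = 4, mu = 0.5
# (the paper uses T = 1000; a shorter horizon keeps this illustration quick)
K = 3
times, dims = simulate_mhp(np.full(K, 0.5), np.full((K, K), 0.3),
                           np.full((K, K), 4.0), T=100.0)
```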
Performance Metrics for Model Fitting. We consider the observed data likelihood defined in (2) as a measure for model fitting. Denoting by \(\mathrm{ODL}_{d,\iota}\) the observed data likelihood calculated based on dataset \(d\) and initial point \(\iota\), we define \(\mathrm{BODL}_{d}=\max_{1\leq\iota\leq 16}\mathrm{ODL}_{d,\iota}\) as the best-observed data likelihood (BODL), as a basis for evaluating model performance. Finally, in order to compare model-fitting performance under different subsampling ratios and different datasets, we propose the following relative best-observed data likelihood (RBODL): \[\mathrm{RBODL}_{d,\kappa_{1},\kappa_{2}}=\frac{\mathrm{BODL}_{d,\kappa_{1}}}{\mathrm{BODL}_{d,\kappa_{2}}},\] where \(\mathrm{BODL}_{d,\kappa_{1}},\mathrm{BODL}_{d,\kappa_{2}}\) are the best-observed data likelihoods on dataset \(d\) under subsampling ratios \(\kappa_{1}\) and \(\kappa_{2}\). Additionally, we refer to \(\kappa_{2}\) as the reference subsampling ratio for \(\mathrm{RBODL}_{d,\kappa_{1},\kappa_{2}}\). The RBODL is fairly easy to interpret, in that \(\mathrm{RBODL}_{d,\kappa_{1},\kappa_{2}}>1\) indicates a superior empirical performance of subsampling ratio \(\kappa_{1}\) compared to \(\kappa_{2}\) and vice versa. Performance Metrics for Estimation Accuracy. We consider performance metrics for both point and uncertainty estimation. To evaluate estimation accuracy of the model parameters, we rely on the averaged root mean integrated squared error (RMISE) for \(\mathbf{\alpha},\mathbf{\beta}\), and use mean absolute error (MAE) for \(\mathbf{\mu}\) on the log scale: \[\text{RMISE}(\mathbf{\alpha},\mathbf{\beta}): =\frac{1}{K^{2}}\sum_{k=1}^{K}\sum_{\ell=1}^{K}\sqrt{\int_{0}^{+\infty}\left(\phi_{k,\ell}^{\text{true}}(x)-\hat{\phi}_{k,\ell}(x)\right)^{2}\ \mathrm{d}x},\] \[\text{MAE}(\mathbf{\mu}): =\frac{1}{K}\sum_{k=1}^{K}|\log(\mu_{k}^{\text{true}})-\log(\hat{\mu}_{k})|,\] where \(\hat{\mu}_{k}\) is the point estimator of \(\mu_{k}\) (the posterior mode for the stochastic gradient EM, the posterior mean under the variational approximation for the stochastic gradient variational method, and the posterior mean of the samples after burn-in for the stochastic gradient Langevin dynamics), and \(\hat{\phi}_{k,\ell}(x)\) is obtained by plugging in the point estimators for \(\alpha_{k,\ell}\) and \(\beta_{k,\ell}\) into the exponential decay function. The RMISE is a commonly used metric for nonparametric triggering kernel estimation for MHP models (Zhou et al., 2020) and collectively evaluates the estimation performance for all model parameters. We also evaluate the uncertainty estimates generated by the SGVI, SGVI-c and SGLD models (SGEM provides point estimators, but does not directly provide estimates of the posterior variance). To do so, we consider the interval score (IS) (Gneiting and Raftery, 2007) for 95% credible intervals, which jointly evaluates the credible interval width and its coverage rate. We also separately compute the average coverage rate (ACR), defined as the proportion of correct coverages out of \(2K^{2}+K\) model parameters, and the average credible interval width (AIW) as references. ### Simulation results Optimal subsampling ratios. Table 1 shows the RBODLs for all methods subject to three time budgets: 1, 3 and 5 minutes. We choose \(\kappa=0.01\) as the reference subsampling ratio. The results indicate that, except for SGLD run for 5 minutes, all methods reach the highest RBODL at \(\kappa=0.05\).
Given that this optimum is greater than 1, this indicates that choosing a subsampling ratio around 0.05 (rather than the baseline, 0.01) leads to optimal model-fitting performance under a time budget. For a given method under fixed running time, we observe that the RBODL tends to drop as \(\kappa\) increases. This is likely because larger subsamples take considerably more time to process due to the quadratic computational complexity for each iteration. We also observe such drops in RBODL tend to reduce in size as running time increase, which suggests better model convergence with more computation time. Finally, we see more dramatic drops in RBODL for SGVI compared to SGEM under the same running time, which suggests that the EM algorithms tends to converge faster than VI algorithms. This result concurs with those of Zhou et al. (2020) in the non-stochastic gradient setting. \begin{table} \begin{tabular}{c c c c c c} \hline \hline methods & running time & 0.05 & 0.1 & 0.2 & 0.3 & 0.4 \\ \hline \multirow{4}{*}{SGEM} & 1 min & **1.003** (0.003) & 1.002 (0.004) & 1.000 (0.003) & 0.999 (0.004) & 0.997 (0.004) \\ & 3 min & **1.004** (0.005) & 1.003 (0.005) & 1.002 (0.005) & 1.001 (0.006) & 1.000 (0.005) \\ & 5 min & 1.003 (0.005) & **1.004** (0.006) & 1.002 (0.006) & 1.002 (0.006) & 1.001 (0.006) \\ \cline{2-6} & 1 min & **1.003** (0.007) & 1.002 (0.006) & 1.000 (0.006) & 0.999 (0.006) & 0.998 (0.006) \\ & 3 min & **1.003** (0.005) & 1.003 (0.006) & 1.002 (0.006) & 1.001 (0.006) & 1.000 (0.006) \\ & 5 min & 1.003 (0.008) & **1.004** (0.008) & 1.003 (0.007) & 1.003 (0.008) & 1.001 (0.008) \\ \cline{2-6} & 1 min & **1.004** (0.001) & 1.002 (0.001) & 1.000 (0.001) & 0.997 (0.001) & 0.995 (0.001) \\ SGVI & 3 min & **1.005** (0.001) & 1.004 (0.001) & 1.002 (0.001) & 1.001 (0.001) & 0.999 (0.001) \\ & 5 min & **1.005** (0.001) & 1.005 (0.001) & 1.003 (0.001) & 1.002 (0.001) & 1.000 (0.001) \\ \cline{2-6} & 1 min & **1.002** (0.001) & 1.000 (0.001) & 0.997 (0.001) & 0.995 (0.001) & 0.992 (0.001) \\ SGVI c. & 3 min & **1.002** (\(<\)0.001) & 1.002 (0.001) & 1.000 (0.001) & 0.998 (0.001) & 0.996 (0.001) \\ & 5 min & **1.002** (\(<\)0.001) & 1.002 (0.001) & 1.001 (0.001) & 0.999 (0.001) & 0.998 (0.001) \\ \cline{2-6} & 1 min & **1.001** (\(<\)0.001) & 0.996 (0.001) & 0.988 (0.002) & 0.98 (0.004) & 0.968 (0.005) \\ SGLD & 3 min & **1.001** (\(<\)0.001) & 0.998 (0.001) & 0.991 (0.001) & 0.986 (0.003) & 0.977 (0.004) \\ & 5 min & **1.001** (\(<\)0.001) & 0.999 (0.001) & 0.992 (0.001) & 0.987 (0.002) & 0.980 (0.003) \\ \hline \hline \end{tabular} \end{table} Table 1: RBODLs for SGEM, SGVI and SGLD under running times of 1, 3 and 5 minutes, with \(\kappa=0.01\) being the reference subsampling ratio. Average RBODL across 50 datasets is shown, with standard deviations in the brackets. Estimation accuracy.Table 2 shows the estimation performance measures, including RMISE, MAE, IS, ACR and AIW for all seven methods. Similar to the previous simulation study, we run the same algorithm on 50 datasets with 16 different initial parameter values and choose the instance associated with the highest observed data likelihood for estimation performance evaluation. We keep the same stochastic optimization hyperparameters, fixing the subsampling ratio \(\kappa\) at 0.05. For SGLD, SGVI, SGVI-c, SGEM and SGEM-c, we run the algorithms for 30 minutes. For MCMC and MCMC-c, we run the algorithms on the whole dataset without subsampling for 15,000 iterations, which took around 12 hours to complete. 
We discard the first 5,000 samples as burn-in and calculate the posterior median for estimation performance evaluation. As would be expected, the lowest values of RMISE, MAE and IS correspond to the three MCMC algorithms. Moreover, MCMC algorithms produce coverage rates that are very close to nominal. Among the remaining methods in Section 3, SGLD shows the best uncertainty estimation performance with the lowest IS and ACR closest to the nominal rate, while SGVI-c shows the best point estimation performance with RMISE even lower than the MCMC methods. Additionally, we observe a significant improvement in both RMISE and IS for SGVI-c compared to SGVI, indicating that incorporating a boundary correction can lead to both improved point and uncertainty estimation performance for SGVI. Sensitivity analysis for different dataset sizes. We also look at how sensitive the RBODLs are to the scale of the data we have. We run the same algorithms on two sets of 50 datasets with \(T=500\) and \(T=2000\), and the RBODLs are shown in Tables 3 and 4. For the small datasets, the median RBODLs change much less across different subsampling ratios than they do for the large datasets. This is not surprising, as it indicates that the algorithms tend to reach convergence more easily for smaller datasets than for larger ones. Additionally, the optimal subsampling ratios for smaller datasets tend to be larger, indicating that there could be a fixed amount of data needed for algorithms to attain better model-fitting results. Sensitivity analysis for the stochastic search parameters. Next, we investigate the effect of the stochastic search parameters on our previous simulation results. To this end, we rerun our analysis for the medium-sized dataset under each of the following three sets of \(\tau_{1},\tau_{2}\) values: (1) \(\tau_{1}=5,\tau_{2}=0.51\), (2) \(\tau_{1}=1,\tau_{2}=1\), (3) \(\tau_{1}=5,\tau_{2}=1\). The results are shown in Tables 5, 6 and 7. As expected, the behavior of RBODL with respect to subsampling ratio and running time is similar to the default scenario shown in Table 1. Also, the results in Table 5 are more similar to those in Table 1 than cases in Tables 6 and 7. This is because \(\tau_{2}\) controls the decay rate of the stepsize parameter, which has a bigger long-term effect on \(\rho_{r}\) compared to the delay parameter \(\tau_{1}\). We also looked at the estimation performances for all five methods under these four scenarios, with performance metrics shown in Tables 8 and 9. We can see that all algorithms performed significantly better in scenarios where \(\tau_{2}=0.51\), indicating that a large value of \(\tau_{2}\) may lead to suboptimal estimates because the algorithms converged too fast. We also note that the SGEM-c outperformed SGEM where \(\tau_{2}=0.51\). Sensitivity analysis for the threshold values of the boundary-corrected methods. Previously, we chose a fixed value \(\delta=0.25\) as the common threshold for all boundary-corrected methods. In this simulation study, we would like to propose a systematic way of choosing \(\delta\) and study the parameter estimation performance under different values of \(\delta\). Given an estimate for \(\mathbf{\beta}\) and a fixed value \(r>0\), we find \(\delta\) such that \(\delta=\frac{r}{K^{2}}\sum_{k=1}^{K}\sum_{\ell=1}^{K}\frac{1}{\beta_{k,\ell}}\).
Intuitively, the exponential \begin{table} \begin{tabular}{l c c c c c} \hline \hline methods & RMISE \((\mathbf{\alpha},\mathbf{\beta})\) & MAE \((\mathbf{\mu})\) & IS & ACR & AIW \\ \hline \multirow{2}{*}{MCMC} & 0.042 & 0.072 & 1.042 & 0.952 & 1.015 \\ & (0.008) & (0.036) & (0.274) & (0.046) & (0.053) \\ \multirow{2}{*}{MCMC-c} & 0.042 & 0.072 & 1.056 & 0.952 & 1.015 \\ & (0.008) & (0.036) & (0.278) & (0.055) & (0.058) \\ \hline \multirow{2}{*}{SGLD} & 0.052 & 0.109 & 3.898 & 0.667 & 0.844 \\ & (0.019) & (0.283) & (4.739) & (0.152) & (0.172) \\ \multirow{2}{*}{SGVI} & 0.046 & 0.103 & 6.163 & 0.333 & 0.222 \\ & (0.008) & (0.048) & (2.117) & (0.131) & (0.012) \\ \multirow{2}{*}{SGVI-c} & 0.040 & 0.093 & 4.905 & 0.429 & 0.213 \\ & (0.007) & (0.044) & (1.698) & (0.139) & (0.012) \\ \multirow{2}{*}{SGEM} & 0.100 & 0.024 & \multirow{2}{*}{-} & \multirow{2}{*}{-} & \multirow{2}{*}{-} \\ & (0.076) & (0.026) & & & \\ \multirow{2}{*}{SGEM-c} & 0.103 & 0.023 & & & \\ & (0.065) & (0.022) & & & \\ \hline \hline \end{tabular} \end{table} Table 2: Estimation metrics across all seven methods. The values in the grid cells are the average across 50 datasets, with the standard deviation in the brackets. functions \(\alpha_{k,\ell}\exp(-\beta_{k,\ell}(T-t))\) evaluated at \(t=T-\delta\) will roughly decay to \(e^{-r}\) times its boundary value, evaluated at \(t=T\). Table 10 shows the point and uncertainty estimation results for SGVI-c and SGEM-c under values of \(r\in\{0.5,1,2,3,4\}\). For both methods, all estimation metrics reached optimality between \(r=1\) and \(r=2\), indicating that doing a first-order Taylor expansion for a certain amount of observations at the tail end of the sampled sequence may lead to lower point and uncertainty estimation errors. ### Real-world application Data description.In this section, we apply our methods to model the market risk dynamics in the Standard & Poor (S&P)'s 500 intraday index prices for its 11 sectors: Consumer Discretionary (COND), Communication Staples (CONS), Energy (ENRS), Financialis (FINL), Health Care (HLTH), Industrials (INDU), Information Technology (INFT), Materials (MATR), Real Estate (RLST), Communication Services (TELS), Utilities (UTIL). To achieve this, price data between August 22, 2022 and Jan 23, 2023 was downloaded from Bloomberg Finance L.P. Similar to Rodriguez et al. (2017), an event occurs on dimension \(k=1,\ldots,11\) if the negative log returns in sector \(k\) exceeds a predetermined threshold (in our case, a 0.05% drop on a one-minute basis). The resulting dataset contains 55,509 events across the 11 dimensions. Results.We fit a Hawkes process model with exponential decay functions to the event data using the SGEM, SGEM-c, SGVI, SGVI-c and SGLD algorithms. We set the subsampling ratio of \(\kappa=0.01\) for SGLD and of \(\kappa=0.05\) for all other methods. Similar to the procedure in Section 4.1, we start all algorithms at 16 different initial points and choose the instances with the highest observed data likelihood to compute the estimates. Furthermore, all these algorithms were run for a fixed period of 30 minutes for each initial set of values. As a reference, we also apply MCMC and MCMC-c to the dataset and use 10,000 posterior samples after 10,000 burn-ins, which roughly took around two days. Figure 1 shows heatmaps of point estimates for the \(\boldsymbol{\alpha}\) parameters for all seven algorithms. 
To facilitate comparisons, we also generate a visual representation by constructing a measure of similarity between sectors \(i\) and \(j\) as \(\Upsilon(i,j)=\exp\left\{-\frac{1}{2}(\alpha_{ij}+\alpha_{ji})\right\}\), and then use multidimensional scaling (Torgerson, 1952) to find a two-dimensional representation of these similarities. Because the representation is arbitrary up to translations, rotations and reflections, we use Procrustes analysis (Dryden and Mardia, 2016) to align the representations for all the algorithms. All seven methods yield similar point estimates for \(\boldsymbol{\alpha}\). To explore this question in more detail, we present in Table 11 the mean square distance between point estimates for each pair of methods. We see that MCMC and MCMC-c are almost identical, and that SGEM, SGVI, and SGEM-c yield very similar estimates. Interestingly, SGVI-c yields results that are as close to those of the MCMC "gold standard" as those from SGEM, SGVI, and SGEM-c, but that are fairly different from them. We also note that SGLD seems to yield results that are the furthest away from the MCMC procedures, suggesting that a time budget of 30 minutes is not enough to achieve reliable results in this example. From a substantive point of view, Figure 1 suggests mutual excitation of exceedances within each of the following three groups: (1) UTIL, MATR and COND, (2) INFT, FINL and INDU, (3) TELS, RLST, HLTH, CONS and ENRS. One particularly interesting result is the estimates for the energy (ENRS) sector, which has a much higher diagonal \(\alpha\) estimate and lower off-diagonal estimates corresponding to other sectors. This is supported by the scatterplot of principal coordinates, in which the point for ENRS is away from all other sectors, indicating that this sector may be less strongly associated with price movements in other sectors. Next, we show in Figure 2 point estimates of \(\boldsymbol{\beta}\) under all 7 methods, and in Table 12 the mean square distance between the estimates generated by the different methods. The pattern of the results is very similar: (1) MCMC and MCMC-c yield the most similar results, (2) SGLD seems to yield estimates that are furthest away from those generated by the MCMC methods, (3) SGEM, SGVI, and SGEM-c yield very similar results to each other, and (4) SGVI-c yields different results from SGEM, SGVI, and SGEM-c, but they are as close to those of the MCMC approaches as those from the three alternatives. We note, however, that the estimates of \(\boldsymbol{\beta}\) generated by MCMC and MCMC-c do seem to differ from each other much more than the estimates of \(\boldsymbol{\alpha}\) did. Finally, Figure 3 shows the point estimates for \(\boldsymbol{\mu}\), and Table 13 shows mean square distances between the model estimates. Not surprisingly, the same patterns arise again, although we note that the distances tend to be smaller. From an application point of view, we note that all methods identify ENRS as a sector with a very high baseline rate of events, and FINL, INFT, INDU, HLTH and CONS as sectors where the majority of price drops are the result of contagion from turbulence in other sectors. To complete our analysis, we present in Figures 4, 5 and 6 the length of the estimated posterior credible intervals for \(\boldsymbol{\alpha}\), \(\boldsymbol{\beta}\) and \(\boldsymbol{\mu}\) for SGVI, SGVI-c and SGLD, as well as for MCMC and MCMC-c.
As was the case with simulated datasets, stochastic gradient methods seem to underestimate the uncertainty in the posterior distribution, with SGVI and SGVI-c doing so much more dramatically than SGLD. ## 5 Discussion Our experiments highlight some clear tradeoffs. SGEM algorithms are the fastest (which is consistent with the results of Zhou et al. (2020) for full-batch methods), but they do not yield interval estimates of the parameters. SGVI algorithms are almost as computationally efficient as SGEM and yield interval estimates. However, these interval estimates are too narrow, leading to substantial undercoverage. That variational inference underestimates the variance of the posterior distribution is well known (e.g., see Blei et al., 2017), but it was still striking to see how low the coverage can be in the case of MHPs. SGLD algorithms are the slowest and require careful tuning, but can also lead to more accurate interval estimates if allowed to run for enough time. Our experiments also suggest that the new approximation to the full-data likelihood based on a first-order Taylor expansion of the compensator of the Hawkes process has the potential to improve the accuracy of the algorithm with minimal additional computational costs. This was clearer for SGVI algorithms, where the approximation clearly improved the MSE of the point estimators. Finally, our experiments suggest that, as sample sizes grow, the fraction of time involved in the subsamples used to compute stochastic gradients can decrease as long as the number of observations in each subsample remains above a critical threshold. This is important, because it suggests that, at least empirically, the computational complexity of these algorithms can remain roughly constant as a function of the sample size. The work in this manuscript focused on a very particular class of MHP with constant baseline intensity and a parametric excitation function. This was a deliberate choice meant to simplify exposition and interpretation. However, the insights from this manuscript apply much more broadly. For example, we are currently working on fast inference algorithms for MHP models where the excitation functions are modeled nonparametrically using mixtures of dependent Dirichlet processes. This, and other extensions, will be discussed elsewhere. The R scripts for the algorithms were run on CentOS 7 Linux, with 128GB of memory. The dataset and the R code for the simulation and application examples can be found at [https://github.com/AlexJiang1125/MHP](https://github.com/AlexJiang1125/MHP). ## 6 Acknowledgements This research was partially supported by NSF Grants NSF-2023495 and NSF-2114727. We would also like to acknowledge the support of the Research Computing Club at the University of Washington for providing access to their computational resources.
2309.03324
Lens mass estimate in the Galactic disk extreme parallax microlensing event Gaia19dke
We present the results of our analysis of Gaia19dke, an extraordinary microlensing event in the Cygnus constellation that was first spotted by the {\gaia} satellite. This event featured a strong microlensing parallax effect, which resulted in multiple peaks in the light curve. We conducted extensive photometric, spectroscopic, and high-resolution imaging follow-up observations to determine the mass and the nature of the invisible lensing object. Using the Milky Way priors on density and velocity of lenses, we found that the dark lens is likely to be located at a distance of $D_L =(3.05^{+4.10}_{-2.42})$kpc, and has a mass of $M_L =(0.51^{+3.07}_{-0.40}) M_\odot$. Based on its low luminosity and mass, we propose that the lens in Gaia19dke event is an isolated white dwarf.
M. MaskoliΕ«nas, Ł. Wyrzykowski, K. Howil, K. A. Rybicki, P. ZieliΕ„ski, Z. Kaczmarek, K. KruszyΕ„ska, M. JabΕ‚oΕ„ska, J. Zdanavičius, E. PakΕ‘tienΔ—, V. Čepas, P. J. MikoΕ‚ajczyk, R. Janulis, M. Gromadzki, N. Ihanec, R. AdomavičienΔ—, K. Ε iΕ‘kauskaitΔ—, M. Bronikowski, P. Sivak, A. StankevičiΕ«tΔ—, M. Sitek, M. Ratajczak, U. Pylypenko, I. Gezer, S. Awiphan, E. Bachelet, K. BΔ…kowska, R. P. Boyle, V. Bozza, S. M. Brincat, U. Burgaz, T. Butterley, J. M. Carrasco, A. Cassan, F. Cusano, G. Damljanovic, J. W. Davidson, V. S. Dhillon, M. Dominik, F. Dubois, H. H. Esenoglu, R. Figuera Jaimes, A. Fukui, C. Galdies, A. Garofalo, V. Godunova, T. GΓΌver, J. Heidt, M. Hundertmark, I. Izviekova, B. Joachimczyk, M. K. KamiΕ„ska, K. KamiΕ„ski, S. Kaptan, T. Kvernadze, O. Kvaratskhelia, S. Littlefair, O. Michniewicz, N. Nakhatutai, W. OgΕ‚oza, R. Ohsawa, J. M. Olszewska, M. PoliΕ„ska, A. Popowicz, J. K. T. Qvam, M. Radziwonowicz, D. E. Reichart, A. SΕ‚owikowska, A. Simon, E. Sonbas, M. Stojanovic, Y. Tsapras, S. Vanaverbeke, J. Wambsganss, R. W. Wilson, M. Ε»ejmo, S. Zola
2023-09-06T19:06:45Z
http://arxiv.org/abs/2309.03324v1
# Lens mass estimate in the Galactic disk extreme parallax microlensing event Gaia19dke ###### Abstract We present the results of our analysis of Gaia19dke, an extraordinary microlensing event in the Cygnus constellation that was first spotted by the _Gaia_ satellite. This event featured a strong microlensing parallax effect, which resulted in multiple peaks in the light curve. We conducted extensive photometric, spectroscopic, and high-resolution imaging follow-up observations to determine the mass and the nature of the invisible lensing object. Using the Milky Way priors on density and velocity of lenses, we found that the dark lens is likely to be located at a distance of \(D_{L}=(3.05^{+4.10}_{-2.4})\) kpc, and has a mass of \(M_{L}=(0.51^{+5.07}_{-0.46})M_{\odot}\). Based on its low luminosity and mass, we propose that the lens in Gaia19dke event is an isolated white dwarf. Key Words.:Gravitational lensing: micro - Stars: black holes - white dwarfs - Stars: neutron - Techniques: photometric - Techniques: spectroscopic + Footnote †: offprints: ## 1 Introduction In the context of a standard point-source single-lens photometric microlensing event (Paczynski, 1996), it is generally challenging to determine a comprehensive set of physical parameters that fully describe the lensing object and its properties. The reason behind this limitation lies in the fact that the standard model of the light curve for such events relies on a single parameter, known as the event's time-scale (\(t_{\rm E}\)), which is dependent on three physical quantities: the distances of the source and lens, as well as the relative velocity between the lens and the source. Consequently, it becomes difficult to straightforwardly differentiate between microlensing events caused by main sequence (MS) stars and those caused by stellar remnants like white dwarfs (WD), neutron stars (NS), or stellar-mass black holes (BH) within the vast pool of tens of thousands of photometric microlensing events discovered over the last three decades through dedicated microlensing surveys such as OGLE (Udalski et al., 2015), MOA (Sumi et al., 2013), or KMTNet (Kim et al., 2016). The usage of microlensing can be instrumental in shedding light on various unresolved questions concerning stellar remnants, such as the population study and mass distribution of white dwarfs (Raddi et al., 2022), the masses of neutron stars (Ozel & Freire, 2016), the existence of a mass-gap between black holes and neutron stars (Bailyn et al., 1998; Ozel et al., 2010; Farr et al., 2011), and the potential of black holes to explain at least a portion of the enigm can be deduced using the following expressions (Gould, 2000; Gould & Yee, 2014): The mass and distance of the lensing object can then be derived as \[M=\frac{\theta_{\rm E}}{\kappa\pi_{\rm E}}=\frac{\mu_{\rm rel}t_{\rm E}}{\kappa \pi_{\rm E}}\,,\ \ \ \kappa\equiv\frac{4G}{c^{2}{\rm AU}}\simeq 8.1\frac{\rm mas}{M_{ \odot}}\,, \tag{1}\] and \[D_{\rm L}=\frac{1}{\mu_{\rm rel}t_{\rm E}\pi_{\rm E}+1/D_{\rm S}}\,, \tag{2}\] where we used the fact that the angular size of the Einstein radius can be rewritten as a product of the length of the vector of the heliocentric relative proper motion \(|\mu_{\rm rel}|=|\mu_{L}-\mu_{S}|\) between the lens (L) and source (S) and the event's timescale \(t_{\rm E}\). The parallax and time scale are the two physical parameters that can be determined when using the photometric light curves of microlensing events. 
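The two relations above are simple enough to evaluate directly. The sketch below is illustrative only, with the unit conventions as assumptions (\(\mu_{\rm rel}\) in mas/yr, \(t_{\rm E}\) in years so that \(\theta_{\rm E}=\mu_{\rm rel}t_{\rm E}\) is in mas, and distances in kpc); it is a plug-in evaluation of equations (1) and (2), not the Galactic-prior inference actually used to obtain the lens parameters in this work.

```python
KAPPA = 8.144  # mas / M_sun; kappa = 4G / (c^2 AU), quoted as ~8.1 in the text

def lens_mass_and_distance(mu_rel, t_E, pi_E, D_S):
    """Point-lens mass and distance from equations (1) and (2).

    mu_rel : relative lens-source proper motion [mas/yr]
    t_E    : Einstein time-scale [yr]
    pi_E   : microlensing parallax (dimensionless)
    D_S    : source distance [kpc]
    """
    theta_E = mu_rel * t_E                       # angular Einstein radius [mas]
    M_L = theta_E / (KAPPA * pi_E)               # lens mass [M_sun], eq. (1)
    D_L = 1.0 / (theta_E * pi_E + 1.0 / D_S)     # lens distance [kpc], eq. (2)
    return M_L, D_L
```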
Without the knowledge of the Einstein radius (\(\theta_{\rm E}\)), the mass and distance of the lens can be determined by employing probability distributions for the density and velocity of lenses (e.g. Wyrzykowski et al., 2016; Wyrzykowski & Mandel, 2020; Mroz & Wyrzykowski, 2021). One of the methods to obtain \(\theta_{\rm E}\) is to observe both the changes in observed light (photometric component) and the position of the source during the microlensing event (astrometric component). While a microlensing event occurs, the source is split into two, unevenly magnified images. Unlike in strong lensing, the angular separation of these images is small and was observed only through the use of Very Large Telescope's instruments GRAVITY and PIONIER (Dong et al., 2019; Cassan et al., 2022). By obtaining precise measurements of the source's position, it becomes possible to monitor the motion of the light's centroid. This technique is referred to as astrometric microlensing and has demonstrated success in recent observations and discovery of the first isolated stellar-mass black hole with _Hubble_ Space Telescope (Sahu et al., 2022; Lam et al., 2022; Mroz et al., 2022). It will become possible to derive the size of the Einstein radii for many of the brighter microlensing events observed by the European Space Agency's _Gaia_ space mission(Gaia Collaboration et al., 2016) as _Gaia_ was designed to collect both photometric and astrometric measurements for about 2 billion stars(Gaia Collaboration et al., 2016, 2023b). It is anticipated that _Gaia_'s astrometric observations will enable the measurement of astrometric microlensing signals (e.g. Dominik & Sahu, 2000; Belokurov & Evans, 2002; Rybicki et al., 2018), which in turn will yield \(\theta_{\rm E}\)(Wyrzykowski et al., 2023). To ensure the usefulness of _Gaia_'s astrometry in microlensing events, it was crucial to gather dense and accurate photometric data through intensive monitoring of bright events (\(G\lesssim 16\)) that occurred during the _Gaia_ mission (2014-2025). These events were reported in near-real-time by the _Gaia_ Science Alerts system (Wyrzykowski & Hodgkin, 2012; Hodgkin et al., 2013, 2021). Of particular significance are the events that exhibit a well-constrained microlensing parallax. When combined with the source distance, these parameters allow for a comprehensive understanding of the lens's distance and luminosity. Consequently, the nature of the lens can be revealed, providing a complete picture of its properties. In this paper, we present a detailed investigation of the Gaia19dke microlensing event, which satisfies all the aforementioned criteria. The event has already lasted for more than 2000 days, making it a long-duration event. Moreover, it demonstrated a highly pronounced microlensing parallax effect and was sufficiently bright to enable precise astrometry measurements by the _Gaia_ mission. While the _Gaia_ astrometric data will be published in Gaia Data Release 4 (\(\sim\)Q4 2025), here we present the comprehensive analysis of the photometric data and use the Galactic model to predict the most likely properties of the dark lens. The paper is organized as follows. In Section 2 we present the discovery and follow-up observations of the event. Section 3 contains the description of the microlensing model used to fit the photometric data. In Section 4, we analyze the source star using photometry and spectroscopy and in Section 5, we derive the probable parameters of the lens. 
We discuss the results in Section 6 and conclude in Section 7. ## 2 Discovery and follow-up of Gaia19dke The Gaia19dke (IAU Transient Name Server, TNS, id AT2019nd) event is located in the Cygnus constellation close to the edge of the Lyra constellation (Fig. 1) in the Northern Galactic Plane, at (\(RA\), \(\delta\)) = (19:25:58.68, +28:24:24.70) in the equatorial system and (\(l\), \(b\)) = (62.01113, 5.07414) degrees in the Galactic system. It was reported by the _Gaia_ Science Alerts System on the 8th of August 2019 (JD - 2450000 = 8703) as a small rise of brightness in the _Gaia_ G-band in a previously non-varying star. The _Gaia_ Data Release 3 (_Gaia_ DR3, Gaia Collaboration, 2020) source_id is 2026409795566972544. The object was previously recorded in the 2MASS catalogue under id 19255869+2824249 (Skrutskie et al., 2006). _Gaia_ DR3 for this object provides the following astrometric parameters: \(\varpi=(0.0718\pm 0.0267)\) mas, \(\mu_{RA}=(-2.862\pm 0.022)\) mas/yr and \(\mu_{\delta}=(-5.447\pm 0.029)\) mas/yr, where \(\varpi\) is the stellar parallax of the source, and \(\mu_{RA}\) and \(\mu_{\delta}\) are the proper motion components in the right ascension and declination directions, respectively, measured at the reference epoch year 2016. Figure 1: Location of the Gaia19dke event (red circle) is shown on the Cygnus - Lyra constellation map from www.frestarcharts.com. Also shown is the location of the Gaia16aye binary microlensing event (green square) from Wyrzykowski et al. (2020). ### Gaia photometry While _Gaia_ scans the sky, it revisits the same location on average within 30 days. Each transit typically provides two independent measurements separated by 106 minutes coming from the two fields of view of the spacecraft (see Gaia Collaboration et al. 2016 for details). As of May 2023, _Gaia_ has collected 191 measurements of Gaia19dke. The light curve from _Gaia_ is collected in the _Gaia_ broad-band \(G\) filter and exhibits multiple peaks, with the main peak reaching about 14.8 mag in August 2020. A table with photometric data gathered by _Gaia_ can be found in Table 6. GSA does not provide uncertainty on magnitudes in light curves for published events. We, therefore, used _Gaia_ DR3 photometric time-series statistics (mean \(G\)-band magnitude and its standard deviation) to derive the mean expected uncertainties as a function of magnitude. The nominal error for the Gaia19dke magnitude range was computed as around 0.008 mag (Gaia Collaboration et al. 2018). Table 6 presents the uncertainty estimated for _Gaia_ measurements, which were used throughout this work. ### Ground-based photometric follow-up Because the event was relatively bright, with \(G\sim\)15.5 mag at the baseline, it was possible to collect a vast number of follow-up observations using small-sized telescopes. The ground-based observations were carried out by a network of telescopes, including manually and robotically operated ones, listed in Table 4. To facilitate the coordination of observations and data processing, a web-based system called the Black Hole Target and Observation Manager (BHTOM1) was utilized, which is based on LCO's Target and Observation Manager (TOM) Toolkit (Volgenau et al. 2022). Footnote 1: [https://bhtom.space](https://bhtom.space) For each telescope, the acquired images underwent bias, dark, and flat calibration following each telescope's procedures, and the calibrated FITS images were uploaded in near-real-time to BHTOM. PSF photometry was performed using CCDPhot (e.g. Zielinski et al.
2020; Rybicki et al. 2022, while standardization was achieved using the Cambridge Photometric Calibration Server (CPCS), as detailed in (Zielinski et al. 2019). Observations were conducted across various filters in both the SDSS and Johnson-Kron-Cousins systems. To establish uniformity, the data were standardized to the Gaia Synthetic Photometry (GaiaSP) catalogue (Gaia Collaboration et al. 2023a), with automated matching of instrumental data to the closest filter available in GaiaSP. Table 5 lists the number of data points collected by each observatory, the time span of their data and the list of GaiaSP filters the observations were matched to. The table contains also the details on the data collected serendipitously for this target by the Zwicky Transient Factory (ZTF) Survey (Bellm et al. 2019) in \(g\) and \(r\) bands and provided by the IPAC service. The earliest follow-up started 21 days after the announcement of the event on the GSA web page. The first data point was taken on the night of 29/30 August 2019, with the 60 cm telescope in the Astronomical Station Vidojevica (ASV) of Astronomical Observatory, Serbia. The follow-up then continued for over 2000 days until the event reached the baseline level again around May 2023. The data obtained by the follow-up network are available for download from BHTOM page for Gaia19dke ([https://bhtom.space](https://bhtom.space)). In total, nearly 3000 data points were collected with the telescope network over a period of nearly 4 years. ### Spectroscopic follow-up In order to classify the object and to derive the properties of the source, Gaia19dke was also observed spectroscopically. The first spectrum was obtained close to the first brightness peak on December 11, 2019, with the Spectrograph for the Rapid Acquisition of Transients (SPRAT, Piascik et al. 2014) mounted on 2-m robotic Liverpool Telescope (LT, Steele et al. 2004) located in La Palma, Canary Islands, Spain. The spectrum was taken in the optical part of the electromagnetic window (400-800 nm) and low-resolution mode (R\(\sim\)350). It was reduced, and wavelength and flux were calibrated in a standard way by using an automated pipeline provided by the LT Team. The Xenon arc lamp was used to calibrate the spectrum in the wavelengths. SPRAT data have shown the typical spectrum for normal G-type stars with prominent Mg 5167-5184 A lines and Balmer series in absorption. No clear emission lines were registered, therefore, we do not observe any hints of stellar activity, variability, or the existence of circumstellar matter. Any of the features responsible for that was not registered in the SPRAT spectrum. Therefore, Gaia19dke was classified as a microlensing event candidate and further follow-up observations were planned. The Microlensing Observing Platform 2 automatically requested the spectroscopic monitoring for this target and a low-resolution spectrum (R\(\sim\)500) has been collected by the OMEGA collaboration on August 8, 2020 (the source was magnified by a factor 1.8 at this time, i.e. \(G=14.9\) mag), with the FLOYDS instrument mounted on the Las Cumbres Observatory 2-m telescope at the Siding Spring observatory (Brown et al. 2013a). The spectrum has been reduced with the LCO FLOYDS pipeline3. It confirmed the classification made based on SPRAT data showing absorption lines typical for a G-type star. 
Footnote 2: [https://mpo.lco.global](https://mpo.lco.global) Footnote 3: [https://lco.global/documentation/data/floyds-pipeline/](https://lco.global/documentation/data/floyds-pipeline/) Footnote 4: [https://www.lbto.org/](https://www.lbto.org/) The low-resolution spectra of Gaia19dke gathered by SPRAT and FLOYDS instruments are presented together in Fig. 2. Gaia19dke event reached a bright enough magnitude near its main peak around mid-2020 to be also observed with high-resolution spectroscopy. We used the Potsdam Echelle Polarimetric and Spectroscopic Instrument (PEPSI, Strassmeier et al. 2015) installed at the 2x8.4-m Large Binocular Telescope (LBT)4 located on Mt. Graham, Arizona, US. The data were taken on July 18, 2020, i.e., close to the maximum brightness of the event. The fibre diameter 300 \(\mu\)m as well as two cross-dispersers (CD) were used: III (blue arm) and V (red arm) simultaneously. We were able to obtain a high-dispersion spectrum with an S/N ratio of around 31 and resolution R\(\sim\)43 000, which covers the wavelength range 383 - 907 nm. It was calibrated by using the standard PEPSI software for stellar spectroscopy (SDS4PESI, (Ilyin 2000)), i.e., images were bias subtracted, flat-fielded, and then optimally extracted and normalized using a spline fit to the continuum. Due to the poor quality of the spectrum below 480 nm, for further analysis, we used part above this threshold. Footnote 4: [https://www.lbto.org/](https://www.lbto.org/) The spectrum from a high-resolution PEPSI spectrograph is presented in Fig. 3. In addition, the synthetic spectrum (_red_) generated based on the method described in Section 4 is over-plotted on the observed spectrum (_blue_). ### High-resolution imaging follow-up Gaia19dke was observed with the Gemini North 8-m telescope using the 'Alopeke speckle imaging instrument5 on 9 August 2020. 'Alopeke is a simultaneous two-channel EMCCD instrument that performs speckle interferometric imaging. Using narrow-band filters centred at 562 nm and 832 nm, the images are obtained with 60 msec integration times and collected in sets of 1000 such images/set. The final product from 'Alopeke imaging is a high-resolution image in each filter with an inner working angle at the diffraction limit, near 20 mas for the 8-m Gemini telescope, and covering a small field of view out to 1.2 arcsecs. Footnote 5: [https://www.gemini.edu/sciops/instruments/alopeke-zorro/](https://www.gemini.edu/sciops/instruments/alopeke-zorro/) The set of images was subjected to Fourier analysis in our standard reduction pipeline (Howell et al., 2011). Figure 4 shows the final 5-\(\sigma\) contrast curves in each filter and the 832 nm reconstructed speckle image. We find that the object at Gaia19dke is not resolved beyond a single point source, even down to the 20 mas inner working angle. ## 3 Photometric Microlensing Model The photometric data of Gaia19dke has been modelled with the single point source single lens microlensing model with annual parallax(e.g. Gould, 2000; Smith et al., 2002; Wyrzykowski et al., 2016; Rybicki et al., 2022; Kruszynska et al., 2021). We used open-source flexible software _MulensModel_(Poleski and Yee, 2019) for finding the model parameters. 
The parallax model is described with the following parameters: * \(t_{0}\), time of the minimal approach between the lens and the source; * \(u_{0}\), impact parameter, the minimal distance between the lens and the source in units of the Einstein Radius; * \(t_{E}\), the time-scale of the event, defined as the time to cross the Einstein Radius; * \(\pi_{E}\), vector of the microlensing parallax, decomposed into equatorial North \(\pi_{EN}\) and East \(\pi_{EE}\) components; * \(mag_{0}\), baseline magnitude(s), separately in each observing band, computed in _MulensModel_ from source flux; * \(f_{S}\), blending parameter(s), separately in each observing band, defines as the flux of the source over the total baseline flux, composed of source and blend(s) and/or lens light, computed in _MulensModel_ from source and blend fluxes; The microlensing parallax model has been fitted in a geocentric frame with a fixed \(t_{0,w}\) parameter, set to the time of the maximum of the light curve, hence very close to \(t_{0}\). To find the most Figure 4: Contrast curves for red and blue narrow-band filters obtained from speckle interferometric observations of Gaia19dke obtained on 2020 Aug.9 with β€˜Alopeke instrument at the Gemini telescope. The inset shows the combined set of images in an 832 nm filter. Figure 3: Spectrum of the Gaia19dke obtained on 18 July 2020 with LBT/PEPSI around the main peak of the event(_blue_) and the best-matching fit (_red_) synthesized for the specific parameters. The Ca II triplet (_top_) and H\(\alpha\) (_bottom_) region are presented. Figure 2: Low-resolution spectra of Gaia19dke obtained by LT/SPRAT (red points) and LCO/FLOYDS (black points) spectrographs. The grey parts of the plot denote the wavelength range with the telluric lines. The dashed lines correspond to the best-matching template spectra. likely model, we used Markov chain Monte Carlo (MCMC) implemented in the emcee package (Foreman-Mackey et al. 2013). Since _Gaia_ observed Gaia19dke from L2 point, we included the space-parallax factor in _MulensModel_. We used all photometric light curve data gathered by the end of May 2023, when the event reached its baseline magnitude. We first modelled _Gaia_ data only, as it covers the shape of the event densely and contains a couple of years of the baseline prior to the microlensing event. Table 1 contains the values of microlensing model parameters found when fitting _Gaia_-only data. Our procedure identified only one solution in the parameter space for negative \(u_{0}\). Figure 5 shows the _Gaia_ photometric data together with the best microlensing model with a parallax fit to that data. Subsequently, the microlensing model fitting was performed using the combined dataset of _Gaia_ observations, follow-up observations, and data from the Zwicky Transient Facility (ZTF). Given that all observations, acquired with a network of telescopes, underwent consistent calibration and standardization to GaiaSP bands, we were able to effectively utilize the entire collected dataset from all telescopes, a total of nearly 5000 data points. However, we excluded 30 points calibrated to \(u\), \(U\), and \(z\) filters, as they were erroneously matched to incorrect bands and exhibited clear outliers. The modelling has been carried out in each GaiaSP band separately. Table 1 shows the values of microlensing model parameters found for _Gaia_ and the follow-up data set combined. The baseline magnitude and blending parameters were found separately for each observatory and filter. 
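As an aside for readers unfamiliar with these tools, a minimal sketch of how a single-band parallax model of this kind could be set up with _MulensModel_ and sampled with emcee is shown below; the file name, starting values, and sampler settings are illustrative assumptions and do not reproduce the multi-band configuration used here (source and blend fluxes are fitted internally by _MulensModel_ when the \(\chi^{2}\) is evaluated).

```python
import numpy as np
import emcee
import MulensModel as mm

# Hypothetical light-curve file with columns: JD, magnitude, magnitude error.
t, mag, err = np.loadtxt("gaia19dke_G.dat", unpack=True)
data = mm.MulensData(data_list=[t, mag, err], phot_fmt="mag")

def chi2(theta):
    t_0, u_0, t_E, pi_EN, pi_EE = theta
    model = mm.Model(
        {"t_0": t_0, "u_0": u_0, "t_E": t_E,
         "pi_E_N": pi_EN, "pi_E_E": pi_EE, "t_0_par": 2459068.0},
        coords="19:25:58.68 +28:24:24.70",
    )
    model.parallax(earth_orbital=True)  # annual parallax due to Earth's orbit
    return mm.Event(datasets=[data], model=model).get_chi2()

def ln_prob(theta):
    return -0.5 * chi2(theta)

# Illustrative starting point for the MCMC walkers (not the published solution).
start = np.array([2459065.0, -0.61, 160.0, -0.09, -0.19])
p0 = start + 1e-4 * np.random.randn(32, len(start))
sampler = emcee.EnsembleSampler(32, len(start), ln_prob)
sampler.run_mcmc(p0, 2000)
```

Posterior medians and uncertainties such as those reported in Table 1 would then be read off the flattened chains after discarding a burn-in phase.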
There was also only one parallax solution found for this data set. Figure 6 shows the best microlensing model and its residuals fitting the _Gaia_ and follow-up observations. The parameters obtained in the two models agree within the margin of error, but the model constructed using follow-up photometric observations exhibits narrower error bars, a factor of 3 to 5 better, which translates to more precise parameter estimates and improved accuracy. In order to achieve more continuous samples from the parameter space, in the modelling process we allowed the blending parameter \(f_{s}\) to be greater than one. Samples with \(f_{s}\) greater than one should be treated as if there is no blending at all. In both models, the value of the blending parameter is very close to 1, in particular, for _Gaia_ band, \(f_{s}=1\) within the margin of error. Other bands yielded slightly lower values of \(f_{s}\) (e.g. I(GaiaSP)), which can be attributed to low spatial resolution of instruments collecting these data and the observed blending is caused by nearby stars falling within their disks of Point Spread Function. ## 4 Source star In order to determine the parameters of the lensing object, the initial step involves deducing the distance and the spectral type of the source star. Our study is based on the assumption that the source star is single since there are no signs of its binarity in the microlensing model. Moreover, according to _Gaia_ EDR3, the closest object is 1.6 arcsecs away and is significantly fainter. ### Atmospheric parameters The parameters of the source star in the Gaia19dke event were derived from spectroscopic follow-up datasets, high-resolution data obtained with LBT/PEPSI and low-resolution data from two instruments: LT/SPRAT and LCO/FLOYDS. The spectroscopic analysis of absorption lines visible in high-resolution PEPSI spectrum was performed first. We used _iSpec6_ framework for spectral analysis which integrates several well-known radiative transfer codes (Blanco-Cuaresma et al. 2014; Blanco-Cuaresma 2019). In our case, to determine atmospheric parameters (i.e., effective temperature \(T_{\rm eff}\), surface gravity \(\log g\), metallicity [M/H], microturbulence velocity \(v_{\rm t}\)), the SPECTRUM7 code was used. We generated a set of synthetic spectra based on a well-known grid of MARCS atmospheric models (Gustafsson et al. 2008) and solar abundances taken from Grevesse et al. (2007). The synthetic spectra were fitted to the observational spectrum for selected regions containing H\(\alpha\), Ca, Mg, Fe, Na, and Ti atomic lines. The best-matching solution was found for the following parameters: \(T_{\rm eff}=(5251\pm 25)\) K, \(\log g=(3.06\pm 0.02)\), \(\rm[M/H]=(0.91\pm 0.03)\) dex and \(v_{\rm t}=(1.23\pm 0.07)\) km s\({}^{-1}\). According to these parameters, we assume that our source star is a metal-rich G5-type giant. Moreover, no \begin{table} \begin{tabular}{l l l} \hline Parameter & _Gaia_-only & _Gaia_+FUP \\ \hline \hline \(t_{\rm 0,par}-2450000\). [JD] & - & 9068 \\ \(t_{0}-2450000\). 
[JD] & \(9065.39^{+0.82}_{-0.81}\) & \(9064.0639^{+0.33}_{-0.33}\) \\ \(t_{\rm E}\) & \(159.48^{+3.43}_{-2.60}\) & \(162.47^{+2.68}_{-1.90}\) \\ \(u_{0}\) & -0.6115\({}^{+0.0236}_{-0.016}\) & -0.6100\({}^{+0.0160}_{-0.0112}\) \\ \(\pi_{\rm EN}\) & -0.0936\({}^{+0.0021}_{-0.0018}\) & -0.0911\({}^{+0.0014}_{-0.0012}\) \\ \(\pi_{\rm EE}\) & -0.1972\({}^{+0.0042}_{-0.0037}\) & -0.1923\({}^{+0.0031}_{-0.0024}\) \\ \(mag_{0}\)\(G\) (Gaia) & \(15.5052^{+0.0007}_{-0.0006}\) & \(15.5059^{+0.0005}_{-0.0004}\) \\ \(f_{\rm S}\)\(G\) (Gaia) & \(1.0045^{+0.0437}_{-0.0064}\) & \(0.9947^{+0.0301}_{-0.0421}\) \\ \(mag_{0}\)\(B\)(GaiaSP) & - & \(17.2458^{+0.0007}_{-0.0006}\) \\ \(f_{\rm S}\)\(B\)(GaiaSP) & - & \(0.9162^{+0.0280}_{-0.0397}\) \\ \(mag_{0}\)\(g\)(GaiaSP) & - & \(16.6105^{+0.0014}_{-0.0014}\) \\ \(f_{\rm S}\)\(g\)(GaiaSP) & - & \(0.9832^{+0.0033}_{-0.0426}\) \\ \(mag_{0}\)\(i\)(GaiaSP) & - & \(14.9657^{+0.0298}_{-0.0419}\) \\ \(f_{\rm S}\)\(i\)(GaiaSP) & - & \(0.9685^{+0.028}_{-0.0419}\) \\ \(mag_{0}\)\(I\)(GaiaSP) & - & \(14.4646^{+0.0008}_{-0.0007}\) \\ \(f_{\rm S}\)\(I\)(GaiaSP) & - & \(0.8624^{+0.0205}_{-0.0375}\) \\ \(mag_{0}\)\(r\)(GaiaSP) & - & \(15.4483^{+0.0010}_{-0.0010}\) \\ \(f_{\rm S}\)\(r\)(GaiaSP) & - & \(0.9324^{+0.0289}_{-0.0409}\) \\ \(mag_{0}\)\(R\)(GaiaSP) & - & \(15.2199^{+0.0007}_{-0.0007}\) \\ \(f_{\rm S}\)\(R\)(GaiaSP) & - & \(0.9849^{+0.0301}_{-0.0427}\) \\ \(mag_{0}\)\(V\)(GaiaSP) & - & \(15.9731^{+0.0007}_{-0.0007}\) \\ \(f_{\rm S}\)\(V\)(GaiaSP) & - & \(0.9987^{+0.0030}_{-0.0433}\) \\ \(mag_{0}\)\(g\)(ZTF) & - & \(16.5351^{+0.0009}_{-0.0009}\) \\ \(f_{\rm S}\)\(g\)(ZTF) & - & \(0.9690^{+0.0297}_{-0.0421}\) \\ \(mag_{0}\)\(r\)(ZTF) & - & \(15.3947^{+0.0009}_{-0.0008}\) \\ \(f_{\rm S}\)\(r\)(ZTF) & - & \(0.9777^{+0.0250}_{-0.0044}\) \\ \(\chi^{2}\) & 556.7 & 3621.64 \\ \hline \hline \end{tabular} \end{table} Table 1: Microlensing parallax model for _Gaia_-only data and _Gaia_ with follow-up observations. absorption lines from a potential second component are visible in PEPSI data. Fig. 3 shows the result of this analysis, i.e., PEPSI spectrum and synthetic fit for Ca II triplet and H\(\alpha\) region are presented. After that, we modelled the spectroscopic data with templates on the full wavelength range. This approach is complementary to the analysis of absorption lines presented above. Following the method of Bachelet et al. (2022), we fitted the FLOYDS and SPRAT spectra with templates from (Kurucz, 1993) with the Spyctres pipeline8. The new version of Spyctres includes the updated extinction law from Cardelli et al. (1989a) to the one of Wang and Chen (2019). In short, the latter combines an adjustment of the Cardelli et al. (1989a) law with a fixed total-to-selective extinction ratio \(R_{V}=A_{V}/E(B-V)=3.1\) and a power-law index \(\alpha=2.07\) for the near-IR regions. The data and Figure 5: Light curve of Gaia19dke microlensing event with data only from _Gaia_, spanning from JD = 2458062 to JD = 2460062. The black line is the mode of the chains from the MCMC model. The bottom panel shows the residuals with respect to the mode solution. Figure 6: Light curve of Gaia19dke microlensing event with data from _Gaia_ and follow-up observations, spanning from JD = 2458062 to JD = 2460062. Black line is the mode of the chains from the MCMC model. The bottom panel shows the residuals with respect to the mode solution. results are presented in the Fig. 2. 
The template-matching analysis reveals that the source is a red giant, with an effective temperature \(T_{eff}=(5000\pm 200)\) K, a sun-like metallicity \([M/H]=(0.0\pm 0.3)\) dex, a surface gravity log \(g=(2.2\pm 0.5)\), an angular radius \(\theta_{\rm n}=(7.9\pm 0.4)\ \mu mas\) and an absorption \(A_{\nu}=(1.6\pm 0.2)\) mag. The results obtained from absorption line analysis and template-matching are in good agreement, except the metallicity, and are presented in Tab. 2. ### Source distance One of the simplest and most popular ways to determine the distance to the star is to use the Bailer-Jones et al. (2021) catalogue, where distances were calculated based on the _Gaia_ EDR3 and priors on the Galaxy. Geometric distance, based on the parallax and its uncertainties, gives the distance to Gaia19dke source star of \(7.6<D_{s}<11.9\) kpc. The photo-geometric value, which is based on the parallax, the colour as well as the observed magnitude of the star, gives the distance of \(6.7<D_{s}<9.2\) kpc. We note here, that the values based on Gaia parallax measurement in case of microlensing events should be considered with great care, as the parallax measurement can be affected by the light of the lens, if luminous, or any other blends in the line of sight. Moreover, if the parallax measurement obtained from the astrometric time-series collected at the time of the event, the astrometric data can be also affected by the astrometric microlensing effect (e.g. Rybicki et al., 2018; Sahu et al., 2022; Jabloriska et al., 2022). Therefore, in order to verify the distance to the source star, we use the spectroscopic data and apply the well-known spectro-photometric equation: \[5\log D_{S}=V-M_{V}+5-A_{V}, \tag{3}\] where \(D_{S}\) is the distance to source star, \(V\) is the apparent magnitude, \(M_{V}\) is the absolute magnitude and \(A_{V}\) is the interstellar extinction. In the present work, we used the atmospheric parameters based on the high-resolution PEPSI spectrum where the star is classified as G5 giant, while the extinction value \(A_{V}=1.6\pm 0.2\) mag was taken from the template matching analysis of low-resolution spectra. The typical absolute magnitude and error for G5 giant star are \(M_{V}=(1.0\pm 0.5)\) mag (Straizys, 1992). Together with the apparent magnitude of Gaia19dke \(V=16.101\) mag (Stassun et al., 2019) and accepting extinction value \(A_{V}=1.6\) mag determined from the low-resolution spectra and taking into account the nonlinearity of the transformation and asymmetry of the distance, log\({}_{10}(D_{S})\), we have determined the distance to the source star of Gaia19dke \(D_{s}=(4.9\pm 1.2)\) kpc which is a factor of two different from Bailer-Jones et al. (2021)'s values. It is in good agreement with the template matching analysis that points towards a source \begin{table} \begin{tabular}{l c c} \hline \hline Parameter & Line fitting & Template Matching \\ \hline \(T_{\rm eff}\) [K] & \(\bar{5}251\pm 25\) & \(5000\pm 200\) \\ \(\log g\) & \(3.06\pm 0.02\) & \(2.2\pm 0.5\) \\ \([\)M\(/\)H\(]\) [dex] & \(0.91\pm 0.03\) & \(0.0\pm 0.3\) \\ \(v_{\rm t}\) [km/s] & \(1.23\pm 0.07\) & – \\ \(A_{\nu}\) [mag] & – & \(1.6\pm 0.2\) \\ \(\theta_{\rm t}\) [\(\mu as\)] & – & \(7.9\pm 0.4\) \\ \hline \hline \end{tabular} \end{table} Table 2: Summary of the derived parameters for the source of Gaia19dke event. Averaged solutions of line fitting and template matching are presented. 
Figure 8: Chi-squared contours plotted as a function of the parameters fitted in the MCMC fit for the best model for the Gaia19dke event obtained after including the follow-up data. Black, dark grey and light grey solid colours represent \(1\sigma\), \(2\sigma\), and \(3\sigma\) confidence regions respectively. Black dots represent solutions outside of the \(3\sigma\) confidence level. Cyan lines and squares mark the median solution reported in Table 1. The plot has been created using the corner python package by Foreman-Mackey (2016). Figure 7: Chi-squared contours plotted as a function of the parameters fitted in the MCMC fit for the best model for the Gaia19dke event obtained with _Gaia_-only data. Black, dark grey and light grey solid colours represent \(1\sigma\), \(2\sigma\), and \(3\sigma\) confidence regions respectively. Black dots represent solutions outside of the \(3\sigma\) confidence level. Cyan lines and squares mark the median solution reported in Table 1. The plot has been created using the corner python package by Foreman-Mackey (2016). distance of \(D_{s}=4.3^{+3.3}_{-1.1}\) kpc assuming a source age of 1 Gyr and using the isochrones from Bressan et al. (2012) and Marigo et al. (2013). Because of the significant difference between our spectroscopic distance and the literature values from Bailer-Jones et al. (2021), we should critically evaluate which value is the most realistic and which one should be used for determining the lens parameters. To independently verify the source star parameters, we apply other available methods based on accessible databases. We used infrared photometry from the 2MASS survey (Skrutskie et al. 2006), where the source star's measured magnitudes are \(J=(13.348\pm 0.024)\) mag, \(H=(12.766\pm 0.024)\) mag and \(K_{s}=(12.550\pm 0.023)\) mag. According to Straizys & Lazauskaite (2009), the intrinsic colour of a G5 giant star should be \((J-K_{s})_{0}=0.49\) mag. For the source star, the colour excess and interstellar extinction were calculated with the following equations (Dutra et al. 2002): \[E_{J-K_{s}}=(J-K_{s})_{\rm obs}-(J-K_{s})_{0},\ \ \ \ A_{K_{s}}=0.67\,E_{J-K_{s}}, \tag{4}\] where \(E_{J-K_{s}}\) is the colour excess, \((J-K_{s})_{\rm obs}\) is the observed colour, \((J-K_{s})_{0}\) is the intrinsic colour, and \(A_{K_{s}}\) is the interstellar extinction in the \(K_{s}\) band. According to Eq. 4, the estimated extinction value for this star is \(A_{K_{s}}=0.21\) mag. The extinction value \(A_{K_{s}}\) was transformed to \(A_{V}\) with the following relation (Cardelli et al. 1989b; Dutra et al. 2002): \[A_{V}=8.3\,A_{K_{s}}, \tag{5}\] The estimated value of \(A_{V}=1.7\pm 0.3\) mag is in excellent agreement with the value determined by template-matching based on low-resolution spectra, \(A_{V}=1.6\pm 0.2\) mag. Fig. 9 shows the location of the source star in the 2MASS \((J-H)_{0}\) vs. \((H-K)_{0}\) diagram for the observed position and for the position de-reddened according to this extinction value. The intrinsic red giant's branch is shown as a black line. The de-reddened star position on the diagram shows acceptable agreement with the extinction and spectral class determined based on the low- and high-resolution spectra collected for Gaia19dke. Another method that allows us to verify the extinction was proposed by Majewski et al. (2011) and is based on the combined 2MASS and Spitzer colour index \(H\)-[4.5], which for most F-G-K stars is close to zero. Here [4.5] is the magnitude at 4.5 \(\mu\)m of the Spitzer IRAC system. Since Spitzer measurements of this star are not available, we apply the WISE (Wright et al.
2010) system, taking into account that a direct comparison of the WISE \(W2\) band (mean wavelength 4.6 \(\mu\)m) with Spitzer [4.5] shows little scatter (Jarrett et al. 2011). Using the measured WISE magnitude \(W2=(12.524\pm 0.027)\) mag for the source star, the interstellar extinction was calculated with the equation: \[A_{K_{s}}=0.918\,(H-W2-0.08), \tag{6}\] In this way, the estimated extinction value \(A_{K_{s}}=0.149\) mag is 0.06 mag smaller than the one previously determined using 2MASS only. We conclude that the extinction value \(A_{V}=1.6\) mag determined from the spectroscopic analysis is consistent with the values calculated with the different methods and databases. As a distance check, we also use 2MASS photometry. We again apply the spectro-photometric method, but use the 2MASS \(K_{s}\) band, for which the distance is determined with the following equation: \[5\log D_{S}=K_{s}-M_{K_{s}}+5-A_{K_{s}}, \tag{7}\] The most uncertain quantity in Eq. 7 is \(M_{K_{s}}\) for G5-type giants. We assume a value of \(-1.5\) mag, since the location in the \(M_{K_{s}}\) vs. \(J-K\) HR diagram lies at the left edge of the Red Clump Giant (RCG) position (Veltz et al. 2008). We do not exclude that the real \(M_{K_{s}}\) may vary by more than \(\pm 0.5\) mag. Using 2MASS photometry only as a verification, we determine a distance to the source star of \(D_{s}=(6.0\pm 1.4)\) kpc. As demonstrated above, the spectro-photometric method based on optical and infrared data yielded a source distance similar to the one obtained from the spectra. We assume that the optically determined distance is more reliable than the infrared one because, in the 2MASS colour-colour diagram, the star coincides with the expected position of a G5 giant only within the error limits, which can be explained by the measurement errors. We cannot exclude some variability of the source (Henry et al. 2000), since it can change the observed magnitude and colour and, consequently, the source star location in the 2MASS colour-colour diagram. Throughout the work, we, therefore, use the source distance determined with the PEPSI spectrum, \(D_{s}=(4.9\pm 1.2)\) kpc. Figure 9: Colour-colour \((J-H)_{0}\) vs. \((H-K)_{0}\) diagram for the intrinsic red giant's branch (black line). Spectral classes, corresponding to the intrinsic colours, are indicated close to the line. The value for Gaia19dke is plotted as a red point with errors. The position de-reddened and shifted according to the extinction value \(A_{K_{s}}=0.21\) mag is shown as a blue point with errors. ## 5 Lensing object The microlensing model found for Gaia19dke (Section 3) indicates no additional light in the event apart from the source. This is encompassed in the blending parameters derived for each photometric band, as listed in Table 1. Blending can originate from both the lens itself as well as any star located in the close vicinity of the event and unresolved by the photometry. Gaia19dke is located in the Galactic Disk, where the stellar density is significantly lower than in typical microlensing fields in the Galactic Bulge, hence we do not expect any additional source of light close to it, which is confirmed with the high-angular resolution imaging with 'Alopeke (Sec. 2.4). In order to constrain the nature of this dark lensing object, hence its mass and distance, we adopted the method outlined in Wyrzykowski et al. (2016), Mroz & Wyrzykowski (2021), Kruszynska et al. (2021), and Kaczmarek et al. (2022), and explained in detail in Howil et al. (in prep.).
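The core of this procedure, described in the next paragraph, can be sketched as follows; the weight shown is a simplified stand-in for the full Galactic-model priors (mass function, lens density, and velocity distribution), and the comparison of the implied main-sequence lens brightness with the blending limit is omitted for brevity.

```python
import numpy as np

KAPPA = 8.144  # mas / M_sun

def lens_posterior_samples(t_E, pi_E, n=100_000, seed=0):
    """t_E [days] and pi_E: arrays of posterior samples from the light-curve model."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(t_E), n)
    tE_yr = t_E[idx] / 365.25
    piE = pi_E[idx]
    mu_rel = rng.uniform(0.0, 30.0, n)        # mas/yr, flat prior on relative proper motion
    D_S = rng.normal(4.9, 1.2, n)             # kpc, spectroscopic source distance
    theta_E = mu_rel * tE_yr                  # mas
    M_L = theta_E / (KAPPA * piE)             # M_sun
    D_L = 1.0 / (theta_E * piE + 1.0 / D_S)   # kpc
    # Simplified prior weight: mass-function slope -2.35 and a crude geometry cut;
    # the full analysis uses the Galactic-model priors of Skowron et al. (2011).
    w = np.zeros(n)
    ok = (M_L > 0) & (D_L > 0) & (D_L < D_S)
    w[ok] = M_L[ok] ** (-2.35)
    return M_L, D_L, w
```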
The microlensing parameters and their samples from MCMC obtained in previous steps, described in Section 3, were combined with priors on the mass, distance, and velocity distribution of stars in the Galaxy for the lens and the source. Blending parameters \(f_{\rm S}\) of for both _Gaia_-only and _Gaia_ with follow-up are close to 1, which means _Gaia_ registers the movement and position of the source star. We have thus adopted the proper motion for the source star as published in _Gaia_ EDR3. For the distance, we used the value obtained from spectral analysis, described in Section 4. In each iteration, we have drawn from a Gaussian distribution of distances with a mean of 4.9 kpc and a spread of 1.2 kpc. This method requires also knowing the value of the extinction \(A_{\rm G}\) towards the lens, to constrain the light coming from the lens if it was an MS star. We used the value presented in _Gaia_ DR2 catalogue, which lists \(A_{\rm G}\) under a_g_val in gaia_source table and is equal to \(A_{\rm G}=0.8043\) mag. We assume this value to be the maximal possible extinction in the direction towards the lens. Finally, we had to assume the relative proper motion of the lens and source \(\mu_{\rm rel}\). For this, we drew a random number between 0 and 30 mas year\({}^{-1}\) (Mroz & Wyrzykowski 2021). This allowed us to find the distance and mass to the lens in combination with the \(\pi_{\rm E}\) and \(\tau_{\rm E}\) obtained from the posterior distribution of parameters of the best-fitting microlensing model solution and the distance mentioned above to the source. Knowing the mass and distance of the lens, we could derive the observable brightness of the lens as if it was the MS star using empirical data from Pecaut & Mamajek (2013)9 and compare it to the constraints on the brightness of the lens we obtained from microlensing model. We then computed a weight using a set of priors from Skowron et al. (2011) for all the pairs of lens mass \(M_{\rm L}\) and lens distance \(D_{\rm L}\). For the mass function prior we used the value of -2.35, following the classical mass function for stars (Kroupa & Weidner 2003). Footnote 9: [http://www.pas.rochester.edu/](http://www.pas.rochester.edu/)\({}^{\sim}\)emamajek The results of this analysis are shown in Figures 10 and 11. The histograms of the distribution of the lens mass and lens distance are visible in Figures 12 and 13. Table 3 contains the summary of the median values of the mass, distance, blend light, and lens light in the case of an MS star lens. For _Gaia_-only data model the median mass is \(M_{\rm L}=0.50^{+0.31}_{-0.40}\)\(M_{\odot}\) and distance \(D_{\rm L}=3.08^{+0.99}_{-2.45}\) kpc. For combined _Gaia_ and follow-up data, the median mass and distance are \(M_{\rm L}=0.51^{+3.07}_{-0.40}\)\(M_{\odot}\) and \(D_{\rm L}=3.05^{+4.10}_{-2.42}\) kpc, respectively. Modes of the distributions are, respectively, \(M_{\rm L}=0.27\)\(M_{\odot}\), \(D_{\rm L}=2.31\) kpc, for G model and \(M_{\rm L}=0.28\)\(M_{\odot}\), \(D_{\rm L}=2.26\) kpc, for G+F model. Figure 11 contains the comparison of the light of the blend obtained from the microlensing model and the light of the lens if the lens is the MS star. Lines divide the plot area into two cases: above both lines prevail the scenario where the MS is justified given the blending. Below the lines, the light of the lens as an MS star is greater than the actual light of the lens we get from the microlensing model, suggesting a dark lens scenario. 
The solid line denotes the scenario in which the value of the extinction is equal to the one for the source, while the dashed line assumes no extinction to the lens at all. The dark-lens scenario is preferred with 57% to 63% probability (for the _Gaia_-only solution) and 58% to 64% for the G+F model, with the range of probabilities resulting from a range of possible extinction values to the lens. The addition of extensive ground-based follow-up observations improved the determination of all parameters by a factor of about 3. In particular, the improvement is the strongest in the case of the impact parameter \(u_{0}\) and the Einstein time-scale \(t_{E}\), while the uncertainty on the parallax vector is about 0.8% with the follow-up data. More importantly, the blending parameter for _Gaia_ data has been determined more accurately when including the follow-up data, from 4% to around 1%, which additionally supports the dark-lens scenario. Blending parameters determined for all other modelled bands additionally confirm there is no or very little extra light apart from the source, with values of the blending very close to 1. Combining this information with the non-detection of any additional sources in the high-resolution image from 'Alopeke strengthens the dark or very faint lens case. We decided to use _Gaia_'s blending parameter in the lens nature determination in Section 5 because the GSA data cover both sides of the light curve, both its rising and declining parts as well as the baseline before the event, while the other data sets covered only the central part of the event. Microlensing in Gaia19dke allows us to determine the lens mass and its distance only because we measure the microlensing parallax and we use the priors on the lens proper motion as well as its distance and the slope of the mass function. The results for the mass and distance of the lens are summarised in Table 3; however, it should be noted that all the resulting posterior distributions are non-symmetric. Nevertheless, when using the median values for mass and distance for either solution, we find the lens would need to be an M1V spectral-type star if it was a main sequence object. Placed at the median distance, it would shine at 21.3 mag, or 22.1 mag if all the extinction measured to the source was in front of the lens. When compared with the amount of blending we measure in the light curve and its microlensing model, we can rule out such a scenario of a luminous lens. For a more massive lens, its distance would be even shorter, yielding an increase in the brightness of the alleged main sequence star. Only masses lower than the median could be explained within the observed bounds on the blended light. The total integral over the parameter space yields between 57 and 64% dark lens probability for both the G and G+F models, the range resulting from including none or all of the extinction to the lens light. The high angular-resolution image obtained on 2020 Aug. 9 with the 'Alopeke instrument at the Gemini telescope does not show any visible additional object within 20 mas. From the long-term microlensing light curve analysis, which started on the 8th of August 2019 and involved a massive ground telescope follow-up campaign that allowed us to collect a very detailed light curve for Gaia19dke, we also did not detect any binary lens signatures, typically visible as deviations from the standard lensing curve and sharp caustic crossings.
This strengthens the explanation of the shape of the light curve as microlensing by a single lens, affected by the parallax effect due to the Earth's orbit. We, therefore, suggest the lensing event could have been caused by a stellar remnant. Stellar evolution theory predicts that White Dwarfs (WDs) are the most common stellar remnants in the Galaxy. However, Figure 12: Probability density plot for the mass of the lens for G+F solution. The solid line marks the median and the dashed line marks the mode. The filled red area represents the 95% confidence interval. Figure 13: Probability density plot for distance to the lens for G+F solution. The solid line marks the median, and the dashed line marks the mode. The filled red area represents the 95% confidence interval. \begin{table} \begin{tabular}{l c c} \hline Parameter & G & G+F \\ \hline \hline \(G_{H}\) [mag] & \(ND\) & \(>22.6\) \\ \hline \hline \(M_{\rm L}\) [\(M_{\odot}\)] & \(0.50^{+3.01}_{-0.40}\) & \(0.51^{+3.07}_{-0.40}\) \\ \(D_{\rm L}\) [kpc] & \(3.08^{+4.09}_{-2.45}\) & \(3.05^{+4.10}_{-2.42}\) \\ \(\theta_{E}\) [mas] & \(0.87^{+5.35}_{-0.68}\) & \(0.90^{+5.37}_{-0.70}\) \\ Prob(DL) & 57.2\%-63.4\% & 58.4\%-64.5\% \\ \(\rm SpT_{MS}\) & M1V & M1V \\ \(G_{\rm MS}\)[mag] & 22.1-21.3 & 22.1-21.3 \\ \hline \end{tabular} 3 \end{table} Table 3: Lens masses \(M_{\rm L}\), distances \(D_{\rm L}\) and size of the Einstein Radius \(\theta_{E}\) for the microlensing solutions. it is important to notice that, because of low brightness, the detection of WD is challenging. The majority of known WDs were found within the around 100 pc Gentile Fusillo et al. (2019), consequently, a full understanding of the WD population is far from complete. According to Takahashi et al. (2013) the upper mass limit for a WD is 1.367 \(M_{\odot}\), confirmed with the recent discovery of 1.35 \(M_{\odot}\) WD (Caiazzo et al., 2021). The most common mass of WD, however, falls within the range of 0.6\(M_{\odot}\) - 0.7\(M_{\odot}\)(McCleery et al., 2020). The most probable mass of the lens in our models is around 0.5 \(M_{\odot}\), making the WD option most feasible. However, the possible mass range for the lens (Fig. 12) also spans to larger masses, hence we can not rule out even a nearby neutron star scenario. Gaia19dke event is an excellent example of microlensing events for which _Gaia_'s astrometric time-series will provide an actual measurement of the lens mass and distance through measurement of a tiny displacement of the source star due to microlensing (Dominik & Sahu, 2000; Belokurov & Evans, 2002). In the case of non-blended events like this one, the shift in the position of the source is of the order of the size of the Einstein Radius. Using Galaxy priors we estimate this size to be about 1 mas, hence easily detectable in the _Gaia_ astrometric data (Rybicki et al., 2018; Jablornska et al., 2022; Wyrzykowski et al., 2023). ## 7 Conclusions In this work, we presented the investigation and analysis of a very long multi-peak microlensing event Gaia19dke located in the Galactic Disk, discovered by the _Gaia_ space satellite. The event exhibited a microlensing parallax effect perturbed by the Earth's orbital motion. The investigation is based on _Gaia_ data and ground follow-up photometry and spectroscopy follow-up observations. 
We determined the source star distance to \(D_{S}=(4.9\pm 1.2)\) kpc and we estimated the lens mass of \(M_{L}=(0.50^{+3.0}_{-0.40})M_{\odot}\) and its distance of \(D_{L}=(3.05^{+4.10}_{-2.42})\) kpc for the model including both _Gaia_ and ground-based data. Since essentially all of the detected light is coming from the source, a possible explanation is that the lens is a dark remnant candidate, most likely a single WD star, but a neutron star can also be considered. The conclusive answer to the question on the nature of the lens will come with the _Gaia_ astrometric time-series data to be released within DR4 (part until mid-2019) and DR5 (all remaining data). Additionally, the high-resolution AO-assisted observations of the source star in about a decade should provide strong confirmation on the dark lens in case of a non-detection of the lens(e.g. Blackman et al., 2021). ## Acknowledgments This work is supported by Polish NCN grants: Daina No. 2017/27/L/ST9/03221, grant No. S-LL-19-2 of the Research Council of Lithuania, Harmonia No. 2018/30/M/ST9/00311, Preludium No. 2017/25/N/ST9/01253, Opus No. 2017/25/B/ST9/02805 and MNiSW grant DI/WK/2018/12. This project used data obtained via BHTOM ([https://bhtom.space](https://bhtom.space)), which has received funding from the European Union's Horizon 2020 research and innovation program under grant agreements No. 730890 and 101004719. We thank LT Support Astronomers for their help with observations and data reduction. HHE also thanks TUBITAK National Observatory for partial support in using the T100 telescope with project number 21AT100-1799 (and our sincere thanks to the whole of humanity that came to the aid of the earthquake disaster in Turkriye). Observations were carried out under OPTICON programmes XOL19B040 (PI: P. Zielinski). The Liverpool Telescope is operated on the island of La Palma by Liverpool John Moores University in the Spanish Observatorio del Roque de Los Muchachos of the Instituto de Astrofisica de Canarias with financial support from the UK Science and Technology Facilities Council. We acknowledge ESA _Gaia_, DPAC and the Photometric Science Alerts Team ([http://gsaweb.ast.cam.ac.uk/alerts](http://gsaweb.ast.cam.ac.uk/alerts)). This paper made use of the Whole Sky Database (wsdb) created by Sergey Koposov and maintained at the Institute of Astronomy, Cambridge by Sergey Koposov, Vasily Belokurov and Wyn Evans with financial support from the Science & Technology Facilities Council (STFC) and the European Research Council (ERC), with the use of the Q3C software ([http://adsabs.harvard.edu/abs/2006AGPC..351..735K](http://adsabs.harvard.edu/abs/2006AGPC..351..735K)). The LBT is an international collaboration among institutions in the United States, Italy and Germany. LBT Corporation partners are The University of Arizona on behalf of the Arizona university system; Istituto Nazionale di Astrofisica, Italy; LBT Beteligungsgesellschaft, Germany, representing the Max-Planck Society, the Astrophysical Institute Potsdam, and Heidelberg University; The Ohio State University, and The Research Corporation, on behalf of The University of Notre Dame, University of Minnesota and the University of Virginia. Some of the observations in the paper made use of the High-Resolution Imaging instrument 'Alopeke obtained under Gemini LLP Proposal Number: GN/S-2021A-LP-105. 'Alopeke was funded by the NASA Exoplanet Exploration Program and built at the NASA Ames Research Center by Steve B. Howell, Nic Scott, Elliott P. Horch, and Emmett Quigley. 
Alopeke was mounted on the Gemini North (and/or South) telescope of the International Gemini Observatory, a program of NSF's OIR Lab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. on behalf of the Gemini partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigacion y Desarrollo (Chile), Ministerio de Ciencia, Tecnologia e Innovacion (Argentina), Ministerio da Ciencia, Tecnologia, Inovacoes e Comunicacoes (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea). T.G. was supported by the Scientific Research Projects Coordination Unit of Istanbul University, project number: FBG-2017-23943 and the Turkish Republic, Presidency of Strategy and Budget project, project number: 2016K121370." G. Damljanovic and M. Stojanovic acknowledge support by the Astronomical station Vidojevica, funding from the Ministry of Science, Technological Development and Innovation of the Republic of Serbia (contract No. 451-03-47/2023-01/200002), by the EC through project BELISSIMA (call FP7-REGPOT-2010-5, No. 265772), the observing and financial grant support from the Institute of Astronomy and Rozhen NAO BAS through the bilateral SANU-BAN joint research project "GAIA astrometry and fast variable astronomical objects", and support by the SANU project F-187. Adam Popowicz was responsible for automation and running remote observations at Otivar observatory and was supported by grant BK-236/RAu-11/2023. YT acknowledges the support of the DFG priority program SPP 1992 "Exploring the Diversity of Extrasolar Planets" (TS 356/3-1). Josep Manel Carrasco was (partially) supported by the Spanish MICIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe" by the "European Union" through grant PID2021-122842OB-C21, and the Institute of Cosmos Sciences University of Barcelona (ICCUB, Unidad de Excelencia 'Maria de Maeztu') through grant CEX2019-000918-M. The Joan Oro Telescope (TJO) of the Montsee Observatory (OdM) is owned by the Catalan Government and operated by the Institute for Space Studies of Catalonia (IEEC). This work was funded by ANID, Millennium Science Initiative, ICN12_009. Supachai Awiphan was supported by a National Astronomical Research Institute of Thailand (NARIT) and Thailand Science Research and Innovation (TSRI) research grant. Nawapon Nakharutiai acknowledges the support of Chiang Mai University. This research is partially supported by the Optical and Infrared Synergetic Telescopes for Education and Research (OISTER) program funded by the MEXT of Japan. AF is supported by JSPS KAKENHI Grant Number JP17H02871. RFJ acknowledges funding by ANID's Millennium Science Initiative through grant ICN12_009, awarded to the Millennium Institute of Astrophysics (MAS), and by ANID's Basal project FB210003.
2309.17351
Hypergraphs in LHC Phenomenology -- The Next Frontier of IRC-Safe Feature Extraction
In this study, we critically evaluate the approximation capabilities of existing infra-red and collinear (IRC) safe feature extraction algorithms, namely Energy Flow Networks (EFNs) and Energy-weighted Message Passing Networks (EMPNs). Our analysis reveals that these algorithms fall short in extracting features from any $N$-point correlation that isn't a power of two, based on the complete basis of IRC safe observables, specifically C-correlators. To address this limitation, we introduce the Hypergraph Energy-weighted Message Passing Networks (H-EMPNs), designed to capture any $N$-point correlation among particles efficiently. Using the case study of top vs. QCD jets, which holds significant information in its 3-point correlations, we demonstrate that H-EMPNs targeting up to N=3 correlations exhibit superior performance compared to EMPNs focusing on up to N=4 correlations within jet constituents.
Partha Konar, Vishal S. Ngairangbam, Michael Spannowsky
2023-09-29T15:57:28Z
http://arxiv.org/abs/2309.17351v2
# Hypergraphs in LHC Phenomenology - The Next Frontier of IRC-Safe Feature Extraction ###### Abstract In this study, we critically evaluate the approximation capabilities of existing infra-red and collinear (IRC) safe feature extraction algorithms, namely Energy Flow Networks (EFNs) and Energy-weighted Message Passing Networks (EMPNs). Our analysis reveals that these algorithms fall short in extracting features from any \(N\)-point correlation that isn't a power of two, based on the complete basis of IRC safe observables, specifically C-correlators. To address this limitation, we introduce the Hypergraph Energy-weighted Message Passing Networks (H-EMPNs), designed to capture any \(N\)-point correlation among particles efficiently. Using the case study of top vs. QCD jets, which holds significant information in its 3-point correlations, we demonstrate that H-EMPNs targeting up to N=3 correlations exhibit superior performance compared to EMPNs focusing on up to N=4 correlations within jet constituents. Keywords: Large Hadron Collider, Hadronic jets, Message-passing Graph Neural Networks ## 1 Introduction The Large Hadron Collider (LHC) has been a cornerstone in advancing our understanding of particle physics. However, the complexity of the data generated necessitates sophisticated methods for feature extraction and analysis. Traditional approaches often fail to capture intricate relationships among the data points, especially when considering infrared and collinear (IRC) safe observables. In this context, neural networks have shown promise [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15] but are not without limitations. These include issues regarding interpretability [16; 17; 18; 19; 20; 21; 22], uncertainty quantification [23; 24; 25; 26; 27; 28; 29; 30], and the design and control of the physical biases [31; 32; 33; 34; 35; 36; 37; 38; 39; 40] of the neural networks for better physics generalisation capabilities. The intricate nature of the underlying physical description warrants a thorough understanding of these algorithms, particularly as a precise understanding of the Standard Model background within perturbative Quantum Chromodynamics (pQCD) is needed to discover new physics. With the recorded events naturally represented as sets (of variable sizes) of different reconstructed particles or raw detector hits, point clouds are the natural representation of the recorded data, and architectures to process such data efficiently, particularly Graph Neural Networks [41; 42; 43; 44; 45; 46; 47; 48], have been used successfully for LHC phenomenology. However, graphs do not expose higher-order correlations within the data by design, concentrating on two-particle correlations; the natural generalisation is hypergraphs. This generalisation is diagrammatically shown in figure 1 for a three-prong top jet, where the graph's edges are defined in terms of two particles, while the order-three hyperedges can look into the relevant three-prong structure of the top jet. This paper addresses these challenges by introducing Hypergraph Energy-weighted Message Passing Networks (H-EMPNs), which address some of the limitations that currently available infra-red and collinear (IRC) safe networks fail to overcome.
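To make the distinction in figure 1 concrete, the snippet below enumerates pairwise edges and order-three hyperedges for a set of jet constituents whose mutual separations lie within a radius \(R_{0}\); this is only one possible neighbourhood definition, used here purely for illustration, and it does not necessarily coincide with the hyperedge construction adopted by H-EMPNs later in this paper.

```python
import numpy as np
from itertools import combinations

def edges_and_hyperedges(y, phi, R0=0.8):
    """Pairs (graph edges) and triplets (order-three hyperedges) of constituents
    whose mutual rapidity-azimuth separations are all below R0."""
    def dR(a, b):
        dphi = np.arctan2(np.sin(phi[a] - phi[b]), np.cos(phi[a] - phi[b]))
        return np.hypot(y[a] - y[b], dphi)

    n = len(y)
    edges = [(i, j) for i, j in combinations(range(n), 2) if dR(i, j) < R0]
    hyperedges = [(i, j, k) for i, j, k in combinations(range(n), 3)
                  if dR(i, j) < R0 and dR(j, k) < R0 and dR(i, k) < R0]
    return edges, hyperedges
```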
We first examine the universal approximation capabilities of existing infra-red and collinear safe neural network models like Energy Flow Networks (EFNs) [31] and Energy-weighted Message Passing Networks (EMPNs) [36] in approximating any IRC safe observable expressible in terms of C-correlators [49; 50], looking into any general \(N\)-body phase space. Finding that EFNs are restricted to \(N=1\), and EMPNs have an arguably weak capability for approximating any \(N\neq 2^{n}\) C-correlators, we present H-EMPN as a more robust and versatile model capable of efficiently approximating any general IRC safe observable for any general \(N\). Our method leverages the power of message-passing in graphs and hypergraphs to capture higher-order relationships among the data points, thereby providing a more comprehensive feature extraction mechanism. Restricting ourselves to \(N=3\) for the top vs QCD jet tagging scenario, where the dominant information lies in the 3-body decay phase space of the top quark, we find that H-EMPNs outperform EMPNs, which look at up to \(N=4\) interparticle correlations, confirming our initial observation. We demonstrate the efficacy of H-EMPNs through empirical tests to showcase the learned graph representations. Furthermore, we discuss the architectural nuances of H-EMPN, providing insights into its design and training procedures. By doing so, we aim to establish H-EMPN as a powerful tool for LHC phenomenology, opening new avenues for applications in collider studies. Specifically, in section 2, we discuss the universal approximation of any IRC safe observable by EFNs and EMPNs by taking its correspondence to any generic C-correlator. In section 3, we devise H-EMPNs that can approximate any general C-correlator. The architecture and training details are presented in section 4, while the results are presented in section 5. We conclude in section 6. Figure 1: Visualisation of the inter-relations of jet constituents as captured by a graph structure (left) and a hypergraph structure with order-three hyperedges (right). In a graph structure, the edges correlate two constituents at a time and are shown as a line segment connecting two nodes. Instead, the order-three hyperedges simultaneously link properties of three jet constituents at a time and are shown as a triangle with vertices coinciding with three nodes. Thus, hypergraphs are more expressive structures and can access higher-order correlations amongst jet constituents. #### Notation In the following discussions, we are given a set of massless particle four vectors \[\mathcal{S}=\{\ p_{1},p_{2},....,p_{n_{part.}}\}\quad.\] These particles will be indexed via small Roman subscripts, while the number of message-passing operations will be indexed as Greek superscripts. Unless otherwise stated, all summations will be over the set \(\mathcal{S}\). The four vectors are given in terms of the relative hardness \(z_{i}=p_{T}^{i}/\sum p_{T}^{j}\) and the rapidity-azimuth variables \(\hat{\mathbf{p}}_{i}=(y_{i},\phi_{i})\). Bold-faced alphabets like \(\mathbf{h}_{i}\) and \(\mathbf{G}\) denote vector quantities, with their italicized counterparts \(h_{i}\) and \(G\) acting as placeholders for a component. As we will consider inference on networks after training rather than the training itself, we will not explicitly write the dependence of function approximators on the tunable parameters.
For instance, \(\mathbf{g}^{(\alpha)}(\mathbf{h}_{i}^{(\alpha-1)},\mathbf{h}_{j}^{(\alpha-1)})\) denotes a MultiLayer Perceptron (MLP) at the \(\alpha^{th}\) message passing step, where \(\mathbf{h}_{i}^{(\alpha-1)}\) and \(\mathbf{h}_{j}^{(\alpha-1)}\) correspond to the updated node features in the previous operation of particles \(i\) and \(j\), respectively, in \(\mathcal{S}\). ## 2 Universal Approximation of IRC safe observables In present scientific literature, it is well-known that MLPs are universal function approximators [51; 52; 53]. Without going into mathematical rigour, a parametrized function \(f(\mathbf{x},\Theta)\) of a vector \(\mathbf{x}\) and tunable parameters \(\Theta\), is a universal approximator if it can approximate any continuous function up to any arbitrary precision in a compact domain and range. On the other hand, physical observables like momenta or position live in an underlying metric space, and notions of completeness have long been the bread-and-butter of physicists to study physical systems. The complete set of IRC safe observables is essential at the LHC and the subject of our present investigation. Any IRC safe observable \(\mathcal{O}\) can be expanded in a basis of C-correlators [49] as \[\mathcal{O}\approx\sum_{N=0}^{N_{max}}\ \mathcal{C}_{N}^{f_{N}}\quad,\quad \mathcal{C}_{N}^{f_{N}}=\sum_{i_{1}}\sum_{i_{2}}...\sum_{i_{N}}\ E_{i_{1}}E_{i_{2}}...E_{i_{N}}\,f_{N}( \hat{p}_{i_{1}},\hat{p}_{i_{2}},...\hat{p}_{i_{N}})\quad, \tag{1}\] where \(f_{N}\) is symmetric to any permutation of its arguments. Energy Flow Polynomials (EFPs) [50] expand \(\mathcal{O}\) in a basis of polynomials in energy and separation \(\Delta R_{ij}\) in the rapidity-azimuth plane using the Stone-Weierstrass approximation theorem. In this section, we take a look into the approximation capabilities of existing IRC safe neural networks, namely Energy Flow Networks [31], and Energy-weighted Message Passing Network (EMPN) [36], comparing the functional form to any arbitrary \(N\) in the basis of C-correlators. As the C-correlators are complete, the network-extracted observables would be expressible as a linear sum of different C-correlators, and we investigate the terms in the sum (as given in eq. 1) that are optimally extracted via these observables. Although we rely on the statement of universal approximation theorems, it is important to remember that we will strictly talk about the existence of such approximators and not concentrate on the method of finding such a function. However, presently available gradient descent algorithms are powerful enough to efficiently find an approximation given that we have the desired output value on a large enough number of samples. This numerical nature of finding a practical working point in the weight space is one of the significant concerns regarding the interpretability of neural networks in general. Our aim is not to tackle this more difficult problem but to systematically establish the capability of IRC-safe feature extractors based on their ability to approximate different C-correlators. Moreover, we concentrate on the extracted features rather than the final observable approximated by the complete network, i.e. we do not consider the function approximation done by the downstream MLP, which takes in the extracted IRC safe features, as this would be akin to a usual multi-variate approach based on physics-motivated features. 
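To make the counting in eq. 1 concrete, a minimal brute-force sketch of a single \(\mathcal{C}_{N}^{f_{N}}\) is shown below (our own illustration, not code from this paper; the helper name, the toy jet, and the choice of \(f_{2}\) are assumptions), using the hardness fractions \(z_{i}\) in place of the energies. The nested sums make the \(\mathcal{O}(n_{part.}^{N})\) cost of a direct evaluation explicit.

```python
import itertools
import numpy as np

def c_correlator(z, phat, f_N, N):
    """Brute-force C_N^{f_N}: sum over all N-tuples of z_{i1}...z_{iN} f_N(phat_{i1},...,phat_{iN})."""
    n = len(z)
    total = 0.0
    for idx in itertools.product(range(n), repeat=N):       # O(n^N) terms
        total += np.prod(z[list(idx)]) * f_N(phat[list(idx)])
    return total

# Example: a 2-point correlator with f_2 given by the squared angular separation,
# evaluated on a toy jet with four constituents.
rng = np.random.default_rng(0)
z = rng.dirichlet(np.ones(4))                 # hardness fractions summing to one
phat = rng.normal(scale=0.3, size=(4, 2))     # (y, phi) coordinates relative to the jet axis
f2 = lambda p: np.sum((p[0] - p[1]) ** 2)     # symmetric in its two arguments
print(c_correlator(z, phat, f2, N=2))
```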
As we will study the general behaviour of the approximated function whose weights are frozen after some training procedure, we will not discuss the explicit dependence of the neural networks on their tuneable parameters in the following discussions. ### Energy Flow Networks Energy Flow Networks are infra-red and collinear safe deep sets models which learn a per-particle map of each particle's directional coordinates \(\mathbf{\hat{p}}_{i}\) and apply an energy-weighted sum to form a fixed-length representation of any variable-cardinality constituent set. Without loss of generality for a multi-dimensional representation, a single IRC safe observable can be written as \[C_{1}=\sum_{i}z_{i}\:g_{1}(\mathbf{\hat{p}}_{i})\quad,\] where \(g_{1}(\mathbf{\hat{p}}_{i})\) represents a parameterised multilayer perceptron. We have specifically denoted the observable as \(C_{1}\) to make it self-evident that the per-particle map essentially approximates any general \(\mathcal{C}_{1}^{f_{1}}\). This is because the MLP \(g_{1}\) is a universal approximator and can approximate any function \(f_{1}\) suiting a particular objective up to a required precision. In a practical implementation, several related IRC safe observables are approximated, which are fed to a downstream network for classification. The direct implementation of EFNs can, therefore, only extract features expressible in terms of \(C_{1}\). ### Energy-weighted Message Passing Networks An energy-weighted message passing operation for any general parametrised function \(\mathbf{\bar{g}}^{(\alpha)}\) can be written as \[\mathbf{h}_{i}^{(\alpha+1)}=\sum_{j\in\mathcal{N}[i]}\:\omega_{j}^{(\mathcal{N}[i])}\:\mathbf{\bar{g}}^{(\alpha+1)}(\mathbf{h}_{i}^{(\alpha)},\mathbf{h}_{j}^{(\alpha)})\quad,\] where \({\bf h}_{i}^{(\alpha)}\) are the input node features for the \(\alpha^{th}\) message passing operation and \[\omega_{j}^{({\cal N}[i])}=\frac{p_{T}^{j}}{\sum_{k\in{\cal N}[i]}\ p_{T}^{k}}\] are the energy weights dependent on the IRC safe neighbourhood set \({\cal N}[i]\), with \(\omega_{j}^{({\cal S})}=z_{j}\) for the whole set \({\cal S}\). For notational convenience in the following discussions, we will take the sum over the full set of particles in the jet and use \(z_{j}\) in place of \(\omega_{j}^{({\cal N}[i])}\) without loss of generality. Therefore, we have \[{\bf h}_{i}^{(\alpha+1)}=\sum_{j}\ z_{j}\ {\bf g}^{(\alpha+1)}({\bf h}_{i}^{(\alpha)},{\bf h}_{j}^{(\alpha)}) \tag{2}\] with the function \({\bf g}^{(\alpha+1)}\) expressed as a product of a Heaviside step function \(\Theta(\Delta R_{ij}<R_{0})\) and the original message function \(\bar{\bf g}^{(\alpha+1)}\) as \[{\bf g}^{(\alpha+1)}({\bf h}_{i}^{(\alpha)},{\bf h}_{j}^{(\alpha)})=\Theta(\Delta R_{ij}<R_{0})\ \bar{\bf g}^{(\alpha+1)}({\bf h}_{i}^{(\alpha)},{\bf h}_{j}^{(\alpha)})\quad.\] Here, \(\Delta R_{ij}\) is the Euclidean distance in the rapidity-azimuth plane between particles \(i\) and \(j\) while \(R_{0}\) is the graph's radius. The requirement of symmetry in the argument of \(f_{2}(\hat{\bf p}_{i},\hat{\bf p}_{j})\) for \({\cal C}_{2}^{f_{2}}\) and its absence in eq. 2 is not a contradiction as the node features themselves are defined for each particle and hence are not IRC safe observables. In contrast, the IRC safe graph representation will generally be expressible as some linear combination of \({\cal C}_{N}\). 
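As an illustration of the update in eq. 2, the following sketch (again our own, with hypothetical function and variable names; the message MLP, the dense neighbourhood mask, and the neglect of azimuthal periodicity are simplifying assumptions) performs one energy-weighted message-passing step with neighbourhood-normalised weights.

```python
import torch
from torch import nn

def energy_weighted_update(h, z, phat, g, R0=0.4):
    """One E-MPN step: h_i <- sum_j w_j^{N[i]} g(h_i, h_j) over the neighbourhood dR_ij < R0."""
    n, d = h.shape
    dR = torch.cdist(phat, phat)                            # pairwise distances in the (y, phi) plane
    mask = (dR < R0).float()                                # Theta(dR_ij < R0); i lies in its own neighbourhood
    w = z.unsqueeze(0) * mask                               # p_T-proportional weights per neighbourhood
    w = w / w.sum(dim=1, keepdim=True).clamp_min(1e-12)     # normalise within N[i]
    pair = torch.cat([h.unsqueeze(1).expand(n, n, d),       # (h_i, h_j) for every ordered pair
                      h.unsqueeze(0).expand(n, n, d)], dim=-1)
    return (w.unsqueeze(-1) * g(pair)).sum(dim=1)           # energy-weighted sum over j

# Toy usage on a jet with five constituents and 2-dim initial features h_i^(0) = phat_i.
z = torch.rand(5); z = z / z.sum()
phat = 0.3 * torch.randn(5, 2)
g1 = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 64))
h1 = energy_weighted_update(phat, z, phat, g1)              # (5, 64) updated node features
```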
We have \({\bf h}_{i}^{(0)}=\hat{\bf p}_{i}\) which gives \(\hat{\bf p}_{i}=\hat{\bf p}_{j}\implies{\bf h}_{i}^{(\alpha)}={\bf h}_{j}^{( \alpha)}\) for any \(\alpha>=0\) and any two collinear particles \(i\) and \(j\). The IRC safe graph representation is obtained as \[{\bf G}^{(L)}=\sum_{i=1}^{i=N}\ z_{i}\ {\bf h}_{i}^{(L)}\quad,\] after \(L\) iterations. As we shall see in the following, the complexity of the extracted features via EMPN will depend on the value of \(L\). Explicitly for \(L=1\), we have \({\bf h}_{i}^{(1)}=\sum_{j}z_{j}\ {\bf g}^{(1)}(\hat{\bf p}_{i},\hat{\bf p}_{j})\) which gives \[{\bf G}^{(1)}=\sum_{i,j}\ z_{i}\ z_{j}\ {\bf g}^{(1)}(\hat{\bf p}_{i},\hat{ \bf p}_{j})\quad.\] If the symmetry is enforced in \({\bf g}^{(1)}\), the approximated observable will contain \({\cal C}_{2}^{f_{2}}\) term only. At the same time, a non-symmetric \({\bf g}^{(1)}\) would also have a \({\cal C}_{1}\) component. For \(L=2\), we have \[\begin{split}{\bf G}^{(2)}&=\sum_{i,j}\ z_{i}\ z_{j} \ {\bf g}^{(2)}({\bf h}_{i}^{(1)},{\bf h}_{j}^{(1)})\\ \implies{\bf G}^{(2)}&=\sum_{i,j}\ z_{i}\ z_{j}\ {\bf g}^{(2)}(\sum_{k}z_{k}\,{\bf g}^{(1)}(\hat{\bf p}_{i},\hat{\bf p}_{k}), \sum_{l}z_{l}\,{\bf g}^{(1)}(\hat{\bf p}_{j},\hat{\bf p}_{l}))\.\end{split} \tag{3}\] The complicated nature of the arguments makes it difficult to ascertain the exact behaviour of the functional approximation. However, one expects the universal approximator \({\bf g}^{(2)}\) to be expressible as a linear combination of \(\mathcal{C}_{N}\)'s up to \(N=4\). However, due to the presence of four angular arguments and four energy weights, it hints against the efficient approximation of any \(\mathcal{C}_{N}^{f_{N}}\) for any \(N<4\). The situation is even more futile for \(L=3\) with eight angular arguments and eight energy-weighted sums. For a particular \(L\), we have \(2^{L}\) angular arguments and the same number of energy-weighted sums. Even if one extracts the graph features at each stage \(\alpha\), and gets a concatenated graph representation for each \(\alpha>0\) up to \(\alpha=L\), we have the efficient extraction of \(2,2^{2},2^{3},....2^{L}\) terms the sum in eq. 1 for any general IRC safe observable \(\mathcal{O}\). Although, for the jet substructure, one does not need to go to very high \(N\), we already run into a problem for top-tagging, which has valuable information in the 3-prong structure of the energy deposits. ## 3 Hypergraph Energy-weighted Message Passing Networks As discussed above, although powerful, Graph Neural Networks cannot look into higher-order relational information amongst the nodes efficiently. Therefore, in this section, we develop IRC-safe point cloud architectures capable of efficiently extracting higher-point correlation. A possible way to extend the capabilities of IRC safe feature extraction to higher-point correlations is to directly implement the form of C-correlators as \[\mathcal{H}^{N}=\sum_{i_{1}}\sum_{i_{2}}...\sum_{i_{N}}z_{i_{1}}z_{i_{2}}...z_ {i_{N}}\ \Theta_{N}(\hat{p}_{i_{1}},\hat{p}_{i_{2}},...,\hat{p}_{i_{N}})\,\Phi_{N}( \hat{p}_{i_{1}},\hat{p}_{i_{2}},...,\hat{p}_{i_{N}})\quad,\] where \(\Theta_{N}\) are step functions for reducing the sums to localised information, and \(\Phi_{N}\) are the neural networks approximating a correlated set (as the output of \(\Phi_{N}\) in general, is a vector) of \(f_{N}\)'s for the particular training objective. For IRC safety, both \(\Theta_{N}\) and \(\Phi_{N}\) should be symmetric under the permutation of its arguments. 
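For orientation, a direct \(N=3\) construction of this type could look like the following sketch (our own illustration; the triplet filter built from pairwise separations and the symmetrisation of \(\Phi_{3}\) via sorted \(\Delta R\)'s are assumptions). It makes the \(\mathcal{O}(n_{part.}^{3})\) cost of the nested sums explicit.

```python
import itertools
import torch
from torch import nn

phi3 = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 8))   # plays the role of Phi_3

def h3_direct(z, phat, R0=0.8):
    """Direct N=3 feature: sum_{i,j,k} z_i z_j z_k Theta_3 Phi_3, with Theta_3 keeping triplets whose
    pairwise separations are all below R0 and Phi_3 fed sorted dR's to enforce permutation symmetry."""
    out = torch.zeros(8)
    for i, j, k in itertools.product(range(len(z)), repeat=3):        # O(n^3) terms
        dr = torch.stack([(phat[a] - phat[b]).norm() for a, b in [(i, j), (j, k), (k, i)]])
        if bool((dr < R0).all()):
            out = out + z[i] * z[j] * z[k] * phi3(torch.sort(dr).values)
    return out

z = torch.rand(6); z = z / z.sum()
phat = 0.3 * torch.randn(6, 2)
print(h3_direct(z, phat).shape)    # torch.Size([8])
```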
The step function \(\Theta_{N}\) for each \(N\) essentially endows an \(N\)-uniform hypergraph structure onto the constituent set similar to the radius filter \(\Theta(\Delta R_{ij}<R_{0})\) endowing a graph structure for the case of \(\mathcal{C}_{2}\). Therefore, the concatenated hypergraph representations \[\mathbf{X}=\oplus_{N}\mathcal{H}^{N}\quad,\] up to \(N_{max}\) would extract IRC safe features to be fed to a downstream MLP for some task. We do not follow this approach for the following reasons. It is well-known [54; 55; 56; 57] that automatic feature extraction works best with deeper networks. Depth can only be brought into \(\Phi_{N}\) in the above expression, which does nothing to the IRC-safe feature extraction process. The complexity can be increased by increasing \(N\), which increases the width of the network, thereby increasing the model complexity sharply. Although the factorisation of the extracted features in energy and angular components could lead to better all-order behaviour in QCD and is indeed interesting, one needs to have proper control of the behaviour of the parameter optimisation before we can hope to answer such questions, as demonstrated in ref [33]. Our approach is based on one-particle and two-particle messages to construct a hybrid message-passing neural network that can extract higher point correlations in a recursive approach. Although it is easily generalisable to higher-point information, we restrict ourselves up to 3-point interactions due to the increasing complexity. ### IRC safety with heterogeneous source and destination embeddings The basic observation which makes it possible to build a higher-point IRC safe feature extractor is that the requirement of IRC safety for EMPN is still valid even when the node embeddings for the source \(\psi_{S}(\mathbf{\hat{p}}_{i})\) and destination \(\psi_{D}(\mathbf{\hat{p}}_{i})\) are different as long as they separately satisfy \[\psi_{S}(\mathbf{\hat{p}}_{q})=\psi_{S}(\mathbf{\hat{p}}_{r})=\psi_{S}(\mathbf{\hat{p}}_{s})\quad,\text{ and }\psi_{D}(\mathbf{\hat{p}}_{q})=\psi_{D}(\mathbf{\hat{p}}_{r})=\psi_{D}(\mathbf{\hat{p}}_{s})\] when \(\mathbf{\hat{p}}_{q}=\mathbf{\hat{p}}_{r}=\mathbf{\hat{p}}_{s}\), even if \(\psi_{S}(\mathbf{\hat{p}}_{i})\neq\psi_{D}(\mathbf{\hat{p}}_{i})\). More importantly, the embeddings \(\psi_{S}\) and \(\psi_{D}\) need not be functions of just a single particle. They can also be the updated node features of the \(\alpha\)-hop IRC safe neighbourhood after \(\alpha\) energy-weighted message passing operations (as given in eq. 2). For an IRC safe neighbourhood of \(i\), where a particle \(q\) splits into two daughters \(r\) and \(s\), we have \(\mathcal{N}[i]\ni q\implies\mathcal{N}^{\prime}[i]\ni r\ \wedge\mathcal{N}^{\prime}[i]\ni s\) when \(\mathbf{\hat{p}}_{q}=\mathbf{\hat{p}}_{r}=\mathbf{\hat{p}}_{s}\). Let us look closer into the statement that we need not have the same embedding in the argument of the message function in an Energy-weighted Message Passing operation even though the statement logically follows from the non-requirement of symmetry of the message function. Since we have heterogeneous source and destination embeddings, we need to fix a uniform direction of messages. We will take all messages as originating from a neighbourhood node \(j\in\mathcal{N}[i]\) and moving towards the destination node \(i\). 
Therefore, we have \[\mathbf{H}_{i}^{(\alpha+1,\beta+1)}=\sum_{j}\ z_{j}\ \mathbf{g}^{(\alpha+1,\beta+1)}(\mathbf{h}_{D,i}^{(\alpha)},\mathbf{h}_{S,j}^{(\beta)})\quad,\] where \(\mathbf{h}_{D,i}^{(\alpha)}\) and \(\mathbf{h}_{S,j}^{(\beta)}\) are the destination and source node embeddings, respectively, and \(\mathbf{g}^{(\alpha+1,\beta+1)}\) is the corresponding message function. As the destination and source node embeddings differ, the message-passing operations are indexed separately with \(\alpha\) and \(\beta\), respectively. The source embedding satisfying \(\mathbf{h}_{S,q}^{(\beta)}=\mathbf{h}_{S,r}^{(\beta)}=\mathbf{h}_{S,s}^{(\beta)}\) in the collinear limit makes the updated node representation \(\mathbf{H}_{i}^{(\alpha+1,\beta+1)}\) equal for \(i\notin\{q,r,s\}\), in the split and unsplit cases since \(z_{q}=z_{r}+z_{s}\). Explicitly, we have \[z_{q}\ \mathbf{g}^{(\alpha+1,\beta+1)}(\mathbf{h}_{D,i}^{(\alpha)}, \mathbf{h}_{S,q}^{(\beta)})=z_{r}\ \mathbf{g}^{(\alpha+1,\beta+1)}(\mathbf{h}_{D,i}^{(\alpha)},\mathbf{h}_{S,r}^{(\beta)})+z_{s}\ \mathbf{g}^{(\alpha+1,\beta+1)}(\mathbf{h}_{D,i}^{(\alpha)},\mathbf{h}_{S,s}^{(\beta)})\quad. \tag{11}\] Additionally, we require the equality of the destination embeddings \(\mathbf{h}_{D,q}^{(\alpha)}=\mathbf{h}_{D,r}^{(\alpha)}=\mathbf{h}_{D,s}^{(\alpha)}\) when \(i\in\{q,r,s\}\). However, we can have \(\mathbf{h}_{D,q}^{(\alpha)}\neq\mathbf{h}_{S,q}^{(\beta)}\), as this is not needed to satisfy eq. 11. Therefore, \(\mathbf{H}_{i}^{(\alpha+1,\beta+1)}\) satisfies \(\mathbf{H}_{q}^{(\alpha+1,\beta+1)}=\mathbf{H}_{r}^{(\alpha+1,\beta+1)}=\mathbf{H}_{s}^{(\alpha+1,\beta+1)}\), in the collinear limit of the two daughters \(r\) and \(s\) of \(q\). ### Building higher point IRC safe feature extractor It is now straightforward to build an IRC-safe message-passing operation which looks into three-particle correlations. The structure of the two-particle energy-weighted operation is kept the same as eq. 2, and then combined with destination embedding \(\psi_{D}(\mathbf{\hat{p}}_{i})\) and source embedding \(\psi_{S}(\mathbf{\hat{p}}_{i})\) of the angular coordinates to give an effective three particle message passing of the form \[\begin{split}\mathbf{H}_{i}^{(1,2)}&=\sum_{j}\;z_{j} \;\mathbf{g}^{(1,2)}(\psi_{D}(\mathbf{\hat{p}}_{i}),\mathbf{h}_{S,j}^{(1)}) \quad,\\ \mathbf{H}_{i}^{(2,1)}&=\sum_{j}\;z_{j}\;\mathbf{g}^ {(2,1)}(\mathbf{h}_{D,i}^{(1)},\psi_{S}(\mathbf{\hat{p}}_{j}))\quad.\end{split} \tag{19}\] As the destination and source embeddings are different, \(\mathbf{h}_{D,i}^{(1)}\) and \(\mathbf{h}_{S,i}^{(1)}\) denote node features updated after two separate message-passing operations as given in eq. 2 with different message functions \(\mathbf{g}_{D}^{(1)}\) and \(\mathbf{g}_{S}^{(1)}\), respectively. The IRC safe feature would be a graph-level representation after an energy-weighted summed graph readout on \(\mathbf{H}_{i}^{(1,2)}\) and \(\mathbf{H}_{i}^{(2,1)}\), as \[\mathbf{G}_{3}^{(1,2)}=\sum_{i}\;z_{i}\;\mathbf{H}_{i}^{(1,2)}\quad,\quad \mathbf{G}_{3}^{(2,1)}=\sum_{i}\;z_{i}\;\mathbf{H}_{i}^{(2,1)}\quad. 
\tag{20}\] We shall see in the following discussions that these two representations look at distinct topological structures in the graph; the IRC safe representation for the order three feature extraction is constructed as a concatenation of these two components \[\mathbf{G}_{3}=\mathbf{G}_{3}^{(1,2)}\oplus\mathbf{G}_{3}^{(2,1)}\quad.\] We can ascertain the behaviour of \(\mathbf{G}_{3}\) by writing down its dependence on the particle's four vectors: \[\begin{split}\mathbf{G}_{3}&=\sum_{i,j}\;z_{i}\,z_{ j}\;\left(\mathbf{g}^{(1,2)}(\psi_{D}(\mathbf{\hat{p}}_{i}),\sum_{l}\;z_{l} \,\mathbf{g}_{S}^{(1)}(\mathbf{\hat{p}}_{j},\mathbf{\hat{p}}_{l}))\right.\\ &\qquad\qquad\qquad\left.\oplus\;\mathbf{g}^{(2,1)}(\sum_{l}\,z_ {l}\,\mathbf{g}_{D}^{(1)}(\mathbf{\hat{p}}_{i},\mathbf{\hat{p}}_{l}),\psi_{S}( \mathbf{\hat{p}}_{j})\right)\quad.\end{split}\] Three energy weights and three angular arguments hint that the learning procedure would directly start looking at the three-particle interrelations. It is important to note that any IRC safe observable looking into \(n\) body phase space, by definition, approaches its \(n-1\) body phase space limit when one particle approaches the soft or collinear limit. In other words, eq. 3 will also look into the three-body limit of any four-particle combination when one is soft or collinear to any other particle. However, we expect the above form to extract better the three-particle correlations required for tagging three-prong jets like top quarks. A schematic representation of the feature extraction procedure using different source and destination embeddings of order one and order two operations is shown in figure 2. Figure 2: The figure shows a schematic representation of the message passing operation to build hybrid order three node representations for Hypergraph Energy-weighted Message Passing Networks by combining order one and two node representations. We focus on the red node whose neighbours are the coloured nodes. On the top right, the per-particle embeddings for the source and destination can only look into the individual particle information. On the left, however, the energy-weighted message-passing operation gathers information from each node's neighbourhood, which are shown with the identically coloured arrows for the coloured nodes. The order three feature extractors are built by combining the per-particle destination embedding with the order-two source embedding (on the left) and the order-two destination embedding with the per-particle source embedding (on the right). From a feature extraction perspective, there are two essential differences in comparison to the \(L=2\) case given in eq. 3: * One argument in both \(\mathbf{g}^{(1,2)}\) and \(\mathbf{g}^{(2,1)}\) is an embedding of the angular coordinates of a single particle and hence contains single-particle information. In contrast, both arguments already contain the aggregated neighbourhood information in \(\mathbf{g}^{(2)}\). * The embeddings of the two arguments in \(\mathbf{g}^{(1,2)}\) and \(\mathbf{g}^{(2,1)}\) have independently trainable weights while they are shared for \(\mathbf{g}^{(2)}\). The first difference makes it possible for the function \(\mathbf{g}^{(1,2)}\) to effectively extract the relation of each node \(i\) concerning the IRC safe 2-hop neighbourhood, while the function \(\mathbf{g}^{(2,1)}\) looks at the aggregated node feature of \(i\)'s immediate neighbourhood with individual nodes in the same neighbourhood. 
The difference is also seen in figure 2, where on the left \(\mathbf{H}_{i}^{(1,2)}\) looks into the features of the nodes within each coloured circle with the red node, while on the right, \(\mathbf{H}_{i}^{(2,1)}\) looks into the feature of the aggregated neighbourhood information of the red node with the individual nodes within its neighbourhood. This essential difference in the feature extraction procedure makes it imperative to devise the two separate message-passing operations as they need to extract topologically different features within the graph. It is straightforward to generalize this procedure to any arbitrary \(N\), with substantial flexibility to choose the extractor guided by the requirement to divide \(N\) into two parts in any possible way. Any feature extractor of order \(N-1\) can be used to extract features from topologically distinct paths of length \(N\) within the graph. Due to the different combinatorial factors involved, the complexity rises relatively fast with increasing \(N\), and we restrict our discussion to \(N=3\). To look into the learnt features of the order one and two feature extractors, we define the graph representation as a concatenation of the source and destination embeddings as \[\begin{split}\mathbf{G}_{1}&=\mathbf{G}_{D,1} \oplus\mathbf{G}_{S,1}=\sum_{i}\;z_{i}\;(\psi_{D}(\mathbf{\hat{p}}_{i})\oplus \psi_{S}(\mathbf{\hat{p}}_{i}))\quad,\\ \mathbf{G}_{2}&=\mathbf{G}_{D,2}\oplus\mathbf{G}_{S,2}=\sum_{i}\;z_{i}\;(\mathbf{h}_{D,i}^{(1)}\oplus\mathbf{h}_{S,i}^{(1)})\quad. \end{split} \tag{10}\] This gives the concatenated graph readout to be fed to the classifier network as \[\mathbf{G}=\mathbf{G}_{1}\oplus\mathbf{G}_{2}\oplus\mathbf{G}_{3}\quad. \tag{11}\] ## 4 Network architecture and training To gauge the properties of the proposed network, we utilise the public top-tagging dataset [58] for a supervised classifier. These events were generated with Pythia 8.2.15 [59] and were showered and hadronised without MPI effects. The showered events additionally underwent a parametrised detector response via Delphes3 [60] with the default ATLAS detector card. The particle-flow objects of the Delphes output were used as inputs to construct anti-\(k_{T}\) [61] jets with \(R=0.8\) via FastJet [62], with additional requirements of \(p_{T}\) within the range \([550,650]\) GeV, and pseudorapidity \(|\eta|<2\). Further, for the signal events, the top quark and its decay products' parton level information were used to reject falsely reconstructed jets with the partons falling outside the jet's area. The training data comprises 1.2 million samples, while the test and validation datasets contain 400k samples. The network analysis uses PyTorch-Geometric [63]. We compare order three Hypergraph Energy-weighted Message Passing Networks (H-EMPNs) with \(L=2\) EMPNs. For a reasonable comparison with the H-EMPN, we will extract the graph features for \(\alpha=1\) and \(\alpha=2\) stages separately for the EMPN and feed the concatenated graph representation into the classifier network. As shown in figure 3, the IRC-safe feature extractor module for the H-EMPN, in total, contains two per-particle maps for \(\psi_{D}\) and \(\psi_{S}\), and four energy-weighted edge convolution (E-EdgeConv) operations to give the updated node embeddings \(\mathbf{h}_{D,i}^{(1)}\), \(\mathbf{h}_{S,i}^{(1)}\), \(\mathbf{H}_{i}^{(1,2)}\), and \(\mathbf{H}_{i}^{(2,1)}\). Including the classifier MLP, which takes in the concatenated graph readout, we have seven MLPs. 
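A condensed sketch of how these pieces fit together is given below (a pseudo-implementation of our own with hypothetical names, not the released code; the plain concatenation fed to each message function and the dense adjacency mask \(\Theta(\Delta R_{ij}<R_{0})\) are simplifications of the actual implementation).

```python
import torch
from torch import nn

def mlp(d_in, d_out=128):
    return nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(),
                         nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, d_out))

class HEMPN(nn.Module):
    """Order-three H-EMPN sketch: two per-particle maps, four E-EdgeConv messages, one classifier."""
    def __init__(self, d=128):
        super().__init__()
        self.psi_D, self.psi_S = mlp(2, d), mlp(2, d)        # per-particle maps on (y, phi)
        self.g_D1, self.g_S1 = mlp(2 * d, d), mlp(2 * d, d)  # order-two messages -> h_D^(1), h_S^(1)
        self.g_12, self.g_21 = mlp(2 * d, d), mlp(2 * d, d)  # hybrid messages -> H^(1,2), H^(2,1)
        self.classifier = nn.Sequential(mlp(6 * d, 1), nn.Sigmoid())

    @staticmethod
    def e_edgeconv(h_dst, h_src, z, adj, g):
        """H_i = sum_j w_j g(h_dst_i, h_src_j) with neighbourhood-normalised energy weights."""
        n, d = h_dst.shape
        w = z.unsqueeze(0) * adj
        w = w / w.sum(1, keepdim=True).clamp_min(1e-12)
        pair = torch.cat([h_dst.unsqueeze(1).expand(n, n, d),
                          h_src.unsqueeze(0).expand(n, n, d)], dim=-1)
        return (w.unsqueeze(-1) * g(pair)).sum(1)

    def forward(self, phat, z, adj):
        hD0, hS0 = self.psi_D(phat), self.psi_S(phat)
        hD1 = self.e_edgeconv(hD0, hD0, z, adj, self.g_D1)    # eq. 2 with psi_D embeddings
        hS1 = self.e_edgeconv(hS0, hS0, z, adj, self.g_S1)    # eq. 2 with psi_S embeddings
        H12 = self.e_edgeconv(hD0, hS1, z, adj, self.g_12)    # eq. 19, first line
        H21 = self.e_edgeconv(hD1, hS0, z, adj, self.g_21)    # eq. 19, second line
        G1 = (z.unsqueeze(-1) * torch.cat([hD0, hS0], -1)).sum(0)   # eq. 10
        G2 = (z.unsqueeze(-1) * torch.cat([hD1, hS1], -1)).sum(0)
        G3 = (z.unsqueeze(-1) * torch.cat([H12, H21], -1)).sum(0)   # concatenated eq. 20
        return self.classifier(torch.cat([G1, G2, G3]))       # 768-dim readout -> top probability

# Toy forward pass on a jet with 30 constituents and a complete graph (R0 -> infinity).
net = HEMPN()
phat = 0.3 * torch.randn(30, 2); z = torch.rand(30); z = z / z.sum()
print(net(phat, z, torch.ones(30, 30)))
```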
Of these seven MLPs, we have one for each per-particle map and a message function for each E-EdgeConv operation from the feature extractor module. All seven MLPs contain two hidden layers with 128 nodes and a rectified linear unit activation function. Except for the classifier network, which has a one-dimensional output with sigmoid activation, all other MLPs have a 128-dimensional output layer with a linear activation function. The per-particle maps take the rapidity-azimuth coordinates \(\mathbf{\hat{p}}_{i}=(\Delta y_{iJ},\Delta\phi_{iJ})\) of each constituent \(i\) as inputs, with the differences taken from the jet axis defined by the four-vector \(p_{J}^{\mu}=\sum_{k\in\mathcal{S}}p_{\mu}^{k}\). Figure 3: The architecture of the H-EMPN network utilized in this study is shown in the figure above. For a destination node embedding \(\mathbf{h}_{D,i}\) and source node embedding \(\mathbf{h}_{S,j}\), the message function takes in the concatenated vector \(\mathbf{h}_{D,i}\oplus\mathbf{h}_{D,i}-\mathbf{h}_{S,j}\) as the input. The EMPN network sequentially applies the E-EdgeConv operation twice to the input graph's node features. The first and the second E-EdgeConv operations have the same MLP architecture corresponding to the ones that give \(\mathbf{h}_{D,i}^{(1)}/\mathbf{h}_{S,i}^{(1)}\) and \(\mathbf{H}_{i}^{(1,2)}/\mathbf{H}_{i}^{(2,1)}\), respectively. The classifier MLP for the EMPN and H-EMPN takes in 256 and 768-dimensional concatenated graph representations, respectively. The whole network is trained using the binary cross-entropy loss function. We construct graphs with \(R_{0}\in\{0.4,0.5,0.6\}\) and \(R_{0}\rightarrow\infty\) corresponding to complete graphs. For all these four instances of input graphs, we train each network five times from random initialization for 100 epochs with the Adam optimizer [64] and a learning rate of 0.001. A decay-on-plateau condition is applied to the learning rate with a decay factor of 0.5 if the validation loss does not decrease for three epochs. The epoch with minimum validation loss is used for inference for each training instance. ## 5 Results ### Performance The receiver operator characteristics (ROC) curve for the network with the highest area under the ROC (AUC) curve from all training instances between the signal acceptance \(\epsilon_{S}\) and the inverse of background acceptance \(1/\epsilon_{B}\) for the two models for \(R_{0}=0.4\) and \(R_{0}\rightarrow\infty\) is shown in figure 4. Figure 4: The receiver operator characteristics curve for the best performing network (in terms of AUC) over the five training instances for \(R_{0}=0.4\) and \(R_{0}\rightarrow\infty\) for the EMPN and H-EMPN for different ranges of signal acceptance \(\epsilon_{S}\). We see that the EMPN has almost an overlapping ROC curve for these two radii, while for the H-EMPN, there is a noticeable improvement. The area under the receiver operator curve for the EMPN and H-EMPN and different graph construction radii are tabulated in table 1. The values correspond to the mean over the five training instances, while the errors correspond to the standard deviation. For \(R_{0}=0.4\), the EMPN and H-EMPN have almost identical discrimination power with an AUC of \(0.9823\) and \(0.9821\), respectively. As the radius increases, there is a steady increase for the H-EMPN, while for the EMPN, it increases for \(R_{0}=0.5\), stays at a similar value for \(R_{0}=0.6\), and there is a noticeable dip in performance when going to complete graphs with \(R_{0}\rightarrow\infty\). 
This trend can be understood from the structural difference between the EMPN and H-EMPN. The EMPN's feature extraction is sequential, with the second E-EdgeConv being fed by the first E-EdgeConv's updated node features. With increasing radius, the feature extraction suffers from a redundancy of the information as the first E-EdgeConv already looks at a much larger neighbourhood in the rapidity-azimuth plane. On the other hand, the H-EMPN has a much larger width, with four modules taking the input jet constituents in parallel, which are then combined non-trivially to feed the order-three feature extractors. From a purely QCD perspective, the radius \(R_{0}\) puts in an additional scale, and going to the \(R_{0}\rightarrow\infty\) limit takes away this dependence in the feature extraction procedure. Therefore, the H-EMPN can extract features from the full jet more efficiently without being restrained by an arbitrary angular scale \(R_{0}\). The AUC paints a global picture of the discrimination power of a binary classifier; however, a classifier is almost always used at a specific working point, depending on the analysis. This practical aspect demands a local figure of merit, which we show with the inverse of the background acceptance \(\epsilon_{B}\), the background rejection \(1/\epsilon_{B}\), at fixed values of signal acceptance \(\epsilon_{S}\). The background rejection for the EMPN and H-EMPN for the different graph construction radii are shown for \(\epsilon_{S}=0.5\) and \(\epsilon_{S}=0.3\) in tables 2 and 3, respectively. The values are averaged over the five training instances, with the standard deviations shown as errors. Although the trend for separate models is similar to that of the AUCs, the H-EMPN already starts having a noticeably better background rejection for \(R_{0}=0.5\) even though the EMPN has a nominally higher AUC. As a matter of fact, except for \(R_{0}=0.4\) at \(\epsilon_{S}=0.3\), the H-EMPN has a numerically higher mean background rejection for all other instances. To probe what the individual modules have learnt, we first consider the linear correlation of each graph representation with the network output. We choose the best-performing complete graph, which has the possibility of the highest information redundancy besides being the strongest classifier. Although a relatively high linear correlation with the network output does point to the classification using that particular information, it is defined for each component of the graph representation, which dilutes the importance of the underlying vector representations. Moreover, the absence of linear correlation does not imply the lack of discriminatory information, as neural networks can be highly non-linear functions of their inputs. We look into the separating power of the different graph representations by visualizing them in a two-dimensional latent space using the t-distributed Stochastic Neighbourhood Embedding (t-SNE) [65]--an unsupervised data representation technique, where high dimensional data is embedded non-linearly in a lower dimensional space by maximally conserving the neighbourhood information endowed by a Euclidean metric in both spaces. In other words, nearby points in the high-dimensional representation get mapped to a local neighbourhood in the low-dimensional space. As it is an unsupervised technique, no explicit class information (QCD and top for our case) is fed when learning the map, and the clusters that arise in the low-dimensional space are a consequence of their proximity in the high-dimensional space. 
Therefore, a well-separated cluster in the lower-dimensional space implies that the higher-dimensional space also has well-separated regions. We use the implementation of t-SNE in the Scikit-learn [66] package to embed the various 128-dimensional graph representations of the test dataset evaluated on the best performing EMPN and H-EMPN for the complete graph in a two-dimensional space separately for each representation. The class-wise two-dimensional histogram in the embedding space \((t_{1},t_{2})\) for \(\mathbf{G}^{(1)}\) and \(\mathbf{G}^{(2)}\) for the EMPN are shown in figure 5. \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline & \multicolumn{4}{|c|}{\(1/\epsilon_{B}\) at \(\epsilon_{S}=0.5\)} \\ \cline{2-5} **Model** & \(R_{0}=0.4\) & \(R_{0}=0.5\) & \(R_{0}=0.6\) & \(R_{0}\rightarrow\infty\) \\ \hline EMPN & \(235\pm 7\) & \(250\pm 2\) & \(246\pm 4\) & \(255\pm 6\) \\ H-EMPN & \(236\pm 2\) & \(258\pm 6\) & \(258\pm 11\) & \(276\pm 6\) \\ \hline \end{tabular} \end{table} Table 2: The table shows the background rejection at a signal acceptance of 50% for different models. The values correspond to the mean from the evaluation of the test dataset for five different training instances from random initialization, while the standard deviations are shown as errors. \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline & \multicolumn{4}{|c|}{\(1/\epsilon_{B}\) at \(\epsilon_{S}=0.3\)} \\ \cline{2-5} **Model** & \(R_{0}=0.4\) & \(R_{0}=0.5\) & \(R_{0}=0.6\) & \(R_{0}\rightarrow\infty\) \\ \hline EMPN & \(819\pm 39\) & \(882\pm 11\) & \(839\pm 31\) & \(895\pm 36\) \\ H-EMPN & \(817\pm 33\) & \(917\pm 25\) & \(911\pm 34\) & \(995\pm 48\) \\ \hline \end{tabular} \end{table} Table 3: The table shows the background rejection (\(1/\epsilon_{B}\)) at a signal acceptance (\(\epsilon_{S}\)) of 30% for different models. The values correspond to the mean from the evaluation of the test dataset for five different training instances from random initialization, while the standard deviations are shown as errors. We can see that both the graph representations have relatively distinct regions in \((t_{1},t_{2})\) for the QCD samples (shown above) and top samples (shown below). Similarly, the two-dimensional histograms for the graph representations constructed out of the destination and source node-embeddings for the H-EMPN are shown in figures 6 and 7, respectively. All these embedded graph representations exhibit clear clustering of the QCD and top samples in different regions, confirming that the H-EMPN has extracted discriminating features from all of its component modules. Although the EMPN and H-EMPN can utilize their constituent graph representation to separate the QCD jets from top jets as seen from these two-dimensional histograms, we reiterate the qualitative differences between these two networks from the QCD perspective. The \(L=2\) EMPN looks up to order four relations. In contrast, the H-EMPN in its present guise only looks up to order three: the sequential application of E-EdgeConv (to give \(\mathbf{H}_{i}^{(1,2)}\) and \(\mathbf{H}_{i}^{(2,1)}\)) takes in the per-particle map with single particle information rather than an updated node feature with the local neighbourhood information in one of its arguments. However, we can see the better ability of the H-EMPN network from its performance studies and potentially better behaviour in QCD with its greater efficacy in the absence of an arbitrary angular scale \(R_{0}\). 
Since we took the top vs QCD jets classification example, we already knew that there is beneficial information in the three-prong structure within the jet, which prompted our design of the specific H-EMPN.1 Figure 5: The two-dimensional histogram of the QCD (above) and top (below) test datasets in the two-dimensional latent space obtained after a t-SNE embedding of the 128-dimensional graph representation \(\mathbf{G}^{(1)}\) (left) and \(\mathbf{G}^{(2)}\) (right) of the best performing EMPN trained with complete graphs. The first observation from the finite \(R_{0}\) cases is that the H-EMPN architecture is more critical in extracting the order three relational information from the jets than the \(L=2\) EMPN. On the other hand, our a priori knowledge of QCD, prompting the design of the H-EMPN, validates that physical inductive biases, or more specifically, QCD, have an important role in the design of performant feature extractors. Therefore, rather than throwing a currently "fashionable network" under the hood, designing architectures based on the underlying physical intuition can help push the performance boundaries of deep learning algorithms and gain (at least) a qualitative understanding of their inner workings. Footnote 1: The situation may be different, for instance, in the quark vs gluon case where the separating information is not in the hard prong structure but the soft radiation pattern surrounding the one prong core within the jet. ## 6 Conclusions This study delved deep into the intricacies of generalised automatic infrared and collinear safe feature extraction for LHC phenomenology, focusing on the potential of Graphs and Hypergraphs. Hypergraphs are a generalisation of traditional graphs. While a standard graph consists of vertices connected by edges, each connecting exactly two vertices, a hypergraph allows edges to connect any number of vertices, offering a more flexible way to represent relationships between entities. Figure 6: The two-dimensional histogram of the QCD (above) and top (below) test datasets in the two-dimensional latent space obtained after a t-SNE embedding of the 128-dimensional graph representation \(\mathbf{G}_{D,1}\) (left), \(\mathbf{G}_{D,2}\) (center) and \(\mathbf{G}_{3}^{(1,2)}\) (right) of the best performing H-EMPN trained with complete graphs. First, we explored the behaviour of energy-weighted message passing and its capability to approximate general infrared and collinear safe observables. We highlighted the significance of IRC-safe observables, especially in the context of data interpretation at LHC experiments. The study further explored the capabilities of Energy Flow Networks and Energy-weighted message-passing networks, shedding light on their potential and constraints by utilising multilayer perceptrons as universal function approximators within the architecture, with the IRC-safe observables expressible in terms of C-correlators. To enhance the capabilities of IRC safe feature extraction, especially for higher-point correlations, a novel method was introduced by leveraging the form of C-correlators and heterogeneous source and destination node embeddings. This approach presents a renewed outlook on feature extraction. 
Qualitatively assessing the two models, while the EMPN model provides a robust foundation for feature extraction, the H-EMPN model, designed to look at order-three interparticle relations, demonstrates an edge in performance metrics even though the EMPN model, via the application of two message-passing operations, could theoretically look up to order four. This suggests that incorporating hypergraph structures in the H-EMPN model offers enhanced capabilities in extracting higher-point correlations, making it a promising tool for more intricate analyses in LHC phenomenology. Figure 7: The two-dimensional histogram of the QCD (above) and top (below) test datasets in the two-dimensional latent space obtained after a t-SNE embedding of the 128-dimensional graph representation \(\mathbf{G}_{S,1}\) (left), \(\mathbf{G}_{S,2}\) (center) and \(\mathbf{G}_{3}^{(2,1)}\) (right) of the best performing H-EMPN trained with complete graphs. Our findings underscore the potential of hypergraph-based methods in enhancing the extraction of IRC-safe features. The research paves the way for further exploration into LHC phenomenology, focusing on optimising feature extraction techniques. ## Acknowledgements M.S. is supported by the STFC under grant ST/P001246/1. Computational work was performed on the Param Vikram-1000 High Performance Computing Cluster and TDP resources at the Physical Research Laboratory (PRL).
2301.13701
On the Stability of General Bayesian Inference
We study the stability of posterior predictive inferences to the specification of the likelihood model and perturbations of the data generating process. In modern big data analyses, useful broad structural judgements may be elicited from the decision-maker but a level of interpolation is required to arrive at a likelihood model. As a result, an often computationally convenient canonical form is used in place of the decision-maker's true beliefs. Equally, in practice, observational datasets often contain unforeseen heterogeneities and recording errors and therefore do not necessarily correspond to how the process was idealised by the decision-maker. Acknowledging such imprecisions, a faithful Bayesian analysis should ideally be stable across reasonable equivalence classes of such inputs. We are able to guarantee that traditional Bayesian updating provides stability across only a very strict class of likelihood models and data generating processes, requiring the decision-maker to elicit their beliefs and understand how the data was generated with an unreasonable degree of accuracy. On the other hand, a generalised Bayesian alternative using the $\beta$-divergence loss function is shown to be stable across practical and interpretable neighbourhoods, providing assurances that posterior inferences are not overly dependent on accidentally introduced spurious specifications or data collection errors. We illustrate this in linear regression, binary classification, and mixture modelling examples, showing that stable updating does not compromise the ability to learn about the data generating process. These stability results provide a compelling justification for using generalised Bayes to facilitate inference under simplified canonical models.
Jack Jewson, Jim Q. Smith, Chris Holmes
2023-01-31T15:20:54Z
http://arxiv.org/abs/2301.13701v2
# On the Stability of General Bayesian Inference ###### Abstract We study the stability of posterior predictive inferences to the specification of the likelihood model and perturbations of the data generating process. In modern big data analyses, the decision-maker may elicit useful broad structural judgements but a level of interpolation is required to arrive at a likelihood model. One model, often a computationally convenient canonical form, is chosen, when many alternatives would have been equally consistent with the elicited judgements. Equally, observational datasets often contain unforeseen heterogeneities and recording errors. Acknowledging such imprecisions, a faithful Bayesian analysis should be stable across reasonable equivalence classes for these inputs. We show that traditional Bayesian updating provides stability across a very strict class of likelihood models and dgp, while a generalised Bayesian alternative using the \(\beta\)-divergence loss function is shown to be stable across practical and interpretable neighbourhoods. We illustrate this in linear regression, binary classification, and mixture modelling examples, showing that stable updating does not compromise the ability to learn about the dgp. These stability results provide a compelling justification for using generalised Bayes to facilitate inference under simplified canonical models. _Keywords:_ Stability; Generalised Bayes; \(\beta\)-divergence; Total Variation; Generalised linear models. Introduction Bayesian inferences are driven by the posterior distribution \[\pi(\theta|y)=\frac{\pi(\theta)f(y;\theta)}{\int\pi(\theta)f(y;\theta)d\theta}. \tag{1}\] which provides the provision to update parameter prior \(\pi(\theta)\) using observed data \(y=(y_{1},\ldots,y_{n})\in\mathcal{Y}^{n}\) assumed to have been generated according to likelihood \(f(\cdot;\theta)\). The quality of such posterior inference depends on the specification of the prior, likelihood, and collection of the data. In controlled experimental environments where time is available to carefully consider such specifications, a posterior calculated in this way might be credible. However, modern applications often involve high-dimensional observational data and are undertaken by non-experts. In such scenarios, it is natural to question the quality of the specification of \(\pi(\theta)\) and \(f(\cdot;\theta)\) and the collection of \(y\) and therefore wonder to what extent posterior inference through (1) can be trusted. Much work has previously investigated the stability of (1) to the specification of \(\pi(\theta)\), therefore our focus here will be on \(f(\cdot;\theta)\) and \(y\). The likelihood model captures the decision maker's (dm's) beliefs regarding the generation of data \(y\). However, accurately formulating expert judgements as probability densities is difficult. Even for a well trained expert, so doing requires many more probability specifications to be made at a much higher precision than is possible within the time constraints of a typical problem (Goldstein, 1990). This is not to say that an elicited model is useless. Often domain experts can reliably elicit important broad judgements. However, the resulting "_functional_" model \(f(\cdot;\theta)\) generally involves some form of interpolating approximation of the dm's "_true_" beliefs. So doing is not unreasonable. 
However, a consequence of such expediency is that not only does the dm not believe all the judgements made by \(f(\cdot;\theta)\), its specific form is likely only one member of an equivalence class of models that also capture the dm's elicited beliefs and _could_ have been used for inference. A typical example of the above is when applied practitioners deploy computationally convenient canonical models, for which there are software and illustrative examples available, to their domain specific problems. While the broad structure of such models may be suitable across domains, it is the practitioner's familiarity with its form, its software implementation or the platform on which it was published that motivates its use for inference, rather than a careful consideration of how it captures beliefs about the new environment. Similarly, the data were not necessarily collected exactly how the dm imagined when specifying \(f(\cdot;\theta)\). There may be unforeseen heterogeneities, outliers, or recording errors. Alternatively, the dm may be deploying someone else's carefully elicited model to an analogous but not necessarily exchangeable scenario. We therefore also consider the data generating process (dgp) that generated the dm's data \(y\) to belong to an equivalence class of dgps to which the dm _could_ have deployed their inference. Given the inevitable lack of specificity in \(f\) and \(y\), a faithful Bayesian analysis should be able to demonstrate that it is not overly dependent on arbitrary choices across equivalence classes of its inputs. Such stability would allow dms to continue using familiar models in the knowledge that their selection is not driving the critical posterior inferences. This paper shows that the requirement for such stability necessitates the consideration of an updating rule different from (1). Consider, for example, using a Gaussian distribution, \(\mathcal{N}(y;\mu,\sigma^{2})\), to approximate beliefs about data \(y\). While the Gaussian distribution is ubiquitous, the top of Figure 1 shows that a Student's-\(t\) likelihood \(t_{5}(y;\mu,\sigma^{2})\) with 5 degrees of freedom would also have sufficed for this specification. The two likelihoods appear almost indistinguishable for all values of their shared \(\mu\) and \(\sigma^{2}\). Therefore, it would be unreasonable to expect that any dm will strongly prefer one or the other of these. However, the bottom left of Figure 1 shows that when updating according to (1) each model can result in very different posterior inferences. Equally, (1) is not stable to perturbations of the data either, as a small proportion of outliers moves the posterior inferences away from the uncontaminated part of the dgp. We demonstrate that this is a feature of the fact that implicitly (1) learns about the parameter of the model minimising the Kullback-Leibler Divergence (kld) between the data generating process (dgp) and the model, and that stability can only be expected here when the dm is sure of the tail specification of their model and the data. See Section 6.1 for full details of this example. 
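As a quick numerical illustration of how close these two likelihoods are (our own sketch, not code from the paper; the function name is ours), the total variation distance between \(\mathcal{N}(y;\mu,\sigma^{2})\) and \(t_{5}(y;\mu,\sigma^{2})\) with shared location and scale can be evaluated on a fine grid. It comes out at only a few per cent on the natural probability scale and, by location-scale invariance, is the same for every shared \(\mu\) and \(\sigma^{2}\).

```python
import numpy as np
from scipy import stats

def tvd_gauss_vs_t(mu=0.0, sigma=1.0, df=5):
    """TVD(N(mu, sigma^2), t_df(mu, sigma^2)) = 0.5 * integral |f - h| dy, evaluated on a wide grid."""
    y = np.linspace(mu - 50 * sigma, mu + 50 * sigma, 200_001)
    f = stats.norm.pdf(y, loc=mu, scale=sigma)
    h = stats.t.pdf(y, df=df, loc=mu, scale=sigma)
    return 0.5 * np.trapz(np.abs(f - h), y)

print(tvd_gauss_vs_t())   # a few per cent, identical for any shared mu and sigma
```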
An alternative, motivated by the _M_-open world assumption that the model is misspecified for the dgp (Bernardo and Smith, 2001), is to use general Bayes (Bissiri et al., 2016) to update beliefs about model parameters minimising a divergence different from the kld (Jewson et al., 2018). A particularly convenient alternative is the \(\beta\)-divergence (\(\beta\)D) which has previously been motivated as providing inference that is robust to outliers (Basu et al., 1998; Ghosh and Basu, 2016) and desirable from a decision making point of view (Jewson et al., 2018). In this paper, we extend the motivation for using \(\beta\)D-Bayes further, showing that its posterior predictive inferences are provably stable across an interpretable equivalence class of likelihood models and DGPs. We treat stability to \(f\) and \(y\) separately, first showing that \(\beta\)D-Bayes inference is stable to the choice of likelihood model for a given dgp, and then that inferences for a fixed model are stable to small perturbations to the dgp. Importantly, the stability afforded to \(\beta\)D-Bayes inference does not compromise its ability to extract useful inferences about the dgp. \(\beta\)D-Bayes has the appealing property that if the model is correctly specified for the dgp, then the data generating parameter will be learned, and there exists a growing literature that advocates using the \(\beta\)D for applied analyses (e.g. Knoblauch et al., 2018, 2022; Girardi et al., 2020; Sugasawa, 2020). This is further demonstrated in our experiments. For example, Figure 1 shows that as well as producing similar inference for the Gaussian and Student's-\(t\) likelihood models, the \(\beta\)D-Bayes inferences both capture the modal part of the observed data. Further, inferences must also be stable to the selection of the \(\beta\)D and its hyperparameter. We discuss methods to select \(\beta\) and demonstrate reasonable insensitivity to its selection. Results regarding the stability of (1) have largely focused on the parameter prior. Gustafson and Wasserman (1995) proved that the total variation divergence (tvd) between two posteriors resulting from priors in linear and geometric \(\epsilon\)-contamination neighbourhoods diverges as \(\epsilon\to 0\) at a rate exponential in the dimension of the parameter space. However, Smith and Rigat (2012) showed that the tvd between two posteriors converges to 0 provided the two priors under consideration are close as measured by the local De Robertis distance. Our first results provide analogies to these for the specification of the likelihood model. Gilboa and Schmeidler (1989); Whittle and Whittle (1990); Hansen and Sargent (2001a,b); Watson and Holmes (2016) consider the stability of optimal decision making and consider minimax decisions across neighbourhoods of the posterior. However, they do not consider what perturbations of the inputs of (1) would leave a dm in such a neighbourhood _a posteriori_. Most similar to our work is Miller and Dunson (2018), which considers Bayesian updating conditioning on data arriving within a kld ball of the observed data, and results concerning 'global bias-robustness' to contaminating observations, for example of the kernel-Stein discrepancy posteriors of Matsubara et al. (2021). We consider stability to an interpretable neighbourhood of the data which as a special case contains the globally bias-robust contamination. 
Bayes linear methods (Goldstein, 1999), which concern only the sub-collection of probabilities and expectations the dm considers themselves to be able to specify (Goldstein et al., 2006), is an alternative to (1) designed to be stable to interpolating approximations. We prefer, however, to adopt the general Bayesian paradigm in this analysis. Firstly, the general Bayesian paradigm includes traditional Bayesian updating as a special case and produces familiar posterior and predictive distributions. Secondly, linear Bayes requires the elicitation of expectations and variances of unbounded quantities which are themselves unstable to small perturbations (see discussion on Goldstein and Wooff, 1994). Lastly, rather than demanding stability across an equivalence class of models, the dm could let the data guide any decision the dm themselves is not able to make using methods such as penalised likelihood approaches (e.g. Akaike, 1973; Schwarz et al., 1978), Bayes' factors (Kass and Raftery, 1995) or Bayesian model averaging (Hoeting et al., 1999). In particular, Williamson and Goldstein (2015) propose methods for combining posterior beliefs across an equivalence class of analyses. However, such methods can be computationally burdensome to compute across even a finite class of models (e.g. Rossell et al., 2021) and the dm could reasonably only consider a handful of the models that might fit with their beliefs, a subset of the full equivalence class. The rest of the paper is organised as follows: Section 2 presents our inference paradigm, introducing general Bayesian updating (Bissiri et al., 2016), robustified inference with the \(\beta\)D, and defining how we will investigate posterior predictive stability. Section 3 presents our theoretical contributions surrounding the stability of Bayesian analyses to the choice of the likelihood function and Section 4 presents our results on the stability of inference to perturbations of the dgp. Proofs of all of our results are deferred to the supplementary material. Section 5 discusses methods to set the \(\beta\) hyperparameter and Section 6 illustrates the stability of the \(\beta\)D-Bayes inference in continuous and binary regression examples from biostatistics and a mixture modelling astrophysics example, where stability is shown not to compromise the model's ability to learn about the dgp. Code to reproduce all of the examples in this paper can be found at [https://github.com/jejewson/stabilityGBI](https://github.com/jejewson/stabilityGBI). ## 2 A paradigm for inference and stability ### General Bayesian Inference Under the assumption that the model used for inference \(f(y;\theta)\) does not exactly capture the dm's beliefs, we find it appealing to adopt the general Bayesian perspective of inference. Bissiri et al. (2016) showed that the posterior update \[\pi^{\ell}(\theta|y)=\frac{\pi(\theta)\exp\left(-w\sum_{i=1}^{n}\ell(\theta,y _{i})\right)}{\int\pi(\theta)\exp\left(-w\sum_{i=1}^{n}\ell(\theta,y_{i}) \right)d\theta}. \tag{2}\] provides a coherent means to update prior beliefs about parameter \(\theta_{g}^{\ell}:=\arg\min_{\theta\in\Theta}\int\ell(\theta,z)g(z)dz\) after observing data \(y\sim g(\cdot)\) without requiring that \(\theta\) index a model for the data generating density \(g(\cdot)\). The parameter \(w>0\) in (2) calibrates the loss with the prior to accounts for the fact that \(\exp(-\ell(\theta,y_{i}))\) is no longer constrained to integrate to 1, as was the likelihood in (1). Lyddon et al. 
(2018) set \(w\) to match the asymptotic information in the general Bayesian posterior to that of a sample from the 'loss-likelihood bootstrap', while Giummole et al. (2019), building on the work of Ribatet et al. (2012), directly calibrate the curvature of the posterior to match that of the frequentist loss minimiser. We focus on a subset of loss functions, known as scoring rules, that depend upon the dm's likelihood model, continuing to allow the dm to use this to encode their beliefs about the dgp. Under the log-score, \(\ell(\theta,y)=-\log f(y;\theta)\) (2) collapses to (1). The parameter \(\theta_{g}^{\ell}\) associated with the log-score is the minimiser of the kld between the distribution of the sample and the model (Berk et al., 1966). We therefore call updating using (1) kld-Bayes. However, it is well known that minimising the log-score puts large importance on correctly capturing the tails of the data (Bernardo and Smith, 2001) and can have negative consequences for posterior decision making (Jewson et al., 2018). This is demonstrated in the bottom left of Figure 1. ### \(\beta\)D-Bayes An alternative to the log-score is the \(\beta\)-divergence loss (Basu et al., 1998) \[\ell_{(\beta)}(y,f(\cdot;\theta))=-\frac{1}{\beta-1}f(y;\theta)^{\beta-1}+ \frac{1}{\beta}\int f(z;\theta)^{\beta}dz, \tag{3}\] so called as \(\arg\min_{\theta}\mathbb{E}_{y\sim g}\left[\ell_{(\beta)}(y,f(\cdot;\theta)) \right]=\arg\min_{\theta}D_{B}^{(\beta)}(g||f(\cdot;\theta))\) where \(D_{B}^{(\beta)}(g||f)\) is the \(\beta\)-divergence defined in Section A.1. We refer to updating using (2) and loss (3) as \(\beta\)D-Bayes. This was first used by Ghosh and Basu (2016) to produce a robustified Bayesian posterior (\(\beta\)D-Bayes) and has since been deployed for a variety of examples (e.g. Knoblauch et al., 2018, 2022; Girardi et al., 2020; Sugasawa, 2020). The implicit robustness to outliers exhibited by the \(\beta\)D-Bayes is illustrated in the bottom right of Figure 1, where, unlike the kld-Bayes, the \(\beta\)D-Bayes continues to captures the distribution of the majority of observations under outlier contamination. Jewson et al. (2018) argued that updating in a manner that is automatically robust to outliers, removes the burden on the dm to specify their beliefs in a way that is robust to outliers is removed. The results of the coming sections provide a formal rationale for adopting this methodology to provide stability to the canonical model choice and departures from the dgp. While Bayesian inference has been proposed minimising several alternative divergences including the Hellinger divergence, \(\alpha\)-divergence, and the tvd(e.g. Hooker and Vidyashankar, 2014; Jewson et al., 2018; Knoblauch and Vomfell, 2020) such methods require a non-parametric density estimate, prohibiting their use for high-dimensional problems with continuous data. We restrict our attention to local methods not requiring such an estimate and in particular to the \(\beta\)D and kld. The \(\gamma\)-divergence (Fujisawa and Eguchi, 2008) has also been shown to produce robust inference without requiring a non-parametric density estimate (Hung et al., 2018; Knoblauch et al., 2022) and in general behaves very similarly, see Section B.1.3. ### Posterior Predictive Stability Our results will investigate the stability of general Bayesian posterior predictive distributions \[m_{f}^{D}(y_{new}|y)=\int f(y_{new};\theta)\pi^{D}(\theta|y)d\theta. 
\tag{4}\] for exchangeable observation \(y_{new}\in\mathcal{Y}\) to the specification of the model \(f\), and the dgp\(g\). As a result, we focus on the stability of the posterior distribution for observables \(y\in\mathcal{Y}\) to perturbations of the prior for observables, \(f\), and generating distributions for these observables \(g\). From a decision-making perspective, the posterior predictive is often integrated over to calculate expected utilities, and therefore stable posterior predictive distributions correspond to stable decision making. We consider two metrics for stability, the first is the divergence between posterior predictives, which if small, indicates that a dm with either distribution would make similar decisions. The second measures the difference between the posterior predictives' divergence to the dgp. Predictives that are close to the dgp will make close to optimal decisions and therefore, two predictives that are equally close will make similarly good decisions Predictive stability is also a more reasonable requirement than say posterior stability. The parameter posteriors for two distinct models/dgp will generally converge in different places (e.g. Smith, 2007). However, divergent parameter posteriors do not necessarily imply divergent posterior predictives, as we show. Further, focusing on observables allows us to consider interesting cases of neighbouring models with nested parameter spaces (see Section 6.2) ## 3 Stability to the specification of the likelihood function In this section we consider two potential likelihood models for the data. These could correspond to the dm's true and functional beliefs, or two, equally preferable candidates for the later. In both cases, the dm would not wish their posterior inferences to diverge if one candidate was used in place of the other. ### An interpretable neighbourhood of likelihood models We first consider the stability of inference to the specification of the dm's likelihood model. Likelihood models \(f\) and \(h\) are considered to be in the same equivalence class of likelihood models for \(y\in\mathcal{Y}\) if they satisfy Definition 1 **Definition 1** (tvd neighbourhood of likelihood models).: Likelihood models \(f(\cdot;\theta)\) and \(h(\cdot;\eta)\) for observable \(y\in\mathcal{Y}\) are in the neighbourhood \(\mathcal{N}_{\epsilon}^{\textsc{tvd}}\) of size \(\epsilon\) if \[\forall\theta\in\Theta,\exists\eta\in\mathcal{A}\text{ s.t. }\textsc{tvd}(f(\cdot;\theta),h(\cdot;\eta))\leq \epsilon\quad\text{and}\quad\forall\eta\in\mathcal{A},\exists\theta\in\Theta \quad\text{s.t.}\quad\textsc{tvd}(f(\cdot;\theta),h(\cdot;\eta))\leq\epsilon\] Neighbourhood \(\mathcal{N}_{\epsilon}^{\textsc{tvd}}\) demands the existence of functions \(I_{f}:\Theta\mapsto\mathcal{A}\) and \(I_{h}:\mathcal{A}\mapsto\Theta\) such that for all \(\theta\), \(\textsc{tvd}(f(\cdot;\theta),h(\cdot;I_{f}(\theta))\) is small and for all \(\eta\), \(\textsc{tvd}(h(\cdot;\eta),f(\cdot;I_{h}(\eta))\) is also small. The symmetry of Definition 1 allows \(\Theta\) and \(\mathcal{A}\) to have different dimensions. For two likelihoods to be close in terms of tvd requires that the greatest difference in any of the probability statements made by the two likelihoods be small on the natural scale. 
\[\textsc{tvd}(f(\cdot;\theta),h(\cdot;\theta)):=\sup_{Y\in\mathcal{Y}}|f(Y; \theta)-h(Y;\theta)|=\frac{1}{2}\int|f(y;\theta)-h(y;\theta)|\,dy \tag{5}\] Additionally, tvd neighbourhoods contain \(\epsilon\)-contaminations considered in the context of prior stability by Gustafson and Wasserman (1995) and often used as outlier models (e.g. Aitkin and Wilson, 1980). As a result, it is reasonable for a dm to be able to elicit their beliefs within a \(\mathcal{N}_{\epsilon}^{\textsc{tvd}}\) neighbourhood of their chosen model, and such a neighbourhood contains standard perturbations for sensitivity analysis. The weak conditions required for the results of the following sections are formally stated in Section A.3. Briefly, Condition A.1 requires the boundedness of the essential supremum of models \(f\) and \(h\) and the dgp\(g\), and Condition A.2 requires sufficient concentration of posterior \(\pi_{f}^{D}(\theta|y)\) around \(\theta_{f}^{D}\). For clarity of argument, we proceed under the assumption that prior \(\pi^{D}(\theta)\) and \(\pi^{D}(\eta)\) are fixed. ### The stability of the \(\beta\)D-Bayes In the first of our main results, Theorem 1 bounds the _a posteriori_ divergence between the predictive distributions resulting from likelihood models \(f\) and \(h\) as a function of the size of the _a priori_ neighbourhood \(\mathcal{N}_{\epsilon}^{\textsc{tvd}}\). **Theorem 1** (Stability of the posterior predictive distributions of two models under the \(\beta\)D-Bayes inference).: Given \(1<\beta\leq 2\) and two likelihood models \(\{f(\cdot;\theta):\theta\in\Theta\}\) and \(\{h(\cdot;\eta):\eta\in\mathcal{A}\}\) such that \(f,h\in\mathcal{N}_{\epsilon}^{\textsc{tvd}}\) for \(\epsilon>0\). Then provided there exists \(M<\infty\) such that Condition A.1 holds, and \(y\), \(\pi^{(\beta)}(\theta)\) and \(\pi^{(\beta)}(\eta)\) satisfy Condition A.2 for \(D=\mathit{D}_{\mathit{B}}^{(\beta)}\) \[\mathit{D}_{\mathit{B}}^{(\beta)}(m_{f}^{(\beta)}(\cdot|y)||m_{h} ^{(\beta)}(\cdot|y)) \leq\frac{M^{\beta-1}(3\beta-2)}{\beta(\beta-1)}\epsilon+\frac{1} {c_{1}}+2\frac{M^{\beta-1}}{\beta-1}\int\textsc{tvd}(g,f(\cdot;\theta))\pi_{f} ^{(\beta)}(\theta|y)d\theta\] \[\mathit{D}_{\mathit{B}}^{(\beta)}(m_{h}^{(\beta)}(\cdot|y)||m_{f }^{(\beta)}(\cdot|y)) \leq\frac{M^{\beta-1}(3\beta-2)}{\beta(\beta-1)}\epsilon+\frac{1} {c_{2}}+2\frac{M^{\beta-1}}{\beta-1}\int\textsc{tvd}(g,h(\cdot;\eta))\pi_{h} ^{(\beta)}(\eta|y)d\eta,\] where \(c_{1}\) and \(c_{2}\) are defined in Condition A.2. Further, Theorem 2 bounds the absolute distance between the \(\beta\)D of the posterior predictive distributions produced from two likelihood models within \(\mathcal{N}_{\epsilon}^{\textsc{tvd}}\) from the dgp. **Theorem 2** (The stability in the posterior predictive approximation of two models to the dgp of \(\beta\)D-Bayes inference).: Given \(1<\beta\leq 2\) and two likelihood models \(\{f(\cdot;\theta):\theta\in\Theta\}\) and \(\{h(\cdot;\eta):\eta\in\mathcal{A}\}\) such that \(f,h\in\mathcal{N}_{\epsilon}^{\textsc{tvd}}\) for \(\epsilon>0\). 
Then provided there exists \(M<\infty\) such that Condition A.1 holds and \(y\), \(\pi^{(\beta)}(\theta)\) and \(\pi^{(\beta)}(\eta)\) satisfy Condition A.2 for \(D=\mathit{D}_{\mathit{B}}^{(\beta)}\) \[|\mathit{D}_{\mathit{B}}^{(\beta)}(g||m_{f}^{(\beta)}(\cdot|y))-\mathit{D}_{ \mathit{B}}^{(\beta)}(g||m_{h}^{(\beta)}(\cdot|y))|\leq\frac{M^{\beta-1}(3\beta -2)}{\beta(\beta-1)}\epsilon+\frac{1}{c}+C^{(\beta)}(f,h,y),\] where \(c=\min\{c_{1},c_{2}\}\) as defined in Condition A.2 and \[C^{(\beta)}(f,h,y): =\max\left\{\int\mathit{D}_{\mathit{B}}^{(\beta)}(g||f(\cdot; \theta))\pi_{f}^{(\beta)}(\theta|y)d\theta-\mathit{D}_{\mathit{B}}^{(\beta)}( g||m_{f}^{(\beta)}(\cdot|y)),\right.\] \[\left.\int\mathit{D}_{\mathit{B}}^{(\beta)}(g||h(\cdot;\eta))\pi_{ h}^{(\beta)}(\eta|y)d\eta-\mathit{D}_{\mathit{B}}^{(\beta)}(g||m_{h}^{(\beta)}( \cdot|y))\right\}.\] The value \(M\) present in both Theorems 1 and 2 is often easy to bound, for example by selecting a minimum value of the scale of Gaussian or Student's-\(t\) likelihood models, and we expect \(c_{1},c_{2}\to\infty\) as \(n\to\infty\) (see Section A.3). The final term in Theorem 1 involves the tvd between the models under consideration and the unknown dgp. While it is difficult to say anything formal about this, Lemma A.6 shows that the \(\beta\)D can be bounded above by the tvd, and therefore any values of parameters \(\theta\) and \(\eta\) that are close to \(g\) in tvd should have high posterior mass under the \(\beta\)D posterior. On the other hand, \(C^{(\beta)}(f,h,y)\) in Theorem 2, is is related to the concentration of the posteriors \(\pi_{f}^{(\beta)}(\theta|y)\) and \(\pi_{h}^{(\beta)}(\eta|y)\) with Jensen's inequality and the convexity of the \(\beta\)D guaranteeing that \(C^{(\beta)}(f,h,y)\geq 0\). Under suitable regularity conditions as \(n\to\infty\) and the posterior collapses to a point mass (Chernozhukov and Hong, 2003; Lyddon et al., 2018), then this term converges to \(0\). Importantly, Theorem 2 does not depend on how well specified the two likelihood models are for the dgp. ### The stability of the KLD-Bayes Figure 1 demonstrates that the stability afforded by the \(\beta\)D-Bayes is not afforded by the kld-Bayes. The kld is recovered from the \(\beta\)D as \(\beta\to 1\). However, in such a scenario, the bounds proven in the previous sections tend to infinity. Instead, Lemma 1 provides an analogous stability result for traditional Bayesian updating. **Lemma 1** (The stability in the posterior predictive approximation of the dgp of kld-Bayes inference).: For any two two likelihood models \(\{f(\cdot;\theta):\theta\in\Theta\}\) and \(\{h(\cdot;\eta):\eta\in\mathcal{A}\}\), and \(y\), \(\pi^{\text{\tiny{KLD}}}(\theta)\) and \(\pi^{\text{\tiny{KLD}}}(\eta)\) satisfying Condition A.2 for \(D=\) kld, we have that \[|\text{\tiny{KLD}}(g||m_{f}^{\text{\tiny{KLD}}}(\cdot|y))-\text{\tiny{KLD}}(g|| m_{h}^{\text{\tiny{KLD}}}(\cdot|y))|\leq C^{\text{\tiny{KLD}}}(f,h,y)+\frac{1}{c}+T (f,h,y),\] where \(c:=\min\{c_{1},c_{2}\}\) as defined in Condition A.2 and \[T(f,h,y): =\max\left\{\int\int g(\cdot)\log\frac{f(\cdot;\theta)}{h(\cdot;I _{f}(\theta))}d\mu\pi_{f}^{\text{\tiny{KLD}}}(\theta|y)d\theta,\right. 
\tag{6}\] \[\qquad\qquad\left.\int\int g(\cdot)\log\frac{h(\cdot;\eta)}{f( \cdot;I_{h}(\eta))}d\mu\pi_{h}^{\text{\tiny{KLD}}}(\eta|y)d\eta\right\}\] \[C^{\text{\tiny{KLD}}}(f,h,y): =\max\left\{\int\text{\tiny{KLD}}(g||f(\cdot;\theta))\pi_{f}^{ \text{\tiny{KLD}}}(\theta|y)d\theta-\text{\tiny{KLD}}(g||m_{f}^{\text{\tiny{ KLD}}}(\cdot|y)),\right.\] \[\qquad\qquad\left.\int\text{\tiny{KLD}}(g||h(\cdot;\eta))\pi_{h}^ {\text{\tiny{KLD}}}(\eta|y)d\eta-\text{\tiny{KLD}}(g||m_{h}^{\text{\tiny{KLD} }}(\cdot|y))\right\}.\] We investigate \(T(f,h,y)\), the term not analagous to any of those from Theorem 2. Without loss of generality assume that the second term in (6) is the largest. Then, the reverse Pinsker's inequality (Sason and Verdu, 2016; Binette, 2019) provides \[\int g(\cdot)\log\frac{h(\cdot;\eta)}{f(\cdot;I_{h}(\eta))}d\mu =\int\frac{g(\cdot)}{h(\cdot;\eta)}h(\cdot;\eta)\log\frac{h(\cdot; \eta)}{f(\cdot;I_{h}(\eta))}d\mu \leq M_{h}^{*}\text{\tiny{KLD}}(h(\cdot;\eta)||f(\cdot;I_{h}(\eta)))\] \[\leq M_{h}^{*}K_{h,f}\text{\tiny{TVD}}(h(\cdot;\eta),f(\cdot;I_{ h}(\eta)))\] where \(M_{h}^{*}=\operatorname*{ess\,sup}\frac{g}{h(\cdot;\theta_{h})}\) and \(K_{h,f}=\left(\frac{\log(a)}{a-1}+\frac{\log(b)}{1-b}\right)\) with \(a=\operatorname*{ess\,inf}\frac{dF}{dH}\) and \(b=\operatorname*{ess\,sup}\frac{dF}{dH}\). As a result, a tvd ball around the likelihood model is not sufficient for posterior stability when using Bayes' rule updating. In fact, posterior stability can only be guaranteed according to Lemma 1 if \[|\log(h(\cdot;\eta))-\log(f(\cdot;I_{h}(\eta)))| \tag{7}\] is small in regions where \(g\) has density. Without knowledge of \(g\), this requires that (7) be small everywhere, requiring the dm to be confident in the accuracy of their probability statements on the log-scale rather than on the natural scale as was the case for \(\mathcal{N}_{\epsilon}^{\textsc{tvd}}\). Logarithms act to inflate the magnitude of small numbers and thus ensuring that \(|\log(h(\cdot;\eta))-\log(f(\cdot;I_{h}(\eta)))|\) is small requires that \(f\) and \(h\) are increasingly similar as their values decrease. This requires the dm to be more and more confident of the accuracy of their probability specifications as they get further and further into the tails, something that is known to already be very difficult for low dimensional problems (Winkler and Murphy, 1968; O'Hagan et al., 2006), and becomes increasingly difficult as the dimension of the observation space increases. ## 4 Stability to the DGP ### A reasonable neighbourhood of DGP perturbations Our second series of results concern the stability of inferences from a single model \(\{f(\cdot;\theta);\theta\in\Theta\}\) to perturbations of the dgp for \(y\in\mathcal{Y}\). We consider updating on datasets \(y_{1}:=(y_{1},\ldots,y_{n_{1}})\sim g_{1}\) or \(y_{2}:=(y_{1},\ldots,y_{n_{2}})\sim g_{2}\) with \(n_{1},n_{2}>0\) and \(g_{1}\) and \(g_{2}\) satisfying Definition 2 **Definition 2** (tvd Neighbourhood of data generating processes).: Data generating processes \(g_{1}\) and \(g_{2}\) for observable \(y\in\mathcal{Y}\) are in the neighbourhood \(\mathcal{G}_{\epsilon}^{\textsc{tvd}}\) of size \(\epsilon\) if \(\textsc{tvd}(g_{1},g_{2})\leq\epsilon\) The tvd provides a relevant and reasonable way to describe perturbations of the dgp. It contains \(\epsilon\)-contamination neighbourhoods as considered by Matsubara et al. (2021) in the context of 'global bias-robustness' and also in Figure 1. 
It demands that the data sets were generated under mechanisms that were absolutely close on the natural scale, rather than the log-score considered in the kld neighbourhoods on Miller and Dunson (2018). Conceptually, it is convenient to think about datasets such that \(n_{1}=n_{2}\) but this is not necessary. The conditions for the results of the next sections are similar to those required in Section 3 and are stated in full in Section A.3. ### The stability of the \(\beta\)D Theorem 3 bounds the \(\beta\)D between the posterior predictive distributions resulting from model \(f\) and data from two DGPs in the \(\mathcal{G}_{\epsilon}^{\textsc{TVD}}\) neighbourhood. **Theorem 3** (The stability of the posterior predictive distribution under two DGPs of the \(\beta\)D-Bayes inference).: Given \(1<\beta\leq 2\) and likelihood model \(\{f(\cdot;\theta):\theta\in\Theta\}\) and two data sets \(y_{1}:=(y_{1},\ldots,y_{n_{1}})\sim g_{1}\) and \(y_{2}:=(y_{1},\ldots,y_{n_{2}})\sim g_{2}\) for \(n_{1},n_{2}>0\) with \(\{g_{1},g_{2}\}\in\mathcal{G}_{\epsilon}^{\textsc{TVD}}\). Then provided there exists \(M<\infty\) such that Condition A.3 hold, Condition A.4 holds for \(D=\mathit{D}_{B}^{(\beta)}\), \(y_{1}\), \(y_{2}\) and \(\pi^{(\beta)}(\theta)\) then, \[\mathit{D}_{B}^{(\beta)}(m_{f}^{(\beta)}(\cdot|y_{1})||m_{f}^{( \beta)}(\cdot|y_{2}))\leq 2\frac{M^{\beta-1}}{\beta-1}\epsilon+\frac{1}{c_{\mathcal{S}^{(1 )}}}+2\frac{M^{\beta-1}}{\beta-1}\int\textsc{TVD}(g_{1},f(\cdot;\theta_{1})) \pi_{f}^{(\beta)}(\theta_{1}|y_{1})d\theta_{1}.\] \[\mathit{D}_{B}^{(\beta)}(m_{f}^{(\beta)}(\cdot|y_{2})||m_{f}^{( \beta)}(\cdot|y_{1})))\leq 2\frac{M^{\beta-1}}{\beta-1}\epsilon+\frac{1}{c_{\mathcal{S}^{(2 )}}}+2\frac{M^{\beta-1}}{\beta-1}\int\textsc{TVD}(g_{2},f(\cdot;\theta_{2})) \pi_{f}^{(\beta)}(\theta_{2}|y_{2})d\theta_{2}.\] where \(c_{\mathcal{S}^{(1)}}\) and \(c_{\mathcal{S}^{(2)}}\) are defined in Condition A.4 Further, Theorem 4 bounds the difference in the \(\beta\)D from the DGP of the \(\beta\)D-Bayes posterior predictive distributions resulting from data from the two DGPs. **Theorem 4** (The stability in the posterior predictive approximation of two DGPs under the same model of \(\beta\)D-Bayes inference).: Given \(1<\beta\leq 2\) and likelihood model \(\{f(\cdot;\theta):\theta\in\Theta\}\) and two data sets \(y_{1}:=(y_{1},\ldots,y_{n_{1}})\sim g_{1}\) and \(y_{2}:=(y_{1},\ldots,y_{n_{2}})\sim g_{2}\) for \(n_{1},n_{2}>0\) with \(\{g_{1},g_{2}\}\in\mathcal{G}_{\epsilon}^{\textsc{TVD}}\). Then provided there exists \(M<\infty\) such that Condition A.3 holds, and Condition A.4 holds for \(D=\mathit{D}_{B}^{(\beta)}\), \(y_{1}\), \(y_{2}\) and \(\pi^{(\beta)}(\theta)\) then, \[|\mathit{D}_{B}^{(\beta)}(g_{1}||m_{f}^{(\beta)}(\cdot|y_{1}))- \mathit{D}_{B}^{(\beta)}(g_{2}||m_{f}^{(\beta)}(\cdot|y_{2}))|\leq\frac{M^{ \beta-1}(\beta+2)}{\beta(\beta-1)}\epsilon+\frac{1}{c}+C^{(\beta)}(f,y_{1},y_ {2}),\] where \(c:=\min\{c_{\mathcal{S}^{(1)}},c_{\mathcal{S}^{(2)}}\}\) defined in Condition A.4 and \[C^{(\beta)}(f,y_{1},y_{2}): =\max\left\{\int\mathit{D}_{B}^{(\beta)}(g_{1}||f(\cdot;\theta_ {1}))\pi^{(\beta)}(\theta_{1}|y_{1})d\theta_{1}-\mathit{D}_{B}^{(\beta)}(g_{1 }||m_{f}^{(\beta)}(\cdot|y_{1})),\right.\] \[\left.\int\mathit{D}_{B}^{(\beta)}(g_{2}||f(\cdot;\theta_{2}))\pi ^{(\beta)}(\theta_{2}|y_{2})d\theta_{2}-\mathit{D}_{B}^{(\beta)}(g_{2}||m_{f}^ {(\beta)}(\cdot|y_{2}))\right\}\] Theorems 3 and 4 are the analogous result to Theorems 1 and 2 respectively. 
The value \(M\) is still easy to bound here and the concentration terms \(\frac{1}{c_{\mathcal{S}^{(j)}}}\) are expected to shrink to \(0\) as \(n\to\infty\). For Theorem 3, we invoke Lemma A.6 and argue that the \(\beta\)D posterior will place density on parameter values of model \(f\) that are close to \(g\) in tvd. The bound of Theorem 4 depends on \(C^{(\beta)}(f,y_{1},y_{2})\), which under mild regularity conditions goes to \(0\) as \(n\to\infty\), demonstrating that the \(\beta\)D-Bayes is stable to tvd perturbations of the data, independently of how well the model approximates either of the dGPs. ### The stability of the KLD-Bayes Figure 1 showed that updating using (1) is not stable to perturbations of the DGP. The data considered is within a \(\mathcal{G}_{0.1}^{\textsc{\tiny TVD}}\) neighbourhood of data generated from \(\mathcal{N}(0,1)\) and unlike the \(\beta\)D-Bayes, the estimated posterior predictive is vastly different to what would have been estimated under the uncontaminated dgp. Lemma 2 investigates perturbations of the dgp that traditional Bayesian inference is stable too. **Lemma 2** (The stability in the posterior predictive approximation of two dGPs under the same model of kld-Bayes inference).: For likelihood model \(\{f(\cdot;\theta):\theta\in\Theta\}\) and data sets \(y_{1}:=(y_{1},\ldots,y_{n_{1}})\sim g_{1}\) and \(y_{2}:=(y_{1},\ldots,y_{n_{2}})\sim g_{2}\) for \(n_{1},n_{2}>0\), given Condition A.4 holds for \(D=\textsc{kld}\), \(y_{1}\), \(y_{2}\) and \(\pi^{\textsc{kld}}(\theta)\), we have that \[|\textsc{kld}(g||m_{f}^{\textsc{\tiny kld}}(\cdot|y))-\textsc{kld}(g||m_{h}^{ \textsc{\tiny kld}}(\cdot|y))|\leq C^{\textsc{\tiny kld}}(f,y_{1},y_{2})+\frac {1}{c}+T_{1}(g_{1},g_{2})+T_{2}(f,y_{1},y_{2}),\] where \(c:=\min\{c_{\mathcal{S}^{(1)}},c_{\mathcal{S}^{(2)}}\}\) as defined in Condition A.4 and \[T_{1}(g_{1},g_{2}): =\max\left\{\int g_{2}\log g_{2}-g_{1}\log g_{1}d\mu,\int g_{1} \log g_{1}-g_{2}\log g_{2}d\mu\right\}\] \[T_{2}(f,y_{1},y_{2}): =\max\left\{\int\int(g_{1}-g_{2})\log f(\cdot;\theta_{1})d\mu\pi ^{\textsc{\tiny kld}}(\theta_{1}|y_{1})d\theta_{1},\right.\] \[\qquad\qquad\qquad\left.\int\int(g_{2}-g_{1})\log f(\cdot;\theta_ {2})d\mu\pi^{\textsc{\tiny kld}}(\theta_{2}|y_{2})d\theta_{2}\right\}\] \[C^{\textsc{\tiny kld}}(f,y_{1},y_{2}): =\max\left\{\int\textsc{kld}(g_{1}||f(\cdot;\theta_{1}))\pi^{ \textsc{\tiny kld}}(\theta_{1}|y_{1})d\theta_{1}-\textsc{kld}(g_{1}||m_{f}^{ \textsc{\tiny kld}}(\cdot|y_{1})),\right.\] \[\qquad\qquad\qquad\left.\int\textsc{kld}(g_{2}||f(\cdot;\theta_{2 }))\pi^{\textsc{\tiny kld}}(\theta_{2}|y_{2})d\theta_{2}-\textsc{kld}(g_{2}||m _{f}^{\textsc{\tiny kld}}(\cdot|y_{2}))\right\}\] Lemma 2 shows that stability of the kld approximation of dgp by model \(f\) to perturbations of the dgp requires that \(T_{1}(g_{1},g_{2})\) and \(T_{2}(f,y_{1},y_{2})\) are small. Small \(T_{1}(g_{1},g_{2})\) requires \(g_{1}\) and \(g_{2}\) to have similar entropy, which is not necessarily guaranteed by dGPs according to Definition 2. Alternatively, if \(|\log f(\cdot;\theta)|\) is bounded then \(T_{2}(f,y_{1},y_{2})\) can be bounded above by tvd\((g_{1},g_{2})\). However, boundedness of the log-likelihood is unlikely, as \(f(y;\theta)\to 0\), \(|\log f(y;\theta)|\rightarrow\infty\). Therefore, \(T_{2}(f,y_{1},y_{2})\) being small requires \(g_{1}\) and \(g_{2}\) to be increasingly close in the tails of the fitted models, prohibiting, for example, outlier contaminations such as in Figure 1. 
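To illustrate the contrast drawn above between closeness on the natural scale and closeness on the log scale, the following minimal sketch (Python with numpy/scipy; the grid choices are illustrative) compares a Gaussian and a Student's-\(t\) density with matched quartiles of the kind used in Section 6.1: their tvd is small, but the gap between their log-densities grows without bound in the tails, which is exactly what drives the instability of the log-score and hence of kld-Bayes updating.

```python
# Minimal sketch (illustrative grid): two densities that are close in tvd on
# the natural scale can differ arbitrarily on the log scale in their tails.
import numpy as np
from scipy import stats

y = np.linspace(-10, 10, 200_001)
dy = y[1] - y[0]
f = stats.norm.pdf(y, loc=0, scale=np.sqrt(1.16))   # Gaussian with sigma_adj^2 = 1.16
h = stats.t.pdf(y, df=5, loc=0, scale=1)            # Student's-t with nu = 5

tvd = 0.5 * np.sum(np.abs(f - h)) * dy              # tvd = 0.5 * int |f - h| dy
log_gap = np.abs(np.log(f) - np.log(h))
mid = len(y) // 2                                    # index of y = 0
print(f"tvd(f, h)             = {tvd:.3f}")          # small on the natural scale
print(f"|log f - log h| at 0  = {log_gap[mid]:.3f}")  # also small near the mode
print(f"|log f - log h| at 10 = {log_gap[-1]:.1f}")   # large in the tail
```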
## 5 Setting \(\beta\) The only additional specification required from the dm when implementing the \(\beta\)D-Bayes compared with the kld-Bayes is that they select the value of \(\beta\). This hyperparameter regulates the trade-off between robustness and efficiency (e.g. Basu et al., 1998). Minimising the kld (\(\beta=1\)) provides the most efficient inference but is very sensitive to outliers. Increasing \(\beta\) away from 1 gains robustness to outliers at a cost to efficiency. The bounds of the previous theorems all depend on \(\beta\) and we can therefore additionally interpret \(\beta\) as a sort of meta prior for the dm's confidence in their elicited model or data collection. The less confident they are, the greater \(\beta\) will need to be to prevent non-negligible _a posteriori_ divergence. Eliciting \(\beta\) as such requires the dm to reflect on the value of \(\epsilon\) associated with their beliefs or the quality of the data. For the neighbourhoods of Definition 1, this can be obtained by considering for a given set of parameters what the largest possible error in any of the probability statements could be, or for Definition 2 by considering the minimal proportion of a population that they believe is consistent with the dgp. Our results are also informative about when the value of \(\beta\) might be too large. The dm should want their \(\beta\)D-Bayes inferences be stable because \(\epsilon\) is small, and not because the terms involving \(\beta\) that multiply \(\epsilon\) in the theorems in Sections 3 and 4 are small. Alternatively, there is increasing interest in data-driven methods to learn \(\beta\). Warwick and Jones (2005); Ghosh and Basu (2015); Basak et al. (2021) consider procedures to estimate \(\beta\) to minimise the mean squared error (MSE) of estimated model parameters, Toma and Broniatowski (2011); Kang and Lee (2014) estimate \(\beta\) to minimise the maximum perturbation of the parameter estimates resulting from replacing one observation by the population estimated mean, and Jewson and Rossell (2022); Yonekura and Sugasawa (2021) estimate \(\beta\) to minimise the Fisher's divergence to the dgp. Finally, \(\beta\)D-Bayes inference appears not to be overly sensitive to the exact value of \(\beta\). Figure 2 demonstrates that for the example introduced in Section 1, inference for the Gaussian and Student's-\(t\) models is almost identical for values of \(\beta\geq 1.3\). Section B.1.2 provides further demonstration of this. ## 6 Experiments ### Gaussian and Student's-\(t\) likelihood We revisit the Gaussian and Student's-\(t\) example briefly introduced in Section 1. The likelihood models considered here are \[f_{\sigma^{2}_{adj}}(y;\theta):=\mathcal{N}\left(y;\mu,\sigma^{2}\times\sigma^{ 2}_{adj}\right)\text{ and }h_{\nu}(y;\eta):=\text{Student's}-t_{\nu}\left(y;\mu,\sigma^{2}\right). \tag{8}\] Hyperparameters, \(\nu=5\) and \(\sigma^{2}_{adj}=1.16\) are fixed to match the quartiles of the two distributions for all \(\mu\) and \(\sigma^{2}\). These were inspired by O'Hagan (2012), who argued that for absolutely continuous probability distributions, it is only reasonable to ask an expert to make a judgement about the median and the quartiles of a distribution along with maybe a few specially selected features. This is justified as adequate as any two distributions with similar percentiles will look very similar, see for example Figure 1. However, Section 3.3 suggests that greater precision is required to ensure the stability of Bayes' rule updating. 
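Before turning to the data, the following minimal sketch illustrates how the \(\beta\)D-Bayes update (2) with loss (3) can be computed for the Gaussian model in (8). It uses a brute-force grid over \((\mu,\sigma)\), a flat prior and loss weight \(w=1\) purely for illustration; these choices, and the grid ranges, are assumptions of the sketch rather than the implementation used for the results reported here.

```python
# Minimal sketch: betaD-Bayes updating (2) with loss (3) for the Gaussian model
# in (8), via a brute-force grid over (mu, sigma), a flat prior and w = 1.
import numpy as np
from scipy import stats

beta = 1.5
rng = np.random.default_rng(0)
n = 1000
outlier = rng.random(n) < 0.1
y = np.where(outlier, rng.normal(5.0, 3.0, n), rng.normal(0.0, 1.0, n))

mus = np.linspace(-1.0, 2.0, 61)
sigmas = np.linspace(0.5, 4.0, 61)
MU, SIG = np.meshgrid(mus, sigmas, indexing="ij")

# l_beta(y, N(.; mu, sigma^2)): the integral term has the closed form
# int N(z; mu, sigma^2)^beta dz = (2 pi sigma^2)^((1 - beta) / 2) / sqrt(beta)
dens = stats.norm.pdf(y[:, None, None], loc=MU[None], scale=SIG[None])
integral = (2 * np.pi * SIG ** 2) ** ((1 - beta) / 2) / np.sqrt(beta)
total_loss = (-dens ** (beta - 1) / (beta - 1)).sum(axis=0) + n * integral / beta

log_post = -total_loss                 # log posterior (2) up to a constant
log_post -= log_post.max()
post = np.exp(log_post)
post /= post.sum()

i, j = np.unravel_index(post.argmax(), post.shape)
print("betaD-Bayes posterior mode (mu, sigma):", mus[i], sigmas[j])
# the mode should sit near the uncontaminated N(0, 1) component
```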
On the other hand, the likelihoods in (8) are contained in \(\mathcal{N}_{0.043}^{\text{\tiny TVD}}\). We generated \(n=1000\) observations from the \(\epsilon\)-contamination model \(g(y)=0.9\times\mathcal{N}\left(y;0,1\right)+0.1\times\mathcal{N}\left(y;5,3^{2}\right)\), contained within the \(\mathcal{G}_{0.1}^{\text{\tiny TVD}}\) neighbourhood of \(\mathcal{N}\left(y;0,1\right)\). We then conducted Bayesian updating under the Gaussian and Student's-\(t\) likelihoods using both Bayes' rule and the \(\beta\)D-Bayes (\(\beta=1.5\)) under shared priors \(\pi(\mu,\sigma^{2})=\mathcal{N}\left(\mu;\mu_{0},v_{0}\sigma^{2}\right)\mathcal{IG}(\sigma^{2};a_{0},b_{0})\), with hyperparameters \(a_{0}=0.01\), \(b_{0}=0.01\), \(\mu_{0}=0\), \(v_{0}=10\). Figure 1 and Figure B.1, which plots the parameter posterior distributions for both models under both updating mechanisms, clearly demonstrate the stability of the \(\beta\)D-Bayes across these two models and the lack of stability of traditional Bayesian updating. Not only is the \(\beta\)D inference more stable across \(\mathcal{N}_{\epsilon}^{\text{\tiny TVD}}\), the \(\beta\)D predictive better captures the majority of the dgp than either predictive does under traditional Bayesian updating. The capturing of the \(\mathcal{N}\left(y;0,1\right)\) mode further illustrates the \(\beta\)D-Bayes' stability across neighbourhoods of the dgp. Figure 3 plots influence functions (West, 1984) for the kld-Bayes and \(\beta\)D-Bayes under the Gaussian and Student's-\(t\) models. Influence functions are the gradient of the loss function evaluated at parameter estimates as a function of the observations and show the impact each observation had on the analysis. Under the \(\beta\)D-Bayes, the influence functions of the Gaussian and Student's-\(t\) likelihoods are closer for almost every \(y\), illustrating the stability to the model, and additionally, the influence functions for both models under the \(\beta\)D-Bayes vary less with \(y\), illustrating stability to the dgp.

#### 6.1.1 Dld data

We consider an RNA-sequencing data set from Yuan et al. (2016) measuring gene expression for \(n=192\) patients with different types of cancer. Rossell and Rubio (2018) studied the impact of 57 predictors on the expression of dld, a gene that can perform several functions such as metabolism regulation. To illustrate our results, we selected the 15 variables with the 5 highest loadings in the first 3 principal components, and fitted regression models using the neighbouring models in (8) for the residuals. Section B.1.6 lists the selected variables.

Figure 3: Influence functions for parameters \(\mu\) and \(\sigma^{2}\) of the Gaussian and Student's-\(t\) likelihood models under the kld-Bayes and \(\beta\)D-Bayes with \(\beta=1.5\).

Figure 4 demonstrates that the \(\beta\)D-Bayes (\(\beta=1.5\)) produces more stable estimates of the fitted residuals (top-left), the estimated density of the residuals (top-right), parameter estimates (bottom-left), and posterior predictive density for the observed data (bottom-right) than the traditional Bayesian inference. Rossell and Rubio (2018) found evidence that this data is heavy-tailed, further demonstrated in Figure B.5, which caused the kld-Bayes to estimate very different densities under the Gaussian and Student's-\(t\) model, while the \(\beta\)D-Bayes is stable to this feature of the data.
Figure B.4 shows the fit of the models to the posterior mean estimates of the standardised residuals, showing that as well as being stable, the \(\beta\)D-Bayes produces good estimation around the mode of the dld data under both models. Section B.1.5 considers a further regression example showing that even when one of the models under consideration is 'well-specified' for the data, the \(\beta\)D-Bayes inference continues to perform adequately.

### Mixture Modelling

An advantage of considering the stability of the distributions for observables rather than parameters is that it allows 'neighbouring' models to have different dimensions to their parameter space. For example, consider initial model \(f(\cdot;\theta)\) and then 'neighbouring' model \[h(\cdot;\eta)=(1-\omega)\times f(\cdot;\theta)+\omega\times h^{{}^{\prime}}(\cdot;\kappa),\] for \(\eta=\{\theta,\kappa,\omega\}\). Here, \(h(\cdot;\eta)\) is a mixture model combining the likelihood model \(f(\cdot;\theta)\), which could itself already be a mixture model, and some other density \(h^{{}^{\prime}}(\cdot;\kappa)\) with additional parameters \(\kappa\). For all \(\theta\in\Theta\) and any \(\kappa\in K\) we have that \(\textsc{tvd}(f\left(\cdot;\theta\right),h\left(\cdot;\{\theta,\kappa,\omega\}\right))<\omega\) and therefore a tvd neighbourhood can be defined by upper bounding \(\omega\).

#### 6.2.1 Shapley Galaxy Dataset

We examine the Shapley galaxy dataset of Drinkwater et al. (2004), recording the velocities of 4215 galaxies in the Shapley supercluster, a large concentration of gravitationally-interacting galaxies; see Figure 5. The clustering tendency of galaxies continues to be a subject of interest in astronomy. Miller and Dunson (2018) investigate this data using Gaussian mixture models and use their coarsened posterior to select the number of mixture components, finding considerable instability in the number of estimated components \(K\) under different specifications of the coarsening parameter. See Cai et al. (2021) for further issues with estimating the number of components in mixture models. We estimate Gaussian mixture models of the form \[f(y;\theta)=\sum_{k=1}^{K}\omega_{k}\mathcal{N}(y;\mu_{k},\sigma_{k}),\] under the kld-Bayes and \(\beta\)D-Bayes, considering the number of components \(K\in\{2,3,4,5,6\}\) and using the normal-inverse Wishart priors of Fuquene et al. (2019) (full details available in Section B.2). \(\beta\)D-Bayes inference for such one-dimensional mixture models is easy to implement using adaptive quadrature to approximate the necessary integral term \(\frac{1}{\beta}\int h(z;\eta)^{\beta}dz\) (see the sketch below). We do not formally place any constraint on the estimation of \(\omega_{k}\); however, any model that estimates a component with small \(\omega_{k}\) can be seen as a neighbour of a model with one fewer component.

Figure 4: Posterior mean estimates of standardised residuals (**top left**), posterior mean estimated residual distribution (**top-right**), absolute difference in posterior mean parameter estimates (**bottom left**) and difference in posterior predictive densities of the observations (**bottom right**) under the Gaussian and Student's-\(t\) model of kld-Bayes and \(\beta\)D-Bayes (\(\beta=1.5\)) for the dld data.

Figure 5 demonstrates the posterior mean approximation to the histogram of the data of the Gaussian mixture models under the kld-Bayes and \(\beta\)D-Bayes, and Table 1 records the tvd between the posterior mean predictive distributions obtained when recursively adding components to the model.
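The only non-standard computation in this loss is the integral term; a minimal sketch of that quadrature step is given below, assuming scipy's adaptive quadrature, with illustrative component values and integration limits (in units of 1,000 km/s) rather than fitted quantities.

```python
# Minimal sketch: the betaD loss (3) for a one-dimensional Gaussian mixture,
# with the integral term approximated by adaptive quadrature.
# Component values and integration limits are illustrative, not fitted values.
import numpy as np
from scipy import stats
from scipy.integrate import quad

def mixture_pdf(z, weights, means, sds):
    return sum(w * stats.norm.pdf(z, m, s) for w, m, s in zip(weights, means, sds))

def beta_loss_mixture(y, weights, means, sds, beta):
    # 1/beta * int f(z; theta)^beta dz via adaptive quadrature on a generous range
    integral, _ = quad(lambda z: mixture_pdf(z, weights, means, sds) ** beta, -20, 80)
    dens = mixture_pdf(y, weights, means, sds)
    return -dens ** (beta - 1) / (beta - 1) + integral / beta

weights, means, sds = [0.5, 0.4, 0.1], [5.0, 15.0, 32.0], [2.0, 3.0, 4.0]
y_obs = np.array([4.8, 14.2, 16.0, 31.5])
print(beta_loss_mixture(y_obs, weights, means, sds, beta=1.5))
```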
The \(\beta\)D-Bayes inference for \(\beta=1.25\) and \(1.5\) is more stable to the addition of an extra component. In particular, for \(K\geq 3\) the \(\beta\)D-Bayes inference stably estimates the biggest components of the data centered approximately at \(5,000\) and \(15,000\)\(km/s\), while the kld-Bayes produces very different inference for these modes depending on the number of clusters selected. ### Binary Classification Binary classification models predict \(y\in\{0,1\}\) from \(p\)-dimensional regressors \(X\). The canonical model in such a setting is logistic regression where \[P_{LR}(y=1|X,\theta)=\frac{1}{1+\exp{(-X\theta)}},\quad P_{LR}(y=0|X,\theta)= 1-P_{LR}(Y=1|X,\theta),\] where \(\theta\in\mathbb{R}^{p}\) are the regression parameters. Alternative, less ubiquitous models include, probit regression, which uses an alternative glm link function depending on the standard Gaussian cdf \(\Phi(\cdot)\), 'heavier tailed' \(t\)-logistic regression (Ding and Vishwanathan, 2010; Ding et al., 2013) and a mixture type model that explicitly models the chance of mislabelling of the observed classes. \[P_{PR}(y=1|X,\eta)=\Phi(w_{PR}\times X\theta),\quad P_{LR}(y=1|X,\eta)=\exp_{t}((w_{tLR}\times 0.5X\theta-G_{t}(w_{tLR}\times X\theta)))\] \[P_{ML}(y=1|X,\eta)=(1-\nu_{1})P_{LR}(y=1|X,\theta)+\nu_{0}(1-P_{ LR}(y=1|X,\theta))\] where \(0<t<2\) and \(0<\nu_{0},\nu_{1}<1\). The so-called \(t\)-exponential '\(\exp_{t}\)' and \(G_{t}\) ensures that \(P_{tLR}(y=1|X,\eta)\) is normalised, both are defined in Section B.3.1. Setting \(t>1\) results in heavier-tailed probabilities than the logistic model. For the probit and \(t\)-logistic models parameters \(\theta\) are \begin{table} \begin{tabular}{c c c c c} \hline \hline Method & \(K=2\) vs \(K=3\) & \(K=3\) vs \(K=4\) & \(K=4\) vs \(K=5\) & \(K=5\) vs \(K=6\) \\ \hline kld & 0.27 & 0.12 & 0.13 & 0.03 \\ \(\beta\)D (\(\beta=1.25\)) & 0.26 & 0.06 & 0.06 & 0.03 \\ \(\beta\)D (\(\beta=1.5\)) & 0.23 & 0.05 & 0.08 & 0.02 \\ \hline \hline \end{tabular} \end{table} Table 1: Total variation distances between posterior mean predictive distributions for different number of mixture components \(K\) under the kld-Bayes and \(\beta\)D for \(\beta=1.25\) and \(1.5\). Figure 5: Shapley Galaxy Data: Histograms of the data, in units of 1,000 km/s, excluding a small amount of data extending in a tail up to 80,000 km/s, with fitted Gaussian mixture models with \(K=2-6\) components under the kld-Bayes (**top**), \(\beta\)D-Bayes with \(\beta=1.25\) (**middle**) and \(\beta\)D-Bayes with \(\beta=1.5\) (**bottom**). scalar multiples \(w_{PR},w_{tLR}\in\mathbb{R}\) of the logistic regression parameters \(\theta\mapsto w\theta\). These are calculated in order to minimise the _a priori_TVD between the models and the logistic regression baseline according to \(\mathcal{N}_{\epsilon}^{\textsc{tVD}}\) (see Section B.3.2). We upper bound \(\nu_{0}\) and \(\nu_{1}\) by \(0.05\) making \(\epsilon=0.05\) for these models. Figure 6 plots \(P(y=1|X,\theta)\) as a function of \(X\theta\) for all four models (left) and the tvd between each alternative model and the logistic regression (right), demonstrating that all four produce very similar binary probabilities. #### 6.3.1 Colon Cancer Dataset To investigate the stability of posterior predictive inferences across the logistic, probit, \(t\)-logistic, and mislabelled binary regression models we consider the colon cancer dataset of Alon et al. (1999). 
The dataset contains the expression levels of 2000 genes from 40 tumours and 22 normal tissues, and there is purported evidence that certain tissue samples may have been cross-contaminated (Tibshirani and Manning, 2013). Rather than consider the full 2000 genes we first run a frequentist LASSO procedure, estimating the hyperparameter via cross-validation, and focus our modelling only on the nine genes selected by this procedure. We understand that such post-model selection biases parameter estimates, but the stability of the predictive inference is our focus here. Figure 7 compares the _a posteriori_ tvd distance between the posterior mean estimated distribution for each observation with the _a priori_ tvd distance between each of the models (top), and the difference between the posterior mean regression parameter estimates of the two models (bottom), under the kld-Bayes and \(\beta\)D-Bayes with \(\beta=1.5\). The stability of the \(\beta\)D-Bayes is once again demonstrated here: for almost every observation and every pair of models the posterior predictive inference is as stable as it was _a priori_, while the kld-Bayes inference is more often divergent. For the \(t\)-logistic and mislabelled models the predictive stability of the \(\beta\)D-Bayes also provides greater stability in the posterior mean parameter estimates.

Figure 6: **Left**: \(P(y=1|X,\theta)\) for the logistic, probit, \(t\)-logistic and mislabelled models. **Right**: tvd between the logistic regression canonical model and the probit, \(t\)-logistic and mislabelled models. The \(\theta\) parameters of the probit and \(t\)-logistic models are scalar multiplied in a fashion that minimises the tvd to the logistic regression.

Figure 7: Colon Cancer Data. **Top**: tvd between the posterior mean estimated probabilities for each observation of the probit (**left**), \(t\)-logistic (**centre**) and mislabelled (**right**) models and the canonical logistic regression under the kld-Bayes and \(\beta\)D-Bayes (\(\beta=1.5\)). The dotted line represents the _a priori_ tvd distance between the models. **Bottom**: Absolute differences between posterior mean parameter estimates and those of the logistic regression.

## 7 Discussion

This paper investigated the posterior predictive stability of traditional Bayesian updating and a generalised Bayesian alternative minimising the \(\beta\)D. In practice, the model used for inference is usually a convenient and canonical member of a wider class that captures the broad belief statements made by the dm, and the observed data was not necessarily collected in the manner the dm imagined. We proved that \(\beta\)D-Bayes inference is stable across a class of likelihood models and data generating processes whose probability statements are absolutely close (a tvd neighbourhood), by establishing bounds on how far their predictive inferences can diverge. On the other hand, our results require the dm to be sure about the tail properties of their beliefs and the dgp to guarantee stability for standard Bayesian inference. The results of this paper simplify the process of belief elicitation for the \(\beta\)D-Bayes, bounding the _a posteriori_ consequences for a given level of _a priori_ inaccuracy, leaving the dm free to use the best guess approximation of their beliefs that they are most comfortable with, rather than switch to a less familiar model with better outlier rejection properties (O'Hagan, 1979).
Such stability is achieved through a minimal amount of extra work compared with traditional Bayes' rule inference, and it provides a similarly recognisable output. We hope such results help to justify the increased use of the \(\beta\)D to make robust inferences in statistics and machine learning applications. A key issue motivating the departure from standard Bayesian methods here is a lack of concordance between the likelihood model and the data. Such an issue can be attributed to either a failure of the modeller to think carefully enough about the dgp, or errors in data collection. However, we treat these results separately to exemplify two different manifestations of the instability of Bayes' rule. Future work could explore the applicability of such results in multivariate settings where belief specification and data collection are harder, and further investigate our kld-Bayes results. While we argued when you could guarantee the stability of such methods, identifying for which statements kld-Bayes is not stable would provide important and useful results to facilitate more focused belief elicitation. To continue to facilitate the deployment of \(\beta\)D-Bayes methods in practice, more work is required to study and build upon existing methods to select \(\beta\), particularly in high dimensions. While it is clear that considerable gains can be made over standard methods in certain scenarios, an adversarial analysis of the \(\beta\)D performance compared with its kld-Bayes analogue would further motivate its wider applications. ## Acknowledgements The authors would like to thank Danny Williamson, Christian Robert, and Sebastian Vollmer for their insightful discussions on the topics in this paper. JJ was partially funded by the Ayudas Fundacion BBVA a Equipos de Investigacion Cientifica 2017, the Government of Spain's Plan Nacional PGC2018-101643-B-I00, and a Juan de la Cierva Formacion fellowship FJC2020-046348-I. CH was supported by the EPSRC Bayes4Health programme grant and The Alan Turing Institute, UK.
2309.06248
Rethinking Evaluation Metric for Probability Estimation Models Using Esports Data
Probability estimation models play an important role in various fields, such as weather forecasting, recommendation systems, and sports analysis. Among several models estimating probabilities, it is difficult to evaluate which model gives reliable probabilities since the ground-truth probabilities are not available. The win probability estimation model for esports, which calculates the win probability under a certain game state, is also one of the fields being actively studied in probability estimation. However, most of the previous works evaluated their models using accuracy, a metric that only can measure the performance of discrimination. In this work, we firstly investigate the Brier score and the Expected Calibration Error (ECE) as a replacement of accuracy used as a performance evaluation metric for win probability estimation models in esports field. Based on the analysis, we propose a novel metric called Balance score which is a simple yet effective metric in terms of six good properties that probability estimation metric should have. Under the general condition, we also found that the Balance score can be an effective approximation of the true expected calibration error which has been imperfectly approximated by ECE using the binning technique. Extensive evaluations using simulation studies and real game snapshot data demonstrate the promising potential to adopt the proposed metric not only for the win probability estimation model for esports but also for evaluating general probability estimation models.
Euihyeon Choi, Jooyoung Kim, Wonkyung Lee
2023-09-12T14:04:12Z
http://arxiv.org/abs/2309.06248v1
# Rethinking Evaluation Metric for Probability Estimation Models Using Esports Data

###### Abstract

Probability estimation models play an important role in various fields, such as weather forecasting, recommendation systems, and sports analysis. Among several models estimating probabilities, it is difficult to evaluate which model gives reliable probabilities since the ground-truth probabilities are not available. The win probability estimation model for esports, which calculates the win probability under a certain game state, is also one of the fields being actively studied in probability estimation. However, most of the previous works evaluated their models using accuracy, a metric that only can measure the performance of discrimination. In this work, we firstly investigate the Brier score and the Expected Calibration Error (\(ECE\)) as a replacement of accuracy used as a performance evaluation metric for win probability estimation models in esports field. Based on the analysis, we propose a novel metric called Balance score which is a simple yet effective metric in terms of six good properties that probability estimation metric should have. Under the general condition, we also found that the Balance score can be an effective approximation of the true expected calibration error which has been imperfectly approximated by \(ECE\) using the binning technique. Extensive evaluations using simulation studies and real game snapshot data demonstrate the promising potential to adopt the proposed metric not only for the win probability estimation model for esports but also for evaluating general probability estimation models.

Probability Estimation, Calibration, \(ECE\), Balance Score

## I Introduction

The probability estimation problem is already an important issue in various fields, such as weather forecasting [1], recommendation systems [2], and sports analysis [3, 4]. Since reliable probabilities in such fields bring great benefits to our lives [5], various models have been proposed to estimate reliable probabilities. However, unlike classification problems with true labels, the probability estimation problem is more difficult to evaluate because there is no ground-truth \(p\) label [6]. It is thus increasingly important to select a metric carefully to measure the performance of probability estimation models adequately. As options for evaluating probability estimation, several metrics have been developed, such as the Brier score [7] and the Expected Calibration Error (\(ECE\)) [8], which are proposed to measure the calibration performance of models instead of accuracy. Recently, with the rapid market growth of esports, research applying win probability estimation models to the esports field has also been actively conducted [9, 10]. Among the esports genres, the multiplayer online battle arena (MOBA) genre is one of the main targets for applying the win probability estimation model, based on its high market share in the esports field and its easily accessible data for training the models [10]. To the best of our knowledge, however, most of the previous works on the MOBA genre are still using accuracy as a metric in the evaluation of their probability estimation models. Unfortunately, in the esports field, using accuracy as an evaluation metric can be more unstable since the datasets in esports have a huge diversity [9] in terms of _operating conditions_ (the distribution of ground-truth \(p\)) which the probability estimation model works with.
In classic sports, the aspect of the game does not differ significantly depending on the point of the game being viewed. However, in MOBA games, the aspect of the game is very different at each point of the game, as the stats of the characters played by each human player change dynamically within the game. This makes the operating condition of datasets completely distinct at each time point. In addition, repeated updates by game companies add further diversity to the operating condition of datasets by changing the dynamics of the game. It follows that research on win probability estimation models using esports data is being conducted on very different datasets, without a fixed benchmark dataset. These circumstances mean that a proposed model cannot guarantee its performance in other cases. In this work, we provide a detailed analysis of three candidate metrics that can be employed to evaluate probability estimation models for esports data. Based on the analysis of candidate metrics, we propose a simple yet effective evaluation metric called Balance score that can address the shortcomings identified in other metrics. Extensive evaluation using simulation studies and real game snapshot data verifies the benefits of the proposed metric and opens up possibilities for it to be utilized in general probability estimation models.

## II Related Works & Proposed Metric

### _Overview_

Most previous studies on win probability estimation models in esports use accuracy as an evaluation metric [9, 10]. However, accuracy is a metric that only measures the discrimination performance of the model, so it does not guarantee the model's performance in estimating the exact win probability [8]. Also, in general classification tasks such as image classification, training is carried out assuming that the optimal model will show optimal performance (e.g., 100%). However, in the game snapshot data problem, even the optimal model cannot achieve 100% accuracy due to the uncertainty of the game snapshot data itself, as shown in Fig. 1. Compared to the evaluation of a classification problem, which can be measured with a given ground-truth label, proper evaluation of a probability estimation model is more challenging because the ground-truth probability is unknown. Recently, in fields other than esports, two representative metrics, namely the Brier score [7] and the Expected Calibration Error (\(ECE\)) [8], have been proposed to measure the calibration performance of models instead of accuracy. Calibration refers to the statistical consistency between the estimated probabilities and the true results. Unlike accuracy, measuring the calibration performance can thus reflect the estimated probability values. In this section, we first formulate the win probability estimation problem in the esports field. Subsequently, we introduce the two representative metrics for measuring calibration performance, based on their definitions, advantages, and disadvantages. Finally, we propose a novel metric called Balance score motivated by the two metrics.

### _Problem Formulation_

Assume the game snapshot dataset consists of feature vectors \(\mathbf{x_{i}},1\leq i\leq n\), and their corresponding results \(y_{i}\in\{0,1\}\). For the \(n\) game snapshots, each feature vector can consist of several scalar features at that time (e.g., earned gold, earned experience points), depending on the game being measured. The '0' and '1' for the result \(y_{i}\) respectively refer to a 'lose' and a 'win' at the end of the game.
Given a win probability estimation model, the model predicts the win probability \(\hat{p_{i}}\) of each snapshot \(\mathbf{x_{i}}\). Note that the real win probability \(p_{i}\) of the snapshot is unknown. The purpose of the evaluation metric for the win probability estimation model is thus to adequately evaluate the model's predicted output \(\hat{p}\) for each game snapshot.

### _Brier Score_

A scoring function is a set of rules that involves computation between the estimated probability and the actual outcome. A scoring function can be used to evaluate the estimated probabilities and encourage models to estimate 'good' probabilities by providing an appropriate score [11]. For each snapshot \(\mathbf{x_{i}}\), only \(\hat{p_{i}}\) and \(y_{i}\) are used as inputs to the scoring function since \(p_{i}\) is unknown. The Brier score is one of the representative scoring rules that has commonly been used to evaluate probability estimation models. The Brier score can be represented with its scoring function \(f_{br}(\cdot)\) and expectation as follows: \[Brier\ score=\frac{1}{n}\sum_{i=1}^{n}f_{br}(\hat{p_{i}},y_{i}), \tag{1}\] where \[f_{br}(\hat{p_{i}},y_{i})=(\hat{p_{i}}-y_{i})^{2}. \tag{2}\] Equation (1) can be decomposed again into two terms as follows [12]: \[Brier\ score=\frac{1}{n}\sum_{i=1}^{n}(\hat{p_{i}}-y_{i})(2\hat{p_{i}}-1)+\frac{1}{n}\sum_{i=1}^{n}\hat{p_{i}}(1-\hat{p_{i}}). \tag{3}\] In equation (3), the left term is 0 in expectation under perfect calibration, while the right term relates to the sharpness, i.e., the concentration of the predictive distribution [13]. This means that the Brier score simultaneously addresses the calibration performance and the sharpness of probability estimation. Also, the Brier score is one of the strictly proper scoring rules [14], which have the characteristic that estimating \(\hat{p_{i}}\) as \(p_{i}\) is the only optimal strategy for the expected score.

Fig. 1: Problem of using accuracy as a measure of probability estimation models in esports. Compared to the image classification task, which only asks for the label, even the optimal model may not be able to estimate the result of the match due to the uncertainty of the game snapshot data itself.

In general, the expected score of a model under a certain scoring function can be calculated as: \[\begin{split}\mathbb{E}_{model}[score]&=\int_{0}^{1}\int_{0}^{1}\left[p\cdot f(\hat{p},1)+(1-p)\cdot f(\hat{p},0)\right]\\ &\times P_{model}(\hat{p}|p)d\hat{p}\,\pi(p)dp\end{split} \tag{4}\] where \(f(\hat{p},y)\), \(\pi(p)\) and \(P_{model}(\hat{p}|p)\) respectively denote the scoring function, the distribution of the ground-truth \(p\) on the target dataset, and the conditional probability of \(\hat{p}\) under \(p\) for the model. An optimal model which always gets \(\hat{p_{i}}=p_{i}\) can also be evaluated in terms of the Brier score. Assume a situation where the optimal model is scored under a uniform distribution of the ground-truth \(p\) (i.e., \(\pi(p)=1\) for \(0\leq p\leq 1\), \(\pi(p)=0\) otherwise). In the case of esports, this can be seen as an example of win probability estimation in the middle of a match, where the win probability is likely to be uniformly distributed rather than concentrated. In such conditions, the optimal expected scores of accuracy (which can also be represented as a scoring function) and the Brier score can be calculated as 0.75 (75%) and 0.166 respectively, according to equation (4).
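The uniform-\(p\) calculation above can also be verified by Monte Carlo. The minimal sketch below assumes an optimal model that sets \(\hat{p}=p\); the sample size and random seed are illustrative choices.

```python
# Minimal sketch: Monte Carlo check of the optimal model's expected accuracy
# and Brier score under a uniform distribution of the ground-truth p (eq. (4)).
import numpy as np

rng = np.random.default_rng(0)
p = rng.uniform(0.0, 1.0, size=1_000_000)   # ground-truth win probabilities
y = rng.random(p.size) < p                  # match outcomes drawn from p
p_hat = p                                   # the optimal model predicts p exactly

accuracy = np.mean((p_hat >= 0.5) == y)
brier = np.mean((p_hat - y) ** 2)
print(f"accuracy ~ {accuracy:.3f} (analytic 0.75)")
print(f"Brier    ~ {brier:.3f} (analytic 1/6)")
```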
Since the operating condition of a specific dataset is unknown in real cases, the optimal score which can be the target is also unknown. It means that the low Brier score reported in previous studies cannot guarantee general performance. Instead, the score can only be used as a relative measure of multiple models for a single fixed dataset that shares an operating condition. This limitation of the Brier score can be a major drawback to its adoption as a metric in the field of esports which does not have a fixed dataset and the operating condition varies widely each time. ### _Expected Calibration Error_ In recent studies, several approaches such as the reliability diagram [15], Expected Calibration Error (\(ECE\)), and Maximum Calibration Error (\(MCE\)) [8] have been proposed to measure the calibration performance of a model. Among these metrics, \(ECE\) is frequently used because it can reasonably express the calibration performance of the model with a single scalar value. The perfect calibration from the model can be expressed as follows: \[Prob(\hat{Y}=1|\hat{P}=p)=p,\forall p\in[0,1]. \tag{5}\] Then, the model's true expected calibration error is represented as follows: \[\text{True ECE}=\underset{\hat{P}}{\mathbb{E}}[|Prob(\hat{Y}=1|\hat{P}=p)-p|]. \tag{6}\] To approximate the true expected calibration error, Guo et al. [16] suggested dividing the set of \(\hat{p_{i}}\) with [0, 1] probability interval into \(M\) equally spaced bins. \(ECE\) value is calculated based on the errors from these bins as follows: \[ECE=\sum_{m=1}^{M}\frac{|B_{m}|}{n}|\overline{y}(B_{m})-\overline{\hat{p}}(B_ {m})|, \tag{7}\] where \(B_{m}\) denotes the set of indices of predictions belonging to m-th bin, \(\overline{y}(B_{m})\) denotes the proportion of the true results of predictions in m-th bin, and \(\overline{\hat{p}}(B_{m})\) denotes the average of probability predictions in m-th bin. Noting here the term '\(ECE\)' refers to the approximation of the true expected calibration error (True ECE) by equation (7) hereafter. Compare to accuracy and the Brier score, an optimal model which always predicts \(p_{i}\) as \(p_{i}\) can get 0 \(ECE\) value. Knowing the optimal value has the advantage that the model experimenter can check whether the model approaches perfect calibration by tracking the \(ECE\) value of the model [16]. However, \(ECE\) has some limitations. First of all, there is no criterion for determining how many bins to divide. To get a precise approximation of the true expected calibration error, a larger \(M\) would be better. However, if \(M\) is increased, the number of \(\hat{p_{i}}\) in each bin decreases which results in a large bias in the \(ECE\) value [17]. If the calibration performance of several models on one dataset is compared, the order of the calibration performance of the models can be changed as the \(M\) value is changed. Also, due to the nature of the calibration metric which divides bins and collects predictions to calculate values, only evaluations on the entire dataset are possible. Based on these observations, we propose a scoring function based metric called Balance score which addresses the shortcomings of existing metrics and takes only their advantages. ### _Balance Score_ The Balance score is a score with gain and loss strategy, which pursues the balance of the score. If model predicts the true observation \(y_{i}\) based on \(\hat{p_{i}}\), it gains a score and if it predicts incorrectly, it loses a score. 
For each \(\boldsymbol{x_{i}}\) for which it is difficult to predict the result correctly (e.g., \(p\) in 40% \(\sim\) 60%), a large score is gained if the predicted result is correct, and a small score is lost if the predicted result is incorrect. Conversely, when it is easy to predict the result of \(\boldsymbol{x_{i}}\) correctly (e.g., \(p\) close to 0% or 100%), a small score is gained if the predicted result is correct, and a large score is lost if the predicted result is incorrect. The Balance score with its scoring function \(f_{ba}(\cdot)\) can be defined as follows: \[f_{ba}(\hat{p_{i}},y_{i})=\begin{cases}1-\hat{p_{i}},&\text{if }\hat{p_{i}}\geq 0.5\text{ and }y_{i}=1\\ \hat{p_{i}},&\text{if }\hat{p_{i}}<0.5\text{ and }y_{i}=0\\ -\hat{p_{i}},&\text{if }\hat{p_{i}}\geq 0.5\text{ and }y_{i}=0\\ -1+\hat{p_{i}},&\text{if }\hat{p_{i}}<0.5\text{ and }y_{i}=1\end{cases}, \tag{8}\] \[Balance\ score=\frac{1}{n}\sum_{i=1}^{n}f_{ba}(\hat{p_{i}},y_{i}). \tag{9}\] To describe the key properties of the Balance score, let \(G(p)\) be the pointwise expected score when the model predicts \(p\) as \(p\). The model can then get a score of 0 by keeping the total score balanced as follows: \[\begin{split} G(p)&=pf_{ba}(p,1)+(1-p)f_{ba}(p,0)\\ &=0\text{ for }\forall p\in[0,1].\end{split} \tag{10}\] Also, when \(p\) is estimated to be a different value, the balance is broken in proportion to the difference between \(p\) and the estimated value. More generally, let \(g(q;p)\) be the pointwise expected score function when the model predicts \(q\) under the ground-truth probability \(p\). Then \(G(p)=g(p;p)\) holds. Also, the following expression holds: \[|g(q;p)|=|q-p|\text{ for }\forall q,p\in[0,1]. \tag{11}\] Equation (11) simply shows that the model gets a score of 0 only if it estimates \(p\) under \(p\), and the balance is broken in proportion to the difference. The Balance score is not a proper scoring rule in the sense suggested in [14] because it is a new scoring rule with gain and loss. However, the Balance score still shares the concept of a proper scoring rule in that estimating \(p\) as \(p\) is the optimal strategy for the expected score. The expected Balance score can also be calculated with equation (4). As with the \(ECE\), the optimal model can also achieve a value of 0 according to equation (10), regardless of the operating condition. This means that the optimal Balance score can be the target of models to be trained. Moreover, recent machine learning models suffer mainly from overconfidence or underconfidence in terms of probability estimation [16]. According to the tendency of the model, if an extreme prediction \(\hat{p_{i}}\) close to 0 or 1 is given compared to the actual \(p_{i}\), it is called overconfident, and if a mild value is given, it is called underconfident. Assume a situation with \(Prob(\hat{Y}=1|\hat{P}=p)=q_{p}\). If a model has an overconfident property, \(p\) is placed in \(0.5\leq p\leq q_{p}\text{ for }0.5\leq p\) and \(q_{p}\leq p<0.5\) for \(p<0.5\). Then the true expected calibration error is approximated as: \[\text{True ECE} = \underset{P}{\mathbb{E}}[|Prob(\hat{Y}=1|\hat{P}=p)-p|]\] \[= \underset{P}{\mathbb{E}}[|q_{p}-p|]\] \[= \underset{P<0.5}{\mathbb{E}}[p-q_{p}]+\underset{0.5\leq P}{\mathbb{E}}[q_{p}-p]\] \[= \underset{P<0.5}{\mathbb{E}}[-g(q_{p};p)]+\underset{0.5\leq P}{\mathbb{E}}[-g(q_{p};p)]\] \[= -\underset{P}{\mathbb{E}}[g(q_{p};p)]\] \[\approx -Balance\ score.\] Conversely, if the model has an underconfident property, \(\text{True ECE}\approx Balance\ score\).
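For concreteness, minimal sketches of the \(ECE\) of equation (7) with equal-width bins and the Balance score of equations (8) and (9) are given below; the helper names and the miscalibrated example model are illustrative. For a model that is systematically over- or under-confident, the absolute value of the Balance score and the binned \(ECE\) both target the same true expected calibration error.

```python
# Minimal sketches of the ECE in (7) (equal-width bins) and the Balance score in (8)-(9).
import numpy as np

def ece(p_hat, y, n_bins=10):
    bins = np.minimum((p_hat * n_bins).astype(int), n_bins - 1)
    total = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            total += mask.mean() * abs(y[mask].mean() - p_hat[mask].mean())
    return total

def balance_score(p_hat, y):
    correct = (p_hat >= 0.5) == (y == 1)
    gain = np.where(p_hat >= 0.5, 1.0 - p_hat, p_hat)      # score gained when correct
    loss = np.where(p_hat >= 0.5, -p_hat, -(1.0 - p_hat))  # score lost when incorrect
    return float(np.mean(np.where(correct, gain, loss)))

# example: an everywhere-overconfident predictor of outcomes y ~ Bernoulli(p)
rng = np.random.default_rng(0)
p = rng.uniform(0.0, 1.0, 200_000)
y = (rng.random(p.size) < p).astype(int)
p_hat = np.clip(0.5 + 1.4 * (p - 0.5), 0.0, 1.0)
print(ece(p_hat, y), abs(balance_score(p_hat, y)))  # both approximate the same error
```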
This means that the Balance score is another approximation of the true expected calibration error under general conditions, but without a binning technique. To conclude, Table I summarizes whether the metrics have good properties as probability estimation metrics. ## III Empirical Results In this section, we evaluate the proposed evaluation metric using simulation studies and real game snapshot data. In the first case study, we evaluate the expected accuracy, Brier score, \(ECE\), and Balance score obtained from the optimal model under various beta distributions. The expected scores obtained from a logistic regression model on real game snapshot datasets at different time points are also evaluated to show the limitations of accuracy and the Brier score. In the second case study, we compare the two calibration-based metrics (\(ECE\), Balance score) in detail. ### _Case Study I: Limitations of accuracy and Brier score_ In equation (4), an analytical method to calculate the expected score of a model under a specific operating condition and a specific scoring function was presented. Assume a situation in which a model is scored on a dataset following a specific distribution. This is done by repeatedly generating \(p_{*}\) from a specific distribution, determining \(y_{*}\) according to the value of \(p_{*}\), and letting the model estimate it as \(\hat{p_{*}}\). Since each \(p_{*}\) lies in [0,1] and the conditions of the two competing teams are the same, the win probability distribution of the game snapshot dataset can be considered to follow a symmetric beta distribution with an average win probability of 0.5 (50%). The win probability distribution of the game state snapshot dataset varies greatly depending on the state of the game. At the beginning of the game, the odds of the two competing teams are not significantly different, so the win probabilities will be concentrated near 50%. As time goes by, the tails of the distribution become thicker, and the distribution of the win probability leans toward both extremes after the middle of the game. This situation can be simulated by adjusting the \(\alpha\) and \(\beta\) parameters of the beta distribution. As a simulation study, the upper row of Table II summarizes the expected accuracy, Brier score, \(ECE\), and Balance score of the optimal model on datasets whose probability distributions follow \(Beta(0.5,0.5)\), \(Beta(1,1)\), and \(Beta(2,2)\). By generating 100,000 synthetic values \(p_{*}\) from each distribution, the scores can be calculated by simulation, which approximates equation (4) (a minimal sketch of this simulation is given below). The number of bins \(M\) for \(ECE\) is set to 10, and \(ECE\) is calculated by collecting pairs of (\(\hat{p_{*}}\), \(y_{*}\)). As shown in the upper row of Table II, the optimal model cannot achieve 100% accuracy, and the optimal accuracy value varies greatly depending on the distribution, from 68.92% to 81.88%. Similar to the accuracy, the Brier score also differs greatly depending on the distribution (0.1995 to 0.1241). Also, the true \(p\) and the true distribution of \(p\) of an actual dataset are unknown, as mentioned in the previous section. Therefore, achieving a certain accuracy or Brier score on a particular dataset does not indicate general performance. Since the true ECE of the optimal model is 0, both the \(ECE\) and the Balance score properly approximate it to 0 regardless of the distribution of \(p_{*}\). 
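For concreteness, the following is a minimal numpy sketch of this simulation protocol together with the Balance score (equations (8)-(9)) and the binned \(ECE\) (equation (7)). The optimal model is taken to predict \(\hat{p_{*}}=p_{*}\); function and variable names are our own, and the printed values will differ slightly from Table II because of sampling noise.

```python
import numpy as np

def balance_score(p_hat, y):
    """Balance score of equations (8)-(9): gain/loss scoring of binary outcomes."""
    p_hat, y = np.asarray(p_hat, dtype=float), np.asarray(y, dtype=int)
    pos = p_hat >= 0.5
    score = np.where(pos & (y == 1), 1.0 - p_hat,      # correct positive prediction: gain
            np.where(~pos & (y == 0), p_hat,           # correct negative prediction: gain
            np.where(pos & (y == 0), -p_hat,           # wrong positive prediction: loss
                     -1.0 + p_hat)))                   # wrong negative prediction: loss
    return score.mean()

def ece(p_hat, y, n_bins=10):
    """Expected Calibration Error of equation (7) with M equal-width bins."""
    p_hat, y = np.asarray(p_hat, dtype=float), np.asarray(y, dtype=int)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for m in range(n_bins):
        hi = (p_hat < edges[m + 1]) if m < n_bins - 1 else (p_hat <= edges[m + 1])
        in_bin = (p_hat >= edges[m]) & hi
        if in_bin.any():
            total += in_bin.sum() / len(p_hat) * abs(y[in_bin].mean() - p_hat[in_bin].mean())
    return total

rng = np.random.default_rng(0)
for a, b in [(0.5, 0.5), (1, 1), (2, 2)]:
    p = rng.beta(a, b, size=100_000)                   # true win probabilities p_*
    y = rng.binomial(1, p)                             # simulated outcomes y_*
    acc = ((p >= 0.5) == (y == 1)).mean()              # optimal model predicts p_hat = p
    brier = ((p - y) ** 2).mean()
    print(f"Beta({a},{b}): acc={acc:.4f} brier={brier:.4f} "
          f"ece={ece(p, y):.4f} balance={balance_score(p, y):+.4f}")
```

Under this setup, the accuracy and Brier score of the optimal model shift with the shape of the beta distribution, while the \(ECE\) and Balance score stay near zero, which is the behaviour summarized in the upper row of Table II.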
Noting that the error of \(ECE\) is slightly larger than the Balance score due to the bias caused by the binning process. For the experiment of real game snapshot data, the game snapshot data from the "League of Legends", one of the popular MOBA games is collected through the Riot Games public API [18] and evaluated. We collected datasets at three different time points which are expected to follow the three distributions assumed in the simulation study. The feature vector \(x_{*}\) for each snapshot data is generated by taking 14 indicators that affect the winning of the match (the gold difference for each role player (5), the experience difference for each role player (5), the number of killed dragons for each team (2), and the number of destroyed towers for each team (2) similar to that in [19]. Following our problem formulation, their corresponding results \(y_{*}\) are also recorded as the true labels. For each time point, 100,000 matches were taken and vectorized, while 60,000 matches were allocated as a training set and the remaining 40,000 matches were allocated as a testset. The lower row of the Table II shows the results calculated by each metric after the learning with a logistic regression model that naturally derives probabilities. The distribution of \(\hat{p}\) derived by the logistic regression model trained with the dataset for each time period is shown in Fig. 2. As shown in the figure, the game snapshots at 5, 10, and 15 minutes respectively resemble the distributions of \(Beta(2,2)\), \(Beta(1,1)\), and \(Beta(0.5,0.5)\). Consider the distributions of \(\hat{p}\) from the trained model are similar to the assumed distributions of \(p\) and the scores from the trained model are similar to the optimal model's score. Based on the results in the Table II, the trained logistic regression model can be considered a pretty nice model. However, the low accuracy of 65.56% can be considered that the model's discrimination power is insufficient. In a similar manner, the high Brier score of 0.2159 may seem insufficient to use the logistic regression model as a probability estimation model. For this gap between the understandings of the model's performance, it would be right to understand that the performance is limited by the uncertainty of the data rather than conclude that the logistic regression model does not have sufficient capacity or that there is a learning problem. However, since the distribution of true \(p\) is unknown and subjective, comparing the performance with that of the optimal score on the assumed distribution is also not reasonable. Instead, the \(ECE\) or the Balance score has a chance to be an absolute measure of the model's performance which also measures the calibration performance of the model. Since we know that the optimal value of both metrics is 0, we can directly understand the performance of the logistic regression model itself and thus can target to update the model targeting the values to be 0. ### _Case Study II: Comparison between \(ECE\) and Balance score_ \(ECE\), one of the best choices for approximating the true ECE, involves a binning process (see equation (7)). The binning hyperparameter \(M\) should be increased to make the approximation more accurate, but the bias occurs because the number of \(\hat{p_{*}}\) in each bin decreases. Fig. 2: Three plots in left side respectively refers to the plot of \(p\) obtained from the generated beta distributions \(Beta(2,2)\), \(Beta(1,1)\), and \(Beta(0.5,0.5)\). 
Distributions of \(\hat{p}\) derived by the logistic regression model trained with the real game snapshot datasets at 5, 10, and 15 minutes are also plotted on the right side. To illustrate limitations of a binning process for performance evaluation task, we assume two synthetic models with some degree of overconfident tendency. These models tend to overestimate for each true \(p\). When the tendency is 0.1, the degree is about 1/10 of the model that is completely overconfident and estimates 0 or 1 for all \(p\). For example, the model estimates \(0.9*0.6+0.1*1=0.64\) for true \(p\) of 0.6, and \(0.9*0.2+0.1*0=0.18\) for true \(p\) of 0.2. Fig. 3 shows the resultant \(ECE\) values along the increasing \(M\) from 5 to 100 with 10,000 (\(\hat{p_{*}}\), \(y_{*}\)) pairs obtained from the synthetic models with a tendency of 0.1 and 0.11. The corresponding 10,000 true \(p_{*}\) are generated from the uniform distribution. Since the true ECE of the model with tendency 0.1 is smaller than the model with tendency 0.11, it is clear that the \(ECE\) value also should be smaller. However, depending on the selection of \(M\), \(ECE\) of the model with a tendency of 0.11 can be smaller than the model with a tendency of 0.1 as shown in the crossing points of blue and red lines in Fig. 3. This shows that the order of the models' performance can be changed by the subjective choice of the experimenter regardless of their actual performance. Instead, the Balance score can be an exact measure for the performance evaluation of the model without any hyperparameter. In addition, the Balance score also requires much fewer data to estimate the true ECE based on pointwise calculation. When the distribution of \(p_{*}\) is uniform, the analytic solution of the true ECE for an overconfident model with a tendency of 0.1 can be calculated to 0.025 following the equation (6). Fig. 4 shows the \(ECE\) (\(M=10\)) and the Balance score of the model along the increment of utilized synthetic data size from 50 to 1000. Compare to the \(ECE\) which needs more than 500 sample data to approximate the true ECE, the Balance score can approach the true ECE value with much fewer data. Note that the pointwise calculation of the Balance score also resulted in the computational efficiency compared to the \(ECE\) calculation. Based on the observations, the Balance score without a subjective binning process shows several advantages over the \(ECE\) in terms of evaluation metric. ## IV Conclusion In this work, we have investigated how to adequately evaluate probability estimation models via esports' win probability estimation model. Through the theoretical analysis and experiments, we found that a novel metric called Balance score, motivated by the Brier score and \(ECE\), takes the advantages of existing metrics and also solves their shortcomings. Also, under machine learning models' general condition, we found that the Balance score can be an effective approximation of the true expected calibration error. In future works, we expect to develop a model that provides more reliable probabilities for esports' win probability with the help of proper evaluation by the Balance score. Additionally, we will investigate favorable effects of replacing \(ECE\) in various calibration-involved areas such as calibrated model learning and post-processing calibration methods.
2308.16372
Artificial to Spiking Neural Networks Conversion for Scientific Machine Learning
We introduce a method to convert Physics-Informed Neural Networks (PINNs), commonly used in scientific machine learning, to Spiking Neural Networks (SNNs), which are expected to have higher energy efficiency compared to traditional Artificial Neural Networks (ANNs). We first extend the calibration technique of SNNs to arbitrary activation functions beyond ReLU, making it more versatile, and we prove a theorem that ensures the effectiveness of the calibration. We successfully convert PINNs to SNNs, enabling computational efficiency for diverse regression tasks in solving multiple differential equations, including the unsteady Navier-Stokes equations. We demonstrate great gains in terms of overall efficiency, including Separable PINNs (SPINNs), which accelerate the training process. Overall, this is the first work of this kind and the proposed method achieves relatively good accuracy with low spike rates.
Qian Zhang, Chenxi Wu, Adar Kahana, Youngeun Kim, Yuhang Li, George Em Karniadakis, Priyadarshini Panda
2023-08-31T00:21:27Z
http://arxiv.org/abs/2308.16372v1
# Artificial to Spiking Neural Networks Conversion for Scientific Machine Learning + ###### Abstract We introduce a method to convert Physics-Informed Neural Networks (PINNs), commonly used in scientific machine learning, to Spiking Neural Networks (SNNs), which are expected to have higher energy efficiency compared to traditional Artificial Neural Networks (ANNs). We first extend the calibration technique of SNNs to arbitrary activation functions beyond ReLU, making it more versatile, and we prove a theorem that ensures the effectiveness of the calibration. We successfully convert PINNs to SNNs, enabling computational efficiency for diverse regression tasks in solving multiple differential equations, including the unsteady Navier-Stokes equations. We demonstrate great gains in terms of overall efficiency, including Separable PINNs (SPINNs), which accelerate the training process. Overall, this is the first work of this kind and the proposed method achieves relatively good accuracy with low spike rates. Spiking Neural Networks Conversion Nonlinear activation ## 1 Introduction The use of machine learning techniques in the scientific community has been spreading widely, reaching many fields such as physics [1, 2, 3, 4], chemistry [5, 6], biology [7, 8, 9], geophysics [10, 11], epidemiology[12, 13] and many more. The advances in computation capabilities have enabled many researchers to reformulate diverse problems as data-driven problems, by combining prior knowledge of the problem with fitting a model for the available data. A prominent drawback of Scientific Machine Learning (SciML) techniques is that they are usually expensive in terms of computational cost. They require either knowledge of the governing equations that determine the process (approximating them is a costly procedure), or a large amount of data to fit (expensive as well). The SciML community is striving for a more efficient method for training and inferring neural networks. Neuromorphic chips are one edge computing component that SciML applications could benefit from. In this work, we explore methods for enabling this. An important breakthrough in the field of SciML was the invention of the Physics-Informed Neural Networks (PINNs) [14, 15]. PINNs incorporate the knowledge of the physical experiment and governing equations into the network training step, making it a hybrid (physics and data) training. PINNs and its extensions [16, 17, 18, 19] have achieved great success in many fields and applications [20, 1, 21, 22, 23, 24, 7, 25, 26]. A disadvantage of PINNs is that like other deep neural networks, they are prone to long training times. In addition, when changing the problem conditions (initial conditions, boundary conditions, domain properties, etc.), the PINN has to be trained from scratch. Therefore, for real-time applications, a more efficient solution is sought after. For training a PINN, one usually uses smooth activation functions (such as the Tanh or Sine activation functions), where in most ANNs the ReLU activation is dominant. Using smooth activation function in a SNN is a new challenge we address in this paper with theoretical justification. Spiking Neural Networks (SNNs) have been gaining traction in the machine learning community for the past few years. The main reason is their expected efficiency [27; 28; 29; 30], in terms of energy consumption, compared to their Artificial Neural Network (ANN) counterparts that are commonly used for many applications. 
In addition, the advances in neuromorphic hardware (such as Intel's Loihi 2 chip [31; 32]), call for innovative algorithms and software that can utilize the chips and produce lighter and faster machine learning models. However, developing an SNN is a challenging task, especially for regression [33; 34]. Studies have been conducted for translating components from the popular ANNs into a spiking framework [35], but many components are not yet available in the spiking regime. In this paper we focus on that specific aspect. There are three popular approaches for training SNNs. The first involves using mathematical formulations of the components of the brain, such as the membrane [36; 37; 38], the synapse [39], etc. In this case, one uses a Hebbian learning rule [40] to find the weights of the synapses (the trainable parameters) using forward propagation (without backward propagation [41; 42]). The second method involves building surrogate models for the elements in the SNN that block the back-propagation, such as the non-differentiable activation functions used in SNNs. The third method, which is discussed in this paper, addresses converting a trained ANN into a SNN. The main contributions of this paper are as follows: 1. We propose a method to convert PINNs, a type of neural network commonly used for regression tasks, to Spiking Neural Networks (SNNs). The conversion allows for utilizing the advantages of SNNs in the inference stage, such as computational efficiency, in regression tasks. 2. We extend the calibration techniques used in previous studies to arbitrary activation functions, which significantly increases the applicability of the conversion method. Furthermore, we provide a convergence theorem to guarantee the effectiveness of the calibration. 3. We apply the conversion to separable PINNs (SPINNs), which accelerates the training process of the PINNs. Overall, the proposed method extends the application of SNNs in regression tasks and provides a systematic and efficient approach to convert existing neural networks for diverse regression tasks to SNNs. ## 2 Related Work Physics-informed neural networks (PINNs):An innovative framework that combines neural networks with physical laws to learn complex physical phenomena. In PINNs, the physical equations are integrated into the loss function, which allows the network to learn from both the given data and the underlying physics. This approach significantly improves the network's ability to handle incomplete or noisy data and performs well with limited training data. PINNs have been successfully applied to a range of problems in fluid dynamics, solid mechanics, and more [22; 1; 21; 23; 7; 24]. Separable PINNs (SPINNs):Cho et al. [43] proposed a novel neural network architecture called SPINNs, which aims to reduce the computational demands of PINNs and alleviate the curse of dimensionality. Unlike vanilla PINNs that use point-wise processing, SPINN works on a per-axis basis, thereby reducing the number of required network forward passes. SPINN utilizes factorized coordinates and separated sub-networks, where each sub-network takes an independent one-dimensional coordinate as input, and the final output is generated through an outer product and element-wise summation. Because SPINN eliminates the need to query every multidimensional coordinate input pair, it is less affected by the exponential growth of computational and memory costs associated with grid resolution in standard PINNs. 
Furthermore, SPINNs operate on a per-axis basis, which allows for parallelization with multiple GPUs. Spiking Neural Networks (SNNs):A type of Artificial Neural Network (ANN) that differs in the implementation of the core components. The purpose is to create more biologically plausible training and inference procedures. Unlike traditional ANNs, which process information through numerical values, SNNs process information through spikes, which occur in response to stimulation (much like the human brain). SNNs are becoming increasingly popular as they can mimic the temporal nature of biological neurons. Additionally, SNNs are computationally efficient and have the potential for efficient hardware implementation, making them well-suited for real-time applications. By combining an SNN implementation with edge computing, both training and inference could become significantly faster [27; 28; 29]. Recent results have shown that SNNs can achieve high accuracy on image classification tasks, with up to 99.44% on the MNIST dataset [44] and up to 79.21% on ImageNet [45]. SNN conversion:A technique to transform a trained ANN into an SNN. SNN conversion usually involves mapping the weights and activation functions of the ANN to the synaptic strengths and spike rates of SNNs. It is considered the most efficient way to train deep SNNs, as it avoids the challenges of direct SNN training, such as gradient estimation and spike generation [46; 47; 48]. The algorithm of SNN conversion can be divided into two steps: offline conversion and online inference. In the offline conversion step, the trained ANN model is converted into an equivalent SNN model by adjusting the network parameters. In the online inference step, the converted SNN model is used for inference; in this step the SNN is intended to be deployed on neuromorphic hardware to unlock its full potential and energy efficiency. ## 3 Method The SNN calibration of [49] is a process complementary to the conversion technique: it minimizes the loss of accuracy and efficiency when converting an ANN into an SNN. SNN calibration leverages the knowledge of the pre-trained ANN and corrects the conversion error layer by layer. The algorithm consists of two steps: replacing the ReLU activation function with a spike activation function, and applying the calibration to adjust only the biases (light) or the weights and biases (advanced) of each layer. The calibration method is based on the theoretical analysis of the conversion error and its propagation through the network. SNN calibration can achieve comparable or even better performance than the original ANN on various datasets and network architectures. This paper is a generalization of this work. We propose an extension to the SNN conversion that is appropriate for regression tasks. ### SNN Conversion setup We consider a dataset \(D=(X,Y)\), and an ANN \(\mathcal{A}\) with \(n\) hidden layers, trained to fit it. Let \(\mathbf{x}^{(n)}=\mathcal{A}(X)\) be the output of \(\mathcal{A}\). The goal of SNN conversion is to find an SNN \(\mathcal{S}\), whose (averaged) output is \(\bar{\mathbf{s}}^{(n)}=\mathcal{S}(X)\), such that \(\mathbf{x}^{(n)}\) is close to \(\bar{\mathbf{s}}^{(n)}\). In other words, we want to minimize the norm of the error \(\mathbf{e}^{(n)}\stackrel{{ d}}{{=}}\mathbf{x}^{(n)}-\bar{\mathbf{s}}^{(n)}\) for a given \(\mathcal{A}\) and \(D\). \(\mathcal{S}\) uses the same network structure as \(\mathcal{A}\), with the activation function of \(\mathcal{A}\) replaced by IF. 
Then we can analyze the factors that influence the total conversion error \(\mathbf{e}^{(n)}\). ### SNN Conversion with Calibration In this section, we briefly explain the SNN conversion with calibration proposed in [49]. Consider an MLP model with \(n\) layers, where the first \(n-1\) layers use ReLU activation and the last layer has no activation. We denote \(\mathbf{W}^{(l)}\) as the weights and bias for layer \(l\). The naive way of SNN conversion is simply replacing the ReLU activation layers with IF activation layers. In this case, we define \(\mathbf{x}^{(l)}\) as the output of layer \(l\) recursively, which is \(\mathbf{x}^{(l)}=\mathrm{ReLU}(\mathbf{W}^{(l)}\mathbf{x}^{(l-1)})\), where \(\mathbf{x}^{(0)}\) is the input. Similarly, we define \(\bar{\mathbf{s}}^{(l)}\), the output of layer \(l\) of the converted SNN, as \(\bar{\mathbf{s}}^{(l)}=\mathrm{IF}(\mathbf{W}^{(l)}\bar{\mathbf{s}}^{(l-1)})\), where \(\bar{\mathbf{s}}^{(0)}=\mathbf{x}^{(0)}\) is the input. In fact, we can compute the expected output spikes as \(\bar{\mathbf{s}}^{(l)}=\mathrm{ClipFloor}(\mathbf{W}^{(l)}\bar{\mathbf{s}}^{(l-1)})\). The \(\mathrm{ClipFloor}\) function is an approximation to \(\mathrm{ReLU}\) and is illustrated in Figure 1. Figure 1: Illustration of the \(\mathrm{ClipFloor}\) function and its relationship to \(\mathrm{ReLU}\) (left) and \(\tanh\) (right). Then we can define the conversion error of layer \(l\) as \(\mathbf{e}^{(l)}=\mathbf{x}^{(l)}-\bar{\mathbf{s}}^{(l)}\) and decompose it as \[\begin{split}\mathbf{e}^{(l)}&=\mathbf{x}^{(l)}-\bar{\mathbf{s}}^{(l)}\\ &=\mathrm{ReLU}(\mathbf{W}^{(l)}\mathbf{x}^{(l-1)})-\mathrm{ClipFloor}(\mathbf{W}^{(l)}\bar{\mathbf{s}}^{(l-1)})\\ &=\mathrm{ReLU}(\mathbf{W}^{(l)}\mathbf{x}^{(l-1)})-\mathrm{ReLU}(\mathbf{W}^{(l)}\bar{\mathbf{s}}^{(l-1)})+\mathrm{ReLU}(\mathbf{W}^{(l)}\bar{\mathbf{s}}^{(l-1)})-\mathrm{ClipFloor}(\mathbf{W}^{(l)}\bar{\mathbf{s}}^{(l-1)})\\ &=\mathbf{e}^{(l)}_{r}+\mathbf{e}^{(l)}_{c}\end{split} \tag{1}\] Here \(\mathbf{e}^{(l)}_{r}\) represents the error caused by approximating the continuous input by the spiking input, and \(\mathbf{e}^{(l)}_{c}\) represents the local conversion error caused by changing the smooth activation function to the spiking activation (IF). A major result in [49] is that we can bound the conversion error \(\mathbf{e}^{(n)}\) by the weighted sum of the local conversion errors \(\mathbf{e}^{(l)}_{c}\). This allows us to minimize the conversion error via optimizing each \(\mathbf{e}^{(l)}_{c}\). ### Results for general activation functions In fact, the results can be generalized to activation functions other than \(\mathrm{ReLU}\) by techniques similar to those in [49]. Since the generalized activation functions may have negative values, we introduce the idea of a negative threshold, a concept in SNNs that allows neurons to fire both positive and negative spikes, depending on their membrane potential [50]. A positive spike occurs when the membrane potential exceeds the positive threshold, and a negative spike occurs when it is below the negative threshold. This mimics the biological behavior of neurons that do not fire a spike when the membrane potential does not reach the threshold. The negative threshold can be applied to different types of SNNs and learning functions depending on the problem domain and the data characteristics. It is very helpful when the dataset contains negative values. 
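As an illustration only (our own sketch, not the implementation in [49]), the following numpy snippet simulates a rate-coded IF neuron with a positive and a negative firing threshold and compares its time-averaged output with a symmetric clip-and-floor staircase. This is the kind of discretized activation that replaces the smooth one during conversion; the threshold value and time-step count are arbitrary illustrative choices.

```python
import numpy as np

def if_neuron_rate(z, theta=1.0, T=32):
    """Average spike output of an IF neuron (soft reset, at most one spike per step)
    driven by a constant pre-activation z for T time steps; negative threshold included."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += z
        if v >= theta:        # positive spike
            v -= theta
            spikes += 1
        elif v <= -theta:     # negative spike (negative threshold)
            v += theta
            spikes -= 1
    return spikes * theta / T

def clip_floor(z, theta=1.0, T=32):
    """Symmetric clip-and-floor staircase matching the expected IF output above."""
    return np.sign(z) * np.clip(np.floor(np.abs(z) * T / theta), 0, T) * theta / T

for z in (-1.3, -0.4, 0.0, 0.37, 0.8, 1.6):
    print(f"z={z:+.2f}  IF={if_neuron_rate(z):+.4f}  ClipFloor={clip_floor(z):+.4f}")
```

The staircase output saturates at the threshold and quantizes the input in steps of \(\theta/T\), which is why larger \(T\) reduces the local conversion error in the experiments below.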
To formulate the conversion with calibration for generalized activations, we consider the conversion error decomposition \[\begin{split}\mathbf{e}^{(l)}&=\mathbf{x}^{(l)}-\bar{\mathbf{s}}^{(l)}\\ &=f(\mathbf{W}^{(l)}\mathbf{x}^{(l-1)})-\mathrm{ClipFloor}(\mathbf{W}^{(l)}\bar{\mathbf{s}}^{(l-1)})\\ &=f(\mathbf{W}^{(l)}\mathbf{x}^{(l-1)})-f(\mathbf{W}^{(l)}\bar{\mathbf{s}}^{(l-1)})+f(\mathbf{W}^{(l)}\bar{\mathbf{s}}^{(l-1)})-\mathrm{ClipFloor}(\mathbf{W}^{(l)}\bar{\mathbf{s}}^{(l-1)})\\ &=\mathbf{e}^{(l)}_{r}+\mathbf{e}^{(l)}_{c}\end{split} \tag{2}\] where \(\mathbf{e}^{(l)}_{r}\) and \(\mathbf{e}^{(l)}_{c}\) have the same meaning as in Eq 1. Then we can also use the weighted sum of the local conversion errors \(\mathbf{e}^{(l)}_{c}\) to bound the total conversion error \(\mathbf{e}^{(n)}\) by the following theorem. **Theorem 1**: _For any activation function whose function values and first-order derivatives can be uniformly approximated by piecewise linear functions and whose derivatives up to second order are bounded, the conversion error in the final network output space can be bounded by a weighted sum of local conversion errors, given by_ \[\mathbf{e}^{(n),\top}\mathbf{H}^{(n)}\mathbf{e}^{(n)}\leq\sum_{l=1}^{n}2^{n-l+1}\mathbf{e}^{(l),\top}_{c}(\mathbf{H}^{(l)}+K_{L}^{(l)}\sqrt{L}\mathbf{I})\mathbf{e}^{(l)}_{c} \tag{3}\] _where \(L\) is the training loss._ We present the detailed proof in the Appendix. The main technique is to find piecewise linear functions that approximate the smooth activation function. Therefore, the total conversion error is bounded from above by the local conversion errors. To achieve more accurate conversion performance, we can minimize the total conversion error by minimizing the local conversion error layerwise, which can be easily implemented as in [49]. We enlarge the last-layer threshold to preserve the maximum value of the output. The details are discussed in the Appendix. ## 4 Results ### Function regression We first present an example of function regression: using neural networks to approximate the \(\sin\) function. For the training dataset, the inputs are uniform mesh points on \([-\pi,\pi]\), and the outputs are the values of \(\sin\) at these points. The ANN model has two intermediate layers, each of which has 40 neurons. The activation function is \(\tanh\) for all layers except the last, which has no activation. The network is trained with the Adam optimizer until the training error is less than \(10^{-7}\). Then, we convert the ANN to an SNN with advanced calibration and different numbers of time steps. The results are shown in Figure 2. To further investigate the impact of \(T\) on the conversion error, we train networks with different numbers of intermediate layers \(L=2,3,4\) and neurons per layer \(N=20,40,60,80,100\). All the other setups are the same. We obtain the results shown in Figure 3. We find that the conversion error decreases with larger \(T\) (more specifically, conversion error \(\sim 1/T\)) when \(T<32\), but it becomes stable or larger after \(T\geq 32\). For a neural network with fixed depth, a larger layer width usually leads to a smaller conversion error. However, for a fixed layer width, deeper neural networks do not bring significantly better conversion performance. To validate Theorem 1, we need to compute \(\mathbf{e}^{(l),\top}\mathbf{H}^{(l)}\mathbf{e}^{(l)}\) for \(l=1,2,\dots,n\). 
Since \(\mathbf{H}^{(l)}\) is intractable, we replace it with the identity matrix and obtain only qualitative results. That is, we compute \(\left\|\mathbf{e}^{(l)}\right\|^{2}\) instead of \(\mathbf{e}^{(l),\top}\mathbf{H}^{(l)}\mathbf{e}^{(l)}\), so the computed RHS is \(\sum_{l=1}^{n}2^{n-l+1}\mathbf{e}^{(l),\top}\mathbf{e}^{(l)}\). We train ANNs with \(L=2,3,4\) layers and \(100\) neurons per layer. All the other setups are the same. The results are shown in Figure 4. Although this RHS term is not exactly the same as in Theorem 1, the trend agrees with our statement that the conversion error decreases as the RHS term does. Figure 4: The number of layers is \(L=2,3,4\) from left to right. The error here refers to \(\mathbf{e}^{(n)}\). RHS is defined as before. The total conversion error is smaller than the computed RHS and decreases with the RHS as well, which is an empirical validation of Theorem 1. Figure 3: The number of layers is \(L=2,3,4\) from left to right. The error here refers to \(\mathbf{e}^{(n)}\). When \(T\leq 64\), the conversion error follows \(\mathbf{e}^{(n)}\approx 1/T\), and \(\mathbf{e}^{(n)}\) becomes stable once \(T\) is very large. Figure 2: Results of converting an ANN, trained to approximate \(\sin\), to an SNN. The number of time steps is \(T=8,32,128\) from left to right. The output of the SNN is close to the ground truth, i.e., the output of the ANN. With increasing \(T\), the conversion error becomes smaller. When \(T=128\), the SNN output curve is almost smooth and similar to the ground truth. ### PINNs To show the power of SNN conversion in regression tasks, we train PINNs, MLP-based neural networks that can solve PDEs, and convert them to SNNs. We present results for the Poisson equation, the diffusion-reaction equation, the wave equation, the Burgers equation, and the Navier-Stokes equations. Poisson equation: The Poisson equation is often used to describe the potential field in electromagnetic theory. Here we solve the following boundary value problem of the Poisson equation \[\begin{split}-\Delta u(x)&=1,\quad x\in\Omega=[-1,1]\times[-1,1]\\ u(x)&=0,\quad x\in\partial\Omega\end{split} \tag{4}\] with a PINN. The network has \(3\) intermediate layers, each of which has 100 neurons. The activation function is \(\tanh\) except for the last layer. The network is trained for 50,000 epochs. Then we convert it into an SNN. The results are shown in Figure 5. We observe that the SNN converted with calibration can achieve the same magnitude of error as the PINN evaluated as an ANN, while the SNN converted without calibration only obtains a rough shape of the solution and has a much larger error. Diffusion-reaction equation: This equation models reactive transport in physical and biological systems. Here we use a PINN to solve the diffusion-reaction equation with the following initial condition: \[\begin{split} u_{t}-u_{xx}&=ku^{2},\quad x\in\Omega=[-1,1]\\ u(x,0)&=\exp\!\left(-\frac{x^{2}}{2\sigma^{2}}\right)\end{split} \tag{5}\] where \(k=1\), \(\sigma=0.25\), up to time \(T=0.01\). The network has \(3\) intermediate layers, each of which has 100 neurons. The activation function is \(\tanh\) except for the last layer. The network is trained for 100,000 epochs. Then we convert it into an SNN. The results are shown in Figure 6. Figure 5: Poisson equation: The results of converting a PINN solving the Poisson equation (4). Figure 5(a) is the reference solution. Figure 5(b) is the PINN result. 
Figure 5(c) is the result of the SNN converted from the PINN without using calibration. Figure 5(d) is the result of the SNN converted from the PINN using calibration. Here the L2 error and relative error are defined as \(\left\|\mathbf{x}^{(n)}-\mathbf{s}^{(n)}\right\|_{2}\) and \(\left\|\mathbf{x}^{(n)}-\mathbf{s}^{(n)}\right\|_{2}/\left\|\mathbf{x}^{(n)}\right\|_{2}\), where \(\mathbf{x}^{(n)}\) is the reference solution and \(\mathbf{s}^{(n)}\) is the neural network output. \(\left\|\cdot\right\|_{2}\) is the \(l^{2}\) norm, which is the root of the mean square error. Figure 6: Reaction-diffusion equation: The results of converting a PINN solving the nonlinear heat equation (5). Figure 6(a) is the reference solution. Figure 6(b) is the PINN result. Figure 6(c) is the result of the SNN converted from the PINN without using calibration. Figure 6(d) is the result of the SNN converted from the PINN using calibration. Wave equation: Here we use a PINN to solve a wave equation with the following initial and boundary conditions: \[\begin{split} u_{tt}-u_{xx}&=0,\quad x\in\Omega=[-1,1]\\ u(x,0)&=\begin{cases}1&x\in[-0.245,0.245]\\ 0&x\in[-1,-0.6]\cup[0.6,1]\\ \mathrm{linear}&\mathrm{otherwise}\end{cases}\\ u(-1,t)&=u(1,t)=0\end{split} \tag{6}\] up to time \(T=0.5\). The network has \(3\) intermediate layers, each of which has 100 neurons. The activation function is \(\tanh\) except for the last layer. The network is trained for 100,000 epochs. Then we convert it into an SNN. The results are shown in Figure 7. Viscous Burgers equation: The Burgers equation is a prototype PDE representing nonlinear advection-diffusion that occurs in fluid mechanics. Here we solve the following problem of the viscous Burgers equation: \[\begin{split}\frac{\partial u}{\partial t}-\frac{\partial}{\partial x}(\frac{1}{2}u^{2})&=\nu\frac{\partial^{2}u}{\partial x^{2}},\quad(x,t)\in[0,2\pi]\times[0,4]\\ u(x,0)&=\sin(x)\end{split} \tag{7}\] with a PINN. The network has \(6\) intermediate layers, each of which has \(40\) neurons. The activation function is \(\tanh\) except for the last layer. The network is trained for 100,000 epochs. Then we convert it into an SNN. The results are shown in Figure 8. Here we find that conversion without calibration does not give the correct position of the steep gradient, which drifts to the left (see Figure 8(c)), whereas the conversion with calibration keeps the steep-gradient position correct, which is important for the physics. Due to the discontinuous nature of SNNs, the conversion results are not smooth. However, we can still apply some filters to smooth the outputs. For example, we apply an FFT to the conversion results and remove the high frequencies; the results are shown in Figure 9. After the smoothing, the conversion error becomes lower, so this filtering can be adopted as a postprocessing procedure (a minimal sketch of such a low-pass filter is given after the figure captions below). Figure 8: Burgers equation: The results of converting a PINN solving the viscous Burgers equation (7). Figure 8(a) is the reference solution. Figure 8(b) is the PINN result. Figure 8(c) is the result of the SNN converted from the PINN without using calibration. Figure 8(d) is the result of the SNN converted from the PINN using calibration. Figure 7: Wave equation: The results of converting a PINN solving the wave equation (6). Figure 7(a) is the reference solution. Figure 7(b) is the PINN result. Figure 7(c) is the result of the SNN converted from the PINN without using calibration. Figure 7(d) is the result of the SNN converted from the PINN using calibration. 
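The FFT-based smoothing used as postprocessing above can be written in a few lines. The sketch below is our own illustration; the cutoff index is an arbitrary choice rather than a value reported by the authors.

```python
import numpy as np

def fft_lowpass(signal, keep=16):
    """Smooth a 1-D SNN output by zeroing all but the `keep` lowest frequency modes."""
    spectrum = np.fft.rfft(signal)
    spectrum[keep:] = 0.0                      # drop high-frequency spiking artifacts
    return np.fft.irfft(spectrum, n=len(signal))

# Illustrative usage: a coarsely quantized stand-in for a spiking readout of sin(x).
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
snn_like_output = np.round(np.sin(x) * 16) / 16
smoothed = fft_lowpass(snn_like_output, keep=8)
```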
### Accelerating training with separable PINNs (SPINNs) A current limitation of PINN-SNN conversion is the computational resources it requires, especially for training the ANN. Herein, we highlight the effectiveness of using the separable physics-informed neural networks to spiking neural networks (SPINN-SNN) conversion. The SPINN-SNN conversion pipeline enhances the speed of operations and provides great computational efficiency compared to the direct application of PINN. We implement the SPINN-SNN conversion to address two PDEs: a two-dimensional viscous Burgers equation and a three-dimensional unsteady Beltrami flow. We provide a comparative analysis of the training time taken by standard PINN and the SPINN-SNN conversion pipeline. Experimental results reveal that the application of SPINN-SNN conversion greatly enhances the speed of the training, particularly when solving high dimensional PDEs. Figure 10 displays a runtime comparison between SPINN-SNN conversion and PINN, applied to two and three-dimensional problems. When addressing the two-dimensional Burgers equation, the SPINN-SNN conversion process is approximately 1.7 times faster than PINN. The superiority of SPINN-SNN conversion becomes more apparent while solving the three-dimensional Beltrami flow problem, which is over 60 times faster than PINN. Notably, the time necessary for the SNN calibration remains relatively constant, regardless of the problem dimensionality. This indicates that the benefits of the SPINN-SNN conversion pipeline become increasingly prominent with the rise in the dimensionality of the problem. Viscous Burgers equationIn order to conduct a fair comparison between the PINN-SNN and SPINN-SNN conversions, the setup of the viscous Burgers equation is kept consistent with that described in Equation 7, and the same set of hyperparameters is utilized. Figure 11 presents the results of converting a SPINN to solve the Burgers' equation. The SPINN has individual subnetworks for each independent variable, \(x\) and \(t\). Each of these subnetworks comprises three intermediate layers, each layer containing 40 neurons, and employs the tanh activation function, except in the last layer. As depicted in Figure 11, the conversion from SPINN provides an accuracy level comparable to that of PINN. An SNN converted with calibration achieves a significantly smaller error than one converted without calibration. Beltrami FlowThe Navier-Stokes equations are fundamental in fluid mechanics as they mathematically represent the conservation of momentum in fluid systems, and there has been significant advancement in solving Navier-Stokes flow problems using scientific machine learning methods [51; 52; 53]. The Navier-Stokes equations can be presented in two forms: the velocity-pressure (VP) form and the vorticity-velocity (VV) form. The incompressible Navier-Stokes equations, in their VP form, are as follows: \[\frac{\partial\mathbf{u}}{\partial t}+(\mathbf{u}\cdot\nabla) \mathbf{u} =-\nabla p+\frac{1}{Re}\nabla^{2}\mathbf{u} \tag{8}\] \[\nabla\cdot\mathbf{u} =0\] Figure 10: The runtimes of the SPINN-SNN conversions solving the viscous Burgers equation (7) and Beltrami flow (8). Figure 10a is the runtime for the 2D viscous Burgers equation. Figure 10b is the runtime for the 3D Beltrami flow. Figure 9: Figure 9a is the original conversion result. Figure 9b is the smoothed conversion result. We chose the spatial domain to be \(\Omega\in[-1,1]^{2}\) and time interval to be \(\Gamma\in[0,1]\). 
Here, \(t\) is non-dimensional time, \(\mathbf{u}(\mathbf{x},t)=[u,v]^{T}\) the non-dimensional velocity in the \((x,y)\)-directions, \(p\) the non-dimensional pressure, and the Reynolds number \(Re=\frac{U_{ref}D_{ref}}{\nu}\) is defined by the characteristic length (\(D_{ref}\)), reference velocity (\(U_{ref}\)), and kinematic viscosity (\(\nu\)). In this example, we simulate a three-dimensional unsteady laminar Beltrami flow with \(Re=1\). The analytical solution of the Beltrami flow is [54]: \[\begin{split} u(x,y,t)&=-\cos x\sin y\ e^{-2t}\\ v(x,y,t)&=\sin x\cos y\ e^{-2t}\\ p(x,y,t)&=-\frac{1}{4}(\cos 2x+\cos 2y)\ e^{-4t}\end{split} \tag{9}\] The boundary and initial conditions are extracted from this exact solution. The PINN network comprises 4 intermediate layers, each containing 128 neurons. The activation function is tanh, applied to all layers except the final layer. The network is trained for 20,000 epochs before being converted into an SNN. The SPINN consists of separate subnetworks for each independent variable, u, v, and p. Every subnetwork has 2 intermediate layers, with each layer consisting of 50 neurons. They all utilize the tanh activation function, apart from the last layer. Figure 12 illustrates that the SPINN conversion offers accuracy similar to that of the PINN. The error for the pressure is slightly larger than the velocity errors in both the \(x\) and \(y\) directions. Nevertheless, the SNN, once calibrated and converted from the SPINN, attains good accuracy and notable speed improvements in comparison to the PINN. ### Firing rates of the converted SNN To demonstrate the potential efficiency of converting the ANN to the SNN, we computed the spiking rates for the different equations. The spiking rate is defined as the ratio of non-zero values in the output of each layer (a minimal sketch of this computation is given after the table and figure captions below). Prior works have suggested that SNNs with lower spiking rates will translate to energy-efficient implementations on neuromorphic hardware. The results are shown in Table 1. We observe a spiking rate \(<0.5\) in most cases, demonstrating that the SNNs expend fewer than \(50\%\) of their network computations. \begin{table} \begin{tabular}{c|c c} \hline Equations & Number of parameters & Spiking rate \\ \hline Poisson & 20601 & 0.3727 \\ Diffusion-reaction & 20601 & 0.2879 \\ Wave & 20601 & 0.5721 \\ Burgers & 8361 & 0.7253 \\ Burgers (Separable PINN) & 15500 & 0.2831 \\ N-S (Beltrami flow, Separable PINN) & 30900 & 0.1754 \\ \hline \end{tabular} \end{table} Table 1: The spiking rate of SNNs for different equations. Figure 11: Burgers equation with Separable PINN (SPINN): The results of converting a Separable PINN solving the viscous Burgers equation (7). Figure 11a is the reference solution. Figure 11b is the Separable PINN result. Figure 11c is the result of the SNN converted from the Separable PINN without using calibration. Figure 11d is the result of the SNN converted from the Separable PINN using calibration. 
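The spiking-rate statistic reported in Table 1 is simple to compute from recorded layer outputs. The following is a small sketch under our own assumption that the outputs are stored as a list of arrays (one per layer, possibly stacked over time steps); it is not the authors' code.

```python
import numpy as np

def spiking_rate(layer_outputs):
    """Fraction of non-zero entries across all recorded SNN layer outputs."""
    total = sum(np.asarray(out).size for out in layer_outputs)
    nonzero = sum(np.count_nonzero(out) for out in layer_outputs)
    return nonzero / total

# Illustrative usage with synthetic spike tensors of shape (T, batch, neurons).
rng = np.random.default_rng(0)
fake_outputs = [rng.binomial(1, 0.3, size=(32, 16, 100)) for _ in range(3)]
print(spiking_rate(fake_outputs))   # close to 0.3 for this synthetic example
```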
The results demonstrated that our approach achieved good accuracy with low spike rates, making it a promising and energy-efficient solution for scientific machine learning. By enabling the conversion of a wider range of neural networks to SNNs, this method opens up new possibilities for \begin{table} \begin{tabular}{c|c c} \hline \hline Equations & \(L^{2}\) error & Relative \(L^{2}\) error \\ \hline Poisson & \(7.8508\times 10^{-3}\) & \(4.7095\times 10^{-2}\) \\ Diffusion-reaction & \(2.8766\times 10^{-2}\) & \(6.3267\times 10^{-2}\) \\ Wave & \(3.8965\times 10^{-2}\) & \(6.5308\times 10^{-2}\) \\ Burgers & \(3.9884\times 10^{-2}\) & \(6.9781\times 10^{-2}\) \\ Burgers (Separable PINN) & \(6.6495\times 10^{-2}\) & \(1.1634\times 10^{-1}\) \\ N-S (Beltrami flow, Separable PINN) & \(8.2512\times 10^{-3}\) & \(4.3273\times 10^{-2}\) \\ \hline \hline \end{tabular} \end{table} Table 2: The \(L^{2}\) and relative \(L^{2}\) error of converted SNN for different equations. Figure 12: Beltrami flow with Separable PINN (SPINN): The results of converting a SPINN solving the Beltrami flow problem(7). Figure 12(a,e,i) are the reference solutions for \(\mathbf{u},\mathbf{v},\mathbf{p}\) respectively. Figure 12(b,f,j) are the SPINN results. Figure 12(c,g,k) are the PINN results. Figure 12(d,h,i) are the results of the SNN converted from the SPINN with calibration. harnessing the temporal processing capabilities and computational efficiency of SNNs in various scientific applications. The proposed approach contributes to advancing the field of spiking neural networks and their potential for practical real-world implementations in edge computing. Future research can explore further extensions of the calibration technique to other types of activation functions and investigate its performance on more complex neural network architectures. ## 6 Acknowledgements This work was supported in part by the DOE SEA-CROGS project (DE-SC0023191), the ONR Vannevar Bush Faculty Fellowship (N00014-22-1-2795), CoCoSys- a JUMP2.0 center sponsored by DARPA and SRC, and the National Science Foundation CAREER Award.
2309.15552
Startup success prediction and VC portfolio simulation using CrunchBase data
Predicting startup success presents a formidable challenge due to the inherently volatile landscape of the entrepreneurial ecosystem. The advent of extensive databases like Crunchbase jointly with available open data enables the application of machine learning and artificial intelligence for more accurate predictive analytics. This paper focuses on startups at their Series B and Series C investment stages, aiming to predict key success milestones such as achieving an Initial Public Offering (IPO), attaining unicorn status, or executing a successful Merger and Acquisition (M\&A). We introduce novel deep learning model for predicting startup success, integrating a variety of factors such as funding metrics, founder features, industry category. A distinctive feature of our research is the use of a comprehensive backtesting algorithm designed to simulate the venture capital investment process. This simulation allows for a robust evaluation of our model's performance against historical data, providing actionable insights into its practical utility in real-world investment contexts. Evaluating our model on Crunchbase's, we achieved a 14 times capital growth and successfully identified on B round high-potential startups including Revolut, DigitalOcean, Klarna, Github and others. Our empirical findings illuminate the importance of incorporating diverse feature sets in enhancing the model's predictive accuracy. In summary, our work demonstrates the considerable promise of deep learning models and alternative unstructured data in predicting startup success and sets the stage for future advancements in this research area.
Mark Potanin, Andrey Chertok, Konstantin Zorin, Cyril Shtabtsovsky
2023-09-27T10:22:37Z
http://arxiv.org/abs/2309.15552v1
# Startup success prediction and VC portfolio simulation using CrunchBase data ###### Abstract Predicting startup success presents a formidable challenge due to the inherently volatile landscape of the entrepreneurial ecosystem. The advent of extensive databases like Crunchbase jointly with available open data enables the application of machine learning and artificial intelligence for more accurate predictive analytics. This paper focuses on startups at their Series B and Series C investment stages, aiming to predict key success milestones such as achieving an Initial Public Offering (IPO), attaining unicorn status, or executing a successful Merger and Acquisition (M&A). We introduce novel deep learning model for predicting startup success, integrating a variety of factors such as funding metrics, founder features, industry category. A distinctive feature of our research is the use of a comprehensive backtesting algorithm designed to simulate the venture capital investment process. This simulation allows for a robust evaluation of our model's performance against historical data, providing actionable insights into its practical utility in real-world investment contexts. Evaluating our model on Crunchbase's, we achieved a 14 times capital growth and successfully identified on B round high-potential startups including Revolut, DigitalOcean, Klarna, Github and others. Our empirical findings illuminate the importance of incorporating diverse feature sets in enhancing the model's predictive accuracy. In summary, our work demonstrates the considerable promise of deep learning models and alternative unstructured data in predicting startup success and sets the stage for future advancements in this research area. ## 1 Introduction The prediction of startup success is a crucial task for various stakeholders, including investors, entrepreneurs, and policymakers, as it has significant implications for resource allocation and decision-making. It is estimated that approximately 90% of startups fail within their first five years, a failure rate that has remained relatively constant over the past few decades, despite considerable advancements in technology and business practices. Consequently, the accurate prediction of startup success can assist investors in more effectively allocating their resources and enable entrepreneurs to make better-informed decisions. Recently, the proliferation of data from sources such as Crunchbase has intensified interest in the application of machine learning techniques for the prediction of startup success. Machine learning models can harness various types of data, encompassing funding history, market trends, team composition, and social media activity, to identify patterns and generate predictions. This study presents two distinct methodologies for predicting startup success: a supervised deep learning approach leveraging multiple data sources, and a ranking-based approach focusing on the identification of characteristics common to successful startups and investors. The supervised approach entails collecting and labeling data, constructing a prediction model, and evaluating its performance. In contrast, the ranking-based approach centers on identifying startups and investors that exhibit shared characteristics with successful ones. Our train dataset consists of 34,470 companies The primary novelty of this research lies in the application of deep learning techniques and the integration of heterogeneous input data types. 
A crucial feature of our research is the simulation of fund operations based on historical data, resulting in a projected 14x capital growth of the fund's portfolio. As per machine learning metrics, our model exhibits a robust 86% ROC_AUC. The remainder of this paper is organized as follows: Section 2 reviews the related works in the area of startup success prediction and machine learning. Section 3 describes dataset collection, preprocessing, and feature selection. Section 4 presents the experimental results of the supervised approach. Section 5 describes some other ideas about company and investor scoring. Finally, Sections 6 and 7 provide the conclusion of the study and discuss prospective research avenues in this domain. ## 2 Related works The application of AI in fintech has substantially transformed the financial services industry over the past decades [1]. For example, one of the most well-known applications is credit risk assessment [2]. Another challenging task could be stock market prediction [3]. This paper focuses on startup prediction and the VC market, and there is a growing literature on analyzing investments using machine learning. In the paper [4], authors present a machine learning model, CapitalVX, trained on a large dataset obtained from Crunchbase, to predict the outcomes for startups, i.e., whether they will exit successfully through an IPO or acquisition, fail, or remain private. They investigated MLP, Random Forest, XGBoost and used mostly numerical features from the dataset. In [5] paper, authors conducted a review on existing machine learning techniques that are recently contributed to understanding the need of start-ups, trends of business and can provide recommendations to plan their future strategies to deal with the business problems. The study conducted by [6] underscores the potential of machine learning applications in the venture capital industry, demonstrating its ability to predict various outcomes for early-stage companies including subsequent funding rounds or closure. In another study [7], authors use behavioral decision theory to compare the investment returns of an algorithm with those of 255 business angels (BAs) investing via an angel investment platform. The study found that, on average, the algorithm achieved higher investment performance than the BAs. However, experienced BAs who were able to suppress their cognitive biases could still achieve best-in-class investment returns. This research presents novel insights into the interplay of cognitive limitations, experience, and the use of algorithms in early-stage investing. This study [8] proposes a data-driven framework, wherein the model was trained on 600,000 companies across two decades and 21 significant features. This review [9] provides a thorough analysis of AI applications in Venture Capital, categorizing influential factors on a company's probability of success or fund-raising into three clusters: team/personal characteristics, financial considerations, and business features. In another study [10], authors leveraged Crunchbase data from 213,171 companies to develop a machine learning model to predict a company's success. Despite limiting the number of predictors, it achieved promising results in precision, recall, and F1 scores, with the best outcomes from the gradient boosting classifier. This study [11] explores the untapped potential of web-based open sources in contrast to just structured data from the startup ecosystem. 
A significant performance enhancement is demonstrated by incorporating web mentions of the companies into a robust machine learning pipeline using gradient boosting. This study [12] aims to assist VC firms and Angel investors in identifying promising startups through rigorous evaluations, emphasizing the impact of founder backgrounds and capital collected in seed and series stages. This very recent paper published in 2023 [13] introduces a novel model for predicting startup success that incorporates both internal conditions and industry characteristics, addressing a gap in previous research that focused primarily on internal factors. Using data from over 218,000 companies from Crunchbase and six machine learning models, the authors found media exposure, monetary funding, the level of industry convergence, and the level of industry association to be key determinants of startup success. In this study [14], authors analyze more than 187,000 tweets from 253 new ventures' Twitter accounts achieving up to 76% accuracy in discriminating between failed and successful businesses. The research outlined in [15] investigates the methodologies used by venture capitalists when evaluating technology-based startups, using the influence of weak (Twitter sentiment) and strong (patents) signals on venture valuations. Findings reveal that while both signals positively associate with venture valuations, Twitter sentiment fails to correlate with long-term investment success, unlike patents. Furthermore, startup age and VC firm experience act as boundary conditions for these signal-valuation relationships. ## 3 Dataset Overview, Preprocessing, and Features We used daily Crunchbase database export (Daily CSV Export) as the primary data source, which is also supported by a well-documented API. The main goal of this research was to collect a labeled dataset for training a deep learning model to classify companies as either successful or unsuccessful. The analysis was based on the Daily CSV Export from 2022-06-14, and only companies established on or after 2000-01-01 were taken into account. To refine the focus of the research, only companies within specific categories were included, such as _Software_, _Internet Services_, _Hardware_, _Information Technology_, _Media and Entertainment_, _Conmerce and Shopping_, _Mobile_, _Data and Analytics_, _Financial Services_, _Sales and Marketing_, _Apps_, _Advertising_, _Artificial Intelligence_, _Professional Services_, _Privacy and Security_, _Video_, _Content and Publishing_, _Design_, _Payments_, _Gaming_, _Messaging and Telecommunications_, _Music and Audio_, _Platforms_, _Education_, and _Lending and Investments_. This research is focused on investment rounds occurring after round B. However, in the Crunchbase data glossary, rounds such as _series_unknown_, _private_equity_, and _undisclosed_, possess unclear characteristics. To incorporate them into the company's funding round history, we only included these ambiguous rounds if they occurred after round B. ### Successful Companies Dataset In this research, a company is deemed successful if it achieves one of three outcomes: Initial Public Offering (IPO), Acquisition (ACQ), or Unicorn status (UNIC), the latter being defined as a valuation exceeding $1 billion. To assemble a list of successful companies, we initially filtered for IPOs with valuations above $500M or funds raised over $100M, yielding 363 companies. 
For acquisitions, we applied filters to eliminate companies with a purchase price below the maximum amount of funds raised or under $100M, resulting in 833 companies. To select unicorns, we searched for companies with a valuation above $1 billion, utilizing both Crunchbase data and an additional table of verified unicorns, which led to a total of 1074 unicorns. The final dataset contains a timeline of all crucial investment rounds leading to the success event (i.e., achieving unicorn status, IPO, or ACQ), with the index of this event specified in the _success_round_ column. This approach ensures that the dataset accurately represents the history and progress of each successful company, facilitating effective analysis. ### Unsuccessful Companies Dataset To supply the model with examples of 'unsuccessful' companies, we collected a separate dataset. We excluded companies already present in the successful companies dataset by removing those that had IPO, ACQ, or UNIC flags. We also eliminated a considerable number of actual unicorns listed on the CrunchBase website [16] to avoid overlap. We excluded companies that have not attracted any rounds since 2016. Additionally, we excluded companies that are subsidiaries or parent companies of other entities. Furthermore, we used the jobs dataset to exclude companies that have hired employees since 2017. Additionally, we applied extra filters to exclude companies with a valuation above $100 million, as they reside in the "gray area" of companies that may not be clearly categorized as successful or unsuccessful. By applying these filters, we constructed a dataset comprising 32,760 companies, denoted by the label '0' for unsuccessful, and 1,989 companies, denoted by the label '1' for successful. ### Features The feature space of the model includes: Founders Features Categorical: _country_code_, _region_, _city_, _institution_name_, _degree_type_, _subject_. Numerical: _twitter_url_, _linkedin_url_, _facebook_url_, _gender_, _is_completed_, _num_degrees_, _num_last_startups_, _num_last_jobs_, _number_of_founders_. We incorporated three binary flags into our model to represent the presence of founders' social media links. Since a company can have multiple founders, it was essential to aggregate information on all the founders for each company. For categorical variables, the most frequent value from the list was used, and the median was used for numerical variables. Investors Features Categorical: _type_, _country_code_, _region_, _city_, _investor_types_. Numerical: _investment_count_, _total_funding_usd_, _twitter_url_, _linkedin_url_, _facebook_url_, _raised_amount_usd_, _investor_count_, _num_full_. Functions that generate features based on the founders' and investors' data incorporate a date parameter as input. This approach is necessary to prevent the model from using future information. For example, details about the number of companies founded or the founder's previous job experience that took place after the date of interest should not be incorporated into the feature set, to avoid information leakage from the future. Rounds Features Categorical: _country_code_, _investment_type_, _region_, _city_, _investor_name_. Numerical: _sum_, _mean_, _max_ of _raised_amount_usd_, _investor_count_, _post_money_valuation_usd_. It is crucial to emphasize that all features related to a company's investment rounds are gathered at a time point prior to the beginning of the time window of interest.
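To make the leakage guard described above concrete, the following sketch shows one possible way of generating founder features as of a cutoff date, dropping any record dated after that cutoff before aggregation. It is a simplified illustration with hypothetical column names (`company_id`, `founder_id`, `started_on`, etc.), not the production feature pipeline.

```python
import pandas as pd

def founder_features_as_of(founders: pd.DataFrame, jobs: pd.DataFrame,
                           company_id: str, cutoff: pd.Timestamp) -> dict:
    """Aggregate founder features for one company using only records
    dated strictly before `cutoff`, so no future information leaks in."""
    f = founders[founders["company_id"] == company_id]
    # keep only jobs/startups that started before the cutoff date
    past_jobs = jobs[jobs["founder_id"].isin(f["founder_id"]) &
                     (jobs["started_on"] < cutoff)]
    jobs_per_founder = past_jobs.groupby("founder_id").size()
    return {
        "number_of_founders": len(f),
        # median over founders, as done for numerical variables
        "num_last_jobs": float(jobs_per_founder.median()) if len(jobs_per_founder) else 0.0,
        # most frequent value over founders, as done for categorical variables
        "country_code": f["country_code"].mode().iloc[0] if len(f) else None,
        # binary flag for the presence of a social media link
        "has_linkedin": int(f["linkedin_url"].notna().any()),
    }
```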
Categories There are two additional types of text data - text tags representing the companies' field of work. For example: * category_list: _Internet_, _Social Media_, _Social Network_ * category_groups_list: _Data and Analytics_, _Information Technology_, _Software_ The set of tags used in our study consists of a list of keywords separated by commas. We used NMF (Non-Negative Matrix Factorization) to generate features from these tags. This process involves creating a binary table with companies represented as rows and tags as columns, where each value in the table indicates whether a given company is associated with a specific tag (1) or not (0). The trained matrix factorization then converts each binary vector into a lower-dimensional vector (in our case, of dimension 30). All categorical features are encoded using the OrdinalEncoder, while numerical features are normalized. ## 4 Model Training, Evaluation, and Portfolio Simulation A representation of the model's architecture is visualized in Figure 1. Figure 1: Model architecture ### Backtest The backtest period during which we tested the model spans from 2016-01-01 to 2022-01-01, and the model was retrained every 3 months. The retraining interval is a hyperparameter that could be tuned depending on the time/accuracy trade-off. In each iteration of the backtest, the time window under consideration is defined by the start and end dates. For example, the first iteration considers the window with a start date of 2016-01-01 and an end date of 2016-04-01. Companies that attracted Round B or C during this window are selected as the "test" set. The model is trained on the dataset described in Section 3. However, the entire dataset cannot be used for training, since it would be incorrect to train on companies founded in the future to predict the success of companies in the past. Therefore, only companies founded before the start of the current time window (i.e., before 2016-01-01 in the first iteration) are considered for training. Additionally, the success of a company (IPO/ACQ/UNICORN) may occur in the future relative to the current window. To train the model, only companies whose success event occurred before the start of the current time window are considered. This approach is designed to ensure the integrity of the backtesting process, avoiding any influence from future events. However, the drawback of this approach is the limited number of training examples at the beginning of the backtest (i.e., in the first iterations in 2016-2017). Consequently, the predictive power of the model is lower at the beginning of the backtest compared to the end. The backtest yields an array of test companies with a score assigned to them, indicating the level of success predicted by the model. The model is retrained every 3 months during the backtest, resulting in a total of 25 prediction windows. A sorted list of predictions is generated for each window. Finally, all predictions from all windows are compiled into one table, representing the complete backtest of predictions for the period from 2016-01-01 to 2022-01-01. This table is then passed to the optimization algorithm. A decision has been made to construct a monthly portfolio based on the backtest results. Therefore, we can conduct the backtest with a window of 1 month, covering the periods from 2016-01-01 to 2016-02-01, from 2016-02-01 to 2016-03-01, and so on, by adding or removing companies in our portfolio every month.
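The walk-forward splitting described above can be sketched as follows. This is a simplified outline under assumed column names (`founded_on`, `success_date`, `round_date`, `investment_type`) and one possible reading of the labeling rule, not the exact production code; its only purpose is to show how each training set is restricted to information available before the window starts.

```python
import pandas as pd

def walk_forward_windows(df: pd.DataFrame, start: str, end: str, step_months: int = 3):
    """Yield (window_start, train, test) splits for the walk-forward backtest."""
    edges = pd.date_range(start, end, freq=f"{step_months}MS")
    for w_start, w_end in zip(edges[:-1], edges[1:]):
        success_before = df["success_date"].notna() & (df["success_date"] < w_start)
        success_after = df["success_date"].notna() & (df["success_date"] >= w_start)
        # train only on companies founded before the window whose label is already known
        train = df[(df["founded_on"] < w_start) & ~success_after].copy()
        train["label"] = success_before.loc[train.index].astype(int)
        # test on companies that raised a Round B/C inside the window
        test = df[(df["round_date"] >= w_start) & (df["round_date"] < w_end) &
                  df["investment_type"].isin(["series_b", "series_c"])]
        yield w_start, train, test
```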
Using the example of a single period, say from 2018-01-01 to 2018-02-01, let us describe the process of selecting companies to be included in the portfolio. We take a slice of the backtest predictions in this period and sort them by score, which represents the model's assessment of the success of each company. As the size of our portfolio is limited, for instance, to 30 companies, there is no need to fill it entirely in the first months. Thus, the logic for adding companies is as follows: * In each month, we select the top 3 companies from the sorted list of predictions. A cut-off threshold for the predicted score has also been established; however, the choice of the optimal threshold is an empirical task and requires careful consideration. With the augmentation of the training dataset over time, the model becomes more confident in its predictions. Therefore, it makes sense to increase the threshold when moving along the backtest time. One way to do this is to set the threshold as a function that takes into account the size of the train dataset and other relevant factors. * Every month we review the current portfolio: * **success**: if the company has achieved a success event (IPO/ACQ/unicorn) during the month, it is removed from the active portfolio and marked with this flag. * **longtime**: if the company has not attracted any rounds within the last 730 days (2 years, a configurable parameter), it is removed from the portfolio and marked with this flag. * **still_in**: if the company is still in the portfolio at the end of the backtest, it is marked with this flag. These are companies that were recently added to the portfolio (in 2021-2022) and for which we cannot yet make a decision on their success. The result is a dataset that simulates our venture fund during the period 2016-2022, with companies added (and filtered out) every month. The resulting dataset contains the following fields: * a unique company identifier * the name of the company * the date of the round in which the fund entered the company * the valuation of the company at the time of entry (if available) * the company score at the time of entry (if available) * the date when the company was added to the portfolio * the date of the last round of funding, which could be an IPO, acquisition, or the round in which the company became a unicorn * the valuation of the company at the time of the last round of funding, if available * the reason for the fund's exit from the company (if applicable) * the date when the fund exited the company (due to success or expiration of the holding period) The reader may wonder why we retrain the model every 3 months while building the portfolio with a one-month interval. Essentially, at the beginning of the training set, we include all companies until 2016-01-01. The test set consists of companies that received Round B or C funding during the period from 2016-01-01 to 2016-04-01. We make predictions and add them to the overall table. Then, we expand the training data until 2016-04-01, and the test period becomes from 2016-04-01 to 2016-07-01, and so on. In the end, we have a complete test table covering the period from 2016-01-01 to 2022-01-01. After that, we go through this table with a one-month step, simulating our venture fund's behavior and assembling the portfolio. The fact that we first collect all predictions and then go through them to construct the portfolio is simply a matter of optimization. We do not look into the future in any way.
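A condensed sketch of the monthly portfolio loop described above is given below. The field names and the constant threshold function are illustrative assumptions; as noted above, the threshold would in practice grow as the training set grows, and the exit rules here only mirror the success/longtime/still_in flags.

```python
import pandas as pd

MAX_SIZE, TOP_K, HOLD_DAYS = 30, 3, 730

def simulate_portfolio(predictions, months, threshold_fn=lambda m: 0.5):
    """predictions: DataFrame with columns month, company_id, score,
    success_date, last_round_date. Returns (company_id, exit_reason) pairs."""
    portfolio, closed = {}, []
    for month in months:
        # review current holdings first
        for cid, pos in list(portfolio.items()):
            if pd.notna(pos["success_date"]) and pos["success_date"] <= month:
                closed.append((cid, "success")); portfolio.pop(cid)
            elif (month - pos["last_round_date"]).days > HOLD_DAYS:
                closed.append((cid, "longtime")); portfolio.pop(cid)
        # then add up to TOP_K new names that clear this month's threshold
        cands = predictions[(predictions["month"] == month) &
                            (predictions["score"] >= threshold_fn(month))]
        for _, row in cands.nlargest(TOP_K, "score").iterrows():
            if len(portfolio) < MAX_SIZE and row["company_id"] not in portfolio:
                portfolio[row["company_id"]] = row
    closed += [(cid, "still_in") for cid in portfolio]
    return closed
```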
### Backtest settings In this study, several experiments were conducted with different backtest configurations, which we call **earlybird** and **any**. The earlybird configuration permits entry only in rounds B or C, while the any configuration broadens the entry criteria to any round within the list _series_b_, _series_c_, _series_d_, _series_e_, _series_f_, _series_g_, _series_h_, _series_i_, _series_j_, as long as it occurs within the considered backtest window. The choice of entry configuration depends on the stage at which we enter the company. Similarly, the choice of exit configuration depends on when we decide to exit the company based on its success event (IPO/ACQ/unicorn), as discussed in Section 2.2. However, since the "unicorn" status can occur in the early rounds, there is a question of which round to exit at. Two approaches were considered: using the **first** approach we exit the company when the first success event occurs, while using the **last** approach we exit on the last success event, analogous to "we sit until the end." The main approach used in this study is **earlybird_last**, due to business requirements. However, this approach has its drawbacks, such as the fact that the company success flag becomes known later in time, resulting in a smaller dataset size for training at the beginning of the backtest and a slightly lower quality of the backtest compared to the **earlybird_first** approach. ### Results The primary output of the algorithm is the backtest Table 2, sorted by the time the company was added to the portfolio. The table includes an _exit_reason_ column, which serves as the main metric for evaluating model quality on the backtest. This column can take on the following values: * **success**: the company had a successful round (unicorn/acquisition/IPO), and we exited * **longtime**: a negative case where we left the company because it didn't have a successful event and had no rounds for two years * **still_in**: a gray area, mainly consisting of companies that were recently added to the backtest Hence, an optimal backtest is characterized by the maximum quantity of successful companies and a minimal number of companies categorized as **longtime**. Table 2 (_earlybird_last_) is the basic configuration based on business requirements. We enter in the first rounds (B/C) and exit in the last round. However, the model may not work very well at the beginning of the backtest due to limited data for training. In the Table 3 (_any_last_) configuration, we can observe a large number of known unicorns, simply because we allow the model to enter in later rounds. ### Capital Growth Traditional metrics utilized in machine learning may not be directly transferable to the AI investor due to changes in data availability over time and class imbalance in the dataset. Therefore, we assess the model's performance based on the presence of well-known companies in the resulting portfolio and the financial growth of the companies. In this subsection, we focus on the latter assessment. To calculate the PnL of successful companies, we need the company valuation at the entry and exit rounds. The valuation of companies that exited due to longtime is set to zero. For companies marked as **still_in**, we use their last known valuation, since they are the youngest companies in the portfolio. The PnL is divided into realized and unrealized components.
The unrealized PnL illustrates the current cumulative valuation of the portfolio, incorporating the presently known rounds; in contrast, the realized PnL denotes the cumulative sum garnered by exiting thriving companies and the consequent capital growth. Results with exit reasons and valuations are presented in Table 2. Unfortunately, we did not have valuation data for all companies. There is a column "Used in Capital Growth" that shows whether the company was used to calculate the PnL. We present cumulative PnL and the current portfolio size over time in Figure 2, with a step size of 1 month. The sharp rise in the middle of 2021 corresponds to the exit from Revolut. The companies that remained in the portfolio at the end of 2021 are all marked as **still_in**. Overall, the PnL graph shows a positive trend, indicating the financial growth of the portfolio over time. Figure 2: Capital growth To evaluate the algorithm via conventional machine learning metrics, we employ cross-validation for time-series analysis with a 1-year test window, spanning the years from 2016 to 2022. Within this test window, we focus on companies that secured B or C funding rounds during a given year and subsequently achieved success. Furthermore, to ensure the integrity of our analysis, the training dataset for each fold exclusively comprises companies whose success or failure status was known prior to the commencement of the test window. Standard binary classification metrics can be used to evaluate the performance of the model, and Recall is of particular interest to us. The minimization of False Negatives (FN) holds greater significance than that of False Positives (FP) in order to circumvent the omission of successful companies. Finally, in Table 1 we present metrics that have been averaged across 6 folds for a comprehensive evaluation of our predictive model's performance: \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Precision** & **Recall** & **ROC AUC** & **PR AUC** \\ \hline 0.92 & 0.64 & 0.86 & 0.65 \\ \hline \end{tabular} \end{table} Table 1: Metrics ## 5 Other approaches ### Investors ranking model All investors could be scored in terms of frequency, amount, and field of investments. Also, an investor could be an indicator of a company's potential failure or success. This scoring was carried out in three stages: 1. Through an autoencoder model with several modalities, we created vector representations for each investor 2. Based on experts' estimates, we selected a group of top investors and created the centroid of this group in the vector space 3. We ranked investors according to their distance from the centroid A higher score corresponds to closer alignment with top investors. Results are presented in Table 4. If the lead investor of a company has a low score, it could be an indicator that such a company should be excluded from consideration. **Example:** Company 14W has a score of 0.9 and invests in IT companies, including unicorns (for example, the European travel management startup TravelPerk). ### Founders ranking model Based on certain characteristics - the number of previous startups (as founder or co-founder), their area, their success, etc. - we can also score founders. A higher score is indicative of a company's enhanced credibility. The results of these models can be used both for preliminary scoring of companies and as independent features in other models. An example is presented in Table 5.
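A minimal sketch of the centroid-based scoring used for investors (and, analogously, for founders) is shown below. The cosine-similarity distance and the rescaling to [0, 1] are assumptions made for illustration; the text does not fix the exact metric.

```python
import numpy as np

def centroid_scores(embeddings: np.ndarray, top_idx: list[int]) -> np.ndarray:
    """embeddings: (n_investors, d) autoencoder vectors;
    top_idx: indices of the expert-selected top investors.
    Returns one score per investor; higher means closer to the top-investor centroid."""
    centroid = embeddings[top_idx].mean(axis=0)
    sim = embeddings @ centroid / (
        np.linalg.norm(embeddings, axis=1) * np.linalg.norm(centroid) + 1e-12)
    return (sim + 1) / 2  # map cosine similarity from [-1, 1] to [0, 1]
```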
### Unicorn recommendation model It was found that the median time for a company to achieve "unicorn" status is 4-5 years. Thus, within this period about half of the eventual unicorns have already reached this status, while the other half are expected to reach it in the near future. This model identifies nascent companies established within this 4-5 year time frame, isolates the 'unicorns' within this subset, scores the remaining companies by their resemblance to them, and subsequently generates a list of the top 30 recommendations. For the 2016-2021 simulation run: * On Jan 1st of each year, a list of recommendations of potential unicorns is formed. * Every month, in case of the announcement of a round (series_X), a company is added to the portfolio if its valuation is below $1 billion and the round is not too late. * Companies that have reached $2.5 billion or have not had rounds for 3 years are removed from the portfolio. As a result, at the end of the period, a portfolio of companies was formed. The main limitation in this context is the scarcity of post_money_valuation data. As further development, a more complex recommendation system can be built as new data become available. The results are presented in Table 6. ## 6 Conclusion Traditionally, venture capital investment decisions have largely been guided by the investors' intuition, experience, and market understanding. While these elements remain significant, there's a growing recognition that these traditional approaches can be greatly enhanced by integrating data-driven insights into the investment decision-making process. Our paper comprehensively examines a predictive model for startups based on an extensive dataset from CrunchBase. A meticulous review and analysis of the available data were conducted, followed by the preparation of a dataset for model training. Special attention was given to the selection of features, which include information about founders, investors, and funding rounds. The article also describes a thoughtfully designed backtest algorithm, enabling a fair evaluation of the model's behavior (and the simulation of a VC fund based on it) from a historical perspective. Rigorous efforts were made to avoid data leakage, ensuring that training at any given point only utilized data that would have been known at that time. Several configurations were explored regarding the funding rounds at which the fund could invest in a company and the timing of exits. The primary evaluative metrics were derived from a backtest table (Table 2), which chronicles instances of company entries, exits, and the corresponding success status. Utilizing additional data on company valuations, we calculated the Capital Growth, illustrating the fund's impressive economic impact over time. To sum up, this work primarily focused on the variety of input features, the integrity of the backtest, and the realistic simulation of the portfolio from a historical perspective. Additionally, we offer a series of propositions aimed at enhancing the existing model, primarily revolving around access to supplementary data repositories. Within the highly competitive and dynamic investment environment, the assimilation of data-driven decision-making practices transitions from being an option to becoming a necessity. As such, venture capitalists who effectively harness the potential of AI and machine learning will likely secure a significant competitive advantage, positioning themselves for success in the new era of venture capitalism.
## 7 Further Research In terms of further work, a promising direction is the use of different sources of text data about companies, founders, and investors. This could involve leveraging social media platforms such as Twitter and LinkedIn, as well as parsing the websites of the companies themselves. Additionally, it may be worth adjusting the foundation date filter to include companies founded from 1995 onward, rather than the current start date of 2000-01-01. However, this could potentially result in an influx of companies from the "dotcom bubble" period. The current strict filters used to determine successful companies (IPO/ACQ/UNICORN) could also be loosened to potentially capture more companies in the "gray area" between success and failure. Finally, it may be worth conducting experiments to determine the optimal threshold value for adding companies to the portfolio, taking into account the size of the portfolio. These additional tasks can provide valuable insights and enhance the effectiveness of the AI investor backtest model. Analyzing the presentation materials, video interviews, and source code of software companies can provide a better understanding of a company's strategy, goals, and potential. Developing information collection systems to automate this process can save time and improve accuracy. Evaluating the influence of macroeconomic elements and technological trajectories on startups may facilitate the identification of potential risks and opportunities. It can also aid in the development of exit strategies. Additionally, analyzing competing studies can provide insights into the market and competition, which can inform investment decisions.
2309.05741
Conductance and thermopower fluctuations in interacting quantum dots
We model an interacting quantum dot of electrons by a Hamiltonian with random and all-to-all single particle hopping (of r.m.s. strength $t$) and two-particle interactions (of r.m.s. strength $J$). For $t \ll J$, such a model has a regime exhibiting the non-quasiparticle physics of the Sachdev-Ye-Kitaev model at temperatures $E_{\rm coh} \ll T \ll J$, and that of a renormalized Fermi liquid at $T \ll E_{\rm coh}$, where $E_{\rm coh} = t^2 / J$. Extending earlier work that has computed the mean thermoelectric properties of such a dot weakly coupled to two external leads, we compute the sample-to-sample fluctuations in the conductance and thermopower of such a dot, and describe several distinct regimes. In all cases, the effect of the SYK interactions is to reduce the strength of the sample-to-sample fluctuations. We also find that in the regime where the mean transport coefficients are determined only by the value of $J$ at leading order, the sample-to-sample fluctuations can be controlled by the influence of the smaller $t$.
Henry Shackleton, Laurel E. Anderson, Philip Kim, Subir Sachdev
2023-09-11T18:04:54Z
http://arxiv.org/abs/2309.05741v1
# Conductance and thermopower fluctuations in interacting quantum dots ###### Abstract We model an interacting quantum dot of electrons by a Hamiltonian with random and all-to-all single particle hopping (of r.m.s. strength \(t\)) and two-particle interactions (of r.m.s. strength \(J\)). For \(t\ll J\), such a model has a regime exhibiting the non-quasiparticle physics of the Sachdev-Ye-Kitaev model at temperatures \(E_{\rm coh}\ll T\ll J\), and that of a renormalized Fermi liquid at \(T\ll E_{\rm coh}\), where \(E_{\rm coh}=t^{2}/J\). Extending earlier work that has computed the mean thermoelectric properties of such a dot weakly coupled to two external leads, we compute the sample-to-sample fluctuations in the conductance and thermopower of such a dot, and describe several distinct regimes. In all cases, the effect of the SYK interactions is to reduce the strength of the sample-to-sample fluctuations. We also find that in the regime where the mean transport coefficients are determined only by the value of \(J\) at leading order, the sample-to-sample fluctuations can be controlled by the influence of the smaller \(t\). ###### Contents * I Introduction * II Setup * II.1 Hamiltonian and transport coefficients * II.1.1 Single site lead coupling * II.1.2 All-to-all couplings * II.1.3 Disordered all-to-all couplings * II.2 Comparison to other analyses * III Free fermion analysis * III.1 Conductance statistics * III.2 Thermopower statistics * IV Pure SYK analysis * IV.1 Conductance statistics * IV.2 Thermopower statistics * V Interplay between hoppings and interactions * V.1 Fermi liquid regime * V.2 SYK Regime * V.3 Thermopower statistics * VI Conclusion * A Path integral calculation of fluctuations * B Replica off-diagonal fluctuations in the \((G,\Sigma)\) action * C Statistics of ratio distributions * D Conductance fluctuations for single-lead coupling ## I Introduction The Sachdev-Ye-Kitaev (SYK) model [1; 2] is a strongly interacting quantum many-body system without quasiparticle excitations, whose exact solvability in the large-\(N\) limit - with \(N\) the number of sites - has led to significant interest in it both as a toy model for non-Fermi liquid behavior and as an analytically tractable example of holographic duality [3; 4]. In contrast to its analytic solvability, experimental realizations of the SYK model have proved to be challenging. The SYK model is defined microscopically as a system of fermions with random all-to-all quartic interactions, and is unstable at low temperatures to single-particle hopping. As such, any experimental proposal must generate strongly-disordered interactions with a high degree of connectivity, while simultaneously quenching any single-particle hopping terms. Several promising proposals have been made to this end, involving Majorana zero modes [5; 6], quantum processors [7; 8], ultracold gases [9; 10; 11] and disordered graphene flakes [12; 13]. Simulations of the SYK model have been achieved on quantum processors [14] and controllable nuclear-spin-chain simulators [15]. Our study here was motivated by experiments on disordered graphene flakes [16], results of which will be reported in a separate paper [17]. Each experimental realization of the SYK model will have a different set of observables that it is best suited to study. Our focus will be on proposals for realizing the SYK model with complex fermions in a disordered graphene flake, for which the measurable quantities are thermoelectric transport observables, such as conductance and thermopower.
Theoretical predictions for the average values of these quantities have been calculated [18] for realistic models that include both SYK terms and experimentally-relevant perturbations. The conclusion of this analysis is that thermoelectric quantities display a crossover from Fermi liquid-like behavior at temperatures below a coherence energy \(E_{\rm coh}=t^{2}/J\), where small single-particle hopping terms, with r.m.s. value \(t\), produce coherent quasiparticle excitations, to SYK-like behavior at temperatures \(E_{\rm coh}\ll T\ll J\), where \(T\) is the temperature, and \(J\) is the r.m.s. value of the SYK interactions. In experimental realizations of these mesoscopic systems, transport quantities such as the conductance and thermopower will display sample-to-sample fluctuations, or alternatively fluctuations as a function of tuning external parameters such as chemical potential or magnetic field. For weakly-interacting disordered quantum dots coupled to broad multi-channel leads, this results in the well-studied phenomenon of _universal conductance fluctuations_ (UCF) at zero temperature, where the conductance displays \(\mathcal{O}(1)\) fluctuations (in units of the conductance quantum, \(e^{2}/h\)) whose magnitude is independent of the disorder strength [19; 20; 21; 22]. An analogous treatment of disorder fluctuations in strongly-correlated quantum dots has not been explored previously. In this work, we analyze the fluctuations in transport properties in quantum dots with strong SYK interactions, and study the behavior of these fluctuations as their average values cross over from Fermi liquid-like for \(T\ll E_{\rm coh}\) to SYK-like for \(T\gg E_{\rm coh}\). We contrast the analysis of these properties in the SYK regime, which involves statistical fluctuations of the _single-particle_ Green's function, with the large body of work analyzing statistical properties of the many-body spectrum [23; 24; 25; 26]. Our analysis is able to recover UCF behavior for zero temperature, while the variance of the conductance in the Fermi liquid regime displays a \(T^{-1}\) falloff at higher temperatures, consistent with prior studies of weakly-interacting disordered quantum dots [27]. However, we find a surprising feature of these fluctuations for temperatures larger than the coherence energy. In contrast to the mean values of transport quantities, whose behavior for \(T\gg E_{\rm coh}\) is well-described by a pure SYK model (\(t=0\)), the same is not true for the variance - at leading order in \(N^{-1}\), the variance of the conductance for a pure SYK model is distinct from the variance in a model with SYK interactions and random hopping with r.m.s. value \(t>\sqrt{TJ}/N\). The self-averaging properties of the pure SYK model are so strong that, to leading order in \(N^{-1}\), fluctuations of the physical transport properties remain driven by fluctuations of the random hopping terms, even if their mean values are well-described by the pure SYK solution. Distinct predictions are still found for the two temperature regimes, arising from the different form of the average spectral function in the two limits, and we find a \(T^{-2}\) suppression of the conductance variance in the SYK regime in contrast with the \(T^{-1}\) Fermi liquid prediction.
These aspects of our results are illustrated by the following summary of our predictions for the mean (\(\overline{\sigma}\)) and variance of the electrical conductance (\(\sigma\)): \[\overline{\sigma}_{FF}\propto\frac{\Gamma e^{2}}{\hbar}\frac{1}{t}\,,\qquad\text{Var }\sigma_{FF}\propto\left(\frac{\Gamma e^{2}}{\hbar}\right)^{2}\frac{1}{NtT} \tag{1}\] \[\overline{\sigma}_{SYK}\propto\frac{\Gamma e^{2}}{\hbar}\frac{1}{\sqrt{JT}}\,,\qquad\text{Var }\sigma_{SYK}\propto\left(\frac{\Gamma e^{2}}{\hbar}\right)^{2}\frac{1}{N^{3}JT} \tag{2}\] \[\overline{\sigma}_{tSYK}\propto\frac{\Gamma e^{2}}{\hbar}\frac{1}{t}\,,\qquad\text{Var }\sigma_{tSYK}\propto\left(\frac{\Gamma e^{2}}{\hbar}\right)^{2}\frac{1}{NJT}\,,\quad T\ll E_{\rm coh} \tag{3}\] \[\overline{\sigma}_{tSYK}\propto\frac{\Gamma e^{2}}{\hbar}\frac{1}{\sqrt{JT}}\,,\qquad\text{Var }\sigma_{tSYK}\propto\left(\frac{\Gamma e^{2}}{\hbar}\right)^{2}\frac{\mathcal{E}^{2}t^{2}}{NJ^{2}T^{2}}\,,\quad E_{\rm coh}\ll T\ll J \tag{4}\] Here (_i_) \(FF\) refers to the free-fermion results in Section III.1, with \(\Gamma\) a measure of the coupling to the leads, and the variance of \(\sigma_{FF}\) crosses over to the UCF value when \(T<\Gamma^{2}/Nt\); (_ii_) the pure SYK results are in Section IV.1; (_iii_) \(tSYK\) refers to the model with both hopping and interactions with \(t\ll J\), \(E_{\rm coh}=t^{2}/J\), the results for \(T\ll E_{\rm coh}\) are in Section V.1, and the results for \(E_{\rm coh}\ll T\ll J\) are in Section V.2 (\(\mathcal{E}\) is a measure of the particle-hole asymmetry). All these results are obtained for the case where the coupling to the leads, \(\Gamma\), is the smallest energy scale, and to leading order in a \(1/N\) expansion. Note that in all cases, the effect of the SYK interactions is to _reduce_ the strength of the conductance fluctuations: (_i_) Eq. 2 is suppressed by a factor of \(1/N^{3}\) in contrast to \(1/N\) in all other cases, (_ii_) Eq. 3 is smaller than Eq. 1 by a factor of \(t/J\), and (_iii_) Eq. 4 is smaller than Eq. 3 by a factor of \(E_{\rm coh}/T\). The structure of this paper is as follows. In Section II, we make explicit the setup of our theoretical model as well as the assumptions used in calculating thermoelectric quantities. In Section III, we calculate the fluctuations of transport quantities in the non-interacting limit, where properties are governed by single-particle random matrix theory (RMT). We emphasize that this approach is distinct from more standard approaches of modeling UCF phenomena using RMT [28], where calculations are done at zero temperature and involve the statistical treatment of transmission eigenvalues. Our treatment is primarily done at non-zero temperature and in the limit of weak environmental coupling, although we show that it is possible to extend our results down to zero temperature and recover \(\mathcal{O}(1)\) universal fluctuations in an appropriate limit. In Section IV, we study transport fluctuations in the SYK regime, presenting results both for pure SYK as well as more realistic models with random single-particle hopping. In Section V, we study a model with both SYK and random hopping terms and demonstrate that the transport fluctuations for \(T\gg E_{\rm coh}\) are qualitatively different from those of a pure SYK model. In each of these sections, we discuss the fluctuations of the thermopower in addition to the conductance.
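For orientation, the short sketch below evaluates the coherence scale and the variance scalings of Eqs. 1-4 for a given parameter set. It keeps only the parametric dependence - all O(1) prefactors left unspecified by the proportionalities are dropped - so it is a rough guide to the regimes rather than a quantitative prediction.

```python
def variance_scalings(N, t, J, T, Gamma, E_asym=1.0, hbar=1.0, e=1.0):
    """Parametric estimates of Var(sigma) from Eqs. 1-4 (O(1) prefactors dropped)."""
    pref = (Gamma * e**2 / hbar) ** 2
    E_coh = t**2 / J  # coherence energy separating Fermi liquid and SYK regimes
    return {
        "E_coh": E_coh,
        "free_fermion": pref / (N * t * T),                                # Eq. (1), J = 0
        "pure_SYK": pref / (N**3 * J * T),                                 # Eq. (2), t = 0
        "tSYK_Fermi_liquid": pref / (N * J * T),                           # Eq. (3), T << E_coh
        "tSYK_SYK_regime": pref * (E_asym * t) ** 2 / (N * J**2 * T**2),   # Eq. (4), E_coh << T << J
    }

# example: N = 1000 sites, t = 0.1 J, so E_coh = 0.01 J
print(variance_scalings(N=1000, t=0.1, J=1.0, T=0.05, Gamma=0.01))
```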
The statistical properties of the thermopower require more care; in our formalism, we find that the thermopower is determined by a ratio of two Gaussian random variables, and hence the variance is formally not well-defined. An approximation to normality is still appropriate in certain parameter regimes for small fluctuations around the mean, and hence we can formally define a variance within this approximation. We state results given this assumption and find qualitatively similar behavior to the conductance variance, namely that the presence of strong SYK interactions serves to reduce the fluctuations around the mean value. ## II Setup ### Hamiltonian and transport coefficients Our goal is to characterize fluctuations in transport properties of disordered quantum dots with random all-to-all interactions. We model this quantum dot by the Hamiltonian \[H_{\rm dot}=\frac{1}{(2N)^{3/2}}\sum_{ij;kl}J_{ij;kl}c_{i}^{\dagger}c_{j}^{\dagger}c_{k}c_{l}+\frac{1}{N^{1/2}}\sum_{ij}t_{ij}c_{i}^{\dagger}c_{j}-\mu\sum_{i}c_{i}^{\dagger}c_{i} \tag{5}\] where \(J_{ij;kl}\) and \(t_{ij}\) are complex random numbers with zero mean and variances \(J^{2}\) and \(t^{2}\), respectively. The complex SYK model is given by the first term, whereas the second term is a random single-particle hopping which leads to Fermi liquid behavior at low temperatures. The quantum dot is coupled to two leads. Following the approach of [29], we model the leads by considering the Hamiltonian \[H=H_{\rm dot}+\sum_{\bf q}\epsilon_{\bf q}a_{\bf q}^{\dagger}a_{\bf q}+\sum_{i,{\bf q},\alpha}\left[\lambda_{i\alpha}c_{i}^{\dagger}a_{{\bf q}\alpha}+\lambda_{i\alpha}^{*}a_{{\bf q}\alpha}^{\dagger}c_{i}\right]\,, \tag{6}\] where \(\alpha=R\,,L\) labels the right and left leads. To parameterize the coupling to the leads, we define the matrices \[\Gamma_{ij}^{\alpha}=\pi\rho_{\rm lead,\alpha}\lambda_{i\alpha}\lambda_{j\alpha}^{*}\,, \tag{7}\] where \(\rho_{\rm lead,\alpha}\) is the density of states in lead \(\alpha\) near the Fermi level. We will assume \(\rho_{\rm lead,L}=\rho_{\rm lead,R}\equiv\rho_{\rm lead}\). We will find that the nature of the conductance fluctuations depends sensitively on how we model the coupling to the leads, \(\lambda_{i\alpha}\). This is in contrast to the mean values of transport quantities, which are not as sensitive. We first make the assumption that the two couplings are proportional to each other, i.e. \(\lambda_{iR}=\alpha\lambda_{iL}\) for some constant \(\alpha\). With this constraint, it becomes possible to express transport properties solely in terms of the equilibrium Green's functions of the quantum dot. Using expressions derived in [30], we define \[\mathcal{L}_{ab}=-\frac{i}{2\pi\hbar}\int_{-\infty}^{\infty}{\rm d}\omega\,\omega^{a+b-2}f^{\prime}(\omega)\,{\rm Im}\,{\rm Tr}\left[\mathbf{\Gamma}\mathbf{G}^{R}\right]\,, \tag{8}\] with \(\mathbf{G}^{R,A}(\omega)\) the local retarded and advanced Green's function of \(H\), both \(N\times N\) matrices, \(f(\omega)\) the Fermi function \(f(\omega)=1/\left(e^{\omega/T}+1\right)\), and \(\mathbf{\Gamma}\equiv\mathbf{\Gamma}^{L}\mathbf{\Gamma}^{R}/\left(\mathbf{\Gamma}^{L}+\mathbf{\Gamma}^{R}\right)\). For cases where the matrix \(\mathbf{\Gamma}^{L}+\mathbf{\Gamma}^{R}\) is non-invertible, this equation is modified by omitting the null subspace of the matrix.
The Green's functions must be solved for the full Hamiltonian, including the coupling to the leads; however, we will primarily be focused on the parameter regime where \(\Gamma\) is the smallest energy scale and the Green's functions of the isolated system \(H_{\rm dot}\) are used. The electric conductance \(\sigma\), thermal conductance \(\kappa\), and thermopower \(\Theta\) are given by \[\sigma=e^{2}{\cal L}_{11}\,,\quad\kappa=\beta\left({\cal L}_{22}- \frac{{\cal L}_{12}^{2}}{{\cal L}_{11}}\right)\,,\quad\Theta=\frac{\beta}{e} \frac{{\cal L}_{12}}{{\cal L}_{11}}\,. \tag{9}\] where \(\beta=1/T\). Beyond this point, we must make further assumptions on the nature of the coupling to the leads. For notational simplicity, we will assume \(\lambda_{iR}=\lambda_{iL}\equiv\lambda_{i}\) - generalization to the case where the magnitude of the couplings are asymmetric does not qualitatively affect our results. #### ii.1.1 Single site lead coupling In this model, we take our two leads to be coupled to a single site, i.e. \(\lambda_{i}\equiv\delta_{i1}\lambda\). Defining \(\Gamma\equiv\pi\rho_{\rm lead}\big{|}\lambda\big{|}^{2}\), we have \[\overline{{\cal L}_{ab}}=\frac{2\Gamma}{\pi\hbar}\int_{-\infty}^{ \infty}{\rm d}\omega\,\omega^{a+b-2}f^{\prime}(\omega)\overline{\rm Im\,G_{11 }^{R}(\omega)} \tag{10}\] Recall that the Green's functions are dependent on the random variables \(J_{ij;kl},t_{ij}\). Averaging over disorder, we find that \(\overline{\rm Im\,G_{11}^{R}(\omega)}=N^{-1}\sum_{ii}\overline{\rm Im\,G_{ii}^ {R}(\omega)}\equiv\overline{\rm Im\,G^{R}(\omega)}\). Note that this relation relies on neglecting corrections to \(\mathbf{G}^{R}\) arising from the couplings to the leads, as these corrections will be site-dependent. Higher moments of these transport coefficients are given by \[\overline{{\cal L}_{ab}{\cal L}_{ab}}-\overline{{\cal L}_{ab}} \,\overline{{\cal L}_{ab}}=\left(\frac{2\Gamma}{\pi\hbar}\right)^{2}\int_{- \infty}^{\infty}{\rm d}\omega\,{\rm d}\epsilon\,\omega^{a+b-2}\epsilon^{a+b-2 }f^{\prime}(\omega)f^{\prime}(\epsilon)\rho_{d}(\omega,\epsilon) \tag{11}\] where we define \[\rho_{d}(\omega,\epsilon)\equiv\frac{1}{N^{2}}\sum_{ij}\left[ \overline{\rm Im\,G_{ii}^{R}(\omega)\,\rm Im\,G_{jj}^{R}(\epsilon)}-\overline {\rm Im\,G_{ii}^{R}(\omega)}\,\overline{\rm Im\,G_{jj}^{R}(\epsilon)}\right] \tag{12}\] The subscript \(d\) indicates that this quantity describes the covariance of the _diagonal_ component of the Green's function, \(G_{ii}^{R}\). #### ii.1.2 All-to-all couplings Here, we take the leads to be coupled to all sites with equal hopping, \(\lambda_{i}\equiv\frac{\lambda}{\sqrt{N}}\). This model is also appropriate for hoppings that are equal in magnitude but with site-dependent phases, as the overall phase can be absorbed by a unitary transformation on the quantum dot operators. Defining \(\Gamma\equiv\pi\rho_{\text{lead}}|\lambda|^{2}\) as before, we have \[\overline{\mathcal{L}_{ab}}=\frac{1}{N}\sum_{ij}\frac{2\Gamma}{\pi\hbar}\int_{- \infty}^{\infty}\text{d}\omega\,\omega^{a+b-2}f^{\prime}(\omega)\overline{ \operatorname{Im}G^{R}_{ij}(\omega)}=\frac{2\Gamma}{\pi\hbar}\int_{-\infty}^{ \infty}\text{d}\omega\,\omega^{a+b-2}f^{\prime}(\omega)\overline{ \operatorname{Im}G^{R}(\omega)}\,. \tag{13}\] where we utilize the fact that \(\overline{G^{R}_{ij}(\omega)}=0\) for \(i\neq j\). The overall scaling of \(N^{-\frac{1}{2}}\) in \(\lambda_{i}\) was chosen such that the mean value of the conductance is consistent with the previous model. 
The second moment is given by \[\overline{\mathcal{L}_{ab}\mathcal{L}_{ab}}-\overline{\mathcal{L}_{ab}}\, \overline{\mathcal{L}_{ab}}=\frac{1}{N^{2}}\sum_{ij,kl}\left(\frac{2\Gamma}{ \pi\hbar}\right)^{2}\int_{-\infty}^{\infty}\text{d}\omega\,\text{d}\epsilon \,\omega^{a+b-2}\epsilon^{a+b-2}f^{\prime}(\omega)f^{\prime}(\epsilon)\left[ \rho_{d}(\omega,\epsilon)+\rho_{o}(\omega,\epsilon)\right]\,. \tag{14}\] where now we define the off-diagonal Green's function covariance, \[\rho_{o}(\omega,\epsilon)\equiv\frac{1}{N^{2}}\sum_{ij}\left[\overline{ \operatorname{Im}G^{R}_{ij}(\omega)\operatorname{Im}G^{R}_{ji}(\epsilon)}- \overline{\operatorname{Im}G^{R}_{ij}(\omega)}\,\overline{\operatorname{Im}G^ {R}_{ji}(\epsilon)}\right]\,. \tag{15}\] #### ii.1.3 Disordered all-to-all couplings If our sites physically correspond to spatially random modes, as is the case in graphene realizations of strongly interacting quantum dots in the zeroth Landau level, then it may be appropriate to model the coupling to the leads as additional random variables. To analyze this case, we treat \(\lambda_{i}\) as Gaussian random variables: \[\overline{\lambda_{i}} =0\,, \tag{16}\] \[\overline{\lambda_{i}^{*}\lambda_{j}} =\delta_{ij}\frac{\lambda^{2}}{N}\,,\] which in turn implies that \(\overline{\Gamma_{ij}^{\alpha}}=\delta_{ij}\frac{\pi\rho_{\text{lead}} \lambda^{2}}{N}\equiv\delta_{ij}\frac{\Gamma}{N}\). Crucial to the calculation of fluctuations, we note the identity \[\overline{\Gamma_{ij}^{\alpha}\Gamma_{kl}^{\beta}}=\left(\frac{\Gamma}{N} \right)^{2}\left(\delta_{ij}\delta_{kl}+\delta_{il}\delta_{jk}\right)\,. \tag{17}\] The average values of the transport coefficients are the same as in the previous models. Using the relation \[\overline{(\Gamma_{ij}^{R}+\Gamma_{ij}^{L})(\Gamma_{kl}^{R}+\Gamma_{kl}^{L})} =\left(\frac{2\Gamma}{N}\right)^{2}\left(\delta_{ij}\delta_{kl}+\delta_{il} \delta_{jk}\right)\,, \tag{18}\] we can obtain higher moments of the transport coefficients. This leads to a result for the variance almost identical to the uniform all-to-all couplings in the previous section. The crucial difference is that in this case, the disconnected component of \(\rho_{o}(\omega,\epsilon)\), defined in Eq. 15, is not subtracted off in the expression for the variance of \(\mathcal{L}_{ab}\). The consequence of this is a trivial contribution to the variance of \(\mathcal{L}_{ab}\), which is given by \(N^{-1}\overline{\mathcal{L}_{ab}}^{2}\) and can be thought of as being driven by the disorder in the leads in contrast to the intrinsic disorder in the quantum dot. While this is suppressed by a factor of \(N^{-1}\), we will find that fluctuations generically only appear at the order or higher, so this contribution cannot be disregarded on these grounds. We have shown that the variance of transport quantities, such as the conductance, are determined by the single-particle Green's function covariances \(\rho_{d}\), \(\rho_{o}\). The primary focus of our paper will be an analysis of these functions, and their implications for conductance fluctuations. For concreteness, we will give our predictions for conductance fluctuations in a model with uniform all-to-all couplings, such that both \(\rho_{o}\) and \(\rho_{d}\) contribute, and so that there is no trivial contribution to the variance arising from disordered leads. We summarize results for single-mode couplings in Appendix D. 
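As a concrete illustration of how the transport coefficients of this section can be evaluated, the sketch below estimates the mean and variance of \(\mathcal{L}_{11}\) in the non-interacting limit (\(J=0\)) with uniform all-to-all lead couplings, by sampling random hopping matrices and applying Eq. 13 (in units with \(\hbar=1\)). The broadening \(\eta\), the frequency grid, and all parameter values are illustrative choices rather than those used for the figures in this paper.

```python
import numpy as np

def sample_L11(N=200, t=1.0, mu=0.0, T=0.1, Gamma=0.01, eta=1e-2,
               n_samples=200, n_omega=401, seed=0):
    """Monte Carlo estimate of mean and variance of L_11 for the J = 0 dot
    with uniform all-to-all lead couplings (cf. Eq. 13), with hbar = 1."""
    rng = np.random.default_rng(seed)
    omega = np.linspace(-8 * T, 8 * T, n_omega)
    fprime = -np.exp(omega / T) / (T * (np.exp(omega / T) + 1) ** 2)  # f'(omega)
    vals = []
    for _ in range(n_samples):
        # random Hermitian hopping matrix with entries of r.m.s. strength t / sqrt(N)
        A = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2)
        H = (A + A.conj().T) / np.sqrt(2) * t / np.sqrt(N) - mu * np.eye(N)
        evals, U = np.linalg.eigh(H)
        # (1/N) sum_ij Im G^R_ij(w), resolved on the eigenbasis and broadened by eta
        weights = np.abs(U.sum(axis=0)) ** 2 / N
        ImG = (weights[None, :] * (-eta) /
               ((omega[:, None] - evals[None, :]) ** 2 + eta ** 2)).sum(axis=1)
        vals.append(2 * Gamma / np.pi * np.trapz(fprime * ImG, omega))
    vals = np.asarray(vals)
    return vals.mean(), vals.var()
```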
### Comparison to other analyses Due to the extensive literature on conductance fluctuations in mesoscopic systems, we make precise here the connection between our setup and prior work. The most well-established results for conductance fluctuations pertain to the \(T=0\) conductance of a sufficiently weakly-interacting quantum dot such that a single-particle picture is appropriate. In this limit, conductance fluctuations can be understood most directly via a random matrix analysis of the scattering matrices, which take values in the circular ensemble [31; 32]. An alternative approach, suitable for studying the effects of non-zero temperature and weak magnetic fields, is to start with a microscopic single-particle Hamiltonian modeled as a random matrix, much like our Hamiltonian in the limit \(J=0\). In the non-interacting limit, the conductance for a generic set of lead couplings \(\lambda_{jL}\), \(\lambda_{jR}\) is given by the Landauer formula for a single channel, \[\begin{split}\sigma&=\frac{e^{2}}{h}\int\mathrm{d}\omega\,f^{\prime}(\omega)t(\omega)t^{*}(\omega)\,,\\ t(\omega)&\equiv 2\pi\rho_{\text{lead}}\sum_{ij}\lambda_{iL}^{*}G_{ij}^{R}(\omega)\lambda_{jR}\,.\end{split} \tag{19}\] The conductance variance is thus related to the disorder average of four copies of \(G^{R}\), solved in the presence of the leads. This becomes a tractable problem in the limit where the number of channels in the leads is large, and can be dealt with rigorously using supersymmetry techniques [33; 34; 27; 35; 36] to give results consistent with random matrix predictions. This formulation can also be generalized to non-zero temperature [27], where a \(T^{-1}\) falloff of the conductance variance is observed. However, these supersymmetry techniques cannot be generalized to accommodate strong interactions. Our results are most appropriately compared to the prior results on closed quantum dots, where the number of channels is small and the channels are weakly coupled to the dot. These works have primarily focused on the \(T=0\) conductance in either the weakly-interacting limit [37; 38], where non-Gaussian behavior of the conductance was found, or in the Coulomb blockade-dominated limit [32], which found a non-Gaussian distribution of the conductance peaks. To our knowledge, conductance fluctuations in the parameter regime of closed quantum dots with \(T\gg\Gamma\) and for negligible Coulomb blockade effects have not been studied previously. This is the regime where we will conduct our analysis, as it is in this limit that the effects of strong SYK interactions become analytically tractable. ## III Free Fermion Analysis ### Conductance statistics We begin with an analysis of conductance fluctuations in the non-interacting limit (\(J=0\)). In this limit, the conductance is independent of temperature [18], \[\overline{\sigma}=\frac{e^{2}}{\hbar}\frac{\Gamma\sqrt{4t^{2}-\mu^{2}}}{\pi t^{2}}\,. \tag{20}\] In order to understand the behavior of conductance fluctuations, we must calculate the single-particle covariances \(\rho_{d,o}(\omega,\epsilon)\). This may be done diagrammatically, only keeping diagrams to leading order in \(N^{-1}\). We do this by calculating the covariance of the Green's function in imaginary time and analytically continuing to the real axis. The calculation of \(\rho_{d}\) involves analytic continuation of the quantity \(\sum_{ij}G_{ii}(i\omega)G_{jj}(i\epsilon)\), and for \(\rho_{o}\), \(\sum_{ij}G_{ij}(i\omega)G_{ji}(i\epsilon)\).
Diagrams that contribute to the covariance of the Green's function consists of diagrams of pairs of Green's functions that are only connected along disorder lines. The structure of these diagrams is shown in Fig. 1. The diagrammatic structure of both the \(\rho_{d}\) and \(\rho_{o}\) fluctuations are similar - both involve an infinite summation over a set of ladder diagrams, given in the first figure in Fig. 1. The leading order contributions to \(\rho_{o}\) are just given by this set of diagrams. For \(\rho_{d}\), two additional classes of diagrams must be considered and are shown in Fig. 1. The first class yields an \(n\)-fold degeneracy of ladders with \(n\) rungs, and the second class gives additional disorder averaging on either side of the ladder rungs. Putting all this together, we obtain the final form for the Green's function covariances, \[g_{d}(i\omega,i\epsilon) \equiv \frac{1}{N^{2}}\sum_{ij}\left(\overline{G_{ii}(i\omega)G_{jj}(i \epsilon)}-\overline{G_{ii}(i\omega)}\times\overline{G_{jj}(i\epsilon)}\right)\] \[= \frac{1}{N^{2}}\frac{t^{2}G(i\omega)^{2}G(i\epsilon)^{2}}{\left[1 -t^{2}G(i\omega)G(i\epsilon)\right]^{2}}\frac{1}{1-t^{2}G(i\omega)^{2}}\frac{1 }{1-t^{2}G(i\epsilon)^{2}} \tag{21}\] \[g_{o}(i\omega,i\epsilon) \equiv \frac{1}{N^{2}}\sum_{ij}\left(\overline{G_{ij}(i\omega)G_{ji}(i \epsilon)}-\overline{G_{ij}(i\omega)}\times\overline{G_{ji}(i\epsilon)} \right)=\frac{1}{N}\frac{t^{2}G(i\omega)^{2}G(i\epsilon)^{2}}{1-t^{2}G(i \omega)G(i\epsilon)}\] where in the RHS, we use the average Green's function \[G_{0}(i\omega)=\frac{i\omega+\mu}{2t^{2}}-i\frac{\text{sgn}(\omega)}{2t^{2}} \sqrt{4t^{2}+(\omega-i\mu)^{2}}\,. \tag{22}\] To obtain expressions for \(\rho_{o,d}\), we analytically continue these to the real axis, \[\rho_{\alpha}=-\frac{1}{4}\left[g_{\alpha}(\omega^{+},\epsilon^{+})+g_{\alpha }(\omega^{-},\epsilon^{-})-g_{\alpha}(\omega^{-},\epsilon^{+})-g_{\alpha}( \omega^{+},\epsilon^{-})\right] \tag{23}\] where \(\omega^{\pm}\equiv\omega\pm i\eta\), \(\eta\to 0\). The expression for \(\rho_{d}\) has been derived before using a similar diagrammatic approach [39], although we are not aware of an analogous calculation for \(\rho_{o}\). Figure 1: Ladder diagrams that contribute to the fluctuations of the single-particle spectral function. The first class of diagrams contributes to both the covariances \(\rho_{d}\) and \(\rho_{o}\), with the contribution to \(\rho_{d}\) coming from the \(i=j\) case. The last two classes only contribute to \(\rho_{d}\). Disorder-averaging of the single-particle hopping (SYK interactions) is represented in red (blue). From this analysis, we see that fluctuations arising from \(\rho_{o}\) are enhanced relative to the \(\rho_{d}\) fluctuations by a factor of \(N\), and hence will be the main focus of our analysis. However, we will show that a more careful analysis of \(\rho_{d}\) will be necessary to recover UCF behavior at zero temperature. Due to the form of the average Green's function, we find a singular behavior for the Green's function covariances in Eq. 
21 for \(|\omega-\epsilon|\to 0\), as \[\begin{split} 1-t^{2}G(\omega^{+})G(\epsilon^{-})&= \frac{1}{t}\left(-\eta+\frac{i}{2}(\omega-\epsilon)\right)+\mathcal{O}\big{(}( \omega/t)^{2},(\epsilon/t)^{2}\big{)}\,,\\ \rho_{d}(\omega,\epsilon)&=-\frac{1}{8N^{2}}\, \text{Re}\left[\frac{1}{(i(\omega-\epsilon)/2-\eta)^{2}}\right]\\ \rho_{o}(\omega,\epsilon)&=-\frac{1}{2Nt}\,\text{ Re}\left[\frac{1}{i(\omega-\epsilon)/2-\eta}\right]\end{split} \tag{24}\] The above divergence holds for arbitrary chemical potential \(\mu\). We see that the \((\omega-\epsilon)^{-2}\) divergence in \(\rho_{d}(\omega,\epsilon)\) is _independent_ of the energy scale \(t\). The correlation function \(\rho_{d}\) determines fluctuations of the single-particle energy levels - for the non-interacting system, the distribution of single-particle energy levels is determined by the Gaussian Unitary Ensemble (GUE) in which fluctuations are known to take this universal form [39; 40; 41]. For \(T\neq 0\), this divergence may be regulated by carefully taking the \(\eta\to 0\) limit in the analytic continuation to the real axis. We state the calculation in a general form, for use later. For real-valued functions \(A(\omega)\), \(B(\omega)\), and \(\rho(\omega-\epsilon)=\rho(\epsilon-\omega)\), we have the identity \[\int\text{d}\omega\,\text{d}\epsilon\,A(\omega)B(\epsilon)\rho(\omega-\epsilon )=\sqrt{2\pi}\int\text{d}k\,\text{Re}\left[\widetilde{A}(k)\widetilde{B}^{*}( k)\right]\widetilde{\rho}(k)\,. \tag{25}\] where we define the Fourier transform \(\widetilde{A}(k)\equiv\frac{1}{\sqrt{2\pi}}\int e^{-ikx}A(x)\,\text{d}x\). The Fourier transform of the Green's function covariances are: \[\begin{split}\widetilde{\rho}_{d}(k)&=\frac{1}{8N^ {2}}\sqrt{\frac{\pi}{2}}|k|\,,\\ \widetilde{\rho}_{o}(k)&=\frac{\sqrt{2\pi}}{4Nt}\,. \end{split} \tag{26}\] This analysis for \(\rho_{d}\) recovers the well-known Dyson-Mehta formula for the variances of linear statistics in RMT [40; 41], and the more general covariance formula for linear statistics [42]; however, these fluctuations are a factor of \(N^{-1}\) smaller than the contributions from \(\rho_{o}\) fluctuations. This formula yields the result for the conductance variance, \[\text{Var}\;\sigma=\left(\frac{\Gamma e^{2}}{\hbar}\right)^{2}\frac{2}{3\pi tTN }\,, \tag{27}\] which agrees well with a numerical simulation, shown in Fig. 2. This expression is valid for \(\Gamma^{2}\ll NTt\), as suggested by the \(T\to 0\) divergence. In order to obtain results for \(T=0\), a more careful treatment of the coupling to the leads is required. To obtain a \(T\to 0\) result, we must include the self-energy arising from the coupling to the leads. The form of this correction is dependent on the manner in which we choose the coupling. For uniform all-to-all couplings, we have \[\Sigma_{ij}(i\omega)=2\int\mathrm{d}\epsilon\,\frac{\rho(\epsilon){|\lambda|} ^{2}}{i\omega-\epsilon}\approx\frac{2i\Gamma}{N}\mathrm{sgn}(\omega) \tag{28}\] where \(\rho\) is the density of states in the leads, which we approximate by its value at the Fermi level. To leading order in \(\Gamma\), \[\overline{G_{ij}(i\omega)}=\delta_{ij}G_{0}(i\omega)+\frac{2i\Gamma G_{0}(i \omega)^{2}}{N}\mathrm{sgn}(\omega) \tag{29}\] Figure 2: We plot the mean and variance of both the conductance and thermopower, calculated in the non-interacting (\(J=0\)) limit of our disordered quantum dot and using Eq. 8 averaged over 100000 realizations of the hoppings \(t_{ij}\). We set the chemical potential \(\mu=0.33\). 
In this calculation, the Green’s function of the quantum dot is solved independent of the leads. We set the strength of the leads coupling \(\Gamma=0.1\) in order for the mean thermopower and conductance to have comparable magnitudes, although we emphasize that this value only appears as an overall coefficient in the conductance. These numerical results are compared with the analytic predictions given in Eq. 27 and Eq. 34, which show good agreement. Using this result for \(G\) in Eq. 21, we can directly evaluate the \(T=0\) conductance fluctuations \[\Gamma^{2}\rho_{d}(0,0) =-\frac{1}{32}+\mathcal{O}(\Gamma/t)\,, \tag{30}\] \[\Gamma^{2}\rho_{o}(0,0) =\mathcal{O}(\Gamma/t)\,,\] \[\text{Var }\sigma(T=0) =\left(\frac{e^{2}}{h}\right)^{2}\left(\frac{1}{8}+\mathcal{O}( \Gamma/t)\right)\,.\] Note that in this case, \(\rho_{d}\) and \(\rho_{o}\) contribute at the same order; the \(T=0\) divergences are regulated by an \(\mathcal{O}(N^{-1})\) self-energy, and \(\rho_{d}\) is more singular at \(T=0\). We stress that this result is not rigorous - as evident from the above results, this manner of including the corrections from the leads is not done consistently in an \(N^{-1}\) expansion. A proper extrapolation down to \(T=0\) necessitates, for example, the use of supersymmetric techniques [34]. ### Thermopower statistics Although less well-studied than conductance fluctuations, thermopower fluctuations have been studied analytically for single-mode contacts at \(T=0\)[43] and for broad contacts [44]. Experimental measurements [45; 46] have found good agreement with these predictions. Our analysis will fall in a distinct parameter regime to these results, where we consider a quantum dot weakly coupled to its environment, at temperatures much larger than the coupling strength. In the free fermion limit, the mean thermopower vanishes linearly with temperature [18] \[\overline{\Theta}=\frac{\pi^{2}T}{3e}\frac{\mu}{4t^{2}-\mu^{2}}\,. \tag{31}\] The linear temperature dependence is a consequence of the linear temperature dependence of the entropy, and hence is generic for systems with quasiparticle excitations. In our framework, the statistical properties of the thermopower is determined by the _ratio_ of two random variables, \(\mathcal{L}_{12}\) and \(\mathcal{L}_{11}\). As higher order moments are suppressed by additional factors of \(N^{-1}\), our transport coefficients are Gaussian to leading order in \(N^{-1}\). The thermopower distribution is then determined by the ratio of two Gaussian statistics, which in general is non-Gaussian. Nevertheless, an approximation to Gaussian is appropriate [47] for capturing small fluctuations around the mean value, so long as the width of the Gaussian distribution is small relative to the mean. We provide more details on this approximation in Appendix C. A similar approach was used to characterize fluctuations of the Fano factor in weakly-interacting quantum dots [48]. Such an approximation requires knowledge of the covariance of the two quantities \(\mathcal{L}_{12}\) and \(\mathcal{L}_{11}\). This is given by \[\text{Cov}(\mathcal{L}_{11},\mathcal{L}_{12})=-\left(\frac{\Gamma}{\pi\hbar} \right)^{2}\int\text{d}\omega\,\text{d}\epsilon\,\omega f^{\prime}(\omega)f^{ \prime}(\epsilon)\left[\rho_{d}(\omega,\epsilon)+\rho_{o}(\omega,\epsilon)\right] \tag{32}\] which vanishes by an application of Eq. 25. 
Therefore to leading order in \(N^{-1}\), the random variables \(\mathcal{L}_{11}\) and \(\mathcal{L}_{12}\) are both uncorrelated and have a bivariate normal distribution, so we treat them as independent. With this assumption, the typical fluctuations of \(\Theta\) around its mean value \(\frac{\beta}{e}\frac{\overline{\mathcal{L}_{12}}}{\overline{\mathcal{L}_{11}}}\) are Gaussian with variance \[\frac{\text{Var }\Theta}{\overline{\Theta}^{2}}=\frac{\text{Var }\mathcal{L}_{11}}{\overline{\mathcal{L}_{11}}^{2}}+\frac{\text{Var }\mathcal{L}_{12}}{\overline{\mathcal{L}_{12}}^{2}}\,, \tag{33}\] with \[\text{Var }\mathcal{L}_{11} =\left(\frac{\Gamma}{\hbar}\right)^{2}\frac{2}{3\pi NTt}\quad \overline{\mathcal{L}}_{11}=\frac{\Gamma}{\hbar}\frac{\sqrt{4t^{2}-\mu^{2}}}{ 2\pi t^{2}}\] \[\text{Var }\mathcal{L}_{12} =\left(\frac{\Gamma}{\hbar}\right)^{2}\frac{(\pi^{2}-6)T}{9\pi Nt }\quad\overline{\mathcal{L}_{12}}=-\frac{\Gamma}{\hbar}\frac{\pi\mu T^{2}}{6t ^{2}\sqrt{4t^{2}-\mu^{2}}}\,. \tag{34}\] This analytic prediction agrees well with the numerically calculated variance, shown in Fig. 2. Similar to the conductance variance, the thermopower variance scales as \(T^{-1}\) at low temperature, although the fact that the mean value scales linearly with temperature means that, in contrast to the conductance, the variance normalized by the mean squared diverges as \(T^{-3}\). ## IV Pure SYK analysis ### Conductance statistics We now move to an analysis of conductance fluctuations for a pure SYK model (\(t=0\)), where the average value takes the form at half-filling \[\overline{\sigma}=\frac{e^{2}}{\hbar}\frac{0.72\Gamma}{\sqrt{JT}} \tag{35}\] (the exact value of the prefactor is \(2\sqrt{2}\pi^{-1/4}\Gamma(3/4)\Gamma(1/4)\approx 0.72\)). Deviations away from half-filling only constitute a change in the numerical coefficient. For full generality, we present results for an SYK\({}_{q}\) model with \(q\)-fermion interactions - the case \(q=4\) is the one of experimental relevance. The diagrammatic prescription for calculating the Green's function covariances \(\rho_{d}\), \(\rho_{o}\) remain the same, and we consider pairs of Green's functions that are only connected via disordered lines. The \(N\) scaling of disorder-connected diagrams has been considered in SYK-like models previously [4; 49; 50; 51], although an explicit evaluation of such diagrams has only been carried out for the off-diagonal covariance, \(\rho_{o}\), in the Majorana SYK model [50]. The simplest leading-order diagram, which contributes to both \(\rho_{d}\) and \(\rho_{o}\), is shown on top in Fig. 3. This contributes to \(\rho_{o}\) with coefficient \(N^{1-q}\) and \(\rho_{d}\) with coefficient \(N^{-q}\). The different coefficients arise because the \(\rho_{d}\) contribution appears with a factor of \(\delta_{ij}\). We find that \(\rho_{d}\) contains additional "ladder" diagrams as shown in the bottom of Fig. 3 that also contribute at \(\mathcal{O}(N^{-q})\). Both these covariances can be evaluated analytically in the conformal limit, when \(\beta J\gg 1\). To see this, we examine the first diagram, in the top of Fig. 3. 
In the conformal limit, the Green's functions take the following form: \[g^{R}(\overline{\omega},\overline{T})=-ie^{-i\theta}\left(\frac{\pi}{\cos 2 \theta}\right)^{1/4}\left(\frac{1}{2\pi\overline{T}}\right)^{1/2}\frac{\Gamma \left(\frac{1}{4}-\frac{i\overline{\omega}}{2\pi\overline{T}}+i\mathcal{E} \right)}{\Gamma\left(\frac{3}{4}-\frac{i\overline{\omega}}{2\pi\overline{T}}+ i\mathcal{E}\right)} \tag{36}\] where \(\theta\) and \(\mathcal{E}\) characterize the spectral asymmetry and are related to the total charge \(\mathcal{Q}\) by \[\begin{split}\mathcal{E}&=\frac{1}{2\pi}\ln\frac{ \sin(\pi/4+\theta)}{\sin(\pi/4-\theta)}\,,\\ \mathcal{Q}&=\frac{1}{2}-\frac{\theta}{\pi}-\frac{ \sin(2\theta)}{4}\,.\end{split} \tag{37}\] We have the bounds \(-\frac{\pi}{4}\leqslant\theta\leqslant\frac{\pi}{4}\) which implies \(0\leqslant\mathcal{Q}\leqslant 1\), and the particle-hole symmetric point is \(\mathcal{Q}=\frac{1}{2}\). Note that in contrast to the free fermion case, the SYK solution is most easily analyzed in the canonical ensemble with fixed charge \(\mathcal{Q}\). These Green's functions satisfy the Figure 3: a) The leading-order diagram for a pure SYK model that contributes to the Green’s function covariance. This contributes to \(\rho_{o}\) with a factor of \(N^{1-q}\), and the specialized \(i=j\) case contributes to \(\rho_{d}\) with a factor of \(N^{-q}\). b) For the diagonal covariance \(\rho_{d}\), the diagram in a) is the first in an infinite series of diagrams, generated from the first by attaching ladder rungs to either the top or bottom diagram in the manner shown here. We have deformed the diagram from a) in order to more clearly illustrate the structure of the ladder rungs. Schwinger-Dyson (SD) equation \(\Sigma(\omega)G(\omega)=-1\), \(\Sigma(\tau_{1},\tau_{2})=J^{2}G\left(\tau_{1},\tau_{2}\right)^{\frac{q}{2}}(-G( \tau_{2},\tau_{1}))^{\frac{q}{2}-1}\). We can evaluate the \(\tau\) integrals of the top and bottom part of the Feynman diagram independently. Making use of the conformal SD equations, we find that each of these parts evaluates to \[\begin{split}& J^{2}\int\mathrm{d}\tau_{a}\,\mathrm{d}\tau_{b}\,G( \tau_{1},\tau_{a})G(\tau_{a},\tau_{b})^{\frac{q}{2}}(-G(\tau_{b},\tau_{a}))^{ \frac{q}{2}-1}G(\tau_{b},\tau_{2})\\ &=\int\mathrm{d}\tau_{a}\,\mathrm{d}\tau_{b}\,G(\tau_{1},\tau_{a })\Sigma(\tau_{a},\tau_{b})G(\tau_{b},\tau_{2})=-G(\tau_{1},\tau_{2})\,.\end{split} \tag{38}\] A careful analysis of combinatoric factors from the disorder lines yields the result \[\rho_{o}(\omega,\epsilon)\frac{(q/2)!(q/2-1)!}{N^{q-1}}\,\mathrm{Im}\left[G^{ R}(\omega)\right]\mathrm{Im}\left[G^{R}(\epsilon)\right]\,. \tag{39}\] For calculating \(\rho_{d}\), the summation of ladder diagrams in Fig. 3 must be carried out. These ladder diagrams are well-studied in the SYK literature; in particular, evaluation in the strict conformal limit often leads to a divergent summation, with the regularizing near-conformal corrections taking a universal form that reflect the underlying dual quantum gravity description. This divergent summation is a consequence of "resonant" eigenfunctions of the ladder kernel which have eigenvalue unity. Remarkably, there is no such effect in this class of ladder diagrams - because of the relation in Eq. 38, we find that the conformal Green's function is an _exact_ eigenfunction with eigenvalue \(q-1\), and therefore no resonance occurs. 
Because of this, the ladder diagrams may be evaluated via a geometric series to obtain the result \[\rho_{d}(\omega,\epsilon)\frac{(q/2)!(q/2-1)!}{q^{2}N^{q}}\,\mathrm{Im}\left[G ^{R}(\omega)\right]\mathrm{Im}\left[G^{R}(\epsilon)\right]\,. \tag{40}\] To leading order in \(N^{-1}\), the conductance fluctuations are driven by \(\rho_{o}\). The fact that \(\rho_{o}\) factorizes into two copies of the spectral function leads to the simple result, \[\frac{\mathrm{Var}\ \sigma}{\overline{\sigma}^{2}}=\frac{(q/2)!(q/2-1)!}{N^{q-1}}\,. \tag{41}\] The statement that the variance divided by the mean squared takes the above form holds for any linear statistic \(A\) of the spectral function, \(A=\int_{=\infty}^{\infty}\mathrm{d}\omega\,A(\omega)\,\mathrm{Im}\,G^{R}(\omega)\). For the SYK model with four-fermion interactions, this gives \[\mathrm{Var}\ \sigma=\left(\frac{e^{2}}{\hbar}\right)^{2}\frac{1.04\Gamma^{2}}{ N^{3}JT}\,. \tag{42}\] We compare this result to numerical calculations of the conductivity variance using exact diagonalization, shown in Fig. 4. We also plot the mean values, which show decent agreement with their respective analytic predictions despite the relatively small system sizes. Recall that Eq. 41 is only valid in the conformal limit, where \(\beta J\gg 1\). For finite size systems, we also require \(\beta J\ll N\) due to Schwarzian fluctuations setting in at lower temperatures [49, Figure 4: We present numerical results for the conductance and thermopower of a complex SYK model, for even system sizes \(6\leqslant N\leqslant 12\). All results are averaged over \(10^{5}\) realizations with \(J=1\), \(\mu=0.05\). a) The average thermopower, and conformal prediction. b) The average conductance, and conformal prediction. c) The system size scaling of the conductance and thermopower variance, obtained by fitting the variance as a function of \(N\) to a power law at each temperature. d) We Temperature dependence of the normalized conductance and thermopower variance for \(6\leqslant N\leqslant 12\), both rescaled by their appropriate system size - \(N^{-3}\) for conductance and \(N^{-2}\) for thermopower. Darker plots indicate larger system sizes. 52; 53; 54]. Exact diagonalization studies are restricted to small system sizes, with \(N=12\) the maximum size studied here. This implies a rather narrow temperature window where conformal behavior could be expected. We are unable to establish the constant temperature dependence predicted by Eq. 41; however, the predicted \(N^{-3}\) scaling is expected to hold even at higher temperatures, away from the conformal limit, and this is validated by our numerical results. ### Thermopower statistics We now analyze statistics of the thermopower, where the extensive entropy leads to a constant thermopower \[\overline{\Theta}=\frac{4\pi}{3e}\mathcal{E}\,. \tag{43}\] Recall that the thermopower is given by the ratio of two random variables, whose linear covariance vanished in the free fermion limit. Strikingly, the opposite behavior is true for an SYK model. To quantify this, we examine the Pearson correlation coefficient of the two random variables \(A\) and \(B\), \[r_{A,B}\equiv\frac{\text{Cov}(A,B)}{\sqrt{\text{Var }A\times\text{Var }B}}\,. \tag{44}\] \(r_{A,B}\) lies between \(-1\) and \(+1\) and measures the degree of correlation between two random variables. For the SYK model, a particular property of \(\rho_{o}\) is that it that it factorizes into a product \(\alpha(\omega)\alpha(\epsilon)\) to leading order in \(N^{-1}\). 
Therefore, in sharp contrast to the Fermi liquid regime where the variables \(\mathcal{L}_{11}\) and \(\mathcal{L}_{12}\) were uncorrelated, we generically expect \(r_{A,B}=1-\mathcal{O}(N^{-1})\). Despite this, we may still approximate our distribution as Gaussian. The approximation to normality of the distribution of two correlated Gaussian random variables follows along similar lines as the uncorrelated ratio [47], which we also discuss in more detail in Appendix C. Defining \(r\) as the correlation coefficient between \(\mathcal{L}_{11}\) and \(\mathcal{L}_{12}\), \[\frac{\text{Var }\Theta}{\overline{\Theta}^{2}}=\frac{\text{Var }\mathcal{L}_{11}}{\overline{\mathcal{L}_{11}}^{2}}+\frac{\text{Var }\mathcal{L}_{12}}{\overline{\mathcal{L}_{12}}^{2}}-\frac{2r\sqrt{\text{Var }\mathcal{L}_{11}\times\text{Var }\mathcal{L}_{12}}}{\overline{\mathcal{L}_{11} \mathcal{L}_{12}}}\,. \tag{45}\] Both \(\mathcal{L}_{11}\) and \(\mathcal{L}_{12}\) are linear statistics, so the conformal prediction of \(\frac{\text{Var }\mathcal{L}_{11}}{\overline{\mathcal{L}_{11}}^{2}}=\frac{\text{Var }\mathcal{L}_{12}}{\overline{\mathcal{L}_{12}}^{2}}\) leads to a vanishing thermopower variance. The leading order non-zero result in the conformal limit is hence suppressed by an additional factor of \(N^{-1}\). However, high-temperature non-conformal corrections will still give an \(\mathcal{O}(N^{1-q})\) contribution. Surprisingly, we find strong disagreement between this prediction and the exact diagonalization in Fig. 4. The thermopower variance in the temperature regime \(\beta J\gg N^{-1}\) is well fit by a \(N^{-2}\) scaling, rather than the \(N^{-3}\) high-temperature contribution or the \(N^{-4}\) conformal contribution. This arises due to an anomalous \(N^{-2}\) scaling in the variance of the numerator, \(\mathcal{L}_{12}\). As this quantity is proportional to the particle-hole asymmetry, we conjecture that this is related to additional fluctuations in the asymmetry not captured by our diagrammatic approach. ## V Interplay between hoppings and interactions In the previous sections, we have derived results for the conductance variance for both the limiting cases of non-interacting fermions with random hopping and a pure SYK model. In this section, we more carefully analyze the physically-relevant model with includes both random hopping and SYK terms. Analysis of crossover behavior in these models has been performed previously [55; 18; 56] for the average values of observables. The conclusion of these analyses is that there exists a coherence energy scale \(E_{\rm coh}\equiv\frac{t^{2}}{J}\) such that transport properties closely resemble the free fermion model for temperatures \(T\ll E_{\rm coh}\), with SYK behavior emerging for \(T\gg E_{\rm coh}\) (throughout this analysis, we assume \(T\ll t\,,J\)). The source of this behavior lies in the solution to the set of Schwinger-Dyson equations for the average value of the Green's function, which is exact in the large-\(N\) limit: \[\begin{split} G(i\omega_{n})^{-1}&=-i\omega_{n}+ \mu-t^{2}G(i\omega_{n})-\Sigma(i\omega_{n})\,,\\ \Sigma(\tau)&=-J^{2}G^{2}(\tau)G(-\tau)\,.\end{split} \tag{46}\] It is this Green's function that displays a crossover at \(T\sim E_{\rm coh}\) from the free fermion-like solution to an SYK-like solution, which in turn leads to a crossover of the average values of transport properties. In contrast, we claim that the variance of transport quantities displays a qualitatively different type of crossover behavior. 
This is a consequence of the free fermion variance in Sec III and the SYK variance in Sec IV containing different powers of \(N\). Fluctuations driven by the randomness in SYK interactions are strongly suppressed relative to fluctuations driven by the random single-particle hopping. As a result, to leading order in \(N^{-1}\), the free fermion Feynman diagrams in Fig. 1 - which exist for any arbitrarily small random hopping - are always the relevant ones for calculating fluctuation properties so long as the ratio \(\frac{t}{J}\) does not scale with some inverse power of \(N\). The effect of SYK interactions is to renormalize the average Green's functions, such that the Green's function that appear in Eq. 21 are given by the solution to Eq. 46 rather than just the free fermion result. One can verify that to leading order in \(N^{-1}\), the inclusion of SYK interactions does not modify the diagrammatic structure any further than this, with the exception of a class of diagrams illustrated in Fig. 5 - these diagrams only contribute to \(\rho_{d}\) and hence will not be relevant for our analysis. The key difference that results in the average values of thermoelectric properties being described by pure SYK for \(T\gg E_{\rm coh}\) and not their variances may be best understood conceptually within the framework of the \((G,\Sigma)\) action, which is worked out explicitly in Appendix B. The intuition is as follows. For systems such as \(H_{\rm dot}\) with random all-to-all couplings, the fermionic degrees of freedom may be integrated out and the problem reformulated as a path integral over bilocal fields \(G(\tau_{1},\tau_{2})\), \(\Sigma(\tau_{1},\tau_{2})\), with an action that includes an explicit pre-factor of \(N\); hence, the large-\(N\) solution is described by the saddle point value of this action, which is precisely Eq. 46. The disorder-averaged spectral function, and in turn the average values of thermoelectric quantities, depend solely on this saddle-point solution. This is not true for fluctuations, which are subleading in \(N^{-1}\) and is governed by the \((G,\Sigma)\) action, which is not a priori of the \((G,\Sigma)\) action. Figure 5: Ladder diagrams that contribute to the fluctuations of the single-particle spectral function to leading order in \(N^{-1}\), for a model that includes both random single-particle hopping and SYK interactions. Disorder-averaging of the single-particle hopping (SYK interactions) is represented in red (blue). The structure of the diagrams are largely identical to the free fermion case illustrated in Fig. 1, with the SYK interactions having the effect of renormalizing the average Green’s functions. An exception to this is the additional set of diagrams, illustrated in the last diagram, which are qualitatively distinct from the free fermion limit. These diagrams only contribute to the diagonal covariance \(\rho_{d}\) and hence will be neglected as they are suppressed by a factor of \(N^{-1}\) relative to the off-diagonal covariances. by replica off-diagonal fluctuations around the large-\(N\) saddle point. The structure of the perturbation theory around the saddle point may be completely modified by the presence of a hopping term \(t\) - Feynman diagrams proportional to \(t\) may appear at lower orders in \(N^{-1}\), and whose contributions will _a priori_ be dominant even in a parameter regime where the saddle point is well-described by the \(t=0\) solution. 
Our approach to studying the behavior of transport fluctuations for an interacting quantum dot will again involve calculating the single-particle covariance \(\rho_{d,o}(\omega,\epsilon)\). We will work in the regime where \(\omega,T\ll t,J\), and the average Green's function takes the universal form [55] \[G(\omega,T)=\frac{1}{t}g\left(\frac{\omega}{E_{\rm coh}},\frac{T}{E_{\rm coh} }\right)\equiv\frac{1}{t}g(\overline{\omega},\overline{T})\,, \tag{47}\] where we define the dimensionless quantities \(\overline{\omega}\equiv\omega/E_{\rm coh}\), \(\overline{T}\equiv T/E_{\rm coh}\). We find that the system sizes accessible to exact diagonalization are inadequate for establishing even the approximate crossover of the average Green's function; due to the narrow temperature window \(N^{-1}\ll T\ll J\,,t\) where our analysis is valid, any crossover behavior is obscured by combination of high temperature or finite size effects. As a consequence, numerical results in this section will be restricted to self-consistent solutions of the Schwinger-Dyson equations given in Eq. 46. ### Fermi liquid regime For \(\overline{T},\overline{\omega}\ll 1\), it is known [55] that \(g^{R}(\overline{\omega},\overline{T})\) has a Fermi liquid behavior. These properties can most simply stated at half filling (\(\mu=0\)), where the Fermi liquid nature implies \[g^{R}(\overline{\omega}\ll 1,\overline{T}\ll 1)\approx-i\,. \tag{48}\] This behavior is determined by Luttinger's theorem, which for a generic charge \(\mathcal{Q}\) says that \[\mu(\mathcal{Q})-\Sigma(i0^{+})=\mu_{0}(\mathcal{Q}) \tag{49}\] where \(\mu(\mathcal{Q})\) is the chemical potential necessary to tune to the charge \(\mathcal{Q}\), and \(\mu_{0}(\mathcal{Q})\) is that same value for the non-interacting (\(J=0\)) system. This fixes \(G^{R}(\omega\to 0,T\to 0)\) to be that of the non-interacting Green's function, the latter of which we know has the property \(\left|g^{R}(\omega\to 0,T\to 0)\right|^{2}=1\) for generic filling. This property is sufficient for recovering the temperature-independent non-interacting prediction for the mean value of the conductance at low temperature, given in Eq. 20 and likewise properly recovers the small \(\omega,\epsilon\) divergence of \(\rho_{d,o}\) given in Eq. 24. Although an explicit calculation of the conductance variance requires knowledge of the small frequency and temperature behavior of \(g^{R}\), which is not fixed by Luttinger's theorem, the degree of the \(T\to 0\) divergence and the assumption that small frequency/temperature corrections appear at linear order in \(\overline{\omega},\overline{T}\) imply from dimensional analysis that \[\begin{split}\text{Var }\sigma(T\ll E_{\text{coh}})& \propto\frac{1}{t^{2}N\overline{T}}=\frac{1}{NJT}\,,\\ \frac{\text{Var }\sigma(T\ll E_{\text{coh}})}{\sigma(T\ll E_{ \text{coh}})^{2}}&\propto\frac{1}{N\overline{T}}\,.\end{split} \tag{50}\] This result is confirmed by calculating the conductance variance using the Green's function \(G^{R}\) obtained from numerically solving the large-\(N\) Schwinger-Dyson equations, shown in Fig. 7. ### SYK Regime We now analyze the conductance fluctuations for \(\overline{T}\gg 1\), where the average Green's function approaches the conformal SYK result given in Eq. 36. The mean value of the conductance is then given by the pure SYK result in Eq. 35. Using this form of the Green's function, we find that the \((\omega-\epsilon)^{-1}\) divergence of \(\rho_{o}(\omega,\epsilon)\) is no longer present. 
The infinite sum of ladder diagrams that yields \(\rho_{o}\) is convergent for large \(\overline{T}\). Expanding in powers of \(\overline{T}^{-1}\), we obtain the leading-order expression \[\begin{split}\text{Var }\sigma(T\gg E_{\text{coh}})& =\left(\frac{e^{2}}{\hbar}\frac{\Gamma}{t\overline{T}}\right)^{2} \frac{1}{N\pi^{5}}\frac{1}{2\cos(2\theta)}\left[\int_{-\infty}^{\infty}\text{d} x\,\frac{e^{x}}{\left(1+e^{x}\right)^{2}}\text{Im}\left[h(x)^{2}\right] \right]^{2}\,,\\ h(x)&\equiv e^{-i\theta}\frac{\Gamma\left(\frac{1} {4}-\frac{ix}{2\pi}+i\mathcal{E}\right)}{\Gamma\left(\frac{3}{4}-\frac{ix}{2 \pi}+i\mathcal{E}\right)}\,.\end{split} \tag{51}\] This integral must be done numerically; however, one can see that at the particle-hole symmetric point (\(\theta=\mathcal{E}=0\), \(\mathcal{Q}=1/2\)), the integrand vanishes. We emphasize that this expression for the conductance variance is obtained by using the conformal SYK form of the Green's function _and_ taking to leading order a large-\(\overline{T}\) expansion of the integral for the conductance variance, the latter of which is not a homogeneous function of \(\overline{T}\). In particular, Eq. 51 does not imply that the conductance variance vanishes exactly in the conformal limit when \(\mathcal{E}=0\). Rather, the variance for \(\mathcal{E}=0\) is given by a subleading \(\overline{T}^{-3}\) term. For general \(\mathcal{Q}\), we find that the resulting expression is well-fit, see Fig. 6 by the function \[\begin{split}\text{Var }\sigma(T\gg E_{\text{coh}})& \approx\left(\frac{e^{2}}{\hbar}\frac{\Gamma}{t\overline{T}} \right)^{2}\times\frac{2.02\mathcal{E}^{2}}{N}\,,\\ \frac{\text{Var }\sigma(T\gg E_{\text{coh}})}{\sigma(T\gg E_{ \text{coh}})^{2}}&\approx\frac{3.91\mathcal{E}^{2}}{N \overline{T}}\,.\end{split} \tag{52}\] We see that the conductance variance normalized by the mean squared has a \(T^{-1}\) scaling, identical to the Fermi liquid regime. However, both quantities individually have distinct behavior, with the conductance variance scaling as \(T^{-2}\) for \(T\gg E_{\rm coh}\) in contrast to the \(T^{-1}\) scaling for \(T\ll E_{\rm coh}\). As an aside, we state the generalization to an SYK\({}_{q}\) model with \(q\)-fermion interactions; using the conformal Green's function gives a \(\overline{T}^{\frac{8}{q}-4}\) scaling of the conductance variance, and a \(\overline{T}^{\frac{4}{q}-2}\) scaling of the normalized conductance variance. This crossover behavior is demonstrated in Fig. 7, where we solve for the conductance variance given the form of the Green's function covariance in Eq. 21, where we use the average Green's function \(G^{R}(\omega)\) obtained from a full self-consistent solution of the Schwinger-Dyson equations in real time. Details on the numerical implementation for solving the real-time Schwinger-Dyson equations can be found in [56]. We note a unique difficulty in calculating the conductance variance not present in the average value, which comes from the denominator \(1-t^{2}G^{R}(\omega)G^{A}(\omega)\) in the Green's function covariance. As discussed previously, it is characteristic of a Fermi liquid that this denominator goes to zero as \(T\to 0\). As a consequence, the accuracy with which one must numerically solve for \(G^{R}(\omega)\) diverges as \(T\to 0\); small errors at low temperatures can easily lead to an unphysical divergence in the conductance variance. 
Our self-consistent solution for \(G^{R}(\omega)\) utilizes a grid of \(2^{28}\) frequency points on the real axis, which gives a sufficiently accurate solution down to \(T/E_{\rm coh}\approx 0.03\) and is enough to recover the predicted \(T^{-1}\) scaling at low temperatures. Figure 6: We plot the numerical coefficient of the leading-order conductance variance in the conformal SYK limit, obtained by a numerical evaluation of the integral in Eq. 51, along with a quadratic approximation \(4.05\mathcal{E}^{2}\). ### Thermopower statistics The mean thermopower in a model with both random hopping and SYK interactions displays a crossover from the linear temperature scaling characteristic of a Fermi liquid for \(T\ll E_{\rm coh}\) to the constant SYK value for \(T\gg E_{\rm coh}\). The coefficient of the mean thermopower in the Fermi liquid regime receives a renormalization due to the presence of SYK interactions, from \(\overline{\Theta}\sim(et)^{-1}T\) in the free fermion model to \(\overline{\Theta}\sim(eE_{\rm coh})^{-1}T\). This is not true for the mean conductance, whose value for \(T\to 0\) is determined by the zero-frequency spectral density and is fixed by Luttinger's theorem, Eq. 48. We now discuss the crossover behavior of the thermopower variance. For \(T\ll E_{\rm coh}\), the thermopower variance follows from the free fermion analysis in Section III and diverges as \(\overline{T}^{-1}\) for low temperatures, albeit with a renormalized coefficient. For \(T\gg E_{\rm coh}\), we find that the Pearson correlation coefficient \(r\) between \(\mathcal{L}_{11}\) and \(\mathcal{L}_{12}\) is 1 to leading order in \(\overline{T}\). We apply Eq. 45, which gives the thermopower variance in terms of \(r\) and the statistics of Figure 7: For parameters \(J=10\), \(t=0.1\), \(\mathcal{Q}=0.4\), \(N=30\), and \(\Gamma=0.1\), we numerically solve for the leading order contribution to the conductance variance in the large-\(N\) limit by solving the Schwinger-Dyson equations for the average Green’s function over a range of temperatures. We demonstrate a crossover from \(T^{-1}\) behavior at low temperatures, indicative of Fermi liquid behavior, to a more rapid \(T^{-2}\) falloff at higher temperatures which reflects the average Green’s function approaching the conformal SYK form. \(\mathcal{L}_{11}\) and \(\mathcal{L}_{12}\), where now we have \[\text{Var }\mathcal{L}_{11}=\left(\frac{\Gamma}{\hbar}\right)^{2} \times\frac{2.02\mathcal{E}^{2}}{Nt^{2}\overline{T}^{2}}\,,\quad\overline{ \mathcal{L}_{11}}=\frac{\Gamma}{\hbar}\frac{0.72}{t\sqrt{\overline{T}}}\,,\] \[\text{Var }\mathcal{L}_{12}=\left(\frac{\Gamma}{\hbar}\right)^{2} \times\frac{0.07}{Nt^{2}}\,,\quad\overline{\mathcal{L}_{12}}=\frac{\Gamma}{ \hbar}\frac{3.01\overline{T}^{1/2}\mathcal{E}}{t}\,. \tag{53}\] All of the terms in Eq. 45 decay as \(\overline{T}^{-1}\), which implies that in the limit \(r=1\), \[\frac{\text{Var }\Theta}{\overline{\Theta}^{2}}=\frac{1}{N\overline{T}}\left(1.97|\mathcal{E}|-0.09|\mathcal{E}|^{-1}\right)^{2}\,. \tag{54}\] The coefficient is rather striking, as it predicts a suppression of this leading-order variance at a critical value of the particle-hole asymmetry \(|\mathcal{E}_{c}|\approx 0.24\). 
Recall that this leading-order suppression happens generically for a pure SYK model - this is a consequence of expanding around the limit of perfect correlation between \(\mathcal{L}_{11}\) and \(\mathcal{L}_{12}\), along with the identity \(\frac{\text{Var}\mathcal{L}_{11}}{\mathcal{L}_{11}^{2}}=\frac{\text{Var} \mathcal{L}_{12}}{\mathcal{L}_{12}}\). The latter identity is not true generically in this model, but only occurs at the aforementioned fine-tuned value \(\mathcal{E}_{c}\). This value of \(\mathcal{E}\) corresponds to a rather large particle-hole asymmetry however, \(\mathcal{Q}_{c}\approx\frac{1}{2}\pm 0.41\), and is hence not easily accessible. ## VI Conclusion We have analyzed the fluctuations of thermoelectric transport properties in strongly-correlated quantum dots. Despite the apparent simplicity of our microscopic model due to its exact large-\(N\) solution, this saddle point only describes the mean value of transport quantities; higher-order moments are controlled by replica off-diagonal fluctuations around this saddle point, and as such require a more unconventional analysis. We find distinct system size scalings for these fluctuations in a free fermion model (\(N^{-1}\)) and an SYK model (\(N^{-3}\)). The SYK prediction is qualitatively changed by the inclusion of a small random hopping, which we find is able to drive conductance fluctuations at the same order as the free fermion prediction. However, we still find distinct temperature scalings, with a \(T^{-2}\) suppression for temperatures above the coherence energy in contrast to the \(T^{-1}\) scaling at lower temperatures predicted by the free fermion result. Our main analytic results for the conductance, \(\sigma\) were summarized in Section I. We also computed the thermopower, \(\Theta\). The mean thermopower vanishes linearly with \(T\) in the Fermi liquid regime (see Eq. 31), while the SYK regime has a \(T\)-independent thermopower (see Eq. 43). Furthermore, the finite \(N\) Schwarzian corrections are quite small for the mean thermopower in the SYK regime [18]. These features make the thermopower an ideal probe for detecting the SYK regime in experiments. However, analytic computations of the sample-to-sample fluctuations in the thermopower are not straightforward because the expression for the thermopower involves the ratio of electron Green's functions. We made partial analytic progress assuming small Gaussian fluctuations about the mean of both the numerator and the denominator, and also obtained numerical exact-diagonalization results for small values of \(N\). Our main results are as follows. For a free fermion model, the thermopower variance scales as \(t\left(NT\right)^{-1}\), in good agreement with numerical results. For a pure SYK model, we find surprisingly that the leading order \(N^{-3}\) contribution to the thermopower variance vanishes in the conformal limit (\(T\ll J\)) due to perfect correlation between the numerator and denominator. Fluctuations in this regime are hence governed by a combination of high-temperature and \(\mathcal{O}(N^{-4})\) corrections, although we are unable to verify this behavior numerically due to anomalous \(\mathcal{O}(N^{-2})\) fluctuations. For a model with both random hopping and SYK interactions, our predictions once again are qualitatively modified. The scaling of the variance in the low temperature Fermi liquid regime is suppressed from the free fermion result \(t\left(NT\right)^{-1}\) by an additional factor of \(t/J\). 
In the SYK regime, the scaling is identical, albeit arising from distinct mechanisms. A noteworthy feature in the SYK regime is that this leading-order variance vanishes at a critical value of the particle-hole asymmetry \(\mathcal{E}_{c}\), in which case the first non-zero contribution scales as \(N^{-1}(T/E_{\text{coh}})^{-2}\). A more careful treatment of the effects of the coupling between the quantum dot and the leads may reveal richer physics. In this work, we restrict our parameter regime to a "closed" quantum dot, where the coupling to the leads is the smallest energy scale in the system and transport quantities follow from the properties of the isolated quantum dot. A more robust framework for treating the effects of the leads can be developed by treating both the single-particle hopping in the leads and the coupling to the quantum dot as random variables, for which an exact (in the large-\(N\) limit) set of Schwinger-Dyson equations can be obtained for the non-equilibrium Green's functions [57]. The mean value of the conductance has been studied using this framework, although the effects of single-particle hopping within the quantum dot were not considered. In addition to treating conductance fluctuations within this framework, an analysis of the effects of inter-dot single-particle hopping, which was not considered in [57], may lead to new predictions even in the average value of transport properties. The nature of conductance fluctuations for a pure SYK model is also deserving of further analysis. The results we present are confined to the conformal regime. Deviations from this prediction at higher temperatures can be captured by an analysis of the large-\(N\) numerical solution to the Schwinger-Dyson equations, and low-temperature deviations may be understood analytically through Schwarzian fluctuations. This analysis is also expected to give greater agreement with numerical results for small system sizes, where clear agreement with the conformal prediction is absent. ## Acknowledgements We thank Alex Kruchkov for significant discussions at the initial stages of this work. We also thank Yigal Meir for helpful comments. This research was supported by the U.S. National Science Foundation grant No. DMR-2245246 and by the Simons Collaboration on Ultra-Quantum Matter which is a grant from the Simons Foundation (651440, S.S.). PK and LA acknowledge support from ONR MURI (N00014-21-1-2537). ## Appendix A Path integral calculation of fluctuations In this Appendix, we review the procedure for calculating the fluctuations of observables in disordered systems using the path integral approach. Calculating statistical quantities in disordered systems, such as averages and variances, is in general a non-trivial task. This arises from the fact that correlation functions such as \(G(\tau-\tau^{\prime})\) for a given disorder realization \(J_{ijkl}\) (this notation is specific to an SYK model, which we will use without loss of generality) are given by functional integrals of the form \[G(\tau-\tau^{\prime})=\frac{1}{N}\frac{\int\mathcal{D}c^{\dagger}\mathcal{D}c \,\sum_{i}c_{i}^{\dagger}(\tau)c_{i}(\tau^{\prime})e^{-S[c,\,c^{\dagger},J_{ ijkl}]}}{\int\mathcal{D}c^{\dagger}\mathcal{D}c\,e^{-S[c,c^{\dagger},J_{ ijkl}]}}\,. \tag{10}\] The mean of this quantity over an ensemble \(P(J_{ijkl})\) is given by integrating it over all realizations of \(J_{ijkl}\). This averaging cannot simply be done, as Eq. 10 is a ratio of two quantities. 
What can be done analytically is carry out the average of the numerator and denominator separately - this constitutes treating the random variables \(J_{ijkl}\) on the same footing as our physical variables \(c_{i}^{\dagger}\,,c_{i}\). Treating the disorder average properly requires techniques such as the replica trick [58], which we will employ here. Supersymmetric techniques have also been developed for dealing with these averages [59], which is the primary method used for calculating conductance fluctuations of free electrons and generally yields more reliable results than the replica approach, the latter of which requires a generally-uncontrolled analytical continuation of the number of replicas \(M\to 0\). However, these supersymmetric techniques are not appropriate for including the effects of strong interactions. Recent advances have generalized these supersymmetry techniques to a particular variant of the SYK model [60], and an interesting direction for future research would be to see whether such an approach is applicable to our model or a variant thereof that would allow for more controlled calculations of transport fluctuations. Here, we make explicit the setup we use to calculate fluctuations of quantities like \(G(i\omega)\). What we are interested in is the covariance of the Green's function at different frequencies, such as \(\frac{1}{N^{2}}\sum_{ij}\left[\overline{G_{ii}(i\omega)G_{jj}(i\epsilon)}- \overline{G_{ii}(i\omega)}\,\overline{G_{jj}(i\epsilon)}\right]\). Using the replica trick, we can rewrite the product of Green's functions \(G(\tau_{1}-\tau_{2})G(\tau_{3}-\tau_{4})\) as a functional integral taken over two copies of fermionic variables,\(c_{i}^{a}\,,c_{i}^{\dagger a}\,,\widetilde{c}_{i}^{a^{\prime}}\,, \widetilde{c}_{i}^{\dagger a^{\prime}}\), with \(i\) a site index and \(a\,,a^{\prime}\) replica indices, \[\lim_{M\,,M^{\prime}\to 0}\frac{1}{N^{2}MM^{\prime}}\sum_{\begin{subarray}{c}1<a<M \\ 1<a^{\prime}<M^{\prime}\end{subarray}}\int\sum_{i,j}c_{i}^{\dagger a}(\tau_{1} )c_{i}^{a}(\tau_{2})\widetilde{c}_{j}^{\dagger a^{\prime}}(\tau_{3}) \widetilde{c}_{j}^{a^{\prime}}(\tau_{4})e^{-\sum_{a}S[c_{i}^{\dagger a}\,,c _{i}^{a},J_{ijkl}]-\sum_{a^{\prime}}S[\widetilde{c}_{i}^{\dagger a^{\prime}} \,,\widetilde{c}_{i}^{\dagger a^{\prime}},J_{ijkl}]} \tag{10}\] We can dispense of the independent replica summations and the distinction between \(c\) and \(\widetilde{c}\) by combining them into an enlarged summation, \[\lim_{M\to 0}\frac{1}{N^{2}M^{2}}\sum_{\begin{subarray}{c}1<a,b<M\\ a\neq b\end{subarray}}\int\sum_{i,j}c_{i}^{\dagger a}(\tau_{1})c_{i}^{a}(\tau_ {2})c_{j}^{\dagger b}(\tau_{3})c_{j}^{b}(\tau_{4})e^{-\sum_{d}S[c_{i}^{ \dagger d}\,,c_{i}^{d},J_{ijkl}]} \tag{11}\] The action \(S\) is a function of the random variables \(J_{ijkl}\), and the disorder average is performed over the above quantity. Doing this induces interactions between the different replicas. Subtracting off the disconnected contribution, \(\overline{G(\tau_{1}-\tau_{2})}\,\overline{G(\tau_{3}-\tau_{4})}\) constitutes disregarding contributions that do not contain any interactions between the two replica indices. An analogous treatment of the off-diagonal covariance, \(\frac{1}{N^{2}}\sum_{ij}\left[\overline{G_{ij}(i\omega)G_{ji}(i\epsilon)}- \overline{G_{ij}(i\omega)}\,\overline{G_{ji}(i\epsilon)}\right]\) leads to an expectation value of the form \(c_{i}^{\dagger a}(\tau_{1})c_{i}^{b}(\tau_{2})c_{j}^{\dagger b}(\tau_{3})c_{j }^{a}(\tau_{4})\). 
For our calculations, we will proceed perturbatively starting from the replica-symmetric saddle point. If we use this as our starting point, our propagators will remain replica-symmetric to all orders in perturbation theory [61]. It has been shown that for free fermions, this approximation is sufficient for accurately recovering the leading-order contribution to the mean value of \(G(\tau_{1}-\tau_{2})\), although \(N^{-1}\) corrections require replica-off-diagonal saddles [62]. For four-point functions like Eq. 10, it is known that a replica-diagonal ansatz is insufficient for reproducing the full spectral correlations of random matrix theory [63] for small \(\mathcal{O}(N^{-1})\) energy differences, but can be recovered by considering off-diagonal saddle manifolds [62]. This discrepancy is not relevant for our analysis, as we will only be interested in spectral correlations over \(\mathcal{O}(T)\) energy differences. ## Appendix B Replica off-diagonal fluctuations in the \((G,\Sigma)\) action The calculation of the Green's function covariances may be performed within the formalism of the \((G,\Sigma)\) path integral, which we describe here. Although this perspective does not provide a direct computational advantage over the fermionic diagram approach in the main text - all non-trivial integrals are still present - it admits an explicit \(N^{-1}\) expansion, in contrast with the diagrammatic approach in the main text where the task of writing down all diagrams that contribute at a given order requires careful analysis of index summations. The approach here is more easily generalizable to the calculation of higher order moments, and also provides a more general framework for understanding which observables obey a straightforward crossover from SYK-like to Fermi liquid-like as a function of temperature and which ones have more subtle crossover behavior - the former are functions of only the saddle point solutions of the \((G,\Sigma)\) path integral, whereas the latter are properties of fluctuations around the saddle point. Here, we rederive the off-diagonal Green's function covariance, \(\rho_{o}\), using this formulation. We begin with a derivation of the \((G,\Sigma)\) path integral. Recall that our Hamiltonian is given by \[H=\frac{1}{(2N)^{3/2}}\sum_{ij;kl=1}^{N}J_{ij;kl}c_{i}^{\dagger}c_{j}^{\dagger }c_{k}c_{l}+\frac{1}{N^{1/2}}\sum_{ij=1}^{N}t_{ij}c_{i}^{\dagger}c_{j}-\mu\sum _{i}c_{i}^{\dagger}c_{i} \tag{11}\] where \(J_{ij;kl}\) and \(t_{ij}\) are complex random numbers with zero mean and variances \(J^{2}\) and \(t^{2}\), respectively. 
In path integral form, we have the partition function \[\begin{split}\overline{Z[h]^{M}}&=\int\mathcal{D}J \mathcal{D}t\mathcal{D}c\mathcal{D}c^{\dagger}e^{-\sum_{a=1}^{M}S_{a}[J]}\\ S_{a}[J]&=\sum_{ij}\int\mathrm{d}\tau\:c_{i}^{ \dagger a}(\tau)\left[\left(\partial_{\tau}-\mu\right)\delta_{ij}+\frac{t_{ij} }{N^{1/2}}\right]c_{j}^{a}(\tau)\\ &+\frac{1}{(2N)^{3/2}}\sum_{ij;kl}\int\mathrm{d}\tau\,J_{ij;kl}c_ {i}^{\dagger a}(\tau)c_{j}^{\dagger a}(\tau)c_{k}^{a}(\tau)c_{l}^{a}(\tau) \end{split} \tag{12}\] Integrating over disorder, our path integral becomes \[\begin{split} Z[h]&=\int\mathcal{D}c\mathcal{D}c^{ \dagger}e^{-S}\\ S&=\sum_{a,i}\int\mathrm{d}\tau\,c_{i}^{\dagger a}( \partial_{\tau}-\mu)c_{i}^{a}-\sum_{a,b}\int\mathrm{d}\tau_{1}\,\mathrm{d} \tau_{2}\left[\frac{NJ^{2}}{4}\left(\frac{1}{N}\sum_{i}c_{i}^{\dagger a}(\tau_ {1})c_{i}^{b}(\tau_{2})\right)^{2}\left(\frac{1}{N}\sum_{i}c_{i}^{\dagger b}( \tau_{2})c_{i}^{a}(\tau_{1})\right)^{2}\\ &-\frac{Nt^{2}}{2}\left(\frac{1}{N}\sum_{i}c_{i}^{\dagger a}(\tau _{1})c_{i}^{b}(\tau_{2})\right)\left(\frac{1}{N}\sum_{i}c_{i}^{\dagger b}( \tau_{2})c_{i}^{a}(\tau_{1})\right)\Bigg{]}\end{split} \tag{13}\] We now insert the field \[G^{ab}(\tau_{1}\,,\tau_{2})\equiv\frac{1}{N}\sum_{i}c_{i}^{\dagger a}(\tau_{1} )c_{i}^{b}(\tau_{2}) \tag{14}\] where the equivalence is enforced with a Lagrange multiplier \(\Sigma^{ab}(\tau_{1},\tau_{2})\). The \(c\,,c^{\dagger}\) fields can be integrated out to yield the action \[\begin{split}\frac{S[G\,,\Sigma\,,h]}{N}&=-\ln\det(- \partial_{\tau}+\mu-\Sigma)-\sum_{a,b}\int\mathrm{d}\tau_{1,2}\left(\Sigma^{ab} (\tau_{1},\tau_{2})G^{ba}(\tau_{2},\tau_{1})\right.\\ &+\left.\frac{J^{2}}{4}\left(G^{ab}(\tau_{1},\tau_{2})G^{ba}(\tau_ {2},\tau_{1})\right)^{2}-\frac{t^{2}}{2}G^{ab}(\tau_{1},\tau_{2})G^{ba}(\tau_ {2},\tau_{1})\right).\end{split} \tag{10}\] We take the replica-diagonal saddle point, \(G^{ab}(\tau_{1},\tau_{2})=\delta_{ab}G(\tau_{1}-\tau_{2})\) and likewise for \(\Sigma^{ab}\). The replica-diagonal Schwinger-Dyson equations are given by Eq. 46 in the main text - as emphasized earlier, it is the solution to this set of equations that displays a crossover from SYK-like for \(T\gg E_{\text{coh}}\) to Fermi liquid-like for \(T\ll E_{\text{coh}}\). This saddle-point solution does not contribute to the Green's function covaraince; to obtain a non-zero value, we must consider fluctuations around it, \(G^{ab}(\tau_{1},\tau_{2})\equiv\delta_{ab}G(\tau_{1}-\tau_{2})+\delta G^{ab}( \tau_{1},\tau_{2})\). In this representation, our observables of interest are \[\begin{split} g_{o}(\tau_{1,2,3,4})&\equiv\frac{1} {N^{2}}\sum_{ij}\left[\overline{G_{ij}(\tau_{1}-\tau_{2})G_{ji}(\tau_{3}-\tau_ {4})}-\overline{G_{ij}(\tau_{1}-\tau_{2})}\,\overline{G_{ji}(\tau_{3}-\tau_{4} )}\right]\\ &=\langle G^{ab}(\tau_{1}-\tau_{2})G^{ba}(\tau_{3}-\tau_{4}) \rangle-\frac{1}{N}\langle G^{aa}(\tau_{1}-\tau_{2})G^{bb}(\tau_{3}-\tau_{4})\rangle \end{split} \tag{11}\] for \(a\neq b\). Note the subleading correction in \(g_{o}\), which arises from the \(i=j\) term in the disconnected contribution (the "standard" disconnected part of \(g_{o}\) vanishes due to the fact that \(\langle G^{ab}\rangle=0\) for fluctuations around the replica-diagonal saddle point). These replica off-diagonal observables vanish at the replica-diagonal saddle point. To find the leading order non-zero result, we expand the action around its saddle-point solution. The expansion of everything other than the determinant is rather straightforward. 
For evaluation of the determinant, we use Jacobi's formula \[\begin{split}&\frac{1}{\det(-\partial_{\tau}+\mu-\Sigma)}\frac{ \partial\det(-\partial_{\tau}+\mu-\Sigma)}{\partial\Sigma^{ab}(\tau_{1},\tau_ {2})}=-\operatorname{Tr}\left[(-\partial_{\tau}+\mu-\Sigma)^{-1}\,\frac{ \partial\Sigma}{\partial\Sigma^{ab}(\tau_{1},\tau_{2})}\right]\\ &=-\left[(-\partial_{\tau}+\mu-\Sigma)^{-1}\right]^{ba}(\tau_{2},\tau_{1})=-\delta_{ab}G(\tau_{2}-\tau_{1})\end{split} \tag{12}\] where in the final line we evaluate the expression at the replica-diagonal saddle point. To second order, we use \[\begin{split}&\frac{1}{\det(-\partial_{\tau}+\mu-\Sigma)}\frac{ \partial^{2}\det(-\partial_{\tau}+\mu-\Sigma)}{\partial\Sigma^{ab}(\tau_{1}, \tau_{2})\partial\Sigma^{cd}(\tau_{3},\tau_{4})}\\ &=-\frac{1}{\det(-\partial_{\tau}+\mu-\Sigma)}\frac{\partial}{ \partial\Sigma^{cd}(\tau_{3},\tau_{4})}\left[\det(-\partial_{\tau}+\mu-\Sigma )\operatorname{Tr}\left[(-\partial_{\tau}+\mu-\Sigma)^{-1}\,\frac{\partial \Sigma}{\partial\Sigma^{ab}(\tau_{1},\tau_{2})}\right]\right]\\ &=\delta_{ab}\delta_{cd}G(\tau_{2}-\tau_{1})G(\tau_{4}-\tau_{3}) -\operatorname{Tr}\left[\delta\Sigma^{ab}G\delta\Sigma^{ba}G\right]\end{split} \tag{13}\] This leads to the quadratic action \[\begin{split}&\frac{\delta S\left[\delta G\,,\delta\Sigma\right]}{N}= \sum_{ab}\left[\frac{1}{2}\operatorname{Tr}\left[G\delta\Sigma^{ab}G\delta \Sigma^{ba}\right]-\int\mathrm{d}\tau_{1}\,\mathrm{d}\tau_{2}\,\delta G^{ab}( \tau_{1},\tau_{2})\left[\delta\Sigma^{ba}(\tau_{2},\tau_{1})-\frac{t^{2}}{2} \delta G^{ba}(\tau_{2},\tau_{1})\right]\\ &-\frac{J^{2}\delta_{ab}}{2}\int\mathrm{d}\tau_{1}\,\mathrm{d} \tau_{2}\left(2G(\tau_{1},\tau_{2})G(\tau_{2},\tau_{1})\delta G^{aa}(\tau_{1}, \tau_{2})\delta G^{aa}(\tau_{2},\tau_{1})\right.\\ &+G(\tau_{1},\tau_{2})^{2}\delta G^{aa}(\tau_{2},\tau_{1})\delta G ^{aa}(\tau_{1},\tau_{2})\right)\end{split} \tag{10}\] The trace notation in the first term is shorthand for four time integrals, i.e. \(\operatorname{Tr}[G\Sigma]\equiv\int\mathrm{d}\tau_{a}\,\mathrm{d}\tau_{b}\,G( \tau_{a},\tau_{b})\Sigma(\tau_{b},\tau_{a})\). We can invert the quadratic action to obtain a propagator, which we can do separately for the replica diagonal and replica off-diagonal components. For the Figure 8: We illustrate the propagators for use in a diagrammatic expansion in \(N^{-1}\) around the saddle point of the \((G\,,\Sigma)\) action. The fields \(G\) and \(\Sigma\) are a function of two times and two replica indices, which necessitates the sheet-like representation above. The colors indicate different replica indices \(a\,,b\), and solid (dotted) lines indicate a \(G\) (\(\Sigma\)) field. Figure 9: Interactions arise in an expansion around the \((G\,,\Sigma)\) saddle point from expanding the \(\ln det(-\partial_{\tau}+\mu-\Sigma)\) term, which leads to arbitrary order sheets for which \(\Sigma\) propagators can be attached to. 
Additionally, four \(G\) fields can be attached at a β€œseam.” latter, we have \[-\frac{2\delta S}{N}=\int\mathrm{d}\tau_{1,2,3,4}\left(\delta G^{ab}(\tau_{1},\tau_ {2})\;\;\delta\Sigma^{ab}(\tau_{1},\tau_{2})\right)\begin{pmatrix}-t^{2}\delta_ {\tau_{1},\tau_{3}}\delta_{\tau_{2},\tau_{4}}&\delta_{\tau_{1},\tau_{3}}\delta_ {\tau_{2},\tau_{4}}\\ \delta_{\tau_{1},\tau_{3}}\delta_{\tau_{2},\tau_{4}}&-G(\tau_{1}-\tau_{3})G( \tau_{2}-\tau_{4})\end{pmatrix}\begin{pmatrix}\delta G^{ba}(\tau_{4},\tau_{3}) \\ \delta\Sigma^{ba}(\tau_{4},\tau_{3})\end{pmatrix} \tag{111}\] The matrix must be inverted, which can most easily be done in Matsubara frequency space. This leads to the result \[\langle\delta G^{a\neq b}(\tau_{1},\tau_{2})\delta G^{b\neq a}( \tau_{4},\tau_{3})\rangle =\frac{1}{N\beta^{2}}\sum_{i\omega_{n},i\epsilon_{n}}e^{-i\omega_ {n}(\tau_{1}-\tau_{3})-i\epsilon_{n}(\tau_{4}-\tau_{2})}\frac{G(i\omega_{n})G( i\epsilon_{n})}{1-t^{2}G(i\omega_{n})G(i\epsilon_{n})}\,,\] \[\langle\delta\Sigma^{a\neq b}(\tau_{1},\tau_{2})\delta G^{b\neq a }(\tau_{4},\tau_{3})\rangle =\frac{1}{N\beta^{2}}\sum_{i\omega_{n},i\epsilon_{n}}e^{-i\omega_ {n}(\tau_{1}-\tau_{3})-i\epsilon_{n}(\tau_{4}-\tau_{2})}\frac{1}{1-t^{2}G(i \omega_{n})G(i\epsilon_{n})}\,, \tag{112}\] \[\langle\delta\Sigma^{a\neq b}(\tau_{1},\tau_{2})\delta\Sigma^{b \neq a}(\tau_{4},\tau_{3})\rangle =\frac{1}{N\beta^{2}}\sum_{i\omega_{n},i\epsilon_{n}}e^{-i\omega_ {n}(\tau_{1}-\tau_{3})-i\epsilon_{n}(\tau_{4}-\tau_{2})}\frac{t^{2}}{1-t^{2}G( i\omega_{n})G(i\epsilon_{n})}\,.\] This gives the expected result for \(g_{o}\) in Eq. 21 of the main text once the trivial disconnected piece of \(g_{o}\) is subtracted off. Note that for \(t=0\), while \(\langle\delta G^{ab}\delta G^{ba}\rangle\) is non-zero, its contribution to \(g_{o}\) is subtracted off exactly by the disconnected piece. Hence, the leading order contribution to \(g_{o}\) when \(t=0\) is given by the first correction to the \(G^{ab}\) propagator, illustrated in Fig. 10. This corresponds to the fermionic Feynman diagram shown in the top of Fig. 3 in the main text. ## Appendix C Statistics of ratio distributions Here, we provide a summary of relevant results involving ratio distributions, which we utilize for calculating statistical properties of the thermopower. Figure 10: We illustrate Feynman diagrams that contribute to the off-diagonal Green’s function covariance in a pure SYK model. For a model that includes random hoppings, there exists a non-trivial contribution in the bare \(\delta G^{ab}\delta G^{ba}\) propagator; for a pure SYK model, this contribution is subtracted off exactly in the covariance and one must include the leading order correction to obtain a non-zero result. We take \(X_{1}\), \(X_{2}\) to be two correlated Gaussian random variables, with means \(\mu_{1,2}\), variances \(\sigma_{1,2}^{2}\), and correlation coefficient \(r\). Our quantity of interest is the random variable \(Z\equiv X_{1}/X_{2}\). The probability density function \(f(z)\) of \(Z\) can be obtained from the joint density \(g(x_{1},x_{2})\) of \(X_{1,2}\), \[f(z)=\int_{-\infty}^{\infty}|y|g(zy,y)\,\mathrm{d}y. \tag{109}\] This function along with the cumulative distribution function \(F(z)\equiv\int_{-\infty}^{z}f(x)\,\mathrm{d}x\) are known [47]. 
However, much like the Cauchy distribution - which is a limiting case of a ratio distribution when the numerator and denominator have zero mean - the integrals \(\int_{-\infty}^{\infty}z^{a}f(z)\,\mathrm{d}z\), \(\alpha\geqslant 1\) do not converge and the mean and variance are formally ill-defined. One can make progress in the limit where \(|\sigma_{2}/\mu_{2}|\to 0\); or in other words, when the probability of the denominator in \(Z\) becoming negative is zero. This result can equivalently be derived from the assumption that \(X_{2}>0\) which implies \(F(z)\equiv P(x_{1}/x_{2}<z)=P(x_{1}-zx_{2}<0)\). Since the sum of two correlated Gaussians is also a Gaussian, this gives the cumulative distribution function \[F(z)=\Phi\left(\frac{\mu_{2}z-\mu_{1}}{\sqrt{\sigma_{1}^{2}-2zr\sigma_{1} \sigma_{2}+z^{2}\sigma_{2}^{2}}}\right) \tag{110}\] where \(\Phi(x)\) is the cumulative distribution function of a Gaussian random variable, \(\Phi(x)\equiv\int_{-\infty}^{x}\phi(y)\,\mathrm{d}y\), \(\phi(x)\equiv\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^{2}}\). For small fluctuations around the mean value, \(\overline{z}=\mu_{1}/\mu_{2}\), we have \[\Phi\left(\frac{\mu_{2}z-\mu_{1}}{\sqrt{\sigma_{1}^{2}-2zr\sigma_{1}\sigma_{2 }+z^{2}\sigma_{2}^{2}}}\right)\approx\Phi\left(\frac{z-\overline{z}}{\overline {z}\sqrt{\frac{\sigma_{1}^{2}}{\mu_{1}^{2}}-\frac{2r\sigma_{1}\sigma_{2}}{\mu _{1}\mu_{2}}+\frac{\sigma_{2}^{2}}{\mu_{2}^{2}}}}\right) \tag{111}\] which yields the approximation to normality, with variance \[\frac{\text{Var }z}{\overline{z}^{2}}\approx\frac{\sigma_{1}^{2}}{\mu_{1}^{2}}- \frac{2r\sigma_{1}\sigma_{2}}{\mu_{1}\mu_{2}}+\frac{\sigma_{2}^{2}}{\mu_{2}^{ 2}}\,. \tag{112}\] In the main text, we find several situations where the numerator and denominator are highly correlated such that \(r=1-\mathcal{O}(N^{-1})\), where we use \(N\) as a stand-in for a generic large dimensionless parameter, which depending on the context may refer to either the actual system size or \(T/E_{\text{coh}}\). To leading order in \(N^{-1}\), we therefore have _perfect_ correlation between the numerator and denominator, leading to \[\frac{\text{Var }z}{\overline{z}^{2}}\approx\left(\frac{\sigma_{1}}{\mu_{1}}- \frac{\sigma_{2}}{\mu_{2}}\right)^{2}+\mathcal{O}\big{(}N^{-1}\big{)}\,. \tag{113}\] Working in the limit of perfect correlation means that we may think of \(X_{1}\) and \(X_{2}\) as arising from the same normal distribution \(X\), i.e. \(X_{1}=\sigma_{1}X+\mu_{1}\) and \(\mu_{2}\). The ratio distribution is still non-trivial even if both variables arise from the same probability distribution. However, it does imply a special limit \(\sigma_{1}/\mu_{1}=\sigma_{2}/\mu_{2}\) where the distribution becomes trivial and the variance vanishes due to the numerator and denominator being directly proportional to each other. In this limit, the variance incurs an additional \(N^{-1}\) suppression due to the necessity of expanding out \(r\) to higher order. This prediction is confirmed by numerical simulation, see Fig. 11. We take \(10000\) samples of the ratio distribution \(Z\) for parameters \(\sigma_{1}^{2}=1.5\), \(\sigma_{2}^{2}=1\), \(\mu_{y}=10\), \(r=1-1/N\), and variable \(\mu_{x}\). We fit the power law scaling of the variance as a function of \(N\) for \(500<N<10000\) and plot the exponent while varying \(\mu_{x}\). As expected, an anomalous suppression of the variance appears at the critical value where \(\sigma_{1}/\mu_{1}=\sigma_{2}/\mu_{2}\). 
## Appendix D Conductance fluctuations for single-lead coupling In the main text, we present results for conductance fluctuations for a model where we take our leads to be coupled to all sites with equal magnitude. To leading order in \(N^{-1}\), fluctuations are controlled by the off-diagonal Green's function covariance \(\rho_{o}\). If we instead choose to model our leads as only being coupled to a single site in the quantum dot, our Figure 11: By drawing from a ratio distribution, where the correlation coefficient between the numerator and denominator is given by \(1-1/N\), we fit the variance to an \(N^{\alpha}\) form and plot the exponent \(\alpha\). When the probability distributions are tuned such that \(\sigma_{1}/\mu_{1}=\sigma_{2}/\mu_{2}\), we obtain a \(N^{-1}\) suppression of the variance. results are modified as fluctuations are driven by the diagonal Green's function covariance \(\rho_{d}\), which is generically suppressed relative to \(\rho_{o}\) by an additional factor of \(N^{-1}\). Note that this contribution is still present in our model in the main text, but is ignored in virtue of this \(N^{-1}\) suppression. We present results for both \(\rho_{o}\) and \(\rho_{d}\) in the main text but focus on conductance fluctuations for fully-connected leads. Here, we present results for conductance fluctuations that arise from \(\rho_{d}\), which are subleading in \(N^{-1}\) for fully-connected leads but are the dominant contribution for leads coupled to a single site. We remind the reader that average conductance is insensitive to this choice and remains the same as in the main text. For a free fermion model, we have the physical interpretation that \(\rho_{d}\) gives the covariance of the single-particle eigenvalues, the form of which is universal and well-known from random matrix theory. In particular, the variance of linear statistics such as the conductance is given by the Dyson-Mehta formula [40; 41], which yields the conductance variance \[\text{Var }\sigma_{FF}=\left(\frac{e^{2}}{\hbar}\frac{\Gamma}{TN}\right)^{2} \frac{3\zeta(3)}{\pi^{4}}\,. \tag{49}\] For a pure SYK model, our expression for \(\rho_{d}\) given in Eq. 40 yields \[\text{Var }\sigma_{SYK}=\left(\frac{e^{2}\Gamma}{\hbar}\right)^{2}\frac{0.07}{N ^{4}JT}\,. \tag{50}\] We now consider the case with both SYK interactions and random hopping terms. For the low temperature Fermi liquid phase, we predict a scaling similar to the free fermion result in Eq. 49, but with a renormalization which can be deduced on dimensional grounds to be \[\text{Var }\sigma_{tSYK}\propto\left(\frac{\Gamma e^{2}}{\hbar TN}\frac{t}{J} \right)^{2}\,,\quad T\ll E_{\text{coh}}\,. \tag{51}\] For the SYK regime, \(T\gg E_{\text{coh}}\), we find nearly identical to the case considered to the main text, due to the fact that in this regime, \(\rho_{d}(\omega,\epsilon)=N^{-1}\rho_{o}(\omega,\epsilon)\) to leading order in \(E_{\text{coh}}/T\). Hence, \[\text{Var }\sigma_{tSYK}=2.02\mathcal{E}^{2}\left(\frac{\Gamma e^{2}}{\hbar NT }\frac{t}{J}\right)^{2}\,,\quad T\gg E_{\text{coh}}\,. \tag{52}\] Note that this is the same scaling as in the Fermi liquid regime, albeit with the crucial difference that the overall coefficient is proportional to the particle-hole asymmetry.
2309.14894
Verifiable Learned Behaviors via Motion Primitive Composition: Applications to Scooping of Granular Media
A robotic behavior model that can reliably generate behaviors from natural language inputs in real time would substantially expedite the adoption of industrial robots due to enhanced system flexibility. To facilitate these efforts, we construct a framework in which learned behaviors, created by a natural language abstractor, are verifiable by construction. Leveraging recent advancements in motion primitives and probabilistic verification, we construct a natural-language behavior abstractor that generates behaviors by synthesizing a directed graph over the provided motion primitives. If these component motion primitives are constructed according to the criteria we specify, the resulting behaviors are probabilistically verifiable. We demonstrate this verifiable behavior generation capacity in both simulation on an exploration task and on hardware with a robot scooping granular media.
Andrew Benton, Eugen Solowjow, Prithvi Akella
2023-09-26T12:51:03Z
http://arxiv.org/abs/2309.14894v1
Verifiable Learned Behaviors via Motion Primitive Composition: Applications to Scooping of Granular Media ###### Abstract A robotic behavior model that can reliably generate behaviors from natural language inputs in real time would substantially expedite the adoption of industrial robots due to enhanced system flexibility. To facilitate these efforts, we construct a framework in which learned behaviors, created by a natural language abstractor, are verifiable by construction. Leveraging recent advancements in motion primitives and probabilistic verification, we construct a natural-language behavior abstractor that generates behaviors by synthesizing a directed graph over the provided motion primitives. If these component motion primitives are constructed according to the criteria we specify, the resulting behaviors are probabilistically verifiable. We demonstrate this verifiable behavior generation capacity in both simulation on an exploration task and on hardware with a robot scooping granular media. ## I Introduction In recent years, learning from human demonstrations has proven tremendously successful at imitating intricate, human-like motion on robotic systems [1, 2, 3]. This has allowed for improvements in robotic grasping [4, 5, 6], assembly [7, 8, 3], and even robotic surgery [9, 10, 11]. However, these methods often require prohibitive amounts of precisely labeled data [12]. Additionally, these learned behaviors are typically not transferrable to tasks that are similar but not identical, prompting further research into task-transferrable learning [13, 14, 15]. However, works in this vein exhibit similar, if not heightened, requirements on the amount of data available to the learning procedure. Despite these challenges, more comprehensive learned models that incorporate streams of multimodal data have shown tremendous success at learning generalized, intricate behaviors. For example, the recently developed Palm-E model has successfully translated natural language user commands to control policies for a \(6\)-DOF arm, realizing the intended tasks even when they were not explicitly learned [16]. Building on the success of Palm-E and other foundational robotic models [17, 18, 19], recent work also aims to codify effective design principles for these models [20]. Conceptually, however, both the Palm-E model and the other learning paradigms mentioned prior hinge on a notion of composing generalized behavior from a finite set of learned behaviors. Prior work in controls and robotics has shown that generalizing from this initial behavior set, termed motion primitives in the existing literature, yields robust, and more importantly, verifiable generalized behavior provided the primitives and subsequent behaviors are constructed with care [21, 22, 23]. Consequently, inspired by the previous attempts at codifying design principles for these learned models [20], we posit that by leveraging these prior works in motion primitives and black-box risk-aware verification, we can synthesize verifiable learned behaviors over a provided set of carefully constructed motion primitives. **Our Contribution:** Leveraging recent work in risk-aware verification [24, 25], we take steps towards constructing a framework for verifying learned, generalized behaviors composed from a set of motion primitives. 
Specifically, if the input/output spaces of the motion primitives satisfy certain conditions that permit their verifiability, and the behavior is constructed as a directed graph over these primitives, then the resulting behavior is similarly verifiable. We showcase this verifiability in both simulation and on hardware, focusing on exploration and reconnaissance for the former and a granular media scooping task for the latter.

Fig. 1: A graphical representation of our natural-language-based behavior generation and verification scheme. By ensuring that the language model only composes behaviors as a directed graphical abstraction over the provided motion primitives, we show that any such generated behavior has an associated certificate list that we can exploit to verify the learned behavior's ability to realize the user's desired task.

**Structure:** We review black-box risk-aware verification and motion primitives in Section II before formally stating the problem under study in Section II-C. Section III details our behavior generation scheme and states our main contribution regarding the verifiability of the resulting generated behaviors. Finally, Section IV showcases our behavior generation scheme developing an exploratory behavior - Section IV-A - and a scooping motion for granular media - Section IV-B. Both behaviors are also verified in the same sections according to the provided verification scheme.

## II Terminology and Formal Problem Statement

### _Black-Box Risk-Aware Verification_

The information in this section is adapted from [24, 25]. Black-box risk-aware verification assumes the existence of a discrete-time controlled system of the following form, with system state \(x\in\mathcal{X}\), control input \(u\in\mathcal{U}\), environment state \(d\in\mathcal{D}\) and potentially unknown dynamics \(f\): \[x_{k+1}=f(x_{k},u_{k},d),\ \forall\ k=0,1,2,\ldots. \tag{1}\] As verification measures the robustness of a controlled system's ability to realize a behavior of interest, work in this vein assumes the existence of a feedback controller \(U:\mathcal{X}\times\mathcal{D}\to\mathcal{U}\). The system's evolution when steered by this controller \(U\) will be denoted as \(\Sigma\) - a function mapping an initial system and environment state to the system state evolution as prescribed by (1), _i.e._ \(\Sigma(x_{0},d)=\{x_{0},x_{1},\ldots,x_{K}\}\in\mathcal{X}^{K}\) for some \(K>0\). Finally, a robustness measure \(\rho\) maps this state evolution \(\Sigma(x_{0},d)\) and environment state \(d\) to the reals, _i.e._ \(\rho:\mathcal{X}^{K}\times\mathcal{D}\to\mathbb{R}\). For context, these robustness measures can be those coming from temporal logic [26] or the minimum value of a control barrier function over a time horizon [27], among other methods. A positive outcome of this robustness measure indicates that the corresponding state evolution realized the desired behavior, _i.e._ \(\rho(\Sigma(x_{0},d),d)\geq 0\) implies the state evolution \(\Sigma(x_{0},d)\) realized the behavior of interest. Black-box risk-aware verification employs this robustness measure to provide a probabilistic statement on the system's ability to realize the desired behavior for all permissible initial conditions and environment states.
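As a toy illustration of such a robustness measure (not one used in this work), consider a planar reach-avoid specification: the trajectory must keep a minimum clearance from an obstacle at every step and terminate within a tolerance of a goal. A minimal sketch, with all thresholds chosen arbitrarily:

```python
import numpy as np

def reach_avoid_robustness(traj, obstacle, goal, clearance=0.5, tol=0.3):
    """Toy robustness measure rho(Sigma(x0, d), d) for a reach-avoid task.

    traj: (K, 2) array of planar states; obstacle/goal: (2,) centers.
    The returned value is positive iff the trajectory kept `clearance` from
    the obstacle at every step AND ended within `tol` of the goal.
    """
    traj = np.asarray(traj, dtype=float)
    avoid = np.min(np.linalg.norm(traj - obstacle, axis=1)) - clearance
    reach = tol - np.linalg.norm(traj[-1] - goal)
    return min(avoid, reach)   # min-composition, as in temporal-logic robustness

# Example: a trajectory that skirts the obstacle and ends on the goal.
traj = np.array([[0.0, 0.0], [1.0, 1.2], [2.0, 1.5], [3.0, 1.0]])
print(reach_avoid_robustness(traj, obstacle=np.array([1.5, 0.0]),
                             goal=np.array([3.0, 1.0])))  # 0.3 > 0: satisfied
```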
This will formally be expressed in the following theorem: **Theorem 1**.: _Let \(\{r^{i}=\rho(\Sigma(x_{0}^{i},d^{i}),d^{i})\}_{i=1}^{N}\) be a set of \(N\) robustness evaluations of trajectories whose initial conditions and environments \((x_{0}^{i},d^{i})\) were sampled via \(\pi\) over \(\mathcal{X}\times\mathcal{D}\), and let \(r^{*}=\min\{r_{1},r_{2},\ldots,r_{N}\}\). Then, both the probability of sampling an initial condition and environment evolution pair whose robustness is lower bounded by \(r^{*}\) and the confidence in the associated probability is only a function of the number of samples \(N\) and a scalar \(\epsilon\in[0,1]\), i.e._ \[\mathbb{P}_{\pi}^{N}\left[\mathbb{P}_{\pi}[\rho(\Sigma(x_{0},d),d)\geq r^{*}] \geq 1-\epsilon]\geq 1-(1-\epsilon)^{N}.\] ### _Motion Primitives_ Motion primitives are a well-studied field in the controls and robotics literature, though we will provide a slight variant on existing definitions to align with our notation. **Definition 1**.: A _Motion Primitive_ is \(4\)-tuple \(\mathcal{P}=(\Xi,A,U,R)\) with the following definitions for the tuple: * The complete set of parameters for this primitive, _i.e._\(\Xi\subseteq\mathbb{R}^{p}\) for an appropriate dimension \(p\geq 0\). * A function taking a system and environment state \((x,d)\) as per (1) and outputting the subset of valid parameters \(P\) for this pair, _i.e._\(A(x,d)=P\subseteq\Xi\). * The parameterized controller for this primitive, mapping states, environments, and the parameter to inputs, _i.e._\(U:\mathcal{X}\times\mathcal{D}\times\Xi\to\mathcal{U}\). * A parameterized function outputting the subset of the state space the system will occupy upon completion of the primitive, _i.e._ for \(\xi\in\Xi\) and with environment state \(d\), \(R(\xi,d)=X_{f}\subseteq\mathcal{X}\). As an example consistent with the simulations to follow then, consider the system as per (1) to be a single-integrator system on the plane required to navigate in a finite-sized grid. A feasible motion primitive \(\mathcal{P}\) would be moving the system to an adjacent cell. For simplicity's sake, assume there are no obstacles, and as such, the environment state space \(\mathcal{D}=\varnothing\). Then, the complete set of parameters \(\Xi\) would be the labels for all the cells in this grid, the accepting function \(A\) outputs all adjacent cells to the cell containing the current system state \(x\), \(U\) could be a proportional controller sending the system to the appropriate grid, and \(R\) would output the subset of the state space encompassed by the cell to which the system was required to move. ### _Problem Statement_ Our goal is to develop a framework by which behaviors learned over these primitives can be verified. As such, we define a behavior \(B\) as a directed graph of primitives, with edges from a primitive \(\mathcal{P}\) indicating the primitive \(\mathcal{P}^{\prime}\) to be run upon completion of \(\mathcal{P}\). For examples of such behaviors, see the sketch provided in Figure 1 and the resulting behavior for our simulation example in Figure 3. The formal definition of these behaviors will follow. **Definition 2**.: A _behavior_\(B\) is a directed graph defined as a \(4\)-tuple, _i.e._\(B=(N,E,S,T)\) with the following definitions: * The finite set of nodes for the graph, where each node is a primitive as per Definition 1, _i.e._\(N=\{\mathcal{P}_{1},\mathcal{P}_{2},\ldots,\mathcal{P}_{\left\lvert N\right\rvert}\}\). * The set of directed edges connecting nodes in the graph. 
Each edge identifies a method to choose parameters for the successive primitive. If multiple edges emanate from a node, then a method exists such that at runtime, only one edge is chosen. * A start function taking as input the system and environment state \((x,d)\) as per (1) and outputting both the starting primitive and its parameter, _i.e._\(S(x,d)=(\xi,\mathcal{P})\) where \(\mathcal{P}\in N\) and \(\xi\in A_{\mathcal{P}}(x,d)\). * The set of terminal nodes, _i.e._\(T\subseteq N\). Our goals are twofold. First, determine whether we can verify the behaviors generated by Algorithm 1, and second, if the behaviors are verifiable, determine a framework by which we can verify any behavior generated by this method. Phrased formally, the problem statement will follow. **Problem 1**.: _Determine if the behaviors generated by Algorithm 1 are verifiable, and if they are verifiable, determine a method to verify any such generated behavior._ ## III Verifying Learned Behaviors We will provide a solution to both aspects of Problem 1 simultaneously, by constructing the framework for verifying any behavior as per Definition 2. To construct this framework, we first note that there exist two outcomes to executing any behavior from any initial system and environment state - it either terminates successfully or it does not. In the event it terminates successfully, we can record the set of all primitives run over the course of the behavior, their corresponding parameters, and the system states upon termination of the corresponding primitive, _i.e._\(\mathbb{D}=\{(\xi_{1},\mathcal{P}_{1},x_{1}^{f}),(\xi_{2},\mathcal{P}_{2},x_{ 2}^{f}),\dots\}\). If the behavior fails due to reasons such as an intermediary controller failure or an error in the behavior's graph construction leading to a runtime error, we can record the failure. This permits us to construct a robustness measure for a verification scheme aligned with the method described in Section II-A. First, for each pair in the dataset \(\mathbb{D}\) generated by running the behavior, we can define a certificate function checking whether the terminal state laid in the terminal set prescribed by the primitive, parameter, and environment: \[C\left(\xi,\mathcal{P},x^{f},d\right)=x^{f}\in R_{\mathcal{P}}(\xi,d). \tag{2}\] Here, we note that we are implicitly associating boolean outcomes with \(\pm 1\). The robustness measure \(\rho\) would check the validity of each of these certificates over the run of a behavior and output \(1\) if all certificates were satisfied and \(-1\) if the system failed or any certificate was not satisfied. Specifically then, let \((x_{0},d)\) be the initial system and environment state, let \(\Sigma\) be the trajectory function as described in Section II-A, and let \(\mathbb{D}\) be the dataset of tuples collected over the course of a successfully run behavior. Then the robustness measure \[\rho_{B}(\Sigma(x_{0},d),d)=\begin{cases}\min_{\gamma\in\mathbb{D}}\ C(\gamma, d)&\text{if behavior finished},\\ -1&\text{else}.\end{cases} \tag{3}\] Here, we have abbreviated the tuples in \(\mathbb{D}\) with the variable \(\gamma\) to ease notation. That being said, the robustness measure \(\rho_{B}\) in (3) evaluates to a positive number if and only if the behavior successfully terminated and all component primitives exhibited their component desired behaviors. Using the robustness measure in (3), we can verify any behavior as per Definition 2. 
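The certificate check in (2) and the behavior-level robustness measure in (3) reduce to simple bookkeeping over the dataset \(\mathbb{D}\) logged during a run. A minimal sketch of that bookkeeping is given below; the data-structure and function names are ours, not the paper's:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class PrimitiveRecord:
    """One entry of the logged dataset D: (parameter, primitive name, terminal state)."""
    xi: tuple
    primitive: str
    x_final: tuple

def behavior_robustness(
    records: Optional[List[PrimitiveRecord]],
    in_terminal_set: Callable[[tuple, str, tuple, dict], bool],
    env: dict,
) -> int:
    """Equation (3): +1 iff the behavior finished and every certificate (2) holds."""
    if records is None:          # behavior crashed or never terminated
        return -1
    for rec in records:
        # Certificate (2): terminal state must lie in R_P(xi, d)
        if not in_terminal_set(rec.xi, rec.primitive, rec.x_final, env):
            return -1
    return 1
```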
To ease the formal exposition of the results, we will first denote via \(\mathcal{B}\) the subset of the system and environment state spaces that have a valid starting point for the behavior \(B\) to be verified. This is to ensure that in the verification procedure to follow, we do not sample and evaluate the behavior's performance from initial conditions and environment states that disallow the behavior from the start. Formally then, \[\mathcal{B}=\{(x,d)\in\mathcal{X}\times\mathcal{D}\ |\ S_{B}(x,d)\neq\varnothing\}. \tag{4}\] With these definitions we have the following theorem identifying a framework to verify behaviors, though to simplify exposition, we will express the assumptions separately: **Assumption 1**.: Let \(\{r^{i}=\rho_{B}(\Sigma(x_{0}^{i},d^{i}),d^{i})\}_{i=1}^{N}\) be the behavioral robustness of \(N\) attempts at executing behavior \(B\) from uniformly sampled initial conditions and states \((x_{0},d)\) over the allowable space \(\mathcal{B}\) as per (4) with robustness measure \(\rho\) as per (3), and let \(r^{*}=\min_{i}r^{i}\). **Theorem 2**.: _Let Assumption 1 hold. If \(r^{*}=1\), then \(\forall\ \epsilon\in[0,1]\), the behavior \(B\) will execute successfully for at least \(100(1-\epsilon)\%\) of the initial condition and environment pairs in \(\mathcal{B}\) and the confidence in this statement is \(1-(1-\epsilon)^{N}\)._ **Proof:** As Assumption 1 satisfies the conditions for Theorem 1, we can employ the same theorem and get the following result \(\forall\ \epsilon\in[0,1]\) and substituting \(r^{*}=1\): \[\begin{array}{l}\mathbb{C}1\triangleq\mathbb{P}_{\mathrm{U}[\mathcal{B}]}[ \rho_{B}(\Sigma(x_{0},d),d)\geq 1]\geq 1-\epsilon,\\ \mathbb{C}2\triangleq\mathbb{P}_{\mathrm{U}[\mathcal{B}]}^{N}[\mathbb{C}1] \geq 1-(1-\epsilon)^{N}.\end{array}\] Here, \(\mathrm{U}[\mathcal{B}]\) denotes the uniform distribution over \(\mathcal{B}\). We will analyze \(\mathbb{C}1\) first. Note that in order for \(\rho_{B}(\Sigma(x_{0},d),d)\geq 1\), all certificate functions over the dataset \(\mathbb{D}\) generated by running behavior \(B\) must evaluate to \(1\) - a consequence of equations (3) and (2). As a result, \[\rho_{B}(\Sigma(x_{0},d),d)=1\iff\begin{array}{l}\mathrm{The\ behavior\ executes}\\ \mathrm{successfully}.\end{array}\] Therefore, we can define a subset of the feasible joint state space, corresponding to initial conditions and environment states where from and in the behavior executes successfully: \[\mathbb{V}=\{(x,d)\in\mathcal{B}\ |\ \rho(\Sigma(x,d),d)=1\}.\] Similarly, we can define a volume fraction function over the allowable joint state space: \[\mathcal{V}(Q)=\frac{\int_{Q}1ds}{\int_{\mathcal{B}}1ds}.\] Finally, since the uniform distribution assigns probabilistic weight to a subset of events equivalent to their volume fraction in the sample space, \(\mathbb{C}1\) resolves to the following: \[\mathbb{C}1\equiv\mathcal{V}(\mathbb{V})\geq 1-\epsilon.\] Substituting this equivalency in \(\mathbb{C}2\) completes the proof. ### _Extending to Non-Deterministic Behaviors_ In the prior sections, we only considered deterministic system evolution and behavior graph resolution. However, it may be the case that either the system evolves or the behavior graph resolves non-deterministically. Our proposed verification framework should account for this non-determinism, and this section details how the prior procedure extends to this case. We will formalize this non-determinism directly in the context of verification. 
Specifically, we assume that we have a distribution by which we can draw robustness evaluations of system trajectories, _i.e._ \[\rho(\Sigma(x_{0},d),d)\text{ is sampled from }\pi_{V}\] Note that this accounts for both cases where the initial system and environment states are potentially sampled randomly via a distribution \(\pi_{X}\) over the allowable space \(\mathcal{B}\) as per (4) and the ensuing trajectories \(\Sigma(x_{0},d)\) are also randomly sampled from some unknown trajectory-level distribution \(\pi_{S}\), arising from the aforementioned non-deterministic system evolution or behavior graph resolution. As a result, we can follow the same verification method as in Theorem 1, though we cannot identify trajectories via initial conditions as we did in Assumption 1. The following assumption and corollary expresses this notion formally: **Assumption 2**.: Let \(\rho_{B}\) be the robustness measure for the behavior \(B\) as per equation (3), let \(\{r^{i}=\rho_{B}(\Sigma(x_{0}^{i},d^{i}),d^{i})\}_{i=1}^{N}\) be the robustnesses of \(N\) trajectories sampled via the (unknown) distribution \(\pi_{V}\), and let \(r^{*}=\min_{i}r^{i}\). **Corollary 1**.: _Let Assumption 2 hold. If \(r^{*}=1\), then \(\forall\ \epsilon\in[0,1]\), the non-deterministic system \(\Sigma\) successfully executes the behavior \(B\) with minimum probability \(1-\epsilon\) and confidence \(1-(1-\epsilon)^{N}\), i.e.:_ \[\mathbb{P}_{\pi_{V}}^{N}\left[\mathbb{P}_{\pi_{V}}\left[\rho(\Sigma(x_{0},d), d)\geq r^{*}\right]\geq 1-\epsilon\right]\geq 1-(1-\epsilon)^{N}.\] **Proof:** This is a direct result of Theorem 1. ## IV Demonstrations ### _Exploratory Behavior Generation_ To illustrate the verifiability of behaviors generated via Algorithm 1, this section will detail our efforts at using a natural language abstractor built on ChatGPT to construct an exploratory behavior. **System and Environment Description:** To that end, the simulations to follow feature an agent idealized as a single integrator system on the plane and navigating within a \(10\times 10\) grid with obstacles and a few goals. The system state \(x\) is its planar position and its labels for each of the cells, _i.e._\(x\in[-5,5]^{2}\times\{\text{empty},\text{obstacle},\text{unexplored},\text{ goal}\}^{100}\triangleq\mathcal{X}\). The environment, _i.e._ obstacle and goal cells, is the subset of the overall label space where there exist \(30\) obstacles and \(3\) goals with no overlaps, _i.e._\(\mathcal{D}\subset\{\text{empty},\text{obstacle},\text{goal}\}^{100}\). The system dynamics as per (1) are known in this case, with single-integrator dynamics for the planar dimension and label updates when specifically provided by a controller - otherwise, labels remain constant. **Motion Primitives:** The system has two primitives upon which the natural-language behavior generalizer can build behaviors. Their descriptions will follow: * A label update function that updates the labels in the state \(x\) to match the labels of the cells immediately surrounding the agent, _i.e._ if the agent were in cell \((2,3)\) the function updates the labels of cells \(\{(2,3),(3,3),(1,3),(2,4),(2,2)\}\). * The set of all cells, _i.e._\(\Xi=\{0,1,2,\ldots,9\}^{2}\). * A function outputting the cell the system currently occupies, _i.e._ if the system's planar position were \([-4.5,-3.5]\), the only valid parameter is cell \((0,1)\). * Updates the state to reflect the environment labels of all adjacent cells. 
* A function outputting the portion of the state space where the labels for the agent's current and adjacent cells align with those of the environment. All other cell labels are unconstrained, _i.e._ if the agent's current and adjacent cells were all empty, then \(R(\xi,d)\) would output the subset of the state space containing label vectors whose elements for those cells all read "empty" with no constraints on other elements.
* A navigation function that steers the agent to a desired cell while avoiding obstacles.
* The set of all cells, _i.e._ \(\Xi=\{0,1,2,\ldots,9\}^{2}\).
* A function outputting the portion of the parameter space where the cell is reachable by the agent in the provided environment.
* A Markov-Decision-based planner tracked by a PD controller that steers the agent to the desired cell while avoiding obstacles.
* Outputs the portion of the planar state space encompassed by the desired cell, _i.e._ if the agent could reach cell \((2,2)\), then \(R(\xi=(2,2),d)=[-2,-1]^{2}\).

Fig. 2: Examples of the environments considered for the example in Section IV-A. The blue circle represents the agent, the blue square represents the agent's starting cell, the green squares are goals, the black squares are obstacles, and the gold region is the region explored by the learned behavior.

**Algorithm Information:** We desired an exploratory behavior whereby the system searches the grid for a goal and, after identifying a goal, oscillates back and forth between the goal and its starting location at least \(5\) times. As useful information for the task-following algorithm, the inputted information - string \(I\) in Algorithm 1 - indicated that the language model could use the following functions when determining edges in the outputted behavior graph:
* A function that outputs as a list all the cells that have been explored by the agent thus far, _i.e._ all cells that have a label other than "unexplored" in the current state.
* A function that outputs as a list all cells immediately adjacent to the agent's currently occupied cell.
* A function that determines whether a goal has been found and outputs the corresponding cell.

**Behavior 1**: _For the first step, we asked the algorithm to devise a behavior that explored the grid until it identified a goal. Specifically, the inputted behavior string \(s\) was as follows: "Please construct a function that performs the following tasks in sequence. First, it searches over all explored cells that are not obstacles to find the explored cell with the highest number of unexplored neighbors. Let's call this identified cell, cell \(A\). Second, it sends the agent to cell \(A\) and identifies the labels of all adjacent cells. Three, it repeats steps one and two until a goal has been found, at which point, it stops." The part of the graph highlighted in green in Figure 3 shows the generated behavior graph. As part of this generation procedure, it used two of the provided functions \(\mathcal{E}_{1}^{s},\mathcal{E}_{2}^{s}\) to construct the edge decision function \(\mathcal{E}_{4}^{s}\) whose description will follow:_
* A function that iterates over the list of explored cells - the list of explored cells is provided by \(\mathcal{E}_{1}^{s}\) - and assigns to each cell the number of its adjacent cells that are unexplored - the list of adjacent cells is provided by \(\mathcal{E}_{2}^{s}\). It reports the first cell in the list with the maximum number of unexplored neighbors.
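The edge decision function \(\mathcal{E}_{4}^{s}\) described in the last bullet amounts to a frontier-selection rule. A small sketch of that rule, under our own assumed grid representation (cells keyed by integer coordinates, labels stored as strings), could look as follows:

```python
def most_unexplored_neighbors(explored_cells, labels, grid_size=10):
    """Return the first explored, non-obstacle cell with the most unexplored neighbors.

    explored_cells: list of (i, j) cells explored so far (role of E_1^s in the text).
    labels: dict mapping (i, j) -> "empty" | "obstacle" | "goal" | "unexplored".
    """
    def neighbors(cell):          # adjacent cells (role of E_2^s in the text)
        i, j = cell
        cand = [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
        return [c for c in cand if 0 <= c[0] < grid_size and 0 <= c[1] < grid_size]

    best_cell, best_count = None, -1
    for cell in explored_cells:
        if labels.get(cell) == "obstacle":
            continue
        count = sum(labels.get(n, "unexplored") == "unexplored" for n in neighbors(cell))
        if count > best_count:    # strict '>' keeps the *first* maximal cell
            best_cell, best_count = cell, count
    return best_cell

# Tiny usage example with two explored cells:
labels = {(0, 0): "empty", (0, 1): "empty", (1, 0): "obstacle"}
print(most_unexplored_neighbors([(0, 0), (0, 1)], labels))  # -> (0, 1)
```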
**Behavior 2**: _We wanted to build on the prior behavior for the latter half of our goal, and as such, informed the LLM of the existence of this prior behavior in the list of existing behaviors denoted as \(\mathbb{B}\) in Algorithm 1. Then, as the user, we requested the following from the LLM: "Please construct a function that performs the following tasks in sequence. First, it finds a goal. Second, it moves between the goal and its starting location 5 times." The behavior graph for this second behavior is the unhighlighted graph in Figure 3._ **Verification Procedure and Remarks:** As Behavior \(2\) utilized Behavior \(1\), verifying both amounts to verifying the former. Following the results of Theorem 2, we recorded a data set \(\mathbb{D}\) of parameters, primitives, and terminal states while running the second behavior. The certificates per equation (2) amount to checking that updated labels matched their true labels after running primitive \(\mathcal{P}_{1}^{s}\) and checking that the system occupied the desired cell after running primitive \(\mathcal{P}_{2}^{s}\). The allowable joint state space \(\mathcal{B}\) as per (4) was the portion of the joint space where the system starts in a state \(x\) such that at least one goal is reachable in the corresponding environment \(d\). Finally, the verification procedure uniformly randomly sampled state pairs \((x,d)\in\mathcal{B}\) and checked the corresponding certificates for each run of the behavior. After running the second behavior from \(100\) randomly sampled initial state pairs, the behavior terminated successfully every time. As such, by Theorem 2 we expect that the second behavior will run successfully for \(95\%\) of possible state pairs and we are \(99.4\%\) confident in this statement - we generated these numbers by substituting \(\epsilon=0.05\) and \(N=100\) for Theorem 2. To validate these statements, we ran the second behavior in \(2000\) more sampled environments, and it terminated successfully every time. If we were incorrect in our prior statement that the behavior would run successfully for at least \(95\%\) of feasible state pairs \((x,d)\in\mathcal{B}\), then we would have been effectively guaranteed to identify such a failure over the \(2000\) subsequent runs. As we did not, we are confident in our corresponding statement. Furthermore, while the synthesized behaviors seem rudimentary, they suffice to indicate that our behavior synthesis scheme produces effective and verifiable behaviors. Fig. 3: Depiction of the directed behavior graph generated by Algorithm 1 for the example detailed in Section IV-A. The first behavior’s graph is highlighted in green, the second behavior incorporates the first and the extra information is the unhighlighted part of the graph. ### _Scooping of Granular Media_ Our second demonstration focusing on granular media scooping illustrates the framework's utility in helping end-users set up repetitive, verifiable tasks. **System and Environment Description:** The scooping problem consists of picking up material from a donor container and depositing it into a receiver container using a UR5e \(6\)-DOF robot arm with a wrist-mounted RealSense depth camera. While a rudimentary scooping motion has been programmed _apriori_, it does not know the environment in which it will be performing this motion - similar to the situation when a pre-programmed robot has to be initialized for specific use. 
The robot's state \(x\in\mathbb{R}^{6}\) is the full pose of the end-effector, the control input \(u\) corresponds to joint rotations, and the environment \(d\) corresponds to the locations and orientations of the donor and receiver containers and the level and distribution of sand in the donor container. **Motion Primitives:** In this case, the system only has one primitive, the scooping primitive, described as follows: * A primitive performing a scooping motion from a donor container to a receiver container. * The space of feasible end-effector poses where a parameter \(\xi\in\Xi\) denotes the pose in which the robot will sense all objects in the environment to start the scooping motion. * A function outputting the space of end-effector poses from which all containers are in view of the onboard vision system. * A controller that performs the scooping motion. * A function that outputs a ball around the provided parameter within which the end-effector's pose will lie upon the termination of the scooping motion. That being said, the acceptance function \(A\) is implicitly defined and impossible to know _apriori_. Here, we intend for the algorithm to assist the end-user in selecting a parameter \(\xi\) whose validity, _i.e._ existence in \(A(x,d)\)\(\forall\)\((x,d)\in\mathcal{X}\times\mathcal{D}\), can be checked through the ensuing verification procedure. **Algorithm Information:** To assist the user in picking such a parameter \(\xi\), the algorithm was provided an information string \(I\) describing a helper function \(\mathcal{E}_{1}^{r}\) that translated and rotated the end-effector a desired amount. This string also included several examples of natural language translations to inputs for this function \(\mathcal{E}_{1}^{r}\). Additionally, the string included another function \(\mathcal{E}_{2}^{r}\) that saved the end-effector pose for future reference, and the LLM was told to call this function if the user deemed the current end-effector pose satisfactory. **Behavior Generation and Verification:** The task-model repeatedly queried the user for end-effector translations and rotations and as to whether or not the user deemed the current pose sufficient for sensing any placement of containers. As such, there was no singular behavior prompt \(s\). However, as the resulting behavior repetitively executes the scooping primitive with the user-provided sensing pose parameter \(\xi\), this behavior can be verified by the results of Corollary 1. To do so, before every scooping motion, we placed the containers at a computer-generated randomly chosen distance from a pre-determined set-point. As we are manually placing containers at the pre-determined locations, there will be noise affecting this placement, though we assume this noise is independent for successive placements. We will denote this distribution of container placements via \(\pi\). As there is no need to sample over initial robot states - the system always starts and ends at the parameterized sensing pose \(\xi\) every iteration - we can draw independent environments - container placements - via our distribution \(\pi\) and record the robot's ability to perform its scooping motion in each placement. Doing so for \(59\) sampled environments with successful trials each time indicates according to Corollary 1 that if we continued to sample environments and test the system accordingly, the system would succeed at least \(95\%\) of the time and we are at least \(95\%\) confident in that statement. 
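In both demonstrations, the quoted probability and confidence values follow directly from the \(1-(1-\epsilon)^{N}\) expression of Theorem 1 with \(r^{*}=1\); the short helper below (ours, not part of the paper's code) reproduces them:

```python
def verification_confidence(n_samples: int, epsilon: float) -> float:
    """Confidence that at least a (1 - epsilon) fraction of cases succeed,
    given n_samples successful i.i.d. trials (Theorem 1 with r* = 1)."""
    return 1.0 - (1.0 - epsilon) ** n_samples

print(verification_confidence(100, 0.05))  # ~0.994  (exploration demo, Sec. IV-A)
print(verification_confidence(59, 0.05))   # ~0.952  (scooping demo, Sec. IV-B)
```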
## V Conclusion

We propose a framework by which a natural language abstractor can synthesize verifiable behaviors as a directed graph over provided motion primitives. To showcase the increased flexibility and verifiability of the synthesized behaviors, we instructed the task-following model to construct an exploratory behavior for a simulated planar agent and a scooping behavior for a robotic arm. In both cases, the generated behavior was verifiable via the aforementioned method, and we were able to validate our probabilistic verification statements in simulation.

Fig. 4: Depiction of the learned scooping behavior. In this case, the motion was coded previously, but contingent on the arm's ability to sense the cups in its environment. As such, the LLM interface only asked for the end-user to provide that initial positioning (1) wherein the arm had a high likelihood of sensing both cups. Then the LLM behavior first moves to the desired sensing position (2), calls the scooping primitive as seen in (3)-(4), and returns to the instructed sensing position in (5) in case any of the cups shifted during the procedure. Then the process repeats.
2309.13123
Probing the physics in the core boundary layers of the double-lined B-type binary KIC4930889 from its gravito-inertial modes
Stellar evolution models of B-type stars are still uncertain in terms of internal mixing properties, notably in the area between the convective core and the radiative envelope. This impacts age determination of such stars in addition to the computation of chemical yields produced at the end of their life. We investigated the thermal and chemical structure and rotation rate in the near-core boundary layer of the double-lined B-type binary KIC4930889 from its four-year Kepler light curve, ground-based spectroscopy, and Gaia astrometry. We computed grids of 1D stellar structure and evolution models for different mixing profiles and prescriptions of the temperature gradient in the near-core region. We examined the preferred prescription and the near-core rotation rate using 22 prograde dipole modes detected by Kepler photometry. We employed a Mahalanobis distance merit function and considered various nested stellar model grids, rewarding goodness of fit but penalising model complexity. We were able to constrain the near-core rotation rate of the pulsator to $\Omega_{\rm rot}=0.73^{+0.02}_{-0.06}$ d$^{-1}$. Furthermore, we found a preference for either an exponentially decaying mixing profile in the near-core region or absence of additional near-core mixing, but found no preference for the temperature gradient in this region. The frequency (co)variances of our theoretical predictions are much larger than the errors on the observed frequencies. This forms the main limitation on further constraining the individual parameters of our models. Additionally, non-adiabatic pulsation computations of our best models indicate a need for opacity enhancements to accurately reproduce the observed mode excitation. The eccentric close binary system KIC4930889 proves to be a promising target to investigate additional physics in close binaries by developing new modelling methods with the capacity to include the effect of tidal interactions for full exploitation of all detected oscillation modes.
Mathias Michielsen, Timothy Van Reeth, Andrew Tkachenko, Conny Aerts
2023-09-22T18:06:40Z
http://arxiv.org/abs/2309.13123v1
Probing the physics in the core boundary layers of the double-lined B-type binary KIC 4930889 from its gravito-inertial modes ###### Abstract Context:Stellar evolution models of B-type stars are still uncertain in terms of internal mixing properties, notably in the area between the convective core and the radiative envelope. This impacts age determination of such stars in addition to the computation of chemical yields produced at the end of their life. Aims:We investigated the thermal and chemical structure and rotation rate in the near-core boundary layer of the double-lined B-type binary KIC 4930889 from its four-year _Kepler_ light curve, ground-based spectroscopy, and _Gaia_ astrometry. Methods:We computed grids of 1D stellar structure and evolution models for different mixing profiles and prescriptions of the temperature gradient in the near-core region. We examined the preferred prescription and the near-core rotation rate using 22 prograde dipole modes detected by _Kepler_ photometry of KIC 4930889. We employed a Mahalanobis distance merit function and considered various nested stellar model grids, rewarding goodness of fit but penalising model complexity. Results:We were able to constrain the near-core rotation rate of the pulsator to \(\Omega_{\rm rot}=0.73^{+0.02}_{-0.06}\)d\({}^{-1}\). Furthermore, we found a preference for either an exponentially decaying mixing profile in the near-core region or absence of additional near-core mixing, but found no preference among the various options for the temperature gradient in this region. The frequency (co)variances of our theoretical predictions are much larger than the errors on the observed frequencies. This forms the main limitation on further constraining the individual parameters of our models. A combination of spectroscopic, astrometric, binary, and asteroseismic information was used to achieve these constraints. Additionally, non-adiabatic pulsation computations of our best models indicate a need for opacity enhancements to accurately reproduce the observed mode excitation. Conclusions:The eccentric close binary system KIC 4930889 proves to be a promising target to investigate additional physics in close binaries by developing new modelling methods with the capacity to include the effect of tidal interactions for full exploitation of all detected oscillation modes. ## 1 Introduction Slowly pulsating B (SPB) stars are non-radial pulsators with spectral types between B9 and B3, effective temperatures ranging from 11000 K to 22000 K, and masses from about 3 to 9 M\({}_{\odot}\)(Waelkens, 1991). They are main-sequence stars that exhibit high radial order gravity (g-) mode oscillations, which allows the use of asteroseismology to investigate the physical processes taking place in their interiors. In order to achieve this, space-based monitoring lasting for years is necessary to resolve the frequencies of the g modes, which have individual periods of roughly 0.5 to 3 days, with a good enough precision for asteroseismology (Aerts et al., 1999; Mathias et al., 2001; De Cat & Aerts, 2002; De Cat et al., 2007). Despite these high demands, the monitoring efforts are worthwhile, as SPB stars have the potential to provide the much needed calibration of the stellar structure and evolution theory for massive stars with convective cores and radiative envelopes. Indeed, the well-known asteroseismic scaling relations used for the low-mass stars based upon their solar-like oscillations cannot be extrapolated to stars with a convective core. 
This leaves poorly calibrated physical properties of their convective core boundary layers, such as its thermal structure or size (often imposed via a free parameter). Gravity-mode asteroseismology of main-sequence stars saw its birth thanks to the five-month light curves assembled by the CoRoT space telescope (Degroote et al., 2010; Neiner et al., 2012) and underwent a major boost from the four-year datasets from the _Kepler_ mission (e.g. Aerts, 2021, for a review). It has meanwhile been applied to numerous cases, both in SPB stars and less massive \(\gamma\) Doradus stars to probe a variety of physical processes such as near-core rotation rates (e.g. Van Reeth et al., 2016; Li et al., 2020; Takata et al., 2020), chemical mixing in the radiative envelope (e.g. Mombarg et al., 2020, 2022), magnetic fields (e.g. Buysschaert et al., 2018; Prat et al., 2019; Lecoanet et al., 2022), opacities (e.g. Szewczuk & Daszynska-Daszkiewicz, 2018; Walczak et al., 2019), and convective boundary mixing (CBM) whether looking solely at chemical element transport (e.g. Moravveji et al., 2016) or also altering the thermal structure in this CBM region (e.g. Pedersen et al., 2021; Michielsen et al., 2021). These modelling efforts typically involve a parameter study in a multidimensional space, which is generally computationally intense and time-consuming. In order to be able to perform asteroseismic modelling of a sample of SPB stars covering the entire core-hydrogen burning stage and rotational frequencies from almost zero up to the critical rate within a reasonable computation time, Pedersen et al. (2021) developed a statistical approach to get an initial estimate for the parameters and internal properties of 26 sample SPB stars. Our work concerns a novel methodological framework, which we applied to one of these 26 sample stars, namely the double-lined spectroscopic binary SPB KIC 4930889. In order to achieve this, we relied on the most recent spectroscopic analyses and frequency determinations done since the study by Pedersen et al. (2021). The emphasis of this work is on the CBM processes including the thermal structure and the near-core rotation rate of this binary g-mode pulsator. We first provide an overview of its observed properties in the next section. ## 2 Gravito-inertial pulsations in the double-lined spectroscopic binary KIC 4930889 Papics et al. (2017) analysed the _Kepler_ light curve of the system and identified a g-mode period series consisting of 20 pulsations with mode identification \((\ell,m)=(1,1)\) and consecutive radial orders, where a positive \(m\)-value denotes prograde modes. Additionally they gathered high-resolution spectra of the target using the HERMES spectrograph (Raskin et al., 2011). Their spectral synthesis of the disentangled spectra of the system resulted in the parameters listed in Table 1. The analysis places both components in the SPB instability strip, which raises the question as to which star the prograde dipole mode pattern belongs. Asteroseismic modelling, assuming that the primary is the pulsator and based on a statistical approximation for both stellar model properties and mode frequencies, was performed by Pedersen et al. (2021). The parameters of their best stellar models are listed in Table 2. While such an approximative statistical approach was chosen to keep simultaneous treatment of 26 pulsators within reasonable computation time, it can never be as detailed as an approach tuned to a particular star, as offered here. 
Moreover, the results listed in Table 2 were based on the frequency list and spectroscopic parameters derived by Papics et al. (2017). Johnston et al. (2019) revisited and renormalised the spectra obtained by Papics et al. (2017), deriving a new spectroscopic solution listed in Table 3. They found their solution to be in better agreement with the evolutionary expectations of their isochrone-cloud modelling. We rely on these spectroscopic parameters to guide the modelling.

Table 1: Parameters obtained from the spectroscopic analysis by Papics et al. (2017).

| Parameter | KIC 4930889 A | KIC 4930889 B |
| --- | --- | --- |
| \(T_{\rm eff}\) [K] | 15100 \(\pm\) 100 | 12070 \(\pm\) 200 |
| \(\log g\) [dex] | 3.95 \(\pm\) 0.1 | 3.85 \(\pm\) 0.1 |
| [M/H] | \(-\)0.08 \(\pm\) 0.1 | \(-\)0.09 \(\pm\) 0.1 |
| \(v\sin i\) [km s\(^{-1}\)] | 116 \(\pm\) 6 | 85 \(\pm\) 5 |
| \(\xi\) [km s\(^{-1}\)] | 1.85 \(\pm\) 0.8 | 2 \(\pm\) 1 |
| Light factor | 0.71 \(\pm\) 0.01 | 0.29 \(\pm\) 0.01 |
| Spectral type | B5 IV-V | B8 IV-V |
| Orbital period [d] | 18.296 \(\pm\) 0.002 | |
| \(q\) [M\(_{2}\)/M\(_{1}\)] | 0.77 \(\pm\) 0.09 | |
| \(e\) | 0.32 \(\pm\) 0.02 | |
| \(\omega\) [\(^{\circ}\)] | 352.7 \(\pm\) 4.9 | |
| \(a\sin i_{\rm orb}\) [R\(_{\odot}\)] | 23.9 \(\pm\) 0.5 | 31.1 \(\pm\) 0.5 |

Table 2: Stellar parameters of the best asteroseismic models of KIC 4930889 derived by Pedersen et al. (2021).

| Parameter | \(\psi_{1}\) | \(\psi_{2}\) | \(\psi_{6}\) | Average (\(\psi_{1}\), \(\psi_{5}\)) |
| --- | --- | --- | --- | --- |
| \(M_{\rm ini}\) [M\(_{\odot}\)] | 4.375 | 4.0 | 4.2 | 4.06 \(\pm\) 0.31 |
| \(Z_{\rm ini}\) | 0.0092 | 0.01 | 0.014 | 0.00924 \(\pm\) 0.00002 |
| \(f_{\rm CBM}\) | 0.0128 | 0.036 | 0.03 | 0.012 \(\pm\) 0.001 |
| \(\log(D_{\rm conv})\) | 2.736 | 4.6 | 5 | 3.3 \(\pm\) 0.5 |
| \(X_{\rm c}/X_{\rm ini}\) | 0.37 | 0.50 | 0.55 | 0.362 \(\pm\) 0.0007 |
| \(\Omega_{\rm rot}\) [d\(^{-1}\)] | | | | 0.740 \(\pm\) 0.008 |

To compute the stellar luminosity, we used \[\log\frac{L}{L_{\odot}}=-0.4(M_{\rm G}+BC_{\rm G}-M_{\rm bol,\odot}) \tag{1}\] with \[M_{\rm G}=m_{\rm G}-5\log\frac{d}{10\,{\rm pc}}-R_{\rm G}E(B-V). \tag{2}\] In this expression, \(m_{\rm G}\) represents the apparent magnitude from the _Gaia_ \(G\) band, and \(d\) is the _Gaia_ eDR3 distance from Bailer-Jones et al. (2021). The \(E(B-V)\) reddening value was obtained with the 3D reddening map (Bayestar19) from Green et al. (2019), and the reddening vector \(R_{\rm G}\) equals 3.002 for the \(G\) band (Pedersen et al., 2020). The bolometric correction \(BC_{\rm G}\) was calculated adopting the prescription from Pedersen et al. (2020) (model 3, LTE+non-LTE), and \(M_{\rm bol,\odot}=4.74\). However, the system is a close binary, yet it was treated as a single object in _Gaia_ DR3. To derive the luminosity for both components separately, we therefore had to adjust their apparent magnitudes based on their respective light factors (here denoted as \(f_{i}\)). Given that the apparent magnitude of the system depends on the total observed flux divided by the zero-point (for simplicity denoted as \(F\)), \[m_{\rm G,system}=-2.5\log(F), \tag{3}\]
we can calculate the apparent magnitude of the individual components by adjusting the total apparent magnitude based on their respective light factors \[m_{\rm G,component}=-2.5\log(f_{i}\cdot F)=-2.5\log(F)-2.5\log(f_{i}) \tag{4}\] \[=m_{\rm G,system}-2.5\log(f_{i}). \tag{5}\] To derive the light factors for the spectroscopic solution from Johnston et al. (2019), we followed Eq. 3 from Tkachenko (2015). We calculated the ratio of continuum intensities for atmospheric models with parameters given in Table 3, weighed by the _Gaia_ eDR3 \(G\) passband transmissivity, and used the radii ratio of \(\mathcal{R}=R_{\rm secondary}/R_{\rm primary}=0.76\) from Johnston et al. (2019). This led us to light factors of \(0.67\pm 0.01\) and \(0.33\pm 0.01\) for the primary and secondary, respectively. Using these corrections to the apparent magnitude, we derived a luminosity \(\log\frac{L}{L_{\odot}}=2.61\pm 0.04\) for the primary, and \(\log\frac{L}{L_{\odot}}=2.23\pm 0.08\) for the secondary.

Table 3: Updated parameters obtained from the spectroscopic analysis by Johnston et al. (2019).

| Parameter | KIC 4930889 A | KIC 4930889 B |
| --- | --- | --- |
| \(T_{\rm eff}\) [K] | 14020 \(\pm\) 280 | 12820 \(\pm\) 900 |
| \(\log g\) [dex] | 3.55 \(\pm\) 0.24 | 4.38 \(\pm\) 0.10 |

Both components of the binary fall within the SPB instability strip, and they both contribute a significant amount to the total observed flux of the system. To aid the determination of the pulsator, we divided the _Kepler_ light curve into 20 segments according to orbital phase, using the methodology from Van Reeth et al. (2023), in search of pulsation amplitude and phase modulations as a function of the binary orbital phase. In close binaries, these can be caused by either tidal perturbations (e.g. Reyniers & Smeyers 2003b,a; Samadi Ghadim et al. 2018; Bowman et al. 2019; Steindl et al. 2021) or tidal tilting (e.g. Springer & Shaviv 2013; Handler et al. 2020; Kurtz et al. 2020; Fuller et al. 2020). In intermediate to wide binaries, these can be caused by the light travel time effect (e.g. Shibahashi & Kurtz 2012; Murphy et al. 2014). Only one dominant mode turned out to be part of a multiplet separated by the orbital frequency in the Fourier transform of the star's light curve. This mode is f\({}_{43}\) from Table B.1 (\(1.1949135\pm 0.0000019\) d\({}^{-1}\)), and the multiplet consists of only two modes. Moreover, none of the detected pulsation phase and amplitude modulations are consistent with each other, except when the modulation is zero within the uncertainty. This is contrary to what is expected for frequency modulations caused by binarity (Shibahashi & Kurtz 2012). The tidal perturbation of pulsations can affect individual modes more strongly than others, depending on their geometry. However, for g-modes that form a period-spacing pattern (i.e. have the same geometry) we expect the orbital multiplet structure to be very similar for all modes, and most easily detectable for the dominant g-modes (Van Reeth et al. 2023). This was also not the case. From this we conclude that neither tidal perturbations, tidal tilting, nor the light travel time effect can be detected for this target. Additionally, we attempted to determine the pulsator via line profile variations in the 26 spectra observed by Papics et al. (2017).
However, due to the relevant lines being either very weak or broad and shallow, the S/N of the spectra proved insufficient to draw any conclusions from this approach. We therefore make no assumption regarding which component hosts the pulsations, in contrast to the previous study by Pedersen et al. (2021) who assumed the primary to be the pulsator. We subsequently show that this is the least likely of the two scenarios. The analysis of the _Kepler_ light curve and the subsequent frequency extraction was revisited by Van Beeck et al. (2021). We used the frequency list obtained by their'strategy 3' for the prewhitening procedure, as this method explains the highest fraction of variance for this light curve. This full list of frequencies is provided in Table B.1. The mode period pattern that we identified from this frequency list consists of 22 prograde dipole modes of consecutive radial order, which are listed in Table B.2 and shown in Fig. 1. This provides us with the asteroseismic input for our modelling, \(\mathbf{Y}^{\mathrm{obs}}\) composed of the individual mode frequencies \(\mathbf{Y}^{\mathrm{obs}}_{i}\) with \(i=1,\ldots,22\). Two additional modes occur in our pattern, while they were not present in the one based on the 20 modes found by Papics et al. (2017). These are the two modes with the longest periods in Fig. 1. We additionally found a high amount of modes at lower frequencies that form two similar period series, listed in Table B.3 and shown in Fig. 2. They have an upward tilt, typical for retrograde modes, and were also detected by Papics et al. (2017). We also found one isolated peak in the amplitude spectrum at a period of 18.297\(\pm\)0.014 d (f\({}_{165}\) in Table B.1 with frequency 0.05465\(\pm\)0.00004 d\({}^{-1}\)), which is in perfect agreement with the orbital period found by Papics et al. (2017). We searched the full list of detected frequencies in Table B.1 for frequencies that correspond to multiples of the orbital frequency within a \(2\sigma\) uncertainty interval. Frequencies f\({}_{160}\) (\(0.10934\pm 0.00005\)d\({}^{-1}\)), f\({}_{155}\) (\(0.16388\pm 0.00006\)d\({}^{-1}\)), f\({}_{150}\) (\(0.21865\pm 0.00008\)d\({}^{-1}\)), and f\({}_{144}\) (\(0.27325\pm 0.00009\)d\({}^{-1}\)) coincide with two, three, four, and five times the orbital frequency, respectively. f\({}_{160}\) and f\({}_{144}\) correspond to f\({}_{11}\) and f\({}_{4}\) from the first additional pattern in Table B.3, and f\({}_{150}\) corresponds to f\({}_{6}\) from the second additional pattern Table B.3. Frequencies f\({}_{115}\) (\(0.81955\pm 0.00013\)d\({}^{-1}\)) and f\({}_{70}\) (\(1.09295\pm 0.00005\)d\({}^{-1}\)) coincide with 15 and 20 times the orbital frequency, respectively, with f\({}_{70}\) corresponding to f\({}_{13}\) from the main dipole mode period series in Table B.2. The latter of these two may not be a tidally excited oscillation, since coinciding with a relatively high multiple of the orbital frequency could be coincidental, and since it falls in line with the dipole mode period series. Since some of the signals in the secondary patterns are low multiples of the orbital frequency, those signals might be caused by proximity effects instead of actual pulsations, and those patterns as a whole are in any case likely influenced by the binary orbit. Additionally, we could not unambiguously determine the degree of the involved modes in these patterns from our single-star approach to the asteroseismology. 
We therefore do not include them in our modelling at this stage of the work. These two extra patterns offer potential for future more in-depth modelling based on close binary evolution models. Inclusion of these patterns requires developing a dedicated method to include tidal interactions, which is beyond the scope of the current study. Here, we restrict ourselves to asteroseismology based on single-star evolution models, as discussed in Section 4.

Figure 1: Prograde dipole g-mode pattern of KIC 4930889. The top panel shows the amplitude spectrum in grey and the frequencies extracted by Van Beeck et al. (2021) in blue. The frequencies selected to be part of the prograde dipole mode pattern are indicated by dashed red lines. The bottom panel shows the period-spacing pattern (\(\Delta P_{n}\equiv P_{n+1}-P_{n}\)) of the selected prograde dipole modes.

## 3 Orbital modelling of proximity effects

Figure 3 shows the phase-folded light curve after prewhitening for all detected significant frequencies apart from the multiples of the orbital frequency. The residual intrinsic variability in the light curve that is left after the stopping criterion employed by Van Beeck et al. (2021) is dominant over the one caused by the orbital motion. We employ PHOEBE version 2.4.11 (Conroy et al., 2021) to construct models for the orbital harmonics. We fix the mass ratio, eccentricity, argument of periapsis, and projected semi-major axis to the values listed in Table 1. Furthermore, we fix the orbital period to 18.297 d, since this is the orbital period retrieved from the light curve itself, it falls within the uncertainty of the value in Table 1, and the phase-folded light curve has less scatter than if 18.296 d from Table 1 is used in the phase fold. The effective temperatures are fixed to the values from Table 3 since they only have a minor impact on the simulated light curve compared to the other parameters. The surface gravities of the stars are left as free parameters, with the values from Table 3 as initial guesses. The inclination is a free parameter as well, as no prior information is available on its value. The Nelder-Mead algorithm (Nelder and Mead, 1965) is employed to optimise this initial setup. Afterwards we compute a small parameter study around the retrieved solution, of which the projected goodness of fit can be seen in Fig. 4. As discussed before, the leftover intrinsic variability in the light curve is dominant; we therefore employ a mask during this parameter study to only model the bump in the light curve around orbital phase 0.4, since this is the clearest signal that is present in the harmonics. As can be seen from Fig. 4, the best fitting surface gravity of the primary star deviates from the value of \(3.55\pm 0.24\) from Johnston et al. (2019). Their value of \(4.38\pm 0.10\) for the surface gravity of the secondary agrees very well with the best fitting values we retrieve in Fig. 4. The distribution for the \(\chi^{2}\) values of the inclination, as shown in Fig. 4, is however much flatter than those of the surface gravities. Figure 5 demonstrates that inclination is indeed not very well constrained. The figure shows models with inclination angles of \(60^{\circ}\) and \(74^{\circ}\), which both reproduce the modelled signal comparably well.
The absence of a clear minimum in the \(\chi^{2}\) distribution of the models, combined with the wide range of inclinations that produce visually similarly good models, leaves us unable to confidently use this as a constraint on our asteroseismic modelling.

Figure 3: Phase-folded residual light curve. Prewhitened for all frequencies listed in Table B.1, except those corresponding to the first five orbital harmonics. The signal from those first five orbital harmonics is shown in red.

Figure 2: Same as Fig. 1, but for the possible additional series of the low frequency peaks.

## 4 Computation of theoretical mode frequencies

### Stellar equilibrium models

Following a setup similar to that of Michielsen et al. (2021), we compute two grids of single-star models as input for the pulsation computations. The input physics for these two grids is the same, apart from the adopted temperature gradient in the core CBM region. The first, called the radiative grid, adopts the radiative temperature gradient in that transition zone. The other is termed the Péclet grid and adopts a temperature gradient based on the Péclet number in that transition zone, following the same prescription as in Eq. (5) of Michielsen et al. (2021). This prescription includes a convective penetration zone extending the convective core, which entails that (at least a part of) the CBM region is fully adiabatic. The mixing coefficient in the CBM region is governed by two parameters, \(\alpha_{\text{CBM}}\) and \(f_{\text{CBM}}\), which dictate the step-like and exponentially decaying parts of the region, respectively. The diffusive mixing in the radiative envelope is implemented to increase going further outwards due to internal gravity waves as deduced by Rogers and McElwaine (2017), following Pedersen et al. (2018). The level of this mixing in the radiative envelope at its inner boundary with the CBM region is set by the parameter \(D_{\text{env}}\). The parameter ranges for the two grids with different temperature gradients are identical and listed in Table 4. We note that the upper bound of the central hydrogen fraction is the initial fraction at the zero-age main sequence, which can vary depending on the initial metallicity of that model. Figure 6 illustrates the stellar structure of a model with the maximum amount of mixing included in our grid, and compares it with the structure of one of the models with considerably less mixing. We can clearly see that strong mode trapping occurs in the CBM region when the amount of mixing is on the lower end. An increased amount of mixing in the envelope causes the chemical gradient to be less steep, entailing a much less pronounced peak of the Brunt–Väisälä frequency, and a greatly reduced (or even absent) mode trapping. The two grids of stellar evolution models are computed using the stellar evolution code MESA (Paxton et al., 2011, 2013, 2015, 2018, 2019) version r15140. The models employ an Eddington grey atmosphere as atmospheric boundary condition and make use of the OP opacity tables (Seaton, 2005). They contain the standard chemical mixture of OB stars in the solar neighbourhood deduced by Nieva & Przybilla (2012) and Przybilla et al. (2013). We determine the initial helium fraction by adopting an enrichment law \(Y_{\rm ini}=Y_{p}+(\Delta Y/\Delta Z)Z_{\rm ini}\). We set the primordial helium abundance \(Y_{p}=0.2465\), as determined by Aver et al. (2013). Since there is currently no consensus on the value of \(\frac{\Delta Y}{\Delta Z}\) (e.g.
Verma et al., 2019, and references therein), we require that the galactic enrichment ratio, \(\Delta Y/\Delta Z\), is able to reproduce the mass fractions of the adopted chemical mixture (\(X\)=0.71, \(Y\)=0.276, \(Z\)=0.014) derived by Nieva & Przybilla (2012). This leads us to adopt \(\Delta Y/\Delta Z\)=2.1. After \(Y_{\rm ini}\) is determined according to this enrichment law, \(X_{\rm ini}\) is set following \(X_{\rm ini}=1-Y_{\rm ini}-Z_{\rm ini}\). We adopt the mixing length theory as developed by Cox & Giuli (1968) with a mixing length parameter \(\alpha_{\rm ml}=2.0\), and use the Ledoux criterion for convection without allowing for semiconvection. This is warranted since this form of slow mixing is absent in the presence of CBM (e.g. Kaiser et al., 2020), which is included in the vast majority of our models. The exact location where the transition from core to near-core mixing is made is determined by the \(f_{0}\) parameter in MESA. We fix \(f_{0}=0.005\), except for setting \(f_{0}=0\) in the models where both \(\alpha_{\rm CBM}\) and \(f_{\rm CBM}\) are equal to zero, as there is no CBM region in this case. A link to the detailed MESA setup is provided in Appendix A.

### Pulsation computations

The pulsation mode properties of the MESA equilibrium models are computed employing the stellar oscillation code GYRE (Townsend & Teitler, 2013; Townsend et al., 2018), version 6.0.1. Since non-adiabatic effects mainly become important in the outer stellar envelope, the adiabatic approximation is sufficient for our modelling work due to the mode inertias of the g modes being dominant near the stellar core.

Figure 4: \(\chi^{2}\) values of the PHOEBE models. The values are projected along the log(g) of the primary (left), the log(g) of the secondary (center), and along the inclination of the binary (right).

Figure 5: Phase-folded PHOEBE model for the five orbital harmonics from Van Beeck et al. (2021). The dots in blue are included in the mask and modelled in the parameter study, whereas the dots in grey are excluded by the mask.

For computational reasons and given that it does not affect the mode frequencies at the level of measurement errors, we only perform non-adiabatic computations for some of our best models after the forward modelling is finished, in order to evaluate their mode excitation. We compute the dipole g modes for all our equilibrium models for an initial guess of the rotation frequency, assuming rigid rotation and relying on the traditional approximation of rotation (TAR; e.g. Eckart, 1960; Bildsten et al., 1996; Lee & Saio, 1997), following its implementation in GYRE (as described in Townsend, 2020, sec. 4). The stellar rotation frequency required to optimally reproduce the observed stellar pulsations differs for the varying equilibrium models. We therefore start from the same initial guess for each equilibrium model and rescale the g-mode frequencies for each model separately, following the TAR and assuming rigid rotation, to reproduce the observed pulsations as closely as possible. This optimisation is performed using the Levenberg-Marquardt method implemented in LMFIT (Newville et al., 2020). To reduce the chances of the optimisation method returning a local minimum, we start the optimisation procedure from two separate initial values for the rotation. The first is the initial guess, \(\omega_{\rm initial,\;1}=\omega_{\rm guess}\), used to calculate the GYRE model.
The second initial value is taken by adjusting the first one by twice the difference between the initial value and its solution, so \(\omega_{\rm initial,\;2}=\omega_{\rm guess}-2\cdot(\omega_{\rm guess}-\omega_{\rm optimised,\;1})=2\cdot\omega_{\rm optimised,\;1}-\omega_{\rm guess}\). This way the global minimum of the initial value problem is approached both from a higher and a lower initial value. In the case where these two solutions do not converge to the same value, we take the better of the two returned solutions, since this indicates that the other one corresponds to a local minimum. Figure 7 illustrates the rescaling of the period-spacing pattern due to a change in the rotation rate. It also shows the relative differences between the periods of the rescaled modes and the periods obtained by repeating the GYRE computation using that same optimised rotation rate. We find that the rescaled mode periods agree well with the periods computed by GYRE for the new rotation rate. The differences are of order \(10^{-3}\%\) in the asymptotic mode frequency regime where the observed pulsations occur, and even the largest differences at low radial orders are still relatively small (\(<\)0.05%). Rescaling the g-mode frequencies to the optimised rotation frequency and selecting a set of the theoretical frequencies to match the observations yields for each equilibrium model a list of theoretically predicted dipole mode frequencies, \(\mathbf{Y}^{\rm Theo}\), composed of \(\rm Y_{i}^{\rm Theo}\), where \(i\) stands for the radial order. The GYRE inlist to compute the initial frequency lists is provided through the link in Appendix A.

## 5 Modelling approach

We utilise the same asteroseismic modelling procedure as Michielsen et al. (2021). A brief overview is provided here for convenience without going too much into the details.

### General mathematical framework

We employ the Mahalanobis distance as a merit function for the maximum likelihood estimation in the asteroseismic modelling (see Aerts et al., 2018, for its application to asteroseismic modelling), \[\mathrm{MD}_{j}=\left(\mathbf{Y}_{j}^{\rm theo}-\mathbf{Y}^{\rm obs}\right)^{T}\left(V+\Sigma\right)^{-1}\left(\mathbf{Y}_{j}^{\rm theo}-\mathbf{Y}^{\rm obs}\right), \tag{6}\]

\begin{table} \begin{tabular}{l l l l} \hline \hline Parameter & lower boundary & upper boundary & step size \\ \hline \(M_{\rm ini}\) [\(\,\mathrm{M}_{\odot}\)] & 3.0 & 4.5 & 0.1 \\ \(Z_{\rm ini}\) & 0.008 & 0.024 & 0.004 \\ \(\alpha_{\rm CBM}\) & 0 & 0.3 & 0.05 \\ \(f_{\rm CBM}\) & 0 & 0.03 & 0.005 \\ \(\log(D_{\rm env})\) & 0 & 4 & 1 \\ \(X_{\rm c}\) & 0.1 & \(X_{\rm ini}\) & 0.01 \\ \hline \end{tabular} \end{table} Table 4: Parameter ranges of each of the two grids of equilibrium models used for the asteroseismic modelling, containing a total of 1191680 models per grid.

Figure 6: Radial profiles of a \(4\,\mathrm{M}_{\odot}\) star from the Péclet grid with a central hydrogen content \(X_{\rm c}=0.5\). The top panel shows the temperature gradients and the mean molecular weight per gas particle (\(\mu\)). The middle panel shows the Brunt-Väisälä frequency (N), as well as the shape of the mixing profiles, divided into convective core (grey), near-core mixing (blue), and diffusive mixing in the outer radiative envelope (green). The bottom panel shows the mode inertia of two g modes with different radial orders.
All dashed lines and transparent colours correspond to a model with the maximum amount of mixing included in our grid (\(\alpha_{\rm CBM}=0.3\), \(f_{\rm CBM}=0.03\), \(\log(D_{\rm env})=4\)), whereas the solid lines and darker colours correspond to a model with a considerably lower amount of mixing (\(\alpha_{\rm CBM}=0.1\), \(f_{\rm CBM}=0.01\), \(\log(D_{\rm env})=1\)).

with \(\mathbf{Y}^{\text{obs}}\) the vector of observations and \(\mathbf{Y}^{\text{theo}}_{j}\) the corresponding vector of predicted values in gridpoint \(j\). \(\Sigma\) is the variance matrix due to the measurement errors of \(\mathbf{Y}^{\text{obs}}\) and \(V\) is the variance-covariance matrix of \(\mathbf{Y}^{\text{theo}}\) capturing the theoretical uncertainties in the mode frequency predictions caused by the limited knowledge of the physical ingredients in the input physics of the equilibrium models, also taking into account the correlations among the free parameters used to describe these ingredients. The modelling involves both statistical models that are non-nested, comparing models within one grid of equilibrium models, and statistical models that are nested, comparing equilibrium models across different grids where none, one, or both of the CBM parameters are fixed at zero. This allows for a comparison between different numbers of free parameters, including an evaluation of whether the increase in goodness of fit outweighs the entailed punishment by the selection criterion for having an increased number of free parameters. We use the Akaike Information Criterion corrected for small sample size (AICc, Claeskens & Hjort 2008, Chapter 2) since it rewards fit quality but penalises complexity. It is defined as \[\text{AICc}=-2\ln\mathcal{L}+\frac{2kN}{N-k-1}, \tag{7}\] with \(N\) and \(k\) the number of observables and free parameters, respectively, and \(\mathcal{L}\) the likelihood of a stellar model. In our framework for this star, \(N=22\) when fitting periods, or 21 when fitting period spacings (that is, the differences in period between two pulsation modes of consecutive radial order, \(\Delta\text{P}_{\text{n}}\equiv\text{P}_{\text{n+1}}-\text{P}_{\text{n}}\)). The number of free parameters \(k\) is 4, 5, or 6 depending on whether two, one, or zero of the CBM parameters (\(\alpha_{\text{CBM}}\), \(f_{\text{CBM}}\)) are fixed in the nested grids. In case \(k\)=6, the list of parameters consists of \((M_{\text{ini}},Z_{\text{ini}},\alpha_{\text{CBM}},f_{\text{CBM}},\log(D_{\text{env}}),X_{\text{c}})\). Rewriting the AICc for the likelihood function of the Mahalanobis Distance yields \[\text{AICc}=\ln(|V+\Sigma|)+k\ln(2\pi)+\text{MD}+\frac{2kN}{N-k-1}. \tag{8}\] The performance of two nested models can be compared through their difference in AICc values, \(\Delta\text{AICc}=\text{AICc}_{\text{A}}-\text{AICc}_{\text{B}}\). Model B is preferred over model A if \(\Delta\text{AICc}>2\), with a strong (very strong) preference if \(\Delta\text{AICc}>6\) (10).
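As an illustration of how the merit function and selection criterion above can be evaluated in practice, the short sketch below implements Eq. (6) and Eq. (8) with numpy, exactly as the equations are printed here. All numerical inputs (the observed period spacings, the matrices \(V\) and \(\Sigma\), and the two competing models) are invented placeholders, not values from this study.

```python
import numpy as np


def mahalanobis_distance(y_theo, y_obs, V, Sigma):
    """Eq. (6): MD = (Y_theo - Y_obs)^T (V + Sigma)^-1 (Y_theo - Y_obs)."""
    diff = y_theo - y_obs
    return diff @ np.linalg.solve(V + Sigma, diff)


def aicc(md, V, Sigma, k, N):
    """Eq. (8) as printed above, for the likelihood of the Mahalanobis distance."""
    _, logdet = np.linalg.slogdet(V + Sigma)
    return logdet + k * np.log(2.0 * np.pi) + md + 2.0 * k * N / (N - k - 1.0)


rng = np.random.default_rng(0)
N = 21                                             # 21 period spacings
y_obs = np.sort(rng.uniform(5000.0, 6000.0, N))    # fake observed dP values [s]
Sigma = np.diag(np.full(N, 1.0))                   # fake observational variances
idx = np.arange(N)
V = 200.0 ** 2 * np.exp(-np.abs(np.subtract.outer(idx, idx)) / 5.0)  # fake theory (co)variance

y_theo_A = y_obs + rng.normal(0.0, 150.0, N)       # best model of grid A (k = 6)
y_theo_B = y_obs + rng.normal(0.0, 180.0, N)       # best model of grid B (k = 4)

aicc_A = aicc(mahalanobis_distance(y_theo_A, y_obs, V, Sigma), V, Sigma, k=6, N=N)
aicc_B = aicc(mahalanobis_distance(y_theo_B, y_obs, V, Sigma), V, Sigma, k=4, N=N)
print("Delta AICc (A - B):", aicc_A - aicc_B)      # > 2 would mean model B is preferred
```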
We determine the uncertainty region of the best solution by employing Bayes' theorem, stating that the probability of a parameter \(\theta^{m}\) occurring in the interval \([\theta^{m}_{a},\theta^{m}_{b}]\) is given by \[P(\theta^{m}_{a}<\theta^{m}<\theta^{m}_{b}\,|\,\mathbf{D})=\frac{\sum_{i}^{q}P(\mathbf{D}|\mathbf{\theta}_{i})P(\mathbf{\theta}_{i})}{\sum_{j}^{Q}P(\mathbf{D}|\mathbf{\theta}_{j})P(\mathbf{\theta}_{j})}=\frac{\sum_{i}^{q}P(\mathbf{D}|\mathbf{\theta}_{i})\prod_{l}^{k}P(\theta_{i}^{l})}{\sum_{j}^{Q}P(\mathbf{D}|\mathbf{\theta}_{j})\prod_{l}^{k}P(\theta_{j}^{l})}. \tag{9}\] Index \(j\) is summed over all \(Q\) equilibrium models in the grid that are consistent within \(3\sigma\) of the spectroscopic log \(g\), \(T_{\text{eff}}\), and stellar luminosity, and that are also consistent with the observed metallicity and constraints from the binarity of the system. These binary constraints are explained in more detail in Section 5.2. Index \(i\) is summed over the \(q\) models with the highest likelihood so that \(P(\theta^{m}_{a}<\theta^{m}<\theta^{m}_{b}\,|\,\mathbf{D})=0.95\). We consider three approaches to match theoretical mode periods to the observed ones, and analyse the results of the method that performs best for each grid. In the first two we begin matching mode periods starting from the theoretical period that is closest to either the mode with the highest observed amplitude or the highest-frequency mode detected in the observed pattern. The third option is to match each observed mode period to its best matching theoretical counterpart, and adopt the longest sequence of consecutive modes that we get in this way. The rest of the pattern is then built consecutively in radial order starting from this sequence. These three options of pattern construction will henceforth be referred to as highest amplitude, highest frequency, and longest sequence. Apart from just the mode periods, we also consider the period-spacing values as a set of observables to be used in our modelling procedure. The condition numbers of the variance-covariance matrices \(V+\Sigma\) are used to determine the best of these sets of observables. The condition number \(\kappa\) is defined as the ratio of the maximum to minimum eigenvalue, \[\kappa(V+\Sigma)=\frac{|\lambda_{max}(V+\Sigma)|}{|\lambda_{min}(V+\Sigma)|}. \tag{10}\] This gives an indication of how well- or ill-conditioned the matrix is with respect to the inversion to be computed, with lower values being better conditioned.

Figure 7: Rescaling period-spacing patterns to an optimised rotation rate. The top panel shows the period-spacing pattern as calculated by GYRE with the initial guess for the rotation in blue, and the pattern rescaled to the optimised rotation frequency in orange. The inset figures are zoomed in on the region with the observed pulsations, with the modes selected to match the observations circled in red. The bottom panel shows the relative difference between the mode periods calculated by GYRE given the optimised rotation rate, and the mode periods from the rescaled pattern. The grey region denotes the observational uncertainties.

### Isochrone clouds

The methodology from Michielsen et al. (2021) as summarised in Sect. 5.1 considers the system's asteroseismic and spectroscopic data from a single-star perspective. KIC 4930889 is however a double-lined spectroscopic eccentric (\(e=0.32\)) binary.
Hence, we can utilise the information obtained from the binarity of the system to put additional constraints on the models in an attempt to lift some of the degeneracies that are present. We employ isochrone clouds (Johnston et al., 2019), which are, in this application, the collection of isochrones of a given age but for all combinations of \(\alpha_{\rm CBM}\), \(f_{\rm CBM}\), log \(D_{\rm env}\) present in our grid. Constructing an isochrone cloud coupled to a model of a certain grid, we enforce all models in that cloud to have an age that differs by less than one grid step from this model, have the same initial metallicity, and have a mass that is compatible within the error margin of the mass ratio of the system (listed in Table 1). We computed some additional evolutionary tracks for masses above and below the grid range listed in Table 4 to allow all masses that we could expect for the companion from the observed mass ratios to be present in the isochrone clouds. Figure 8 illustrates how the constraints of the isochrone clouds are applied for the case where we assume the secondary star to be the pulsator. It shows the isochrone clouds of the primary star for three different models of the secondary, which are arbitrarily chosen for the purpose of this visual representation. These three models are among the best, that is, having the lowest AICc values, and they would have been included in the error ellipses if the system were modelled from a single-star perspective without any constraints from binarity. The light-grey tracks in the background show all models with the same metallicity and a mass within the observed mass ratio, and only the models that have an age difference smaller than one grid step are shown in colour. Two of the isochrone clouds fall partially within the \(1\sigma\) or \(3\sigma\) errors of the companion star and are thus accepted as viable solutions. The third isochrone cloud falls completely outside of the \(3\sigma\) spectroscopic error region, and is hence not accepted as a solution. Although Fig. 8 only showcases the constraints on log \(g\) and \(T_{\rm eff}\), the stellar luminosity is also used in these isochrone-cloud constraints.

## 6 Modelling results

The condition numbers of the variance-covariance matrices computed via Eq. (10) are of order \(\kappa(V+\Sigma)\sim 10^{3}\) to \(10^{4}\) when considering mode periods, but are significantly smaller, down to \(\kappa(V+\Sigma)\sim 10^{1}\), when considering period spacings. We therefore primarily consider the period spacings as the set of observables to fit, but still list the results from using the periods as observables as well. Although the individual parameters of the best models may differ between these two sets of observables, they are in most cases quite similar if not the same, and always fall within the error ellipse obtained from the other set of observables. Our conclusions are therefore independent of the chosen observable. We model the observed pulsation pattern twice: once with the spectroscopic constraints of the primary, for which the corner plots of the radiative and Peclet grid are shown in Figs. 9 and 10, and once with the spectroscopic constraints of the secondary, with Figs. 11 and 12 showing the corner plots for the radiative and Peclet grid, respectively. The models included in the \(2\sigma\) error ellipse of the MD according to Eq. (9) are shown in colour, while the models in grey scale fall outside of this error ellipse.
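A minimal sketch of the observable-selection step described above is given below: it evaluates the condition number of Eq. (10) for two toy \(V+\Sigma\) matrices, one standing in for the 22 mode periods and one for the 21 period spacings, and keeps the better-conditioned set. The matrices are random placeholders; they are not the ones constructed from the model grids in this work.

```python
import numpy as np


def condition_number(mat):
    """Eq. (10): ratio of the largest to the smallest absolute eigenvalue."""
    eigvals = np.linalg.eigvalsh(mat)     # V + Sigma is symmetric
    return np.abs(eigvals).max() / np.abs(eigvals).min()


rng = np.random.default_rng(2)
A = rng.normal(size=(22, 22))
V_periods = A @ A.T + 1e-4 * np.eye(22)   # toy V + Sigma for the 22 mode periods
B = rng.normal(size=(21, 21))
V_spacings = B @ B.T + 1e-1 * np.eye(21)  # toy V + Sigma for the 21 period spacings

kappa_p = condition_number(V_periods)
kappa_dp = condition_number(V_spacings)
# Keep the observable set whose matrix is better conditioned for the inversion in Eq. (6).
observable = "period spacings" if kappa_dp < kappa_p else "periods"
print(f"kappa(periods) = {kappa_p:.1e}, kappa(dP) = {kappa_dp:.1e} -> fit {observable}")
```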
Additionally, we make a comparison between the AICc values of the best models of the full grid and each partial grid with fewer free parameters, for both prescriptions of the temperature gradient in the CBM region. We hereafter refer to the grids with six free parameters as the radiative and Peclet grid, but specify when talking about grids with fewer free CBM parameters. Among the nested grids with five free parameters, we have \(\alpha_{\rm CBM}=0\) but varying \(f_{\rm CBM}\), henceforth denoted as the exponential radiative or exponential Peclet grid, and \(f_{\rm CBM}=0\) but varying \(\alpha_{\rm CBM}\), henceforth denoted as the step radiative or step Peclet grid. The nested grid with four free parameters, having both \(\alpha_{\rm CBM}\) and \(f_{\rm CBM}\) set to zero, is referred to as the grid without CBM. Comparing all these grids with various numbers of free parameters not only enables us to investigate which temperature gradient is preferred in the CBM region, but also to examine if the increased fit quality outweighs the penalties for higher model complexity. The model parameters and AICc values of these best models are listed in Table 5 and Table 6 when enforcing the constraints on the luminosity and spectroscopic \(T_{\rm eff}\) and log \(g\) of the primary and secondary, respectively. We cannot distinguish the preferred temperature gradient whilst modelling the primary star, since there is no preference between the exponential radiative, exponential Peclet, or the grid without any CBM, given that \(\Delta{\rm AICc}<2\) between their best models. We do however find a preference of these three grids over the grids with a step-like mixing profile, or the ones with a combined step and exponential mixing. The period spacings of the best models from these indistinguishable grids are shown in Fig. 12(a). Modelling the secondary star likewise does not allow us to distinguish which temperature gradient is preferred. It is in this case not possible to differentiate between both grids with an exponentially decaying mixing in the CBM region, the grid without any CBM, and the radiative grid with six free parameters. The period spacings of the best models from these indistinguishable grids are shown in Fig. 12(b). We clearly see from both Fig. 12(a) and Fig. 12(b) that the variance of the theoretical predictions is much larger than the uncertainties on the observations. Our solutions are therefore dominated by the theoretical uncertainties, rather than the observational ones. The larger variance of the grid without CBM is one of the reasons why there is no selection capacity between this grid and those with exponential CBM. A reduction of the variance of the theoretical predictions would lead to a much stronger preference for the presence of CBM over the absence of CBM according to the AICc. This would in particular be the case should we ignore the (co)variance due to limits in the theoretical predictions (\(V\)=0), that is, when reducing the merit function from a Mahalanobis distance to a \(\chi^{2}\).

Figure 8: Isochrone clouds of the primary star matching three different models of the secondary. The black lines show the \(1\sigma\) and \(3\sigma\) spectroscopic \(T_{\rm eff}\) and log \(g\) error boxes of the primary star.

Figure 9: Corner plot for the radiative grid. Made using period spacings in a Mahalanobis distance merit function and spectroscopic and luminosity constraints from the primary star. The 50% best models are shown, colour-coded according to the log of their merit function value (at right). The models in colour fall within the 2\(\sigma\) error ellipse of the MD constructed using Eq. (9), whilst the models in grey fall outside of this error ellipse. The figures on the diagonal show binned parameter distributions of the models in the error ellipse, and the panel at the top right shows a Hertzsprung–Russell diagram with the 1 and 3\(\sigma\) \(T_{\rm eff}\) and log L error boxes.

Figure 10: Corner plot (as in Fig. 9) for the Péclet grid. Made using period spacings in a Mahalanobis distance merit function and spectroscopic and luminosity constraints from the primary star.
Figure 11: Corner plot (as in Fig. 9) for the radiative grid. Made using period spacings in a Mahalanobis distance merit function and spectroscopic and luminosity constraints from the secondary star.

Figure 12: Corner plot (as in Fig. 9) for the Péclet grid. Made using period spacings in a Mahalanobis distance merit function and spectroscopic and luminosity constraints from the secondary star.

Figure 13: Period-spacing patterns of the observations, and of the best models of the preferred grids that are not distinguishable from one another. These are the models in bold in Tables 5 and 6 that use \(\Delta\)P as observables. The formal errors on the observations are smaller than the symbol sizes. The largest of the observational errors is enlarged ten times and shown for comparison. The vertical bars in the bottom left corner of the top panel show the maximum considered uncertainty for the theoretical predictions approximated by the variance–covariance matrix of that particular grid. The middle and bottom panels show the relative difference in period spacing and period, respectively, between the observation and the model. The narrow grey areas indicate the formal \(1\sigma\) observational uncertainty from Table 2.

\begin{table} \begin{tabular}{l l l l l l l l l l l l l} \hline \hline Obs.
& Grid & \(M_{\text{ini}}\) [ M\({}_{\odot}\)] & \(Z_{\text{ini}}\) & \(\alpha_{\text{CBM}}\) & \(f_{\text{CBM}}\) & log(\(D_{\text{env}}\)) & \(X_{\text{c}}\) & \(\Omega_{\text{rot}}\) [d\({}^{-1}\)] & \(\Omega_{\text{rot}}/\Omega_{\text{crit}}\) & \(M_{\text{cc}}\) [ M\({}_{\odot}\)] & MD & AICc \\ \hline \(\Delta\)**P** & **PΓ©clet** & \(\mathbf{3.6}_{3.9}^{3.9}\) & \(\mathbf{0.016}_{0.008}^{0.016}\) **(...)** & \(\mathbf{0.03}_{0.00}^{0.03}\) & \(\mathbf{1.0}_{0.0}^{4.0}\) & \(\mathbf{0.52}_{0.36}^{0.71}\) & \(\mathbf{0.73}_{0.67}^{0.75}\) & \(\mathbf{0.37}\) & \(\mathbf{0.67}_{0.47}^{0.84}\) & **13 –251.0** \\ & **Radiative** & \(\mathbf{3.2}_{3.0}^{3.9}\) & \(\mathbf{0.012}_{0.008}^{0.016}\) **(...)** & \(\mathbf{0.03}_{0.00}^{0.03}\) & \(\mathbf{1.0}_{0.0}^{4.0}\) & \(\mathbf{0.52}_{0.31}^{0.71}\) & \(\mathbf{0.76}_{0.67}^{0.75}\) & \(\mathbf{0.33}\) & \(\mathbf{0.60}_{0.45}^{0.86}\) & **12 –250.8** \\ & **Radiative** & \(\mathbf{3.0}_{3.0}^{3.9}\) & \(\mathbf{0.008}_{0.008}^{0.016}\) **0.25\({}^{0.3}_{0.0}\)** & \(\mathbf{0.01}_{0.00}^{0.03}\) & \(\mathbf{1.0}_{0.0}^{4.0}\) & \(\mathbf{0.62}_{0.33}^{0.71}\) & \(\mathbf{0.69}_{0.67}^{0.75}\) & \(\mathbf{0.24}\) & \(\mathbf{0.61}_{0.44}^{0.88}\) & **10 –250.1** \\ & **No CBM** & \(\mathbf{3.8}_{3.0}^{3.9}\) & \(\mathbf{0.012}_{0.008}^{0.016}\) **(...)** & **(...)** & \(\mathbf{2.0}_{0.0}^{4.0}\) & \(\mathbf{0.49}_{0.33}^{0.71}\) & \(\mathbf{0.73}_{0.67}^{0.74}\) & \(\mathbf{0.35}\) & \(\mathbf{0.67}_{0.44}^{0.83}\) & **8 –249.4** \\ & Radiative & \(3.0_{3.0}^{3.9}\) & \(0.016_{0.008}^{0.016}\) **(...)** & \(\mathbf{0.2}_{0.0}^{0.3}\) & \(\mathbf{1.0}_{0.0}^{4.0}\) & \(\mathbf{0.51}_{0.36}^{0.71}\) & \(\mathbf{0.69}_{0.67}^{0.75}\) & \(\mathbf{0.32}\) & \(\mathbf{0.53}_{0.44}^{0.86}\) & 8 –248.4 \\ & PΓ©clet & \(3.6_{3.0}^{3.9}\) & \(0.012_{0.008}^{0.016}\) & \(0.3_{0.0}^{0.03}\) & \(\mathbf{(...)}\) & \(2.0_{0.0}^{4.0}\) & \(\mathbf{0.51}_{0.36}^{0.71}\) & \(\mathbf{0.74}_{0.68}^{0.75}\) & \(\mathbf{0.36}\) & \(\mathbf{0.72}_{0.46}^{0.84}\) & 10 –248.4 \\ \hline **Period** & \(\mathbf{3.2}_{3.0}^{3.9}\) & \(\mathbf{0.016}_{0.008}^{0.016}\) **(...)** & \(\mathbf{0.03}_{0.00}^{0.03}\) & \(\mathbf{1.0}_{0.0}^{4.0}\) & \(\mathbf{0.51}_{0.33}^{0.71}\) & \(\mathbf{0.72}_{0.67}^{0.75}\) & \(\mathbf{0.36}\) & \(\mathbf{0.59}_{0.45}^{0.86}\) & **12 –270.7** \\ & PΓ©clet & \(\mathbf{3.5}_{3.0}^{3.9}\) & \(\mathbf{0.016}_{0.008}^{0.016}\) **(...)** & \(\mathbf{(...)}\) & \(\mathbf{0.03}_{0.00}^{0.03}\) & \(\mathbf{1.0}_{0.0}^{4.0}\) & \(\mathbf{0.52}_{0.36}^{0.71}\) & \(\mathbf{0.72}_{0.67}^{0.75}\) & \(\mathbf{0.36}\) & \(\mathbf{0.65}_{0.47}^{0.84}\) & **13 –270.5** \\ & **Radiative** & \(\mathbf{3.0}_{3.0}^{3.9}\) & \(\mathbf{0.008}_{0.008}^{0.016}\) **(...)** & \(\mathbf{0.2}_{0.0}^{0.03}\) & \(\mathbf{0.01}_{0.0}^{0.03}\) & \(\mathbf{1.0}_{0.0}^{4.0}\) & \(\mathbf{0.62}_{0.36}^{0.71}\) & \(\mathbf{0.69}_{0.66}^{0.76}\) & \(\mathbf{0.24}\) & **0.61** & 0 –269.9 \\ & **No CBM** & \(\mathbf{3.7}_{3.0}^{3.9}\) & \(\mathbf{0.012}_{0.008}^{0.016}\) **(...)** & \(\mathbf{(...)}\) & \(\mathbf{2.0}_{0.0}^{4.0}\) & \(\mathbf{0.49}_{0.33}^{0.71}\) & \(\mathbf{0.72}_{0.67}^{0.74}\) & \(\mathbf{0.34}\) & \(\mathbf{0.65}_{0.44}^{0.83}\) & **8 –269.2** \\ & **No CBM** & \(\mathbf{3.0}_{3.0}^{3.9}\) & \(\mathbf{0.016}_{0.008}^{0.016}\) **(...)** & \(\mathbf{(...)}\) & \(\mathbf{1.0}_{0.0}^{4.0}\) & \(\mathbf{0.51}_{0.33}^{0.71}\) & \(\mathbf{0.69}_{0.66}^{0.75}\) & \(\mathbf{0.32}\) & \(\mathbf{0.53}_{0.44}^{0.86}\) & 8 –268.3 \\ & **PΓ©clet** & 
\(3.6_{3.0}^{3.9}\) & \(\mathbf{0.016}_{0.008}^{0.016}\) **(...)** & \(\mathbf{(...)}\) & \(2.0_{0.0}^{4.0}\) & \(\mathbf{0.51}_{0.33}^{0.71}\) & \(\mathbf{0.73}_{0.67}^{0.75}\) & \(\mathbf{0.36}\) & \(\mathbf{0.70}_{0.46}^{0.84}\) & 10 –266.5 \\ \hline \hline \end{tabular} \end{table} Table 6: Same as Table 5, but for KIC 4930889 B.

\begin{table} \begin{tabular}{l l l l l l l l l l l l} \hline \hline Obs. & Grid & \(M_{\text{ini}}\) [ M\({}_{\odot}\)] & \(Z_{\text{ini}}\) & \(\alpha_{\text{CBM}}\) & \(f_{\text{CBM}}\) & log(\(D_{\text{env}}\)) & \(X_{\text{c}}\) & \(\Omega_{\text{rot}}\) [d\({}^{-1}\)] & \(\Omega_{\text{rot}}/\Omega_{\text{crit}}\) & \(M_{\text{cc}}\)[ M\({}_{\odot}\)] & MD & AICc \\ \hline \(\Delta\)**P** & **Péclet** & \(\mathbf{4.1}_{3.3}^{4.5}\) & \(\mathbf{0.016}_{0.008}^{0.016}\) **(...)**

### Near-core rotation rate and convective core mass

From the 2\(\sigma\) MD error ellipses on the best models, we find the near-core rotation rate of the star to be well constrained. The values are listed alongside the best models of each nested grid in Tables 5 and 6. If we take the grids that are indistinguishable according to the AICc, we find all of them to be consistent with the result of our best model grid: \(\Omega_{\rm rot}=0.73^{+0.02}_{-0.05}\) d\({}^{-1}\). This is about 13.4 times the orbital frequency of this eccentric binary, and about 37% of the best model's Roche critical rotation rate. Even when considering the nested grids that are not preferred, we can see that their near-core rotation rates are also consistent and agree very well with one another. Defining the convective core mass as the one determined by the Ledoux criterion without including the CBM region (as is visualised by the grey area in Fig. 6), we constrain it to \(M_{\rm cc}=0.67^{+0.17}_{-0.20}\) M\({}_{\odot}\). This value is consistent across all grids we considered in the modelling, keeping in mind its uncertainties.

### Mode excitation

We are left with multiple different solutions that cannot be distinguished from each other based on modelling the period-spacing values and using the spectroscopic, astrometric, and isochrone-cloud constraints. Therefore we look at which of these models performs best at reproducing the mode excitation of our observed period-spacing pattern. Fig. 14 shows the normalised growth rates, \(\eta\), of the modes (Stellingwerf 1978). These indicate an excited or damped mode for a positive or negative value of \(\eta\), respectively. For the primary, the models from the exponential Peclet and the exponential radiative grid have ten modes excited out of the 22 observed modes in our pulsation pattern. The model from the grid without CBM has sixteen of the observed modes as excited. This higher number of excited modes is an effect of its more evolved nature compared to the other best models, rather than a direct effect of the absence of CBM (e.g. Fig. 1 of Papics et al. 2017, which shows an increasing number of excited modes during the first part of the main-sequence evolution). All of these models show some excited modes at shorter periods that were not observed in our pattern. The model for the secondary star from the exponential Peclet grid shows twelve out of the 22 modes excited. The models from the exponential radiative grid and the grid without CBM show seven excited modes, while the model from the radiative grid shows no excited modes at all.
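To make the bookkeeping in the previous paragraph concrete, the sketch below tallies how many observed modes a given model predicts to be excited, assuming each observed period is matched to the nearest theoretical mode and using the sign convention for \(\eta\) stated above. The matching rule and all numbers are illustrative assumptions, not the exact procedure or output of this work.

```python
import numpy as np


def count_excited(obs_periods, theo_periods, eta):
    """Count observed modes whose nearest theoretical mode has eta > 0 (excited)."""
    excited = 0
    for p_obs in obs_periods:
        idx = np.argmin(np.abs(theo_periods - p_obs))   # nearest theoretical mode
        if eta[idx] > 0.0:
            excited += 1
    return excited


# Placeholder data standing in for one model's non-adiabatic GYRE output.
rng = np.random.default_rng(3)
theo_periods = np.linspace(0.6, 1.6, 60)                        # mode periods [d]
eta = np.where((theo_periods > 0.8) & (theo_periods < 1.2), 0.3, -0.2)
obs_periods = np.sort(rng.uniform(0.7, 1.3, 22))                # the 22 observed periods [d]

n_exc = count_excited(obs_periods, theo_periods, eta)
print(f"{n_exc} of {len(obs_periods)} observed modes predicted to be excited")
```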
Accurately reproducing the excitation of high radial order g modes in B-type pulsators often requires opacity enhancements, which were not considered in this study (e.g. Moravveji 2016; Daszynska-Daszkiewicz et al. 2017; Walczak et al. 2019; Szewczuk et al. 2022). In our results, we also see that the increased number of excited modes in general corresponds to an increased metallicity, and hence elevated opacity of the iron- and nickel-group chemical elements. We therefore confirm that the standard OP opacity tables are insufficient to accurately reproduce the observed mode excitations, rather than using this result to constrain our solutions.

## 7 Conclusions

In this work we investigated the gravito-inertial modes in the double-lined B-type binary KIC 4930889. We explored which of the components hosts the pulsations, what the preferred temperature gradient and mixing profiles are in the CBM region of that star, and constrained its near-core rotation rate. We employed asteroseismic, spectroscopic and astrometric information, and constraints obtained from the binarity through isochrone clouds. The quality of our best asteroseismic solutions is better for the spectroscopic and astrometric constraints of the secondary star, although not statistically significantly better than when using those of the primary star. The difference in luminosity between both stars, with 67% and 33% of the light contribution from the primary and secondary respectively, is not large enough to assign the pulsation signal to one or the other. Furthermore, we do not find a preference for the temperature gradient based on the 22 mode periods or 21 period spacings. However, we are able to constrain the near-core rotation rate of the pulsating component to \(\Omega_{\rm rot}=0.73^{+0.02}_{-0.06}\) d\({}^{-1}\). We also obtain better solutions for models with some type of exponentially decaying CBM than for those with a step-like mixing profile in the CBM region. We find that the model from the exponential Peclet grid performs better at explaining the mode excitation than the models with a radiative temperature gradient. The Peclet model shows twelve out of the 22 observed modes excited, whereas the radiative models have at most seven of the observed modes excited. The larger number of excited modes is due to the higher metallicity of the stellar equilibrium model selected from this grid; that is, the different temperature gradient only indirectly influences the predicted mode excitation. The model that best explains the excited modes is however the one without CBM present that assumes the spectroscopic and astrometric constraints for the primary star. This absence of CBM also only indirectly influences the mode excitation, since this model is more evolved than the others along the first half of its main sequence, entailing a higher number of theoretically predicted excited modes. Comparing our detailed follow-up treatment to the earlier statistical modelling approach of Pedersen et al. (2021) and Pedersen (2022b), we note a few key differences in the modelling setup. Whilst Pedersen et al. (2021) investigated two different prescriptions for the CBM region and four for the envelope mixing, our study considers only one case of envelope mixing but seven different ones for the CBM region. We opted for such an approach since the g modes that we are considering have a much higher probing power in the CBM region than in the stellar envelope, as can be seen from their mode inertia in Fig. 6.
Pedersen (2022b) was able to distinguish between different shapes of the envelope mixing, but found no preference between their considered CBM prescriptions. This is equivalent to a comparison between our exponential radiative and step Peclet grids, where we do find a preference for the exponential grids over the step grids. As far as the retrieved model parameters are concerned, we list both the results from Pedersen (2022b) and from this work (using the spectroscopic and astrometric constraints of the primary and secondary) in Table 7. All parameters are in agreement, given our error estimation, when considering the spectroscopic and astrometric constraints for the primary star. In particular, the stellar rotation rate aligns well. However, our best asteroseismic model was found when considering the constraints for the secondary star. For this case the mass and central hydrogen content are no longer compatible within the projected error ellipses. Our best point estimators do deliver a younger, less massive star with more CBM and less envelope mixing. We note that the uncertainties on the two sets of results differ substantially, where our uncertainties on the parameters encompass the results from Pedersen (2022b), but not vice versa. These different results are influenced slightly by the different spectroscopic constraints and set of prograde dipole modes that are employed, but stem dominantly from the modelling approach, where we used a more detailed treatment of the asteroseismic modelling as compared to the approximative statistical modelling approach of Pedersen et al. (2021); Pedersen (2022b, a), who used statistical approximations for the pulsations rather than detailed GYRE computations. We find their uncertainties obtained from the approximative statistical modelling to be underestimated for this star. Although the error ellipses of our solutions contain less than 3% of our initial models, the projections of the six-dimensional error ellipse in one dimension result in uncertainties on each individual parameter that range over most of the initial model grid. The vast majority of the parameter combinations that are included in these one-dimensional projections are however not part of the actual higher dimensional error ellipse. An indication of this can be seen in the two-dimensional projections of the error ellipse in Figs. 9 to 12. In contrast, the results obtained by Michielsen et al. (2021) for KIC 7760680 yielded much smaller error ellipses, so that the uncertainties remained small even when they were projected on one dimension. This difference is due to the number of modes in the observed prograde dipole mode pattern, which amounted to 36 modes for KIC 7760680 and to 22 modes for KIC 4930889, where a larger number of observed modes entails a better probing power of the stellar interior. With this in mind, constraints from spectroscopic data, _Gaia_ astrometric data, and from the binarity of the system are valuable to complement asteroseismic information. These complementary constraints become all the more beneficial for stars with lower asteroseismic probing power due to fewer observed modes. Similar to Michielsen et al. (2021), and thus regardless of the number of pulsations in our observed period pattern, we find that the uncertainties on the theoretically predicted pulsation patterns are much larger than the uncertainties on the observed patterns.
The uncertainties in our modelling are therefore dominated by the theoretical model uncertainties, rather than the observational ones. Additionally, the theoretical variance is largest in the grid without CBM, causing the lack of selection capacity between these models and the ones with CBM. A reduction of the variances would lead to a stronger preference of model grids with CBM over the ones without it. Hence, future work should prioritise improving stellar evolutionary models by both refining and expanding the physical processes that are included in them. KIC 4930889 is a good target to evaluate tidal effects in close binary evolution models, given our detection of multiples of the orbital frequency in its secondary period spacing patterns.

###### Acknowledgements.

The authors thank Jordan van Beeck for having provided the frequencies he found for KIC 4930889 from his 2021 paper in electronic format, and Sarah Gebruers and Alex Kemp for the insightful scientific discussions. The authors are grateful to the MESA and GYRE developer teams for their efforts and for releasing their software publicly; this study would not have been possible without their codes. The research leading to these results has received funding from the Research Foundation Flanders (FWO) by means of a PhD scholarship to MM under project No. 11F7120N, a postdoctoral fellowship to TVR with grant agreement No. 122B6200N, and to AT through grant agreement No. G089422N, from the KU Leuven Research Council (grant C16/18/005: PARABLES to PI Aerts), as well as from the Belgian federal Science Policy Office (BELSPO) through PRODEX grant PLATO.

\begin{table} \begin{tabular}{l c c c} \hline \hline & Primary & Primary & Secondary \\ Parameter & Pedersen (2022b) & This work & This work \\ \hline \(M_{\rm ini}\) [ M\({}_{\odot}\)] & 4.06\(\pm\)0.31 & 4.1\({}^{+0.4}_{-0.8}\) & 3.6\({}^{+0.3}_{-0.6}\) \\ \(Z_{\rm ini}\) & 0.00924\(\pm\)0.00002 & 0.016\({}^{+0.00}_{-0.008}\) & 0.016\({}^{+0.00}_{-0.008}\) \\ \(f_{\rm CBM}\) & 0.012\(\pm\)0.001 & 0.02 \({}^{+0.01}_{-0.02}\) & 0.03 \({}^{+0.03}_{-0.03}\) \\ \(\log(D_{\rm env})\) & 3.3\(\pm\)0.5 & 2\({}^{+2.2}_{-2.2}\) & 1\({}^{+3.03}_{-1.1}\) \\ \(X_{\rm c}/X_{\rm ini}\) & 0.362\(\pm\)0.0007 & 0.72\({}^{+0.11}_{-0.57}\) & 0.74\({}^{+0.26}_{-0.23}\) \\ \hline & Pedersen (2022a) & This work & This work \\ \hline \(\Omega_{\rm rot}\) (d\({}^{-1}\)) & 0.740\(\pm\)0.008 & 0.75\({}^{+0.01}_{-0.07}\) & 0.73\({}^{+0.02}_{-0.06}\) \\ \hline \end{tabular} \end{table} Table 7: Stellar parameters of KIC 4930889 derived by Pedersen (2022b) and Pedersen (2022a) compared to the parameters from our best model.

Figure 14: Instability parameter \(\eta\). The parameter is shown as a function of mode period for the best models of the primary (top panel) and secondary (bottom panel). The period spacing patterns of these models are shown in Fig. 13; coloured circles indicate excited modes, while empty circles indicate the non-excited ones. The vertical lines show the observed mode periods and their amplitude.
2309.03330
Non-Perturbative Simulations of Quantum Field Theories using Complex Langevin Dynamics
Non-perturbative formulations of field theories are essential to capture intriguing physical phenomena, including confinement in QCD, spontaneous supersymmetry breaking, and dynamical compactification in superstrings. Lattice regularization provides a robust framework to study these non-perturbative features through Euclidean path integrals. Conventionally, path integrals are numerically evaluated using Monte Carlo methods, where the Boltzmann factor is interpreted as a probability weight. However, complex actions in various physical systems render the Boltzmann factor complex, leading to the sign problem. The complex Langevin method overcomes the sign problem and can be used to evaluate complex integrals. This thesis employs the complex Langevin method to investigate various non-perturbative aspects of field-theoretic systems with complex actions. We probe the possibility of spontaneous supersymmetry breaking in the simplest realizations of supersymmetric field theories. These systems generally have complex actions arising from a complex determinant of the fermion operator. We studied various interesting classes of complex potentials, including those exhibiting PT-symmetry. Another exciting aspect explored is the dynamical compactification of extra dimensions in superstring theory. The IKKT matrix model, in the large-N limit, is a conjectured formulation for the 10D type IIB string theory. We employ the complex Langevin method to investigate the Euclidean version of this matrix model, which has an inherent complex Pfaffian, to probe the spontaneous breaking of SO(10) symmetry. The investigations performed in this thesis suggest that the complex Langevin method can successfully simulate non-perturbative aspects of quantum field theories by taming the associated sign problem.
Arpith Kumar
2023-09-06T19:19:19Z
http://arxiv.org/abs/2309.03330v1
# Non-Perturbative Simulations of Quantum Field Theories using Complex Langevin Dynamics
2308.16414
Remarks on flat $S^1$-bundles, $C^\infty$ vs $C^\omega$
We describe low dimensional homology groups of $\mathrm{Diff}^\delta_+S^1$ in terms of Haefliger's classifying space $B\overline{\Gamma}_1$ by applying a theorem of Thurston. Then we consider the question whether some power of the rational Euler class vanishes for real analytic flat $S^1$-bundles. We show that if it occurs, then the homology group of $\mathrm{Diff}_+^{\omega,\delta} S^1$ should contain two kinds of many torsion classes which vanish in $\mathrm{Diff}^\delta_+S^1$. This is an informal note on our discussions about the above question.
Teruaki Kitano, Yoshihiko Mitsumatsu, Shigeyuki Morita
2023-08-31T02:58:17Z
http://arxiv.org/abs/2308.16414v1
# Remarks on flat \(S^{1}\)-bundles, \(C^{\infty}\) vs \(C^{\omega}\) ###### Abstract. We describe low dimensional homology groups of \(\mathrm{Diff}_{+}^{\delta}S^{1}\) in terms of Haefliger's classifying space \(B\overline{\Gamma}_{1}\) by applying a theorem of Thurston. Then we consider the question whether some power of the rational Euler class vanishes for real analytic flat \(S^{1}\)-bundles. We show that if it occurs, then the homology group of \(\mathrm{Diff}_{+}^{\omega,\delta}S^{1}\) should contain two kinds of many torsion classes which vanish in \(\mathrm{Diff}_{+}^{\delta}S^{1}\). This is an informal note on our discussions about the above question (see Remark 1.17). Key words and phrases: flat \(S^{1}\)-bundle, Euler class, Haefliger \(\Gamma\)-structure, Mather-Thurston theory, Borel construction 2010 Mathematics Subject Classification: Primary 55R40, 57R32 ## 1. Results Let \(\mathrm{Diff}_{+}S^{1}\) be the orientation preserving \(C^{\infty}\) diffeomorphism group of the circle with the smooth topology and let \(\mathrm{Diff}_{+}^{\delta}S^{1}\) denote the same group equipped with the _discrete_ topology. Then there is a fibration \[B\overline{\mathrm{Diff}}_{+}S^{1}\to B\mathrm{Diff}_{+}^{\delta}S^{1}\to B\mathrm{Diff}_{+}S^{1}\] where \(B\mathrm{Diff}_{+}^{\delta}S^{1}\) is the classifying space for flat \(S^{1}\)-bundles while \(B\overline{\mathrm{Diff}}_{+}S^{1}\) is the classifying space for flat \(S^{1}\)-products. Since \(\mathrm{Diff}_{+}S^{1}\) is homotopy equivalent to \(\mathrm{SO}(2)\), if we denote by \(\widetilde{\mathrm{Diff}}_{+}S^{1}\) the universal covering group of \(\mathrm{Diff}_{+}S^{1}\), then \(B\widetilde{\mathrm{Diff}}_{+}^{\delta}S^{1}\) can serve as \(B\overline{\mathrm{Diff}}_{+}S^{1}\). As is well known, there is a natural identification \[\widetilde{\mathrm{Diff}}_{+}S^{1}\cong\{f\in\mathrm{Diff}_{+}^{\infty}\mathbb{R};Tf=fT\}\quad\text{where }T(x)=x+1\;(x\in\mathbb{R})\] and a central extension \[0\to\mathbb{Z}\to\widetilde{\mathrm{Diff}}_{+}S^{1}\overset{p}{\to}\mathrm{Diff}_{+}S^{1}\to 1. \tag{1}\] Now let us recall a theorem of Thurston which says that \(B\overline{\mathrm{Diff}}_{+}S^{1}\) and hence \(B\widetilde{\mathrm{Diff}}_{+}S^{1}\) is homologically equivalent to the free loop space \(\wedge B\overline{\Gamma}_{1}\) of Haefliger's classifying space \(B\overline{\Gamma}_{1}\) ([8, 9]). **Theorem 1.1** (Thurston [23]).: _Let \(h:B\widetilde{\mathrm{Diff}}_{+}^{\delta}S^{1}\times S^{1}\to B\overline{\Gamma}_{1}\) be the classifying map for the flat \(S^{1}\)-product over \(B\widetilde{\mathrm{Diff}}_{+}^{\delta}S^{1}\). Then its adjoint mapping_ \[H:B\widetilde{\mathrm{Diff}}_{+}^{\delta}S^{1}\to\wedge B\overline{\Gamma}_{1}\] _induces an isomorphism on homology._ By making use of this theorem, we obtain the following results. **Theorem 1.2**.: (i) _There exist isomorphisms:_ \[H_{2}(B\mathrm{Diff}_{+}^{\delta}S^{1};\mathbb{Z})\cong H_{3}(B\overline{\Gamma}_{1};\mathbb{Z})\oplus\mathbb{Z}\quad(\text{canonical direct sum}),\] \[H_{2}(B\widetilde{\mathrm{Diff}}_{+}^{\delta}S^{1};\mathbb{Z})\cong H_{3}(B\overline{\Gamma}_{1};\mathbb{Z}).\] (ii) _There exist isomorphisms:_ \[H_{3}(B\mathrm{Diff}_{+}^{\delta}S^{1};\mathbb{Z})\cong H_{3}(\Omega B\overline{\Gamma}_{1};\mathbb{Z}),\] \[H_{3}(B\widetilde{\mathrm{Diff}}_{+}^{\delta}S^{1};\mathbb{Z})\cong H_{3}(B\overline{\Gamma}_{1};\mathbb{Z})\oplus H_{3}(\Omega B\overline{\Gamma}_{1};\mathbb{Z}).\] (iii)
_If we denote by_ \(\mu:H_{2}(B\mathrm{Diff}_{+}^{\delta}S^{1};\mathbb{Z})\to H_{3}(B\widetilde{ \mathrm{Diff}}_{+}^{\delta}S^{1};\mathbb{Z})\) _a part of the Gysin exact sequence associated with the central extension (_1_), then it is given as follows._ \[H_{2}(B\mathrm{Diff}_{+}^{\delta}S^{1};\mathbb{Z})\cong H_{3}(B\overline{\Gamma}_{1};\mathbb{Z})\oplus\mathbb{Z} \ni(\sigma,n)\] \[\overset{\mu}{\longmapsto}(\sigma,0)\in H_{3}(B\widetilde{ \mathrm{Diff}}_{+}^{\delta}S^{1};\mathbb{Z})\cong H_{3}(B\overline{\Gamma}_{1} ;\mathbb{Z})\oplus H_{3}(\Omega B\overline{\Gamma}_{1};\mathbb{Z}).\] _The generator \(1\in\mathbb{Z}\) of the canonical summand \(\mathbb{Z}\subset H_{2}(B\mathrm{Diff}_{+}^{\delta}S^{1};\mathbb{Z})\) in \((\mathrm{i})\) is characterized by the two conditions \(\mu(1)=0\) and \(\chi(1)=1\), where \(\chi\in H^{2}(B\mathrm{Diff}_{+}^{\delta}S^{1};\mathbb{Z})\) denotes the Euler class._ Recall here that Herman [11, 12] proved \(H_{1}(B\mathrm{Diff}_{+}^{\delta}S^{1};\mathbb{Z})=H_{1}(B\mathrm{Diff}_{+}^{ \omega,\delta}S^{1};\mathbb{Z})=0\). **Example 1.3**.: We describe an element belonging to the canonical direct summand \(\mathbb{Z}\subset H_{2}(B\mathrm{Diff}_{+}^{\delta}S^{1};\mathbb{Z})\). Here we use Thurston's original idea by which he proved linear independence of the two classes \(\chi,\alpha\in H^{2}(B\mathrm{Diff}_{+}^{\delta}S^{1};\mathbb{R})\) where \(\alpha\) denotes the Godbillon-Vey class integrated along the fiber. Let \[\rho:\pi_{1}(\Sigma_{2})\to\mathrm{PSL}(2,\mathbb{R})\to\mathrm{Diff}_{+}^{ \omega,\delta}S^{1}\] be a Fuchsian representation, corresponding to a hyperbolic structure on a closed surface \(\Sigma_{2}\) of genus \(2\), followed by a natural embedding \(\mathrm{PSL}(2,\mathbb{R})\subset\mathrm{Diff}_{+}^{\omega,\delta}S^{1}\). It is known that this representation lifts to \(\mathrm{SL}(2,\mathbb{R})\) (see [17]) so that we have \[\tilde{\rho}:\pi_{1}(\Sigma_{2})\to\mathrm{SL}(2,\mathbb{R})\to\mathrm{Diff}_{ +}^{\omega,\delta}S^{1}\] where the second homomorphism is induced by the natural action of \(\mathrm{SL}(2,\mathbb{R})\) on the set of oriented directions from the origin of \(\mathbb{R}^{2}\). Then \(\chi(\rho_{*}([\Sigma_{2}]))=-2\) while \(\chi(\tilde{\rho}_{*}([\Sigma_{2}]))=-1\). Now set \(\sigma=\tilde{\rho}_{*}([\Sigma_{2}])-2\rho_{*}([\Sigma_{2}])\in H_{2}(B \mathrm{Diff}_{+}^{\delta}S^{1};\mathbb{Z})\) so that \(\chi(\sigma)=3\). We show that \(\mu(\sigma)=0\) which implies that \(\sigma\) represents \(3\) of the canonical direct summand \(\mathbb{Z}\subset H_{2}(B\mathrm{Diff}_{+}^{\delta}S^{1};\mathbb{Z})\). We denote by \(S^{1}(\rho)\) (resp. \(S^{1}(\tilde{\rho})\)) the total space of the flat \(S^{1}\)-bundle induced by \(\rho\) (resp. \(\tilde{\rho}\)). Then there exists a fiberwise \(2\)-fold covering map \(S^{1}(\tilde{\rho})\to S^{1}(\rho)\). Therefore the classifying map for the codimension one foliations on these total spaces factors as \[S^{1}(\tilde{\rho})\xrightarrow[\text{2-fold cover}]{\text{fiberwise}}S^{1}( \rho)\longrightarrow B\overline{\Gamma}_{1}.\] The images in \(H_{3}(B\overline{\Gamma}_{1};\mathbb{Z})\) of \([S^{1}(\tilde{\rho})],[S^{1}(\rho)]\) under the above map represent the first summands of \(\mu(\tilde{\rho}_{*}[\Sigma_{2}]),\mu(\rho_{*}[\Sigma_{2}])\) and the first one is twice the second one. Therefore Theorem 1.2\((\mathrm{ii}),(\mathrm{iii})\) implies that \(\mu(\sigma)=0\in H_{3}(B\widetilde{\mathrm{Diff}}_{+}^{\delta}S^{1};\mathbb{Z})\) as required. 
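For the reader's convenience, the arithmetic behind the value \(\chi(\sigma)=3\) above can be recorded explicitly; this is a routine check, using only the Euler numbers stated in the example: \[\chi(\sigma)=\chi(\tilde{\rho}_{*}([\Sigma_{2}]))-2\,\chi(\rho_{*}([\Sigma_{2}]))=(-1)-2\cdot(-2)=3.\]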
Instead of \(\mathrm{PSL}(2,\mathbb{R})\), we may use the group \(\mathrm{GL}^{+}(2,\mathbb{Z}[\frac{1}{2}])\) using a result of Milnor [16]. Also it is a very important question whether \(\mu(\sigma)=0\) holds already in \(H_{3}(\widetilde{\mathrm{Diff}}_{+}^{\omega,\delta}S^{1};\mathbb{Z})\) or not. This is because if it holds, then we can conclude that \(\chi^{2}\) does not vanish rationally for real analytic flat \(S^{1}\)-bundles. Let \(\varphi_{k}:\widetilde{\mathrm{Diff}}_{+}^{\delta}S^{1}\to\widetilde{\mathrm{ Diff}}_{+}^{\delta}S^{1}\)\((k=2,3,\cdots)\) be the endomorphism defined by \[\varphi_{k}(f)(x)=\frac{1}{k}f(kx)\quad(f\in\widetilde{\mathrm{Diff}}_{+}^{ \delta}S^{1}).\] In the situation of the above Example 1.3, the endomorphism \(\varphi_{2}\) appears in the following commutative diagram. \[\begin{CD}\pi_{1}(S^{1}(\tilde{\rho}))@>{}>{}>\widetilde{\mathrm{Diff}}_{+}^{ \delta}S^{1}\\ @V{\cap}V{\operatorname{index}2}V@V{}V{\varphi_{2}}V\\ \pi_{1}(S^{1}(\rho))@>{}>{}>\widetilde{\mathrm{Diff}}_{+}^{\delta}S^{1}.\end{CD}\] **Theorem 1.4**.: _For any element \(\sigma\in H_{2}(B\mathrm{Diff}_{+}^{\delta}S^{1};\mathbb{Z})\), and for any \(k\), we have_ \[(\varphi_{k})_{*}(\mu(\sigma))=\mu(\sigma)\in H_{3}(\widetilde{B\mathrm{Diff}} _{+}^{\delta}S^{1};\mathbb{Z}).\] **Problem 1.5**.: Study the above equality in the real analytic case. Namely, for a given element \(\sigma\in H_{2}(B\mathrm{Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Z})\) and \(k\), determine whether the following equality holds or not. \[(\varphi_{k})_{*}(\mu(\sigma))=\mu(\sigma)\in H_{3}(\widetilde{B\mathrm{Diff}} _{+}^{\omega,\delta}S^{1};\mathbb{Z})\] This is related to the question of non-triviality of \(\chi^{2}\in H^{4}(B\mathrm{Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Q})\) because of the following result. By the way, even in the smooth case, it is an open problem to construct explicit \(4\)-cycles in \(H_{4}(B\mathrm{Diff}_{+}^{\delta}S^{1};\mathbb{Z})\) with non-vanishing \(\chi^{2}\). **Proposition 1.6**.: _If the above problem will be affirmatively solved for one particular element \(\sigma\in H_{2}(B\mathrm{Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Z})\) with \(\chi(\sigma)\neq 0\) and one particular \(k\), then we have_ \[\chi^{2}\neq 0\in H^{4}(B\mathrm{Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Q}).\] Next, we consider what can be said about the homology of \(B\mathrm{Diff}_{+}^{\omega,\delta}S^{1}\) assuming that \(\chi^{2}=0\in H^{4}(B\mathrm{Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Q})\) (or more generally \(\chi^{k}=0\) for some \(k\geq 2\)). We show that there will arise rather strange integral homology classes of \(B\mathrm{Diff}_{+}^{\omega,\delta}S^{1}\). By Proposition 1.6, the above assumption is equivalent to the following condition: \[(\varphi_{k})_{*}(\mu(\sigma))\neq\mu(\sigma)\in H_{3}(\widetilde{ \mathrm{Diff}}_{+}^{\omega,\delta}S^{1};\mathbb{Z})\] \[\text{for any }\sigma\in H_{2}(B\mathrm{Diff}_{+}^{\omega, \delta}S^{1};\mathbb{Z})\text{ with }\chi(\sigma)\neq 0\text{ and }k\geq 2.\] **Theorem 1.7**.: _Assume that \(\chi^{2}=0\in H^{4}(B{\rm Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Q})\). Then the quotient group_ \[H_{3}(B\widetilde{\rm Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Z})/\mu(p_{*}(H_{2}( B\widetilde{\rm Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Z})))\] _contains a group \(P\ (\supset\mathbb{Z})\) which admits a surjective homomorphism onto \(\mathbb{Q}\). 
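Note in passing that \(\varphi_{k}\) is indeed a well-defined endomorphism of \(\widetilde{\mathrm{Diff}}_{+}^{\delta}S^{1}\); the following routine verification is recorded only for convenience. For \(f\in\widetilde{\mathrm{Diff}}_{+}S^{1}\) one has \[\varphi_{k}(f)(x+1)=\frac{1}{k}\,f(kx+k)=\frac{1}{k}\,\bigl(f(kx)+k\bigr)=\varphi_{k}(f)(x)+1,\] since \(f\) commutes with \(T\) and hence with \(T^{k}\), so that \(\varphi_{k}(f)\) again commutes with \(T\); moreover \(\varphi_{k}(f\circ g)(x)=\frac{1}{k}f(g(kx))=(\varphi_{k}(f)\circ\varphi_{k}(g))(x)\), so \(\varphi_{k}\) is a group homomorphism.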
This group \(P\) vanishes in \(H_{3}(B\widetilde{\rm Diff}_{+}^{\delta}S^{1};\mathbb{Z})\) and there is a subgroup_ \[P/\mathbb{Z}\subset H_{3}(B{\rm Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Z})\] _which admits a surjective homomorphism onto \(\mathbb{Q}/\mathbb{Z}\)._ **Remark 1.8**.: In general, we can prove the following statement. Assume that \(\chi^{k-1}\neq 0\in H^{2k-2}(B{\rm Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Q})\) and \(\chi^{k}=0\in H^{2k}(B{\rm Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Q})\) for some \(k\geq 2\). Then we can conclude that the quotient group \[H_{2k-1}(B\widetilde{\rm Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Z})/\mu(p_{*}(H_{2k-2}(B\widetilde{\rm Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Z})))\] contains a group \(P\ (\supset\mathbb{Z})\) which admits a surjective homomorphism onto \(\mathbb{Q}\). This group \(P\) vanishes in \(H_{2k-1}(B\widetilde{\rm Diff}_{+}^{\delta}S^{1};\mathbb{Z})\) and there is a subgroup \[P/\mathbb{Z}\subset H_{2k-1}(B{\rm Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Z})\] which admits a surjective homomorphism onto \(\mathbb{Q}/\mathbb{Z}\). **Remark 1.9**.: As shown in Dupont-Sah [2, 3] and Parry-Sah [21], a phenomenon similar to the one in Theorem 1.7 occurs at the level of \({\rm SL}^{\delta}(2,\mathbb{R})\subset{\rm Diff}_{+}^{\delta}S^{1}\). Recall here that \(\chi^{2}=0\in H^{4}(B{\rm SL}^{\delta}(2,\mathbb{R});\mathbb{Q})\). More precisely, they proved that the following part of the Gysin exact sequence for the central extension \(0\to\mathbb{Z}\to\widetilde{\rm SL}\to{\rm SL}\to 1\) (where \({\rm SL}\) denotes \({\rm SL}(2,\mathbb{R})\) for short) contains a sub-exact sequence shown in the second row below: \[\begin{CD}H_{2}(B{\rm SL}^{\delta};\mathbb{Z})@>{\mu}>{}>H_{3}(B\widetilde{\rm SL}^{\delta};\mathbb{Z})@>{}>{}>H_{3}(B{\rm SL}^{\delta};\mathbb{Z})@>{}>{}>H_{1}({\rm SL}^{\delta};\mathbb{Z})=0\\ \cup\Big{\uparrow}@V{\cup}V{}V@V{\cup}V{}V@V{}V{\Big{\downarrow}}V\\ \mathbb{Z}@>{\mu}>{}>\mathbb{Q}@>{}>{}>\mathbb{Q}/\mathbb{Z}@>{}>{}>0.\end{CD} \tag{2}\] However, there is also a considerable difference between the two cases. The \(\mathbb{Q}\)-factor in (2) survives in \(H_{3}(B\widetilde{\rm Diff}_{+}^{\delta}S^{1};\mathbb{Z})\) while the \(\mathbb{Q}\)-factor in Theorem 1.7 vanishes there. This is because the former one is detected by the \(\beta\) class \(\in H^{3}(B\widetilde{\rm Diff}_{+}^{\delta}S^{1};\mathbb{R})\) (= Godbillon-Vey class) pulled back to \(H^{3}(B\widetilde{\rm SL}^{\delta};\mathbb{R})\) while the latter is not. On the other hand, the \(\mathbb{Q}/\mathbb{Z}\)-factor in (2) is described as \(H_{3}({\rm SO}(2)_{\rm tor};\mathbb{Z})\cong H_{3}(\mathbb{Q}/\mathbb{Z};\mathbb{Z})\cong\mathbb{Q}/\mathbb{Z}\) where \({\rm SO}(2)\subset{\rm SL}\) is the subgroup consisting of rotations and \({\rm SO}(2)_{\rm tor}\) denotes its torsion subgroup. Dupont and Sah [2, 3] proved that it is detected by the Cheeger-Chern-Simons class \(\hat{c}_{2}\). Recall here that \({\rm SO}(2)_{\rm tor}\cong\mathbb{Q}/\mathbb{Z}=\varinjlim_{n}\tfrac{1}{n}\mathbb{Z}/\mathbb{Z}\) where \(\frac{1}{n}\mathbb{Z}/\mathbb{Z}\) is the subgroup of \({\rm SO}(2)_{\rm tor}\) consisting of the \(n\)-torsion elements, and also recall \(H_{2k-1}({\rm SO}(2)_{\rm tor};\mathbb{Z})\cong\mathbb{Q}/\mathbb{Z}\), where the part corresponding to \(\frac{1}{n}\mathbb{Z}/\mathbb{Z}=\mathbb{Z}/n\mathbb{Z}\) is realized by the natural flat \(S^{1}\)-bundle over the lens space \(L_{n}^{2k-1}\).
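For later reference, recall also how the identification \(H_{2k-1}({\rm SO}(2)_{\rm tor};\mathbb{Z})\cong\mathbb{Q}/\mathbb{Z}\) just used can be seen; this is a standard computation, recorded here only for the reader's convenience. For each \(n\) one has \[H_{2k-1}(\mathbb{Z}/n\mathbb{Z};\mathbb{Z})\cong\mathbb{Z}/n\mathbb{Z}\qquad(k\geq 1),\] a generator being the image of the fundamental class \([L_{n}^{2k-1}]\) under the natural map \(L_{n}^{2k-1}=S^{2k-1}/(\mathbb{Z}/n\mathbb{Z})\to B(\mathbb{Z}/n\mathbb{Z})\), and since group homology commutes with direct limits, \[H_{2k-1}({\rm SO}(2)_{\rm tor};\mathbb{Z})\cong\varinjlim_{n}H_{2k-1}\bigl(\tfrac{1}{n}\mathbb{Z}/\mathbb{Z};\mathbb{Z}\bigr)\cong\mathbb{Q}/\mathbb{Z}.\]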
Motivated by the above result, we consider whether the homology group \(H_{*}(\mathrm{SO}(2)_{\text{tor}};\mathbb{Z})\) of the subgroup \(\mathrm{SO}(2)_{\text{tor}}\subset\mathrm{Diff}_{+}^{\delta}S^{1}\) consisting of rational rotations of \(S^{1}\) survives (or vanishes) in the homology of various subgroups of \(\mathrm{Diff}_{+}^{\delta}S^{1}\) containing \(\mathrm{SO}(2)\), in particular \(\mathrm{Diff}_{+}^{\delta}S^{1}\) itself and \(\mathrm{Diff}_{+}^{\omega,\delta}S^{1}\). As for the former \(C^{\infty}\) case, we obtain the following vanishing result by using Thurston's Theorem 1.1. **Theorem 1.10**.: _Let \(\mathrm{SO}(2)_{\text{tor}}\subset\mathrm{Diff}_{+}^{\delta}S^{1}\) be the subgroup consisting of all the rational rotations. Then the homomorphisms_ \[H_{2k-1}(\mathrm{SO}(2)_{\text{tor}};\mathbb{Z})\cong\mathbb{Q}/\mathbb{Z}\longrightarrow H_{2k-1}(B\mathrm{Diff}_{+}^{\delta}S^{1};\mathbb{Z})\quad(k=2,3,\cdots)\] _are all trivial._ **Remark 1.11**.: \((\mathrm{i})\) By considering \(\mathrm{SO}(2)_{\text{tor}}\), Nariman [20] proved that the homomorphism \[H^{*}(B\mathrm{Diff}_{+}S^{1};\mathbb{Z})\cong H^{*}(\mathbb{C}P^{\infty};\mathbb{Z})\to H^{*}(B\mathrm{Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Z})\] is injective, by showing that \[H^{*}(B\mathrm{Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Z})\to H^{*}(\mathrm{SO}(2)_{\text{tor}};\mathbb{Z})\to H^{*}(\mathbb{Z}/n\mathbb{Z};\mathbb{Z})\] is surjective for any \(\mathbb{Z}/n\mathbb{Z}\subset\mathrm{SO}(2)_{\text{tor}}\). In contrast with this, we showed that the homology map \[H_{*}(\mathrm{SO}(2)_{\text{tor}};\mathbb{Z})\to H_{*}(B\mathrm{Diff}_{+}^{\delta}S^{1};\mathbb{Z})\] is trivial. The problem is whether this still holds in \(H_{*}(B\mathrm{Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Z})\) or not. \((\mathrm{ii})\) We also remark that if \(B\overline{\Gamma}_{1}\) were an Eilenberg-MacLane space \(K(\mathbb{R},3)\), then we could compute the homology group \(H_{*}(B\mathrm{Diff}_{+}^{\delta}S^{1};\mathbb{Z})\) explicitly, again by using Thurston's Theorem 1.1 (see Remark 2.12). In particular, we see that it has no torsion, although this assumption would be too naive at present. On the other hand, we obtain the following facts which may give another method of proving the non-triviality of the rational \(\chi^{2}\) (and, more generally, of settling the problem whether the rational \(\chi^{k}\) vanishes for some \(k\)) in the real analytic case. **Proposition 1.12**.: _If the homomorphism_ \[H_{3}(\mathrm{SO}(2)_{\text{tor}};\mathbb{Z})\cong\mathbb{Q}/\mathbb{Z}\longrightarrow H_{3}(B\mathrm{Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Z})\] _is not injective, then \(\chi^{2}\neq 0\in H^{4}(B\mathrm{Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Q})\)._ In fact, this is a particular case of the following more general facts. **Theorem 1.13**.: \((\mathrm{i})\) _Let \(\Gamma\subset\mathrm{Diff}_{+}^{\delta}S^{1}\) be any subgroup containing \(\mathrm{SO}(2)\). Assume that the homomorphism_ \[i_{*}:H_{2k-1}(\mathrm{SO}(2)_{\text{tor}};\mathbb{Z})\cong\mathbb{Q}/\mathbb{Z}\longrightarrow H_{2k-1}(\Gamma;\mathbb{Z})\] _is injective (resp. non-trivial) for some \(k\), where \(i:\mathrm{SO}(2)_{\text{tor}}\subset\Gamma\) denotes the inclusion. Then the homomorphisms_ \[i_{*}:H_{2l-1}(\mathrm{SO}(2)_{\text{tor}};\mathbb{Z})\cong\mathbb{Q}/\mathbb{Z}\longrightarrow H_{2l-1}(\Gamma;\mathbb{Z})\] _are injective (resp. non-trivial) for all \(l\geq k\)._ \((\mathrm{ii})\) _Assume that \(\chi^{k}=0\in H^{2k}(\Gamma;\mathbb{Q})\).
Then the homomorphisms_ \[i_{*}:H_{2l-1}(\mathrm{SO}(2)_{\text{tor}};\mathbb{Z})\cong\mathbb{Q}/\mathbb{Z}\longrightarrow H_{2l-1}(\Gamma;\mathbb{Z})\] _are injective for all \(l\geq k\)._ **Remark 1.14**.: If we combine Theorem 1.10 and Theorem 1.13 (ii) for the case \(\Gamma=\mathrm{Diff}_{+}^{\delta}S^{1}\), we obtain yet another proof of the fact that all the powers of the rational Euler class are non-trivial in \(H^{*}(B\mathrm{Diff}_{+}^{\delta}S^{1};\mathbb{Q})\) (see [18], [7], [20] for earlier proofs). **Example 1.15**.: \((\mathrm{i})\) The results of Dupont, Sah and Parry mentioned in Remark 1.9 together with Theorem 1.13\((\mathrm{i})\) show that the homomorphisms \[i_{*}:H_{2k-1}(\mathrm{SO}(2)_{\text{tor}};\mathbb{Z})\cong\mathbb{Q}/\mathbb{Z}\longrightarrow H_{2k-1}(B\mathrm{SL}^{\delta}(2,\mathbb{R});\mathbb{Z})\] are injective for all \(k\geq 2\). \((\mathrm{ii})\) If we assume that Thurston's lost theorem (see Remark 1.17 below) holds: \[\chi^{3}=0\in H^{6}(B\mathrm{Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Q}),\] then Theorem 1.13\((\mathrm{ii})\) implies that \[i_{*}:H_{2k-1}(\mathrm{SO}(2)_{\text{tor}};\mathbb{Z})\cong\mathbb{Q}/\mathbb{Z}\longrightarrow H_{2k-1}(B\mathrm{Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Z})\] are injective for all \(k\geq 3\). More generally, if some power \(\chi^{k}\) of the Euler class vanishes rationally for real analytic flat \(S^{1}\)-bundles, then by Remark 1.8, Theorem 1.13, and Theorem 1.10, we can conclude that \(H_{*}(B\mathrm{Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Z})\) should have numerous torsion homology classes which vanish in \(H_{*}(B\mathrm{Diff}_{+}^{\delta}S^{1};\mathbb{Z})\). On the other hand, we have the following. **Corollary 1.16**.: _Let \(n\geq 2\) be a natural number and consider the subgroup \(\mathbb{Z}/n\mathbb{Z}\subset\mathrm{Diff}_{+}^{\omega,\delta}S^{1}\) generated by the \(1/n\)-rotation of \(S^{1}\). Assume that the homomorphism_ \[H_{2k-1}(\mathbb{Z}/n\mathbb{Z};\mathbb{Z})\cong\mathbb{Z}/n\mathbb{Z}\to H_{2k-1}(B\mathrm{Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Z})\] _is trivial. Then_ \[\chi^{k}\neq 0\in H^{2k}(B\mathrm{Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Q}).\] **Remark 1.17**.: Mather [13, 14, 15] proved that \(B\overline{\Gamma}_{1}\) is \(2\)-connected. On the other hand, Haefliger [8] proved that \(B\overline{\Gamma}_{1}^{\omega}\) is a \(K(\pi,1)\) space and \(H_{1}(B\overline{\Gamma}_{1}^{\omega};\mathbb{Z})=0\). Thus the homotopy types of \(B\overline{\Gamma}_{1}\) and \(B\overline{\Gamma}_{1}^{\omega}\) are drastically different from each other. Nevertheless, it is not known at present whether the natural map \(B\overline{\Gamma}_{1}^{\omega}\to B\overline{\Gamma}_{1}\) induces an isomorphism on homology or not. Tsuboi [24] proposed to study \(H_{2}(B\overline{\Gamma}_{1}^{\omega};\mathbb{Z})\). Also, it is an extremely difficult open problem to determine whether the homomorphism \[H_{GF}^{*}(\mathcal{X}_{S^{1}},\mathrm{SO}(2))\cong\mathbb{R}[\alpha,\chi]/(\alpha\chi)\to H^{*}(B\mathrm{Diff}_{+}^{\omega,\delta}S^{1};\mathbb{R})\] from the Gel'fand-Fuchs cohomology of \(S^{1}\) relative to \(\mathrm{SO}(2)\) to the cohomology of \(B\mathrm{Diff}_{+}^{\omega,\delta}S^{1}\) is injective or not. For the case of the \(C^{\infty}\) diffeomorphism group, it was proved in [18] that it is injective.
However, in the real analytic case, the only known result concerning this problem is due to Thurston ([22]), who proved the continuous variability of the \(\alpha\)-class and hence the linear independence of the classes \(\alpha,\chi\) (cf. Table 1 below). The present work is a tiny attempt to attack this problem, in particular the question of non-triviality of the rational \(\chi^{2}\). Note that Ghys [5, 6] mentioned a tale of what he called Thurston's lost theorem, saying that \(\chi^{3}=0\in H^{6}(B{\rm Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Q})\). See also Nariman [20] where the author showed that any power of the _integral_ Euler class is non-trivial in \(H^{*}(B{\rm Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Z})\). These are our main motivations for this work. Table 1. A comparison of the smooth and the real analytic cases: the low dimensional homology groups \(H_{4}(BG^{\delta};\mathbb{Z})\) and \(H_{2}(BG^{\delta};\mathbb{Z})\), the maps \(\cap\chi\) and \(\mu\), and their evaluation against the classes \(\alpha\) and \(\chi\) (resp. \(\chi^{2}\)). ## 2. Proofs In this section, we denote the group \(\mathrm{Diff}_{+}S^{1}\) (resp. \(\widetilde{\mathrm{Diff}}_{+}S^{1}\)) by \(G\) (resp. \(\tilde{G}\)) for simplicity. First we prepare a few facts. **Proposition 2.1**.: _Let \(X\) be a \(2\)-connected topological space and let \(\wedge X\) denote the free loop space of \(X\). Then we have_ \[H_{2}(\wedge X;\mathbb{Z})\cong H_{2}(\Omega X;\mathbb{Z})\cong H_{3}(X;\mathbb{Z})\] \[H_{3}(\wedge X;\mathbb{Z})\cong H_{3}(X;\mathbb{Z})\oplus H_{3}(\Omega X;\mathbb{Z}).\] Proof.: This can be shown by the usual spectral sequence arguments applied to the fibration \(\Omega X\to\wedge X\to X\) which has a section.
The isomorphism \(H_{2}(\Omega X;\mathbb{Z})\cong H_{3}(X;\mathbb{Z})\) is induced by the theorem of Hurewicz as \[H_{2}(\Omega X;\mathbb{Z})\cong\pi_{2}(\Omega X)\cong\pi_{3}(X)\cong H_{3}(X; \mathbb{Z}).\] **Proposition 2.2** (well known, see for example [1]).: \[H_{*}(\wedge S^{3};\mathbb{Z})\cong H_{*}(\Omega S^{3};\mathbb{Z})\otimes H_ {*}(S^{3};\mathbb{Z})\cong\mathbb{Z}[\alpha]\otimes\wedge(\beta)\] _where \(\alpha\in H_{2}(\Omega S^{3};\mathbb{Z})\cong\mathbb{Z}\) and \(\beta\in H_{3}(S^{3};\mathbb{Z})\cong\mathbb{Z}\) are generators._ **Theorem 2.3** (Haefliger [10], Nariman [20], see Remark 1.14.).: _For any \(k\), there exists certain element \(\sigma_{k}\in H_{2k}(BG^{\delta};\mathbb{Z})\) such that \(\chi^{k}(\sigma_{k})=1\)._ Haefliger first pointed out that the mapping \(H\) of Theorem 1.1 is \(S^{1}\)-equivariant with respect to natural actions of \(S^{1}\) on both sides. Then the required claim follows from Theorem 1.1 by using the fact that the \(S^{1}\) action on free loop spaces has fixed points. Consider the following part of the Gysin exact sequence associated with the central extension (1) \[\cdots\to H_{k+2}(BG^{\delta};\mathbb{Z})\xrightarrow{\cap\chi}H_{k}(BG^{ \delta};\mathbb{Z})\xrightarrow{\mu}H_{k+1}(B\tilde{G}^{\delta};\mathbb{Z}) \xrightarrow{p_{*}}H_{k+1}(BG^{\delta};\mathbb{Z})\to\cdots.\] **Proposition 2.4**.: _Define a homomorphism_ \[\nu:H_{k}(\wedge B\overline{\Gamma}_{1};\mathbb{Z})\to H_{k+1}(\wedge B \overline{\Gamma}_{1};\mathbb{Z})\] _by the following map_ \[H_{k}(\wedge B\overline{\Gamma}_{1};\mathbb{Z})\ni\tau\mapsto\nu(\tau)=\theta ^{\prime}_{*}(\tau\times[S^{1}])\in H_{k+1}(\wedge B\overline{\Gamma}_{1}; \mathbb{Z})\] _where \(\theta^{\prime}:\wedge B\overline{\Gamma}_{1}\times S^{1}\to\wedge B\overline{ \Gamma}_{1}\) denotes the natural \(S^{1}\) action on \(\wedge B\overline{\Gamma}_{1}\). Then the following diagram is commutative:_ \[\begin{CD}H_{k}(B\tilde{G}^{\delta};\mathbb{Z})@>{H_{*}}>{\cong}>H_{k}(\wedge B \overline{\Gamma}_{1};\mathbb{Z})\\ @V{\mu\circ p_{*}}V{}V@V{}V{\nu}V\\ H_{k+1}(B\tilde{G}^{\delta};\mathbb{Z})@>{H_{*}}>{\cong}>H_{k+1}(\wedge B \overline{\Gamma}_{1};\mathbb{Z}).\end{CD}\] Proof.: As already mentioned above, the mapping \(H:B\tilde{G}^{\delta}\to\wedge B\overline{\Gamma}_{1}\) is \(S^{1}\)-equivariant with respect to the natural \(S^{1}\)-actions. The \(S^{1}\)-action \[\theta:B\tilde{G}^{\delta}\times S^{1}\to B\tilde{G}^{\delta}\] on \(B\tilde{G}^{\delta}\) is defined because \(B\tilde{G}^{\delta}\) can be considered as the total space of the universal \(S^{1}\)-bundle over \(BG^{\delta}\), which is an \(S^{1}\)- principal bundle. Thus we have the following homotopy commutative diagram \[\begin{CD}B\tilde{G}^{\delta}\times S^{1}@>{H\times\mathrm{id}}>{}>\wedge B \overline{\Gamma}_{1}\times S^{1}\\ @V{\theta}V{}V@V{}V{\theta^{\prime}}V\\ B\tilde{G}^{\delta}@>{H}>{}>\wedge B\overline{\Gamma}_{1}\end{CD}\] Now the homomorphism \[\mu\circ p_{*}:H_{k}(B\tilde{G}^{\delta};\mathbb{Z})\to H_{k+1}(B\tilde{G}^{ \delta};\mathbb{Z})\] is realized by the map: \[H_{k}(B\tilde{G}^{\delta};\mathbb{Z})\ni\sigma\longmapsto\mu\circ p_{*}( \sigma)=\theta_{*}(\sigma\times[S^{1}])\in H_{k+1}(B\tilde{G}^{\delta}; \mathbb{Z}).\] The claim follows from this. Proof of Theorem 1.2.: As already mentioned, Mather proved that \(B\overline{\Gamma}_{1}\) is \(2\)-connected. If we apply Proposition 2.1 to the case \(X=B\overline{\Gamma}_{1}\), then we obtain the second statements of (i) and (ii). Next we prove the first statement of (i). 
Consider the following two parts of the Gysin exact sequence of the central extension (1): \[0=H_{1}(BG^{\delta};\mathbb{Z})\xrightarrow{\mu}H_{2}(B\tilde{G}^{\delta};\mathbb{Z})\xrightarrow{p_{*}}H_{2}(BG^{\delta};\mathbb{Z})\xrightarrow{\cap\chi}H_{0}(BG^{\delta};\mathbb{Z})=\mathbb{Z}\to 0 \tag{3}\] \[\to H_{4}(BG^{\delta};\mathbb{Z})\xrightarrow{\cap\chi}H_{2}(BG^{\delta};\mathbb{Z})\xrightarrow{\mu}H_{3}(B\tilde{G}^{\delta};\mathbb{Z})\xrightarrow{p_{*}}H_{3}(BG^{\delta};\mathbb{Z})\to H_{1}(BG^{\delta};\mathbb{Z})=0 \tag{4}\] where \(\chi\in H^{2}(BG^{\delta};\mathbb{Z})\) denotes the Euler class and \(H_{1}(BG^{\delta};\mathbb{Z})=0\) is due to Herman [11]. From the exact sequence (3), we have \[H_{2}(BG^{\delta};\mathbb{Z})\cong H_{2}(B\tilde{G}^{\delta};\mathbb{Z})\oplus\mathbb{Z}\quad(\text{non-canonical}).\] On the other hand, we show that the restriction \(\bar{\mu}=\mu\circ p_{*}\) of the homomorphism \(\mu\) in the exact sequence (4) to the submodule \(H_{2}(B\tilde{G}^{\delta};\mathbb{Z})\subset H_{2}(BG^{\delta};\mathbb{Z})\) is injective. In view of Proposition 2.4 together with the second statements of (i) and (ii), we can write \[\begin{CD}H_{2}(B\tilde{G}^{\delta};\mathbb{Z})@>{H_{*}}>{\cong}>H_{2}(\wedge B\overline{\Gamma}_{1};\mathbb{Z})\cong H_{2}(\Omega B\overline{\Gamma}_{1};\mathbb{Z})\\ @V{\mu\circ p_{*}}V{}V@V{}V{\nu}V\\ H_{3}(B\tilde{G}^{\delta};\mathbb{Z})@>{H_{*}}>{\cong}>H_{3}(\wedge B\overline{\Gamma}_{1};\mathbb{Z})\cong H_{3}(B\overline{\Gamma}_{1};\mathbb{Z})\oplus H_{3}(\Omega B\overline{\Gamma}_{1};\mathbb{Z}).\end{CD}\] For any element \(\sigma\in H_{2}(B\tilde{G}^{\delta};\mathbb{Z})\), we consider the element \[\tau=H_{*}(\sigma)\in H_{2}(\wedge B\overline{\Gamma}_{1};\mathbb{Z})\cong H_{2}(\Omega B\overline{\Gamma}_{1};\mathbb{Z}).\] Let \(\tau^{*}\in H_{3}(B\overline{\Gamma}_{1};\mathbb{Z})\cong\pi_{3}(B\overline{\Gamma}_{1})\) be the element which corresponds to \(\tau\in H_{2}(\Omega B\overline{\Gamma}_{1};\mathbb{Z})\) under the natural isomorphism \(H_{3}(B\overline{\Gamma}_{1};\mathbb{Z})\cong H_{2}(\Omega B\overline{\Gamma}_{1};\mathbb{Z})\) and choose a continuous mapping \[f:S^{3}\to B\overline{\Gamma}_{1}\] which represents \(\tau^{*}\). Then, if we denote by \(\iota:S^{2}\to\Omega S^{3}\) the mapping representing the generator of \(\pi_{2}(\Omega S^{3})\cong\mathbb{Z}\), the above element \(\tau=H_{*}(\sigma)\) is represented by the following composed mapping \[S^{2}\stackrel{{\iota}}{{\longrightarrow}}\Omega S^{3}\stackrel{{\Omega f}}{{\longrightarrow}}\Omega B\overline{\Gamma}_{1}\subset\wedge B\overline{\Gamma}_{1}.\] By Proposition 2.4 again, \[H_{*}(\bar{\mu}(\sigma))=\nu(H_{*}(\sigma))=\nu(\tau)=\theta^{\prime}_{*}(\tau\times[S^{1}]).\] On the other hand, the following diagram is clearly commutative \[\begin{CD}\wedge S^{3}\times S^{1}@>{\wedge f\times\text{id}}>{}>\wedge B\overline{\Gamma}_{1}\times S^{1}\\ @V{\theta^{\prime\prime}}V{}V@V{}V{\theta^{\prime}}V\\ \wedge S^{3}@>{\wedge f}>{}>\wedge B\overline{\Gamma}_{1}\end{CD}\] where \(\theta^{\prime\prime}\) denotes the \(S^{1}\) action on \(\wedge S^{3}\). Hence \[\theta^{\prime}_{*}(\tau\times[S^{1}])=(\wedge f)_{*}(\theta^{\prime\prime}_{*}(j_{*}\iota_{*}([S^{2}])\times[S^{1}]))\] where \(j:\Omega S^{3}\to\wedge S^{3}\) denotes the inclusion.
In the terminology of Proposition 2.2, we have \[j_{*}\iota_{*}([S^{2}])=\alpha.\] On the other hand, it is easy to see that \[\theta^{\prime\prime}_{*}(\alpha\times[S^{1}])=\beta.\] Hence \[\nu(\tau)=\theta^{\prime}_{*}(\tau\times[S^{1}])=(\wedge f)_{*}(\beta)\] and finally \[(\wedge f)_{*}(\beta)=f_{*}([S^{3}])=\tau^{*}\in H_{3}(B\overline{\Gamma}_{1} ;\mathbb{Z})\subset H_{3}(\wedge B\overline{\Gamma}_{1};\mathbb{Z}).\] Summing up, the homomorphism \(\bar{\mu}:H_{2}(B\tilde{G}^{\delta};\mathbb{Z})\to H_{3}(B\tilde{G}^{\delta}; \mathbb{Z})\) is described, under the isomorphism \(H_{*}\), as \[H_{2}(\wedge B\overline{\Gamma}_{1};\mathbb{Z})\cong H_{2}(\Omega B\overline{ \Gamma}_{1};\mathbb{Z})\ni\tau\stackrel{{\nu}}{{\mapsto}}(\tau^ {*},0)\in H_{3}(B\overline{\Gamma}_{1};\mathbb{Z})\oplus H_{3}(\Omega B \overline{\Gamma}_{1};\mathbb{Z}).\] Observe here that \(\nu\) does not hit the second component because \(H_{3}(\Omega S^{3};\mathbb{Z})=0\). Thus we have proved that \(\bar{\mu}\) is injective as required. Now we use Theorem 2.3 to conclude that the composition \[H_{4}(BG^{\delta};\mathbb{Z})\stackrel{{\cap\chi}}{{ \longrightarrow}}H_{2}(BG^{\delta};\mathbb{Z}))\stackrel{{\cap \chi}}{{\longrightarrow}}H_{0}(BG^{\delta};\mathbb{Z})\cong\mathbb{Z}\] is surjective. Now consider the following subgroup \[\operatorname{Im}\left(H_{4}(BG^{\delta};\mathbb{Z})\stackrel{{ \cap\chi}}{{\longrightarrow}}H_{2}(BG^{\delta};\mathbb{Z})\right)= \operatorname{Ker}\left(H_{2}(BG^{\delta};\mathbb{Z})\stackrel{{ \mu}}{{\longrightarrow}}H_{3}(B\tilde{G}^{\delta};\mathbb{Z})\right)\] of \(H_{2}(BG^{\delta};\mathbb{Z})\). Then it is easy to see that this subgroup is isomorphic to \(\mathbb{Z}\) and we have the required canonical direct sum decomposition \[H_{2}(BG^{\delta};\mathbb{Z})=H_{2}(B\tilde{G}^{\delta};\mathbb{Z})\oplus \mathbb{Z}\cong H_{3}(B\overline{\Gamma}_{1};\mathbb{Z})\oplus\mathbb{Z}\quad( \text{canonical direct sum}). \tag{5}\] This finishes the proof of \((\mathrm{i})\) and also \((\mathrm{iii})\). Finally the first statement of \((\mathrm{ii})\) follows from the exact sequence (4) and the second statement of \((\mathrm{ii})\). Thus we have proved Theorem 1.2. Proof of Theorem 1.4.: By Theorem 1.2, the second factor of \(\mu(\sigma)\) in \[H_{3}(B\tilde{G}^{\delta};\mathbb{Z})\cong H_{3}(B\overline{\Gamma}_{1}; \mathbb{Z})\oplus H_{3}(\Omega B\overline{\Gamma}_{1};\mathbb{Z})\] is trivial and the first factor is detected by the following composed mapping \[B\tilde{G}^{\delta}\to\wedge B\overline{\Gamma}_{1}\to B\overline{\Gamma}_{1}\] where the second one denotes the evaluation map at the base point of \(S^{1}\) which also serves as the natural retraction onto the subspace of \(\wedge B\overline{\Gamma}_{1}\) consisting of constant loops. This composed mapping factors through the classifying space of \(\mathrm{Diff}^{\delta}_{+}\mathbb{R}\) as \[B\tilde{G}^{\delta}\to B\mathrm{Diff}^{\delta}_{+}\mathbb{R}\to B \overline{\Gamma}_{1}.\] The endomorphism \(\varphi_{k}\) of \(\tilde{G}^{\delta}\) is defined on this larger group \(\mathrm{Diff}^{\delta}_{+}\mathbb{R}\) as an inner automorphism. Since any inner automorphism of a group acts on the homology trivially, the required result follows. Here we recall a few facts from [18]. Let \(p_{k}:G^{(k)}\to G\) be the \(k\)-fold cover of \(G\). 
Then it can be described as \[G^{(k)}=\{f\in G;fR(1/k)=R(1/k)f\}\] where \(R(1/k)\) denotes the rotation by \(1/k\), so that we have also the inclusion \[i_{k}:G^{(k)}\subset G.\] **Definition 2.5**.: Define an endomorphism \[\varphi_{k}^{\mathbb{Q}}:H_{*}(BG^{\delta};\mathbb{Q})\to H_{*}(BG^{\delta}; \mathbb{Q})\] by setting \[\varphi_{k}^{\mathbb{Q}}(\sigma)=(i_{k})_{*}(p_{k})_{*}^{-1}(\sigma)\quad( \sigma\in H_{m}(BG^{\delta};\mathbb{Q})).\] **Remark 2.6**.: The above definition is given by adapting the endomorphism \(\varphi_{k}\) on \(\tilde{G}\) to the case of \(G\) but only at the rational homological level. In fact, it can be seen that the following diagram is commutative. \[\begin{CD}H_{m}(B\tilde{G}^{\delta};\mathbb{Q})@>{(\varphi_{k})_{*}}>{}>H_{m} (B\tilde{G}^{\delta};\mathbb{Q})\\ @V{p_{*}}V{}V@V{}V{p_{*}}V\\ H_{m}(BG^{\delta};\mathbb{Q})@>{\varphi_{k}^{\mathbb{Q}}}>{}>H_{m}(BG^{\delta };\mathbb{Q}).\end{CD}\] **Proposition 2.7** ([18]).: _For any \(\sigma\in H_{m}(BG^{\delta};\mathbb{Q})\), we have the identity_ \[(\varphi_{k})_{*}(\mu(\sigma))=\mu\Big{(}\frac{1}{k}\varphi_{k}^{\mathbb{Q}}( \sigma)\Big{)}.\] _Namely, the following diagram is commutative_ \[\begin{CD}H_{m}(BG^{\delta};\mathbb{Q})@>{\frac{1}{k}\varphi_{k}^{\mathbb{Q}} }>{}>H_{m}(BG^{\delta};\mathbb{Q})\\ @V{\mu}V{}V@V{}V{\mu}V\\ H_{m+1}(B\tilde{G}^{\delta};\mathbb{Q})@>{\varphi_{k}}>{}>H_{m+1}(B\tilde{G}^{ \delta};\mathbb{Q}).\end{CD}\] _This also holds if we replace \(G^{\delta},\tilde{G}^{\delta}\) by \(G^{\omega,\delta},\tilde{G}^{\omega,\delta}\), respectively._ **Proposition 2.8** (Action of \(\varphi_{k}^{\mathbb{Q}}\)).: _In the direct sum decomposition_ \[H_{2}(BG^{\delta};\mathbb{Q})=H_{2}(B\tilde{G}^{\delta};\mathbb{Q})\oplus \mathbb{Q}\quad(\text{canonical direct sum})\] _given in the proof of Theorem 1.2 (see (5)), both summands \(H_{2}(B\tilde{G}^{\delta};\mathbb{Q})\) and \(\mathbb{Q}\) are eigenspaces of \(\varphi_{k}^{\mathbb{Q}}\) with eigenvalues \(k\) and \(\frac{1}{k}\), respectively._ Proof of Proposition 1.6 By the assumption, \[(\varphi_{k})_{*}(\mu(\sigma))=\mu(\sigma)\in H_{3}(\widetilde{B\mathrm{Diff} _{+}}^{\omega,\delta}S^{1};\mathbb{Z}).\] for some \(\sigma\in H_{2}(B\mathrm{Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Z})\) with \(\chi(\sigma)\neq 0\) and \(k\). Then by Proposition 2.7, we have \[\mu\Big{(}\sigma-\frac{1}{k}\varphi_{k}^{\mathbb{Q}}(\sigma)\Big{)}=0.\] Hence, by the exact sequence (4) (for the real analytic diffeomorphism group with the rational coefficients), we have \[\Big{(}\sigma-\frac{1}{k}\varphi_{k}^{\mathbb{Q}}(\sigma)\Big{)}\in\mathrm{Im }\,\Big{(}H_{4}(B\mathrm{Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Q})\stackrel{{ \cap\chi}}{{\longrightarrow}}H_{2}(B\mathrm{Diff}_{+}^{\omega,\delta}S^{1}; \mathbb{Q})\Big{)}.\] On the other hand, we have \[\chi\Big{(}\sigma-\frac{1}{k}\varphi_{k}^{\mathbb{Q}}(\sigma)\Big{)}=\Big{(} 1-\frac{1}{k^{2}}\Big{)}\chi(\sigma)\neq 0.\] Therefore \(\chi^{2}\neq 0\in H^{4}(B\mathrm{Diff}_{+}^{\omega,\delta}S^{1};\mathbb{Q})\) completing the proof. 
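For readability, the last displayed computation can be unpacked as follows; this is only a restatement of how Propositions 2.7 and 2.8 enter. The identity \(\chi(\varphi_{k}^{\mathbb{Q}}(\sigma))=\frac{1}{k}\chi(\sigma)\) reflects the eigenvalue \(\frac{1}{k}\) on the summand detected by \(\chi\), the other summand not contributing since \(p^{*}\chi=0\) (compare also Example 1.3, where the fiberwise \(2\)-fold cover halves the Euler number). Granting this identity, \[\chi\Bigl(\sigma-\frac{1}{k}\varphi_{k}^{\mathbb{Q}}(\sigma)\Bigr)=\chi(\sigma)-\frac{1}{k}\cdot\frac{1}{k}\,\chi(\sigma)=\Bigl(1-\frac{1}{k^{2}}\Bigr)\chi(\sigma).\]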
Proof of Theorem 1.7 The short exact sequence (3) holds also for the real analytic case: \[0=H_{1}(BG^{\omega,\delta};\mathbb{Z})\stackrel{{\mu}}{{ \rightarrow}}H_{2}(B\tilde{G}^{\omega,\delta};\mathbb{Z})\to H_{2}(BG^{ \omega,\delta};\mathbb{Z})\stackrel{{\cap\chi}}{{\longrightarrow }}H_{0}(BG^{\omega,\delta};\mathbb{Z})=\mathbb{Z}\to 0\] where \(G^{\omega,\delta}\) denotes \(\mathrm{Diff}_{+}^{\omega,\delta}S^{1}\) and \(H_{1}(BG^{\omega,\delta};\mathbb{Z})=0\) because of Herman's result [11] that \(\mathrm{Diff}_{+}^{\omega,\delta}S^{1}\) is a simple group. Therefore \[H_{2}(BG^{\omega,\delta};\mathbb{Z})\cong H_{2}(B\tilde{G}^{\omega,\delta}; \mathbb{Z})\oplus\mathbb{Z}\quad(\text{non-canonical direct sum}).\] By the assumption \(\chi^{2}=0\) and the exact sequence (4) \[\to H_{4}(BG^{\omega,\delta};\mathbb{Z})\stackrel{{\cap\chi}}{{ \longrightarrow}}H_{2}(BG^{\omega,\delta};\mathbb{Z})\stackrel{{ \mu}}{{\rightarrow}}H_{3}(B\tilde{G}^{\omega,\delta};\mathbb{Z})\to H_{3}( BG^{\omega,\delta};\mathbb{Z})\to H_{1}(BG^{\omega,\delta};\mathbb{Z})=0\] for the real analytic case, we can conclude that the homomorphism \(\mu\) is _injective_ on the (non-canonical) summand \(\mathbb{Z}\). Furthermore \[\mu(H_{2}(B\tilde{G}^{\omega,\delta};\mathbb{Z}))\cap\mu(\mathbb{Z})=\{0\}\] because otherwise, there is a non-zero integer \(n\) and \(y\in H_{2}(B\tilde{G}^{\omega,\delta};\mathbb{Z}))\) such that \(\mu(y)=\mu(n)\). It follows that \(\mu(y-n)=0\) which implies \(y-n\in\operatorname{Im}\cap\chi\). But since \(\chi(y-n)=-n\neq 0\), this contradicts the assumption \(\chi^{2}=0\). Hence \[\operatorname{Im}\mu=\mu(H_{2}(B\tilde{G}^{\omega,\delta};\mathbb{Z}))\oplus \mu(\mathbb{Z})\ \subset H_{3}(B\tilde{G}^{\omega,\delta};\mathbb{Z})\] where \(\mu(\mathbb{Z})\cong\mathbb{Z}\). Let us define \[\tilde{P}= \text{submodule of }H_{3}(B\tilde{G}^{\omega,\delta}; \mathbb{Z})\text{ generated by }\mu(H_{2}(B\tilde{G}^{\omega,\delta};\mathbb{Z}))\text{ and}\] \[\text{the elements }\{\mu(1),(\varphi_{k})_{*}(\mu(1));k=2,3,\cdots\}\] and set \[P=\tilde{P}/\mu(H_{2}(B\tilde{G}^{\omega,\delta};\mathbb{Z})).\] We construct a surjective homomorphism \[T_{P}:P\twoheadrightarrow\mathbb{Q}. \tag{6}\] By the definition of the group \(P\), there is a homomorphism \[H_{2}(BG^{\omega,\delta};\mathbb{Z})/H_{2}(B\tilde{G}^{\omega,\delta}; \mathbb{Z})\cong\mathbb{Z}\to P\] and the left hand side is detected by the homomorphism \(\chi:H_{2}(BG^{\omega,\delta};\mathbb{Z})\to Z\). Consider the rational form of this homomorphism \[H_{2}(BG^{\omega,\delta};\mathbb{Q})/H_{2}(B\tilde{G}^{\omega,\delta}; \mathbb{Q})\cong\mathbb{Q}\to P\otimes_{\mathbb{Z}}\mathbb{Q}. \tag{7}\] By Proposition 2.7, together with Definition 2.5 and Remark 2.6, the homomorphism \(\varphi_{k}^{\mathbb{Q}}\) acts on the left hand side with eigenvalue \(\frac{1}{k}\), and this homomorphism is transferred to the homomorphism \(k(\varphi_{k})_{*}\) on the right hand side. Here observe that this operation preserves the space \(\mu(H_{2}(B\tilde{G}^{\omega,\delta};\mathbb{Q}))\) so that it induces that on the quotient \(P\otimes_{\mathbb{Z}}\mathbb{Q}\). Now we show that the above homomorphisms (7) is an isomorphism. Injectivity is clear because the generator of the summand \(\mathbb{Z}\) (non-canonical) \(\subset H_{2}(G^{\omega,\delta};\mathbb{Q})\) goes to \(\mu(1)\otimes 1\). To prove the surjectivity, it is enough to show that the element \((\varphi_{k})_{*}(\mu(1))\otimes 1\) is contained in the image for any \(k\). 
But by Proposition 2.7 and 2.8, \[(\varphi_{k})_{*}(\mu(1))\otimes 1=\mu\left(\frac{1}{k}\varphi_{k}^{\mathbb{Q}}(1 )\right)\otimes 1=\mu(1)\otimes\frac{1}{k^{2}}.\] Now we define the homomorphism \(T_{P}\) to be the composition \[T_{P}:P\to P\otimes_{\mathbb{Z}}\mathbb{Q}\ \overset{\eqref{eq:P}}{\cong} \mathbb{Q}.\] It remains to prove that this homomorphism is surjective (this is the main point of the proof). For any rational number \(\frac{p}{q}\in\mathbb{Q}\), consider the element \(pq\ (\varphi_{q})_{*}(\mu(1))\in P\). Then we have \[T_{P}\left(pq\ (\varphi_{q})_{*}(\mu(1))\right)=T_{P}\left(pq\ \mu\left(\frac{1}{q} \varphi_{q}^{\mathbb{Q}}(1)\right)\right)=T_{P}\left(pq\ \frac{1}{q^{2}}\mu(1)\right)=\frac{p}{q}\] proving the surjectivity. Finally the last claim \(P/\mathbb{Z}\subset H_{3}(BG^{\omega,\delta};\mathbb{Z})\) follows from the exact sequence (4). **Remark 2.9**.: The homomorphism \(T_{P}\) constructed above can be interpreted as a particular case of the secondary characteristic class defined in Definition 3.1 below. In this case, it is equal to the class \(T\chi^{2}\) defined in \(H^{3}(B\tilde{G}^{\omega,\delta};\mathbb{Q})\) associated with the assumption that \(\chi^{2}=0\in H^{4}(BG^{\omega,\delta};\mathbb{Q})\). Apart from the above, one distinctive feature here is that the value of this homomorphism takes any rational number on _integral_ homology classes in \(H_{3}(B\tilde{G}^{\omega,\delta};\mathbb{Z})\). Proof of Theorem 1.10.: Since \(\mathrm{SO}(2)_{\mathrm{tor}}=\lim\limits_{\underset{n}{\rightarrow}}\mathbb{ Z}/n\mathbb{Z}\) it is enough to show that the homomorphism \[H_{2k-1}(\mathbb{Z}/n\mathbb{Z};\mathbb{Z})\cong\mathbb{Z}/n\mathbb{Z} \to H_{2k-1}(BG^{\delta};\mathbb{Z})\] is trivial for any \(n,\,k\in\mathbb{N}\,\). This homomorphism is induced by a mapping \(i:L_{n}^{2k-1}\to BG^{\delta}\) from the \((2k-1)\)-dimensional lens space \(L_{n}^{2k-1}=S^{2k-1}/(\mathbb{Z}/n\mathbb{Z})\) to \(BG^{\delta}\) defined as the composition \(L_{n}^{2k-1}\to B(\mathbb{Z}/n\mathbb{Z})\to BG^{\delta}\). Let \[S^{1}\to L_{n}^{2k-1}\tilde{\times}S^{1}\to L_{n}^{2k-1}\] be the foliated \(S^{1}\)-bundle over \(L_{n}^{2k-1}\) corresponding to the mapping \(i\), where \(L_{n}^{2k-1}\tilde{\times}S^{1}\) denotes its total space. Since, as already mentioned, \(B\tilde{G}^{\delta}\) can be considered as the total space of the universal flat \(S^{1}\)-bundle over \(BG^{\delta}\), there exists a map \[\tilde{i}:L_{n}^{2k-1}\tilde{\times}S^{1}\to B\tilde{G}^{\delta}\] making the following diagram commutative \[\begin{CD}L_{n}^{2k-1}\tilde{\times}S^{1}@>{\tilde{i}}>{}>B\tilde{G}^{\delta} \\ @V{}V{}V@V{}V{}V\\ L_{n}^{2k-1}@>{i}>{}>BG^{\delta}.\end{CD}\] The foliation on \[L_{n}^{2k-1}\tilde{\times}S^{1}=S^{2k-1}\times_{\mathbb{Z}/n}S^{1}\] can also be described as the quotient of the horizontal foliation on \(S^{2k-1}\times S^{1}\) by the action of \(\mathbb{Z}/n\mathbb{Z}\), where the generator of \(\mathbb{Z}/n\mathbb{Z}\) acts on \(S^{1}\) by \(1/n\) rotation. Hence, its leaf space is considered to be a circle which is denoted by \(S^{1}/n=S^{1}/(\mathbb{Z}/n\mathbb{Z})\) and the mapping \[f:L_{n}^{2k-1}\tilde{\times}S^{1}\to S^{1}/n\] to the leaf space restricts to each fiber as an \(n\)-fold covering map. 
Thanks to the following Proposition, essentially due to Haefliger [10] and Nariman [20], it suffices to prove \[(H/\!/S^{1})_{*}\circ i_{*}([L_{n}^{2k-1}])=0\in H_{2k-1}(\wedge B\overline{\Gamma}_{1}/\!/S^{1};\mathbb{Z})\] instead of showing \(i_{*}([L_{n}^{2k-1}])=0\in H_{2k-1}(BG^{\delta};\mathbb{Z})\), where \(H/\!/S^{1}:B\tilde{G}^{\delta}/\!/S^{1}\to\wedge B\overline{\Gamma}_{1}/\!/S^{1}\) is the Borel \(S^{1}\)-quotient map associated with the Mather-Thurston map \(H:B\tilde{G}^{\delta}\to\wedge B\overline{\Gamma}_{1}\) and \(B\tilde{G}^{\delta}/\!/S^{1}\) is replaced with \(BG^{\delta}\) because they are homotopy equivalent. **Proposition 2.10**.: (i) _The Mather-Thurston map \(H:B\tilde{G}^{\delta}\to\wedge B\overline{\Gamma}_{1}\) in Thurston's Theorem 1.1 is \(S^{1}\)-equivariant._ (ii) _The Borel \(S^{1}\)-quotient map_ \[H/\!/S^{1}:B\tilde{G}^{\delta}/\!/S^{1}\to\wedge B\overline{\Gamma}_{1}/\!/S^{1}\] _associated with the Mather-Thurston map \(H\) induces isomorphisms between their homology groups._ Proof of Proposition 2.10 (ii).: We enlarge the Mather-Thurston map \(H\) to \(H:B\tilde{G}^{\delta}\times ES^{1}\to\wedge B\overline{\Gamma}_{1}\times ES^{1}\). Thanks to (i), the enlarged \(H\) is still \(S^{1}\)-equivariant and induces isomorphisms between homologies. \[\begin{CD}S^{1}@>{}>{}>B\tilde{G}^{\delta}\times ES^{1}@>{}>{}>B\tilde{G}^{\delta}/\!/S^{1}\\ @V{}V{}V@V{H}V{\text{homology iso.}}V@V{}V{H/\!/S^{1}}V\\ S^{1}@>{}>{}>\wedge B\overline{\Gamma}_{1}\times ES^{1}@>{}>{}>\wedge B\overline{\Gamma}_{1}/\!/S^{1}\end{CD}\] Comparing the homology Gysin sequences, we see \(H/\!/S^{1}\) induces isomorphisms between homologies because so does \(H\). This applies to the Borel quotient map associated with any \(S^{1}\)-equivariant mapping which induces isomorphisms between homology groups. In order to investigate the cycle \(H/\!/S^{1}\circ i(L_{n}^{2k-1})\), we look at \(H\circ\tilde{i}(L_{n}^{2k-1}\tilde{\times}S^{1})\) and consider where it is located. As \(H\) is the adjoint of the classifying map \[h:B\tilde{G}^{\delta}\times S^{1}\to B\overline{\Gamma}_{1}\] of foliations of codimension one, we look at the foliation on \(B\tilde{G}^{\delta}\times S^{1}\) induced by the projection \(B\tilde{G}^{\delta}\times S^{1}\to B\tilde{G}^{\delta}\) from the one on \(B\tilde{G}^{\delta}\), which gives rise to the universal flat \(S^{1}\)-product structure over \(B\tilde{G}^{\delta}\), and the foliation on \((L_{n}^{2k-1}\tilde{\times}S^{1})\times S^{1}\) induced from the one on \(L_{n}^{2k-1}\tilde{\times}S^{1}\) by the projection \((L_{n}^{2k-1}\tilde{\times}S^{1})\times S^{1}\to L_{n}^{2k-1}\tilde{\times}S^{1}\). The foliation on \((L_{n}^{2k-1}\tilde{\times}S^{1})\times S^{1}\) is \(S^{1}\)-invariant and the leaf space is again identified with \(S^{1}/n\). Therefore it is the pull-back of the point foliation on \(S^{1}/n\) by \[\tilde{f}:(L_{n}^{2k-1}\tilde{\times}S^{1})\times S^{1}\to S^{1}/n\] and the right action of \(S^{1}\) induces the \(n\)-fold covering map on \(S^{1}/n\). Hence the classifying map \(h\circ\tilde{\tilde{i}}\) of this foliation is homotopic to \(\iota\circ\tilde{f}\), where \(\tilde{\tilde{i}}\colon(L_{n}^{2k-1}\tilde{\times}S^{1})\times S^{1}\to B\tilde{G}^{\delta}\times S^{1}\) covers the classifying map \(\tilde{i}\) and \(\iota:S^{1}\to B\overline{\Gamma}_{1}\) denotes the classifying map of the point foliation on \(S^{1}/n\).
Since \(B\overline{\Gamma}_{1}\) is simply connected (in fact \(2\)-connected as mentioned already), there exists a mapping \(\tilde{\iota}:D^{2}\to B\overline{\Gamma}_{1}\) which extends the mapping \(\iota\) on \(\partial D^{2}=S^{1}/n\). Thus we obtain the following homotopy commutative diagram \[\begin{CD}(L_{n}^{2k-1}\tilde{\times}S^{1})\times S^{1}@>{\tilde{f}}>{}>S^{1}/n\subset D^{2}@>{\tilde{\iota}}>{}>B\overline{\Gamma}_{1}\\ @V{}V{}V@V{}V{}V\\ (L_{n}^{2k-1}\tilde{\times}S^{1})\times S^{1}@>{\tilde{\tilde{i}}}>{}>B\tilde{G}^{\delta}\times S^{1}@>{h}>{}>B\overline{\Gamma}_{1}\\ @V{}V{\text{right action}}V@V{}V{\text{right action}}V\\ L_{n}^{2k-1}\tilde{\times}S^{1}@>{\tilde{i}}>{}>B\tilde{G}^{\delta}@>{H}>{}>\wedge B\overline{\Gamma}_{1}(\times ES^{1})\end{CD}\] Then we have the equality \[i_{*}^{\omega}([L_{n}^{2k+1}])\cap\chi=i_{*}^{\omega}([L_{n}^{2k-1}]).\] This follows from the argument in the proof of Theorem 1.13 (i) given below. By the exactness, we can conclude that \[\mu(i_{*}^{\omega}([L_{n}^{2k-1}]))=\tilde{i}_{*}^{\omega}([L_{n}^{2k-1}\tilde{\times}S^{1}])=0\in H_{2k}(B\tilde{G}^{\omega,\delta};\mathbb{Z}).\] The important problem is to determine whether the stronger condition \[i_{*}^{\omega}([L_{n}^{2k-1}])=0\;?\quad\in H_{2k-1}(BG^{\omega,\delta};\mathbb{Z})\] holds or not. _Proof of Theorem 1.13._ First we prove \(({\rm i})\). The restriction of the central extension \(0\to\mathbb{Z}\to\tilde{G}^{\delta}\to G^{\delta}\to 0\) to the subgroup \({\rm SO}(2)_{\rm tor}\cong\mathbb{Q}/\mathbb{Z}\subset G^{\delta}\) is \(0\to\mathbb{Z}\to\mathbb{Q}\to\mathbb{Q}/\mathbb{Z}\to 0\). The Gysin exact sequence of this central extension is given by \[H_{2k+1}(\mathbb{Q};\mathbb{Z})=0\to H_{2k+1}(\mathbb{Q}/\mathbb{Z};\mathbb{Z})\cong\mathbb{Q}/\mathbb{Z}\stackrel{{\cap i_{0}^{*}\chi}}{{\to}}H_{2k-1}(\mathbb{Q}/\mathbb{Z};\mathbb{Z})\cong\mathbb{Q}/\mathbb{Z}\to H_{2k}(\mathbb{Q};\mathbb{Z})=0\] where \(i_{0}:{\rm SO}(2)_{\rm tor}\subset G^{\delta}\) denotes the inclusion. Therefore the homomorphism \(\cap i_{0}^{*}\chi\) is an isomorphism for all \(k\). Let \(0\to\mathbb{Z}\to\tilde{\Gamma}\to\Gamma\to 0\) be the restriction of the central extension to \(\Gamma\subset G^{\delta}\). Then we have the following commutative diagram between the Gysin exact sequences \[\begin{CD}0@>{}>{}>H_{2k+1}(\mathbb{Q}/\mathbb{Z};\mathbb{Z})\cong\mathbb{Q}/\mathbb{Z}@>{\cap i_{0}^{*}\chi}>{}>H_{2k-1}(\mathbb{Q}/\mathbb{Z};\mathbb{Z})\cong\mathbb{Q}/\mathbb{Z}@>{}>{}>0\\ @V{}V{}V@V{i_{*}}V{}V@V{i_{*}}V{}V@V{}V{}V\\ H_{2k+1}(\tilde{\Gamma};\mathbb{Z})@>{}>{}>H_{2k+1}(\Gamma;\mathbb{Z})@>{\cap i_{\Gamma}^{*}\chi}>{}>H_{2k-1}(\Gamma;\mathbb{Z})@>{}>{}>H_{2k}(\tilde{\Gamma};\mathbb{Z})\end{CD}\] where \(i_{\Gamma}:\Gamma\subset G^{\delta}\) denotes the inclusion. Hence, if the homomorphism \(i_{*}\) on the right-hand side is injective (resp. non-trivial), so is the left-hand side as well. The claim follows from this. Next we prove \(({\rm ii})\). By the assumption that \(\chi^{k}=0\in H^{2k}(\Gamma;\mathbb{Q})\), we can define the secondary class \(\widehat{\chi^{k}}\in H^{2k-1}(\Gamma;\mathbb{Q}/\mathbb{Z})\) which is well defined modulo the image of the natural homomorphism \(H^{2k-1}(\Gamma;\mathbb{Q})\to H^{2k-1}(\Gamma;\mathbb{Q}/\mathbb{Z})\) (see Definition 3.1 for details).
On the other hand, \(i_{0}^{*}\chi^{k}=0\in H^{2k}({\rm SO}(2)_{\rm tor};\mathbb{Q})\) so that we have also the secondary class \(\widehat{i_{0}^{*}\chi^{k}}\in H^{2k-1}({\rm SO}(2)_{\rm tor};\mathbb{Q}/ \mathbb{Z})\). By the naturality of the secondary class, this class is equal to \(i^{*}\widehat{\chi^{k}}\). Furthermore, all the indeterminacy coming from the rational cohomology vanishes here while the essential part remains so that this class is uniquely defined and it gives an isomorphism \[\cap i^{*}\widehat{\chi^{k}}:H_{2k-1}({\rm SO}(2)_{\rm tor};\mathbb{Z})\cong \mathbb{Q}/\mathbb{Z}\stackrel{{\cong}}{{\longrightarrow}} \mathbb{Q}/\mathbb{Z}.\] Then the required claim follows because of the following identity \[\langle i_{*}(H_{2k-1}({\rm SO}(2)_{\rm tor};\mathbb{Z})),\widehat{\chi^{k}} \rangle=\langle H_{2k-1}({\rm SO}(2)_{\rm tor};\mathbb{Z}),i^{*}\widehat{\chi^ {k}}\rangle\] together with the claim of \(({\rm i})\). **Remark 2.12**.: Here we give a sketch of proof of the following fact. If \(B\overline{\Gamma}_{1}\) were an Eilenberg MacLane space \(K(\mathbb{R},3)\), then we can compute \(H_{*}(B\mathrm{Diff}_{+}^{\delta}S^{1};\mathbb{Z})\). In particular, it has no torsion. More precisely, we show the following. Let \[\mathrm{GV}:B\overline{\Gamma}_{1}\to K(\mathbb{R},3)\] be the classifying map for the Godbillon-Vey class in \(H^{3}(B\overline{\Gamma}_{1};\mathbb{R})\) and let \[\wedge\mathrm{GV}:\wedge B\overline{\Gamma}_{1}\to\wedge K(\mathbb{R},3) \tag{8}\] be the associated map between free loop spaces induced by \(\mathrm{GV}\). Now we see that the extension class of the fibration \[\Omega K(\mathbb{R},3)=K(\mathbb{R},2)\to\wedge K(\mathbb{R},3)\to K(\mathbb{ R},3)\] defined in \(H^{3}(K(\mathbb{R},3);\pi_{2}(K(\mathbb{R},2)))\cong\mathrm{Hom}_{\mathbb{Z}}( \mathbb{R},\mathbb{R})\) is trivial because this fibration has a section. Therefore \(\wedge K(\mathbb{R},3)\) is homotopy equivalent to the product \(K(\mathbb{R},2)\times K(\mathbb{R},3)\). If we put this into the map (8), then we obtain a mapping \[\wedge\mathrm{GV}:\wedge B\overline{\Gamma}_{1}\to K(\mathbb{R},2)\times K( \mathbb{R},3).\] This corresponds to the two cohomology classes \(\alpha\in H^{2}(B\widetilde{\mathrm{Diff}}_{+}^{\delta}S^{1};\mathbb{R})\) (the Godbillon-Vey class integrated along the fibers) and \(\beta\in H^{3}(B\widetilde{\mathrm{Diff}}_{+}^{\delta}S^{1};\mathbb{R})\) (the Godbillon-Vey class) under the isomorphism \(H^{*}(B\widetilde{\mathrm{Diff}}_{+}^{\delta}S^{1};\mathbb{R})\cong H^{*}( \wedge B\overline{\Gamma}_{1};\mathbb{R})\) induced by Thurston's Theorem 1.1. Therefore the mapping (8) induces a homomorphism \[(\wedge GV)_{*}:H_{*}(\wedge B\overline{\Gamma}_{1};\mathbb{Z})\to H_{*}(K( \mathbb{R},2)\times K(\mathbb{R},3);\mathbb{Z})\cong S^{*}_{\mathbb{Z}}( \mathbb{R})\otimes_{\mathbb{Z}}\wedge^{*}_{\mathbb{Z}}(\mathbb{R}).\] Here in the last term, \(S^{k}_{\mathbb{Z}}(\mathbb{R})\) (resp. \(\wedge^{k}_{\mathbb{Z}}(\mathbb{R})\)) denotes the \(k\)-th symmetric power over \(\mathbb{Z}\) (resp. \(k\)-th exterior power over \(\mathbb{Z}\)) of \(\mathbb{R}\) which is considered as a \(\mathbb{Q}\)-vector space. We remark that the operation over \(\mathbb{Z}\) is the same as that over \(\mathbb{Q}\) because \(\mathbb{R}\) is a uniquely divisible group. 
Also the degree of the generator \(S^{1}_{\mathbb{Z}}(\mathbb{R})=\mathbb{R}\) of \(S^{*}_{\mathbb{Z}}(\mathbb{R})\) is \(2\) while the degree of the generator \(\wedge^{1}_{\mathbb{Z}}(\mathbb{R})=\mathbb{R}\) of \(\wedge^{*}_{\mathbb{Z}}(\mathbb{R})\) is \(3\) (see [19] for more details of this computation). Now we consider the Borel constructions on each space of the mapping (8). Then we obtain a morphism of \(S^{1}\)-fibrations. \[\begin{CD}S^{1}@>{}>{}>S^{1}@>{}>{}>S^{1}\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ B\tilde{G}^{\delta}@>{H}>{}>\wedge B\overline{\Gamma}_{1}@>{}>{}>\wedge K(\mathbb{R},3)\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ B\tilde{G}^{\delta}/S^{1}=BG^{\delta}@>{H/\!/S^{1}}>{}>\wedge B\overline{\Gamma}_{1}/\!/S^{1}@>{}>{}>\wedge K(\mathbb{R},3)/\!/S^{1}.\end{CD} \tag{9}\] This induces a homomorphism \[H_{*}(BG^{\delta};\mathbb{Z})\to H_{*}^{S^{1}}(\wedge B\overline{\Gamma}_{1};\mathbb{Z})\to H_{*}^{S^{1}}(\wedge K(\mathbb{R},3);\mathbb{Z}) \tag{10}\] which would be an isomorphism if \(B\overline{\Gamma}_{1}\) were \(K(\mathbb{R},3)\). In general, it is an important and extremely difficult problem to determine the kernel and cokernel of this homomorphism, both in the \(C^{\infty}\) and real analytic categories. Now we can determine \(H^{S^{1}}_{*}(\wedge K(\mathbb{R},3);\mathbb{Z})\) as follows. \[H^{S^{1}}_{2k}(\wedge K(\mathbb{R},3);\mathbb{Z})\cong\mathbb{Z}\oplus Q_{2k}\] \[H^{S^{1}}_{2k+1}(\wedge K(\mathbb{R},3);\mathbb{Z})\cong Q_{2k+1}\] where each \(Q_{k}\) is a certain \(\mathbb{Q}\)-vector space which can be described explicitly. For example, the first several terms are given as follows. \[Q_{1}=0,\ Q_{2}=\mathbb{R},\ Q_{3}=0,\ Q_{4}=S^{2}_{\mathbb{Z}}(\mathbb{R}),\ Q_{5}=\wedge^{2}_{\mathbb{Z}}(\mathbb{R}),\] \[Q_{6}=S^{3}_{\mathbb{Z}}(\mathbb{R}),\ Q_{7}=S^{2,1}(\mathbb{R}),\ Q_{8}=S^{4}_{\mathbb{Z}}(\mathbb{R})\oplus\wedge^{3}_{\mathbb{Z}}(\mathbb{R}),\ Q_{9}=S^{3,1}(\mathbb{R}),\] \[Q_{10}=S^{5}_{\mathbb{Z}}(\mathbb{R})\oplus S^{2,1,1}(\mathbb{R}),\ Q_{11}=S^{4,1}(\mathbb{R})\oplus S^{2,1,1}(\mathbb{R}),\cdots\] Here the symbol \(S^{k_{1},k_{2},\cdots}(\mathbb{R})\) denotes the \(\mathbb{Q}\)-vector space obtained by applying the Schur functor \(S^{k_{1},k_{2},\cdots}\)\((k_{1}\geq k_{2}\geq\cdots)\) on the \(\mathbb{Q}\)-vector space \(\mathbb{R}\). This is a purely homotopy theoretical computation using the Gysin sequence applied to the right-most \(S^{1}\)-fibration of the diagram (9) together with the homology computation of the associated fibration \[\wedge K(\mathbb{R},3)\rightarrow\wedge K(\mathbb{R},3)/\!/S^{1}\to BS^{1}.\] However, it is easier to do this computation if we keep in mind geometric properties of the characteristic classes \(\chi\) and \(\alpha,\beta\) as well as Schur functors in representation theory (see [4]). Here we omit the details (see §4 Appendix B for the first several cases of computations). ## 3. Appendix: secondary classes Here we briefly describe definitions of secondary classes associated with vanishing of powers of the rational Euler class. We consider the same situation as in the setting of Theorem 1.13. Thus let \(\Gamma\subset\operatorname{Diff}_{+}^{\delta}S^{1}\) be any subgroup containing \(\operatorname{SO}(2)\). Assume that \(\chi^{k}=0\in H^{2k}(\Gamma;\mathbb{Q})\). Then we can define two secondary classes \[\widehat{\chi^{k}}\in H^{2k-1}(\Gamma;\mathbb{Q}/\mathbb{Z})\] \[T\chi^{k}\in H^{2k-1}(\tilde{\Gamma};\mathbb{Q})\] as follows.
First, choose \[c\in Z^{2}(B\mathrm{Diff}_{+}^{\delta}S^{1};\mathbb{Z})\quad\text{such that}\quad\delta c=0\quad\text{and}\quad[c]=\chi\] \[b\in C^{1}(\widetilde{B\mathrm{Diff}}_{+}^{\delta}S^{1};\mathbb{Z})\quad\text{such that}\quad p^{*}c=\delta b.\] These choices are essentially unique (i.e. unique modulo exact cochains), because we can choose these cochains at the level of \(B\mathrm{Diff}_{+}S^{1}\) and \(\widetilde{B\mathrm{Diff}}_{+}S^{1}\), where \(H^{2}(B\mathrm{Diff}_{+}S^{1};\mathbb{Z})\) is isomorphic to \(\mathbb{Z}\) and \(\widetilde{B\mathrm{Diff}}_{+}S^{1}\) is contractible. **Definition 3.1**.: \((\mathrm{i})\) By the assumption \(\chi^{k}=0\in H^{2k}(\Gamma;\mathbb{Q})\), there exists an element \(a\in C^{2k-1}(\Gamma;\mathbb{Q})\) such that \(c^{k}|_{\Gamma}=\delta a\). Let \(\bar{a}\in C^{2k-1}(\Gamma;\mathbb{Q}/\mathbb{Z})\) be the projection of \(a\) under the coefficient projection \(\mathbb{Q}\to\mathbb{Q}/\mathbb{Z}\). Then we have \(\delta\bar{a}=\bar{c}^{k}=0\). Define \[\widehat{\chi^{k}}=[\bar{a}]\in H^{2k-1}(\Gamma;\mathbb{Q}/\mathbb{Z}),\] which is well-defined modulo \[\mathrm{Image}[H^{2k-1}(\Gamma;\mathbb{Q})\to H^{2k-1}(\Gamma;\mathbb{Q}/\mathbb{Z})].\] \((\mathrm{ii})\) Under the same condition as above, we have \[\delta(p^{*}a-(b\,p^{*}c^{k-1})|_{\Gamma})=p^{*}(c^{k})|_{\Gamma}-(p^{*}c\cup p^{*}c^{k-1})|_{\Gamma}=0.\] Define \[T\chi^{k}=[p^{*}a-(b\,p^{*}c^{k-1})|_{\Gamma}]\in H^{2k-1}(\tilde{\Gamma};\mathbb{Q}),\] which is well-defined modulo \[\mathrm{Image}[H^{2k-1}(\Gamma;\mathbb{Q})\to H^{2k-1}(\tilde{\Gamma};\mathbb{Q})].\]
2309.08503
HealthFC: Verifying Health Claims with Evidence-Based Medical Fact-Checking
In the digital age, seeking health advice on the Internet has become a common practice. At the same time, determining the trustworthiness of online medical content is increasingly challenging. Fact-checking has emerged as an approach to assess the veracity of factual claims using evidence from credible knowledge sources. To help advance automated Natural Language Processing (NLP) solutions for this task, in this paper we introduce a novel dataset HealthFC. It consists of 750 health-related claims in German and English, labeled for veracity by medical experts and backed with evidence from systematic reviews and clinical trials. We provide an analysis of the dataset, highlighting its characteristics and challenges. The dataset can be used for NLP tasks related to automated fact-checking, such as evidence retrieval, claim verification, or explanation generation. For testing purposes, we provide baseline systems based on different approaches, examine their performance, and discuss the findings. We show that the dataset is a challenging test bed with a high potential for future use.
Juraj Vladika, Phillip Schneider, Florian Matthes
2023-09-15T16:05:48Z
http://arxiv.org/abs/2309.08503v2
# HealthFC: Verifying Health Claims with Evidence-Based ###### Abstract In the digital age, seeking health advice on the Internet has become a common practice. At the same time, determining the trustworthiness of online medical content is increasingly challenging. Fact-checking has emerged as an approach to assess the veracity of factual claims using evidence from credible knowledge sources. To help advance automated NLP solutions for this task, in this paper, we introduce a novel dataset in German and English of 750 health-related claims, labeled for veracity by medical experts and backed with evidence from clinical trial studies. We provide an analysis of the dataset, highlighting its characteristics and challenges. The dataset can be used for NLP tasks related to automated fact-checking, such as evidence retrieval, claim verification, or explanation generation. For testing purposes, we provide baseline systems based on different approaches, examine their performance, and discuss the findings. We show that the dataset is a challenging test bed with a high potential for future use. ## 1 Introduction Health can be defined as "a state of complete physical, mental, and social well-being" and is a popular point of discussion both in everyday life and in online spaces. The Internet has made seeking information about personal and public health easier than ever before. Many people have turned to online blogs and news portals as a source of evidence regarding health-related inquiries. According to a report released by the Pew Research Center (Fox and Duggan, 2013), over one-third of American adults have searched online for medical conditions that they might have, and they first consult the Internet before deciding if they should visit a medical professional. With the increasing volume of new data generated daily and the rapid speed at which information is propagated in digital media, keeping track of trustworthy sources has become challenging. This has facilitated the spread of misinformation - content that is usually false, misleading, or not backed by any relevant knowledge sources. In the period of the COVID-19 pandemic, medical misinformation has led people to turn to unsafe drugs and unproven treatments (Pennycook et al., 2020; Zarocostas, 2020). The challenge of seeking credible health-related information is further amplified by the advent of digital health assistants and generative language models, which have the ability to generate eloquent responses for any input query, yet are prone to "hallucinating" knowledge or omitting important details (Ji et al., 2023). The usual way for biomedical researchers to test their hypotheses related to human health is by conducting _clinical trials_. Clinical trials are carefully designed research studies that seek to investigate the efficacy and safety of biomedical or behavioral interventions in human subjects, which may in \begin{table} \begin{tabular}{l} \hline \hline **Claim:** Can regular intake of vitamin C prevent colds? \\ \hline **Document:** The recommendation to take high-dose vitamin C at the first signs of a cold cannot be confirmed by studies. If cough, sniffing or sore throat are already present, vitamin C does not seem to have any detectable effect. The daily requirement for the vitamin is about 100 milligrams, with the recommendations slightly fluctuating [2,3]. This amount is contained in an apple, half a pepper or two tomatoes [4]. (...) \\ \hline **Verdict: Refuted** \\ \hline \hline **Claim:** Does melatonin help against jet lag? 
\\ \hline **Document:** This sounds plausible at first because melatonin plays an important role in sleep-wake rhythm [4]. We have found an overview of ten individual studies [1] and a newer individual study [2]. At random, the test subjects received melatonin or a dummy medication. Overall, the studies show that melatonin may help better against jet lag than a sham drug. (...) \\ \hline **Verdict: Supported** \\ \hline \hline \end{tabular} \end{table} Table 1: Example of two claims from HealthFC with a snippet of evidence documents and verdicts. Manually annotated evidence sentences are highlighted in violet. clude novel treatments such as vaccines, drugs, dietary supplements, medical devices, or known interventions that require further examination (Piantadosi, 2017). When performed with high standards, clinical trials serve as a high-quality and trustworthy expert-curated source of evidence for health-related decisions. Multiple clinical trials related to the same topic are commonly combined into a _systematic review_. These reviews serve as a medical artifact providing guidelines concerning treatments and medical decisions with varying levels of evidence and strength of recommendation (Sekhon et al., 2017). Fact-checking is the task of assessing factual claims that are contested, using relevant evidence from credible knowledge sources. It is a time-consuming task that is still usually performed manually by dedicated experts in journalism (Guo et al., 2022). Recently, solutions based on Machine Learning (ML) and Natural Language Processing (NLP) have been developed to automate parts of the fact-checking process. Considering the complexity of the task, current solutions are still far from achieving human-level performance. Still, they can be used to assist human fact-checkers in their work, such as discovering evidence (Nakov et al., 2021). While multiple datasets for automated fact-checking of health-related and biomedical claims have been constructed in recent years, none of them use clinical studies as their primary source of knowledge on determining a claim's veracity. This is a major drawback considering the importance of clinical trials and systematic reviews in making health-related decisions in medicine. Furthermore, most datasets provide only top-level labels like "true" and "false" with no information regarding the level of evidence and certainty in the label. Finally, virtually all datasets contain claims solely in English. To address these research gaps, in this paper, we present the following contributions: 1. We introduce HealthFC, a constructed bilingual German and English dataset, featuring \(750\) health-related claims and richly annotated data. This includes veracity labels from a team of medical experts, level of evidence, and explanatory documents written in lay language describing clinical studies used for assessment. We additionally provide manually annotated evidence sentences from documents. The dataset enables testing various NLP tasks related to automated fact-checking. 2. We develop diverse baseline systems to benchmark the performance of evidence selection and verdict prediction on the dataset and describe the findings and challenges. 3. We provide additional insight and experiments related to different evidence sources and levels of evidence in the verification process. We provide the dataset in a public GitHub repository.1 The data is free to use for research purposes. 
Footnote 1: [https://github.com/jvladika/HealthFC/](https://github.com/jvladika/HealthFC/) ## 2 Related Work ### Medical NLP Tasks Healthcare is a popular application domain in artificial intelligence and natural language processing. The complexity of language found in sources like biomedical publications and clinical trial reports makes it a challenging domain to work with. To overcome these obstacles, general-purpose NLP models are pre-trained and fine-tuned on domain-specific biomedical and scientific texts. This includes models like SciBERT (Beltagy et al., 2019) and BioBERT (Lee et al., 2020). Biomedical NLP tasks include a wide array of common NLP tasks, such as natural language inference (Romanov and Shivade, 2018), named entity recognition (Zhao et al., 2019), dialogue systems (Zeng et al., 2020), or text summarization (Abacha et al., 2021). A knowledge-intensive NLP task related to fact-checking is _question answering_ (QA). In particular, _biomedical question answering_ can be divided into four groups: scientific, clinical, examination, and consumer health (Jin et al., 2022). The first three target questions helping medical professionals and researchers conduct work. Our work is mostly similar to consumer-health QA, where the goal is to help the general population seek medical advice, and the produced answers should be consumer-understandable (Demner-Fushman et al., 2019). In QA, the task is to answer a specific question, while our dataset more intuitively belongs to automated fact-checking since it assesses claim veracity. ### Medical Fact-Checking Numerous datasets for automated fact-checking have been released in recent years (Guo et al., 2022; Vladika and Matthes, 2023). Most of these datasets are related to society, politics, and general online rumors. Examples include MultiFC (Augenstein et al., 2019) or the Snopes dataset (Hanselowski et al., 2019), where the authors leveraged existing claims and explanations from professional fact-checking platforms to construct the dataset. Such an approach was also followed by us, focusing on health claims. Most fact-checking datasets contain claims and evidence solely written in English. Only other dataset we found with some claims in German is the multilingual dataset X-Fact (Gupta and Srikumar, 2021), which focuses on challenges in cross-lingual transfer for automated fact-checking. Datasets with biomedical and health-related claims started emerging since 2020 due to online content related to the pandemic of COVID-19. These datasets differ with respect to their primary source of claims and evidence. Datasets like SciFact (Wadden et al., 2020) and HealthVer (Sarrouti et al., 2021) feature expert-written claims stemming from biomedical research publications and user search queries, respectively. Both of them pair the claims with abstracts of scientific publications that provide evidence for assessing the claim. On the other hand, datasets COVID-Fact (Saakyan et al., 2021) and CoVERT (Mohr et al., 2022) take social media posts to gather the claims, the former pairing claims from Reddit with accompanying evidence articles, and the latter taking causative biomedical claims from Twitter posts paired with manually annotated evidence documents from Google search results. Most similar to our dataset in terms of construction is PubHealth (Kotonya and Toni, 2020), a dataset of claims about public health for explainable automated fact-checking. 
It uses news titles of articles from dedicated fact-checking websites as claims and accompanying article text as the evidence source. Still, the dataset is relatively noisy since the news titles often do not make a factual and an atomic claim. To the best of our knowledge, HealthFC is the first dataset for medical fact-checking to use clinical trials and systematic reviews as its main source of evidence. More precisely, it utilizes knowledge from clinical studies as its initial source and presents it understandably for everyday users in a form of articles. Furthermore, this is the only dataset of health claims to feature the strength of found evidence in its labels and a short explanation paragraph for every verdict decision. It also covers a wider variety of topics concerning all segments of human health, when compared to other datasets focusing only on COVID-19-related claims (HealthVer, COVID-Fact, CoVERT). ## 3 Dataset Construction ### Data Source The dataset was constructed from the publicly available data on the web portal _Medizin Transparent_.2 It is a project by the team of Cochrane Austria. Cochrane is an international charitable organization formed to organize medical research findings to facilitate evidence-based decisions about health interventions involving health professionals, patients, and policymakers. Footnote 2: [https://medizin-transparent.at/](https://medizin-transparent.at/) The team of Medizin Transparent uses a systematic approach to perform fact-checking. The process usually starts with a user inquiry regarding a health-related issue. In addition, health claims that are currently trending on popular news portals are considered as well. Then, this inquiry is formed to a precisely defined question that is used to search through several research databases dealing with biomedical research, where relevant studies are manually filtered down. The preference as a primary source is given to systematic reviews since they present a comprehensive synthesis of results on a research topic in previously published studies. If no systematic reviews are available, the conclusions are drawn from as many informative individual studies as possible to make the best-informed decision. The narrowed-down studies are assessed with regard to quality and significance using previously defined criteria, ensuring the trustworthiness and consistency of the sources. The quality of studies is checked by at least two people from the project's scientific team. The results are summarized by an author, checked by a medical professional, and described in a comprehensible and easily understandable way. We constructed a scraping project with the Python library _Scrapy_ and collected all the text from the articles. Because the crawled articles from the portal are exclusively written in German, we translated them into English to provide a wider reach and alignment with similar datasets in English. Claims and explanations of the verdicts were translated with the DeepL API.3 For the article text, DeepL could not be used due to the limitations of the free API version. Instead, we translated the longer document texts with the Opus-MT library (Tiedemann and Thottingal, 2020), an open-source tool with a proven record of generating translations of high quality. 
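For readers who want to reproduce or adapt this translation step, the Opus-MT German-to-English model can be run locally through the MarianMT wrapper in HuggingFace `transformers`. The snippet below is only a rough sketch of such a step, not the exact script used to build the dataset; the batching choices and the example sentence are illustrative.

```python
# Minimal sketch of the document-translation step: German article text is
# translated into English with the Opus-MT de-en model, run locally through
# the MarianMT wrapper of HuggingFace transformers. Illustrative only; the
# released dataset additionally used DeepL for claims/explanations and
# manual post-editing of the output.
from transformers import MarianMTModel, MarianTokenizer

MODEL_NAME = "Helsinki-NLP/opus-mt-de-en"
tokenizer = MarianTokenizer.from_pretrained(MODEL_NAME)
model = MarianMTModel.from_pretrained(MODEL_NAME)

def translate(sentences):
    """Translate a batch of German sentences into English."""
    batch = tokenizer(sentences, return_tensors="pt", padding=True, truncation=True)
    generated = model.generate(**batch)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)

print(translate(["Hilft Melatonin gegen Jetlag?"]))
```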
All the translated articles were read by the authors during the evidence annotation process and any spotted mistakes in translation were manually corrected by the authors who are native German and fluent English speakers, to ensure high quality of the provided text. ### Claims and Labels The two main components of the HealthFC dataset are claims and evidence documents. Each of the 750 claims is paired with a single evidence document. These evidence documents come directly from the fact-checking portal and were written and proofread by the portal's medical team. The claim veracity labels also come directly from the medical experts. One specific aspect of the veracity labels in our dataset is that, on top of providing a positive ("true") or negative ("false") label, there is additionally a three-point scale denoting the _level of evidence_. This refers to how strong the findings from clinical trials were and how certain is the veracity label of the claim based on available evidence. The medical team follows internal guidelines on determining which of the three scores of level of evidence to assign to each claim. Following other common fact-checking datasets, we map all the veracity labels to three final high-level verdicts: _supported_, _refuted_, and _not enough information_ (NEI). On top of these three labels, there is also a label for the aforementioned level of evidence, in case of the _supported_ and _refuted_ claims. In some other datasets, like SciFact, the NEI label signifies that no relevant evidence documents related to the claim are present in the dataset. On the other hand, claims labeled with NEI in our dataset will always be paired with an evidence document. This evidence document usually reports how no relevant clinical studies were found in academic databases, or those found are lacking in quality, and therefore a reliable verdict on claim's veracity cannot be made. ### Evidence Annotation Even though the evidence documents for each claim contain enough information to make a final verdict, not everything in the documents is relevant - they often contain background information that is interesting for readers, but not necessary to make a final decision. Hence, we decided to annotate individual sentences that provide evidence (rationale) for making a final verdict on claim's veracity. Two authors served as annotators. They followed a systematic annotation process by first reading the claim along with its stated verdict and then the full article. All sentences in the article were split automatically using a sentence tokenizer from NLTK. The task was to select only those sentences that make a statement on claim's veracity. The maximum number of sentences to be selected as rationales was capped at \(5\). It was empirically determined that rarely are more than \(5\) sentences needed to make a verdict and this number follows the convention of other fact verification datasets like FEVER (Thorne et al., 2018) and COVID-Fact (Saakyan et al., 2021). The annotators held regular meetings to discuss and resolve any uncertainties during the labeling process. In order to assess the inter-annotator agreement in the labeling process, \(50\) evidence documents (\(6.7\%\)) were selected for mutual annotation by both authors. The Cohen's \(\kappa\) coefficient (Cohen, 1960) was determined to be \(0.72\). Cohen suggested interpreting the values of \(\kappa\) between \(0.61\) and \(0.80\) as substantial agreement. This is comparable to Cohen's \(\kappa\) of \(0.70\) in Hanselowski et al. 
(2019) and \(0.71\) in Wadden et al. (2020), as well as Fleiss' \(\kappa\) coefficient (Fleiss, 1971) of \(0.68\) in Thorne et al. (2018) and \(0.74\) in Hu et al. (2022). ## 4 Dataset Description In this chapter, we will provide descriptive analytics and statistics of the dataset, and outline some specific characteristics and challenges. ### General Overview of Dataset The HealthFC dataset consists of 750 scientifically fact-checked health claims and evidence articles. Our dataset is available in both English and German. Figure 1 shows the number of yearly published articles over the project's time span. The plotted distribution reveals a significant increase in the number of articles per year, peaking in 2016 with 105 articles. After that, around 80 to 90 claims were fact-checked annually, until the number dropped again in 2022. Since articles can get outdated with time as new clinical studies are published, the team periodically checks all claims and updates the verdicts as appropriate. Therefore, the knowledge is kept up-to-date with latest developments and this is reflected in our dataset. The dataset covers a diverse range of health topics, encompassing many subdomains. Listing all covered topics would go beyond the scope of this brief dataset description. To gain insight into the most frequently covered subdomains, a subset of the top ten topics is visualized in Figure 2. The chart depicts the relative share of fact-checks among the top ten topics. It is evident that inquiries about eating habits are most popular, since the topics dietary supplements and nutrition account respectively for 18% and 15%. Dietary topics are a complex topic for which an abundance of health advice can be found on the Internet. The third most popular topic is the immune system. It plays a vital role in bodily defense and is thus responsible for many health conditions. Other prominent subdomains focus on specific body systems, such as the respiratory, musculoskeletal, or cardiovascular systems. Alternative and complementary medicine covers non-traditional forms of healing. Apart from general health domains that remain consistent over time, the dataset also contains topics that depend on current trends and events. One such topic is COVID-19, which has dominated the news and public health discussions since its outbreak in 2019. The COVID-19 pandemic has impacted people's health and well-being worldwide. It has highlighted the importance of online resources as a primary information medium for people seeking health and medical advice. ### Descriptive Statistics of Dataset The HealthFC dataset comprises health claims, evidence articles, verdicts, and manually annotated evidence sentences that support the final verdict. Each evidence article also contains an explanation paragraph that serves as a short summary of the article and a justification of the verdict. Table 2 summarizes descriptive statistics of the English texts in our provided dataset. It includes the mean, standard deviation, minimum, and maximum values for various aspects of the dataset. An interesting observation is that the word count of explanations has a mean of \(40.0\) and ranges from only 7 up to 103 words. On the other hand, the word counts of the manually selected evidence sentences are almost twice as high. This demonstrates that, despite limiting the evidence sentences to a maximum of five, the original explanations (summaries) of the verdicts are more concise. 
These short explanatory summaries could be used for the task of explanation generation in future work. The absolute frequency of three verdict labels (refuted, supported, NEI) for all 750 articles is presented in Figure 3. The distribution in the chart is highly skewed towards the NEI verdict, being the majority class with 423 articles. This is to be expected due to the complex nature of research in the health field. Health claims are often subject to ongoing research and clinical trials, and there may not be enough evidence available at the time of the assessment to determine whether a claim is true or false. The number of articles where claims are supported (202) is less than half of those in the NEI class, while the fewest belong to the refuted (125) category. Supported and refuted claims additionally have levels of evidence provided on a three-point scale, which refer to the frequency \begin{table} \begin{tabular}{l c c c c} \hline \hline **Aspect** & \(\mu\) & \(\sigma\) & **Min** & **Max** \\ \hline No. of evidence sent. & 3.4 & 1.2 & 1 & 5 \\ No. of all sentences & 59.0 & 25.3 & 16 & 168 \\ Words: articles & 857 & 369 & 244 & 2677 \\ Words: explanations & 40.0 & 18.3 & 7 & 103 \\ Words: evidence sent. & 76.6 & 32.2 & 15 & 189 \\ \hline \hline \end{tabular} \end{table} Table 2: Quantitative statistics of the dataset Figure 1: Number of collected health fact-check articles by year of publication. Figure 2: Distribution of the top ten most popular health topics in the collected dataset. and strength of discovered evidence clinical studies related to the claim. The distribution of the level of evidence is also skewed towards lower levels. This demonstrates once again the challenge of making decisive health assessments. ## 5 Baselines In this chapter, we will introduce the problem statement and describe baseline systems used to benchmark the performance of tasks of evidence selection and veracity prediction on the dataset. We experimented with two types of systems (pipeline and joint) and four different base language models. ### Problem Statement The process of automated fact-checking in our dataset consists of two major components: evidence selection and veracity prediction. **Evidence selection.** A binary classification task, where given a claim \(c\) and an evidence document consisting of \(n\) sentences \(s_{1},s_{2},...,s_{n}\), the task is to train a model that predicts \(z_{i}=\mathbf{1}[s_{i}\) is an evidence sentence]. **Veracity prediction.** A ternary classification task, where for a given claim \(c\) and \(k\) previously selected evidence sentences \(e_{1},e_{2},...,e_{k}\), the goal is to predict one of the three classes of the final verdict: _supported_, _refuted_, _not enough info (NEI)_. ### Pipeline Systems The intuitive approach is to develop two separate models - one for evidence selection and another for veracity prediction. The evidence sentences selected by the first model are used as input for veracity prediction in the next step. It is also common to use the same underlying base model in both steps and fine-tune it for these two different tasks (DeYoung et al., 2020). Each candidate sentence \(s_{i}\) from the document is concatenated with the claim \(c\) to obtain candidate sequences in the form of \(a_{i}=[s_{i};SEP;c]\). Each sequence is encoded with a base language model to obtain their dense representation: \(h_{i}=BERT(a_{i})\). 
This representation is then fed to the classifier model Multi-Layer Perceptron (MLP) that assigns the probabilities on the candidate sentence being evidence: \(p_{i},\,\bar{p}_{i}=softmax(MLP(h_{i}))\). Finally, a selection function proclaims sentences with a probability over a threshold (which we fix at \(0.5\)) to be evidence sentences: \(z_{i}=p_{i}>0.5\). In the end, the model selected \(k\) final evidence sentences \(e_{1},e_{2},...,e_{k}\) as input for the next step. The task of veracity prediction is commonly modeled in automated fact-checking as the established task Natural Language Inference (NLI), or more specifically, Recognizing Textual Entailment (RTE), which aims to infer the logical relation (entailment/contradiction/neutral) between a hypothesis and a relation. In our case, the hypothesis is the claim \(c\), and the premise is a concatenation of evidence sentences \(e=[e_{1};e_{2};...;e_{k}]\). These two are concatenated as \(x=[c;SEP;e]\) and embedded as \(w=BERT(x)\). The final model for sequence classification has to learn the function \(\hat{y}(c;e)=softmax(MLP(w))\), which is the probability of each veracity label for the claim \(c\) given evidence \(e\). The class with the highest probability score is selected as the final verdict \(v(c;e)=argmax(y)\). ### Joint Systems Another approach is a system that jointly learns both the tasks of evidence retrieval and veracity prediction. This type of training leverages multi-task learning (MTL) and is beneficial because of data efficiency, reduced overfitting, and faster learning with auxiliary information (Crawshaw, 2020). This is achieved by modeling a unified representation of the claim and the document used for both tasks and a joint loss function that combines the evidence selection loss and veracity prediction loss. The claim \(c\) is concatenated together with all of the sentences \(s_{1},s_{2},...,s_{n}\) in the document to obtain a claim+document sequence \(seq=[c;SEP;s_{1};SEP;s_{2};...;SEP;s_{n}]\).4 This se Figure 3: Evidence level count by verdict label. quence is embedded as \(h=BERT(seq)=[h_{c};SEP;h_{s_{1}};...;SEP;h_{s_{n}}]\). The representation of each candidate sentence \(h_{s_{i}}=[h_{w_{1}},h_{w_{2}},...,h_{w_{m}}]\) is singled out from the initial representation and passed to a binary linear classifier that calculates the probabilities of the sentence being evidence: \(p_{i},\,\bar{p}_{i}=softmax(MLP(h_{s_{i}}))\). Those sentences that are above the \(0.5\) threshold are selected and used to form the final claim+evidence representation \(h_{f}=[h_{c};h_{e_{1}},h_{e_{2}},...,h_{e_{k}}]\). This representation is given to a ternary classifier that predicts the verdict \(v=argmax(softmax(MLP(h_{f})))\). Footnote 1: [https://github.com/google-learning/](https://github.com/google-learning/) ### Encoding models To encode the text, we experimented with a number of underlying base models that we found representative of different aspects we wanted to test. BERT Devlin et al. (2019) is used as the representative vanilla pre-trained language model (PLM), which gives a good initial insight into the performance of PLMs on the dataset. BioBERT Lee et al. (2020), an extension of BERT that was fine-tuned to abstracts of biomedical scientific publications, is used to check whether the medical terminology and relations it learned will help assess the claims in this dataset. Additionally, DeBERTa-v3 He et al. 
(2021), an improvement of BERT with enhanced training procedure based on disentangled attention, was chosen because it has proven to be powerful for natural language understanding (NLU) tasks, in particular natural language inference and entailment recognition. Finally, XLM-RoBERTa Conneau et al. (2020) is chosen to contrast the performance between the English and German versions of the dataset because it is a powerful multilingual model that was also shown to work well on NLP tasks involving German text Vladika et al. (2022). ## 6 Experiments ### Setup We performed an array of experiments to test the performance of baseline systems on the two common fact-checking tasks. Considering the dataset's relatively small size, we opted out of declaring a small subset of the dataset to be a test set, but instead split it into \(5\) folds of equal size and equal label distribution and then performed a 5-fold cross-validation procedure with the final scores being shown with a mean and standard deviation. These five splits are released together with the dataset for easier reproducibility. The hyperparameters were mostly the same for all models and setups: learning rate \(10^{-5}\), warmup rate \(0.06\), weight decay \(0.01\), batch size \(4\), epochs \(7\). For all of the models, their _Large_ version was used, imported from the HuggingFace repository. The experiments were run on a single Nvidia V100 GPU card. ### Results The final results of main experiments are shown in Table 3. The results show the mean and standard deviation over the 5-fold cross-validation of precision, recall, and F1 score, which are useful classification metrics for a dataset with an imbalanced label distribution. All the metrics are macro-averaged scores over the three classes. All experiments were run on the English version of the dataset, except _XLM-R (German)_, which was run on the German version of the dataset. The task of _Evidence Selection_ consisted of predicting for each candidate sentence whether it belongs or not to evidence sentences, with only about 6% of all candidate sentences had a positive label in this task. The selected sentences in this task are passed over to models in the next \begin{table} \begin{tabular}{c|c|c c c|c c c|c} \hline \hline \multicolumn{1}{c}{} & \multicolumn{4}{c}{**Evidence Selection**} & \multicolumn{4}{c}{**Veracity Prediction**} & **Oracle Ver. 
Pred.** \\ \hline **System** & **Base Model** & **Precision** & **Recall** & **F1 Macro** & **Precision** & **Recall** & **F1 Macro** & **F1 Macro** \\ \hline \multirow{4}{*}{pipeline} & XLM-R (German) & \(48.0_{1.5}\) & \(48.3_{3.6}\) & \(48.1_{2.1}\) & \(59.0_{2.9}\) & \(59.1_{8.4}\) & \(58.4_{9.1}\) & \(73.6_{5.5}\) \\ & XLM-R (English) & \(51.9_{2.1}\) & \(52.9_{4.0}\) & \(52.3_{1.4}\) & \(64.8_{7.4}\) & \(60.0_{5.7}\) & \(60.4_{6.1}\) & \(74.7_{4.4}\) \\ \cline{1-1} \cline{2-10} & BERT & \(51.4_{3.4}\) & \(51.0_{9.1}\) & \(51.2_{1.6}\) & \(50.0_{4.8}\) & \(50.5_{1.5}\) & \(50.1_{4.6}\) & \(69.9_{5.6}\) \\ & BioBERT & \(52.2_{2.0}\) & \(54.4_{2.5}\) & \(53.2_{1.0}\) & \(62.6_{5.3}\) & \(56.3_{4.0}\) & \(57.2_{4.2}\) & \(78.1_{3.8}\) \\ & DeBERTa & \(54.8_{3.7}\) & \(56.6_{3.3}\) & **\(\textbf{55.5}_{1.0}\)** & \(67.6_{4.6}\) & \(64.5_{4.0}\) & \(\textbf{65.3}_{3.2}\) & \(\textbf{81.9}_{1.4}\) \\ \hline \multirow{4}{*}{joint} & BERT & \(79.1_{3.8}\) & \(70.0_{2.1}\) & \(73.2_{1.7}\) & \(69.4_{4.0}\) & \(65.5_{3.5}\) & \(66.9_{4.4}\) & β€” \\ & BioBERT & \(64.2_{2.2}\) & \(74.2_{2.8}\) & \(67.4_{1.2}\) & \(65.1_{3.9}\) & \(63.3_{4.2}\) & \(63.1_{3.7}\) & β€” \\ \cline{1-1} & DeBERTa & \(71.8_{2.8}\) & \(75.2_{3.5}\) & **73.4**\({}_{1.4}\) & \(68.2_{4.6}\) & \(66.8_{4.0}\) & \(\textbf{67.5}_{3.2}\) & β€” \\ \hline \hline \end{tabular} \end{table} Table 3: Results of all baseline systems and models in the form of the mean and standard deviation of a 5-fold cross-validation over the dataset. task, the _Veracity Prediction_. It is a three-class classification problem with the goal of predicting one of the three classes. The models used for Veracity Prediction were fine-tuned to predict the label with gold (annotated) sentences, but during inference time, they used model-selected sentences from the previous step. In the last column, _Oracle Verdict Prediction_, we show the scenario where manually annotated ("gold") evidence sentences were used as input to the label-prediction model. Some additional experiments were performed, and their results are shown in Table 5. The experiment _Claims only_ predicts the veracity by only taking into account the claim text, with no evidence at all. For the experiment Google snippets, we ran a search over the Google Search API (on May 1, 2023) with our claims in English and collected the snippets from the first 10 results. These snippets were concatenated as evidence, fine-tuned, and claim veracity was predicted. This is to test the open-domain claim verification, as Google snippets were used as evidence in other fact-checking datasets (Augenstein et al., 2019; Hu et al., 2022). The experiment _Gold explanations_ utilizes explanatory summaries written by authors at the beginning of every fact-check article to check how useful these summaries are for veracity prediction. All of these experiments used the same 5 folds as the previous table. Finally, the experiment _Level of Evidence_ aimed to predict one of the three categories of the level of evidence (_low_, _medium_, or _high_), for which the distribution is shown in Figure 3. ## 7 Discussion As can be seen in Table 3, the basic BERT model provides results slightly above \(50.0\) for both tasks, which is solid considering the dataset is imbalanced. Still, the biomedical model BioBERT outperforms BERT in both types of systems, which show the benefit of using a domain-specific model for a dataset that includes biomedical terminology. 
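To make the compared systems concrete, the sketch below spells out the evidence-selection step of the pipeline baseline from Section 5.2, fine-tuned with the hyperparameters reported in Section 6.1; swapping `MODEL_NAME` yields the BERT, BioBERT, DeBERTa, and XLM-RoBERTa variants discussed here. This is a simplified illustration rather than the exact training code: the dataset field names and the toy examples are assumptions made for the sketch.

```python
# Sketch of the evidence-selection step of the pipeline baseline (Section 5.2),
# fine-tuned with the hyperparameters reported in Section 6.1. Field names and
# example pairs are illustrative; the veracity-prediction model is built the
# same way with num_labels=3 on [claim; SEP; selected evidence] inputs.
import torch
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

MODEL_NAME = "microsoft/deberta-v3-large"  # or a BERT / BioBERT / XLM-R checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Toy candidate pairs a_i = [s_i ; SEP ; c] with binary evidence labels.
pairs = Dataset.from_dict({
    "sentence": ["Overall, the studies show that melatonin may help better "
                 "against jet lag than a sham drug.",
                 "The daily requirement for the vitamin is about 100 milligrams."],
    "claim": ["Does melatonin help against jet lag?"] * 2,
    "label": [1, 0],
})

def encode(batch):
    return tokenizer(batch["sentence"], batch["claim"], truncation=True)

pairs = pairs.map(encode, batched=True)

args = TrainingArguments(
    output_dir="evidence-selector",
    learning_rate=1e-5,
    warmup_ratio=0.06,
    weight_decay=0.01,
    per_device_train_batch_size=4,
    num_train_epochs=7,
)
trainer = Trainer(model=model, args=args, train_dataset=pairs, tokenizer=tokenizer)
trainer.train()

# At inference time, a candidate sentence is kept as evidence if its softmax
# probability for the positive class exceeds the fixed threshold of 0.5.
inputs = tokenizer("Overall, the studies show that melatonin may help better "
                   "against jet lag than a sham drug.",
                   "Does melatonin help against jet lag?",
                   return_tensors="pt").to(model.device)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
is_evidence = probs[0, 1] > 0.5
```

The veracity-prediction model is built analogously with three labels on claim-plus-evidence pairs, while the joint system of Section 5.3 instead shares a single encoder and adds the sentence-level and verdict-level classification losses before backpropagation.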
Even though our dataset consists of text written to be understandable to everyday users, it still features a wide array of medical terms and nuances that were probably better captured by BioBERT. Nevertheless, DeBERTa outperformed both BERT and BioBERT in the pipeline system, by a massive margin on veracity prediction. This shows the power of this model for the task of natural language inference in general. This shows that being optimized for good performance in a specific NLP task like entailment recognition can beat a simpler model that is optimized for a specific domain. When looking at the performance of XLM-RoBERTa for the parallel German and English corpus, it is evident that it worked better for English, especially for evidence sentence selection. This likely stems from the fact that even though the model is multilingual, English was still the most prevalent dataset during pre-training. Still, the results for German are decent while leaving room for improvement. Developing language-specific NLP solutions for a task like this is useful because the speakers of the said language will often seek health advice on the Internet in their native language, so improving the performance on the German version of the dataset remains an open challenge. The veracity prediction performance with oracle sentences is by far superior to the setup where the model has to select evidence sentences on its own. This shows that detecting appropriate evidence spans and arguments in unstructured text, for a given claim or query, is a challenging problem. Furthermore, the joint systems show a clear dominance for evidence selection and veracity prediction. Especially for evidence selection, the sig \begin{table} \begin{tabular}{l} \hline \hline **Claim:** Are vegetables prepared in a microwave oven less healthy than those prepared in other ways? \\ \hline **Evidence:** If there are health problems related to the microwave, then this is not because the microwave ingredients are destroyed or changed, but because the food is simply too unhealthy to eat altogether. The week-long feeding with always several times warmed up in the microwave has not led to any signs of poisoning. \\ \hline **Gold label: Refuted**\(\parallel\)** Predicted: Supported \\ \hline \hline **Claim:** Does cat’s claw improve joint disease symptoms? \\ \hline **Document:** Whether cat claw helps better in rheumatoid arthritis or osteoarthritis than a placebo cannot be reliably estimated on the basis of the available studies. We can’t make any statements about effectiveness. \\ \hline **Gold label: NEI**\(\parallel\)** Predicted: Supported \\ \hline \hline **Claim:** Does brain training boost intelligence? \\ \hline **Document:** Those who train their memory only get better in these exercises, i.e. only in working memory and not in other aspects of intelligence. A review from 2013 casts doubt on how much cognitive training can be helpful for children and adolescents with various mental developmental disorders. \\ \hline **Gold label: Refuted**\(\parallel\)** Predicted: NEI \\ \hline \hline \end{tabular} \end{table} Table 4: Examples of claims and gold evidence snippets where the DeBERTa baseline made incorrect predictions nificant improvement stems from the fact that the learned representation takes the whole document into context and contextualizes sentences to their surroundings. The task of veracity prediction is also improved, which shows the clear benefit of multi-task learning and joint task modeling. 
To get a deeper insight into the baseline model performance, Table 4 shows examples where the best-performing DeBERTa model made incorrect predictions. This shows the challenging nature of the dataset. Table 5 shows the results of the additional experiments. For claims only, the classifier is slightly better than random, which shows DeBERTa model did utilize some of its internal world knowledge for predictions but is still considerably worse than any other setup from Table 3. This shows there are no linguistic patterns spoiling the results, and evidence is indeed needed for a genuine verdict prediction. Fact-checking with Google snippets performs poorly and indicates that these snippets are not informative enough to reach a conclusion, on top of possibly coming from untrustworthy sources. Future work could explore the open-domain claim verification performance using sources such as Wikipedia or PubMed. The performance with explanation summaries is decent but still lacking when compared to using evidence sentences. Future work could utilize these human-written summaries to produce natural language explanations to justify predicted claim verdicts. Finally, the prediction of the level of evidence is also considerably poor and indicates how utilizing this aspect in fact-checking is yet to be explored and refined. ## 8 Conclusion We introduce HealthFC, a novel fact-checking dataset for verifying claims related to everyday health-related concerns. It comprises 750 claims based on users' online inquiries, rich metadata including final verdict labels, explanation paragraphs, full evidence documents, and manually annotated rationale sentences. We describe the dataset creation and collection process in detail and present descriptive statistics. Finally, we provide results of extensive experiments with two types of baseline systems with multiple base models and show that joint systems with full-document representation outperform the more common pipeline systems. We anticipate that the dataset can help advance the state of automated medical fact-checking and be used for NLP tasks not covered in this paper, such as open-domain verification and explanation generation. Another relevant area for future research concerns dialogue systems in the health domain. ## 9 Limitations Our study employed a rigorous research design that involved the collection, annotation, as well as analysis of a data corpus about fact-checks of health claims. However, it is crucial to acknowledge certain constraints within the study. For one thing, the crawled text data was automatically translated from German to English, which may have resulted in translation errors, especially in view of particular layman's terms or idioms that are difficult to translate. Still, upon manual annotation of evidence sentences, we corrected any spotted errors and inconsistencies. For nine articles, there were no concise verdict explanations available, so we only used the full article text. Moreover, owing to the German-speaking readership of the Austria-based Medizin-transparent portal, a few articles might focus on topics related to healthcare practices in Germany, Switzerland, or Austria. Nevertheless, most health facts are not country-specific because they are based on scientific research that is applicable universally. In consequence, they can be often applied globally without being restricted to a specific country or region. 
As a last point, it should be noted that errors may have occurred during the dataset annotation, leading to a less than ideal selection of evidence sentences. To mitigate this risk, the annotators discussed their assigned labels in regular meetings to establish a clear understanding of the task.
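As a concrete illustration of the agreement check described in Section 3.3 and revisited above, the following sketch splits an article into candidate sentences with the NLTK tokenizer and computes Cohen's kappa between two annotators. The article snippet and both label vectors are illustrative placeholders, not actual annotations from the dataset.

```python
# Sketch of the annotation tooling from Section 3.3: split an article into
# candidate sentences with NLTK and measure inter-annotator agreement with
# Cohen's kappa. The text and the two label vectors below are placeholders.
import nltk
from nltk.tokenize import sent_tokenize
from sklearn.metrics import cohen_kappa_score

nltk.download("punkt", quiet=True)      # sentence model (older NLTK versions)
nltk.download("punkt_tab", quiet=True)  # sentence model (newer NLTK versions)

article = ("Overall, the studies show that melatonin may help better against "
           "jet lag than a sham drug. At random, the test subjects received "
           "melatonin or a dummy medication.")
candidate_sentences = sent_tokenize(article)

# One binary label per candidate sentence and per annotator
# (1 = evidence sentence, 0 = not evidence); at most 5 positives per article.
annotator_a = [1, 0]
annotator_b = [1, 1]
print(cohen_kappa_score(annotator_a, annotator_b))
```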
2309.05998
Ancestral reproductive bias in continuous time branching trees under various sampling schemes
Cheek and Johnston (Journal of Mathematical Biology, 2023) consider a continuous-time Bienaym\'e-Galton-Watson tree conditioned on being alive at time $T$. They study the reproduction events along the ancestral lineage of an individual randomly sampled from all those alive at time $T$. We give a short proof of an extension of their main results to the more general case of Bellman-Harris processes. Our proof also sheds light onto the probabilistic structure of the rate of the reproduction events. A similar method will be applied to explain (i) the different ancestral reproduction bias appearing in work by Geiger (Journal of Applied Probability, 1999) and (ii) the fact that the sampling rule considered by Chauvin, Rouault and Wakolbinger (Stochastic Processes and their Applications, 1991) leads to a time homogeneous process along the ancestral lineage.
Jan Lukas Igelbrink, Jasper Ischebeck
2023-09-12T06:55:45Z
http://arxiv.org/abs/2309.05998v2
# Ancestral reproductive bias in continuous time branching trees under various sampling schemes ###### Abstract. Cheek and Johnston [3] consider a continuous-time Bienayme-Galton-Watson tree conditioned on being alive at time \(T\). They study the reproduction events along the ancestral lineage of an individual randomly sampled from all those alive at time \(T\). We give a short proof of an extension of their main results [3, Theorems 2.3 and 2.4] to the more general case of Bellman-Harris processes. Our proof also sheds light onto the probabilistic structure of the rate of the reproduction events. A similar method will be applied to explain (i) the different ancestral reproduction bias appearing in work by Geiger [1] and (ii) the fact that the sampling rule considered by Chauvin, Rouault and Wakolbinger in [1, Theorem 1] leads to a time homogeneous process along the ancestral lineage. Key words and phrases:branching processes, spines, reproductive bias, inspection paradox,sampling schemes 2020 Mathematics Subject Classification: Primary 60J80; secondary 60K05, 92D10 We thank Anton Wakolbinger for bringing the work [3] to our attention. We are grateful to him and also to Matthias Birkner, Gotz Kersting and Marius Schmidt for stimulating discussions and valuable hints. A substantial part of this work was done during the 2023 seminar week of the Frankfurt probability group in Haus Bergkranz. ## 2. Sampling an ancestral line at random On the event \(\{N_{T}>0\}\), let the individual \(V\) be sampled as described in the Introduction, and let \(S\) be its mark. We define the process \((N_{t})_{t\geq 0}\) to be right continuous with left limits. As a consequence, if \(T_{1}\) is the lifetime of the root individual, then \(N_{T_{1}}\) has distribution \(\left(p_{k}\right)_{k\geq 0}\). Let \(J\) be the random number of reproduction events and \(0<T_{1}<T_{2}<\dots<T_{J}\leq T\) be the random times of reproduction events along the ancestral lineage of \(V\). Let \(L_{1},\dots,L_{J}\) be the offspring sizes in these reproduction events and let \(0<\tau_{1}<\tau_{2}<\dots\) be the random arrival times in a renewal process with interarrival time distribution \(\mu\). Denote by \(\mathbf{P}\) and \(\mathbf{E}\) the probability measure and expectation for \(N_{0}=1\). **Theorem 2.1**.: _For \(j\geq 0\), \(0<t_{1}<\dots<t_{j}\leq T\in\mathbb{R}\) and \(\ell_{1},\dots,\ell_{j}\in\mathbb{N}\) we have_ \[\mathbf{P}\left(N_{T}>0,J=j,\,T_{1}\in\mathrm{d}t_{1},\dots T_{j} \in\mathrm{d}t_{j},\,L_{1}=\ell_{1},\dots,L_{j}=\ell_{j},\,S\in\mathrm{d}s\right)\] \[=\mathbf{P}\left(\tau_{1}\in\mathrm{d}t_{1},\dots,\tau_{j}\in \mathrm{d}t_{j},\tau_{j+1}>T\right)\prod_{i=1}^{j}\left(\ell_{i}p_{\ell_{i}} \,\mathbf{E}\left[s^{N_{T-t_{i}}}\right]^{\ell_{i}-1}\right)\mathrm{d}s. \tag{2.1}\] **Corollary 2.2**.: _When integrated over \(s\in(0,1)\), (2.1) reveals that the process \((T_{1},L_{1}),\dots,(T_{J},L_{J})\) of reproduction times and offspring sizes along the ancestral lineage of the uniformly chosen individual (conditioned on \(\{N_{T}>0\}\)) is a mixture of (what could be called) "biased compound renewal processes"._ **Remark 2.3**.: * _When the lifetime distribution_ \(\mu\) _is the exponential distribution with parameter_ \(r\)_, then_ \(\tau_{1},\tau_{2},\dots\) _are the points of a rate_ \(r\) _Poisson point process. 
In this case Corollary_ 2.2 _together with (_2.1_) becomes a reformulation of the statements of_ _[_3_, Theorems 2.3 and 2.4]__, and at the same time reveals the probabilistic role of the mixing parameter_ \(s\) _in the mixture of biased compound Poisson processes that appear in the "Cox process representation" of_ _[_3_]__._ _Let us write (as in_ _[_3_]__)_ \(F_{t}(s):=\mathbf{E}[s^{N_{t}}]\)_, and abbreviate_ (2.2) \[B(t,T,\ell):=\frac{1}{1-F_{T}(0)}\int_{0}^{1}F_{T-t}(s)^{\ell-1}F_{T}^{\prime} (s)\,\mathrm{d}s.\] * _(_as well as Theorem_ 2.1_) says that the rate of size_ \(\ell\) _ reproduction along the uniform ancestral lineage at time_ \(t\) _is_ \(r\ell p_{\ell}\,B(t,T,\ell)\)_. In this sense the factor_ \(B(t,T,\ell)\) _can be interpreted as an_ (ancestral) rate bias_, on top of the classical term_ \(r\ell p_{\ell}\)_. Indeed, the factor_ \(B(t,T,\ell)\) _is absent in trees that are biased with respect to their size at time_ \(T\)_. Galton-Watson trees of this kind have been investigated (also in the multitype case) by Georgii and Baake_ _[_10_, Section 4]__; they are continuous-time analogues of the size-biased trees analysed by Lyons et al._ _[_11_]_ _and Kurtz et al._ _[_12_]__._ _In the critical and supercritical case one can check that, for all fixed_ \(u<T\) _and_ \(\ell\in\mathbb{N}\) _one has the convergence_ \(B(T-u,T,\ell)\to 1\) _as_ \(T\to\infty\)_. In the supercritical case this stabilisation along the sampled ancestral lineage corresponds to the "retrospective viewpoint" that has been taken in_ _[_10_]_ _and, in the more general situation of Crump-Mode-Jagers processes, by Jagers and Nerman_ _[_13_]__._ _The choice_ \(\mu=\delta_{1}\) _renders the case of discrete time Galton-Watson processes, starting with one individual at time_ \(0\) _and with reproduction events at times_ \(1,2,\dots\)_. Then, with_ \(T=n\in\mathbb{N}\)_, and_ \(L_{1},\dots,L_{n}\) _being the family sizes along the ancestral lineage of the sampled individual_ \(V\)_, the formula (_2.1_) specialises to_ (2.3) \[\mathbf{P}\left(N_{n}>0,\,L_{1}=\ell_{1},\dots,L_{n}=\ell_{n},\,S\in\mathrm{d}s \right)=\left(\prod_{i=1}^{n}\ell_{i}p_{\ell_{i}}\,\mathbf{E}\left[s^{N_{n-i}} \right]^{\ell_{i}-1}\right)\mathrm{d}s.\] ## 3. Maxima of i.i.d. random markers As a preparation for the short probabilistic proof of Theorem 2.1 given in the next section, we recall the following well-know fact: For \(\ell\in\mathbb{N}\), let \(\widetilde{S}\) be the maximum of \(\ell\) independent \(\text{Unif}[0,1]\)-distributed random variables \(U_{1},\ldots,U_{\ell}\). Then the density of \(\widetilde{S}\) is \[\mathbf{P}\left(\widetilde{S}\in\mathrm{d}s\right)=\ell s^{\ell-1}\,\mathrm{d }s,\quad 0\leq s\leq 1. \tag{3.1}\] Indeed, because of exchangeability, \[\mathbf{P}\left(\widetilde{S}\in\mathrm{d}s\right)=\ell\,\mathbf{P}\left(U_{1 }\in\mathrm{d}s\right)\mathbf{P}\left(U_{2}<s,\ldots,U_{\ell}<s\right),\] which equals the r.h.s. of (3.1). The following lemma specialises to (3.1) when putting \(\widetilde{N}\equiv 1\). **Lemma 3.1**.: _Let \(\widetilde{N}\) be an \(\mathbb{N}_{0}\)-valued random variable, and \(\widetilde{N}_{1},\widetilde{N}_{2},\ldots\) be i.i.d. copies of \(\widetilde{N}\). 
Given \(\widetilde{N}_{1},\widetilde{N}_{2},\ldots\) let \(U_{1,1},\ldots U_{1,\widetilde{N}_{1}},U_{2,1},\ldots U_{2,\widetilde{N}_{2}},\ldots\) be independent \(\text{Unif}[0,1]\)-distributed random variables, and write_ \[S_{k} := \max\left\{U_{k,1},\ldots,U_{k,\widetilde{N}_{k}}\right\},\quad k =1,2,\ldots\] \[S^{(\ell)} := \max\left\{S_{1},\ldots,S_{\ell}\right\},\quad\ell\in\mathbb{N}\] _where we put \(\max(\emptyset):=-\infty\). Then, for all \(\ell\in\mathbb{N}\), the density of \(S^{(\ell)}\) is_ \[\mathbf{P}\left(\widetilde{N}_{1}+\ldots+\widetilde{N}_{\ell}>0,\,S^{(\ell)} \in\mathrm{d}s\right)=\ell\,\,\mathbf{E}\left[s^{\widetilde{N}}\right]^{\ell -1}\mathbf{P}\left(\widetilde{N}_{1}>0,\,S_{1}\in\mathrm{d}s\right),\quad 0 \leq s\leq 1. \tag{3.2}\] Proof.: Again because of exchangeability, the l.h.s. of (3.2) equals \[\ell\,\mathbf{P}\left(\widetilde{N}_{1}>0,\,S_{1}\in\mathrm{d}s\right)\mathbf{ P}\left(S_{2}<s,\ldots,S_{\ell}<s\right). \tag{3.3}\] Since by assumption the \(S_{k}\) are i.i.d. copies of \(S_{1}\), the rightmost factor in (3.3) equals \[\mathbf{P}\left(S_{1}<s\right)^{\ell-1}=\mathbf{E}\left[\mathbf{P}\left(S_{1} <s\mid\widetilde{N}_{1}\right)\right]^{\ell-1}=\mathbf{E}\left[s^{\widetilde{ N}_{1}}\right]^{\ell-1}=\mathbf{E}\left[s^{\widetilde{N}}\right]^{\ell-1}.\] Hence, (3.3) equals the r.h.s. of (3.2), completing the proof of the lemma. The following corollary is immediate. **Corollary 3.2**.: _Let \(L\) be an \(\mathbb{N}_{0}\)-valued random variable that is independent of all the random variables appearing in Lemma 3.1, with \(\mathbf{P}(L=\ell)=p_{\ell}\), \(\ell\in\mathbb{N}_{0}\). Then we have for all \(\ell\in\mathbb{N}_{0}\),_ \[\mathbf{P}\left(L=\ell,\,\widetilde{N}_{1}+\ldots+\widetilde{N}_{\ell}>0,\,S^{ (\ell)}\in\mathrm{d}s\right)=\ell p_{\ell}\,\,\mathbf{E}\left[s^{\widetilde{N }}\right]^{\ell-1}\mathbf{P}\left(\widetilde{N}_{1}>0,\,S_{1}\in\mathrm{d}s \right),\quad 0\leq s\leq 1.\] ## 4. Proof of Theorem 2.1 For \(j=0\), both sides of (2.1) are equal to \(\mu((T,\infty))\)\(\mathrm{d}s\). For \(j\geq 1\), a decomposition at the first reproduction event of the branching process (which on the event \(\{N_{T}>0\}\) necessarily is also the first reproduction event along the sampled ancestral lineage) leads us directly to the situation of Corollary 3.2. Here, the random variable \(L\) in Corollary 3.2 takes the role of the \(L_{1}\) in (2.1), and the random variable \(N_{T-t_{1}}\) from (2.1) becomes the \(\widetilde{N}\) in Corollary 3.2. We thus obtain from Corollary 3.2 for \(0<t_{1}\leq T\), \(0\leq s\leq 1\), \(\ell_{1}\in\mathbb{N}\) \[\begin{split}&\mathbf{P}\left(N_{T}>0,J\geq 1,\,T_{1}\in \mathrm{d}t_{1},L_{1}=\ell_{1},S\in\mathrm{d}s\right)\\ &=\mathbf{P}\left(\tau_{1}\in\mathrm{d}t_{1}\right)\ell_{1}p_{\ell _{1}}\,\mathbf{E}\left[s^{N_{T-t_{1}}}\right]^{\ell_{1}-1}\mathbf{P}\left(N_{ T-t_{1}}>0,\,S_{1}\in\mathrm{d}s\right),\end{split} \tag{4.1}\] where, on the event \(\{N_{T-t_{1}}>0\}\), \(S_{1}\) is the largest among the marks assigned in an i.i.d. \(\text{Unif}[0,1]\) manner to the \(N_{T-t_{1}}\) many individuals. Thanks to the independence and self-similarity properties inherent in the branching processes, this can be iterated, leading directly to (2.1). ## 5. Conditioning on a marker value Chauvin, Rouault and Wakolbinger [10] consider a Markov process with an atomless transition probability indexed by a continuous-time Galton-Watson-tree and condition on an individual at time \(T\) to be at a given location. 
To relate this to the framework described in the Introduction, we assume that each individual alive at time \(T\) in the Bellmann-Harris tree carries a mark in some standard Borel space \(E\) and these random marks have the following properties: * Their marginal distributions (denoted by \(\nu\)) are identical and do not depend on the reproduction events * a.s. no pair of marks is equal. Think for example of branching Brownian motion: The positions of the particles depend on each other via the genealogy, but the movements after a branching event are independent. At every fixed point in time a.s. no pair of particles will be at the same position. We now condition on \(\left\{N_{T}>0\right\}\) and, for given \(s\in E\), on one of the \(N_{T}\) individuals having marker value \(s\). Denote by \(V\) the individual having marker \(s\). Let \(J\) be the random number of reproduction events along the ancestral lineage of \(V\) and \(0<T_{1}<T_{2}<\cdots<T_{J}<T\) be the random times of these reproduction events. Let \(L_{1},\ldots,L_{J}\) be the offspring sizes in these reproduction events and let \(0<\tau_{1}<\tau_{2}<\cdots\) be the random arrival times in a renewal process with interarrival time distribution \(\mu\). **Theorem 5.1**.: _For \(j\geq 0\), \(0<t_{1}<\ldots<t_{j}<T\) and \(\ell_{1},\ldots,\ell_{j}\in\mathbb{N}\) we have for \(\nu\)-almost all \(s\)_ \[\begin{split}&\mathbf{P}\left(\,J=j,\,T_{1}\in\mathrm{d}t_{1}, \ldots T_{j}\in\mathrm{d}t_{j},\,L_{1}=\ell_{1},\ldots,L_{j}=\ell_{j}\right]N_ {T}>0,\exists\,\mathrm{mark}\in\mathrm{d}s\big{)}\\ &=\frac{1}{\mathbf{E}[N_{T}]}\,\mathbf{P}\left(\tau_{1}\in \mathrm{d}t_{1},\ldots,\tau_{j}\in\mathrm{d}t_{j},\tau_{j+1}\geq T\right) \prod_{i=1}^{j}\ell_{i}p_{\ell_{i}}.\end{split} \tag{5.1}\] Proof.: Because of properties (M1), (M2) we have \[\mathbf{P}(N_{T}>0,\exists\,\mathrm{mark}\in\mathrm{d}s)=\mathbf{E}[N_{T}]\nu (\mathrm{d}s),\quad s\in E.\] Hence (5.1) is equivalent to \[\begin{split}&\mathbf{P}\left(J=j,\,T_{1}\in\mathrm{d}t_{1}, \ldots T_{j}\in\mathrm{d}t_{j},\,L_{1}=\ell_{1},\ldots,L_{j}=\ell_{j},N_{T}>0, \exists\,\mathrm{mark}\in\mathrm{d}s\right)\\ &=\mathbf{P}\left(\tau_{1}\in\mathrm{d}t_{1},\ldots,\tau_{j}\in \mathrm{d}t_{j},\tau_{j+1}\geq T\right)\prod_{i=1}^{j}\ell_{i}p_{\ell_{i}}\, \nu(\mathrm{d}s).\end{split} \tag{5.2}\] We prove the statement (5.2) by induction over \(j\), _simultaneously_ over all time horizons \(T>0\). We write \(\mathbf{P}^{T}\) for the probability referring to time horizon \(T\); this well be helpful in the induction step where will encounter two different time horizons. For \(j=0\) the statement is true, since \[\mathbf{P}^{T}(J=0,N_{T}>0,\exists\,\mathrm{mark}\in\mathrm{d}s)=\mathbf{P} \left(\tau_{1}\leq T\right)\,\nu(ds).\] Assume we have proved (5.2) for all time horizons \(T^{\prime}\) with \(j-1\) (in place of \(j\)), for all times \(t^{\prime}_{1},\ldots,t^{\prime}_{j-1}\leq T^{\prime}\), sizes \(\ell^{\prime}_{1},\ldots,\ell^{\prime}_{j-1}\in\mathbb{N}\) and marker distributions with the same marginal \(\nu\) that satisfy conditions (M1), (M2). Turning to (5.2) as it stands, we note that on \(\left\{T_{1}=t_{1},L_{1}=\ell_{1}\right\}\), the descendants of the \(\ell_{1}\) siblings in the first branching event form \(\ell_{1}\) independent and identically distributed trees on the time interval \([t_{1},T]\). Let \(\mathcal{U}_{k},\,k=1,\ldots,\ell_{1}\), be the set of markers of the individuals at time \(T\) that descend from the \(k\)-th sibling. 
By randomly permuting these \(\ell_{1}\) siblings, we can assume that the set-valued random variables \(\mathcal{U}_{k},\,k=1,\ldots,\ell_{1}\), are exchangeable. Note that the markers in each \(\mathcal{U}_{k}\) satisfy conditions (M1), (M2). Because the markers are a.s. pairwise different by assumption, the mark \(s\) belongs to at most one of those \(\mathcal{U}_{k}\), so \[\mathbf{1}_{\left\{\exists\,\mathrm{mark}\in\mathrm{d}s\right\}}=\sum_{k=1}^{ \ell_{1}}\mathbf{1}_{\left\{\mathcal{U}_{k}\cap\mathrm{d}s\neq\emptyset \right\}}\,\,\,\mathrm{a.s.}\] Putting \(t_{1}^{\prime}:=t_{2}-t_{1},\ldots,t_{j-1}^{\prime}:=t_{j}-t_{1}\) we thus infer, using the branching property of the Bellman-Harris tree, that the left hand side of (5.2) equals \[\mathbf{P}(\tau_{1}\in\mathrm{d}t_{1})p_{\ell_{1}}\ell_{1}\,\mathbf{P}^{T-t_{1} }\left(J=j-1,\ldots,T_{1}\in\mathrm{d}t_{1}^{\prime},\ldots,T_{j-1}\in\mathrm{d }t_{j-1}^{\prime},N_{T-t_{1}}>0,\exists\,\mathrm{mark}\in\mathrm{d}s\right)\] By the induction assumption this is equal to \[\mathbf{P}(\tau_{1}\in\mathrm{d}t_{1})p_{\ell_{1}}\ell_{1}\,\mathbf{P}\left( \tau_{1}^{\prime}\in\mathrm{d}t_{1}^{\prime},\ldots,\tau_{j-1}^{\prime}\in \mathrm{d}t_{j-1}^{\prime},\tau_{j}^{\prime}\geq T-t_{1}\right)\prod_{i=2}^{j }\ell_{i}p_{\ell_{i}}\,\nu(\mathrm{d}s), \tag{5.3}\] where \((\tau_{1}^{\prime},\tau_{2}^{\prime},\ldots)\) have the same distribution as \((\tau_{1},\tau_{2},\ldots)\). Obviously (5.3) equals the r.h.s. of (5.2), which completes the induction step and concludes the proof. **Remark 5.2**.: _If \(\mu\) is the exponential distribution with parameter \(r\), then \(\tau_{1},\tau_{2},\ldots\) are again the points of a rate \(r\) Poisson point process and (5.1) implies that reproduction events along the ancestral lineage of \(V\) happen according to a time-homogeneous Poisson process with rate \(r\sum_{\ell}\ell p_{\ell}\). This corresponds to the description of the events along the ancestral line of \(V\) given in [10, Theorem 1]._ ## 6. Sampling the left-most ancestral lineage We now aim to obtain results about what Geiger [14] calls the leftmost surviving ancestral lineage in a planar embedding of the tree: At any reproduction event we assign independent uniformly on \([0,1]\) distributed markers to all children. An individual can now be uniquely determined by the markers along its ancestral lineage. On the event \(\{N_{T}>0\}\), let \(V\) be the individual whose markers along the entire ancestral lineage comes first in the lexicographic ordering. Let \(J\) be the random number of reproduction events and \(0<T_{1}<T_{2}<\cdots<T_{J}\leq T\) be the random times of reproduction events along the ancestral lineage of \(V\). Let \(L_{1},\ldots,L_{J}\) be the offspring sizes in these reproduction events and let \(0<\tau_{1}<\tau_{2}<\cdots\) be the random arrival times in a renewal process with interarrival time distribution \(\mu\). Denote by \(K_{i}\) the number of siblings born at reproduction event number \(i\) along the ancestral lineage of \(V\) which have a lower lexicographic order than \(V\) and whose descendants hence die out before time \(T\). 
**Theorem 6.1**.: _For \(j\geq 0\), \(0<t_{1}<\ldots<t_{j}<T,\,\ell_{1},\ldots,\ell_{j}\in\mathbb{N}\) and \(k_{i}\in\{1,\ldots,\ell_{i}-1\}\) we have_ \[\mathbf{P}\left(N_{T}>0,J=j,\,T_{1}\in\mathrm{d}t_{1},\ldots T_{j}\in\mathrm{d }t_{j},\,L_{1}=\ell_{1},\ldots,L_{j}=\ell_{j},K_{1}=k_{1},\ldots,K_{j}=k_{j}\right)\] Proof.: The proof of the theorem works in analogy to the one of Theorem 2.1, but using following analogue of Lemma 3.1. **Lemma 6.2**.: _Let \(\widetilde{N}\) be an \(\mathbb{N}_{0}\)-valued random variable, and \(\widetilde{N}_{1},\widetilde{N}_{2},\ldots\) be i.i.d. copies of \(\widetilde{N}\). Given \(\widetilde{N}_{1},\widetilde{N}_{2},\ldots\) let \(U_{1},U_{2},\ldots\) be independent \(\text{Unif}[0,1]\)-distributed random variables, and write_ \[S^{(\ell)} := \max\left\{U_{k}\mid N_{k}\geq 1,k=1,\ldots,\ell\right\},\] \[K^{(\ell)} := \left|\left\{U_{k}\mid U_{k}>S^{(\ell)},k=1,\ldots,\ell\right\}\right|\] _where we put \(\max(\emptyset):=-\infty\). Then, for all \(k<\ell\in\mathbb{N}\) we have_ \[\mathbf{P}\left(\widetilde{N}_{1}+\ldots+\widetilde{N}_{\ell}>0,\,K^{(\ell)}= k\right)=\mathbf{P}\left(\widetilde{N}=0\right)^{k}\mathbf{P}\left(\widetilde{N}>0 \right).\] Proof.: Because \(S^{(\ell)}\) and \(K^{(\ell)}\) are symmetric in \(U_{1},\ldots,U_{\ell}\), we can use exchangeability to assume that \(U_{1}>U_{2}>\cdots>U_{\ell}\). For \(K^{(\ell)}\) to be \(k\), \(S^{(\ell)}\) has then to be \(U_{k+1}\). This is exactly the case if \(\widetilde{N}_{1},\ldots,\widetilde{N}_{k}=0\) and \(\widetilde{N}_{k+1}>0\) ## 7. Biological perspectives Cheek and Johnston [13, Section 5] discuss recent studies ([14], [15]) which suggest that certain mutation rates are elevated for the earliest cell divisions in embryogenesis. Under the assumptions that (1) cell division times vary and (2) mutations arise not only _at_ but also _between_ cell divisions, Cheek and Johnston argue that this early rate elevation might be parsimoniously explained by their finding that in the supercritical case with no deaths the rate of branching events along a uniformly chosen ancestral lineage is increasing in \(t\in[0,T]\) (which is a corollary to their Theorem 2.4). The two-stage sampling rule _first sample a random tree ("an adult") that survives up to time \(T\), then sample an individual from this tree ("a cell from this adult") at time \(T\)_ seems adequate for the situation discussed in Cheek and Johnston [13, Section 5]. In other modeling situations, again with a large collection of i.i.d. Galton-Watson trees, one may think of a different sampling rule: choose individuals at time \(T\) uniformly at random from the union of all the trees. This makes it more probable that the sampled individuals belong to larger trees, and in fact corresponds to the size-biasing of the random trees at time \(T\) ([11, Section 4]) which we discussed in Remark 2.3 a). As mentioned there, the rate bias (2.2) is not present in this sampling scheme. As can be seen from [14, Theorem 1] (and Theorem 5.1), the rate bias (2.2) is also absent along the ancestral lineage of an individual whose marker has a prescribed value \(s\), if one considers a situation in which a neutral marker evolves along the trees in small (continuous) mutation steps, and if one takes, for the prescribed value \(s\), the collection of trees so large that one individual at time \(T\) has a marker value close to (ideally: precisely at) \(s\). 
The sampling rule that appears in [11] (and Theorem 6.1) leads to a rate (and reproduction size) bias along the ancestral lineage that is different from the ones we just discussed. This sampling rule can be defined via i.i.d. real-valued neutral markers that are created at each birth and passed to the offspring. The individual sampled at time \(T\) (from the tree conditioned to survive up to time \(T\)) is the one whose marker sequence is the largest in lexicographic order among the individuals that live in the tree at time \(T\). This interpretation appears to be of less biological relevance, except in the pure birth (or cell division) case, where one might think of one single marker that is passed on in each generation to a randomly chosen daughter cell.
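As a purely illustrative aside, this pure-birth case lends itself to a quick simulation. The sketch below is our own illustration (the function names and parameter values are ours, not taken from the works cited above); it assumes binary splitting at rate \(r\), i.e. a Yule process, and compares the number of reproduction events along (i) a lineage obtained by following a randomly chosen daughter cell at each division, the pure-birth analogue of the lexicographically extreme lineage just described, and (ii) the ancestral lineage of an individual sampled uniformly at time \(T\); the renewal value \(rT\) is printed for reference.

```python
import random

def yule_division_counts(r, T, rng):
    """Simulate a binary-splitting pure-birth (Yule) tree on [0, T] and return
    the number of division events along the ancestral lineage of every
    individual alive at time T."""
    stack, counts = [(0.0, 0)], []
    while stack:
        t, k = stack.pop()
        wait = rng.expovariate(r)
        if t + wait >= T:
            counts.append(k)                 # lineage reaches T with k divisions
        else:
            stack.append((t + wait, k + 1))  # two daughters, each carrying one
            stack.append((t + wait, k + 1))  # more division on its lineage
    return counts

rng = random.Random(0)
r, T, n_trees = 1.0, 3.0, 5000
uniform_individual, random_descent = [], []
for _ in range(n_trees):
    counts = yule_division_counts(r, T, rng)
    uniform_individual.append(rng.choice(counts))  # rule (ii): uniform individual at T
    # rule (i): following a randomly chosen daughter at each division is a single
    # lineage with Exp(r) waiting times, so it can be simulated directly.
    t, k = 0.0, 0
    while True:
        t += rng.expovariate(r)
        if t >= T:
            break
        k += 1
    random_descent.append(k)

print("renewal reference r*T               :", r * T)
print("mean divisions, random-descent line :", sum(random_descent) / n_trees)
print("mean divisions, uniform individual  :", sum(uniform_individual) / n_trees)
```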
2309.13122
The dominant mechanism(s) for populating the outskirts of star clusters with neutron star binaries
It has been argued that heavy binaries composed of neutron stars (NSs) and millisecond pulsars (MSPs) can end up in the outskirts of star clusters via an interaction with a massive black hole (BH) binary expelling them from the core. We argue here, however, that this mechanism will rarely account for such observed objects. Only for primary masses $\lesssim$ 100 M$_{\odot}$ and a narrow range of orbital separations should a BH-BH binary be both dynamically hard and produce a sufficiently low recoil velocity to retain the NS binary in the cluster. Hence, BH binaries are in general likely to eject NSs from clusters. We explore several alternative mechanisms that would cause NS/MSP binaries to be observed in the outskirts of their host clusters after a Hubble time. The most likely mechanism is a three-body interaction involving the NS/MSP binary and a normal star. We compare to Monte Carlo simulations of cluster evolution for the globular clusters NGC 6752 and 47 Tuc, and show that the models not only confirm that normal three-body interactions involving all stellar-mass objects are the dominant mechanism for putting NS/MSP binaries into the cluster outskirts, they also reproduce the observed NS/MSP binary radial distributions without needing to invoke the presence of a massive BH binary. Higher central densities and an episode of core-collapse can broaden the radial distributions of NSs/MSPs and NS/MSP binaries due to three-body interactions, making these clusters more likely to host NSs in the cluster outskirts.
Nathan W. C. Leigh, Claire S. Ye, Steffani M. Grondin, Giacomo Fragione, Jeremy J. Webb, Craig O. Heinke
2023-09-22T18:03:56Z
http://arxiv.org/abs/2309.13122v1
# The dominant mechanism(s) for populating the outskirts of star clusters with neutron star binaries ###### Abstract It has been argued that heavy binaries composed of neutron stars (NSs) and millisecond pulsars (MSPs) can end up in the outskirts of star clusters via an interaction with a massive black hole (BH) binary expelling them from the core. We argue here, however, that this mechanism will rarely account for such observed objects. Only for primary masses \(\lesssim\) 100 M\({}_{\odot}\) and a narrow range of orbital separations should a BH-BH binary be both dynamically hard and produce a sufficiently low recoil velocity to retain the NS binary in the cluster. Hence, BH binaries are in general likely to eject NSs from clusters. We explore several alternative mechanisms that would cause NS/MSP binaries to be observed in the outskirts of their host clusters after a Hubble time. The most likely mechanism is a three-body interaction involving the NS/MSP binary and a normal star. We compare to Monte Carlo simulations of cluster evolution for the globular clusters NGC 6752 and 47 Tuc, and show that the models not only confirm that normal three-body interactions involving all stellar-mass objects are the dominant mechanism for putting NS/MSP binaries into the cluster outskirts, they also reproduce the observed NS/MSP binary radial distributions without needing to invoke the presence of a massive BH binary. Higher central densities and an episode of core-collapse can broaden the radial distributions of NSs/MSPs and NS/MSP binaries due to three-body interactions, making these clusters more likely to host NSs in the cluster outskirts. keywords: celestial mechanics - binaries: close - stars: neutron - stars: black holes - stars: kinematics and dynamics. ## 1 Introduction Neutron star (NS) binaries located in the outskirts of star clusters have puzzled astronomers for many decades. The reason is that these objects are much heavier than the mean stellar mass in most old star clusters, in particular globular clusters (GCs), such that they are expected to segregate into the core on short timescales due to dynamical friction. And yet, such NS binaries have indeed been observed outside of the cluster core in many Galactic GCs (see Tables 1 and 2 for a full list). For example, the core-collapsed GC NGC 6752 is known to have two millisecond pulsars (MSPs) located beyond the cluster's half-light radius. The first, dubbed PSR J1911-5958A or NGC 6752 A, is located about 3.3 half-light radii from the cluster centre and is the farthest MSP to have ever been observed from the cluster centre (D'Amico et al., 2002). It contains a canonical MSP in a compact binary with helium white dwarf (Ferraro et al., 2003; Bassa et al., 2003) of mass 0.20 M\({}_{\odot}\)(Bassa et al., 2006; Cocozza et al., 2006; Corongiu et al., 2023). The binary has a circular orbit and an orbital period of 0.86 days. Ferraro et al. (2003) suggests that the MSP must have been the result of a dynamical interaction, but the MSP was recycled before the putative interaction occurred, based on the cooling age of the white dwarf (WD) (e.g. Sigurdsson, 2003). The second, PSR J1911-6000C or NGC 6752 C, is located at about 1.4 half-light radii from the cluster centre and is an MSP similar to NGC 6752 A, but lacks a companion (e.g. D'Amico et al., 2002). 
Other examples of MSPs located at or beyond the half-light radius in their host star cluster include1: J0024-7204X in NGC 104 (1.03 half-light radii), J1748-2446J in Terzan 5 (\(\sim\) 1.3 half-light radii), J1748-2021C and J1748-2021D in NGC 6440 (\(\sim\) 1.0 and 1.2 half-light radii), J1801-0857D in NGC 6517 (\(\sim\) 2.4 half-light radii), M28 F, M13 B, M15 C, B1718-19A in NGC 6342, NGC 6624 K, M30 B, and XTE J1709-267, which may be associated with NGC 6293 (see Jonker et al. (2004)). For a more detailed summary, please refer to Tables 1 and 2 below. For the purposes of this paper, we will consider any object at or beyond the cluster's half-light radius as being in the "outskirts". Footnote 1: See http://www.naic.edu/~pfreire/GCpsr.html, a full catalogue of cluster pulsars. A popular mechanism often invoked in the literature to explain the presence of heavy NS binaries at large cluster-centric radii is interactions with massive black hole-black hole (BH-BH) binaries. For example, Colpi et al. (2002) and Colpi et al. (2003) proposed that the presence of NGC 6752 A in the cluster's outskirts is most likely explained by an interaction with a massive BH-BH binary in the cluster core. The authors use this observation to argue for the presence of an intermediate-mass BH (IMBH)-stellar-mass BH binary in the core, with a primary mass \(\lesssim\) 100 M\({}_{\odot}\) and a low-mass secondary closer to 5 M\({}_{\odot}\). Colpi et al. (2003) confirm that ejection velocities capable of delivering the NS binary to its currently observed location could be reached. NGC 6752 C could also be explained by such an interaction, but Colpi et al. (2002) speculate that it might have been ejected to its current position due to an ionization event with a rare high-speed star. In § 2 we challenge the idea that massive BH-BH binaries are the likely cause of NS binaries observed in the halos of their host star clusters. In § 3, we explore several alternative mechanisms that could allow the retention of NS binaries that are not typically invoked in the literature. We also show that indeed more probable mechanisms exist, most notably a three-body interaction involving the NS binary and a normal cluster star. Finally, in § 4, we compare our analysis to the results of Monte Carlo \(N\)-body simulations for star cluster evolution performed using the state-of-the-art Cluster Monte Carlo code. We discuss our results and conclude in § 5. ## 2 An interaction with a BH-BH binary? In this section, we explore the possibility that a massive BH-BH binary or intermediate-mass BH (IMBH)-BH binary is responsible for ejecting a given observed NS binary into the outskirts of its host star cluster. This mechanism has been adopted as the preferred mechanism to explain NGC 6752 A's location in the outskirts of NGC 6752 (Colpi et al. 2002, 2003). Consider an interaction in which two 3-100 M\({}_{\odot}\) BHs interact with a compact NS binary, ejecting it into the cluster halo (e.g. Colpi et al. 2002, 2003). First, we assume that the BH-BH binary is very close to the cluster centre (as expected from dynamical friction), such that the NS binary is ejected on a roughly radial orbit. This means that the NS binary stalls at a distance of \(R=8\) pc from the cluster centre, as observed for NGC 6752 A (for example). We assume a total mass \(M\) for the cluster of 10\({}^{6}\) M\({}_{\odot}\) and that its density profile can be described by a Plummer sphere with a core radius \(a=1\) pc. 
The NS binary has a total mass of 2.1 M\({}_{\odot}\), since we adopt a NS mass of 1.5 M\({}_{\odot}\) from an approximate mean of the distribution in Capano et al. (2020) and a WD or companion mass of 0.6 M\({}_{\odot}\) (see Tremblay et al. (2016) for more information about the observed WD mass distribution), and is sufficiently compact that the timescale for it to interact directly with other stars at its current location exceeds several Gyrs. We note that there is some freedom in the choice of masses for the particles involved in the interaction, depending on the precise formation mechanism considered. For example, consider a scenario in which a WD companion to the NS formed after the hypothetical dynamical interaction. Then, the other star involved in the interaction, apart from the NS, would most likely be a typical MS star (with a mass that exceeds the present-day turn-off mass). After the interaction, the MS star could evolve, leading to mass transfer or a common envelope event that eventually formed the final NS-WD binary. Throughout this paper, our choices for the particle masses reflect the observed mass distributions wherever possible. Then, for an isotropic non-rotating cluster, the timescale for the binary to return to the cluster centre on a roughly radial orbit can be estimated approximately from the fallback time (Webb et al. 2018). To order of magnitude, this is equivalent to the crossing time, or: \[\tau_{\rm cross}=\frac{R}{v_{\rm c}}, \tag{1}\] where \(v_{\rm c}\) is the circular velocity at radius \(R\). Since \(v_{\rm c}=\sqrt{GM(<r)/r}\), we have: \[\tau_{\rm cross}=\frac{3\pi}{4G\bar{\rho}}, \tag{2}\] and \(\bar{\rho}=M(<R)/(4\pi R^{3}/3)\) is the mean density inside \(R\). For our Plummer sphere: \[\rho=\frac{3Ma^{2}}{4\pi}\frac{1}{(r^{2}+a^{2})^{5/2}}, \tag{3}\] and \[M(<r)=M\frac{r^{3}}{(r^{2}+a^{2})^{3/2}}, \tag{4}\] hence \[\bar{\rho}=\frac{3M}{4\pi(R^{2}+a^{2})^{3/2}}. \tag{5}\] Plugging in the required numbers into Equation 2, we find a crossing time of \(\sim\) 11,000 years. For comparison, we can consider a more average cluster and adopt a cluster mass of 10\({}^{5}\) M\({}_{\odot}\), but this calculation yields a very similar crossing time of \(\sim\) 34,000 years. For very radial orbits, the NS binary will most likely be disrupted on its return to the cluster centre by another interaction with the central massive BH-BH binary, since both objects should return approximately to their point of ejection (e.g. Leigh & Wegsman 2018). It is unlikely that the NS binary would be observed before returning to the cluster centre. Thus, the probability of observing such a system is low, if the NS binary is ejected by the BH-BH binary when close to the cluster centre. If the lifetime of the binary is of order the cluster age, the probability of actually observing it at any given time in this scenario is only \(\sim 10^{4}/10^{10}\sim 10^{-6}\) assuming it takes a crossing time for the NS binary to return to the core. We note that this assumes that the BH-BH binary gets a recoil decided by linear momentum conservation such that the NS binary re-encounters the BH-BH binary close to \(r=0\) upon its first pass through the core, on a timescale much shorter than the timescale for mass segregation to operate. But what will happen when the recoiled NS binary is ejected by the BH-BH when it is away from \(r=0\)? Indeed, Figure 1 shows that the wandering radius for an BH-BH binary tends to be of order \(10^{4}\) AU, and this depends only weakly on the binary mass. 
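For reference, the Plummer-sphere relations in Equations 3-5 used above are simple to evaluate numerically. The following minimal sketch is our own illustration (function names are ours; the input values are the cluster mass, core radius and stalling radius quoted in the text) and returns the enclosed mass and mean interior density that enter the crossing-time estimate:

```python
import math

def plummer_enclosed_mass(r_pc, M_sun, a_pc):
    """Enclosed mass M(<r) of a Plummer sphere, Equation 4 [M_sun]."""
    return M_sun * r_pc**3 / (r_pc**2 + a_pc**2) ** 1.5

def plummer_mean_density(r_pc, M_sun, a_pc):
    """Mean density inside radius r, Equation 5 [M_sun pc^-3]."""
    return 3.0 * M_sun / (4.0 * math.pi * (r_pc**2 + a_pc**2) ** 1.5)

M_cl = 1.0e6   # total cluster mass assumed in the text [M_sun]
a = 1.0        # Plummer core radius [pc]
R = 8.0        # stalling radius of the ejected NS binary [pc]

print(f"M(<R)   = {plummer_enclosed_mass(R, M_cl, a):.3e} M_sun")
print(f"rho_bar = {plummer_mean_density(R, M_cl, a):.1f} M_sun/pc^3")
```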
If originally radial then, without cluster rotation, we do not expect the NS binary's orbit throughout the cluster to deviate much from being radial (Webb et al., 2019). However, it is possible that the NS binary has some finite orbital eccentricity less than unity (as is the case for a radial orbit) throughout the cluster, causing its return pass through the core to have an impact parameter that spares it from a direct interaction with the central BH-BH binary. This could be the case either if the cluster has some rotation (since Webb et al. 2019 showed that a kicked object will gain angular momentum from the cluster due to dynamical friction, giving rise to an eccentric orbit) or if the NS binary is kicked while off-center from \(r=0\). As estimated by Colpi et al. (2002), the timescale for the binary to return to the core due to two-body relaxation is \(\tau_{\rm df}\sim 7\times 10^{8}(1.6M_{\odot}/m)\) years, which is again a small fraction of the total cluster lifetime but much longer than a crossing time. For \(m=2.1\) M\({}_{\odot}\), the timescale is a little longer than 100 Myrs at the half-mass radius, yielding a probability of observing the system in the cluster outskirts at any given time of \(10^{8}/10^{10}\sim 0.01\). This is a lower limit, since we expect the timescale for dynamical friction to be even longer in the cluster outskirts relative to at the half-mass radius. Once returned to the core, the NS binary should undergo a strong interaction with the central BH-BH binary on a timescale given by Equation 7 in Leigh et al. (2016), or: \[\tau_{\rm si}=\frac{V_{\rm BH}}{\sqrt{3}\sigma_{\rm BH}\Sigma}, \tag{6}\] where \(V_{\rm BH}\) is the volume within which all stellar-mass BHs in the cluster are confined after mass segregation, \(\sigma_{\rm BH}\) is the BH velocity dispersion and \(\Sigma\) is the collisional cross-section. This contributes of order \(\lesssim 1\) Gyr to the total timescale for a typical GC. Hence, we estimate that, even if some eccentricity is imparted to the orbit of the NS binary, it should still most likely be destroyed by the central BH-BH binary well within the lifetime of the cluster. This will occur on roughly a crossing time for interactions that occur very near the cluster centre of mass (i.e., without much wandering of the BH-BH binary) or some fraction of a core relaxation time near unity for off-centre interactions. This is because if the BH-BH binary is at \(r=0\) then the NS binary will hit it upon returning to the core on its first pass through (due to conservation of linear momentum causing a recoil kick for the BH-BH binary in the opposite direction as the ejected NS binary), whereas off-centre collisions avoid this scenario, greatly prolonging the inspiral time of the NS binary. We caution that the exact transition between these two scenarios is quite complicated and would require detailed N-body simulations \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Cluster ID & Mass & Core radius & Half-light radius & Distance & No. of pulsars & No. of pulsars & PCC? & No. 
of pulsars \\ & (in \(10^{5}\)\(M_{\odot}\)) & (in arcmin) & (in arcmin) & (in kpc) & & in binary & & beyond the half-light radius \\ \hline NGC 104 & 8.53 & 0.36 & 3.17 & 4.5 & 29 & 19 & no & 1 \\ NGC 6205 & 4.84 & 0.62 & 1.69 & 7.1 & 6 & 4 & no & 1 \\ NGC 6342 & 0.377 & 0.05 & 0.73 & 8.5 & 2 & 1 & yes & 1 \\ Terzan 5 & 11 & 0.16 & 0.72 & 6.9 & 43 & 24 & no & 1 \\ NGC 6440 & 5.7 & 0.14 & 0.48 & 8.5 & 8 & 4 & no & 2 \\ NGC 6517 & 2.2 & 0.06 & 0.5 & 10.6 & 17 & \(>\)2 & no & 1 \\ NGC 6624 & 1.03 & 0.06 & 0.82 & 7.9 & 11 & 2 & yes & 1 \\ NGC 6626 & 2.7 & 0.24 & 1.97 & 5.5 & 14 & 10 & no & 1 \\ NGC 6752 & 2.61 & 0.17 & 1.91 & 4 & 9 & 1 & yes & 2 \\ NGC 7078 & 5.18 & 0.14 & 1.00 & 10.4 & 9 & 1 & yes & 1 \\ NGC 7099 & 1.21 & 0.06 & 1.03 & 8.1 & 2 & 2 & yes & 1 \\ \hline \hline \end{tabular} \end{table} Table 1: GCs with pulsars located around and beyond the half-light radius (i.e., in the cluster outskirts). All data is taken from the pulsars in GCs catalog at [http://www.naic.edu/](http://www.naic.edu/) pfreire/GCpsr.html, except the cluster mass, which is taken from [https://people.smp.uq.edu.au/HolgerBaumgardt/globular/](https://people.smp.uq.edu.au/HolgerBaumgardt/globular/). \begin{table} \begin{tabular}{l c c c c} \hline \hline Pulsar Name & Offset & Companion mass & Eccentricity & Spin period \\ & (in arcmin) & (in \(M_{\odot}\)) & & (in ms) \\ \hline NGC 104 X & 3.83 & 0.42 & 0.0000005 & 4.77152 \\ NGC 6205 B & 1.626 & 0.186 & 0.000002 & 3.52807 \\ NGC 6342 A & 2.3 & 0.13 & \(<\)0.005 & 1004.04 \\ Terzan 5 J & 0.948 & 0.39 & 0.35 & 80.3379 \\ NGC 6440 C & 0.48 & - & - & 6.22693 \\ NGC 6440 D & 0.57 & 0.14 & 0.0 & 13.4958 \\ NGC 6517 D & 1.202 & - & - & 4.22653 \\ NGC 6624 K & 1.43 & - & - & 2.768 \\ NGC 6626 F & 2.794 & - & - & 2.45115 \\ NGC 6752 A & 6.39 & 0.22 & 0.0000082 & 3.26619 \\ NGC 6752 C & 2.70 & - & - & 5.27733 \\ NGC 7078 C & 0.944 & 1.13 & 0.681386 & 30.5293 \\ NGC 7099 B & 1.2 & 1.31 & 0.87938 & 12.98983 \\ \hline \hline \end{tabular} \end{table} Table 2: Properties of pulsars located around and beyond the half-light radius (i.e., in the cluster outskirts). All data is taken from the pulsars in GCs catalog at [http://www.naic.edu/](http://www.naic.edu/) pfreire/GCpsr.html to properly quantify, but for most of the relevant parameter space we expect the two-body relaxation time to be a better approximation since the NS binary is most likely to interact with the BH-BH binary away from the cluster centre. We can quantify the above in another way. Figure 1 shows various critical distances pertinent to this problem as a function of the primary BH mass. Specifically, we show as a function of the primary BH mass the wandering radius of the BH-BH binary, the hard-soft boundary for the BH-BH binary, the orbital separation corresponding to an inspiral time of 10 Gyr due to gravitational wave radiation, the orbital separation yielding a recoil kick equal (from an interaction with the putative NS binary) to the cluster escape speed and the tidal disruption radius of the NS binary (see Colpi et al. (2002) and the figure inset for more details). 
The orbital separation yielding a most probable recoil kick equal to the cluster escape speed is calculated using Equation 7.19 in Valtonen & Karttunen (2006) by equating the cluster escape speed to the peak velocity of the centre of mass of the binary, or: \[v_{peak}=\sqrt{\frac{2(M_{t}-m_{e})}{5m_{e}M_{t}}}\sqrt{|E_{0}|}, \tag{7}\] where \(M_{t}\) is the total mass (i.e., the sum of the masses of all interacting particles), \(m_{\rm e}\) is the mass of the ejected NS/MSP binary and \(E_{0}\) is the total interaction energy. The key point is that the BH-BH binary orbital separation is most likely to be less than the wandering radius and, more importantly, the hard-soft boundary, but greater than the orbital separation corresponding to a recoil velocity equal to the core escape speed of 35 km s\({}^{-1}\)(Colpi et al., 2003); otherwise a higher ejection velocity for the NS/MSP binary is more probable. The BH-BH binary is dynamically hard and most likely to eject the NS binary to the outskirts (and not from the cluster) at less than the escape speed for primary BH masses \(\lesssim\) 100 M\({}_{\odot}\) and only a narrow range of orbital separations, as shown in Figure 1. Is the secondary mass in this scenario at all constrained? In short, yes. First, the secondary must be relatively massive such that an object heavier than the ejected binary should be left in orbit about the primary BH, otherwise the compact NS binary would be more likely to end up bound to it, ejecting the BH instead. The mass of NGC 6752 A, for example, is sufficiently low that the secondary in the BH-BH binary need not be a BH. A heavy neutron star could also get the job done roughly as easily, or perhaps even easier if NSs are more abundant in GCs than are stellar-mass BHs. Second, in order for the NS binary to remain **intact**, Wang et al. (2019) showed that mass ratios near unity are preferred, and the dependence of the survival probability on the mass ratio is rather steep. Given the arguments presented in this section, we conclude that a relatively improbable interaction with a BH-BH binary is needed to delicately place a NS binary into the outskirts of a star cluster and have it remain there for longer than a crossing time. This is because only for primary masses \(\lesssim\) 100 M\({}_{\odot}\) and a narrow range of orbital separations should the BH-BH binary be both dynamically hard and produce a sufficiently low recoil velocity to avoid completely ejecting the NS binary from the cluster. Provided other mechanisms to put NS binaries into the cluster outskirts could also be operating with a non-negligible probability, our results are consistent with a scenario in which clusters with high NS binary, and especially MSP, frequencies should most likely host the lowest mass BHs and the fewest BH binaries, as found in Leigh et al. (2016) and Ye et al. (2019), roughly independent of their observed radial distribution. This is due to two reasons. First, interactions with BH-BH binaries, especially more massive ones, are more likely to entirely eject NSs and MSPs from clusters than simply launch them into the cluster outskirts. Second, in clusters with lots of BHs, the BHs act as a heat source for the NSs, preventing them from mass segregating into the centre (Ye et al., 2019). 
The second reason should be the case in all but the most massive clusters (\(\gtrsim\) 10\({}^{6}\) M\({}_{\odot}\)) with the longest relaxation times, since here (especially in the outskirts) the relaxation times can exceed a Hubble time such that observing NS/MSP in the cluster outskirts should be independent of any dynamics happening in the core. ## 3 Alternative formation pathways Although the BH-BH ejection scenario alone has been extensively discussed in the literature, other mechanisms also exist. We will argue some of these are more likely to explain the origins of PSR A and other NS binaries floating beyond the half-mass radius of their host cluster. In this section, we begin by listing the alternative possibilities to explain the presence of a NS binary in the outskirts Figure 1: We show as a function of the primary BH mass the critical orbital separation \(a_{\rm GW}\) where the time for coalescence of the BH-BH binary due to gravitational wave radiation is equal to 10 Gyr, the critical orbital separation \(a_{\rm scat}\) where the recoil velocity due to an interaction with the hypothetical NS binary is roughly equal to the escape speed from the core, the tidal disruption radius \(r_{\rm tid}\) of the BH-BH binary (taken from Colpi et al. (2002)) and the wandering radius \(r_{\rm stand}\) of the BH-BH binary due to Brownian motion (taken from equation 5.145b in Merritt (2013) assuming a central density of 10\({}^{5}\) M\({}_{\odot}\) pc\({}^{-3}\) and a central velocity dispersion of 5 km/s). We further show the hard-soft boundary of the BH-BH binary, or hardening radius, \(a_{\rm HS}\), calculated using Equation 1 in Leigh et al. (2016). Note that all cluster and mass values are chosen to replicate the assumptions in Colpi et al. (2002) for the GC NGC 6752, except we adopt component masses of 0.6 and 1.5 M\({}_{\odot}\) for the NS binary while assuming a secondary BH mass of 3 M\({}_{\odot}\). Only for BH masses \(\lesssim\) 100 M\({}_{\odot}\) will the NS binary be retained in the cluster from a strong interaction, as indicated by the shaded area. Here, the BH-BH binary orbital separation is most likely to be less than the hard-soft boundary \(a_{\rm HS}\) but greater than the orbital separation corresponding to a recoil velocity equal to the core escape speed of 35 km s\({}^{-1}\)\(a_{\rm scat}\)(Colpi et al., 2003), otherwise a higher ejection velocity is more likely. of a dense star cluster, before exploring each in more quantitative detail. The possible formation mechanisms for a NS binary in the cluster outskirts include: * A primordial binary system born in the cluster outskirts. * A three-body interaction with a normal cluster star (i.e., a WD or MS star). * A four-body interaction involving a binary composed of normal cluster stars. * The disruption of a stable hierarchical triple due to the accretion-induced implosion of the tertiary companion. * A natal kick partially imparted to the binary centre of mass due to accretion-induced collapse. Here, the natal NS gets a kick due to asymmetric mass loss in the detonating progenitor at the time of supernova explosion, which also imparts momentum to the binary centre of mass motion due to asymmetric mass loss from the binary system itself. Let us now consider each of the listed mechanisms in more detail. ### A primordial binary Following Colpi et al. (2002), the simplest possibility is that the binary is a primordial system born in the cluster outskirts. 
This scenario is unlikely, however, given that the timescale for the binary to segregate back into the cluster core due to two-body relaxation is shorter than the cluster age in all but the most massive MW GCs. This is because the binary is more massive than a typical single star, such that it will segregate back into the core on a relaxation time, or: \[\tau_{r}(m)=\frac{<m>}{m}\tau_{r}(<m>), \tag{8}\] where \(<m>\sim 0.5\) M\({}_{\odot}\) is the mean stellar mass for an old stellar population and: \[\tau_{r}(<m>)=1.7\times 10^{5}M^{1/2}\Big{(}\frac{r_{h}}{1\rm pc}\Big{)}^{3/2} \Big{(}\frac{1M_{\odot}}{<m>}\Big{)}{\rm years}, \tag{9}\] where \(r_{h}\) is the half-light radius. Taking \(r_{h}=5\) pc for a typical Milky Way GC (Harris 1996, 2010 update) and setting \(\tau_{r}(<m>)=10\) Gyr, we find that only clusters with \(M>\) 7 \(\times\) 10\({}^{6}\) M\({}_{\odot}\) will have sufficiently long relaxation times for a primordial binary born in the cluster outskirts to still be located there today, having avoided segregating into the core due to two-body relaxation. Hence, in massive clusters like 47 Tuc, it would take the longest for NS binaries to segregate back into the core due to its larger mass and hence longer relaxation time, but the timescale is still much less than a Hubble time. Importantly, for NGC 6752 A, the preceding argument suggests that a primordial origin is unlikely to be at the root of its unusual location far out in the outskirts of its host cluster. The cluster is less massive than 7 \(\times\) 10\({}^{6}\) M\({}_{\odot}\), suggesting that the NS binary would have had sufficient time to segregate into the core if it were born in the cluster outskirts. Importantly, however, this estimate is obtained using the two-body relaxation time at the half-mass radius, whereas NGC 6752 A is located over 3 half-light radii from the cluster centre, where the relaxation time can be roughly an order of magnitude longer due to the much lower density. Hence, NGC 6752 A is somewhat of an unusual case due to its very large distance from the cluster centre, and a primordial origin cannot be entirely ruled out for this system given a purely two-body relaxation-based argument. An independent argument against many NS/MSP binaries located in the cluster outskirts having a primordial origin is as follows. It is highly unlikely for MSPs to be formed in the outskirts of clusters, due to the low encounter rate in those outskirts. We know that MSPs are \(\sim\)100 times more frequent in globular clusters than in the Galactic field, as are low-mass X-ray binaries (LMXBs) (Clark75 1975). For a simple estimate, take the mass of the Galaxy as \(6\times 10^{10}\) M\({}_{\odot}\), and the mass of all globular clusters as \(3.8\times 10^{7}\) M\({}_{\odot}\)(Baumgardt et al. 2018). We use the estimate of 30,000 MSPs in the Galaxy from (Lorimer 2013), and estimate the number of MSPs in Galactic globular clusters by extrapolating the MSP numbers estimated by Zhao & Heinke (2022) in 36 globular clusters (600-1500) to the remaining Galactic clusters by stellar encounter rate (Bahramian et al. 2013), giving 1000-2500 MSPs in all globular clusters. Thus we confirm that globular clusters produce 50-130 times more MSPs per unit mass than the Galaxy. 
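Both of the numerical estimates above, the critical cluster mass implied by Equations 8 and 9 and the relative MSP abundance per unit mass, are straightforward to reproduce. Below is a minimal back-of-the-envelope sketch (our own check; function names are ours, and the inputs are the masses, radii and MSP counts quoted in the text):

```python
def t_relax_mean_mass(M, r_h_pc, m_mean=0.5):
    """Relaxation time for stars of the mean mass <m> (Equation 9), in years."""
    return 1.7e5 * M**0.5 * r_h_pc**1.5 / m_mean

def t_segregate(m, M, r_h_pc, m_mean=0.5):
    """Mass-segregation time for a body of mass m (Equation 8), in years."""
    return (m_mean / m) * t_relax_mean_mass(M, r_h_pc, m_mean)

# Cluster mass above which tau_r(<m>) exceeds 10 Gyr for r_h = 5 pc (Equation 9 inverted):
r_h, m_mean = 5.0, 0.5
M_crit = (1.0e10 * m_mean / (1.7e5 * r_h**1.5)) ** 2
print(f"critical cluster mass           ~ {M_crit:.1e} M_sun")   # ~7e6 M_sun, as in the text

# Segregation time of a 2.1 M_sun NS binary, for an NGC 6752-like mass (Table 1)
# and the generic r_h = 5 pc adopted above (illustrative values only):
print(f"t_seg(2.1 M_sun, 2.6e5 M_sun)   ~ {t_segregate(2.1, 2.6e5, r_h):.1e} yr")

# MSPs per unit mass: globular clusters versus the Galactic field.
field_rate = 3.0e4 / 6.0e10                    # ~30,000 MSPs in a 6e10 M_sun Galaxy
gc_lo, gc_hi = 1.0e3 / 3.8e7, 2.5e3 / 3.8e7    # 1000-2500 MSPs in 3.8e7 M_sun of clusters
print(f"GC/field specific MSP frequency ~ {gc_lo/field_rate:.0f}-{gc_hi/field_rate:.0f}")
```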
If we assume these halo MSPs are formed primordially, then their frequency should be similar to the field (actually, it should be substantially less, considering that the cluster escape velocity is small in the outskirts, so neutron stars would escape even more easily than from clusters; Pfahl et al. 2002.) Let us assume the mass of the cluster outside the half-mass radius produces MSPs primordially, while the cluster inside the half-mass radius produces MSPs through dynamics. Then we should find (at least) 100-1000 times more MSPs inside the half-mass radius than outside the half-mass radius. (This assumes that no MSPs originally located outside the half-mass radius dynamically segregate inside the half-mass radius over time.) Thus, we should find \(<<\)1% as many MSPs in the cluster outskirts as in their cores. But of the MSPs in the Freire catalog, we see 11 outside the cluster half-mass radius, and 130 within the half-mass radius; of order 10% lie in the outskirts. This is far more than can be explained by primordial formation. ### A three-body interaction with a normal cluster star #### 3.2.1 Most Likely Ejection Velocities Figure 2 shows via the dashed line the most likely ejection velocity for a three-body interaction involving a NS (1.5 M\({}_{\odot}\)), a WD (0.6 M\({}_{\odot}\); see Tremblay et al. (2016)) and a normal MS star (0.5 M\({}_{\odot}\)), leaving the NS and WD bound in a compact binary, as a function of the initial orbital separation (in this case, we assume the NS and WD were initially bound in a binary as well). This is done using Equation 7 for the peak or most likely ejection velocity. Note that we have corrected the final velocity for linear momentum conservation. For comparison, the solid line shows the same thing but for an BH-BH ejector.2 As is clear, a more compact NS-WD binary is needed to achieve a higher ejection velocity than an BH-BH binary: the BH-BH binary can achieve one order of magnitude higher velocities for the same initial orbital separation. More importantly, a compact NS-WD binary can easily reach the cluster escape speed at an initial orbital separation of only 1 AU (and higher velocities are attainable for even more compact binaries). Footnote 2: We note that this is technically a four-body interaction, but it can be viewed as a three-body interaction if the NS binary is considered to be a single object due to its very compact orbit. #### 3.2.2 Ejection Velocity Distributions To better compare the three-body interaction with a normal cluster star scenario to the historical scenario of a NS-WD binary in teraction with a BH-BH binary, we make use of the Corespray particle spray code (Grondin et al., 2023). Based on the theoretical three-body encounter framework presented in Valtonen & Karttunen (2006), Corespray samples the outcomes of three-body interactions within the cores of star clusters. Combining a cluster's orbital and structural parameters with a set of initial encounter configurations (e.g. system masses, orbital separations, binary binding energies, etc.), Corespray ultimately produces statistical representations of the kinematics and positions of objects that have undergone three-body interactions in cluster cores 3. Using the previously discussed system in NGC 6752 as a template, we use orbital and structural conditions for NGC 6752 from Baumgardt et al. (2018) and the same aforementioned encounter mass configurations. 
We then use Corespray to sample 50,000 three-body interactions for both cases over one azimuthal orbital period of NGC 6752 (\(P_{orb}=132.162\) Myr), where the initial separations between interacting objects are randomly sampled between the semi-major axis of the binary and twice the mean separation of objects in the core (0.25 pc). Footnote 3: To learn more about the capabilities and installation of Corespray, refer to Grondin et al. (2023) or visit [https://github.com/webbij/corespray](https://github.com/webbij/corespray). For the semi-major axis of the binary in Case 1, where three-body interactions are between a NS-WD binary and a normal MS star, we first randomly sample the binary's separation between the hard-soft boundary (Leigh et al., 2022) and the contact boundary. The contact boundary is defined by assuming the same NS and WD masses as above, corresponding to a NS radius of \(11\) km as approximated by Capano et al. (2020), and a WD radius of \(0.01R_{\odot}\) approximated from Provencal et al. (1998). We then identify the range of initial separations that lead to the NS-WD binary hardening and having a final separation that is comparable to the observed separation of \(a=0.025\) AU (D'Amico et al., 2002) to within \(10\%\). This range corresponds to separations between 0.02 and 0.05 au, which are randomly sampled to generate our final distribution of encounters. For Case 2, the NS-WD binary is treated as a single object and it is the separation of the BH-BH binary that contributes most to the energy of the three-body system. When generating 3-body interactions, we consider two scenarios; (i) the BH-BH separation is equal to the hard-soft boundary, and (ii) the BH-BH separation is equal to half of the hard-soft boundary of the cluster. The hard-soft boundary for a BH-BH binary in a star cluster is given by Equation 1 in Leigh et al. (2016) and calculated to be \(a=18.289\) AU, assuming masses of 100 \(M_{\odot}\) and 5 \(M_{\odot}\). Figure 3 illustrates the NS binary ejection velocities from the Corespray simulation for the two cases (Grondin et al., 2023). For the case of a NS-WD binary interacting with a normal cluster star, only \(\sim 21\%\) of NS-WD binaries are given strong enough kicks that lead to them escaping the cluster. The probability of cluster escape for the case of a NS-WD binary interacting with a BH-BH binary is higher, where approximately \(74\%\) and \(86\%\) of all NS-WD binaries escape the cluster when the BH-BH binary's separation equals the hard-soft boundary and half of the hard-soft boundary (Leigh et al., 2022), respectively. Hence, we conclude that interactions with a BH-BH binary have higher chances of ejecting a NS-WD binary from a cluster than retaining it, for almost all ranges of BH-BH binary masses. Conversely, interactions with normal stars are more likely to result in a NS-WD binary remaining bound to the cluster, providing a possible explanation for the location of a NS binary in the outskirts of a GC. In addition, using Equation A10 in Leigh & Sills (2011), such a single-binary interaction should occur roughly once every Myr, assuming a binary fraction of 10%, a core radius of 1 pc, a core number density of \(10^{6}\) M\({}_{\odot}\) pc\({}^{-3}\), and a mean binary orbital separation of 0.05 AU (i.e., close to our assumed initial separation for the calculation corresponding to Case 1 above). For comparison, the analogous scenario involving a BH-BH binary should occur on a timescale closer to a Gyr, as given by Equation 6. 
#### 3.2.3 Post Interaction Behaviour After the interaction, the NS-WD binary will most likely sink back into the core on a relaxation time. This is the case even if the WD forms after the interaction (i.e., the secondary expands to become a giant and then a WD post-interaction), since for an initially compact binary, we do not expect the subsequent binary evolution to cause the final binary to widen significantly if at all (e.g., if a common envelope phase occurs this should most likely tighten the binary further). ### A four-body interaction involving normal cluster stars In this scenario, a NS in a binary with a normal star (i.e., either a MS star or WD) undergoes an interaction with two other normal cluster stars of typical mass in a binary. This scenario is unlikely Figure 2: We show the most probable ejection velocity as a function of the initial orbital separation of the ejected binary using Equation 7. The solid black line shows the case where a BH-BH binary (composed of 100 and 5 M\({}_{\odot}\) BHs) ejects a compact NS binary (composed of a 1.5 M\({}_{\odot}\) NS and a 0.6 M\({}_{\odot}\) WD), whereas the dashed black line shows the same thing but for a three-body interaction involving the same NS binary and a 0.5 M\({}_{\odot}\) interloping single star. to end in the production of two binaries (Leigh et al., 2016) when four objects (i.e., two binaries) of similar mass and size interact. Instead, the most likely scenario is that the two least massive objects are ejected sequentially as single stars, leaving the two most massive objects remaining bound in a binary (Leigh et al., 2016). Hence, since the direction of ejection should be more or less random and the velocity distribution for each single is the same as for a simple three-body disruption (Leigh et al., 2016), this mechanism is roughly as likely to eject an NS binary as the analogous three-body interaction scenario with normal stars already considered in the previous section. For reference, using Equation A8 in Leigh & Sills (2011), such a binary-binary interaction should occur roughly every few tens of thousands of years, assuming cluster parameters typical of massive GCs (such as NGC 6752 and especially 47 Tuc), namely a binary fraction of 10%, a core radius of 1 pc, a core number density of \(10^{6}\) M\({}_{\odot}\) pc\({}^{-3}\), and a binary with a separation of 1 AU. We expect the rate of single-binary interactions to dominate over binary-binary interactions in clusters with high central densities, however, since here the binary fraction tends to be \(\lesssim\) 10% (Leigh & Sills, 2011; Sollima et al., 2008; Leigh et al., 2022). Hence, the analogous three-body interaction scenario is more likely. With that said, however, Leigh et al. (2016) showed that binary-binary interactions involving one wide binary and one compact binary tend to act as single-binary exchange interactions, with the heavy compact binary being exchanged into the wide binary, and ejecting one of its original binary companions in the process. Leigh et al. (2016) showed that, for reasonable initial assumptions, the recoil from the ejected single will often impart a kick of order a few tens of km s\({}^{-1}\) to the inner binary, perhaps enough for it to escape from the cluster core and into the outskirts. Hence, this mechanism is likely to produce a velocity close to the required velocity to put the putative triple into the outskirts. 
This scenario predicts that the compact NS binary should have a stable tertiary companion, though this seems to be ruled out in the case of NGC 6752 A by current pulsar timing (Corongiu et al., 2023). ### The implosion of the tertiary of a hierarchical triple Consider a stable hierarchical triple star system containing a NS in the inner binary. Hence, the system is composed of a NS binary in a compact orbit, with a third object orbiting it on a wide, stable orbit. Let us assume that the outer tertiary is a white dwarf, and that the secondary in the inner binary is overflowing its Roche lobe, transferring mass to not only its NS companion but also the outer tertiary (this could occur if, for example, there is a common envelope event in the inner binary; see below). If the tertiary is able to accrete, it could detonate as a supernova leaving behind no remnant (see Leigh et al. (2020) for more details). If this happens, the remaining compact NS binary, formerly the inner binary of the triple, will be launched at the instantaneous orbital velocity. Provided \(v_{\rm orb}\lesssim v_{\rm esc}\), which should be the case for most stable triples given the need for a wide outer orbit in order to maintain dynamical stability, this could deliver a compact NS binary to the cluster outskirts. In Figure 4, we show the critical tertiary orbital period (i.e., the orbital period needed to achieve the indicated ejection/orbital velocities) for several different values of the ejection (i.e., orbital) velocity. For this exercise, we assume a mass of 2.1 M\({}_{\odot}\) for the NS binary (as before) and a final mass of 1.4 M\({}_{\odot}\) for the outer tertiary just prior to detonation. Assuming ejection velocities of 1, 10 and 100 km s\({}^{-1}\) gives critical tertiary orbital periods of \(\sim\) 2.0 \(\times\) 10\({}^{7}\), 2.0 \(\times\) 10\({}^{4}\) and 20 years, respectively. For comparison, assuming an average stellar mass of 0.5 M\({}_{\odot}\) and a velocity dispersion of 10 km s\({}^{-1}\), the critical orbital period is \(\sim\) 5.9 \(\times\) 10\({}^{4}\) years, which will be even longer in lower velocity dispersion environments. Hence, dynamically "hard" triples can exist in clusters and still yield kick velocities with \(v_{\rm orb}\lesssim v_{\rm esc}\), since \(v_{\rm esc}\sim\) 10 - 50 km s\({}^{-1}\) for the densest and most massive Milky Way GCs. Such wide triples should also be dynamically stable for any inner binary with an orbital period of hours or days (e.g. Tokovinin, 2018). Next, we wish to know how compact the outer orbit needs to be in order for the inner binary to undergo a common envelope (CE) event and transfer mass to the outer tertiary. To answer this, we compute the orbital velocity for the tertiary of our chosen triple. Assuming an outer separation of 5 AU, the ejection velocity of the NS binary would be \(\sim\) 31 km s\({}^{-1}\), or 22 km s\({}^{-1}\) for an outer separation of 10 AU. Thus, if the tertiary accretes and detonates as a supernova explosion, leaving behind no remnant or causing it to disrupt dynamically, the compact NS binary could indeed be imparted with a kick of sufficient magnitude to deliver it to the cluster outskirts with \(v_{\rm orb}\lesssim v_{\rm esc}\), provided the outer tertiary separation is in the range \(\sim\) 5 - 10 AU. This is evident from Figure 4, which shows that having a constraint on the period of the putative NS binary immediately constrains the period of the outer orbit of the hypothetical triple. 
With the above said, however, we caution that it is unlikely that the hypothetical inner binary will overfill its Roche lobe with a separation of order 10 AU, given that the turn-off mass in a typical GC is \(\sim\) 0.8 M\({}_{\odot}\) or so. This is likely the case even independent of Figure 3: We show the ejection velocity distributions for a three-body encounter composed of (1) a NS-WD binary interacting with a normal MS star (dashed line) and (2) a BH-BH binary interacting with a compact NS-WD binary (solid lines, with the NS-WD binary being treated as a single compact object). In Case 2, we consider two additional sub-cases, where the binding energy is equal to the binding energy at (a) the hard-soft boundary and (b) half the hard-soft boundary in Leigh et al. (2022). Both sets of \(N=50,000\) three-body encounters are simulated using Corespray (Grondin et al., 2023), where the Baumgardt et al. (2018) escape velocity of \(v_{esc}\sim 31\) km/s for NGC 6752 is indicated with a red line. It is clear that NS-WD binaries are more likely to be ejected from NGC 6752 when interacting with both soft and hard BH-BH binaries than with normal MS stars, providing evidence that off-centre NSs are more likely produced by the latter type of interaction. any CE event, but detailed simulations would need to be performed to address just how much mass the putative tertiary might accrete. We conclude that this mechanism is indeed a viable, but unlikely, option to put NS binaries into the outskirts of star clusters. A perhaps more likely albeit similar scenario could be invoked early on in the cluster lifetime if, instead of a WD, the outer tertiary is a massive star that ends its life as a supernova, causing the triple system to disrupt. This does not leave much time, however, for the NS to be exchanged into the hypothetical triple system due to a dynamical channel and, as previously argued, the cluster dynamics must somehow be involved in the formation of NS/MSP binaries in order to explain their increased frequency in GCs relative to the field. ### A natal kick partially imparted to the binary centre of mass Consider a compact binary in a star cluster containing a primary WD and a secondary main-sequence star. As the secondary evolves, it will expand and transfer mass to the WD. We assume that the primary ultimately explodes as a supernova, receiving a natal kick in the range of a few to several hundred km s\({}^{-1}\) due to asymmetric mass loss, and leaving behind a NS remnant. If the kick direction opposes that of the orbital motion, then the binary can survive, and end up on a very compact eccentric orbit that should be rapidly circularized due to either tidal interactions or gravitational wave inspiral. This scenario could also work in younger star clusters if a normal star is in a relatively compact binary with a massive star that explodes to produce a NS. We imagine that some fraction \(f\) of the expelled mass accelerates/decelerates the detonator directly, whereas the remaining mass fraction \((1-f)\) acts to accelerate the binary centre of mass. In order to properly distinguish between the two extremes (i.e., \(f\sim 1\) or \(f\sim 0\)), we would need to perform detailed hydrodynamics simulations and follow the mass-loaded expelled gas in detail. To the best of our knowledge, such a study has yet to be done in the literature. For now, let us take the simplest assumption, and set \(f=0.5\). 
Then, if the mass is ejected in a direction that opposes the orbital motion, and a total mass \(M_{\rm ej}\) is ejected, we can use conservation of linear momentum to compute the final velocity of not only the NS in its orbit but also that of the binary centre of mass motion. Let us assume that the binary centre of mass is initially moving at 10 km s\({}^{-1}\) and expels in total 0.01 M\({}_{\odot}\) of gas at a speed of 100 km s\({}^{-1}\). Then by linear momentum conservation we have: \[M_{\rm ej}v_{\rm ej}+(fM-M_{\rm ej})v_{\rm fin}=Mv_{\rm init}, \tag{10}\] where \(M=m_{1}+m_{2}\) is the initial NS binary mass with \(m_{1}=1.4\) M\({}_{\odot}\) and \(m_{2}=0.8\) M\({}_{\odot}\). This gives for the final ejection velocity of the NS binary: \[v_{\rm fin}=(Mv_{\rm init}-M_{\rm ej}v_{\rm ej})/(fM-M_{\rm ej})\sim 20{\rm kms }^{-1}, \tag{11}\] which is of sufficiently small magnitude to launch it into the cluster outskirts without ejecting it from the cluster. We conclude that the accretion-induced collapse of a WD primary in a binary could produce a sufficient recoil velocity to account for a compact NS binary observed in the outskirts of a star cluster. How likely this mechanism is depends on the details of the supernova explosion and the probability of having a suitable progenitor binary in the cluster, both of which require further study to properly quantify. The question of whether or not such an explosion can provide a sufficient kick to the binary centre of mass without unbinding it will be central moving forward. We further caution that this mechanism also suffers from the same issue as discussed in the previous sections, namely that the cluster dynamics must be involved in the production of NS/MSP binaries in order to explain their much higher frequency in GCs relative to the field. If this mechanism were operating with a substantial rate in GCs, than it should also do so in the field, over-producing the frequency of NS/MSP binaries in the field relative to what is observed. ## 4 Simulations In this section, we present the results of Monte Carlo \(N\)-body simulations for GC evolution using the Cluster Monte Carlo code (CMC; Rodriguez et al. 2022, and references therein), which we use to assess whether or not GCs can eject NS binaries into the cluster outskirts. We further use the models to assess the relative frequencies of the various mechanisms for putting NS/MSP binaries into the cluster outskirts discussed in the previous section. ### The code and initial conditions CMC is based on the Henon-style orbit-averaged Monte Carlo method (Henon 1971a,b). It incorporates various relevant physics Figure 4: We show with the black vertical lines in the P\({}_{\rm in}\)-P\({}_{\rm out}\)-plane the critical outer orbital period needed to achieve the indicated ejection velocity of the NS binary in the context of the disrupted triple scenario for getting NS binaries into the outskirts. The red line shows the critical period for reaching the escape velocity; any objects falling to the left of this line will be ejected from the cluster. We assume component masses of 1.5 M\({}_{\odot}\) and 0.5 M\({}_{\odot}\) for the inner binary and a mass of 1.4 M\({}_{\odot}\) for the outer tertiary. We further assume circular orbits. Finally, the diagonal black line shows the boundary for dynamical stability (i.e., tertiaries are stable if they fall below the line) using the criteria from Tokovinin (2018). 
for cluster evolution, including two-body relaxation, strong dynamical interactions of singles and binaries, and tidal mass loss. Binary and stellar evolution is fully coupled to the dynamical evolution of the clusters and is calculated by the publicly available software COSMIC (Breivik et al., 2020), which is based on SSE (Hurley, Pols, & Tout, 2000) and BSE (Hurley, Tout, & Pols, 2002). Strong three- and four-body gravitational encounters are directly integrated by the Fewbody package (Fregeau et al., 2004; Fregeau & Rasio, 2007), which includes post-Newtonian effects for BHs (et al., 2014; Amaro-Seoane & Chen, 2016; Rodriguez et al., 2018, 2018). In particular, CMC simulates NSs and MSPs self-consistently following the treatments in Ye et al. (2019, and references therein), which showed good agreements with the spin periods and magnetic fields of observed pulsars. NSs are born in core-collapse supernovae (CCSNe), electron-capture supernovae (ECSNe), or accretion-induced collapses of WDs. CMC assumes that NSs born in CCSNe receive large natal kicks drawn from a Maxwellian distribution with a standard deviation \(\sigma_{\rm CCSN}=265\,{\rm km\,s^{-1}}\)(Hobbs et al., 2005) due to asymmetries in the supernova explosion. On the other hand, NSs born in ECSNe or accretion-induced collapses receive small natal kicks drawn from a Maxwellian distribution with a standard deviation \(\sigma_{\rm ECSN}=20\,{\rm km\,s^{-1}}\)(Kiel et al., 2008). All NSs are formed with spin periods and magnetic fields similar to the observed young radio pulsars. After their formation, NSs in binaries can be spun up to millisecond periods by angular momentum transfer during Roche lobe overflow (Hurley, Pols, & Tout, 2000, Eq. 54), and their magnetic fields decay according to the'magnetic field burying' scenario (e.g., Bhattacharya & van den Heuvel, 1991) where the magnetic fields decrease inversely proportional to the amount of mass accreted \((1+M_{acc}/10^{-6}\,M_{\odot})^{-1}\)(Kiel et al., 2008). At the same time, isolated pulsars slow down through magnetic dipole radiation (Kiel et al., 2008). For more details about the treatments of MSPs, see Ye et al. (2019). As an example, we search for halo MSPs in CMC models of two clusters listed in Table 1, NGC 6752 (Ye et al., 2023, their model 1a) which is a typical core-collapsed cluster (Harris, 1996, 2010 update) and 47 Tuc which is a massive non-core-collapsed cluster (Harris, 1996, 2010 update; Ye et al., 2022). These models closely match the respective clusters' observed surface brightness profiles and velocity dispersion profiles. The NGC 6752 simulation has an initial number of stars \(N=8\times 10^{5}\), virial radius \(R_{v}=0.5\) pc, metallicity \(Z=0.0002\), and Galactocentric distance \(R_{g}=8\) kpc. Its stellar distribution follows a King profile with a concentration parameter \(W_{0}=5\)(King, 1966). A standard Kroupa broken power-law (Kroupa, 2001) between \(0.08\,M_{\odot}\) and \(150\,M_{\odot}\) is assumed for the initial mass function, and the model has an initial binary fraction of \(5\%\). The 47 Tuc simulation has an initial number of stars \(N=3\times 10^{6}\), virial radius \(R_{v}=4\) pc, metallicity \(Z=0.0038\), and Galactocentric distance \(R_{g}=7.4\) kpc. It initially follows an Elson profile (Elson, Fall, & Freeman, 1987) for stellar distribution with \(\gamma=2.1\) where \(\gamma\) is a free parameter of the Elson power-law slope (Ye et al., 2022, Eq. 8). 
As an example, we search for halo MSPs in CMC models of two clusters listed in Table 1: NGC 6752 (Ye et al., 2023, their model 1a), which is a typical core-collapsed cluster (Harris, 1996, 2010 update), and 47 Tuc, which is a massive non-core-collapsed cluster (Harris, 1996, 2010 update; Ye et al., 2022). These models closely match the respective clusters' observed surface brightness profiles and velocity dispersion profiles. The NGC 6752 simulation has an initial number of stars \(N=8\times 10^{5}\), virial radius \(R_{v}=0.5\) pc, metallicity \(Z=0.0002\), and Galactocentric distance \(R_{g}=8\) kpc. Its stellar distribution follows a King profile with a concentration parameter \(W_{0}=5\) (King, 1966). A standard Kroupa broken power law (Kroupa, 2001) between \(0.08\,M_{\odot}\) and \(150\,M_{\odot}\) is assumed for the initial mass function, and the model has an initial binary fraction of \(5\%\). The 47 Tuc simulation has an initial number of stars \(N=3\times 10^{6}\), virial radius \(R_{v}=4\) pc, metallicity \(Z=0.0038\), and Galactocentric distance \(R_{g}=7.4\) kpc. It initially follows an Elson profile (Elson, Fall, & Freeman, 1987) for the stellar distribution with \(\gamma=2.1\), where \(\gamma\) is the power-law slope of the Elson profile (Ye et al., 2022, Eq. 8). The simulation adopts a two-component power-law initial mass function with power-law slopes \(\alpha_{1}=0.4\) and \(\alpha_{2}=2.8\) for the lower- and higher-mass parts, respectively. Masses are sampled between \(0.08\,M_{\odot}\) and \(150\,M_{\odot}\) with a break mass at \(0.8\,M_{\odot}\). The simulation assumes an initial binary fraction of \(2.2\%\).

### General results

In this section, we present the results of our CMC simulations for cluster evolution. In particular, after discussing the radial distributions of NS/MSP binaries, we assess which of the mechanisms discussed in Section 3 are operating in the models and with what relative frequencies. We show the projected radial offsets of NSs and MSPs from the NGC 6752 and 47 Tuc models in Figures 5 and 6, respectively.

Figure 5: Radial distributions of NSs (blue), NS binaries (orange), and MSPs (gray) from the NGC 6752 simulation. We combine the projected radial offsets from multiple time steps between 11 and 13.8 Gyr of the simulation for better statistics. The vertical green lines mark the offsets of the observed MSPs in NGC 6752 from http://www.naic.edu/~pfreire/GCpsr.html. The vertical yellow line shows the observed half-light radius of the cluster (Harris, 1996, 2010 update).

Figure 5 includes all NSs and MSPs from 13 model snapshots (time steps) between 11 and 13.8 Gyr of the simulation for better statistics. The times roughly span the ages estimated for NGC 6752 (Buonanno et al., 1986; Gratton et al., 1997, 2003; Correnti et al., 2016; Souza et al., 2020; Bedin et al., 2023). Overall, about six MSPs lie outside the half-light radius of the cluster between 11 and 13.8 Gyr. At each of the 13 snapshots, we find between zero and three MSPs (all except one snapshot have at least one MSP), consistent with the observed number. These MSPs are ejected to the cluster halo directly through strong exchange encounters with WD binaries (either double WDs or WD-main-sequence star binaries), through natal kicks from the accretion-induced collapse of one of the components triggered by dynamical interactions with WD binaries (in this case it is an NS-WD binary), or through interactions with stellar-mass BH binaries or single stars such as main-sequence stars and WDs. Four halo MSPs are in binaries, where three binaries are in tight and circular orbits with very low-mass, WD-like companions (\(\sim 0.01-0.03\,M_{\odot}\)), and one is in an eccentric binary with a massive WD companion. In addition, most non-MSP NS binaries in Figure 5 are ejected to the outskirts by interactions with single WDs or main-sequence stars, with a few ejected by WD binaries. These ejection mechanisms are consistent with those discussed in Section 3. There is no IMBH in the NGC 6752 simulation, and there are \(\sim 5\) stellar-mass BHs retained at the present day. It is also not surprising that MSPs can be relocated to the cluster halo through dynamical encounters with WD binaries. It has been shown that WDs dominate the cores of core-collapsed clusters (Kremer et al., 2021, and references therein), while most of the BHs formed in the clusters have been ejected through dynamical interactions. Similarly, Figure 6 shows all NSs and MSPs from 17 snapshots between 9 and 12 Gyr of the 47 Tuc simulation, which is the age span predicted in Ye et al. (2022).
A total of about six MSPs lie outside the half-light radius of the cluster over this time span, and at each snapshot there are about three to five MSPs, consistent (within small-number statistics) with the one MSP seen outside the half-light radius. In contrast to NGC 6752, all of these MSPs are in tight and circular binaries with companion masses of \(\sim 0.01\,M_{\odot}\). Note that the binary properties of the halo MSPs in the simulations are affected by the binary evolution prescriptions we adopt and may not match the observed properties exactly. Three of the six binaries are primordial and born far away from the cluster centre (\(\gtrsim 4\) pc). The other three are formed through collisions with giant stars, where the core of a giant star becomes a component star in the binary. The NSs in the latter binaries are formed in accretion-induced collapses (where a WD companion accretes from the core of the original giant star), and the natal kicks contribute to displacing two of them from the cluster core (the third NS binary only appears very briefly at the outskirts, probably on an eccentric orbit in the cluster). On average over \(\sim 3\) Gyr, the fraction of MSPs from primordial binaries in the cluster outskirts is \(\lesssim 5\%\), somewhat larger than calculated in Section 3.1. We also note that since 47 Tuc is more massive than most other GCs in the Milky Way (Harris, 1996, 2010 update; Baumgardt et al., 2018), the number of halo MSPs formed in primordial binaries in other non-core-collapsed clusters will likely be closer to zero. These binaries are not ejected to the cluster halo through recoil kicks from dynamical encounters as in NGC 6752, but rather because they are born in the outskirts (where the density is low and the relaxation time is long) and in part because the stellar-mass BHs retained in the cluster prevent the NSs from mass segregating to the cluster centre. Unlike NGC 6752, which is core-collapsed and does not have many BHs retained, there are \(\sim 200\) stellar-mass BHs retained in 47 Tuc at the present day, and there are no IMBHs (Ye et al., 2022). Because of mass segregation, these BHs dominate the cluster core and act as energy sources through 'BH binary burning', which supports the cluster against core collapse (e.g., Kremer et al., 2020). At the same time, the lighter NSs are located further out and do not have many dynamical encounters (Ye et al., 2019). Hence, in effect, the NSs and NS binaries are in the outskirts in the 47 Tuc model because (1) they are located far out in the outskirts where the stellar density is low and the relaxation time is long; and (2) the BHs heat the core, and this heat is in turn transferred to the rest of the cluster, including the NSs, in part helping them to stay farther out in the cluster potential well for longer. This effect is not included in the simple analytic estimates of the two-body relaxation time used in the previous sections. This is different from what occurs in the NGC 6752 simulation, in which the NSs and NS binaries do segregate back into the core once the BHs have been ejected, but some of them can be ejected back out into the halo, predominantly through three- or four-body interactions with normal stars and WDs. In this case, the absence of the BHs allows core collapse to occur, which accelerates the rate at which NSs and NS binaries are ejected back into the cluster outskirts via single-binary interactions.
The aforementioned redistribution of NSs and NS binaries makes a prediction: the radial distribution of MSPs should be more extended in core-collapsed GCs relative to non-core-collapsed clusters. We can test this using the observed distributions of MSPs (see http://www.naic.edu/~pfreire/GCpsr.html), and this is shown in Figure 7. The radial distribution is slightly more extended for core-collapsed clusters (see also, e.g., Verbunt & Freire, 2014, their Figure 2); however, this result is not statistically significant. A KS test suggests that the two distributions may be drawn from the same underlying distribution, with a KS statistic of 0.16 and an associated p-value of 0.43. For comparison, an Anderson-Darling test suggests that the hypothesis that the two distributions are drawn from the same underlying distribution may be rejected at the 10\(\%\) level. With that said, this comparison should be regarded carefully, since the prediction considered here does not account for other factors such as completeness, NS retention in clusters, etc. These additional effects could be important and significantly affect our naive comparison.

Figure 6: Similar to Figure 5 but for radial distributions of NSs (blue), NS binaries (orange), and MSPs (gray) from the 47 Tuc simulation. We combine the projected radial offsets from multiple time steps between 9 and 12 Gyr of the simulation for better statistics. The vertical green lines mark the offsets of the observed MSPs in 47 Tuc from http://www.naic.edu/~pfreire/GCpsr.html, and the vertical yellow line shows the observed half-light radius of the cluster (Harris, 1996, 2010 update).

Figure 7: The observed offset distributions, in units of the host clusters' half-light radii, of pulsars in GCs for both core-collapsed clusters (blue) and non-core-collapsed clusters (orange). Data taken from http://www.naic.edu/~pfreire/GCpsr.html. We also estimate the offsets of pulsars in Omega Centauri (Chen et al., 2023) using the coordinates of the cluster centre from Harris (1996, 2010 update) and include them in the figure.

### The dominant mechanism(s)

Finally, we address which of the mechanisms discussed in Section 3 operate with the largest frequencies in our simulations, using the NGC 6752 simulation as an example. In general, single-binary interactions with MS stars and WDs tend to most commonly eject MSP binaries into the cluster outskirts, but binary-binary interactions also contribute (especially those involving WD binaries). This is no surprise, since the timescale for single-binary interactions is shorter than that for binary-binary interactions for binary fractions \(\lesssim 10\)% (Leigh & Sills, 2011), and the binary fractions in our simulations are \(\sim 5\%\) for NGC 6752 and \(\sim 2\%\) for 47 Tuc. We also find that natal kicks from the accretion-induced collapse of WDs contribute. For example, of the six MSPs ejected to the outskirts in the NGC 6752 model, three of them are ejected by exchange encounters in binary-single interactions (one has a natal kick from accretion-induced collapse which may help get it further out), two of them experience a single-binary interaction as their last strong encounter, and the last MSP is kicked to the outskirts via a natal kick from accretion-induced collapse and has a binary-binary interaction as its last strong encounter.
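For reference, the two-sample comparison quoted earlier in this section can be carried out along the following lines with scipy. The offset arrays below are placeholders standing in for the observed pulsar offsets (in units of the half-light radius) compiled from the Freire catalogue, split by whether the host cluster is core-collapsed; the printed numbers are therefore only illustrative.

```python
import numpy as np
from scipy.stats import ks_2samp, anderson_ksamp

# Placeholder arrays: projected pulsar offsets in units of the host cluster's
# half-light radius, split by host-cluster type.  Replace with the values
# compiled from http://www.naic.edu/~pfreire/GCpsr.html.
offsets_core_collapsed = np.array([0.1, 0.3, 0.5, 0.8, 1.2, 2.0])
offsets_non_core_collapsed = np.array([0.1, 0.2, 0.3, 0.4, 0.6, 0.9])

# Two-sample Kolmogorov-Smirnov test: are the two offset distributions
# consistent with being drawn from the same underlying distribution?
ks_stat, ks_pvalue = ks_2samp(offsets_core_collapsed, offsets_non_core_collapsed)
print(f"KS statistic = {ks_stat:.2f}, p-value = {ks_pvalue:.2f}")

# Anderson-Darling k-sample test, which is more sensitive to the tails of the
# distributions; scipy caps the returned significance level at 25%.
ad_result = anderson_ksamp([offsets_core_collapsed, offsets_non_core_collapsed])
print(f"AD statistic = {ad_result.statistic:.2f}, "
      f"significance level = {ad_result.significance_level:.3f}")
```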
## 5 Discussion and Summary

It has been argued in the literature that NS and MSP binaries can be ejected into the outskirts of star clusters via an interaction with a massive black hole binary that expels them from the core. We challenge this idea in this paper and argue that this mechanism will only rarely account for such binaries. Only for primary masses \(\lesssim 100\) M\({}_{\odot}\) and a narrow range of orbital separations should a BH-BH binary be both dynamically hard and able to produce a sufficiently low kick velocity to retain the NS binary in the cluster. We explore several alternative mechanisms that would cause NS binaries to be retained in clusters, the most likely of which is a three-body interaction involving the NS/MSP binary and a normal star. We expect normal stars (MS and WD) to be more common than BH-BH binaries, reducing the timescale for binary-single interactions with normal stars relative to that for interactions with BH-BH binaries. We caution, however, that the precise answer will depend on the distributions of binary orbital properties (i.e., the orbital separation and mass ratio distributions), the frequency of BH-BH binaries and their mass distribution, and so on.

We argue in this paper that the NS binary NGC 6752 A, which lies far beyond the half-mass radius of its host cluster NGC 6752, was most likely placed there via a binary-single interaction with a normal MS or WD star. This is in contrast to the system having been put there via an interaction with a massive BH-BH binary, as previously argued in the literature. For an old GC such as NGC 6752 that has experienced core collapse, we naively expect that few BH-BH binaries should be left, with most having been ejected due to dynamical interactions with other BHs. As argued previously, it follows that the timescale for binary-single interactions with normal MS or WD stars should be shorter than that for interactions with BH-BH binaries, even an IMBH-BH binary (see, for example, Leigh et al. (2014)). All of these arguments indirectly suggest an inverse relationship between the frequency of BHs in clusters and that of NS/MSP binaries (also see Ye et al., 2019). This is for two reasons. First, when many BHs are present, they provide a heat source to the cluster, delaying the NSs from mass segregating into the centre, where they are more likely to undergo a dynamical interaction that exchanges them into binaries on a short timescale. Second, if an interaction involving a BH and the NS/MSP or NS/MSP binary occurs, it is most likely to either exchange the BH into the binary or eject the NS/MSP binary from the cluster entirely.

The binary NGC 6752 A's overall properties match those expected from mass transfer from a subgiant, leaving a helium WD with the mass predicted by the Tauris & Savonije (1999) relation (Corongiu et al., 2012, 2023). The low eccentricity suggests that the mass transfer occurred after the dynamical ejection event, although pulsar timing indicates that the WD spin is misaligned with the orbit, suggesting that the ejection happened after the mass transfer (Corongiu et al., 2023).

We have utilized benchmark Monte Carlo \(N\)-body simulations of the clusters NGC 6752 and NGC 104 using the CMC code to test our results. In NGC 6752, at about 12 Gyr, there are three simulated MSPs with offsets larger than the half-light radius, roughly matching the observed numbers. Two of these MSPs are single and one is in a circular binary with a very low-mass, WD-like companion (about 0.02 M\({}_{\odot}\)).
The binary MSP is ejected to the halo from binary-single interactions with WD-WD binaries. We caution that, although the agreement between our simulated data and the observations is good for NGC 6752 at 12 Gyr, the number of MSPs in the outskirts is a sporadic function of time. However, for almost all timesteps between \(\sim\) 11-13.8 Gyr, we find at least one halo MSP, suggesting that observing only a single MSP or NS/MSP binary is not altogether rare. In the 47 Tuc simulation, we find that the NSs and NS binaries are in the outskirts at late times because the relaxation time can be long in the outskirts where the density is low, and the BHs remain in the core where they act as a heat source. This source of energy is ultimately transferred to the outer cluster regions, delaying a non-negligible fraction of NSs and NS binaries from segregating into the core. It is important for the NSs to end up in the higher density core so that the timescale for them to be exchanged into binaries (and/or be dynamically hardened to a compact state) is sufficiently short. This is different from what occurs in the NGC 6752 simulation. Here, the NSs/MSPs and NS/MSP binaries have time to segregate into the core once the BHs have been ejected. After this, some are ejected back out into the halo predominantly through three-body interactions with normal stars and WDs. In the NGC 6752 case, the late-time absence of the BHs allows core collapse to occur, which accelerates the rate at which NSs/MSPs and NS/MSP binaries are ejected back into the halo. Our simulations suggest that clusters that undergo core collapse should experience a spike in the rate of single-binary interactions due to the increased central density (see also, e.g., Ye et al., 2019). This in turn increases the rate at which NSs/MSPs and NS/MSP binaries are ejected from the core due to dynamical interactions. This implies that, for a given relaxation time (and hence total cluster mass and size), post-core-collapse (PCC) GCs should be more likely to host MSPs and MSP binaries in the cluster outskirts. This makes a prediction that can be tested observationally using the observed radial distributions of PCC and non-PCC clusters, namely that PCC clusters should show a broader radial distribution of NSs/MSPs during and after core collapse. However, we find that MSPs in PCC clusters are observed to be only mildly more extended radially than are MSPs in non-PCC clusters. We find from our simulations that the most common mechanisms to put NS/MSP binaries into the outskirts of GCs are single-binary interactions involving MS stars and WDs. Binary-binary interactions also contribute, but not as frequently as single-binary interactions, since the timescale for single-binary interactions is shorter than that for binary-binary interactions for binary fractions \(\lesssim 10\)% (Leigh & Sills, 2011), and the binary fractions in our simulations are less than this. We also find that natal kicks from the accretion-induced collapse of WDs contribute to putting NS/MSP binaries into the cluster outskirts. For example, of the six MSPs ejected to the outskirts in the NGC 6752 model, five of them experience a single-binary interaction as their last strong encounter, and the last MSP is kicked to the outskirts via a natal kick from accretion-induced collapse and its last strong encounter is a binary-binary interaction.
We can summarize our main conclusions as follows:

* In those clusters where the relaxation time is shorter than a Hubble time even in the outskirts, single-binary interactions involving MS stars or WDs are the dominant mechanism for putting NS/MSP binaries into the cluster outskirts. This is supported by the interaction energetics, which can give a sufficient recoil velocity kick to the NS/MSP binary centre of mass to put it into the outskirts while also making it sufficiently compact to have an orbital period of order days or less (e.g., to produce NGC 6752 A).
* Interactions with BH-BH binaries are more likely to eject NS/MSP binaries from the cluster altogether, based on energetic arguments. They can also operate on a much longer timescale than do normal single-binary interactions (i.e., involving only an NS and MS stars and/or WDs).
* Natal kicks accompanying NS formation via the accretion-induced collapse of a WD in a compact binary can also eject NS/MSP binaries from the core into the cluster outskirts, as found here both analytically and via Cluster Monte Carlo code simulations.
* We find two reasons why some clusters might still be harbouring NS/MSP binaries in their outskirts after a Hubble time: (1) Clusters with relaxation times in their outskirts that exceed a Hubble time could still host NS/MSP binaries in their outskirts today. As argued in Section 3.1, however, this primordial mechanism could only realistically explain a handful of NS/MSP binaries in the outskirts out of the total observed sample considered here (i.e., the Freire catalog). (2) In clusters with many BHs, the BHs can act as a heat source in the core, feeding kinetic energy to the other stars and hence in part delaying the NSs from mass segregating into the centre.

## Acknowledgments

NWCL gratefully acknowledges the generous support of a Fondecyt Regular grant 1230082, as well as support from Millenium Nucleus NCN19_058 (TITANs) and funding via the BASAL Centro de Excelencia en Astrofisica y Tecnologias Afines (CATA) grant PFB-06/2007. NWCL also thanks support from ANID BASAL projects ACE210002 and FB210003. C.S.Y. acknowledges support from the Natural Sciences and Engineering Research Council of Canada (NSERC) DIS-2022-568580. S.M.G. is partially supported by an Ontario Graduate Scholarship. G.F. acknowledges support from NASA Grant 80NSSC21K1722 and from NSF Grant AST-1716762 at Northwestern University. CH is supported by NSERC Discovery Grant RGPIN-2016-04602. The authors also thank Maria Drout, who provided important neutron star insights that improved this study.

## Data Availability

The data underlying this article will be shared on reasonable request to the corresponding author.
2308.16373
Entropy Estimate for Degenerate SDEs with Applications to Nonlinear Kinetic Fokker-Planck Equations
The relative entropy for two different degenerate diffusion processes is estimated by using the Wasserstein distance of initial distributions and the difference between coefficients. As applications, the entropy-cost inequality and exponential ergodicity in entropy are derived for distribution dependent stochastic Hamiltonian systems associated with nonlinear kinetic Fokker-Planck equations.
Zhongmin Qian, Panpan Ren, Feng-Yu Wang
2023-08-31T00:21:58Z
http://arxiv.org/abs/2308.16373v3
Entropy Estimate for Degenerate SDEs with Applications to Nonlinear Kinetic Fokker-Planck Equations+ ###### Abstract The relative entropy for two different degenerate diffusion processes is estimated by using the Wasserstein distance of initial distributions and the difference between coefficients. As applications, the entropy-cost inequality and exponential ergodicity in entropy are derived for distribution dependent stochastic Hamiltonian systems associated with nonlinear kinetic Fokker-Planck equations. AMS subject Classification: 60J60, 60H30. Keywords: Entropy estimate, degenerate diffusion process, stochastic Hamiltonian system, nonlinear kinetic Fokker-Planck equation.

## 1 Introduction

To characterize the stability of stochastic systems under perturbations, a natural way is to estimate the difference of the distributions of two different processes; see [14] for a comparison theorem on transition densities (i.e. heat kernels) of diffusions with different drifts. Recently, by using the entropy inequality established by Bogachev, Rockner and Shaposhnikov [1] for diffusion processes, and by developing a bi-coupling argument, the entropy and probability distances have been estimated in [16, 10] for different non-degenerate SDEs with distribution dependent noise. In this paper, we aim to establish an entropy inequality for degenerate diffusion processes. As applications, we establish a log-Harnack inequality and study the exponential ergodicity in entropy for stochastic Hamiltonian systems with distribution dependent noise. Let us start with a simple stochastic Hamiltonian system whose Hamiltonian function is given by \[H(x):=V_{1}(x^{(1)})+V_{2}(x^{(2)})\ \ \text{for}\ x=(x^{(1)},x^{(2)})\in \mathbb{R}^{d}\times\mathbb{R}^{d},\] where \(V_{i}\in C^{2}(\mathbb{R}^{d})\) with \(\|\nabla^{2}V_{i}\|_{\infty}<\infty,i=1,2\). Then \(X_{t}=(X_{t}^{(1)},X_{t}^{(2)})\), with \(X_{t}^{(1)}\) the speed and \(X_{t}^{(2)}\) the location of the stochastic particle, solves the following degenerate stochastic differential equation (SDE) on \(\mathbb{R}^{d}\times\mathbb{R}^{d}:\) \[\begin{cases}\mathrm{d}X_{t}^{(1)}=\nabla V_{2}(X_{t}^{(2)})\mathrm{d}t,\\ \mathrm{d}X_{t}^{(2)}=\sqrt{2}\,\mathrm{d}W_{t}-\big{(}\nabla V_{1}(X_{t}^{(1) })+\nabla V_{2}(X_{t}^{(2)})\big{)}\mathrm{d}t,\end{cases} \tag{1.1}\] where \(W_{t}\) is a \(d\)-dimensional Brownian motion on a filtered probability space \((\Omega,\mathscr{F},(\mathscr{F}_{t})_{t\geq 0},\mathbb{P})\). It is well known that the distribution density function of \(X_{t}\) solves the associated kinetic Fokker-Planck equation. When, for each \(i=1,2\), \(\mu^{(i)}(\mathrm{d}x^{(i)}):=\mathrm{e}^{-V_{i}(x^{(i)})}\mathrm{d}x^{(i)}\) is a probability measure on \(\mathbb{R}^{d}\), SDE (1.1) has a unique invariant probability measure \[\bar{\mu}(\mathrm{d}x):=\mu^{(1)}(\mathrm{d}x^{(1)})\mu^{(2)}(\mathrm{d}x^{(2 )}),\ \ \text{for}\ x=(x^{(1)},x^{(2)})\in\mathbb{R}^{d}\times\mathbb{R}^{d}.\]
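As a purely illustrative complement to (1.1), the following Python sketch integrates the system with an Euler-Maruyama scheme for one assumed choice of potentials, \(V_1(x)=V_2(x)=x^2/2\) in dimension \(d=1\); it only shows how samples approximating the law of \(X_t\) can be generated and plays no role in the analysis below.

```python
import numpy as np

def simulate_kinetic_sde(n_paths=2000, T=10.0, dt=1e-3, seed=0):
    """Euler-Maruyama simulation of (1.1) with the illustrative quadratic
    potentials V1(x) = V2(x) = x^2/2 (d = 1), so that grad Vi(x) = x:
        dX1 =  X2 dt
        dX2 =  sqrt(2) dW - (X1 + X2) dt
    Returns the terminal ensemble (X1_T, X2_T) over n_paths independent paths.
    """
    rng = np.random.default_rng(seed)
    x1 = np.full(n_paths, 2.0)   # deterministic initial condition
    x2 = np.zeros(n_paths)
    for _ in range(int(T / dt)):
        dW = rng.normal(scale=np.sqrt(dt), size=n_paths)
        # plain Euler step, both components updated from the old values
        x1, x2 = x1 + x2 * dt, x2 + np.sqrt(2.0) * dW - (x1 + x2) * dt
    return x1, x2

if __name__ == "__main__":
    x1, x2 = simulate_kinetic_sde()
    # For these potentials the invariant measure is a standard Gaussian in
    # each component, so both sample variances should be close to 1 by T = 10.
    print("var(X1_T) =", x1.var(), " var(X2_T) =", x2.var())
```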
According to Villani [18], suppose that \(\mu^{(i)}\) satisfies the Poincare inequality \[\mu^{(i)}(f^{2})\leq\mu^{(i)}(f)^{2}+C\mu^{(i)}(|\nabla f|^{2}),\ \ \forall f\in C_{b}^{1}(\mathbb{R}^{d}),i=1,2,\] for some constant \(C>0\), where and in the sequel \(\mu(f):=\int f\mathrm{d}\mu\) for a measure \(\mu\) and a function \(f\) if the integral exists. Then the Markov semigroup \(P_{t}\) associated with (1.1) converges exponentially to \(\bar{\mu}\) in \(H^{1}(\bar{\mu})\), i.e. for some constants \(c,\lambda>0\), \[\bar{\mu}\big{(}|P_{t}f-\bar{\mu}(f)|^{2}+|\nabla P_{t}f|^{2}\big{)}\leq c \mathrm{e}^{-\lambda t}\bar{\mu}\big{(}|f-\bar{\mu}(f)|^{2}+|\nabla f|^{2} \big{)}\] for any \(t\geq 0\) and \(f\in C_{b}^{1}(\mathbb{R}^{d})\). This property, known as "hypocoercivity" due to Villani [18], has been explored further by various authors in a series of papers on the exponential convergence of \(P_{t}\) in \(L^{2}(\mu)\), such as [2] by Camrud, Herzog, Stoltz and Gordina, as well as [6] by Grothaus and Stilgenbauer, based on an abstract analytic framework built up by Dolbeaut, Mouhot and Schmeiser [4]; see also the recent work [5] for the study of singular models. In case the Poincare inequality fails, slower convergence rates are presented in [7, 11] using the weak Poincare inequality developed by Rockner and the third named author [17]. On the other hand, the study of the exponential ergodicity in the relative entropy arising from information theory, which is stronger than that in \(L^{2}\) (see [19]), has become an important topic. Recall that if \(\mu\) and \(\nu\) are two probability measures, then the relative entropy of \(\mu\) with respect to \(\nu\) is defined by \[\mathrm{Ent}(\mu|\nu):=\begin{cases}\mu\big{(}\log\frac{\mathrm{d}\mu}{ \mathrm{d}\nu}\big{)},&\text{ if }\mu\text{ is absolutely continuous w.r.t. }\nu,\\ \infty,&\text{otherwise}.\end{cases}\] By establishing a log-Harnack inequality, the exponential ergodicity in entropy has been derived in [19] for stochastic Hamiltonian systems with linear \(\nabla V_{2}\), and has been further extended in [15, 9] to the case with distribution dependent drift. However, the log-Harnack inequality and the exponential ergodicity in entropy are still unknown for stochastic Hamiltonian systems with nonlinear \(\nabla V_{2}\). To formulate distribution dependent SDEs, we introduce the Wasserstein space \(\mathscr{P}_{2}(\mathbb{R}^{d})\) of probability measures on \(\mathbb{R}^{d}\) having finite second moment. It is a Polish space under the Wasserstein distance \[\mathbb{W}_{2}(\mu,\nu):=\inf_{\pi\in\mathscr{C}(\mu,\nu)}\bigg{(}\int_{ \mathbb{R}^{d}\times\mathbb{R}^{d}}|x-y|^{2}\pi(\mathrm{d}x,\mathrm{d}y) \bigg{)}^{\frac{1}{2}},\] where \(\mathscr{C}(\mu,\nu)\) denotes the set of all couplings of \(\mu\) and \(\nu\). Let \(\mathscr{L}_{\xi}\) denote the distribution of the random variable \(\xi\). To illustrate our general results, we consider below the distribution dependent stochastic Hamiltonian system for \(X_{t}:=(X_{t}^{(1)},X_{t}^{(2)})\in\mathbb{R}^{d_{1}}\times\mathbb{R}^{d_{2}}\): \[\begin{cases}\mathrm{d}X_{t}^{(1)}=\big{\{}BX_{t}^{(2)}+b(X_{t})\big{\}} \mathrm{d}t,\\ \mathrm{d}X_{t}^{(2)}=\sigma(\mathscr{L}_{X_{t}})\mathrm{d}W_{t}+Z^{(2)}(X_{t} ^{(2)},\mathscr{L}_{X_{t}})\mathrm{d}t,\ \ t\geq 0,\end{cases} \tag{1.2}\] where \(B\) is a \(d_{1}\times d_{2}\) matrix such that \(BB^{*}\) is invertible (i.e. \(\mathrm{Rank}(B)=d_{1}\)), \(b\in C_{b}^{2}(\mathbb{R}^{d_{1}+d_{2}})\) is such that \[\big{\langle}(\nabla^{(2)}b)B^{*}v,v\big{\rangle}\geq-\delta|B^{*}v|^{2},\quad v \in\mathbb{R}^{d_{1}}\] holds for some constant \(\delta\in(0,1)\), where \(\nabla^{(2)}\) is the gradient in \(x^{(2)}\in\mathbb{R}^{d_{2}}\), and \[\sigma:\mathscr{P}_{2}(\mathbb{R}^{d_{1}+d_{2}})\to\mathbb{R}^{d_{2}\otimes d _{2}},\ \ Z^{(2)}:\mathbb{R}^{d_{1}+d_{2}}\times\mathscr{P}_{2}(\mathbb{R}^{d_{1}+d_{2} })\to\mathbb{R}^{d_{2}}\] are Lipschitz continuous.
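To fix ideas about the two quantities just introduced, the short sketch below evaluates \(\mathrm{Ent}(\mu|\nu)\) and \(\mathbb{W}_{2}(\mu,\nu)\) in the one case where both admit elementary closed forms, namely one-dimensional Gaussian measures; the formulas used are classical, and the example is only an illustration of the definitions, playing no role in the arguments that follow.

```python
import numpy as np

def gaussian_relative_entropy(m1, s1, m2, s2):
    """Ent(mu|nu) for mu = N(m1, s1^2) and nu = N(m2, s2^2) (classical formula)."""
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2.0 * s2**2) - 0.5

def gaussian_w2(m1, s1, m2, s2):
    """W2(mu, nu) for one-dimensional Gaussians: sqrt((m1-m2)^2 + (s1-s2)^2)."""
    return np.sqrt((m1 - m2)**2 + (s1 - s2)**2)

if __name__ == "__main__":
    # mu = N(0, 1), nu = N(1, 2^2)
    print("Ent(mu|nu) =", gaussian_relative_entropy(0.0, 1.0, 1.0, 2.0))
    print("W2(mu, nu) =", gaussian_w2(0.0, 1.0, 1.0, 2.0))
    # Both quantities vanish exactly when the two Gaussians coincide.
    print(gaussian_relative_entropy(0.0, 1.0, 0.0, 1.0), gaussian_w2(0.0, 1.0, 0.0, 1.0))
```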
According to [20, Theorem 2.1], (1.2) is well-posed for distributions in \(\mathscr{P}_{2}(\mathbb{R}^{d_{1}+d_{2}})\), i.e. for any \(\mathscr{F}_{0}\)-measurable initial value \(X_{0}\) with \(\mathscr{L}_{X_{0}}\in\mathscr{P}_{2}(\mathbb{R}^{d_{1}+d_{2}})\), (respectively, any initial distribution \(\mu\in\mathscr{P}_{2}(\mathbb{R}^{d_{1}+d_{2}})\)), the SDE has a unique strong (respectively, weak) solution with \(\mathscr{L}_{X_{t}}\in\mathscr{P}_{2}(\mathbb{R}^{d_{1}+d_{2}})\) continuous in \(t\geq 0\). Let \(P_{t}^{*}\,\mu:=\mathscr{L}_{X_{t}}\) where \(X_{t}\) is the solution of (1.2) with initial distribution \(\mu\in\mathscr{P}_{2}\). If \(\nabla Z(\cdot,\mu)\) is bounded and Lipschitz continuous uniformly in \(\mu\), then the following assertions are implied by Theorem 4.1. * By (4.4) for \(k=0\), there exists a constant \(c>0\) such that \[\mathrm{Ent}(P_{t}^{*}\mu|P_{t}^{*}\nu)\leq\frac{c}{t^{3}}\mathbb{W}_{2}(\mu, \nu)^{2},\quad t\in(0,1];\ \mu,\nu\in\mathscr{P}_{2}(\mathbb{R}^{d_{1}+d_{2}}).\] * If \(P_{t}^{*}\) is exponentially ergodic in \(\mathbb{W}_{2}\), i.e. \(P_{t}^{*}\) has a unique invariant probability measure \(\bar{\mu}\in\mathscr{P}_{2}(\mathbb{R}^{d_{1}+d_{2}})\) and there exist two positive constants \(c_{1}\) and \(\lambda\) such that (1.3) \[\mathbb{W}_{2}(P_{t}^{*}\mu,\bar{\mu})^{2}\leq c_{1}\mathrm{e}^{-\lambda t} \mathbb{W}_{2}(\mu,\bar{\mu})^{2}\] holds for any \(t\geq 0\) and \(\mu\in\mathscr{P}_{2}(\mathbb{R}^{d_{1}+d_{2}})\), then the exponential ergodicity in entropy holds: \[\mathrm{Ent}(P_{t}^{*}\mu|P_{t}^{*}\nu)\leq c_{1}\mathrm{e}^{-\lambda(t-1)} \mathbb{W}_{2}(\mu,\bar{\mu})^{2}\] holds for any \(t\geq 0\) and \(\mu\in\mathscr{P}_{2}(\mathbb{R}^{d_{1}+d_{2}})\). See Corollary 4.2 and Example 4.1 below for some concrete models satisfying (1.3). The remainder of the paper is organized as follows. We establish an entropy inequality in Section 2 for some SDEs which applies also to the degenerate case, then apply the inequality to stochastic Hamiltonian systems and the distribution dependent model in Sections 3 and 4 respectively. ## 2 Entropy estimate between diffusion processes Let \(d,m\in\mathbb{N},T\in(0,\infty),\) and \((W_{t})_{t\in[0,T]}\) be an \(m\)-dimensional Brownian motion on a filtered probability space \((\Omega,\mathscr{F},(\mathscr{F}_{t})_{t\in[0,T]},\mathbb{P})\). Consider the following SDEs on \(\mathbb{R}^{d}\): \[\mathrm{d}X_{t}^{\langle i\rangle}=Z_{i}^{(2)}(t,X_{t}^{\langle i\rangle}) \mathrm{d}t+\sigma_{i}(t,X_{t}^{\langle i\rangle})\mathrm{d}W_{t}\quad\text{ for }t\in[0,T], \tag{2.1}\] where \[Z_{i}^{(2)}:[0,T]\times\mathbb{R}^{d}\to\mathbb{R}^{d}\ \text{ and }\sigma_{i}:[0,T]\times\mathbb{R}^{d}\to\mathbb{R}^{d\otimes m}\] are nice enough measurable maps such that the SDE is well-posed for \(i=1,2\). Let \((P_{s,t}^{\langle i\rangle})_{0\leq s\leq t\leq T}\) be the corresponding Markov semigroups, i.e. \[P_{s,t}^{\langle i\rangle}f(x):=\mathbb{E}[f(X_{s,t}^{i,x})]\ \text{ for }f\in \mathscr{B}_{b}(\mathbb{R}^{d})\text{ and }x\in\mathbb{R}^{d},\] where \((X_{s,t}^{i,x})_{t\in[s,T]}\) solves (2.1) for \(t\in[s,T]\) with \(X_{s,s}^{i,x}=x.\) The corresponding generators are given by \[L_{t}^{\langle i\rangle}:=\mathrm{tr}\big{\{}a_{i}(t,\cdot)\nabla^{2}\big{\}}+ Z_{i}^{(2)}(t,\cdot)\cdot\nabla\quad\text{for }t\in[0,T],\] where \(a_{i}:=\frac{1}{2}\sigma_{i}\sigma_{i}^{*}\) which may be degenerate. 
If \(v:[0,T]\mapsto\mathbb{R}^{d}\) is a path, then \[\|v\|_{a_{2}}(t):=\sup_{x\in\mathbb{R}^{d}}\inf\big{\{}|w|:w\in\mathbb{R}^{d}, a_{2}(t,x)^{\frac{1}{2}}w=v(t)\big{\}}\quad\text{for }t\in[0,T],\] where the convention that \(\inf\emptyset=\infty\) is applied. Let \(\mathscr{P}(\mathbb{R}^{d})\) denote the space of all probability measures on \(\mathbb{R}^{d}\). For a given \(\nu\in\mathscr{P}(\mathbb{R}^{d})\), \(X_{t}^{i,\nu}\) denotes the solution to (2.1) with \(\mathscr{L}_{X_{0}^{i,\nu}}=\nu\), where and in the sequel, \(\mathscr{L}_{\xi}\) stands for the law of a random variable \(\xi\). Denote \[P_{t}^{i,\nu}=\mathscr{L}_{X_{t}^{i,\nu}}\quad\text{for }t\in[0,T],\ \nu\in\mathscr{P}(\mathbb{R}^{d})\text{ and }i=1,2.\] We shall make the following assumptions. * For any \(0\leq s\leq t\leq T\), \(P_{s,t}^{\langle 2\rangle}C_{b}^{2}(\mathbb{R}^{d})\subset C_{b}^{2}(\mathbb{R}^{d})\) so that the Kolmogorov backward equation holds for any \(f\in C_{b}^{2}(\mathbb{R}^{d})\): \[\partial_{s}P_{s,t}^{\langle 2\rangle}f=-L_{s}^{\langle 2\rangle}P_{s,t}^{ \langle 2\rangle}f\quad\text{for }s\in[0,t]\text{ and }t\in(0,T].\] * There exists a measurable function \(H_{\cdot}^{1,\nu}(a_{1}-a_{2}):(0,T]\mapsto(0,\infty)\) such that \[\big{|}\mathbb{E}\big{[}\mathrm{div}\{(a_{1}-a_{2})(t,\cdot) \nabla f\}(X_{t}^{1,\nu})\big{]}\big{|}\] \[\leq H_{t}^{1,\nu}(a_{1}-a_{2})\big{(}\mathbb{E}[|a_{2}(t,\cdot)^{ \frac{1}{2}}\nabla f|^{2}(X_{t}^{1,\nu})\big{]}\big{)}^{\frac{1}{2}}\] for any \(t\in(0,T]\) and \(f\in C_{b}^{2}(\mathbb{R}^{d})\). We remark that condition \((A_{1})\) is satisfied when the coefficients have bounded first and second order derivatives. For the non-degenerate case, it is satisfied for a class of Holder continuous \(\sigma_{2}\) and \(b_{2}\), see for instance [12] and references within. According to [1], condition \((A_{2})\) is satisfied if \(a_{2}\) is invertible and \(X_{t}^{1,\nu}\) has a distribution density \(\rho_{t}^{1,\nu}\) such that \(\log\rho_{t}^{1,\nu}\) is in a Sobolev space. In this case, inequality (2.2) in the following theorem reduces to [1, Theorem 1.1]. In the next section, we shall verify these conditions for some important examples of degenerate SDEs. We are now in a position to state and prove the main result. **Theorem 2.1**.: _Assume that \((A_{1})\) and \((A_{2})\) are satisfied. Then_ \[\operatorname{Ent}(P_{t}^{1,\nu}|P_{t}^{2,\nu})\leq\frac{1}{4}\int_{0}^{t} \Big{\{}\|b_{1}-b_{2}-\operatorname{div}(a_{1}-a_{2})\|_{a_{2}}(s)+H_{s}^{1, \nu}(a_{1}-a_{2})\Big{\}}^{2}\mathrm{d}s \tag{2.2}\] _for any \(t\in(0,T]\)._ Proof.: Let \(X_{t}^{i,\nu}\) solve (2.1) with initial distribution \(\nu\), and let \(X_{0}^{1,\nu}=X_{0}^{2,\nu}\). Let \(C_{b,+}^{2}(\mathbb{R}^{d})\) denote the space of all functions \(f\in C_{b}^{2}(\mathbb{R}^{d})\) such that \(\inf f>0\). By definition we have \[\operatorname{Ent}(P_{t}^{1,\nu}|P_{t}^{2,\nu})=\sup_{f\in C_{b,+ }^{2}(\mathbb{R}^{d})}I_{t}(f),\] \[I_{t}(f):=\mathbb{E}\log f(X_{t}^{1,\nu})-\log\mathbb{E}f(X_{t}^ {2,\nu}). 
\tag{2.3}\] For any \(f\in C_{b,+}^{2}(\mathbb{R}^{d})\), by the standard Markov property for \(X_{t}^{2,\nu}\) and using Jensen's inequality, we obtain \[I_{t}(f) =\mathbb{E}\log f(X_{t}^{1,\nu})-\log\mathbb{E}(P_{0,t}^{\langle 2 \rangle}f)(X_{0}^{2,\nu})\] \[\leq\mathbb{E}\log f(X_{t}^{1,\nu})-\mathbb{E}\log(P_{0,t}^{ \langle 2\rangle}f)(X_{0}^{2,\nu})\] \[=\int_{0}^{t}\Big{[}\frac{\mathrm{d}}{\mathrm{d}s}\mathbb{E}\log (P_{s,t}^{\langle 2\rangle}f)(X_{s}^{1,\nu})\Big{]}\mathrm{d}s \tag{2.4}\] for every \(t\in(0,T]\). By \((A_{1})\) and using Ito's formula for \(X_{s}^{1,\nu}\), we derive that \[\frac{\mathrm{d}}{\mathrm{d}s}\mathbb{E}\big{(}\log(P_{s,t}^{ \langle 2\rangle}f)(X_{s}^{1,\nu})\big{)}=\mathbb{E}\bigg{[}\bigg{(}L_{s}^{ \langle 1\rangle}\log(P_{s,t}^{\langle 2\rangle}f)-\frac{L_{s}^{\langle 2 \rangle}P_{s,t}^{\langle 2\rangle}f}{P_{s,t}^{\langle 2\rangle}f}\Big{)}(X_{s}^{1, \nu})\bigg{]}\] \[=\mathbb{E}\Big{[}(L_{s}^{\langle 1\rangle}-L_{s}^{\langle 2 \rangle})\log(P_{s,t}^{\langle 2\rangle}f)(X_{s}^{1,\nu})-\big{|}\{a_{2}(s, \cdot)^{\frac{1}{2}}\nabla\log P_{s,t}^{\langle 2\rangle}f\}\big{|}^{2}(X_{s}^{1, \nu})\Big{]}\] \[=\mathbb{E}\Big{[}\mathrm{div}\big{\{}(a_{1}-a_{2})(s,\cdot)\nabla \log P_{s,t}^{\langle 2\rangle}f\big{\}}(X_{s}^{1,\nu})-\big{|}\{a_{2}(s,\cdot)^{ \frac{1}{2}}\nabla\log P_{s,t}^{\langle 2\rangle}f\big{|}^{2}(X_{s}^{1,\nu})\Big{]}\] \[\quad+\mathbb{E}\Big{[}\big{\langle}b_{1}-b_{2}-\mathrm{div}(a_{1 }-a_{2}),\nabla\log P_{s,t}^{\langle 2\rangle}f\big{\rangle}(X_{s}^{1,\nu})\Big{]}.\] Combining this with \((A_{2})\) gives that \[\frac{\mathrm{d}}{\mathrm{d}s}\mathbb{E}\big{(}\log(P_{s,t}^{ \langle 2\rangle}f)(X_{s}^{1,\nu})\big{)}\] \[\leq\Big{[}H_{s}^{1,\nu}(a_{1}-a_{2})+\|b_{1}-b_{2}-\operatorname{ div}(a_{1}-a_{2})\|_{a_{2}}(s)\Big{]}\big{(}\mathbb{E}|a_{2}(s,\cdot)^{\frac{1}{2}} \nabla\log P_{s,t}^{\langle 2\rangle}f|^{2}(X_{s}^{1,\nu})\big{)}^{\frac{1}{2}}\] \[\quad-\mathbb{E}\Big{[}\big{|}a_{2}(s,\cdot)^{\frac{1}{2}}\nabla \log P_{s,t}^{\langle 2\rangle}f\big{|}^{2}(X_{s}^{1,\nu})\Big{]}\] \[\leq\frac{1}{4}\Big{[}H_{s}^{1,\nu}(a_{1}-a_{2})+\|b_{1}-b_{2}- \operatorname{div}(a_{1}-a_{2})\|_{a_{2}}(s)\Big{]}^{2}\] for every \(s\in(0,t]\), which, together with (2.3) and (3.27), implies the desired estimate (2.2). As explained in [16] that \(|H_{s}^{1,\nu}(a_{1}-a_{2})|^{2}\) is normally singular for small \(s\), such that the upper bound in (2.2) becomes infinite. To derive a finite upper bound of the relative entropy, we make use of the bi-coupling argument developed in [16], which leads to the following consequence where different initial distributions are also allowed. **Corollary 2.2**.: _Assume that \((A_{1})\) and \((A_{2})\) are satisfied. Suppose that there exist a constant \(p\in(1,\infty)\) and a decreasing function \(\eta:(0,T]\mapsto(0,\infty)\) such that_ \[|P_{s,t}^{\langle 2\rangle}f(x)|^{p}\leq\big{(}P_{s,t}^{\langle 2\rangle}|f|^{p} (y)\big{)}\mathrm{e}^{\eta(t-s)|x-y|^{2}} \tag{2.5}\] _for any \(0\leq s<t\leq T\) and \(f\in\mathscr{B}_{b}(\mathbb{R}^{d})\). 
Then there exists a constant \(c>0\) such that_ \[\operatorname{Ent}(P_{t}^{1,\mu}|P_{t}^{2,\nu})\leq\inf_{\pi\in \mathscr{C}(\mu,\nu)}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\bigg{(}\frac{p} {4}\int_{t_{0}}^{t}\Big{\{}\|b_{1}-b_{2}-\operatorname{div}(a_{1}-a_{2})\|_{a_ {2}}(s)+H_{s}^{1,x_{1}}(a_{1}-a_{2})\Big{\}}^{2}\mathrm{d}s\] \[\quad+(p-1)\log\mathbb{E}\Big{\{}\exp\big{[}c\eta(t-t_{0})\big{|} X_{t_{0}}^{1,x_{1}}-X_{t_{0}}^{2,x_{2}}\big{|}^{2}\big{]}\Big{\}}\bigg{)}\pi( \mathrm{d}x_{1},\mathrm{d}x_{2})\] _for any \(0<t_{0}<t\leq T\) and \(x,y\in\mathbb{R}^{d}\)._ Proof.: For simplicity, denote \(P_{t}^{i,x}=P_{t}^{i,\delta_{x}}\) where \(i=1,2,x\in\mathbb{R}^{d}\), and \(\delta_{x}\) is the Dirac measure at \(x\). Let \(X_{t}(x_{1})\) be the diffusion process starting from the initial value \(x_{1}\) with the infinitesimal generator given by \[L_{t}:=1_{[0,t_{0}]}(t)L_{t}^{\langle 1\rangle}+1_{(t_{0},t]}(t)L_{t}^{\langle 2\rangle}.\] Let \(P_{t}^{\langle t_{0}\rangle x_{1}}=\mathscr{L}_{X_{t}(x_{1})}\). By using (2.2) with \(\nu=\delta_{x_{1}}\) and \(P_{t}^{\langle t_{0}\rangle x_{1}}\) in place of \(P_{t}^{2,x_{1}}\), and combining with [16, (2.4) and (2.9)], we deduce that \[\begin{split}\operatorname{Ent}(P_{t}^{1,x_{1}}|P_{t}^{2,x_{2}}) \leq&\frac{p}{4}\int_{t_{0}}^{t}\Big{\{}\|b_{1}-b_{2}- \operatorname{div}(a_{1}-a_{2})\|_{a_{2}}(s)+H_{s}^{1,x_{1}}(a_{1}-a_{2})\Big{\}} ^{2}\mathrm{d}s\\ &+(p-1)\log\mathbb{E}\bigg{\{}\exp\Big{[}c\eta(t-t_{0})\big{|}X_{t _{0}}^{1,x_{1}}-X_{t_{0}}^{2,x_{2}}\big{|}^{2}\Big{]}\bigg{\}}.\end{split} \tag{2.6}\] On the other hand, if \(\pi\in\mathscr{C}(\mu,\nu)\), then by using (2.3), the standard Markov property and Jensen's inequality, we have \[\operatorname{Ent}(P_{t}^{1,\mu}|P_{t}^{2,\nu})=\sup_{f\in C_{b,t}^{2}(\mathbb{R}^{d})}\big{\{}\mathbb{E}\log f(X_{t}^{1,\mu})-\log\mathbb{E }f(X_{t}^{2,\nu})\big{\}}\] \[=\sup_{f\in C_{b,t}^{2}(\mathbb{R}^{d})}\bigg{\{}\int_{\mathbb{R} ^{d}}P_{t}^{\langle 1\rangle}(\log f)(x_{1})\mu(\mathrm{d}x_{1})-\log\int_{ \mathbb{R}^{d}}P_{t}^{\langle 2\rangle}f(x_{2})\nu(\mathrm{d}x_{2})\bigg{\}}\] \[\leq\sup_{f\in C^{2}_{b,+}(\mathbb{R}^{d})}\bigg{\{}\int_{\mathbb{R}^{d }}P^{(1)}_{t}(\log f)(x_{1})\mu(\mathrm{d}x_{1})-\int_{\mathbb{R}^{d}}\log P^{(2 )}_{t}f(x_{2})\nu(\mathrm{d}x_{2})\bigg{\}}\] \[=\sup_{f\in C^{2}_{b,+}(\mathbb{R}^{d})}\int_{\mathbb{R}^{d} \times\mathbb{R}^{d}}\big{\{}P^{(1)}_{t}(\log f)(x_{1})-\log P^{(2)}_{t}f(x_{2} )\big{\}}\pi(\mathrm{d}x_{1},\mathrm{d}x_{2})\] \[\leq\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\sup_{f\in C^{2}_{b, +}(\mathbb{R}^{d})}\big{\{}P^{(1)}_{t}(\log f)(x_{1})-\log P^{(2)}_{t}f(x_{2} )\big{\}}\pi(\mathrm{d}x_{1},\mathrm{d}x_{2})\] \[=\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\mathrm{Ent}(P^{1,x_{1 }}_{t}|P^{2,x_{2}}_{t})\pi(\mathrm{d}x_{1},\mathrm{d}x_{2}),\] which, together with (2.6), yields the desired estimate. ## 3 Stochastic Hamilton system ### A general result Let \(d_{1},d_{2}\in\mathbb{N}\). 
For any initial distribution \(\nu\in\mathscr{P}(\mathbb{R}^{d_{1}+d_{2}})\), consider the following degenerate SDEs for \(X^{i,\nu}_{t}=(X^{i(1),\nu}_{t},X^{i(2),\nu}_{t})\in\mathbb{R}^{d_{1}}\times \mathbb{R}^{d_{2}}\) (\(i=1,2\)): \[\begin{cases}\mathrm{d}X^{i(1),\nu}_{t}=Z^{(1)}(t,X^{i,\nu}_{t})\mathrm{d}t, \\ \mathrm{d}X^{i(2),\nu}_{t}=Z^{(2)}_{i}(t,X^{i,\nu}_{t})\mathrm{d}t+\sigma_{i}(t, X^{i,\nu}_{t})\mathrm{d}W_{t},\quad\mathscr{L}_{X^{i,\nu}_{0}}=\nu,\ \ \text{for}\ t\in[0,T],\end{cases} \tag{3.1}\] where \(W_{t}\) is a \(d_{2}\)-dimensional Brownian motion on a filtered probability space \((\Omega,\mathscr{F},(\mathscr{F}_{t})_{t\in[0,T]},\mathbb{P})\), and \[Z^{(1)}:[0,T]\times\mathbb{R}^{d_{1}+d_{2}}\to\mathbb{R}^{d_{1}},\ \ Z^{(2)}_{i}:[0,T]\times\mathbb{R}^{d_{1}+d_{2}}\to\mathbb{R}^{d_{2}},\ \ \sigma_{i}:[0,T]\times\mathbb{R}^{d_{1}+d_{2}}\to\mathbb{R}^{d_{2}\otimes d_{2}}\] are measurable. If \(\nu=\delta_{x}\) where \(x\in\mathbb{R}^{d_{1}+d_{2}}\), then the solution is simply denoted by \(X^{i,x}_{t}=(X^{i(1),x}_{t},X^{i(2),x}_{t})\). Let \(\nabla^{(i)}\) be the gradient in \(x^{(i)}\in\mathbb{R}^{d_{i}}\) for \(i=1,2\). Let us introduce the following technical conditions.

* \((B_{1})\) The coefficients \(\sigma_{i}(t,x),Z^{(2)}_{i}(t,x)\) (for \(i=1,2\)) and \(Z^{(1)}(t,x)\) are locally bounded in \((t,x)\in[0,T]\times\mathbb{R}^{d_{1}+d_{2}}\) and twice differentiable in the space variable \(x\). The matrix valued function \(a_{2}:=\frac{1}{2}\sigma_{2}\sigma_{2}^{*}\) is invertible. There exists a constant \(K>0\) such that \[\|\nabla^{j}Z^{(2)}_{i}(t,x)\|+\|\nabla^{j}Z^{(1)}(t,x)\|+\|\nabla^{j}\sigma_{ i}(t,x)\|\leq K\] for \((t,x)\in[0,T]\times\mathbb{R}^{d_{1}+d_{2}}\) and \(j=1,2\).
* \((B_{2})\) There exists a function \(\xi^{\nu}\in C((0,T];(0,\infty))\) such that \[\big{|}\mathbb{E}[(\nabla^{(2)}_{v}f)(X^{1,\nu}_{t})]\big{|}\leq\xi^{\nu}_{t} \big{(}\mathbb{E}[f(X^{1,\nu}_{t})^{2}]\big{)}^{\frac{1}{2}}\] for \(t\in(0,T]\), \(v^{(2)}\in\mathbb{R}^{d_{2}}\) with \(|v^{(2)}|=1\) and \(f\in C^{1}_{b}(\mathbb{R}^{d_{1}+d_{2}})\).

It is well known that condition \((B_{1})\) implies the well-posedness of (3.1) and that condition \((A_{1})\) is satisfied. Let \(P_{t}^{i,\nu}\) be the distribution of \(X_{t}^{i,\nu}\). To state our next result, we recall that for a vector valued function \(g\) on \([0,T]\times\mathbb{R}^{d_{1}+d_{2}}\), \[\|g\|_{t,\infty}:=\sup_{z\in\mathbb{R}^{d_{1}+d_{2}}}|g(t,z)|\] for \(t\in[0,T]\).

**Theorem 3.1**.: _Assume that conditions \((B_{1})\) and \((B_{2})\) are satisfied.
Let \((e_{j})_{1\leq j\leq d_{2}}\) be the canonical basis on \(\mathbb{R}^{d_{2}}\)._

_1) The following inequality holds:_ \[\mathrm{Ent}(P_{t}^{1,\nu}|P_{t}^{2,\nu})\leq\frac{1}{4}\int_{0}^{t}\Big{[} \big{\|}a_{2}^{-\frac{1}{2}}\big{\{}b_{1}-b_{2}-\mathrm{div}(a_{1}-a_{2})\big{\}} \big{\|}_{s,\infty}+\xi_{s}^{\nu}\sum_{j=1}^{d_{2}}\big{\|}a_{2}^{-\frac{1}{2 }}(a_{1}-a_{2})e_{j}\big{\|}_{s,\infty}\Big{]}^{2}\mathrm{d}s.\]

_2) Suppose that (2.5) holds; then there exists a constant \(c>0\) such that_ \[\mathrm{Ent}(P_{t}^{1,\mu}|P_{t}^{2,\nu})\leq\inf_{\pi\in\mathscr{C}(\mu,\nu) }\int_{\mathbb{R}^{d_{1}+d_{2}}\times\mathbb{R}^{d_{1}+d_{2}}}\bigg{(}pI_{t_{ 0},t}^{x_{2}}+(p-1)\log\mathbb{E}\Big{[}\mathrm{e}^{c\eta(t-t_{0})|X_{t_{0}}^{ 1,x_{1}}-X_{t_{0}}^{2,x_{2}}|^{2}}\Big{]}\bigg{)}\pi(\mathrm{d}x_{1},\mathrm{d }x_{2})\] _for any \(0<t_{0}<t\leq T\) and \(\mu,\nu\in\mathscr{P}(\mathbb{R}^{d_{1}+d_{2}})\), where_ \[I_{t_{0},t}^{x}:=\frac{1}{4}\int_{t_{0}}^{t}\Big{[}\big{\|}a_{2}^{-\frac{1}{2 }}\big{\{}b_{1}-b_{2}-\mathrm{div}(a_{1}-a_{2})\big{\}}\big{\|}_{s,\infty}+ \xi_{s}^{x}\sum_{j=1}^{d_{2}}\big{\|}a_{2}^{-\frac{1}{2}}(a_{1}-a_{2})e_{j} \big{\|}_{s,\infty}\Big{]}^{2}\mathrm{d}s\] _and \(\xi_{s}^{x}:=\xi_{s}^{\delta_{x}}\) for every \(x\in\mathbb{R}^{d_{1}+d_{2}}\) and \(s\in[t_{0},t]\)._

Proof.: As explained in the proof of Corollary 2.2, we only need to prove the first estimate. Since \((B_{2})\) is satisfied, we have \[\big{|}\mathbb{E}\big{[}\mathrm{div}\{\mathrm{diag}\{\mathbf{0}_{d_{1}\times d_{1}},(a_{1}-a_{2})(t,\cdot)\}\nabla f\}(X_{t}^{1,\nu})\big{]}\big{|}\] \[=\Big{|}\sum_{j=1}^{d_{2}}\mathbb{E}\big{[}\partial_{y_{j}}\{(a_{1}-a_{2})(t,\cdot)\nabla^{(2)}f\}_{j}(X_{t}^{1,\nu})\big{]}\Big{|}\] \[\leq\xi_{t}^{\nu}\sum_{j=1}^{d_{2}}\big{(}\mathbb{E}\big{[}\{(a_{1}-a_{2})(t,\cdot)\nabla^{(2)}f\}_{j}(X_{t}^{1,\nu})^{2}\big{]}\big{)}^{\frac{1}{2}}\] \[=\xi_{t}^{\nu}\sum_{j=1}^{d_{2}}\big{(}\mathbb{E}\big{[}\langle a_{2}(t,\cdot)^{-\frac{1}{2}}(a_{1}-a_{2})(t,\cdot)e_{j},a_{2}(t,\cdot)^{\frac{1}{2}}\nabla^{(2)}f\rangle_{\mathbb{R}^{d_{2}}}(X_{t}^{1,\nu})^{2}\big{]}\big{)}^{\frac{1}{2}}\] \[\leq\xi_{t}^{\nu}\sum_{j=1}^{d_{2}}\big{\|}a_{2}^{-\frac{1}{2}}(a_{1}-a_{2})e_{j}\big{\|}_{t,\infty}\big{(}\mathbb{E}\big{[}\big{|}a_{2}(t,\cdot)^{\frac{1}{2}}\nabla^{(2)}f\big{|}^{2}(X_{t}^{1,\nu})\big{]}\big{)}^{\frac{1}{2}}.\] Thus \((A_{2})\) is satisfied with \[H_{t}^{\nu}(a_{1}-a_{2}):=\xi_{t}^{\nu}\sum_{j=1}^{d_{2}}\big{\|}a_{2}^{-\frac{1}{2}}(a_{1}-a_{2})e_{j}\big{\|}_{t,\infty}.\] Since \((B_{1})\) implies \((A_{1})\), the desired estimate follows immediately from Theorem 2.1.

### A class of models

We next discuss a class of degenerate stochastic models for which condition \((B_{2})\) is satisfied and the dimension-free Harnack inequality (2.5) holds. Consider the following SDE for \(X_{t}^{i,\nu}=(X_{t}^{i(1),\nu},X_{t}^{i(2),\nu})\in\mathbb{R}^{d_{1}+d_{2}}\): \[\begin{cases}\mathrm{d}X_{t}^{i(1),\nu}=\big{\{}AX_{t}^{i(1),\nu}+BX_{t}^{i(2), \nu}+b(X_{t}^{i,\nu})\big{\}}\mathrm{d}t,\\ \mathrm{d}X_{t}^{i(2),\nu}=\sigma_{i}(t)\mathrm{d}W_{t}+Z_{i}^{(2)}(t,X_{t}^{i,\nu})\mathrm{d}t,\ \ \mathscr{L}_{X_{0}^{i,\nu}}=\nu\ \ \text{for}\ i=1,2,\end{cases} \tag{3.2}\] where \(A\), \(B\), \(b\), \(\sigma_{i}\) and \(Z_{i}^{(2)}\) satisfy the following assumption \((B_{3})\).

1. \(A\) is a \(d_{1}\times d_{1}\) matrix and \(B\) is a \(d_{1}\times d_{2}\) matrix, such that the Kalman rank condition \[\text{Rank}\left[A^{i}B:0\leq i\leq k\right]=d_{1} \tag{3.3}\] holds for some \(0\leq k\leq d_{1}-1\).
2. \(b\in C_{b}^{1}(\mathbb{R}^{d_{1}+d_{2}})\) with Lipschitz continuous \(\nabla b\), and there exists a constant \(\delta\in(0,1)\) such that \[\big{\langle}(\nabla^{(2)}b(x))B^{*}v,v\big{\rangle}\geq-\delta|B^{*}v|^{2}, \ \ v\in\mathbb{R}^{d_{1}},x\in\mathbb{R}^{d_{1}+d_{2}}. \tag{3.4}\]
3. \(\sigma_{1}(t)\) and \(\sigma_{2}(t)\) are bounded, and \(a_{2}(t):=\frac{1}{2}\sigma_{2}(t)\sigma_{2}(t)^{*}\) is invertible with bounded inverse.
4. \(Z_{i}^{(2)}(t,x)\) (for \(i=1,2\)) are locally bounded in \([0,T]\times\mathbb{R}^{d_{1}+d_{2}}\) and differentiable in \(x\), such that \[\sup_{t\in[0,T]}\left\{\|\nabla Z_{i}^{(2)}(t,\cdot)\|+\frac{\|\nabla Z_{i}^{( 2)}(t,x)-\nabla Z_{i}^{(2)}(t,y)\|}{|x-y|}\right\}\leq K\] holds for some constant \(K>0\).

We introduce \(\xi_{t}\) in two different cases: \[\xi_{t}:=\begin{cases}t^{-2k-\frac{1}{2}},&\text{if }Z^{(2)}(t,x)=Z^{(2)}(t,x^{(2)}),\\ t^{-2k-\frac{3}{2}},&\text{otherwise}.\end{cases} \tag{3.5}\]

**Corollary 3.2**.: _Assume that \((B_{3})\) is satisfied for either \(k=0\), or \(k\geq 1\) with \(b(x)=b(x^{(2)})\) depending only on \(x^{(2)}\). Let \(P_{t}^{i,\nu}\) be the distribution of \(X_{t}^{i,\nu}\) solving (3.2). Then there exist constants \(c>0\) and \(\varepsilon\in(0,\frac{1}{2}]\) such that for any \(t\in(0,T]\) and \(\mu,\nu\in\mathscr{P}_{2}(\mathbb{R}^{d_{1}+d_{2}}),\)_ \[\mathrm{Ent}(P_{t}^{1,\nu}|P_{t}^{2,\mu})\leq\frac{c}{t^{4k+3}}\bigg{(}\mathbb{ W}_{2}(\mu,\nu)^{2}+\int_{0}^{t}\|b_{1}-b_{2}\|_{s,\infty}^{2}\,\mathrm{d}s\bigg{)}+c\int_{ \varepsilon(1\wedge t)^{4k+3}}^{t}\xi_{s}^{2}\|a_{1}(s)-a_{2}(s)\|^{2}\, \mathrm{d}s.\]

Proof.: Without loss of generality, we may and do assume that \(\sigma_{i}=\sqrt{2a_{i}}.\) Moreover, by a standard approximation argument, under \((B_{3})\) we may find a sequence \(\{Z_{i}^{(2,n)}\}_{n\geq 1}\) for each \(i=1,2\), such that \[\sup_{n\geq 1,k=1,2,t\in[0,T]}\|\nabla^{k}Z_{i}^{(2,n)}(t,\cdot)\|\leq K,\] \[\lim_{n\to\infty}\sup_{t\in[0,T]}\big{\{}\|(Z_{i}^{(2)}-Z_{i}^{(2,n)})(t,\cdot)\|_ {\infty}+\|\nabla(Z_{i}^{(2)}-Z_{i}^{(2,n)})(t,\cdot)\|_{\infty}\big{\}}=0.\] Moreover, let \(\{b^{(n)}\}_{n\geq 1}\) be a bounded sequence in \(C_{b}^{2}(\mathbb{R}^{d_{1}+d_{2}})\) such that \(\|b^{(n)}-b\|_{C_{b}^{1}(\mathbb{R}^{d_{1}+d_{2}})}\to 0\) as \(n\to\infty\). Let \(P_{t}^{i,\nu;n}\) be defined as \(P_{t}^{i,\nu}\) for \((b^{(n)},Z_{i}^{(2,n)})\) replacing \((b,Z_{i}^{(2)})\). It is well known that \(P_{t}^{i,\nu;n}\to P_{t}^{i,\nu}\) weakly as \(n\to\infty\), so that (2.3) implies that \[\mathrm{Ent}(P_{t}^{1,\nu}|P_{t}^{2,\mu})\leq\liminf_{n\to\infty}\mathrm{Ent}( P_{t}^{1,\nu;n}|P_{t}^{2,\mu;n}).\] Therefore, we may and do assume that \(\|\nabla^{k}b\|+\|\nabla^{k}Z_{i}^{(2)}(t,\cdot)\|_{\infty}\leq K\) holds for some constant \(K>0\) and \(i,k=1,2\), so that Theorem 3.1 applies.

(a) By \((B_{3})\), \(\sigma_{1}\geq 0,\sigma_{2}\geq\lambda I_{d_{2}}\) for some constant \(\lambda>0\), where \(I_{d_{2}}\) is the \(d_{2}\times d_{2}\) identity matrix. So, according to the proof of [13, Lemma 3.3], \[\|\sigma_{1}-\sigma_{2}\|=\left\|2\int_{0}^{\infty}\mathrm{e}^{-r\sigma_{1}}( a_{1}-a_{2})\mathrm{e}^{-r\sigma_{2}}\mathrm{d}r\right\|\leq\frac{2}{\lambda}\|a_{1}-a_{2}\|.
\tag{3.6}\] By Lemma 3.3 below, there exists a constant \(c_{1}>0\) such that for any \(\nu\), condition \((B_{2})\) holds with \[\xi_{t}^{\nu}=c_{1}\xi_{t}:=\begin{cases}c_{1}t^{-2k-\frac{1}{2}},&\text{if }Z^{(2)}(t,x)=Z^{(2)}(t,x^{(2)})\\ c_{1}t^{-2k-\frac{3}{2}},&\text{in general.}\end{cases} \tag{3.7}\] Moreover, by Lemma 3.4 below, (2.5) holds for the following \(\eta(s),s\in(0,T):\) \[\eta(s)=c(p)s^{-4k-3},\ \ s\in(0,T]. \tag{3.8}\] Combining these with Theorem 3.1, and noting that \(a_{2}^{-1}\) is bounded and \(\mathrm{div}(a_{1}-a_{2})=0\), we can find a constant \(c_{2}>0\) such that for any \(0<t_{0}<t\leq T\), \[\begin{split}&\mathrm{Ent}(P_{t}^{1,\mu}|P_{t}^{2,\nu})\leq c_{2 }\int_{t_{0}}^{t}\Big{(}\big{\|}b_{1}-b_{2}\big{\|}_{s,\infty}^{2}+|\xi_{s}|^ {2}\big{\|}a_{1}(s)-a_{2}(s)\big{\|}^{2}\Big{)}\mathrm{d}s\\ &+c_{2}\inf_{\pi\in\mathscr{C}(\mu,\nu)}\int_{\mathbb{R}^{d_{1}+d_ {2}}\times\mathbb{R}^{d_{1}+d_{2}}}\log\mathbb{E}\Big{[}\mathrm{e}^{c_{2}(t-t _{0})^{-4k-3}|X_{t_{0}}^{1,x_{1}}-X_{t_{0}}^{2,x_{2}}|^{2}}\Big{]}\pi(\mathrm{ d}x_{1},\mathrm{d}x_{2}).\end{split} \tag{3.9}\] It remains to estimate the exponential expectation in the last term. (b) By \((B_{3})\) and (3.6), there exists a constant \(c_{3}\geq 1\) such that \[\mathrm{d}|X_{s}^{1,x_{1}}-X_{s}^{2,x_{2}}|^{2}\leq c_{3}\big{(}|X_{s}^{1,x_{1 }}-X_{s}^{2,x_{2}}|^{2}+\|b_{1}-b_{2}\|_{s,\infty}^{2}+\|a_{1}(s)-a_{2}(s)\|^{ 2}\big{)}\mathrm{d}s+\mathrm{d}M_{s},\] where \[\mathrm{d}M_{s}:=2\langle X_{s}^{1,x_{1}}-X_{s}^{2,x_{2}},\{\sigma_{1}(s)- \sigma_{2}(s)\}\mathrm{d}W_{s}\rangle\] and therefore the following differential inequality holds: \[\mathrm{d}\langle M\rangle_{s}\leq c_{3}|X_{s}^{1,x_{1}}-X_{s}^{2,x_{2}}|^{2} \mathrm{d}s. \tag{3.10}\] It follows that \[\begin{split}&|X_{s}^{1,x_{1}}-X_{s}^{2,x_{2}}|^{2}\leq\mathrm{e}^{c _{3}s}|x_{1}-x_{2}|^{2}\\ &\quad+\int_{0}^{s}\mathrm{e}^{c_{3}(s-r)}\big{(}\|b_{1}-b_{2}\|_{ r,\infty}^{2}+\|a_{1}(r)-a_{2}(r)\|^{2}\big{)}\mathrm{d}r+\int_{0}^{s}\mathrm{e}^{c_{ 3}(s-r)}\mathrm{d}M_{r}.\end{split} \tag{3.11}\] Let \[\tau_{n}:=\inf\big{\{}s\in[0,T]:|X_{s}^{1,x_{1}}-X_{s}^{2,x_{2}}|\geq n\big{\}}, \ \ \text{for $n=1,2,\cdots$}\] with the convention that \(\inf\emptyset:=T\). Then \(\tau_{n}\to T\) as \(n\to\infty\). 
Let \[\lambda:=c_{3}(t-t_{0})^{-4k-3},\ \ c_{4}:=\mathrm{e}^{c_{3}T}.\] By (3.11) and the fact that \(\mathbb{E}[\mathrm{e}^{N\mathfrak{s}}]\leq(\mathbb{E}\mathrm{e}^{2\langle \hat{N}\rangle_{s}})^{\frac{1}{2}}\) holds for a continuous martingale \[\hat{N}s:=\int_{0}^{\bar{s}}\mathrm{e}^{c_{3}(s-r)}\mathrm{d}M_{r},\ \ \text{for $\bar{s}>s$},\] we deduce that \[\begin{split}&\mathbb{E}\big{[}\mathrm{e}^{\lambda|X_{s\wedge r_{n}} ^{1,x_{1}}-X_{s\wedge r_{n}}^{2,x_{2}}|^{2}}\big{]}\\ &\leq\mathrm{e}^{c_{4}\lambda|x_{1}-x_{2}|^{2}+c_{4}\lambda\int_ {0}^{s}\big{(}\|b_{1}-b_{2}\|_{r,\infty}^{2}+\|a_{1}(r)-a_{2}(r)\|^{2}\big{)} \mathrm{d}r}\Big{(}\mathbb{E}\big{[}\mathrm{e}^{2\lambda^{2}c_{4}^{2}(M)_{s \wedge r_{n}}}\big{]}\Big{)}^{\frac{1}{2}}.\end{split} \tag{3.12}\] While by (3.10) and Jensen's inequality, \[\begin{split}&\mathbb{E}\big{[}\mathrm{e}^{2\lambda^{2}c_{4}^{2}(M)_{s \wedge r_{n}}}\big{]}\leq\mathbb{E}\Big{[}\mathrm{e}^{2\lambda^{2}c_{4}^{2}c_ {3}^{2}f_{0}^{s}|X_{r\wedge r_{n}}^{1,x_{1}}-X_{r\wedge r_{n}}^{2,x_{2}}|^{2} \mathrm{d}r}\Big{]}\\ &\leq\frac{1}{s}\int_{0}^{s}\mathbb{E}\big{[}\mathrm{e}^{2 \lambda^{2}c_{4}^{2}c_{3}^{2}s|X_{r\wedge r_{n}}^{1,x_{1}}-X_{r\wedge r_{n}}^{ 2,x_{2}}|^{2}}\big{]}\mathrm{d}r\\ &\leq\sup_{r\in[0,t_{0}]}\mathbb{E}\big{[}\mathrm{e}^{2\lambda^{ 2}c_{4}^{2}c_{3}^{2}t_{0}|X_{r\wedge r_{n}}^{1,x_{1}}-X_{r\wedge r_{n}}^{2,x_ {2}}|^{2}}\big{]}\end{split} \tag{3.13}\] for \(s\in[0,t_{0}]\). Choosing \[t_{0}=\frac{1}{2c_{4}^{2}c_{3}^{3}}\Big{(}\frac{1\wedge t}{2}\Big{)}^{4k+3}=: \varepsilon(1\wedge t)^{4k+3} \tag{3.14}\] such that \[2\lambda c_{4}^{2}c_{3}^{2}t_{0}=2c_{4}^{2}c_{3}^{3}(t-t_{0})^{-4k-3}t_{0} \leq 1,\] we therefore conclude from (3.12) and (3.13) that \[\begin{split}&\sup_{s\in[0,t_{0}]}\mathbb{E}\big{[}\mathrm{e}^{ \lambda|X_{s\wedge r_{n}}^{1,x_{1}}-X_{s\wedge r_{n}}^{2,x_{2}}|^{2}}\big{]}\\ &\leq\mathrm{e}^{c_{4}\lambda|x_{1}-x_{2}|^{2}+c_{4}\lambda\int_ {0}^{t_{0}}\big{(}\|b_{1}-b_{2}\|_{r,\infty}^{2}+\|a_{1}(r)-a_{2}(r)\|^{2} \big{)}\mathrm{d}r}\Big{(}\sup_{s\in[0,t_{0}]}\mathbb{E}\big{[}\mathrm{e}^{ \lambda|X_{s\wedge r_{n}}^{1,x_{1}}-X_{s\wedge r_{n}}^{2,x_{2}}|^{2}}\big{]} \Big{)}^{\frac{1}{2}}.\end{split}\] This together with the definition of \(\lambda\) and Fatou's lemma yields \[\mathbb{E}\big{[}\mathrm{e}^{c_{3}(t-t_{0})^{-4k-3}|X_{t_{0}}^{1,x_{ 1}}-X_{t_{0}}^{2,x_{2}}|^{2}}\big{]}\leq\liminf_{n\to\infty}\mathbb{E}\big{[} \mathrm{e}^{\lambda|X_{t_{0}\wedge n_{n}}^{1,x_{1}}-X_{t_{0}\wedge n_{n}}^{2,x _{2}}|^{2}}\big{]}\] \[\leq\mathrm{e}^{2c_{4}\lambda|x_{1}-x_{2}|^{2}+2c_{4}\lambda\int_ {0}^{t_{0}}\big{(}\|b_{1}-b_{2}\|_{r,\infty}^{2}+\|a_{1}(r)-a_{2}(r)\|^{2}\big{)} \mathrm{d}r}.\] Combining (3.9) with (3.14), we can therefore find a constant \(c_{5}>0\) such that \[\mathrm{Ent}(P_{t}^{1,\mu}|P_{t}^{2,\nu})\leq c_{2}\int_{\varepsilon(1\wedge t)^{4k+3}}^{t}\Big{(}\big{\|}b_{1}-b_{2} \big{\|}_{s,\infty}^{2}+|\xi_{s}|^{2}\big{\|}a_{1}(s)-a_{2}(s)\big{\|}^{2} \Big{)}\mathrm{d}s\] \[+\frac{c_{5}}{t^{4k+3}}\bigg{(}\mathbb{W}_{2}(\mu,\nu)^{2}+\int_{0 }^{\varepsilon(t\wedge 1)^{4k+3}}\big{(}\|b_{1}-b_{2}\|_{r,\infty}^{2}+\|a_{1}(r)-a_{ 2}(r)\|^{2}\big{)}\mathrm{d}r\bigg{)}.\] The desired estimate now follow from (3.7) immediately. 
### Verify conditions \((B_{2})\) and (2.5) Let us consider \(X_{t}=(X_{t}^{(1)},X_{t}^{(2)})\) taking values in \(\mathbb{R}^{d_{1}}\times\mathbb{R}^{d_{2}}\), which solves the SDE: \[\begin{cases}\mathrm{d}X_{t}^{(1)}=\big{\{}AX_{t}^{(1)}+BX_{t}^{(2)}+b(X_{t}) \big{\}}\mathrm{d}t,\\ \mathrm{d}X_{t}^{(2)}=Z(t,X_{t})\mathrm{d}t+\sigma(t)\mathrm{d}W_{t}\ \text{ for }t\in[0,T].\end{cases} \tag{3.15}\] We have the following result which ensures condition \((B_{2})\). **Lemma 3.3**.: _Let \(A,B,b\) and \((Z_{2}^{(2)},\sigma_{2}):=(Z^{(2)},\sigma)\) satisfy conditions in \((B_{3})\), but \(b\) is not necessarily bounded. Let \(\xi_{t}\) be in (3.5). Then for any \(p>1\) there exists a constant \(c(p)>0\) such that for any solution \(X_{t}\) of (3.15),_ \[\sup_{v\in\mathbb{R}^{d_{1}+2},|v|=1}\big{|}\mathbb{E}\big{[}(\nabla_{v}f)(X_{ t})\big{]}\big{|}\leq c(p)t^{-2k-\frac{3}{2}}\big{(}\mathbb{E}|f(X_{t})|^{p} \big{)}^{\frac{1}{p}},\ \ t\in(0,T],f\in C_{b}^{1}(\mathbb{R}^{d_{1}+d_{2}}). \tag{3.16}\] _If \(Z^{(2)}(t,x)=Z^{(2)}(t,x^{(2)})\) does not depend on \(x^{(1)}\), then_ \[\sup_{v\in\mathbb{R}^{d_{2}},|v|=1}\big{|}\mathbb{E}\big{[}(\nabla_{v}^{(2)}f) (X_{t})\big{]}\big{|}\leq c(p)t^{-2k-\frac{1}{2}}\big{(}\mathbb{E}|f(X_{t})|^{p }\big{)}^{\frac{1}{p}},\ \ t\in(0,T],f\in C_{b}^{1}(\mathbb{R}^{d_{1}+d_{2}}). \tag{3.17}\] Proof.: We will follow the line of [21, Remark 2.1] to establish the integration by parts formula: \[\mathbb{E}\big{[}(\nabla_{v}f)(X_{t})\big{]}=\mathbb{E}\big{[}f(X_{t})M_{t} \big{]}\] for some random variable \(M_{t}\in L^{\frac{p}{p-1}}(\mathbb{P})\). To this end, we first estimate \(D_{h}X_{t}\) and \(D_{h}(\nabla X_{t})^{-1}\), where \(D_{h}\) is the Malliavin derivative along an adapted process \((h_{s})_{s\in[0,t]}\) on \(\mathbb{R}^{d}\) with \[\mathbb{E}\int_{0}^{t}|h_{s}^{\prime}|^{2}\mathrm{d}s<\infty.\] (a) For any \(s\in[0,T)\), let \(\{K(t,s)\}_{t\in[s,T]}\) solve the following random ordinary differential equation on \(\mathbb{R}^{d_{1}\otimes d_{1}}\): \[\partial_{t}K_{t,s}=\big{\{}AX_{t}^{(1)}+\nabla^{(1)}b(t,X_{t})\big{\}}K_{t,s}, \ \ K_{s,s}=I_{d_{1}}\ \ \text{for}\ t\in[s,T].\] Since \(\nabla b\) is bounded, \(K_{t,s}\) is bounded and invertible satisfying \[\|K_{t,s}\|\vee\|K_{t,s}^{-1}\|\leq\mathrm{e}^{K(t-s)}\ \ \ \text{for}\ 0\leq s\leq t\leq T \tag{3.18}\] for some constant \(K>0\). Let \[Q_{t,s}:=\int_{0}^{s}\frac{r(t-r)}{t^{2}}K_{t,r}BB^{*}K_{t,r}^{*}\mathrm{d}r\ \ \text{for}\ 0\leq s\leq t\leq T.\] By [21, Theorem 4.2(1)] for \((t,s)\) replacing \((T,t)\), when \(k\geq 1\) and \(b(x)=b(x^{(2)})\), conditions (3.3) and (3.4) imply that \[Q_{t,s}\geq\frac{c_{0}}{t}s^{2(k+1)}I_{d_{1}}=:\xi_{t,s}I_{d_{1}}\ \ \text{ for}\ 0<s\leq t\leq T \tag{3.19}\] holds for some constant \(c_{0}>0\). It is easy to see that this estimate also holds for \(k=0\) and bounded \(\nabla b(x)\) since in this case \(BB^{*}\) is invertible. Let \(X_{t}(x)=(X_{t}^{j}(x))_{1\leq j\leq d_{1}+d_{2}}\) be the solution to (3.15) with \(X_{0}(x)=x.\) Since \(\nabla b\) and \(\nabla Z\) are bounded, we see that \[\nabla X_{t}(x):=(\partial_{x_{i}}X_{t}^{j}(x))_{1\leq i,j\leq d_{1}+d_{2}}\] exists and is invertible, and the inverse \((\nabla X_{t}(x))^{-1}=\big{(}(\nabla X_{t}(x))_{ki}^{-1}\big{)}_{1\leq k,i \leq d_{1}+d_{2}}\) satisfies \[\big{\|}\{\nabla X_{t}(x)\}^{-1}\big{\|}\leq c_{1}\ \ \ \text{for}\ t\in[0,T] \tag{3.20}\] for some constant \(c_{1}>0\). 
(b) Since \(\nabla b\) and \(\nabla Z\) are bounded, \((D_{h}X_{s})_{s\in[0,t]}\) is the unique solution of the random ODE \[\begin{cases}\partial_{s}\big{\{}D_{h}X_{s}^{(1)}\big{\}}=AD_{h}X_{s}^{(1)}+ BD_{h}X_{s}^{(2)}+\nabla_{D_{h}X_{s}}b(X_{s}),\\ \partial_{s}\big{\{}D_{h}X_{s}^{(2)}\big{\}}=\nabla_{D_{h}X_{s}}Z^{(2)}(s,X_{s })+\sigma(s)h_{s}^{\prime},\ \ \ D_{h}X_{0}=0\ \ \ \text{for}\ s\in[0,t],\end{cases}\] and there exists a constant \(c_{2}>0\) such that \[|D_{h}X_{s}|\leq c_{2}\int_{0}^{s}|h_{r}^{\prime}|\mathrm{d}r\ \ \text{ for}\ s\in[0,t]. \tag{3.21}\] Similarly, since \(\nabla^{2}b\) and \(\nabla^{2}Z\) are also bounded, for any \(v\in\mathbb{R}^{d_{1}+d_{2}}\), \((D_{h}\nabla_{v}X_{s})_{s\in[0,t]}\) solve the equations \[\begin{cases}\partial_{s}\big{\{}D_{h}\nabla_{v}X_{s}^{(1)}\big{\}}=AD_{h} \nabla_{v}X_{s}^{(1)}+BD_{h}\nabla_{v}X_{s}^{(2)}+\nabla_{D_{h}\nabla_{v}X_{s }}b(X_{s})\\ \qquad\qquad+\Big{\{}\nabla^{2}b(X_{s})\Big{\}}\Big{(}D_{h}X_{s},\nabla_{v}X_{ s}\Big{)}\\ \partial_{s}\big{\{}D_{h}\nabla_{v}X_{s}^{(2)}\big{\}}=\nabla_{D_{h}\nabla_{v}X_{ s}}Z^{(2)}(s,X_{s})+\big{\{}\nabla^{2}Z^{(2)}(s,X_{s})\big{\}}\big{(}D_{h}X_{s}, \nabla_{v}X_{s}\big{)}\end{cases}\] for \(D_{h}\nabla_{v}X_{0}=0\) and \(s\in[0,t]\). Moreover, there exists a constant \(c_{3}>0\) such that \[\sup_{v\in\mathbb{R}^{d_{1}+d_{2},|v|\leq 1}}\big{\|}D_{h}\nabla_{v}X_{t}\big{\|} \leq c_{3}\int_{0}^{t}\mathrm{d}s\int_{0}^{s}|h_{r}^{\prime}|\mathrm{d}r\leq c _{3}t\int_{0}^{t}|h_{s}^{\prime}|\mathrm{d}s. \tag{3.22}\] (c) For any fixed \(t\in(0,T]\), we may construct \(h\) by means of [21, (1.8) and (1.11)] for \(t\) replacing \(T\) with the specific choice \(\phi(s):=\frac{s(t-s)}{t}\) satisfying \(\phi(0)=\phi(t)=0\) as required therein. For any \(v=(v^{(1)},v^{(2)})\in\mathbb{R}^{d_{1}}\times\mathbb{R}^{d_{2}}\), let \[\alpha_{t,s}(v) :=\frac{t-s}{t}v^{(2)}-\frac{s(t-s)}{t^{2}}B^{*}K_{t,s}^{*}Q_{t,t }^{-1}\int_{0}^{t}\frac{t-r}{t}K_{t,r}Bv^{(2)}\mathrm{d}r\] \[\quad-\frac{s(t-s)B^{*}K_{t,s}^{*}}{\xi_{t,s}^{2}\mathrm{d}s}\int _{0}^{t}\xi_{t,s}^{2}Q_{t,s}^{-1}K_{t,s}v^{(1)}\mathrm{d}s,\] \[g_{t,s}(v) :=K_{s,0}v^{(1)}+\int_{0}^{s}K_{s,r}B\alpha_{t,s}(v)\mathrm{d}s,\] \[h_{t,s}(v) :=\int_{0}^{s}\sigma(r)^{-1}\big{\{}\nabla_{(g_{t,r}(v),\alpha_{ t,r}(v))}b(r,X_{r})-\partial_{r}\alpha_{t,r}\big{\}}\mathrm{d}r\quad\text{ for }s\in[0,t].\] Let \(\{e_{i}\}_{1\leq i\leq d_{1}+d_{2}}\) be the canonical ONB on \(\mathbb{R}^{d_{1}+d_{2}}\). According to [21, Remark 2.1], we have \[\begin{split}&\mathbb{E}\big{[}(\nabla_{e_{i}}f)(X_{t}\big{]}= \mathbb{E}\big{[}f(X_{t})M_{t}(e_{i})\big{]},\\ & M_{t}(e_{i}):=\sum_{j=1}^{d_{1}+d_{2}}\Big{\{}\delta(h_{t,.}(e_ {j}))(\nabla X_{t})_{ji}^{-1}-D_{h_{t,.}(e_{j})}(\nabla X_{t})_{ji}^{-1}\Big{\}} \bigg{]},\end{split} \tag{3.23}\] where \[\delta(h_{t,.}(e_{j})):=\int_{0}^{t}\big{\langle}\partial_{s}h_{t,s}(e_{j}), \mathrm{d}W_{s}\big{\rangle}\] is the Malliavin divergence of \(h_{t,.}(e_{j})\). Consequently \[\big{|}\mathbb{E}(\nabla_{e_{i}}f)(X_{t})\big{|}\big{|}\leq\big{(}\mathbb{E}| f(X_{t})|^{p}\big{)}^{\frac{1}{p}}\big{(}\mathbb{E}[|M_{t}(e_{i})|^{\frac{p}{p-1}} \big{]}\big{)}^{\frac{p-1}{p}} \tag{3.24}\] for \(t\in(0,T]\) and \(1\leq i\leq d_{1}+d_{2}\). 
By (3.20) and (3.22), there is a constant \(c_{4}>0\) such that \[\big{(}\mathbb{E}[|M_{t}(e_{i})|^{\frac{p}{p-1}}\big{]}\big{)}^{\frac{p-1}{p}} \leq c_{4}\sum_{j=1}^{d_{1}+d_{2}}1_{\{\|(\nabla X_{t})_{ji}^{-1}\|_{\infty}>0 \}}\Big{\{}\mathbb{E}\bigg{(}\int_{0}^{t}|\partial_{s}h_{t,s}(e_{j})|^{2} \mathrm{d}s\bigg{)}^{\frac{p}{2(p-1)}}\bigg{\}}^{\frac{p-1}{p}} \tag{3.25}\] for any \(t\in(0,T]\) and \(1\leq i\leq d_{1}+d_{2}\). By (3.19), we have \(\|Q_{t,s}^{-1}\|\leq c_{0}^{-1}ts^{-2(k+1)}\). Combining this with (3.18), we may find a constant \(c_{5}>0\) such that \[|\alpha_{t,s}(e_{j})|\leq c_{5}t^{-2k}+c_{5}1_{\{j\leq d_{1}\}}t^{- 2k-1},\] \[|\partial_{s}\alpha_{t,s}(e_{j})|\leq c_{5}t^{-2k-1}+c_{5}1_{\{j \leq d_{1}\}}t^{-2k-2},\] \[|g_{t,s}(e_{j})|\leq c_{5}t+c_{5}1_{\{j\leq d_{1}\}}\quad\text{for $0\leq s<t\leq T$ and $1\leq j\leq d_{1}+d_{2}$.}\] Now noting that \(\|\sigma(s)^{-1}\|\leq K\), together with the previous estimates, we may conclude that there is a constant \(c_{6}>0\) such that \[\partial_{s}h_{t,s}(e_{j}) =\big{|}\sigma(s)^{-1}\{\nabla_{g_{t,s}(e_{j}),\alpha_{t,s}(e_{j}) }b(s,X_{s})-\partial_{s}\alpha_{t,s}(e_{j})\}\big{|}\] \[\leq c_{6}t^{-2k-1}+c_{6}1_{\{j\leq d_{1}\}}t^{-2k-2}\] for any \(0\leq s<t\leq T\) and for \(1\leq j\leq d_{1}+d_{2}\). This together with (3.25) enables us to find a constant \(c_{7}>0\) such that \[\big{(}\mathbb{E}[|M_{t}(e_{i})|^{\frac{p}{p-1}}]\big{)}^{\frac{p-1}{p}}\leq c _{7}\begin{cases}t^{-2k-\frac{3}{2}},&\text{ if }\sup_{j\leq d_{1}}\|(\nabla X_{t})_{ji}^{-1}\|_{ \infty}>0,\\ t^{-2k-\frac{1}{2}},&\text{ otherwise.}\end{cases}\] Combining this with (3.24) we derive (3.16) for some constant \(c(p)>0\). (d) For the case where \(Z^{(2)}(s,x)=Z^{(2)}(s,x^{(2)})\) is independent of \(x^{(1)}\), we have \(\nabla_{j}X_{t}^{i}=0\) for \(i\geq d_{1}+1\) and \(j\leq d_{1}\), so that the previous estimate implies that \[\big{(}\mathbb{E}[|M_{t}(e_{i})|^{\frac{p}{p-1}}]\big{)}^{\frac{p-1}{p}}\leq c _{7}t^{-2k-\frac{1}{2}}\quad\forall t\in(0,T],\] where \(d_{1}+1\leq i\leq d_{1}+d_{2}\). Combining this with (3.24) we derive we derive (3.17) with some constant \(c(p)>0\) and \(\xi_{t}:=t^{-2k-\frac{1}{2}}\). 
## 4 Distribution dependent stochastic Hamilton system

Consider the following distribution dependent SDEs \[\begin{cases}\mathrm{d}X_{t}^{(1)}=\big{\{}AX_{t}^{(1)}+BX_{t}^{(2)}+b(X_{t})\big{\}}\mathrm{d}t,\\ \mathrm{d}X_{t}^{(2)}=Z^{(2)}(t,X_{t},\mathscr{L}_{X_{t}})\mathrm{d}t+\sigma(t,\mathscr{L}_{X_{t}})\mathrm{d}W_{t}\end{cases} \tag{4.1}\] for \(t\in[0,T]\), where \(X_{t}=(X_{t}^{(1)},X_{t}^{(2)})\) is \(\mathbb{R}^{d_{1}}\times\mathbb{R}^{d_{2}}\) valued process. The coefficients \(A\), \(B\), \(b\), \(Z^{(2)}\) and \(\sigma\) satisfy the following assumption.

* \(A,B\) and \(b\) satisfy conditions 1) and 2) in \((B_{3})\), \(Z^{(2)}(t,x,\mu)\) is differentiable in \(x\in\mathbb{R}^{d_{1}+d_{2}}\), and there exists a constant \(K>0\) such that \[\|\nabla b(t,\cdot,\mu)(x)-\nabla b(t,\cdot,\mu)(y)\|\leq K|x-y|,\] \[|b(t,x,\mu)-b(t,y,\nu)|+\|\sigma(t,\mu)-\sigma(t,\nu)\|\leq K\big{\{}|x-y|+\mathbb{W}_{2}(\mu,\nu)\big{\}},\] \[|Z^{(2)}(t,0,\delta_{0})|+\|\sigma(t,\mu)\|+\|\sigma(t,\mu)^{-1}\|\leq K\] for \(t\in[0,T]\), \(x,y\in\mathbb{R}^{d_{1}+d_{2}}\) and \(\mu,\nu\in\mathscr{P}_{2}(\mathbb{R}^{d_{1}+d_{2}})\).

By, for instance, [20, Theorem 2.1], under this assumption the SDE (4.1) is well-posed for distributions in \(\mathscr{P}_{2}(\mathbb{R}^{d_{1}+d_{2}})\), and \(P_{t}^{*}\mu:=\mathscr{L}_{X_{t}}\) for the solution \(X_{t}\) with initial distribution \(\mu\) satisfies \[\sup_{t\in[0,T]}\mathbb{W}_{2}(P_{t}^{*}\mu,P_{t}^{*}\nu)\leq C\mathbb{W}_{2}(\mu,\nu),\ \ \forall\mu,\nu\in\mathscr{P}_{2}(\mathbb{R}^{d_{1}+d_{2}}) \tag{4.2}\] for some constant \(C>0\).
**Theorem 4.1**.: _Assume that condition \((C_{1})\) is satisfied._ * _There exists a constant_ \(c>0\) _such that_ (4.3) \[\mathrm{Ent}(P_{t}^{*}\mu|P_{t}^{*}\nu)\leq\frac{c}{t^{(4k+2)(4k+3)}}\mathbb{W} _{2}(\mu,\nu)^{2},\ \ \ \ \forall t\in(0,T].\] _If_ \(Z^{(2)}(t,x,\mu)=Z^{(2)}(t,x^{(2)},\mu)\) _does not dependent on_ \(x^{(1)}\)_, then_ (4.4) \[\mathrm{Ent}(P_{t}^{*}\mu|P_{t}^{*}\nu)\leq\frac{c}{t^{(4k+1)(4k+3)}}\mathbb{W} _{2}(\mu,\nu)^{2},\ \ \ \ \forall t\in(0,T].\] * _If_ \(Z^{(2)}(t,x,\mu)=Z^{(2)}(x,\mu)\) _and_ \(\sigma(t,\mu)=\sigma(\mu)\) _do not depend on_ \(t\)_, and there exist constants_ \(c^{\prime},\lambda>0\) _such that_ \[\mathbb{W}_{2}(P_{t}^{*}\mu,P_{t}^{*}\nu)^{2}\leq c^{\prime}\mathrm{e}^{- \lambda t}\mathbb{W}_{2}(\mu,\nu)^{2},\ \ \forall t\geq 0\ \text{ and }\ \ \forall\mu,\nu\in\mathscr{P}_{2}(\mathbb{R}^{d_{1}+d_{2}}),\] _then_ \(P_{t}^{*}\) _has a unique invariant probability measure_ \(\bar{\mu}\in\mathscr{P}_{2}(\mathbb{R}^{d_{1}+d_{2}})\)_, and_ \[\mathrm{Ent}(P_{t}^{*}\mu|\bar{\mu})\leq cc^{\prime}\mathrm{e}^{-\lambda(t-1)} \mathbb{W}_{2}(\mu,\bar{\mu})^{2}\] _for any_ \(t\geq 0\) _and for every_ \(\mu\in\mathscr{P}_{2}(\mathbb{R}^{d_{1}+d_{2}})\)_._ Proof.: It suffices to prove the first assertion. To this end, given \((\mu,\nu\in\mathscr{P}_{2}(\mathbb{R}^{d_{1}+d_{2}}),\) let \[Z_{1}^{(2)}(t,x):=Z^{(2)}(t,x,P_{t}^{*}\mu),\ \ Z_{2}^{(2)}(t,x):=Z^{(2)}(t,x,P_{t}^{*} \nu),\] \[\sigma_{1}(t):=\sigma(t,P_{t}^{*}\mu)\quad\sigma_{2}(t):=\sigma(t, P_{t}^{*}\nu),\quad t\in[0,T].\] Then the desired estimates in Theorem 4.1(1) follow from Corollary 3.2 and (4.2). To illustrate this result, we consider the following typical example for \(d_{1}=d_{2}=d\): \[\begin{cases}\mathrm{d}X_{t}^{(1)}=\big{\{}BX_{t}^{(2)}+b(X_{t})\big{\}} \mathrm{d}t,\\ \mathrm{d}X_{t}^{(2)}=\sigma(\mathscr{L}_{X_{t}})\mathrm{d}W_{t}-\left(B^{*} \nabla V(\cdot,\mathscr{L}_{X_{t}})(X_{t})+\beta B^{*}(BB^{*})^{-1}X_{t}^{(1)}+ X_{t}^{(2)}\right)\mathrm{d}t,\end{cases} \tag{4.5}\] where \(\beta>0\) is a constant, \(B\) is an invertible \(d\times d\)-matrix, and \[V:\mathbb{R}^{d}\times\mathscr{P}_{2}(\mathbb{R}^{2d})\to\mathbb{R}^{d}\] is measurable and differentiable in \(x^{(1)}\in\mathbb{R}^{d}\). Let \[\psi(x,y):=\sqrt{|x^{(1)}-y^{(1)}|^{2}+|B(x^{(2)}-y^{(2)})|^{2}} \quad\text{for }x,y\in\mathbb{R}^{2d},\] \[\mathbb{W}_{2}^{\psi}(\mu,\nu):=\inf_{\pi\in\mathscr{U}(\mu,\nu)} \bigg{(}\int_{\mathbb{R}^{2d}\times\mathbb{R}^{2d}}\psi^{2}\mathrm{d}\pi\bigg{)} ^{\frac{1}{2}}\quad\text{for }\mu,\nu\in\mathscr{P}_{2}(\mathbb{R}^{2d}).\] We assume that the following technical condition is satisfied. * \(V(\cdot,\mu)\) is differentiable such that \(\nabla V(\cdot,\mu)(x^{(1)})\) is Lipschitz continuous in \((x^{(1)},\mu)\in\mathbb{R}^{d}\times\mathscr{P}_{2}(\mathbb{R}^{2d}).\) Moreover, there exist constants \(\theta_{1},\theta_{2}\in\mathbb{R}\) with \[\theta_{1}+\theta_{2}<\beta,\] such that \[\big{\langle}BB^{*}\{\nabla V(\cdot,\mu)(x^{(1)})-\nabla V(\cdot,\nu)(y^{(1)})\},\ x^{(1)}-y^{(1)}+(1+\beta)B(x^{(2)}-y^{(2)})\big{\rangle}\] \[-\frac{1+\beta}{2\beta}\|B\{\sigma(\mu)-\sigma(\nu)\}\|_{HS}^{2} \geq-\theta_{1}\psi(x,y)^{2}-\theta_{2}\mathbb{W}_{2}^{\psi}(\mu,\nu)^{2}\] for any \(x,y\in\mathbb{R}^{2d}\) and \(\mu,\nu\in\mathscr{P}_{2}(\mathbb{R}^{2d}).\) **Corollary 4.2**.: _Assume that condition \((C_{2})\) is satisfied. Let_ \[\kappa:=\frac{2(\beta-\theta_{1}-\theta_{2})}{2+2\beta+\beta^{2}+\sqrt{\beta^ {4}+4}}>0. 
\tag{4.6}\] _For any \(\kappa^{\prime}\in(0,\kappa)\), when \(\|\nabla b\|_{\infty}\) is small enough, \(P_{t}^{*}\) has a unique invariant probability measure \(\bar{\mu}\in\mathscr{P}_{2}(\mathbb{R}^{2d}),\) and there exists a constant \(c>0\) such that_ \[\mathbb{W}_{2}(P_{t}^{*}\mu,\bar{\mu})^{2}+\mathrm{Ent}(P_{t}^{*}\mu|\bar{\mu} )\leq\frac{\mathrm{ce}^{-2\kappa^{\prime}t}}{(1\wedge t)^{3}}\mathbb{W}_{2}(\mu,\bar{\mu})^{2} \tag{4.7}\] _for any \(t>0\) and \(\mu\in\mathscr{P}_{2}(\mathbb{R}^{2d}).\)_ Proof.: The proof is completely similar to that of [15, Lemma 5.2] where \(\sigma(\mu)=\sigma\) does not depend on \(\mu\). By Theorem 4.1, it suffices to find a constant \(c^{\prime}>9\) such that \[\mathbb{W}_{2}(P_{t}^{*}\mu,P_{t}^{*}\nu)^{2}\leq c^{\prime}\mathrm{e}^{-2\kappa t }\mathbb{W}_{2}(\mu,\nu)^{2} \tag{4.8}\] for any \(t>0\) and \(\mu,\nu\in\mathscr{P}_{2}(\mathbb{R}^{2d})\). a) Let \[a:=\Big{(}\frac{1+\beta+\beta^{2}}{1+\beta}\Big{)}^{\frac{1}{2}},\ \ r:=a-\frac{\beta}{a}=\frac{1}{\sqrt{(1+\beta)(1+\beta+\beta^{2})}}\in(0,1). \tag{4.9}\] Define the distance \[\bar{\psi}(x,y):=\sqrt{a^{2}|x^{(1)}-y^{(1)}|^{2}+|B(x^{(2)}-y^{(2)})|^{2}+2ra \langle x^{(1)}-y^{(1)},B(x^{(2)}-y^{(2)})\rangle}. \tag{4.10}\] According to the proof of [15, Lemma 5.2], we have \[\bar{\psi}(x,y)^{2}\leq\frac{2+2\beta+\beta^{2}+\sqrt{\beta^{4}+4}}{2(1+\beta )}\psi(x,y)^{2},\ \ \forall x,y\in\mathbb{R}^{2d}, \tag{4.11}\] and there exists a constant \(C>1\) such that \[C^{-1}|x-y|\leq\bar{\psi}(x,y)\leq C|x-y|,\ \ \forall x,y\in\mathbb{R}^{2d}. \tag{4.12}\] b) Let \(X_{t}\) and \(Y_{t}\) solve (4.5) with \(\mathscr{L}_{X_{0}}=\mu,\mathscr{L}_{Y_{0}}=\nu\) such that \[\mathbb{W}_{2}(\mu,\nu)^{2}=\mathbb{E}\big{[}|X_{0}-Y_{0}|^{2}\big{]}. \tag{4.13}\] Let \(\Xi_{t}=X_{t}-Y_{t}\), \(\mu_{t}=P_{t}^{*}\mu:=\mathscr{L}_{X_{t}}\) and \(\nu_{t}:=P_{t}^{*}\nu=\mathscr{L}_{Y_{t}}\). By using \((C_{2})\), Ito's formula, and noting that (4.9) implies \[a^{2}-\beta-ra=0,\ \ 1-ra=ra\beta=\frac{\beta}{1+\beta},\] we obtain \[\frac{1}{2}\mathrm{d}\left(\bar{\psi}(X_{t},Y_{t})^{2}\right)= \frac{1}{2}\|B\left(\sigma(\mu_{t})-\sigma(\nu_{t})\right)\|_{HS}^{2}+\big{ }a^{2}\Xi_{t}^{(1)}+raB\Xi_{t}^{(2)},B\Xi_{t}^{(2)}+b(X_{t})-b(Y_{t})\big{\rangle} \mathrm{d}t\] \[\ \ \ \ -\big{\langle}B^{*}B\Xi_{t}^{(2)}+raB^{*}\Xi_{t}^{(1)},\ \beta B^{*}( BB^{*})^{-1}\Xi_{t}^{(1)}+\Xi_{t}^{(2)}\big{\rangle}\mathrm{d}t\] \[\ \ \ \ +\big{\langle}B^{*}B\Xi_{t}^{(2)}+raB^{*}\Xi_{t}^{(1)},\ B^{*} \{\nabla^{(1)}V(Y_{t}^{(1)},\nu_{t})-\nabla^{(1)}V(X_{t}^{(1)},\mu_{t})\}\big{ }\big{\rangle}\mathrm{d}t\] \[\ \ By (4.11) and the fact that \[\mathbb{W}_{2}^{\psi}(\mu_{t},\nu_{t})^{2}\leq\mathbb{E}[\psi(X_{t},Y_{t})^{2}],\] for \(\kappa>0\) in (4.6), when \(\|\nabla b\|_{\infty}\) is small enough we find a constant \(\kappa^{\prime}\in(0,\kappa)\) such that we obtain \[\frac{1}{2}\left(\mathbb{E}[\bar{\psi}(X_{t},Y_{t})^{2}]-\mathbb{ E}[\bar{\psi}(X_{s},Y_{s})^{2}]\right)\] \[\leq\|\nabla b\|_{\infty}(a^{2}+ra)\int_{s}^{t}\mathbb{E}[|\Xi_{u }^{(1)}|^{2}]\mathrm{d}u-\frac{\beta-\theta_{1}-\theta_{2}}{1+\beta}\int_{s}^{ t}\mathbb{E}[\psi(X_{u},Y_{u})^{2}]\mathrm{d}u\] \[\leq-\kappa^{\prime}\int_{s}^{t}\mathbb{E}[\bar{\psi}(X_{u},Y_{u} )^{2}]\mathrm{d}u,\ \ t\geq s\geq 0.\] By Gronwall's inequality, we then deduce that \[\mathbb{E}[\bar{\psi}(X_{t},Y_{t})^{2}]\leq\mathrm{e}^{-2\kappa^{\prime}t} \mathbb{E}[\bar{\psi}(X_{0},Y_{0})^{2}]\] for \(t\geq 0\). 
Combining this with (4.12) and (4.13), we may conclude that there is a constant \(c>0\) such that (4.8) holds. To conclude this paper, we present the following example of degenerate nonlinear granular media equations, see [3] and [8] for the study of non-degenerate linear granular media equations. **Example 4.1** (Degenerate nonlinear granular media equation).: Let \(d\in\mathbb{N}\) and \(W\in C^{\infty}(\mathbb{R}^{d}\times\mathbb{R}^{2d}).\) Consider the following PDE for probability density functions \((\rho_{t})_{t\geq 0}\) on \(\mathbb{R}^{2d}=\mathbb{R}^{d}\times\mathbb{R}^{d}\): \[\partial_{t}\rho_{t}(x)=\frac{1}{2}\mathrm{tr}\big{\{}\sigma(\rho _{t})\sigma(\rho_{t})^{*}(\nabla^{(2)})^{2}\big{\}}\rho_{t}(x)-\langle\nabla^ {(1)}\rho_{t}(x),x^{(2)}+b(x)\rangle\] \[\quad+\langle\nabla^{(2)}\rho_{t}(x),\nabla^{(1)}(W\;\raisebox{-1. 075pt}{\scalebox{0.7}{\mbox{\tiny$\circ$}}}\;\rho_{t})(x^{(1)})+\beta x^{(1)}+x ^{(2)}\rangle, \tag{4.14}\] where \(x=(x^{(1)},x^{(2)})\in\mathbb{R}^{2d}\), \(t\geq 0\). \(\beta>0\) is a constant, and \[(W\;\raisebox{-1.075pt}{\scalebox{0.7}{\mbox{\tiny$\circ$}}}\;\rho_{t})(x^{(1) }):=\int_{\mathbb{R}^{2m}}W(x^{(1)},z)\rho_{t}(z)\mathrm{d}z,\ \ x^{(1)}\in\mathbb{R}^{d}\] stands for the mean field interaction. If there exist constants \(\theta,\alpha>0\) with \[\theta\Big{(}\frac{1}{2}+\sqrt{2+2\beta+\beta^{2}}\Big{)}+\frac{\alpha(1+ \beta)}{2\beta}<\beta,\] such that \[\begin{array}{l}|\nabla W(\cdot,z)(v)-\nabla W(\cdot,\bar{z})(\bar{v})|\leq \theta\big{(}|v-\bar{v}|+|z-\bar{z}|\big{)},\ \ \forall v,\bar{v}\in\mathbb{R}^{d},\ \mbox{and}\ \forall z,\bar{z}\in\mathbb{R}^{2d},\\ \|\sigma(\mu)-\sigma(\nu)\|_{HS}^{2}\leq\alpha\mathbb{W}_{2}(\mu,\nu)^{2},\ \ \ \forall\mu,\nu\in\mathscr{P}_{2}(\mathbb{R}^{2d}),\end{array} \tag{4.15}\] then for any \(\kappa^{\prime}\in(0,\kappa)\), when \(\|\nabla b\|_{\infty}\) is small enough there exists a unique probability measure \(\bar{\mu}\in\mathscr{P}_{2}(\mathbb{R}^{2d})\) and a constant \(c>0\) such that for any probability density functions \((\rho_{t})_{t\geq 0}\) solving (4.14), \(\mu_{t}(\mathrm{d}x):=\rho_{t}(x)\mathrm{d}x\) satisfies \[\mathbb{W}_{2}(\mu_{t},\bar{\mu})^{2}+\mathrm{Ent}(\mu_{t}|\bar{ \mu})\leq c\mathrm{e}^{-\kappa^{\prime}t}\mathbb{W}_{2}(\mu_{0},\bar{\mu})^{2},\ \ \forall t\geq 1 \tag{4.16}\] where \[\kappa=\frac{2\beta-\theta-2\theta\sqrt{2+2\beta+\beta^{2}}-\alpha(1+\beta^{- 1})}{2+2\beta+\beta^{2}+\sqrt{\beta^{4}+4}}>0.\] To prove this claim, let \((X_{t},Y_{t})\) solve (4.5) for \[B:=I_{d},\ \ \psi(x,y)=|x-y|,\ \ \text{and}\ V(x,\mu):=\int_{\mathbb{R}^{2d}}W (x,z)\mu(\mathrm{d}z). \tag{4.17}\] As shown in the proof of [15, Example 2.2], \(\rho_{t}\) solves (4.14) if and only if \(\rho_{t}(x)=\frac{\mathrm{d}(P_{t}^{*}\mu)(\mathrm{d}x)}{\mathrm{d}x}\), where \(P_{t}^{*}\mu:=\mathscr{L}_{X_{t}}\). By Corollary 4.2, we only need to verify \((C_{2})\) for \(B,V\) in (4.17) and \[\theta_{1}=\theta\Big{(}\frac{1}{2}+\sqrt{2+2\beta+\beta^{2}}\Big{)},\ \ \theta_{2}=\frac{\theta}{2}\sqrt{2+2\beta+\beta^{2}}+\frac{\alpha(\beta+1)}{2 \beta}, \tag{4.18}\] so that the desired assertion holds for \[\kappa:=\frac{2(\beta-\theta_{1}-\theta_{2})}{2+2\beta+\beta^{2}+\sqrt{\beta^{ 4}+4}}>0.\] For simplicity, let \(\nabla^{v}\) denote the gradient in \(v\). 
By (4.15) and \(V(x,\mu):=\mu(W(x,\cdot))\), for any constants \(\alpha_{1},\alpha_{2},\alpha_{3}>0\) we have \[I :=\big{\langle}\nabla^{x^{(1)}}V(x^{(1)},\mu)-\nabla^{y^{(1)}}V(y ^{(1)},\nu),x^{(1)}-y^{(1)}+(1+\beta)(x^{(2)}-y^{(2)})\big{\rangle}\] \[\leq\int_{\mathbb{R}^{2m}}\big{\langle}\nabla^{x^{(1)}}W(x^{(1)}, z)-\nabla^{y^{(1)}}W(y^{(1)},z),\ x^{(1)}-y^{(1)}+(1+\beta)(x^{(2)}-y^{(2)})\big{\rangle}\mu( \mathrm{d}z)\] \[\qquad+\big{\langle}\mu(\nabla^{y^{(1)}}W(y^{(1)},\cdot))-\nu( \nabla_{y^{(1)}}W(y^{(1)},\cdot)),x^{(1)}-y^{(1)}+(1+\beta)(x^{(2)}-y^{(2)}) \big{\rangle}\] \[\geq-\theta\big{\{}|x^{(1)}-y^{(1)}|+\mathbb{W}_{1}(\mu,\nu) \big{\}}\cdot\big{(}|x^{(1)}-y^{(1)}|+(1+\beta)|x^{(2)}-y^{(2)}|\big{)}\] \[\geq-\theta(\alpha_{2}+\alpha_{3})\mathbb{W}_{2}(\mu,\nu)^{2}\] \[\quad-\theta\Big{\{}\Big{(}1+\alpha_{1}+\frac{1}{4\alpha_{2}} \Big{)}|x^{(1)}-y^{(1)}|^{2}+(1+\beta)^{2}\Big{(}\frac{1}{4\alpha_{1}}+\frac{1 }{4\alpha_{3}}\Big{)}|x^{(2)}-y^{(2)}|^{2}\Big{\}}.\] Take \[\alpha_{1}=\frac{\sqrt{2+2\beta+\beta^{2}}-1}{2},\ \ \alpha_{2}=\frac{1}{2 \sqrt{2+2\beta+\beta^{2}}},\ \ \ \text{and}\ \alpha_{3}=\frac{(1+\beta)^{2}}{2\sqrt{2+2\beta+\beta^{2}}}.\] We have \[1+\alpha_{1}+\frac{1}{4\alpha_{2}}=\frac{1}{2}+\sqrt{2+2\beta+ \beta^{2}},\] \[(1+\beta)^{2}\Big{(}\frac{1}{4\alpha_{1}}+\frac{1}{4\alpha_{3}}\Big{)}= \frac{1}{2}+\sqrt{2+2\beta+\beta^{2}},\] \[\alpha_{2}+\alpha_{3}=\frac{1}{2}\sqrt{2+2\beta+\beta^{2}}.\] Combining this with (4.15) and (4.18), we derive \[I-\frac{\beta+1}{2\beta}\|\sigma(\mu)-\sigma(\nu)\|_{HS}^{2}\geq-\theta_{1}|x- y)|^{2}-\theta_{2}\mathbb{W}_{2}(\mu,\nu)^{2},\] and therefore condition \((C_{2})\) is satisfied for \(B,\psi\) and \(V\) in (4.17).
2309.07157
Distribution Grid Line Outage Identification with Unknown Pattern and Performance Guarantee
Line outage identification in distribution grids is essential for sustainable grid operation. In this work, we propose a practical yet robust detection approach that utilizes only readily available voltage magnitudes, eliminating the need for costly phase angles or power flow data. Given the sensor data, many existing detection methods based on change-point detection require prior knowledge of outage patterns, which are unknown for real-world outage scenarios. To remove this impractical requirement, we propose a data-driven method to learn the parameters of the post-outage distribution through gradient descent. However, directly using gradient descent presents feasibility issues. To address this, we modify our approach by adding a Bregman divergence constraint to control the trajectory of the parameter updates, which eliminates the feasibility problems. As timely operation is the key nowadays, we prove that the optimal parameters can be learned with convergence guarantees via leveraging the statistical and physical properties of voltage data. We evaluate our approach using many representative distribution grids and real load profiles with 17 outage configurations. The results show that we can detect and localize the outage in a timely manner with only voltage magnitudes and without assuming a prior knowledge of outage patterns.
Chenhan Xiao, Yizheng Liao, Yang Weng
2023-09-10T21:11:36Z
http://arxiv.org/abs/2309.07157v1
# Distribution Grid Line Outage Identification with Unknown Pattern and Performance Guarantee

###### Abstract

Line outage identification in distribution grids is essential for sustainable grid operation. In this work, we propose a practical yet robust detection approach that utilizes only readily available voltage magnitudes, eliminating the need for costly phase angles or power flow data. Given the sensor data, many existing detection methods based on change-point detection require prior knowledge of outage patterns, which are unknown for real-world outage scenarios. To remove this impractical requirement, we propose a data-driven method to learn the parameters of the post-outage distribution through gradient descent. However, directly using gradient descent presents feasibility issues. To address this, we modify our approach by adding a Bregman divergence constraint to control the trajectory of the parameter updates, which eliminates the feasibility problems. As timely operation is key, we prove that the optimal parameters can be learned with convergence guarantees by leveraging the statistical and physical properties of voltage data. We evaluate our approach using many representative distribution grids and real load profiles with 17 outage configurations. The results show that we can detect and localize the outage in a timely manner with only voltage magnitudes and without assuming prior knowledge of outage patterns.

## I Introduction

Distribution grid line outage occurrence detection and localization is essential for efficient system monitoring and sustainable system operation [1]. A timely identification of the line outage effectively reduces potential financial loss. According to the U.S. Energy Information Administration, customers had an average of 1.3 outages and went without power for four hours during 2016 [2]. The frequency and severity of line outages caused by extreme weather events and power supply shortages have also increased in recent years. The traditional line outage identification in distribution grids relies on passive feedback from customer reporting [3] or the "last gasp" message from smart meters [4], which is a notification automatically transmitted to the utility when power to the meter is lost. However, the performance of these methods degrades when the transmission of the "last gasp" signal is not assured [5]. For instance, with the growing penetration of distributed energy resources (DERs) in distribution grids, customers can still receive power from rooftop solar panels, battery storage, and electric vehicles when there is no power flow in the distribution circuit connecting to the customer. So the smart meter at the customer premises cannot report a power outage. Moreover, some secondary distribution grids are mesh networks in urban areas. In this scenario, a single line outage caused by circuit faults or human activities may not cause a power outage due to alternative paths for power supply. In this second case, we will also observe smart meters measuring power injections without sending the "last gasp" notification for reporting outages. While alternative power sources make the "last gasp" notification fail to report outages, can we still find the line outage time and location? To answer this question, recent literature has aimed at collecting additional information for smarter decisions.
For example, power measurements, such as phasor angles from phasor measurement units (PMUs), were modeled in [6] as a Gaussian Markov random field to track the grid topology change. Other power measurements, like power flows and load estimates, were also utilized in a compressive system [7] and a hypothesis-test-based detection method [8]. Non-power measurements were explored as well, such as human network information from social media [9] and weather information from the environment [10]. In distribution grids, obtaining measurements such as micro-PMU data and accurate power flow data can be challenging and costly, as such sensors are not commonly deployed in households. To address this limitation, our earlier research [3] demonstrated that utilizing readily available voltage magnitudes could still yield accurate outage identification outcomes. However, an in-depth examination of the probability distribution of voltage data and a theoretical guarantee for learning this distribution were not included in our previous work. These aspects are crucial for understanding the outage identification procedure. Besides, the method in [3] has feasibility and accuracy issues when learning the probability distribution. In this work, we fill the above gaps via a novel approach with theoretical guarantees.

To utilize the aforementioned measurements, both deterministic and probabilistic methods have been proposed. Deterministic methods usually set a threshold and declare the outage when the change in the data exceeds the threshold. Such methods are simple to apply but cannot accurately discern data changes in complex or large-scale grids. Probabilistic methods analyze the data spatially or temporally. For spatial analysis, [11] studied graph spectra to assess the grid topology for line outage detection. However, such methods require the grid topology as a prior. For temporal analysis, tracing the probability distribution change of the time-series measurements is a common approach [3]. This is usually studied in the change point detection framework, which aims to find the distribution change of measurements as quickly as possible under the constraint of false alarm tolerance [12]. Such a framework has been used in line outage and fault detection in transmission grids [13, 14] and DC micro-grids [15]. Although the change point detection framework assures optimal performance [16], it typically necessitates knowledge of both distributions before and after the change. Nevertheless, in distribution grids, this requirement is not practical as the post-outage distribution is unpredictable due to the large number of possible outage patterns, whereas the pre-outage distribution can be learned from historical measurements.

To remove the impractical requirement discussed above, methods have been proposed to approximate or simplify the unknown post-change distribution in change point detection. For instance, an approximate maximum likelihood estimation of the unknown distribution parameters was proposed in [3]. A convexified estimation approach for the unknown distribution was introduced in [17]. [18, 19] bypassed the requirement in restricted distribution cases with partially unknown information (e.g., scalar Gaussian with unknown means and known variances). While these methods may mitigate the incompleteness of post-outage information, they have limitations in detection performance and parameter estimation.
In this paper, we propose a practical and straightforward method for utilities to identify line outages with unknown outage patterns. To address the challenge of limited data availability, our approach relies solely on voltage magnitudes obtained from smart meters. This is advantageous compared to expensive phase angle measurements and accurate power flow data, as voltage magnitudes are more readily accessible in typical distribution grids [20]. In utilizing voltage magnitudes, we make several distinct contributions. We demonstrate that the increment of voltage magnitudes before and after a line outage follows two distinct multivariate Gaussian distributions, where the distribution parameters are influenced by grid connectivity. Moreover, we provide theoretical guarantees for learning the unknown probability distribution parameters based on voltage magnitude data. By effectively utilizing voltage magnitudes and incorporating theoretical guarantees, we address the limitations posed by the absence of precise phase-angle data. Through the detection of changes in the learned Gaussian distributions, we can successfully identify line outages.

The second challenge is the unavailability of the post-outage distribution parameters, as analyzed earlier. To address this issue, we propose a data-driven method that directly learns these unknown parameters using Projected Gradient Descent (PGD). While Gradient Descent (GD) is susceptible to feasibility issues in parameter estimation, the iterative nature of GD allows us to control the parameter updating trajectory. Specifically, we formulate the distribution parameter learning problem as a projection optimization problem constrained by the Bregman divergence [21]. This not only resolves the feasibility issue but also leads to accurate parameter estimation with theoretical guarantees. By accurately learning the parameters, our approach can effectively detect and localize line outages, even in large grids.

In addition to accuracy, utilities are also concerned with timely operation. By utilizing the statistical and physical characteristics of voltage data, we can limit the search space of the unknown parameters to a convex set, which allows for fast and accurate recovery of the post-outage distribution. We have demonstrated that PGD can achieve optimal parameter learning with a polynomial-time convergence guarantee. Furthermore, we have developed an efficient implementation of the PGD algorithm, which reduces computational time by 75% and makes it particularly well-suited for timely grid operations.

In summary, our proposed method offers several contributions. First, it requires only simple data yet comes with theoretical guarantees. Second, it does not require prior knowledge of the outage pattern. Third, it enables timely operation. Furthermore, our approach comes with performance guarantees and does not rely on knowledge of the distribution grid's topology, nor does it require all households to have smart meter data. The method is validated using four distribution grids and real-world load profiles with 17 outage configurations.

In the following, Section II models the problem of line outage identification. Section III discusses the voltage data and identification procedure. Section IV extends to identification with an unknown outage pattern. Section V provides performance guarantees on timely operation. Section VI evaluates our method. Section VIII concludes the paper.
## II System Model

To present our probabilistic design for change point detection and localization, we define the variables on a graph probabilistically. Specifically, we model the distribution grid as a graph \(\mathcal{G}:=\{1,2,\cdots,M\}\) containing \(M>0\) buses connected by branches. Then, the voltage data from each bus \(i\in\mathcal{G}\) is modeled as a random variable \(V_{i}\). As a time series, its realization at time \(n\) is denoted as \(v_{i}[n]=|v_{i}[n]|\exp(j\theta_{i}[n])\in\mathcal{C}\), where \(|v_{i}[n]|\in\mathcal{R}\) represents the voltage magnitude in per unit and \(\theta_{i}[n]\in\mathcal{R}\) is the voltage phase angle in degrees. These steady-state measurements are sinusoidal signals at the same frequency. It is worth noting that, unlike PMUs, smart meters typically do not measure phase angles. Therefore, we want to emphasize that even though the voltage is represented in its phasor form, solely using the voltage magnitude can still effectively identify a line outage.

In the distribution grid \(\mathcal{G}\), the collection of voltage variables is modeled as \(\mathbf{V}_{\mathcal{G}}:=[V_{1},V_{2},\cdots,V_{M}]^{\top}\in\mathcal{R}^{M}\). Moreover, since \(\mathbf{V}_{\mathcal{G}}\) usually does not follow a regular distribution [3], we model the incremental change of the voltage data as \(\Delta\mathbf{V}_{\mathcal{G}}\), whose realization at time \(n\) is \(\Delta\mathbf{v}[n]=\mathbf{v}[n]-\mathbf{v}[n-1]\). For the sake of simplicity, we also use the notation \(\Delta\mathbf{v}^{1:N}=\{\Delta\mathbf{v}[1],\cdots,\Delta\mathbf{v}[N]\}\) to represent observations up to time \(N\). Based on this modeling, the problem of identifying the distribution grid line outage is formally defined as follows.

* **Given**: Voltage increments \(\Delta\mathbf{v}^{1:N}\) from the smart meters.
* **Find**: The line outage time as soon as possible and the out-of-service branch as accurately as possible.

## III Outage Identification via Voltage Magnitude

While the expensive phasor angles and accurate power flows are hard to obtain in distribution grids, [3] showed that the easier-to-acquire voltage magnitude can be utilized to identify the line outage. The authors found that although voltage data do not follow a regular distribution, the incremental change of voltage follows a Gaussian distribution. However, two things were missing in [3]: a clear formula of the distribution and an elaborate analysis of how such a distribution is affected by line outages. They are the key to understanding the procedure and performance of identifying the line outage, and will be discussed in detail in the following subsection.

### _Gaussian Distribution of Voltage Increment_

To answer the missing questions, in this subsection we prove that the increment of the voltage data \(\Delta\mathbf{V}_{\mathcal{G}}\) follows two multivariate Gaussian distributions before and after the line outage, and provide a clear formula of such distributions. In doing so, we can identify the outage by tracing the change of the Gaussian distribution.
To study the distribution of \(\Delta\mathbf{V}_{\mathcal{G}}\), we start from the Kirchhoff's Current Law: the relationship between voltages \(\mathbf{V}_{\mathcal{G}}\in\mathcal{C}^{M}\) and currents \(\mathbf{I}_{\mathcal{G}}\in\mathcal{C}^{M}\) in the grid is \(\mathbf{Y}_{\mathcal{G}}\mathbf{V}_{\mathcal{G}}=\mathbf{I}_{\mathcal{G}}\), where the admittance matrix \(\mathbf{Y}_{\mathcal{G}}\in\mathcal{C}^{M\times M}\) can be derived through the connectivity of the grid as [22] \[\mathbf{Y}_{\mathcal{G}}=\mathbf{A}_{\mathcal{E},\mathcal{G}}^{\top}\mathbf{Y}_{\mathcal{ E}}\mathbf{A}_{\mathcal{E},\mathcal{G}}+\mathbf{Y}_{\mathcal{G}}^{*}. \tag{1}\] In the above equation, \(\mathcal{E}\) denotes the set of branches in the grid \(\mathcal{G}\). \(\mathbf{A}_{\mathcal{E},\mathcal{G}}\in\mathcal{R}^{|\mathcal{E}|\times M}\) is the incidence matrix where each row represents a branch, and has exactly one entry of \(1\) and one entry of \(-1\) to denote the two buses connected by this branch. We can swap the \(-1\) and \(1\) since the grid network is undirectional. \(\mathbf{Y}_{\mathcal{E}}\in\mathcal{C}^{|\mathcal{E}|\times|\mathcal{E}|}\) is a diagonal matrix with the series admittances of each branch, and \(\mathbf{Y}_{\mathcal{G}}^{*}\in\mathcal{C}^{M\times M}\) is a diagonal matrix with the total shunt admittances at each bus. By representing \(\mathbf{Y}_{\mathcal{G}}\) in (1), we can discuss the invertibility of \(\mathbf{Y}_{\mathcal{G}}\), which prepares us for the distribution analysis of \(\Delta\mathbf{V}_{\mathcal{G}}\). To this end, we assume that the branches are not electromagnetically coupled and have non-zero admittance, i.e., \(\mathbf{Y}_{\mathcal{E}}\) is full-rank. This assumption is common in distribution grids [22]. With full-rank \(\mathbf{Y}_{\mathcal{E}}\), we show the invertibility of \(\mathbf{Y}_{\mathcal{G}}\) in Lemma 1. **Lemma 1**.: _In a connected distribution grid \((\mathcal{G},\mathcal{E})\), the admittance matrix \(\mathbf{Y}_{\mathcal{G}}\in\mathcal{C}^{M\times M}\) is invertible after eliminating the slack-bus corresponding column and row._ In the following, we consider the eliminated admittance matrix and keep the notation unchanged for convenience. Based on Lemma 1, the relationship between voltage increments \(\Delta\mathbf{V}_{\mathcal{G}}\) and current increments \(\Delta\mathbf{I}_{\mathcal{G}}\) can be expressed as \[\Delta\mathbf{V}_{\mathcal{G}}=\mathbf{Z}_{\mathcal{G}}\Delta\mathbf{I}_{ \mathcal{G}},\quad\text{where}\quad\mathbf{Z}_{\mathcal{G}}=\mathbf{Y}_{\mathcal{G}}^{ -1}. \tag{2}\] To derive the distribution of \(\Delta\mathbf{V}_{\mathcal{G}}\), we further introduce a common assumption regarding \(\Delta\mathbf{I}_{\mathcal{G}}\). We consider that \(\Delta I\) at each non-slack bus is independent and normally distributed: \[\Delta I_{i}\bot\Delta I_{k},\ i\neq k\quad\text{and}\quad\Delta I_{k}\sim \mathcal{N}(\mu_{k},\sigma_{k}^{2}),\ k\in\mathcal{G}. \tag{3}\] This statement is adopted and validated by real data in many works [23, 24, 25], where the authors computed the mutual information between current injections to justify the independence. The empirical histogram in Fig. 2 also suggests that \(|\Delta I|\) roughly follows a Gaussian distribution. With the assumption in (3), we present the distribution analysis of \(\Delta\mathbf{V}_{\mathcal{G}}\), which is key to identifying the line outage. 
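Before stating the result, the objects in (1)–(3) are easy to assemble numerically; the following is a minimal `numpy` sketch on an illustrative 4-bus example (all numbers are placeholders, not data from this paper).

```python
import numpy as np

# Incidence matrix A (|E| x M): one +1 and one -1 per branch row.
A = np.array([[1, -1,  0,  0],    # branch 0: bus 0 -- bus 1
              [0,  1, -1,  0],    # branch 1: bus 1 -- bus 2
              [0,  1,  0, -1]],   # branch 2: bus 1 -- bus 3
             dtype=complex)

y_series = np.array([4 - 12j, 3 - 9j, 5 - 15j])   # branch series admittances (p.u.)
Y_E  = np.diag(y_series)                          # diagonal branch admittance matrix
Y_sh = np.diag([0.01j, 0.02j, 0.01j, 0.01j])      # total shunt admittance per bus

# Equation (1): Y_G = A^T Y_E A + Y_shunt
Y_G = A.T @ Y_E @ A + Y_sh

# Lemma 1: drop the slack-bus row/column (bus 0 here); the reduced matrix
# is invertible, which gives Z_G in (2).
keep = [1, 2, 3]
Z_G  = np.linalg.inv(Y_G[np.ix_(keep, keep)])

# Equation (2): voltage increments respond linearly to current increments.
delta_I = np.array([0.02 + 0.01j, -0.01 + 0.00j, 0.015 - 0.005j])
delta_V = Z_G @ delta_I
```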
**Theorem 1**.: _Provided that (3) hold, \(\Delta\mathbf{V}_{\mathcal{G}}\) (excluding slack-bus) in a connected distribution grid \((\mathcal{G},\mathcal{E})\) 1) follows a multivariate Gaussian distribution, 2) still follows a multivariate Gaussian distribution (with different mean and covariance) after grid topology changes._ Proof.: With (2), \(\Delta V_{i}\) of each bus \(i\in\mathcal{G}\) (slack-bus is excluded from \(\mathcal{G}\)) can be expressed by \(\Delta V_{i}=\sum_{k\in\mathcal{G}}Z_{ik}\Delta I_{k}\), where \(Z_{ik}\) is the \((i,k)\) element of \(\mathbf{Z}_{\mathcal{G}}\). Hence, any non-trivial linear combination of \(\Delta V_{i},i\in\mathcal{G}\) can also be represented by a linear combination of \(\Delta I_{k},k\in\mathcal{G}\), and is normally distributed. This implies that the joint distribution of \(\Delta\mathbf{V}_{\mathcal{G}}=[\Delta V_{1},\cdots,\Delta V_{M}]^{\top}\) is a multivariate Gaussian distribution as \[\Delta\mathbf{V}_{\mathcal{G}}\sim\mathcal{N}(\mathbf{\mu},\mathbf{\Sigma}), \tag{4}\] where \(\mathbf{\mu}_{i}=\sum_{k\in\mathcal{G}}Z_{ik}\mu_{k}\) and \(\mathbf{\Sigma}_{ik}=\sum_{l\in\mathcal{G}}Z_{il}Z_{kl}\sigma_{l}^{2}\). When the grid topology is changed (e.g., due to a line outage), the incidence matrix \(\mathbf{A}_{\mathcal{E},\mathcal{G}}\) changes accordingly: if branch \(l\) connecting bus \(i\) and \(k\) is out-of-service, \((\mathbf{A}_{\mathcal{E},\mathcal{G}})_{l,i}\) and \((\mathbf{A}_{\mathcal{E},\mathcal{G}})_{l,k}\) become zero. Denoting the new incidence matrix as \(\mathbf{A}_{\mathcal{E},\mathcal{G}}\), there are two scenarios: * The grid network is still connected (which is our focus in this paper). In this case, the changed admittance matrix \(\widetilde{\mathbf{Y}}_{\mathcal{G}}=\widetilde{\mathbf{A}}_{\mathcal{E},\mathcal{G}}^{ \top}\mathbf{Y}_{\mathcal{E}}\widetilde{\mathbf{A}}_{\mathcal{E},\mathcal{G}}+\mathbf{Y}_ {\mathcal{G}}^{*}\) is still invertible, which results in a varied \(\widetilde{\mathbf{Z}}_{\mathcal{G}}=\widetilde{\mathbf{Y}}_{\mathcal{G}}^{-1}\). It implies that \(\Delta\mathbf{V}_{\mathcal{G}}\) still follows a multivariate Gaussian distribution, only with different mean \(\widetilde{\mathbf{\mu}}\) and different covariance \(\mathbf{\Sigma}\) calculated according to \(\widetilde{\mathbf{Z}}_{\mathcal{G}}\). * The grid network is disconnected. In this case, we view the network as disjoint islands where each part is a connected sub-network, e.g., \(\mathcal{G}=\mathcal{G}_{1}\cup\mathcal{G}_{2},\mathcal{G}_{1}\cap\mathcal{G}_{ 2}=\emptyset\). By doing so, we can write the incidence matrix in block format, e.g., \(\widetilde{\mathbf{A}}_{\mathcal{E},\mathcal{G}}=(\begin{smallmatrix}\widetilde{\mathbf{A} }_{\mathcal{E}_{1},\mathcal{G}_{1}}&\mathbf{0}\\ \mathbf{0}&\widetilde{\mathbf{A}}_{\mathcal{E}_{2},\mathcal{G}_{2}}\end{smallmatrix})\). According to the first case, voltage increments in each sub-network follow a multivariate Gaussian distribution, and so does their joint distribution. In this scenario, since some houses lose power connection and will have zero voltages, the outage time and location can be more easily found via our approach. 
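Continuing the sketch above, the Gaussian parameters in (4) (and their post-outage counterparts) follow directly from \(\mathbf{Z}_{\mathcal{G}}\) and the per-bus current-increment statistics in (3); the helper below is hypothetical and uses a real-valued sensitivity matrix purely for illustration.

```python
import numpy as np

def gaussian_params(A, y_branch, y_shunt, mu_I, sigma_I, slack=0):
    """Mean and covariance of the non-slack voltage increments, per (4):
    mu_V = Z_G mu_I and Sigma_V = Z_G diag(sigma_I^2) Z_G^T."""
    Y = A.T @ np.diag(y_branch) @ A + np.diag(y_shunt)
    keep = [i for i in range(Y.shape[0]) if i != slack]
    Z = np.linalg.inv(Y[np.ix_(keep, keep)])
    return Z @ mu_I[keep], Z @ np.diag(sigma_I[keep] ** 2) @ Z.T

# Illustrative real-valued 4-bus data (placeholders).
A = np.array([[1., -1., 0., 0.], [0., 1., -1., 0.], [0., 1., 0., -1.]])
y_branch, y_shunt = np.array([4., 3., 5.]), np.full(4, 0.05)
mu_I, sigma_I = np.zeros(4), np.full(4, 0.02)

mu0, Sigma0 = gaussian_params(A, y_branch, y_shunt, mu_I, sigma_I)

# Outage of branch 2 (bus 1 -- bus 3): zero its row in the incidence matrix.
# The reduced admittance matrix stays invertible here (shunt terms), giving
# the post-outage Gaussian parameters described in Theorem 1.
A_out = A.copy()
A_out[2, :] = 0.0
mu1, Sigma1 = gaussian_params(A_out, y_branch, y_shunt, mu_I, sigma_I)
```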
Suppose the outage occurs at time \(\lambda\), Theorem 1 allows us to write the sequence of voltage increments as \[\begin{cases}\Delta\mathbf{v}[n]\overset{i.i.d}{\sim}g:\mathcal{N}(\mathbf{\mu}_{0 },\mathbf{\Sigma}_{0}),&n=&1,2,\cdots,\lambda-1,\\ \Delta\mathbf{v}[n]\overset{i.i.d}{\sim}f:\mathcal{N}(\mathbf{\mu}_{1},\mathbf{\Sigma} _{1}),&n=&\lambda,\lambda+1,\cdots,N,\end{cases} \tag{5}\] where \(g\) denotes the pre-outage Gaussian distribution and \(f\) denotes the post-outage Gaussian distribution. The mean vectors \(\mathbf{\mu}_{0},\mathbf{\mu}_{1}\), along with covariance matrices \(\mathbf{\Sigma}_{0},\mathbf{\Sigma}_{1}\), are the parameters of these distributions. In our work, the pre-outage parameters \(\mathbf{\mu}_{0}\) and \(\mathbf{\Sigma}_{0}\) can be estimated using historical data during normal operation periods of the distribution grid [3]. The post-outage parameters \(\mathbf{\mu}_{1}\) and \(\mathbf{\Sigma}_{1}\) are considered unknown to reflect real-world outage scenarios, since the outage pattern is unpredictable/unknown. To visualize the varying distribution of the sequence, we provide an illustration of \(|\Delta v[N]|\in\mathcal{R}\) in Fig. 1(b). Fig. 2: The empirical histogram of \(|\Delta I|\). ### _Outage Identification via Distribution Change_ Before proposing our novel solution to unknown outage pattern, we present the commonly used framework to find the outage time \(\lambda\) and outage branch, given voltage data in (5). To identify the outage time \(\lambda\), we conduct a sequential hypothesis test \(\mathcal{H}_{0}:\lambda>N\) and \(\mathcal{H}_{1}:\lambda\leq N\) at every time step \(N\). As \(N\) increases, the first time we reject the null hypothesis \(\mathcal{H}_{0}\) determines the value of \(\lambda\). To decide when to reject \(\mathcal{H}_{0}\), we compute the posterior probability ratio at each time step \(N\) as \[\Lambda(\Delta\mathbf{v}^{1:N})=\frac{\mathbb{P}(\lambda\leq N| \Delta\mathbf{v}^{1:N})}{\mathbb{P}(\lambda>N|\Delta\mathbf{v}^{1:N})}\] \[=\frac{\sum_{k=1}^{N}\pi(k)\prod_{n=1}^{k-1}g(\Delta\mathbf{v}[ n])\prod_{n=k}^{N}f(\Delta\mathbf{v}[n])}{\sum_{k=N+1}^{\infty}\pi(k)\prod_{n=1}^{ N}g(\Delta\mathbf{v}[n])}, \tag{6}\] where \(\lambda\in\mathbb{N}\) is assumed to follow a prior distribution \(\pi\). The posterior probability ratio in (6) compares the probabilities of "outage occurred (\(\lambda\leq N\))" and "outage did not occur (\(\lambda>N\))" based on the historical measurements \(\Delta\mathbf{v}^{1:N}\). A larger posterior probability ratio indicates that "outage occurred" is more likely than "outage did not occur." Therefore, we declare the outage when the ratio (6) exceeds a predefined threshold. By the Shiryaev-Roberts-Pollals procedure [12, 16], the following threshold in Theorem 2 optimally considers the trade-off between the false alarm and the detection delay. **Theorem 2**.: _(Line outage detection). When \(\lambda\) follows a geometric prior Geo(\(\rho\)), we declare the outage at the first time when posterior probability ratio \(\Lambda(\Delta\mathbf{v}^{1:N})\) surpasses the threshold \(B_{\rho,\alpha}=(1-\alpha)/(\rho\alpha)\) as_ \[\tau=\inf\{N\in\mathbb{N}:\Lambda(\Delta\mathbf{v}^{1:N})\geq B_{\rho,\alpha}\}, \tag{7}\] _where the false alarm rate \(\mathbb{P}(\tau<\lambda)\) is upper bounded by maximal false alarm rate \(\alpha\). 
As \(\alpha\to 0\), \(\tau\) in (7) is asymptotically optimal for minimizing the average detection delay \(\mathbb{E}[\tau-\lambda|\tau\geq\lambda]\) as_ \[\begin{split}\mathbb{E}[\tau-\lambda|\tau\geq\lambda]& =\frac{|\log\alpha|}{-\log(1-\rho)+D_{KL}(f||g)}\\ &=\inf_{\mathbb{P}(\tau^{*}\leq\lambda)\leq\alpha}\mathbb{E}[ \tau^{*}-\lambda|\tau^{*}\geq\lambda],\end{split} \tag{8}\] _where \(D_{KL}(f||g)\) is the KL divergence between \(f\) and \(g\)._ One notable feature of the detection procedure described above is its ability to function effectively without requiring knowledge of the grid topology. Additionally, it can handle non-Gaussian distributions for \(f\) and \(g\). As depicted in Fig. 1(c), we calculate the posterior probability ratio sequentially and identify the outage time when the ratio exceeds the threshold. Once the line outage occurrence is detected, localizing the out-of-service branch is also critical for system recovery. In [3], the authors proposed an accurate outage localization method by proving that the voltage increments of two disconnected buses are conditionally independent. They computed the conditional correlation of every possible pair of buses in the grid and checked if the value changes from non-zero to zero. This approach differs from the utilization of nodal electric circuit matrices [26, 27] for estimating fault location, while our approach has also been effective (as shown in Section VI-D) and capitalizes on the learned covariance matrix in scenarios where the post-outage distribution is unknown. To estimate the conditional correlation between bus \(i\) and bus \(k\), the covariance matrix \(\mathbf{\Sigma}\) is utilized. Let set \(\mathcal{I}:=\{i,k\}\) and \(\mathcal{K}:=\mathcal{G}\backslash\{i,k\}\), the covariance matrix is decomposed as \(\mathbf{\Sigma}=\begin{bmatrix}\mathbf{\Sigma}_{\mathcal{I}\mathcal{I}}& \mathbf{\Sigma}_{\mathcal{I}\mathcal{K}}\\ \mathbf{\Sigma}_{\mathcal{I}\mathcal{K}}^{*}&\mathbf{\Sigma}_{\mathcal{K} \mathcal{K}}\end{bmatrix}\). Based on this, the conditional correlation \(\rho_{ik}\) between bus \(i\) and bus \(k\) is \[\rho_{ik}(\mathbf{\Sigma})=\frac{\mathbf{\Sigma}_{\mathcal{I}|\mathcal{K}}(1, 2)}{\sqrt{\mathbf{\Sigma}_{\mathcal{I}|\mathcal{K}}(1,1)\mathbf{\Sigma}_{ \mathcal{I}|\mathcal{K}}(2,2)}}, \tag{9}\] where the conditional covariance is computed by the Schur complement [28] as \(\mathbf{\Sigma}_{\mathcal{I}|\mathcal{K}}=\mathbf{\Sigma}_{\mathcal{I}\mathcal{ I}}-\mathbf{\Sigma}_{\mathcal{I}\mathcal{K}}\mathbf{\Sigma}_{\mathcal{K}\mathcal{K}}^{ -1}\mathbf{\Sigma}_{\mathcal{I}\mathcal{K}}^{\top}\). **Theorem 3**.: _(Line outage localization). The conditional correlation is calculated based on (9) for every pair of \((i,k)\) as_ \[\underbrace{\rho_{ik}^{-}=\rho_{ik}(\mathbf{\Sigma}_{0})}_{\text{before outage}} \quad\text{and}\quad\underbrace{\rho_{ik}^{+}=\rho_{ik}(\mathbf{\widehat{ \Sigma}}_{1})}_{\text{after outage}}. \tag{10}\] _The branch between bus \(i\) and \(k\) is out-of-service if \(|\rho_{ik}^{-}|>\delta_{\max}\) and \(|\rho_{ik}^{+}|<\delta_{\min}\). The thresholds are set as \(\delta_{\max}=0.5\) and \(\delta_{\min}=0.1\) based on real-world outage data to check if the correlation changes from non-zero to near-zero value._ According to Theorem 3, we track the change of covariance matrices to localize the out-of-service branch. 
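As a concrete reference for Theorems 2 and 3, a minimal `numpy`/`scipy` sketch of the detection statistic (6)–(7) and the conditional-correlation test (9)–(10) might look as follows; the numeric settings and variable names are illustrative only.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def log_posterior_ratio(dv, mu0, Sigma0, mu1, Sigma1, rho):
    """Log of the posterior probability ratio in (6) for data dv (N x M),
    under a Geometric(rho) prior on the change point."""
    N = dv.shape[0]
    log_g = multivariate_normal.logpdf(dv, mu0, Sigma0)       # pre-outage density
    log_f = multivariate_normal.logpdf(dv, mu1, Sigma1)       # post-outage density
    cum_g = np.concatenate(([0.0], np.cumsum(log_g)))
    cum_f = np.concatenate(([0.0], np.cumsum(log_f)))
    k = np.arange(1, N + 1)
    log_pi = np.log(rho) + (k - 1) * np.log1p(-rho)            # log pi(k)
    # pi(k) * prod_{n<k} g(dv[n]) * prod_{n>=k} f(dv[n]), in the log domain
    log_terms = log_pi + cum_g[k - 1] + (cum_f[N] - cum_f[k - 1])
    log_num = logsumexp(log_terms)
    log_den = N * np.log1p(-rho) + cum_g[N]                    # P(lambda > N) * prod g
    return log_num - log_den

def cond_corr(Sigma, i, k):
    """Conditional correlation rho_ik in (9) via the Schur complement."""
    I = [i, k]
    K = [j for j in range(Sigma.shape[0]) if j not in I]
    S_II, S_IK, S_KK = Sigma[np.ix_(I, I)], Sigma[np.ix_(I, K)], Sigma[np.ix_(K, K)]
    C = S_II - S_IK @ np.linalg.solve(S_KK, S_IK.T)
    return C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])

# Detection (Theorem 2): declare the outage once the ratio crosses the threshold.
rho, alpha = 0.05, 0.01                                        # illustrative values
log_B = np.log((1 - alpha) / (rho * alpha))
# detected = log_posterior_ratio(dv, mu0, Sigma0, mu1_hat, Sigma1_hat, rho) >= log_B

# Localization (Theorem 3): flag branch (i, k) when |cond_corr(Sigma0, i, k)| > 0.5
# while |cond_corr(Sigma1_hat, i, k)| < 0.1.
```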
Specifically, an out-of-service branch between bus \(i\) and bus \(k\) can be identified if both of the following conditions are met simultaneously: (1) \(|\rho_{ik}^{-}|>\delta_{\max}\), indicating the presence of a branch between buses \(i\) and \(k\) before the outage, and (2) \(|\rho_{ik}^{+}|<\delta_{\min}\), indicating the absence of a branch between buses \(i\) and \(k\) after the outage. Notably, this process still does not need the grid topology as a prior.

Fig. 1: An overview of the distribution grid line outage detection problem: we collect voltage magnitudes from smart meters installed at households and use the posterior probability ratio computed in (6) to detect the change in the underlying distribution of voltage increments.

## IV Outage Identification with Unknown Pattern

The detection and localization procedure in Section III requires knowing all the parameters of \(g\) and \(f\) in advance. However, this is impractical in real-world distribution grids. Specifically, although we know \(g\) and \(f\) are multivariate Gaussian distributions based on Theorem 1, the parameters (mean vectors and covariance matrices) of \(f\) are usually hidden. While the pre-outage distribution parameters can be learned from historical measurements during normal grid operation, the post-outage distribution parameters are often unavailable. In fact, since there are a large number of branches and, therefore, a substantial number of possible outage patterns, we cannot predict the outage pattern or the post-outage distribution parameters. Hence, we need to estimate the unknown parameters before conducting the aforementioned methods to identify the outage.

To resolve this issue, we propose a data-driven framework to learn the post-outage distribution parameters \(\mathbf{\theta}=(\mathbf{\mu}_{1},\mathbf{\Sigma}_{1})\) jointly. Specifically, we want to find the parameter set that minimizes the negative likelihood function \(L(\mathbf{\mu}_{1},\mathbf{\Sigma}_{1})\) as \[(\widehat{\mathbf{\mu}}_{1},\widehat{\mathbf{\Sigma}}_{1})=\arg\min_{(\mathbf{\mu}_{1},\mathbf{\Sigma}_{1})}L(\mathbf{\mu}_{1},\mathbf{\Sigma}_{1}), \tag{11}\] where \(L(\mathbf{\mu}_{1},\mathbf{\Sigma}_{1})\) is computed as \[-\sum_{k=1}^{N}\pi(k)\prod_{n=1}^{k-1}g(\Delta\mathbf{v}[n])\prod_{n=k}^{N}f(\Delta\mathbf{v}[n]|\mathbf{\mu}_{1},\mathbf{\Sigma}_{1}). \tag{12}\] To address the non-convex nature of the likelihood expressed in equation (12), the authors in [3] proposed a convex approximation using Jensen's inequality and derived closed-form solutions for equation (11). However, the use of Jensen's inequality can introduce inaccuracies in the resulting closed-form solutions, particularly in determining the minimum point. Furthermore, the estimated covariance matrix may not always be feasible. Specifically, a feasible covariance matrix must be positive definite, i.e., \(\mathbf{\Sigma}_{1}\succ 0\), and if this condition is not met during the learning process, the computation of the probability density of \(f\) can fail. An alternative approach is to use Gradient Descent (GD) to find the solution to (11). While vanilla GD also cannot ensure the aforementioned feasibility of the parameters, the iterative nature of GD enables us to control the updating trajectory of the parameters.
### _Unknown Parameter Estimation via Projected Gradient Descent with Bregman Divergence Constraint_

To guarantee that the estimates of the parameters \(\mathbf{\theta}=\{\mathbf{\mu}_{1},\mathbf{\Sigma}_{1}\}\) are always feasible, we introduce the Bregman divergence [21] to constrain the estimate in each iteration of GD and arrive at a series of optimization problems as \[\mathbf{\theta}_{i}^{(e+1)}=\arg\min_{\mathbf{\theta}_{i}}\underbrace{\Delta_{\Phi}(\mathbf{\theta}_{i},\mathbf{\theta}_{i}^{(e)})}_{\text{Bregman divergence}}+\eta L(\mathbf{\theta}_{i},\mathbf{\theta}_{-i}^{(e)}). \tag{13}\] In the above equation, \(\mathbf{\theta}_{i}^{(e)}\) is the update of the \(i^{th}\) parameter at the \(e^{th}\) iteration, \(\mathbf{\theta}_{-i}^{(e)}=\mathbf{\theta}^{(e)}\setminus\mathbf{\theta}_{i}^{(e)}\) is its complement set, and \(\eta\) is the trade-off learning rate. The Bregman divergence \[\Delta_{\Phi}(\mathbf{\theta}_{i},\mathbf{\theta}_{i}^{(e)}):=\Phi(\mathbf{\theta}_{i})-\Phi(\mathbf{\theta}_{i}^{(e)})-\operatorname{tr}\left((\mathbf{\theta}_{i}-\mathbf{\theta}_{i}^{(e)})\dot{\Phi}(\mathbf{\theta}_{i}^{(e)})^{\top}\right)\] provides a distance measure between the two variables \(\mathbf{\theta}_{i}\) and \(\mathbf{\theta}_{i}^{(e)}\), where \(\Phi\) is a strictly convex differentiable function and \(\dot{\Phi}\) denotes its differential. Intuitively, it restricts the new estimate \(\mathbf{\theta}_{i}^{(e+1)}\) to stay relatively close to the previous estimate \(\mathbf{\theta}_{i}^{(e)}\). Therefore, if the initial guess \(\mathbf{\theta}_{i}^{(0)}\) is within the feasible domain, we can expect that the parameter \(\mathbf{\theta}_{i}\) following (13) will update towards the direction of minimizing the negative likelihood and meanwhile satisfy a similar property (e.g., positive definiteness of the covariance matrix), since the update step is restricted.

Finding the solution to (13) relies on one characteristic of the Bregman divergence: its gradient with respect to \(\mathbf{\theta}_{i}\) has the simple form \(\nabla_{\mathbf{\theta}_{i}}\Delta_{\Phi}(\mathbf{\theta}_{i},\mathbf{\theta}_{i}^{(e)})=\dot{\Phi}(\mathbf{\theta}_{i})-\dot{\Phi}(\mathbf{\theta}_{i}^{(e)})\). Based on this, we can eliminate the argmin in (13) by setting the gradient (with respect to \(\mathbf{\theta}_{i}\)) of the objective function to zero, and derive the following Projected Gradient Descent solution [29].

**Lemma 2**.: _The optimization problem in (13) is solved as_ \[\mathbf{\theta}_{i}^{(e+1)}=\dot{\Phi}^{-1}\left(\dot{\Phi}(\mathbf{\theta}_{i}^{(e)})-\eta\nabla_{\mathbf{\theta}_{i}}L(\mathbf{\theta}^{(e)})\right). \tag{14}\]

With Lemma 2, we present the key result of our paper: the learning scheme of the unknown parameters with a feasibility guarantee. We will further show that this learning scheme is accurate and has convergence guarantees.

**Theorem 4**.: _(Projected Gradient Descent for learning \(\mathbf{\mu}_{1},\mathbf{\Sigma}_{1}\)). With careful customization of the Bregman divergence (i.e., choosing the appropriate function \(\Phi\)), the Projected Gradient Descent learning in (14) becomes feasible._

* \(\mathbf{\mu}_{1}\in\mathcal{R}^{M}\): \(\Phi(\mathbf{\mu}_{1})=\frac{1}{2}\|\mathbf{\mu}_{1}\|_{2}^{2}\) and \(\dot{\Phi}^{-1}(\mathbf{\mu}_{1})=\mathbf{\mu}_{1}\).
The learning scheme is_ \[\mathbf{\mu}_{1}^{(e+1)}=\mathbf{\mu}_{1}^{(e)}-\eta\nabla_{\mathbf{\mu}_{1}}L(\mathbf{\mu}_{1}^{(e)},\mathbf{\Sigma}_{1}^{(e)}).\] (15)

* \(\mathbf{\Sigma}_{1}\succ 0\)_:_ \(\Phi(\mathbf{\Sigma}_{1})=\operatorname{tr}(\mathbf{\Sigma}_{1}\log\mathbf{\Sigma}_{1}-\mathbf{\Sigma}_{1})\) _and_ \(\dot{\Phi}^{-1}(\mathbf{\Sigma}_{1})=\exp\mathbf{\Sigma}_{1}\)_. The learning scheme is_ \[\mathbf{\Sigma}_{1}^{(e+1)}=\exp\left(\log\mathbf{\Sigma}_{1}^{(e)}-\eta\nabla_{\mathbf{\Sigma}_{1}}L(\mathbf{\mu}_{1}^{(e)},\mathbf{\Sigma}_{1}^{(e)})\right).\] (16)

The learning process of \(\mathbf{\Sigma}_{1}\) is shown in Fig. 3. Since the matrix exponential maps any symmetric matrix to a positive definite matrix, the learning scheme (16) maintains the property of positive definiteness, i.e., \(\mathbf{\Sigma}_{1}^{(e+1)}\succ 0\) if \(\mathbf{\Sigma}_{1}^{(e)}\succ 0\).

Fig. 3: The Projected Gradient Descent update of \(\mathbf{\Sigma}_{1}\).

Besides the statistical properties (e.g., the covariance matrix is positive definite), smart meter data have physical properties as well due to grid operation. For example, because the standard range of voltage magnitude is between \(0\) p.u. and \(1.1\) p.u., the mean value of the voltage increment should be between \(-1.1\) p.u. and \(1.1\) p.u. Our learning scheme in (14) can also satisfy this requirement by defining \[\Phi(\boldsymbol{\mu}_{1})=\sum_{i=1}^{M}\left[(\mu_{i}+1.1)\log(\mu_{i}+1.1)+(1.1-\mu_{i})\log(1.1-\mu_{i})+\mu_{i}\right], \tag{17}\] where \(\mu_{i}\) is the \(i^{th}\) element of the mean vector \(\boldsymbol{\mu}_{1}\). To conclude, when the post-outage distribution parameters \(\boldsymbol{\mu}_{1},\boldsymbol{\Sigma}_{1}\) are unknown, we use (17) and (16) to accurately learn them with feasibility and convergence guarantees. The pre-outage parameters \(\boldsymbol{\mu}_{0}\) and \(\boldsymbol{\Sigma}_{0}\) can be estimated using historical data during normal operation periods of the distribution grid [3]. By obtaining these parameters of the Gaussian density functions \(g\) and \(f\) in (6), we can explicitly calculate \(g\) and \(f\). This enables us to implement Theorem 2 for detecting the outage time and Theorem 3 for localizing the outage branch. This framework is summarized in Algorithm 1.
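Before stating Algorithm 1, the following is a minimal numerical sketch of one update following (15) and (16). It is not the paper's implementation: the gradients of \(L\) are assumed to be supplied (e.g., by numerical differentiation or automatic differentiation of (12)), and the names `pgd_step`, `grad_mu`, and `grad_Sigma` are illustrative.

```python
import numpy as np
from scipy.linalg import expm, logm


def pgd_step(mu1, Sigma1, grad_mu, grad_Sigma, eta):
    """One Projected Gradient Descent step following (15)-(16).

    grad_mu    : gradient of L w.r.t. mu1,    shape (M,)
    grad_Sigma : gradient of L w.r.t. Sigma1, shape (M, M), symmetric
    """
    # (15): plain gradient step for the mean, using Phi(mu) = ||mu||^2 / 2
    mu_next = mu1 - eta * grad_mu
    # (16): mirror step in log-space; the exponential of a symmetric matrix is
    # positive definite, so the covariance estimate stays feasible at every iteration
    Sigma_next = expm(logm(Sigma1) - eta * grad_Sigma)
    # remove the small numerical asymmetry left by expm/logm
    return mu_next, 0.5 * (Sigma_next + Sigma_next.T)
```

In Algorithm 1 below, the mean update additionally uses the mirror map induced by \(\Phi\) in (17), which keeps every component of \(\boldsymbol{\mu}_{1}\) inside \((-1.1,1.1)\) p.u.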
```
Input: New observation \(\Delta\mathbf{v}[N]\)
Output: Outage time \(\tau\) and outage location
Set \(\boldsymbol{\mu}_{1}^{(0)},\boldsymbol{\Sigma}_{1}^{(0)}\) from the \((N-1)^{th}\) step                      // warm start
for \(e=0,1,\ldots\) do
    \(\boldsymbol{\mu}_{1}^{(e+1)}\leftarrow\dot{\Phi}^{-1}\left(\dot{\Phi}(\boldsymbol{\mu}_{1}^{(e)})-\eta\nabla_{\boldsymbol{\mu}_{1}}L\right)\)       // use \(\Phi\) in (17)
    \(\boldsymbol{\Sigma}_{1}^{(e+1)}\leftarrow\exp\left(\log\boldsymbol{\Sigma}_{1}^{(e)}-\eta\nabla_{\boldsymbol{\Sigma}_{1}}L\right)\)
    if \(|L(\boldsymbol{\mu}_{1}^{(e)},\boldsymbol{\Sigma}_{1}^{(e)})-L(\boldsymbol{\mu}_{1}^{(e+1)},\boldsymbol{\Sigma}_{1}^{(e+1)})|\leq 10^{-3}\) then
        return \(\boldsymbol{\mu}_{1}^{best},\boldsymbol{\Sigma}_{1}^{best}\)       // return best update result, defined in Theorem 5
    end if
end for
if \(\Lambda(\Delta\mathbf{v}^{1:N})\geq B_{\rho,\alpha}\) then
    for \(i,k\in\mathcal{G}\) do
        if \(|\rho_{ik}^{-}|>\delta_{\max}\) and \(|\rho_{ik}^{+}|<\delta_{\min}\) then
            return \(\tau=N\), report the out-of-service branch between bus \(i\) and \(k\)
        end if
    end for
end if
```
**Algorithm 1** Line outage identification with unknown post-outage distribution parameters

## V Timely Outage Identification with Performance Guarantee

In addition to the feasibility issue that has already been addressed in Theorem 4, the accuracy and computation time of the proposed learning scheme are two other concerns when implementing such a method for real-world outage identification. In this section, we demonstrate that our proposed method can achieve the optimal parameter solution with a guaranteed convergence. Furthermore, we present an efficient implementation for timely operation.

### _Restricted Convexity for Convergence Guarantee_

While the non-convexity of the likelihood \(L(\boldsymbol{\mu}_{1},\boldsymbol{\Sigma}_{1})\) in (12) hinders us from deriving a convergence analysis directly, we note that \(L(\boldsymbol{\mu}_{1},\boldsymbol{\Sigma}_{1})\) is constrained convex. Specifically, we notice that the unknown parameters \(\boldsymbol{\mu}_{1},\boldsymbol{\Sigma}_{1}\) of \(f\) are not supposed to be far away from the known parameters \(\boldsymbol{\mu}_{0},\boldsymbol{\Sigma}_{0}\) of \(g\), thus freeing us from searching the entire parameter space. This is because the alternative power supply makes the impact of a line outage less severe. In fact, if \(\boldsymbol{\mu}_{1},\boldsymbol{\Sigma}_{1}\) were significantly far from \(\boldsymbol{\mu}_{0},\boldsymbol{\Sigma}_{0}\), distinguishing between \(f\) and \(g\) would become trivial: imagining a large leap from pre-outage data to post-outage data in Fig. 1(b), one can detect this change very easily. Hence, a reasonable assumption is that \(\boldsymbol{\mu}_{1},\boldsymbol{\Sigma}_{1}\) are relatively close to \(\boldsymbol{\mu}_{0},\boldsymbol{\Sigma}_{0}\), which actually results in a much harder detection problem. With this assumption, we can restrict our search for parameters to a constrained set where \(L(\boldsymbol{\mu}_{1},\boldsymbol{\Sigma}_{1})\) has good properties. To formally present this, we introduce the restricted convexity [30].
**Definition 1**.: _A continuously differentiable function \(H:\mathcal{R}^{M}\rightarrow\mathcal{R}\) is restricted convex over a possibly non-convex region \(\mathcal{D}\subseteq\mathcal{R}^{M}\) if for every \(\boldsymbol{x},\boldsymbol{y}\in\mathcal{D}\) we have \(H(\boldsymbol{y})\geq H(\boldsymbol{x})+\langle\nabla H(\boldsymbol{x}),\boldsymbol{y}-\boldsymbol{x}\rangle\)._

Then, we show that \(L(\boldsymbol{\mu}_{1},\boldsymbol{\Sigma}_{1})\) satisfies the restricted convexity in Definition 1. Specifically, \(L(\boldsymbol{\mu}_{1})\) is restricted convex on the constrained set \(\left\{\boldsymbol{\mu}_{1}\,\middle|\,\boldsymbol{\Sigma}_{1}\succeq\frac{\boldsymbol{v}_{k}(\boldsymbol{\mu}_{1})\boldsymbol{v}_{k}(\boldsymbol{\mu}_{1})^{T}}{(N-k+1)},\ \forall k\leq N\right\}\) where \(\boldsymbol{v}_{k}(\boldsymbol{\mu}_{1})=\sum_{n=k}^{N}(\Delta\mathbf{v}[n]-\boldsymbol{\mu}_{1})\). Similarly, \(L(\boldsymbol{\Sigma}_{1})\) is restricted convex on the constrained set \(\{\boldsymbol{\Sigma}_{1}\succ 0\,|\,\nabla^{2}L(\operatorname{vec}(\boldsymbol{\Sigma}_{1}))\succeq 0\}\). Based on this property, we derive in Theorem 5 the convergence of updating \(\boldsymbol{\mu}_{1}\) and \(\boldsymbol{\Sigma}_{1}\).

**Theorem 5**.: _Using PGD to iteratively update \(\boldsymbol{\theta}_{i}\), the best update \(\boldsymbol{\theta}_{i}^{best}:=\arg\min_{e\in[E]}L(\boldsymbol{\theta}_{i}^{(e)})\) and the averaged update \(\boldsymbol{\theta}_{i}^{avg}:=\frac{1}{E}\sum_{e=1}^{E}\boldsymbol{\theta}_{i}^{(e)}\) will converge to the optimal value \(\boldsymbol{\theta}_{i}^{*}=\arg\min_{\boldsymbol{\theta}_{i}}L(\boldsymbol{\theta}_{i})\) with a step size \(\eta=\frac{1}{\sqrt{E}}\):_ \[L(\boldsymbol{\theta}_{i}^{best})\leq L(\boldsymbol{\theta}_{i}^{*})+\varepsilon\quad\text{and}\quad L(\boldsymbol{\theta}_{i}^{avg})\leq L(\boldsymbol{\theta}_{i}^{*})+\varepsilon,\] _for any \(\varepsilon>0\), after at most \(E=\mathcal{O}(\frac{1}{\varepsilon^{2}})\) iterations._

The proof is in Appendix B. Moreover, since the best update converges faster, as shown in Section VI, we choose it as the output of the learning scheme in Algorithm 1. To better visualize how the parameters are updated in the restricted convex area via Projected Gradient Descent (PGD), we provide Fig. 4. As we see, although the likelihood function \(L\) is not convex in the entire parameter space, the restricted area is convex, opening the door for learning accurate and feasible parameter solutions to (11).

### _Acceleration for Timely Operation_

Theorem 5 shows that our proposed method can find the optimal parameters with polynomial-time complexity, which enables quick operation. In this subsection, we provide an efficient implementation of the learning scheme to further accelerate the algorithm for timely outage identification. To achieve this, we notice that while the matrix exponential and logarithm operations in (16) provide good properties of covariance estimation, they are very time-consuming to calculate, because the calculation is often based on their infinite Taylor series. To accelerate these operations, we propose to use finite terms of their Taylor series to approximate the operations. The matrix exponential is given by the power series in (18), and can be approximated by its first \(K_{\text{exp}}\) terms since \(\frac{1}{k!}\) decreases drastically when \(k\) becomes large.
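To illustrate this truncation idea, which is stated formally as (18) and (19) in Lemmas 3 and 4 below, a minimal sketch of the finite-series operations follows. The default truncation levels \(K_{\text{exp}}=12\) and \(K_{\text{log}}=16\) follow the choices reported in Section VI, and the function names are illustrative rather than from the paper.

```python
import numpy as np


def approx_expm(X, K_exp=12):
    """Truncated Taylor series of the matrix exponential, cf. (18)."""
    term = np.eye(X.shape[0])
    result = term.copy()
    for k in range(1, K_exp + 1):
        term = term @ X / k          # accumulates X^k / k!
        result += term
    return result


def approx_logm(X, K_log=16):
    """Truncated Taylor series of the matrix logarithm, cf. (19).

    The underlying series converges when the eigenvalues of X - I lie inside the unit disk.
    """
    D = X - np.eye(X.shape[0])
    term = np.eye(X.shape[0])
    result = np.zeros_like(X, dtype=float)
    for k in range(1, K_log + 1):
        term = term @ D              # accumulates (X - I)^k
        result += ((-1) ** (k + 1)) * term / k
    return result
```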
However, once we replace the original matrix exponential operation with the approximated operation \(\widetilde{\exp}\), we need to verify that \(\widetilde{\exp}\) also maps any symmetric matrix to a positive definite matrix to satisfy the conclusion in Theorem 4. Otherwise, we will arrive at covariance estimates outside the feasible domain. With this motivation, we show in Lemma 3 that if we choose an appropriate value of \(K_{\text{exp}}\) for approximation, the operation \(\widetilde{\exp}\) has similar properties as \(\exp\). **Lemma 3**.: _The matrix exponential can be approximated as_ \[\exp(\mathbf{X})=\sum_{k=0}^{\infty}\frac{1}{k!}\ \mathbf{X}^{k}\approx\sum_{k=0}^{K_{ \text{exp}}}\frac{1}{k!}\ \mathbf{X}^{k}:=\widetilde{\exp}(\mathbf{X}), \tag{18}\] _for any real matrix \(\mathbf{X}\). Also, \(\widehat{\exp}(\mathbf{X})\succ 0\) for any symmetric \(\mathbf{X}\) if \(K_{\text{exp}}\) is even and \(K_{\text{exp}}>\max\{0,-a_{\min}\}\), where \(a_{\min}\) is the smallest eigenvalue of \(\mathbf{X}\)._ Similarly, the matrix logarithm is given by the power series in (19), and can be approximated by the first \(K_{\text{log}}\) terms. **Lemma 4**.: _The matrix logarithm can be approximated as_ \[\log(\mathbf{X})\approx\sum_{k=1}^{K_{\text{log}}}\frac{1}{k}(-1)^{k+1}(\mathbf{X}- \mathbf{I})^{k}:=\widehat{\log}(\mathbf{X}). \tag{19}\] In order to make Theorem 4 still hold true when we use the approximated operation \(\widetilde{\log}\), we only need the symmetry of \(\widetilde{\log}\), which can be easily verified as \(\widetilde{\log}(\mathbf{X}^{\top})=\sum_{k=1}^{K_{\text{log}}}\frac{1}{k}(-1)^ {k+1}\left((\mathbf{X}-\mathbf{I})^{k}\right)^{\top}=\left(\widehat{\log}(\mathbf{X}) \right)^{\top}\). In summary, the proposed two approximation operations \(\widetilde{\exp}\) and \(\widehat{\log}\) can still preserve the feasibility of the covariance matrix estimation. The choice of \(K_{\text{exp}}\) and \(K_{\text{log}}\) is a trade-off between execution time and approximation accuracy: when \(K_{\text{exp}}\) is small, the operation \(\widetilde{\exp}\) is very fast but provides a poor approximation of \(\exp\) and vice versa. In Section VI, we will demonstrate how we choose an appropriate approximation level which results in almost zero errors and over \(75\%\) reduction in execution time. ## VI Validate on Extensive Outage Scenarios with Real-World Data This section shows how our proposed method performs in various distribution grids with real-world data. To evaluate our method in systems with different sizes and environments, we design extensive experiments on IEEE 8-bus, IEEE 123-bus networks [31], as well as two European representative distribution systems: medium voltage (MV) network in the urban area and low voltage (LV) network in the suburban area [32]. In each network, bus 1 is selected as the slack bus. To account for more complex outage scenarios in real-world distribution grids, we examine situations where alternative power sources are available after a line outage. In these scenarios, the "last gasp" notification is ineffective, making it more difficult to detect the line outage. We simulated the following two representative scenarios to replicate this complex scenario. It should also be noted that if certain buses are disconnected from the main grid and experience a voltage magnitude of zero following an outage, our method can accurately and quickly identify the out-of-service line. This is a simpler case compared to the ones we simulated below. 
Only voltage magnitude data is used in the experiments even though we model the voltage data in its phasor form. Another concern regarding the data is the high dimensionality in large-scale grids. To resolve this computational issue, we apply the whitening transformation to our data as \(\Delta\mathbf{v}^{1:N}\rightarrow\mathbf{W}\Delta\mathbf{v}^{1:N}\) based on the PCA whitening matrix satisfying \(\mathbf{W}^{\top}\mathbf{W}=\mathbf{\Sigma}_{0}^{-1}\). Since the whitening transformation does not change the KL divergence between \(g\) and \(f\), it has no impact on the outage detection performance.

In the subsequent experiments, we compare our proposed method with various baselines. When the post-outage distribution \(f\) is fully known, we refer to the optimal Bayesian procedure as \(f\) **known**. When the parameters of \(f\) are unknown, our method is referred to as **PGD**. For baseline methods specifically designed for outage detection with an unknown post-outage distribution, we consider an approximated maximum likelihood estimation (**MLE**) proposed to learn the unknown parameters [3, 34], a generalized likelihood ratio test (**GLRT**) that only considers finite possibilities [35] of post-outage distributions \(f\), and a **Shewhart** test [36] that utilizes mean-shift and covariance changes in the data to detect outages. For methods that are developed for unknown post-change distributions in change point detection, we consider a non-parametric binned generalized statistic (**BGS**) proposed to approximate the original ratio test in classic CPD [37], a non-parametric uncertain likelihood ratio (**ULR**) proposed to replace the original ratio [38], a distributed approach (**DIS**) [1], and a deep Q-network approach (**DCQ**) [39]. For a more robust evaluation, each experiment is conducted by Monte Carlo simulation with over 1000 replications. In every replication, we randomly simulate the outage time \(\lambda\) through the geometric distribution \(\text{Geo}(\rho)\). This geometric prior is based on our belief that outages can occur independently at any time step, with an equal probability of \(\rho\). We choose \(\rho=0.04\) in our experiments, which is derived from historical outage data, indicating that each time step has a 4% chance of experiencing a line outage.

### _Parameters Estimation with Accuracy and Convergence_

Prior to demonstrating the accurate identification of outages with unknown post-outage distribution parameters, we must first verify that our method can learn the optimal parameters with a guaranteed convergence. Throughout the parameter learning iterations, we plot the Euclidean distance between the best update and the ground truth in Fig. 5. The plot indicates that our learning process converges to the ground truth, thereby verifying the convergence conclusion stated in Theorem 5.

Fig. 5: Distance between best update and ground truth against iterations.

### _Outage Detection with Small Delay and Rare False Alarm_

After evaluating the effectiveness of using PGD to learn the unknown parameters, we then verify the performance of outage detection using such learned parameters. The first criterion to evaluate our detection procedure is the average detection delay.
To validate the asymptotic optimality of the detection delay in Theorem 2, in Fig. 6 we plot the average delay \(\mathbb{E}(\tau-\lambda|\tau\geq\lambda)\) divided by \(|\log\alpha|\) and the theoretical lower bound \(-\log(1-\rho)+D_{KL}(f||g)\). We observe that the average detection delay of the case when \(f\) is known and that of the PGD both achieve the optimal lower bound asymptotically, while the delay of PGD is slightly higher. Moreover, using PGD and accelerated PGD to learn the unknown post-outage distribution statistics enables quicker line outage detection compared to the MLE method.

Fig. 6: Plots of the slope \(\frac{1}{|\log\alpha|}\mathbb{E}(\tau-\lambda|\tau\geq\lambda)\) against \(|\log\alpha|\) for outage detection in the loopy 8-bus system (outage branch 4-7).

The detection rule in Theorem 2 can also restrict the false alarm rate below the maximum tolerance \(\alpha\). To verify this, we calculate the empirical false alarm rate \(\mathbb{P}(\tau<\lambda)\) and compare it against the upper bound \(\alpha\), as shown in Fig. 7. Our proposed method has similar performance compared to the case when \(f\) is known, since the empirical false alarm rate is mainly below the upper bound \(\alpha\) (especially when \(\alpha\to 0\)). This observation demonstrates that our proposed algorithm can quickly detect line outages with a low false alarm rate, even when the post-outage distribution statistics are unknown.

Fig. 7: Plots of the empirical false alarm rate against the theoretical probability of false alarm \(\alpha\) in the loopy 8-bus system (outage branch 4-7).

In Table II, we present a summary of our proposed method's performance in various grid systems under different outage configurations. Our method demonstrates the ability to handle diverse outage scenarios in both mesh and radial networks with DER penetration. Specifically, when \(f\) is unknown, our method exhibits a lower detection delay and a significantly lower false alarm rate in comparison to MLE. Even when \(f\) is given, our proposed method only experiences a slight degradation compared to the benchmark. Furthermore, Table II reveals two additional phenomena. First, when multiple branches are out-of-service simultaneously, the average detection delay is shorter than in a single-line outage scenario due to the larger KL distance between distributions \(g\) and \(f\) when multiple lines are disconnected. Second, in the radial network with more simulated DERs, it takes more time to detect the line outage as the KL distance between \(g\) and \(f\) is smaller in this case.

To compare with more relevant methods in the literature, we provide in Table III the detection performance of our proposed method and other methods. The comparison of average detection delay and false alarm rate shows that our method is only slightly degraded from the benchmark even though we have incomplete information, and outperforms other methods that also have incomplete information. The reason for this is our performance guarantee, as stated in Theorem 5, which ensures the accurate estimation of the unknown post-outage distribution parameters. Furthermore, upon comparing our approach (PGD) with the machine-learning-based method (DCQ), we notice that the latter displays a greater variance in the average detection delay and false alarm rate. This can be attributed to the fact that the neural network's parameters are randomly initialized during training, leading to a more varied estimation of the unknown post-outage distribution parameters.
### _Analysis of Execution Time for Timely Operation_

In addition to detection delay and false alarm rate, the execution time of the proposed method is also critical for timely detection. Table IV presents the execution time of Algorithm 1 on various grid systems with different sampling rates. From the records, less than \(3\) seconds per sample is needed to obtain the outage detection result when we receive a new sample, even for grid systems with more than \(100\) buses. This execution time is negligible compared to the normal smart meter sampling interval, which ranges from \(1\) minute to \(1\) hour. Moreover, since the most time-consuming part of our algorithm is the matrix exponential and matrix logarithm operation, we can accelerate the algorithm by approximating these operations based on their Taylor series expansion, as discussed in Section V-B. To maintain the detection performance, we select an appropriate level of approximation with near-zero errors incurred. In Fig. 8, we choose \(K_{\text{exp}}=12\) because at this approximation level, the execution time is reduced by more than \(75\%\) with almost zero errors incurred. Similarly, we choose \(K_{\text{log}}=16\), which is slightly larger than \(K_{\text{exp}}\), since the term \(\frac{1}{k}\) in (19) decreases more slowly than the term \(\frac{1}{k!}\) in (18) as \(k\) becomes large. As a result, the accelerated PGD only shows a slight performance degradation, as shown in Figs. 6 and 7. More importantly, in Table IV, the acceleration technique reduces the execution time by more than half, thus achieving more timely outage detection. Evidently, the acceleration technique becomes more valuable in distribution systems whose smart meters have a higher sampling rate, which is the trend of the future.

Table IV exhibits another phenomenon: as the sampling rate increases, the processing time for the accumulated data \(\Delta\mathbf{v}^{1:N}\) also increases. Consequently, conducting Algorithm 1 becomes challenging when \(N\) grows very large. To address this issue, we discovered that a small window of historical samples can adequately differentiate between the pre- and post-outage distributions. Specifically, instead of using all \(N\) samples when \(N\) is very large, we can employ the latest \(N_{0}\) samples to represent the entire data stream since they contain nearly identical distribution information in the temporal dimension. By doing so, the time complexity of the algorithm is restricted to a constant number, \(N_{0}\). Through experiments, we determined that \(N_{0}=100\) samples are sufficient to maintain the algorithm's effectiveness and accuracy.

### _Outage Branch Localization with Accuracy_

After detecting an outage occurrence, we further compute the conditional correlation between buses to localize the out-of-service branch, following Theorem 3. Here, Fig. 9 demonstrates the absolute conditional correlation of every pair of buses in the loopy 8-bus system before and after a line outage at branch 4-7. Since the value in the red box changes from a non-zero value before the outage (\(\rho_{47}^{-}>\delta_{\max}\)) to near zero after the outage (\(\rho_{47}^{+}<\delta_{\min}\)), we localize the out-of-service branch at 4-7, which matches the ground truth. Fig. 9(d) indicates that the localization method using the learned covariance matrix through PGD is as effective as the optimal scenario, and is more effective than using the learned covariance matrix through MLE.
Table V demonstrates the accuracy rate of localization in 1,000 experiments. As shown, our proposed method can accurately localize over 90% of the outage branches, even without the post-outage distribution parameters.

### _Sensitivity Analysis to Data Noise and Data Coverage_

Smart-meter data can be noisy and corrupted. Besides, smart-meter data may not be accessible in every household of the distribution grid. Thus, an analysis of our proposed method under different levels of data noise and data coverage is critical to gain a better understanding of its effectiveness in real-world outage scenarios. In the U.S., the ANSI C12.20 standard permits utility smart meters to have an error within \(\pm 0.5\%\) [40]. Hence, we simulate such noise in our smart-meter voltage measurements and then evaluate the corresponding detection performance. Table VI shows both the average detection delay and the false alarm rate of our method under different noise levels. As we see, when the noise level is \(0.5\%\), one more sample (compared to the noiseless case) is needed for the detection, while the false alarm rate is also slightly increased. In fact, we are able to quantify the increase in detection delay by analyzing the change in KL divergence between the pre- and post-outage distributions affected by noisy data. In doing so, we are able to better understand and control real-world line outage detection.

Another concern regarding the smart meter data is that it may not be accessible for every household in the distribution grid, particularly in certain situations. For instance, (1) in rural areas, some households may not have installed smart meters, (2) the voltage data for certain households may be lost due to technical issues, and (3) some households may refuse to provide their voltage data due to privacy concerns. Although the new generation of smart meters is developing very fast, an analysis of incomplete coverage of smart meter data is needed to evaluate our algorithm. We first emphasize that our proposed method does not rely on the assumption of 100% coverage of smart meter data in the grid. In fact, a power line outage will influence almost all buses in the system, while the degree of influence depends on the distance between a bus and the source of the outage. Hence, we can reveal the outage by detecting the distribution change of some (not necessarily all) voltage data collected near the outage source.

Fig. 8: The ratio of saved execution time versus the ratio of error incurred by the operation \(\widehat{\exp}\) against the level of approximation \(K_{\text{exp}}\).

Fig. 9: Absolute conditional correlation of the loopy 8-bus system before and after an outage in branches 4-7. We choose \(\delta_{\max}=0.5\) and \(\delta_{\min}=0.1\).

According to [41], over 107 million smart meters were deployed by 2021, covering 75% of U.S. households. Hence, we simulate this scenario where only a fraction of buses is randomly selected to provide voltage measurements in the grid system to detect the outage. Fig. 10 demonstrates both the average detection delay and the false alarm rate of our method under different levels of coverage ratio. In comparison to the scenario where voltage data is available for all buses, the detection delay increases by 1.2 time steps. This means that an extra 1.2 samples of data are needed to detect the outage in the 75% data coverage scenario. Similarly, when the data coverage ratio drops to 50%, an additional 6.9 samples are required for detection.
Furthermore, as the data coverage ratio decreases to only 50%, the false alarm rate increases from 0.7% to 21.9%.

Fig. 10: Average detection delay (unit) and false alarm rate (%) under different levels of data coverage in the loopy 123-bus system, \(\alpha=1\%\).

### _Sensitivity Analysis to Hyper-parameters_

Our detection procedure involves certain hyper-parameters that have the potential to influence the detection performance, such as the geometric distribution parameter \(\rho\). Therefore, conducting a sensitivity analysis pertaining to these hyper-parameters is crucial to assess the robustness of our proposed method. During our experiments, we randomly simulated the outage time \(\lambda\) using a geometric prior distribution denoted as \(\text{Geo}(\rho)\). This distribution aligns with our assumption that outages can take place at any time step with an equal probability of \(\rho\). Fig. 11 illustrates the effect of the parameter \(\rho\) on the performance of our detection method. It can be observed that choosing different values of \(\rho\) within the range of \(0.004\) to \(0.05\) has a negligible impact on both the false alarm rate (approximately 1.65%) and the localization accuracy (approximately 92.8%). Additionally, decreasing the value of \(\rho\) leads to a slight increase in the average detection delay.

Fig. 11: Average detection delay (unit), false alarm rate (%) and localization accuracy (%) under different levels of \(\rho\) in the loopy 123-bus system, \(\alpha=1\%\).

## VII Limitations

While this paper provides certain performance guarantees, we also encounter some limitations that we look forward to addressing in the future. For instance, while the proposed approach requires only voltage magnitude data, it may be limited by the quality and availability of this data. As shown in Section VI-E, noise or incomplete data will lead to additional detection delay. Future research could investigate how to leverage additional types of data to improve outage detection and localization. Another aspect worth investigating is the ability to withstand diverse outage scenarios. For instance, if an outage occurs in an insignificant branch of the grid, resulting in minimal fluctuations in voltage data, detecting such a subtle outage remains a challenge. Hence, further research is necessary to improve the detection performance in such cases. Lastly, although sensor readings facilitate line outage detection, they pose privacy concerns since they can disclose sensitive information like household occupancy and economic status to potential adversaries. An open problem is how to identify outages accurately without compromising the customer's data.

## VIII Conclusion

This paper resolves three challenges in the line outage identification problem: data availability, unknown outage pattern, and timely operation. Our approach for detecting and localizing line outages only utilizes voltage magnitude. To handle unknown outage patterns, we propose a Projected Gradient Descent framework that can learn the unknown post-outage distribution parameters with a feasibility guarantee. We demonstrate the convergence guarantee of our method and further accelerate the proposed algorithm for timely operation, resulting in a reduction of more than 75% of execution time with minimal errors. Empirical results on representative grid systems confirm that our proposed method is suitable for timely outage detection and localization, even in the absence of prior knowledge about outage patterns.

## Appendix A Proof of Lemma 1

Proof.: In (1), the incidence matrix \(\mathbf{A}_{\mathcal{E},\mathcal{G}}\) has rank \(M-1\) in the connected grid [42]. The diagonal and full-rank complex matrix \(\mathbf{Y}_{\mathcal{E}}\) can be decomposed as \(\mathbf{Y}_{\mathcal{E}}=\mathbf{B}^{\top}\mathbf{B}\) where \(\mathbf{B}\) is also full-rank.
Then, \(\text{rank}(\mathbf{A}_{\mathcal{E},\mathcal{G}}^{\top}\mathbf{Y}_{\mathcal{E}}\mathbf{A}_{\mathcal{E},\mathcal{G}})=M-1\) since \[\mathbf{A}_{\mathcal{E},\mathcal{G}}^{\top}\mathbf{Y}_{\mathcal{E}}\mathbf{A}_{\mathcal{E},\mathcal{G}}=\mathbf{A}_{\mathcal{E},\mathcal{G}}^{\top}\mathbf{B}^{\top}\mathbf{B}\mathbf{A}_{\mathcal{E},\mathcal{G}}=(\mathbf{B}\mathbf{A}_{\mathcal{E},\mathcal{G}})^{\top}(\mathbf{B}\mathbf{A}_{\mathcal{E},\mathcal{G}}).\] Hence, we have \(\text{rank}(\mathbf{Y}_{\mathcal{G}})=M-1\) considering zero shunt admittance and \(\text{rank}(\mathbf{Y}_{\mathcal{G}})=M\) otherwise. In both cases, \(\mathbf{Y}_{\mathcal{G}}\) has at least one non-singular \((M-1)\times(M-1)\) sub-matrix, which can be viewed as the remaining matrix after eliminating one column and one row (of a slack-bus) in \(\mathbf{Y}_{\mathcal{G}}\).

## Appendix B Proof of Theorem 5

Proof.: Since \(L(\mathbf{\mu}_{1})\) is restricted convex, we apply this convexity property in the constraint set \(\mathcal{U}\) to give an upper bound to the level of sub-optimality of the \(e^{th}\) iterate as \[L_{e}=L(\mathbf{\mu}_{1}^{(e)})-L(\mathbf{\mu}_{1}^{*})\leq\langle\nabla L(\mathbf{\mu}_{1}^{(e)}),\mathbf{\mu}_{1}^{(e)}-\mathbf{\mu}_{1}^{*}\rangle. \tag{20}\] We can obtain an upper bound of \(\langle\nabla L(\mathbf{\mu}_{1}^{(e)}),\mathbf{\mu}_{1}^{(e)}-\mathbf{\mu}_{1}^{*}\rangle\) as \[\frac{1}{2\eta}\left(\|\mathbf{\mu}_{1}^{(e)}-\mathbf{\mu}_{1}^{*}\|_{2}^{2}+\eta^{2}U^{2}-\|\mathbf{\mu}_{1}^{(e+1)}-\mathbf{\mu}_{1}^{*}\|_{2}^{2}\right), \tag{21}\] where the inequality holds since the gradient \(\nabla L(\mathbf{\mu}_{1})\) can be bounded on the constraint set \(\mathcal{U}\), i.e., \(\|\nabla L(\mathbf{\mu}_{1})\|_{2}\leq U\) for all \(\mathbf{\mu}_{1}\in\mathcal{U}\). Combining equations (20) and (21), we arrive at \[L_{e}\leq\frac{1}{2\eta}\left(\|\mathbf{\mu}_{1}^{(e)}-\mathbf{\mu}_{1}^{*}\|_{2}^{2}-\|\mathbf{\mu}_{1}^{(e+1)}-\mathbf{\mu}_{1}^{*}\|_{2}^{2}\right)+\frac{\eta U^{2}}{2},\] which upper bounds the sub-optimality at every \(e^{th}\) iterate. Denoting the initial mean vector by \(\mathbf{\mu}_{1}^{(1)}\), we can sum the sub-optimality across iterates and average it by dividing by the total iterate number \(E\) as \(\frac{1}{E}\sum_{e=1}^{E}L_{e}\leq\frac{1}{2\sqrt{E}}\left(\|\mathbf{\mu}_{1}^{(1)}-\mathbf{\mu}_{1}^{*}\|_{2}^{2}+U^{2}\right),\) where the step size is \(\eta=\frac{1}{\sqrt{E}}\). Therefore, for any \(\varepsilon>0\), we can always use at most \(E=O(\frac{1}{\varepsilon^{2}})\) total iterates to make sure \(\frac{1}{E}\sum_{e=1}^{E}L_{e}\leq\varepsilon\) since \(\|\mathbf{\mu}_{1}^{(1)}-\mathbf{\mu}_{1}^{*}\|_{2}^{2}+U^{2}\) is a constant number. Then, we prove that the averaged and best updates both converge to \(\mathbf{\mu}_{1}^{*}\) after \(E\) updates. Applying Jensen's inequality, we derive \(L(\mathbf{\mu}_{1}^{\text{avg}})=L(\frac{1}{E}\sum_{e=1}^{E}\mathbf{\mu}_{1}^{(e)})\leq\frac{1}{E}\sum_{e=1}^{E}L(\mathbf{\mu}_{1}^{(e)})\leq L(\mathbf{\mu}_{1}^{*})+\varepsilon\).
Since \(L(\mathbf{\mu}_{1}^{\text{best}})\leq L(\mathbf{\mu}_{1}^{(e)})\) for every \(e^{th}\) iterate, we have \(L(\mathbf{\mu}_{1}^{\text{best}})\leq\frac{1}{E}\sum_{e=1}^{E}L(\mathbf{\mu}_{1}^{(e)})\leq L(\mathbf{\mu}_{1}^{*})+\varepsilon\).
2301.13700
One step entropy variation in sequential sampling of species for the Poisson-Dirichlet Process
We consider the sequential sampling of species, where observed samples are classified into the species they belong to. We are particularly interested in studying some quantities describing the sampling process when there is a new species discovery. We assume that the observations and species are organized as a two-parameter Poisson-Dirichlet Process, which is commonly used as a Bayesian prior in the context of entropy estimation, and we use the computation of the mean posterior entropy given a sample developed in [4]. Our main result shows the existence of a monotone functional, constructed from the difference between the maximal entropy and the mean entropy throughout the sampling process. We show that this functional remains constant only when a new species discovery occurs.
Servet Martínez, Javier Santibáñez
2023-01-31T15:20:12Z
http://arxiv.org/abs/2301.13700v1
# _One step entropy variation in sequential sampling of species for the Poisson-Dirichlet Process_

###### Abstract

We consider the sequential sampling of species, where observed samples are classified into the species they belong to. We are particularly interested in studying some quantities describing the sampling process when there is a new species discovery. We assume that the observations and species are organized as a two-parameter Poisson-Dirichlet Process, which is commonly used as a Bayesian prior in the context of entropy estimation, and we use the computation of the mean posterior entropy given a sample developed in [4]. Our main result shows the existence of a monotone functional, constructed from the difference between the maximal entropy and the mean entropy throughout the sampling process. We show that this functional remains constant only when a new species discovery occurs.

**AMS Classification Number:** 94A17

**Keywords:** Entropy, Bayesian posterior distribution, Poisson-Dirichlet Process, new species discovery.

## 1 Introduction

Consider the sequential sampling of species, where one takes a random sample from a population and classifies each observation according to the species (or class) to which it belongs. Because the population is large, there are some rare species that may not be observed. We intend to understand and model the discovery of a new species in this context and to study related informational quantities. Our main result shows that the two-step variation of differences between the maximal entropy and the entropy allows us to describe when a new species is discovered in the Poisson-Dirichlet Process (PDP). It is worth mentioning that our work is purely statistical.

The two-parameter PDP, introduced by Pitman and Yor in 1997 [15], supplies random partitions with an infinite number of components in \([0,1]\) and serves to model the process of sampling species and the times at which new species are discovered, see [11], [8] and [9]. This process has been used in ecology, but also in genetic applications [7], natural language processing [16] and finance [17]. In Section 2, we will introduce the PDP and some of the basic properties that we shall use.

Entropy is a way to measure the diversity of communities in a sample, and our work focuses on studying some aspects of the posterior entropy of the process of sampling species in the PDP. The computation of the posterior entropy relies on the fact that, given a sample from a PDP, the posterior distribution is a mixture of a finite Dirichlet distribution and a PDP. Much of this paper's concern with Bayesian entropy estimation is due to the results in [4], in which the prior and posterior mean entropies for the PDP were computed and some of their properties stated. This is discussed in Section 3. In Proposition 3.1, we provide lower and upper bounds for the entropy when the sample size is fixed.

The main purpose of this work is to obtain an increasing functional along the process, constructed with the posterior mean entropy between two successive steps of the PDP with parameters \((\alpha,\theta)\). This functional is \[\mathcal{L}_{\ell}=(\theta+\ell)(\widehat{H}_{\ell}^{\max}-\widehat{H}_{\ell}), \tag{1}\] and it satisfies the monotone property \(\mathcal{L}_{\ell+1}\geq\mathcal{L}_{\ell}\). Here \(\widehat{H}_{\ell}\) denotes the posterior entropy when observing a sample at step \(\ell\) and \(\widehat{H}_{\ell}^{\max}\) is its maximum over all samples of size \(\ell\).
Our main result is Theorem 4.4 in Section 4, where we show that \(\mathcal{L}_{\ell}\) is increasing and that the equality \(\mathcal{L}_{\ell+1}=\mathcal{L}_{\ell}\) is attained only when a new species is discovered. We also show that the weighted difference of entropies satisfies \[(\theta+\ell+1)\widehat{H}_{\ell+1}-(\theta+\ell)\widehat{H}_{\ell}>0.\] The expression (18) obtained in Theorem 4.4 for the above difference of weighted entropies allows us to think of the entropy as a sum of the 'discovery values' of the sampled species, plus an additive deterministic term depending on \(\ell,\alpha\) and \(\theta\). On the other hand, the expression (17) allows us to write the functional \({\cal L}_{\ell}\) straightforwardly as a sum of positive rewards for 'reinforcing the knowledge' of what is known, and no additional additive term is required. The discovery values and the reinforcement rewards are expressed in terms of the digamma function. This is discussed in Remark 4.8. We also study similar quantities in the frequentist framework, and relations in the same vein are shown in Proposition 4.2.

## 2 Poisson-Dirichlet Process

This section is devoted to the definition of the PDP and to supplying some of its properties. We follow the articles [14], [5], [18], [13], [16] and [4]. Since this is a well-known theory, we only state those results directly related to our work.

Let \(0\leq\alpha<1\) and \(\theta>-\alpha\). Consider independent random variables \(\beta_{k}\sim\mbox{Beta}(1-\alpha,\theta+\alpha k)\). Let \(\pi=(\pi_{k}:k\geq 1)\) be given by the two-parameter Griffiths-Engen-McCloskey distribution, \(GEM(\alpha,\theta)\), \[\pi_{1}:=\beta_{1},\quad\pi_{k}:=\beta_{k}\prod_{j=1}^{k-1}(1-\beta_{j})\quad k\geq 2,\] which defines a probability vector a.s. Now consider a non-atomic probability measure \(G\) defined on a space \({\cal X}\). Let \((\phi_{k}:k\geq 1)\) be an i.i.d. sequence with common distribution \(G\); then the \(\phi_{k}\) are all different a.s. We assume \(\phi=(\phi_{k}:k\geq 1)\) is independent of \(\pi\). The discrete random measure \[\Xi(\cdot)=\sum_{k\geq 1}\pi_{k}\delta_{\phi_{k}}(\cdot) \tag{2}\] is called the PDP with base measure \(G\) and parameters \(\alpha\) and \(\theta\). The base measure \(G\) is non-atomic; it serves to give different names to the species in the process \(\Xi(\cdot)\), but the only fact that matters is that the species are different. The exact names are not important, which explains why we ignore \(G\) and simply write \(PDP(\alpha,\theta)\). The case \(\alpha=0\) is called the Dirichlet process, and it can be constructed as an infinite extension of a Dirichlet distribution. Examples of how the PDP helps to model different phenomena can be seen in [14] and [12].

Samples from a PDP are obtained from (2) in the following way. For a random measure \(\Xi(\cdot)\) one takes an i.i.d. sequence of variables \((X_{n}:n\geq 1)\) with values in \(\mathcal{X}\). Let \(\mathbf{X}_{\ell}=(X_{1},\ldots,X_{\ell})\) be a sample of size \(\ell\) collected in a sequential way. By \(K_{\ell}\) we denote the total number of different species of the sample, which are denoted by \(X_{1}^{*},\ldots,X_{K_{\ell}}^{*}\). For \(j=1,\ldots,K_{\ell}\) we denote by \(N_{j}^{\ell}\) the number of times that the species \(X_{j}^{*}\) is observed in the sample, so \(\ell=\sum_{j=1}^{K_{\ell}}N_{j}^{\ell}\). Further, we do not take into account the order of the species in the sample; if needed, one can enumerate their frequencies in decreasing order.
So, \((N_{j}^{\ell}:j=1,\ldots,K_{\ell})\) means the multiset of frequencies (that is a set where the values can be repeated). The conditional probability for a new observation \(X_{\ell+1}\) is, see [5], \[\mathbb{P}(X_{\ell+1}=\bullet\,|\,\mathbf{X}_{\ell})=\frac{\theta+\alpha K_{ \ell}}{\theta+\ell}G(\cdot)+\sum_{j=1}^{K_{\ell}}\frac{N_{j}^{\ell}-\alpha}{ \theta+\ell}\delta_{X_{j}^{*}}. \tag{3}\] So, the observation \(X_{\ell+1}\) is part of the species \(X_{j}^{*}\) already observed with probability \(\frac{N_{j}^{\ell}-\alpha}{\theta+\ell}\), and \(X_{\ell+1}\) defines a new species with probability \(\frac{\theta+\alpha K_{\ell}}{\theta+\ell}\). In this last case the new species \(X_{\ell+1}=X_{K_{\ell}+1}^{*}\) is distributed as \(G\) independently of the species already discovered, and \(\ell+1\) is said to be the discovery time of a new species. That is, the transition probability (3) states the probability of discovering a new species and gives a different name to it, the important point is that it is different to the previous ones. ## 3 Bayesian entropy To define the Bayesian entropy one assumes a prior distribution and makes the estimation of entropy based upon the posterior distribution given the sample. We will introduce Bayesian entropy in the context of PDP following closely, as mentioned in the introduction, the results in [4], and also [3] and [6]. To do so, we need to recall the definition of entropy. Let \(\pi\) be a distribution, the Shannon entropy is defined as \[H(\pi)=-\sum_{i=1}^{\infty}\pi_{i}\log(\pi_{i}).\] For further computations it is useful to introduce the digamma function and some of its properties, which can be found in [1] and [2]. This function is the logarithmic derivative of the Gamma function: \[\psi(x)=\frac{d}{dx}\log(\Gamma(x))=\frac{\Gamma^{\prime}(x)}{\Gamma(x)},\] where \(\Gamma(x)=\int_{0}^{\infty}t^{x-1}e^{-t}dt\). From \(\Gamma(x+1)=x\Gamma(x)\), one gets \(\psi(x+1)=\psi(x)+1/x\) for \(x>0\), that implies \[x\psi(x+1)-(x-1)\psi(x)=\psi(x)+1,\;x>0. \tag{4}\] The digamma function is increasing for \(x>0\) and then \(x\psi(x+1)-(x-1)\psi(x)\) is also increasing for \(x>0\). Since \(\psi(2)>0\), then \(x\psi(x+1)>(x-1)\psi(x)\) when \(x\geq 1\). The digamma function admits the following bounds in terms of the logarithmic function, see [2]: \[\log(x)-\frac{1}{x}\leq\psi(x)\leq\log(x)-\frac{1}{2x},\quad x>0. \tag{5}\] For \(x\) sufficiently big the digamma function can be approximated by \[\psi(x)=\log(x)-\frac{1}{2x}+o\left(\frac{1}{x}\right). \tag{6}\] ### Entropy for the Poisson-Dirichlet Process Let \(\mathbf{X}_{\ell}=(X_{1},\ldots,X_{\ell})\) be a sample following a distribution \(\pi\). The Bayesian approach for estimating the entropy requires to assume a prior distribution \(\pi\) and estimate the posterior distribution. The least square Bayes estimator has the shape: \(\mathbb{E}(H(\pi)|\mathbf{X}_{\ell})\). When one takes a PDP as prior, the sample \(\mathbf{X}_{\ell}\) should be obtained from the random measure \(\Xi\), given by (2). But, as we mentioned before, we can omit any reference to \(G\), so the sample is obtained from the weight distribution \(\pi\) and we will refer to the process and its weight distribution indistinctly by the same symbol, that is, the prior is \(\pi\sim PDP(\alpha,\theta)\). In [4] the prior mean of \(H(\pi)\) is proven to be, \[\mathbb{E}(H(\pi))=\psi(\theta+1)-\psi(1-\alpha).\] We are interested in finding the posterior mean of \(H(\pi)\), after seeing a sample. 
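Before turning to the posterior, the sequential sampling mechanism can be made concrete with a small simulation. The following is a minimal sketch of the predictive rule (3), not code from the paper; the function name `sample_pdp_counts` is illustrative, and only species counts are tracked since the names drawn from \(G\) play no role.

```python
import numpy as np


def sample_pdp_counts(ell, alpha, theta, seed=0):
    """Sample ell observations sequentially with the predictive rule (3).

    Returns the species frequencies (N_j^ell) and the discovery times.
    Requires 0 <= alpha < 1 and theta > -alpha.
    """
    rng = np.random.default_rng(seed)
    counts = []        # N_j for each discovered species
    discoveries = []   # steps at which a new species is discovered
    for n in range(1, ell + 1):
        k = len(counts)
        # probability of a new species given the first n-1 observations
        if k == 0 or rng.random() < (theta + alpha * k) / (theta + n - 1):
            counts.append(1)
            discoveries.append(n)
        else:
            # an existing species j is chosen with probability (N_j - alpha)/(theta + n - 1),
            # i.e., proportionally to N_j - alpha once "no new species" is decided
            weights = np.asarray(counts, dtype=float) - alpha
            j = rng.choice(k, p=weights / weights.sum())
            counts[j] += 1
    return counts, discoveries
```

For instance, `sample_pdp_counts(1000, 0.5, 1.0)` returns the multiset of frequencies \((N_{j}^{\ell})\) together with the steps at which \(K_{\ell}\) increases.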
To describe the posterior distribution consider the sample \(\mathbf{X}_{\ell}\) with \(K_{\ell}\) different species and frequencies \(N_{1}^{\ell},\ldots,N_{K_{\ell}}^{\ell}\). To simplify notation put \(K_{\ell}=k\) and \(N_{j}^{\ell}=n_{j}\) for \(j=1,\ldots,k\). In [10] it was shown that the posterior distribution \(\pi_{post}=(p_{1},\ldots,p_{k},(1-\sum_{j=1}^{k}p_{j})\pi^{\prime})\) is given by the mixture \[(p_{1},\ldots,p_{k},1-\sum_{j=1}^{k}p_{j}) \sim \mbox{Dirichlet}(n_{1}-\alpha,\ldots,n_{k}-\alpha,\theta+\alpha k)\] \[\pi^{\prime}=(\pi^{\prime}_{1},\pi^{\prime}_{2},\ldots) \sim PDP(\alpha,\theta+\alpha k).\] Hence, the probability of belonging to some species \(X_{j}^{*}\) already present in the sample is \(p_{j}\) for \(j=1,\ldots,k\); and the probability to belong to a new species is \(1-\sum_{j=1}^{k}p_{j}\), where the distribution of these probabilities depend on the frequencies \((n_{j})\) and \(k\). In the event that a new species is discovered it will be part of a specific species \(i\) with weight \(\pi^{\prime}_{i}\). The species \(X_{i}^{*}\) related to the prior distribution \(\pi\), is not the same as the species \(X_{i}^{*}\) in the posterior distribution \(\pi_{post}\), because the index taken after observing the sample is arbitrary. But, this index discrepancy does not cause any problem since the ordering of \(\pi_{i}\) is not important in \(H(\pi)\) and the transition probability for the discovery of a new species and for the species that have been discovered in the past continues to have the weights given by (3). Also, the posterior distribution of \(\pi\) is represented by a realization \(\pi_{post}\) whose ordering is totally different from the ordering of \(\pi\), this realization is only one representation of the posterior distribution. The Bayes estimator of the posterior mean of the entropy under the PDP prior, at step \(\ell\), will be defined as \[\widehat{H}_{PDP}^{\ell}=\mathbb{E}(H(\pi)|\mathbf{X}_{\ell}).\] We will write \(H\) instead of \(H(\pi)\) when there is no confusion, so \(\widehat{H}_{PDP}^{\ell}=\mathbb{E}(H|\mathbf{X}_{\ell})\). In [4] it was shown that the posterior mean of \(H\) under the PDP prior is, \[\widehat{H}_{PDP}^{\ell}=\psi(\theta+\ell+1)-\frac{\theta+\alpha k}{\theta+ \ell}\psi(1-\alpha)-\frac{1}{\theta+\ell}\sum_{i=1}^{k}(n_{i}-\alpha)\psi(n_{ i}-\alpha+1). \tag{7}\] Let \(\widehat{\pi}^{\ell}\) be the vector of empirical probabilities \(\widehat{\pi}_{i}^{\ell}=n_{i}/\ell\), for \(i=1,\ldots,k\), and \(\widehat{\pi}_{i}^{\ell}=0\) for \(i>k\), given by the sample \(\mathbf{X}_{\ell}\). The Maximum Likelihood Estimator (MLE) of the entropy, at step \(\ell\), under multinomial likelihood, is given by \[\widehat{H}_{MLE}^{\ell}=H(\widehat{\pi}^{\ell})=-\sum_{i=1}^{\infty}\widehat {\pi}_{i}^{\ell}\log(\widehat{\pi}_{i}^{\ell}), \tag{8}\] which is a biased estimator. In [4] it is shown that when \(K_{\ell}/\ell\) converges in probability to \(0\), then \(\widehat{H}^{\ell}_{PDP}\) satisfies the following consistency property, \[|\widehat{H}^{\ell}_{PDP}-\widehat{H}^{\ell}_{MLE}|\to 0\mbox{ as }\ell\to\infty. \tag{9}\] ### Bounds for the posterior PDP entropy Let us obtain lower and upper bounds for the entropy when the sample size is fixed. This is made firstly when the number of species is fixed and after over all possible number of species in the sample. 
**Proposition 3.1**.: _For a sample \({\bf X}_{\ell}\) of a PDP\((\alpha,\theta)\), with \(k\) different species the entropy is upper and lower bounded by,_ \[\mathbb{E}(H|{\bf X}_{\ell}) \leq \psi(\theta\!+\!\ell\!+\!1)-\frac{\theta\!+\!\alpha k}{\theta\!+ \!\ell}\psi(1\!-\!\alpha)\!-\!\frac{1}{\theta\!+\!\ell}\sum_{i=1}^{k}(\overline {n}_{i}\!-\!\alpha)\psi(\overline{n}_{i}\!-\!\alpha\!+\!1);\] \[\mathbb{E}(H|{\bf X}_{\ell}) \geq \psi(\theta\!+\!\ell\!+\!1)-\frac{\theta\!+\!\alpha k}{\theta\!+ \!\ell}\psi(1\!-\!\alpha)\!-\!\frac{1}{\theta\!+\!\ell}\sum_{i=1}^{k}( \underline{n}_{i}\!-\!\alpha)\psi(\underline{n}_{i}\!-\!\alpha\!+\!1);\] _where the vectors of frequencies \((\overline{n}_{i}:i=1,\ldots,k)\) and \((\underline{n}_{i}:i=1,\ldots,k)\) of the maximal entropy and the minimal entropy respectively, have the following structures up to index permutation:_ \[\overline{n}_{i}=\lfloor\ell/k\rfloor,i=1,\ldots,l_{k},\quad\overline{n}_{i}= \lfloor\ell/k\rfloor+1,i=l_{k}+1,\ldots,l_{k}+h_{k}\] _where \(\lfloor x\rfloor\) is the biggest integer smallest or equal to \(x\), \(h_{k}=\ell-k\lfloor\ell/k\rfloor\) and \(l_{k}=k-h_{k}\); and_ \[\underline{n}_{k}=\ell-(k-1)\mbox{ and }\underline{n}_{i}=1,\;i=1,\ldots,k-1.\] _Moreover, when one looks for the global bounds on all entropy maxima for \(k\in\{1,\ldots,\ell\}\), one finds that: the global maximum is attained when the \(\ell\) elements of the sample belong to different species and the global minimum is attained when the \(\ell\) elements of the sample belong to a unique species. This is,_ \[\min_{{\bf Y}_{\ell}}\mathbb{E}(H|{\bf Y}_{\ell})\leq\mathbb{E}(H|{\bf X}_{ \ell})\leq\max_{{\bf Y}_{\ell}}\mathbb{E}(H|{\bf Y}_{\ell})\] _with_ \[\max_{{\bf Y}_{\ell}}\mathbb{E}(H|{\bf Y}_{\ell}) = \psi(\theta\!+\!\ell\!+\!1)-\psi(1-\alpha)-\frac{\ell}{\theta+ \ell}, \tag{10}\] \[\min_{{\bf Y}_{\ell}}\mathbb{E}(H|{\bf Y}_{\ell}) = \psi(\theta\!+\!\ell\!+\!1)\!-\!\frac{(\theta\!+\!\alpha)\psi(1\! -\!\alpha)}{\theta\!+\!\ell}\!-\!\frac{(\ell\!-\!\alpha)\psi(\ell\!-\!\alpha\! +\!1)}{\theta\!+\!\ell}. \tag{11}\] Proof.: We will take into account that \(-\psi(1-\alpha)>0\). Let us first prove the extremal entropies for a fixed \(k\). If \(k=1\) there nothing to examine because \(n_{1}=\ell\) and one simply computes the entropy. Let \(k>1\). Take two species \(i\neq j\) and set \(n_{i}=n\), \(n_{j}=m\). Assume \(n>1\). We will fix when the entropy grows when one makes the change \(n\to n-1\), \(m\to m+1\) and all other frequencies \(n_{l}\) are equal, so the number of classes continues to be \(k\) and the sum of their frequencies continues to be \(\ell\). This change makes the entropy grow if and only if the following inequality holds (we take into account that there is a minus in front of the third term at the right hand side in (7)), \[(n-1-\alpha)\psi(n-\alpha)+(m+1-\alpha)\psi(m+2-\alpha)\] \[\leq (n-\alpha)\psi(n-\alpha+1)+(m-\alpha)\psi(m-\alpha+1).\] From (4) this is equivalent to \[0\leq-\psi(m-\alpha+1)-1+\psi(n-\alpha)+1=\psi(n-\alpha)-\psi(m-\alpha+1).\] But this is equivalent to \(m+1\leq n\). So, when this last inequality holds we make the change \(n\to n-1\) and \(m\to m+1\). (Note that if \(n=m+1\) the change leaves the set of frequencies invariant because the new pair is the same, \(m\), \(m+1\)). 
Therefore the maximal entropy for \(k\) classes is attained by the following structure of frequencies: \[n_{i}=\lfloor\ell/k\rfloor,i=1,\ldots,l_{k},\quad n_{i}=\lfloor\ell/k\rfloor+ 1,i=l_{k}+1,\ldots l_{k}+h_{k}\] with \(h_{k}=\ell-k\lfloor\ell/k\rfloor\) and \(l_{k}=k-h_{k}\). This is the frequencies are 'as equal as possible'. On the opposite when \(m+1\geq n\), the change \(n\to n-1\), \(m\to m+1\), makes the entropy decrease. So, the minimal entropy structure of frequencies is given by \(n_{1}=\ell-(k-1)\) and the rest of \(k-1\) species have frequency \(1\). Therefore the first two inequalities of the Proposition are shown. Now for obtaining the global maxima and minima we must see what happens with the extreme solutions for different \(k\)'s. This is based upon the following observation. Assume we have \(k<\ell\) number of species with frequencies \((n_{1},\cdots,n_{k})\) and \(n_{k}>1\). Let us see what happens when we change this structure of frequencies to one that contains \(k+1\) species and \((n_{1},\cdots,n_{k-1},n_{k}-1,1)\), so with \(n_{k+1}=1\). We claim that this operation makes the entropy strictly bigger. In fact by (7) the claim is equivalent to \[-\alpha\psi(1-\alpha)-(n_{k}-1-\alpha)\psi(n_{k}-\alpha)-(1-\alpha)\psi(2- \alpha)>-(n_{k}-\alpha)\psi(n_{k}+1-\alpha).\] By using (4) this last inequality is equivalent to \[-\alpha\psi(1-\alpha)-(1-\alpha)\psi(2-\alpha)+\psi(n_{k}-\alpha)+1>0. \tag{12}\] Since \(\psi(n_{k}-\alpha)\geq\psi(2-\alpha)\) it suffices to check the inequality (12) for \(n_{k}=2\). When in the expression at the left hand side in (12) we set \(n_{k}=2\) we get, \[\alpha(\psi(2-\alpha)-\psi(1-\alpha))+1,\] which is strictly positive, so (12) holds and the claim is satisfied. Then, if one takes the maximal configuration for \(k<\ell\) species, we know that there exists a frequency, that we can assume is the \(k-\)th one, that satisfies \(n_{k}>1\). So, by making the above operation gives a configuration of frequencies of a total number of species \(k+1\) and such that the entropy increases strictly. In particular the maximal entropy for \(k+1\) species is strictly bigger than the maximal entropy for \(k\) species. Then, (10) is proven. Finally when we make the above operation from the minimal configuration of \(k\) species we retrieve the minimal configuration of the \(k+1\) species and so the minimal entropy for \(k\) species is strictly lower than the minimal entropy for \(k+1\) species. So, (11) follows. The result is shown. **Remark 3.2**.: _From (11) and since \(-\psi(1-\alpha)>0\), we get_ \[\min_{\mathbf{Y}_{\ell}}((\theta+\ell)\mathbb{E}(H|\mathbf{Y}_{\ell}))\!\geq \!(\theta+\ell)\psi(\theta+\ell+1)-(\ell-\alpha)\psi(\ell-\alpha+1),\] _where \(\theta>-\alpha\). On the other hand for every real \(h>0\) we have \((x+h)\log(x+h+1)-x\log(x+1)\to\infty\) as \(x\to\infty\). Then, by also using (6) we get that \(\min_{\mathbf{Y}_{\ell}}((\theta+\ell)\mathbb{E}(H|\mathbf{Y}_{\ell}))\to\infty\) as \(\ell\to\infty\). \(\square\)_ The relation (9) shows a key property between the frequentist estimator based on empirical probabilities and the Bayesian estimator based on the posterior mean under the PDP prior, when \(\ell\to\infty\). In next section we will study the variation of weighted estimators when making a finite step \(\ell\) to \(\ell+1\), showing a property that is similar for both, the frequentist and the PDP cases. 
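As a small numerical companion to the estimators discussed in this section, the sketch below evaluates the posterior mean entropy (7), the plug-in estimator (8) and the maximal value (10) from Proposition 3.1. It uses `scipy.special.digamma` for \(\psi\), and the function names are illustrative rather than taken from the paper.

```python
import numpy as np
from scipy.special import digamma


def posterior_mean_entropy(counts, alpha, theta):
    """Posterior mean entropy under the PDP(alpha, theta) prior, formula (7)."""
    n = np.asarray(counts, dtype=float)
    ell, k = n.sum(), len(n)
    return (digamma(theta + ell + 1)
            - (theta + alpha * k) / (theta + ell) * digamma(1 - alpha)
            - np.sum((n - alpha) * digamma(n - alpha + 1)) / (theta + ell))


def mle_entropy(counts):
    """Plug-in (maximum likelihood) entropy estimate, formula (8)."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return -np.sum(p * np.log(p))


def max_posterior_entropy(ell, alpha, theta):
    """Maximum of (7) over all samples of size ell, formula (10)."""
    return digamma(theta + ell + 1) - digamma(1 - alpha) - ell / (theta + ell)
```

Together with the sampling sketch after (3), this can be used to observe empirically that \(|\widehat{H}^{\ell}_{PDP}-\widehat{H}^{\ell}_{MLE}|\) shrinks as \(\ell\) grows, in line with (9), and that (10) dominates (7) for any sample of size \(\ell\).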
## 4 One step variation of entropy and discovery of a new species We will state and prove our main result: an equality proving that a weighted variation between two successive steps of the posterior Bayesian entropy, is nonnegative and only vanishes in the discovery times of a new species. This is done in Section 4.2. Related to this result, we previously study the variation of the entropy when one only computes frequencies, and how it characterizes discovery time of species. ### One step variation of entropy for frequencies The framework is the following one: we collect a series of elements that are being classified in some class or species, at the moment when they are observed. At step \(\ell\) one has collected in a sequential way \(\ell\) elements \((X_{1},\ldots,X_{\ell})\) that are grouped into a set of disjoint equivalence classes which are enumerated in a sequential way as it first element is discovered. Let \(k_{\ell}\) be the number of classes at step \(\ell\) and \((n_{j}^{\ell}:j=1,\ldots,k_{\ell})\) be the number of elements in these classes, so \(\ell=\sum_{j=1}^{k_{\ell}}n_{j}^{\ell}\). When a new element \(X_{\ell+1}\) is observed, there are two possibilities: this element is in a class of an element collected before or at \(\ell\), in this case \(k_{\ell+1}=k_{\ell}\) and if \(X_{\ell+1}\) belongs to the class \(j\) then \(n_{j}^{\ell+1}=n_{j}^{\ell}+1\). When \(X_{\ell+1}\) is in none of the classes of the previous elements then a new class is discovered, so \(k_{\ell+1}=k_{\ell}+1\), \(n_{k_{\ell}+1}^{\ell+1}=1\) at step \(\ell+1\) and the frequencies of the classes that do not contain \(X_{\ell+1}\) remain unchanged from \(\ell\) to \(\ell+1\). The entropy at step \(\ell\) is \[H_{\ell}=-\sum_{j=1}^{k_{\ell}}\frac{n_{j}^{\ell}}{\ell}\log\left(\frac{n_{j}^ {\ell}}{\ell}\right).\] This relation is entirely similar to (8). We set \(0\log 0=0\), so one can add an empty class without changing the entropy. **Remark 4.1**.: _In general the sequence \((H_{\ell}:\ell\geq 1)\) is neither increasing nor decreasing. For instance if the observations \(X_{i}\), \(i=1,\ldots,4\) are such that the pairs \(\{X_{1},X_{3}\}\) and \(\{X_{2},X_{4}\}\) belong to the same class, but the classes are different, it holds \(\log 2=H_{2}=H_{4}>H_{3}\). \(\Box\)_ One has \(H_{\ell}\leq\log\ell:=H_{\ell}^{\max}\), and the equality is attained only when \(k_{\ell}=\ell\), that is when each of the \(\ell\) elements defines its own class. We also have \(H_{\ell}\geq 0\) and it vanishes only when there is a unique class containing the \(\ell\) elements. In all the other cases both inequalities, the upper and lower bounds, are strict. Also notice that \(H_{1}=0\). Below we will consider the steps \(\ell\) and \(\ell+1\) of the sequence \((H_{\ell}:\ell\geq 1)\). We will note by \(j^{\ell+1}\in\{1,\ldots,k_{\ell+1}\}\) the index of class that contains observation \(X_{\ell+1}\). Then, \(n_{j^{\ell+1}}^{\ell+1}\) is the frequency of class \(X_{j^{\ell+1}}^{*}=X_{\ell+1}\) at step \(\ell+1\). **Proposition 4.2**.: _The functional given by_ \[\mathcal{L}_{\ell}^{f}=\ell(\log\ell-H_{\ell}),\text{ for }\ell\geq 1\text{ and }\mathcal{L}_{0}^{f}=0,\] _is a nondecreasing and nonnegative functional along the trajectory \((X_{\ell}:\ell\geq 1)\) and it remains constant, \(\mathcal{L}_{\ell+1}^{f}=\mathcal{L}_{\ell}^{f}\), only when a new species is discovered at \(\ell+1\). 
More precisely, \(\Delta_{\ell+1}^{f}=\mathcal{L}_{\ell+1}^{f}-\mathcal{L}_{\ell}^{f}\) satisfies_ \[\forall\ell\geq 1,\quad\Delta_{\ell+1}^{f}=n_{j^{\ell+1}}\log(n_{j^{\ell+1}} )-(n_{j^{\ell+1}}\!-\!1)\log(n_{j^{\ell+1}}\!-\!1)\geq 0, \tag{13}\] _and \(\Delta_{\ell+1}^{f}=0\) only when a new class is discovered at \(\ell+1\), that is_ \[\Delta_{\ell+1}^{f}=0\Leftrightarrow n_{j^{\ell+1}}=1. \tag{14}\] _Moreover,_ \[(\ell+1)H_{\ell+1}-\ell H_{\ell} \tag{15}\] \[= \,(\ell\!+\!1)\log(\ell\!+\!1)\!-\!\ell\log\ell\!-\!\left(n_{j^{ \ell+1}}\log(n_{j^{\ell+1}})\!-\!(n_{j^{\ell+1}}\!-\!1)\log(n_{j^{\ell+1}}\!- \!1)\right)\!\geq\!0,\] _and vanishes only when \(K_{\ell+1}=1\)._ Proof.: We will show (15) at the end of the proof. All the other properties will follow when we show that \(\Delta_{\ell+1}^{f}\) satisfies the equality in (13). In fact, the inequality \(\Delta_{\ell+1}^{f}\geq 0\) is a direct consequence of it because \(j\log j-(j-1)\log(j-1)\geq 0\). This implies that the functional \(\mathcal{L}_{\ell}^{f}\) is nondecreasing. Also we have that \(j\log j-(j-1)\log(j-1)\) vanishes only if \(j=1\), and so (14) is obtained and this ensures that the functional \(\mathcal{L}\) remains constant only at times when a new class is discovered. Notice that \(\Delta_{1}^{f}=\mathcal{L}_{1}^{f}-\mathcal{L}_{0}^{f}=0\) is consistent with the fact that at step 1 a new class is discovered. Let us show the equality in (13). To simplify notation, we note \(j^{*}=j^{\ell+1}\) the class containing \(X_{\ell+1}\) at step \(\ell+1\). Also we write \(\sum\limits_{j\neq j^{*}}\) to mean \(\sum\limits_{1\leq j\leq k_{\ell+1},j\neq j^{*}}\). In the rest of the proof we note \(n_{j}=n_{j}^{\ell+1}\) for \(j=1,\ldots,k_{\ell+1}\), so \(n_{j^{*}}\) is the cardinality of the class \(X_{j^{*}}^{*}\). If at step \(\ell+1\) one has \(j\neq j^{*}\) then the number of elements of the class \(j\) is equal at steps \(\ell\) and \(\ell+1\). We have \[(\ell+1)H_{\ell+1}=-\sum\limits_{j=1}^{k_{\ell+1}}n_{j}\log n_{j}+(\ell+1)\log (\ell+1)\] and then \[(\ell\!+\!1)(\log(\ell\!+\!1)\!-\!H_{\ell\!+\!1})\!=\!\sum\limits_{j=1}^{k_{ \ell+1}}n_{j}\log n_{j}\!=\!\sum\limits_{j\neq j^{*}}n_{j}\log n_{j}\!+\!n_{j^ {*}}\log n_{j^{*}}.\] Now, the frequency of class \(j^{*}\) at step \(\ell\) is \(n_{j^{*}}-1\), so in a similar way as we did for the term \(\ell+1\) we get \[\ell(\log\ell-H_{\ell})=\sum_{j\neq j^{*}}n_{j}\log n_{j}+(n_{j^{*}}-1)\log(n_{j ^{*}}-1).\] Then, \(\Delta_{\ell+1}^{f}=(\ell+1)(\log(\ell+1)-H_{\ell+1})-\ell(\log\ell-H_{\ell})\) satisfies the equality in (13). Finally the equality in (15) is directly obtained from the equality in (13). The inequality \(\geq 0\) in this relation is a consequence of the increasing property of the function \((n+1)\log(n+1)-n\log n\) for \(n\geq 1\), which follows from \((1+1/n)^{n}<(1+1/(n+1))^{n+1}\) for all \(n\geq 1\) (and \(0\log 0=0\)). Consider the function \(\kappa(\ell+1)=(\ell+1)\log(\ell+1)-\ell\log\ell\) for \(\ell\geq 1\). From \(x-x^{2}/2\leq\log(1+x)\leq x\) for \(x\geq 0\), we get \[\frac{1}{2\ell}-\frac{1}{2\ell^{2}}\leq\kappa(\ell+1)-(\log\ell+1)\leq\frac{ 1}{\ell}\,,\] and for large \(\ell\) we have \(\kappa(\ell+1)\approx\log\ell+1+o(1)\). These bounds and approximation can be applied for \(\Delta_{\ell+1}^{f}=\kappa(n_{j^{\ell+1}})\). ### One step variation of the Bayesian entropy Let us consider the one step variation of Bayesian entropy for the PDP. Consider an i.i.d. 
sequence \((X_{n}:n\geq 1)\) of elements in \(\mathcal{X}\) chosen with a random measure \(\Xi(\cdot)\) of a PDP\((\alpha,\theta)\) which fixes the family of finite samples \(\mathbf{X}_{\ell}=(X_{1},\ldots,X_{\ell})\), \(\ell\geq 1\). **Remark 4.3**.: _We note that the sequence of entropies \((\mathbb{E}(H|\mathbf{X}_{\ell}):\ell\geq 1)\) is neither increasing nor decreasing. We can illustrate it with the same example used in Remark 4.1. So, assume the observations \(X_{i}\), \(i=1,\ldots,4\) are such that the pairs \(\{X_{1},X_{3}\}\) and \(\{X_{2},X_{4}\}\) are in the same class, but the classes are different. It can be checked that when \(0\leq\alpha<1/2\) and \(-\alpha<\theta<1-3\alpha\), it holds \(\mathbb{E}(H|\mathbf{X}_{2})>\mathbb{E}(H|\mathbf{X}_{3})\) and \(\mathbb{E}(H|\mathbf{X}_{4})>\mathbb{E}(H|\mathbf{X}_{3})\). \(\Box\)_ In the next result we will compute the one step variation of the posterior entropy of a PDP\((\alpha,\theta)\), when taking the sample \(\mathbf{X}_{\ell+1}=(\mathbf{X}_{\ell},X_{\ell+1})\). We recall relation (10) that gives the maximum entropy for samples of size \(\ell\), it is \[\max_{\mathbf{Y}_{\ell}}\mathbb{E}(H|\mathbf{Y}_{\ell})=\psi(\theta+\ell+1)- \psi(1-\alpha)-\frac{\ell}{\theta+\ell}.\] From (4) we get \[(\theta+\ell+1)\psi(\theta+\ell+2)-(\theta+\ell)\psi(\theta+\ell+1)=\psi(\theta+ \ell+1)+1,\] and so, \[(\theta+\ell+1)\max_{{\bf Y}_{\ell+1}}\mathbb{E}(H|{\bf Y}_{\ell+1})-(\theta+ \ell)\max_{{\bf Y}_{\ell}}\mathbb{E}(H|{\bf Y}_{\ell})\!=\!\psi(\theta+\ell+1) -\psi(1\!-\!\alpha). \tag{16}\] Now we state our main result, satisfied by the functional given in (1). As in the frequentist case we note by \(j^{\ell+1}\) the index of the species \(X_{\ell+1}\), that is such that \(X_{\ell+1}=X_{j^{\ell+1}}^{*}\). **Theorem 4.4**.: _Let \((X_{n}:n\geq 1)\) be an i.i.d. sequence of a PDP\((\alpha,\theta)\). The functional \(({\cal L}_{\ell}:\ell\geq 0)\) given by \({\cal L}_{0}=0\) and_ \[{\cal L}_{\ell}=(\theta+\ell)\left(\max_{{\bf Y}_{\ell}}\mathbb{E}(H|{\bf Y}_ {\ell})-\mathbb{E}(H|{\bf X}_{\ell})\right)\mbox{ for }\ell\geq 1;\] _is a nondecreasing and nonnegative functional along the trajectory \((X_{\ell}:\ell\geq 1)\) and it remains constant, \({\cal L}_{\ell+1}={\cal L}_{\ell}\), only when a new species is discovered at \(\ell+1\). More precisely, let_ \[\Delta_{\ell+1}={\cal L}_{\ell+1}-{\cal L}_{\ell},\] _and note \(j^{*}=j^{\ell+1}\) be the index of the species \(X_{\ell+1}\) and \(n_{j^{*}}=n_{j^{*}}^{\ell+1}\) be the frequency of this species at step \(\ell+1\). Then,_ \[\Delta_{\ell+1}=\psi(n_{j^{*}}-\alpha)-\psi(1-\alpha)\geq 0 \tag{17}\] _and it vanishes only when \(n_{j^{*}}=1\), that is when a new species is discovered at \(\ell+1\). Moreover_ \[(\theta+\ell+1)\mathbb{E}(H|{\bf X}_{\ell+1})-(\theta+\ell)\mathbb{E}(H|{\bf X }_{\ell})=\psi(\theta+\ell+1)-\psi(n_{j^{*}}-\alpha)>0. \tag{18}\] Proof.: The relation (18) will be shown at the end of the proof. Note that for the rest of the relations it suffices to show (17) because \(n_{j^{*}}\geq 1\) and \(\psi\) is strictly increasing then the expression at the right hand side of (17) increases strictly with \(n_{j^{*}}\) and it vanishes only when \(n_{j^{*}}=1\). So, let us show equality (17). The sequence of mean posterior entropies is noted by \(\widehat{H}_{\ell}=\mathbb{E}(H|{\bf X}_{\ell})\), \(\ell\geq 1\). From (7) we have \[(\theta+\ell)\widehat{H}_{\ell}=(\theta+\ell)\psi(\theta+\ell+1)-(\theta+ \alpha k_{\ell})\psi(1\!-\!\alpha)-\sum_{i=1}^{k_{\ell}}(n_{i}^{\ell}\!-\! 
\alpha)\psi(n_{i}^{\ell}\!-\!\alpha+1).\] Let us define, \[\eta_{\ell+1}=(\theta+\ell+1)\widehat{H}_{\ell+1}-(\theta+\ell)\widehat{H}_{\ell}. \tag{19}\] From the definitions of \(\Delta\) and \(\eta\) and equality (16) we get \[\Delta_{\ell+1}=\psi(\theta+\ell+1)-\psi(1-\alpha)-\eta_{\ell+1}.\] So, instead of proving results for \({\cal L}_{\ell}\) and \(\Delta_{\ell}\) we will do it for \(\eta_{\ell}\). Let \(K_{\ell+1}=k_{\ell+1}\). We note by \(n_{j}=n_{j}^{\ell+1}\) the frequency of class \(X_{j}^{*}\) for \(j=1,\ldots,k_{\ell+1}\). We will show that the following relation holds for \(\ell\geq 1\): \[\eta_{\ell+1}=\psi(\theta\!+\!\ell\!+\!1)-\psi(n_{j^{*}}\!-\!\alpha). \tag{20}\] Since this implies (17), the result of the Theorem will be satisfied. We first show the case \(k_{\ell+1}=k_{\ell}+1\), so \(j^{*}=k_{\ell+1}\) is the index of a new class and \(n_{j^{*}}=n_{k_{\ell+1}}=1\). The mean posterior entropy \(\widehat{H}_{\ell+1}\) is computed from (7) but with the sample size \(\ell+1\), the number of species \(k_{\ell+1}=k_{\ell}+1\), the frequencies \(n_{j}\) are unchanged for \(j=1,\ldots,k_{\ell}\) and the frequency for the new species is \(n_{k_{\ell}+1}=1\). Then, \[(\theta+\ell+1)\widehat{H}_{\ell+1} = (\theta+\ell+1)\psi(\theta+\ell+2)-(\theta+(k_{\ell}+1)\alpha) \psi(1-\alpha)\] \[-\sum_{i=1}^{k_{\ell}+1}(n_{i}-\alpha)\psi(n_{i}-\alpha+1).\] Now we use (4) on \(x=\theta+\ell+2\) to get \((\theta+\ell+1)\psi(\theta+\ell+2)=(\theta+\ell)\psi(\theta+\ell+1)+\psi( \theta+\ell+1)+1\), decompose the first term at the right hand side, separate the term \(k_{\ell}+1\) in the sum and use \(n_{k_{\ell}+1}=1\), to obtain, \[(\theta+\ell+1)\widehat{H}_{\ell+1} = (\theta+\ell+1)\psi(\theta+\ell+1)+1-(\theta+(k_{\ell}+1)\alpha) \psi(1-\alpha)\] \[-\sum_{i=1}^{k_{\ell}}(n_{i}-\alpha)\psi(n_{i}-\alpha+1)-(1- \alpha)\psi(2-\alpha).\] On the other hand, \[(\theta+\ell)\widehat{H}_{\ell} = (\theta+\ell)\psi(\theta+\ell+1)-(\theta+\alpha k_{\ell})\psi(1-\alpha)\] \[-\sum_{i=1}^{k_{\ell}}(n_{i}-\alpha)\psi(n_{i}-\alpha+1).\] By using \((1-\alpha)\psi(2-\alpha)=(1-\alpha)\psi(1-\alpha)+1\), we get \[\eta_{\ell+1}=(\theta+\ell+1)\widehat{H}_{\ell+1}-(\theta+\ell)\widehat{H}_{ \ell}=\psi(\theta+\ell+1)-\psi(1-\alpha).\] So, relation (20) is shown when \(k_{\ell+1}=k_{\ell}+1\). Let us show (20) when \(k_{\ell+1}=k_{\ell}\). For \(j\neq j^{*}\) we have \(n_{j}=n_{j}^{\ell+1}=n_{j}^{\ell}\), and for \(j^{*}\) we have \(n_{j^{*}}^{\ell}=n_{j^{*}}-1\). We will simplify some notation on sums and put \(\sum_{i\neq j^{*}}=\sum_{i=1,\ldots,k,i\neq j^{*}}\). 
From, \[(\theta+\ell+1)\widehat{H}_{\ell+1} = (\theta+\ell+1)\psi(\theta+\ell+2)-(\theta+\alpha k_{\ell})\psi(1 -\alpha)\] \[-\sum_{i\neq j^{*}}(n_{i}-\alpha)\psi(n_{i}-\alpha+1)-(n_{j^{*}}- \alpha)\psi(n_{j^{*}}-\alpha+1),\] and \[(\theta+\ell)\widehat{H}_{\ell}=(\theta+\ell)\psi(\theta+\ell+1)-(\theta+ \alpha k_{\ell})\psi(1-\alpha)-\sum_{i=1}^{k_{\ell}}(n_{i}-\alpha)\psi(n_{i}- \alpha+1),\] we obtain \[\eta_{\ell+1} = (\theta+\ell+1)\widehat{H}_{\ell+1}-(\theta+\ell)\widehat{H}_{\ell}\] \[= (\theta+\ell+1)\psi(\theta+\ell+2)-(\theta+\ell)\psi(\theta+ \ell+1)\] \[-(n_{j^{*}}-\alpha)\psi(n_{j^{*}}-\alpha+1)+(n_{j^{*}}-1-\alpha) \psi(n_{j^{*}}-\alpha).\] By using (4) in \(x=\theta+\ell+1\) and \(x=n_{j^{*}}-\alpha\) we get, \[(\theta+\ell+1)\psi(\theta+\ell+2)-(\theta+\ell)\psi(\theta+\ell+ 1)=\psi(\theta+\ell+1)+1\mbox{ and }\] \[-(n_{j^{*}}-\alpha)\psi(n_{j^{*}}-\alpha+1)+(n_{j^{*}}-\alpha-1) \psi(n_{j^{*}}-\alpha)=-\psi(n_{j^{*}}-\alpha)-1.\] Therefore \[\eta_{\ell+1}=\psi(\theta+\ell+1)-\psi(n_{j^{*}}-\alpha),\] and the relation (20) is shown for the case \(k_{\ell+1}=k_{\ell}\). To finish the proof of the Theorem let us show (18). It follows from definition (19), the relation (20), the inequality \(\theta>-\alpha\) and \(\psi\) is increasing. **Remark 4.5**.: _Set \(\widehat{H}_{\ell}^{\mbox{\it max}}=\max_{\mathbf{Y}_{\ell}}\mathbb{E}(H| \mathbf{Y}_{\ell})\). We have analyzed the variation,_ \[\Delta_{\ell+1}=(\theta+\ell+1)(\widehat{H}_{\ell+1}^{\mbox{\it max}}- \widehat{H}_{\ell+1})-(\theta+\ell)(\widehat{H}_{\ell}^{\mbox{\it max}}- \widehat{H}_{\ell}).\] _Note that any other weights would produces only trivial changes or would lead to the analysis of the variation weighted with the entropy. In fact if one considers_ \[c_{\ell+1}=(\theta+\ell+1)(a_{\ell+1}-\widehat{H}_{\ell+1})-(\theta+\ell)(a_{ \ell}-\widehat{H}_{\ell}),\] _then \(c_{\ell+1}=\Delta_{\ell+1}+(\theta+\ell+1)(a_{\ell+1}-\widehat{H}_{\ell+1}^{\mbox{ max}})-(\theta+\ell)(a_{\ell}-\widehat{H}_{\ell}^{\mbox{ max}})\), so it suffices to add to \(\Delta_{\ell+1}\) a deterministic sequence depending on \(\ell\). If one considers_ \[c^{\prime}_{\ell+1}=b_{\ell+1}(\widehat{H}_{\ell+1}^{\mbox{ max}}-\widehat{H}_{\ell+1})-b_{\ell}(\widehat{H}_{\ell}^{\mbox{ max}}-\widehat{H}_{\ell}),\] _one gets_ \[c^{\prime}_{\ell+1} = b_{\ell}\left(\frac{b_{\ell+1}}{b_{\ell}}(\widehat{H}_{\ell+1}^{ \mbox{ max}}-\widehat{H}_{\ell+1})-(\widehat{H}_{\ell}^{\mbox{ max}}-\widehat{H}_{\ell})\right)\] \[= b_{\ell}\left(\frac{b_{\ell+1}}{b_{\ell}}-\frac{\theta\!+\! \ell\!+\!1}{\theta+\ell}\right)(\widehat{H}_{\ell+1}^{\mbox{ max}}-\widehat{H}_{\ell+1})+\frac{b_{\ell}}{\theta+\ell}\Delta_{\ell+1}.\] _When we modify both, the additive and the multiplicative terms, in \(\Delta_{\ell+1}\) we get a combination of above situations. \(\Box\)_ **Remark 4.6**.: _In the frequentist case the weighted difference between maximal entropies at steps \(\ell+1\) and \(\ell\) is,_ \[d^{f}_{\ell+1}=(\ell+1)H_{\ell+1}^{\mbox{ max}}-\ell H_{\ell}^{\mbox{ max}}=(\ell+1)\log(\ell+1)-\ell\log\ell.\] _From (16), in the Bayesian PDP case the weighted difference of posterior entropies is,_ \[d_{\ell+1}=(\theta+\ell+1)\widehat{H}_{\ell+1}^{\mbox{ max}}-(\theta+\ell)\widehat{H}_{\ell}^{\mbox{ max}}=\Delta_{\ell+1}+\eta_{\ell+1}=\psi(\theta+\ell+1)-\psi(1-\alpha).\] _For big \(\ell\) we have that \(d^{f}_{\ell+1}\) is of the order of \(\log\ell+1\) while from (6) one gets that \(d_{\ell+1}\) is of the order of \(\log\ell-\psi(1-\alpha)\) (we recall that \(-\psi(1-\alpha)>0\)). 
\(\Box\)._ **Remark 4.7**.: _Now, by applying the relations (5) and (6) satisfied by the digamma function, from Theorem 4.4 we get the following bounds for the weighted entropy variation \(\eta_{\ell+1}=(\theta+\ell+1)\widehat{H}_{\ell+1}-(\theta+\ell)\widehat{H}_{\ell}\) given by (18),_ \[\eta_{\ell+1} \geq \log(\theta\!+\!\ell\!+\!1)-\frac{1}{\theta\!+\!\ell\!+\!1}-\log( n_{j^{*}}\!-\!\alpha)+\frac{1}{2(n_{j^{*}}\!-\!\alpha)},\] \[\eta_{\ell+1} \leq \log(\theta\!+\!\ell\!+\!1)-\frac{1}{2(\theta\!+\!\ell\!+\!1)}- \log(n_{j^{*}}\!-\!\alpha)+\frac{1}{n_{j^{*}}\!-\!\alpha}.\] _When \(\ell\) is sufficiently big one has,_ \[\eta_{\ell+1}\approx\log(\theta+\ell+1)-\frac{1}{2(\theta+\ell+1)}\mbox{ if }k_{\ell+1}=k_{\ell}+1;\] _and if also \(n_{j^{*}}\) is also sufficiently big, then_ \[\eta_{\ell+1}\approx\log(\theta\!+\!\ell\!+\!1)-\frac{1}{2(\theta\!+\!\ell\!+ \!1)}-\log(n_{j^{*}}\!-\!\alpha)+\frac{1}{2(n_{j^{*}}\!-\!\alpha)}\mbox{ if }k_{\ell+1}=k_{\ell}.\] **Remark 4.8**.: _One can check that (18) also holds for \(\ell=0\), where for the posterior mean entropy (7), when \(\ell=0\), one takes \(k=0\), and so \(\theta\widehat{H}^{0}_{PDP}=\theta\psi(\theta+1)-\theta\psi(1-\alpha)\). So, by applying the telescopic property to (18) we get_ \[(\theta+\ell)\widehat{H}_{\ell}=C_{\ell}(\alpha,\theta)-\sum_{i=1}^{\ell}\psi( n^{*}(i)-\alpha),\] _where \(C_{\ell}(\alpha,\theta)=\left(\sum_{i=1}^{\ell}\psi(\theta+i)\right)+\theta \psi(\theta+1)-\theta\psi(1-\alpha)\), and \(n^{*}(i)=\#\{1\leq j\leq i\,:\,X_{j}=X_{i}\}\) is the frequency of the class of the species \(X_{i}\) at step \(i\). Therefore, the only part of the entropy depending on the sample is \(-\sum_{i=1}^{\ell}\psi(n^{*}(i)-\alpha)\). The terms \(-\psi(n^{*}(i)-\alpha)\) strictly decreases with \(n^{*}(i)\) (note that \(-\psi(n^{*}(i)-\alpha)\) is positive when \(n^{*}(i)=1\), negative if \(n^{*}(i)\geq 3\) and the sign of \(-\psi(2-\alpha)\) depends on \(\alpha\in[0,1)\)). So, the terms \(-\psi(n^{*}(i)-\alpha)\) can be seen as the 'discovery value' of observing the species \(X_{i}\) at step \(i\), and so, up to the additive deterministic term, the entropy turns out to be the 'discovery' values at the successive steps of the sample. On the other hand, from (17) we get that_ \[\mathcal{L}_{\ell}=\sum_{i=1}^{\ell}(\psi(n^{*}(i)-\alpha)-\psi(1-\alpha))\] _is a sum of positive rewards for reinforcing what is already known that is going in the opposite direction of discovery. Thus the reward at step \(i\), attains the minimum \(0\) for the discovery of a new species. Differently to entropy, here no additional deterministic term depending on \(\ell,\alpha\) and \(\theta\) is required._ ### A common framework for the frequentist and the PDP cases The equations (17) and (13) have the same shape, both are measuring the weighted differences of the distance of successive entropies to the maximal entropies and both formulae express that these differences only depend on the updated frequency of the species of the new element. In fact this result holds for the class of entropies that satisfy: \[w(\ell)\mathcal{H}_{\ell}=u(a+\ell)-b-\sum_{i=1}^{k}(u(n_{i}^{\ell}-c)+v). \tag{21}\] Here \(w(\ell)\) is a strictly positive function and increasing in \(\ell\) and \(u\) is a real function defined on \(\mathbb{N}-c=\{n-c:n\geq 1\}\) and it satisfies \[u(n+1-c)-u(n-c)\text{ is increasing for }n\geq 1. \tag{22}\] The quantities \(a,b,c,v\) are constants that satisfy the conditions \[0\leq c<1,-c\leq a\text{ and }2u(1-c)+v<u(2-c). 
\tag{23}\] Notice that \(H_{\ell}\) can be written as \(\mathcal{H}_{\ell}\) with \(w(\ell)=\ell\), \(u(x)=x\log x\) and \(a=b=c=v=0\); and \(\widehat{H}_{\ell}\) can be also written in the form \(\mathcal{H}_{\ell}\) with \(w(\ell)=\theta+\ell\), \(u(x)=x\psi(x+1)\), \(a=\theta,b=\theta\psi(1-\alpha),c=\alpha,v=\alpha\psi(1-\alpha)\). In both cases \(0\leq c<1\). The second part in (23) holds for the PDP because \(\theta>-\alpha\) and the third part of (23) holds in the frequentist case because it is equivalent to \(2\log(1)\leq\log 2\) and in the PDP case (23) becomes \((1-\alpha)\psi(2-\alpha)+\alpha\psi(1-\alpha)<\psi(2-\alpha)+1\) which is satisfied. In relation to (22), in the PDP case it follows from \(x\psi(x+1)-(x-1)\psi(x)\) increasing in \(x>0\) and in the frequentist case (22) it is a consequence of \((n+2)\log(n+2)-(n+1)\log(n+1)>(n+1)\log(n+1)-n\log n\) for \(n\geq 0\). We will see that the conditions (22) and (23) are sufficient to show that the properties proven for the variation of differences between maximal entropies and entropies for the cases \((H_{\ell})\) and \((\widehat{H}_{\ell})\), also hold for the entropy \((\mathcal{H}_{\ell})\) written in (21). In order to retrieve the results in Proposition 3.1 we need to analyze what happens when, for two species \(i\neq j\) with \(n_{i}^{\ell}=n>1\) and \(n_{j}^{\ell}=m\), one makes the change \(m\to m+1\) and \(n\to n-1\), and all other frequencies \(n_{l}\) remain equal. The entropy increases if and only if \(u(n-c-1)+u(m-c+1)\leq u(n-c)+u(m-c)\), or equivalently \(u(m-c+1)-u(m-c)\leq u(n-c)-u(n-c-1)\). From (22) this holds if and only if \(m+1\leq n\). The second requirement has to do with the following change: for a class \(i\leq k\) with \(n_{i}=n>1\) we set \(n\to n-1\) and \(k\to k+1\) so there is a new class with \(n_{k+1}=1\). This change makes the entropy increase if \(u(n-c-1)+u(1-c)+v<u(n-c)\) or equivalently if \(u(1-c)+v<u(n-c)-u(n-c-1)\) when \(n>1\). From (22) we get that it suffices that the following inequality holds \(2u(1-c)+v<u(2-c)\), which is the second condition in (23). When these conditions take place the maximal entropy is attained when all the classes are singletons, so \[w(\ell)\mathcal{H}_{\ell}^{\max}=u(a+\ell)-b-\sum_{i=1}^{\ell}(u(1-c)+v)\] Hence, \[w(\ell+1)\mathcal{H}_{\ell+1}^{\max}-w(\ell)\mathcal{H}_{\ell}^{\max}=u(a+ \ell+1)-u(a+\ell)-(u(1-c)+v).\] Let us consider \[\Delta^{\mathcal{H}}_{\ell+1}=w(\ell+1)\left(\mathcal{H}^{\max}_{\ell+1}-\mathcal{ H}_{\ell+1}\right)-w(\ell)\left(\mathcal{H}^{\max}_{\ell}-\mathcal{H}_{\ell} \right).\] If in the transition \(\ell\to\ell+1\) the number of classes changes from \(k\to k+1\) one gets that \[\Delta^{\mathcal{H}}_{\ell+1}=0.\] If in the transition \(\ell\to\ell+1\) the number of classes is preserved, say \(k\), and the class \(j^{*}\) adds in one unit we get \[\Delta^{\mathcal{H}}_{\ell+1}=u(n^{\ell}_{j^{*}}-c+1)-u(n^{\ell}_{j^{*}}-c)-(u (1-c)+v).\] We combine (22) with the third condition in (23), to deduce that when the transition \(\ell\) to \(\ell+1\) preserves the number of classes then \(\Delta^{\mathcal{H}}_{\ell+1}>0\). Hence, the results for the variation of the weighted differences of the maximal entropy to the entropy hold for this class of entropies (21). 
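As a quick sanity check of this common framework, the following sketch (an illustrative snippet; the label-generating mechanism and the values of \(\alpha\) and \(\theta\) are arbitrary choices) instantiates the template (21) with the two identifications given above, namely the frequentist case \(w(\ell)=\ell\), \(u(x)=x\log x\), \(a=b=c=v=0\), and the PDP case \(w(\ell)=\theta+\ell\), \(u(x)=x\psi(x+1)\), \(a=\theta\), \(b=\theta\psi(1-\alpha)\), \(c=\alpha\), \(v=\alpha\psi(1-\alpha)\). It then verifies along a simulated stream of labels that the one-step variation \(\Delta^{\mathcal{H}}_{\ell+1}\) vanishes exactly when a new class is discovered and is strictly positive otherwise.

```python
# Illustrative sketch: both the frequentist entropy and the posterior PDP entropy are
# instances of the template (21), and the weighted gap to the maximal entropy only
# grows when no new species is discovered. Parameter identifications follow the text.
import math, random
from collections import Counter
from scipy.special import digamma as psi

def make_case(name, alpha=0.3, theta=1.0):
    if name == "frequentist":
        return dict(w=lambda l: l, u=lambda x: x * math.log(x) if x > 0 else 0.0,
                    a=0.0, b=0.0, c=0.0, v=0.0)
    return dict(w=lambda l: theta + l, u=lambda x: x * psi(x + 1),
                a=theta, b=theta * psi(1 - alpha), c=alpha, v=alpha * psi(1 - alpha))

def gap(p, counts, l):
    """w(l) * (H_l^max - H_l) for the entropy template (21)."""
    wH = p["u"](p["a"] + l) - p["b"] - sum(p["u"](n - p["c"]) + p["v"] for n in counts.values())
    wHmax = p["u"](p["a"] + l) - p["b"] - l * (p["u"](1 - p["c"]) + p["v"])
    return wHmax - wH

random.seed(1)
labels = [random.choice("abcd") for _ in range(100)]   # arbitrary label stream
for case in ("frequentist", "PDP"):
    p, counts, prev = make_case(case), Counter(), 0.0
    for l, x in enumerate(labels, start=1):
        new_class = counts[x] == 0
        counts[x] += 1
        delta = gap(p, counts, l) - prev                # Delta^H at this step
        assert (abs(delta) < 1e-9) if new_class else (delta > 1e-9)
        prev += delta
```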
Finally, let us see what one requires to have \[w(\ell+1)\mathcal{H}_{\ell+1}-w(\ell)\mathcal{H}_{\ell}=(u(a+\ell+1)-u(a+ \ell))-(u(n^{\ell}_{j^{*}}-c+1)-u(n^{\ell}_{j^{*}}-c))\geq 0.\] Since from (23) we have \(a\geq-c\) and so the unique new condition is \[u(n+a+1)-u(n+a)\geq u(m-c+1)-u(m-c)\text{ for }n\geq m,\] which is satisfied for both, the PDP and the frequentist case. **Acknowledgments.** This work was supported by the Center for Mathematical Modeling ANID Basal PIA program FB210005. In addition, we would like to thank the reviewer for their careful reading and valuable comments and suggestions, which helped to clarify and improve the presentation of the article.
2309.07631
Unified Linearization-based Nonlinear Filtering
This letter shows that the following three classes of recursive state estimation filters: standard filters, such as the extended Kalman filter; iterated filters, such as the iterated unscented Kalman filter; and dynamically iterated filters, such as the dynamically iterated posterior linearization filters; can be unified in terms of a general algorithm. The general algorithm highlights the strong similarities between specific filtering algorithms in the three filter classes and facilitates an in-depth understanding of the pros and cons of the different filter classes and algorithms. We end with a numerical example showing the estimation accuracy differences between the three classes of filters when applied to a nonlinear localization problem.
Anton Kullberg, Isaac Skog, Gustaf Hendeby
2023-09-14T11:52:20Z
http://arxiv.org/abs/2309.07631v1
# Unified Linearization-based Nonlinear Filtering ###### Abstract This letter shows that the following three classes of recursive state estimation filters: standard filters, such as the extended Kalman filter; iterated filters, such as the iterated unscented Kalman filter; and dynamically iterated filters, such as the dynamically iterated posterior linearization filters; can be unified in terms of a general algorithm. The general algorithm highlights the strong similarities between specific filtering algorithms in the three filter classes and facilitates an in-depth understanding of the pros and cons of the different filter classes and algorithms. We end with a numerical example showing the estimation accuracy differences between the three classes of filters when applied to a nonlinear localization problem. ## I Introduction State estimation in nonlinear dynamical systems has been extensively studied in a wide variety of research fields. Typical approaches employ some form of linearization-based approximate inference, which we focus on here. These approaches linearize the nonlinear model locally (at each time instance) to employ the Kalman filter, which is the optimal estimator in the _mean-squared error_ (mse) sense [1]. Analytical linearization leads to the _extended Kalman filter_ (ekf), while sigma-point filters, such as the _unscented Kalman filter_ (ukf), the _cubature Kalman filter_ (ckf), and similar, can be thought of as statistical linearization filters [1, 2, 3]. Statistical linearization filters also include the Gaussian particle filter [4, 5]. The estimation accuracy of linearization-based filters depends highly on the point (distribution in the statistical case) about which the models are linearized. Typically, the linearization point (distribution) is chosen to be the mean (distribution) of the current state estimate. With a large error in the state estimate, this can lead to compounding errors which, in the worst case, may cause the filter to diverge. To alleviate this problem, several variants of iterated filters have been developed, such as the _iterated extended Kalman filter_ (iekf), the _iterated unscented Kalman filter_ (iukf), and the _iterated posterior linearization filter_ (iplf) [6, 7, 8, 9, 10]. These types of filters essentially iterate the measurement update, each time re-linearizing the measurement model about the "latest" iterate. The efforts in iterated filtering have primarily been focused on finding a better linearization point for the measurement model, which has been motivated by the fact that nonlinearities in the measurement model affect the resulting state estimate to a greater extent than nonlinearities in the transition model. Iterated filters have also been generalized to improve the linearization point for the transition model [11, 12]. These algorithms, which we refer to as dynamically iterated filters, are essentially iterated one-step fixed-lag smoothers that extract information from the measurement at time \(k\) to improve the linearization of the transition model at time \(k-1\). Examples of such algorithms are the _dynamically iterated extended Kalman filter_ (diekf), the _dynamically iterated unscented Kalman filter_ (diukf), and the _dynamically iterated posterior linearization filter_ (diplf) [11, 12]. In this letter, we seek to provide a "general" algorithm, from which all of the aforementioned filter algorithms can be derived as special cases. 
In this way, we aim to clarify and highlight the strong similarities between different linearization-based filtering algorithms. Thus, the contribution is a unification of linearization-based filters in a single general algorithm, encompassing analytically and statistically linearized, as well as iterated and non-iterated, filters. We also illustrate the performance differences between the three filter classes on an acoustic localization problem. ## II Background For clarity, we here present analytical and statistical linearization within a common framework. The well-known Kalman filter and _Rauch-Tung-Striebel_ (rts) smoother equations are also recapitulated. ### _Kalman Smoother_ Assume an affine state-space model with additive Gaussian noise, of the form \[\mathbf{x}_{k+1} =\mathbf{A_{f}}\mathbf{x}_{k}+\mathbf{b_{f}}+\tilde{\mathbf{w}}_{k},\quad\tilde{\mathbf{w}}_{k}\sim\mathcal{N}(\tilde{\mathbf{w}}_{k};\mathbf{0},\mathbf{Q}+\mathbf{\Omega_{f}}) \tag{1a}\] \[\mathbf{y}_{k} =\mathbf{A_{h}}\mathbf{x}_{k}+\mathbf{b_{h}}+\tilde{\mathbf{e}}_{k},\quad\tilde{\mathbf{e}}_{k}\sim\mathcal{N}(\tilde{\mathbf{e}}_{k};\mathbf{0},\mathbf{R}+\mathbf{\Omega_{h}}). \tag{1b}\] Here, \(\mathbf{x}_{k},\ \mathbf{y}_{k},\ \tilde{\mathbf{w}}_{k}\) and \(\tilde{\mathbf{e}}_{k}\) denote the state, the measurement, the process noise and the measurement noise at time \(k\), respectively. Lastly, assume that \(\mathbf{x}_{k}\in\mathcal{X},\forall k\), where \(\mathcal{X}\) is some set, typically \(\mathbb{R}^{n_{x}}\), and that \(\tilde{\mathbf{w}}_{k}\) and \(\tilde{\mathbf{e}}_{k}\) are mutually independent. Note that usually, \(\mathbf{\Omega_{f}}=\mathbf{\Omega_{h}}=\mathbf{0}\). For this model, the (affine) Kalman smoother update equations are given by Alg. 1, where subscript \({}_{k|k}\) denotes an estimate at time \(k\) given measurements up until time \(k\) and \(K\) is the final time [13]. ### _Analytical and Statistical Linearization_ Given a nonlinear model \[\mathbf{z}=\mathbf{g}(\mathbf{x}),\] we wish to find an affine representation \[\mathbf{g}(\mathbf{x})\approx\mathbf{A}\mathbf{x}+\mathbf{b}+\eta, \tag{2}\] with \(\eta\sim\mathcal{N}(\eta;\mathbf{0},\boldsymbol{\Omega})\). In this affine representation, there are three free parameters, \(\mathbf{A},\mathbf{b}\), and \(\boldsymbol{\Omega}\). Analytical linearization through first-order Taylor expansion selects the parameters as \[\mathbf{A}=\frac{d}{d\mathbf{x}}\mathbf{g}(\mathbf{x})|_{\mathbf{x}=\bar{\mathbf{x}}},\quad\mathbf{b}=\mathbf{g}(\mathbf{x})|_{\mathbf{x}=\bar{\mathbf{x}}}-\mathbf{A}\bar{\mathbf{x}},\quad\boldsymbol{\Omega}=\mathbf{0}, \tag{3}\] where \(\bar{\mathbf{x}}\) is the point about which the function \(\mathbf{g}(\mathbf{x})\) is linearized. Note that \(\boldsymbol{\Omega}=\mathbf{0}\) essentially implies that the linearization is assumed to be error free. Statistical linearization instead linearizes w.r.t. a distribution \(p(\mathbf{x})\). 
Assuming that \(p(\mathbf{x})=\mathcal{N}(\mathbf{x};\hat{\mathbf{x}},\mathbf{P})\), statistical linearization selects the affine parameters as \[\mathbf{A} =\Psi^{\top}\mathbf{P}^{-1} \tag{4a}\] \[\mathbf{b} =\bar{\mathbf{z}}-\mathbf{A}\hat{\mathbf{x}}\] (4b) \[\boldsymbol{\Omega} =\Phi-\mathbf{A}\mathbf{P}\mathbf{A}^{\top}\] (4c) \[\bar{\mathbf{z}} =\mathbb{E}[\mathbf{g}(\mathbf{x})]\] (4d) \[\Psi =\mathbb{E}[(\mathbf{x}-\hat{\mathbf{x}})(\mathbf{g}(\mathbf{x} )-\bar{\mathbf{z}})^{\top}]\] (4e) \[\Phi =\mathbb{E}[(\mathbf{g}(\mathbf{x})-\bar{\mathbf{z}})(\mathbf{g} (\mathbf{x})-\bar{\mathbf{z}})^{\top}], \tag{4f}\] where the expectations are taken w.r.t. \(p(\mathbf{x})\). The major difference from analytical linearization is that \(\boldsymbol{\Omega}\neq 0\), which implies that the error in the linearization is captured. Typically, the expectations in (4) are not analytically tractable and thus, practically, one often resorts to some numerical integration technique. ## III Problem Formulation To set the stage for the unification of the different filter algorithms, the general state estimation problem is described here from a probabilistic viewpoint. To that end, consider a discrete-time state-space model (omitting a possible input \(\mathbf{u}_{k}\) for notational brevity) given by \[\mathbf{x}_{k+1} =\mathbf{f}(\mathbf{x}_{k})+\mathbf{w}_{k}, p(\mathbf{w}_{k}) =\mathcal{N}(\mathbf{w}_{k};\mathbf{0},\mathbf{Q}) \tag{8a}\] \[\mathbf{y}_{k} =\mathbf{h}(\mathbf{x}_{k})+\mathbf{e}_{k}, p(\mathbf{e}_{k}) =\mathcal{N}(\mathbf{e}_{k};\mathbf{0},\mathbf{R}). \tag{8b}\] Note that (8a) and (8b) can equivalently be written as a _transition density_ and a _measurement density_ as \[p(\mathbf{x}_{k+1}|\mathbf{x}_{k}) =\mathcal{N}(\mathbf{x}_{k+1};\mathbf{f}(\mathbf{x}_{k}),\mathbf{ Q}) \tag{9a}\] \[p(\mathbf{y}_{k}|\mathbf{x}_{k}) =\mathcal{N}(\mathbf{y}_{k};\mathbf{h}(\mathbf{x}_{k}),\mathbf{R}). \tag{9b}\] Further, the initial state distribution is assumed to be given by \[p(\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{0};\hat{\mathbf{x}}_{0 |0},\mathbf{P}_{0|0}). \tag{10}\] Given the transition and measurement densities and a sequence of measurements \(\mathbf{y}_{1:k}=\{\mathbf{y}_{i}\}_{i=1}^{k}\), the filtering problem consists of computing the marginal posterior of the state at time \(k\). This can be done via the Bayesian recursions \[p(\mathbf{x}_{k}|\mathbf{y}_{1:k-1}) =\int_{\mathcal{X}}p(\mathbf{x}_{k}|\mathbf{x}_{k-1})p(\mathbf{x }_{k-1}|\mathbf{y}_{1:k-1})d\mathbf{x}_{k-1} \tag{11a}\] \[p(\mathbf{x}_{k}|\mathbf{y}_{1:k}) =\frac{p(\mathbf{y}_{k}|\mathbf{x}_{k})p(\mathbf{x}_{k}|\mathbf{ y}_{1:k-1})}{\mathbf{Z}_{k}}\] (11b) \[\mathbf{Z}_{k} =\int_{\mathcal{X}}p(\mathbf{y}_{k}|\mathbf{x}_{k})p(\mathbf{x }_{k}|\mathbf{y}_{1:k-1})d\mathbf{x}_{k}. \tag{11c}\] In the case where \(\mathbf{f}\) and \(\mathbf{h}\) are linear, the (analytical) solution is given by the Kalman filter [1]. In the general case, the marginal posteriors can not be computed analytically. Inspecting (11), there are two integrals that require attention. We turn first to the Chapman-Kolmogorov equation (11a). Assuming that \(p(\mathbf{x}_{k-1}|\mathbf{y}_{1:k-1})\) is Gaussian, (11a) has a closed form solution given by (5), _if_\(p(\mathbf{x}_{k}|\mathbf{x}_{k-1})\) is Gaussian and (8a) is affine. 
Therefore, as (9a) is Gaussian, we seek an affine approximation of the transition function \(\mathbf{f}\) as \[\mathbf{f}(\mathbf{x}_{k-1})\approx\mathbf{A}_{\mathbf{f}}\mathbf{x}_{k-1}+ \mathbf{b}_{\mathbf{f}}+\eta_{\mathbf{f}}, \tag{12}\] with \(p(\eta_{\mathbf{f}})=\mathcal{N}(\eta_{\mathbf{f}};\boldsymbol{0},\boldsymbol{ \Omega}_{\mathbf{f}})\). Hence, the transition density \(p(\mathbf{x}_{k}|\mathbf{x}_{k-1})\) is approximated by \(q(\mathbf{x}_{k}|\mathbf{x}_{k-1})\) as \[q(\mathbf{x}_{k}|\mathbf{x}_{k-1})=\mathcal{N}(\mathbf{x}_{k};\mathbf{A}_{ \mathbf{f}}\mathbf{x}_{k-1}+\mathbf{b}_{\mathbf{f}},\mathbf{Q}+\boldsymbol{ \Omega}_{\mathbf{f}}). \tag{13}\] If \(\mathbf{A}_{\mathbf{f}},\mathbf{b}_{\mathbf{f}}\), and \(\boldsymbol{\Omega}_{\mathbf{f}}\) are chosen to be the analytical linearization of \(\mathbf{f}\) about the mean of the posterior \(p(\mathbf{x}_{k-1}|\mathbf{y}_{1:k-1})\), the ekf time update is recovered through (5). Similarly, statistical linearization about \(p(\mathbf{x}_{k-1}|\mathbf{y}_{1:k-1})\) recovers the sigma-point filter time updates. This yields an approximate predictive distribution \(q(\mathbf{x}_{k}|\mathbf{y}_{1:k-1})\), which can then be used to approximate the second integral of interest (and subsequently, the posterior at time \(k\)). Explicitly, the second integral is approximated by \[\mathbf{Z}_{k}\approx\int_{\mathcal{X}}p(\mathbf{y}_{k}|\mathbf{x}_{k})q( \mathbf{x}_{k}|\mathbf{y}_{1:k-1})d\mathbf{x}_{k}. \tag{14}\] Similarly to (12), (14) has a closed form solution _if_\(p(\mathbf{y}_{k}|\mathbf{x}_{k})\) is Gaussian and (8b) is affine. Thus, as (9b) is Gaussian, we seek an affine approximation of the measurement function \(\mathbf{h}\) as \[\mathbf{h}(\mathbf{x}_{k})\approx\mathbf{A}_{\mathbf{h}}\mathbf{x}_{k}+\mathbf{b} _{\mathbf{h}}+\eta_{\mathbf{h}}, \tag{15}\] with \(p(\eta_{\mathbf{h}})=\mathcal{N}(\eta_{\mathbf{h}};\boldsymbol{0},\boldsymbol{ \Omega}_{\mathbf{h}})\). Hence, the measurement density \(p(\mathbf{y}_{k}|\mathbf{x}_{k})\) is approximated by \(q(\mathbf{y}_{k}|\mathbf{x}_{k})\) as \[q(\mathbf{y}_{k}|\mathbf{x}_{k})=\mathcal{N}(\mathbf{y}_{k};\mathbf{A}_{ \mathbf{h}}\mathbf{x}_{k}+\mathbf{b}_{\mathbf{h}},\mathbf{R}+\boldsymbol{ \Omega}_{\mathbf{h}}), \tag{16}\] which leads to an analytically tractable integral. With (13) and (16), the (approximate) marginal posterior (11) is now given by \[q(\mathbf{x}_{k}|\mathbf{y}_{1:k})=\frac{q(\mathbf{y}_{k}|\mathbf{x}_{k})q( \mathbf{x}_{k}|\mathbf{y}_{1:k-1})}{\int_{\mathcal{X}}q(\mathbf{y}_{k}|\mathbf{ x}_{k})q(\mathbf{x}_{k}|\mathbf{y}_{1:k-1})d\mathbf{x}_{k}}, \tag{17}\] which is analytically tractable and given by (6). Note that analytical linearization of (15) about the mean of \(q(\mathbf{x}_{k}|\mathbf{y}_{1:k-1})\) recovers the ekf measurement update, whereas statistical linearization recovers the sigma-point measurement update(s). The quality of the approximate marginal posterior (17) directly depends on the quality of the approximations (13) and (16). The quality of (13) and (16) in turn directly depends on the choice of linearization points or densities, which is typically chosen to be the approximate predictive and previous approximate posterior distributions. This choice is of course free and iterated filters, such as the iekf, iukf, and iplf have been proposed to improve the approximation (16) [7, 8, 14, 15]. 
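To make the two linearization choices concrete, here is a minimal sketch for a scalar toy function; it is purely illustrative and is not the algorithm developed in this letter. The Monte Carlo estimate of the expectations in (4) merely stands in for whichever numerical integration rule (sigma points, cubature, etc.) a particular filter would employ, and the test function \(g(x)=\sin(x)\), the mean, and the variance are arbitrary choices.

```python
# Minimal sketch of the linearization choices (3) and (4) for a scalar example g(x) = sin(x).
import numpy as np

def analytical_linearization(g, dg, x_bar):
    A = dg(x_bar)
    b = g(x_bar) - A * x_bar
    Omega = 0.0                      # linearization error is ignored, cf. (3)
    return A, b, Omega

def statistical_linearization(g, x_hat, P, n_samples=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(x_hat, np.sqrt(P), n_samples)   # Monte Carlo stand-in for the expectations in (4)
    z = g(x)
    z_bar = z.mean()
    Psi = np.mean((x - x_hat) * (z - z_bar))
    Phi = np.mean((z - z_bar) ** 2)
    A = Psi / P                      # (4a)
    b = z_bar - A * x_hat            # (4b)
    Omega = Phi - A * P * A          # (4c), captures the linearization error
    return A, b, Omega

x_hat, P = 0.8, 0.5 ** 2             # arbitrary mean and variance
print(analytical_linearization(np.sin, np.cos, x_hat))
print(statistical_linearization(np.sin, x_hat, P))
```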
Such iterated filters essentially iterate the measurement update to find an approximate posterior \(q^{i}(\mathbf{x}_{k}|\mathbf{y}_{1:k})\), which is used to re-linearize the function \(\mathbf{h}\) to produce a new approximation \(q^{i+1}(\mathbf{x}_{k}|\mathbf{y}_{1:k})\). Iterated filters were recently generalized to dynamically iterated filters, which improve both the approximation (16) and the approximation (13) [11, 12]. Dynamically iterated filters are essentially one-step iterated fixed-lag smoothers that produce both a better posterior approximation \(q^{i+1}(\mathbf{x}_{k}|\mathbf{y}_{1:k})\) and a smoothed approximation \(q^{i+1}(\mathbf{x}_{k-1}|\mathbf{y}_{1:k})\). Next, we describe a unification of all of these filters in terms of one general algorithm, encompassing all possible variants of filters based on either analytical or statistical linearization. ## IV Unified Linearization-based Filtering We propose a unified linearization-based filtering algorithm that encapsulates a wide variety of existing algorithms. The main idea behind the unification is that all linearization-based filters may be thought of as a single general algorithm that reduces to various special cases depending on specific implementation choices. All of the filters are essentially centered around the three key steps (5)-(7). They differ only in the choice of linearization strategy, as well as in which steps of the general (approximative affine) Kalman filter/smoother are repeated or not. The general algorithm is presented in Alg. 2 and encompasses standard linearization-based filters, iterated filters, and dynamically iterated filters. For clarity, it is also illustrated schematically in Fig. 1. Note that the unified algorithm is purposefully restricted to algorithms that only require access to the latest measurement \(\mathbf{y}_{k}\), which, e.g., excludes the L-scan iplf [16]. The linearization choices, which are assumed to be the same for all of the steps in the general filter algorithm, and the specific filter algorithms these choices lead to, are summarized in Table I. In Table I, iterating either the measurement update (MU), both the time update (TU) and the MU, or none (-) is captured vertically, and the choice of particular linearization strategy horizontally. Choosing analytical linearization inevitably leads to some form of "extended" version, i.e., either the ekf, iekf, or diekf. Statistical linearization is a bit more nuanced for two reasons. Firstly, it encapsulates a wide variety of algorithms, depending on the particular chosen statistical linearization, be it exact or approximated by, e.g., some form of cubature such as the ckf or ukf. Note that we use ckf as a collective term for any statistically linearized Kalman filter based on sigma points, such as the smart sampling Kalman filter [17], the spherical simplex-radial ckf [18], or the multiple quadrature Kalman filter [19]. Secondly, iterated versions of statistical linearization filters fall into two distinct categories: an iukf style that "freezes" the covariance update until the last iterate [7], and an iplf style that continuously updates the covariance matrix - essentially changing the sigma point spread each iteration [15]. 
In Table I, the "frozen" statistical linearization-based filters are summarized by, e.g., the ickf and iukf, but should be read as encapsulating any imaginable version of statistical linearization where the resulting filter has an update structure similar to that of the iekf/diekf, i.e., with a "delayed" covariance update. Note that the "freezing" or "delayed" behaviour of the iukf/diukf is not explicitly defined in Alg. 2 but amounts to setting \(\mathbf{P}_{k-1|k}^{i+1}:=\mathbf{P}_{k-1|k}^{i}\) after Alg. 5 and \(\mathbf{P}_{k|k}^{i+1}:=\mathbf{P}_{k|k}^{i}\) after Alg. 4 until the last iteration. Fig. 1: Schematic illustration of linearization-based filters. Iterated filters re-linearize the measurement update (MU). Dynamically iterated filters also re-linearize the time update (TU) through a smoothing step (S). ## V Numerical Example To demonstrate the application of the three types of filters, we consider a localization problem modeled by a nonlinear state-space model. To keep the results uncluttered, we only consider analytical linearization and focus our comparison on the ekf, iekf, and diekf. We consider a target maneuvering in a plane and describe the target state using the state vector \(\mathbf{x}_{k}=\begin{bmatrix}p_{k}^{x}&v_{k}^{x}&p_{k}^{y}&v_{k}^{y}&\omega_{k}\end{bmatrix}^{\top}\). Here, \(p_{k}^{x}\), \(p_{k}^{y}\), \(v_{k}^{x}\), and \(v_{k}^{y}\) are the Cartesian coordinates and velocities of the target, respectively. Further, \(\omega_{k}\) is the turn rate. The transition model is given by \[\mathbf{x}_{k+1}=\mathbf{f}(\mathbf{x}_{k})+\mathbf{w}_{k}, \tag{18}\] where \[\mathbf{f}(\mathbf{x}_{k})=\begin{bmatrix}1&\frac{\sin(T\omega_{k})}{\omega_{k}}&0&-\frac{(1-\cos(T\omega_{k}))}{\omega_{k}}&0\\ 0&\cos(T\omega_{k})&0&-\sin(T\omega_{k})&0\\ 0&\frac{(1-\cos(T\omega_{k}))}{\omega_{k}}&1&\frac{\sin(T\omega_{k})}{\omega_{k}}&0\\ 0&\sin(T\omega_{k})&0&\cos(T\omega_{k})&0\\ 0&0&0&0&1\end{bmatrix}\mathbf{x}_{k},\] and \(T\) is the sampling period. Further, \(\mathbf{w}_{k}\sim\mathcal{N}(\mathbf{w}_{k};\mathbf{0},\mathbf{Q})\) is the process noise at time \(k\), with \[\mathbf{Q}=\mathrm{blkdiag}\left(\begin{bmatrix}q_{1}\frac{T^{3}}{3}&q_{1}\frac{T^{2}}{2}\\ q_{1}\frac{T^{2}}{2}&q_{1}T\end{bmatrix},\begin{bmatrix}q_{1}\frac{T^{3}}{3}&q_{1}\frac{T^{2}}{2}\\ q_{1}\frac{T^{2}}{2}&q_{1}T\end{bmatrix},q_{2}\right),\] where \(q_{1}\) and \(q_{2}\) are tunable parameters of the model. The target emits a known sound pulse at a rate of \(T=1.5\,\mathrm{s}\) that is picked up by a set of four microphones. With this, we construct time-difference-of-arrival (tdoa) observations, where each observation \(i\) is modeled as \[\mathbf{y}_{k}^{i}=r_{k}^{1}-r_{k}^{i}+\mathbf{e}_{k},\quad i=1,\ldots,3 \tag{19}\] where \(r_{k}^{i}\triangleq\left\|\begin{bmatrix}p_{k}^{x}&p_{k}^{y}\end{bmatrix}^{\top}-s^{i}\right\|\), and \(s^{i}\) denotes the 2D position of the \(i\)th microphone. Further, \(\mathbf{e}_{k}\sim\mathcal{N}(\mathbf{0},\mathbf{R})\), where \(\mathbf{R}\) has been computed through a static calibration experiment. We set \(q_{1}=10^{-j},\ q_{2}=10^{-l}\) and let \(j=-6,\ldots,0,\ l=-5,\ldots,0\), and sweep over all such pairs, i.e., 42 different process noise configurations. For each configuration we compute the rmse against a ground truth trajectory, obtained from a high-precision IR-marker positioning system. The positional rmse per noise configuration is presented in Fig. 2. Clearly, the diekf performs the best overall and is non-divergent in most cases. 
Here, divergence corresponds to an rmse higher than \(1\,\mathrm{m}\). As the process noise is increased, the difference between the algorithms decreases, but the iterative procedure of the iekf and diekf is still clearly beneficial. ## VI Conclusion A unifying view of linearization-based nonlinear filtering algorithms has been presented. It facilitates a comprehensive understanding of the commonalities and relationships between linearization-based standard, iterated, and dynamically iterated filters. The presented algorithm is simple, easy to implement, and encompasses a wide range of existing filtering algorithms. Lastly, the three classes of unified filtering algorithms were compared in a nonlinear localization problem, where the dynamically iterated filters were shown to be more resilient to poor process noise parameter tuning. Fig. 2: Positional rmse for the ekf as blue dots, the iekf as orange crosses, and the diekf as green squares. Each subplot corresponds to a different value of \(q_{1}\), indicated by the text in each subplot. An rmse higher than approximately \(1\,\mathrm{m}\) corresponds to a "divergent" filter based on visual inspection of resulting estimate trajectories and is left out of the plots.
2309.11053
Fed-LSAE: Thwarting Poisoning Attacks against Federated Cyber Threat Detection System via Autoencoder-based Latent Space Inspection
The significant rise of security concerns in conventional centralized learning has promoted federated learning (FL) adoption in building intelligent applications without privacy breaches. In cybersecurity, the sensitive data along with the contextual information and high-quality labeling in each enterprise organization play an essential role in constructing high-performance machine learning (ML) models for detecting cyber threats. Nonetheless, the risks coming from poisoning internal adversaries against FL systems have raised discussions about designing robust anti-poisoning frameworks. Whereas defensive mechanisms in the past were based on outlier detection, recent approaches tend to be more concerned with latent space representation. In this paper, we investigate a novel robust aggregation method for FL, namely Fed-LSAE, which takes advantage of latent space representation via the penultimate layer and Autoencoder to exclude malicious clients from the training process. The experimental results on the CIC-ToN-IoT and N-BaIoT datasets confirm the feasibility of our defensive mechanism against cutting-edge poisoning attacks for developing a robust FL-based threat detector in the context of IoT. More specifically, the FL evaluation witnesses an upward trend of approximately 98% across all metrics when integrating with our Fed-LSAE defense.
Tran Duc Luong, Vuong Minh Tien, Nguyen Huu Quyen, Do Thi Thu Hien, Phan The Duy, Van-Hau Pham
2023-09-20T04:14:48Z
http://arxiv.org/abs/2309.11053v1
Fed-LSAE: Thwarting Poisoning Attacks against Federated Cyber Threat Detection System via Autoencoder-based Latent Space Inspection ###### Abstract The significant rise of security concerns in conventional centralized learning has promoted federated learning (FL) adoption in building intelligent applications without privacy breaches. In cybersecurity, the sensitive data along with the contextual information and high-quality labeling in each enterprise organization play an essential role in constructing high-performance machine learning (ML) models for detecting cyber threats. Nonetheless, the risks coming from poisoning internal adversaries against FL systems have raised discussions about designing robust anti-poisoning frameworks. Whereas defensive mechanisms in the past were based on outlier detection, recent approaches tend to be more concerned with latent space representation. In this paper, we investigate a novel robust aggregation method for FL, namely Fed-LSAE, which takes advantage of latent space representation via the penultimate layer and Autoencoder to exclude malicious clients from the training process. The experimental results on the CIC-ToN-IoT and N-BaIoT datasets confirm the feasibility of our defensive mechanism against cutting-edge poisoning attacks for developing a robust FL-based threat detector in the context of IoT. More specifically, the FL evaluation witnesses an upward trend of approximately 98% across all metrics when integrating with our Fed-LSAE defense. Federated Learning, Poisoning Attack, threat detection, Autoencoder, Penultimate Layer Representation, latent space. ## I Introduction Recently, the Internet of Things (IoT) has emerged as a transformative technology that is reshaping the way we interact with the world around us. In fact, IoT refers to an expanding network of interrelated devices equipped with sensors, software, and additional technologies that facilitate data exchange among themselves and with other systems via the Internet. The potential applications of IoT are diverse and span a range of sectors including healthcare, transportation, agriculture, manufacturing, and smart cities. However, the rapid expansion of the IoT also poses significant security risks that must be addressed [1][2]. One of the most common threats that IoT systems have to face is cyberattacks. Cybercriminals can exploit security flaws in both IoT networks and devices in order to attain unauthorized access, exfiltrate sensitive information, cause physical damage, or carry out large-scale attacks such as distributed denial-of-service (DDoS). Therefore, the need for an intrusion detection system (IDS) as a layer of defense against cyber threats in IoT infrastructure is becoming more and more crucial [3][4]. Moreover, in order to enhance the capability of detecting unknown malicious traffices and leverage the vast amounts of data generated by IoT devices or large networks, machine learning (ML) [5] has been implemented in constructing robust IDS systems [6][7][8][9]. Traditionally, the common technique to build ML models is centralized, where all training data is collected and stored on a single server. However, this method is becoming impractical [10][11] in fact due to privacy and security concerns surrounding data collection. Many concerns have been raised about the confidentiality of data owners due to the potential of sensitive data being compromised or lost during the data storage, transmission, or sharing process. 
Additionally, the heavy computational cost is also a major challenge in training a conventional ML model. In this context, Federated Learning (FL) [12] has emerged as a new paradigm of distributed machine learning for building intelligent and privacy-preserving applications in IoT ecosystems and smart cities [13, 14]. This learning scheme allows multiple devices or entities to train a shared model collaboratively while keeping the training data decentralized. To be more specific, after initializing the model, the global server transmits it to each participating collaborator. Then each device trains the model on its local dataset and sends the updated model parameters back to the central server for aggregation. This procedure is repeated in several rounds so that the model can gain knowledge from a variety of data sources while maintaining data privacy and confidentiality. Therefore, adopting FL [15][16][17] to train robust ML-based threat detectors for IoT would be a potential strategy when it comes to cybersecurity. Nevertheless, FL systems have to deal with poisoning attacks [18][19][20][21] from their internal parties since the central server has no right to access the private local data of its collaborators. Unfriendly participants might pretend to be honest clients and manage to corrupt the learning phase by injecting malicious data (data poisoning) [19] or modifying updated model parameters (model poisoning) [22][18]. In this way, they could deteriorate the general performance of the global model (untargeted [23][21]) or bias the prediction of attacker-chosen class inputs (targeted [24][25]). Targeted poisoning attacks are more advanced and trickier to carry out since they require adversaries to remain stealthy and undetected by threat hunters while maintaining the original detection function on the remaining classes. As a consequence, numerous robust aggregation schemes [24][25][26][27][28] against poisoning attacks have been introduced to the research community in the past few years. Most defense mechanisms [29][28][30][31] are based on anomaly detection and verify an anomalous updated model as an outlier compared to benign groups. Outliers are then excluded from the server-side aggregation to ensure the stable performance of the global model. However, these methods rely on the model parameter space, while ML-based architectures in reality are constructed from thousands to millions of parameters. Therefore, the mentioned defensive frameworks must deal with a huge number of weight parameters, posing a heavy computational burden and making it difficult to clearly detect poisonous models. In addition, some defense methods [28] also need to determine the number of attackers in advance, which is impractical in real-world environments. At the same time, these solutions encounter numerous obstacles in differentiating between malicious model parameters and benign ones trained on non-independent and identically distributed (non-IID) data [32][33][34]. New approaches using latent space, such as FedCC [35] and FLARE [36], have been published recently to address those shortcomings. By extracting and comparing Penultimate Layer Representations (PLRs) among models, the solutions of FedCC [35] and FLARE [36] have shown that PLRs of benign model parameters follow an identical pattern, while poisonous PLRs adhere to other directions. 
However, [36] requires an auxiliary dataset in the server to compute PLR for each local model, which violates the FL standards of data privacy, especially in the case of a compromised global server. Meanwhile, the FedCC approach[35] extracts PLRs directly from the updated models without using an auxiliary dataset, which could cause the instability of PLRs if each local model was trained on a different data distribution. Thus, in this paper, we propose a new latent space (LS)-based anti-poisoning mechanism called Fed-LSAE to tackle poisoning attacks in FL-based systems, even in non-IID data environments. More specifically, Penultimate Layer Representation (PLR) would be utilized as the first LS-based core component in detecting malicious models. Moreover, to address the mentioned issue in FedCC [35], we implement Autoencoder (AE) as a second module to reduce the uncertainty of PLR changing and extract the LS representation of PLRs through the bottleneck layer assuming that PLR's parameter size is still massive. By learning the latent space of updated models, we can recognize the similarity level between updated weight models and the global model via Centered Kernel Alignment (CKA) algorithm. The CKA scores are then clustered into two groups, where the larger cluster of models will be selected as benign ones and utilized for the FedAvg aggregation. To sum up, we outline the primary contributions of this study as follows. * This work deeply investigates three typical types of poisoning attacks, including both data and model poisoning, against FL-based threat detectors in the context of IoT. From that, we designed a new anti-poisoning scheme named Fed-LSAE by detecting malicious uploaded weight parameters via Autoencoder-based latent space representations. Especially, this approach do not need any prior knowledge or raw data on the server like previous works [28][36][29]. * We conduct several experimental scenarios to reveal the effectiveness of our defense against poisoning attacks through an in-depth analysis of two datasets of IoT cyberattacks with different ML models. The proposed approach can work well when the percentage of adversaries is up to 40%. * By integrating AE into the defense mechanism, we show that our Fed-LSAE can outperform FedCC [35] in clearly distinguishing between benign clients (having IID or non-IID data) and malicious ones. In addition, our proposed framework is more conducive to identifying and removing poisoned updates from the model aggregation stage, even in non-IID settings. The remaining sections of this article are constructed as follows. Section II introduces some related works in poisoning attacks against FL-based models and the countermeasures. The following Section III gives a brief background of applied components. Next, the threat model and methodology are discussed in Section IV. Section V describes the experimental settings and scenarios with result analysis of our Fed-LSAE performance. Finally, we conclude the paper in Section VI. ## II Related work ### _Poisoning Attacks in the context of FL_ Despite providing a privacy-preserving training mechanism, FL still exposes many vulnerabilities that can be exploited by adversaries in multiple ways [37]. Poisoning attacks are one of the most common techniques that can be easily conducted to devastate FL training performance. Based on the attacker's strategies, poisoning attacks can be separated into data poisoning and model poisoning. 
The former occurs when attackers try to manipulate their own data with the aim of updating malicious model parameters, resulting in disrupting the FL model. More specifically, adversaries could conduct label-flipping techniques or inject perturbed samples into the local training data to achieve their goals. Meanwhile, model poisoning is a type of attack where unfriendly clients directly modify the weight of updating models during the training process that changes the decision boundary of the model, causing it to classify certain inputs differently than it would have otherwise or even hindering convergence. In addition, adversarial clients can manipulate certain settings of the model during the training process, such as learning rate, the number of epochs for local training or batch size, etc. In general, model poisoning attacks are regularly easier to conduct and more efficient than data poisoning ones because they do not focus on data preparation but manipulate the weight parameters which might directly influence the productivity of the global aggregation. Numerous published recent works [38][20][39][25] have proved the efficiency of poisoning attacks in exerting a significant impact on the FL performance. To be more specific, Jiale Zhang et al. [20] proposed a GAN-based poisoning attack strategy against the federated image classifier. Attackers pretend to be reliable participants so that they can utilize the global model as a GAN discriminator to mimic other participants' training samples from a noise vector. Through evaluation of MNIST and AT&T datasets, this paper has shown that FL would be vulnerable to adversarial poisoning attacks in which any internal party has updated local model parameters trained on poisoned data to the aggregation server. Developed from the above article, Jiale Zhang et al. [38] also presented a generative poisoning mechanism named PoisonGAN against the FL protocol in the context of edge computing. This paper built two types of poisoning attack techniques: backdoor and label-flipping, to assess the feasibility of the adversarial poisoning attack against the FL framework in practice. Furthermore, Sebastien Andreina et al. [25] also examined the efficiency of model poisoning attacks against FL-based system by conducting a backdoor attack based on the principle of multitask learning: the backdoor samples train the local model on the adversarial subtask while the genuine ones help preserve behavior of model on the primary task. The authors indicated that this attack strategy can be destructive even when existing only one poisoned update in a single round. A defense framework, named BaFFLe was also published to detect backdoor attacks on CIFAR-10 and FEMNIST dataset in their work. ### _Defense mechanisms against poisoning attacks in FL_ To mitigate the risk of poisoning attacks in FL, researchers have proposed various defense mechanisms [27][40][41][26][42]. These techniques aim to detect and reduce the effects of poisoned local updates on the global model by identifying the malicious clients and removing their updates from the learning process. A familiar defense technique in previous anti-poisoning works is adopting outlier detection algorithms to reveal poisoned updates as outliers and remove them from aggregation. For instance, Nguyen Chi Vy et al. [29] investigated the federated IDS performance when conducting label-flipping and adversarial attacks on the Kitsune dataset. 
A new anti-poisoning scheme was introduced, which uses the Local Outlier Factor (LOF) algorithm to verify local updated parameters from internal collaborators. By computing the LOF distance score between the uploaded model weights and the benign history, it can reveal whether an updated local model belongs to a malicious agent or not. Although this framework showed a great defensive performance against poisoning attacks, it must ensure that the FL system starts with only benign updates in several rounds. Also, the problem of non-IID data was not discussed in the paper. In recent times, Yuan-cheng Lai et al. presented DPA-FL [31] framework as a two-phase defensive mechanism against label flipping and backdoor attacks in the context of FL-based IDS. Specifically, DPA-FL also adopted the LOF algorithm as the first stage, called relative phase, to discriminate obvious malicious models from benign ones through the significant difference of LOF anomaly scores. Towards some local models with middle LOF scores, they have to undergo the second phase (absolute phase) for further data testing to confirm. However, the experimental results on the CICIDS2017 dataset only showed its effectiveness in the case of IID data. We can see that if there exists collaborative agents with non-IID data, the relative phase using LOF will be ineffectual when it comes to clarifying the difference between benign non-IID weight parameters and malicious ones. In other words, benign non-IID models might obtain the same LOF anomaly score as poisoned ones. Meanwhile, the study [28] also proposed the robust defense against label flipping and clean label attacks for FL-based network IDS, namely SecFedNIDS, which consists of 2 defense stages: model-level and data-level. Firstly, the Stochastic Outlier Selection (SOS) algorithm is applied to the model-level defensive mechanism at the server side. Its goal is to detect poisoned updated models as outliers based on the relationship among the uploaded local model parameters, and then reject them from the global aggregation. Nevertheless, this SOS-based method needs to know the number of attackers in advance, which seems infeasible in the real-world context. Secondly, at the data-level stage, they propose a novel poisoned data detection approach based on class path similarity, in which the class path is retrieved by the layer-wise relevance propagation (LRP) algorithm. However, this method works only if there exists any interventions in local agents' datasets, which can lead to another threat called inference attack [43][44][45] or even break the rules of privacy preservation in FL. Additionally, these papers [29][31][28] only work in the model parameter space, which puts a burden on computational costs and resource consumption. Recently, there has been renewed interest [35][36] in using latent space representation to build defensive schemes for FL-based systems against model poisoning attacks (MPAs). Ning Wang et al. were the pioneer of this trend when discovering a robust model aggregation mechanism for FL, namely FLARE [36]. FLARE leveraged penultimate layer representation (PLR) of models to differentiate malicious models from benign patterns. By extracting PLR of each model through an auxiliary data, FLARE determined a trust score for each local model based on pairwise PLR discrepancies among all updated models. As a result, the server aggregation could alleviate the impact of poisonous updates with low trust scores. 
Although FLARE could outperform some previous defenses, for example FLTrust [46], in both IID and non-IID data cases, it still exposes some limitations such as the risk of data leakage when using an auxiliary dataset in the server, only accuracy metric was used to evaluate the performance of FLARE against untargeted MPAs. Furthermore, the likewise approach was proposed by Hyejun Jeong et al. with a defensive mechanism called FedCC [35]. While FedCC does not require any subset of raw data or information sharing to extract PLRs, it compared the similarity between each PLR of local update and PLR of the global model by CKA algorithm. The lower the CKA score is, the higher the likelihood that it is a poisoned model. The experimental results on three datasets indicated that FedCC surpassed FLARE [36] in all scenarios in detecting the state-of-art MPAs. However, retrieving PLR from updated models directly without the same dataset can lead to the uncertainty of PLR vectors, which might pose a negative impact on the poisoning detection rate. Lately, Yifeng Jiang et al. [27] presented MCDFL, a detection mechanism against label flipping attacks via data quality inspection. The server-side pretrained generator is delivered to each local agent to extract latent feature space as data quality according to the given label sequences. By updating these data quality metrics, the server can clarify malicious data distribution from benign patterns via K-means clustering algorithm. The success rate of MCDFL method, however, is not always guaranteed since the data quality extraction process is conducted on the client-side. As a result, the adversaries can adapt to make some perturbations on its updated data quality parameters. Also, the feasibility of MCDFL is not discussed in other advanced data poisoning attacks. To reduce the aforementioned limitations in previous works, we propose a Fed-LSAE module as a latent space-based defensive framework against different types of poisoning attacks, even in non-IID settings. Our Fed-LSAE does not require any prior knowledge or datasets for the poisoning detection on the server-side. Not only can Fed-LSAE protect FL systems from model poisoning attacks, but our recommended approach is also effective in detecting data poisoning attacks by absorbing data representation via PLR vectors. In addition, the Fed-LSAE could address the PLR instability issue by implementing AE to learn the benign pattern of PLR vectors before the training process. ## III Background ### _Penultimate Layer Representation_ The penultimate layer refers to the second-to-last layer of a neural network that is just before the output layer (**Fig. 1**). The Penultimate Layer Representation (PLR) is a vector of numbers that encodes the input data into a feature space optimized for the specific task the neural network attempting to perform. The output layer of the network then uses this feature vector to make its final predictions or classifications. In other words, we can learn the input data representation via PLR. In [36] and [35], authors demonstrated that benign PLRs follow the same distribution while malicious ones stick to other directions. Additionally, [35] indicated the penultimate layer is the most distinct layer out of all layers in the neural networks, which means we can classify local models in FL via PLR instead of all model parameters. ### _Centered Kernel Alignment_ The Centered Kernel Alignment (CKA) algorithm was introduced by Kornblith et al. 
[47] to compare feature representations in neural networks. It is designed to measure the similarity between two representations by aligning their respective kernel matrices. In its normalized version, the CKA score is computed from the Hilbert-Schmidt Independence Criterion (HSIC) as in **Eq. (1)**. \[CKA(K,L)=\frac{HSIC(K,L)}{\sqrt{HSIC(K,K)HSIC(L,L)}} \tag{1}\] where \(K\) and \(L\) are kernel matrices corresponding to two feature representations. The resulting CKA score ranges from 0 to 1, where a score of 1 indicates perfect alignment between the two sets of feature representations. In this work, we use CKA as a benchmark to evaluate the resemblance between each local LS representation and the global one, so that malicious LS representations, which are likely to be distinct from the global LS, can be filtered out. Compared to other similarity measures such as cosine similarity, CKA offers a clearer separation between a malicious model and a non-IID-based model when both are compared against a benign one.

### _Latent space representation in Autoencoder_

An Autoencoder (AE) is an unsupervised ML algorithm used to learn representations of data for a given task. As shown in **Fig. 2**, it consists of two main components: an Encoder and a Decoder. The encoder compresses the input data into a lower-dimensional representation, while the decoder reconstructs that representation back to the original data shape. This compressed representation is the latent space representation of the AE. By learning the latent space representation of the input data, an AE can capture its most important features or patterns. As a result, it is applied in various fields, such as dimensionality reduction, feature extraction, and anomaly detection. Another benefit of the AE is its capability to represent complex data distributions with a relatively small number of parameters. This means that even a small portion of data can be sufficient to train an AE effectively, as long as the data is representative of the underlying data distribution. In this paper, by feeding PLR vectors into a pre-trained AE, we retrieve their latent space representations, which are then used to detect malicious models.

Fig. 1: The penultimate layer in a neural network.

Fig. 2: The Autoencoder (AE) architecture.

## IV Methodology

### _Threat Model_

#### Iv-A1 Threat Model

For this study, we assume that the number of adversaries is always less than half of the total number of clients, 40% to be precise. The remaining participants and the server are considered trusted parties during the training of the global model. Meanwhile, the attacker nodes continuously carry out poisoning attacks against the FL system, i.e., such nodes constantly upload their poisoned local models.

#### Iv-A2 Attacker's Knowledge and Ability

In the context of poisoning attacks, adversaries pretend to be benign participants while pursuing malevolent objectives within the FL framework. They have in-depth insight into the training architecture, since all parties in collaborative learning agree on a common learning algorithm, dataset, and model hyperparameters in advance. In other words, the poisoning attacks in this work are conducted in a white-box manner. In this section, we also define the capabilities of attackers in corrupting the global model as follows.

**Permitted.** The poisoners take absolute control of the local training procedure with their own dataset.
They can arbitrarily change some hyperparameters of the retrieved model from the global server so that poisoning attacks can achieve high performance. **Not permitted.** Malicious participants have no right to interfere with the learning phase or training data of other participants. Moreover, they could not influence the server-side aggregation or modify the previously agreed-upon training algorithm. #### Iv-A3 Attack Strategy **Data manipulation using Label Flipping.** This is a type of poisoning attack that aims to undermine the performance of the federated model by intentionally flipping the labels of some data samples used for training. In this work, we only perform a binary classification task, where the detector recognizes the 1-labeled examples as attacks and the 0-labeled ones as benign. Therefore, the number of adversaries would flip all labels to the opposite ones so that their local parameters would become poisonous to the convergence of the global model. **Data manipulation using GAN-based adversarial samples.** We leverage the IDSGAN [48] as the main GAN architecture in crafting adversarial samples to conduct data poisoning attacks. In other words, each unfriendly participant could train their own IDSGAN, as described in **Fig. 3**. The global model plays a role as the IDS component in IDSGAN. By feeding malicious samples into IDSGAN, attackers generate adversarial ones to train their poisoned local model, which is conducive to the misclassification of the global model. **Weight-scaling Model Poisoning.** Model poisoning attacks can be classified into two categories based on the attacker's objectives: untargeted and targeted. In untargeted attacks, the attacker's goal is to reduce the overall accuracy of the model, whereas in targeted attacks, the objective is to manipulate the model into misclassifying a particular class of inputs. The former aims to make the model less effective in general, while the latter seeks to create a specific bias in the model's decision-making process. In this paper, we only focus on the untargeted approach, where adversaries try to scale up their model weights \(\alpha\) times before transmitting it to the aggregation server. **Eq. (2)** has shown the model weights \(w_{i}\) of the \(i\)-th client as an attacker after scaling up its original model. \[w_{i}\leftarrow\{\alpha w_{i}^{1},\alpha w_{i}^{2},...,\alpha w_{i}^{P}\} \tag{2}\] where \(w_{i}^{p}\) refers to the \(p\)-th parameter value of \(w_{i}\), and \(P\) is the total number of model parameters. ### _Detailed design of Fed-LSAE_ The overall architecture of our proposed system is given in **Fig. 4**. #### Iv-B1 Architectural components Training clientsThey are local agents which train ML-based models with their dataset before sending model weights to the aggregation server for computing the global model. Aggregation serverThe Fed-LSAE is located on the aggregation server to detect and remove malicious updates from the global training. It consists of 3 elements, including: * PLR Extractor: This module is responsible for extracting the PLR sequence of the inputs, which are the updated local models and global model. * Pretrained Autoencoder (AE): Via feeding PLR vectors into AE, it produces compressed representations containing the most important features of each PLR. The more detailed information of this element is depicted in **Section IV-B2**. 
* Clustering algorithm: By clustering CKA scores into two groups, we can indicate the smaller group as adversarial members and then filter them from the federated training process. Fig. 3: The GAN-based architecture for generating adversarial samples for poisoning attacks. #### Iii-A2 Pretraining process of Autoencoder Before transmitting a duplicate version of the global model to participating agents, the global model is collaboratively trained in a few internal server-side organizations by their dataset in one round. We assume that all of these organizations are friendly, benign and belong to the global server so that AE can learn the characteristics of benign models in over \(e\) epochs. Thereby, the encoder takes a PLR vector \(x\) of each benign aforementioned model as input, and produces a latent space representation \(h\) using a series of nonlinear transformations as follows: \[h=f_{\theta_{1}}(x) \tag{3}\] where \(f_{\theta_{1}}\) represents the encoder function with learnable parameters \(\theta_{1}\). On the other hand, the decoder takes the latent representation \(h\) and produces a reconstructed output vector \(\hat{x}\) (**Eq. (4)**). \[\hat{x}=g_{\theta_{2}}(h) \tag{4}\] where \(g_{\theta_{2}}\) represents the decoder function with learnable parameters \(\theta_{2}\). The objective of the autoencoder is to minimize the difference between the input vector \(x\) and the reconstructed output vector \(\hat{x}\). Therefore, we make use of Mean Squared Error (MSE) in **Eq. (5)** as the reconstruction loss. \[L(x,\hat{x})=\frac{1}{n}\sum_{i=1}^{n}(x_{i}-\hat{x}_{i})^{2} \tag{5}\] where \(x_{i}\) and \(\hat{x}_{i}\) represent the \(i\)-th element of the input and reconstructed data, respectively. We choose AE because it is an unsupervised algorithm that can learn inputs' representation effectively even in the case of limited training data points. Also, another benefit of AE is its capability of learning non-IID patterns if the internal server-side organizations have various data distributions. As a result, it fosters the accuracy of the following CKA calculation phase that reduces the likelihood of misclassifying benign non-IID models as malicious ones. #### Iii-A3 Workflow of Fed-LSAE **Fig. 4** also illustrates the workflow of Fed-LSAE module integrated into the server-side process of FL-based threat detectors. To be more specific, the training procedure undergoes the following steps. * _Step 1_: Initially, the central server initializes a new global model and an Autoencoder (AE) architecture, which then simultaneously experiences an internal training process on the server side (described in **Section IV-B2**) to learn important features of benign PLR vectors. The resulting global model is then delivered to selected \(k\) out of \(n\) collaborative agents. * _Step 2_: The total \(k\) clients train each local model on their own dataset, and then update the trained model weights to the server for aggregation. It is time for the Fed-LSAE module to make the action. The updated weight parameters from \(k\) agents, as well as the global weight, are sent directly to a PLR extractor. * _Step 3_: As described in **Algorithm 1**, the PLR extractor outputs the global PLR for the global model and \(k\) local PLRs (lines 3-6) to feed into the pre-trained AE model. * _Step 4_: Later, the latent space representation (LSR) of each PLR will be retrieved via the bottleneck layer of the AE (lines 8-11). 
As mentioned before, this step aims to reduce the instability of PLR vectors if local agents train their models in different data distributions. Furthermore, it can minimize the computational costs on the assumption that the PLR dimension is still relatively large. * _Step 5_: In this step (lines 13-15), we leverage the Radial Basis Function (RBF) CKA algorithm to measure the similarity level between the global LSR and each local LSR. The reason for using RBF-CKA is that it can show Fig. 4: The Fed-LSAE architecture for Federated Threat Detection System based on Latent Space Representations. the similarity differences among non-IID models are slighter than the similarity differences between non-IID models and malicious ones. Note that, non-IID models here are trained by benign clients, in which each of them has a different data distribution in terms of the number of data points or the ratio between normal and attack samples. * _Step 6_: This stage involves gathering CKA scores into two groups by using a clustering algorithm (lines 17-19). In our work, K-means is used for this task. Since the number of attackers cannot exceed 50% of total clients, we assign the greater cluster as benign members, while the other will be poisonous models. * _Step 7_: This is the final step where the benign group is selected for the FedAvg aggregation (line 21). In other words, adversaries with poisoned updates would be removed from aggregating a new version of the global model, which results in the robust aggregation for the FL system. The FedAvg algorithm is defined as **Eq. (6)**. \[w_{t+1}\leftarrow\sum_{i=1}^{k}\frac{n_{i}}{n}w_{i,t}\] (6) where \(w_{t+1}\) is the updated global model at round \(t+1\), \(w_{i,t}\) is the local model of the \(i\)-th benign client at round \(t\), \(n_{i}\) is the number of local data points of the \(i\)-th benign client, \(n\) is the total number of local data points across all clients in the selected benign cluster, and \(k\) is the total number of benign clients participating in the federated learning process. The resulting version of the global model is then sent to newly selected \(k\) agents, and this process is repeated from step 2 to step 7 until the global model obtains the convergence point. ## V Experiments and Analysis ### _Dataset and Preprocessing_ To conduct experiments on IoT network traffic attacks, we utilize recent ML-based NIDS datasets called CIC-ToN-IoT and N-BaIoT. #### V-A1 CIC-ToN-IoT CIC-ToN-IoT is a network traffic collection, extracted from the PCAP files of the ToN-IoT dataset by the CICFlowMeter-v4 tool. It contains more than 5.3 million network records with 85 features in a csv file, including roughly 53% attack instances and 47% benign ones. The attack samples can be further classified into 9 cyberattack types including Backdoor, DoS, DDoS, Injection, etc. In our study, we only select 1,070,158 samples to conduct experiments while maintaining the proportion between benign examples and attack ones as mentioned. Initially, there are 85 features for each record with 83 main features, a Label column (defining benign samples as 0 and attack samples as 1), and an Attack column (defining the types of attack). Due to our scope of binary classification, the Label column is used as the training target. In addition, we remove 14 redundant features with unique values or serve no purpose in labeling samples, such as Flow ID, Src IP, Dst IP, etc. The resulting training data has 70 dimensions along with the Label column. 
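As an illustration, this feature-selection step can be sketched with pandas as follows. This is a minimal sketch rather than the exact preprocessing script: the CSV path is hypothetical, and only the three identifier columns named above are listed explicitly (the remaining redundant features would be appended to the same list).

```python
import pandas as pd

# Hypothetical file name; the actual CSV path of the CIC-ToN-IoT subset is not given here.
df = pd.read_csv("cic_ton_iot_subset.csv")

# Drop identifier/metadata columns that serve no purpose for labelling. Only the three
# columns named in the text are listed explicitly; the remaining redundant features
# would be appended to this list.
redundant = ["Flow ID", "Src IP", "Dst IP"]
df = df.drop(columns=[c for c in redundant if c in df.columns])

# Binary classification: 'Label' (0 = benign, 1 = attack) is the training target;
# the multi-class 'Attack' column is therefore not used as an input feature.
y = df["Label"]
X = df.drop(columns=["Label", "Attack"], errors="ignore")
print(X.shape, y.value_counts(normalize=True))
```

The record filtering and Min-max scaling that complete the preprocessing pipeline are described next.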
Moreover, any records containing non-numeric values (NaN) or infinity values (Inf) are also discarded. Finally, we apply a Min-max normalization as in **Eq. (7)** to the remaining 70 features to have their values in the range of [0,1]. \[x_{scaled}=\frac{x-\text{min}(x)}{\text{max}(x)-\text{min}(x)} \tag{7}\] where \(x_{scaled}\) is the normalized version of feature value \(x\). \(\text{max}(x)\) and \(\text{min}(x)\) refer to the maximum and the minimum values of this feature in the dataset, respectively. #### V-A2 N-BaIoT The N-BaIoT dataset [49] is a publicly available dataset released in 2019, designed for research on intrusion detection systems for IoT devices. It contains network traffic data from a heterogeneous IoT environment with 50 different types of devices, with both benign and malicious traffic data, and various types of attacks. In our experiments, we also take a subset of N-BaIoT to evaluate our Fed-LSAE which consists of over 800,000 samples with the ratio of benign instances and malicious ones is approximately 1:10. In these experiments, N-BaIoT undergoes the same preprocessing steps as CIC-ToN-IoT. The resulting dataset contains records with 115 features and a Label column with binary values of 0 and 1 for benign and attack samples, correspondingly. In addition, all 115 features are normalized to the range of [0,1] using the same Min-Max normalization as in **Eq. (7)**. After the preprocessing phase, both datasets are divided into 3 parts for different usage. * Part 1 (70%): the training dataset divided for \(n\) participating agents. * Part 2 (25%): the testing dataset at the server-side to evaluate the global model performance. * Part 3 (5%): the training dataset that is evenly divided for internal server-side organizations with the aim of training AE in the initialization stage. ### _Performance Metrics_ We evaluate our proposed method via 4 following metrics: Accuracy, Precision, Recall, F1-Score. Since our work conducts experiments in binary classification tasks, the value of each metric is computed based on a 2D confusion matrix which includes True Positive (TP), True Negative (TN), False Positive (FP) and False Negative (FN). _Accuracy_ is the ratio of correct predictions \(TP,\ TN\) over all predictions. Mathematically, the _Accuracy_ of a model is calculated as **Eq. (8)**. \[Accuracy=\frac{TP+TN}{TP+TN+FP+FN} \tag{8}\] _Precision_, as in **Eq. (9)**, measures the proportion of \(TP\) over all samples classified as positive. \[Precision=\frac{TP}{TP+FP} \tag{9}\] _Recall_, which is defined in **Eq. (10)**, measures the proportion of \(TP\) over all positive instances in testing dataset. \[Recall=\frac{TP}{TP+FN} \tag{10}\] _F1-Score_ is the Harmonic Mean of \(Precision\) and \(Recall\) that has the formula as in **Eq. (11)**. \[F1-score=2\times\frac{Precision\times Recall}{Precision+Recall} \tag{11}\] ### _Experimental Settings_ In this work, we utilize Pytorch framework and scikit-learn library to build our Fed-LSAE on the hardware configuration of Intel(r) Xeon(r) E5-2660 v4 CPU (16 cores - 1.0 GHz), 100 GB RAM and the operating system of Ubuntu 16.04. The FL-based training process occurs in 10 communication rounds (\(R=10\)). There are total \(n=10\) clients participating in the learning phase where only \(k\) agents are selected in each round depending on a fraction factor \(C\). In these experiments, \(C\) is defined as 1.0, which means \(k=C*n=10\) agents in each round. All participants train their local model in 3 epochs with the batch size of 2048. 
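Putting these federation parameters together, a single client's local update can be sketched in PyTorch as follows. This is a minimal sketch under the settings of this subsection (3 local epochs, batch size 2048, and the cross-entropy loss and SGD optimizer specified immediately below); the model class is assumed to have a no-argument constructor.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

LOCAL_EPOCHS, BATCH_SIZE = 3, 2048  # local epochs and batch size stated above

def local_update(global_model: nn.Module, X: torch.Tensor, y: torch.Tensor) -> dict:
    """One client's round: copy the global model, train on local data, return the weights."""
    model = type(global_model)()                 # assumes a no-argument constructor
    model.load_state_dict(global_model.state_dict())
    loader = DataLoader(TensorDataset(X, y), batch_size=BATCH_SIZE, shuffle=True)
    criterion = nn.CrossEntropyLoss()            # y holds class indices (0 = benign, 1 = attack)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    model.train()
    for _ in range(LOCAL_EPOCHS):
        for xb, yb in loader:
            optimizer.zero_grad()
            loss = criterion(model(xb), yb)
            loss.backward()
            optimizer.step()
    return model.state_dict()                    # uploaded to the aggregation server
```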
The loss function is the cross-entropy and the stochastic gradient descent (SGD) optimizer is also used with a learning rate of 0.001 and momentum of 0.9. In addition, the ML-based threat detectors are built based on 2 neural network structures named Convolutional Neural Network (CNN) and LeNet, of which architectures are described in **Table I** and **Table II** respectively. In terms of AE, we use linear layers with bias to build encoder and decoder, as in **Table III**. Each PLR of benign models is respectively fed into AE model to train in \(e=20\) epochs with an Adam optimizer and the learning rate of 0.001. The input and output dimension used in AE are the same, which represent the number of features of each PLR vector. In GAN-based poisoning attacks, we implement a GAN architecture with the hyperparameters of epochs = 20, batch_size = 512, and the Adam optimizer with a learning rate of 0.0001. The generator \(G\) and discriminator \(D\) are designed with 5 layers, with the detailed structure in **Table IV**. Note that, all following experiments are performed 5 times, and overall results are the average to ensure accuracy and reliability of our findings. ### _Experimental Scenarios_ #### Iv-D1 Scenario 1 - Baseline performance of federated threat detectors This scenario aims to evaluate the baseline effectiveness of two ML-based threat detector models, including CNN and LeNet, on CIC-ToN-IoT and N-BaIoT datasets in the context of FL. In other words, only benign clients participate in training the FL-based model, whose aggregation is based on the FedAvg algorithm as in **Eq. (6)**. Moreover, each local agent has the same data distribution as the others in terms of the number of training samples and the ratio of benign and \begin{table} \begin{tabular}{c c c c c} \hline **Layer** & **In** & **Out** & **Kernel / Stride / Padding** & **Activation** \\ \hline conv1d\_1 & 1 & 64 & 3 x 3 / 1 / 1 & ReLU \\ batchnorm1d & 64 & & - & - \\ conv1d\_2 & 64 & 128 & 3 x 3 / 1 / 0 & ReLU \\ batchnorm1d & 128 & & - & - \\ flatten & - & - & - & - \\ fc\_1 & - & 64 & - & - \\ fc\_2 & 64 & 2 & - & - \\ \hline \end{tabular} \end{table} TABLE II: LeNet Architecture \begin{table} \begin{tabular}{c c c c} \hline **Layer** & **Input** & **Output** & **Activation** \\ \hline **Encoder** & & & \\ \hline Linear & input\_dim1 & 512 & ReLU \\ Linear & 512 & 128 & ReLU \\ Linear & 128 & 64 & ReLU \\ Linear & 64 & 16 & - \\ \hline **Decoder** & & & \\ \hline Linear & 16 & 64 & ReLU \\ Linear & 64 & 128 & ReLU \\ Linear & 128 & 512 & ReLU \\ Linear & 512 & output\_dim1 & Tanh \\ \hline \end{tabular} * Dimension of input and output respectively \end{table} TABLE III: Structure of Encoder and Decoder in AE architecture malicious labels. Our goal is to build different FL-based threat detectors that have the ability to detect abnormal traffic in IoT networks. #### Iv-D2 Scenario 2 - Evaluation on the performance of Fed-LSAE in eliminating poisoned updates To assess the effectiveness of our proposed defense framework, we clarify the robustness of FL-based threat detectors against poisoning attacks after integrating the Fed-LSAE module in IID environment. For more details, 4 out of 10 clients are assumed as compromised agents (adversaries) to conduct 3 typical strategies of poisoning attacks, as described in **Section IV-A3**, throughout the FL-based learning phase. In weight-scaling model poisoning attacks, adversaries try to scale their poisoned weight parameters up to 10 times. 
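The weight-scaling step of **Eq. (2)** can be sketched as follows. This is a minimal PyTorch sketch in which every floating-point entry of the uploaded state dictionary is multiplied by the scaling factor \(\alpha\) (set to 10 in this scenario); integer buffers (e.g., batch-normalization counters) are left untouched.

```python
import torch
from torch import nn

ALPHA = 10.0  # scaling factor; Scenario 2 scales poisoned weights up to 10 times

def scale_weights(model: nn.Module, alpha: float = ALPHA) -> dict:
    """Untargeted weight-scaling poisoning as in Eq. (2): the malicious client multiplies
    every floating-point parameter of its local model by alpha before uploading it."""
    return {k: (alpha * v if torch.is_floating_point(v) else v)
            for k, v in model.state_dict().items()}
```

Because the scaled update dominates the weighted average of FedAvg, this attack directly distorts the global model parameters, which is why it is more damaging than the data-level attacks.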
In this scenario, we observe the negative impact of these attacks on the overall performance of FL-based detector models and the usefulness of our Fed-LSAE in defeating adversaries to maintain the stability of these models. #### Iv-D3 Scenario 3 - Comparison of defense performance to other methods This scenario reveals the outstanding features of our Fed-LSAE compared to the previous proposed FedCC scheme [35]. To have a reliable comparison, this evaluation is performed in the same context of Median [50] poisoning attack, an untargeted model poisoning attack as in experiments of FedCC [35]. Besides, to ensure the objectivity of our approach, we conduct this experiment on two models, including CNN and LeNet, following a similar methodology as that employed in the FedCC study. All the metrics and CKA scores are averaged to observe the stability of each scheme when dealing with this attack. Our desired objective is to demonstrate how Fed-LSAE outperforms FedCC in the 3 following aspects: * The stability in dealing with Median attacks in the case of IID data. * The ability to detect poisonous agents. * The performance when the rest of the benign clients follow the non-IID pattern, which is depicted in **Fig. 5**. More specifically, on both datasets, the first six clients, including four adversaries, have the same data distribution whereas the remaining four clients will follow other patterns. Clients 7 and 9 contain 100% benign samples, while clients 8 and 10 collect only attack data traffic. ### _Experimental Results_ #### Iv-E1 Scenario 1 The performance of two FL-based threat detector models is shown in **Fig. 6**, in terms of the Accuracy, Precision, Recall and F1-score. Although LeNet model has witnessed a fluctuation during the first three rounds on N-BaIoT dataset, the performance of the model on both datasets has rapidly grown to the convergence point and achieved more than 98% in all metrics. These results prove the effectiveness of these FL models in detecting cyber threats in the context of IoT networks. #### Iv-E2 Scenario 2 The detailed results of our proposed method in eliminating poisoned updates are summarized in **Fig. 7**, **Fig. 8** and **Fig. 9**. Thereby, \(A\) and dot lines describe metrics in case of attack without Fed-LSAE, while \(D\) and solid lines indicate results with defense by Fed-LSAE. In all three poisoning attack strategies, there has been a sharp decline in the performance of both FL-based threat detectors without defense. In Label Flipping (**Fig. 7**) and GAN-based attacks (**Fig. 8**), the detecting rate of FL-based models has fluctuated around roughly 50% across all metrics. On the other hand, the models seem to be completely damaged by the weight-scaling model poisoning method when Precision, Recall and F1-Score benchmarks reach exactly 0% in almost all communication rounds respectively. It implies that weight-scaling model poisoning is easier to conduct and more efficient than the others, since it directly affects the global model parameters. 
In the context of thwarting poisoned updates, the performance of our Fed-LSAE has rapidly reached the convergence \begin{table} \begin{tabular}{l c c c} \hline **Layer** & **Input** & **Output** & **Activation** \\ \hline **Generator** & & & \\ \hline Linear & input\_dim\({}^{2}\) & input\_dim/2 & ReLU \\ Linear & input\_dim/2 & input\_dim/2 & ReLU \\ Linear & input\_dim/2 & input\_dim/2 & ReLU \\ Linear & input\_dim/2 & output\_dim\({}^{2}\) & - \\ \hline **Discriminator** & & & \\ \hline Linear & input\_dim & input\_dim\({}^{2}\) & LeakyReLU \\ Linear & input\_dim\({}^{2}\) & input\_dim\({}^{2}\) & LeakyReLU \\ Linear & input\_dim\({}^{2}\) & input\_dim\({}^{2}\) & LeakyReLU \\ Linear & input\_dim\({}^{2}\) & input\_dim\({}^{2}\) & LeakyReLU \\ Linear & input\_dim\({}^{2}\) & input\_dim/2 & LeakyReLU \\ Linear & input\_dim/2 & output\_dim & - \\ \hline \multicolumn{3}{l}{\({}^{2}\) Dimension of input and output respectively} \\ \end{tabular} \end{table} TABLE IV: Structure of Generator \(G\) and Discriminator \(D\) in IDSGAN architecture Fig. 5: The data distribution among clients on _(a)_ CIC-ToN-IoT and _(b)_ N-BaIoT datasets in non-IID cases. point since \(3^{rd}\) communication round and achieved over 98% across all four metrics. By learning data representations via PLR vector, which is the most distinguishable layer out of all layers in a neural network, our Fed-LSAE has proved its effectiveness in eliminating both data poisoning and model poisoning attacks from ruining robust FL-based detection systems. #### Iv-B3 Scenario 3 The descriptive statistics in **Table V** have revealed that our Fed-LSAE achieves a better detection rate than FedCC in terms of Median attacks in all cases. As we can see, the Fed-LSAE framework can recognize all 4 adversaries out of 10 clients and witnessed a stable trend in its defensive performance during 10 rounds, with an average of over 93% in Accuracy and 96% in F1-Score across four cases. Meanwhile, the FedCC scheme still experienced difficulties in detecting Median attacks with average results of approximately 66% and 77% in Accuracy and F1-Score respectively in the worst case of LeNet model on N-BaloT dataset. The thing that makes Fed-LSAE outperformance is the significant difference in CKA between benign and malicious latent spaces compared to the global ones (GLS). Whereas, this distinction is quite ambiguous in FedCC. **Fig. 10** illustrates that the CKA scores of adversaries (Clients 2-5) in Fed-LSAE are distinct from the others. For example, in the case of LeNet-based detector on N-BaloT dataset, malicious latent space vectors achieve below 0.4 scores or 40% of similarity level compared to the GLS, while benign ones are approximately 98% similar to the global model. Meanwhile, FedCC only Fig. 8: Fed-LSAE performance against GAN-based attacks on _(a,b)_ CIC-ToN-IoT and _(c,d)_ N-BaloT datasets. Fig. 6: FL-based training process of threat detector on _(a,b)_ CIC-ToN-IoT and _(c,d)_ N-BaloT datasets. Fig. 7: Fed-LSAE performance against Label Flipping attacks on _(a,b)_ CIC-ToN-IoT and _(c,d)_ N-BaloT datasets. Fig. 9: Fed-LSAE performance against Model Poisoning attacks on _(a,b)_ CIC-ToN-IoT and _(c,d)_ N-BaloT datasets. witnesses slight distances in CKA scores among those clients, which results in considerable difficulties in the following clustering phase. These results have proved the outstanding benefit of integrating AE into the defensive system. 
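For concreteness, the per-round filtering that produces these CKA scores (Steps 4-6 of the workflow) can be sketched as follows. This is a minimal NumPy/scikit-learn sketch rather than the exact implementation: `encoder`, `plr_global`, and `plr_locals` are hypothetical names for the pre-trained AE encoder (assumed to return NumPy arrays) and the extracted PLR vectors, and each latent vector is treated as a set of one-dimensional samples when forming the RBF kernel matrices of **Eq. (1)**.

```python
import numpy as np
from sklearn.cluster import KMeans

def rbf_kernel(x: np.ndarray) -> np.ndarray:
    """RBF kernel matrix for samples in the rows of x; bandwidth = median pairwise distance."""
    sq = np.sum(x ** 2, axis=1, keepdims=True)
    d2 = np.maximum(sq + sq.T - 2.0 * x @ x.T, 0.0)
    sigma2 = np.median(d2[d2 > 0]) if np.any(d2 > 0) else 1.0
    return np.exp(-d2 / (2.0 * sigma2))

def centered(k: np.ndarray) -> np.ndarray:
    n = k.shape[0]
    h = np.eye(n) - np.ones((n, n)) / n
    return h @ k @ h

def cka_rbf(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized CKA score (Eq. (1)); each latent vector is treated as 1-D samples."""
    ka = centered(rbf_kernel(a.reshape(-1, 1)))
    kb = centered(rbf_kernel(b.reshape(-1, 1)))
    hsic = lambda k, l: np.sum(k * l)        # proportional to tr(KHLH); constants cancel in the ratio
    return hsic(ka, kb) / np.sqrt(hsic(ka, ka) * hsic(kb, kb))

def select_benign(encoder, plr_global, plr_locals):
    """Encode PLRs, score each local latent space against the global one, keep the larger cluster."""
    z_g = encoder(plr_global)
    scores = np.array([cka_rbf(z_g, encoder(p)) for p in plr_locals]).reshape(-1, 1)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(scores)
    benign_cluster = np.argmax(np.bincount(labels))   # larger cluster is taken as benign
    return [i for i, l in enumerate(labels) if l == benign_cluster]
```

Only the clients returned by `select_benign` would then enter the FedAvg aggregation of **Eq. (6)**.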
When it comes to non-IID data, as shown in **Table VI**, despite quite low results, Fed-LSAE still gains a more stable performance compared to FedCC in all cases. For example, in the case of using LeNet model on CIC-ToN-IoT, the Accuracy and F1-Score of Fed-LSAE are significantly higher than those of FedCC with a difference of 12% and 16% respectively. The reason is that Fed-LSAE wins over its counterpart in distinguishing malicious and benign agents having non-IID data. More specifics, in **Fig. 11**, the CKA scores of poisoned latent space representations (Clients 2-5) in FedCC are almost the same as those of benign non-IID ones (Clients 7-10) when comparing the similarity to the GLS. All of them are more than 0.9 in all cases, leading to the misclassification between malicious and benign non-IID models. As a consequence, the aggregated global model is still affected by malicious updates, and its performance in detecting cyber threats then becomes unsatisfactory (**Table VI**). As aforementioned in **Section II-B**, this is the weakness of FedCC caused by directly extracting PLR from updated models without the same dataset, resulting in the instability of PLR vectors in non-IID cases. In Fed-LSAE, with the support of the pre-trained AE, the updates of adversaries (Clients 2-5) are completely distinct from the rest of the clients. The best case to prove this ability is using LeNet model on the N-BaIoT dataset (**Fig. 11**). Benign non-IID updates (clients 7-10) follow the same pattern as benign IID updates (clients 1,6) with CKA scores of more than 0.9. Meanwhile, the malicious latent space vectors are only approximately 40% similar to the GLS. This produces a clear and notable difference to recognize poisoned updates. Therefore, Fed-LSAE could easily prevent those anomalous updates from affecting the aggregation phase while maintaining the performance of benign Non-IID clients. The results from those experiments indicate the effectiveness of Fed-LSAE in building a robust FL-based threat detector, even in a non-IID environment. 1 ## VI Conclusion This paper proposes a robust aggregation method for federated learning, called Fed-LSAE, which utilizes the latent space representation via the penultimate layer and autoencoder \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Scheme**} & \multicolumn{4}{c|}{**CIC-ToN-IoT**} & \multicolumn{4}{c|}{**N-BaIoT**} \\ \cline{3-10} & & Accuracy & Precision & Recall & F1-Score & Accuracy & Precision & Recall & F1-Score \\ \hline \multirow{2}{*}{**CNN**} & **FedCC** & 0.98923 & 0.9932 & **0.98676** & 0.98981 & 0.94178 & 0.94172 & 0.99959 & 0.96973 \\ \cline{2-10} & **Fed-LSAE** & **0.99118** & **0.99923** & 0.98413 & **0.99159** & **0.95623** & **0.99678** & **0.97719** \\ \hline \multirow{2}{*}{**LeNet**} & **FedCC** & 0.80766 & 0.80428 & **0.99928** & 0.87304 & 0.65797 & 0.9999 & 0.63044 & 0.76556 \\ \cline{2-10} & **Fed-LSAE** & **0.96116** & **0.94727** & 0.99602 & **0.96819** & **0.93106** & **0.93251** & **0.99841** & **0.96420** \\ \hline \end{tabular} \end{table} TABLE V: Defense performance comparison between FedCC and Fed-LSAE when tackling with Median attacks in IID case Fig. 11: The comparison of similarity level between Global Latent Space (GLS) and each Local Latent Space (LLS) via CKA scores in FedCC and Fed-LSAE in non-IID cases on _(a,b)_ CIC-ToN-IoT and _(c,d)_ N-BaIoT datasets respectively. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Scheme**} & \multicolumn{4}{c|}{**CIC-ToN-IoT**} & \multicolumn{4}{c|}{**N-BaIoT**} \\ \cline{3-10} & & Accuracy & Precision & Recall & F1-Score & Accuracy & Precision & Recall & F1-Score \\ \hline \multirow{2}{*}{**CNN**} & **FedCC** & 0.54266 & 0.34839 & 0.6 & 0.43725 & 0.6816 & 0.63453 & **0.9** & 0.7443 \\ \cline{2-10} & **Fed-LSAE** & **0.60364** & **0.5928** & **0.9988** & **0.73679** & **0.72094** & **0.67387** & **0.9** & **0.77069** \\ \hline \multirow{2}{*}{**LeNet**} & **FedCC** & 0.59378 & 0.57047 & 0.85507 & 0.65917 & 0.65907 & 0.60248 & 0.88016 & 0.69647 \\ \cline{2-10} & **Fed-LSAE** & **0.71483** & **0.71731** & **0.99333** & **0.81232** & **0.72651** & **0.73692** & **0.98014** & **0.82102** \\ \hline \end{tabular} \end{table} TABLE VI: Defense performance comparison between FedCC and Fed-LSAE when tackling with Median attacks in non-IID case Fig. 10: The comparison of similarity level between Global Latent Space (GLS) and each Local Latent Space (LLS) via CKA scores in FedCC and Fed-LSAE in non-IID cases on _(a,b)_ CIC-ToN-IoT and _(c,d)_ N-BaIoT datasets respectively. to eliminate malicious clients from the training process. This method is proved to mitigate poisoning attacks effectively and improve the performance of FL-based threat detectors for IoT systems. The experimental results on two datasets demonstrate the feasibility and effectiveness of the proposed method for constructing high-performance machine learning models for detecting cyber threats in the context of IoT. Our findings provide valuable insights for future research and development of robust and secure FL-based solutions in cybersecurity. In the future, we intend to evaluate our Fed-LSAE mechanism against other advanced types of poisoning attacks such as backdoor, sybil attacks, etc. Furthermore, the feasibility of Fed-LSAE in other contexts such as homomorphic encryption-enabled FL model exchanges and decentralized FL schemes should also be considered. ## Acknowledgment This research was supported by The VNUHCM-University of Information Technology's Scientific Research Support Fund.
2303.00105
Scalability and Sample Efficiency Analysis of Graph Neural Networks for Power System State Estimation
Data-driven state estimation (SE) is becoming increasingly important in modern power systems, as it allows for more efficient analysis of system behaviour using real-time measurement data. This paper thoroughly evaluates a phasor measurement unit-only state estimator based on graph neural networks (GNNs) applied over factor graphs. To assess the sample efficiency of the GNN model, we perform multiple training experiments on various training set sizes. Additionally, to evaluate the scalability of the GNN model, we conduct experiments on power systems of various sizes. Our results show that the GNN-based state estimator exhibits high accuracy and efficient use of data. Additionally, it demonstrated scalability in terms of both memory usage and inference time, making it a promising solution for data-driven SE in modern power systems.
Ognjen Kundacina, Gorana Gojic, Mirsad Cosovic, Dragisa Miskovic, Dejan Vukobratovic
2023-02-28T22:09:12Z
http://arxiv.org/abs/2303.00105v2
Scalability and Sample Efficiency Analysis of Graph Neural Networks for Power System State Estimation ###### Abstract Data-driven state estimation (SE) is becoming increasingly important in modern power systems, as it allows for more efficient analysis of system behaviour using real-time measurement data. This paper thoroughly evaluates a phasor measurement unit-only state estimator based on graph neural networks (GNNs) applied over factor graphs. To assess the sample efficiency of the GNN model, we perform multiple training experiments on various training set sizes. Additionally, to evaluate the scalability of the GNN model, we conduct experiments on power systems of various sizes. Our results show that the GNN-based state estimator exhibits high accuracy and efficient use of data. Additionally, it demonstrated scalability in terms of both memory usage and inference time, making it a promising solution for data-driven SE in modern power systems. State Estimation, Graph Neural Networks, Machine Learning, Power Systems, Real-Time Systems ## I Introduction **Motivation and literature review:** The state estimation (SE) algorithm is a key component of the energy management system that provides an accurate and up-to-date representation of the current state of the power system. Its purpose is to estimate complex bus voltages using available measurements, power system parameters, and topology information [1]. In this sense, the SE can be seen as a problem of solving large, noisy, sparse, and generally nonlinear systems of equations. The measurement data used by the SE algorithm usually come from two sources: the supervisory control and data acquisition (SCADA) system and the wide area monitoring system (WAMS) system. The SCADA system provides low-resolution measurements that cannot capture system dynamics in real-time, while the WAMS system provides high-resolution data from phasor measurement units (PMUs) that enable real-time monitoring of the system. The SE problem that considers measurement data from both WAMS and SCADA systems is formulated in a nonlinear way and solved in a centralized manner using the Gauss-Newton method [1]. On the other hand, the SE problem that considers only PMU data provided by WAMS has a linear formulation, enabling faster, non-iterative solutions. In this work, we will focus on the SE considering only phasor measurements, described with a system of linear equations [2], which is becoming viable with the increasing deployment of PMUs. This formulation is usually solved using linear weighted least-squares (WLS), which involve matrix factorizations and can be numerically sensitive [3]. To address the numerical instability issues that often arise when using traditional SE solvers, researchers have turned to data-driven deep learning approaches [4, 5]. These approaches, when trained on relevant datasets, are able to provide solutions even when traditional methods fail. For example, in [4], a combination of feed-forward and recurrent neural networks was used to predict network voltages using historical measurement data. In the nonlinear SE formulation, the study [5] demonstrates the use of deep neural networks as fast and quality initializers of the Gauss-Newton method. Both linear WLS and common deep learning SE methods at its best approach quadratic computational complexity regarding the power system size. To fully utilize high sampling rates of PMUs, there is a motivation to develop SE algorithms with a linear computational complexity. 
One way of achieving this could be using increasingly popular graph neural networks (GNNs) [6, 7]. GNNs have several advantages when used in power systems, such as permutation invariance, the ability to handle varying power system topologies, and requiring fewer trainable parameters and less storage space compared to conventional deep learning methods. One of the key benefits of GNNs is the ability to perform distributed inference using only local neighbourhood measurements, which can be efficiently implemented using the emerging 5G network communication infrastructure and edge computing [8]. This allows for real-time and low-latency decision-making even in large-scale networks, as the computations are performed at the edge of the network, closer to the data source, reducing the amount of data that needs to be transmitted over the network. This feature is particularly useful for utilizing the high sampling rates of PMUs, as it can reduce communication delays in PMU measurement delivery that occur in centralized SE implementations. GNNs are being applied in a variety of prediction tasks in the field of power systems, including fault location [9], stability assessment [10], and load forecasting [11]. GNNs have also been used for power flow problems, both in a supervised [12] and an unsupervised [13] manner. A hybrid nonlinear SE approach [14] combines a model and data-based approach using a GNN that outputs voltages which are used a regularization term in the SE loss function. **Contributions**: In our previous work [15], we proposed a data-driven linear PMU-only state estimator based on GNNs applied over factor graphs. The model demonstrated good approximation capabilities under normal operating conditions and performed well in unobservable and underdetermined scenarios. This work significantly extends our previous work in the following ways: * We conduct an empirical analysis to investigate how the same GNN architecture could be used for power systems of various sizes. We assume that the local properties of the graphs in these systems are similar, leading to local neighbourhoods with similar structures which can be represented using the same embedding space size and the same number of GNN layers. * To evaluate the sample efficiency of the GNN model, we run multiple training experiments on different sizes of training sets. Additionally, we assess the scalability of the model by training it on various power system sizes and evaluating its accuracy, training convergence properties, inference time, and memory requirements. * As a side contribution, the proposed GNN model is tested in scenarios with high measurement variances, using which we simulate phasor misalignments due to communication delays, and the results are compared with linear WLS solutions of SE. ## II Linear State Estimation with PMUs The SE algorithm has a goal of estimating the values of the state variables \(\mathbf{x}\), so that they are consistent with measurements, as well as the power system model defined by its topology and parameters. The power system's topology is represented by a graph \(\mathcal{G}=(\mathcal{N},\mathcal{E})\), where \(\mathcal{N}=1,\ldots,n\) is the set of buses and \(\mathcal{E}\subseteq\mathcal{N}\times\mathcal{N}\) is the set of branches. PMUs measure complex bus voltages and complex branch currents, in the form of magnitude and phase angle [16, Sec. 5.6]. PMUs placed at a bus measure the bus voltage phasor and current phasors along all branches incident to the bus [17]. 
The state variables are given as \(\mathbf{x}\) in rectangular coordinates, and therefore consist of real and imaginary components of bus voltages. The PMU measurements are transformed from polar to rectangular coordinate system, since then the SE problem can be formulated using a system of linear equations [15]. The solution to this sparse and noisy system can be found by solving the linear WLS problem: \[\left(\mathbf{H}^{T}\boldsymbol{\Sigma}^{-1}\mathbf{H}\right)\mathbf{x}= \mathbf{H}^{T}\boldsymbol{\Sigma}^{-1}\mathbf{z}, \tag{1}\] where the Jacobian matrix \(\mathbf{H}\in\mathbb{R}^{m\times 2n}\) is defined according to the partial first-order derivatives of the measurement functions, and \(m\) is the total number of linear equations. The observation error covariance matrix is \(\boldsymbol{\Sigma}\in\mathbb{R}^{m\times m}\), while the vector \(\mathbf{z}\in\mathbb{R}^{m}\) contains measurement values in rectangular coordinate system. The aim of the WLS-based SE is to minimize the sum of residuals between the measurements and the corresponding values that are calculated using the measurement functions [1]. This approach has the disadvantage of requiring a transformation of measurement errors (magnitude and angle errors) from polar to rectangular coordinates, making them correlated, resulting in a non-diagonal covariance matrix \(\boldsymbol{\Sigma}\) and increased computational effort. To simplify the calculation, the non-diagonal elements of \(\boldsymbol{\Sigma}\) are often ignored, which can impact the accuracy of the SE [17]. We can use the classical theory of propagation of uncertainty to compute variances in rectangular coordinates from variances in polar coordinates [18]. The solution to (1) obtained by ignoring the non-diagonal elements of the covariance matrix \(\boldsymbol{\Sigma}\) to avoid its computationally demanding inversion is referred to as the _approximative WLS SE solution_. In the rest of the paper, we will explore whether using a GNN model trained with measurement values, variances, and covariances labelled with the exact solutions of (1) leads to greater accuracy compared to the approximative WLS SE, which ignores covariances. The GNN model, once trained, scales linearly with respect to the number of power system buses, allowing for lower computation time compared to both the approximate and exact solvers of (1). ## III Methods In this section, we introduce spatial GNNs on a high-level and describe how can they be applied to the linear SE problem. ### _Spatial Graph Neural Networks_ Spatial GNNs are a type of machine learning models that process graph-structured data by iteratively applying message passing to local subsets of the graph. The goal of GNNs is to transform the inputs from each node and its connections into a higher-dimensional space, creating a \(s\)-dimensional vector \(\mathbf{h}\in\mathbb{R}^{s}\) for each node. GNNs contain \(K\) layers, with each layer representing a single iteration \(k\) of the message passing process. Each GNN layer includes trainable functions, which are implemented as neural networks, such as a message function, an aggregation function, and an update function, as shown in Fig. 1. The message function calculates the message \(\mathbf{m}_{i,j}\in\mathbb{R}^{u}\) between two node embeddings, the aggregation function combines the incoming messages in a specific way, resulting in an aggregated message \(\mathbf{m_{j}}\in\mathbb{R}^{u}\), and the update function calculates the update to each node's embedding. 
The message passing process is repeated a fixed number of times, with the final node embeddings passed through additional neural network layers to generate predictions. GNNs are trained by optimizing their parameters using a variant of gradient descent, with the loss function being a measure of the distance between the ground-truth values and the predictions. ### _State Estimation using Graph Neural Networks_ The proposed GNN model is designed to be applied over a graph with a SE factor graph topology [19], which consists of factor and variable nodes with edges between them. The variable nodes are used to create a \(s\)-dimensional embedding for the real and imaginary parts of the bus voltages, which are used to generate state variable predictions. The factor nodes serve as inputs for measurement values, variances, and covariances. Factor nodes do not generate predictions, but they participate in the GNN message passing process to send input data to their neighbouring variable nodes. To improve the model's representation of a node's neighbourhood structure, we use binary index encoding as input features for variable nodes. This encoding allows the GNN to better capture relationships between nodes and reduces the number of input neurons and trainable parameters, as well as training and inference time, compared to the one-hot encoding used in [15]. The GNN model can be applied to various types and quantities of measurements on both power system buses and branches, and the addition or removal of measurements can be simulated by adding or removing factor nodes. In contrast, applying a GNN to the bus-branch power system model would require assigning a single input vector to each bus, which can cause problems such as having to fill elements with zeros when not all measurements are available and making the output sensitive to the order of measurements in the input vector. Connecting the variable nodes in the \(2\)-hop neighbourhood of the factor graph topology significantly improves the model's prediction quality in unobservable scenarios [15]. This is because the graph remains connected even when simulating the removal of factor nodes (e.g., measurement loss), which allows messages to be propagated in the entire \(K\)-hop neighbourhood of the variable node. This allows for the physical connection between power system buses to be preserved when a factor node corresponding to a branch current measurement is removed. The proposed GNN for a heterogeneous graph has two types of layers: one for factor nodes and one for variable nodes. These layers, denoted as \(\mathrm{Layer^{f}}\) and \(\mathrm{Layer^{v}}\), have their own sets of trainable parameters, which allow them to learn their message, aggregation, and update functions separately. Different sets of trainable parameters are used for variable-to-variable and factor-to-variable node messages. Both GNN layers use two-layer feed-forward neural networks as message functions, single layer neural networks as update functions, and the attention mechanism [7] in the aggregation function. Then, a two-layer neural network \(\mathrm{Pred}\) is applied to the final node embeddings \(\mathbf{h}^{K}\) of variable nodes only, to create state variable predictions. The loss function is the mean-squared error (MSE) between the predictions and the ground-truth values, calculated using variable nodes only. All trainable parameters are updated via gradient descent and backpropagation over a mini-batch of graphs. 
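A simplified PyTorch sketch of this architecture is given below. It is not the exact implementation: the variable-to-variable edges of the augmented factor graph and the separate parameter sets per message type are omitted for brevity, the attention score is reduced to a single linear layer, and the edge lists and input features are hypothetical. It only illustrates the alternation of factor-node and variable-node layers, the MLP message and attention-weighted aggregation, and the prediction head applied to variable-node embeddings (trained with an MSE loss against the ground-truth state variables).

```python
import torch
from torch import nn

class MPLayer(nn.Module):
    """One message-passing iteration: two-layer MLP message, attention-weighted aggregation,
    single-layer update, as described for Layer^f and Layer^v above."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.att = nn.Linear(2 * dim, 1)      # simplified attention score for a (sender, receiver) pair
        self.upd = nn.Linear(2 * dim, dim)

    def forward(self, h_send, h_recv, edges):
        # edges: list of (sender_idx, receiver_idx) pairs from the sender set to the receiver set
        h_new = h_recv.clone()
        for j in range(h_recv.shape[0]):
            senders = [s for s, r in edges if r == j]
            if not senders:
                continue
            pairs = torch.cat([h_send[senders], h_recv[j].expand(len(senders), -1)], dim=1)
            m = self.msg(pairs)                              # messages m_{i,j}
            a = torch.softmax(self.att(pairs), dim=0)        # attention coefficients
            agg = (a * m).sum(dim=0)                         # aggregated message m_j
            h_new[j] = self.upd(torch.cat([h_recv[j], agg]))
        return h_new

class GNNSE(nn.Module):
    """K message-passing iterations over the augmented factor graph; predictions for variable nodes only."""
    def __init__(self, f_in: int, v_in: int, dim: int = 64, K: int = 4):
        super().__init__()
        self.embed_f, self.embed_v = nn.Linear(f_in, dim), nn.Linear(v_in, dim)
        self.f_layers = nn.ModuleList([MPLayer(dim) for _ in range(K)])   # update factor nodes
        self.v_layers = nn.ModuleList([MPLayer(dim) for _ in range(K)])   # update variable nodes
        self.pred = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, x_f, x_v, f2v_edges, v2f_edges):
        h_f, h_v = self.embed_f(x_f), self.embed_v(x_v)
        for lf, lv in zip(self.f_layers, self.v_layers):
            h_f = lf(h_v, h_f, v2f_edges)     # variable-to-factor messages
            h_v = lv(h_f, h_v, f2v_edges)     # factor-to-variable messages
        return self.pred(h_v)                 # one state-variable estimate per variable node
```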
The high-level computational graph of the GNN architecture specialized for heterogeneous augmented factor graphs is depicted in Figure 2. The proposed model uses an inference process that requires measurements from the \(K\)-hop neighbourhood of each node, allowing for computational and geographical distribution. Additionally, since the node degree in the SE factor graph is limited, the computational complexity for the inference process is constant. As a result, the overall GNN-based SE has a linear computational complexity, making it efficient and scalable for large networks. ## IV Numerical Results In this section, we conduct numerical experiments to investigate the scalability and sample efficiency of the proposed GNN approach. By varying the power system and training set sizes, we are able to assess the model's memory requirements, prediction speed, and accuracy and compare them to those of traditional SE approaches. Fig. 1: A GNN layer, which represents a single message passing iteration, includes multiple trainable functions, depicted as yellow rectangles. The number of first-order neighbours of the node \(j\) is denoted as \(n_{j}\). Fig. 2: Proposed GNN architecture for heterogeneous augmented factor graphs. Variable nodes are represented by circles and factor nodes are represented by squares. The high-level computational graph begins with the loss function for a variable node, and the layers that aggregate into different types of nodes have distinct trainable parameters. We use the IEEE 30-bus system, the IEEE 118-bus system, the IEEE 300-bus system, and the ACTIVSg 2000-bus system [20], with measurements placed so that measurement redundancy is maximal. For the purpose of sample efficiency analysis, we create training sets containing 10, 100, 1000, and 10000 samples for each of the mentioned power systems. Furthermore, we use validation and test sets comprising 100 samples. These datasets are generated by solving the power flow problem using randomly generated bus power injections and adding Gaussian noise to obtain the measurement values. All the data samples were labelled using the traditional SE solver. An instance of the GNN model is trained on each of these datasets. In contrast to our previous work, we use higher variance values of \(5\times 10^{-1}\) to examine the performance of the GNN algorithm under conditions where input measurement phasors are unsynchronized due to communication delays [21]. While this is usually simulated by using variance values that increase over time, as an extreme scenario we fix the measurement variances to a high value. In all the experiments, the node embedding size is set to \(64\), and the learning rate is \(4\times 10^{-4}\). The minibatch size is \(32\), and the number of GNN layers is \(4\). We use the ReLU activation function and a gradient clipping value of \(5\times 10^{-1}\). The optimizer is Adam, and we use mean batch normalization. ### _Properties of Power System Augmented Factor Graphs_ For all four test power systems, we create augmented factor graphs using the methodology described in Section III-B. Fig. 3 illustrates how the properties of the augmented factor graphs, such as average node degree, average path length, average clustering coefficient, along with the system's maximal measurement redundancy, vary across different test power systems. The average path length is a property that characterizes the global graph structure, and it tends to increase as the size of the system grows. 
However, as a design property of high-voltage networks, the other graph properties such as the average node degree, average clustering coefficient, as well as maximal measurement redundancy do not exhibit a clear trend of change with respect to the size of the power system. This suggests that the structures of local, \(K\)-hop neighbourhoods within the graph are similar across different power systems, and that they contain a similar factor-to-variable node ratio. Consequently, it is reasonable to use the same GNN architecture (most importantly, the number of GNN layers and the node embedding size) for all test power systems, regardless of their size. In this way, the proposed model achieves scalability, as it applies the same set of operations to the local, \(K\)-hop neighbourhoods of augmented factor graphs of varying sizes without having to adapt to each individual case. ### _Training Convergence Analysis_ First, we analyse the training process for the IEEE 30-bus system with four different sizes of the training set. As mentioned in III-B, the training loss is a measure of the error between the predictions and the ground-truth values for data samples used in the training process. The validation loss, on the other hand, is a measure of the error between the predictions and the ground-truth values on a separate validation set. In this analysis, we used a validation set of 100 samples. The training losses for all the training processes converged smoothly, so we do not plot them for the sake of clarity. Figure 4 shows the validation losses for 150 epochs of training on four different training sets. For smaller training sets, the validation loss decreases initially but then begins to increase, which is a sign of overfitting. In these cases, a common practice in machine learning is to select the model with the lowest validation loss value. As will be shown in IV-C, the separate test set results for models created using small training sets are still satisfactory. As the number of samples in the training set increases, the training process becomes more stable. This is because the model has more data to learn from and is therefore less prone to overfitting. Next, in Table I, we present the training results for the other power systems and training sets of various sizes. The numbers in the table represent the number of epochs after which either the validation loss stopped changing or began to increase. Fig. 4: Validation losses for trainings on four different training set sizes. Fig. 3: Properties of augmented factor graphs along with the system's measurement redundancy for different test power systems, labelled with their corresponding number of buses. Similarly to the experiments on the IEEE 30-bus system, the trainings on smaller training sets exhibited overfitting, while others converged smoothly. For the former, the number in the table indicates the epoch at which the validation loss reached its minimum and stopped improving. For the latter, the number in the table represents the epoch when there were five consecutive validation loss changes less than \(10^{-5}\). Increasing the size of the training set generally results in a lower number of epochs until the validation loss reaches its minimum. However, the epochs until the validation loss reaches its minimum vary significantly between the different power systems. This could be due to differences in the complexity of the systems or the quality of the data used for training.
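The sketch below combines the hyperparameters quoted in Section IV (Adam, learning rate \(4\times 10^{-4}\), mini-batches of 32, MSE loss, gradient clipping at \(5\times 10^{-1}\), up to 150 epochs) with the two stopping rules used for Table I. It is an illustrative stand-in rather than the code behind the reported results: the model and data are random placeholders, batch normalization is omitted, and the choice of norm-based clipping and the early-stopping bookkeeping are our assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))   # stand-in for the 4-layer GNN
optimizer = torch.optim.Adam(model.parameters(), lr=4e-4)
mse = nn.MSELoss()

# stand-ins for labelled training/validation sets (inputs -> state variables)
x_tr, y_tr = torch.randn(1000, 8), torch.randn(1000, 2)
x_va, y_va = torch.randn(100, 8), torch.randn(100, 2)

val_losses, small_changes = [], 0
for epoch in range(150):
    perm = torch.randperm(x_tr.size(0))
    for start in range(0, x_tr.size(0), 32):                  # mini-batch size 32
        idx = perm[start:start + 32]
        loss = mse(model(x_tr[idx]), y_tr[idx])
        optimizer.zero_grad()
        loss.backward()
        nn.utils.clip_grad_norm_(model.parameters(), 5e-1)    # gradient clipping value 0.5
        optimizer.step()
    with torch.no_grad():
        val_losses.append(mse(model(x_va), y_va).item())
    # stopping rule for smoothly converging runs: five consecutive changes below 1e-5
    if epoch > 0 and abs(val_losses[-1] - val_losses[-2]) < 1e-5:
        small_changes += 1
        if small_changes >= 5:
            break
    else:
        small_changes = 0

# stopping rule for overfitting runs: keep the epoch with the minimal validation loss
best_epoch = min(range(len(val_losses)), key=val_losses.__getitem__) + 1
print(f"stopped after {len(val_losses)} epochs, best validation loss at epoch {best_epoch}")
```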
### _Accuracy Assessment_ Fig. 5 reports the mean squared errors (MSEs) between the predictions and the ground-truth values on 100-sample test sets for all trained models and the approximate WLS SE. These results indicate that even the GNN models trained on small datasets outperform the approximate WLS SE, except for the models trained on the IEEE 30-bus system with 10 and 100 samples. These results suggest that the quality of the GNN model's predictions and the generalization capabilities improve as the amount of training data increases, and the models with the best results (highlighted in bold) have significantly smaller MSEs compared to the approximate WLS SE. While we use randomly generated training sets in this analysis, using carefully selected training samples based on historical load consumption data could potentially lead to even better results with small datasets. ### _Inference Time and Memory Requirements_ The plot in Fig. 6 shows the ratio of execution times between WLS SE and GNN SE inference as a function of the number of buses in the system. These times are measured on a test set of 100 samples. As expected, the difference in computational complexity between GNN, with its linear complexity, and WLS, with more than quadratic complexity, becomes apparent as the number of buses increases. From the results, it can be observed that GNN significantly outperforms WLS in terms of inference time on larger power systems. The number of trainable parameters in the GNN model remains relatively constant as the number of power system buses increases. The number of input neurons for variable node binary index encoding does grow logarithmically with the number of variable nodes. However, this increase is relatively small compared to the total number of GNN parameters. This indicates that the GNN approach is scalable and efficient, as the model's complexity does not significantly increase with the size of the power system being analysed. Fig. 5: Test set results for various power systems and training set sizes. \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Power system** & IEEE 118 & IEEE 300 & ACTIVSg 2000 \\ \hline **10 samples** & \(61\) & \(400\) & \(166\) \\ \hline **100 samples** & \(38\) & \(84\) & \(200\) \\ \hline **1000 samples** & \(24\) & \(82\) & \(49\) \\ \hline **10000 samples** & \(12\) & \(30\) & \(15\) \\ \hline \end{tabular} \end{table} TABLE I: Epoch until validation loss minimum for various power systems and training set sizes. ## V Conclusions In this study, we focused on thoroughly testing a GNN-based state estimation algorithm in scenarios with large variances, and examining its scalability and sample efficiency. The results showed that the proposed approach provides good results for large power systems, with lower prediction errors compared to the approximate SE. The GNN model used in this approach is also fast and maintains constant memory usage, regardless of the size of the power system. Additionally, the GNN was found to be an effective approximation method for WLS SE even with a relatively small number of training samples, particularly for larger power systems, indicating its sample efficiency. Given these characteristics, the approach is worthy of further consideration for real-world applications.
2309.15235
Asymptotics of Bounded Lecture-Hall Tableaux
We study the asymptotics of bounded lecture hall tableaux. Limit shapes form when the bounds of the lecture hall tableaux go to infinity linearly in the lengths of the partitions describing the large-scale shapes of these tableaux. We prove Conjecture 6.1 in \cite{SKN21}, stating that the slopes of the rescaled height functions in the scaling limit satisfy a complex Burgers equation. We also show that the fluctuations of the unrescaled height functions converge to the Gaussian free field. The proof is based on new construction and analysis of Schur generating functions for the lecture hall tableaux, whose corresponding particle configurations do not form a Gelfand-Tsetlin scheme; and the corresponding dimer models are not doubly periodic.
David Keating, Zhongyang Li, Istvan Prause
2023-09-26T20:01:11Z
http://arxiv.org/abs/2309.15235v1
# Asymptotics of bounded lecture-hall tableaux ###### Abstract. We study the asymptotics of bounded lecture hall tableaux. Limit shapes form when the bounds of the lecture hall tableaux go to infinity linearly in the lengths of the partitions describing the large-scale shapes of these tableaux. We prove Conjecture 6.1 in [8], stating that the slopes of the rescaled height functions in the scaling limit satisfy a complex Burgers equation. We also show that the fluctuations of the unrescaled height functions converge to the Gaussian free field. The proof is based on new construction and analysis of Schur generating functions for the lecture hall tableaux, whose corresponding particle configurations do not form a Gelfand-Tsetlin scheme; and the corresponding dimer models are not doubly periodic. ## 1. Introduction Lecture hall tableaux were introduced in [10] as fillings of Young tableaux satisfying certain conditions, which generalize both lecture hall partitions ([2, 3]) and anti-lecture hall compositions ([11]), and also contain reverse semistandard Young tableaux as a limit case. Lecture hall partitions and anti-lecture hall compositions have attracted considerable interest among combinatorists in the last two decades; see the recent survey [22] and references therein. We now define the lecture hall tableaux. Recall that a partition \(\lambda=(\lambda_{1},\ldots,\lambda_{k})\) is a sequence of nonnegative integers \(\lambda_{1}\geq\lambda_{2}\geq\ldots\geq\lambda_{k}\geq 0\). Each integer \(\lambda_{i}\) is called a part of \(\lambda\). The length \(l(\lambda)\) of \(\lambda\) is the number of parts. A partition \(\lambda=(\lambda_{1},\ldots,\lambda_{k})\) can be identified with its Young diagram, which consists of unit squares (cells) with integer coordinates \((i,j)\) satisfying \(1\leq i\leq k\) and \(1\leq j\leq\lambda_{i}\). For two partitions \(\lambda\) and \(\mu\) we write \(\mu\subset\lambda\) to mean that the Young diagram of \(\mu\) is contained in that of \(\lambda\) as a set. In this case, a skew shape \(\lambda/\mu\) is defined to be the set-theoretic difference \(\lambda/\mu\) of their Young diagrams. We denote by \(|\lambda/\mu|\) the number of cells in \(\lambda/\mu\). A partition \(\lambda\) is also considered as a skew shape by \(\lambda/\emptyset\); where \(\emptyset\) represents the empty partition. A tableau of shape \(\lambda/\mu\) is a filling of the cells in \(\lambda/\mu\) with nonnegative integers. In other words, a tableau is a map \(T:\lambda/\mu\to\mathbb{N}\), where \(\mathbb{N}\) is the set of nonnegative integers. **Definition 1.1**.: _An \(n\)-lecture hall tableau of shape \(\lambda/\mu\) is a tableau \(L\) of shape \(\lambda/\mu\) satisfying the following conditions_ \[\frac{L(i,j)}{n+c(i,j)}\geq\frac{L(i,j+1)}{n+c(i,j+1)},\qquad\frac{L(i,j)}{n+c (i,j)}>\frac{L(i+1,j)}{n+c(i+1,j)}.\] _where \(c(i,j)=j-i\) is the content of the cell \((i,j)\). The set of \(n\)-lecture hall tableaux is denoted by \(LHT_{n}(\lambda/\mu)\). For \(L\in LHT_{n}(\lambda/\mu)\), let \(\lfloor L\rfloor\) be the tableaux of shape \(\lambda/\mu\) whose \((i,j)\)th entry is \(\lfloor\frac{L(i,j)}{(n-i+j)}\rfloor\)._ See the left graph of Figure 1.1 for an example of a lecture hall tableaux. In this paper we study lecture hall tableaux with an extra condition as follows: \[L(i,j)<t(n+j-i)\] We say these tableaux are bounded by \(t>0\). These tableaux are called bounded lecture hall tableaux and are enumerated in [9]. 
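As a quick illustration of Definition 1.1 and of the bound, the following sketch checks the lecture hall inequalities cell by cell (cell \((i,j)\) has denominator \(n+j-i\)) and brute-forces the number of bounded tableaux for a tiny shape. This is only a naive enumeration for intuition, not the enumeration method of [9]; the function names and the small test case are ours.

```python
from fractions import Fraction
from itertools import product

def is_lht(L, n):
    """L maps cells (i, j) (1-indexed) of the shape to nonnegative integers."""
    for (i, j), v in L.items():
        r = Fraction(v, n + j - i)
        if (i, j + 1) in L and r < Fraction(L[(i, j + 1)], n + j + 1 - i):
            return False            # along rows the ratios must weakly decrease
        if (i + 1, j) in L and r <= Fraction(L[(i + 1, j)], n + j - i - 1):
            return False            # down columns the ratios must strictly decrease
    return True

def count_bounded_lht(shape, n, t):
    cells = [(i, j) for i, lam in enumerate(shape, 1) for j in range(1, lam + 1)]
    ranges = [range(t * (n + j - i)) for (i, j) in cells]   # bound: L(i, j) < t (n + j - i)
    return sum(is_lht(dict(zip(cells, vals)), n) for vals in product(*ranges))

# the tableau of Figure 1.1 (shape (2, 2), n = 2), and the number of 2-LHTs of that shape bounded by t = 3
example = {(1, 1): 5, (1, 2): 5, (2, 1): 2, (2, 2): 3}
print(is_lht(example, n=2), all(v < 3 * (2 + j - i) for (i, j), v in example.items()))
print(count_bounded_lht((2, 2), n=2, t=3))
```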
The main aim of this paper is to study the asymptotics of bounded \(n\)-lecture hall tableaux as \(n\to\infty\). We shall first recall a bijection between lecture hall tableaux and non-intersecting path configurations in [9], and then investigate the asymptotics (limit shape and height fluctuations) of the corresponding non-intersecting path configurations. We first define the graph on which the non-intersecting path configurations correspond to the lecture hall tableaux. **Definition 1.2**.: 1. _Given a positive integer_ \(t\)_, the lecture hall graph is a graph_ \(\mathcal{G}_{t}=(V_{t},E_{t})\)_. This graph can be described through an embedding in the plane with vertex set_ \(V_{t}\) _given by_ * \(\left(i,\frac{j}{i+1}\right)\) _for_ \(i\geq 0\) _and_ \(0\leq j<t(i+1)\)_._ _and the directed edges given by_ * _from_ \(\left(i,k+\frac{r}{i+1}\right)\) _to_ \(\left(i+1,k+\frac{r}{i+2}\right)\) _for_ \(i\geq 0\)_,_ \(0\leq r\leq i\) _and_ \(0\leq k<t\)__ * _from_ \(\left(i,k+\frac{r+1}{i+1}\right)\) _to_ \(\left(i,k+\frac{r}{i+1}\right)\) _for_ \(i\geq 0\) _and_ \(0\leq r\leq i\) _and_ \(0\leq k<t-1\) _or for_ \(i\geq 0\) _and_ \(0\leq r<i\) _and_ \(k=t-1\)_._ 2. _Given a positive integer_ \(t\) _and a partition_ \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{n})\) _with_ \(\lambda_{1}\geq\lambda_{2}\geq\ldots\geq\lambda_{n}\geq 0\)_, a non-intersecting path configuration is a system of_ \(n\) _paths on the graph_ \(\mathcal{G}_{t}\)_. For each integer_ \(i\) _satisfying_ \(1\leq i\leq n\)_, the_ \(i\)_th path starts at_ \(\left(n-i,t-\frac{1}{n-i+1}\right)\)_, ends at_ \((n-i+\lambda_{i},0)\) _and moves only downwards and rightwards. The paths are said to be not intersecting if they do not share a vertex._ See the middle graph of 1.1 for an example of \(\mathcal{G}_{3}\) and a configuration of non-intersecting paths on \(\mathcal{G}_{3}\). Given a positive integer \(t\) and a partition \(\lambda=(\lambda_{1},\ldots,\lambda_{n})\) with \(\lambda_{1}\geq\ldots\geq\lambda_{n}\geq 0\), the non-intersecting path system is a system of \(n\) paths on the graph \(\mathcal{G}_{t}\). The \(i\)th path starts at \(\left(n-i,t-\frac{1}{n-i+1}\right)\) and ends at \((\lambda_{i}+n-i,0)\). The paths are called non-intersection if they do not share a vertex. **Theorem 1.3**.: _([9])There is a bijection between the bounded lecture hall tableaux of shape \(\lambda\) and bounded by \(t\) and non-intersecting paths on \(\mathcal{G}_{t}\) starting at \(\left(n-i,t-\frac{1}{n-i+1}\right)\) and ending at \((n-i+\lambda_{i},0)\) for \(i=1,2,\ldots,n\)._ _More precisely, there are exactly \(|\lambda|\) non-vertical edges present in the non-intersecting path configuration in \(\mathcal{G}_{t}\) corresponding to a lecture-hall tableaux of shape \(\lambda\). These edges have left endpoints located at \(\left(n+j-i-1,\frac{L(i,j)}{n+j-i}\right)\). The non-intersecting path configuration corresponding to the lecture hall tableaux is the unique non-intersecting path configuration joining \(\left(n-i,t-\frac{1}{n-i+1}\right)\) and \((n-i+\lambda_{i},0)\) for \(i=1,2,\ldots,n\) obtained by adding only vertical edges to these present non-vertical edges._ One can see that for an \(n\)-lecture hall tableaux bounded by \(t\), \(t\) is also the height of the corresponding lecture hall graph \(\mathcal{G}_{t}\), and \(n\) is also the total number of paths in the corresponding non-intersecting path configuration on \(\mathcal{G}_{t}\). See Figure 1.1 for an example of such a correspondence. 
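To make the vertex and edge rules of Definition 1.2 concrete, the sketch below builds a finite window of the lecture hall graph \(\mathcal{G}_{t}\) with exact rational \(y\)-coordinates. The truncation parameter max_col and the data layout are our own choices for illustration; the full graph is infinite in the column index.

```python
from fractions import Fraction

def lecture_hall_graph(t, max_col):
    """Vertices and directed edges of G_t restricted to columns i = 0..max_col."""
    vertices, edges = [], []
    for i in range(max_col + 1):
        for j in range(t * (i + 1)):
            vertices.append((i, Fraction(j, i + 1)))
    for i in range(max_col + 1):
        for k in range(t):
            for r in range(i + 1):
                if i + 1 <= max_col:    # rightward edge (i, k + r/(i+1)) -> (i+1, k + r/(i+2))
                    edges.append(((i, k + Fraction(r, i + 1)), (i + 1, k + Fraction(r, i + 2))))
                if k < t - 1 or r < i:  # downward edge (i, k + (r+1)/(i+1)) -> (i, k + r/(i+1))
                    edges.append(((i, k + Fraction(r + 1, i + 1)), (i, k + Fraction(r, i + 1))))
    return vertices, edges

V, E = lecture_hall_graph(t=3, max_col=3)
print(len(V), len(E))   # this window contains 3 + 6 + 9 + 12 = 30 vertices
```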
We shall investigate the asymptotics of bounded lecture hall tableaux as \(n,t\to\infty\) by studying the asymptotics of the corresponding non-intersecting paths. These asymptotics were studied in [8] using the (not fully rigorous) tangent method; here we attack this problem by analyzing Schur polynomials. The tangent method gives the frozen boundary without the full limit shape; instead, Conjecture 6.1 was made in [8], indicating that the slopes of the rescaled height functions in the scaling limit satisfy the complex Burgers equation. The complex Burgers equation was proved to be the governing equation of height functions in the scaling limit for uniform lozenge tilings and for other doubly periodic dimer models [14]. This equation naturally arises through a variational problem; we refer to [1] for a detailed study of the variational problem. Here we note that for lecture hall tableaux no variational principle has been established, and although lecture hall tableaux naturally correspond to non-intersecting path configurations and dimer configurations on a hexagon-octagon lattice ([8]), the corresponding hexagon-octagon lattice in this case is not doubly periodic as in the setting of [14]; see the right graph of Figure 1.1. Figure 1.1. Tableau, non-intersecting paths, and dimers. The left graph represents a lecture hall tableaux \(L\) of shape \(\lambda=(2,2)\) with \(L(1,1)=5\), \(L(1,2)=5\), \(L(2,1)=2\), \(L(2,2)=3\) and \(n=2\). Then \(\frac{L(1,1)}{n+1-1}=\frac{5}{2}\); \(\frac{L(2,1)}{n+1-2}=2\); \(\frac{L(1,2)}{n+2-1}=\frac{5}{3}\); \(\frac{L(2,2)}{n+2-2}=\frac{3}{2}\). The lecture hall tableaux is bounded by \(t=3\). The middle graph represents the corresponding non-intersecting path configuration. The right graph represents a dimer configuration on a graph which is not doubly-periodic. The Schur generating function approach was applied to study the uniform dimer model on a hexagonal lattice in a trapezoid domain in [5, 6], and the uniform dimer model on a rectangular square grid in [7]. A generalized version of the Schur generating function was defined to study the non-uniform dimer model on rail-yard graphs in [4, 17, 16, 18, 20]. Schur processes are specializations of the Macdonald processes when \(q=t\), hence the asymptotics of Schur processes can also be obtained by investigating the more general Macdonald processes; see [21, 19]. All the existing Schur-generating functions seem to be defined in the setting of the Gelfand-Tsetlin scheme; however, the lecture hall tableaux are novel in the sense that on a skew shape they cannot be computed by skew Schur functions; and the corresponding particle configurations induced by the non-intersecting path configurations of the lecture hall tableaux do not satisfy the interlacing conditions required by the Gelfand-Tsetlin scheme; see Figure 2.1 for an example. By constructing a novel Schur generating function specifically for the lecture hall tableaux and analyzing its asymptotics, in this paper we obtain a full description of the limit shape, including the moment formulas for the counting measures and the complex Burgers equation, resolving Conjecture 6.1 in [8]. The Gaussian free field, as a high-dimensional-time analog of Brownian motion, was proved to be the rule of height fluctuations for dimer models on a large class of graphs ([13, 15]). In this paper we show that the unrescaled height fluctuations of the lecture hall tableaux converge to the Gaussian free field when \(t\) goes to infinity linearly as \(n\) goes to infinity.
The main results (with exact statements given in later sections after a number of precise definitions) and the organization of the paper are as follows. * In Section 2, we prove the moment formula for the limit counting when \(n\to\infty\), \(t\to\infty\) and \(\frac{t}{n}\to\alpha\in(0,\infty)\); the main theorem in Section 2 is Theorem 2.6. * In Section 3, we prove that the slopes of the (rescaled) height function in the scaling limit satisfy the complex Burgers equation; confirming Conjecture 6.1 in [8]. The main theorem proved in Section 3 is Theorem 3.1. * In Section 4, we prove the convergence of the (unrescaled) height fluctuation to the Gaussian free field (GFF) \(n\to\infty\), \(t\to\infty\) and \(\frac{t}{n}\to\alpha\in(0,\infty)\); the main theorem in Section 4 is Theorem 4.5. * In Appendix A, we discuss some technical results. ## 2. Limit Shape when \(t\to\infty\) In this section, we prove the moment formula of the limit counting measure when \(n\to\infty\), \(t\to\infty\) and \(\frac{t}{n}\to\alpha\in(0,\infty)\) by defining and analyzing a novel Schur generating function for lecture hall tableaux, which correspond to neither Gelfand-Tsetlin schemes nor doubly-periodic dimer models. The main theorem in this Section is Theorem 2.6. Let \(\mathcal{M}\) be a random non-intersecting path configuration on \(\mathcal{G}=\mathcal{G}_{t}\). Let \(n\) be the total number of non-intersecting paths. Let \(\kappa\geq 0\) be an integer. Let \(\epsilon>0\) be sufficiently small such that the region \(y\in(\kappa,\kappa+\epsilon]\) does not intersect any non-vertical edge of \(\mathcal{G}\). We associate a partition \(\lambda^{(\kappa)}\) as follows: * \(\lambda_{1}^{(\kappa)}\) is the number of absent vertical edges of \(\mathcal{M}\) intersecting \(y=\kappa+\epsilon\) to the left of the rightmost vertical edges present in \(\mathcal{M}\). * for \(j\geq 2\), \(\lambda_{j}^{(\kappa)}\) is the number of absent vertical edges of \(\mathcal{M}\) intersecting \(y=\kappa+\epsilon\) to the left of the \(j\)th rightmost vertical edges present in \(\mathcal{M}\). See Figure 2.1 for an example. For \(\mathbf{x}=(x_{0},x_{1},\ldots)\) Let \(s_{\lambda/\mu}(\mathbf{x})\) be the skew Schur function. For any tableaux \(T\) of shape \(\lambda/\mu\), let \[\mathbf{x}^{T}=\prod_{(i,j)\in\lambda/\mu}x_{T(i,j)};\] we define \[L^{n}_{\lambda/\mu}(\mathbf{x})=\sum_{T\in LHT_{n}(\lambda/\mu)}\mathbf{x}^{[ T]}\] **Definition 2.1**.: _Let \(\rho_{\kappa}\) be the probability distribution of \(\lambda^{(\kappa)}\). Define the Schur generating function for \(\rho_{\kappa}\) as follows:_ \[\mathcal{S}_{\rho_{\kappa}}(|\mathbf{x}|,\mathbf{u})=\sum_{\lambda\in\mathbb{ Y}}\rho_{\kappa}(\lambda)\frac{s_{\lambda}(|\mathbf{x}|+\mathbf{u})}{s_{ \lambda}(|\mathbf{x}|)}\] _where_ \[\mathbf{u}=(u_{1},u_{2},\ldots,u_{n});\qquad\mathbf{x}=(x_{1},x_{2},\ldots,x_ {t});\qquad|\mathbf{x}|=x_{1}+x_{2}+\ldots+x_{t}\] _and_ \[s_{\lambda}(|\mathbf{x}|+\mathbf{u}):=s_{\lambda}(|\mathbf{x}|+ u_{1},|\mathbf{x}|+u_{2},\ldots,|\mathbf{x}|+u_{n}) \tag{2.2}\] \[s_{\lambda}(|\mathbf{x}|):=s_{\lambda}(|\mathbf{x}|,\ldots,| \mathbf{x}|) \tag{2.1}\] **Lemma 2.2**.: _Let \(\lambda\in\mathbb{Y}\) with \(l(\lambda)\leq n\). 
Let_ \[\mathbf{a}=(a_{1},\ldots,a_{t});\qquad\mathbf{b}=(b_{1},\ldots,b_{n}).\] _Then_ \[s_{\lambda}(|\mathbf{a}|+\mathbf{b})=\sum_{\nu\subset\lambda}L_{\lambda/\nu}( \mathbf{a})s_{\nu}(\mathbf{b});\] _where \(s_{\lambda}(|\mathbf{a}|+\mathbf{b})\) is defined as in (2.1), and_ \[s_{\nu}(\mathbf{b})=s_{\nu}(b_{1},b_{2},\ldots,b_{n})\] Proof.: The lemma follows from Theorem 1.6 of [9] by letting \(\nu=\emptyset\). **Lemma 2.3**.: _Assume the partition on the bottom boundary \(\lambda^{(0)}\) is fixed. Then for any \(\kappa>0\),_ \[\mathcal{S}_{\rho_{\kappa}}(|\mathbf{x}_{\kappa}|,\mathbf{u})= \frac{s_{\lambda^{(0)}}(|\mathbf{x}|+\mathbf{u})}{s_{\lambda^{(0)}}(|\mathbf{x }|)}\] _where_ \[\mathbf{x}_{\kappa}=(x_{\kappa},x_{\kappa+1},\ldots,x_{t})\] Proof.: Let \[\mathbf{x}\setminus\mathbf{x}_{\kappa}=(x_{1},x_{2},\ldots,x_{ \kappa-1}).\] By Definition 2.1, we have \[\mathcal{S}_{\rho_{\kappa}}(|\mathbf{x}_{\kappa}|,\mathbf{u}) = \sum_{\lambda\in\mathbb{Y}}\rho_{\kappa}(\lambda)\frac{s_{\lambda }(|\mathbf{x}_{\kappa}|+\mathbf{u})}{s_{\lambda}(|\mathbf{x}_{\kappa}|)}\] \[= \sum_{\lambda\in\mathbb{Y}}\frac{L_{\lambda}(\mathbf{x}_{\kappa} )L_{\lambda^{(0)}/\lambda}(\mathbf{x}\setminus\mathbf{x}_{\lceil\kappa\rceil} )}{L_{\lambda^{(0)}}(\mathbf{x})}\frac{s_{\lambda}(|\mathbf{x}_{\kappa}|+ \mathbf{u})}{s_{\lambda}(|\mathbf{x}_{\kappa}|)}\] \[= \sum_{\lambda\in\mathbb{Y}}\frac{L_{\lambda^{(0)}/\lambda}( \mathbf{x}\setminus\mathbf{x}_{\lceil\kappa\rceil})s_{\lambda}(|\mathbf{x}_{ \kappa}|+\mathbf{u})}{L_{\lambda^{(0)}}(\mathbf{x})}\] \[= \frac{s_{\lambda^{(0)}}(|\mathbf{x}|+\mathbf{u})}{s_{\lambda^{(0) }}(|\mathbf{x}|)},\] where the last identity follows from Lemma 2.2. Then the lemma follows. Define a differential operator on Schur generating functions \[\mathcal{D}_{j,\kappa}\mathcal{S}_{\rho_{\kappa}}(|\mathbf{x}_{ \kappa}|,\mathbf{u}):=\frac{1}{V(\mathbf{u})}\left[\sum_{i}\left((|\mathbf{x} _{\kappa}|+u_{i})\frac{\partial}{\partial u_{i}}\right)^{j}\right]V(\mathbf{u })\mathcal{S}_{\rho_{\kappa}}(|\mathbf{x}_{\kappa}|,\mathbf{u});\] where \[V(\mathbf{u})=\prod_{i<j}(u_{i}-u_{j}).\] We shall omit the index \(\kappa\) in the differential operator \(\mathcal{D}\) when there is no confusion. We introduce the following definition to study the distribution of random partitions. **Definition 2.4**.: _Let \(\lambda\) be a length-\(N\) partition. We define the counting measure \(m(\lambda)\) as a probability measure on \(\mathbb{R}\) as follows:_ \[m(\lambda)=\frac{1}{N}\sum_{i=1}^{N}\delta\left(\frac{\lambda_{i}+N-i}{N}\right).\] _If \(\lambda\) is random, then we can define the corresponding random counting measure._ Then **Lemma 2.5**.: _Let \(j,m\in\mathbb{N}\). 
Then_ \[\frac{1}{n^{(j+1)m}}\mathcal{D}_{j}^{m}\mathcal{S}_{\rho_{\kappa}}(|\mathbf{x}_{ \kappa}|,\mathbf{u})\bigg{|}_{\mathbf{u}=0}:=\mathbb{E}\left(\int_{\mathbb{R}}x ^{j}d\mathbf{m}_{\rho_{\kappa}}\right)^{m}.\] _where \(\mathbf{m}_{\rho_{\kappa}}\) is the random counting measure for the random partition \(\lambda^{(\kappa)}\)._ Proof.: By Definition 2.1, we obtain \[\mathcal{D}_{j}^{m}\mathcal{S}_{\rho_{\kappa}}(|\mathbf{x}_{\kappa}|,\mathbf{ u})\big{|}_{\mathbf{u}=\mathbf{0}}=\sum_{\lambda\in\mathbb{Y}}\rho_{\kappa}( \lambda)\frac{1}{V(\mathbf{u})}\left[\sum_{i=1}^{n}\left((|\mathbf{x}_{\kappa} |+u_{i})\frac{\partial}{\partial u_{i}}\right)^{j}\right]^{m}V(\mathbf{u}) \frac{s_{\lambda}(|\mathbf{x}_{\kappa}|+\mathbf{u})}{s_{\lambda}(|\mathbf{x}_ {\kappa}|)}\] Explicit computations show that \[\frac{1}{V(\mathbf{u})}\sum_{i=1}^{n}\left((|\mathbf{x}_{\kappa}|+u_{i})\frac{ \partial}{\partial u_{i}}\right)^{j}V(\mathbf{u})s_{\lambda}(|\mathbf{x}_{ \kappa}|+\mathbf{u})=\left[\sum_{i=1}^{n}(\lambda_{i}+n-i)^{j}\right]s_{ \lambda}(|\mathbf{x}_{\kappa}|+\mathbf{u}).\] Hence we have \[\mathcal{D}_{j}^{m}\mathcal{S}_{\rho_{\kappa}}(|\mathbf{x}_{\kappa}|,\mathbf{ u})\big{|}_{\mathbf{u}=\mathbf{0}}=\sum_{\lambda\in\mathbb{Y}_{n}}\rho_{k}( \lambda)\left[\sum_{i=1}^{n}(\lambda_{i}+n-i)^{j}\right]^{m}\] Then the lemma follows. **Theorem 2.6**.: _Let \(n\) be the the total number of non-interacting paths in \(\mathcal{G}\), and let \(t\) be the height of \(\mathcal{G}\). Let \(\rho_{\kappa}(n)\) be the probability distribution of \(\lambda^{(\kappa)}\). Assume_ \[y:=\lim_{n\to\infty}\frac{\kappa}{n};\qquad s:=\lim_{n\to\infty}\frac{| \mathbf{x}_{\kappa}|}{|\mathbf{x}|};\qquad\alpha:=\lim_{n\to\infty}\frac{t}{n}; \tag{2.3}\] _such that_ \[s\in(0,1);\qquad y\in(0,\alpha).\] _Then random measures \(\mathbf{m}_{\rho_{\kappa}(n)}\) converge as \(n\to\infty\) in probability, in the sense of moments to a deterministic measure \(\mathbf{m}_{y}\) on \(\mathbb{R}\), whose moments are given by_ \[\int_{\mathbb{R}}x^{j}\mathbf{m}_{y}(dx)=\frac{1}{2(j+1)\pi\mathbf{i}}\oint_{1 }\frac{dz}{z-1+s}\left((z-1+s)H_{\mathbf{m}_{0}}^{\prime}(z)+\frac{z-1+s}{z-1} \right)^{j+1}\] _Here \(\mathbf{m}_{0}\) is the limit counting measure for the boundary partition \(\lambda^{(0)}\in\mathbb{Y}_{n}\) as \(n\to\infty\), and \(H_{\mathbf{m}_{0}}\) is defined as in (A.2)._ Proof.: By Lemma 2.3, \[\lim_{n\to\infty}\frac{1}{n}\log\mathcal{S}_{\rho_{\kappa}(n)}(| \mathbf{x}_{\kappa}|,u_{1},\ldots,u_{j},0,\ldots,0)\] \[= \lim_{n\to\infty}\frac{1}{n}\log\frac{s_{\lambda^{(0)}}(|\mathbf{x }|+(u_{1},\ldots,u_{j},0,\ldots,0))}{s_{\lambda^{(0)}}(|\mathbf{x}|)}\] \[= \lim_{n\to\infty}\frac{1}{n}\log\frac{s_{\lambda^{(0)}}\left(1+ \frac{u_{1}}{|\mathbf{x}|},\ldots,1+\frac{u_{j}}{|\mathbf{x}|},1,\ldots,1 \right)}{s_{\lambda^{(0)}}(1,\ldots,1)}\] \[= H_{\mathbf{m}_{0}}\left(1+\frac{u_{1}}{|\mathbf{x}|}\right)+ \ldots+H_{\mathbf{m}_{0}}\left(1+\frac{u_{j}}{|\mathbf{x}|}\right).\] where the last identity follows from Lemma A.1. Then we can write \[\mathcal{S}_{\rho_{\kappa}(n)}\left(|\mathbf{x}_{\kappa}|,u_{1},\ldots,u_{n} \right)=e^{n\left[\sum_{i\in[n]}H_{\mathbf{m}_{0}}\left(1+\frac{u_{i}}{| \mathbf{x}_{\kappa}|}\right)\right]}T_{n}\left(u_{1},\ldots,u_{n}\right) \tag{2.4}\] such that \[\lim_{n\to\infty}\frac{1}{n}\log T_{n}\left(u_{1},\ldots,u_{j},0,\ldots,0 \right)=0. 
\tag{2.5}\] and \[T_{n}\left(0,\ldots,0\right)=1; \tag{2.6}\] and the convergence is uniform when each \(\frac{u_{i}}{|\mathbf{x}_{\kappa}|}\) is in a small complex neighborhood of \(0\) for \(i\in[j]\). Then by Lemma 2.5, \[\mathbb{E}\left(\int_{\mathbb{R}}x^{j}d\mathbf{m}_{\rho_{\kappa} (n)}\right)^{m}=\left.\frac{1}{n^{m(j+1)}}(\mathcal{D}_{j})^{m}\mathcal{S}_{ \rho_{\kappa}(n)}\left(|\mathbf{x}_{\kappa}|,u_{1},\ldots,u_{n}\right)\right|_ {\mathbf{u}=0}\] \[= \frac{1}{n^{m(j+1)}}\left.\left[T_{n}\left(u_{1},\ldots,u_{n} \right)(\mathcal{D}_{j})^{m}e^{n\left[\sum_{i\in[n]}H_{\mathbf{m}_{0}}\left(1 +\frac{u_{i}}{|\mathbf{x}|}\right)\right]}\right|_{(u_{1},\ldots,u_{n})=(0, \ldots,0)}+R\right]\] where \(R\) is the terms in \((\mathcal{D}_{j})^{m}\mathcal{S}_{\rho_{\kappa}(n)}\left(|\mathbf{x}_{\kappa}|,\mathbf{u}\right)|_{\mathbf{u}=0}\) obtained when the differential operator \((\mathcal{D}_{j})^{m}\) acts on \(T_{n}\left(\mathbf{u}\right)\) as well. From (2.5) we see that the leading term of \(\mathbb{E}\int_{\mathbb{R}}x^{j}d\mathbf{m}_{\rho_{\kappa}(n)}\) as \(n\to\infty\) is the same as that of \[\frac{1}{n^{m(j+1)}}\left.T_{n}\left(u_{1},\ldots,u_{n}\right)( \mathcal{D}_{j})^{m}e^{n\left[\sum_{i\in[n]}H_{\mathbf{m}_{0}}\left(1+\frac{u_ {j}}{|\mathbf{x}|}\right)\right]}\right|_{(u_{1},\ldots,u_{n})=(0,\ldots,0)}\] \[= \frac{1}{n^{m(j+1)}}\left.(\mathcal{D}_{j})^{m}e^{n\left[\sum_{i \in[n]}H_{\mathbf{m}_{0}}\left(1+\frac{u_{j}}{|\mathbf{x}|}\right)\right]} \right|_{(u_{1},\ldots,u_{n})=(0,\ldots,0)} \tag{2.7}\] where the last identity follows from (2.6). When \(m=1\), (2.7) can be computed as follows \[\frac{1}{n^{j+1}}\frac{1}{\prod_{i,j\in[n]:i<j}(u_{i}-u_{j})}\left.\sum_{r\in[ n]}\left((|\mathbf{x}_{\kappa}|+u_{r})\frac{\partial}{\partial u_{r}}\right)^{j} \left.\left[e^{n\left[\sum_{i\in[n]}H_{\mathbf{m}_{0}}\left(1+\frac{u_{j}}{| \mathbf{x}|}\right)\right]}\right]\prod_{i,j\in[n]:i<j}(u_{i}-u_{j})\right] \right|_{\mathbf{u}=0}\] whose leading term as \(n\to\infty\) is the same as that of \[\mathcal{M}_{j}:=\lim_{\frac{|\mathbf{x}|}{|\mathbf{x}|}\to 0}\sum_{r\in[n]}\sum_{g=0}^{j} [n]^{-g-1}\binom{j}{g}\frac{(|\mathbf{x}_{\kappa}|+u_{r})^{j}}{|\mathbf{x}|^{j-g }}\left[H^{\prime}_{\mathbf{m}_{0}}\left(1+\frac{u_{r}}{|\mathbf{x}|}\right) \right]^{j-g}\left(\sum_{j\in[n]\setminus\{r\}}\frac{1}{u_{r}-u_{j}}\right)^{g}.\] By (2.3), we obtain \[\mathcal{M}_{j}=\lim_{\frac{|\mathbf{x}|}{|\mathbf{x}|}\to 0}\sum_{r\in[n]}\sum_{g=0}^{j}\frac{1}{n}\binom{j}{g} \left(s+\frac{u_{r}}{|\mathbf{x}|}\right)^{j}\left[H^{\prime}_{\mathbf{m}_{0} }\left(1+\frac{u_{r}}{|\mathbf{x}|}\right)\right]^{j-g}\left(\frac{1}{n}\sum_ {j\in[n]\setminus\{r\}}\frac{1}{\left(\frac{u_{r}}{|\mathbf{x}|}+1\right)- \left(\frac{u_{j}}{|\mathbf{x}|}+1\right)}\right)^{g}.\] Let \[z_{i}:=\frac{u_{i}}{|\mathbf{x}|}+1\] By Lemma A.2, we obtain \[\mathcal{M}_{j} = \lim_{(z_{1},\ldots,z_{n})\to 1}\sum_{r\in[n]}\sum_{g=0}^{j}\frac{1}{n} \binom{j}{g}\left(z_{r}-1+s\right)^{j}\left[H^{\prime}_{\mathbf{m}_{0}}\left( z_{r}\right)\right]^{j-g}\left(\frac{1}{n}\sum_{j\in[n]\setminus\{r\}}\frac{1}{z_{ r}-z_{j}}\right)^{g}\] \[= \lim_{(z_{1},\ldots,z_{n})\to 1}\sum_{g=0}^{j}\frac{1}{g+1}\binom{j}{g }\frac{1}{g!}\left.\frac{\partial^{g}\left[\left(z-1+s\right)^{j}H^{\prime}_{ \mathbf{m}_{0}}\left(z\right)^{j-g}\right]}{\partial z^{g}}\right|_{z=1}\] Then the lemma follows from the Residue Theorem. 
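As a numerical sanity check of the moment formula, the sketch below evaluates the contour integral of Theorem 2.6 for the staircase boundary data of Example 2.10 below, for which \(H^{\prime}_{\mathbf{m}_{0}}(z)=\frac{pz^{p-1}}{z^{p}-1}-\frac{1}{z-1}\). At \(s=1\), i.e. at the bottom boundary, the measure \(\mathbf{m}_{y}\) should reduce to \(\mathbf{m}_{0}\), whose moments are \(p^{j}/(j+1)\). The discretized circular contour, the choice \(p=3\), and the function names are ours; this is only an illustrative check, not part of the proof.

```python
import numpy as np

def moment(j, p=3, s=1.0, radius=0.4, npts=4000):
    """Numerically evaluate the contour integral of Theorem 2.6 around z = 1."""
    theta = np.linspace(0.0, 2.0 * np.pi, npts, endpoint=False)
    z = 1.0 + radius * np.exp(1j * theta)
    dz = 1j * radius * np.exp(1j * theta) * (2.0 * np.pi / npts)
    hp = p * z ** (p - 1) / (z ** p - 1) - 1.0 / (z - 1)          # H'_{m_0}(z) from Example 2.10
    f = ((z - 1 + s) * hp + (z - 1 + s) / (z - 1)) ** (j + 1) / (z - 1 + s)
    return np.sum(f * dz) / (2.0 * np.pi * 1j * (j + 1))

for j in range(4):
    print(j, moment(j).real, 3 ** j / (j + 1))   # at s = 1 the two columns should agree
```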
**Definition 2.7**.: _Assume that as \(n\to\infty\), the rescaled graph \(\frac{1}{n}\mathcal{G}\) approximates a bounded simply-connected region \(\mathcal{R}\subset\mathbb{R}^{2}\). Let \(\mathcal{L}\) be the set of \((\chi,y)\) inside \(\mathcal{R}\) such that the density \(d\mathbf{m}_{y}(\frac{\chi}{1-y})\) is not equal to 0 or 1. Then \(\mathcal{L}\) is called the liquid region. Its boundary \(\partial\mathcal{L}\) is called the frozen boundary. Let_ \[\widetilde{\mathcal{L}}:=\left\{(\chi,s):(\chi,y)\in\mathcal{L}\right\}\] _where \(s,y\) are given as in Theorem 2.6._ **Definition 2.8**.: _Let \(\eta\) be a compactly supported measure on \(\mathbb{R}\). The Stieljes transform of \(\eta\) is defined by_ \[\mathrm{St}_{\eta}(w):=\int_{\mathbb{R}}\frac{\eta[ds]}{w-s}\] _for \(w\in\mathbb{C}\setminus\mathrm{supp}(\eta)\)._ **Theorem 2.9**.: _Let_ \[U_{y}(z):=(z-1+s)H^{\prime}_{\mathbf{m}_{0}}(z)+\frac{z-1+s}{z-1} \tag{2.8}\] _Assume the liquid region is nonempty, and assume that for any \(x\in\mathbb{R}\), the equation \(U_{y}(z)=x\) has at most one pair of complex conjugate roots. Then for any point \((\chi,y)\) lying on the frozen boundary, the equation \(U_{y}(z)=\chi\) has double roots._ Proof.: The density of the measure \(d\mathbf{m}_{y}(x)\) can be computed by the Stieljes transform \[\frac{d\mathbf{m}_{y}(x)}{dx}=-\lim_{\epsilon\to 0+}\frac{1}{\pi}\Im(\mathrm{St}_{\mathbf{m}_{y}}(x+\mathbf{i}\epsilon)) \tag{2.9}\] where \(\Im(\cdot)\) represents the imaginary part of a complex number and \(\mathrm{St}_{\mathbf{m}_{y}}\) is the Stieljes transform of the measure \(\mathbf{m}_{y}\). Then the theorem follows from arguments similar to those of Lemma 8.1 of [4]. **Example 2.10**.: _Assume the bottom boundary partition is given by_ \[\lambda^{(0)}(n):=((p-1)n,(p-1)(n-1),\ldots,p-1)\in\mathbb{Y}_{n}\] _where \(p,n\) are positive integers. We have_ \[\frac{d\mathbf{m}_{0}}{dx}=\frac{1}{p},\ \forall x\in(0,p).\] _Then the \(k\)th moment of \(\mathbf{m}_{0}\) can be computed as follows_ \[M_{k}(\mathbf{m}_{0})=\frac{p^{k}}{k+1},\] _and therefore_ \[S_{\mathbf{m}_{0}}(z)=-\frac{1}{p}\log(1-pz).\] _Hence we have_ \[S_{\mathbf{m}_{0}}^{(-1)}(u)=\frac{1-e^{-pu}}{p}\] _and_ \[H_{\mathbf{m}_{0}}^{\prime}(u)=\frac{pu^{p-1}}{u^{p}-1}-\frac{1}{u-1}.\] _Then_ \[U_{y}(z)=\frac{pz^{p-1}(z-1+s)}{z^{p}-1}.\] _Assume \(p=3\). Then for each \(\chi\in\mathbb{R}\) the equation \(U_{y}(z)=\chi\) has at most one pair of nonreal conjugate roots. The condition that \(U_{y}(z)=\chi\) has double roots gives_ \[\begin{cases}U_{y}(z)=\chi\\ U_{y}^{\prime}(z)=0\end{cases}\] _which gives the parametric equation for \((\chi,s)\) as follows._ \[\begin{cases}\chi=\frac{3z^{3}}{z^{3}+2}\\ s=\frac{z^{3}-3z+2}{z^{3}+2}\end{cases}\] 1. _When_ \(x_{1}=x_{2}=\ldots=x_{n}\) _and_ \(\alpha=1\)_, we have_ \(s=1-y\)_. The frozen boundary is given by the blue curve of Figure_ 2.2_._ 2. _When_ \(\alpha=1\) _and_ \(y=(1-s)^{2}\)_, the frozen boundary is given by the red curve of Figure_ 2.2_._ **Example 2.11**.: _Assume the bottom boundary partition is given by_ \[\lambda^{(0)}(n):=(n,\ldots,n,\frac{n}{2},\frac{n}{2}-1,\ldots,1)\in\mathbb{Y}_{n}\] _where \(n\) is a positive even integer.
We have_ \[\frac{d\mathbf{m}_{0}}{dx}=\begin{cases}\frac{1}{2}&\text{if }x\in(0,1)\,;\\ 1&\text{if }x\in\left(\frac{3}{2},2\right);\\ 0&\text{otherwise}.\end{cases}\] _Then the \(k\)th moment of \(\mathbf{m}_{0}\) can be computed as follows_ \[M_{k}(\mathbf{m}_{0})=\frac{1}{k+1}\left(2^{k+1}-\left(\frac{3}{2}\right)^{k+ 1}+\frac{1}{2}\right)\] _Hence we have_ \[S_{\mathbf{m}_{0}}(z) = \log\frac{1-\frac{3z}{2}}{(1-2z)\sqrt{1-z}}\] ## 3. Rescaled Height Function and Complex Burgers Equation In this section, we prove that the slopes of the (rescaled) height function in the scaling limit satisfy the complex Burgers equation; confirming Conjecture 6.1 in [8]. The main idea is to differentiate the moment formula obtained in Section 2 to obtain the slope of the limit (rescaled) height function, and then verify the complex Burgers equation. On the lecture hall graph \(\mathcal{G}\), define a random height function \(h\) associated to a random non-intersecting path configuration as follows. The height at the lower left corner is \(0\), and the height increases by \(1\) whenever crossing a path from the left to the right. Define the rescaled height function by \[h_{n}(\chi,y):=\frac{1}{n}h(n\chi,ny)\] Figure 2.2. Frozen boundary for the scaling limit of weighted non-interaction paths. The blue curve is for the uniform weight; the red curve is when the limit weight function \(s\) satisfies \(y=(1-s)^{2}\). Then by (2.9), we obtain \[\lim_{n\to\infty}\frac{dh_{n}(\chi,y)}{d\chi} = \frac{d\mathbf{m}_{y}(\chi)}{d\chi}=-\lim_{\epsilon\to 0+}\frac{1}{ \pi}\Im(\mathrm{St}_{\mathbf{m}_{y}}(\chi+\mathbf{i}\epsilon))\] Under the assumption of Theorem 2.9, following similar computations before Lemma 8.1 of [4], we obtain that when \((\chi,y)\) is in the liquid region, \[\lim_{n\to\infty}\frac{dh_{n}(\chi,y)}{d\chi}=\frac{1}{\pi}\mathrm{Arg}( \mathbf{z}_{+}(\chi,y)-1+s).\] where \(\mathbf{z}_{+}(\chi,y)\) is the unique root in the upper half plane of the equation \(U_{y}(z)=\chi\). Let \(\mathbf{h}\) be the limit of \(h_{n}\) as \(n\to\infty\). Assume \[\lim_{n\to\infty}\frac{\lambda_{1}^{(0)}+n-1}{n}=\beta\in[1,\infty).\] In this case the measure \(\mathbf{m}_{y}\) has compact support \([0,\beta]\subset\mathbb{R}\). 
Note that \[\int_{\mathbb{R}}x^{j}\mathbf{m}_{y}(dx)=\int_{0}^{\beta}x^{j}\mathbf{m}_{y}( dx)=\int_{0}^{\beta}x^{j}d\mathbf{h}=\int_{0}^{\beta}d\left(x^{j}\mathbf{h}(x,y) \right)-j\int_{0}^{\beta}\mathbf{h}(x,y)x^{j-1}dx.\] then \(\int_{0}^{\beta}d(x^{j}\mathbf{h}(x,y))\) is a finite constant independent of \(y\), then \[\frac{d\int_{0}^{\beta}d(x^{j}\mathbf{h}(x,y))}{dy}=0.\] Then by Theorem 2.6 we have \[\int_{0}^{\beta}\frac{\partial\mathbf{h}(x,y)}{\partial y}x^{j-1}dx = -\frac{1}{j}\frac{d}{dy}\int_{0}^{\beta}x^{j}\mathbf{m}_{y}(dx)=- \frac{1}{j}\frac{d}{dy}\int_{\mathbb{R}}x^{j}\mathbf{m}_{y}(dx)\] \[= -\frac{1}{2j(j+1)\pi\mathbf{i}}\frac{d}{dy}\oint_{1}\frac{dz}{z-1 +s}\left((z-1+s)H^{\prime}_{\mathbf{m}_{0}}(z)+\frac{z-1+s}{z-1}\right)^{j+1}\] We make a change of variables and let \(w=z-1+s\), we obtain \[\int_{0}^{\beta}\frac{\partial\mathbf{h}(x,y)}{\partial y}x^{j-1}dx = -\frac{1}{2j(j+1)\pi\mathbf{i}}\frac{ds}{dy}\oint_{s}\frac{dw}{w} \frac{\partial\left(wH^{\prime}_{\mathbf{m}_{0}}(w+1-s)+\frac{w}{w-s}\right)^ {j+1}}{\partial s}\] \[= \frac{1}{2j\pi\mathbf{i}}\frac{ds}{dy}\oint_{s}dw\left(wH^{\prime }_{\mathbf{m}_{0}}(w+1-s)+\frac{w}{w-s}\right)^{j}\frac{\partial\left(H^{ \prime}_{\mathbf{m}_{0}}(w+1-s)+\frac{1}{w-s}\right)}{\partial w}\] \[= \frac{1}{2j\pi\mathbf{i}}\frac{ds}{dy}\oint_{s}\left(wH^{\prime} _{\mathbf{m}_{0}}(w+1-s)+\frac{w}{w-s}\right)^{j}d\left(H^{\prime}_{\mathbf{m }_{0}}(w+1-s)+\frac{1}{w-s}\right)\] For each fixed \(y\), we can again consider \[d\xi_{y}(x):=\frac{\partial\mathbf{h}(x,y)}{\partial y}dx\] a measure on \(\mathbb{R}\). Note that this measure has compact support in \([0,\beta]\). The density of the measure \(\frac{\partial\mathbf{h}(x,y)}{\partial y}\) can be computed by the Stieljes transform of the measure; i.e. \[\frac{\partial\mathbf{h}(x,y)}{\partial y} = -\lim_{\epsilon\to 0+}\frac{1}{\pi}\Im\mathrm{St}_{\xi_{y}}(x+ \mathbf{i}\epsilon).\] Moreover, \[\mathrm{St}_{\xi_{y}}(x) = \sum_{j=1}^{\infty}x^{-j}\int_{\mathbb{R}}u^{j-1}d\xi_{y}(u)\] \[= \sum_{j=1}^{\infty}\frac{1}{2j\pi\mathbf{i}}\frac{ds}{dy}\oint_{s }\left(\frac{wH^{\prime}_{\mathbf{m}_{0}}(w+1-s)+\frac{w}{w-s}}{x}\right)^{j }d\left(H^{\prime}_{\mathbf{m}_{0}}(w+1-s)+\frac{1}{w-s}\right)\] \[= -\frac{1}{2\pi\mathbf{i}}\frac{ds}{dy}\oint_{s}\log\left(1- \frac{wH^{\prime}_{\mathbf{m}_{0}}(w+1-s)+\frac{w}{w-s}}{x}\right)d\left(H^{ \prime}_{\mathbf{m}_{0}}(w+1-s)+\frac{1}{w-s}\right)\] \[= -\frac{1}{2\pi\mathbf{i}}\frac{ds}{dy}\oint_{s}d\left[\left(H^{ \prime}_{\mathbf{m}_{0}}(w+1-s)+\frac{1}{w-s}\right)\log\left(1-\frac{wH^{ \prime}_{\mathbf{m}_{0}}(w+1-s)+\frac{w}{w-s}}{x}\right)\right]\] \[+\frac{1}{2\pi\mathbf{i}}\frac{ds}{dy}\oint_{s}\frac{H^{\prime}_{ \mathbf{m}_{0}}(w+1-s)+\frac{1}{w-s}}{1-\frac{wH^{\prime}_{\mathbf{m}_{0}}(w+1 -s)+\frac{w}{w-s}}{x}}\frac{d\left(1-\frac{wH^{\prime}_{\mathbf{m}_{0}}(w+1-s )+\frac{w}{w-s}}{x}\right)}{dw}dw\] By (A.3) we obtain \[H^{\prime}_{\mathbf{m}_{0}}(w+1-s)+\frac{1}{w-s}=\frac{1}{(w+1-s)S_{\mathbf{m }}^{(-1)}(\ln(w+1-s))}\] When \(|x|\) is sufficiently large, in the region of the complex plane enclosed by a small circle centered at \(s\), the number of zeros of the equation \(wH^{\prime}_{\mathbf{m}_{0}}(w+1-s)+\frac{w}{w-s}=x\) is equal to the number of poles of \(wH^{\prime}_{\mathbf{m}_{0}}(w+1-s)+\frac{w}{w-s}\). 
Therefore we have \[-\frac{1}{2\pi\mathbf{i}}\oint_{s}d\left[\left(H^{\prime}_{\mathbf{m}_{0}}(w+ 1-s)+\frac{1}{w-s}\right)\log\left(1-\frac{wH^{\prime}_{\mathbf{m}_{0}}(w+1-s )+\frac{w}{w-s}}{x}\right)\right]=0.\] In the uniformly weighted case we have \(s=1-y\), then \[\mathrm{St}_{\xi_{y}}(x)=-H^{\prime}_{\mathbf{m}_{0}}(\mathbf{z}_{+}(x,y))- \frac{1}{\mathbf{z}_{+}(x,y)-1}=-\frac{1}{\pi}\Im\frac{1}{\mathbf{z}_{+}(\chi,y)S_{\mathbf{m}_{0}}^{(-1)}(\ln\mathbf{z}_{+}(\chi,y))},\] where \(\mathbf{z}_{+}(x,y))\) is the unique root of the equation \(U_{y}(z)=x\) that converges to \(1\) as \(x\to\infty\). Hence we have the following theorem: **Theorem 3.1**.: _Assume \(\mathcal{G}\) is uniformly weighted such that \(s=1-y\). Suppose that the assumptions of Theorem 2.9 holds. Let_ \[u=\frac{1}{\mathbf{z}_{+}(\chi,y)S_{\mathbf{m}_{0}}^{(-1)}(\ln\mathbf{z}_{+}( \chi,y))}\] _Then_ \[\frac{\partial h}{\partial x}=\frac{1}{\pi}\left(2-\mathrm{Arg}(u)\right); \qquad\frac{\partial h}{\partial y}=\frac{1}{\pi}\Im u \tag{3.1}\] _where \(\mathrm{Arg}(\cdot)\) is the branch of the argument function taking values in \([0,2\pi)\). Moreover, \(u\) satisfies the complex Burgers equation_ \[u_{x}-uu_{y}=0. \tag{3.2}\] Proof.: The above arguments show that \[\nabla\mathbf{h}=\left(\frac{1}{\pi}\mathrm{Arg}(\mathbf{z}_{+}(\chi,y)-y), \frac{1}{\pi}\Im\frac{1}{\mathbf{z}_{+}(\chi,y)S_{\mathbf{m}_{0}}^{(-1)}(\ln \mathbf{z}_{+}(\chi,y))}\right)\] Moreover, the equation \(U_{y}(z)=x\) gives \[\frac{\mathbf{z}_{+}(\chi,y)-y}{\mathbf{z}_{+}(\chi,y)S_{\mathbf{m}_{0}}^{(-1 )}(\ln\mathbf{z}_{+}(\chi,y))}=x \tag{3.3}\] Since \(x\in\mathbb{R}\), we have \[\mathrm{Arg}\left(\mathbf{z}_{+}(\chi,y)-y\right)+\mathrm{Arg}\left(\frac{1}{ \mathbf{z}_{+}(\chi,y)S_{\mathbf{m}_{0}}^{(-1)}(\ln\mathbf{z}_{+}(\chi,y))} \right)=2\pi\] Then (3.1) follows. For simplicity, we shall use \(z\) to denote \(\mathbf{z}_{+}(\chi,y)\). Let \[\zeta:=\frac{1}{S_{\mathbf{m}_{0}}^{(-1)}(\ln z)};\] then \[z=e^{\mathrm{St}_{\mathbf{m}_{0}}(\zeta)}=e^{\int_{\mathbb{R}}\frac{\mathbf{ m}_{0}(dt)}{\zeta-t}};\qquad u=\frac{\zeta}{z}.\] Since \[u_{x}=\frac{-z_{x}\zeta+z\zeta_{x}}{z^{2}};\qquad u_{y}=\frac{-z_{y}\zeta+z \zeta_{y}}{z^{2}};\] and \[z_{x}=-z\zeta_{x}\int\frac{\mathbf{m}_{0}(dt)}{(\zeta-t)^{2}};\qquad z_{y}=-z \zeta_{y}\int\frac{\mathbf{m}_{0}(dt)}{(\zeta-t)^{2}};\] we obtain \[\frac{u_{x}}{u_{y}}=\frac{\zeta_{x}}{\zeta_{y}}=\frac{z_{x}}{z_{y}} \tag{3.4}\] Moreover by (3.3) we have \[z-y=xu.\] By taking derivatives we infer that \[z_{x}=xu_{x}+u;\qquad z_{y}-1=xu_{y}\] Hence \[\frac{u_{x}}{u_{y}}=\frac{z_{x}-u}{z_{y}-1} \tag{3.5}\] (3.4) and (3.5) implies that \(\frac{u_{x}}{u_{y}}=u\); and the complex Burgers equation (3.2) follows. ## 4. Height fluctuations and the Gaussian free field (GFF) when \(t\to\infty\) In Section 4, we prove the convergence of the (unrescaled) height fluctuation to the Gaussian free field (GFF) \(n\to\infty\), \(t\to\infty\) and \(\frac{t}{n}\to\alpha\in(0,\infty)\). The main idea is as follows. (1) Using the Schur difference operator defined in Section 2 to act on the Schur generating functions, we obtain the moments of the height functions; we then verify the Wick's formula in the scaling limit to obtain a Gaussian distribution. (2) We find an explicit diffeomorphism from the liquid region to the upper half plane, such that the image of the limit of the fluctuations of the (unrescaled) height function under the diffeomorphism has the correlation kernel given by the Green's function in the upper half plane. 
Combining with (1), We then conclude that the the limit of the fluctuations of the (unrescaled) height function is the pull-back of the GFF in the upper half plane under this mapping. The main theorem in Section 4 is Theorem 4.5. Let \(C_{0}^{\infty}\) be the space of smooth real-valued functions with compact support in the upper half plane \(\mathbb{H}\). The **Gaussian free field** (GFF) \(\Xi\) on \(\mathbb{H}\) with the zero boundary condition is a collection of Gaussian random variables \(\{\xi_{f}\}_{f\in C_{0}^{\infty}}\) indexed by functions in \(C_{0}^{\infty}\), such that the covariance of two Gaussian random variables \(\xi_{f_{1}}\), \(\xi_{f_{2}}\) is given by \[\operatorname{Cov}(\xi_{f_{1}},\xi_{f_{2}})=\int_{\mathbb{H}}\int_{\mathbb{H} }f_{1}(z)f_{2}(w)G_{\mathbb{H}}(z,w)dzd\overline{z}dwd\overline{w},\] where \[G_{\mathbb{H}}(z,w):=-\frac{1}{2\pi}\ln\left|\frac{z-w}{z-\overline{w}}\right|,\qquad z,w\in\mathbb{H}\] is the Green's function of the Dirichlet Laplacian operator on \(\mathbb{H}\). The Gaussian free field \(\Xi\) can also be considered as a random distribution on \(C_{0}^{\infty}\) of \(\mathbb{H}\), such that for any \(f\in C_{0}^{\infty}\), we have \[\Xi(f)=\int_{\mathbb{H}}f(z)\Xi(z)dz:=\xi_{f};\] where \(\Xi(z)\) is the generalized function corresponding to the linear functional \(\Xi\). Note that GFF is conformally invariant; in the sense that for any simply-connected domain \(\mathcal{D}\subsetneq\mathbb{C}\), and let \(\phi:\mathcal{D}\to\mathbb{H}\) be a conformal map from \(\mathcal{D}\) to \(\mathbb{H}\). Then the GFF on \(\mathcal{D}\) is \[\Xi_{\mathcal{D}}(z):=\Xi(\phi(z))\] See [23] for more about GFF. Let \(f\) be a function of \(r\) variables. Define the symmetrization of \(f\) as follows \[\operatorname{Sym}_{x_{1},\ldots,x_{r}}f(x_{1},\ldots,x_{r}):=\frac{1}{r!} \sum_{\sigma\in S_{r}}f(x_{\sigma(1)},\ldots,x_{\sigma(r)}); \tag{4.1}\] **Theorem 4.1**.: _Under the assumptions of Theorem 2.6, for \(\alpha_{r}\in[0,\alpha]\), let_ \[p_{k}^{(\lfloor\alpha_{r}n\rfloor)}=\sum_{i=1}^{n}\left(\lambda_{i}^{(t- \lfloor\alpha_{r}n\rfloor)}+n-i\right)^{k};\ k=1,2,\ldots\] _Then the collection of random variables_ \[\left\{n^{-k}\left[p_{k}^{(\lfloor\alpha_{r}n\rfloor)}-\mathbb{E}p_{k}^{(\lfloor \alpha_{r}n\rfloor)}\right]\right\}_{r=1,2,\ldots,g,k\geq 1}\] _converges to a Gaussian vector, in the sense of moments, with 0 mean and covariance_ \[\lim_{n\to\infty}\frac{\mathrm{cov}\left[p_{k_{1}}^{\lfloor \alpha_{r_{1}}n\rfloor},p_{k_{2}}^{\lfloor\alpha_{r_{2}}n\rfloor}\right]}{n^{ k_{1}+k_{2}}}=\frac{1}{(2\pi\mathbf{i})^{2}}\oint_{|z-1|=\epsilon}\oint_{|w-1|= \epsilon}\left((z-1+s_{r_{1}})H_{\mathbf{m}_{0}}^{\prime}(z)+\frac{z-1+s_{r_{1 }}}{z-1}\right)^{k_{1}}\] \[\times\left((w-1+s_{r_{2}})H_{\mathbf{m}_{0}}^{\prime}(w)+\frac{ w-1+s_{r_{2}}}{w-1}\right)^{k_{2}}Q(z,w)dzdw\] _where for \(i\in\{1,2\}\),_ \[s_{r_{i}}=\lim_{n\to\infty}\frac{|\mathbf{x}_{t-\lfloor\alpha_{r_{i}}n\rfloor }|}{|\mathbf{x}|}\] _and_ \[Q(z,w)=\frac{1}{(z-w)^{2}}+\frac{\partial^{2}}{\partial z\partial w}\mathrm{ log}\left(1-\frac{(z-1)(w-1)}{z-w}\left[zH_{\mathbf{m}_{0}}^{\prime}(z)-wH_{ \mathbf{m}_{0}}^{\prime}(w)\right]\right)\] Proof.: For \(r=1,\ldots,g\), let \(\kappa_{r}=t-\alpha_{r}n\). Note that \[\mathbb{E}\left[p_{k_{1}}^{(\lfloor\alpha_{1}n\rfloor)}\right]^{m_{1}}\cdot \ldots\cdot\left(p_{k_{g}}^{\lfloor\alpha_{g}n\rfloor}\right)^{m_{g}}=\left. 
\mathcal{D}_{k_{1},\kappa_{1}}^{m_{1}}\cdot\mathcal{D}_{k_{g},\kappa_{g}}^{m_ {g}}\mathcal{S}_{\rho_{\kappa_{g}}}(|\mathbf{x}_{\kappa_{g}}|,\mathbf{u}) \right|_{\mathbf{u}=0}\] Then \[\frac{\mathrm{cov}\left[p_{k_{1}}^{\lfloor\alpha_{t_{1}}n\rfloor },p_{k_{2}}^{\lfloor\alpha_{t_{2}}n\rfloor}\right]}{n^{k_{1}+k_{2}}}\] \[=\,\frac{1}{n^{k_{1}+k_{2}}}\left[\mathcal{D}_{k_{1},\kappa_{1}} \mathcal{D}_{k_{2},\kappa_{2}}\mathcal{S}_{\rho_{\kappa_{2}}}(|\mathbf{x}_{ \kappa_{2}}|,\mathbf{u})-\mathcal{D}_{k_{1},\kappa_{1}}\mathcal{S}_{\rho_{ \kappa_{1}}}(|\mathbf{x}_{\kappa_{1}}|,\mathbf{u})\mathcal{D}_{k_{2},\kappa_{2 }}\mathcal{S}_{\rho_{\kappa_{2}}}(|\mathbf{x}_{\kappa_{2}}|,\mathbf{u})\right] \right|_{\mathbf{u}=0}\] We have \[\mathbb{E}\left[p_{k_{1}}^{\lfloor\alpha_{t_{1}}n\rfloor},p_{k_{2 }}^{\lfloor\alpha_{t_{2}}n\rfloor}\right]\] \[=\,\frac{1}{V(\mathbf{u})}\left[\sum_{i}\left((|\mathbf{x}_{ \kappa_{1}}|+u_{i})\frac{\partial}{\partial u_{i}}\right)^{k_{1}}\right] \left[\sum_{j}\left((|\mathbf{x}_{\kappa_{2}}|+u_{j})\frac{\partial}{\partial u _{j}}\right)^{k_{2}}\right]V(\mathbf{u})\frac{s_{\lambda^{(0)}}\left(1^{n}+ \frac{\mathbf{u}}{|\mathbf{x}|}\right)}{s_{\lambda^{(0)}}(1^{n})}\right|_{ \mathbf{u}=0}\] Write \(S_{n}:=\frac{s_{\lambda^{(0)}}\left(1^{n}+\frac{\mathbf{u}}{|\mathbf{x}|} \right)}{s_{\lambda^{(0)}}(1^{n})}\), \(u_{j,\kappa}=\frac{u_{j}}{|\mathbf{x}_{\kappa}|}\), \(\mathbf{u}_{\kappa}=\frac{\mathbf{u}}{|\mathbf{x}_{\kappa}|}\) and define \[\mathcal{F}_{\kappa,k}: =\,\frac{1}{V(\mathbf{u})S_{n}}\left[\sum_{j}\left((|\mathbf{x}_ {\kappa}|+u_{j})\frac{\partial}{\partial u_{j}}\right)^{k}\right]V(\mathbf{u})S _{n}\Bigg{|}_{\mathbf{u}=0}\] \[=\,\frac{1}{V(\mathbf{u})S_{n}}\left[\sum_{j}\left((1+u_{j, \kappa})\frac{\partial}{\partial u_{j,\kappa}}\right)^{k}\right]V(\mathbf{u})S _{n}\Bigg{|}_{\mathbf{u}=0}\] and use the fact that \(\frac{\partial S_{n}}{\partial u_{i}}=\exp(\log S_{n})\frac{\partial\log S_{n}}{ \partial u_{i}}\), we obtain \[\mathbb{E}\left[p_{k_{1}}^{|\alpha_{t_{1}}n|},p_{k_{2}}^{|\alpha_{t_{2}}n|} \right]=\left.\frac{1}{V(\mathbf{u})S_{n}}\left[\sum_{i}\left((1+u_{i,\kappa_{ 1}})\frac{\partial}{\partial u_{i,\kappa_{1}}}\right)^{k_{1}}\right]V(\mathbf{ u})S_{n}\mathcal{F}_{k_{2},\kappa_{2}}\right|_{\mathbf{u}=0}\] which is the sum of terms of the form \[\text{Sym}_{a_{1},\ldots,a_{r+1}}\frac{c_{0}(1+u_{a_{1},\kappa_{1}})^{k_{1}-m _{0}}\frac{\partial^{m_{1}}\mathcal{F}_{k_{2},\kappa_{2}}}{\partial u_{a_{1}, \kappa_{1}}}\left[\frac{\partial^{m_{2}}\log S_{n}}{\partial u_{a_{1},\kappa_{ 1}}}\right]^{d_{2}}\cdots\left[\frac{\partial^{m_{q}}\log S_{n}}{\partial u_{ a_{1},\kappa_{1}}}\right]^{d_{q}}}{(u_{a_{1},\kappa_{1}}-u_{a_{2},\kappa_{1}}) \cdots(u_{a_{1},\kappa_{1}}-x_{a_{r+1},\kappa-1})}, \tag{4.2}\] where \(r,m_{0},\ldots,m_{q},d_{2},\ldots,d_{q}\) are nonnegative integers satisfying \[m_{2}<m_{3}<\ldots<m_{t};\text{ and } \tag{4.4}\] \[m_{0}+m_{1}+m_{2}d_{2}+\ldots+m_{q}d_{q}+r=k_{1}; \tag{4.3}\] and \(\text{Sym}_{a_{1},\ldots,a_{r+1}}\) is defined as in (4.1). From the terms when \(m_{1}=0\) we obtain \(\mathcal{F}_{k_{1},\kappa_{1}}\mathcal{F}_{k_{2},\kappa_{2}}\), which is exactly \(\mathbb{E}\left[p_{k_{1}}^{|\alpha_{t_{1}}n|}\right]\mathbb{E}\left[p_{k_{2}}^ {|\alpha_{t_{2}}n|}\right]\) when \(\mathbf{u}=0\). We now consider the terms with \(m_{1}\geq 1\). 
From Lemma A.1 we obtain that when \(n\) is large, the asymptotic degree of \(n\) in \(\frac{\partial^{j}\log S_{n}}{\partial u_{i,\kappa}}\) is \(1\); and the asymptotic degree of \(n\) in \(\frac{\partial^{j}\mathcal{F}_{k,\kappa}}{\partial u_{i,\kappa}}\) is \(k\). Hence when \(n\) is large, the asymptotic degree of \(n\) in the sum of (4.2) over all \(a_{1},\ldots,a_{r+1}\in\{1,2,\ldots,n\}\) is at most \[k_{2}+d_{2}+\ldots+d_{q}+(r+1). \tag{4.5}\] Given \(s_{1}\geq 1\) and (4.4), (4.5) is maximal when \(m_{0}=0\), \(m_{1}=1\), \(m_{2}=1\) and \(d_{2}=k_{1}-1-r\). Hence the leading terms of \(\text{cov}\left[p_{k_{1}}^{|\alpha_{t_{1}}n|},p_{k_{2}}^{|\alpha_{t_{2}}n|}\right]\) are the same as that of \[k_{1}\sum_{r=0}^{k_{1}-1}\sum_{\{a_{1},\ldots,a_{r+1}\}\subset[n]}\binom{k_{1} -1}{r}(r+1)!\text{Sym}_{a_{1},\ldots,a_{r+1}}\frac{(1+u_{a_{1},\kappa_{1}})^{ k_{1}}\frac{\partial\mathcal{F}_{k_{2},\kappa_{2}}}{\partial u_{a_{1},\kappa_{1}}} \left[\frac{\partial\log S_{n}}{\partial u_{a_{1},\kappa_{1}}}\right]^{k_{1}- 1-r}}{(u_{a_{1},\kappa_{1}}-u_{a_{2},\kappa_{1}})\cdots(u_{a_{1},\kappa_{1}}- u_{a_{r+1},\kappa_{1}})}, \tag{4.6}\] as \(n\to\infty\), both of which are asymptotically \(n^{k_{1}+k_{2}}\). Expanding \(\mathcal{F}_{k_{2},\kappa_{2}}\) and analyzing leading terms as \(n\to\infty\) in a similar way, we obtain that the leading terms of (4.6) are the same as that of \[k_{1}\sum_{r=0}^{k_{1}-1}\sum_{\{a_{1},\ldots,a_{r+1}\}\subset[n]}\binom{k_{ 1}-1}{r}(r+1)!\text{Sym}_{a_{1},\ldots,a_{r+1}}\frac{(1+u_{a_{1},\kappa_{1}})^ {k_{1}}\left[\frac{\partial\log S_{n}}{\partial u_{a_{1},\kappa_{1}}}\right]^ {k_{1}-1-r}}{(u_{a_{1},\kappa_{1}}-u_{a_{2},\kappa_{1}})\cdots(u_{a_{1},\kappa_ {1}}-u_{a_{r+1},\kappa_{1}})}\] \[\times\frac{\partial}{\partial u_{a_{1},\kappa_{1}}}\left[\sum_{q =0}^{k_{2}}\sum_{\{b_{1},\ldots,b_{q+1}\}\subset[n]}\binom{k_{2}}{q}(q+1)! \text{Sym}_{b_{1},\ldots,b_{q}}\frac{(1+u_{b_{1},\kappa_{2}})^{k_{2}}\left[ \frac{\partial\log S_{n}}{\partial u_{b_{1},\kappa_{2}}}\right]^{k_{2}-q}}{(u_{ b_{1},\kappa_{2}}-u_{b_{2},\kappa_{2}})\cdots(u_{b_{1},\kappa_{2}}-u_{b_{q+1}, \kappa_{2}})}\right] \tag{4.7}\] Note that for those terms corresponding to \(|\{a_{1},\ldots,a_{r+1}\}\cap\{b_{1},\ldots,b_{q+1}\}|\geq 2\), the asymptotic degree of \(n\) is at most \(k_{1}+k_{2}-1\) as \(n\to\infty\). Hence in the limit we only need to consider those terms with \(|\{a_{1},\ldots,a_{r+1}\}\cap\{b_{1},\ldots,b_{q+1}\}|\leq 1\). The following cases might occur: 1. \(\{a_{1},\ldots,a_{r+1}\}\cap\{b_{1},\ldots,b_{q+1}\}=\emptyset\). 
Then \[\frac{\partial}{\partial u_{a_{1},\kappa_{1}}}\left[\sum_{q=0}^{k_{2}}\sum_{\{b_{1},\ldots,b_{q+1}\}\subset[n]}{k_{2}\choose q}(q+1)!\mathrm{Sym}_{b_{1},\ldots,b_{q}}\frac{(1+u_{b_{1},\kappa_{2}})^{k_{2}}\left[\frac{\partial\log S_{n}}{\partial u_{b_{1},\kappa_{2}}}\right]^{k_{2}-q}}{(u_{b_{1},\kappa_{2}}-u_{b_{2},\kappa_{2}})\cdots(u_{b_{1},\kappa_{2}}-u_{b_{q+1},\kappa_{2}})}\right]\] \[=\sum_{q=0}^{k_{2}}\sum_{\{b_{1},\ldots,b_{q+1}\}\subset[n]}{k_{2}\choose q}(q+1)!\mathrm{Sym}_{b_{1},\ldots,b_{q}}\frac{(1+u_{b_{1},\kappa_{2}})^{k_{2}}\left[\frac{\partial\log S_{n}}{\partial u_{b_{1},\kappa_{2}}}\right]^{k_{2}-q-1}\frac{\partial^{2}\log S_{n}}{\partial u_{a_{1},\kappa_{1}}\partial u_{b_{1},\kappa_{2}}}}{(u_{b_{1},\kappa_{2}}-u_{b_{2},\kappa_{2}})\cdots(u_{b_{1},\kappa_{2}}-u_{b_{q+1},\kappa_{2}})}\] By Lemma A.1, we obtain \[\frac{\partial\log S_{n}}{\partial u_{i,\kappa_{j}}}\approx ns_{r_{j}}H^{\prime}_{\mathbf{m}_{0}}\left(1+\frac{u_{i}}{|\mathbf{x}|}\right)\] where \(j\in\{1,2\}.\) By Lemma A.3, we obtain \[\frac{\partial^{2}\log S_{n}}{\partial u_{a_{1},\kappa_{1}}\partial u_{b_{1},\kappa_{2}}}=\frac{\partial^{2}}{\partial u_{a_{1},\kappa_{1}}\partial u_{b_{1},\kappa_{2}}}\log\left(1-\frac{u_{a_{1}}u_{b_{1}}}{|\mathbf{x}|^{2}}\frac{\left(1+\frac{u_{a_{1}}}{|\mathbf{x}|}\right)H^{\prime}_{\mathbf{m}_{0}}\left(1+\frac{u_{a_{1}}}{|\mathbf{x}|}\right)-\left(1+\frac{u_{b_{1}}}{|\mathbf{x}|}\right)H^{\prime}_{\mathbf{m}_{0}}\left(1+\frac{u_{b_{1}}}{|\mathbf{x}|}\right)}{\frac{u_{a_{1}}-u_{b_{1}}}{|\mathbf{x}|}}\right)\] Then by the residue theorem and Lemma A.2, the contribution of these terms to \(\lim_{n\to\infty}(4.7)/n^{k_{1}+k_{2}}\) can be computed explicitly.
Hence the leading terms of \(\mathrm{cov}\left[p_{k_{1}}^{\lfloor\alpha_{t_{1}}n\rfloor},p_{k_{2}}^{\lfloor\alpha_{t_{2}}n\rfloor}\right]\) are the same as that of \[k_{1}\sum_{r=0}^{k_{1}-1}\sum_{\{a_{1},\ldots,a_{r+1}\}\subset[n]}\binom{k_{1}-1}{r}(r+1)!\mathrm{Sym}_{a_{1},\ldots,a_{r+1}}\frac{(1+u_{a_{1},\kappa_{1}})^{k_{1}}n^{k_{1}-1-r}s_{\kappa_{1}}^{k_{1}-1-r}[H_{\mathbf{m}_{0}}^{\prime}\left(1+\frac{u_{a_{1}}}{|\mathbf{x}|}\right)]^{k_{1}-1-r}}{(u_{a_{1},\kappa_{1}}-u_{a_{2},\kappa_{1}})\cdots(u_{a_{1},\kappa_{1}}-u_{a_{r+1},\kappa_{1}})}\] \[\times\frac{\partial}{\partial u_{a_{1},\kappa_{1}}}\left[\sum_{q=0}^{k_{2}}\sum_{\{b_{1},\ldots,b_{q+1}\}\subset[n]}\binom{k_{2}}{q}(q+1)!\mathrm{Sym}_{b_{1},\ldots,b_{q}}\frac{(1+u_{b_{1},\kappa_{2}})^{k_{2}}n^{k_{2}-q}s_{\kappa_{2}}^{k_{2}-q}[H_{\mathbf{m}_{0}}^{\prime}\left(1+\frac{u_{b_{1}}}{|\mathbf{x}|}\right)]^{k_{2}-q}}{(u_{b_{1},\kappa_{2}}-u_{b_{2},\kappa_{2}})\cdots(u_{b_{1},\kappa_{2}}-u_{b_{q+1},\kappa_{2}})}\right]\] Then the theorem follows from explicit computations of the moments and using Wick's probability theorem to obtain the Gaussian fluctuation. **Assumption 4.2**.: _Let \(l\) be a fixed positive integer. Assume there exists_ \[0=a_{1}<b_{1}<a_{2}<b_{2}<\ldots<a_{l}<b_{l}\] _such that \(\mathbf{m}_{0}\), the limit counting measure corresponding to the partition on the bottom boundary, satisfies_ \[\frac{d\mathbf{m}_{0}}{dx}=\begin{cases}1&\mathrm{if}\ a_{i}<x<b_{i}\\ 0&\mathrm{if}\ b_{j}<x<a_{j+1}\end{cases}\] _where \(i\in[l]\) and \(j\in[l-1]\)._ **Lemma 4.3**.: _Suppose Assumption 4.2 holds. For any \(\chi\in\mathbb{R}\), the equation \(U_{y}(z)=\chi\) has at most one pair of complex conjugate roots, where \(U_{y}(z)\) is defined by (2.8)._ Proof.: Under Assumption 4.2, by (A.3) we have \[H_{\mathbf{m}_{0}}^{\prime}(z)=-\frac{1}{z-1}+\frac{\zeta}{z}\] where \[z=\prod_{i=1}^{l}\frac{(\zeta-a_{i})}{(\zeta-b_{i})} \tag{4.8}\] Hence by (2.8) we obtain \[U_{y}(z)=\frac{(z-1+s)\zeta}{z} \tag{4.9}\] It suffices to show that the equation \(U_{y}(z)=\chi\) has at most one pair of complex conjugate roots in \(\zeta\).
By \(U_{y}(z)=\chi\) and (4.9) we obtain \[\zeta=\frac{\chi z}{z-1+s} \tag{4.10}\] Plugging (4.10) into (4.8) we obtain \[z=C\prod_{i=1}^{l}\frac{z+\frac{a_{i}(1-s)}{\chi-a_{i}}}{z+\frac{b_{i}(1-s)}{\chi-b_{i}}}:=G(z)\] Between any two consecutive poles of \(G(z)\), either \(G(z)\) increases from \(-\infty\) to \(\infty\) or \(G(z)\) decreases from \(\infty\) to \(-\infty\); hence the equation \(z=G(z)\) has at least one real root between any two consecutive poles of \(G(z)\). Then we infer that \(z=G(z)\) has at least \(l-1\) real roots; since the degree of the equation is at most \(l+1\), we deduce that it has at most one pair of complex conjugate roots. **Lemma 4.4**.: _Suppose that Assumption 4.2 holds. Let \(y,s\) be given as in Theorem 2.6. Let_ \[V_{y}(\zeta):=\zeta\left[1-(1-s)e^{-\mathrm{St}_{\mathbf{m}_{0}}(\zeta)}\right] \tag{4.11}\] _Then for any \(\chi\in\mathbb{R}\), the equation \(V_{y}(\zeta)=\chi\) has zero or one root in the upper half plane \(\mathbb{H}\). The map \(\mathcal{T}_{\mathcal{L}}:\tilde{\mathcal{L}}\to\mathbb{H}\) which maps each point in \(\tilde{\mathcal{L}}\) to the unique root of (4.11) is a diffeomorphism from \(\tilde{\mathcal{L}}\) to \(\mathbb{H}\) with inverse map given by_ \[\chi_{\mathcal{L}}(\zeta) :=\zeta\left[1+\frac{e^{-\mathrm{St}_{\mathbf{m}_{0}}(\zeta)}(\zeta-\overline{\zeta})}{\zeta e^{-\mathrm{St}_{\mathbf{m}_{0}}(\overline{\zeta})}-\zeta e^{-\mathrm{St}_{\mathbf{m}_{0}}(\zeta)}}\right] \tag{4.13}\] \[s_{\mathcal{L}}(\zeta) :=1+\frac{\zeta-\overline{\zeta}}{\overline{\zeta}e^{-\mathrm{St}_{\mathbf{m}_{0}}(\overline{\zeta})}-\zeta e^{-\mathrm{St}_{\mathbf{m}_{0}}(\overline{\zeta})}}, \tag{4.12}\] _where \(\mathrm{St}_{\mathbf{m}_{0}}\) represents the Stieltjes transform of the measure \(\mathbf{m}_{0}\)._ Proof.: The proof is an adaptation of the proof of Theorem 2.1 in [12]. We shall show that 1. \(\tilde{\mathcal{L}}\) is nonempty; 2. \(\tilde{\mathcal{L}}\) is open; 3. \(T_{\mathcal{L}}:\tilde{\mathcal{L}}\to\mathbb{H}\) is continuous; 4. \(T_{\mathcal{L}}:\tilde{\mathcal{L}}\to\mathbb{H}\) is injective; 5. \(T_{\mathcal{L}}:\tilde{\mathcal{L}}\to T_{\mathcal{L}}(\tilde{\mathcal{L}})\) has an inverse for each \(\zeta\in T_{\mathcal{L}}(\tilde{\mathcal{L}})\); 6. \(T_{\mathcal{L}}(\tilde{\mathcal{L}})=\mathbb{H}\). **Proof of (1).** Note that \[\mathrm{St}_{\mathbf{m}_{0}}(\zeta)=\frac{1}{\zeta}+\frac{M_{1}}{\zeta^{2}}+\frac{M_{2}}{\zeta^{3}}+\cdots \tag{4.14}\] where \[M_{1}=\int_{\mathbb{R}}x\mathbf{m}_{0}[dx];\qquad M_{2}=\int_{\mathbb{R}}x^{2}\mathbf{m}_{0}[dx].\] Plugging (4.14) into (4.12) and (4.13) we obtain \[\chi_{\mathcal{L}}(\zeta) =1+O\left(\frac{1}{|\zeta|}\right)\] \[s_{\mathcal{L}}(\zeta) =\frac{M_{1}-\frac{1}{2}}{|\zeta|^{2}}+O\left(\frac{1}{|\zeta|^{2}}\right)\] When \(\mathbf{m}_{0}\) satisfies Assumption 4.2 we have \[M_{1}=\int_{0}^{b_{l}}x\mathbf{m}_{0}[dx]>\int_{0}^{1}xdx=\frac{1}{2}\] It follows that when \(|\zeta|\) is large, \((\chi_{\mathcal{L}}(\zeta),s_{\mathcal{L}}(\zeta))\in(0,b)\times(0,1)\); hence \(\widetilde{\mathcal{L}}\) is nonempty. **Proofs of (2) and (3).** Let \((\chi_{1},s_{1})\in\widetilde{\mathcal{L}}\) and \(\zeta_{1}=T_{\mathcal{L}}(\chi_{1},s_{1})\). We shall prove that \((\chi_{2},s_{2})\in\widetilde{\mathcal{L}}\) whenever \(|\chi_{1}-\chi_{2}|+|s_{1}-s_{2}|\) is small. Let \(y_{1},y_{2}\) correspond to \(s_{1},s_{2}\) as in Theorem 2.6. 
Let \(\epsilon>0\) such that \(B(\zeta_{1},\epsilon)\in\mathbb{H}\); then \(\inf_{\zeta\in\partial B(\zeta_{1},\epsilon)}|V_{y_{1}}(\zeta)-\chi_{1}|>0\) when \(\epsilon\) is small. Fix \(\epsilon\), by continuity we have \(|(V_{y_{1}}(\zeta)-\chi_{1})-(V_{y_{2}}(\zeta)-\chi_{2})|<\epsilon\) when \(|s_{1}-s_{2}|+|\chi_{1}-\chi_{2}|\) is sufficiently small for \(\zeta\in\partial B_{\zeta_{1},\epsilon}\). Hence when \(|s_{1}-s_{2}|+|\chi_{1}-\chi_{2}|\) is sufficiently small, \[|(V_{y_{1}}(\zeta)-\chi_{1})-(V_{y_{2}}(\zeta)-\chi_{2})|<|V_{y_{1}}(\zeta)- \chi_{1}|,\qquad\forall\zeta\in\partial B_{\zeta_{1},\epsilon}\] By Rouches theorem \(V_{y_{2}}(\zeta)-\chi_{2}\) has a unique root in \(B_{\zeta_{1},\epsilon}\), hence \((\chi_{2},s_{2})\in\tilde{\mathcal{L}}\). (4) (5) follows from (4.12) (4.13). **Proof of (6).** From (1)-(5) we see that \(T_{\mathcal{L}}(\widetilde{\mathcal{L}})\) is open and homeomorphic to \(\widetilde{\mathcal{L}}\). Assume there exists \(\zeta\in\partial T_{\mathcal{L}}(\widetilde{\mathcal{L}})\) and \(\zeta\in\mathbb{H}\setminus T_{\mathcal{L}}(\widetilde{\mathcal{L}})\). Let \(\zeta_{n}\in T_{\mathcal{L}}(\widetilde{\mathcal{L}})\) such that \(\lim_{n\to\infty}\zeta_{n}=\zeta\). Then there exists a subsequence \((\chi_{\mathcal{L}}(\zeta_{n}),s_{\mathcal{L}}(\zeta_{n}))\) converges to some \((\chi,s)\in\mathbb{R}\times[0,1]\). Then \(\zeta=T_{\mathcal{L}}(\chi,s)\). Since \(\zeta\in\mathbb{H}\), we obtain \((\chi,s)\in\widetilde{\mathcal{L}}\) and \(\zeta\in T_{\mathcal{L}}(\widetilde{\mathcal{L}})\). **Theorem 4.5**.: _Suppose that Assumption 4.2 holds. For each \(z\in\mathbb{H}\), let_ \[\mathbf{\Delta}_{n}(z):=\Delta_{n}(n\chi_{\widetilde{\mathcal{L}}}(z),ns_{ \widetilde{\mathcal{L}}}(z)):=\sqrt{\pi}\left|\left\{g\in[n]:\lambda_{g}^{(n-ny (s_{\widetilde{\mathcal{L}}}(z)))}-n+g\geq n\chi_{\widetilde{\mathcal{L}}(z)} \right\}\right|\] _Under the assumption of Theorem 2.6, \(\mathbf{\Delta}_{n}(z)-\mathbb{E}\mathbf{\Delta}_{n}(z)\) converges to GFF in the upper half plane in the sense that for each \(s\in(0,1)\)_ \[\lim_{n\to\infty}\int_{-\infty}^{\infty}\chi^{j}\left(\Delta_{n}(n\chi,ns)- \mathbb{E}\Delta_{n}(n\chi,ns)\right)d\chi=\int_{z\in\mathbb{H}:s_{\widetilde {\mathcal{L}}}(z)=s}\chi_{\widetilde{\mathcal{L}}}^{j}(z)\frac{d\chi_{ \widetilde{\mathcal{L}}}(z)}{dz}\Xi(z)dz\] Proof.: Explicit computations show that \[\lim_{n\to\infty}\int_{-\infty}^{\infty}\chi^{j}\left(\Delta_{n}( n\chi,ns)-\mathbb{E}\Delta_{n}(n\chi,ns)\right)d\chi\] \[=\frac{\sqrt{\pi}}{j+1}\sum_{i=1}^{n}\left([\lambda_{i}^{(n-ns)}-n +i]^{j+1}-\mathbb{E}[\lambda_{i}^{(n-ns)}-n+i]^{j+1}\right)\] Note that \[\frac{\partial^{2}}{\partial z\partial w}\log[z-w]=\frac{1}{(z-w)^{2}}\] Hence we have \[Q(z,w)=\frac{\partial^{2}}{\partial z\partial w}\log\left(\left[wH^{\prime}_{ \mathbf{m}_{0}}(w)+\frac{w}{w-1}\right]-\left[zH^{\prime}_{\mathbf{m}_{0}}(z) +\frac{z}{z-1}\right]\right)\] Let \[\tilde{z}: =zH^{\prime}_{\mathbf{m}_{0}}(z)+\frac{z}{z-1}=\operatorname{St} ^{(-1)}_{\mathbf{m}_{0}}(\log z)\] \[\tilde{w}: =wH^{\prime}_{\mathbf{m}_{0}}(w)+\frac{w}{w-1}=\operatorname{St}^ {(-1)}_{\mathbf{m}_{0}}(\log w)\] Then by Theorem 4.1, we obtain \[\lim_{n\to\infty}\frac{\operatorname{cov}\left[p_{k_{1}}^{\lfloor\alpha_{r_{1}}n \rfloor},p_{k_{2}}^{\lfloor\alpha_{r_{2}}n\rfloor}\right]}{n^{k_{1}+k_{2}}}= \frac{1}{(2\pi\mathbf{i})^{2}}\oint_{|\tilde{z}|=C}\oint_{|\tilde{w}|=2C}\left( V_{y_{r_{1}}}(\tilde{z})\right)^{k_{1}}\left(V_{y_{r_{2}}}(\tilde{w})\right)^{k_{2}} 
\frac{1}{(\tilde{z}-\tilde{w})^{2}}d\tilde{z}d\tilde{w}.\] Make a contour deformation and integration by parts, we obtain \[\lim_{n\to\infty}\frac{\operatorname{cov}\left[p_{k_{1}}^{\lfloor \alpha_{r_{1}}n\rfloor},p_{k_{2}}^{\lfloor\alpha_{r_{2}}n\rfloor}\right]}{n^{k _{1}+k_{2}}}\] \[=\frac{1}{(2\pi\mathbf{i})^{2}}\oint_{\tilde{z}\in\mathbb{H}:s_{ \widetilde{\mathcal{L}}}(z)=s_{r_{1}}}\oint_{\tilde{w}\in\mathbb{H}:s_{ \widetilde{\mathcal{L}}}(w)=s_{r_{2}}}\left(\chi_{\widetilde{\mathcal{L}}}( \tilde{z})\right)^{k_{1}}\left(\chi_{\widetilde{\mathcal{L}}}(\tilde{w}) \right)^{k_{2}}\frac{\partial^{2}}{\partial z\partial w}\left[2\log\frac{| \tilde{z}-\tilde{w}|}{|\tilde{z}-\overline{\tilde{w}}|}\right]d\tilde{z}d \tilde{w}\] \[=\frac{k_{1}k_{2}}{\pi}\oint_{\tilde{z}\in\mathbb{H}:s_{\widetilde {\mathcal{L}}}(z)=s_{r_{1}}}\oint_{\tilde{w}\in\mathbb{H}:s_{\widetilde{ \mathcal{L}}}(w)=s_{r_{2}}}\left(\chi_{\widetilde{\mathcal{L}}}(\tilde{z}) \right)^{k_{1}-1}\left(\chi_{\widetilde{\mathcal{L}}}(\tilde{w})\right)^{k_{2} -1}\frac{\partial\chi_{\widetilde{\mathcal{L}}}(\tilde{z})}{\partial\tilde{z} }\frac{\partial\chi_{\widetilde{\mathcal{L}}}(\tilde{w})}{\partial\tilde{w}}G _{\mathbb{H}}(z,w)d\tilde{z}d\tilde{w}.\] Then the theorem follows. ## Appendix A Technical Results We use \(\mathbb{Y}\) to denote the set of all the partitions and \(\mathbb{Y}_{N}\) to denote the set of all the partitions of length \(N\). **Lemma A.1**.: _If \((\lambda(N))\in\mathbb{Y}_{N}\) is a regular sequence of partitions, and the sequence of counting measures \(m(\lambda(N))\) converges weakly to a measure \(\mathbf{m}\) with compact support. When the \(\beta_{i}\)s are equal to 1, there exists an explicit function \(H_{\mathbf{m}}\), analytic in a neighborhood of 1, depending on the weak limit \(\mathbf{m}\) such that_ (A.1) \[\lim_{N\to\infty}\frac{1}{N}\log\left(\frac{s_{\lambda(N)}(u_{1},\ldots,u_{k},1,\ldots,1)}{s_{\lambda(N)}(1,\ldots,1)}\right)=H_{\mathbf{m}}(u_{1})+\cdots+H _{\mathbf{m}}(u_{k}),\] _and the convergence is uniform when \((u_{1},\ldots,u_{k})\) is in a neighborhood of \((1,\ldots,1)\)._ Proof.: See Theorem 4.2 of [5]. Precisely, \(H_{\mathbf{m}}\) is constructed as follows: let \(S_{\mathbf{m}}(z)=z+\sum_{k=1}^{\infty}M_{k}(\mathbf{m})z^{k+1}\) be the moment generating function of the measure \(\mathbf{m}\), where \(M_{k}(\mathbf{m})=\int x^{k}d\mathbf{m}(x)\), and \(S_{\mathbf{m}}^{(-1)}\) be its inverse for the composition. Let \(R_{\mathbf{m}}(z)\) be the _Voiculescu R-transform_ of \(\mathbf{m}\) defined as \[R_{\mathbf{m}}(z)=\frac{1}{S_{\mathbf{m}}^{(-1)}(z)}-\frac{1}{z}.\] Then (A.2) \[H_{\mathbf{m}}(u)=\int_{0}^{\ln u}R_{\mathbf{m}}(t)dt+\ln\left(\frac{\ln u}{u -1}\right).\] In particular, \(H_{\mathbf{m}}(1)=0\), and (A.3) \[H_{\mathbf{m}}^{\prime}(u)=\frac{1}{uS_{\mathbf{m}}^{(-1)}(\ln u)}-\frac{1}{u- 1}.\] **Lemma A.2**.: _Let \(n\) be a positive integer and let \(g(z)\) be an analytic function defined in a neighborhood of 1. Then_ \[\lim_{(z_{1},\ldots,z_{n})\to(1,\ldots,1)}\left(\sum_{i=1}^{n}\frac{g(z_{i})}{ \prod_{j\in[n],j\neq i}(z_{i}-z_{j})}\right)=\left.\frac{\partial^{n-1}}{ \partial z^{n-1}}\left(\frac{g(z)}{(n-1)!}\right)\right|_{z=1}\] Proof.: See Lemma 5.5 of [5]. **Lemma A.3**.: _If \((\lambda(N))\in\mathbb{Y}_{N}\) is a regular sequence of partitions, and the sequence of counting measures \(m(\lambda(N))\) converges weakly to a measure \(\mathbf{m}\) with compact support. 
Then_ \[\lim_{N\to\infty}\frac{\partial^{2}}{\partial x_{1}\partial x_{2 }}\log\frac{s_{\lambda(N)}(x_{1},\ldots,x_{k},1^{N-k})}{s_{\lambda(N)}(1^{N})}\] \[=\frac{\partial^{2}}{\partial x_{1}\partial x_{2}}\log\left(1-(x _{1}-1)(x_{2}-1)\frac{x_{1}H^{\prime}_{\mathbf{m}}(x_{1})-x_{2}H^{\prime}_{ \mathbf{m}}(x_{2})}{x_{1}-x_{2}}\right)\] _and_ \[\lim_{N\to\infty}\frac{\partial^{3}}{\partial x_{1}\partial x_{2 }\partial x_{3}}\log\frac{s_{\lambda(N)}(x_{1},\ldots,x_{k},1^{N-k})}{s_{ \lambda(N)}(1^{N})}=0.\] Proof.: See Theorem 8.2 of [6]. **Acknowledgements.** We thank Sylvie Corteel for asking the questions solved in the paper and for helpful discussions. ZL acknowledges support from National Science Foundation under grant 1608896 and from Simons Foundation under grant 638143. DK and IP are grateful to the Workshop on 'Randomness, Integrability, and Universality', held on Spring 2022 at the Galileo Galilei Institute for Theoretical Physics, for hospitality and support at some stage of this work. IP acknowledges support by the Academy of Finland grant 355839.
2309.04338
Exciton-carrier coupling in a metal halide perovskite nanocrystal assembly probed by two-dimensional coherent spectroscopy
The surface chemistry and inter-connectivity within perovskite nanocrystals play a critical role in determining the electronic interactions. They manifest in the Coulomb screening of electron-hole correlations and the carrier relaxation dynamics, among other many-body processes. Here, we characterize the coupling between the exciton and free carrier states close to the band-edge in a ligand-free formamidinium lead bromide nanocrystal assembly via two-dimensional coherent spectroscopy. The optical signatures observed in this work show: (i) a nonlinear spectral lineshape reminiscent of Fano-like interference that evidences the coupling between discrete electronic states and a continuum, (ii) symmetric excited state absorption cross-peaks that suggest the existence of a coupled exciton-carrier excited state, and (iii) ultrafast carrier thermalization and exciton formation. Our results highlight the presence of coherent coupling between exciton and free carriers, particularly in the sub-100 femtosecond timescales.
Esteban Rojas-Gatjens, David Otto Tiede, Katherine A. Koch, Carlos Romero-Perez, Juan F. Galisteo-Lopez, Mauricio E. Calvo, Hernan Miguez, Ajay Ram Srimath Kandada
2023-09-08T14:08:13Z
http://arxiv.org/abs/2309.04338v1
Exciton-carrier coupling in a metal halide perovskite nanocrystal assembly probed by two-dimensional coherent spectroscopy ###### Abstract The surface chemistry and inter-connectivity within perovskite nanocrystals play a critical role in determining the electronic interactions. They manifest in the Coulomb screening of electron-hole correlations and the carrier relaxation dynamics, among other many-body processes. Here, we characterize the coupling between the exciton and free carrier states close to the band-edge in a ligand-free formamidinium lead bromide nanocrystal assembly via two-dimensional coherent spectroscopy. The optical signatures observed in this work show: (i) a nonlinear spectral lineshape reminiscent of Fano-like interference that evidences the coupling between discrete electronic states and a continuum, (ii) symmetric excited state absorption cross-peaks that suggest the existence of a coupled exciton-carrier excited state, and (iii) ultrafast carrier thermalization and exciton formation. Our results highlight the presence of coherent coupling between exciton and free carriers, particularly in the sub-100 femtosecond timescales. ## I Introduction Lead halide perovskite nanocrystals (PNCs) are promising candidates for quantum dot optoelectronic applications [1; 2]. Recently, device-compatible assemblies of PNCs grown in a porous scaffold have been shown to exhibit exceptional optoelectronic properties [3; 4; 5]. The absence of the organic ligands on the surface of the PNCs may however activate surface defect states which will, depending on the relative energetics, either act as sources for carrier doping [6] or centers for non-radiation recombination of the photo-excitations [7]. On the other hand, their absence promotes effective inter-connectivity and thus electronic coupling between the PNCs, which subsequently leads to much-desired efficient charge and energy transport [8; 9] within the PNC assembly. Notably, in comparison to the more widely studied colloidal systems, these PNC assemblies present a distinct photophysical scenario in which inter-particle interactions are expected to play a dominant role in photo-excitation dynamics. Unraveling these interactions is of utmost relevance for the further optimization of this alternative approach to PNC-based materials. It is now well known that the photoluminescence in the PNCs originates from radiative recombination of bound electron-hole pairs (excitons). The correlation between excitons and free charge carriers has been a topic of research for over two decades, with early works on semiconducting systems such as GaAs [10; 11] and InP [12]. Signatures of substantial exciton-carrier interactions have also been identified recently in bulk lead-halide perovskites [13; 14; 15; 16]. The many-body correlations between excitons and carriers manifest in the nonlinear optical response of the materials, specifically in the evolution in the spectral lineshapes in the ultrafast timescales. Two-dimensional coherent spectroscopy is the technique of preference for disentangling such many-body correlations in bulk and nanostructured semiconductors [17; 18; 19; 20; 21; 22; 23]. The correlations between excited states manifest as energy shifts, resonances of multi-particle states (e.g. biexcitons and trions), spectral linewidth and phase shifts in the complex lineshape of the two-dimensional excitation-emission map (\(\hbar\omega_{1}\)-\(\hbar\omega_{3}\), respectively) [11; 17; 24; 25; 26]. 
Several groups have used two-dimensional coherent spectroscopy to describe the charge carrier thermalization, population relaxation, and exciton dissociation in bulk lead-halide perovskite semiconductors [14; 16; 27]. In the context of PNCs, Yu _et al_[28] used two-dimensional spectroscopy to evaluate the bottleneck effect in the hot carrier thermalization as a function of nanocrystal size. Most of such investigations in semiconductor nanostructures have been performed on colloidal suspensions. Notably, exciton-carrier correlations have not been rigorously explored in such systems. Moreover, the colloidal systems lack the interconnectivity between the particles to explore multi-particle correlations and in fact, only a handful of works on solid-state assemblies have investigated coherent inter-particle interactions [21; 22]. Here, we investigate the exciton-carrier coupling and inter-particle interactions through 2D coherent spectroscopy [29] in solid dispersions of ligand-free formamidinium lead bromide (FAPbBr\({}_{3}\)) nanocrystals embedded in a nanoporous silica scaffold. Through a systematic analysis of the observed spectral lineshape, we discuss the evidence for electronic coupling between band-edge carrier states and discrete excitonic transitions within an interconnected network of PNCs. We identify excited state absorption features associated with a trion-like coupled exciton-carrier state. In addition, we reproduce the observed experimental lineshapes using a photophysical model in which discrete exciton and band-edge carrier states interact with the carrier continuum via Fano-like interference mechanism. Lastly, we discuss the time evolution of the spectral response in the context of carrier relaxation into the excitonic state that happens within \(\approx 100\,\mathrm{fs}\). ## II Results and Discussion The FAPbBr\({}_{3}\) NCs solid-state assembly is prepared by infiltrating a precursor solution to the void space of the nanoporous silica scaffold via spin-coating followed by an annealing step. More details on the sample preparation can be found in the Supporting Information and in Ref. [30]. The FAPbBr\({}_{3}\) nanocrystals self-assemble within the pores of the matrix, with the precursor concentration determining the average particle size and the filling fraction. For a nominal concentration of 30% we obtained PNCs of the average diameter of \(6.7\,\mathrm{nm}\) and a pore fill fraction of \(0.145\), estimated through inductively coupled plasma analysis (details in the SI). Note that since the PNCs are crystallized in the voids of the porous matrix, the geometry, and average size can undergo variations that cannot be rigorously quantified. These samples show a photoluminescence quantum yield of approximately 5%, moderately higher than what is observed in bulk films. Given that the Bohr radius in FAPbBr\({}_{3}\) is estimated to be about \(8\,\mathrm{nm}\)[31], the PNC assembly used here can be considered to be in an intermediate confinement regime. Overall, the sample used in this study is a solid-state assembly of _ligand-free_ perovskite NCs with enhanced charge coupling between the distinct particle units compared to nanocrystals with ligands. The temperature-dependent absorption spectrum of the sample is shown in Fig. 1(a). We observe a spectral lineshape typical of a direct bandgap semiconductor, associated with the optical absorption of a carrier continuum and a low binding energy exciton band close to the edge, at all temperatures. 
We observe a blueshift in the optical edge in the PNCs in comparison to the absorption spectrum of the bulk FAPbBr\({}_{3}\) film (see Supporting Information), supporting the electronic confinement. Interestingly, the spectrum conspicuously lacks an evident excitonic peak, which is clearly present in the spectrum of the bulk film even at room temperature. There may be a number of causes for the reduction in the exciton binding energy in these PNC assemblies, including inter-particle coupling [32] and Coulomb screening from background doping [15]. We will not discuss the origin of the exciton screening in this manuscript, but we highlight the reduced binding energy that results in spectrally close exciton and continuum bands. The spectral broadening either due to the polydispersity or large intrinsic dephasing rate results in substantial spectral overlap between the discrete exciton states and the continuum, which can promote effective interaction between the two species. We investigate such correlations with 2D coherent spectroscopy, which we briefly discuss in the following. The experiment consists of a train of three ultrashort pulses incident on the sample in a BoxCAR geometry. Such a coherent excitation sequence generates a third-order nonlinear polarization in the material, which emits coherent radiation in the phase-matching direction. A fourth pulse that serves as a local oscillator enables measurement of the amplitude and phase of the emitted field. The first pulse generates a coherence that evolves for a time (\(t_{1}\)), the second pulse further drives the system to a population state which relaxes within a population time (\(t_{2}\)), and finally, the third pulse establishes another coherence which emits the electric field. We measure the emitted electric field from the ensemble of photo-excited species through spectral interferometry whose energy corresponds with the _emission_ axis of the two-dimensional spectrum, labeled \(\hbar\omega_{3}\) here. By scanning \(t_{1}\) and performing a discrete Fourier transform we recover the _excitation axis_ which we label as \(\hbar\omega_{1}\). We specifically focus on the nonlinear response with a phase-matching condition, \(\vec{k}_{sig}=-\vec{k}_{a}+\vec{k}_{b}+\vec{k}_{c}\), referred to as the _rephasing_ pathway. The experimental details are described in the supplementary information and further details can be found elsewhere [29]. The spectrum of the ultrashort pulses used in the current experiment covers the exciton and free carrier energies close to the optical edge of the sample, as shown in Fig. 1(a). We show the absolute value of the rephasing spectrum of the sample held at \(10\,\mathrm{K}\) and taken at a population time of 10 fs in Fig. 1(b). The extended feature along the diagonal line confirms the early population of the excited states in the experimental spectral range. The larger intensity at the higher energies indicates a larger fraction of the initial population in the free carrier states at early times.
Figure 1: Absorption and rephasing (absolute) spectra for a FAPbBr\({}_{3}\) NCs thin film. (a) Absorption (black line) and laser spectrum used in the 2D coherent electronic spectroscopy (gray area). (b) Coherent 1Q-rephasing response, \(\vec{k}_{sig}=-\vec{k}_{a}+\vec{k}_{b}+\vec{k}_{c}\), showing the absolute component of the spectrum. (c) Temperature dependence of the absorption measurements for FAPbBr\({}_{3}\) NCs thin films. (a) and (b) were measured at \(10\,\mathrm{K}\).
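To make the construction of the two energy axes concrete, the following is a schematic numerical sketch (not the instrument software or the authors' analysis code) of how a rephasing map arises from Fourier transforms of the delay-dependent third-order signal: the emitted field is resolved along \(\hbar\omega_{3}\), and a discrete Fourier transform over the scanned delay \(t_{1}\) yields \(\hbar\omega_{1}\). All numbers (transition energy, dephasing, time steps) are illustrative placeholders.

```python
import numpy as np

# Illustrative parameters only (not fitted values from the experiment)
hbar  = 0.6582          # eV fs
e0    = 2.28            # transition energy, eV
gamma = 0.01            # dephasing, eV

dt = 0.4                                   # fs step for both delays
t1 = (np.arange(512) * dt)[:, None]        # scanned coherence delay
t3 = (np.arange(512) * dt)[None, :]        # emission time (spectrally resolved in practice)

# Rephasing pathway: conjugate phase evolution during t1, normal evolution during t3
signal = np.exp((+1j * e0 - gamma) / hbar * t1) * np.exp((-1j * e0 - gamma) / hbar * t3)

spec = np.fft.fftshift(np.fft.fft2(signal))                          # 2D excitation-emission map
hw = np.fft.fftshift(np.fft.fftfreq(512, d=dt)) * 2 * np.pi * hbar   # energy axis in eV

i1, i3 = np.unravel_index(np.argmax(np.abs(spec)), spec.shape)
# Rephasing maps are conventionally plotted against |hw1|
print(f"diagonal peak near |hw1| = {abs(hw[i1]):.2f} eV, |hw3| = {abs(hw[i3]):.2f} eV")
```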
More importantly, we observe clear off-diagonal cross-peaks that indicate coherent coupling between exciton states and the free carriers. In the simplest scenario, this can be interpreted as a consequence of the same ground state shared by both the exciton and free carrier states. More details on the excited-state couplings can be deduced by analyzing the real component of the rephasing spectrum along with the absolute spectrum at different temperatures as shown in Fig. 2. In the real part of the response at 10 K (Fig. 2(d)), we observe that the cross-peaks correspond to negative excited-state absorption (ESA) features, labeled as **A** and **B**. In addition, we observe an extended positive off-diagonal _streak_ at higher energies labeled **C**. We interpret the latter as a signature of Fano-like interference which will be discussed below. Firstly, the ESA feature in a 2D spectrum is associated with an excitation pathway that results in the emitted field oscillating at the energy of the coherence between the photo-generated species and a higher-lying state. The latter is typically a two-quantum transition where the energy of the final state depends on the correlations between the excited species [33]. For example, two excitons with attractive interactions result in a biexciton state, whose energy is less than twice the energy of the exciton. Thus the ESA feature associated with the exciton to biexciton transition will appear as an off-diagonal negative peak and below the diagonal in a 2D spectrum [34]. However, we discard such biexcitonic transitions as the origin for the observed ESA features as they would not result in a symmetric absolute lineshape as is the case for our experimental data. While one could explain **B** as a biexcitonic state associated with the diagonal feature at 2.33 eV, to explain **A** one would have to invoke a repulsive two-quantum state in addition to the attractive biexciton, which makes it physically unfeasible. Notably, previous works on PNCs [35; 36] that have identified biexcitonic features in a 2D spectrum had clear asymmetric spectral features, unlike the present case. Feature **A** is associated with a transition from the state populated at 2.25 eV, at the exciton energy, to a higher-lying state at an energy that is slightly higher than twice the exciton energy. Feature **B** instead originates from a populated state at \(\approx 2.3\) eV, and thus in the carrier continuum, to a stable two-quantum state. Given the symmetry in the features, we can assign the features to transitions to the same two-quantum state, one from the exciton state and the other from the free carrier state. The relative energetics indicate that the two-quantum state is in fact a coupled exciton-carrier state. As the temperature increases the absorption edge shifts towards higher energy, as evident in the spectra shown in Figs 1(b) and (c). Similarly, the 2D coherent spectrum blue shifts and the real component evolves from a symmetric _absorptive_ lineshape, Fig 2(d), with very clear cross-peaks to a _dispersive_ lineshape, Fig 2(e) and (f). Extensive literature describes the dispersive lineshapes observed in semiconductors in four-wave mixing experiments [17; 24]. The observed spectral asymmetry and derivative-like lineshape along the anti-diagonal of the 2D spectrum are typically attributed to many-body interactions which result in an excitation-induced shift and excitation-induced dephasing process [37; 38; 39; 40; 41; 42]. In such a context the observed dispersive lineshapes at higher temperatures may imply temperature-induced many-body correlations between photo-excitations. It is possible that higher temperatures promote larger background doping, thus driving the observed lineshapes. However, we do not have independent confirmation of such a process. Acknowledging the need to further rationalize these lineshapes at higher temperatures, we interpret the observed dispersive-like lineshape as a consequence of the overlap of two spectral features. Specifically, the blue shift of the cross peaks that accompanies the shift of the exciton's main feature results in a spectral overlap of the diagonal and off-diagonal features, producing a seemingly dispersive line shape. This type of behavior due to overlapping ground state bleach (GB) and ESA features is commonly observed in 2D infrared spectroscopy for systems with strong anharmonicity [43]. Having described the signatures of ESA, \(\mathbf{A}\) and \(\mathbf{B}\), we now turn our attention to the extended cross-peak observed in all of the real spectra of Fig. 2(d-f), and labeled \(\mathbf{C}\) in Fig. 2(d). Similar extended features at high \(\hbar\omega_{1}\) have been observed in the case of GaAs bulk semiconductors [44; 11] and arise due to correlations between the excitons and the free carrier continuum as they share a common ground state. Nguyen _et al_[16] observed similar features also in lead-bromide perovskite single crystals. In the case of GaAs [44; 11], the exciton transitions are well-defined and energetically resolved from the free carrier transition, especially at low temperatures. In the present case, however, the distribution of exciton states and band-edge tail energetically overlap. Given such an overlap, a Fano-like interference between the discrete exciton state and the continuum state may be considered [45; 46]. Finkelstein-Shapiro _et al._ developed an analytical model that describes such a scenario in which the discrete state's energy lies in the middle of the continuum distribution and predicted 2D lineshapes similar to what is observed here [47; 46]. Extending this model, we consider here two discrete states representing the exciton and the free-carrier band edge interacting with a continuum of states. The continuum corresponds to the higher energy free carrier states but could also have distinct physical origins. For example, a continuum can be a result of the inhomogeneous distributions of coupled nanocrystals, Urbach tail states [48], disorder-induced energy dispersion [49], among others [50]. Without dwelling on the origin and nature of the continuum, we apply the analytical expressions derived by Finkelstein-Shapiro _et al._, summarized in the supporting information, to simulate the expected nonlinear response, shown in Fig. 3.
Figure 2: Absolute and real components of the 2D coherent spectrum at the top and bottom respectively. They were measured at temperatures (10 K, 60 K, and 110 K) for a FAPbBr\({}_{3}\) NCs thin film.
Figure 3: Absolute values of the simulated rephasing response for two discrete states separated by (a) 40 meV and (b) 20 meV, which includes contributions from coupled excited state pathways. The coupling with the continuum was set to \(q=1\). The corresponding real parts of the response are shown in (c) and (d) respectively.
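As an intuition-building aside (and not the Finkelstein-Shapiro 2D expressions themselves, which are given in the cited works and the Supporting Information), the hallmark of coupling between a discrete state and a continuum in a simple spectrum is the standard Fano profile, whose asymmetry is controlled by the same kind of \(q\) parameter used in the simulations below. A minimal sketch with illustrative numbers:

```python
import numpy as np

def fano_profile(energy_ev, e0_ev, gamma_ev, q):
    """Standard Fano lineshape (q + eps)^2 / (1 + eps^2) for a discrete
    state at e0_ev coupled to a flat continuum; purely illustrative."""
    eps = (energy_ev - e0_ev) / (gamma_ev / 2.0)
    return (q + eps) ** 2 / (1.0 + eps ** 2)

energy = np.linspace(2.20, 2.40, 1000)      # eV, around the band edge
for q in (0.5, 1.0, 3.0):                   # q = 1 mirrors the value used in Fig. 3
    profile = fano_profile(energy, e0_ev=2.27, gamma_ev=0.02, q=q)
    print(f"q = {q}: peak at {energy[np.argmax(profile)]:.3f} eV")
```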
Firstly, we consider two main transitions associated with the exciton and free carrier band edge to be at energies \(k_{1}=2.27\,\mathrm{eV}\) and \(k_{2}=2.31\,\mathrm{eV}\) respectively (energy separation of \(40\,\mathrm{meV}\)). The strength of the coupling of these states with the continuum is determined by the parameter \(q\), \(q=\frac{\mu_{e}}{\mu_{e}\pi V}\), where \(V\) is the coupling constant between the continuum of states and the discrete transition. For the simulations shown here, we kept the value of \(q\) to be 1. We also considered contributions from two excited state absorption transitions from \(k_{1}\) or \(k_{2}\) to a coupled state \(k_{1}k_{2}\). More details on the analytical expressions used for the simulations are given in the Supporting Information (Sec. 2.2). For this set of simulation parameters, we show the absolute and real values of the rephasing spectrum in Figs 3(a) and (c) respectively. It can be seen that the simulation reproduces the low temperature experimental spectra shown in Figs. 2(a) and (d). We now reduce the energy difference between the discrete exciton and free-carrier band edge state to \(20\,\mathrm{meV}\) and obtain the rephasing response shown in Fig. 3(b) and (d). It can be seen in Fig. 3(d) that the 2D lineshape acquires a dispersive-like spectral feature and qualitatively resembles the experimental lineshapes at higher temperatures (Fig. 2 (e) and (f)). This confirms our initial hypothesis that the dispersive lineshape here is not a consequence of many-body interactions, but simply due to spectral overlap between closely spaced excitation pathways. Note that we considered a reduced contribution from the ESA associated with the feature \(\mathbf{B}\) for the simulations shown in Fig. 3 (b) and (d). While the simulation reproduces the experimental trends very well, we highlight that the model assumes a constant and energy-independent coupling with the continuum and no inhomogeneous broadening, both of which have to be rigorously considered for a more faithful reproduction. We now discuss the evolution of the rephasing spectra, measured at \(10\,\mathrm{K}\) with population time (waiting time between pulses 2 and 3), shown in Fig. 4. From the absolute maps, Figs 4(a-e), we observe a clear red-shift in the peak of the diagonal feature, by approximately \(20\,\mathrm{meV}\) in \(90\,\mathrm{fs}\). The evolution of the spectral intensity along the diagonal can be interpreted as a signature of the transfer of population from the higher energy carrier state to the lower energy exciton state. This transfer is also accompanied by the appearance of a positive cross-peak labeled \(\mathbf{D}\) in Fig. 4. The photo-generated population after the first two excitation pulses, \(\rho_{22}=\ket{k_{2}}\bra{k_{2}}\), relaxes to \(\rho_{11}=\ket{k_{1}}\bra{k_{1}}\), which signifies the population of the lower excitonic state. The third pulse then generates the coherences of the form \(\ket{k_{1}}\bra{0}\) or \(\ket{(k_{1}k_{2})}\bra{k_{2}}\). These two terms correspond to features \(\mathbf{D}\) and \(\mathbf{B}\), which are accordingly enhanced.
Figure 4: Evolution of the rephasing 2D coherent spectra as a function of population time. The subfigures (a-e) correspond to the absolute components and (f-j) correspond to the real component.
In Fig 4(f), we observe that \(\mathbf{A}\) has a higher intensity than \(\mathbf{B}\) which supports our assignment of \(\mathbf{A}\) and \(\mathbf{B}\) to coupled state since the population evolution creates an asymmetry in the population on the exciton and carrier edge states. Importantly, the observed population relaxation time of \(90\,\mathrm{fs}\) is notably faster than the thermalization timescales reported for strongly confined colloidal nanocrystals [28], but comparable to the charge carrier relaxation in bulk CsPbBr\({}_{3}\) crystals [51; 16]. While a longer thermalization process may still be present, the existence of the sub-100fs component may indicate an alleviation of the phonon-bottleneck effects that are ubiquitous to the strongly confined electronic systems. Importantly, there is a dominance of many-body scattering processes in the probed time ranges that drive the relaxation process. We presume that such many-body scattering events are plausible due to the strong interconnectivity and thus electronic coupling between the PNCs within the assembly. This indicates that the photophysical dynamics are somewhat similar to bulk semiconductors where multiphoton scattering induces effective carrier thermalization [51]. While a similar coherent nonlinear response was measured by Jha et al [14] also in the case of bulk perovskites, including signatures for exciton-carrier coupled state, a very prominent difference can be noted. In the bulk MAPbI\({}_{3}\), the low binding energy exciton was seen to be dissociated in ultrafast timescales to populate the free carrier states, experimentally perceived as a reduction in the inhomogeneous broadening [14]. Instead, here we observe an effective population of the lower energy emissive excitonic state in these nanocrystal assemblies. ## III Conclusion In summary, we have presented a study of the exciton-carrier coupling dynamics in a assembly of FAPbBr\({}_{3}\) nanocrystals in a silica scaffold through 2D coherent spectroscopy. The experimentally determined optical signatures indicate coherent coupling of the excitonic and free carrier states in these material systems. This is particularly evidenced by the characteristic Fano-like lineshape in the two-dimensional rephasing spectra. In addition, we find evidence for the excited state features associated with a coupled exciton-carrier state, similar to what was suggested in bulk metal halide perovskites [14]. We also observe an effective population of the excitonic state following an ultrafast thermalization process in sub-100 fs timescale. Our results provide a comprehensive description of the ultrafast excitation dynamics that are driven by coherent interactions between photo-generated carriers and excitonic states in a solid-state assembly of perovskite nanocrystals. ###### Acknowledgements. A.R.S.K. acknowledges the start-up funds provided by Wake Forest University and funding from the Center for Functional Materials and the Office of Research and Sponsored Programs at WFU. The authors thank Professor Carlos Silva for giving access to the optical instrumentation and for insightful discussions. The optical instrumentation was supported by the National Science Foundation (DMR-1904293). The experimental data collection, analysis, and the writing of corresponding manuscript sections by E.R.G. were supported by the National Science Foundation (DMR-2019444). D.O.T. 
acknowledges financial support from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 956270. H.M. is thankful for the financial support received from the Spanish Ministry of Science and Innovation-Agencia Estatal de Investigacion (MICINAEI) under grants PID2020-116593RB-I00, funded by MCIN/AEI/ 10.13039/501100011033, from the Junta de Andalucia under grant P18-RT-2291 (FEDER/UE) and from the Innovative Training Network Persephone ITN, funded by the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 956270. ## Author contributions The measurements were performed by E.R-G., D.O.T., and K.A.K. under the supervision of A.R.S.K. The samples were prepared by CRP under the supervision of J.F.G-L., M.E.C., and H.M. E.R.G. wrote the original draft and all the authors contributed to the editing of the manuscript.
2309.13888
Graph Representation Learning Towards Patents Network Analysis
Patent analysis has recently been recognized as a powerful technique for large companies worldwide to lend them insight into the age of competition among various industries. This technique is considered a shortcut for developing countries since it can significantly accelerate their technology development. Therefore, as an inevitable process, patent analysis can be utilized to monitor rival companies and diverse industries. This research employed a graph representation learning approach to create, analyze, and find similarities in the patent data registered in the Iranian Official Gazette. The patent records were scrapped and wrangled through the Iranian Official Gazette portal. Afterward, the key entities were extracted from the scrapped patents dataset to create the Iranian patents graph from scratch based on novel natural language processing and entity resolution techniques. Finally, thanks to the utilization of novel graph algorithms and text mining methods, we identified new areas of industry and research from Iranian patent data, which can be used extensively to prevent duplicate patents, familiarity with similar and connected inventions, Awareness of legal entities supporting patents and knowledge of researchers and linked stakeholders in a particular research field.
Mohammad Heydari, Babak Teimourpour
2023-09-25T05:49:40Z
http://arxiv.org/abs/2309.13888v1
# Graph Representation Learning ###### Abstract Patent analysis has recently been recognized as a powerful technique for large companies in the world to lend them insight into the age of competition among various industries. This technique is considered a shortcut for developing countries since it can significantly accelerate their technology development. Therefore, as an inevitable process, patent analysis can be utilized to monitor rival companies and diverse industries. In this research, a graph representation learning approach employed to create, analyze and find similarities of the patents data registered in the Iranian Official Gazette. The patent records were scrapped and wrapped through the Iranian Official Gazette portal. Afterward, the key entities were extracted from the scrapped patents dataset to create the Iranian patents graph from scratch based on novel natural language processing and entity resolution technique. Finally, thanks to the utilization of novel graph algorithms and text mining methods, we identified new areas of industry and research from Iranian patent data, which can be used extensively to prevent duplicate patents, familiarity with similar and connected inventions, Awareness of legal entities supporting patents and knowledge of researchers and linked stakeholders in a particular research field. Graph Representation Learning, Deep Learning, Patents Analysis, Graph Algorithms ## I Introduction Currently, the international economy depends on technological innovation. In the last two decades, there have been significant developments in the field of patent analytics. Patent application is a major side of protecting intellectual properties. Patent registry documents are a wide resource of technical and commercial knowledge. Therefore, patent analysis has been considered as a useful tool for managing research and development as well as technical and economic analysis. Patent's data is a complicated source for processing and gaining results albeit with a lot of human efforts and vast analysis time. There are many tools available to help patent experts and decision makers in patent analysis. As shown in previous research, patents data can be used to identify trends in industry as well as competitive power of enterprises or countries to track innovative activities. The most valuable benefits in patent networks analysis which can be noted include avoiding duplication in patent registry, innovators' familiarity with similar innovators and their mutual inventions, awareness of patent law enforcement agencies, and identification of relevant researchers in a particular research field. Consequently, patents data are analyzed by different techniques depending on the purpose pursued. Companies are interested in discovering latent patterns in patents for determination of patent innovation, identification of patent subjects, projection of technological innovations in a specific domain, industrial strategic planning, infringements detection in patent registry process, evaluation of patent quality for research purposes, road mapping industry, and recognition of industrial rivals. For patent data sources to be useful in decision making, the data must be precise, presented in a perceptible model, and delivered in the right manner. Patents, as the main sources of information, are significant to research and theoretical growth, serving as distinct information sources for technological information. 
Firms and individual inventors can achieve victory or lose essential income and market advantage if their modern designs inadvertently conflict or infringe the existing technology or if others misappropriate their pre-existing assertions. In the strictest sense, the oneness of an invention can be identified not by the occurrences of keywords and key phrases but by the inventive key findings or fresh proficiency [1][2][3]. ## II Background There have been a few studies [4][5][6][7][8] on Iranian patenting works outside Iran, mostly conducted in US, yet until very recently, there had been no effort to carry out deep exploration into the patents registered in Iran Patent Office. [9] The main reason was that the Iran Patent Office did not have an official portal; therefore, not enough information or statistics could be obtained on patenting status in Iran. Iranian Official Gazette is an affiliate organization of Iranian judiciary in charge of publishing many kinds of public or legal notices, including acts of the parliament, laws and regulations, company registrations and deregistration, wills, trademark registrations, and patent registration. While many applicants were dissatisfied with the lengthy process of patent registration in Iran, Patent Office launched an e-filing service for patenting in July 2012. Ever since, applicants can fill all the required forms, pay the prescribed fees, and check the progress of their application through an online portal. This was a decisive step for the patent system to improve and expand its services in a large, geographically dispersed country like Iran. Iranian Official Gazette launched an online explorable database to prepare free of charge global access to information content of the patents registered in Iran. Iranian patents are granted for a maximum period of 20 years. The patent's owner shall have the exclusive rights of production, sale, and utilization of the subject of the patent. The latest studies investigated innovations in Iran by utilizing international databases such as USPTO, WIPO, and EPO. Unlike previous works, the most valuable point of this study is utilization of patents registered by the Iran Patent Office for the first time to the best of the authors' knowledge. Tehranchi provided an objective assessment of the Iranian American contributions to United States' science and technology as measured by their inventions registered with the United States Patent and Trademark Office (USPTO). The study is based on data collected by U.S. Census Bureau in 2000. Noruzi et al, mapped Iranian patents based on International Patent Classification (IPC) during 1976-2011. It also utilized non-national patents database such as USPTO, WIPO, and EPO. Madani et al, studied the evolution of patent mining. They applied bibliometrics analysis and keyword network analysis to 143 papers extracted from the 'Web of Science' database [10][11][12]. ## III Objectives Mapping the patent-activity of a country based on IPC is the straightest method to calculate the technological specialization and technological scope of the country through distribution of its patents over different technological areas. The purpose of this study is to map the patents registered in Iranian Official Gazette Portal1, known as the main patent registry base in Iran, in 2016-2019. The study attempts to address the three following questions: Footnote 1: [http://rrk.ir/News/NewsList.aspx](http://rrk.ir/News/NewsList.aspx) * What are the most significant IPCs among the patented inventions? 
* Which invention is like the patented invention? * Who is the most potential innovator in each specific innovation field? So, the study is to distinguish and discover the patterns in the innovation-activity in Iran, and accordingly shape a new landscape for future visionary and experimental research by design a novel recommender system in patents network. This study analyzes patent classification to explore the Iranian patent-activity and industrial development, examining Iran's patent areas in a long-term perspective. The patents are classified according to the International Patent Classification (IPC) format. IPC divides patentable technology into eight main categories as follows: A. Human necessities, B. Performing operations; Transporting, C. Chemistry; Metallurgy, D. Textiles; Paper, E. Fixed construction, F. Mechanical engineering; Lighting; Heating; Weapons., G. Physics and H. Electricity. IPC is the main patent classification system in the world. Therefore, it served as the primary pattern for categorization of the patent subjects in this study. Each IPC has its corresponding technology classification. The IPC classification analysis helps researchers evaluate technology classification distribution of patents, assessing the overall technology trend of a country over different time periods. The inventors in this study were not limited to Iran; they were from different nationalities and countries among them Japan, China, North Korea (Asia), Netherlands, Germany, France (Europe), and USA. ## IV Research Methodology Research was carried out in the following brief steps: 1. Repository Selection 2. Web scrapping by designing a web crawler to collect patents advertisement data. 3. Data preprocessing and cleaning by utilizing Persian text processing techniques. 4. Reshape data into the structured style. 5. Creation Of Patents Structural Database 6. Extraction of necessary information about legal supporting institutions 7. Extraction of necessary information about innovator (e.g., gender and nationality) 8. Extraction of necessary information about patents (IPC, Subject, Innovators' names, and Owners) Figure 1: Patent Entity Properties C. Graph Construction * Apply graph deep learning algorithm on patents network to detect similar patents based on mutual IPC and IPC3 fields. * Recommend similar nodes based on the node's properties similarity. * Discovering various latent patterns in patents network. Each patent was registered with a unique ID is supported by an exclusive legal institution. Patents can contain mutual IPCs. The network consists of 6443 nodes and 8928 edges. Since the patents graph is a triplet graph with three key entities, the nodes refer to the patents, IPCs, and legal supporting institutions, and edges define the relationships between the node's connections to each other. The number of recognized innovators in our data set is 21465, who collaborated partially in innovation processes. Since the patents graph is a triplet graph with three key entities, nodes refer to the patents, IPCs, and legal supporting institutions, and edges define the relationships between the nodes' relations to each other. ## V Data In this section, we talk about web crawler architecture to collect and patent data cleaning methods. ### Web Crawling To implement the scrapping phase, web crawling technique was utilized to gather the desired data and create the patents dataset. A crawler was developed from scratch to fetch target data from the patents repository. 
It is worth mentioning that all the crawled data, including nodes and the relationships between them, are public data available to all users on the Iranian Official Gazette Portal. ### Data Processing After scrapping data from the patents registry repository using the crawling technique, a specific regular expression was developed to identify the desired entities in a suitable Persian language style. Since the patents were recorded in the repository in Persian and this language poses some incompatibilities in text processing, a heavy preprocessing seemed vital to obtain the key entities in the standard format. ### Patents Data Extraction As shown in the image below, the example of a patent advertisement is an HTML page encoded with the Unicode standard, which cannot be read by humans. To decode Unicode codes and convert them into Persian letters, the HTML2Text library has been used. It's a PHP library to convert HTML to formatted plain text. ## VI Network Science Approaches The outcome of our study is divided into different sections, so it could help design a technology roadmap, increase products reusability, and save time as the most asset. ### Graph Structural Information The table below shows the structural information of the graph. Like any other object in the real world, each network incorporates some features which can serve a strategic role to give the observers more insight. Based on the distribution degree of the whole graph, a small number of nodes have degrees in the range of 50-436. Most of the nodes have degrees in a range under 50. Almost %10 of the nodes (650 items to be exact) are isolated as they do not play a significant role. It can be concluded that this small percentage of nodes is not mutual with other nodes in IPC. Their research fields could be too old or so novel that they have not yet formed a significant community or drawn interest. Since graph distribution degree follows power law type, it can be mentioned that the network is scale-free. As mentioned, there are a small number of nodes with high range of degrees and many nodes with a small range of degrees. The average degree of the nodes is 2.07. The average path length between the nodes is 5.08. The modularity value is 0.771. 
\begin{table} \begin{tabular}{|c|c|} \hline _j_j_j_k & _j_k_j \\ \hline _TTVPV \_j_j_k_j_k_j_k_j_k_j_k_j_k_j_k_j_k_j_k_j_j_k_j_k_j_j_k_j_j_k_j_j_k_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j__j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j__j_j_j_j_j_j_j_j_j_j__j_j_j_j_j_j_j_j_j_j_j_j_j_j_j_j__j_j_j__j_j_j__j_j_j_j__j_j_j__j_j_j__j_j_j_j__j__j_j_j_j_j_j__j_j__j_j_j_j_j__j_j_j__j_j_j__j_j_j__j_j__j_j_j_j__j_j__j_j_j__j_j_j_j__j_j__j_j__j_j__j_j__j_j_j__j_j_j__j_j_j__j_j__j_j__j__j_j_j__j__j_j_j__j_j__j__j_j_j__j__j_j_j__j__j_j__j_j_j__j_j__j_j__j_j__j_j__j_j__j_j__j_j__j__j_j_j__j__j__j_j__j__j_j__j__j_j__j__j_j__j__j__j__j_j__j__j__j_j_. \end{table} Table 1: A Patent Advertisement on Iranian Official Gazette Figure 2: Graph Schema ### Network Centralities In this part, we demonstrate top patent graph nodes based on degree centrality. The table entails the most precious IPC extracted from the patent network, presented based on frequency and listed in the table according to ranking. It is obvious that most of the patents heavily deal with Human Necessities, Performing Operations, and Chemistry. ### Patents keyword Extraction In this part, visualization of the patents network and three different snapshots of the graph are shown. The points in the images are hubs distributions. Most key communications in the graph initiated through the highlighted hubs, which were elaborated in the Results section. In the following diagrams, some of the most important results about the patents data are listed. 
As is shown, in the various types of Patents Legal Supporting Centers, the companies are highlighted according to their distinguished contribution in the patent's registry process with 928 cases. ### Patents Graph Visualization The idea was to find which edges in a network occur most frequently between other pairs of nodes by finding edges betweenness centrality. The edges joining communities are then expected to have a high edge betweenness. The underlying community structure of the network will be much more fine-grained once the edges with the highest betweenness are eliminated which means that communities will be much easier to spot. In the following visualization, which is based on Girvan-Newman community detection algorithm[13], each color represents a specific community of graph. Girvan-Newman method [14] is one of the classic community clustering techniques. By using the algorithm, we can separate the network into communities, and the community detection can be used as a good start of data preprocessing. It will remove the edges with the largest edge betweenness in every iteration. It relies on the iterative elimination of edges that have the highest number of shortest paths between nodes passing through them. By removing edges from the graph one-by-one, the network breaks down into smaller pieces, so-called communities. The algorithm was introduced by Michelle Girvan and Mark Newman. \begin{table} \begin{tabular}{c c c c} \hline **Rank** & **IPC** & **Frequency** & **Percentage** \\ \hline **1** & A & 1653 & 28.8 \\ \hline **2** & B & 1074 & 18.62 \\ \hline **3** & C & 929 & 16.12 \\ \hline **4** & F & 716 & 12.42 \\ \hline **5** & G & 702 & 12.18 \\ \hline **6** & H & 334 & 5.79 \\ \hline **7** & E & 270 & 4.68 \\ \hline **8** & D & 84 & 1.45 \\ \hline \end{tabular} \end{table} Table 2: Top Patents IPC Classes based on Frequency. Figure 4: Visualization of Most frequented IPC’s Figure 5: Top Keyword Extracted from Patents Different Topics Figure 3: Patens Graph Degree Distribution \begin{table} \begin{tabular}{c c c c} \hline **IPC** & **Class** & **Subclass** & **Deg** \\ \hline A61K & A & Preparations for Medical, Dental & 436 \\ \hline B82Y & B & Applications of Nanostructures & 282 \\ \hline A61B & A & Diagnosis; Surgery; Identification & 226 \\ \hline B01J & B & Chemical or Physical Processes & 191 \\ \hline G05B & G & Control or Regulate Systems in General & 181 \\ \hline F01B & F & Positive-Displacement Type, Steam Engines & 177 \\ \hline G01N & G & \begin{tabular}{c} Investigating or Analyzing Materials by \\ Determining Their Chemical or \\ Physical Properties \\ \end{tabular} & 168 \\ \hline A61F & A & Filters Implantable into Blood Vessel, Prostheses & 131 \\ \hline B01D & B & separating solids from solids by wet methods & 124 \\ \hline C02F & C & \begin{tabular}{c} processes for making harmful chemical \\ substances harmless \\ \end{tabular} & 111 \\ \hline \end{tabular} \end{table} Table 3: Top Patents IPC Subclasses based on Graph Degree Centrality Based on report, Number of communities are 147 and Maximum found modularity is 0.7719164. In the following diagrams, some of the most important results about the patents data are listed. As is shown, in the various types of Patents Legal Supporting Centers, the companies are highlighted according to their distinguished contribution in the patent's registry process with 928 cases. As it is shown in the above visualization, it demonstrates the geographical distribution of innovators' nationalities around the world. 
The geographical visualization above was produced with Power BI software. Since foreign investors can register their patents in the Iranian Official Gazette Portal, there are various nationalities among the innovators. After Iran, Europe, East Asia, and America had the most active collaboration, in that order. The Netherlands, Germany, France, and Italy, representing Europe with a total of 163 participations, had the second-largest participation after Iran. Japan, China, and North Korea, representing Asia, are the third major collaborators after Europe, with 147 participations. Finally, the USA came last with 23 cases of collaboration.

## VII Graph Representation Learning

In this section, graph representation results based on DeepWalk [15], Node2Vec [16] (inspired by Word2Vec [17]), LINE, and SDNE are shown. After applying multiple random walks and obtaining latent vectors for the nodes of the heterogeneous graph, the vectors can be displayed in a two-dimensional space using the t-SNE dimensionality reduction algorithm. The images below are the two-dimensional representations of the node features. The purpose is to embed the patents and IPCs through the following steps. (1) Nodes represent patents and IPCs. (2) Each patent is connected to its IPCs; a field named IPC3 was created to refer to the mutual IPCs of a patent, since a patent can contain more than one IPC and can thereby connect several patents. (3) Node2Vec is applied to the resulting graphs. Finally, the similarity between the different nodes is inspected. The nodes most similar to a patent are expected to be those with mutual IPCs and IPC3. As mentioned above, IPC3 refers to the mutual IPCs of a patent. The strength of the algorithm lies in automatic node embedding learning. The patent ID fields were given as input to the model, which returns the most similar patents with mutual IPC and IPC3. IPCs act as hubs in the network, facilitating relations among patents.

\begin{table} \begin{tabular}{c c} \hline **Parameters** & **Values** \\ \hline Representation Size & 128 \\ \hline Order & 2 \\ \hline Batch Size & 1024 \\ \hline Epochs & 50 \\ \hline \end{tabular} \end{table} Table 6: LINE Algorithm Parameters

Figure 8: Geographical Distribution of Innovators' Nationalities

Figure 6: Each color refers to a unique community of patents.

Figure 7: Community Detection based on the Girvan-Newman Algorithm

### Deep Walk

One of the common ways of examining graph properties is the representation (embedding) approach. Many common machine learning methods, tests, and statistical inferences are defined in vector spaces, which is why methods that take a graph as input and output a representation of its vertices or edges in a vector space have become common. Similar or neighboring nodes have a small Euclidean distance. Using this algorithm, we automatically learn the characteristics of each node in the graph and create a latent vector for it. We then predict connections using a classification algorithm. This algorithm helps us identify the nodes with the highest degree of similarity. The similarity criterion is the maximal sharing of international patent classification codes between inventions. The output of running the algorithm on the patent network graph is therefore the set of nodes (inventions) most similar to each other.
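The embedding-and-recommendation pipeline described in this section can be illustrated with the `node2vec` package, which wraps gensim's Word2Vec over simulated random walks. This is a hedged sketch: the graph file name, the query patent identifier, and the hyper-parameters are illustrative assumptions rather than the study's actual settings.

```python
import networkx as nx
from node2vec import Node2Vec

# Assumed: the patent--IPC graph, in which IPC nodes act as hubs connecting
# patents that share classification codes.
G = nx.read_graphml("patent_ipc_graph.graphml")  # illustrative file name

# Simulate biased random walks over the graph and train skip-gram embeddings.
node2vec = Node2Vec(G, dimensions=128, walk_length=30, num_walks=10, workers=4)
model = node2vec.fit(window=5, min_count=1)

# Recommend inventions similar to a query patent: nodes whose walks pass
# through the same IPC hubs end up close together in the embedding space.
query = "patent_140012345"  # illustrative patent identifier
for node, score in model.wv.most_similar(query, topn=5):
    print(node, round(score, 3))
```

The learned vectors can then be projected to two dimensions with a t-SNE implementation such as scikit-learn's `TSNE` to produce plots like the ones referenced above.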
To implement the random walk algorithm in the Node2Vec core, the model parameters must be adjusted. Next, the process of generating random steps is simulated and the model is trained on the graph data set. The input of the model is the patent identifiers, and the output of the model is semantically similar nodes that have international patent classification codes in common. The important point here is that we did not transfer any information about the structural characteristics of the graph nodes to the model in advance, and the model learned the representation automatically. After the end of the model training process, the most similar nodes are identified. Our input for testing the recommender system is one of the nodes of the graph that refers to an invention, and the result of the recommender system is similar inventions (similar nodes) along with the similarity score of each invention. ## VIII Conclusion Patent analysis plays a pivotal role in understanding technological advancements and competitive landscapes across industries. In this context, recommender systems within patent networks offer indispensable advantages, from preventing duplicate patents to fostering knowledge sharing among investors and institutions. The uniqueness of our approach lies in its application to Iranian patent data, opening new vistas for minimizing redundancy, enhancing familiarity with related inventions, understanding legal facets, and connecting researchers and stakeholders within specific research domains. The study recognizes the crucial role of patent analysis in gaining insights into technological advancements and competitive landscapes in various industries. It emphasizes the importance of recommender systems in patent networks to prevent duplicate patents, facilitate knowledge sharing, and support inventors and institutions. The paper's main contributions include the utilization of graph representation learning approaches, specifically graph deep learning algorithms, to uncover latent patterns in patent networks and identify similar nodes. By analyzing Iranian patent data, the study opens new avenues for preventing duplicate patents, promoting familiarity with related inventions, understanding legal aspects, and connecting researchers and stakeholders within specific research fields. The research methodology involved web scraping and data preprocessing to create a structured patent database. Graph construction and deep learning algorithms were applied to identify patent similarities, legal institutions, innovators, and patent subjects. The paper also provides insights into patent classifications, centralities, and community structures within the network. Through visualization and representation learning techniques such as DeepWalk, Node2Vec, LINE, and SDNE, the study offers a comprehensive understanding of patent relationships and similarities. These methods enable the automatic learning of node characteristics and the prediction of similar patents based on international classification codes. Overall, this paper's findings have significant implications for the Iranian patent ecosystem. The recommender system developed in this research can facilitate innovation, reduce redundancy, and enhance collaboration among investors and institutions. It provides a valuable tool for patent analysts, researchers, and decision-makers in navigating the complex landscape of patent data and fostering technological advancement. **Appendices**
2303.00111
PixCUE: Joint Uncertainty Estimation and Image Reconstruction in MRI using Deep Pixel Classification
Deep learning (DL) models are capable of successfully exploiting latent representations in MR data and have become state-of-the-art for accelerated MRI reconstruction. However, undersampling the measurements in k-space as well as the over- or under-parameterized and non-transparent nature of DL make these models exposed to uncertainty. Consequently, uncertainty estimation has become a major issue in DL MRI reconstruction. To estimate uncertainty, Monte Carlo (MC) inference techniques have become a common practice where multiple reconstructions are utilized to compute the variance in reconstruction as a measurement of uncertainty. However, these methods demand high computational costs as they require multiple inferences through the DL model. To this end, we introduce a method to estimate uncertainty during MRI reconstruction using a pixel classification framework. The proposed method, PixCUE (stands for Pixel Classification Uncertainty Estimation) produces the reconstructed image along with an uncertainty map during a single forward pass through the DL model. We demonstrate that this approach generates uncertainty maps that highly correlate with the reconstruction errors with respect to various MR imaging sequences and under numerous adversarial conditions. We also show that the estimated uncertainties are correlated to that of the conventional MC method. We further provide an empirical relationship between the uncertainty estimations using PixCUE and well-established reconstruction metrics such as NMSE, PSNR, and SSIM. We conclude that PixCUE is capable of reliably estimating the uncertainty in MRI reconstruction with a minimum additional computational cost.
Mevan Ekanayake, Kamlesh Pawar, Gary Egan, Zhaolin Chen
2023-02-28T22:26:18Z
http://arxiv.org/abs/2303.00111v2
# PixCUE: Joint Uncertainty Estimation and Image Reconstruction ###### Abstract Deep learning (DL) models are capable of successfully exploiting latent representations in MR data and have become state-of-the-art for accelerated MRI reconstruction. However, undersampling the measurements in k-space as well as the over- or under-parameterized and non-transparent nature of DL make these models exposed to uncertainty. Consequently, uncertainty estimation has become a major issue in DL MRI reconstruction. To estimate uncertainty, Monte Carlo (MC) inference techniques have become a common practice where multiple reconstructions are utilized to compute the variance in reconstruction as a measurement of uncertainty. However, these methods demand high computational costs as they require multiple inferences through the DL model. To this end, we introduce a method to estimate uncertainty during MRI reconstruction using a pixel classification framework. The proposed method, PixCUE (_stands for Pixel Classification Uncertainty Estimation_) produces the reconstructed image along with an uncertainty map during a single forward pass through the DL model. We demonstrate that this approach generates uncertainty maps that highly correlate with the reconstruction errors with respect to various MR imaging sequences and under numerous adversarial conditions. We also show that the estimated uncertainties are correlated to that of the conventional MC method. We further provide an empirical relationship between the uncertainty estimations using PixCUE and well-established reconstruction metrics such as NMSE, PSNR, and SSIM. We conclude that PixCUE is capable of reliably estimating the uncertainty in MRI reconstruction with a minimum additional computational cost. **Keywords:** MR image reconstruction, deep learning uncertainty estimation, convolutional neural network, pixel classification framework \({}^{\text{a}}\)Monash Biomedical Imaging, Monash University, Clayton, VIC 3800 Australia \({}^{\text{b}}\)Department of Electrical and Computer Systems Engineering, Monash University, Clayton, VIC 3800 Australia \({}^{\text{c}}\)School of Psychological Sciences, Monash University, Clayton, VIC 3800 Australia \({}^{\text{d}}\)Department of Data Science and AI, Monash University, Clayton, VIC 3800 Australia + Footnote †: Corresponding author. _Tel._: +61 3 9905 0841; _E-mail address_: [email protected] Introduction Image reconstruction in MRI involves processing the raw k-space data acquired during a scan to human-readable image formats. In brief, the process involves applying multi-dimensional Fourier transform to the raw k-space data and combining the images from multiple channels [1]. However, to reduce the scan time of MR imaging, undersampling in the k-space is often performed, and thus more sophisticated methods such as parallel imaging [2, 3, 4, 5], compressed sensing [6, 7, 8, 9], and deep learning (DL) [10, 11, 12, 13, 14, 15, 16, 17] are needed to reconstruct an image with minimal undersampling artifacts. Mathematically, these methods are essentially solving an inverse problem [18, 15, 19]. This inverse problem can be solved using an explicit matrix inversion [3, 4] in parallel MRI, or by using iterative gradient descent-based methods [20, 21, 22, 23, 24] in compressed sensing. In the DL-based approaches, such inverse problems are approximated by a multi-layered model with the parameters learned from a training dataset. 
This approach of using training data to find a model of image reconstruction [10, 11, 12, 25, 26, 27, 28] has been demonstrated to outperform the conventional methods of compressed sensing and parallel imaging. Although the DL-based image reconstruction methods demonstrate significant improvements, the generalization [29, 30, 31, 32] of these methods is still an area that is poorly exploited. When a model is trained on a particular set of data and is inferred on a different set of data, it may create false features that are not present in the original image (hallucinations) [32, 33]. In addition, the model may become unstable [34] as a result of a change in the input data that is not noticeable. This aspect of DL models has motivated researchers to search for methods to detect model failure modes that will alleviate potential misdiagnoses due to errors in DL image reconstruction. Uncertainty modeling [35, 36, 37, 38] is one method to detect failure modes. Inspired by computer vision research [35], uncertainty modeling has been explored in the context of MRI [37, 38, 39]. Specifically, in literature, the variance among multiple test inferences using Monte Carlo (MC) sampling with probabilistic models is a surrogate estimate of uncertainty [40]. Several other methods involve learning the entire system parameters utilizing invertible neural networks [41, 42]. Also, Bayesian inference methods have been introduced for uncertainty estimation [38]. Stein's Unbiased Risk Estimator (SURE) has also been utilized as a method of uncertainty estimation [43]. In several previous works, a separate uncertainty layer [37, 44] or a model [38, 45] have been utilized to estimate uncertainty. More recent comprehensive work on DL uncertainty estimation involves leveraging variational autoencoders (VAEs) to develop probabilistic reconstructions that encode the acquisition uncertainty in the latent space [37]. This approach is capable of developing a posterior to the image data from which a variance map can be generated utilizing MC sampling. The mainstream drawback of MC sampling-based uncertainty estimation specifically in the DL setting is the high computational cost during multiple inferences through large DL models. In this study, we propose an uncertainty estimation method referred to as PixCUE (_stands for Pixel Classification Uncertainty Estimation_) that jointly performs image reconstruction and uncertainty estimation simultaneously using a pixel classification framework. PixCUE produces the reconstructed image along with an uncertainty map during a single forward pass through the DL model. The pixel classification framework here converts the reconstruction problem to a SoftMax classification problem where the predicted probability distribution of each pixel is utilized to estimate its uncertainty. We observe that PixCUE is capable of generating uncertainty maps that highly correlate with the actual error in reconstruction with respect to various MR imaging sequences and under numerous adversarial conditions. We also observe that the uncertainty estimations obtained using PixCUE are highly correlated with that of the conventional MC method, yet substantially faster compared with the MC method. We further provide an empirical relationship between uncertainty estimations using PixCUE and well-established reconstruction metrics such as NMSE, PSNR, and SSIM. The remainder of the paper is arranged as follows. Section 2 provides the background to the underlying work in this paper. 
Section 3 presents the proposed uncertainty estimation method. In Sections 4 and 5, we present our experiments and the results, respectively. A comprehensive discussion and a conclusion are presented in Sections 6 and 7, respectively. ## 2 Background ### Fundamentals of MRI Reconstruction In MRI, the acquired k-space data is represented as: \[y=Fx+\eta \tag{1}\] where \(y\) is the k-space data, \(x\) is the complex-valued image, \(F\) is the Fourier transform operator, and \(\eta\) represents tissue and instrumentation noise. To reduce the scan time, the k-space data is often undersampled using a predefined undersampling pattern. Eq. 1 can be rewritten considering the undersampling as: \[y_{u}=MFx+\eta^{\prime}=F_{u}x+\eta^{\prime} \tag{2}\] where \(y_{u}\) is the undersampled k-space data, \(M\) is the undersampling operator, \(\eta^{\prime}\) represents noise, and \(F_{u}\) represents the partial Fourier transform operator accounted for undersampling. The task in MRI reconstruction is to recover the MRI image with diagnostic quality from \(y_{u}\). Conventional solutions for MRI reconstruction utilized linear analytic techniques like partial Fourier encoding, sensitivity encoding (SENSE) [4], simultaneous acquisition of spatial harmonics (SMASH) [46], and generalized auto-calibrating partially parallel acquisitions (GRAPPA) [3]. These methods utilized knowledge of the imaging systems and k-space correlations. Subsequently, the reconstruction methods shifted towards non-linear iterative algorithms that incorporated the physics of the imaging system and regularization of priors such as sparsity [47], artificial sparsity [48], [49], low rank [50], [51], manifold [52], total variation [53], and dictionary learning [54]. ### Deep Learning MRI Reconstruction In the context of DL, the image reconstruction from undersampled k-space data is conventionally modeled as a regression problem using a DL network, \(G_{\theta}\) parameterized by \(\theta\) as: \[x^{*}=~{}G_{\theta}(F^{H}y_{u}) \tag{3}\] where \(x^{*}\) is the reconstructed image using a DL network and \(F^{H}\) represents inverse Fourier transform operator. Different manifestations of \(G_{\theta}\) and the loss functions to train such a network have been reported in the literature [11], [55], [56]. Most of the DL-based MRI reconstruction methods utilize convolutional neural networks (CNN) and can be summarized under two main groups [57]: data-driven end-to-end DL methods that map low-quality undersampled images to high-quality references [12], [16], [33], [58], and physics-constrained DL methods that iteratively solve inverse problems [15], [59], [60]. Recently, there have been several works involving transformer networks which attempt to aggregate global correlations among MR image patches [61], [62]. ### Uncertainty Estimation in Deep Learning MRI Reconstruction The robustness of DL networks for inverse problems such as MRI reconstruction has not been thoroughly investigated, and hence, prone to introduce image artifacts, especially when subjected to out-of-distribution inputs. Also, there is currently a lack of generalizable methods for estimating the uncertainty in DL reconstructions [37]. Reliable uncertainty estimation methods could be useful both as an evaluation metric and for gaining the interpretability of a given reconstruction model or a dataset [35]. Radiologists could utilize uncertainty maps in combination with the reconstructed MR image to make better-informed judgments. 
To this end, several uncertainty estimation strategies have been proposed in the context of DL reconstruction. Edupuganti et al. [37] utilized VAEs to create a probabilistic reconstruction approach, which utilizes MC sampling to generate uncertainty from the image's posterior. Kitichotkul et al. [43] utilized a CNN-based SURE framework to create heatmaps as per-pixel confidence intervals for compressed sensing MRI which communicated the reliability of reconstruction to the end-users. Zhang et al. [39] introduced an MRI reconstruction method that selects measurements dynamically and iteratively to reduce the uncertainty in reconstruction. Narnhofer et al. [63] proposed a deterministic MRI reconstruction that employs a Bayesian framework for uncertainty quantification in single and multi-coil undersampled MRI reconstruction. More recent work on Diffusion probabilistic Models [64] suggests drawing samples from the posterior distribution given the measured k-space using the Markov chain Monte Carlo (MCMC) method and computing uncertainty maps from those drawn samples. ### Monte Carlo Drop-Outs for Uncertainty Estimation in Deep Learning MRI Reconstruction Although, conventionally DL MRI reconstruction problem is modeled using a fixed set of model parameters, in the Bayesian Neural Network (BNN) approach, a distribution over the model parameters is learned. Let \(D_{tr}=\{X,Y\}\) denote a training dataset where \(\{X,Y\}\) are input-output paired data. In BNN formulation, the output for an arbitrary sample \(X^{*}\) from the test dataset, can be predicted with respect to the posterior distribution, \(p(\theta|X,Y)\) as below [65]: \[p(Y^{*}|X^{*},D_{tr})=\int p(Y^{*}|X^{*},\theta)p(\theta|D_{tr})d\theta \tag{4}\] where \(p(\theta|X,Y)=\frac{p(Y|X,\theta)p(\theta)}{p(Y|X)}\) and is intractable, i.e., cannot compute analytically. However, \(p(\theta|X,Y)\) can be approximated using another distribution \(q(\theta)\) whose structure is easy to evaluate. This can be done through variational inference techniques [66], [67], [67] and the approximation is usually performed by minimizing Kullback-Leibler (KL) divergence [68] between the variational inference and the posterior distribution so that \(q(\theta)\) is as close as possible to \(p(\theta|X,Y)\). In the MC Drop-Out approach, \(q(\theta)\) is parametrized using the drop-out of the model weights [69], i.e., \(q_{\alpha}(\theta)\) where \(\alpha\) denotes the drop-out fraction. In this approach, \(q_{\alpha}(\theta)\) is simply enforced over the model weights before each layer of the DL model. During inference, uncertainty can be computed using multiple forward passes with different drop-out realizations which is known as an MC simulation. By performing \(T\) stochastic forward passes and computing the variance across all the generated \(T\) outputs, the uncertainty can be formulated as the predictive variance of the output reconstruction estimation, \(\hat{\mathbf{x}}\): \[U_{m}=\text{Var}(\hat{\mathbf{x}})=\frac{1}{T}\sum_{t=1}^{T}\left(\hat{\mathbf{x}}_{t}- \frac{1}{T}\sum_{t=1}^{T}\hat{\mathbf{x}}_{t}\right)^{2} \tag{5}\] where \(U_{m}\) denotes the uncertainty estimation using the MC Drop-Outs approach and \(\hat{\mathbf{x}}_{t}\) is the reconstructed image during the \(t^{\text{th}}\) forward pass. ## 3 Methods ### Pixel Classification framework for MRI Reconstruction Our previous work [16] demonstrated that an image reconstruction task can be transformed into a classification task by the quantization of the target image. 
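As a concrete illustration of the MC Drop-Out baseline in Eq. 5, against which PixCUE is later compared, the following PyTorch sketch performs \(T\) stochastic forward passes with drop-out left active and takes the per-pixel variance as the uncertainty map. It is a minimal sketch under stated assumptions: `model` is any reconstruction network containing `nn.Dropout` layers and `zero_filled` is a zero-filled input image; neither is the specific network or data used in this paper.

```python
import torch

def mc_dropout_uncertainty(model, zero_filled, T=50):
    """Eq. 5: mean and per-pixel variance over T stochastic forward passes."""
    model.train()  # keep dropout active at inference (note: also affects batch norm)
    with torch.no_grad():
        recons = torch.stack([model(zero_filled) for _ in range(T)], dim=0)
    model.eval()
    # Biased (1/T) variance across the T reconstructions, matching Eq. 5.
    return recons.mean(dim=0), recons.var(dim=0, unbiased=False)

# Usage sketch:
# recon, U_m = mc_dropout_uncertainty(model, zero_filled, T=50)
```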
To transform the image reconstruction task into a pixel classification task, first, the target image is converted to an _n-bit_ unsigned representation where each pixel in the target image can assume to take only one of the \(2^{n}\) distinct pixel intensity levels. Since there are only \(2^{n}\) distinct pixel intensity levels, we can design a DL network that can classify each pixel into one of these \(2^{n}\) classes. Therefore, the output of the DL pixel classification network, \(\mathbf{x}^{\prime}~{}=~{}G_{\theta}^{\prime}(F^{H}y_{u})\) is of size \(N\times N\times D\), where \(D=2^{n}\) is the number of classes (or pixel intensity levels) and N is the spatial dimension of the image. \(G_{\theta}^{\prime}\) is a DL pixel classification network parameterized by \(\theta\). The last layer of the network consists of a softmax function along the class dimension that makes the sum of predicted output equals one along the class dimension, i.e. \(\sum_{c=0}^{D-1}\mathbf{x}_{r}^{\prime}(c)=1\); \(\forall r\), where \(\mathbf{x}_{r}^{\prime}\) is the output at pixel location, \(\mathbf{r}\), resulting in a predicted probability distribution for each pixel. The network \(G_{\theta}^{\prime}\) can be trained with the categorical cross-entropy loss, where the loss at pixel location \(\mathbf{r}\) is given by: \[\mathcal{L}_{\mathbf{r}}=\sum_{c=0}^{D-1}-\mathbf{x}_{r}^{tar}(c)~{}log~{}\mathbf{x}_{r}^{ \prime}(c) \tag{6}\] where \(\mathbf{x}_{r}^{tar}(c)=\left\{\begin{array}{ll}1&\text{if }c=h_{\mathbf{r}}\\ 0&\text{if }c\neq h_{\mathbf{r}}\end{array}\right.\) is the target which constitutes a one-hot encoding of the true labels for the classification where \(h_{\mathbf{r}}\) represents the pixel intensity level at pixel location \(\mathbf{r}\) of the quantized target image. During the training of the pixel classification network, \(\mathbf{x}_{r}^{\prime}\) converges to a vector that contains the predicted probability values for the corresponding quantized intensity levels. The final image \(\hat{\mathbf{x}}\) can be reconstructed in a pixel-wise manner by taking a weighted average of intensity levels (weighted by their corresponding predicted probabilities), and normalizing it as below: \[\hat{x}_{r}=\frac{1}{D-1}\sum_{c=0}^{D-1}c\ x_{r}^{\prime}(c) \tag{7}\] where \(\hat{x}_{r}\) is the final image value at pixel location \(r\). It should be noted that \(x^{\prime}\) is only utilized to train the network but the pixels in the final image were reconstructed using Eq. 7 resulting in floating point output values. ### Relationship between Uncertainty and Variance The variables \(x_{r}^{\prime}\) and \(x_{r}^{tar}\) can be considered as predicted and target probability distributions, respectively for each pixel. As shown in Eq. 6, we minimize the categorical cross-entropy between \(x_{r}^{tar}\) and \(x_{r}^{\prime}\). Since \(x_{r}^{tar}\) contains only a single non-zero entry, it can be interpreted as a Dirac delta function [70] and is constant for a given pixel making the entropy, \(H(x_{r}^{tar})=0\). Thus, the categorical cross-entropy between \(x_{r}^{tar}\) and \(x_{r}^{\prime}\) becomes equivalent to the KL-divergence between \(x_{r}^{tar}\) and \(x_{r}^{\prime}\) given by: \[\mathrm{D_{KL}}(x_{r}^{tar}||\ x_{r}^{\prime})=\sum_{c=0}^{D-1}x_{r}^{tar}(c) log\left(\frac{x_{r}^{tar}(c)}{x_{r}^{\prime}(c)}\right) \tag{8}\] Therefore, the cross-entropy minimization in Eq. 6 results in minimizing \(\mathrm{D_{KL}}(x_{r}^{tar}||\ x_{r}^{\prime})\) in Eq. 
8 and the optimal solution is obtained when \(x_{r}^{\prime}(c)=x_{r}^{tar}(c)\). However, in reality \(x_{r}^{\prime}(c)\) will only approximate \(x_{r}^{tar}(c)\) due to the variability in the input datasets (e.g. noise, MRI artifacts, and pathology introduced out-of-distribution samples) as well as the imperfect modeling process (e.g. over- or under-fitted models), thus uncertainty is introduced. Such uncertainty is manifested as variance in the predicted probability distribution \(x_{r}^{\prime}(c)\)[69]. ### Pixel Classification Uncertainty Estimation (PixCUE) We implement the fastMRI variational network (VN) architecture [27] as the base network of PixCUE. The VN architecture consists of unrolled CNNs with data consistency layers and at each iteration the k-space data (\(y_{k}^{\prime}\)) is modified as follows: \[y_{k+1}^{\prime}=y_{k}^{\prime}-\alpha_{k}M(y_{k}^{\prime}-y_{u})+\ FEG_{k}(RF ^{H}y_{k}^{\prime}) \tag{9}\] where \(G_{k}\) is the DL network at \(k^{\text{th}}\) iteration. For multichannel k-space data, we utilized two operators: (i) a _Reduce (R)_ operator which combines the multichannel images to a single channel, and (ii) an _Expand (E)_ operator which creates a multichannel image from the single channel image using estimated sensitivity maps. Other notations are described as previously. The estimation of the sensitivity maps is also learned during the training using a separate sensitivity estimation DL network. Interested readers can refer to the fastMRI VN paper [27] for a detailed description of the architecture. To implement the pixel classification network, only the last iteration of the VN architecture was modified such that the output from the VN is of size \(N\times N\times D\), where \(D=2^{n}\) is the number of classes. For training, the loss was calculated on the output probabilities as per Eq. 6 and the reference image was quantized to 8-bit (\(D=2^{8}\) pixel intensity levels). The trained classification network predicts the probability distribution for each pixel, \(\mathbf{x}_{r}^{\prime}\) as seen in Fig. 1, which then allows calculating the variance of the distribution. The proposed PixCUE uncertainty estimation can be computed as below: \[U_{p,r}=\frac{1}{D-1}Var(x_{r}^{\prime}) \tag{10}\] where \(U_{p,r}\) is the normalized PixCUE uncertainty estimation at location \(r\) and \(Var(x_{r}^{\prime})\) is simply the variance of the predicted probability distribution. It should be noted that, unlike split prediction head methods in the literature [44], in our proposed PixCUE method, the variance of prediction distributions emerges naturally from the pixel classification approach rather explicitly enforced as a result of minimizing the cross entropy. For regions of the reconstructed image where the uncertainty is high, the network's predicted probability \(\mathbf{x}_{r}^{\prime}\) is low. For regions where the uncertainty is low (such as the background in the image), the predicted probability \(\mathbf{x}_{r}^{\prime}\) is high and is a close approximation of the Dirac delta function. ### Variance Computation The proposed PixCUE framework has a unique characteristic which other conventional classification tasks in computer vision do not exhibit. The predicted distribution at each pixel location demonstrates a Gaussian shape and is symmetrical about the mean as can be seen in Fig. 1. This Gaussian profile arises as a result of the sorted ordering of class labels (or pixel intensity levels) from 0 to \(D-1\) which represents the tissue contrast. 
Hence, the predicted probability distribution can be fitted by a standard normal distribution, \(\mathbf{x}_{r}^{\prime}{\sim}N(\hat{\mathbf{x}}_{r},\sigma_{r}^{2})\) with the mean inherently being \(\hat{\mathbf{x}}_{r}\) (the weighted average pixel intensity computed using Eq. 7) and the variance \(\sigma_{r}^{2}\) being the uncertainty in prediction. In other words, the shape of the predicted probability distribution is broader for the regions of high uncertainty and narrower for the regions with low uncertainty. Figure 1: Overview of the PixCUE framework for uncertainty estimation in MRI Reconstruction. There are a couple of ways to compute the variance of \(\hat{\mathcal{X}}_{r}\) given its unique characteristics. The most accurate way is to fit \(x^{\prime}_{r}\) with a Gaussian function that has a mean of \(\hat{\mathcal{X}}_{r}\) and optimizing its variance. However, fitting a Gaussian function to each pixel individually is computationally expensive. Therefore, in this work, we utilize a simpler approximation which is computationally cheaper, yet delivers high accuracy in variance computation. We utilize the characteristics of a standard Gaussian curve, \(g(c)=\frac{1}{\sigma_{r}\sqrt{2\pi}}\,e^{-0.5\left(\frac{c}{\sigma_{r}}\right) ^{2}}\) evaluated on its variance value that yields \(g(c=\sigma_{r})\approx 0.6g_{max}\), where \(g_{max}\) is the maximum value of the Gaussian function. Note that, under the assumption that the predictive probability distribution \(x^{\prime}_{r}\) is Gaussian, we can simply capture its maximum as \(\max(x^{\prime}_{r})\). To reduce the impact of inaccuracies that may arise from measuring the maximum value, it is possible to calculate a 3-point average of consecutive neighboring points around the maximum value. Then, the variance of \(x^{\prime}_{r}\) can be estimated by counting the number of elements in \(x^{\prime}_{r}\) that is greater than \(0.6\max(x^{\prime}_{r})\). By further normalizing this quantity, it is possible to obtain an accurate estimate for Eq. 10. We followed this procedure to estimate uncertainty in our proposed PixCUE framework. ### Model implementation The underlying DL model of the PixCUE framework was trained using the Pytorch framework on NVIDIA V100 GPU. We utilized a total of six iterations in our implementation of the VN. Rectified Adam optimizer with a learning rate of 0.0001 was utilized and the model with the best validation loss was selected as the final model. The k-space data was undersampled using an equidistant undersampling pattern with 8% of center lines always acquired. A zero-filled image reconstructed from the undersampled data was utilized as an input to the network. The dynamic range of the target image (reference fully sampled) was quantized to 8-bit (i.e., 256-pixel intensity levels) to form the target distribution at a given pixel location. The categorical cross-entropy loss (Eq. 6) was utilized for training. At the time of inference, the predicted probabilities were utilized to compute the final image using Eq. 7 and the uncertainty map using Eq. 10. ## 4 Experiments ### Dataset The dataset utilized for the training was the fastMRI multi-coil brain dataset [71] which consisted of T1, T2, T1-POST contrast, and FLAIR acquisitions in k-space. The distribution of the dataset for different contrasts is provided in Table 1. All the experiments were performed on the validation dataset. 
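To make the reconstruction and uncertainty steps concrete, the following NumPy sketch implements the weighted-average reconstruction of Eq. 7 and the 0.6·max spread-counting approximation described in Section 3.4. It is an illustrative sketch under assumptions: the array name, shape convention, and normalization constant are placeholders, not the exact implementation.

```python
import numpy as np

def pixcue_from_probs(probs):
    """probs: (H, W, D) softmax output with D = 256 quantized intensity levels."""
    D = probs.shape[-1]
    levels = np.arange(D)

    # Eq. 7: per-pixel weighted average of intensity levels, scaled to [0, 1].
    recon = (probs * levels).sum(axis=-1) / (D - 1)

    # Section 3.4: for a Gaussian-shaped distribution, the number of entries
    # above 0.6 * max measures the distribution's spread; after normalization
    # this count serves as the per-pixel uncertainty estimate of Eq. 10.
    peak = probs.max(axis=-1, keepdims=True)
    spread = (probs > 0.6 * peak).sum(axis=-1)
    return recon, spread / (D - 1)

# Usage: recon, U_p = pixcue_from_probs(probs), with probs from the network.
```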
\begin{table} \begin{tabular}{|c|c|c|} \multicolumn{3}{c}{validation.} \\ \hline **Contrast** & **Training** & **Validation** \\ \hline FLAIR & 343 & 107 \\ \hline T1 & 498 & 169 \\ \hline T1-POST & 949 & 287 \\ \hline T2 & 2678 & 815 \\ \hline **Total** & **4468** & **1378** \\ \hline \end{tabular} \end{table} Table 1: Contrast distribution of the number of multi-slice images utilized for training and validation. ### List of Experiments We conducted several experiments to assess the performance of PixCUE and visualized the local correspondence of the estimated uncertainties with the actual absolute error in the reconstruction. Experiments I to IV are designed likewise with various adversarial conditions introduced at the input. Experiment V assesses the local correspondence of PixCUE with the MC Drop-Out method. Further, in Experiment VI, we formulate an empirical relationship between the estimated uncertainties using PixCUE and the reconstruction performance metrics. The details of our experiments are listed below. _Experiment I - k-space undersampling:_ In this experiment, we evaluated the performance of PixCUE for accelerated MRI reconstruction. Specifically, k-space was undersampled in the phase encoding direction by a factor of four using a random undersampling pattern with the central 8% of phase encode lines always sampled. Both image reconstruction and uncertainty estimation were then performed using the undersampled k-space. Reconstruction errors and uncertainty maps were compared. _Experiment II - SNR variation:_ Imaging parameters such as TE/TR/flip-angle/image resolution result in different signal-to-noise (SNR) ratios in MR images. In this experiment, we simulated SNR variations by adding Gaussian noise to the complex-valued k-space and evaluated the effect on the reconstruction and how the estimated uncertainty captured out-of-distribution information introduced by the noise in the input datasets. _Experiment III - Sampling pattern and acceleration factor variation:_ DL models are often trained for a certain undersampling pattern, however in practice undersampling patterns may vary due to changes in the imaging field of view and k-space sampling. This experiment analyzed how the estimated uncertainty varies if the sampling pattern is changed. The model was trained with an undersampling factor of four, but during the image reconstruction, the undersampling factor was changed to six with the central 6% of phase encode lines sampled. _Experiment IV - Pathology:_ In this experiment, we evaluated PixCUE on a case consisting of pathology (i.e., tumor) for each contrast. We investigated how the estimated uncertainty captured out-of-distribution information introduced by pathology. The reconstruction error and the estimated uncertainty were calculated and compared. _Experiment V - Comparison between PixCUE with the MC Drop-Out method._ To compare the uncertainty estimations using PixCUE and MC methods, we performed an MC Drop-Out simulation experiment using the VN backbone network which yields multiple predicted distributions. We then computed an average probability distribution as a representation of the MC Drop-Out inferences and calculated the corresponding variance similar to PixCUE. Then we visualize the uncertainty maps of these two methods. We also plot the joint distribution of the uncertainty values predicted by these two approaches for randomly selected 100 pixels of an image in order to investigate the correlation between the two approaches. 
In the MC experiment, we utilized a drop-out fraction of 0.2 and performed 50 iterations for the variational inference. _Experiment VI - Uncertainty vs quantitative metrics:_ In this experiment, we empirically evaluated the relationship between the estimated uncertainty using PixCUE and qualitative metrics including normalized mean squares error (NMSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) index. We calculated the average uncertainty and quantitative metrics on all the images in the validation dataset and performed curve fitting using linear regression and the R\({}^{2}\) value was utilized to measure the goodness of fit. ## 5 Results Fig. 2 shows the reconstructed images along with the computed uncertainty and the absolute error images for four different contrasts. Visual inspection showed that the uncertainty estimated by PixCUE correlates with the error suggesting that uncertainty maps can be utilized as a proxy for error maps. Noticeably, a hallucination (blue arrow) in the reconstructed image is estimated as a region of high uncertainty. When noise was added to the images to simulate different SNR scenarios (Fig. 3), minor changes were observed in uncertainty. When the sampling pattern was changed (Fig. 4) to an undersampling factor of six, still the pixel classification framework was able to capture features that showed greater error as seen by the substantial spatial correlation between uncertainty and the absolute error. Fig. 5 shows the images with pathology, it can be observed that the uncertainty is higher in the regions of the tumor. Since the tumor data is underrepresented during training, the PixCUE framework was able to identify it by predicting high spatial uncertainty in the tumor region. Fig. 6 demonstrates the correlation between PixCUE and the MC method. By comparing the left and right columns of Fig. 6(a), we observed that PixCUE produced almost similar uncertainty maps to that of the MC method. Further, when the joint distribution of uncertainties of PixCUE and MC method was visualized, a high linear correlation was observed and the marginal distributions of the two methods showed similar profiles. This shows that the pixel classification framework with a single inference captures uncertainty up to a similar extent that is captured by the MC method but with a less computational cost. Fig. 7 shows the plot of quantitative scores with respect to the computed average uncertainty of the image along with the equation of the fitted curve and \(R^{2}\) value (goodness of fit). We utilized linear regression curve fitting. We observed that a linear fitting curve provided the best fit for NMSE (Fig. 7 Top row) and SSIM (Fig. 7 Bottom row), while an exponential curve fitted better for the PSNR (Fig. 7 Middle row). The \(R^{2}\) value was substantially higher for NMSE (\(\geq\)0.66). In terms of SSIM, the \(R^{2}\) value is higher for only T2 contrast. The high \(R^{2}\) values of uncertainty for NMSE demonstrate that the uncertainty estimated by PixCUE is a good proxy metric for estimating the actual error of the reconstructed images. These plots visually demonstrate an empirical relationship between the quantitative metrics of the reconstructed images and the predicted uncertainty, which is crucial for the design of accurate image reconstruction models. Figure 3: Results for Experiment II. 
_Input_: noisy zero-filled input image to DL network reconstructed from undersampled k-space; _Ref_: ground truth fully sampled image; _Recon_: reconstructed image using DL pixel classification network; _Error_: absolute error image; _PixCUE_: uncertainty obtained using the pixel classification framework. Visual inspection suggests that estimated uncertainty is sensitive to noise as evident from uniform higher values within the brain region. Figure 2: Results for Experiment I. _Input_: zero-filled input image to DL network reconstructed from undersampled k-space; _Ref_: ground truth fully sampled image; _Recon_: reconstructed image using DL pixel classification network; _Error_: absolute error image; _PixCUE_: uncertainty obtained using the pixel classification framework. Visual inspection suggests that there exists a correlation between the error images and calculated uncertainties. The blue arrow shows a region of hallucination (artificial feature) with high uncertainty. Figure 4: Results for Experiment III. _input_: zero-filled input image to DL network reconstructed from undersampled k-space (undersampling factor of 6); _Ref_: ground truth fully sampled image; Recon: reconstructed image using DL pixel classification network; _Error_: absolute error image; _PixCUE_: uncertainty obtained using the pixel classification framework. It can be observed that as the undersampling pattern diverges from the training undersampling pattern, the algorithm suffers marked degradation in performance. The pixel classification framework was able to capture this divergence of the sampling pattern. Figure 5: Results for Experiment IV. _input_: zero-filled input image to DL network reconstructed from undersampled k-space (undersampling factor of 6); _Ref_: ground truth fully sampled image; Recon: reconstructed image using DL pixel classification network; _Error_: absolute error image; _PixCUE_: uncertainty obtained using the pixel classification framework. The pixel classification uncertainty estimate was sensitive to underrepresented data (tumor). Figure 6: Results for Experiment V. (a) _PixCUE:_ uncertainty obtained using the pixel classification framework. _MC Drop-Out:_ uncertainty obtained using the MC method. (b) Joint distribution of the uncertainty values predicted by the proposed PixCUE method and the MC method utilizing 100 randomly selected pixels from each uncertainty map shown in Fig. 6(a). Figure 7: Results for Experiment VI. Relationship between quantitative metrics (NMSE: top row, PSNR: middle row, SSIM: bottom row) and uncertainty estimated by PixCUE; legend shows the equation of the fitted curve, and the \(R^{2}\) value represents the goodness of fit (higher the better fit). Discussion Our experiments show that the proposed PixCUE framework can produce reconstructed images along with the uncertainty estimations at no extra computation burden. The predicted uncertainty was found consistent with the previous literature [37], [38]. Our experiments highlighted that PixCUE is capable of capturing the actual image error in the reconstructed image. PixCUE is also robust to practical adversarial scenarios such as noise, change of sampling pattern, and identification of the out-of-distribution (underrepresented) data such as pathologies in images. However, contrast changes such as pathology do not necessarily degrade the reconstruction image quality. 
Even with the underrepresented pathology data, the model was able to reconstruct the tumor region faithfully provided sufficient training, and the reconstruction error is captured better by our PixCUE framework. Our overall results suggest modeling uncertainties is a critical step for accurate DL-based image reconstruction with increased explanatory power in their reconstruction errors. The computational cost is also an important aspect of uncertainty modeling that needs to be considered for practical purposes. In general, uncertainties are estimated using variational inference which requires multiple inferences of the same input. Literature suggests around 50 inferences implying 50 times more computation [35], [38] in addition to image reconstruction time. The increased computational time can make real-time uncertainty estimation challenging. On the other hand, our PixCUE framework does not require additional computation and the uncertainty emerges as a co-product of the standard image reconstruction. For instance, the uncertainty can be computed in the order of a few seconds, whereas MC methods can take minutes for the same image which may not meet the demand for real-time applications. Nevertheless, Experiment V shows a close correlation between the uncertainty estimations produced by PixCUE and the MC method. The real-time uncertainty estimation capability of PixCUE provides an opportunity to design future uncertainty-guided image reconstruction methods. In this work, we included several practical scenarios expected in clinical practice. However, we have some limitations of the study, for instance, we simulated the most frequently expected out-of-distribution cases including noise, and change of sampling patterns but other out-of-distribution cases including hardware configuration and instrument-related changes have not been explored. Out-of-distribution can also occur when dealing with variations in anatomical structures. For example, we can expect that a model trained on brain datasets of a particular contrast would have reduced performance when applied to whole-body imaging. It is also worthwhile to note that, in this work, we have employed a state-of-the-art network with reconstruction performance similar to the fastMRI challenge results, indicating the model parameters are well-fitted. However, we have not conducted an extensive test on different network architectures and/or with over- or under-fitted model parameters. These variations can further influence the quantitative relationship between uncertainties and reconstruction errors which warrants further investigations, and the proposed framework can be readily applied to such studies. ## 7 Conclusion In this work, we estimated and evaluated uncertainties during MR image reconstruction using a deep pixel classification approach. A novel method of estimating uncertainty, PixCUE was proposed and validated on large datasets with four different contrasts. Different realistic practical scenarios were simulated and their impacts on uncertainty were compared. Overall, it was observed that the uncertainty estimated by PixCUE corresponds well with the actual error in image reconstruction, hence can be utilized as a proxy for error. The lower computational cost of calculating uncertainty makes it practical for time-constrained medical image reconstruction applications. 
Also, an empirical relationship between uncertainties and quantitative image quality metrics is identified in this paper and can be useful in estimating image reconstruction errors in practice when dealing with noise and sampling pattern changes. ## Acknowledgments This work was conducted as a part of the projects titled "Simultaneous to synergistic MR-PET: integrative brain imaging technologies" funded by the Australian Research Council Linkage Program (LP170100494) and "Biophysics-informed deep learning framework for magnetic resonance imaging" funded by the Australian Research Council Discovery Program (DP210101863).
2309.05032
Unified Contrastive Fusion Transformer for Multimodal Human Action Recognition
Various types of sensors have been considered to develop human action recognition (HAR) models. Robust HAR performance can be achieved by fusing multimodal data acquired by different sensors. In this paper, we introduce a new multimodal fusion architecture, referred to as Unified Contrastive Fusion Transformer (UCFFormer) designed to integrate data with diverse distributions to enhance HAR performance. Based on the embedding features extracted from each modality, UCFFormer employs the Unified Transformer to capture the inter-dependency among embeddings in both time and modality domains. We present the Factorized Time-Modality Attention to perform self-attention efficiently for the Unified Transformer. UCFFormer also incorporates contrastive learning to reduce the discrepancy in feature distributions across various modalities, thus generating semantically aligned features for information fusion. Performance evaluation conducted on two popular datasets, UTD-MHAD and NTU RGB+D, demonstrates that UCFFormer achieves state-of-the-art performance, outperforming competing methods by considerable margins.
Kyoung Ok Yang, Junho Koh, Jun Won Choi
2023-09-10T14:10:56Z
http://arxiv.org/abs/2309.05032v1
# Unified Contrastive Fusion Transformer for Multimodal Human Action Recognition ###### Abstract Various types of sensors have been considered to develop human action recognition (HAR) models. Robust HAR performance can be achieved by fusing multimodal data acquired by different sensors. In this paper, we introduce a new multimodal fusion architecture, referred to as Unified Contrastive Fusion Transformer (UCCFormer) designed to integrate data with diverse distributions to enhance HAR performance. Based on the embedding features extracted from each modality, UCCFormer employs the Unified Transformer to capture the inter-dependency among embeddings in both time and modality domains. We present the Factorized Time-Modality Attention to perform self-attention efficiently for the Unified Transformer. UCCFormer also incorporates contrastive learning to reduce the discrepancy in feature distributions across various modalities, thus generating semantically aligned features for information fusion. Performance evaluation conducted on two popular datasets, UTD-MHAD and NTU RGB+D, demonstrates that UCCFormer achieves state-of-the-art performance, outperforming competing methods by considerable margins. Human Action Recognition, Sensor Fusion, Multimodal Fusion, Unified Transformer, Factorized Attention, Contrastive Learning, UTD-MHAD, NTU RGB+D ## I Introduction Human Action Recognition (HAR) is a process that involves the automatic identification and classification of human actions based on sensor data. HAR has a wide range of applications, including healthcare monitoring [1], fitness tracking [2], action analysis, gesture-based interfaces [3], and context-aware systems [4]. The ability to automatically recognize and classify human activities based on sensor measurements has the potential to enhance various domains and improve user experiences. Various sensors such as video camera, wearable devices [5], and environmental sensors [6] can be utilized to acquire data for HAR. Based on the sensor types, HAR methods can be broadly classified into two categories: those based on visual data and those based on non-visual data. Visual data is acquired from camera sensors, including video, depth, and infrared cameras. These camera sensors capture high-resolution images that depict the movement and posture of individuals. Deep learning models, especially convolutional neural networks (CNNs), have been widely employed for HAR using visual data [7, 8, 9, 10]. On the other hand, non-visual data can be acquired from various sensors including accelerometers, gyroscopes, magnetometers, and pressure sensors. These sensors capture non-visual data related to the body's physical movements. Devices like smartwatches [11], fitness trackers [12], and smartphones [13] integrate such sensors, allowing users to continually monitor their activities. Various deep learning architectures for modeling sequential data have been used to conduct HAR based on non-visual data [14, 15, 16]. The use of multimodal data is motivated by several factors. First, different sensor modalities capture different aspects of human motion and provide complementary information that enhances the accuracy of action recognition [17]. Second, using multiple sensor modalities can offer redundancy, which can improve the robustness of action recognition [18]. Third, multimodal data can provide a more comprehensive representation of human action, improving generalization and adaptability of HAR models [18, 19]. 
By capturing different aspects of human motion across various sensor modalities, the models can be trained to generalize across diverse scenarios, users, and environments. However, utilizing multimodal data in HAR poses some challenges. One of the main challenges is establishing a joint representation for multimodal data that captures complex relational semantics. To maximize the benefits of sensor fusion, HAR models should produce the complementary yet semantically well aligned features from each modality. Another challenge is optimally combining the data from different sensors and modalities. The varied forms of data are typically noisy, intricate, and exhibit high variability. These challenges necessitate an efficient sensor fusion algorithm capable of modeling their relational structure and integrating data adaptively based on its quality. To date, various multimodal fusion architectures have been proposed for HAR. In [20], Meoncks et al. proposed adaptive feature processing for HAR using RGBD and LiDAR, and in [21], Shahroudy et al. presented the shared specific feature factorization network to combine multimodal signals. Hierarchical Multimodal Self-Attention (HAMLET) [22] combined global and local features extracted from RGB and IMU through Transformer. Multitask Learning-based Guided Multimodal (MuMu) [23] integrated a multi-task learning strategy to a multimodal fusion network. Vision-to-Sensor Knowledge Distillation (VSKD) strategy [24] proposed the network for transferring the knowledge from video modality to inertial modality. MAVEN [25] enhanced the performance of multimodal fusion by using a memory-augmented recurrent network and aligning representations using an attention mechanism. In this study, we introduce a new unified multimodal fusion framework for HAR, referred to as the Unified Contrastive Fusion Transformer (UCFFormer). This framework effectively combines multimodal data of varying distributions, including both visual and nonvisual data. The UCFFormer is different from the existing fusion architectures in these two aspects. First, UCFFormer derives joint representation of multimodal sensor data using a unified Transformer architecture, which has been successfully used for vision-language modeling lately [26]. It first extracts the embedding features of the same size from each multimodal input and encodes them via Multimodal Self-attention capturing pairwise interactions across both time and modality domains. We introduce an efficient implementation of Multimodal Self-attention for the unified Transformer. We devise a Factorized Time-Modality Self-attention to independently encode the embeddings in both time and modality domains. Two strategies are proposed for factorized attention: parallel and sequential factorization. While parallel factorization conducts self-attention across both time and modality domains simultaneously, sequential factorization alternates between them in each Transformer layer. Next, we propose a feature alignment strategy based on a contrastive learning approach. By minimizing the cosine similarity metric defined between the embedding vectors of different modalities, the proposed method can mitigate the domain gap between the modalities and thereby boost the effectiveness of multimodal feature fusion. As a result, the proposed Multimodal Contrastive Alignment Network (MCANet) generates embedding features that are more coherent and semantically aligned. 
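To illustrate how such an alignment objective can be written, the PyTorch sketch below shows a symmetric cosine-similarity contrastive loss between the main-modality embedding and one sub-modality embedding of the same clips. It is only a generic sketch of the idea; the actual MCANet loss, its temperature, and its pairing scheme may differ from what is shown here.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(z_main, z_sub, temperature=0.07):
    """z_main, z_sub: (B, d) embeddings of the same B clips from two modalities."""
    z_main = F.normalize(z_main, dim=-1)
    z_sub = F.normalize(z_sub, dim=-1)

    # Cosine-similarity logits between every main/sub pair in the batch.
    logits = z_main @ z_sub.t() / temperature

    # Matching clips (the diagonal) are positives; all other clips are negatives.
    targets = torch.arange(z_main.size(0), device=z_main.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```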
We evaluate the performance of UCFFormer on two widely used multimodal HAR datasets, UTD-MHAD [27] and NTU RGB+D [28]. Our evaluation demonstrates that UCFFormer outperforms existing HAR models by significant margins, recording a new state-of-the-art performance. In particular, UCFFormer achieves remarkable 99.99% Top-1 accuracy on UTD-MHAD dataset. The key contributions of this study are summarized as follows: * We present a novel multimodal fusion network called UCFFormer. Our approach leverages a unified Transformer to enhance the embedding features extracted from different sensor data modalities. This structure enables sensor fusion whose design is agnostic to the type and number of multimodal inputs, without the need for specific custom designs for different modality types. * We introduce the Factorized Time-Modality Self-attention for the efficient encoding of embedding features. To this goal, we present two distinct factorization strategies. * We further enhance the effects of sensor fusion by aligning the embedding features semantically using contrastive learning. To the best of our knowledge, we are the first to demonstrate the efficacy of contrastive learning in multimodal HAR. Our contrastive learning-centric alignment technique can be seamlessly integrated into any feature fusion module. * Our UCFFormer achieves the state-of-the-art performances on UTD-MHAD [27] and NTU RGB+D [28] datasets. ## II Related Works ### _Multimodal Fusion Methods for HAR_ Early studies in multimodal fusion presented a late fusion approach which combined classification results obtained from each modality to obtain a final prediction. In [29, 30], Dawar et al. encoded RGB images using CNN and inertial sensor data using Long Short Term Memory (LSTM) and combined the resulting classification outcomes using the weights derived from the decision fusion scores. In [31], 1D CNN, 2D CNN, and Recurrent Neural Network (RNN) were employed to predict action class based on Gyroscope data, RGB data, and human joint pose data, respectively and the classification results were combined. In [32], WiVi utilized a CNN backbone to represent WiFi signal and a C3D backbone [33] to process RGB data. These were then integrated at the decision level through an ensemble fusion model. Several studies have explored achieving information fusion at the intermediate feature level. In [34], a Keyless Attention method was presented to aggregate the features extracted from multimodal data. HAMLET [22] employed Hierarchical Multimodal Self-attention to obtain action-related spatio-temporal features. MuMu [23] was trained to conduct multiple tasks, i.e., the target task of HAR and an auxiliary task of human action grouping. The auxiliary task assists the target task in extracting appropriate multimodal representations. VSKD [24] utilized ResNet18 as a teacher network for video data and employed multi-scale TRN [35] with BN-Inception as a student network for inertial data. Distance and Angle-wise Semantic Knowledge (DASK) loss was proposed to account for the modality differences between the vision and sensor domains. MAVEN [25] employed the feature encoders to generate modality-specific spatial features that were subsequently stored in memory banks. The memory banks were used to capture long-term spatiotemporal feature relationships. 
Our UCFFormer differs from the aforementioned methods in that a joint representation of multimodal data is found through the time-modality factorized self-attention of the unified Transformer and through a contrastive learning framework. ### _Contrastive Learning_ Contrastive learning is a type of self-supervised learning that provides a means of understanding the differences between representations. It stems from the work of Bromley et al., who introduced the concept of a Siamese Network, consisting of two identical networks that share weights for metric learning [36]. Specifically, contrastive learning examines which pairs of data points are similar and which are different to learn high-level data features before performing classification or segmentation tasks. Early studies used a contrastive learning framework to learn invariant mappings for recognition using a contrastive pair loss in discrimination models [37, 38]. Inspired by the triplet loss, recent studies applied feature extraction methods that minimize the distance between representations of similar positive pairs and maximize the distance between representations of different negative pairs [39, 40, 41]. Recently, contrastive learning has been widely used for various image classification tasks. Momentum Contrast (MoCo) [42, 43] used a momentum-updated encoder to produce representations of negative samples, providing a large and consistently maintained set of negatives for contrastive loss calculations. SimCLR [44, 45] learned representations by maximizing the similarity between different augmentations of the same data sample while minimizing the similarity between different samples. Bootstrap Your Own Latent (BYOL) [46] achieved self-supervised image representation learning without using negative samples by creating two augmented views of the same image. ### _Unified Transformer for Multimodal Task_ Transformer architectures have achieved significant success in machine learning tasks, including natural language processing [47, 48] and computer vision [49]. However, they have mainly been limited to tasks within a single domain or specific multimodal domains. To overcome this challenge, a Unified Transformer model [50] has been proposed as a foundation model for multimodal data. The Unified Transformer consists of transformer encoders that jointly encode multimodal inputs and a transformer decoder over the encoded input modalities, followed by task-specific output heads applied to the decoder hidden states to make final predictions for each task. The Unified Transformer handles multi-task and multimodal learning in a single model with fewer parameters, moving toward general intelligence. The Unified Transformer has been used for a variety of tasks that involve multimodal data. UFO [51] used a single transformer network for multimodal data and implemented multi-task learning during vision-language pre-training. The same transformer network was used as an image encoder, a text encoder, or a fusion network in the different pre-training tasks. UniFormer [52] unified 3D convolution and spatio-temporal self-attention to generate a transformer embedding vector. UniFormer's relation aggregator handles both spatio-temporal redundancy and dependency by learning local and global token correlations in shallow and deep layers, respectively. 
UniTranSeR [53] embedded the multimodal features into a unified Transformer semantic space to prompt inter-modal interactions, and then employed a Feature Alignment and Intention Reasoning layer to perform cross-modal entity alignment. ## III Unified Contrastive Fusion Transformer (UCFFormer) ### _Overview_ Figure 1 depicts the overall structure of the proposed UCFFormer. The proposed UCFFormer fuses the information from \(N\) multimodal sensors. First, \(N\) separate backbone networks are employed to extract sequential feature vectors of length \(T\) from each modality. The feature vectors are linearly projected to produce \(NT\) embedding vectors of the same size. The projected embedding vectors serve as basic semantic elements represented in the time and modality domains. The Factorized Time-Modality Transformer (FTMT) encodes the embedding vectors using a unified Transformer architecture. The unified Transformer models both intra-modality and inter-modality interactions simultaneously to produce the updated embedding vectors. To facilitate effective interaction modeling, FTMT employs a _factorized self-attention mechanism_ that conducts the encoding process separately in the temporal and modality domains. Next, the Multimodal Contrastive Alignment Network (MCANet) combines the features generated by FTMT. Among the \(N\) multimodal sensors, we designate one as the main-modality sensor and the others as the \(N-1\) sub-modality sensors. MCANet boosts the effect of feature fusion by aligning the sub-modality features with those of the main modality through contrastive learning. Finally, the combined features are passed through a multi-layer perceptron (MLP) layer followed by a softmax layer to generate the final classification result. ### _Setup and notations_ We assume that the \(N\) sensor data streams are temporally synchronized. To achieve this, we resample the data such that their sampling rates are all identical. Our model takes \(T\) consecutive samples of all modality data in a sequential manner. We denote the \(T\) signal samples acquired from the \(N\) modality sensors as \(x_{1}[1:T],x_{2}[1:T],...,x_{N}[1:T]\), where \(x_{n}[1:T]=\{x_{n}[1],...,x_{n}[T]\}\). The dimension of each sample is different for each modality. ### _Multimodal Feature Extractor_ Different backbone networks are employed to extract feature vectors from each modality's data. Suppose that the feature vectors \(f_{n}[1:T]\) are obtained from the input data \(x_{n}[1:T]\). Then, we apply a linear projection to map the feature vectors into embedding vectors of common size \(d\), i.e., \[\begin{split} Z_{1}[1:T]&=W_{1}\cdot f_{1}[1:T]\\ \vdots\\ Z_{N}[1:T]&=W_{N}\cdot f_{N}[1:T].\end{split} \tag{1}\] These \(NT\) embedding vectors form the basis for finding the joint representation of multimodal data. ### _Factorized Time-Modality Transformer Encoder_ The FTMT encoder encodes the embedding vectors, capturing their high-level inter-dependencies. To this end, we utilize the unified Transformer, which has been used for vision-language multimodal data modeling [26]. The unified Transformer leverages Transformer self-attention to encode the embedding vectors across different modalities and time steps. However, applying full self-attention over the \(NT\) embedding vectors requires a computational complexity of \(\mathcal{O}(N^{2}T^{2})\) and a high-capacity network for learning all possible pairwise relations among the \(NT\) elements. 
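To make the embedding setup of Eq. (1), on which this encoder operates, more concrete, the following minimal sketch (our own illustrative PyTorch code, not the authors' implementation; the backbone feature sizes, batch size and variable names are assumptions) projects per-modality backbone features of different dimensions into a common embedding size \(d\), producing the \(NT\) embedding vectors consumed by FTMT.

```python
import torch
import torch.nn as nn

class MultimodalProjector(nn.Module):
    """Maps per-modality features f_n[1:T] (different sizes) to d-dim embeddings Z_n[1:T] (Eq. 1)."""
    def __init__(self, feat_dims, d=512):
        super().__init__()
        # One linear map W_n per modality n = 1..N
        self.proj = nn.ModuleList([nn.Linear(dim, d) for dim in feat_dims])

    def forward(self, feats):
        # feats: list of N tensors, each of shape (batch, T, feat_dims[n])
        # returns a tensor of shape (batch, N, T, d) holding the NT embedding vectors
        z = [p(f) for p, f in zip(self.proj, feats)]
        return torch.stack(z, dim=1)

# Example with assumed feature sizes for RGB, skeleton and inertial backbones
projector = MultimodalProjector(feat_dims=[2048, 256, 128], d=512)
feats = [torch.randn(2, 8, 2048), torch.randn(2, 8, 256), torch.randn(2, 8, 128)]
Z = projector(feats)  # (2, 3, 8, 512): N=3 modalities, T=8 time steps
```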
To cope with this complexity issue, we employ the Factorized Time-Modality Self-attention, which conducts self-attention across the time and modality domains independently. We present two distinct versions of factorized self-attention, which are differentiated by their respective arrangements of the time-domain and modality-domain self-attention operations. #### III-D1 Module 1: Simultaneous Time-Modality Factorization The Simultaneous Time-Modality Factorization (FTMT-Sim) approach conducts self-attention operations concurrently in the time and modality domains and merges the encoded features in the final attention layer. Figure 2 (a) depicts the structure of FTMT-Sim. For each time step \(t\), the embedding vectors across different modalities are packed into a matrix \(Z[t]=[Z_{1}[t],...,Z_{N}[t]]\). Similarly, the embedding vectors across time steps are packed into a matrix \(Z_{n}=[Z_{n}[1],...,Z_{n}[T]]\). Then, self-attention is applied independently to each axis in parallel. First, self-attention across modalities is performed as \[\begin{split} Z^{(l+1)}[t]&=\mathrm{ModalityAttention}(Z^{(l)}[t])\\ &=\mathrm{Softmax}\left(\frac{Q^{(l)}[t]\,(K^{(l)}[t])^{T}}{\sqrt{d_{k}}}\right)V^{(l)}[t],\end{split} \tag{2}\] where \[\begin{split} Q^{(l)}[t]&=Z^{(l)}[t]\cdot W_{q}^{(l)}\\ K^{(l)}[t]&=Z^{(l)}[t]\cdot W_{k}^{(l)}\\ V^{(l)}[t]&=Z^{(l)}[t]\cdot W_{v}^{(l)},\end{split} \tag{3}\] \(l\in[1,L]\) is the index of the attention layer and \(W_{q}^{(l)}\in\mathbb{R}^{d\times d_{k}}\), \(W_{k}^{(l)}\in\mathbb{R}^{d\times d_{k}}\), and \(W_{v}^{(l)}\in\mathbb{R}^{d\times d_{v}}\) are the linear weights, where \(d_{k}\) and \(d_{v}\) represent the dimensions of the keys and values, respectively. For brevity, we exclude the notation for multi-head attention. In our implementation, both \(d_{k}\) and \(d_{v}\) are set to 64. Next, positional encoding [54] is applied to the embedding vectors in \(Z_{n}\) for each modality index \(n\). Then, self-attention across time steps is performed as \[\begin{split} Z_{n}^{(l+1)}&=\mathrm{TemporalAttention}(Z_{n}^{(l)})\\ &=\mathrm{Softmax}\left(\frac{Q_{n}^{(l)}\,(K_{n}^{(l)})^{T}}{\sqrt{d_{k}}}\right)V_{n}^{(l)},\end{split} \tag{4}\] where \[\begin{split} Q_{n}^{(l)}&=Z_{n}^{(l)}\cdot W_{n,q}^{(l)}\\ K_{n}^{(l)}&=Z_{n}^{(l)}\cdot W_{n,k}^{(l)}\\ V_{n}^{(l)}&=Z_{n}^{(l)}\cdot W_{n,v}^{(l)},\end{split} \tag{5}\] and \(W_{n,q}^{(l)}\in\mathbb{R}^{d\times d_{k}}\), \(W_{n,k}^{(l)}\in\mathbb{R}^{d\times d_{k}}\), and \(W_{n,v}^{(l)}\in\mathbb{R}^{d\times d_{v}}\) are the linear weights. After \(L\) attention layers, the attention values \(Z^{(L)}[1],...,Z^{(L)}[T]\) and \(Z_{1}^{(L)},...,Z_{N}^{(L)}\) are arranged, concatenated and linearly projected, resulting in the generation of the final \(NT\) features \(C_{1}[1:T],...,C_{N}[1:T]\). Note that a skip connection, originating from the initial values \(Z_{n}^{(0)}\) and \(Z^{(0)}[t]\), is incorporated during the process of generating the final features. #### III-D2 Module 2: Sequential Time-Modality Factorization (FTMT-Seq) Unlike FTMT-Sim, the Sequential Time-Modality Factorization (FTMT-Seq) approach applies the self-attention operations along the time axis and the modality axis one after the other. First, after positional encoding is performed, the time-domain self-attention operation is applied as \[Y_{n}^{(l)}=\mathrm{TemporalAttention}(Z_{n}^{(l)}). \tag{6}\] Then, the outputs \(Y_{1}^{(l)},...,Y_{N}^{(l)}\) are rearranged to \(Y^{(l)}[1],...,Y^{(l)}[T]\). 
Then, the modality-domain self-attention follows \[Z^{(l+1)}[t]=\mathrm{ModalityAttention}(Y^{(l)}[t]). \tag{7}\] After passing through \(L\) attention layers and a skip connection from the input, FTMT-Seq produces the final features \(C_{1}[1:T],...,C_{N}[1:T]\). Fig. 1: **Overall Architecture**: The proposed UCFFormer first represents each raw multimodal input in a shared feature space. FTMT encodes these embedding features, capturing their dependencies within the time-modality domain using the Unified Transformer. Subsequently, MCANet refines these embeddings by aligning them across modalities through contrastive learning. These enhanced embeddings are then aggregated, leading to the final classification results. ### _Multimodal Contrastive Alignment Network_ Utilizing the output of FTMT, \(C_{1}[1:T],...,C_{N}[1:T]\), MCANet employs Weighted Multimodal Feature Fusion to combine the multimodal features. In our framework, we let \(C_{1}[1:T]\) be the main modality features and the rest be the sub-modality features. The primary goal of this approach is to selectively aggregate the information that is pertinent to the main modality data from the sub-modality data. For each time step \(t\), Weighted Multimodal Feature Fusion combines the multimodal features as \[C_{agg}[t]=C_{1}[t]+\sum_{n=2}^{N}\mathrm{sigmoid}(\mathrm{sim}(C_{1}[t],C_{n}[t]))C_{n}[t], \tag{8}\] where \(\mathrm{sim}(A,B)\) denotes the cosine similarity measure \[\mathrm{sim}(A,B)=\frac{A^{T}\cdot B}{\|A\|_{2}\|B\|_{2}}. \tag{9}\] Note that the sub-modality features \(C_{n}[t]\), \(n\neq 1\), are weighted by \(\mathrm{sigmoid}(\mathrm{sim}(C_{1}[t],C_{n}[t]))\), i.e., according to their similarity with the main modality features. Additionally, we employ a contrastive learning framework to reduce the feature discrepancy across different modalities. By minimizing a contrastive loss that captures the dissimilarities between the different modalities during training, we can make our weighted feature aggregation more effective. Details on the contrastive loss will be discussed in the following subsection. Finally, MCANet flattens the matrix \([C_{agg}[1],...,C_{agg}[T]]\) into a single-dimensional vector. The flattened vector passes through the MLP followed by a softmax layer, generating the final classification scores \(O\in\mathbb{R}^{n_{cls}}\), where \(n_{cls}\) is the number of action classes. Each element in the vector \(O\) represents the probability score assigned to a specific action class. By analyzing these probabilities, the model can determine the most likely class for a given input. ### _Loss Function_ As depicted in Figure 1, the loss function used to train our entire network consists of the contrastive loss term \(\mathcal{L}_{contrast}\) and the cross entropy loss term \(\mathcal{L}_{cls}\), i.e., \[\mathcal{L}=\mathcal{L}_{cls}+\alpha\mathcal{L}_{contrast}, \tag{10}\] where \(\alpha\) is the regularization parameter. In our approach, we set \(\alpha\) to 0.2. The contrastive loss term \(\mathcal{L}_{contrast}\) aims to encourage the alignment of features among different modalities. It quantifies the dissimilarity or disparity between the main modality and the \(N-1\) sub-modalities, i.e., \[\mathcal{L}_{contrast}=-\sum_{n=2}^{N}\log\left(\frac{1}{T}\sum_{t=1}^{T}\mathrm{sim}(C_{1}[t],C_{n}[t])\right). \tag{11}\] By using the contrastive loss term, the model can generate semantically consistent representations. 
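As an illustration of Eqs. (8), (9) and (11), the sketch below (illustrative PyTorch code under our own assumptions about tensor shapes; it is not the authors' released implementation, and the clamp guarding the logarithm is our addition) computes the similarity-weighted fusion of the sub-modality features into the main-modality features and the contrastive alignment loss.

```python
import torch
import torch.nn.functional as F

def weighted_fusion(C):
    # C: (N, T, d) FTMT output features; modality index 0 is the main modality
    main = C[0]                                                     # (T, d)
    sim = F.cosine_similarity(main.unsqueeze(0), C[1:], dim=-1)     # (N-1, T), Eq. (9)
    weights = torch.sigmoid(sim).unsqueeze(-1)                      # (N-1, T, 1)
    return main + (weights * C[1:]).sum(dim=0)                      # (T, d), Eq. (8)

def contrastive_loss(C, eps=1e-8):
    # Eq. (11): pull sub-modality features toward the main-modality features
    main = C[0]
    sim = F.cosine_similarity(main.unsqueeze(0), C[1:], dim=-1)     # (N-1, T)
    mean_sim = sim.mean(dim=1)                                      # average over the T time steps
    return -(torch.log(mean_sim.clamp(min=eps))).sum()              # clamp avoids log of non-positive values

C = torch.randn(3, 8, 512)          # N=3 modalities, T=8, d=512 (assumed shapes)
fused = weighted_fusion(C)           # aggregated features C_agg[1:T]
loss_contrast = contrastive_loss(C)
```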
The cross entropy loss term \(\mathcal{L}_{cls}\) is given by \[\mathcal{L}_{cls}=-\sum_{i=1}^{n_{cls}}T_{i}\log(O_{i}), \tag{12}\] where \(O_{i}\) is the \(i\)th element of \(O\) and \(T_{i}\) is the \(i\)th element of the ground-truth one-hot vector. ## IV Implementation of Multimodal HAR System In this section, we present design variants of UCFFormer, specifically tailored for the sensor configurations provided in two datasets: UTD-MHAD [27] and NTU RGB+D [28]. ### _Implementation of UCFFormer on UTD-MHAD dataset_ UCFFormer is designed to fuse three different modalities, namely RGB video, skeleton data and inertial sensor signals, provided by the UTD-MHAD dataset. Fig. 2: **Factorized Time-Modality Self-attention**: (a) FTMT-Sim processes the embedding features in both time and modality domains concurrently; (b) FTMT-Seq alternates the encoding process between the time domain and the modality domain in a sequential manner. The RGB video consists of a length-\(T\) sequence of video frames, i.e., \(x_{1}[1:T]\in\mathbb{R}^{T\times H\times W}\), where \(H\) and \(W\) are the height and width of a video frame. The skeleton data consists of 3D coordinates of \(J\) joints, i.e., \(x_{2}[1:T]\in\mathbb{R}^{T\times J\times 3}\), where \(J\) is the number of joints. The data from the inertial sensors is divided into segments, and each segment is processed individually, yielding vectors of dimension \(S\), \(x_{3}[1:T]\in\mathbb{R}^{T\times S}\). We tried two different camera backbones: the ResNet50 model and the Temporal Shift Module (TSM) [55]. ResNet50 was used to encode every frame of the RGB video, while TSM was used to encode the sequential RGB video. We specifically selected ResNet50 to ensure a fair comparison with several recent methods [22, 23, 56, 57] that utilized ResNet50 on the UTD-MHAD dataset. We also tried the stronger backbone, TSM, at the cost of higher computational complexity. These backbone networks take the video frames \(x_{1}[1:T]\) as input and produce the sequence of feature maps \(f_{1}[1:T]\). We encoded the skeleton data \(x_{2}[1:T]\) using a Spatio-Temporal Graph Convolutional Network (STGCN) [58]. Lastly, a DeepConvLSTM [59] was employed to encode the \(T\) segments of inertial sensor data \(x_{3}[1:T]\) and produce the feature vectors \(f_{3}[1:T]\). The remaining steps follow the procedure described previously. In the contrastive learning setup, RGB video acted as the primary modality, while the remaining data sources served as sub-modalities. ### _Implementation of UCFFormer on NTU RGB+D dataset_ We consider two modalities, RGB images and skeleton data, on the NTU RGB+D dataset. The data preprocessing step was performed similarly to that for the UTD-MHAD dataset. To encode the RGB image sequence, we employ TSM as the video backbone network. The samples of the NTU RGB+D dataset capture both individual actions and interactions among multiple individuals. From this perspective, we processed the skeleton data for each of \(P\) persons and constructed the tensor \(x_{2}[1:T]\in\mathbb{R}^{T\times J\times P\times 3}\). The skeleton data was processed using the same backbone architecture used for the UTD-MHAD dataset. In the contrastive learning setup, RGB video was selected as the primary modality. ## V Empirical Results In this section, we evaluate the performance of the proposed UCFFormer on two widely used datasets: UTD-MHAD [27] and NTU RGB+D [28]. ### _Dataset_ **UTD-MHAD:** The UTD-MHAD dataset comprises 27 human actions performed by eight subjects. 
It offers data in four distinct modalities: RGB video, depth video, skeletal joint positions, and inertial sensor signals. For evaluating performance, we utilized the Top-1 accuracy metric. On this dataset, we adopted the cross-subject evaluation method widely used to evaluate the performance of HAR models [27]. This approach divides subjects into different groups for training versus testing to assess the model's ability to generalize to new, unseen subjects. Specifically, we used a dataset comprising 8 subjects for this experiment. Among them, 6 subjects were utilized for training, 1 subject for validation, and 1 subject for testing. **NTU RGB+D:** The NTU RGB+D dataset is an extensive collection of RGB video, depth video and skeleton data, consisting of 56,880 action samples performed by 40 subjects across 60 action classes. On this dataset, two evaluation methods, namely cross-subject (CS) and cross-view (CV), have been widely employed [28]. For the CS evaluation method, the 40 subjects were divided into training and testing groups, with the training subject IDs being 1, 2, 4, 5, 8, 9, 13, 14, 15, 16, 17, 18, 19, 25, 27, 28, 31, 34, 35, and 38. The remaining subjects were reserved for testing, resulting in 40,320 samples for training and 16,560 samples for testing. On the other hand, for CV evaluation, samples from cameras 2 and 3 were used for training, while samples from camera 1 were used for testing. The resulting sets comprised 37,920 samples for training and 18,960 samples for testing. ### _Experimental Setup_ #### V-B1 Implementation Details We present the implementation details of UCFFormer. The performance of UCFFormer was evaluated across four different configurations. Based on the two types of proposed factorized attention, UCFFormer is divided into UCFFormer-Sim and UCFFormer-Seq. Additionally, when using the image backbone ResNet50, the model is denoted with R, and when using the video backbone TSM, with T. Therefore, the four tested variants are denoted as UCFFormer-Sim-R, UCFFormer-Seq-R, UCFFormer-Sim-T, and UCFFormer-Seq-T. **UTD-MHAD:** In the case of UTD-MHAD, we employed ResNet50 [60] and TSM [55] as RGB video backbone networks, while we used STGCN [58] for the skeleton data and DeepConvLSTM [59] for the inertial data. For the RGB video data, ResNet50 was used to extract features from each of eight video frames of height \(\times\) width \(\times\) channel \(=224\times 224\times 3\). TSM was used to extract features from the eight video frames of time \(\times\) height \(\times\) width \(\times\) channel \(=8\times 224\times 224\times 3\). The STGCN method extracted features from skeleton data of joint \(\times\) spatial position \(=20\times 3\) using a window of size 15 over 8 time steps. For the inertial sensor data with accelerometer \(\times\) rotation \(=3\times 3\), DeepConvLSTM was used with a window size of 100 over 8 time steps. The unified Transformer was configured with an embedding dimension of 512 and 8 attention heads. Each layer was repeated 4 times for both the FTMT-Sim and FTMT-Seq modules. During training, we employed stochastic gradient descent (SGD) as the optimizer for the entire network, utilizing a momentum of 0.9 and a learning rate of 0.0005. The training process for the FTMT-Sim and FTMT-Seq modules lasted for 16 hours and 300 epochs on the UTD-MHAD dataset. All training and testing were conducted on a single NVIDIA GeForce GTX 1080Ti GPU, with a batch size of 8. 
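For reference, the optimizer configuration stated above for UTD-MHAD corresponds to a setup like the following (a minimal sketch assuming PyTorch; the `nn.Linear` module is only a stand-in for the full network).

```python
import torch
import torch.nn as nn

model = nn.Linear(512, 27)  # stand-in for the full UCFFormer network (27 UTD-MHAD action classes)
# SGD with momentum 0.9 and learning rate 0.0005, as stated for the UTD-MHAD training runs
optimizer = torch.optim.SGD(model.parameters(), lr=0.0005, momentum=0.9)
```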
**NTU RGB+D:** In the case of NTU RGB+D, we used TSM [55] to extract the features from RGB video of size \(\mathrm{time}\times\mathrm{height}\times\mathrm{width}\times\mathrm{channel}=8\times 224\times 224\times 3\). We also used STGCN to extract the features from skeleton data with dimension \(\mathrm{joint}\times\mathrm{spatial\ position}\times\mathrm{person}=25\times 3\times 2\). We used a window of size 15 over 8 time steps. Both the FTMT-Sim and FTMT-Seq modules were configured with a dimension of 512 and 16 attention heads. The Transformer encoder layer was repeated 4 times. For optimization, we employed stochastic gradient descent (SGD) with a momentum of 0.9 and a learning rate of 0.003. The training duration was 480 hours and 70 epochs for the FTMT-Sim module, and 502 hours and 50 epochs for the FTMT-Seq module. Training was conducted with two NVIDIA GeForce GTX 1080Ti GPUs, with a batch size of 8. ### _Performance Comparison_ The performance of UCFFormer was evaluated in comparison with conventional HAR methods. Table I presents the Top-1 accuracy results of the HAR models of interest evaluated on the UTD-MHAD dataset. We note that all UCFFormer variants achieve significant performance gains over the other HAR methods. In particular, UCFFormer-Sim-T sets a new state-of-the-art performance with an impressive Top-1 accuracy of 99.99%. It outperforms the latest MAVEN model [25] by a significant margin of 2.18%. These results show the effectiveness of UCFFormer in handling multimodal data. We also observe that both FTMT-Sim and FTMT-Seq achieve comparable performance. Table II presents the performance of UCFFormer evaluated on the NTU RGB+D dataset [28]. UCFFormer exhibits a performance that is on par with the current state-of-the-art method, PoseC3D [74]. Notably, UCFFormer offers substantial performance gains over the other HAR models, underscoring its competitive performance in accurately recognizing human activities. It is also worth mentioning that FTMT-Sim achieves slightly better performance than FTMT-Seq on the NTU RGB+D dataset. ### _Performance Behavior_ In this section, we present a comprehensive analysis of the performance behavior of our UCFFormer. #### V-D1 Effect of Multimodal Fusion Table III investigates the performance gains achieved by multimodal fusion using UCFFormer-Sim-R on the UTD-MHAD dataset. While the original UCFFormer combines three modalities, namely RGB video, skeleton, and inertial data, Table III provides the performance achieved when only a single modality is used or only two modalities are combined. When a single modality was used, the embedding vectors were encoded solely through Temporal Attention, and then the flattened embedding vectors were used for classification. We observe that the performance of UCFFormer decreases considerably when employing a single modality or combining two modalities. When we use RGB, inertial, and skeleton data individually, the skeleton data yields the highest performance of 94.72%, which is 4.32% lower than the accuracy obtained with all three modalities. When only two modalities are combined, the highest achieved accuracy is 96.15%, exhibiting a 2.89% reduction compared to the accuracy of the original design. Figure 3 presents the confusion matrices obtained with different combinations of modalities used for UCFFormer-Sim-R. 
It confirms that the inclusion of an additional modality leads to substantial improvements in classification accuracy. Fig. 3: **Confusion matrices demonstrating the effect of multimodal fusion. The evaluation is conducted using the UCFFormer-Sim-R architecture on the UTD-MHAD dataset.** #### V-D2 Effect of Contrastive Learning Figure 4 displays a t-SNE analysis that represents features in a reduced-dimensional space. We compared the feature distributions when using contrastive learning and when not using it. We observed that with the application of contrastive learning, the distribution gap in features across different modalities is notably reduced, as intended. Fig. 4: **t-SNE visualization of multimodal feature vectors.** We present the t-SNE plots of the multimodal feature vectors extracted from the RGB video (represented in red), skeleton (represented in blue), and inertial (represented in green) modalities. We compare the feature distributions with contrastive learning versus without contrastive learning. #### V-D3 Performance versus Hyperparameters In Figure 5, we evaluate the performance of UCFFormer when applying different hyperparameter values, including the number of Transformer layers, the embedding size, and the number of attention heads. The default values of the number of layers, the embedding size, and the number of attention heads are set to 4, 512, and 8, respectively. Figure 5 (a) presents the performance versus the number of Transformer layers for both FTMT-Sim and FTMT-Seq. Our observations reveal that the peak classification accuracy is attained with 4 layers, and increasing the depth beyond this point adversely impacts performance. Figure 5 (b) provides the performance versus the number of multi-heads. The performance improvement diminishes when the number of heads exceeds 8. Figure 5 (c) provides the performance of UCFFormer versus the embedding size. The performance improves dramatically as the embedding size increases from 16 up to 64 but plateaus beyond 64, reaching its peak at 512. For all cases, FTMT-Sim consistently outperforms FTMT-Seq. Fig. 5: Performance of UCFFormer for different hyperparameter values. #### V-D4 Ablation Studies Table IV presents an ablation study of the contributions made by each component of UCFFormer to the overall performance. We employed a ResNet-50 backbone and evaluated the performance using the Top-1 accuracy metric on the UTD-MHAD dataset. The baseline method includes neither FTMT nor MCANet; it directly inputs the embedding vectors into the Flatten + MLP module. Table IV demonstrates that the baseline attains an accuracy of merely 92.03%, which falls short by 7% in comparison to UCFFormer-Sim-R. Upon integrating FTMT-Sim with the baseline, there is a notable accuracy enhancement of 5.84%. The inclusion of FTMT-Seq results in a performance improvement of 5.1%. This demonstrates that the unified Transformer enhances the action semantics through joint time-modality self-attention. Adding MCANet further improves the accuracy by 1.17% for FTMT-Sim and by 1.9% for FTMT-Seq. Note that in the absence of FTMT, the utilization of MCANet alone yields a notable 4.12% improvement in accuracy compared to the baseline, which demonstrates the efficacy of MCANet. Table V compares the performance and computational complexity of the full-dimensional self-attention versus the factorized self-attention of FTMT-Sim. Table V highlights the computational advantage of the Factorized Self-Attention over the conventional Self-Attention mechanism. 
For each added layer, Factorized Self-Attention adds approximately 50 million floating-point operations (MFLOPs), while the conventional Self-Attention sees an increase of 130 MFLOPs. Table V also shows a substantial reduction in the number of parameters achieved by adopting the Factorized Self-Attention. In spite of the significant reduction in memory usage and computation time, the factorized self-attention achieves slightly better accuracy than the full self-attention. ## VI Conclusions In this paper, we presented the UCFFormer, an innovative approach to feature-level fusion designed for the HAR task. Employing two core components, FTMT and MCANet, the UCFFormer established a solid framework for achieving effective multimodal fusion. UCFFormer incrementally refines the embedding features extracted from each modality, leveraging both FTMT and MCANet. FTMT captures the high-level inter-dependencies of embedding features spanning both time and modality domains. Utilizing the Factorized Time-Modality Self-attention mechanism, FTMT offers an efficient architecture for encoding multimodal features. MCANet then further refines these embedding features, employing a contrastive loss to mitigate potential domain discrepancies that might arise among different modalities. The performance of UCFFormer was evaluated on two widely used benchmarks. UCFFormer achieved state-of-the-art performance, surpassing the latest HAR methods. Ablation studies confirmed the effectiveness of the ideas applied to UCFFormer. In conclusion, the UCFFormer presents a robust and adaptable technique for merging various data types to enhance HAR performance. Beyond HAR, this research holds promise for other applications where a joint representation of diverse data types is essential for task execution.
2301.00768
Ontology-based Context Aware Recommender System Application for Tourism
In this work a novel recommender system (RS) for Tourism is presented. The RS is context aware as is now the rule in the state-of-the-art for recommender systems and works on top of a tourism ontology which is used to group the different items being offered. The presented RS mixes different types of recommenders creating an ensemble which changes on the basis of the RS's maturity. Starting from simple content-based recommendations and iteratively adding popularity, demographic and collaborative filtering methods as rating density and user cardinality increases. The result is a RS that mutates during its lifetime and uses a tourism ontology and natural language processing (NLP) to correctly bin the items to specific item categories and meta categories in the ontology. This item classification facilitates the association between user preferences and items, as well as allowing to better classify and group the items being offered, which in turn is particularly useful for context-aware filtering.
Vitor T. Camacho, JosΓ© Cruz
2022-12-29T15:50:46Z
http://arxiv.org/abs/2301.00768v1
# Ontology-based Context Aware Recommender System Application for Tourism. ###### Abstract In this work a novel recommender system (RS) for Tourism is presented. The RS is context aware as is now the rule in the state-of-the-art for recommender systems and works on top of a tourism ontology which is used to group the different items being offered. The presented RS mixes different types of recommenders creating an ensemble which changes on the basis of the RS's maturity. Starting from simple content-based recommendations and iteratively adding popularity, demographic and collaborative filtering methods as rating density and user cardinality increases. The result is a RS that mutates during its lifetime and uses a tourism ontology and natural language processing (NLP) to correctly bin the items to specific item categories and meta categories in the ontology. This item classification facilitates the association between user preferences and items, as well as allowing to better classify and group the items being offered, which in turn is particularly useful for context-aware filtering. recommender system, CARS, ontology, tourism, content-based, collaborative filtering, demographic-based. ## 1 Introduction This work presents a novel recommender system (RS) approach, which builds on context awareness, a domain ontology and different types of recommenders that enter the process at different stages of maturity. These range from simple recommenders that are less prone to cold-start issues to more complex and powerful recommenders which struggle with an initial lack of data. At the final stage of maturity, when all the recommenders are already deployed in the recommender pool, the different recommenders analyze different aspects of the data, from demographic features to ratings, and provide an ensemble of recommendations to the users, based on different approaches and with varying degrees of personalization. The approach is novel in how it combines several techniques: a domain ontology is used to bin the items through NLP-based concept similarity, and from there content-based, demographic-based, popularity-based and collaborative filtering approaches are applied to obtain the recommended items. The collaborative filtering component employs field-aware factorization machines, which are the state-of-the-art in matrix factorization and can easily incorporate context awareness. The aim is to provide a powerful and adaptable recommender system framework which can adapt to any domain, given the respective domain ontology, and can overcome cold-start issues by using an approach with 4 stages of maturity, which are subsequently entered when given thresholds are reached. In the following, the structure of the paper is presented, with an explanation of every section. In the present section, the Introduction, an overview of the presented recommender system framework is provided as well as a literature review of the relevant works on the subject. In section 2, the framework and all its components are presented, from adopted technologies to used algorithms and techniques. A presentation of the architecture is given as well as a mock-up of the designed UI to provide the link between user and recommender system. In section 3, the technologies and techniques, mainly the ones central to the recommender system, are explained in more detail, with some formulas being provided. In section 4, the recommender system is tested with a synthetic dataset with varying stages of maturity, to show how the recommender system evolves as the data changes. 
In section 5, conclusions are given as well as a brief discussion on future works. ### 1 Literature review Recommender systems (RS) have been the focus of research for many years now, both on the algorithm side and on the applied side. The study of RS started in the beginning of the 1990s but it was in the last 15 years that research and the number of publications on the topic surged. Concerning our application, tourism, RS have been the focus of studies since, at least, the start of the 2000s, with many publications having been made since then [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]. As for works that focus more on the algorithmic approach without an explicit thematic application, several studies have been published on the different types of RS, from content-based approaches to collaborative filtering, as well as context-aware solutions, so-called CARS [16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64]. Going into more detail regarding the tourism-themed recommenders, it is relevant to give particular attention to ontology-based approaches. One of the more important examples concerning the present work is Moreno, A. et al. [8]. In this work, an ontology-based approach (SigTur/E-destination) is developed to provide recommendations for tourism in the region of Tarragona. The developed approach begins with the definition of a tourism domain ontology, which describes the tourist activities in a hierarchy and bins the activities according to a given taxonomy. The ontology is thus used to explicitly classify the activities to recommend among a predefined set of distinctive main concepts, which are used by the intelligent recommender system in its reasoning processes. The recommender then applies collaborative and content-based techniques to provide the recommendation. Another relevant work is that of Garcia-Crespo, A. et al. [11], which proposes a semantic-based expert system to provide recommendations in the tourist domain (Sem-Fit). The proposed system works based on the consumer's experience with recommendations provided by the system. Sem-Fit uses the experience point of view in order to apply fuzzy logic techniques to relate customer and hotel characteristics, represented by means of domain ontologies and affect grids. An early and interesting work that applies Bayesian networks to attain personalized recommendations for tourist attractions, by Huang, Y. and Bian, L. [15], is also worth mentioning. This work is from 2009 and uses ontologies to classify different types of tourist attractions. It then uses a Bayesian network to calculate the posterior probabilities of a given tourist's preferred activities and the traveler category he fits into. Other works on recommender system tourism applications could also be mentioned, but instead one can mention three surveys done on this topic. First, in one from 2014, Borras, J. et al. [2] present a survey entitled "Intelligent tourism recommender systems". In this survey the various works in the state-of-the-art are analyzed and their different approaches concerning user interface, functionalities, recommendation techniques and use of AI techniques are presented. The second work that gives an overview on the topic is from Kazaz, L. et al. [3] from 2018. In this overview, the focus is essentially on recommender approaches and employed user and item data models. 
A third survey on this topic is provided by Renjith, S. et al. [60] in a work titled "An extensive study on the evolution of context-aware personalized travel recommender systems". Herein, the authors start by defining the different recommender approaches that can be employed: content-based, collaborative, demographic-based, knowledge-based, hybrid, personalized and context-aware. The authors also go into detail on the different machine learning algorithms that are commonly employed, as well as the different metrics employed to evaluate the quality of the predictions. Finally, they present a table with many different works, identifying whether or not each employs the previously mentioned techniques. One of the aspects of the present work is that, as happens with some of the examples given above, it employs ontologies to organize and classify the items to be recommended. Two works can also be mentioned concerning tourism domain ontologies, but in this case their formulation rather than their use. These works are by Ruiz-Martinez, J. et al. [65] and Barta, R. et al. [66], and they present different approaches to integrate and define tourism domain ontologies. In the latter work, an approach is presented that shows how to cover the semantic space of tourism and integrate different modularized ontologies. In the former, a strategy is presented to automatically instantiate and populate a domain ontology by extracting semantic content from textual web documents. This work deals essentially with natural language processing and named entity recognition, which are techniques also employed in this paper for ontology population or, in other words, the classification of the different items to recommend according to the ontology. Many other works should also be referenced, this time not necessarily linked to the tourism theme, but instead due to their focus on the algorithmic aspect or rather the recommendation strategy regardless of its field of application. One particular type of recommender system that is very much dominant in the literature in recent times is the context aware recommender system (CARS). The work by Kulkarni, S. et al. [32] provides a review of the state-of-the-art techniques employed in context aware recommender systems. In this work the authors list the most common algorithmic approaches, from bio-inspired algorithms to other common and less common machine learning algorithms, and then enumerate the works that employed each type of solution. Another review study on context aware recommender systems is authored by Haruna, K. et al. [67]. In this work, the authors particularly emphasize the manner in which the contextual filtering is applied, for which there are three variants: pre-filtering, post-filtering and context modelling. The difference between each approach has to do with how context filtering is applied together with the process of recommendation. Hence, in pre-filtering the recommender filters the items prior to recommendation, while in post-filtering the opposite happens. In context modelling there is a more complex integration of the context filtering and the recommendations. The authors then go on to classify the different works in the literature according to this and other topics such as employed algorithms, etc. A third overview paper on the topic of CARS is the work by Raza, S. et al. [44]. 
In this work, the authors focus on the type of algorithms, the dimensionality reduction techniques, the user modelling techniques and finally the evaluation metrics and datasets employed. Still focusing on CARS, a context-aware knowledge-based recommender system for movie showtimes called RecomMetz is presented in the work by Colombo-Mendoza, L. et al. [58]. In this work, the CARS developed has time awareness, crowd awareness and location awareness as part of its context awareness composition. It is interesting to verify that its location awareness employs an exponential distance decay that discards items that are far away from the user. This sort of mechanism is also employed in the current work but with other goals. A last example on CARS is a genetic algorithm (GA) approach based on spatio-temporal aspects [68] by Linda, S. et al. Here, the interesting aspect is the inclusion of a GA to optimize the temporal weights of each individual while employing collaborative filtering for the recommendations. Lately, one of the most studied techniques for recommender systems has been Factorization Machines (FMs) [69]. In the present work, a field-aware version of this technique is employed, also known as an FFM. This technique is a kind of collaborative filtering method that gained some notoriety for click-through rate prediction [64], among other problems. Several versions of these FMs exist in the literature, with ensembles with deep neural networks [45], for example, being one such version. The value of FMs is that they are more powerful than traditional matrix factorization techniques, being able to incorporate features and information such as implicit feedback. For these reasons, an FM, more specifically an FFM, is one of the recommenders employed in the proposed recommender system, constituting the collaborative filtering component of the proposed RS. ### Description of the RS and field of application The proposed RS in this work is to be applied in the tourism industry. More specifically, the project entails the creation of a recommender system to be used by hotel companies to recommend to their guests their vast lists of partners in the region. It is very common that large hotel companies have hundreds of partners offering products, and most hotel guests are unaware of most of them. The partners usually offer a wide array of products, which need an ontology to be organized and better recommended. The proposed RS starts by having a Partner Management Platform (PMP) for the hotel's partners where they can manually introduce the items they want to be recommended in the RS. The PMP, which is essentially an interface of the Item DB, feeds the Domain Ontology which exists in a graph DB. The users are clients of the hotel who have checked in, and they exist in the User DB, which houses not only demographic information but also user preferences which are collected and inferred by the RS. The RS interface is a web-app which is presented in a later section of the paper. In the following sections more detail is provided concerning the various components of the RS, starting with the presentation of the RS architecture in the following section. ## 2 Architecture and frameworks of the recommender system The architecture of the RS can be essentially divided into 4 parts: the data repository, the context-aware subsystem, the recommender system per se and the user interface. In the following figure the architecture is presented with each of its subcomponents. 
Figure 1: Architecture of the RS. An overview of each of the subcomponents is given in the following subsections. ### Data repository The first element of the recommender system is its data repository, in the sense that this is where it starts, particularly with the Partner Management Platform (PMP). It is through this PMP that the partners introduce the items to be recommended by the RS. In the PMP, the partners introduce the items alongside the necessary descriptions and keywords. This information introduced in the PMP is organized into an Item DB and later inserted into the domain ontology, which is explained in detail later. Other than the PMP with its Item DB and the mentioned domain ontology, the data repository also has a User DB. This DB holds both the demographic information collected from the users who check in to the hotel and the preference vectors that are inferred and managed by the RS. The RS uses these two components of the user info to make predictions and to build different recommendation models based on demographic, content, and collaborative filtering techniques. #### 2.1.1 Domain ontology - Neo4j and automatic population of ontology As for the domain ontology, the initial approach was to adopt the ontology presented in SigTur [8]. In addition, Neo4j (www.neo4j.com), which is a graph DB, was chosen to house the ontology and to facilitate the automatic ontological extension with the items from the PMP. In the following figures, the original ontology is shown already inserted in a Neo4j graph. Figure 2: Ontology inserted in Neo4j. The advantage of using the Neo4j framework is that it facilitates the automation of the ontological extension. This ontological extension is achieved through the use of NLP techniques, such as named entity recognition and cosine similarity between semantic concepts, using the spaCy Python library integrated with Neo4j methods. These processes start with the insertion of the items from the PMP or the Item DB. These items are parsed and tokenized, using both the item descriptions and/or keywords. The parsed and tokenized items are then linked to the ontology by means of semantic similarity between their keywords and descriptions and each of the ontological subclasses. Similarity scores above a given threshold originate a link between the item and that specific ontological subclass. This process, which starts with parsing, removal of stopwords and tokenization and ends with concept similarity, is performed with methods from the spaCy library. The concept similarity is computed using spaCy's vast pretrained word vectors. In addition, named entity recognition is also performed on the items, automatically linking a Wikipedia entry, if such an entry exists. Figure 4 shows a representation of the ontology after being extended with some items via the described process. One can see the original nodes in orange, which belong to the ontology classes; some of these are now linked to grey nodes representing the items. The green nodes represent the Wikipedia page object when such an object was found. In Figure 5 a zoomed view of the highlighted zone in Figure 4 is shown. One can see two instances in which a Wikipedia page object was found through the named entity recognition procedure. The items were linked to the ontology subclasses and one can observe that the links make sense in these cases, with driving an F1 racecar linked to "Motor Sports", and golf lessons and discounts on clubs linked to "Golf". 
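A minimal sketch of this linking step is shown below (illustrative code only; the `en_core_web_md` model choice, the 0.6 threshold and the data structures are our assumptions rather than the exact implementation, and the Wikipedia entity-linking step is not covered here).

```python
import spacy

nlp = spacy.load("en_core_web_md")  # medium English model, ships with pretrained word vectors

def link_item_to_ontology(item_text, subclass_names, threshold=0.6):
    """Return the ontology subclasses whose semantic similarity to the item exceeds the threshold."""
    # Parse the item description/keywords, drop stopwords and punctuation, keep the remaining tokens
    doc = nlp(item_text.lower())
    tokens = [t for t in doc if not t.is_stop and not t.is_punct]
    item_doc = nlp(" ".join(t.text for t in tokens))
    links = []
    for name in subclass_names:
        score = item_doc.similarity(nlp(name.lower()))  # cosine similarity of averaged word vectors
        if score >= threshold:
            links.append((name, round(float(score), 3)))
    return links

subclasses = ["Golf", "Motor Sports", "Beach", "Museums"]
print(link_item_to_ontology("Golf lessons and discounts on clubs", subclasses))
```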
Figure 3: Sample of the ontology (highlighted section in previous figure). Figure 4: Ontology extended with the addition of items. The recommender system module then imports the extended ontology, both the classes and the items. It will use the extended ontology to give content-based recommendations. ### Context-aware subsystem module The context-aware subsystem module performs item pre-filtering on the basis of three context submodules: location-aware, weather-aware and repetition-aware. In the case of the location-aware submodule, the objective is to filter out the hotel partners that are not located close to a specific instance of the hotel. Since the hotel company can have a wide array of partners that may, in many cases, be close to one specific hotel but not to other hotels in other locations, such as local or regional partners that only provide services to the hotels in the area, a first contextual filtering phase is to apply location pre-filtering. Then we go on to the weather-aware submodule, where the ontological sub-classes are associated with a given fuzzy definition of when they make sense to be recommended; for example, the beach ontology class or the outdoor sports ontology class would tend to be penalized under bad weather. Finally, there is a third, very much novel module: the repetition-aware module. Here, each ontological class has a different elapsed-time parameter that affects an inverse exponential penalization factor to mimic the repeatability of a given item. For example, one would probably be more inclined to repeat a restaurant than a museum in the same week. So, different ontological classes have different factors that affect the inverse exponential function, which we may call the unwillingness-to-repeat function, and which defines how soon a user may be willing to repeat a given item. Figure 5: Sample of the extended ontology (highlighted section in previous figure). ### Recommender system module The recommender system module is the main module, as the name entails. This module is constituted by a user profile manager and a preference manager, in addition to the recommender pool. Concerning the recommender pool and the models that compose it, that is addressed in depth in Section 3 of this work. Here it suffices to say that the recommender pool is the set of different recommender models that provide user recommendations. The models create an ensemble, when more than one is active, that provides recommendations using different techniques and approaches. As for the remainder of the recommender system module, the user profile manager and the preference manager, these two sub-modules manage the user-related information: item ratings and other user feedback in the case of the former, while the latter manages the user preference vectors and propagates the user feedback on items to update the user preference vectors accordingly. The way this is done will become clearer in the next sections. ### User interface - web app The last component is the user interface, which in this case is a web app that connects to the recommender system module and other modules through real-time and batch inference endpoints that connect to ML pipelines defined in Azure. In Figure 6 one can observe the four different screens the user sees during his App experience. The FILTER screen is only presented to the user the first time he logs in and is, in essence, a series of check boxes where the user defines his preferences. 
These check boxes are used to give a first estimate of the user's preferences concerning the ontology classes. The user's choices define his preference vectors, which are then used to make content-based recommendations. As for the HOME screen, it shows the different recommendations made to the user by the RS; here the user can bookmark items, book items or mark an item as "uninteresting". Finally, in the PROFILE screen, the user can observe his profile in terms of the preferences collected and inferred by the RS as well as demographic information, such as date of birth, nationality, etc. The different interactions the user can have with the App and the consequent interactions between the App and the RS and back to the user are shown in Figure 7. In this figure one can see how these interactions cascade and what the user gets back from each action he undertakes. One can summarize the actions the user can take in the following: * Logging in * Preference input * Viewing recommendations Figure 6: App mockup showing the four main screens: welcome, preference definition, home and user profile. Figure 7: User-App-RS interaction. User's various possible actions and respective interactions between the App and the RS. ## 3 Recommenders and stages in RS The recommender system module mentioned in the previous section is composed of three components: the user profile manager, the preference manager and the recommender pool. The first two have already been covered, and in this section the latter will be explained in depth. The recommender pool is composed of four recommenders of different types: content-based, popularity-based, demographic-based and collaborative. These four recommenders are modeled with specific algorithms or employ specific techniques, and they come into play in different phases of maturity of the RS. These phases of maturity concern the amount of data, that is, the number of users and the rating density. Only after certain pre-specified values of user numbers and rating density have been reached are some of these methods activated or, in other words, are some of the phases reached. In the following, the different phases and algorithms used are explained. ### Phase 1 At the beginning, the RS is void of any ratings or users, and only items exist in the RS. When a new user logs in for the first time, in order for the RS to make any meaningful recommendation, some information has to be provided in the form of user preferences. This is, at this stage, the only way to overcome cold-start issues. The user's preferences, which are associated with the predetermined ontology, are given and used to provide content-based recommendations to the user. The user will then provide explicit and implicit feedback, in the form of booking items, bookmarking items or explicitly indicating they don't like an item. This feedback is then received by the RS, which uses it to update the user's preference vectors. This update originates new recommendations to the user. #### 3.1.1 Preference vectors At the core of phase 1 are the user preference vectors. These preference vectors are ontology-related and they are used to make content-based recommendations. There are three preference vectors per user: * High-level preferences * Low-level preferences * Specific preferences The high-level preferences are the ones the user identifies in the beginning and are associated with the ontological super-classes. These classes are the most abstract and the fewest in number. 
They are the first layer of ontological classes and are the ones that have no parent class, only child classes. Observing Figure 4, the _Sports_ ontological class is an example of a high-level preference since there is no ontology class above it. The low-level preferences are associated with the ontological classes that link directly to the items. These ontological classes are more specific, less abstract and greater in number. Observing Figure 4 and Figure 5, _Golf_ is an example of a low-level preference, because two items link to it. Finally, the specific preferences relate directly to the items and form a vector that results from the other two higher-level preference vectors and the user's feedback on the items. The way these vectors interact is explained in the following: 1. The user identifies the high-level preferences when he logs in for the first time. These preferences are propagated by way of vector multiplication with the low-level ontological preferences. 2. The low-level preferences are then propagated to the item level by way of vector multiplication as well, originating the specific preference vector. The items are ranked, and a subset of the highest ranked items is recommended to the user. 3. The user gives feedback on the recommendations by either bookmarking items, booking items or dismissing items. The feedback is propagated upwards to the higher-level preference vectors with different intensities. The low-level preference vector is strongly affected, while the high-level preference vector is less affected because it is higher upstream. This sort of "trickle-up" propagation of user feedback alters both the high-level and low-level preference vectors with different magnitudes. 4. New item recommendations are calculated, this time using both the high-level and low-level preference vectors to predict whether an item should be recommended or not. The predictions from each vector are weighted and aggregated, originating an ensemble prediction using both high and low preference vectors. The items are ranked, and a subset of the highest ranked items is recommended to the user. 5. Repeat step 3. #### 3.1.2 Ontological content-based recommender The content-based recommender is essentially vector multiplication between preference vectors and content vectors. Content vectors are binary vectors which map one preference level to the items' content or to the content of another preference vector, while preference vectors hold the intensity levels of preference for each ontological category. In step 4, the high and low preference vectors are multiplied with their corresponding item content vectors, originating content-based predictions. Both predictions are weighted and aggregated, and a subset of the highest ranked items is recommended to the user. After the user's feedback, both preference vectors are updated according to the "trickle-up" propagation concept introduced above. Then, new recommendations are calculated with the new preference vectors. ### Phase 2 If the user booked and used an item, he can then rate said item, which kickstarts the hybrid recommender composed of the initial content-based recommender and a new popularity-based component. This popularity-based recommender uses a so-called damped mean on every item so that a small number of ratings does not give one item an exaggerated edge over another, such as an item with a single 5-star rating having a 5-star average. 
\[Damped\ Mean_{j}=\frac{\sum_{i=1}^{n}r_{ji}+k\cdot\bar{r}_{G}}{n+k}\]

where \(r_{ji}\) is the \(i\)-th rating of item \(j\), \(k\) is the damping coefficient, \(\bar{r}_{G}\) is the global mean rating or some other default value, and \(n\) is the number of reviews of item \(j\).

#### 3.2.1 Hybrid recommender (content-based + popularity-based)

The start of the hybrid recommender marks the start of phase 2. At this point in the RS there are not many users and not many ratings. The lack of both means that popularity-based, demographic-based or collaborative approaches are still of little use. As more users join and more ratings are given, other recommenders become increasingly useful; once a given threshold of user and rating numbers is reached, the demographic-based recommender can be initiated. The hybrid recommender uses its two component recommenders as a cascading ensemble: the popularity recommender pre-filters the items according to a rating threshold, and the content-based recommender then recommends items that were not eliminated by the popularity recommender.

### Phase 3

As more users are added to the RS, and as these users give feedback on recommended items, other types of recommenders can enter the recommender pool. A first set of threshold values for the number of users and the rating density is defined. When these thresholds are reached, phase 3 is initiated and yet another recommender is added: the demographic-based recommender.

#### 3.3.1 Demographic-based recommender

The demographic-based recommender is composed of two ML algorithms: one clustering algorithm and one classification algorithm. The clustering algorithm identifies clusters of similar users based on their demographic features. The users' demographic features can be age, region/country, group composition, budget, academic degree, etc. These features can be a mix of numerical, ordinal and nominal features, so a clustering algorithm that can handle different data types is necessary. After the clustering has been performed, and the users are all organized in clusters, a classification algorithm is used to predict whether a user will enjoy each item based on the item feedback of the other users in the same cluster. For clustering, the algorithm employed was K-Prototypes, which works similarly to K-Means but can deal with mixed data types, particularly ordinal and nominal data. To define the clustering model, a knee-region identifier is employed to automatically find the optimal (or close to optimal) number of clusters. The clustering model is retrained from time to time, once sufficiently many new users have been added since the last model fitting. For classification, a k-Nearest Neighbors algorithm, or kNN, was employed. Here, the users from the same cluster are used to predict whether a given user will enjoy the items, based on those users' feedback. The kNN uses a custom distance metric that combines the Jaccard and Manhattan distance metrics in order to handle the nominal and ordinal features. The kNN then weights the opinion of the other users inversely proportionally to their distance to the user for whom the predictions are being made. The predictions given by this algorithm are weighted and added to the predictions made by the hybrid recommender.

### Phase 4

In phase 4, collaborative filtering is added to the pool. As with phase 3, the entry into phase 4 takes place when thresholds of user cardinality and rating density are reached.
Once this happens, the collaborative filtering model is fitted and starts giving recommendations. The algorithm used for collaborative filtering is a Field-Aware Factorization Machine (FFM), which has already been introduced in Section 1. In the following sub-section, the FFM application is explained in more detail.

#### 3.4.1 Collaborative filtering with Field-Aware Factorization Machines (FFM)

To use FFMs, a specific Python library (xLearn) is used, and the data also has to be transformed into a specific format. A sample of a dataset in said format is shown in the following table.

\begin{table}
\begin{tabular}{c|c c c c c c c c c c}
 & \(0\) & \(1\) & \(2\) & \(3\) & \(4\) & \(5\) & \(6\) & \(7\) & \(8\) & \(9\) \\
\hline
0 & 0 & 0:1:1 & 1:2:1 & 2:3:1 & 3:4:1 & 4:5:1 & 5:6:1 & 6:7:1 & 7:8:1 & 8:9:1 \\
1 & 1 & 0:10:1 & 1:2:1 & 2:11:1 & 3:4:1 & 4:5:1 & 5:6:1 & 6:12:1 & 7:13:1 & 8:14:1 \\
2 & 0 & 0:15:1 & 1:16:1 & 2:3:1 & 3:4:1 & 4:17:1 & 5:6:1 & 6:18:1 & 7:19:1 & 8:20:1 \\
3 & 1 & 0:15:1 & 1:2:1 & 2:21:1 & 3:22:1 & 4:17:1 & 5:6:1 & 6:23:1 & 7:8:1 & 8:24:1 \\
4 & 1 & 0:10:1 & 1:16:1 & 2:3:1 & 3:4:1 & 4:17:1 & 5:25:1 & 6:23:1 & 7:26:1 & 8:27:1 \\
... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... \\
686422 & 1 & 0:1:1 & 1:2:1 & 2:3:1 & 3:4:1 & 4:17:1 & 5:25:1 & 6:23:1 & 7:8:1 & 8:37:1 \\
686423 & 1 & 0:34:1 & 1:2:1 & 2:21:1 & 3:4:1 & 4:5:1 & 5:25:1 & 6:35:1 & 7:8:1 & 8:36:1 \\
686424 & 1 & 0:10:1 & 1:16:1 & 2:3:1 & 3:4:1 & 4:17:1 & 5:25:1 & 6:18:1 & 7:8:1 & 8:24:1 \\
686425 & 1 & 0:34:1 & 1:16:1 & 2:21:1 & 3:22:1 & 4:17:1 & 5:25:1 & 6:50:1 & 7:13:1 & 8:49:1 \\
686426 & 1 & 0:15:1 & 1:2:1 & 2:3:1 & 3:4:1 & 4:17:1 & 5:6:1 & 6:23:1 & 7:8:1 & 8:44:1 \\
\end{tabular}
\end{table} Table 1: Dataset in the FFM (libffm) format, where each entry is encoded as field:feature:value; column 0 contains the labels and each remaining column corresponds to one field.

This format is more complex than that of the standard FM. This is due to the richer information ingested by the FFM, which uses information about the fields to define the latent vectors. That is, while in an FM each feature has a single latent vector, in an FFM this single representation is broken down into multiple latent vectors, one for each field of the other features it can interact with.

\[\hat{y}(x):=\ \omega_{0}+\sum_{i=1}^{n}\omega_{i}x_{i}+\sum_{i=1}^{n}\sum_{j=i+1}^{n}\langle v_{i},v_{j}\rangle x_{i}x_{j}\]

In the equation above, which represents the FM, the feature interactions represented by \(\langle v_{i},v_{j}\rangle\) would correspond to the following in our case scenario (user demographic features):

\[v_{male}\cdot v_{bluecollar}+\ v_{male}\cdot v_{lowbudget}+v_{male}\cdot v_{northeurope}+\ \cdots\]

That is, the male latent vector that multiplies with each of the other latent vectors is always the same. The idea behind the FFM is that the weight of the male latent vector might not be the same when multiplying with the job latent vectors as when multiplying with the budget latent vectors, and so on. Thus, in the FFM, the latent vectors are field-aware, which results in the following:

\[v_{male,job}\cdot v_{bluecollar,gender}+\ v_{male,budget}\cdot v_{lowbudget,gender}+v_{male,region}\cdot v_{northeurope,gender}+\ \cdots\]

Besides demographic features, as shown in this example, the latent vectors can also easily incorporate item features as well as contextual features, and can thus integrate context-awareness in a deeper sense than simple contextual pre-filtering or post-filtering. The FFM model represents the final, phase 4, addition to the recommender pool.
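To make the data format and the training procedure concrete, the following is a minimal sketch of how an interaction can be written in the field:feature:value format and how an FFM can be trained and queried with xLearn. File names, field/feature indices and hyper-parameters are illustrative placeholders, not the actual configuration of the system.

```python
import xlearn as xl

# Encode one interaction as a libffm-style line: "label field:feature:value ...".
# In the real pipeline every categorical level (gender, job, budget, region,
# item id, ...) would receive a unique feature index within its field.
def to_ffm_line(label, field_feature_pairs):
    entries = " ".join(f"{field}:{feat}:1" for field, feat in field_feature_pairs)
    return f"{label} {entries}"

line = to_ffm_line(1, [(0, 10), (1, 2), (2, 11), (3, 4)])  # -> "1 0:10:1 1:2:1 2:11:1 3:4:1"

# Train an FFM with xLearn on files already written in that format
# (train.ffm / valid.ffm / test.ffm are placeholder file names).
ffm_model = xl.create_ffm()
ffm_model.setTrain("train.ffm")
ffm_model.setValidate("valid.ffm")
param = {"task": "binary",   # predict like / not-like, matching the 0/1 labels of Table 1
         "lr": 0.2,          # learning rate (illustrative)
         "lambda": 0.002,    # L2 regularization (illustrative)
         "metric": "auc"}
ffm_model.fit(param, "ffm_model.out")

# Score unseen user-item pairs; setSigmoid() maps raw scores into (0, 1).
ffm_model.setTest("test.ffm")
ffm_model.setSigmoid()
ffm_model.predict("ffm_model.out", "predictions.txt")
```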
The predictions attained from the FFM are weighted and then aggregated with the predictions given by the other two, the hybrid and the demographic recommender. The weights given to each recommender may be set to change over time so that they accompany the maturity and complexity of each of the recommenders in the pool, thus giving progressively larger weight to the FFM as more users and more ratings are added to the system.

Figure 8: Diagram of the various RS phases and interactions between RS and Data Repository (DB) components.

## 4 Recommender system - Case study (CS) with synthetic data

One of the main challenges in designing the recommender system proposed in this work was the lack of data to perform any type of experiment, or even just to aid and inspire the definition of the algorithms to employ. The lack of data was absolute, both on the side of the items and on the side of the users and preferences. The main issue is the non-existence of a dataset with user demographic features and user preferences, since such a dataset would allow overcoming some of the cold-start issues as well as give some idea of the data schema to be adopted. As a result, and since no public datasets were found that could overcome this hindrance, the decision was made to generate a synthetic dataset. The dataset was generated using many different techniques, from Gaussian copulas to fuzzy logic. Further information on that work will be available in another paper by the author Camacho, VT. In the following sub-section, the synthetic data employed in this work's case study is presented. Besides the synthetic data, a set of metrics was chosen to get an idea of the quality of the results from the recommenders. Traditional ML metrics are not always adequate for RS, mainly because, in principle, the objective of an RS is not to emulate exactly the choices of a given user since, if that were the case, there wouldn't be a need for an RS in the first place. In the metrics sub-section, the set of metrics used is presented. The remainder of this section applies the recommenders introduced in the previous section and tests them with different amounts of data, attempting to emulate the data present at the different phases.

### Synthetic data

In the work mentioned above, a methodology for the generation of synthetic datasets for recommender systems is presented, thus allowing to overcome the obstacle of not having quality data readily available in sufficient amount (or even at all). The difficulties associated with this task are essentially the definition of a dataset with multiple datatypes, such as numerical (continuous), ordinal and nominal, and with different levels of correlation among the data, as well as the definition of user ratings based on well-defined latent user preferences. To overcome this, a methodology was devised where several different techniques are employed in sequence to create the datasets concerning user characteristics, item properties, item categories and latent user preferences associated with user and item features, and, as a result, a user-item sparse ratings matrix. The output of the methodology is:

1) Item dataset with item names and categories.
2) User dataset with user characteristics (demographic features).
3) User-item sparse ratings matrix.
4) Latent preferences and Multinomial Logit model to compare with the outputs of the Recommender System.

#### 4.1.1 Data Schema

From the output presented above, we can see 4 DataFrames with different information.
These DataFrames each have their own schema and contain features of different data types. In the following, the created DataFrames are introduced:

* Demographic Features
* Preferences
* Item Features
* User Ratings

Going into more detail regarding the user demographic features DataFrame:

* Demographic Features: \(\circ\) User ID \(\circ\) Age \(\circ\) Gender \(\circ\) Job \(\circ\) Academic Degree \(\circ\) Budget \(\circ\) Country/Region \(\circ\) Group Composition \(\circ\) Accommodation

Concerning the type of feature, they can be divided essentially into three groups: numerical, categorical ordinal and categorical nominal. Concerning numerical and categorical ordinal features, we have the following:

* Numerical: Age (can be transformed into age bins)
* Ordinal: \(\circ\) Age bins = ['18-30', '31-40', '41-50', '51-60', '60+'] \(\circ\) Academic Degree = ['None', 'High School', 'Some College', 'College Degree'] \(\circ\) Budget = ['Low', 'Mid', 'High'] \(\circ\) Accommodation = ['Single', 'Double', 'Suite', 'Villa']

As for categorical nominal features, the following were modelled:

* Gender = ['Male', 'Female']
* Job = ['Blue Collar', 'White Collar']
* Country/Region = ['South Europe', 'North Europe', 'East Europe', 'North America', 'South America', 'Asia', 'Africa', 'Middle East']
* Group Composition = ['1 Adult', '2 Adults', '2 Adults + Child', 'Group of Friends']

#### 4.1.2 Samples of the generated DataFrames

The resulting DataFrames (DF) can be used to train and test RS. In the case of the present work, they are used to simulate the different phases of data availability, thus testing the recommenders employed in each of the four phases. In the following, samples of the generated DFs are presented. The first sample shown is the User DF in Table 2. This DF is composed of the user demographic features and the UserID. The demographic features are ordinal (Age, AcDeg, Budget, Accom) and nominal (Gender, Job, Region, GroupComp). The entire set of users created has a cardinality of 100,000. The second DF is the User-Preference DF, which contains the latent preferences and is presented in Table 3. These latent preferences are related to the ontology classes. The latent preferences of each user were modeled through a multinomial logit model based on their demographic features. This DF shows the relative interest of a given user in a given preference category versus any other preference category. The values between different users are not comparable.

\begin{table}
\begin{tabular}{c|c c c c c c c c}
_UserID_ & _Age_ & _AcDeg_ & _Budget_ & _Accom_ & _Gender_ & _Job_ & _Region_ & _GroupComp_ \\
\hline
\(0\) & 4 & 2 & 1 & 2 & Female & blue collar & North Europe & 2Adit \\
\(1\) & 5 & 4 & 2 & 3 & Male & white collar & North Europe & GrpFriends \\
\(2\) & 3 & 3 & 2 & 2 & Female & blue collar & North Europe & 2Adit+Child \\
\(3\) & 4 & 4 & 2 & 2 & Female & white collar & North Europe & 2Adit+Child \\
\(4\) & 3 & 3 & 2 & 3 & Female & white collar & South Europe & 2Adit \\
_..._ & ... & ... & ... & ... & ... & ... & ... & ... \\
_99995_ & 4 & 4 & 2 & 2 & Female & white collar & North Europe & 2Adit+Child \\
_99996_ & 3 & 4 & 3 & 2 & Male & white collar & Asia & 2Adit+Child \\
_99997_ & 1 & 1 & 1 & 1 & Female & blue collar & South Europe & 2Adit \\
_99998_ & 1 & 3 & 1 & 2 & Female & blue collar & South Europe & 2Adit+Child \\
_99999_ & 4 & 3 & 2 & 2 & Male & blue collar & North America & 2Adit+Child \\
\end{tabular}
\end{table} Table 2: User DF composed of the demographic features of the users.
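Mixed ordinal/nominal user features like those in Table 2 are what the phase 3 demographic recommender clusters with K-Prototypes before applying its kNN step. The sketch below illustrates those two steps on a hypothetical handful of users; the column values, the number of clusters and the distance weighting are illustrative and not the system's actual configuration.

```python
import pandas as pd
from kmodes.kprototypes import KPrototypes

# Illustrative user table: ordinal features are integer-coded (as in Table 2),
# nominal features are kept as strings.
users = pd.DataFrame({
    "Age":    [4, 5, 3, 4, 3],
    "AcDeg":  [2, 4, 3, 4, 3],
    "Budget": [1, 2, 2, 2, 2],
    "Accom":  [2, 3, 2, 2, 3],
    "Gender": ["Female", "Male", "Female", "Female", "Female"],
    "Job":    ["blue collar", "white collar", "blue collar", "white collar", "white collar"],
    "Region": ["North Europe", "North Europe", "North Europe", "North Europe", "South Europe"],
})
ordinal_cols = ["Age", "AcDeg", "Budget", "Accom"]
nominal_cols = ["Gender", "Job", "Region"]

# 1) K-Prototypes clustering on mixed data types (cluster count illustrative;
#    in the real system it is chosen automatically with a knee-region identifier).
kproto = KPrototypes(n_clusters=2, init="Cao", random_state=0)
labels = kproto.fit_predict(
    users.to_numpy(),
    categorical=[users.columns.get_loc(c) for c in nominal_cols],
)

# 2) Custom user-user distance mixing a Manhattan term (ordinal features) with a
#    Jaccard-style mismatch term (nominal features), used to weight neighbours
#    inside the same cluster.
def user_distance(a, b):
    manhattan = sum(abs(a[c] - b[c]) for c in ordinal_cols)
    mismatch = sum(a[c] != b[c] for c in nominal_cols) / len(nominal_cols)
    return manhattan + mismatch

target = users.iloc[0]
same_cluster = users[labels == labels[0]].drop(index=0)
weights = {i: 1.0 / (1e-6 + user_distance(target, row))
           for i, row in same_cluster.iterrows()}
# These inverse-distance weights would then be applied to the neighbours' item
# feedback in order to predict whether the target user will enjoy each item.
```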
The third DF sample presented is the Item DF in Table 4. Here a set of 29 items were included belonging to different categories which are the user latent preferences presented in the previous table. The third DF sample presented is the Item DF in Table 4. Here a set of 29 items were included belonging to different categories which are the user latent preferences presented in the previous table. \begin{table} \begin{tabular}{c|c} _itemID_ & _Item Name_ \\ \hline \multirow{2}{*}{_0_} & A service that offers you the opportunity to \\ & do bungee-jumping \\ 1 & A tavern that serves traditional food \\ & Ancient history museum \\ 2 & \begin{tabular}{c} _Category_ \\ \end{tabular} \\ & Discount for Callaway clubs \\ 3 & \begin{tabular}{c} _Category_ \\ \end{tabular} \\ & Get a discount for Comic-Con \\ 4 & \begin{tabular}{c} _Category_ \\ \end{tabular} \\ & Get a free pint at the pub \\ 5 & \begin{tabular}{c} _Category_ \\ \end{tabular} \\ & Get a free pizza at Pizza Hut \\ 6 & \begin{tabular}{c} _Category_ \\ \end{tabular} \\ & Get a voucher for Sephora \\ 8 & \begin{tabular}{c} _Category_ \\ \end{tabular} \\ & Go shopping in our new mall \\ 9 & \begin{tabular}{c} _Category_ \\ \end{tabular} \\ \end{tabular} \end{table} Table 4: Item DF with corresponding item category (ontology and latent preferences). The last data sample is the result of an external product between the user preferences from the multinomial logit model and the item DF. The result is the input of a Fuzzy Inference System, which along with other implicit information on user and items returns the User-Item ratings DF, a sample of which is shown in Table 5. \begin{table} \begin{tabular}{l|c c c c c c c c c c c c} & **0** & **1** & **2** & **3** & **4** & **5** & **...** & **23** & **24** & **25** & **26** & **27** & **28** \\ \hline \multicolumn{12}{l}{_userId_} \\ **0** & 1.41 & 0.00 & 1.87 & 0.00 & 3.21 & 0.00 &... & 0.00 & 1.79 & 0.00 & 1.79 & 2.96 & 0.00 \\ **1** & 0.00 & 4.63 & 1.77 & 1.26 & 0.00 & 0.00 &... & 0.00 & 0.00 & 4.06 & 0.00 & 2.21 & 1.77 \\ **2** & 0.00 & 0.00 & 0.00 & 2.10 & 3.20 & 2.38 &... & 3.48 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ **3** & 0.00 & 3.12 & 0.00 & 0.00 & 3.28 & 2.89 &... & 0.00 & 2.22 & 0.00 & 0.00 & 0.00 & 0.00 \\ **4** & 1.37 & 0.00 & 2.31 & 1.63 & 0.00 & 0.00 &... & 3.31 & 2.30 & 0.00 & 0.00 & 0.00 & 0.00 \\... &... &... &... &... &... &... &... &... &... &... &... &... &... \\ **99995** & 0.00 & 0.00 & 0.1 & 2.34 & 0.00 &... & 0.00 & 3.84 & 3.79 & 0.00 & 3.36 & 0.00 \\ **99996** & 1.46 & 0.00 & 0.00 & 0.00 & 2.31 & 0.00 &... & 2.31 & 0.00 & 0.00 & 0.00 & 0.00 & 1.39 \\ **9997** & 1.47 & 0.00 & 0.00 & 1.32 & 2.74 & 0.00 &... & 0.00 & 0.00 & 2.29 & 0.00 & 0.00 & 0.00 \\ **99998** & 0.00 & 4.64 & 4.11 & 1.78 & 0.00 & 2.94 &... & 3.43 & 2.65 & 3.80 & 0.00 & 4.65 & 4.33 \\ **9999** & 0.00 & 3.54 & 3.06 & 0.00 & 4.07 & 2.65 &... & 0.00 & 3.07 & 3.51 & 2.46 & 3.50 & 2.61 \\ \end{tabular} \end{table} Table 5: User-Item ratings DF. ### Metrics The metrics for a RS are not a trivial issue. Many works tend to use common ML metrics, such as classification metrics like precision, recall, accuracy, or regression metrics such as RMSE or MAE when the goal is to perform a regression on 1-5 ratings, for example. However, these metrics imply that the data available to us about user behavior is perfect, that is, users are aware of all the items they like and the ones they haven't tried aren't as relevant. If this were the case, no RS would be needed in the first place. 
The drawback of using these types of metrics is that they can encourage the recommender to make obvious recommendations in some cases, by penalizing wrong recommendations too much. In addition, these metrics do nothing to compare recommenders in terms of how personalized or how diversified their recommendations are. Other metrics have been developed for RS in recent years that try to address these issues, some of which are presented in the following.

#### 4.2.1 Mean Average Precision @ K and Mean Average Recall @ K

As in more traditional machine learning, the dataset is split into training and test sets; the test set comprises cases the learner did not train on and is thus used to measure the model's ability to generalize to new data. In recommender systems the same is done, and the output of a recommender system is usually a list of K recommendations for each user in the test set, where, to produce those recommendations, the recommender only trained on the items that user enjoyed in the training set. MAP@K (Mean Average Precision @ K) gives insight into how relevant the list of recommended items is, whereas MAR@K (Mean Average Recall @ K) gives insight into how well the recommender system is able to discover all the items the user has rated positively in the test set. In recommender systems, precision and recall are essentially the same as in machine learning:

\[Precision=\frac{\#\;of\;relevant\;recommendations}{\#\;of\;items\;recommended}\]

\[Recall=\frac{\#\;of\;relevant\;recommendations}{\#\;of\;relevant\;items}\]

However, these metrics don't take ordering into account, and since the output of a recommender system is usually an ordered list, the metrics at cut-off K, MAP@K and MAR@K, are introduced.

\[MAP@K=\frac{1}{|U|}\sum_{u=1}^{|U|}\frac{1}{\min(m,K)}\sum_{k=1}^{K}P_{u}(k)\cdot rel_{u}(k)\]

\[MAR@K=\frac{1}{|U|}\sum_{u=1}^{|U|}\frac{1}{m}\sum_{k=1}^{K}r_{u}(k)\cdot rel_{u}(k)\]

where \(U\) is the set of users in the test set, \(m\) is the number of relevant items for user \(u\), \(P_{u}(k)\) and \(r_{u}(k)\) are the precision@k and recall@k, respectively, and \(rel_{u}(k)\) is a factor equal to 1 if the \(k^{\text{th}}\) item is relevant, and 0 otherwise.

#### 4.2.2 Coverage

Coverage is the percentage of items in the training data that the recommender is able to recommend on a test set.

\[Coverage=\frac{I}{N}*100\%\]

where \(I\) is the number of unique items the model recommends in the test data and \(N\) is the total number of unique items in the training data.

#### 4.2.3 Personalization

Personalization is the dissimilarity between users' lists of recommendations. A high score indicates that the users' lists are different from each other, while a low score indicates they are very similar. Similarity between recommendation lists is calculated via the cosine similarity between said lists, followed by the average of the upper triangle of the cosine similarity matrix (avgCosim). The personalization is then given by:

\[Personalization=1-avgCosim\]

#### 4.2.4 Diversity

Diversity measures how different the items being recommended to the user are.

\[Diversity=1-ils\]

where _ils_ corresponds to the intra-list similarity, which is the average cosine similarity of all items in a list of recommendations. This calculation uses features of the recommended items (such as item metadata) to calculate the similarity. The feature matrix is indexed by the item id and includes one-hot-encoded features.
If a recommender system is recommending lists of very similar items, the intra-list similarity will be high and, conversely, the diversity will be low.

#### 4.2.5 Novelty

Finally, novelty measures the capacity of a recommender system to propose novel and unexpected items which a user is unlikely to know about already. It uses the self-information of the recommended items: the mean self-information per top-N recommendation list is calculated and then averaged over all users.

\[Novelty=\frac{1}{|U|}\sum_{u=1}^{|U|}\frac{\sum_{i\in N_{u}}-\log_{2}\!\left(\frac{count(i)}{|U|}\right)}{|N|}\]

where \(U\) is the set of users, \(N_{u}\) is user \(u\)'s top-N list, \(|N|\) is its length and \(count(i)\) is the number of users that have consumed item \(i\).

### CS with increasing data quantity

In this sub-section the previously presented datasets and metrics are employed to test and evaluate the RS in its various phases. For this to work, the datasets are gradually incremented, starting with very few users and no ratings, and ending with the full datasets. This process is meant to mimic the natural evolution of an RS, from initial cold-start conditions to thousands of users with thousands of reviews. In each phase, different recommenders are employed, as was already mentioned in previous sections.

#### 4.3.1 CS in Phase 1

As mentioned previously, phase 1 is characterized by a small number of users and no ratings. At this point, only content-based approaches are possible, and only if there is some input from the user concerning his preferences, which the RS asks for when the user first logs in. Otherwise, the RS would be incapable of giving any recommendation short of a random context-filtered one. To mimic this first stage, 98 initial users are added to the RS. Each user inputs their HL preference vector, related to Table 3, which the phase 1 content-based recommender uses to generate recommendations. Unlike in Table 3, the HL preference vector takes either 0 or 1 values and thus does not convey information on interest intensity. In the following tables, a sample of the 98 users and their respective HL vectors is shown, followed by the recommendations given by the RS for each user. We can apply all previously presented metrics to these results, including MAP@K and MAR@K, because we are aware of some ratings given by the users, present in the User-Item ratings DF, which we can use for this purpose.

\begin{table}
\begin{tabular}{c|c c c c c c c c}
_userId_ & _ViewPoints_ & _Nature_ & _Towns_ & _Culture_ & _Events_ & _Leisure_ & _Routes_ & _Sports_ \\
\hline
1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
2 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 \\
3 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
4 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \\
5 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
... & ... & ... & ... & ... & ... & ... & ... & ... \\
94 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
95 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
96 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 \\
97 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
98 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 \\
\end{tabular}
\end{table} Table 6: High-level preferences of the users.
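Before looking at the output, the following minimal sketch illustrates the mechanics of the phase 1 content-based recommender described in Section 3.1 as applied to binary HL vectors like those of Table 6. The ontology matrices, item counts, feedback values and ensemble weights below are small hypothetical stand-ins, not the actual system's data.

```python
import numpy as np

# Hypothetical ontology: 3 high-level (HL) classes, 5 low-level (LL) classes, 6 items.
# hl_to_ll[h, l] = 1 if LL class l is a child of HL class h;
# ll_to_item[l, i] = 1 if item i is linked to LL class l.
hl_to_ll = np.array([[1, 1, 0, 0, 0],
                     [0, 0, 1, 1, 0],
                     [0, 0, 0, 0, 1]])
ll_to_item = np.array([[1, 0, 0, 0, 0, 0],
                       [0, 1, 1, 0, 0, 0],
                       [0, 0, 0, 1, 0, 0],
                       [0, 0, 0, 0, 1, 0],
                       [0, 0, 0, 0, 0, 1]])

# Steps 1-2: propagate the user's binary HL preference vector down to item scores.
hl_pref = np.array([1, 0, 1])              # e.g. the user ticked HL classes 0 and 2
ll_pref = hl_pref @ hl_to_ll               # low-level preference vector
item_scores = ll_pref @ ll_to_item         # specific preference vector
top_k = np.argsort(item_scores)[::-1][:5]  # the 5 highest-ranked items are recommended

# Step 3: "trickle-up" feedback, e.g. the user books item 1 (positive) and dismisses
# item 5 (negative); the LL vector is updated strongly, the HL vector only weakly.
feedback = np.zeros(6)
feedback[1], feedback[5] = 1.0, -1.0
ll_pref = ll_pref + 0.5 * (ll_to_item @ feedback)
hl_pref = hl_pref + 0.1 * (hl_to_ll @ (ll_to_item @ feedback))

# Step 4: new ensemble prediction using both updated vectors (weights illustrative).
new_scores = 0.3 * (hl_pref @ hl_to_ll @ ll_to_item) + 0.7 * (ll_pref @ ll_to_item)
new_top_k = np.argsort(new_scores)[::-1][:5]
```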
[(6, 'Get a free pizza at Pizza Hut'), (7, 'Get a voucher for Sephora'), (8, 'Go shopping in our new mall'), (27, 'go to the spa'), (28, 'visiting Disneyland')]
[(6, 'Get a free pizza at Pizza Hut'), (7, 'Get a voucher for Sephora'), (8, 'Go shopping in our new mall'), (14, 'Rest and relaxation at the spa'), (27, 'go to the spa')]
[(11, 'Medieval fair'), (1, 'A tavern that serves traditional food'), (13, 'One of the main nightclubs in the city'), (2, 'Ancient history museum'), (0, 'A service that offers you the opportunity to do bungee-jumping')]
[(2, 'Ancient history museum'), (11, 'Medieval fair'), (13, 'One of the main nightclubs in the city'), (1, 'A tavern that serves traditional food'), (14, 'Rest and relaxation at the spa')]

Table 7: Sample of the recommendation lists (item id, item name) given by the RS to four of the users.

\begin{table}
\begin{tabular}{c c c c c c c}
_MAP@K_ & _MAR@K_ & _Coverage_ & _Personalization_ & _Diversity HL_ & _Diversity LL_ & _Novelty_ \\
\hline
0.092 & 0.092 & 0.55 & 0.51 & 0.21 & 0.76 & 0.66 \\
\end{tabular}
\end{table} Table 8: Values for the various metrics on the content model recommendations.

We can see that mean average precision and mean average recall have the same value; K is equal to 5, since the recommender recommends 5 items to each user. The two diversity values pertain to the high-level and low-level preferences and show how diverse the recommendations are. The high-level diversity is expected to be lower than the low-level diversity since the content recommender makes recommendations based on the high-level preferences of the users. Low-level preferences are linked ontologically to high-level preferences, but they are greater in variety, hence the same high-level preference is linked to many low-level preferences; this justifies the larger value of Diversity LL compared to Diversity HL. Coverage, personalization and both diversities return values from 0 to 1, where 1 represents maximum coverage, personalization and diversity. The value for novelty can take any positive value; the greater the value, the more unexpected the recommendations are in terms of popularity. In this study, the metric for novelty may not be very useful due to the relatively low cardinality of items and the fact that there are no clearly less popular items per se. In any case, these metrics are most useful when used to compare different models.

#### 4.3.2 CS in Phase 2

In phase 2 there are ratings in the system, although not enough users to feed the demographic-based recommender. In this phase we can simulate an RS state where there are 98 users and 64 ratings. The hybrid recommender is a hybridization of the initial content-based recommender with the new popularity-based recommender. The ratings are used to filter out items with an average rating below a given threshold. Once again, the same metrics are applied, and the results are shown in the following table. It is interesting to observe that the precision and recall have gone up, which makes sense because the items are now being filtered according to rating, and higher-rated items are more prone to having been liked by the users, at least in the way the synthetic data was defined. The coverage has gone down, which makes sense since fewer items are being recommended due to the filtering. Personalization has gone down since now many users are being recommended the same items. Diversity has gone up; this can be due to recommending some items outside the natural preference of the user because of the rating filtering.
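The phase 2 behaviour described above, a popularity pre-filter based on the damped mean cascading into the content-based ranking, can be sketched as follows. The damping coefficient, global mean and rating threshold are illustrative values, not the ones configured in the system.

```python
import numpy as np

def damped_mean(ratings, k=5, global_mean=3.0):
    """Damped mean of one item's ratings, as defined in Section 3 (k illustrative)."""
    ratings = np.asarray(ratings, dtype=float)
    return (ratings.sum() + k * global_mean) / (len(ratings) + k)

# A single 5-star rating no longer yields a perfect average:
print(damped_mean([5.0]))   # ~3.33 instead of 5.0 with k=5 and a global mean of 3.0

def hybrid_phase2(content_scores, item_ratings, rating_threshold=3.0, top_k=5):
    """Cascading ensemble: popularity pre-filter, then content-based ranking.

    content_scores: dict item_id -> content-based score (from the phase 1 recommender)
    item_ratings:   dict item_id -> list of ratings received so far
    """
    surviving = [i for i in content_scores
                 if damped_mean(item_ratings.get(i, [])) >= rating_threshold]
    ranked = sorted(surviving, key=lambda i: content_scores[i], reverse=True)
    return ranked[:top_k]
```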
All in all, differences can be observed compared to the content-recommender, these differences make sense and seem to go towards an expected behavior by the recommender. #### 4.3.3 CS in Phase 3 In phase 3, enough users with ratings given have been introduced in the system to kickstart the demographic-based recommender. This recommender works by defining user clusters based on demographic features and then giving item recommendations based on the predictions of a kNN. This phase 3 recommender works together with the hybrid recommender from phase 2. In the following table, the metrics are applied, and the results shown. The number of users in this phase total 198, with 191 ratings. \begin{table} \begin{tabular}{c|c c c c c c} & _MAP@K_ & _MAR@K_ & _Coverage_ & _Personalization_ & _Diversity HL_ & _Diversity LL_ & _Novelty_ \\ \hline _Hybrid_ & 0.178 & 0.178 & 0.34 & 0.07 & 0.64 & 0.91 & 0.66 \\ _Demog_ & 0.151 & 0.151 & 0.72 & 0.57 & 0.63 & 0.90 & 0.66 \\ \end{tabular} \end{table} Table 10: Values for the various metrics on the hybrid and demographic model recommendations. \begin{table} \begin{tabular}{c c c c c c c} _MAP@K_ & _MAR@K_ & _Coverage_ & _Personalization_ & _Diversity HL_ & _Diversity LL_ & _Novelty_ \\ \hline 0.219 & 0.219 & 0.17 & 1.11e-16 & 0.64 & 0.91 & 0.66 \\ \end{tabular} \end{table} Table 9: Values for the various metrics on the hybrid model recommendations. We can see these results in a bar chart where a min max scaler has been applied. This basically shows which model wins in each category. We can see that the hybrid model loses to the demographic model in coverage and personalization and has higher values in the other metrics. However, we can see that results are virtually equal in terms of Diversity and Novelty, and only on the Precision and Recall do we see larger values for the hybrid model, which are not that much higher. On the other hand, the demographic recommender has much larger personalization and coverage. Here we can see an increment by the demographic model compared to the hybrid model. This makes sense because the demographic model is more complex in how recommendations are given by finding similar users in terms of demographic features and then recommending similar items to the user on a more individual basis, whereas the hybrid model is again based on high level preferences. \begin{table} \begin{tabular}{c|c c c c c c} & _MAP@K_ & _MAR@K_ & _Coverage_ & _Personalization_ & _Diversity HL_ & _Diversity LL_ & _Novelty_ \\ \hline _Hybrid_ & & & & & & \\ _P2_ & 0.219 & 0.219 & 0.17 & 1.11e-16 & 0.64 & 0.91 & 0.66 \\ _Hybrid_ & & & & & & \\ _P3_ & 0.178 & 0.178 & 0.34 & 0.07 & 0.64 & 0.91 & 0.66 \\ \end{tabular} \end{table} Table 11: Values for the various metrics on the hybrid phase 2 and hybrid phase 3 model recommendations. Figure 9: Scaled metrics for both models. It is also interesting to compare the metrics between the hybrid in phase 2 and phase 3. We can see that most metrics remain similar with a slight decrease in precision and recall, which may be just random, a slight increase in personalization, and a rather large increase in coverage. This can be due to more items recommended and not filtered out due to poor ratings because of the existence of more users and ratings on items. It is interesting to see a variation of the metrics of the same recommender as the amount of data increases. #### 4.3.4 CS in Phase 4 Phase 4 starts when a given number of users and a given density of the user-item rating DF is achieved. 
When this happens, the final recommender is initiated. This recommender is the already mentioned FFM. In phase 4, the recommendations are, once again, the result of an ensemble of recommenders, the same one in phase 3 with the addition of the new FFM. The resulting metrics are once more applied to the recommendations and are shown in the following table. In this phase we have 250 users and 191 ratings. Comparing the recommenders, we can observe that the collaborative recommender, which was added in this later stage has high levels of personalization and coverage and achieves the highest values for precision and recall, compared to the other two models. The values for diversity are all similar at this stage, and novelty again doesn't provide useful information with this number of total items. In terms of precision and recall, coverage and personalization, the collaborative recommender gives us expected results which is relatively high values in these metrics. We can observe that each recommender brings different recommendations to the table with clear improvements in some metrics as the recommender system matures. It would be interesting to view this with a dataset comprising many more items and users. In the following figure we can see the metrics in a scaled graph. \begin{table} \begin{tabular}{c|c c c c c c} & _MAP@K_ & _MAR@K_ & _Coverage_ & _Personalization_ & _Diversity HL_ & _Diversity LL_ & _Novelty_ \\ \hline _Hybrid_ & 0.158 & 0.158 & 0.34 & 0.06 & 0.64 & 0.91 & 0.66 \\ _Demog_ & 0.137 & 0.137 & 0.68 & 0.55 & 0.66 & 0.91 & 0.66 \\ _Collab_ & 0.181 & 0.181 & 0.72 & 0.54 & 0.67 & 0.91 & 0.66 \\ \end{tabular} \end{table} Table 12: Values for the various metrics on the hybrid, demographic and collaborative model recommendations. As said, we observe that the collaborative metrics are good in comparison to the other two, however, the collaborative model is only useful when the recommender system has seen sufficient data. The metrics for the other two are not as high but they don't suffer so much from cold-start issues. We can see that between the demographic and the hybrid models there is a trade-off in metrics. We had already seen this in the previous phase. \begin{table} \begin{tabular}{c|c c c c c c} \multicolumn{1}{c}{_MAP@K MAR@K Coverage Personalization Diversity HL Diversity LL Novelty_} \\ \hline _Hybrid_ & 0.219 & 0.219 & 0.17 & 1.11e-16 & 0.64 & 0.91 & 0.66 \\ _P2_ & & & & & & \\ _Hybrid_ & 0.178 & 0.178 & 0.34 & 0.07 & 0.64 & 0.91 & 0.66 \\ _P3_ & & & & & & \\ _Hybrid_ & 0.158 & 0.158 & 0.34 & 0.06 & 0.64 & 0.91 & 0.66 \\ _Demog_ & & & & & & \\ _P3_ & & & & & & \\ _Demog_ & & & & & & \\ _Demog_ & & & & & & \\ _P4_ & & & & & & \\ \end{tabular} \end{table} Table 13: Values for the various metrics on the phase 1, phase 2 and phase 3 model recommendations of hybrid and demographic models. Figure 10: Scaled metrics for all three models Here we can see a comparison between the metrics of the different models along each phase, we can see a slight decrease of precision and recall in the evolving phases for hybrid and demographic models, but this might have to do with insufficient ratings being added between phase 3 and phase 4, which are important for the demographic recommender. With a further increase in data, we can see further differences in the metrics. Feeding the recommender system with 1000 users and 883 ratings, we attain the following results. We can see that the metrics are qualitatively similar to the case before with less users and ratings. 
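For reference, the personalization and coverage values reported in the tables above can be computed directly from the lists of recommendations. The following is a minimal sketch of one possible implementation of these two metrics; the recommendation lists shown are toy examples, not the case-study output.

```python
import numpy as np

def coverage(recommended_lists, catalog_size):
    """Percentage of the catalog that appears in at least one recommendation list."""
    unique_items = {item for rec in recommended_lists for item in rec}
    return 100.0 * len(unique_items) / catalog_size

def personalization(recommended_lists, catalog_size):
    """1 - average pairwise cosine similarity between users' recommendation lists."""
    mat = np.zeros((len(recommended_lists), catalog_size))
    for u, rec in enumerate(recommended_lists):
        mat[u, list(rec)] = 1.0                      # binary user-by-item indicator matrix
    norms = np.linalg.norm(mat, axis=1, keepdims=True)
    cosim = (mat @ mat.T) / (norms @ norms.T)
    upper = cosim[np.triu_indices(len(recommended_lists), k=1)]
    return 1.0 - upper.mean()

# Toy example: 3 users, a catalog of 29 items (as in the case study), 5 recommendations each.
recs = [[6, 7, 8, 27, 28], [6, 7, 8, 14, 27], [11, 1, 13, 2, 0]]
print(coverage(recs, 29), personalization(recs, 29))
```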
Even with 1,000 users and 883 ratings, the rating density is still low, which particularly penalizes the collaborative model. Nonetheless, we can observe that the collaborative model is the one that offers the most personalization, a metric which increased for all models with the increment in users and ratings. Coverage also increased heavily for the demographic model, while only increasing slightly for the collaborative model. As for precision and recall, the demographic model maintains the metric with only a slight decrease, while the hybrid and collaborative models saw a rather significant decrease. In regard to the collaborative model, this might have to do with the low rating density. All in all, we see that the demographic and collaborative models clearly become more dominant and useful as more data is added to the RS. The ordering of the phases also makes sense, with the collaborative model initiating only after all the others, since the collaborative model is very sensitive to rating density, while the demographic model is more robust in that sense. By this phase, the hybrid model has clearly been surpassed by the two other models in most metrics, which is exactly what would be expected.

## 5 Conclusion and future works

In this work, an ontology-based, context-aware recommender system application for tourism was presented, where different recommenders are used at different stages of maturity of the recommender system. The novel aspect is the evolution of the recommender system, with different types of recommenders entering the recommendation pool as the system's maturity evolves. The ontology extension of the recommender system allows items to be binned and recommended to users based on user preference vectors with different degrees of detail that link to the item ontology. These preference vectors are ever changing based on user feedback, while other recommenders based on demographic features and field-aware factorization machines join the pool as data increases. Throughout this work, the RS was presented and ultimately tested with synthetic data mimicking different stages of maturity. One could observe that at each new phase the new recommenders added value, as seen from the comparison between the different adopted metrics, which were MAP@K, MAR@K, Coverage, Personalization, Diversity HL, Diversity LL and, finally, Novelty. These metrics are the state of the art for Recommender Systems because they attempt to go beyond the usual metrics adopted in ML, which don't always have much meaning in RS. The results obtained were as expected, with the Collaborative and Demographic approaches essentially bringing more personalization and coverage to the table. However, the full extent of the differences between recommenders could not be captured, mainly due to the relatively low cardinality of items being offered, only 29. Future works would entail a broader analysis with more items, as well as context-aware data, which was not tested in this instance. Nonetheless, the context-awareness would essentially amount to pre-filtering, which would not be of much interest with regard to the metric results.

## Acknowledgements

The present paper was developed in the context of the PMP project - Partnership Management Platform, code LISBOA-01-0247-FEDER-045411, co-financed by LISBOA 2020 and Portugal 2020 through the European Regional Development Fund.