2309.03397
Cooperative Multi-Agent Planning Framework for Fuel Constrained UAV-UGV Routing Problem
Unmanned Aerial Vehicles (UAVs), although adept at aerial surveillance, are often constrained by limited battery capacity. By refueling on slow-moving Unmanned Ground Vehicles (UGVs), their operational endurance can be significantly enhanced. This paper explores the computationally complex problem of cooperative UAV-UGV routing for vast area surveillance within the speed and fuel constraints, presenting a sequential multi-agent planning framework for achieving feasible and optimally satisfactory solutions. By considering the UAV fuel limits and utilizing a minimum set cover algorithm, we determine UGV refueling stops, which in turn facilitate UGV route planning at the first step and through a task allocation technique and energy constrained vehicle routing problem modeling with time windows (E-VRPTW) we achieve the UAV route at the second step of the framework. The effectiveness of our multi-agent strategy is demonstrated through the implementation on 30 different task scenarios across 3 different scales. This work offers significant insight into the collaborative advantages of UAV-UGV systems and introduces heuristic approaches to bypass computational challenges and swiftly reach high-quality solutions.
Md Safwan Mondal, Subramanian Ramasamy, James D. Humann, Jean-Paul F. Reddinger, James M. Dotterweich, Marshal A. Childers, Pranav A. Bhounsule
2023-09-06T23:08:42Z
http://arxiv.org/abs/2309.03397v1
# Cooperative Multi-Agent Planning Framework for Fuel Constrained UAV-UGV Routing Problem ###### Abstract Unmanned Aerial Vehicles (UAVs), although adept at aerial surveillance, are often constrained by limited battery capacity. By refueling on slow-moving Unmanned Ground Vehicles (UGVs), their operational endurance can be significantly enhanced. This paper explores the computationally complex problem of cooperative UAV-UGV routing for vast area surveillance within the speed and fuel constraints, presenting a sequential multi-agent planning framework for achieving feasible and optimally satisfactory solutions. By considering the UAV fuel limits and utilizing a minimum set cover algorithm, we determine UGV refueling stops, which in turn facilitate UGV route planning at the first step and through a task allocation technique and energy constrained vehicle routing problem modeling with time windows (E-VRPTW) we achieve the UAV route at the second step of the framework. The effectiveness of our multi-agent strategy is demonstrated through the implementation on 30 different task scenarios across 3 different scales. This work offers significant insight into the collaborative advantages of UAV-UGV systems and introduces heuristic approaches to bypass computational challenges and swiftly reach high-quality solutions. Keywords:Multi-agent planning, VRP, UAV, UGV ## 1 Introduction Over the last decade, unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) have become increasingly prevalent in a variety of sectors. These applications span from intelligence gathering, surveillance, and reconnaissance tasks [1; 2; 3], to search and rescue tasks [4] and even agricultural activities [5]. Due to their cost-effectiveness, ease of control, and high maneuverability UAVs are perfect for rapidly scanning or surveying any terrain. However, their usage is primarily suited for small-scale projects due to their limited battery lifespan and small payload capacity. In contrast, UGVs, equipped with a larger cargo hold and extended battery life, can withstand lengthier task durations. Nevertheless, their efficacy is often compromised by obstacles such as challenging ground terrain, limited visibility, and slower movement speed, which frequently results in incomplete task accomplishment. To counteract these individual drawbacks, a cooperative routing strategy involving both UAVs and UGVs can be employed. This approach enhances operation coverage speed and endurance. For instance, in an extensive surveillance process, UAVs can reach a set of distant assignment points while being periodically refueled by the UGV, which is acting as a mobile refueling depot. Simultaneously, the UGV can cover assignment points along the road network, thus reducing the UAVs' workload and ensuring the operation's swift completion. This collaborative approach between UAV and UGV can be modeled as a cooperative vehicle routing problem. In this study, we put forth a framework aimed at efficiently finding the optimal solution to a UAV-UGV cooperative vehicle routing problem that takes into account UAV's fuel constraints. ## 2 Related works There has been significant research conducted on the cooperative routing of fuel-constrained UAVs with ground vehicles. The routing issue of multiple fuel-constrained UAVs with several static recharging depots had been explored by Levy et al.[6]. 
They employed quick variable neighborhood descent (VND) and variable neighborhood search (VNS) heuristics to identify good feasible solutions for large instances. Sundar et al. [7] further developed a mixed-integer linear programming model (MILP) for the same problem, which was solved using a standard MILP solver. In contrast to fixed charging stations, Maini et al. [8] addressed a cooperative routing problem involving a single UAV-UGV system, where the UGV has the ability to recharge the UAV while in transit on a road. They proposed a greedy heuristic for determining the meeting points for recharging along the UGV route and later used a MILP model to solve both UAV-UGV routes. Manyam et al. [9] examined the cooperative routing of an air and ground vehicle team considering communication constraints. They framed the problem as a mixed-integer linear program and developed a branch-and-cut algorithm to solve the problem to optimality. Several researchers have delved deeper into the UAV-UGV cooperative vehicle routing problem, exploring it in a tiered, two-echelon manner [10]. For instance, Luo et al. [11] introduced a binary integer programming model, supplemented by two heuristics, to tackle this two-echelon cooperative routing challenge. In a related context, Liu et al. [12] devised a two-stage, route-focused framework for a parcel delivery system that utilized a truck and a drone. This framework aimed to optimize both the primary route of the truck and the associated aerial routes of the drone. To swiftly generate a feasible solution, they developed a hybrid heuristic, which integrated the strategies of nearest neighbor and cost saving. In our previous works [13; 14; 15], we studied a hierarchical, bi-level optimization framework for the cooperative routing of multiple fuel-limited UAVs and a single UGV. The outer level of this framework employed K-means clustering to determine UGV visit points. These points were then connected using a Traveling Salesman Problem (TSP) approach to establish the UGV route. On the inner level, using the determined UGV path, we formulated and solved a vehicle routing problem that took into account capacity constraints, time windows, and dropped visits for the UAV. Further expanding on this work, we demonstrated that optimizing heuristic parameters using Genetic Algorithm (GA) and Bayesian Optimization (BO) methods could lead to substantial improvements in the solution quality [16; 17]. Given the intricacy of this problem, exact methods of solving this combinatorial optimization problem or generalizing a solution framework for diverse scenarios pose significant challenges. In this research endeavor, we propose a generalized multi-agent cooperative framework for addressing this fuel-constrained UAV-UGV cooperative routing issue. The key contribution of our study is the creation of heuristics aimed at facilitating a rapid solution to the two-echelon UAV-UGV routing problem, considering fuel and speed constraints. To this end, our novel contributions include the following: 1. The proposed comprehensive framework utilizes sequential optimization with a task allocation technique. Coupled with the constrained programming-based formulations, it can provide an effective solution for fuel constrained UAV-UGV cooperative routing problems in a quick time. 2. A task allocation technique based on the minimum set cover algorithm is proposed, which breaks down the entire problem into smaller subproblems, leading to a substantial simplification of the problem-solving process. 
3. Our formulation of a constraint programming-based vehicle routing problem accommodates time windows and fuel constraints, thereby enabling swift solutions for each subproblem.
4. We present extensive computational results on different kinds of scenarios to affirm the effectiveness and robustness of our proposed framework. This underscores the practicality of our framework in a diverse set of real-world applications.

The rest of the article is structured as follows. Section 3 presents the problem statement, and Section 4 illustrates the framework methodology and solution heuristics. Section 5 introduces the experiments on random instances at three different scales and presents the results, offering a concrete view of our findings. The results are analyzed in Section 6, and finally, Section 7 presents the conclusion and outlines future work.

## 3 Problem Description

The problem objective is to configure an optimal cooperative route for a team comprising a UAV and a UGV to visit a set of \(n\) assignment points \(\mathcal{M}_{n}=\{m_{0},m_{1},...,m_{n}\}\) in a Euclidean space (see Figure 3(a)). The UAV \(A\equiv(v^{a},F^{a},\mathcal{P}^{a})\) and the UGV \(G\equiv(v^{g},F^{g},\mathcal{P}^{g})\) have heterogeneous vehicle characteristics: the UAV has a higher velocity, i.e., \(v^{a}>v^{g}\), but a lower fuel capacity than the UGV, i.e., \(F^{a}<F^{g}\). They also differ in their power consumption profiles (Eq. 1, Eq. 2), with the UAV demonstrating greater energy efficiency per unit distance traversed when operating at standard speeds (see Figure 1). The assignment points can be visited either by a free flyover of the UAV, \(\tau^{a}\), or via the UGV's road network, \(\tau^{g}\). The cost of travel between a pair of assignment points is equal to the time of traversal between them, \(t_{ij}=t_{j}-t_{i}\). Both the UAV and UGV commence their journeys from the same starting depot and return to it upon completion. The total task duration is the time span from when the first vehicle departs the depot until the last one returns.

\[\mathcal{P}^{a}=0.0461{v^{a}}^{3}-0.5834{v^{a}}^{2}-1.8761{v^{a}}+229.6 \tag{1}\]

\[\mathcal{P}^{g}=464.8v^{g}+356.3 \tag{2}\]

Figure 1: Energy consumption per unit distance traversal of UAV & UGV

Due to its limited battery capacity, the UAV has to be recharged periodically by the UGV, which acts as a mobile recharging depot besides visiting assignment points. The recharging time of the UAV at the UGV is not instantaneous; it depends on the amount of fuel present in the UAV. Since the fuel capacity of the UGV is significantly larger than that of the UAV, it is assumed to be infinite to simplify the problem. With all these configurations, we have to find the time-optimal cooperative route \(\tau=\tau^{a}\cup\tau^{g}\) of the UAV and UGV for visiting all the assignment points at least once, given that the UAV never runs out of fuel. A typical sequence for this cooperative route could be as follows: both the UAV and UGV commence their journey from the starting depot and visit several assignment points. As they proceed, the UGV will reach an appropriate location for the UAV to recharge. After recharging, both vehicles will resume their task, continuing to visit assignment points until they reach the next recharging stop. This pattern will continue until all assignment points have been visited, after which both the UAV and UGV will return to the starting depot to conclude their task.
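To make the vehicle models concrete, the short sketch below evaluates the power curves of Eqs. (1)-(2) and the derived energy-per-unit-distance quantity that Figure 1 compares. It is only an illustration: the example speeds are assumed values, and the paper does not state the units explicitly.

```python
# Minimal sketch of the vehicle power models in Eqs. (1)-(2).
# The paper does not state units explicitly, so speeds/powers below are nominal.

def uav_power(v_a: float) -> float:
    """UAV power draw as a cubic polynomial of its velocity (Eq. 1)."""
    return 0.0461 * v_a**3 - 0.5834 * v_a**2 - 1.8761 * v_a + 229.6

def ugv_power(v_g: float) -> float:
    """UGV power draw as a linear function of its velocity (Eq. 2)."""
    return 464.8 * v_g + 356.3

def energy_per_unit_distance(power: float, v: float) -> float:
    """Energy per unit distance = power / speed (the quantity compared in Figure 1)."""
    return power / v

if __name__ == "__main__":
    v_a, v_g = 10.0, 4.5  # illustrative speeds only, not values from the paper
    print("UAV:", energy_per_unit_distance(uav_power(v_a), v_a))
    print("UGV:", energy_per_unit_distance(ugv_power(v_g), v_g))
```

The exact ratio depends on the chosen speeds and units; per Figure 1 and the discussion in Section 6, the UAV consumes roughly five times less energy per unit distance than the UGV at the operating speeds considered, which is the asymmetry the cooperative routing exploits.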
However, for an optimal cooperative route, it is important to figure out:

1. Suitable refueling stop locations \(\mathcal{M}_{r}=\{m_{0}^{r},m_{1}^{r},...,m_{n}^{r}\}\), **where** the UAV and UGV will rendezvous for recharging.
2. Appropriate time intervals during the task **when** the UAV and UGV will meet at the refuel stops, i.e., \(t_{i}^{r}\ \forall\ m_{i}^{r}\in\mathcal{M}_{r}\).
3. Optimal routes for the UAV, \(\tau^{a}\), and the UGV, \(\tau^{g}\), based on the determined refuel stop locations \(m_{i}^{r}\) and time intervals \(t_{i}^{r}\), to cover the entire assignment scenario in the quickest possible time.

## 4 Methods

We have devised a two-tiered optimization framework (as depicted in Figure 2) for executing this fuel-constrained cooperative routing between the UAV and UGV. This framework is inspired by the "UGV First, UAV Second" heuristic approach for UAV-UGV cooperative routing [18]. At the first stage of this framework, we utilize a _UGVPlanner_ to establish the route \(\tau^{g}=(X^{g},T^{g})\) for the UGV. This route is constructed by identifying appropriate recharging stations \(\mathcal{M}_{r}\) and formulating the UGV's movement along the road network accordingly. The UGV's navigation consists of two phases: the initial phase involves movement along waypoints on the road network to cover the assignment points, while the second phase involves waiting for the UAV at the recharging stops. At the second tier of the framework, the _UAVPlanner_ devises the route \(\tau^{a}=(X^{a},T^{a})\) for the UAV. The formation of this route relies significantly on the route \(\tau^{g}\) created for the UGV at the outer level of the framework. Because of the slower speed of the UGV, the _UAVPlanner_ takes into consideration the _availability_ time window constraint at the refuel stops. This planning approach effectively divides the entire scenario into a series of manageable subproblems, each of which can be solved by modeling it as an energy-constrained vehicle routing problem with time windows (E-VRPTW).

### UGV routing

At the outer level of the proposed framework, the initial objective is to determine suitable recharging rendezvous locations \(\mathcal{M}_{r}\) for the UAV-UGV system. Subsequently, an optimal route \(\tau^{g}\) is generated for the UGV by the _UGVPlanner_, taking into account the refuel stop locations \(\mathcal{M}_{r}\) and its operational speed \(v^{g}\). Previous research by Maini et al. [8; 19] emphasized the importance of including refueling stops within the UAV's fuel coverage radius to ensure a viable route in fuel-constrained cooperative routing problems. It was also noted that minimizing the number of recharging instances can reduce the time spent on recharging and minimize the detour required in the UAV's route, resulting in a faster cooperative route. With these considerations, we implemented the minimum set cover (MSC) algorithm to find the minimum number of refueling stops and their locations \(\mathcal{M}_{r}\) that cover the entire assignment scenario. The minimum set cover problem has been extensively studied [20; 21], and various methods, including greedy approaches [8; 22], have been proposed. However, we came up with an alternative constraint programming formulation for solving the minimum set cover problem in the context of the cooperative routing problem.

#### 4.1.1 Minimum set cover algorithm
1. **Greedy heuristics approach:** The minimum set cover problem is NP-hard; however, greedy heuristics can reduce the complexity of the problem significantly. In the context of the cooperative routing problem, to find the refuel stops with the greedy algorithm we start with the assignment points \(\mathcal{M}_{n}\) that need to be covered, the fuel capacity \(F^{a}\) of the UAV, and the starting depot \(m_{0}\) of the scenario. Our goal is to obtain the smallest possible subset of \(\mathcal{M}_{n}\) that can act as the refueling stops \(\mathcal{M}_{r}\). As shown in Algorithm 1, the greedy algorithm includes the starting depot \(m_{0}\) as the first refueling stop \(m_{0}^{r}\); it then sequentially adds (line 6) the assignment point \(m_{i}\) that covers the maximum number of other assignment points into the refueling stop set as \(m_{i}^{r}\), until all the points are covered. Greedy heuristics can quickly generate an optimal result for a minimum set cover problem. However, in many situations a minimum set cover problem can have multiple optimal results for a particular scenario; because we are implementing a bilevel optimization framework, it is important to take the other optimal solutions of the outer-level algorithm into account. As it is not possible to acquire all optimal solutions through greedy heuristics, we used a constraint programming method, which can generate multiple optimal results (if any) in a short span of time.

Figure 2: Proposed framework

2. **Constraint programming method:** To determine the minimum number of refueling stops \(\mathcal{M}_{r}\) required to cover the entire assignment scenario (\(\mathcal{M}\)), we employ linear integer programming and utilize a constraint programming method (CP method) for solving. The problem is modeled using binary decision variables \(x_{j}\) (indicating whether an assignment point is chosen as a refueling stop) and \(y_{ij}\) (indicating whether an assignment point \(m_{i}\) is assigned to a refueling stop \(m_{j}^{r}\)). The objective function (Eq. 3) aims to minimize the total number of refueling stops. Constraint (Eq. 4) ensures that each assignment point \(m_{i}\) is assigned at least one refueling stop \(m_{j}^{r}\). Constraint (Eq. 5) ensures that an assignment point \(m_{i}\) can only be allocated to a refueling stop \(m_{j}^{r}\) if the refueling stop is selected. Furthermore, constraint (Eq. 6) guarantees that an assignment point \(m_{i}\) is assigned to a refueling stop \(m_{j}^{r}\) only if the refueling stop falls within the fuel coverage radius of the UAV, allowing for a round trip from the refueling stop.

Figure 3: a) Given scenario with assignment points and starting depot, b) refuel stops in the UGV route obtained from the minimum set cover algorithm; the blue circles indicate the radial coverage of the UAV, c) Subproblem 1 with allocated UAV assignment points, where the UGV travels between the starting depot and refuel stop 1, d) Subproblem 2 with allocated UAV assignment points, where the UGV travels between refuel stop 1 and refuel stop 2.
\[\text{Objective: }\min\sum_{m_{j}^{r}\in\mathcal{M}_{r}}x_{j} \tag{3}\]

Subject to,

\[\sum_{m_{j}^{r}\in\mathcal{M}_{r}}y_{ij}\geq 1,\ \forall\ m_{i}\in\mathcal{M} \tag{4}\]

\[y_{ij}\leq x_{j},\ \forall\ m_{i}\in\mathcal{M}\ \text{and}\ \forall\ m_{j}^{r}\in \mathcal{M}_{r} \tag{5}\]

\[y_{ij}=0,\ \text{if}\ d_{ij}>0.5F^{a},\ \forall\ m_{i}\in\mathcal{M}\ \text{and}\ \forall\ m_{j}^{r}\in \mathcal{M}_{r} \tag{6}\]

\[y_{ij},x_{j}\in\{0,1\} \tag{7}\]

We used Google's OR-Tools constraint programming solver (CP-SAT solver [23]) to solve the above linear integer formulation. If there are multiple optimal solutions, the solver makes it possible to record all of them. Once the optimal refuel stops \(\mathcal{M}_{r}\) are obtained from the MSC algorithm, they are sent to the _UGVPlanner_ to construct the UGV route \(\tau^{g}=(X^{g},T^{g})\) based on them.

#### UGV Planner

Upon identifying the refueling stop locations \(\mathcal{M}_{r}\) using the minimum set cover algorithm, the _UGVPlanner_ proceeds to map out a feasible UGV route for the overall task through a sequential, phased process (see Algorithm 2). Initially, it connects the refueling stops optimally on the road network by solving a simple Travelling Salesman Problem (TSP), which yields the spatial components \(X^{g}\in\tau^{g}\) of the UGV route, denoting the sequence \(x_{i}^{g}\) in which the assignment points on the road network will be visited. Next, the planner calculates the temporal components \(T^{g}\in\tau^{g}\) of the UGV route up to the first refuel stop, which detail the time instances at which the UGV will visit those assignment points. We operate under the assumption that the UGV will not wait at any assignment point except the refueling stops. Therefore, the arrival times at the assignment points are computed based on the UGV's constant operational speed \(v^{g}\) (line 4). This also gives the UGV's arrival time at the refueling stops (line 7), which serves as an _availability_ time window constraint in the _UAVPlanner_. Utilizing the UAV's arrival time at the first refuel stop and the recharging time \(\mathcal{R}_{t}\) (contingent on the UAV's fuel consumption level) from the UAV's route in subproblem 1, we can estimate the UGV's waiting time at the first refuel stop (line 9), which is taken into account when computing the temporal component of the UGV route up to the next refuel stop. This process is repeated until the UGV arrives at the final refuel stop. At the end of this process, the temporal components are integrated with their respective spatial components to provide a comprehensive UGV route, outlining the sequence in which the UGV visits the assignment points and their corresponding time instances.
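To illustrate how the set-cover model in Eqs. (3)-(7) might be posed for the CP-SAT solver cited above, a minimal sketch is given below. The input names (`points`, `dist`, `F_a`) and data layout are assumptions for illustration, not the authors' implementation; the selected stops are what the _UGVPlanner_ then orders with the TSP and schedules as in Algorithm 2, shown next.

```python
# Sketch of the set-cover model in Eqs. (3)-(7) with OR-Tools CP-SAT.
# Assumed inputs: `points` (candidate refuel stops, i.e. the assignment points),
# `dist[i][j]` (pairwise distances), and `F_a` (UAV fuel capacity expressed as
# flight range, so 0.5 * F_a is the one-way reach used in Eq. 6).
from ortools.sat.python import cp_model

def min_refuel_stops(points, dist, F_a):
    model = cp_model.CpModel()
    n = len(points)
    x = [model.NewBoolVar(f"x_{j}") for j in range(n)]  # stop j is selected
    y = [[model.NewBoolVar(f"y_{i}_{j}") for j in range(n)] for i in range(n)]

    for i in range(n):
        # Eq. (4): every assignment point is covered by at least one stop.
        model.Add(sum(y[i][j] for j in range(n)) >= 1)
        for j in range(n):
            model.AddImplication(y[i][j], x[j])          # Eq. (5): only selected stops cover
            if dist[i][j] > 0.5 * F_a:                   # Eq. (6): round trip must be feasible
                model.Add(y[i][j] == 0)

    model.Minimize(sum(x))                               # Eq. (3): fewest refuel stops

    solver = cp_model.CpSolver()
    status = solver.Solve(model)
    if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        return [points[j] for j in range(n) if solver.Value(x[j])]
    return []
```

CP-SAT also supports solution callbacks, which is how multiple optimal covers could be collected for the outer level, as the text above describes.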
``` 1:Input: Refuel stops \(\mathcal{M}_{r}\gets MSC\), UGV velocity \(v^{g}\), starting depot \(m_{0}\) 2:Output: UGV route \(\tau^{g}=(X^{g},T^{g})=[(x^{g}_{i},t^{g}_{i})]\) 3:UGV navigation waypoints \(X^{g}\gets TSP(\mathcal{M}_{r},m_{0},v^{g})\) 4:UGV route starting instance \(\tau^{g}=[(x^{g}_{0},t^{g}_{0})]\) 5:for\(x^{g}_{i}\) in \(X^{g}\)do 6:\(t^{g}_{i}=t^{g}_{i-1}+\frac{x^{g}_{i}-x^{g}_{i-1}}{v^{g}}\) 7:\(\tau^{g}.\text{append}(x^{g}_{i},t^{g}_{i})\) 8:if\(x^{g}_{i}\in\mathcal{M}_{r}\)then 9: send \(t^{g}_{i}\to UAVPlanner\) 10:\(t^{a}_{i},\mathcal{R}_{t}\gets UAVPlanner\) 11:\(t^{g}_{i}=t^{a}_{i}+\mathcal{R}_{t}\) 12:\(\tau^{g}.\text{append}(x^{g}_{i},t^{g}_{i})\) 13:endif 14:endfor ``` **Algorithm 2** UGV Planner ### UAV routing At the inner level of the proposed framework, we split the full task scenario into subproblems, taking information about the refuel stops provided by the _UGVPlanner_. A task allocation technique is employed to assign distinct assignment points to each subproblem. These subproblems are then individually addressed by formulating them as Energy Constrained Vehicle Routing Problems with Time Windows (E-VRPTW). #### Allocation of assignment points Given the scenario and the obtained refuel stops \(\mathcal{M}_{r}\) from the MSC algorithm, we can divide the entire problem into \(r-1\) number of subproblems (\(r=\) number of refuel stops with starting depot) with an assumption that UGV travels only between two refuel stops in each subproblem. For the subproblem \(SP_{i}\), the origin node is refuel stop \(m_{i-1}^{r}\) and the destination node is refuel stop \(m_{i}^{r}\). The subproblems are assigned with separate assignment points. The UAV assignment points covered by the destination refuel stop \(m_{i}^{r}\) are assigned to that subproblem \(SP_{i}\). Only, for the first subproblem \(SP_{1}\) the assignment points covered by both origin \(m_{0}^{r}\) and destination node \(m_{1}^{r}\) is assigned to it. Figure 3 demonstrate the process of subproblem division and task allocation. Figure 2(b) shows the refuel stops obtained from minimum set cover algorithm which are taken into account for UGV route construction. Based on refuel stops, the first subproblem (figure 2(c)) is created by taking the starting depot as the origin node and the refuel stop 1 as the destination node. The UAV assignment points covered by origin node (starting depot) and destination node (refuel stop 1) are assigned for subproblem 1. Similarly, the second subproblem (figure 2(d)) is created by taking the refuel stop 1 as origin node and refuel stop 2 as destination node and the assignment points covered by the destination node (refuel stop 2) are assigned for this subproblem. Now in the subproblems, the destination nodes \(m_{i}^{r}\) have an _availability_ time window constraint because UAV can recharge only when the UGV has already reached the refuel stops. This _availability_ time period \(t_{i}^{g}\) is obtained from the _UGVPlanner_ and taken in account while modelling the subproblems as energy constrained vehicle routing problem with time windows (E-VRPTW). #### E-VRPTW formulation The formulation of the E-VRPTW can be described with a graph theory. Consider an undirected graph \(G=(V,E)\) where \(V\) is the set of vertices \(V=\{S,0,1,2,...D\}\) and \(E\) is the set of edges between the vertices \(i\) and \(j\) as \(E=\{(i,j)\,||\,i,j\,\in\,V,i\neq j\}\). 
The non-negative arc cost between the vertices \(i\) and \(j\) is expressed as \(t_{ij}\) and \(x_{ij}\) is a binary decision variable whose value will be 1 if a vehicle travels from \(i\) to \(j\), and 0 otherwise. The UAV will start from refuel stop \(S\) and meet the UGV at destination stop \(D\). We then formulated the objective function of the E-VRPTW problem with fuel constraint, time window constraint, optional node constraints as follow: \[\min\sum_{i}\sum_{j}t_{ij}x_{ij}\quad\forall i,j\in V \tag{8}\] \[\sum_{j\in V}x_{ij}=1\quad\forall i\in V\setminus\{S,D\} \tag{9}\] \[\sum_{i\in V}x_{ij}=1\quad\forall j\in V\setminus\{S,D\} \tag{10}\] \[\sum_{j\in V}x_{Sj}=\sum_{i\in V}x_{iD}=1 \tag{11}\] \[f_{j}^{a}\leq f_{i}^{a}-\left(\mathcal{P}^{a}(v^{a})t_{ij}x_{ij}\right)+L_{1} \left(1-x_{ij}\right)\quad\forall i,j\in V\setminus\{S,D\} \tag{12}\] \[f_{j}^{a}=F^{a}\quad\forall j\in D \tag{13}\] \[0\leq f_{j}^{a}\leq F^{a},\quad\forall j\in V \tag{14}\] \[t_{j}\geq t_{i}+\left(t_{ij}x_{ij}\right)-L_{2}\left(1-x_{ij}\right)\quad \forall i,j\in V \tag{15}\] \[t_{j,start}\leq t_{j}\leq t_{j,end},\quad\forall j\in D \tag{16}\] \[x_{ij}=0,\quad\forall i\in D,\forall j\in V\] (17) \[x_{ij}\in\{0,1\},\quad\forall i,j\in V\] (18) \[f_{i}>0,f_{i}\in\mathbb{R}_{+}\quad\forall i\in V\] (19) \[t_{i}>0,t_{i}\in\mathbb{Z}\quad\forall i\in V\] (20) \[L_{1},L_{2}>0,\quad L_{1},L_{2}\in\mathbb{R}_{+} \tag{21}\] The objective of Eq. 8 is to minimize the total time spent by the UAV. Constraints in Eq. 9 and Eq. 10 represent flow conservation, where the inflow should equal the outflow at any of the assignment point vertices. Following that, constraint in Eq. 11 represents flow conservation for start and end vertices, where the number of UAVs leaving the start vertex must equal the number of UAVs arriving at the end vertex. The Miller-Tucker Zemlin (MTZ) formulation [24] for sub-tour elimination is the constraint in Eq. 12. The MTZ constraint ensures that each node is visited sequentially by keeping track of values such as fuel capacity and power consumption of the UAV corresponding to each node. It ensures that if a node is visited twice, the constraint is broken. This constraint allows that the UAV's energy is not fully drained out while eliminating loops. \(L_{1}\) denotes a large number in this constraint. This constraint activates only when there is a flow between vertices \(i\) and \(j\) and drains the UAV energy based on the time taken between the two vertices. The \(\mathcal{P}^{a}\) represents the UAV's power consumption curve during traversal. According to constraint Eq. 13, if the vertex is the destination stop ( recharging stop), the UGV must refuel the UAV to its full capacity \(F^{a}\). Constraint Eq. 14 states that the UAV's fuel should be between 0 and maximum fuel capacity \(F^{a}\) at any vertex in \(V\). The cumulative arrival time at the \(j^{th}\) node is equal to the sum of the cumulative time at the node \(i\), \(t_{i}\) and the travel time between nodes \(i\) and \(j\), \(t_{ij}\). Here, \(L_{2}\) is a large number that aids in the elimination of sub-tour constraints, as in Eq. 15. Eq. 16 puts a time window constraint that instructs the vehicle to visit the destination node within it's time window, that means the UAV is only allowed to visit the destination node only when the UGV has reached there. The constraint in Eq. 17 indicates that there should be no flow once the vehicle reaches the end node and the route will end there. Eq. 
18 is a binary decision variable in charge of flow between the edges. The continuous decision variable, Eq. 19, monitors the fuel level at any node and has zero as the lower bound value. Eq. 20 denotes the integer decision variable that computes the cumulative time of the UAV's route and has a lower bound of zero. The authors resorted to constrained programming method that provided quality inner-level solutions in a shorter simulation time. #### UAV Planner By solving this E-VRPTW for subproblem \(SP_{i}\), _UAVPlanner_ gets the optimal UAV route (both spatial and temporal component) \(\tau_{i}^{a}=(x_{i}^{a},t_{i}^{a})\) as well as the time instance \(t_{i}^{a}\) at which the UAV will arrive at the refuel stop \(m_{i}^{r}\) to recharge with the UGV and the recharging time \(\mathcal{R}_{t}\) of it, which is dependent on its fuel consumption level. These information are fed back to the _UGVPlanner_ again to calculate the UGV _availability_ time window for next subproblem \(SP_{i+1}\). This reciprocal and iterative process (line 3 - 7 in algorithm 3) between the _UAVPlanner_ and _UGVPlanner_ is what facilitates the cooperative route for the entire task scenario. In figure 3(a) and figure 3(b), we got the routes for the UAV and the UGV which are combined together to get the complete routes of UAV and UGV for the entire task scenario (figure 3(c)). ## 5 Results We implemented the proposed framework across diverse random task scenarios to evaluate its proficiency. The task scenarios, generated at three distinct scales, helped us investigate the impact of UAV fuel capacity on the overall routing process. In these tests, we compared the results of the greedy and constrained programming methods when applied to the outer-loop baseline of our proposed framework. Additionally, to ascertain the upper limit of the performance metrics, we also constructed a UGV-only route in each scenario, which facilitated the assessment of the practicality and advantages of the cooperative UAV-UGV route in each specific scenario. Figure 4: a) UAV-UGV routes from subproblem 1 b)UAV-UGV routes from subproblem 2 c) UAV-UGV routes for entire task scenario after combining subproblem 1 & 2 ### Design of experiments The efficacy of our proposed framework was tested across numerous random task scenarios generated at three separate scales. We designed the scenarios such that the farthest assignment point from the starting depot was always outside the UAV's radial coverage, guaranteeing that at least one refueling stop was necessary for the UAV to complete the task. To substantiate the robustness and adaptability of the suggested methodology, we experimented with three distinct scales of task instances, as exhibited in Table 1. We introduced a _scale factor_, a non-dimensional number, to represent the relationship between the scenario map size and the UAV's radial fuel coverage area. Three examples of scenarios from three different scales are shown in figure 5. \[\text{scale factor}=\frac{\text{Area of scenario}}{\text{UAV coverage area on a single charge}} \tag{22}\] For each instance, two types of cooperative routes (if different) were generated by employing the Greedy method and the CP method at the outer loop of the suggested framework. The UGV-only route (UGV operates alone) was also determined for the \begin{table} \begin{tabular}{c c c c} \hline Scale & Map size & _scale factor_ & No. 
of task points \\ \hline Small & 16 km x 16 km & 1.5 & 30 \\ Medium & 25 km x 25 km & 3 & 60 \\ Large & 40 km x 40 km & 9 & 100 \\ \hline \end{tabular} \end{table} Table 1: Specifications of task Scenario Figure 5: Sample scenarios with starting depot and refuel stops obtained from minimum set cover algorithm. Radial circles are indicating the coverage area of the UAV from the stops. As the scale of the task scenario grows, the number of task points, number of refueling stops have grown proportionally. specific scenarios. There is no benchmark solution exists to this specific problem due to its complex combinatorial nature, hence we treated the UGV-only route as the baseline method for comparison. Comparison was made between the cooperative routing route and UGV only route, which signifies the impact of cooperation between UAV and UGV on the task execution. The total task completion time and total energy consumption were treated as the metrics for the evaluation of routes. ### Time metrics In Table 2, the total task completion time of the route obtained by the three aforementioned methods have been displayed. For all instances in the small-scale scenarios, \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{**Map Size**} & \multirow{2}{*}{**Scenarios**} & \multirow{2}{*}{**Route Time (min.)**} & \multirow{2}{*}{**Improvement (\%)**} \\ \cline{3-5} & & & & & \\ \cline{3-5} & & & Cooperative Routing & UGV only & & \\ \cline{3-5} & & & & & \\ \cline{3-5} & & & & & \\ \cline{3-5} & & & & & \\ \cline{3-5} & & & & & \\ \hline \multirow{5}{*}{Small scale} & Scenario 1 & 200 & 210 & 249 & 19.68 & 15.66 \\ & Scenario 2 & 91 & 82 & 133 & 31.58 & 38.35 \\ & Scenario 3 & 117 & 115 & 194 & 39.69 & 40.72 \\ & Scenario 4 & 153 & 153 & 178 & 14.04 & 14.04 \\ & Scenario 5 & 148 & 148 & 159 & 6.92 & 6.92 \\ & Scenario 6 & 222 & 222 & 289 & 23.18 & 23.18 \\ & Scenario 7 & 190 & 149 & 196 & 3.06 & 23.98 \\ & Scenario 8 & 128 & 128 & 198 & 35.35 & 35.35 \\ & Scenario 9 & 222 & 210 & 303 & 26.73 & 30.69 \\ & Scenario 10 & 223 & 128 & 214 & -4.21 & 40.19 \\ \hline \multirow{5}{*}{Medium Scale} & Scenario 1 & 411 & 404 & 497 & 17.30 & 18.71 \\ & Scenario 2 & 414 & 409 & 622 & 33.44 & 34.24 \\ & Scenario 3 & 364 & 364 & 511 & 28.77 & 28.77 \\ & Scenario 4 & 340 & 338 & 452 & 24.78 & 25.22 \\ & Scenario 5 & 297 & 297 & 341 & 12.90 & 12.90 \\ & Scenario 6 & 396 & 417 & 524 & 24.43 & 20.42 \\ & Scenario 7 & 296 & 298 & 370 & 20.00 & 19.46 \\ & Scenario 8 & 324 & 325 & 477 & 32.08 & 31.87 \\ & Scenario 9 & 292 & 295 & 403 & 27.54 & 26.80 \\ & Scenario 10 & 291 & 257 & 459 & 36.60 & 44.01 \\ \hline \multirow{5}{*}{Large Scale} & Scenario 1 & 461 & 407 & 438 & -5.25 & 7.08 \\ & Scenario 2 & 440 & 431 & 467 & 5.78 & 7.71 \\ \cline{1-1} & Scenario 3 & 537 & 532 & 484 & -10.95 & -9.92 \\ \cline{1-1} & Scenario 4 & 611 & 519 & 462 & -32.25 & -12.34 \\ \cline{1-1} & Scenario 5 & 744 & 744 & 680 & -9.41 & -9.41 \\ \cline{1-1} & Scenario 6 & 452 & 412 & 436 & -3.67 & 5.00 \\ \cline{1-1} & Scenario 7 & 655 & 701 & 607 & -7.91 & -15.49 \\ \cline{1-1} & Scenario 8 & 682 & 684 & 588 & -15.99 & -16.33 \\ \cline{1-1} & Scenario 9 & 620 & 620 & 570 & -8.77 & -8.77 \\ \cline{1-1} & Scenario 10 & 613 & 604 & 537 & -14.15 & -12.48 \\ \hline \hline \end{tabular} \end{table} Table 2: Time metrics of different scenarios cooperative routing with the Constrained Programming method in the framework's outer loop proved more time-efficient than the UGV-only routing. 
The task completion time for UGV-only routes was reduced by approximately 6% to 40% through the cooperation between the UAV and UGV in small-scale scenarios. Although cooperative routing with the Greedy method in the outer-loop baseline did not perform as well as the CP method, it was more time-efficient than the UGV-only route in most instances. However, for scenario 10 in the small scale, the Greedy method could not improve the task completion time through the cooperative route. For medium-scale scenarios, the task completion time improved by 12% to 45% with the CP method-based cooperative routing, while for the Greedy method-based cooperative route, the improvement range was 12% to 30%. The cooperative route was more economical than the UGV-only route for most scenarios with the CP method at the outer loop, although the improvement range was somewhat smaller than in the small-scale scenarios. However, for most large-scale scenarios, the cooperative route could not improve the total task completion time, making the UGV-only route the optimal choice. Figure 6 depicts the total task completion time of the UGV-only route and the respective cooperative routes with both the CP method and the Greedy method at the outer-loop baseline for the three types of scenarios.

Figure 6: Time metrics on 3 different scales of scenarios

### Energy metrics

The energy consumed during the routing process was also analyzed across different scenarios, as demonstrated in Table 3. The improvement percentage reflects the relative gain in total energy consumption that was achieved through UAV-UGV cooperation. The results confirmed that cooperative routing is more energy-efficient than UGV-only routing. Among the cooperative routing methods, the Constrained Programming (CP) method applied in the outer loop outperformed the Greedy method. For the small-scale instances, cooperative routing enabled energy savings ranging from 28% up to 58%. For medium-scale scenarios, the improvement ranged from 40-55%, while for large-scale scenarios, the range was between 8-37%. This data affirms that cooperative routing, particularly when employing the CP method in the outer loop, can significantly enhance energy efficiency across a variety of scenarios (see Figure 7).

### Computational time

For real-time applications, the computational time of the vehicle routing problem is a crucial factor. The Greedy method and the Constrained Programming (CP) method, when implemented at the outer loop of the proposed framework, display notable differences in computational time. As shown in Figure 8, the Greedy method requires substantially less computational time than the CP method. Given the subproblem division approach at the inner loop, the computational time increases for both the Greedy and CP methods as the scale of the scenarios increases. However, the Greedy method consistently outperforms the CP method, and the gap between their respective computational times grows in proportion to the scale of the scenario. This highlights the Greedy method's efficiency and its suitability for larger-scale scenarios requiring rapid computations.

## 6 Discussion

The geometric configuration of assignment target points within a scenario critically influences the potential for improving total task completion time through cooperative routing. As observed in Figure 6, there are positive improvement percentages for small- and medium-scale scenarios, but a negative trend emerges for large-scale scenarios.
The total time taken to complete the task via a cooperative route depends on three elements: UAV traversal time, UGV traversal time, and the waiting time of the UGV during UAV refueling. Conversely, the total task completion time for the UGV-alone route depends solely on the UGV traversal time, as it does not involve refueling. When the UGV operates alone, it has to visit all assignment target points alone, leading to a significant UGV traversal time. This duration increases proportionally Figure 7: Energy metrics on 3 different scale of scenarios with the scale of the scenario, as demonstrated in figure 9. However, this UGV traversal time can be reduced through cooperative routing, which divides the target points between the UAV and UGV. However, a drawback of cooperative routing is the addition of waiting time during which the UAV is refueled by the UGV. In small and medium scale scenarios, the assignment points spread is limited and therefore there are fewer refueling stops. This means the extra waiting time at refueling stops never exceeds the reduction in UGV traversal time, leading to a shorter total task completion time for the cooperative route compared to the UGV-alone route. However, in large scale scenarios, where there are many refueling stops due to the extensive assignment points spread, the additional waiting time can surpass the \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{**Map Size**} & \multirow{2}{*}{**Scenarios**} & \multirow{2}{*}{**Total energy consumption (MJ)**} & \multirow{2}{*}{**Improvement (\%)**} \\ \cline{5-6} & & & Cooperative Routing & UGV only & & \\ \cline{5-6} & & & & & \\ \cline{5-6} & & Greedy Method & CP Method & Greedy Method & CP Method \\ \hline \multirow{6}{*}{Small scale} & Scenario 1 & 20.56 & 20.61 & 37.07 & 44.54 & 44.40 \\ & Scenario 2 & 6.73 & 7.19 & 19.80 & 66.00 & 63.68 \\ & Scenario 3 & 12.53 & 11.82 & 28.88 & 56.62 & 59.08 \\ & Scenario 4 & 17.08 & 17.08 & 26.50 & 35.56 & 35.56 \\ & Scenario 5 & 12.46 & 12.46 & 23.67 & 47.38 & 47.38 \\ & Scenario 6 & 23.47 & 23.47 & 43.03 & 45.44 & 45.44 \\ & Scenario 7 & 14.14 & 16.72 & 29.18 & 51.56 & 42.71 \\ & Scenario 8 & 13.49 & 13.49 & 29.48 & 54.25 & 54.25 \\ & Scenario 9 & 25.47 & 24.86 & 45.11 & 43.55 & 44.89 \\ & Scenario 10 & 19.73 & 13.60 & 31.86 & 38.07 & 57.32 \\ \hline \multirow{6}{*}{Medium Scale} & Scenario 1 & 39.40 & 34.00 & 73.99 & 46.75 & 54.04 \\ & Scenario 2 & 41.90 & 41.63 & 92.60 & 54.75 & 55.04 \\ & Scenario 3 & 41.82 & 41.82 & 76.08 & 45.03 & 45.03 \\ & Scenario 4 & 38.15 & 38.05 & 67.29 & 43.31 & 43.46 \\ & Scenario 5 & 30.10 & 30.10 & 50.77 & 40.71 & 40.72 \\ & Scenario 6 & 46.52 & 46.85 & 78.01 & 40.37 & 39.94 \\ & Scenario 7 & 31.56 & 31.57 & 55.09 & 42.71 & 42.70 \\ & Scenario 8 & 39.75 & 39.67 & 71.02 & 44.03 & 44.15 \\ & Scenario 9 & 33.01 & 33.14 & 60.00 & 44.98 & 44.76 \\ & Scenario 10 & 31.49 & 30.71 & 68.34 & 53.92 & 55.06 \\ \hline \multirow{6}{*}{Large Scale} & Scenario 1 & 43.63 & 41.27 & 65.21 & 33.09 & 36.71 \\ & Scenario 2 & 46.17 & 45.35 & 69.53 & 33.59 & 34.78 \\ \cline{1-1} & Scenario 3 & 62.94 & 62.67 & 72.06 & 12.66 & 13.03 \\ \cline{1-1} & Scenario 4 & 56.05 & 59.50 & 68.78 & 18.51 & 13.50 \\ \cline{1-1} & Scenario 5 & 92.74 & 92.74 & 101.24 & 8.39 & 8.39 \\ \cline{1-1} & Scenario 6 & 49.15 & 42.32 & 64.91 & 24.28 & 34.81 \\ \cline{1-1} & Scenario 7 & 81.63 & 64.41 & 90.37 & 9.68 & 28.73 \\ \cline{1-1} & Scenario 8 & 80.17 & 79.96 & 87.54 & 8.42 & 8.66 \\ \cline{1-1} & Scenario 9 & 75.80 & 75.80 & 84.86 & 10.67 & 10.67 \\ 
\cline{1-1} & Scenario 10 & 72.37 & 71.89 & 79.95 & 9.48 & 10.07 \\ \hline \hline \end{tabular} \end{table} Table 3: Energy metrics of different scenarios decrease in UGV traversal time. This results in the cooperative route taking longer overall than the UGV-alone route. In terms of energy consumption metrics, the cooperative route consistently outperforms the UGV-alone route, regardless of the scenario scale. This is because, as demonstrated in figure 1, the UAV consumes five times less energy than the UGV per unit distance traveled, making the UGV the dominant influence on total energy consumption. Hence, the UGV-alone route, which involves a longer UGV traversal distance, consumes more energy than the cooperative route, where the UGV covers a smaller distance due to task division with the UAV. Nevertheless, as the scale of scenarios increases, the gap in total energy consumption between the UGV-alone route and the cooperative route narrows. This is because in larger scenarios, despite cooperation with the UAV, the UGV must still cover a considerable distance to provide suitable refueling stops, leading to higher overall energy consumption. Figure 8: Computational time Figure 9: Cooperative routing Vs UGV-only route ## 7 Conclusion In this work, we focus on a cooperative vehicle routing problem involving a UAV-UGV team with fuel constraints. Both vehicles are required to cover a set of assigned task points, with the UAV periodically recharging from the UGV to complete the assignment in minimum possible time. Finding the optimal recharging rendezvous points, in terms of both location and timing, between the UAV and UGV is integral to achieve an optimal route in this cooperative routing problem. We introduce a sequential optimization framework that operates in two primary steps. The initial step involves the utilization of a minimum set cover algorithm to determine the locations for refueling stops. These identified locations serve as an input to the _UGVPlanner_, which then creates the UGV route employing a Traveling Salesman Problem model. In the subsequent step, a task allocation technique is employed to partition the entire problem into smaller, more manageable subproblems. The _UAVPlanner_ then develops the UAV route by framing these subproblems as instances of the Energy-Constrained Vehicle Routing Problem with Time Windows (E-VRPTW). Our framework has been successfully applied to 30 distinct task scenarios across three different scales, showcasing its effectiveness and practicality. The cooperative routes resulting from our framework were benchmarked against the UGV-only routes for the same scenarios which served as an upper limit for comparison. The results reveal substantial improvements, with time consumption reduced by 10-30% and energy consumption diminished by 15-50% in most instances through the cooperative routing. In the future direction of the work, we would be expanding the framework for persistent surveillance on the task points and consider the stochasticity in the scenarios. Insights from this study suggest a potential enhancement of leveraging the UGV's idle waiting times during refueling to establish a mobile recharging rendezvous, which will be a focal point in our subsequent investigations. ## Acknowledgments The authors would like to express their gratitude to the DEVCOM Army Research Laboratory for their financial support in funding the projects under grant W911NF-14-S-003. 
## Declarations

* **Funding** The work is funded by Army Research Laboratory grant W911NF-14-S-003.
* **Conflict of Interests** The authors declare no conflict of interest.
* **Ethics Approval** Not applicable.
* **Consent to Participate** Not applicable.
* **Consent for Publication** All authors consent to publication.
* **Code or data availability** The code and data backing this study's outcomes are not publicly shared; however, they can be obtained upon reasonable request.
* **Authors' contributions** The conceptualization of ideas and methodologies was carried out by MSM, SR, and PAB. Implementation was undertaken by MSM and SR. The initial draft was written by MSM, while the review and editing process involved JH, JPR, JD, CM, and PAB. The project was overseen and supervised by JH, JPR, JD, CM, and PAB, with PAB also handling project administration. All contributing authors have reviewed and given their consent for the final version of the manuscript to be published.
2306.17567
Counting Guidance for High Fidelity Text-to-Image Synthesis
Recently, the quality and performance of text-to-image generation significantly advanced due to the impressive results of diffusion models. However, text-to-image diffusion models still fail to generate high fidelity content with respect to the input prompt. One problem where text-to-diffusion models struggle is generating the exact number of objects specified in the text prompt. E.g. given a prompt "five apples and ten lemons on a table", diffusion-generated images usually contain the wrong number of objects. In this paper, we propose a method to improve diffusion models to focus on producing the correct object count given the input prompt. We adopt a counting network that performs reference-less class-agnostic counting for any given image. We calculate the gradients of the counting network and refine the predicted noise for each step. To handle multiple types of objects in the prompt, we use novel attention map guidance to obtain high-fidelity masks for each object. Finally, we guide the denoising process by the calculated gradients for each object. Through extensive experiments and evaluation, we demonstrate that our proposed guidance method greatly improves the fidelity of diffusion models to object count.
Wonjun Kang, Kevin Galim, Hyung Il Koo
2023-06-30T11:40:35Z
http://arxiv.org/abs/2306.17567v1
# Counting Guidance for High Fidelity Text-to-Image Synthesis ###### Abstract Recently, the quality and performance of text-to-image generation significantly advanced due to the impressive results of diffusion models. However, text-to-image diffusion models still fail to generate high fidelity content with respect to the input prompt. One problem where text-to-diffusion models struggle is generating the exact number of objects specified in the text prompt. E.g. given a prompt "five apples and ten lemons on a table", diffusion-generated images usually contain the wrong number of objects. In this paper, we propose a method to improve diffusion models to focus on producing the correct object count given the input prompt. We adopt a counting network that performs reference-less class-agnostic counting for any given image. We calculate the gradients of the counting network and refine the predicted noise for each step. To handle multiple types of objects in the prompt, we use novel attention map guidance to obtain high-fidelity masks for each object. Finally, we guide the denoising process by the calculated gradients for each object. Through extensive experiments and evaluation, we demonstrate that our proposed guidance method greatly improves the fidelity of diffusion models to object count. ## Introduction Text-to-image generation aims to generate high-fidelity images given a user-specified text prompt. It has various applications like digital art, design, and graphics and was traditionally performed using GANs since the early start of deep learning Goodfellow et al. (2014); Karras et al. (2019); Karras et al. (2020); Karras et al. (2020); Zhang et al. (2017, 2018); Xu et al. (2018); Xia et al. (2021); Patashnik et al. (2021). However, GANs suffer from unstable training and a lack of diversity (mode collapse), making GANs only viable when generating images in narrow domains such as faces, animals, or vehicles. Recently, diffusion models Ho et al. (2020); Song and Ermon (2019); Song et al. (2020), a new family of generative models, show impressive, high fidelity and high diversity results with stable training procedures, outperforming GANs and shifting the research focus from GANs to diffusion Nichol et al. (2021); Ramesh et al. (2022); Saharia et al. (2022); Rombach et al. (2022). While many diffusion models were proposed recently, the open source model Stable Diffusion Rombach et al. (2022), a latent diffusion model trained on large datasets, has become the global standard of text-to-image generation models. However, there are still unresolved issues with diffusion models and Stable Diffusion. For example, Stable Diffusion usually shows bad performance for compositional text-to-image synthesis (e.g., _"an apple and a lemon on the table"_), and various efforts have been made to resolve this problem. Attend-and-Excite Chefer et al. (2023) proposes novel attention map guidance to generate two different objects successfully. Several other studies suggest layout-based methods for compositional text-to-image synthesis Li et al. (2023); Lin et al. (2023); Phung et al. (2023). While there is a high interest in compositional text-to-image synthesis, recent studies are only focusing on synthesizing one object of each kind, leaving the problem of synthesizing multiple instances of each object unsolved (e.g., _"three apples and five lemons on the table"_). In this work, we focus on improving diffusion models to generate the exact number of instances per object, as specified in the input prompt. 
To alleviate this problem, we propose counting guidance by using gradients of a counting network. Specifically, we use RCC Hobley and Prisacariu (2022) which performs reference-less class-agnostic counting for any given image. While most counting networks adopt a heatmap-based approach, RCC retrieves the object count directly via regression and, thus, allows us to obtain its gradient for classifier guidance Dhariwal and Nichol (2021); Bansal et al. (2023). Furthermore, to handle multiple object types, we investigate the semantic information mixing problem of Stable Diffusion. For instance, the text prompt _"three apples and four donuts on the table."_ usually causes diffusion models to mix semantic information between apples and donuts leading to poor results and making it hard to enforce the correct object count per object type. We propose novel attention map guidance to separate semantic information between nouns in the prompt by obtaining masks for each object from the corresponding attention map. Fig. 1 compares Stable Diffusion with our method for single and multiple object types. Our approach successfully generates the right amount of each object, while Stable Diffusion fails in these scenarios. To the best of our knowledge, our work is the first attempt to generate the exact number of each object using a counting network for text-to-image synthesis. Our contributions can be summarized as follows: * We present counting network guidance to improve pre trained diffusion models to generate the exact number of objects specified in the prompt. Our approach can be applied to any diffusion model requiring no retraining or finetuning. * We propose novel attention map guidance to solve the semantic information mixing problem and obtain high-fidelity masks for each object. * We demonstrate the effectiveness of our method by qualitative and quantitative comparisons with previous methods. ## Related Work ### Diffusion Models Diffusion models [14, 15, 16, 17] are a new family of generative models that show a significant advance in performance of image synthesis and text-to-image generation. DDPM [14] designed the Markov chain process by gradually adding noise and demonstrated the potential of diffusion models for unconditional image generation. Concurrently, VP-SDE [15, 16] interpreted diffusion models as Stochastic Differential Equations and provided broad insight into diffusion models. One of the problems with DDPM is that it depends on probabilistic sampling and requires about 1000 steps to obtain high-fidelity results making the sampling process very slow and computationally intensive. To solve this problem, DDIM [15] removed the probabilistic factor in DDPM and achieved comparable image quality to DDPM with only 50 denoising steps. Beyond unconditional image generation, recent papers on diffusion models also started to focus on conditional image generation. ADM [1] suggested classifier guidance by calculating the gradient of a classifier to perform conditional image generation. This method though requires a noise-aware classifier and per step gradient calculation. To avoid this problem, [14] proposed classifier-free guidance, which removes the need of an external classifier by computing each denoising step as an extrapolation between one conditional and one unconditional step. Furthermore, ControlNet [13] proposed a separate control network attached to a pretrained diffusion model to perform guidance with additional input in feasible training time. 
Universal Guidance [16] alleviates the problem of requiring a noise-aware classifier by instead calculating the gradient of the predicted clean data point. One issue of diffusion models when first proposed was the high inference cost because of repeated inference in pixel-space. To address this problem, Stable Diffusion (Rombach et al., 2022) Figure 1: Our text-to-generation method generates the exact number of each object for a given prompt. The first row shows the result of Stable Diffusion [16] while the second row shows our method’s result. et al. 2022) proposed performing the diffusion process in a low dimensional latent space instead of image space, greatly reducing the computational cost. Despite Stable Diffusion's powerful performance, there are still some remaining problems. For example, Stable Diffusion usually fails to generate multiple objects successfully (e.g., an apple and a lemon on the table). Thus, the paper Attend-and-Excite (Chefer et al. 2023) suggested attention map-based guidance to activate the attention of all objects in the prompt but nevertheless only focuses on a single instance per object, leaving the issue of reliable generation of multiple instances per objects. In this paper, we explicitly address this issue by introducing counting network guidance and attention map guidance to pre-trained diffusion models. Concurrent with our work, (Paiss et al. 2023; Zhong et al. 2023) tries to generate the exact number of objects using enhanced language models. (Paiss et al. 2023) trains a counting-aware CLIP model (Radford et al. 2021) and uses it to train the text-to-image diffusion model Imagen (Sahara et al. 2022). (Lee et al. 2023; Fan et al. 2023) uses human feedback to fine-tune text-to-image generation models by supervised learning and reinforcement learning. (Phung, Ge, and Huang 2023; Lian et al. 2023) proposes layout-based text-to-image generation, which requires additional layout input and leverages a large language model (LLM) to generate proper layouts from given prompts. Unlike the above works, our method does not require additional layout input, a LLM or retraining. ### Object Counting The goal of object counting is to count arbitrary objects in images. Object counting can be divided into few-shot object counting, reference-less counting and zero-shot object counting. For few-shot object counting (You et al. 2023; Shi et al. 2022), a few example images of the object to count are provided as input while for reference-less counting (Ranjan and Nguyen 2022; Hobley and Prisacariu 2022), example images are not provided and the aim is to count the number of all salient objects in the image. Zero-shot object counting, on the other hand, (Xu et al. 2023; Jiang, Liu, and Chen 2023) aims to count arbitrary objects of a user-provided class. Object counting networks are usually either heatmap-based or regression-based (You et al. 2023; Shi et al. 2022; Hobley and Prisacariu 2022). For our approach, regression-based methods are more suitable for calculating the gradient of the counting network compared to heatmap-based methods. In particular, we adopt RCC (Hobley and Prisacariu 2022), a reference-less regression-based counting model which builds on top of extracted features of a pre-trained ViT (Dosovitskiy et al. 2020) ## Preliminaries Denoising Diffusion Probabilistic Models (DDPM) (Ho, Jain, and Abbeel 2020) define a forward noising process and a reverse denoising process, each with \(T\) steps (\(T=1000\) in the paper). 
The forward process \(q(x_{t}|x_{t-1})\) is defined as \[q(x_{t}|x_{t-1})=\mathcal{N}(x_{t};\sqrt{\alpha_{t}}x_{t-1},(1-\alpha_{t})I), \tag{1}\] where \(\alpha_{t}\) is the noise schedule and \(x_{t}\) is the data point at time step \(t\). This process can be seen as iteratively adding scaled Gaussian noise. Thanks to the properties of the Gaussian distribution, we can obtain \(q(x_{t}|x_{0})\) directly as \[q(x_{t}|x_{0})=\mathcal{N}(x_{t};\sqrt{\bar{\alpha}_{t}}x_{0},(1-\bar{\alpha}_{t})I), \tag{2}\] and rewrite it as \[x_{t}=\sqrt{\bar{\alpha}_{t}}x_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon, \tag{3}\] where \(\bar{\alpha}_{t}=\prod_{i=1}^{t}\alpha_{i}\) and \(\epsilon\sim\mathcal{N}(0,I)\). The DDPM network \(\epsilon_{\theta}(x_{t},t)\) is trained to estimate the noise \(\epsilon\) that was added in the forward process at each time step \(t\). By iteratively estimating and removing the estimated noise, the original image can be recovered. During inference, images are generated using random noise as the starting point. In practice, however, deterministic DDIM (Song, Meng, and Ermon 2020) sampling is commonly used since it requires significantly fewer sampling steps than DDPM. DDIM sampling is performed as \[x_{t-1}=\sqrt{\bar{\alpha}_{t-1}}\left(\frac{x_{t}-\sqrt{1-\bar{\alpha}_{t}}\epsilon_{\theta}}{\sqrt{\bar{\alpha}_{t}}}\right)+\sqrt{1-\bar{\alpha}_{t-1}}\epsilon_{\theta}. \tag{4}\]

Figure 2: Effectiveness of counting network guidance. Our method is also effective for large numbers.

With DDIM sampling, the clean data point \(\hat{x}_{0}\) can be obtained by \[\hat{x}_{0}=\frac{x_{t}-\sqrt{1-\bar{\alpha}_{t}}\epsilon_{\theta}(x_{t},t)}{\sqrt{\bar{\alpha}_{t}}}. \tag{5}\] To add classifier guidance to DDIM [4], the gradient of a classifier is computed and used to retrieve the refined predicted noise \(\hat{\epsilon}\) by \[\hat{\epsilon}=\epsilon-s\sqrt{1-\bar{\alpha}_{t}}\nabla_{x_{t}}\log p_{\phi}(y|x_{t}), \tag{6}\] where \(s\) is the scale parameter and \(p_{\phi}\) is a classifier. One issue of classifier guidance is that the underlying classifier needs to be noise-aware, as it receives outputs from intermediate denoising steps, requiring expensive noise-aware retraining. Universal Guidance [4] addresses this by feeding the predicted clean data point \(\hat{x}_{0}\) instead of the noisy \(x_{t}\) to the classifier, which can be expressed as \[\hat{\epsilon}=\epsilon-s\sqrt{1-\bar{\alpha}_{t}}\nabla_{x_{t}}\log p_{\phi}(y|\hat{x}_{0}). \tag{7}\]

## Method

In this section, we first present how to control the number of a single object type using counting network guidance and then expand it to multiple object types. For multiple object types, we solve the semantic information mixing problem of Stable Diffusion with attention map guidance and present masked counting network guidance for successful multiple object type generation.
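Before turning to the two guidance components, the preliminaries above can be made concrete with a short sketch. The following is a minimal, illustrative PyTorch implementation of one deterministic DDIM step with universal-guidance-style refinement of the predicted noise (Eqs. 4-7); the `eps_model` and `classifier_log_prob` callables, the tensor shapes, and the schedule indexing are assumptions made for this sketch, not our actual implementation.

```python
import torch

def ddim_step_with_guidance(x_t, t, t_prev, alphas_cumprod, eps_model,
                            classifier_log_prob=None, s=1.0):
    """One deterministic DDIM step (Eq. 4) with optional universal guidance (Eq. 7).

    x_t:                 current noisy sample
    alphas_cumprod:      tensor of bar-alpha_t values indexed by timestep
    eps_model:           callable (x_t, t) -> predicted noise
    classifier_log_prob: differentiable callable x0_hat -> log p(y | x0_hat)
    s:                   guidance scale
    """
    a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t_prev]

    x_t = x_t.detach().requires_grad_(True)
    eps = eps_model(x_t, t)

    # Predicted clean data point (Eq. 5)
    x0_hat = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()

    if classifier_log_prob is not None:
        # Universal guidance: differentiate the classifier w.r.t. x_t
        # through the predicted clean point instead of the noisy sample (Eq. 7).
        grad = torch.autograd.grad(classifier_log_prob(x0_hat).sum(), x_t)[0]
        eps = eps - s * (1 - a_t).sqrt() * grad
        x0_hat = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()

    # Deterministic DDIM update (Eq. 4)
    x_prev = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps
    return x_prev.detach()
```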
```
Require: time step \(t\), denoising network \(\epsilon_{\theta}(\cdot,\cdot)\), decoder \(Decoder(\cdot)\), counting network \(Count(\cdot)\), number of objects \(N\), scale parameter \(s_{count}\)
Ensure: clean latent \(z_{0}\)
1: for \(t=T,T-1,\ldots,1\) do
2:    \(\epsilon\leftarrow\epsilon_{\theta}(z_{t},t)\)
3:    \(\hat{z}_{0}\leftarrow(z_{t}-\sqrt{1-\bar{\alpha}_{t}}\epsilon)/\sqrt{\bar{\alpha}_{t}}\)
4:    \(\hat{x}_{0}\leftarrow Decoder(\hat{z}_{0})\)
5:    \(L_{count}\leftarrow|(Count(\hat{x}_{0})-N)/N|^{2}\)
6:    \(\epsilon\leftarrow\epsilon+s_{count}\sqrt{1-\bar{\alpha}_{t}}\nabla_{z_{t}}L_{count}\)
7:    \(z_{t-1}\leftarrow Sample(z_{t},\epsilon)\)
8: end for
9: return \(z_{0}\)
```
**Algorithm 1** Counting guidance for a single object type

### Counting Guidance for a Single Object Type

To avoid retraining the counting network on noisy images, we perform counting network guidance following Universal Guidance [4]. For a given number of \(N\) objects, we define the counting loss \(L_{count}\) as \[L_{count}=\left|\frac{Count(\hat{x}_{0})-N}{N}\right|^{2}, \tag{8}\] where \(Count(\cdot)\) is the pre-trained counting network RCC [1] and \(\hat{x}_{0}\) is the predicted clean image at each time step. We update the predicted noise \(\epsilon\) using the gradient of the counting network as \[\epsilon\leftarrow\epsilon+s_{count}\sqrt{1-\bar{\alpha}_{t}}\nabla_{z_{t}}L_{count}, \tag{9}\] where \(s_{count}\) is an additional scale parameter to control the strength of counting guidance.

Figure 3: Effectiveness of attention map guidance. The first row shows the results of Stable Diffusion without attention map guidance, and the second row shows the results with attention map guidance.

Fig. 2(a) and Fig. 2(b) show the effectiveness of our proposed counting network guidance method. For the prompt _"ten apples on the table,"_ Stable Diffusion with counting network guidance generates ten apples, while vanilla Stable Diffusion generates only three apples. We find that Fig. 2(a) and Fig. 2(b) have similar textures and backgrounds, indicating that counting guidance maintains the original properties of Stable Diffusion while only influencing the object count. Counting guidance also proves effective when generating a large number of objects. Due to a lack of images containing a large number of objects in Stable Diffusion's training dataset, it often fails to create plausible results for such cases. Fig. 2(c) and Fig. 2(d) show the effectiveness of counting guidance on large numbers. For the given text prompt _"fifty apples on the table,"_ Stable Diffusion with counting network guidance generates 46 apples, while vanilla Stable Diffusion generates only 18 apples. Although counting guidance helps to generate the given number of objects, it sometimes fails to produce the exact number. We attribute this to the accuracy of the counting network itself, so we compensate by adding a fixed offset \(N_{off}\) to the target count \(N\): \[N\gets N+N_{off}, \tag{10}\] where the value of the offset \(N_{off}\) is usually 0, -1, or 1.

### Counting Guidance for Multiple Object Types

#### Semantic Information Mixing Problem

Handling multiple object types requires counting each type separately. We could use a class-aware counting network; however, the predicted clean image of early denoising steps is still of too low quality for the counting network to correctly identify each object instance. Therefore, we decide to use a class-agnostic counting network instead.
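Before describing how masks are obtained for multiple object types, the single-object procedure of Algorithm 1 can be illustrated with the Python/PyTorch sketch below. The `eps_model`, `decoder`, and `count_model` callables stand in for the latent-diffusion UNet, the VAE decoder, and RCC, respectively, and the sampler is a plain deterministic DDIM update; these are assumptions for the sketch rather than our exact implementation.

```python
import torch

def counting_guided_sampling(z_T, timesteps, alphas_cumprod, eps_model,
                             decoder, count_model, N, s_count=1.0, N_off=0):
    """Counting guidance for a single object type (Algorithm 1, Eqs. 8-10)."""
    N = N + N_off                      # optional offset compensating counter bias (Eq. 10)
    z_t = z_T
    for i, t in enumerate(timesteps):  # t = T, T-1, ..., 1
        z_t = z_t.detach().requires_grad_(True)
        eps = eps_model(z_t, t)
        a_t = alphas_cumprod[t]

        # Predicted clean latent and decoded clean image
        z0_hat = (z_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()
        x0_hat = decoder(z0_hat)

        # Relative counting loss (Eq. 8)
        L_count = ((count_model(x0_hat) - N) / N) ** 2

        # Refine the predicted noise with the counting gradient (Eq. 9)
        grad = torch.autograd.grad(L_count.sum(), z_t)[0]
        eps = eps + s_count * (1 - a_t).sqrt() * grad

        # Deterministic DDIM update with the refined noise
        t_prev = timesteps[i + 1] if i + 1 < len(timesteps) else 0
        a_prev = alphas_cumprod[t_prev]
        z0_hat = (z_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()
        z_t = (a_prev.sqrt() * z0_hat + (1 - a_prev).sqrt() * eps).detach()
    return z_t
```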
For each object type to count, we obtain a mask using the underlying attention maps of the UNet model and feed the masked image of each object type to the counting network separately. It has been shown in previous works [14, 15] that the attention map corresponding to each object has high activation at the object region. Thus, we use the attention map of each object as its mask. Counting guidance, however, requires accurate object masks, and Stable Diffusion often produces attention maps that do not correspond to the correct location of each object. The first row of Fig. 3 demonstrates this semantic information mixing problem. For the prompt _"three oranges and four eggs on the table,"_ we find that the attention map of _"oranges"_ and the attention map of _"eggs"_ share a large part of their pixels, resulting in the generation of orange-colored eggs instead of oranges and eggs.

#### Attention Map Guidance

To solve the semantic information mixing problem, we first obtain each object's attention map following [15]. Similarly, we exclude the \(\langle sot\rangle\) (start-of-text) token, re-weight using Softmax, and then Gaussian-smooth to receive the attention map \(M_{i}\) for each object \(i\). Finally, we normalize each object's attention map as \[\hat{M}_{i,j,k}=\frac{M_{i,j,k}-\min_{j,k}(M_{i,j,k})}{\max_{j,k}(M_{i,j,k})-\min_{j,k}(M_{i,j,k})}, \tag{11}\] where \(M_{i,j,k}\) is the attention value at coordinate \((j,k)\) of object \(i\)'s attention map. We then encourage each pixel coordinate to be claimed by the attention of only a single object by taking each coordinate's minimum attention value over objects and summing these values into \(L_{min}\); a low \(L_{min}\) indicates that each coordinate is activated by only a single object: \[L_{min}=\sum_{j,k}\min_{i}(\hat{M}_{i,j,k}). \tag{12}\] Similar to \(L_{min}\), we define \(L_{max}\) to ensure that at least one object activates each pixel: \[L_{max}=\sum_{j,k}\max_{i}(\hat{M}_{i,j,k}). \tag{13}\] Finally, we calculate the total attention loss \(L_{attention}\) as \[L_{attention}=L_{min}-s_{max}L_{max}, \tag{14}\] where \(s_{max}\) is a scale parameter. The predicted noise \(\epsilon\) is then updated as \[\epsilon\leftarrow\epsilon+s_{attention}\sqrt{1-\bar{\alpha}_{t}}\nabla_{z_{t}}L_{attention}. \tag{15}\] The second row of Fig. 3 shows the effectiveness of our attention map guidance. We find that the attention map of _"oranges"_ only focuses on oranges, and the attention map of _"eggs"_ only focuses on eggs, resulting in a correctly synthesized output. Furthermore, we observe that high-fidelity object masks are generated from the corresponding attention maps.

#### Masked Counting Guidance

For each object \(i\), we binarize its attention map to receive the binary mask \(M_{i}^{b}\) as \[M_{i,j,k}^{b}=\begin{cases}1,&\text{if }i=\operatorname{argmax}_{i^{\prime}}(M_{i^{\prime},j,k})\\ 0,&\text{otherwise}\end{cases} \tag{16}\] and then generate a masked clean image \(\hat{x}_{0,i}\) using element-wise multiplication: \[\hat{x}_{0,i}=\hat{x}_{0}\odot M_{i}^{b}. \tag{17}\] For the target count \(N_{i}\) of the \(i\)-th object, each masked counting loss \(L_{count,i}\) is defined as \[L_{count,i}=\left|\frac{Count(\hat{x}_{0,i})-N_{i}}{N_{i}}\right|^{2}. \tag{18}\] Finally, we update the noise \(\epsilon\) as \[\epsilon\leftarrow\epsilon+\sum_{i}s_{count,i}\sqrt{1-\bar{\alpha}_{t}}\nabla_{z_{t}}L_{count,i}, \tag{19}\] where \(s_{count,i}\) is an additional scaling parameter per object.
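The attention losses of Eqs. (11)-(14) and the masked counting losses of Eqs. (16)-(18) can be sketched as follows. This is an illustrative PyTorch version only: it assumes the per-object attention maps are stacked into a tensor of shape (num_objects, H, W) and have already been upsampled to the decoded image resolution, which is a simplification of the actual pipeline.

```python
import torch

def attention_losses(attn_maps, s_max=0.1):
    """Attention map guidance losses (Eqs. 11-14); attn_maps: (num_objects, H, W)."""
    # Per-object min-max normalization (Eq. 11)
    flat = attn_maps.flatten(1)
    mn = flat.min(dim=1).values.view(-1, 1, 1)
    mx = flat.max(dim=1).values.view(-1, 1, 1)
    M_hat = (attn_maps - mn) / (mx - mn + 1e-8)

    L_min = M_hat.min(dim=0).values.sum()   # each pixel claimed by at most one object (Eq. 12)
    L_max = M_hat.max(dim=0).values.sum()   # each pixel claimed by at least one object (Eq. 13)
    return L_min - s_max * L_max            # total attention loss (Eq. 14)

def masked_counting_losses(x0_hat, attn_maps, counts, count_model):
    """Masked counting guidance (Eqs. 16-18); counts: list of target counts N_i."""
    winner = attn_maps.argmax(dim=0)        # index of the dominant object per pixel
    losses = []
    for i, N_i in enumerate(counts):
        mask = (winner == i).float()        # binary mask M_i^b (Eq. 16)
        x_masked = x0_hat * mask            # element-wise masking, broadcast over channels (Eq. 17)
        losses.append(((count_model(x_masked) - N_i) / N_i) ** 2)  # Eq. 18
    return losses
```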
## Experiments

We use the state-of-the-art text-to-image generation model Stable Diffusion v1.4 for our experiments. We use DDIM sampling with 50 steps and set the scale parameter for \(L_{max}\) to \(s_{max}=0.1\) by default. We create a modified dataset based on the object classes in Attend-and-Excite [3] to evaluate and compare our approach with previous methods. Specifically, we remove the color category and add more animals and objects to focus on counting performance. We compare our method with Stable Diffusion [1] and Attend-and-Excite.

### Results for Single Object Type

Figure 4: Qualitative comparison for single object type. The first row shows the results of Stable Diffusion [1], the second row shows the results of Attend-and-Excite [3], and the last row shows the results of our method.

Fig. 4 shows a qualitative comparison for the single object type scenario. While Stable Diffusion and Attend-and-Excite fail to generate the number of objects specified in the prompt, our method generates the correct number. For the text prompt _"four tomatoes on the table,"_ Stable Diffusion generates only three tomatoes without counting guidance. With counting guidance, the tomato at the bottom is successfully divided into two tomatoes, while the rest of the image is consistent with the original result. The text prompt _"ten oranges on the table,"_ causes Stable Diffusion to generate only four oranges, compared to our solution, which creates the correct amount of ten. The big difference in count results in large gradients, making our result differ severely from the original. Our method also works well for more complex categories, such as animals. For the text prompt _"three chicks on the road,"_ Stable Diffusion and Attend-and-Excite synthesize only two chicks, unlike our method, which generates one additional chick while maintaining the other two chicks' appearance. For the text prompt _"five rabbits in the yard,"_ Stable Diffusion and Attend-and-Excite generate only four rabbits, while our method generates one more rabbit but fails to maintain the other rabbits' appearance. This is because of the difference between the background and the rabbit colors: it is hard to generate a white rabbit from a brown yard, so Stable Diffusion with counting guidance changes the overall structure and recreates five rabbits.

### Results for Multiple Object Types

Fig. 5 shows a qualitative comparison for multiple object types. For the given text prompt _"three lemons and one bread on the table,"_ Stable Diffusion successfully creates one bread but fails to generate three lemons, while Attend-and-Excite fails in both cases. With masked counting guidance, our method correctly synthesizes three lemons and one bread. The result shows that the lemon at the bottom is divided into two lemons thanks to masked counting guidance, while the bread's shape is maintained. For the text prompt _"two onions and two tomatoes on the table,"_ we find that Stable Diffusion suffers from the semantic information mixing problem and generates red onions instead of tomatoes. Due to our attention map guidance, our method creates real tomatoes. As Attend-and-Excite is also based on attention map optimization, it successfully generates real tomatoes but fails to generate the exact number of onions.

## Limitations

As our results show, our method aids in generating the exact number of each object, but we also found several limitations.
First, it is often necessary to tune the scale parameters of the counting network guidance for a specific text prompt. Although fixed scale parameters can help to control the number of objects to a certain degree, generating the exact number of each object may require tuning the underlying scale parameters. Second, we found that generating the exact number of more complex objects is much harder than generating simple object shapes. The structure of the resulting image is mostly determined in the early denoising steps, which limits the dividing or merging of objects by counting guidance.

## Conclusions

In this paper, we propose counting guidance, which is, to our knowledge, the first attempt to guide Stable Diffusion with a counting network to generate the correct number of objects during text-to-image generation. For a single object type, we calculate the gradients of a counting network and refine the estimated noise at every step. For multiple object types, we discuss the semantic information mixing problem and propose attention map guidance to alleviate it. Finally, we obtain masks of each object from the corresponding attention map and calculate the counting network's gradient of each masked image separately. We demonstrate that our method effectively controls the number of objects, with a few limitations. For future work, we aim to remove the necessity of scale parameter tuning and create a single global framework that works for every prompt without additional tuning.
2309.08995
Comprehensive Investigation and Evaluation of an Indoor 3D System Performance Based on Visible Light Communication
The abstract discusses the significance of Visible Light Communication (VLC) as an efficient and cost-effective solution in the era of green technology. VLC not only provides illumination but also high-speed data transmission through existing infrastructure, making it ideal for indoor positioning systems (IPS) with minimal interference with the Radio Frequency (RF) spectrum and enhanced security. While previous research has mainly focused on positioning accuracy, this paper delves into the performance evaluation of a VLC-based indoor system. The study examines key performance parameters, namely Signal-to-Noise Ratio (SNR) and path loss, in a Line of Sight (LOS) scenario. It employs a single LED and ten different photodiode (PD) locations in a 3D room. MATLAB simulations demonstrate the system's effectiveness, achieving a good SNR with low path loss. Additionally, the research highlights the importance of optimizing the PD's position to maximize signal strength while minimizing noise and losses.
Vailet Hikmat Faraj Al Khattat, Siti Barirah Ahmad Anas, Abdu Saif
2023-09-16T13:26:08Z
http://arxiv.org/abs/2309.08995v1
Comprehensive Investigation and Evaluation of an Indoor 3D System Performance Based on Visible Light Communication ###### Abstract The obvious and accelerator trend towards efficient green technology in a modern technological revolution time makes visible light communication (VLC) a solution key to meeting this growth. Besides the illumination function, VLC uses for data transmission at high speed with the lowest cost due to utilizing the infrastructure that already exists. As a result of VLC's multiple features, it had been used in the indoor positioning system (IPS) in many pieces of research to obtain a high accuracy without conflict with the Radio Frequency (RF) spectrum besides it providing high security from any penetrating. However, achieving good performance parameters is essential and fundamental in evaluating the effectiveness of any indoor system that many pieces of research neglected and concentrated more on the positioning accuracy aspect. This paper investigates and analyses the performance of the indoor system that is designed and developed based on a VLC and proves its effectiveness through a comprehensive evaluation. Signal-to-noise ratio (SNR) and path loss are the performance parameters that are investigated in this system by varying the transmitted power, incidence angles, and Lambertian mode number in the line of sight (LOS) scenario. The examined system consists of one light-emitting diode (LED) in a 3D typical room and one photodiode (PD) in different ten locations along the half-diagonal line towards the corner. Utilizing a single LED in this system is important to avoid the interference that occurs when utilizing multiple LEDs; besides, it is more convenient in the indoor environment. The obtained results by MATLAB simulation show the reliable and effective performance of the proposed developed VLC system design by achieving a good SNR with low path loss. Furthermore, the proposed system approach proves how the optimum position of PD is crucial to obtain a strong signal with the lowest ratio of noise and losses. VLC, LED, PD, SNR, Path loss. ## I Introduction The deployment of 5G opened the door to meet the incremental need for higher capacity in the wireless network imposed as a result of the rapid development of artificial intelligence (AI) and the internet of things (IoT) [1]. In spite of that, it is expected that 6G will exceed its need for the current wireless spectrum, which prompts the search for higher frequencies in terahertz to cover wider communication [2]. Visible light communication (VLC) considers a suitable and promising candidate technology that meets this expanding need of the wireless spectrum in the near future where its spectrum range of 400-800 THz [2]. It considers an integral part of optical wireless communication (OWC) that has high-speed connectivity and high data rates [3]. VLC has various features over other radio frequency (RF) technologies such as wireless fidelity (Wi-Fi), Bluetooth, and radio frequency identification (RFID) [4]. This green technology is useful in short-range scenarios where cannot penetrate the walls and this is extremely useful in terms of privacy issues [5]. As a health aspect, it has no harmful effects on health if compared to radio communications. This important feature highlights this technology to utilize in RF-restricted places like hospitals, laboratories, and airplanes due to it being interference-free to the RF [6,7]. 
From a cost aspect, it is efficient because its deployment depends on the infrastructure of the installed illumination with some additional devices. Light emitting diode (LED) represents the source that is applied widely in VLC technology due to its numerous features that enable to get a reliable, solid, and efficient illumination system [8]. The main features of the LED can be represented as a long lifetime, low power consumption, lightweight, small size, high-lighting efficiency, low cost, safe to the human eye [9]. Also, it is easy to install, easy to use and can be modulated at higher data rates than conventional lighting sources [10]. The double function of the LED nominating it the best choice to utilize in homes and offices for communication as data transmission, the indoor positioning besides its basic function of illumination [9]. As an advanced step could be taken towards reducing power consumption radically, all the lights are replaced by the LEDs. In this context, several research studies have appeared in the last decades that deal with indoor applications based on VLC [7][11,12]. The indoor positioning system (IPS) occupied a wide portion of the interest of researchers in these applications [13,14]. By applying different techniques based on VLC technology, a good level was achieved in the positioning of the receiver, reach to just a few centimeters from the actual position. For instance, 9.2 cm is the average positioning accuracy that is achieved based on the time-difference-of-arrival (TDOA) technique [15]. However, some of these researchers concentrate their investigation study to reach the precise positioning but they neglect the other important metrics such as in [15]. Some of them also took into their consideration calculating the received power besides the positioning. For instance, in [20] the author proposed using the Fingerprint technique based on VLC-IPS and the obtained received power was 0.92\(\times\)10\({}^{-3}\) watts and 19.3 cm is a positioning error. In addition, some of them calculate the signal-to-noise ratio (SNR) such as in [16] where the proposed system based on VLC-IPS used the received signal strength (RSS) technique to get SNR value of 13.6 dB and 6 cm of the positioning error. The received signal strength indication (RSSI) technique was applied based on VLC-IPS in [17] where SNR obtained value was \(>\) 12 dB while the positioning error was \(<\)10 cm. While some took both, for instance in [19] hybrid technique RSSI / TDOA was proposed to apply in the system based on VLC-IPS to achieve better accuracy in the positioning error. It achieved 5.81 cm for positioning error and the obtained received power was 0.254\(\times\)10\({}^{-3}\) watt besides the SNR obtained value was 0.24\(\times\)10\({}^{-3}\)dB. In the research [18], the binomial point process (BPP) technique was proposed in the VLC system performance evaluation and the SNR performance value was 20.11 dB. Moreover, most of the research in this field neglected the path loss parameter. A comprehensive evaluation of any proposed system based on the important performance parameters considers significant and essential to prove its effectiveness and robustness. Different proposed systems that are based on VLC-IPS and used different techniques had been compared with the new proposed system. The comparison was in terms of the used techniques, the system model that was applied, and the experiment type, scenario scheme, SNR, and path loss are given in Table I. 
In this work, an indoor VLC system in the line of sight (LOS) model had been proposed and simulated to investigate and evaluate the performance parameters. Also, to find out how the light distribution impact against the photodiode's (PD) different locations on the system performance. This is an extension of our previous work [21], where a new technique for positioning was proposed named complementary and supplementary angles based on received signal strength (CSA-RSS). Excellent results were achieved in the average positioning error reached 4.2 cm and the received optical power was 4.5 watts. More precisely, the main contributions of this paper are outlined as follows: * An indoor 3D system based on VLC technology is designed and developed in the LOS scenario because it covers most of the received power. A pair of LED-PD is used to investigate the effectiveness of the system through the performance parameters. * Investigate the impact of PD movement from the room's floor center towards one of the corners with equal displacements in 10 locations against fixed LED in the ceiling's center on the obtained values of the SNR and the path loss parameters. * Calculating the effect of varying the incidence angles, transmitted power, and Lambertian mode number against increasing the distance between the LED-PD. * Evaluating the results for different configurations by MATLAB R2019a simulation of the proposed system. The content of this paper is organized as follows; section II illustrates the model of the proposed system. Section III investigates and demonstrates the performance of the important parameters of the proposed system based on the simulation results with the evaluation and discussion. Finally, this work had been summarized in Section IV. ## II System Model The proposed model of the indoor system based on VLC is shown in Fig. 1, a standard-size empty room with dimensions of 5 m \(\times\) 5 m \(\times\) 3 m. LOS is the proposed model of OWC to investigate the performance of the system based on the fundamental parameters for evaluation. The system consists of a single LED as an access point located in the center of the room's ceiling to achieve a uniform distribution of light and obtain an optimum LOS that covers most points Fig. 1: Scheme of the indoor system based on VLC of the room for the proposed indoor environment. The coordinate of the LED is (2.5, 2.5, 3) m where the distance is equal between the LED and all wall edges. A single PD that changes its place from the room's floor center towards one of the corners in a half diagonal line with equal displacements in 10 locations as presented in Fig. 1. The coordinates of the PD's ten locations are (2.50, 2.50, 0), (2.23, 2.23, 0), (1.96, 1.96, 0), (1.69, 1.69, 0), (1.42, 1.42, 0), (1.15, 1.15, 0), (0.88, 0.88, 0), (0.61, 0.61, 0), (0.34, 0.34, 0), (0.07, 0.07, 0) m. An investigation had been made to explore the effect of increased linked distance between LED-PD dramatically on the system performance. The LED broadcasts the visible light signals which are transmitted with a unique location code based on its location coordinates as well as its lighting function. ### _The proposed scenario's configuration_ * In the LOS scenario of the indoor proposed system model that is based on VLC. One LED as an access point for the entire room is fixed in the center of the ceiling with the coordinates (2.5, 2.5, 3) m to reach optimum distribution which almost covers most room points symmetrically. 
The dimensions of the typical empty room are 5 m \(\times\) 5 m \(\times\) 3 m. The impact of increasing the distance due to the PD movement in 10 locations on the floor plane against the fixed LED reflects on the whole system's performance. * At the first location with coordinate (2.5, 2.5, 0) m, the PD was placed directly under the LED intensity with a straight line of LOS where 3 m is the distance between LED-PD. * The proposed movement of the PD is taken place symmetrically in the \(X\) and \(Y\) planes with equal displacements between the positions starting from the center toward the one of corners. * The PD in its 10 locations moves in a half-diagonal line on the floor plane until the tenth position at the corner with the coordinate (0.07, 0.07, 0) m. ### _VLC channel model and its Geometry_ The indoor VLC proposed system consists of a LED as a transmitter that transmits the visible light signals to a 3D empty room of size 5 m \(\times\) 5 m \(\times\) 3 m and a PD as a receiver that is placed on the floor plane for signals receive. The LED is mounted in the middle of the room roof with coordinates (2.5, 2.5, 3) m at a height of 3 m and the distance between it and the four edges of the walls is 2.5 m. The LOS link model is the optical channel considered in this investigation study because it comprises most of the received power. The signals are transmitted in a direct path through the LOS channel to reach the PD that receives and measures the strength of this optical power. The ten locations of the PD to investigate their effect are (2.50, 2.50, 0), (2.23, 2.23, 0), (1.96, 1.96, 0), (1.69, 1.69, 0), (1.42, 1.42, 0), (1.15, 1.15, 0), (0.88, 0.88, 0), (0.61, 0.61, 0), (0.34, 0.34, 0), (0.07, 0.07, 0) m. These ten locations start from the room center in a direct LOS with LED intensity until the tenth location at the corner. The geometry of the VLC channel model between the LED and the ten locations of the PD in the LOS link is shown in Fig. 2 where \(d_{LED\cdot PD}\) is the distance between LED-PD and \(d_{vertical}\) is the vertical distance between the ceiling and floor planes. Also, the irradiance angle \(\phi_{irr}\), incidence angle \(\theta\), the LED's half-power angle (\(\phi_{12}\)), and the field of view (_FOV_) angle of the PD are shown in Fig. 2 where these angles play important role in the findings obtained. #### Ii-A1 Signal to noise ratio In order to obtain outstanding performance of the VLC system and achieve higher capacity and findings, SNR plays a major and important role. This is achieved by SNR improvement which is reflected in the rapid future development of indoor systems. One of the parameters that have a negative effect on the received power (\(P_{Recivid}\)) is noise where \(P_{Recivid}\) is affected by shot and thermal noises. Shot noise is the fluctuating that occurs because of the incident optical powers for the desirable and the ambient light sources. Thermal noise is the PD's fluctuating that is caused by temperature changes of the electric circuit in the receiver. The overall noise variance is \(N\) and usually is modelled as the additive white Gaussian noise (AWGN) and it is given as [22]: \[N=\sigma_{Channel}^{2}+\sigma_{Shot}^{2} \tag{1}\] SNR plays a significant role to attain effective communication and examine any proposed system. 
SNR is more related to \(\sigma_{Shot}^{2}\) where it can be expressed as [23]: \[SNR=10\log_{in}\frac{(P_{Recivid}R_{p})^{2}}{\sigma_{Shot}^{2}} \tag{2}\] Where \(R_{F}\) denotes the PD's responsivity, the variance of the shot noise can be expressed as [24]: \[\sigma_{Shot}^{2}=2qR_{p}P_{Recivid}B+2qI_{n_{I}}I_{2}B \tag{3}\] Where \(q\) is the electron charge, \(B\) is the equivalent noise bandwidth, \(I_{bg}\) is the background current, and \(I_{2}\) is a noise bandwidth factor. The SNR value can give a precise reflection on the quality of the received signal, as it is affected by the value of the \(P_{Recivid}\) as an important parameter in its equation where it had been calculated based on the RSS-CSA technique that is proposed and proved based on the obtained good results previously in our work [21] and can be expressed as follow [4]: \[P_{Recivid}=\frac{P_{meas}}{\left(d_{LED-PD}\right)^{2}}\,f(\phi_{irr})A_{ diff}(\theta\ ) \tag{4}\] where \(P_{meas}\) represents the transmitted power from the LED in the VLC system, \(f(\phi_{irr})\) is the intensity of the radiant angle of the LED's irradiant angle (\(\phi_{ir}\)), \(d_{LED\cdot PD}\) is the Fig. 2: VLC channel model geometry distance between the LED-PD, and \(\theta\) represents the incidence angle which changes based on the PD's locations as shown in Fig. 2. \(A_{\mathit{eff}}\) is the effective area in the PD that combines the transmitted signals and can be expressed as follows [4][12]: \[A_{\mathit{eff}}\left(\theta\right)=\begin{cases}\mathit{Ah}(\theta)\,\mathrm{g }(\theta)\cos\theta,&\theta\leq FOV\\ 0,&\theta>FOV\end{cases} \tag{5}\] \(A\) symbolizes the PD's surface area, the field of view of the PD can be denoted as \(FOV\), \(h(\theta)\) expresses the gain of the optical filter while the gain of the concentrator is represented as \(g(\theta)\)[25]: \[\mathrm{g}(\theta)=\begin{cases}\dfrac{n^{2}}{\sin^{2}(FOV)},&0\leq\theta\leq FOV \\ 0,&\theta>FOV\end{cases} \tag{6}\] The concentrator's refractive index expressed as \(n\)[24][26]. Typically, the power distribution of the LED's profile is modeled as a Lambertian emission, and \(f\) (\(\phi_{\mathit{int}}\)) can be denoted as follows: [4]: \[f(\phi_{\mathit{int}})=[\dfrac{(m+1)}{2\pi}]\cos^{m}\phi_{\mathit{int}} \tag{7}\] The value of \(m\) refers to the directivity of the LED and represents the LED's Lambertian model number. Also, it is linked to the LED's half-power angle \(\phi_{12}\) and can be expressed as [4][27]: \[m=\dfrac{-\ln(2)}{\ln(\cos(\phi_{12}))} \tag{8}\] The distance between LED-PD in the LOS channel model can be calculated as follows [28][29]: \[d_{\mathit{int}-\mathit{PD}}=\sqrt{\left(X_{i}-X_{j}\right)^{2}+\left(Y_{i}-Y _{j}\right)^{2}+\left(Z_{i}-Z_{j}\right)^{2}} \tag{9}\] Where (\(X_{i}\), \(Y_{i}\), \(Z_{i}\)) represent the LED location which is fixed and known as (2.5, 2.5, 3) m While (\(X_{j}\), \(Y_{j}\), \(Z_{j}\)) represent the PD's ten locations. Table II illustrates the calculated distance between the LED and the PD's ten locations based on equation 9. Its movement occurs symmetrically in the \(X\) and \(Y\) axes starting from the center of the floor plane toward one of the corners. #### Ii-B2 Path loss Path loss is one of the significant metrics of the wireless channel evaluations for any proposed system. It is used to compute the decrease in the power of a signal as it propagates away from the transmitter (LED). 
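For illustration, the LOS link budget of equations (2)-(9) can be prototyped in a few lines. The paper's results are produced in MATLAB; the following is only an indicative Python sketch with parameter values taken from Table III, and it applies equation (5) literally with the incidence angle measured from the PD normal, which may differ from the angle convention used when the results are reported.

```python
import numpy as np

# Illustrative parameter values taken from Table III
P_tx = 15.0            # transmitted LED power (W)
A = 2.25e-6            # PD surface area (m^2)
R_p = 0.6              # PD responsivity (A/W)
n_idx = 1.5            # concentrator refractive index
FOV = np.radians(90)
half_power_angle = np.radians(60)
q = 1.602e-19          # electron charge (C)
B = 50e6               # equivalent noise bandwidth (Hz)
I_bg = 5.1e-3          # background current (A)
I_2 = 0.562            # noise bandwidth factor

m = -np.log(2) / np.log(np.cos(half_power_angle))   # Lambertian mode number (Eq. 8)

def los_snr_db(led_xyz, pd_xyz, theta):
    """SNR of the LOS link (Eqs. 2-9) for incidence angle theta (rad, from PD normal)."""
    d = np.linalg.norm(np.asarray(led_xyz) - np.asarray(pd_xyz))     # Eq. 9
    cos_irr = (led_xyz[2] - pd_xyz[2]) / d                           # irradiance angle from geometry
    f_irr = (m + 1) / (2 * np.pi) * cos_irr ** m                     # Eq. 7
    g = n_idx ** 2 / np.sin(FOV) ** 2 if theta <= FOV else 0.0       # Eq. 6, with h(theta) = 1
    A_eff = A * g * np.cos(theta) if theta <= FOV else 0.0           # Eq. 5
    P_rx = P_tx / d ** 2 * f_irr * A_eff                             # Eq. 4
    shot = 2 * q * R_p * P_rx * B + 2 * q * I_bg * I_2 * B           # Eq. 3
    return 10 * np.log10((P_rx * R_p) ** 2 / shot)                   # Eq. 2

# PD directly below the LED at the floor centre (theta measured from the PD normal)
print(los_snr_db((2.5, 2.5, 3.0), (2.5, 2.5, 0.0), theta=np.radians(0)))
```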
Path loss is a major component in the analysis and design of the link in a telecommunication system where it plays a vital role in wireless network planning. Free space path loss is the proposed module in this indoor VLC system where it computes the loss of signal power in a LOS propagation path without any reflections or shadowing, and can be expressed as [30][31]: \[P_{\mathit{Low}}\approx\dfrac{(m+1)A}{2\pi(d_{\mathit{LED-PD}})^{2}}\cos^{m}( \phi_{\mathit{int}})\cos(\theta) \tag{10}\] The system simulation parameters values for different configurations which took into consideration in this research paper are given in Table III. ## III Simulation Results and Discussion In this section, the simulation results by MATLAB R2019a that investigate the proposed indoor VLC system performance are discussed and analysed in detail in the LOS scenario. Different configurations of PD locations and how significantly affect the parameters' performance are demonstrated. All the required parameter values for the simulation are given in Table III. The aim is to investigate and prove the system's performance effectiveness in a comprehensive manner that reflects on the indoor systems' future. Also, it keeps pace with rapid development by achieving high speed in the transmission of data with the lowest cost. Fig. 1 and Fig. 2 illustrate the distribution scenario of a single LED and PD's ten different locations in a 5 m \(\times\) 5 m \(\times\) 3 m room environment. To achieve a uniform distribution of LED lighting and good coverage for most of the points, the LED is mounted in the middle of the room's ceiling. The different locations of the PD through its movement reflect a clear description of how the system \begin{table} \begin{tabular}{|c|c|c|} \hline **LED’s Coordinate** & \multicolumn{2}{c|}{**PD’s Coordinates**} & **Distance** \\ \hline \multirow{8}{*}{2.5,2.5,3} & 1\({}^{\mathrm{st}}\)/At center & 2.5, 2.5, 5, 0 & 3 m \\ \cline{2-3} & 2\({}^{\mathrm{nd}}\) & 2.23, 2.23, 0 & 3.024 m \\ \cline{2-3} & 3\({}^{\mathrm{rd}}\) & 1.96, 1.96, 0 & 3.095 m \\ \cline{2-3} & 4\({}^{\mathrm{th}}\) & 1.69, 1.69, 0 & 3.211 m \\ \cline{2-3} & 5\({}^{\mathrm{th}}\) & 1.42, 1.42, 0 & 3.366 m \\ \cline{2-3} & 6\({}^{\mathrm{th}}\) & 1.15, 1.15, 0 & 3.555 m \\ \cline{2-3} & 7\({}^{\mathrm{th}}\) & 0.88, 0.88, 0 & 3.774 m \\ \cline{2-3} & 8\({}^{\mathrm{th}}\) & 0.61, 0.61, 0 & 4.017 m \\ \cline{2-3} & 9\({}^{\mathrm{th}}\) & 0.34, 0.34, 0 & 4.281 m \\ \cline{2-3} & 10\({}^{\mathrm{th}}\)/At corner & 0.07, 0.07, 0 & 4.561 m \\ \hline \end{tabular} \end{table} Table II: Coordinates and distance between LED and PD’s ten locations \begin{table} \begin{tabular}{|l|l|} \hline **Parameter** & **Value** \\ \hline Room Dimensions & 5 m \(\times\) 5 m \(\times\) 3 m \\ & (Length \(\times\) Width \(\times\) Height) \\ \hline PD’s surface area & 2.25 mm\({}^{2}\) \\ \hline LED’s transmitted Power & (8, 10, 12, 15) Watts \\ \hline Gain of the optical filter \(h(\theta)\) & 1.0 \\ \hline Reflective index (\(n\)) & 1.5 \\ \hline Lambertian model number (\(m\)) & 1.3 \\ \hline LED’s half-power angle & 60\({}^{\mathrm{rd}}\) \\ \hline Field of view (\(FOV\)) & 90\({}^{\mathrm{rd}}\) \\ \hline Incidence angle \(\theta\) & 60\({}^{\mathrm{rd}}\), 70\({}^{\mathrm{rd}}\), 80\({}^{\mathrm{rd}}\), 90\({}^{\mathrm{rd}}\) \\ \hline PD responsivity (\(R_{\mathit{F}}\)) & 0.6 A/W \\ \hline **Noise Parameters** \\ \hline Background current (\(I_{\mathit{IR}}\)) & 5.1 mA \\ \hline Equivalent noise bandwidth (\(\mathit{B}\)) & 50 MHz \\ \hline Noise 
bandwidth factor (\(I_{\mathit{J}}\)) & 0.562 \\ \hline \end{tabular} \end{table} Table III: Simulation parameters of proposed indoor VLC system performance is affected dramatically by this movement away from the LED intensity. Equal displacements between every location of the PD's ten locations on the \(X\) and \(Y\) axes in a symmetrical way enable regular evaluation of system performance. ### _Signal to Noise Ratio performance_ In order to obtain a premium performance of the indoor VLC system, SNR has a key role that influences achieving that. SNR profile in the room is changed based on the locations of the LED-PD of the different arrangements. Therefore, SNR has to be calculated to guide the optimum LED light distribution inside the indoor environment. To investigate the SNR performance with the design parameters based on equation 2, several values of the incidence angles and the transmitted power had been selected to examine its performance and robustness as analysed and discussed in the figures below. Fig. 3 shows the performance of the SNR versus the distance between LED-PD with several proposed incidence angles of 60', 70', 80', and 90'. Obviously, the obtained SNR value decreases gradually as long as distance increases in the PD's movement through all the selected incidence angles. In this context, the maximum SNR value is achieved at the PD's first location where the distance is the shortest path between LED-PD of 3 m while the minimum value is obtained at the longest distance of 4.56 m at the corner. Based on equation 2, the SNR value is impacted directly by the received power value and is considered a major parameter that affects the obtained results influentially. Also, the incidence angles are considered one of the effective and influential parameters in the design of a successful indoor VLC system. Therefore, there is a noticeable difference in SNR values upon each of the selected incidence angles. For instance, 22.07 dB is the maximum value of SNR achieved at 90' due to the strongest LOS between the center of LED intensity and the first PD's location at the room's floor center at a distance of 3 m. On the other hand, the minimum value of SNR obtained at 60' was within 10.40-3.24 dB. In the case of the incidence angles of 70' and 80', it is obvious the performance of the SNR value shows a decrement of 14.08-6.92 dB and 17.63-10.49 dB respectively. The difference between the maximum SNR obtained values at 90' and 60' is 11.67 dB which refers to a major and clear difference and confirms the significance of the incidence angles' role in the results. In conclusion, based on the proposed configurations and the obtained results, it is observable that the SNR value reduces with the incidence angle value decrease versus the incremental in LED-PD distances directly due to being away from the LOS channel of the LED source toward one of the corners. Fig. 4 shows the SNR performance versus the distance between LED-PD in different configurations with various proposed values of the transmitted power of 8, 10, 12, and 15 watts. The performance of SNR witnesses a noticeable decrease with the increase of the distances for the varied transmitted power values. The transmitted power is considered one of the significant design parameters where it effects the received power according to equation 4 and any change in the received power reflects on the obtained SNR value based on equation 2. 
In this context, it is noticeable that the maximum SNR values are achieved at the PD's first position where the distance between LED-PD is 3 m. The minimum SNR values are obtained at the PD's tenth position at one of the corners where the distance between LED-PD is 4.56 m. Based on the obtained results, the maximum achieved value of the SNR is 22.07 dB in the PD's first location at the transmitted power of 15 watts and this is attributed to the PD being closer to receiving the strongest signal from the LED source. In this regard, the minimum value of the SNR was decreased from 16.61 to 9.45 dB at the transmitted power of 8 watts. For the proposed transmitted power of 10 and 12 watts, the SNR value is increased to record 18.55 dB and 20.13 dB respectively at the first location of the PD. On the other hand, the distance between LED-PD increases due to PD's moving far away from the LED source intensity toward the corner, SNR value decrease to record 11.39 dB and 12.97 dB respectively. By comparing the maximum obtained values of SNR at the transmitted powers of 15 and 8 watts, the difference is 5.46 dB which refers to the effect of the transmitted power as an important design parameter on the findings. Comparing the difference of the maximum obtained values of SNR of the incidence angle with 11.67 dB, and the transmitted power of 5.46 dB, it is obvious that the incidence angle has the biggest impact on the results. In conclusion, based on the obtained results, it is observable that the SNR value increases with the incremental in the transmitted power values but it decreases with the increase in LED-PD Fig. 4: Performance of the SNR versus distance between the LED-PD with different values of the transmitted power. Fig. 3: Performance of SNR versus the distance between the LED-PD with different incidence angles. distances because of moving far from the direct LOS channel with the LED intensity toward the corner. To take a deeper look, Fig. 5 illustrates the distribution of SNR inside the proposed room in a 3D model. As shown, 22.07 dB is the maximum value obtained from SNR where the PD in the first location lies directly under the LED source radiation. For the design parameters applied values, the incidence angle is 90' and the value of the transmitted power is 15 watts. ### _Path loss performance_ In many research studies were conducted on indoor VLC systems, the path loss parameter had not been taken into consideration widely as the other parameters. Whereas it is important for any proposed system to give a comprehensive evaluation to achieve an effective design by proving its robustness for the indoor environment. To investigate the path loss performance with the design parameters based on equation 10, several values of the incidence angles had been selected to examine their effect on the proposed system. Also, the Lambertian mode number (\(m\)) has a significant influence on the obtained results as it refers to the directivity of the LED and is linked to the LED's half-power angle. The values of the shot noise and other design parameters of this research paper are given in Table III and the distances are calculated according to equation 9 and given in Table II. The path loss performance was analyzed and discussed in the figures below. In Fig. 6, the path loss performance versus the distance between the LED and different locations of the PD with different incidence angles 60', 70', 80', and 90' are investigated. 
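For completeness, the free-space path loss expression of equation (10) can likewise be sketched in Python. This reproduces the formula only; the distances, Lambertian order, and angles used below are assumptions drawn from Tables II and III, and no attempt is made to match the absolute values reported in the figures, which depend on normalization choices not detailed here.

```python
import numpy as np

def lambertian_order(half_power_angle_rad):
    """Lambertian mode number m from the LED half-power angle (Eq. 8)."""
    return -np.log(2) / np.log(np.cos(half_power_angle_rad))

def los_path_loss(A, d, phi_irr, theta, m):
    """Free-space LOS path loss of Eq. 10.

    A       : PD surface area (m^2)
    d       : LED-PD distance (m)
    phi_irr : irradiance angle (rad)
    theta   : incidence angle (rad)
    m       : Lambertian mode number
    """
    return (m + 1) * A / (2 * np.pi * d ** 2) * np.cos(phi_irr) ** m * np.cos(theta)

# Example: PD at the floor centre (d = 3 m) and at the corner (d = 4.56 m), m = 1.3
for d in (3.0, 4.56):
    print(d, los_path_loss(A=2.25e-6, d=d, phi_irr=0.0, theta=0.0, m=1.3))
```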
Based on the obtained results and according to equation 10, the path loss value increases simultaneously as long as distance increases in the PD movement far from the LED. In this context, the lowest path loss value is achieved when the PD lies in the first location in the direct LOS with the LED radiation at the shortest distance of 3 m. On the other hand, the highest value of the path loss is obtained at the corner at the longest distance of 4.56 m. In consideration of the proposed incidence angles, it can be observed that the lowest value of the path loss is obtained at 90' while the highest is at 60'. This is because at 90' the PD lies directly under the LED source radiation which leads to a low path loss value, but this value increases when the incidence angle is reduced due to being far from the LED and the LOS channel. For instance, 0.008 watts is the lowest obtained value of the path loss at 90' at 3 m due to being in a strong LOS with direct LED intensity. With PD moving towards the corner, the path loss continues to increase until reaching the tenth location to record 0.019 watts as the highest path loss value at 4.56m. This is because the LOS becomes weak as a result of being away from the LED radiation. In this context, the highest value of the path loss obtained at 60' was within 0.041-0.093 watts because of the weak occurs in the signal versus the increased noise far from the direct LOS with the LED intensity. In the proposed incidence angles case of 70' and 80', it is noticeable that the performance of the path loss value shows an increment of 0.029-0.067 watts and 0.018-0.041 watts respectively. Calculating the difference between the highest path loss obtained values at 60' and 90' is 0.074 watts which indicates a clear difference. This reflects on the obtained signal and emphasizes the important role of the incidence angles in the findings. In conclusion and based on the proposed scenario and obtained results, the path loss value increases with the distance between LED-PD increases, and versus decrease in the incidence angles. the signal strength. In this context, \(m\) plays a major role in the increased values of the path loss significantly due to the light will be more concentrated in the center where it refers to the LED's half-power angle. Obviously, the highest path loss value has been obtained at the tenth location of the PD at the corner with a distance of 4.56 m and _m_=4. On the other hand, the lowest value of the path loss was recorded at the first location of the PD at the floor center which lies in the direct LOS channel with a distance of 3 m and _m_=1. With PD movement far from the LED, the path loss value increases to record the highest value at _m_=4 in the range of 0.080-0.183 watts. At _m_=1 the minimum proposed values, the light concentration at the center becomes less where the lowest recorded value of the path loss was within 0.035-0.081 watts. In the case of _m_=2 and _m_=3, it is notable that the performance of the path loss value witnesses an increase of 0.048-0.110 watts and 0.064-0.147 watts respectively. Finding the difference between the highest obtained values at _m_=4 and _m_=1 is 0.102 watts. This difference value indicates the important role of \(m\) that affect the results significantly. In conclusion, the path loss increases when the distance between LED-PD increases simultaneously. By increasing the \(m\) value, the light becomes more concentrated and this reflects to clear increase in path loss value. 
Comparing the obtained different results based on varying the incidence angle, and \(m\) as design parameters, it is obvious that \(m\) has the biggest effect on the path loss increment. ## IV Conclusion After the 5G deployment, meeting the growing demand to achieve high capacity in the communication field appears clearly to be the answer for the near future to keep pace with the occurring rapid development to cover wider communication. The several distinguished features of the VLC technology make it the optimum candidate for safe and secure indoor communication to meet this increased need. However, due to a lot of the VLC's advantages over other technologies, many studies and research had been conducted to investigate the VLC system performance in the indoor environment in the positioning and other services but some of them were not interested in fully investigating the proposed systems. This is achieved by examining several important performance parameters along with positioning accuracy to obtain a comprehensive evaluation of any proposed indoor system. This paper proposes to design and develop an effective 3D indoor system based on VLC; it consists of a pair of LED-PD inside a standard room of 5 m \(\times\) 5 m \(\times\) 3 m. This work investigates and analyzes the proposed system performance in terms of SNR and path loss. A comprehensive evaluation had been made by examining the effect of the crucial design parameters of the transmitted power, incidence angles, and Lambertian mode number on the performance parameters and is an expansion of our previous investigation work on indoor positioning and the received power. This approach deals with investigating the PD optimum location to achieve the strongest signal and fewer losses. The achieved findings by MATLAB R2019a simulation prove the effective and credible performance of the proposed indoor system.
2308.16882
Amplitude Prediction from Uplink to Downlink CSI against Receiver Distortion in FDD Systems
In frequency division duplex (FDD) massive multiple-input multiple-output (mMIMO) systems, the reciprocity mismatch caused by receiver distortion seriously degrades the amplitude prediction performance of channel state information (CSI). To tackle this issue, from the perspective of distortion suppression and reciprocity calibration, a lightweight neural network-based amplitude prediction method is proposed in this paper. Specifically, with the receiver distortion at the base station (BS), conventional methods are employed to extract the amplitude feature of uplink CSI. Then, learning along the direction of the uplink wireless propagation channel, a dedicated and lightweight distortion-learning network (Dist-LeaNet) is designed to restrain the receiver distortion and calibrate the amplitude reciprocity between the uplink and downlink CSI. Subsequently, by cascading, a single hidden layer-based amplitude-prediction network (Amp-PreNet) is developed to accomplish amplitude prediction of downlink CSI based on the strong amplitude reciprocity. Simulation results show that, considering the receiver distortion in FDD systems, the proposed scheme effectively improves the amplitude prediction accuracy of downlink CSI while reducing the transmission and processing delay.
Chaojin Qing, Zilong Wang, Qing Ye, Wenhui Liu, Linsi He
2023-08-31T17:39:18Z
http://arxiv.org/abs/2308.16882v1
# Amplitude Prediction from Uplink to Downlink CSI against Receiver Distortion in FDD Systems ###### Abstract In frequency division duplex (FDD) massive multiple-input multiple-output (mMIMO) systems, the reciprocity mismatch caused by receiver distortion seriously degrades the amplitude prediction performance of channel state information (CSI). To tackle this issue, from the perspective of distortion suppression and reciprocity calibration, a lightweight neural network-based amplitude prediction method is proposed in this paper. Specifically, with the receiver distortion at the base station (BS), conventional methods are employed to extract the amplitude feature of uplink CSI. Then, learning along the direction of the uplink wireless propagation channel, a dedicated and lightweight distortion-learning network (Dist-LeaNet) is designed to restrain the receiver distortion and calibrate the amplitude reciprocity between the uplink and downlink CSI. Subsequently, by cascading, a single hidden layer-based amplitude-prediction network (Amp-PreNet) is developed to accomplish amplitude prediction of downlink CSI based on the strong amplitude reciprocity. Simulation results show that, considering the receiver distortion in FDD systems, the proposed scheme effectively improves the amplitude prediction accuracy of downlink CSI while reducing the transmission and processing delay. keywords: CSI feedback, massive MIMO, amplitude prediction, receiver distortion, lightweight network + Footnote †: journal: Journal of the Acoustical and Ubiquitous Society of America ## 1 Introduction As one of the key techniques in the fifth generation (5G) communications, the massive multiple-input multiple-output (mMIMO) has shown great prospects in providing high spectrum and energy efficiency [1; 2; 3]. In frequency division duplex (FDD) systems, the downlink channel state information (CSI) estimated by user equipment (UE) usually needs to be fed back to the base station (BS) [4]. However, due to the large number of antennas, the CSI feedback overhead in mMIMO systems increases sharply, which results in large transmission and processing delay, energy consumption, and transmission resource occupation, etc [5]. Especially, in high-speed scenarios, the transmission and processing delay may cause the downlink CSI obtained at the BS to be outdated [6]. Therefore, it is crucial to reduce the feedback overhead and processing delay in FDD mMIMO systems. Recently, some studies have shown that there is a strong amplitude correlation (or reciprocity 1) between the uplink CSI and downlink CSI in FDD systems [7; 8]. Hence, it becomes popular to use deep learning (DL) to directly predict the downlink CSI from the uplink CSI to reduce/eliminate the feedback process [9; 10; 11]. In [9], considering the position-to-channel mapping is bijective, a sparse complex-valued neural network (SCNet) is proposed to approximate the uplink-to-downlink mapping function. In [10], according to the spatial correlation in time-varying scenarios, a convolutional neural network (CNN)-based downlink channel prediction method is investigated. Under the premise of considering the channel time invariance, an attention-based deep learning network is proposed in [11] to directly predict the downlink CSI from the uplink CSI. Footnote 1: Note that, due to the small enough difference in frequency-independent parameters between the uplink CSI and downlink CSI in FDD systems, we assume the reciprocity in this paper. 
In [7; 8; 9; 10; 11], the feedback and prediction methods are mainly based on the reciprocity between the uplink and downlink CSIs. However, this reciprocity is vulnerable in practical systems. This is because the overall channel consists of not only the wireless propagation channel, but also the radio frequency (RF) front-end, e.g., analog-to-digital converters (ADCs), filters and low-noise amplifiers (LNAs), etc [12; 13]. Although uplink and downlink wireless propagation channels may be reciprocal, the hardware imperfection (HI) of RF front-end inevitably introduces nonlinear distortion into the amplitude and phase of the transmitted and received signals [14]. This nonlinear distortion causes the amplitude mismatch and phase mismatch, thereby resulting in the reciprocity mismatch between the uplink and downlink CSIs. Due to the significant impact of reciprocity mismatch on system performance, reciprocity calibration is crucial for communication systems [12]. In both regular MIMO and mMIMO systems, the conventional reciprocity calibration method is based on the dedicated hardware circuits [15], which increases the energy consumption and hardware cost that comes from RF chains required to support a number of antennas [16]. Besides, the existing reciprocity calibration is usually investigated in time division duplex (TDD) systems. In contrast, there is limited literature addressing the issue of reciprocity calibration in FDD systems when utilizing the reciprocity. That is, in practical FDD mMIMO systems, the reciprocity between uplink and downlink channels is destroyed by the nonlinear distortion due to the difference between uplink and downlink hardware, making the reciprocity-based prediction results inaccurate, and even resulting in the prediction process impossible to achieve. Therefore, it is vital to consider the nonlinear distortion before utilizing reciprocity and the reciprocity calibration is also essential for the reciprocity-based amplitude prediction in FDD systems. To suppress the impact of distortion on reciprocity and calibrate the reciprocity, this paper proposes an amplitude prediction scheme against receiver distortion. To the best of our knowledge, the amplitude reciprocity-based CSI prediction by considering distortion has not been investigated in FDD systems. The main contributions of this paper are summarized as follows: * We propose a more practical scenario which considers the distortion before utilizing the reciprocity in FDD systems. Specifically, we take the receiver distortion at BS as an example to illustrate that reciprocity will be affected by distortion, which is a valuable reference for both UE and BS. * We design a network architecture cascading distortion learning and amplitude prediction to improve the practicality and accuracy of amplitude prediction. Specifically, learning along the direction of the uplink wireless propagation channel, a distortion-learning network (Dist-LeaNet) is designed to restrain the receiver distortion and calibrate the amplitude reciprocity between the uplink and downlink CSI. Subsequently, based on the channel amplitude reciprocity, an amplitude-prediction network (Amp-PreNet) is developed to predict the amplitude of downlink CSI directly at the BS, thus avoiding the overhead and transmission delay caused by feedback. * We construct a lightweight learning and prediction network architecture to reduce the processing delay and computational complexity for the BS receiver. 
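As a minimal illustration of the signal model in equation (1), the sketch below draws the CSCG noise and forms the undistorted received block. The random channel and unit-power pilots are placeholders for convenience, not the simulation setup used later in the paper.

```python
import numpy as np

def uplink_received_signal(g_u, x_u, sigma_u, rng=None):
    """Undistorted uplink reception at the BS (Eq. 1): Y_u = g_u x_u + N_u.

    g_u     : (N, 1) uplink angular-domain channel of user u
    x_u     : (1, N) uplink pilot/data of user u
    sigma_u : noise variance sigma_u^2 of the CSCG noise
    """
    rng = np.random.default_rng() if rng is None else rng
    N = g_u.shape[0]
    noise = np.sqrt(sigma_u / 2) * (rng.standard_normal((N, N))
                                    + 1j * rng.standard_normal((N, N)))
    return g_u @ x_u + noise

# Toy usage with a random channel and unit-modulus pilots (illustrative only)
N = 64
g_u = (np.random.randn(N, 1) + 1j * np.random.randn(N, 1)) / np.sqrt(2)
x_u = np.exp(1j * 2 * np.pi * np.random.rand(1, N))
Y_tilde = uplink_received_signal(g_u, x_u, sigma_u=0.01)
```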
Due to that the nonlinear distortion varies slowly compared with the wireless propagation channel, the features of distortion are easy to capture. Hence, Dist-LeaNet is constructed with a lightweight network architecture. With the assistance of Dist-LeaNet, Amp-PreNet is also constructed with lightweight network architecture based on the strong amplitude correlation. The rest of this paper is organized as follows. In Section 2, we introduce the system model. Then, the amplitude prediction scheme against receiver distortion is presented in Section 3 and followed by numerical results in Section 4. Finally, Section 5 concludes our work. _Notation_: Boldface upper case and lower case letters denote matrix and vector respectively. \(\mathcal{N}\left(\mu,\sigma^{2}\right)\) stands for normal distribution with mean \(\mu\) and variance \(\sigma^{2}\); \(\mathcal{U}\left(a,b\right)\) stands for uniform distribution on the interval \((a,b)\); \(|\cdot|\) denotes the operation of taking the modulus of a complex value; \(\left(\cdot\right)^{T}\) denotes transpose; \(E[\cdot]\) represents the expectation operation; \(\left\|\cdot\right\|\) is the Euclidean norm. ## 2 System Model The system model is given in Fig. 1, in which an FDD massive MIMO system that consists of a BS with \(N\) antennas and \(U\) single-antenna users in speed \(v\) is considered. At the BS, the received uplink signal from user-\(u\), denoted as \(\widetilde{\mathbf{Y}}_{u}\in\mathbb{C}^{N\times N}\), is given by \[\widetilde{\mathbf{Y}}_{u}=\mathbf{g}_{u}\mathbf{x}_{u}+\mathbf{N}_{u}, \tag{1}\] Figure 1: System model. where \(\mathbf{g}_{u}\in\mathbb{C}^{N\times 1}\) denotes the uplink channel (i.e., uplink CSI) from the user-\(u\) to the BS in the angular domain, \(\mathbf{x}_{u}\in\mathbb{C}^{1\times N}\) stands \(N\)-length uplink pilot and data of user-\(u\), and \(\mathbf{N}_{u}\in\mathbb{C}^{N\times N}\) is the circularly symmetric complex Gaussian (CSCG) noise with zero-mean and variance \(\sigma_{u}^{2}\). With the antenna diversity [17], the uplink pilot and data of each UE is received by \(N\) BS antennas to form the \(N\times N\) complex signal \(\widetilde{\mathbf{Y}}_{u}\). From [18], there exists a correlation between the uplink and downlink channels due to the shared common physical paths and similar spatial propagation characteristics. For example, the downlink CSI is constructed by utilizing frequency-independent parameters between the uplink and downlink channels in the angular domain [19]. Therefore, from [20; 21], the downlink CSI of user-\(u\) (i.e., \(\mathbf{h}_{u}\in\mathbb{C}^{N\times 1}\)) can be recovered from the uplink CSI \(\mathbf{g}_{u}\). However, the inevitable uplink distortion (e.g., caused by imperfect hardware) makes this processing difficult. By denoting the mapping function of equivalent distortion at the BS as \(f_{\text{R-dis}}\left(\cdot\right)\), the distorted signal of \(\widetilde{\mathbf{Y}}_{u}\), denoted by \(\mathbf{Y}_{u}\in\mathbb{C}^{N\times N}\), is expressed as \[\mathbf{Y}_{u}\triangleq f_{\text{R-dis}}\left(\widetilde{\mathbf{Y}}_{u} \right). \tag{2}\] Then, the uplink channel \(\mathbf{g}_{u}\) is estimated according to \(\mathbf{Y}_{u}\) and the uplink pilot in \(\mathbf{x}_{u}\). 
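As a minimal numerical illustration of Eqs. (1)-(2), the Python sketch below builds a toy received block \(\widetilde{\mathbf{Y}}_{u}\) and recovers \(\mathbf{g}_{u}\) by least squares. The CSCG channel, the unit-modulus block treated entirely as a known pilot, and the noise level are placeholder assumptions; the paper itself uses angular-domain CDL channels and a Zadoff-Chu pilot (Section 4.1).

```python
import numpy as np

def uplink_received_block(N, sigma2=0.01, rng=np.random.default_rng(0)):
    """Toy version of Eq. (1): Y~_u = g_u x_u + N_u for one user.

    The CSCG channel, unit-modulus block x_u and noise level are placeholders;
    the paper generates angular-domain CDL channels and uses a ZC pilot.
    """
    g_u = (rng.standard_normal((N, 1)) + 1j * rng.standard_normal((N, 1))) / np.sqrt(2)
    x_u = np.exp(2j * np.pi * rng.random((1, N)))        # unit-modulus pilot/data block
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
    return g_u, x_u, g_u @ x_u + noise                   # Y~_u is N x N

def ls_estimate(Y, x_u):
    """LS estimate of g_u from the (possibly distorted) received block,
    treating the whole of x_u as a known pilot."""
    return Y @ x_u.conj().T / np.sum(np.abs(x_u) ** 2)

g_u, x_u, Y = uplink_received_block(N=8)
print(np.max(np.abs(ls_estimate(Y, x_u) - g_u)))         # small residual estimation error
```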
By denoting the estimated \(\mathbf{g}_{u}\) as \(\widetilde{\mathbf{g}}_{u}\) (\(\widetilde{\mathbf{g}}_{u}\in\mathbb{C}^{N\times 1}\)), our work aims to utilize the strong amplitude correlation to predict the amplitude of downlink CSI \(\mathbf{h}_{\mathbf{u}}\) from \(\widetilde{\mathbf{g}}_{u}\)[7; 8] in this paper. First, the amplitude feature of \(\widetilde{\mathbf{g}}_{u}\) is extracted. Subsequently, we build two dedicated networks, Dist-LeaNet and Amp-PreNet, to restrain the distortion of receiver and enhance the prediction accuracy of downlink CSI amplitude, respectively. The details are described in Section 3. However, the phase information exhibits unique importance due to its frequency-dependent nature, which is directly fed back to the BS [8]. ## 3 Amplitude Prediction Scheme against Receiver Distortion To effectively utilize the reciprocity in practical scenarios, we present the proposed amplitude prediction scheme against receiver distortion in this section. In Section 3.1, the model of uplink receiver distortion is presented. With the uplink receiver distortion at the BS, we develop Dist-LeaNet and Amp-PreNet to restrain the distortion and predict the downlink CSI amplitude, respectively. Both Dist-LeaNet and Amp-PreNet are elaborated in Section 3.2. ### Uplink Receiver Distortion In the uplink communication, nonlinear distortion is inevitably encountered [22], e.g., the distortion of user's power amplifiers (PAs), the distortion of BS's LNAs and ADCs, etc [23]. We mainly take the receiver distortion of BS as an example to represent the uplink distortion, which has reference value for both UE and BS. Specifically, the nonlinear distortion varies slowly compared with the wireless propagation channel [24]. Therefore, to further represent the distortion function \(f_{\text{R-dis}}\left(\cdot\right)\), the receiver distortion at the BS is denoted as \(\mathbf{D}_{\text{R-BS}}\in\mathbb{C}^{N\times N}\), wherein its diagonal elements represent the amplitude and phase distortion of each hardware at different antennas, and the off-diagonal elements correspond to the crosstalk and mutual coupling effect between different antennas [23]. The proper hardware circuit design can ensure the nearly-zero crosstalk, and the antenna mutual coupling effect is often ignored [25]. Therefore, the receiver distortion matrix can be regarded to be diagonal, which is expressed as [26] \[\mathbf{D}_{\text{R-BS}}=\text{diag}\left(r_{1,\text{BS}},\cdots,r_{n,\text{ BS}},\cdots,r_{N,\text{BS}}\right), \tag{3}\] where \(r_{n,\text{BS}}=\left|r_{n,\text{BS}}\right|e^{j\phi_{n,\text{BS}}^{\text{r}}}\) (\(n=1,2,\cdots,N\)). According to [26], the amplitudes of the distortion obey log-normal distribution, and the phases of the distortion obey uniform distribution, i.e., \[\ln\left|r_{n,\text{BS}}\right|\sim\mathcal{N}\left(0,\delta_{\text{r},\text{ BS}}^{2}\right),\phi_{n,\text{BS}}^{\text{r}}\sim\mathcal{U}\left[-\theta_{ \text{r},\text{BS}},\theta_{\text{r},\text{BS}}\right].\] ### Dist-LeaNet and Amp-PreNet In order to restrain the receiver distortion and obtain amplitude feature of the uplink wireless propagation channel, we construct the lightweight and effective Dist-LeaNet, which is supposed to be considered when channel reciprocity is involved. Then, a recovered uplink CSI amplitude feature \(\widehat{\mathbf{g}}_{u,\text{amp}}\), is learned from Dist-LeaNet. 
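Before turning to the network design, the receiver-distortion statistics of Eq. (3) can be sampled as in the short sketch below. The per-antenna multiplicative form chosen here for \(f_{\text{R-dis}}\) is our own assumption for illustration, since the paper only fixes the statistics of \(\mathbf{D}_{\text{R-BS}}\).

```python
import numpy as np

def receiver_distortion(N, delta2=1.0, theta=np.pi, rng=np.random.default_rng(0)):
    """Eq. (3): diagonal BS receiver-distortion matrix D_R-BS.

    ln|r_n,BS| ~ N(0, delta2) and the phase is uniform on [-theta, theta];
    delta2 = 1 and theta = pi match the simulation setting of Section 4.1.
    """
    amplitude = np.exp(rng.normal(0.0, np.sqrt(delta2), N))
    phase = rng.uniform(-theta, theta, N)
    return np.diag(amplitude * np.exp(1j * phase))

def distort(Y_tilde, D):
    """One concrete choice for f_R-dis: per-antenna multiplicative distortion of the
    received block. The multiplicative form is an assumption; the paper leaves
    f_R-dis abstract and only specifies the statistics of D_R-BS."""
    return D @ Y_tilde

D = receiver_distortion(N=8)
off_diagonal = D - np.diag(np.diag(D))
print(D.shape, np.count_nonzero(off_diagonal))   # (8, 8) 0 -> strictly diagonal, as assumed
```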
Subsequently, to predict the downlink CSI amplitude feature, we design the lightweight Amp-PreNet, which utilizes the amplitude correlation of CSI in the angular domain [8]. The corresponding network design, training and deployment are as follows. #### 3.2.1 Network Design According to [27], choosing the appropriate number of layers and hidden neurons is still a challenge in the neural network (NN). That is, for a specific network design, there is currently no established theoretical guidance on the optimal number of layers or the number of neurons to be included at each layer. Typically, complex hyper-parameter tuning is necessary. Based on plenty of experimental results, we design the lightweight Dist-LeaNet and Amp-PreNet, both of which are single hidden-layer NN. Specifically, considering the trade-off between performance and complexity, we train the network with different number of layers and neurons. After verifying the performance of the trained network, we select a suitable lightweight network architecture to reduce the computational complexity while improve the prediction performance compared with [7]. The network architectures of Dist-LeaNet and Amp-PreNet are summarized in Table 1, and the detailed descriptions are given as follows. In both Dist-LeaNet and Amp-PreNet, the neurons of the input layer, hidden layer, and output layer are \(N\), \(2N\), and \(N\), respectively. In Dist-LeaNet, a batch normalization (BN) is employed for the input layer, which normalizes the network input as zero mean and unit variance. For the hidden layer and output layer of Dist-LeaNet, the linear activation is employed. Then, the Dist-LeaNet is followed by Amp-PreNet with the cascaded mode, i.e., the output of Dist-LeaNet is the input of Amp-PreNet. Without BN, the leaky rectified unit (LReLU) [28] and linear activation are adopted for the hidden layer and output layer of Amp-PreNet, respectively. With the estimated uplink CSI \(\widetilde{\mathbf{g}}_{u}\), we extract its amplitude feature (denoted as \(\widetilde{\mathbf{g}}_{u,\mathrm{amp}}\)) according to \[\widetilde{\mathbf{g}}_{u,\mathrm{amp}}=[\left|\widetilde{g}_{u,1}\right|, \left|\widetilde{g}_{u,2}\right|,\ldots,\left|\widetilde{g}_{u,N}\right|]^{T}. \tag{4}\] Due to the BS's nonlinear distortion (e.g., the LNA and ADC in BS hardware), \(\widetilde{\mathbf{g}}_{u,\mathrm{amp}}\) cannot use to map the amplitude of downlink CSI of user-\(u\) (i.e., the amplitude of \(\mathbf{h}_{u}\)). This results in that the methods of CSI prediction and recovery in [7, 8, 9, 10, 11, 20], cannot be applied directly. Thus, we develop \begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Layer} & \multicolumn{2}{c|}{Input} & \multicolumn{2}{c|}{Hidden} & \multicolumn{2}{c}{Output} \\ \cline{2-7} & Dist-LeaNet & Amp-PreNet & Dist-LeaNet & Amp-PreNet & Dist-LeaNet & Amp-PreNet \\ \hline Batch normalization & \(\surd\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ \hline Neuron number & \(N\) & \(N\) & \(2N\) & \(2N\) & \(N\) & \(N\) \\ \hline Activation function & None & None & Linear & LReLU & Linear & Linear \\ \hline \hline \end{tabular} \end{table} Table 1: Architecture of Dist-LeaNet and Amp-PreNet. 
Dist-LeaNet to learn along the direction of the amplitude of \(\mathbf{g}_{u}\) and restrain the nonlinear distortion at the BS, which is expressed as \[\widehat{\mathbf{g}}_{u,\text{amp}}=f_{\text{Dist-Lea}}\left(\widetilde{\mathbf{ g}}_{u,\text{amp}},\boldsymbol{\Theta}_{\text{Dist-Lea}}\right), \tag{5}\] where \(f_{\text{Dist-Lea}}\left(\cdot\right)\) and \(\boldsymbol{\Theta}_{\text{Dist-Lea}}\) denote the mapping function of distortion suppression and the training parameters of Dist-LeaNet, respectively. On the basis of the obtained \(\widehat{\mathbf{g}}_{u,\text{amp}}\), the amplitude of downlink CSI of user-\(u\) (i.e., the amplitude of \(\mathbf{h}_{u}\)) can be mapped. Thus, based on the strong amplitude correlation, we construct Amp-PreNet to predict the amplitude feature of downlink CSI \(\widehat{\mathbf{h}}_{u,\text{amp}}\), which can be expressed as \[\widehat{\mathbf{h}}_{u,\text{amp}}=f_{\text{Amp-Pre}}\left(\widehat{\mathbf{ g}}_{u,\text{amp}},\boldsymbol{\Theta}_{\text{Amp-Pre}}\right), \tag{6}\] where \(f_{\text{Amp-Pre}}\left(\cdot\right)\) and \(\boldsymbol{\Theta}_{\text{Amp-Pre}}\) denote the mapping function of amplitude prediction and the training parameters of Amp-PreNet, respectively. #### 3.2.2 Training and Deployment The training sets are acquired by simulation, and a significant amount of data samples are collected to train Dist-LeaNet and Amp-PreNet. Specifically, these data samples are generated as follows. Amplitude correlated channels are generated by MATLAB 5G Toolbox, which is subject to specifications of the Clustered-Delay-Line (CDL) channel model in 3GPP TR 38.901 [29]. Similar to the setting in [18], the frequency-independent parameters (e.g., the angle of departure (AoD)) are fixed, while varying the complex gain of each path between the uplink and downlink channels. The AoD of downlink CSI is approximately the same as the angle of arrival (AoA) of the uplink CSI in a short time slot [18], showing a relatively strong amplitude correlation in the angle domain. The amplitude attenuation of clusters also reflects the amplitude reciprocity [8], due to the similar geographical environment in a short time slot. Thus, \(\mathbf{g}_{u}\) and \(\mathbf{h}_{u}\) are obtained by transforming the generated uplink and downlink channels to the angular domain, respectively [18]. To train Dist-LeaNet and Amp-PreNet, we use the amplitude of \(\mathbf{g}_{u}\) and \(\mathbf{h}_{u}\) as network labels, respectively, with the joint training method. The optimization goal of Dist-LeaNet is to minimize the mean squared error (MSE) between \(\widehat{\mathbf{g}}_{u,\text{amp}}\) and \(\mathbf{g}_{u,\text{amp}}\), which is derived as \[\min_{\boldsymbol{\Theta}_{\text{Dist-Lea}}}E\left[\left\|f_{\text{Dist-Lea}} \left(\widetilde{\mathbf{g}}_{u,\text{amp}},\boldsymbol{\Theta}_{\text{Dist- Lea}}\right)-\mathbf{g}_{u,\text{amp}}\right\|^{2}\right]. \tag{7}\] Similarly, the Amp-PreNet minimizes the MSE of the downlink CSI amplitude, i.e., \(E\left[\left\|\widehat{\mathbf{h}}_{u,\mathrm{amp}}-\mathbf{h}_{u,\mathrm{amp}} \right\|^{2}\right]\), which is further expressed by \[\min_{\mathbf{\Theta}_{\mathrm{Amp-Pre}}}E\left[\left\|f_{\mathrm{Amp-Pre}} \left(\widehat{\mathbf{g}}_{u,\mathrm{amp}},\mathbf{\Theta}_{\mathrm{Amp-Pre}} \right)-\mathbf{h}_{u,\mathrm{amp}}\right\|^{2}\right]. \tag{8}\] We perform the joint training once for both Dist-LeaNet and Amp-PreNet, and save the trained network parameters for testing. 
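A minimal sketch of the two networks of Table 1 and of the joint training of Eqs. (7)-(8) is given below. PyTorch, the Adam optimizer, the learning rate, the random placeholder batches, and the summation of the two MSE terms into a single objective are all our assumptions; the paper does not prescribe a framework or these hyper-parameters.

```python
import torch
import torch.nn as nn

class DistLeaNet(nn.Module):
    """Table 1: BN on the N-neuron input, a 2N-neuron hidden layer and an
    N-neuron output, both with linear activation."""
    def __init__(self, N):
        super().__init__()
        self.bn = nn.BatchNorm1d(N)
        self.hidden = nn.Linear(N, 2 * N)
        self.out = nn.Linear(2 * N, N)

    def forward(self, g_tilde_amp):                      # Eq. (5)
        return self.out(self.hidden(self.bn(g_tilde_amp)))

class AmpPreNet(nn.Module):
    """Table 1: no BN, a 2N-neuron hidden layer with LReLU and a linear N-neuron output."""
    def __init__(self, N):
        super().__init__()
        self.hidden = nn.Linear(N, 2 * N)
        self.act = nn.LeakyReLU()
        self.out = nn.Linear(2 * N, N)

    def forward(self, g_hat_amp):                        # Eq. (6)
        return self.out(self.act(self.hidden(g_hat_amp)))

def joint_step(dist_net, amp_net, optimizer, g_tilde_amp, g_amp, h_amp):
    """One joint training step: the two MSE objectives of Eqs. (7) and (8) are summed."""
    g_hat = dist_net(g_tilde_amp)
    h_hat = amp_net(g_hat)                               # cascaded mode
    loss = nn.functional.mse_loss(g_hat, g_amp) + nn.functional.mse_loss(h_hat, h_amp)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

N = 128
dist_net, amp_net = DistLeaNet(N), AmpPreNet(N)
optimizer = torch.optim.Adam(list(dist_net.parameters()) + list(amp_net.parameters()), lr=1e-3)
g_tilde, g, h = (torch.rand(32, N) for _ in range(3))    # placeholder amplitude batches
print(joint_step(dist_net, amp_net, optimizer, g_tilde, g, h))
```

At deployment only the two forward passes of Eqs. (5)-(6) run at the BS, which is what keeps the online complexity at the single-hidden-layer level analyzed in Section 4.3.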
By using the Dist-LeaNet, the high precision uplink CSI amplitude \(\widehat{\mathbf{g}}_{u,\mathrm{amp}}\) is obtained. Then, \(\widehat{\mathbf{g}}_{u,\mathrm{amp}}\) is used to predict the downlink CSI amplitude \(\widehat{\mathbf{h}}_{u,\mathrm{amp}}\) in the Amp-PreNet. We consider the distortion before utilizing the channel reciprocity to accomplish amplitude prediction of downlink CSI. The proposed scheme demonstrates a better prediction accuracy and reduces the impact of time delay effectively in a practical scenario. ## 4 Experiment results In this section, we provide numerical results of the proposed scheme. Definitions and basic parameters involved in simulation are first given in Section 4.1. Subsequently, to verify the effectiveness of the proposed scheme, the normalized mean squared error (NMSE) of the predicted downlink CSI amplitude is given in Section 4.2. Finally, computational complexity and online running time comparison analysis are shown in Section 4.3. ### Parameters Setting Definitions involved in simulations are given as follows. The equivalent signal-to-noise ratio (SNR) and NMSE are defined as similar to [7]. During the experiments, \(v=300\) km/h, \(\delta_{\mathrm{r,BS}}^{2}=1\), and \(\theta_{\mathrm{r,BS}}=\pi\) are considered, respectively. The probability density functions (PDF) of the amplitude and phase of receiver distortion are shown in Fig. 2. Following the setting in [8], we set the uplink frequency to \(5.1\) GHz and the downlink frequency to \(5.3\) GHz. Thus, according to \(f_{m}=vf_{c}/c\)[17] with \(f_{m}\), \(f_{c}\), and \(c\) being the maximum doppler shift, the carrier frequency, and the speed of light, respectively, the maximum doppler shift for the uplink CSI and downlink CSI are \(1418\) Hz and \(1473\) Hz, respectively. The complex-valued Zadoff-Chu (ZC) sequence [30] is employed as the pilot for uplink channel estimation with least squares (LS) criterion in the simulation. For Dist-LeaNet and Amp-PreNet, their training and testing data-sets are generated according to (4). The sample numbers of training set, validation set, and testing set are 30,000, 5,000, and 15,000, respectively. In this paper, the NMSE performance of the proposed scheme is compared with those of [7] and [8]. In addition, to verify the effectiveness of Dist-LeaNet, the proposed scheme without Dist-LeaNet, denoted as "Proposed (without Dist-LeaNet)", is also simulated. It is worth noting that inspired by signal detection [31; 32], the detection-based amplitude prediction is an interesting topic. However, this is beyond the scope of this paper and prompts us to conduct exploratory research in the future. ### NMSE Performance To validate the effectiveness of amplitude prediction, NMSE curves of the recovered downlink CSI amplitude are plotted in Fig. 3, where \(N=128\) is considered. From Fig. 3, it can be observed that the NMSE of "Proposed" is smaller than those of "Ref [7]", "Ref [8]", and "Proposed (without Dist-LeaNet)", showing the effectiveness of the proposed scheme in recovering the downlink CSI amplitude. Specifically, the NMSE of "Proposed" is smaller than that of "Proposed (without Dist-LeaNet)", which confirms that Dist-LeaNet plays an essential role for the proposed scheme in distortion suppression and reciprocity calibration. In addition, for each given SNR, the NMSE of "Proposed (without Dist-LeaNet)" is lower than those of "Ref [7]" and "Ref [8]", which indicates the effectiveness of Amp-PreNet in CSI prediction. 
Overall, the proposed scheme proves to be advantageous in improving the NMSE performance in various SNR scenarios. To verify the NMSE performance against the impact of \(N\), NMSE curves of "Ref [7]", "Ref [8]", "Proposed (without Dist-LeaNet)", and "Proposed" are plotted in Fig. 4, where \(N=64\), \(N=128\), and \(N=256\) are considered. For each given \(N\), the NMSE of downlink CSI amplitude of "Proposed" is smaller than those of "Ref [7]", "Ref [8]", and "Proposed (without Dist-LeaNet)". As the increase of \(N\) (i.e., the number of antennas increases), the NMSE increases due to the more nonlinear distortion introduced on antennas at the same SNR. Thus, compared with "Ref [7]", "Ref [8]", and "Proposed (without Dist-LeaNet)", the proposed Dist-LeaNet restrains the distortion and Amp-PreNet predicts the CSI amplitude effectively against varying \(N\). ### Computational Complexity and Online Running Time In this subsection, the computational complexity and online running time of "Ref [7]", "Ref [8]", and "Proposed" are presented and analyzed as follows. Figure 3: NMSE of downlink CSI amplitude versus SNR, where \(N=128\). **Fig. 4.** NMSE of downlink CSI amplitude versus SNR under different \(N\). **Table 2** Analysis of Computational Complexity. \begin{tabular}{l|c|c|c|c} \hline Method & Complexity & Case1 (\(N=64\)) & Case2 (\(N=128\)) & Case3 (\(N=256\)) \\ \hline Ref [7] & \(20N^{2}-6N\) & 81,536 & 326,912 & 1,309,180 \\ \hline Ref [8] & \(N^{2}/2+26841N/8\) & 216,776 & 437,648 & 891,680 \\ \hline Proposed & \(16N^{2}-6N\) & 65,152 & 261,376 & 1,047,040 \\ \hline \end{tabular} #### 4.3.1 Computational Complexity Analysis The number of floating-point operations (FLOPs) is considered as the metric of computational complexity, which can be used to describe the NN complexity [18]. According to [18], the FLOPs of "Ref [7]", "Ref [8]" and "Proposed" are \(20N^{2}-6N\), \(N^{2}/2+26841N/8\), and \(16N^{2}-6N\), respectively. The comparison and case details of computational complexity are given in Table 2 and Fig. 5 (a). For \(N<217\), the proposed scheme demonstrates the lowest computational complexity. When \(N\geqslant 217\), the FLOPs number of "Ref [8]" is lower than those of "Proposed" and "Ref [7]". Nevertheless, the proposed scheme improves the prediction performance of downlink CSI amplitude greatly at the expense of a tolerable computational complexity. #### 4.3.2 Online Running Time The comparison of online running time is given in Fig. 5 (b), where \(N=64\), \(N=128\), and \(N=256\) are considered. For a fair comparison, \(10^{5}\) experiments of online running are conducted for "Ref [7]", "Ref [8]" and "Proposed" on the same computer. From Fig. 5 (b), for each given \(N\), the online running time of "Proposed" is shorter than those of "Ref [7]" and "Ref [8]". This reflects that the proposed scheme reduces the transmission and processing delay effectively, due to the application of prediction method and lightweight network architecture. Additionally, to demonstrate that the proposed scheme can prevent the downlink CSI outdated, the online running time is compared with the coherence time of downlink CSI. Considering that the correlation coefficient of the channel at any two time points within the coherent time is not less than 0.5 [33], the maximum Doppler frequency shift \(f_{m}\) is used to measure the coherence time of the channel. According to \(M=9/(16\pi f_{m})\)[17], the coherence time \(M\) of the downlink CSI is 0.122 ms. 
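The complexity and delay-budget figures of Section 4.3 can be reproduced directly from the quoted formulas, as the short sketch below shows; the speed-of-light constant and the unit conversions are the only ingredients added here.

```python
import numpy as np

def flops_ref7(N): return 20 * N**2 - 6 * N
def flops_ref8(N): return N**2 / 2 + 26841 * N / 8
def flops_prop(N): return 16 * N**2 - 6 * N

for N in (64, 128, 256):
    print(N, flops_ref7(N), flops_ref8(N), flops_prop(N))
# The proposed scheme has the fewest FLOPs for N < 217; Ref [8] becomes cheaper for larger N.

# Delay budget of Section 4.3.2: Doppler shift f_m = v*f_c/c and coherence time M = 9/(16*pi*f_m).
v = 300 / 3.6                          # 300 km/h in m/s
c = 299_792_458.0                      # speed of light
f_m_downlink = v * 5.3e9 / c           # ~1473 Hz at the 5.3 GHz downlink carrier
M = 9 / (16 * np.pi * f_m_downlink)    # ~0.122 ms coherence time
print(round(f_m_downlink), round(M * 1e3, 3))
# The measured online running times (0.0069-0.0196 ms) stay well below this coherence time.
```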
However, when \(N=64\), \(N=128\), and \(N=256\) are considered, the online running times for each experiment of the proposed scheme are 0.0069 ms, 0.0099 ms, and 0.0196 ms, respectively, which are smaller than the coherence time of the downlink CSI. This indicates that, although a certain delay remains, the proposed scheme can effectively prevent the predicted downlink CSI amplitude from becoming outdated. Figure 5: (a) The FLOPs number versus \(N\). (b) Online running time comparison of “Ref [7]”, “Ref [8]” and “Proposed” for \(10^{5}\) experiments. ## 5 Conclusion This paper presents an amplitude prediction scheme from uplink to downlink CSI against receiver distortion in FDD systems. By using a lightweight and dedicated Dist-LeaNet, the amplitude feature of the uplink wireless propagation channel is obtained after distortion suppression and reciprocity calibration. Then, with the recovered uplink CSI amplitude, the downlink CSI amplitude is predicted by a lightweight Amp-PreNet. Experiments show that, compared with methods that do not consider the distortion in communication systems, the proposed scheme is more practical and achieves better prediction accuracy in terms of the NMSE of the downlink CSI amplitude. This idea of considering and handling distortion is of reference value for both the UE and the BS. In our future work, we will conduct exploratory research on detection-based amplitude prediction methods. ## Acknowledgements This work is supported in part by the Sichuan Science and Technology Program (Grant No. 2023YFG0316), the Industry-University Research Innovation Fund of China University (Grant No. 2021ITA10016), the Key Scientific Research Fund of Xihua University (Grant No. Z1320929), and the Special Funds of Industry Development of Sichuan Province (Grant No. zyf-2018-056).
2309.03618
Two states for the $Ξ(1820)$ resonance
We recall that the chiral unitary approach for the interaction of pseudoscalar mesons with the baryons of the decuplet predicts two states for the $\Xi(1820)$ resonance, one with a narrow width and the other one with a large width. We contrast this fact with the recent BESIII measurement of the $K^- \Lambda$ mass distribution in the $\psi(3686)$ decay to $K^- \Lambda \bar\Xi^+ $, which demands a width much larger than the average of the PDG, and show how the consideration of the two $\Xi(1820)$ states provides a natural explanation to this apparent contradiction.
R. Molina, Wei-Hong Liang, Chu-Wen Xiao, Zhi-Feng Sun, E. Oset
2023-09-07T10:20:25Z
http://arxiv.org/abs/2309.03618v1
# Two states for the \(\Xi(1820)\) resonance ###### Abstract We recall that the chiral unitary approach for the interaction of pseudoscalar mesons with the baryons of the decuplet predicts two states for the \(\Xi(1820)\) resonance, one with a narrow width and the other one with a large width. We contrast this fact with the recent BESIII measurement of the \(K^{-}\Lambda\) mass distribution in the \(\psi(3686)\) decay to \(K^{-}\Lambda\bar{\Xi}^{+}\), which demands a width much larger than the average of the PDG, and show how the consideration of the two \(\Xi(1820)\) states provides a natural explanation to this apparent contradiction. Gradual progress in the description of the hadronic spectrum leads to the consequence that some resonances apparently well established actually correspond to two states. This is the case of the \(\Lambda(1405)\), for which two states around 1385 MeV and 1420 MeV were predicted in Refs. [1; 2]. After some time, these resonances found their place in the PDG [3]1. This is also the case of the \(K_{1}(1270)\) axial vector resonance, where also two states were found in Ref. [4], for which experimental evidence was found in Ref. [5], and the saga continues with the two states also predicted for the \(D^{*}(2400)\)[6]2. Footnote 1: We should clarify that we call two states, indicating that we do not talk about different poles in different Riemann sheets, but two distinct poles in the same Riemann sheet. Footnote 2: Two states with these quantum numbers are found in Ref. [7] but with far less precision than in Ref. [6]. Another case of two states, this time suggested from the experimental side, is the splitting of the \(Y(4260)\) resonance found by the BaBar collaboration [8; 9] into two states \(Y(4230)\) and \(Y(4260)\) by the BESIII collaboration [10]3. A recent paper [14] shows that the Weinberg-Tomozawa interaction of the leading order chiral potentials produces in some cases a double pole structure. Footnote 3: We refer to splitting in states with the same quantum numbers. We do not consider in this block states close by with different quantum numbers, like the splitting of the \(P_{c}(4450)\) of Ref. [11; 12] into the \(P_{c}(4440)\) and \(P_{c}(4457)\) with \(J^{P}=\frac{1}{2}^{-},\frac{3}{2}^{-}\)[13]. The chiral unitary approach, using information obtained from chiral Lagrangians which is unitarized in coupled channels, has proved rather useful to study the meson-meson and meson-baryon interaction and to show that many resonances actually emerge from the interaction of hadrons. Such is the case of the light scalar mesons [15; 16; 17; 18], the light axial vector resonances [4; 19], the low lying \(J^{P}=\frac{1}{2}^{-}\) baryonic resonances [1; 20; 21; 22; 23], as well as many \(\frac{3}{2}^{-}\) baryon resonances [24; 25]. In Refs. [20; 21], the interaction of the octet of pseudoscalar mesons with the octet of baryons was studied and the two \(\Lambda(1405)\) states emerged. In Ref. [25], the study was extended to the interaction of the octet of pseudoscalar mesons with the decuplet of baryons and many resonances were generated that could be associated to well known \(\frac{3}{2}^{-}\) existing states. Other resonances were predicted which were not found experimentally at the time the work was completed. One of them was a resonance, coming from the \(\bar{K}\Xi(1530)\) and \(\eta\Omega\) interaction, which was later identified with the recently found \(\Omega(2012)\) state by the Belle collaboration [26]. 
With ups and downs in the discussion of the nature of this resonance (see Ref. [27] for the latest update), the Belle collaboration concluded that the experimental information supported the molecular nature of this resonance [28]. Ref. [25] had another prediction that could not be contrasted with experiment at the time the work was done. Indeed, two resonances, one narrow and one with a large width, were predicted in the vicinity of \(\Xi(1820)\). The purpose of the present work is to show that support for this idea is now provided by the recent BESIII investigation of this resonance. Actually, in Ref. [29] the \(\psi(3686)\) decay to \(K^{-}\Lambda\bar{\Xi}^{+}\) is investigated and in the \(K^{-}\Lambda\) invariant mass two neat peaks, one for the \(\Xi(1690)\) and another one for the \(\Xi(1820)\), are observed. The surprising thing is that the width of the \(\Xi(1820)\) is reported as \[\Gamma_{\Xi(1820)}=73^{+6}_{-5}\pm 9\;\mathrm{MeV}. \tag{1}\] This result is much bigger, and incompatible with that of the PDG [3] of \[\Gamma^{\mathrm{PDG}}_{\Xi(1820)}=24^{+15}_{-10}\;\mathrm{MeV}\;(\mathrm{PDG \;estimate});\;\;\;\;\;24\pm 5\;\mathrm{MeV}\;(\mathrm{PDG\;average}). \tag{2}\] A solution to this problem is obtained with the acceptance of two states, as we show below. In Ref. [25], four coupled channels were considered, \(\Sigma^{*}\bar{K}[1878],\Xi^{*}\pi[1669],\Xi^{*}\eta[2078]\) and \(\Omega K[2165]\), where the threshold masses are written in brackets in units of \(\mathrm{MeV}\). As one can see, only the \(\Xi^{*}\pi\) channel is open for decaying at 1820 MeV and the width of a state depends on the coupling to this channel. The transition potential obtained from the chiral Lagrangians is given by \[V_{ij}=-\frac{1}{4f^{2}}C_{ij}(k^{0}+k^{\prime\,0}), \tag{3}\] where \(k^{0},k^{\prime\,0}\) are the energies of the initial and final mesons, and the coefficients \(C_{ij}\) are given in Table 1. The above potential is the input of the Bethe-Salpeter (BS) equation to obtain the scattering amplitude, \[T=\left[1-VG\right]^{-1}V. \tag{4}\] \begin{table} \begin{tabular}{c|c c c c} \hline \hline \(C_{ij}\) & \(\Sigma^{*}\bar{K}\) & \(\Xi^{*}\pi\) & \(\Xi^{*}\eta\) & \(\Omega K\) \\ \hline \(\Sigma^{*}\bar{K}\) & 2 & 1 & 3 & 0 \\ \(\Xi^{*}\pi\) & & 2 & 0 & \(\frac{3}{\sqrt{2}}\) \\ \(\Xi^{*}\eta\) & & & 0 & \(\frac{3}{\sqrt{2}}\) \\ \(\Omega K\) & & & & 3 \\ \hline \hline \end{tabular} \end{table} Table 1: \(C_{ij}\) coefficients of Eq. (3). In this way, two poles were obtained in Ref. [25], one narrow and the other one wide, in the vicinity of \(\Xi(1820)\) resonance. We can see that the channel \(K^{-}\Lambda\), where the state is observed [29], is not any of the coupled channels of Table 1. However, there is a way to make a transition to this state by means of the mechanism depicted in Fig. 1. This mechanism, considering the negative parity of \(\bar{\Xi}^{+}\), requires a \(P\)-wave in the \(\psi(3686)\to\bar{\Xi}^{+}PB^{*}\) vertex and a \(D\)-wave in the \(P^{\prime}B^{*\,\prime}\to K^{-}\Lambda\) vertex 4. The amplitude of Fig. 1 is then of the type Footnote 4: Here, \(P\left(P^{\prime}\right)\) and \(B^{*}\left(B^{*\,\prime}\right)\) stand for pseudoscalar meson and decuplet baryon, respectively. 
\[t = \sum_{j}A_{j}\,\vec{\epsilon}_{\psi}\cdot\vec{p}_{\bar{\Xi}}\,G_{ j}(PB^{*})\,T_{ji}\,C_{i}\,\tilde{k}^{2} \tag{5}\] \[\sim \sum_{ij}D_{ij}\,\tilde{k}^{2}\,\vec{\epsilon}_{\psi}\cdot\vec{ p}_{\bar{\Xi}}\,T_{ji},\] where \(\tilde{k}\) is the momentum of the \(K^{-}\) in the \(K^{-}\Lambda\) rest frame, \(G_{j}\) are the loop functions of the intermediate \(PB^{*}\) states, regularized by means of a cutoff \(q_{\rm max}\)[21], and \(A_{j},C_{i},D_{ij}\) are unknown coefficients that depend on the dynamics in Fig. 1. But the relevant thing here is that Eq. (5) involves a linear combination of the \(T_{ij}\) amplitudes, accommodating the contribution of the two resonances. Clearly, the effect of both resonances should become visible in the experiment. The invariant mass distribution can be written as, \[\frac{{\rm d}\Gamma}{{\rm d}M_{\rm inv}(K^{-}\Lambda)}=\frac{1}{(2\pi)^{3}}\ \frac{1}{4M_{\psi}^{2}}\ p_{\bar{\Xi}}\,\tilde{k}\,\sum\sum|t|^{2}, \tag{6}\] where \(p_{\Xi}\) is the momentum of the \(\bar{\Xi}\) in the \(\psi(3686)\) rest frame, and \(\tilde{k}\) is the momentum of the kaon in the c.m. reference system of the \(K^{-}\Lambda\), \(\tilde{k}=\lambda^{1/2}(M_{\rm inv}^{2},m_{K}^{2},m_{\Lambda}^{2})/2M_{\rm inv}\). We obtain \[\frac{{\rm d}\Gamma}{{\rm d}M_{\rm inv}(K^{-}\Lambda)}=W\,p_{\Xi}^{3}\ \tilde{k}^{5}\ \sum_{ij}\left|D_{ij}\,T_{ji}\right|^{2}, \tag{7}\] with \(W\) an arbitrary weight. We have redone here the calculations of Ref. [25] and corroborated the results obtained there. We have checked that the results are stable by varying the parameters (\(f\) and \(q_{\rm max}\)), obtaining two poles, one with a small width and the other one broad. The best compromise with the experimental data is obtained by slightly changing the \(f\) parameter in Eq. (3) to \(1.28f_{\pi}\), and \(q_{\rm max}=830\) MeV. The results are shown in Table 2, together with the couplings of the states to the different channels, extracted from the behaviour at the pole, where the amplitude behaves like \(T_{ij}\simeq g_{i}g_{j}/(\sqrt{s}-M_{R})\). It is now clear why the two states have such a different width, since the only decay channel is \(\pi\Xi^{*}\) and the width goes as the square of the coupling to that channel, which is larger for the second state. As already mentioned, the coefficients \(D_{ij}\) are unknown. However, by looking at the strength of the different \(T_{ij}\) matrices, we find that the \(\eta\Xi^{*}\) channel has a large diagonal \(T_{33}\) amplitude which shows evidence of the broad resonance (this is in agreement with Fig. 7 of Ref. [25]). We take then this amplitude characterizing the sum \(\sum_{ij}D_{ij}T_{ji}\). Actually, we notice that the relevant \(T_{ij}\) matrix elements have all a similar shape. Once this is done, we find a \(K^{-}\Lambda\) mass distribution as shown in Fig. 2. 
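To make the algebra of Eqs. (3)-(4) explicit, the sketch below assembles \(V_{ij}\) from the \(C_{ij}\) of Table 1 and solves \(T=[1-VG]^{-1}V\). The meson energies and the diagonal loop functions \(G\) are passed in as placeholders, since the physical calculation requires the cutoff-regularized loops (\(q_{\rm max}=830\) MeV) and the channel kinematics at each \(\sqrt{s}\); taking \(f=1.28f_{\pi}\) with \(f_{\pi}\approx 93\) MeV is likewise an assumption of the sketch.

```python
import numpy as np

# Channel order: Sigma* Kbar, Xi* pi, Xi* eta, Omega K (thresholds 1878, 1669, 2078, 2165 MeV).
C = np.array([[2, 1, 3, 0],
              [1, 2, 0, 3 / np.sqrt(2)],
              [3, 0, 0, 3 / np.sqrt(2)],
              [0, 3 / np.sqrt(2), 3 / np.sqrt(2), 3]])

def coupled_channel_T(meson_energies, G, f=1.28 * 93.0):
    """T = [1 - V G]^(-1) V of Eq. (4), with V_ij = -C_ij (k0_i + k0_j) / (4 f^2) of Eq. (3).

    The meson energies k0_i and the diagonal loop functions G are placeholders:
    the actual calculation evaluates G with a cutoff q_max = 830 MeV and the
    channel kinematics at each sqrt(s). f_pi = 93 MeV is assumed here.
    """
    k0 = np.asarray(meson_energies, dtype=complex)
    V = -C * (k0[:, None] + k0[None, :]) / (4 * f**2)
    return np.linalg.solve(np.eye(len(k0)) - V @ np.diag(G), V)

# Purely illustrative numbers to exercise the algebra (not the physical loop functions):
T = coupled_channel_T(meson_energies=[496.0, 290.0, 548.0, 495.0], G=[-0.02] * 4)
print(T.shape)                                 # (4, 4) coupled-channel amplitude
```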
We have added a background that follows \begin{table} \begin{tabular}{c|c|c|c} \hline \hline \multicolumn{1}{c|}{Poles} & \(|g_{i}|\) & \(g_{i}\) & channels \\ \hline \(1824-31i\) & \(3.22\) & \(3.22-0.096i\) & \(\bar{K}\Sigma^{*}\) \\ & \(1.71\) & \(1.55+0.73i\) & \(\pi\Xi^{*}\) \\ & \(2.61\) & \(2.58-0.38i\) & \(\eta\Xi^{*}\) \\ & \(1.62\) & \(1.47+0.67i\) & \(K\Omega\) \\ \hline \(1875-130i\) & \(2.13\) & \(0.29+2.11i\) & \(\bar{K}\Sigma^{*}\) \\ & \(3.04\) & \(-2.07+2.23i\) & \(\pi\Xi^{*}\) \\ & \(2.20\) & \(1.11+1.90i\) & \(\eta\Xi^{*}\) \\ & \(3.03\) & \(-1.77+2.45i\) & \(K\Omega\) \\ \hline \hline \end{tabular} \end{table} Table 2: Pole positions and couplings for \(q_{\rm max}=830\) MeV. All quantities are given in units of MeV. the phase space, \[C\ p_{\Xi}\ \tilde{k}, \tag{8}\] and adjusted \(C\) to the data. The strength is adjusted to the experimental data. As we can see, the results obtained with the two resonances of Table 2, together with the background, provide a fair description of the data. We perform a second test by performing a fit to the data, very similarly to what is usually done in experimental analyses. Thus, we take a coherent sum of amplitudes \[\frac{A}{M_{\rm inv}-M_{R_{1}}+i\frac{\Gamma_{1}}{2}}+\frac{B}{M_{\rm inv}-M_{ R_{2}}+i\frac{\Gamma_{2}}{2}}, \tag{9}\] with \(R_{1},\ R_{2}\) representing approximately the two resonances of Table 2, with \(M_{R_{1}}=1822\) MeV, \(\Gamma_{1}=45\) MeV, \(M_{R_{2}}=1870\) MeV, \(\Gamma_{2}=200\) MeV. We adjust \(A\) and \(B\) and the background of Eq. (8). The coefficients \(A\) and \(B\) are found to have about the same strength. We obtain a good description of the data, shown in Fig. 3, and most of the strength at higher invariant masses is provided by the contribution of the second resonance. We can see in Fig. 3 that the contribution of the wider resonance plays an important role filling up the strength in the higher part of the mass spectrum. Note that the background needed in Figs. 2 and 3 is practically the same. This means that in Fig. 2 the upper part of the spectrum comes from the \(T_{33}\) amplitude, which contains information of the two resonances, with the wide one responsible for the strength in this region. Figure 2: Results of Eq. (7), in arbitrary units, with \(\sum_{ij}D_{ij}T_{ji}\) substituted by \(T_{33}\), with the experimental data taken from BESIII [29] and the background given by Eq. (8). Although this comment is unrelated to the discussion of this work, we would like to mention that around \(M_{K^{-}\Lambda}=2100\) MeV there seems to be a peak. We should note that in Ref. [25] and here, we do find a peak around 2100 MeV better seen in the \(K\Omega\) diagonal \(T_{44}\) matrix element. In concluding remarks we stress the fact that the successful chiral unitary approach for meson-baryon interaction, applied to the interaction of pseudoscalar mesons with the baryon-decuplet, gives rise to two states around the \(\Xi(1820)\), one of them narrow and the other one wide. This feature remains if reasonable changes are done in the strength of the interaction or the regulator of the loop functions, and is independent on whether one uses dimensional regularization [25] or the cutoff method as done here. 
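A compact sketch of this second test is given below: the coherent two-pole amplitude of Eq. (9) is placed on top of the phase-space background of Eq. (8), with the resonance parameters quoted above and rounded PDG masses. How the squared amplitude is folded with the \(p_{\Xi}\,\tilde{k}\) factors is our simple choice, since the text leaves this detail implicit, and the strengths \(A\), \(B\), and the background constant are free parameters.

```python
import numpy as np

M_PSI, M_K, M_LAM, M_XI = 3686.1, 493.7, 1115.7, 1321.7    # masses in MeV (rounded PDG values)

def kallen(a, b, c):
    return a * a + b * b + c * c - 2 * (a * b + a * c + b * c)

def p_two_body(M, m1, m2):
    """Momentum of a two-body decay M -> m1 m2 in the rest frame of M."""
    return np.sqrt(np.maximum(kallen(M**2, m1**2, m2**2), 0.0)) / (2.0 * M)

def spectrum(M_inv, A=1.0, B=1.0, C_bkg=0.0):
    """Coherent two-pole amplitude of Eq. (9) on top of the background of Eq. (8).

    R1 = (1822, 45) MeV and R2 = (1870, 200) MeV are the fit values quoted in the text;
    folding |amplitude|^2 with the p_Xi * k phase space is our simple choice here.
    """
    k = p_two_body(M_inv, M_K, M_LAM)            # kaon momentum in the K- Lambda frame
    p_xi = p_two_body(M_PSI, M_inv, M_XI)        # Xi-bar momentum in the psi(3686) frame
    amp = A / (M_inv - 1822.0 + 0.5j * 45.0) + B / (M_inv - 1870.0 + 0.5j * 200.0)
    return (np.abs(amp) ** 2 + C_bkg) * p_xi * k

M_inv = np.linspace(1650.0, 2300.0, 400)
d_gamma = spectrum(M_inv)
print(round(M_inv[np.argmax(d_gamma)]))          # the narrow pole dominates, peak near 1822 MeV
```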
We took advantage of this to show that this scenario provides a satisfactory description of the data in the \(\psi(3686)\to K^{-}\Lambda\bar{\Xi}^{+}\) decay, solving the puzzle presented by the recent BESIII experiment [29], which reports a width for the \(\Xi(1820)\) much bigger than the one quoted in the PDG [3]. Figure 3: Results obtained adjusting Eq. (9) to the data together with a small background. In the figure we show the contribution of the background alone and the results obtained removing the contribution of the second pole. ## Acknowledgement This work is partly supported by the National Natural Science Foundation of China under Grant No. 11975083 and No. 12365019, and by the Central Government Guidance Funds for Local Scientific and Technological Development, China (No. Guike ZY22096024). R. M. acknowledges support from the CIDEGENT program with Ref. CIDEGENT/2019/015, the Spanish Ministerio de Economia y Competitividad and European Union (NextGenerationEU/PRTR) by the grant with Ref. CNS2022-13614. This work is also partly supported by the Spanish Ministerio de Economia y Competitividad (MINECO) and European FEDER funds under Contracts No. FIS2017-84038-C2-1-P B, PID2020-112777GB-I00, and by Generalitat Valenciana under contract PROMETEO/2020/023. This project has received funding from the European Union Horizon 2020 research and innovation programme under the program H2020-INFRAIA-2018-1, grant agreement No. 824093 of the STRONG-2020 project.
2309.12715
Cuttlefish: Expressive Fast Path Blockchains with FastUnlock
Cuttlefish addresses several limitations of existing consensus-less and consensus-minimized decentralized ledgers, including restricted programmability and the risk of deadlocked assets. The key insight of Cuttlefish is that consensus in blockchains is necessary due to contention, rather than multiple owners of an asset as suggested by prior work. Previous proposals proactively use consensus to prevent contention from blocking assets, taking a pessimistic approach. In contrast, Cuttlefish introduces collective objects and multi-owner transactions that can offer most of the functionality of classic blockchains when objects transacted on are not under contention. Additionally, in case of contention, Cuttlefish proposes a novel `Unlock' protocol that significantly reduces the latency of unblocking contented objects. By leveraging these features, Cuttlefish implements consensus-less protocols for a broader range of transactions, including asset swaps and multi-signature transactions, which were previously believed to require consensus.
Lefteris Kokoris-Kogias, Alberto Sonnino, George Danezis
2023-09-22T08:56:32Z
http://arxiv.org/abs/2309.12715v1
# Cuttlefish: Expressive Fast Path Blockchains with FastUnlock ###### Abstract Cuttlefish addresses several limitations of existing consensus-less and consensus-minimized decentralized ledgers, including restricted programmability and the risk of deadlocked assets. The key insight of Cuttlefish is that consensus in blockchains is necessary due to contention, rather than multiple owners of an asset as suggested by prior work. Previous proposals proactively use consensus to prevent contention from blocking assets, taking a pessimistic approach. In contrast, Cuttlefish introduces collective objects and multi-owner transactions that can offer most of the functionality of classic blockchains when objects transacted on are not under contention. Additionally, in case of contention, Cuttlefish proposes a novel 'Unlock' protocol that significantly reduces the latency of unblocking contented objects. By leveraging these features, Cuttlefish implements consensus-less protocols for a broader range of transactions, including asset swaps and multi-signature transactions, which were previously believed to require consensus. ## 1 Introduction Consensus is not required for implementing decentralized asset transfers [13]. This insight led to the design of cryptocurrencies based on consistent or reliable broadcast [2, 3, 8], which offer several advantages. They exhibit exceptionally low latency, operate purely asynchronously, and are highly scalable. However, consensus-less systems suffer from two significant limitations. Firstly, they have limited programmability, since to maintain liveness transactions must be submitted in a valid and race-condition-free manner. Failure to do so can result in deadlocked assets that become forever inaccessible to their owners. Thus, programmability is restricted to simple transactions involving objects owned by a single entity, such as asset transfers or payments. Attempts to support more complex transactions involving multiple users (e.g., asset swaps) or authorization (e.g., multi-signature) risk causing deadlocks, rendering assets unusable indefinitely. As a result, existing consensus-less cryptocurrencies [2, 3, 8] are only suited for basic operations. The second limitation arises from the strong requirement imposed on clients in consensus-less systems to never issue conflicting transactions. Even minor bugs in client implementations can lead to deadlocked assets. For instance, a faulty ###### Abstract The first part of the paper is devoted to the study of the experience. Using multiple wallets for the same account or objects can result in concurrent conflicting transactions due to bugs, lack of synchronization, or being offline. Even, a single wallet may send a transaction with insufficient gas, only to later attempt to rectify the mistake by updating the gas value, and leading to two conflicting transaction on the same account or objects. Unfortunately, these innocent slip-ups or bugs are interpreted as equivocation attempts within the context of consistent broadcasts, potentially deadlocking the assets involved. Unlike previous solutions, Cuttlefish enabled users to swiftly regain control of their assets through FastUnlock and retry their transactions safely. **Atomic swaps.** Atomic swaps allow two parties to exchange digital assets without the need for a trusted intermediary. 
While consensus-based blockchains can achieve this through smart contracts, the risk of deadlock arises in consensus-less environments due to the possibility of a Byzantine user issuing a concurrent transaction. Such a situation would effectively deadlock the assets of both parties. However, this risk only materializes when an active attacker intentionally causes contention. In rare cases like these, the FastUnlock protocol enables participants in the swap to quickly recover their assets. This safety net allows Cuttlefish to support multi-owner transactions in the fast path, allowing fast path atomic swaps and other multi party smart contracts, enhancing the programmability of consensus-less transactions. **Regulated stablecoins.** Regulated stablecoins, require the issuer to be able to block accounts or balances for regulatory reasons, besides their owner spending them, which eludes consensus-less systems. Since multiple parties need to operate on such objects they need to use consensus to sequence these potentially conflicting operations, even though the issue nearly never excises their ability to block objects (creating no practical contention). Cuttlefish allows for collective objects, that may be used by more than one owner, or any pattern or complex access control, and can be used in the fast path. ## 3 Background A number of consensus-less systems have been proposed in the literature, including FastPay [2], Astro [8], Zef [3], and Linera [18]. We will specifically describe and extend Sui (the Sui Lutris mechanism [17]) as a basis for the Cuttlefish design, as it is the only currently deployed mechanism with a consensus-less fast path. Cuttlefish extends the expresivity of both object authentication and transactions in the Sui fast path, and also extends that Sui consensus path to support FastUnlock. **Object Types.** All Sui blockchain state is composed on a set of objects. There are three types of objects, and their use in a transaction determines whether the fast path or the consensus path is to be used. * _Read-only objects_ cannot be mutated or deleted and may be used in any type of transactions concurrently and by all users. * _Owned objects_ have an owner field that determines access control. When owner is an address representing a public key, a transaction may access the object, if it is signed by that address (which can also be a multi-signature). When the owner of an object (called a child object) is another object ID (called the parent object), the child object may only be used if the root object (the first one in a tree of possibly many parents) is included in the transaction and authorized. This facility is used to construct efficient collections. * _Shared objects_ do not specify an owner. They can instead be included in transactions by anyone, and do not require any authorization. Instead, they perform their authorization logic (enforced by the smart contract). Transactions.A transaction is a signed command that specifies several input objects, a version number per object, and a set of parameters. If valid it consumes the input object versions and constructs a set of output objects at a fresh version--which can be the same objects at a later version or new objects. Owned objects versions need to be the latest versions in validator databases, and not be re-used across transactions. Shared objects need not specify a version, and the version on which the transaction is executed is assigned by the system. 
A transaction is signed by a single address and therefore can use one or more objects owned by that address. A single transaction cannot use objects owned by more than one address. Certificates.A _certificate_ (\(\mathsf{Cert}\)) on a transaction contains the transaction itself as well as the identifiers and signatures from a quorum of at least \(2f+1\) validators. A certificate may not be unique, and the same logical certificate may be signed by a different quorum of validators. However, two different valid certificates on the same transaction should be treated as representing semantically the same certificate. The identifiers of signers are included in the certificate (i.e., accountable signatures [5]) to identify validators ready to process the certificate, or that can serve past information required to process the certificate. Processing in the Fast Path and Consensus.Figure 1 provide an overview of Sui-Lutris and by extension Cuttlefish's common-case. A transaction is sent by a user to all validators (\(\blacktriangledown\)), that ensure it is correctly signed for all owned Figure 1: General protocol flow of Sui Lutris [4] fast-path & consensus failover system. objects and versions, and also that all objects exist (); a correct validator rejects any conflicting transaction using the same owned object versions, in the same epoch (so the first transaction using an object acquires a _lock_ on it). They then countersign it () and returns the signature to the user. A quorum of signatures constitutes a _certificate_ for the transaction (). Anyone may submit the certificate to the validators () that check it. At this point execution may take the fast path: if the certificate only references read-only and owned objects it is executed immediately () and a signature on the effects of the execution returned to the user to create an effects certificate () and the transaction is final. If any shared objects are included execution must wait. In all cases, certificates are input into consensus and sequenced (). Once sequenced, the system assigns a common version number to shared objects for each certificate, and execution can resume (steps'and ') to finalize the transaction. The common sequence of certificates is also used to construct checkpoints, which are guaranteed to include all finalized transactions (). **Checkpoints and Reconfiguration.** Sui ensures transaction finality either before consensus for owned object transactions () and after consensus for shared object transactions ()'. Its reconfiguration protocols ensure that if a transaction could have been finalized it will eventually be included in a checkpoint before the end of the epoch. At the end of the epoch, all locks are reset (). Appendix 0.B summarizes the reconfiguration protocol of Sui that Cuttlefish directly adopts. **Limitations of Sui.** Misconfigured clients may create and submit concurrently conflicting transactions in step () and (), that reuse the same owned object versions. In that case, neither transaction may be able to construct a certificate (), and the owned object becomes locked until the end of the epoch. Due to the risk that owned objects can become locked through conflicting transactions, Sui restricts transactions to only contain objects from a single owner, thus limiting the applicability of the fast path--to avoid mistrusting users from locking each others' objects for a day. For similar reasons, objects may also have at most one owner. Cuttlefish addresses all these limitations. 
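As an illustration of the lock-then-certify flow just described, the Python sketch below shows how voting at most once per owned-object version prevents two conflicting transactions from both gathering a \(2f+1\) quorum. It is a deliberately simplified model: stake weighting, real signatures, object existence checks, and epochs are omitted, and all names are placeholders rather than the actual Sui/Cuttlefish implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ObjectKey:
    object_id: str
    version: int

@dataclass(frozen=True)
class Transaction:
    digest: str
    owned_inputs: tuple            # ObjectKey entries; must be the latest versions
    shared_inputs: tuple = ()      # any shared object forces the consensus path

class Validator:
    """Votes at most once per owned-object version within an epoch (the 'lock')."""
    def __init__(self, name):
        self.name = name
        self.locks = {}            # ObjectKey -> digest of the transaction holding the lock

    def vote(self, tx):
        if any(self.locks.get(k, tx.digest) != tx.digest for k in tx.owned_inputs):
            return None            # a conflicting transaction already locked one of the inputs
        for k in tx.owned_inputs:
            self.locks[k] = tx.digest
        return (self.name, tx.digest)     # stands in for a signature on the transaction

def certificate(tx, votes, f):
    """A certificate is the transaction plus signatures from a quorum of >= 2f+1 validators."""
    signers = [v for v in votes if v is not None]
    return {"tx": tx, "signers": signers} if len(signers) >= 2 * f + 1 else None

f = 1
validators = [Validator(f"v{i}") for i in range(3 * f + 1)]
t1 = Transaction("T1", (ObjectKey("coin-A", 7),))
t2 = Transaction("T2", (ObjectKey("coin-A", 7),))        # conflicts with T1 on the same version
print(certificate(t1, [v.vote(t1) for v in validators], f) is not None)   # True: fast-path cert
print(certificate(t2, [v.vote(t2) for v in validators], f) is not None)   # False: coin-A is locked
```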
## 4 Overview Cuttlefish adopts the high-level design of Sui, namely using reliable broadcast for a fast path with a fall-back to consensus (see Appendix 0.A for definitions of distributed systems primitives), but augments it in the following ways. 1. It allows for _multi-owner transactions_ on the fast path that use objects with different 'owners'. In Sui this can only be expressed with shared objects in transactions using the higher-latency consensus path. 2. It introduces _collective objects_ that allow for complex authorization involving different users or combinations of users, or even time or external events. Collective objects extend owned objects and may be used on the fast path. 3. It adds a _FastUnlock protocol_ that allows for fast path objects blocked due to concurrent conflicting transactions to recover liveness within seconds, whereas Sui would recover only within a day. Collective objects and multi-owner transactions allow for more expressive transactions in the fast path, but risk increasing the incidence of conflicting transactions and locked owned objects. To alleviate this issue, Section 6 presents a simple design for FastUnlock: it performs a no-op on locked objects using the consensus path making them available again within seconds. Section 7 extends the FastUnlock protocol to force a specific transaction instead of a no-op, which is necessary when objects are under continuous contention. FastUnlock leverages the consensus protocol to signal that an owned object is suspected of being under contention and should not be processed by the fast path. Following invocations, the current version of the object is blocked, and consensus is used to determine whether a transaction on it might have been final; if not, in the simple FastUnlock the version of the object is increased with a no-op. As a result, the object has a new version that can be accessed via the fast path once again. Since the version is always updated, the transactions that blocked the object are no longer valid, removing any replay attack opportunity. This is not true in Sui as even after the end of the epoch a malicious client can resubmit the equivocated transactions and try to re-lock the object. **Threat Model.** Cuttlefish operates in the same threat model as Sui. It assumes a message-passing system with a set of \(n\) validators and a computationally bound adversary that controls the network and can corrupt up to \(f<n/3\) validators within any epoch. We say that validators corrupted by the adversary are _Byzantine_ or _faulty_ and the rest are _honest_ or _correct_. To capture real-world networks we assume asynchronous _eventually reliable_ communication links among honest validators. That is, there is no bound on message delays and there is a finite but unknown number of messages that can be lost. Similarly to Sui [17], Cuttlefish additionally uses a consensus protocol as a black box that takes some valid inputs and outputs a total ordering [9, 11, 21], possibly operating within a partially synchronous model [10]. ## 5 Enhancing Programmability Cuttlefish provides greater objects programmability on the fast path than existing consensus-less systems using two main ingredients: (i) multi-owner transactions, and (ii) collective objects. **Multi-owner transactions.** Sui Lutris requires all owned objects in a transaction to be 'owned' by the same address [17]. Cuttlefish lifts this restriction: a transaction can reference owned objects with any root authenticator term. 
The transaction contains the authentication evidence used to authorize all objects, such as a set of signatures over the transaction, potentially from multiple addresses. Validators must ensure that all owned objects referenced by a transactions are correctly authorized before signing a transaction, which ensures that a valid certificate represents an authorized transaction. Transactions that only contain owned objects, even when they have different owners, can be executed on the fast path. Then in addition they contain shared objects their execution needs to be deferred after the certificate has been sequenced by consensus. Multi-owner transactions make Cuttlefish more susceptible to owned-objects being locked through error or malicious behaviour. For example, consider an atomic swap T transaction that takes objects A owned by Alice and object B owned by Bob and exchanges their ownership. If Alice signs T first, Bob may refuse to sign initially denying Alice access to her object. If Alice loses patience and tries to use A in another transaction T', then Bob can sign T and race Alice's attempt to build a certificate. Now both T and T' contain A and conflict which can lead to a locked A (and B). To resolve such situations it is necessary for Cuttlefish to implement FastUnlock described in Section 6. **Collective Objects.** Collective objects are owned objects with a more complex authenticator, than the usual address or object ID that Sui Lutris supports. Complex authenticators allow conjunction, disjunction, and weighted thresholds thresholds of authentication terms to be used as authenticators. Authentication terms include the traditional address and object ID, but also conditions on time or events that have occured in the environment of the execution. Due to the fact that multiple non-coordinating or even mutually distrustful parties can use the object in transaction, as well as the fact that some authorization terms are non-deterministic, complex authenticators can lead to conflicting transactions being authorized on objects and thus require the FastUnlock protocols to be practical. More specifically Cuttlefish extends the authorization logic of an owned object to be a root authentication term \(\langle T\rangle\) from the grammar in Figure 2: Figure 2: Grammar defining the authorization logic for collective objects * A PublicKey term is true if the transaction is signed by the public key \(pk\). Using a single such term as an authenticator for an object expresses the authentication logic of a traditional single owner object in Sui. * A ObjectID term requires the object with id _oid_ to be included (and authenticated) as part of the transaction. A single object id authenticator expresses the traditional parent-child relation, and ownership rules in Sui. * The BeforeTime and AfterTime are true if the (local) time the transaction is received by the validator is respectively before or after \(t\). Note that since even honest validators cannot have perfectly synchronized clocks, it is possible that a transaction with such a term becomes'stuck'. * The EventOccured term becomes true if in the trace of finalized executions a specific event was emitted on chain \(c\). Note that the chain may be different chain than the one operated by Cuttlefish effectively making authorization conditional on an oracle for another chain. Such an event may be described by type or content and we abstract this in \(e\). 
A reference to the transaction that emitted the event can be provided as an authenticator to help validators check this term. * The Threshold defined a threshold \(W\) and a weight \(w_{i}\) for a set of terms. It is true if the sum of weights of the true terms exceed the threshold. It allows the definition of flexible policies such as requiring a threshold of signature or other conditions to be present to authorize the object being used. * The And and Or define a number of terms, and are true if all or any of these terms are true, respectively. A transaction needs to provide evidence that all authenticator terms for all objects in its input set are true. For each input object it specifies the path(s) in the authentication term tree that are true supporting the overall authenticator term, collectively called _authentication paths_. It also contains a set of signatures (as a list ordered by public key) signing the transaction. To allow for greater flexibility the authentication paths are not signed (conceptually they are part of the signature not the transaction), and therefore a transaction cannot get information about the logic that authorized its execution through this mechanism. We note that an authorization path may be expressed in a very succinct manner as a one bit per Threshold, And or Or branch pursued to demonstrated the root authenticator term to be true. A single signature is required to satisfy any number of PublicKey terms with the same \(pk\). ObjectID terms can be demonstrated as satisfied implicitly by including the _oid_ as an input. EventOccured, BeforeTime and AfterTime terms are satisfied (or not) through the validator comparing their specified time with the current time or consulting a chain for an event, and incur no additional overhead in terms of evidence in the transaction. We represent the authenticator logic as a tree, with AND/OR, k-out-of-n connectives as branches and identities, time conditions and object IDs as leafs. In this representation, we can augment each branch and leaf with an optional nonce, compute a Merkle tree over them, and only store a hash of the root as the authenticator. In this way transfering to a complex authenticator is no different than transferring an object to an address, and one cannot tell the difference until the object is used in a transaction. A transaction then reveals only the paths necessary to show that the condition for access is satisfied. This allows objects to preserve secret authenticators until they are accessed, and even upon access only reveal the information required. We leave using zero-knowledge proofs as evidence all authenticators are satisfied in a transaction, allowing us to hide all information besides authorization, for future work. ## 6 Baseline FastUnlock Protocol Both multi-owner transactions and collective objects can result in deadlocks in the fast path when correct clients attempt to access objects concurrently. To remedy this issue Cuttlefish introduces a FastUnlock functionality. For simplicity, we show how to unlock a single object by either executing a preexisting transaction to finality or executing a no-op which only increases the version. Section 7 extends the basic protocol to execute a new transaction instead of a no-op. Sui [17] provides detailed specifications and implementations of its system model and Cuttlefish largely extends it with the additional FastUnlock protocol. #### 6.0.1 New Persistent Data Structures. 
Each Cuttlefish validator maintains a set of persistent tables abstracted as key-value maps, with the usual contains, get, and set operations. The map \[\textsc{LockDb}[\textsf{ObjectKey}]\rightarrow\textsf{Cert}\text{ or }\textbf{None}\] maps each object's identifier and version, \(\textsf{ObjectKey}=(\textsf{ObjectId},\textsf{Version})\), to a certificate \(\textsf{Cert}\), or **None** if the object's version exists but the validator does not hold any certificate. The map \[\textsc{UnlockDb}[\textsf{ObjectKey}]\rightarrow\textbf{Unlocked}\text{, }\textbf{Confirmed}\text{, or }\textbf{None}\] records whether a transaction over the specified object version is involved in a current FastUnlock instance (**Unlocked**), has been sequenced by the consensus engine (**Confirmed**), or none of the above (**None**). All new owned object entries start with \(\textsc{UnlockDb}[\textsf{ObjectKey}]\) set to **None**. Once a transaction certificate is sequenced through consensus, it is always executed (whether it is for a shared object transaction or an owned-object-only transaction) and all its owned object entries have \(\textsc{UnlockDb}[\textsf{ObjectKey}]\) set to **Confirmed**. #### 6.0.2 FastUnlock Protocol Description. In order to safely unlock an object, the user interactively constructs a proof, called a _no-commit certificate_, that no transaction modifying that object has been committed or will be committed on the fast path. This proof consists of a message signed by a quorum of validators attesting that they have not already executed a transaction over the \(\textsf{ObjectKey}\), and promising that they will not execute any transaction over the \(\textsf{ObjectKey}\) in the fast path. Only certificates sequenced over consensus may affect such an ObjectKey going forward. Figure 3 illustrates the fast-unlock protocol allowing a user to instruct validators to unlock a specific object. A user first creates an _unlock request_ specifying the object they wish to unlock: \[\mathsf{UnlockRqt}(\mathsf{ObjectKey},\mathsf{Auth})\] This message contains the object's key ObjectKey to unlock (accessible as \(\mathsf{UnlockRqt}.\mathsf{ObjectKey}\)) and an authenticator \(\mathsf{Auth}\) ensuring the user is authorized to unlock ObjectKey. The authenticator is composed of two parts: (i) a transaction that mutates the object in question and potentially additional objects, which is signed by the object owner, and (ii) a proof that the party requesting the unlock can modify at least one of the objects in the transaction. The authenticator prevents rogue unlock requests, either for objects that are not under contention (the embedded transaction shows there exists a transaction that uses the object) or by parties not authorized to act on the objects. The user broadcasts this UnlockRqt message to all validators (). Each validator handles the UnlockRqt as follows (Algorithm 1). A validator first performs the following check: * **Check (1.1)** It ensures the validity of UnlockRqt by verifying the authenticator \(\mathsf{Auth}\) with respect to the ObjectKey to unlock. Specifically, it should contain a valid transaction including ObjectKey and evidence that the unlock is authorized given the owner of ObjectKey. Otherwise it stops processing. The validator then attempts to retrieve a certificate \(\mathsf{Cert}\) for a transaction on ObjectKey if one exists (**Step (1.2)**), or sets \(\mathsf{Cert}\) to \(\mathbf{None}\). 
Then, the validator records that the object in UnlockRqt can only be included in transaction in the consensus path (Line 11) by setting its entry in the UnlockDb[ObjectKey] to \(\mathbf{Unlocked}\) (**Step (1.3)**). It finally returns a signed _unlock vote_\(\mathsf{UnlockVote}\) to the user: \[\mathsf{UnlockVote}(\mathsf{UnlockRqt},\mathbf{Option}(\mathsf{Cert}))\] Figure 3: FastUnlock interactions between a user and validators to unlock an object. This message contains the UnlockRqt itself, the (possibly **None**) certificate \(\mathsf{Cert}\) leading to the execution of the object key referenced by UnlockRqt (). ``` // Handle UnlockRqt messages from clients. 1:procedureProcessUnlockTx(UnlockRqt) 2: // Check (1.1): Check Auth. (Section 6). 3:if!valid(UnlockRqt)thenreturn error 4: 5: // Step (1.2): No conflicting executions. 6:ObjectKey\(\leftarrow\)UnlockRqt.ObjectKey 7: \(\mathsf{Cert}\leftarrow\)LockDBs[ObjectKey]\(\triangleright\) Can be None 8: 9: // Step (1.3): Record the decision to unlock. 10: UnlockVote\(\leftarrow\)sign(UnlockRqt, \(\mathsf{Cert}\)) 11: UnlockDBs[ObjectKey]\(\leftarrow\)Unlocked 12: 13:returnUnlockVote ``` **Algorithm 1** Process unlock requests The user collects a quorum of \(2f+1\)UnlockVote over the same (UnlockRqt, \(\mathsf{Cert}\)) fields and assembles them into an _unlock certificate_\(\mathsf{UnlockCert}\): \[\mathsf{UnlockCert}(\mathsf{UnlockRqt},\mathbf{Option}(\mathsf{Cert}))\] where UnlockRqt is the certified abort message created by the user and \(\mathsf{Cert}\) is the (possibly **None**) certificate leading to the execution of the objects referenced by UnlockRqt. There are two cases leading to the creation of UnlockCert: 1. At least one UnlockVote carries a certificate. This scenario indicates that a correct validator already executed a transaction, which implies the object is not locked. However this is not a proof of finality and subsequent steps may invalidate this execution. 2. No UnlockVote carries a certificate. This scenario is a 'no-commit' proof as there are \(f+1\) honest validators that will not process certificates (UnlockDb holds **Unlocked**) thus no certificate execution in the fast path will ever become final. The user submits this UnlockCert for sequencing by the consensus engine (). All correct validators observe a consistent sequence of UnlockCert messages output by consensus () and process them in order as follows (Algorithm 2). A validator performs the following checks, and if any fails they ignore the certificate: * **Check (2.1)** They ensure they did not already process another transaction to completion (i.e. UnlockDb is not **Confirmed**) or a different UnlockCert for the same objects keys. * **Check (2.2)** They check UnlockCert is valid, that is, the validator ensures (i) it is correctly signed by a quorum of authorities, and (ii) that the certificate \(\mathsf{Cert}\) it contains is valid or **None**. The validator then executes the transaction referenced by \(\mathsf{Cert}\) (step 2.3) if one exists. Otherwise, if \(\mathsf{Cert}\) is empty, the validator undoes any local transaction executed on the object4, then executes a no-op, that is, the object contents remain unchanged but its version number increases by one. The validator finally marks every object key as **Confirmed** to prevent future unlock certificates or checkpoint certificates from overwriting execution (Line 18) and returns an EffectSign to the user (). 
The user assembles a quorum of \(2f+1\) EffectSign messages into an _effect certificate_\(\mathsf{EffectCert}\) that determines finality (). Footnote 4: The UnlockCert with \(\mathsf{Cert}\) being **None** ensures such an execution could not have been final; only a single layer of execution can ever be undone, and no cascading aborts can happen. Appendix C details the use of gas objects within the context of FastUnlock and D proves the safety and liveness of the protocol. The key insight is that an UnlockCert forces transactions on the owned object to go through consensus sequencing. There, either a transaction certificate or an unlock certificate will be sequence first and consistently executed. An unlock certificate for a finalized transaction will always result in the execution of the same transaction. **Auto-Unlock.** The basic FastUnlock scheme presumes that the request to unlock an object is authenticated by the owner(s) of the object. This ensures that only authorized parties can interfere with the completion of a transaction, but it also restricts who can initiate unlocking in case of loss of liveness. Alternatively, an 'Auto Unlock' scheme may use a synchrony assumption instead to initiate unlock: each validator upon signing a transaction associates with each input object the current timestamp. An Auto Unlock request is identical to a FastUnlock request, but is not authenticated by the object owner. Instead, its validity is checked (checks (1.1) and (2.1)) by ensuring that a sufficient delay \(\Delta\) has passed since the object was locked. To ensure liveness the delay \(\Delta\) should be long enough to allow for the creation of transaction certificates if there is no contention. FastUnlock and Auto Unlock can be combined: an authenticated request can be processed immediately, but an unauthenticated request is only valid after \(\Delta\). ## 7 Contention Mitigation The basic FastUnlock protocol speeds up recovery from loss of liveness due to mistakes. However, Cuttlefish aims to support workloads on the fast path that are truly under contention. In this case, the basic protocol in Section 6 is insufficient, since it can result in multiple rounds of locking and no-op unlocking without any user transaction being committed. We present a protocol that proposes a new transaction during the unlock phase that is executed once the unlock is sequenced, ensuring liveness. Additionally, we show how to generalize the basic protocol to unlock multiple objects at once. ``` // Handle UnlockRqt messages from clients. 1:procedureProcessUnlockText(UnlockRqt) 2: // Check (3.1): Check authenticator. 3:if!valid(UnlockRqt)thenreturn error 4: 5: // Collect certificates. 6:\(c\leftarrow\)None 7:forObjectKey\(\in\)UnlockRqt.ObjectKeys do 8:\(c\gets c\cup\textsc{LockDsl}[\textsc{ObjectKey}]\) 9:\(\textsc{UnlockVote}\gets sign(\textsc{UnlockRqt},c)\) 10: 11: // Record the decision to unlock. 12:if\(c==\)Nonethen 13:forObjectKey\(\in\)UnlockRqt.ObjectKeys do 14:\(\textsc{UnlockDsl}[\textsc{ObjectKey}]\leftarrow\)Unlocked 15: 16:returnUnlockVote ``` **Algorithm 3** Process unlock requests (multi) The multi-objects unlock protocol follows the same general flow as the single-object unlock protocol described in Section 6. We thus describe the protocol referring to the steps - depicted in Figure 3. 
**Protocol description.** The user first creates an _unlock request_ specifying a set of objects to unlock: \[\textsc{UnlockRqt}([\textsc{ObjectKey}],\textsc{Tx},\textsf{Auth})\] This message contains a list of the object's keys [ObjectKey] to unlock (accessible as UnlockRqt.ObjectKeys), a new transaction Tx to execute if the unlock process succeeds, and an authenticator Auth ensuring the sender is authorized to access all objects in [ObjectKey]. The user broadcasts this message to all validators (). ``` // Handle UnlockCert messages from consensus. 1:procedureProcessUnlockCert(UnlockCert) 2: // Check (4.1): Check message validity. 3:forObjectKey\(\in\)UnlockCert.ObjectKeys do 4:ifUnlockDslB[ObjectKey] = Confirmedthen 5:return 6: 7: // Check (4.2): Check message validity. 8:if!valid(UnlockCert)thenreturn error 9: 10: // Check (4.3): Can we execute the tx? 11:\(v\leftarrow[\,]\) 12:ifUnlockCert.Cert\(=[\,]\)then 13:\(\textsc{Tx}\leftarrow\textsc{UnlockCert}.UnlockRqt.Tx\) 14:\(\textsc{EffectSign}\gets c\textsc{Exec}[\textsc{Tx},\textsc{UnlockCert}]\) 15:\(v\leftarrow\textsc{EffectSign}\) 16:forObjectKey\(\in\)UnlockCert.ObjectKeys do 17:\(\textsc{UnlockDslB}[\textsc{ObjectKey}]=\)Confirmed 18:else 19:forCert\(\in\)UnlockCert.Cert do 20:\(\textsc{EffectSign}\gets c\textsc{Exec}[\textsc{Cert}]\) 21:\(v\gets v\cup\textsc{EffectSign}\) 22:forObjectKey\(\in\)Cert.ObjectKeys do 23:\(\textsc{UnlockDslB}[\textsc{ObjectKey}]=\)Confirmed 24:return\(v\) ``` **Algorithm 4** Process unlock certificates (multi) Algorithm 3 describes how each validator handles this unlock request UnlockRqt. They first perform Check (3.1) Line 3 to check the authenticator Auth is valid with respect to all objects. This check ensures that the user is authorized to mutate all the objects referenced by UnlockRqt and to lock all owned object referenced by Tx. The validator then collects any certificates for the objects referenced by UnlockRqt (Line 8) and adds them to the response as Cert. The validator then marks object in UnlockRqt as reserved for transaction executed through consensus only (Line 14). The validator finally returns an _unlock vote_UnlockVote to the user: \[\textsc{UnlockVote}(\textsc{UnlockRqt},[\textbf{Option}(\textsc{Cert})])\] This message contains the unlock message UnlockRqt itself and possibly a set of certificates [Cert] on transactions including the object keys referenced by UnlockRqt (possible empty) (). If \(\mathsf{Cert}\) is not empty the certified transactions may have been finalized, and should be executed instead of the new transaction. The user collects a quorum of \(2f+1\)UnlockVote over the same UnlockRqt message and assembles them into an _unlock certificate_UnlockCert: \[\mathsf{UnlockCert}(\mathsf{UnlockRqt},\mathsf{Cert})\] where UnlockRqt is the user-created certified unlock message and \(U\mathsf{Cert}\) is the unions of all set of certificates received in UnlockRqt responses. The user submits this message to the consensus engine (). The consensus engine sequences all UnlockCert messages; all correct validators observe the same output sequence (). Algorithm 4 describes how validators process these UnlockCert messages after they are sequenced by the consensus engine. The validator first ensures they did not already process another UnlockCert or \(\mathsf{Cert}\) through checkpoint for the same objects keys (Line 4). 
They then check UnlockCert is valid, that is, the validator ensures (i) it is correctly signed by a quorum of authorities, and (ii) that all certificates [Cert] it contains are valid (Line 8). The validator can only execute the transaction \(\mathsf{Tx}\) specified by the user if UnlockCert.Cert is empty (Line 12). The validator then marks every object key of [ObjectKey] as **Confirmed** to prevent any future unlock requests on the ObjectKey from overwriting execution with a different transaction (Line 23) and returns a set of EffectSign to the user (). The user assembles an EffectSign from a quorum of \(2f+1\) validators into an _effect certificate_EffectCert that determines finality (). ## 8 Related and Future work The Cuttlefish's fast path is based on Byzantine consistent broadcast [6]. Previous works suggested using this weaker primitive to build payment systems [1, 2, 3, 12, 14, 17, 8, 1, 13, 14] or even as an exclusion-based locking mechanism for optimistic state-machine replication [15]. Zzyzx specifically uses a two-mode unlock mechanism that checks if all replicas have a matching history over the object and retracts the lock or runs full consensus to find the best state to adopt. Unlike Zzyzx, Cuttlefish provides the machinery to not only abort but also directly execute a new transaction and exploits the idea of shared objects to allow for easy execution when there is true contention. Addtionally, Cuttlefish comes with a full set of proofs. Section 3 extensively discussed Sui [17], the closest systems to Cuttlefish. Notably, Sui includes a restricted variant of multi-owner transactions to support sponsored transactions, and a restricted variant of complex authenticators allowing only weighted thresholds of signatures as an authenticator. Sui additionally, supports a batch execution mechanism called Programmable Transaction Blocks (PTB). In a PTB a user can bundle multiple of their transactions together for execution and allows for a significant increase in operations per second Sui can process. Unfortunately, this is currently only available for a single owner largely due to the risk of deadlocks if one of the bundled operations is under a race condition. With Cuttlefish we envision providing this significant advantage in terms of throughput efficiency for general-purpose workloads as dapp operators will be able to bundle transactions of many users in a single certificate workflow knowing that if something goes wrong, they could invoke FastUnlock and seamlessly regain liveness. Another closely related work is FastPay which implements a payment system using a Byzantine consistent broadcast primitive and a lazy synchronizer to achieve _totality_[6]. Zef combines FastPay with the Coconut anonymous credentials scheme [20] to enable confidential and unlinkable payments. Astro relies on an eager implementation of _Byzantine reliable broadcast_[6] to achieve totality without relying on an external synchronizer at the cost of higher communication in the common case. Similarly, ABC [19] proposes a relaxed notion of consensus where termination is only guaranteed for honest users. All these systems lack an integration with a consensus path making them both impractical to run for a long-time (no garbage-collection or reconfiguration) as well as limited functionality (only payments) and usability (client-side bugs result in permanent loss of funds). 
If integrated, then Cuttlefish would apply directly to allow more use-cases on the low latency consensusless path without the risk of locking assets forever due to race conditions. ## 9 Conclusion Cuttlefish proposes a novel approach to decentralized ledgers that addresses the shortcomings of previous consensus-minimized systems. By realizing that the requirement for consensus in blockchains is driven by contention rather than the number of owners, Cuttlefish challenges traditional wisdom and provides an alternative perspective. When objects are not under contention, the use of collective objects and multi-owner transactions, combined with the right authentication mechanism enables Cuttlefish to give the majority of the functionality seen in traditional blockchains within two round-trips of communication. To properly deal with deadlock when the objects are under contention, Cuttlefish proposes the novel FastUnlock protocol allowing users to quickly regain access to locked assets. As a result, Cuttlefish allows for the consensus-less execution of a broader set of transactions, including asset swaps and multi-sig transactions that were previously thought to need consensus. ## Acknowledgment This work is supported by Mysten Labs. We thank the Mysten Labs Engineering teams for valuable feedback broadly, and specifically to Xun li and Mark Logan for advising on a design that would best fit the Sui codebase.
2309.12073
Interface of Equation-of-State, Atomic Data and Opacities in the Solar Problem
Convergence of the Rosseland Mean Opacity (RMO) is investigated with respect to the equation-of-state (EOS) and the number of atomic levels of iron ions prevalent at the solar radiative/convection boundary. The "chemical picture" Mihalas-Hummer-D\"{a}ppen MHD-EOS, and its variant QMHD-EOS, are studied at two representative temperature-density sets at the base of the convection zone (BCZ) and the Sandia Z experiment: $(2 \times 10^6K, \ 10^{23}/cc)$ and $(2.11 \times 10^6K, \ 3.16 \times 10^{22}/cc)$, respectively. It is found that whereas the new atomic datasets from accurate R-matrix calculations for opacities (RMOP) are vastly overcomplete, involving hundreds to over a thousand levels of each of the three Fe ions considered -- FeXVII, FeXVIII and FeXIX -- the EOS constrains contributions to RMOs by relatively fewer levels. The RMOP iron opacity spectrum is quite different from the Opacity Project distorted wave model and shows considerably more plasma broadening effects. This work points to possible improvements needed in the EOS for opacities in high-energy-density (HED) plasma sources.
Anil K. Pradhan
2023-09-21T13:42:31Z
http://arxiv.org/abs/2309.12073v1
# Interface of Equation-of-State, Atomic Data and Opacities in the Solar Problem ###### Abstract Convergence of the Rosseland Mean Opacity (RMO) is investigated with respect to the equation-of-state (EOS) and the number of atomic levels of iron ions prevalent at the solar radiative/convection boundary. The "chemical picture" Mihalas-Hummer-Dappen MHD-EOS, and its variant QMHD-EOS, are studied at two representative temperature-density sets at the base of the convection zone (BCZ) and the Sandia Z experiment: \((2\times 10^{6}K,\ 10^{23}/cc)\) and \((2.11\times 10^{6}K,\ 3.16\times 10^{22}/cc)\), respectively. It is found that whereas the new atomic datasets from accurate R-matrix calculations for opacities (RMOP) are vastly overcomplete, involving hundreds to over a thousand levels of each of the three Fe ions considered -- Fe xvii, Fe xvii, Fe xix -- the EOS constrains contributions to RMOs by relatively fewer levels. The RMOP iron opacity spectrum is quite different from the Opacity Project distorted wave model and shows considerably more plasma broadening effects. This work points to possible improvements needed in the EOS for opacities in high-energy-density (HED) plasma sources. keywords: Physical Data and Processes, atomic processes ## 1 Introduction As a fundamental quantity in light-matter interaction opacity plays a key role in astrophysics, such as stellar interiors, helioseismology, and asteroseimology, elemental abundance determination, host-star and exoplanetary fluxes, etc. (Christensen-Dalsgaard _et al._ (2009); Basu _et al._ (2015); Asplund _et al._ (2009); Carlos _et al._ (2019); Buldgen _et al._ (2023a). In addition, radiation transport models of inertial plasma fusion devices requires accurate opacities (Bailey _et al._ (2015); Perry _et al._ (2018). In particular, the outstanding uncertainty in the solar chemical composition affects elemental calibration of all astronomical sources. Attempts to employ advances in helioseismology and abundances are an active area of basic research (Basu and Antia (2008); Buldgen _et al._ (2022), but require enhanced solar opacities by about 10%. That, in turn, depends on two elements, oxygen and iron, that determine about half of the solar opacity at BCZ. However, a downward revision of oxygen abundance by up to 20-40% from earlier solar composition is a major part of the "solar problem" (Asplund _et al._ (2021); Pietrow _et al._ (2023); Li _et al._ (2023); Buldgen _et al._ (2023b). Since about 90% of oxygen is either fully ionized or H-like at BCZ, its absorption coefficient is small and unlikely to change from current atomic calculations, enhanced iron opacity might countenance lower solar abundances (Bailey _et al._ (2015). Opacity computations depend on atomic data on the one hand and the plasma EOS on the other (The Opacity Project Team (1995); Seaton _et al._ (1994); Pradhan _et al._ (2023). Voluminous amounts of data are needed for all photon absorption and scattering processes in order to ensure completeness. Recently, accurate and extensive calculations of atomic data for iron ions of importance under BCZ conditions have been carried out using the R-matrix method (Pradhan _et al._ (2023); Nahar _et al._ (2023); Pradhan (2023); Zhao _et al._ (2023). However, the EOS determines how and to what extent the atomic data contribute to monochromatic and mean opacities at a given temperature and density. 
The Planck and Rosseland Mean Opacity (PMO and RMO respectively) are defined as \[\kappa_{P}B(T)=\int\kappa_{\nu}B_{\nu}d\nu, \tag{1}\] \[\frac{1}{\kappa_{R}}=\frac{\int_{0}^{\infty}g(u)\kappa_{\nu}^{-1}du}{\int_{0 }^{\infty}g(u)du}\quad;\quad g(u)=u^{4}e^{-u}(1-e^{-u})^{-2}, \tag{2}\] where \(g(u)=dB_{\nu}/dT\) is the derivative of the Planck weighting function \[B_{\nu}(T)=\frac{(2h\nu^{3}/c^{2})}{e^{h\nu/kT}-1} \tag{3}\] , and \(\kappa_{\nu}\) is the monochromatic opacity. Atomic processes and contributions to opacity are from bound-bound (\(bb\)), bound-free (\(bf\)), free-free (\(ff\)), and photon scattering (\(sc\)) as \[\kappa_{ijk}(\nu) = \sum_{k}a_{k}\sum_{j}x_{j}\sum_{i,i^{\prime}}\left[\kappa_{bb}(i,i ^{\prime};\nu)\right. \tag{4}\] \[+ \kappa_{bf}(i,\epsilon i^{\prime};\nu)+\kappa_{ff}(\epsilon i, \epsilon^{\prime}i^{\prime};\nu)+\kappa_{sc}(\nu)\right]\,, \tag{5}\] where \(a_{k}\) is the abundance of element \(k\), \(x_{j}\) the \(j\) ionization fraction, \(i\) and \(i^{\prime}\) are the initial bound and final bound/continuum states of the atomic species, and \(\epsilon\) represents the electron energy in the continuum. Whereas the \(ff\) and \(sc\) contributions are small, the opacity is primarily governed by \(bb\) and \(bf\) atomic data that need to be computed for all atomic species. Existing opacity models generally employ the relatively simple distorted wave (DW) approximation based on atomic structure codes, but higher accuracy requires considerable effort. Originally, the Opacity Project (The Opacity Project Team (1995) (hereafter OP) envisaged using the poweful and highly accurate R-matrix method for improved accuracy. But that turned out to be intractable owing to computational constraints, and also required theoretical developments related to relativistic fine structure and plasma broadening effects. Therefore, the OP opacities were finally computed using similar atomic physics as other existing opacity models, mainly based on the simpler distorted wave (DW) approximation (Seaton OPCD (2003), and later archived in the online database OPserver (Mendoza _et al._ (2007). However, following several developments since then renewed R-matrix calculations can now be carried out, as discussed below. ## 2 Theoretical Framework Recently, with several improvements in the extended R-matrix and opacity codes large-scale data have been computed for Fe ions Fe xvii, Fe xviii and Fe xix, which determine over 80% of iron opacity near BCZ conditions (Pradhan _et al._ (2023); Nahar _et al._ (2023); Pradhan (2023); Zhao _et al._ (2023). The R-matrix (RM) framework and comparison with existing opacity models based on atomic structure codes and the distorted wave (DW) approximation, and associated physical effects, are described in detail. The primary difference between the RM and DW approximations is the treatment of bound-free opacity which is dominated by autoionizing resonances that are included in an _ab initio_ manner in RM calculations, but treated perturbatively as bound-bound transitions in the DW method. Plasma broadening effects are very important, but manifest themselves quite differently in the two methods. Resonances in RM photoionization cross sections are broadened far more than lines as function of temperature and density since autoionization widths, shapes and heights are considered explicitly (Pradhan (2023). 
Also, the intrinsically asymmetric features of the large Seaton photoexcitation-of-core (PEC) resonances in bound-free cross sections are preserved in RM calculations. The unverified assertion that RM and DW opacities are equivalent is incorrect owing to basic physical effects (Delahaye _et al._ (2021). On the contrary, the RM method is based on the coupled channel approximation that gives rise to autoionizing resonances, and has historically superseded the DW method which neglects channel coupling. RM calculations for all relevant atomic processes are generally much more accurate than the DW, as for example in the work carried out under the Iron Project, including relativistic effects in the Breit-Pauli R-matrix (BPRM) approximation (Hummer _et al._ (1993) that is also employed in the present work (Nahar _et al._ (2023). The interface of atomic data with EOS parameters is implemented through the MHD-EOS (Mihalas _et al._ (1988), formulated in the "chemical picture" as designed for OP work. It is based on the concept of _occupation probability_\(w\) of an atomic level being populated in a plasma environment, characterized by a temperature-density (hereafter T-D) related to Boltzmann-Saha equations. The level population is then given as \[N_{ij}=\frac{N_{j}g_{ij}w_{ij}e^{-E_{ij}/kT}}{U_{j}}, \tag{6}\] where \(w_{ij}\) are the occupation probabilities of levels \(i\) in ionization state \(j\), and \(U_{j}\) is the atomic internal partition function. The occupation probabilities do not have a sharp cut-off, but approach zero for high-\(n\) as they are "dissolved" due to plasma interactions. The partition function is re-defined as \[U_{j}=\sum_{i}g_{ij}w_{ij}e^{(-E_{ij}/kT)}. \tag{7}\] \(E_{ij}\) is the excitation energy of level \(i\), \(g_{ij}\) its statistical weight, and \(T\) the temperature. The \(w_{ij}\) are determined upon free-energy minimization in the plasma at a given T-D. However, the original MHD-EOS was found to yield \(w\)-values that were unrealistically low by up to several orders of magnitude. An improved treatment of microfield distribution and plasma correlations was developed, leading to the so-called QMHD-EOS (Nayfonov _et al._ (1999) and employed for subsequent OP calculations and results (Seaton OPCD (2003); Mendoza _et al._ (2007). ## 3 Opacity Computations The new RMOP data are interfaced with the (Q)MHD-EOS to obtain opacities. Computed RM atomic data for \(bb\) oscillator strengths and \(bf\) photoionization cross sections of all levels up to \(n\) (SLJ) = 10 yields datasets for 454 levels for Fe xvii, 1174 levels for Fe xviii and 1626 for Fe xix (Nahar _et al._ (2023); some results for Fe xvii were reported earlier (Nahar and Pradhan (2016). Monochromatic and mean opacities may then be computed using atomic data for _any number of these levels and the EOS_. In order to study the behavior of MHD and QMHD, we employ the new RMOP opacity codes (Pradhan _et al._ (2023), varying the number of atomic levels for each Fe ion, and both sets of EOS parameters at specified temperature-density pairs for a particular ion. Monochromatic opacities are computed at the same frequency mesh in the variable and range \(0\leq u=h\nu/kT\leq 20\), as in OP work (Seaton _et al._ (1994); Mendoza _et al._ (2007). Since RMOP calculations were carried out for the three Fe ions that comprise over 80% of total Fe at BCZ, we replace their opacity spectra in OP codes (Seaton OPCD (2003) and recompute RMOP iron opacities. 
Thus, \(\sim\)15% contribution is from OP data for other Fe ions; a table of Fe ion fractions at BCZ is given in (Pradhan _et al._ (2023). To circumvent apparently unphysical behavior of MHD-EOS at very high densities, an ad hoc occupation probability cut-off was introduced in OP calculations with \(w(i)\geqslant 0.001\)(Badnell and Seaton (2003). We retain the cut-off in the new RMOP opacity codes (Pradhan _et al._ (2023), since the same EOS is employed, but also tested relaxing the cut-off to smaller values up to \(w(i)\geqslant 10^{-12}\). However, no significant effect on RMOs was discernible, indicating that a more fundamental revision of (Q)MHD-EOS might be necessary (Trampedach _et al._ (2006). Level population fractions are normalized to unity, and therefore including more levels would not necessarily affect opacities in a systematic manner, as discussed in the next section. unless they are modified with inclusion of possibly missing atomic-plasma microphysics of individual levels and associated atomic data. ## 4 Results and Discussion The EOS determines the contribution to opacity and its cut-off from an atomic level \(i\) via the occupation probability \(w(i)\) depending on density and resulting plasma microfield, and the level population \(pop(i)\) via the Boltzmann factor \(exp(-E_{i}/kT)\) at temperature T. Fig. 1 illustrates the behavior of the EOS parameters for Fe xvii at BCZ conditions. The new RMOP data include autoigning resonances due to several hundred coupled levels, but can not be directly compared with DW bound-free cross sections that neglect channel coupling and are feature-less (Nahar _et al._ (2023); Zhao _et al._ (2023). However, a comparison of the total monochromatic opacity spectrum can be done to illustrate differences due to plasma broadening of resonances in the RMOP data vs. lines as in the OP DW data. The primary focus of this work is the interface of EOS with atomic data. As exemplar of the detailed analysis of EOS parameters, Fig. 1 shows the occupation probabilities for Fe xvii at BCZ conditions (red dots, top panel) for all levels with \(w(i)>0.001\), and corresponding level populations (black open circles, middle panel). Since the contribution to RMO is limited by significant level populations \(Pop(i)\), the number of levels with \(Pop(i)>0.1\%\) is found to be much smaller, around 50 or so (blue dots, bottom panel). The reason for the given distribution of \(w(i)\) (top panel) is because the BPRM calculations are carried out according to total angular momentum quantum number and parity \(J\pi\). Therefore, all BPRM data are produced in order of ascending order in energy _within each \(J\pi\) symmetry_, and descending order due to Stark ionization and dissolution of levels (Mihalas _et al._ (1988). Tables 1 and 2 give sample RMOs computed at BCZ and Sandia Z temperatures and densities respectively, varying the number of contributing levels NLEV for each of the three Fe ions, and both the MHD and QMHD EOS. Correspondingly, an illustration of RMO behavior is shown in Fig. 2. There is considerable variation in RMO values for small NLEV as expected. The RMOs are very high if all the population is in the ground stae or the first few excited states, but decreasing with NLEV. But then the RMOs approach near-constant values for NLEV \(\approx\) NMAX \(=200\), for all three Fe ions and for both the MHD and QMHD; no further significant contribution to RMOs is made due to EOS cut-offs and saturation. 
_Therefore, this 'convergence' should be treated as apparent, and would be real if and only if the EOS is precisely determined_. The converged RMOs should be regarded as a lower bound, in case revisions to EOS enable contributions from more levels that are included in the extensive RMOP atomic datasets, and the EOS+data combination may yield higher opacities. Fig. 3 shows a comparison of the new RMOP opacity spectrum (red) with OP (black). The Sandia Z measurements are also shown (cyan), but it should be noted that the experimental values are convolved over instrument resolution and the magnitudes of individual features are not directly compatible. In the top panel in Fig. 3 the monochromatic opacities are plotted on a log\({}_{10}\)-scale, and on a linear scale in the bottom panel to better elucidate the differences. The RMOP and OP opacity spectra differ in detailed energy distribution and magnitude. In general, the RMOP background is higher and the peaks lower than OP due to opacity re-distribution, with significant enhancement around 0.7 \begin{table} \begin{tabular}{|c c c|c c|c c|} \hline & Fe xvii & & Fe xviii & & Fe xix & \\ \hline NLEV & QMHD & MHD & QMHD & MHD & QMHD & MHD \\ \hline 1 & 873.4 & 891.9 & 0.92 & 1.0 & 69.1 & 75.6 \\ 10 & 831.0 & 844.4 & 324.8 & 365.5 & 55.2 & 60.3 \\ 50 & 225.9 & 230.3 & 357.3 & 392.0 & 56.8 & 62.1 \\ 100 & 265.5 & 270.3 & 136.8 & 150.1 & 23.1 & 25.3 \\ 200 & 346.5 & 352.5 & 175.3 & 192.4 & 10.7 & 11.7 \\ 300 & 360.4 & 366.6 & 145.5 & 159.6 & 13.9 & 15.3 \\ 500 & - & - & 169.2 & 185.7 & 15.5 & 16.6 \\ \hline 700 & - & - & 189.4 & 207.9 & 12.5 & 13.7 \\ \hline 1000 & - & - & 197.9 & 217.2 & - & - \\ \hline \multicolumn{5}{|c|}{Converged RMOs with NLEV = NMAX} \\ \hline 587 & 352.6 & 358.7 & - & - & - & - \\ \hline 1591 & - & - & 196.5 & 215.6 & - & - \\ \hline 899 & - & - & - & - & 12.5 & 13.7 \\ \hline \end{tabular} \end{table} Table 1: Convergence of the Rosseland Mean Opacity (cm\({}^{2}\)/g) with QMHD and MHD equation-of-state for \(T=2\times 10^{6}K,N_{e}~{}=10^{23}cc\). NLEV = number of bound levels in EOS calculations, and NMAX = maximum number of bound levels in R-matrix atomic calculations. \begin{table} \begin{tabular}{|c c c|c c|c c|} \hline & Fe xvii & & Fe xviii & & Fe xix & \\ \hline NLEV & QMHD & MHD & QMHD & MHD & QMHD & MHD \\ \hline 1 & 456.4 & 440.0 & 1.60 & 1.64 & 419.2 & 431.1 \\ 10 & 419.8 & 403.0 & 586.6 & 602.0 & 334.8 & 344.0 \\ 50 & 111.2 & 107.9 & 654.0 & 670.9 & 351.2 & 361.4 \\ 100 & 129.0 & 124.1 & 246.4 & 252.8 & 154.4 & 159.0 \\ 200 & 156.9 & 150.9 & 232.7 & 332.0 & 82.6 & 85.0 \\ 300 & 152.8 & 147.0 & 267.9 & 274.9 & 107.5 & 110.7 \\ 500 & 142.1 & 136.7 & 315.5 & 323.6 & 117.7 & 121.2 \\ \hline 700 & - & - & 351.6 & 360.7 & 96.0 & 98.7 \\ \hline 1000 & - & - & 374.0 & 374.0 & - & - \\ \hline \multicolumn{5}{|c|}{Converged RMOs with NLEV = NMAX} \\ \hline 587 & 140.0 & 134.7 & - & - & - & - \\ \hline 1591 & - & - & 361.6 & 370.9 & - & - \\ \hline 899 & - & - & - & - & 94.0 & 96.7 \\ \hline \end{tabular} \end{table} Table 2: Convergence of RMOs (cm\({}^{2}\)/g) with QMHD-EOS and MHD-EOS at Sandia Z \(T=2.11\times 10^{6}K,N_{e}~{}=~{}3.16\times 10^{22}cc\). keV. The difference is more striking on a linear-scale in Fig. 3 (bottom panel) around 0.9-1.0 keV, where the RMOP peaks are lower by several factors. Fig. 3 also shows that the Sandia Z measurements span only a small energy range relative to the Planck function derivative dB/dT that determines the Rosseland window and therefore the RMO. 
But the considerable difference between the background RMOP opacity with experiment remains as with the earlier OP and other works (Bailey _et al._ (2015); Nahar and Pradhan (2016). As we expect, the background non-resonant R-matrix photoionization cross sections are similar to DW results. However, the RMOP results are qualitatively in better agreement with experimental results with shallower "windows" in opacity than OP, for example at \(E\approx 1.0\) keV (top panel) and several other energies. Nevertheless, there seems to be a source of background opacity in the Z experiment for iron (Nagayama _et al._ (2019) that is not considered in theoretical calculations. It is also interesting to revisit the only available comparison between and OP and OPAL occupations probabilities for the simple case of H-like C\({}^{5+}\) (Badnell and Seaton (2003). Table 3 gives these parameters, and also the level populations going up to \(n=6\). However, owing to the fact that the ground state population dominates over all other levels, and Carbon is fully ionized or H-like at given temperature-density, the RMO remains nearly constant at 170.3 cm\({}^{2}\)/g. We might expect similar behavior for Oxygen opacity, though more detailed study is needed, and of course for complex ions such as in this _Letter_. Figure 1: Fe xvii EOS parameters at BCZ conditions: occupation probabilities w(i) as function of level index \(i\) (top, red dots); \(Log_{10}\) of level populations \(Pop(i)\) vs. ionization energy (middle, black open circles); levels with percentage \(Pop(i)>0.1\%\) vs. ionization energy. The ground state population is \(11\%\) and the ionization energy is 93 Ry. The \(w(i)\) (top panel) correspond to levels \(i\) computed along spin-orbital-parity SLJ\(\pi\) symmetries of bound levels in RMOP computations (see text). Figure 3: Monochromatic opacity spectra from RMOP, OP and Sandia Z, Log\({}_{10}\)-scale (top) and linear values x \(10^{-4}\); the range of the Planck function dB/dT in the Rosseland integrand is also shown. The RMOP results demonstrate redistribution of opacity due to plasma broadening of resonances in the bound free much more than the OP DW data. Except the background, relative magnitude of experimental and theoretical data are not directly comprable since the latter are not convolved over instrumental resolution. Figure 2: Rosseland Mean Opacity vs. number of levels included in RMOP opacity computations for BCZ and Sandia Z conditions. RMOs appear to ’converge’ to constant values around NLEV \(\approx\) 200 (however, see text). ## 5 Conclusion Whereas improved opacities may now be computed with high precision atomic data using the state-of-the-art R-matrix method, the EOS remains a source of uncertainty. Therefore, the results presented herein should be considered tentative, pending more studies and comparison of (Q)MHD-EOS parameters with other equations-of-state, as well as newly improved versions (Trampedach _et al._ (2006). However, preliminary RMOP results indicate considerable differences with OP iron opacity spectrum, and by extension other existing opacity models based on the DW method and plasma broadening treatment of lines vs. resonances. 
While the present RMOP iron opacities are significantly higher than the OP owing to higher accuracy and enhanced redistribution of resonance strengths in bound-free opacity, final results might yet depend on an improved MHD-EOS resolving issues outlined herein and related to pseudo bound-free continua (Dappen _et al._ (1987); Seaton _et al._ (1994). Although the contribution may be relatively small around BCZ, completeness requires R-matrix calculations for other Fe ions (in progress). It is also noted that the Sandia Z experimental data are in a relatively small energy range and therefore inconclusive as to determination of RMOs. Although differences in background opacity with experimental data remain unexplained, there appears to be better agreement in detailed features. Finally, the atomic-plasma issues described in this _Letter_ need to be resolved accurately in order to obtain astrophysical opacities to solve the outstanding solar problem. ## Acknowledgments I would like to thank Sultana Nahar for atomic data for Fe ions and discussions. The computational work was carried out at the Ohio Supercomputer Center in Columbus Ohio, and the Unity cluster in the College of Arts and Sciences at the Ohio State University. ## Data Availability The data presented herein are available upon request from the author.
2310.00514
The CSP Dichotomy, the Axiom of Choice, and Cyclic Polymorphisms
We study Constraint Satisfaction Problems (CSPs) in an infinite context. We show that the dichotomy between easy and hard problems -- established already in the finite case -- presents itself as the strength of the corresponding De Bruijn-Erd\H{o}s-type compactness theorem over ZF. More precisely, if $\mathcal{D}$ is a structure, let $K_\mathcal{D}$ stand for the following statement: for every structure $\mathcal{X}$, if every finite substructure of $\mathcal{X}$ admits a solution to $\mathcal{D}$, then so does $\mathcal{X}$. We prove that if $\mathcal{D}$ admits no cyclic polymorphism, and thus it is NP-complete by the CSP Dichotomy Theorem, then $K_\mathcal{D}$ is equivalent to the Boolean Prime Ideal Theorem (BPI) over ZF. Conversely, we also show that if $\mathcal{D}$ admits a cyclic polymorphism, and thus it is in P, then $K_\mathcal{D}$ is strictly weaker than BPI.
Tamás Kátay, László Márton Tóth, Zoltán Vidnyánszky
2023-09-30T22:31:54Z
http://arxiv.org/abs/2310.00514v1
# The CSP Dichotomy, the axiom of choice, and cyclic polymorphisms ###### Abstract. We study Constraint Satisfaction Problems (CSPs) in an infinite context. We show that the dichotomy between easy and hard problems -- established already in the finite case -- presents itself as the strength of the corresponding De Bruijn-Erdos-type compactness theorem over ZF. More precisely, if \(\mathcal{D}\) is a structure, let \(K_{\mathcal{D}}\) stand for the following statement: for every structure \(\mathcal{X}\), if every finite substructure of \(\mathcal{X}\) admits a solution to \(\mathcal{D}\), then so does \(\mathcal{X}\). We prove that if \(\mathcal{D}\) admits no cyclic polymorphism, and thus it is NP-complete by the CSP Dichotomy Theorem, then \(K_{\mathcal{D}}\) is equivalent to the Boolean Prime Ideal Theorem (BPI) over ZF. Conversely, we also show that if \(\mathcal{D}\) admits a cyclic polymorphism, and thus it is in P, then \(K_{\mathcal{D}}\) is strictly weaker than BPI. Key words and phrases: CSP Dichotomy, Axiom of Choice, compactness 2020 Mathematics Subject Classification: Primary 03E25, Secondary 68Q17 The first and third authors were supported by Hungarian Academy of Sciences Momentum Grant no. 2022-58 and National Research, Development and Innovation Office (NKFIH) grants no. 113047, 129211. The second author was supported by the ERC Consolidator Grant 772466 "NOISE", and the NKFIH grant KKP-139502, "Groups and graph limits".

## 1 Introduction

Fix a finite relational structure \(\mathcal{D}\), the _template_. An _instance_ of the \(\mathcal{D}\)-homomorphism problem is a structure \(\mathcal{X}\) of the same signature, and a _solution_ is a homomorphism from \(\mathcal{X}\) to \(\mathcal{D}\); for example, when \(\mathcal{D}\) is the complete graph \(K_{n}\), solutions of a graph instance are exactly its proper \(n\)-colorings. The complexity of deciding whether a finite instance admits a solution is completely classified by the CSP Dichotomy Theorem.

**Theorem 1.1** (CSP Dichotomy Theorem, [11, 31]).: _For every finite structure \(\mathcal{D}\), the \(\mathcal{D}\)-homomorphism problem is either in \(P\) or \(NP\)-complete._

In this paper we investigate the infinite counterpart of this phenomenon. For a finite structure \(\mathcal{D}\), let \(K_{\mathcal{D}}\) denote the following compactness statement: for every structure \(\mathcal{X}\), if every finite substructure of \(\mathcal{X}\) admits a homomorphism to \(\mathcal{D}\), then so does \(\mathcal{X}\) (for \(\mathcal{D}=K_{n}\) this is the classical De Bruijn-Erdos compactness theorem). Our main result shows that the split between easy and hard templates reappears as the strength of \(K_{\mathcal{D}}\) over ZF.

**Theorem 1.2**.: _Let \(\mathcal{D}\) be a finite structure. (1) If \(\mathcal{D}\) admits a cyclic polymorphism, then \(K_{\mathcal{D}}\) is strictly weaker than BPI over ZF. (2) If \(\mathcal{D}\) admits no cyclic polymorphism, then \(K_{\mathcal{D}}\) is equivalent to the Boolean Prime Ideal Theorem (BPI) over ZF._

Let us briefly recall the history of the finite dichotomy. The first landmark result is due to Schaefer, who established the dichotomy for structures \(\mathcal{D}\) whose universe \(D\) has size
They proved that if \(\mathcal{D}\) has no nontrivial polymorphisms, then the \(\mathcal{D}\)-homomorphism problem is \(NP\)-complete and conjectured that conversely, if \(\mathcal{D}\) admits a nontrivial polymorphism, then the problem is in \(P\). Finally, as a culmination of all these works, Bulatov [11] and Zhuk [31] independently proved the conjecture, see Theorem 1.1 above. ### Cyclic polymorphisms The key algebraic tool for the investigation of CSPs turned out to be polymorphisms. Recall that a(n) _(\(n\)-ary) polymorphism_ is a homomorphism \(\phi:\mathcal{D}^{n}\to\mathcal{D}\). Here \(\mathcal{D}^{n}\) is the categorical power of the structure \(\mathcal{D}\), i.e. if \(R\) is a \(k\)-ary relation of \(\mathcal{D}\), then it is interpreted on \(\mathcal{D}^{n}\) as follows: let \(\overline{x}_{i}=\left(\overline{x}_{i}(1),\ldots,\overline{x}_{i}(n)\right) \in D^{n}\) for \(i=1,\ldots,k\), then \((\overline{x}_{1},\ldots,\overline{x}_{k})\in R^{\mathcal{D}^{n}}\iff\left( \overline{x}_{1}(j),\ldots\overline{x}_{k}(j)\right)\in R^{\mathcal{D}}\) for all \(1\leq j\leq n\). A useful way to look at, say, \(n\)-ary polymorphisms is that they combine \(n\)-many homomorphisms to \(\mathcal{D}\) into a new one. Clearly, projection maps are always polymorphisms. There are several results which show that the existence of essentially non-projective polymorphisms imply the existence of ones of special forms. In our considerations the following type will play a crucial role. **Definition 2.1**.: A polymorphism \(\phi:\mathcal{D}^{n}\to\mathcal{D}\), is called _cyclic_, if it satisfies \[\phi(x_{0},x_{1},\ldots,x_{n-1})=\phi(x_{1},x_{2},\ldots,x_{n-1},x_{0}),\] for all \((x_{0},\ldots,x_{n-1})\in D^{n}\). One of the fundamental results of the area is the following. **Theorem 2.2** ([2, 27]).: _The following are equivalent for a structure \(\mathcal{D}\):_ 1. \(\mathcal{D}\) _admits an identity of polymorphisms (i.e., a multivariable system of equations) not satisfied by projections._ 2. \(\mathcal{D}\) _admits a cyclic polymorphism of every large enough prime arity._ _._ 3. \(\mathcal{D}\) _admits a polymorphism_ \(f:\mathcal{D}^{4}\to\mathcal{D}\) _with_ \[f(r,a,r,e)=f(a,r,e,a),\] _for all_ \(a,e,r\in\mathcal{D}\)_._ **Definition 2.3**.: \((*)_{\mathcal{D}}\) will denote the statement that some (all) of the above conditions hold. ### Infinite versions of the CSP Dichotomy Let us briefly mention that there are several infinite versions of the CSP dichotomy problem. A flourishing direction is to allow \(\mathcal{D}\) to be infinite, while still requiring \(\mathcal{X}\) to be finite and ask about the computational complexity of the problem. In this case a rich structure theory emerges, with a myriad of questions yet to be answered (see, e.g., [5, 6, 7, 8]). Another direction, which has been recently initiated by Thornton [29] is to keep \(\mathcal{D}\) finite, and require the homomorphisms and the instance to be Borel. An advantage of this approach is that in the Borel context, it can be proved that say, \(2\)-coloring is easier than \(3\)-coloring, see [30]. It seems however, that solving systems of linear equations over finite fields is already hard in the Borel context [14], thus, the split between hard and easy problems occurs at a different place. The direction explored in this paper is to keep \(\mathcal{D}\) finite and investigate the strength of the statement \(K_{\mathcal{D}}\) for infinite \(\mathcal{X}\). 
We build on the work of Levy, Mycielski, and Lauchli [23, 25, 22], who showed that over ZF, for \(n\geq 3\) the statement \(K_{K_{n}}\) is equivalent to BPI and \(K_{K_{2}}\) is significantly weaker (see also [17, 15]). Our Theorem 1.2 shows that the same split between easy and hard problems occurs in the infinite context as in the CSP Dichotomy. That is, \(K_{\mathcal{D}}\) is equivalent to BPI if \((*)_{\mathcal{D}}\) does not hold, while otherwise it is a weaker statement. Note also that in our case this is a dichotomy in the usual sense, unlike the finite case, where if \(P=NP\) then the two cases collapse into one. In the rest of the paper, we work over ZF, unless specified otherwise. ### Hard problems First, let us discuss our proof of the second statement of Theorem 1.2. We will build on the work Bulatov-Jeavons-Krokhin [10] and Thornton [29]. They provide a constructive way of reduction between homomorphism problems of structures for which \((*)_{\mathcal{D}}\) fails. This reduction turns out to be essentially sufficient for our purposes as well. Motivated by the statements \(K_{\mathcal{D}}\), we define a new notion of reduction between problems. **Definition 2.4**.: Let \(\mathcal{D}\) and \(\mathcal{E}\) be finite structures. We say that the _\(\mathcal{E}\)-homomorphism problem finitely reduces to the \(\mathcal{D}\)-homomorphism problem_, or in short, _\(\mathcal{E}\) finitely reduces to \(\mathcal{D}\)_, if there exist operations \(\Gamma\), \(\Phi\) and \(\Psi\) such that: (1) if \(\mathcal{X}\) is an instance of \(\mathcal{E}\), then \(\Gamma(\mathcal{X})\) is an instance of \(\mathcal{D}\); (2) for every instance \(\mathcal{X}\) of \(\mathcal{E}\) the operation \(\Phi\) maps \(\mathcal{X}\to\mathcal{E}\) homomorphisms to \(\Gamma(\mathcal{X})\to\mathcal{D}\) homomorphisms and \(\Psi\) maps \(\Gamma(\mathcal{X})\to\mathcal{D}\) homomorphisms to \(\mathcal{X}\to\mathcal{E}\) homomorphisms; (3) if there exists a finite substructure \(\mathcal{H}\) of \(\Gamma(\mathcal{X})\) that does not admit a homomorphism to \(\mathcal{D}\), then there exists a finite substructure \(\mathcal{F}\) of \(\mathcal{X}\) that does not admit a homomorphism to \(\mathcal{E}\). **Remark 2.5**.: Observe that finite reducibility is a transitive relation. **Remark 2.6**.: Assuming that (1) and (2) of Definition 2.4 are satisfied, to check (3) it suffices to verify the following. For every finite substructure \(\mathcal{H}\) of \(\Gamma(\mathcal{X})\) there exists a finite substructure \(\mathcal{F}\) of \(\mathcal{X}\) such that there exists an \(\mathcal{H}\to\Gamma(\mathcal{F})\) homomorphism. Proof.: Assume that a finite substructure \(\mathcal{H}\) of \(\Gamma(\mathcal{X})\) does not admit a homomorphism to \(\mathcal{D}\). Since there is a finite substructure \(\mathcal{F}\) of \(\mathcal{X}\) and an \(\mathcal{H}\to\Gamma(\mathcal{F})\) homomorphism, \(\Gamma(\mathcal{F})\) cannot admit a homomorphism to \(\mathcal{D}\). Then, by (2) of Definition 2.4, \(\mathcal{F}\) cannot admit a homomorphism to \(\mathcal{E}\). The definition of finite reducibility is tailored to suit the next statement. **Proposition 2.7**.: _For finite structures \(\mathcal{D}\) and \(\mathcal{E}\), if the \(\mathcal{E}\)-homomorphism problem finitely reduces to the \(\mathcal{D}\)-homomorphism problem, then \(K_{\mathcal{D}}\implies K_{\mathcal{E}}\)._ Proof.: Assume that \(\mathcal{E}\) finitely reduces to \(\mathcal{D}\) and \(K_{\mathcal{D}}\) holds. 
Let \(\mathcal{X}\) be an instance of \(\mathcal{E}\), and suppose that every finite substructure \(\mathcal{F}\) of \(\mathcal{X}\) admits a homomorphism to \(\mathcal{E}\). Consider \(\Gamma(\mathcal{X})\) (given by Definition 2.4), which is an instance of \(\mathcal{D}\). By (3) of Definition 2.4, every finite substructure of \(\Gamma(\mathcal{X})\) admits a homomorphism to \(\mathcal{D}\). Thus, by \(K_{\mathcal{D}}\), there exists a \(\Gamma(\mathcal{X})\to\mathcal{D}\) homomorphism. Now (2) of Definition 2.4 provides an \(\mathcal{X}\to\mathcal{E}\) homomorphism. We need two more definitions to describe how homomorphism problems can be reduced to each other. **Notation.** Let \(\Sigma_{\mathcal{D}}\) denote the signature of the structure \(\mathcal{D}\), i.e., the set of relations of \(\mathcal{D}\). **Definition 2.8**.: For finite structures \(\mathcal{D}\) and \(\mathcal{E}\) we say that \(\mathcal{E}\)_is a \(pp\)-power of \(\mathcal{D}\)_ if for some \(n\in\mathbb{N}\) we have \(E=D^{n}\) and for every \(k\in\mathbb{N}\) and relation symbol \(R\) of arity \(k\) in \(\Sigma_{\mathcal{E}}\) there exist \(m_{R}\in\mathbb{N}\) and relation symbols \(\alpha_{R,1},\ldots,\alpha_{R,m_{R}}\)2 in \(\Sigma_{\mathcal{D}}\cup\{=\}\) such that Footnote 2: The \(\alpha_{R,i}\) may have different arity. In the displayed formula we treat them as \((kn+r)\)-ary relations only for notational simplicity. \[(\overline{z}_{1},\ldots,\overline{z}_{k})\in R^{\mathcal{E}}\iff\exists \overline{w}\ \bigwedge_{i=1}^{m_{R}}\alpha_{R,i}{}^{\mathcal{D}}(\overline{z}_{1},\ldots, \overline{z}_{k},\overline{w}). \tag{2.1}\] **Definition 2.9**.: Two structures \(\mathcal{D}\) and \(\mathcal{E}\) are _homomorphically equivalent_ if there exist \(\mathcal{D}\to\mathcal{E}\) and \(\mathcal{E}\to\mathcal{D}\) homomorphisms. Now we can state our result. **Theorem 2.10**.: _For finite structures \(\mathcal{D}\) and \(\mathcal{E}\) we have the following:_ _(A) If \(\mathcal{D}\) and \(\mathcal{E}\) are homomorphically equivalent, then they finitely reduce to each other._ _(B) If \(\mathcal{E}\) is a \(pp\)-power of \(\mathcal{D}\), then \(\mathcal{E}\) finitely reduces to \(\mathcal{D}\)._ Proof.: See Section 3. To exploit this result, we use the following theorem which follows from the work of Barto, Kozik, and Pinsker [4] together with the classical results of Taylor [28] (see also [29, Theorem 2.6]). **Theorem 2.11**.: _Assume that \(\mathcal{D},\mathcal{E}\) are finite structures so that \(\neg(*)_{\mathcal{D}}\). Then \(\mathcal{E}\) is homomorphically equivalent to a \(pp\)-power of \(\mathcal{D}\)._ This yields a complete characterization of the strength of the statement \(K_{\mathcal{D}}\) in case \((*)_{\mathcal{D}}\) fails to hold. **Corollary 2.12**.: _If \(\neg(*)_{\mathcal{D}}\), then \(K_{\mathcal{D}}\iff K_{K_{3}}\iff BPI\)._ Proof.: To see the first equivalence, by Proposition 2.7, it suffices to prove that \(K_{\mathcal{D}}\) and \(K_{K_{3}}\) finitely reduce to each other. By Theorems 2.10 and 2.11, this follows from the fact that both \((*)_{\mathcal{D}}\) and \((*)_{K_{3}}\) fail. The second equivalence follows from the work of Levy, Mycielski, and Lauchli [23, 25, 22]. ### Easy problems Second, let us give a high level overview of the main ideas of the proof of (1) of Theorem 1.2. Call an instance \(\mathcal{X}\) of \(\mathcal{D}\)_finitely solvable_, if every finite substructure of \(\mathcal{X}\) admits a homomorphism to \(\mathcal{D}\). 
In order to construct models in which, say, \(K_{K_{3}}\) fails, we use classical ideas of Mostowski and Fraenkel. They constructed so-called _permutation models_, that is, models in which all of the axioms of ZF hold except for the Axiom of Extensionality: in addition to \(\emptyset\), there is a collection of _atoms_ which do not have elements. The corresponding axiom system is denoted by ZFA. While these models are not models of ZF, forcing arguments using similar ideas often yield ZF models with analogous properties, as happens in our case as well (see, e.g., [18, Chapter 15]). We fix some subgroup \(\Gamma\) of the permutation group of the atoms, and elements of this model will be required to be invariant under some further "large" subgroups \(\Gamma^{\prime}<\Gamma\). (The action on atoms naturally extends to an action on the sets containing atoms, sets of atoms, etc.) In such a way we will ensure that the atoms are somewhat indistinguishable from each other within the model. Now, for a fixed structure \(\mathcal{D}\) with \((*)_{\mathcal{D}}\) we take a prime \(p\) and a cyclic polymorphism \(\phi\) of arity \(p\) (see Definition 2.3). We define a graph on the atoms, consisting of disjoint cycles of size \(p\), and choose the group to be the one generated by rotations of single cycles. It will easily follow that such a graph cannot have a \(3\)-coloring in the model (i.e., an invariant one), but every finite subgraph has one, that is, \(\neg K_{K_{3}}\). In order to show that \(K_{\mathcal{D}}\) holds in this model, we have to prove that given a finitely solvable instance \(\mathcal{X}\) of the \(\mathcal{D}\)-homomorphism problem, it has a solution (i.e., an \(\mathcal{X}\to\mathcal{D}\) homomorphism) in the model. By the definition of the model, \(\mathcal{X}\) is invariant under some subgroup \(\Gamma^{\prime}\), and it suffices to construct a \(\Gamma^{\prime}\)-invariant solution. The main observation is that this can be done starting from an arbitrary solution, outside the model, which exists by finite solvability and compactness (using AC). We can then use compactness (AC) again to find an invariant solution inside the model. Since we have compactness, it suffices to construct solutions that are invariant under any finite subset of \(\Gamma^{\prime}\). This can be done using cyclic polymorphisms: if \(h_{0}\) is a homomorphism, \(\alpha\) is a group element with \(\alpha^{p}=1\), and \(\phi\) is a cyclic polymorphism of \(\mathcal{D}\) of arity \(p\), then \(h=\phi(h_{0},\alpha\cdot h_{0},\ldots,\alpha^{p-1}\cdot h_{0})\) is a solution invariant under \(\alpha\). To make this intuition precise, we proceed to describe a general theorem, which can be applied without familiarity with forcing or abstract set theory. Let \(\Gamma\) be a group. A collection \(\mathcal{F}\) of subgroups of \(\Gamma\) is called a _filter_ if * \(\{1\}\not\in\mathcal{F}\), \(\Gamma\in\mathcal{F}\), * \(\Delta,\Delta^{\prime}\in\mathcal{F}\) implies \(\Delta\cap\Delta^{\prime}\in\mathcal{F}\), * \(\Delta\in\mathcal{F}\), \(\Delta<\Delta^{\prime}\) implies \(\Delta^{\prime}\in\mathcal{F}\), * \(\Delta\in\mathcal{F}\), \(\gamma\in\Gamma\) implies \(\gamma^{-1}\Delta\gamma\in\mathcal{F}\). The main example of a filter to be kept in mind for a group acting on an infinite set \(T\) is the collection of subgroups that contain the _pointwise stabilizer_ \(\operatorname{Stab}_{pw}(F)\) of some finite set \(F\subset T\).
If \(\,\cdot\,\Gamma\times X\to X\) is an action of \(\Gamma\) on a set \(X\) and \(Y_{i}\subset X\) for \(i\in I\), then the setwise _stabilizer of \((Y_{i})_{i\in I}\)_ is the subgroup \[\operatorname{Stab}((Y_{i})_{i\in I})=\{\gamma\in\Gamma:\forall i\in I\ \gamma \cdot Y_{i}=Y_{i}\}.\] The action \(\cdot\) extends to \(X^{n}\), \(\bigcup_{n}X^{n}\) coordinate-wise. Similarly, if \(D\) is any set on which \(\Gamma\) does not act (or equivalently, assumed to act trivially), we can extend the action to \(X\times D\). We also use \(\cdot\) to denote these actions. Note that this allows us to talk about the stabilizers of structures on \(X\) (as they are collections of subsets of \(\bigcup_{n}X^{n}\)) and functions \(X\to D\), as they are subsets of \(X\times D\). For \(S\subset\Gamma\), an action of \(\Gamma\) on a set \(Z\), a set \(Y\subset Z\) is called _\(S\)-invariant_ if \(S\subset\operatorname{Stab}(Y)\). **Definition 2.13**.: Assume that \(\Gamma\) acts on \(X\), the universe of some structure \(\mathcal{X}\). We say that \(\mathcal{X}\) is _\(\mathcal{F}\)-symmetric (w.r.t. the given action of \(\Gamma\))_ if \(\operatorname{Stab}(\mathcal{X})\in\mathcal{F}\). In Section 4, we prove the following theorem, which one can use as a black box, without familiarity with abstract set theory. **Theorem 2.14**.: _Let \(\Gamma\) be a group and \(\mathcal{F}\) be a filter of subgroups of \(\Gamma\). Assume that for some structures \(\mathcal{D},\mathcal{E}\) the below statements can be proved in ZFC:_ 1. _For every finitely solvable_ \(\mathcal{F}\)_-symmetric_ \(\mathcal{D}\)_-instance_ \(\mathcal{X}\) _there exists a_ \(\mathcal{F}\)_-symmetric homomorphism from_ \(\mathcal{X}\) _to_ \(\mathcal{D}\)_._ 2. _There exists a finitely solvable_ \(\mathcal{F}\)_-symmetric_ \(\mathcal{E}\)_-instance_ \(\mathcal{Y}\) _with_ \(\operatorname{Stab}(F)\in\mathcal{F}\) _for all finite_ \(F\subset Y\)_, which does not admit an_ \(\mathcal{F}\)_-symmetric homomorphism to_ \(\mathcal{E}\)_._ _Then there exists a model of ZFA, in which \(K_{\mathcal{D}}\) holds but \(K_{\mathcal{E}}\) fails._ Proof.: See Section 4. We immediately obtain a more combinatorial form of the theorem using a compactness argument. **Corollary 2.15**.: _Theorem 2.14 holds even if we replace (1) by_ 1. _For every_ \(\mathcal{F}\)_-symmetric finitely solvable_ \(\mathcal{D}\)_-instance_ \(\mathcal{X}\)_, there exists some_ \(\Gamma^{\prime}\in\mathcal{F}\) _such that for every finite_ \(S\subset\Gamma^{\prime}\) _and_ \(\{x_{1},\ldots,x_{n}\}\subset X\) _there exists an_ \(S\)_-invariant partial homomorphism of_ \(\mathcal{X}\upharpoonright\{x_{1},\ldots,x_{n}\}\) _to_ \(\mathcal{D}\)_._ _In particular, to check that (1) holds it suffices to find a subgroup \(\Gamma^{\prime}\in\mathcal{F}\) such that for all finite \(S\subset\Gamma^{\prime}\) there is an \(S\)-invariant homomorphism from \(\mathcal{X}\) to \(\mathcal{D}\)._ Proof of Corollary 2.15.: We show that (1) of Theorem 2.14 holds using a compactness argument. Take the compact topological space \(D^{X}\), where \(D\) is endowed with the discrete topology. The following subsets are closed in this space for all \(\gamma\in\Gamma,x_{i}\in X,n\in\mathbb{N}\): \(C_{\gamma}=\{h:h\text{ is }\gamma\text{-invariant}\}\), \(C_{x_{1},\ldots,x_{n}}=\{h:h\upharpoonright\{x_{1},\ldots,x_{n}\}\text{ is a partial homomorphism}\}\). 
By our assumption, the intersection of any finite collection from \((C_{\gamma})_{\gamma\in\Gamma^{\prime}},(C_{x_{1},\ldots,x_{n}})_{x_{i}\in X}\) is nonempty, hence there is some \(h\) in the intersection of all of these sets (note that we work in ZFC, so compactness can be used), and such an \(h\) is a \(\Gamma^{\prime}\)-invariant, and thereby \(\mathcal{F}\)-symmetric homomorphism \(\mathcal{X}\to\mathcal{D}\). Finally, using forcing arguments, we obtain a ZF result as well. The more precise version of our main theorem reads as follows. **Theorem 2.16**.: _There exists a model of ZF in which \(K_{\mathcal{D}}\) holds precisely if \((*)_{\mathcal{D}}\) does._ ## 3. Hard problems are hard In this section, we show Theorem 2.10: **Theorem 2.10**.: _For finite structures \(\mathcal{D}\) and \(\mathcal{E}\) we have the following:_ _(A) If \(\mathcal{D}\) and \(\mathcal{E}\) are homomorphically equivalent, then they finitely reduce to each other._ _(B) If \(\mathcal{E}\) is a \(pp\)-power of \(\mathcal{D}\), then \(\mathcal{E}\) finitely reduces to \(\mathcal{D}\)._ Proof.: (A) Now the structures \(\mathcal{D}\) and \(\mathcal{E}\) are of the same signature and there are homomorphisms \(\theta_{1}:\mathcal{D}\to\mathcal{E}\) and \(\theta_{2}:\mathcal{E}\to\mathcal{D}\). Set \(\Gamma(\mathcal{X})=\mathcal{X}\), \(\Phi(\varphi)=\theta_{2}\circ\varphi\) and \(\Psi(\psi)=\theta_{1}\circ\psi\). This clearly works. (B) **Sketch.** The precise argument is quite heavy on notation, so we give an informal outline. Details can be found in the appendix. By definition, we have \(E=D^{n}\), and by (2.1), \((\overline{z}_{1},\ldots,\overline{z}_{k})\in R^{\mathcal{E}}\) comes with some number of witnesses in \(D\). Given an instance \(\mathcal{X}\) of \(\mathcal{E}\), we build the \(\mathcal{D}\)-instance \(\Gamma(\mathcal{X})\) by adding the following elements: * \(n\) "formal coordinates" for every \(x\in X\); * the appropriate number of "formal witnesses of relation" for every relation symbol \(R\) in \(\Sigma_{\mathcal{E}}\) and tuple \(\overline{x}\in R^{\mathcal{X}}\), arising from (2.1). We must carefully manage using the equality sign in (2.1). First, we can assume without loss of generality that no witness appears in any of the equalities. Second, the appropriate formal coordinates of those \(x\in X\) that appear in relations \(R^{\mathcal{X}}\) whose \(pp\)-definitions include equalities need to be identified. (Formally, we quotient out by the generated equivalence relation.) We need to transfer homomorphisms (to \(\mathcal{E}\) and \(\mathcal{D}\)) between \(\mathcal{X}\) and \(\Gamma(\mathcal{X})\). On the one hand, a homomorphism \(\varphi:\mathcal{X}\to\mathcal{E}\) gives rise to a map \(\Phi(\varphi):\Gamma(\mathcal{X})\to\mathcal{D}\) in a straightforward way, sending formal coordinates of \(x\in X\) to actual coordinates of \(\varphi(x)\) (and by necessity this factors through the quotient map). The relations on \(\Gamma(\mathcal{X})\) are tailored to ensure that \(\Phi(\varphi)\) is a homomorphism. On the other hand, if \(\psi:\Gamma(\mathcal{X})\to\mathcal{D}\) is a homomorphism, evaluating \(\psi\) on all formal coordinates of each \(x\in X\) gives rise to a map \(\Psi(\psi):\mathcal{X}\to D^{n}=E\). The images of formal witnesses serve as actual witnesses in \(D\), and the identification of the necessary formal coordinates ensures that all equalities in (2.1) are satisfied. The two together imply that \(\Psi(\psi)\) is a homomorphism to \(\mathcal{E}\). 
We also need to check (3) of Definition 2.4, regarding the finite substructures. We do this using Remark 2.6, by collecting necessary elements of \(X\) into a finite set \(F\) to provide enough relations in \(\Gamma(\mathcal{F})\) for the images of all related tuples of \(\mathcal{H}\). Again, we have to take into account the identification of formal coordinates, which forces us to add extra elements to \(F\) to ensure that all the necessary identifications already happen in \(\Gamma(\mathcal{F})\). We make choices at multiple points along the construction but these are always possible in ZF because we always choose either finitely many times or from some fixed finite set. Let us remark that the first version of this manuscript used the technique developed in [29] to deal with the case of equality. However, Thornton pointed out to us that this is not necessary. ## 4. Easy problems are easy In this section, we show Theorem 2.14 and the remaining part of our main result, Theorem 1.2. ### ZFA results First, as a warm-up, we prove the following general theorem about ZFA models. **Theorem 2.14**.: Let \(\Gamma\) be a group and \(\mathcal{F}\) be a filter of subgroups of \(\Gamma\). Assume that for some structures \(\mathcal{D},\mathcal{E}\) the below statements can be proved in ZFC: 1. For every finitely solvable \(\mathcal{F}\)-symmetric \(\mathcal{D}\)-instance \(\mathcal{X}\) there exists a \(\mathcal{F}\)-symmetric homomorphism from \(\mathcal{X}\) to \(\mathcal{D}\). 2. There exists a finitely solvable \(\mathcal{F}\)-symmetric \(\mathcal{E}\)-instance \(\mathcal{Y}\) with \(\operatorname{Stab}(F)\in\mathcal{F}\) for all finite \(F\subset Y\), which does not admit an \(\mathcal{F}\)-symmetric homomorphism to \(\mathcal{E}\). Then there exists a model of ZFA in which \(K_{\mathcal{D}}\) holds, but \(K_{\mathcal{E}}\) fails. In fact, the model depends only on \(\Gamma\), \(\mathcal{F}\) and \(\mathcal{Y}\). Proof.: Let \(\mathcal{Y}\) be the structure from (2). We build a permutation submodel of the universe \(V\) as in [18, 15.48]. Let the set of atoms \(Y^{\prime}\) be chosen so that there is a bijection \(b:Y^{\prime}\to Y\) and define \(\mathcal{Y}^{\prime}\) to be a structure on \(Y^{\prime}\) by pulling back the relations form \(\mathcal{Y}\). Similarly, define the \(\Gamma\) action on \(Y^{\prime}\) by \(\gamma\cdot y^{\prime}=b^{-1}(\gamma\cdot b(y^{\prime}))\). Let \(U\) be the permutation model corresponding to the set of atoms \(Y^{\prime}\) and the action \(\cdot\) defined above. By the \(\mathcal{F}\)-symmetricity of \(\mathcal{Y}\), \(\mathcal{Y}^{\prime}\in U\), and by the nonexistence of an \(\mathcal{F}\)-symmetric homomorphism to \(\mathcal{E}\), the \(\mathcal{E}\)-instance \(\mathcal{Y}^{\prime}\) is not solvable in \(U\). As the stabilizer of every finite set \(F\subset Y^{\prime}\) is in the filter, every partial map \(Y^{\prime}\to E\) with a finite domain is in \(U\), in particular, \(\mathcal{Y}^{\prime}\) is finitely solvable. Thus, \(U\models\neg K_{\mathcal{E}}\). Now, let \(\mathcal{X}\in U\) be a finitely solvable \(\mathcal{D}\)-instance. We check that it is such an \(\mathcal{F}\)-symmetric instance in \(V\) as well. Indeed, \(\mathcal{X}\) is hereditarily \(\mathcal{F}\)-symmetric in \(V\). To see that it is finitely solvable, just note that as \(\mathcal{F}\) is a filter, any finite set containing only hereditarily \(\mathcal{F}\)-symmetric elements is in \(U\), so every finite substructure of \(\mathcal{X}\) in \(V\) is also in \(U\). 
By our assumptions, it admits an \(\mathcal{F}\)-symmetric homomorphism \(h\) to \(\mathcal{D}\) in \(V\). But, as all elements of \(X\) are hereditarily symmetric, so are all elements of \(h\). Thus, \(h\) is hereditarily symmetric as well, in particular, \(h\in U\). Now we apply the above theorem in our particular case. **Theorem 4.1**.: _There exists a model of ZFA in which \(K_{\mathcal{D}}\) holds if \((*)_{\mathcal{D}}\) does, and \(K_{K_{3}}\) fails._ Let us remark that a careful examination of the constructions presented in Section 3 shows that in the model below, \(K_{\mathcal{D}}\) fails for all \(\mathcal{D}\) with \(\neg(*)_{\mathcal{D}}\). The model is a straightforward modification of the model \(\mathcal{N}2^{*}(3)\) from [17], which one could call \(\mathcal{N}2^{*}(\text{Prime})\). **Definition 4.2**.: Let \(\Gamma=\oplus_{p\text{ prime}}\mathbb{Z}_{p}\), with the standard generating set \((\gamma_{p_{i}})_{p_{i}}\). Then \(\Gamma\) acts on the set \[Y=\bigcup_{p_{i}\text{ is the }i\text{th prime}}\{i\}\times\{0,\ldots,p_{i}-1\},\] by \(\gamma_{p_{i}}\cdot(i,j)=(i,j+1\mod p_{i})\) and fixing every other element of \(Y\). Let \(\mathcal{Y}\) be the Schreier graph of \(\Gamma\)'s action on \(Y\) (w.r.t. the generating set \((\gamma_{p})_{p}\)). Let \(\mathcal{F}\) be the filter generated by the subgroups \(\langle\gamma_{p_{i}}:i\geq n\rangle\) for \(n\in\mathbb{N}\). Proof of Theorem 4.1.: We apply Theorem 2.14, using the group \(\Gamma\) defined above. _First_, we show that the "in particular" part of Corollary 2.15 holds, thereby guaranteeing (1) of Theorem 2.14. Let \(\mathcal{D}\) be a structure with \((*)_{\mathcal{D}}\) and assume that \(\mathcal{X}\) is an \(\mathcal{F}\)-symmetric finitely solvable instance of \(\mathcal{D}\). Let \(\Gamma^{\prime}\) witness that \(\mathcal{X}\) is \(\mathcal{F}\)-symmetric, fix an \(n\in\mathbb{N}\) and a sequence \((\phi_{p})_{p\geq p_{n}}\) of cyclic polymorphisms of arity \(p\) as in Definition 2.3. We start with an easy observation. **Claim 4.3**.: _Let \(\gamma\in\Gamma^{\prime}\) with \(\gamma^{p}=1\) and \(h_{0}:\mathcal{X}\to\mathcal{D}\) be a homomorphism. Then the map_ \[h(x)=\phi_{p}(h_{0}(x),h_{0}(\gamma^{-1}\cdot x),h_{0}(\gamma^{-2}\cdot x), \ldots,h_{0}(\gamma^{-p+1}\cdot x))\] _is a \(\{\gamma\}\)-invariant homomorphism. Moreover, if \(h_{0}\) is \(\{\gamma^{\prime}\}\)-invariant for some \(\gamma^{\prime}\in\Gamma^{\prime}\), then so is \(h\)._ Proof.: First note that by the \(\Gamma^{\prime}\)-invariance of \(\mathcal{X}\), for any \(\delta\in\Gamma^{\prime}\) the map \(x\mapsto h_{0}(\delta^{-1}\cdot x)\) is a homomorphism.
Thus, \(h\) is a homomorphism, and by the cyclicity of \(\phi_{p}\) we get \[(\gamma\cdot h)(x)=h(\gamma^{-1}\cdot x)=\] \[\phi_{p}(h_{0}(\gamma^{-1}\cdot x),h_{0}(\gamma^{-2}\cdot x),\ldots,h_{0}( \gamma^{-p}\cdot x))=\] \[\phi_{p}(h_{0}(\gamma^{-p}\cdot x),h_{0}(\gamma^{-1}\cdot x),\ldots,h_{0}( \gamma^{-p+1}\cdot x))=h(x),\] using that \(\gamma^{-p}\cdot x=x\). In order to check the \(\{\gamma^{\prime}\}\)-invariance, just observe that \(\Gamma^{\prime}\) is abelian, hence \[(\gamma^{\prime}\cdot h)(x)=h(\gamma^{\prime-1}\cdot x)=\] \[\phi_{p}(h_{0}(\gamma^{\prime-1}\cdot x),h_{0}(\gamma^{-1}\cdot\gamma^{\prime -1}\cdot x),\ldots,h_{0}(\gamma^{-p+1}\cdot\gamma^{\prime-1}\cdot x))=\] \[\phi_{p}(h_{0}(\gamma^{\prime-1}\cdot x),h_{0}(\gamma^{\prime-1}\cdot\gamma^{ -1}\cdot x),\ldots,h_{0}(\gamma^{\prime-1}\cdot\gamma^{-p+1}\cdot x))=\] \[\phi_{p}(h_{0}(x),h_{0}(\gamma^{-1}\cdot x),h_{0}(\gamma^{-2}\cdot x),\ldots,h_{0} (\gamma^{-p+1}\cdot x))=h(x).\] Let \[\Gamma^{\prime\prime}=\Gamma^{\prime}\cap\text{Stab}_{pw}(\{(i,j):i\leq n\}),\] which is in \(\mathcal{F}\), by the definition of \(\mathcal{F}\). Let \(S\subset\Gamma^{\prime\prime}\) be finite. We have to show that there is an \(S\)-invariant homomorphism \(h:\mathcal{X}\to\mathcal{D}\). Pick a finite sequence of generators \((\gamma_{p_{i}})_{n\leq i\leq k}\) such that \(S\subset\langle\gamma_{p_{i}}:n\leq i\leq k\rangle\). Since \(\mathcal{X}\) is finitely solvable, there is a homomorphism \(h_{0}:\mathcal{X}\to\mathcal{D}\). Applying Claim 4.3 inductively to \((\gamma_{p_{i}}),(\phi_{p_{i}})_{n\leq i\leq k}\), starting from \(h_{0}\), we get a homomorphism invariant under each \(\gamma_{p_{i}}\), \(n\leq i\leq k\), and in turn under all elements of \(S\). This shows that the hypothesis in the "in particular" part of Corollary 2.15 holds, which implies (1) of Theorem 2.14. _Second_, we check (2) of Theorem 2.14 for \(\mathcal{E}=K_{3}\), i.e., \(3\)-coloring. Let \(\mathcal{Y}\) be the graph from Definition 4.2; it is clear that \(\mathcal{Y}\) is \(\Gamma\)-invariant. Moreover, \(\mathcal{Y}\) is the vertex-disjoint union of cycles, hence it admits a \(3\)-coloring. Now, if \(c\) were an \(\mathcal{F}\)-symmetric \(3\)-coloring, then we could find a finite set \(F\subset Y\) such that for every \(\gamma\in\operatorname{Stab}_{pw}(F)\) we had \(\gamma\cdot c=c\). In particular, for every large enough \(p_{i}\) we had \(\gamma_{p_{i}}\cdot c=c\), which means that \(c\) is constant on the cycle corresponding to \(p_{i}\), a contradiction. To finish the proof of Theorem 4.1, observe that we used the same \(\Gamma\), \(\mathcal{F}\) and \(\mathcal{Y}\) for all \(\mathcal{D}\), hence in the model guaranteed by Theorem 2.14 the statement \(K_{\mathcal{D}}\) holds if \((*)_{\mathcal{D}}\) is true. ### ZF results Now we are ready to prove the main result of the paper: **Theorem 2.16**.: There is a model of ZF in which \(K_{\mathcal{D}}\) holds precisely when \((*)_{\mathcal{D}}\) does. The argument below is the forcing version of the one presented in Theorem 4.1. Proof of Theorem 2.16.: We build a symmetric submodel of a generic extension as in [18, Chapter 15]. In order to make the forcing argument work, we need to consider a slightly modified version of the group from Definition 4.2. Let \(\mathcal{Y}\) be the graph from Definition 4.2, define \(\Gamma_{i}=\oplus_{n\in\mathbb{N}}\mathbb{Z}_{p_{i}}\), the infinite-fold direct sum of the group \(\mathbb{Z}_{p_{i}}\) with itself, and let \(\Delta=\oplus_{i}\Gamma_{i}\). In other words, \(\Delta\) is the sum of \(\mathbb{Z}_{p}\)s, where each cyclic group appears infinitely often.
Fix an enumeration of the generators \((\gamma_{p_{i},n})_{n\in\mathbb{N},p_{i}\text{ prime}}\), and let \(\mathcal{H}\) be the filter generated by the subgroups \(\langle\gamma_{p_{i},n}:n\in\mathbb{N},i\geq l\rangle\), for \(l\in\mathbb{N}\). To each element of \(Y\), we will add a countable set of Cohen reals on which sets \(\Delta\) is going to act. To achieve this, we define an action of \(\Delta\) on \(Y\times\mathbb{N}\times\mathbb{N}\) as follows. For each \(i\), fix a bijection \(b_{i}:\mathbb{N}\to\Gamma_{i}\). A generator \(\gamma_{p_{i},n}\) acts on the \(Y\) coordinate as in Definition 4.2, while for each \(Y\ni v=(i,j)\) the corresponding first \(\mathbb{N}\) is identified with \(\Gamma_{i}\) via \(b_{i}\), and there \(\gamma_{p_{i},n}\) acts by translation. Formally, for every \(\gamma_{p_{i},n}\), let \[\gamma_{p_{i},n}\cdot((i,j),m,l):=((i,j+1\mod p_{i}),b_{i}^{-1}(\gamma_{p_{i}, n}b_{i}(m)),l),\] and let \(\gamma_{p_{i},n}\) fix every other element. Let us collect the most important observations about this action. **Claim 4.4**.: _Let \(F\subset Y\times\mathbb{N}\times\mathbb{N}\) be finite. Then_ 1. \(\operatorname{Stab}_{pw}(F)\in\mathcal{H}\)_._ 2. \(\forall i\ \exists n\ \forall((i,j),m,l)\in F\ \gamma_{p_{i},n}\cdot((i,j),m,l)\not\in F\)_._ 3. \(\forall i,n,m\)__ \[\gamma_{p_{i},n}\cdot\{((i,j),m,l):l\in\mathbb{N}\}=\{((i,j+1),b_{i}^{-1}( \gamma_{p_{i},n}b_{i}(m)),l):l\in\mathbb{N}\}.\] Proof.: The first and last statements are clear. For the second one, define a partial map \(b^{\prime}_{i}((i,j),m,l)=b_{i}(m)\), and let \(n\) be such that \(\gamma_{p_{i},n}\) does not appear in \(b^{\prime}_{i}(F)\), that is, the support of every element of \(b^{\prime}_{i}(F)\) is disjoint from the copy of \(\mathbb{Z}_{p_{i}}\) corresponding to \(\gamma_{p_{i},n}\). Then \(\gamma_{p_{i},n}\) appears in every element of \(b^{\prime}_{i}(\gamma_{p_{i},n}\cdot F)\), in particular, there is no \(((i,j),m,l)\in F\) with \(\gamma_{p_{i},n}\cdot((i,j),m,l)\in F\). Define \(P\) to be the forcing notion consisting of partial functions of \(Y\times\mathbb{N}\times\mathbb{N}\to 2\) with finite domain ordered by reverse inclusion. The \(\Delta\) action on \(Y\times\mathbb{N}\) gives rise to a \(\Delta\) action on \(P\). **Claim 4.5**.: _For every \(q\in P\)_ 1. \(\operatorname{Stab}_{pw}(q)\in\mathcal{H}\)__ 2. \(\forall i\ \exists n\) _such_ \(\gamma_{p_{i},n}\cdot q\) _and_ \(q\) _are compatible._ Proof.: Applying Claim 4.4 to \(\operatorname{dom}(q)\) yields the desired conclusions. Let \(B(P)\) be the complete Boolean algebra to which \(P\) densely embeds (see [18, Corollary 14.12]). We will identify \(P\) with its copy in \(B(P)\). Let \(M\) be the ground model. The action of \(\Delta\) on \(P\) gives rise to an action on \(B(P)\) and on the collection of names \(M^{B(P)}\) (see [18, 14.38]). Let \(HS\) be the collection of hereditarily \(\mathcal{H}\)-symmetric names, \(G\) be a generic filter on \(P\), and \(M[G]\) be the forcing extension. Define \[N=\{\dot{x}^{G}:\dot{x}\in HS\}.\] Then \(N\subset M[G]\) is a model of ZF by [18, Lemma 15.51]. The most important technical tool to be used is the Symmetry Lemma [18, 14.37]: for every \(\gamma\in\Delta\), \(q\in B(P)\), formula \(\phi\) and names \(\dot{x}_{0},\ldots,\dot{x}_{k-1}\) we have \[q\Vdash\phi(\dot{x}_{0},\ldots,\dot{x}_{k-1})\iff\gamma\cdot q\Vdash\phi( \gamma\cdot\dot{x}_{0},\ldots,\gamma\cdot\dot{x}_{k-1}).\] Let us first show the easier part of the theorem. **Proposition 4.6**.: \(K_{\mathcal{K}_{3}}\) _fails in \(N\). 
In particular, by Corollary 2.12, if \(\neg(*)_{\mathcal{E}}\) then \(N\models\neg K_{\mathcal{E}}\)._ Proof.: The poset \(P\) adds a collection of Cohen reals indexed by \(Y\times\mathbb{N}\), let \((\dot{r}_{(v,m)})_{v,m}\) be names for these. Let \(\dot{A}_{v}\) contain all these names, i.e., \(\dot{A}_{v}(\dot{r}_{v,m})=1\), and no other name appears in the domain of \(\dot{A}_{v}\). Observe that by Claim 4.4 for all \(\gamma_{p_{i},n}\) and \(v=(i^{\prime},j)\) we have \(\gamma_{p_{i},n}\cdot\dot{A}_{i^{\prime},j}=\dot{A}_{i^{\prime},j+1\mod p_{i}}\) if \(i=i^{\prime}\) and \(\gamma_{p_{i},n}\cdot\dot{A}_{i^{\prime},j}=\dot{A}_{i^{\prime},j}\) otherwise. In particular, all names \(\dot{A}_{v}\) are in \(HS\). These will play the role of atoms in \(N\). Let \(\dot{\mathcal{Y}}^{\prime}\) be the name for a graph on the vertices \(\dot{A}_{v}\), where \(\dot{A}_{v}\) and \(\dot{A}_{v^{\prime}}\) are adjacent precisely if \(v\) and \(v^{\prime}\) are in \(\mathcal{Y}\). Note that \(\dot{\mathcal{Y}}^{\prime}\) is in \(HS\), thus, \(\dot{\mathcal{Y}}^{\prime G}\) is in \(N\). Clearly, \(\dot{\mathcal{Y}}^{\prime G}\) is isomorphic to \(\mathcal{Y}\), therefore, every finite subgraph admits a \(3\)-coloring in \(N\). Now, assume that \(\mathcal{Y}^{\prime}\) admits a \(3\)-coloring in \(N\). Then, for some \(q\in G\) and \(\dot{c}\in HS\) we have \[q\Vdash\dot{c}\text{ is a homomorphism }\dot{\mathcal{Y}}^{\prime}\to K_{3}.\] Therefore, there is an \(i\) such that for each \(\gamma_{p_{i},n}\) we have \(\gamma_{p_{i},n}\cdot\dot{c}=\dot{c}\) and \(\gamma_{p_{i},n}\cdot q=q\). Also, there exists some \(q^{\prime}\leq q\) with \(q^{\prime}\in P\) and \(d\in D\) such that \[q^{\prime}\Vdash\dot{c}(\dot{A}_{(i,0)})=d.\] By Claim 4.5 there exists some \(n\) with \(\gamma_{p_{i},n}\cdot q^{\prime}\) compatible with \(q^{\prime}\). Then by the Symmetry Lemma \[\gamma_{p_{i},n}\cdot q^{\prime}\Vdash(\gamma_{p_{i},n}\cdot\dot{c})(\gamma_ {p_{i},n}\cdot\dot{A}_{(i,0)})=d,\] hence, \[q^{\prime}\vee\gamma_{p,i,n}\cdot q^{\prime}\Vdash\dot{c}(\dot{A}_{(i,0)})=d =\dot{c}(\dot{A}_{(i,1)}).\] As \(\gamma_{p_{i},n}\cdot q=q\), we have \(q^{\prime}\vee\gamma_{p_{i},n}\cdot q^{\prime}\leq q\), contradicting that \(q\) forces that \(\dot{c}\) is a \(3\)-coloring. Now we turn to the proof of the more involved part: **Proposition 4.7**.: _If \((*)_{\mathcal{D}}\) holds then \(N\models K_{\mathcal{D}}\)._ Proof.: Fix a \(\mathcal{D}\) with \((*)_{\mathcal{D}}\) and let \((\phi_{p})_{p\geq p^{*}}\) be the collection of the corresponding cyclic polymorphisms. Assume that we are given a finitely solvable instance \(\mathcal{X}\) in \(N\). Then \(\mathcal{X}\) is finitely solvable in \(M[G]\), so as it is a model of ZFC, it admits a homomorphism to \(\mathcal{D}\) in \(M[G]\). In particular, for some \(q\in G\), name \(\dot{\mathcal{X}}\in HS\) with \(\dot{\mathcal{X}}^{G}=\mathcal{X}\) there is a name \(\dot{h}\) such that \[q\Vdash\dot{h}\text{ is a homomorphism from }\dot{\mathcal{X}}\text{ to } \mathcal{D}.\] The overall strategy is simple: using the cyclic polymorphisms and compactness, from \(\dot{h}\) we construct a name \(\in HS\) for a homomorphism. As \(q\in P\) and \(\dot{\mathcal{X}}\in HS\) we can find an \(i_{0}\geq p^{*}\) large enough \[\langle\gamma_{p_{i},n}:n\in\mathbb{N},p_{i}\geq i_{0}\rangle\subseteq \operatorname{Stab}_{pw}(p)\cap\operatorname{Stab}(\dot{\mathcal{X}}),\] and let \(\Delta^{\prime}=\langle\gamma_{p_{i},n}:n\in\mathbb{N},p_{i}\geq i_{0}\rangle\). 
First we show some technical lemmas. For \(q\in B(P)\), \(\gamma\in\Delta\) define \(Homo_{q}\) to be the collection of names \(\dot{h}\) for which \(\operatorname{dom}(\dot{h})\subseteq\dot{X}\), \[q\Vdash\dot{h}\text{ is a homomorphism from }\dot{\mathcal{X}}\text{ to } \mathcal{D},\] and for \(\gamma\in\Delta\) let \(Inv_{q,\gamma}\) to be the collection of the ones \(\dot{h}\) with \[q\Vdash\gamma\cdot\dot{h}=\dot{h}.\] Let us show now the forcing analogue of Claim 4.3. **Lemma 4.8**.: _Assume that \(\dot{h}_{0}\in Homo_{q}\) and \(\gamma\in\Delta^{\prime}\) with \(\gamma^{p}=1\) with \(p\geq p_{i_{0}}\) prime. Then there exists a name \(\dot{h}\in Homo_{q}\cap Inv_{q,\gamma}\). Moreover, if \(\dot{h}_{0}\in Inv_{q,\gamma^{\prime}}\) then \(\dot{h}\in Inv_{q,\gamma^{\prime}}\) for any \(\gamma^{\prime}\in\Delta^{\prime}\)._ Proof.: By the Symmetry Lemma and the fact that \(\gamma\in\operatorname{Stab}(\dot{X})\), we have that \(\dot{h}_{0}\in Homo_{p}\) is equivalent to \(\gamma\cdot\dot{h}_{0}\in Homo_{\gamma\cdot q}.\) By \(\gamma\in\operatorname{Stab}_{pw}(q)\) these are also equivalent to \(\gamma\cdot\dot{h}_{0}\in Homo_{q}.\) Therefore, all names \(\dot{h}_{0},\gamma\cdot\dot{h}_{0},\dots,\gamma^{p-1}\cdot\dot{h}_{0}\) are forced to be homomorphisms by \(q\). Hence, \[q\Vdash\phi_{p}(\dot{h}_{0},\gamma\cdot\dot{h}_{0},\dots,\gamma^{p-1}\cdot \dot{h}_{0})\text{ is a homomorphism from }\dot{\mathcal{X}}\text{ to }\mathcal{D}.\] Let \(\dot{h}\) be a name for the composition \(\phi_{p}(\dot{h}_{0},\gamma\cdot\dot{h}_{0},\ldots,\gamma^{p-1}\cdot\dot{h}_{0})\). Then \(\dot{h}\in Homo_{q}\) and the calculations similar to the ones in Claim 4.3 show both remaining statements of the lemma: \[q\Vdash\dot{h}=\phi_{p}(\dot{h}_{0},\gamma\cdot\dot{h}_{0},\ldots,\gamma^{p-1 }\cdot\dot{h}_{0})\] is equivalent to \[\gamma\cdot q\Vdash\gamma\cdot\dot{h}=\gamma\cdot\phi_{p}(\gamma\cdot\dot{h}_ {0},\gamma^{2}\cdot\dot{h}_{0},\ldots,\gamma^{p}\cdot\dot{h}_{0}),\] which implies \[q\Vdash\gamma\cdot\dot{h}=\phi_{p}(\dot{h}_{0},\gamma\cdot\dot{h}_{0},\ldots, \gamma^{p-1}\cdot\dot{h}_{0})=\dot{h},\] by the cyclicity of \(\phi_{p}\) and the \(\gamma\)-invariance of \(q\). The moreover part follows from the fact that \(\Delta\) is abelian. Enumerate the generators in \(\Delta^{\prime}\) as \((\gamma_{n})_{n\in\mathbb{N}}\). Observe that by the choice of \(i_{0}\), every such \(\gamma_{n}\) has some prime order \(p\geq p^{*}\). Applying Lemma 4.8 repeatedly, starting with \(\dot{h}\), we obtain a sequence of names \((\dot{h}_{n})_{n\in\mathbb{N}}\), with \(\dot{h}_{n}\in Inv_{q,\gamma_{i}}\cap Homo_{q}\) for all \(i<n\). Using the compactness of \(\mathcal{D}^{\mathcal{X}}\), in \(M[G]\), we have that \[M[G]\models\exists h\in\mathcal{D}^{\mathcal{X}}\ h\in\bigcap_{n}\overline{\{ \dot{h}_{i}^{G}:i\geq n\}}.\] Hence there is some name \(\dot{h}\) and \(q^{\prime}\leq q\) form \(G\) with \[q^{\prime}\Vdash\dot{h}\in\bigcap_{n}\overline{\{\dot{h}_{i}:i\geq n\}}.\] **Lemma 4.9**.: \(\dot{h}\in Homo_{q^{\prime}}\cap Inv_{q^{\prime},\gamma}\)_, for all \(\gamma\in\operatorname{Stab}_{pw}(q^{\prime})\cap\Delta^{\prime}\)._ Proof.: We have to show that \[q^{\prime}\Vdash\dot{h}\text{ is a homomorphism from }\dot{\mathcal{X}}\text{ to } \mathcal{D}\] and that for all \(\gamma\in\operatorname{Stab}_{pw}(q^{\prime})\cap\Delta^{\prime}\), we have \[q^{\prime}\Vdash\gamma\cdot\dot{h}=\dot{h}.\] The first statement is clear, since \(q^{\prime}\) forces that \(\dot{h}\) is in the closure of a set of homomorphisms. 
In order to see the second statement, we need the next observation. **Claim 4.10**.: _For any names \(\dot{x},\dot{y}\) we have_ \[q^{\prime}\Vdash\forall n_{0}\ \exists n\geq n_{0}\ ((\dot{x}\in\dot{h}\iff \dot{x}\in\dot{h}_{n})\wedge(\dot{y}\in\dot{h}\iff\dot{y}\in\dot{h}_{n})).\] Proof.: Fix \(\dot{x},\dot{y}\). For any \(G^{\prime}\ni q^{\prime}\) generic by the definition of the product topology, we have that \[M[G^{\prime}]\vDash\forall n_{0}\ \exists n\geq n_{0}\ ((\dot{x}^{G^{\prime}} \in\dot{h}^{G^{\prime}}\iff\dot{x}^{G^{\prime}}\in\dot{h}_{n}^{G^{\prime}}) \wedge(\dot{y}^{G^{\prime}}\in\dot{h}^{G^{\prime}}\iff\dot{y}^{G^{\prime}}\in \dot{h}_{n}^{G^{\prime}})),\] so the claim follows. Now we show that for every \(\dot{x}\) we have \(q^{\prime}\Vdash(\dot{x}\in\gamma\cdot\dot{h}\iff\dot{x}\in\dot{h})\). By the definition of the action on names, and the facts that \(\dot{h}_{n}\in Inv_{\gamma,q^{\prime}}\) and \(\gamma\cdot q^{\prime}=q^{\prime}\) we get \[q^{\prime}\Vdash\dot{x}\in\dot{h}_{n}\iff\dot{x}\in\gamma\cdot\dot{h}_{n} \iff\gamma^{-1}\cdot\dot{x}\in\dot{h}_{n}.\] Since the construction of the sequence \((\dot{h}_{n})\) has been done in \(M\), we get that \[q^{\prime}\Vdash\forall n\ \dot{x}\in\dot{h}_{n}\iff\gamma^{-1}\cdot\dot{x} \in\dot{h}_{n}.\] Applying the claim to \(\dot{x}\) and \(\gamma^{-1}\cdot\dot{x}\) we get \[q^{\prime}\Vdash\forall n_{0}\ \exists n\geq n_{0}\ ((\dot{x}\in\dot{h} \iff\dot{x}\in\dot{h}_{n})\wedge(\gamma^{-1}\cdot\dot{x}\in\dot{h}\iff\gamma ^{-1}\cdot\dot{x}\in\dot{h}_{n})).\] Combining the last two relations, we get \[q^{\prime}\Vdash\dot{x}\in\dot{h}\iff\gamma^{-1}\cdot\dot{x}\in\dot{h},\] so, using the definition of the action and \(\gamma\cdot q^{\prime}=q^{\prime}\) again we obtain \[q^{\prime}\Vdash\dot{x}\in\dot{h}\iff\dot{x}\in\gamma\cdot\dot{h}.\] Thus, \(q^{\prime}\Vdash\dot{h}=\gamma\cdot\dot{h}\). Now we show that there exists a name \(\dot{h^{\prime}}\in HS\cap Homo_{q^{\prime}}\). For each \(\dot{x}\in\operatorname{dom}(\dot{h})\) define \(\dot{h}^{\prime}(\dot{x})\) to be \(\dot{h}(\dot{x})\wedge q^{\prime}\). We claim that \(\dot{h}^{\prime}\) is a suitable choice. First, by Lemma 4.9 it is clear that \[\dot{h}^{\prime}\in\bigcap_{\gamma\in\Delta^{\prime}\cap\operatorname{Stab}_{ pw}(q^{\prime})}Inv_{q^{\prime},\gamma}\cap Homo_{q^{\prime}},\] since \(q^{\prime}\Vdash\dot{h}^{\prime}=\dot{h}.\) Assume that for some name \(\dot{x}\) we had \(\dot{h}^{\prime}(\dot{x})\neq(\gamma\cdot\dot{h}^{\prime})(\dot{x})\). Note that \((\gamma\cdot\dot{h}^{\prime})(\dot{x})=(\gamma\cdot\dot{h}^{\prime})(\gamma^{ -1}\cdot\dot{x})\leq\gamma\cdot q^{\prime}=q^{\prime}\). Thus, there was a \(q^{\prime\prime}\leq q^{\prime}\) with \(q^{\prime\prime}\leq\dot{h}^{\prime}(\dot{x})\) and \(q^{\prime\prime}\wedge(\gamma\cdot\dot{h}^{\prime})(\dot{x})=0\), or \(q^{\prime\prime}\leq\gamma\cdot\dot{h}^{\prime}(\dot{x})\) and \(q^{\prime\prime}\wedge\dot{h}^{\prime}(\dot{x})=0\). But then \(q^{\prime\prime}\Vdash\dot{x}\in\gamma\cdot\dot{h}^{\prime}\setminus\dot{h} ^{\prime}\cup\dot{h}^{\prime}\setminus\gamma\cdot\dot{h}^{\prime}\), contradicting that \(\dot{h}^{\prime}\in Inv_{q^{\prime},\gamma}\). Hence, for each \(\gamma\in\Delta^{\prime}\cap\operatorname{Stab}_{pw}(q^{\prime})\in\mathcal{H}\) we have \(\gamma\cdot\dot{h}^{\prime}=\dot{h}^{\prime}\), showing that \(\dot{h}^{\prime}\in HS\). 
By \(\dot{h}^{\prime}\in HS\), we have \((\dot{h}^{\prime})^{G}\in N\) and it follows from \(q^{\prime}\in G\) and \(\dot{h}^{\prime}\in Homo_{q^{\prime}}\) that \((\dot{h}^{\prime})^{G}\) is a homomorphism from \(\dot{\mathcal{X}}^{G}\) to \(\mathcal{D}\), concluding the proof of Proposition 4.7. Propositions 4.7 and 4.6 yield Theorem 2.16. ## 5. Open problems As mentioned above, finite reducibility yields a complexity hierarchy on homomorphism problems. **Problem 5.1**.: _Describe the hierarchy of finite reducibility._ It would be also interesting to see the relationship of reduction to other well-known forms of reducibility. While our work detects a distinction between easy and hard problems, the difference between the strength of different easy problems is yet to be investigated. For example, let \(K_{3LIN_{2}}\) stand for the compactness principle for systems of linear equations over \(\mathbb{F}_{2}\). **Problem 5.2**.: _Is there a model of ZF in which \(K_{K_{2}}\) holds but \(K_{3LIN_{2}}\) does not?_ Symmetric models are the least sophisticated way to get independence results over ZF. Correspondingly, even very weak forms of choice fail in them. It would be extremely interesting to construct models where some choice principles remain true, but the same split between easy and hard problems can be detected. A deep theory has been developed in the past couple of years to construct models (see, e.g., [21]). **Problem 5.3**.: _Is there a model of ZF+DC in which \(K_{\mathcal{D}}\) holds exactly when \((*)_{\mathcal{D}}\) is true?_ **Acknowledgements**.: We would like to thank Lorenz Halbeisen for pointing out the similarity between Levy's and Mycielski's theorems with results obtained in [29, 30]. We are very grateful to Riley Thornton for suggesting the currently included proof of Theorem 2.10. We are also thankful to Amitayu Banerjee, Jan Grebik, Paul Howard, Asaf Karagila, Gabor Kun, Michael Pinsker, Eleftherios Tachtsis, and Jindrich Zapletal for their useful comments and enlightening discussions.
2301.00255
Landing a UAV in Harsh Winds and Turbulent Open Waters
Landing an unmanned aerial vehicle (UAV) on top of an unmanned surface vehicle (USV) in harsh open waters is a challenging problem, owing to forces that can damage the UAV due to a severe roll and/or pitch angle of the USV during touchdown. To tackle this, we propose a novel model predictive control (MPC) approach enabling a UAV to land autonomously on a USV in these harsh conditions. The MPC employs a novel objective function and an online decomposition of the oscillatory motion of the vessel to predict, attempt, and accomplish the landing during near-zero tilt of the landing platform. The nonlinear prediction of the motion of the vessel is performed using visual data from an onboard camera. Therefore, the system does not require any communication with the USV or a control station. The proposed method was analyzed in numerous robotics simulations in harsh and extreme conditions and further validated in various real-world scenarios.
Parakh M. Gupta, Eric Pairet, Tiago Nascimento, Martin Saska
2022-12-31T17:23:15Z
http://arxiv.org/abs/2301.00255v2
# Landing a UAV in harsh winds and turbulent open waters ###### Abstract Landing an unmanned aerial vehicle (UAV) on top of an unmanned surface vehicle (USV) in harsh open waters is a challenging problem, owing to forces that can damage the UAV due to a severe roll and/or pitch angle of the USV during touchdown. To tackle this, we propose a novel model predictive control (MPC) approach enabling a UAV to land autonomously on a USV in these harsh conditions. The MPC employs a novel objective function and an online decomposition of the oscillatory motion of the vessel to predict, attempt, and accomplish the landing during near-zero tilt of the landing platform. The nonlinear prediction of the motion of the vessel is performed using visual data from an onboard camera. Therefore, the system does not require any communication with the USV or a control station. The proposed method was analyzed in numerous robotics simulations in harsh and extreme conditions and further validated in various real-world scenarios. Aerial Systems: Mechanics and Control, UAV, MPC, Optimization and Optimal Control, Multi-Robot Systems, Dynamics ## I Introduction Heterogeneous robot teams that are composed of UAVs and USVs are aimed to provide higher efficiency and decrease the high risk posed to human life in marine applications. An example of such an application is the process of cleaning oceans to rid them of oil spills and non-biodegradable waste [1]. While the UAVs can act as the eyes in the sky for surveying, identifying, and localizing the clean-up targets, the USVs are much better suited to the actual clean-up as this task requires heavy equipment and lifting capabilities close to the water surface. These clean-up missions can be performed autonomously by UAVs and can be conducted several dozen kilometers away from a harbor or shore. Although UAVs have short battery lives to be able to fly long distances, their strength lies in their agility and their ability to perform short-duration hover missions [2]. We can compensate for this short battery life by making a UAV and USV behave as a team, where-in the UAV can charge quickly during the mission for rapid redeployment. However, the precipitous and violent nature of the sea poses daunting challenges for landing on the USV deck, especially due to the precision required for recharging operations. When landing on a USV, the first challenge is estimating and predicting the movement of the deck of the USV before landing. A fast-moving deck can damage the UAV during landing through high impulse impacts, while a tilted deck can result in the UAV rolling or falling off the deck before the landing is complete. Additionally, a tilted deck can cause an erroneous response from the controller of the UAV during landing, which would jeopardize the landing position since multi-rotors are under-actuated vehicles with coupled angular and linear acceleration vectors. The second challenge that we focus on is attempting a landing without active communication between the UAV and the USV. Relying on a required communication channel with a high frame rate and low latency would introduce a significant source of failure in real open-water applications.To increase reliability and applicability, we attempt to build a decentralized solution that does not rely on communication between the agents. 
Thus, we aim to study various aspects of the dynamics of UAVs and USVs to develop a framework for predicting and landing on the USV with high precision and reliability in demanding conditions including wind and waves, often seen in harsh environments. Finally, in this work, we can define harsh environments as those that contain open water with waves with a height of up to 4 meters, and a wind velocity of up to 12m/s, which corresponds to a Beaufort scale of 6. For intended applications, this would produce a tilt in the range of [-0.5,0.5] radians for the USV. Fig. 1: UAV landing on USV in real-world experiments. ## II Related Works Riola et al. [3] show that the behavior of a ship can be predicted based on its past motion up to short prediction horizons if corrected by measured ship motion. Unsurprisingly, the topic of wave predictions is highly relevant to the shipping industry as it is needed to prevent cables from slacking while trying to offload cargo from ships using port-side cranes. Both Kuchler et al. [4] and Neupert et al. [5] describe an active heave compensation for port-side cranes using a periodic oscillation model that proves to be effective. Building on a similar model, Marconi et al. [6] and Lee et al. [7] present sophisticated approaches for fixed-wing UAVs landing on vessels. Both works adapt the model using a Kalman filter and use this heave motion of the ship to predict the altitude of the landing pad. However, these works do not focus on a rolling and pitching deck. Meng et al. [8] take a different approach and use an auto-regressive-model on the fixed-wing UAV to observe and predict the ship motion by breaking it into sinusoidal components. In addition, Ngo and Sultan [9] predict the quiescent periods for landing a helicopter on a ship based on the model of the vessel, but they do not tackle the problem of predictions of the motion for landing on an untilted deck. This leads to short opportunistic windows that have to be adhered to, even if the conditions change rapidly. All of the above-mentioned works present results only in simulation environments that are not harsh or extreme. The research on solutions for multi-rotor aerial vehicles landing on marine vessels is recent. One of the first works by Polvara et al. [10] uses a fiducial marker located on the platform and an extended Kalman filter (EKF) that estimates the position of the USV. In contrast, the approach presented by Abujoub et al. [11] relies on a LiDAR onboard the UAV to find the pose of the landing pad to learn the behavior of the platform by hovering above it. However, they classify the window of landing into go or no-go intervals. Both preliminary works were validated in lenient simulation conditions. More recently, researchers have begun testing their approaches through real-world experiments. For example, Xu et al. [12] use a fiducial marker for a decentralized approach, so as to follow the USV and use a PD controller for landing once the USV is discovered. For the second challenge of achieving decentralization, Lee et al. [13] present an interesting solution to finding a ship and its pose using classical vision algorithms. Zhang et al. [14] take a different approach and present a learning-based linear controller that receives inputs from a fiducial marker to land the UAV on a USV that is subject to the waves of a lake. Furthermore, some works also present the application of an MPC controller that enables a flexible-blade helicopter to land on a marine vessel [15, 16]. 
These works use a non-linear MPC to achieve near-perfect performance but do so using a numerical benchmark that doesn't run in real-time or in a real-world experiment. Our work differs from these by using simplifications and a new approach that fills these gaps of real-time computing and applicability without a significant drop in landing performance. We use these for comparison in our experimental section to demonstrate the same. The most advanced research presented with real-world flight data is the work by Persson et al. [17], which presents an MPC for a UAV autonomous landing on a moving boat. For the purpose of our work, we assume that the USV can be found by the UAV by ascending to a given altitude during the mission without the need for conducting a planned search which is beyond the scope of this paper. We also assume that the motion of the USV perpendicular to the water surface is minimal (the USV is waiting for the UAV to land while controlling its global positioning on the water in order to remain stationary, rather than drifting with the waves). Furthermore, we assume that the USV is under the influence of waves, which results in periodic oscillations of the USV deck in each axis of a coordinated system with an origin at the USV center of mass. For hardware, we assume that the UAV is equipped with a 2MP downward facing camera, an onboard computer for image processing and computing the MPC, and that the USV is equipped with a landing pattern to recognize relative pose. The main difference between our proposed approach and [17] is that our controller uses the non-linear model of the USV for landing on a rapidly tilting deck and does not employ any communication between the UAV and the USV, as motivated by real-world applicability. To the best of the authors' knowledge, it is the first approach using USV motion prediction in control feedback of a decentralized controller. In summary, our contributions are: (1) we present a novel objective function for finding an optimum landing trajectory that utilizes an MPC algorithm to predict the future of the UAV and USV, without communication; (2) we propose a decentralized vision-based method for observing and predicting the motion of a USV through the use of an online observer that adapts the USV motion model using observations from a downward-facing camera; (3) our proposed approach enables landing on a highly undulating platform with no prior knowledge of the dimensions of the USV; and (4) we propose a prediction algorithm that is designed to prevent a velocity overshoot at the set point for landing with minimal impulse transfer from the surface upon touchdown. The relevant media from this work has been made available as supplementary material on [http://mrs.felk.cvut.cz/ral-landing-on-usv](http://mrs.felk.cvut.cz/ral-landing-on-usv). ## III Proposed Non-linear Estimator-based MPC In this section, we present our proposed approach which consists of a UAV prediction model and a simplified USV prediction model. Our proposed controller must satisfy two hard constraints imposed by real-world conditions, which are: 1) The controller must perform its computation under a time constraint of 50 ms (20 Hz); and 2) There is no communication between the UAV and the USV and so, the only method for estimating the state of the USV motion is by visual pose estimation enabled by the ApriTag on the landing platform. Thus, for the sake of clarity, we will call our approach **MPC-NE** (Model Predictive Controller - Non-linear Estimator). 
Figure 2 presents the control pipeline used in this work; the contribution is encapsulated in Figure 2. For the UAV prediction model, a discrete linear time-invariant system is used, while the USV model uses a more complex linearised model to be described subsequently. The 6-degrees of freedom (DOF) USV pose \(\mathbf{b}=\begin{bmatrix}b_{1}&b_{2}&b_{3}&b_{4}&b_{5}&b_{6}\end{bmatrix}^{T}\) is estimated in the world frame through the detection of the fiducial tag in the center of the landing pad from the on-board camera of the UAV. The pose of the UAV is fused and accounted for to estimate the correct world frame pose of the USV. This pose information is fed to a fast Fourier transform (FFT) node (based on [18]) which identifies the frequencies, amplitudes, and phases of the \(N\) periodic oscillations that make up the USV motion in pitch and roll axes. These identified modes are used to initialize a linear Kalman observer node that corrects the observed state and predicts future motion. These predictions are sent to the MPC controller, which uses them to estimate the feasibility of landing in the near future, i.e., if a sufficiently low tilt of the USV can be found inside the predefined prediction horizon. In turn, it generates the desired linear velocities for \(x\), \(y\), and \(z\) axes, as well as the desired angular velocity in heading \(\eta\). The MPC also receives the estimated UAV state vector \(\mathbf{x}=\begin{bmatrix}x&\dot{x}&\ddot{x}&y&\dot{y}&\ddot{y}&z&\dot{z}& \ddot{z}&\eta&\dot{\eta}&\ddot{\eta}\end{bmatrix}^{T}\) using onboard state estimation proposed by our team in [19]. A finite-state automaton-based approach is used to direct our mission. Based on this, a setpoint generator node commands the aircraft to increase its altitude until the vision marker can be found. Once it is found, the reference of the MPC is changed by the setpoint generator, such that it can hover at a preset altitude above the identified marker. Subsequently, the UAV waits for enough data to be gathered so that the FFT accuracy threshold requirement can be met. Once it is met, the setpoint generator sets the global reference for landing. Then, the MPC begins to use the future motion predictions of the wave to determine a suitable time for landing. ### _USV Prediction Model_ USV models can be classified into two different types: Maneuvering Theory and Seakeeping Theory [20]. Owing to the assumptions made in Section II, we choose to focus on the Seakeeping theory since it concerns near-stationary vessels. In addition, the use of a decentralized approach brings challenges in estimating the true odometry of the USV, as converging to reliable estimates of linear and angular velocities of the USV is infeasible. Therefore, we leverage the pose estimate from the camera efficiently by focusing only on the kinematics of the USV. Our USV prediction model is composed of three parts: a fast Fourier transform, a Kalman observer, and a wave prediction model. First, the FFT performs a decomposition on the pose data obtained from the vision pipeline. The identified modes of these oscillations are used to initialize a Kalman observer that adapts the amplitude and the phase of the wave using the observed values online. Finally, the amplitudes and phases from the Kalman observer are sent to the wave prediction model to enable future wave predictions. 
Fig. 2: The _MPC_ landing controller (yellow block) is integrated into the MRS system (grey blocks) and supplies the desired reference (velocity \(\mathbf{\dot{t}}_{d}=\begin{bmatrix}\dot{x}&\dot{y}&\dot{z}\end{bmatrix}^{T}\) and heading rate \(\dot{\eta}_{d}\)). In the MRS system, the first layer containing a _Reference tracker_ processes the desired reference and gives a full-state reference \(\mathbf{x}\) to the attitude controller. The feedback _Position/Attitude controller_ produces the desired thrust and angular velocities (\(T_{d}\), \(\mathbf{\omega}_{d}\)) for the Pixhawk flight controller (Attitude rate controller). The _State estimator_ fuses data from _Odometry & localization_ methods to create an estimate of the UAV translation and rotation (\(\mathbf{x}\), \(\mathbf{R}\)). Finally, the _Vision-based Detector_ obtains the visual data from the camera and sends the pose information \(\mathbf{b}\) of the USV to the MPC. #### III-A1 FFT-based Modelling We assume that the motion of the USV is composed of \(N_{j}\) periodic waves and a non-periodic term that accounts for random noise in tracking the various components for each \(j^{th}\) axis. Thus, let the state vector \(\mathbf{b}\) be represented by the linear pose \(b_{j}\) for \(j\in\{1,2,3\}\) for the \(\mathbf{x}\), \(\mathbf{y}\), \(\mathbf{z}\) axes, respectively, and the angular pose be represented by \(j\in\{4,5,6\}\) about these axes in the same order. Note here that a sufficiently large ship/boat (the intended application) would exhibit sufficiently low-amplitude oscillations in the Z-axis such that they can be handled by changing the reference at every camera frame (as shown here). Thus, the periodic motion of the USV in an axis can be represented as a function of time such that: \[b_{j}(t)=b_{j,off}+\sum_{i=1}^{N_{j}}A_{j,i}\sin\underbrace{(2\pi f_{j,i}t+ \phi_{j,i})}_{\Phi_{j,i}}, \tag{1}\] with \(f_{j,i}\) denoting the frequency, \(A_{j,i}\) the amplitude, and \(\phi_{j,i}\) the phase. Additionally, \(b_{j,off}\) is the non-periodic term accounting for random noise. For the initial condition, \(\Phi_{j,i}(t)\) is equal to \(\Phi_{j,i}(t_{FFT})\), which is the phase obtained as the output of the FFT at the time of identification \(t_{FFT}\). In sea conditions, these frequency components can change frequently due to changing winds. Therefore, the pose is sampled continuously and an FFT is run every \(\Delta T_{FFT}\) seconds. For each axis, we discard the modes that are below a certain threshold amplitude \(A_{j,threshold}\), where \[A_{j,threshold}=A_{gate}\cdot\max\{A_{j,0},A_{j,1},\ldots,A_{j,N_{j}}\}. \tag{2}\] For reliable performance, and upon tuning on real-world data, we assume \(A_{gate}(=0.02)\) to be a suitable cutoff. This prevents us from identifying noise components as low-amplitude periodic oscillations, while losing no more than 2% of the accuracy. These erroneous components would cause a loss of performance in the Kalman observer, which is explained in the next section. #### III-A2 Kalman Observer The Kalman observer uses a linear model to refine the estimates of the identified amplitude and phase of each mode. The observer is necessary because, while the FFT accurately identifies the frequency components, the amplitude and phase outputs are averages over the entire \(\Delta T_{FFT}\) sampling interval. Therefore, the observer receives new parameters for all identified modes every \(\Delta T_{FFT}\) seconds.
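Before turning to the observer details, the FFT-based mode identification and amplitude gating of (1)-(2) can be illustrated with a minimal Python sketch; the function names, the use of NumPy, and the cosine-to-sine phase conversion are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def identify_modes(pose_samples, dt, a_gate=0.02):
    """Identify dominant periodic modes (frequency, amplitude, phase) of one USV
    pose axis from a window of samples, discarding low-amplitude bins per Eq. (2)."""
    n = len(pose_samples)
    offset = np.mean(pose_samples)                  # non-periodic term b_{j,off}
    spectrum = np.fft.rfft(pose_samples - offset)
    freqs = np.fft.rfftfreq(n, d=dt)
    amps = 2.0 * np.abs(spectrum) / n               # single-sided amplitudes A_{j,i}
    phases = np.angle(spectrum) + np.pi / 2.0       # cosine phase -> sine phase of Eq. (1)

    threshold = a_gate * amps.max()                 # Eq. (2): A_{j,threshold}
    keep = (amps >= threshold) & (freqs > 0.0)      # drop noise bins and the DC bin
    return offset, freqs[keep], amps[keep], phases[keep]

def evaluate_modes(t, offset, freqs, amps, phases):
    """Evaluate Eq. (1) at times t using the identified modes."""
    t = np.atleast_1d(t)
    waves = amps * np.sin(2.0 * np.pi * freqs * t[:, None] + phases)
    return offset + waves.sum(axis=1)
```

In the pipeline above, such a routine would be re-run every \(\Delta T_{FFT}\) seconds and the surviving modes handed to the Kalman observer.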
In order to allow sufficient time for the observer parameters to converge to true values, we do not reinitialize the pre-identified modes with the new parameters. Instead, only the newly identified modes are initialized, while discarding the old modes that no longer exist. To assemble the model, we first write the ordinary differential equation (ODE) for _each mode_ for a given axis at any time \(t\). We use \(\mathbf{v}_{j,i}\) to denote the \(i^{th}\) mode of the USV state vector component \(b_{j}\) in the \(j^{th}\) axis. Thus, the derivative of the mode (\(\forall j\in\{1,\ldots,6\}\)) is \[\dot{\mathbf{v}}_{j,i}=\underbrace{\begin{bmatrix}0&1\\ -(2\pi f_{j,i})^{2}&0\end{bmatrix}}_{\mathbf{B}_{j,i}}\mathbf{v}_{j,i}, \tag{3}\] and the mode at time \(t\) is \[\mathbf{v}_{j,i}=\begin{bmatrix}A_{j,i}\sin{(\Phi_{j,i}(t))}\\ 2\pi A_{j,i}f_{j,i}\cos{(\Phi_{j,i}(t))}\end{bmatrix}. \tag{4}\] Next, we derive the observer model by stacking the ODEs of all modes. Thus, \[\dot{\mathbf{v}}_{j}(t)=\underbrace{\begin{bmatrix}\mathbf{B}_{j,1}&\mathbf{0 }&\ldots&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{B}_{j,2}&\ldots&\mathbf{0}&\mathbf{0}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ \mathbf{0}&\mathbf{0}&\ldots&\mathbf{B}_{j,N_{j}}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&\ldots&\mathbf{0}&\mathbf{0}\end{bmatrix}}_{\overline{ \mathbf{B}}_{j}}\underbrace{\begin{bmatrix}\mathbf{v}_{j,1}\\ \mathbf{v}_{j,2}\\ \vdots\\ \mathbf{v}_{j,N_{j}}\\ \mathbf{v}_{j,off}\end{bmatrix}}_{\mathbf{v}_{j}(t)}. \tag{5}\] Hence, the output for each axis is \[b_{j}(t)=\underbrace{\begin{bmatrix}\mathbf{C}_{j,1}&\mathbf{C}_{j,2}&\ldots& \mathbf{C}_{j,N_{j}}&\mathbf{C}_{j,off}\end{bmatrix}}_{\overline{\mathbf{C}}_{j}}\mathbf{v}_{j}(t). \tag{6}\] Note that each component of the output can be obtained from the corresponding mode as \[b_{j,i}=\underbrace{\begin{bmatrix}1&0\end{bmatrix}}_{\mathbf{C}_{j,i}}\mathbf{v}_{j,i}. \tag{7}\] Now, for the brevity of explanation and the readability of the equations, we write the following relations _for only one axial DOF_ of the USV in discrete time. In addition, we clarify that they can be applied to all 6 of the DOF. Furthermore, a time instant is written as \(t=k\Delta T+t_{FFT}\), wherein \(\Delta T\) is the discrete sampling time for new pose observations. Thus, we have a straightforward change in notation such that, for example, \(\mathbf{v}_{j}(t)\equiv\mathbf{v}_{j}^{(k)}\). Thus, by using the integral approximation method, we have that \[\mathbf{v}_{j}^{(k+1)}=\underbrace{\exp(\overline{\mathbf{B}}_{j}\Delta T)}_{ \mathbf{\Psi}_{j}}\mathbf{v}_{j}^{(k)},\;\text{and}\;\;b_{j}^{(k)} =\overline{\mathbf{C}}_{j}\,\mathbf{v}_{j}^{(k)}. \tag{8}\] Then, we continuously estimate the amplitude \(A_{j,i}\) and phase \(\phi_{j,i}\) of each mode every \(\Delta T\) using the Kalman filter. First, \(\mathbf{Q}\) is initialised using a diagonal matrix \(\mathbf{Q_{I}}=\lambda\mathbf{I}\), such that \[\mathbf{Q}=\frac{1}{2}(\mathbf{\Psi}\mathbf{Q_{I}}\mathbf{\Psi}^{T}+\mathbf{Q_ {I}})\Delta T, \tag{9}\] where \(\lambda\) is the gain parameter for the process noise observed in the model. Meanwhile, the observation noise matrix \(\mathbf{R}\) is set to the mean amplitude of the observed noise in the system.
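The assembly and discretization of the observer model in (3)-(9) can be sketched as follows; the helper name `build_observer`, the default \(\lambda\), and the use of SciPy's matrix exponential are assumptions made for illustration, with one \(2\times 2\) block per identified mode plus one for the non-periodic offset.

```python
import numpy as np
from scipy.linalg import expm, block_diag

def build_observer(freqs, dt, lam=1e-3):
    """Assemble per-axis observer matrices: Psi (Eq. (8)), C_bar (Eq. (6)), Q (Eq. (9))."""
    # One 2x2 oscillator block per mode (Eq. (3)), plus a zero block for the offset state.
    blocks = [np.array([[0.0, 1.0],
                        [-(2.0 * np.pi * f) ** 2, 0.0]]) for f in freqs]
    blocks.append(np.zeros((2, 2)))
    B_bar = block_diag(*blocks)

    Psi = expm(B_bar * dt)                   # Eq. (8): Psi_j = exp(B_bar_j * dT)

    # The output row picks the position component of every block (Eqs. (6)-(7)).
    dim = B_bar.shape[0]
    C_bar = np.zeros((1, dim))
    C_bar[0, 0::2] = 1.0

    Q_I = lam * np.eye(dim)                  # diagonal initialization Q_I = lambda * I
    Q = 0.5 * (Psi @ Q_I @ Psi.T + Q_I) * dt # Eq. (9)
    return Psi, C_bar, Q
```

The standard predict/update recursion then runs on \(\mathbf{\Psi}_{j}\), \(\overline{\mathbf{C}}_{j}\), \(\mathbf{Q}\), and \(\mathbf{R}\) at every new pose observation.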
Thereafter, we apply the filter equations as follows: \[\begin{split}\hat{\mathbf{v}}_{j}^{(k)}&=\mathbf{\Psi}_{j}\mathbf{v}_{j}^{(k-1)},\\ \hat{\mathbf{P}}^{(k)}&=\mathbf{\Psi}_{j}\mathbf{P}^{(k-1)}\mathbf{\Psi}_{j}^{T}+\mathbf{Q},\\ \hat{b}_{j}^{(k)}&=\mathbf{\overline{C}}_{j}\hat{\mathbf{v}}_{j}^{(k)},\\ \mathbf{L}^{(k)}&=\hat{\mathbf{P}}^{(k)}\overline{\mathbf{C}}_{j}^{T}(\overline{\mathbf{C}}_{j}\hat{\mathbf{P}}^{(k)}\overline{\mathbf{C}}_{j}^{T}+\mathbf{R})^{-1},\\ \mathbf{v}_{j}^{(k)}&=\hat{\mathbf{v}}_{j}^{(k)}+\mathbf{L}^{(k)}(b_{j,m}-\hat{b}_{j}^{(k)}),\\ \mathbf{P}^{(k)}&=(\mathbf{I}-\mathbf{L}^{(k)}\overline{\mathbf{C}}_{j})\hat{\mathbf{P}}^{(k)},\end{split} \tag{10}\] where \(\hat{\cdot}\) shows the predicted value of the vector/matrix, \(\mathbf{I}\) is an identity matrix, \(b_{j,m}\) is the measured value of \(b_{j}\), \(\mathbf{L}\in\mathbb{R}^{2(N+1)}\) is the Kalman gain matrix of the system, \(\mathbf{P}\) and \(\mathbf{Q}\in\mathbb{R}^{2(N+1)\times 2(N+1)}\) are the process co-variance and system noise matrices, respectively, and \(\mathbf{R}\in\mathbb{R}\) is the observation noise. At every identification, the relevant elements of the \(\mathbf{\Psi}\) matrix, corresponding both to the modes that are no longer present and to the newly identified modes, are re-initialized. The corresponding co-variance terms for these modes are reset to maintain a consistent prediction without large deviations. #### Iii-B3 Wave prediction Let us now define \(t_{obs}\) as the time instant where the last observation was performed, since the prediction algorithm is not run when there are no new observations. Thus, by running the Kalman observer at \(t_{obs}\) we find the new amplitude \(A_{j,i}(t_{obs})\) and phase \(\Phi_{j,i}(t_{obs})\). At the same instant in time \(t_{obs}\), we can extract the corresponding \(\mathbf{v}_{j,i}\) and use (4) to acquire: \[\begin{split}\Phi_{j,i}(t_{obs})&=\arctan{\left(\frac{2\pi f_{j,i}[\mathbf{v}_{j,i}]^{1,1}}{[\mathbf{v}_{j,i}]^{2,1}}\right)},\\ \text{and}\quad A_{j,i}(t_{obs})&=\frac{[\mathbf{v}_{j,i}]^{1,1}}{\sin{(\Phi_{j,i}(t_{obs}))}},\end{split} \tag{11}\] where \([\mathbf{v}_{j,i}]^{m,n}\) represents the element corresponding to the \(m^{th}\) row and \(n^{th}\) column of the vector. This enables us to predict the wave behavior at a future time \(t>t_{obs}\) as \[b_{j}(t)=\sum_{i=1}^{N_{j}}A_{j,i}(t_{obs})\sin\left[2\pi f_{j,i}(t-t_{obs})+\Phi_{j,i}(t_{obs})\right]+[\mathbf{v}_{j,off}]^{1,1}. \tag{12}\] ### _UAV Prediction Model_ The UAV prediction model used in the proposed MPC is based on the Euler approximation of a set of single-particle kinematics equations. Here, we employ the following discrete linear time-invariant system: \[\mathbf{x}^{(k+1)}=\mathbf{D}\mathbf{x}^{(k)}+\mathbf{E}\mathbf{u}^{(k)},\quad\text{with}\quad\mathbf{u}^{(k)}=\begin{bmatrix}\dddot{x}&\dddot{y}&\dddot{z}&\dddot{\eta}\end{bmatrix}^{T}.
\tag{13}\] In the model represented by (13), the state matrix \(\mathbf{D}\) and the input matrix \(\mathbf{E}\) can be found through the Kronecker product (\(\otimes\)), such that: \[\mathbf{D}_{12\times 12}=\mathbf{I}_{4\times 4}\otimes\mathbf{D}_{3\times 3}^{\prime},\quad\text{with}\quad\mathbf{D}^{\prime}=\begin{bmatrix}1&\Delta t_{pred}&\frac{\Delta t_{pred}^{2}}{2}\\ 0&1&\Delta t_{pred}\\ 0&0&1\end{bmatrix}, \tag{14}\] \[\mathbf{E}_{12\times 4}=\mathbf{I}_{4\times 4}\otimes\mathbf{E}_{3\times 1}^{\prime},\quad\text{with}\quad\mathbf{E}^{\prime}=\begin{bmatrix}\frac{\Delta t_{pred}^{3}}{6}\\ \frac{\Delta t_{pred}^{2}}{2}\\ \Delta t_{pred}\end{bmatrix}, \tag{15}\] where \(\mathbf{I}\) is an identity matrix, with a prediction made every \(\Delta t_{pred}=0.01\) seconds. Hence, the state vector represents the states of the system and their derivatives up to acceleration in each axis, and the control input is the jerk experienced in those axes. ### _MPC Objective Function_ Once we have defined a prediction model of the UAV and the USV, we can formulate an objective function to enable both way-point navigation and landing. For the sake of simplification, we will omit the superscript \((.)^{(k)}\), which represents a discrete instant in time. Therefore, we can define the objective function \(J\) as: \[\begin{split}\min_{\mathbf{u}_{1},\ldots,\mathbf{u}_{M_{c}}}J(\mathbf{x},\mathbf{u})&=\underbrace{\sum_{m=1}^{M_{p}}\tilde{\mathbf{x}}_{m}^{T}\mathbf{S}\tilde{\mathbf{x}}_{m}+\mathbf{h}_{m}^{T}\mathbf{T}\mathbf{h}_{m}}_{J_{1}}+\underbrace{\sum_{m=1}^{M_{p}}\alpha_{L}\times g(\tilde{z}_{m},b_{4,m},b_{5,m})}_{J_{2}},\\ \text{subject to}:\quad\tilde{\mathbf{x}}_{m}&=\mathbf{x}_{m}-\bar{\mathbf{x}}_{m},\\ \tilde{z}_{m}&=z_{m}-\bar{z}_{m},\\ \mathbf{h}_{m}&=\mathbf{u}_{m}-\mathbf{u}_{m-1},\\ \mathbf{x}_{m+1}&=\mathbf{D}\mathbf{x}_{m}+\mathbf{E}\mathbf{u}_{m}\ \forall\ m\leq M_{c},\\ \mathbf{x}_{m+1}&=\mathbf{D}\mathbf{x}_{m}+\mathbf{E}\mathbf{u}_{M_{c}}\ \forall\ m>M_{c},\\ \mathbf{u}_{min}&\leq\mathbf{u}_{m}\leq\mathbf{u}_{max},\\ \mathbf{x}_{0}&=\mathbf{x}_{initial},\quad\mathbf{u}_{0}=\mathbf{u}_{initial},\\ &\forall\ \{m:m\in\mathbb{N},1\leq m\leq M_{p}\},\end{split} \tag{16}\] where \(\bar{\mathbf{x}}_{m}\) is the desired state, \(\tilde{\mathbf{x}}_{m}\) is the error vector, \(\tilde{z}_{m}\) is the error in the \(z_{m}\) position, \(\mathbf{h}_{m}\) is the rate of control input change to ensure a smooth input to the UAV, \(M_{p}(=100)\) is the prediction horizon, and \(M_{c}(=40)\) is the control horizon. \(\mathbf{S}\) and \(\mathbf{T}\) are the corresponding penalty matrices with configurable weights for performance tuning, while \(\alpha_{L}(=1200)\) is a weight chosen for the tuning of the objective function \(g(.)\). Additionally, \(b_{4,m}\) and \(b_{5,m}\) are the roll and pitch angles in discrete time of the USV about its \(x\) and \(y\) axes, according to (12). We emphasize that \(\bar{\mathbf{x}}_{m}\) (including \(\bar{z}_{m}\)) can either be a series of points (trajectory) or a single point (step input). This would enable the UAV to keep up with a drifting USV if the XY-position state of the USV is estimated independently. However, a slowly drifting USV is within the dynamic limits of the UAV, so it can be compensated by a single-point reference that is updated after every observation (depending on the camera frame rate). We demonstrate and test this in this linked video.
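The prediction matrices of (14)–(15) are straightforward to reproduce. The short sketch below builds them via the Kronecker product, assuming the standard triple-integrator discretization and the \(\Delta t_{pred}=0.01\) s quoted above; it is illustrative and not the paper's implementation.

```python
import numpy as np

def uav_prediction_matrices(dt=0.01):
    """Build D (12x12) and E (12x4) of Eq. (14)-(15) via the Kronecker product.

    State per axis: [position, velocity, acceleration]; input per axis: jerk.
    Axes: x, y, z and heading (4 in total), ordered axis-major.
    """
    d_prime = np.array([[1.0, dt, dt**2 / 2.0],
                        [0.0, 1.0, dt],
                        [0.0, 0.0, 1.0]])
    e_prime = np.array([[dt**3 / 6.0],
                        [dt**2 / 2.0],
                        [dt]])
    d = np.kron(np.eye(4), d_prime)   # 12 x 12
    e = np.kron(np.eye(4), e_prime)   # 12 x 4
    return d, e

# One prediction step of Eq. (13): x_next = D @ x + E @ u, with u the per-axis jerk.
d, e = uav_prediction_matrices()
x = np.zeros(12)
u = np.zeros(4)
x_next = d @ x + e @ u
```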
While we do not constrain the output of the MPC, we apply soft constraints to the velocity and acceleration states of the model, such that \(v\geq v_{max}\) and \(a\geq a_{max}\) incur a high penalty in the objective function. Herein, we have introduced a novel objective function \(J_{2}\) (described in the next section) which can account for the predicted motion of the USV, producing a smooth control input to change the altitude of the UAV without any abrupt maneuvers. Using this function, we are able to incorporate the finite state automaton approach through a sigmoid activation function, without explicitly describing the possible landing condition. The UAV is able to follow the descent trajectory generated by the MPC by autonomously adjusting its hover distance above the USV. Additionally, it enables us to tune the parameters to control the variance of the resulting landing angles about the mean value of zero tilt. It is important to mention that the term \(J_{1}\) in our cost function is a classical quadratic objective function largely used in robotics and well documented for its feasibility and stability. On the other hand, the \(J_{2}\) term is different from usual works in the literature because it tackles the terminal cost of the optimization step as a potential barrier. We employ a non-linear optimization library (NLOPT [21][22]) which provides the near-optimum solution for the objective function. In order to exercise velocity-based control, the first input from the series of optimum control inputs calculated by the solver is then used to calculate the next state using (13). The velocities for this predicted next state are then passed to the system as the velocity references for the UAV to track (as seen in Figure 2). Since the term \(J_{1}\) primarily contributes to the position control and \(J_{2}\) contributes to the landing approach, the term \(J_{2}\) remains disabled until the conditions for the landing approach are satisfied. ### _Landing approach_ We define the function for the landing cost as a combination of sigmoids, such that: \[g(\tilde{z}_{m},b_{4,m},b_{5,m})=f(\tilde{z}_{m})\cdot((b_{4,m})^{2}+(b_{5,m})^{2}), \tag{17}\] where \(f(\tilde{z}_{m})\) is such that \[f(\tilde{z}_{m})=\begin{cases}\left(1.0+\exp\left(\frac{\tilde{z}_{m}-h_{d}}{0.15}\right)\right)^{-1},&\text{if }\tilde{z}_{m}\geq 0.16,\\ \left(1.0+\exp\left(\frac{\tilde{z}_{m}-0.1}{-0.01}\right)\right)^{-1},&\text{otherwise,}\end{cases} \tag{18}\] where \(h_{d}\) controls the waiting region (see Figure 3) during a landing attempt. Empirically, \(h_{d}=1.1\) was chosen for our experiments. For the scope of this paper, we assume that the USV has relatively negligible motion in its \(x\) and \(y\) axes, which is a fair assumption for the problem of landing. The propulsion of the USV may easily compensate for the drift generated by the water currents in order to facilitate landing. It is also safe to assume that \(\tilde{z}\geq 0\), as the UAV cannot approach from beneath the USV. In order to activate \(J_{2}\) to start the landing phase, two conditions must be met.
First, **FFT accuracy is higher than a given threshold to detect slow oscillations.** Second, **The position errors in \(x\) and \(y\) are below a predefined threshold (i.e., \(\tilde{x},\tilde{y}\approx 0\)) and horizontal velocities \(v_{x},v_{y}\) are also minimal.** To demonstrate the interaction of \(J_{2}\) with \(J_{1}\) during the landing approach, we present a _highly-simplified_ plot of the objective function (see Figure 3) using one mode each for pitch and roll axes. When \(J_{2}\) is activated, we acquire a combined plot governed by both the equation (17) and the residual error \(\tilde{z}\) in \(J_{1}\). In Figure 3, the value of the objective function encounters a peak that continuously evolves as a function of time. This peak acts as a potential barrier. The higher cost associated with the peak holds the aircraft in the waiting region (as marked in the plot). Meanwhile the USV model generates predictions for the future of the USV motion during every iteration of the MPC. The USV sometimes gets close enough to a zero tilt wherein a feasible solution appears, as shown by the _zero-tilt_ points in the plot. The UAV is then able to _insert_ itself into the time-varying trajectory of these special feasible points by reducing its altitude and approaching in such a way that the cost continues to decrease along the locus of these points. Therefore, the UAV is able to follow the zero-tilt points and finish at the optimum landing point, where touchdown is confirmed by the system based on thrust and other information from onboard sensors. ## IV Simulation Experiments We demonstrate our simulation results in two scenarios: with a numerical simulation, and with a realistic ROS-based Gazebo simulator [19]. The **SHMPC** presented in [15] is shown to work numerically. Thus, we use a similar numerical implementation of our work (**MPC-NE**) to allow us to perform a fair comparison with the state-of-the-art. In this comparison, the non-linear optimization problem is solved by [22] for a landing maneuver of 3 meters and assumes true knowledge of the future motion of the USV. The second comparison is performed using real-time flight with our proposed **MPC-NE** inside the Gazebo simulator. For this comparison, we use a **standard MPC**[19] designed for waypoint navigation. For this standard approach, the UAV attempts to locate the target, and lands after a programmed, uniformly randomly distributed delay between 0 and 100 seconds. We select this duration owing to the periodicity of the tilt angle of the USV. We use a T650 quadrotor frame weighing 3.6 kg carrying a Garmin LiDAR for laser-ranging of altitude and an Intel Realsense D435 camera for live in-simulation video. The video output of the Realsense camera is sent to our system to enable processing on the vision node. (Sec. III). The 3D model of the USV is similar to our real-world experiments and is affixed with an AprilTag [23] marker for pose estimation. We note that in _both_ the comparisons, we push the boundary of performance and test our work in rough sea states, and drive our USV model using a wave generator with 4-5 components of oscillation in both pitch and roll axes, and tilt angles up to \(30^{\circ}(0.5\) radians). The frequency components are set such that brief windows for feasible landing exist. Wind disturbances are not considered in this environment since it is tackled by body disturbances estimated by the low-level control feedback pipeline (see Figure 2). 
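Returning to the landing-cost term defined in (17)–(18), the following minimal Python sketch shows how the per-step contribution of \(J_{2}\) behaves; the constants are the values quoted earlier (\(h_{d}=1.1\), \(\alpha_{L}=1200\)), and the function names are illustrative assumptions rather than the paper's code.

```python
import math

def f_altitude(z_err, h_d=1.1):
    """Sigmoid altitude gate of Eq. (18); z_err is the UAV height error above the deck."""
    if z_err >= 0.16:
        return 1.0 / (1.0 + math.exp((z_err - h_d) / 0.15))
    return 1.0 / (1.0 + math.exp((z_err - 0.1) / -0.01))

def landing_cost(z_err, roll, pitch, alpha_l=1200.0):
    """Per-step J_2 contribution, Eq. (17): penalizes descending while the deck is tilted."""
    return alpha_l * f_altitude(z_err) * (roll**2 + pitch**2)

# Below the waiting region the barrier is high while the deck is tilted ...
print(landing_cost(0.5, roll=0.2, pitch=0.1))   # large cost: UAV held in the waiting region
# ... and it nearly vanishes once the predicted tilt crosses zero, allowing descent.
print(landing_cost(0.5, roll=0.0, pitch=0.0))   # ~0 cost
```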
### _Prediction results_ First, we demonstrate the ability of our system to predict the wave motion up to 1 second into the future based on the observed frequency components and our model of the system. The performance of the system is tested in two scenarios: a 100 Hz odometry output from the simulated USV IMU (used as ground truth), and a 30 Hz stream from the AprilTag. As we see in Figure 4, the high-frequency IMU-based predictions are able to match the observed wave reliably without introducing noise. The observer is able to adapt the observed frequency, amplitude, and phase of the modes of the oscillations and converge reliably. As opposed to that, we see slight deviations in the vision-based predictions compared to the IMU results. This deviation in performance can be explained by two factors. First, the linearisation of the model in the time-domain causes inaccuracies that grow as the sampling time increases. Due to this, the three-fold sampling rate of the IMU leads to faster and more accurate convergence. Second, the output rate of the AprilTag identification node fluctuates around 30 Hz, depending on the computational load of the onboard computer of the UAV (or simulation computer, in this case). This leads to the misidentification of modes, as the FFT algorithm requires a fixed sample rate for observations. However, this sufficiently proves the ability of the proposed system to reliably predict wave behavior, which will be used for the USV landing further down the pipeline. Fig. 4: Comparison between the predictions made by the system using the onboard IMU data (left) and using the vision data (right). Fig. 3: An example illustration of the effective cost function values obtained during the landing approach. ### _Landing results_ To continue, we present the ability of our system to land on a platform while tilt angles are sufficiently close to zero. Thus, we present Figure 5, which shows the results of the numerical comparison between our MPC-NE and the state-of-the-art SHMPC. Note here that the MPC-NE lands \(\approx 94\%\) of the time within \(10^{\circ}\) of tilt, while the SHMPC lands \(\approx 71\%\) of the time within the same tilt interval. In this same comparison, the solution time per iteration of our MPC-NE was 9 times lower, at 102 ms compared to 917 ms for the SHMPC. Furthermore, the same figure presents the results of the realistic simulations using Gazebo. It is important to highlight the difference between a numerical simulation result and a realistic simulation result. This is explained by the existing constraints of processing time that demand the algorithms to be processed in real-time. Note here that the MPC-NE is able to conduct \(72\%\) of its landings within \(15^{\circ}\) (\(0.26\) rad) of tilt compared to \(23\%\) of landings using the standard MPC approach. In addition, the proposed approach reduces the \(80^{th}\) percentile result by \(9^{\circ}\) (\(0.16\) rad) in comparison to the standard approach. For this comparison, we classify a landing conducted at a tilt angle of less than \(20^{\circ}\) (\(0.35\) rad) as successful. Therefore, even in challenging tilts of up to \(0.5\) radians, the proposed approach had only three failures, while the standard approach fails approximately \(50\%\) of the landings. Finally, we also highlight that, even in an unrealistic and challenging scenario, our system is able to conduct \(70\%\) of the landings within \(50\) seconds of reaching its FFT accuracy threshold.
## V Real-world Experiments To test the contributions and proposed algorithms in the real world, we performed landings on an oscillating target at an open water reservoir. For the purpose of this experiment, we employed a 4.5 kg T650 quadrotor equipped with vertical pontoons [24] for safety over water (see Figure 1). In addition, the sensor stack included a Garmin LiDAR for laser-ranging of altitude, a Basler camera for the live video feed, and an Intel NUC for onboard real-time processing of the algorithms, data, and video. The target is a special custom-made USV [25] equipped with a \(2m\times 2m\) landing zone, affixed with an AprilTag [23] for 6-DOF pose-estimation. The experimental conditions subjected the UAV to a wind of 7m/s and a USV oscillating with an amplitude of \(0.3\) radians. ### _Prediction results_ Here we demonstrate our prediction pipeline in two scenarios: a 30 Hz stream from AprilTag, and a 100 Hz stream from the IMU. The prediction results for the real-world experiments are presented in Figure 6 and discussed below. For predictions based on vision-based pose estimation, as seen in Figure 6(a), the near-term future correlates well with the observed motion. However, Figure 6(b) indicates that the predictions for the long-term future can suffer in accuracy. This correlates well with the simulation results as shown in Figure 4 and can be attributed to the higher sampling time-step and its higher variability. Occasionally, it also exhibits convergence and consecutive divergence as more data is fed into the pipeline. For ground truth, we use Figure 6(c) to demonstrate the effectiveness of the pipeline in robustly predicting the future of the USV. However, since MPC exhibits a higher reliance on the predictions that are temporally proximal, the predictions for \(0.25\) and \(0.50\) seconds into the future offer robust support for preventing a landing during an infeasible window. The chosen angle for landing is also sufficiently low in order to demonstrate the prediction capabilities and the selection of a feasible landing window. Fig. 5: Histogram comparison between the proposed approach and the standard approach during the touchdown of the UAV on the USV deck. Fig. 6: Comparison between the predictions made using vision (a-b) and using the onboard IMU of the USV (c). ### _Landing results_ We demonstrate the real-world landing process through Figure 7. In these experiments, the UAV was able to land within \(50\) seconds of acquiring the required FFT accuracy. This coincides with our findings in simulation experiments. Additionally, the tilt angles upon touchdown were less than \(5^{\circ}(0.09\) rad). ## VI Conclusion In this paper, we proposed an MPC that enables a UAV to land autonomously on a tilting USV. The MPC employs a novel objective function and an online decomposition of the motion of the vessel in order to attempt and complete the landing during a near-zero tilt of the landing platform. We successfully demonstrated that we are able to model and predict the behaviour of the UAV and USV without active communication between them. Further, we establish a novel approach for landing on the USV using these predictions, which autonomously adjusts the relative altitude for the UAV to ensure that the landing occurs as close to the zero-tilt state of the landing deck as possible, increasing safeness of the landing phase and reducing impact forces on the landing UAV. 
In comparison to state-of-the-art approaches, we achieved significant improvement in the case of landing in demanding conditions with high waves and high winds without knowing the dimensions of the USV.
2309.10664
Preliminaries paper: Byzantine Tolerant Strong Auditable Atomic Register
An auditable register extends the classical register with an audit operation that returns information on the read operations performed on the register. In this paper, we study Byzantine-resilient auditable register implementations in an asynchronous message-passing system. Existing solutions implement the auditable register on top of at least 4f+1 servers, where at most $f$ can be Byzantine. We show that 4f+1 servers are necessary to implement auditability without communication between servers. When this constraint is relaxed and servers are allowed to interact with each other, there exists a solution using 3f+1 servers that implements a simple auditable atomic register, but it does not provide strong auditability. In this work, we implement a strong auditable register using 3f+1 servers with server-to-server communication. This result reinforces that, with communication between servers, auditability (even strong auditability) does not come with an additional cost in terms of the number of servers.
Antonella Del Pozzo, Antoine Lavandier, Alexandre Rapetti
2023-09-19T14:48:20Z
http://arxiv.org/abs/2309.10664v1
# Preliminaries paper: Byzantine Tolerant Strong Auditable Atomic Register ###### Abstract An auditable register extends the classical register with an audit operation that returns information on the read operations performed on the register. In this paper, we study Byzantine-resilient auditable register implementations in an asynchronous message-passing system. Existing solutions implement the auditable register on top of at least 4f+1 servers, where at most \(f\) can be Byzantine. We show that 4f+1 servers are necessary to implement auditability without communication between servers. When this constraint is relaxed and servers are allowed to interact with each other, there exists a solution using 3f+1 servers that implements a simple auditable atomic register, but it does not provide strong auditability. In this work, we implement a strong auditable register using 3f+1 servers with server-to-server communication. This result reinforces that, with communication between servers, auditability (even strong auditability) does not come with an additional cost in terms of the number of servers. ## I Introduction Outsourcing _storage_ capabilities to third-party distributed storage is a common practice for both private and professional users. It helps to circumvent local space limitations, dependability, and accessibility problems. However, it also raises issues: users have to trust the distributed storage provider with data integrity, retrievability, and privacy, as emphasized by the relentless attacks on servers storing data [1] and by the recent worldwide advent of data protection regulations [2, 3, 4]. In this work, we address the problem of bringing _auditability_ to a distributed storage system, i.e., the capability of detecting who has read the stored data. We consider a set of servers implementing the distributed storage and a set of clients (users) accessing it through read and write operations. Auditability implies the ability to report all the read operations performed by clients, while never reporting a client that did not read. Let us note that once a reader accesses a value, it can disclose it directly without being auditable. For that reason, auditability does not encompass this kind of behavior. ### _Related work._ Most of the results of this work have already been published in [5]. In this article, we propose a new algorithm that implements a Strong Auditable Atomic Register (with completeness and strong accuracy) using 3f+1 servers, while in [5], the solution provides only an auditable atomic register (with completeness and accuracy). ### _Our Contribution_ Our contributions are the following: * A new algorithm implementing a strong auditable atomic register with \(3f+1\) servers. * An experimental evaluation of this algorithm in Rust using Zenoh [6]. **Paper organization.** The paper is organized as follows. Section II defines the system model. Section III formalizes the auditable register abstraction and its properties. Section IV gives a lower bound on the number of servers needed to implement an auditable register. Section V presents an optimally resilient algorithm implementing the Auditable Atomic Register and gives the proof of its correctness. Section VI presents an approach that considers multiple writers, tolerating Byzantine failures, in a special context where all correct writers aim to write the same value.
## II System Model We consider an asynchronous message-passing distributed system composed of a finite set of sequential processes. Each process has a unique ID and is equipped with a cryptographic primitive \(\Sigma\) to sign the messages it sends. We assume that signatures are not forgeable. A process can be either a _client_ or a _server_. We consider an arbitrary number of clients and \(n\) servers that implement a distributed register. The writer is a special client that owns the register and is the only one allowed to write on it. The other clients are the readers, who can read the register's content. In the following, we denote the readers as \(p_{r_{1}},p_{r_{2}},\ldots\), the writer (and auditor) of the register as \(p_{w}\) and the servers as \(s_{1},...,s_{n}\). ### _Failure model_ We consider that all correct processes follow the same protocol \(A\). A process that executes any algorithm \(A^{\prime}\neq A\) is considered Byzantine. The writer can only fail by crashing. At most \(f\) servers and any number of readers can be _Byzantine_. However, we consider that any Byzantine faulty reader does not cooperate with other faulty processes. ### _Communication primitives_ The writer broadcasts messages to servers using a reliable broadcast primitive [7]. The broadcast is done by invoking the broadcast primitive and the delivery of a broadcast message is notified by the event deliver. This primitive provides the following guarantees: **Validity:**_If a correct process deliver a message \(m\) from a correct process \(p_{i}\), then \(p_{i}\) broadcast\(m\);_ **Integrity:**_No correct process deliver a message more than once;_ **No-duplicity:**_No two correct processes deliver distinct messages from \(p_{i}\);_ **Termination-1:** _If the sender \(p_{i}\) is correct, all the correct processes eventually deliver its message;_ **Termination-2:** _If a correct process deliver a message from \(p_{i}\) (possibly faulty) then all the correct processes eventually brb-deliver a message from \(p_{i}\)._ Processes can communicate with each other using a perfect point to point communication abstraction. Processes send messages by invoking the send primitive and the reception of a message is notified by the event receive. This abstraction provides the following guarantees: **Reliable delivery:**_If a correct process \(p_{i}\) sends a message \(m\) to a correct process \(p_{j}\), then \(p_{j}\) eventually delivers \(m\);_ **Integrity:**_No correct process receive a message more than once;_ **No creation:** _If some process \(p_{i}\) receives a message \(m\) with sender \(p_{j}\), then \(m\) was previously sent to \(p_{i}\) by process \(p_{j}\)._ ## III Single-Writer/Multi-Reader Atomic Auditable Register In this work we define a Single-Writer/Multi-Reader auditable atomic register, that can be transposed to the multi-writer multi-reader case [8]. In the following we first recall the atomic register specification and later we extend it with auditability. A register \(R\) is a shared object that provides the processes with two operations, \(R.write(v)\) and \(R.read()\). The first allows to assign a value \(v\) to the register, while the second allows the invoking process to obtain the value of the register \(R\). Being the register a shared object, it can be concurrently accessed by processes and each operation is modeled by two events, an invocation and a response event. 
We consider a single-writer/multi-reader atomic register, that can be written by a predetermined process, the _writer_, and read by any processes, the _readers_. Intuitively, atomicity provides the illusion that all the read and write operations appear as if they have been executed sequentially. The interaction between the processes and the register is modeled by a sequence of invocation and reply events, called a history \(H\). Without loss of generality, we assume that no two events occur at the same time. An operation is said to be complete in a history \(H\), if \(H\) contains both the invocation and the matching response for this operation. If the matching response is missing, the operation is said to be _pending_. A history is sequential if each operation invocation is followed by the matching response. For a given history \(H\) we denote \(\mathsf{complete}(H)\) the history obtained starting from \(H\) by appending zero or more responses to pending invocations and discarding the remaining pending invocations. A history \(H\) is atomic if there is a sequential history \(\pi\) that contains all operations in \(\mathsf{complete}(H)\) such that: 1. _Each read \(\in\pi\) returns the value of the most recent preceding write, if there is one, and otherwise returns the initial value._ 2. _If the response of an operation \(Op_{1}\) occurs in \(\mathsf{complete}(H)\), before the invocation of operation \(Op_{2}\), then \(Op_{1}\) appears before \(Op_{2}\) in \(\pi\)_ Moreover, a history \(H\) is wait-free if every operation invoked by a correct process has a matching response. All the histories generated on Atomic Register are **atomic** and **wait-free**. We now define the _auditable atomic register_ extending the atomic register with the \(\mathsf{audit}()\) operation and defining its semantics. Let us recall that only the writer can perform that operation. The \(\mathsf{audit}()\) operation invocation is \(auditReq(R)\), and its response is \(auditRep(R,Eaudit)\), with \(Eaudit\) the list of couples process-value \((p,v)\) reported by the audit operation. As shown in [9], it is not possible to implement an audit operation in the presence of Byzantine servers, if a single server stores a full copy of the value. Informally, a Byzantine reader could contact only the Byzantine servers, getting the value without leaving any trace to be detected. A possible solution to this issue, as presented in [10], is to combine secret sharing for secrecy [11] and information dispersal for space efficiency [12]. When writing a value \(v\), the writer does not send the whole value to each server, but generates a random key \(K\) and encrypts \(v\) with it. Then, for space efficiency, the writer uses information dispersal techniques to convert the encrypted value in \(n\) parts, \(v_{1},v_{2},\ldots,v_{n}\), of size \(\frac{|v|}{\tau}\) (\(\tau\) is the number of parts needed to reconstruct the value). Finally, the writer uses secret sharing techniques to convert the key \(K\) in \(n\) shares, \(sh_{1},sh_{2},\ldots sh_{n}\), such that the share \(sh_{i}\) is encrypted with the public key of the server \(s_{i}\). At this point, the writer can send to the servers \((v_{1},sh_{1}),\ldots,(v_{n},sh_{n})\). Each server stores only its block and decrypted share. The secret sharing scheme assures that (1) any \(\tau\) shares are enough for a reader to reconstruct the key \(K\), and so the value, (2) that less than \(\tau\) shares give no information on the secret. 
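As an illustration of this write-side preparation, here is a minimal Python sketch. The callables `sym_encrypt`, `ida_split`, `shamir_split`, and `pk_encrypt` are placeholders for symmetric encryption, information dispersal, secret sharing, and per-server public-key encryption; they do not refer to any specific library, and the added fingerprint mirrors the integrity checks mentioned in the next paragraph.

```python
import os
from hashlib import sha256

def prepare_blocks(value, server_public_keys, tau,
                   sym_encrypt, ida_split, shamir_split, pk_encrypt):
    """Write-side preparation: returns one block b_i = (v_i, enc(sh_i), fingerprint) per server.

    value              : the bytes to be written
    server_public_keys : list of n public keys (one per server s_i)
    tau                : number of blocks/shares needed to reconstruct (here 2f+1)
    ida_split          : (ciphertext, n, tau) -> n coded fragments of size ~|v|/tau
    shamir_split       : (secret, n, tau)     -> n key shares; < tau shares reveal nothing
    pk_encrypt         : (public_key, data)   -> ciphertext only server s_i can decrypt
    sym_encrypt        : (key, data)          -> ciphertext
    """
    n = len(server_public_keys)
    key = os.urandom(32)                          # random symmetric key K
    ciphertext = sym_encrypt(key, value)          # encrypt v with K

    fragments = ida_split(ciphertext, n, tau)     # information dispersal: space efficiency
    key_shares = shamir_split(key, n, tau)        # secret sharing: secrecy of K

    blocks = []
    for pk, v_i, sh_i in zip(server_public_keys, fragments, key_shares):
        # Each share is encrypted so that only server s_i can open it;
        # the fingerprint lets readers detect altered fragments.
        blocks.append((v_i, pk_encrypt(pk, sh_i), sha256(v_i).hexdigest()))
    return blocks
```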
Those techniques use fingerprints to tolerate alterations by faulty processes and allow reading processes to know when they collect \(\tau\) valid blocks to reconstruct the value. For sake of simplicity in the presentation of our solution, we avoid the details of the secret sharing scheme implementation. We consider that for any value \(v\), the writer constructs a set of blocks \(\{b_{i}=(v_{i},sh_{i})\}_{i\in[1,n]}\), such that a block \(b_{i}\) can only be decrypted by a server \(s_{i}\). Any \(\tau\) blocks are necessary and sufficient to reconstruct and read the value \(v\). We use the notion of effectively read, introduced in [13]. This notion captures the capability of a process to collect those \(\tau\) blocks to reconstruct a value regardless it returns it or not i.e., the corresponding response event may not appear in the history. We consider the execution \(E\), obtained by adding to the history \(H\) the communication events: send, receive, broadcast and deliver. **Effectively read:**_A value \(v\in\mathcal{V}\) is effectively read by a reader \(p_{r}\) in a given execution \(E\) if and only if \(\exists\) the invocation of a write(v) operation \(\in E\) and receive(\(b_{v_{j}}\)) events for \(\tau\) different blocks._ We can now define the _auditability_ property as the conjunction of the completeness property and the accuracy property. * **Completeness [13] :**_For every value \(v\) written into the register, every process \(p\) that has effectively read \(v\), before the invocation of an audit operation \(op\) in \(H\), \(p\) is reported by \(op\) to have read \(v\)._ * **Strong Accuracy:**_A correct process \(p\), that never effectively read the value \(v\), will not be reported by any audit operation to have read \(v\)._ The completeness property assures that if a reader \(p\) succeeds in obtaining a value \(v\) before the invocation of the audit operation, then the \(Eaudit\) list will contain the couple \((p,v)\). The strong accuracy property assures that if a correct reader \(p\) never effectively read \(v\), then the \(Eaudit\) list will never contain the couple \((p,v)\). In this paper, we propose an optimal resilient solution of the Single-Writer Multi-Reader Strong Auditable Atomic Register. In the following, we denote the \(\mathsf{read}()\) (resp. the \(\mathsf{write}()\) and \(\mathsf{audit}()\)) operation to the register as \(Op_{r_{i}}\) (resp. \(Op_{w_{i}}\) and \(Op_{a_{i}}\)). ## IV Impossibility results In this section, we first recall an impossibility result from [13] that provides a necessary condition on the number of blocks \(\tau\) to have auditability, that we extend in our system model. Finally, we show that without communication between servers, it is impossible to implement an auditable register with less than \(4f+1\) processes. Hereafter, the impossibility result presented in [13] with the complete proof in our system model. **Theorem 1**: _Let \(\tau\) be the number of blocks necessary to recover a value written into the register \(R\). In presence of \(f\) Byzantine servers, it is impossible to provide completeness if \(\tau<2f+1\)._ **Proof** Let \(c\) be a Byzantine client. To read a value from the register, \(c\) needs to collect \(\tau\) blocks. In the following, we show that, if \(\tau<2f+1\), a client \(c\) can read a value \(v\) and \(\langle c,v\rangle\) is not returned by any audit operation. 
Consider the execution where \(c\), during the execution of read operation \(Op\), obtains the \(\tau\) blocks from \(f\) Byzantine servers denoted \(S_{1}\), \(\tau-2f\) correct servers denoted \(S_{2}\) and \(f\) other correct servers denoted \(S_{3}\). The remaining \(n-\tau\) correct servers, denoted \(S_{4}\), have no information about \(Op\). An audit operation that starts after \(Op\) returns cannot wait for more than \(n-f\) responses from the servers. It is possible that those responses are the ones from \(S_{1}\cup S_{2}\cup S_{4}\). \(S_{1}\) being the Byzantine, do not report \(Op\). Processes in \(S_{4}\) have no information about the read operation of \(c\). Then there are only the \(\tau-2f\) servers of \(S_{2}\) that report process \(c\). Since \(\tau<2f+1\), there is no server that can report process \(c\) to have read \(v\). \(\Box\) Intuitively, the value of \(\tau\) has to be sufficiently big to (i) impede \(f\) Byzantine servers from collaborating and reconstructing the value and (ii) to force a reader when reading to contact sufficiently many correct servers to be auditable for that operation. Thus, the number of blocks \(\tau\) also corresponds to the number of servers that must be contacted to read. Without loss of generality, in the following we consider that each server stores at most one block for each value written. We prove that in the absence of server to server communication and with up to \(f\) Byzantine servers, if the writer can crash then implementing an auditable register that ensures completeness requires at least \(4f+1\) servers. Our result is proved for a safe register as defined in [14] (which is weaker than an atomic one). This result does not depend on the communication reliability. We consider an auditable safe register, which is a safe register extended with the audit operation as defined in section III. A safe register ensures that if there is no write operation concurrent with a read operation \(Op\), \(Op\) returns the last value written in the register. **Theorem 2**: _No algorithm \(\mathcal{P}\) implements an auditable safe register in an asynchronous system with \(n<4f+1\) servers if the writer can crash and there is no server to server communication._ **Proof** Let us proceed by contradiction, assuming that \(\mathcal{P}\) exists. In particular, we consider the case of \(n=4f\). Consider an execution where the writer \(p_{w}\) completes a write operation \(Op_{w}\), and after \(Op_{w}\) returns, a correct reader \(p_{r}\) invokes a read operation \(Op_{r}\) which completes. Let \(v_{1}\) be the value written by \(Op_{w}\), since \(\mathcal{P}\) exists, then \(Op_{r}\) returns \(v_{1}\). Otherwise, we violate the safety property of the register. As \(\mathcal{P}\) ensures the liveness property and that there are \(f\) Byzantine processes, we have that \(p_{w}\) cannot wait for more than \(n-f=3f\) acknowledgments from servers before completing \(Op_{w}\), i.e., \(p_{w}\) cannot wait for more than \(2f\) acknowledgments from correct servers before terminating. Let us separate servers in three groups, \(S_{1}\), \(S_{2}\) and \(S_{3}\) with \(|S_{1}|=2f\), \(|S_{2}|=f\) and \(|S_{3}|=f\). Servers in \(S_{1}\) and \(S_{2}\) are correct, while servers in \(S_{3}\) are Byzantine. Let \(p_{w}\) crash after \(Op_{w}\) terminates but before any servers in \(S_{2}\) receive their block for \(v_{1}\). 
Since servers do not communicate with each other and that \(p_{w}\) crashed, we can consider that no server in \(S_{2}\) ever receives the blocks for \(v_{1}\). Then only \(2f\) correct servers, the ones in \(S_{1}\) have a block for \(v_{1}\). Since we cannot rely on Byzantine servers and each server stores at most one block, \(p_{r}\) can collect at most \(2f\) blocks for \(v_{1}\). According to our hypothesis, \(\mathcal{P}\) respect the safe semantic. Thus, \(p_{r}\) is able to read the value by collecting only \(2f\) different blocks. However, according to Theorem 1, doing so \(\mathcal{P}\) does not provide completeness, which is a contradiction. \(\Box\) ## V Solution specification We provide an algorithm that implements a Single-Writer/Multi-Reader strong auditable wait-free atomic register. According to the impossibility result given by Theorem 1, the writer uses information dispersal techniques, with \(\tau=2f+1\). Our solution requires \(3f+1\) servers, which is optimal resilient [15]. According to the impossibility result given by Theorem 2, we consider server to server communication, more in particular, we consider that the writer communicates with servers using a reliable broadcast abstraction. However, this nullifies the effect of using information dispersal techniques to prevent Byzantine servers from accessing the value. Indeed, all the servers would deliver the \(n\) blocks and then could reconstruct the value. To address this issue, the writer encrypts each block with the public key of the corresponding server, such that only the \(i-th\) server can decrypt the \(i-th\) block with its private key. ### _Description of the algorithm_ Messages have the following syntax: \(\langle TAG,payload\rangle\). \(TAG\) represent the type of messages and \(payload\) is the content of the messages. **Variables at writer side:** **-**\(ts\) is an integer which represents the timestamp associate to the value being written (or lastly written) into the register. **-**\(b_{1},\ldots,b_{n}\) are the blocks related to the value being written (or lastly written) into the register. It is such that the block in \(b_{i}\) is encrypted with the public key of the server \(s_{i}\). **Variables at reader side:** All the following variables (except \(n\_seq\)) are reset at each new read operation invocation. **-**\(n\_seq\) is an integer which represents the sequence number of the read operation of the reader \(p_{r}\). This value is incremented at each read invocation. **-**\(Collected\_blocks\) is an array of \(n\) sets of tuple (block, timestamp). The \(i-th\) position stores all the blocks associated with their timestamps, received from server \(s_{i}\) in response to \(VAL\_REQ\) messages (if any). **-**\(Collected\_ts\) is an array of \(n\) lists of integers. In each position \(i\), it stores the list of all the timestamp received from server \(s_{i}\) in response to \(TS\_REQ\) messages (if any). **-**\(min\_ts\) is an integer of the smallest timestamp stored in \(Collected\_ts\) that is greater than \(2f+1\) timestamps in \(Collected\_ts\). **Variables at audit side:** **-**\(Collected\_log\) is an \(n\) dimension array that stores in each position \(i\) the log received from server \(s_{i}\) in response to \(AUDT\) messages (if any). This variable is reinitialized at each audit invocation. **-**\(\mathcal{E}_{p_{r},ts}\) is a list that stores the proof attesting that the reader \(p_{r}\) have read the value associated with timestamp \(ts\). 
**-**\(E_{A}\) is a list that stores all the tuples process-timestamp, of all the read operation detected by the audit operation. This variable is reinitialized at each audit invocation. **Variables at server side \(s_{i}\):** \(\boldsymbol{\cdot}\) _reg_ts_ is an integer, which is the current timestamp at \(s_{i}\). This value is used to prevent the reader to read an old value. \(\boldsymbol{\cdot}\)_val_ is a list of tuple (block, timestamp) storing all the block receive by server \(s_{i}\). \(\boldsymbol{\cdot}\)_Log_ is a list of tuples reader ID, timestamp, signed either by the reader itself or by the writer. Those tuples are used as a proof that the reader effectively read. \(\boldsymbol{\cdot}\ **Proof**\(Op_{w}\) terminates when the condition at line 5 of figure 1 is evaluated to true. Hence, \(Op_{w}\) terminates only if the writer received \((WRITE\_ACK,ts)\), from at least \(2f+1\) different servers. A correct server, sends \((WRITE\_ACK,ts)\) (line 4 of figure 3) to the writer, if it receives \((WRITE,ts,-,-)\) from the writer and after the execution of line 3 in Figure 3 Since there are at most \(f\) of the \(3f+1\) servers are Byzantine, once the write operation completes, the writer has received at least \(f+1\)\((WRITE\_ACK,ts)\) from correct servers. Observation 1 concludes the proof. **Lemma 2**: _Let \(Op_{r}\) be a complete read operation and let \(ts\) the timestamp corresponding to the value it returns. At any time after \(Op_{r}\) returns the value of \(reg\_ts\) is greater than or equal to \(ts\) in at least \(f+1\) correct servers._ **Proof** Since \(Op_{r}\) terminates, it has satisfied the condition \(validBlocks(ts)\) (line 9 figure 2). For this condition to be true, the reader must have received \(2f+1\) different valid blocks for timestamp \(ts\), piggybacked by \(VAL\_REP\) messages (lines 18 figure 2) sent by different servers. A correct server sends such messages, only after it has \(reg\_ts\geq ts\) (lines 18 and 3). Since there are at most \(f\) Byzantine servers, and by observation 1, the claims follow. **Lemma 3**: _Let \(Op_{w}\) be a complete write operation with timestamp \(ts\) and let \(Op_{r}\) be a complete read operation by a correct process \(p_{r}\) associated to a timestamp \(ts^{\prime}\). If \(Op_{r}\) succeeds \(Op_{w}\) in real-time order then \(ts^{\prime}\geq ts\)_ **Proof** Since \(Op_{r}\) returns a value associated to timestamp \(ts^{\prime}\), the condition \(notOld(ts^{\prime})\) (line 9 figure 2) is satisfied. In the following, we show that \(notOld(ts^{\prime})\) true implies that \(ts^{\prime}\geq ts\). Let us consider that \(notOld(ts^{\prime})\) is true. As \(Collected\_ts\) is reinitialized at the beginning of each new read operation, \(notOld(ts^{\prime})\) is true, if all timestamps receives from at least \(2f+1\) different servers piggybacked by \(VAL\_RESP\) (line 18 figure 2) or \(TS\_RESP\) (line 18 figure 2) messages are smaller than or equal to \(ts^{\prime}\). According to Lemma 1, as \(Op_{w}\) terminates, the content of \(reg\_ts\) is greater than or to equal \(ts\) in at least \(f+1\) correct servers. So, during \(Op_{r}\), in response to \((VAL\_REQ,n\_seq)\) (line 3 figure 3), the reader can collect at most \(2f=n-(n-2f)\) messages for a timestamp smaller than \(ts\). Thus, \(notOld()\) always remain false for any timestamp smaller than \(ts\), and since \(notOld(ts^{\prime})\) is true, \(ts^{\prime}\geq ts\). 
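For intuition, the two reader-side predicates used in these proofs can be sketched as follows. This is an illustrative reading of the conditions described above; the data layout and names are assumptions, since the pseudocode figures are not reproduced here, and the per-block fingerprint check is omitted.

```python
def valid_blocks(collected_blocks, ts, tau):
    """validBlocks(ts): the reader holds tau (= 2f+1) valid blocks for timestamp ts,
    each received from a different server."""
    senders = {server for server, blocks in collected_blocks.items()
               if any(block_ts == ts for _block, block_ts in blocks)}
    return len(senders) >= tau

def not_old(collected_ts, ts, quorum):
    """notOld(ts): at least quorum (= 2f+1) servers only reported timestamps <= ts."""
    ok = [server for server, ts_list in collected_ts.items()
          if ts_list and all(t <= ts for t in ts_list)]
    return len(ok) >= quorum
```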
**Lemma 4**: _If a complete read operation \(Op_{r}\), invoked by a correct process \(p_{r}\), returns a value corresponding to a timestamp \(ts>0\), then it exists a write operation \(Op_{w}\), with timestamp \(ts\), that starts before \(Op_{r}\) terminates._ Proof.: Since \(Op_{r}\) terminates, it has satisfied the condition \(validBlocks(ts)\) (line 9 of Figure 2). Thus, the reader received \(2f+1\) distinct valid blocks for timestamp \(ts\), piggybacked by \(BLOCK\_RESP\) messages (lines 10 and of Figure 2) sent by distinct servers. As all the variables are reinitialized at the beginning of a read operation, and as there are at most \(f\) Byzantine servers, at least \(f+1\) correct servers sent a block to \(p_{r}\) during the execution of \(Op_{r}\). A correct server \(s\) sends a block corresponding to a timestamp \(ts>0\) only after it has received the corresponding \(WRITE\) message from the writer; thus, after the invocation of a write operation \(Op_{w}\) for timestamp \(ts\). It follows that \(Op_{w}\) began before \(Op_{r}\) completes. **Lemma 5**: _Let \(Op_{r}\) be a complete read operation invoked by a correct process \(p_{r}\), and let \(Op_{r^{\prime}}\) be a complete read operation invoked by a correct process \(p^{\prime}_{r}\) (\(p_{r}\) and \(p^{\prime}_{r}\) may be the same process). Let \(ts\) and \(ts^{\prime}\) be the timestamps associated with \(Op_{r}\) and \(Op_{r^{\prime}}\) respectively. If \(Op_{r^{\prime}}\) succeeds \(Op_{r}\) in real-time order then \(ts^{\prime}\geq ts\)._ Proof.: The proof follows the same approach as the proof of Lemma 3. Let \(ts^{\prime}\) be \(Op_{r}.ts\) and let \(ts\) be \(Op_{r^{\prime}}.ts\). As \(Op_{r^{\prime}}\) returns a value associated to a timestamp \(ts^{\prime}\), the conditions \(validBlocks(ts^{\prime})\) and \(notOld(ts^{\prime})\) are true for timestamp \(ts\) (line 9 figure 2). In the following, we show that if condition \(notOld(ts^{\prime})\) implies that \(ts^{\prime}\geq ts\). Let us consider that \(notOld(ts^{\prime})\) is true. As \(Collected\_ts\) is reinitialized at the beginning of each new read operation, \(notOld(ts^{\prime})\) is true, if all timestamps receives from at least \(2f+1\) different servers piggybacked by \(VAL\_RESP\) (line 18 figure 2) or \(TS\_RESP\) (line 18 figure 2) messages are smaller than or equal to \(ts^{\prime}\). According to Lemma 2, as \(Op_{r}\) returns for timestamp \(ts\), then at server side, the content of \(reg\_ts\) is greater than or to equal \(ts\) in at least \(f+1\) correct servers. So the reader can collect at most \(2f=n-(n-2f)\) timestamps smaller than \(ts\). Thus, \(notOld\) always remain false for any timestamp smaller than \(ts\), hence \(ts^{\prime}\geq ts\). **Lemma 6**: _Let \(Op_{r}\) be a complete read operation with timestamp \(ts\) and let \(Op_{w}\) be a complete write operation by \(p_{w}\) associated to a timestamp \(ts^{\prime}\). If \(Op_{w}\) succeeds \(Op_{r}\) in real-time order then \(ts^{\prime}\geq ts\)_ **Proof** We proceed considering first the case in which \(ts>0\) and then the case in which \(ts=0\). From Lemma 4, if \(ts>0\) then it exists \(Op_{w^{\prime}}\) with timestamp \(ts\) that starts before the end of \(Op_{r}\). Considering that, there is a unique writer and that its execution is sequential, then \(Op_{w^{\prime}}\) terminates before \(Op_{w}\) starts. As timestamps growth monotonically (Observation 1), \(ts^{\prime}>ts\). Consider now the case \(ts=0\). 
As timestamps grow monotonically (Observation 1), and the initial value of the timestamp is \(0\) (line 0 of Figure 1) then all write operations have their timestamp greater than \(0\). In particular this is true for \(Op_{w}\), such that \(ts^{\prime}>ts\), which concludes the proof. Let \(E\) be any execution of our algorithm and let \(H\) be the corresponding history. We construct \(\mathsf{complete}(H)\) by removing all the invocations of the read operations that have no matching response and by completing a pending \(write(v)\) operation if there is a complete read operation that returns \(v\). Observe that only the last write operation of the writer can be pending. Then, we explicitly construct a sequential history \(\pi\) containing all the operations in \(\mathsf{complete}(H)\). First we put in \(\pi\) all the write operations according to the order in which they occur in \(H\), because write operations are executed sequentially by the unique writer, this sequence is well-defined. Also this order is consistent with that of the timestamps associated with the values written. Next, we add the read operations one by one, in the order of their response in \(H\). A read operation that returns a value with timestamp \(ts\) is placed immediately before the write that follows in \(\pi\) the write operation associated to \(ts\) (or at the end if this write does not exist). By construction of \(\pi\) every read operation returns the value of the last preceding write in \(\pi\). It remains to prove that \(\pi\) preservers the real-time order of non-overlapping operations. **Theorem 3**: _Let \(Op_{1}\) and \(Op_{2}\) be two operations in \(H\). If \(Op_{1}\) ends before the invocation of \(Op_{2}\) then \(Op_{1}\) precedes \(Op_{2}\) in \(\pi\)._ **Proof** Since \(\pi\) is consistent with the order of timestamps, we have to show that the claim is true for all operations with different timestamps. There are four possible scenarios: \(Op_{1}\) and \(Op_{2}\) are respectively a write and a read operation, then the claim holds by Lemma 3. \(Op_{1}\) and \(Op_{2}\) are two reads operations, then the claim holds by Lemma 5. \(Op_{1}\) and \(Op_{2}\) are respectively a read and a write operation, then the claim holds by Lemma 6. If \(Op_{1}\) and \(Op_{2}\) are two write operations the claim holds by the Observation 1. #### Iv-C2 Liveness Proof **Lemma 7**: _If \(p_{w}\) is correct (if the writer don't crash), then each write operation invoked by \(p_{w}\) eventually terminates._ **Proof** The write operation has the following structure. The writer broadcasts a \(WRITE\) message to all servers (line 4 of Figure 1) and waits \(n-f\) ACKs (line 5 of Figure 1) from different servers before terminate. Since \(p_{w}\) is correct, the channel communications properties assure that all correct servers deliver the message \(WRITE\) broadcast by \(p_{w}\). Considering that: (i) servers do not apply any condition to send back \(WRITE\_ACK\) messages (line 4 of Figure 3), and that (ii) at most \(f\) servers can be faulty, \(p_{w}\) always receives the \(n-f\)\(WRITE\_ACK\) replies from correct servers necessary to stop waiting, which concludes the proof. \(\Box\) **Lemma 8**: _Let \(Op_{w}\) be a complete write operation that writes \(v\) with timestamp \(ts\). 
If a correct server \(s\) updates \(reg\_ts\) with \(ts\), then all correct servers eventually adds the block corresponding to \(v\) in \(val\)._ **Proof** Let us recall that a correct server updates \(reg\_ts\) with \(ts\) only upon receiving a \(WRITE\) message (line 1 figure 3). When a server updates \(reg\_ts\) with \(ts\), it also add to \(val\) (line 2 figure 3) the associate block it receives in the \(WRITE\) message from the writer (line 1 figure 3). As there is a reliable broadcast between the writer and servers, eventually all correct servers receive the \(WRITE\) message and updates \(val\) with their block associate to \(ts\). \(\Box\) **Lemma 9**: _When a correct reader \(p_{r}\) receives the response to \(TS\_REQ\) from all correct servers, in at least one correct server \(reg\_ts\geq min\_ts\)._ **Proof** By contradiction, assume that in all correct servers \(reg\_ts<min\_ts\). Then, in response to \(TS\_REQ\) messages, all correct servers send their \(reg\_ts\), all inferior to \(min\_ts\). We note \(tsMax\) the greatest timestamp receives by \(p_{r}\) from correct servers. Then, it exists in \(Collected\_ts\)\(2f+1\) timestamp \(\leq tsMax\). By assumption, \(tsMax<min\_ts\), which is in contradiction with the condition line 15. \(\Box\) **Lemma 10**: _A read operation \(Op_{r}\) invoked by a correct process \(p_{r}\) always terminates._ **Proof** First, observe that if \(p_{r}\) satisfies the conditions at line 9 figure 2, \(p_{r}\) terminates. Then, let us show by construction that those conditions are necessarily satisfied. A correct reader \(p_{r}\) starts the read operation, after it reinitialized all the variables. Then, \(p_{r}\) sends a \(TS\_REQ\) messages to all the servers. Consider the moment \(p_{r}\) receives the response from the \(2f+1\) correct servers, and sends a \(VAL\_REQ\) message to all the servers for timestamp \(min\_ts\). Notice that according to Lemma 9 at least one correct server has set \(reg\_ts\) to \(min\_ts\). As at least one correct server set \(reg\_ts\) to \(min\_ts\), then from the reliable broadcast, eventually all servers will set \(reg\_ts\) to \(min\_ts\). Then, the reader will eventually receive the \(2f+1\) response from correct servers such that the condition line 7 is satisfied. Then the reader sends \(BLOCK\_REQ\) messages for timestamp \(min\_ts\) to all servers. As at least one correct server set \(reg\_ts\) to \(min\_ts\), from Lemma 8, eventually all correct servers have in \(val\) the block associate with timestamp \(min\_ts\). Then, in response to \(BLOCK\_REQ\), all correct servers can send their block corresponding to timestamp \(ts\) and the condition \(validBlocks\) is satisfied, such that the reader can return for the value-timestamp pair corresponding to timestamp \(ts\). #### Iv-C3 Auditability **Lemma 11**: _Algorithm presented figure 1 to 3 solves the completeness property_ **Proof** For a reader \(p_{r}\) to returns for a valid value \(v\) with timestamp \(ts\), then \(p_{r}\) receives at least \(\tau\) messages from different servers (line 9 figure 2) with the block corresponding to \(v\). A correct server sends a block with associate timestamp \(ts\) to a reader \(p_{r}\) (line 13, figure 3), only after it adds to its log the reader \(p_{r}\) associate with timestamp \(ts\), line 12 figure 3. Thus, if a correct server \(s\) sends a block with associate timestamp \(ts\) to a reader \(p_{r}\), \(s\) stores \(p_{r}\) ID associate with \(ts\) in its log. 
If \(\tau\geq 2f+1\), since there is at most \(f\) Byzantine, in the worst case, at least \(\tau-f\geq f+1\) correct servers, denoted \(P_{C}\), records \(Op_{r}\) in their logs. Let \(Op_{a}\) be an audit operation, invoked by a process \(p_{a}\), that starts after \(p_{r}\) returns. So when \(Op_{a}\) starts, \(p_{r}\) is in PC's log. Then, \(p_{a}\) waits \(2f+1\) responses (line 11 figure 1) after sending \(AUDT\) request (line 10 figure 1) to servers. As there is at most \(f\) Byzantine servers, \(p_{a}\) gets the responses from at least \(f+1\) (\(n-2f\)) correct servers. In particular, \(p_{a}\) get at least one response from a server in \(P_{C}\). Finally, with \(t\leq\tau-2f=1\), \(p_{r}\) and is reported by \(Op_{a}\) to have read the value associate to timestamp \(ts\). **Lemma 12**: _Algorithm presented figure 1 to 3 solves the completeness (with collusion) property_ **Proof** For a reader \(p_{r}\) to returns for a valid value \(v\) with timestamp \(ts\), then \(p_{r}\) receives at least \(\tau\) messages from different servers (line 9 figure 2) with the block corresponding to \(v\). Notice that if \(p_{r}\) is faulty, it can also receive those blocks not directly from the servers but for some other faulty process in \(\mathcal{B}\). However, those other faulty process at some point must have receives those blocks from the servers. A correct server sends a block with associate timestamp \(ts\) to a reader \(p^{\prime}_{r}\) (line 13, figure 3), only after it adds to its log the reader \(p^{\prime}_{r}\) associate with timestamp \(ts\) (line 12 figure 3). Thus, if a correct server \(s\) sends a block with associate timestamp \(ts\) to a reader \(p^{\prime}_{r}\), \(s\) stores \(p^{\prime}_{r}\) ID associate with \(ts\) in its log. If \(\tau\geq 2f+1\), since there is at most \(f\) Byzantine, in the worst case, at least \(\tau-f\geq f+1\) correct servers, denoted \(P_{C}\), records in their logs directly the reader \(p_{r}\), or if \(p_{r}\) is faulty, some faulty process in \(\mathcal{B}\). Let \(Op_{a}\) be an audit operation, invoked by a process \(p_{a}\), that starts after \(p_{r}\) returns. So when \(Op_{a}\) starts, \(p_{r}\) is in PC's log. Then, \(p_{a}\) waits \(2f+1\) responses (line 11 figure 1) after sending \(AUDT\) request (line 10 figure 1) to servers. As there is at most \(f\) Byzantine servers, \(p_{a}\) gets the responses from at least \(f+1\) (\(n-2f\)) correct servers. In particular, \(p_{a}\) get at least one response from a server in \(P_{C}\). Finally, with \(t\leq\tau-2f=1\), if \(p_{r}\) is correct, \(p_{r}\) is reported by \(Op_{a}\) to have read the value associate to timestamp \(ts\), otherwise some faulty process in \(\mathcal{B}\) are. **Lemma 13**: _Algorithm presented figure 1 to 3 with \(t\geq 1\) solves the strong accuracy property_ **Proof** We have to prove that a correct reader \(p_{r}\) that never invoked a read operation cannot be reported by an audit operation. With \(t=1\), a reader \(p_{r}\) is reported by an audit operation if one server respond to the \(AUDT\) message with a correct record of \(p_{r}\) in its log. Thanks to the use of signature, a false record cannot be created by a Byzantine server. The signature used to attest the validity of a record are of two kind. If a correct server add \(p_{r}\) in its log before responding to \(VAL\_REQ\) messages (line 12, figure 3), then it uses the reader signature. 
So for a process to have a valid record of \(p_{r}\) in its log, the process \(p_{r}\) must have sent \(VAL\_REQ\) messages to some servers, i.e., \(p_{r}\) must have invoked a read operation. \(\Box\) **Theorem 4**: _The algorithm presented in figures 1 to 3 with \(n=3f+1\), \(\tau=2f+1\) and \(t=1\) solves the strong auditability property._ **Proof** Directly from Lemma 13 and Lemma 11 with \(t=1\) and \(\tau=2f+1\). \(\Box\) ## VI Valid reads using multiple writers In this section we explore the guarantees obtained by switching from a single writer that can crash to multiple writers, some of which may be Byzantine. We consider \(N_{w}\geq 2f_{w}+1\) writers trying to write a unique value \(v\) into an auditable distributed register. We present a protocol for the writers that is compatible with any write operation that completes in a single round, such as the one presented in Section V. 1. Writers perform the secret sharing in a deterministic way1. Footnote 1: This can be achieved by using \(v\) to seed a PRNG that the writers then use to perform the secret sharing. 2. Writers send to each server its share. 3. Servers wait until they receive the same share from \(f_{w}+1\) distinct writers before accepting it. Using this setup, the following properties arise: **Theorem 5**: _Correct servers only accept valid shares._ **Proof** A correct server waits until it has collected \(f_{w}+1\) copies of the same share before committing it to its storage. Because at most \(f_{w}\) writers can be Byzantine, at least one correct writer communicated the share to this server, and as such the share is necessarily correct. \(\Box\) **Theorem 6**: _If one correct server accepts a share then, eventually, every correct server will accept its share._ **Proof** From the previous theorem we know that the accepted share is valid. From this we know that the writers are in the process of sending shares to every server. Because there are at least \(f_{w}+1\) correct writers and we are operating in an eventually consistent network, every server will eventually receive its share \(f_{w}+1\) times, and thus every correct server will _eventually_ accept its share. \(\Box\) With this, we have essentially reduced the writing process to one with a single correct writer. We therefore removed the need for protocols to be crash-tolerant, while keeping every other property they might have. In particular, this means that readers of distributed registers implementing the above write sequence know, by construction, that the values they read from the register originate from correct writers, hence the name **Valid Reads**.
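To make the server-side rule of step 3 concrete, the following minimal sketch shows one way a server could gate acceptance of its share on receiving \(f_{w}+1\) identical copies from distinct writers. The class name, message shape, and the use of a byte string as the share value are illustrative assumptions and are not part of the protocol figures.

```python
from collections import defaultdict

class ShareAcceptor:
    """Sketch of a server that accepts its share only after f_w + 1 distinct
    writers have sent the same value (step 3 of the multi-writer protocol)."""

    def __init__(self, f_w: int):
        self.f_w = f_w
        # share value -> set of writer ids that sent exactly this value
        self.support = defaultdict(set)
        self.accepted = None

    def on_share(self, writer_id: str, share: bytes):
        """Record a share received from a writer; accept once f_w + 1
        distinct writers agree on the same value."""
        if self.accepted is not None:
            return self.accepted
        self.support[share].add(writer_id)
        if len(self.support[share]) >= self.f_w + 1:
            # at most f_w writers are Byzantine, so at least one correct
            # writer vouches for this value: the share is necessarily valid
            self.accepted = share
        return self.accepted

# usage: with f_w = 1, two distinct writers must agree before acceptance
acceptor = ShareAcceptor(f_w=1)
assert acceptor.on_share("w1", b"share-for-s3") is None
assert acceptor.on_share("w2", b"share-for-s3") == b"share-for-s3"
```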
2309.16215
Convex Estimation of Sparse-Smooth Power Spectral Densities from Mixtures of Realizations with Application to Weather Radar
In this paper, we propose a convex optimization-based estimation of sparse and smooth power spectral densities (PSDs) of complex-valued random processes from mixtures of realizations. While the PSDs are related to the magnitude of the frequency components of the realizations, it has been a major challenge to exploit the smoothness of the PSDs, because penalizing the difference of the magnitude of the frequency components results in a nonconvex optimization problem that is difficult to solve. To address this challenge, we design the proposed model that jointly estimates the complex-valued frequency components and the nonnegative PSDs, which are respectively regularized to be sparse and sparse-smooth. By penalizing the difference of the nonnegative variable that estimates the PSDs, the proposed model can enhance the smoothness of the PSDs via convex optimization. Numerical experiments on the phased array weather radar, an advanced weather radar system, demonstrate that the proposed model achieves superior estimation accuracy compared to existing sparse estimation models, regardless of whether they are combined with a smoothing technique as a post-processing step or not.
Hiroki Kuroda, Daichi Kitahara, Eiichi Yoshikawa, Hiroshi Kikuchi, Tomoo Ushio
2023-09-28T07:41:14Z
http://arxiv.org/abs/2309.16215v3
Convex Estimation of Sparse-Smooth Power Spectral Densities from Mixtures of Realizations with Application to Weather Radar ###### Abstract In this paper, we propose a convex optimization-based estimation of sparse and smooth power spectral densities (PSDs) of complex-valued random processes from mixtures of realizations. While the PSDs are related to the magnitude of the frequency components of the realizations, it has been a major challenge to exploit the smoothness of the PSDs, because penalizing the difference of the magnitude of the frequency components results in a nonconvex optimization problem that is difficult to solve. To address this challenge, we design the proposed model that jointly estimates the complex-valued frequency components and the nonnegative PSDs, which are respectively regularized to be sparse and sparse-smooth. By penalizing the difference of the nonnegative variable that estimates the PSDs, the proposed model can enhance the smoothness of the PSDs via convex optimization. Numerical experiments on the phased array weather radar, an advanced weather radar system, demonstrate that the proposed model achieves superior estimation accuracy compared to existing sparse estimation models, regardless of whether they are combined with a smoothing technique as a post-processing step or not. Power spectral density estimation, random process, sparsity, smoothness, regularization, convex optimization, weather radar. ## I Introduction Power spectral density (PSD) of a random process describes how power of the random process is distributed over frequency. Estimation of the PSD from realizations of a random process is a fundamental problem in science and engineering [1, 2, 3]. For weather radar applications, the PSD estimation is essential for the analysis of weather phenomena, because the PSD contains information pertaining to the precipitation intensity and the Doppler velocity distribution [4, 5, 6, 7, 8]. For example, the parabolic Doppler weather radar [4] transmits a pencil beam and subsequently observes backscattered signals in a narrow range of elevation angles, which can be regarded as realizations of a single random process whose PSD reflects the weather condition in the narrow range. We consider the estimation of PSDs from mixtures of realizations of random processes, which is much more challenging than the classical case of a single random process. Our primary interest is on the phased array weather radar (PAWR) [9, 10, 11, 12, 13, 14, 15, 16], which is developed to detect hazardous weather phenomena. The Doppler weather radar is not capable of detecting hazardous weather because of its mechanical vertical scan for observing backscattered signals in multiple elevation angles, which requires a long observation time. To shorten the observation time, the PAWR transmits a fan beam and subsequently observes backscattered signals in a wide range of elevation angles. The backscattered signals observed by the PAWR can be modeled as mixtures of realizations of random processes whose PSDs reflect the weather conditions in finely-divided ranges. Thus, to obtain the weather condition in a fine resolution, the PAWR needs digital signal processing, recovering the PSDs in the finely-divided ranges. Since the estimation of PSDs from mixtures of realizations is a challenging problem, the major existing methods employ a two-step approach that first estimates the frequency components of the realizations and then estimates the PSDs. 
For the frequency component estimation, sparsity-aware methods have achieved significant improvements on the estimation accuracy over the classical linear methods in many fields [16, 17, 18, 19, 20, 21, 22, 23]. Sparsity in the spatial domain is exploited in [17, 18, 19] under the assumption that signals arrive from only a few angles. Unfortunately, this assumption is far from suitable for the PAWR because targets such as clouds and raindrops exist at many angles [16]. In [20, 21, 22, 23], isolated sparse frequency components, called _line spectra_, are estimated based on the \(\ell_{1}\) regularization. It is demonstrated in [16] that a block-sparse regularization model using the mixed \(\ell_{2}/\ell_{1}\) norm [24, 25, 26, 27, 28, 29, 30] is more effective for weather radar applications because the frequency components are clustered in a few blocks due to the narrow-bandwidth of the PSDs. After the frequency component estimation, the _periodogram_, i.e., the squared magnitude of the frequency components, is usually employed to estimate the PSD because of its asymptotic unbiasedness. However, the periodogram has the drawbacks of large variance and erratic oscillation [1, 2, 3, 4]. For the classical case of a single random process, smoothing techniques, e.g., those shown in [1, 2], are often used to reduce the variance and the erratic oscillation. While the existing smoothing techniques can be used as a post-processing step, such a two-stage approach would be sub-optimal because the smoothness is not exploited when estimating the frequency components. Since the PSD is estimated by the periodogram, i.e., the squared magnitude of the frequency components, one may add a penalty for the difference between the magnitude of the frequency components in the frequency component estimation. However, due to the nonconvexity of this type of penalty (see, e.g., [31]), it is hard for this approach to obtain an optimal solution, and the performance dependency on the initial estimate and the optimization algorithm is difficult to elude. Another line of studies derive approximated observation models between the realizations and the PSD, e.g., for a single random process [32, 33, 34, 35] and spatially independent random processes [36, 37]. Since the approximated observation model is written in terms of the (nonnegative) PSD, the smoothness of the PSD could be exploited via convex optimization. However, this approach takes the magnitude of the observation model to derive the approximated observation models, implying that half of the information in the original observation model is lost as the phase information is discarded. In particular, this approach is not applicable to the PAWR because signals from different angles cannot be distinguished by the magnitude information (see (6) in Example 1). In this paper, we propose a convex optimization-based method that simultaneously estimates block-sparse frequency components and block-sparse and smooth PSDs from mixtures of realizations.1 To design the proposed method, we first apply the optimally structured block-sparse model of [38] for the frequency component estimation. Then, we newly leverage the latent variable of the designed model, which is originally introduced to optimize the block structure, for the PSD estimation. More precisely, we demonstrate that the latent variable is in fact related to the square root of the PSDs, enabling us to exploit the smoothness of the PSDs via convex optimization. 
The main contributions of this paper are summarized as follows. Footnote 1: If a target is both sparse and smooth, it is also block-sparse since nonzero components are clustered in several blocks due to the smoothness. * We present, for the first time in the literature, a convex optimization-based method that can exploit the smoothness of the PSDs for their estimation from mixtures of realizations. * We show that many smoothness priors designed for real-valued signals, including the high-order total variation [39, 40] and the total generalized variation [41], can be directly incorporated into the proposed framework to enhance the smoothness of the PSDs of complex-valued random processes thanks to the nonnegative latent variable. * We conduct thorough numerical simulations on the PAWR, which demonstrate that the proposed method achieves superior estimation accuracy to the existing sparse estimation models combined with or without post-smoothing, i.e., a smoothing technique applied as a post-processing step after the frequency component estimation. The rest of this paper is organized as follows. In Section II, we formulate the estimation of PSDs from mixtures of realizations of random processes, and clarify its relation to weather radar applications. In Section III, we design the proposed convex optimization model that simultaneously estimates block-sparse frequency components and block-sparse and smooth PSDs. Section IV presents numerical experiments on the PAWR, followed by conclusion in Section V. A preliminary short version of this paper was presented at a conference [42]. _Notations:_\(\mathbb{N}\), \(\mathbb{R}\), \(\mathbb{R}_{+}\), \(\mathbb{R}_{++}\), and \(\mathbb{C}\) respectively denote the sets of nonnegative integers, real numbers, nonnegative real numbers, positive real numbers, and complex numbers. We use \(\imath\in\mathbb{C}\) to denote the imaginary unit, i.e., \(\imath=\sqrt{-1}\). For every \(x\in\mathbb{C}\), \(x^{\ast}\) denotes the complex conjugate of \(x\), and \(|x|:=\sqrt{x^{\ast}x}\) denotes the absolute value of \(x\). For matrices or vectors, we denote the transpose and the Hermitian transpose respectively by \((\cdot)^{\top}\) and \((\cdot)^{\mathrm{H}}\). The identity matrix of order \(N\) is denoted by \(\mathbf{I}_{N}\in\mathbb{R}^{N\times N}\). We denote the diagonal matrix with components of \(\mathbf{w}\in\mathbb{C}^{N}\) on the main diagonal by \(\mathrm{diag}(\mathbf{w})\in\mathbb{C}^{N\times N}\). The cardinality of a set \(\mathcal{A}\) is denoted by \(|\mathcal{A}|\). The \(\ell_{2}\) (Euclidean) norm, the \(\ell_{1}\) norm, and the \(\ell_{0}\) pseudo-norm of \(\mathbf{x}=(x_{1,},\ldots,x_{N})^{\top}\in\mathbb{C}^{N}\) are respectively denoted by \(\|\mathbf{x}\|:=\sqrt{\sum_{n=1}^{N}|x_{n}|^{2}}\), \(\|\mathbf{x}\|_{1}:=\sum_{n=1}^{N}|x_{n}|\), and \(\|\mathbf{x}\|_{0}:=\left|\{n\in\{1,\ldots,N\}\,|\,x_{n}\neq 0\}\right|\). The expectation operator is denoted by \(E[\cdot]\). ## II Problem formulation We consider the estimation problem of power spectral densities (PSDs) of \(N\) random processes from noisy mixtures of their realizations. We denote the \(n\)-th complex-valued discrete-time random process \((n=1,\ldots,N)\) by \[X_{n}^{\star}[\ell]\in\mathbb{C}\qquad(\ell=0,\pm 1,\pm 2,\ldots). 
\tag{1}\] To define the PSD of \(X_{n}^{\star}[\ell]\), we assume that \(X_{n}^{\star}[\ell]\) is zero-mean and wide-sense stationary, which imply that \(E[X_{n}^{\star}[\ell]]=0\) for any \(\ell\), and the auto-correlation \(E[X_{n}^{\star}[m+\ell](X_{n}^{\star}[m])^{\star}]\) does not depend on \(m\) for any \(\ell\). Under these assumptions, define the auto-correlation function by \[R_{n}[\ell]:=E[X_{n}^{\star}[m+\ell](X_{n}^{\star}[m])^{\star}],\] and suppose \(\sum_{\ell=-\infty}^{\infty}|R_{n}[\ell]|<\infty\). Then, the PSD of \(X_{n}^{\star}[\ell]\) is given by \[S_{n}^{\star}(f):=\sum_{\ell=-\infty}^{\infty}R_{n}[\ell]e^{-i2\pi\ell\ell} \quad\left(f\in\left[-\frac{1}{2},\frac{1}{2}\right)\right). \tag{2}\] We denote \(L\) consecutive realizations of \(X_{n}^{\star}[\ell]\) by \[\bar{x}_{j,n}[\ell]\in\mathbb{C}\qquad(\ell=1,\ldots,L), \tag{3}\] where \(j\in\{1,\ldots,J\}\) is the index of trials. Note that \(\bar{x}_{j,n}[\ell]\) for \(j=1,\ldots,J\) are assumed to be realizations of the common random process \(X_{n}^{\star}[\ell]\) (see Remark 1 for validity of this assumption). We define the observation model by \[\mathbf{y}_{j}:=\sum_{n=1}^{N}\mathbf{A}_{n}\bar{\mathbf{x}}_{j,n}+\epsilon_{ j}\in\mathbb{C}^{d}\quad(j=1,\ldots,J), \tag{4}\] where the realizations in (3) are collectively denoted by \[\bar{\mathbf{x}}_{j,n}:=(\bar{x}_{j,n}[1],\bar{x}_{j,n}[2],\ldots,\bar{x}_{j, n}[L])^{\top}\in\mathbb{C}^{L}, \tag{5}\] \(\mathbf{A}_{n}\in\mathbb{C}^{d\times L}\) is the known matrix that models the observation process for the \(n\)-th source, and \(\epsilon_{j}\in\mathbb{C}^{d}\) is the (unknown) observation noise. Our goal is to estimate the PSDs \(S_{n}^{\star}(f)\quad(n=1,\ldots,N)\) from \(\mathbf{y}_{j}\quad(j=1,\ldots,J)\) and \(\mathbf{A}_{n}\quad(n=1,\ldots,N)\) in (4). Note that the classical PSD estimation problem for a single random process [1, 2, 3], e.g., for the Doppler weather radar [4], is a special instance of (4) for \(N=1\), \(d=L\), and \(\mathbf{A}_{n}=\mathbf{1}_{L}\). The generalized observation model (4) is introduced to cover the PSD estimation for the PAWR [9, 10, 11, 12, 13, 14, 15, 16], which is our primary interest. **Example 1** (PAWR).: For the PAWR, \(X_{n}^{\star}[\ell]\) corresponds to the sum of backscattered signals in the angular interval \(\left[\theta_{n}-\frac{\Delta\theta}{2},\theta_{n}+\frac{\Delta\theta}{2}\right]\), where \(\theta_{n}\quad(n=1,\ldots,N)\) are the equally spaced angles with a spacing of \(\Delta\theta\). By using an \(M\)-element uniform linear array, the PAWR observes noisy mixtures of realizations by \[\mathbf{y}_{j}[\ell]:=\sum_{n=1}^{N}\mathbf{a}(\theta_{n})\bar{x}_{j,n}[\ell]+ \epsilon_{j}[\ell]\in\mathbb{C}^{M}\quad(\ell=1,\ldots,L) \tag{6}\] for each \(j\in\{1,\ldots,J\}\), where \(\mathbf{a}(\theta_{n})\in\mathbb{C}^{M}\) is the known steering vector for the angle \(\theta_{n}\), and \(\epsilon_{j}[\ell]\in\mathbb{C}^{M}\) is the white Gaussian noise. More precisely, \(\mathbf{a}(\theta)\) is defined by \[\mathbf{a}(\theta):=\left(1,e^{-\frac{12\pi\Delta\theta\sin\theta}{\lambda_{ \mathrm{ew}}}},\ldots,e^{-\frac{12(M-1)\pi\Delta\theta\sin\theta}{\lambda_{ \mathrm{ew}}}}\right)^{\top}\in\mathbb{C}^{M},\] where \(\Delta\) is the inter-element spacing of the uniform linear array, and \(\lambda_{\mathrm{ew}}\) is the carrier wavelength. 
The observation model (6) for the PAWR can be written in the form of (4), i.e., \[\mathbf{y}_{j}^{(\mathrm{pawr})} =\sum_{n=1}^{N}\mathbf{A}_{n}^{(\mathrm{pawr})}\bar{\mathbf{x}}_{ j,n}+\epsilon_{j}^{(\mathrm{pawr})}\quad(j=1,\ldots,J),\] by setting \[\mathbf{y}_{j}^{(\mathrm{pawr})} :=(\mathbf{y}_{j}[1]^{\top},\mathbf{y}_{j}[2]^{\top},\ldots, \mathbf{y}_{j}[L]^{\top})^{\top}\in\mathbb{C}^{M\!L},\] \[\epsilon_{j}^{(\mathrm{pawr})} :=(\epsilon_{j}[1]^{\top},\epsilon_{j}[2]^{\top},\ldots,\epsilon_{ j}[L]^{\top})^{\top}\in\mathbb{C}^{M\!L},\] and \(\mathbf{A}_{n}^{(\mathrm{pawr})}\in\mathbb{C}^{M\!L}\) to the block-diagonal matrix that contains \(L\) copies of \(\mathbf{a}(\theta_{n})\) on the diagonal blocks, i.e., \[\mathbf{A}_{n}^{(\mathrm{pawr})}:=\begin{pmatrix}\mathbf{a}(\theta_{n})&&&\\ &\mathbf{a}(\theta_{n})&&\\ &&\ddots&\\ &&&\mathbf{a}(\theta_{n})\end{pmatrix}.\] The estimation of the PSDs \(S_{n}^{\star}(f)\) of \(X_{n}^{\star}[\ell]\quad(n=1,\ldots,N)\) is essential for the PAWR because \(S_{n}^{\star}(f)\) contains information about the weather condition in the narrow angular interval \(\left[\theta_{n}-\frac{\Delta\theta}{2},\theta_{n}+\frac{\Delta\theta}{2}\right]\). More precisely, the weather condition can be obtained from \(S_{n}^{\star}(f)\) as follows. First, the continuous-time PSD \(S_{n}^{\star(\mathrm{ct})}(f)\) is recovered from \(S_{n}^{\star}(f)\). When aliasing does not occur in \(S_{n}^{\star}(f)\), \(S_{n}^{\star(\mathrm{ct})}(f)\) can be simply obtained by \(S_{n}^{\star(\mathrm{ct})}(f)=TS_{n}^{\star}(f)\) if \(|f|\leq\frac{1}{2T}\) and \(S_{n}^{\star(\mathrm{ct})}(f)=0\) otherwise, where \(T\) is the pulse repetition time, i.e., the sampling interval of \(X_{n}^{\star}[\ell]\). Even when aliasing occurs in \(S_{n}^{\star}(f)\), \(S_{n}^{\star(\mathrm{ct})}(f)\) can be recovered from \(S_{n}^{\star}(f)\) unless the variation of velocity is extremely large, and thus the anti-aliasing filter is usually not employed in weather radar applications (see, e.g., [4, Chapter 5] and [16, Section II-A] for detail). Next, \(S_{n}^{\star(\mathrm{ct})}(f)\) is decomposed as \[S_{n}^{\star(\mathrm{ct})}(f)=P_{n}^{\star}q_{n}^{\star}(f),\] where \(P_{n}^{\star}:=\int_{-\infty}^{\infty}S_{n}^{\star(\mathrm{ct})}(f)df\) and \(q_{n}^{\star}(f):=S_{n}^{\star(\mathrm{ct})}(f)/P_{n}^{\star}\) respectively correspond to the precipitation intensity and the Doppler frequency distribution [4]. The Doppler frequency distribution can be converted to the distribution of Doppler velocity \(v\), i.e., wind velocity parallel to the incident beam direction, by \(v=\frac{\lambda_{\mathrm{ew}}f}{2}\). For instance, the area of nonzero values of \(q_{n}^{\star}(f)\) implies the existence of the corresponding wind velocity components. The precipitation intensity and the Doppler velocity distribution are useful for the analysis of weather phenomena, e.g., [5], [7], [8], [43] for tornado, [6] for weather clutter, and [44] for raindrop size distribution. Since the estimation accuracy of the precipitation intensity and the Doppler velocity distribution heavily depends on that of the PSD, it is important for the analysis of weather phenomena based on the PAWR to realize a method that can accurately estimate the PSDs \(S_{n}^{\star}(f)\quad(n=1,\ldots,N)\) from the mixtures of realizations. 
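As a purely illustrative sketch of Example 1, the code below builds the steering vectors, the block-diagonal matrices \(\mathbf{A}_{n}^{(\mathrm{pawr})}\), and checks that stacking the per-pulse observations in (6) reproduces the form (4). The steering-vector phase convention is assumed to be the standard uniform-linear-array form, and the toy sizes \(M\), \(L\), \(N\) and the noise level are placeholders, not the settings of Section IV.

```python
import numpy as np

def steering_vector(theta, M, delta, wavelength):
    # assumed standard ULA steering vector: phase -2*pi*m*delta*sin(theta)/wavelength
    m = np.arange(M)
    return np.exp(-1j * 2 * np.pi * m * delta * np.sin(theta) / wavelength)

def block_diag_steering(a, L):
    # A_n^(pawr): L copies of the M-dim steering vector on the diagonal blocks (ML x L)
    M = a.size
    A = np.zeros((M * L, L), dtype=complex)
    for ell in range(L):
        A[ell * M:(ell + 1) * M, ell] = a
    return A

rng = np.random.default_rng(0)
M, L, N = 8, 16, 3                      # toy sizes (placeholders)
delta, wavelength = 16.5e-3, 31.8e-3    # spacing and carrier wavelength as in Section IV
thetas = np.deg2rad(np.linspace(-15, 30, N))

x = rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))     # realizations
noise = 0.1 * (rng.standard_normal(M * L) + 1j * rng.standard_normal(M * L))

# per-pulse mixing as in (6), then stacked into the form (4)
y_pulses = [sum(steering_vector(thetas[n], M, delta, wavelength) * x[n, ell]
                for n in range(N)) for ell in range(L)]
y = np.concatenate(y_pulses) + noise

A = [block_diag_steering(steering_vector(thetas[n], M, delta, wavelength), L)
     for n in range(N)]
y_check = sum(A[n] @ x[n] for n in range(N)) + noise
assert np.allclose(y, y_check)   # (6) and the stacked form (4) agree
```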
**Remark 1** (Tradeoff between \(L\) and \(J\)).: To derive the observation model in (4) where \(\bar{x}_{j,n}[\ell]\quad(j=1,\ldots,J)\) are realizations of the common random process \(X_{n}^{\star}[\ell]\), similarly to the case of a single random process \((N=1)\)[1]-[4], we split all observations into \(J\) subsets. For weather radar applications, the total number \(K_{\rm{pls}}\) of pulses is divided into \(J\) subsets, and thus we have \(L=\frac{K_{\rm{pls}}}{J}\). Since \(\frac{1}{L}=\frac{J}{K_{\rm{pls}}}\) is the frequency resolution, i.e., the sampling interval in the frequency domain (see (11)), increasing \(J\) sacrifices the frequency resolution [4, Chapter 5]. Note that we cannot increase \(K_{\rm{pls}}\) unboundedly because \(K_{\rm{pls}}\) corresponds to the observation time, and thus is set to be small enough to ensure that the statistics of targets such as clouds and raindrops are (approximately) unchanged. Typically, \(J\) is set to be very small for the sake of a fine frequency resolution, and \(J=1\) is of particular interest as the original frequency resolution \(\frac{1}{K_{\rm{pls}}}\) is preserved [4]. Note that, while \(J=1\) is a typical choice in practice, our method, which will be developed in Section III, is applicable to general \(J\). We rewrite (4) to an observation model in terms of frequency components of the time-domain realizations \(\bar{\mathbf{x}}_{j,n}\) in (5) because of their more direct relation to the PSD in (2) than \(\bar{\mathbf{x}}_{j,n}\). More precisely, we represent \(\bar{\mathbf{x}}_{j,n}\) as \[\bar{\mathbf{x}}_{j,n}=\mathbf{G}\bar{\mathbf{u}}_{j,n} \tag{7}\] for \(j=1,\ldots,J\) and \(n=1,\ldots,N\), where \[\bar{\mathbf{u}}_{j,n}:=(\bar{u}_{j,n}[1],\bar{u}_{j,n}[2],\ldots,\bar{u}_{j,n}[L])^{\top}\in\mathbb{C}^{L} \tag{8}\] is used as the frequency components, and \(\mathbf{G}\in\mathbb{C}^{L\times L}\) is a suitable synthesis matrix. Substituting the representation (7) to the observation model (4), we have \[\mathbf{y}_{j}=\sum_{n=1}^{N}\mathbf{A}_{n}\mathbf{G}\bar{\mathbf{u}}_{j,n}+ \varepsilon_{j}\in\mathbb{C}^{d}\quad(j=1,\ldots,J), \tag{9}\] which is used as the observation model for the frequency components \(\bar{\mathbf{u}}_{j,n}\). The representation (7) covers popular frequency analysis methods used in weather radar applications, e.g., the discrete Fourier transform (DFT) and the windowed DFT. **Example 2** (Dft).: Let \(\bar{\mathbf{u}}_{j,n}^{(\rm{DFT})}\) be the normalized DFT coefficients of \(\bar{\mathbf{x}}_{j,n}\). Define the normalized DFT matrix \(\mathbf{F}\in\mathbb{C}^{L\times L}\) by \[\mathbf{F}:=\frac{1}{\sqrt{L}}\begin{pmatrix}1&e^{-i2\pi f_{1}}&\cdots&e^{-i2 \pi f_{1}(L-1)}\\ 1&e^{-i2\pi f_{2}}&\cdots&e^{-i2\pi f_{2}(L-1)}\\ \vdots&\vdots&\ddots&\vdots\\ 1&e^{-i2\pi f_{1}}&\cdots&e^{-i2\pi f_{L}(L-1)}\end{pmatrix}, \tag{10}\] where \[f_{k}:=\frac{k-1-L/2}{L}\quad(k=1,\ldots,L) \tag{11}\] are uniform sampling points in \([-1/2,1/2)\) used as a frequency grid, and \(L\) is assumed to be even for simplicity. Then, \(\bar{\mathbf{u}}_{j,n}^{(\rm{DFT})}\) is given by \[\bar{\mathbf{u}}_{j,n}^{(\rm{DFT})}=\mathbf{F}\bar{\mathbf{x}}_{j,n}.\] Due to the unitarity of \(\mathbf{F}\), we have \[\bar{\mathbf{x}}_{j,n}=\mathbf{F}^{\rm{H}}\bar{\mathbf{u}}_{j,n}^{(\rm{DFT})},\] which corresponds to (7) with \(\mathbf{G}=\mathbf{F}^{\rm{H}}\). 
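A small sketch of Example 2: it constructs the normalized DFT matrix \(\mathbf{F}\) in (10) on the shifted frequency grid (11), checks its unitarity, and verifies that \(\mathbf{G}=\mathbf{F}^{\mathrm{H}}\) reproduces the time-domain realization as in (7). The toy signal is arbitrary and only for illustration.

```python
import numpy as np

def dft_matrix(L):
    """Normalized DFT matrix F in (10) on the shifted frequency grid (11)."""
    k = np.arange(1, L + 1)
    f = (k - 1 - L / 2) / L                       # f_k = (k - 1 - L/2) / L
    ell = np.arange(L)                            # time indices 0, ..., L-1
    return np.exp(-1j * 2 * np.pi * np.outer(f, ell)) / np.sqrt(L)

L = 32
F = dft_matrix(L)
assert np.allclose(F @ F.conj().T, np.eye(L))     # F is unitary

rng = np.random.default_rng(1)
x = rng.standard_normal(L) + 1j * rng.standard_normal(L)
u = F @ x                                         # frequency components (Example 2)
assert np.allclose(F.conj().T @ u, x)             # x = G u with G = F^H, i.e., (7)
```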
**Example 3** (Windowed DFT).: To mitigate the frequency sidelobes, a window function \(\mathbf{w}\in\mathbb{R}_{++}^{L}\) is applied to \(\bar{\mathbf{x}}_{j,n}\) before the DFT in some cases [3], [45]. The windowed DFT coefficients are given by \[\bar{\mathbf{u}}_{j,n}^{(\rm{WDFT})}=\mathbf{F}\mathbf{W}\bar{\mathbf{x}}_{j,n},\] where \(\mathbf{F}\) is the DFT matrix in (10), and \(\mathbf{W}:=\mathrm{diag}(\mathbf{w})\in\mathbb{R}_{++}^{L\times L}\). Since \((\mathbf{F}\mathbf{W})^{-1}=\mathbf{W}^{-1}\mathbf{F}^{\rm{H}}\), we have \[\bar{\mathbf{x}}_{j,n}=\mathbf{W}^{-1}\mathbf{F}^{\rm{H}}\bar{\mathbf{u}}_{j,n }^{(\rm{WDFT})},\] which corresponds to (7) with \(\mathbf{G}=\mathbf{W}^{-1}\mathbf{F}^{\rm{H}}\). ### Major challenge in PSD estimation from mixtures of realizations The square of the magnitude of the frequency components in (8), i.e., \[|\bar{u}_{j,n}[k]|^{2}\quad(k=1,\ldots,L), \tag{12}\] is called the _periodogram_2 and widely used as an estimate of the PSD \(S_{n}^{\star}(f)\) on the frequency grid defined in (11). It should be noted that the periodogram needs to be estimated from the mixtures of realizations in (9) for our problem. The periodogram with the DFT shown in Example 2 is an asymptotically unbiased estimator of the PSD under a mild condition [1], [2]. More precisely, since \(\bar{u}_{j,n}^{(\rm{DFT})}[k]\) can be regarded as a realization of the random variable Footnote 2: Strictly speaking, (12) is called the periodogram when \(\mathbf{G}\) in (7) is the inverse DFT \(\mathbf{F}^{\rm{H}}\) in Example 2, and (12) with \(\mathbf{G}=\mathbf{W}^{-1}\mathbf{F}^{\rm{H}}\) is called, e.g., the _windowed periodogram_[1] or the _modified periodogram_[3]. \[U_{n}^{\star}[k]:=\frac{1}{\sqrt{L}}\sum_{\ell=1}^{L}X_{n}^{\star}[\ell]e^{-i2 \pi f_{\ell}(\ell-1)}\quad(k=1,\ldots,L),\] the asymptotic unbiasedness means that \[\lim_{L\to\infty}E\left[|U_{n}^{*}[k]|^{2}\right]=S_{n}^{*}(f_{k})\quad(k=1,\ldots,L). \tag{13}\] A simple proof of (13) is provided in [16, Section II-A] under a mild sufficient condition \(\sum_{\ell=-\infty}^{\infty}|\ell R_{n}[\ell]|<\infty\). The periodogram with the windowed DFT in Example 3 is also asymptotically unbiased if the window function is properly designed [3]. The drawback of the periodogram lies in its large variance and often-observed erratic oscillation [1, 2, 3, 4] (see also Fig. 1 for an example from the experiments in Section IV). Indeed, this drawback is theoretically shown for the periodogram with the DFT when \(X_{n}^{*}[\ell]\) is a linear combination of i.i.d. Gaussian random variables and \(L\to\infty\)[1, 2]. Namely, for this case, the variance is equally large to the square of the PSD, i.e., \[\lim_{L\to\infty}E\left[\left(|U_{n}^{*}[k]|^{2}-S_{n}^{*}(f_{k})\right)^{2} \right]=(S_{n}^{*}(f_{k}))^{2}\] for \(k=1,\ldots,L\), and \(U_{n}^{*}[k]\) and \(U_{n}^{*}[k^{\prime}]\) (\(k\neq k^{\prime}\)) are uncorrelated when \(L\to\infty\). These facts validate the often-observed erratic oscillation of the periodogram. Although several approaches have been developed to reduce the variance of the periodogram, they are not suitable for our problem in which the PSDs have to be estimated from the mixtures of realizations in (9). A simple approach is to exploit the situation that \(\bar{u}_{j,n}[k]\quad(j=1,\ldots,J)\) are realizations of the common random variable, i.e., to use the ensemble average of the periodograms: \[\frac{1}{J}\sum_{j=1}^{J}|\bar{u}_{j,n}[k]|^{2}\quad(k=1,\ldots,L). 
\tag{14}\] Unfortunately, since \(J\) is typically very small in weather radar applications (see Remark 1), the ensemble average cannot sufficiently reduce the variance and the erratic oscillation. Since the PSD is usually smooth in weather radar applications [4], another promising approach is to exploit the smoothness of the PSD. However, existing smoothing techniques, e.g., those shown in [1, 2], are not directly applicable to our problem because these techniques suppose that the frequency components \(\bar{u}_{j,n}\) of the realizations are known. Using smoothing techniques as a post-processing step would be sub-optimal because the smoothness of the PSD is not considered in the estimation of the frequency components. Thus, it remains a major challenge to exploit the smoothness of the PSDs when they need to be estimated from mixtures of realizations. ## III Proposed approach To exploit the sparsity and the smoothness for the PSD estimation from the observed mixtures of realizations in (9), we design a convex model that jointly estimates the frequency components \(\bar{u}_{j,n}[k]\) and the PSDs \(S_{n}^{*}(f_{k})\). In Section III-A, we first apply the optimally structured block-sparse model of [38] for the estimation of the frequency components. Then, in Section III-B, we leverage its latent variable, which is originally introduced for the block structure optimization, to estimate sparse and smooth PSDs. ### Block-sparse estimation of frequency components We design a block-sparse penalty for the frequency components \(\bar{u}_{j,n}\) by applying the optimally structured block-sparse model [38] with the knowledge of the PAWR [16]. For simplicity, we begin by designing a penalty for each \(n\in\{1,\ldots,N\}\). As demonstrated in [16], the PSD \(S_{n}^{*}(f)\) is usually narrow-band for the PAWR, which implies that \(\bar{u}_{j,n}\) is block-sparse for each source \(n\in\{1,\ldots,N\}\) and trial \(j\in\{1,\ldots,J\}\) due to the relation (13).3 Moreover, since \(\bar{u}_{j,n}\quad(j=1,\ldots,J)\) are realizations of the common random variable, suitable block partitions for \(\bar{u}_{j,n}\quad(j=1,\ldots,J)\) are the same. Thus, using the mixed \(\ell_{2}/\ell_{1}\) norm that is suitable for the block-sparsity, we introduce a penalty for \(\mathbf{u}_{n}:=(\mathbf{u}_{j,n})_{j=1}^{J}\) as Footnote 3: Although we present (13) for \(\mathbf{G}=\mathbf{I}^{\text{H}}\) in Example 2 for simplicity, \(\bar{u}_{j,n}\) is also block-sparse when \(\mathbf{G}=\mathbf{W}^{-1}\mathbf{F}^{\text{H}}\) in Example 3 because the window function in Example 3 is designed to reduce the heights of the sidelobes and slightly increase the width of the mainlobe. \[\|\mathbf{u}_{n}\|_{2,1}^{(\mathcal{B}_{n,n})_{n=1}^{n}} :=\sum_{m=1}^{k_{n}}\sqrt{J|\mathcal{B}_{n,n}|}\left\|((u_{j,n}[k] )_{j=1}^{J})_{k\in\mathcal{B}_{n,n}}\right\|\] \[=\sum_{m=1}^{k_{n}}\sqrt{J|\mathcal{B}_{n,n}|}\sqrt{\sum_{j=1}^{J }\sum_{k\in\mathcal{B}_{n,n}}|u_{j,n}[k]|^{2}},\] where \(\mathcal{B}_{n,n}\subset\{1,\ldots,L\}\quad(m=1,\ldots,h_{n})\) is a block partition in the frequency domain of the \(n\)-th source. By suppressing the mixed \(\ell_{2}/\ell_{1}\) norm, the block-sparsity is promoted because the components \(((u_{j,n}[k])_{j=1}^{J})_{k\in\mathcal{B}_{n,n}}\) in the same block are forced to be zeros together. The problem in [16] is that an appropriate block partition is unknown a priori because it depends on the unknown Doppler velocity distribution. 
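To make these quantities concrete, the sketch below evaluates the ensemble-averaged periodogram in (14) and the weighted mixed \(\ell_{2}/\ell_{1}\) norm for a *given* block partition of the frequency grid; the partition and the toy data are arbitrary and only for illustration (zero-based indices are used for the grid).

```python
import numpy as np

def averaged_periodogram(u):
    """Ensemble average of the periodograms in (14); u has shape (J, L)."""
    return np.mean(np.abs(u) ** 2, axis=0)

def mixed_l21_norm(u, blocks):
    """Weighted mixed l2/l1 norm for a block partition of {0, ..., L-1}:
    sum over blocks B of sqrt(J * |B|) times the l2 norm of all entries
    (over trials j and frequencies k in B)."""
    J = u.shape[0]
    return sum(np.sqrt(J * len(B)) * np.linalg.norm(u[:, B]) for B in blocks)

rng = np.random.default_rng(2)
J, L = 2, 16
u = np.zeros((J, L), dtype=complex)
u[:, 5:9] = rng.standard_normal((J, 4)) + 1j * rng.standard_normal((J, 4))  # one active block

p_avg = averaged_periodogram(u)                   # estimate of the PSD on the grid
blocks = [list(range(0, 5)), list(range(5, 9)), list(range(9, 16))]  # arbitrary partition
print(mixed_l21_norm(u, blocks))                  # only the middle block contributes
```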
To solve the problem of unknown block partition, following the approach of [38], we minimize the mixed \(\ell_{2}/\ell_{1}\) norm over the partition of at most \(H_{n}\) blocks, i.e., \[\psi_{H_{n}}(\mathbf{u}_{n}):=\min_{h_{n}\in\{1,\ldots,H_{k}\}} \left[\min_{(\mathcal{B}_{n,n})_{n=1}^{n}\in\mathcal{P}_{\bar{u}_{n}}}\| \mathbf{u}_{n}\|_{2,1}^{(\mathcal{B}_{n,n})_{n=1}^{n}}\right]. \tag{15}\] Figure 1: The PSD and the periodogram computed from the realizations for a simulation. The constraint set \(\mathcal{P}_{h_{n}}\) consists of all \(h_{n}\) block partitions of \(\{1,\ldots,L\}\), i.e., \[\left(\mathcal{B}_{n,n}\right)_{m=1}^{h_{n}}\in\mathcal{P}_{h_{n}}\] \[\Leftrightarrow\left\{\begin{aligned} &\bigcup_{m=1}^{h_{n}}\mathcal{B}_{n,n}=\{1,\ldots,L\},\\ &\mathcal{B}_{m,n}\neq\varnothing\quad(m=1,\ldots,h_{n}),\\ &\mathcal{B}_{m,n}\cap\mathcal{B}_{m^{\prime},n}=\varnothing \quad(m\neq m^{\prime}),\\ &\mathcal{B}_{m,n}=\mathfrak{M}_{L}\left(\{\ell\in\mathbb{N}\mid a _{m,n}\leq\ell\leq b_{m,n}\}\right)\\ &\text{for some }a_{m,n},b_{m,n}\in\mathbb{N}\quad(m=1,\ldots,h_{n}), \end{aligned}\right.\] where \[\mathfrak{M}_{L}(\mathcal{I}):=\left\{\ell-L\left[\frac{\ell-1}{L}\right] \in\{1,\ldots,L\}\,\bigg{|}\,\ell\in\mathcal{I}\right\},\] and \(\lfloor\cdot\rfloor\) is the floor function. For instance, when \(\mathcal{I}=\{L-1,L,L+1\}\), \(\mathfrak{M}_{L}(\mathcal{I})=\{L-1,L,1\}\). Differently from the standard design proposed in [38], the present design makes \(\mathcal{P}_{h_{n}}\) includes blocks connected by the first and the last entries, and is suitable for weather radar applications due to the following reason. Since aliasing is not a serious issue in weather radar applications, the anti-aliasing filter is usually not employed (see Example 1 and references [4, Chapter 5] and [16, Section II-A] for detail), and thus aliasing may occur, i.e., some Doppler frequency components may exceed the Nyquist frequency. For instance, in Fig. 2, a part of the PSD that exceeds the Nyquist frequency is aliased, and thus the corresponding frequency components are also aliased. In such cases, aliased nonzero components and non-aliased nonzero components are better to be collected into a single block, as shown in Fig. 2, because they form a single block before the aliasing. To realize such capability, \(\mathcal{P}_{h_{n}}\) is designed to include blocks connected by the first and the last entries. Note that a block connected by the first and the last entries is not always adopted since the block partition is automatically optimized in (15). Although it is difficult to use \(\psi_{H_{n}}(\mathbf{u}_{n})\) directly due to the combinatorial optimization in (15), we can construct a tight convex relaxation of \(\psi_{H_{n}}(\mathbf{u}_{n})\) as follows. 
Let \(\phi\colon\mathbb{C}^{J}\times\mathbb{R}_{+}\to\mathbb{R}_{+}\cup\{\infty\}\) be a lower semicontinuous convex function defined by \[\phi(\mathbf{v},\sigma):=\begin{cases}\frac{\|\mathbf{v}\|^{2}}{2\sigma}+ \frac{J}{2}\sigma,&\text{if }\sigma>0;\\ 0,&\text{if }\mathbf{v}=\mathbf{0}\text{ and }\sigma=0;\\ \infty,&\text{otherwise}.\end{cases} \tag{16}\] Then, similarly to [38, Section II], \(\psi_{H_{n}}(\mathbf{u}_{n})\) can be rewritten as \[\psi_{H_{n}}(\mathbf{u}_{n})=\min_{\begin{subarray}{c}\mathbf{\sigma}_{n}\in \mathbb{R}_{+}^{L}\\ \|\mathbf{D}\mathbf{\sigma}_{n}\|_{0}\leq\mu_{n}\end{subarray}}\sum_{k=1}^{L}\phi \left((u_{j,n}[k])_{j=1}^{J},\sigma_{n}[k]\right), \tag{17}\] where \(\mathbf{D}\in\mathbb{R}^{L\times L}\) is the first-order difference operator with the periodic boundary condition, i.e., the difference operator on the ring graph [46]. More precisely, \(\mathbf{D}\) is defined by \[\mathbf{D}:=\begin{bmatrix}-1&1&0&0&\cdots&0&0\\ 0&-1&1&0&\cdots&0&0\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&0&\cdots&-1&1\\ 1&0&0&0&\cdots&0&-1\end{bmatrix}\in\mathbb{R}^{L\times L}. \tag{18}\] Note that (17) is a slight extension of the result shown in [38] to the case where the blocks are fixed over the trials \(j\in\{1,\ldots,J\}\). We can obtain a tight convex relaxation of (17) by replacing the \(\ell_{0}\) pseudo-norm in the constraint with its best convex relaxation, i.e., the \(\ell_{1}\) norm: \[\tilde{\psi}_{\alpha_{n}}(\mathbf{u}_{n}):=\min_{\begin{subarray}{c}\mathbf{ \sigma}_{n}\in\mathbb{R}_{+}^{L}\\ \|\mathbf{D}\mathbf{\sigma}_{n}\|_{1}\leq\alpha\end{subarray}}\sum_{k=1}^{L}\phi \left((u_{j,n}[k])_{j=1}^{J},\sigma_{n}[k]\right),\] where \(\alpha_{n}\in\mathbb{R}_{+}\) is a tuning parameter related to the number of blocks. Although the sum \(\sum_{n=1}^{N}\tilde{\psi}_{\alpha_{n}}(\mathbf{u}_{n})\) can be used for the penalty of \((\mathbf{u}_{n})_{m=1}^{N}\), tuning \(\alpha_{n}\) for each \(n\in\{1,\ldots,N\}\) could be troublesome. Thus, to simplify the tuning process, we propose a convex penalty for \(\mathbf{u}:=(\mathbf{u}_{n})_{n=1}^{N}\) as \[\Psi_{\alpha}(\mathbf{u}):=\min_{\begin{subarray}{c}(\mathbf{\sigma}_{n})_{n=1}^{ N}\in\mathbb{R}_{+}^{N}\\ \sum_{n=1}^{N}\|\mathbf{D}\mathbf{\sigma}_{n}\|_{1}\leq\alpha\end{subarray}}\sum_{n =1}^{N}\sum_{k=1}^{L}\phi\left((u_{j,n}[k])_{j=1}^{J},\sigma_{n}[k]\right), \tag{19}\] where the single tuning parameter \(\alpha\in\mathbb{R}_{+}\) is related to the number of total blocks for \(n=1,\ldots,N\). Using the proposed convex penalty \(\Psi_{\alpha}(\mathbf{u})\) in (19), we estimate the frequency components by the regularized least squares for the observation model in (9), i.e., \[\underset{\mathbf{u}\in\mathbb{C}^{\Delta\mu}}{\mathrm{minimize}}\,\frac{1}{2} \sum_{j=1}^{J}\left\|\mathbf{y}_{j}-\sum_{n=1}^{N}\mathbf{A}_{n}\mathbf{G} \mathbf{u}_{j,n}\right\|^{2}+\lambda\Psi_{\alpha}\left(\mathbf{u}\right), \tag{20}\] where \(\lambda>0\) is the regularization parameter that controls the importance of the block-sparsity. Substituting the definition of \(\Psi_{\alpha}\left(\mathbf{u}\right)\) in (19) into (20), we can reduce the optimization Figure 2: The PSD and the squared magnitude of the frequency components when aliasing occurs, and its suitable block partition. 
problem (20) to \[\begin{split}\underset{\mathbf{u}\in\mathbb{C}^{Mk},\boldsymbol{ \sigma}\in\mathbb{R}^{k}_{+}}{\mathrm{minimize}}&\frac{1}{2}\sum_{j= 1}^{J}\left\|\mathbf{y}_{j}-\sum_{n=1}^{N}\mathbf{A}_{n}\mathbf{G}\mathbf{u}_{ j,n}\right\|^{2}\\ +&\lambda\sum_{n=1}^{N}\sum_{k=1}^{L}\phi\left((u_{j,n }[k])_{j=1}^{J},\sigma_{n}[k]\right)\\ &\text{subject to }\sum_{n=1}^{N}\|\mathbf{D}\boldsymbol{\sigma}_{n} \|_{1}\leq\alpha\end{split} \tag{21}\] where \(\boldsymbol{\sigma}\) denotes \((\boldsymbol{\sigma}_{n})_{n=1}^{N}\). Although the proposed regularization model (21) is a relatively difficult convex optimization problem due to the discontinuous function \(\phi\), we can obtain a globally optimal solution of (21) by applying the proximal splitting techniques [47, 48, 49, 50, 51] with the interpretation of \(\phi\) in (16) as a _perspective function_[52, 53, 54]. A concrete algorithm based on the alternating direction method of multipliers (ADMM) and its derivation are provided in Appendix A. ### Leveraging latent variable for PSD estimation We demonstrate that the solution for the latent variable \(\boldsymbol{\sigma}\) of the proposed model (21) is in fact suitable for the PSD estimation. Let \(\hat{\mathbf{u}}\) and \(\hat{\boldsymbol{\sigma}}\) be the solutions of (21) respectively for the variables \(\mathbf{u}\) and \(\boldsymbol{\sigma}\). While it is possible to compute the periodogram as in (14) for \(\hat{\mathbf{u}}\), \(\hat{\boldsymbol{\sigma}}\) is more suitable for the estimation of smooth PSDs. To confirm this, we show that \(\hat{\boldsymbol{\sigma}}\) corresponds to the square root of smoothed and averaged periodogram as follows. 1. We begin by considering the case of \(\alpha\to\infty\), which is not of our interest but easy to analyze. In this case, since \(\boldsymbol{\sigma}\) minimizes \[\lambda\sum_{n=1}^{N}\sum_{k=1}^{L}\phi\left((u_{j,n}[k])_{j=1}^{J},\sigma_{n }[k]\right)\] (22) in (21), the solutions \(\hat{\mathbf{u}}\) and \(\hat{\boldsymbol{\sigma}}\) satisfy the relationship \[\hat{\sigma}_{n}[k]=\sqrt{\frac{1}{J}\sum_{j=1}^{J}|\hat{u}_{j,n}[k]|^{2}}\] (23) for each \(k=1,\ldots,L\) and \(n=1,\ldots,N\), which can be shown based on [38, Lemma 1]. The relation (23) means that \(\hat{\boldsymbol{\sigma}}\) is the square root of the averaged periodogram in (14) computed with \(\hat{\mathbf{u}}\) when \(\alpha\to\infty\). 2. Next, we consider the case of our interest, where \(\alpha\) is set to a finite value. In this case, the constraint \[\sum_{n=1}^{N}\|\mathbf{D}\boldsymbol{\sigma}_{n}\|_{1}\leq\alpha\] in (21) penalizes the smoothness of \(\boldsymbol{\sigma}_{n}\) (\(n=1\,\ldots,N\)) since \(\mathbf{D}\) is the difference operator. Meanwhile, the other part (22) of the proposed model forces \(\boldsymbol{\sigma}\) to be the averaged periodogram in (23). Thus, by the combination of these terms, roughly speaking, \(\hat{\boldsymbol{\sigma}}\) is smoothed around the square root of the averaged periodogram in (23). In addition to being smooth, \(\hat{\boldsymbol{\sigma}}\) is block-sparse because \(\hat{\mathbf{u}}_{j,n}\) (\(j=1,\ldots,J\)) are regularized to have a common block-sparse support. Since the PSDs are smooth and block-sparse for the PAWR, the square of components of \(\hat{\boldsymbol{\sigma}}\), i.e., \[\hat{S}_{n}(\hat{\boldsymbol{\sigma}}_{k})=(\hat{\sigma}_{n}[k])^{2}\quad(k=1,\ldots,L), \tag{24}\] is expected to be a better estimate of the PSDs than the periodogram computed with \(\hat{\mathbf{u}}\). 
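A minimal numerical sketch of the ingredients used above, assuming nothing beyond (16), (18), and (23): it evaluates \(\phi\), builds the circulant first-order difference operator \(\mathbf{D}\), and checks that, for fixed frequency components and without the difference constraint (the \(\alpha\to\infty\) case), the minimizing \(\sigma\) is the square root of the averaged periodogram, which follows from minimizing \(\|\mathbf{v}\|^{2}/(2\sigma)+(J/2)\sigma\) over \(\sigma>0\).

```python
import numpy as np

def phi(v, sigma):
    """The function phi in (16) for v in C^J and sigma >= 0."""
    J = len(v)
    if sigma > 0:
        return np.linalg.norm(v) ** 2 / (2 * sigma) + J * sigma / 2
    return 0.0 if np.allclose(v, 0) else np.inf

def first_difference(L):
    """Circulant first-order difference D in (18): (D x)[i] = x[(i+1) % L] - x[i]."""
    return -np.eye(L) + np.roll(np.eye(L), 1, axis=1)

rng = np.random.default_rng(3)
J = 4
v = rng.standard_normal(J) + 1j * rng.standard_normal(J)

# unconstrained minimizer over sigma equals ||v|| / sqrt(J), i.e., the square root
# of the averaged-periodogram entry as in (23)
sigma_star = np.linalg.norm(v) / np.sqrt(J)
grid = np.linspace(1e-3, 5, 5000)
assert abs(grid[np.argmin([phi(v, s) for s in grid])] - sigma_star) < 1e-2

D = first_difference(6)
print(D @ np.ones(6))            # differences of a constant vector are zero
# the high-order penalty in (25) replaces D by a matrix power, e.g.
# np.linalg.matrix_power(D, 2) for r = 2
```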
Intuitively, the proposed model is expected to accurately estimate smooth and block-sparse PSDs by the following mechanism. Since \(\phi(\mathbf{v},\sigma)\) is basically \(\frac{\|\mathbf{v}\|^{2}}{2\sigma}+\frac{J}{2}\sigma\) (see (16)), roughly speaking, we can consider that the part (22) acts as \[\lambda\sum_{n=1}^{N}\sum_{k=1}^{L}\sum_{j=1}^{J}\left(\frac{|u_{j,n}[k]|^{2}} {2\sigma_{n}[k]}+\frac{\sigma_{n}[k]}{2}\right).\] Since \(\sigma_{n}[k]\) estimates the square root of the PSDs, \[\frac{|u_{j,n}[k]|^{2}}{2\sigma_{n}[k]}\] is expected to be an effective regularization for \(\mathbf{u}\) because the expectation of the squared magnitude of the realizations \(\tilde{\mathbf{u}}\) is close to the PSDs (see (13) and Fig. 1). Refining \(\mathbf{u}\) leads to an equally refined \(\boldsymbol{\sigma}\) because \(\boldsymbol{\sigma}\) is smoothed around the value in (23). Thanks to these interactions, the proposed model is expected to effectively estimate the frequency components and the PSDs simultaneously. While \(\|\mathbf{D}\boldsymbol{\sigma}_{n}\|_{1}\) with the first difference operator \(\mathbf{D}\) in (18) is a good choice for controlling the block structure, more advanced smoothness priors can be incorporated to further improve the estimation accuracy of the PSDs. Thanks to the nonnegativity of \(\boldsymbol{\sigma}_{n}\), many convex smoothness penalties designed for real-valued signals, such as the high-order total variation [39, 40] that uses \(\mathbf{D}^{\prime}\)\((r\geq 2)\) instead of \(\mathbf{D}\) and the total generalized variation [41], can be directly applied to \(\boldsymbol{\sigma}_{n}\). For instance, the proposed model with the high-order total variation \[\begin{split}\underset{\mathbf{u}\in\mathbb{C}^{Mk},\boldsymbol{ \sigma}\in\mathbb{R}^{k}_{+}}{\mathrm{minimize}}&\frac{1}{2}\sum_{j= 1}^{J}\left\|\mathbf{y}_{j}-\sum_{n=1}^{N}\mathbf{A}_{n}\mathbf{G}\mathbf{u}_{ j,n}\right\|^{2}\\ +&\lambda\sum_{n=1}^{N}\sum_{k=1}^{L}\phi\left((u_{j,n }[k])_{j=1}^{J},\sigma_{n}[k]\right)\\ &\text{subject to }\sum_{n=1}^{N}\|\mathbf{D}^{\prime} \boldsymbol{\sigma}_{n}\|_{1}\leq\alpha\end{split} \tag{25}\] can be solved similarly to the case of (21) by the ADMM-based algorithm shown in Appendix A. In contrast, when these penalties are applied to, e.g., the magnitude of \(\mathbf{u}\), their convexity is lost (see, e.g., [31]), which implies that a globally optimal solution is difficult to obtain. Note that the application of these penalties to \(\mathbf{u}\), which is complex-valued, is not a suitable strategy because the magnitude of \(\mathbf{u}\) is smooth but the phase of \(\mathbf{u}\) is not smooth in most applications. ## IV Simulation results To demonstrate the effectiveness of the proposed approach, we conduct numerical simulations on the PSD estimation for the PAWR shown in Example 1. Essentially, we follow the simulation setting in [12, 16]. Uniform elevation angles \(\theta_{1},\ldots,\theta_{N}\), ranging between \(-15^{\circ}\) and \(30^{\circ}\) degrees with \(N=110\), are selected. 
We synthesize the (discrete-time) PSD \(S_{n}^{\star}(f)\) by \[S_{n}^{\star}(f)=\frac{1}{T}\sum_{m=-\infty}^{\infty}G_{n}^{\star}\left(\frac{ f-m}{T}\right)\] for each \(n=1,\ldots,N\), where \(T\) is the pulse repetition time, and \(G_{n}^{\star}(f)\) is a continuous-time Gaussian-shaped PSD \[G_{n}^{\star}(f)=\frac{P_{n}}{\sqrt{2\pi}\varsigma_{n}}e^{-\frac{(f-\varepsilon _{n})^{2}}{2\varsigma_{n}^{2}}},\] which is an appropriate model when, e.g., the atmospheric turbulence is dominant [4]. The power \(P_{n}\) is set from the actual reflection intensity measured by the PAWR at Osaka University on March 30, 2014. We define the mean Doppler frequency \(\mu_{n}\) by the certain sine curve used in [16]. The Doppler frequency width \(\varsigma_{n}\) is converted from the Doppler velocity width, which are chosen randomly from the uniform distribution of \([1,3]\ [\mathrm{m}/\mathrm{s}]\). Note that this setting is more realistic than that presented in [16] where the Doppler velocity width is merely fixed to \(2\ [\mathrm{m}/\mathrm{s}]\) at every elevation angle. We set \(X_{n}^{\star}[\ell]\) in (1) to the Gaussian process that has the specified PSD \(S_{n}^{\star}(f)\), and then generate its realizations \(\bar{x}_{j,\mu}[\ell]\ \ \ (\ell=1,\ldots,L)\) based on the probability distribution of \(X_{n}^{\star}[\ell]\ \ (\ell=1,\ldots,L)\), which is computed in the way presented in [4, 16]. The observation vector \(\mathbf{y}_{j}\) is given by (6), where \(\varepsilon_{j}\) is generated as the white Gaussian noise of the standard deviation \(\sqrt{2.5}\). The parameters of the PAWR are set as follows: \(M=128\), \(\lambda_{w}=31.8\,[\mathrm{mm}]\), \(\Delta=16.5\,[\mathrm{mm}]\), and \(T=0.4\,[\mathrm{ms}]\). For the synthesis matrix \(\mathbf{G}\) in the observation model (9) in terms of the frequency components, we test both \(\mathbf{G}=\mathbf{F}^{\mathrm{H}}\) and \(\mathbf{G}=\mathbf{W}^{-1}\mathbf{F}^{\mathrm{H}}\) respectively for the standard DFT in Example 2 and the windowed DFT in Example 3. We use the hamming window for the window function \(\mathbf{w}\) in Example 3, which is normalized to \(\left\|\mathbf{w}\right\|=\sqrt{L}\), i.e., to the norm of the rectangular window \((1,1,\ldots,1)^{\top}\)[45]. We compare the proposed approach that jointly estimates the frequency components and the PSDs with the existing approach that first estimates the frequency components and subsequently the PSDs. The proposed approach computes the estimate \(\hat{S}_{n}(f_{k})\) of the PSD by (24) with the solution \(\hat{\mathbf{\sigma}}\) of the proposed model (25) for the variable \(\mathbf{\sigma}\). For the frequency component estimators used in the existing approach, we employ the mixed \(\ell_{2}/\ell_{1}\) regularization model using fixed small-size overlapping blocks [16], which is state-of-the-art for the PAWR, and the \(\ell_{1}\) regularization model as a non-structured sparse model. For the mixed \(\ell_{2}/\ell_{1}\) regularization model, we adopt the formulation based on latent group lasso [29, 30], which selects relevant blocks from the pre-defined overlapping blocks in the mixed \(\ell_{2}/\ell_{1}\) norm, because its estimation accuracy is (slightly) better than that of the simple overlapping blocks-based formulation in [16]. While the above nonlinear methods outperform the linear methods for the frequency component estimation in [16], we also include the minimum mean square error (MMSE) beamformer [12], which performs best among the linear methods in [16], for comparison. 
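The data-generation step described at the beginning of this section, namely folding a continuous-time Gaussian-shaped PSD onto the normalized frequency axis, can be sketched as follows. The mean Doppler velocity, the velocity width, and the truncation of the infinite folding sum used here are illustrative placeholders rather than the exact values used in the experiments.

```python
import numpy as np

def gaussian_ct_psd(f_hz, power, mean, width):
    """Continuous-time Gaussian-shaped PSD G_n^*(f)."""
    return power / (np.sqrt(2 * np.pi) * width) * np.exp(-(f_hz - mean) ** 2 / (2 * width ** 2))

def discrete_psd(f_norm, T, power, mean, width, n_fold=5):
    """Discrete-time PSD S_n^*(f) = (1/T) * sum_m G_n^*((f - m)/T),
    with the infinite folding sum truncated to |m| <= n_fold."""
    return sum(gaussian_ct_psd((f_norm - m) / T, power, mean, width)
               for m in range(-n_fold, n_fold + 1)) / T

L, T = 128, 0.4e-3                                   # pulse repetition time as above
f_grid = (np.arange(1, L + 1) - 1 - L / 2) / L       # frequency grid (11)
wavelength = 31.8e-3
doppler_width = 2 * 2.0 / wavelength                 # 2 m/s velocity width -> Hz (placeholder)
doppler_mean = 2 * 10.0 / wavelength                 # 10 m/s mean velocity -> Hz (placeholder)
psd = discrete_psd(f_grid, T, power=1.0, mean=doppler_mean, width=doppler_width)
print(psd.max(), psd.argmax())
```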
Since the MMSE beamformer is a time-domain method that estimates \(\bar{\mathbf{x}}_{j,\mu}\) from (4), we compute the frequency components from the estimate of \(\bar{\mathbf{x}}_{j,\mu}\) by using the DFT or the windowed DFT shown in Examples 2 and 3 respectively. From the estimated frequency components \(\hat{\mathbf{u}}\), the existing approach constructs the estimate of the PSD as the averaged periodogram \[\hat{S}_{n}^{\mathrm{(AP)}}(f_{k})=\frac{1}{J}\sum_{j=1}^{J}|\hat{u}_{j,\mu}[ k]|^{2}\ \ \ (k=1,\ldots,L). \tag{26}\] We also test the post-smoothing for the existing approach. Specifically, we employ the standard smoothing technique, i.e., _Daniell method_[1, 2]: \[\hat{S}_{n}^{\mathrm{(AP)}}(f_{k})=\frac{1}{2R+1}\sum_{k^{\prime}=k-R}^{k+R} \hat{S}_{n}^{\mathrm{(AP)}}(f_{k^{\prime}})\ \ \ (k=1,\ldots,L),\] where \(2R\) neighbor frequency bins4 are used for the smoothing. Footnote 4: When \(k^{\prime}\notin\{1,\ldots,L\}\), we instead use \(k^{\prime}-L\left[(k^{\prime}-1)/L\right]\) because the anti-aliasing filter is not employed (see also Example 1). Table 1 shows the normalized mean absolute error (NMAE) \[\frac{\sum_{n=1}^{N}\sum_{k=1}^{L}\left|S_{n}^{\star}(f_{k})-\hat{S}_{n}(f_{k} )\right|}{\sum_{n=1}^{N}\sum_{k=1}^{L}S_{n}^{\star}(f_{k})},\] which is averaged over \(100\) independent simulations. The tuning parameters of the methods are adjusted in the way that the best NMAE is obtained for each method and setting. Table 2 shows specific settings of the tuning parameters: \(\lambda\) for the importance of the (block)-sparsity, \(\alpha\) for the importance of the smoothness in the proposed model, the block-size \(B\) for the mixed \(\ell_{2}/\ell_{1}\) regularization model, \(R\) for the post-smoothing, and the standard deviation \(\varsigma_{e}\) of the noise for the MMSE beamformer. Note that the MMSE beamformer uses the actual standard deviation \(\sqrt{2.5}\) in the experiments to achieve the best accuracy. We simply set \(r=2\) in the proposed model, although the tuning of \(r\) could improve the estimation accuracy. While the case of \(J=1\) is of particular interest in weather radar applications to keep the frequency resolution of the PSDs (see Remark 1), we also show the results when \(J\) is increased to \(2\), so as to elaborate on the effect of \(J\). In Table 1, the proposed model is shown to achieve the best estimation accuracy for all the settings. The post-smoothing is found to improve the estimation accuracies of the existing models; however, their accuracies remain inferior to those of the proposed model. While the NMSEs of the proposed model and the the existing sparse estimation models combined with the post-smoothing are close for several cases when \(J\) is increased to \(2\), the proposed model yields moderate improvements against them for the cases of \(J=1\). Since the original frequency resolution is preserved when \(J=1\) (see Remark 1), the proposed model has an advantage that it estimates PSDs accurately without sacrificing the frequency resolution. Note that the proposed model also has an advantage that it has fewer tuning parameters than the the mixed \(\ell_{2}/\ell_{1}\) regularization model with the post-smoothing (see Table 2). We show the ground-truth and the estimates of the PSDs for examples of simulations: Fig. 3 for \(L=32\), \(J=1\), \(\mathbf{G}=\mathbf{F}^{\text{H}}\), Fig. 4 for \(L=32\), \(J=1\), \(\mathbf{G}=\mathbf{W}^{-1}\mathbf{F}^{\text{H}}\), Fig. 5 for \(L=128\), \(J=1\), \(\mathbf{G}=\mathbf{F}^{\text{H}}\), Fig. 
6 for \(L=128\), \(J=1\), \(\mathbf{G}=\mathbf{W}^{-1}\mathbf{F}^{\text{H}}\), Fig. 7 for \(L=32\), \(J=2\), \(\mathbf{G}=\mathbf{F}^{\text{H}}\), Fig. 8 for \(L=32\), \(J=2\), \(\mathbf{G}=\mathbf{W}^{-1}\mathbf{F}^{\text{H}}\), Fig. 9 for \(L=128\), \(J=2\), \(\mathbf{G}=\mathbf{F}^{\text{H}}\), Fig. 10 for \(L=128\), \(J=2\), \(\mathbf{G}=\mathbf{W}^{-1}\mathbf{F}^{\text{H}}\). It can be seen from Figs. 3-10(b)(d)(f) that the estimates of the existing models exhibit erratic oscillation as they do not exploit the smoothness of the PSDs. In Figs. 7-10(b)(d)(f) where \(J\) is increased to \(2\), the erratic oscillation is slightly reduced but still clearly visible, suggesting the limitation of the ensemble average (26) when \(J\) is small (see Remark 1 for the reason why \(J\) is set to be small in weather radar applications). The post-smoothing is found to reduce the erratic oscillation to a certain extent, as shown in Figs. 3-10(c)(e)(g). However, the sparsity of the estimate is impaired, i.e., the number of entries of large magnitude that are not present in the ground-truth increases, because the sparsity is not considered in the post-smoothing step. In contrast, in Figs. 3-10(h), the proposed approach obtains the estimates that have both sparsity and smoothness. While the estimates of the mixed \(\ell_{2}/\ell_{1}\) regularization model after the post-smoothing seem similar to the ground-truth at a glance of Figs. 3-10(g), erroneous spread of the nonzero components are more clearly seen from enlarged views shown in Figs. 11 and 12. From Figs. 11 and 12, we also see that the proposed model estimates the area of nonzero components more accurately than the mixed \(\ell_{2}/\ell_{1}\) regularization model with the post-smoothing. Since the area of the nonzero components is related to the existence of the corresponding wind velocity components (see Example 1), this is a significant advantage of the proposed approach for weather radar applications. We also see that the erratic oscillation is still visible for the estimates of the MMSE beamformer and the \(\ell_{1}\) regularization model after the post-smoothing in Figs. 3-10(c)(e), suggesting the limitation of the post-smoothing. In particular, while the objective accuracies of the \(\ell_{1}\) regularization model after the post-smoothing are close to those of the proposed model when \(J\) is increased to \(2\) and the standard DFT is used, the erratic oscillation is not eliminated as seen in Figs. 7(e) and 9(e). Compared to the \(\ell_{1}\) regularization model, the erratic oscillation is considerably reduced in the estimates of the mixed \(\ell_{2}/\ell_{1}\) regularization model after the post-smoothing, \begin{table} \begin{tabular}{l c c c c} \hline Settings & MMSE BF & \(\ell_{1}\) reg. & Mixed \(\ell_{2}/\ell_{1}\) reg. 
& Proposed \\ \hline \(L=32\), \(J=1\), \(\mathbf{G}=\mathbf{F}^{\text{H}}\) (standard DFT in Example 2) & \(1.1775\) (\(0.9217\)) & \(0.8771\) (\(0.6802\)) & \(0.7725\) (\(0.6447\)) & \(\mathbf{0.6278}\) \\ \hline \(L=32\), \(J=1\), \(\mathbf{G}=\mathbf{W}^{-1}\mathbf{F}^{\text{H}}\) (windowed DFT in Example 3) & \(1.1055\) (\(0.9494\)) & \(0.8236\) (\(0.7239\)) & \(0.7644\) (\(0.7004\)) & \(\mathbf{0.6780}\) \\ \hline \(L=128\), \(J=1\), \(\mathbf{G}=\mathbf{F}^{\text{H}}\) (standard DFT in Example 2) & \(1.1808\) (\(0.7396\)) & \(0.8161\) (\(0.4874\)) & \(0.6843\) (\(0.4906\)) & \(\mathbf{0.4693}\) \\ \hline \(L=128\), \(J=1\), \(\mathbf{G}=\mathbf{W}^{-1}\mathbf{F}^{\text{H}}\) (windowed DFT in Example 3) & \(1.1767\) (\(0.7975\)) & \(0.7899\) (\(0.5428\)) & \(0.6970\) (\(0.5244\)) & \(\mathbf{0.5097}\) \\ \hline \(L=32\), \(J=2\), \(\mathbf{G}=\mathbf{F}^{\text{H}}\) (standard DFT in Example 2) & \(0.9345\) (\(0.7767\)) & \(0.6917\) (\(0.5582\)) & \(0.6201\) (\(0.5590\)) & \(\mathbf{0.5580}\) \\ \hline \(L=32\), \(J=2\), \(\mathbf{G}=\mathbf{W}^{-1}\mathbf{F}^{\text{H}}\) (windowed DFT in Example 3) & \(0.9292\) (\(0.8318\)) & \(0.6596\) (\(0.5865\)) & \(0.6169\) (\(0.5790\)) & \(\mathbf{0.5718}\) \\ \hline \(L=128\), \(J=2\), \(\mathbf{G}=\mathbf{F}^{\text{H}}\) (standard DFT in Example 2) & \(0.9815\) (\(0.7057\)) & \(0.6676\) (\(0.4345\)) & \(0.5700\) (\(0.4623\)) & \(\mathbf{0.4335}\) \\ \hline \(L=128\), \(J=2\), \(\mathbf{G}=\mathbf{W}^{-1}\mathbf{F}^{\text{H}}\) (windowed DFT in Example 3) & \(0.9763\) (\(0.7418\)) & \(0.6435\) (\(0.4684\)) & \(0.5723\) (\(0.4771\)) & \(\mathbf{0.4548}\) \\ \hline \end{tabular} \end{table} TABLE 1. A comparison of the methods in terms of the NMAE of the PSDs, where the result is averaged over 100 independent simulations. Values shown in parenthesis for the existing methods are the NMAEs with the post-smoothing. \begin{table} \begin{tabular}{l c c c c} \hline Settings & MMSE BF & \(\ell_{1}\) reg. & Mixed \(\ell_{2}/\ell_{1}\) reg. & Proposed \\ \hline \(L=32\), \(J=1\), \(\mathbf{G}=\sqrt{2.5}\), \(R=1\) & \(\lambda=0.003\frac{M}{N}\), \(R=1\) & \(\lambda=0.03\frac{M}{N}\), \(B=7\), \(R=1\) & \(\lambda=0.02\frac{M}{N}\), \(\alpha=60N\) \\ \hline \(L=32\), \(J=1\), \(\mathbf{G}=\mathbf{W}^{-1}\mathbf{F}^{\text{H}}\) & \(\zeta_{\text{c}}=\sqrt{2.5}\), \(R=1\) & \(\lambda=0.003\frac{M}{N}\), \(R=1\) & \(\lambda=0.02\frac{M}{N}\), \(B=9\), \(R=1\) & \(\lambda=0.01\frac{M}{N}\), \(\alpha=60N\) \\ \hline \(L=128\), \(J=1\), \(\mathbf{G}=\mathbf{F}^{\text{H}}\) & \(\zeta_{\text{c}}=\sqrt{2.5}\), \(R=5\) & \(\lambda=0.005\frac{M}{N}\), \(R=4\) & \(\lambda=0.05\frac{M}{N}\), \(B=36\), \(R=4\) & \(\lambda=0.05\frac{M}{N}\), \(\alpha=20N\) \\ \hline \(L=128\), \(J=1\), \(\mathbf{G}=\mathbf{W}^{-1}\mathbf{F}^{\text{H}}\) & \(\zeta_{\text{c}}=\sqrt{2.5}\), \(R=5\) & \(\lambda=0.003\frac{M}{N}\), \(R=5\) & \(\lambda=0.02\frac{M}{N}\), \(B=36\), \(R=4\) & \(\lambda=0.03\frac{M}{N}\), \(\alpha=20N\) \\ \hline \(L=32\), \(J=2\), \(\mathbf{G}=\mathbf{F}^{\text{H}}\) & \(\zeta_{\text{c}}=\sqrt{2.5}\), \(R=1\) & \(\lambda=0.01\frac{M}{N}\), \(R=1\) & \ Figure 4: Ground-truth of the PSDs and their estimates for a simulation of the PAWR using the following settings: \(L=32\), \(J=1\), \(G=W^{-1}\), Figure 5: Ground-truth of the PSDs and their estimates for a simulation of the PAWR using the following settings: \(L=128\), \(J=1\), \(G=W^{-1}\). 
Figure 3: Ground-truth of the PSDs and their estimates for a simulation of the PAWR using the following settings: \(L=32\), \(J=1\), \(\mathbf{G}=\mathbf{F}^{\text{H}}\).
Figure 7: Ground-truth of the PSDs and their estimates for a simulation of the PAWR using the following settings: \(L=32\), \(J=2\), \(\mathbf{G}=\mathbf{F}^{\text{H}}\).
Figure 8: Ground-truth of the PSDs and their estimates for a simulation of the PAWR using the following settings: \(L=32\), \(J=2\), \(\mathbf{G}=\mathbf{W}^{-1}\mathbf{F}^{\text{H}}\).
Figure 9: Ground-truth of the PSDs and their estimates for a simulation of the PAWR using the following settings: \(L=128\), \(J=2\), \(\mathbf{G}=\mathbf{F}^{\text{H}}\).
Figure 10: Ground-truth of the PSDs and their estimates for a simulation of the PAWR using the following settings: \(L=128\), \(J=2\), \(\mathbf{G}=\mathbf{W}^{-1}\mathbf{F}^{\text{H}}\).
This could be attributed to the property that the block-sparse model does not necessarily promote the smoothness but forces the components in the same block to zero together, which is beneficial for promoting the smoothness in the post-processing step. Although the estimates of the mixed \(\ell_{2}/\ell_{1}\) regularization model after the post-smoothing seem smooth enough, the enlarged views in Figs. 11 and 12 show that they exhibit slight unnatural fluctuations, and the estimates of the proposed model are smoother even compared with them. The line-like artifacts in the estimates shown in Figs. 3, 5, 7, and 9 are reduced to some extent in the estimates shown in Figs. 4, 6, 8, and 10 thanks to the window function, which reduces the heights of the sidelobes. Although the estimates obtained with the windowed DFT are visually closer to the ground-truth than those obtained with the standard DFT, the objective accuracy shown in Table 1 is not always improved, perhaps because the window function increases the width of the mainlobe. Since the line-like artifacts are caused by the sidelobes of the window function with finite \(L\), they are further reduced when \(L=128\). In particular, the line-like artifacts are almost completely eliminated in the proposed estimates when \(L=128\), as shown in Figs. 6(h) and 10(h).
## V Conclusion
We presented a convex optimization model for the estimation of sparse and smooth PSDs of complex-valued random processes from noisy mixtures of realizations. While the PSDs are related to the expectation of the magnitude of the frequency components of the realizations, it has been difficult to exploit the smoothness of the PSDs because naive penalties on the differences of the magnitudes of the frequency components induce hard nonconvex optimization problems. To resolve this difficulty, we designed the proposed model so that it jointly estimates complex-valued frequency components and nonnegative PSDs. More precisely, we first applied the optimally structured block-sparse model of [38] for the frequency component estimation. Then, to estimate the PSDs, we newly leveraged the latent variable of the model, which was originally introduced to optimize the block structure. Namely, we demonstrated that the latent variable is in fact related to the square root of the PSDs, enabling us to exploit the smoothness of the PSDs via convex optimization by penalizing the differences of the nonnegative latent variable. Moreover, to further enhance the smoothness of the PSDs of complex-valued random processes, the proposed framework can readily incorporate many smoothness priors designed for real-valued signals.
Numerical experiments on the PAWR showed that the proposed approach achieved better objective accuracy and yielded visually better estimates compared with the existing sparse estimation models, even when they are combined with the post-smoothing. ## Appendix A Solver for proposed regularization model The proposed regularization model (25) can be solved by using the proximal splitting techniques [47, 48, 49, 50, 51] with the closed-form computation of the proximity operator of \(\phi\). As a concrete example, using the ADMM [48, 49], we provide an iterative solver that is guaranteed to converge to an optimal solution of (25). The ADMM solves the following convex optimization problem \[\operatorname*{minimize}_{v\in\mathcal{V},w\in\mathcal{W}}F(v)+G(w)\text{ subject to }w=\mathcal{L}v \tag{27}\] Figure 11: An enlarged view of Ground-truth of the PSDs and their estimates for the following settings: \(L=128\), \(J=1\), \(G=\mathbf{P}^{\mathbf{i}}\). Figure 12: An enlarged view of Ground-truth of the PSDs and their estimates for the following settings: \(L=128\), \(J=1\), \(G=\mathbf{W}^{-1}\mathbf{P}^{\mathbf{i}}\). by the iterations \[\begin{split}& v^{(i+1)}\in\arg\min_{v\in\mathcal{V}}\left[\gamma F(v)+ \frac{1}{2}\|w^{(i)}-z^{(i)}-\mathcal{L}v\|^{2}\right]\\ & w^{(i+1)}=\operatorname{prox}_{\gamma G}(\mathcal{L}v^{(i+1)}+z^ {(i)})\\ & z^{(i+1)}=z^{(i)}+\mathcal{L}v^{(i+1)}-w^{(i+1)},\end{split}\] where we suppose that \(\mathcal{V}\) and \(\mathcal{W}\) are finite-dimensional Hilbert spaces, \(F\) and \(G\) are proper lower semicontinuous convex functions, \(\mathcal{L}\) is a linear operator, \(\operatorname{prox}_{\gamma G}(w):=\arg\min_{w\in\mathcal{W}}\left[\gamma G( \omega)+\frac{1}{2}\|w-\omega\|^{2}\right]\) is the proximity operator of \(\gamma G\), and \(\gamma>0\). To apply the ADMM, we rewrite (25) as \[\begin{split}&\operatorname*{minimize}_{\mathbf{u},\mathbf{\sigma}, \mathbf{x},\mathbf{i},\tilde{\mathbf{\sigma}},\mathbf{\eta}}F(\mathbf{u},\mathbf{\sigma}) +G(\mathbf{x},\tilde{\mathbf{u}},\tilde{\mathbf{\sigma}},\mathbf{\eta})\\ &\text{subject to }\mathbf{x}_{j,n}=\mathbf{G}\mathbf{u}_{j,n} \quad(\forall j,n)\\ &\tilde{\mathbf{\eta}}_{j,n}=\mathbf{u}_{j,n}\quad(\forall j,n)\\ &\tilde{\mathbf{\sigma}}_{n}=\mathbf{\sigma}_{n}\quad(\forall n)\\ &\mathbf{\eta}_{n}=\mathbf{D}^{\prime}\mathbf{\sigma}_{n}\quad(\forall n )\end{split}\;, \tag{28}\] where we let \[\begin{split} F(\mathbf{u},\mathbf{\sigma})&:=0,\\ G(\mathbf{x},\tilde{\mathbf{u}},\tilde{\mathbf{\sigma}},\mathbf{\eta})& :=\frac{1}{2}\sum_{j=1}^{J}\left\|\mathbf{y}_{j}-\sum_{n=1}^{N} \mathbf{A}_{n}\mathbf{x}_{j,n}\right\|^{2}\\ +\lambda\sum_{n=1}^{N}\sum_{k=1}^{L}\phi\left((\tilde{u}_{j,n}[ k])_{j=1}^{J},\tilde{\sigma}_{n}[k]\right)+\iota_{B_{1}^{n}}\left(\mathbf{\eta} \right),\end{split}\] and \(\iota_{B_{1}^{n}}(\mathbf{\eta})\) is the indicator function of the \(\ell_{1}\) ball, i.e., \[\iota_{B_{1}^{n}}(\mathbf{\eta}):=\begin{cases}0,&\text{if }\sum_{n=1}^{N}\|\mathbf{ \eta}_{n}\|_{1}\leq\alpha;\\ \infty,&\text{otherwise}.\end{cases}\] Since the constraint of (28) can be expressed in the form of (27), we can therefore apply the ADMM to (28), and obtain the iterative algorithm shown in Algorithm 1. For our formulation, since the minimizer of the first step of the ADMM is unique, the convergence to an optimal solution of (28) follows from [51]. 
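For concreteness, the three-step structure of the ADMM iteration above can be illustrated on a toy problem. The following sketch applies the splitting (27) with \(F\) chosen as a least-squares data term, \(G=\lambda\|\cdot\|_{1}\), and \(\mathcal{L}=\mathbf{I}\); it is not the full Algorithm 1 for the proposed model, and the problem sizes and variable names are illustrative assumptions only.

```python
import numpy as np

# Toy instance of the generic ADMM iteration for min F(v) + G(w) s.t. w = v,
# with F(v) = 0.5*||y - A v||^2 and G(w) = lam*||w||_1 (soft-thresholding prox).
# This only illustrates the v-/w-/z-update template, not the paper's Algorithm 1.
rng = np.random.default_rng(0)
M, N = 40, 100
A = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, 5, replace=False)] = rng.standard_normal(5)
y = A @ x_true + 0.01 * rng.standard_normal(M)

lam, gamma = 0.05, 1.0
v = np.zeros(N); w = np.zeros(N); z = np.zeros(N)
# The inversion in the v-update is the same at every iteration, so precompute it.
Hinv = np.linalg.inv(gamma * (A.T @ A) + np.eye(N))

def prox_l1(u, t):
    """Proximity operator of t*||.||_1."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

for _ in range(500):
    v = Hinv @ (gamma * (A.T @ y) + w - z)   # argmin_v gamma*F(v) + 0.5*||w - z - v||^2
    w = prox_l1(v + z, gamma * lam)          # prox of gamma*G at (v + z)
    z = z + v - w                            # dual update

print("recovered support:", np.flatnonzero(np.abs(w) > 1e-3))
```

In Algorithm 1 the same template is used with \(F:=0\), so that the first step reduces to a quadratic problem whose solution involves the (precomputable) matrix inversions discussed below.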
From the equivalence between (25) and (28), \((\mathbf{u}^{(i)})_{i=1}^{\infty}\) and \((\boldsymbol{\sigma}^{(i)})_{i=1}^{\infty}\) generated by Algorithm 1 converges to the solution of (25) for the variables \(\mathbf{u}\) and \(\boldsymbol{\sigma}\) respectively. The operators in Algorithm 1 can be computed as follows. Expressing \(\phi(\mathbf{v},\sigma)\) as the sum of the perspective function [52, 53, 54] of \(\frac{\|\mathbf{v}\|^{2}}{2}\) and the linear function \(\frac{f}{2}\sigma\), based on [54, Example 2.4], we can compute \(\mathrm{prox}_{\kappa\phi}\) for \(\kappa=\gamma\lambda\) by \[\mathrm{prox}_{\kappa\phi}(\mathbf{v},\sigma) \tag{29}\] \[=\begin{cases}(\mathbf{0},0),&\text{if }2\kappa\sigma+\| \mathbf{v}\|^{2}\leq J\kappa^{2};\\ (\mathbf{0},\sigma-\frac{\kappa f}{2}),&\text{if }\mathbf{v}=\mathbf{0}\text{ and }2 \sigma>J\kappa;\\ \left(\mathbf{v}-\kappa\frac{\mathbf{v}}{\|\mathbf{v}\|},\sigma+\kappa\frac{ \mathbf{e}^{2}-J}{2}\right),&\text{otherwise},\end{cases}\] where \(s>0\) is the unique positive root of \[s^{3}+\left(\frac{2}{\kappa}\sigma+2-J\right)s-\frac{2}{\kappa}\|\mathbf{v}\| =0,\] and can be explicitly given based on Cardano's formula as follows [38, 55]. Let \(p=\frac{2}{\kappa}\sigma+2-J\) and \(D=-\frac{\|\mathbf{v}\|^{2}}{\kappa^{2}}-\frac{\rho^{3}}{27}\). Then, \[s=\begin{cases}\sqrt[3]{\frac{\|\mathbf{v}\|}{\kappa}+\sqrt{-D}}+\sqrt[3]{ \frac{\|\mathbf{v}\|}{\kappa}-\sqrt{-D}},&\text{if }D<0;\\ 2\sqrt[3]{\frac{\|\mathbf{v}\|}{\kappa}},&\text{if }D=0;\\ 2\sqrt{-\frac{p}{3}}\cos\left(\frac{\arctan(\kappa\sqrt{D}/\|\mathbf{v}\|)}{3 }\right),&\text{if }D>0,\end{cases}\] where \(\sqrt[3]{\cdot}\) denotes the real cubic root. The \(\ell_{1}\) ball projection \(P_{B_{1}^{\star}}\), which is the proximity operator of \(\iota_{B_{1}^{\star}}(\boldsymbol{\eta})\), can be computed in \(\mathcal{O}(NL)\) expected complexity, e.g., by the algorithm of [56]. #### Implementation for Weather Radar Applications The matrix inversions in Algorithm 1 can be efficiently computed for application to weather radars as follows. * To efficiently compute the inversion of \((\mathbf{I}_{L}+\mathbf{G}^{\mathrm{H}}\mathbf{G})\), we use the property \[\mathbf{G}\mathbf{G}^{\mathrm{H}}=\mathrm{diag}(\boldsymbol{\nu})\in\mathbb{ R}_{++}^{L},\] which holds for, e.g., the DFT in Example 2 and the windowed DFT in Example 3. More precisely, from this property, we have \[(\mathbf{I}_{L}+\mathbf{G}^{\mathrm{H}}\mathbf{G})^{-1} =\mathbf{I}_{L}-\mathbf{G}^{\mathrm{H}}(\mathbf{I}_{L}+\mathbf{G }\mathbf{G}^{\mathrm{H}})^{-1}\mathbf{G}\] \[=\mathbf{I}_{L}-\mathbf{G}^{\mathrm{H}}(\mathbf{I}_{L}+\mathrm{ diag}(\boldsymbol{\nu}))^{-1}\mathbf{G}\] \[=\mathbf{I}_{L}-\mathbf{G}^{\mathrm{H}}\mathrm{diag}\left(\left( \frac{1}{1+\nu_{\ell}}\right)_{\ell=1}^{L}\right)\mathbf{G},\] where the first equality follows from the Sherman-Morrison-Woodbury matrix inversion lemma [57]. Note that the multiplications of \(\mathbf{G}\) and \(\mathbf{G}^{\mathrm{H}}\) can be computed in \(\mathcal{O}(L\log L)\) by the fast Fourier transform (FFT) for Examples 2 and 3. * Since \(\mathbf{D}\) in (18) is a circulant matrix, the inversion of \((\mathbf{I}_{L}+(\mathbf{D}^{\prime})^{\top}\mathbf{D}^{\prime})\) can be computed in \(\mathcal{O}(L\log L)\) with the FFT [57]. 
* For the PAWR shown in Example 1, the inversion of \(\mathbf{I}_{NL}+\mathbf{A}^{\mathrm{H}}\mathbf{A}\in\mathbb{C}^{NL\times NL}\) can be computed in \(\mathcal{O}(N^{3})\), independently of the value of \(L\), because it can be translated into a block-diagonal matrix after some permutations. We show this explicitly in another way. Notice that \((\mathbf{x}_{j,n}^{(i+1)})_{n=1}^{N}\) in Algorithm 1 is the solution of \[\begin{split}\operatorname*{minimize}_{(\mathbf{x}_{j,n})_{n=1}^{ N}}\frac{\gamma}{2}\left\|\mathbf{y}_{j}-\sum_{n=1}^{N}\mathbf{A}_{n}\mathbf{x}_{j,n} \right\|^{2}\\ +\frac{1}{2}\sum_{n=1}^{N}\|\mathbf{G}\mathbf{u}_{j,n}^{(i+1)}+ \mathbf{q}_{j,n}^{(i)}-\mathbf{x}_{j,n}\|^{2}\end{split}\right\}.\] (30) From the definitions of \(\mathbf{y}_{j}^{(\mathrm{pawr})}\) and \(\mathbf{A}_{n}^{(\mathrm{pawr})}\) in Example 1, we have \[\left\|\mathbf{y}_{j}^{(\mathrm{pawr})}-\sum_{n=1}^{N}\mathbf{A}_{ n}^{(\mathrm{pawr})}\mathbf{x}_{j,n}\right\|^{2}\] \[= \left\|\begin{pmatrix}\mathbf{y}_{j}[1]-\sum_{n=1}^{N}\mathbf{x}_{ j,n}[1]\mathbf{a}(\theta_{n})\\ \vdots\\ \mathbf{y}_{j}[L]-\sum_{n=1}^{N}\mathbf{x}_{j,n}[L]\mathbf{a}(\theta_{n}) \end{pmatrix}\right\|^{2}\] \[= \left\|\begin{pmatrix}\mathbf{y}_{j}[1]^{\top}-\sum_{n=1}^{N}x_{ j,n}[1]\mathbf{a}(\theta_{n})^{\top}\\ \mathbf{y}_{j}[L]^{\top}-\sum_{n=1}^{N}x_{j,n}[L]\mathbf{a}(\theta_{n})^{\top} \end{pmatrix}\right\|_{\mathrm{fro}}^{2}\] \[= \|\mathbf{Y}_{j}^{\top}-\mathbf{X}_{j}\mathbf{S}^{\top}\|_{ \mathrm{fro}}^{2},\] where we let \[\mathbf{Y}_{j}:= (\mathbf{y}_{j}[1],\ldots,\mathbf{y}_{j}[L])\in\mathbb{C}^{M\times L},\] \[\mathbf{X}_{j}:= (\mathbf{x}_{j,1},\ldots,\mathbf{x}_{j,N})\in\mathbb{C}^{L\times N},\] \[\mathbf{S}:= (\mathbf{a}(\theta_{1}),\ldots,\mathbf{a}(\theta_{N}))\in\mathbb{C}^ {M\times N},\] and \[\|\cdot\|_{\mathrm{fro}}\] denotes the Frobenius norm of the matrix. From this expression, it is clear that the step (30) can be solved in \(\mathcal{O}(N^{3})\) for the inversion regarding \(\mathbf{S}^{\top}\). Namely, \((\mathbf{x}_{j,n}^{(i+1)})_{n=1}^{N}\) in Algorithm 1 is obtained by \[\left(\mathbf{x}_{j,1}^{(i+1)},\ldots,\mathbf{x}_{j,N}^{(i+1)}\right)\] \[=\left(\gamma\mathbf{Y}_{j}^{\top}\mathbf{S}^{\ast}+\mathbf{G} \mathbf{U}_{j}^{(i+1)}+\mathbf{Q}_{j}^{(i)}\right)(\mathbf{I}_{N}+\gamma \mathbf{S}^{\top}\mathbf{S}^{\ast})^{-1},\] where \(\mathbf{S}^{\ast}\) is the complex conjugate of \(\mathbf{S}\), and \[\mathbf{U}_{j}^{(i+1)}:= \left(\mathbf{u}_{j,1}^{(i+1)},\ldots,\mathbf{u}_{j,N}^{(i+1)} \right)\in\mathbb{C}^{L\times N},\] \[\mathbf{Q}_{j}^{(i)}:= \left(\mathbf{q}_{j,1}^{(i)},\ldots,\mathbf{q}_{j,N}^{(i)}\right) \in\mathbb{C}^{L\times N}.\] Meanwhile, we remark that the inversions in Algorithm 1 are the same for all the iterations, and thus can be computed in advance and stored in the memory. ## Acknowledgment We would like to thank the anonymous reviewers for their valuable comments on the original version of the manuscript.
2309.09335
Analytical structure of the binary collision integral and the ultrarelativistic limit of transport coefficients of an ideal gas
In this paper we discuss the analytical properties of the binary collision integral for a gas of ultrarelativistic particles interacting via a constant cross-section. Starting from a near-equilibrium expansion over a complete basis of irreducible tensors in momentum space, we compute the linearized collision matrices analytically. Using these results, we then numerically compute all transport coefficients of relativistic fluid dynamics with various power-counting schemes that are second-order in Knudsen and/or inverse Reynolds numbers. Furthermore, we also exactly compute the leading-order contribution with respect to the particle mass to the coefficient of bulk viscosity, the relaxation time, and other second-order transport coefficients of the bulk viscous pressure.
David Wagner, Victor E. Ambrus, Etele Molnar
2023-09-17T17:56:46Z
http://arxiv.org/abs/2309.09335v3
Analytical structure of the binary collision integral and the ultrarelativistic limit of transport coefficients of an ideal gas ###### Abstract In this paper we discuss the analytical properties of the binary collision integral for a gas of ultrarelativistic particles interacting via a constant cross-section. Starting from a near-equilibirum expansion over a complete basis of irreducible tensors in momentum space we compute the linearized collision matrices analytically. Using these results we then numerically compute all transport coefficients of relativistic fluid dynamics with various power-counting schemes that are second-order in Knudsen and/or inverse Reynolds numbers. Furthermore, we also exactly compute the leading-order contribution with respect to the particle mass to the coefficient of bulk viscosity, the relaxation time, and other second-order transport coefficients of the bulk viscous pressure. ## I Introduction The kinetic theory of rarefied gases contains a collision term which describes the interaction among constituents through collisions. The well known collision term defined by Boltzmann's _Stosszahlansatz_, or the assumption of _molecular chaos_, defines the number of binary collisions through a product of two single-particle distribution functions. The resulting integro-differential equation, the Boltzmann transport equation, describes the space-time evolution of the single-particle distribution function [1; 2; 3] \[k^{\mu}\partial_{\mu}f_{\bf k}=C\left[f\right]\;, \tag{1}\] where \(C\left[f\right]\) is the collision term. In the case of binary elastic collisions, the collision term reads, \[C\left[f\right] \equiv \frac{1}{2}\int{\rm d}K^{\prime}{\rm d}P{\rm d}P^{\prime}\;\left( W_{\bf p}{\bf p}^{\prime}\to{\bf k}{\bf k}^{\prime}f_{\bf p}f_{\bf p}^{ \prime}\tilde{f}_{\bf k}\tilde{f}_{\bf k}^{\prime}\right. \tag{2}\] \[- \left.W_{{\bf k}{\bf k}^{\prime}\to{\bf p}{\bf p}^{\prime}}f_{ \bf k}f_{\bf k}\tilde{f}_{\bf p}\tilde{f}_{\bf p}^{\prime}\right)\;,\] where \(f_{\bf k}\equiv f_{\bf k}(x^{\mu},k^{\mu})\) denotes the Lorentz-invariant single-particle distribution function, while \(\tilde{f}_{\bf k}\equiv 1-af_{\bf k}\), with \(a=\pm 1\) for fermions/bosons and \(a=0\) for classical particles. The Lorentz-invariant differential element is \({\rm d}K\equiv g\,{\rm d}^{3}{\bf k}/\left[(2\pi)^{3}k^{0}\right]\), while \(g\) denotes the number of internal degrees of freedom. The \(1/2\) factor removes the double counting from the integrations with respect to \({\rm d}P\) and \({\rm d}P^{\prime}\). The four-momentum of particles \(k^{\mu}=(k^{0},{\bf k})\) is normalized to their rest mass squared, \(k^{\mu}k_{\mu}=m_{0}^{2}\), where \(k^{0}=\sqrt{{\bf k}^{2}+m_{0}^{2}}\) is the on-shell energy. In this paper we use natural units \(\hbar=c=k_{B}=1\). The binary transition rate is defined as \[W_{{\bf k}{\bf k}^{\prime}\to{\bf p}{\bf p}^{\prime}}\equiv\frac{s}{g^{2}}(2 \pi)^{6}\frac{{\rm d}\sigma(\sqrt{s},\Omega)}{{\rm d}\Omega}\delta(k^{\mu}+k^ {\prime\mu}-p^{\mu}-p^{\prime\mu})\;, \tag{3}\] where the factor \((2\pi)^{6}/g^{2}\) appears due to our convention for the momentum-space integration measure. The delta function ensures the conservation of the energy and momentum in binary collision. 
The transition rate depends on the total center-of-momentum (CM) energy squared \(s\equiv(k^{\mu}+k^{\prime\mu})^{2}=(p^{\mu}+p^{\prime\mu})^{2}\), while the total cross section integrated over the solid angle \(\Omega\) is defined as [4] \[\sigma_{T}(\sqrt{s})=\frac{1}{2}\int{\rm d}\Omega\frac{{\rm d}\sigma(\sqrt{s}, \Omega)}{{\rm d}\Omega}\;. \tag{4}\] In this paper we employ the so-called _hard-sphere_ approximation which assumes that the transport cross section is isotropic and independent of the total CM energy, \[\sigma_{T}\equiv 2\pi\frac{{\rm d}\sigma(\sqrt{s},\Omega)}{{\rm d}\Omega}= \frac{1}{n_{0}\lambda_{\rm mfp}}\;, \tag{5}\] where \(n_{0}\) is the particle density and \(\lambda_{\rm mfp}\) is the mean free path between collisions. The relativistic Boltzmann equation provides a framework for studying various properties of matter in and out of equilibrium, as well as for deriving the macroscopic conservation laws, i.e., fluid dynamics, based on the microscopic properties of the system. A vanishing collision term, \(C\left[f_{0{\bf k}}\right]=0\), due to detailed balance, defines the local equilibrium distribution, the Juttner distribution function [2; 3; 5], \[f_{0{\bf k}}=\left[\exp{(\beta k^{\mu}u_{\mu}-\alpha)}+a\right]^{-1}\;, \tag{6}\] where \(u^{\mu}=\gamma(1,{\bf v})\) is the timelike fluid-flow four-velocity normalized to \(u^{\mu}u_{\mu}=1\), while \(\gamma=(1-{\bf v}^{2})^{-1/2}\). Furthermore, \(\beta=1/T\) is the inverse temperature and \(\alpha=\beta\mu\) with \(\mu\) the chemical potential. Out of equilibrium, the distribution function is separated as \[f_{\bf k}\equiv f_{0\bf k}+\delta f_{\bf k}\;. \tag{7}\] In this paper we apply a relativistic version of Grad's method of moments [6], as formulated by Denicol, Niemi, Molnar and Rischke, (referred to as DNMR) [7] to obtain the transport coefficients for a classical gas of massless particles interacting via an isotropic constant cross-section. Therein the irreducible moments of tensor-rank \(\ell\) of \(\delta f_{\bf k}\) are defined as \[\rho_{r}^{\mu_{1}\cdots\mu_{\ell}}\equiv\int{\rm d}KE_{\bf k}^{\tau}k^{(\mu_{1 }}\cdots k^{\mu_{\ell})}\delta f_{\bf k}\;. \tag{8}\] Here, \(r\) denotes the power of energy \(E_{\bf k}\equiv k^{\mu}u_{\mu}\), while \(k^{(\mu_{1}}\cdots k^{\mu_{\ell})}=\Delta_{\nu_{1}\cdots\nu_{\ell}}^{\mu_{1} \cdots\mu_{\ell}}k^{\nu_{1}}\cdots k^{\nu_{\ell}}\) are the irreducible tensors forming a complete orthogonal basis [1; 7]. The four-momentum is decomposed as \(k^{\mu}=E_{\bf k}u^{\mu}+k^{\langle\mu\rangle}\), where \(k^{\langle\mu\rangle}\equiv\Delta_{\mu}^{\mu}k^{\nu}\) is defined using the elementary projection operator \(\Delta^{\mu\nu}\equiv g^{\mu\nu}-u^{\mu}u^{\nu}\), with \(g^{\mu\nu}={\rm diag}(+,-,-,-)\) being the metric tensor. The symmetric, traceless, and orthogonal projection tensors of rank \(2\ell\), \(\Delta_{\nu_{1}\cdots\nu_{\ell}}^{\mu_{1}\cdots\mu_{\ell}}\), are constructed using the \(\Delta^{\mu\nu}\) projectors. Expressing the comoving derivative of irreducible moments, \(\hat{\rho}_{r}^{(\mu_{1}\cdots\mu_{\ell})}\equiv\Delta_{\nu_{1}\cdots\nu_{ \ell}}^{\mu_{1}\cdots\nu_{\ell}}u^{\alpha}\partial_{\alpha}\rho_{r}^{\mu_{1} \cdots\nu_{\ell}}\), the equations of motion for these moments follow from the Boltzmann equation (1). For the sake of concision, we do not list the complete equations of motion, since they can be found in Eqs. (35)-(46) of Ref. 
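As a concrete numerical illustration of the decompositions (14)-(18), the following minimal sketch builds the projectors \(\Delta^{\mu\nu}\) and \(\Delta^{\mu\nu}_{\alpha\beta}\) and recovers \(V^{\mu}\) and \(\pi^{\mu\nu}\) from given \(N^{\mu}\) and \(T^{\mu\nu}\). It is written in the fluid rest frame with \(\Pi=0\) for simplicity, and all numerical values are illustrative placeholders rather than results of this work.

```python
import numpy as np

# Minimal sketch: extract V^mu and pi^{mu nu} from N^mu and T^{mu nu} using the
# projectors orthogonal to the fluid velocity (fluid rest frame, Pi = 0).
g = np.diag([1.0, -1.0, -1.0, -1.0])        # metric g^{mu nu} = diag(+,-,-,-)
u = np.array([1.0, 0.0, 0.0, 0.0])          # fluid four-velocity, u.u = 1

Delta = g - np.outer(u, u)                  # Delta^{mu nu}
Delta_ud = Delta @ g                        # Delta^{mu}_{alpha}
Delta_dd = g @ Delta @ g                    # Delta_{alpha beta}
# symmetric, traceless rank-4 projector Delta^{mu nu}_{alpha beta}
Delta4 = 0.5 * (np.einsum('ma,nb->mnab', Delta_ud, Delta_ud)
                + np.einsum('mb,na->mnab', Delta_ud, Delta_ud)) \
         - np.einsum('mn,ab->mnab', Delta, Delta_dd) / 3.0

e0, P0, n0 = 3.0, 1.0, 1.0                  # illustrative equilibrium values
pi_in = np.zeros((4, 4)); pi_in[1, 2] = pi_in[2, 1] = 0.05
V_in = np.array([0.0, 0.02, 0.0, 0.0])

N = n0 * u + V_in                              # Eq. (14)
T = e0 * np.outer(u, u) - P0 * Delta + pi_in   # Eq. (15) with Pi = 0

V = Delta_ud @ N                               # Eq. (17)
pi = np.einsum('mnab,ab->mn', Delta4, T)       # Eq. (18)
print(np.allclose(V, V_in), np.allclose(pi, pi_in))   # True True
```

In an actual calculation the same projections, together with the Landau definition of \(u^{\mu}\) and the matching conditions (19)-(20), fix the temperature, chemical potential, and fluid velocity.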
[7], \[\dot{\rho}_{r}-C_{r-1} = \alpha_{r}^{(0)}\theta+({\rm higher\mbox{-}order\ terms})\;, \tag{9}\] \[\dot{\rho}_{r}^{\langle\mu\rangle}-C_{r-1}^{\langle\mu\rangle} = \alpha_{r}^{(1)}\nabla^{\mu}\alpha+({\rm higher\mbox{-}order\ terms})\;,\] (10) \[\dot{\rho}_{r}^{\langle\mu\nu\rangle}-C_{r-1}^{\langle\mu\nu \rangle} = 2\alpha_{r}^{(2)}\sigma^{\mu\nu}+({\rm higher\mbox{-}order\ terms})\;. \tag{11}\] where the irreducible moments of the collision term (2) are defined similarly to Eq. (8) as \[C_{r}^{\langle\mu_{1}\cdots\mu_{\ell}\rangle}\equiv\int{\rm d}K\,E_{\bf k}^{ \tau}k^{(\mu_{1}}\cdots k^{\mu_{\ell})}C[f]\;. \tag{12}\] The coefficients \(\alpha_{r}^{(\ell)}(\alpha,\beta)\) are functions of various thermodynamic quantities. Furthermore, \(\nabla^{\mu}=\Delta^{\mu\nu}\partial_{\nu}\) is the gradient operator, \(\theta\equiv\nabla_{\mu}u^{\mu}\) is the expansion scalar and \(\sigma^{\mu\nu}\equiv\frac{1}{2}\left(\nabla^{\mu}u^{\nu}+\nabla^{\nu}u^{\mu }\right)-\frac{1}{3}\theta\Delta^{\mu\nu}\) is the shear tensor. The conservation of particle number as well as of energy and momentum in binary collisions require that the corresponding moments of the collision term vanish equivalently, i.e., \(C_{0}=0\), \(C_{1}=0\), and \(C_{1}^{(\mu)}=0\). The resulting equations of motion are the conservation laws of fluid dynamics, \[\partial_{\mu}N^{\mu}=0\;,\quad\partial_{\mu}T^{\mu\nu}=0\;, \tag{13}\] where the particle four-current and energy-momentum tensor are given by \[N^{\mu} = n_{0}u^{\mu}+V^{\mu}\;, \tag{14}\] \[T^{\mu\nu} = e_{0}u^{\mu}u^{\nu}-\left(P_{0}+\Pi\right)\Delta^{\mu\nu}+ \pi^{\mu\nu}\;. \tag{15}\] Here, \(n_{0}\), \(e_{0}\), and \(P_{0}\), are the particle density, energy density, and the isotropic pressure, in equilibrium. The bulk viscous pressure, the particle diffusion four-current and the shear-stress tensor are defined by \[\Pi \equiv -\frac{1}{3}T^{\mu\nu}\Delta_{\mu\nu}+P_{0}=-\frac{m_{0}^{2}}{3} \rho_{0}\;, \tag{16}\] \[V^{\mu} \equiv \Delta_{\alpha}^{\mu}N^{\alpha}=\rho_{0}^{\mu}\;,\] (17) \[\pi^{\mu\nu} \equiv \Delta_{\alpha\beta}^{\mu\nu}T^{\alpha\beta}=\rho_{0}^{\mu\nu}\;. \tag{18}\] In the above decompositions the fluid-flow four-velocity is the timelike eigenvector of the energy-momentum tensor, \(u^{\mu}=T^{\mu\nu}u_{\nu}/e_{0}\), as per Landau's definition [8], hence \[\rho_{1}^{\mu}\equiv\Delta_{\alpha}^{\mu}T^{\alpha\beta}u_{\beta}=0\;. \tag{19}\] The chemical potential and the temperature are determined through the Landau matching conditions, \[\rho_{1}\equiv N^{\mu}u_{\mu}-n_{0}=0\;,\quad\rho_{2}\equiv T^{\mu\nu}u_{\mu} u_{\nu}-e_{0}=0\;. \tag{20}\] The equations of motion for the primary dissipative quantities \(\rho_{0}=-3\Pi/m_{0}^{2}\), \(\rho_{0}^{\mu}=V^{\mu}\), and \(\rho_{0}^{\mu\nu}=\pi^{\mu\nu}\) follow from Eqs. (9)-(11). The five conservation equations (13) couple to these nine transport equations which contain various transport coefficients that explicitly depend on the underlying approximations and the influence of all non-dynamical moments included in this truncation. These equations are truncated according to a power-counting scheme in Knudsen and inverse Reynolds numbers. The Knudsen number, \({\rm Kn}\equiv\lambda_{\rm mfp}/L\), is the ratio of the particle mean free path \(\lambda_{\rm mfp}\) and a characteristic macroscopic scale \(L\), while the inverse Reynolds number \({\rm Re}^{-1}\) is the ratio of an out-of-equilibrium and a local equilibrium macroscopic field. 
The resulting equations of fluid dynamics are of second-order in Knudsen and/or inverse Reynolds numbers, and are closed in terms of 14 dynamical moments contained in \(N^{\mu}\) and \(T^{\mu\nu}\). Focusing on this second-order theory of relativistic fluid dynamics, we compute the moments of the linearized collision term for a gas of ultrarelativistic hard-spheres with constant cross-section. Introducing a novel anisotropic decomposition of the collision integral in the center-of-momentum frame, the calculation of the linearized collision matrices is done analytically in the ultrarelativistic limit. Using these exact results in the 14-dynamical moment approximation we collect and compute all transport coefficients, with five significant digits of precision, for truncation orders \(N_{0}-2=N_{1}-1=N_{2}=794\), corresponding to \(N_{0}+3N_{1}+5N_{2}+9=7160\) moments included in the basis. Then we also compare the effect of three slightly different power counting-schemes for the non-dynamic moments introduced in Refs. [7], [9], and [10], on all transport coefficients. The key difference between these power-counting schemes lies in the assumption about the relative magnitude of the Knudsen and inverse Reynolds numbers, and its impact on the corresponding transport coefficients. In this work, we will focus on three power-counting schemes, namely the DNMR approach Ref.[7] with additional corrections to the \(O({\rm Kn}^{2})\) transport coefficients calculated in Ref. [11]; the inverse Reynolds dominance (IReD) approach of Ref. [9], where all \(O({\rm Kn}^{2})\) terms are rewritten and absorbed into the \(O({\rm Re}^{-1}{\rm Kn})\) terms; and finally, the corrected DNMR (cDNMR) approach based on Ref. [10], in which the \(O({\rm Kn}^{2})\) transport coefficients receive contributions only from the asymptotic matching of the moments \(\rho_{r}^{\mu_{1}\cdots\mu_{\ell}}\) with positive energy index \(r\). The main results of this paper are the closed-form computation of the collision matrices for the scalar, vector and tensor moments in the case of massless ultrarelativistic particles interacting through a constant isotropic cross-section. This interaction model reduces in the non-relativistic limit to the well-studied hard-sphere interaction model, for which the first-order transport coefficients, i.e., the shear viscosity and heat conductivity, can be obtained in terms of the so-called Chapman-Cowling collision integrals [12; 13] via a successive iterative refinement procedure. This method can be extended into the relativistic regime [2; 14], where the exact expression requires a resummation over the entire hierarchy of moments [7]. We will demonstrate the truncation-order dependence with an analytical result only for the leading-order contribution with respect to the particle mass \(m_{0}\) to the bulk viscosity coefficient \(\zeta\) and to the relaxation time \(\tau_{\rm Pl}\) for the bulk viscous pressure. For all other transport coefficients, we rely on numerical methods to obtain their values in the limit of infinite truncation order. Another collision model for which the transport coefficients are obtained with similar accuracy as for the hard-sphere model is that of the so-called Maxwell molecules [13; 15], interacting via a potential \(V(r)\sim r^{-5}\), with \(r\) being the distance between two interacting particles. Two relativistic generalizations of this model correspond to the Israel particles model [16] and the Polak model [17]. 
More recently, the collision operator corresponding to the \(\lambda\phi^{4}\) theory was studied in Refs. [18; 19]. These results were used to compute transport coefficients in several fluid dynamical theories in Ref. [20]. The present work complements these studies considering the analogous problem for hard-sphere particles. This paper is structured as follows. In Sec. II, we introduce the expansion of the distribution function and the linearized collision integral in terms of irreducible moments. Then we discuss the various power-counting methods and the transport coefficients of second-order fluid dynamics. Section III clarifies the analytical structure of the collision integrals appearing in the moment equations up to tensor-rank two. These expressions are the main results of this work. In Sec. IV all first- and second-order transport coefficients are computed in the ultrarelativistic limit. The exact results for the coefficient of bulk viscosity and the relaxation time of the bulk viscous pressure are computed in Sec. V. Finally, Sec. VI concludes this work. For reasons of brevity and clarity various computations are relegated to the Appendices. ## II Second-order fluid dynamics with 14 dynamical moments For reasons of completeness, we first summarize the derivation of second-order relativistic fluid dynamics from the Boltzmann equation based on Ref. [7]. The near-equilibrium expansion is summarized in Sect. II.1. In Sect. II.2, we discuss the linearized collision integral and the various power-counting schemes in a unitary fashion. Finally, Sect. II.3 provides the relaxation equations of second-order fluid dynamics with 14 dynamical moments. The particle rest mass \(m_{0}\) is considered arbitrary (non-vanishing) throughout this section. ### Near-equilibrium expansion over a complete basis of irreducible tensors The expansion of \(\delta f_{\bf k}\) is given by, \[\delta f_{\bf k}\equiv f_{0\bf k}\tilde{f}_{0\bf k}\phi_{\bf k}=f_{0\bf k} \sum_{\ell=0}^{\infty}\sum_{n=0}^{N_{\ell}}\rho_{n}^{\mu_{1}\cdots\mu_{\ell} }k_{(\mu_{1}}\cdots k_{\mu_{\ell})}\mathcal{H}_{\bf k}^{(\ell)}\;, \tag{21}\] where the coefficient \(\mathcal{H}_{\bf k}^{(\ell)}\) is a polynomial in energy of order \(N_{\ell}\to\infty\) defined through another polynomial \(P_{\bf k}^{(\ell)}\) as \[\mathcal{H}_{\bf k}^{(\ell)}\equiv\frac{W^{(\ell)}}{\ell!}\sum_{m=n}^{N_{\ell }}a_{mn}^{(\ell)}P_{\bf k}^{(\ell)}\;,\quad P_{\bf k}^{(\ell)}\equiv\sum_{r=0}^ {m}a_{mr}^{(\ell)}E_{\bf k}^{\tau}\;. \tag{22}\] The negative-order moments \(\rho_{r<0}^{\mu_{1}\cdots\mu_{\ell}}\) are not included in the expansion (21) but they are expressed by a linear combination of positive-order moments through \[\rho_{-r}^{\mu_{1}\cdots\mu_{\ell}}=\sum_{n=0}^{N_{\ell}}\mathcal{F}_{rn}^{( \ell)}\rho_{n}^{\mu_{1}\cdots\mu_{\ell}}\;, \tag{23}\] where we defined \[\mathcal{F}_{rn}^{(\ell)}\equiv\frac{\ell!}{(2\ell+1)!!}\int{\rm d}KE_{\bf k}^ {-\tau}\left(\Delta^{\alpha\beta}k_{\alpha}k_{\beta}\right)^{\ell}\mathcal{H}_ {\bf k}^{(\ell)}f_{0\bf k}\tilde{f}_{0\bf k}\;, \tag{24}\] such that \(\mathcal{F}_{-r,n}^{(\ell)}=\delta_{rn}\) by construction. 
The coefficients \(a_{mn}^{(\ell)}\) are obtained via the Gram-Schmidt orthogonalization procedure by imposing the following condition, \[\int{\rm d}K\,\omega^{(\ell)}P_{\bf k}^{(\ell)}P_{\bf k}^{(\ell)}=\delta_{mn}\;, \tag{25}\] where the weight \(\omega^{(\ell)}\) is defined as \[\omega^{(\ell)}\equiv\frac{W^{(\ell)}}{(2\ell+1)!!}\left(\Delta^{\alpha\beta }k_{\alpha}k_{\beta}\right)^{\ell}f_{0\bf k}\;, \tag{26}\] while the normalization parameter \(W^{(\ell)}\) is fixed according to \(P_{\bf k}^{(\ell)}\equiv a_{00}^{(\ell)}=1\), leading to \[W^{(\ell)}=\frac{(-1)^{\ell}}{J_{2\ell,\ell}}\;. \tag{27}\] Here the thermodynamic integrals are defined as \[I_{nq}(\alpha,\beta)\equiv\frac{(-1)^{q}}{(2q+1)!!}\int\mathrm{d}KE_{\mathbf{k}}^{ n-2q}\left(\Delta^{\alpha\beta}k_{\alpha}k_{\beta}\right)^{q}f_{0\mathbf{k}}\;, \tag{28}\] \[J_{nq}(\alpha,\beta)\equiv\left(\frac{\partial I_{nq}(\alpha,\beta)}{\partial \alpha}\right)_{\beta}\;, \tag{29}\] where \((2q+1)!!=(2q+1)!/(2^{q}q!)\) is the double factorial of odd integers. For classical particles, \(a=0\) and \(J_{nq}(\alpha,\beta)=I_{nq}(\alpha,\beta)\). Using these integrals, the particle number density, the energy density, and the isotropic pressure are \(n_{0}=I_{10}\), \(e_{0}=I_{20}\), and \(P_{0}=I_{21}\). The coefficients \(\alpha_{r}^{(\ell)}(\alpha,\beta)\) in Eqs. (9-11) are \[\alpha_{r}^{(0)} \equiv-\beta J_{r+1,1}-\frac{n_{0}}{D_{20}}\left[hG_{2r}-G_{3r} \right]\;, \tag{30}\] \[\alpha_{r}^{(1)} \equiv J_{r+1,1}-\frac{J_{r+2,1}}{h_{0}}\;,\] (31) \[\alpha_{r}^{(2)} \equiv\beta J_{r+3,2}\;, \tag{32}\] where \(h_{0}\equiv\left(e_{0}+P_{0}\right)/n_{0}\) is the enthalpy per particle and \[G_{nm} \equiv J_{n0}J_{m0}-J_{n-1,0}J_{m+1,0}\;, \tag{33}\] \[D_{nq} \equiv J_{n+1,q}J_{n-1,q}-J_{nq}^{2}\;. \tag{34}\] ### The linearized collision integral and power counting methods Substituting the near-equilibrium distribution function from Eq. (7) into the binary collision term (2) and using the identity \(f_{0\mathbf{p}}f_{0\mathbf{p}^{\prime}}\tilde{f}_{0\mathbf{k}}\tilde{f}_{0 \mathbf{k}^{\prime}}=f_{0\mathbf{k}}f_{0\mathbf{k}^{\prime}}\tilde{f}_{0 \mathbf{p}}\tilde{f}_{0\mathbf{p}^{\prime}}\) while neglecting quadratic terms in \(\delta f_{\mathbf{k}}\), the irreducible moments of the linearized collision integral become \[C_{r}^{\left\langle\mu_{1}\cdots\mu_{\ell}\right\rangle} =\frac{1}{2}\int\mathrm{d}K\mathrm{d}K^{\prime}\mathrm{d}P\mathrm{ d}P^{\prime}\,W_{\mathbf{k}\mathbf{k}^{\prime}\to\mathbf{p}\mathbf{p}^{ \prime}}\,f_{0\mathbf{k}}f_{0\mathbf{k}^{\prime}}\tilde{f}_{0\mathbf{p}}\tilde {f}_{0\mathbf{p}^{\prime}}\] \[\times E_{\mathbf{k}}^{r}k^{\left\langle\mu_{1}\right.}\cdots k^{ \left.\mu_{\ell}\right\rangle}\left(\phi_{\mathbf{p}}+\phi_{\mathbf{p}^{ \prime}}-\phi_{\mathbf{k}}-\phi_{\mathbf{k}^{\prime}}\right)\;. \tag{35}\] Now, inserting the expansion from Eq. (21) into the above formula, the moments of the collision integral can be expressed in terms of a linear combination of irreducible moments from Eq. (8) as \[C_{r-1}^{\left\langle\mu_{1}\cdots\mu_{\ell}\right\rangle}=-\sum_{n=0}^{N_{ \ell}}\mathcal{A}_{rn}^{(\ell)}\rho_{n}^{n_{1}\cdots\mu_{\ell}}\;. 
\tag{36}\] The matrix \(\mathcal{A}_{rn}^{(\ell)}\) is defined as [7] \[\mathcal{A}_{rn}^{(\ell)} \equiv\frac{1}{2(2\ell+1)}\int\mathrm{d}K\mathrm{d}K^{\prime} \mathrm{d}P\mathrm{d}P^{\prime}\,W_{\mathbf{k}\mathbf{k}^{\prime}\to\mathbf{p} \mathbf{p}^{\prime}}\] \[\times f_{0\mathbf{k}}f_{0\mathbf{k}^{\prime}}\tilde{f}_{0\mathbf{p} }\tilde{f}_{0\mathbf{p}^{\prime}}E_{\mathbf{k}}^{r-1}k^{\left\langle\mu_{1} \right.}\cdots k^{\left.\mu_{\ell}\right\rangle}\] \[\times\left(\mathcal{H}_{\mathbf{k}n}^{(\ell)}k_{(\mu_{1}}\cdots k _{\mu_{\ell})}+\mathcal{H}_{\mathbf{k}^{\prime}n}^{(\ell)}k_{\left\langle\mu_ {1}\right.}^{\left.\left.\left.\cdot\cdot\cdot\right.\right.}k_{\mu_{\ell}}^{ \prime}\right.\right.\] \[\left.\left.-\mathcal{H}_{\mathbf{p}n}^{(\ell)}p_{\left\langle\mu_ {1}\right.}\cdots p_{\left.\mu_{\ell}\right\rangle}-\mathcal{H}_{\mathbf{p}^{ \prime}n}^{(\ell)}p_{\left\langle\mu_{1}\right.}^{\left.\left.\left.\cdot\cdot \right.\right.}p_{\mu_{\ell}\right\rangle}^{\left.\left.\left.\cdot\right.}p_{ \mu_{\ell}\right\rangle}\right)\;, \tag{37}\] and it is separated in loss and gain parts as \[\mathcal{A}_{rn}^{(\ell)}=\mathcal{A}_{rn}^{(\ell),1}-\mathcal{A}_{rn}^{(\ell), \mathbf{k}}\;. \tag{38}\] Now, using Eq. (22) to express \(\mathcal{H}_{\mathbf{k}n}^{(\ell)}\), we obtain \[\mathcal{A}_{rn}^{(\ell),1} \equiv\frac{W^{(\ell)}}{\ell!}\sum_{m=n}^{N_{\ell}}\sum_{q=0}^{m}a _{mn}^{(\ell)}a_{mq}^{(\ell)}\mathcal{L}_{r-1,q}^{(\ell)}\;, \tag{39}\] \[\mathcal{A}_{rn}^{(\ell),\mathbf{s}} \equiv\frac{W^{(\ell)}}{\ell!}\sum_{m=n}^{N_{\ell}}\sum_{q=0}^{m}a _{mn}^{(\ell)}a_{mq}^{(\ell)}\mathcal{G}_{r-1,q}^{(\ell)}\;, \tag{40}\] where the corresponding summands \(\mathcal{L}_{rn}^{(\ell)}\) and \(\mathcal{G}_{rn}^{(\ell)}\) are given by \[\mathcal{L}_{rn}^{(\ell)} \equiv\frac{1}{2}\frac{g^{2}}{(2\ell+1)}\int\mathrm{d}P\mathrm{d }P^{\prime}\mathrm{d}K\mathrm{d}K^{\prime}\,W_{\mathbf{k}\mathbf{k}^{\prime} \to\mathbf{p}\mathbf{p}^{\prime}}\] \[\times f_{0\mathbf{k}}f_{0\mathbf{k}^{\prime}}\tilde{f}_{0\mathbf{p} }\tilde{f}_{0\mathbf{p}^{\prime}}E_{\mathbf{k}}^{r}k^{(\mu_{1}}\cdots k^{\mu_ {\ell})}\] \[\times\left(E_{\mathbf{k}}^{n}k_{(\mu_{1}}\cdots k_{\mu_{\ell})}+E_ {\mathbf{k}^{\prime}}^{n}k_{(\mu_{1}}^{\prime}\cdots k^{\prime}_{\mu_{\ell})}^{ \prime}\right), \tag{41}\] and \[\mathcal{G}_{rn}^{(\ell)} \equiv\frac{g^{2}}{(2\ell+1)}\int\mathrm{d}P\mathrm{d}P^{\prime} \mathrm{d}K\mathrm{d}K^{\prime}\,W_{\mathbf{k}\mathbf{k}^{\prime}\to\mathbf{p} \mathbf{p}^{\prime}}\] \[\times f_{0\mathbf{k}}f_{0\mathbf{k}^{\prime}}\tilde{f}_{0 \mathbf{p}}\tilde{f}_{0\mathbf{p}^{\prime}}E_{\mathbf{k}}^{r}E_{\mathbf{p}}^{n}k ^{(\mu_{1}}\cdots k^{\mu_{\ell})}p_{(\mu_{1}}\cdots p_{\mu_{\ell})}\;, \tag{42}\] respectively. The computation of these summands and of the collision matrix in the ultrarelativistic limit \(m_{0}\beta\to 0\) is one of the main purposes of this paper and will be discussed in Sect. III. The inverse of the collision matrix, the relaxation-time matrix, contains the microscopic time scales proportional to the mean free time between collisions \(\tau_{\mathrm{mfp}}=\lambda_{\mathrm{mfp}}/c\), \[\tau_{rn}^{(\ell)}\equiv\left(\mathcal{A}^{(\ell)}\right)_{rn}^{-1}=\sum_{m=0}^ {N_{\ell}}\Omega_{rm}^{(\ell)}\frac{1}{\chi_{m}^{(\ell)}}\left(\Omega^{(\ell)} \right)_{mn}^{-1}\;. 
\tag{43}\] Here the matrix \(\Omega^{(\ell)}\) diagonalizes \(\mathcal{A}^{(\ell)}\), leading to eigenvalues that are arranged in increasing order, \(\chi_{r}^{(\ell)}\leq\chi_{r+1}^{(\ell)}\), \[\left(\Omega^{(\ell)}\right)^{-1}\mathcal{A}^{(\ell)}\Omega^{(\ell)}=\mathrm{ diag}\left(\chi_{0}^{(\ell)},\chi_{1}^{(\ell)},\cdots\right), \tag{44}\] where without loss of generality we set \(\Omega_{00}^{(\ell)}=1\). The diagonalization of the collision term identifies the slowest microscopic time scale that dominates the evolution of the linearized Boltzmann equation [7]. However, as discussed in Ref. [9], the diagonalization procedure is not required for the computation of the inverse collision matrix \(\tau^{(\ell)}\), since it can be obtained by directly inverting \(\mathcal{A} coefficients \(\zeta_{r}\), \(\kappa_{r}\), and \(\eta_{r}\), are defined as \[\zeta_{r}\equiv\frac{m_{0}^{2}}{3}\sum_{n=0,\neq 1,2}^{N_{0}} \tau_{rn}^{(0)}\alpha_{n}^{(0)},\] \[\kappa_{r}\equiv\sum_{n=0,\neq 1}^{N_{1}}\tau_{rn}^{(1)}\alpha_{n}^{ (1)},\quad\eta_{r}\equiv\sum_{n=0}^{N_{2}}\tau_{rn}^{(2)}\alpha_{n}^{(2)}, \tag{45}\] where the exclusions of \(n\neq 1,2\) and \(n\neq 1\) from the first and second summations are imposed by the conservation laws (13). The equations of motion for the primary dissipative quantities, \(\Pi=-m_{0}^{2}\rho_{0}/3\), \(V^{\mu}=\rho_{0}^{\mu}\), and \(\pi^{\mu\nu}=\rho_{0}^{\mu\nu}\), are obtained by performing the matrix multiplication of Eqs. (9)-(11) with \(\tau_{nr}^{(\ell)}\), followed by setting \(n=0\). In these equations, the terms of second order in Knudsen and/or inverse Reynolds numbers also contain irreducible moments \(\rho_{r}^{\mu_{1}\dots\mu_{\ell}}\) with \(r\neq 0\), which need to be specified, while moments with tensor rank \(\ell>2\) are omitted in what follows. Following the DNMR method [7], the irreducible moments for \(0<r\leq N_{\ell}\) are approximated by their asymptotic solutions as \[\rho_{r>0} \simeq-\frac{3}{m_{0}^{2}}\Omega_{r0}^{(0)}\Pi+\frac{3}{m_{0}^{2 }}(\zeta_{r}-\Omega_{r0}^{(0)}\zeta_{0})\theta\;, \tag{46a}\] \[\rho_{r>0}^{\mu} \simeq\Omega_{r0}^{(1)}V^{\mu}+(\kappa_{r}-\Omega_{r0}^{(1)} \kappa_{0})\nabla^{\mu}\alpha\;,\] (46b) \[\rho_{r>0}^{\mu\nu} \simeq\Omega_{r0}^{(2)}\pi^{\mu\nu}+2(\eta_{r}-\Omega_{r0}^{(2)} \eta_{0})\sigma^{\mu\nu}\;. \tag{46c}\] The remaining moments of negative order \(\rho_{-r}^{\mu_{1}\dots\mu_{\ell}}\) are obtained by substituting only the first terms, and hence neglecting terms of order \(O(\text{Kn})\), from the right-hand sides of Eqs. (46a)-(46c) into Eq. (23), leading to \[\rho_{-r} \simeq-\frac{3}{m_{0}^{2}}\gamma_{r0}^{(0)}\Pi+\frac{3}{m_{0}^{2 }}(\Gamma_{r0}^{(0)}-\gamma_{r0}^{(0)})\theta\;, \tag{47a}\] \[\rho_{-r}^{\mu} \simeq\gamma_{r0}^{(1)}V^{\mu}+(\Gamma_{r0}^{(1)}-\gamma_{r0}^{( 1)})\nabla^{\mu}\alpha\;,\] (47b) \[\rho_{-r}^{\mu\nu} \simeq\gamma_{r0}^{(2)}\pi^{\mu\nu}+2(\Gamma_{r0}^{(2)}-\gamma_{r 0}^{(2)})\sigma^{\mu\nu}\;., \tag{47c}\] where we displayed explicitly both the \(O(\text{Re}^{-1})\) and the \(O(\text{Kn})\) contributions. The DNMR coefficients, \(\gamma_{r0}^{(\ell)}\), are \[\gamma_{r0}^{(0)}\equiv\sum_{n=0,\neq 1,2}^{N_{0}} \mathcal{F}_{rn}^{(0)}\Omega_{n0}^{(0)}\;,\] \[\gamma_{r0}^{(1)}\equiv\sum_{n=0,\neq 1}^{N_{1}}\mathcal{F}_{rn}^{( 1)}\Omega_{n0}^{(1)}\;,\quad\gamma_{r0}^{(2)}\equiv\sum_{n=0}^{N_{2}}\mathcal{ F}_{rn}^{(2)}\Omega_{n0}^{(2)}\;. 
\tag{48}\] In addition, we introduced the so-called corrected DNMR coefficients, \(\Gamma_{r0}^{(\ell)}\), defined as [9; 10] \[\Gamma_{r0}^{(0)}\equiv\sum_{n=0,\neq 1,2}^{N_{0}} \mathcal{F}_{rn}^{(0)}\frac{\zeta_{n}}{\zeta_{0}}\;,\] \[\Gamma_{r0}^{(1)}\equiv\sum_{n=0,\neq 1}^{N_{1}}\mathcal{F}_{rn}^{( 1)}\frac{\kappa_{n}}{\kappa_{0}}\;,\quad\Gamma_{r0}^{(2)}\equiv\sum_{n=0}^{N_ {2}}\mathcal{F}_{rn}^{(2)}\frac{\eta_{n}}{\eta_{0}}\;. \tag{49}\] On the other hand, the thermodynamic forces can be replaced by the Navier-Stokes relations, \(\theta=-\Pi/\zeta_{0}\), \(\nabla^{\mu}\alpha=V^{\mu}/\kappa_{0}\), and \(\sigma^{\mu\nu}=\pi^{\mu\nu}/(2\eta_{0})\). Therefore substituting the right-hand sides of Eqs. (47) eliminates the \(O(\text{Kn})\) contributions to the negative-order moments \(\rho_{r<0}^{\mu_{1}\dots\mu_{\ell}}\) and yields [9; 10] \[\rho_{-r}\simeq-\frac{3}{m_{0}^{2}}\Gamma_{r0}^{(0)}\Pi\;,\;\rho_{-r}^{\mu} \simeq\Gamma_{r0}^{(1)}V^{\mu}\;,\;\rho_{-r}^{\mu\nu}\simeq\Gamma_{r0}^{(2)} \pi^{\mu\nu}\;. \tag{50}\] Finally, the so-called Inverse-Reynolds-Dominance (IReD) approximation of Ref. [9] defines a power counting scheme without the diagonalization procedure, such that the irreducible moments are of order \(O(\text{Re}^{-1})\). The non-dynamical positive-order moments are given by \[\rho_{r>0}\simeq-\frac{3}{m_{0}^{2}}\mathcal{C}_{r0}^{(0)}\Pi\;,\;\rho_{r>0}^{ \mu}\simeq\mathcal{C}_{r0}^{(1)}V^{\mu}\;,\;\rho_{r>0}^{\mu\nu}\simeq\mathcal{C }_{r0}^{(2)}\pi^{\mu\nu}\;, \tag{51}\] where the corresponding IReD coefficients, \(\mathcal{C}_{r0}^{(\ell)}\), are \[\mathcal{C}_{r0}^{(0)}\equiv\frac{\zeta_{r}}{\zeta_{0}}\;,\quad\mathcal{C}_{r0}^ {(1)}\equiv\frac{\kappa_{r}}{\kappa_{0}}\;,\quad\mathcal{C}_{r0}^{(2)}\equiv \frac{\eta_{r}}{\eta_{0}}\;, \tag{52}\] while the negative-order moments are given by Eqs. (50). To simplify our notation we will introduce a common variable, \(\xi_{r}^{(\ell)}\), for the transport coefficients (45) in what follows, \[\xi_{r}^{(0)}=\zeta_{r}\;,\quad\xi_{r}^{(1)}=\kappa_{r}\;,\quad\xi_{r}^{(2)}= \eta_{r}\;. \tag{53}\] To study the different power counting-schemes, we introduce the following notation for the non-dynamical moments encompassing the DNMR, the cDNMR, and the IReD approximations, \[\rho_{r} =-\frac{3}{m_{0}^{2}}\mathcal{X}_{r0}^{(0)}\Pi+\frac{3}{m_{0}^{2}} \mathcal{Y}_{r0}^{(0)}\theta\;, \tag{54}\] \[\rho_{r}^{\mu} =\mathcal{X}_{r0}^{(1)}V^{\mu}+\mathcal{Y}_{r0}^{(1)}\nabla^{\mu} \alpha\;,\] (55) \[\rho_{r}^{\mu\nu} =\mathcal{X}_{r0}^{(2)}\pi^{\mu\nu}+2\mathcal{Y}_{r0}^{(2)}\sigma^{ \mu\nu}\;. \tag{56}\] In the DNMR and cDNMR approximations, no assumptions are made about the relative magnitude of the terms. At asymptotically long times, when the magnitudes of the Knudsen and the inverse Reynolds numbers are of the same order, i.e., \(\text{Kn}\sim\text{Re}^{-1}\), also known as the order of magnitude approximation [11; 4], there is freedom to re-arrange the transport coefficients. The IReD approximation of Ref. [9] expresses the thermodynamic forces in terms of the primary dissipative quantities to replace \(\text{Kn}^{2}\to\text{Kn}\,\text{Re}^{-1}\) and hence removes terms that are of second order in the Knudsen number from the fluid-dynamical equations of motion. Here, for \(r=0\), in all cases, by definition, \[\mathcal{X}_{00}^{(\ell)}=\Omega_{00}^{(\ell)}=\mathcal{C}_{00}^{(\ell)}=1\;,\quad \mathcal{Y}_{00}^{(\ell)}=0\;. 
\tag{57}\] For \(r\neq 0\), the DNMR coefficients are \[\mathcal{X}_{r0}^{(\ell)}=\begin{cases}\Omega_{r0}^{(\ell)}\;,&r>0\;,\\ \gamma_{-r0}^{(\ell)}\;,&r<0\;,\end{cases} \tag{58}\] and \[\mathcal{Y}^{(\ell)}_{r0}=\begin{cases}\xi^{(\ell)}_{r0}-\Omega^{(\ell)}_{r0}\xi^ {(\ell)}_{0}\,&r>0\;,\\ \left(\Gamma^{(\ell)}_{-r,0}-\gamma^{(\ell)}_{-r,0}\right)\xi^{(\ell)}_{0}\,&r<0\;,\end{cases} \tag{59}\] as follows from Eqs. (46) and (47). Similarly, the cDNMR coefficients are \[\mathcal{X}^{(\ell)}_{r0}=\begin{cases}\Omega^{(\ell)}_{r0}\,&r>0\;,\\ \Gamma^{(\ell)}_{-r,0}\,&r<0\;,\end{cases} \tag{60}\] and \[\mathcal{Y}^{(\ell)}_{r0}=\begin{cases}\xi^{(\ell)}_{r}-\Omega^{(\ell)}_{r0} \xi^{(\ell)}_{0}\,&r>0\;,\\ 0\,&r<0\;,\end{cases} \tag{61}\] as it is apparent from Eqs. (46) and (50). Finally, the IReD coefficients can be identified from Eqs. (50) and (51): \[\mathcal{X}^{(\ell)}_{r0}=\begin{cases}\mathcal{C}^{(\ell)}_{r0}\,&r>0\;,\\ \Gamma^{(\ell)}_{-r,0}\,&r<0\;,\end{cases} \tag{62}\] while, by definition, \[\mathcal{Y}^{(\ell)}_{r0}=0\,\quad r\neq 0\;. \tag{63}\] Note that the following relation holds for all of the approaches: \[\xi^{(\ell)}_{0}\mathcal{X}^{(\ell)}_{r0}+\mathcal{Y}^{(\ell)}_{r0}=\begin{cases} \xi^{(\ell)}_{r}\,&r\geq 0\;,\\ \xi^{(\ell)}_{0}\Gamma^{(\ell)}_{-r,0}\,&r<0\;.\end{cases} \tag{64}\] ### Second-order fluid dynamical equations The relaxation equations for the irreducible moments are obtained by multiplying Eqs. (9)-(11) by \(\tau^{(\ell)}_{nr}\) and then summing over \(r\). Using the expression \[\sum_{r=0}^{N_{\ell}}\tau^{(\ell)}_{nr}C^{(\mu_{1}\cdots\mu_{\ell})}_{r-1}=- \rho^{(\mu_{1}\cdots\mu_{\ell})}_{n}+\text{(higher-order terms)}\;, \tag{65}\] derived using the property \[\sum_{r=0}^{N_{\ell}}\tau^{(\ell)}_{nr}\mathcal{A}^{(\ell)}_{rm}=\delta_{nm}\;, \tag{66}\] the second-order transport equations with a linearized collision integral for \(\Pi\), \(V^{\mu}\), and \(\pi^{\mu\nu}\) from Ref. [7] read \[\tau_{\Pi}\dot{\Pi}+\Pi =-\zeta\theta+\mathcal{J}+\mathcal{K}\;, \tag{67}\] \[\tau_{V}\dot{V}^{(\mu)}+V^{\mu} =\kappa\nabla^{\mu}\alpha+\mathcal{J}^{\mu}+\mathcal{K}^{\mu}\;,\] (68) \[\tau_{\pi}\dot{\pi}^{(\mu\nu)}+\pi^{\mu\nu} =2\eta\sigma^{\mu\nu}+\mathcal{J}^{\mu\nu}+\mathcal{K}^{\mu\nu}\;. \tag{69}\] Here, \(\tau_{\Pi}\), \(\tau_{V}\), and \(\tau_{\pi}\) are the relaxation times, while \(\zeta=\zeta_{0}\), \(\kappa=\kappa_{0}\), and \(\eta=\eta_{0}\) are the first-order transport coefficients, \[\zeta=\frac{m_{0}^{2}}{3}\sum_{r=0,\neq 1,2}^{N_{0}}\tau^{(0)}_{0r} \alpha^{(0)}_{r}\, \tau_{\Pi}=\sum_{r=0,\neq 1,2}^{N_{0}}\tau^{(0)}_{0r} \mathcal{X}^{(0)}_{r0}\, \tag{70}\] \[\kappa=\sum_{r=0,\neq 1}^{N_{1}}\tau^{(1)}_{0r}\alpha^{(1)}_{r}\, \tau_{V}=\sum_{r=0,\neq 1}^{N_{1}}\tau^{(1)}_{0r}\mathcal{X}^{(1)}_{r0}\,\] (71) \[\eta=\sum_{r=0}^{N_{2}}\tau^{(2)}_{0r}\alpha^{(2)}_{r}\, \tau_{\pi}=\sum_{r=0}^{N_{2}}\tau^{(2)}_{0r}\mathcal{X}^{(2)}_{r0}. 
\tag{72}\] Furthermore, \(\mathcal{J},\mathcal{J}^{\mu}\), and \(\mathcal{J}^{\mu\nu}\) collect the terms of order \(O(\text{Re}^{-1}\text{Kn})\), \[\mathcal{J} =-\ell_{\Pi\Gamma}\nabla_{\mu}V^{\mu}-\tau_{\Pi V}V_{\mu}\dot{u}^{ \mu}-\delta_{\Pi\Pi}\Pi\theta\] \[-\lambda_{\Pi V}V_{\mu}\nabla^{\mu}\alpha+\lambda_{\Pi\pi}\pi^{ \mu\nu}\sigma_{\mu\nu}\;, \tag{73}\] \[\mathcal{J}^{\mu} =-\tau_{V}V_{\nu}\omega^{\nu-\delta}_{VV}V^{\mu}\theta-\ell_{V\Pi }\nabla^{\mu}\Pi\] \[+\ell_{V\pi}\Delta^{\mu\nu}\nabla_{\lambda}\pi^{\lambda}{}_{\nu}+ \tau_{\Pi}\Pi\dot{u}^{\mu}-\tau_{V\pi}\pi^{\mu\nu}\dot{u}_{\nu}\] \[-\lambda_{VV}V_{\nu}\sigma^{\mu\nu}+\lambda_{\Pi}\Pi\nabla^{\mu} \alpha-\lambda_{V\pi}\pi^{\mu\nu}\nabla_{\nu}\alpha\;,\] \[\mathcal{J}^{\mu\nu} =2\tau_{\pi}\pi^{(\mu}_{\lambda}\omega^{\nu)\lambda}-\delta_{\pi\pi }\pi^{\mu\nu}\theta-\tau_{\pi\pi}\pi^{\lambda(\mu}\sigma^{\nu)}_{\lambda}+ \lambda_{\pi\Pi}\Pi\sigma^{\mu\nu}\] \[-\tau_{\pi V}V^{\langle\mu}\dot{u}^{\nu\rangle}+\ell_{\pi V}\nabla^{ \langle\mu}V^{\nu\rangle}+\lambda_{\pi V}V^{\langle\mu}\nabla^{\nu\rangle} \alpha\;. \tag{75}\] Finally, the tensors \(\mathcal{K}\), \(\mathcal{K}^{\mu}\), and \(\mathcal{K}^{\mu\nu}\) contain all contributions of order \(O(\text{Kn}^{2})\), given by \[\mathcal{K} =\tilde{\zeta}_{1}\omega_{\mu\nu}\omega^{\mu\nu}+\tilde{\zeta}_{2} \sigma_{\mu\nu}\sigma^{\mu\nu}+\tilde{\zeta}_{3}\theta^{2}+\tilde{\zeta}_{4}I^{ \mu}I_{\mu}\] \[+\tilde{\zeta}_{5}\dot{u}^{\mu}\dot{u}_{\mu}+\tilde{\zeta}_{6}I^{ \mu}\dot{u}_{\mu}+\tilde{\zeta}_{7}\nabla^{\mu}I_{\mu}+\tilde{\zeta}_{8}\nabla^{ \mu}\dot{u}_{\mu}\;, \tag{76}\] \[\mathcal{K}^{\mu} =\tilde{\kappa}_{1}\sigma^{\mu\nu}I_{\nu}+\tilde{\kappa}_{2}\sigma^ {\mu\nu}\dot{u}_{\nu}+\tilde{\kappa}_{3}I^{\mu}\theta+\tilde{\kappa}_{4}\dot{u }^{\mu}\theta\] \[+\tilde{\kappa}_{5}\omega^{\mu\nu}I_{\nu}+\tilde{\kappa}_{6} \Delta^{\mu}_{\lambda}\nabla_{\nu}\sigma^{\lambda\nu}+\tilde{\kappa}_{7}\nabla^{ \mu}\theta\;,\] (77) \[\mathcal{K}^{\mu\nu} =\tilde{\eta}_{1}\omega^{\lambda\langle\mu}\omega^{\nu\rangle}{}_{ \lambda}+\tilde{\eta}_{2}\theta\sigma^{\mu\nu}+\tilde{\eta}_{3}\sigma^{ \lambda\langle\mu}\sigma^{\nu\rangle}_{\lambda}+\tilde{\eta}_{4}\sigma^{ \langle\mu}_{3}\omega^{\nu\rangle\lambda}\] \[+\tilde{\eta}_{5}I^{\langle\mu}I^{\nu\rangle}+\tilde{\eta}_{6} \dot{u}^{\langle\mu}\dot{u}^{\nu\rangle}+\tilde{\eta}_{7}I^{\langle\mu}\dot{u}^{ \nu\rangle}+\tilde{\eta}_{8}\nabla^{\langle\mu}I^{\nu\rangle}\] \[+\tilde{\eta}_{9}\nabla^{\langle\mu}\dot{u}^{\nu\rangle}\;, \tag{78}\] where \(I^{\mu}=\nabla^{\mu}\alpha\) was introduced. Note that all coefficients appearing in Eqs. (73)-(75) and Eqs. (76)-(78) are calculated using the \(\mathcal{X}^{(\ell)}_{r0}\) and \(\mathcal{Y}^{(\ell)}_{r0}\) notation in Appendix B. ## III Exact collision matrices In this section, we provide exact expressions for the matrix elements of the linearized collision term assuming that the differential cross-section is constant. The transition rate from Eq. (3) now reads \[W_{\mathbf{k}\mathbf{k}^{\prime}\to\mathbf{p}\mathbf{p}^{\prime}}=\frac{s}{g ^{2}}(2\pi)^{5}\sigma_{T}\delta(k^{\mu}+k^{\prime\mu}-p^{\mu}-p^{\prime\mu})\;. \tag{79}\] We focus on the case of a massless, classical (Boltzmann) gas, such that \[f_{0\mathbf{k}}=e^{\alpha-\beta E_ Therefore the loss and gain terms introduced in Eqs. 
(41) and (42) simplify to \[\mathcal{L}^{(\ell)}_{rn} =\frac{\sigma_{T}}{(2\ell+1)}\int\mathrm{d}K\mathrm{d}K^{\prime}f_{ 0\mathbf{k}}f_{0\mathbf{k}^{\prime}}E^{r}_{\mathbf{k}}\frac{s}{2}\] \[\times k^{(\mu_{1}}\cdots k^{\mu_{\ell})}\left(E^{n}_{\mathbf{k}}k _{(\mu_{1}}\cdots k_{\mu_{\ell})}+E^{n}_{\mathbf{k}^{\prime}}k^{\prime}_{(\mu_ {1}}\cdots k^{\prime}_{\mu_{\ell})}\right)\;, \tag{81}\] and \[\mathcal{G}^{(\ell)}_{rn} =2\frac{\sigma_{T}(2\pi)^{5}}{(2\ell+1)}\int\mathrm{d}P\mathrm{d} P^{\prime}\mathrm{d}K\mathrm{d}K^{\prime}f_{0\mathbf{k}}f_{0\mathbf{k}^{ \prime}}E^{r}_{\mathbf{k}}E^{n}_{\mathbf{p}}\frac{s}{2}\] \[\times\delta\left(k^{\mu}+k^{\prime\mu}-p^{\mu}-p^{\mu}\right)k^ {(\mu_{1}}\cdots k^{\mu_{\ell})}p_{(\mu_{1}}\cdots p_{\mu_{\ell})}\;. \tag{82}\] We refer the reader to Appendices C-H for the details of the calculations. ### The loss terms The auxiliary loss terms defined in Eq. (81) can be obtained in closed form and read in the cases \(\ell=0,1\), and \(\ell=2\) as follows: \[\mathcal{L}^{(0)}_{rn} =\frac{P_{0}^{2}\sigma_{T}}{4}\beta^{2-r-n}\Big{[}2\Gamma(r+n+3)\] \[\qquad\qquad\qquad\qquad+\Gamma(r+3)\Gamma(n+3)\Big{]}\;, \tag{83a}\] \[\mathcal{L}^{(1)}_{rn} =\frac{P_{0}^{2}\sigma_{T}}{36}\beta^{-r-n}\Big{[}-6\Gamma(r+n+5)\] \[\qquad\qquad\qquad\qquad+\Gamma(r+4)\Gamma(n+4)\Big{]}\;, \tag{83b}\] and \[\mathcal{L}^{(2)}_{rn} =\frac{P_{0}^{2}\sigma_{T}}{15}\beta^{-2-r-n}\,\Gamma(r+n+7)\;. \tag{83c}\] Now, inserting these expressions into Eq. (39) and performing the summations, we obtain the loss part of the collision matrices, \[\mathcal{A}^{(0),\mathrm{l}}_{rn} =\beta P_{0}\sigma_{T}\left[\delta_{nr}+\frac{(r+1)!}{2}\delta_{n 1}\beta^{1-r}\right]\;, \tag{84a}\] \[\mathcal{A}^{(1),\mathrm{l}}_{rn} =\beta P_{0}\sigma_{T}\left[\delta_{nr}-\frac{(r+2)!}{6}\delta_{n 0}\beta^{-r}\right]\;,\] (84b) \[\mathcal{A}^{(2),\mathrm{l}}_{rn} =\beta P_{0}\sigma_{T}\delta_{nr}\;. \tag{84c}\] ### The gain terms Similarly, the auxiliary gain terms defined in Eq. (82) are given in closed form by \[\mathcal{G}^{(0)}_{rn} =-\frac{\sigma_{T}P_{0}^{2}\beta^{2-r-n}}{(n+1)(r+1)}\] \[\qquad\times\left[\Gamma(4+n+r)-\Gamma(3+r)\Gamma(3+n)\right]\;, \tag{85a}\] \[\mathcal{G}^{(1)}_{rn} =\frac{\sigma_{T}P_{0}^{2}\beta^{-r-n}}{3(1+r)(2+r)(1+n)(2+n)}\] \[\qquad\times\left[(r+n+rn-3)\Gamma(6+n+r)\right.\] \[\left.+\left(3r+3n+rn+11\right)\Gamma(4+r)\Gamma(4+n)\right]\;, \tag{85b}\] and \[\mathcal{G}^{(2)}_{rn} =-\frac{2\sigma_{T}P_{0}^{2}\beta^{-2-r-n}}{15(1+n)(2+n)(3+n)(1+r)( 2+r)(3+r)}\] \[\qquad\times\left[\big{[}64-6(r+n)+2(r^{2}+n^{2})-3rn\right.\] \[\qquad\left.3(n^{2}r+r^{2}n)+r^{2}n^{2}\big{]}\Gamma(8+r+n)\right.\] \[-\left.\big{[}22+4(r+n)+rn\big{]}\Gamma(6+r)\Gamma(6+n)\right\}\;. \tag{85c}\] Plugging these expressions into Eq. 
(40), the resulting gain contributions to the collision matrices read \[\mathcal{A}^{(0),\mathrm{g}}_{0n} =\frac{2(-1)^{n}\sigma_{T}P_{0}\beta^{1+n}}{(n+1)!}\left[-\delta_{ n0}+S^{(0)}_{n}(N_{0})\right]\;,\] \[\mathcal{A}^{(0),\mathrm{g}}_{r>0,n\leq r} =-\frac{2\sigma_{T}P_{0}\beta^{1+n-r}(r+1)!}{r(n+1)!}\left(1-\delta _{n0}\right)\;, \tag{86a}\] \[\mathcal{A}^{(1),\mathrm{g}}_{0n} =\frac{16(-1)^{n}\sigma_{T}P_{0}\beta^{1+n}}{(n+3)!}\Bigg{[}-\frac {3}{4}\delta_{n0}+S^{(1)}_{n}(N_{1})\Bigg{]}\;,\] \[\mathcal{A}^{(1),\mathrm{g}}_{r>0,n\leq r} =-\frac{2\sigma_{T}P_{0}\beta^{1+n-r}(r+2)!}{(n+3)!}\frac{n(r+4)-r} {r(r+1)}\;, \tag{86b}\] and \[\mathcal{A}^{(2),\mathrm{g}}_{0n} =\frac{432(-1)^{n}\sigma_{T}P_{0}\beta^{1+n}}{\lambda_{\mathrm{mfp }}(n+5)!}\left[-\frac{5}{18}\delta_{n0}+S^{(2)}_{n}(N_{2})\right]\;,\] \[\mathcal{A}^{(2),\mathrm{g}}_{r>0,n\leq r} =-\frac{2\sigma_{T}P_{0}\beta^{1+n-r}(r+4)!(n+1)(9n+nr-4r)}{(n+5 )!r(r+1)(r+2)}\;, \tag{86c}\] while \[\mathcal{A}^{(\ell),\mathrm{g}}_{r>0,n>r}=0\;. \tag{86d}\] Here we defined the auxiliary sums \[S^{(\ell)}_{n}\left(N_{\ell}\right)\equiv\sum_{m=n}^{N_{\ell}}\binom{m}{n} \frac{1}{(m+\ell)(m+\ell+1)}\;. \tag{87}\] ### The collision matrices Collecting the results from the previous subsections, we can write down closed-form expressions for the elements \(\mathcal{A}^{(\ell)}_{rn}\) of the collision matrices. In the case when \(\ell=0\), we obtain \[\mathcal{A}^{(0)}_{00} =\frac{1}{\lambda_{\rm mfp}}\frac{N_{0}-1}{N_{0}+1}\;,\] \[\mathcal{A}^{(0)}_{0,n>0} =\frac{2(-\beta)^{n}}{\lambda_{\rm mfp}(n+1)!}\left[S^{(0)}_{n}(N_{ 0})-\frac{\delta_{n1}}{2}\right]\;,\] \[\mathcal{A}^{(0)}_{r>0,n\leq r} =\frac{\beta^{n-r}(r+1)!}{\lambda_{\rm mfp}(n+1)!}\left(\delta_{ nr}+\frac{2}{r}\delta_{n0}+\delta_{n1}-\frac{2}{r}\right)\;,\] \[\mathcal{A}^{(0)}_{r>0,n>r} =0\;. \tag{88}\] Note that \(\mathcal{A}^{(0)}_{1n}=\mathcal{A}^{(0)}_{2n}=0\), since the particle number and energy are conserved, while \(\mathcal{A}^{(0)}_{r>0,0}=0\). Similarly, when \(\ell=1\) we have, \[\mathcal{A}^{(1)}_{0n} =\frac{16(-\beta)^{n}}{\lambda_{\rm mfp}(n+3)!}\left[S^{(1)}_{n} (N_{1})-\frac{\delta_{n0}}{2}\right],\] \[\mathcal{A}^{(1)}_{r>0,n\leq r} =\frac{\beta^{n-r}(r+2)!}{\lambda_{\rm mfp}(n+3)!r}\left(4n+nr-r\right)\] \[\times\left(\delta_{nr}+\delta_{n0}-\frac{2}{r+1}\right),\] \[\mathcal{A}^{(1)}_{r>0,n>r} =0, \tag{89}\] where \(\mathcal{A}^{(1)}_{1n}=0\) reflects the momentum conservation in binary collisions. Summarizing the results for \(\ell=2\), we have \[\mathcal{A}^{(2)}_{0n} =\frac{432(-\beta)^{n}}{\lambda_{\rm mfp}(n+5)!}S^{(2)}_{n}(N_{2} )\;,\] \[\mathcal{A}^{(2)}_{r>0,n\leq r} =\frac{\beta^{n-r}(r+4)!(n+1)}{\lambda_{\rm mfp}(n+5)!r(r+1)} \left(9n+nr-4r\right)\] \[\times\left(\delta_{nr}-\frac{2}{r+2}\right)\;,\] \[\mathcal{A}^{(2)}_{r>0,n>r} =0\;. \tag{90}\] All these collision matrices share a similar structure, in the sense that they are almost lower triangular matrices. In all cases, all entries appearing on the zeroth row are non-vanishing, most of them diverging when \(N_{\ell}\to\infty\) with different degrees of severity. Furthermore, the matrices for tensor-rank \(\ell\leq 2\) have \(2-\ell\) vanishing rows due to the conservation of the particle number and of four-momentum in binary collisions. 
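As a simple numerical check of these closed-form expressions, the sketch below tabulates the scalar collision matrix \(\mathcal{A}^{(0)}_{rn}\) of Eq. (88) for a small truncation order, in units where \(\beta=\lambda_{\rm mfp}=1\), and verifies the vanishing rows \(r=1,2\) and the vanishing column \(n=0\) for \(r>0\); the function and variable names are ours and not part of the paper.

```python
import numpy as np
from math import comb, factorial

def S0(n, N0):
    """Auxiliary sum S^(0)_n(N0) of Eq. (87) with ell = 0 (used for n >= 1)."""
    return sum(comb(m, n) / (m * (m + 1)) for m in range(n, N0 + 1))

def A0(N0):
    """Scalar collision matrix A^(0)_{rn} of Eq. (88) with beta = lambda_mfp = 1."""
    A = np.zeros((N0 + 1, N0 + 1))
    A[0, 0] = (N0 - 1) / (N0 + 1)
    for n in range(1, N0 + 1):                       # zeroth row, n > 0
        A[0, n] = 2 * (-1.0) ** n / factorial(n + 1) * (S0(n, N0) - 0.5 * (n == 1))
    for r in range(1, N0 + 1):                       # rows r > 0 are lower triangular
        for n in range(0, r + 1):
            A[r, n] = factorial(r + 1) / factorial(n + 1) * (
                (n == r) + 2.0 * (n == 0) / r + (n == 1) - 2.0 / r)
    return A

A = A0(6)
print(np.allclose(A[1], 0.0), np.allclose(A[2], 0.0))  # particle-number and energy conservation
print(np.allclose(A[1:, 0], 0.0))                      # A^(0)_{r>0,0} = 0
```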
Note that the non-vanishing entries on the zeroth row imply that the moments corresponding to hydrodynamic variables, i.e., \(\rho_{0}\), \(\rho_{0}^{\mu}\), and \(\rho_{0}^{\mu\nu}\), couple to all moments of the same tensor-rank, which was also a conclusion found in Ref. [19] in the case of the \(\lambda\varphi^{4}\)-theory. ## IV Second-order transport coefficients In this section, we compute all second-order transport coefficients from Eqs. (73)-(78) in the ultrarelativistic limit. The general expressions of these coefficients for arbitrary particle mass and various power-counting schemes are listed in Appendix B. All second-order transport coefficients are related to the inverse of the collision matrices \(\tau_{rn}^{(\ell)}\), for which we obtained analytical expression only in the scalar case when \(\ell=0\). For \(\ell=1\) and \(2\), we employed numerical computations to find the inverse of the collision matrices given in Eqs. (89) and (90). The numerical values were obtained through an extrapolation with respect to \(1/N_{2}\) by computing the best fit parameters \(a\), \(b\), and \(\mathfrak{t}_{\infty}\) of the power law \(\mathfrak{t}(N_{2})=\mathfrak{t}_{\infty}+aN_{2}^{-b}\), where \(\mathfrak{t}\) denotes a generic transport coefficient with convergence value \(\mathfrak{t}_{\infty}\). The fits are done on data points up to \(N_{2}=794\) through gnuplot scripts that are included in the supplementary material to this paper. All transport coefficients are listed to five significant digits, which is justified by the asymptotic standard deviation of the fit being of order \(O(10^{-6})\) or lower. We remark that the coefficients do not converge at the same speed. Specifically, we can estimate the values of \(N_{2}\) for fixed relative differences between all transport coefficients and their respective convergence values as \(N_{2}[O(10^{-4})]\simeq 16536\), \(N_{2}[O(10^{-5})]\simeq 168987\), and \(N_{2}[O(10^{-6})]\simeq 1726901\), respectively. These large numbers can be attributed mainly to the slow convergence of the coefficient \(\zeta_{4}\) in the cDNMR approach. For contrast, IReD leads to \(N_{2}[O(10^{-4})]\simeq 170\), \(N_{2}[O(10^{-5})]\simeq 554\), and \(N_{2}[O(10^{-6})]\simeq 1806\). For the validation of our numerical computations against analytically solvable models, we verified that our computations reproduces the results of Ref. [10], where all transport coefficients were computed in the well-known relaxation-time approximation of Anderson and Witting [21]. In the following, all transport coefficients are computed involving the general power-counting scheme, in terms of \(\mathcal{X}^{(\ell)}_{r0}\) and \(\mathcal{Y}^{(\ell)}_{r0}\). Henceforth as in Sect. II.2, we will report results for three power-counting schemes: DNMR, the corrected DNMR, and IReD. Differences between the DNMR and cDNMR methods appear only for the transport coefficients involving the functions \(\mathcal{X}^{(\ell)}_{r0}\) with \(r<0\), or the functions \(\mathcal{Y}^{(\ell)}_{r0}\). Conversely, the cDNMR and the IReD methods show discrepancies only for the coefficients involving \(\mathcal{X}^{(\ell)}_{r0}\) and \(\mathcal{Y}^{(\ell)}_{r0}\) with \(r>0\). 
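The \(1/N_{2}\) extrapolation described above can be sketched as follows. The actual fits in this work are performed with gnuplot on the computed coefficients, so the snippet below only illustrates fitting the power law \(\mathfrak{t}(N_{2})=\mathfrak{t}_{\infty}+aN_{2}^{-b}\) on synthetic placeholder data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit t(N2) = t_inf + a * N2**(-b) and read off the N2 -> infinity limit t_inf.
def power_law(N2, t_inf, a, b):
    return t_inf + a * N2 ** (-b)

# synthetic placeholder values of a generic transport coefficient at several
# truncation orders (NOT data from the paper)
N2_vals = np.array([50.0, 100.0, 200.0, 400.0, 794.0])
t_vals = power_law(N2_vals, 0.5, 0.8, 1.1)
t_vals += 1e-6 * np.random.default_rng(1).standard_normal(t_vals.size)

popt, pcov = curve_fit(power_law, N2_vals, t_vals, p0=(t_vals[-1], 1.0, 1.0))
t_inf, a, b = popt
print(f"extrapolated value: {t_inf:.5f} +/- {np.sqrt(pcov[0, 0]):.1e}")
```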
Furthermore, in order to assess the magnitude of the higher-order corrections originating from the irreducible moments \(\rho_{r}^{\mu_{1}\cdots\mu_{\ell}}\) with \(\ell\leq 2\) and \(r\neq 0\), we also list the values of the transport coefficients appearing in the \(\mathcal{J}^{\mu_{1}\cdots\mu_{\ell}}\)-terms for the lowest possible truncation order, the 14-dynamical-moments approximation (14M) containing only the moments appearing in \(N^{\mu}\) and \(T^{\mu\nu}\), i.e., \(N_{0}=2\), \(N_{1}=1\), and \(N_{2}=0\). The computation of the transport coefficients is done via a Mathematica notebook, which can be found in the supplementary material to this article. Since the bulk viscous pressure \(\Pi\) vanishes in the ultrarelativistic limit, only the coefficients which are unrelated to it are computed in Sect. IV.2. The remaining second-order coefficients involving the bulk viscosity are expanded up to leading order with respect to the particle mass \(m_{0}\) in Sect. IV.3. Finally, Sect. IV.4 ends with a discussion about the possible combinations of transport coefficients that remain invariant under the reshuffling between the \(\mathrm{Kn}^{2}\) and \(\mathrm{Re}^{-1}\mathrm{Kn}\) terms, as also considered in Ref. [9].

### Thermodynamic functions in the massless limit

In this section we present the various thermodynamic functions necessary for the computation of the transport coefficients. Since \(n_{0}\sim\beta^{-3}\) and \(P_{0}\sim\beta^{-4}\), it follows that \[\mathcal{C}^{(\ell)}_{r0}\sim\beta^{-r}\;,\quad\Omega^{(\ell)}_{r0}\sim\beta^{-r}\;,\quad\mathcal{X}^{(\ell)}_{r0}\sim\beta^{-r}\;, \tag{91}\] while the mean free path \(\lambda_{\mathrm{mfp}}=1/(\sigma_{T}n_{0})\sim\beta^{3}\), and thus, \[\tau^{(\ell)}_{nr}\sim\lambda_{\mathrm{mfp}}\beta^{n-r}\sim\beta^{3+n-r}\;. \tag{92}\] The thermodynamic functions \(\mathcal{H}\) and \(\bar{\mathcal{H}}\) are given to leading order with respect to \(m_{0}\) by \[\mathcal{H}(\alpha,\beta)\equiv\frac{n_{0}}{D_{20}}\left(h_{0}J_{20}-J_{30}\right)=m_{0}^{2}\frac{\beta^{2}}{3}\;,\] \[\bar{\mathcal{H}}(\alpha,\beta)\equiv\frac{n_{0}}{D_{20}}\left(h_{0}J_{10}-J_{20}\right)=\frac{\beta}{3}\;. \tag{93}\] Furthermore, \[\frac{G_{2r}}{D_{20}} =\frac{\beta^{2-r}}{6}(1-r)(r+1)!\;, \tag{94a}\] \[\frac{G_{3r}}{D_{20}} =\frac{\beta^{1-r}}{2}(2-r)(r+1)!\;,\] (94b) \[\frac{\beta J_{r+2,1}}{e_{0}+P_{0}} =\frac{\beta^{1-r}}{24}(r+3)!\;. \tag{94c}\] These relations can be used to show that \(\alpha^{(0)}_{r}\) vanishes in the massless limit. To leading order with respect to \(m_{0}\), the \(\alpha^{(\ell)}_{r}\) coefficients evaluate to \[\frac{\alpha^{(0)}_{r}}{m_{0}^{2}} =\frac{\beta^{4-r}P}{36}r!(r-1)(r-2)\;, \tag{95a}\] \[\alpha^{(1)}_{r} =\frac{\beta^{1-r}P}{24}(r+2)!(1-r)\;,\] (95b) \[\alpha^{(2)}_{r} =\frac{\beta^{-r}P}{30}(r+4)!\;.
\tag{95c}\] \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Method & \(\eta\) & \(\tau_{\pi}[\lambda_{\mathrm{mfp}}]\) & \(\delta_{\pi\pi}[\tau_{\pi}]\) & \(\ell_{\pi V}[\tau_{\pi}]\) & \(\tau_{\pi V}[\tau_{\pi}]\) & \(\tau_{\pi\pi}[\tau_{\pi}]\) & \(\lambda_{\pi V}[\tau_{\pi}]\) \\ \hline \hline 14M & \(4/(3\sigma\beta)\) & \(5/3\) & \(4/3\) & \(0\) & \(0\) & \(10/7\) & \(0\) \\ \hline IReD & \(1.2676/(\sigma\beta)\) & \(1.6557\) & \(4/3\) & \(-0.56960/\beta\) & \(-2.2784/\beta\) & \(1.6945\) & \(0.20503/\beta\) \\ \hline DNMR \& cDNMR & \(1.2676/(\sigma\beta)\) & \(2\) & \(4/3\) & \(-0.68317/\beta\) & \(-2.7327/\beta\) & \(1.6888\) & \(0.24188/\beta\) \\ \hline \end{tabular} \end{table} Table 2: Same as Table 1, but for the second-order transport coefficients in \(\mathcal{J}^{\mu\nu}\) for the shear-stress tensor \(\pi^{\mu\nu}\). \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Method & \(\pi_{\pi}[\lambda_{\mathrm{mfp}}]\) & \(\delta_{\pi\pi}[\tau_{\pi}]\) & \(\ell_{\pi V}[\tau_{\pi}]\) & \(\tau_{\pi V}[\tau_{\pi}]\) & \(\tau_{\pi\pi}[\tau_{\pi}]\) & \(\lambda_{\pi V}[\tau_{\pi}]\) \\ \hline \hline 14M & \(4/(3\sigma\beta)\) & \(5/3\) & \(4/3\) & \(0\) & \(0\) & \(10/7\) & \(0\) \\ \hline IReD & \(1.2676/(\sigma\beta)\) & \(1.6557\) & \(4/3\) & \(-0.56960/\beta\) & \(-2.2784/\beta\) & \(1.6945\) & \(0.20503/\beta\) \\ \hline DNMR \& cDNMR & \(1.2676/(\sigma\beta)\) & \(2\) & \(4/3\) & \(-0.68317/\beta\) & \(-2.7327/\beta\) & \(1.6888\) & \(0.24188/\beta\) \\ \hline \end{tabular} \end{table} Table 1: The coefficient of diffusion and second-order transport coefficients in \(\mathcal{J}^{\mu}\) for the particle-diffusion current \(V^{\mu}\), evaluated in different approaches. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Method & \(\tilde{\pi}_{1}[\tau_{\pi}]\) & \(\tilde{\beta}_{72}[\tau_{\pi}]\) & \(\tilde{\beta}\tilde{\eta}_{2}[\tau_{\pi}]\) & \(\tilde{\beta}\tilde{\eta}_{4}[\tau_{\pi}]\) & \(\tilde{\beta}\tilde{\eta}_{5}[\tau_{\pi}]\) & \(\tilde{\beta}\tilde{\eta}_{6}[\tau_{\pi}]\) & \(\tilde{\beta}\tilde{\eta}_{7}[\tau_{\pi}]\) & \(\tilde{\beta}\tilde{\eta}_{8}[\tau_{\pi}]\) & \(\tilde{\beta}\tilde{\eta}_{9}[\tau_{\pi}]\) \\ \hline \hline DNMR \& cDNMR & \(-0.43647\) & \(0.14549\) & \(0.28867\) & \(-0.87294\) & \(-0.011466\) & \(-2.1824\) & \(-0.13454\) & \(0.033634\) & \(0.43647\) \\ \hline \end{tabular} \end{table} Table 3: Second-order transport coefficients in \(\mathcal{K}^{\mu}\) for the particle-diffusion current \(V^{\mu}\) evaluated using the DNMR and corrected DNMR methods. The IReD and strict 14M approaches lead to \(\widetilde{\kappa}_{1}=\widetilde{\kappa}_{2}=\widetilde{\kappa}_{3}= \widetilde{\kappa}_{5}=\widetilde{\kappa}_{6}=0\). Note that \(\alpha_{1}^{(0)}=\alpha_{2}^{(0)}=\alpha_{1}^{(1)}=0\) for arbitrary mass. We can thus derive the following relations: \[\frac{\beta}{m_{0}^{4}}\frac{\partial\zeta_{r}}{\partial\beta}=(3-r )\frac{\zeta_{r}}{m_{0}^{4}}\;,\] \[\beta\frac{\partial\kappa_{r}}{\partial\beta}=-r\kappa_{r}\;,\quad \beta\frac{\partial\eta_{r}}{\partial\beta}=-(r+1)\eta_{r}\;. \tag{96}\] This gives an identical behaviour of \(\mathcal{Y}_{r0}^{(\ell)}\): \[\frac{\beta}{m_{0}^{4}}\frac{\partial\mathcal{Y}_{r0}^{(0)}}{ \partial\beta}=(3-r)\frac{\mathcal{Y}_{r0}^{(0)}}{m_{0}^{4}}\;,\] \[\beta\frac{\partial\mathcal{Y}_{r0}^{(1)}}{\partial\beta}=-r \mathcal{Y}_{r0}^{(1)}\;,\quad\beta\frac{\partial\mathcal{Y}_{r0}^{(2)}}{ \partial\beta}=-(r+1)\mathcal{Y}_{r0}^{(2)}\;. 
\tag{97}\]

### Transport coefficients for the ultrarelativistic fluid

In this subsection, we summarize the second-order transport coefficients in the case of vanishing particle mass, by taking the appropriate limits in the formulas displayed in Appendix B. Since, in this limit, the scalar sector involving the bulk viscous pressure does not play a role, we postpone the discussion of the transport coefficients governing the coupling to \(\Pi\) to the next subsection. We begin with the transport coefficients appearing in the equation for \(V^{\mu}\), Eq. (68). The coefficients for the \(O(\mathrm{Re}^{-1}\mathrm{Kn})\) terms appearing in \(\mathcal{J}^{\mu}\), Eq. (74), are \[\delta_{VV}=\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}\mathcal{X}_{r0}^{(1)}=\tau_{V}\;, \tag{98a}\] \[\ell_{V\pi}=\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}\bigg{[}\frac{(r+3)!}{24}\beta^{1-r}-\mathcal{X}_{r-1,0}^{(2)}\bigg{]}\;,\] (98b) \[\tau_{V\pi}=\ell_{V\pi}\;,\] (98c) \[\lambda_{VV}=\frac{1}{5}\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}(2r+3)\mathcal{X}_{r0}^{(1)}\;,\] (98d) \[\lambda_{V\pi}=\frac{1}{4}\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}(1-r)\mathcal{X}_{r-1,0}^{(2)}\;. \tag{98e}\] These coefficients, together with the coefficient of diffusion \(\kappa\) and the relaxation time of diffusion \(\tau_{V}\) introduced in Eqs. (72), are computed in Table 1. The \(O(\mathrm{Kn}^{2})\) coefficients from \(\mathcal{K}^{\mu}\) in Eq. (77) are given by \[\widetilde{\kappa}_{1}=\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}\bigg{[}\frac{2(1-r)}{5}\mathcal{Y}_{r0}^{(1)}+\frac{r}{2}\mathcal{Y}_{r-1,0}^{(2)}\bigg{]}\;, \tag{99a}\] \[\widetilde{\kappa}_{3}=-\frac{2}{3}\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}\mathcal{Y}_{r0}^{(1)}\;,\] (99b) \[\widetilde{\kappa}_{5}=2\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}\mathcal{Y}_{r0}^{(1)}=-3\widetilde{\kappa}_{3}\;,\] (99c) \[\widetilde{\kappa}_{6}=-2\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}\mathcal{Y}_{r-1,0}^{(2)}\;. \tag{99d}\] The numerical values of these coefficients are given in Table 3 for the DNMR and cDNMR power-counting schemes. Note again that in the IReD power-counting scheme, \(\widetilde{\kappa}_{i}=0\) by construction. As expected, DNMR and cDNMR disagree only for \(\widetilde{\kappa}_{6}\), which involves the coefficient \(\mathcal{Y}_{-1,0}^{(2)}\). Note that here we excluded \(\widetilde{\kappa}_{2}\), \(\widetilde{\kappa}_{4}\), and \(\widetilde{\kappa}_{7}\), since they vanish in the ultrarelativistic limit as \(m_{0}^{2}\to 0\). The leading-order contributions to the \(\widetilde{\kappa}_{4}\) and \(\widetilde{\kappa}_{7}\) coefficients are computed in the limit of small mass in Sect. IV.3. The \(O(\mathrm{Re}^{-1}\mathrm{Kn})\) coefficients in the relaxation equation for the shear-stress tensor (69), listed in Eq. (75), are \[\delta_{\pi\pi}=\frac{4}{3}\sum_{r=0}^{N_{2}}\tau_{0r}^{(2)}\,\mathcal{X}_{r0}^{(2)}=\frac{4}{3}\tau_{\pi}\;, \tag{100a}\] \[\ell_{\pi V}=\frac{2}{5}\sum_{r=0}^{N_{2}}\tau_{0r}^{(2)}\mathcal{X}_{r+1,0}^{(1)}\;,\] (100b) \[\tau_{\pi V}=4\ell_{\pi V}\;,\] (100c) \[\tau_{\pi\pi}=\frac{2}{7}\sum_{r=0}^{N_{2}}\tau_{0r}^{(2)}(2r+5)\mathcal{X}_{r0}^{(2)}\;,\] (100d) \[\lambda_{\pi V}=-\frac{1}{10}\sum_{r=0}^{N_{2}}\tau_{0r}^{(2)}(r+1)\mathcal{X}_{r+1,0}^{(1)}\;. \tag{100e}\] These are computed in Table 2. The \(O(\mathrm{Kn}^{2})\) coefficients appearing in \(\mathcal{K}^{\mu\nu}\), introduced in Eq.
(78), are \[\widetilde{\eta}_{1}=2\sum_{r=0}^{N_{2}}\tau_{0r}^{(2)}\mathcal{Y }_{r0}^{(2)}\;, \tag{101a}\] \[\widetilde{\eta}_{2}=-\frac{1}{3}\widetilde{\eta}_{1}\;,\] (101b) \[\widetilde{\eta}_{3}=-\frac{2}{7}\sum_{r=0}^{N_{2}}\tau_{0r}^{(2)} (4r+3)\mathcal{Y}_{r0}^{(2)}\;,\] (101c) \[\widetilde{\eta}_{4}=2\widetilde{\eta}_{1}\;,\] (101d) \[\widetilde{\eta}_{5}=-\frac{1}{10}\sum_{r=0}^{N_{2}}\tau_{0r}^{(2)} (r+1)\mathcal{Y}_{r+1,0}^{(1)}\;,\] (101e) \[\widetilde{\eta}_{6}=5\widetilde{\eta}_{1}\;,\] (101f) \[\widetilde{\eta}_{7}=-\frac{8}{5}\sum_{r=0}^{N_{2}}\tau_{0r}^{(2)} \mathcal{Y}_{r+1,0}^{(1)}\;,\] (101g) \[\widetilde{\eta}_{8}=-\frac{1}{4}\widetilde{\eta}_{7}\;,\] (101h) \[\widetilde{\eta}_{9}=-\widetilde{\eta}_{1}\;. \tag{101i}\] Since none of the above coefficients involve \(\mathcal{Y}_{r0}^{(\ell)}\) with negative \(r\), both DNMR and cDNMR agree. The explicit values of these coefficients are summarized in Table 4. ### Leading-order contributions to the transport coefficients coupling to the bulk viscous pressure Here we compute the leading order contributions of the remaining coefficients which couple to the bulk viscous pressure from Eqs. (73)-(78). Note that we excluded \(\widetilde{\kappa}_{2}\), since the evaluation of the leading-order correction to this coefficient requires the computation of \(m_{0}^{2}\) corrections to the collision integral, which is beyond the scope of the present paper. We begin with the transport coefficients appearing in the equation for the bulk viscous pressure \(\Pi\), Eq. (67). The \(O(\text{Re}^{-1}\text{Kn})\) coefficients appearing in Eq. (73) are obtained by taking the massless limit of the expressions listed in Eqs. (101), and read \[\delta_{\Pi\Pi} =\frac{2}{3}\tau_{\Pi}\;, \tag{102a}\] \[\frac{\ell_{\Pi V}}{m_{0}^{2}} =-\sum_{r=0,\neq 1,2}^{N_{0}}\frac{\tau_{0r}^{(0)}}{3}\left[ \mathcal{X}_{r-1,0}^{(1)}+(r+1)!\frac{(r-2)}{2}\beta^{1-r}\right]\;,\] (102b) \[\frac{\tau_{\Pi V}}{m_{0}^{2}} =-\frac{\ell_{\Pi V}}{m_{0}^{2}}\;,\] (102c) \[\frac{\lambda_{\Pi V}}{m_{0}^{2}} =\frac{1}{12}\sum_{r=0,\neq 1,2}^{N_{0}}\tau_{0r}^{(0)}(r-1) \mathcal{X}_{r-1,0}^{(1)}\;,\] (102d) \[\frac{\lambda_{\Pi\pi}}{m_{0}^{2}} =-\frac{1}{3}\sum_{r=0,\neq 1,2}^{N_{0}}\tau_{0r}^{(0)}(r-1) \left[\mathcal{X}_{r-2,0}^{(2)}-\frac{(r+1)!}{6}\beta^{2-r}\right]\;. \tag{102e}\] Note that here, the last four coefficients are divided by \(m_{0}^{2}\) to extract their leading-order values. The numerical values of these coefficients, together with the bulk viscosity \(\zeta\) and bulk relaxation time \(\tau_{\Pi}\), are listed in Table 5. 
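As a quick consistency check, the entries of Table 5 respect the exact relations \(\delta_{\Pi\Pi}=\frac{2}{3}\tau_{\Pi}\) and \(\tau_{\Pi V}=-\ell_{\Pi V}\) of Eqs. (102a) and (102c): all methods list \(\delta_{\Pi\Pi}[\tau_{\Pi}]=2/3\), and, e.g., the IReD entries for \(\ell_{\Pi V}\) and \(\tau_{\Pi V}\) are \(\pm 0.067077\,\beta\).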
The leading-order contribution of the terms of second order in the Knudsen number (76) appearing in the equation of motion for the bulk viscous pressure are \[\frac{\widetilde{\zeta}_{1}}{m_{0}^{4}} =\sum_{r=0,\neq 1,2}^{N_{0}}\tau_{0r}^{(0)}\frac{\mathcal{Y}_{r0}^{( 0)}}{m_{0}^{4}}\;, \tag{103a}\] \[\frac{\widetilde{\zeta}_{2}}{m_{0}^{2}} =-\frac{2}{3}\sum_{r=0,\neq 1,2}^{N_{0}}\tau_{0r}^{(0)}(r-1) \mathcal{Y}_{r-2,0}^{(2)}\;,\] (103b) \[\frac{\widetilde{\zeta}_{3}}{m_{0}^{4}} =\frac{4}{3}\frac{\widetilde{\zeta}_{1}}{m_{0}^{4}}\;,\] (103c) \[\frac{\widetilde{\zeta}_{4}}{m_{0}^{2}} =-\frac{1}{12}\sum_{r=0,\neq 1,2}^{N_{0}}\tau_{0r}^{(0)}(r-1) \mathcal{Y}_{r-1,0}^{(1)}\;,\] (103d) \[\frac{\widetilde{\zeta}_{5}}{m_{0}^{4}} =-5\frac{\widetilde{\zeta}_{1}}{m_{0}^{4}}\;,\] (103e) \[\frac{\widetilde{\zeta}_{6}}{m_{0}^{2}} =-\frac{1}{3}\sum_{r=0,\neq 1,2}^{N_{0}}\tau_{0r}^{(0)} \mathcal{Y}_{r-1,0}^{(1)}\;,\] (103f) \[\frac{\widetilde{\zeta}_{7}}{m_{0}^{2}} =-\frac{\widetilde{\zeta}_{6}}{m_{0}^{2}}\;,\] (103g) \[\frac{\widetilde{\zeta}_{8}}{m_{0}^{4}} =\frac{\widetilde{\zeta}_{1}}{m_{0}^{4}}\;. \tag{103h}\] These coefficients are collected in Table 6. Next, we move on to Eq. (68) for the diffusion current. The coefficients appearing in Eq. (74) which are related to the bulk viscous pressure read: \[m_{0}^{2}\ell_{V\Pi} =\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}\mathcal{X}_{r+1,0}^{(0)}\;, \tag{104a}\] \[m_{0}^{2}\tau_{V\Pi} =2m_{0}^{2}\ell_{V\Pi}\;,\] (104b) \[m_{0}^{2}\lambda_{V\Pi} =\frac{1}{4}\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}(1+r) \mathcal{X}_{r+1,0}^{(0)}\;. \tag{104c}\] The leading-order contributions to the coefficients contained in the terms \(\mathcal{K}^{\mu}\) which vanish in the ultrarelativistic limit are given by \[\frac{\widetilde{\kappa}_{4}}{m_{0}^{2}} =\frac{\beta^{2}}{2}\widetilde{\kappa}_{5}-5\sum_{r=0,\neq 1}^{N_{1}} \tau_{0r}^{(1)}\mathcal{Y}_{r+1,0}^{(0)}\;, \tag{105a}\] \[\frac{\widetilde{\kappa}_{7}}{m_{0}^{2}} =\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}\mathcal{Y}_{r+1,0}^{(0)}- \frac{\beta^{2}}{6}\widetilde{\kappa}_{5}\;. \tag{105b}\] Finally, in the case of the equation for the shear-stress tensor, the coefficient in Eq. (75) related to \(\Pi\) is \[m_{0}^{2}\lambda_{\pi\Pi} =-\frac{2}{5}\sum_{r=0}^{N_{2}}\tau_{0r}^{(2)}(r+4)\mathcal{X}_{r+ 2,0}^{(0)}\;. \tag{106}\] There are no \(O(\text{Kn}^{2})\) coefficients to report in this case. Note that, in the 14M approximation, where \(N_{2}=0\), the coefficient \(\lambda_{\pi\Pi}\) does not diverge when \(m_{0}\to 0\). However, for all orders \(N_{2}>0\), it diverges as \(1/m_{0}^{2}\), which is why we list its value multiplied by the square of the mass. The explicit values of the coefficients in Eqs. (104)-(106) are listed in Table 7. ### Invariant combinations of transport coefficients It was already noticed in Ref. [9] that there are various combinations of second-order transport coefficients that stay invariant regardless of the power-counting scheme. To keep the discussion in this section as general as possible, we consider the functions \(\mathcal{X}_{r0}^{(\ell)}\) to be arbitrary and enforce Eq. (64) to determine \[\mathcal{Y}_{r0}^{(\ell)}=\xi_{0}^{(\ell)}(\mathcal{C}_{r0}^{(\ell)}-\mathcal{ X}_{r0}^{(\ell)})\;, \tag{107}\] where \({\cal C}^{(\ell)}_{r>0,0}=\xi^{(\ell)}_{r}/\xi^{(\ell)}_{0}\) was introduced in Eq. (52) and since \({\cal F}^{(\ell)}_{-r,n}=\delta_{rn}\), therefore \({\cal C}^{(\ell)}_{r<0,0}=\Gamma^{(\ell)}_{-r,0}\). 
The combinations of transport coefficients that are invariant with respect to the power-counting method are those which have no explicit dependence on the essentially arbitrary functions \({\cal X}^{(\ell)}_{r0}\). In the more general case of massive particles, these combinations are listed in Table 2 of Ref. [9]. We now identify similar combinations in the case of massless particles. In order to do so, we compare the expressions for the leading-order contributions of the second-order transport coefficients belonging to terms of order \(O({\rm Kn}^{2})\), i.e., Eqs. (99), (101), (103), and (105), to the ones belonging to terms of order \(O({\rm Re}^{-1}{\rm Kn})\), i.e., Eqs. (98), (100), (102), (104), and (106) and make use of Eq. (107) to eliminate \({\cal Y}^{(\ell)}_{r0}\) in favor of \({\cal X}^{(\ell)}_{r0}\). For the relaxation times, the invariant combinations are \[\tau_{\Pi}+\frac{\widetilde{\zeta}_{1}}{\zeta}\;,\quad\tau_{V}+\frac{ \widetilde{\kappa}_{5}}{2\kappa}\;,\quad\tau_{\pi}+\frac{\widetilde{\eta}_{1} }{2\eta}\;, \tag{108}\] while for the second-order coefficients appearing in the equation of motion for \(\Pi\), they read \[\ell_{\Pi V}-\frac{\widetilde{\zeta}_{7}}{\kappa}\;,\quad\tau_{\Pi V}-\frac{ \widetilde{\zeta}_{6}}{\kappa}\;,\quad\lambda_{\Pi V}-\frac{\widetilde{\zeta}_ {4}}{\kappa}\;,\quad\frac{\lambda_{\Pi\pi}}{m_{0}^{2}}+\frac{\widetilde{\zeta }_{2}}{2m_{0}^{2}\eta}\;. \tag{109}\] The invariant combinations for the coefficients in the equation for \(V^{\mu}\) are given by \[\frac{\ell_{V\Pi}}{m_{0}^{2}}+\frac{\widetilde{\kappa}_{7}}{m_{0 }^{2}\zeta}+\frac{\beta^{2}\widetilde{\kappa}_{5}}{6\zeta}\;,\quad\ell_{V\pi}+ \frac{\widetilde{\kappa}_{6}}{2\eta}\;,\] \[m_{0}^{2}\tau_{V\Pi}-m_{0}^{2}\frac{\widetilde{\kappa}_{4}+3 \widetilde{\kappa}_{7}}{\zeta}\;,\quad\tau_{V\pi}+\frac{\widetilde{\kappa}_ {6}}{2\eta}\;,\] \[\lambda_{VV}+\frac{2\eta}{\kappa}\lambda_{V\pi}-\frac{4 \widetilde{\kappa}_{1}-2\widetilde{\kappa}_{5}+\widetilde{\kappa}_{6}}{4 \kappa}\;, \tag{110}\] while those contained in the equation for \(\pi^{\mu\nu}\) read \[\tau_{\pi\pi}+\frac{\widetilde{\eta}_{1}-\widetilde{\eta}_{3}}{2\eta}\;,\quad \tau_{\pi V}-\frac{\widetilde{\eta}_{7}}{\kappa}\;,\quad\ell_{\pi V}+\frac{ \widetilde{\eta}_{8}}{\kappa}\;,\quad\lambda_{\pi V}+\frac{\widetilde{\eta}_{5} }{\kappa}\;. \tag{111}\] The above relations are in full agreement to the massless limit of the relations in Table 2 of Ref. [9], which are valid for arbitrary mass and statistics. Note that, compared to that table, we do not list any relations for \(\lambda_{V\Pi}\) and \(\lambda_{\pi\Pi}\). Establishing such relations within the present framework requires the next-to leading order contributions in \(m_{0}^{2}\) for the coefficients \(\widetilde{\kappa}_{3}\), \(\widetilde{\kappa}_{5}\), \(\widetilde{\eta}_{1}\) and \(\widetilde{\eta}_{3}\), which were not considered in this work. ## V Exact results for the scalar sector In this section we discuss several analytical results derived from the collision matrix \({\cal A}^{(0)}_{rn}\) for the irreducible scalar moments. We derive exact results for the inverse matrix \(\tau^{(0)}_{rn}\) in Sect. 
V.1, while the first-order bulk viscosity coefficients \(\zeta_{r}\) are computed in Sect. V.2. The relaxation times of bulk viscosity \(\tau_{\Pi;r}\) are computed in Sect. V.3. Finally, the scalar contribution to the deviation \(\delta f_{\bf k}\) from local equilibrium is discussed in Sect. V.4. Note that, since the bulk viscous pressure vanishes in the ultrarelativistic limit, one has to take appropriate care to derive the leading-order terms in an expansion in \(m_{0}\beta\). The respective calculations are detailed in Appendix I.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Method & \(\widetilde{\zeta}_{1}/m_{0}^{4}\) & \(\tau_{\Pi}[\lambda_{\rm mfp}]\) & \(\delta_{\Pi\Pi}[\tau_{\Pi}]\) & \(\ell_{\Pi V}[\tau_{\Pi}]/m_{0}^{2}\) & \(\tau_{\Pi V}[\tau_{\Pi}]/m_{0}^{2}\) & \(\lambda_{\Pi V}[\tau_{\Pi}]/m_{0}^{2}\) & \(\lambda_{\Pi\pi}[\tau_{\Pi}]/m_{0}^{2}\) \\ \hline \hline 14M & \(\beta^{4}/(18\sigma)\) & \(3\) & \(2/3\) & \(\beta/9\) & \(-\beta/9\) & \(-\beta/18\) & \(-7\beta^{2}/180\) \\ \hline IReD & \(11\beta^{4}/(324\sigma)\) & \((11+6\pi^{2})/33\) & \(2/3\) & \(0.067077\beta\) & \(-0.067077\beta\) & \(-0.11638\beta\) & \(-0.051367\beta^{2}\) \\ \hline DNMR & \(11\beta^{4}/(324\sigma)\) & \(3\) & \(2/3\) & \(0.15415\beta\) & \(-0.15415\beta\) & \(-0.084570\beta\) & \(-0.067901\beta^{2}\) \\ \hline cDNMR & \(11\beta^{4}/(324\sigma)\) & \(3\) & \(2/3\) & \(0.12282\beta\) & \(-0.12282\beta\) & \(-0.092398\beta\) & \(-0.062583\beta^{2}\) \\ \hline \end{tabular} \end{table} Table 5: Same as Table 1, but for the second-order transport coefficients in \({\cal J}\) for the bulk viscous pressure \(\Pi\).

### The inverse collision matrix

The inverse collision matrix is given by \[\tau_{00}^{(0)} =\frac{1}{\mathcal{A}_{00}^{(0)}}=\lambda_{\rm mfp}\frac{N_{0}+1}{N_{0}-1}\;,\] \[\tau_{m>2,2<n\leq m}^{(0)} =\lambda_{\rm mfp}\beta^{n-m}\frac{(m+1)!(m-1)(m-2)}{(n+1)!(n-1)(n-2)}\] \[\times\left(\delta_{mn}+\frac{2}{m-2}\right)\;,\] \[\tau_{0,n>0}^{(0)} =-\frac{2\lambda_{\rm mfp}(-\beta)^{n}}{(n-1)(n-2)(n+1)!}\] \[\times\binom{1+N_{0}}{n}\frac{(1+N_{0}-n)[N_{0}(n-2)+n]}{(N_{0}-1)N_{0}(N_{0}+1)}\;,\] \[\tau_{m>0,0}^{(0)} =\tau_{m>2,n>m}^{(0)}=0\;, \tag{112}\] which then allows for the computation of the coefficients of bulk viscosity \(\zeta_{r}\) and the relaxation times \(\tau_{\Pi;r}\).

### Bulk viscosity coefficients

Considering Eqs. (45), we can write the coefficients of bulk viscosity as \[\frac{3}{m_{0}^{4}}\zeta_{r}=\frac{1}{m_{0}^{2}}\sum_{n=0,\neq 1,2}^{N_{0}}\tau_{rn}^{(0)}\alpha_{n}^{(0)}\;, \tag{113}\] where we divided by \(m_{0}^{4}\) to obtain the leading-order contribution in the massless limit. Inserting the results for the inverse matrices \(\tau_{rn}^{(0)}\), we obtain \[\frac{\zeta_{r\geq 3}}{m_{0}^{4}}=\frac{\lambda_{\rm mfp}P_{0}\beta^{4-r}}{108}(r-1)(r+1)!\left(2H_{r}-\frac{1}{1+r}-\frac{8}{3}\right)\;, \tag{114}\] while \(\frac{1}{m_{0}^{4}}\zeta_{1}=\frac{1}{m_{0}^{4}}\zeta_{2}=0\). In the above, \(H_{r}\equiv\sum_{n=1}^{r}n^{-1}\) is the \(r\)-th Harmonic number.
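These closed forms lend themselves to a direct numerical cross-check. The following sketch (illustrative only, not part of the supplementary material; it assumes NumPy and sets \(\beta=\lambda_{\rm mfp}=P_{0}=1\)) inverts the collision matrix of Eq. (88), restricted to the non-conserved moments \(r\in\{0,3,\dots,N_{0}\}\), and compares the result with Eq. (112), as well as with Eq. (114) via Eqs. (113) and (95a):

```python
import numpy as np
from math import comb, factorial

beta = lam = P0 = 1.0
N0 = 12
idx = [0] + list(range(3, N0 + 1))   # non-conserved scalar moments r = 0, 3, ..., N0

def S0(n):
    # Eq. (87) with ell = 0, needed here only for n >= 1
    return sum(comb(m, n) / (m * (m + 1)) for m in range(n, N0 + 1))

def A0(r, n):
    # Collision matrix of Eq. (88)
    if r == 0:
        if n == 0:
            return (N0 - 1) / ((N0 + 1) * lam)
        return 2 * (-beta) ** n / (lam * factorial(n + 1)) * (S0(n) - 0.5 * (n == 1))
    if n > r:
        return 0.0
    return (beta ** (n - r) * factorial(r + 1) / (lam * factorial(n + 1))
            * ((n == r) + (2 / r) * (n == 0) + (n == 1) - 2 / r))

def tau0(m, n):
    # Closed-form inverse, Eq. (112)
    if m == 0 and n == 0:
        return lam * (N0 + 1) / (N0 - 1)
    if m == 0:
        return (-2 * lam * (-beta) ** n / ((n - 1) * (n - 2) * factorial(n + 1))
                * comb(N0 + 1, n) * (N0 + 1 - n) * (N0 * (n - 2) + n)
                / ((N0 - 1) * N0 * (N0 + 1)))
    if n == 0 or n > m:
        return 0.0
    return (lam * beta ** (n - m) * factorial(m + 1) * (m - 1) * (m - 2)
            / (factorial(n + 1) * (n - 1) * (n - 2)) * ((m == n) + 2 / (m - 2)))

A = np.array([[A0(r, n) for n in idx] for r in idx])
tau_num = np.linalg.inv(A)
tau_exact = np.array([[tau0(m, n) for n in idx] for m in idx])
assert np.allclose(tau_num, tau_exact)        # Eq. (112) agrees with the numerical inverse

# Bulk-viscosity coefficients: Eq. (113) with alpha_n^(0)/m_0^2 from Eq. (95a),
# compared with the closed form of Eq. (114) for r >= 3
alpha = np.array([beta ** (4 - n) * P0 / 36 * factorial(n) * (n - 1) * (n - 2) for n in idx])
zeta = tau_num @ alpha / 3                    # zeta_r / m_0^4
H = lambda r: sum(1.0 / k for k in range(1, r + 1))
zeta_exact = [lam * P0 * beta ** (4 - r) / 108 * (r - 1) * factorial(r + 1)
              * (2 * H(r) - 1 / (1 + r) - 8 / 3) for r in idx[1:]]
assert np.allclose(zeta[1:], zeta_exact)      # Eq. (114)
```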
A calculation detailed in Appendix I.2 yields the bulk viscosity as an exact expression in terms of \(N_{0}\), namely \[\frac{1}{m_{0}^{4}}\zeta=P_{0}\beta^{4}\lambda_{\rm mfp}\frac{6+7N_{0}+11N_{0 }^{3}}{324N_{0}(N_{0}^{2}-1)}\;, \tag{115}\] interpolating between the 14-moment approximation corresponding to \(N_{0}=2\) and its convergence value when \(N_{0}\to\infty\): \[\left.\frac{\zeta}{m_{0}^{4}}\right|_{N_{0}=2}=\frac{P_{0}\beta^{4}\lambda_{ \rm mfp}}{18}\;,\;\;\lim_{N_{0}\to\infty}\frac{\zeta}{m_{0}^{4}}=\frac{11P_{0} \beta^{4}\lambda_{\rm mfp}}{324}\;. \tag{116}\] The second equation above gives the exact value of the leading-order contribution to the bulk viscosity. Their ratio, \(\zeta(N_{0}\to\infty)/\zeta(N_{0}=2)=11/18\) shows that including higher-order moments can lead to a decrease of the bulk viscosity of almost \(40\%\). ### The relaxation time of the bulk viscous pressure Depending on which power-counting method is considered, the relaxation times of the bulk viscous pressure take on different values \[{\rm DNMR\,\&\,cDNMR:} \tau_{\Pi}\equiv\tau_{00}^{(0)}=\left[\chi_{0}^{(0)}\right]^{-1}\;, \tag{117}\] \[{\rm IReD:} \tau_{\Pi;r}\equiv\sum_{n=0,\neq 1,2}^{N_{0}}\tau_{rn}^{(0)} \frac{\zeta_{n}}{\zeta_{r}}\;, \tag{118}\] where the eigenvalues \(\chi_{r}^{(0)}\) are defined in Eq. (44). Note that the IReD relaxation time of the bulk viscous pressure is denoted as \(\tau_{\Pi}\equiv\tau_{\Pi;0}\). Due to the fact that \(\mathcal{A}_{r0}^{(0)}=0\) for \(r>0\), the eigenvalues of the matrix \(\mathcal{A}^{(0)}\) are given by its diagonal entries. The eigenvalues are solutions of the equation \[0=\det\left[\tau^{(0)}-\chi\mathbb{I}\right]=\prod_{r=0,\neq 1,2}^{N_{0}} \left[\tau_{rr}^{(0)}-\chi\right]\;, \tag{119}\] where the diagonal entries \(\tau_{rr}^{(0)}\) of the inverse collision matrix are given by \[\tau_{00}^{(0)}=\lambda_{\rm mfp}\frac{N_{0}+1}{N_{0}-1}\;,\quad\tau_{rr}^{(0) }=\lambda_{\rm mfp}\frac{r}{r-2}\;, \tag{120}\] where we considered \(r\geq 3\). In the case when \(N_{0}=2\), there is a single eigenvalue equal to \(\chi_{0}^{(0)}(N_{0}=2)\equiv 3\lambda_{\rm mfp}\). For \(N_{0}>2\), the largest eigenvalue corresponds to \(\tau_{rr}^{(0)}\) with \(r=3\), being equal to \(3\lambda_{\rm mfp}\), while \(\tau_{00}^{(0)}>\lambda_{\rm mfp}\) becomes the lowest eigenvalue. Rearranging the above expressions in decreasing order gives the set of eigenvalues \([\chi_{r}^{(0)}]^{-1}\) as \[\left[\chi_{0}^{(0)}\right]^{-1}=3\lambda_{\rm mfp}\;,\quad\left[\chi_{r\geq 3 }^{(0)}\right]^{-1}=\lambda_{\rm mfp}\frac{r+1}{r-1}\;. \tag{121}\] Thus, the relaxation time of the bulk viscous pressure in the DNMR and cDNMR approaches becomes independent of \(N_{0}\geq 2\) and is given by \[\tau_{\Pi}=3\lambda_{\rm mfp}\;. \tag{122}\] Note that the inverse eigenvalues are bounded \[\lambda_{\rm mfp}<\left[\chi_{r}^{(0)}\right]^{-1}\leq 3\lambda_{\rm mfp}\;, \tag{123}\] while from \(\left[\chi_{0}^{(0)}\right]^{-1}=3\lambda_{\rm mfp}\) and \(\left[\chi_{3}^{(0)}\right]^{-1}=2\lambda_{\rm mfp}\) we see that there is a clear separation of scales. A calculation provided in Appendix I.3 yields the exact result for the relaxation time of the bulk viscous pressure in the IReD approach as a function of \(N_{0}\), \[\tau_{\Pi}=\lambda_{\rm mfp}\Bigg{\{}\frac{11+6\pi^{2}}{33}-\frac {12}{11}\psi^{(1)}(N_{0})+\frac{2}{N_{0}-1}\\ +\frac{2}{6+7N_{0}+11N_{0}^{3}}\Big{[}(N_{0}-2)(3+5N_{0})+\frac{6 \pi^{2}}{11}(1+3N_{0})\\ -\frac{32}{11}(1+3N_{0})\psi^{(1)}(N_{0})\Big{]}\Bigg{\}}\;. 
\tag{124}\] When \(N_{0}\to\infty\), we arrive at \[\tau_{\Pi}=\lambda_{\rm mfp}\left(\frac{1}{3}+\frac{2\pi^{2}}{11}\right)\;. \tag{125}\] The higher-order moments, on the other hand, relax with \[\tau_{\Pi;r}=\frac{\lambda_{\rm mfp}}{2H_{r}-\frac{1}{r+1}-\frac{8}{3}}\Bigg{[}\frac{28r^{2}+33r+11}{9(r^{2}-1)}\\ -\frac{2(r+2)(5r-3)}{3r(r-1)}H_{r}+2H_{r}^{2}+2H_{r,2}\Bigg{]}\;, \tag{126}\] where \(H_{r,m}=\sum_{n=1}^{r}n^{-m}\) is the generalized Harmonic number, with \(H_{r}\equiv H_{r,1}\). At large \(r\), the harmonic numbers \(H_{r}\) and \(H_{r,2}\) are given asymptotically as \[H_{r}=\ln r+\gamma+O(r^{-1}),\quad H_{r,2}=\frac{\pi^{2}}{6}+O(r^{-1}), \tag{127}\] with \(\gamma\simeq 0.577\) being the Euler-Mascheroni constant, such that the relaxation times of the highest-order moments grow without bound for \(r\to\infty\) as \[\tau_{\Pi;r}=\frac{\lambda_{\rm mfp}(18L^{2}-30L+28)}{6(3L-4)}+O(r^{-1})\;, \tag{128}\] where we introduced \(L=\gamma+\ln r\). Thus \(\tau_{\Pi;r}\simeq\lambda_{\rm mfp}\ln r\) for \(r\to\infty\), such that higher-order moments relax more slowly.

### Scalar correction to the equilibrium distribution function

The results obtained in the present section allow us to estimate the scalar correction, \(\ell=0\), defined in Eq. (21), to local equilibrium, \[\delta f_{\bf k}^{(0)}=f_{0{\bf k}}\sum_{n=0}^{N_{0}}\rho_{n}{\cal H}_{n{\bf k}}^{(0)}\,. \tag{129}\] Using the massless limit of \({\cal H}_{n{\bf k}}^{(0)}\) from Eq. (101), the strict 14-moment approximation (14M), corresponding to \(N_{0}=2\), leads to \[\delta f_{\bf k}^{(0)}|_{N_{0}=2}\equiv-\frac{3\Pi}{m_{0}^{2}\beta^{2}P_{0}}(6-\beta E_{\bf k})(2-\beta E_{\bf k})f_{0{\bf k}}. \tag{130}\] On the other hand, using Eq. (51) to express the non-dynamical moments according to the IReD approximation, we have \[\rho_{n}\simeq-\frac{3\Pi}{m_{0}^{2}}\frac{\zeta_{n}}{\zeta_{0}}\,, \tag{131}\] such that \(\delta f_{\bf k}^{(0)}\) now becomes \[\delta f_{\bf k}^{(0)}=-\frac{3\Pi}{m_{0}^{2}}\left[{\cal H}_{\bf k0}^{(0)}+\sum_{n=3}^{N_{0}}\frac{\zeta_{n}}{\zeta_{0}}{\cal H}_{\bf kn}^{(0)}\right]f_{0{\bf k}}\;. \tag{132}\] After some algebra, discussed in Appendix I.4, we find the correction to be \[\delta f_{\bf k}^{(0)}=-\frac{6\Pi}{m_{0}^{2}\beta^{2}P_{0}}\Bigg{[}\frac{18}{11\beta E_{\bf k}}+\frac{6}{11}(\beta E_{\bf k}-3)\ln(\beta E_{\bf k})\\ +\frac{2\beta E_{\bf k}}{11}(3\gamma-4)+\frac{9}{11}(1-2\gamma)\Bigg{]}f_{0{\bf k}}\;. \tag{133}\]

## VI Conclusion

In this work, we have analytically computed the linearized collision matrices for an ultrarelativistic gas of hard spheres, and determined the correlation structure of the moment equations. It was found that the collision matrices feature a nearly lower-triangular structure, coupling the moments of a given order \(r>0\) to all lower-order ones. On the contrary, the irreducible moments of energy-rank zero, the primary dissipative quantities \(\rho_{0}=-3\Pi/m_{0}^{2}\), \(\rho_{0}^{\mu}=V^{\mu}\), and \(\rho_{0}^{\mu\nu}=\pi^{\mu\nu}\), couple to all higher-order ones included in the basis. Expressions for all first- and second-order transport coefficients that appear in different formulations of second-order fluid dynamics, i.e., DNMR, cDNMR, and IReD, have been obtained. The coefficients appearing in the terms of second order in the Knudsen number have been computed for the first time and are nonvanishing in both the DNMR and cDNMR approaches. Even though they vanish in the strict 14-moment approximation, their convergence values are non-negligible.
This is also evidenced by the fact that some of the second-order transport coefficients appearing in the terms of order \(O(\mathrm{Kn}\,\mathrm{Re}^{-1})\) differ between the various power-counting methods in second-order fluid dynamics. Furthermore, we obtained closed-form expressions for the bulk viscosity and the relaxation time for the bulk viscous pressure. Compared to their values in the 14-moment approximation, there is a decrease of 39% and 29%, respectively, showing that the inclusion of higher-order moments leads to sizeable changes. The computation of the collision integrals and transport coefficients presented here could easily be extended to the third-order theory of dissipative fluid dynamics introduced recently in Ref. [22].

###### Acknowledgements.

The authors thank G.S. Denicol, J. Noronha, A. Palermo, P. Huovinen, D.H. Rischke and P. Aasha for fruitful discussions. D.W. acknowledges support by the Studienstiftung des deutschen Volkes (German Academic Scholarship Foundation), and support by the Research Cluster ELEMENTS (Project ID 500/10.006). E.M. acknowledges support by the program Excellence Initiative-Research University of the University of Wroclaw of the Ministry of Education and Science. The authors gratefully acknowledge the support through a grant of the Ministry of Research, Innovation and Digitization, CNCS - UEFISCDI, project number PN-III-P1-1.1-TE-2021-1707, within PNCDI III, as well as support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the CRC-TR 211 "Strong-interaction matter under extreme conditions" - project number 315477589 - TRR 211.

## Appendix A Orthogonal polynomials for the ultrarelativistic ideal gas

In this section we follow Ref. [10] and express the orthogonal polynomials \(P_{\mathbf{k}m}^{(\ell)}\) and \(\mathcal{H}_{\mathbf{k}m}^{(\ell)}\) as well as \(\mathcal{F}_{rn}^{(\ell)}\) in the case of ultrarelativistic particles obeying classical Boltzmann statistics, i.e., \(a=0\). The local equilibrium distribution from Eq. (6) becomes, \[f_{0\mathbf{k}}\equiv e^{\alpha-\beta E_{\mathbf{k}}}=g^{-1}\pi^{2}P_{0}\beta^{4}e^{-\beta E_{\mathbf{k}}}, \tag{10}\] where \(E_{\mathbf{k}}\equiv u_{\mu}k^{\mu}=\sqrt{-\Delta^{\alpha\beta}k_{\alpha}k_{\beta}}\equiv k^{0}\) is the particle energy in the comoving frame and \[P_{0}=ge^{\alpha}T^{4}/\pi^{2}\;, \tag{11}\] is the equilibrium pressure. In the ultrarelativistic limit of a Boltzmann gas the thermodynamic integrals from Eqs. (28) and (29) reduce to \[I_{nq}=J_{nq}\equiv\frac{(n+1)!}{(2q+1)!!}\frac{P_{0}}{2\beta^{n-2}}\;. \tag{12}\] Taking into account the orthogonality relation obeyed by the generalized Laguerre polynomials, \[\int_{0}^{\infty}\mathrm{d}xe^{-x}x^{2\ell+1}L_{m}^{(2\ell+1)}(x)L_{n}^{(2\ell+1)}(x)\\ =\frac{(n+2\ell+1)!}{n!}\delta_{mn}, \tag{13}\] the polynomial \(P_{\mathbf{k}m}^{(\ell)}(E_{\mathbf{k}})\) from Eq. (22) is expressed in terms of the generalized Laguerre polynomials as \[P_{\mathbf{k}m}^{(\ell)}(E_{\mathbf{k}})=\sqrt{\frac{m!(2\ell+1)!}{(m+2\ell+1)!}}L_{m}^{(2\ell+1)}(\beta E_{\mathbf{k}}). \tag{14}\] Now using the explicit representation \[L_{m}^{(2\ell+1)}(x)=\sum_{n=0}^{m}\frac{(m+2\ell+1)!}{(m-n)!(n+2\ell+1)!}\frac{(-1)^{n}x^{n}}{n!}, \tag{15}\] the expansion coefficients of \(P_{\mathbf{k}m}^{(\ell)}=\sum_{n=0}^{m}a_{mn}^{(\ell)}E_{\mathbf{k}}^{n}\) are identified as \[a_{mn}^{(\ell)}=(-1)^{n}\beta^{n}\frac{\sqrt{m!(2\ell+1)!(m+2\ell+1)!}}{n!(m-n)!(n+2\ell+1)!}.
\tag{16}\] Furthermore setting \(P_{\mathbf{k}0}^{(\ell)}\equiv a_{00}^{(\ell)}=1\), the momentum-independent function \(W^{(\ell)}\) from Eq. (27) leads to \[W^{(\ell)}=(-1)^{\ell}\frac{2\beta^{2\ell-2}(2\ell+1)!!}{P_{0}(2\ell+1)!}\;. \tag{17}\] Using these results the \(\mathcal{H}_{\mathbf{k}n}^{(\ell)}\) polynomial introduced in Eq. (22) is expressed as \[\mathcal{H}_{\mathbf{k}n}^{(\ell)} =(-1)^{\ell+n}\frac{2\beta^{2\ell+n-2}(2\ell+1)!!}{P\left(n+2\ell +1\right)!\ell!}\] \[\times\sum_{m=0}^{N_{\ell}-n}\frac{(n+m)!}{n!m!}L_{m+n}^{(2\ell+1 )}(\beta E_{\mathbf{k}}), \tag{18}\] hence \(\mathcal{F}_{rn}^{(\ell)}\) from Eq. (24) evaluates to [10] \[\mathcal{F}_{rn}^{(\ell)}=\frac{(-1)^{n}}{(r+n)}\frac{\beta^{r+n}(2\ell+1-r)!(N_{ \ell}+r)!}{n!(r-1)!(2\ell+1+n)!(N_{\ell}-n)!}. \tag{105}\] ## Appendix B Transport coefficients The coefficients appearing in Eq. (73) are obtained by multiplying Eq. (9) by \(-(m_{0}^{2}/3)\sum_{r=0,\neq 1,2}^{N_{0}}\tau_{0r}^{(0)}\). Then, replacing the irreducible moments using Eqs. (54)-(56) and collecting the corresponding terms, we obtain \[\delta_{\Pi\Pi}= \sum_{r=0,\neq 1,2}^{N_{0}}\tau_{0r}^{(0)}\left[\frac{(r+2)}{3} \mathcal{X}_{r0}^{(0)}+\mathcal{H}\frac{\partial\mathcal{X}_{r0}^{(0)}}{ \partial\alpha}+\mathcal{H}\frac{\partial\mathcal{X}_{r0}^{(0)}}{\partial \beta}\right]\] \[-\frac{m_{0}^{2}}{3}\sum_{r=0,\neq 1,2}^{N_{0}}\tau_{0r}^{(0)} \left[(r-1)\mathcal{X}_{r-2,0}^{(0)}+\frac{G_{2r}}{D_{20}}\right]\;, \tag{106a}\] \[\ell_{\Pi V}= -\frac{m_{0}^{2}}{3}\sum_{r=0,\neq 1,2}^{N_{0}}\tau_{0r}^{(0)} \left[\mathcal{X}_{r-1,0}^{(1)}-\frac{G_{3r}}{D_{20}}\right]\;,\] (106b) \[\tau_{\Pi V}= \frac{m_{0}^{2}}{3}\sum_{r=0,\neq 1,2}^{N_{0}}\tau_{0r}^{(0)} \left[r\mathcal{X}_{r-1,0}^{(1)}+\beta\frac{\partial\mathcal{X}_{r-1,0}^{(1)} }{\partial\beta}-\frac{G_{3r}}{D_{20}}\right]\;,\] (106c) \[\lambda_{\Pi V}= -\frac{m_{0}^{2}}{3}\sum_{r=0,\neq 1,2}^{N_{0}}\tau_{0r}^{(0)} \left[\frac{\partial\mathcal{X}_{r-1,0}^{(1)}}{\partial\alpha}+\frac{1}{h_{0} }\frac{\partial\mathcal{X}_{r-1,0}^{(1)}}{\partial\beta}\right]\;,\] (106d) \[\lambda_{\Pi\pi}= -\frac{m_{0}^{2}}{3}\sum_{r=0,\neq 1,2}^{N_{0}}\tau_{0r}^{(0)} \left[(r-1)\mathcal{X}_{r-2,0}^{(2)}+\frac{G_{2r}}{D_{20}}\right]\;. \tag{106e}\] The proper-time derivative and the gradient of \(\mathcal{X}(\alpha,\beta)\) are expressed through \[\dot{\mathcal{X}} \equiv\left[\mathcal{H}(\alpha,\beta)\frac{\partial\mathcal{X}}{ \partial\alpha}+\mathcal{\bar{H}}(\alpha,\beta)\frac{\partial\mathcal{X}}{ \partial\beta}\right]\theta\;, \tag{107}\] \[\nabla^{\mu}\mathcal{X} \equiv\left(\frac{\partial\mathcal{X}}{\partial\alpha}+\frac{1}{h _{0}}\frac{\partial\mathcal{X}}{\partial\beta}\right)\nabla^{\mu}\alpha-\beta \frac{\partial\mathcal{X}}{\partial\beta}\dot{u}^{\mu}\;. \tag{108}\] Similarly to the scalar equation of motion, the coefficients appearing in Eq. (74) are found by multiplying Eq. (10) by \(\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}\), then using Eqs. 
(54)-(56) and finally collecting the corresponding terms \[\delta_{VV} =\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}\left[\frac{(r+3)}{3} \mathcal{X}_{r0}^{(1)}+\mathcal{H}\frac{\partial\mathcal{X}_{r0}^{(1)}}{ \partial\alpha}+\mathcal{\bar{H}}\frac{\partial\mathcal{X}_{r0}^{(1)}}{ \partial\beta}\right]\] \[-\frac{m_{0}^{2}}{3}\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}(r-1) \mathcal{X}_{r-2,0}^{(1)}\;, \tag{109a}\] \[\ell_{V\Pi}= \sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}\left[\frac{\beta J_{r+2,1}}{e_{0 }+P_{0}}-\mathcal{X}_{r-1,0}^{(0)}+\frac{1}{m_{0}^{2}}\mathcal{X}_{r+1,0}^{(0) }\right]\;,\] (109b) \[\ell_{V\pi}= \sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}\left[\frac{\beta J_{r+2,1}}{e_{0 }+P_{0}}-\mathcal{X}_{r-1,0}^{(2)}\right]\;,\] (109c) \[\tau_{V\Pi}= \sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}\left[\frac{\beta J_{r+2,1}}{e_{0 }+P_{0}}-r\mathcal{X}_{r-1,0}^{(0)}-\beta\frac{\partial\mathcal{X}_{r-1,0}^{(0) }}{\partial\beta}\right]\] \[+\frac{1}{m_{0}^{2}}\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)} \left[(r+3)\mathcal{X}_{r+1,0}^{(0)}+\beta\frac{\partial\mathcal{X}_{r+1,0}^{(0) }}{\partial\beta}\right]\;,\] (109d) \[\tau_{V\pi}= \sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}\left[\frac{\beta J_{r+2,1}}{e_{0 }+P_{0}}-r\mathcal{X}_{r-1,0}^{(2)}-\beta\frac{\partial\mathcal{X}_{r-1,0}^{(2) }}{\partial\beta}\right]\;,\] (109e) \[\lambda_{VV}= \frac{1}{5}\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}\left[(2r+3) \mathcal{X}_{r0}^{(1)}-m_{0}^{2}(2r-2)\mathcal{X}_{r-2,0}^{(1)}\right]\;,\] (109f) \[\lambda_{V\Pi}= \sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}\left[\frac{\partial\mathcal{X}_{r -1,0}^{(0)}}{\partial\alpha}+\frac{1}{h_{0}}\frac{\partial\mathcal{X}_{r-1,0}^{(0 )}}{\partial\beta}\right]\] \[-\frac{1}{m_{0}^{2}}\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)} \left[\frac{\partial\mathcal{X}_{r+1,0}^{(0)}}{\partial\alpha}+\frac{1}{h_{0}} \frac{\partial\mathcal{X}_{r+1,0}^{(0)}}{\partial\beta}\right]\;,\] (109g) \[\lambda_{V\pi}= \sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}\left[\frac{\partial\mathcal{X}_{r-1,0}^{(2)}}{\partial\alpha}+\frac{1}{h_{0}}\frac{\partial\mathcal{X}_{r-1,0}^{(2) }}{\partial\beta}\right]\;. \tag{109h}\] The coefficients of the shear-stress equation (75) follow after multiplying Eq. (11) by \(\tau_{0r}^{(2)}\) and summing from \(r=0\) to \(N_{2}\). 
Then, after some algebra we obtain the following results, \[\delta_{\pi\pi} =\sum_{r=0}^{N_{2}}\!\tau_{0r}^{(2)}\!\left[\frac{(r+4)}{3} \mathcal{X}_{r0}^{(2)}+\mathcal{H}\frac{\partial\mathcal{X}_{r0}^{(2)}}{ \partial\alpha}+\mathcal{\bar{H}}\frac{\partial\mathcal{X}_{r0}^{(2)}}{ \partial\beta}\right]\] \[-\frac{m_{0}^{2}}{3}\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(2)}(r-1) \mathcal{X}_{r-2,0}^{(2)}\;,\] (110a) \[\ell_{\pi V} =\frac{2}{5}\sum_{r=0}^{N_{2}}\!\tau_{0r}^{(2)}\!\left[\mathcal{X}_{r +1,0}^{(1)}-m_{0}^{2}\mathcal{X}_{r-1,0}^{(1)}\right]\;,\] (110b) \[\tau_{\pi V} =\frac{2}{5}\sum_{r=0}^{N_{2}}\!\tau_{0r}^{(2)}\!\left[(r+5) \mathcal{X}_{r+1,0}^{(1)}+\beta\frac{\partial\mathcal{X}_{r+1,0}^{(1)}}{ \partial\beta}\right]\] \[-\frac{2m_{0}^{2}}{5}\sum_{r=0}^{N_{2}}\!\tau_{0r}^{(2)}\!\left[r \mathcal{X}_ \[-\frac{2}{5m_{0}^{2}}\sum_{r=0}^{N_{2}}\tau_{0r}^{(2)}(r+4){\cal X}_{ r+2,0}^{(0)}\;, \tag{116a}\] \[\lambda_{\pi V} =\frac{2}{5}\sum_{r=0}^{N_{2}}\tau_{0r}^{(2)}\left[\frac{\partial{ \cal X}_{r+1,0}^{(1)}}{\partial\alpha}+\frac{1}{h_{0}}\frac{\partial{\cal X}_{ r+1,0}^{(1)}}{\partial\beta}\right]\] \[-\frac{2m_{0}^{2}}{5}\sum_{r=0}^{N_{2}}\tau_{0r}^{(2)}\left[ \frac{\partial{\cal X}_{r-1,0}^{(1)}}{\partial\alpha}+\frac{1}{h_{0}}\frac{ \partial{\cal X}_{r-1,0}^{(1)}}{\partial\beta}\right]\;. \tag{116b}\] Note that using Eqs. (58) in these transport coefficients leads to the results of Ref. [7],1 while using Eqs. (62) correspond to the results of Ref. [9]. Footnote 1: Please note that there is a sign error in Ref. [7] related to the term on the second line in Eq. (116e). The transport coefficients from Eq. (76) are proportional to \({\cal Y}^{(f)}\) and lead to \[\widetilde{\zeta}_{1} =\sum_{r=0,\neq 1,2}^{N_{0}}\tau_{0r}^{(0)}{\cal Y}_{r0}^{(0)}\;, \tag{117a}\] \[\widetilde{\zeta}_{2} =-\widetilde{\zeta}_{1}-\frac{2m_{0}^{2}}{3}\sum_{r=0,\neq 1,2}^{N_ {0}}\tau_{0r}^{(0)}(r-1){\cal Y}_{r-2,0}^{(2)}\;,\] (117b) \[\widetilde{\zeta}_{3} =\sum_{r=0,\neq 1,2}^{N_{0}}\tau_{0r}^{(0)}\Bigg{[}\frac{(r+1)}{3}{ \cal Y}_{r0}^{(0)}+{\cal H}\frac{\partial{\cal Y}_{r0}^{(0)}}{\partial\alpha }+\bar{\cal H}\frac{\partial{\cal Y}_{r0}^{(0)}}{\partial\beta}\Bigg{]}\] \[-\frac{m_{0}^{2}}{3}\sum_{r=0,\neq 1,2}^{N_{0}}\tau_{0r}^{(0)}(r-1){ \cal Y}_{r-2,0}^{(0)}\;,\] (117c) \[\widetilde{\zeta}_{4} =\frac{m_{0}^{2}}{3}\sum_{r=0,\neq 1,2}^{N_{0}}\tau_{0r}^{(0)} \Bigg{[}\frac{\partial{\cal Y}_{r-1,0}^{(1)}}{\partial\alpha}+\frac{1}{h_{0}} \frac{\partial{\cal Y}_{r-1,0}^{(1)}}{\partial\beta}\Bigg{]}\;,\] (117d) \[\widetilde{\zeta}_{5} =\sum_{r=0,\neq 1,2}^{N_{0}}\tau_{0r}^{(0)}{\cal Y}_{r0}^{(0)} \left[2+\frac{\beta{\cal J}_{30}}{(e_{0}+P_{0})}\right]\;,\] (117e) \[\widetilde{\zeta}_{6} =-\sum_{r=0,\neq 1,2}^{N_{0}}\tau_{0r}^{(0)}{\cal Y}_{r0}^{(0)} \frac{{\cal H}D_{20}}{(e_{0}+P_{0})^{2}}\] \[-\frac{m_{0}^{2}}{3}\sum_{r=0,\neq 1,2}^{N_{0}}\tau_{0r}^{(0)} \Bigg{[}r{\cal Y}_{r-1,0}^{(1)}+\beta\frac{\partial{\cal Y}_{r-1,0}^{(1)}}{ \partial\beta}\Bigg{]}\;,\] (117f) \[\widetilde{\zeta}_{7} =\frac{m_{0}^{2}}{3}\sum_{r=0,\neq 1,2}^{N_{0}}\tau_{0r}^{(0)}{ \cal Y}_{r-1,0}^{(1)}\;,\] (117g) \[\widetilde{\zeta}_{8} =\widetilde{\zeta}_{1}\;. \tag{117i}\] The coefficients from Eq. 
(77) lead to \[\widetilde{\kappa}_{1} =-2\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}\Bigg{[}\frac{(r-1)}{5}{ \cal Y}_{r0}^{(1)}+\frac{\partial{\cal Y}_{r-1,0}^{(2)}}{\partial\alpha}+\frac {1}{h_{0}}\frac{\partial{\cal Y}_{r-1,0}^{(2)}}{\partial\beta}\Bigg{]}\] \[+\frac{2m_{0}^{2}}{5}\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}(r-1){ \cal Y}_{r-2,0}^{(1)}\;, \tag{117a}\] \[\widetilde{\kappa}_{2} =2\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}\Bigg{[}r{\cal Y}_{r-1,0}^{ (2)}+\beta\frac{\partial{\cal Y}_{r-1,0}^{(2)}}{\partial\beta}\Bigg{]}\;,\] (117b) \[\widetilde{\kappa}_{3} =-\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}\Bigg{[}\frac{(r+2)}{3}{ \cal Y}_{r0}^{(1)}+{\cal H}\frac{\partial{\cal Y}_{r0}^{(1)}}{\partial\alpha }+\bar{\cal H}\frac{\partial{\cal Y}_{r0}^{(1)}}{\partial\beta}\Bigg{]}\] \[-\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}\mathcal{Y}_{r0}^{(1)} \left[\frac{\partial{\cal Y}_{r-1,0}^{(0)}}{\partial\alpha}+\frac{1}{h_{0}} \frac{\partial{\cal Y}_{r-1,0}^{(0)}}{\partial\beta}\right]\] \[-\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}\Bigg{[}\frac{\partial{\cal Y}_{r -1,0}^{(0)}}{\partial\alpha}+\frac{1}{h_{0}}\frac{\partial{\cal Y}_{r-1,0}^{(0)} }{\partial\beta}\Bigg{]}\] \[+\frac{m_{0}^{2}}{3}\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}(r-1){ \cal Y}_{r-2,0}^{(1)}\;,\] (117c) \[\widetilde{\kappa}_{4} =\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}\Bigg{[}r{\cal Y}_{r-1,0}^{(0)} +{\cal Y}_{r0}^{(1)}\frac{\partial({\cal H})}{\partial\beta}+\beta\frac{ \partial{\cal Y}_{r-1,0}^{(0)}}{\partial\beta}\Bigg{]}\] \[-\frac{1}{m_{0}^{2}}\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)} \Bigg{[}(r+3){\cal Y}_{r+1,0}^{(0)}+\beta\frac{\partial{\cal Y}_{r+1,0}^{(0)} }{\partial\beta}\Bigg{]}\;,\] (117d) \[\widetilde{\kappa}_{5} =2\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}{\cal Y}_{r0}^{(1)}\;,\] (117e) \[\widetilde{\kappa}_{6} =-2\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}{\cal Y}_{r-1,0}^{(2)}\;,\] (117f) \[\widetilde{\kappa}_{7} =-\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}\Big{[}{\cal Y}_{r-1,0}^{(0)} +{\cal H}{\cal Y}_{r,0}^{(1)}\Big{]}\] \[+\frac{1}{m_{0}^{2}}\sum_{r=0,\neq 1}^{N_{1}}\tau_{0r}^{(1)}{\cal Y}_{r+1,0}^{(0)}\;. \tag{117g}\] Finally the coefficients from Eq. 
(78) are \[\widetilde{\eta}_{1} =2\sum_{r=0}^{N_{2}}\tau_{0r}^{(2)}{\cal Y}_{r0}^{(2)}\;,\] (118a) \[\widetilde{\eta}_{2} =-\frac{2}{3}\sum_{r=0}^{N_{2}}\tau_{0r}^{(2)}\Big{[}(r+1){\cal Y}_ {r0}^{(2)}-m_{0}^{2}(r-1){\cal Y}_{r-2,0}^{(2)}\Big{]}\] \[-2\sum_{r=0}^{N_{2}}\tau_{0r}^{(2)}\Bigg{[}{\cal H}\frac{ \partial{\cal Y}_{r0}^{(2)}}{\partial\alpha}+\bar{\cal H}\frac{\partial{\cal Y }_{r0}^{(2)}}{\partial\beta}\Bigg{]}\] \[-\frac{2}{5}\sum_{r=0}^{N_{2}}\tau_{0r}^{(2)}\Big{[}(2r+3){\cal Y }_{r0}^{(0)}-m_{0}^{2}(r-1){\cal Y}_{r+ \[\widetilde{\eta}_{3} =-\frac{2}{7}\sum_{r=0}^{N_{2}}\!\tau_{0r}^{(2)}\Big{[}(4r+3) \mathcal{Y}_{r0}^{(2)}-m_{0}^{2}(4r-4)\mathcal{Y}_{r-2,0}^{(2)}\Big{]}\, \tag{54a}\] \[\widetilde{\eta}_{4} =2\widetilde{\eta}_{1}\,\] (54b) \[\widetilde{\eta}_{5} =\frac{2}{5}\sum_{r=0}^{N_{2}}\!\tau_{0r}^{(2)}\bigg{[}\frac{ \partial\mathcal{Y}_{r+1,0}^{(1)}}{\partial\alpha}+\frac{1}{h_{0}}\frac{ \partial\mathcal{Y}_{r+1,0}^{(1)}}{\partial\beta}\bigg{]}\] \[\quad-\frac{2m_{0}^{2}}{5}\sum_{r=0}^{N_{2}}\!\tau_{0r}^{(2)} \bigg{[}\frac{\partial\mathcal{Y}_{r-1,0}^{(1)}}{\partial\alpha}+\frac{1}{h_ {0}}\frac{\partial\mathcal{Y}_{r-1,0}^{(1)}}{\partial\beta}\bigg{]}\,\] (54c) \[\widetilde{\eta}_{6} =2\sum_{r=0}^{N_{2}}\!\tau_{0r}^{(2)}\mathcal{Y}_{r0}^{(2)} \left[2+\frac{\beta J_{30}}{(e_{0}+P_{0})}\right]\,\] (54f) \[\widetilde{\eta}_{7} =\sum_{r=0}^{N_{2}}\!\tau_{0r}^{(2)}\bigg{[}\mathcal{Y}_{r0}^{( 2)}\frac{2\mathcal{H}D_{20}}{(e_{0}+P_{0})^{2}}-\frac{2(r+5)}{5}\mathcal{Y}_{ r+1,0}^{(1)}\bigg{]}\] \[\quad+\frac{2m_{0}^{2}}{5}\!\sum_{r=0}^{N_{2}}\!\tau_{0r}^{(2)} \bigg{[}r\mathcal{Y}_{r-1,0}^{(1)}+\beta\frac{\partial\mathcal{Y}_{r-1,0}^{(1 )}}{\partial\beta}-\frac{\beta}{m_{0}^{2}}\frac{\partial\mathcal{Y}_{r+1,0}^{ (1)}}{\partial\beta}\bigg{]}\,\] (54g) \[\widetilde{\eta}_{8} =\frac{2}{5}\sum_{r=0}^{N_{2}}\!\tau_{0r}^{(2)}\Big{[}\mathcal{Y} _{r+1,0}^{(1)}-m_{0}^{2}\mathcal{Y}_{r-1,0}^{(1)}\Big{]}\,\] (54h) \[\widetilde{\eta}_{9} =-\widetilde{\eta}_{1}. \tag{54i}\] Note that using Eqs. (59) in these transport coefficients leads to the results listed in Appendix I of Ref. [11]. However, in the formulation used in that reference, the contributions that stem from the coefficients \(\mathcal{Y}_{r0}^{(\ell)}\) with \(r<0\) were not considered, cf. the discussion after Eq. (24) in Ref. [9]. ## Appendix C Reference frames and projection operators In order to calculate the collision matrix, it is beneficial to define the total momentum involved in binary collisions \[P_{T}^{\mu}\equiv k^{\mu}+k^{\prime\mu}=p^{\mu}+p^{\prime\mu}. \tag{55}\] It's squared norm corresponds to the Mandelstam variable \(s\equiv P_{T}^{\mu}P_{T,\mu}\). The projection operator orthogonal to the total momentum, i.e., \(\Delta_{P}^{\mu\nu}P_{T,\nu}=0\), is \[\Delta_{T}^{\mu\nu}\equiv g^{\mu\nu}-\frac{P_{T}^{\mu}P_{T}^{\nu}}{s}. \tag{56}\] Using these definitions the particle momentum can be decomposed with respect to the total momentum and the corresponding projection operator as \[p^{\mu}=P_{T}^{\mu}\frac{(P_{T}^{\nu}p_{\nu})}{s}+\Delta_{T}^{\mu\nu}p_{\nu}. \tag{57}\] Furthermore it is useful to define the center-of-momentum (CM) frame where the total momentum is \(P_{T}^{\mu}\stackrel{{\rm CM}}{{=}}(\sqrt{s},\mathbf{0})\), such that \[P_{T}^{0} \stackrel{{\rm CM}}{{=}}k^{0}+k^{\prime 0}=p^{0}+p^{ \prime 0}=\sqrt{s}\, \tag{58}\] \[\mathbf{P}_{T} \stackrel{{\rm CM}}{{=}}\mathbf{k}+\mathbf{k}^{ \prime}=\mathbf{p}+\mathbf{p}^{\prime}=\mathbf{0}. 
\tag{59}\] However, in the CM-frame the fluid four-flow vector is \(u^{\mu}\stackrel{{\rm CM}}{{=}}(u^{0},\mathbf{u})\), hence it follows that \[P_{T}^{\mu}u_{\mu}\stackrel{{\rm CM}}{{=}}\sqrt{s}u^{0}\, \tag{60}\] while the normalization condition \(u^{\mu}u_{\mu}=1\) yields \[\sqrt{\left(P_{T}^{\mu}u_{\mu}\right)^{2}-s}\stackrel{{\rm CM}}{{= }}\sqrt{s}u\, \tag{61}\] where we denoted \(u\equiv|\mathbf{u}|\). In the local rest (LR) frame, where \(u^{\mu}\stackrel{{\rm LR}}{{=}}(1,\mathbf{0})\), we have the following representation of the invariant scalars \[P_{T}^{\mu}u_{\mu}\stackrel{{\rm LR}}{{=}}k^{0}+k^{\prime 0}=p^{0}+p^{ \prime 0}\, \tag{62}\] and \[\sqrt{\left(P_{T}^{\mu}u_{\mu}\right)^{2}-s}\stackrel{{\rm LR}}{{= }}|\mathbf{k}+\mathbf{k}^{\prime}|=|\mathbf{p}+\mathbf{p}^{\prime}|. \tag{63}\] In the ultrarelativistic limit, \(k^{\mu}k_{\mu}=m_{0}^{2}\to 0\), and hence \(k^{0}=|\mathbf{k}|\equiv k\), while \[s\equiv 2k^{\mu}k_{\mu}^{\prime}\stackrel{{\rm LR}}{{=}}2kk^{\prime }\left(1-\cos\theta_{kk^{\prime}}\right)\, \tag{64}\] where \(\theta_{kk^{\prime}}\) is the angle between the colliding particles with momenta \(\mathbf{k}\) and \(\mathbf{k}^{\prime}\) in the LR-frame. Similarly to the projection operator in Eq. (56), we introduce another four-vector, \(z_{T}^{\mu}\), in the CM-frame that is orthogonal to \(P_{T}^{\mu}\), i.e., \(z_{T}^{\mu}P_{T,\mu}=0\), \[z_{T}^{\mu}\equiv u^{\mu}-P_{T}^{\mu}\frac{(P_{T}^{\nu}u_{\nu})}{s}\stackrel{{ \rm CM}}{{=}}(0,\mathbf{u})\, \tag{65}\] normalized as \(z_{T}^{\mu}z_{T,\mu}\equiv 1-(P_{T}^{\mu}u_{\mu})^{2}/s\stackrel{{\rm CM}}{{= }}-\mathbf{u}^{2}\). Hence, using Eq. (65) we also obtain that \[E_{\mathbf{p}}\equiv u^{\mu}p_{\mu}=z_{T}^{\mu}p_{\mu}+P_{T}^{\mu}u_{\mu}\frac{( P_{T}^{\nu}p_{\nu})}{s}. \tag{66}\] The underlying space-like unit vector, \(l_{T}^{\mu}\stackrel{{\rm CM}}{{=}}(0,\mathbf{u}/u)\) constructed from \(z_{T}^{\mu}\), is also orthogonal to the total momentum, \(P_{T}^{\mu}l_{T,\mu}=0\), and it is defined in a covariant fashion as \[l_{T}^{\mu}\equiv\frac{z_{T}^{\mu}}{u}=\frac{u^{\mu}}{u}-P_{T}^{\mu}\frac{(P_{T} ^{\nu}u_{\nu})}{su}. \tag{67}\] With the help of this new space-like four-vector a new symmetric and traceless projection operator, similarly as in anisotropic fluid dynamics, see for example Ref. [23], can be constructed. Here besides the usual space-like projection operator a new projection onto the two-dimensional subspace that is orthogonal to both \(P_{T}^{\mu}\) and \(l_{T}^{\mu}\) is defined as, \[\Xi_{T}^{\mu\nu}\equiv g^{\mu\nu}-\frac{P_{T}^{\mu}P_{T}^{\nu}}{s}+l_{T}^{\mu}l_ {T}^{\nu}=\Delta_{T}^{\mu\nu}+l_{T}^{\mu}l_{T}^{\nu}\, \tag{68}\] where \(\Xi_{T}^{\mu\nu}P_{T,\nu}=\Xi_{T}^{\mu\nu}l_{T,\nu}=0\), while \(\Xi_{T}^{\mu\nu}g_{\mu\nu}=2\). Using these projectors, the particle momentum can be decomposed with respect to \(P_{T}^{\mu}\), \(l_{T}^{\mu}\) and \(\Xi_{T}^{\mu\nu}\) as \[p^{\mu}=P_{T}^{\mu}\frac{(P_{T}^{\nu}p_{\nu})}{s}-l_{T}^{\mu}(l_{T}^{\nu}p_{\nu })+\Xi_{T}^{\mu\nu}p_{\nu}. \tag{106}\] ## Appendix D The \(P\) and \(P^{\prime}\) integrals In order to evaluate the collision matrix, i.e., Eqs. (41) and (42), we have to compute the following type of momentum integrals, \[\mathcal{P}_{i}^{\mu_{1}\cdots\mu_{n}}\equiv\frac{1}{2}\int\mathrm{d}P\mathrm{ d}P^{\prime}W_{\mathbf{kk}\prime\to\mathbf{pp}\prime}E_{\mathbf{p}}^{i}p^{\mu _{1}}\cdots p^{\mu_{n}}. \tag{107}\] It is beneficial to first introduce the following auxiliary integral from Refs. 
[7; 11], \[\Theta^{\mu_{1}\cdots\mu_{n}} \equiv\frac{1}{2}\int\mathrm{d}P\mathrm{d}P^{\prime}W_{\mathbf{kk} \prime\to\mathbf{pp}\prime}p^{\mu_{1}}\cdots p^{\mu_{n}}\] \[=\sum_{q=0}^{[n/2]}\left(-1\right)^{q}b_{nq}\mathcal{B}_{nq}\] \[\times\Delta_{T}^{(\mu_{1}\mu_{2}}\cdots\Delta_{T}^{\mu_{2q-1} \mu_{2q}}\,P_{T}^{\mu_{2q+1}}\cdots P_{T}^{\mu_{n})}\, \tag{108}\] where we used Eq. (105) repeatedly to replace \(p^{\mu_{1}}\cdots p^{\mu_{n}}\). Here, \(n\), \(q\) are natural numbers while the sum runs up to \([n/2]\) denoting the largest integer which is less than or equal to \(n/2\). The symmetrized tensors \(\Delta_{T}^{(}\cdots P_{T}^{)}\) are counted by \(b_{nq}\equiv\frac{n!}{2^{q}q!(n-2q)!}\), while the \(\mathcal{B}_{nq}\) coefficients are \[\mathcal{B}_{nq} \equiv\frac{(-1)^{q}}{(2q+1)!!}\frac{1}{2}\int\mathrm{d}P\mathrm{ d}P^{\prime}W_{\mathbf{kk}\prime\to\mathbf{pp}\prime}\] \[\times\left(\frac{P_{T}^{\mu}p_{\mu}}{\sqrt{s}}\right)^{n-2q} \left(\Delta_{P}^{\alpha\beta}p_{\alpha}p_{\beta}\right)^{q}. \tag{109}\] In evaluating \(\mathcal{B}_{nq}\), we changed \(p^{\mu}\) and \(p^{\prime\mu}\) to the CM frame defined by \(P_{T}^{\mu}=k^{\mu}+k^{\prime\mu}\), such that \(P_{T}^{\mu}p_{\mu}=s/2\) and \(\Delta_{P}^{\alpha\beta}p_{\alpha}p_{\beta}=-s/4\). The integrals in Eq. (107) are then obtained via \[\mathcal{P}_{i}^{\mu_{1}\cdots\mu_{t}}=u_{\nu_{1}}\cdots u_{\nu_{i}}\Theta^{ \nu_{1}\cdots\nu_{i}\mu_{1}\cdots\mu_{t}}. \tag{110}\] Even though the integral \(\mathcal{P}_{i}^{\mu_{1}\cdots\mu_{n}}\) could in principle be evaluated via Eq. (110), doing so is rather complicated. Instead, it is more sensible to use the decomposition from Eq. (106) to write \[\mathcal{P}_{i}^{\mu_{1}\cdots\mu_{n}} =\frac{1}{2}\int\mathrm{d}P\mathrm{d}P^{\prime}W_{\mathbf{kk} \prime\to\mathbf{pp}\prime}p^{\mu_{1}}\cdots p^{\mu_{n}}\] \[\times\left(P_{T}^{\nu}u_{\nu}\frac{(P_{T}^{\mu}p_{\mu})}{s}+z_{T }^{\mu}p_{\mu}\right)^{i}\, \tag{111}\] from where it is clear that the tensor structure of \(\mathcal{P}_{i}^{\mu_{1}\cdots\mu_{n}}\) can only consist of the tensors \(l_{T}^{\mu}\), \(P_{T}^{\mu}\) and \(\Xi_{T}^{\mu\nu}\). Now, in the CM-frame we express \(z_{T}^{\mu}p_{\mu}=u(l_{T}^{\mu}p_{\mu})=(\sqrt{s}u/2)\cos\theta_{pu}\), where \(\theta_{pu}\) is the angle between \(\mathbf{p}\) and \(\mathbf{u}\). Furthermore, we have \(P_{T}^{\mu}u_{\mu}=E_{\mathbf{p}}+E_{\mathbf{p}^{\prime}}=\sqrt{s}u^{0}\), and using the binomial formula we obtain \[E_{\mathbf{p}}^{i}=\sum_{j=0}^{i}\binom{i}{j}\frac{(u^{0})^{i-j}}{u^{-j}}\left( \frac{P_{T}^{\mu}p_{\mu}}{\sqrt{s}}\right)^{i-j}(l_{T}^{\mu}p_{\mu})^{j}. 
\tag{112}\] Subsequently, we expand the integral \(\mathcal{P}_{i}^{\mu_{1}\cdots\mu_{n}}\) in terms of the tensors \(l_{T}^{\mu}\), \(P_{T}^{\mu}\) and \(\Xi_{T}^{\mu\nu}\), \[\mathcal{P}_{i}^{\mu_{1}\cdots\mu_{n}} \equiv\sum_{q=0}^{[n/2]}\sum_{m=0}^{-2q}\left(-1\right)^{q}b_{ nmq}\mathcal{D}_{nmq}^{(i)}\] \[\times\Xi_{T}^{(\mu_{1}\mu_{2}}\,\cdots\Xi_{T}^{\mu_{2q-1}\mu_{2 q}}\,l_{T}^{\mu_{2q+1}}\cdots l_{T}^{\mu_{2q+m}}P_{T}^{\mu_{2q+m+1}}\cdots P_{T}^{ \mu_{n})}\, \tag{113}\] where \(b_{nmq}\equiv n!/[2^{q}q!m!(n-2q-m)!]\) counts the number of tensor symmetrizations and the coefficients \(\mathcal{D}_{nmq}^{(i)}\) are defined as \[\mathcal{D}_{nmq}^{(i)}\equiv\frac{(-1)^{q+m}}{(2q)!!}\frac{1}{2 }\int\mathrm{d}P\mathrm{d}P^{\prime}W_{\mathbf{kk}\prime\to\mathbf{pp}\prime }\left(\Xi_{T}^{\alpha\beta}p_{\alpha}p_{\beta}\right)^{q}\] \[\times\sum_{j=0}^{i}\binom{i}{j}\frac{(u^{0})^{i-j}}{u^{-j}}\left( \frac{P_{T}^{\mu}p_{\mu}}{\sqrt{s}}\right)^{n+i-m-j-2q}\left(l_{T}^{\mu}p_{ \mu}\right)^{m+j}\, \tag{114}\] where the double factorial for even numbers is \((2q)!!\equiv 2^{q}q!\). To evaluate these coefficients, we note that \(l_{T}^{\mu}p_{\mu}=(E_{\mathbf{p}}-E_{\mathbf{p}^{\prime}})/(2u)\), hence in the CM-frame \(s=2p^{\mu}p_{\mu}^{\prime}=4p^{2}\), and \(l_{T}^{\mu}p_{\mu}=\mathbf{p}\cdot\mathbf{u}/u=p\cos\theta_{pu}\) while \(\Xi_{T}^{\alpha\beta}p_{\alpha}p_{\beta}=-s/4+(l_{T}^{\mu}p_{\mu})^{2}=-p^{2}\sin^{ 2}\theta_{pu}\). In the ultrarelativistic limit for a constant cross-section we then have \[\mathcal{D}_{nmq}^{(i)}=\frac{(-1)^{m}}{4\left(2q)!!}\frac{\sigma _{T}}{2^{n+1+i}}s^{q+1+\frac{m}{2}}\] \[\times\sum_{j=0}^{i}\binom{i}{j}\left[1+(-1)^{m+j}\right]\left( \sqrt{s}u\right)^{i-j}\left(\sqrt{s}u\right)^{j}\] \[\times B\left(\frac{m}{2}+\frac{j}{2}+\frac{1}{2},q+1\right)\, \tag{115}\] where \(B(i,j)\equiv\Gamma(i)\Gamma(j)/\Gamma(i+j)\) denotes the Euler Beta function. Note that \(\mathcal{D}_{nqq}^{(0)}=\mathcal{B}_{nq}\), as expected. We now evaluate Eq. (113) for the cases \(n=0\), \(n=1\), and \(n=2\), corresponding to the scalar, vector, and tensor cases, respectively. In the scalar case \(n=0\), such that \(q=m=0\), and thus \[\mathcal{P}_{i} \equiv\frac{1}{2}\int\mathrm{d}P\mathrm{d}P^{\prime}W_{\mathbf{kk }^{\prime}\to\mathbf{pp}^{\prime}}E_{\mathbf{p}}^{i}=\mathcal{D}_{000}^{(i)}\] \[=\frac{\sigma_{T}}{(i+1)2^{i+2}}\frac{s}{\sqrt{s}u}\] \[\times\left[\left(\sqrt{s}u^{0}+\sqrt{s}u\right)^{i+1}-\left(\sqrt {s}u^{0}-\sqrt{s}u\right)^{i+1}\right]v\.\] ( In particular, for \(i=0\) we obtain the following identity, \[\int\mathrm{d}P\mathrm{d}P^{\prime}\delta\left(k^{\mu}+k^{\prime\mu}-p^{\mu}-p^{ \prime\mu}\right)=\frac{1}{(2\pi)^{5}}\;. \tag{111}\] Note that the latter scalar integrals can be evaluated in other ways as in Refs. [24; 25], \[\mathcal{P}_{i} \equiv\sigma_{T}\frac{(2\pi)^{5}}{(2\pi)^{6}}\frac{1}{2}\int_{- \infty}^{\infty}\frac{\mathrm{d}^{3}\mathbf{p}}{p^{0}}\int_{-\infty}^{\infty} \frac{\mathrm{d}^{3}\mathbf{p}^{\prime}}{p^{\prime 0}}E_{\mathbf{p}}^{i}\] \[\times s\,\delta\left(\sqrt{s}-(p^{0}+p^{\prime 0})\right)\delta \left(\mathbf{p}+\mathbf{p}^{\prime}\right)\] \[=2\sigma_{T}\int_{0}^{\infty}\mathrm{d}pp^{i+2}\delta\left(\sqrt {s}-2p\right)\int_{-1}^{1}\mathrm{d}x\left(u^{0}-ux\right)^{i}\;, \tag{112}\] where \(s=(2p^{0})^{2}=(2p)^{2}\) and \(x=\cos\theta_{pu}\). 
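As a sanity check, the closed form for the scalar case can be compared numerically with the representation of Refs. [24; 25] quoted above; a minimal sketch (illustrative only, assuming NumPy and SciPy, with arbitrary values for \(s\) and \(u\)):

```python
import numpy as np
from scipy.integrate import quad

sigma_T, s, u = 1.0, 5.0, 0.7      # illustrative values; u = |u|, u0 = sqrt(1 + u^2)
u0 = np.sqrt(1.0 + u ** 2)

def P_closed(i):
    # Closed form for the scalar case n = 0 quoted above
    return (sigma_T / ((i + 1) * 2 ** (i + 2)) * s / (np.sqrt(s) * u)
            * ((np.sqrt(s) * (u0 + u)) ** (i + 1) - (np.sqrt(s) * (u0 - u)) ** (i + 1)))

def P_direct(i):
    # Last representation above: the delta function fixes p = sqrt(s)/2 (Jacobian 1/2),
    # leaving a one-dimensional integral over x = cos(theta_pu)
    p = np.sqrt(s) / 2
    angular, _ = quad(lambda x: (u0 - u * x) ** i, -1.0, 1.0)
    return 2 * sigma_T * 0.5 * p ** (i + 2) * angular

assert all(np.isclose(P_closed(i), P_direct(i)) for i in range(6))
```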
In the vector case, \(n=1\) and hence \(q=0\) and \(m=0,1\), \[\mathcal{P}_{i}^{\mu} \equiv\frac{1}{2}\int\mathrm{d}P\mathrm{d}P^{\prime}W_{\mathbf{k} \mathbf{k}^{\prime}\to\mathbf{p}\mathbf{p}}E_{\mathbf{p}}^{i}p^{\mu}\] \[=\mathcal{D}_{100}^{(i)}P_{T}^{\mu}+\mathcal{D}_{110}^{(i)}l_{T}^ {\mu}\;, \tag{113}\] where the corresponding coefficients are \[\mathcal{D}_{100}^{(i)} =\frac{\sigma_{T}}{(i+1)2^{i+3}}\frac{(\sqrt{s})^{i+2}}{u}\] \[\times\left[\left(u^{0}+u\right)^{i+1}-\left(u^{0}-u\right)^{i+1 }\right]\;, \tag{114}\] and \[\mathcal{D}_{110}^{(i)} =\frac{\sigma_{T}}{(i+2)2^{i+3}}\frac{(\sqrt{s})^{i+3}}{u}\] \[\times\left\{\frac{u^{0}}{u(i+1)}\left[\left(u^{0}+u\right)^{i+1} -\left(u^{0}-u\right)^{i+1}\right]\right.\] \[\left.-\left.\left[\left(u^{0}+u\right)^{i+1}+\left(u^{0}-u\right) ^{i+1}\right]\right\}\;. \tag{115}\] For the tensor case, \(n=2\), and we have \(q=0,1\) and \(m=0,1\), leading to the following decomposition, \[\mathcal{P}_{i}^{\mu\nu} \equiv\frac{1}{2}\int\mathrm{d}P\mathrm{d}P^{\prime}W_{\mathbf{k} \mathbf{k}^{\prime}\to\mathbf{p}\mathbf{p}^{\prime}}E_{\mathbf{p}}^{i}p^{ \prime\mu}p^{\nu}\] \[=\mathcal{D}_{200}^{(i)}P_{T}^{\mu}P_{T}^{\nu}+2\mathcal{D}_{210} ^{(i)}P_{T}^{(\mu}l_{T}^{\nu)}+\mathcal{D}_{220}^{(i)}l_{T}^{\mu}l_{T}^{\nu}- \mathcal{D}_{201}^{(i)}\Xi_{T}^{\mu\nu}\;, \tag{116}\] where the coefficients are \[\mathcal{D}_{200}^{(i)}=\frac{1}{2}\mathcal{D}_{100}^{(i)},\quad\mathcal{D}_{ 210}^{(i)}=\frac{1}{2}\mathcal{D}_{110}^{(i)}\;, \tag{117}\] as well as \[\mathcal{D}_{220}^{(i)} =\frac{\sigma_{T}}{(i+3)2^{i+4}}\frac{(\sqrt{s})^{i+4}}{u}\left\{ \left(1+\frac{2(u^{0})^{2}}{u^{2}(i+1)(i+2)}\right)\right.\] \[\left.\times\left[\left(u^{0}+u\right)^{i+1}-\left(u^{0}-u\right) ^{i+1}\right]\right.\] \[\left.-\frac{2u^{0}}{u(i+2)}\left[\left(u^{0}+u\right)^{i+1}+ \left(u^{0}-u\right)^{i+1}\right]\right\}\;, \tag{118}\] and \[\mathcal{D}_{201}^{(i)} =-\frac{\sigma_{T}}{(i+1)(i+3)2^{i+4}}\frac{(\sqrt{s})^{i+4}}{u^{2}}\] \[\times\left\{\frac{u^{0}}{u(i+2)}\left[\left(u^{0}+u\right)^{i+2 }-\left(u^{0}-u\right)^{i+2}\right]\right.\] \[\left.-\left[\left(u^{0}+u\right)^{i+2}+\left(u^{0}-u\right)^{i+2 }\right]\right\}\;. \tag{119}\] ## Appendix E Computation of the loss terms In this section we compute the loss terms \(\mathcal{L}_{rn}^{(\ell)}\) defined in Eq. (41) for \(\ell=0,1,2\). These integrals are Lorentz scalars and thus can be evaluated in any frame. Here we choose the LR-frame of the fluid, where \(E_{\mathbf{k}}\equiv k^{\mu}u_{\mu}\overset{\text{LR}}{=}k^{0}=\sqrt{k^{2}+m_{ 0}^{2}}\). In the following, we will omit the notation "LR" for brevity. In spherical coordinates, \(\mathrm{d}K=\frac{1}{(2\pi)^{5}}\frac{k^{2}}{k^{0}}\sin\theta\mathrm{d}k \mathrm{d}\theta\mathrm{d}\varphi\), where \(k\in[0,\infty)\), \(\theta\in[0,\pi]\), and \(\varphi\in[0,2\pi)\). Furthermore by choosing the orientation of \(\mathbf{k}^{\prime}\) parallel to the \(z\)-axis the angle between the colliding particles \(\theta_{kk^{\prime}}\) is equivalent to the elevation angle \(\theta=\arccos(k^{z}/k)\). 
Substituting now \(k^{0}=k\), \(k^{\prime 0}=k^{\prime}\), and \(s/2=kk^{\prime}\left(1-x\right)\), with \(x\equiv\cos\theta_{kk^{\prime}}\), the loss term for \(\ell=0\) yields \[\mathcal{L}_{rn}^{(0)} \equiv\sigma_{T}\int\mathrm{d}K\mathrm{d}K^{\prime}f_{0\mathbf{k} }f_{0\mathbf{k}^{\prime}}\frac{s}{2}E_{\mathbf{k}}^{r}\left(E_{\mathbf{k}}^{n}+E _{\mathbf{k}^{\prime}}^{n}\right)\] \[=\sigma_{T}\frac{g^{2}e^{2\alpha}}{8\pi^{4}}\int_{0}^{\infty} \mathrm{d}kk^{r+2}\int_{-1}^{1}\mathrm{d}x\left(1-x\right)\] \[\times\int_{0}^{\infty}\mathrm{d}k^{\prime}k^{\prime 2}e^{-\beta(k+k^{ \prime})}\left(k^{n}+k^{\prime n}\right)\] \[=\frac{\sigma_{T}P_{0}^{2}\beta^{2-r-n}}{4}\] \[\times\left[2\Gamma(r+n+3)+\Gamma(r+3)\Gamma(n+3)\right]\,, \tag{120}\] where we used Eqs. (101-102), as well as the definition of the Gamma function to compute the integrals \[\int_{0}^{\infty}\!\mathrm{d}y\int_{0}^{\infty}\!\mathrm{d}y^{\prime}e^{-y-y^{ \prime}}y^{r+a}y^{\prime b}=\Gamma(r+a+1)\Gamma(b+1)\;. \tag{121}\] In computing the result for \(\ell=1\) can be found in the same way, we note that in the LR frame we have \[k^{\langle\mu\rangle}k_{\mu} \equiv k^{\langle\mu\rangle}k_{\langle\mu\rangle}=-k^{2}\;, \tag{122}\] \[k^{\langle\mu\rangle}k^{\prime}_{\mu} \equiv k^{\langle\mu\rangle}k^{\prime}_{\langle\mu\rangle}=-kk^{ \prime}x\;. \tag{123}\] Using these results, Eq. (41) for \(\ell=1\) yields \[\mathcal{L}_{rn}^{(1)} \equiv\frac{\sigma_{T}}{3}\int\mathrm{d}K\mathrm{d}K^{\prime}f_{0 \mathbf{k}}f_{0\mathbf{k}^{\prime}}\frac{s}{2}E_{\mathbf{k}}^{r}k^{\langle\mu \rangle}\left(E_{\mathbf{k}}^{n}k_{\mu}+E_{\mathbf{k}^{\prime}}^{n}k^{\prime}_{ \mu}\right)\] \[=-\frac{\sigma_{T}}{3}\frac{g^{2}e^{2\alpha}}{8\pi^{4}}\int_{0}^{ \infty}\mathrm{d}kk^{r+3}\int_{-1}^{1}\mathrm{d}x(1-x)\] \[\times\int_{0}^{\infty}\mathrm{d}k^{\prime}k^{\prime 2}e^{-\beta(k+k^{ \prime})}\left(k^{n+1}+k^{\prime n+1}x\right)\] \[=-\frac{\sigma_{T}P_{0}^{2}\beta^{-r-n}}{36}\] \[\times\left[6\Gamma(r+n+5)-\Gamma(r+4)\Gamma(n+4)\right]\,. \tag{100}\] In the case when \(\ell=2\), we make use of the following identities \[k^{(\mu}k^{\nu)}k_{\mu}k_{\nu}\equiv k^{(\mu}k^{\nu)}k_{(\mu}k_{ \nu)} =\frac{2}{3}k^{4}\;, \tag{101}\] \[k^{(\mu}k^{\nu)}k_{\mu}^{\prime}k_{\nu}^{\prime}\equiv k^{(\mu}k^{ \nu)}k_{(\mu}^{\prime}k_{\nu)}^{\prime} =k^{2}k^{\prime 2}\left(x^{2}-\frac{1}{3}\right)\;, \tag{102}\] where \(k^{(\mu}k^{\nu)}\equiv\Delta_{\alpha\beta}^{\mu\nu}k^{\alpha}k^{\beta}=k^{(\mu )}k^{(\nu)}-k^{(\alpha)}k_{(\alpha)}\Delta^{\mu\nu}/3\) and \(\Delta_{\alpha\beta}^{\mu\nu}=\frac{1}{2}\left(\Delta_{\alpha}^{\mu}\Delta_{ \beta}^{\nu}+\Delta_{\beta}^{\mu}\Delta_{\alpha}^{\nu}\right)-\frac{1}{3} \Delta^{\mu\nu}\Delta_{\alpha\beta}\). The corresponding loss term now reads, \[\mathcal{L}_{rn}^{(2)}\equiv\frac{\sigma_{T}}{5}\int\mathrm{d}K\mathrm{d}K^{ \prime}f_{0\mathbf{k}}f_{0\mathbf{k}^{\prime}}\frac{s}{2}E_{\mathbf{k}}^{r}k^ {(\mu}k^{\nu)}\] \[\times\left(E_{\mathbf{k}}^{n}k_{(\mu}k_{\nu)}+E_{\mathbf{k}^{\prime}}^{n}k_{ (\mu}^{\prime}k_{\nu)}^{\prime}\right)\] \[=\frac{\sigma_{T}}{5}\frac{g^{2}e^{2\alpha}}{24\pi^{4}}\int_{0}^{ \infty}\mathrm{d}kk^{r+4}\int_{-1}^{1}\mathrm{d}x(1-x)\] \[\times\int_{0}^{\infty}\mathrm{d}k^{\prime}k^{\prime 2}e^{-\beta(k+k ^{\prime})}\left[2k^{n+2}+k^{\prime n+2}(3x^{2}-1)\right]\] \[=\frac{\sigma_{T}P_{0}^{2}\beta^{-2-r-n}}{15}\Gamma(r+n+7)\;. 
\tag{103}\] These results for \(\ell=0,1,2\) can be put in a unitary form using the following expression \[\mathcal{L}_{rn}^{(\ell)} =\frac{\sigma_{T}P_{0}^{2}\beta^{2-2\ell-r-n}}{2}\left[\frac{(-1) ^{\ell}\ell!}{(2\ell+1)!!}\Gamma(r+n+2\ell+3)\right.\] \[\left.+\mathcal{B}^{(\ell)}\Gamma(r+\ell+3)\Gamma(n+\ell+3) \right]\;. \tag{104}\] where we introduced the coefficient \(\mathcal{B}^{(\ell)}=\{1/2,1/18,0\}\) for \(\ell=\{0,1,2\}\), respectively. ## Appendix F Computation of the gain terms In this section we compute the gain terms \(\mathcal{G}_{rn}^{(\ell)}\) defined in Eq. (42) for \(\ell=0\), \(\ell=1\) and \(\ell=2\). ### Gain terms for \(\ell=0\) Considering Eq. (42) in the case when \(\ell=0\) we obtain \[\mathcal{G}_{rn}^{(0)} \equiv 2\sigma_{T}(2\pi)^{5}\!\int\mathrm{d}P\mathrm{d}P^{\prime} \mathrm{d}K\mathrm{d}K^{\prime}f_{0\mathbf{k}}f_{0\mathbf{k}^{\prime}}E_{ \mathbf{k}}^{r}E_{\mathbf{p}}^{n}\frac{s}{2}\delta\left(k^{\mu}+k^{\prime\mu} -p^{\mu}-p^{\prime\mu}\right) \tag{105}\] \[=2\!\int\mathrm{d}K\mathrm{d}K^{\prime}f_{0\mathbf{k}}f_{0\mathbf{ k}^{\prime}}E_{\mathbf{k}}^{r}\,\mathcal{P}_{n}\;,\] where the \(P\) and \(P^{\prime}\) integrals in the center-of-momentum frame are given in Eq. (102), and hence \[\mathcal{G}_{rn}^{(0)}=\frac{\sigma_{T}}{(n+1)2^{n+1}}\int\mathrm{d}K\mathrm{d }K^{\prime}f_{0\mathbf{k}}f_{0\mathbf{k}^{\prime}}E_{\mathbf{k}}^{r}\frac{( \sqrt{s})^{n+3}}{\sqrt{s}u}\left[\left(u^{0}+u\right)^{n+1}-\left(u^{0}-u \right)^{n+1}\right]\;. \tag{106}\] The next step consists in evaluating the remaining \(K\) and \(K^{\prime}\) integrals in the LR-frame of the fluid. Here we recall Eqs. (105), (103) and (104) and note that in the LR-frame \(E_{\mathbf{k}}\equiv k^{0}=k\) and \(s=2kk^{\prime}(1-x)\) are the massless limits. Using these relations we get \[\mathcal{G}_{rn}^{(0)} =\frac{\sigma_{T}P_{0}^{2}\beta^{8}}{(n+1)2^{n+4}}\int_{0}^{\infty }\mathrm{d}k\int_{0}^{\infty}\mathrm{d}k^{\prime}e^{-\beta(k+k^{\prime})}\] \[\times\int_{-1}^{1}\mathrm{d}x\frac{2k^{r+2}k^{\prime 2}(1-x)}{| \mathbf{k}+\mathbf{k}^{\prime}|}\left[\left(k+k^{\prime}+|\mathbf{k}+ \mathbf{k}^{\prime}|\right)^{n+1}-\left(k+k^{\prime}-|\mathbf{k}+\mathbf{k}^{ \prime}|\right)^{n+1}\right]\;, \tag{107}\] where \(|\mathbf{k}+\mathbf{k}^{\prime}|=\sqrt{k^{2}+k^{\prime 2}+2kk^{\prime}x}\). Next we change the integration variables to \[y=\beta k,\qquad y^{\prime}=\beta k^{\prime}, \tag{108}\] and we introduce a new angular integration variable, \[z\equiv\frac{|\mathbf{k}+\mathbf{k}^{\prime}|}{k+k^{\prime}}=\frac{\sqrt{y^{2}+ y^{\prime 2}+2yy^{\prime}x}}{y+y^{\prime}}=\frac{u}{u^{0}}\;. \tag{109}\] Noting the following useful relation, \(s\beta^{2}\equiv 2yy^{\prime}(1-x)=(y+y^{\prime})^{2}(1-z^{2})\), it follows that \[u^{0}=\frac{1}{\sqrt{1-z^{2}}}\;,\qquad u=\frac{z}{\sqrt{1-z^{2}}}\;, \tag{101}\] and hence \[\mathrm{d}x\equiv z\frac{(y+y^{\prime})^{2}}{yy^{\prime}}\mathrm{d}z=2z\frac{(1 -x)}{(1-z^{2})}\mathrm{d}z. 
\tag{102}\] With these substitutions, the integral becomes \[\mathcal{G}^{(0)}_{rn} =\frac{\sigma_{T}P_{0}^{2}\beta^{2-r-n}}{(n+1)2^{n+4}}\int_{0}^{ \infty}\mathrm{d}y\int_{0}^{\infty}\mathrm{d}y^{\prime}e^{-y-y^{\prime}}y^{r} (y+y^{\prime})^{n+4}\int_{\frac{y-y^{\prime}}{y+y^{\prime}}}^{1}\mathrm{d}z \left(1-z^{2}\right)\left[(1+z)^{n+1}-(1-z)^{n+1}\right]\] \[=\frac{\sigma_{T}P_{0}^{2}\beta^{2-r-n}}{(n+1)}\int_{0}^{\infty} \mathrm{d}y\int_{0}^{\infty}\mathrm{d}y^{\prime}e^{-y-y^{\prime}}y^{r}\left[ \frac{(y+y^{\prime})^{n+4}}{(n+3)(n+4)}-(y+y^{\prime})\frac{y^{n+3}+y^{\prime n +3}}{(n+3)}+\frac{y^{n+4}+y^{\prime n+4}}{(n+4)}\right]\;. \tag{103}\] The first term of the integral is computed by changing the integration variable \(y^{\prime}\) to \(x=y+y^{\prime}\), such that the range for \(y\) becomes \([0,x]\): \[\int_{0}^{\infty}\mathrm{d}y\int_{0}^{\infty}\mathrm{d}y^{\prime}e^{-y-y^{ \prime}}(y+y^{\prime})^{n+4}y^{r}=\int_{0}^{\infty}\mathrm{d}xe^{-x}x^{n+4} \int_{0}^{x}\mathrm{d}yy^{r}=\frac{\Gamma(n+r+6)}{r+1}\;. \tag{104}\] The remaining terms under the integrals are computed straightforwardly with respect to \(y\) and \(y^{\prime}\) in terms of the Gamma function, and the final result is \[\mathcal{G}^{(0)}_{rn}=\frac{\sigma_{T}P_{0}^{2}\beta^{2-r-n}}{(1+n)(1+r)} \left[\Gamma(4+n+r)-\Gamma(3+r)\Gamma(3+n)\right]\;. \tag{105}\] Note that this expression has a finite limit when \(r\to-1\), \[\mathcal{G}^{(0)}_{-1,n}=\frac{\sigma_{T}P_{0}^{2}\beta^{3-n}}{(n+1)}\Gamma(n+ 3)\left[\psi(3+n)-\psi(2)\right]\;. \tag{106}\] where \(\psi^{(0)}(z)\equiv\psi(z)=\mathrm{d}\Gamma(z)/\mathrm{d}z\) denotes the Polygamma function. ### Gain terms for \(\boldsymbol{\ell=1}\) The gain term, Eq. (42), for \(\ell=1\) reads \[\mathcal{G}^{(1)}_{rn} \equiv-\frac{2\sigma_{T}}{3}(2\pi)^{5}\!\int\mathrm{d}P\mathrm{d} P^{\prime}\mathrm{d}K\mathrm{d}K^{\prime}f_{0\mathbf{k}}f_{0\mathbf{k}^{ \prime}}E_{\mathbf{k}}^{r}E_{\mathbf{p}}^{n}p^{\mu}k_{(\mu)}\frac{s}{2}\delta \left(k^{\mu}+k^{\prime\mu}-p^{\mu}-p^{\prime\mu}\right)\] \[=-\frac{2}{3}\!\int\mathrm{d}K\mathrm{d}K^{\prime}f_{0\mathbf{k} }f_{0\mathbf{k}^{\prime}}E_{\mathbf{k}}^{r}\mathcal{P}_{n}^{\mu}k_{(\mu)}\;. \tag{107}\] Recalling the result for the \(P\) and \(P^{\prime}\) integrals from Eq. (104) together with Eq. (104) we obtain \[\mathcal{G}^{(1)}_{rn} \equiv-\frac{2}{3}\!\int\mathrm{d}K\mathrm{d}K^{\prime}f_{0 \mathbf{k}}f_{0\mathbf{k}^{\prime}}E_{\mathbf{k}}^{r}P_{T}^{\mu}k_{(\mu)} \left[\mathcal{D}_{1+n,0,0}-\mathcal{D}_{1+n,1,0}\frac{(P_{T}^{\nu}u_{\nu})}{ su}\right]\] \[=\frac{\sigma_{T}}{3(n+1)2^{n+2}}\!\int\mathrm{d}K\mathrm{d}K^{ \prime}f_{0\mathbf{k}}f_{0\mathbf{k}^{\prime}}E_{\mathbf{k}}^{r}P_{T}^{\mu}k_{ (\mu)}\frac{1}{(\sqrt{s}u)^{2}}(\sqrt{s})^{n+4}\] \[\times\left\{\frac{u^{0}}{(n+2)u}\left[\left(u^{0}+u\right)^{n+2} -\left(u^{0}-u\right)^{n+2}\right]-\left[\left(u^{0}+u\right)^{n+2}+\left(u^{0 }-u\right)^{n+2}\right]\right\}\;. \tag{108}\] In order to perform the \(KK^{\prime}\)-integration we apply Eqs. (102), (104) and (105), and express \(k^{(\mu)}P_{T\mu}=-k(k+k^{\prime}x)\) by using Eqs. (103) and (104), where \(x=\cos\theta_{kk^{\prime}}\) denotes the cosine of the angle between \(\mathbf{k}\) and \(\mathbf{k}^{\prime}\). 
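The elementary integrals entering the loss and gain terms above lend themselves to quick symbolic spot-checks. A sympy sketch (in the limit check the overall factor \(\sigma_{T}P_{0}^{2}\beta^{2-r-n}\) is stripped off, since it is regular at \(r=-1\)):

```python
# Spot-checks (sympy sketch) of the elementary integrals used above:
# (i)   Int_0^inf dy Int_0^inf dy' e^{-y-y'} y^{r+a} y'^b = Gamma(r+a+1) Gamma(b+1)
# (ii)  Int_0^inf dx e^{-x} x^{n+4} Int_0^x dy y^r        = Gamma(n+r+6)/(r+1)
# (iii) the r -> -1 limit of the scalar gain term reproduces the digamma expression.
import sympy as sp

y, yp, xv = sp.symbols('y yprime x', positive=True)
r = sp.symbols('r')

# (i) double exponential integral, for a few integer exponents
for rr, a, b in [(0, 0, 0), (1, 2, 3), (2, 1, 4)]:
    lhs = sp.integrate(sp.exp(-y - yp) * y**(rr + a) * yp**b, (y, 0, sp.oo), (yp, 0, sp.oo))
    assert sp.simplify(lhs - sp.gamma(rr + a + 1) * sp.gamma(b + 1)) == 0

# (ii) nested integral identity
for nn in range(0, 3):
    for rr in range(0, 3):
        inner = sp.integrate(y**rr, (y, 0, xv))
        lhs = sp.integrate(sp.exp(-xv) * xv**(nn + 4) * inner, (xv, 0, sp.oo))
        assert sp.simplify(lhs - sp.gamma(nn + rr + 6) / (rr + 1)) == 0

# (iii) finite r -> -1 limit of the scalar gain term (prefactor sigma_T P_0^2 beta^{2-r-n} dropped)
for nn in range(0, 4):
    reduced = (sp.gamma(4 + nn + r) - sp.gamma(3 + r) * sp.gamma(3 + nn)) / ((1 + nn) * (1 + r))
    lim = sp.limit(reduced, r, -1)
    closed = sp.gamma(nn + 3) * (sp.polygamma(0, nn + 3) - sp.polygamma(0, 2)) / (nn + 1)
    assert sp.simplify(lim - closed) == 0

print("integral identities verified")
```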
Thus after these replacements we get \[\mathcal{G}_{rn}^{(1)} =\frac{\sigma_{T}P_{0}^{2}\beta^{8}}{3(n+1)2^{n+5}}\int_{0}^{\infty} \mathrm{d}k\int_{0}^{\infty}\mathrm{d}k^{\prime}e^{-\beta(k+k^{\prime})}\int_{- 1}^{1}\mathrm{d}x\frac{2k^{r+2}k^{\prime 2}(k^{2}+kk^{\prime}x)(1-x)}{|\mathbf{k}+ \mathbf{k}^{\prime}|^{2}}\] \[\times\left\{\frac{(k+k^{\prime})}{|\mathbf{k}+\mathbf{k}^{ \prime}|(n+2)}\left[\left(k+k^{\prime}+|\mathbf{k}+\mathbf{k}^{\prime}|\right)^ {n+2}-\left(k+k^{\prime}-|\mathbf{k}+\mathbf{k}^{\prime}|\right)^{n+2}\right]\right.\] \[\left.-\left[\left(k+k^{\prime}+|\mathbf{k}+\mathbf{k}^{\prime}| \right)^{n+2}+\left(k+k^{\prime}-|\mathbf{k}+\mathbf{k}^{\prime}|\right)^{n+2 }\right]\right\}\;, \tag{114}\] while changing the variables to \(y\), \(y^{\prime}\) and \(z\) as before, and expressing \(y^{2}+yy^{\prime}x=(y+y^{\prime})[y-y^{\prime}+z^{2}(y+y^{\prime})]/2\) we obtain \[\mathcal{G}_{rn}^{(1)} =\frac{\sigma_{T}P_{0}^{2}\beta^{-r-n}}{3(n+1)2^{n+6}}\int_{0}^{ \infty}\mathrm{d}y\!\int_{0}^{\infty}\mathrm{d}y^{\prime}e^{-y-y^{\prime}}y^{ \prime}(y+y^{\prime})^{n+5}\int_{\frac{|y-y^{\prime}|}{y+y^{\prime}}}^{1} \mathrm{d}z\left(1-z^{2}\right)\left[(y-y^{\prime})+(y+y^{\prime})z^{2}\right]\] \[\times\frac{1}{z}\left\{\frac{1}{z(n+2)}\left[(1+z)^{n+2}-(1-z)^{ n+2}\right]-\left[(1+z)^{n+2}+(1-z)^{n+2}\right]\right\}\;. \tag{115}\] Employing similar steps as in the \(\ell=0\) case, we arrive at the following result \[\mathcal{G}_{rn}^{(1)}=\frac{\sigma_{T}P_{0}^{2}\beta^{-r-n}}{3(1+n)(2+n)(1+r) (2+r)}\big{[}\Gamma(6+n+r)(r+n+rn-3)+\Gamma(4+r)\Gamma(4+n)(3r+3n+rn+11)\big{]}\;. \tag{116}\] As in the \(\ell=0\) case, this expression has a finite limit for \(r\to-1\), \[\mathcal{G}_{-1,n}^{(1)}=\frac{\sigma_{T}P_{0}^{2}\beta^{1-n}}{3(1+n)(2+n)} \Big{\{}(n+5)!-2(n+3)!-4(n+4)!\big{[}\psi(n+5)-\psi(2)\big{]}\Big{\}}\;. \tag{117}\] ### Gain terms for \(\boldsymbol{\ell=2}\) The gain term related to the tensor moments \(\ell=2\) from Eq. (42) reads \[\mathcal{G}_{rn}^{(2)} \equiv\frac{2\sigma_{T}}{5}(2\pi)^{5}\!\int\mathrm{d}P\mathrm{d}P ^{\prime}\mathrm{d}K\mathrm{d}K^{\prime}f_{0\mathbf{k}}f_{0\mathbf{k}^{\prime}} E_{\mathbf{k}}^{r}E_{\mathbf{p}}^{n}p^{\mu}p^{\nu}k_{\langle\mu}\,k_{\nu \rangle}\frac{s}{2}\delta\left(k^{\mu}+k^{\prime\mu}-p^{\mu}-p^{\prime\mu}\right)\] \[=\frac{2}{5}\!\int\mathrm{d}K\mathrm{d}K^{\prime}f_{0\mathbf{k}}f_ {0\mathbf{k}^{\prime}}E_{\mathbf{k}}^{r}\,\mathcal{P}_{n}^{\mu\nu}k_{\langle\mu }\,k_{\nu\rangle}\;. \tag{118}\] Using the result for the \(P\) and \(P^{\prime}\) integrals from Eq. (106) we obtain \[\mathcal{G}_{rn}^{(2)} \equiv\frac{2}{5}\!\int\mathrm{d}K\mathrm{d}K^{\prime}f_{0 \mathbf{k}^{\prime}}f_{0\mathbf{k}^{\prime}}E_{\mathbf{k}}^{r}P_{T}^{\mu}P_{T}^ {\nu}k_{\langle\mu}\,k_{\nu\rangle}\] \[\times\left[\mathcal{D}_{2+n,0,0}-2\mathcal{D}_{2+n,1,1}\frac{(P_ {T}^{\nu}u_{\nu})}{su}+\mathcal{D}_{2+n,2,0}\frac{(P_{T}^{\nu}u_{\nu})^{2}}{s^ {2}u^{2}}+\mathcal{D}_{2+n,0,1}\left(\frac{1}{s}-\frac{(P_{T}^{\nu}u_{\nu})^{2} }{s^{2}u^{2}}\right)\right]\] \[=\frac{\sigma_{T}}{5(n+1)(n+3)2^{n+3}}\int\mathrm{d}K\mathrm{d}K^ {\prime}f_{0\mathbf{k}}f_{0\mathbf{k}^{\prime}}E_{\mathbf{k}}^{r}P_{T}^{\mu}P_{ T}^{\nu}k_{\langle\mu}\,k_{\nu\rangle}\frac{1}{(\sqrt{s}u)^{3}}(\sqrt{s})^{n+5}\] \[\times\left\{\left(n+4+\frac{3(u^{0})^{2}}{(n+2)u^{2}}\right) \left[\left(u^{0}+u\right)^{n+3}-\left(u^{0}-u\right)^{n+3}\right]-\frac{3(n+3 )u^{0}}{(n+2)u}\left[\left(u^{0}+u\right)^{n+3}+\left(u^{0}-u\right)^{n+3} \right]\right\}\;. \tag{119}\] Now, recalling Eqs. 
(106), (107) together with \(k^{\langle\mu}k^{\nu\rangle}k_{\mu}k_{\nu}^{\prime}=2k^{3}k^{\prime}x/3\), we replace \[P_{T}^{\mu}P_{T}^{\nu}k_{\langle\mu}k_{\nu\rangle}=\frac{k^{2}}{3}\left[2k^{2}+4 kk^{\prime}x+k^{\prime 2}\left(3x^{2}-1\right)\right]\;, \tag{120}\] hence we can write the gain term as \[\mathcal{G}_{rn}^{(2)} =\frac{\sigma_{T}P_{0}^{2}\beta^{-2-r-n}}{15(n+1)(n+3)2^{n+8}}\int_ {0}^{\infty}\mathrm{d}y\!\int_{0}^{\infty}\mathrm{d}y^{\prime}e^{-y-y^{\prime}}y^{ \prime}(y+y^{\prime})^{n+6}\] \[\times\int_{\frac{|y-y^{\prime}|}{y+y^{\prime}}}^{1}\mathrm{d}z \left(1-z^{2}\right)\left[3\left(y-y^{\prime}\right)^{2}+2\left(y^{2}-3y^{ \prime 2}\right)z^{2}+3\left(y+y^{\prime}\right)^{2}z^{4}\right]\] \[\times\frac{1}{z^{2}}\left\{\left(n+4+\frac{3}{(n+2)z^{2}} \right)\left[\left(1+z\right)^{n+3}-\left(1-z\right)^{n+3}\right]-\frac{3(n+3) }{(n+2)z}\left[\left(1+z\right)^{n+3}+\left(1-z\right)^{n+3}\right]\right\}\;. \tag{121}\] Solving the \(y,y^{\prime}\) integrals in a similar fashion as in the previous cases, we arrive at the following result \[\mathcal{G}_{rn}^{(2)} =\frac{2P_{0}^{2}\sigma_{T}\beta^{-2-r-n}}{15(1+n)(2+n)(3+n)(1+r)(2+ r)(3+r)}\] \[\times\Big{\{}\Gamma(8+r+n)\left[64-6(r+n)+2(r^{2}+n^{2})-3rn+3(n ^{2}r+r^{2}n)+r^{2}n^{2}\right]\] \[-\Gamma(6+r)\Gamma(6+n)\left[22+4(r+n)+rn\right]\Big{\}}\;. \tag{101}\] Similar to the previous computations, this expression can be evaluated for \(r\to-1\), yielding \[\mathcal{G}_{-1,n}^{(2)}=-\frac{\sigma_{T}P_{0}^{2}\beta^{-1-n}}{15(n+1)(n+2)( n+3)}\Big{\{}(n+8)!-24(n+7)!+120(n+5)!+72(n+6)!\big{[}\psi(n+6)-\psi(2) \big{]}\Big{\}}\;. \tag{102}\] ## Appendix G Computation of the loss matrices Having computed the terms \(\mathcal{L}_{rn}^{(\ell)}\) in Appendix E, we now have to perform the sums to obtain the loss part of the collision matrix \(\mathcal{A}_{rn}^{(\ell),1}\) defined in Eq. (39). We start by substituting the explicit expression Eq. (100) into Eq. (39) together with the expressions from Eqs. (101) and (102). After some straightforward algebra we obtain \[\mathcal{A}_{rn}^{(\ell),1} =\sigma_{T}P_{0}\beta^{1-r+n}\frac{(-1)^{\ell+n}(2\ell+1)!!}{\ell!(n+2\ell+1)!}\] \[\times\sum_{m=n}^{N_{\ell}}\frac{(m+2\ell+1)!}{n!(m-n)!}\sum_{q=0 }^{m}(-1)^{q}\binom{m}{q}\left[\frac{(-1)^{\ell}\ell!}{(2\ell+1)!!}\frac{(r+q+ 2\ell+1)!}{(q+2\ell+1)!}+\mathcal{B}^{(\ell)}\frac{(r+\ell+1)!(q+\ell+2)!}{(q+ 2\ell+1)!}\right]\;. \tag{103}\] The sum over \(q\) can be evaluated with the help of the identity \[\sum_{q=0}^{m}(-1)^{q}\binom{m}{q}\frac{(q+a+b)!}{(q+a)!} =\frac{(a+b)!}{a!}{}_{2}F_{1}(a+b+1,-m;a+1;1)\] \[=(-1)^{m}\frac{b!(a+b)!}{(b-m)!(m+a)!}\;, \tag{104}\] where \({}_{2}F_{1}\) denotes the Gauss hypergeometric function, satisfying [26] \[{}_{2}F_{1}(a,b;c;1)=\frac{\Gamma(c-a-b)\Gamma(c)}{\Gamma(c-a)\Gamma(c-b)}\;, \quad\mathrm{Re}(c-a-b)>0. \tag{105}\] Using Eq. (104), we arrive at \[\mathcal{A}_{rn}^{(\ell),1} =\sigma_{T}P_{0}\beta^{1-r+n}\frac{(-1)^{\ell+n}(2\ell+1)!!}{\ell!(n+2\ell+1)!}(r+2\ell+1)!\] \[\times\sum_{m=n}^{N_{\ell}}(-1)^{m}\binom{m}{n}\left[\frac{(-1)^{ \ell}\ell!}{(2\ell+1)!!}\binom{r}{m}+\mathcal{B}^{(\ell)}\frac{(1-\ell)!(\ell +2)!(r+\ell+1)!}{m!(1-\ell-m)!(r+2\ell+1)!}\right]\;. \tag{106}\] The binomial coefficient \(\binom{r}{m}\) vanishes when \(m>r\), such that the first term in the sum over \(m\) gives a nonvanishing contribution only when \(n\leq r\). In this case, the sum over \(m\) runs between \(n\) and \(r\), yielding a Kronecker delta: \[\sum_{m=n}^{r}(-1)^{m}\binom{m}{n}\binom{r}{m}=(-1)^{n}\delta_{rn}\;. 
\tag{107}\] The term involving \(\mathcal{B}^{(\ell)}\) vanishes for \(\ell=2\), since by definition \(\mathcal{B}^{(2)}=0\). For \(\ell=0\) and \(\ell=1\), the sum over \(m\) terminates at \(m=1-\ell\). Performing this sum separately for \(\ell=0\) and \(\ell=1\), we find \[\ell =0: \sum_{m=n}^{1}\binom{m}{n}\frac{(-1)^{m}}{m!(1-m)!} =-\delta_{n1}\;, \tag{108a}\] \[\ell =1: \sum_{m=n}^{0}\binom{m}{n}\frac{(-1)^{m}}{m!(-m)!} =\delta_{n0}\;. \tag{108b}\] With all these results, the contributions of the loss terms to the collision matrix are \[\mathcal{A}_{rn}^{(0),\mathrm{l}} =\sigma_{T}P_{0}\beta\left[\delta_{nr}+\frac{(r+1)!}{2}\delta_{n1} \beta^{1-r}\right]\;, \tag{101a}\] \[\mathcal{A}_{rn}^{(1),\mathrm{l}} =\sigma_{T}P_{0}\beta\left[\delta_{nr}-\frac{(r+2)!}{6}\delta_{n0 }\beta^{-r}\right]\;,\] (101b) \[\mathcal{A}_{rn}^{(2),\mathrm{l}} =\sigma_{T}P_{0}\beta\,\delta_{nr}\;. \tag{101c}\] ## Appendix H Computation of the gain matrices In this section we compute the gain part of the collision matrix defined in Eq. (40). As discussed in the main text, the matrices \(\mathcal{A}_{rn}^{(\ell),g}\) will contain terms that diverge in the limit \(N_{\ell}\to\infty\). These divergences will appear in the form of certain sums \(S_{n}^{(\ell)}\left(N_{\ell}\right)\) defined in Eq. (87), that we list here again, \[S_{n}^{(\ell)}\left(N_{\ell}\right)\equiv\sum_{m=n}^{N_{\ell}} \binom{m}{n}\frac{1}{(m+\ell)(m+\ell+1)}\;. \tag{102}\] These sums can be evaluated recursively using auxiliary sums \[\widetilde{S}_{n}^{(\ell)}(N_{\ell})\equiv\sum_{m=n}^{N_{\ell}} \binom{m}{n}\frac{1}{m+\ell}\;. \tag{103}\] The explicit recursions will be listed at the end of the following subsections. ### Gain matrix for \(\ell=0\) Setting \(\ell=0\) and inserting the results for \(\mathcal{G}_{rn}^{(0)}\) from Eq. (102) into Eq. (40), we find \[\mathcal{A}_{rn}^{(0),\mathrm{g}}=-\frac{2\beta P_{0}\sigma_{T} \beta^{n-r}(-1)^{n}}{r(n+1)!}\sum_{m=n}^{N_{\ell}}\binom{m}{n}(m+1)!\sum_{q=0} ^{m}\frac{(-1)^{q}}{(q+1)!(m-q)!}\left[\frac{(q+r+2)!}{(q+1)!}-(q+2)(r+1)! \right]\;. \tag{104}\] The above expression is indeterminate when \(r=0\). Let us first consider the case \(r>0\). The sum over \(q\) can be performed by shifting the summation index \(q\) to \(q+1\) and applying Eq. (103), \[\sum_{q=0}^{m}\frac{(-1)^{q}}{(q+1)!(m-q)!}\frac{(q+k+2)!}{(q+1)!} =\frac{(k+1)!}{(m+1)!}\left[1+\frac{(-1)^{m}(r+1)!}{(m+1)!(r-m)!} \right]\;. \tag{105}\] Applying the above formula with \(k=r\) and \(k=0\), corresponding to the first and second term in square brackets in Eq. (104), respectively, we find \[\mathcal{A}_{r>0,n}^{(0),\mathrm{g}} =-\frac{2(-1)^{n}(r+1)!\sigma_{T}P_{0}\beta^{1+n-r}}{r(n+1)!}\sum _{m=n}^{N_{0}}(-1)^{m}\binom{m}{n}\left[\frac{(r+1)!}{(r-m)!(m+1)!}-\delta_{m 0}\right]\;. \tag{106}\] In order to perform the sum over \(n\) in Eq. (106), we introduce the function \(S_{rn}(x)\) via \[S_{rn}(x)\equiv\sum_{m=0}^{r-n}\binom{r-n}{m}(-1)^{m}x^{r-m}=(-1 )^{r-n}x^{n}(1-x)^{r-n}\;. \tag{107}\] Denoting the integral of order \(q\) of \(S_{rn}(x)\) by \[S_{rn}^{(-q)}(x) \equiv\int_{0}^{x}\mathrm{d}x_{1}\int_{0}^{x_{1}}\mathrm{d}x_{2} \cdots\int_{0}^{x_{q-1}}\mathrm{d}x_{q}S(x)\] \[=\sum_{m=0}^{r-n}\binom{r-n}{m}\frac{(r-m)!}{(r-m+q)!}(-1)^{m}x^ {r-m+q}\;, \tag{108}\] the sum in the first term in square brackets in Eq. (111) can be written as \[\sum_{m=n}^{N_{0}}(-1)^{m}\binom{m}{n}\frac{(r+1)!}{(r-m)!(m+1)!}=\frac{(-1)^{n}} {n!}\frac{(r+1)!}{(r-n)!}S_{rn}^{(-1)}(1)\;. \tag{112}\] Then, we can reexpress Eq. 
(111) as \[\mathcal{A}_{r>0,n}^{(0),\mathrm{g}}=\frac{2(r+1)!\sigma_{T}P_{0}\beta^{1-r}}{ r\,n!(n+1)!}\left[\delta_{n0}-\frac{(r+1)!}{(r-n)!}(-\beta)^{n}S_{rn}^{(-1)}(1) \right]\;. \tag{113}\] Now, using the definition of the incomplete Euler Beta function [26], \[B_{z}(a,b)\equiv\int_{0}^{z}\mathrm{d}t\,t^{a-1}(1-t)^{b-1}\;, \tag{114}\] it can be seen that \[S_{rn}^{(-1)}(z)=(-1)^{r-n}B_{z}(n+1,r-n+1)\;, \tag{115}\] while \(S_{rn}^{(-1)}(1)=(-1)^{r-n}B(n+1,r-n+1)\) can be written in terms of the complete Euler Beta function [26], defined by \[B(n+1,r-n+1)=\frac{n!(r-n)!}{(r+1)!}\;. \tag{116}\] Using these results, Eq. (113) reduces to, \[\mathcal{A}_{r>0,n}^{(0),\mathrm{g}}=-\frac{2(r+1)!\sigma_{T}P_{0}\beta^{1+n- r}}{r(n+1)!}\left(1-\delta_{n0}\right)\;. \tag{117}\] When \(r=0\), we find with the help of Eq. (115) \[\mathcal{A}_{0n}^{(0),\mathrm{g}}=-\frac{2(-1)^{n}\sigma_{T}P_{0}\beta^{n+1}} {(n+1)!}\sum_{m=n}^{N_{0}}\binom{m}{n}(m+1)!\sum_{q=0}^{m}\frac{(-1)^{q}(q+2)} {(q+1)!(m-q)!}\left[\psi(3+q)-\psi(2)\right]\;. \tag{118}\] The summation over \(q\) gives \[\sum_{q=0}^{m}\frac{(-1)^{q}(q+2)}{(q+1)!(m-q)!}\left[\psi(3+q)-\psi(2) \right]=\begin{cases}-\frac{1}{m(m+1)(m+1)!}\;,&m>0\\ 1\;,&m=0\end{cases}\;. \tag{119}\] The \(m=0\) term contributes only when \(n=0\), in which case we have \[\mathcal{A}_{00}^{(0),\mathrm{g}}=-2\sigma_{T}P_{0}\beta\left[1-\sum_{m=1}^{N _{0}}\frac{1}{m(m+1)}\right]=-\frac{2\sigma_{T}P_{0}\beta}{N_{0}+1}\;, \tag{120}\] approaching \(\mathcal{A}_{00}^{(0),\mathrm{g}}\to 0\) in the limit when \(N_{0}\to\infty\). When \(n>0\), we have \[\mathcal{A}_{0,n>0}^{(0),\mathrm{g}}=\frac{2(-1)^{n}\sigma_{T}P_{0}\beta^{n+1 }}{(n+1)!}S_{n}^{(0)}(N_{0})\;, \tag{121}\] where we used the definition from Eq. (109). The sum \(S_{n}^{(0)}(N_{0})\) diverges as \(\log N_{0}\) for \(n=1\). For small \(n>1\), it diverges as \(N_{0}^{n-1}\), while for large \(n\leq N_{0}\) the divergence goes as \(N_{0}^{N_{0}-n-2}\), suggesting a maximum degree of divergence around \(n\sim N_{0}/2\). In the case \(n=1\), we have \[S_{1}^{(0)}(N_{0})=\psi(N_{0}+2)-\psi(2)\;, \tag{122}\] while for \(n=2\) it holds that \[S_{2}^{(0)}(N_{0})=-\psi(N_{0}+2)+\psi(2)+\frac{N_{0}}{2}\;. \tag{119}\] Using the auxiliary sum \(\widetilde{S}_{n}^{(0)}(N_{0})\) defined in Eq. (102), we can formulate a coupled recursion equation \[S_{n+1}^{(0)}(N_{0}) = \frac{1}{n+1}\widetilde{S}_{n}^{(0)}(N_{0})-S_{n}^{(0)}(N_{0})\;, \tag{120a}\] \[\widetilde{S}_{n+1}^{(0)}(N_{0}) = \frac{1}{n+1}\binom{N_{0}+1}{n+1}-\frac{n}{n+1}\widetilde{S}_{n}^ {(0)}\;,\] (120b) \[\widetilde{S}_{1}^{(0)}(N_{0}) = N_{0}\;, \tag{120c}\] while the recursion for \(\widetilde{S}_{1}^{(0)}(N_{0})\) can be solved exactly: \[\widetilde{S}_{n}^{(0)}(N_{0})=\frac{1}{n}\binom{N_{0}}{n}\;. \tag{121}\] ### Gain matrix for \(\boldsymbol{\ell=1}\) We now insert \(\mathcal{G}_{rn}^{(1)}\) from Eq. (106) into the collision matrix \(\mathcal{A}_{rn}^{(1),\text{g}}\) defined in Eq. (40) and obtain: \[\mathcal{A}_{rn}^{(1),\text{g}}=\frac{6(-1)^{n+1}}{n!(n+3)!P_{0}}\sum_{m=n}^{N _{1}}\frac{m!(m+3)!}{(m-n)!}\sum_{q=0}^{m}\frac{(-1)^{q}\mathcal{G}_{r-1,q}^{ (1)}}{q!(m-q)!(q+3)!}\;. \tag{122}\] Substituting Eq. (85b) into the above leads to \[\mathcal{A}_{rn}^{(1),\text{g}}=\frac{2(-1)^{n+1}\sigma_{T}P_{0} \beta^{1+n-r}}{n!(n+3)!r(r+1)}\sum_{m=n}^{N_{1}}\frac{m!(m+3)!}{(m-n)!}\sum_{q =0}^{m}\frac{(-1)^{q}}{(m-q)!(q+2)!(q+3)!}\] \[\qquad\qquad\times\Big{\{}r(q+5+r)!-(r+2)^{2}(q+4+r)!+(r+2)! \big{[}(2+r)(q+4)!-r(q+3)!\big{]}\Big{\}}\;. 
\tag{123}\] Similar to the \(\ell=0\) case, special care must be taken when evaluating the expression above for \(r=0\), hence we start by assuming that \(r>0\). The sum over \(q\) is performed by first shifting \(q\) upwards by two units, then extending the summation range from \((2,N_{1}+2)\) to \((0,N_{1}+2)\) and subtracting the \(q=-1\) and \(q=-2\) terms. Noting that these latter \(q=-1\) and \(q=-2\) contributions vanish identically, the sum can be evaluated using Eq. (102) as follows: \[\sum_{q=0}^{m+2}\frac{(-1)^{q}}{(m+2-q)!q!(q+1)!}\Big{\{}r(q+3+r)!-(r+2)^{2}(q+4+r)!+(r+2)!\big{[}(2+r)(q+2)!-r(q+1)!\big{]}\Big{\}}\] \[= \frac{(-1)^{m}\left[(r+2)!\right]^{2}\left[r+m(r+2)\right]}{(m+2 )!(m+3)!(r-m)!}\;. \tag{124}\] Finally, \(\mathcal{A}_{rn}^{(1),\text{g}}\) evaluates to \[\mathcal{A}_{rn}^{(1),\text{g}}=\frac{2[(r+2)!]^{2}\sigma_{T}P_{0}(-\beta)^{1 +n-r}}{n!(n+3)!(r-n)!r(r+1)}[(r+2)S_{rn}^{(-1)}(1)-(r+4)S_{rn}^{(-2)}(1)]\;, \tag{125}\] where the notation \(S_{rn}^{(-q)}\) was introduced in Eq. (107). Using Eq. (111), the function \(S_{rn}^{(-2)}(1)\) can be evaluated as \[S_{rn}^{(-2)}(1) \equiv (-1)^{r-n}\int_{0}^{1}\text{d}z\,B_{z}(n+1,r-n+1) \tag{126}\] \[= (-1)^{r-n}\frac{n!(r-n+1)!}{(r+2)!}\;.\] Furthermore, using Eqs. (H11) and (H12) to replace \(S_{rn}^{(-1)}(1)\), Eq. (H25) reduces to \[\mathcal{A}_{r>0,n\leq r}^{(1),\mathrm{g}}=-\frac{2(r+2)!\sigma_{T}P_{0}\beta^{ 1+n-r}}{(n+3)!}\frac{n(r+4)-r}{r(r+1)}\;,\] (H27) while \(\mathcal{A}_{r>0,n>r}^{(1),\mathrm{g}}=0\). Considering now the case when \(r=0\), and using Eq. (F17), Eq. (H23) becomes \[\mathcal{A}_{0n}^{(1),\mathrm{g}}=\frac{2(-1)^{n+1}\sigma_{T}P_{0}\beta^{1+n} }{n!(n+3)!}\sum_{m=n}^{N_{1}}\frac{(m+3)!}{(m-n)!}\sum_{q=0}^{m}\frac{(-1)^{q} m!}{(m-q)!(q+2)!}\Big{\{}(q+4)(q+5)-2-4(q+4)\big{[}\psi(q+5)-\psi(2)\big{]} \Big{\}}\;.\] (H28) The sum over \(q\) can be performed for the terms not involving the polygamma function \(\psi(q+5)\) using the binomial expansion, as follows: \[\sum_{q=0}^{m}\frac{(-1)^{q}m!}{(m-q)!q!q!} =\delta_{m0}\;,\] (H29a) \[\sum_{q=0}^{m}\frac{(-1)^{q}m!}{(m-q)!(q+1)!} =\frac{1}{m+1}\;,\] (H29b) \[\sum_{q=0}^{m}\frac{(-1)^{q}m!}{(m-q)!(q+2)!} =\frac{1}{m+2}\;.\] (H29c) The sum over \(q\) involving \(\psi(q+5)\) can be performed by noting that \(\psi(q+5)=\psi(1)+\sum_{k=1}^{q+4}\frac{1}{k}\), where \(\psi(1)=-\gamma\) and \(\gamma\simeq 0.577\) is the Euler-Mascheroni constant, such that \[\sum_{q=0}^{m}\frac{(-1)^{q}m!}{(m-q)!(q+2)!}(q+4)\psi(q+5)=\left(\frac{11}{6} -\gamma\right)\frac{3m+4}{(m+1)(m+2)}+\frac{2[3+(m+1)(m+2)(m+3)]}{3(m+1)^{2}( m+2)^{2}(m+3)}\;.\] (H30) This leads to \[\mathcal{A}_{0n}^{(1),\mathrm{g}}=\frac{16(-1)^{n}\sigma_{T}P_{0}\beta^{1+n}} {(n+3)!}\Bigg{[}-\frac{3}{4}\delta_{n0}+S_{n}^{(1)}(N_{1})\Bigg{]}\;,\] (H31) where we employed Eq. (H1). Similar to the \(\ell=0\) case, with the help of an auxiliary sum defined in Eq. (H2) we can write down a recursion relation \[S_{n+1}^{(1)}(N_{1}) =\frac{1}{n+1}\widetilde{S}_{n}^{(1)}(N_{1})-\frac{n+2}{n+1}S_{n} ^{(1)}\;,\] (H32a) \[\widetilde{S}_{n+1}^{(1)}(N_{1}) =\frac{1}{n+1}\binom{N_{1}+1}{n+1}-\widetilde{S}_{n}^{(1)}\;,\] (H32b) \[S_{0}^{(1)}(N_{1}) =\frac{N_{1}+1}{N_{1}+2}\;,\] (H32c) \[\widetilde{S}_{0}^{(1)}(N_{1}) =\psi(N_{1}+2)+\gamma\;.\] (H32d) Note that now, Eq. 
(H31), can be evaluated explicitly in the case \(n=0\): \[\mathcal{A}_{00}^{(1),\mathrm{g}}=\frac{2}{3}\sigma_{T}P_{0}\beta\frac{N_{1}- 2}{N_{1}+2}\overset{N_{1}\to\infty}{\longrightarrow}\frac{2}{3}\sigma_{T}P_{0 }\beta\;.\] (H33) ### Gain matrix for \(\boldsymbol{\ell=2}\) Considering the case when \(\ell=2\), we are inserting Eq. (F22) into Eq. (40), which yields \[\mathcal{A}_{rn}^{(2),\mathrm{g}} \equiv\frac{15(-\beta)^{2+n}}{P_{0}n!(n+5)!}\sum_{m=n}^{N_{2}} \frac{m!(m+5)!}{(m-n)!}\sum_{q=0}^{m}\frac{(-\beta)^{q}\mathcal{G}_{r-1,q}^{(2 )}}{q!(m-q)!(q+5)!}\] \[=\frac{2(-1)^{n+1}\sigma_{T}P_{0}\beta^{1+n-r}}{n!(n+5)!r(r+1)(r+2 )}\sum_{m=n}^{N_{2}}\frac{m!(m+5)!}{(m-n)!}\sum_{q=0}^{m}\frac{(-1)^{q}f_{rq}^{ (2)}}{(q+3)!(q+5)!(m-q)!}\;,\] (H34) where we defined \[f^{(2)}_{rq} \equiv r(1+r)(q+r+8)!-2r(3+r)(4+r)(q+r+7)!\] \[+(2+r)(3+r)^{2}(4+r)(q+r+6)!-(r+4)!\left[(r+3)(q+6)!-2r(q+5)!\right]\;. \tag{101}\] As in the previous cases, Eq. (100) is also indeterminate when \(r=0\). For the time being we focus on the case when \(r>0\). The sum over \(q\) can be performed by shifting \(q\) upwards by three units, hence the summation range can be extended downwards from \((3,m+3)\) to \((0,m+3)\), such that \[\sum_{q=0}^{m}\frac{(-1)^{q}f^{(2)}_{rq}}{(q+3)!(q+5)!(m-q)!}=-\sum_{q=0}^{m+3 }\frac{(-1)^{q}f^{(2)}_{r,q-3}}{q!(q+2)!(m+3-q)!}\;. \tag{102}\] With the above shift, the terms appearing inside the square brackets in the expression for \(f^{(2)}_{rq}\) in Eq. (101) lead to vanishing contributions: \[\sum_{q=0}^{m+3}\frac{(-1)^{q}}{q!(m+3-q)!}=\sum_{q=0}^{m+3}\frac{(-1)^{q}(q+ 3)}{q!(m+3-q)!}=0\;. \tag{103}\] The other terms can be summed using the binomial theorem, as indicated in Eq. (111), by setting \(a=2\) (for all terms) and \(b=r+3\), \(r+2\) and \(r+1\). The final result is \[\mathcal{A}^{(2),\mathrm{g}}_{r>0,n}\equiv\frac{2(-1)^{n+1}(r+3)!(r+4)!\sigma _{T}P_{0}\beta^{1+n-r}}{n!(n+5)!r(r+1)(r+2)}\sum_{m=n}^{N_{2}}\frac{(-1)^{m}[( 3+r)(m+2)!-6(m+1)!]}{(m-n)!(r-m)!(m+3)!}\;. \tag{104}\] The term \((r-m)!\) appearing in the denominator on the second line is indicative that \(\mathcal{A}^{(2),\mathrm{g}}_{r>0,n}\) vanishes when \(n>r\) due to the fact that \(\Gamma(n)\) diverges for integer \(n\leq 0\). Performing the sum over \(m\) yields \[\mathcal{A}^{(2),\mathrm{g}}_{r>0,n\leq r}=-\frac{2\sigma_{T}P_{0}\beta^{1+n- r}(r+4)!(n+1)(9n+nr-4r)}{(n+5)!r(r+1)(r+2)}\;, \tag{105}\] while \(\mathcal{A}^{(2),\mathrm{g}}_{r>0,n>r}=0\). We now focus on the \(r=0\) case, which can be evaluated using Eq. (101). Performing the summation gives \[\mathcal{A}^{(2),\mathrm{g}}_{0n}=\frac{432(-1)^{n}\sigma_{T}P_{0}\beta^{1+n} }{(n+5)!}\left[-\frac{5}{18}\delta_{n0}+S^{(2)}_{n}(N_{2})\right]\;, \tag{106}\] where we used Eq. (100). With the help of the auxiliary sum defined in Eq. (101), we arrive at the following recursions \[S^{(2)}_{n+1}(N_{2}) = \frac{1}{n+1}\widetilde{S}^{(2)}_{n}(N_{2})-\frac{n+3}{n+1}S^{(2 )}_{n}(N_{2})\;, \tag{107a}\] \[\widetilde{S}^{(2)}_{n+1}(N_{2}) = \frac{1}{n+1}\binom{N_{2}+1}{n+1}-\frac{n+2}{n+1}\widetilde{S}^{( 2)}_{n}(N_{2})\;,\] (107b) \[S^{(2)}_{0}(N_{2}) = \frac{N_{2}+1}{2(N_{2}+3)}\;,\] (107c) \[\widetilde{S}^{(2)}_{0}(N_{2}) = \psi(N_{2}+3)-\psi(2)\;. \tag{107d}\] relaxation time of the bulk viscous pressure \(\tau_{\Pi}\). Furthermore, we show how to compute the correction to the local-equilibrium distribution function proportional to the bulk viscous pressure. 
### Inverse matrix The matrix structure in the scalar case has the following form: \[\mathcal{A}^{(0)}_{rn}=\begin{pmatrix}\mathcal{A}^{(0)}_{00}&\mathcal{A}^{(0)}_{0,n>2}\\ 0&\mathcal{A}^{(0)}_{r>2,n>2}\end{pmatrix}, \tag{11}\] where \(\mathcal{A}^{(0)}_{r>2,n>r}=0\), i.e. the matrix appearing in the bottom-right corner of the above expression is lower-diagonal. The inverse matrix \(\tau^{(0)}_{rn}\) inherits the same form, \[\tau^{(0)}_{rn}=\begin{pmatrix}\tau^{(0)}_{00}&\tau^{(0)}_{0,n>2}\\ 0&\tau^{(0)}_{r>2,n>2}\end{pmatrix}, \tag{12}\] where \(\tau^{(0)}_{r>2,n>2}\) is also lower-diagonal. It is easy to see that \[\tau^{(0)}_{00}=\frac{1}{\mathcal{A}^{(0)}_{00}}=\lambda_{\text{mfp}}\frac{N_ {0}+1}{N_{0}-1}\;. \tag{13}\] For future convenience, we parametrize \(\tau^{(0)}_{rn}\) for \(3\leq n\leq r\leq N_{0}\) as \[\tau^{(0)}_{rn}=\frac{\lambda_{\text{mfp}}(r+1)!}{\beta^{r-n}(n+1)!}\left( \delta_{rn}+\tilde{\tau}^{(0)}_{rn}\right)\;. \tag{14}\] Imposing \(\sum_{m=0,\neq 1,2}^{N_{0}}\tau^{(0)}_{rm}\mathcal{A}^{(0)}_{mn}=\delta_{rn}\) gives for \(r,n>2\): \[\tilde{\tau}^{(0)}_{rm}-\sum_{n=m}^{r}\frac{2}{n}\tilde{\tau}^{(0)}_{rn}=\frac {2}{r}\;. \tag{15}\] The above relation can be arranged into a simple recursion, \[\tilde{\tau}^{(0)}_{rm}=\frac{m}{m-2}\tilde{\tau}^{(0)}_{r,m+1}\;. \tag{16}\] Noting that \(\tau^{(0)}_{rr}=1/\mathcal{A}^{(0)}_{rr}\), we have \(\tilde{\tau}^{(0)}_{rr}=2/(r-2)\), such that \[\tilde{\tau}^{(0)}_{rm}=\frac{2(r-1)}{(m-1)(m-2)}, \tag{17}\] leading to \[\tau^{(0)}_{r>2,2<n\leq r}=\frac{\lambda_{\text{mfp}}(r+1)!}{\beta^{r-n}(n+1)!}\left[\delta_{rn}+\frac{2(r-1)}{(m-1)(m-2)}\right]\;. \tag{18}\] Finally, the elements on the zeroth line can be found by imposing \(\sum_{r=0}^{N_{0}}\mathcal{A}^{(0)}_{0r}\tau^{(0)}_{rn}=0\) for \(n>2\): \[\mathcal{A}^{(0)}_{00}\tau^{(0)}_{0,n>2}=-\sum_{r=n}^{N_{0}}\mathcal{A}^{(0)} _{0r}\tau^{(0)}_{rn}\;. \tag{19}\] Using Eqs. (88), (13), and (18), we get \[\frac{\tau^{(0)}_{0,n>0}}{\tau^{(0)}_{00}}=-\frac{2\beta^{n}}{(n -1)(n-2)(n+1)!}\\ \times\sum_{r=n}^{N_{0}}(-1)^{r}(r-1)S^{(0)}_{r}(N_{0})[2+(r-2) \delta_{rn}]\;. \tag{10}\] Using the explicit expression (87) for \(S^{(0)}_{n}\), the summation over \(r\) can be performed, leading to: \[\frac{\tau^{(0)}_{0,n>0}}{\tau^{(0)}_{00}} =-\frac{2(-\beta)^{n}}{(n-1)(n-2)(n+1)!}\] \[\times\sum_{m=n}^{N_{0}}\binom{m}{n}\frac{2n+m(n-2)(nm-m+n+1)}{m^ {2}(m^{2}-1)}\] \[=-\frac{2(-\beta)^{n}}{(n-1)(n-2)(n+1)!}\] \[\times\binom{1+N_{0}}{n}\frac{(1+N_{0}-n)[N_{0}(n-2)+n]}{N_{0}(N _{0}+1)^{2}}\;. \tag{11}\] Collecting the above results, we find Eqs. (112). ### Bulk viscosity We compute \(\zeta/m^{4}\equiv\zeta_{0}/m^{4}\) by substituting Eqs. (13) and (111) in Eq. (113): \[\frac{1}{m_{0}^{4}}\zeta =\frac{P_{0}\beta^{4}\lambda_{\text{mfp}}(N_{0}+1)}{54(N_{0}-1)} \Bigg{[}1-\sum_{n=3}^{N_{0}}\frac{(-1)^{n}}{n+1}\] \[\times\sum_{m=n}^{N_{0}}\binom{m}{n}\frac{n+m(n-2)(nm-m+1)}{m^{2 }(m^{2}-1)}\Bigg{]}\;. \tag{12}\] Swapping the summation with respect to \(n\) with that with respect to \(m\) and using the properties \[\sum_{n=3}^{m}\binom{m}{n}\frac{(-1)^{n}}{n+1} =-\frac{m(m-1)(m-2)}{6(m+1)}\;, \tag{13a}\] \[\sum_{n=3}^{m}\binom{m}{n}(-1)^{n} =-\frac{1}{2}(m-1)(m-2)\;,\] (13b) \[\sum_{n=3}^{m}\binom{m}{n}(-1)^{n}n =-m(m-2)\;. \tag{13c}\] Eq. (12) can be reduced to \[\frac{1}{m_{0}^{4}}\zeta =\frac{P_{0}\beta^{4}\lambda_{\text{mfp}}(N_{0}+1)}{54(N_{0}-1)}\] \[\times\left[1+\sum_{m=3}^{N_{0}}\frac{(m-2)(11m^{2}+4m-3)}{3m^{2 }(m-1)(m+1)^{2}}\right]\;. 
\tag{14}\] The sum over \(m\) appearing above represents a correction to the 14-moment approximation, represented by the prefactor of the square brackets. After preforming this sum, we arrive at Eq. (115). ### IReD relaxation time We begin with \(\tau_{\Pi}\equiv\tau_{\Pi;0}\) and use Eqs. (114) and (115), \[\tau_{\Pi} =\tau_{00}^{(0)}+\sum_{r=3}^{N_{0}}\tau_{0r}^{(0)}\frac{\zeta_{r}}{\zeta}\] \[=\tau_{00}^{(0)}\Bigg{\{}1-\frac{6N_{0}(N_{0}^{2}-1)}{6+7N_{0}+11 N_{0}^{3}}\sum_{m=3}^{N_{0}}\frac{1}{m(m+1)}\] \[\times\sum_{r=3}^{m}\binom{m}{r}(-1)^{r}\left(2H_{r}-\frac{1}{r+ 1}-\frac{8}{3}\right)\] \[\times\left[r-1+\frac{2r}{m-1}+\frac{2r}{m(m-1)(r-2)}\right] \Bigg{\}}\;. \tag{115}\] In order to evaluate the sums not containing harmonic numbers, we need the identities (II.1) as well as \[\sum_{r=3}^{m}\binom{m}{r}\frac{(-1)^{r}}{r-2}=\frac{1}{4}m(m-1)(3-2H_{m})\;. \tag{116}\] In order to perform the summation over \(r\) for the terms involving \(H_{r}\), we employ its integral representation, \[H_{r}=\int_{0}^{1}\mathrm{d}t\,\frac{1-t^{r}}{1-t}\;, \tag{117}\] together with the relations \[\sum_{r=3}^{m}\binom{m}{r}(-t)^{r} =-1+(1-t)^{m}+\frac{mt}{2}(2+t-mt)\;,\] \[\sum_{r=3}^{m}\binom{m}{r}(-1)^{r}r =mt[1-(1-t)^{m-1}-t(m-1)]\;,\] \[\sum_{r=3}^{m}\binom{m}{r}\frac{(-1)^{r}}{r-2} =-\frac{mt^{3}}{6}(m-2)(m-1)\] \[\times{}_{3}F_{2}(1,1,3-m;2,4;t)\;. \tag{118}\] Interchanging the summation with respect to \(r\) with the integration with respect to \(t\), we arrive at \[\sum_{r=3}^{m}\binom{m}{r}H_{r} =-\frac{3m^{3}-7m^{2}+4}{4m}\;, \tag{119a}\] \[\sum_{r=3}^{m}\binom{m}{r}(-1)^{r}rH_{r} =\frac{m}{2}(5-3m)+\frac{1}{m-1}\;,\] (119b) \[\sum_{r=3}^{m}\binom{m}{r}\frac{(-1)^{r}H_{r}}{r-2} =\frac{m}{4}(m-1)(7-3H_{m}-2H_{m,2})\;, \tag{119c}\] where \(H_{m,n}=\sum_{r=1}^{m}r^{-n}\) is the generalized Harmonic number, with \(H_{m}\equiv H_{m,1}\). Adding everything up, we find \[\tau_{\Pi} =\tau_{00}^{(0)}\Bigg{\{}1+\frac{4N_{0}(N_{0}^{2}-1)}{6+7N_{0}+1 1N_{0}^{3}}\sum_{m=3}^{N_{0}}\frac{1}{m(m+1)}\] \[\times\left[\frac{11}{6}-\frac{2m}{m^{2}-1}-\frac{6+m}{m^{2}}- \frac{6}{(m-1)^{2}}+6H_{m,2}\right]\Bigg{\}}\;. \tag{120}\] The summation over \(m\) can be performed, yielding Eq. (124). For the relaxation times of the higher-order moments we have \[\tau_{\Pi;r} =\sum_{n=0,\neq 1,2}^{N_{0}}\tau_{rn}^{(0)}\frac{\zeta_{n}}{\zeta_{r}}\] \[=\frac{m^{4}}{\zeta_{r}}\frac{\lambda_{\mathrm{mfp}}P\beta^{4-r}} {108r}(r-2)(r-1)(r+1)!\Bigg{[}2H_{r}-\frac{1}{r+1}\] \[\qquad-\frac{8}{3}+2\sum_{n=3}^{r}\frac{1}{n-2}\left(2H_{n}-\frac {1}{n+1}-\frac{8}{3}\right)\Bigg{]}\;. \tag{121}\] Performing the summation then gives Eq. (126). ### Correction to the distribution function We start from Eq. (132) and use Eqs. (114) and (115) for \(\zeta_{n}\) and \(\zeta_{0}\), as well as Eq. (109) for \(\mathcal{H}_{\mathbf{k}n}^{(0)}\), arriving at \[\frac{\delta f_{\mathbf{k}}^{(0)}}{f_{0\mathbf{k}}} =-\frac{6\Pi}{m_{0}^{2}\beta^{2}P}\left[\sum_{m=0}^{N_{0}}L_{m}^{ (1)}(\beta E_{\mathbf{k}})+\frac{3N_{0}(N_{0}^{2}-1)}{6+7N_{0}+11N_{0}^{3}}\right.\] \[\times\sum_{n=3}^{N_{0}}(-1)^{n}(n-1)\left(2H_{n}-\frac{1}{n+1}- \frac{8}{3}\right)\] \[\qquad\qquad\times\sum_{m=n}^{N_{0}}\frac{m!}{n!(m-n)!}L_{m}^{(1) }(\beta E_{\mathbf{k}})\right]\;. 
\tag{122}\] In the second term, the sums over \(n\) and \(m\) can be swapped, while the sum over \(n\) can be evaluated as follows: \[\sum_{n=3}^{m}\frac{m!(-1)^{n}(n-1)}{n!(m-n)!}\left(2H_{n}-\frac{ 1}{n+1}-\frac{8}{3}\right)\] \[\qquad\qquad=-\frac{11}{3}+2\left(\frac{1}{m-1}+\frac{1}{m}+ \frac{1}{m+1}\right)\;. \tag{123}\] Plugging the above into Eq. (122) leads to \[\frac{\delta f_{\mathbf{k}}^{(0)}}{f_{0\mathbf{k}}} =-\frac{6\Pi}{m_{0}^{2}\beta^{2}P}\left[\sum_{m=0}^{N_{0}}L_{m}^{ (1)}(\beta E_{\mathbf{k}})+\frac{3N_{0}(N_{0}^{2}-1)}{6+7N_{0}+11N_{0}^{3}}\right.\] \[\left.\times\sum_{m=3}^{N_{0}}L_{m}^{(1)}(\beta E_{\mathbf{k}}) \left(\frac{2}{m-1}+\frac{2}{m}+\frac{2}{m+1}-\frac{11}{3}\right)\right]. \tag{124}\] We now consider the limit \(N_{0}\to\infty\), when \(3N_{0}(N_{0}^{2}-1)/(6+7N_{0}+11N_{0}^{3})\to 3/11\). This leads to the expression \[\frac{\delta f_{\mathbf{k}}^{(0)}}{f_{\mathbf{0k}}}=-\frac{6\Pi}{m _{0}^{2}\beta^{2}P}\Bigg{[}1+L_{1}^{(1)}(\beta E_{\mathbf{k}})+L_{2}^{(1)}( \beta E_{\mathbf{k}})\] \[+\frac{6}{11}\sum_{m=3}^{\infty}L_{m}^{(1)}(\beta E_{\mathbf{k}}) \left(\frac{1}{m-1}+\frac{1}{m}+\frac{1}{m+1}\right)\Bigg{]}\;. \tag{125}\] The summation over \(m\) can be performed by introducing a fictitious parameter \(0<t<1\) and employing the generating function \[\sum_{m=0}^{\infty}t^{m}L_{m}^{(\alpha)}(x)=\frac{1}{(1-t)^{\alpha+1}}e^{-tx/( 1-t)}\;. \tag{126}\] In our case, we must evaluate \[\sum_{m=3}^{\infty}L_{m}^{(1)}(x)\left(\frac{1}{m-1}+\frac{1}{m}+ \frac{1}{m+1}\right)\] \[=\int_{0}^{1}\mathrm{d}t\left(1+\frac{1}{t}+\frac{1}{t^{2}} \right)\left[\frac{e^{-xt/(1-t)}}{(1-t)^{2}}-\sum_{m=0}^{2}t^{m}L_{m}^{(1)}(x )\right]\;. \tag{127}\] It can be checked that the integrand behaves like \(O(t)\) around \(t=0\) and thus the integral converges. The result is \[\sum_{m=3}^{\infty}L_{m}^{(1)}(x)\left(\frac{1}{m-1}+\frac{1}{m}+ \frac{1}{m+1}\right)=\frac{3}{x}\\ -(3-x)\ln x-\frac{19}{2}+\gamma(x-3)+6x-\frac{11x^{2}}{12}\;. \tag{128}\] Plugging the above into Eq. (125) then gives Eq. (133).
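As a closing consistency check, the finite binomial sums quoted in the bulk-viscosity, relaxation-time and \(\delta f_{\mathbf{k}}^{(0)}\) computations above can be verified directly for small upper limits. A short sympy sketch (here `harm(m, p)` is the generalized harmonic number \(H_{m,p}\) of the text, with `harm(m)` \(=H_{m}\)):

```python
# Spot-checks (sympy sketch) of the finite binomial sums used above.
import sympy as sp

def harm(m, p=1):
    # generalized harmonic number H_{m,p}
    return sum(sp.Rational(1, k**p) for k in range(1, m + 1))

C = sp.binomial
for m in range(3, 9):
    # sums used for the bulk viscosity
    assert sum(C(m, n) * (-1)**n / sp.Integer(n + 1) for n in range(3, m + 1)) \
        == -sp.Rational(m * (m - 1) * (m - 2), 6 * (m + 1))
    assert sum(C(m, n) * (-1)**n for n in range(3, m + 1)) == -sp.Rational((m - 1) * (m - 2), 2)
    assert sum(C(m, n) * (-1)**n * n for n in range(3, m + 1)) == -m * (m - 2)
    # sums used for the IReD relaxation time
    assert sum(C(m, rr) * (-1)**rr / sp.Integer(rr - 2) for rr in range(3, m + 1)) \
        == sp.Rational(m * (m - 1), 4) * (3 - 2 * harm(m))
    assert sum(C(m, rr) * (-1)**rr * rr * harm(rr) for rr in range(3, m + 1)) \
        == sp.Rational(m, 2) * (5 - 3 * m) + sp.Rational(1, m - 1)
    assert sum(C(m, rr) * (-1)**rr * harm(rr) / sp.Integer(rr - 2) for rr in range(3, m + 1)) \
        == sp.Rational(m * (m - 1), 4) * (7 - 3 * harm(m) - 2 * harm(m, 2))
    # sum used for the correction to the distribution function
    assert sum(C(m, n) * (-1)**n * (n - 1) * (2 * harm(n) - sp.Rational(1, n + 1) - sp.Rational(8, 3))
               for n in range(3, m + 1)) \
        == -sp.Rational(11, 3) + 2 * (sp.Rational(1, m - 1) + sp.Rational(1, m) + sp.Rational(1, m + 1))

print("binomial/harmonic-number identities verified for m = 3..8")
```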
2309.10670
Symmetry considerations in exact diagonalization: spin-1/2 pyrochlore magnets
We describe how the methods of group theory (symmetry) are used to optimize the problem of exact diagonalization of a quantum system on a 16-site pyrochlore lattice. By analytically constructing a complete set of symmetrized states, we completely block-diagonalize the Hamiltonian. As an example, we consider a spin-1/2 system with nearest neighbour exchange interactions.
C. Wei, S. H. Curnoe
2023-09-19T14:53:22Z
http://arxiv.org/abs/2309.10670v1
# Symmetry considerations in exact diagonalization: spin-1/2 pyrochlore magnets ###### Abstract We describe how the methods of group theory (symmetry) are used to optimize the problem of exact diagonalization of a quantum system on a 16-site pyrochlore lattice. By analytically constructing a complete set of symmetrized states, we completely block-diagonalize the Hamiltonian. As an example, we consider a spin-1/2 system with nearest neighbour exchange interactions. ## I Introduction Exact diagonalization uses numerical approaches to find the eigenvalues and eigenvectors of a matrix representation of a Hamiltonian. The results determine the entire spectrum, which can be used to evaluate many quantities of interest, including spin correlations, thermodynamic quantities, and even quantum entanglement. Generally though, these studies are limited by size, since the dimension of the Hamiltonian matrix is \(D\times D\), where for \(N\) spin-1/2 states \(D=2^{N}\), the memory requirement scales as \(D^{2}\), and the computational time scales as \(D^{3}\)[1]. Different strategies can be used to circumvent the size problem. To begin with, it is often possible to block-diagonalize the Hamiltonian matrix by using the underlying symmetries of the system. Then, if the full spectrum is not needed, the Lanczos algorithm may be used to solve for low-lying eigenvalues and eigenstates [2; 3], and has been applied to general pairing Hamiltonians of size \(10^{8-9}\). With space group \(Fd\bar{3}m\) (\(O_{h}^{7}\), No. 227), pyrochlore crystals are highly symmetric; in this article we describe the procedure for block-diagonalizing the spin Hamiltonian for pyrochlore crystals by fully exploiting the space group symmetries. Pyrochlore magnets have been a popular research subject for the past few decades because they are physical realizations of spin systems on a geometrically frustrated lattice. In these crystals, the magnetic ions with spin \(J\) reside on the vertices of a network of corner-sharing tetrahedra. The crystal electric field at the magnetic sites lifts the \(2J+1\)-fold spin degeneracy into singlets and doublets; those with a well-separated ground state doublet are effective spin-1/2 systems. The simplest model of magnetic interactions in pyrochlore crystals is the nearest-neighbour exchange interaction, which accounts for the interaction energy between pairs of neighbouring spins. Given the non-isotropic electronic structure of the magnetic ions (which are typically rare earth elements) the magnetic interaction is expected to be anisotropic (_i.e._ non-Heisenberg) in general. Even so, this model is tightly constrained by the space group symmetry of the crystal such that the general form of the nearest-neighbour exchange interaction has only four free parameters [4]; this same symmetry group is used to block-diagonalize the Hamiltonian matrix. Pyrochlore magnets exhibit a variety of magnetic phenomena, including various magnetically ordered states and different kinds of 'spin ice,' such as disordered spin ice in Ho\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\)[5], ordered spin ice in Tb\({}_{2}\)Sn\({}_{2}\)O\({}_{7}\)[6] and quantum spin ice in Tb\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\)[7; 8], Yb\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\)[9; 10; 11], Ce\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\)[12]. Quantum spin ice materials in particular have attracted a great deal of attention because they are believed to host long-range quantum entanglement.
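For the Lanczos step mentioned above, standard sparse linear-algebra libraries can be used when only the low-lying part of the spectrum is needed. A minimal sketch (not the code used in this work), applied to a random sparse symmetric matrix standing in for a large spin Hamiltonian:

```python
# Minimal Lanczos example (illustrative sketch only): a few of the lowest eigenvalues of a
# large sparse symmetric matrix via scipy's ARPACK-based eigsh (implicitly restarted Lanczos).
import numpy as np
import scipy.sparse as sps
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
D = 2**12                                    # stand-in for a 2^N-dimensional spin Hilbert space
A = sps.random(D, D, density=1e-3, random_state=rng, format='csr')
H = (A + A.T) * 0.5                          # symmetrize so that the spectrum is real

vals, vecs = spla.eigsh(H, k=6, which='SA')  # six lowest ('smallest algebraic') eigenvalues
print(vals)
```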
In a recent work, we have studied the Hamiltonian within a range of its four free parameters that encompasses this state [13]; here we provide the details of our computational method. ## II Exact diagonalization ### Representations of the Symmetry Group The symmetry group of any crystal is one of the 230 crystallographic space groups [14], each of which consists of a set of translations based on the crystal lattice vectors and a point group of rotations. All of the elements of the space group can be represented as matrices. An _irreducible representation_ (IR) is any set of matrices that _cannot_ be simultaneously block-diagonalized by a unitary transformation; reducible representations are a direct sum of IR's. Explicit forms for the matrices can be found using the methods described in Ref. [15]. A finite system with periodic boundary conditions will support a number of wavevectors \(k\) equivalent to the size of the translation subgroup. To each wavevector \(k\) is associated one or more IRs, depending on the symmetry group of \(k\) itself. For pyrochlore crystals, which have a face-centred cubic (fcc) Bravais lattice, the only translations in the symmetry group of a single cube are the set of three fcc lattice vectors \(\vec{\tau}_{1}=(0,a/2,a/2)\), \(\vec{\tau}_{2}=(a/2,0,a/2)\) and \(\vec{\tau}_{3}=(a/2,a/2,0)\) (where \(a\) is the side length of the cube) and in \(k\)-space only the \(\Gamma\)-point (\(\vec{k}=(0,0,0)\)) and the \(X\)-point (\(\vec{k}=\frac{2\pi}{a}(0,0,1)\) and equivalent) occur. A system of two cubes, with periodic boundary conditions \(f(x,y,z)=f(x\pm a,y\pm a,z)=f(x\pm a,y,z\pm a)=f(x,y\pm a,z\pm a)\), has four additional translations, and in \(k\)-space, in addition to the \(\Gamma\)-point and the \(X\)-point, the \(L\)-point, with \(\vec{k}=\frac{\pi}{a}(1,1,1)\), occurs. Altogether there are ten \(\Gamma\)-point IRs (four one-dimensional, two two-dimensional and four three-dimensional), four \(X\)-point IRs (all six-dimensional) and six \(L\)-point IRs (four four-dimensional and two eight-dimensional). The dimensionality of the IR is its degeneracy. This means that the Hamiltonian matrix for a cube (containing 16 magnetic sites) can be block-diagonalized into 44 blocks, of which only fourteen are not redundant, which represents a computation time reduction by a factor of approximately 6000. A matrix representation of the symmetry group can be found by considering the action of the symmetry group on a set of physical objects, such as the set of basis kets \(|u_{i}\rangle\) of the system. Using this basis, the matrix elements for a symmetry operation \(R\) are \(\Gamma_{ij}(R)=\langle u_{i}|R|u_{j}\rangle\), so that \(R|u_{j}\rangle=\sum_{i}\Gamma_{ij}(R)|u_{i}\rangle\). Generally such constructions will generate a reducible representation which can be decomposed into IR's \(\Gamma^{(i)}\)[16]: \[\Gamma=\sum_{\oplus i}a_{i}\Gamma^{(i)}, \tag{1}\] where \(a_{i}\) (a non-negative integer) is the number of copies of the \(i\)th IR. Since characters are invariant under a unitary transformation, comparing the character (traces) of \(\Gamma\) with the characters of the \(\Gamma^{(i)}\) allows one to extract the numbers \(a_{i}\). By applying a suitable unitary transformation, the matrices of \(\Gamma\) can be cast into a block-diagonal form where the blocks are of size \(a_{i}\). If the same basis is used to generate a matrix representation of the Hamiltonian, with matrix elements \(H_{ij}=\langle u_{i}|H|u_{j}\rangle\), then \(H\) can be block-diagonalized by the same unitary transformation.
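The multiplicities \(a_{i}\) follow from character orthogonality, \(a_{i}=\frac{1}{|G|}\sum_{R}\chi^{(i)}(R)^{*}\chi(R)\), with the sum conveniently grouped by class. A small illustrative sketch (it uses the three-class point group \(C_{3v}\) purely to keep the example compact; the 192-element group of Table 1 is handled identically, with the table supplying the characters and class sizes):

```python
# Decomposing a reducible representation from its character alone (illustrative sketch).
#   a_i = (1/|G|) * sum over classes of  n_C * conj(chi_i(C)) * chi(C)
import numpy as np

class_sizes = np.array([1, 2, 3])          # |C| for the classes E, 2C3, 3sigma_v of C3v
irreps = {                                 # character table of C3v
    'A1': np.array([1,  1,  1]),
    'A2': np.array([1,  1, -1]),
    'E':  np.array([2, -1,  0]),
}
order = class_sizes.sum()                  # |G| = 6

def decompose(chi):
    """Return {IR name: multiplicity a_i} for a reducible character chi (one entry per class)."""
    return {name: int(round(float(np.sum(class_sizes * np.conj(ch) * chi).real) / order))
            for name, ch in irreps.items()}

# The regular representation has character (|G|, 0, 0) and decomposes into every IR with
# multiplicity equal to its dimension: here A1 + A2 + 2E.
print(decompose(np.array([6, 0, 0])))      # {'A1': 1, 'A2': 1, 'E': 2}
```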
In the following, we describe how to construct the unitary matrix that block-diagonalizes the Hamiltonian for pyrochlore magnets. ### Application to Pyrochlore Magnets In a spin-1/2 system with \(N\) spins there are \(2^{N}\) basis kets of the form \(|\pm\pm\pm...\rangle\) where the order of the symbols inside the ket corresponds to some particular order of the spin sites on the lattice. We now describe how these kets form the basis of a reducible representation of the symmetry group of pyrochlore crystals. In pyrochlore magnets, the magnetic ions occupy the 16d Wyckoff position, such that there are 16 ions in the cubic cell. The primitive unit cell is a tetrahedron, with magnetic ions located at each of its four vertices; this is the smallest unit that possesses the full point group symmetry of the pyrochlore crystal, \(O_{h}\). The \(2^{4}\times 2^{4}\) single-tetrahedron problem is easily solved without the need for block-diagonalization, and block-diagonalization renders the problem almost trivial. The set of \(2^{4}\) basis kets \(|\pm\pm\pm\rangle\) belong to the reducible representation \(A_{1g}\oplus 3E_{g}\oplus 2T_{1g}\oplus T_{2g}\), where \(A\) is one-dimensional, \(E\) is two-dimensional, and \(T\) is three-dimensional, and all belong to the \(\Gamma\)-point. Thus the \(16\times 16\) problem is reducible to blocks of size \(1\times 1\) (non-degenerate), size \(3\times 3\) (doubly degenerate), size \(2\times 2\) (triply degenerate), and size \(1\times 1\) (triply degenerate) [17]. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline & \(1E\) & \(6C_{2}\) & \(32C_{3}\) & \(12C_{2}^{2}\) & \(24C_{4}\) & \(4I\) & \(12IC_{2}\) & \(32IC_{3}\) & \(12IC_{2}^{2}\) & \(12IC_{2}^{2}\) & \(24IC_{4}\) & \(3\tau\) & \(6T_{2}\) & \(12\tau C_{2}^{2}\) \\ \hline \(A_{1g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline \(A_{2g}\) & 1 & 1 & 1 & -1 & -1 & 1 & 1 & 1 & -1 & -1 & -1 & 1 & 1 & -1 \\ \hline \(A_{1u}\) & 1 & 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 & -1 & -1 & 1 & 1 & 1 \\ \hline \(A_{2u}\) & 1 & 1 & 1 & -1 & -1 & -1 & -1 & -1 & 1 & 1 & 1 & 1 & 1 & -1 \\ \hline \(E_{g}\) & 2 & 2 & -1 & 0 & 0 & 2 & 2 & -1 & 0 & 0 & 0 & 2 & 2 & 0 \\ \hline \(E_{u}\) & 2 & 2 & -1 & 0 & 0 & -2 & -2 & 1 & 0 & 0 & 0 & 2 & 2 & 0 \\ \hline \(T_{1g}\) & 3 & -1 & 0 & -1 & 1 & 3 & -1 & 0 & -1 & -1 & 1 & 3 & -1 & -1 \\ \hline \(T_{2g}\) & 3 & -1 & 0 & 1 & -1 & 3 & -1 & 0 & 1 & 1 & -1 & 3 & -1 & 1 \\ \hline \(T_{1u}\) & 3 & -1 & 0 & -1 & 1 & -3 & 1 & 0 & -1 & -1 & 1 & 3 & -1 & 1 \\ \hline \(T_{2u}\) & 3 & -1 & 0 & 1 & -1 & -3 & 1 & 0 & -1 & -1 & 1 & 3 & -1 & 1 \\ \hline \(X_{1}\) & 6 & 2 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & -2 & -2 & -2 \\ \hline \(X_{2}\) & 6 & 2 & 0 & -2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -2 & -2 & 2 \\ \hline \(X_{3}\) & 6 & -2 & 0 & 0 & 0 & 0 & 0 & 0 & -2 & 2 & 0 & -2 & 2 & 0 \\ \hline \(X_{4}\) & 6 & -2 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & -2 & 0 & -2 & 2 & 0 \\ \hline \end{tabular} \end{table} Table 1: Character table for the symmetry group of a pyrochlore crystal cube containing 16 magnetic sites, \(\{O_{h}\}\times\{E,\tau_{1},\tau_{2},\tau_{3}\}\). 
The first row lists the classes of the group (the number of elements in each class and a representative element in the class: \(C_{2}\) is a \(180^{\circ}\) rotation about a principle cube axis, [100]; \(C_{3}\) is a 3-fold rotation about the cubic diagonal, [111]; \(C_{2}^{\prime}\) is a \(180^{\circ}\) rotation about the [110] axes; \(C_{4}\) is a 4-fold rotation about a principle cube axis; \(I\) is inversion; and \(\tau\) is an fcc lattice translation). The first column lists the irreducible representations and the second column lists their dimensions. The character of each representation is the set of numbers in each row, which are the traces of the matrices representing the group operations. The next largest symmetric unit is a cube containing 16 magnetic ions (shown in Fig. 1), which is the focus of this article. With periodic boundary conditions assumed, there are only three translations, \(\vec{\tau}_{1}\), \(\vec{\tau}_{2}\) and \(\vec{\tau}_{3}\). The symmetry group is \(\{O_{h}\}\times\{E,\tau_{1},\tau_{2},\tau_{3}\}\), which has 192 elements. The character table of this group is given in Table 1. The decomposition of the representation generated by the set of basis kets \(|\pm\pm\pm\pm\pm\pm\pm\pm\pm\pm\pm\pm\pm\pm\rangle\) is given in Table 2. According to these results, the matrix representation of any operator that that acts on this basis and is invariant under this symmetry group can be block-diagonalized into 44 blocks with different sizes and degeneracies, as listed in Table 2. As an application of our method, we consider the general nearest-neighbor exchange interaction for pyrochlore magnets. The Hamiltonian is \[H_{ex}=\sum_{\langle i,j\rangle}{\cal J}_{i,j}^{\mu\nu}S_{i}^{\mu}S_{j}^{\nu}, \tag{2}\] where the sum over \(\langle i,j\rangle\) runs over pairs of nearest-neighbour spins and \(\vec{S}_{i}=(S_{i}^{x},S_{i}^{y},S_{i}^{z})\) is the spin operator for the \(i\)th site. \({\cal J}_{i,j}^{\mu\nu}\) are exchange constants which are constrained by the space group symmetry of the crystal; in pyrochlore magnets there are only four independent exchange constants. It is convenient to express the Hamiltonian as \[H={\cal J}_{1}X_{1}+{\cal J}_{2}X_{2}+{\cal J}_{3}X_{3}+{\cal J}_{4}X_{4}, \tag{3}\] where \({\cal J}_{a}\) are the exchange constants and \[X_{1} = -\frac{1}{3}\sum_{\langle i,j\rangle}S_{iz}S_{jz}\] \[X_{2} = -\frac{\sqrt{2}}{3}\sum_{\langle i,j\rangle}[\Lambda_{s_{i}s_{j} }(S_{iz}S_{j+}+S_{jz}S_{i+})+\mbox{h.c.}]\] \[X_{3} = \frac{1}{3}\sum_{\langle i,j\rangle}[\Lambda_{s_{i}s_{j}}^{*}S_{ i+}S_{j+}+\mbox{h.c.}]\] \[X_{4} = -\frac{1}{6}\sum_{\langle i,j\rangle}(S_{i+}S_{j-}+\mbox{h.c.}).\] \(S_{\pm}=S_{x}\pm iS_{y}\) and the subscript is used to indicate the components of the spin operators with respect to a set of local axes, defined as follows. The local symmetry of the magnetic sites (the 16d Wyckoff position) is the point group \(D_{3d}\), which has a three-fold axis pointing along one of the four cube diagonals; the local \(z\)-axis is defined to be this three-fold axis (see Fig. 1 and Refs. [4; 17] for more details). \(\Lambda_{ss^{\prime}}\) are phases which depend on the site numbers: \(\Lambda_{12}=\Lambda_{34}=1\) and \(\Lambda_{13}=\Lambda_{24}=\Lambda_{14}^{*}=\Lambda_{23}^{*}=\varepsilon\equiv \exp(\frac{2\pi i}{3})\). 
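To indicate how a Hamiltonian of this form is assembled numerically, the following is a minimal sketch (not the authors' code): single-site spin-1/2 operators are embedded in the \(2^{N}\)-dimensional space by Kronecker products, and a bond sum of the \(X_{1}\) type is accumulated from a list of nearest-neighbour pairs. The bond list shown is a placeholder (a single tetrahedron) rather than the 16-site connectivity, and the local quantization axes are understood as in the text.

```python
# Sketch: local spin-1/2 operators via Kronecker products and a nearest-neighbour bond sum.
import numpy as np
from functools import reduce

sz  = np.array([[0.5, 0.0], [0.0, -0.5]], dtype=complex)   # S_z (hbar = 1)
sp_ = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)    # S_+
sm_ = sp_.conj().T                                         # S_-
id2 = np.eye(2, dtype=complex)

def site_op(op, i, N):
    """Operator `op` acting on site i of an N-site spin-1/2 system (identity elsewhere)."""
    factors = [id2] * N
    factors[i] = op
    return reduce(np.kron, factors)

def x1_term(bonds, N):
    """The diagonal X_1 piece, -(1/3) * sum_<ij> S_i^z S_j^z, in the local bases."""
    H = np.zeros((2**N, 2**N), dtype=complex)
    for i, j in bonds:
        H -= (1.0 / 3.0) * site_op(sz, i, N) @ site_op(sz, j, N)
    return H

# Toy usage: one tetrahedron (4 sites, all pairs are nearest neighbours)
N = 4
bonds = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
H1 = x1_term(bonds, N)
print(H1.shape, np.allclose(H1, H1.conj().T))              # (16, 16) True
```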
The Hamiltonian (3) is the most general form allowed by symmetry for nearest-neighbour interactions with angular momentum operators of any value of \(S\) (including the classical limit where \(S\) is large), however, in the following, we assume that \(\vec{S}\) is a spin-1/2 operator. One may also construct general Hamiltonians very similar in form for 'pseudo-spin' operators, where magnetic sites are occupied by ions with a double-degeneracy that is different from spin-1/2, [18] but we will not consider those models here. The basis kets \(|u_{i}\rangle=|\pm\pm\pm\ldots\rangle\) represent states where the quantization axes of the spins are the local \(z\)-axes described above. Hence the action of the spin operator is \[S_{iz}|\ldots\pm\ldots\rangle = \pm\frac{\hbar}{2}|\ldots\pm\ldots\rangle \tag{4}\] \[S_{ix}|\ldots\pm\ldots\rangle = \frac{\hbar}{2}|\ldots\mp\ldots\rangle\] (5) \[S_{iy}|\ldots\pm\ldots\rangle = \pm\frac{i\hbar}{2}|\ldots\mp\ldots\rangle. \tag{6}\] \begin{table} \begin{tabular}{l|c|c} \hline IR & dimension & block size \\ \hline A\({}_{1g}\) & 1 & 383 \\ A\({}_{2g}\) & 1 & 371 \\ A\({}_{1u}\) & 1 & 335 \\ A\({}_{2u}\) & 1 & 335 \\ E\({}_{g}\) & 2 & 774 \\ E\({}_{u}\) & 2 & 682 \\ T\({}_{1g}\) & 3 & 1085 \\ T\({}_{2g}\) & 3 & 1081 \\ T\({}_{1u}\) & 3 & 957 \\ T\({}_{2u}\) & 3 & 957 \\ X\({}_{1}\) & 6 & 2038 \\ X\({}_{2}\) & 6 & 2042 \\ X\({}_{3}\) & 6 & 2038 \\ X\({}_{4}\) & 6 & 2042 \\ \hline \end{tabular} \end{table} Table 2: The decomposition of the representation generated by the \(2^{16}\) basis kets of a cube containing 16 sites. The first column lists the IR’s of the symmetry group, the second column lists their dimension (degeneracy), and the final column gives the size of each block (the number of copies \(a_{i}\) of each IR). Figure 1: Magnetic sites of a pyrochlore crystal. The arrows are axes of 3-fold symmetry. Since the basis kets are eigenstates of \(S_{iz}\), the matrix representation of the term \(X_{1}\) will be a diagonal in the \(|u_{i}\rangle\) basis. However, \(X_{2}\), \(X_{3}\) and \(X_{4}\) will be non-diagonal (albeit sparse). To find the unitary matrix \(U\) that block-diagonalizes \(H\), we find the set of "symmetrized" kets, each of which belongs to a particular IR, as follows. We first obtain explicit matrices for all of the group operations in all of the IR's following the procedure in Ref. [15]. In the notation of Ref. [16], the matrix elements of the group operations are \(\Gamma^{(j)}_{\lambda\kappa}(R)\) where \(j\) labels the representation, \(R\) is an element of the symmetry group, and \(\lambda\) and \(\kappa\) are the row and column of the matrix. We also find the action of every symmetry element \(R\) on every basis ket. Essentially, every \(R\) produces a permutation of the basis kets and may introduce phases. The operator \[\mathcal{P}^{(j)}_{\kappa\kappa}=\sum_{R}\Gamma^{(j)*}_{\kappa\kappa}P_{R},\] where \(P_{R}\) is the is the operator that applies the symmetry element \(R\) to a ket, is proportional to a projection operator for the \(\kappa\)th row of the \(j\)th IR. When this operator is applied to a basis ket \(|u_{i}\rangle=|\pm\pm\pm\ldots\rangle\) any non-zero result will be a symmetrized ket belonging to the \(\kappa\)th dimension of the \(j\)th IR. The brute force strategy is to apply the projector to each basis ket until all \(a_{j}\) independent symmetrized kets are found. However, there are much faster ways to implement this in practice, as discussed in Appendix B. 
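A minimal dense sketch of the brute-force projection strategy just described (the permutations, phases and IR matrix elements are assumed to have been precomputed as explained in the text and Appendix A):

```python
# Brute-force projection sketch (not the authors' optimized code).  Assumed inputs for one (j, kappa):
#   perms[R][i]  : index of the basis ket that |u_i> is sent to by group element R
#   phases[R][i] : phase picked up by |u_i> under R
#   gamma[R]     : the diagonal IR matrix element Gamma^{(j)}_{kappa kappa}(R)
import numpy as np

def symmetrized_kets(perms, phases, gamma, dim, tol=1e-10):
    """Apply P^{(j)}_{kk} = sum_R conj(Gamma^{(j)}_{kk}(R)) P_R to every basis ket and return an
    orthonormal basis of the resulting subspace (its dimension equals the multiplicity a_j)."""
    rows = []
    for i in range(dim):
        v = np.zeros(dim, dtype=complex)
        for R in range(len(perms)):
            v[perms[R][i]] += np.conj(gamma[R]) * phases[R][i]   # P_R |u_i> lands on |u_{perm(i)}>
        rows.append(v)
    M = np.array(rows)
    # orthonormalize via SVD; the right-singular vectors span the symmetrized subspace
    _, s, vh = np.linalg.svd(M, full_matrices=False)
    rank = int(np.sum(s > tol * max(float(s.max()), 1.0)))
    return vh[:rank]                       # each row holds the coefficients of one |phi_n>

def block_hamiltonian(H, kets):
    """One diagonal block of U^dagger H U; kets[n, i] is the coefficient of |u_i> in |phi_n>."""
    U = kets.T                             # U[i, n] = <u_i|phi_n>
    return U.conj().T @ H @ U
```

Only one diagonal entry \(\kappa\) per IR needs to be projected in this way; the blocks obtained from the other rows of the same IR are degenerate copies, which is the redundancy counted above (44 blocks, of which only fourteen are independent).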
The final result will be a set of symmetrized kets expressed as \(|\phi_{n}\rangle=\sum_{i}\alpha_{ni}|u_{i}\rangle\); these should be grouped according to the IR and its dimension (\(j\), \(\kappa\)) to which they belong. The matrix elements of the unitary matrix \(U\) that block-diagonalizes \(H\), \(H_{\rm block}=U^{\dagger}HU\), are \(U_{lj}=\langle\phi_{l}|u_{j}\rangle=\alpha_{lj}^{*}\), that is \[H^{\rm block}_{ij} = \langle\phi_{i}|H|\phi_{j}\rangle=\sum_{kl}\langle\phi_{i}|u_{k}\rangle\langle u_{k}|H|u_{l}\rangle\langle u_{l}|\phi_{j}\rangle \tag{7}\] \[= \sum_{kl}U^{\dagger}_{ik}H_{kl}U_{lj}.\] Finally, the eigenvalues and eigenvectors of \(H^{\rm block}\) are calculated numerically using Lapack subroutines. ## III Discussion and Summary In Figs. 2-6 we present the entire spectrum of the Hamiltonian in the spin ice regime (\(\mathcal{J}_{1}=-1\)) for an \(N=16\) site system. In Fig. 2, the histograms are colour-coded to indicate the eigenvalues of individual blocks of the block-diagonalized \(H\), labelled by IR. It is evident that each block contains the entire range of the spectrum. For \(\mathcal{J}_{2,3,4}=0\) the eigenvalues are discrete and evenly spaced, \(E_{n}=n\mathcal{J}_{1}/3\) for \(n=-4,\ldots,12\). The ground state energy is \(E_{g}=-4\mathcal{J}_{1}/3\), corresponding to the 90-fold degenerate pure spin ice states (the configurations in which there are two spins pointing into and two spins pointing out of each of the eight tetrahedra in our system). Fig. 2(a) shows the result for \(\mathcal{J}_{3,4}=0\) and \(\mathcal{J}_{2}\approx 0\); here the eigenvalues are almost discrete. A continuous spectrum emerges when any of the constants \(\mathcal{J}_{2,3,4}\) becomes non-zero. Although the block-diagonalization approach followed here is a well-established mathematical one, a distinctly physical picture emerges from the construction. Each basis ket \(|u_{i}\rangle=|\pm\pm\pm\ldots\rangle\) corresponds to a classical state in which each of the spins points into or out of the tetrahedra in the lattice. States with exactly two spins pointing into and two spins pointing out of every tetrahedron are known as "spin ice states." Other possible configurations are those with three spins pointing into and one spin pointing out of (or vice versa) and all spins pointing into or out of a tetrahedron. All configurations can be classified by a set of numbers, \(\{n_{1},n_{2},n_{3}\}\), where \(n_{1}\) is the number of 2-in-2-out tetrahedra, \(n_{2}\) is the number of 3-in-1-out/1-in-3-out tetrahedra, and \(n_{3}\) is the number of all-in/all-out tetrahedra, where \(\sum_{i}n_{i}\) equals the total number of tetrahedra, which is \(N/2\) for periodic boundary conditions. The symmetrized kets \(|\phi_{i}\rangle\), which are the basis for the block-diagonal representation of \(H\), are linear combinations of the basis kets. The \(|\phi_{i}\rangle\) are found by applying all of the symmetry elements to one basis ket. Since the application of any symmetry element to a basis ket will rearrange the spins in the ket _while preserving the numbers_ \(n_{i}\), each symmetrized ket can be characterized by the same set of numbers \(n_{i}\). In particular, by construction, there will be a set of symmetrized states that are pure spin ice states with \(n_{1}=8\) and \(n_{2,3}=0\). The colour scheme of Figs. 3-6 represents the average values of the \(n_{i}\) in each bin of the histogram, using red for \(n_{1}\), green for \(n_{2}\) and blue for \(n_{3}\).
The colour green dominates these plots because the 3-in-1-out/1-in-3-out configurations are more numerous. \(n_{1}\) (the number of 2-in-2-out tetrahedra) is largest at the lowest energy part of the spectrum and appears as bright red. Small variations of the coupling constants produce perturbative shifts of their eigenvalues and mixing of the eigenstates, which smears the colours toward a uniform green. When \(\mathcal{J}_{2,3,4}=0\) the basis states \(|u_{i}\rangle=|\pm\pm\pm\ldots\rangle\) are eigenstates of \(H\) with energy \(\frac{\mathcal{J}_{1}}{6}(-n_{1}+3n_{3})\). All of the states are at least two-fold degenerate, with the highest degeneracy associated with states containing 3-in-1-out/1-in-3-out configurations. The symmetrized kets \(|\phi_{i}\rangle\), which are the basis of the block-diagonalized Hamiltonian, are linear combinations of degenerate states and are the eigenstates of the general Hamiltonian (3) to zeroth order in degenerate perturbation theory. By construction, these states are highly entangled, although more involved calculations that account for the degeneracies in the representations, as well as the near-degeneracies in the spectrum for small values of the coupling constants \(\mathcal{J}_{2,3,4}\) (which become relevant at non-zero temperature), are needed to properly characterize the entanglement of the system. To summarize, in this paper we discuss the application of group theory methods to block-diagonalize the Hamiltonian matrix of a 16-site pyrochlore magnet. The method is essentially analytic, such that in principle the unitary matrix that block-diagonalizes \(H\) can be determined with absolute precision; however, in our implementation this is handled numerically. Once calculated, the unitary transformation need only be applied once to each of the four terms in \(H\); we then vary the coupling constants and find _all_ the eigenvalues numerically. In this way, one can study the phase diagram spanned by the coupling constants of the model [13]. ###### Acknowledgements. We thank Oliver Stueker for assistance with using the resources at Compute Canada and Kyle Hall for helpful discussions about coding. This work was performed using the resources at Compute Canada and supported by the Natural Sciences and Engineering Research Council of Canada. ## Appendix A The space group In a pyrochlore crystal, with space group \(Fd\bar{3}m\), the magnetic sites are located at the 16d Wyckoff position, which has site symmetry \(D_{3d}\). This site symmetry plays an important role in quantum magnets, because it ensures that the lowest energy state of the electronic part of the magnetic ion is either a singlet or a doublet. In many real crystals the magnetic ion is a rare earth, and the electronic ground state is a doublet that is well-separated from higher energy levels; moreover, in some cases the doublet has exactly the same symmetry as a spin-1/2 spinor, which allows one to model these systems as spin-1/2 states residing at the 16d positions, which are the vertices of a network of corner-sharing tetrahedra - the quintessential frustrated quantum magnet. The point group \(D_{3d}\) contains a 3-fold axis (see Fig. 1), which is the axis of highest symmetry.
It is convenient to express the electronic total angular momentum with respect to that axis; also, by construction, angular momentum states \(|j,m\rangle\) are eigenstates of \(J_{z}\), thus to each magnetic site we attach a set of local axes, such that the local \(z\)-axis points in the direction of the 3-fold axis of \(D_{3d}\) (see Ref. [17] for more details). When a space group operation is applied on a crystal with periodic boundary conditions, equivalent sites (such as the 16d positions) will be permuted. The permutation can be found by explicit application of the space group operation on the position of the magnetic sites. In addition, the local axes attached to each site may also be rotated by one of the elements of \(D_{3d}\). The elements of \(D_{3d}\) are \(\{E,C_{3z},C_{3z}^{2}\}\times\{E,C_{2y}\}\times\{E,I\}\). The actions of these operators on the spin states are: \(C_{3z}|\pm\rangle=\exp\left(\mp i\frac{\pi}{3}\right)|\pm\rangle\), \(C_{2y}|\pm\rangle=\mp|\mp\rangle\) and \(I|\pm\rangle=(-)|\pm\rangle\). The element \(I\) may produce a sign change depending on the parity of the electronic state, but since the phase factors for all sites will be multiplied, and since we will always consider an even number of spins, this factor is of no relevance. For each space group element we find and store the permutation of the sites it produces and the action on the local sites (one of the 12 elements of \(D_{3d}\)). The sub-group \(\{O_{h}\}\times\{E,\tau_{1},\tau_{2},\tau_{3}\}\) corresponds to a system with 16 magnetic sites arranged inside a cube, with periodic boundary conditions assumed. The group elements are listed in the first row of Table 1. Here \(C_{n}\) is an \(n\)-fold rotation, \(I\) is inversion and \(\tau\) is a translation. Some of the rotations are screw rotations, which are rotations followed by a translation along the rotation axis. Similarly, some of the improper rotations (reflections) are actually glides, which are reflections followed by a translation within the mirror plane. The details of all the group operations can be found in Ref. [14]. When applied to the spins states of a cubic cell, the group operations will permute the sites, possibly introduce a phase, and possibly reverse the spin at each site. The different columns in Table 1 refer to different _classes_ of group elements, whereby two elements \(\mathcal{A}\) and \(\mathcal{B}\) belong to the same class if there is a group element \(\mathcal{G}\) such that \(\mathcal{B}=\mathcal{G}\mathcal{A}\mathcal{G}^{-1}\). Generally speaking, elements in the same class are physically similar to each other, and the matrix representations of elements belonging to the same class have the same traces. The rows refer to different irreducible representations \(\Gamma^{(i)}\) of the group; their dimensions \(d_{i}\) are given in the second column. The set of numbers in each row of the table are the character of the representation (the traces of the matrix representations of the group elements). Representations are equivalent if they have the same character; the character of a reducible representation is the sum of the characters of the IRs in its decomposition. ## Appendix B Block Diagonalization The unitary transformation that block-diagonalizes the Hamiltonian is found by generating the complete set of orthogonal, normalized, symmetrized kets belonging to each IR by applying the projection operators \(\mathcal{P}_{\kappa\kappa}^{(j)}\) to the basis kets, as follows. 
For a given basis ket \(|\pm\pm\pm\ldots\rangle\) we first generate all of its partners by applying to it each of the symmetry elements \(P_{R}\). The result will be a basis ket (which may be the given basis ket we started with) with a phase (which may be 1). A set of \(d\) unique partners is the basis of a \(d\)-dimensional reducible representation; its decomposition into IR's is then determined using characters. In our case, because the group elements transform single kets into single kets, the matrices representing the group elements will have only one non-zero element (a phase) in each row and column, and the trace will simply be the sum of the phases attached to the kets which transform into themselves. Each one-dimensional IR will appear at most once in this decomposition, but the multi-dimensional IR's may appear more than once. For example, the 16-site ket \(|++++++++++++++++--\rangle\) is one of 96 partners that are a basis for the reducible representation \(A_{1g}\oplus A_{2g}\oplus A_{1u}\oplus A_{2u}\oplus 2E_{g}\). In our 16-site system, the smallest set of partners contains two basis kets, \(|+++\ldots+\rangle\) and \(|---\ldots-\rangle\), which are a basis for \(E_{g}\), while the largest sets of partners contain 192 partners (the number of group elements), which are a basis for the _regular_ representation, \(\sum d_{i}\Gamma^{(i)}\). Once the set of partners and the decomposition of its representation has been determined, we apply the projection operators \({\cal P}^{(j)}_{\kappa\kappa}\) to the \(d\) partners in order to find the symmetrized basis kets. The symmetrized basis kets belonging to the 1D IR's are easily obtained: since they each contain all of the partners, any of the partners can be used to generate them by one application of \({\cal P}^{(j)}_{11}\). The difficulty with the multi-dimensional representations is that their symmetrized kets do not necessarily contain all of the partners, and so it may take many applications of \({\cal P}^{(j)}_{\kappa\kappa}\) on different partners in order to find all of the symmetrized kets. To minimize the number of failed applications of \({\cal P}^{(j)}_{\kappa\kappa}\) to a set of partners, for each representation \(j\) we construct a \(d\times d_{j}\) _'flag'_ array, where \(d\) is the number of partners and \(d_{j}\) is the dimension of the \(j\)th representation. The rows are indexed by the partner number \(p=1\) to \(d\) and the columns by the block number \(\kappa=1\) to \(d_{j}\). This array keeps track of the number of times a partner appears in the symmetrized kets belonging to each block of the \(j\)th IR, which can be at most one. We observe that the sum of the entries in a row is the same for all rows. Thus we generate the symmetrized kets for the \(j\)th representation as follows. _i)_ Apply to the first partner \(p=1\) the projectors \({\cal P}^{(j)}_{\kappa\kappa}\) for \(\kappa=1\) to \(d_{j}\). The non-zero results are symmetrized kets belonging to the \(\kappa\)th block of the \(j\)th IR. _ii)_ For each partner \(p\) appearing in a symmetrized ket, change the entry \((p,\kappa)\) in the flag matrix to one, where \(p\) is the partner number. Thus the first row of the flag array is determined, and the sum of its elements (which we will call _'flag-sum'_) can be found.
_iii)_ While the sum of the elements in the second row of the flag array is less than _flag-sum_, apply to the second partner \(p=2\) the projectors \({\cal P}^{(j)}_{\kappa\kappa}\) for the values of \(\kappa\) where the \((2,\kappa)\) entries of the flag array are zero. A non-zero result is a symmetrized ket; for each partner appearing in a symmetrized ket, change the entry \((p,\kappa)\) in the flag matrix to one, where \(p\) is the partner number. _iv)_ Consider subsequent partners \(p\) until \(d\) symmetric basis vectors have been found. We also create a flag array of length \(2^{16}\) which keeps track of original basis kets. Each time a set of partners (of original basis kets) is found and all the symmetrized kets have been generated from that set of partners then all the partners are flagged. Thus the procedure for finding all the symmetrized kets is as follows. _i)_ Beginning with the first ket, \(|+++\ldots+\rangle\), find its partners and the symmetrized kets as described above; flag all of the partners. _ii)_ Repeat for all kets numbered \(n=2\) to \(2^{N}\), skipping any kets that have been flagged. ## Appendix C Extension to 32 sites The 32-site system contains 8 tetrahedra within two adjoining conventional (cubic) cells. The symmetry group is \(\{O_{h}\}\times\{E,\tau_{1},\ldots\tau_{7}\}\), where the additional translations are : \(\tau_{4}=(1,0,0)\), \(\tau_{5}=\tau_{1}+\tau_{4}=(1,1/2,1/2)\), \(\tau_{6}=\tau_{2}+\tau_{4}=(3/2,0,1/2)\) and \(\tau_{7}=\tau_{3}+\tau_{4}=(3/2,1/2,0)\). The symmetry group contains 384 elements divided into 20 classes; its character table is given in Table 3. The decomposition of the representation generated by the \(2^{32}\) basis kets is given in Table 4.
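A minimal sketch of the partner-generation step of Appendix B, under simplifying assumptions: the group below is a small stand-in (not the 192- or 384-element groups used in the paper) and the phases that real group elements attach to kets are omitted, so each element simply permutes or flips the spins.

```python
# Toy stand-in group on a 4-site ring: identity, translation by two sites,
# global spin flip, and their product. The real space groups are much larger
# and their elements can also attach phases, which are ignored here.
def identity(ket): return ket
def translate(ket): return ket[2:] + ket[:2]
def flip(ket): return tuple(-s for s in ket)

GROUP = {"E": identity,
         "T": translate,
         "X": flip,
         "TX": lambda k: flip(translate(k))}

def partners(ket):
    """The orbit of a basis ket under the group: its set of 'partners'."""
    return {op(ket) for op in GROUP.values()}

def characters(orbit):
    """Trace of each group element in the representation spanned by the
    partners (each element contributes +1 per partner it maps onto itself;
    with phases present, the phase would be added instead)."""
    return {name: sum(1 for k in orbit if op(k) == k)
            for name, op in GROUP.items()}

ket = (+1, +1, -1, -1)
orbit = partners(ket)
print("partners:", sorted(orbit))
print("characters:", characters(orbit))
# These characters, combined with the group's character table, give the
# multiplicity of each IR in the orbit; the projectors are then applied to
# the partners, with the flag array tracking which (partner, block)
# combinations still need to be tried.
```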
2309.12067
Survey of Action Recognition, Spotting and Spatio-Temporal Localization in Soccer -- Current Trends and Research Perspectives
Action scene understanding in soccer is a challenging task due to the complex and dynamic nature of the game, as well as the interactions between players. This article provides a comprehensive overview of this task divided into action recognition, spotting, and spatio-temporal action localization, with a particular emphasis on the modalities used and multimodal methods. We explore the publicly available data sources and metrics used to evaluate models' performance. The article reviews recent state-of-the-art methods that leverage deep learning techniques and traditional methods. We focus on multimodal methods, which integrate information from multiple sources, such as video and audio data, and also those that represent one source in various ways. The advantages and limitations of methods are discussed, along with their potential for improving the accuracy and robustness of models. Finally, the article highlights some of the open research questions and future directions in the field of soccer action recognition, including the potential for multimodal methods to advance this field. Overall, this survey provides a valuable resource for researchers interested in the field of action scene understanding in soccer.
Karolina Seweryn, Anna Wróblewska, Szymon Łukasik
2023-09-21T13:36:57Z
http://arxiv.org/abs/2309.12067v1
Survey of Action Recognition, Spotting and Spatio-Temporal Localization in Soccer - Current Trends and Research Perspectives ###### Abstract Action scene understanding in soccer is a challenging task due to the complex and dynamic nature of the game, as well as the interactions between players. This article provides a comprehensive overview of this task divided into action recognition, spotting, and spatio-temporal action localization, with a particular emphasis on the modalities used and multimodal methods. We explore the publicly available data sources and metrics used to evaluate models' performance. The article reviews recent state-of-the-art methods that leverage deep learning techniques and traditional methods. We focus on multimodal methods, which integrate information from multiple sources, such as video and audio data, and also those that represent one source in various ways. The advantages and limitations of methods are discussed, along with their potential for improving the accuracy and robustness of models. Finally, the article highlights some of the open research questions and future directions in the field of soccer action recognition, including the potential for multimodal methods to advance this field. Overall, this survey provides a valuable resource for researchers interested in the field of action scene understanding in soccer. action recognition, action spotting, soccer datasets, spatio-temporal action localization, modality fusion, multimodal learning ## 1 Introduction Soccer is one of the most popular and lucrative sports worldwide, with billions of fans and many players. In recent years, there has been an increasing interest in using computer vision and machine learning techniques to automatically extract information from match recordings to get valuable insights about the strengths and weaknesses of teams. Understanding the actions that occur during a match is essential for both coaches and players to improve performance and gain a competitive edge. Similarly, scouts visit sports clubs to evaluate the performance and actions of young players to identify those with the most talent that could later be transferred to higher leagues. Automatic retrieval of such information could support scouts' decisions, saving money and time. There are many possible applications of this process in the television industry. For example, the ability to recognize game actions can enable producers to optimize and automate the broadcast production process, emphasizing key aspects of the game to enhance spectator engagement. It is particularly valuable for real-time camera selection, post-game studio soccer analytics, and automatic highlights generation. Action scene understanding has become an increasingly important area of research in the context of soccer (Giancola et al., 2018; Deliege et al., 2021; Li et al., 2021). It poses unique challenges due to the complex and dynamic nature of the game. Players move quickly and often obscure each other, making it difficult to accurately track their movements. Moreover, soccer matches involve a wide range of actions, from simple passes to complex passages of play, tackles and shots on goal, which require different levels of analysis. With recent advancements in machine learning and computer vision, researchers have been exploring various approaches to improve the accuracy of action recognition in soccer. 
In particular, multimodal methods, which combine data from different sources such as video, audio, and other data, have shown promise in improving the accuracy and robustness of action recognition systems. These methods mirror how humans understand the world by utilizing multiple senses to process data. Employing multiple heterogeneous sources to train models presents both challenges and opportunities. The potential advantage is the improvement of model performance compared to unimodal representation, as incorporating additional modalities provides new information and reveals previously unseen relationships. However, multimodal learning also presents certain difficulties. Information from various sources can be redundant, and this should be filtered out in data representation as one vector. Some solutions build models for all modalities and then create a model combining the predictions. Another approach is to prepare an appropriate joint feature representation. The video data recorded during soccer games often includes information about fans' reactions and audio commentary. Also, Internet websites provide games and player statistics, live comments, and textual data with team analysis. Thus, soccer data can be valuable for researchers experimenting with multimodal learning. This survey provides a comprehensive overview of recent research on action scene understanding, including action recognition (classification of actions in the trimmed video), spotting (detection and classification of actions in the untrimmed video) and spatio-temporal action localization (classification and tracking of specific actions and objects) in soccer, with a particular focus on available modalities and multimodal approaches. We explore the different data sources used in these methods, the various feature extraction and fusion techniques, and the evaluation metrics used to assess their performance. Also, we discuss the challenges and opportunities in this field, as well as the limitations and future research directions. By analyzing the state-of-the-art methods and identifying their strengths and weaknesses, this survey aims to provide a clear and up-to-date overview of the progress made in action recognition in soccer and to provide insights for researchers in this rapidly evolving area. The following are the main contributions of this comprehensive literature review: * We define three tasks in the area of soccer action understanding: action recognition, spotting, and spatio-temporal action localization, along with metrics used to assess the performance of these models. * We prepare a list of soccer datasets for action understanding, highlighting the potential of applying multimodal methods. * We examine a variety of existing state-of-the-art models in action recognition, spotting, and spatio-temporal action localization used in soccer as described in the literature. * Based on the thorough assessment and in-depth analysis, we outline a number of key unresolved issues and future research areas that could be helpful for researchers. The article is organized as follows. Subsection 1.1 highlights why this survey is important and what distinguishes it from others, while a discussion on the potential of using multimodal sources during training soccer models can be found in Section 1.2. Section 2 describes the research strategy. Tasks related to soccer action scene understanding are described in Section 3, and associated metrics and datasets are introduced in Section 4 and Section 5 respectively. 
Section 6 presents methods addressing the three analysed tasks. Future research directions, including the potential for multimodal methods, are discussed in the last section. ### Motivation Several publications listed in Table 1 have appeared in recent years reviewing machine learning systems that address the needs of the sports industry. Surveys (Thomas et al., 2017; Rahmad et al., 2018; Naik et al., 2022; Wu et al., 2022) show applications of computer vision to automatic analysis of various sports, two of which focus on the action recognition task. However, these surveys do not provide a comprehensive analysis of action detection in soccer, as different sports have different game motions and action types. Therefore, a detailed analysis of dedicated datasets and methods specific to soccer is necessary. While there are articles that focus on soccer (D'Orazio and Leo, 2010; Oskouie et al., 2012; Patil et al., 2014; Akan and Varli, 2022), three of them were published before the release of relevant datasets SoccerNet, SoccerNet-v2, and SoccerNet-v3 (Giancola et al., 2018; Deliege et al., 2021; Cioppa et al., 2022), which caused significant development of this field, including transformer-based solutions. Regarding action recognition, these articles describe binary models that classify only certain actions, such as goals or offside. A comprehensive review of various tasks was published in article (Wu et al., 2022) in 2022. Only one reported action recognition solution is evaluated on SoccerNet-v2 (Deliege et al., 2021), more specifically benchmark result is reported. Also, spatio-temporal action localization is not described in this publication. The potential of using multimodal inputs is only briefly described in previous surveys. Only one publication (Oskoue et al., 2012) addresses this topic; however, the methods outlined therein are not considered state-of-the-art nowadays. Mentioned publications focus mainly on action classification (recognition), while action localization in time or spatio-temporal space is more relevant in real-life scenarios. ### Potential of Using Multimodality Soccer is a sport that generates a vast amount of data, including videos, information on players, teams, matches, and events. Numerous matches are documented, and these recordings provide significant insights into the game. They vary in terms of video quality, used device and camera perspective (drone, single-camera, multiple cameras). Beyond the raw video feed, many characteristics can be extracted from video streams, including optical flow, players and ball trajectories, players' coordinates. When audio is available, it could be an additional data source capturing people's reactions, which can be represented using techniques like Mel Spectrogram (Xu et al., 2005). Also, commentary data can be transcribed using automatic speech recognition systems such as Whisper (Radford et al., 2022) or vosk1. Unstructured texts containing match reports can be scrapped from various websites. Soccer clubs gather and analyse a vast amount of information about players to gain an advantage and choose the optimal tactics. Data from GPS systems, accelerometers, gyroscopes, or magnetometers can also be useful in soccer data analysis. Footnote 1: [https://alphacephei.com/vosk/](https://alphacephei.com/vosk/) Combining many modalities is proven to achieve better results than using unimodal representations (Nagrani et al., 2021). 
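One simple and widely used way to exploit several modalities is score-level (late) fusion, in which independent per-modality classifiers are trained and their class probabilities are combined. The sketch below is a generic illustration with made-up class probabilities and equal weights; it is not a method from any of the surveyed papers.

```python
import numpy as np

# Illustrative late fusion: per-modality classifiers produce class
# probabilities for the same clip, and the fused prediction is their
# weighted average (weights and class set are made up for the example).
def late_fusion(prob_by_modality, weights=None):
    """prob_by_modality: dict mapping modality name -> (n_classes,) probabilities."""
    names = sorted(prob_by_modality)
    probs = np.stack([prob_by_modality[m] for m in names])
    if weights is None:
        weights = np.full(len(names), 1.0 / len(names))
    fused = weights @ probs
    return fused / fused.sum()

video_probs = np.array([0.70, 0.20, 0.10])   # e.g. goal / card / substitution
audio_probs = np.array([0.40, 0.50, 0.10])   # crowd reaction favours "card"
fused = late_fusion({"video": video_probs, "audio": audio_probs})
print(fused, "predicted class:", int(fused.argmax()))
```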
Soccer match recordings already contain many data types with predictive potential: audio, video, and textual transcriptions. Additional information about the outcomes of games or the time of individual events is also regularly reported on many websites. To sum up, investigating multimodal approaches is a logical step in soccer data analysis due to the accessibility of diverse data sources. As far as we know, there is no survey on the application of modality fusion in action recognition, spotting, and spatio-temporal action localization in sports videos. This survey provides a comprehensive overview of the current state of research and helps advance the field of action understanding in soccer by identifying areas for future research. \begin{table} \begin{tabular}{l c} \hline \hline **Review** & **Topic** \\ \hline A review of vision-based systems for soccer video analysis (D’Orazio and Leo, 2010) & \(\bullet\) \\ Multimodal feature extraction and fusion for semantic mining of soccer video: A survey (Oskouie et al., 2012) & \(\bullet\) \\ A survey on event recognition and summarization in football Videos (Patil et al., 2014) & \(\bullet\) \\ Computer vision for sports: current applications and research topics (Thomas et al., 2017) & \(\blacksquare\) \\ A Survey of Video Based Action Recognition in Sports (Rahmad et al., 2018) & \(\blacksquare\) \\ A comprehensive review of computer vision in sports: open issues, future trends and research directions (Naik et al., 2022) & \(\blacksquare\) \\ A survey on video action recognition in sports: datasets, methods and applications (Wu et al., 2022) & \(\blacksquare\) \\ Use of deep learning in soccer videos analysis: survey (Akan and Varli, 2022) & \(\bullet\) \\ \hline \hline \end{tabular} \end{table} Table 1: List of previous similar reviews. \(\bullet\) denotes articles related to soccer, while \(\blacksquare\) indicates articles about sport in general. ## 2 Definition of Research Strategy The articles published between 2000 and 2022 are included in this survey. In order to find related articles, we used online databases such as Scopus 2, ScienceDirect 3, ACM 4, IEEE Xplore 5, SpringerLink 6 and SemanticScholar 7 and keyword search. The primary search keys were: _multimodal, multimodality, action recognition, sport, activity recognition, event recognition, event classification, action spotting, event detection, modality fusion, video, audio, text, activity localization, spatio-temporal action recognition, football, soccer, action localization, soccer dataset, soccernet_. Each query produced several articles. Additionally, we manually added many papers to our list by analyzing the references of the papers we identified. According to their relevance and year of publication, some of them were excluded from this analysis. ## 3 Problem Description ### Actions According to [2], soccer actions can be divided based on their relevance into primary and secondary events, as depicted in Figure 1. The primary events directly affect the match's outcome and can directly cause goal opportunities, while the secondary actions are less important and do not affect the result of the match as much. Action analysis can also be divided into three tasks that differ in their use of information about the time and localization of the event: action recognition, action spotting, and spatio-temporal action detection. Figure 2 highlights differences between the mentioned tasks. **Action recognition**, also known as **action/event classification**, is the classification of an event in a trimmed video.
The model receives as input a series of frames; for the entire series, it has to predict which class the video refers to. In contrast, action spotting is a slightly different task which involves identifying the segment of the untrimmed video in which the action occurs, and then classifying it into predefined categories. An action can be temporally localized by defining a boundary that contains a start and end timestamp or a single frame timestamp, such as the start of the action. **Action spotting** is also referred to as **temporal action detection** or **temporal action localization**. In addition, we can distinguish an extension of this problem that incorporates actor location data called **spatio-temporal action detection (localization)**. This task aims to detect temporal and spatial information about action as moving bounding boxes of players. This involves detecting the location, timing, and type of actions performed by players, such as passing, shooting, tackling, dribbling, and goalkeeping. This task is particularly relevant in the analysis of an individual's behaviour and performance. Figure 1: Difference between primary and secondary actions. ### Multimodality Although we intuitively know what multimodality is, the formal definitions differ. One perspective is more human-centred and refers to the way people perceive the world, and other definitions are more related to machine-readable representations. According to Baltrusaitis et al. (2018), _"Our experience of the world is multimodal - we see objects, hear sounds, feel the texture, smell odours, and taste flavours. Modality refers to the way in which something happens or is experienced and a research problem is characterized as multimodal when it includes multiple such modalities."_ (definition 1: human-centered) while Guo et al. (2019) define modality as _"a particular way or mechanism of encoding information"_ (definition 2: machine-centered). Therefore, different encodings for one source (e.g. BERT embeddings and TFIDF encoding for text) would be considered multimodal according to definition 2, but not according to definition 1. This inconsistency was noted in (Parcalabescu et al., 2021), where authors proposed a new task-relative definition of multimodality: _"A machine learning task is multimodal when inputs or outputs are represented differently or are composed of distinct types of atomic units of information"_. Figure 3 highlights that inputs can be processed differently by people and machines. For instance, while people may perceive identical meaning in textual content presented in either text or image form, machine-extracted data can indicate significant dissimilarities. In this work, we analyse both human-centred and machine-centred multimodal models. ## 4 Metrics The most common metric measuring classification performance among all analysed articles is mAP (mean average precision). In the case of action spotting avg-mAP is used, and for spatio-temporal action detection, Video-mAP@\(\delta\) and Frame-mAP@\(\delta\) are used. This section explains and summarizes various metrics used to measure the accuracy of the model's predictions. Figure 2: Comparison of tasks related to action analysis. Frames used in this visualization are from SoccerNet (Deliège et al., 2021) and MultiSports (Li et al., 2021) datasets. ### Action recognition **Precision** measures the model's ability to detect only relevant actions in a video. 
It is defined as the fraction of correctly classified samples (TP) out of all positive predictions (TP + FP): \[Precision=\frac{TP}{TP+FP}.\] **Recall** describes how well the model detects all relevant samples. It is defined as the fraction of correctly classified samples (TP) out of all positive ground truth (TP + FN): \[Recall=\frac{TP}{TP+FN}.\] **F1-score** is the harmonic mean of recall and precision: \[F1\text{-}score=2\cdot\frac{precision\cdot recall}{precision+recall}.\] The objective of the model is to achieve both high precision and recall. However, in practice, a trade-off between these metrics is chosen. The relationship between them can be depicted through the **Precision-Recall (PR) curve**, which illustrates the values of precision on the y-axis and recall on the x-axis for different thresholds of the model's confidence score. The term **AP** is an abbreviation for average precision and refers to the area under the precision-recall curve computed across all recall values. It is worth mentioning that a high AP value indicates a balance of both high precision and recall. Given that the PR curve often exhibits a zigzag pattern, calculating the area under the curve (AUC) accurately can be challenging. Therefore, various interpolation techniques, such as 11-point interpolation (Zhang and Zhang, 2009) or interpolation based on all points, are commonly employed. The 11-point precision-recall curve interpolates precision at 11 recall levels (0.0, 0.1, ..., 1.0). The interpolated precision \(P_{interp}(R)\) at recall level \(R\) is defined as the maximum precision with a recall value greater than or equal to \(R\), as given in the equation below. \[P_{interp}(R)=max_{R^{\prime}\geq R}P(R^{\prime}).\] Thus, the estimate of the AP value can be defined as the mean of interpolated precisions over the 11 recall values: \[AP_{11}=\frac{1}{11}\sum_{R\in\{0.0,0.1,\ldots,1.0\}}P_{interp}(R).\] Figure 3: Difference between extracting information by people and machines. = denotes the same data, while \(\neq\) means different. Visualization inspired by Parcalabescu et al. (2021). Interpolation based on all points takes all points into account. AP is computed as the weighted mean of precisions at each recall (R) level. The weights are differences between current and previous recall values. \[AP_{all}=\sum_{n=1}^{N}(R_{n}-R_{n-1})P_{interp}(R_{n}),\] where \[P_{interp}(R_{n})=max_{R^{\prime}\geq R_{n}}P(R^{\prime}).\] **mean AP (mAP)** measures the accuracy of a classifier over all classes and is defined as \[mAP=\frac{1}{|\Gamma|}\sum_{\gamma\in\Gamma}AP(\gamma),\] where \(\Gamma\) is the set of all possible classes and \(AP(\gamma)\) denotes the AP of class \(\gamma\). **Top-N Accuracy** is the fraction of correct predictions, where a prediction is counted as correct when any of the model's N highest probability scores matches the expected answer (ground truth). In other words, it measures the proportion of correctly classified instances where the predicted class is among the N most probable classes. Top-N accuracy is a useful metric in situations where the exact class prediction may not be critical, but rather the identification of a set of probable classes that may be relevant. ### Action spotting Unlike classic classification, action spotting also considers the time at which the action takes place. This aspect is also reflected in the metrics used to assess the models.
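A minimal sketch of the 11-point interpolated AP defined above; it assumes the precision-recall pairs have already been obtained by sweeping the detector's confidence threshold.

```python
import numpy as np

def interpolated_precision(recalls, precisions, r):
    """P_interp(r): maximum precision among points with recall >= r."""
    mask = recalls >= r
    return precisions[mask].max() if mask.any() else 0.0

def ap_11_point(recalls, precisions):
    """11-point interpolated AP from a precision-recall curve."""
    levels = np.linspace(0.0, 1.0, 11)
    return np.mean([interpolated_precision(recalls, precisions, r) for r in levels])

# Toy PR curve (e.g. obtained by sweeping the confidence threshold of a detector).
recalls = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
precisions = np.array([1.0, 0.9, 0.7, 0.7, 0.5, 0.3])
print(f"AP_11 = {ap_11_point(recalls, precisions):.3f}")
```

Averaging this quantity over all classes gives the mAP reported by the surveyed works.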
**avg-mAP** is the average of mAP for different tolerance values and can be defined as \[avg\text{-}mAP=\frac{1}{|\Theta|}\sum_{\theta\in\Theta}mAP(\theta),\] where \(\Theta\) is a set of different tolerances. Authors of [4] divided this metric into two subgroups: **loose avg-mAP** for tolerances \(5\)s, \(10\)s, \(\ldots\), \(60\)s, and **tight-avg-mAP** with tolerances \(1\)s, \(2\)s, \(\ldots\), \(5\)s. If high accuracy of action localization is required, then **tight-avg-mAP** will be more appropriate. ### Spatio-temporal action detection Not only the time aspect but also the localization (bounding box coordinates) is important in spatio-temporal action detection. This section begins by introducing an assessment of the accuracy of the location of the action in a frame and then describes the associated metrics. **Intersection over Union (IoU)**, also called the Jaccard Index, measures the overlap of the predicted bounding box and the ground truth bounding box. IoU is scale-invariant, which means that the similarity between two shapes is not affected by the scale of their space. \[IoU=\frac{\text{Area of Overlap}}{\text{Area of Union}}=\frac{P\cap GT}{P\cup GT},\] where P denotes the prediction and GT means the ground truth. An action is considered correctly classified and localized if the IoU between the predicted and the ground-truth bounding boxes is above a threshold \(\delta\). **3D Intersection over Union (3D IoU)**: The intersection over union in 3D is computed as an overlap between two cuboids. \[3D\ IoU=\frac{\text{Volume of Overlap}}{\text{Volume of Union}}=\frac{P\cap GT}{P\cup GT}.\] In the study introducing the MultiSports dataset [Li et al., 2021], the authors defined 3D IoU (spatio-temporal IoU) as the IoU over the temporal domain multiplied by the average of the IoU between the overlapped frames (also used in [Singh et al., 2022, Weinzaepfel et al., 2015]). **MABO (Mean Average Best Overlap)**: ABO (Average Best Overlap) [Uijlings et al., 2013, Kalogeiton et al., 2017] for class \(c\) is defined as \[ABO(c)=\frac{1}{|G^{c}|}\sum_{g_{i}^{c}\in G^{c}}max_{I_{j}\in L}IoU(g_{i}^{c}, I_{j}),\] where \(G^{c}\) is the set of all ground truths for class \(c\) and \(L\) is the set of predicted bounding boxes. The intersection over union (IoU) between every ground truth and the predicted boxes (or tubes) is computed. Then, for each ground-truth box or tube, the overlap of the detection with the highest IoU value is retained (best-overlapping detection, BO). Next, for every class, an average of all maximum intersections is calculated. The mean of **ABO** over all classes is called **MABO**. **Video-mAP@\(\delta\) and Frame-mAP@\(\delta\)**: Two groups of metrics can be considered to evaluate spatio-temporal action detectors: frame and video level. In the frame-level case, metrics such as AP are computed for a defined IoU threshold \(\delta\). A prediction is considered correct if its IoU with a ground truth box is greater than the given threshold and the predicted label matches the ground truth one. Then, mean AP is computed by averaging over all classes. In the literature, it is often referred to as **frame-mAP@\(\delta\)**, **f@\(\delta\)** or **f-mAP@\(\delta\)**. 3D IoU between predicted tubes and ground truth tubes is often used to report video-level metrics. A predicted tube is considered correct if its 3D IoU with a ground-truth tube is above \(\delta\) and it is correctly classified. Similarly, averaging the metrics (e.g.
AP) over classes gives an overall metric such as **video-mAP@\(\delta\)** (also denoted as **v-mAP@\(\delta\)** and **v@\(\delta\)**). By analogy, **Precision@k** and **Recall@k** can be defined. Motion mAP and Motion APIn article [Singh et al., 2022], new metrics considering motion have been introduced. Actions are divided into three categories based on their motion size (large, medium, and small). With these labels, Average Precision (AP) is computed for each motion category. Computing the AP for each action class and then averaging the results for motion categories is referred to as the **Motion-mAP** while computing the AP for the motion categories regardless of action class is called the **MotionAP**. They are calculated at video and frame levels. ## 5 Datasets The potential to use machine learning to analyse tactics and statistics in sports has automatically resulted in a significant increase in publicly available datasets [Giancola et al., 2018, Deliege et al., 2021, Tsunoda et al., 2017, Karimi et al., 2021, Li et al., 2021, Pappalardo et al., 2019]. Television broadcasters record sports games along with commentary, while online platforms offer detailed information and player statistics pertaining to the game. Thus, creating a multi-modal database should be relatively easy and intuitive. Table 2 examines the availability of different modalities in published soccer datasets for action recognition, spotting and spatio-temporal action localization. ### Soccer Datasets SoccerNetSoccerNet8[Giancola et al., 2018] was introduced as a benchmark dataset for action spotting in soccer. The dataset consists of 500 soccer matches from the main European Championships (764 hours in total) and annotations of three action types: goal, card, and substitution. Each match is divided into two videos: one for each half of the match. The matches are split into a train (300 observations), test (100 observations), and validation datasets (100 observations). Also, the authors published extra 50 observations without labels for the challenge task. Footnote 8: [https://www.soccer-net.org/data](https://www.soccer-net.org/data) SoccerNet-v2SoccerNet-v2 [Deliege et al., 2021] enriched the SoccerNet dataset by manually annotating 17 action categories. In contrast to its predecessor, actions occur around 17 times more frequently (8.7 events per hour in SoccerNet, 144 actions per hour in SoccerNet-v2). Table 3 shows action types and their frequency. Each action has an assigned visibility category (shown and unshown) to indicate whether an action was shown in the broadcast video. Detecting unshown actions is very difficult and requires understanding the temporal context of actions. 
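Per-class statistics such as those reported in Table 3, including the share of unshown actions, can be tallied directly from the released annotation files. The sketch below assumes the Labels-v2.json layout shipped with SoccerNet-v2, in which each game directory contains an `annotations` list whose entries carry `label` and `visibility` fields; this layout is an assumption that should be checked against the downloaded data.

```python
import json
from collections import Counter
from pathlib import Path

def count_actions(dataset_root):
    """Tally SoccerNet-v2 action labels and their visibility over all games.

    Assumes each game directory contains a Labels-v2.json file with an
    "annotations" list of dicts holding "label" and "visibility" keys
    (the exact schema should be checked against the downloaded data).
    """
    per_label, unshown = Counter(), Counter()
    for labels_file in Path(dataset_root).rglob("Labels-v2.json"):
        annotations = json.loads(labels_file.read_text())["annotations"]
        for ann in annotations:
            per_label[ann["label"]] += 1
            if ann.get("visibility") == "not shown":
                unshown[ann["label"]] += 1
    return per_label, unshown

if __name__ == "__main__":
    per_label, unshown = count_actions("path/to/SoccerNet")
    for label, n in per_label.most_common():
        print(f"{label:20s} total={n:6d} unshown={unshown[label]:5d}")
```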
Table 2: Overview of the published soccer datasets for action recognition, spotting and spatio-temporal action localization, listing the task addressed by each dataset, the availability of video/photo and audio modalities, and whether the dataset is publicly available. Figure 4 presents examples of actions for a selected match. Moreover, this dataset includes manual annotations for camera shot segmentation with boundary detection and a replay grounding task. It is worth mentioning that SoccerNet-v2 recordings include audio commentary. We analysed the audio commentary in SoccerNet-v2 with the ASR (Automatic Speech Recognition) model Whisper (Radford et al., 2022). We noticed that \(37\) out of \(1000\) observations (\(3.7\)%) do not have audio commentary. Figure 5 shows the distribution of detected languages. Each half of the match is analyzed as a separate observation. Sometimes, the first half of the game's commentary is in a different language than the second. **SoccerNet-v3**: SoccerNet-v3 (Cioppa et al., 2022) is an extension of SoccerNet-v2 (Deliège et al., 2021) containing spatial annotations and associations between different view perspectives. Action annotations have been enriched with associated frames from the replay clips (\(21,222\) instances have been added). Therefore, it enables the exploration of multi-view action analysis. Also, they added line and goal annotations, bounding boxes of players and referees (\(344,660\) instances), bounding boxes of objects including the ball, flag, and red/yellow card (\(26,939\) instances), multi-view player correspondences (\(172,622\) instances) and jersey numbers (\(106,592\) instances).
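The commentary-language analysis described above can be approximated with the openly released Whisper models. The sketch below assumes the audio track of each half has already been extracted to a separate file (for example with ffmpeg) and uses the `openai-whisper` Python package, whose `transcribe` call also reports the detected language; the file naming is an assumption.

```python
from collections import Counter
from pathlib import Path

import whisper  # pip install openai-whisper

def detect_commentary_languages(audio_dir, model_name="base"):
    """Detect the dominant commentary language of each audio file.

    Each half of a match is treated as a separate observation, mirroring
    the analysis in the text; the *.wav naming convention is an assumption.
    """
    model = whisper.load_model(model_name)
    languages = Counter()
    for audio_path in sorted(Path(audio_dir).glob("*.wav")):
        # transcribe() also performs language identification; for long files
        # it may be enough to pass only the first 30 s of audio.
        result = model.transcribe(str(audio_path))
        languages[result["language"]] += 1
        print(audio_path.name, "->", result["language"])
    return languages

if __name__ == "__main__":
    print(detect_commentary_languages("path/to/commentary_audio"))
```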
Football actionsThe dataset (Tsunoda et al., 2017) consists of two match recordings (each lasting 10 minutes) from 14 cameras located in different places. Five actions are manually annotated: pass, dribble, shoot, clearance, and loose Figure 5: Distribution of commentary languages detected by Whisper (Radford et al., 2022) in SoccerNet-v2 (Deliage et al., 2021) dataset. \begin{table} \begin{tabular}{c c c c c} \hline \hline **Action** & **\#Train** & **\#Test** & **\#Valid** & **Total** \\ \hline Ball out of play & 19097 & 6460 & 6253 & 31810 \\ Clearance & 4749 & 1631 & 1516 & 7896 \\ Corner & 2884 & 999 & 953 & 4836 \\ Direct free-kick & 1379 & 382 & 439 & 2200 \\ Foul & 7084 & 2414 & 2176 & 11674 \\ Goal & 995 & 337 & 371 & 1703 \\ Indirect free-kick & 6331 & 2283 & 1907 & 10521 \\ Kick-off & 1516 & 514 & 536 & 2566 \\ Offside & 1265 & 416 & 417 & 2098 \\ Penalty & 96 & 41 & 36 & 173 \\ Red card & 34 & 8 & 13 & 55 \\ Shots off target & 3214 & 1058 & 984 & 5256 \\ Shots on target & 3463 & 1175 & 1182 & 5820 \\ Substitution & 1700 & 579 & 560 & 2839 \\ Throw-in & 11391 & 3809 & 3718 & 18918 \\ Yellow card & 1238 & 431 & 378 & 2047 \\ Yellow-\(>\)red card & 24 & 14 & 8 & 46 \\ \hline \hline \end{tabular} \end{table} Table 3: Number of actions in SoccerNet-v2 dataset (Deliège et al., 2021). ball. Additionally, the authors provided annotations of the ball and players' 3D positions. Examples of images from this dataset are presented in Figure 6. Comprehensive SoccerComprehensive Soccer dataset9[Yu et al., 2018, Yu et al., 2019] is a dataset containing \(222\) broadcast soccer videos (\(170\) hours in total) in HD 720p (\(40\%\) of observations) and 360p (\(60\%\) of observations), 25 fps. They notice that most datasets focus on a single task, while a multi-task approach is necessary to analyse sports videos. Their dataset covers three tasks: shot boundary detection (far-view, medium-view, close-view, out-of-field view, playback shot), event detection and player tracking. 
They divided event annotation into two levels of granularity: event and story proposing \(11\) action classes: overhead kick, solo drive, goal, shot, corner, free kick, penalty kick, red card, yellow card, \begin{table} \begin{tabular}{l l c c c c} \hline \multicolumn{1}{c}{**Action**} & **SoccerNet** & **SoccerNet-v2** & **ComprSoccer** & **SSET** & **SoccerDB** \\ \hline overhead kick & ✗ & ✗ & ✓ & ✓ & ✗ \\ solo drive & ✗ & ✗ & ✓ & ✓ & ✗ \\ goal & ✓ & ✓ & ✓ & ✓ & ✓ \\ shot & on target & ✓ & ✓ & ✓ & ✓ \\ off target & ✗ & ✓ & ✓ & ✓ & ✓ \\ corner & ✗ & ✓ & ✓ & ✓ & ✓ \\ free kick & direct & ✗ & ✓ & ✓ & ✓ \\ penalty kick & ✗ & ✗ & ✓ & ✓ & ✓ \\ card & red & ✓ & ✓ & ✓ & ✓ \\ yellow & ✓ & ✓ & ✓ & ✓ & ✓ \\ foul & ✗ & ✓ & ✓ & ✓ & ✓ \\ offside & ✗ & ✓ & ✓ & ✓ & ✗ \\ substitution & ✓ & ✓ & ✗ & ✗ & ✓ \\ ball out of play & ✗ & ✓ & ✗ & ✗ & ✗ \\ throw-in & ✗ & ✓ & ✗ & ✗ & ✗ \\ clearance & ✗ & ✓ & ✗ & ✗ & ✗ \\ kick off & ✗ & ✓ & ✗ & ✗ & ✗ \\ penalty & ✗ & ✓ & ✗ & ✗ & ✗ \\ yellow-\(>\)red card & ✗ & ✓ & ✗ & ✗ & ✗ \\ injured & ✗ & ✗ & ✗ & ✗ & ✓ \\ saves & ✗ & ✗ & ✗ & ✗ & ✓ \\ _corner\&goal_ & ✗ & ✗ & ✓ & ✓ & ✗ \\ _corner\&shot_ & ✗ & ✓ & ✓ & ✓ & ✗ \\ _free kick\&goal_ & ✗ & ✓ & ✓ & ✓ & ✗ \\ _free kick\&shot_ & ✗ & ✓ & ✓ & ✓ & ✗ \\ \hline **\#classes** & 3 & 17 & 11(+4) & 11(+4) & 10* \\ **Duration [hours]** & 764 & 764 & 170 & 282 & 669 \\ **\#events** & 6,637 & 110,458 & 6,850 & 10,619 & 37,715 \\ **Freq [\#events/hour]** & 8.7 & 144.6 & 40.3 & 30.3 & 56.4 \\ \hline \end{tabular} \end{table} Table 4: Comparison of action types in action spotting datasets. *means that the background class (a category that does not belong to the main classes of interest) is not counted as a separate class. Figure 6: Example from Football actions dataset. Source: [Tsunoda et al., 2017]. foul, offside, and extra four-story labels: corner&goal, corner&shot, free kick&goal, free kick&shot. While action describes a single activity, the story provides a comprehensive narrative with contextual background (see Figure 7 for more details). They suggest that shot analysis can be essential to action analysis because various views can show different perspectives. For instance, a close-view shot can capture players, coaches and audiences when an event is happening, while far-view present tactics and the arrangement of players in the attacking and defensive team. SseetSSET dataset10[Feng et al., 2020] is an extension of **Comprehensive Soccer dataset [Yu et al., 2018]**. The authors have enriched the previous dataset with 128 recordings and increased the number of event annotations. Finally, the introduced dataset consists of \(350\) videos lasting \(282\)h in total. Footnote 10: [http://media.hust.edu.cn/dataset.htm](http://media.hust.edu.cn/dataset.htm) SoccerDB11[Jiang et al., 2020] is a dataset with annotations of four tasks: object detection, action recognition, action spotting, and video highlight detection for \(343\) soccer matches divided into \(171,191\) video segments (some of them are from the SoccerNet dataset). It is worth mentioning that bounding boxes of players and the ball are available but not assigned to player numbers, so this dataset cannot be used for player tracking. Footnote 11: [https://github.com/newsdata/SoccerDB](https://github.com/newsdata/SoccerDB) Although SoccerNet, SoccerNet-v2, Comprehensive Soccer, SSET, and SoccerDB are designed for the same task, the defined action labels differ. A comparison of available classes and their statistics can be found in Table 4. 
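Because the label vocabularies in Table 4 only partially overlap, experiments that pool several of these datasets typically map dataset-specific labels onto a shared taxonomy first. The mapping below is an illustrative sketch based on the overlaps visible in Table 4; the canonical names and groupings are assumptions, not an established convention.

```python
# Illustrative label harmonization across datasets (based on the overlaps in
# Table 4; the canonical names and groupings are a sketch, not a standard).
LABEL_MAP = {
    "soccernet-v2": {"Goal": "goal", "Shots on target": "shot",
                     "Shots off target": "shot", "Corner": "corner",
                     "Yellow card": "card", "Red card": "card", "Foul": "foul"},
    "soccerdb": {"goal": "goal", "shot": "shot", "corner": "corner",
                 "red card": "card", "yellow card": "card", "foul": "foul"},
    "sset": {"goal": "goal", "shot": "shot", "corner": "corner",
             "red card": "card", "yellow card": "card", "foul": "foul"},
}

def to_canonical(dataset, label):
    """Map a dataset-specific action label to the shared vocabulary,
    returning None for labels that have no counterpart."""
    return LABEL_MAP.get(dataset, {}).get(label)

print(to_canonical("soccernet-v2", "Shots on target"))  # -> "shot"
print(to_canonical("sset", "overhead kick"))            # -> None
```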
Soccer-logsSoccer-logs123[Pappalardo et al., 2019] is a large-scale dataset of temporal and spacial soccer events provided by Wyscout. Although they released a huge dataset with over \(3\) million events (\(100\) times more than the largest open-source dataset SoccerDB), video analysis is hindered because the video files have not been available. Besides events data, authors provide files including annotations of competitions, matches, teams, players, referees and coaches. Footnote 12: [https://sobigdata-soccerchallenge.it/](https://sobigdata-soccerchallenge.it/) SEV datasetIt 14[Karimi et al., 2021] consists of \(42000\) event-related images split into train, test, and validation data. The dataset includes annotations for 7 soccer events: corner kick, penalty kick, free kick, red card, yellow card, tackle, and substitute. Footnote 14: [https://figshare.com/collections/Soccer_match_event_dataset/4415000](https://figshare.com/collections/Soccer_match_event_dataset/4415000) Eigd-SEIGD-S 15[Biermann et al., 2021] is a dataset consisting of five soccer matches recordings with gold standard annotations for \(125\) minutes. The dataset includes multiple annotations for two matches from 4 experts and one inexperienced annotator. URLs of videos link to YouTube, where videos with audio paths are available. Annotation was prepared according to the proposed taxonomy assuming the hierarchical structure of events [Biermann et al., 2021]. Unlike the other datasets, EIGD-S contains annotations of high-level events, such as passes along with low-level events, including goals or cards. Footnote 14: [https://github.com/FootballAnalysis/footballanalysis](https://github.com/FootballAnalysis/footballanalysis) VisAndSoccerGao et al. [2020] proposed a new dataset (here denoted as **VisAndSoccer**), which contained data from \(460\) soccer game broadcasts, including \(300\) videos downloaded from SoccerNet [Giancola et al., 2018], lasting about \(700\) hours in total in video format. Audio data is available for \(160\) games with commentator voices categorized as "excited" and "not-excited". Events are divided into four classes: celebrate (\(1320\) events), goal/shoot (\(1885\) events), card (\(2355\) events), and pass (\(2036\) events). The dataset is not publicly available. Figure 7: Difference between event and story from Comprehensive Soccer dataset. Source: [Yu et al., 2018]. SoccerSummarizationGautam et al. (2022)16 extended SoccerNet-v2 (Delilege et al., 2021) with news, commentaries and lineups from BBC. Footnote 16: [https://github.com/simula/soccer-summarization](https://github.com/simula/soccer-summarization) ### Multi-Sports Datasets MultiSportsMultiSports17[Li et al., 2021] released spatio-temporal multi-person action detection dataset for basketball, volleyball, soccer, and aerobic gymnastics. After consulting with athletes, they proposed \(66\) action labels, e.g. soccer pass, trap, defence, tackle, and long ball. Additionally, a handbook was created to define actions and their temporal boundaries. Videos were downloaded from YouTube and then trimmed into shorter clips. In the end, \(800\) clips are available for each sport, which amounts to around \(5\) hours of recordings for soccer. The number of relevant action categories for soccer equals \(15\), and \(12,254\) instances were annotated. The authors emphasize that their dataset differs from other datasets due to its complexity, high quality, and diversity of videos. 
In the case of soccer, it is the first publically available dataset that contains spatio-temporal action annotations. Examples of MultiSports annotations can be found in Figure 8. Footnote 17: [https://deeperaction.github.io/datasets/multiSports.html](https://deeperaction.github.io/datasets/multiSports.html) Footnote 18: [https://www.crcv.ucf.edu/data/UCF_Sports_Action.php](https://www.crcv.ucf.edu/data/UCF_Sports_Action.php) ### Other Datasets UCF SportsUCF Sports18[Rodriguez et al., 2008, Soomro and Zamir, 2014] is a dataset for spatio-temporal action recognition with \(10\) possible classes: diving, golf swing, kicking, lifting, riding a horse, running, skateboarding, swinging-bench, swinging-side, and walking. \(150\) videos lasting about \(6.39\) seconds were gathered from broadcast television networks like ESPN and BBC. UCF Sports is known as one of the first datasets that published not only action class annotations but also bounding boxes of areas associated with the action. It was broadly used to conduct action classification experiments (Lui and Beveridge, 2011, Bregonzio et al., 2010, Wu et al., 2011, O'Hara and Draper, 2012) and spatio-temporal action recognition. It differs a lot from the task described in MultiSports dataset (Li et al., 2021) where the video is not temporally trimmed, multiple players and actions can be detected, and single action occurs only in a small subset of time. However, UTF-Sports initiated the advancement of this domain and inspired authors to develop interesting solutions. Due to the fact that this dataset contains only a few events related to soccer (kicking and running), the results and methods have not been widely described in this article. Footnote 18: [https://github.com/Neerajj9/Computer-Vision-based-Offside-Detection-in-Soccer](https://github.com/Neerajj9/Computer-Vision-based-Offside-Detection-in-Soccer) Ucf-101The UCF-101 (Soomro et al., 2012) dataset was introduced as an action recognition dataset of realistic action videos from YouTube. \(2\) out of \(101\) action categories are related to soccer: soccer juggling and soccer penalty. Other action categories include diving, typing, bowling, applying lipstick, knitting etc. GoalTsagkatakis et al. (2017) proposed a dataset to classify a single _goal_ class in soccer. The dataset consists of videos from YouTube: \(200\) 2-3 second-long videos for _goal_ class and \(200\) videos for _no-goal_ class. Offside DatasetA dataset19[Panse and Mahabalesharkar, 2020] that can be used to assess the effectiveness of a methodology for offside detection. It consists of about 500 frames that are publicly available. The authors highlight that this dataset has been carefully curated to include a diverse range of soccer match scenes demonstrating the different challenges such a system may encounter. Figure 8: Examples of annotations from MultiSports (Li et al., 2021) dataset. Players participating in a given action are annotated with bounding boxes. **SocceER** Soccer Event Recognition 20[Morra et al., 2020] is a synthetic dataset consisting of \(500\) minutes of game recordings gathered from the open source Gameplay Football engine that can be an approximation of real game. \(1.6\) million atomic events and \(9,000\) complex events are annotated. Atomic events (kicking the ball, ball possession, tackle, ball deflection, ball out, goal, foil, penalty) are spatio-temporally annotated. 
Complex events occur over a wide area, involve multiple participants, or can be composed of multiple other events (pass, pass then goal, filtering pass, filter pass then goal, cross, cross then goal, tackle, shot, shot then goal, saved shot). Examples can be found in Figure 9. Footnote 20: [https://gitlab.com/grains2/slicing-and-dicing-soccer](https://gitlab.com/grains2/slicing-and-dicing-soccer) GoalGrOunded footbAll commentaries (**GOAL**), dataset [Suglia et al., 2022] contains \(1107\) game recordings transcribed to text. Although this dataset is dedicated to tasks such as commentary retrieval or commentary generation, its contents can also be valuable as an additional modality in action recognition. ## 6 Methods ### Action Recognition and Spotting Action analysis in soccer has been an important task and has attracted many researchers. The first articles extracted video features and, based on that, classified clips into predefined categories using rule-based algorithms or classical machine learning models. \begin{table} \begin{tabular}{l l l r r r} \hline \hline **Article** & **Dataset** & **Method** & **Features** & **mAP** & **Top-1 Acc** \\ \hline [Giancola et al., 2018] & SoccerNet & AvgPool & & & 40.7 & - \\ [Giancola et al., 2018] & SoccerNet & MaxPool & & & 52.4 & - \\ [Giancola et al., 2018] & SoccerNet & NetVLAD & & & 67.8 & - \\ [Giancola et al., 2018] & SoccerNet & NetRVLAD & & & 67.4 & - \\ [Giancola et al., 2018] & SoccerNet & NetFV & & & 64.4 & - \\ [Giancola et al., 2018] & SoccerNet & SoftBOW & & & 62.0 & - \\ [Vanderplatese and Dupont, 2020] & SoccerNet & AudioVid & & & 73.7 & - \\ [Gan et al., 2022] & SoccerNet-v2 & PM & & & - & 62.4 \\ \hline [Gao et al., 2020] & VisAudSoccer & I3D [Carreira and Zisserman, 2017] & & & 95.2 & 90.1 \\ [Gao et al., 2020] & VisAudSoccer & I3D-NL [Wang et al., 2018a] & & & 96.9 & 92.5 \\ [Gao et al., 2020] & VisAudSoccer & ECO [Zolfaghari et al., 2018] & & & 96.3 & 92.2 \\ [Gao et al., 2020] & VisAudSoccer & SlowFast [Feichtenhofer et al., 2018] & & & 95.1 & 88.1 \\ \hline \hline \end{tabular} \end{table} Table 5: Methods used for action recognition in analysed articles. Features are represented as - image, - audio. Figure 9: Examples from artificially generated SocceER dataset [Morra et al., 2020]. 
\begin{table} \begin{tabular}{l l l c c c c} \hline \hline **Article** & **Dataset** & **Method** & **Features** & **Avg mAP** & **Tight Avg** & **Acc** \\ & & & & & **mAP** & \\ \hline [3] & SoccerNet & NetVLAD & \(\medmedu\) & 49.7 & - & - \\ [2] & SoccerNet & AudioVid & \(\medu\) & 56.0 & - & - \\ [12] & SoccerNet & CALF & \(\medu\) & 62.5 & - & - \\ [13] & SoccerNet & MTTCNN & \(\medu\) & 60.1 & - & - \\ [13] & SoccerNet & 3dCNN & \(\medu\) & 32.0 & - & - \\ [14] & SoccerNet & NetVLAD + self- & \(\medu\) & - & - & 74.3 \\ [15] & SoccerNet & RMS-Net & \(\medu\) & 65.5 & - & - \\ [16] & SoccerNet & 2D-CNN AudVid & \(\medu\) & - & - & 90.85 \\ [17] & SoccerNet & CNN + Dilated & \(\medu\) & 63.3 & - & - \\ [18] & SoccerNet & Multiple & Scene & \(\medu\) & 66.8 & - & - \\ [19] & SoccerNet & CNN-GRU metric learning & \(\medu\) & 64.9 & - & - \\ \hline [1] & SoccerNet-v2 & MaxPool & \(\medu\) & 18.6 & - & - \\ [1] & SoccerNet-v2 & NetVLAD & \(\medu\) & 31.4 & - & - \\ [1] & SoccerNet-v2 & AudioVid & \(\medu\) & 40.7 & - & - \\ [1] & SoccerNet-v2 & CALF & \(\medu\) & 41.6 & - & - \\ [1] & SoccerNet-v2 & Vidpress Sports & \(\medu\) & 74.1 & - & - \\ [1] & SoccerNet-v2 & NetVLAD++ & \(\medu\) & 53.4 & - & - \\ [16] & SoccerNet-v2 & transformer & \(\medu\) & 52.04* & - & - \\ [12] & SoccerNet-v2 & CC+RN+FCL & \(\medu\) & 46.8 & - & - \\ [1] & SoccerNet-v2 & RGB+Audio+ & \(\medu\) & 57.8 & - & - \\ [1] & & Graph & & & \\ [1] & SoccerNet-v2 & STE & \(\medu\) & 74.1 & 58.5 & - \\ [1] & SoccerNet-v2 & Multiple Scene & \(\medu\) & 75.3 & - & - \\ [1] & Encoder & & & & \\ [1] & SoccerNet-v2 & SpotFormer & \(\medu\) & 76.1 & 60.9 & - \\ [1] & SoccerNet-v2 & E2E-Spot 800MF & \(\medu\) & 74.1 & 61.8 & - \\ [1] & SoccerNet-v2 & Faster-TAD & \(\medu\) & - & 54.1 & - \\ [1] & SoccerNet-v2 & DU+SAM+mixup & \(\medu\) & 77.3 & 60.7 & - \\ [1] & SoccerNet-v2 & DU+SAM+mixup & \(\medu\) & 78.5 & 65.1 & - \\ [1] & & +Soft-NMS & & & & \\ \hline \hline \end{tabular} \end{table} Table 6: Methods used for action spotting in analysed articles. Features are represented as \(\medu\) - image, \(\medu\) - audio, \(\medu\) - graph. * denotes that model was evaluated on the challenge dataset. Khan et al. (2018) experimented with a short 5-minute long video, where events (ball possession and kicking) were classified with a rule-based system. The event detector took as an input bounding boxes of ball with associated confidence scores. Similarly, a rule-based system consulted with soccer experts was proposed in (Khaustov and Mozgovoy, 2020) to classify events such as ball possession, successful and unsuccessful passes, and shots on goal. It was evaluated on two datasets from Data Stadium and Stats Perform. Initially, models relied mainly on feature engineering extracting semantic concepts in clips (Ye et al., 2005; Hosseini and Eftekhari-Moghadam, 2013; Kolekar and Sengupta, 2015; Raventos et al., 2015; Tavassolipour et al., 2014; Xie and Tong, 2011). Colour, texture and motion are represented. Also, representation is enriched with mid-level features, including camera view labels, camera motion, shot boundary descriptions, object detections, counting players, grass ratio, play-break segmentation, dominant colour, or penalty area. Audio descriptors such as whistle information or MPEG audio features are also used (Hosseini and Eftekhari-Moghadam, 2013; Kolekar and Sengupta, 2015; Raventos et al., 2015; Kapela et al., 2015; Li et al., 2003; Xiong et al., 2003). 
Particularly, audio keywords such as long-whistling, double-whistling (indicating fool), multi-whistling, excited commentator speech, and excited audience sounds can assist in detecting events such as a free kick, penalty kick, foil, and goal in soccer (Xu et al., 2003). These features are fed to classifiers, such as SVM (Ye et al., 2005; Zhao et al., 2015; Sadlier and O'Connor, 2005), Hidden Markov models (HMM) (Qian et al., 2011; Itoh et al., 2013; Wang et al., 2004; Xiong, 2005; Pixi et al., 2010; Leonardi et al., 2004; Qian et al., 2010), bayesian networks (Tavassolipour et al., 2014; Huang et al., 2006), hierarchical Conditional Random Field (Nisha et al., 2009), or fuzzy logic (Song and Hagras, 2017). Along with the development of science and access to better computing machines, video representation improved (VGG-16 backbone (Yu et al., 2019)), and classifiers became more complex, e.g. Long-short Term Memory (LSTM) (Fakhar et al., 2019; Tsunoda et al., 2017; Jiang et al., 2016; Yu et al., 2019), CNN (Hong et al., 2018; Khan et al., 2018; Jiang et al., 2016), or GRU (Jiang et al., 2016). A very interesting approach was investigated in (Xu et al., 2008; Lanagan and Smeaton, 2011), where data published on the Internet, including Twitter posts, were used to identify events in various games (like soccer and rugby). In (Tang et al., 2018), the authors use the live text of soccer matches as additional input to the model. Text model composed of TextCNN (Kim, 2014), LSTM with attention (Yang et al., 2016) and VDCNN (Conneau et al., 2017) detect events in time and classify them. Then, Optical Character Recognition (OCR) links video time to associated texts. If necessary, a video-model is employed to detect events. Another noteworthy method was proposed by Vidal-Codina et al. (2022), who utilised tracking data and a tree-based algorithm to detect events. A similar solution was suggested in (Richly et al., 2016) where positional data was employed to feed event classifiers such as SVM, K-Nearest Neighbors and Random Forest. In (Giancola et al., 2018), authors introduced SoccerNet dataset together with benchmarks for action classification and spotting tasks. They achieved an average-mAP of 49.7% for a threshold ranging from 5 to 60 seconds in the spotting task. They compared different pooling layers (Average Pool, Max Pool, SoftDBOW (Philipin et al., 2008), NetFV (Lev et al., 2015; Perronnin and Larlus, 2015; Sydorov et al., 2014), NetVLAD (Arandjelovic et al., 2015) and NetRVLAD (Miech et al., 2017)) and video representation (I3D(Carreira and Zisserman, 2017), C3D (Tran et al., 2015), and ResNet (He et al., 2015) features) with a sliding window approach at 0.5s stride (see Table 5). The same pooling methods were investigated in (Vanderplaetse and Dupont, 2020), where input was enriched with audio variables. The video was represented as ResNet features, and audio stream feature extractions were done with VGGish architecture (VGG (Simonyan and Zisserman, 2014) pretrained on AudioSet (Gemmeke et al., 2017)). It is worth mentioning that using modality fusion improved mAP of \(7.43\%\) for the action classification task and \(4.19\%\) for the action spotting task on the SoccerNet dataset. Experiments showed that mid-fusion was the most effective method (\(73.7\%\)), while early fusion achieved the worst performance (\(64\%\)). The result for late fusion is \(68.4\%\). 
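To make the pooling step concrete, the following is a minimal sketch of a NetVLAD-style pooling layer of the kind benchmarked above: it aggregates the frame features inside one temporal window into a single fixed-size descriptor that a classifier can consume. The cluster count, feature dimensionality and the toy input below are illustrative placeholders, not the configurations used in the cited papers.

```python
# Minimal sketch of NetVLAD-style pooling over a window of per-frame features.
# Cluster count K and feature size D are placeholder values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NetVLAD(nn.Module):
    def __init__(self, num_clusters: int = 64, dim: int = 512):
        super().__init__()
        self.assignment = nn.Linear(dim, num_clusters)        # soft-assignment logits
        self.centers = nn.Parameter(torch.randn(num_clusters, dim) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, T, D) frame features inside one temporal window
        a = F.softmax(self.assignment(x), dim=-1)             # (B, T, K) soft assignments
        residuals = x.unsqueeze(2) - self.centers             # (B, T, K, D)
        vlad = (a.unsqueeze(-1) * residuals).sum(dim=1)       # (B, K, D) aggregated residuals
        vlad = F.normalize(vlad, dim=-1)                      # intra-normalization per cluster
        return F.normalize(vlad.flatten(1), dim=-1)           # (B, K*D) window descriptor

# Usage: pool 120 frames (e.g. a 60 s window at 2 fps) of 512-d features.
features = torch.randn(4, 120, 512)
descriptor = NetVLAD()(features)    # shape (4, 64 * 512), fed to a classifier head
```

In the fusion experiments discussed above, descriptors of this kind, computed separately for the video and the audio streams, are what is combined at the early, mid or late stage.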
Similarly, the authors of (Rongved et al., 2021) conducted experiments with multimodal models combining video and audio features in various settings. These experiments prove that combing modalities can lead to an improvement in model performance. They also acknowledged that the highest gain was observed in the classification of goal class, which can be associated with the audio reaction of supporters. According to the authors of (Mahaseni et al., 2021), enhancing event spotting may be achieved significantly by including short-range to long-range frame dependencies within an architecture. They have introduced a novel approach based on a two-stream convolutional neural network and Dilated Recurrent Neural Network (DilatedRNN) with Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997). The Two-Stream CNN captures local spatiotemporal features required for precise details, while the DilatedRNN allows the classifier and spotting algorithms to access information from distant frames. Ma et al. (2020) used the self-attention mechanism to extract key frames and the NetVLAD network to obtain the temporal window-level (60s) features. The results of the classifier trained on the SoccerNet (Giancola et al., 2018) have improved from \(67.2\%\) to \(74.3\%\) accuracy by adding an attention mechanism. ResNet3d pretrained on Kinetics-400 (Norgard Rongved et al., 2020) were found to be inferior to state-of-the-art models. However, the authors stated that this architecture is competitive in real-time settings or when the precise temporal localization of events is crucial. A novel loss function CALF (Context-Aware Loss Function) (Cioppa et al., 2020) for the segmentation model was introduced to improve action spotting model training. Instead of focusing on a single timestamp, CALF analyses the temporal context around the action. Frames are grouped into categories: far before, just before, just after, far after an action, and transition zones with associated parameters. Outputs of segmentation block feed spotting layer with YOLO-like loss. This architecture has significantly outperformed the baseline model (\(+12.8\%\) of avg-mAP). RMS Net (Tomei et al., 2021) is a solution inspired by regression methods in object detection, combining classification and regression loss during the training. The model produces outputs that comprise both the probability of an action class and its corresponding temporal boundaries. The authors of this solution have also proposed masking procedures and implemented strategies to address the issue of data imbalance, which has led to an improvement in the mAP metric. They noticed that certain indicators of events tend to appear shortly after the event itself. By analyzing reactions, it is possible to infer whether an action has occurred or not. The masking procedure was constructed to focus only on frames occurring after an event which allows for a more targeted analysis of relevant video segments. Karimi et al. (2022) implemented a Siamese neural network to conduct experiments of metric learning to detect soccer events. The most promising combination was Siamese Network with contrastive loss (Koch et al., 2015), Efficient-NetB0 (Tan and Le, 2021), and gated recurrent units (GRU) (Chung et al., 2014). In (Vats et al., 2020), they introduced a multi-tower temporal convolutional neural network (MTTCNN) which considers that particular events occur with different frequencies in sports datasets. Gao et al. 
(2020) proposed new action classification dataset including SoccerNet (Giancola et al., 2018) videos. This article presents benchmarks for the classification of four actions (goal/shoot, yellow/red card, celebrate and pass) with I3D (Carreira and Zisserman, 2017), I3D-NL (Wang et al., 2018), ECO (Zolfaghari et al., 2018) and SlowFast (Feichtenhofer et al., 2018). After the release of SoccerNet-v2 (Deliegge et al., 2021), the action spotting task gained even more scientific interest (Caratas et al., 2022; Zhou et al., 2021; Darwish and El-Shabrway, 2022; Shi et al., 2022; Cao et al., 2022; Gan et al., 2022; Giancola and Ghanen, 2021; Hong et al., 2022; Soares et al., 2022). Published benchmarks reached the average mAP for tolerances ranging from 5s to 60s metric of \(41.6\), and two years later, the result increased to \(78.5\)(Soares and Shah, 2022). Table 6 presents results reported in analysed articles for action spotting task. It is worth noting that together with the increase in the number of classes (from \(3\) in SoccerNet to \(17\) in SoccerNet-v2), the task has been made more difficult. For instance, the performance of CALF decreased from \(62.5\) on SoccerNet to \(41.6\) on SoccerNet-v2. Giancola and Ghanem (2021) proposes a novel architecture known as NetVLAD++, which is based on NetVLAD pooling method. Similarly to (Cioppa et al., 2020), they take into consideration both frames prior to the action (past) and frames subsequent to the action (future). The authors noted that certain actions share the same characteristics prior to the event but differ in what happens after the action. They provided the goal and shot as an example, highlighting that both actions have the same pre-event characteristics, but can be differentiated based on the events that follow the action. Future and past clips are processed independently as two levels of temporal context utilizing NetVLAD pooling layers. This approach surpasses the previous state of the art, notably outperforming traditional NetVLAD (Deliegge et al., 2021) by \(22\) percentage points. Darwish and El-Shabrway (2022) 21 presented two versions of Spatio-Temporal Encoder (STE) build by convolution layers, max pooling and fully connected layers. The architecture is distinguished by the speed of model training and low complexity while maintaining good performance. The first version of the solution achieves \(74.1\%\) avg-mAP on the test SoccerNet-v2 dataset and \(40.1\%\) of tight avg-mAP. The modified model increased tight avg-mAP to 58.5%. Input frames are represented with Baidu features (Zhou et al., 2021). Model's layers are divided into spatial and temporal representations. For the temporal encoder, they proposed three different time scales to capture events: [T, T/2, 1]. STE-v2 differs from STE-v1 in the last layer. The first version output predicted action class while STE-v2 returns a prediction of frame index in addition to the label. Footnote 21: [https://github.com/amdarwish/SoccerEventSpotting/tree/main/STE-v2](https://github.com/amdarwish/SoccerEventSpotting/tree/main/STE-v2) After the development of transformer (Vaswani et al., 2017) neural networks in computer vision (Neimark et al., 2021; Dosovitskiy et al., 2020), a large and growing body of literature has investigated these architectures in action spotting task (Zhou et al., 2021; Soares et al., 2022; Chen et al., 2022; Gan et al., 2022; Cao et al., 2022; Shi et al., 2022; Zhu et al., 2022). 
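Before turning to those transformer-based systems, it helps to make the evaluation protocol concrete. The snippet below is a simplified sketch of the tolerance-based average-mAP metric quoted above and in Table 6, for a single class, assuming predictions and ground truth are given as timestamps in seconds; the official SoccerNet evaluation handles additional bookkeeping (multiple classes, per-half videos) that is omitted here.

```python
# Simplified sketch of tolerance-based spotting evaluation: a prediction is a
# true positive if it lies within +/- delta seconds of a still-unmatched
# ground-truth spot of the same class; average-mAP averages AP over tolerances.
import numpy as np

def spotting_ap(pred_times, pred_scores, gt_times, delta):
    pred_times = np.asarray(pred_times, dtype=float)
    gt_times = np.asarray(gt_times, dtype=float)
    order = np.argsort(-np.asarray(pred_scores, dtype=float))   # descending confidence
    matched = np.zeros(len(gt_times), dtype=bool)
    ap, tp, fp, prev_recall = 0.0, 0, 0, 0.0
    for i in order:
        hit = False
        if len(gt_times):
            dist = np.abs(gt_times - pred_times[i])
            dist[matched] = np.inf
            j = int(np.argmin(dist))
            hit = dist[j] <= delta
            if hit:
                matched[j] = True
        tp, fp = tp + int(hit), fp + int(not hit)
        recall = tp / max(len(gt_times), 1)
        precision = tp / (tp + fp)
        ap += precision * max(recall - prev_recall, 0.0)        # step-wise P-R integration
        prev_recall = recall
    return ap

def average_map(pred_times, pred_scores, gt_times, tolerances=range(5, 65, 5)):
    """'Loose' setting averages over 5..60 s; a 'tight' variant would use 1..5 s."""
    return float(np.mean([spotting_ap(pred_times, pred_scores, gt_times, d)
                          for d in tolerances]))
```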
Article (Zhou et al., 2021) suggested that video representation is crucial in automatic match analysis. Contrary to other solutions using ResNet and ImageNet features, they fine-tuned action recognition models such as TPN (Yang et al., 2020), GTA (He et al., 2020), VTN (Neimark et al., 2021), irCSN (Tran et al., 2019), and I3D-Slow (Feichtenhofer et al., 2018) on SoccerNet-v2 22 in semantic feature extraction block (this set of features will be referred as Baidu soccer features). Then, features from all extractors are concatenated into one vector. NetVLAD++ and transformer-based models are used to detect actions. Experiments showed that clip representation using all five feature types is much better than single-backbone. Transformer architecture with all features achieves \(73.7\%\) of avg-mAP, while NetVLAD++ trained on the same representation achieves \(74.1\%\) on test SoccerNet-v2. Furthermore, they proved that the proposed video representation is better than previously used ResNet features. Inspired by work of [22], the study of [14] also represented video features as a fusion of multiple backbones. In addition to Baidu features, they chose action recognition models such as VideoMAE [13] and Video Swin Transformer [10] assuming that these models can extract task-specific features. On the top of each feature extractor is a multilayer perceptron that reduces dimensionality and retains main semantic information. Then, features are concatenated. The results of experiments show that applying dimensionality reduction improved the accuracy of the action spotter. The model consists of multiple stacked transformer encoder blocks that process each frame individually, followed by a feed-forward network to perform temporal segmentation. Finally, Soft-NMS [1] is used to filter predictions. The network of [22] has been improved with the following adjustments: an additional input feature normalization, a learnable position embedding is used instead of sin-cos position encoding, changes of some hyperparameters values, and focal loss is applied to address data imbalance. Summing up the results, using only Baidu features achieved \(56.9\%\) of tight avg-mAP, while enriching representation with VideoMAE and Video Swin Transformer increases the performance to \(58.3\%\) of tight avg-mAP. The final model has an average-mAP equal to \(76.1\%\) and a tight average-mAP equal to \(60.9\%\) on the test set of SoccerNet-v2. Zhu et al. [2022] used transformer [12] to extract features from soccer videos. The architecture generated action proposals with a sliding window strategy. Then, the proposed clips are fed to a transformer-based action recognition module. Transformer representation is then processed by NetVLAD++[15]. Multiple Scene Encoders architecture [23] uses a representation of multiple frames to spot action because some actions consist of subactions (e.g. goal can be represented as running, shooting and cheering). The paper reports \(55.2\%\) Average-mAP using ResNet features, and \(75.3\%\) with Baidu embedding features [22], once again showing a significant advantage thanks to the appropriate video representation. Multimodal transformer-based model using both visual and audio information through a late fusion was introduced in [12] for action recognition. The transformer model is designed to capture the action's spatial information at a given moment and the temporal context between actions in the video. 
The input to the model consists of the raw video frames and audio spectrogram from the soccer videos. Video streams are processed with ViViT transformer [12] and audio with a model based on Audio Spectrogram Transformer [15]. Then, modalities representations are connected with the late fusion method as a weighted average of encoder results. Unlike the other articles modelling action spotting on the SoccerNet dataset, the authors of this article report results with the Top-1 Accuracy metric, so it is difficult to compare their results with other papers. However, the article has a broad analysis with reference to different architectures. The best analysed model trained exclusively on visual input achieved \(60.4\%\), and the multimodal transformer proposed by [12] achieved Top-1 Accuracy equal to \(62.4\%\). Action-spotting models commonly rely on using pretrained features as input due to the computation difficulties of end-to-end solutions. Hong et al. [2022] offers a more efficient, end-to-end architecture called E2E-Spot for accurate action spotting. Each video frame is represented by RegNet-Y [20] with Gate Shift Modules (GSM) [21]. Then, similarly to [23], a recurrent network is used - the resulting data sequence is modelled through a GRU network [13], which creates a temporal context and generates frame-level class predictions. As the development of action spotting solutions has progressed, the importance of accurate models capable of precisely localizing actions within untrimmed videos has become increasingly acknowledged [14, 15, 16]. The performance of these models is commonly evaluated using the tight-avg-mAP metric, which measures their effectiveness within a specific small tolerance range (1,..., 5 seconds). Inspired by Faster-RCNN [17], authors of [14] built architecture called Faster-TAD. Features are extracted using SwinTransformer [10]. Then, similarly to the Faster-RCNN approach, 120 best proposals of action boundaries are generated. The boundary-based model was implemented to take into account the variance in action duration, as some actions may last only a few seconds while others may extend for several minutes. Then, the modules for correcting the proposals of action location and their classification work in parallel. The authors proposed an advanced context module consisting of three blocks (a Proximity-Category Proposal Block, a Self-Attention Block, and a Cross-Attention Block) to get semantic information for classification purposes. Proximity-Category Proposal Block gathers contextual information, a Self-Attention Block establishes relationships among proposals, and finally, a Cross-Attention Block gathers relevant context from raw videos concerning proposals. Architecture is enriched with Fake-Proposal Block for action boundary refinement and atomic features for better clip representation. They report \(54.1\%\) of tight-avg-mAP, which is a \(7.04\%\) gain compared to [22]. Soares et al. (Soares et al., 2022) also propose a solution that tackles the problem of imprecise temporal localization. The model returns detection confidence and temporal displacement for each anchor. The architecture consists of a feature extractor (ResNet-152 with PCA and Baidu soccer embeddings fine-tuned on SoccerNet-v2 (Zhou et al., 2021)) followed by MLP. Then, features are processed by u-net (Ronneberger et al., 2015) and transformer encoder. Additionally, they experimented with Sharpness-Aware Minimization (SAM) (Foret et al., 2021) and mixup data augmentation (Zhang et al., 2017). 
Their solution significantly boosted performance with tight avg-mAP: from \(54.1\) achieved by (Chen et al., 2022) to \(60.7\). Then, they introduced improvements to this solution (Soares and Shah, 2022) and won SoccerNet Challenge 2022. First, they modified preprocessing by resampling the Baidu embeddings (Zhou et al., 2021) to get greater frame frequency (2 FPS) and applying late fusion to combine them with the ResNet features. Also, soft non-maximum suppression (Soft-NMS) (Bodla et al., 2017) was applied in the postprocessing step. These modifications resulted in a \(4.4\) percentage point improvement over (Soares et al., 2022) measured by tight avg-mAP. Cioppa et al. (Cioppa et al., 2021) conducted experiments exploring the utilization of camera calibration data in action spotting. The first phase involved the implementation of an algorithm based on Camera Calibration for Broadcast Videos (CCBV) of (Sha et al., 2020). Results of Mask R-CNN (He et al., 2017) model for object detection combined with camera calibration module allow preparing diverse feature sets, including top view representations with a 3D convolutional network, feature vectors representations (ResNet-34 (He et al., 2015) and EfficientNet-B4 (Tan and Le, 2019)), and a player graph representation with graph convolutional network DeeperGCN (Li et al., 2020). In the graph, players are represented as nodes with edges connecting two players if the distance between them is less than \(25\) meters. SoccerNet-v2 labels were divided into _patterned_ and _fuzzy_ groups based on prior knowledge of the predictive potential of player localization data for classifying these labels. Player localization plays a crucial role in the classification of _patterned_ classes (e.g. penalty, throw-in, cards) but is not relevant for _fuzzy_ labels (substitution, ball out of play, foul). Two separate CALF (Cioppa et al., 2020) networks were trained for each class group: one using calibration data to improve the classification of _patterned_ classes, and the other using only ResNet features for _fuzzy_ labels. They reported an avg-mAP of \(46.8\%\), outperforming the unimodal CALF by \(6.1\) percentage points. Although the use of graphs had already found its application in sports analysis (Qi et al., 2020; Passos et al., 2011; Stockl et al., 2021; Buldu et al., 2018), Cioppa et al. (Cioppa et al., 2021) were the first to use graph-based architecture to spot actions in untrimmed soccer videos. Then, Cartas et al. (Cartas et al., 2022) developed another graph based-architecture that resulted in a substantial improvement over its predecessor, achieving an average mAP of \(57.8\%\). Similarly to the previous solution, players are represented as nodes with attributes such as location, motion vector and label (player team 1/2, goalkeeper 1/2, referee), and are connected to another player by edge based on proximity (less than 5 meters). Players and referees are detected by the segmentation module PointRend (Kirillov et al., 2019), and their position is projected onto a 2D pitch template through a homography matrix of camera calibration (Cioppa et al., 2021). After data cleaning, referees and players are classified using a convolutional network supported by a rule-based algorithm and clustering. Player information is enriched with motion vector represented by preprocessed optical flow extracted with FlowNet 2.0 CSS (Ilg et al., 2016; Reda et al., 2017). 
The model architecture consists of four dynamic edge graph CNN (Wang et al., 2018) blocks followed by NetVLAD (Arandjelovic et al., 2015) pooling layer. The authors experimented with multiple modalities and found that a graph-only model achieved \(43.3\%\) of Average-mAP while adding video features increased the metric to \(51.5\%\). Furthermore, incorporating audio features from VGGish network (Simonyan and Zisserman, 2014) with both video and graph streams resulted in an average mAP of \(57.8\%\), surpassing both unimodal and bimodal methods. ### Spatio-Temporal Action Localization The release of the MultiSports dataset (Li et al., 2021) for spatio-temporal localization of multiple sportsmen may contribute towards a better understanding of actions performed by individual players. Approaches to addressing this challenge can be categorized into frame-level and clip-level models (Li et al., 2021). The frame-level models predict the bounding box and action type for each frame and then integrate these predictions. Conversely, clip-level methods, also called action tubelet detectors, endeavour to model both temporal context and action localization. Authors of the MultiSports dataset published benchmarks for the proposed task training frame-level models (ROAD (Singh et al., 2016), YOWO (Kopuklu et al., 2019)) and clip-level models (MOC (Li et al., 2020), SlowOnly (Feichtenhofer et al., 2018) and SlowFast (Feichtenhofer et al., 2018)). Results of experiments are summarized in Table 7. ROAD (Singh et al., 2016) is an algorithm for real-time action localization and classification which uses the Single Shot Multibox Detector (SSD) (Liu et al., 2015) method to independently detect and classify action boxes in each frame, without taking into account temporal information. Afterwards, the predictions from each frame are combined into action tubes through a novel algorithm. Similarly, You Only Watch Once (YOWO) (Kopuklu et al., 2019) method for identifying actions in real-time video streams links results from individual frames into action tubes through a dynamic programming algorithm. It uses two concurrent networks: a 2D-CNN to extract spatial features from key frames and a 3D-CNN to extract spatio-temporal features from key frames and preceding frames. Then, the features from these two networks are combined through a channel fusion and attention mechanism and fed into a convolution layer to predict bounding boxes and action probabilities directly from video clips. Another approach was proposed in article [Li et al., 2020b] introducing Moving Center Detector (MOC detector). It models an action instance as a series of moving points and leverages the movement information to simplify and enhance detection. The framework consists of three branches: (1) Center Branch for detecting the center of the action instance and action classification, (2) Movement Branch for estimating the movement between adjacent frames to form a trajectory of moving points, and (3) Box Branch for predicting the size of the bounding box at each estimated center. They return tubelets, which are then linked into video-level tubes through a matching process. SlowFast [Feichtenhofer et al., 2018] comprises of two parallel branches. The slow branch identifies spatial semantics that exhibits minimal fluctuations, thus allowing for a low frame rate. Conversely, the fast branch is responsible for detecting rapid changes in motion, requiring a high frame rate to operate effectively. 
During training, data from the fast branch is fed to a slow neural network and at the end, the results of the two networks are concatenated into one vector. Faster R-CNN [Ren et al., 2015] with a ResNeXt-101-FPN [Lin et al., 2016, Xie et al., 2016] backbone was used to detect people. As the name suggests, the SlowOnly model uses only the slow path of SlowFast. Table 7 summarizes the results and indicates that the SlowFast detector achieved the best performance within benchmark models. Metrics are computed for all sports, not only for soccer. The results obtained by Gueter Josmy Faure1 et al. in [Faure et al., 2022]23 suggest that including pose information can be valuable to predict actions. Authors motivate their architecture with the fact, that actions can be defined as interactions between people and objects. Their multimodal Holistic Interaction Transformer Network (HIT) fusing a video stream and a pose stream surpasses other models on the MultiSports dataset. Each stream composes of person interaction, object interaction and hand interaction to extract action patterns. For each modality, Intra-Modality Aggregator (IMA) facilitates learning valuable action representations. Then, an Attentive Fusion Mechanism (AFM) is utilized to merge the various modalities, retaining the most significant features from each modality. 3D CNN backbone [Feichtenhofer et al., 2018] processes video frames, Faster RCNN [Ren et al., 2015] with ResNet-50-FPN [Xie et al., 2016, Lin et al., 2016] backbone predict bounding boxes and spatio transformer [Zheng et al., 2021] is pose encoder. This method outperformed others in terms of [email protected] and [email protected]. Footnote 23: [https://github.com/joslefaure/HIT](https://github.com/joslefaure/HIT) In existing solutions to spatio-temporal action recognition, tube detection involves extending a bounding box proposal at a keyframe into a 3D temporal cuboid and pooling features from nearby frames. However, this approach is not effective when there is significant motion. The study [Singh et al., 2022] propose cuboid-aware feature aggregation to model spatio-temporal action recognition. Also, they improve actor feature representation through actor tracking data and temporal feature aggregation along the tracks. The experiments show that the proposed method called Track Aware Action Detector (TAAD) outperforms others, especially for large-motion actions. 
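Frame-level detectors such as ROAD and YOWO ultimately have to link per-frame boxes into action tubes. The fragment below is a deliberately simplified greedy linking step that illustrates the idea; the cited methods use dynamic-programming formulations rather than this greedy rule, and the detection format assumed here ([x1, y1, x2, y2, score] per frame) is only a convention for the example.

```python
# Simplified greedy tube linking: extend each live tube with the detection in
# the next frame that overlaps it most (IoU above a threshold), otherwise
# terminate the tube and let unmatched detections start new tubes.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])
    return inter / max(area(a) + area(b) - inter, 1e-9)

def link_tubes(frame_dets, iou_thr=0.3):
    """frame_dets: list over frames; each entry lists detections [x1, y1, x2, y2, score]."""
    if not frame_dets:
        return []
    tubes = [[(0, d)] for d in frame_dets[0]]              # one tube per first-frame box
    for t, dets in enumerate(frame_dets[1:], start=1):
        used = set()
        for tube in tubes:
            last_t, last_box = tube[-1]
            if last_t != t - 1:                            # tube already terminated
                continue
            best, best_iou = None, iou_thr
            for k, d in enumerate(dets):
                overlap = iou(last_box, d)
                if k not in used and overlap >= best_iou:
                    best, best_iou = k, overlap
            if best is not None:
                used.add(best)
                tube.append((t, dets[best]))
        tubes += [[(t, d)] for k, d in enumerate(dets) if k not in used]
    return tubes   # each tube is a list of (frame_index, detection) pairs
```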
\begin{table} \begin{tabular}{c c c c c c c} \hline \hline **Article** & **Dataset** & **Method** & **Features** & **[email protected]** & **[email protected]** & **[email protected]** \\ \hline [Li et al., 2021] & MultiSports & ROAD [Singh et al., 2016] & \(\blacklozenge\) & 3.90 & 0.00 & 0.00 \\ [Li et al., 2021] & MultiSports & YOWO [Körüuklu et al., 2019] & \(\blacklozenge\) & 9.28 & 10.78 & 0.87 \\ [Li et al., 2021] & MultiSports & MOC [Li et al., 2020b] & \(\blacklozenge\) & 22.51 & 12.13 & 0.77 \\ [Li et al., 2021] & MultiSports & MOC [K=1] & \(\blacklozenge\) & 25.22 & 12.88 & 0.62 \\ [Li et al., 2021] & MultiSports & SlowOnly Det., 4 \(\times\) 16 [Feichtenhofer et al., 2018] (K=11) & \(\blacklozenge\) & 16.70 & 15.71 & 5.50 \\ [Li et al., 2021] & MultiSports & SlowFast Det., 4 \(\times\) 16 [Feichtenhofer et al., 2018] (K=11) & \(\blacklozenge\) & 27.72 & 24.18 & 9.65 \\ [Singh et al., 2022] & MultiSports & TAAD + TCN & \(\blacklozenge\) & **55.3** & - & **37.0** \\ [Faure et al., 2022] & MultiSports & HIT & \(\blacklozenge\) & 33.3 & **27.8** & 8.8 \\ \hline \hline \end{tabular} \end{table} Table 7: Methods used for spatio-temporal action localization in analysed articles. \(\blacklozenge\) means image. ### Summarizing Multimodal Action Scene Understanding Multimodal machine learning is a powerful approach combining different pieces of information to understand complex phenomena comprehensively. Fusing information from multiple modalities leads to a deeper comprehension of the underlying processes, enabling superior predictive performance compared to unimodal models. In the realm of soccer, the adoption of multimodal approaches has successfully improved the accuracy of predictive models through the integration of diverse sources of data and the extraction of more meaningful insights (Vanderlaetse and Dupont, 2020; Rongved et al., 2021; Cartas et al., 2022; Cioppa et al., 2021; Zhou et al., 2021). Classical methods of action recognition relied mostly on data preparation and feature engineering. Thus, authors extracted different features from video clips, including logo frame detection, audio MPEG descriptors, camera motion, zoom indicator, colour layout, dominant colour, referee's whistle indicator, view category, and Histogram of Oriented Gradients (HoG) (Dalal and Triggs, 2005). It is worth noting that these methods widely used a combination of audio and visual features (Hosseini and Eftekhari-Moghadam, 2013; Kolekar and Sengupta, 2015; Raventos et al., 2015; Kapela et al., 2015; Li et al., 2003; Xiong et al., 2003; Xu et al., 2003). The more recent methods have primarily relied on visual embedding alone (Deliege et al., 2021; Zhou et al., 2021; Cioppa et al., 2020; Tomei et al., 2021). However, using a fusion of multiple representations of a single source (e.g. Baidu embeddings for video (Zhou et al., 2021)), that can also be considered as multimodality, has proven to be more effective than using a single model to represent video (e.g. ResNet features). The release of Baidu soccer embeddings has resulted in researchers favouring them over ResNet features originally presented by SoccerNet authors. Moreover, Baidu embeddings were further extended by two additional models (Cao et al., 2022). Experiments showed that incorporating audio streams (Vanderlaetse and Dupont, 2020; Rongved et al., 2021; Cartas et al., 2022), graph networks (Cartas et al., 2022; Cioppa et al., 2021) and optical flow (Cartas et al., 2022) can also provide significant value to the model. 
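Several of the systems above merge modalities by late fusion, i.e. each stream is scored separately and the class scores are combined, for instance as a weighted average of the encoder outputs. The sketch below illustrates that combination step; the modality names, the weights and the 17-class output size are placeholders to be tuned or learned, not values taken from the cited papers.

```python
# Minimal late-fusion sketch: each modality (video, audio, graph, ...) produces
# its own class probabilities for a clip; the fused prediction is a weighted
# average. The weights here are placeholders to be tuned on a validation set.
import numpy as np

def late_fusion(prob_per_modality: dict, weights: dict) -> np.ndarray:
    """prob_per_modality: modality name -> (num_classes,) probability vector."""
    total = sum(weights[m] for m in prob_per_modality)
    fused = sum(weights[m] * np.asarray(prob_per_modality[m], dtype=float)
                for m in prob_per_modality) / total
    return fused

# Example with three modalities and 17 SoccerNet-v2 classes (random placeholders).
rng = np.random.default_rng(0)
probs = {m: rng.dirichlet(np.ones(17)) for m in ("video", "audio", "graph")}
fused = late_fusion(probs, weights={"video": 0.6, "audio": 0.2, "graph": 0.2})
predicted_class = int(np.argmax(fused))
```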
In the case of spatio-temporal action localization models, the fusion of a video stream and a pose stream surpassed other solutions (Faure et al., 2022). ## 7 Discussion Automatic action recognition and localization is relevant from the perspectives of many in the soccer industry: coaches, scouts, fans and broadcasters. The Internet provides various sources of information on match results, highlights, and non-structured data. Also, many matches are recorded via different cameras, and TV and radio provide audio commentary to some matches. Therefore, it seems that soccer can be an excellent source of multimodal data. However, collecting high-quality and realistic soccer videos is complex and challenging for several reasons. DatasetsOne of the main challenges in gathering annotated soccer data is that it can be challenging to obtain the necessary data due to licences and limited public availability of broadcast content. An even bigger challenge is assessing data for smaller or less popular leagues. Preparing annotated data is a laborious, time-consuming and expensive task because it involves manual annotation of video footage. This process may require a team of trained analysts to watch and annotate every match. The quality of the annotations may depend on the level of expertise of the analysts, which can further affect the accuracy of the data. To avoid this, a match is sometimes annotated by several people, with the recorded data being cross-referenced. Nevertheless, the interpretation of any event is always subjective and open to numerous interpretations. For instance, two analysts may disagree on whether a particular incident should be classified as a foil or not. Also, obtaining accurate frame level annotations of events is difficult due to inaccuracies in defining the beginning and end of the action. This subjectivity can result in inconsistencies in the data and make it difficult to compare or analyze different datasets. To be useful and informative, soccer data must meet certain requirements that ensure its quality, reliability and usefulness to the industry. Annotations should be prepared in a standardized manner to ensure comparability and consistency. Firstly, current sport datasets provide access to trimmed short clips (Rodriguez et al., 2008), but this assumption is unrealistic. To apply models to real scenarios, they should be trained on whole untrimmed videos, such as SoccerNet (Giancola et al., 2018; Deliege et al., 2021). Secondly, broadcast-like videos with moving camera and replays are the most natural to collect. It could be difficult and costly to gather videos from multiple cameras for the same scene; however, middle camera recordings or drone recordings could be valuable and useful for everyday training, even being affordable for smaller clubs. Although there are already quite a few soccer datasets, there is still room for improvement. Action recognition is essential in the analysis of matches in lower leagues to find talented players. Unfortunately, many games are not broadcasted, and the installation of many cameras on the pitch is too expensive. Thus, datasets consisting of drone recordings or middle camera-only recordings and models trained on them could supplement the work of scouts. Moreover, even though there are a lot of multimodal soccer data sources, there is a lack of publicly available datasets, including them. 
The development of spatio-temporal action localization methods in soccer can lead to the easy highlighting of actions performed by individual players. Furthermore, combining results with homography allows statistics per action to be computed. For instance, _"Player [] covers a distance [] while dribbling with an average speed of []"_. Despite the wide range of applications, there is no dedicated soccer dataset for this task. In contrast, authors of [Ibrahim et al., 2016] proposed a widely used volleyball dataset consisting of sportsman position and action annotation along with group activity labels. Methods and Potential of MultimodalitySoccer action scene understanding, which can be divided into action classification, spotting and spatio-temporal action localization, is a crucial aspect of analyzing soccer matches. The mentioned tasks vary in difficulty, with action classification being the easiest and action spotting being a more complex task that involves classification and finding temporal localization of actions. Spatio-temporal action localization makes the task more difficult by adding the spatial aspect. Considering the temporal context of actions is essential for several reasons. Firstly, some actions follow each other and information that one occurred should increase the probability of the associated action. For instance, yellow cards occur after fouls, which are then followed by free kicks. Moreover, analysis of frames before and after an action can provide valuable information about the occurrence of actions. Before a goal-scoring situation, one team runs towards the opposite goal in the video, and the audio track can capture the reactions of reporters and fans, including cheers or boos. Also, the results of an action can be deduced from what happens after the event. If there is a goal, players and their fans celebrate it. It can be challenging to train models to localize actions when sudden position changes and fast movement occur during attacks and dribbling. The duration of events can also vary significantly, such as the difference between the time taken for a yellow card and the ball being out of play. Model performance on benchmark datasets has fluctuated over the past few years. Action spotting on SoccerNet [Giancola et al., 2018] has gained about 17 percentage points of average mAP in comparison to the baseline (from 49.7 to 66.8). Similarly, after the release of SoccerNet-v2 [Deliage et al., 2021] in 2021, containing 17 action classes, the average mAP increased from 18.6 to 78.5. A similar trend is observed on the MultiSports [Li et al., 2021] dataset for spatio-temporal action localization, where the best model published by dataset's authors was outperformed in terms of [email protected] (almost twice as good) and in terms of [email protected] (four times as good) [Singh et al., 2022]. Huge improvements were made thanks to enriching data representation with other data sources, such as audio [Vanderlaes and Dupont, 2020, Rongyed et al., 2021, Cartas et al., 2022], graphs [Cartas et al., 2022, Cioppa et al., 2021], and pose [Singh et al., 2016]. Although SoccerNet includes reporter's commentary track, text input has not yet been used in modelling. A textual input can provide a plethora of valuable information as sports commentators describe the unfolding events on the pitch. Thus, work on the enriching of representation with text data is promising; experiments will be needed to verify this conjecture. 
Multimodal models can also be understood as combining different representations of the same input, e.g. a concatenation of several different embeddings of the same video. This approach was used in many articles [Zhou et al., 2021, Shi et al., 2022, Cao et al., 2022, Soares et al., 2022, Soares and Shah, 2022]. The most groundbreaking is the Baidu Research article [Zhou et al., 2021], which published semantic features extracted by multiple action recognition models; these features were then reused in other articles. To sum up, action scene understanding in soccer has attracted much attention from research teams in recent years. Many publicly available datasets have been released, and models have improved the accuracy of action spotting and recognition. Nevertheless, some interesting and relevant problems remain to be addressed, including spatio-temporal action localization datasets and models dedicated to soccer, and experiments with multimodal data such as textual commentary.

## Acknowledgments

We would like to thank Olaf Skrabacz and Filip Boratyn for their valuable insights, comments and recommendations. This research was supported by NASK - National Research Institute.

## Declarations

### Availability of data and materials

Not applicable

### Code availability

Not applicable
2309.04605
Evaluating Total Environmental Impact for a Computing Infrastructure
In this paper we outline the results of a project to evaluate the total climate/carbon impact of a digital research infrastructure for a defined snapshot period. We outline the carbon model used to calculate the impact and the data collected to quantify that impact for a defined set of resources. We discuss the variation in potential impact across both the active and embodied carbon for computing hardware and produce a range of estimates on the amount of carbon equivalent climate impact for the snapshot period.
Adrian Jackson, Jon Hays, Alex Owen, Nicholas Walton, Alison Packer, Anish Mudaraddi
2023-09-08T21:30:35Z
http://arxiv.org/abs/2309.04605v1
# Evaluating Total Environmental Impact for a Computing Infrastructure

###### Abstract.

In this paper we outline the results of a project to evaluate the total climate/carbon impact of a digital research infrastructure for a defined snapshot period. We outline the carbon model used to calculate the impact and the data collected to quantify that impact for a defined set of resources. We discuss the variation in potential impact across both the active and embodied carbon for computing hardware and produce a range of estimates on the amount of carbon equivalent climate impact for the snapshot period.

Keywords: embodied carbon, climate impact, power usage, digital research infrastructure, environment

CCS Concepts: Computer systems organization (Distributed architectures); Theory of computation (Parallel algorithms); Computing methodologies (Parallel algorithms; Modeling and simulation)

## 1. Introduction

Moving towards Net-Zero for digital research infrastructures (DRIs), i.e. providing DRIs that do not have significant impacts on the climate or environment, requires robust information to enable good decision making around infrastructure procurement and provisioning. This requires understanding the full carbon costs or climate impacts associated with operating, maintaining, and using the infrastructure, going beyond accounting for the electricity and cooling required for operations of any service, and including the full chain of costs embodied in the infrastructure. In this short paper we outline the work done during the IRICAST project (Brands, 2018) to evaluate the full lifecycle climate emissions associated with an active DRI, both by cataloguing the resources that compose the DRI and by measuring energy consumption for a defined period of the operation of the DRI. To convert the collected data into the climate impact of the DRI we have developed a carbon model to produce an overall figure for the climate impact of a 24 hour period (a snapshot) of operating the IRIS DRI. During this process we have identified many areas where data is either incomplete or of variable quality, signalling that much more work is required to properly quantify the climate impact of DRIs, such as High Performance Computing (HPC) systems. Nevertheless, this initial work, coupled with other available data, lets us start to build a picture of where the majority of climate impacts are likely to originate for DRIs and thereby lets us start to address these areas to reduce the overall climate impact of DRI technologies whilst maximising the benefits such infrastructure provides. For the rest of the paper, we will introduce the IRIS DRI, briefly discuss the IRICAST approach, outline the carbon model we have designed, and then discuss the results of monitoring and evaluating the DRI for a 24 hour period to enable quantifying the climate impact of such a system. We finish with a discussion of the implications of this work and future research that could improve the accuracy of such measurement approaches.

## 2. IRIS

IRIS1 is a collaboration of computing and data storage providers that deliver a DRI supporting research in areas such as particle physics, nuclear physics, space science and astronomy. The hardware is being used for projects such as the Square Kilometre Array and the Deep Underground Neutrino Experiment. Table 1 summarises the hardware from IRIS that was included in our snapshot experiment.
\begin{table} \begin{tabular}{l l} \hline \hline Site & Hardware \\ \hline Queen Mary University of London (QMUL) & 118 CPU nodes \\ Cambridge University (CAM) & 60 CPU nodes \\ Durham University (DUR) & 808 CPU nodes \\ & 64 storage nodes \\ Rutherford Appleton & 699 CPU nodes \\ Laboratory (STFC) & (SCARF HPC system) \\ & 651 CPU nodes \\ & (STFC Cloud) \\ & 105 storage nodes \\ Imperial College London (IMP) & 241 CPU nodes \\ \hline \hline \end{tabular} \end{table} Table 1. Summary of the IRIS hardware included in the project
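A carbon model of the kind described above has to combine operational (active) emissions, derived from measured energy use, with a share of the embodied emissions attributed to the snapshot window. The sketch below illustrates one plausible way to structure such a calculation; every constant in it (grid carbon intensity, PUE, embodied carbon per node, hardware lifetime, average node power) is an assumed placeholder for illustration and not a value reported by the IRICAST project.

```python
# Illustrative snapshot carbon model: operational emissions from measured
# energy use (scaled by PUE and grid carbon intensity) plus embodied emissions
# amortised over an assumed hardware lifetime. All constants are assumptions.

SNAPSHOT_HOURS = 24.0
GRID_INTENSITY_KG_PER_KWH = 0.20    # assumed grid carbon intensity (kgCO2e/kWh)
PUE = 1.4                           # assumed data-centre power usage effectiveness
EMBODIED_KG_PER_NODE = 1500.0       # assumed embodied carbon per server (kgCO2e)
LIFETIME_YEARS = 5.0                # assumed hardware lifetime for amortisation

def snapshot_emissions(num_nodes: int, avg_node_power_w: float) -> dict:
    """Return operational and embodied kgCO2e attributed to the 24 h snapshot."""
    energy_kwh = num_nodes * avg_node_power_w * SNAPSHOT_HOURS / 1000.0
    operational = energy_kwh * PUE * GRID_INTENSITY_KG_PER_KWH
    lifetime_hours = LIFETIME_YEARS * 365.0 * 24.0
    embodied = num_nodes * EMBODIED_KG_PER_NODE * SNAPSHOT_HOURS / lifetime_hours
    return {"operational_kgCO2e": operational,
            "embodied_kgCO2e": embodied,
            "total_kgCO2e": operational + embodied}

# Example: a site with 118 CPU nodes drawing an assumed 350 W each on average.
print(snapshot_emissions(num_nodes=118, avg_node_power_w=350.0))
```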
2309.14370
Kinetic rate coefficients for electron-driven collisions with CH$^+$: dissociative recombination and rovibronic excitation
Cross sections and rate coefficients for rovibronic excitation of the CH$^+$ ion by electron impact and dissociative recombination of CH$^+$ with electrons are evaluated using a theoretical approach combining an R-matrix method and molecular quantum defect theory. The method has been developed and tested, comparing the theoretical results with the data from the recent Cryogenic Storage Ring experiment. The obtained cross sections and rate coefficients evaluated for temperatures from 1~K to 10,000~K could be used for plasma modeling in interpretation of astrophysical observations and also in technological applications where molecular hydrocarbon plasma is present.
Joshua Forer, Dávid Hvizdoš, Mehdi Ayouz, Chris H. Greene, Viatcheslav Kokoouline
2023-09-23T22:35:31Z
http://arxiv.org/abs/2309.14370v1
Kinetic rate coefficients for electron-driven collisions with CH\({}^{+}\): dissociative recombination and rovibronic excitation. ###### Abstract Cross sections and rate coefficients for rovibronic excitation of the CH\({}^{+}\) ion by electron impact and dissociative recombination of CH\({}^{+}\) with electrons are evaluated using a theoretical approach combining an R-matrix method and molecular quantum defect theory. The method has been developed and tested, comparing the theoretical results with the data from the recent Cryogenic Storage Ring experiment. The obtained cross sections and rate coefficients evaluated for temperatures from 1 K to 10,000 K could be used for plasma modeling in interpretation of astrophysical observations and also in technological applications where molecular hydrocarbon plasma is present. keywords: molecular processes - plasmas - scattering - astrochemistry - ISM: clouds ## 1 Introduction Hydrides and their ions are often used to trace various characteristics of the interstellar medium (ISM). Understanding their formation and destruction mechanisms is therefore necessary in using them as accurate tracers. The CH\({}^{+}\) ion, discovered in the interstellar medium by Douglas and Herzberg (1941) and Adams (1941), was originally thought to be formed primarily through the reaction \[\mathrm{C}^{+}+\mathrm{H}_{2}\xrightarrow{}\mathrm{CH}^{+}+\mathrm{H}. \tag{1}\] The reaction is endothermic, requiring 0.398 eV (\(\sim\)4620 K) to proceed (Hierl et al., 1997), but typical kinetic temperatures of diffuse clouds are roughly between 40 and 130 K (Shull et al., 2021). Hence, the reaction cannot explain the observed abundance of CH\({}^{+}\) in diffuse clouds (Godard and Cernicharo, 2013). Observations of CH\({}^{+}\) in diffuse clouds have motivated many theoretical and experimental studies on the structure and reactivity of the ion. The structure of the ion has been well established for several decades, while there is still need for cross sections for processes involving collisions of CH\({}^{+}\) with electrons. In particular, the knowledge of cross sections for excitation of the ion by electron impact is important for interpretations of astrophysical observations. For example, rotational excitation of CH\({}^{+}\) by electron impact was found to be the dominant process producing CH\({}^{+}\) in the ISM Godard and Cernicharo (2013). Besides astronomical applications, cross sections and rate coefficients for e\({}^{-}\) + CH\({}^{+}\) collisions are important for the interpretation and modelling of hydrocarbon plasma behavior. Other than rotational excitation, mentioned above, one needs the data on vibrational and electronic excitation and dissociative recombination (DR). CH\({}^{+}\) is also a suitable candidate for benchmark theoretical studies of such processes. On one hand, this is because there are several experimental measurements (Amitay et al., 1996; Paul et al., 2022; Kalosi et al., 2022) available - of DR in particular. On the other hand, the ion has a relatively complex electronic structure, such that theoretical methods could be tested on this system and, if successful, be applied to other similar problems. Our recent theoretical study (Forer et al., 2023) has demonstrated that theory can now accurately describe the DR process in CH\({}^{+}\). 
The theoretical approach developed in that study (originated from several previous works (Hamilton and Greene, 2002; Kokoouline and Greene, 2003; Curk et al., 2020)) with small modifications can also be used to obtain cross sections for rotational, vibrational, and even electronic excitation of the CH\({}^{+}\) ion by electron impact. Because the theoretical method was validated comparing the DR results with the experimental data, it is expected to provide reliable data on the electron-impact excitation processes as well. This study is devoted to the theoretical evaluation of cross sections and rate coefficients for rovibronic excitation of CH\({}^{+}\) ion by electron impact. The only available experimental result on excitation of CH\({}^{+}\) is from a recent study in the Cryogenic Storage Ring (CSR) (Kalosi et al., 2022), where the rate coefficient for rotational excitation from the ground rovibrational level \(v=0\), \(j=0\) of CH\({}^{+}\) to the first excited \(v=0\), \(j=1\) was measured. However, there have been several theoretical studies on rotational excitation using the Coulomb-Born approximation (Chu and Dalgarno, 1974; Dickinson and Munoz, 1977; Lim et al., 1999) and a semi-classical method (Flower, 1979). More recently, vibronic excitation of CH\({}^{+}\) was studied by Jiang et al. (2019) using a combination of an R-matrix approach (Carr et al., 2012) and molecular quantum defect theory (Seaton, 1983; Aymar et al., 1996). The present study is based on a fully quantum description of e\({}^{-}\) + CH\({}^{+}\) collisions and considers rotational and vibrational degrees of freedom of the target ion, as well as its electronic structure, including the three lowest electronic states and corresponding Rydberg resonances appearing in the CH\({}^{+}\) + e\({}^{-}\) collisional spectrum. The article is organized in the following way: section 2 gives an overview of the present theoretical approach and the differences between its application for DR and rovibronic excitation, section 3 presents our DR rate coefficients, section 4 describes the Coulomb-Born approximation and its application to the present rotational excitation calculations, section 5 is devoted to the discussion of obtained results on rovibronic excitation, and section 6 concludes our findings. ## 2 Theoretical approach The method described in this section combines fixed-nuclei electron-scattering calculations with the R-matrix method, rovibrational frame transformation, and multichannel quantum defect theory (MQDT). Only the main elements of the approach will be presented here. We use the same method to calculate scattering matrices as in our previous study of the dissociative recombination (DR) of CH\({}^{+}\)(Forer et al., 2023). The theory and computational details have only two main differences: the formula for obtaining the rovibronic excitation (RVE) cross sections (instead of DR cross sections) and the vibrational Hamiltonian used in the vibrational frame transformation of the electronic S-matrix. We perform electron-scattering calculations with the R-Matrix method implemented in UKRMol (Carr et al., 2012; Tennyson, 2010), accessed via the Quantemol-N interface (Tennyson et al., 2007). The K-matrices for the e\({}^{-}\) + CH\({}^{+}\) system are represented in a basis of _electronic_ scattering channels, indexed by \(n\), \(l\), and \(\lambda\). 
The quantum numbers \(l\) and \(\lambda\) correspond to the magnitude of the orbital angular momentum of the incident electron and its projection on the molecular axis, respectively. The electronic states of the target, CH\({}^{+}\), are indexed by \(n\). K-matrices are obtained for several values of \(R\), the internuclear distance of CH\({}^{+}\). We then transform the K-matrices into the S-matrix, via the intermediary matrix of phase shifts, \(\underline{\delta}(R)\). The matrices are formally related by
\[\underline{K}(R)=\tan\underline{\delta}(R),\qquad\underline{S}(R)=(I+iK)(I-iK)^{-1}=e^{2i\underline{\delta}(R)}. \tag{2}\]
The reasons for this transformation are twofold: the S-matrix is a smooth function of \(R\), which is necessary to have a more accurate vibrational integral in the frame transformation, and the S-matrix is used in the formula to compute the cross sections. After the electronic S-matrices are obtained for each value of \(R\), we proceed with the vibrational frame transformation. The first difference of this approach with respect to our study of DR (Forer et al., 2023) is that the vibrational Hamiltonian is Hermitian. To study DR, we used a complex absorbing potential to represent absorption by discretizing the continuum. Here, we only need to obtain vibrational wave functions for bound states. The vibrational frame transformation proceeds as
\[S^{\Lambda}_{n'l'\nu',\,nl\nu}=\int dR\;\phi_{n'\nu'}(R)\;S^{\Lambda}_{n'l',\,nl}(R)\;\phi_{n\nu}(R), \tag{3}\]
where \(\nu\) indexes a vibrational level. The superscript \(\Lambda\) indicates that the S-matrices are block diagonal with respect to \(\Lambda\), the projection of the total angular momentum on the molecular axis. The Hermitian Hamiltonian implies, of course, real eigenvalues. The channel energies in the case of DR are complex, with a nonzero imaginary part for continuum states. Here, all channel energies and vibrational wave functions \(\phi_{n\nu}(R)\) are real-valued. The S-matrix on the left-hand side of (3) is the _vibronic_ S-matrix and, unlike in the case of DR, unitarity is defined with the usual spectral norm, i.e., \(S^{\Lambda}S^{\Lambda\dagger}=I\). Following the vibrational frame transformation, we perform the rotational frame transformation on the vibronic S-matrix to obtain the _rovibronic_ S-matrix, i.e.,
\[S^{J}_{n'\nu'j'l',\,n\nu jl}=\sum_{\Lambda}C^{J\Lambda}_{j'\mu'\,l'\lambda'}\;S^{\Lambda}_{n'l'\nu',\,nl\nu}\;C^{J\Lambda}_{j\mu\,l\lambda}, \tag{4}\]
where the \(C^{J\Lambda}_{j\mu\,l\lambda}\) are Clebsch-Gordan coefficients with \(\lambda=\Lambda-\mu\) and \(\lambda'=\Lambda-\mu'\), the total angular momentum of the ion-electron system is \(\vec{J}=\vec{j}+\vec{l}\), \(\vec{j}\) is the total angular momentum of the ion, and \(\mu\) is the projection of \(j\) on the molecular axis. The S-matrix, now expressed in a basis of rovibronic channels, is block diagonal over \(J\).
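As a numerical illustration of equations (2) and (3), the fragment below builds the electronic S-matrix from a K-matrix and carries out the vibrational frame transformation as a quadrature over \(R\). The grid, the synthetic K-matrices and the placeholder vibrational functions are illustrative only and are not the CH\({}^{+}\) data used in the actual calculations.

```python
# Sketch of Eqs. (2)-(3): S(R) = (I + iK)(I - iK)^(-1) at each R, then
# S_vib[v', c', v, c] ~ sum_R w_R phi_{v'}(R) S_{c'c}(R) phi_v(R).
# All inputs below are synthetic placeholders, not the CH+ data.
import numpy as np

def k_to_s(K: np.ndarray) -> np.ndarray:
    """Electronic S-matrix from a real symmetric K-matrix (Eq. 2)."""
    I = np.eye(K.shape[0])
    return (I + 1j * K) @ np.linalg.inv(I - 1j * K)

def vibrational_frame_transform(S_of_R, phi, weights):
    """
    S_of_R : (nR, nc, nc) electronic S-matrices on the R grid
    phi    : (nv, nR) bound vibrational wave functions on the same grid
    weights: (nR,) quadrature weights, so the result approximates Eq. (3).
    """
    return np.einsum("r,ar,rcd,br->acbd", weights, phi, S_of_R, phi)

# Tiny synthetic example: 2 electronic channels, 3 vibrational levels, 200 R points.
R = np.linspace(1.0, 6.0, 200)
w = np.gradient(R)                                     # crude quadrature weights
K_of_R = np.array([0.1 * np.sin(r) * np.outer([1.0, 0.5], [1.0, 0.5]) for r in R])
S_of_R = np.array([k_to_s(K) for K in K_of_R])
x = R - 2.3
basis = [np.ones_like(x), x, x**2 - 0.5]               # crude polynomial factors
phi = np.array([np.exp(-x**2) * p for p in basis])     # placeholder "vibrational" functions
phi /= np.sqrt((phi**2 * w).sum(axis=1, keepdims=True))   # normalise on the grid
S_vib = vibrational_frame_transform(S_of_R, phi, w)        # shape (3, 2, 3, 2)
```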
For each scattering energy, we then partition \(S^{J}\) into blocks corresponding to open (\(o\)) and closed (\(c\)) channels and construct the diagonal matrix \(\underline{\beta}\) for closed channels, \[\underline{S}^{J}=\begin{pmatrix}\underline{S}_{oo}&\underline{S}_{oc}\\ \underline{S}_{co}&\underline{S}_{cc}\end{pmatrix},\qquad\beta_{ii^{\prime}}(E_{\mathrm{tot}})=\frac{\pi}{\sqrt{2(E_{i}-E_{\mathrm{tot}})}}\,\delta_{ii^{\prime}}, \tag{5}\] where \(E_{i}\) is the energy of the \(i^{\mathrm{th}}\) channel and is real, and \(E_{\mathrm{tot}}\) is the total energy of the ion-electron system. We proceed with the closed-channel elimination procedure, borrowed from MQDT, to reduce the S-matrix to only open channels, \[\underline{S}^{J,\mathrm{phys}}(E_{\mathrm{tot}})=\underline{S}_{oo}-\underline{S}_{oc}\left(\underline{S}_{cc}-e^{-2i\underline{\beta}}\right)^{-1}\underline{S}_{co}. \tag{6}\] The physical S-matrix, \(S^{J,\mathrm{phys}}\), is then used to calculate the total RVE cross section from some initial channel \(|nvj\rangle\) to some final channel \(|n^{\prime}v^{\prime}j^{\prime}\rangle\), \[\sigma_{n^{\prime}v^{\prime}j^{\prime}\leftarrow nvj}(E_{\mathrm{el}})=\frac{\pi}{2m_{\mathrm{e}}E_{\mathrm{el}}}\sum_{J}\frac{2J+1}{2j+1}\sum_{l\,l^{\prime}}\left|S^{J,\mathrm{phys}}_{n^{\prime}v^{\prime}j^{\prime}l^{\prime},\,nvjl}\right|^{2}, \tag{7}\] where \(m_{\mathrm{e}}\) is the mass of an electron and \(E_{\mathrm{el}}\) is the incident electron energy. It is also possible to calculate vibronic excitation (VE) cross sections, i.e., not including the rotational structure, by simply skipping the rotational frame transformation (4). The closed-channel elimination procedure remains identical, except that the S-matrices are block diagonal over \(\Lambda\) and not \(J\). The total VE cross section from some initial channel \(|nv\rangle\) to some final channel \(|n^{\prime}v^{\prime}\rangle\) is then \[\sigma_{n^{\prime}v^{\prime}\leftarrow nv}(E_{\mathrm{el}})=\frac{\pi}{2m_{\mathrm{e}}E_{\mathrm{el}}}\sum_{\Lambda}\sum_{l\,l^{\prime}}\left|S^{\Lambda,\mathrm{phys}}_{n^{\prime}v^{\prime}l^{\prime},\,nvl}\right|^{2}. \tag{8}\] The cross sections obtained from (7) and (8) only describe a single scattering event. To better describe conditions in the ISM, kinetic rate coefficients are needed, which rely on the above cross sections. State-selected kinetic rate coefficients, for DR, RVE, or VE, are obtained following \[\alpha_{i}(T)=\frac{\int\limits_{0}^{\infty}\sigma(E_{\mathrm{el}})\sqrt{2E_{\mathrm{el}}/m_{\mathrm{e}}}\,\sqrt{E_{\mathrm{el}}}\,e^{-E_{\mathrm{el}}/kT}\,dE_{\mathrm{el}}}{\int\limits_{0}^{\infty}\sqrt{E_{\mathrm{el}}}\,e^{-E_{\mathrm{el}}/kT}\,dE_{\mathrm{el}}}, \tag{9}\] where \(k\) is the Boltzmann constant and \(\sigma\) is a cross section. In practice, these integrals are carried out numerically. Additionally, one can average the state-selected rate coefficients obtained from (9) by \[\overline{\alpha}(T)=\frac{\sum\limits_{i}\alpha_{i}(T)(2j_{i}+1)e^{-E_{i}/kT}}{\sum\limits_{i}(2j_{i}+1)e^{-E_{i}/kT}}, \tag{10}\] where \(i\) indexes starting channels whose total angular momentum quantum number of the ion is \(j_{i}\) and whose channel energy is \(E_{i}\). The rate coefficient (DR or (R)VE), starting from some channel indexed by \(i\) and obtained with (9), is given by \(\alpha_{i}(T)\). If \(\alpha_{i}(T)\) is a rate coefficient without rotational resolution, \(j_{i}\) can be taken to be zero in (10). 
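For reference, the numerical evaluation of (9) and (10) can be sketched as below; the function names, unit conventions (atomic units, with \(kT\) passed in the same units as the energy grid), and array layout are our own illustrative assumptions.

```python
# Hedged sketch of the thermal averages (9) and (10): Maxwellian average of a
# cross section tabulated on an energy grid, then Boltzmann averaging over
# initial channels. Atomic units are assumed (m_e = 1).
import numpy as np

def rate_coefficient(E, sigma, kT, m_e=1.0):
    """Eq. (9): E is the electron-energy grid, sigma the cross section on it."""
    weight = np.sqrt(E) * np.exp(-E / kT)
    return np.trapz(sigma * np.sqrt(2.0 * E / m_e) * weight, E) / np.trapz(weight, E)

def averaged_rate(alphas, E_chan, j_chan, kT):
    """Eq. (10): average state-selected rates alpha_i over initial channels
    with energies E_chan and rotational quantum numbers j_chan."""
    g = (2 * np.asarray(j_chan) + 1) * np.exp(-np.asarray(E_chan) / kT)
    return np.sum(np.asarray(alphas) * g) / np.sum(g)
```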
The precision of theoretical cross sections is only limited by the numerical precision of the calculations. Experimental measurements have much larger uncertainties, so comparisons are often made by convolving theoretical results with experimental parameters. The convolution function differs for every experimental setup, but Gaussian functions are fairly common: \[\tilde{\sigma}(E)=\frac{\int dE_{\mathrm{el}}\,\sigma(E_{\mathrm{el}})\,e^{-(E_{\mathrm{el}}-E)^{2}/(2\gamma^{2})}}{\int dE_{\mathrm{el}}\,e^{-(E_{\mathrm{el}}-E)^{2}/(2\gamma^{2})}}, \tag{11}\] \[\tilde{\sigma}(E)=\frac{1}{\gamma\sqrt{2\pi}}\int dE_{\mathrm{el}}\,\sigma(E_{\mathrm{el}})\,e^{-(E_{\mathrm{el}}-E)^{2}/(2\gamma^{2})}. \tag{12}\] The parameter \(\gamma\) is the convolution width (in the same energy units as the electron energy grid). The prefactor in (12) is the analytic expression of the denominator in (11). Because calculations are performed numerically on predetermined grids of scattering energies, (12) and (11) may give different results near the endpoints. ## 3 Rate coefficients for dissociative recombination Fig. 1 presents state-selected kinetic DR rate coefficients obtained with the present method (Forer et al., 2023). DR and RVE are related in the sense that they are competing processes. During a collision between an ion and an electron, if the initial channel is not the only open channel, both processes may take place. This is why we take the DR probability to be the probability that no RVE occurs. Therefore, DR results exhibiting a certain level of agreement with experimental results would suggest that RVE is described, overall, with similar accuracy. The present method produces much more accurate kinetic DR rate coefficients than the previous theoretical results of Mezei et al. (2019) when compared to recent measurements made at the Cryogenic Storage Ring (Paul et al., 2022) over the astrophysically relevant temperature range for diffuse clouds (\(\sim\)40 K - 130 K (Shull et al., 2021)). ## 4 On the Coulomb-Born approximation for rotational and vibrational (de-)excitation The dipole moment of CH\({}^{+}\) -- defined with respect to the molecular center of mass, about which the molecule rotates -- couples partial waves with \(\Delta l=\pm 1\), which reduces the accuracy of our partial wave basis (\(l\)=0-2) for such a long-range process as rotational excitation. We include the effect of higher partial waves in the Coulomb-Born approximation (Boikova and Ob'edkov, 1968; Gailitis, 1976; Chu and Dalgarno, 1974), similar to the method described in the work of Rabadan and Tennyson (1998), by calculating three different cross sections: cross sections obtained from our R-matrix method (\(\sigma^{\rm R-matrix}\), calculated according to (7)), total cross sections obtained in the Coulomb-Born approximation representing the contribution of all partial waves (\(\sigma^{\rm TCB}\), Eq. (16)), and partial cross sections obtained in the Coulomb-Born approximation representing the contribution of the partial waves included in our basis (\(\sigma^{\rm PCB}\), Eq. (14)). The final rovibrational excitation cross sections are then a sum of the R-matrix cross sections and the difference between the total and partial Coulomb-Born cross sections, i.e., \[\sigma^{\rm RVE}=\sigma^{\rm R-matrix}+\sigma^{\rm TCB}-\sigma^{\rm PCB}. \tag{13}\] 
Lower partial-wave scattering is typically not well described by the Coulomb-Born approximation because the electron is too close to the molecule for the dipole interaction to be considered a perturbation. This is especially true for \(s\)-wave scattering. Therefore, we replace the \(l=0\)-\(2\) partial wave contribution from the Coulomb-Born approximation with those in our R-matrix calculations. The partial Coulomb-Born cross sections are given by \[\sigma^{\rm PCB}_{v^{\prime}j^{\prime}\leftarrow vj}=16\pi\frac{k^{\prime}}{k}\left|\langle v^{\prime}|Q_{\xi}(R)|v\rangle\right|^{2}\frac{2j^{\prime}+1}{2\xi+1}\begin{pmatrix}j&j^{\prime}&\xi\\ 0&0&0\end{pmatrix}^{2}\sum_{l,l^{\prime}}^{l_{\rm max}}(2l+1)(2l^{\prime}+1)\begin{pmatrix}l&l^{\prime}&\xi\\ 0&0&0\end{pmatrix}^{2}\left|M^{l^{\prime}l}_{\xi}\right|^{2}, \tag{14}\] where \(l_{\rm max}=2\) because our R-matrix calculations only include up to \(l=2\) partial waves. The dipole moment function is given by \(Q_{\xi}(R)\) and the matrix elements \(M^{l^{\prime}l}_{\xi}\) are given by \[M^{l^{\prime}l}_{\xi}=\frac{1}{kk^{\prime}}\int\limits_{0}^{\infty}dr\,F_{l^{\prime}}(\eta^{\prime},r)\,r^{-\xi-1}\,F_{l}(\eta,r), \tag{15}\] where \(F_{l}(\eta,r)\) is the regular radial Coulomb function, \(\eta=-1/k\), and \(\eta^{\prime}=-1/k^{\prime}\). For an approach that does not treat vibration, the integral \(\langle v^{\prime}|Q_{\xi}(R)|v\rangle\) in (14) can be replaced with the dipole moment at the equilibrium geometry of the ion. Considering the dipolar coupling (\(\xi=1\)), the partial Coulomb-Born cross sections (14) converge to the following as \(l_{\rm max}\rightarrow\infty\): \[\sigma^{\rm TCB}_{v^{\prime}j^{\prime}\leftarrow vj}=\frac{8\pi^{3}}{3k^{2}}\left|\langle v^{\prime}|Q_{1}(R)|v\rangle\right|^{2}(2j^{\prime}+1)\begin{pmatrix}j&j^{\prime}&1\\ 0&0&0\end{pmatrix}^{2}f(\eta,\eta^{\prime}), \tag{16}\] where \[f(\eta,\eta^{\prime})=\frac{e^{2\pi\eta}}{(e^{2\pi\eta}-1)(e^{2\pi\eta^{\prime}}-1)}\,x_{0}\frac{d}{dx_{0}}\left|{}_{2}F_{1}(-i\eta,-i\eta^{\prime};1;x_{0})\right|^{2}.\] The work of Rabadan and Tennyson (1998) contains minor errors (non-squared Wigner 3-\(j\) symbols) in their equations (3) and (4), which are corrected here. ## 5 Rate coefficients for rovibronic (de-)excitation Fig. 2 shows the state-selected kinetic VE rate coefficients obtained by the present method; they agree well with those of Jiang et al. (2019). 
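The convolved cross sections discussed in this section (see Fig. 4) follow the Gaussian recipe of (11) and (12); a minimal numerical sketch of that step is given below, with grid and function names that are our own illustrative choices.

```python
# Hedged sketch of the Gaussian convolution (11): convolve a cross section
# tabulated on an energy grid with a Gaussian of width gamma. Normalizing by
# the numerically integrated kernel (Eq. 11) limits artefacts near the
# endpoints of the grid, as noted in the text.
import numpy as np

def convolve_gaussian(E_grid, sigma, gamma):
    out = np.empty_like(sigma)
    for i, E in enumerate(E_grid):
        kernel = np.exp(-(E_grid - E) ** 2 / (2.0 * gamma ** 2))
        out[i] = np.trapz(sigma * kernel, E_grid) / np.trapz(kernel, E_grid)
    return out
```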
In Fig. 3, we compare the present VE rate coefficients to the same work, starting from the ground vibronic state to the first excited electronic state. The agreement is worse for excitation between electronic states, possibly due to our improved treatment of channels attached to the excited electronic states. Fig. 4 shows the present RE cross sections convolved with a Gaussian distribution according to (11) and (12) with widths (\(\gamma\)) of 1 meV and 5 meV to demonstrate the smoothing out of resonances and to compare their overall magnitude. The \(\Delta j=\pm 1\) transition is the largest over all displayed electron energies (\(<1\) eV), as expected for such a strongly dipolar system as CH\({}^{+}\). Fig. 5 shows the individual cross sections of (13), which show an increasing correction as a function of the incident electron energy. The \(\sigma^{\rm RVE}\) and \(\sigma^{\rm R-matrix}\) cross sections are not convolved, unlike in Fig. 4. Fig. 6 compares the present rotational excitation rate coefficients with those obtained by Hamilton et al. (2015), who included the Coulomb-Born correction for \(\Delta j=\pm 1,\pm 2\) transitions. We only include this correction for \(\Delta j=\pm 1\) transitions, but the agreement between the results is good overall. Hamilton et al. (2015) also use an R-matrix approach, but use the adiabatic-nuclei-rotation approximation to obtain rotational excitation (RE) cross sections and rate coefficients, while we use a frame transformation (4) to describe the rotational structure of the ion. Fig. 7 illustrates the difference between the state-selected RE rates with and without the Coulomb-Born approximation, compared to the experimentally determined RE rate coefficient at the CSR (Kalosi et al., 2022) and again to the results of Hamilton et al. (2015). Our present results with the Coulomb-Born correction show the best overall agreement with the CSR measurements, although all theoretical rates are within the provided 1-\(\sigma\) uncertainty for most of the kinetic temperatures shown (10-140 K).

Figure 2: State-selected kinetic VE rate coefficients within the ground electronic state of CH\({}^{+}\). Solid lines represent rate coefficients from the present calculations, dashed lines are taken from a previous calculation (Jiang et al., 2019). The cross sections are obtained according to (8) and the kinetic rates are obtained according to (9).

Figure 3: State-selected kinetic VE rate coefficients from the ground electronic state of CH\({}^{+}\) to the first excited state of CH\({}^{+}\). Solid lines represent rate coefficients from the present calculations, dashed lines are taken from a previous calculation (Jiang et al., 2019). The cross sections are obtained according to (8) and the kinetic rates are obtained according to (9).

Figure 4: Rotational excitation cross sections within the ground vibronic state of CH\({}^{+}\). The cross sections are obtained according to (7), and convolved with a Gaussian as per (11) with \(\gamma=1\) meV (thin lines) and \(\gamma=5\) meV (thick lines).

## 6 Conclusions This paper presents kinetic state-selected DR, VE, and RE rate coefficients obtained with our DR method (Forer et al., 2023), which also allows us to calculate RVE cross sections and rate coefficients with little extra effort. The DR rate coefficients agree well overall with recent experimental measurements made at the Cryogenic Storage Ring (Paul et al., 2022), and better than previous theoretical treatments. Our VE rate coefficients, compared to the work of Jiang et al. (2019), agree well for vibrational excitation within the ground electronic state of CH\({}^{+}\). However, the results differ by up to an order of magnitude for vibronic excitation to the first excited state of CH\({}^{+}\), which we attribute to a more accurate description of channels attached to excited electronic states (Forer et al., 2023). We also obtain RE rate coefficients within the ground electronic state of CH\({}^{+}\), which we compare to the work of Hamilton et al. (2015), an R-matrix method that describes rotational excitation with the adiabatic-nuclei-rotation approximation and does not treat vibration. They correct their \(\Delta j=\pm 1,\pm 2\) transitions with the Coulomb-Born approximation, while we only do so for \(\Delta j=\pm 1\) transitions. Results between our approaches agree well over the presented kinetic temperatures. Compared to \(j=0\to j^{\prime}=1\) rate coefficients recently measured at the CSR (Kalosi et al., 2022), our theoretical results using the Coulomb-Born correction agree better over all plotted kinetic temperatures than our theoretical results without the Coulomb-Born correction, and slightly better than the recent theoretically determined rate coefficients of Hamilton et al. (2015) over most kinetic temperatures between 10 K and 140 K. However, all theoretically determined rates under 100 K are within the experimental uncertainty.

## Acknowledgements We are thankful for the support from the National Science Foundation, Grant Nos. 2110279 (UCF) and 2102187 (Purdue), the Fulbright-University of Bordeaux Doctoral Research Award, and the program "Accueil des chercheurs étrangers" of CentraleSupelec.

## Data Availability State-selected kinetic rate coefficients from the present calculations are included in the supplementary materials.
2309.12701
Interpretable Decision Tree Search as a Markov Decision Process
Finding an optimal decision tree for a supervised learning task is a challenging combinatorial problem to solve at scale. It was recently proposed to frame the problem as a Markov Decision Problem (MDP) and use deep reinforcement learning to tackle scaling. Unfortunately, these methods are not competitive with the current branch-and-bound state-of-the-art. We propose instead to scale the resolution of such MDPs using an information-theoretic tests generating function that heuristically, and dynamically for every state, limits the set of admissible test actions to a few good candidates. As a solver, we show empirically that our algorithm is at the very least competitive with branch-and-bound alternatives. As a machine learning tool, a key advantage of our approach is to solve for multiple complexity-performance trade-offs at virtually no additional cost. With such a set of solutions, a user can then select the tree that generalizes best and which has the interpretability level that best suits their needs, which no current branch-and-bound method allows.
Hector Kohler, Riad Akrour, Philippe Preux
2023-09-22T08:18:08Z
http://arxiv.org/abs/2309.12701v4
Discovering the Interpretability-Performance Pareto Front of Decision Trees with Dynamic Programming ###### Abstract Decision trees are known to be intrinsically interpretable as they can be inspected and interpreted by humans. Furthermore, recent hardware advances have rekindled an interest in _optimal_ decision tree algorithms, which produce more accurate trees than the usual greedy approaches. However, these optimal algorithms return a single tree optimizing a hand-defined interpretability-performance trade-off, obtained by specifying a maximum number of decision nodes, giving no further insight about the quality of this trade-off. In this paper, we propose a new Markov Decision Problem (MDP) formulation for finding optimal decision trees. The main interest of this formulation is that we can compute the optimal decision trees for several interpretability-performance trade-offs by solving a single dynamic program, letting the user choose a posteriori the tree that best suits their needs. Empirically, we show that our method is competitive with state-of-the-art algorithms in terms of accuracy and runtime while returning a whole set of trees on the interpretability-performance Pareto front. ## 1 Introduction Decision trees remain the dominant machine learning model in applications like medicine where interpretability is essential [13]. Thanks to recent advances in hardware, a new class of decision tree learning algorithms returning optimal trees has emerged [1, 16]. Trees returned by these algorithms are guaranteed to maximize accuracy as they perform an exhaustive search. As such, despite hardware improvements, these algorithms do not scale well beyond trees of depth 3, especially when inputs take continuous values [10]. On the other hand, more heuristic approaches such as CART [1] are still considered state-of-the-art because they scale and offer more advanced mechanisms to control the complexity of the tree. By framing decision tree learning as a sequential decision problem, and by carefully controlling the size of the search space, we achieve in this paper a best of both worlds, returning trees with accuracies close to optimal ones, while offering a better control of the interpretability-performance trade-off than any existing optimal algorithm. We formulate decision tree learning as a Markov Decision Problem (MDP) [12] for which the optimal policy is equivalent to a decision tree. Actions in such an MDP include tests comparing an attribute to a threshold (a.k.a. splits). This action space could include _all_ such tests or a heuristically chosen subset, yielding a continuum between optimal algorithms and heuristic approaches. Furthermore, the reward function of the MDP encodes an interpretability-performance trade-off. In our work, interpretability takes the meaning of simulatability [10], i.e. the average number of splits the tree will perform on the dataset. The MDP reward will be parameterized by \(\alpha\); by decreasing the value of \(\alpha\), the returned tree will be more accurate but will also perform more tests on average, i.e. it is less interpretable. Conversely, by increasing \(\alpha\) the returned tree becomes less accurate but performs fewer tests on average. One can think of optimal decision tree algorithms [1, 13, 14, 15, 16] and CART with Minimal Complexity Post Pruning [15] as also optimizing such an interpretability-performance trade-off between the number of nodes (or the maximum depth) and the accuracy. 
As it is difficult for a user to decide a priori what trade-off best suits their needs, we leverage the parametric reward function of the MDP to return a whole set of decision trees with different interpretability-performance trade-offs. Indeed, it could be that for a given problem, a tree with 10 nodes has an accuracy of 99.9 and a tree with 3 nodes has an accuracy of 99.8, which can only be known a posteriori. We present the Dynamic Programming Decision Tree (DPDT) algorithm that computes the optimal policies for several parameters of the reward, returning a set of decision trees with different interpretability-performance trade-offs. To the best of our knowledge, DPDT is the first algorithm whose purpose is to return a Pareto front of decision trees, while having comparable performance to optimal algorithms [1, 16]. **Summary of contributions**

* In Section 4, we formulate supervised decision tree learning as an MDP whose solution can be a wide range of accurate and interpretable trees.
* In Section 5 we present the Dynamic Programming Decision Tree (DPDT) algorithm returning decision trees on the interpretability-performance Pareto front.
* In Section 6, we show experimentally on various classification datasets that DPDT has very advantageous properties such as tree accuracies close to optimal with better scaling capabilities, and returning a whole set of decision trees for a user to choose from.

## 2 Related Work Optimal Decision Trees: Decision tree learning, or decision tree induction, has been formulated as an optimization problem in which the goal is to construct a tree that correctly fits the data while using a minimal number of splits [10]: it is an interpretability-performance trade-off. Bertsimas and Dunn (2017); Aghaei, Gomez, and Vayanos (2020); Verwer and Zhang (2019) formulate decision tree learning as a Mixed Integer Program (MIP). Instead of passing formulations to generic solvers, Demirovic et al. (2022); Mazumder, Meng, and Wang (2022) design specialized solvers based on dynamic programming and branch-and-bound. This is made possible due to the decomposable nature of decision tree learning: if a tree is optimal, then any subtree it contains is also optimal. Quant-BnB [11] is currently the latest work in this line of research and is considered state-of-the-art for datasets with continuous attributes. However, direct optimization is not a convenient approach, since finding the optimal tree has been identified as NP-Hard [12]. Therefore, heuristics are often employed to efficiently navigate the search space, with top-down greedy algorithms being the most popular (e.g., CART, C4.5, GUIDE). Greedy approaches: Greedy approaches like CART sequentially partition data by taking the most informative splits in the sense of Gini index or entropy minimization [1]. When the entropy cannot be reduced anymore or when a maximum depth has been reached, CART assigns a label to the partition. It can be argued that such algorithms are only one-step optimal and not overall optimal, since the construction procedure only considers the quality of the next split and not of future splits on the same path: it is the "horizon effect" mentioned in [10, 11]. CART can also be used to generate a whole set of decision trees by using Minimal Complexity Post-Pruning. It is used in practice to reduce overfitting. 
In a first phase, the complete tree of a given maximum depth is used to create several trees with increasing levels of simplification, and in a second phase, one of these simplified trees is selected according to a criterion that combines its accuracy (cost) with its number of nodes (complexity or interpretability). Interpretability of Decision Trees: Interpretability of a decision tree is usually associated with its depth or its number of nodes, but other definitions exist. Lipton (2018) coined the term _simulatability_: it is difficult for a user to mentally simulate the tree as a whole when there are too many nodes involved. In the same train of thought, for trees with 3 to 12 leaves, Piltaver et al. (2016) observed a strong negative correlation between the number of leaves in a tree and a "comprehensibility" score given by users. Markov Decision Problems approaches: In Topin et al. (2021), a base Markov Decision Problem (MDP) is extended to an Iterative Bounding MDP (IBMDP) by augmenting the base states with feature bounds and the action space with information-gathering actions that add decision nodes to a decision tree policy. Our approach is heavily inspired by IBMDPs as the MDP we solve can be seen as a stochastic IBMDP whose states only contain feature bounds. Prior to IBMDPs, Garlapati et al. (2015) formulated the classification over binary features problem as an MDP. States of their MDP contain the Boolean value of features from which information was gathered. Similarly to Garlapati et al. (2015), in Dulac-Arnold et al. (2011) the MDP state contains the features to use to classify a data point. Janisch, Pevny, and Lisy (2019) build on Dulac-Arnold et al. (2011) and make use of a neural function approximator to scale to larger datasets. In Nunes et al. (2020), Monte-Carlo tree search [10] is used to learn decision trees. For all these approaches, the action space is the union of label assignments and feature queries. Compared to prior formulations of decision tree learning as an MDP, our key contribution is the careful design of the action space, described in Section 5, which strikes a good balance between the scalability of the algorithm and the quality of the returned trees. ## 3 Supervised Learning of Decision Trees In this paper, we are interested in decision trees for supervised learning problems. Consider a dataset \(\mathcal{D}=\{(x_{i},y_{i})\}_{i\in\{1,\ldots,N\}}\), where the input is \(x_{i}\in X\subseteq\mathbb{R}^{p}\) and the supervision label is \(y_{i}\in Y\). A decision tree \(T\) sequentially applies logical tests to \(x_{i}\in X\) before assigning it a value in \(Y\), which we denote \(T(x_{i})\in Y\). The tree thus has two node types, test nodes that apply logical tests and leaf nodes that pick a value in \(Y\). In this paper, we focus on binary decision trees (i.e. where decision nodes split into a left and right child, such as in Breiman (1984)) with axis-aligned splits. However, all our results generalize to logical tests involving non-linear functions of multiple features. These decision trees apply logical tests of type \(\mathds{1}_{\{x_{i,j}\leq v\}}\) with \(v\in\mathbb{R}\), where feature \(j\) of \(x_{i}\) is compared to \(v\), for example: "\(x_{2}\leq 3\)". Furthermore, we look for trees with a maximum depth \(D\), where \(D\) is the maximum number of logical tests a tree can apply to classify a single \(x_{i}\in X\). We let \(\mathcal{T}_{D}\) be the set of all binary decision trees of depth \(\leq D\). 
Given a loss \(\ell\) defined on \(Y\times Y\), we look for trees in \(\mathcal{T}_{D}\) satisfying \[T=\operatorname*{argmin}_{T\in\mathcal{T}_{D}}\,\mathcal{L}_{\alpha}(T), \tag{1}\] \[=\operatorname*{argmin}_{T\in\mathcal{T}_{D}}\,\frac{1}{N}\sum_{i=1}^{N}\ell(y_{i},T(x_{i}))+\alpha C(T), \tag{2}\] where \(C:\mathcal{T}\rightarrow\mathbb{R}\) is a function of the learned trees that quantifies their simulatability [12]. It could be the number of nodes like in optimal tree algorithms [1, 10, 11]. In our work, we are interested in the expected number of logical tests a tree applies on a data point from \(\mathcal{D}\). In a regression problem, \(Y\subset\mathbb{R}\) and \(\ell(y_{i},T(x_{i}))\) can be \((y_{i}-T(x_{i}))^{2}\). For supervised classification problems, \(Y=\{1,...,K\}\), where \(K\) is the number of class labels, and \(\ell(y_{i},T(x_{i}))=\mathds{1}_{\{y_{i}\neq T(x_{i})\}}\). In our work, we focus on supervised classification but the MDP formulation extends naturally to regression. ## 4 Decision Tree Learning as a Markov Decision Problem There have been previous attempts in the literature to encode a supervised learning problem as a Markov decision problem (MDP, L. Puterman (1994)). ### Constructing the MDP An algorithm constructing the MDP of Section 4 essentially computes the set of all possible decision trees of maximum depth \(D\) whose decision nodes are tests generated by \(\phi\). The MDP itself is a directed acyclic graph and each node corresponds to a state of the MDP, for which we compute the transition and reward functions. For non-terminal nodes, instead of storing all the samples in \((X,d)\), one should only store \(d\) and the binary vector of size \(N\), \(x_{bin}=(\mathds{1}_{\{x_{i}\in X\}})_{i\in\{1,\dots,N\}}\). ### Tests generating functions Using the action space as defined in Section 4, the MDP will contain all possible trees of depth at most \(D\), which guarantees the returned tree to be optimal w.r.t. Eq. (2) at the cost of scalability. Indeed, in this case, the number of states in the MDP would be of the order of \(\sum\limits_{d=0}^{D-1}K(2Np)^{d}\), which scales exponentially with the maximum depth of the tree: this limits the learning to very shallow trees (\(D\leq 3\)) like in [10]. An alternative would be to consider a tests generating function \(\phi\) returning the \(B\) most informative splits in state \((X,d)\), for a given informativeness criterion. In this case the number of states in the MDP would be at most \(\sum\limits_{d=0}^{D-1}K(2B)^{d}\). We propose to use CART [1] as a tests generating function. At every state \((X,d)\), the split actions are the splits in the decision nodes of the tree returned by a call to CART with a maximum tree depth \(D_{cart}\). In this case, the number \(B\) of considered splits at each state is at most \(B=2^{D_{cart}}-1\). In practice we observe that using calls to CART as a tests generating function leads to an MDP with fewer states and whose solutions yield accurate trees: in Figure 2, we compare CART as a tests generating function (labeled DPDT-\(D_{cart}\)) with another tests generating function that computes the information gain of all splits in \(\mathcal{F}\) and selects the top \(B\) ones (labeled TOP \(B\)). The experiment shows that with \(D_{cart}=4\), DPDT finds the optimal tree in an MDP having several orders of magnitude fewer states than the one that considers all possible splits (labeled Exhaustive). 
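To make this step concrete, a minimal sketch of a CART-based tests generating function is given below. It assumes scikit-learn's DecisionTreeClassifier (the paper's baselines also rely on the scikit-learn CART), and the function name and defaults are ours, not the authors' implementation.

```python
# Hedged sketch: use a shallow CART fit on the samples reaching a state as the
# tests generating function phi, returning candidate (feature, threshold) splits.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def cart_generated_tests(X, y, d_cart=4):
    """Return candidate axis-aligned tests for a state holding samples (X, y)."""
    if len(np.unique(y)) < 2:            # pure state: only class assignments remain
        return []
    cart = DecisionTreeClassifier(max_depth=d_cart, criterion="entropy")
    cart.fit(X, y)
    tree = cart.tree_
    tests = []
    for node in range(tree.node_count):
        # internal nodes have distinct children; leaves have both set to -1
        if tree.children_left[node] != tree.children_right[node]:
            tests.append((int(tree.feature[node]), float(tree.threshold[node])))
    # at most 2**d_cart - 1 candidate splits, as noted in the text
    return sorted(set(tests))
```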
Finally, for states with \(d=D-2\), i.e. states for which there is only a single split that can be added, \(\phi\) will not call CART but instead returns a single split that maximizes average accuracy one step ahead. This further reduces the number of states in the MDP without compromising performance.

Figure 1: Step 1 of the DPDT algorithm. For a dataset of three samples illustrated above, two continuous features, and two classes, DPDT constructs the MDP containing decision trees of depth at most \(D=1\). The tests generating function generated three possible tests. This MDP has one initial state \((D,0)\) (the whole dataset at depth 0), and six non-terminal states (three tests times two children states). Rewards are either \(\alpha\) or the misclassification rate, and transition probabilities are one or the size of the child state over the size of the parent state.

Figure 2: Comparison of the DPDT algorithm on the Iris dataset in terms of the number of non-terminal states in the MDP when using different tests generating functions. TOP \(B\) are instances of DPDT where the tests function returns the \(B\) most informative splits for each state. “Exhaustive” is an instance of DPDT computing all possible states and returning the optimal decision tree. DPDT-\(D_{cart}\) are instances of DPDT where the tests function is calls to the CART algorithm. It is clear that using calls to CART as a tests generating function leads to accurate solutions while using less memory (fewer MDP states).

### Dynamic Programming Having built the MDP, we backpropagate with dynamic programming the optimal actions from the terminal states to the initial states. We make use of Bellman's optimality equation to compute the value of the best actions recursively: \[Q^{*}(s,a)=\mathbb{E}\left[r_{d+1}+\max_{a^{\prime}}Q^{*}(s_{d+1},a^{\prime})\,\middle|\,s_{d}=s,a_{d}=a\right]=\sum_{s^{\prime}}P(s,a,s^{\prime})\left[R(s,a)+\max_{a^{\prime}}Q^{*}(s^{\prime},a^{\prime})\right].\] **Pareto front:** To obtain the Pareto front of interpretability-performance trade-offs, it is sufficient to define a \(Q\)-function that depends on \(\alpha\): \[Q^{*}(s,a,\alpha)=\sum_{s^{\prime}}P(s,a,s^{\prime})\left[R_{\alpha}(s,a)+\max_{a^{\prime}}Q^{*}(s^{\prime},a^{\prime},\alpha)\right].\] We can then find all policies: \[\pi^{*}(s,\alpha)=\underset{a\in A}{\operatorname{argmax}}\,Q^{*}(s,a,\alpha).\] Such policies satisfy Eq. (4) for any value of \(\alpha\). Given a set of values of \(\alpha\) in \([0,1]\), we can compute in a single backward pass \(Q^{*}(s,a,\alpha)\) and \(\pi^{*}(s,\alpha)\) and return a set of trees, optimal for different values of \(\alpha\). In practice, the computational cost is by far dominated by the construction of the MDP.
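A minimal sketch of this single backward pass over a grid of \(\alpha\) values is given below; the data structures (layered state lists, per-state action lists, dictionary-based transitions) and the reward convention in the comments are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of backward induction on the layered MDP for many alphas at once.
import numpy as np

def backward_induction(states_by_depth, actions, transitions, reward, alphas):
    """states_by_depth: lists of states, shallowest layer first, deepest last.
    actions[s]: actions available in s (splits and class assignments; every
    non-terminal state has at least one class-assignment action).
    transitions[(s, a)]: list of (next_state, probability); empty for terminal moves.
    reward(s, a, alpha): e.g. -alpha for taking a split, minus the
    misclassification rate for assigning a class (illustrative convention)."""
    V, pi = {}, {}
    for layer in reversed(states_by_depth):        # deepest states first
        for s in layer:
            for k, alpha in enumerate(alphas):
                q_best, a_best = -np.inf, None
                for a in actions[s]:
                    q = reward(s, a, alpha) + sum(
                        p * V.get((s_next, k), 0.0)          # terminal successors: 0
                        for s_next, p in transitions[(s, a)])
                    if q > q_best:
                        q_best, a_best = q, a
                V[(s, k)], pi[(s, k)] = q_best, a_best
    return V, pi
```

Reading off `pi[(s, k)]` from the initial state downwards yields one decision tree per value of `alphas[k]`, i.e. the Pareto front described above.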
## 6 Experiments In this section we study DPDT from different perspectives. First we study DPDT in terms of its performance as a classification algorithm (Sec. 6.4). Similarly to the state-of-the-art optimal tree algorithm Quant-BnB (Mazumder, Meng, and Wang 2022), we report the classification accuracy on the training set, as well as training time for different classification datasets with different numbers of data \(N\), different numbers of features \(p\), and different numbers of classes \(K\). Then we study DPDT for model selection, i.e. its generalization capabilities (Sec. 6.5); for that we report accuracy on train and test splits of different datasets. Finally, we study DPDT from an interpretability perspective (Sec. 6.6) by looking at different simulatability metrics (Lipton 2018): the average number of tests performed by the returned decision trees on data as well as the average number of nodes. In all experiments, we compare DPDT to state-of-the-art decision tree learning algorithms which we present next. ### Reproducibility statement All experiments are run on a single core from an Intel Core i7-8665U CPU. All experiments are fully reproducible. Hyperparameters are given in the supplementary material when necessary. Experiments involving randomness are run on multiple seeds. All the code to reproduce the experiments is given in the supplementary material. All datasets are given in the supplementary material. Code to run the baselines is available online and we provide links to it. Finally, when data from previous work is used, we point directly to the table or figure. Additional figures are given in the supplementary material. ### Datasets We use 24 different classification datasets with continuous features. For datasets at [https://github.com/mengxianglQ/Quant-BnB/tree/main/dataset/class](https://github.com/mengxianglQ/Quant-BnB/tree/main/dataset/class), the data is already split into test and train sets. For datasets at [https://github.com/LucasBoTang/Optimal_Classification_Trees/tree/main/data](https://github.com/LucasBoTang/Optimal_Classification_Trees/tree/main/data), the datasets are split in half using [https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) with seeds 37, 42 and 53. ### Baselines Baselines require either binary or continuous-feature datasets and return either an optimal tree or a suboptimal tree. For large datasets (\(N>10000\), \(p>10\)), only CART can be used to retrieve a set of trees using minimal Cost-Complexity Pruning, Section 3.3 from (Breiman 1984), in reasonable runtime (\(<30\) minutes). For large datasets with only continuous features, only Quant-BnB (Mazumder, Meng, and Wang 2022) and CART can consistently compute trees in finite time without running out of memory. For large datasets with only continuous features, only CART can retrieve non-shallow trees (\(D>3\)). Note that we do not compare DPDT to any other MDP approach mentioned in Section 2 as they are not considered state-of-the-art. Quant-BnB: Mazumder, Meng, and Wang (2022) propose a scalable branch-and-bound algorithm that returns optimal trees. It can produce shallow optimal trees for large datasets with only continuous features. We do not run Quant-BnB but use the data of their Tables 2 and 6 (Mazumder, Meng, and Wang 2022) and use the datasets from their code available online at [https://github.com/mengxianglql/Quant-BnB/tree/main/dataset/class](https://github.com/mengxianglql/Quant-BnB/tree/main/dataset/class) MurTree: Demirovic et al. (2022) propose an algorithm that retrieves optimal trees for large datasets with binary features using dynamic programming. They also propose a binarization method to retrieve suboptimal shallow trees for large datasets with continuous features. We do not run MurTree but use the results in Table 6 from Mazumder, Meng, and Wang (2022) (see the "approx" column). OCT, MFOCT, BinOCT: Bertsimas and Dunn (2017); Aghaei, Gomez, and Vayanos (2020); Verwer and Zhang (2019) propose optimal tree algorithms which formulate the learning problem as a MIP. OCT and MFOCT can produce optimal trees for small datasets with continuous features. 
BinOCT can also produce optimal trees for small datasets with continuous features after they have been binarized. We make use of the implementations available at [https://github.com/LucasBoTang/Optimal_Classification_Trees](https://github.com/LucasBoTang/Optimal_Classification_Trees). CART: Breiman (1984) propose a greedy algorithm that can retrieve suboptimal trees for any dataset. We use the scikit-learn implementation of CART available at [https://scikit-learn.org/stable/modules/tree.html#tree-classification](https://scikit-learn.org/stable/modules/tree.html#tree-classification) with the criterion parameter fixed to "entropy".

Figure 3: Step 2 of DPDT. For \(\alpha=0\) and \(\alpha=1\), the values of \(Q^{*}(s,a,\alpha)\) are backpropagated with dynamic programming from leaf states to the initial state and are given in square brackets. The optimal policy \(\pi^{*}(.,\alpha=1)\), in pink, is a depth-0 tree with accuracy \(\frac{2}{3}\). The optimal policy \(\pi^{*}(.,\alpha=0)\), in green, is a depth-1 tree with accuracy 1.

### Classification Accuracy In this section we study DPDT performance for supervised classification. We run DPDT with calls to CART to generate splits while building the MDP. When calling CART, the latter returns a decision tree of maximum depth 5. We compute 1000 policies for different values of \(\alpha\) ranging from 0 to 1. In Table 1, we observe that DPDT-5 performs the best in terms of training accuracy on 15 out of 16 classification datasets when compared to non-optimal algorithms like MurTree on binarized continuous datasets or CART. For that, we extracted the tree associated with \(\pi^{*}(.,\alpha=0)\) and checked its train accuracy. Furthermore, we notice that in 4 cases (avila, bidding, occupancy and room) DPDT-5 retrieved the optimal decision tree, i.e. the same decision tree as the one returned by Quant-BnB. DPDT-5 retrieved those accurate trees (as well as a full set of trees given in the supplementary material) in a fraction of the time Quant-BnB needed to retrieve a single optimal tree.1 Footnote 1: We note that our code is in Python, leaving ample room for further reduction of the runtime when using a compiled language. For example, DPDT-5 retrieved a set of Pareto optimal trees containing the optimal decision tree in 2 seconds while it required Quant-BnB 106 seconds. Those results are corroborated by Table 2 in which DPDT-4 consistently retrieves more accurate depth-5 trees than OCT-type algorithms in a fraction of the runtime. This illustrates the potential of DPDT to retrieve non-shallow accurate trees. However, DPDT overfits on some datasets (on breast-cancer the accuracy drops by 30% between train and test) while CART offers better generalization capabilities, which leads us to study DPDT from a model selection perspective. ### Model Selection We now ask if DPDT is suited for model selection, i.e. identifying an accurate decision tree that will generalize well to unseen data for a given classification task. In Figure 4, we plot train and test accuracy on classification datasets as a function of the average number of tests performed by decision trees returned by DPDT-3. Here again, DPDT learned a set of Pareto optimal decision trees of at most depth 5 in a few seconds (see more Pareto fronts in the supplementary material). We observe two trends that are representative of the generalization capabilities of DPDT (results on other datasets are provided in the supplementary material). On the bank dataset, DPDT-3 learns accurate trees that generalize well. 
As such, a user with no interpretability constraint could select the tree with the highest accuracy. On the raisin dataset, more complex and accurate trees learned by DPDT generalize less well. But an advantage of DPDT returning a whole set of trees is that a user can look at the Pareto front and select the most accurate tree that generalizes well. ### Simulatability Analysis In this section, we study the interpretability of decision trees returned by DPDT. Lipton (2018) introduced the term "simulatability", which relates the machine learning model to its user and admits two subtypes, one based on the total size of the model and another based on the amount of computation required to perform inference. This translates in our setting to the number of nodes in a decision tree (total size) and the average number of tests performed per data point. In Figure 5, we plot the trade-offs of trees returned by CART and DPDT-3. In one case, the trade-off is between accuracy and average number of tests (what DPDT optimizes) and in the other, it is between accuracy and the number of tree nodes (what Minimal Complexity Post Pruning optimizes). There are two general tendencies, and more results are given in the supplementary material. One key result is that the average tests number vs accuracy trade-off of CART with post pruning will always be less accurate than DPDT's, as the trees returned by CART are evaluated during the dynamic programming part of DPDT. We see this empirically on both the eeg and fault datasets. Furthermore, even if the trade-offs with the number of nodes might be better for CART for some trees (CART finds trees with fewer nodes than DPDT for the same accuracy), it might be the case that DPDT's trade-off is still better (on fault for example) even though DPDT does not optimize for the number of nodes. In Figure 6 some trees are presented. In particular, we observe that CART trades off interpretability for accuracy by simply expanding some nodes (see for example the difference between the two bottom trees in Figure 6), whereas DPDT can find more interpretable trees that use different tests inside decision nodes, including the root node, for a given trade-off (see for example the different root nodes between the top trees in Figure 6). This is one of the major differences in taking into account the interpretability-performance trade-off during optimization (DPDT) instead of postprocessing (CART). However, there is no convention on whether the number of nodes is a better interpretability measure than the average number of tests.

Table 1: Training accuracy of different decision tree learning algorithms. All algorithms learn trees of depth at most 3 on 16 classification datasets. Quant-BnB returns optimal decision trees. MurTree returns optimal decision trees for datasets binarized using the minimum description length principle. DPDT uses calls to CART with a maximum depth of 5 to split data. We also report runtime in seconds for Quant-BnB and DPDT as MurTree runtimes are not available. Results for algorithms other than DPDT are taken from Tables 2 and 6 from [14]. It is important to note that here the runtime for DPDT is for computing 1000 policies for different values of \(\alpha\) but only the accuracy of the tree associated with \(\alpha=0\) is reported.

Figure 4: Train/test accuracies of DPDT-3 as a function of the average number of tests on data performed by the learned decision trees. DPDT-3 is used to learn a set of Pareto optimal decision trees of depth 5. It computed the optimal policies for 1000 different \(\alpha\).

## 7 Conclusion In this work we solve MDPs whose optimal policies are decision trees optimizing a trade-off between tree accuracy and interpretability. We introduced the Dynamic Programming Decision Tree algorithm that returns several optimal policies for different reward functions. As such, DPDT returns a set of trees on the interpretability-performance Pareto front. Experimentally, DPDT returns several decision trees almost as accurate as algorithms optimizing a single interpretability-performance trade-off. DPDT has reasonable runtimes and is able to scale to trees with depth greater than 3. We believe DPDT is a great starting point for a new class of decision tree learning algorithms that offer to a potential human user a greater control than CART over model selection in terms of accuracy and interpretability. In the hope of convincing the reader of the latter, we put great emphasis on reproducibility and reuse of existing data and results from related work. In future work, DPDT's runtime could be improved with caching (avoiding expanding already explored states) as in Demirovic et al. (2022) and parallelism (expanding states on different processes). DPDT could scale to bigger datasets (millions of samples) by using Monte Carlo Tree Search in order to assess the quality of a state using only a fraction of the dataset. Finally, meta-learning a neural tests generating function could be an interesting direction to further improve the accuracy and runtime of DPDT. 
Table 2: Training accuracy and runtime of depth-5 trees for DPDT and the OCT-type baselines (OCT, MFOCT, BinOCT) on small classification datasets, listed with their numbers of samples, features, and classes.
2309.11208
Universal direction in thermoosmosis of a near-critical binary fluid mixture
We consider thermoosmosis of a near-critical binary fluid mixture, lying in the one-phase region, through a capillary tube in the presence of preferential adsorption of one component. The critical composition is assumed in the two reservoirs linked by the tube. With coarse-grained approach, we evaluate the flow field induced by the thermal force density. We predict a universal property; if the mixture is near the upper (lower) consolute point, the flow direction is the same as (opposite to) the direction of the temperature gradient, irrespective of which component is adsorbed onto the wall.
Shunsuke Yabunaka, Youhei Fujitani
2023-09-20T10:49:56Z
http://arxiv.org/abs/2309.11208v1
# Universal direction in thermoosmosis of a near-critical binary fluid mixture ###### Abstract We consider thermoosmosis of a near-critical binary fluid mixture, lying in the one-phase region, through a capillary tube in the presence of preferential adsorption of one component. The critical composition is assumed in the two reservoirs linked by the tube. With a coarse-grained approach, we evaluate the flow field induced by the thermal force density. We predict a universal property; if the mixture is near the upper (lower) consolute point, the flow direction is the same as (opposite to) the direction of the temperature gradient, irrespective of which component is adsorbed onto the wall. A temperature gradient in a fluid along the confining surface can generate a force density, parallel to the gradient, due to inhomogeneity in a fluid region near the surface. This _thermal force density_[1; 2] causes the slip velocity across the region and drives the fluid in the bulk. This bulk mass flow is called thermoosmosis, which does not involve gravity responsible for the Rayleigh-Benard convection. Momentum transfer via the local slip can also induce thermophoresis -- migration of a colloidal object in a fluid under a temperature gradient. Understanding these phenomena involves a fundamental problem in nonequilibrium physics and will lead to effective manipulations in lab-on-a-chip processes [1; 3; 4; 5; 6]. Derjaguin and Sidorenkov (DS) observed thermoosmosis of water through porous glasses [7]. Applying the continuum theory and Onsager's reciprocity, they proposed a formula expressing the thermal force density in terms of the local excess enthalpy for a one-component fluid [8; 9; 10]. According to this formula, the direction of the flow is the same as (opposite to) that of the temperature gradient if the excess enthalpy density is negative (positive) everywhere near the wall. This is expected naively by considering that the flow in this direction tends to eliminate the temperature gradient by carrying the fluid with lower (higher) enthalpy to the region with higher (lower) temperature. However, the local excess enthalpy is not easy to access experimentally and is numerically evaluated only on the basis of simplified microscopic models [11; 12]. Besides, the well-definedness of a microscopic expression for the excess enthalpy is questioned, especially near the surface [12; 13]. Therefore, it remains difficult to incorporate detailed microscopic interactions theoretically, and even predicting the flow direction is often challenging [1]. In Ref. [12], the authors propose an extension of DS's formula for multicomponent fluids within the continuum description, while questioning its validity in a microscopic slip layer. Thermoosmosis has not been studied in relation to critical phenomena, to the best of our knowledge. In this Letter, we study thermoosmosis of a binary fluid mixture near the demixing critical point through a capillary tube linking two large reservoirs (Fig. 1). The mixture is assumed to lie in the one-phase region throughout inside the container, and is simply referred to as a mixture in the following. We assume preferential adsorption (PA) of one component on the tube's wall due to short-range interactions. The adsorption layer, enriched by the preferred component, was first observed in Ref. [14], and has been studied theoretically [15; 16; 17; 18; 19; 20]. 
Therefore, we can evaluate the universal properties of the thermal force density within a continuum description, avoiding the difficulties associated with the microscopic approach discussed in the last paragraph. When a temperature difference is imposed between the reservoirs, as shown later, the thermal force density is generated in the adsorption layer to cause thermoosmosis. We predict a universal property; the flow direction is the same as (opposite to) the direction of the temperature gradient in thermoosmosis of a mixture near the upper (lower) consolute point, irrespective of which component is adsorbed on the wall. The tube is assumed to be a cylinder having the radius \(r_{\rm tube}\) and length \(L_{\rm tube}\) (Fig. 1). We write \(\rho_{\rm a}\) (\(\rho_{\rm b}\)) for the mass density of a mixture component named a (b), defining \(\rho\) as \(\rho_{\rm a}+\rho_{\rm b}\) and \(\varphi\) as \(\rho_{\rm a}-\rho_{\rm b}\). In general, the scalar pressure and the temperature are respectively denoted by \(P\) and \(T\). In the container, we first prepare an equilibrium one-phase state of the mixture, which we call a reference state. This state is specified by \(P^{\rm(ref)}=P_{\rm c}\), \(\varphi^{\rm(ref)}=\varphi_{\rm c}\) and \(T^{\rm(ref)}(\approx T_{\rm c})\). The superscript \({}^{\rm(ref)}\) (subscript \({}_{c}\)) indicates a value in the reservoirs in the reference state (a value at the critical point). The order parameter \(\psi\), defined as \(\varphi-\varphi_{\rm c}\), vanishes in the reference state. The value of \(T\) in the right (left) reservoir is denoted by \(T_{\rm R}\) (\(T_{\rm L}\)), which equals \(T^{\rm(ref)}(\approx T_{\rm c})\) in the reference state. Next, we slightly change \(T_{\rm R}\) and \(T_{\rm L}\) from \(T^{\rm(ref)}\) to make \(\delta T\) nonzero, where \(\delta T\) is defined as \(T_{\rm R}-T_{\rm L}\), while keeping \(P\) and \(\psi\) in the reservoirs at \(P_{\rm c}\) and zero, respectively (Fig. 1). The mixture is approximately incompressible under usual experimental conditions. Hence, we assume \(\rho=\rho_{\rm c}\) throughout inside the container in this Letter. We write \(\tau\) for the reduced temperature \((T-T_{\rm c})/T_{\rm c}\). The demixing critical point can be an upper consolute (UC) point or a lower consolute (LC) point [21; 22; 23]. Near a UC (LC) point, \(\tau\) is positive (negative) in the one-phase region. We here roughly explain our key idea by using the Landau model, whose free-energy density is given by a quadratic function of \(\psi^{2}\). The density includes a term \(a\tau\psi^{2}\), where \(a\) is a positive (negative) constant near the UC (LC) point. By operating \(-T^{2}\partial_{T}T^{-1}\) on the free-energy density, we find this term to contribute \(-a\psi^{2}\) to the internal-energy density, which is negative (positive) in the adsorption layer of a mixture near a UC (LC) point. Hence, assuming that the contribution is dominant in the excess enthalpy density, which is mentioned in the second paragraph, we can conjecture that the thermoosmotic direction of a mixture near a UC (LC) point is the same as (opposite to) the direction of the temperature gradient, irrespective of which component is preferred by the tube's wall. To examine the conjecture stated above, we apply the hydrodynamic formulation under inhomogeneous temperature [24; 25] and the renormalized local functional theory (RLFT) [26; 16]. 
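The sign estimate above can be verified symbolically; the short check below is our own illustration (symbol names are arbitrary) of applying \(-T^{2}\partial_{T}T^{-1}\) to the \(a\tau\psi^{2}\) term of the Landau free-energy density.

```python
# Small symbolic check (for illustration) that the a*tau*psi^2 term contributes
# -a*psi^2 to the internal-energy density, with tau = (T - Tc)/Tc.
import sympy as sp

T, Tc, a, psi = sp.symbols("T T_c a psi", real=True)
f = a * (T - Tc) / Tc * psi**2          # the a*tau*psi^2 term of the free energy
u = -T**2 * sp.diff(f / T, T)           # internal-energy contribution
print(sp.simplify(u))                   # prints -a*psi**2, independent of T
```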
We consider a weak, stationary, and laminar flow in the tube, which is so thin and long that effects of the tube edges on the flow are negligible. The no-slip boundary condition is imposed on the tube's wall, which is impermeable and adiabatic. On a tube's cross section, we write \(r\) for the radial distance from the center and define a dimensionless radial distance \(\hat{r}\) as \(r/r_{\rm tube}\). The \(z\) axis is taken as in Fig. 1. We consider a mixture of 2,6-lutidine and water (LW) [27] near the LC point and a mixture of nitroethane and 3-methylpentane (NEMP) [28] near the UC point. Before describing the details, we show the velocity profile under \(\delta T>0\) in Fig. 2, where the flow direction is the same as (opposite to) the direction of the temperature gradient in a mixture near the UC (LC) point and the flow rate is larger in magnitude as the critical temperature is approached. We assume that the free-energy density in the bulk of a mixture, \(f_{\rm bulk}\), is a function of \(\rho_{\rm a}\), \(\rho_{\rm b}\), the quadratic form of their gradients, and \(T\); \(f_{\rm bulk}\) is coarse-grained up to the local correlation length of the order-parameter fluctuations, \(\xi\). Hydrodynamics is applicable to flow whose typical length is locally larger than \(\xi\). The chemical potential \(\mu_{n}\) conjugate to \(\rho_{n}\) is given by \[\mu_{n}=\frac{\partial f_{\rm bulk}}{\partial\rho_{n}}-T\nabla\cdot\left[\frac{1}{T}\frac{\partial f_{\rm bulk}}{\partial\left(\nabla\rho_{n}\right)}\right]. \tag{1}\] The reversible part of the pressure tensor, denoted by \(\mathsf{\Pi}\), is symmetric and is given by \[\mathsf{\Pi}=P\mathsf{1}+\sum_{n={\rm a,b}}\left(\nabla\rho_{n}\right)\frac{\partial f_{\rm bulk}}{\partial\left(\nabla\rho_{n}\right)}\,, \tag{2}\] where \(\mathsf{1}\) is the identity tensor of order two and \(P\) equals the negative of the grand-potential density. Defining \(\mu_{\pm}\) as \((\mu_{\rm a}\pm\mu_{\rm b})/2\), we have \(P=\mu_{+}\rho+\mu_{-}\varphi-f_{\rm bulk}\). Equations (1) and (2) are derived for a one-component fluid in Ref. [24] and are applied to a binary fluid mixture [25], assuming that the coefficient of the gradient term is linear with respect to \(T\) in the free-energy density. A complete set of the hydrodynamic equations is shown in Sect. II of Ref. [29]. In its Appendix A, Eqs. (1) and (2) are derived without the assumption. The internal energy and entropy per unit volume are denoted by \(u\) and \(s\), respectively. The partial entropy and enthalpy per unit mass of the component \(n\) are denoted by \(\bar{s}_{n}\) and \(\bar{H}_{n}\), respectively. We define \(\bar{s}_{-}\) and \(\bar{H}_{-}\) as \((\bar{s}_{\rm a}-\bar{s}_{\rm b})/2\) and \((\bar{H}_{\rm a}-\bar{H}_{\rm b})/2=\mu_{-}+T\bar{s}_{-}\), respectively. We linearize the dynamics with respect to \(\delta T\). Difference between the reservoirs is indicated by \(\delta\), such as \(\delta T\).

Figure 1: Schematic of our setting. A mixture is filled in the container composed of two reservoirs and a capillary tube between them with the radius \(r_{\rm tube}\) and the length \(L_{\rm tube}\). The \(z\) axis is taken along the tube and is directed to the right reservoir. One component, drawn in yellow, is preferentially adsorbed onto the tube’s wall. Thick walls represent pistons. The pressure \(P\) and the order parameter \(\psi\) in the reservoirs are always set equal to their values at the critical point, \(P_{\rm c}\) and zero, respectively. The temperatures in the left and right reservoirs, denoted by \(T_{\rm L}\) and \(T_{\rm R}\), respectively, are equal to \(T^{({\rm ref})}\) in the reference state.
The temperatures in the left and right reservoirs, denoted by \(T_{\rm L}\) and \(T_{\rm R}\), respectively, are equal to \(T^{({\rm ref})}\) in the reference state.

Figure 2: The \(z\) component of the dimensionless velocity field in thermoosmosis, \(\hat{v}_{z}^{({\rm th})}(\hat{r})\), is plotted against the dimensionless radial distance \(\hat{r}\) for a mixture of NEMP (LW) near the UC (LC) point. The reduced temperature \(\tau\) of the reference state, indicated in the figure, is positive (negative) in the one-phase region near the UC (LC) point. The other parameter values are the same as those used in Fig. 3.

The Gibbs-Duhem (GD) relation gives \[0=\rho^{\rm(ref)}\delta\mu_{+}+\varphi^{\rm(ref)}\delta\mu_{-}+s^{\rm(ref)} \delta T\, \tag{3}\] where \(\delta\mu_{-}\) equals \(-\bar{s}_{-}^{\rm(ref)}\delta T\) under our assumption \(\delta P=\delta\rho=\delta\varphi=0\). In the tube, the mass conservation gives \(\nabla\cdot\mathbf{v}=\partial_{z}v_{z}=0\) with \(\mathbf{v}\) denoting the velocity field. There, the momentum conservation gives \[2\nabla\cdot(\eta_{\rm s}\mathsf{E})=\nabla\cdot\mathsf{\Pi}\, \tag{4}\] where \(\mathsf{E}\) denotes the rate-of-strain tensor with \(\mathsf{E}_{ij}=(\partial_{i}v_{j}+\partial_{j}v_{i})/2\) in the Cartesian coordinates. The shear viscosity, denoted by \(\eta_{\rm s}\), generally depends on the position via its dependence on \(\xi\). Equations (1) and (2) yield \[\nabla\cdot\mathsf{\Pi}=s\nabla T+\sum_{n={\rm a,b}}\left(\rho_{n}\nabla\mu_{ n}+\frac{\nabla T}{T}\cdot\frac{\partial f_{\rm bulk}}{\partial\left(\nabla \rho_{n}\right)}\nabla\rho_{n}\right), \tag{5}\] which can be regarded as an extended GD relation. Combined with the irreversible terms, this extended GD relation guarantees positive entropy production in bulk and Onsager's reciprocity for osmotic fluxes through the tube. These points, which justify using Eqs. (1)-(5) in our derivation of the thermal force density, are shown in Ref. [24] and Appendix B of Ref. [29], respectively. We add the superscript \({}^{\rm(th)}\) to a quantity in the tube in the linear regime of thermoosmosis we consider. The thermal force density, denoted by \(\sigma_{z}^{\rm(th)}\), is given by the \(z\)-component of \(-\nabla\cdot\mathsf{\Pi}\) on this condition, where Eq. (5) has only a \(z\) component. The conservation equations for energy and mass densities and their boundary conditions are satisfied if \(\mu_{\pm}\) and \(T\) are linear functions of \(z\) and homogeneous on a tube's cross section. See Sect. IIC of Ref. [29] for details. Using Eqs. (3)-(5), we find \(\sigma_{z}^{\rm(th)}\) to be dependent only on \(r\) and to be given by \(-\delta T/(T^{\rm(ref)}L_{\rm tube})\) multiplied by \[u(r)+P(r)-u^{\rm(ref)}-P^{\rm(ref)}-\bar{H}_{-}^{\rm(ref)}\psi(r)\, \tag{6}\] where \(u(r)\), \(P(r)\), and \(\psi(r)\) are evaluated in the tube in the reference state and thus \(P(r)\) equals \(\mathsf{\Pi}_{zz}(r)\). This formula is an extension of DS's formula to two-component fluids, since the first four terms of Eq. (6) can be regarded as the excess enthalpy density in DS's formula for a one-component fluid. Our procedure to derive the formula for the thermal force density via an extended GD relation could also be applied to any soft material described with a free-energy functional. We compare our derivation of the thermal force density with the corresponding part in Ref. [12] as follows. Because the sum of the last three terms of Eq.
(6) equals \(-\rho_{\rm a}\bar{H}_{\rm a}^{\rm(ref)}-\rho_{\rm b}\bar{H}_{\rm b}^{\rm(ref)}\), our formula for \(-\sigma_{z}^{\rm(th)}\), given by the product of \(\delta T/(T^{\rm(ref)}L_{\rm tube})\) and Eq. (6), formally coincides with the right-hand side (RHS) of Eq. (5) of Ref. [12], where they interpret the RHS as \(-\sigma_{z}^{\rm(th)}\). However, its left-hand side (LHS), \(\partial_{z}\mathsf{\Pi}_{zz}\) in our notation, is not equal to \(-\sigma_{z}^{\rm(th)}\) in general, since \(\partial_{x}\mathsf{\Pi}_{xz}+\partial_{y}\mathsf{\Pi}_{yz}\) does not vanish in the presence of PA. Here, \(x\) and \(y\) are orthogonal coordinates on the tube's cross section. In Ref. [12], this sum \(\partial_{x}\mathsf{\Pi}_{xz}+\partial_{y}\mathsf{\Pi}_{yz}\) is also missing in the LHS of Eq. (2), which the authors employ as an extended GD relation in deriving their Eq. (5). The sum should be included in the extended GD relation for deriving the formula of the thermal force density properly. We have \(v_{z}=0\) at \(r=r_{\rm tube}\) owing to the no-slip condition and \(\partial_{r}v_{z}=0\) at \(r=0\) owing to the axisymmetry and smoothness of \(v_{z}\). Thus, the \(z\) component of Eq. (4) gives \[v_{z}^{\rm(th)}(r)=\int_{r}^{r_{\rm tube}}dr_{1}\ \int_{0}^{r_{1}}dr_{2}\ \frac{r_{2}\sigma_{z}^{\rm(th)}(r_{2})}{r_{1}\eta_{\rm s}(r_{1})}\, \tag{7}\] where \(\eta_{\rm s}\) is evaluated in the reference state and depends on the radial distance. In the absence of PA, Eq. (6) vanishes and thermoosmosis does not occur. The correlation length and, therefore, the effects of critical fluctuations become spatially inhomogeneous inside the adsorption layer [17]. To describe these effects, we introduce a coarse-grained free-energy functional as follows. We write \(k_{\rm B}\) for the Boltzmann constant, and use the conventional notation for the critical exponents -- \(\beta,\gamma,\nu\), and \(\eta\). The (hyper)scaling relations give \(2\beta+\gamma=3\nu\) and \(\gamma=\nu(2-\eta)\); we adopt \(\nu=0.630\) and \(\eta=0.0364\)[30]. A mixture with \(\psi=0\) has \(\xi=\xi_{0}|\tau|^{-\nu}\), where \(\xi_{0}\) is a material constant. The functional consists of two terms. One is given by an area integral of \(-h\varphi\) over the wall, representing the wall-component interactions. The constant \(h\), called the surface field, vanishes in the absence of PA [15; 18; 19]. The other is given by the volume integral of \(f_{\rm bulk}\). We neglect the coupling between \(\rho\) and \(\psi\), considering the mixture's incompressibility. Under the chemical potentials \(\mu_{n}^{\rm(ref)}\), the grand-potential density in the bulk is \(f_{\rm bulk}-\rho_{\rm a}\mu_{\rm a}^{\rm(ref)}-\rho_{\rm b}\mu_{\rm b}^{\rm( ref)}\). According to the RLFT [16; 26], its \(\psi\)-dependent part is \(k_{\rm B}T\) multiplied by the sum of \[\frac{1}{2}C_{1}\xi_{0}^{-2}\omega^{\gamma-1}|\tau|\psi^{2}+\frac{1}{12}C_{1}C_ {2}\xi_{0}^{-2}\omega^{\gamma-2\beta}\psi^{4} \tag{8}\] and the square gradient term, \(C_{1}\omega^{-\eta\nu}\left|\nabla\psi\right|^{2}/2\). See Sect. IIIC of Ref. [29] for the rest part. Here, \(C_{1}\) and \(C_{2}\) are material constants satisfying \(C_{2}=3u^{*}C_{1}\xi_{0}\), where \(u^{*}\) is the scaled coupling constant at the Wilson-Fisher fixed point and equals \(2\pi^{2}/9\) at the one loop order. The local "distance" from the critical point is represented by \(\omega\equiv(\xi_{0}/\xi)^{1/\nu}\), which leads to \(\omega=|\tau|\) if \(\psi\) vanishes. 
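For reference, the numerical values of the exponents implied by these scaling relations (a small arithmetic aside added for convenience) are \[\gamma=\nu(2-\eta)=0.630\times(2-0.0364)\simeq 1.237\,,\qquad \beta=\tfrac{1}{2}\left(3\nu-\gamma\right)=\tfrac{1}{2}\nu(1+\eta)\simeq 0.327\,,\] which enter Eqs. (8)-(10).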
The self-consistent condition, \(\omega=|\tau|+C_{2}\omega^{1-2\beta}\psi^{2}\), locally determines how \(\xi\) depends on \(\tau\) and \(\psi\). As in Ref. [26], we can obtain \(\psi\) in the reference state by minimizing the \(\psi\)-dependent part of the total grand potential. This is equivalent to solving Eq. (1) with \(\mu_{n}=\mu_{n}^{\rm(ref)}\) and \(T=T^{\rm(ref)}\) under the boundary condition involving the surface field. Below, we introduce critical scalings in terms of \(r_{\rm tube}\). We define \(T_{*}\) so that \(\xi\) becomes \(r_{\rm tube}\) for \(\psi=0\) at \(T=T_{*}\) in the one-phase region, define \(\tau_{*}\) as \(|T_{*}-T_{\rm c}|/T_{\rm c}\), and introduce a scaled reduced temperature \(\hat{\tau}\equiv\tau/\tau_{*}\). A characteristic order parameter \(\psi_{*}\) is defined so that \(\xi\) becomes \(r_{\rm tube}\) for \(\psi=\psi_{*}\) at \(T=T_{\rm c}\), and a characteristic chemical potential \(\mu_{*}\) is defined as \(k_{\rm B}T_{*}/(3u^{*}r_{\rm tube}^{3}\psi_{*})\). The scaled surface field \(\hat{h}\) is defined as \(hT_{*}/\left(T\mu_{*}r_{\rm tube}\right)\). We define the dimensionless equilibrium profile \(\hat{\psi}(\hat{r})\) as \(\psi(r)/\psi_{*}\). A dimensionless function \(\hat{f}(\hat{\psi})\) is defined as Eq. (8) divided by \(T_{*}/(\mu_{*}\psi_{*}T)\) and is given by \[\hat{f}(\hat{\psi})=\frac{1}{2}\hat{\omega}^{\gamma-1}\left|\hat{\tau}\right|\hat{\psi}^{2}+\frac{1}{12}\hat{\omega}^{\gamma-2\beta}\hat{\psi}^{4}. \tag{9}\] The first term on the RHS above originates from \(a\tau\psi^{2}\) in the Landau model, or more precisely, the corresponding term in the bare \(\psi^{4}\) model. In the reference state, \(\hat{\psi}(\hat{r})\) is determined only by \(|\hat{\tau}|\) and \(\hat{h}\). The scaled thermal force density \(\hat{\sigma}_{z}^{\rm(th)}(\hat{r})\), defined as \(\sigma_{z}^{\rm(th)}(r)\tau_{*}T_{*}L_{\rm tube}/(\mu_{*}\psi_{*}\delta T)\), is found to be \[\tau_{*}\left(\hat{f}+\frac{|\partial_{\hat{r}}\hat{\psi}|^{2}}{2\hat{\omega}^{\eta\nu}}\right)+\frac{T^{\rm(ref)}}{T_{\rm c}}\left(\frac{\partial\hat{f}}{\partial\hat{\tau}}+\frac{\partial\hat{\omega}^{-\eta\nu}}{\partial\hat{\tau}}\frac{|\partial_{\hat{r}}\hat{\psi}|^{2}}{2}\right)\, \tag{10}\] which is evaluated in the reference state. Here, \(\hat{\omega}\equiv\omega/\tau_{*}\) is regarded as a function of \(\hat{\tau}\) and \(\hat{\psi}\) via the self-consistent condition. See Sect. III D of Ref. [29] for the details. Hereafter, \(\tau\) (\(\hat{\tau}\)) represents the (scaled) reduced temperature in the reference state. We study the profile of \(\hat{\sigma}_{z}^{\rm(th)}(\hat{r})\) given by Eq. (10) to determine the direction of thermoosmosis. Here we set \(r_{\rm tube}\) equal to \(0.1\)\(\mu\)m and use the same values of the material constants as in Ref. [31]. The parameter values are summarized in Table I of Ref. [29]. In particular, for a mixture of LW (NEMP), we find \(T_{\rm c}=307\) (300) K, \(\xi_{0}=0.20\) (0.23) nm, and \(\tau_{*}=5.12\times 10^{-5}\) (\(6.49\times 10^{-5}\)) from the experimental data of Refs. [27; 28], and set \(\hat{h}\) to 73.0 (66.6), which amounts to \(h=0.1\) cm\({}^{3}\)/s\({}^{2}\). A rough estimation of \(h\) is given in Sect. VI of Ref. [32]. The red solid curves in Fig. 3 indicate \(\hat{\sigma}_{z}^{\rm(th)}(\hat{r})\) given by Eq. (10). The sum in its second parentheses is denoted by \(\hat{\sigma}_{z}^{\rm(th2)}\).
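For concreteness, the self-consistent condition can be solved pointwise with a standard root finder. The following is a small sketch of our own (the inputs \(\tau\), \(\psi\), and \(C_{2}\) are placeholders rather than the fitted material constants), with \(\beta=\nu(1+\eta)/2\) taken from the scaling relations quoted above.

```python
from scipy.optimize import brentq

NU, ETA_EXP = 0.630, 0.0364
BETA = NU * (1.0 + ETA_EXP) / 2.0        # from 2*beta + gamma = 3*nu and gamma = nu*(2 - eta)

def omega_sc(tau, psi, C2):
    """Solve omega = |tau| + C2 * omega**(1 - 2*beta) * psi**2 for omega >= 0."""
    if psi == 0.0:
        return abs(tau)                   # then xi = xi0 * |tau|**(-nu)
    f = lambda w: w - abs(tau) - C2 * w**(1.0 - 2.0 * BETA) * psi**2
    # f < 0 near zero and f > 0 for large w, so the bracket below contains the root
    return brentq(f, 1e-300, 1e12)

# Example with placeholder inputs: local correlation length relative to xi0 is omega**(-nu).
xi_over_xi0 = omega_sc(tau=1.25e-5, psi=0.05, C2=1.0) ** (-NU)
```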
We numerically find \(\hat{\sigma}_{z}^{\rm(th2)}\approx\hat{\sigma}_{z}^{\rm(th)}\), which is reasonable since \(\tau_{*}\ll 1\) and \(T^{\rm(ref)}/T_{\rm c}\approx 1\). Notably, \(\hat{\sigma}_{z}^{\rm(th2)}\) is determined only by scaled quantities \(\hat{\tau}\) and \(|\hat{h}|\) in the framework of the RLFT. As can be seen from Eq. (9), \(\partial\hat{f}/(\partial\hat{\tau})\) contains \(\pm\hat{\omega}^{\gamma-1}\hat{\psi}^{2}/2\), where the same sign as \(\tau\) is taken. This term is dominant in Eq. (10), according to our numerical results in Fig. 3. See Sections IVA and IVC of Ref. [29] for more details. The signs of \(\sigma_{z}^{\rm(th)}\) and \(\hat{\sigma}_{z}^{\rm(th)}\) are the same when \(\delta T\) is positive. Thus, \(\hat{\sigma}_{z}^{\rm(th)}(\hat{r})>0\) (\(<0\)) for \(0\leq\hat{r}\leq 1\) means that the direction of the flow is the same as (opposite to) that of the temperature gradient. Notably, Eq. (10) does not contain \(\bar{H}_{-}^{\rm(ref)}\), and remains the same if the sign of \(h\) is changed, which indicates that the direction of thermoosmosis is independent of which component is preferentially adsorbed on the wall. The curves in the inset of Fig. 3, representing \(\hat{\psi}(\hat{r})\) of a mixture of NEMP in the reference state, rise near the wall because of \(h>0\) and show that the adsorption layer is thicker at the smaller value of \(\tau\). For \(\tau=1.25\times 10^{-5}\) (\(3.2\times 10^{-3}\)), \(\xi/r_{\rm tube}\) is equal to \(0.032\) (\(0.038\)) at \(\hat{r}=1\) and to \(0.47\) (\(0.086\)) at \(\hat{r}=0\). Finally we study the velocity field \(v_{z}^{\rm(th)}(\hat{r})\) given by Eq. (7). The viscosity in Eq. (7) weakly diverges near the critical point [33; 34; 35]. In Appendix E of Ref. [31], we obtain the viscosity as a function of \(|\tau|\) and \(\hat{\psi}\) from the results of Refs. [36; 37] and find the value of \(\eta_{*}\), which is defined as the viscosity's singular part at \(\psi=0\) and \(T=T_{*}\), from the data of Refs. [28; 38; 39; 40]. In Fig. 2, we plot the dimensionless velocity field \(\hat{v}_{z}^{\rm(th)}(\hat{r})\) defined as \(v_{z}^{\rm(th)}(r)T_{*}\tau_{*}\eta_{*}L_{\rm tube}/(\mu_{*}\psi_{*}r_{\rm tube }^{2}\delta T)\). At \(|\tau|=3.2\times 10^{-3}\) in this figure, \(\hat{v}_{z}^{\rm(th)}(\hat{r})\) appears to change only in the region of \(\hat{r}>0.8\) and thus the velocity appears to slip across this region. This is reasonable since the adsorption layer, where the thermal force is nonvanishing, localizes near the wall at \(|\tau|=3.2\times 10^{-3}\), as shown in the inset of Fig. 3. The value at the flat portion of the black solid (blue dashed) curve is \(-0.042\) (\(0.061\)), which means that the slip velocity across the adsorption layer is \(-7.09\) (\(38.2\)) (\(\mu\)m)\({}^{2}\)/(s\(\cdot\)K) multiplied by \(\delta T/L_{\rm tube}\). These values are comparable in magnitude with a typical value measured for thermophoretic mobility [1; 41; 42; 43]. For example, if we set \(|\delta T|=100\) mK \(\ll|T^{\rm(ref)}-T_{\rm c}|\approx 1\) K and \(L_{\rm tube}=10\)\(\mu\)m, the slip velocity is approximately \(0.1\)\(\mu\)m/s, which would be measured experimentally. At \(|\tau|=1.25\times 10^{-5}\), the slip is not clear in Fig. 2, because the thermal force density decreases gradually as \(\hat{r}\) decreases as shown in Fig. 3. 
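The velocity profiles in Fig. 2 follow from Eq. (7) by a double quadrature. The minimal sketch below (our own illustration; the force-density and viscosity profiles are placeholders standing in for the RLFT inputs) shows how such a profile can be evaluated numerically on a grid of the scaled radius.

```python
import numpy as np

def v_th(sigma, eta, n=2000):
    """Evaluate Eq. (7) in scaled form:
    v(r) = int_r^1 dr1 [ int_0^{r1} dr2 r2*sigma(r2) ] / (r1*eta(r1)),
    on a uniform grid of the dimensionless radius r in [0, 1]."""
    r = np.linspace(0.0, 1.0, n)
    dr = np.diff(r)
    f = r * sigma(r)                                   # integrand of the inner integral
    inner = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * dr)))
    g = np.zeros(n)
    g[1:] = inner[1:] / (r[1:] * eta(r[1:]))           # outer integrand; it vanishes as r -> 0
    outer = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * dr)))
    return r, outer[-1] - outer                        # int_r^1 g(r1) dr1

# Placeholder inputs: thermal force localised near the wall, constant viscosity.
r_hat, v_hat = v_th(sigma=lambda r: np.exp(-(1.0 - r) / 0.05),
                    eta=lambda r: np.ones_like(r))
```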
Figure 3: Scaled thermal force density Eq. (10) (red solid curves) and its dominant term are plotted against the dimensionless radial distance \(\hat{r}\) for \(|\tau|=1.25\times 10^{-5}\) and \(h=0.1\) cm\({}^{3}\)/s\({}^{2}\). The results for a mixture of NEMP (LW) near the UC (LC) point are shown in the upper (lower) half of the panel. The blue dashed curve (green dash-dot curve) represents the dominant term \((-)\hat{\omega}^{\gamma-1}\hat{\psi}^{2}/2\). (Inset) For \(h=0.1\) cm\({}^{3}\)/s\({}^{2}\), the dimensionless equilibrium profile \(\hat{\psi}(\hat{r})\) in a mixture of NEMP is plotted against \(\hat{r}\) at \(\tau=1.25\times 10^{-5}\) and \(3.2\times 10^{-3}\).

To conclude, we predict that, for any binary fluid mixture in the one-phase region near the upper (lower) consolute point, the direction of thermoosmotic flow is the same as (opposite to) that of the temperature gradient, irrespective of which component is adsorbed onto the tube's wall, if the critical composition is assumed in the reservoirs. In the companion paper [29], we consider the Onsager coefficients linking general thermodynamic forces and fluxes through a tube. Our coarse-grained approach could be applied to thermoosmosis of polymer solutions and polyelectrolytes [44] and thermophoresis of colloidal particles driven by the thermal force density near the surface, for example, with interactions relevant for mesoscopic structures taken into account. We acknowledge Takeaki Araki, Masato Itami, Yusuke T. Maeda, Kouki Nakata, Yuki Uematsu, and Natsuhiko Yoshinaga for carefully reading the manuscript and giving comments. S. Y. was supported by Grant-in-Aid for Young Scientists (18K13516).
2309.16567
Universal Murray's law for optimised fluid transport in synthetic structures
Materials following Murray's law are of significant interest due to their unique porous structure and optimal mass transfer ability. However, it is challenging to construct such biomimetic hierarchical channels with perfectly cylindrical pores in synthetic systems following the existing theory. Achieving superior mass transport capacity revealed by Murray's law in nanostructured materials has thus far remained out of reach. We propose a Universal Murray's law applicable to a wide range of hierarchical structures, shapes and generalised transfer processes. We experimentally demonstrate optimal flow of various fluids in hierarchically planar and tubular graphene aerogel structures to validate the proposed law. By adjusting the macroscopic pores in such aerogel-based gas sensors, we also show a significantly improved sensor response dynamic. Our work provides a solid framework for designing synthetic Murray materials with arbitrarily shaped channels for superior mass transfer capabilities, with future implications in catalysis, sensing and energy applications.
Binghan Zhou, Qian Cheng, Zhuo Chen, Zesheng Chen, Dongfang Liang, Eric Anthony Munro, Guolin Yun, Yoshiki Kawai, Jinrui Chen, Tynee Bhowmick, Padmanathan Karthick Kannan, Luigi G. Occhipinti, Hidetoshi Matsumoto, Julian Gardner, Bao-Lian Su, Tawfique Hasan
2023-09-28T16:24:16Z
http://arxiv.org/abs/2309.16567v3
# Universal Murray's law for optimised fluid transport in synthetic structures ###### Abstract Materials following Murray's law are of significant interest due to their unique porous structure and optimal mass transfer ability. However, it is challenging to construct such biomimetic hierarchical channels with perfectly cylindrical pores in synthetic systems following the existing theory. Achieving superior mass transport capacity revealed by Murray's law in nanostructured materials has thus far remained out of reach. We propose a Universal Murray's law applicable to a wide range of hierarchical structures, shapes and generalised transfer processes. We experimentally demonstrate optimal flow of various fluids in hierarchically planar and tubular graphene aerogel structures to validate the proposed law. By adjusting the macroscopic pores in such aerogel-based gas sensors, we also show a significantly improved sensor response dynamic. Our work provides a solid framework for designing synthetic Murray materials with arbitrarily shaped channels for superior mass transfer capabilities, with future implications in catalysis, sensing and energy applications. ## Introduction The performance of materials can be strongly influenced by their structures.[1, 2] Hierarchically branched materials, with their multi-level and interconnected pores at various scales is a prime example.[2, 3, 4] The larger interconnected pores in such materials shorten the transport path and improve mass transfer efficiency, while the branching pores with increasing numbers but smaller size enhance the specific surface area and active reaction sites.[5, 6] Significant recent efforts have been devoted to the theoretical design and construction of such unique porous structures for optimised performance.[7, 8, 9] Towards this end, inspired by biological networks such as leaf veins and vascular systems, Murray's law have been considered for the construction of porous materials.[10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22] This biomechanical theory stipulates the optimal hierarchical porous network for the most efficient mass transfer and is poised to lead a new era of 'nanostructuring by design'.[23, 24] Indeed, recent synthetic porous constructs attempting to implement Murray's law have been claimed to improve application performances involving mass transfer processes, such as catalysis[12, 13, 14], sensing[11, 16], and energy storage[18, 19]. However, the original Murray's law cannot be appropriately implemented to synthetic nanostructured materials. This is because it was experimentally derived from branching circular tubes of biological networks in living organisms which cannot be replicated due to the limitations in current synthesis techniques. Therefore, the above attempts to fabricate Murray materials differ significantly from the original Murray's law in terms pore shapes and locations. Such mismatch in pore geometries between the original theory and synthetic Murray materials strongly reduces the promised optimum mass transfer. This also means that there has not yet been any convincing experimental verification of the superior mass transport in synthetic Murray materials. Therefore, to exploit the benefit of Murray materials, the original law needs to be generalised such that it can go beyond a specific pore architecture to accommodate various other geometries commonly achievable through traditional synthesis. Here, we propose a Universal Murray's law with a generalised pore structure and mass transfer process. 
We prove the wide applicability of our approach in hierarchical networks with non-circular cross-sections and planar structures. To validate our proposal, we construct planar and tubular hierarchical structures using unidirectionally and bidirectionally freeze-cast graphene oxide aerogels (GOA), and experimentally confirm optimal mass transfer for laminar flow using a range of fluids. We further show how simple structural optimisation guided by our proposed theory yields a significant dynamic performance improvement in GOA-based gas sensors. Our study lays a solid theoretical foundation of Murray's law in synthetic hierarchically porous materials, with a broad scope for applications involving mass transfer. ## Results ### The proofs of Murray's law and its derivations The original Murray's law was experimentally derived to describe how blood vessel structures offer the most energy-efficient transport.[23] This theory has been extensively studied in the field of biomechanics over the last decades.[25, 26, 27] Indeed, numerous transport networks in living organisms, such as vascular systems in animals, tracheal tubes in insects, and plant veins, have all been found to obey Murray's law or its derivations.[28, 29, 30, 24] Based on the commonly found shapes in biological networks, Murray's law has thus far been largely applied to cylindrical pores, branching pipe networks with circular channels; Fig. 1a. The original law was initially obtained by minimising the sum of the work required to overcome the flow resistance and the metabolic cost of vascular systems (for details, see Supporting Information: The initial deduction of Murray's law): \[\sum\!r_{1}^{3}=\sum\!r_{2}^{3}=\sum\!r_{3}^{3}=\cdots \tag{1}\] This equation proposes that the hierarchical pipe network becomes optimal for laminar flow when the sum of the cubes of the tube radii at each branching level is constant; Fig. 1a. The forcating point in the Murray network also shows that the radius cube of the parent channel is equal to the sum of the child channels'. Beyond laminar fluid flow, Murray's law takes on a square form for diffusion, electrical current, and ionic transport[24]: \[\sum\!r_{1}^{2}=\sum\!r_{2}^{2}=\sum\!r_{3}^{2}=\cdots \tag{2}\] Later, it was proposed that vascular systems can be optimised within the confines of a given total volume instead of considering tissue metabolism by following various forms of Murray's law described earlier.[24, 31, 32] This alternative approach enables the application of Murray's law in optimising the mass transfer of synthetic porous materials, as the'mass transfer performance' of materials should be considered as an intensive property, referring to the mass transport capacity per unit volume. Additionally, the transported mass is assumed to remain conserved in the initial Murray's law for biological networks. Considering the reaction or adsorption of materials in hierarchical pores, the generalised Murray's law [1, 10] introduced a mass loss ratio \(X\): \[r_{0}^{\alpha}=\frac{1}{(1-X)}\sum r_{i}^{\alpha} \tag{3}\] where \(r_{0}\) represents the radius of the parent channel and \(r_{i}\) for the branching children channels. The exponent \(\alpha=3\) for laminar flow and \(\alpha=2\) for diffusion or ionic transfer. The surface area of pores at different levels can be adeptly utilised to derive \(X\). This mass loss ratio \(X\) can also be directly substituted into all subsequent results in this work. 
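As a simple numerical illustration of Eq. (3) (our own sketch, not code provided with the paper), the parent-channel radius implied by a set of child radii can be computed directly; here `alpha` is 3 for laminar flow and 2 for diffusion or ionic transport, and `X` is the mass loss ratio.

```python
def parent_radius(child_radii, alpha=3, X=0.0):
    """Generalised Murray's law, Eq. (3): r0**alpha = sum(ri**alpha) / (1 - X)."""
    return (sum(r ** alpha for r in child_radii) / (1.0 - X)) ** (1.0 / alpha)

# Example: two equal children of unit radius, no mass loss, laminar flow -> r0 = 2**(1/3) ~ 1.26
r0 = parent_radius([1.0, 1.0], alpha=3, X=0.0)
```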
We note that Murray's law and its derivation can also be obtained by simply optimising the total resistance in a hierarchical network. For example, the flow resistance of laminar flow \(R\) in a hierarchical network could be written as the ratio of pressure difference \(\Delta p\) and total Figure 1: Murray’s law in hierarchical structures. (a) Schematic illustration of branching columnar tubes and corresponding initial expressions of Murray’s law. (b) Schematic illustration of demonstrative hierarchical structure in materials with comprehensive pore shapes and corresponding expression of Murray’s law. (c) Schematic illustration of hierarchically tubular network with arbitrary shape and corresponding expressions of Murray’s law. (d) Schematic illustration of hierarchical lamellar structure and corresponding expressions of Murray’s law. volumetric flow rate \(Q\)[33], \(R=\frac{\Delta p}{Q}\). The minimisation of \(R\) is equivalent to the maximisation of efficiency, where \(\Delta p\) is optimally utilised to drive the flow. Intuitively, the deduction of minimising resistance generates the identical cubic form of Murray's law for laminar flow and the square form for diffusion and ionic transport (for details, see Supporting Information: Deducing Murray's law by minimising resistance). Compared to the original deduction by investigating power cost, minimising resistance offers a more versatile approach allowing extension of the law to other transfer types like diffusion, where quantifying energy consumption is challenging. Although previous discussions of Murray's law are all based on tubular networks with circular cross section (Fig. 1a), artificially fabricated channels typically exhibit intricate and diverse shapes that significantly deviate from the perfect circular cross section of a cylinder commonly found in biological networks.[10, 11, 12, 13, 19] Furthermore, the pore shapes at different levels of a micro or nanostructured material often differ considerably due to the various pore-forming techniques. [10, 11, 22] Additionally, synthetic materials have large branching numbers owing to the significant size difference in hierarchical pores.[5, 6] Figure 1b shows schematic of a fictitious hierarchically porous network with non-circular and inconstant pore shapes at different levels. The first-level hexagonal pores represent typical self-assembled macropores [11, 19] in materials. The second-level pores in the shape of concave circular triangle illustrate the gaps between close-packed nanoparticles, which are often regarded as mesopores in Murray materials.[10, 13, 21] The geometric difference between the tube models in the current theory and the actual synthetic pores makes it challenging to reliably apply the law to these materials. Additionally, the original Murray's law cannot be expanded to other structures beyond hierarchical tube network or other types of mass transport. To the best of our knowledge, no successful attempts have ever been made to adequately leverage Murray's law in practical modelling of synthetic hierarchical structures. ### Universal Murray's law We start the generalisation of a hierarchical network with an arbitrary channel shape to make the derivation universal in terms of pores. The cross-sectional area of a channel can be written as \(A=k_{1}x^{\alpha}\), where \(k_{1}\) is the linear coefficient, \(x\) is a size variable of the channel, and \(\alpha\) is the exponent. 
For example, in a tube with circular cross section (\(A=\pi r^{2}\)), when the radius \(r\) is selected as variable x, \(k_{1}=\pi\), \(\alpha=2\). Supposing a potential \(\Delta P\) drives a mass transfer process in the network, the generalised mass flow rate is given by \(Q=k_{2}x^{\beta}\cdot\frac{\Delta P}{l}\), where \(k_{2}\) is the linear coefficient, \(l\) is the channel length of a section, and \(\beta\) is the exponent. Then, the minimisation of the resistance \(R=\frac{\Delta P}{Q}\) gives an equation for the optimal \(i\)-level hierarchical network (for details, see Supporting Information: The deduction of Universal Murray's law): \[\sum x_{1}^{(\alpha+\beta)/2}=\sum x_{2}^{(\alpha+\beta)/2}=\cdots=\sum x_{i} ^{(\alpha+\beta)/2} \tag{4}\] Equation (4) can readily give optimisation expressions for the well-discussed tubular networks with circular cross section (Supplementary Table 1), conforming to the known results. We name this expression Universal Murray's law, as it represents the most general form of Murray's law to date. More broadly, we can optimise hierarchical structures with inconstant shapes at different levels. Because of the dissimilar pore shapes, the cross-sectional area \(A\) and transfer rate \(Q\) of channels at different levels show separate expressions. As shown in Supplementary Figure 1, for a \(i\)-level hierarchical network, the expressions of the cross-sectional area at each level can be written as \(A_{1}=k_{1,1}x_{1}^{\alpha_{1}}\), \(A_{2}=k_{1,2}x_{2}^{\alpha_{2}}\), \(\cdots\), \(A_{i}=k_{1,i}x_{i}^{\alpha_{i}}\), with the flow rates \(Q_{1}=k_{2,1}x_{1}^{\beta_{1}}\frac{\Delta P}{l}\), \(Q_{2}=k_{2,2}x_{2}^{\beta_{2}}\frac{\Delta P}{l}\), \(\cdots\), \(Q_{i}=k_{2,i}x_{i}^{\beta_{i}}\frac{\Delta P}{l}\), where the subscripts \(1\), \(2\), \(\cdots\), \(i\) represent the level number. The Universal Murray's law then transforms to: \[\sum\nolimits x_{1}^{(a_{1}+\beta_{1})/2}:\ \cdots\ :\sum\nolimits x_{i}^{(a_{1}+ \beta_{1})/2}=\sqrt{\frac{\beta_{1}}{k_{1,1}\ k_{2,1}\ a_{1}}}:\ \cdots\ :\sqrt{\frac{\beta_{i}}{k_{1,i}\ k_{2,i}\ a_{i}}} \tag{5}\] The detailed derivation is in Supporting Information: The deduction of Universal Murray's law. Additionally, when the flow rate of the mass transfer process is not linear to the potential difference, but still can be written as \(Q=k_{2}x^{\beta}\cdot\left(\frac{\Delta P}{I}\right)^{\gamma}\), where \(\gamma\) is the exponent of \(\left(\frac{\Delta P}{I}\right)\), our Universal Murray's law gives (for details, see Supporting Information: The deduction of Universal Murray's law): \[\sum\nolimits x_{1}^{\frac{\gamma\alpha\beta}{14\gamma}}=\sum\nolimits x_{2}^ {\frac{\gamma\alpha\beta}{14\gamma}}=\cdots=\sum\nolimits x_{i}^{\frac{\gamma \alpha\beta}{14\gamma}} \tag{6}\] Equation (6) can be used to readily optimise the turbulent flow in rough pipes \((\sum r_{1}^{7/3}=\cdots=\sum r_{i}^{7/3})\), turbulent flow in smooth pipes \((\sum r_{1}^{17/7}=\cdots=\sum r_{i}^{17/7})\), and laminar flow of non-Newtonian liquids following power-law rheology \((\sum r_{1}^{3}=\cdots=\sum r_{i}^{3})\), consistent with the reported results (for details, see Supporting Information: Several derivations of Universal Murray's law). Our proposed Universal Murray's law first generalises the original Murray's law for arbitrary hierarchical structures and transfer types, even for a network with dissimilar channel (pore) shapes. 
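A compact way to see where Eq. (4) comes from (a sketch of our own, consistent with the Supporting Information route cited above and assuming each level \(i\) consists of \(n_{i}\) identical parallel channels of length \(l_{i}\)) is to minimise the series resistance at fixed total channel volume, \[R=\sum_{i}\frac{l_{i}}{n_{i}k_{2}x_{i}^{\beta}}\,,\qquad V=\sum_{i}n_{i}l_{i}k_{1}x_{i}^{\alpha}={\rm const}.\] Setting \(\partial_{x_{i}}(R+\lambda V)=0\) with a Lagrange multiplier \(\lambda\) gives \(\beta l_{i}/(n_{i}k_{2}x_{i}^{\beta+1})=\lambda\alpha n_{i}l_{i}k_{1}x_{i}^{\alpha-1}\), i.e. \[n_{i}\,x_{i}^{(\alpha+\beta)/2}=\sqrt{\frac{\beta}{\lambda\,\alpha\,k_{1}k_{2}}}\,,\] which is the same for every level and is exactly the conserved quantity \(\sum x^{(\alpha+\beta)/2}\) of Eq. (4); allowing level-dependent \(k_{1,i}\), \(k_{2,i}\), \(\alpha_{i}\), and \(\beta_{i}\) reproduces the square-root ratios of Eq. (5).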
These above expressions enable the investigation of the general mass flow principle of optimal networks, independent of the existing models based on specific structures and transfer processes. ### Expanding Murray's law for hierarchical structures in materials The Universal Murray's law can optimise a more general hierarchical structure and transfer process, accommodating various pore shapes in materials. For example, it extends the optimisation for the well-studied tubular network with circular cross section to the network with non-circular channels; see Supporting information: Murray's law in hierarchically tubular network with arbitrary shape. Figure 1c schematically illustrates a tubular branching network with an arbitrary close geometric shape while the channel sections at different levels are similar. In this case, the Universal Murray's law proves that the cubic expression of the original Murray's law for laminar flow, \(\sum x_{1}^{3}=\sum x_{2}^{3}=\cdots=\sum x_{i}^{3}\), and the square form for diffusion, ionic transport, or electronic transfer, \(\sum x_{1}^{2}=\sum x_{2}^{2}=\cdots=\sum x_{i}^{2}\) are still valid for tubular networks with any channel shape. More generally, for hierarchically porous materials with different shapes at different levels (Fig. 1b), we note that for diffusion, ionic transport, or electronic transfer, a proportionate relationship exists between their transfer rate and cross-sectional channel area, \(Q\propto A\), independent of the channel shape. This is consistent with Pouillet's law and Ohm's law for ionic and electronic transfer \(Q=\sigma A\cdot\frac{\Delta V}{I}\) (where \(\sigma\) is the conductivity and \(\Delta V\) is the potential difference) and Fick's law for diffusion \(Q=DA\cdot\frac{\Delta C}{I}\), (where \(D\) is the diffusion coefficient and \(\Delta C\) is the concentration difference). If \(A\) is directly chosen as the size variable, \(\alpha=1\) and \(\beta=1\), the Universal Murray's law gives: \[\sum\nolimits A_{1}=\sum\nolimits A_{2}=\cdots=\sum\nolimits A_{i} \tag{7}\] Thus, for diffusion or ionic transport which are most commonly found in the applications of porous materials, our Universal Murray's law shows that the above optimisation equation holds irrespective of the exact pore shape in hierarchical materials. This expression proves that structural optimisation can be directly linked to the pore's cross-sectional area or the normalised pore size obtained by gas sorption analysis.[10, 11, 12, 18, 19] Beyond the above transfer types, hierarchically porous materials with different pore shapes can be optimised by Equ. (5). ### Optimising unexplored structure and transfer by Universal Murray's law Beyond unifying and expanding the known expressions of the original Murray's law, our Universal Murray's law can also optimise novel transfer types and hierarchical structures. For instance, Knudsen diffusion describes gas diffusion in mesopores with pore sizes comparable to mean free path, with Knudsen diffusion coefficient scaling as \(D_{\textit{k}}\propto r\).[34] Thus, in a tubular network (Fig. 1a), the molecular flow rate across the channel according to Knudsen formula is linear to the cube of the radius: \(Q\propto r^{3}\).[35] We can write the optimal hierarchically tubular structure for this case using Universal Murray's law: \[\sum r_{1}^{2.5}=\sum r_{2}^{2.5}=\cdots=\sum r_{i}^{2.5} \tag{8}\] We can also optimise planar material structures with the Universal Murray's law. As shown in Fig. 
1d, this type of hierarchical structure fractally divides into only one dimension, while the tubular structure branches into two dimensions. Let us consider the plates at different levels in Fig. 1d to have a fixed width \(d\), which is significantly larger than the channel height \(h\), and thus, can be regarded as a 2D infinite planar structure. Consequently, if we select the height of the channels between parallel plates \(h\) as the size variable for optimisation, the relationship between \(A\) and \(h\) is \(A\propto h^{1}(\alpha=1)\). Meanwhile, the volumetric flow rate of a laminar flow between the parallel plates can be written as \(Q=\frac{d}{12\eta}\cdot\ h^{3}\cdot\frac{\Delta\mathrm{p}}{l}\) (\(\beta=3\))[36], where \(\eta\) represents fluid viscosity, \(\Delta\mathrm{p}\) the pressure difference, and \(l\) the channel length. According to our Universal Murray's law, the optimised planar structure should satisfy \(\sum h_{1}^{2}=\sum h_{2}^{2}=\cdots=\sum h_{i}^{2}\) for laminar flow. The corresponding expression of \(Q\) gives \(\sum h_{1}=\sum h_{2}=\cdots=\sum h_{i}\) for diffusion and ionic transport, as shown in Supplementary Table 2. The aforementioned discussions in Knudsen diffusion and planar structure demonstrate that our Universal Murray's law can concisely express the optimal design of unexplored hierarchical networks and transfer types. This general theory shows great potential in optimising synthetic porous structures for various application scenarios. In the following sections, we offer two experimental examinations of the Universal Murray's law. ### Unidirectional and bidirectional freeze-cast GOA Graphene oxide aerogel (GOA) is chosen to construct the hierarchical structures to verify our proposed Universal Murray's law, because of its advantageous characteristics of high porosity, appropriate pore size, and the feasibility of adjusting the pore size and shape. Freeze-casting method, also known as the ice templating[37], allows for the preparation of various shapes of GOA with adjustable pore size within a certain range. The GOA-based Murray structures in the following sections are constructed by this preparation method. The density of the as-prepared GOA (see Methods) is 25 mg\(\cdot\)cm\({}^{-3}\). This corresponds to a calculated porosity of 98.2%.[38] The pore wall thickness in the aerogel is negligible, and is not considered in the calculations. The reported pore diameter of freeze-cast aerogels range from 10 to 240 microns, far above the upper limit of the nanofluidic channel (<100 nm).[39] Hence, the electrical double layer at the GO surface does not significantly influence the flow.[40] As shown in Fig. 2a, we use the unidirectional freeze-casting method to prepare the vertically porous GOA. This method applies a temperature gradient in the vertical direction to samples on a horizontal freezing platform of copper. While freezing, random nucleation of ice crystals first appears and grows on the cooling surface, forming multiple, small-sized domains at the interface of the GO dispersion and the copper platform[41]. The ice crystals then elongate in parallel along the freezing direction under the restriction of neighbouring crystals. The columnar Figure 2: The structure of unidirectional and bidirectional freeze-cast GOA. (a) Schematic illustration of unidirectional freeze-casting method. (b-d) Top-view SEM images of unidirectional freeze-cast GOA frozen at (b) \(-\)20 \({}^{\circ}\)C, (c) \(-\)40 \({}^{\circ}\)C, and (d) \(-\)70 \({}^{\circ}\)C. 
Insets: Fourier transform images. (e) The average pore size of unidirectionally freeze-cast GOA frozen at different temperatures and GOA frozen by liquid nitrogen. (f) Schematic illustration of bidirectional freeze-casting method. (g-i) Top-view SEM images of bidirectionally freeze-cast GOA frozen at (g) \(-\)10 \({}^{\circ}\)C, (h) \(-\)30 \({}^{\circ}\)C, and (i) \(-\)50 \({}^{\circ}\)C. Insets: Fourier transform images. (j) Average layer height of bidirectional freeze-cast GOA at different temperatures. (k) Orientation degree of unidirectionally and bidirectionally freeze-cast GOA. Scale bars: 100 \(\mu\)m. ice crystals, serving as heterogeneous templates, extrude the assembled materials and create a framework during their solidification. Subsequent sublimation removes this ice template, retaining a refined porous structure. The vertical pores in GOA frozen at different temperatures are shown in Fig. 2b-d. The top view of freeze-cast aerogel illustrates dense and aligned columnar pores, originating from vertically-grown ice crystals through unidirectional freeze-casting. As a comparison, the GOA frozen by liquid nitrogen demonstrates an almost isotropically porous microstructure (Supplementary Figure 2), where the twisty and short pores do not show a specific preference for prolonged direction due to omnidirectional freezing. According to the basic principle of crystallography[42], the ice crystallization rate increases with decreasing freeze-casting temperature, resulting in smaller ice grain sizes and vertical pores in the resulting material. This phenomenon is significant in the fabrication of porous materials by freeze-casting, as it allows for adjustment of the resulting pore structure. Consequently, the vertical pores significantly shrink as the freezing temperature is reduced from \(-20\)\({}^{\circ}\)C to \(-70\)\({}^{\circ}\)C, revealed in the top-view scanning electron microscope (SEM) images (Fig. 2b-d). We also quantitatively analyse the average pore size of the frozen samples using a bespoke image processing program (see Methods). As shown in Supplementary Figure 3a, the program recognises and masks the pores at the sectional image by adjusting contrast. The program then calculates and averages the pore sizes. Compared to manual recognition[39], this approach is more objective and reliable. As shown in Fig. 2e, the pore size is closely related to the freezing temperature with a uniform diameter distribution (Supplementary Figure 4). The average pore diameter of liquid nitrogen-frozen GOA at \(-196\)\({}^{\circ}\)C is 6.23 \(\mu\)m. The pore sizes of directionally freeze-cast GOA range from 7.85 \(\mu\)m to 39.8 \(\mu\)m, corresponding to the freezing temperature increase from -80 \({}^{\circ}\)C to -10 \({}^{\circ}\)C. However, the temperature-size relation is not linear. The average pore diameter rises slowly at low temperatures but increases rapidly near 0 \({}^{\circ}\)C. This is because along with the increase in freezing temperature, the ice nucleation rate decreases non-linearly, while the growth rate increases initially and then reduces.[39] Figure 2f illustrates the bidirectional freeze-casting method for constructing the planar structure. When a polydimethylsiloxane (PDMS) wedge with a certain angle is inserted between the cooling stage and the sample being freeze-cast, an additional horizontal temperature difference (Fig. 
2f) is applied in the deposited GO due to the low thermal conductivity of PDMS.[41, 43] The bidirectional temperature difference causes the ice crystals to grow both horizontally and vertically, resulting in a layered structure after freeze-drying. The aerogel, which is bidirectionally freeze-cast at a relatively high temperature, shows a regular lamellar architecture in the top-view section (Fig. 2g). The parallel GO walls align with the plane of two temperature gradients. The lamellar GOA frozen at lower temperature illustrate a narrower layer spacing and a more disordered alignment (Fig. 2h-i) due to the rapid ice growth during freezing. Similarly, we developed another image processing program to calculate average layer spacing in lamellar GOA frozen at different temperatures (see Methods for details and Supplementary Figure 3b). Fig. 2h demonstrates that the average layer spacing of lamellar GOA increases from 30.5 \(\mu\)m to 97.1 \(\mu\)m when the bidirectional freeze-casting temperature rises from -50 \({}^{\circ}\)C to -10 \({}^{\circ}\)C. Fourier transforms of the raw SEM images (Fig. 2b-d, g-i, insets) show the orientation of the GO structure in the cross section.[41] The unidirectional freeze-cast GOA (Fig. 2b-d, insets) and liquid-nitrogen frozen GOA (Supplementary Figure 2, inset) have close to circular Fourier transforms, implying there is no specific orientation along the cross-section. In contrast, a clear alignment is observed in the top-view section of lamellar GOA prepared by bidirectional freeze-casting (Fig. 2g-i, insets). A pore shape fitting program, similar to what we used for the aforementioned pore measurement, is also used to estimate a quantitative value to describe the pore orientation. From 0 to 1, orientation degree denotes the fully random distribution of pore orientation to perfect alignment. In Fig. 2k, bidirectionally freeze-cast lamellar GOA possesses a higher orientation degree from 0.88 to 0.62, declining with the decrease of freezing temperature. In contrast, the disordered, vertically porous GOA or the one frozen by liquid nitrogen show a lower orientation degree, ranging from 0.18 to 0.30. ### Examination of Universal Murray's law in hierarchical structures Thus far, comparative experiments on the mass transfer superiority of Murray materials have never been successfully demonstrated. Current studies compare materials with different levels of hierarchy and show that the materials with higher levels of hierarchy exhibit superior performance, for example, those with three levels of macro-, meso-, and micropores outperform those with only two levels of meso- and micropores.[10, 12, 19] These comparisons only demonstrate the benefit of introducing an additional level of hierarchy, rather than mass transfer improvement of structural optimisation based on Murray's law. Therefore, the verification of optimal mass transfer in Murray materials requires comparison between samples following or deviating from the law but with the same level of hierarchy. According to the equations of Murray's law, to construct an optimal \(i\)-level hierarchical structure, it is necessary to precisely control either the size or the number of channels in at least \((i-1)\) levels. For freeze-cast GOA, the gap size can be adjusted through the tuning of the freezing temperature. However, it is impractical to accurately fine-tune the channel size by this process. We therefore tune the channel numbers by shaping the bulky aerogel when constructing GOA-based Murray structures. 
Towards this end, we first survey the optimal planar structure for laminar flow obeying the Universal Murray's law in GOA. In a 3-level rectangular mould, we derive the expression for laminar flow in the planar structure based on Universal Murray's law, \(n_{1}h_{1}^{2}=n_{2}h_{2}^{2}=n_{3}h_{3}^{2}\) (Supplementary Table 2). This equation can be rewritten as \(H_{1}h_{1}=H_{2}h_{2}=H_{3}h_{3}\), where \(H_{1}\), \(H_{2}\), and \(H_{3}\) represent the section heights at different levels (for details see Supporting Information: The construction of optimal planar and tubular structure based on GOA). Lamellar GOA samples frozen at \(-10\)\({}^{\circ}\)C, \(-30\)\({}^{\circ}\)C, and \(-50\)\({}^{\circ}\)C are used to construct the hierarchical structure with maximised size difference between the levels. Therefore, the section heights need to satisfy \(H_{1}:H_{2}:H_{3}=0.524:\ 1:\ 1.67\), according to their average layer spacing of 97.1, 50.9, 30.5 \(\mu\)m (Fig. 2j). Note that, to avoid excessive concentration of the flow current at the pipe centre, we ensure smooth transitions between the sections as demonstrated and discussed in Supplementary Figure 5a-b. We also prepare other hierarchically planar pipes that deviate from Murray's law and obey the conservation \(\sum h_{1}^{x}=\sum h_{2}^{x}=\sum h_{3}^{x}\) for exponent values of \(x=1,1.5,2.5,3\), with the same channel length, width, and total volume. These channels are compared with the aforementioned Murray structures following \(\sum h_{1}^{2}=\sum h_{2}^{2}=\sum h_{3}^{2}\). We then measure the pressure drop along these hierarchical pipes for both water and air flow at different flow rates (Supplementary Figure 6). All the tests discussed in this section have laminar flow under our experimental conditions (see Methods). We plot the flow resistance calculated from the pressure drop against the exponent value \(x\) in Fig. 3a for water flow and Fig. 3b for air flow. The aerogel following Murray's law demonstrates the smallest resistance for laminar flow. Furthermore, as the hierarchical network deviates from Murray's law, \(\sum h_{1}^{2}=\sum h_{2}^{2}=\sum h_{3}^{2}\), resistance increases. We then use scaled-down models for flow simulation (Supplementary Figure 7). Note that the simulation of full-scale models is unnecessary and impractical because of the considerable gap between the size of bulky aerogel samples and the pores. The simulation results show the same U-shaped curve, with the lowest flow resistance point matching the synthetic Murray materials (Fig. 3a-b).

Figure 3: The experimental and simulation validation of Universal Murray's law. (a-b) The experimental flow resistance of (a) water and (b) air in hierarchical lamellar GOA with a fixed total volume and corresponding simulated resistance in scaled-down models. (c) Changes in the estimated laminar flow resistance and section volume with exponent \(x\) in the conservation. (d-e) The experimental flow resistance of (d) water and (e) air in hierarchically tubular GOA and corresponding simulated results in scaled-down models. (f-i) The experimental flow resistance of (f) 2-butanol, (g) hexane, (h) ethanol, and (i) toluene in hierarchical lamellar GOA. (j-m) The experimental flow resistance of (j) 2-butanol, (k) hexane, (l) ethanol, and (m) toluene in hierarchical tubular GOA.

Additionally, the deduction of this theory relies on the even distribution of the flow within individual channels of each section.
The simulation (Supplementary Figure 6) illustrates that the flow distribution is roughly uniform in the aerogel, satisfying this assumption. The examination of the planar structure for optimal laminar flow is convincing evidence for the establishment of the Universal Murray's law. For the first time, we expand Murray's law into synthetic hierarchical structures and experimentally confirm it. Both the experiments and simulation demonstrate that the branching lamellar GOA structures achieve minimised resistance for laminar flow when following the corresponding expression of the Universal Murray's law. We also show that deviation from this principle leads to a reduction of mass transfer performance, as enshrined in Murray's law. We note that, for the same shape, the layered structure with larger layer spacing should have smaller flow resistance, as the flow resistance of a section can be rewritten as \(R\propto\frac{1}{nh^{3}}=\frac{1}{Hh^{2}}\), where \(n\) represents the number of channels, \(h\) is the channel height, and \(H\) is the section height. The flow resistance of lamellar GOA frozen at different temperatures (Supplementary Figure 8) and the simulation results (Supplementary Figure 9) also confirm this trend. Therefore, in the straight structure following \(\sum h_{1}^{1}=\sum h_{2}^{1}=\sum h_{3}^{1}\), the flow resistance of the latter sections should be larger than that of the former, \(R_{1}<R_{2}<R_{3}\), because \(h_{1}>h_{2}>h_{3}\) and \(H_{1}=H_{2}=H_{3}=\sum h\). For other structures, as the exponent \(x\) in the conservation formula rises, the volume of the whole structure gradually transfers from the front to the latter sections; Fig. 3c. This shape change would reduce the resistance of the third section and raise the resistance in the first two sections; Fig. 3c. Additionally, it also increases the total surface area of the hierarchical structure, because the high-level section with smaller pores has a higher specific surface area. The U-type resistance curves in Fig. 3a-c imply that in this process, resistance reduction in the third section is initially dominant, followed by a resistance increase in the first two sections. The two aforementioned influences are equal when obeying Murray's law, showing the lowest resistance. Consequently, with a constrained total volume, Murray's law can also be described as a principle that appropriately distributes a larger volume into the high-level sections of smaller channel size and higher resistance, such that it balances the resistance of different sections to minimise the total resistance. The structural optimisation based on Murray's law in the hierarchically tubular pipe prepared from vertically porous GOA also supports this observation. Although the channels in unidirectionally freeze-cast GOA are more like close-packed polygonal tubes rather than the cylinders assumed in the original Murray's law (Fig. 2b-d), the Universal Murray's law allows optimisation for this type of non-circular pore. Similarly, using vertically porous GOA frozen at \(-20\)\({}^{\circ}\)C, \(-40\)\({}^{\circ}\)C, and \(-70\)\({}^{\circ}\)C, we construct and compare hierarchical channels following \(\sum r_{1}^{x}=\sum r_{2}^{x}=\sum r_{3}^{x}\), where the exponent \(x=1,2,3,4,5\) (for details, see Supporting Information: The construction of optimal planar and tubular structure based on GOA).
We also improve the pipeline to a smooth conical shape for better flow distribution in the channel (Supplementary Figure 10). As shown in Fig. 3d-e and Supplementary Figure 11a-b, the pipe obeying Murray's law (\(\sum r_{1}^{3}=\sum r_{2}^{3}=\sum r_{3}^{3}\)) achieves minimal resistance for laminar fluid flow both in experiments and in simulation of the scaled-down models (Supplementary Figure 12). The resistance increases notably when the exponent \(x\) moves away from 3, referring to pipes gradually deviating from the optimal Murray structure. As the tubular network is a more classic and frequently discussed model[23, 24, 28], these results of the tubular structure further validate the Universal Murray's law in materials by both experimentation and simulation. Laminar fluid flow is important in industrial production, such as the catalytic reaction of organic solvents. Since our deduction of the Universal Murray's law does not consider the type of fluid, the optimisation is also expected to be universally applicable to other fluids under laminar flow. To verify this, we measure the laminar flow resistance in several common and representative organic solvents, including the high-viscosity solvent 2-butanol (Fig. 3f, j), the low-viscosity solvent hexane (Fig. 3g, k), the polar solvents ethanol (Fig. 3h, l) and 2-butanol, and the non-polar solvents toluene (Fig. 3i, m) and hexane. The experiments on these solvents show that both the hierarchically planar and tubular GOA reach minimal resistance when obeying the Universal Murray's law. The experimentally obtained U-type curves again effectively demonstrate the universality of this principle. ### Optimising hierarchical GOA-based gas sensor by Universal Murray's law To demonstrate the practical applicability of Murray's law, we conceive a GOA-based gas sensor to measure nitrogen dioxide (NO\({}_{2}\)) flowing through it, and then optimise the hierarchical structure for gas flow using the Universal Murray's law. The gas sensor is prepared from SnO\({}_{2}\) quantum dot (QD)-decorated GOA as shown in Fig. 4a.[44] SnO\({}_{2}\) QD-decorated GO ink is synthesised through a surfactant-assisted hydrothermal growth process (see Methods for details). Then, the decorated ink is unidirectionally freeze-cast at \(-20\)\({}^{\circ}\)C, \(-40\)\({}^{\circ}\)C, and \(-70\)\({}^{\circ}\)C to construct the hierarchically porous aerogels.

Figure 4: Optimising tubular GOA-based gas sensor by Murray's law. (a) Schematic illustration of the synthesis of SnO\({}_{2}\) QD-decorated GOA and the assembly of hierarchical gas sensor. (b-c) TEM images of SnO\({}_{2}\) QD-decorated GO ink. Scale bars: (b) 100 nm and (c) 10 nm. (d-f) Top-view SEM images of unidirectional freeze-cast GOA frozen at (d) \(-20\)\({}^{\circ}\)C, (e) \(-40\)\({}^{\circ}\)C, and (f) \(-70\)\({}^{\circ}\)C. Scale bars: 100 \(\mu\)m. (g) Response curves of hierarchical SnO\({}_{2}\) QD-decorated GOA in the straight pipe and optimised by Murray's law towards 1 ppm NO\({}_{2}\). (h) Air flow simulation in the scaled-down models of hierarchical and Murray GOA.

Transmission electron microscopy (TEM) images of as-prepared GO ink (Fig. 4b-c) demonstrate uniformly distributed SnO\({}_{2}\) QDs on GO sheets, with sizes smaller than twice the exciton Bohr radius (2.7 nm) of SnO\({}_{2}\).[44] The lattice fringes of 2.5 and 3.4 Å in Fig. 4c correspond to the 101 and 110 planes of SnO\({}_{2}\), respectively. As shown in Fig.
4d-f, after unidirectionally freeze-casting, SnO\({}_{2}\) QD-decorated GOA forms consistent vertical pores (20.6 \(\mu\)m at \(-\)20 \({}^{\circ}\)C, 12.6 \(\mu\)m at \(-\)40 \({}^{\circ}\)C, and 8.17 \(\mu\)m at \(-\)70 \({}^{\circ}\)C ) with pure GOA (Fig. 2e) within the measurement error range. Note that at the high GO concentrations we used (25 mg\(\cdot\)mL\({}^{-1}\)), the addition of the quantum dots does not significantly affect the pore size of freeze-cast aerogel.[45] Without considering any structural design principles, we first intuitively conceive a hierarchical gas sensor as a straight cylinder with three levels of sections (Fig. 4a). The hierarchy of the materials in these three sections offers both the benefits of efficient airflow and large active surface area. This sensor structure follows the conservation \(\sum r_{1}^{2}=\sum r_{2}^{2}=\sum r_{3}^{2}\). This hierarchical GOA sensor achieves 13.2% response for 1 ppm NO\({}_{2}\) with a response time \(\tau_{res}\) of 23.9 min and recovery time \(\tau_{rec}\) of 69.4 min (Fig. 4g). With the same total volume of 30\(\pi\) mm\({}^{3}\) and length of 10 mm, we now adjust the average diameters of each section to \(D_{1}:D_{2}:D_{3}=0.799:1:1.23\) to construct the Murray structure obeying \(\sum r_{1}^{3}=\sum r_{2}^{3}=\sum r_{3}^{3}\). This simple shape adjustment based on Murray's law shortens the response time \(\tau_{res}\) and recovery time \(\tau_{rec}\) by 19.7% and 18.3%, respectively (Fig. 4g). This improvement degree in response and recovery of gas sensing can be explained by the improved mass transport associated with the 12.3% reduction in the air flow resistance, as indicated in the simulation of the scaled-down models(Fig, 4h). By tailoring the shape of the GOA following Murray's law, we reinforce the dynamic response and recovery of the sensor due to the fluid transport improvement. Therefore, structural optimisations based on Murray's law can considerably strengthen the performance of porous materials by simply adjusting the macroscopic shape or pores, without changing the material's chemical composition. For hierarchically porous materials used for applications relying on mass transfer, such as catalysis[46], sensing[47, 48], energy storage[4], and environmental protection[49], Murray's law can therefore offer significant performance improvement. ## Conclusion We have proposed an extension of Murray's law for synthetic nanostructures regardless of channel shape and experimentally demonstrated its validity. Starting from minimising the resistance of a general transport process, we extend the original expression to a common mathematical form as the Universal Murray's law. For diffusion, ionic transport, and electronic transfer in arbitrary tubular networks, the optimised law can be transformed into an equation of the sum of pores' cross-sectional area. This result provides a rigorous theoretical framework for the most hierarchically porous materials with non-circular shapes. Additionally, we discuss two scenarios, Knudsen diffusion and planar shape, to demonstrate the value of Universal Murray's law towards unexplored transfer type and structure. We construct planar and tubular structures by freeze-cast GOA to verify the validity of the Universal Murray's law in materials. The hierarchical aerogels obeying the optimisation equation show minimal resistance for laminar fluid flow, while others deviating from the principle exhibit increased resistance. 
We also demonstrate that a simple adjustment of the macroscopic shape guided by this law could significantly improve the mass transfer performance in sensors. Our work establishes a sound theoretical foundation for synthetic Murray materials and may inspire structural design of porous systems in a variety of applications benefiting from optimal mass transport. ## Methods ### Preparation of graphene oxide aerogel Graphene oxide aerogels are fabricated according to our previously reported method.[50] First, GO dispersion of 25 mg\(\cdot\)mL\({}^{-1}\) concentration is prepared by mixing non-exfoliated GO paste (Sigma-Aldrich) in DI water and rigorous stirring for 4 h. The dispersion is then mixed with 160 mM ascorbic acid (Acros Organics) and heated at 60 \({}^{\circ}\)C for 1 h to gelate the dispersion through the partial reduction of GO[50]. The as-prepared viscous ink is next extruded into 3D-printed PLA moulds to control the shape of the final samples. After being freeze-cast or frozen in liquid nitrogen for 10 min, the samples are freeze-dried overnight in a freeze drier (LyoQuest, Telstar). The excess ascorbic acid and other soluble impurities are washed away by water and an additional freeze-drying step to obtain GOA. ### Preparation of SnO\({}_{2}\) QD-decorated aerogel The freeze-casting method utilises a custom-built freezing device to provide a cold source at a precisely controlled temperature. In unidirectional freeze-casting, the samples are directly placed on a copper freezing platform at a specific temperature for 30 min. As for bidirectional freeze-casting, the samples are placed on a 30\({}^{\circ}\) PDMS wedge on the copper platform for 30 min. The custom-built freezing device measures and controls the temperature at the surface of the PDMS wedge. SnO\({}_{2}\) QD-decorated GOA for room-temperature gas sensing is synthesised following our previous publication.[44] In a typical process, SnO\({}_{2}\) precursor is first synthesised by dissolving 4 mM of tin chloride pentahydrate (SnCl\({}_{4}\cdot\)5H\({}_{2}\)O, Sigma-Aldrich) and 4 mM of 6-aminohexanoic acid (AHA, Sigma-Aldrich) in 30 mL DI water, followed by 5 minutes of sonication. The precursor and 10 mL GO dispersion is then hydrothermally heated at 140 \({}^{\circ}\)C for 3 h. After cooling down, the resultant sample is centrifuged at 4000 rpm for 10 minutes and washed with DI water for 3 times. The precipitate is redispersed into DI water to a concentration of 15 mg\(\cdot\)mL\({}^{-1}\). It is next mixed with 28 mM ascorbic acid and 50 mM copper chloride (CuCl\({}_{2}\), Sigma-Aldrich). The resultant sample is then heated at 60 \({}^{\circ}\)C for 30 min to prepare the ink for extrusion. High-resolution TEM (HRTEM, Tecnai F20) is performed to characterise the QD-decorated sample. The extruded architecture in the mould is then freeze-cast, freeze-dried overnight, and heated at 60 \({}^{\circ}\)C for 5 h. The decorated GOA is soaked in 100 mM CuCl\({}_{2}\) solution for 1 h and washed with DI water for 1 h 3 times to introduce additional surface doping. A second freeze-drying process removes water from the aerogel, resulting in SnO\({}_{2}\) QD-decorated GOA for gas sensing. ### Pore recognition and measurements After freeze-casting, the frozen samples are horizontally broken at 5 mm height before freeze-drying. SEM (FEI Magellan 400 SEM) is performed to observe the sample cross-section. Fourier transform images are obtained by ImageJ on the inverted figures. 
The image processing programmes uploaded into the supporting material measure the average pore size, layer spacing, orientation degree, and pore aspect ratio. For vertically porous GOA, the code first enhances the contrast of top-view SEM images by gamma conversion and Otsu's thresholding. Then the bright regions representing GO walls are flood-filled and blurred by the box filter and median filter. After masking the holes covered by bright region contour and processing with morphological opening, the average radius of recognised pores is calculated based on their area. For lamellar GOA prepared by bidirectional freeze-casting, the layered holes are first identified with a similar process, and the smallest 10% are removed. They are then fitted with ellipses where the short axes length is considered as layer spacing. The porous structures are fitted with ellipses and the rotation angles are calculated to measure the orientation degree. The orientation degree is defined as the average relative deviation of the angle distribution from the uniform distribution. This parameter denotes the alignment degree of the ellipsoids. The orientation degree is close to 1 for perfectly aligned holes and 0 for large numbers of randomly orientated ellipses. ### Flow resistance measurement The hierarchically planar structures are designed based on the corresponding equations, under the constraints of a total volume of 1300 mm\({}^{3}\), a pipe width of 10 mm, and a pipe length of 30 mm. Similarly, the hierarchically tubular structures are designed under the restrictions of a total volume of 90\(\pi\) mm\({}^{3}\) and a fixed length of 30 mm. The as-designed hierarchical structure of freeze-cast GOA is then connected to a syringe. The volumetric flow rate of fluids through the pipe is controlled by a syringe pump (ALADDIN-220, WPI Ltd.). A digital pressure gauge (Digitron 2021P) is parallelly connected to the tubes' inlet and outlet which spontaneously measures the pressure difference between the two sides. The resistance of laminar flow in the hierarchical GOA is calculated by linearly fitting the pressure difference as the function of the volumetric flow rate. For single-phase pipe flows, the flow tends to be laminar when its Reynolds number is less than 2000, \(R_{\text{e}}=\frac{uL}{v}<2000\), where \(u\) is the flow rate, \(v\) is the kinematic viscosity of the fluid, and \(L\) is the characteristic linear dimension.[51] In the scenarios discussed in this paper, the calculated Reynolds number is much smaller than 2000 and generally smaller than 50, as the diameter of the pores is in the micrometre regime and the flow rates are relatively low. Therefore, all fluid flow in the experiments can be regarded as laminar flow. ### Simulation The numerical simulations are performed using the commercial CFD package of ANSYS Fluent software to solve the fluid flow Navier-Stokes equations inside the models. For the simulation of planar structure, the models are 14 times smaller in length and height than the bulky aerogels used in the experiments. The simulation models for tubular structure are 60 times smaller in length, width and height. The models for aerogel gas sensor are 30 times smaller in length, width and height. The freeze-cast pore size or channel height in these models is consistent with the measured results. The computational mesh roughly has 70,000 cells for planar structure, and 170,000 cells for tubular structure, and the solutions are seen as converged when the residuals reach \(10^{-8}\). 
Second-order discretisation methods are used throughout the simulations. All the simulations assume laminar flow and smooth wall conditions. The solutions show good convergence characteristics independent of grid sizes and residual values. ### Gas sensing measurements Gas sensing measurements are conducted in the Kenosistec gas characterisation system. Two mass flow controllers are used to control the flow of dry air and of 1 ppm NO\({}_{2}\) as the target gas. A total gas flow of 500 sccm is supplied towards the inlet of the GOA sensor to form an even gas flow in the aerogel. Before the measurement, the sensors are stabilised in dry air for 2 h. The sensor resistance is measured at a fixed voltage. All experiments are carried out at 25 \({}^{\circ}\)C and atmospheric pressure. The response time is defined as the time to reach 90% of the maximum response after exposure to the target gas. The recovery time is defined as the time taken for the response to return to 10% above the baseline after the removal of the target gas. ## Data Availability All relevant data are available from the corresponding author on request. ## Code Availability All relevant codes are available as supporting information or from the corresponding author on request. ## Acknowledgements This research was supported by EPSRC (EP/W024284/1, EP/W023229/1), National Natural Science Foundation of China (22293020, 22293022), National Key R&D Program of China (2021YFE0115800), and WBI-MOST (SUB/2021/IND493971/524448). B. Z. would like to acknowledge the CSC-Cambridge scholarship for financial support. ## Author Contributions B. Z. and T. H. conceived the idea of the project. B. Z. performed the theoretical work. B. Z., Z. C., Y. K., H. M. designed and conducted the experiments, and analysed the data. Q. C., D. L., G. Y. performed the hydrodynamic simulation. Z. C., B. Z. performed the coding and analysis for the pore size measurement. B. Z. wrote the draft manuscript. E. A. M., J. C., T. B., P. K. K., J. G., B. L. S., and T. H. revised the manuscript. T. H. supervised the project. ## Competing Interests The authors declare no competing interests.
2305.19581
SVVAD: Personal Voice Activity Detection for Speaker Verification
Voice activity detection (VAD) improves the performance of speaker verification (SV) by preserving speech segments and attenuating the effects of non-speech. However, this scheme is not ideal: (1) it fails in noisy environments or multi-speaker conversations; (2) it is trained based on inaccurate non-SV sensitive labels. To address this, we propose a speaker verification-based voice activity detection (SVVAD) framework that can adapt the speech features according to which are most informative for SV. To achieve this, we introduce a label-free training method with triplet-like losses that completely avoids the performance degradation of SV due to incorrect labeling. Extensive experiments show that SVVAD significantly outperforms the baseline in terms of equal error rate (EER) under conditions where other speakers are mixed at different ratios. Moreover, the decision boundaries reveal the importance of the different parts of speech, which are largely consistent with human judgments.
Zuheng Kang, Jianzong Wang, Junqing Peng, Jing Xiao
2023-05-31T05:59:33Z
http://arxiv.org/abs/2305.19581v1
# SVVAD: Personal Voice Activity Detection for Speaker Verification ###### Abstract Voice activity detection (VAD) improves the performance of speaker verification (SV) by preserving speech segments and attenuating the effects of non-speech. However, this scheme is not ideal: (1) it fails in noisy environments or multi-speaker conversations; (2) it is trained based on inaccurate non-SV sensitive labels. To address this, we propose a speaker verification-based voice activity detection (SVVAD) framework that can adapt the speech features according to which are most informative for SV. To achieve this, we introduce a label-free training method with triplet-like losses that completely avoids the performance degradation of SV due to incorrect labeling. Extensive experiments show that SVVAD significantly outperforms the baseline in terms of equal error rate (EER) under conditions where other speakers are mixed at different ratios. Moreover, the decision boundaries reveal the importance of the different parts of speech, which are largely consistent with human judgments. Zuheng Kang, Jianzong Wang\({}^{*}\), Junqing Peng, Jing Xiao Ping An Technology (Shenzhen) Co., Ltd., Shenzhen, China {kangzuheng896, wangjianzong347, pengjq, xiaojing661}@pingan.com.cn **Index Terms**: voice activity detection, personal VAD, speaker verification ## 1 Introduction Voice activity detection (VAD) is a task that identifies whether human speech is present or absent and is often used upstream of other speech components such as automatic speech recognition (ASR), speaker verification (SV), and speaker diarization (SD). It aims to reduce the impact of non-speech on downstream speech tasks and indirectly improve their performance. However, their goals are different. ASR and SD models need to efficiently and accurately determine the boundary between speech and non-speech to avoid missing content. SV is much more complicated because there are more factors involved. A typical VAD framework is considered to be a gating module that makes a speech/non-speech decision for each frame. Early studies focused on signal and statistical analysis and feature engineering [1, 2, 3, 4]. More recently, the use of conventional deep learning methods, such as convolutional neural networks (CNN) and recurrent neural networks (RNN), has shown significant improvements in detection performance at low signal-to-noise ratios (SNR) [5, 6, 7]. Later, with the introduction of the attention mechanism, the model can automatically compare the characteristics of speech and noise within audio to derive more accurate judgments [8, 9]. Some authors use audiovisual information for VAD detection [10, 11]. Although it improves the performance of SV by VAD to some extent, these frameworks are insufficient because non-target speakers are also identified and retained as speech labels. Since the SV model uses only a single speaker for supervised training and does not consider multiple speakers, this leads to a significant drop in verification performance. Therefore, a target speaker-only VAD framework is required. The personal VAD (PVAD) framework solves this problem by extending the traditional VAD to recognize only the target speaker part and ignore the non-target part. The feasibility of PVAD has been demonstrated in several studies. The author of [12, 13] proposed and improved the concept of PVAD to solve the problem of "always running" models on devices. [14, 15] extends the PVAD to make it easier to use in ASR applications. 
Moreover, the concept of PVAD has also been applied to the task of speech enhancement as a secondary task to improve the performance of separation [16, 17, 18]. However, these methods were not applicable to SV. Although the author of [19, 20, 21, 22, 23] tries to employ SD to recognize the speech of different speakers and then find the target speaker by some rules, this type of framework is inefficient for downstream speech tasks due to the complexity of its process. Traditional PVAD frameworks are inadequate for SV tasks for several reasons: (1) Traditional PVAD models are trained with frame-by-frame supervision based on human-assigned or ASR forced alignment labels. However, not every frame predicted by the VAD model has a positive impact on the SV model. In practice, segments identified as speech by traditional VAD are sometimes associated with low SNR or multiple speakers talking simultaneously, which can severely degrade the performance of SV; (2) These labels are usually set as hard labels. However, soft labels are more suitable for SV scenarios because different speech segments contribute differently; (3) In SV, the VAD model faces a more complex situation and it will be more challenging to take all factors into account. To address these issues, we have made the following contributions: (1) We propose a speaker verification-based voice activity detection (SVVAD) framework, which manipulates speech features using FiLM [24] according to their relevance to the SV model; (2) We propose a novel label-free training method that uses triplet-like losses to avoid the performance degradation caused by inaccurate human labeling; (3) Extensive experiments demonstrate that SVVAD achieves significant improvements over the baseline model in terms of equal error rate (EER) under various conditions where other speakers are mixed at different ratios, and that the model-generated VAD decision boundary is highly consistent with human judgment. ## 2 Methodology ### Recap of Personal VAD Model and Motivation To show the advancement of our proposed voice activity detection (VAD) framework, we need to discuss previous approaches. In the conventional personal VAD (PVAD) frame
2309.03624
Navigating Homogeneous Paths through Amyloidogenic and Non-Amyloidogenic Hexapeptides
Hexapeptides are increasingly applied as model systems for studying the amyloidogenecity properties of oligo- and polypeptides. It is possible to construct 64 million different hexapeptides from the twenty proteinogenic amino acid residues. Today's experimental amyloid databases contain only a fraction of these annotated hexapeptides. For labeling all the possible hexapeptides as "amyloidogenic" or "non-amyloidogenic" there exist several computational predictors with good accuracies. It may be of interest to define and study a simple graph structure on the 64 million hexapeptides as nodes when two hexapeptides are connected by an edge if they differ by only a single residue. For example, in this graph, HIKKLM is connected to AIKKLM, or HIKKNM, or HIKKLC, but it is not connected with an edge to VVKKLM or HIKNPM. In the present contribution, we consider our previously published artificial intelligence-based tool, the Budapest Amyloid Predictor (BAP for short), and demonstrate a spectacular property of this predictor in the graph defined above. We show that for any two hexapeptides predicted to be "amyloidogenic" by the BAP predictor, there exists an easily constructible path of length at most 6 that passes through neighboring hexapeptides all predicted to be "amyloidogenic" by BAP. For example, the predicted amyloidogenic ILVWIW and FWLCYL hexapeptides can be connected through the length-6 path ILVWIW-IWVWIW-IWVCIW-IWVCIL-FWVCIL-FWLCIL-FWLCYL in such a way that the neighbors differ in exactly one residue, and all hexapeptides on the path are predicted to be amyloidogenic by BAP. The symmetric statement also holds for non-amyloidogenic hexapeptides. It is noted that the mentioned property of the Budapest Amyloid Predictor \url{https://pitgroup.org/bap} is not proprietary; it is also true for any linear Support Vector Machine (SVM)-based predictors.
Laszlo Keresztes, Evelin Szogi, Balint Varga, Viktor Farkas, Andras Perczel, Vince Grolmusz
2023-09-07T10:34:41Z
http://arxiv.org/abs/2309.03624v1
# Navigating Homogeneous Paths through Amyloidogenic and Non-Amyloidogenic Hexapeptides ###### Abstract Hexapeptides are increasingly applied as model systems for studying the amyloidogenecity properties of oligo- and polypeptides. It is possible to construct 64 million different hexapeptides from the twenty proteinogenic amino acid residues. Today's experimental amyloid databases contain only a fraction of these annotated hexapeptides. For labeling all the possible hexapeptides as "amyloidogenic" or "non-amyloidogenic" there exist several computational predictors with good accuracies. It may be of interest to define and study a simple graph structure on the 64 million hexapeptides as nodes, where two hexapeptides are connected by an edge if they differ by only a single residue. For example, in this graph, HIKKLM is connected to AIKKLM, or HIKKNM, or HIKKLC, but it is not connected with an edge to VVKKLM or HIKNPM. In the present contribution, we consider our previously published artificial intelligence-based tool, the Budapest Amyloid Predictor (BAP for short), and demonstrate a spectacular property of this predictor in the graph defined above. We show that for any two hexapeptides predicted to be "amyloidogenic" by the BAP predictor, there exists an easily constructible path of length at most 6 that passes through neighboring hexapeptides all predicted to be "amyloidogenic" by BAP. For example, the predicted amyloidogenic ILVWIW and FWLCYL hexapeptides can be connected through the length-6 path ILVWIW-IWVWIW-IWVCIW-IWVCIL-FWVCIL-FWLCIL-FWLCYL in such a way that the neighbors differ in exactly one residue, and all hexapeptides on the path are predicted to be amyloidogenic by BAP. The symmetric statement also holds true for predicted non-amyloidogenic hexapeptides: for any such pair, there exists a path of length at most 6, traversing only predicted non-amyloidogenic hexapeptides. It is noted that the mentioned property of the Budapest Amyloid Predictor [https://pitgroup.org/bap](https://pitgroup.org/bap) is not proprietary; it is also true for any linear Support Vector Machine (SVM)-based predictor and, therefore, for any future improvement of BAP using the linear SVM prediction technique. ## Introduction Amyloids are misfolded proteins with a well-defined parallel and/or antiparallel repeating \(\beta\)-sheet structure [1; 2]. Numerous globular proteins can turn into amyloids in certain physical or chemical environments [2]. While amyloids are most frequently mentioned in the context of human diseases [3], they can also be functional building blocks in healthy human tissues [4] or can serve as prospective anti-viral agents [5]. In the last decade, hexapeptides have become a popular class of molecules for modeling and studying protein amyloid formation: these short peptides are simple enough to be studied in a variety of _in vitro_ and _in silico_ systems, yet complex enough to show characteristic amyloid formation changes in numerous studies. Because of their applicability as model systems, experimental data have been collected on hundreds of hexapeptides in relation to their amyloidogenic properties. The creators of the Waltz database [6; 7] published 1415 hexapeptides, of which 514 were experimentally labeled as "amyloidogenic" and 901 as "non-amyloidogenic". By applying the labeled molecules from the Waltz database for training an artificial intelligence tool, our research group has prepared a support vector machine [8] (SVM)-based tool for amyloidogenecity prediction for hexapeptides [9].
Our tool, called the Budapest Amyloid Predictor (BAP), is publicly available at [https://pitgroup.org/bap](https://pitgroup.org/bap). We have shown in [9] that the accuracy of the BAP predictor is 84 % (and the further quality measures are TPR=0.75, TNR=0.9, PPV=0.8, NPV=0.86; (that is, true positive ratio, true negative ratio, positive predictive value, negative predictive value, resp.). A recent review of published amyloid-predictors [10] lists, among others, Zyggregator [11], AGGRESCAN [12], netCSSP [13], APPNN [14]. Our BAP has the same or better accuracy as the predictors listed in [10], as it was shown in [9]. The BAP predictor is based on a linear Support Vector Machine (SVM) [8]. SVM-based predictors have a much more transparent structure than other artificial intelligence predictors, and this transparency leads to very strong applications. Generally, it is difficult to explain the intrinsic "reason" by which a deep neural network predictor makes a decision or to describe those attributes of the input that lead to a given classification by the network. The transparent structure of the SVM predictor BAP [9] was exploited in our work [15]; where we have identified patterns, describing amyloid-forming hexapeptides very succinctly. For example, we have shown that for any substitution with the 20 proteogenic amino-acids for positions denoted by \(x\), all the patterns CxFLWx, FxFLFx, or xxIVIV are predicted amyloidogenic, and all the patterns PxDxxx, xxKxEx, and xxPQxx are predicted non-amyloidogenic. We note that any pattern with two x's describes \(20^{2}=400\) hexapeptides, and patterns with four x's describe \(20^{4}=160,000\) hexapeptides. In [15] we have described all such patterns, and also amyloidogenic patterns with restricted choices for the positions of \(x\), where the residues were allowed to be selected from polar, non-polar or hydrophobic subsets of the 20 proteogenic amino acids. We note that the transparent structure of the Support Vector Machines made it possible to identify different patterns in [15, 16, 17]. In the present contribution, we exploit further the transparent structure of the predictor BAP [https://pitgroup.org/bap](https://pitgroup.org/bap). Suppose we want to find a path from a hexapeptide \(x\) to another hexapeptide \(x^{\prime}\) through different hexapeptides, such that in each step, we can move from one hexapeptide to another with exactly one different residue position. Note on the terminology: When a sequence of reactions is studied, then the "pathway" term is used generally. We apply here the graph-theoretical, more abstract "path" term, since we work in the present contribution on a graph. For example, we want to find a path from hexapeptide ILVWIW to hexapeptide FWLCYL, through six-tuples, differing in exactly one residue. An obvious path is generated by changing the amino acids one-by-one from left to right, starting from ILVWIW and finishing at FWLCYL as follows: ILVWIW-FLVWIW-FWVWIW-FWLWIW-FWLCIW-FWLCYW-FWLCYYL These paths from one-by-one residue-exchanges can be of interest in peptide synthesis design or following a sequence of point mutations of peptides or protein sequences and measuring or modeling the change of their subsequent chemical or biological properties when only one residue is altered in one step. Analyzing the effects of subsequent point mutations was done in the literature in the past decades. 
In [18], three different biologically active peptides were transformed into each other by subsequent single amino acid substitutions, and the intermediaries were analyzed for activity. The authors of [18] called the paths formed from the subsequent point-mutated peptides "evolutionary transition pathways". Paths of one-by-one residue exchanges can be interesting, which connect two predicted amyloidogenic hexapeptides and go through amyloidogenic hexapeptides only. Similarly, we may want to design paths between the non-amyloidogenic hexapeptides A and B, along which only one residue is changed in each step and which goes through non-amyloidogenic intermediaries only. In the present contribution, we show the following results for the BAP predictor: * - All predicted amyloidogenic pairs of hexapeptides, \(x\) and \(x^{\prime}\) can be connected by one-by-one exchanged residue-paths of length at most 6, such that the whole path contains only predicted amyloidogenic intermediaries. Moreover, the path can be computed easily. * - All predicted non-amyloidogenic pairs of hexapeptides, \(x\) and \(x^{\prime}\) can be connected by one-by-one exchanged residue-paths of length at most 6, such that the whole path contains only predicted non-amyloidogenic intermediaries. Moreover, the path can be computed easily. We also show that the same results hold for other linear-SVM-based predictors, and not only for our BAP predictor described in [9]. We remark that in the case of non-SVM based predictors, it may happen that two predicted amyloidogenic sequences cannot be connected by entirely amyloidogenic paths of _any_ length and the same holds for the non-amyloidogenic case, too. For example, in non-SVM-based predictors, it may happen that all the neighbors of an amyloidogenic peptide A are predicted to be non-amyloidogenic; consequently, A cannot be connected by an entirely amyloidogenic path to any other amyloidogenic peptide. We also remark that we do not state anything on paths connecting amyloidogenic hexapeptides with non-amyloidogenic ones. ## Methods Here, we first formalize our problem setting and solution, and then we will make some remarks on possible generalizations. All our definitions and methods or algorithms will be specified for hexapeptide sequences, but they are easily generalizable for shorter or longer amino acid sequences of a given length. First, we define the mutation-graph \(M\) on the hexapeptide sequences: **Definition 1**.: _The vertices of the mutation-graph \(M\) are the \(20^{6}=64\) million hexapeptides formed from the 20 proteogenic amino acids. The vertices are referred to using their length-6 amino acid sequences with the one-letter codes. Two vertices of \(M\) are connected by an edge if they differ in exactly one amino acid in the same position._ **Example 1**.: _Node ILVWIW is connected by an edge to ALVWIW, or to IAVWIW, or to ILVWID, but not to IDDWIW._ We note that paths in this graph \(M\) were called "evolutionary transition pathways" in [18]. We simply call them "paths" in \(M\). The length of a path is the number of edges in it. It is easy to see that in each position, we can make 19 different substitutions (the original amino acid can be substituted by any of the remaining 20-1=19 proteogenic amino acids), and since we have six positions, every vertex is connected to 6 \(\times\) 19 = 114 other nodes, which represent exactly 114 hexapeptides. Next, we partition the vertices of \(M\) in two classes: amyloidogenic and non-amyloidogenic. 
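For concreteness, the adjacency relation of Definition 1 is straightforward to enumerate in code. The following short Python sketch is an illustration only (it is not code accompanying the paper); it reproduces the neighbor count of 6 \(\times\) 19 = 114 and the connections of Example 1.

```python
AMINO_ACIDS = "ARNDCQEGHILKMFPSTWYV"   # one-letter codes of the 20 proteinogenic amino acids

def neighbors(hexapeptide: str):
    """All vertices adjacent to `hexapeptide` in the mutation graph M, i.e. every
    sequence obtained by substituting exactly one residue for a different one."""
    out = []
    for pos in range(6):
        for aa in AMINO_ACIDS:
            if aa != hexapeptide[pos]:
                out.append(hexapeptide[:pos] + aa + hexapeptide[pos + 1:])
    return out

nbrs = neighbors("ILVWIW")
print(len(nbrs))                              # 6 x 19 = 114 neighbors, as stated above
print("ALVWIW" in nbrs, "IDDWIW" in nbrs)     # True False, as in Example 1
```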
Each vertex of \(M\) belongs to one and only one of those two classes. The partitioning is done by the Budapest Amyloid Predictor, described in detail in [9]. _The Budapest Amyloid Predictor and the Amyloid Effect Matrix_ Here, we succinctly describe the BAP predictor with the details needed to prove our statement and to show our method for finding the paths that lead entirely within one of the two partition classes. The details of the construction of the Budapest Amyloid Predictor, the evaluation of its correctness, and the comparison with other predictors were described in [9]. The BAP predictor uses a linear Support Vector Machine (SVM) [8] for decisions. A linear SVM computes the sign of the value \[\sum_{i=1}^{n}w_{i}z_{i}+b \tag{1}\] and it makes a decision based on this sign. Here, the coefficients \(w_{1},w_{2},\ldots,w_{n}\) and \(b\) are real numbers computed from the training data, and \(z_{1},z_{2},\ldots,z_{n}\) represent the input values. For example, if for a given input \(z=(z_{1},z_{2},\ldots,z_{n})\) the value of (1) is non-negative, the SVM outputs "yes", otherwise "no". The Budapest Amyloid Predictor [9] (available at [https://pitgroup.org/bap](https://pitgroup.org/bap)) applied the Waltz dataset [6, 7] for training and testing an SVM, where each of the 20 proteogenic amino acids was represented as a (highly redundant) length-553 vector \(Z\), corresponding to 553 properties of AAindex [19]. Therefore, a hexapeptide was represented by six concatenated \(Z\) vectors; their combined length is \(6\times 553=3318=n\). With \(\ell=553\), equation (1) can be written as \[\sum_{i=1}^{6\ell}w_{i}z_{i}+b=\sum_{j=1}^{6}\ \ \sum_{i=(j-1)\ell+1}^{j\ell}w_{i}z_{i}+b \tag{2}\] If the value of (2) is negative (i.e., its sign is \(-1\)), the hexapeptide is predicted to be non-amyloidogenic; if it is positive or 0 (i.e., its sign is 1 or 0), it is predicted to be amyloidogenic. Here, index \(j\) refers to amino acid \(j\) in the hexapeptide, for \(j=1,2,\ldots,6\). Since the \(\ell=553\) values \(z_{i}\) in the \(j^{th}\) inner sum are determined by the \(j^{th}\) amino acid of the hexapeptide, all the possible \(6\times 20=120\) inner sums in (2) (for six positions and 20 amino acids) can be pre-computed. Table 1 lists these pre-computed values: the 6 values of \(j\) correspond to the columns, the amino acids to the rows. In other words, Table 1, which is called the "Amyloid Effect Matrix" in [9], describes the position-dependent contributions of the amino acids to the value of (2). Table 1 facilitates the easy "by hand" computation of sum (2) and, thus, the decision on amyloidogenecity. For example, if we want to make a prediction for YVSTSY, then we need to take the value from column 1, corresponding to Y (i.e., \(-0.23\)), from column 2, corresponding to V (\(-0.14\)), from column 3, corresponding to S (\(-0.41\)), from column 4 in the row of T (\(-0.23\)), from column 5 in the row of S (\(-0.48\)), and from column 6, corresponding to Y (\(-0.15\)), add them up, and add \(b=1.083\) to the sum: \(-0.23-0.14-0.41-0.23-0.48-0.15+1.083=-0.557\); therefore, YVSTSY is predicted to be non-amyloidogenic. One can simply order the amino acids in each position of the hexapeptides according to their contribution to sum (2), as in Table 2.
\begin{table} \begin{tabular}{c r r r r r r} \hline \hline & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline A & -0.26 & -0.32 & -0.27 & -0.14 & -0.43 & -0.22 \\ R & -0.45 & -0.41 & -0.46 & -0.33 & -0.52 & -0.35 \\ N & -0.40 & -0.34 & -0.49 & -0.27 & -0.46 & -0.30 \\ D & -0.49 & -0.43 & -0.56 & -0.41 & -0.56 & -0.36 \\ C & -0.09 & -0.21 & 0.03 & -0.05 & -0.17 & -0.05 \\ Q & -0.37 & -0.30 & -0.36 & -0.34 & -0.48 & -0.32 \\ E & -0.51 & -0.41 & -0.43 & -0.30 & -0.61 & -0.39 \\ G & -0.23 & -0.37 & -0.46 & -0.37 & -0.30 & -0.33 \\ H & -0.32 & -0.26 & -0.26 & -0.30 & -0.35 & -0.25 \\ I & -0.06 & -0.08 & 0.26 & 0.09 & -0.06 & -0.07 \\ L & -0.10 & -0.18 & 0.02 & 0.04 & -0.22 & -0.13 \\ K & -0.39 & -0.45 & -0.51 & -0.35 & -0.59 & -0.32 \\ M & -0.17 & -0.25 & -0.02 & -0.10 & -0.19 & -0.18 \\ F & -0.13 & -0.11 & 0.05 & -0.03 & -0.13 & -0.11 \\ P & -0.56 & -0.38 & -0.56 & -0.51 & -0.42 & -0.45 \\ S & -0.37 & -0.35 & -0.41 & -0.30 & -0.48 & -0.23 \\ T & -0.34 & -0.33 & -0.28 & -0.23 & -0.40 & -0.23 \\ W & -0.17 & -0.17 & -0.09 & -0.06 & -0.12 & -0.16 \\ Y & -0.23 & -0.11 & -0.13 & -0.06 & -0.18 & -0.15 \\ V & -0.05 & -0.14 & 0.19 & 0.14 & -0.19 & 0.01 \\ \hline \hline \end{tabular} \end{table} Table 1: The Amyloid Effect Matrix [9]. The pre-computed values from equation (2) are listed in the rows corresponding to the amino acids. The columns are corresponded to the positions in the hexapeptide. Table 2 has some very practical applications for amyloidogenecity prediction. If we have one predicted amyloidogenic hexapeptide \(x\), then we can easily make numerous other predicted amyloidogenic hexapeptides from \(x\), simply by replacing any amino acid in a given position by one, which is situated left to the original one in its row in Table 2. More exactly, if hexapeptide \(x\) is predicted to be amyloidogenic, and its 3rd amino acid is \(Y\), then \(Y\) can be exchanged to either of I, V, F, C, L, M, or W, the resulting hexapeptide \(x^{\prime}\) will always be predicted to be amyloidogenic. This is true since Table 2 contains the orderings of the amino acids in each position according to their contribution in Table 1, and if we exchange \(Y\) to anything from its left in row 3 of Table 2, then we increase the value of the sum of (2), relative to that of \(x\). Since the value of the sum in the case of \(x\) was positive, its increased value will also be positive, i.e., the decision of the SVM will be "amyloidogenic". Similarly, in the case of a hexapeptide \(n\), predicted to be non-amyloidogenic, if we exchange any amino acids located to the right from the original in Table 2, then the new prediction will also be non-amyloidogenic. For example, if the 4th amino acid of \(n\) is \(E\), and we exchange \(E\) to any of the S, R, Q, K, G, D, P, then the value of sum (2) will be decreased, and, consequently, the prediction will remain non-amyloidogenic. The description of the path constructions will be more convenient by introducing two simple operators, for \(i=1,2,3,4,5,6\), and for any two (not necessarily distinct) amino acids \(X\) and \(X^{\prime}\): \[\mbox{MAX}_{i}(X,X^{\prime})=\mbox{ In row i of Table 2 the leftmost one of }(X,X^{\prime})\] \[\mbox{MIN}_{i}(X,X^{\prime})=\mbox{ In row i of Table 2 the rightmost one of }(X,X^{\prime}).\] If \(X=X^{\prime}\) in any of the two operators, then the output value is \(X=X^{\prime}\). The MAX and MIN terms refer to the amyloidogenecity of the amino acids in position \(i\). 
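The "by hand" evaluation of sum (2) with the Amyloid Effect Matrix is equally easy to script. The sketch below is only an illustration and not the BAP code itself: it hard-codes the four rows of Table 1 needed for the YVSTSY example and the bias term \(b=1.083\) quoted above; a full implementation would simply include all 20 rows.

```python
# Rows of the Amyloid Effect Matrix (Table 1) needed for the worked YVSTSY example;
# a complete implementation would list all 20 amino acid rows.
EFFECT = {
    "Y": [-0.23, -0.11, -0.13, -0.06, -0.18, -0.15],
    "V": [-0.05, -0.14,  0.19,  0.14, -0.19,  0.01],
    "S": [-0.37, -0.35, -0.41, -0.30, -0.48, -0.23],
    "T": [-0.34, -0.33, -0.28, -0.23, -0.40, -0.23],
}
B = 1.083  # bias term b of the linear SVM, as quoted in the text

def amyloid_value(hexapeptide: str) -> float:
    """A(x): the value of sum (2), i.e. the six position-wise Table 1 entries plus b."""
    return sum(EFFECT[aa][pos] for pos, aa in enumerate(hexapeptide)) + B

def predicted_amyloidogenic(hexapeptide: str) -> bool:
    """BAP-style decision: a non-negative value means 'amyloidogenic'."""
    return amyloid_value(hexapeptide) >= 0

print(round(amyloid_value("YVSTSY"), 3))    # -0.557, matching the by-hand computation
print(predicted_amyloidogenic("YVSTSY"))    # False: predicted non-amyloidogenic
```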
[Table 2 (content lost in extraction): for each of the six positions, the 20 amino acids ordered from left to right by decreasing contribution in Table 1, i.e., from the most to the least amyloidogenic residue in that position.] Let us call the value (2) of a hexapeptide \(x\) its amyloidogenecity value, and let us denote it by \(A(x)\). If \(A(x)\geq 0\), then \(x\) is predicted to be amyloidogenic, otherwise non-amyloidogenic. ## Results Here, we show how to connect any two hexapeptides with the same amyloidogenecity prediction by a path of length at most 6 that stays in the same class as its endpoints in graph \(M\), the mutation graph. _Constructing paths through the amyloidogenic hexapeptides_ Suppose we have two hexapeptides, \(x=(X_{1},X_{2},X_{3},X_{4},X_{5},X_{6})\) and \(x^{\prime}=(X^{\prime}_{1},X^{\prime}_{2},X^{\prime}_{3},X^{\prime}_{4},X^{\prime}_{5},X^{\prime}_{6})\), both predicted to be amyloidogenic by BAP. For simplicity, we will call the \(X_{i}\) amino acids the "coordinates" of \(x\). Now we show that there exists an easily constructible path of length at most 6 in graph \(M\), such that all vertices of the path are predicted amyloidogenic. **Case 1** (the easy case): Suppose that \(x^{\prime}\) is "coordinate-wise more amyloidogenic" than \(x\) in the following sense: for all \(i=1,2,3,4,5,6\), either \(X_{i}=X^{\prime}_{i}\), or \(X^{\prime}_{i}\) is situated left from \(X_{i}\) in row \(i\) of Table 2; that is, \(X_{i}\) is less amyloidogenic in position \(i\) than \(X^{\prime}_{i}\). Then, if we change \(X_{i}\) to \(X^{\prime}_{i}\) in position \(i\), for \(i=1,2,3,4,5,6\), we go through a path from \(x\) to \(x^{\prime}\) in graph \(M\) such that the value of \(A\) on the nodes, describing the amyloidogenecity, is monotone increasing. Therefore, all nodes on the path will be predicted to be amyloidogenic. Note that the length of this path is at most 6: when the same amino acid appears in the same coordinate, no change is needed; when every coordinate is different, then 6 changes are needed. Formally: \[A(x)=A(X_{1},X_{2},X_{3},X_{4},X_{5},X_{6})\leq A(X^{\prime}_{1},X_{2},X_{3},X_{4},X_{5},X_{6})\leq\] \[\leq A(X^{\prime}_{1},X^{\prime}_{2},X_{3},X_{4},X_{5},X_{6})\leq\ldots\leq A(X^{\prime}_{1},X^{\prime}_{2},X^{\prime}_{3},X^{\prime}_{4},X^{\prime}_{5},X^{\prime}_{6})=A(x^{\prime})\] **Case 2** (the general case): When the assumptions of Case 1 are not satisfied, we reduce the problem to two applications of the path-finding of Case 1. Our strategy is as follows: * First, we connect node \(x=(X_{1},X_{2},X_{3},X_{4},X_{5},X_{6})\) to node \[x_{MAX}=(\mbox{MAX}_{1}(X_{1},X^{\prime}_{1}),\mbox{MAX}_{2}(X_{2},X^{\prime}_{2}),\mbox{MAX}_{3}(X_{3},X^{\prime}_{3}),\] \[\mbox{MAX}_{4}(X_{4},X^{\prime}_{4}),\mbox{MAX}_{5}(X_{5},X^{\prime}_{5}),\mbox{MAX}_{6}(X_{6},X^{\prime}_{6})),\] exactly as in Case 1, since they satisfy its assumptions. * Second, we connect \(x^{\prime}\) to \(x_{MAX}\), as in Case 1, since they satisfy its assumptions. Now, we detail why in both Step I and Step II the requirements of Case 1 are satisfied. Because of the definition of the MAX\({}_{i}\) operators, each coordinate of \(x_{MAX}\) is situated in Table 2 left of, or equal to, the corresponding coordinate of \(x\) and of \(x^{\prime}\); that is, \(x_{MAX}\) is "coordinate-wise more amyloidogenic" than both \(x\) and \(x^{\prime}\). Consequently, both \(A(x)\leq A(x_{MAX})\) and \(A(x^{\prime})\leq A(x_{MAX})\) hold, so, since \(x\) and \(x^{\prime}\) are amyloidogenic, \(x_{MAX}\) is also predicted amyloidogenic. Therefore, in Step I, the procedure of Case 1 can be applied for connecting \(x\) to \(x_{MAX}\); and in Step II, the procedure of Case 1 can be applied to connect \(x^{\prime}\) to \(x_{MAX}\).
Since the paths are undirected, we take the path \(x\) to \(x_{MAX}\) and further to \(x^{\prime}\). Now we show that the combined length of the path from \(x\) to \(x_{MAX}\), and from \(x_{MAX}\) to \(x^{\prime}\) is at most 6: It is easy to verify that for all \(i\):,MAX\({}_{i}(X_{i},X^{\prime}_{i})\) is either equal to \(X_{i}\) or \(X^{\prime}_{i}\), so if an exchange is needed in Step I in coordinate \(i\), then no change is needed in Step II in coordinate \(i\), and a symmetric remark is also true for Step II and Step I. **Example 2**.: _Let us connect hexapeptides \(x\) =CVFFFF to \(x^{\prime}\) =LYCLCI by a predicted amyloidogenic path. Both \(x\) and \(x^{\prime}\) are predicted amyloidogenic. Case 1 cannot be applied (one can see it easily from Table 2), so we need \(x_{MAX}\) =CYFLFI. So, we first connect \(x\) to \(x_{MAX}\):_ \[CVFFFF-CYFFFF-CYFLFF-CYFLFI\] _Then \(x^{\prime}\) to \(x_{MAX}\):_ \[LYCLCI-CYCLCI-CYFLCI-CYFLFI\] _The full path is:_ \[CVFFFF-CYFFFF-CYFLFF-CYFLFI-CYFLCI-CYCLCI-LYCLCI\] _Constructing paths through the non-amyloidogenic hexapeptides_ The proof of this case is the repetition of the construction above, with the obvious changes. For completeness, we give here the proof. Suppose we have two hexapeptides, \(x=(X_{1},X_{2},X_{3},X_{4},X_{5},X_{6})\) and \(x^{\prime}=(X^{\prime}_{1},X^{\prime}_{2},X^{\prime}_{3},X^{\prime}_{4},X^{ \prime}_{5},X^{\prime}_{6})\), both predicted to be non-amyloidogenic by BAP. We show that there exists an easily constructible path of length at most 6 in graph \(M\), such that all vertices of the path are predicted non-amyloidogenic. **Case 1** (the easy case): Suppose that for all \(i=1,2,3,4,5,6\), either \(X_{i}=X^{\prime}_{i}\), or \(X^{\prime}_{i}\) is situated right from \(X_{i}\) in row \(i\) of Table 2; that is, \(X^{\prime}_{i}\) is less amyloidogenic in position \(i\) than \(X_{i}\). Then if we change \(X_{i}\) to \(X^{\prime}_{i}\) in position \(i\), for \(i=1,2,3,4,5,6\), then we go through a path in graph \(M\) from \(x\) to \(x^{\prime}\) such that the value of \(A\) on the nodes, describing the amyloidogenecity, will be monotone decreasing. Note that the length of this path is at most 6: when the same amino acid appears in the same coordinates, no change is needed; when every coordinate is different, then 6 changes are needed. More formally: \[A(x)=A(X_{1},X_{2},X_{3},X_{4},X_{5},X_{6})\geq A(X_{1}^{\prime},X_{2},X_{3},X_{4 },X_{5},X_{6})\geq\] \[\geq A(X_{1}^{\prime},X_{2}^{\prime},X_{3},X_{4},X_{5},X_{6})\geq\ldots\geq A(X_ {1}^{\prime},X_{2}^{\prime},X_{3}^{\prime},X_{4}^{\prime},X_{5}^{\prime},X_{6} ^{\prime})=A(x^{\prime})\] **Case 2** (the general case): When the assumptions of Case 1 are not satisfied, we reduce the problem to two applications of path-finding in Case 1. Our strategy is as follows: * First, we connect node \(x=(X_{1},X_{2},X_{3},X_{4},X_{5},X_{6})\) to node \[x_{MIN}=(\text{MIN}_{1}(X_{1},X_{1}^{\prime}),\text{MIN}_{2}(X_{2},X_{2}^{ \prime}),\text{MIN}_{3}(X_{3},X_{3}^{\prime}),\] \[\text{MIN}_{4}(X_{4},X_{4}^{\prime}),\text{MIN}_{5}(X_{5},X_{5}^{\prime}), \text{MIN}_{6}(X_{6},X_{6}^{\prime})),\] exactly as in Case 1, since they satisfy the assumptions. * Second, we connect \(x^{\prime}\) to \(x_{MIN}\), as in Case 1, since they satisfy the assumptions. Now, we detail that in both Step I and Step II, the requirements of Case 1 are satisfied. 
Because of the definition of the MIN\({}_{i}\) operators, each coordinate of \(x_{MIN}\) is situated in Table 2 right of, or equal to, the corresponding coordinate of \(x\) and of \(x^{\prime}\); that is, \(x_{MIN}\) is coordinate-wise less amyloidogenic than both \(x\) and \(x^{\prime}\). Consequently, both \(A(x)\geq A(x_{MIN})\) and \(A(x^{\prime})\geq A(x_{MIN})\) hold, so, since \(x\) and \(x^{\prime}\) are non-amyloidogenic, \(x_{MIN}\) is also predicted non-amyloidogenic. Therefore, in Step I, the procedure of Case 1 can be applied for connecting \(x\) to \(x_{MIN}\); and in Step II, the procedure of Case 1 can be applied to connect \(x^{\prime}\) to \(x_{MIN}\). Since the paths are undirected, we take the path from \(x\) to \(x_{MIN}\) and further to \(x^{\prime}\). Now we show that the combined length of the path from \(x\) to \(x_{MIN}\) and from \(x_{MIN}\) to \(x^{\prime}\) is at most 6: it is easy to verify that for all \(i\), MIN\({}_{i}(X_{i},X_{i}^{\prime})\) is either equal to \(X_{i}\) or to \(X_{i}^{\prime}\), so if an exchange is needed in Step I in coordinate \(i\), then no change is needed in Step II in coordinate \(i\), and a symmetric remark is also true for Step II and Step I. ## Conclusions We have shown that linear SVM predictors for peptides have a very transparent structure that can be used to design mutational pathways within the predicted classes. More specifically, we have used the Budapest Amyloid Predictor [9] to partition the 64 million possible hexapeptides into two classes, predicted amyloidogenic and predicted non-amyloidogenic, and we have shown that any two members of the same class can be connected by a mutation pathway of length at most 6 that lies entirely within that class, i.e., amyloidogenic or non-amyloidogenic. For the construction, we used Table 2, defined by the Budapest Amyloid Predictor. The exact same result can be obtained using any other updated version of Table 2, so our results here are not specific to the Budapest Amyloid Predictor. ## Data availability All data are included in the text. ## Funding VG was partially funded by the Ministry of Innovation and Technology of Hungary from the National Research, Development and Innovation Fund, financed under the ELTE TKP 2021-NKTA-62 funding scheme. ## Author Contribution LK, ES, AP, VF and VG initiated the study and evaluated the results, LK and ES constructed the SVM for the prediction, BV constructed the webserver, VG oversaw the work and wrote the first version of the paper; all authors have reviewed the article. AP, VF and VG secured funding. ## Conflicting interest The authors declare no conflicting interests.
2309.15046
Rayleigh-Taylor Unstable Flames: the Coupled Effect of Multiple Perturbations
The Rayleigh-Taylor (RT) instability is important in the fields of aerospace engineering, nuclear physics, and astrophysical research, particularly in studies of Type Ia supernovae. In some applications, the RT instability is complicated by a reaction at the unstable interface. In this paper, we show how this reaction changes the behavior of the RT instability. Using 2D direct numerical simulations (DNS) of Boussinesq premixed flames with a model reaction rate, we show how the flame responds to three types of perturbation: a large amplitude single mode primary perturbation, a smaller amplitude single mode secondary perturbation, and a numerically generated system perturbation with both single mode and multimode components. Early on, the evolution of the flame is dominated by the primary perturbation and, differently from single mode nonreacting RT, the flame propagates as a metastable traveling wave in the form of bubbles separated by cusp-like spikes. However, the lifetime of this traveling wave depends on the properties of the secondary and system perturbations and on the strength of gravity. Once the traveling wave is destabilized, the flame front bubbles rapidly grow to large scales. We identify five distinct flame growth solution types, with the symmetry and properties of each depending on the balance and interactions between the three types of perturbation. In particular, we show that the primary and secondary modes can couple to generate a tertiary mode which ultimately dominates the flow. Depending on the wavenumber of the tertiary mode, the flame may stall, develop coherent pulsations, or even become a metastable traveling wave again, behaviors not seen in nonreacting RT.
Mingxuan Liu, Elizabeth P. Hicks
2023-09-26T16:25:20Z
http://arxiv.org/abs/2309.15046v1
# Rayleigh-Taylor Unstable Flames: the Coupled Effect of Multiple Perturbations ###### Abstract The Rayleigh-Taylor (RT) instability is important in the fields of aerospace engineering, nuclear physics, and astrophysical research, particularly in studies of Type Ia supernovae. In some applications, the RT instability is complicated by a reaction at the unstable interface. In this paper, we show how this reaction changes the behavior of the RT instability. Using 2D direct numerical simulations (DNS) of Boussinesq premixed flames with a model reaction rate, we show how the flame responds to three types of perturbation: a large amplitude single mode primary perturbation, a smaller amplitude single mode secondary perturbation, and a numerically generated system perturbation with both single mode and multimode components. Early on, the evolution of the flame is dominated by the primary perturbation and, differently from single mode nonreacting RT, the flame propagates as a metastable traveling wave in the form of bubbles separated by cusp-like spikes. However, the lifetime of this traveling wave depends on the properties of the secondary and system perturbations and on the strength of gravity. Once the traveling wave is destabilized, the flame front bubbles rapidly grow to large scales. We identify five distinct flame growth solution types, with the symmetry and properties of each depending on the balance and interactions between the three types of perturbation. In particular, we show that the primary and secondary modes can couple to generate a tertiary mode which ultimately dominates the flow. Depending on the wavenumber of the tertiary mode, the flame may stall, develop coherent pulsations, or even become a metastable traveling wave again, behaviors not seen in nonreacting RT. ## I Introduction The Rayleigh-Taylor (RT) instability [1; 2] occurs when a heavy fluid is accelerated into a light fluid. This acceleration may be due to gravity or even the centrifugal force. The RT instability is very well studied [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15] and appears in many places from the Crab Nebula [16; 17] to Earth's ionosphere [18]. In some important applications, there is an additional twist: a reaction at the interface between the heavy and light fluids. For example, the speed of thermonuclear flames in Type Ia supernova is increased by the RT instability [19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32]. Modeling this speed-up correctly is necessary to accurately predict the elemental abundances resulting from the explosion [33; 34; 35; 36; 37; 38]. Here on Earth, engineers seek to improve the efficiency of aviation gas turbine engines by using the RT instability to speed up fuel consumption [39; 40; 41; 42; 43; 44; 45; 46; 47]. For both these applications, accurate models of how the instability speeds up the fuel consumption are important. A close relative of reactive RT is the ablative RT instability [48; 49; 50; 51; 52; 53; 54; 55; 56; 57], which causes significant problems for inertial confinement fusion (ICF) by mixing the cold, dense pellet shell into the fusion fuel, making ignition difficult. However, recent experimental innovations have finally overcome this problem [58; 59; 60]. Adding a reaction to the RT instability modifies its behavior in several known ways. First, the entire mixing layer moves upwards because the reaction actively consumes the denser fuel. 
Second, the reaction burns out small structures on the flame front, stabilizing the RT instability at short wavelengths [61; 62]. Third, the reaction eventually separates the fluid into pure fuel and ash phases divided by a thin reacting layer, even if the RT instability initially proceeds faster than the reaction [27]. This means that the mixing layer will only actually be'mixed' in the horizontally-averaged sense; the interface itself will typically consist of distinct bubbles (ash) and spikes (fuel). However, at a fundamental level, reacting RT is still driven by the same mechanism as nonreacting RT: the conversion of potential energy into kinetic energy. So, researchers who study or design complex systems where RT unstable flames play a fundamental role are faced with an important question: Can knowledge and models of nonreactive RT be applied to RT unstable flames? In this paper, we begin by addressing this question for the simplest scenario: single mode RT. After describing our numerical setup in Section II, we will show in Section III.2 that adding a reaction changes the continuously rising bubbles of single mode RT into a metastable traveling wave solution, in line with previous research [21; 22]. To break the traveling wave, we either add a small secondary perturbation or wait for the numerically generated system perturbation to do the job. In Section III.3, we investigate how the interactions between the primary, secondary, and system perturbations generate different types of behavior as the flame front grows from small to large scales. We will show that the flame can grow symmetrically for a long time, that the growth can "stall" while the flame pulsates, and that the interaction of the primary and secondary creates a third mode that influences the flame's evolution. In Section III.4, we will show that a multimode perturbation is necessary, but not sufficient, for self-similar growth. Finally, we summarize our findings in Section IV and discuss the similarities and differences between reactive RT and nonreactive RT. ## II Numerical Methodology ### Governing Equations & Dimensionless Parameters In this study, we avoided fully compressible combustion simulations, which are computationally expensive, by making use of two major simplifications: the Boussinesq approximation and a model reaction. The Boussinesq approximation assumes subsonic flows with small density variations [63]. For flames, it requires the Atwood number of the system, \[At=\frac{\rho_{0}-\rho_{1}}{\rho_{0}+\rho_{1}}, \tag{1}\] to be much less than 1. Here, \(\rho_{0}\) is the density of the fuel and \(\rho_{1}\) is the density of the ash. Then, the only effect of density variation is in the buoyancy force term of the Navier-Stokes equation. The other terms only involve the constant density of the unburnt fuel, \(\rho_{0}\). By using the Boussinesq approximation, we remove other instabilities (e.g. the Landau-Darrieus instability) from the problem and focus on the effects of the Rayleigh-Taylor instability. The second important simplification is using a model reaction term, \(R(T)\), instead of a realistic chemical reaction network in the advection-reaction-diffusion equation. The reaction progress variable \(T\) tracks the transformation of fluid from unburnt fuel (\(T=0\)) to burnt ashes (\(T=1\)). The reaction progress variable reflects the amount of burned fuel and thus the energy released from the reaction to the flow [64]. 
We adopted \(R(T)=2\gamma T^{2}(1-T)\), a model reaction used in our previous studies of Rayleigh-Taylor unstable flames [29; 30; 32; 65]. This model reaction has a laminar solution with characteristic flame width \(\delta\) and laminar flame speed \(s_{o}\)[66]. Using the thermal diffusivity \(\kappa\) and the laminar reaction rate \(\gamma\), we can construct the laminar flame width \(\delta=\sqrt{\kappa/\gamma}\) and the laminar flame speed \(s_{o}=\sqrt{\gamma\kappa}\). The fluid equations, after being non-dimensionalized by \(s_{o}\) and \(\delta\), are \[\frac{D\mathbf{u}}{Dt}=-\left(\frac{1}{\rho_{0}}\right)\nabla p+GT+Pr\nabla^{2 }\mathbf{u} \tag{2}\] \[\nabla\cdot\mathbf{u}=0 \tag{3}\] \[\frac{DT}{Dt}=\nabla^{2}T+2T^{2}(1-T), \tag{4}\] where \(G\) is the non-dimensionalized gravity, and \(Pr\) is the Prandtl number, defined as \[G=g\left(\frac{\Delta\rho}{\rho_{0}}\right)\frac{\delta}{s_{0}^{2}} \tag{5}\] \[Pr=\frac{\nu}{\kappa}. \tag{6}\] In this study, we allow to \(G\) vary but set \(Pr=1\) for all simulations. ### Numerical Setup To solve the fluid accurately and efficiently, we ran direct numerical simulations using Nek5000, a freely available and open source computational fluid dynamics code [67]. Nek5000 uses the spectral element method, a higher order weighted residuals technique that breaks the computational domain into spectral elements. Within each element, data fields are represented by \(N^{th}\)-order tensor product polynomials. Convergence is exponential with spectral order. The spectral element method minimizes numerical dissipation and dispersion, which makes it a good choice for the longer time simulations required for this study. Nek5000 is well known for implementing the spectral element method using extremely fast, highly-scalable algorithms. Our simulations are in two dimensions. We chose 2D because it is less computationally expensive than 3D, allowing us to probe more parameter space. Unless otherwise noted, our simulations have a physical size of 2048 x 9216 non-dimensionalized units, spanned by 512 x 2304 elements with a spectral order of \(N=9\). Using a tall box ensured that the flame stayed well away from the top and bottom boundaries during its evolution. Periodic boundary conditions were imposed in the \(x\)-direction. The temperature was constrained to be 0 (fuel) at the top boundary and 1 (ash) at the bottom boundary. The flame front starts in the middle of the box (\(y=4608\)) and propagates upwards against gravity. Movies of the temperature field for all simulations are available in the Supplemental Materials [68]. The velocity field was initialized at 0 and was held at 0 on both the top and bottom boundaries. We believe our simulations are spatially well-resolved for several reasons. First, the average resolution (0.444) is smaller than the measured viscous scale, indicating that the simulations capture the energy cascade. Second, there are nearly 10 collocation (grid mesh) points across the flame front, which has a width of 4.394 between the \(T=0.1\) and \(T=0.9\) contours. Third, the flame behavior converges with resolution (see Section III.3). When underresolved, a given choice of simulation parameters may show different solution types at different resolutions. As the resolution is improved, the solution type converges. We have also checked that the simulations are resolved temporally. ### Perturbations In this study, we consider three types of perturbation: a primary perturbation, a secondary perturbation, and a system perturbation. 
The primary perturbation is a single mode sinusoidal perturbation and is dominant because of its relatively large amplitude. The deviation of the primary perturbation from flat is \[h(x)=A_{1}\sin\left(\frac{2\pi k_{1}x}{x_{\rm max}}\right), \tag{7}\] where \(A_{1}\) is the amplitude of the primary perturbation, \(k_{1}\) is the wavenumber, and \(x_{\rm max}=2048\) is the box size. \(A_{1}\) is fixed at 1 for all simulations, and \(k_{1}\) ranges from 128 to 256. The resulting initial temperature profile is \[T(x,y)=0.5-0.5\tanh\left(\frac{y-y_{0}+h(x)}{2}\right), \tag{8}\] where \(y_{0}=4096\) is the initial flame position. The secondary perturbation is another manually imposed single mode perturbation. The initial perturbation becomes \[h(x)=A_{1}\sin\left(\frac{2\pi k_{1}x}{x_{\rm max}}\right)+A_{2}\sin\left(\frac{2\pi k_{2}x}{x_{\rm max}}\right), \tag{9}\] where \(A_{2}\) is the amplitude of the secondary perturbation, and \(k_{2}\) is the wavenumber of the secondary perturbation. The amplitude of the secondary perturbation is usually several orders of magnitude smaller than that of the primary perturbation, but it can still determine the solution type after some period of RT growth. \(A_{2}\) is kept at 0.001 for most simulations, and \(k_{2}\) ranges from 1 to 224. Finally, the system perturbation is generated by an interaction between the primary perturbation and the Nek5000 spectral element mesh. Nek5000 divides the computational domain into spectral elements. The flame front temperature field within each of these elements is approximated by finding the spectral coefficients that minimize the residual error. If the flame front is periodic with one wavelength per spectral element, then the spectral representation will be the same in each element along the flame front. Similarly, if the flame front wavelength is exactly spanned by a whole number of elements, the spectral representation pattern will be the same for each flame front wavelength. In both of these cases, no large scale structure is introduced by numerical errors. On the other hand, if the wavelength is not evenly divided by the spectral element size, then the pattern of spectral coefficients repeats on the smallest scale for which the flame front pattern is exactly divided by a whole number of spectral elements. So, a long wavelength numerical system perturbation arises with a dominant wavenumber of \(k_{\text{sys}}=\text{GCD}\left(k_{1},\mathit{nelx}\right)\), where \(nelx=512\) is the number of elements in the \(x\)-direction and "GCD" is the greatest common divisor. The amplitude of this single mode component of the system perturbation is very small compared with the amplitude of the primary perturbation, and decreases with resolution (see Section III.2). So, we would generally expect the primary perturbation to completely overwhelm the system perturbation. However, we will show in Section III.2 that the primary perturbation stabilizes and the flame becomes a long-lived traveling wave, giving the system perturbation time to grow via the RT instability and ultimately disrupt the flame front. This numerical effect can be eliminated by aligning the bubbles along the flame front perfectly with spectral elements. The system perturbation also contains a multimode "noise" component that exists regardless of alignment and breaks the symmetry of the flame front at later times when the flame front is already metastable or unstable.
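To make the initial condition concrete, the short sketch below (Python/NumPy) evaluates Eqs. (7)-(9) on a plain uniform grid and computes the dominant system-perturbation wavenumber \(k_{\text{sys}}=\text{GCD}(k_{1},\mathit{nelx})\). It is only an illustration of the formulas above: the grid is not the Nek5000 spectral-element mesh, and the particular values of \(k_{1}\) and \(k_{2}\) are example choices.

```python
import numpy as np
from math import gcd

# Box size and a coarse uniform grid (illustration only; the simulations use a
# 512 x 2304 spectral-element mesh of order N = 9, not this grid).
x_max, y_max = 2048.0, 9216.0
nx, ny = 512, 2304
k1, A1 = 128, 1.0          # primary perturbation (Eq. 7)
k2, A2 = 64, 0.001         # secondary perturbation (Eq. 9)
y0 = 4096.0                # initial flame position quoted with Eq. (8)
nelx = 512                 # number of spectral elements in the x-direction

x = np.linspace(0.0, x_max, nx, endpoint=False)
y = np.linspace(0.0, y_max, ny)
X, Y = np.meshgrid(x, y)

# Interface deviation h(x) and initial reaction progress variable T(x, y):
# T -> 1 (ash) far below the front, T -> 0 (fuel) far above it.
h = A1 * np.sin(2.0 * np.pi * k1 * X / x_max) + A2 * np.sin(2.0 * np.pi * k2 * X / x_max)
T = 0.5 - 0.5 * np.tanh((Y - y0 + h) / 2.0)

# Dominant single-mode wavenumber of the numerically generated system perturbation.
k_sys = gcd(k1, nelx)
print(k_sys)   # 128 here, i.e. equal to k1 (k1 divides nelx, so no new long-wavelength
               # mode appears); choosing k1 = 208 instead would give k_sys = 16
```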
## III Results and Analyses In this section, we investigate how the three different perturbations affect the growth of RT unstable flames. We begin by defining two measures of the size of the mixing layer for RT unstable flames: the flame depth and the bubble depth. Next, we explore the Early Stage of flame evolution, when the flame propagates as a traveling wave. After the traveling wave is disrupted, the mixing layer depth grows rapidly during Late Stage evolution. We show how the perturbation types influence growth during this stage, and identify five types of flame growth. Finally, we assess whether or not this Late Stage growth is self-similar. ### Flame Depth and Bubble Depth During the development of the Rayleigh-Taylor instability, distinctive structures known as "bubbles" and "spikes" form on the flame front. Bubbles are lighter ashes moving upwards, while spikes are heavier fuels moving downwards. Studies of nonreactive RT typically seek to quantify the growth of these structures by measuring a position for the top of the bubble layer and a position for the bottom of the spike layer. Since the dividing line between bubbles and spikes stays fixed at the position of the initial interface, these measurements can be easily translated into heights for the bubble and spike mixing layers. We follow a similar strategy, but we must take into account the fact that our "mixing layer" is continually traveling upwards because the flame is consuming fuel. First, we must measure the positions of the top of the bubbles and the bottom of the spikes. We adopt two methods: the point method and the profile method. Both methods use temperature thresholds to define the top and bottom of the flame. We use three pairs of thresholds: \(T=(0.1,0.9)\), \((0.05,0.95)\), and \((0.005,0.995)\). The flame top corresponds to the highest point with temperature above the lower threshold, whereas the flame bottom corresponds to the lowest point with temperature below the upper threshold. The point method identifies a single point that meets these criteria, whereas the profile method first calculates the horizontal average of the temperature field and then identifies the vertical position that satisfies these thresholds. To turn these measurements into mixing layer height measurements, we take two approaches. First, we compute the size of the entire mixing layer by subtracting the spike position from the bubble position. This measurement includes both bubbles and spikes and we call it the "flame depth". The flame depth is an excellent measure of the vertical size of the flame front early on, but it is very noisy in the later stages of flame evolution because the spikes grow and burn out repeatedly, making the depth curve jagged and noisy. Another drawback is that studies of nonreactive RT often consider the growth of the bubbles and spikes separately. So, we also measure a "bubble depth" that excludes the spikes. To do this, we need to divide the flame into bubble and spike sections. There is no unambiguous way to do this, but our approach is to calculate an average position for the flame by integrating the flame speed over time. We consider the structures above this position to be bubbles, and the structures below to be spikes. The bubble depth is then the vertical extent between the top of the bubbles and the average flame position. The bubble depth curve is much smoother than the flame depth curve, and it allows us to make direct comparisons with nonreactive RT studies that measure a bubble height. 
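A minimal sketch of these diagnostics is given below, assuming the temperature field is available as a 2D array `T[j, i]` on an ascending \(y\) grid and that the average flame position `y_avg` has already been obtained by integrating the flame speed over time. The default threshold pair is \((0.1,0.9)\); the other two pairs can be passed in the same way.

```python
import numpy as np

def flame_extent_point(T, y, T_low=0.1, T_high=0.9):
    """Point method: highest point with T above T_low, lowest point with T below T_high."""
    top_rows = np.where((T > T_low).any(axis=1))[0]
    bottom_rows = np.where((T < T_high).any(axis=1))[0]
    return y[top_rows.max()], y[bottom_rows.min()]

def flame_extent_profile(T, y, T_low=0.1, T_high=0.9):
    """Profile method: same thresholds applied to the horizontally averaged temperature."""
    T_bar = T.mean(axis=1)
    return y[np.where(T_bar > T_low)[0].max()], y[np.where(T_bar < T_high)[0].min()]

def depths(T, y, y_avg, extent=flame_extent_point):
    """Return (flame depth, bubble depth): bubbles + spikes, and bubbles only."""
    flame_top, flame_bottom = extent(T, y)
    return flame_top - flame_bottom, flame_top - y_avg
```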
Bubble depth curves determined by both the point and profile methods and using all three threshold sets are shown in Figure 1. We break the flame evolution into two different stages: Early and Late. The Early Stage consists of two substages: the linear growth stage and the First Metastable Stage, representing the initial development and stabilization of the flame front. The Late Stage begins after the disruption of the First Metastable Stage by either the system perturbation or the secondary perturbation and includes all subsequent developments. ### Early Stage Flame Evolution We begin by exploring how the primary, secondary, and system perturbations affect the earliest stages of flame evolution. The flame begins as a simple sine wave perturbed by the primary perturbation. The primary perturbation grows exponentially (see Figure 2, bottom panel) during the linear growth stage [69, 21], but this growth slows as the flame transitions into the nonlinear regime. The less dense ash rises as bubbles and the more dense fuel sinks as spikes. The spikes have a more complex structure when the RT instability is stronger; for example, they may resemble mushrooms (see Figure 2, middle panel). Finally, the fine structure on the spikes burns out and the flame becomes a regular series of rising bubbles separated by sharp cusps (see Figure 2, top panel). We call this flame configuration the "First Metastable Stage." The First Metastable Stage is maintained by a balance between the RT instability of the primary perturbation and burning. The RT instability increases the height of the bubbles, but this increases the sharpness of the cusps between the bubbles. Sharper cusps burn out more quickly [70, 30, 32], and the flame returns to its equilibrium flame depth. So, the flame propagates with a constant speed and shape, that is, as a traveling wave. Vladimirova and Rosner [21] first identified these traveling wave solutions for vertically unconfined RT unstable flames (for the confined cases see Bayliss _et al._[71]) and showed that they are metastable when the horizontal boundary conditions are periodic [22]. However, this type of solution is not unique to RT unstable flames. Burning can balance any instability, and this type of solution has been identified for Landau-Darrieus unstable flames as well [72]. Most properties of the First Metastable Stage depend on the dominant primary perturbation. For example, the number of RT bubbles in the box is equal to the primary wavenumber \(k_{1}\). The flame speed and flame depth depend Figure 1: Flame evolution separated into Early and Late stages. Measurements of the bubble depth were made using both the point and profile methods across the three threshold sets. Thresholds are denoted by numbers 1, 2, and 3 in the legend, representing \(T=(0.1,0.9)\), \((0.05,0.95)\), and \((0.005,0.995)\) respectively. The profile method measurements are labeled by the preceding letter ‘p’ before the number. (A Second Metastable Merging solution, \(k_{1}=128\), \(k_{2}=192\), is used for this example. See Supplemental Material Movie for run 331 [68].) on both \(k_{1}\) and \(G\). RT bubbles travel approximately at a speed \(s\propto\sqrt{\frac{G}{k_{1}}}\) (see [20]), so simulations with smaller \(k_{1}\) have larger, faster bubbles. These basic primary-perturbation-dependent properties of the First Metastable Stage were investigated by Vladimirova and Rosner [21; 22], and our results are qualitatively similar. 
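The next subsection measures how long this traveling-wave stage survives before a weaker perturbation destroys it, using the plateau in the flame-depth time series. One minimal way to extract such a lifetime is sketched below; the settling time and departure tolerance are illustrative choices of ours, not values taken from the paper.

```python
import numpy as np

def metastable_lifetime(t, depth, t_settle=2.0, rel_tol=0.1):
    """Estimate when the flame-depth plateau (First Metastable Stage) is disrupted.

    t, depth : 1D arrays of time and flame depth (point method).
    t_settle : time allowed for the initial transient to finish (illustrative).
    rel_tol  : relative departure from the plateau counted as disruption (illustrative).
    """
    settled = t > t_settle
    plateau = np.median(depth[settled][:50])   # plateau level from the first settled samples
    departed = settled & (np.abs(depth - plateau) > rel_tol * plateau)
    return t[departed][0] if departed.any() else np.inf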
However, the lifetime of the First Metastable Stage depends on \(G\), the wavenumber of the secondary perturbation, and the initial amplitude of the secondary or system perturbation. To investigate these dependencies, we conducted a series of experiments, altering one condition at a time while keeping the others constant. First, we show that increasing \(G\) shortens the lifetime of the First Metastable Stage. Fixing \(k_{1}=128\) and \(k_{2}=64\), we vary gravity: \(G=2,3,4,6,8\). Figure 3 shows the flame depth, which measures the full distance from the tops of the bubbles to the bottoms of the cusps, as a function of time for each simulation. The First Metastable Stage, seen as a plateau in the figure, is shorter when \(G\) is larger. This is consistent with our expectations from both reactive and nonreactive RT; in both cases, the growth rate of a perturbation is proportional to \(G\). Therefore, we expect the secondary perturbation to grow faster and destabilize the traveling wave more quickly at higher \(G\). We also found that the flame speed and flame depth are larger at higher \(G\), consistent with the results of Vladimirova and Rosner [21; 22]. Next, we look at how the properties of a secondary perturbation affect the lifetime of the First Metastable Stage. We begin with the wavenumber of the secondary perturbation \(k_{2}\). Setting \(k_{1}=128\), \(A_{1}=1\), \(A_{2}=0.001\) we vary Figure 2: The Early Stage: the initial perturbation grows exponentially during the linear stage (bottom panel), bubbles and spikes develop (middle panel), finally, the flame stabilizes as a traveling wave that we call the “First Metastable Stage” (top panel). See Supplemental Material Movie for run 355 [68]. \(k_{2}=1,2,3,4,6,8,12,16,32,64,65,66,68,72,80,90,96,152,160,162,164,165,192,208,210,224\) and then measure the lifetime of the First Metastable Stage. Figure 4 shows that the lifetime of the First Metastable Stage is very long (\(t_{\rm life}>50\)) when \(k_{2}\) is small, but rapidly drops as \(k_{2}\) increases, and then varies within a band \(t_{\rm life}=[8.7,15]\). The rapid drop in lifetime is due to the secondary perturbation growing faster as \(k_{2}\) increases. This is consistent with the basic RT linear growth rate scaling, which is proportional to \(\sqrt{k}\). Lifetime stabilization at higher \(k_{2}\) occurs because the growth rate of the secondary perturbation levels off. This happens because the reaction, thermal diffusion, and viscosity all destroy the smaller scale structures of the higher \(k_{2}\) perturbations more effectively, offsetting the \(\sqrt{k}\) growth rate scaling. In addition, the cusped shape of the First Metastable Stage stabilizes the secondary perturbation more when the perturbation's wavelength is low enough to "feel" the corrugations of the First Metastable Stage. So, the lifetime of the First Metastable Stage does depend on \(k_{2}\), but this dependence is strongest when \(k_{2}\) is small. Next, we studied the effect of numerical resolution on the growth time of the secondary perturbation, as measured by the lifetime of the First Metastable Stage. Setting \(k_{1}=128\), \(k_{2}=64\), and \(A_{2}=0.001\), we tested 3 different resolutions: 256\(\times\)1152 elements using spectral order \(N=7\), 512\(\times\)2304 using \(N=9\). Figure 5 shows that the flame depth curves from these simulations overlap perfectly, demonstrating that the growth of the secondary perturbation is not sensitive to resolution. 
On the other hand, the lifetime of the First Metastable Stage is sensitive to the amplitude of the secondary perturbation. Fixing \(k_{1}=128\), \(k_{2}=64\), and the resolution (512\(\times\)2304 elements with \(N=9\)), we vary \(A_{2}=0.001,0.0005,0.0001,0.00005\). Figure 6 shows that smaller secondary perturbations take longer to grow, increasing the lifetime of the First Metastable Stage. Finally, we examined the effect of the numerical resolution on the growth time of the system perturbation. Setting \(k_{1}=208\), we tested 4 different numerical resolutions: 256\(\times\)1152 elements using spectral order \(N=7\), 256\(\times\)1152 using \(N=11\), 512\(\times\)2304 using \(N=7\), and 512\(\times\)2304 using \(N=9\). Without a secondary perturbation, improving the Figure 3: Gravity Experiment. Higher gravity shortens the First Metastable Stage (the plateau). The flame depth is measured with the point method. resolution now increases the lifetime of the First Metastable Stage (see Figure 7). Why? We know from our secondary perturbation experiment that the simulations are resolved enough to capture the flame evolution accurately. We also know that perturbations with smaller amplitudes grow more slowly, increasing the lifetime of the First Metastable Stage. Therefore, we conclude that the amplitude of the system perturbation (which we do not control), must decrease as the resolution is improved. So, the First Metastable Stage lasts for longer when the resolution is improved because the system perturbation has a smaller amplitude and grows more slowly. After an initial growth phase, the flame front takes the form of individual bubbles separated by sharp cusps (Figure 2, top panel). Most properties of this First Metastable Stage are set by the primary perturbation and \(G\). However, the lifetime of this traveling wave depends on how long it takes the secondary or system perturbation to grow and destabilize it. Above, we showed that this lifetime depends on \(G\), the wavenumber of the secondary perturbation, and on the amplitude of the secondary or system perturbation. ### Late Stage Flame Evolution Unlike the straightforward behavior in the Early Stage, the Late Stage of Rayleigh-Taylor burning exhibits a variety of complex solutions, from structured symmetrical evolution to disordered chaotic burning. In this section, we will show how these solution types arise from the interplay between the primary, secondary, and system perturbations. #### iii.3.1 Secondary Perturbation Experiments First, we will consider a set of simulations with secondary perturbations. In these simulations, we remove the single mode component of the system perturbation by choosing a primary wavenumber \(k_{1}=128\) that exactly divides the Figure 4: Secondary Perturbation Wavenumber Experiment. The lifetime of the First Metastable Stage decreases with \(k_{2}\) and then stabilizes. horizontal element number \(nelx=512\) (see Section II.3). Each simulation takes place in a \(2048\times 9216\) domain tiled by \(512\times 2304\) spectral elements of order \(N=9\). The primary perturbation has an amplitude of \(A_{1}=1\) and a secondary perturbation \(k_{2}\) (which is varied) has an amplitude of \(A_{2}=0.001\). Each simulation begins the same way. The dominant primary perturbation grows and then stabilizes into the First Metastable Stage. We've shown in Section III.2 that the lifetime of the First Metastable Stage depends on \(k_{2}\) and that long wavelength secondary perturbations grow very slowly. 
In fact, for \(k_{2}=1,2\) the secondary perturbation grows too slowly to disrupt the First Metastable Stage before the multimode noise kicks in. In this **Chaotic Burning** solution (see Figure 8 for \(k_{2}=1\)), the First Metastable Stage (Figure 8, panel 1) is disrupted by small scale asymmetrical bubble growth (panel 2). Here and there, a few bubbles along the flame front randomly grow, absorbing their neighbors (panels 2-4). This leads to rapid merging of the bubbles and the average bubble size rapidly grows (panels 5-8). Moving upwards in wavenumber, a new solution type emerges at \(k_{2}=3\) (see Supplemental Material Movie for run 352 [68]). **Nearly Symmetric Merging** solutions feature a nearly symmetric breakup of the First Metastable Stage, but small asymmetries are magnified before wavenumber \(\text{GCD}(k_{1},k_{2})\) structures are reached leading to asymmetric bubbles. For \(k_{2}=3\), the secondary mode and multimode perturbations emerge nearly simultaneously. For \(k_{2}=4\), the secondary mode emerges first followed later by multimode noise. The result is four large bubbles, with a slightly asymmetrical distribution of smaller bubbles on top. For \(k_{2}=6\), we see two repeating groups of three large bubbles (six bubbles in total), but the repetition is not quite exact. The flame front remains nearly symmetrical for an extended period of time, but significant asymmetry develops before large \(\text{GCD}(k_{1}=128,k_{2}=6)=2\) structures fully develop. For \(k_{2}=3,4,6\), the secondary and multimode perturbations are in direct competition, but the secondary perturbation becomes more dominant as \(k_{2}\) increases. We also see Nearly Symmetric Merging solutions for \(k_{2}=65,165\), but for a completely different reason. The \(k_{2}=65,165\) perturbations grow much more quickly than the \(k_{2}=3,4,6\) perturbations, outracing the multimode noise system perturbation. However, because \(\text{GCD}(k_{1},k_{2})=1\) the secondary perturbation never aligns with the Figure 5: Secondary Perturbation Resolution Experiment. The lifetime of the First Metastable Stage does not depend on the average resolution. This implies that the resolution of our experiments is adequate to capture flame evolution accurately. The flame depth is measured with the point method. flame front, producing a fundamental asymmetry. For example, Figure 9 shows the \(k_{1}=128\) primary and the \(k_{2}=65\) secondary beating against each other (see panels 2-3), never quite aligning, but producing a nearly symmetrical pattern. The small asymmetries grow (panels 4-7), and the flame evolution eventually becomes entirely asymmetric (panel 8). Here, the asymmetry is created by the interaction between the primary and the secondary, not by the multimode noise. Continuing to higher wavenumbers, we find that for \(k_{2}\geq 8\), the secondary perturbation handily outcompetes the multimode system noise and delays the emergence of asymmetry to late times. For these simulations, the solution type is determined entirely by the interaction between the primary and secondary modes. The wavenumber of the mode produced by this interaction is \(\text{GCD}(k_{1},k_{2})\). This mode effectively divides the domain into \(\text{GCD}(k_{1},k_{2})\) equal subregions and identical evolution takes place within each one. Bubbles are distributed evenly into the subregions and merge until only one bubble remains per subregion. The details of this merger process depend on the value of the GCD. 
When the GCD is low, \(\text{GCD}(k_{1}=128,k_{2})=2,4\) and \(k_{2}\geq 8\), we find the **First Metastable Merging** solution. In this solution type, bubbles continuously merge while keeping remarkable symmetry, despite their vast structures and high flame speeds. For example, Figure 10 (\(k_{2}=210\), GCD=2) shows smaller bubbles (panel 2) merging into larger bubbles (panels 3-7) and eventually forming two massive identical structures (panel 8). Alternatively, depending on the value of \(k_{2}\), merger of the smaller primary bubbles may initially take place on the surface of larger bubble structures that continuously grow. At the end of the merging process, the GCD=2 simulations (\(k_{2}=66,90,162,210\)) form two giant identical structures, while the GCD=4 simulations (\(k_{2}=12,68,164\)) form four. Finally at very late times multimode noise breaks the symmetry and growth continues. At medium GCD (GCD=\(8,16,32\)), the **First Metastable Pulsating** solution appears. Bubbles merge, and then continually pulsate without substantial growth. The flame oscillates horizontally as it burns upwards. Figure 11 shows this solution for \(k_{2}=224\) (GCD=32). After the First Metastable Stage (panel 1) is broken by the secondary perturbation, bubbles merge (panel 2) and until there are 32 of them (panel 3). The bubbles pulsate back and forth Figure 6: Secondary Perturbation Amplitude Experiment. Smaller secondary perturbations take longer to grow, increasing the lifetime of the First Metastable Stage. The flame depth is measured with the point method. horizontally, alternately forming and burning out mushroom-like cusps to the right and then to the left (panel 4). Finally, multimode noise disrupts the pulsating solution (panels 5-6) and merger continues to larger horizontal scales (panel 7). The GCD=32 simulation pulsations (\(k_{2}=32,96,160,224\)) are simple-looking because the \(k=32\) structures aren't large. Going to GCD=16 (\(k_{2}=16,80,208\)) and then GCD=8 (\(k_{2}=8,72,152\)), the final structures are larger and have more complex-looking pulsations. The mechanism driving the First Metastable Pulsating solution is the misalignment between the primary First Metastable Stage and secondary perturbation growing beneath it. When the flame bubble or cusp does not perfectly sit atop the secondary perturbation's crest, horizontal momentum is introduced into the flame front. This causes the bubbles to merge, and the flame wakes to pulsate. For example, consider \(k_{2}=72\) (GCD=8). In this case, each periodic subregion spans 256 unit lengths, hosting 16 primary bubbles and 9 secondary perturbation crests. The odd number of crests cannot evenly distribute across the even number of bubbles, causing a wave mismatch and introducing horizontal momentum. However, this mechanism by itself is not enough to guarantee pulsations - the size of the GCD must be large enough as well. When the GCD is smaller (GCD=\(2,4\)), the resulting wavenumber GCD bubble structures are so large and complex that they don't show coherent pulsations. The GCD=4 simulations do show some nearly oscillatory behavior, but it is during the merger process. The GCD=2 simulations don't show much coherent oscillatory behavior at all. So, to get the coherent pulsations of the First Metastable Pulsating solutions, there must be both a mismatch between the primary metastable stage and the secondary perturbation and the resultant wavenumber GCD structures must be small enough for the pulsations to be coherent. 
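The subregion bookkeeping used in this argument is easy to reproduce. The sketch below simply computes, for a given \((k_{1},k_{2})\) pair, the subregion width and the number of primary bubbles and secondary crests each subregion must host; the \(k_{2}=72\) and \(k_{2}=64\) examples discussed in the text are printed for comparison.

```python
from math import gcd

def subregion_counts(k1, k2, x_max=2048):
    g = gcd(k1, k2)                       # wavenumber of the mode generated by the interaction
    return {"GCD": g,
            "subregion_width": x_max / g,
            "primary_bubbles": k1 // g,   # First Metastable Stage bubbles per subregion
            "secondary_crests": k2 // g}  # secondary perturbation crests per subregion

print(subregion_counts(128, 72))   # GCD=8,  width=256, 16 bubbles vs 9 crests (misaligned)
print(subregion_counts(128, 64))   # GCD=64, width=32,  2 bubbles per single crest (aligned)
```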
The **Second Metastable Merging** solution requires the highest GCD value (GCD=64), which is half of the primary wavenumber. This solution is special, because the primary and secondary interact without introducing horizontal momentum. Figure 12 shows this process for \(k_{2}=64\). Exactly two primary metastable bubbles fit within each secondary perturbation wave, so the primary bubbles will alternate between having their upward velocity reinforced and suppressed (panel 2). The reinforced bubbles engulf the suppressed bubbles (panels 3-4) and a Second Figure 7: System Perturbation Resolution Experiment. Improving the average resolution increases the lifetime of the First Metastable Stage, implying that the amplitude of the system perturbation decreases with improved resolution. The flame depth is measured with the point method. Metastable Stage forms with \(k=64\). The Second Metastable Stage lasts for a considerable amount of time (panels 5-7), gently pulsating due to a shear instability that develops behind the flame front [65]. Finally, multimode noise triggers chaotic burning (panels 8-9). The \(k_{2}=192\) simulation develops similarly, with two primary metastable bubbles per three secondary wavelengths. Again, alternate primary bubbles are reinforced and suppressed by the secondary perturbation. In this subsection, we showed how different Late Stage flame evolution solutions arise from the interplay between the primary perturbation, a secondary perturbation, and multimode noise. Figure 13 summarizes the distribution of the five solutions that we identified in GCD(\(k_{1},k_{2}\)) verses \(k_{2}\) phase space. We found that multimode noise only has the opportunity to introduce asymmetry early in the Late Stage if the secondary perturbation grows slowly. When the wavelength of the secondary is very long (\(k_{2}=1,2\)), the Late Stage is entirely driven by multimode noise and we see the completely asymmetrical Chaotic Burning solution. As the secondary grows faster (\(k_{2}=3,4,6\)), we see competition between the secondary and multimode noise. In these Nearly Symmetric Merging solutions, large scale structures emerge but multimode-driven asymmetries grow and destroy large scale symmetry before wavenumber GCD structures are formed. Once the secondary grows fast enough (\(k_{2}\geq 8\)), the solution type is determined entirely by the interaction between the primary and the secondary, which generates a mode with wavenumber GCD(\(k_{1},k_{2}\)). This mode divides the domain into identically evolving subregions. When there is only one subregion (GCD=1), we see the Nearly Symmetric Merging solution again, but this time the asymmetric evolution is driven by the fundamental asymmetry of the mode, not by multimode noise. When the subregions are large (GCD=\(2,4\)), bubbles merge symmetrically until giant, identical structures form (First Metastable Merging solution). As the subregions shrink (GCD=\(8,16,32\)), the flame evolution simplifies and coherent pulsations develop (First Metastable Pulsating solution). These pulsations are driven by the misalignment of the primary First Metastable Stage and the secondary perturbation. Finally, if the perturbations align because the GCD is half of the primary wavenumber (GCD=64), a Second Metastable Stage emerges (Secondary Metastable Merging solution). In all of these primary+secondary solutions, multimode noise does emerge at very late times and breaks the symmetry of the subregions, causing continued bubble growth. 
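The mapping from the imposed wavenumbers to the observed solution type (Figure 13) can be encoded compactly. The sketch below is simply a restatement of the empirical classification described above for the \(k_{1}=128\) secondary-perturbation runs; it is a summary of what we observed, not a predictive model, and wavenumbers outside the simulated ranges fall through to "unclassified".

```python
from math import gcd

def solution_type(k2, k1=128):
    """Empirical Late Stage classification for the k1 = 128 secondary-perturbation runs."""
    if k2 in (1, 2):
        return "Chaotic Burning"              # multimode noise wins
    if k2 in (3, 4, 6):
        return "Nearly Symmetric Merging"     # secondary competes with multimode noise
    if k2 < 8:
        return "unclassified"                 # not simulated
    g = gcd(k1, k2)                           # wavenumber of the primary/secondary interaction mode
    if g == 1:
        return "Nearly Symmetric Merging"     # primary and secondary never align
    if g in (2, 4):
        return "First Metastable Merging"
    if g in (8, 16, 32):
        return "First Metastable Pulsating"
    if g == k1 // 2:
        return "Second Metastable Merging"    # GCD = 64
    return "unclassified"

for k2 in (1, 4, 65, 210, 224, 64):
    print(k2, solution_type(k2))
```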
#### iii.2.2 System Perturbation Experiments Next, we will consider a set of simulations with no secondary perturbation in order to explore how Late Stage flame evolution is affected by only the primary and system perturbations. The first of these simulations, \(k_{1}=128\), has the same primary mode as the secondary perturbation experiments; this choice of \(k_{1}\) cancels out the single mode component of the system perturbation. Multimode noise dominates, and the resulting Chaotic Burning solution closely resembles the \(k_{2}=1,2\) secondary mode simulations. In the other simulations in the set, \(k_{1}=140,152,160,166,175,192,208\), the choice of primary does not cancel out the single mode system perturbation. Running underresolved simulations with these wavenumbers, we see all of the solution types identified in Section III.3.1. This is because the single mode system perturbation effectively acts as a secondary perturbation with wavenumber \(k_{\rm sys}=\)GCD(\(k_{1}\),nek), producing the solution types (First Metastable Merging, First Metastable Pulsating, Second Metastable Merging) that rely on an interaction between the primary and secondary. As we improve the resolution, the amplitude of the single mode system perturbation decreases, as shown in Section III.2. As a result, the single mode system perturbation doesn't grow to large amplitude quickly enough to outcompete the multimode noise. Our most resolved simulations (\(2048\times 9216\) domain tiled by \(512\times 2304\) spectral elements of order \(N=9\)) show a mixture of Chaotic Burning (\(k_{1}=175,208\)) and Nearly Symmetric Merging (\(k_{1}=140,152,160,166,192\)) solutions. Which of these solution types appears depends on both \(k_{1}\) and on GCD(\(k_{1},k_{\rm sys}\)). If we were to improve the resolution further, we would expect the Nearly Symmetric Merging solution to disappear as the single mode system perturbation loses amplitude. Ultimately, all simulations with only a system perturbation should converge to the chaotic burning solution. In this section, we've shown how the primary perturbation, the secondary perturbation, and the system perturbation interact to produce different Late Stage flame evolution solutions. When the secondary perturbation grows slowly, it is either defeated by multimode noise (Chaotic Burning) or competes with it (Nearly Symmetric Merging). But when the secondary perturbation grows quickly enough, it outcompetes the multimode system noise and the flame evolves identically in wavenumber GCD(\(k_{1},k_{2}\)) subregions (First Metastable Merging, First Metastable Pulsating, Second Metastable Merging). Importantly, even when \(k_{1}\) and \(k_{2}\) are large (short wavelength), the interaction between them can generate low-k (long wavelength) structures that grow quickly and play an important role in the flame's evolution. These types of long wavelength structures can also emerge in underresolved simulations with only a system perturbation because the single mode part of the system perturbation effectively mimics a secondary perturbation. These numerical artifacts disappear as the resolution improves and the solutions converge towards Chaotic Burning. Ultimately, studying the interactions between different desired perturbations requires identifying, understanding, and accounting for the influence of the single mode and multimode noise components of the system perturbation. Figure 8: Chaotic Burning Solution (\(k_{1}=128\), \(k_{2}=1\), GCD=1). See Supplemental Material Movie for run 316 [68]. 
Figure 9: Nearly Symmetric Merging Solution (\(k_{1}=128\), \(k_{2}=65\), GCD=1). See Supplemental Material Movie for run 322 [68]. Figure 10: First Metastable Merging Solution (\(k_{1}=128\), \(k_{2}=210\), GCD=2). See Supplemental Material Movie for run 337 [68]. Figure 11: First Metastable Pulsating Solution (\(k_{1}=128\), \(k_{2}=224\), GCD=32). See Supplemental Material Movie for run 347, [68]. Figure 12: Second Metastable Merging Solution (\(k_{1}=128\), \(k_{2}=64\), GCD=64). See Supplemental Material Movie for run 315, 68. Figure 13: Solution Type Phase Diagram. The solution types are Chaotic Burning (CB), Nearly Symmetric Merging (NSM), First Metastable Merging (FMM), First Metastable Pulsating (FMP), and Second Metastable Merging (SMM). Note that GCD(\(k_{1},k_{2}\)) determines the solution type on its own when \(k_{2}\geq 8\). ### Is Flame Growth Self-Similar? In this section, we compare the bubble depth curves with the well established mixing layer growth model for multimode nonreactive RT. We will show that a multimode component is generally necessary, but not sufficient for self-similar growth. The growth of the mixing layer for nonreactive multimode RT is asymptotically well described by \(h(t)=\alpha\operatorname{At}gt^{2}\)[73, 74, 75, 3]. Fundamentally, this equation says that the conversion of potential energy to kinetic energy by RT plumes causes the region mixed by those plumes to grow self-similarly. An important open question is whether adding a reaction at the RT interface destroys this self-similarity. Our study, which is primarily concerned with the effects of the primary and secondary perturbations, wasn't designed to address this question; however, the fact that multimode noise plays a role in our simulations is an opportunity to take a quick look. We compare two solution types: First Metastable Merging, which is dominated by the primary and secondary modes and evolves symmetrically, and Chaotic Burning, which is dominated by multimode noise and evolves asymmetrically. We ask a simple question: Is the typical bubble depth curve for each solution type smooth or does it show structures like bumps, plateaus, or oscillations? Figure 14 shows the bubble depth curves for the First Metastable Merging solutions with GCD(\(k_{1}=128,k_{2}\)) = 2 (panel a) and GCD=4 (panel b). All of the solutions except for one (\(k_{2}=66\)) show some combination of plateaus, bumps, and oscillations. The flame growth is generally not self-similar and can't be modeled with a power law. Figure 15 shows the bubble depth curves of the Chaotic Burning simulations for both the system (panel a) and the secondary perturbation (panel b) cases. The system simulation curves begin smoothly but three out of four are interrupted by plateaus or bumps. Both of the secondary perturbation curves grow smoothly, but they also diverge from each other, so the same power law solution couldn't apply to both. Overall, we find that the Chaotic Burning solutions are sometimes smooth, but not always. This could be for many reasons. It's possible that the multimode noise simply doesn't have enough modes to trigger true self-similar growth. We expect many modes to be required, both from RT theory [76, 77, 78, 79, 80, 81, 82] and because Bell _et al._[23] studied this problem for thermonuclear flames, and found that the 10 modes weren't enough to cause self-similar growth either with or without burning. 
Another possibility is that Figure 14: First Metastable Merging solution bubble depth curves for GCD=2 (panel a) and GCD=4 (panel b) simulations. Curves, with the exception of \(k_{2}=66\), show bumps, plateaus or oscillations, indicating that bubble growth is not self-similar. The bubble depth is measured with the point method. our domain is too narrow and the flame doesn't have time to become self-similar. This brings to mind the propane-air curved channel experiments of Erdmann _et al._[44], who found that the bubble growth rate exponent increased with time, but maxed out at 1.4, probably due to vertical confinement effects. On the other hand, Sykes _et al._[46] simulated a similar curved channel and found maximum growth rate exponents of \(\sim 2\), both for reacting and nonreacting flows. However, again due to vertical confinement, there was no long-lasting self-similar phase. In our simulation setup, a too small domain could also result in the development of coherent structures and/or result in an inadequate number of bubbles to average over as the flame reaches larger scales. It's also possible that the bubble depth measurement itself is inherently too noisy because the dividing position between the bubbles and spikes can rapidly move forward as the spikes burn out. This effect would be amplified in a too small domain with too few bubbles. Finally, it is possible that RT unstable flames don't grow self-similarly, or that they do, but with a different scaling than \(\propto t^{2}\). Whether or not this is the case will need to be explored with carefully controlled, multimode-dominated simulations in larger domains. However, we do know from this study that symmetrically evolving simulations dominated by the interaction between a primary and a secondary mode generally will not grow self-similarly. ## IV Conclusions In this paper, we've shown how the combination of different types of perturbations affects Rayleigh-Taylor unstable flames. We considered three perturbations: a dominant primary perturbation, a smaller amplitude secondary perturbation, and a system perturbation. The system perturbation is a numerical artifact, and can be broken into two parts: a single mode component and multimode noise. We studied different combinations of these perturbations by analyzing 2D Boussinesq direct numerical simulations of model flames run in Nek5000. We broke the flame's evolution into two parts: the Early Stage and the Late Stage, which are separated by the disruption of the First Metastable Stage. During the Early Stage, all simulations showed the same general behavior. The dominant primary mode quickly grew into a long-lasting metastable traveling wave, made of bubbles separated by sharp cusps. Most properties of this First Metastable Stage, like the number of bubbles, the size of bubbles, and Figure 15: Chaotic Burning solution bubble depth curves for system perturbation (panel a) and secondary perturbation (panel b) simulations. Some simulations have a smooth growth curve, but others don’t. The bubble depth is measured with the point method. the flame speed, are set by the primary mode \(k_{1}\) and \(G\). However, the lifetime of the First Metastable Stage depends on how long it takes for the secondary or system perturbation to grow and destabilize it. This lifetime depends on \(G\) and on the perturbation's wavenumber and amplitude. 
During the Late Stage, more complex interactions between the primary, secondary, and system perturbations lead to a wide variety of solutions, from symmetrical evolution to asymmetric chaotic burning. Which solution appears depends on two main factors: the growth rate of the secondary perturbation and the interaction between the primary and secondary modes. If the secondary perturbation grows slowly (low \(k_{2}\)), multimode system noise either completely or partly breaks the flame front symmetry. But, if the secondary perturbation grows quickly enough to outcompete multimode noise, the flame evolves in identical symmetrical subregions for a long time. The size of these subregions determines the details of the symmetrical evolution, like whether pulsations or even a Second Metastable Stage emerges. Subregion size is set by the interaction between the primary and secondary modes, which generates a third mode with wavenumber \(\mathrm{GCD}(k_{1},k_{2})\). This mode often has a much longer wavelength than either the primary or secondary and it ultimately dominates the flow. Even if there is no secondary perturbation, the single mode part of the system perturbation can act as a secondary perturbation in underresolved simulations, leading to the unexpected appearance of long wavelength structures. These numerical artifacts disappear as the resolution improves; simulations with only a system perturbation converge towards the multimode-dominated Chaotic Burning solution. So, in the Late Stage, solution type is determined by the interaction between the primary and secondary, unless the secondary either grows slowly or isn't present, in which case multimode noise becomes important and breaks the symmetry of the flame's evolution. Comparing our results to nonreactive RT, we see some similarities, but also some intriguing differences. In single mode nonreactive RT, symmetrical bubbles continuously rise and elongate. On the other hand, single mode RT unstable flames are stabilized by burning and propagate as a metastable traveling wave. The bubbles have a fixed shape and don't elongate over time. The mixing layer of nonreactive RT continually grows. On the other hand, we've shown that the bubble depth of an RT unstable flame may effectively stall with a wavenumber of \(\mathrm{GCD}(k_{1},k_{2})\) as the flame pulsates or the flame may even settle into a Second Metastable Stage. Even if growth later resumes, which it does in our simulations, these stalled periods make modeling the bubble depth curve a major challenge. Theoretical models of mode interaction for nonreactive RT typically invoke either the bubble merger mechanism (in which larger bubbles absorb smaller bubbles) [78; 83; 84; 79; 81; 3] or bubble competition (in which successively longer wavelength modes saturate) [85; 86; 76; 82]. Visually, we see bubble merger in some of our simulations, but we've also shown that two modes interact to produce a third mode with wavenumber \(\mathrm{GCD}(k_{1},k_{2})\) and this mode becomes dominant. This means that two short wavelength modes can couple to produce a much longer wavelength tertiary mode, a phenomenon that has also recently been observed for the ablative instability [56]. Does this phenomenon rely on the existence of a stabilizing mechanism, like burning or ablation, or are wavenumber GCD tertiary modes also generated in nonreactive RT? Finally, multimode nonreactive RT and ablative RT [87; 55] both grow self-similarly. 
Whether this is also the case for RT unstable flames will need to be resolved with larger, carefully controlled multimode simulations. **Data Availability Statement**. The supporting data and code for this article are openly available on Zenodo [88]. ###### Acknowledgements. E. Hicks thanks R. Rosner for originally introducing her to Rayleigh-Taylor unstable flames and N. Vladimirova and A. Obabko for introducing her to the Nek5000 code and for providing the original RT unstable flames setup and scripts. She also thanks R. Rosner and N. Vladimirova for interesting discussions that have influenced her thinking over the years and T. Erdmann and J. Sykes for a fascinating discussion on the applications of RT unstable flames to aviation. We are very grateful to P. Fischer, A. Obabko, and the rest of the Nek5000 team for making Nek5000 available and for giving us advice on using it. Thank you to S. Tarzia for proofreading and editing suggestions. This work used the Extreme Science and Engineering Discovery Environment (XSEDE) [89], which is supported by National Science Foundation grant number ACI-1548562. Mingxuan Liu and E. Hicks thank the XSEDE EMPOWER program, supported by National Science Foundation grant number ACI-1548562. This work used resources of the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296. This work used the XSEDE and ACCESS resources Stampede2 and Ranch at the Texas Advanced Computing Center (TACC) through allocation SEE220001. The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC and visualization resources that have contributed to the research results reported within this paper. URL: [http://www.tacc.utexas.edu](http://www.tacc.utexas.edu). This work used the XSEDE and ACCESS resource Expanse at the San Diego Supercomputer Center (SDSC) at the University of California San Diego through allocation SEE220001. Simulations were run using Nek5000 [67]. Simulation visualizations were created using VisIt [90]. Paper plots were created using matplotlib [91; 92] and seaborn [93]. Additional plots were made using Gnuplot [94]. We used Poetry [95] for Python dependency management. Our Python analysis code used the packages pandas [96, 97], NumPy [98], SciPy [99], lmfit [100] and pytest [101].
2309.06984
Ab initio transport calculations: from normal to superconducting current
Applying the Bogoliubov-de Gennes equations with density-functional theory, it is possible to formulate a first-principles description of current-phase relationships in superconducting/normal (magnetic)/superconducting trilayers. Such structures are the basis for the superconducting analog of magnetoresistive random access memory devices (JMRAM). In a recent paper [1] we presented results from the first attempt to formulate such a theory, applied to Nb/Ni/Nb trilayers. In the present work we provide computational details, explaining how to construct the key ingredient (the scattering matrices $S_N$) in the framework of linear muffin-tin orbitals (LMTO).
H. Ness, M. van Schilfgaarde
2023-09-13T14:20:15Z
http://arxiv.org/abs/2309.06984v1
# Ab initio transport calculations: from normal to superconducting current ###### Abstract Applying the Bogoliubov-de Gennes equations with density-functional theory, it is possible to formulate a first-principles description of current-phase relationships in superconducting/normal (magnetic)/superconducting trilayers. Such structures are the basis for the superconducting analog of magnetoresistive random access memory devices (JMRAM). In a recent paper [1] we presented results from the first attempt to formulate such a theory, applied to Nb/Ni/Nb trilayers. In the present work we provide computational details, explaining how to construct the key ingredient (the scattering matrices \(S_{N}\)) in the framework of linear muffin-tin orbitals (LMTO). ## I Introduction In a recent paper [1], we have combined density functional theory and the Bogoliubov-de Gennes equations to form a first-principles approach to the study of transport in magnetic Josephson junctions (MJJ). This method allowed us to predict and explain the properties of realistic MJJs such as the period and decay of the critical current oscillations with the ferromagnet thickness. We applied our methodology to study realistic material stacks of the Nb/Ni/Nb trilayer and established that suppression of supercurrent is an intrinsic property of the junctions, even in the absence of disorder. To determine the supercurrent in a superconductor-normal metal-superconductor (S/N/S) junction from a density-functional approach (which is inherently single-particle), one needs to "decompose" the entire scattering process into different steps. First, we use the Andreev approximation to account for electron-hole scattering processes [2]: the (spin-resolved) Andreev reflection at the left S/N and right N/S interfaces is described by a reflection matrix. Second, we assume that the main contribution to the supercurrent comes from Andreev bound states localized in the junction (in the short junction limit) in the energy window corresponding to the superconducting gap. The energy spectrum of the Andreev bound states can be obtained by solving an equation [2; 3; 4; 5] which basically states the conservation of incoming/outgoing particle fluxes following scattering in the normal state region and Andreev reflections at the left S/N and right N/S interfaces. This equation involves the Andreev reflection matrices (at the left S/N and right N/S interfaces) as well as the scattering matrices \(S_{N}\) for electron- and \(S_{N}^{*}\) for hole-like waves in the central normal N region. These single-particle (normal state) \(S_{N}\) scattering matrices are the objects we obtain from density-functional theory. In the present paper, we show in detail how to obtain such scattering matrices from the Questaal suite, which is an open access electronic structure code based on the LMTO technique [6; 7]. ## II Transport in the normal state The Questaal package [6; 7] calculates the single-particle electronic structure and includes some many-body effect corrections as well. It is based on the LMTO technique. One of Questaal's packages provides the ability to calculate the full non-equilibrium (NE) transport properties of an infinite \(L\)-\(C\)-\(R\) system representing a central region \(C\) cladded by two semi-infinite \(L\) and \(R\) leads [8]. The transport properties are obtained by using the NE Green's functions (GF) formalism [9]. The transport calculations are done with an LMTO basis set in the so-called Atomic Sphere Approximation.
Owing to the finite range of the basis set, the \(L\)-\(C\)-\(R\) system can be described in terms of an infinite stack of principal-layers (PLs) which interact only with their nearest neighboring PLs (Fig. 1). The direction of the electronic current is perpendicular to the PLs, and periodic boundary conditions are used within each PL. The discretization of the surface Brillouin zone of the corresponding PL introduces a set of transverse momenta \(k_{\parallel}\). Before transport calculations are performed, the density of the \(L\)-\(C\)-\(R\) system must be computed self-consistently in a standard DFT framework. Apart from a constant shift, potentials in the semi-infinite \(L\)- and \(R\)- layers are kept frozen at the potential of their respective bulk systems. A dipole must form across the \(C\) region so that the \(L\)- and \(R\)- Fermi levels align. This is generated in the course of the self-consistent cycle, and also determines the shift needed to align the \(L\)- and \(R\)- Fermi levels. The first account of this method was presented in Ref. [10], and the formalism is described in detail in Ref. [8], including its implementation for the non-equilibrium cases. Self-consistency can be obtained in the non-equilibrium case, though it is not important here. The key quantity is the total transmission probability \(T(E,V)\) from which the non-linear current can be obtained with the conventional Landauer-like elastic scattering framework. The linear conductance regime is simply described by \(T(E_{F},V=0)\) taken at the Fermi energy \(E_{F}\) of the \(L\)-\(C\)-\(R\) system at equilibrium. The total transmission probability \(T(E,V)\) is given by \[T(E,V)=\sum_{k_{\parallel},\sigma}w_{k_{\parallel}}T(E,V;k_{\parallel},\sigma)\, \tag{1}\] where \(w_{k_{\parallel}}\) is the weight associated with the transverse momentum \(k_{\parallel}\), and \(\sigma\) is the spin of the electron. The partial transmission probability \(T(E,V;k_{\parallel},\sigma)\) is obtained from NEGF [9] as follows: \[T(E,V;k_{\parallel},\sigma)=\text{Tr}_{C}\left[\Gamma_{LL}(E;k_{\parallel})\ G^{r}_{LR}(E;k_{\parallel})\ \Gamma_{RR}(E;k_{\parallel})\ G^{a}_{RL}(E;k_{\parallel})\right]\, \tag{2}\] where (to simplify the notation, the explicit dependence on spin \(\sigma\) and bias \(V\) has been dropped) the trace is taken over the basis set of the central \(C\) region. The GF \(G^{r/a}\) are the retarded/advanced NEGF of the \(C\) region connected to the leads, i.e. \(G^{r/a}=[(g^{r/a})^{-1}-\Sigma^{r/a}_{LL}-\Sigma^{r/a}_{RR}]^{-1}\), where \(\Sigma^{r/a}_{LL}\) and \(\Sigma^{r/a}_{RR}\) are the corresponding \(L\) and \(R\) lead self-energies respectively (\(g^{r/a}\) is the GF of the disconnected \(C\) region). The quantities \(\Gamma_{LL/RR}\) are the imaginary part of the lead self-energies, \(\Gamma_{LL/RR}=\text{i}(\Sigma^{r}-\Sigma^{a})_{LL/RR}\). Finally, \(G_{LR}\) are the NEGF matrix elements connecting the left-most PL of the \(C\) region to the right-most PL of region \(C\). In order to calculate the supercurrent in the junctions, we actually need the full scattering matrix \(S_{N}\) of the central region. The \(S_{N}\) matrix is built from the transmission (reflection) coefficients between the \(L\) and \(R\) leads and not from the transmission (reflection) probability Eq. (2). This marks one essential difference between describing superconducting and normal transport: \(S_{N}\) has additional information not needed for transmission in the normal state.
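As a point of reference before moving to the mode basis, the sketch below evaluates the trace formula of Eq. (2) for a single \((E,k_{\parallel},\sigma)\) point. It is written in a generic single-particle language for dense placeholder matrices: `H_C` stands in for the energy-dependent \((P-S)\) block of the \(C\) region, `Sigma_L` and `Sigma_R` for the lead self-energies padded to the size of the \(C\) region, and the trace is taken with the full central-region Green's function rather than the \(G_{LR}\)/\(G_{RL}\) blocks; these inputs are illustrative assumptions, not Questaal data structures.

```python
import numpy as np

def transmission(E, H_C, Sigma_L, Sigma_R, eta=1e-8):
    """Tr[Gamma_LL G^r Gamma_RR G^a] for one energy, transverse momentum and spin."""
    n = H_C.shape[0]
    G_r = np.linalg.inv((E + 1j * eta) * np.eye(n) - H_C - Sigma_L - Sigma_R)
    G_a = G_r.conj().T
    Gamma_L = 1j * (Sigma_L - Sigma_L.conj().T)
    Gamma_R = 1j * (Sigma_R - Sigma_R.conj().T)
    return float(np.trace(Gamma_L @ G_r @ Gamma_R @ G_a).real)

# The total probability of Eq. (1) is then the weighted sum over transverse momenta
# and spins: T_total = sum of w_k * transmission(E, H_C(k), Sigma_L(k), Sigma_R(k)).
```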
Hence, we need to apply a transformation to Eq. (2) to be able to extract the transmission coefficients. Instead of folding down the degrees of freedom of the \(L\) and \(R\) leads into a closed form for the lead self-energies and calculating the transmission probability from a trace over the degrees of freedom of the central \(C\) region, we have to unfold these lead degrees of freedom and calculate the transmission probability from a trace over them. The latter can form propagating waves in the \(L\) and \(R\) leads, which are linked by transmission and reflection coefficients as in the original picture of Landauer-like scattering [11]. Therefore we first need to determine the eigenmodes of propagation in the \(L\) and \(R\) leads, and then transform the lead self-energies into the eigenmode basis of these propagating states. ### Eigenmodes of propagation in the leads #### ii.1.1 Bulk GF Deep inside the leads, one can calculate the eigenmodes of propagation by solving a nearest-neighbor (in terms of PLs) tight-binding-like equation for the bulk GF \(g_{p,p^{\prime}}\) \[-S_{0,-1}g_{-1,0}+(P-S)_{0,0}g_{0,0}-S_{0,1}g_{1,0}=1. \tag{3}\] In the LMTO language [7], the quantity \((P-S)_{0,0}\) plays the role of a local energy-dependent Hamiltonian in the PL (with index \(p=0\)), where \(P\) are the so-called potential functions and the structure constant \(S_{p,p^{\prime}}\) couples only adjacent PLs (\(p-p^{\prime}=\pm 1\)). The bulk is translationally invariant in the direction perpendicular to the PLs; hence \(S_{0,-1}=S_{1,0}\) and \(g_{-1,0}=g_{0,1}\). One solves the equation \[-S_{1,0}g_{0,1}+(P-S)_{0,0}g_{0,0}-S_{0,1}g_{1,0}=1 \tag{4}\] by expanding the wavefunction coefficients as a solution \(\mathbf{\alpha}\) of a quadratic equation [12]. This quadratic equation can be recast into a generalized eigenvalue problem \(\mathbf{Ax}=\lambda\mathbf{Bx}\) by introducing a new vector \(\mathbf{\beta}=\lambda\mathbf{\alpha}\) and working in an enlarged (doubled) vector space [12; 13]. The generalized eigenvalue problem is written as: \[\left[\begin{array}{cc}-S_{1,0}&(P-S)_{0,0}\\ 0&S_{0,1}\end{array}\right]\left[\begin{array}{c}\mathbf{\alpha}\\ \mathbf{\beta}\end{array}\right]=\lambda\left[\begin{array}{cc}0&S_{0,1}\\ S_{0,1}&0\end{array}\right]\left[\begin{array}{c}\mathbf{\alpha}\\ \mathbf{\beta}\end{array}\right]\, \tag{5}\] and can be solved from a set of two independent equations: \[-S_{1,0}Pr^{-1}P^{-1}+(P-S)_{0,0}-S_{0,1}PrP^{-1}=0 \tag{6}\] and \[-S_{1,0}Qx^{-1}Q^{-1}+(P-S)_{0,0}-S_{0,1}QxQ^{-1}=0. \tag{7}\] Note that all matrices \(M\equiv S,g,Q,P\) depend on the variables \(E\), \(k_{\parallel}\) and \(\sigma\), i.e. \(M\equiv M(E;k_{\parallel},\sigma)\). The above equations need to be solved for each energy \(E\), each \(k_{\parallel}\) and each spin \(\sigma\). The eigenvalues \(r\) and \(x\) characterize the propagating (or decaying) modes in the bulk. Their meaning becomes clear when written as Bloch-like factors \(e^{\pm ik_{z}a}\), where \(a\) is the characteristic width of the PL and \(k_{z}\) is the (energy dependent) wave number normal to the interface. The columns of the matrices \(P\) and \(Q\) are the corresponding eigenvectors. For the propagating modes, the wave numbers \(k_{z}(E;k_{\parallel},\sigma)\) are real numbers; \(k_{z}(E)\) contains a non-zero imaginary part for the decaying modes. We choose the following convention: \(|r_{i}|\leq 1\) (propagating and decaying modes towards the right) and \(|x_{i}|\geq 1\) (propagating and decaying modes towards the left).
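The doubled-space problem of Eq. (5) can be handed directly to a standard generalized eigensolver. The sketch below does this for dense placeholder blocks standing in for \((P-S)_{0,0}\), \(S_{0,1}\) and \(S_{1,0}\) at one \((E,k_{\parallel},\sigma)\) point; it is a minimal illustration, not the Questaal implementation, and it does not include the degenerate-mode and singular-matrix treatments discussed below.

```python
import numpy as np
from scipy.linalg import eig

def lead_modes(PmS_00, S_01, S_10, tol=1e-8):
    """Solve Eq. (5) and sort the eigenvalues into right- and left-going sets."""
    n = PmS_00.shape[0]
    Z = np.zeros((n, n), dtype=complex)
    A = np.block([[-S_10, PmS_00],
                  [Z,     S_01  ]])
    B = np.block([[Z,     S_01  ],
                  [S_01,  Z     ]])
    lam, vecs = eig(A, B)
    alpha = vecs[:n, :]                          # wavefunction coefficients (upper half)
    going_right = np.abs(lam) <= 1.0 + tol       # |r_i| <= 1: propagating/decaying to the right
    going_left  = np.abs(lam) >= 1.0 - tol       # |x_i| >= 1: propagating/decaying to the left
    return lam, alpha, going_right, going_left
```

The columns of `alpha` selected by `going_right` (`going_left`) then play the role of the matrices \(P\) (\(Q\)), with the corresponding eigenvalues giving \(r\) (\(x\)).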
Figure 1: Schematic representation of the stacking of principal-layers PLs for the calculation of the transport in two-terminal \(L\)-\(C\)-\(R\) junctions. #### ii.1.2 Reformulating the generalized eigenvalue problem One can rewrite the central equations of the previous section in terms of the wavefunctions \(\phi_{p}\) of the PL \(p\) deep inside the leads: \[-S_{1,0}\phi_{p-1}+g_{b}^{-1}\phi_{p}+S_{0,1}\phi_{p+1}=0 \tag{8}\] as the system is translation-invariant in the bulk of the leads. For notational convenience, we use \(g_{b}^{-1}=(P-S)_{0,0}\). By introducing the ratio matrix \(R_{p}^{\leftarrow}=\phi_{p-1}\phi_{p}^{-1}\) and by manipulation of the recursion relation \[\phi_{p}=g_{b}S_{1,0}\phi_{p-1}+g_{b}S_{0,1}\phi_{p+1}\, \tag{9}\] one ends up with the following expression [13] for the ratio matrix \(R_{p+1}^{\leftarrow}\): \[R_{p+1}^{\leftarrow}=\left(1+g_{b}S_{1,0}[1-R_{p}^{\leftarrow}g_{b}S_{1,0}]^{-1}R_{p}^{\leftarrow}\right)g_{b}S_{0,1}. \tag{10}\] As the bulk of the lead is translation-invariant, we can use the generalized Bloch condition \(\phi_{p+1}=\lambda\phi_{p}\) between adjacent PLs (note that this is reminiscent of the relation \(\mathbf{\beta}=\lambda\mathbf{\alpha}\) used in the previous section). The ratio matrix becomes \(R_{p}^{\leftarrow}=\phi_{p-1}\phi_{p}^{-1}=\lambda^{-1}\), and Eq. (10) is another formulation of the quadratic equation [12] discussed previously. However, we have found that this expression can be used as an iterative scheme, i.e. \[\lambda_{i+1}^{-1}=\left(1+g_{b}S_{1,0}[\lambda_{i}-g_{b}S_{1,0}]^{-1}\right)g_{b}S_{0,1} \tag{11}\] to improve (at will) the accuracy (precision) of the eigenvalues \(\lambda\equiv(r,x)\). This is particularly crucial in the cases of degenerate modes that might occur at particular \(E\) and \(k_{\parallel}\). #### ii.1.3 The case of singular matrices We have also implemented the possibility of dealing with singular matrices that may occur in the generalized eigenvalue problem \(\mathbf{A}\mathbf{x}=\lambda\mathbf{B}\mathbf{x}\), which is not easily solvable when one or both of \(\mathbf{A},\mathbf{B}\) are singular. An eigenvalue shift procedure can be used to solve the linear generalized eigenvalue problem when both matrices are singular and when the full eigensystem is required [14]. To do so, one adds the term \(-\alpha\mathbf{B}\mathbf{x}\) on both sides of \(\mathbf{A}\mathbf{x}=\lambda\mathbf{B}\mathbf{x}\). If \(\mathbf{\tilde{A}}=\mathbf{A}-\alpha\mathbf{B}\) is not singular, one can calculate \(\mathbf{\bar{M}}=\mathbf{\tilde{A}}^{-1}\mathbf{B}\) and solve the conventional eigenvalue problem: \[\mathbf{\bar{M}}\mathbf{x}=\frac{1}{\lambda-\alpha}\mathbf{x}. \tag{12}\] A typical eigenvalue \(\gamma\) of \(\mathbf{\bar{M}}\) must be related to one of the \(\lambda\) values according to \(\gamma=1/\mu\) where \(\mu=\lambda-\alpha\), with the corresponding eigenvector unchanged. ### Surface Green's functions We can now build the surface GF of the leads which enters into the definition of the lead self-energies \(\Sigma_{LL/RR}^{r/a}\). The \(L\) and \(R\) surface GFs are obtained from only one part of the eigenmodes, i.e. from the propagating modes and the modes that decay inside the bulk of the corresponding lead [12]. The surface GF of the \(R\) lead is obtained from \[g_{RR}=\left[g_{b}^{-1}-S_{0R}PrP^{-1}\right]^{-1} \tag{13}\] and the \(L\) lead surface GF from \[g_{LL}=\left[g_{b}^{-1}-S_{0L}Qx^{-1}Q^{-1}\right]^{-1}. \tag{14}\] We have used an explicit notation for the leads' structure constant, \(S_{0R}\) and \(S_{0L}\), as the two \(L\) and \(R\) leads need not be identical. In comparison to the bulk case in Sec. II.1.1, we have \(S_{0R}=S_{0,1}\) and \(S_{R0}=S_{1,0}\) for the \(R\) region, and \(S_{0L}=S_{1,0}\) and \(S_{L0}=S_{0,1}\) for the \(L\) region. Note the different ordering of the subscripts of the bulk structure constant for the \(L\) and \(R\) regions: the \(R\) (\(L\)) surface GF is built from modes "propagating" (or decaying) towards the right (left), i.e. towards two different directions.
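A minimal sketch of the iterative refinement of Eq. (11) and of the surface Green's functions of Eqs. (13)-(14) is given below, again for dense placeholder matrices. Here `g_b` is \((P-S)_{0,0}^{-1}\), `lam` is the matrix \(\lambda\) (so that \(R_{p}^{\leftarrow}=\lambda^{-1}\)), and `P_mat`, `r` (`Q_mat`, `x`) are the eigenvector matrices and eigenvalues of the right-going (left-going) modes obtained from the mode decomposition.

```python
import numpy as np

def refine_lambda(lam, g_b, S_10, S_01, n_iter=5):
    """Iterate Eq. (11): lambda_{i+1}^{-1} = (1 + g_b S_10 [lambda_i - g_b S_10]^{-1}) g_b S_01."""
    n = g_b.shape[0]
    for _ in range(n_iter):
        inv_lam = (np.eye(n) + g_b @ S_10 @ np.linalg.inv(lam - g_b @ S_10)) @ g_b @ S_01
        lam = np.linalg.inv(inv_lam)
    return lam

def surface_gf_right(g_b_inv, S_0R, P_mat, r):
    """Eq. (13): g_RR = [g_b^{-1} - S_0R P r P^{-1}]^{-1}."""
    return np.linalg.inv(g_b_inv - S_0R @ P_mat @ np.diag(r) @ np.linalg.inv(P_mat))

def surface_gf_left(g_b_inv, S_0L, Q_mat, x):
    """Eq. (14): g_LL = [g_b^{-1} - S_0L Q x^{-1} Q^{-1}]^{-1}."""
    return np.linalg.inv(g_b_inv - S_0L @ Q_mat @ np.diag(1.0 / x) @ np.linalg.inv(Q_mat))
```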
\tag{14}\] We have used an explicit notation for the leads' structure constant, \(S_{0R}\) and \(S_{0L}\), as the two \(L\) and \(R\) leads need not be identical. In comparison to the bulk case in Sec. II.1.1, we have \(S_{0R}=S_{0,1}\) and \(S_{R0}=S_{1,0}\) for the \(R\) region, and \(S_{0L}=S_{1,0}\) and \(S_{L0}=S_{0,1}\) for the \(L\) region. Note the different ordering of the subscript of the bulk structure constant for the \(L\) and \(R\) regions: the \(R\) (\(L\)) surface GF is built from "propagating" towards two different \(R\) (\(L\)) directions. By introducing Eq. (6) into Eq. (13), and Eq. (7) into Eq. (14), we find that the matrices \(P,P^{-1}\) (\(Q,Q^{-1}\)) are the transformations that diagonalize \(g_{RR}S_{R0}\) (\(g_{LL}S_{L0}\)) respectively. Indeed, we have \[g_{RR}=\left[S_{R0}Pr^{-1}P^{-1}\right]^{-1}\, \tag{15}\] or equivalently \[g_{RR}S_{R0}=PrP^{-1}\, \tag{16}\] and \[g_{LL}S_{L0}=Qx^{-1}Q^{-1}. \tag{17}\] The matrices \(P,P^{-1}\) (\(Q,Q^{-1}\)) perform the change of basis set, from the original basis to the basis of the propagation eigenmodes. ### Transmission coefficients We now proceed with the transformation of the quantities \(\Gamma_{\alpha\alpha}\) into the basis of the propagation eigenmodes. The quantity \(\Gamma_{\alpha\alpha}\) is related to the imaginary part of the lead self-energy \(\Sigma_{\alpha\alpha}\) (\(\alpha\)=\(L,R\)) \[\Gamma_{\alpha\alpha}=S_{0\alpha}\ \mathrm{i}\left[g_{\alpha\alpha}^{r}-g_{ \alpha\alpha}^{a}\right]S_{\alpha 0}. \tag{18}\] Using Eq. (17) and the relation \(S_{0L}g_{LL}^{a}=(g_{LL}^{r}S_{L0})^{\dagger}=(Q^{-1})^{\dagger}(x^{-1})^{ \dagger}Q^{\dagger}\), we find that \[\Gamma_{LL}=\mathrm{i}\left(S_{0L}Qx^{-1}Q^{-1}-(Q^{-1})^{\dagger}(x^{-1})^{ \dagger}Q^{\dagger}S_{L0}\right). \tag{19}\] Introducing the identity \(1=QQ^{-1}\) (\(1=(Q^{-1})^{\dagger}Q^{\dagger}\)) to the right (left) side of the second (first) term in Eq. (19), one ends up with: \[\Gamma_{LL}=(Q^{-1})^{\dagger}v_{L}Q^{-1} \tag{20}\] where \[v_{L}=\mathrm{i}\left[Q^{\dagger}S_{0L}Qx^{-1}-\left(Q^{\dagger}S_{0L}Qx^{-1} \right)^{\dagger}\right]. \tag{21}\] Proceeding similarly for \(\Gamma_{RR}\), we get \[\Gamma_{RR}=(P^{-1})^{\dagger}v_{R}P^{-1} \tag{22}\] where \[v_{R}=\mathrm{i}\left[P^{\dagger}S_{0R}Pr-\left(P^{\dagger}S_{0R}Pr\right)^{ \dagger}\right] \tag{23}\] It is crucial to note that the matrices \(v_{L}\) and \(v_{R}\) correspond to the expectation values of the current operator calculated in the basis set of the eigenmodes [15]. For non-degenerate modes, the diagonal elements \(v_{L,n}\) (\(v_{R,m}\)) correspond to the group velocity \(\partial E/\partial k\) of the propagating mode \(n\) (\(m\)) in the \(L\) (\(R\)) region [13; 15]. For decaying modes the diagonal elements are simply zero. In the case of degeneracy, the velocity matrices \(v_{L}\) and \(v_{R}\) are block diagonal. Then we need to apply a further transformation to get diagonal matrices \(v_{L,R}\to v_{L,R}^{D}\)[15]. Once the velocity matrices \(v_{L}\) and \(v_{R}\) are diagonal, we can write \(v_{L,n}^{D}=\mathrm{sgn}(v_{L,n}^{D})|v_{L,n}^{D}|^{1/2}|v_{L,n}^{D}|^{1/2}\) and \(v_{R,m}^{D}=\mathrm{sgn}(v_{R,m}^{D})|v_{R,m}^{D}|^{1/2}|v_{R,m}^{D}|^{1/2}\), and transform Eq. 
(2) as follows: \[\begin{split}&\mathrm{Tr}_{C}\left[\Gamma_{LL}\ G_{LR}^{r}\ \Gamma_{RR}\ G_{RL}^{a}\right]\\ &=\mathrm{Tr}\left[(Q^{-1})^{\dagger}v_{L}^{D}Q^{-1}g_{LR}^{r}(P^ {-1})^{\dagger}v_{R}^{D}P^{-1}g_{LR}^{a}\right]\\ &=\mathrm{Tr}_{L+R}\left[\ t_{LR}\ t_{LR}^{\dagger}\ \right]\.\end{split} \tag{24}\] The transmission probability is now expressed in terms of transmission coefficients \(t_{n,m}\) linking the propagating modes \(n\) of the \(L\) lead to the propagating modes \(m\) of the \(R\) lead. The transmission coefficients are given by a generalized Fisher-Lee expression [11]: \[\begin{split}& t_{LR}\equiv\\ & t_{n,m}(E;k_{\parallel},\sigma)=\mathrm{i}|v_{L,n}^{D}|^{1/2} \left[Q^{-1}\ g_{LR}^{r}\ (P^{-1})^{\dagger}\right]_{n,m}|v_{R,m}^{D}|^{1/2}\end{split} \tag{25}\] It is important to note that the original Meir and Wingreen expression [9] involves a trace over the degrees of freedom in the \(C\) region (first line in Eq. (24) ), while the Fisher and Lee expression [11] involves a trace over the propagating modes in the \(L\) and \(R\) leads (last line in Eq. (24) ). ### Reflection coefficients In analogy to the transmission probability, we can also define reflection probabilities in the same lead: \[\begin{split}& R_{L}(E)=\mathrm{Tr}_{C}\left[\Gamma_{LL}\ G_{LL}^{r}\ \Gamma_{LL}\ G_{LL}^{a}\right]\\ & R_{R}(E)=\mathrm{Tr}_{C}\left[\Gamma_{RR}\ G_{RR}^{r}\ \Gamma_{RR}\ G_{RR}^{a}\right]\end{split} \tag{26}\] Following the derivations given in the previous section we find (for the \(L\) region): \[\begin{split}&\mathrm{Tr}_{C}\left[\Gamma_{LL}\ G_{LL}^{r}\ \Gamma_{LL}\ G_{LL}^{a}\right]\\ &=\mathrm{Tr}\left[(Q^{-1})^{\dagger}v_{L}Q^{-1}g_{LL}^{r}(Q^{-1}) ^{\dagger}v_{L}Q^{-1}g_{LL}^{a}\right]\\ &=\mathrm{Tr}_{L}\left[r_{LL}(r_{LL})^{\dagger}\right]\end{split} \tag{27}\] The reflection coefficients \(r_{LL}=r_{n,n^{\prime}}\) are now expressed in the basis of the propagating modes \(n\) and \(n^{\prime}\) of the \(L\) lead. Similarly, one can find the reflection probability \(R_{R}(E)=\mathrm{Tr}_{R}\left[\tau_{RR}(r_{RR})^{\dagger}\right]\) from the reflection coefficients \(r_{RR}=r_{m,m^{\prime}}\) expressed in the basis of the propagating modes \(m\) and \(m^{\prime}\) of the \(R\) lead. One should note that the reflection probability, defined in Eq. (26), contains the contributions of both the \(L\) (\(R\)) incoming wave(s) and the reflected waves in the \(L\) (\(R\)) region. In order to obtain the correct reflection coefficients (and proper flux conservation), one needs to suppress the contribution of the incoming wave in the \(n\)-th \(L\) channel (\(m\)-th \(R\) channel) in the \(L\) (\(R\)) lead respectively. Hence the expressions of the reflection coefficients are as follows: \[\begin{split}& r_{LL}=r_{n,n^{\prime}}(E;k_{\parallel},\sigma)=\\ &\mathrm{i}|v_{L,n}^{D}|^{1/2}\ \left[Q^{-1}\ g_{LL}^{r}\ (Q^{-1})^{ \dagger}\right]_{n,n^{\prime}}|v_{L,n^{\prime}}^{D}|^{1/2}-\delta_{nn^{\prime}} \end{split} \tag{28}\] and \[\begin{split}& r_{RR}=r_{m,m^{\prime}}(E;k_{\parallel},\sigma)= \\ &\mathrm{i}|v_{R,m}^{D}|^{1/2}\ \left[P^{-1}\ g_{RR}^{r}\ (P^{-1})^{ \dagger}\right]_{m,m^{\prime}}|v_{R,m^{\prime}}^{D}|^{1/2}-\delta_{mm^{\prime}} \end{split} \tag{29}\] ## III Full scattering matrix and supercurrent The full scattering matrix \(S_{N}\) is built from the reflection coefficients \(r_{LL}\) and \(r_{LL}\), and from the transmission coefficients \(t_{LR}\) and \(t_{RL}\) (\(t_{RL}\) is the transpose of \(t_{LR}\)). 
All quantities are explicitly dependent of \((E;k_{\parallel},\sigma)\). For the calculations of the supercurrent, we construct the normal state scattering matrix \(S_{N}\) as follows: \[S_{N}(E;k_{\parallel})=\left(\begin{array}{cc}\left[\begin{array}{cc}r_{ LL}(\uparrow)&0\\ 0&r_{LL}(\downarrow)\end{array}\right]&\left[\begin{array}{cc}t_{LR}(\uparrow)&0\\ 0&t_{LR}(\downarrow)\end{array}\right]\\ &\left[\begin{array}{cc}t_{RL}(\uparrow)&0\\ 0&t_{RL}(\downarrow)\end{array}\right]&\left[\begin{array}{cc}r_{RR}(\uparrow)&0\\ 0&r_{RR}(\downarrow)\end{array}\right]\end{array}\right) \tag{30}\] The normal state scattering matrix \(S_{N}(E_{F}+\varepsilon;k_{\parallel})\) characterizes electron (particle) transport for a positive energy \(\varepsilon\) above the Fermi level \(E_{F}\). The transport of hole (antiparticle) is given by the time-reserved symmetric \(S_{N}^{*}(E_{F}-\varepsilon;k_{\parallel})\) scattering matrix for (negative) energy \(-\varepsilon\) below \(E_{F}\). As mentioned above, the dc Josephson current in the junction is obtained from the Andreev bound states formed in the junction. The spectrum of the Andreev bound states can be calculated from a scattering matrix formalism[2; 3] In such an approach, the spatial separation of Andreev and normal scattering is the key simplification which allows one to relate the Josephson current directly to the normal-state scattering matrix of the junction. We have successfully applied such an approach in a recent paper[1] to the study of the supercurrent decay and oscillation in magnetic Josephson junctions made of Ni layers connected to two Nb leads. ## IV Application We now provide an illustrative example of our eigenmode approach applied to a simple case. We consider that all PLs in the \(L\)-\(C\)-\(R\) junction are identical, i.e. we study the bulk (equilibrium) transport properties of a "perfect" crystal. In this case, the transmission probability (at a given energy \(E\) and for a given \(k_{\parallel}\) point) is simply given by the number of "real" bands crossing that energy \(E\) at that \(k_{\parallel}\) point, each "real" band corresponding to a propagating mode. We compare the results obtained from the two Meir and Wingreen[9] and Fisher and Lee[11] expressions. Figure 2 shows the transmission probability \(T(E,V;k_{\parallel},\sigma)\) calculated at equilibrium \(V\)=0 and for spin \(\uparrow\) of bulk Co. The unique PL of bulk Co is made of 2 atoms of Co with 9 (_spd_) orbitals, i.e. the size of the matrices corresponding to that PL is 18\(\times\)18 per spin. The calculations performed from the transmission coefficients with only 5 propagating modes provide the very same results as the transmission probability obtained from Eq.(2), as expected. Note that the maximum transmission probability is 5, which is indeed the number of propagating modes we found. We conclude the present paper by the following note: the size (\(N\times N\)) of the velocity matrices \(v_{\alpha}\) is given by the size of the matrices for the structure constant \(S_{0\alpha}\) and for \(g_{b}^{-1}\) in the identical PLs of the \(\alpha=L,R\) leads (this is for each spin and for each value of \(k_{\parallel}\)). We have found that the number of propagating modes (with non zero value of the velocity) is much smaller than \(N\). This reduces considerably the size of the transmission (reflection) coefficient matrices and hence improve the computational performances. For bulk Co, we have seen that \(N=18\) and only 5 propagating modes are present. 
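For readers who prefer to see the bookkeeping of Eq. (30) spelled out, a minimal sketch is given below. The coefficient blocks are assumed to have already been computed in the basis of the propagating lead modes; the helper names are illustrative and not part of any existing code. For the perfect-crystal check above, \(\mathrm{Tr}[t\,t^{\dagger}]\) evaluated this way should simply return the number of propagating modes (5 for bulk Co).

```python
import numpy as np

def spin_block(m_up, m_dn):
    """Place the spin-up and spin-down coefficient matrices on the diagonal."""
    z1 = np.zeros((m_up.shape[0], m_dn.shape[1]), dtype=complex)
    z2 = np.zeros((m_dn.shape[0], m_up.shape[1]), dtype=complex)
    return np.block([[m_up, z1], [z2, m_dn]])

def normal_state_S(r_LL, t_LR, t_RL, r_RR):
    """Assemble S_N(E; k_par) as in Eq. (30) from spin-resolved blocks.
    Each argument is a dict {'up': ..., 'dn': ...} of coefficient matrices."""
    return np.block([[spin_block(r_LL['up'], r_LL['dn']), spin_block(t_LR['up'], t_LR['dn'])],
                     [spin_block(t_RL['up'], t_RL['dn']), spin_block(r_RR['up'], r_RR['dn'])]])

def transmission(t):
    """T = Tr[t t^dagger] (Eq. (24)); for a perfect crystal this equals the
    number of propagating modes at the given energy and k_par."""
    return np.trace(t @ t.conj().T).real
```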
For other examples shown in [1], we get similar trends. In the case of Nb(110)/Ni(111)/Nb(110) junctions, each PL of the \(\alpha=L,R\) leads contains 10 atoms of Nb with 9 (_spd_) orbitals, hence \(N=90\). However, there are at most 25 propagating modes in the corresponding Nb leads. For Nb(110)/Ni(110)/Nb(110) junctions, there are 2 atoms of Nb in each PL, i.e. \(N=18\), and only 7 propagating modes in the leads. For Nb(110)/Fe(111)/Nb(110), there are 3 atoms of Nb in each PL, i.e. \(N=27\), and only 7 propagating modes in the leads [16]. ###### Acknowledgements. HN and MvS acknowledge financial support from Microsoft Station Q via a sponsor agreement between KCL and Microsoft Research. In the late stages of this work MvS was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award # FWP ERW7906. The authors acknowledge fruitful discussions and collaboration with Roman Lutchyn, Ivan Sadovskyy and Andrey Antipov leading to Ref. [1]. Figure 2: Transmission probability \(T(E,V=0;k_{\parallel},\sigma=\uparrow)\) for an energy window around the equilibrium Fermi level \(E_{F}=0\) and for different \(k_{\parallel}\) points (corresponding to different colors). The solid lines are from the Meir and Wingreen expression. The open circles correspond to the Fisher and Lee formula for which only 5 propagating modes are taken into account.
2309.08017
Diffeomorphism covariance of the canonical Barbero-Immirzi-Holst triad theory
The vanishing phase space generator of the full four-dimensional diffeomorphism-related symmetry group in the context of the Barbero-Immirzi-Holst Lagrangian is derived directly for the first time from Noether's second theorem. Its applicability in the construction of classical diffeomorphism invariants is reviewed.
Donald Salisbury
2023-09-14T20:25:02Z
http://arxiv.org/abs/2309.08017v1
# Diffeomorphism covariance of the canonical Barbero-Immirzi-Holst triad theory ###### Abstract The vanishing phase space generator of the full four-dimensional diffeomorphism-related symmetry group in the context of the Barbero-Immirzi-Holst Lagrangian is derived directly for the first time from Noether's second theorem. Its applicability in the construction of classical diffeomorphism invariants is reviewed. ## 1 Introduction What I identify as the Barbero-Immirzi-Holst model serves as a foundation for today's canonical approach to loop quantum gravity. I will derive in this article a new analysis of the underlying four-dimensional spacetime diffeomorphism-related classical canonical symmetry. I will derive the canonical symmetry generators directly from the vanishing charge that follows from Emmy Noether's second theorem, in a manner similar to the first such derivation presented for conventional canonical gravity in [16]. The focus will be on a reformulated ADM approach that incorporates densitized triads. And I will argue that the extension of this analysis to the new triad approach to gravity as proposed by [1], [21], and [15] is almost trivial. As is well known, in order to achieve the results of canonically generated variations of spacetime coordinates it is necessary to supplement the variations of phase space variables under diffeomorphisms with related triad gauge transformations. I conclude with an overview of a technique for introducing intrinsic coordinates as gauge conditions, and employing the full diffeomorphism generator to construct invariant temporal evolution in a manner related to Rovelli's relative observables [14]. This lays the foundations for an eventual application in loop quantum gravity. ## 2 Derivation of canonical Hamiltonian I use the ADM Lagrangian as rewritten using triad variables. \[\mathcal{L}_{ADM}=Nt\left({}^{3}\!R+K_{ab}K^{ab}-(K_{a}^{a})^{2}\right)=Nt\left({}^{3}\!R+K_{ab}e^{ac}e^{bd}K_{cd}-\left(e^{ab}K_{ab}\right)^{2}\right), \tag{2.1}\] where \[K_{ab}=\frac{1}{2N}\left(g_{ab,0}-N^{c}g_{ab,c}-g_{ca}N^{c}_{,b}-g_{cb}N^{c}_{,a}\right)=\frac{1}{2N}\left(g_{ab,0}-2g_{c(a}N^{c}_{\;|b)}\right). \tag{2.2}\] The variable \(t\) is the determinant of the spatial metric \(g_{ab}\), with \(e^{ab}\) its inverse. The variable \(N\) is the lapse while \(N^{a}\) represents the metric shift functions. \({}^{3}\!R\) is the three-dimensional curvature scalar. The first task is to specialize to tetrads with the choice \(E_{0}^{\mu}=n^{\mu}=\delta_{0}^{\mu}N^{-1}-\delta_{a}^{\mu}N^{-1}N^{a}\). This tetrad is orthogonal to the constant time hypersurface. The covariant metric is \[g_{\mu\nu}=\begin{pmatrix}-N^{2}+N^{c}N^{d}g_{cd}&g_{ac}N^{c}\\ g_{bd}N^{d}&g_{ab}\end{pmatrix}, \tag{2.3}\] with the contravariant metric \[g^{\mu\nu}=\begin{pmatrix}-1/N^{2}&N^{a}/N^{2}\\ N^{b}/N^{2}&e^{ab}-N^{a}N^{b}/N^{2}\end{pmatrix}. \tag{2.4}\] We then choose the remaining tetrads to be tangential to the constant time hypersurface. Thus the full set of contravariant tetrads (with the upper index representing the row and the lower index representing the column) is \[E^{\mu}_{I}=\begin{pmatrix}N^{-1}&0\\ -N^{-1}N^{a}&T^{a}_{i}\end{pmatrix} \tag{2.5}\] with the corresponding covariant set \[e^{I}_{\mu}=\begin{pmatrix}N&0\\ t^{i}_{a}N^{a}&t^{i}_{a}\end{pmatrix} \tag{2.6}\] We shall, however, employ as independent triad variables \(\widetilde{T}^{a}_{\,i}:=tT^{a}_{i}\) where \(t:=\det\left(t^{i}_{a}\right)\).
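The tetrad bookkeeping of Eqs. (2.3)-(2.6) is easy to verify numerically. The following sketch is only a sanity check with arbitrarily chosen lapse, shift and co-triad (it is not part of the derivation); it confirms that the contravariant tetrads of Eq. (2.5) reproduce the contravariant metric (2.4) through \(g^{\mu\nu}=\eta^{IJ}E^{\mu}_{I}E^{\nu}_{J}\), and that its inverse is the covariant metric (2.3).

```python
import numpy as np

N = 1.3                                            # lapse
Na = np.array([0.2, -0.1, 0.3])                    # shift N^a
t_ia = np.array([[1.2, 0.3, 0.0],                  # co-triad t^i_a (rows i, columns a)
                 [0.1, 0.9, 0.2],
                 [0.0, 0.4, 1.1]])
T_ai = np.linalg.inv(t_ia)                         # triad T^a_i, with T^a_i t^i_b = delta^a_b

g_ab = t_ia.T @ t_ia                               # spatial metric g_ab = t^i_a t^i_b
e_ab = np.linalg.inv(g_ab)                         # its inverse e^ab

# Contravariant tetrads E^mu_I of Eq. (2.5): E_0 = n^mu, the E_i tangential.
E = np.zeros((4, 4))
E[0, 0] = 1.0 / N
E[1:, 0] = -Na / N
E[1:, 1:] = T_ai

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
g_up = E @ eta @ E.T                               # g^{mu nu} = eta^{IJ} E^mu_I E^nu_J

# Compare with Eq. (2.4) ...
g_up_ref = np.zeros((4, 4))
g_up_ref[0, 0] = -1.0 / N**2
g_up_ref[0, 1:] = g_up_ref[1:, 0] = Na / N**2
g_up_ref[1:, 1:] = e_ab - np.outer(Na, Na) / N**2
assert np.allclose(g_up, g_up_ref)

# ... and check that its inverse is the covariant metric of Eq. (2.3).
g_dn = np.linalg.inv(g_up)
assert np.isclose(g_dn[0, 0], -N**2 + Na @ g_ab @ Na)
assert np.allclose(g_dn[0, 1:], g_ab @ Na)
assert np.allclose(g_dn[1:, 1:], g_ab)
```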
Furthermore, rather than choosing the lapse \(N\) as an independent configuration variable we work with \(\stackrel{{ N}}{{\sim}}:=t^{-1}N\). So for the following we will need \[t_{,\mu}=tt^{i}_{a,\mu}T^{a}_{i}=\left(t^{i}_{a}\tilde{T}^{a}_{i}\right)_{, \mu}-t^{i}_{a}\tilde{T}^{a}_{i,\mu}, \tag{2.7}\] so we find that \[t_{,\mu}=\frac{1}{2}t^{i}_{a}\tilde{T}^{a}_{i,\mu}, \tag{2.8}\] \[t^{i}_{a,\mu}=2t^{-1}l^{i}_{\bar{a}}t^{j}_{b}\tilde{T}^{b}_{j,\mu}, \tag{2.9}\] and \[T^{a}_{i,\mu}=-\frac{1}{2}t^{-2}t^{j}_{b}\tilde{T}^{b}_{j,\mu}\tilde{T}^{a}_{i }+t^{-1}\tilde{T}^{a}_{i,\mu}. \tag{2.10}\] Now define the canonical momentum \[p^{l}_{e} := \frac{\partial\mathcal{L}_{ADM}}{\partial\tilde{T}^{e}_{l,0}}\] \[= 2Nt\left(e^{ac}e^{bd}-e^{ab}e^{cd}\right)K_{cd}\frac{\partial K _{ab}}{\partial\tilde{T}^{e}_{l,0}}.\] So I need \[2Nt\frac{\partial K_{ab}}{\partial\tilde{T}^{e}_{l,0}}=g_{ab}t^{l}_{e}-2t^{l} _{(a}g_{b)e}. \tag{2.12}\] Therefore \[p^{l}_{e}=-2T^{d}_{l}K_{ed}, \tag{2.13}\] from which we deduce that \[p^{i}_{a}t^{i}_{b}=-2K_{ab}. \tag{2.14}\] So I can write the Lagrangian immediately in terms of the canonical momenta. To obtain the canonical Hamiltonian \(\mathcal{H}_{c}\) I must now focus on \(p^{i}_{a}\tilde{T}^{a}_{i,0}\) which I want to write in terms of the momenta. I have \[p^{i}_{a}\tilde{T}^{a}_{i,0}=-2K_{ab}T^{b}_{i}\tilde{T}^{a}_{i,0} \tag{2.15}\] I will rewrite this in terms of derivatives of \(t^{j}_{c}\). So consider first \[\tilde{T}^{a}_{i,0}=\left(tT^{a}_{i}\right)_{,0}=t_{,0}T^{a}_{i}+tT^{a}_{i,0} =tt^{j}_{c,0}T^{c}_{j}T^{a}_{i}-tT^{c}_{i}T^{a}_{j}t^{j}_{c,0}, \tag{2.16}\] and I therefore have \[p_{a}^{i}\tilde{T}_{i,0}^{a}=-tK_{ab}\left(e^{ab}e^{cd}-e^{bc}e^{ad}\right)g_{cd,0} \tag{2.17}\] But \[g_{cd,0}=2NK_{cd}+2g_{e(c}N_{|d}^{e}, \tag{2.18}\] so I conclude finally that \[p_{a}^{i}\tilde{T}_{i,0}^{a}=-2tK_{ab}\left(e^{ab}e^{cd}-e^{bc}e^{ad}\right) \left(NK_{cd}+g_{e(c}N_{|d}^{e}\right) \tag{2.19}\] I thereby obtain the expression for the canonical Hamiltonian \[\mathcal{H}_{c} = p_{a}^{i}\tilde{T}_{i,0}^{a}-\mathcal{L}_{ADM} \tag{2.20}\] \[= \underset{\sim}{N}\left(-{}^{3}\!R+K_{ab}e^{ac}e^{bd}K_{cd}- \left(e^{ab}K_{ab}\right)^{2}\right)+2t\left(-K_{a}^{a}e^{cd}+K^{cd}\right)g_{ ec}N_{|d}^{e}\] For later use I need to rewrite the canonical Hamiltonian in terms of \(p_{a}^{i}\) using \(K_{ab}=-\frac{1}{2}p_{a}^{i}t_{b}^{i}\), which, implies that \[K_{ab}e^{ac}e^{bd}K_{cd}=\frac{1}{4}p_{a}^{i}t_{b}^{i}p_{c}^{j}t_{d}^{j}e^{ac }e^{bd}=p_{a}^{i}p_{b}^{i}e^{ab}, \tag{2.21}\] and \[e^{ab}K_{ab}e^{cd}K_{cd}=\frac{1}{4}p_{a}^{i}T_{i}^{a}p_{b}^{j}T_{j}^{b}. 
\tag{2.22}\] So the canonical Hamitonian becomes \[\mathcal{H}_{c}=\underset{\sim}{N}\left(-{}^{3}\!R+\frac{1}{4}p_{a}^{i}p_{b}^ {i}e^{ab}-\frac{1}{4}p_{a}^{i}T_{i}^{a}p_{b}^{j}T_{j}^{b}\right)+\frac{1}{2} \left(p_{a}^{i}T_{i}^{a}e^{cd}-p_{a}^{i}T_{i}^{d}e^{ac}\right)tg_{e(c}N_{|d)} ^{e} \tag{2.23}\] (It is straightforward to check that this does deliver an almost correct expression for the time rate of change of the densitized triad - lacking, as we shall see shortly, the arbitrary triad gauge rotations), \[\tilde{T}_{l,0}^{e}=\frac{\partial\mathcal{H}_{c}}{\partial p_{e }^{l}} = -2\underset{\sim}{N}\left(e^{ac}e^{bd}-e^{ab}e^{cd}\right)K_{cd} \frac{1}{2}\delta_{a}^{e}t_{b}^{l}+tg_{f(c}N_{|d)}^{f}\left(e^{ab}e^{cd}-e^{ ac}e^{bd}\right)\frac{1}{2}\delta_{a}^{e}t_{b}^{l} \tag{2.24}\] \[= -\underset{\sim}{N}\left(e^{ec}T_{l}^{d}-e^{eb}e^{cd}\right)K_{cd }+\frac{1}{2}tg_{f(c}N_{|d)}^{f}\left(T_{l}^{e}e^{cd}-e^{ec}T_{l}^{d}\right)\] It is important to recognize here that the ADM Lagrangian does not depend on the antisymmetrized linear combination of velocities \(\tilde{T}^{a[i}\underset{\sim}{\mathcal{L}}^{j]}\), and as a consequence we will obtain a corresponding primary constraint, with a corresponding addition to the Hamiltonian generator of time evolution. Rosenfeld had indeed in [Rosenfeld, 1930][Rosenfeld, 2017] considered a tetrad version of general relativity in which analogous constraints appeared and, although he did not explicitly construct the corresponding extended Hamiltonian, it was shown in [Salisbury and Sundermeyer, 2017] that he could easily have applied his new techniques to do so. I will next derive the relevant primary constraint by applying Noether's second theorem. ## 3 Noether charges First there is a vanishing charge that arises from the invariance of the ADM action under triad rotations \[\delta_{\eta}T_{i}^{a}=\epsilon^{ijk}\tilde{T}_{j}^{a}\eta_{k}, \tag{3.1}\] where the \(\eta_{k}\) are arbitrary spacetime functions. Following Noether's second theorem, conserved charge arises as follows. The variation of the action is \[0=\delta_{\eta}\int d^{4}\!x\mathcal{L}_{ADM}=\int d^{4}\!x\left[\left(\frac{ \delta\mathcal{L}_{ADM}}{\delta\tilde{T}_{i}^{a}}\right)\delta_{\eta}\tilde{T} _{i}^{a}+\left(\frac{\partial\mathcal{L}_{ADM}}{\partial\tilde{T}_{j,\mu}^{a} }\epsilon^{ijk}\tilde{T}_{j}^{a}\eta_{k}\right)_{,\mu}\right] \tag{3.2}\] When the field equations are satisfied we thus obtain, letting the variations vanish at spatial infinity, the conserved charge \[C_{\eta}=\int d^{3}xp_{a}^{i}\epsilon^{ijk}\tilde{T}_{j}^{a}\eta_{k}. \tag{3.3}\] But since \(\eta_{k}\) can vary arbitrarily with time we deduce the existence of constraints \[0=\mathcal{H}^{k}:=\epsilon^{ijk}p_{a}^{i}\tilde{T}_{j}^{a}. \tag{3.4}\] The additional constraints that arise from the invariance of the action under spacetime diffeomorphisms will require a bit more work to derive. I will derive the vanishing Noether charge diffeomorphism-related generator following the procedure that was applied in the conventional metric case in (Salisbury et al., 2022). It should be noted here that this procedure was applied to tetrad-based general relativity by Rosenfeld in 1930. And as observed in (Salisbury and Sundermeyer, 2017) he did not complete the derivation of the canonical generators that I will shortly find, very likely because he recognized that he could not express them exclusively in terms of canonical variables. 
In other words he did not recognize, as first observed in (Pons et al., 1997), that the variations were not projectable under the Legendre transformation to phase space. Under an infinitesimal diffeomorphism \(x^{\prime\mu}=x^{\mu}-\epsilon^{\mu}\), the scalar density \(\mathcal{L}_{ADM}\) transforms as1 Footnote 1: A major advantage in employing the ADM Lagrangian is that it does vary as a Lagrangian density, assuming only that variations at spatial infinity vanish. See (Kiefer, 2012), p. 119 and (Danieli, 2020) \[\bar{\delta}\mathcal{L}_{ADM}=\left(\mathcal{L}_{ADM}\epsilon^{\mu}\right)_{, \mu}, \tag{3.5}\] where the \(\bar{\delta}\) variation is actually the Lie derivative \(\mathcal{L}_{\epsilon}\). I will shortly work out the corresponding field variations. But first I will derive the corresponding vanishing Noether charges noting that when the field equations are satisfied, and letting \(\epsilon^{a}\to 0\) at spatial infinity, \[\int d^{4}\!x\bar{\delta}\mathcal{L}_{ADM} = \int d^{3}\!x\,\left(\frac{\partial\mathcal{L}_{ADM}}{\partial \tilde{T}_{i,0}^{a}}\bar{\delta}\tilde{T}_{i}^{a}+\frac{\partial\mathcal{L}_{ ADM}}{\partial\underset{\sim}{N}}\bar{\delta}N+\frac{\partial\mathcal{L}_{ ADM}}{\partial N_{,0}^{a}}\bar{\delta}N^{a}\right)\Bigg{|}_{x_{i}^{0}}^{x_{f}^{0}} \tag{3.6}\] \[= \int d^{3}\!x\,\mathcal{L}_{ADM}\epsilon^{0}\Big{|}_{x_{i}^{0}}^{ x_{i}^{0}}\] So again taking into account that the time dependence of \(\epsilon^{\mu}\) is arbitrary we derive the corresponding vanishing Noether charges \[C_{\epsilon}=\int d^{3}\!x\mathfrak{C}_{\epsilon} \tag{3.7}\] with vanishing charge density \[\mathfrak{C}_{\epsilon} = \frac{\partial\mathcal{L}_{ADM}}{\partial\tilde{T}_{i,0}^{a}}\bar {\delta}\tilde{T}_{i}^{a}+\frac{\partial\mathcal{L}_{ADM}}{\partial \underset{\sim}{N}}\bar{\delta}N+\frac{\partial\mathcal{L}_{ADM}}{\partial N_ {,0}^{a}}\bar{\delta}N^{a}-\mathcal{L}_{ADM}\epsilon^{0} \tag{3.8}\] \[= p_{a}^{i}\bar{\delta}\tilde{T}_{i}^{a}+\vec{\tilde{P}}\bar{ \delta}N+\widetilde{P}_{a}\bar{\delta}N^{a}-\mathcal{L}_{ADM}\epsilon^{0}\] We recognize, of course, that the momenta \(\vec{\tilde{P}}\) and \(\widetilde{P}_{a}\) are primary constraints. The next step is to determine the variations under \(x^{\prime\mu}=x^{\mu}-\epsilon^{\mu}\). We must bear in mind that the variations of the triads must yield vectors that remain tangent to the fixed time hypersurface. And furthermore the varied \(n^{\mu}=\delta_{0}^{\mu}N^{-1}-\delta_{a}^{\mu}N^{-1}N^{a}\) must be perpendicular to this new hypersurface. The resulting variations are \[\bar{\delta}N=N\epsilon_{,0}^{0}-NN^{a}\epsilon_{,a}^{0}+N\epsilon_{,0}^{0}+N _{,a}\epsilon^{a}, \tag{3.9}\] and \[\bar{\delta}N^{a}=N^{a}\epsilon_{,0}^{0}-(N^{2}e^{ab}+N^{a}N^{b})\epsilon_{,b}^ {0}+\epsilon_{,0}^{a}-N^{b}\epsilon_{,b}^{a}+N_{,0}^{a}\epsilon^{0}+N_{,b}^{a} \epsilon^{b}. \tag{3.10}\] To determine the variation of \(\tilde{T}^{a}_{i}\) I refer to the variation of the spatial components of the metric. 
I have \[\bar{\delta}g_{ab} = \bar{\delta}t^{i}_{a}t^{i}_{b}+t^{i}_{a}\bar{\delta}t^{i}_{b} \tag{3.11}\] \[= t^{i}_{a,\mu}\epsilon^{\mu}t^{i}_{b}+t^{i}_{a}t^{i}_{b,\mu} \epsilon^{\mu}+t^{i}_{c}N^{c}\epsilon^{\mu}_{,a}t^{i}_{b}+t^{i}_{c}\epsilon^{c }_{,a}t^{i}_{b}+t^{i}_{a}t^{i}_{c}N^{c}\epsilon^{0}_{,b}+t^{i}_{a}t^{i}_{c} \epsilon^{c}_{,b}.\] So I find \[\bar{\delta}t^{i}_{a}=t^{i}_{a,\mu}\epsilon^{\mu}+t^{i}_{b}N^{b}\epsilon^{0}_{,a}+t^{i}_{b}\epsilon^{b}_{,a} \tag{3.12}\] Next I calculate \(\bar{\delta}T^{a}_{i}\) using \[\bar{\delta}t^{i}_{a}T^{a}_{j}=-t^{i}_{a}\bar{\delta}T^{a}_{j}, \tag{3.13}\] which implies \[\bar{\delta}T^{b}_{j} = -\bar{\delta}t^{i}_{a}T^{a}_{j}T^{b}_{i}=-\left(t^{i}_{a,\mu} \epsilon^{\mu}+t^{i}_{c}N^{c}\epsilon^{0}_{,a}+t^{i}_{c}\epsilon^{c}_{,a} \right)T^{a}_{j}T^{b}_{i} \tag{3.14}\] \[= T^{b}_{j,\mu}\epsilon^{\mu}-N^{b}T^{a}_{j}\epsilon^{0}_{,a}- \epsilon^{b}_{,a}T^{a}_{j}.\] Now to get \(\bar{\delta}T^{a}_{i}\) I need \[\bar{\delta}t=t\bar{\delta}t^{i}_{a}T^{a}_{i}=t\left(t^{i}_{a,\mu}\epsilon^{ \mu}+t^{i}_{b}N^{b}\epsilon^{0}_{,a}+t^{i}_{b}\epsilon^{b}_{,a}\right)T^{a}_{i}, \tag{3.15}\] which implies \[\bar{\delta}\tilde{T}^{a}_{i}=\bar{\delta}tT^{a}_{i}+t\bar{\delta}T^{a}_{i}= \tilde{T}^{a}_{,\mu}\epsilon^{\mu}+N^{b}\epsilon^{0}_{,b}\tilde{T}^{a}_{i}+ \epsilon^{b}_{,b}\tilde{T}^{a}_{i}-N^{a}\bar{T}^{c}_{i}\epsilon^{0}_{,c}- \epsilon^{a}_{,c}\tilde{T}^{c}_{i} \tag{3.16}\] Finally, we also find that \[\bar{\delta}N_{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! 
so \[\bar{\delta}N = \bar{\delta}t^{-1}N+t^{-1}\bar{\delta}N \tag{3.21}\] \[= -t^{-2}\bar{\delta}tN+t^{-1}\left(t_{,0}\underline{\xi}^{0}+t \underline{\xi}^{0}_{,0}-N^{a}t_{,a}\underline{\xi}^{0}-N^{a}t\underline{\xi}^ {0}_{,a}+\xi^{a}N_{,a}\right)\] \[= -t^{-2}\bar{\delta}tN+t^{-1}\left(tt^{i}_{a,0}T^{a}_{i}\, \underline{\xi}^{0}+t\underline{\xi}^{0}_{,0}-N^{a}tt^{i}_{b,a}T^{b}_{i}\, \underline{\xi}^{0}-N^{a}t\underline{\xi}^{0}_{,a}+\xi^{a}N_{,a}\right)\] To continue I need \[\bar{\delta}t^{i}_{a} = t^{i}_{a,\mu}\epsilon^{\mu}+t^{i}_{b}N^{b}\epsilon^{0}_{,a}+t^{ i}_{b}\epsilon^{b}_{,a} \tag{3.22}\] \[= N^{-1}t^{i}_{a,0}\xi^{0}-N^{-1}t^{i}_{a,b}N^{b}\xi^{0}+t^{i}_{a, b}\xi^{b}+t^{i}_{b}N^{b}\left(N^{-1}\xi^{0}\right)_{,a}+t^{i}_{b}\left(-N^{-1}N^{ b}\xi^{0}+\xi^{b}\right)_{,a}\] \[= N^{-1}t^{i}_{a,0}\xi^{0}-N^{-1}t^{i}_{a,b}N^{b}\xi^{0}+t^{i}_{a, b}\xi^{b}+t^{i}_{b}\left(-N^{-1}N^{b}_{,a}\xi^{0}+\xi^{b}_{,a}\right)\] I use this to calculate \[-t^{-2}N\bar{\delta}t = -t^{-1}N\bar{\delta}t^{i}_{a}T^{a}_{i}=-t^{-1}T^{a}_{i}\left(t^{i }_{a,0}\xi^{0}-t^{i}_{a,b}N^{b}\xi^{0}+Nt^{i}_{a,b}\xi^{b}+t^{i}_{b}\left(-N^{b }_{,a}\xi^{0}+N\xi^{b}_{,a}\right)\right)\] Combining terms I get \[\bar{\delta}N = -t^{-1}T^{a}_{i}\left(t^{i}_{a,0}\xi^{0}-t^{i}_{a,b}N^{b}\xi^{0}+ Nt^{i}_{a,b}\xi^{b}+t^{i}_{b}\left(-N^{b}_{,a}\xi^{0}+N\xi^{b}_{,a}\right)\right) \tag{3.24}\] \[+ t^{-1}\left(tt^{i}_{a,0}T^{a}_{i}\,\underline{\xi}^{0}+t\underline {\xi}^{0}_{,0}-N^{a}tt^{i}_{b,a}T^{b}_{i}\,\underline{\xi}^{0}-N^{a}t \underline{\xi}^{0}_{,a}+\xi^{a}N_{,a}\right)\] \[= -t^{-1}T^{a}_{i}\left(Nt^{i}_{a,b}\xi^{b}+t^{i}_{b}\left(-N^{b}_ {,a}\xi^{0}+N\xi^{b}_{,a}\right)\right)\] \[+ t^{-1}\left(t\underline{\xi}^{0}_{,0}-N^{a}t\underline{\xi}^{0} _{,a}+\xi^{a}N_{,a}\right)\] \[= -N\underline{T}^{a}_{i}t^{i}_{a,b}\xi^{b}+N^{a}_{,a}\underline{ \xi}^{0}_{,0}-N\underline{\xi}^{a}_{,a}+\underline{\xi}^{0}_{,0}-N^{a} \underline{\xi}^{0}_{,a}+t^{-1}N_{,a}\xi^{a}\] \[= N^{a}_{,a}\underline{\xi}^{0}-\underline{N}\xi^{a}_{,a}+\xi^{0} _{,0}-N^{a}\underline{\xi}^{0}_{,a}+\underline{N}_{,a}\xi^{a}\] Next, I need \[\bar{\delta}N^{a} = \xi^{a}_{,0}-Ne^{ab}\xi^{0}_{,b}+N_{,b}e^{ab}\xi^{0}+N^{a}_{,b} \xi^{b}-N^{b}\xi^{a}_{,b}\] (3.25) \[= \xi^{a}_{,0}-Ne^{ab}\left(t\underline{\xi}^{0}_{\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \ Indeed, since \(\xi^{a}\) is an arbitrary spacetime function this delivers an additional vanishing Noether generator of spatial diffeomorphisms. 
Substituting the original variations into the Noether charge I obtain \[\mathfrak{C}_{\epsilon} = p_{a}^{i}\tilde{T}_{i,0}^{a}\epsilon^{0}-\mathcal{L}_{ADM} \epsilon^{0} \tag{3.28}\] \[+ p_{a}^{i}\tilde{T}_{i,b}^{a}\epsilon^{b}+p_{a}^{i}\left(N^{b} \epsilon^{0}_{,b}\tilde{T}_{i}^{a}+\epsilon^{b}_{,b}\tilde{T}_{i}^{a}-N^{a} \tilde{T}_{i}^{c}\epsilon^{0}_{,c}-\epsilon^{a}_{,c}\tilde{T}_{i}^{c}\right)+ \overset{\mathrm{\widetilde{P}}}{\bar{\delta}}\underset{\sim}{N}+\widetilde{P}_ {a}\bar{\delta}N^{a}\] \[= \mathcal{H}_{c}\epsilon^{0}\] \[+ p_{a}^{i}\tilde{T}_{i,b}^{a}\epsilon^{b}+p_{a}^{i}\left(N^{b} \epsilon^{0}_{,b}\tilde{T}_{i}^{a}+\epsilon^{b}_{,b}\tilde{T}_{i}^{a}-N^{a} \tilde{T}_{i}^{c}\epsilon^{0}_{,c}-\epsilon^{a}_{,c}\tilde{T}_{i}^{c}\right)+ \overset{\mathrm{\widetilde{P}}}{\bar{\delta}}\underset{\sim}{N}+\widetilde{P }_{a}\bar{\delta}N^{a}\] \[= \left(N\left(-^{3}R+\frac{1}{4}p_{a}^{i}p_{b}^{i}e^{ab}-\frac{1}{ 4}p_{a}^{i}T_{i}^{a}p_{b}^{j}T_{j}^{b}\right)+\frac{1}{2}\left(p_{a}^{i}T_{i} ^{a}e^{cd}-p_{a}^{i}T_{i}^{d}e^{ac}\right)tg_{e(c}N^{e}_{|d)}\right)\epsilon^{0}\] \[+ p_{a}^{i}\tilde{T}_{i,b}^{a}\epsilon^{b}+p_{a}^{i}\left(N^{b} \epsilon^{0}_{,b}\tilde{T}_{i}^{a}+\epsilon^{b}_{,b}\tilde{T}_{i}^{a}-N^{a} \tilde{T}_{i}^{c}\epsilon^{0}_{,c}-\epsilon^{a}_{,c}\tilde{T}_{i}^{c}\right)+ \overset{\mathrm{\widetilde{P}}}{\bar{\delta}}\underset{\sim}{N}+\widetilde{P }_{a}\bar{\delta}N^{a}\] Next collect the terms (3.28) involong \(\epsilon^{0}\) and not the primary constraints. I have \[\frac{1}{2}\left(p_{a}^{i}T_{i}^{a}e^{cd}-p_{a}^{i}T_{i}^{d}e^{ ac}\right)tg_{e(c}N^{e}_{|d)}\epsilon^{0}+p_{a}^{i}\left(N^{b}\epsilon^{0}_{,b} \tilde{T}_{i}^{a}-N^{a}\tilde{T}_{i}^{c}\epsilon^{0}_{,c}\right) \tag{3.29}\] \[=\frac{1}{2N}\left(p_{a}^{i}\tilde{T}_{i}^{a}e^{cd}-p_{a}^{i} \tilde{T}_{i}^{d}e^{ac}\right)N_{(c|d)}\xi^{0}\] \[- \frac{1}{N}p_{a}^{i}\left(-\frac{1}{N}N_{,b}N^{b}\underset{\leq,b }{\overset{\mathrm{\xi}}{\underset{\sim}{\,}}}\Phi_{1}^{0}+N^{b}\underset{ \leq,b}{\overset{\mathrm{\xi}}{\underset{\sim}{\,}}}\Phi_{1}^{a}+\frac{1}{N}N _{,c}N^{a}\tilde{T}_{i}^{c}\underset{\leq}{\overset{\mathrm{\xi}}{\underset{ \sim}{\,}}}0-N^{a}\tilde{T}_{i}^{c}\underset{\leq,c}{\overset{\mathrm{\xi}}{ \underset{\sim}{\,}}}0\right)\] Perform an integration by parts in the first line to get \[-\frac{1}{2}\left[N_{\sim}^{\mathrm{\xi}}\epsilon^{0}\left(p_{a}^{i}\tilde{T}_ {i}^{a}e^{cd}-p_{a}^{i}\tilde{T}_{i}^{(d}e^{c)a}\right)\right]_{|d}N_{c}\] \[=-\frac{1}{2}\left(N_{\sim}^{\mathrm{\xi}}\epsilon^{0}\right)_{|d }\left(p_{a}^{i}\tilde{T}_{i}^{a}N^{d}-p_{a}^{i}\tilde{T}_{i}^{(d}N^{a)}\right)\] In addition I have \[p_{a}^{i}\tilde{T}_{i,b}^{a}\epsilon^{b}+p_{a}^{i}\left(\epsilon ^{b}_{,b}\tilde{T}_{i}^{a}-\epsilon^{a}_{,c}\tilde{T}_{i}^{c}\right)\] \[=p_{a}^{i}\tilde{T}_{i,b}^{a}\underset{\leq}{N}^{-1}N^{b}\xi^{0}- p_{a}^{i}\tilde{T}_{i}^{a}\left(-N_{\sim}^{-2}N_{,b}N^{b}\xi^{0}+N_{\sim}^{-1}N_{,b}^{b} \xi^{0}+N_{\sim}^{-1}N^{b}\underset{\leq,b}{\overset{\mathrm{\xi}}{\underset{ \sim}{\,}}}0\right)\] \[+p_{a}^{i}\tilde{T}_{i}^{b}\left(-N_{\sim}^{-2}N_{,b}N^{a}\xi^{0} +N_{\sim}^{-1}N_{,b}^{a}\xi^{0}+N_{\sim}^{-1}N^{a}\xi^{0}_{,b}\right) \tag{3.31}\] Then it turns out that some amazing cancelations occur, and the resulting Noether charge is \[C_{\xi} = \int d^{3}x\left[\mathcal{H}_{0}^{\prime}\underset{\sim}{\xi}^{0} +\mathcal{H}_{a}\xi^{a}\right. 
\tag{3.32}\] \[+ \left.\overset{\mathrm{\widetilde{P}}}{\bar{P}}\left(N_{,a}^{a} \xi^{0}-N_{\sim,a}^{\mathrm{\xi}}+\underset{\leq,0}{\overset{\mathrm{\xi}}{ \underset{\sim}{\,}}}0-N_{\sim,a}^{a}\xi^{0}+N_{,a}\xi^{a}\right)\right.\] \[+ \left.\overset{\mathrm{\widetilde{P}}}{\bar{P}}_{a}\left(\xi^{ \mathrm{\xi}}_{,0}-t^{2}N_{\sim}^{\mathrm{\xi}}e^{ab}\underset{\leq,b}{\overset{ \mathrm{\xi}}{\underset{\sim}{\,}}}0+t^{2}N_{,b}e^{ab}\underset{\leq}{\overset{ \mathrm{\xi}}{\underset{\sim}{\,}}}0+N_{,b}^{a}\xi^{b}-N^{b}\xi^{a}_{,b}\right)\right]\] where we have the additional vanishing constraint - due to the arbitrariness in the function \(\underset{\sim}{\xi}^{0}\), \[\mathcal{H}_{0}^{\prime}:=-^{3}R+\frac{1}{4}p_{a}^{i}p_{b}^{i}e^{ab}-\frac{1}{4}p_ {a}^{i}T_{i}^{a}p_{b}^{j}T_{j}^{b}=0. \tag{3.33}\] Similarly, since \(\xi^{a}\) can vary arbitrarily in time, we obtain the constraint \[\mathcal{H}_{a}=0. \tag{3.34}\] These results imply, of course, that \(C_{\xi}\) itself vanishes.2 Footnote 2: It is likely a surprise to most readers that this procedure for determining what are now known as secondary constraints, following the so-called Bergmann-Dirac procedure, was initiated by Leon Rosenfeld in 1930. I and my collaborators believe it would be more accurate to refer to the Rosenfeld-Bergmann-Dirac method. The relation between Bergmann and Dirac is analyzed in detail in [16], while Rosenfeld’s work is discussed in [16] ## 4 Spacetime diffeomorphism-related Noether generator I will work out here the requirement to add gauge transformations to the diffeomorphisms in order to attain projectability under the Legendre transformation from configuration-velocity space to phase space.. This challenge arises due to the absence of anti-symmetrized linear combinations of triad time derivatives in the ADM Lagrangian. This is a combination that appears in the Ricci rotation coefficient (See [17]) \[\Omega_{0}^{ij}=-\tilde{T}_{,0}^{a[i}t_{a}^{j]}-N_{,b}^{a}t_{a}^{[i}T^{jb]}+N^ {c}t_{c}^{k}T^{ai[i}T^{j]b}t_{a,b}^{k}+N^{c}t_{c,b}^{[i}T^{j]b} \tag{4.1}\] I undertake the variation of the covector component \(\Omega_{0}^{ij}\) under the infinitesimal diffeomorphism with descriptor \(\epsilon^{\mu}=n^{\mu}\xi^{0}+\delta_{a}^{\mu}\xi^{a}\), \[\bar{\delta}\Omega_{0}^{ij}=\Omega_{\mu}^{ij}\epsilon_{,0}^{\mu}+\delta\Omega _{0}^{ij}. \tag{4.2}\] We will not need \(\delta\Omega_{0}^{ij}\) since it is projectible. Thus we have \[\bar{\delta}\Omega_{0}^{ij}=\Omega_{0}^{ij}\left(N^{-1}\xi^{0}\right)_{,0}+ \Omega_{a}^{ij}\left(-N^{-1}N^{a}\xi^{0}+\xi^{a}\right)_{,0}+\ldots. \tag{4.3}\] We discover that the unprojectable time derivatives of the lapse and shift appear in this variation. But the good news is that these inadmissible variations can be eliminated by adding gauge rotations with \[\eta^{k}=-\epsilon^{kij}\Omega_{\mu}^{ij}n^{\mu}\xi^{0}, \tag{4.4}\] with generator \[-\int d^{3}\!x\epsilon^{kij}\Omega_{\mu}^{ij}n^{\mu}\xi^{0}p_{k} =-\int d^{3}\!x\epsilon^{kij}\Omega_{\mu}^{ij}n^{\mu}\xi^{0}\epsilon^{kmn}p_{ a}^{m}\tilde{T}_{a}^{a}\] \[=\int d^{3}\!x\Omega_{\mu}^{k[i}\underset{\epsilon}{\Sigma}^{j]} n^{\mu}\tilde{T}_{k}^{a}\xi^{0}. \tag{4.5}\] The additional Ricci rotation coefficient is (from [17]) the three-dimensional coefficient \(\Omega_{a}^{ij}=\omega_{a}^{ij}\). 
Adding this expression to the first line in (3.32) I define the vanishing generator density \[\mathcal{H}_{0}:=\left(-{}^{3}\!R+\frac{1}{4}p_{a}^{i}p_{b}^{i}e^{ab}-\frac{1 }{4}p_{a}^{i}T_{i}^{a}p_{b}^{j}T_{j}^{b}+\Omega_{\mu}^{k[i}t_{\stackrel{{ \frown}}{{\sim}}}a^{j]}n^{\mu}\tilde{T}_{k}^{a}\right)=0. \tag{4.6}\] Thus we finally have the full diffeomorphism-related vanishing Noether generator, derived directly from the vanishing Noether charge, \[C_{\xi\eta} = \int d^{3}x\left[\mathcal{H}_{0}\underset{\sim}{\xi}^{0}+ \mathcal{H}_{a}\xi^{a}+\eta^{k}\mathcal{H}_{k}\right. \tag{4.7}\] \[+ \left.\widetilde{\widetilde{P}}\left(N_{,a}^{a}\underset{\sim}{ \xi}^{0}-N\xi_{,a}^{a}+\underset{\sim,0}{\xi}^{0}-N^{a}\underset{\sim}{\xi} ^{0}+N_{,a}\xi^{a}\right)\right.\] \[+ \left.\widetilde{P}_{a}\left(\xi_{,0}^{a}-t^{2}N\epsilon^{ab} \underset{\sim}{\xi}^{0}+t^{2}\underset{\sim}{N_{,b}}e^{ab}\underset{\sim}{ \xi}^{0}+N_{,b}^{a}\xi^{b}-N^{b}\xi_{,b}^{a}\right)\right]\] The canonical Hamiltonian It must be stressed that the above diffeomorphism generator differs in an essential manner from the conventional temporal evolution generator. This takes the form \[H=\int d^{3}\!x\left(N\mathcal{H}^{\prime}_{0}+N^{a}\mathcal{H}_{a}+\Omega^{k} \mathcal{H}_{k}\right). \tag{5.1}\] It evolves initial phase space data in time. The generator \(C_{\xi\eta}\), on the other hand, acts on the entire solutions generated by \(H\) and transforms them to new physically equivalent solutions that are related through the action of active spacetime diffeomorphisms. ## 6 Extension to the Barbero-Immirzi-Holst model The Holst addition to the Lagrangian is \[\mathcal{L}_{H}=\frac{1}{4\gamma}NtE_{I}^{\mu}E_{J}^{\nu}R_{\mu\nu}^{IJ} \tag{6.1}\] It is introduced with what has become known as the Barbero-Immirzi parameter \(\gamma\). The curvature is expressed in terms of the Ricci rotation coefficients, \[{}^{4}\!R_{\mu\nu}^{IJ}=\partial_{\mu}\Omega_{\nu}^{IJ}-\partial_{\nu}\Omega_ {\mu}^{IJ}+\Omega_{\mu}^{IM}\Omega_{\nu M}{}^{J}-\Omega_{\nu}^{IM}\Omega_{\mu M }{}^{J}. \tag{6.2}\] It is of course well known that this Lagrangian vanishes when, as I shall assume, the torsion vanishes. The outcome for my specific use is that the new canonical momentum \(p_{a}^{\gamma i}\) is obtained through a canonical transformation of \(p_{a}^{i}\), i.e. \[p_{a}^{\gamma i}=p_{a}^{i}+\frac{1}{2}\gamma^{-1}\epsilon^{ijk}\omega_{a}^{jk} \tag{6.3}\] It follows that we need only make this substitution for \(p_{a}^{i}\) in our Noether generator (4.7) to obtain the spacetime diffeomorphism-related symmetry generator in the Barbero-Immirzi-Holst model! ## 7 Evolving constants of motion I will briefly overview here the manner in which the vanishing diffeomorphism-related generator may be employed to implement the use of intrinsic coordinates, evoking the general method presented in (Pons et al., 2009). There we proposed the use of intrinsic coordinates which must be spacetime scalar phase space functions. I will represent them here as \(X^{\mu}\left(\widetilde{T}_{i}^{a},p_{j}^{b}\right)\)3. With their aid we can establish gauge conditions which we represent as \(\chi^{(1)\mu}=x^{\mu}-X^{\mu}=0\). Recognizing that these must be preserved under time evolution we obtain a second set of gauge conditions Footnote 3: The analogues have long been represented by several authors as \(T^{\mu}\) and they have been denoted as ”clock” variables. See for example (Giesel et al., 2018). 
I would recommend referring to \(T^{0}\) as a clock variable and the \(T^{a}\) rod variables. \[0=\frac{d}{d\,t}\chi^{\mu}=\delta_{0}^{\mu}-N^{\rho}\{X^{\mu}\,,\mathcal{H}_{ \rho}\}=\delta_{0}^{\mu}-\mathcal{A}_{\rho}^{\mu}N^{\rho}=:\chi^{(2)\mu}, \tag{7.1}\] where \[\mathcal{A}_{\rho}^{\mu}:=\{X^{\mu},\mathcal{H}_{\rho}\}\,. \tag{7.2}\] In (Pons et al., 2009) we extended a procedure that had been invented by (Dittrich, 2007) so as to include the lapse and shift as phase space variables. The basic idea is to take linear combinations of the eight first class constraints which I represent here by \(\zeta_{(j)\nu}=\left(\mathcal{H}_{\mu},\overset{\raisebox{-0.86pt}{\scalebox{.5}{$\leftrightarrow$}}}{\widetilde{P}},\widetilde{P}_{a}\right)\), employing the inverse of \(\mathcal{A}_{\rho}^{\mu}\). Representing the new set of the original first class constraints by \(\bar{\zeta}_{(j),\mu}\) we are able to arrange that they satisfy the Poisson brackets with the gauge conditions satisfying \[\left\{\chi^{(i)\mu},\bar{\zeta}_{j,\nu}\right\}=-\delta_{j}^{i}\delta_{\nu}^{ \mu}. \tag{7.3}\] Consequently we can solve for the gauge functions \(\bar{\xi}^{\mu}\) which transform arbitrary solutions of the field equations to those that satisfy the gauge conditions. Of course, in doing so in this case we make use of the generator (4.7) with the new linear combinations of constraints \(\zeta_{(j)\nu}\). Thus for any phase space function \(\Phi\), including the lapse and shift, we can construct the corresponding spacetime invariant \(\mathcal{I}_{\Phi}\) through the action of the generator \(C_{\bar{\xi}}\), i.e. \[\mathcal{I}_{\Phi}=exp\left(\left\{-,C_{\bar{\xi}}\right\}\right)\Phi \tag{7.4}\] The validity of this expansion has been demonstrated, for example in [11][11], for several previous models. It will be straightforward to do so for the classical Barbero-Immirzi-Holst theory. A cosmological perturbative approach employing these expansions would be of particular interest. ## 8 Conclusions I have presented here a new direct method for obtaining the generator of spacetime diffeomorphism-related phase space transformations through appealing directly to Noether's second theorem. The question that must now be addressed is how one can take these classical symmetries into account in an eventual quantum theory of gravity. Much effort has of course long been devoted to addressing this issue. Pullin and his collaborators have certainly made significant progress in addressing the associated problem of time [10]. Rovelli has long advocated a closely related approach in which a subset of fields serve as clocks. In this regard I and my collaborators are choosing Weyl scalars expressed in terms of phase space variables as both temporal and spatial intrinsic coordinates [12]. This is accomplished in a manner as advocated in [11, 12][13]. But most reassuring is the extension to the full phase space and the corresponding use of intrinsic coordinates that is being pursued in the context of quantum loop cosmology by [14][14][15][16][17][18] and [19].
2309.07324
A Simple Non-Deterministic Approach Can Adapt to Complex Unpredictable 5G Cellular Networks
5G cellular networks are envisioned to support a wide range of emerging delay-oriented services with different delay requirements (e.g., 20ms for VR/AR, 40ms for cloud gaming, and 100ms for immersive video streaming). However, due to the highly variable and unpredictable nature of 5G access links, existing end-to-end (e2e) congestion control (CC) schemes perform poorly for them. In this paper, we demonstrate that properly blending non-deterministic exploration techniques with straightforward proactive and reactive measures is sufficient to design a simple yet effective e2e CC scheme for 5G networks that can: (1) achieve high controllable performance, and (2) possess provable properties. To that end, we designed Reminis and through extensive experiments on emulated and real-world 5G networks, show the performance benefits of it compared with different CC schemes. For instance, averaged over 60 different 5G cellular links on the Standalone (SA) scenarios, compared with a recent design by Google (BBR2), Reminis can achieve 2.2x lower 95th percentile delay while having the same link utilization.
Parsa Pazhooheshy, Soheil Abbasloo, Yashar Ganjali
2023-09-13T21:30:13Z
http://arxiv.org/abs/2309.07324v1
# A Simple Non-Deterministic Approach Can Adapt to Complex Unpredictable 5G Cellular Networks ###### Abstract. 5G cellular networks are envisioned to support a wide range of emerging delay-oriented services with different delay requirements (e.g., 20ms for VR/AR, 40ms for cloud gaming, and 100ms for immersive video streaming). However, due to the highly variable and unpredictable nature of 5G access links, existing end-to-end (e2e) congestion control (CC) schemes perform poorly for them. In this paper, we demonstrate that properly blending _non-deterministic_ exploration techniques with straightforward _proactive_ and _reactive_ measures is sufficient to design a simple yet effective e2e CC scheme for 5G networks that can: (1) achieve high controllable performance, and (2) possess provable properties. To that end, we designed Reminis and through extensive experiments on emulated and real-world 5G networks, show the performance benefits of it compared with different CC schemes. For instance, averaged over 60 different 5G cellular links on the Standalone (SA) scenarios, compared with a recent design by Google (BBR2), Reminis can achieve 2.2\(\times\) lower 95th percentile delay while having the same link utilization. ## 1. Introduction Congestion Control (CC), as one of the active research topics in the network community, has played a vital role during the last four decades in satisfying the quality of service (QoS) requirements of different applications (Gomez et al., 2017; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018). Although most of the early efforts for designing CC schemes targeted general networks with their general characteristics, as time went by and new network environments emerged, the idea of designing environment-aware CC schemes showed its advantages (e.g., TCP Hybla (Hasegawa et al., 2018) and TCP-Peach (Hasegawa et al., 2018) for satellite communication with its unique loss properties, PCCP (Paszhoo et al., 2018) and TARA (TARA, 2018) for sensor networks with their unique resource restrictions, and DCTCP (Hasegawa et al., 2018) and TIMELY (Tamal et al., 2018) for data center networks (DCN) with their unique single-authority nature). One of the important emerging network environments with huge potential is the 5G cellular network. Just in the first quarter of 2022, the number of connections over 5G reached more than 700 million, while it is expected that by the end of 2026, this number will surpass 4.8 billion globally (Goyal et al., 2018). Considering such a huge increase in the number of 5G users, the wide variety of current and future applications, and the range of new network characteristics and challenges it brings to the table, the need for a 5G-tailored CC scheme reveals itself. ### What Makes 5G Different? **Orders of Magnitude Larger Bandwidth-Delay Product:** Recent measurements have shown that current millimeter-wave (high-band) 5G networks can achieve, on average, \(\approx\)1 Gbps link capacities (and up to 2 Gbps) and around 20 ms e2e delays (Goyal et al., 2018; Goyal et al., 2018). Compared to a DCN with 100Gbps access links and 10\(\mu\)s e2e delay, 5G networks can have, on average 40\(\times\) larger bandwidth-delay product (BDP). 
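The quoted ratios (including the 4G comparison in the next paragraph) can be reproduced with a few lines of arithmetic, shown below. Note that matching the stated 40x and 50x factors appears to require the roughly 2 Gbps peak capacity rather than the 1 Gbps average; that reading is ours, not something stated explicitly here.

```python
def bdp_bits(capacity_bps, delay_s):
    """Bandwidth-delay product, in bits, for a given capacity and delay figure."""
    return capacity_bps * delay_s

bdp_5g_peak = bdp_bits(2e9,   20e-3)   # 5G at its ~2 Gbps peak, 20 ms e2e delay
bdp_5g_avg  = bdp_bits(1e9,   20e-3)   # 5G at the ~1 Gbps average
bdp_dcn     = bdp_bits(100e9, 10e-6)   # DCN: 100 Gbps access link, 10 us e2e delay
bdp_4g      = bdp_bits(20e6,  40e-3)   # 4G: 20 Mbps link, 40 ms e2e delay

print(bdp_5g_peak / bdp_dcn)   # 40.0 -> the "40x larger than DCN" figure
print(bdp_5g_peak / bdp_4g)    # 50.0 -> the "50x larger than 4G" figure
print(bdp_5g_avg  / bdp_dcn)   # 20.0 -> the same ratio with the 1 Gbps average
```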
Compared to its predecessor, e.g., a 4G network with a 20 Mbps link and 40 ms e2e delay, 5G networks have on average 50\(\times\) larger BDP. **Highly Variable & Unpredictable Access Links:** One distinguishing characteristic of 5G links compared to its predecessors, is the wide range of link capacity fluctuations. While 5G links can reach a capacity as high as 2 Gbps, they can quickly drop below 4G link capacities or even to nearly zero (5G "dead zones") (Goyal et al., 2018). For instance, the standard deviation in 120 5G link capacities collected in prior work is around 432 Mbps (Goyal et al., 2018)1. The changing dynamics of the environment such as user mobility, environmental obstacles, and other 5G network factors like network coverage, 5G cell size, and handover are among some of the main reasons for these highly unpredictable fluctuations in the link capacity. Footnote 1: For example, considering 4G/3G traces gathered in prior work (Goyal et al., 2018; Goyal et al., 2018), this value is about two orders of magnitude larger than its 4G/3G networks’ counterpart **Emerging Applications with Unique Delay Requirements:** 5G networks are envisioned to serve a diverse set of emerging delay-sensitive applications such as AR/VR, online gaming, vehicle-to-vehicle communications, tactile Internet, remote medical operations, and machine learning-enabled services. Cellular providers have already started deploying some of these applications such as cloud gaming, real-time augmented/virtual reality (AR/VR), and immersive video streaming at the edge of their networks (Goyal et al., 2018). Each of these applications has different delay constraints. For instance, the acceptable delay for AR/VR is about 20ms [2], for cloud gaming, the target delay is about 40-60ms, and for immersive video streaming the delay shall not be more than 100ms [3]. **New Features, New Opportunities:** The goal of supporting delay-sensitive applications in 5G has popularized the integration of 5G and edge-based frameworks such as mobile edge computing (MEC) [1]. Such an edge-based integration, when combined with other new technologies such as 5G network slicing [21] can provide interesting new design opportunities for CC design. For instance, in an edge-based cellular architecture where users have their own isolated logical networks on top of the shared infrastructures, the concern of TCP-friendliness becomes less of an issue for a 5G-tailored CC scheme. ### Impact of 5G's Unique Properties on CC As a simple motivating experiment, here (following the setting described in Section 4.1) we use a 5G cellular trace (gathered in prior work[32]) and report the delay and throughput performance of some recent CC schemes namely BBR2 [18] (representing white-box approach), Verus [47] (representing 4G-tailored schemes), Orca [9] (representing RL-based designs), and Vivace [20] (representing online-learning designs) over a 20s period of this trace. As Fig. 1 illustrates, the 5G link capacity falls from nearly 1 Gbps to about zero in just 3 seconds (a sample of high variability and unpredictability of 5G links). In this setting, BBR2 faces a clear delay issue to the extent that it generates more than 5 seconds of queuing delay. A white-box CC approach such as BBR2 assumes that the network always follows a certain model. However, when it faces an unpredictable 5G link that clearly diverges from BBR2's wired model of the network, BBR2 cannot adapt to the dynamics of the network quickly and fails to deliver the desired performance. 
On the other hand, Verus, a design targeting 4G cellular networks, is a black-box approach and tries not to rely on a pre-built model of the network. However, it is very slow, fails to keep up with the available link capacity, and ends up with low link utilization and high queuing delay. Considering the performance of Orca and Vivace, it is clear that the learning-based schemes have a hard time in this setting as well. Although Orca, as one of the state-of-the-art reinforcement learning (RL) based schemes, performs better than BBR2 and Verus, it still can experience large queuing delays. We think that the reason lies in the generalization issue of the RL-based designs and the fact that Orca has not seen these scenarios during its training phase. However, how to train an RL-based scheme to achieve high generalization is still a big unanswered question [9]. Vivace addresses the issue of the need for offline training by exploiting online-learning techniques. However, when utilized in a 5G setting, Vivace cannot keep up with the unpredictable and fast-varying nature of the network. For instance, after the drop of capacity at around 47s (in Fig. 1), it takes more than 10 seconds for Vivace to adjust its sending rate to a value lower than the link's capacity. ### Design Decisions Putting all together, existing general-purpose heuristics (e.g., BBR2), 4G specialized heuristics (e.g., Verus), and convoluted learning-based designs (e.g., Orca) cannot achieve high performance in 5G networks, especially when emerging delay-sensitive applications are considered.2 This sheds light on why in this work, we are motivated to design a performant CC scheme for one of the fastest-growing means of access to the Internet, 5G cellular networks. To that end, we target two main properties for our design: Footnote 2: Appendix A briefly overviews some of the related works. * Simplicity & Interpretability: In contrast with tangled learning-based schemes, we seek a simple design that is easy to reason about and possesses provable properties. * Adaptability: Showing the performance shortcomings of the existing CC heuristics in 5G networks, our goal is to design a CC scheme that can effectively and efficiently adapt to the dynamics of complex 5G networks and achieve high performance in terms of throughput and delay. Favoring simplicity and interpretability led us to avoid employing convoluted techniques such as learning-based ones in this work. Instead, we go back to simple and intuitive principles to design an effective heuristic (called Reminis) that can adapt to highly variable 5G networks3. In particular, examples such as the one shown in Fig. 1 indicate that in highly variable and unpredictable 5G cellular links, gaining high Figure 1. Performance of state-of-the-art CCA on a slice of a sample 5G trace gathered by prior work [32] utilization requires agile mechanisms to cope with the sudden increase in the available link capacities while achieving low controlled e2e delay requires effective fast proactive and reactive techniques to avoid bloating the network when link capacities suddenly decrease. Based on these observations, Reminis utilizes two key techniques: (1) _non-deterministic explorations_ for discovering suddenly available link capacities, and (2) _fast proactive and agile reactive slowdowns_ to avoid bloating the network. As Fig. 1 illustrates, using these intuitive strategies enables Reminis to effectively achieve high throughput while keeping the e2e delay of packets very low. 
Sections 2 and 3 elaborate more on these design decisions. ### Contributions Our key contributions in this paper are as follows. * By designing Reminis, we demonstrate that without the need to use convoluted learning techniques or prediction algorithms, using lightweight yet effective techniques can lead to promising performance on 5G links. * We mathematically analyze Reminis and prove that it converges to a steady state with a bounded self-inflicted queuing delay that can be controlled in an e2e fashion. * Through extensive experiments over various emulated 5G cellular traces and a real-world deployed 5G network in North America, we illustrate that Reminis can adapt to highly unpredictable 5G cellular links 4. Footnote 4: For example, in our emulations over 60 different 5G traces, when compared to a recent work by Google, BBR2, Reminis can achieve up to 3.6\(\times\) and on average 2.2\(\times\) lower 95th percentile delay while having the same link utilization as BBR2. * As a side effect of our efforts to evaluate Reminis and make reproducible experiments, we debugged and improved Mahimahi (Mahimahi, 2018), which cannot emulate high-capacity 5G links5. Our Mahimahi patch along with Reminis' framework are publicly available to facilitate the community's further research on 5G CC (Friedman et al., 2019). Footnote 4: For example, in our emulations over 60 different 5G traces, when compared to a recent work by Google, BBR2, Reminis can achieve up to 3.6\(\times\) and on average 2.2\(\times\) lower 95th percentile delay while having the same link utilization as BBR2. Footnote 5: For more details, see Appendix D and discussions therein. ## 2. Design Overview As shown in Fig. 2, Reminis is composed of two main components: (1) the Classic AIMD unit, an Ack-triggered logic performing the well-known AIMD behavior upon receiving Ack packets, and (2) the Performance Guardian module (or Guardian in short) which runs periodically and adjusts CWND to keep delay low while maintaining high throughput. Upon activation in each period, the Guardian exploits a two-step logic. In the first step, the Network Condition Inference (NCI) module, utilizes the history of delay statistics (i.e., delay and its first-order derivative) and infers the current condition of the network. NCI only uses simple e2e RTT samples as input. As we show later in sections 3.1 and 4, the simple e2e RTT sample is sufficient for distinguishing different network conditions. In the second step, based on the inferred network condition, the Guardian activates one of the following three modules: (1) Non-Deterministic Exploration (NDE), (2) Proactive Slowdown (PS), or (3) Catastrophe Mitigation (CM). In particular, if inferred network condition suggests that there is a potential room for gaining higher throughput, the Guardian activates the NDE module to discover further unutilized network bandwidth. On the other hand, if NCI expects an unwanted delay surge in the near future, the Guardian activates the PS module to proactively reduce the chance of a future increase in the delay. If the proactive measures had not been successful and the observed delay has already increased significantly, as the last measure, the CM module is activated. The CM block has a reactive logic and enforces a dramatic decrease of CWND to avoid further increases in delay. In Section 3 we elaborate more on the details of these modules, their effectiveness, and their necessity in adapting to highly variable 5G access links and steering Reminis to very high performance. 
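Before turning to those details, the division of labor in Fig. 2 can be summarized in a few lines of code. This is a schematic rendering for illustration only: the AIMD constants and method names below are assumptions rather than the actual Reminis implementation, and the Guardian placeholder is fleshed out in the sketch that follows Section 3.2.

```python
class Guardian:
    """Placeholder for the periodic logic of Section 3 (NCI + NDE/PS/CM);
    a fuller sketch is given after Section 3.2."""
    def adjust(self, cwnd, rtt_sample, interval):
        return cwnd


class Reminis:
    """Two-component structure of Fig. 2: per-Ack AIMD plus a periodic Guardian."""
    def __init__(self, mss=1500):
        self.mss = mss
        self.cwnd = 10 * mss                      # congestion window, in bytes
        self.guardian = Guardian()

    # Classic AIMD unit: triggered on every acknowledgment.
    def on_ack(self, acked_bytes, loss_detected):
        if loss_detected:
            self.cwnd = max(self.mss, self.cwnd / 2)          # multiplicative decrease
        else:
            self.cwnd += self.mss * acked_bytes / self.cwnd   # additive increase

    # Performance Guardian: invoked once per sampling interval (one mRTT).
    def on_sampling_interval(self, rtt_sample, interval):
        self.cwnd = self.guardian.adjust(self.cwnd, rtt_sample, interval)
```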
Figure 2. Reminis High-Level Block Diagram

**Why Guarding Periodically and not Per-Packet?** There are two important reasons behind the periodic nature of the Guardian's task. First, as mentioned in Section 1.1, due to the high variability of 5G cellular networks, jumping to any conclusions about network conditions solely based on the statistics of one packet is not reasonable. Hence, it is important to monitor and extract a more stable view of the network by considering more packets. In other words, any per-packet measurement in a highly variable network is prone to noise and can lead to inferring wrong network conditions, while having more _samples_ from the environment can potentially help to get a better picture of the network. That is why Reminis utilizes samples observed in periodic intervals. We refer to these intervals as sampling intervals (SI). Second, as discussed earlier, 5G access links can have high capacities. A direct impact of this property on any logic that makes per-packet decisions is a potentially higher CPU utilization compared to periodic logic.

**The Role of AIMD Block:** As discussed, guarding periodically can be helpful in several ways. However, in a highly varying network, only relying on periodic logic can possibly lead to a lack of agility for the system. In other words, a logic that purely relies on periodic samples can react slowly to the changes in link capacity during each SI. As agility is vital for CC schemes targeting cellular networks, Reminis harnesses a classic AIMD block to perform extra per-packet reactions during SIs. Later in Section 4, we show this technique not only makes Reminis very agile (e.g., Fig. 11) and high performance (e.g., Fig. 6), but also makes it a very lightweight scheme with very low CPU overhead (Fig. 17), which is another fundamental requirement for a successful CC scheme in high-capacity 5G networks.

## 3. Reminis Design

In this section, we discuss the main components of the Guardian block. First, we introduce the NCI module responsible for inferring network conditions based on delay statistics. Then, we describe the modules responsible for modifying the CWND, namely NDE, PS, and CM.

### Network Condition Inference (NCI)

The NCI module uses two signals to infer network conditions: 1) delay and 2) delay derivative. Delay is measured with end-to-end RTT samples, and as the Guardian runs periodically, once every SI, the delay derivative is defined to be the difference between two consecutive delay values divided by the time difference between two queries. Reminis has a target for its delay, which is denoted as the delay tolerance threshold (DTT). This value could be defined based on (or by) the target application, or Reminis can use its default value for DTT, which is \(1.5\times mRTT\). Here, mRTT is the observed minimum RTT since starting the flow (which is not necessarily equal to the exact/actual minimum RTT of the network). Considering DTT as the delay target, the NCI module uses the statistics of the delay signals and deduces the network condition in each SI. Later, these inferred conditions will be exploited by the NDE, PS, and CM blocks. The length of each SI is equal to the mRTT. To keep Reminis simple and lightweight, we use straightforward delay signals for defining network conditions (NCI Zones). In particular, denoting the delay and delay derivative in each \(SI_{n}\) respectively with \(d_{n}\) and \(\nabla d_{n}\), Reminis divides the delay space into three Zones.
_Zone 1_: \(d_{n}\leq DTT\) & \(\nabla d_{n}\leq 0\), _Zone 2_: \(d_{n}\leq DTT\) & \(\nabla d_{n}>0\), and _Zone 3_: \(d_{n}>DTT\). Zone 1 indicates that the delay is below DTT and decreasing, which Reminis interprets as having room for sending more packets for the benefit of getting more throughput. The main reason for this deduction is that the negative delay derivative shows that the sending rate is less than the channel capacity and the queue is depleting. Also, having a delay of less than DTT gives room for some exploration. Zone 2 shows that the delay is still below DTT but increasing, which means that keeping the current CWND or increasing it might lead to a violation of DTT, as the positive delay derivative is a sign of the queue building up. Finally, Zone 3 indicates that the sender CWND should decrease harshly, as being in this Zone means that the delay has exceeded DTT. Many reasons, such as 5G dead zones, which are common in 5G networks, can result in transitioning into this Zone.

\[\texttt{SafeZone}(d)=1-\frac{d-mRTT}{DTT-mRTT} \tag{1}\]

In order to quantify how much delay has exceeded DTT, the Guardian uses a function called SafeZone, defined as in Equation 1. Based on the zone inferred by the NCI module, one of the NDE, PS, or CM modules will be activated. Algorithm 1 shows one iteration of the NCI module's logic.

```
d_derivative = (d_now - d_prev) / interval;
cum_d_der += d_derivative;
sz_now = SafeZone(d_now);
if sz_now < 0 then
    CatastropheMitigation(sz_now);
else if d_derivative > 0 then
    ProactiveSlowdown(d_now, d_derivative);
else if d_derivative < 0 then
    NDExploration(cum_d_der);
d_prev = d_now;
```
**Algorithm 1** The Guardian

### Non-Deterministic Exploration (NDE)

On the one hand, as discussed earlier, being in Zone 1 indicates there is room for sending more packets for the benefit of getting more throughput. On the other hand, many different factors such as user mobility, dynamics of the physical obstacles, the wireless scheduler's algorithms that divide resources between users through time, etc. make 5G access links highly unpredictable. This means although Zone 1 can indicate a possible chance to gain more throughput, it cannot identify the exact amount of such an increase. In that landscape, the NDE module is responsible for discovering and utilizing available capacity, without the risk of bufferbloat. To that end, when Zone 1 is inferred by the NCI module, the Guardian activates the NDE module to explore different CWND values in a non-deterministic fashion so that it can address the unpredictable nature of the available link capacities. This can help Reminis utilize the sudden unpredictable surges in access link capacity in a better way. However, it's important to make sure that when NDE is exploring any available link capacity, it does not bloat the user's queue in an uncontrollable manner. To address this issue, the NDE block controls the average of the stochastic decision-making process with regard to the general trend of the link capacity. In particular, the exploration needs to be more aggressive if the Guardian has been measuring high negative delay derivatives, as a more negative delay derivative indicates that the sending rate is far less than the channel capacity. To this end, the NDE module maintains a Gaussian distribution, \(\mathcal{N}(\mu_{n},\,\sigma_{n}^{2})\), where the mean and variance of this distribution change with each delay derivative measured in every SI, based on the update rules \(\mu_{n}\leftarrow\mu_{n-1}-\nabla d_{n}\) and \(\sigma_{n}^{2}=\frac{\mu_{n}}{4}\).
Upon activation, the NDE module draws one sample from the Gaussian distribution, \(x\sim\mathcal{N}(\mu_{n},\,\sigma_{n}^{2})\), and feeds the sample to a Sigmoid function, \(S(x)=\frac{1}{1+e^{-x}}\). Then, the output of the Sigmoid function is used to increase the current CWND by multiplying the current CWND by \(2^{S(x)}\), as shown in Equation 2.

\[cwnd_{n}\gets cwnd_{n}\times 2^{S(x)} \tag{2}\]

The range of the Sigmoid function is \((0,1)\), so the NDE module will increase the CWND by a factor between \(1\) and \(2\). If Reminis starts measuring negative delay derivatives consecutively, the incremental factor generated by this module will be close to \(2\), which helps Reminis to adapt to any increase in link capacity quickly. On the other hand, if Reminis measures one negative delay derivative after many positive delay derivatives, the stochastic exploration will be more conservative and increase the CWND by a factor only slightly greater than \(1\). We prove that the NDE module makes Reminis faster than an AIMD module alone. In particular, assuming that \(w_{1}\) is the CWND that fully utilizes the fixed link without causing any queue build-up, we prove that:

**Theorem**.: _Reminis helps the AIMD logic to reach \(w_{1}\) in \(\mathcal{O}(\log w_{1})\) instead of \(\mathcal{O}(w_{1})\) in the congestion avoidance phase._

Due to space limitations, the proof and the detailed assumptions are provided in Appendix E.1.

**NDE in Action:** In a nutshell, the NDE module in Reminis is responsible for tackling scenarios in which the 5G access link experiences sudden surges in link capacity and utilizing these surges in an agile manner. To illustrate the effectiveness of NDE in practice, we use a toy example where the link capacity increases from 100 Mbps to 720 Mbps in a few seconds (Fig. 3) and compare it with two alternatives: (1) No-Exploration and (2) Deterministic Exploration. For the No-Exploration version, we simply turn off the NDE block, and for the Deterministic version, we always use the updated mean of the Gaussian distribution instead of using random samples drawn from the Gaussian distribution. As Fig. 3 illustrates, without the exploration module, Reminis suffers from heavy under-utilization. Deterministic exploration can improve over the No-Exploration version; however, the sending rate still converges to the channel capacity very slowly. In contrast, the NDE block enables Reminis to converge to the new channel capacity very fast.

### Proactive Slowdown (PS) and Catastrophe Mitigation (CM)

Considering the high fluctuations of the 5G access links, the main role of the PS and CM modules is to effectively control the e2e delay without causing significant underutilization.

**Proactive Slowdown:** This module is activated whenever the NCI infers Zone 2. When in Zone 2, Reminis needs to be prudent so it can prevent any violation of DTT in the next SI. The PS module decreases the current CWND if the delay gets too close to DTT. To detect when the delay is close to DTT (i.e., risk of DTT violation), PS calculates the expected delay in the next SI, using a first-order regression predictor, as in Equation 3.

\[d_{n+1}=d_{n}+\nabla d_{n}\times SI \tag{3}\]

Equation 4 shows how the PS module decreases the CWND upon activation. The main responsibility of this module is to reduce the CWND if the calculated expected value of delay in the next SI is more than DTT. This module will be harsher in decreasing the CWND if the expected DTT violation in the next SI is larger.
\[cwnd_{n}\gets cwnd_{n}\times 2^{\min(0,\,\texttt{SafeZone}(d_{n+1}))} \tag{4}\]

**Catastrophe Mitigation:** Many reasons, such as sudden decreases in link bandwidth, can cause Reminis to end up in Zone 3 despite the PS module actions. In these types of scenarios, we want Reminis to decrease the delay as soon as possible to meet the delay requirement. Therefore, upon the inference of Zone 3 by the NCI, the CM module will be activated. CM decreases the CWND by at least half upon activation. The decrease would be harsher in proportion to the DTT violation. Equation 5 shows the CWND update rule by this module.

\[cwnd_{n}\gets cwnd_{n}\times 2^{\texttt{SafeZone}(d_{n})}\times 0.5 \tag{5}\]

Algorithm 2 describes the NDE, PS, and CM modules' logic.

```
Function NDExploration(cum_d_der):
    mu = cum_d_der;
    sigma^2 = mu / 4;
    x ~ N(mu, sigma^2);
    cwnd = cwnd * 2^(1 / (1 + e^(-x)));
    return;

Function ProactiveSlowdown(d_now, d_derivative):
    expected_d_nxt = d_now + (d_derivative * interval);
    expected_sz_nxt = SafeZone(expected_d_nxt);
    if expected_sz_nxt < 0 then
        cwnd = cwnd * 2^expected_sz_nxt;
    return;

Function CatastropheMitigation(sz_now):
    cwnd = cwnd * 2^sz_now * 0.5;
    return;
```
**Algorithm 2** NDE, PS, and CM

**PS and CM in Action:** 5G access links experience sudden drops due to several reasons described in Section 1.1. These drops could cause a significant increase in delay and violate the delay requirements of 5G applications as a result. So, here, we use two examples to show how the PS and CM blocks help Reminis effectively control delay over varying 5G access links. In both examples, we turn off different blocks right before the changes in the capacity and capture their impact on the performance of Reminis. In particular, in the first example, we focus on scenarios where the decrease in access link capacity is relatively small. Fig. 4 shows such a scenario where the link capacity decreases from 720 Mbps to 600 Mbps. As Fig. 4 depicts, in this scenario, the PS block is sufficient to control the queuing delay during the transition. Here, the PS block reacts to the decrease of link capacity by reducing CWND according to Equation 4. This enables Reminis to keep the delay below the DTT and decrease the 95th percentile of queuing delay by 2.3\(\times\) compared to the case where the PS module is turned off. In the second example, we focus on sudden, relatively large decreases in link capacity. For example, Fig. 5 shows a scenario where the link capacity decreases from 720 Mbps to 100 Mbps. As is clear from Fig. 5, the PS module alone is not sufficient and the CM block becomes a key component to control the delay. In particular, the CM module alone ("CM ON, PS OFF" in Fig. 5) can eventually control the delay in this scenario. In fact, when the delay surpasses the DTT, the CM module is activated and decreases the CWND by more than half (based on Equation 5). Note that, unlike the PS module which has a proactive nature, the CM module is reactive, leading to a temporary surge in delay when the PS module is off. In contrast, when both CM and PS are on, the PS module enhances the performance by controlling the delay during the transition (\([40,45\text{s}]\) in Fig. 5). In sum, the proactive aspect of the PS component combined with the reactive aspect of the CM block boosts the overall performance of Reminis.
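As a concrete, worked illustration of Equations 1-5, the short Python sketch below computes the SafeZone value and the resulting CWND scaling factors of the three modules. The numbers (mRTT = 20 ms, DTT = 30 ms, and the delay-derivative samples) are arbitrary example values rather than results from our experiments, and the NDE mean update follows the textual rule \(\mu_{n}\leftarrow\mu_{n-1}-\nabla d_{n}\).

```python
import math, random

mRTT, DTT, SI = 0.020, 0.030, 0.020     # example values in seconds; DTT = 1.5 x mRTT

def safe_zone(d):
    # Equation 1: 1 at d = mRTT, 0 at d = DTT, negative once DTT is violated.
    return 1.0 - (d - mRTT) / (DTT - mRTT)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# NDE (Zone 1): exploration factor in (1, 2) drawn from a Gaussian whose mean tracks the
# negated cumulative delay derivative (update rule mu <- mu - grad_d from the text).
mu = 0.0
for grad_d in (-0.4, -0.3, -0.5):       # three SIs with a draining queue (negative derivatives)
    mu -= grad_d
sigma = math.sqrt(abs(mu) / 4)          # sigma^2 = mu / 4; abs() guards this sketch against a negative mean
factor_nde = 2 ** sigmoid(random.gauss(mu, sigma))

# PS (Zone 2): Equations 3-4 - extrapolate the next-SI delay, shrink CWND only if it would exceed DTT.
d_now, grad_d = 0.026, 0.3              # 26 ms delay, rising
d_next = d_now + grad_d * SI            # Equation 3 -> 32 ms expected
factor_ps = 2 ** min(0.0, safe_zone(d_next))   # SafeZone(32 ms) = -0.2 -> factor ~ 0.87

# CM (Zone 3): Equation 5, applied when the measured delay is already 45 ms (> DTT).
factor_cm = 2 ** safe_zone(0.045) * 0.5        # SafeZone(45 ms) = -1.5 -> factor ~ 0.18

print(round(factor_nde, 2), round(factor_ps, 2), round(factor_cm, 2))
```

With these example numbers, a DTT violation at 45 ms shrinks the window to roughly 18% of its previous value (CM), whereas the proactive step only trims it to about 87% (PS), and the exploration step scales it up by a random factor between 1 and 2.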
Even in the steady state (\([45,50\text{s}]\) in Fig. 5), this PS-CM combination benefits Reminis and lowers the delay oscillation.

### Reminis' Steady State

One of our main quests was to design a simple CC scheme with provable properties. Considering that, we mathematically prove the following theorem, which summarizes the convergence property of Reminis (considering \(q_{th}=DTT\)):

**Theorem 3.1**.: _On average, Reminis converges to a steady state with a queuing delay no more than \((1+S(\frac{\ln 4-1}{2BDP})\ln 2)q_{th}\)._

The detailed proof of the above Theorem and the corresponding assumptions are discussed in the Appendix.

## 4. General Evaluations

In this section, we extensively evaluate Reminis and compare it with other state-of-the-art e2e CC schemes in reproducible trace-based emulations and in-field experiments. The emulations help us to measure Reminis' performance over various scenarios, whereas the in-field tests help us verify Reminis' performance in a much more complex real-world network.

**Metrics**: The main metrics used in this paper, suitable for a real-time application, are the average throughput (or equivalently link utilization) and delay-based statistics such as average and 95th percentile packet delays.

**Compared CC Schemes**: We compare Reminis with different classes of state-of-the-art e2e CC schemes. The first class is general-purpose CC schemes such as TCP CUBIC (Kumar et al., 2017), Google's BBR2 (Kumar et al., 2018), TCP Vegas (Kumar et al., 2018), and Copa (Kumar et al., 2018). The second class is CC algorithms that are custom-designed for cellular networks. These schemes are C2TCP (Cheng et al., 2017), Verus (Verus, 2017), and Sprout (Sprout, 2018). The final class is learning-based CC schemes. We compare Reminis with DeepCC (Kumar et al., 2018), targeting cellular networks, and PCC-Vivace (Kumar et al., 2018) as a general-purpose learning-based CC scheme.

### Trace-based Emulations

**Mahimahi Limitation for High Bandwidth Links**: Mahimahi (Mahimahi, 2018) was not originally designed to emulate high-speed links. That has led to some design decisions that downgrade its performance in large-bandwidth scenarios such as 5G links. We faced these performance issues during the evaluation of Reminis. So, we pinpointed Mahimahi's issues and revised them to support high-BDP emulations. In a nutshell, we updated the TUN/TAP settings and logging functionalities of Mahimahi. Fig. 20 shows the performance of TCP CUBIC over a 5G link emulated with Mahimahi before and after our changes. As is clear, after our modifications, Mahimahi is not the performance bottleneck and CUBIC can utilize the link fully. For brevity and the sake of space, we omit the details of the changes and refer interested readers to our publicly available source code including these modifications along with the Reminis source code (Bordes et al., 2018).

**Setup**: We use trace-driven emulations to evaluate Reminis and compare it with other CC schemes under reproducible network conditions. We use our patched Mahimahi as the emulator and the 5G traces collected by prior work (Kumar et al., 2017) as our base network traces. After patching Mahimahi, we evaluate the general performance of Reminis and other relevant CC schemes. For these experiments, we use 60 different 5G traces gathered in North America by prior work (Kumar et al., 2017) in various scenarios6. Each run is set to be 3 minutes and we repeat each run 3 times.
For these experiments, we fix the minimum intrinsic RTT of the network to 20ms based on prior measurements done by (Kumar et al., 2018). Furthermore, since currently there are two different deployments of 5G networks, we consider two different settings for the bottleneck buffer size. The first deployment is the Non-Standalone (NSA) mode, where operators are reusing the legacy 4G infrastructure to reduce costs. In NSA mode, we expect to have 4G-tuned buffers, which would be smaller than the 5G-tuned buffers. For NSA mode, based on measurements done by prior work (Kumar et al., 2018), we set the buffer size to 800 packets. The second deployment is the Standalone (SA) version, where the infrastructure of the network is also changed to accommodate 5G networks' needs. In this case, we configure the buffer size to 3200 packets (Kumar et al., 2018).

Footnote 6: For more details about the traces, see Appendix F and discussions therein.

Figure 6. Throughput-Delay for SA (left column), NSA (middle column) and In-Field (right column) Emulations.

**Standalone (SA) Scenario**: The left column of Fig. 6 shows the performance of the tested CC schemes in the SA experiment. The dashed curve shows Reminis' performance with different DTT values. The star with a bigger marker size than the other stars is Reminis with its default DTT parameter (i.e. \(DTT=1.5\times mRTT\)). Later in Section 5.4, we investigate the sensitivity of Reminis to this parameter. Considering either average or 95th percentile delay statistics, pure loss-based CC schemes like CUBIC suffer from high e2e delay, though they can fully utilize the link. This behavior is expected from these schemes as they try to fully occupy the bottleneck buffer. In contrast, Reminis can find a sweet spot in the delay-throughput trade-off. For instance, averaged over all runs and all traces, Reminis with default DTT achieves 5\(\times\) lower 95th percentile delay compared to CUBIC. This promising performance comes at the cost of only 20% of CUBIC's link utilization. Increasing the DTT value to \(2\times mRTT\) (making Reminis more throughput-hungry), Reminis, on average, achieves 2.2\(\times\) lower 95th percentile delay than BBR2, while having the same link utilization. Moreover, the SA-related parts of Fig. 6 show that delay-based CC algorithms like Vegas and Copa cannot get an acceptable link utilization. For example, averaged over all the experiments, default Reminis compared to Vegas gains roughly 2.42\(\times\) more throughput while its 95th percentile of delay and average delay are only 1.4\(\times\) and 1.14\(\times\) more than Vegas, respectively. One of the main takeaways from the SA scenario experiments is that because of deep buffers, throughput-hungry schemes like CUBIC or BBR2 can fully utilize the link, but at the same time, they will have dire delay performance. On the other hand, Reminis, using its Proactive Slowdown and Catastrophe Mitigation modules, can control the delay. However, without the Non-Deterministic Exploration module, Reminis would be hindered like other delay-based schemes. Using these modules simultaneously enables Reminis to reach the sweet spot of the delay-throughput trade-off.

**Non-Standalone (NSA) Scenario**: The middle column of Fig. 6 compares the overall performance of all gathered CC schemes in the NSA scenario. An important note here is that because of the small size of the buffer, even throughput-hungry schemes cannot fully utilize the link. The highest utilization, in this case, is roughly 80%, achieved by BBR2 and CUBIC.
In this scenario, Reminis operates at a sweet spot in the delay-throughput trade-off curve. In particular, having roughly the same link utilization as CUBIC, default Reminis achieves 2\(\times\) lower 95th percentile delay than CUBIC. The relative performance of the investigated CC schemes is the same as in the SA scenario; the only difference is a general reduction in delay and link utilization among all CC schemes as a result of the smaller buffer size.

### In-Field Evaluations

Real-world cellular networks can be more complicated than emulated versions due to the existence of other users, different behavior of cellular base-station packet schedulers, etc. We tested the performance of Reminis over deployed 5G networks in North America. Having servers as senders, and a 5G SIM card and a 5G phone as a client, we collected the performance of various CC schemes under different environments with different dynamics. We used a Samsung Galaxy S20 5G as our 5G mobile phone. The mRTT of the 5G network in our in-field experiments varied from 20ms to 30ms. Overall, we conducted 80 experiments for each CC scheme, where each run takes 15 seconds. Experiments are done at different times and places to capture various network dynamics. During in-field evaluations, the mobile phone was in both stationary and walking conditions. In both conditions, we observed the time-varying throughput that Reminis targets. In the stationary scenarios, two main reasons cause significant changes in access link capacity over time. The first reason is changes in line of sight (LoS). Even small obstructions like a human body could trigger 5G-to-4G handoffs and lead to significant performance degradation (Kumar et al., 2019). Second, the 5G wireless scheduler, based on different reasons such as the history of users' resource utilization and the number of currently existing users, can enforce different available access link capacities per user. This can cause considerable changes in the available link capacity observed by the end user even in the stationary scenario. In a 5G context, demand for resources in different slices can vary arbitrarily and unpredictably. In such cases, a large block of network resources might suddenly be available or taken away from a slice servicing a set of mobile broadband applications (Kumar et al., 2019). For every experiment, the throughput of the schemes is normalized to the maximum throughput gained in that specific scenario. In these experiments, we use three versions of Reminis named Reminis high-delay (HD), medium-delay (MD), and low-delay (LD), corresponding to DTT values of 60 ms, 40 ms, and 30 ms, respectively. The right column of Fig. 6 shows the high-level results for our in-field tests. Reminis-HD achieves the same throughput as BBR2 while achieving on average 1.47\(\times\) lower 95th percentile delay. Moreover, Reminis-HD can increase the throughput by 1.34\(\times\) and reduce the 95th percentile RTT by 1.4\(\times\) compared to TCP CUBIC. With a tighter DTT, Reminis-MD can achieve the same throughput as CUBIC while having 1.7\(\times\) lower 95th percentile of delay. The results of the in-field evaluations are close to the NSA scenario emulations, which corroborates our assumptions and results for the NSA emulations.

### MEC-Flavored Emulations

As mentioned in Section 1.1, mobile edge computing integrated with 5G is one of the design opportunities in 5G networks. 5G aims to support under-10 ms latencies using the New Radio technology.
With support for low-latency connections, applications such as AR/VR are envisioned to be functional in the 5G environment (Kumar et al., 2019). Here, we emulate a 5G MEC scenario and investigate the behavior of different CC schemes in this scenario. To this end, we set the intrinsic RTT of the network to 10 ms and assume a VR application with a delay constraint of 20 ms is running. Other than this change in intrinsic RTT, we fix the setting to be representative of the SA scenarios. Fig. 7 shows the overall performance of Reminis and other CC schemes in this experiment. As shown, only four schemes can achieve the required latency desired by the VR application (the green area in the figure). However, with the help of the NDE module, Reminis achieves at least 2.4\(\times\) more throughput than the other three schemes. This promising performance highlights the benefits of the design decisions of Reminis. Generally, the Non-Deterministic Exploration module helps Reminis to achieve high throughput, while the Proactive Slowdown and Catastrophe Mitigation modules help Reminis satisfy its delay target. Appendix Section B gives information about the average delay performance of these CC algorithms in this experiment.

### Is Reminis Only Good in 5G Networks?

Here, we show that the mechanisms utilized by Reminis are effective at keeping the e2e delay low and adapting to channel capacity variations not only in 5G cellular networks but also in other networks such as 3G and 4G cellular networks. To that end, we use various 3G and 4G traces (gathered respectively by (Bordes et al., 2017) and (Bordes et al., 2017)) and evaluate the performance of different schemes. A few samples of these traces are shown in the Appendix. Figs. 8 and 9 show the results of these evaluations.7 There are two important remarks here. First, Reminis still performs very well in both 3G and 4G scenarios. For instance, compared to BBR2, on 4G and 3G traces, Reminis achieves 1.48\(\times\) and 1.33\(\times\) lower average queuing delay respectively, while BBR2's throughput is only 1.1\(\times\) and 1.05\(\times\) more than Reminis'. Second, the performance gap between other CC schemes and Reminis in 3G and 4G scenarios is smaller compared to the 5G scenarios. The main reason for that is the fact that 5G networks have an order of magnitude larger BDP, deeper buffers, and more volatile access links compared to 3G and 4G networks. This means that in the 5G setting, wrong actions by a CC scheme are more likely to be penalized and to manifest as performance issues.

Footnote 7: More results on the improvements of the 95th percentile of delay in these simulations are explained in Appendix Section C.

Figure 7. MEC-Flavoured Exp.

Figure 8. Experiments on 4G Traces

Figure 9. Experiments on 3G Traces

## 5. Deep Dive Evaluations

In this section, we will look under the hood and investigate the dynamics of Reminis and the role of its individual components. We will also investigate the impact of different parameters such as intrinsic RTT, buffer size, and DTT on Reminis. Finally, we will end this section by examining Reminis' fairness and overhead aspects.

### Dynamics of Reminis

For showing the dynamics of Reminis, we use simple scenarios to depict the underlying actions of different blocks in Reminis. To put it in context, we also illustrate the performance of CUBIC and BBR2 here.

**Reminis Response to Changes in Link Bandwidth:** To elaborate on the dynamics of Reminis, we use two different step scenarios in which we suddenly decrease/increase the link bandwidth.
Probing the behavior of Reminis in a simple step scenario will help us understand how Reminis responds to more complex traces, as the channel capacity can be modeled as a summation of shifted step functions. Fig. 10 shows Reminis' sending rate, queuing delay, and CWND over a link with capacity changing from 300 Mbps to 600 Mbps and vice versa. The intrinsic RTT of the experiment is 20 ms and DTT is 40 ms. In Fig. 10(a), when the link capacity increases, CUBIC and BBR2 are very slow to utilize this change. It takes them a few seconds to increase their sending rate to a point where they can utilize the link. Reminis, however, by inferring Zone 1, increases the CWND very fast. This increase in CWND value helps Reminis to fully utilize the link bandwidth shortly after the change in capacity. The Proactive Slowdown module is helpful here to stop Reminis from increasing the CWND too much. Fig. 10(b) shows the Reminis CWND value during the increase in link capacity. This figure shows that the Non-Deterministic Exploration module increases the CWND at each SI and helps Reminis to adapt to this change quickly, unlike BBR2 and CUBIC. Fig. 10(c) shows the performance of Reminis when the link bandwidth decreases from 600 Mbps to 300 Mbps. This figure shows that CUBIC and BBR2 suffer from a surge in their delay, while Reminis can control the delay increase so that it remains below DTT. All the depicted CC schemes in Fig. 10(c) adapt their sending rate to the link bandwidth quickly, but BBR2 and CUBIC have already occupied the queue so much that when the link capacity gets halved, they will experience a substantial surge in delay. Reminis is quick enough to adapt to the new scenario so the e2e delay does not exceed the delay target. Reminis will infer Zones 2 and 3, and as explained in Algorithm 1, it will start decreasing the CWND. Fig. 10(d) shows the changes in CWND with higher granularity. During this time, the Proactive Slowdown and the Catastrophe Mitigation modules decrease the CWND to match the new link capacity. After decreasing the CWND, if Reminis has decreased the window too much, the Non-Deterministic Exploration module will be activated (the last two samples of Fig. 10(d)) and increase the CWND. Moreover, looking at the CWND adjustments done by the AIMD module in between each SI, it is clear that in the first 3 SIs, as the AIMD module does not detect any packet loss, it keeps increasing the CWND between each SI.

### Impact of AIMD Block

Here, we investigate the question of why Reminis accompanies the Guardian block with a simple AIMD module. To observe the impact of the classic AIMD module, we do a simple ablation study and remove the AIMD block from Reminis. Using this new implementation of Reminis, we repeat all experiments in the SA scenario as described in Section 4.1 and gather the new version's overall throughput and delay performance. The results show that removing the AIMD block from Reminis leads to losing, on average, 15% (and up to 30%) of link utilization without any tangible improvement in delay performance. To give more intuition about the effect of this module, Fig. 11 shows a slice of a sample 5G trace. This figure illustrates how, without the AIMD block, Reminis fails to keep up with increases in the access link capacity. These results demonstrate the role of the AIMD block in Reminis.
The main intuition here is that since the Guardian works periodically (one action per SI), there are scenarios in which the Guardian can still miss the channel dynamics happening during one SI, as 5G links can change on very small time scales. That is where an AIMD block comes into play. A simple AIMD block with its Ack-triggered logic adds fine-grained dynamics to Reminis' actions and enables it to be more agile. Another benefit of the AIMD module is increasing the number of RTT samples during each SI, which are used to calculate the delay at each SI. Increasing the number of samples helps Reminis to get more reliable average statistics during each SI, as averaging among more RTT samples reduces the measurement noise. In short, by providing more samples, AIMD helps NCI to have a better view of the network condition.

Figure 10. Dynamics of Reminis in a Step Scenario

Figure 11. Impact of AIMD Block

### Impact of Buffer Size

One of the main characteristics of cellular networks is having per-user buffers at the base station. This helps the network to reduce the number of dropped packets and to offer a more reliable network to the users. Having separate queues for each user means that users don't compete over a common queue. This feature, despite mitigating some issues like fairness between multiple users' flows, leads to the well-known problem of bufferbloat (Krishna et al., 2017) and self-inflicted delay (Krishna et al., 2018). In this section, to measure the impact of different bottleneck buffer sizes on the performance of different schemes, we change the buffer size of the emulated network from 800 packets to 51200 packets. The choice of the lowest buffer size, 800 packets, is based on the findings of prior work (Krishna et al., 2018) regarding the buffer size of the NSA-5G network. As expected and shown in Fig. 13, CUBIC tries to occupy all the available buffer. This approach means that with increasing buffer size, CUBIC's delay performance degrades. In contrast, the average delay performance of Reminis is roughly independent of buffer size and is around the value of DTT (30 ms). This behavior from Reminis is rather expected, as it tries to control the CWND so that the overall delay meets the DTT requirement. Moreover, Fig. 12 shows that despite keeping the delay constant, Reminis achieves around 80% link utilization regardless of the underlying buffer size. NSA-5G networks have smaller buffers (4G buffers), so as long as there are still NSA base stations in the network, any proposed 5G CC algorithm should also be able to have a good performance in low-buffer settings. Meeting its DTT, Reminis can achieve 80% link utilization with the NSA buffer size.

### Impact of Delay Tolerance Threshold

The Delay Tolerance Threshold (DTT) is a key parameter in Reminis' design and performance. Reminis will become more conservative when the measured delay exceeds DTT, so we expect a trade-off between delay and link utilization based on different values of DTT. Large DTT values steer Reminis toward being more throughput-oriented, while small DTT values guide Reminis toward being more delay-oriented. In Fig. 14, Reminis-X means a version of Reminis where the DTT parameter has been set to X ms. For comparison, TCP Vegas, one of the major delay-oriented CC schemes, and TCP CUBIC, the most throughput-hungry CC scheme, have been added to Fig. 14. In addition, we accompany CUBIC with two active queue management (AQM) schemes, CoDel (Krishna et al., 2018) and Pie (Pie, 2018).
Although these schemes make changes in the network, they still cannot utilize more than 60% of the link, which shows a major drawback of these AQM schemes on 5G links. Fig. 14 shows that DTT has the expected impact on the performance of Reminis. With a larger DTT, we can guide Reminis to be more throughput-hungry, while a smaller DTT makes Reminis more delay-sensitive. One salient point in this experiment is that Reminis does not compromise an immense amount of throughput to meet its DTT. For instance, for DTT=30 ms, Reminis achieves its goal while its link utilization is only reduced by 20% compared to CUBIC, which has an average RTT equal to 60 ms.

### Impact of Network's Intrinsic RTT

The intrinsic RTT of a network is a function of different things including the UE-Server distance (Krishna et al., 2018). Therefore, in this experiment, we evaluate the performance of different schemes for networks with different intrinsic RTTs. In particular, we change the intrinsic RTT values of our emulated networks to \(\{5,10,20,30,40,50\}\)_ms_ and set the corresponding DTT values to \(\{7.5,15,30,45,60,75\}\)_ms_. We assume an SA version and, consequently, set the buffer size to the BDP for each tested intrinsic RTT value. 8 We define the deviation from the desired delay (D3) parameter as \(D3=\frac{d}{DTT}\). The D3 index indicates how much the average delay (\(d\)) is larger/smaller than DTT, with D3=1 meaning the average delay has met DTT.

Footnote 8: Note that Reminis automatically adjusts the value of SI to the observed mRTT of the network. That means, by design, Reminis utilizes different SI values for different settings.

Fig. 15 depicts the convex hull of each compared CC scheme in all the possible scenarios based on different values of intrinsic RTT. For each intrinsic RTT, we repeat the experiment 3 times. As Fig. 15 shows, Reminis has a D3 less than 1.03 and roughly 80% link utilization over the tested range of intrinsic RTTs. On the other hand, delay-based approaches like Copa or Vegas show poor utilization performance. For instance, Copa's link utilization can go down to 20%. Moreover, C2TCP can achieve a good D3, but its utilization can vary widely over the different tested intrinsic RTTs, which is not desirable. For BBR2, D3 varies widely with the value of mRTT, and for CUBIC both D3 and link utilization do, which is not a desirable feature for cellular networks.

### Fairness

Here, we investigate the fairness property of Reminis. We have created a network containing servers and one client connected via an emulated bottleneck link. We set the intrinsic RTT of the network to 20 ms. We send three separate Reminis flows toward the client, with 30-second gaps between the start of each flow, and measure the throughput of each flow at the client side. Fig. 16 shows the result of this experiment and demonstrates that Reminis flows can fairly share the bottleneck. When the second Reminis flow enters the network, at around \(t=30\)s, the first flow detects Zone 2/3, hence it reduces its CWND. This releases enough bandwidth for the second flow, and consequently, the two flows can share the bottleneck bandwidth. The same happens when the third flow enters the network.

### CPU Overhead

Considering the power constraints of devices in 5G networks, it is clear that a successful 5G-tailored CC scheme should be lightweight with low computational overheads. That said, here we show the lightweight aspect of Reminis.
To that end, we measure the average CPU utilization of different CC schemes by sending traffic from a server to a client over a 720 Mbps emulated link for 2 minutes. The choice of 720 Mbps for the link capacity comes from the average 5G link capacity measured in prior work (Shen et al., 2018). Fig. 17 shows the comparison between the average CPU utilization of the different schemes. Even though the current version of the Guardian module is implemented in user space, Reminis shows a very good performance in terms of CPU utilization. In particular, the CPU utilizations of Reminis and CUBIC (the default CC scheme implemented in the kernel) are around 9.2% and 6%, respectively. Having low overhead is a consequence of keeping the Reminis design simple, employing simple delay statistics, and utilizing low-overhead Ack-triggered (AIMD) actions.

## 6. Discussion

**Choice of DTT:** If the application is delay-sensitive and can provide Reminis with a DTT parameter, Reminis can use this value. Reminis starts after the slow-start phase of the AIMD block and will therefore have enough samples to infer the mRTT of the network. However, if the application's requested DTT is less than mRTT, Reminis can detect this problem and enforce DTT to be larger than the measured mRTT of the network. If the application does not provide a specific DTT, Reminis switches to its default, where \(DTT=1.5\times mRTT\).

**Does Reminis guarantee to always meet DTT requirements?** A novel concern in 5G networks is the occasional 5G "dead zones". As explained in Section 1.1, users entering these zones experience close-to-zero or zero link capacity, which can last for seconds. As no packet is served from the queue during this time, the queuing delay will inevitably soar. Therefore, any e2e CC scheme by nature, including Reminis, cannot control these types of scenarios. The bottom line is that although we showed Reminis is significantly better at controlling e2e delay in these scenarios than other state-of-the-art CC schemes, any e2e CC algorithm can only deliver QoS demands that are feasible in a network.

**Limitations of Reminis:** Reminis targets emerging applications with low/ultra-low latency requirements. Supporting such applications first and foremost requires networks with low/ultra-low intrinsic delays. This justifies and, indeed, encourages the use of edge-based architectures such as MEC. In such settings, competing with loss-based CC schemes like CUBIC, which fully utilize buffers, is less of a concern. That said, when these settings do not hold and Reminis coexists with loss-based flows that fully occupy queues, similar to any CC scheme that attempts to control the e2e delay, it faces problems when the DTT value (and mRTT) is way lower than the queuing delay caused by loss-based flows.

## 7. Conclusion

In this work, we demonstrate that achieving high throughput and low, controlled delay in highly variable and unpredictable 5G networks does not necessarily require convoluted learning-based schemes or prediction algorithms. To that end, we introduce Reminis, a simple yet adaptable e2e CC design tailored for 5G networks with provable convergence properties, and we show that properly exploiting non-deterministic throughput exploration algorithms combined with proactive/reactive delay control mechanisms is sufficient to effectively adapt to 5G cellular networks and achieve high performance.
Our controlled emulations and real-world experiments show the success of Reminis in achieving its design goals and demonstrate that Reminis can outperform state-of-the-art CC schemes on 5G networks while being deployment-friendly, with low overheads and no required changes to cellular network devices.
2309.09309
Predictive Fault Tolerance for Autonomous Robot Swarms
Active fault tolerance is essential for robot swarms to retain long-term autonomy. Previous work on swarm fault tolerance focuses on reacting to electro-mechanical faults that are spontaneously injected into robot sensors and actuators. Resolving faults once they have manifested as failures is an inefficient approach, and there are some safety-critical scenarios in which any kind of robot failure is unacceptable. We propose a predictive approach to fault tolerance, based on the principle of preemptive maintenance, in which potential faults are autonomously detected and resolved before they manifest as failures. Our approach is shown to improve swarm performance and prevent robot failure in the cases tested.
James O'Keeffe, Alan Gregory Millard
2023-09-17T15:54:48Z
http://arxiv.org/abs/2309.09309v1
# Predictive Fault Tolerance for Autonomous Robot Swarms

###### Abstract

Active fault tolerance is essential for robot swarms to retain long-term autonomy. Previous work on swarm fault tolerance focuses on reacting to electro-mechanical faults that are spontaneously injected into robot sensors and actuators. Resolving faults once they have manifested as failures is an inefficient approach, and there are some safety-critical scenarios in which any kind of robot failure is unacceptable. We propose a predictive approach to fault tolerance, based on the principle of preemptive maintenance, in which potential faults are autonomously detected and resolved before they manifest as failures. Our approach is shown to improve swarm performance and prevent robot failure in the cases tested.

## I Introduction

Autonomous robot swarms are suited to tasks that are dangerous and/or cover large areas because of their multiplicity and redundancy of hardware [17]. These characteristics were initially thought to provide swarm robotic systems with an innate robustness - i.e. the ability to tolerate faults and failures. Whilst this is true in some cases, there are others in which faults in individual robots can severely disrupt overall swarm performance [19]. This is especially true for partial failures that do not prevent the afflicted robot from communicating and attempting to interact with the swarm. An active approach to fault tolerance in robot swarms is therefore necessary for achieving long-term autonomy [2]. Active fault tolerance in robot swarms is generally understood to consist of fault detection, diagnosis and recovery (FDDR) [9][14]. Previous work has examined individual elements of FDDR by spontaneously injecting sensor and actuator faults into individual robots [14][18][9]. However, robots do not tend to spontaneously fail in the field. Rather, failures often result from gradual wear and degradation on sensor and actuator hardware [4]. Previous work towards FDDR in swarms has focused on handling faults _after_ they have occurred. Reactive FDDR relies upon the assumption that a fault can be resolved during operation - either autonomously, or by a human in the loop - and that the swarm can continue operating. There are many instances in which this is not true, particularly in environments that are inaccessible, dangerous, or in enclosed spaces that become quickly congested. Carlson et al. [4] highlight that robots have a mean-time-between-failures - around 8 hours in the robots they studied. Detecting and handling faults _before_ they manifest as failures is advantageous for minimising disruption. In this paper we demonstrate a novel approach to FDDR in robot swarms whereby faults are _predicted_, allowing at-risk robots to reach a safe area to receive maintenance before the fault manifests as a failure. The contribution of this research is the first predictive FDDR (PFDDR) system for robot swarms and the first such work to consider gradual hardware degradation, as opposed to spontaneous electro-mechanical faults.

## II Related Work

### Fault Detection

Fault detection in robot swarms has been approached in a number of ways. Christensen et al. [5] use a firefly-inspired approach to communicate the presence of faulty robots to the swarm; Millard compares a robot's observed behaviour against a simulated model [9]; Khadidos et al. compare multiple observations of a robot's state with its neighbours [6]; Tarapore et al. use an immune-inspired detection model [18]; Lee et al. focus instead on determining the most effective metrics by which faults can be detected [7].
All of these works consider reactive fault detection of spontaneous faults in individual robots.

### Fault Diagnosis

To our knowledge, the only research to address fault diagnosis in robot swarms has been our own immune-inspired work in simulation [14] and in hardware [12]. This approach diagnoses newly detected faults by their statistical similarity to previously resolved ones, but is a reactive approach that relies on the assumption that robots can repair each other in the field.

### Fault Recovery

Khadidos et al. [6] implement recovery by simply powering down faulty robots, which become inanimate objects for the remainder of a task. Whilst this is acceptable in some scenarios, the accumulation of obstructive objects in tightly enclosed spaces will be problematic if critical paths are blocked. Oladarin [11] considers a range of different recovery actions for different types of fault, but assumes that robots can repair each other in the field. Christensen et al. [5] use a symbolic recovery mechanism in which the faulty robot is gripped by another robot in the swarm, after which the fault is assumed to be resolved. Bossens and Tarapore [3] adopt an alternative approach to fault recovery by evolving a repertoire of robot controllers for a swarm foraging task. If a fault or perturbation is problematic for the current controller, the swarm can then adapt and select a different controller that is less susceptible to the fault, although the fault in the individual is not itself resolved.

### Summary

The existing body of research on swarm fault tolerance is relatively slim. To the best of our knowledge, the literature cited in this section represents the key approaches developed thus far, each of which _reacts_ to spontaneous electro-mechanical failures. The approaches that incorporate a recovery or resolution strategy assume that this can be achieved in the field autonomously, which may not always be possible. Thus, there is room for a novel approach that instead _predicts_ when robots are at risk of developing faults, such that they are able to preemptively return to a safe area where they can receive maintenance.

## III Methods

This work aims to provide proof-of-concept for new approaches to swarm fault tolerance in software simulation that will translate to near-term hardware solutions. We have chosen GPS-denied underground excavation as a case study application, as it is a topic of interest to industry, academia, and learned societies around the world [10]. The task is similar to foraging, which is a common benchmark application in swarm research [1], but differs insofar as robots manipulate and alter the topography of their environment in real time, navigate tightly constrained spaces as the tunnel is excavated, and must maintain an unbroken communications chain with a known reference point in order to localise. Underground excavation is also a scenario that lends itself to gradual degradation of sensor and actuator hardware due to the build-up of dust and debris, thus providing a realistic context for testing our PFDDR approach. We consider an excavating swarm of robots, in which each robot must remove material from a soil face and return it to the tunnel entrance, forming a long straight tunnel over time (see Figure 1). Robots localise with respect to a known reference point at the tunnel entrance.
We assess tunnelling performance simply by the amount of material excavated and the power expended in doing so. Although the ability of the swarm to maintain tunnel dimensions and axis is relevant, it is beyond the scope of this work. For ease of reading, all symbols used in the paper can be found in Table I.

### _Swarm Robot Model_

We have evaluated the efficacy of our proposed approach in simulation, using swarms of simulated TurtleBot3 robots [16]. We do not suggest that the TurtleBot3 is itself a suitable platform for underground excavation, but its availability and Robot Operating System (ROS) integration make it an ideal test platform for producing repeatable proof-of-concept data. We provide our simulated robots with the ability to localise other robots within 2.5 metres relative to themselves with a 10% margin of error, based on the distributed ultrasonic approach verified in hardware by Maxim et al. [8]. Robots can share self-estimated state information with any other robot within 50m - something that could be achieved with a Decawave 1000 chip [15], for example. The robots are also equipped with directional proximity sensors at \(\pm\) 0\({}^{\circ}\), 30\({}^{\circ}\), 60\({}^{\circ}\), and 90\({}^{\circ}\) for collision avoidance. Excavation typically requires a drilling or cutting tool, and a container to store and transport excavated material. In-depth modelling of soil excavation is beyond the scope of this work. Rather, each robot is assumed to be able to remove approximately its own volume worth of material from the soil face. For simplicity, we take a loaded robot to weigh twice as much as an unloaded robot.

### _Tunnelling Algorithm_

We propose a naive swarm tunnelling algorithm, exploiting local sensors and actuators. The algorithm can be summarised as follows: Each robot begins in a demarcated charging and maintenance zone outside of the excavation area. It will then move into a defined tunnel corridor (0.8m across) and head along the tunnel axis away from the charging/maintenance zone and into the excavation zone (defined as the tunnel corridor further than 1.5m from the charging/maintenance zone) until it encounters a soil face, at which point it will enter an excavation state and remove a single block of soil. Soil is modelled as discrete blocks covering 0.2m x 0.2m of ground (we do not consider tunnel height in this work). The robot will then carry the soil back to the recharging and maintenance area, where it is assumed to be deposited (see Figure 1). When a robot requires maintenance, or its power drops below 30%, it will interrupt its activities to return to the charging and maintenance area. Each robot performs collision avoidance for any object within 0.5m and \(\pm\) 90\({}^{\circ}\). The swarm maintains a chain-link of communication with the tunnel reference point by ensuring that each robot stays within 2m of its nearest neighbour with an unbroken communication link to the tunnel reference. To avoid clustering in enclosed spaces, a robot headed towards the soil face will maintain a distance greater than 1m from any robot it detects as being further along the tunnel than itself.

Fig. 1: **A:** A screenshot of the simulated tunnelling swarm with highlighted maintenance and recharging zone, digging corridor, excavation zone, communication range of an individual robot (approximately to scale), and arrows indicating the path taken during the excavation algorithm. **B:** A high-level state machine providing an overview of our proposed excavation algorithm. Avoiding collisions, maintaining an unbroken communication chain, and returning to the recharging/maintenance area when needed will take priority over the basic excavation algorithm. **C:** Relationships between degradation functions and degradation severity coefficients listed in Table I.
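A minimal Python sketch of this priority logic, mirroring the state machine in Figure 1B, is given below. It is an illustrative reconstruction rather than the controller used in our experiments: the thresholds follow the values stated above, but the `Mode` names and the sensing/actuation helpers called on `robot` are hypothetical.

```python
from enum import Enum, auto

class Mode(Enum):
    TO_FACE = auto()    # head along the tunnel axis towards the soil face
    EXCAVATE = auto()   # remove a single block of soil
    RETURN = auto()     # carry soil (or a fault flag) back to the charging/maintenance zone
    CHARGE = auto()     # recharge and/or receive maintenance

def step(robot):
    """One control update for a single robot (illustrative sketch only)."""
    # Priority overrides: these pre-empt the basic excavation loop.
    if robot.needs_maintenance or robot.battery < 0.30:
        robot.mode = Mode.RETURN                  # interrupt current activity and head home
    if robot.obstacle_within(0.5):                # collision avoidance within 0.5 m, +/- 90 degrees
        robot.avoid_collision()
        return
    if robot.distance_to_comm_chain() > 2.0:      # keep the chain to the tunnel reference unbroken
        robot.move_towards_chain()
        return

    # Basic excavation loop.
    if robot.mode == Mode.TO_FACE:
        if robot.at_soil_face():
            robot.mode = Mode.EXCAVATE
        else:
            robot.drive_along_corridor()          # stay > 1 m behind robots further up the tunnel
    elif robot.mode == Mode.EXCAVATE:
        if robot.remove_block():                  # True once one 0.2 m x 0.2 m block is loaded
            robot.mode = Mode.RETURN
    elif robot.mode == Mode.RETURN:
        if robot.at_base():
            robot.deposit_soil()
            robot.mode = Mode.CHARGE
        else:
            robot.drive_towards_base()
    elif robot.mode == Mode.CHARGE:
        if robot.battery >= 1.0 and not robot.needs_maintenance:
            robot.mode = Mode.TO_FACE
```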
### _Power Consumption and Degradation Models_

In this work, there are three robot processes that consume power and are subject to degradation at different rates: 1) sensing and communication, 2) locomotion, and 3) excavation. We use a naive model of power consumption whereby each process consumes some percentage of total battery capacity, \(\bar{P}_{0}\), per unit time. We simulate the susceptibility of these processes to degradation caused by dust and debris thrown into the air from the excavation process, and from traversing loose ground. Degradation affects both the physical process and the power consumed by it. For example, debris accumulating on motor hardware will create resistance as the motor turns, slowing the motor, generating heat, increasing power consumption, and reducing efficiency. We assume that, as with most electrical devices, the average power drawn by each actuator will be considerably lower than the maximum possible power draw - typically around 50%. As hardware degrades, the physical effects will initially not be noticeable, as there will be some overhead to draw more power to meet the desired output. However, as degradation continues, the power drawn by an actuator will increase to its limit, at which point degradation will manifest physically - e.g. as a reduction in velocity or rate of excavation. The power consumption and degradation models for each process are as follows:

_Sensing and Communication:_ Each robot is constantly emitting and receiving simulated ultrasonic signals and sharing state information. The power consumed by sensing hardware, \(\Delta P_{\mathcal{S}}\), is unaffected by the accumulation of debris on sensor hardware. However, the degradation function, \(D_{\mathcal{S}}\), affects the ultrasonic transmission range used for robot localisation. We implement this such that a robot's sensing range can drop to a minimum of 0.5m.

_Differential Drive:_ The power consumed by robot locomotion will depend on the robot's payload, \(M_{L}\), and will be affected by the amount of dust and debris accumulated on wheels and motors, modelled as the multiplier \(\rho_{l,r}\), a function of degradation coefficient \(\mathcal{dc}_{l,r}\), on \(\Delta P_{l,r}\) for the left and right wheels, respectively. We assume that a robot is able to transport its maximum payload without non-linear effects. Degradation also affects the velocity of each wheel, \(V_{l,r}\). An unloaded robot will not suffer any reduction in velocity until the degradation severity coefficient \(\mathcal{dc}_{l,r}>1\), because the increased power draw from the degradation function, \(D_{l,r}\), in this range would be no greater than the increased power draw from carrying its extra payload.

_Excavation:_ This work uses a simplified representation of the excavation process, in which discrete blocks are removed once a robot has been in an excavating state for an appropriate amount of time. The power consumed by the excavation process, \(\Delta P_{E}\), is affected by the multiplier \(\rho_{E}\), a function of degradation severity coefficient \(\mathcal{dc}_{E}\). The rate of excavation, \(\Delta E\), is affected by the degradation function, \(D_{E}\).
Under ideal conditions (\(\partial c_{E}\) = 0), removing the maximum volume of soil that a robot can carry (i.e. \(M_{L}\) = \(M_{R}\)) will expend 10% of a robot's total battery capacity. _Recharging:_ For ease of experimentation, and because the rate of recharging does not directly affect the metrics by which we measure the quality of our fault tolerance system, we accelerate the rate of robot recharging to 10% per second. ## IV Implementation, Results & Discussion ### _Normal and Faulty Behaviour Baselines_ All experiments were conducted using ROS 2 (Foxy) and Gazebo Classic. 10 replicates were performed for each experiment. We first tested the performance of the tunnelling algorithm under ideal conditions (no degradation/faults), using a swarm of five robots. This is a relatively small size for a swarm; however, the number of robots is proportional to tunnel dimensions. All algorithms presented are decentralised and scalable in principle with longer/wider tunnels (to be confirmed in future work). We tested how each type of fault would affect swarm performance, depending on how many robots were afflicted. In these experiments, each fault type is considered in isolation, with the number of faulty robots varying between 0-5. Every faulty robot has a fixed 15% probability that its associated degradation severity coefficient, \(\partial c_{\mathcal{S},E,l,r}\), will increase by an increment of 0.01 per second of simulated time. We assess swarm performance by the total number of blocks excavated by the swarm, and the power consumed in doing so. Figure 2A shows that sensor degradation in individual robots has the least impact on performance in terms of both power consumption and block removal. This is because of the built-in redundancy of the distributed localisation technique, and because the swarm operates over a relatively small area during the 15 minutes of simulated time per experiment, with excavated tunnel depth only ever reaching a maximum of 1.6m. The swarm therefore never needs more than one unaffected robot acting as a chain link to the reference point at any time. In all cases where 5 robots were affected by sensor degradation, the swarm completely lost the ability to localise itself before the end of the experiment. This suggests that a comparable degradation in overall performance could be expected with fewer affected robots were the swarm to be distributed over a larger physical area. Figure 2B shows that degradation in excavation hardware substantially reduces the number of blocks excavated and increases power consumption, with robots eventually becoming completely incapable of excavating, and hindering the rest of the swarm from reaching the soil face until returning to the recharging area. Because of the 30% cut-off at which a robot will interrupt its current activity to seek charge, robots with degraded digging hardware in these experiments always manage to reach the recharging zone before completely depleting their charge. Figure 2C shows that degradation of motor hardware also significantly reduces the number of blocks that the swarm can excavate in an experiment. Although motor degradation does not appear to have much impact on the total power consumed by the swarm, this is partly because an average robot operating in ideal conditions only needs to replenish its power once in 15 minutes of simulated time.
This obscures the fact that robots with motor degradation eventually become incapable of moving and completely deplete their power, unable to reach the recharging and maintenance zone. Every robot with unaddressed motor degradation eventually reaches this state. ### _Predictive Fault Tolerance_ The physical effects of degradation are negligible for \(\partial c_{E}<0.3\) and \(\partial c_{l,r}<0.3\), with the main impact seen in the rate of power consumption, which approximately doubles (see Table I). As a worst-case scenario, the system should therefore aim to detect degradation on excavating and motor hardware before \(\partial c_{E},\partial c_{l,r}>0.3\). Given the relatively small impact that sensing degradation has on the system, understanding when it should be detected and resources expended on maintenance is less obvious. The root cause of failure occurs when sensing range drops below 2m, a hard-coded value for maintaining proximity. As a general rule, then, a robot's sensing range should never be allowed to drop below the greatest value hard-coded into its controller. We implement our PFDDR system as follows: _Detection and Diagnosis:_ In this scenario, our system must detect and differentiate between 3 categories of fault: sensor hardware, motor hardware, and excavation hardware. The orthogonality of these categories allows us to hard-code diagnosis according to the data sets in which faults are detected. Each type of fault category is assigned an array into which robot states are recorded. Robot states are only recorded when a robot is performing the appropriate corresponding task - i.e. a robot that is excavating will write its power consumption state to a separate array than if it had been travelling. For motor faults, power consumption varies significantly depending on whether the robot is using one or both wheels, and whether it is carrying a payload. There are therefore four separate arrays for monitoring power consumed by locomotion. Each robot controller updates at a rate of 100Hz. Every state recorded into the array is averaged over 10 updates - i.e. states are recorded at a rate of 10Hz. Each array can contain up to 50 states, meaning that a full array represents 5 seconds of simulated time, although not necessarily consecutive. A fault is detected when the median value of its associated array is greater than a threshold value. For motor and excavation faults, robots record their power consumption states. For sensor faults, a handshake protocol is used whereby each robot will listen for confirmation that detected neighbouring robots have also received the simulated ultrasonic emission from the robot. If any neighbour within 2m does not confirm, the robot writes a 1 to the array, otherwise it writes a 0. If the median value of the sensing array is greater than 0, a sensor fault is detected. This process is shown in Figure 3. _Recovery:_ A robot returns to the recharging and maintenance area (Figure 1). Here, a robot will receive maintenance on the sensors/actuators that have been flagged in the detection/diagnosis process. It is assumed that maintenance is performed by a human, although it could be performed autonomously in the near-mid future. Once a robot has received maintenance on a given sensor or actuator, its corresponding degradation severity coefficient is set to zero. Performing maintenance on a sensor/actuator takes 5 seconds of simulated time (accelerated for ease of experimentation).
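As an illustration of the detection rule just described, the sketch below keeps one fixed-length buffer per fault category and flags a fault when the buffer's median exceeds a threshold. The threshold values, class layout, and the full-buffer requirement are our own assumptions, not the implementation used in the experiments.

```python
from collections import deque
from statistics import median

class FaultMonitor:
    """One fixed-length state buffer per fault category; a fault is flagged when
    the median of the buffer exceeds a threshold. A buffer of 50 states at 10 Hz
    corresponds to 5 s of (not necessarily consecutive) task time."""

    def __init__(self, threshold, max_states=50):
        self.states = deque(maxlen=max_states)
        self.threshold = threshold

    def record(self, value):
        # The caller only records while the corresponding task is being performed,
        # e.g. excavation power draw is recorded only while excavating.
        self.states.append(value)

    def fault_detected(self):
        # Assumption: wait for a full buffer before testing the median.
        return len(self.states) == self.states.maxlen and median(self.states) > self.threshold

# One monitor per category (motor faults would use four locomotion buffers in
# practice: one/both wheels, loaded/unloaded). Threshold values are placeholders.
excavation_power = FaultMonitor(threshold=0.012)
sensor_handshake = FaultMonitor(threshold=0.0)   # record 1 on a missed handshake, else 0
```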
### _Faults in Isolation_ To test our PFDDR system, we reran our experiments with each fault type in isolation whilst the system was active. Figure 2D-F show that the power consumed by the swarm when our PFDDR system is implemented is largely the same as without. For sensor faults, there is a slight decrease in the number of blocks excavated when our PFDDR system is active. This can be explained by the fact that robots will spend more experiment time returning to the maintenance area when faults are detected. In the case of motor and excavation faults, the total number of blocks excavated by the swarm is significantly improved by our PFDDR system. ### _Faults in Combination_ We next experiment with all fault types in combination. As it is unlikely in a real world scenario that all robots in a swarm would degrade at the same rate, we initialise each fault type with a random probability (between 1% and 15% per second of simulated time) of its corresponding degradation severity coefficient increasing by 0.01. Figure 4A and B show the performance of the swarm with and without the PFDDR system, with the swarm operating in ideal conditions included as a control. Unsurprisingly, overall performance is vastly reduced when robots are left to degrade over the course of the experiment. Performance improves considerably when our PFDDR system is active. Figure 4A shows that, when our PFDDR system is implemented, the total number of blocks excavated by the swarm is far closer to the ideal conditions control than when the swarm is allowed to degrade. Across all 10 experimental replicates, the total number of robots that deplete their entire power supply after 15 minutes of simulated time when the system is left to degrade is 9. That number is reduced to 0 when our PFDDR system is implemented. Whilst we would not expect our prototype PFDDR system to completely eliminate robot failure in all cases as is, this result serves as a strong validation of the principle of PFDDR in preventing robot failures. A reactive FDDR system, by comparison, would not detect a fault until a later stage of degradation. In the case of motor failure, this would potentially require a faulty robot to be retrieved from the tunnel, causing severe disruption. A direct comparison of PFDDR vs. reactive FDDR will be included in future work. Figure 4B shows that the total Fig. 3: A high level state machine outlining our proposed PFDDR system. Each constant is selected with respect to the equations given in Table I and the rates of power consumption observed during experiments for \(\partial\epsilon_{E}<0.3\) and \(\partial\epsilon_{J,J}<0.3\). Fig. 2: Row **i** shows the total number of blocks excavated by the 5-robot swarm in 15 minutes of simulated time, when up to 5 robots suffer from **A**: sensor degradation, **B**: excavation degradation, or **C**: motor degradation without our PFDDR system; or **D**: sensor degradation, **E**: excavation degradation, or **F**: motor degradation with our PFDDR system. Row **ii** shows the total power consumed by the 5-robot swarm in 15 minutes of simulated time as a percentage of a single robot’s battery capacity, where columns **A**-**F** indicate the same fault categories, with and without our PFDDR system. power consumed by the swarm when our PFDDR system is implemented is significantly higher than when the swarm is left to degrade, and almost twice as high as our ideal conditions control. 
To some extent, this is unsurprising as a robot that is left to degrade will eventually fail to reach the recharge/maintenance zone and cease to consume power once its remaining supply is depleted. Nonetheless, Figure 4B demonstrates a hitherto unreported artifact of swarm fault tolerance: if the rate of power consumption is the first indication of hardware degradation, a degraded system will inevitably consume more power than an ordinary system model would account for - even if active PFDDR prevents it from ever manifesting as a failure. Assessment of power consumption should therefore be included in future analysis of the effectiveness of swarm PFDDR systems. Given that there are many scenarios in which charging resources might be limited, this result is open to criticism as a negative feature of our system and warrants further investigation. However, we would argue that, for autonomous systems in general, consuming additional power is preferable to failure at a critical moment. Furthermore, the severity of the result is very likely due to the simple means of fault detection, for which there is considerable scope for improvement. Figure 4C displays the values of each type of degradation severity coefficient at the moment our PFDDR system detects a fault. Sensor faults are typically detected very quickly (\(\partial c_{\mathcal{S}}\) = 0.16 on average). Motor faults tend to be detected whilst \(\partial c_{l,r}<0.3\), but occasionally at much higher values - usually because the degradation severity coefficient has increased whilst the robot has been stationary and therefore not recording power consumption states. This effect is far more obvious in the detection of excavation degradation, which typically occurs at \(\partial c_{E}>0.4\) - considerably higher than desired. This is because robots spend a minority of time in the excavation state, whilst \(\partial c_{E}\) gradually increments unnoticed. This highlights a need for active precautions against passive degradation - for example, periodic diagnostic checks similar to those implemented in our previous work [13] could be used to routinely check for degradation on actuators that are used less frequently. ## V Conclusions & Future Work In this paper we have argued that swarm fault tolerance should be predictive, as there are real-world scenarios in which any robot failure is unacceptable. We consider a swarm excavation scenario and propose a PFDDR system. We demonstrate that PFDDR can be used to avoid robot faults manifesting as failures by detecting and diagnosing early signs of degradation and performing preemptive maintenance. Although preemptive maintenance is not itself a new concept, we are the first to apply it to swarm robotics. Our system was able to maintain a comparable rate of excavation and completely prevent the loss of robots in the cases tested, albeit at a significantly increased power cost. Whilst the system described here is a simple proof-of-concept, this work represents a fundamental shift in approach to swarm fault tolerance, as well as multi-robot systems in general. There are many further avenues to explore within this approach, and it is our hope that other swarm researchers will adopt a predictive approach in their own work on fault tolerance and build on our progress. Our future work will investigate more sophisticated approaches to predictive detection and diagnosis (e.g.
machine learning), incorporate path-planning algorithms into robot behaviour and recovery strategies, and use hardware experiments to produce high fidelity models of power consumption and hardware degradation. We will also compare our PFDDR approach against state-of-the-art FDDR approaches. ### Limitations #### Naive Models We use naive models of power consumption and hardware degradation, as well as a naive controller algorithm as the testbed for this work. Whilst our system implementation provides effective proof-of-concept for our predictive FDDR system, we acknowledge that our models will not transfer directly into a real world system without modification. #### Simulated Data We are precluded from performing genuine hardware experiments by the fact that robots capable of autonomous excavation are not commercially available. We will improve our models in future work, for example by measuring the power consumption of the types of robot platforms that are powerful enough to excavate (e.g. Clearpath models) and of drill augers. Our previous work on swarm fault diagnosis shows that trends observed in simulation tend to be replicated in hardware [12]. ## Acknowledgements This work was funded by the Royal Academy of Engineering UK IC Fellowship Award, ICRF2223-6-121 (O'Keeffe).
2309.05533
Safe and Stable Adaptive Control for a Class of Dynamic Systems
Adaptive control has focused on online control of dynamic systems in the presence of parametric uncertainties, with solutions guaranteeing stability and control performance. Safety, a related property to stability, is becoming increasingly important as the footprint of autonomous systems grows in society. One of the popular ways for ensuring safety is through the notion of a control barrier function (CBF). In this paper, we combine adaptation and CBFs to develop a real-time controller that guarantees stability and remains safe in the presence of parametric uncertainties. The class of dynamic systems that we focus on is linear time-invariant systems whose states are accessible and where the inputs are subject to a magnitude limit. Conditions of stability, state convergence to a desired value, and parameter learning are all elucidated. One of the elements of the proposed adaptive controller that ensures stability and safety is the use of a CBF-based safety filter that suitably generates safe reference commands, employs error-based relaxation (EBR) of Nagumo's theorem, and leads to guarantees of set invariance. To demonstrate the effectiveness of our approach, we present two numerical examples, an obstacle avoidance case and a missile flight control case.
Johannes Autenrieb, Anuradha M. Annaswamy
2023-09-11T15:17:16Z
http://arxiv.org/abs/2309.05533v1
# Safe and Stable Adaptive Control for a Class of Dynamic Systems ###### Abstract Adaptive control has focused on online control of dynamic systems in the presence of parametric uncertainties, with solutions guaranteeing stability and control performance. Safety, a related property to stability, is becoming increasingly important as the footprint of autonomous systems grows in society. One of the popular ways for ensuring safety is through the notion of a control barrier function (CBF). In this paper, we combine adaptation and CBFs to develop a real-time controller that guarantees stability and remains safe in the presence of parametric uncertainties. The class of dynamic systems that we focus on is linear time-invariant systems whose states are accessible and where the inputs are subject to a magnitude limit. Conditions of stability, state convergence to a desired value, and parameter learning are all elucidated. One of the elements of the proposed adaptive controller that ensures stability and safety is the use of a CBF-based safety filter that suitably generates safe reference commands, employs error-based relaxation (EBR) of Nagumo's theorem, and leads to guarantees of set invariance. To demonstrate the effectiveness of our approach, we present two numerical examples, an obstacle avoidance case and a missile flight control case. ## I Introduction The field of adaptive control has focused on providing real-time inputs for dynamic systems through parameter learning and control design using a stability framework [1, 2, 3, 4, 5, 6]. A different direction of research has been growing in the area of safety-critical systems [7, 8, 9] motivated by the need to provide verifiable guarantees of safe behavior in systems with mixed autonomy. This paper takes a step in combining adaptive control methods with safety-critical methods for a specific class of dynamic systems. In addition to ensuring safety and stability, the proposed adaptive control design also seeks to accommodate magnitude constraints on the control input. The tools utilized in establishing stability in adaptive control include Lyapunov stability, analytical continuity, and a reference model that establishes a target for the adaptive control system to track. In the case of safety, control barrier functions (CBFs) and the notion of positive invariance are utilized in order to design the exogenous input into the system. The approach we have used in this paper is a careful combination of both sets of tools; a CBF-based filter design is used in order to design the reference input into the reference model. This in turn is followed by the use of a reference model that utilizes a closed-loop structure [10] and a calibration that acknowledges the possibility of input saturation. With the resulting calibrated closed-loop reference model (CCRM), global boundedness and tracking in the absence of input saturation and a domain of attraction result in the presence of input constraints are established. In all cases, safety of the system is guaranteed due to the combined use of Calibrated CBF and CCRM. Several approaches have been reported in the literature where CBFs are are used to ensure safety for a plant with parametric uncertainties [11, 12, 13, 14, 15, 16]. Approaches in [12, 13] use a robust but conservative approach in the choice of how the system is rendered safe. 
A less restrictive approach is used in [14] where a CBF filter minimizes the risks of controlling an unknown model and a controller learns how to operate the system via a data-driven approach. A drawback of that approach is that during the learning phase, no guarantees of set invariance or bounds can be provided. In [15], the notion of a CBF is expanded as an adaptive CBF which requires the barrier function to exist for all unknown parameters and all adaptive gains in a set. A similar approach is proposed in [16] where L1-adaptive methods are used to update the parameters and the corresponding Control Lyapunov Function (CLF) is assumed to exist for all parameter estimates. While magnitude limits are imposed on the control input in [16], those constraints are imposed in the development of the CLFs, which may not be guaranteed to lead to a feasible solution. Safety has also been addressed using non-CBF approaches [17, 18, 19] in the form of state-constraints. In [17, 18], the authors introduced modulation functions that are able to lower the control input such that it never directs the system out of a chosen set of states. In [19] a bounding function on the reference input is imposed to lower the control input and therefore ensure state-constraints. The trade offs between the use of such a bounding function and command following are however only empirically addressed in these papers. In contrast, a CBF-based approach, which is used in this paper, allows a streamlined use of a computational solution within a constrained optimization framework, with a quadratic cost. In this paper, we propose an adaptive controller that guarantees stability of the closed-loop system in the presence of parametric uncertainties, and safety, both with and without input-saturation. Unlike [15], the computational burden on CBF is significantly reduced by removing the requirements to be satisfied for all unknown parameters. Instead, the CBF is designed using a reference model and suitably calibrated to accommodate the presence of adaptation. Preliminaries and problem statement are presented in Section II. The main contributions, the development of an adaptive controller with stability and safety properties, are presented in Sections III and IV. Section III does not include any magnitude limits on the control input, while Section IV includes these limits. Section V includes numerical simulations. ## II Preliminaries & Problem Formulation ### _Preliminaries_ We define a nonlinear continuous system \[\dot{x}(t)=f(x(t)) \tag{1}\] where \(x(t)\in\mathbf{R}^{n}\). In order to define safety, we consider a continuously differentiable function \(h:\chi\to\mathbf{R}\) where \(\chi\subset\mathbf{R}^{n}\), and a set \(S\) defined as the zero-superlevel set of \(h\), yielding: \[S\triangleq\big{\{}x(t)\in\chi|h(x(t))=0\big{\}} \tag{2}\] \[\partial S\triangleq\big{\{}x(t)\in\chi|h(x(t))\geq 0\big{\}} \tag{3}\] \[int(S)\triangleq\big{\{}x(t)\in\chi|h(x(t))>0\big{\}} \tag{4}\] The following definitions are introduced [20, 21]: **Definition 1**.: The set \(S\) is positively invariant for the system (1), if for every \(x_{0}\in S\), it follows \(x(t)\in S\) for \(x(0)=x_{0}\) and all \(t\in I(x_{0})=[0,\tau_{max}=\infty)\). **Definition 2**.: The set \(S\) is weakly positively invariant for the system (1), if among all the solutions of (1) originating in \(x_{0}\in S\), there exists at least one globally defined solution \(x(t)\) which remains inside \(S\) for all \(t\in I(x_{0})=[0,\tau_{max}=\infty)\). 
Next, we define the distance from a point to a set: **Definition 3**.: Given a set \(S\subset\mathbf{R}^{n}\) and a point \(y\subset\mathbf{R}^{n}\), the distance from the point to the set is defined as \[dist(y,S)=\inf_{w\in S}\|y-w\|_{*} \tag{5}\] where \(\|\cdot\|_{*}\) is any relevant norm. Based on Definition 3, we can formulate the definition of a tangent cone for a closed set. **Definition 4**.: Given a closed set \(S\), the tangent cone to \(S\) at \(x\) is defined as: \[T_{S}(x)=\Big{\{}z:\liminf_{\tau\to 0}\frac{dist(x+\tau z,S)}{\tau}=0\Big{\}} \tag{6}\] If \(S\) is convex \(T_{S}(x)\) is convex, and "\(\liminf\)" can be replaced by "\(\lim\)". Furthermore if \(x\in int(S)\), then \(T_{S}(x)=\mathbf{R}^{n}\), whereas if \(x\notin S\), then \(T_{S}(x)=\emptyset\), since \(S\) is defined as a closed set. Therefore \(T_{S}(x)\) is only non-trivial on the boundary of \(S\). We use Definition 4 to introduce Nagumo's theorem [20]: **Theorem 1**.: Consider the system defined in (1). Let \(S\subset\mathbf{R}^{n}\) be a closed set. Then, \(S\) is weakly positively invariant for the system if and only if (1) satisfies the following condition: \[f(x(t))\in T_{S}(x(t)),\ \ \text{for}\ \ \forall x\in S \tag{7}\] The theorem states that if the direction of the dynamics defined in (1) for any \(x(t)\) at the boundary of the safe set \(\partial S\) points tangentially or inside to the safe set \(S\), then the trajectory \(x(t)\) stays in \(S\). **Definition 5**.: A continuous function \(\alpha:(-b,a)\to\mathbf{R}\), with \(a,b>0\), is an extended class \(\mathcal{K}\) function \((\alpha\in\mathcal{K})\), if \(\alpha(0)=0\) and \(\alpha\) is strictly monotonically increasing. If \(a,b=\infty\), \(lim_{r\to\infty}\alpha(r)=\infty\), \(lim_{r\to-\infty}\alpha(r)=-\infty\) then \(\alpha\) is said to be a class \(\mathcal{K}_{\infty}\) function \((\alpha\in\mathcal{K}_{\infty})\). **Definition 6**.: For the system considered in (1), a continuously differentiable and convex function \(h:\mathbf{R}^{n}\to\mathbf{R}\) is a zeroing barrier function (ZBF) for the set \(S\) defined by (3) and (4), if there exist an extended class \(\mathcal{K}\) function \(\alpha(h(x(t)))\) and a set \(S\in\mathbf{R}^{n}\) such that \(\forall x\in S\), \[\dot{h}(x(t))\geq-\alpha(h(x(t))) \tag{8}\] The above definitions lead to a less restrictive version of (7) as it weakens the requirement to that in (8). We now expand the scope of the problem statement from (1) to those with an affine control input, of the form: \[\dot{x}=f(x(t))+g(x(t))u(t) \tag{9}\] where \(g\) is Lipschitz and \(u(t)\in\mathbf{R}^{m}\). We introduce the notion of a Control Barrier Function (CBF) such that its existence allows the system to be rendered safe w.r.t. \(S\)[22, 23] and allows a weaker requirement for system safety with a control input \(u(t)\), similar to (8). **Definition 7**.: Let \(S\subset\chi\) be the zero-superlevel set of a continuously differentiable function \(h:\chi\to\mathbf{R}\). The function \(h\) is a zeroing control barrier function (ZCBF) for \(S\), if there exists a class \(\mathcal{K}_{\infty}\) function \(\alpha(h(x(t)))\) such that for the system defined in (9) we obtain: \[\sup_{u\in\mathbf{R}^{m}}\frac{\partial h}{\partial x}\left[f(x(t))+g(x)u(t) \right]\geq-\alpha(h(x(t))) \tag{10}\] for all \(x\in S\). 
Using the Lie derivative notation, we obtain the following formulation for a ZCBF considering the system defined in (9): \[\dot{h}(x(t))=L_{f}h(x(t))+L_{g}h(x(t))u(t) \tag{11}\] ### _Problem Formulation_ We consider a linear plant with parametric uncertainties of the form: \[\dot{x}_{p}(t)=A_{p}x_{p}(t)+B_{p}\Lambda R_{u_{0}}(u(t)) \tag{12}\] where \(x_{p}(t)\in\mathbf{R}^{n}\) is a measurable state vector and \(u(t)\in\mathbf{R}^{m}\) is a control input vector. The matrices \(A_{p}\in\mathbf{R}^{n\times n}\) and \(\Lambda\in\)
## III Safe Adaptive Control Design with Open-Loop Reference Model In this section, we consider a simpler version of the problem statement, where the control input magnitude limit is removed. The main challenge in ensuring positive invariance for the plant in (12), which has uncertainties in \(A_{p}\) and \(\Lambda\), is the design of a suitable ZCBF, which requires the model to be known, as is evident from (10). We therefore first choose a target system, i.e. a reference model, that the adaptive system can be made to contract towards. This reference model is chosen so that its state approaches the desired reference \(x_{d}(t)\) and simultaneously allows the generation of a suitable ZCBF that ensures safety. We choose an open-loop reference model (ORM) of the form: \[\dot{x}_{m}(t)=A_{m}x_{m}(t)+B_{m}r(t) \tag{14}\] where \(x_{m}(t)\in\mathbf{R}^{n}\) is the reference model state vector and \(r(t)\in\mathcal{R}\subset\mathbf{R}^{m}\) the reference input vector. The matrix \(A_{m}\in\mathbf{R}^{n\times n}\) is a Hurwitz matrix, and \(B_{m}\in\mathbf{R}^{n\times m}\) has full column rank. It is easy to see that if \[r(t)=B_{m}^{+}(\dot{x}_{d}(t)-A_{m}x_{d}(t)) \tag{15}\] then \(x_{m}(t)\) approaches \(x_{d}(t)\), where \(B_{m}^{+}\) denotes the Moore-Penrose inverse of \(B_{m}\). In order to ensure that \(x_{m}(t)\) stays inside a safe set \(S\), rather than choose \(r\) as in (15), we use a QP-ZCBF safety filter for the plant in (12) as follows [11]: \[\min_{r(t)\in\mathcal{R}}\|r-r^{*}\|_{2}\] (16) s.t. \[\frac{\partial h}{\partial x}\left[A_{m}x_{m}+B_{m}r\right]\geq- \alpha(h(x_{m}))+\Delta,\] where \(r^{*}=B_{m}^{+}(\dot{x}_{d}-A_{m}x_{d})\) and \(\Delta>0\) is a positive constant that introduces a safety buffer. The optimal solution of (16) can be easily determined by using KKT conditions given by: \[r-r^{*}-L_{B_{m}}h(x_{m})^{T}\lambda=0 \tag{17a}\] \[\lambda(-L_{A_{m}x_{m}}h(x_{m})-L_{B_{m}}h(x_{m})r-\alpha(h(x_{m} ))+\Delta)=0\] (17b) \[-L_{A_{m}x_{m}}h(x_{m})-L_{B_{m}}h(x_{m})r-\alpha(h(x_{m}))+\Delta\leq 0 \tag{17c}\] with a suitable choice of \(\lambda\geq 0\). In what follows, for ease of exposition, we choose the class \(\mathcal{K}\)-function \(\alpha(h(x))=\gamma h(x)\), where \(\gamma\) is a positive scalar constant [22, 24]. ### _Adaptive control design_ Our reference system is now determined using (14) and (16) as \[\dot{x}_{m}(t)=A_{m}x_{m}(t)+B_{m}r_{s}(t) \tag{18}\] The following assumptions are made regarding the unknown parameters in (12): **Assumption 1**.: Constant matrices \(\theta_{x}^{*}\) and \(\theta_{r}^{*}\) exist that solve the following: \[A_{m}=A_{p}+B_{p}\Lambda\theta_{x}^{*} \tag{19}\] \[B_{m}=B_{p}\Lambda\theta_{r}^{*} \tag{20}\] **Assumption 2**.: The uncertainty \(\Lambda\) is a diagonal positive definite matrix. We now propose the adaptive controller for the plant in (12): \[u(t)=\widehat{\theta}_{x}(t)x_{p}(t)+\widehat{\theta}_{r}(t)r_{s}(t) \tag{21}\] The time-varying parameters in (21) are adjusted using the following adaptive laws: \[\dot{\widehat{\theta}}_{x}(t)=-\Gamma_{x}x_{p}(t)e_{x}(t)^{T}PB_{p}\,,\quad \Gamma_{x}>0 \tag{22}\] \[\dot{\widehat{\theta}}_{r}(t)=-\Gamma_{r}r(t)e_{x}(t)^{T}PB_{p}\,,\quad\Gamma_ {r}>0 \tag{23}\] where \(e_{x}(t)=x_{p}(t)-x_{m}(t)\) and \(P\) is the solution of the Lyapunov equation \(A_{m}^{T}P+PA_{m}=-Q\), where \(Q>0\). Both \(\Gamma_{x}\) and \(\Gamma_{r}\) are positive definite matrices defined as the adaptive update gains. 
We further introduce a corresponding output error \(e_{u}(t)=u(t)-u^{*}(t)\), which will be useful to quantify the safety of the adaptive controller, where \(u^{*}(t)\) represents the ideal control input and is defined as: \[u^{*}(t)=\theta_{x}^{*}x_{p}(t)+\theta_{r}^{*}r_{s}(t) \tag{24}\] In what follows, it will be assumed that the reference input \(r_{s}(t)\) has a bounded derivative. **Theorem 2**.: The overall closed-loop adaptive system defined by the plant in (12), the control input in (21) and the adaptation laws in (22), (23) has globally bounded solutions for any initial conditions \(x_{p}(t_{0})\), \(\widehat{\theta}_{x}(t_{0})\), and \(\widehat{\theta}_{r}(t_{0})\), and both the errors \(e_{x}(t)\) and \(e_{u}(t)\) converge to zero as \(t\to\infty\). The proof of the theorem follows from standard adaptive control arguments, since the error dynamics is of the form \[\dot{e}_{x}(t)=A_{m}e_{x}(t)+B_{p}\Lambda(\widetilde{\theta}_{x}(t)x_{p}(t)+\widetilde{\theta}_{r}(t)r_{s}(t)) \tag{25}\] where \(\widetilde{\theta}_{x}(t)=\widehat{\theta}_{x}(t)-\theta_{x}^{*}\), \(\widetilde{\theta}_{r}(t)=\widehat{\theta}_{r}(t)-\theta_{r}^{*}\) and together they admit a Lyapunov function \[V(e_{x}(t),\widetilde{\theta}_{x}(t),\widetilde{\theta}_{r}(t))=\frac{1}{2}e_{x}^{T}(t)Pe_{x}(t) \tag{26}\] \[+\frac{1}{2}\operatorname{Tr}[\widetilde{\theta}_{x}(t)\Gamma_{x}^{-1}\widetilde{\theta}_{x}^{T}(t)\Lambda]+\frac{1}{2}\operatorname{Tr}[\widetilde{\theta}_{r}(t)\Gamma_{r}^{-1}\widetilde{\theta}_{r}^{T}(t)\Lambda]\] It is easy to see that \(\dot{V}\leq 0\), and \(e_{x}\in\mathcal{L}_{2}\). As \(e_{x}(t)\) is bounded and has a bounded derivative, an application of Barbalat's Lemma leads to \(\lim_{t\rightarrow\infty}e_{x}(t)=0\)[1]. From (25), it follows that \(e_{u}(t)\) is an input into an LTI system, with a bounded derivative, whose state is \(e_{x}(t)\); therefore it follows that \(\lim_{t\rightarrow\infty}e_{u}(t)=0\)[25]. ### _Safety in the presence of uncertainties_ With stability guaranteed from the discussions above, we now derive conditions for the safety of the proposed adaptive controller. The core idea is to render the known reference model safe, i.e., \(h(x_{m})\geq 0\) for \(\forall t\geq 0\), and to use the adaptive system to make the closed-loop system contract towards the reference model, which ensures that \(h(x_{p})\geq 0\) for \(\forall t\geq 0\). The goal is to derive conditions under which \[\dot{h}_{p}\geq-\gamma h(x_{p}(t))+\Delta \tag{27}\] We note that the QP-CBF filter ensures that \[\dot{h}_{m}=\underbrace{\frac{\partial h}{\partial x}|_{x_{m}}}_{a_{0}} \underbrace{\left[A_{m}x_{m}(t)+B_{m}r_{s}(t)\right]}_{a_{1}}\geq-\underbrace{\gamma h(x_{m}(t))}_{a_{2}}+\Delta \tag{28}\] where \(\dot{h}_{p}=\frac{\partial h}{\partial x}|_{x_{p}}\) and \(\dot{h}_{m}=\frac{\partial h}{\partial x}|_{x_{m}}\).
To ensure that a ZCBF exists for the adaptive system specified by (12),(21)-(23), we consider \[\dot{h}_{p}=\frac{\partial h}{\partial x}|_{x_{p}}[A_{p}x_{p}(t)+B_{p}\Lambda (\widehat{\theta}_{x}(t)x_{p}(t)+\widehat{\theta}_{r}(t)r_{s}(t))] \tag{29}\] From (12), (19)-(21), (24), (28), and the definition of the errors \(e_{x}\) and \(e_{u}\), we obtain that \[\dot{h}_{p}=\underbrace{\frac{\partial h}{\partial x}|_{x_{p}}}_{b_{0}} \underbrace{\left[A_{m}x_{m}(t)+B_{m}r_{s}(t)\right.}_{a_{1}} \tag{30}\] \[+\underbrace{A_{m}e_{x}(t)+B_{p}\Lambda e_{u}(t)}_{\widehat{e}}\] Algebraic manipulations allow us to rewrite (30) using (28) as \[\dot{h}_{p}=b_{0}[a_{1}+\bar{e}] \tag{31}\] \[\geq-a_{2}+\Delta+\underbrace{a_{0}\bar{e}+(b_{0}-a_{0})\bar{e} +(b_{0}-a_{0})a_{1}}_{z(t)}\] Since the goal is to establish safety of the closed-loop adaptive system, we utilize the following two inequalities: \[|g(x_{p}(t))-g(x_{m}(t))|\leq\kappa_{1}|e_{x}(t)| \tag{32}\] \[|h(x_{p}(t))-h(x_{m}(t))|\leq\kappa_{2}|e_{x}(t)| \tag{33}\] where \(g(x(t))=\frac{\partial h}{\partial x}\), \(\kappa_{1}\) and \(\kappa_{2}\) are Lipschitz constants associated with \(g(x_{p}(t))\) and \(h(x_{p}(t))\), respectively. We note additionally from Theorem 2 that \(\|x_{p}(t)\|\), \(\|h(x_{p}(t))\|\) and \(\|\bar{e}\|\) are bounded. Therefore, \(|z(t)|\leq z_{0}\), where \(z_{0}\) is defined as \[z_{0}=|a_{0}||\bar{e}|+\kappa_{1}|e_{x}(t)\left(|\bar{e}|+|a_{1}|\right) \tag{34}\] Using the lower bound \(-z_{0}\) for \(z(t)\), we rewrite (31) as \[\dot{h}_{p}\geq-\gamma h(x_{p}(t))+\Delta-\bar{F}(|\bar{e}|,|e_{x}(t)|) \tag{35}\] where \[\bar{F}(|\bar{e}|,|e_{x}(t)|)=\gamma\kappa_{2}|e_{x}(t)|+|a_{0}||\bar{e}|+ \kappa_{1}|e_{x}(t)|\left(|\bar{e}|+|a_{1}|\right) \tag{36}\] The inequality in (35) implies that safety of the closed-loop adaptive system will be guaranteed after \(t\geq t_{0}+T\), where \(T\) is a finite interval, as \(\bar{F}(t)\to 0\) as \(t\rightarrow\infty\), and therefore \(|\bar{F}(t)|\leq\Delta\)\(\forall t\geq t_{0}+T\). This in turn implies that the closed-loop adaptive system will remain safe for all \(t\geq t_{0}\) if \[h(x_{p}(t))\geq h_{0}\ \ \forall t\geq[t_{0},t_{0}+T] \tag{37}\] where \(\gamma h_{0}\geq F_{\max}\) where \[F_{\max}=\max_{t\in[t_{0},t_{0}+T]}\bar{F}(t) \tag{38}\] As \(F(t)\) is bounded, it is clear that such an \(F_{\max}\) exists. Condition (37) is satisfied if there is a separation between the period of adaptation and the time at which the system approaches its limit of safety. This property is summarized in the following theorem, where \(e_{h}(t):=h(x_{p}(t))-h(x_{m}(t))\). **Theorem 3**.: A ZCBF \(h(x)\) exists for all \(S\) in \(\mathbf{R}^{n}\) for the overall closed-loop adaptive system defined by the plant in (12) and the adaptation laws in (22) and (23), if (37) is satisfied, with \(\gamma h_{0}\geq F_{\max}\) where \(F_{\max}\) is defined as in (38). Further, the inequality (37) also implies that \(\lim_{t\rightarrow\infty}e_{h}(t)=0\). The following choice of \(\gamma\) as a function of the safety error may allow the condition (37) to be satisfied for a larger class of ZCBFs: \[\gamma(e_{h}(t))=\gamma_{0}e^{-(ee_{h}(t))^{2}} \tag{39}\] with \(\gamma_{0}\geq 0\) and \(\epsilon\geq 0\) are positive constants. Such a choice allows \(\gamma\) to take on a value that is close to \(\gamma_{0}\) as long \(e_{h}\) is small, and \(\gamma(e_{h}(t))\) allows it to become small as \(|e_{h}(t)|\) increases. 
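For concreteness, the relaxation (39) amounts to a one-line function of the safety error \(e_{h}\); in the filter it simply replaces the constant \(\gamma\). The constants below are placeholders of ours, not tuned values from the paper.

```python
import math

def ebr_gamma(e_h, gamma0=5.0, eps=2.0):
    """Error-based relaxation (39): the damping rate is close to gamma0 when the
    safety error e_h = h(x_p) - h(x_m) is near zero and shrinks towards zero when
    |e_h| is large. gamma0 and eps are placeholder values."""
    return gamma0 * math.exp(-(eps * e_h) ** 2)
```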
The rationale for such a choice is that near \(t_{0}\), when the transients of the adaptive system are yet to settle down, \(e_{h}\) may be large and therefore a conservative choice of \(\gamma\) near zero is prudent; as time proceeds, the adaptive system ensures that \(x_{p}\) approaches \(x_{m}\), and therefore \(e_{h}\) approaches zero. As this occurs, \(\gamma\) can be relaxed to take on larger values. We denote such a choice of Eq. (39) as an error-based relaxation (EBR). It should be noted that the proposed adaptive controller ensures learning in the form of minimization of all performance errors \(e_{x}(t)\), \(e_{u}(t)\), and \(e_{h}(t)\) to zero. ## IV Safe Adaptive Control Design with Calibrated Closed-Loop Reference Model We now consider the adaptive control of (12) subject to the magnitude limit as in (13). In order to accommodate these limits and to improve on the transient performance we propose a calibrated closed-loop reference model (CCRM) of the form \[\dot{x}_{m}=A_{m}x_{m}(t)+B_{m}r_{s}(t)+Le_{x}(t)+B_{p}\hat{\Lambda}\Delta u(t) \tag{40}\] where \(L\) is a matrix such that \((A_{m}-L)\) is Hurwitz, \(\Delta u(t)=R_{u_{0}}(u(t))-u(t)\) and represents a disturbance due to saturation, and \(\hat{\Lambda}\) is an estiamtion of the unknown matrix \(\Lambda\). The input \(r_{s}(t)\) is the solution of a modified QP-ZCBF filter, which is defined by the following constrained optimization: \[\min_{r\in\mathcal{R}}(r-r^{*})^{2}\] (41) s.t. \[\frac{\partial h}{\partial x}\left[A_{m}x_{m}+B_{m}r\right.\left.+ LR_{e_{0}}(e_{x})+B_{p}\hat{\Lambda}R_{\Delta u_{0}}(\Delta u)\right]\geq\] \[-\alpha(h(x_{m}))+\Delta, \tag{42}\] where \(R_{\Delta u_{0}}(\Delta u(t))\) and \(R_{e_{0}}(e_{x}(t))\) represent magnitude limited signals of \(\Delta u(t)\) and \(e_{x}(t)\), with suitable limits \(\Delta u_{0}\) and \(e_{0}\), respectively. The KKT condition for the QP-ZCBF safety filter is defined as: \[r-r^{*}-L_{B_{m}}h(x_{m})^{T}\lambda=0 \tag{43a}\] \[\lambda(-L_{A_{m}x_{m}}h(x_{m})-L_{B_{m}}h(x_{m})r-L_{L}h(x_{m}) R_{e_{0}}(e_{x})\] \[-L_{B_{p}\hat{\Lambda}}h(x_{m})R_{\Delta u_{0}}(\Delta u)-\alpha (h(x_{m}))+\Delta=0\] (43b) \[-L_{A_{m}x_{m}}h(x_{m})-L_{B_{m}}h(x_{m})r-L_{L}h(x_{m})R_{e_{0}}(e_{ x})\] \[-L_{B_{p}\hat{\Lambda}}h(x_{m})R_{\Delta u_{0}}(\Delta u)-\alpha (h(x_{m}))+\Delta\leq 0 \tag{43c}\] where \(\lambda\geq 0\) is chosen so as to ensure feasibility, and \(\alpha(h(x(t)))=\gamma(e_{h})h(x(t))\), with \(\gamma(e_{h}(t))\) defined as in (39). It should be noted that (43b)-(43c) are well defined for any choice of \(e_{x}(t)\) and \(\Delta u(t)\), which are yet to be shown to be bounded. ### _Adaptive control in the presence of magnitude limits_ The same adaptive controller as presented in (21)-(23) is utilized here as well with a few modifications. The corresponding error dynamics between the CCRM in (40) and the plant in (12) can be derived to be [26, 27]: \[\dot{e}_{x}(t)=(A_{m}-L)e_{x}(t)+B_{p}\Lambda\tilde{\theta}_{x}(t )x_{p}(t) \tag{44}\] \[+\tilde{\theta}_{r}(t)r_{s}(t))+B_{p}\tilde{\Lambda}(t)\Delta u(t)\] where \(\tilde{\Lambda}(t)=\Lambda-\hat{\Lambda}(t)\) is the corresponding estimation error for \(\Lambda\), and \((A_{m}-L)\) is Hurwitz. In addition to (22)-(23), we adjust the parameter \(\hat{\Lambda}\) as: \[\dot{\tilde{\Lambda}}(t)=\Gamma_{\Lambda}\Delta u(t)e_{x}^{T}(t)PB_{p} \tag{45}\] where \(\Gamma_{\Lambda}\) is positive definite. 
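A minimal sketch of the ingredients added in this section: the magnitude limiter \(R_{u_{0}}(\cdot)\) of (13), the saturation deficit \(\Delta u\), one Euler step of the CCRM (40), and the \(\hat{\Lambda}\) update (45). The dictionary container and the shapes follow the earlier sketch and are our own conventions; the modified filter (41)-(43) can reuse the same half-space projection with the two saturated terms folded into the constant of the constraint.

```python
import numpy as np

def saturate(u, u0):
    """Magnitude limiter R_{u0}(u) from (13): rescale u whenever its norm exceeds u0."""
    norm = np.linalg.norm(u)
    return u if norm <= u0 else u0 * u / norm

def ccrm_step(x_m, x_p, r_s, u, Lam_hat, params, dt):
    """One forward-Euler step of the CCRM (40) together with the Lambda-hat
    update (45). params holds A_m, B_m, B_p, L, P, G_lam and u0."""
    A_m, B_m, B_p, L = params["A_m"], params["B_m"], params["B_p"], params["L"]
    P, G_lam, u0 = params["P"], params["G_lam"], params["u0"]
    e_x = x_p - x_m
    du = saturate(u, u0) - u                                        # saturation deficit, zero while |u| <= u0
    dx_m = A_m @ x_m + B_m @ r_s + L @ e_x + B_p @ (Lam_hat @ du)   # (40)
    dLam_hat = G_lam @ np.outer(du, e_x @ P @ B_p)                  # (45)
    return x_m + dt * dx_m, Lam_hat + dt * dLam_hat
```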
Based on (44) the following Lyapunov function candidate \(V\) is proposed: \[V= \frac{1}{2}e_{x}(t)^{T}Pe_{x}(t)+\frac{1}{2}\operatorname{Tr}[ \tilde{\theta}_{x}(t)\Gamma_{1}^{-1}\tilde{\theta}_{x}^{T}(t)\Lambda]\] \[+ \frac{1}{2}\operatorname{Tr}[\tilde{\theta}_{r}(t)\Gamma_{2}^{-1} \tilde{\theta}_{r}^{T}(t)\Lambda]+\frac{1}{2}\operatorname{Tr}[\tilde{\Lambda }(t)\Gamma_{\Lambda}^{-1}\tilde{\Lambda}^{T}(t)]\] From the error equation in (44) and the adaptive laws in (22), (23), (45) and the fact that \((A_{m}-L)^{T}P+P(A_{m}-L)=-Q_{0}\), we can show that \(\hat{V}=-\frac{1}{2}e_{x}^{T}Q_{0}e_{x}\leq 0\) and hence \(e_{x}(t)\) is bounded. Unlike the previous case, we cannot immediately conclude that \(x_{p}(t)\) is bounded, as both \(x_{p}(t)\) and \(x_{t}(t)m\) are affected by \(\Delta u(t)\). As a result, additional arguments are needed to establish boundedness. Unlike the previous case, when control inputs are limited in magnitude, one cannot guarantee global boundedness but a domain of attraction result (see [27, 28, 29]), which is briefly stated below. We introduce the following definitions: \[K_{max} = \max(\sup\|\tilde{\theta}_{x}\|,\sup\|\tilde{\theta}_{r}\|),\sup \|\tilde{\Lambda}\|)\] \[\beta = \frac{P_{B}K_{max}}{\|\theta_{x}^{*}\|+K_{max}}\] \[a_{0} = \frac{\overline{u}_{min}K_{max}}{\|\theta_{x}^{*}\|+K_{max}}\] \[x_{min} = \frac{3P_{B}K_{max}(r_{max}+1)+3P_{B}\|\theta_{r}^{*}\|r_{max}}{ q_{min}-3P_{B}K_{max}}+\] \[\frac{2P_{B}\overline{u}_{max}}{q_{min}-3P_{B}K_{max}}\] \[x_{max} = \frac{P_{B}a_{0}}{|q_{min}-2P_{B}K_{max}|}\] \[\overline{K}_{max} = \frac{q_{min}-\frac{\rho}{a_{0}}(3\|\theta_{r}^{*}\|2\overline{u} _{max})q_{min}}{3P_{B}+\frac{3\rho}{a_{0}}(r_{max}+1)|q_{min}-2P_{B}\|\theta_{ x}^{*}\|}\] \[-\frac{2P_{B}\|\theta_{x}^{*}\|}{3P_{B}+\frac{3\rho}{a_{0}}(r_{max} +1)|q_{min}-2P_{B}\|\theta_{x}^{*}\|}\] where \[q_{min}=\min\operatorname{eig}(Q),\;p_{min}=\min\operatorname{eig }(P)\] \[p_{max}=\max\operatorname{eig}(P)\] \[\rho=\sqrt{\frac{p_{max}}{p_{min}}},\;\overline{u}_{max}=\max_{ i}(u_{max,i})\] \[\overline{u}_{min}=\min_{i}(u_{max,i}),\;P_{B}=|PB_{p}\Lambda|,\; \lambda_{min}=\min(\operatorname{eig}(\Lambda))\] \[\gamma_{max}=\max(\operatorname{eig}(\Gamma_{x}),\operatorname{ eig}(\Gamma_{r}),\operatorname{eig}(\Gamma_{\lambda}))\] with all defined norms are Euclidean norms and the matrix \(P_{B}\) is the induced matrix norm, which has the property \(|PB_{p}\Lambda x|\leq P_{B}|x|\). Based on the introduced variables we can state the following theorem. **Theorem 4.**[29] The plant described in (12), with the adaptive feedback controller (21) using the adaptive laws defined in (22) - (45) has bounded trajectories for \(\forall t\geq t_{0}\) if 1. \(|x_{p}(t_{0})|<\frac{x_{max}}{\rho}\) 2. \(\sqrt{V(t_{0})}<\overline{K}_{max}\sqrt{\frac{\lambda_{min}}{\gamma_{max}}}\) Since \(|x_{p}(t)|<x_{max}\) for \(\forall t\geq t_{0}\), we can state that the error variable is of the same order as the difference between saturated input \(R(u(t))\) and the unsaturated \(u(t)\), stated as: \[\|e_{x}(t)\|=\mathcal{O}[\sup_{\tau\leq t}\|\Delta u(\tau)\|]\] We refer to [27, 28, 29] for details of the proof. ### _Safety in the presence of uncertainties and control input limits_ With stability guaranteed from the discussions above, we now derive conditions for the safety of the proposed adaptive controller with magnitude saturation. We again set \(\alpha(h(x))=\gamma h(x)\), where \(\gamma\) is a positive constant. 
The goal once again is to ensure safety, that is, for inequality (27) to be satisfied. Unlike (28), we note that the modified QP-ZCBF filter in (40) implies that the following inequality holds: \[\dot{h}_{m} =\frac{\partial h}{\partial x}|_{x_{m}}[A_{m}x_{m}(t)+B_{m}r_{s}(t) +LR_{e_{0}}(e_{x}(t)) \tag{46}\] \[+B_{p}\hat{\Lambda}R_{\Delta u_{0}}(\Delta u(t))]\geq-\gamma(e_{h} (t))h(x_{m}(t))+\Delta\] using \(\Delta e_{x}(t)=R_{e_{0}}(e_{x}(t))-e_{x}(t)\) and \(\bar{\Delta}u(t)=R_{\Delta u_{0}}(\Delta u(t))-\Delta u(t)\) we can reformulate \[\dot{h}_{m} =\underbrace{\frac{\partial h}{\partial x}|_{x_{m}}}_{c_{0}} \underbrace{[A_{m}x_{m}(t)+B_{m}r_{s}(t)}_{c_{1}}+\underbrace{Le_{x}(t)}_{c_{ 2}} \tag{47}\] \[+\underbrace{B_{p}\hat{\Lambda}(t)\Delta u(t)}_{c_{3}}+ \underbrace{L\Delta e_{x}(t)+B_{p}\hat{\Lambda}(t)\bar{\Delta}u}_{S_{\Delta}}\] \[\geq-\underbrace{\gamma(e_{h}(t))h(x_{m}(t))}_{c_{4}}+\Delta\] We can derive \(\dot{h}_{p}\) using (12) and considering (13) as \[\dot{h}_{p} =\frac{\partial h}{\partial x}|_{x_{p}}[A_{p}x_{p}+B_{p}\Lambda (\hat{\theta}_{x}(t)x_{p}(t) \tag{48}\] \[+\hat{\theta}_{r}(t)r_{s}(t)+\Delta u(t))]\] using (12) and the error between \(x_{p}\) and \(x_{m}\), we can state \[\dot{h}_{p} =\underbrace{\frac{\partial h}{\partial x}|_{x_{p}}}_{d_{0}} \underbrace{[A_{m}x_{m}(t)+B_{m}r_{s}(t)}_{c_{1}}+\underbrace{Le_{ x}(t)}_{c_{2}} \tag{49}\] \[+\underbrace{B_{p}\hat{\Lambda}(t)\Delta u(t)}_{c_{3}}+\bar{e}_{ \Delta}]\] with \(\bar{e}_{\Delta}=\bar{e}-Le_{x}(t)+B_{p}\hat{\Lambda}(t)\Delta u(t)\). Algebraic manipulations allow us to rewrite (49) as \[\dot{h}_{p} =d_{0}[c_{1}+c_{2}+c_{3}+\bar{e}_{\Delta}] \tag{50}\] \[\geq-c_{4}+\Delta+\underbrace{c_{0}\bar{e}_{\Delta}+(d_{0}-c_{0} )(c_{1}+c_{2}+c_{3}+\bar{e}_{\Delta})}_{w(t)}-c_{0}S_{\Delta}\] Noting that the inequality in (50) is very similar to (31), similar relations to (32) and (33) can be employed to derive an inequality \[\dot{h}_{p} \geq-\gamma(e_{h}(t))h(x_{p}(t))+\Delta-\bar{F}_{\Delta}(|\bar{e}_{ \Delta}|,|e_{x}|) \tag{51}\] where \[\bar{F}_{\Delta}(|\bar{e}_{\Delta}|, |e_{x}(t)|)=\gamma\kappa_{4}|e_{x}|+|c_{0}||\bar{e}_{\Delta}| \tag{52}\] \[+\kappa_{3}|e_{x}|\left(|c_{1}|+|c_{2}|+|c_{3}|+|\bar{e}_{\Delta} |\right)-C_{0}S_{\Delta}\] Using similar arguments that utilize the asymptotic convergence of \(\bar{F}(t)\) to zero, we obtain once again that the closed-loop adaptive system will remain safe for all \(t\geq t_{0}\) if \[h(x_{p}(t))\geq h_{0}\ \ \forall t\geq[t_{0},t_{0}+T] \tag{53}\] where \(\gamma h_{0}\geq F_{\max}\),where \[F_{\max}=\max_{t\in[t_{0},t_{0}+T]}\bar{F}(t)\] An EBR as in (36) can be introduced as in Section III to allow condition (52) to be satisfied for a large class of \(h(x(t))\). A theorem similar to Theorem 3 can be derived to encapsulate the safety property of the closed-loop adaptive system with magnitude saturation using the controller specified by (21)-(23) and (45). The resulting closed-loop system therefore is stable, safe, and accommodates magnitude constraints on the control input. ## V Simulations Two simulation examples are provided in this section to illustrate the properties of the safe and stable adaptive control approach described in this paper that includes a QP-CBF filter and an EBR based damping term \(\gamma\). ### _Obstacle avoidance_ The proposed controller is applied to a simple 2D obstacle avoidance problem in the simulation case here. 
The used model is defined as: \[\begin{bmatrix}\dot{x}\\ \dot{y}\end{bmatrix}=\begin{bmatrix}-2&0\\ 0&-2\end{bmatrix}\begin{bmatrix}x\\ y\end{bmatrix}+\begin{bmatrix}0.8&0\\ 0&0.8\end{bmatrix}\begin{bmatrix}u_{1}\\ u_{2}\end{bmatrix}\] where \(x\) and \(y\) represent the position states, \(\dot{x}\) and \(\dot{y}\) are their derivatives. The control inputs being represented by \(u_{1}\) and \(u_{2}\). The magnitude limit on both control inputs \(u_{0}\) was set to \(10\). The reference model is defined as: \[A_{m}=\begin{bmatrix}-1&0\\ 0&-1\end{bmatrix},\qquad\qquad B_{m}=\begin{bmatrix}-1&0\\ 0&-1\end{bmatrix}\] To ensure safety in the context of obstacle avoidance, we choose for each obstacle the following CBF constraint: \[h_{o}=(x-x_{o})^{2}+(y-y_{o})^{2}-r_{o}^{2}\geq 0\] where \(x_{o}\) and \(y_{0}\) represent the \(x\) and \(y\) position of the obstacles center and \(r_{o}\) the radius of the circular obstacles. Figure 1 shows the state trajectories of the discussed adaptive control framework without the proposed error-based relaxation (EBR) of the CBF constraint for adaptive closed-loop systems with different initial locations. It is apparent that Figure 1: Comparison of state trajectories with different initial states using an AC-CBF without EBR. even though the adaptive controller learns the parameter, it cannot ensure safety since it cannot learn fast enough. This leads to a safety violation for most of the shown trajectories. Figure 2 shows the state trajectories for systems with different initial locations using the proposed adaptive controller with EBR of the CBF constraint. It can be seen that the proposed method introduces a helpful conservatism where the model is not yet learned. This renders the system safer and ensures that no trajectory violates the safety constraint even though the model is not learned yet. ### _Missile pitch dynamics_ The used missile pitch dynamics model is shared in [30]. The model was modified such that a model mismatch can be considered: \[\begin{bmatrix}\dot{\alpha}\\ \dot{q}\end{bmatrix}= \Delta A_{p}\begin{bmatrix}Z_{\alpha}&Z_{q}\\ M_{\alpha}&M_{q}\end{bmatrix}\begin{bmatrix}\alpha\\ q\end{bmatrix}+\begin{bmatrix}Z_{\delta}\\ M_{\delta}\end{bmatrix}\lambda_{\delta}\delta\] \[= \Delta A_{p}\begin{bmatrix}-0.8757&1\\ -68.9210&0\end{bmatrix}\begin{bmatrix}\alpha\\ q\end{bmatrix}+\begin{bmatrix}-0.1531\\ -74.2313\end{bmatrix}\lambda_{\delta}\delta\] where \(\alpha\) represents the aerodynamic angle of attack (AoA), \(q\) the pitch rate and \(\delta\) the fin deflection. \(\Delta A_{p}\) and \(\lambda_{\delta}\) are scalar parameters used to introduce static parameter deviations. For the here regarded case, the parameters were set to \(\Delta A_{p}=1.2\) and \(\lambda_{\delta}=0.6\). The regarded linear dynamics are modeled at Mach 0.8 and an altitude of 4000 ft, with a trim angle of attack of 6 degrees. The magnitude limits on the control input are set to 10 degrees for \(\delta\). The reference model of the system was chosen by defining a nominal closed-loop response, using a conventional LQR technique to define a suitable feedback controller [31]. The reference model is defined as: \[A_{m}=\begin{bmatrix}-0.8707&0.9927\\ -65.5877&-3.5903\end{bmatrix},\ B_{m}=\begin{bmatrix}0.1395&-0.0364\\ 68.2893&-17.7947\end{bmatrix}\] For the here presented simulation case, it was chosen that the maximum missile's AoA is limited by an arbitrary value. 
To ensure that, we choose the following CBF constraint: \[h_{\alpha}=\alpha_{max}-\alpha\geq 0\] Figure 3 compares the closed-loop response of the suggested adaptive controller in both versions, with and without the error-based relaxation parameter. In this case, the desired AoA is set to \(5\) degrees, but the maximum allowable state was set to \(4\) degrees. It can be seen that the controller without the EBR violates the defined maximum AoA constraint. The controller with EBR is able to cope better with the problem of uncertainties and uses the added conservatism during the learning to converge to the defined maximum value when the confidence in the model allows it. Figure 4 shows how the relaxation parameter \(\gamma(e)\), for the adaptive controller with EBR, decreases at the beginning when the model is the least known. During the learning, the value increases and indicates a higher confidence in the operation within the safe set. Figure 5 compares the time history of the control input for the proposed adaptive controller with and without EBR, which shows that the control input always stays well within its limits. We expect that the oscillations can be improved further by deploying rate limits and extending the proposed approach in this paper along the lines of [27]. Figure 4: Time history of relaxation parameter \(\gamma\) for the case with EBR and without EBR. Figure 3: Trajectory comparison of state AoA for AC-CBF with EBR and without EBR w.r.t. to the commanded and the maxmimum AoA. Figure 2: Comparison of state trajectories with different initial states using a AC-CBF with EBR. ## VI Conclusions This paper takes a step in combining adaptive control methods with safety-critical methods for a specific class of dynamic systems. In addition to ensuring safety and stability, the proposed adaptive control design also seeks to accommodate magnitude constraints on the control input. The proposed approach employs a combination of classical adaptive control, closed-loop reference model that is calibrated to accommodate input saturation, a quadratic programming based CBF filter, and an error-based relaxation of the damping characteristics of the CBF. The resulting combination is shown to lead to global boundedness without input constraints, and a domain of attraction result with input constraints. In both cases, conditions for the existence of a control barrier function are derived. Numerical results validate the analytical derivations.
2309.15734
Synthetic Latent Fingerprint Generation Using Style Transfer
Limited data availability is a challenging problem in the latent fingerprint domain. Synthetically generated fingerprints are vital for training data-hungry neural network-based algorithms. Conventional methods distort clean fingerprints to generate synthetic latent fingerprints. We propose a simple and effective approach using style transfer and image blending to synthesize realistic latent fingerprints. Our evaluation criteria and experiments demonstrate that the generated synthetic latent fingerprints preserve the identity information from the input contact-based fingerprints while possessing similar characteristics as real latent fingerprints. Additionally, we show that the generated fingerprints exhibit several qualities and styles, suggesting that the proposed method can generate multiple samples from a single fingerprint.
Amol S. Joshi, Ali Dabouei, Nasser Nasrabadi, Jeremy Dawson
2023-09-27T15:47:00Z
http://arxiv.org/abs/2309.15734v1
# Synthetic Latent Fingerprint Generation Using Style Transfer ###### Abstract Limited data availability is a challenging problem in the latent fingerprint domain. Synthetically generated fingerprints are vital for training data-hunery neural network-based algorithms. Conventional methods distort clean fingerprints to generate synthetic latent fingerprints. We propose a simple and effective approach using style transfer and image blending to synthesize realistic latent fingerprints. Our evaluation criteria and experiments demonstrate that the generated synthetic latent fingerprints preserve the identity information from the input contact-based fingerprints while possessing similar characteristics as real latent fingerprints. Additionally, we show that the generated fingerprints exhibit several qualities and styles, suggesting that the proposed method can generate multiple samples from a single fingerprint. Latent fingerprints, Synthetic latent fingerprint generation, Style transfer. ## I Introduction Fingerprints left on a surface unintentionally, also called latent fingerprints, play a vital role as evidence in forensic investigations. Unfortunately, these fingerprints are not readily viable for matching and recognition purposes. Due to the unconstrained environment at a crime scene and the complex acquisition process of latent fingerprints, they are notoriously indispensable to pre-processing, such as segmentation, enhancement, and feature extraction. Recent latent fingerprint pre-processing algorithms based on neural networks [1, 2, 3, 4, 5] require larger datasets for training. However, the collection of latent fingerprints is an expensive and cumbersome task. Table I summarizes the latent fingerprint datasets widely used for training and evaluating latent enhancement algorithms. The fingerprints in these datasets are deposited under controlled or uncontrolled conditions and lifted from various surfaces. NIST SD-302 dataset contains a large number of latent fingerprints, but not all of them have mated fingerprints. Moreover, the latent fingerprints in this dataset are substantially distinct from other datasets. Samples from these datasets are provided in Figure 1. Combining these datasets for training pre-processing algorithms may introduce a class imbalance. Further, it is essential to use real latent fingerprints to evaluate these methods. This scarcity of data leads to the need for the generation of synthetic latent fingerprints that can be used to train the models so that real data can be utilized for a fair evaluation. With more synthetic images, these latent fingerprints need to possess certain characteristics. It is crucial to have identity features such as meaningful ridge structure, fingerprint shape, and minutiae points and noise features such as noisy backgrounds, surface, texture, etc. Many latent fingerprint pre-processing algorithms resort to a naive approach of blending a sensor-collected fingerprint with a noisy background to mimic a latent fingerprint [3, 4, 12]. Zhu et al. [5] extend the weighted combination approach by applying plastic distortion [13] on high-quality rolled fingerprints. This image-blending approach preserves the identity but fails to generate realistic latent fingerprints. Another method of generating synthetic fingerprints involves CycleGAN [14], which uses Generative Adversarial Networks(GAN) to transform images from one domain to another. Authors in [15, 16] trained CycleGAN to transform slap/rolled fingerprints into latent fingerprints. 
However, these methods have limited style generation capacity. Wyzykowski and Jain [16] use multiple CycleGAN models to generate multiple styles. This might be inconvenient if latent fingerprints with more styles and qualities are required. Nonetheless, these approaches generate pairs of latent and sensor-collected fingerprints, which is ideal for training algorithms that use image-to-image translation for latent fingerprint pre-processing. Our goal is to generate multiple styles of latent fingerprints using a single model. When lifted from surfaces like paper, cardboard, ceramic tiles, etc., latent fingerprints exhibit different characteristics than those lifted from plastic and metallic surfaces. Additionally, the interaction between the surface and the subject causes uneven ridge densities and orientations. Therefore, the synthetic fingerprint generator must be trained to learn these variations for generating fingerprints of different styles. To this aim, we pose latent generation as a style transfer task from latent fingerprints to sensor-collected fingerprints. The primary task is to transform the ridge patterns in sensor-collected fingerprints into the ridge patterns in real latent fingerprints. This can be achieved by learning the distribution of latent fingerprints and fusing the distribution parameters with the source fingerprints. We use adaptive instance normalization [17] to infuse the learned parameters of the latent fingerprint domain while reconstructing the fingerprints. Further, we blend these transformed ridge patterns with noisy backgrounds to manifest a similar distribution to the real latent fingerprints. For brevity, we will refer to sensor-collected fingerprints as fingerprints. The generated latent fingerprints represent fingerprints lifted from different surfaces. Our contributions are three-fold: * We propose a simple and effective method that considers different surfaces and qualities while generating synthetic latent fingerprints. * Our proposed method is flexible to generate multiple styles of the same fingerprint while preserving the underlying identity information. * Our evaluation experiments demonstrate the similarities between synthetic and real latent fingerprints. The paper is organized as follows; first, we discuss related work in Section II. The proposed method is described in Section III followed by a discussion on experiments and results in Section IV. Finally, Section V concludes the paper. ## II Related Work Many studies have been conducted to generate synthetic fingerprints. Before the advent of neural networks, hand-crafted feature-based approaches were developed to generate fingerprints. Capelli et al. [18] used fingerprint shape, directional map, density map, and ridge patterns to create a master fingerprint. Further, they apply distortion, noise, and ridge variations to generate variants of the same master fingerprint. Zhao et al. [19] proposed an approach based on statistics of fingerprint features such as type, size, ridge orientation, minutiae, and singular points. After generating a master print using the features, they apply non-linear plastic distortion and rigid transformations to get variants of the same fingerprint. Recent works typically use GANs to generate synthetic fingerprints [20, 21, 22]. These methods focus on training GAN to learn the distribution of real fingerprints and generate synthetic fingerprints that contain the necessary identity information. 
Style transfer is a way to learn to map the style of an image onto the contents of another image. Neural network-based style transfer is also explored in the image synthesis task [23, 24, 25]. Men et al. [23] developed a person image synthesis algorithm that encodes attributes such as pose, head, base, clothes, etc. The style code is then injected into the AdaIN [17] features during decoding. Authors in [24, 25] proposed region adaptive normalization to control the style encoding in different image patches. This allows more flexibility to generate images with fine details. Despite these works, to the best of our knowledge, latent fingerprint synthesis has yet to be attempted with style transfer. ## III Methodology A widely adopted conventional approach to generating synthetic latent fingerprints applies noise to good-quality fingerprints and blends them with noisy backgrounds. It uses the equation below: \[I_{latent}=\alpha\times I_{fingerprint}+(1-\alpha)\times I_{noise}. \tag{1}\] However, in real-world scenarios, the latent fingerprints are lifted from multiple surfaces under unforeseeable environments. Depending on the nature of the surface and the action that caused the fingerprint to be left on the surface, the latent fingerprints exhibit different styles. As a result, using the blending method naively with good-quality fingerprints may not represent the distribution of real latent fingerprints. We propose learning the noise and distortions in ridge patterns acquired from multiple surfaces and transferring them to fingerprints to mimic the real latent fingerprints. To this aim, we devise a simple and efficient approach involving style transfer and image blending. Further, section III-A illustrates the style transfer network, and section III-B discusses image blending. Figure 2 illustrates the network architecture. ### _Style Transfer_ The style transfer module is responsible for extracting style from a latent fingerprint \(F_{s}\) and fusing it with the content fingerprint \(F_{c}\) during the reconstruction phase. We use AdaAttN [26] to learn the style of latent fingerprints and Fig. 1: Latent fingerprints from datasets listed in Table I. transform the fingerprint ridges to have a similar style. The style transfer network uses an encoder \(E(.)\) to extract content and style embeddings. The extracted embeddings are then passed to the AdaAttN block, which adaptively transfers the style statistics to the content embeddings. The style transfer network uses several layers of pre-trained VGG-19 [27] model to obtain embeddings with different spatial sizes and image characteristics. The AdaAttN block uses the attention mechanism to compute the weighted mean and variance map. Then, adaptive normalization proposed by Huang et al. [17] is used to obtain the stylized features. Finally, the decoder reconstructs the stylized features to a synthetic latent fingerprint \(F_{cs}\). The decoding at the end of the style transfer network may produce a random texture pattern to satisfy the objective function. Therefore, to preserve the identity information and the ridge structure of the fingerprints, we use another encoder \(V(.)\) trained to extract features helpful in matching two fingerprints. The embeddings of the fingerprint \(F_{c}\) and the stylized fingerprint \(F_{cs}\) enforce the identity constraint during training. 
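For concreteness, the adaptive instance normalization step [17] at the core of the stylization can be written compactly. The following PyTorch sketch shows the plain channel-wise AdaIN operation; the tensor shapes and the epsilon value are assumptions, and AdaAttN replaces these global statistics with attention-weighted mean and variance maps.

```python
import torch

def adain(content_feat: torch.Tensor, style_feat: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Channel-wise adaptive instance normalization of (N, C, H, W) feature maps:
    the content features are re-scaled to carry the per-channel mean/std of the style."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return (content_feat - c_mean) / c_std * s_std + s_mean
```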
We trained the style transfer network using the following objective function: \[\mathcal{L}=\lambda_{g}\mathcal{L}_{gs}+\lambda_{l}\mathcal{L}_{lf}+\lambda_{ i}\mathcal{L}_{id}, \tag{2}\] where \(\mathcal{L}_{gs}\) is a global style loss computed between the mean and standard deviation of embeddings of \(F_{s}\) and \(F_{cs}\) extracted using \(E(.)\). \(\mathcal{L}_{lf}\) is a local feature loss that minimizes the distance between features of \(E(F_{cs})\), \(E(F_{c})\), and \(E(F_{s})\). Lastly, \(\mathcal{L}_{id}\) is an identity constraint between \(V(F_{cs})\) and \(V(F_{c})\). We use mean squared error to calculate the loss terms. We empirically set a value of \(1.0\) for \(\lambda_{i}\), whereas the default values of \(3.0\) for \(\lambda_{g}\) and \(10.0\) for \(\lambda_{l}\) are used during training. ### _Image Blending_ The output of the style transfer network is distorted ridge patterns that appear similar to the ridge patterns in real latent fingerprints. However, we can profusely notice the noisy backgrounds and textured patterns in real latent fingerprints. Therefore, we incorporate the image blending from Eq. 1 to generate realistic latent fingerprints. We replace \(I_{fingerprint}\) by the output of the style transfer network and consider several background images cropped from real latent fingerprints as \(I_{noise}\). During all the experiments, we set \(\alpha\) between \(0.3\) to \(0.8\). This combination of style transfer network and image blending presents the flexibility to manipulate the style, quality, surface, and background of the generated fingerprints without retraining the network. Further, the synthetic fingerprints and the corresponding content fingerprints are spatially consistent. Therefore, the spatial features extracted from the fingerprint can be used as a target while training a neural network for latent fingerprint pre-processing. Later in section IV-C, we discuss the effect of blending noisy background with the output of the style transfer network. ## IV Experiments In section IV-A, we discuss the datasets used and generated for the evaluation experiments. Later, we describe the evaluation criteria and results in Section IV-A and Section IV-B, respectively. ### _Datasets_ Training the style transfer network requires fingerprints as content images and latent fingerprints as style images. Therefore, we combined fingerprints from MOLF and MSLFD datasets totaling 12,444 for training and 600 for evaluation. For the style images, we used 4,400 latent fingerprints from MOLF, which has fingerprints lifted from ceramic tile [10]. Additionally, we included 170 latent fingerprints from two different surfaces from the MSLFD dataset. Further, we create pairs of latent fingerprints and content fingerprints such that they belong to the same finger of the same subject. This aids the identity preservation constraint in the objective function. We use latent fingerprints from IIITD and SLF datasets as the style images during evaluation experiments. Once the style transfer network is trained, we generate 600 synthetic latent fingerprints. Finally, we create two sets, Synthetic-1 and Synthetic-2, using backgrounds from different surfaces and textures. The Synthetic-1 dataset represents latent fingerprints lifted from plain surfaces such as ceramic tiles and cardboard. In contrast, the Synthetic-2 dataset comprises latent fingerprints lifted from plastic and paper surfaces with printed text. 
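The blending step used to assemble these synthetic sets (Eq. 1 applied to the stylized output) can be sketched as follows; the grayscale-in-[0, 1] convention and the uniform sampling of \(\alpha\) from the stated range are assumptions.

```python
import numpy as np

def blend_latent(stylized: np.ndarray, background: np.ndarray,
                 alpha_range=(0.3, 0.8), rng=None) -> np.ndarray:
    """Blend a stylized fingerprint with a noisy background patch (Eq. 1).

    Both inputs are grayscale images in [0, 1] of the same shape; alpha is drawn
    uniformly from the range used in the experiments (0.3 to 0.8)."""
    rng = np.random.default_rng() if rng is None else rng
    alpha = rng.uniform(*alpha_range)
    return np.clip(alpha * stylized + (1.0 - alpha) * background, 0.0, 1.0)
```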
### _Evaluation Criteria_ For evaluating a synthetic data generator, measuring the similarities between the synthetic and real data is imperative. Fig. 2: Architecture of the proposed method. The style transfer network is trained using real latent fingerprints and is marked by the blue box, whereas the image blending does not involve training and is represented in the magenta box. We use various aspects of fingerprints for comparing the characteristics of real and generated latent fingerprints. First, we use quality distribution as a metric to demonstrate the similarity. To this aim, we use NFIQ 2.0 [28] to obtain the quality scores of latent fingerprints. The second metric is the similarity between the data distribution of real and synthetic fingerprints. We use t-Distributed Stochastic Neighbor embeddings (t-SNE) [29] to showcase the distribution of multiple datasets to compare with the synthetic fingerprints. t-SNE uses high-dimensional feature embeddings of size 512 and reduces the dimensionality to generate two components to visualize the distribution. Further, we study minutiae points to analyze the realistic nature of synthetic fingerprints. This analysis helps determine if the synthetic latent fingerprints have meaningful patterns and genuine minutiae points. We use the Verifinger SDK v10.0 [30] to extract minutiae and perform matching experiments. Lastly, we analyze the matching score distribution of genuine pairs consisting of synthetic latent and corresponding mated fingerprints. Due to the noisy and distorted nature of latent fingerprints, the recognition accuracy is relatively low compared to fingerprint matching. Comparing the matching scores of mated pairs helps estimate if the synthetic fingerprints are challenging enough for the matchers to extract features. ### _Results_ Determining the quality of latent fingerprints is crucial in matching and recognition scenarios. Due to the complex acquisition process of latent fingerprints, they often exhibit poor quality scores. In Figure 3, we compare if the generated synthetic latent fingerprints have a similar quality score distribution with the real data. The latent fingerprints in IIITD and SLF datasets have a wide range of quality scores, whereas the NIST SD-27 dataset has a smaller range due to the arbitrary texture patterns and highly distorted ridge patterns. The plot suggests the closeness of quality levels among Synthetic-1, IIITD, and SLF datasets. Similarly, curves for NIST SD-27 and Synthetic-2 datasets also match each other. Next, we plot t-SNE to demonstrate the overlapping distribution of real and synthetic fingerprints. Figure 4 provides the distribution for multiple datasets of various styles. Note that in Figure 4(a), the data points for the Synthetic-1 dataset are congregated in two regions. This behavior is due to the limited style references used to transform the ridge patterns during synthetic generation. At the same time, the arbitrary noise patterns in the real latent fingerprints make the distribution widespread. Regardless, both plots show evidence of the embeddings of the synthetic and real latent fingerprints in the high-dimensional space corresponding with datasets of respective styles. Further, this suggests that our proposed method can generate realistic latent fingerprints with real latent fingerprint characteristics. Despite similar quality and t-SNE distributions, the synthetic latent fingerprints should represent some identity. 
Ideally, a synthetic latent fingerprint should have the same identity as the source fingerprint used as input to the style transfer network. Figure 5 demonstrates the identity similarity between the synthetic latent and input fingerprint. It shows detected and correctly matched minutiae, suggesting that the proposed method preserves critical features such as the ridge structure and minutiae points. Further, the figure indicates the ability of the proposed method to generate multiple synthetic samples from the same fingerprint with varying quality and styles. To investigate the importance of the style transfer network and compare it with the naive approach of image blending used in [3, 4, 12], we generated a set of synthetic latent fingerprints without using the style transfer network. We applied speckle noise to the fingerprints and blended them with noisy backgrounds. Then, we conducted a matching experiment with genuine pairs from this dataset. In Table II, we compare the mean, standard deviation, and median of matching scores for genuine pairs of latent fingerprints generated by our method and the real latent fingerprint dataset. A significant difference between the distribution parameters shows that a weighted combination of a distorted fingerprint and noisy background is insufficient to model realistic latent fingerprints. The matcher can easily recognize the fingerprint despite the background noise. Fig. 4: t-SNE distribution of multiple datasets. Plot (a) represents datasets with plain backgrounds from surfaces like ceramic tiles and cards. Plot (b) represents latent fingerprints lifted from plastic and paper with text in the background. Fig. 3: NFIQ 2.0 quality score distribution of multiple datasets. Solid and dashed lines represent datasets with different surfaces and styles. ## V Conclusion We proposed a simple and effective approach to synthetic latent fingerprint generation. We showed that the naive approximation of latent fingerprints inadequately represents real latent fingerprints. We revised it and proposed an algorithm to generate realistic latent fingerprints using a style transfer network to exploit the style features of real latent fingerprints and transform the ridge structure to appear as a latent fingerprint. Further, the stylized ridges are blended with noisy backgrounds for a better representation of real latent fingerprints. Our evaluation with various metrics suggests that the proposed method reliably generates latent fingerprints of various styles and qualities while preserving identity information.
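For reference, the naive baseline used in the comparison of Table II (speckle noise plus background blending, without style transfer) can be sketched as follows; the use of scikit-image and the fixed \(\alpha\) are illustrative assumptions.

```python
import numpy as np
from skimage.util import random_noise

def naive_latent(fingerprint: np.ndarray, background: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Baseline without style transfer: apply speckle noise to a clean fingerprint
    and blend it with a noisy background patch (both grayscale images in [0, 1])."""
    noisy = random_noise(fingerprint, mode="speckle")
    return np.clip(alpha * noisy + (1.0 - alpha) * background, 0.0, 1.0)
```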
2309.03156
Characterization of natural convection between spherical shells
In this manuscript, the onset and evolution of natural convection of an incompressible fluid between spherical shells are analysed. The shells are kept at a fixed temperature difference and aspect ratio, and the Rayleigh-Benard convection is driven by different radial gravity profiles. The analysis is carried out using a finite difference scheme to solve the three-dimensional Navier-Stokes equations in spherical coordinates. Numerical results are compared with theoretical predictions from linear and non-linear stability analysis, and differ from the expected critical Rayleigh number Ra_c = 1708 by less than 1 percent. In the range of Prandtl numbers Pr studied, and for all the different gravity profiles analysed, the system exhibits a dependence on its starting condition and flow history. Even in the region just above the onset of convection, two stable states are observed, with qualitative and quantitative differences, and exploring higher values of Ra introduces new modes and time-dependent phenomena in the flow. These results are corroborated by spectral analysis.
Luca Santelli, Guiquan Wang, Richard J. A. M. Stevens, Roberto Verzicco
2023-09-06T16:54:08Z
http://arxiv.org/abs/2309.03156v1
# Characterization of natural convection between spherical shells ###### Abstract In this manuscript, it is analysed the onset and evolution of natural convection of an incompressible fluid between spherical shells. The shells are kept at a fixed temperature difference and aspect ratio, and the Rayleigh-Benard convection is driven by different radial gravity profiles. The analysis has been carried out by using a finite difference scheme to solve the three-dimensional Navier-Stokes equations in spherical coordinates. Numerical results are compared with theoretical predictions from linear and non-linear stability analysis, and differ from the expected critical Rayleigh number \(Ra_{c}\approx 1708\) by less than 1%. In the range of Prandtl numbers \(Pr\) studied, and for all the different gravity profiles analysed, the system presents a dependence on its starting condition and flow history. Even in the region just above the onset of convection, two stable states are observed, with qualitative and quantitative differences, and exploring higher values of \(Ra\) introduces new modes and time dependency phenomena in the flow. These results are corroborated by spectral analysis. Authors should not enter keywords on the manuscript, as these must be chosen by the author during the online submission process and will then be added during the typesetting process (see [http://journals.cambridge.org/data/relatedlink/jfm-keywords.pdf](http://journals.cambridge.org/data/relatedlink/jfm-keywords.pdf) for the full list) ## 1 Introduction Natural convection in spherical domains, i.e. the convective behaviour of a fluid confined between two spherical shells and subjected to a temperature gradient, has been the focus of many studies thanks to the vast number of possible applications in different fields. This topic has been studied experimentally (Bishop _et al._, 1966), analytically (Mack & Hardee, 1968), and more recently numerically (Garg, 1992). Many modern applications include geoscience, cosmoclimatology (Svensmark & Friis-Christensen, 1997; Svensmark, 2007), as well as exploration of extraterrestrial moons (Feldman _et al._, 2012) and various engineering applications. When considering convection between spherical shells, it should be taken into consideration that the behaviour is different from the classical Rayleigh-Benard convection in a planar layer configuration (Siggia, 1994; Ahlers _et al._, 2009; Chilla & Schumacher, 2012). The reasons behind this difference can be found in the geometrical asymmetry between the inner and outer sphere, the curvature of the plates and the radial dependence of buoyancy (Busse, 1970; Spiegel, 1971; O'Farrell _et al._, 2013; Gastine _et al._, 2015). When considering a radial gravity profile, a large part of the literature is focused on internal heat source problems, mostly due to their relevance to geophysics (Busse, 1975; Joseph & Carmi, 1966). Moreover, performing direct experiments with radial gravity on the surface of the Earth is a very demanding task, due to the presence of vertical gravity. Therefore, the only experiments of this kind that have been performed were all during space missions: on the Space Shuttle (Hart _et al._, 1986) and on the International Space Station (Futterer _et al._, 2008; Travnikov _et al._, 2003), where a radial electrostatic potential with a profile \(g(r)\propto r^{-5}\) has been used to model gravity. 
In the studies of the onset of convection, Busse (1975) found qualitative differences between convective patterns of odd and even spherical harmonic order by the perturbation analyses where solutions with different wavenumber differ only quantitatively in a planar configuration. With \(Ra\) up to 100 times larger than the critical value, Bercovici _et al._ (1989) found the convective pattern persists with the solutions by perturbation analyses. When \(Ra\) is as high as \(Ra>10^{5}\), Iwase & Honda (1997) showed that the axisymmetric convective patterns breakdown and flow patterns start to show time-dependent behavior, which is characterized by upwelling and downwelling thermal plumes (Bercovici _et al._, 1989; Yanagisawa & Yamagishi, 2005; Futterer _et al._, 2013; Gastine _et al._, 2015). To better understand the onset of convection, it is important to consider results obtained from the linear stability analysis. Concerning spherical shells, early studies on the topic can be found on the works of Chandrasekhar (1961) and Joseph & Carmi (1966). More recently, the topic has been expanded by Araki _et al._ (1994), which used a radial gravity profile and a thin layer to perform a linear asymptotic analysis, comparing theoretical results with numerical simulations. From this work, further development has been provided by Avila _et al._ (2013), which found a lower boundary for \(Ra_{c}\) and identified some examples of most unstable modes. Finally, Mannix & Mestel (2019) included considerations for weakly nonlinear mode interactions, which might show a dependence on \(Pr\) for the stable modes, under the assumption of axisymmetric spherical convection. For what concern numerical schemes, Verzicco & Orlandi (1996) have shown that, in the case of cylindrical problems, symmetries in the grid structure may cause perturbations in the fluid evolution, thus it is suggested to use models with an appropriate grid symmetry. Gastine _et al._ (2015) raise a similar concern, suggesting that a direct application of planar geometry models to spherical models is questionable, and direct numerical simulations in spherical shells are required. There exist some numerical simulations for non rotating radial gravity model with finite Prandtl number (Gastine _et al._, 2015; Tilgner, 1996), but the vast majority of them has been done with infinite Prandtl number, since this can be used to model a good approximation of the Earth mantle (Zebib _et al._, 1980; Bercovici _et al._, 1989). Thus, in this manuscript we want to characterise the Rayleigh-Benard convection between spherical shells in a non-rotating environment using a finite-difference scheme in spherical coordinates. We consider the effect of different radial gravity profiles and Prandtl number in the exploration of the flow history, giving due attention to the flow structure and wavenumber analysis. This paper is organised as follows: the physical problem is described in section 2 together with the numerical discretisation; in section 3 a linear stability analysis is presented,; in section 4 are shown the behaviour of \(Nu\) as a function of \(Ra\) and the spectral analysis for both water and air, together with the rest of the results; a final discussion is presented in section 5. ## 2 Numerical simulation ### Problem description The configuration of the problem, a sketch of which is given in figure 1, consists of an inner spherical shell of radius \(R_{i}\) and an outer concentric shell of radius \(R_{o}\). 
We call \(d=R_{o}-R_{i}\) the distance between the shells. The aspect ratio \(\eta=R_{i}/R_{o}\) is fixed at \(\eta=0.71\). The fluid is subjected to a radial gravity of the form \(\mathbf{g}(r)=g_{o}\mathbf{g}^{*}(r)\), where \(g_{o}\) is the magnitude of the gravity at the outer sphere and \(\mathbf{g}^{*}(r)=\bar{g}(r)\mathbf{\hat{r}}\) a dimensionless radius-dependent function. A fixed temperature is set for inner and outer walls, with the outer wall temperature \(T_{o}\) being lower than the inner one \(T_{i}\), and \(\Delta T=T_{i}-T_{o}\) the temperature difference. No-slip boundary conditions are chosen. Let \(\nu\) the thermal viscosity and \(\kappa\) the thermal diffusivity of the fluid, then the Prandtl number is defined as \(Pr=\nu/\kappa\). In this paper, values of \(Pr_{air}=0.71\) and \(Pr_{water}=7.1\) are used; when not specified, it is assumed we are using \(Pr=Pr_{air}\). Being \(\alpha\) the thermal expansion coefficient, the Rayleigh number is defined as \(Ra=\frac{g_{o}\alpha\Delta Td^{3}}{\nu\kappa}\), and it will be used as the main control parameter in the following sections. Reynolds number is defined as \(Re=Ud/\nu=\sqrt{Ra/Pr}\), where U is a free-fall velocity \(U=\sqrt{g_{o}\alpha\Delta Td}\). The introduction of \(U\) allows us to set the representative scale for length (\(d\)), time (\(d/U\)) and temperature (\(\Delta T\)). The Nusselt number \(Nu\) is used to measure the dimensionless heat transfer between shells, and it has been computed by direct measurement of heat flux at outer and inner shells \[Nu=\eta\frac{\partial\overline{T^{*}}}{\partial r}\bigg{|}_{R_{i}}=-\frac{1} {\eta}\frac{\partial\overline{T^{*}}}{\partial r}\bigg{|}_{R_{o}}, \tag{1}\] with \(\overline{T^{*}(r)}\) being the dimensionless temperature averaged over time and surface (computed respectively at the inner and outer radius), and the equality between the two definitions is true for a (statistically) steady flow. The two different definitions of \(Nu\) have been compared for all the simulations, always showing an excellent agreement. Figure 1: Sketch of the configuration. Computing a typical diffusive time \(t_{d}=Re/Nu\) ensures that every simulation is run for sufficient time. Finally, the density of the system is defined as \(\rho=\rho_{o}\alpha T\), with \(\rho_{o}\) being the density at the outer shell. Using these quantities the problem is defined by a dimensionless Navier-Stokes equation for an incompressible viscous fluid under the Boussinesq approximation that reads as: \[\left\{\begin{array}{l}\frac{\mathrm{D}\mathbf{u}^{*}}{\mathrm{D}t^{*}}=- \boldsymbol{\nabla}p^{*}+T^{*}\mathbf{g}^{*}(r)+\sqrt{\frac{Pr}{Ra}}\nabla^{2} \mathbf{u}^{*}\\ \boldsymbol{\nabla}\cdot\mathbf{u}^{*}=0\\ \frac{\mathrm{D}T^{*}}{\mathrm{D}t^{*}}=\frac{1}{\sqrt{RaPr}}\nabla^{2}T^{*} \end{array}\right. \tag{2}\] with \(\mathbf{u}^{*}\) and \(p^{*}\) being respectively the dimensionless velocity and pressure. Different gravity profiles have been used for the simulations. They are schematized, together with the associated symbols, in table 1, while their shape is shown in figure 2. The _Mantle-like_ gravity profile models a situation in which there are two densities in the system: the density of the fluid between the two shells \(\rho_{o}\) and the density inside the inner shell \(\rho_{i}\). 
Their ratio \(\lambda=\rho_{o}/\rho_{i}\) is the parameter describing different configurations: in the _Mantle-like_ gravity profile it is set to \(\lambda=10/29\), while \(\lambda=1\) represents the linear case, and the quadratic case is obtained as \(\lambda\to 0\). The constant gravity is instead a good \begin{table} \begin{tabular}{c||c|c|c|c|c} \hline \hline Name & Quadratic & Constant & Linear & _Mantle-like_ & Parabolic \\ \hline Symbol & \(\mathbf{g}^{q}\) & \(\mathbf{g}^{c}\) & \(\mathbf{g}^{l}\) & \(\mathbf{g}^{m}\) & \(\mathbf{g}^{p}\) \\ Equation & \(\frac{1}{r^{2}}\) & 1 & \(r\) & \(\frac{R_{1}^{3}}{r^{2}}(1-\lambda)+\lambda r\) & \((r-R_{m})^{2}+\Delta R\) \\ \hline \hline \end{tabular} \end{table} Table 1: Different gravity profiles. \(\lambda\), \(R_{m}\) and \(\Delta R\) are parameters. Figure 2: Overview of the dimensionless gravity profiles used (summarised in table 1) as a function of radius: quadratic gravity; parabolic gravity; _—— mantle-like_ gravity; _——_ linear gravity. approximation of the situation on the surface of the Earth, and can be obtained from the mantle-like profile as \(R_{i}\to\infty\). Finally, the parabolic profile is an artificial profile created to simulate a highly non-monotonic gravity, with \(R_{m}=3.1\) and \(\Delta R=0.5\) being parameters chosen to have a non-monotonic asymmetric (with respect to the middle radius) profile. In the following sections results will be shown for the case \(\mathbf{g}=\mathbf{g}^{q}\), and the behaviour of other gravity profiles will be discussed alongside. Henceforth asterisks of dimensionless quantities are dropped in order to simplify the notation. ### Numerical setup The Navier-Stokes equation coupled with the Boussinesq approximation has been written in spherical coordinates and discretised on a staggered spherical grid. The finite difference scheme used is second order in time and space, and exploits a change of variables to obtain trivial boundary conditions at the poles (which were the source of singularities) and the special treatment of a few discrete terms. One of the advantages of this scheme is that it allows non uniform grids in latitudinal and radial directions. More details on the method, together with a deep analysis of performances and accuracy, can be found in Santelli _et al._ (2020), which extends to spherical coordinates the idea of Verzicco & Orlandi (1996). Being the singularity at the center of the sphere outside of our analysed domain, the parallellization of the code is easy to implement. The appropriate space resolution has been chosen by running several simulations with varying grid spacing. In figure 3 is shown the behaviour of \(Nu\) as a function of time for different grid resolutions and values of Rayleigh number: the difference between the two most refined grid shown is negligible; therefore, we can save computational time by running simulations on a grid \(\{N_{\theta}=65,N_{r}=69,N_{\phi}=69\}\). A maximum \(CFL=0.8\) has been imposed to control the Figure 3: Analysis of \(Nu\) vs \(t\) at different number of grid points. grid with \((N_{\theta}=33,N_{r}=35,N_{\phi}=35)\) points; \(\rTo(65,69,69)\); \(\cdots\cdots\) (\(129,121,121\)). Only a small mismatch at the beginning of the simulation is present for the less refined grid, while the other two are almost coincident at every step. A vertical black line separate the two different \(Ra\) analysed: \(Ra=1600\) for \(t<5000\), \(Ra=1500\) after. 
size of the time step during the convective phase, while \(\Delta t_{max}=10^{-3}\) has been fixed for the linear evolution. The initialization and evolution of the system varies according to the specific parameters that have been used for each simulation: * Low \(Ra\), increasing: for these simulations the flow is initialised to a rest state at \(Ra<Ra_{c}\), with \(Ra_{c}\) being the critical Rayleigh number for the onset of convection. It is then allowed to evolve unperturbed for a sufficiently long time in order to reach and maintain an equilibrium state. At this point, \(Ra\) is increased to a new value and the equilibrium procedure is repeated. These steps are repeated until \(Ra\) reached the desired value. This long-time evolution is mostly necessary for the first steps, as the absence of perturbation and the accuracy of the code drastically increase the needed time for the onset of convection. * Low \(Ra\), decreasing: These simulations follow the same procedure as the increasing case, but the initialization is done at \(Ra\geqslant Ra_{c}\) and the subsequent values of \(Ra\) are in a decreasing order. Less time per \(Ra\)-step is needed, since the system stabilise faster. * High \(Ra\): simulations done at higher values of Rayleigh number focus on the properties of the flow at a fixed value of \(Ra\); therefore, after the initialization at the chosen \(Ra\) from a pre-existing simulation, the fluid is left free to evolve without variations in parameters for a time sufficiently long to observe the complete behaviour. The few cases that require a different approach will be described in their relative sections. In all the simulations, the flow needed a few time steps to reach a divergence free state, thus data are only collected after this condition has been reached. ## 3 Linear stability analysis In this section, we give an overview of the work of Araki _et al._ (1994) and follow their approach to perform the linear stability analysis for this study. For this analysis (and only in this subsection), we redefine \(g_{0}\) as the gravity at the inner radius, and perform an analysis for \(R_{i}\gg 1\). We note that the parabolic gravity profiles diverges as \(R_{i}^{2}\) when \(R_{i}\rightarrow\infty\), thus it is ill-suited for this analysis and should not be considered here. On the other hand, the constant gravity profile is included here as the limit of the mantle-like profile for \(R_{i}\rightarrow\infty\). Figure 4: Evolution of \(Nu\) as function of \(t\). Vertical bars of (a) and (b) separate zones with different \(Ra\). We observe that equation (2) can be rewritten as \[\begin{cases}&\nabla^{2}\big{(}\nabla^{2}-\frac{1}{Pr}\frac{\partial}{\partial t} \big{)}\mathbf{V}=Ra\frac{g(r)}{R_{i}}\Big{(}-\frac{1}{\sin\varphi}\frac{ \partial}{\partial\varphi}\sin\varphi\frac{\partial}{\partial\varphi}-\frac{1 }{\sin^{2}\varphi}\frac{\partial^{2}}{\partial\varphi^{2}}\Big{)}\Theta\\ &\big{(}\nabla^{2}-\frac{\partial}{\partial t}\big{)}\Theta=-\frac{1}{r}\frac{ \partial T}{\partial r}\mathbf{V}R_{i}\end{cases} \tag{3}\] where \(\mathbf{V}=r\mathbf{u}/R_{i}\) and \(\Theta\) is the temperature fluctuation around T(r). If the fluid is stationary, spherical symmetry holds and we can represent eigenmodes of equation (3) in terms of spherical harmonics \(\mathbf{Y}_{l}^{m}(\theta,\varphi)\), and only one eigenvalue for each \(l\) exist, with a degeneracy of \(2l+1\)(Sattinger, 1979). 
This allow us to write \[\begin{pmatrix}\mathbf{V}\\ \Theta\end{pmatrix}=\begin{pmatrix}\xi_{1}(r,t)\\ \xi_{2}(r,t)\end{pmatrix}\mathbf{Y}_{l}^{m}(\theta,\varphi) \tag{4}\] and focus on the equations for \(\xi_{1}\) and \(\xi_{2}\). Then, assuming a time dependence of the fluctuations as \(\exp(\sigma t)\), the equations for \(\xi_{1}\) and \(\xi_{2}\) can be written as \[\begin{bmatrix}\begin{pmatrix}L_{22}L_{22}&L_{12}\\ L_{21}&L_{22}\end{pmatrix}-\sigma\begin{pmatrix}L_{22}/Pr&0\\ 0&1\end{pmatrix}\end{bmatrix}\begin{pmatrix}\xi_{1}(r,t)\\ \xi_{2}(r,t)\end{pmatrix}=\begin{pmatrix}0\\ 0\end{pmatrix} \tag{5}\] where \[L_{22} =\frac{1}{r^{2}}\frac{\partial}{\partial r}r^{2}\frac{\partial}{ \partial r}-\frac{l(l+1)}{r^{2}}\] \[L_{12} =\frac{l(l+1)}{R_{i}^{2}}\frac{\bar{g}(r)}{r}\] \[L_{21} =\frac{R_{i}^{2}R_{o}}{r^{3}}\] \[\text{and boundary conditions }\xi_{1}=\frac{\partial\xi_{1}}{ \partial r}=\xi_{2}=0\quad\text{at}\quad r=R_{i},R_{o}\quad.\] Given that the most unstable mode number \(l\) diverges as \(R\to\infty\), a normalised wavenumber \(k=l/R\) is introduced for the analysis at \(R\gg 1\). Expanding \(Ra(\epsilon,k)\) in powers of \(\epsilon=1/R\) around \(\mathbf{c}=(0,k^{(0)})\), with \(k=k^{(0)}+\epsilon k^{(1)}\) and \(\sigma=0\), yields \[Ra(\epsilon,k)=Ra(\mathbf{c})+\bigg{(}\frac{\partial Ra}{\partial\epsilon}( \mathbf{c})+\frac{\partial Ra}{\partial k}(\mathbf{c})k^{(1)}\bigg{)} \epsilon+\mathcal{O}\big{(}\epsilon^{2}\big{)}. \tag{6}\] The last term of the right-hand side is \(0\) because we assume that the critical Rayleigh number satisfies \(\frac{\partial Ra}{\partial k}=0\) at \(k=k_{c}(0)=k^{(0)}\). At this point, we can write \(Ra=Ra^{(0)}+\epsilon Ra^{(1)}\), obtaining a zero-order critical value of \(Ra_{c}^{(0)}=1707.8\) and a first order correction of \(Ra_{c}^{(1)}=(1-3/2\lambda)Ra_{c}^{(0)}\). Thus, the critical \(Ra\) computed up to \(\mathcal{O}(1/R_{i})\) for non parabolic gravity profiles is \[Ra_{c}=Ra_{c}^{(0)}\bigg{[}1+\bigg{(}1-\frac{3\lambda}{2}\bigg{)}\frac{1}{R_{i }}\bigg{]}, \tag{7}\] with the corrective term \(1/R_{i}\) vanishing for the constant gravity profile by construction. According to Araki _et al._ (1994), using a different point \(x=r-R_{i}\) to compute the dimensionless gravity \(g(x)=g_{0}\bigg{(}(1-\lambda)\Big{(}\frac{R_{i}}{R_{i}+x}\Big{)}^{2}+\lambda \frac{R_{i}+x}{R_{i}}\bigg{)}\) modifies \(Ra_{c}\) as \[Ra_{c}(x)=Ra_{c}^{(0)}\bigg{[}1+\bigg{(}1-\frac{3\lambda}{2}\bigg{)}\frac{1-2x} {R_{i}}\bigg{]}, \tag{8}\] implying that for \(x=0.5\) the first order correction disappears. However, as we note in the following sections, we believe that a more accurate representation can be given by the introduction of an effective Rayleigh number \(Ra^{e}\) defined as \[Ra^{e}=\frac{1}{d}\int_{R_{i}}^{R_{o}}\mathrm{d}xRa(x)\equiv\frac{1}{d}\int_{R_{ i}}^{R_{o}}\mathrm{d}xRa\frac{g(x)}{g_{o}}, \tag{10}\] which clearly preserves the property of vanishing first order correction if applied to equation (11), while at the same time displaying a better agreement with the data. ## 4 Results ### Onset of convection The behaviour of \(Nu\) as a function of \(Ra\) and \(t\) varies accordingly to the range of \(Ra\) analysed. For \(Ra\) lower then the critical value for the onset of convection, \(Nu=1\). An interval of Rayleigh number around \(Ra_{c}\) is shown in figure 4 for the quadratic gravity profile \(g^{q}\). 
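A minimal numerical sketch of the effective Rayleigh number \(Ra^{e}\) defined above, evaluated for the gravity profiles of table 1, is given below; the trapezoidal quadrature and the normalisation of each raw profile by its value at the outer radius (so that \(\bar{g}(R_o)=1\)) are our reading of the definitions above rather than the authors' code.

```python
import numpy as np

# Geometry in units of the gap d = R_o - R_i = 1, with aspect ratio eta = R_i / R_o = 0.71
eta = 0.71
R_i, R_o = eta / (1.0 - eta), 1.0 / (1.0 - eta)
lam, R_m, dR = 10.0 / 29.0, 3.1, 0.5                  # mantle-like and parabolic parameters

profiles = {                                          # raw profiles of table 1
    "quadratic":   lambda r: 1.0 / r**2,
    "constant":    lambda r: np.ones_like(r),
    "linear":      lambda r: r,
    "mantle-like": lambda r: (R_i**3 / r**2) * (1.0 - lam) + lam * r,
    "parabolic":   lambda r: (r - R_m)**2 + dR,
}

def effective_Ra(Ra, g, n=2001):
    """Ra^e = Ra * (1/d) * integral over the gap of g(r)/g(R_o)."""
    r = np.linspace(R_i, R_o, n)
    g_bar = g(r) / g(np.array([R_o]))[0]              # normalise so that g_bar(R_o) = 1
    integral = np.sum(0.5 * (g_bar[1:] + g_bar[:-1]) * np.diff(r))
    return Ra * integral / (R_o - R_i)

# The measured onset for quadratic gravity, Ra_c ~ 1240, maps to an effective value near 1.7e3
print(effective_Ra(1240.0, profiles["quadratic"]))
```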
As described in section 2.2, in the increasing case we set initial conditions at \(Ra\leqslant 1200\), then the fluid is left free to evolve for a long enough time (\(5\times 10^{3}\) time units for figure 4a) until \(Ra\) is updated according to \(Ra_{new}=Ra_{old}+\Delta Ra\); the decreasing case starts from a much higher \(Ra\) and it is then update following \(Ra_{new}=Ra_{old}-\Delta Ra\), with \(\Delta Ra\) being a parameter used to refine the precision of the analysis (for figure 4, \(\Delta Ra=100\)). For this range of values of \(Ra\), \(Nu(\mathbf{r})\) is not time dependent at any fixed value of the Rayleigh number; therefore, as shown in figure 5, the mean value of it has been computed as in equation (1) to compare the increasing- and decreasing- evolutions. From the figure it is immediate to notice that the two approaches yield different results. Some details on this behaviour will be given in the following paragraphs, and a deeper analysis on the phenomenon is carried out in section 4.3. The critical value for the onset of convection is identified when \(Nu(Ra)\) becomes greater than one, and it is \(Ra_{c}^{q}\approx 1240\) for quadratic gravity. The same study has been performed for the other gravity profiles: results from this analysis are showed in table 2 and are in good agreement with results from Araki _et al._ (1994); however, for the Figure 5: Evolution of \(Nu\) as function of \(Ra\). \(\blacklozenge\) red diamonds for increasing-\(Ra\), \(\bullet\) black circles for decreasing-\(Ra\). Error bars shown when bigger than the symbols. Figure 6: Temperature profile (a,c,e) and spherical harmonic spectrum (b,d,f) for quadratic gravity in various situations. Temperature ranges from yellow (hotter) to blue (colder). For spectra, \(\rightharpoonup\) for \(C_{l}\) and reference scale on the left; \(\rightharpoonup\) for \(C_{m}\) and reference scale on the right. quadratic and mantle-like gravity profiles a difference of up to 10% has been observed with the expected value computed as in equation (12). This is to be expected, as the stability analysis is neglecting higher order corrections that, for our values of \(R_{i}\), can be of order of magnitude of 0.1. Therefore, as anticipated in section 3, an effective \(Ra^{e}\) -as defined in equation (11)- is introduced with the goal of averaging over the higher order terms. Table 2 shows the importance of the effective Rayleigh number by highlighting how \(Ra^{e}_{c}\) is very similar for almost all the cases analysed, differing from the theoretical critical value by less than 1%, with the only exception of the parabolic case, which was not covered by the previous study. We conclude this analysis by noticing that the critical \(Ra\) for the onset of convection does not show any dependence on starting condition. An important tool to understand the flow structure is represented by the analysis of the temperature profile and the corresponding spectral analysis. Both the temperature and spectral profile are shown in figure 6 for some values of \(Ra\) around the onset of convection for quadratic gravity. Details on the spectral analysis can be found in appendix A, here we just highlight that our focus is on the non-zero value of the degree for which the relative coefficient is higher, hereby named _main-degree_. As expected, in the pure conductive case at \(Ra<Ra_{c}\), the spectrum is almost zero for \(l>0\) and \(m>0\). The corresponding temperature profile shows an unperturbed flow status. 
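To make the spectral diagnostics concrete, a minimal sketch of how a spherical harmonic spectrum can be obtained from a temperature field sampled on a mid-gap spherical surface is given below; the quadrature, the latitude-longitude grid, and the definitions of \(C_l\) and \(C_m\) as power per degree and per order are assumptions here, since the exact convention is specified only in appendix A.

```python
import numpy as np
from scipy.special import sph_harm

def harmonic_spectrum(T, l_max=12):
    """Spherical harmonic power of a scalar field T sampled on an (n_polar, n_azimuthal)
    latitude-longitude grid on a spherical surface; returns C_l (power per degree)
    and C_m (power per order, summed over |m|)."""
    n_th, n_ph = T.shape
    theta = np.linspace(0.0, np.pi, n_th)                  # polar (colatitude) angle
    phi = np.linspace(0.0, 2.0 * np.pi, n_ph, endpoint=False)
    dth, dph = theta[1] - theta[0], phi[1] - phi[0]
    PH, TH = np.meshgrid(phi, theta)                       # both of shape (n_th, n_ph)
    weight = np.sin(TH) * dth * dph                        # surface element on the unit sphere
    C_l = np.zeros(l_max + 1)
    C_m = np.zeros(l_max + 1)
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            Y = sph_harm(m, l, PH, TH)                     # SciPy convention: azimuth first, then colatitude
            a_lm = np.sum(T * np.conj(Y) * weight)         # projection onto Y_l^m
            C_l[l] += np.abs(a_lm) ** 2
            C_m[abs(m)] += np.abs(a_lm) ** 2
    return C_l, C_m
```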
When convection is reached by increasing \(Ra\) over \(Ra_{c}\), the system enters in a new state, hereby named state \(\mathcal{S}_{9}\): the main-degree is 9 (and odd numbers dominate the \(l\) spectrum), and the temperature profile has a 9-pointed shape. Analysis of the same \(Ra\) reached from the decreasing case shows how the temperature profile has a 8-pointed shape, and the spectral analysis confirms that the main-degree is 8. This state is identified as state \(\mathcal{S}_{8}\). As we are observing a stationary condition, degeneracy for the eigenvalues is to be expected, and indeed from the data \(C_{m}\) is negligible for any \(m>0\). ### Non-stationary convection When \(Ra\) is further increased, a non-stationary behaviour appears. We can start by identifying two different regions, characterized by increasing values in the Rayleigh number: \(Ra_{1}\) and \(Ra_{2}\). We can define a region I for \(Ra_{1}\leqslant Ra<Ra_{2}\) and a region II for \(Ra\geqslant Ra_{2}\). In region I, the system first reaches a meta stable state \(\mathcal{S}^{\prime}_{9}\); then, after a stabilising time \(t_{s}\) inversely proportional to \(Ra\), it moves to a stable state \(\mathcal{S}^{\prime}_{8}\). In figure 7 we can observe the profile of Nusselt number as a function of time for a simulation with quadratic gravity at \(Ra=1700\); it is immediate to identify \(t_{s}\approx 1.2\times 10^{4}\). In the figure, the spectral analysis and temperature profile of this system before and after \(t_{s}\) are shown as well. As in the previous section, we notice how state \(\mathcal{S}^{\prime}_{9}\) has a main-degree 9, while state \(\mathcal{S}^{\prime}_{8}\) has a main-degree 8. To verify the relationship between \(\mathcal{S}_{9}\), \(\mathcal{S}^{\prime}_{9}\) and \(\mathcal{S}_{8}\), \(\mathcal{S}^{\prime}_{8}\), we analysed the evolution of a system with initial conditions in \(\mathcal{S}^{\prime}_{9}\) (or \(\mathcal{S}^{\prime}_{8}\)), and we decrease \(Ra\) to \(\bar{Ra}<Ra_{1}\): the new obtained state is fully equivalent to \(\mathcal{S}_{9}\) (or \(\mathcal{S}_{8}\)), thus we can \begin{table} \begin{tabular}{c||c|c|c|c} \hline \hline Gravity & Quadratic & Linear & Constant & Mantle-like & Parabolic \\ \hline \(Ra_{c}\) & \(1240\pm 1\%\) & \(2020\pm 1\%\) & \(1730\pm 1\%\) & \(1610\pm 1\%\) & \(2110\pm 1\%\) \\ \(Ra^{c}_{c}\) & \(1737\pm 1\%\) & \(1735\pm 1\%\) & \(1730\pm 1\%\) & \(1739\pm 1\%\) & \(1907\pm 1\%\) \\ \hline \hline \end{tabular} \end{table} Table 2: Critical Rayleigh \(Ra_{c}\) and effective critical Rayleigh \(Ra^{e}_{c}\) for different gravity profiles. \(Ra^{e}_{c}\) coincides for all the profiles except the parabolic. identify \(\mathcal{S}^{\prime}_{9}\equiv\mathcal{S}_{9}\) and \(\mathcal{S}^{\prime}_{8}\equiv\mathcal{S}_{8}\). The stability of \(\mathcal{S}_{8}\) has been verified by taking a system with starting condition \(\mathcal{S}_{8}\) and let it evolve at a different \(Ra\): as long as \(Ra<Ra_{2}\), the system will remain in state \(\mathcal{S}_{8}\). Given that \(t_{s}\) increases for lower values of \(Ra\), the exact value of \(Ra_{1}\) is hard to identify. Currently, our simulations last at least for \(2\times 10^{5}\) time units, approximately fifteen times more than the \(t_{s}\) identified for the current \(Ra_{1}\). Our results for both \(Ra_{1}\) and \(Ra_{2}\) are schematized in table 3 and show the same behaviour of previous cases: Figure 7: Analysis of quadratic gravity at \(Ra_{1}=1700\). 
In figure (a) the time needed for stabilising is approximately \(t_{s}\approx 1.2\times 10^{4}\). Legend for temperature of figures (b,c) and of spectra (d,e) as in figure 6. results are mostly coherent once effective value has been computed, with the parabolic case being an exception (as it was in the previous analysis). Summarising, region I can be identified by the presence of a meta-stable flow which, after a stabilising time \(t_{s}\) has passed, evolves into a stable time-independent behaviour1. Footnote 1: assuming it never touches region II during its evolution When \(Ra\) is increased past \(Ra_{2}\) a new behaviour appears. At first, there is a transitional phase where \(Nu(t)\) does not follow a predictable pattern; these oscillations are then replaced by a periodic dynamics that remains stable at any greater \(t\). In figure 8 the behavior for \(Nu(t)\) at \(Ra=2100\) for quadratic gravity is shown. In the instantaneous snapshot of the temperature profile (figure 7(b)) we can notice how plumes on the x-y plane are now significant. This phenomenon is also evident by looking at the spectrum profile (figure 7(d)): while \(C_{l}\) maintains approximately the same magnitude as lower \(Ra\) simulations (with a shift in main-degree to the range \(6-8\)), \(C_{m}\) is now significantly greater than \(0\) even for \(m\neq 0\) and presents the same alternating structure that characterised \(C_{l}\) before. Indeed, in this region the flow is not stationary; thus, the degeneracy of the Figure 8: Analysis of quadratic gravity at \(Ra_{2}=2100\). Legend for temperature and spectrum profiles as in figure 6. eigenvalues is lost. In figure 8c the Fourier transformation of the oscillatory evolution is shown (for further details, see Appendix A): two main frequencies can be highlighted, \(f=0.25\) (main peak) and \(f=0.5\). Further increases of \(Ra\) reduce the transitional period and introduce new frequencies in the spectrum; In figure 9, the case for \(Ra=5000\) for quadratic gravity shows that the harmonic spectrum has magnitude closer to \(\mathcal{O}(1)\) for both \(C_{l}\) and \(C_{m}\), with a main-degree at \(5\), and the frequency spectrum shows several peaks. At higher values of \(Ra\), periodicity disappears and a chaotic behaviour is obtained: here the frequency spectrum is almost continue. Surprisingly, the main-degree remains stable around \(l=5\) instead of following the previous behaviour of shifting towards lower numbers for higher values of \(Ra\). \(C_{m}\) remains highly excited for a large range of \(m\). Region II is then defined as the region for which the system presents a time-dependent behaviour at any time \(t\). This behaviour may be periodic when \(Ra\) is very close to \(Ra_{2}\), or almost chaotic when \(Ra\) is much higher. ### Hysteresis As anticipated in previous sections, the behaviour of the system varies accordingly to the starting conditions. This is in line with the analysis of Mannix & Mestel (2019), which predicted the stable solution to have a dependency on starting conditions, as well as the impact of \(l\pm 2\) and \(l\pm 1\) modes. From current analysis, we identified two main time-independent states, \(\mathcal{S}_{9}\) and \(\mathcal{S}_{8}\). The stability analyses of Araki _et al._ (1994) and Avila _et al._ (2013) for \(R_{i}=2.5\) suggest that, due to a degeneracy in the solutions, the onset of convection can occur for both \(l=8\) and \(l=9\), which corroborates our results. 
In figure 10a, we compared the convection efficiency of heat transfer for two different simulations: an increasing case starting from the onset of convection, and a decreasing case starting in region I and with enough time to reach state \(\mathcal{S}_{8}\). Results show that the second case has a worse efficiency (i.e. a smaller value of \(Nu\)): an explanation for this effect can be found in the temperature profile, which for the \(\mathcal{S}_{8}\) state has less protrusions between the two shells and thus a reduced heat transfer. At the present moment, however, it is unclear why the system prefers a configuration with a less efficient heat transfer. A possible interpretation of these results come from the comparison with Araki _et al._ (1994), which show that for this \(R_{i}\), lower modes \(l\) correspond to higher values of \(Ra\); thus, it is possible that once a sufficiently high \(Ra\) is reached, the system sits on the higher curve of \(l=8\) for its critical value, while for lower explored values of \(Ra\) the only available curve is for \(l=9\). It is worth noticing that using increasing or decreasing simulations is just a matter of convenience: as long as the initial conditions of the system are in state \(\mathcal{S}_{9}\) and the system does not evolve in \(Ra>Ra_{1}\), the dynamic is fully equivalent to the increasing case with initial conditions at \(Ra_{c}\). A similar result is obtained when looking at the time dependent behaviour of \(Ra\geqslant Ra_{2}\). Figure 10b shows the behaviour of the simulations under three different conditions: an evolution in region II at \(Ra_{2}\) (we name it _region II state_), and two evolutions at \(\bar{Ra}=2000<Ra_{2}\) differentiated by their initial conditions. Of these two, the first is done at initial conditions \(\bar{Ra}\) (_rest state_), the other starts from the periodic oscillations of the region II simulation, i.e. initial conditions \(Ra_{2}\) (_oscillating state_). This latter simulation keeps in its evolution the periodic pattern presents in _region II state_ and typical of higher values of \(Ra\). Its spectrum is qualitatively identical to the spectrum of the _region II state_, as shown by comparing figures 10f and 8d, and the same holds true for the temperature profile, showed in figures 10d and 8b. On the other side, the spectrum and temperature profile of the _rest state_ (not shown here) is qualitatively equivalent to any other fluid evolved completely in region I. The periodic oscillations remain a feature of the system even when \(Ra\) is further decreased, but their effect on \(Nu\) becomes progressively smaller and it is negligible when \(Ra\lesssim Ra_{1}\), as shown by figure 9(c): the difference between two evolutions with decreasing \(Ra\) is large at the beginning, but becomes almost zero when \(Ra\) approaches \(Ra_{c}\). Even if the effect on heat transfer is negligible, the system keeps partial memory of its initial Figure 9: Analysis of quadratic gravity at \(Ra=5000\) (a,b,e,f) and \(Ra=10000\) (c,d,g,h). Legend for temperature and spectrum profiles as in figure 6. Figure 10: Left side: initial conditions comparison (vertical black lines in (a,c) indicate changes in \(Ra\)); right side: behaviour around \(Ra_{2}\). (a): decreasing from \(Ra>Ra_{1}\) (time scale reversed; \(\cdot\cdot\cdot\cdot\cdot\) increasing from \(Ra_{c}\). (b): \(\cdot\cdot\cdot\cdot\)_oscillating state_; \(\cdot\cdot\cdot\cdot\cdot\)_rest state_; \(\cdot\cdot\cdot\cdot\)_region II state_. 
(c): decreasing from \(Ra<Ra_{2}\); \(\cdot\cdot\cdot\cdot\) decreasing from \(Ra>Ra_{2}\). Legend for temperature and spectrum profiles as in figure 6. conditions in the spectrum: in a simulation with oscillating initial conditions the spectrum has a non-zero oscillating value of \(C_{m}\) even for very low \(Ra\), being 5 orders of magnitude bigger in figure (c)c than the comparable simulation at \(Ra\approx Ra_{c}\) of figure (b)c. While in general we expect to see \(Nu(Ra=x)>Nu(Ra=y)\) when \(x>y\), we can notice that in our simulations this order is not respected when comparing \(Nu(Ra=Ra_{2})\) and \(Nu(Ra=\bar{Ra}_{rest})\) (figure (b)b). As previously noted, the rise of a periodic oscillation and the change in the spectrum are correlated to a worse heat transfer, this justifies the inversion in \(Nu\). Indeed, when comparing the two oscillating solutions, we restored the expected order: \(Nu(Ra=Ra_{2})>Nu(Ra=\bar{Ra}_{oscillating})\). The strong dependence on the initial conditions seem to be an intrinsic property of the system that does not depend on the gravity profile chosen: preliminary tests have been run for all the other profiles and, as already shown by table 3, the existence of those regions is not affected by the chosen gravity. ### Higher Pr Number As anticipated in section 2, the same configurations previously analysed have been studied also for water, characterised by \(Pr_{water}=7.1=10Pr_{air}\). Looking at equation (2), we notice how the main direct effect of a variation in Prandtl number is a larger coefficient in front of \(\nabla^{2}\mathbf{u}\), i.e. the viscous effects are stronger for water compared to air. Thus, for inertial terms to overcome viscosity, an higher \(Ra\) is needed. Since \(Pr_{water}/Pr_{air}=10\), we expect \(Ra_{water}\) to be 10 times bigger to obtain the same time-dependent behaviours, while the critical value for the onset of convection will be unaffected by the change of \(Pr\). The system has been tested for different grids, and the same grid used for simulations at lower \(Pr\) has been found to be accurate. The _increasing / decreasing_ approach has been tried for this value of \(Pr\), but it has been found rather ineffective due to the much larger time needed to reach a stable state, especially for the _increasing_ case, thus the study has been done by running several simulations, each at a (different) fixed \(Ra\). In most of the simulations the fluid is initialised from rest, but to analyse hysteresis for longer simulations sometimes an old state has been used as starting condition for the new run. Using \(\mathbf{g}^{q}\) as gravity profile the onset of convection, computed as before by the analysis of the Nusselt number, happens at the same value of \(Ra_{c}\approx 1250\), equivalent to an effective value of \(Ra_{c}^{e}=1750\). This result, in line with the predictions, shows that the onset of convection has no direct dependence on the value of \(Pr\). When \(Ra\) is much higher than \(Ra_{c}\), we observe again the time dependent behaviour of region II appearing at \(Ra_{2,water}\approx 21000\), that respects our prediction of \(Ra_{2,water}=10Ra_{2,air}\). In the region I between the onset of convection and the time dependent behaviour, a series of different regimes appear. Looking at the spectrum, we find the same configuration found for air, with a state \(\mathcal{S}_{9}\) having at spectrum peaked at \(C_{l}=9\). 
This state is stable for a much larger range of \(Ra\) compared to air but eventually, as \(Ra\approx 4000=Ra_{1}\), the system jumps to a new equilibrium state \(\mathcal{S}_{8}\), with a spectrum peaked at \(C_{l}=8\). In the same fashion as air, decreasing \(Ra\) from a system in \(\mathcal{S}_{8}\) does not bring it back to \(\mathcal{S}_{9}\): hysteresis is present also for water. Compared to air simulations, however, the system has a larger space to explore in \(Ra\) before reaching region II at \(Ra_{2}\). This leads to the appearance of new states in the meta-stable region I, we identify them as \(\mathcal{S}_{7}\), \(\mathcal{S}_{6}\) and \(\mathcal{S}_{5}\) for systems with main-degree 7,6 and 5 respectively. A summary of the first measured \(Ra\) for each state is given in table 4. As happened for air, when the main peak is an odd/even number then the all the odd/even degrees are peaked as well. As noted in the previous section for air, we believe that the curves of \(Ra(l)\) shown in Araki _et al._ (1994) can offer an explanation about this behaviour. Increasing \(Ra\) above \(Ra_{2}\) brings back the same periodic oscillation observed for region II of air, with the harmonic spectrum keeping its peak at \(l\approx 5\) and the rise of more and more characteristic frequencies. Surprisingly, the state \({\cal S}_{5}\) appears to be the most stable both for water (which reaches it after exploring all the states from 9 to 5 in region I) and for air (which reaches it only in region II). Eventually, for \(Ra\) high enough, the frequency spectrum becomes continuum and the system moves toward a turbulent state. ## 5 Conclusion In this paper, a characterization of Rayleigh-Benard convection for fluids between spherical shells has been carried out using a three-dimensional second order finite difference scheme in spherical coordinates. By setting fixed temperature at the shells and fixed radius ratio, we can explore different configurations by varying Prandtl number, Rayleigh number, and the radial gravity profile. The results for the onset of convection are compared with linear stability analysis, which predicts a zero-order critical Rayleigh number \(Ra^{(0)}\approx 1708\). The study up to first order correction for different gravity profiles yields various results: while for constant and linear gravity profiles the difference between data and theory is very small, the quadratic and mantle-like profiles differ for up to 10% from the expected value. However, perfect agreement is restored when an averaged value \(Ra^{e}\approx 1730\) is computed: thanks to it the first order correction to \(Ra^{(0)}\) vanishes and the discrepancy between the measured values and the linear analysis is kept below 1%. This result holds true for both air (\(Pr_{air}=0.71\)) and water (\(Pr_{water}=7.1\)), which is an expected behaviour, given that no dependency on \(Pr\) is present. The effective Rayleigh number can also be used to identify all the subsequent states the system explores. Our criterion for the characterization of different states is given by the analysis of the spherical harmonics of the system. At the onset of convection, the system has a harmonic spectrum peaked at degree \(l=9\), so we identify this state as \({\cal S}_{9}\). Increasing \(Ra\) leads to the rise of new situations. 
For air, we first identify a region I (for \(Ra^{e}\geq Ra_{1}^{e}=2350\)) where the state \({\cal S}_{9}\) becomes unstable and the system, given enough time, moves to a new configuration \({\cal S}_{8}\) where the main-degree of the harmonic spectrum is \(l=8\) (\(C_{m}\) remains unexcited in this region). Region I for water starts at a higher Rayleigh number (\(Ra_{1}^{e}\approx 5550\)) but the system remains in this region for a larger interval of \(Ra\); therefore, while both fluids start from state \({\cal S}_{9}\) and reach state \({\cal S}_{8}\) \begin{table} \begin{tabular}{c||c|c|c|c|c|c} \hline & \(Ra_{{\cal S}_{9}}\) & \(Ra_{{\cal S}_{8}}\) & \(Ra_{{\cal S}_{7}}\) & \(Ra_{{\cal S}_{6}}\) & \(Ra_{{\cal S}_{5}}\) & \(Ra_{2}\) \\ \hline \(Ra^{q}\) & 1250 & 4000 & 6000 & 11000 & 16000 & 21000 \\ \(Ra^{e}\) & 1740 & 5550 & 8350 & 15300 & 22300 & 29000 \\ \end{tabular} \end{table} Table 4: Values of \(Ra\) for the first occurrence of different states when \(Pr=7.1\) for quadratic gravity on the first line, and effective value on the second line. \(Ra_{{\cal S}_{9}}\) is equivalent to \(Ra_{c}\) and \(Ra_{{\cal S}_{8}}\) is equivalent to \(Ra_{1}\). Confidence interval at \(\pm 5\%\) for \(Ra_{c}\) and \(Ra_{2}\), \(\pm 10\%\) for the others. increasing \(Ra\) for water leads to the emergence of new states \(\mathcal{S}_{7},\mathcal{S}_{6}\) and \(\mathcal{S}_{5}\) where the main-degree is, respectively, \(7,6\) and \(5\). When \(Ra\) is increased beyond a threshold \(Ra_{2}\), the system enters region II, where a time-dependent behaviour is observed. Being this behaviour heavily influenced by \(Pr\), we expect \(Ra_{2,water}\) to be about \(10\) times bigger than \(Ra_{2,air}\); indeed we have \(Ra_{2,air}^{e}=2900\) and \(Ra_{2,water}^{e}=29000\). In this region, the harmonic spectrum has main-degree \(l=5\) at any \(Ra\) for both water and air, while the values of \(C_{m}\) become progressively larger. In this region the analysis of the frequency spectrum can give interesting information. For \(Ra\) close to \(Ra_{2}\), only few frequencies are excited and a clearly periodic behaviour can be observed in the flow dynamic. Increasing \(Ra\) leads to new peaks in the frequency spectrum and eventually a continuum spectrum is attained at very high values of \(Ra\), while the periodicity in the dynamic disappears. We observed that, for both water and air, lower degree states are stable, and decreasing \(Ra\) to previously explored values does not bring back the higher-degree configurations. Moreover, we discovered that lower \(l\) is associated with a reduced heat transfer and thus a lower value of \(Nu\). We also noted that, if the system explored region II during its evolution, time dependent features remain part of the dynamic even when \(Ra\) is lowered, and \(C_{m}\) remains excited even for values of \(Ra\) very close to the onset of convection. Based on these observations, we can claim that the system presents hysteresis and its heat transfer is heavily dependent on its starting conditions. ## Acknowledgements We acknowledge the support of the GNFM group of INDAM and the national e-infrastructure of SURFsara, a subsidiary of SURF cooperation. ## Appendix A Appendix: Harmonic analysis and spectrum This appendix describes the spectral analysis procedure used in this study. ### Fourier decomposition Fourier transforms are a fundamental tool in mathematics and physics. 
A Fourier transform of a function of time is a (in general complex valued) function of frequency, and its magnitude or intensity gives information to the most _common_ frequencies of the original function. Let \(f\) a periodic square-integrable function \(\hat{s}\in L^{2}(T)\), with T being the unitary circumference. Then its Fourier series is \[\sum_{n=-\infty}^{n=\infty}s(n)e^{int} \tag{10}\] where \(i\) is the imaginary unit and the Fourier coefficients \(s(n)\) are defined as \[s(n)=\frac{1}{2\pi}\int_{-\pi}^{\pi}\hat{s}(t)e^{-int}\mathrm{d}t=\frac{1}{2 \pi}\int_{-T/2}^{T/2}\hat{s}(t)e^{-i2\pi nt/T}\mathrm{d}t. \tag{11}\] The coefficients can be seen as discretized samplings of the Fourier transform at intervals \(1/T\), so we can define the Fourier transform \(s(f)\) as \[s(f)=\frac{1}{2\pi}\int_{\mathbb{R}}\hat{s}(t)e^{-ift}\mathrm{d}t. \tag{12}\] ### Spherical harmonic decomposition Given a square-integrable function \(f:S^{2}\to\mathbb{C}\) on the unit sphere \(S^{2}\) its spherical harmonic decomposition can be written as \[f(\phi,\theta)=\sum_{l=0}^{\infty}\sum_{m=-l}^{l}C_{l}^{m}Y_{l}^{m}(\phi,\theta) \tag{10}\] where \(Y_{l}^{m}(\phi,\theta)\) is the spherical harmonic of degree \(l\) and order \(m\) (which represent the wavenumber along a meridian and the equatorial plane), and \(C_{l}^{m}\) its coefficient, and the expansion holds in the sense of convergence in \(L^{2}\) of the sphere. \(Y_{l}^{m}(\phi,\theta)\) can be defined in terms of associated Legendre polynomials \(P_{l}^{m}\) by \[Y_{l}^{m}(\phi,\theta)=\sqrt{\frac{(2l+1)(l-m)!}{4\pi(l+m)!}}P_{l}^{m}(cos\phi )e^{im\theta}, \tag{11}\] where \(P_{l}^{m}(x)\) satisfies the general Legendre equation \[\frac{\mathrm{d}}{\mathrm{d}x}\bigg{[}(1-x^{2})\frac{\mathrm{d}}{\mathrm{d}x}P _{l}^{m}(x)\bigg{]}+\bigg{[}l(l+1)-\frac{m^{2}}{1-x^{2}}\bigg{]}P_{l}^{m}(x)=0. \tag{12}\] For orthonormalised harmonics, as the one defined in equation (11), the coefficients can be computed by \[C_{l}^{m}=\int_{\Omega}f(\phi,\theta)Y_{l}^{m*}(\phi,\theta)\mathrm{d}\Omega \tag{13}\] where \(\Omega\) is the solid angle. This tool has been used in this study to perform a spectral analysis of the averaged square temperature \(T^{2}(\phi,\theta)\) to better characterize the fluid behaviour for the various simulations. Coefficients of equation (13) are obtained by using the SPHEREPACK library (Adams & Swarztrauber, 1999). In the analysis, the averaged value of coefficients has been used, i.e. \(C_{l}=c_{0}\langle\sum_{m=-l}^{l}C_{l}^{m}\rangle\) (with \(c_{0}\) normalization factor) and \[C_{m}=\frac{1}{n-m+1}\langle\sum_{l=m}^{n}C_{l,m}\rangle,\quad n=\min(N_{\varphi }-1,(N_{\theta}+1)/2) \tag{14}\] (the angular parenthesis indicate time and radial average). In the manuscript we use the terminology _main-degree_ when referring to \(l_{m}=\arg\max_{l\neq 0}(C_{l})\), and _main-order_ for \(m_{m}=\arg\max_{m\neq 0}(C_{m})\).
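To make this diagnostic concrete, the sketch below projects a scalar field sampled on a colatitude-longitude grid onto spherical harmonics through the coefficient integral \(C_{l}^{m}=\int_{\Omega}f\,Y_{l}^{m*}\,\mathrm{d}\Omega\) and reports the main-degree \(l_{m}\). It is only an illustrative sketch under my own assumptions: it uses `scipy.special.sph_harm` (a legacy SciPy routine) with a naive quadrature instead of the SPHEREPACK library used in the paper, it simply sums \(|C_{l}^{m}|\) over \(m\) rather than performing the time and radial averages, and the test field is synthetic rather than simulation output.

```python
import numpy as np
from scipy.special import sph_harm   # sph_harm(m, l, azimuth, colatitude)

def harmonic_spectrum(field, l_max):
    """Project a scalar field sampled on a (colatitude, azimuth) grid onto
    spherical harmonics and return an m-summed degree spectrum C_l."""
    n_phi, n_theta = field.shape                                  # colatitude x azimuth samples
    phi = np.linspace(0.0, np.pi, n_phi)                          # colatitude
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)  # azimuth
    PHI, THETA = np.meshgrid(phi, theta, indexing="ij")
    dA = np.sin(PHI) * (np.pi / (n_phi - 1)) * (2 * np.pi / n_theta)  # solid-angle element

    C_l = np.zeros(l_max + 1)
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            Y = sph_harm(m, l, THETA, PHI)              # Y_l^m evaluated on the grid
            c_lm = np.sum(field * np.conj(Y) * dA)      # quadrature of  f * conj(Y_l^m) dOmega
            C_l[l] += np.abs(c_lm)
    return C_l

# Synthetic field dominated by a degree-9 harmonic (illustrative, not simulation data).
phi = np.linspace(0.0, np.pi, 64)[:, None]
theta = np.linspace(0.0, 2 * np.pi, 128, endpoint=False)[None, :]
field = np.real(sph_harm(4, 9, theta, phi)) + 0.1 * np.real(sph_harm(1, 3, theta, phi))

C_l = harmonic_spectrum(field, l_max=12)
main_degree = int(np.argmax(C_l[1:]) + 1)   # arg max over l != 0
print("main-degree l_m =", main_degree)      # expected: 9
```

Because the synthetic field is built mostly from a degree-9 harmonic, the reported main-degree is 9, mirroring the \(\mathcal{S}_{9}\) label used for the spectra in the main text.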
2309.08003
Generalized Decomposition of Multivariate Information
Since its introduction, the partial information decomposition (PID) has emerged as a powerful, information-theoretic technique useful for studying the structure of (potentially higher-order) interactions in complex systems. Despite its utility, the applicability of the PID is restricted by the need to assign elements as either inputs or targets, as well as the specific structure of the mutual information itself. Here, we introduce a generalized information decomposition that relaxes the source/target distinction while still satisfying the basic intuitions about information. This approach is based on the decomposition of the Kullback-Leibler divergence, and consequently allows for the analysis of any information gained when updating from an arbitrary prior to an arbitrary posterior. Consequently, any information-theoretic measure that can be written as a Kullback-Leibler divergence admits a decomposition in the style of Williams and Beer, including the total correlation, the negentropy, and the mutual information as special cases. In this paper, we explore how the generalized information decomposition can reveal novel insights into existing measures, as well as the nature of higher-order synergies. We show that synergistic information is intimately related to the well-known Tononi-Sporns-Edelman (TSE) complexity, and that synergistic information requires a similar integration/segregation balance as a high TSE complexity. Finally, we end with a discussion of how this approach fits into other attempts to generalize the PID and the possibilities for empirical applications.
Thomas F. Varley
2023-09-14T19:26:46Z
http://arxiv.org/abs/2309.08003v2
# Generalized Decomposition of Multivariate Information ###### Abstract Since its introduction, the partial information decomposition (PID) has emerged as a powerful, information-theoretic technique useful for studying the structure of (potentially higher-order) interactions in complex systems. Despite its utility, the applicability of the PID is restricted by the need to assign elements as either "sources" or "targets", as well as the specific structure of the mutual information itself. Here, we introduce a generalized information decomposition that relaxes the source/target distinction while still satisfying the basic intuitions about information. This approach is based on the decomposition of the Kullback-Leibler divergence, and consequently allows for the analysis of any information gained when updating from an arbitrary prior to an arbitrary posterior. Consequently, any information-theoretic measure that can be written in as a Kullback-Leibler divergence admits a decomposition in the style of Williams and Beer, including the total correlation, the negentropy, and the mutual information as special cases. In this paper, we explore how the generalized information decomposition reveal novel insights into existing measures, as well as the nature of higher-order synergies. We show that synergistic information is intimately related to the well-known Tononi-Sporns-Edelman (TSE) complexity, and that synergistic information requires a similar integration/segregation balance as a high TSE complexity. Finally, we end with a discussion of how this approach fits into other attempts to generalize the PID and the possibilities for empirical applications. ## 1 Introduction Since it was introduced by Claude Shannon in the mid-20\({}^{\text{th}}\) century, information theory has emerged as a kind of _lingua franca_ for the formal study of complex systems [1]. A significant benefit of information theory is that it is particular effective for interrogating the structure of interactions between "wholes" and "parts". This is a fundamental topic in modern complexity theory, as a defining feature of complex systems is the emergence of higher-order coordination between large numbers of simpler elements. Appearing in fields as diverse as economics (where economies emerge from the coordinated interactions between firms) to neuroscience (where consciousness is thought to emerge from the coordinated interaction between neurons), the question of higher-order structures in multivariate systems is of central importance in almost every branch of the so-called "special sciences" above physics. Information theory has been used to great effect in formalizing rigorous, domain-agnostic, definitions of "emergence" [2, 3] and exploring what the novel or unexpected consequences of emergence might be [4, 5]. These lines of research are active and fruitful, however, many of the techniques that have been used are limited to particular special cases specific kinds of dependency, which makes a general theory of higher-order information in complex systems difficult to achieve. One of the most powerful tools in understanding the informational relationships between wholes and parts has been the partial information decomposition [6, 7] (PID), which decomposes the mutual information that a set of inputs collectively disclose about a target into redundant, unique, and synergistic "atomic" components (or higher-order combinations thereof). 
Since its proposal in 2011 by Williams and Beer, the PID has been fruitfully applied to a diverse set of complex systems, including hydrology [8], neuroscience [9, 10], medical imaging [11], the physics of phase transitions [12], machine learning [13, 14], economics [15], and clinical medicine [16, 17].

The PID has a handful of limitations, however. For instance, it requires designating a subset of elements as "inputs" and a single element as a "target." This can be a natural distinction in some cases (such as multiple pre-synaptic neurons that synapse onto a single downstream neuron [9]); however, this restriction makes a general analysis of "wholes" and "parts" more difficult, as the PID is inherently focused on how two different subsets of a system (the inputs and targets) interact. It would be useful to be able to relax the requirement of a firm input/target distinction and analyse the entire system _qua_ itself. The second limitation is that the mutual information refers to a very particular kind of dependency: it is an explicitly bivariate special case of the more general Kullback-Leibler divergence [18] and so may not be applicable to all circumstances. The mutual information is generally introduced as the information gained when updating to the true, joint distribution of elements from a hypothetical maximum-entropy prior distribution where all elements are independent (for more formal discussion, see below). While this is a natural comparison in many contexts, it is not the only useful definition of information. For example, it may not always make sense to have a prior of maximum entropy; perhaps our initial beliefs about a system are more nuanced or informed by prior knowledge.

These limitations have been previously recognized: one attempt to relax the strong input/target distinction was the development of the partial entropy decomposition (PED) by Ince, and later Finn and Lizier (albeit under a different name) [19, 20]. Unlike the PID, which decomposes the joint mutual information, the PED uses the same logic to decompose the joint entropy of the whole system, without needing to classify subsets of the system. The PED has been used to analyse neural systems [21, 22]; however, it does not solve all the problems detailed above. While it does relax the input/target distinction, it does little to address the second limitation of the PID: since it is a decomposition of entropy, not of information directly, it cannot be used as a general approach to multivariate information. The interpretation of the decomposition is completely different, and consequently, so is the behaviour. For example, for a set of two elements \(X\) and \(Y\), if \(X\bot Y\), the information in the pair should be zero bits (since they are independent), but the entropy \(H(X,Y)\) is maximal, and the distribution of partial entropy atoms reflects that (for a more detailed discussion of the PED in the context of maximum entropy systems, see [22] Supplementary Material).

Here, I will introduce a generalized decomposition of multivariate information that satisfies the intuitive understanding of what information is, does not require defining sources and targets, and which recovers the original, directed, PID as a special case. This generalized information decomposition (GID) is based on the decomposition of the Kullback-Leibler divergence [23], and the local partial entropy decomposition.
This generalized information decomposition can be understood in a Bayesian framework as decomposing the information gained when one updates their prior beliefs to a new posterior, and as a consequence, induces a decomposition of any information-theoretic metric that can be written as a Kullback-Leibler divergence (mutual information, total correlation, negentropy, etc), as well as decomposing the information divergence between arbitrary distributions. Importantly, it does not enforce any particular constraints on the prior and the posterior, they may be arbitrary distributions, unlike in the case of the PID. First, I will introduce the necessary building-blocks (the Kullback-Leibler divergence, the local entropy decomposition), and then explore a special case to demonstrate the GID: the decomposition of the total correlation. Finally, I will discuss how the original PID of Williams and Beer can be re-derived and the possibility of future applications of this work. ## A Note on Notation In this piece, we will make reference to multiple different kinds of random variables, at multiple scales, as well as multiple distributions. Here we will briefly outline the notational conventions used here. Probability distributions will be represented using blackboard font, typically using \(\mathbb{Q}\) for a prior belief (in the context of a Bayesian prior-to-posterior update), and \(\mathbb{P}\) for the posterior or a general probability distribution. We will use \(\mathbb{E}_{\mathbb{P}(x)}[f(X)]\) to indicate the expected value operator of some function \(f(x)\), computed with respect to the probability distribution \(\mathbb{P}(X)\). Univariate random variables will be denoted with uppercase italics (e.g. \(X\)), multivariate random variables will be denoted with uppercase boldface (e.g. \(\mathbf{X}=\{X_{1},\ldots,X_{N}\}\)). Specific (local) realizations of univariate or multivariate random variables will be denoted with their respective lowercase fonts (e.g. \(X=x\) or \(\mathbf{X}=\mathbf{x}\)). Functions (e.g. the mutual information, the entropy, the Kullback-Leibler divergence, etc) will follow the same convention for expected and local values. Background Information, in the most general sense, refers to the reduction in uncertainty associated with observation. For example, consider rolling a fair, six-sided die. Initially, the value is unknown and all six values are equiprobable. However, upon learning that the value is _even_, three possibilities are immediately ruled out (the odd numbers one, three, and five), and the uncertainty about the value is decreased. The difference between the initial uncertainty and the final uncertainty after ruling out possibilities is the information is the information about the state of the die that is disclosed by learning the parity of the state. Uncertainty about the state of a (potentially multidimensional) random variable is typically quantified using the Shannon entropy: \[H(X)=-\sum_{x\in\mathcal{X}}\mathbb{P}(x)\log\mathbb{P}(x) \tag{1}\] Where \(\mathbb{P}(x)\) is the probability of observing \(X=x\). When we gain information (or reduce our uncertainty), we are implicitly comparing two different probability distributions: a _prior_ distribution (such as our initial uncertainty about the state of the dice) and a _posterior_ distribution (our uncertainty about the state of the die after excluding the odd numbers). 
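A tiny numerical sketch (mine, not from the paper) makes the die example and Eq. 2 concrete: the prior is uniform over the six faces, the posterior keeps only the even faces, and the information gained is the difference of the two Shannon entropies.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in bits of a discrete distribution (Eq. 1)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # convention: 0 * log(0) = 0
    return float(-np.sum(p * np.log2(p)))

# Prior: a fair six-sided die, all faces equiprobable.
prior = np.ones(6) / 6
# Posterior: we learn the value is even, so only three faces remain possible.
posterior = np.array([0, 1, 0, 1, 0, 1]) / 3

print(f"H_prior     = {shannon_entropy(prior):.3f} bits")      # log2(6) ~ 2.585
print(f"H_posterior = {shannon_entropy(posterior):.3f} bits")  # log2(3) ~ 1.585
print(f"gain        = {shannon_entropy(prior) - shannon_entropy(posterior):.3f} bits")  # 1.000
```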
Following van Enk [24], we could heuristically describe information we gain about \(X\) generally as: \[Information(X)=H^{prior}(X)-H^{posterior}(X) \tag{2}\] The well-known Shannon mutual information is just a special case of this broader definition. The mutual information between \(X\) and \(Y\) can be written as: \[I(X;Y)=H(X)-H(X|Y) \tag{3}\] Here we can see that \(H(X)\) is our \(H^{prior}\), describing our initial beliefs about \(X\) (i.e. that it is independent of \(Y\)). The second term \(H(X|Y)\) is our \(H^{posterior}\), describing our updated beliefs about \(X\) after learning \(Y\). The difference between these is the information gained when updating from a prior belief that \(X\bot Y\) to the posterior based on the true joint distribution. The mutual information is a special kind of dependence between \(X\) and \(Y\), however; where the prior and posterior are related by the particular operation of marginalizing the joint. If we want a more general measure of information-gain for arbitrary priors and posteriors, we need a different measure: the Kullback-Leibler divergence. ### Kullback-Leibler Divergence For some multidimensional random variable \(\mathbf{X}=\{X_{1}\dots X_{N}\}\), we can compute the information gained when we update from our prior \(\mathbb{Q}(\mathbf{X})\) to our posterior \(\mathbb{P}(\mathbf{X})\) with the Kullback-Leibler divergence: \[D(\mathbb{P}||\mathbb{Q}):=\sum_{\mathbf{x}\in\boldsymbol{\mathcal{X}}} \mathbb{P}(\mathbf{x})\log\frac{\mathbb{P}(\mathbf{x})}{\mathbb{Q}(\mathbf{ x})}. \tag{4}\] The \(D(\mathbb{P}||\mathbb{Q})\) can be understood as the expected value of the log-ratio \(\mathbb{P}(\mathbf{x})/\mathbb{Q}(\mathbf{x})\) (computed with respect to the posterior probability distribution \(\mathbb{P}(\mathbf{X})\)): \[D(\mathbb{P}||\mathbb{Q})=\mathbb{E}_{\mathbb{P}(\mathbf{X})}\bigg{[}\log \frac{\mathbb{P}(\mathbf{x})}{\mathbb{Q}(\mathbf{x})}\bigg{]}. \tag{5}\] This can be re-written in explicitly information-theoretic terms by converting the log ratio into local entropies. Recall that, for some outcome \(\mathbf{x}\in\boldsymbol{\mathcal{X}}\), the local entropy (or surprise) associated with observing \(\mathbf{X}=\mathbf{x}\) is given by: \[h^{\mathbb{P}}(\mathbf{x})=-\log\mathbb{P}(\mathbf{x}). \tag{6}\] The superscript \(h^{\mathbb{P}}(\mathbf{x})\) denotes that the local entropy is being computed with respect to the distribution \(\mathbb{P}(\mathbf{X})\), rather than \(\mathbb{Q}(\mathbf{X})\). From this, simple algebra shows that: \[D(\mathbb{P}||\mathbb{Q})=\mathbb{E}_{\mathbb{P}(\mathbf{X})}\bigg{[}h^{ \mathbb{Q}}(\mathbf{x})-h^{\mathbb{P}}(\mathbf{x})\bigg{]}. \tag{7}\] It is worth considering this in some detail, as it can help build intuition about what the Kullback-Leibler divergence really tells us. The term \(h^{\mathbb{Q}}(\mathbf{x})-h^{\mathbb{P}}(\mathbf{x})\) quantifies how much _more_ surprised would we be to see \(\mathbf{X}\)=\(\mathbf{x}\) if we were modeling \(\mathbf{X}\) with the distribution \(\mathbb{Q}(\mathbf{X})\) rather than \(\mathbb{P}(\mathbf{X})\). This is obviously analogous to the intuitive definition given above in Eq. 2, although here we comparing each of the local realizations of \(\mathbf{X}\) first and then averaging, rather than averaging first and then subtracting. By Jensen's Inequality, this value must always be positive. 
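Since the equivalence of Eq. 4 and Eq. 7 is central to everything that follows, it is worth verifying numerically. The short sketch below (the toy prior and posterior are my own, purely illustrative) computes \(D(\mathbb{P}||\mathbb{Q})\) both from the log-ratio and as the \(\mathbb{P}\)-expected difference of local entropies, and confirms the two agree.

```python
import numpy as np

def kl_divergence(p, q):
    """D(P||Q) computed directly from the log-ratio (Eq. 4), in bits."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def kl_from_local_entropies(p, q):
    """D(P||Q) as the P-expectation of h_Q(x) - h_P(x) (Eq. 7)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    h_q = -np.log2(q[mask])   # local surprise if we model X with the prior Q
    h_p = -np.log2(p[mask])   # local surprise under the posterior P
    return float(np.sum(p[mask] * (h_q - h_p)))

# Toy prior and posterior over four joint states (purely illustrative).
q = np.array([0.25, 0.25, 0.25, 0.25])   # maximum-entropy prior
p = np.array([0.40, 0.10, 0.10, 0.40])   # posterior after observing data

print(kl_divergence(p, q), kl_from_local_entropies(p, q))   # identical, non-negative
```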
So far, we have considered our multivariate random variable \(\mathbf{X}\) as a single unit: information about the _whole_ is gained as a lump sum and we have very little insight into how that information is distributed over the various \(X_{i}\in\mathbf{X}\). This is a significant limitation, as complex systems typically show a wealth of different information-sharing modes. For example, a natural question to ask might be; "what information gained is specific to \(X_{1}\)?" Or "what information gained is represented in the joint state of \(X_{1}\) and \(X_{2}\) together and no simpler combination of elements?" The standard machinery of classical information theory struggles to address these questions, and doing so rigorously requires leveraging recent developments in modern, multivariate information theory. ### Partial Entropy Decomposition To understand how information is distributed over the various components of \(\mathbf{X}\), we begin by describing the _partial entropy decomposition_ (PED). The PED was first proposed by Ince [19], as an extension of the more well-known partial information decomposition (PID) that relaxes the requirement of an input/target distinction [6]. The PED begins with the same axiomatic foundation as the PID, but applies it to the multivariate entropy of a distribution, rather than the multivariate mutual information. Following its introduction, the PED was extensively explored by Finn and Lizier [20] (albeit under a different name), and more recently by Varley et al. in the context of inferring higher-order structure in complex systems [22]. For more details about the PED, see the cited literature, although we will provide a minimal introduction here. Consider a multivariate random variable \(\mathbf{X}\)=\(\{X_{1},\ldots,X_{k}\}\). The joint entropy \(H(\mathbf{X})\) quantifies the average amount of information required to specify the unique state of \(\mathbf{X}\): \[H(\mathbf{X})=-\sum_{\mathbf{x}\in\mathbf{X}}\mathbb{P}(\mathbf{x})\log \mathbb{P}(\mathbf{x}) \tag{8}\] This value is an expected value over the support set \(\mathbf{\mathfrak{X}}\): \(H(\mathbf{X})=\mathbb{E}_{\mathbb{P}(\mathbf{X})}[-\log\mathbb{P}(\mathbf{x})]\). For any individual realization \(\mathbf{x}\) we can compute the _local entropy_ (or surprisal) as \(h(\mathbf{x})=-\log\mathbb{P}(\mathbf{x})\). This value \(h(\mathbf{x})\) quantifies how much uncertainty about \(\mathbf{X}\) is resolved upon learning that \(\mathbf{X}\)=\(\mathbf{x}\). From here on, we will describe the local partial entropy decomposition, although the logic is the same for the expected value as well, and local partial entropy atoms can be related to expected partial entropy atoms in the usual way. The local entropy \(h(\mathbf{X})\) is a scalar measure, describing the information content in \(\mathbf{x}\) as a single entity and provides little insight into how that information is distributed over the structure of \(\mathbf{x}\). To get a finer-grained picture of how the various components of \(\mathbf{x}\) contribute to \(h(\mathbf{x})\), we would like to be able understand how all the components of \(\mathbf{x}\) share entropy. 
Formalizing this notion of "shared entropy" turns out to be non-trivial, however, for didactic purposes it is sufficient to say that two (potentially overlapping) subsets \(\mathbf{a}_{1}\subset\mathbf{x},\mathbf{a}_{2}\subset\mathbf{x}\) share entropy if there is uncertainty about the state of the whole that would be resolved by observing either \(\mathbf{a}_{1}\) alone or \(\mathbf{a}_{2}\) alone. For example, consider a playing card randomly drawn from a shuffled deck of 52 cards. If the player learns that the card is either a red card (belonging to the suits hearts of diamonds) or a face card (being a jack, queen, or king), the redundant entropy is the uncertainty about the card's identity that is resolved regardless of which of those two statements is true. In this case, the player can rule out the possibility that they are holding any card that is not red and not a face card (e.g. the two of clubs has been ruled out as a possibility). So, even though the player does not know which statement is true (red card or face card), and even though card color and face are independent qualities, they have still gained information about their card. Formally, we can define a _redundant entropy_ function \(h_{\cap}()\) that takes in some collection of subsets of \(\,\mathbf{x}\) (often referred to as "sources") and returns the entropy redundantly shared by all of them. The seminal insight of Williams and Beer was that the set of collections of sources required to decompose \(\mathbf{x}\) is constrained to the set of all combination of sources such that no source is a subset of any other: \[\mathfrak{A}=\{\boldsymbol{\alpha}\in\mathcal{P}_{1}\mathcal{P}_{1}(\mathbf{x }):\forall\mathbf{a}_{i},\mathbf{a}_{j}\in\boldsymbol{\alpha},\mathbf{a}_{i} \not\subset\mathbf{a}_{j}\} \tag{9}\] Where \(\mathcal{P}_{1}\) is the power set function excluding the empty set \(\emptyset\). This set of "atoms" is structured under the partial ordering relation: \[\forall\boldsymbol{\alpha},\boldsymbol{\beta}\in\mathfrak{A},\boldsymbol{ \alpha}\preceq\boldsymbol{\beta}\iff\forall\mathbf{b}\in\boldsymbol{\beta} \exists\mathbf{a}\in\boldsymbol{\alpha}\text{ s.t. }\mathbf{a}\subseteq\mathbf{b}. \tag{10}\] This partial ordering is typically referred to as the _redundancy lattice_. Given this structure, it is possible to uniquely specify the value of all \(\boldsymbol{\alpha}\in\mathfrak{A}\) via Mobius inversion: \[h_{\partial}^{\mathbf{x}}(\boldsymbol{\alpha})=h_{\cap}^{\mathbf{x}}( \boldsymbol{\alpha})-\sum_{\boldsymbol{\alpha}^{\prime}\prec\boldsymbol{ \alpha}}h_{\partial}^{\mathbf{x}}(\boldsymbol{\alpha}^{\prime}) \tag{11}\] Finally, the sub of all the local partial entropy atoms reconstitutes the local entropy: \[h(\mathbf{x})=\sum_{\boldsymbol{\alpha}\in\mathfrak{A}}h_{\partial}^{\mathbf{ x}}(\boldsymbol{\alpha}). \tag{12}\] Just as the entropy is an expected value over local realizations, it is possible to compute the expected value of each atom over all configurations of \(\mathbf{x}\in\mathfrak{X}\): \[H_{\partial}^{\mathbf{X}}(\boldsymbol{\alpha})=\mathbb{E}_{\mathbb{P}( \mathbf{X})}[h_{\partial}^{\mathbf{x}}(\boldsymbol{\alpha})] \tag{13}\] As was previously mentioned, there have been a number of proposals for a natural functional form for \(h_{\cap}\). The details of this debate are beyond the scope of this paper, although see [19, 20, 22] for three different approaches that satisfy the axioms required to induce the redundancy lattice. 
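The combinatorial scaffolding of Eqs. 9 and 10 can be made concrete in a few lines of code. The sketch below is a minimal illustration with helper names of my own choosing (it is not the software of the cited papers): it enumerates the antichain atoms \(\mathfrak{A}\) for a small system and implements the partial-order test that structures the redundancy lattice.

```python
from itertools import combinations

def sources(elements):
    """All non-empty subsets of the element set, i.e. P_1(x)."""
    els = list(elements)
    return [frozenset(c) for r in range(1, len(els) + 1)
            for c in combinations(els, r)]

def atoms(elements):
    """Collections of sources in which no source contains another (Eq. 9)."""
    srcs = sources(elements)
    out = []
    for r in range(1, len(srcs) + 1):
        for coll in combinations(srcs, r):
            if all(not (a < b or b < a) for a, b in combinations(coll, 2)):
                out.append(frozenset(coll))
    return out

def precedes(alpha, beta):
    """The redundancy-lattice partial order alpha <= beta (Eq. 10)."""
    return all(any(a <= b for a in alpha) for b in beta)

X = {1, 2}
for atom in atoms(X):
    print(sorted(tuple(sorted(s)) for s in atom))
# prints the four atoms of the two-variable lattice: {1}, {2}, {1,2}, {1}{2}

bottom = frozenset({frozenset({1}), frozenset({2})})   # the redundancy atom {1}{2}
top = frozenset({frozenset({1, 2})})                   # the "synergy" atom {1,2}
print(precedes(bottom, top))                           # True: {1}{2} lies below {1,2}
```

For two variables this yields the familiar four atoms \(\{1\}\{2\}\), \(\{1\}\), \(\{2\}\), and \(\{1,2\}\); the enumeration here is brute-force and grows very quickly with system size, which is one reason PID/PED analyses are typically restricted to a handful of elements.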
For didactic purposes, we choose the simplest of the three: the \(h_{min}\) function proposed by Finn and Lizier [20]. For a collection of potentially overlapping sources \(\boldsymbol{\alpha}=\{\mathbf{a}_{1},\ldots,\mathbf{a}_{k}\}\): \[h_{min}(\boldsymbol{\alpha})=\min_{i}h(\mathbf{a}_{i}) \tag{14}\] This produces a provably non-negative decomposition of the local entropy [20].

To build intuition, consider a simple, two-element system: \(\mathbf{X}=\{X_{1},X_{2}\}\), which draws states from \(\mathbb{P}(\mathbf{X})\). The information contained in the realization \(\mathbf{X}=\mathbf{x}\) can be decomposed into: \[h(\mathbf{x})=h_{\partial}^{\mathbf{x}}(\{x_{1}\}\{x_{2}\})+h_{\partial}^{\mathbf{x}}(\{x_{1}\})+h_{\partial}^{\mathbf{x}}(\{x_{2}\})+h_{\partial}^{\mathbf{x}}(\{x_{1},x_{2}\}) \tag{15}\] and the marginal entropies can be similarly decomposed: \[h(x_{1})=h_{\partial}^{\mathbf{x}}(\{x_{1}\}\{x_{2}\})+h_{\partial}^{\mathbf{x}}(\{x_{1}\}) \tag{16}\] \[h(x_{2})=h_{\partial}^{\mathbf{x}}(\{x_{1}\}\{x_{2}\})+h_{\partial}^{\mathbf{x}}(\{x_{2}\}). \tag{17}\] It is easy to see that the set \(\{\{\{x_{1}\}\{x_{2}\}\},\{\{x_{1}\}\},\{\{x_{2}\}\},\{\{x_{1},x_{2}\}\}\}\) satisfies the requirements of Eq. 9 and the ordering given by Eq. 10.

## 3 Generalized Information Decompositions

We now have all the mathematical machinery required to introduce the generalized information decomposition. Recall from Eq. 7 that the Kullback-Leibler divergence \(D(\mathbb{P}||\mathbb{Q})\) can be written in terms of the expected difference in local entropies computed with respect to distributions \(\mathbb{P}(\mathbf{X})\) and \(\mathbb{Q}(\mathbf{X})\): \[D(\mathbb{P}||\mathbb{Q})=\mathbb{E}_{\mathbb{P}(\mathbf{X})}[h^{\mathbb{Q}}(\mathbf{x})-h^{\mathbb{P}}(\mathbf{x})]\] Given a localizable partial entropy decomposition (such as the one produced by \(h_{min}\)), it is possible to decompose each of the local entropies into its component atoms. For each atom, then, it is possible to compute the "partial Kullback-Leibler divergence": the difference between the partial entropy atoms for each local realization computed with respect to the prior and the posterior: \[D^{\mathbb{P}||\mathbb{Q}}_{\partial}(\boldsymbol{\alpha})=\mathbb{E}_{\mathbb{P}(\mathbf{X})}[h^{\mathbb{Q}(\mathbf{x})}_{\partial}(\boldsymbol{\alpha})-h^{\mathbb{P}(\mathbf{x})}_{\partial}(\boldsymbol{\alpha})]. \tag{18}\] Note that we have extended the notation here: we must now indicate not only the Kullback-Leibler divergence from \(\mathbb{Q}\) to \(\mathbb{P}\), but also the specific atomic component of that information we are considering. In general, when referring to the atomic components of an information measure, I will use the \(\partial\) subscript, and indicate which distribution a measure is computed with respect to with the relevant superscript.

For a three-element system \(\mathbf{X}=\{X_{1},X_{2},X_{3}\}\), consider the "bottom" of the lattice: the triple-redundancy atom \(D^{\mathbb{P}||\mathbb{Q}}_{\partial}(\{X_{1}\}\{X_{2}\}\{X_{3}\})\). It is the expected value of the difference between the local redundant entropies: \[\mathbb{E}_{\mathbb{P}(\mathbf{X})}[h^{\mathbb{Q}(\mathbf{x})}_{\partial}(\{x_{1}\}\{x_{2}\}\{x_{3}\})-h^{\mathbb{P}(\mathbf{x})}_{\partial}(\{x_{1}\}\{x_{2}\}\{x_{3}\})].
\tag{19}\] Each partial entropy term quantifies how surprised we would be, regardless of whether we learned \(X_{1}=x_{1}\) or \(X_{2}=x_{2}\) or \(X_{3}=x_{3}\), computed with respect to probability distributions \(\mathbb{Q}(\mathbf{x})\) and \(\mathbb{P}(\mathbf{x})\) respectively. So, if \(D^{\mathbb{P}||\mathbb{Q}}_{\partial}(\{1\}\{2\}\{3\})>0\), there is more "redundant surprise" in the distribution \(\mathbb{Q}\) and we would be less surprised if we operated based on distribution \(\mathbb{P}\). This difference is the information gain redundantly shared by all three variables.

There is no guarantee that the individual atomic components of the Kullback-Leibler divergence will be positive. Initially, the desire for a non-negative decomposition of multivariate information was such a core feature that non-negativity was included as a foundational requirement by Williams and Beer. Since then, however, the field has largely grown more comfortable with negative partial information atoms, and a number of proposed redundancy functions produce them (e.g. \(I_{ccs}\)[25], \(I_{\pm}\)[26], and \(I_{sx}\)[27] for recent examples). When we consider the generalized information decomposition in the case of the Kullback-Leibler divergence, the negativity is easily interpretable and not particularly strange. If, for example, \(D^{\mathbb{P}||\mathbb{Q}}_{\partial}(\boldsymbol{\alpha})<0\), then, on average, we are more surprised to observe \(\boldsymbol{\alpha}\) if we believe \(\mathbb{P}\) (our posterior) rather than if we believe \(\mathbb{Q}\) (our prior). In some sense, we have "lost" that specific information when we updated our beliefs. Since Jensen's inequality doesn't apply to the various \(\boldsymbol{\alpha}\in\mathfrak{A}\), there's no _a priori_ reason to assume non-negativity (although all atoms must sum to a non-negative number).

A large number of standard information-theoretic measures can be written in terms of the Kullback-Leibler divergence. Consequently, the decomposition presented above provides a considerable number of additional information decompositions "for free", as special cases in addition to the general case of arbitrary priors and posteriors. Here we will discuss one, the decomposition of the total correlation, in detail, although other possibilities include the entropy production [28], the negentropy [29], and the classic bivariate mutual information. We will also very briefly discuss the cross-entropy, as it is a very commonly used metric in machine learning and artificial intelligence research.

**Cross Entropy Decomposition** The cross entropy is a commonly used loss function in machine learning approaches [30]. For two distributions on \(\mathbf{X}\), \(\mathbb{P}(\mathbf{X})\) and \(\mathbb{Q}(\mathbf{X})\), the cross entropy is defined by: \[H^{\mathbb{P}||\mathbb{Q}}(\mathbf{X}):=\mathbb{E}_{\mathbb{P}(\mathbf{X})}[-\log\mathbb{Q}(\mathbf{x})] \tag{20}\] Following the same logic as above, a decomposition of the cross-entropy is reasonably straightforward. It amounts to a local partial entropy decomposition on \(H^{\mathbb{Q}}(\mathbf{X})\), and then the partial entropy atoms are aggregated across the different states using the distribution \(\mathbb{P}(\mathbf{X})\) rather than \(\mathbb{Q}(\mathbf{X})\). When used as a loss function in machine-learning applications, the decomposition of the cross entropy might be considered a "partial loss decomposition": illuminating how the loss is distributed redundantly or synergistically over the features of a dataset.
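Returning to Eq. 18, the following sketch shows one minimal end-to-end computation of the partial divergence atoms for a two-variable system using the \(h_{min}\) redundancy function. The function names and the toy prior/posterior are mine, and the code assumes the prior has support wherever the posterior does; it is a didactic sketch, not the implementation used in the cited literature.

```python
import numpy as np
from itertools import product

def local_atoms_hmin(joint, x1, x2):
    """Local partial entropy atoms of the realization (x1, x2), computed from
    the joint pmf `joint` with the h_min redundancy function."""
    p12 = joint[x1, x2]
    p1 = joint.sum(axis=1)[x1]
    p2 = joint.sum(axis=0)[x2]
    h1, h2, h12 = -np.log2(p1), -np.log2(p2), -np.log2(p12)
    red = min(h1, h2)                                  # h_min({x1}{x2}), Eq. 14
    return {"{1}{2}": red,                             # Moebius inversion, Eq. 11
            "{1}": h1 - red,
            "{2}": h2 - red,
            "{1,2}": h12 - h1 - h2 + red}

def gid(posterior, prior):
    """Partial Kullback-Leibler atoms of Eq. 18, averaged under the posterior.
    Assumes the prior is non-zero wherever the posterior is."""
    result = {"{1}{2}": 0.0, "{1}": 0.0, "{2}": 0.0, "{1,2}": 0.0}
    for x1, x2 in product(range(posterior.shape[0]), range(posterior.shape[1])):
        if posterior[x1, x2] == 0:
            continue
        hq = local_atoms_hmin(prior, x1, x2)        # atoms under the prior Q
        hp = local_atoms_hmin(posterior, x1, x2)    # atoms under the posterior P
        for a in result:
            result[a] += posterior[x1, x2] * (hq[a] - hp[a])
    return result

# Posterior: two perfectly correlated bits. Prior: independent uniform bits.
post = np.array([[0.5, 0.0], [0.0, 0.5]])
pri = np.full((2, 2), 0.25)
print(gid(post, pri))   # the atoms sum to D(P||Q) = 1 bit
```

Here the prior is the independent maximum-entropy distribution and the posterior perfectly correlates the two bits, so the atoms sum to \(D(\mathbb{P}||\mathbb{Q})=1\) bit; under \(h_{min}\) that bit lands in the \(\{1,2\}\) atom.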
### Total Correlation Decomposition Many information-theoretic quantities implicitly have an built-in prior distribution of maximum entropy (subject to some constraints). In the context of Bayesian inference and updating, there is a common argument that the most "natural" family of priors is the distribution that has the highest entropy. E.T. Jaynes argued for the "Principle of Maximum Entropy" [31], which posits that scientists should strive to use the least informative priors possible. This is a kind of formalization of Occam's Razor, suggesting that models of complex systems should not propose any more constraints on the space of possible configurations than is necessitated by the data in question. Intuitively, we can understand measures of deviation from independence as quantifying something like "how much more structured is this system than a kind of ideal gas." Here we will explore one of these multivariate information measures in the context of the generalized information decomposition: the total correlation. Originally proposed by Watanabe [32] and later re-derived as the "integration" by Tononi and Sporns [33], the total correlation is one of three possible generalizations of the bivariate mutual information to arbitrary numbers of variables: \[TC(\mathbf{X}):=D(\mathbb{P}(\mathbf{X})||\mathbb{P}\bigg{(}\prod_{i=1}^{| \mathbf{X}|}X_{i}\bigg{)}). \tag{21}\] Intuitively, \(TC(\mathbf{X})\) can be understood as a measure of how much information we gain when we model \(\mathbf{X}\) based on it's own joint statistics compared to if we model it as a set of independent processes (astute readers will remember this as equivalent to the intuition behind bivariate mutual information described above). One natural way to think about it is how many fewer yes/no questions an observer has to ask to specify the state of \(\mathbf{X}\) based on the statistics of the "whole" compared to if each \(X_{i}\) was resolved independently. It can be seen as a straightforward generalization of the more well-known definition of bivariate mutual information \(I(X_{1};X_{2})=D(\mathbb{P}(X_{1},X_{2})||\mathbb{P}(X_{1})\times\mathbb{P}(X _{2}))\). If we consider the Bayesian interpretation of the Kullback-Leibler divergence, we can see that our prior in this case is the maximum-entropy distribution that preserves the marginal probabilities, and our posterior is the true distribution of the data. For a given, potentially overlapping, set of sources \(\boldsymbol{\alpha}=\{\mathbf{a}_{1}\ldots\mathbf{a}_{k}\}\), the partial total correlation \(TC_{\partial}^{\mathbf{X}}(\boldsymbol{\alpha})\) quantifies how much of the total information gain is attributable to the particular collection of sources \(\boldsymbol{\alpha}\), and crucially, no simpler combination of elements. For a worked example, consider a three element system joint by a logical exclusive-or operator: \(\mathbf{S}=\{X_{1},X_{2},T\}\), where \(T=X_{1}\bigoplus X_{2}\). The PED of \(\mathbb{P}(\mathbf{S})\) using the \(H_{min}\) redundancy function finds 1 bit of information in the atom \(H_{\partial}^{\mathbb{P}}(\{1\}\{2\}\{3\})\) and 1 bit of information in the atom \(H_{\partial}^{\mathbb{P}}(\{1,2\}\{1,3\}\{2,3\})\). 
When we do the PED of the product of the marginal probabilities \((\mathbb{Q}^{H_{max}}(\mathbf{S}))\), we find the same 1 bits of information in the atoms \(H_{\partial}^{\mathbb{Q}}(\{1\}\{2\}\{3\})\) and \(H_{\partial}^{\mathbb{Q}}(\{1,2\}\{1,3\}\{2,3\})\), however, the third bit of information is the synergistic atom \(H_{\partial}^{\mathbb{Q}}(\{1,2,T\})\). When we subtract the two decompositions according to Eq. 18, we are left with 1 bit of information in the triple-synergy atom (see Table 1). How do we interpret this? It tells us that when we update our model from a prior belief of total independence to a posterior of the true statistics of the logical XOR gate, the one bit of information that we gain is synergistic to the joint-state of all three elements: the XOR gate is "pure synergy". In the PED of the maximum entropy distribution, there is one bit of "synergistic entropy", since learning the state of any two variables isn't enough to fully resolve the state of \(\mathbf{s}\). If \(X_{1}=0\) and \(X_{2}=0\), and \(X_{1}\bot X_{2}\bot T\), then there are still two equiprobable states that \(\mathbf{s}\) could be, so \(h(\mathbf{s}|X_{1}=0,X_{2}=0)\) is maximal. It's only when all the parts are known can the whole be known. In contrast, for the logical XOR-gate, knowing any two variables is enough to specify the joint state of all three with total certainty. So, when we update our beliefs from the prior to the posterior, the single bit of synergistic entropy in the prior distribution is resolved, and we have gained one bit of synergistic information. The decomposition of the total correlation into it's atomic components can also be used to gain insight into the behaviour of measures that are derived from the total correlation. In fact, any measure that can be written in terms of total correlations can be decomposed into a linear combination of atomic components. Here we will discuss two, and in doing so, demonstrate how this decomposition can give us insights into the nature of higher-order information sharing. **O-Information** The O-information is a heuristic measure of higher-order information-sharing in complex systems first introduced by Rosas, Mediano, and colleagues [29]. Given some multivariate random variable, the O-information of that variable, \(\Omega(\mathbf{X})\), quantifies the extent to which the structure of \(\mathbf{X}\) is dominated by redundant or synergistic information. If \(\Omega(\mathbf{X})>0\), then the system is redundancy-dominated, while if \(\Omega(\mathbf{X})<0\), the system is synergy-dominated. Since its introduction, the O-information has become an object of considerable interest: unlike the PID and PED, which cannot be used for systems with more than four to five elements, the O-information scales much more gracefully, and has been applied to systems with hundreds of components [34]. The O-information was originally introduced as a difference between two different generalization of mutual information: the total correlation and the dual total correlation, however, recently Varley et al. 
[34] derived an equivalent definition solely in terms of total correlations: \[\Omega(\mathbf{X}):=(2-N)TC(\mathbf{X})+\sum_{i=1}^{N}TC(\mathbf{X}^{-i}) \tag{22}\] By expanding each total correlation term into the associated linear combination of partial TC atoms and then simplifying, we can see that, for a three-variable system, the O-information can be understood as: \[\Omega(\{X_{1},X_{2},X_{3}\}) =2\times\{1\}\{2\}\{3\} \tag{23}\] \[+2\times[\{1\}\{2\}+\{1\}\{3\}+\{2\}\{3\}]\] \[+2\times[\{1\}\{2,3\}+\{2\}\{1,3\}+\{3\}\{1,2\}]\] \[+\{1\}+\{2\}+\{3\}\] \[+2\times\{1,2\}\{1,3\}\{2,3\}\] \[+\{1,2\}\{1,3\}+\{1,2\}\{2,3\}+\{1,3\}\{2,3\}\] \[-\{1,2,3\}\] The notation has been simplified for visual clarity; each atom is a partial total correlation atom, representing the deviation from independence attributable to each set of sources.

\begin{table} \begin{tabular}{l c c c} \hline Atom & \(H_{\partial}^{\mathbb{Q}}\) & \(H_{\partial}^{\mathbb{P}}\) & \(TC_{\partial}^{\mathbf{S}}\) \\ \hline \(\{1\}\{2\}\{T\}\) & 1.0 & 1.0 & 0.0 \\ \(\{1\}\{2\}\) & 0.0 & 0.0 & 0.0 \\ \(\{1\}\{T\}\) & 0.0 & 0.0 & 0.0 \\ \(\{2\}\{T\}\) & 0.0 & 0.0 & 0.0 \\ \(\{1\}\{2,T\}\) & 0.0 & 0.0 & 0.0 \\ \(\{2\}\{1,T\}\) & 0.0 & 0.0 & 0.0 \\ \(\{T\}\{1,2\}\) & 0.0 & 0.0 & 0.0 \\ \(\{1\}\) & 0.0 & 0.0 & 0.0 \\ \(\{2\}\) & 0.0 & 0.0 & 0.0 \\ \(\{T\}\) & 0.0 & 0.0 & 0.0 \\ \(\{1,2\}\{1,T\}\{2,T\}\) & 1.0 & 1.0 & 0.0 \\ \(\{1,2\}\{1,T\}\) & 0.0 & 0.0 & 0.0 \\ \(\{1,2\}\{2,T\}\) & 0.0 & 0.0 & 0.0 \\ \(\{1,T\}\{2,T\}\) & 0.0 & 0.0 & 0.0 \\ \(\{1,2\}\) & 0.0 & 0.0 & 0.0 \\ \(\{1,T\}\) & 0.0 & 0.0 & 0.0 \\ \(\{2,T\}\) & 0.0 & 0.0 & 0.0 \\ \(\{1,2,T\}\) & 1.0 & 0.0 & 1.0 \\ \hline \end{tabular} \end{table} Table 1: **The partial entropy decompositions and partial total correlation decomposition.** Consider two distributions \(\mathbb{P}(X_{1},X_{2},T)\) and \(\mathbb{Q}(X_{1},X_{2},T)\). The distribution \(\mathbb{Q}(\mathbf{S})\) is the maximum entropy distribution on three binary variables, while \(\mathbb{P}(\mathbf{S})\) is the distribution of the logical-XOR gate (assuming equiprobable inputs).

There are several interesting things about this decomposition worth noting. The first is that terms of the form \(TC_{\partial}^{123}(\{X_{i},X_{j}\})\) do not appear. This term corresponds to the bivariate mutual information between two elements \(X_{i}\) and \(X_{j}\) (since the total correlation of two variables is the mutual information between them). The O-information has previously been proved to be insensitive to bivariate dependencies [29], making it a "true" measure of higher-order dependency. The second thing to note is that this shows that the O-information has a very strict definition of synergy and a comparatively relaxed definition of redundancy. The only atom that can ever count towards synergy is the very top of the lattice, as that is the information that is destroyed when any \(X_{i}\) is removed from \(\mathbf{X}\). Any information that is accessible from the remaining \(\mathbf{X}^{-i}\) elements gets counted as "redundancy" (even if it involves three or more nodes). Consequently, one might argue that the O-information is _more_ sensitive to redundancy than synergy, as there are simply more ways for information to be redundant than synergistic, particularly as \(N\) grows. Future work on extensions of O-information that are more sensitive to synergies remains an open area of research.
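As a small worked check of Eq. 22, the sketch below computes the O-information of the logical-XOR distribution from Table 1 directly from total correlations. The helper functions and brute-force marginalisation are my own; the output, \(\Omega=-1\) bit, confirms that the XOR gate is synergy-dominated.

```python
import numpy as np
from itertools import product

def entropy(p):
    """Shannon entropy (bits) of a pmf given as an array of any shape."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def total_correlation(joint):
    """TC(X) = sum_i H(X_i) - H(X) for a joint pmf stored as an N-dim array."""
    n = joint.ndim
    marginal_sum = sum(
        entropy(joint.sum(axis=tuple(j for j in range(n) if j != i)))
        for i in range(n))
    return marginal_sum - entropy(joint.ravel())

def o_information(joint):
    """Omega(X) = (2 - N) * TC(X) + sum_i TC(X with X_i marginalised out) (Eq. 22)."""
    n = joint.ndim
    omega = (2 - n) * total_correlation(joint)
    for i in range(n):
        omega += total_correlation(joint.sum(axis=i))
    return omega

# Logical XOR: T = X1 xor X2 with equiprobable inputs (the example of Table 1).
joint = np.zeros((2, 2, 2))
for x1, x2 in product(range(2), repeat=2):
    joint[x1, x2, x1 ^ x2] = 0.25

print(round(o_information(joint), 3))   # -1.0: synergy-dominated
```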
**Tononi-Sporns-Edelman Complexity** The Tononi-Sporns-Edelman (TSE) complexity is one of the key developments in the study of applying multivariate information theory to complex systems. Initially proposed by Tononi, Sporns, and Edelman [33], for a given set of variables, the TSE complexity quantifies the balance between integration and segregation in the system. If the system is entirely integrated (i.e. redundant), then the TSE complexity is low. Similarly, if the system is disintegrated (not really a system at all, just a collection of independent elements), then the TSE is also low. TSE is high when a system is "complex", showing a balance between local segregation (i.e. elements are allowed to be autonomous) and global integration (i.e. the system as a whole behaves as a single unit). The TSE complexity can be written in terms of total correlations: \[TSE(\mathbf{X})=\sum_{i=1}^{N}\left[\frac{i}{N}TC(\mathbf{X})-\langle TC( \mathbf{X}^{\gamma_{i}})\rangle\right] \tag{24}\] Where the \(\langle TC(\mathbf{X}^{\gamma_{i}})\rangle\) refers to the average total correlation of every subset of \(\mathbf{X}\) with \(i\) elements. Tononi, Sporns, and Edelman developed the TSE complexity almost a decade before Williams and Beer formalized the notions of redundancy and synergy; consequently the relationship between the two concepts has remained somewhat obscure. To the best of our knoweldge, the first exploration of the relationship between TSE complexity and redundancy/synergy was first discussed by Rosas et al., in the initial introduction of the O-information [29], and then further explored by Varley et al., [34], who showed that the sign of the O-information was a function of the structure of the highest-level of the TSE bipartition hierarchy. Since the TSE complexity can be written in terms of total correlations, it can be decomposed in the same manner as the O-information (see Eq. 23). Once again, the partial-total correlation notation has been omitted for visual accessibility: \[TSE(X_{1},X_{2},X_{3})= -\{1\}\{2\}\{3\} \tag{25}\] \[-\frac{2}{3}\bigg{[}\{1\}\{2\}+\{1\}\{3\}+\{2\}\{3\}\bigg{]}\] \[-\frac{1}{3}\bigg{[}\{1\}\{23\}+\{2\}\{13\}+\{3\}\{12\}\bigg{]}\] \[+\frac{1}{3}\bigg{[}\{12\}\{13\}+\{12\}\{23\}+\{13\}\{23\}\bigg{]}\] \[+\frac{2}{3}\bigg{[}\{12\}+\{13\}+\{23\}\bigg{]}\] \[+\{1,2,3\}\] Here, in the three-variable case, an elegant pattern is revealed by the decomposition: as one travels farther down the bottom half of the redundancy lattice, the information in each atom becomes increasingly penalized (in 1/N increments), while as one travels farther up the upper half of the lattice, each atom becomes increasingly "rewarded." A moment's reflection shows that this broadly consistent with the original intuition put forward by Tononi et al.,: the presence of redundant information shared by many single variables indicates that the elements at the micro-scale are not segregated, and so that information counts against the TSE complexity. In contrast, synergy reflects a kind of global integration, and so it positively contributes to TSE. While this may seem like a fairly banal rephrasing of the original intuition behind TSE, further consideration suggests that this tells us something interesting about synergy: if the TSE is low when integration or segregation dominate and high when both are in balance (see Tononi et al., Fig. 
1D [33]), then this suggests that synergy is not merely another "kind" of integration, but rather is itself a reflection of a system balancing both integration and segregation. Since increasingly higher-order synergy drives up TSE, it follows that increasingly higher-order deviations from independence must also imply a balance of integration and segregation. This is consistent with recent empirical findings; in analysis of human neuroimaging data, synergy has been repeatedly found to sit "between" highly-integrated "modules" in the brain, while redundancy is higher within the modules [34, 22], suggesting that synergy forms a kind of "shadow structure": a network of higher-order dependencies that are largely invisible to the standard techniques of network science and functional connectivity. ### Recovering Single-Target PID As previously mentioned, the standard Shannon mutual information is a special case of the more general Kullback-Leibler divergence. Consequently, we would expect that the generalised information decomposition should recover the classic single-target PID. There are a number of ways to write out the bivariate mutual information in terms of a Kullback-Leibler divergence, but the most salient one for our purposes is the definition: \[I(X_{1},X_{2};Y):=D(\mathbb{P}(X_{1},X_{2}|Y)||\mathbb{P}(X_{1},X_{2})) \tag{26}\] The decomposition induced is equivalent to the informative/misinformative decomposition first explored by Finn and Lizier [26] and later expanded on by Makkeh et al., [27]. The GID in this case follows the expected form: \[I(X_{1},X_{2};Y) =D_{\partial}^{12Y}(\{1\}\{2\})+D_{\partial}^{12Y}(\{1\})+D_{ \partial}^{12Y}(\{2\})+D_{\partial}^{12Y}(\{1,2\}). \tag{27}\] \[I(X_{1};Y) =D_{\partial}^{12Y}(\{1\}\{2\})+D_{\partial}^{12Y}(\{1\}).\] (28) \[I(X_{2};Y) =D_{\partial}^{12Y}(\{1\}\{2\})+D_{\partial}^{12Y}(\{2\}). \tag{29}\] If one uses the \(H_{sx}\) measure, the resulting decomposition is equivalent to the PID computed using \(I_{sx}\), and likewise if one uses \(H_{\pm}\), the resulting decomposition is equivalent to \(I_{\pm}\). This relationship highlights a curious fact about the multivariate mutual information: even though we are interested in the three-way interaction between \(X_{1}\), \(X_{2}\) and \(Y\), we are only actually decomposing the information content in the bivariate distributions \(\mathbb{P}(X_{1},X_{2})\) and \(\mathbb{P}(X_{1},X_{2}|Y)\): \(Y\)'s contribution is merely conditional. However, consider alternative the definition of mutual information : \[I(X_{1},X_{2};Y)=D(\mathbb{P}(X_{1},X_{2},Y)||\mathbb{P}(X_{1},X_{2})\times \mathbb{P}(Y)). \tag{30}\] This decomposition will result in eighteen atoms (since it describes a three-element system). The sum of all the atoms will still be the mutual information \(I(X_{1},X_{2};Y)\), but the way that information is assigned to different combinations of elements is entirely different. Furthermore, there is not, at present, an obvious way of linearlly combining the eighteen atoms generated by Eq. 30 to recover the expected four atoms seen above (although we cannot say that it is impossible, either, merely that if it does exist, it escapes this author). ## 4 Discussion In this paper, we have introduced a generalized information decomposition (GID), based on the Kullback-Leibler divergence and the local partial entropy decomposition (PED). This GID allows a decomposition of any information gain that occurs when updating from a distribution of prior beliefs to a new posterior distribution. 
As a consequence, a significant number of information-theoretic metrics can be studied using the GID, including the classic single-target multivariate mutual information, the total correlation, the negentropy, the cross entropy, and more. This decomposition is consistent with the fundamental intuitions about "what is information", and unlike the classic PID, does not require defining classes of "inputs" and "targets." In this final section, we will discuss the implications and possible applications of the GID to the analysis of complex systems. The most obvious take-away from this analysis is that a given distribution \(\mathbb{P}(\mathbf{X})\) can have many different "kinds" of redundancy or synergy, depending on exactly what measure is being decomposed. We have discussed the partial entropy, the partial total correlation, and multiple kinds of partial mutual information. While some of these are inter-convertable (for example, Ince showed that the PID can be written in terms of sums and differences of PED atoms [19]), others do not appear to be directly inter-convertible (such as the two different decompositions of mutual information discussed earlier). For a given probability distribution, depending on how exactly one wishes to define redundant local entropy, entirely different distributions of redundancies and synergies can be extracted; some may even have different signs. This means that, going forward, when analysing higher-order information in complex systems, care must be taken to specify exactly how concepts like redundancy and synergy are being defined, and more importantly, how they should be interpreted. Conflating the partial entropy term \(H^{12T}_{\partial}(\{1\}\{2\}\{3\})\) with the partial total correlation term \(TC^{12T}_{\partial}(\{1\}\{2\}\{3\})\) or the partial divergence term \(D^{\mathbb{P}||\mathbb{Q}}_{\partial}(\{1\}\{2\}\{3\})\) may lead to confusion or misinterpretation. There is precedent for such a landscape of possibilities: the PID has long struggled with the problem that multiple redundancy functions can satisfy the fundamental axioms while inducing totally different decompositions of a given mutual information [21]. While initially seen as a problem, some have argued for a perspective of "pragmatic pluralism" [22], and that the different options may have different and complementary use-cases for building a complete picture of a given system. In this piece we have focused on the decomposition of the total correlation as a case study application of the more general decomposition. We chose to focus on the TC due to its links to the O-information and the Tononi-Sporns-Edelman complexity, as well as because it is a measure that most information theorists are familiar with. However, there are many other applications that would be worth exploring. For instance, one area of future work that may be of interest is the decomposition of the entropy production, which uses the Kullback-Leibler divergence to estimate the temporal irreversibility of a dynamic system [28]. Recently, Lynn et al., introduced a decomposition of the entropy production, although it is based on a different logic than the partial information decomposition and does not include a notion of redundancy [35]. Luppi et al., recently introduced their own decomposition of temporal irreversibility, although it is not based on the entropy production Our proposal for a generalized decomposition of multivariate information is one of several different recent attempts to generalize the PID. 
The first attempt was the integrated information decomposition (\(\Phi\)ID) from Mediano et al., [36]. The \(\Phi\)ID still requires dividing a system into "inputs" and "targets", preserves the bivariate structure of the mutual information, but relaxes the single-target requirement of the PID. This makes it natural for analysing dynamics, with a well-defined past and future. Clinical work has shown that the distributions of redundancies and synergies tracks level of consciousness [17, 16], and analysis of spiking neural dynamics has found the distribution of redundancies and synergies to vary across time [37]. As it currently exists, the \(\Phi\)ID does not fit into the GID schema described here, as it uses a different lattice structure as a scaffold and is not formalized in terms of a Kullback-Leibler divergence. I conjecture, however, that there should be a way to reconcile these approaches and further generalize the existing GID to account for the \(\Phi\)ID, although this problem is beyond the scope of the current paper. Another approach to generalization of the PID was recently proposed by Gutknecht et al., based on parthood relationships [38]. This approach generalizes the notion of a "base concept" in information decomposition (such as redundancy) and reveals the general logical structure of the different possible single-target PIDs that can exist. Conceivably, any one of these base-concepts could be applied to the GID presented here, although the resulting interpretations will vary. Since the PED can always be defined as a PID of the "parts" onto the "whole", any PID based on a base-concept such as redundancy, weak synergy, or vulnerable information could conceivably induce a PED, and if that PED is localizable, a subsequent decomposition of the Kullback-Leibler divergence. Finally, this approach may be very useful to cognitive scientists interested in how agents equipped with multiple sensory channels navigate complex, multidimensional environments. Any agent attempting to survive in such a world must learn the statistical regularities of it's environment; regularities that may be redundant or synergistic across different sensory modalities. The Kullback-Leibler divergence is a key feature of many Bayesian approaches to theoretical neuroscience and cognitive science (e.g. the Free Energy Principle [39]), and often used to describe the process by which an agent updates its internal world-model. Having the ability to finely decompose information may give insights into how agents learn and exploit potentially higher-order correlations in their environments. ## 5 Conclusions In this paper we have discussed a generalization of the single-target partial information decomposition that relaxes the requirement that elements be grouped into "inputs" and "targets", while still preserving the basic intuitions about information. Based on the Kullback-Leibler divergence and the local partial entropy decomposition, this generalized information decomposition can applied to any information gained when updating from a set of prior beliefs to a new posterior. This generality implies that any information-theoretic measure that can be written as a Kullback-Leibler divergence admits a decomposition, such as the total correlation, negentropy, mutual information, and more. The generalized information decomposition could be of great utility in understanding the mereological relationships between "parts" and "wholes" in complex systems.
2309.07330
Automated Assessment of Critical View of Safety in Laparoscopic Cholecystectomy
Cholecystectomy (gallbladder removal) is one of the most common procedures in the US, with more than 1.2M procedures annually. Compared with classical open cholecystectomy, laparoscopic cholecystectomy (LC) is associated with significantly shorter recovery period, and hence is the preferred method. However, LC is also associated with an increase in bile duct injuries (BDIs), resulting in significant morbidity and mortality. The primary cause of BDIs from LCs is misidentification of the cystic duct with the bile duct. Critical view of safety (CVS) is the most effective of safety protocols, which is said to be achieved during the surgery if certain criteria are met. However, due to suboptimal understanding and implementation of CVS, the BDI rates have remained stable over the last three decades. In this paper, we develop deep-learning techniques to automate the assessment of CVS in LCs. An innovative aspect of our research is on developing specialized learning techniques by incorporating domain knowledge to compensate for the limited training data available in practice. In particular, our CVS assessment process involves a fusion of two segmentation maps followed by an estimation of a certain region of interest based on anatomical structures close to the gallbladder, and then finally determination of each of the three CVS criteria via rule-based assessment of structural information. We achieved a gain of over 11.8% in mIoU on relevant classes with our two-stream semantic segmentation approach when compared to a single-model baseline, and 1.84% in mIoU with our proposed Sobel loss function when compared to a Transformer-based baseline model. For CVS criteria, we achieved up to 16% improvement and, for the overall CVS assessment, we achieved 5% improvement in balanced accuracy compared to DeepCVS under the same experiment settings.
Yunfan Li, Himanshu Gupta, Haibin Ling, IV Ramakrishnan, Prateek Prasanna, Georgios Georgakis, Aaron Sasson
2023-09-13T22:01:36Z
http://arxiv.org/abs/2309.07330v1
# Automated Assessment of Critical View of Safety in Laparoscopic Cholecystectomy ###### Abstract Cholecystectomy (gallbladder removal) is one of the most common procedures in the US, with more than 1.2M procedures annually. Compared with classical open cholecystectomy, laparoscopic cholecystectomy (LC) is associated with significantly shorter recovery period, and hence is the preferred method. However, LC is also associated with an increase in bile duct injuries (BDIs), resulting in significant morbidity and mortality. The primary cause of BDIs from LCs is misidentification of the cystic duct with the bile duct. Critical view of safety (CVS) is the most effective of safety protocols, which is said to be achieved during the surgery if certain criteria are met. However, due to suboptimal understanding and implementation of CVS, the BDI rates have remained stable over the last three decades. In this paper, we develop deep-learning techniques to automate the assessment of CVS in LCs. An innovative aspect of our research is on developing specialized learning techniques by incorporating domain knowledge to compensate for the limited training data available in practice. In particular, our CVS assessment process involves a fusion of two segmentation maps followed by an estimation of a certain region of interest based on anatomical structures close to the gallbladder, and then finally determination of each of the three CVS criteria via rule-based assessment of structural information. We achieved a gain of over 11.8% in mIoU on relevant classes with our two-stream semantic segmentation approach when compared to a single-model baseline, and 1.84% in mIoU with our proposed Sobel loss function when compared to a Transformer-based baseline model. For CVS criteria, we achieved up to 16% improvement and, for the overall CVS assessment, we achieved 5% improvement in balanced accuracy compared to DeepCVS under the same experiment settings. Laparoscopic Cholecystectomy, Critical View of Safety, Deep Learning ## I Introduction Cholecystectomy is one of the most common surgical procedures in the US, done to remove an inflamed or infected gallbladder. Majority of cholecystectomy procedures are now done as laparoscopic cholecystectomy (LC), as they are associated with shorter recovery times. However, LCs are also associated with an increased number of bile duct injuries (BDIs), which occur due to limited field of vision. BDIs resulting from LCs may lead to serious complications which can even endanger the patient's life and safety [1, 2], while driving up the medical litigation [3] and healthcare costs to over a billion dollars in the US alone [4]. A safety protocol, termed as critical view of safety (CVS), has been developed and widely embraced over the years, with the goal of minimizing misidentification of ducts and thus reduce incidence of BDIs. In spite of many evidences of the effectiveness of CVS protocol, the incidence of BDIs has not decreased over the past decades; the main reason for this stems from the insufficient implementation and understanding of CVS criteria by the surgeons [5]. Thus, automation of the CVS attainment in LC surgeries can potentially reduce incidence of BDIs in LCs. 
**Vision.** Our long-term vision is to develop a AI-driven surgical aid that will prevent BDIs by a combination of real-time CVS assessment during LC, enforcement of related safety processes (e.g., identifying and guiding surgeons to bailout strategies [6]), and training of surgeons via video reviews to improve their understanding of CVS and LC surgeries. As a step towards the above vision, in this paper, we focus on developing a technique to assess CVS based on its three criteria; such a technique can be used to raise alerts in real-time (i.e., while LC surgery is in progress) if an attempt is made to clamp or cut any structure before a true CVS has been attained and thus, prevent BDIs. The key challenge in CVS assessment from learning techniques is the lack of sufficient training data (at most a few hundred LC surgery videos) as well as the intrinsic difficulties in CVS assessment, such as the cluttered texture and occlusion among organs. Our approach addresses these challenges by proposing a fusion approach followed by incorporation of clinical domain knowledge. In particular, our approach involves estimating a region of interest based on anatomical structures around the gallbladder, and rule-based assessment of CVS criteria. We demonstrate that such an approach has a great potential in accurate detection of CVS by showing an advantage in performance on both individual CVS criteria and overall CVS classification when compared to CNN-based DeepCVS [7] as baseline. ## II Background In this section, we provide general background and related work. **Laparoscopic Cholecystectomy (LC).** Gallbladder is a small organ underneath the liver that concentrates and stores bile fluid. Inflammation and infection of the gallbladder may necessitate surgical removal of the gallbladder, which is done via LC, a minimally invasive procedure associated with quick recovery time. LC, performed through four small incisions, uses a camera and surgical tools to remove the gallbladder. Removal of gallbladder essentially entails exposing (by removing the fat and fibrous tissues) and cutting the only two structures that connect it to the body: the cystic duct (CD) and the cystic artery. **BDI Risks of LCs.** The most feared adverse event of LC is bile duct injury (BDI), which occurs in thousands of cases in the US annually [2]. BDIs largely result from misidentification of the common bile duct as the cystic duct [9], due to the increased complexity of LC procedures and limited field of vision. BDIs due to LCs may lead to serious complications and even endanger the patient's life and safety [1, 2]. Overall, BDIs frequently result in a 3-fold increase in the 1-year mortality rate [10], while driving up the medical litigation [3] and healthcare costs to over a billion dollars in the US alone [4, 11, 12]. **The Critical View of Safety (CVS) Technique.** Over the past few decades, surgeons have expended considerable effort in developing safe ways for identification of the cystic duct [13], of which the Critical View of Safety (CVS) technique is considered to be the most effective at target identification and hence is widely embraced in LC procedures [6, 14]. CVS is said to be achieved if the following three criteria are met:1 Footnote 1: CVS is a reworking of the open cholecystectomy protocol wherein the gallbladder is detached from the cystic plate (liver bed) so that it is attached to the body by only the two cystic structures which can then be clipped. 
In laparoscopic surgery, as complete separation of the gallbladder from the cystic plate makes clipping of the structures difficult, we require that only the lower part of the gallbladder be separated [9]. C1: All fibrous and adipose tissues cleared within the hepatocyst triangle (see Fig. 1). C2: Separation of the lower one-third of the gallbladder from the cystic plate (liver-bed). C3: Two and only two structures are seen to enter the gallbladder [15]. Impact and Limitation of CVS. The promise of CVS spurred several studies [16, 17] on its effectiveness in the LC procedure, which provide strong evidence of the value of CVS as a means of unambiguously identifying biliary structures in LC. However, despite the evidence of the efficacy of CVS in reducing mis-identification of CD, BDI rates over the last 3 decades have remained stable at 0.36%-1.5% [10]. The primary reasons for this status quo are: insufficient or inadequate implementation of CVS [18], and weak understanding of CVS among many surgeons [5, 19]. Sometimes, overconfidence (partly due to the low incidence of BDIs) with LC also plays a part [5, 17, 20, 21]. Thus, automated assessment of CVS criteria has the potential to reduce BDIs, especially with the advances and contributions of computer vision in medical image analysis over the recent years. **Related Work.** There have been two very-recent works on assessment of CVS. In particular, Mascagni et al. [7] utilizes the semantic segmentation results of DeepLabV3+ [22] and predicts binary labels of CVS criteria and overall CVS achievement from a compactly-designed CNN. More recently, Murali et al. [23] proposed incorporating graph neural networks (GNNs) to encode the latent scene graph in LC video frames, and shows improved performance over DeepCVS. However, these methods do not involve domain knowledge on CVS criteria and thus their results could not be easily analyzed or explained. In another related work, Madani et al. [24] proposed using CNN-based semantic segmentation methods to identify safe and dangerous zones of dissections, which could serve as an important intermediary stage for CVS assessment. ## III Methodology **Key Challenges in Automated CVS Assessment.** Since the BDI incidence rate in LCs is extremely low (0.36% to 1.5%) [10], a CVS detection technique must necessarily have Fig. 1: Anatomy of hepatocytic triangle. [8] very high accuracy (e.g., 90% or more) to lower this BDI rate even further. Due to limited training data available,2 such a high accuracy is infeasible by direct application of machine-learning techniques, as seen in some of the prior works. One approach to achieve such accuracy would be to integrate extensive clinical/domain knowledge, as incorporating such knowledge has been shown to boost the accuracy of ML algorithms (e.g., [25, 26, 27]). However, leveraging clinical domain knowledge in ML models can be quite challenging. Footnote 2: One can realistically expect to curate a few hundred or at most a few thousand LC surgical videos; by contrast, highly accurate ML models tend to use millions of training samples. **Method Pipeline and Key Contributions.** Our approach tackles the aforementioned challenges by incorporating domain knowledge with limited training data. In particular, our approach's pipeline is as follows (see Fig. 2). 
First, to address the imbalance of classes in available datasets, we segment each image frame by using two Transformer-based models trained on separate semantic segmentation datasets; relevant classes from these two segmentation maps are then appropriately fused. Then, we use structural anatomic knowledge of the gallbladder and surrounding structures to estimate the region of interest (ROI), which is used to efficiently assess the CVS conditions. Finally, we assess each of the three CVS conditions based on their structural definitions, and then the overall CVS as a conjunction of the three CVS conditions. Overall, our main contributions include: 1. Introducing a _two-stream approach for semantic segmentation_ to address the issue of class imbalance. 2. Proposing a novel _Sobel loss function_ to reduce artifacts and over-segmentation around edges. 3. _Integration of clinical domain knowledge:_ Developing a rule-based approach for estimating ROIs and assessing CVS conditions in LC videos based on domain knowledge. ### _Semantic Segmentation_ **Two-stream Segmentation and Fusion.** For segmentation of LC frames, we wish to use the publicly available _CholecSeg8K_ dataset which includes 8,080 frames annotated with related classes. However, the _CholecSeg8K_ dataset is missing two important classes, viz., _cystic plate_ and _cystic artery_, and has low number of pixels in _cystic duct_ class; all of these three classes are crucial to our approach (in particular, in estimation of the region of interest, discussed in the next section). To compensate for the above shortcomings, we created the _CholecSeg170_ dataset which includes annotations for cystic plate and cystic artery, and much higher proportion of _cystic duct_ pixels. We believe that training two separate segmentation models over the above two datasets separately should yield better performance, especially on the important classes _cystic duct_ and _cystic artery_, than training a single segmentation model over the union of the above datasets; our intuition is confirmed in our evaluation results (see Section. IV-B). Thus, the first segmentation model \(\mathbf{Seg_{1}}\) is trained on the _CholecSeg170_ dataset, while the second model \(\mathbf{Seg_{2}}\) is trained Fig. 3: Our proposed Sobel loss. It reduces artifacts and over-segmentation around edges by penalizing the difference between edge maps derived from segmentation maps. Fig. 2: Overall pipeline of our approach. The input frame is first segmented by two Transformer-based models. The segmentation maps are then merged for ROI estimation. Finally, CVS conditions are evaluated based on ROI and segmentation maps. on the _CholecSeg8K_ dataset. We use \(\mathbf{Seg_{1}}\) for segmentation of 6 classes: _cystic artery, cystic duct, gallbladder, liver, instrument, cystic plate_, while \(\mathbf{Seg_{2}}\) is used for segmentation of only the _fat_ class. For an input image \(\mathbf{I}\), let \(P_{1}=\mathbf{Seg_{1}(I)}\), \(P_{2}=\mathbf{Seg_{2}(I)}\). Then, the merged segmentation map is constructed by \(P_{merged}=P_{1}\oplus\mathbf{Fat}(P_{2})\), where \(\mathbf{Fat}\) denotes creating a mask of the _fat_ class. **Sobel Loss Function.** We use the Transformer-based Segmenter [28] model as the baseline for our semantic segmentation method. When evaluating the segmentation results, we observed that the edges between different anatomical classes are not clearly separated, causing artifacts and oversegmentation (see Section. IV-B). 
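As a concrete illustration of the fusion \(P_{merged}=P_{1}\oplus\mathbf{Fat}(P_{2})\), the short sketch below merges two per-pixel label maps. The class ids and the precedence rule (fat from \(\mathbf{Seg_{2}}\) only fills pixels that \(\mathbf{Seg_{1}}\) left as background) are assumptions of this sketch; the text does not pin down the exact precedence.

```python
import numpy as np

# Label ids are arbitrary placeholders (assumption); what matters is that FAT comes
# from the second stream and all other classes from the first.
(BACKGROUND, CYSTIC_ARTERY, CYSTIC_DUCT, GALLBLADDER,
 LIVER, INSTRUMENT, CYSTIC_PLATE, FAT) = range(8)

def fuse_segmentations(p1, fat_mask):
    """p1: per-pixel label map from Seg1 (6 anatomy classes plus background).
    fat_mask: boolean mask of the 'fat' class predicted by Seg2.
    Fat fills only pixels Seg1 labelled as background, so anatomy labels survive."""
    merged = p1.copy()
    merged[fat_mask & (p1 == BACKGROUND)] = FAT
    return merged

# toy 4x4 example
p1 = np.full((4, 4), BACKGROUND)
p1[1:3, 1:3] = GALLBLADDER
fat = np.zeros((4, 4), dtype=bool)
fat[0, :] = True
print(fuse_segmentations(p1, fat))
```

Note that fusing the maps does not by itself sharpen the blurred boundaries between anatomical classes observed with the baseline segmenter.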
To address this issue, we propose adding an edge-based constraint to the loss function. Specifically, we use the Sobel operator to generate class-agnostic edge information from the segmentation maps, and then apply Smooth _L1_ Loss [29] between the ground truth and predicted edges. The Sobel operator uses of two \(3\times 3\) convolutional filters to calculate the approximations of the derivatives both vertically and horizontally. Given input image \(\mathbf{I}\), we calculate the gradient of the image _Sobel_(\(\mathbf{I}\)) as: \(\textit{Sobel}(\mathbf{I})=\sqrt{G_{x}^{2}+G_{y}^{2}}\), where \[G_{x}=\begin{bmatrix}2&0&-2\\ 4&0&-4\\ 2&0&-2\end{bmatrix}*\mathbf{I},\qquad G_{y}=\begin{bmatrix}2&4&2\\ 0&0&0\\ -2&-4&-2\end{bmatrix}*\mathbf{I}, \tag{1}\] \(G_{x}\), \(G_{y}\) are the two images containing horizontal and vertical derivatives respectively, and \(*\) denotes the 2-D convolution operation. Given ground truth segmentation map \(G\) and predicted segmentation map \(P\), we define our Sobel loss function as: \[L_{Sobel}(G,P)=smooth_{L_{1}}(\textit{Sobel}(G)-\textit{Sobel}(P)) \tag{2}\] where \(smooth_{L_{1}}\) is the Smooth _L1_ Loss. Finally, our training objective is defined as \[L(G,P)=L_{ce}(G,P)+\lambda L_{Sobel}(G,P) \tag{3}\] where \(L_{ce}\) is the cross-entropy loss, and \(\lambda\) is a hyperparameter. The segmentation model pipeline is shown in Fig. 3. ### _Region of Interest (RoI) Estimation_ In LC procedures, the assessment of CVS is mainly based on a specific region where the surgeon dissects tissue to expose cystic duct, cystic artery, and the cystic plate, and thereby creating the CVS. In LC terminology, this region is referred to as the _hepatocystic triangle_. In most surgeries, the triangle is never fully visible since the surgeons usually only dissect to the point where cystic duct and cystic artery are sufficiently exposed while the common hepatic duct and common bile duct remain hidden. Thus, in the LC surgery frames, we observe that only a part (in shape of a quadrilateral) of the hepatocycistic triangle is visible. Hence, our region of interest (ROI) is of a quadrilateral shape with four sides. The **ROI quadrilateral** (see Fig. 4) is defined by anatomical structures around the gallbladder observed in the LC surgery videos. Thus, we develop a clinically-motivated rule-based method to determine the ROI, rather than applying standard learning techniques as is typically done. In particular, the ROI quadrilateral is formed by four points in an LC surgery image: (A) Cystic duct's end that is connected to the gallbladder; (B) Other end of the (visible) cystic duct; (C) Intersection point between the liver edge and a line drawn from point B to the outline of the largest cluster of _fat_ class; (D) the point connecting the gallbladder to the liver. Note that the determination of point (C) is done to exclude the main cluster of fat tissue from the ROI--we use the condition of such a quadrilateral being devoid of any fat tissue as the sub-condition for the C1 criteria of CVS. In a segmented frame, we estimate the above defined four points as follows. First, we estimate points \(A\) and \(B\) as follows (see Fig. 5). We perform principal component analysis (PCA) on the main cluster \(\mathbf{C_{duct}}\) of _cystic duct_ pixels, as detected by the first segmentation model \(\mathbf{Seg_{1}}\). 
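A minimal PyTorch sketch of the objective in Eqs. (1)-(3) is given below. To keep the edge term differentiable, it applies the Sobel magnitude to per-class probability maps (softmax outputs and one-hot ground truth) rather than to hard label maps; this reading of "class-agnostic edge information", and the helper names, are assumptions of the sketch.

```python
import torch
import torch.nn.functional as F

# Sobel kernels exactly as written in Eq. (1) (twice the usual 1-2-1 weights).
KX = torch.tensor([[2., 0., -2.], [4., 0., -4.], [2., 0., -2.]]).view(1, 1, 3, 3)
KY = torch.tensor([[2., 4., 2.], [0., 0., 0.], [-2., -4., -2.]]).view(1, 1, 3, 3)

def sobel(img):
    """Gradient magnitude of a single-channel image batch (N, 1, H, W)."""
    gx = F.conv2d(img, KX.to(img), padding=1)
    gy = F.conv2d(img, KY.to(img), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)

def sobel_per_channel(x):
    """Apply the Sobel magnitude to each channel of (N, C, H, W) independently."""
    n, c, h, w = x.shape
    return sobel(x.reshape(n * c, 1, h, w)).reshape(n, c, h, w)

def sobel_loss(pred_logits, target_labels):
    """Eq. (2): smooth-L1 between edge maps of the prediction and the ground truth."""
    probs = pred_logits.softmax(dim=1)
    onehot = F.one_hot(target_labels, num_classes=pred_logits.shape[1])
    onehot = onehot.permute(0, 3, 1, 2).float()
    return F.smooth_l1_loss(sobel_per_channel(probs), sobel_per_channel(onehot))

def total_loss(pred_logits, target_labels, lam=1.0):
    """Eq. (3): cross-entropy plus lambda times the Sobel loss (lambda = 1 is used)."""
    return F.cross_entropy(pred_logits, target_labels) + lam * sobel_loss(pred_logits, target_labels)

# toy usage: batch of 2 frames, 7 classes, 64x64
logits = torch.randn(2, 7, 64, 64, requires_grad=True)
labels = torch.randint(0, 7, (2, 64, 64))
print(total_loss(logits, labels))
```

The ROI estimation continues below from the PCA of the cystic-duct cluster.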
Let the two primary components obtained from PCA be \(\mathbf{X_{1}}\) and \(\mathbf{X_{2}}\), with \(\mathbf{X_{1}}\) being the one with a higher angle (almost perpendicular) to the gallbladder edge. Next, we create a line segment by starting from the centroid of the cluster \(\mathbf{C_{duct}}\) and extending in both directions along \(\mathbf{X_{1}}\) till the outline of the cluster is reached; let the endpoints of this line segment be \(p_{1}\) and \(p_{2}\), with \(p_{1}\) being the point closer to the gallbladder. We define \(A\) to the point between \(p_{1}\) and its nearest neighbour on the gallbladder edge, and \(B\) as \(p_{2}\). To estimate the point \(C\), we start with the line connecting \(A\) and \(B\), and rotate it clockwise till it intersects with the main cluster of _fat_ tissue; the intersection point is assigned to be point \(C\). Finally, we estimate the point \(D\) as follows. Since the segmentation maps usually do not yield a unique point where the gallbladder and liver edges intersect, we choose a pair of points, one from each edge, that has the minimal Euclidean distance between them; for this, we use a modified KD-Tree Nearest Neighbour algorithm [30]. The point \(D\) is defined as the midpoint between these two points. ### _CVS Assessment_ Given the semantic segmentation maps and the ROI quadrilateral in an image frame, we develop a rule-based method to determine attainment of each of the three CVS criteria and thus the CVS. Recall the three CVS conditions from Section. II. For Fig. 4: ROI Quadrilateral. **C1**, which is to check for fat or fibrous tissue in the hepatocystic triangle (and thus, the ROI quadrilateral), we determine attainment of C1 condition based on following two conditions: (a) No _fat_ pixels in the ROI; (b) The size of the cluster of _liver_ pixels in the ROI is more than a certain threshold \(T_{liver}\). Note that the _fat_ and _liver_ classes are determined by \(\mathbf{Seg_{2}}\) and \(\mathbf{Seg_{1}}\) segmentation maps respectively. If both the above conditions are satisfied, we consider C1 condition to be satisfied. For **C2**, if the size of the cluster of _cystic plate_ pixels in the ROI surpasses a certain threshold \(T_{cp}\), it is considered satisfied. For **C3**, if exactly one cluster of _cystic duct_ pixels and one cluster of _cystic artery_ pixels are detected by \(\mathbf{Seg_{1}}\) in the ROI, it is considered satisfied. We empirically set \(T_{liver}=100\) and \(T_{cp}=100\) to eliminate some of the noisy predictions. ## IV Results In this section, we introduce the datasets we used for development and evaluation of our techniques and the results of our method. ### _Datasets_ The combined _Cholec80_[31]and _m2cai16-workflow_[32] dataset consists of 117 videos after excluding duplicate cases [24]. We use the 17 videos from the _CholecSeg8K_ dataset as the development set and the remaining 100 as the evaluation set. The development set consists of two separate semantic segmentation datasets, namely _CholecSeg8K_ and _CholecSeg170_. The evaluation set, named _CVS6K_, consists of 6,000 frames with only binary CVS annotations. **CholecSeg8K.** The _CholecSeg8K_ dataset is a publicly available semantic segmentation dataset based on the _Cholec80_ dataset. In total, 8,080 frames were collected from 17 videos in the _Cholec80_ dataset, and 13 different semantic classes (including background) were annotated. Most relevant classes in LC are annotated, such as _liver, fat, gallbladder_ and _cystic duct_. 
However, _CholecSeg8K_ is highly unbalanced in class distribution, and some crucial classes for assessing CVS, such as _cystic plate_ and _cystic artery_, are absent from the dataset.

**CholecSeg170.** To address the limitations of _CholecSeg8K_, we collected 170 frames from the same 17 videos to form a separate semantic segmentation dataset, which we call the _CholecSeg170_ dataset. For each video, 10 frames are manually selected close to the _ClippingCutting_ stage as defined in _Cholec80_, where most anatomical structures necessary for evaluating CVS are visible. The selected frames are annotated with the following 7 semantic classes: {_cystic artery, cystic duct, gallbladder, instrument, liver, cystic plate, background_}. Additionally, ground-truth CVS conditions are labeled for each frame. The 170 frames are divided into 140 frames for training and 30 frames for validation.

**CVS6K.** The 100 videos which are not included in the semantic segmentation datasets are used to construct the CVS evaluation set. We first sample a one-minute clip at 1 fps from each video, all of which are near the _ClippingCutting_ stage of the videos, when CVS conditions can be clearly evaluated in most frames. For each frame, we assign three binary labels corresponding to the three criteria of CVS as suggested by SAGES [6]. If and only if all three criteria are satisfied in a frame do we consider CVS achieved in that frame. The proportion of positive examples in the dataset is shown in Fig. 6. All annotations on the CVS evaluation dataset are verified independently by two experienced oncology surgeons (co-authors).

### _Semantic Segmentation_

We start by evaluating the effectiveness of our two-stream segmentation approach by computing the IoU metric on each relevant class in TABLE I. We observe that the two-stream approach improves the IoU by 11.85% on average, and the improvements are especially significant on low-frequency classes like _cystic duct_ (18.55%), _cystic artery_ (44.84%), and _cystic plate_ (14.84%). We also assess the enhancement resulting from the proposed Sobel loss on the validation set of _CholecSeg170_ in TABLE II. We see that the Sobel loss function resulted in a 1.84% improvement in mIoU and a 1.8% improvement in Dice score compared to the Segmenter baseline. We used \(\lambda=1\) when deploying the Sobel loss.

Fig. 5: Estimation of points **A** and **B** in our ROI estimation method. We first identify the two main components of the cystic duct cluster, \(\mathbf{X_{1}}\) and \(\mathbf{X_{2}}\), using PCA. Then we extend \(\mathbf{X_{1}}\) in both directions from the centroid of the cluster to find \(p_{1}\) and \(p_{2}\). Finally, we define the mid-point between \(p_{1}\) and its nearest neighbour on the gallbladder edge as **A**, and \(p_{2}\) as **B**.

Fig. 6: Proportion of positive examples in _CVS6K_.

We also evaluated **qualitative results** in Fig. 7. We see that our proposed Sobel loss penalizes noisy predictions around edges, leading to more inter-class separation and thereby creating more defined edges on anatomical structures and organs. Additionally, it also reduces noisy patches often observed from the baseline model.

### _CVS Conditions and CVS Assessment_

We present the accuracy (Acc.), balanced accuracy (Bacc.), Positive Predictive Value (PPV) and Negative Predictive Value (NPV) on the independent _CVS6K_ dataset in TABLE III.
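For concreteness, the rule-based checks of Section III-C can be sketched as follows. The class ids, the use of connected components as "clusters", and measuring cluster size as the largest component in pixels are assumptions of this sketch; the thresholds \(T_{liver}=T_{cp}=100\) are the values reported above.

```python
import numpy as np
from scipy import ndimage

# Assumed class ids, consistent with the fusion sketch earlier in the paper.
CYSTIC_ARTERY, CYSTIC_DUCT, LIVER, CYSTIC_PLATE, FAT = 1, 2, 4, 6, 7
T_LIVER, T_CP = 100, 100   # thresholds reported in Section III-C

def n_clusters(mask):
    _, n = ndimage.label(mask)
    return n

def largest_cluster_size(mask):
    labels, n = ndimage.label(mask)
    return 0 if n == 0 else int(np.bincount(labels.ravel())[1:].max())

def assess_cvs(seg, roi):
    """seg: fused per-pixel class map; roi: boolean mask of the ROI quadrilateral.
    Returns (C1, C2, C3, CVS)."""
    c1 = (not np.any((seg == FAT) & roi)) and \
         largest_cluster_size((seg == LIVER) & roi) > T_LIVER
    c2 = largest_cluster_size((seg == CYSTIC_PLATE) & roi) > T_CP
    c3 = (n_clusters((seg == CYSTIC_DUCT) & roi) == 1 and
          n_clusters((seg == CYSTIC_ARTERY) & roi) == 1)
    return c1, c2, c3, (c1 and c2 and c3)
```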
For the baseline approach, we re-implemented DeepCVS according to the descriptions in [7], with slight modifications to fit our experiment settings and allow a fair comparison. In particular, we trained two separate DeepLabV3+ semantic segmentation models on the _CholecSeg170_ and _CholecSeg8K_ datasets. The segmentation maps are fused in the same way as described in Section III-A. The CNN for classification of CVS conditions is implemented according to [7] except for the first layer. As may be observed in TABLE III, our rule-based method significantly outperforms the baseline model on both independent CVS criteria and overall CVS assessment, and shows more consistent performance among different CVS conditions.

Fig. 7: Qualitative results. Our proposed Sobel loss reduced over-segmentation of the cystic artery in column 1, and improved on the artifacts/fragmented segmentations of the gallbladder, cystic duct, and liver (columns 2, 3).

## V Conclusion

In this work, we have addressed a critical unmet clinical need, viz., assessing CVS in LC procedures to help minimize the incidence of BDIs. We developed a 3-step pipeline, which addresses the issues of class imbalance and artifacts in semantic segmentation, while also incorporating domain knowledge for more accurate CVS assessment. The results show great promise for future applications in computer-assisted LC procedures. However, one limitation of our approach is that it heavily relies on the quality of the segmentation results and does not include a reasonable fail-safe mechanism when segmentation models produce undesirable results. To address this challenge, we aim to develop methods that take advantage of segmentation-failure detection techniques in our future work.

## Acknowledgment

We would like to acknowledge Twinanda et al. [31] and Hong et al. [33] for making their datasets publicly available to the research community. Research reported in this publication was supported by the National Science Foundation (NSF) under award numbers FET-2106447, CNS-2128187, 2153056, 2125147, 2113485, 2006655 and the National Institutes of Health (NIH) under award numbers R01EY030085, R01HD097188, 1R21CA258493-01A1. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NSF and the NIH.
2310.03035
Early Detection of Post-COVID-19 Fatigue Syndrome Using Deep Learning Models
The research titled "Early Detection of Post-COVID-19 Fatigue Syndrome using Deep Learning Models" addresses a pressing concern arising from the COVID-19 pandemic. Post-COVID-19 Fatigue Syndrome (PCFS) has become a significant health issue affecting individuals who have recovered from COVID-19 infection. This study harnesses a robust dataset comprising 940 patients from diverse age groups, whose medical records were collected from various hospitals in Iraq over the years 2022, 2022, and 2023. The primary objective of this research is to develop and evaluate deep learning models for the early detection of PCFS. Leveraging the power of deep learning, these models are trained on a comprehensive set of clinical and demographic features extracted from the dataset. The goal is to enable timely identification of PCFS symptoms in post-COVID-19 patients, which can lead to more effective interventions and improved patient outcomes. The study's findings underscore the potential of deep learning in healthcare, particularly in the context of COVID-19 recovery. Early detection of PCFS can aid healthcare professionals in providing timely care and support to affected individuals, potentially reducing the long-term impact of this syndrome on their quality of life. This research contributes to the growing body of knowledge surrounding COVID-19-related health complications and highlights the importance of leveraging advanced machine learning techniques for early diagnosis and intervention. Keywords: Early Detection, Post-COVID-19 Fatigue Syndrome, Deep Learning Models, Healthcare, COVID-19 Recovery, Medical Data Analysis, Machine Learning, Health Interventions.
Fadhil G. Al-Amran, Salman Rawaf, Maitham G. Yousif
2023-09-26T17:44:17Z
http://arxiv.org/abs/2310.03035v1
# Early Detection of Post-COVID-19 Fatigue Syndrome Using Deep Learning Models

###### Abstract

The research titled "Early Detection of Post-COVID-19 Fatigue Syndrome using Deep Learning Models" addresses a pressing concern arising from the COVID-19 pandemic. Post-COVID-19 Fatigue Syndrome (PCFS) has become a significant health issue affecting individuals who have recovered from COVID-19 infection. This study harnesses a robust dataset comprising 940 patients from diverse age groups, whose medical records were collected from various hospitals in Iraq over the years 2022, 2022, and 2023. The primary objective of this research is to develop and evaluate deep learning models for the early detection of PCFS. Leveraging the power of deep learning, these models are trained on a comprehensive set of clinical and demographic features extracted from the dataset. The goal is to enable timely identification of PCFS symptoms in post-COVID-19 patients, which can lead to more effective interventions and improved patient outcomes. The study's findings underscore the potential of deep learning in healthcare, particularly in the context of COVID-19 recovery. Early detection of PCFS can aid healthcare professionals in providing timely care and support to affected individuals, potentially reducing the long-term impact of this syndrome on their quality of life. This research contributes to the growing body of knowledge surrounding COVID-19-related health complications and highlights the importance of leveraging advanced machine learning techniques for early diagnosis and intervention.

Early Detection, Post-COVID-19 Fatigue Syndrome, Deep Learning Models, Healthcare, COVID-19 Recovery, Medical Data Analysis, Machine Learning, Health Interventions.
## Introduction

The COVID-19 pandemic has presented unprecedented challenges to healthcare systems worldwide. While much attention has been focused on the acute phase of the disease, there is growing recognition of the long-term health consequences affecting a significant proportion of survivors. Among these consequences, Post-COVID-19 Fatigue Syndrome (PCFS) has emerged as a particularly debilitating condition[1-7]. PCFS, also known as "Long COVID" or "Long-Haul COVID," is characterized by a range of persistent symptoms that continue for weeks or months beyond the acute phase of the illness. Fatigue, cognitive impairment, and physical deconditioning are hallmark features of PCFS. Other common symptoms include breathlessness, joint pain, and a variety of neurological and psychological symptoms[8-13]. The exact mechanisms underlying PCFS are still not fully understood, and its diagnosis remains challenging. Furthermore, early identification of individuals at risk of developing PCFS is crucial for timely intervention and improved patient outcomes. This is where advanced technologies like deep learning models can play a vital role[14-16]. Deep learning, a subset of artificial intelligence (AI), has shown remarkable success in various healthcare applications, including medical image analysis, disease prediction, and risk stratification. Leveraging the power of deep learning, this research aims to develop predictive models for the early detection of PCFS[17-21]. Our study encompasses a large cohort of 940 COVID-19 survivors, whose medical data have been meticulously collected from diverse healthcare facilities across Iraq. The dataset spans the years 2022, 2022, and 2023, covering a wide range of age groups [22-24]. By employing deep learning algorithms, we intend to analyze this extensive dataset to identify patterns and markers associated with the onset of PCFS. Previous studies have demonstrated the potential of deep learning in predicting various medical conditions, including those with complex and multifactorial etiologies [25,26]. This research has the potential to revolutionize the clinical management of PCFS. Early detection can lead to timely interventions, such as personalized rehabilitation programs and targeted medical treatments, significantly improving the quality of life for affected individuals.
Moreover, understanding the risk factors associated with PCFS can inform public health strategies for preventing and managing this long-term health consequence of COVID-19[27-29]. The profound impact of the COVID-19 pandemic has necessitated a comprehensive understanding of its aftermath, particularly in individuals experiencing persistent symptoms long after the acute infection. These lingering symptoms have been collectively termed 'Long COVID' or 'Post-COVID-19 Syndrome' (30-33). As we delve deeper into the complexities of Long COVID, it becomes increasingly evident that a substantial subset of patients, even those who experienced mild or asymptomatic acute infections, are facing a spectrum of physical and mental health challenges [34]. This study aims to contribute to our understanding of Long COVID, focusing on early detection utilizing deep learning models. By harnessing the power of artificial intelligence, we endeavor to identify key patterns and predictive factors that can aid in the timely recognition and management of this condition [35]. Such insights are vital not only for healthcare professionals but also for policymakers, as they guide the allocation of resources and support for affected individuals [36]. ## Materials and Methods: In this study, we collected data from 940 patients who had contracted COVID-19 and were subsequently monitored for the development of Post-COVID-19 Fatigue Syndrome (PCFS). The data were gathered from various hospitals across Iraq during the years 2022, 2022, and 2023. The dataset included patients of various age groups, providing a comprehensive view of PCFS development. To create a robust dataset, we collected demographic information, clinical records, and laboratory results of the patients. This dataset served as the foundation for our analysis. ## Study Design: This study employed a retrospective cohort design. We followed the patients who had recovered from acute COVID-19 infections and assessed their symptoms and health status for an extended period to identify the onset of PCFS. The study's primary objective was to develop a predictive model for early detection of PCFS using deep learning techniques. ## Statistical Analysis: We performed descriptive statistics to summarize the demographic and clinical characteristics of the study population. Continuous variables were presented as means \(\pm\) standard deviations, while categorical variables were summarized as frequencies and percentages. To evaluate the association between various factors and the development of PCFS, we conducted univariate and multivariate logistic regression analyses. The results were reported as odds ratios (ORs) with 95% confidence intervals (CIs). ## Deep Learning Analysis: The deep learning analysis was a pivotal component of this study. We utilized a convolutional neural network (CNN) architecture to analyze the collected data. This CNN model was trained on the dataset to identify patterns and features that could predict the development of PCFS. We implemented the deep learning analysis using popular deep learning frameworks such as TensorFlow or PyTorch. The model's performance was evaluated based on metrics like accuracy, precision, recall, and F1-score. In addition, to ensure the robustness of our model, we employed techniques such as cross-validation and hyperparameter tuning. 
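As a schematic illustration of the two analysis stages described above, the sketch below fits a logistic regression to synthetic stand-in data, reports exponentiated coefficients as adjusted odds ratios, and computes the same classification metrics used to evaluate the deep model. The data are simulated and the classifier is deliberately simple; it is neither the study's CNN nor its patient dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

rng = np.random.default_rng(0)
n = 940   # same cohort size as the study, but the records below are simulated

# Stand-in features: age, severity (0 mild / 1 moderate / 2 severe), hypertension,
# fatigue during the acute infection.
X = np.column_stack([
    rng.normal(42.5, 12, n),
    rng.integers(0, 3, n),
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
])
logit = -2 + 0.05 * (X[:, 0] - 42.5) + 0.8 * X[:, 1] + 0.5 * X[:, 2] + 1.1 * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # simulated PCFS outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Adjusted odds ratios (the multivariate analogue of Table 6): exp of the coefficients.
print("odds ratios:", np.exp(clf.coef_).round(2))

# Metrics analogous to Table 7 (the paper's model is a deep network, not this one).
pred = clf.predict(X_te)
prob = clf.predict_proba(X_te)[:, 1]
print("accuracy :", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred))
print("recall   :", recall_score(y_te, pred))
print("F1       :", f1_score(y_te, pred))
print("AUC-ROC  :", roc_auc_score(y_te, prob))
```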
The results of our deep learning analysis provided valuable insights into the early detection of PCFS among post-COVID-19 patients, contributing to better patient care and management strategies.

Table 1: Demographic Characteristics of the Study Population. This table provides an overview of the demographic characteristics of the COVID-19 patients.

| Characteristic | Number of Patients | Percentage (%) |
|---|---|---|
| Total patients | 940 | 100 |
| Age (years), mean | 42.5 | - |
| Age (years), range | 21-76 | - |
| Gender: Male | 480 | 51.1 |
| Gender: Female | 460 | 48.9 |

Table 2: Clinical Characteristics of COVID-19 Patients.

| Characteristic | Number of Patients (%) |
|---|---|
| Severity: Mild | 300 (31.9) |
| Severity: Moderate | 450 (47.9) |
| Severity: Severe | 190 (20.2) |
| Comorbidity: Hypertension | 180 (19.1) |
| Comorbidity: Diabetes | 140 (14.9) |
| Comorbidity: Obesity | 90 (9.6) |
| Symptom: Fever | 720 (76.6) |
| Symptom: Cough | 580 (61.7) |
| Symptom: Fatigue | 410 (43.6) |

Table 3: Laboratory Results During COVID-19 Infection. This table displays laboratory results obtained during the acute phase of COVID-19, including markers and other relevant laboratory tests.

| Laboratory Test | Mean (±SD) |
|---|---|
| Hemoglobin (g/dL) | 13.5 (2.1) |
| White Blood Cell Count | 7.8 (2.3) |
| Lymphocyte Count (%) | 25.6 (7.8) |
| C-Reactive Protein (mg/L) | 18.4 (9.2) |
| D-dimer (ng/mL) | 340 (180) |

Table 4: Prevalence of PCFS among the study population after recovery from COVID-19.

| PCFS Status | Number of Patients | Percentage (%) |
|---|---|---|
| PCFS Present | 120 | 12.8 |
| PCFS Absent | 820 | 87.2 |

Table 5: Univariate Analysis of Factors Associated with PCFS.

| Factor | Odds Ratio (95% CI) |
|---|---|
| Age (years) | 1.08 (1.04-1.13) |
| Gender (Female vs. Male) | 0.92 (0.68-1.25) |
| Severity (Severe vs. Mild) | 2.75 (1.58-4.80) |
| Hypertension | 1.82 (1.27-2.60) |
| Diabetes | 1.45 (1.01-2.09) |
| Fatigue during COVID-19 | 3.21 (2.28-4.53) |

Table 6: Multivariate Analysis of Factors Associated with PCFS. This table extends the analysis to multivariate logistic regression.

| Factor | Adjusted Odds Ratio (95% CI) |
|---|---|
| Age (years) | 1.06 (1.02-1.11) |
| Severity (Severe vs. Mild) | 2.41 (1.37-4.22) |
| Hypertension | 1.68 (1.17-2.41) |
| Fatigue during COVID-19 | 2.95 (2.08-4.18) |

Table 7: Performance Metrics of Deep Learning Model.

| Metric | Value |
|---|---|
| Accuracy | 0.87 |
| Precision | 0.85 |
| Recall (Sensitivity) | 0.89 |
| F1-Score | 0.87 |
| AUC-ROC | 0.92 |

## Discussion

In this section, we delve into the discussion of our findings regarding the early detection of Post-COVID-19 Fatigue Syndrome (PCFS) through the utilization of Deep Learning Models (DLMs). The study encompassed a diverse dataset of 940 patients, sourced from various hospitals in Iraq, spanning the years 2022, 2023, and encompassing various age groups.

### The Role of Deep Learning Models

Our research underscores the pivotal role played by Deep Learning Models (DLMs) in the early detection of PCFS. These models have emerged as powerful tools in medical diagnostics: they analyze complex datasets with precision and can identify subtle patterns that might elude traditional diagnostic methods. The study supports findings from earlier work that showcased the utility of Near-Infrared Chemical Imaging (NIR-CI) in pharmaceutical authentication [37-40]. Such advanced techniques are in line with the evolving landscape of diagnostic technologies, including DLMs, which exhibit promising potential in medical research.

### Immune System Responses

A noteworthy observation from our research, consistent with the findings of other studies, was the identification of immunological markers related to human papillomavirus infection in ovarian tumors [41-47]. This underscores the importance of understanding the immune system's responses to disease, a facet that can be harnessed in the early detection of PCFS.

### Post-COVID-19 Effects

Moreover, our study aligns with the growing body of evidence on post-COVID-19 effects explored in previous studies [49-51]. It highlights the need for comprehensive investigations into the long-term health consequences of COVID-19, such as PCFS, which can have a profound impact on patients' lives.

### Machine Learning in Healthcare

Our study reaffirms the transformative potential of machine learning in healthcare, an area of increasing importance in the medical field.
These findings resonate with the work of John Martin and his team (2022), who employed machine learning algorithms to characterize pulmonary fibrosis patterns in post-COVID-19 patients [52]. The application of machine learning techniques can significantly enhance diagnostic accuracy and aid in early disease detection.

### Future Directions

Looking ahead, further research in this domain should continue to harness the capabilities of Deep Learning Models. Our study supports the call for prospective research, as echoed by Yousif et al. (2018) [44], to validate the utility of these models in real-world clinical settings. Additionally, the integration of additional clinical and biological markers, as suggested by Sadiq et al. (2018) [53], can refine the accuracy of PCFS detection models.

Table 9: Feature Importance Analysis.

| Feature | Importance Score |
|---|---|
| Age | 0.42 |
| Severity | 0.28 |
| Hypertension | 0.15 |
| Fatigue during COVID-19 | 0.10 |

### Limitations and Conclusion

However, it is important to acknowledge the limitations of our study. While our dataset was comprehensive, it may not encompass all demographic groups, as highlighted by Hasan et al. (2020) in their work on urinary tract infections [54]. Additionally, the complexity of human biology and disease presentation necessitates further research to improve the robustness of PCFS detection models. In conclusion, our research contributes to the ongoing discourse on early disease detection. Utilizing Deep Learning Models, we underscore their potential in the early detection of Post-COVID-19 Fatigue Syndrome, in alignment with contemporary research endeavors. As the field of machine learning in healthcare continues to evolve, we anticipate these models will play an increasingly vital role in improving patient care.

## Conclusion:

This research showcases the potential of deep learning models in identifying Post-COVID-19 Fatigue Syndrome (PCFS) at an early stage. The study's dataset, drawn from different regions of Iraq and spanning various age groups, lends robustness to the models' predictive capabilities. The findings underscore the value of early intervention in mitigating the impact of PCFS on individuals recovering from COVID-19.

## Acknowledgments:

The authors express their gratitude to the participating hospitals in Iraq for providing the essential medical data for this study. Their contributions were fundamental to the success of this research.

## Conflict of interest:

The authors declare no conflicts of interest associated with this research.

## Data Availability:

The dataset used in this study is available upon request from the corresponding author, subject to ethical and privacy considerations.
2309.06748
CONVERSER: Few-Shot Conversational Dense Retrieval with Synthetic Data Generation
Conversational search provides a natural interface for information retrieval (IR). Recent approaches have demonstrated promising results in applying dense retrieval to conversational IR. However, training dense retrievers requires large amounts of in-domain paired data. This hinders the development of conversational dense retrievers, as abundant in-domain conversations are expensive to collect. In this paper, we propose CONVERSER, a framework for training conversational dense retrievers with at most 6 examples of in-domain dialogues. Specifically, we utilize the in-context learning capability of large language models to generate conversational queries given a passage in the retrieval corpus. Experimental results on conversational retrieval benchmarks OR-QuAC and TREC CAsT 19 show that the proposed CONVERSER achieves comparable performance to fully-supervised models, demonstrating the effectiveness of our proposed framework in few-shot conversational dense retrieval. All source code and generated datasets are available at https://github.com/MiuLab/CONVERSER
Chao-Wei Huang, Chen-Yu Hsu, Tsu-Yuan Hsu, Chen-An Li, Yun-Nung Chen
2023-09-13T06:40:24Z
http://arxiv.org/abs/2309.06748v1
# Converser: Few-Shot Conversational Dense Retrieval ###### Abstract Conversational search provides a natural interface for information retrieval (IR). Recent approaches have demonstrated promising results in applying dense retrieval to conversational IR. However, training dense retrievers requires large amounts of in-domain paired data. This hinders the development of conversational dense retrievers, as abundant in-domain conversations are expensive to collect. In this paper, we propose Converser, a framework for training conversational dense retrievers with at most 6 examples of in-domain dialogues. Specifically, we utilize the in-context learning capability of large language models to generate conversational queries given a passage in the retrieval corpus. Experimental results on conversational retrieval benchmarks OR-QuAC and TREC CAsT 19 show that the proposed Converser achieves comparable performance to fully-supervised models, demonstrating the effectiveness of our proposed framework in few-shot conversational dense retrieval.1 Footnote 1: All source code and generated datasets are available: [https://github.com/MiuLab/CONVERSER](https://github.com/MiuLab/CONVERSER) ## 1 Introduction Conversational information retrieval (CIR) has been an important area of research in recent years, aiming to retrieve relevant information from a large corpus of text in a conversational format. It has gained considerable interest due to its potential to deliver information in a natural format in response to a user's queries. Unlike traditional IR, CIR poses distinctive challenges, including its multi-turn and context-dependent nature, which require more nuanced approaches Yu et al. (2021); Fang et al. (2022). Dense retrieval methods have demonstrated their ability to understand the semantics of complex user queries and shown promising performance on open-domain retrieval Karpukhin et al. (2020). One of the major obstacles to conversational dense retrieval is the scarcity of training data, given the high cost and extensive time to collect high-quality information-seeking conversations Adlakha et al. (2022). Previous work has explored various approaches to address this issue Dai et al. (2022); Kim et al. (2022). However, most methods still rely on the assumption that a large amount of in-domain data is present and build data augmentation models upon it. In this paper, we aim to develop a few-shot conversational dense retrieval model that can effectively retrieve relevant passages based on a small number of in-domain dialogues. To achieve this, we leverage the in-context learning capability of large language models (LLMs) to generate synthetic passage-dialogue pairs with few-shot demonstrations. Specifically, in-domain passages are sampled from the retrieval corpus, and dialogues are synthesized by asking LLMs to generate a series of queries based on a few examples. We also employ a self-consistency filtering mechanism to automatically discard inconsistent generated queries, ensuring the accuracy and reliability of the generations. We conduct experiments on two benchmark datasets, including OR-QuAC Qu et al. (2020) and TREC CAsT 19 Dalton et al. (2019). The experimental results demonstrate that our proposed framework, Converser, performs comparably to fully-supervised models that are trained on _thousands_ of annotated dialogues while using only 6 examples at most. 
Furthermore, analyses show that Converser rivals other data augmentation methods that utilize full in-domain datasets, demonstrating its effectiveness. ## 2 Related Work Conversational Dense RetrievalConversational dense retrieval poses a unique challenge in that the questions are context-dependent. Prior works have explored various modeling techniques for conver sational history to address this challenge Huang et al. (2018); Choi et al. (2018); Yeh and Chen (2019); Chiang et al. (2020). However, these works only examined the modeling ability for conversational question answering (CQA), where the relevant passages are provided. More recently, Qu et al. (2020) proposed OR-ConvQA, which extends CQA to the open-domain setting where a retrieval module is required. ConvDR Yu et al. (2021) utilizes an ad-hoc dense retriever and manually rewritten context-independent queries for training few-shot retrievers and rerankers, while our method does not require an ad-hoc model and additional annotation. Others have explored various methods for encoding conversational queries Li et al. (2021); Fang et al. (2022); Wu et al. (2022); Liang et al. (2022), which are orthogonal to our work. ### Synthetic Data Generation for Dense Retrieval Due to the data-hungry nature of dense retrievers, synthetic data generation for dense retrieval has drawn considerable interest. Previous works have worked on generating information-seeking conversations via transforming documents Dai et al. (2022); Kim et al. (2022) or web search sessions Mao et al. (2022). However, these methods all require training query generators with conversational data, which does not mitigate the data scarcity issue. Our method requires only 6 in-domain dialogues with their relevant passages and demonstrates comparable performance to models trained on thousands of manually annotated dialogues. InPars Bonifacio et al. (2022) and Promptagator Dai et al. (2023) are the most closely related works to our method. They both proposed to generate synthetic queries with LLMs from few-shot examples, which achieved comparable performance to supervised methods in dense retrieval. Inspired by these works, our method further extends few-shot query generation to the conversational setting. We propose novel techniques for generating conversational queries and show that they are crucial to handle the unique challenges of conversational dense retrieval. ## 3 Proposed Method: Converser We propose few-shot conversational dense retrieval with synthetic data generation, Converser, which aims to generate synthetic conversational queries given few examples. More formally, given a conversational retrieval task \(T\), its retrieval corpus \(\mathcal{P}_{T}\), and \(k\) examples, we aim to generate synthetic conversational query-passage pairs \(\{\hat{C}_{1},\cdots,\hat{C}_{n}\}\) for training dense retrievers. ### Few-Shot Conversational Query Generation The core of our method is _few-shot query generation_. We leverage the in-context learning ability of LLMs Brown et al. (2020) to generate conversational queries. Specifically, we start with \(k\) examples \(\{C_{1},C_{2},\cdots,C_{k}\}\), where each \(C_{i}\) is a conversation represented as a series of query-passage pairs, \((q_{i}^{1},p_{1}^{1}),\cdots,(q_{i}^{n_{i}},p_{i}^{n_{i}})\), with \(n_{i}\) denoting the length of \(C_{i}\). 
Using these examples, we construct the following template \(\mathcal{T}\) as a few-shot demonstration for LLMs: \[\left[(p_{1}^{n_{1}},q_{1}^{1},\cdots,q_{1}^{n_{1}}),\cdots,(p_{k}^{n_{k}},q_{k}^{1},\cdots,q_{k}^{n_{k}})\right]\] Note that we always choose the relevant passage that corresponds to the last query in the exemplar, indicating that the last query \(q_{i}^{n_{i}}\) is generated given \(p_{i}^{n_{i}}\) and previous queries \(q_{i}^{1},\cdots,q_{i}^{n_{i}-1}\). The generation process for a synthetic conversation starts with randomly sampling a passage \(\hat{p}\) from the retrieval corpus, i.e., \(\hat{p}\sim\mathcal{P}_{T}\). We concatenate the template and the sampled passage to form an input text sequence \([\mathcal{T},\hat{p}]\). An LLM is employed for generating synthetic queries. It is expected to generate the first query \(\hat{q}_{1}\) that is relevant to \(\hat{p}\) based on the provided examples. We then append \(\hat{q}_{1}\) to the input sequence, forming the input sequence for generating the next query \(\hat{q}_{2}\), and so forth. We sequentially perform the generations for a conversation until a predefined number of turns is reached. ### Two-Stage Generation One unique characteristic of conversational queries is that the queries are _context-dependent_ Choi et al. (2018) except for the first query, which should be a self-contained query without any ambiguity. To address this difference, we propose to split the generation into two stages: first-query generation and follow-up query generation. When generating the first query for each conversation, we use an alternative template \(\mathcal{T}_{1}=\left[p_{1}^{1},q_{1}^{1},\cdots,p_{k}^{1},q_{k}^{1}\right]\), which contains only the first queries of the examples and their relevant passages. We then replace \(\mathcal{T}_{1}\) with \(\mathcal{T}\) for generating all the follow-up queries. In practice, we found that this two-stage approach reduces the number of generated first queries that are not self-contained and thus ambiguous. ### Passage Switching In a conversation, relevant passages may vary for different queries. To this end, we incorporate passage switching into the generation process. We randomly replace the current passage \(\hat{p}\) with a related passage \(\hat{p}^{\prime}\) in each turn with a probability \(p_{ps}\). The LLM is expected to generate queries based on the new passage. ### Consistency Filtering The generation process sometimes produces queries that are nonsensical, degenerate, ambiguous, or not grounded in the given passage. We adopt a filtering mechanism by ensuring _round-trip consistency_ Alberti et al. (2019). We follow the procedure in Dai et al. (2023), where an initial retriever is trained on all synthetic query-passage pairs. For each synthetic pair \((\hat{q},\hat{p})\), we use the initial retriever to retrieve the most relevant passages for \(\hat{q}\) from \(\mathcal{P}_{T}\). We keep the pair \((\hat{q},\hat{p})\) only if \(\hat{p}\) is in the top-k retrieved passages. ## 4 Experiments To evaluate whether our generated conversational questions can help train a conversational retriever, we conduct experiments on a conversational question answering dataset, OR-QuAC Qu et al. (2020), and a conversational search benchmark, TREC CAsT-19 Dalton et al. (2019). ### Experimental Setup We describe our experimental setup in this section. Additional details can be found in Appendix A. 
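To make the few-shot generation, two-stage templating, and passage switching described above concrete, the following is a minimal Python sketch of the generation loop. The `llm_generate` and `related_passage` helpers, the prompt layout, and the default parameters are illustrative assumptions rather than the exact implementation.

```python
import random

def build_template(exemplars, first_turn_only=False):
    # Format the few-shot exemplars as a prompt. Each exemplar is a dict with
    # a 'passage' (relevant to its last query) and the list of its 'queries'.
    blocks = []
    for ex in exemplars:
        queries = ex["queries"][:1] if first_turn_only else ex["queries"]
        lines = ["Passage: " + ex["passage"]]
        lines += [f"Q{i + 1}: {q}" for i, q in enumerate(queries)]
        blocks.append("\n".join(lines))
    return "\n\n".join(blocks)

def generate_conversation(exemplars, corpus, llm_generate, related_passage,
                          num_turns=6, p_switch=0.3):
    # Generate one synthetic conversation as a list of (query, passage) pairs.
    passage = random.choice(corpus)                     # sample p_hat from P_T
    queries, pairs = [], []
    for turn in range(num_turns):
        # Two-stage generation: a first-query-only template for the first turn,
        # the full conversational template for follow-up turns.
        template = build_template(exemplars, first_turn_only=(turn == 0))
        prompt = template + "\n\nPassage: " + passage + "\n"
        prompt += "".join(f"Q{i + 1}: {q}\n" for i, q in enumerate(queries))
        prompt += f"Q{turn + 1}:"
        query = llm_generate(prompt).strip()            # LLM continuation
        queries.append(query)
        pairs.append((query, passage))
        # Passage switching: move to a related passage with probability p_ps.
        if random.random() < p_switch:
            passage = related_passage(passage)
    return pairs
```

Round-trip consistency filtering can then be applied to the returned pairs by keeping only those whose passage appears in the top-k results of an initial retriever trained on all synthetic pairs.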
**Few-Shot Examples.** We manually select 6 examples for OR-QuAC and 5 examples for CAsT-19 and use the same set of examples in all experiments. Due to resource constraints, we use the remaining 15 conversations for evaluating on CAsT-19 without performing 5-fold cross-validation. **Generation.** We employ LLaMA-13B Touvron et al. (2023) as our pretrained LLM, which is not instruction-tuned and is open to the research community. We use nucleus sampling Holtzman et al. (2020) for decoding and set \(\text{top\_p}=0.95\), \(\text{temperature}=0.75\). We generate 427k turns (61k conversations) for OR-QuAC and 230k turns (32k conversations) for CAsT-19. An example of generation results can be found in Section 5. **Retrieval Corpus.** We generate synthetic conversations based on the retrieval corpus for each task respectively. For OR-QuAC, we use the provided 11M passages from English Wikipedia. For TREC CAsT-19, we use the official passage collection, which consists of 8M webpage passages from MS-MARCO Bajaj et al. (2016) and 30M Wikipedia passages from TREC-CAR Dietz et al. (2017). **Model Details.** We follow the procedures from DPR Karpukhin et al. (2020) to train our retrievers and use BERT-base as the pretrained model. We concatenate all previous queries and the current query as the input to the retriever. Additional details can be found in Appendix A. Figure 1: Illustration of our proposed framework. ### Baseline Systems * **OR-ConvQA**: A supervised dense retriever trained on OR-QuAC Qu et al. (2020). * **DPR**: We train a DPR model Karpukhin et al. (2020) on the training set of OR-QuAC for a fair comparison. ### Main Results Table 1 shows the experimental results. Note that both ConvDR and WikiDialog utilized multiple additional datasets and techniques, which are complementary to our method. On the OR-QuAC dataset, our proposed Converser outperforms the supervised baseline OR-ConvQA by a large margin and performs comparably to the supervised DPR trained on OR-QuAC. This result demonstrates the effectiveness of our few-shot generation strategy, as our model trained on a synthetic dataset based on only 6 annotated examples can rival the performance of supervised DPR, which is trained on 4000 annotated dialogues. On CAsT-19, Converser outperforms supervised DPR, which is trained on OR-QuAC. This shows that our task-specific generation strategy can effectively synthesize conversational queries on a new task given a few examples of the new task. Our proposed method provides better adaptability without requiring another supervised dataset as done in conventional transfer learning. ### Ablation and Comparative Study We conduct an ablation study on different settings of our proposed method, where we remove one component at a time to validate its effectiveness. We also compare our method with two datasets: OR-QuAC and WikiDialog Dai et al. (2022). To ensure the results are comparable, we limit the size of every dataset to 31k turns, which is the same as the training set of OR-QuAC. The training process and hyperparameters are also identical for all datasets. For WikiDialog, we subsample the original WikiDialog dataset and use it to fine-tune a retriever, without further fine-tuning on OR-QuAC. The results are shown in Table 2. Given the same number of synthesized turns, our Converser outperforms WikiDialog, which requires supervised conversational datasets for training a query generator. This result validates the effectiveness of our proposed few-shot generation method. 
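For concreteness, a minimal sketch of how a conversational query is flattened into the retriever input, following the Model Details above; the separator string and helper name are illustrative assumptions, not the exact formatting used.

```python
def build_retriever_input(history, current_query, sep=" [SEP] "):
    # Concatenate all previous queries and the current query into one string
    # for the BERT-based query encoder; the separator token is illustrative.
    return sep.join(list(history) + [current_query])

# Example: encode the third turn of a conversation together with its history.
history = ["What is the name of the building that houses the Museo Napoleonico?",
           "What is the Palazzo Primoli best known for?"]
query_text = build_retriever_input(history, "What year was it donated?")
```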
The ablation study demonstrates that all of our proposed components contribute to the improvement. ### Effect of Generated Data Size We explore the impact of the generated data size on the performance, where we conduct a series of experiments, systematically varying the number of generated turns used for training presented in Figure 2. It clearly illustrates that as the number of turns increases, the system's performance improves significantly. This finding highlights the crucial role of conversational data in enhancing the effectiveness of our model. ## 5 Qualitative Study We present a generated example in Table 3 to perform qualitative analysis. WikiDialog is capable of generating follow-up questions. However, it often \begin{table} \begin{tabular}{l|c c c|c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c|}{**OR-QuAC**} & \multicolumn{2}{c}{**CasT-19**} \\ & **MRR@5** & **R@5** & **MAP@10** & **MRR** & **NDCG@3** \\ \hline Supervised OR-ConvQA Qu et al. (2020) & 22.5 & 31.4 & - & - & - \\ Supervised DPR & 50.5 & 64.7 & 49.7 & 29.4 & 19.1 \\ Few-Shot Converser (Ours) & 49.6 & 63.4 & 48.7 & 35.8 & 21.4 \\ \hline \hline \end{tabular} \end{table} Table 1: Evaluation results (%). We report the result of OR-ConvQA from the original paper. \begin{table} \begin{tabular}{l|c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{**OR-QuAC**} \\ & **MRR@5** & **R@5** \\ \hline OR-QuAC & 50.5 & 64.7 \\ WikiDialog (31k) & 44.6 & 58.2 \\ \hline Converser (31k) & 46.8 & 61.5 \\ - Two-Stage & 45.1 & 59.9 \\ - Consistency Filtering & 45.2 & 59.8 \\ - Passage Switching & 45.6 & 58.1 \\ - Only 1-Shot & 42.1 & 55.2 \\ \hline \hline \end{tabular} \end{table} Table 2: Results of ablation study. We use the identical training procedure and training data size for each experiment to make them comparable. generates generic queries, such as _Are there any other interesting aspects about this article_. On the other hand, Converser with only 1 example suffers from a lack of diversity. Due to limited demonstrations, it generates queries that are very similar to the only example it is given. Our proposed Converser can generate a context-independent first question and follow-up questions, demonstrating its effectiveness. ## 6 Conclusion This paper introduces Converser, a synthetic data generation method for training few-shot conversational dense retrievers. We leverage the in-context learning capability of LLMs and propose techniques that are designed for generating conversational queries. Experimental results demonstrate that our proposed Converser achieves comparable performance to fully-supervised models while only requiring 6 annotated examples. Further analyses demonstrate that our method outperforms a fully-supervised data augmentation method. Future work could explore instruction-following LLMs, better filtering mechanisms, and synthesizing specialized data for conversational dense retrieval, such as query rewrites. ## Acknowledgements We thank the reviewers for their insightful comments. This work was financially supported by the National Science and Technology Council (NSTC) in Taiwan, under Grants 111-2222-E-002-013-MY3, 111-2628-E-002-016, and 112-2223-E-002-012-MY5 and Google. \begin{table} \begin{tabular}{l l} \hline \hline \multicolumn{2}{l}{**Generated Question Example**} \\ \hline **Title** & Palazzo Primoli \\ **Passage** & Palazzo Primoli Palazzo Primoli is a Palazzo in Rome, Italy. It is owned by the city of Rome and houses several museums and collections. 
The palazzo was built in the seventeenth century. In 1901 Count (Giuseppe Primoli (1851–1927) became its sole owner. He extended and partly modernised the palazzo with a new facade and entrance between 1901 and 1911. The Count’s maternal grandparents were Charles Lucien Bonaparte and Zenide Bonaparte, and the Count brought together a collection of objects (now the Museo Napoleonico), documenting the relationship between Rome and the Bonaparte family. He also was an avid photographer. In 1927 Giuseppe Primoli donated the palazzo and his collections to the municipality of Rome. The Museo Napoleonico is located on the palazzo’s ground floor, and the third floor is occupied by the Museo Mario Praz, the former residence of Mario Praz. Also located in the palazzo are the Count’s library and photographic archive. \\ \hline **WikiDialog** & Q1: What is known about Palazzo Primoli? \\ & Q2: Are there any other interesting aspects about this article? \\ & Q3: When was it built? \\ & Q4: Who were the previous owners of the palazzo? \\ & Q5: What was his role in the palazzo? \\ & Q6: What is the most interesting aspect of this article? \\ \hline **Converser** & Q1: What is the name of the building that houses the Museo Napoleonico? \\ & Q2: What is the Palazzo Primoli best known for? \\ & Q3: What year was it donated? \\ & Q4: What famous collection was donated? \\ & Q5: What other collections are housed in this museum? \\ & Q6: Who was the Count’s maternal grandparents? \\ \hline **Converser** & Q1: What is Palazzo Primoli in regards to Rome? \\ only 1 example & Q2: What is the Palazzo Primoli’s collection of objects? \\ & Q3: What is the Museo Napoleonico? \\ & Q4: What was the Count’s library? \\ & Q5: What was the Count’s photographic archive? \\ \hline \hline \end{tabular} \end{table} Table 3: A qualitative example from WikiDialog, Converser, and Converser with only 1 example. Figure 2: MRR@5 with regard to different number of generated turns on OR-QuAC.
2310.03753
ECGNet: A generative adversarial network (GAN) approach to the synthesis of 12-lead ECG signals from single lead inputs
Electrocardiography (ECG) signal generation has been heavily explored using generative adversarial networks (GAN) because the implementation of 12-lead ECGs is not always feasible. The GAN models have achieved remarkable results in reproducing ECG signals but are only designed for multiple lead inputs and the features the GAN model preserves have not been identified-limiting the generated signals use in cardiovascular disease (CVD)-predictive models. This paper presents ECGNet which is a procedure that generates a complete set of 12-lead ECG signals from any single lead input using a GAN framework with a bidirectional long short-term memory (LSTM) generator and a convolutional neural network (CNN) discriminator. Cross and auto-correlation analysis performed on the generated signals identifies features conserved during the signal generation-i.e., features that can characterize the unique-nature of each signal and thus likely indicators of CVD. Finally, by using ECG signals annotated with the CVD-indicative features detailed by the correlation analysis as inputs for a CVD-onset-predictive CNN model, we overcome challenges preventing the prediction of multiple-CVD targets. Our models are experimented on 15s 12-lead ECG dataset recorded using MyoVista's wavECG. Functional outcome data for each patient is recorded and used in the CVD-predictive model. Our best GAN model achieves state-of-the-art accuracy with Frechet Distance (FD) scores of 4.73, 4.89, 5.18, 4.77, 4.71, and 5.55 on the V1-V6 pre-cordial leads respectively and shows strength in preserving the P-Q segments and R-peaks in the generated signals. To the best of our knowledge, ECGNet is the first to predict all of the remaining eleven leads from the input of any single lead.
Max Bagga, Hyunbae Jeon, Alex Issokson
2023-09-23T16:43:31Z
http://arxiv.org/abs/2310.03753v1
ECGNet: A generative adversarial network (GAN) approach to the synthesis of 12-lead ECG signals from single lead inputs ###### Abstract Electrocardiography (ECG) signal generation has been heavily explored using generative adversarial networks (GAN) because the implementation of 12-lead ECGs is not always feasible. The GAN models have achieved remarkable results in reproducing ECG signals but are only designed for multiple lead inputs and the features the GAN model preserves have not been identified--limiting the generated signals use in cardiovascular disease (CVD)-predictive models. This paper presents ECGNet which is a procedure that generates a complete set of 12-lead ECG signals from any single lead input using a GAN framework with a bidirectional long short-term memory (LSTM) generator and a convolutional neural network (CNN) discriminator. Cross and autocorrelation analysis performed on the generated signals identifies features conserved during the signal generation--i.e., features that can characterize the unique-nature of each signal and thus likely indicators of CVD. Finally, by using ECG signals annotated with the CVD-indicative features detailed by the correlation analysis as inputs for a CVD-onset-predictive CNN model, we overcome challenges preventing the prediction of multiple-CVD targets. Our models are experimented on 15s 12-lead ECG dataset recorded using MyoVista's wavECG. Functional outcome data for each patient is recorded and used in the CVD-predictive model. Our best GAN model achieves state-of-the-art accuracy with Frechet Distance (FD) scores of 4.73, 4.89, 5.18, 4.77, 4.71, and 5.55 on the V1-V6 pre-cordial leads respectively and shows strength in preserving the P-Q segments and R-peaks in the generated signals. To the best of our knowledge, ECGNet is the first to predict all of the remaining eleven leads from the input of any single lead. Because of ECGNet's ability to conserve aspects of the PQRST complex, our work is useful for feature extraction that combats the black box nature of the GAN signal generation and ultimately future CVD-predictive models. + Footnote †: These authors contributed equally to this work. ## 1 Introduction With cardiovascular disease (CVD) being the leading cause of death, understanding CVD predictors is vital Roth et al. (2020). Traditionally, cardiologists can use twelve-lead electrocardiography (ECG) signals to predict major CVD events near onset through the diagnosis of risk factors Dawber et al. (1952). These risk factors are determined by observing abnormal electrical signal patterns produced by each lead in the ECG Chiou et al. (2021). However, these patterns tend to become recognizable to cardiologists only near inevitable CVD onset Ebrahimi et al. (2020). Consequently, through the availability of large data sets of patients' ECG signals, there has been a growing interest to use deep learning (DL) algorithms to extract features of ECG signals and use them to predict CVD before irreversible onset Li and Boulanger (2020). Despite the advancement in constructing DL-based CVD-predictive models, clinical implementation is challenging because of the models' inability to classify multiple-disease targets Stracina et al. (2022). CVD manifests into disorders that can be categorized: structural heart disease, functional heart disease, and hemodynamic disorders. Several studies have proposed DL models to overcome the multiple-disease target barrier by predicting CVDs within a single category. 
The convolutional neural network (CNN) rECHOmmend combines the outcomes of multiple structural diseases into a single prediction that outperforms single-disease target models Ulloa-Cerna et al. (2022). While rECHOmmend takes the indicators of multiple different structural CVDs and successfully predicts future onset of generic structural CVD, it fails to classify the onset of a specific structural CVD. Moreover, deep learning models predicting functional heart disease do not significantly outperform classical risk calculators, and those that are on par are single-disease target models--i.e., the models make predictions on just one functional heart disease Zhou et al. (2021). Therefore, clinical implementation is still limited by the inability to specifically classify a disease or take multiple disease features as inputs Stracina et al. (2022). Zhou et al. (2021) argued that deep learning models could add unnecessary complexity and make it hard to investigate crucial feature inputs due to their black box nature. Applying this logic from Zhou et al. (2021), because the true interactions between the input variables of rECHOmmend are obscured by the CNN's black box, the process cannot be extended to solve the functional heart disease models' inabilities to take multiple disease inputs and vice versa. In this paper, we present ECGNet: a Generative Adversarial Network (GAN) model with a bidirectional grid long short-term memory (LSTM) generator and a CNN discriminator trained to be able to reconstruct a complete twelve-lead ECG signal set from an input of any one ECG lead signal. The latent features conserved by the GAN model will be extracted from the reconstructed signals using signal cross-correlation and passed as input to another CNN model to predict the onset of multiple structural, functional, and hemodynamic CVDs. We hypothesize that our method can successfully classify multiple-disease targets because, by first reconstructing ECG signals and identifying the conserved electrical motifs, we can understand the distinguishing ECG characteristics that the model actually uses, and thus the black-box process previously preventing transfer learning can be illuminated. Our approach uses a bidirectional LSTM-CNN GAN because one LSTM dimension can encode the signal's temporal trends while the other can encode the other features Hazra and Byun (2020). Zhu et al. (2019) used a bidirectional LSTM-CNN GAN model to generate entire 12-signal data sets from patient features--a similar outcome to ours but with features instead of signals as inputs. Our approach is evaluated on ECG and functional outcome data from 976 patients with diverse racial, gender, physical, socioeconomic, and medical histories (Section 4). The models operate on the 15-second MyoVista wavECG recordings Sengupta et al. (2018, 2018) and over 2000 functional outcomes--e.g., biometrics and disease diagnosis--collected for each patient. We expect our experiments to show that our GAN architecture can recreate a full set of ECG signals from any single ECG lead input that the CNN discriminator cannot consistently distinguish from the actual signals. The signal cross-correlation analysis is expected to identify electric motifs that are conserved in intra-individual ECG signals. With the predictive model taking the conserved electric motifs as inputs, the onsets of CVDs are expected to be predicted and classified. This work makes three main contributions as follows: 1. 
We train a GAN model to reconstruct the remaining eleven ECG lead signals from an input of any single ECG leads 2. We extract connections between each reconstructed signal's characteristics 3. The extracted features are used as inputs to predict and classify multiple CVDs To the best of our knowledge, this is the first work that reconstructs all twelve ECG signals from the input of any one lead signal. ## 2 Related Work The ECGNet procedure focuses on three aspects: reconstructing electrocardiography (ECG) lead signals, mathematical feature extraction from ECG lead signals, and deep learning methods for predicting cardiovascular disease (CVD). ### Reconstructing ECG Lead Signals The task of reconstructing ECG lead signals is not a new endeavor. Toyoshima et al. (1958) attempted to reconstruct the QRS ECG complexes by summing the ventricular activation or electrical potential changes of the heart in order to diagnose myocardial infarction; however, as demonstrated by Oosterom (2002), the accurate diagnosis of CVD by these methods cannot proceed without prior knowledge of the QRS complex trends corresponding to the CVD and performing the inverse operation on the ECG signal to identify the electrical potential changes of the heart. The mechanism proposed by Toyoshima et al. (1958) is thus rendered obsolete as the QRS trends are what the mechanism attempts to establish yet is also required to make the mechanism accurate. This circular challenge persists throughout modern techniques as well and subsequently is a problem we attempt to circumnavigate by first using the GAN model (3.2) to identify the latent motifs of ECG signal leads (3.3) and then using the motifs as inputs to the predictive model (3.4). Drew et al. (2002) was the first to construct multiple lead signals from the input of other lead signals. Using mathematical interpolation on a reduced lead data set, the four missing precordial lead signals were reconstructed in order to diagnose arrhythmia and myocardial ischemia. The reconstructed lead signals preserved the features of the CVDs. Consequently, if the interpolation is clinically implemented, the eight leads can be placed optimally instead of in their traditional twelve-lead placement. Despite the great work of Drew et al. (2002), the interpolation cannot be generalized to even a particular category of CVDs as the eliminated leads of the analysis are invaluable in the diagnosis of other CVDs; therefore, with the advent of DL, many studies have sought to reconstruct ECG lead signals without losing the features of the eliminated leads. A major breakthrough was made when Sohn et al. (2020) used a LSTM Network to reconstruct a full twelve-lead ECG set from the signal inputs of a portable three-lead ECG hardware. The network outperformed a three-lead input interpolation approach proposed by Hsu and Wu (2014) which already had state-of-the-art results proving the viability of long-term, reduced-lead ECG portable monitoring. This analysis, in addition to a single-electrode device proposed by Lee et al. (2017), validates the feasibility and clinical viability of our full-set derivation from just two leads. Further analyses verify the necessity of DL in the reconstruction of ECG lead signals either by comparing DL performance to the previous interpolation approaches (Grande-Fidalgo et al., 2021; Craven et al., 2017) or by highlighting avenues facilitated only by the DL approach (Lee et al., 2017). 
Although these DL models accomplish their intended goals, they fail to classify multiple-disease targets much like the aforementioned structural CVD predictor rECHOmmend (Ulloa-Cerna et al., 2022) and functional CVD models (Zhou et al., 2021). Our approach aims to overcome the shortcomings of these models by using the conserved signal patterns identified by the GAN model to predict multiple-disease targets. Seo et al. (2022) attempted to reconstruct a full twelve-lead ECG from the slightly more ambitious one-lead input by also using a GAN model; however, their study is limited to reconstruction solely considering the limb lead I as the input and does not aim to classify CVD motifs in the reconstructed signals. Therefore, our work is distinguished because our model will take any single lead's signal to not only reconstruct the remaining signals but to identify highly conserved signal patterns and use the motifs to predict CVD onset: a more informative task. The methods of our approach can be divided into two categories: pre-processing (3.1) and model design (3.2, 3.3, 3.4). There has been a concerted effort to use DL to denoise ECG signals (Mir and Singh, 2021; Singh and Pradhan, 2021) and to classify the various components of the heartbeat (Burguera, 2019; He et al., 2018)--both of which are used as inputs for our GAN model. While these efforts have fruitful results, implementing these models in our pre-processing efforts may introduce training bias (or they may not be substantially cross-validated to be compatible with our dataset) which will be exasperated in the downstream components of our approach; therefore, we adapted a purely mathematical Daubechie wavelet denoising and a non-DL-based heartbeat classification pre-processing procedure validated by Hazra and Byun (2020) and Kachuee et al. (2018) respectively. Zhu et al. (2019) was the first to show that Electrocardiogram (ECG) signals can be generated using a generative adversarial network (GAN) model where a bidirectional long short-term memory (LSTM) framework is used as a generator while a convolutional neural network (CNN) is used as the discriminator. The model was successfully used to synthesize complete 12-signal data sets from patient features. Given the great success of this framework, we believe that adapting it to take signal inputs instead of patient features can improve the quality of the signal generated allowing for effective classification. Hazra and Byun (2020) used the same model architecture as Zhu et al. (2019) and significantly improved the resolution of the signals generated by tuning the pre-processing and updating both the convolutional and pooling layers of the CNN discriminator. The GAN model's resulting signals will subsequently be classified (3.3)--aided by this improved reconstructed signal resolution--providing further support for the architecture. The 15s twelve-lead ECG data used by our analysis is also the input format used in both the adapted pre-processing methods (Hazra and Byun, 2020; Kachuee et al., 2018) and the various architectures (Zhu et al., 2019; Hazra and Byun, 2020). ### Mathematical Feature Extraction Ramli and Ahmad (2003) proved that by taking the cross-correlation between any two signals and by comparing the auto-correlation function of the signals to the original signal, the dependent features can be extracted. 
Attempts to verify this process have produced successful models identifying specific arrhythmia-indicating signal features (Chiu et al., 2005) and even drowsiness predictors (Lee et al., 2017). The analysis of Ramli and Ahmad (2003) was performed between leads from the same person; therefore, the novelty of our approach does not derive from the use of this technique, but from the fact that it is used to unveil the properties of the lead signals that the GAN model preserves. By piercing the black box this way, transfer learning can occur--i.e., applying the learning of ECG signal synthesis to the prediction of CVD. ### Deep Learning Methods for Predicting Cardiovascular Disease CNN models have been remarkably successful using annotated ECG signal inputs to predict CVD of a single type (Wu et al., 2021; Shankar et al., 2020; Dutta et al., 2020). Sajja and Kalluri (2020) validated the superiority of CNN models in this regard by comparing their performance to that of various other machine-learning algorithms. Furthermore, attempts to involve CNN models in CVD-predictive transfer learning have had preliminary successes and thus they can be the vehicle of multiple-disease prediction (Weimann and Conrad, 2021). Considering this, a predictive CNN model taking the conserved signal motifs identified in the signal cross-correlation feature extraction was implemented (3.4). The patient functional outcome data used in our analysis compares favorably in terms of its comprehensiveness to the data used in the aforementioned analyses (Wu et al., 2021; Shankar et al., 2020; Dutta et al., 2020). While all three main components of our approach have been explored independently, the use of each in succession to solve the next task's flaws produces our study's novelty. Herein, we focus on predicting multiple-CVD onset by presenting a bidirectional grid LSTM-CNN GAN model to reconstruct ECG signals (3.2), identifying the features the GAN model conserves (3.3), and using the features as inputs for a predictive model (3.4). ## 3 Approach Here, the pre-processing procedure (Figure 1) that the individual 15s ECG lead signals are subjected to is outlined. The procedure (3.1) outputs the denoised signal. The architecture of the Generative Adversarial Network (GAN) model (3.2) that inputs the pre-processed signal is subsequently detailed along with the signal-correlation analysis (3.3) performed on the model's outputs. Then, the cardiovascular disease (CVD)-predictive model (3.4) that inputs the features identified by the signal-correlation analysis is described. Overall, the approach facilitates the evolution of an input of any 15s lead signal to the output of a CVD prediction. The approach is executed in Python 3.10 (Van Rossum and Drake Jr, 1995). Figure 1: Progression of pre-processing \(P:S_{\text{unfiltered}}\mapsto S_{\text{filtered}}\). ### Pre-processing Pre-processing takes each 15s electrocardiogram (ECG) lead signal as input \(\text{S}_{\text{unfiltered}}\) and denoises it (Algorithm 1). There are nfilters = 4 rounds of high- and low-pass Daubechie discrete wavelet transforms (_dwtHP_ and _dwtLP_, respectively) from the pywt 1.4.1 package (Lee et al., 2022) performed. Upon completion of each _dwt_ stage, the low pass signal _cA\({}_{i}\)_ is subsampled (_resample_) by two and passed to the next round of filters while the high pass signal _cD\({}_{i}\)_ is subsampled by two (_resample_) and extracted to become a wavelet (_w\({}_{i}\)_). 
After all filters are applied, adaptive thresholding is performed on each of the \(i\) wavelets at each signal time point \(j\) with moving average window of length \(r\) = 32 (Hazra and Byun, 2020): \[adaptiveThreshold(w_{ij})=\frac{1}{r}\sum_{n=r(j-1)}^{rj-1}\mid w_{in}\mid\cdot 2^{i} \tag{1}\] To construct the denoised signal, the inverse Daubechie DWT (_idwt_) function from the pywt 1.4.1 package (Lee et al., 2022) is performed on the thresholded wavelets (_w\({}_{i}\)_). These denoised signals (_S_) at each time point \(j\) are subsequently normalized to be between zero and one: \[normalized(S_{j})=\frac{S_{j}-min(S)}{max(S)-min(S)} \tag{2}\] Examples of the pre-processing output are provided in (4.2). ### GAN Model Our GAN model consists of a bidirectional long short-term memory (LSTM) generator that aims to recreate ECG lead signals--from a two-signal input--that the discriminator cannot distinguish from the patient's actual signal (Figure 2). The discriminator of the GAN model is a convolutional neural network (CNN) that attempts to differentiate the artificial signals from the patients' actual lead signals. Mathematically, the baseline GAN model can be modeled as a min-max game represented by the generator loss function--i.e., the penalties paid by either the generator or discriminator for failing to perform their respective tasks. The generator aims to minimize this loss while the discriminator attempts to maximize the loss for the generator. To mathematically represent the situation, the domain of all \(N\) preprocessed signals can be defined as \(s_{i}\in S_{\text{filtered}}\) while the \(M\) signals of the generator output, or in other words, the discriminator domain, are defined as \(q_{i}\in Q\). The generator is thus a mapping defined as \(G:S_{\text{filtered}}\mapsto Q\). The loss function can then be defined as \[\mathcal{L}(G,D,S_{\text{filtered}},Q)=\min_{G}\max_{D}\operatorname{\mathbb{E}}_{q\sim p(q)}[\log(D(q))]+\operatorname{\mathbb{E}}_{s\sim p(s)}[\log(1-D(G(s)))] \tag{3}\] where \(D\) is the discriminator, \(\operatorname{\mathbb{E}}_{q\sim p(q)}\) is the expectation of the probability given by the discriminator of a real signal being real on random input \(p(q)\) over the distribution of \(q\), and \(\operatorname{\mathbb{E}}_{s\sim p(s)}\) is the expectation of the probability given by the discriminator of a generated signal being real on random input \(p(s)\) over the distribution of \(s\) (Zhu et al., 2017). Figure 2: Overview of the GAN model. The generator is a bidirectional LSTM that takes a single lead input and the discriminator is a CNN that attempts to distinguish the real signal from the generated signal. #### 3.2.1 Bidirectional LSTM Generator The baseline LSTM structure allows for long-term dependencies to be considered in the model architecture. An LSTM is generally defined in terms of cell states which control what information is considered by the model through a series of gates. The sigmoid forget layer removes information from consideration, the sigmoidal input gate selects values of the cell state to be updated, the input hyperbolic tangent gate inputs new candidate values to the cell state, and finally sigmoidal and hyperbolic tangent output layers select the output information (Hochreiter and Schmidhuber, 1997). 
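As a concrete illustration of the Section 3.1 procedure before the grid LSTM discussion continues, the following is a simplified Python sketch. It uses pywt's multi-level `wavedec`/`waverec` in place of the manual high/low-pass cascade of Algorithm 1, and the window-averaged threshold and \(2^{i}\) scaling follow Equation 1 only loosely, so the parameter choices here are assumptions.

```python
import numpy as np
import pywt

def denoise_lead(signal, wavelet="db4", n_levels=4, r=32):
    # Decompose with a Daubechies DWT: coeffs = [cA_n, cD_n, ..., cD_1].
    coeffs = pywt.wavedec(signal, wavelet, level=n_levels)
    approx, details = coeffs[0], coeffs[1:]
    kept = [approx]
    for i, cD in enumerate(details, start=1):
        # Window-averaged magnitude threshold over windows of length r,
        # scaled by 2**i (loosely following Equation 1).
        pad = (-len(cD)) % r
        windows = np.abs(np.pad(cD, (0, pad))).reshape(-1, r)
        thr = np.repeat(windows.mean(axis=1), r)[:len(cD)] * 2.0 ** i
        kept.append(np.sign(cD) * np.maximum(np.abs(cD) - thr, 0.0))  # soft threshold
    denoised = pywt.waverec(kept, wavelet)[:len(signal)]
    # Min-max normalization to [0, 1] (Equation 2).
    return (denoised - denoised.min()) / (denoised.max() - denoised.min())
```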
The cell state output is subsequently passed to the corresponding layer of the underlying recurrent neural network (RNN)--a type of neural network where the hidden layer outputs can be used to update prior layers--preserving the long term dependencies (Sherstinsky, 2020). This process is susceptible to the vanishing gradient problem: the gradient of the backpropagation updating the sigmoidal layers approaches zero due to the small derivatives at the sigmoid function extremes. The vanishing gradient problem prevents further model training. To prevent the gradient from vanishing, the baseline LSTM structure can be extended into a bidirectional grid. Instead of the information being passed from the hidden layers only temporally, the layers and corresponding cell states are stacked on top of each other to add a depth layer that decreases the probability that the backpropagation produces a near-zero gradient (Schuster and Paliwal, 1997). This more advanced, bidirectional grid approach is implemented as the generator \(G:S_{\text{filtered}}\mapsto Q\) according to the LSTM state variables, time block, and depth block outlined by Hazra and Byun (2020). The approach of Hazra and Byun (2020) is favored to the bidirectional LSTM-based ECG signal generator presented by Zhu et al. (2019) because of the explicit consideration of the vanishing gradient by Hazra and Byun (2020). #### 3.2.2 CNN Discriminator The time-series nature of the ECG signal confers the use of one-dimensional convolutions in the CNN discriminator as opposed to the more traditional two-dimensional convolution used in computer vision (Zhu et al., 2019). A one-dimensional CNN is modeled in terms the number and size (input and output) of its convolution and pooling layers, number of kernels per layer, and final activation function (Kiranyaz et al., 2021) as defined for the discriminator here (Table 3). The kernels in each convolutional layer serve as feature filters that parse through both the generated and real ECG signals according to the defined stride length--computing the dot product between the kernel and signal while reducing the dimensions of the data (Kiranyaz et al., 2019). Each kernel produces a feature map that represent certain characteristics of the ECG signal. To prevent over-fitting, the pooling layers sub-sample the feature maps. The final convolution produces a fully connected hidden layer whose outputs are subjected to the softmax activation function which gives the probability that the signal is generated or real (Iwana et al., 2019). The probabilities are passed to the aforementioned loss function (Equation 3). ### Cross-correlation analysis The baseline signal correlation test is the cross-correlation function computed as \[r(q,s)=\sum_{j=0}^{T}q_{j}s_{j} \tag{4}\] where \(q\in Q\) and \(s\in S_{filtered}\) for all time points \(T\) in the ECG signals (Podobnik and Stanley, 2008). For each of the 12 ECG leads, the cross-correlation function between every synthesized lead signal and the original patient signal is calculated and averaged across the entire data set. The function is subsequently charted and analyzed by cardiologists at the Robert Wood Johnson University Hospital for feature extraction. The efficiency of the feature extraction is compared to the studies by Ramli and Ahmad (2003), (Chiu et al., 2005), and (Lee et al., 2017). 
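A minimal TensorFlow/Keras sketch of the two networks described in Sections 3.2.1 and 3.2.2 follows; a plain stacked bidirectional LSTM stands in for the bidirectional grid LSTM, and the layer sizes and sequence length are illustrative assumptions rather than the configuration in Table 3.

```python
from tensorflow.keras import layers, models

SEQ_LEN = 256  # illustrative padded heartbeat length, not the paper's exact value

def build_generator():
    # Bidirectional LSTM generator (Section 3.2.1): maps one lead's heartbeat
    # to a synthetic heartbeat for a target lead, sample by sample.
    return models.Sequential([
        layers.Input(shape=(SEQ_LEN, 1)),
        layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
        layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
        # Time-distributed 1-unit dense layer with a sigmoid output in [0, 1],
        # matching the normalized signals of Equation 2.
        layers.TimeDistributed(layers.Dense(1, activation="sigmoid")),
    ])

def build_discriminator():
    # 1-D CNN discriminator (Section 3.2.2): convolution and pooling layers
    # followed by a fully connected softmax over {real, generated}.
    return models.Sequential([
        layers.Input(shape=(SEQ_LEN, 1)),
        layers.Conv1D(32, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Flatten(),
        layers.Dense(2, activation="softmax"),
    ])
```

The two networks can then be trained adversarially, with the discriminator's output probabilities plugged into the cross-entropy form of the objective in Equation 3.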
While the cross-correlation function acts as a baseline mechanism for extracting the latent relationships learned by the GAN model, the auto-correlation function defined \[r(q,i)=\sum_{j=i}^{T}q_{j}q_{j-i} \tag{5}\] where \(i\) is a time-translation introduced to the generated signal, may provide further, more definitive features (Ramli and Ahmad, 2003). This is because repeated, important patterns from each heartbeat should be perpetuated throughout the entirety of the signal which would only be captured by the auto-correlation function. The auto-correlation function is once again charted and analyzed by cardiologists at the Robert Wood Johnson University Hospital for feature extraction. The analysis done by Ramli and Ahmad (2003) serves as the baseline comparison. ### CVD-Predictive CNN Model We cannot write the approach for this section yet because it depends on how successful the aforementioned cross-correlation analysis is. Until we know what features are successfully identified, we cannot design a model structure; however, we include it here because the novelty of our research is not apparent without this model. We will implement it in the future. We anticipate using a CNN-predictive model as the models that have come closest to achieving multiple-CVD onset predictions are CNN-based models. That being said, the novelty of the analysis derives from utilizing all three components: using the GAN model to understand the latent relations of ECG signals, the cross-correlation analysis to identify the latent connections the GAN model is making, and the CVD-predictive model to predict multiple-CVD onsets which has never been done before successfully. The approach is generalizable as it can take any ECG input to train the GAN model, update the cross-correlation analysis, and fine-tune the CVD-predictive model. Furthermore, the ultimate aim of the approach is so that the resulting CVD-predictive model is generalizable for any ECG signal input and most CVD predictions. ## 4 Experiments We implement our models in TensorFlow and experiment with three different generative models: our baseline GAN model, the one-lead predictive model by Seo et al. (2022), and our advanced bidirectional LSTM-1d CNN GAN model. All our models are evaluated on our dataset (4.1). ### MyoVista Dataset Our MyoVista dataset consists of 15s 12-lead ECG data measured using MyoVista's wavECG patient data from three different hospitals: West Virginia University, Mount Sanai, and Windsor. Functional outcome data for each patient in the 1000-patient dataset is recorded. Patient ages ranged from 18-96 years old with a mean age of \(56.8\pm 15\) and patient gender is 46.6% and 53.4% female and male respectively. Each race is represented in the data set, but Caucasians are overrepresented (Table 1). Weight measurements (mean \(86.8\pm 23\)kg) indicate a well-distributed sample with SD accounting for more than \(25\%\) of the mean. The distribution of height (mean \(169.5\pm 10\)cm) is more conservative but indicative of a distributed sample. Over 2000 functional outcomes are recorded--e.g., ejection fraction, Rsign, and blood pressure--for each patient. This dataset was collected because with a large array of functional outcomes, we expect to be able to extract features from the generated signals that can then be used to predict at least some of the functional outcomes. Furthermore, with MyoVista ECG signal data having been used to successfully predict CVD-onset in the past Sengupta et al. 
(2018), signal data from the wavECG is used. Due to HIPAA regulations, other datasets cannot be accessed and therefore, the models are only evaluated on our dataset. ### Pre-processing Validation For each signal recorded by the wavECG, the pre-processing procedure outlined in Section 3.1 is performed. The denoised signals clearly display the correct number of heartbeats and fundamental components of a heartbeat: the P-Q segment, the QRS complex, and the S-T segment (Figure 2(b)). Furthermore, for each R-peak in the original signal (Figure 2(a)), there is an R-peak in the denoised signal (Figure 2(b)), further validating the correctness of the pre-processing procedure. Upon completion of the pre-processing procedure, the R-peak time intervals are calculated and each heartbeat is segmented. Padding is added to the signal according to the maximum R-peak time interval across all patients. For the GAN model (Section 4.3), the sequential time-series representation of each heartbeat constitutes the input. For each of the 1000 patients, there are 15 heartbeats. For each of the \begin{table} \begin{tabular}{c|c c c c c} \hline \hline & **Caucasian** & **African-American** & **Hispanic** & **Asian** & **Mean** \\ \hline \(\%\) & **80.5** & 5.7 & 3.7 & 6.3 & NA \\ \(\%\) Male & 49.7 & 37.1 & 29.3 & **88.5** & 53.4 \\ Age (years) & \(\mathbf{10.67}\pm 17\) & \(\mathbf{53.11}\pm 13\) & \(\mathbf{53.44}\pm 14\) & \(\mathbf{50.68}\pm 15\) & \(\mathbf{56.8}\pm 15\) \\ Weight (kg) & \(\mathbf{59.2}\pm 25\) & \(\mathbf{93.2}\pm 26\) & \(\mathbf{79.8}\pm 17\) & \(\mathbf{61.1}\pm 14\) & \(\mathbf{56.8}\pm 23\) \\ Height (cm) & \(\mathbf{109.6}\pm 10\) & \(\mathbf{100.3}\pm 10\) & \(\mathbf{164.8}\pm 8.7\) & \(\mathbf{164.8}\pm 8.4\) & \(\mathbf{100.3}\pm 10\) \\ \hline \hline \end{tabular} \end{table} Table 1: Percentage of MyoVista dataset represented by each race along with the racial breakdown of the generic patient data. The largest values for each category are bolded. patient's lead's heartbeats, we are trying to predict the other 11 heartbeats; therefore, there are \(1000*15*12*11=1,980,000\) model inputs. Because of the size of the pre-processed dataset, we randomly shuffled the heartbeats and allocated 80% for training, 10% for validation, and 10% for testing (Table 2). ### Generation of ECG signals To reiterate, we train a basic LSTM generator and one-layer 1d-CNN discriminator for our baseline GAN model. We then compare the results to our advanced GAN model with the bidirectional LSTM generator and multi-layer 1d-CNN discriminator optimized for each lead. Both of these results are subsequently compared to the Seo et al. (2022) GAN model that was specifically designed to synthesize the signals from just one lead. Note we did not have the requisite time to reconstruct the model from the Seo et al. (2022) study, but instead only present it here as a data point. In the future, we will reconstruct the model and test it on our dataset and compare the results to our model. To evaluate how similar the generated signal is to the input signal, we use Frechet Distance (FD) defined as: \[FD(s\in S_{filtered},q\in Q)=\min\left(\max_{i=1,\dots,n}\left(||s_{i},q_{i}||\right)\right) \tag{6}\] where \(||s_{i},q_{i}||\) is the Euclidean distance between points \(s_{i}\) and \(q_{i}\). In other words, FD computes the maximum distance of \(s_{i}\) and \(q_{i}\) in a given alignment for all alignments of \(n\) points from \(s\) and from \(q\) and then finds the minimum of these maximum distances.
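A small sketch of the discrete Fréchet distance of Equation 6 in its standard dynamic-programming form is given below; samples are treated as scalars here, so the Euclidean distance between points reduces to an absolute difference, which is an assumption about how the signals are represented.

```python
import numpy as np

def frechet_distance(s, q):
    # Discrete Frechet distance (Equation 6): the smallest, over all monotone
    # alignments of the two point sequences, of the largest pointwise distance.
    s, q = np.asarray(s, dtype=float), np.asarray(q, dtype=float)
    n, m = len(s), len(q)
    ca = np.empty((n, m))
    ca[0, 0] = abs(s[0] - q[0])
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], abs(s[i] - q[0]))
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], abs(s[0] - q[j]))
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           abs(s[i] - q[j]))
    return ca[n - 1, m - 1]
```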
Therefore, FD evaluates similarity based on both the ordering and location of the signals' points. Both factors are considered in the cross-correlation analysis (3.3), thus FD is the chosen similarity metric. Because the model proposed by Seo et al. (2022) is specifically designed for the limb lead I as the sole input, the individual FD scores for this model will likely not be as low, but comparable--i.e., within one standard deviation. Our baseline model is trained for nine epochs using the learning rate of \(1\mathrm{e}{-4}\). Note that because we did not have enough time to optimize hyperparameters, we did not optimize the learning rate but we did optimize the number of epochs. We saved the model after each epoch so that the epoch that produced the lowest FD during validation could be selected for the generation. Epoch 1 was selected because it produced the lowest FD score in the validation (Figure 4). The input batch size was 256. Cross-entropy was used as the loss function and the Adam algorithm was used as our optimizer. For the bidirectional LSTM generator, a time-distributed 1-unit dense layer with a sigmoidal activation was applied. The details of the CNN discriminator are \begin{table} \begin{tabular}{c|c c} \hline \hline **Set** & **Total Number of Heartbeats** & **Hearbeats per Lead** \\ \hline Training & 1,584,000 & 132,000 \\ Validation & 198,000 & 16,500 \\ Test & 198,000 & 16,500 \\ \hline \hline \end{tabular} \end{table} Table 2: Data split for the GAN model inputs (Section 4.3) Figure 4: The model’s mean FD score on the validation score after each epoch. Figure 3: Comparison of the original signal recorded by wavECG and the denoised signal after the procedure outlined in Section 3.1 is performed. presented in Table 3. The Relu activation function is used for the convolution and final convolution (FC) layers. The mean FD scores comparing our baseline model and the one-lead model created by Seo et al. (2022) are presented in Table 4. Our baseline model outperforms the model by Seo et al. (2022) in two of the three experiments. Further inspection of the experiments by Seo et al. (2022) indicates FD scores for the limb leads that are far lower than ours but far higher FD scores for the pre-cordial leads. Biologically, this can be explained by visualizing our prediction (Figure 5). Our generated signal clearly preserves all of the fundamental components of a heartbeat: the P-Q segment, the QRS complex, and the S-T segment. These fundamental components of the heartbeat are better preserved in the pre-cordial leads as they are closer to the heart; therefore, it appears that our model is trained better to generate the pre-cordial signals than it is to the limb leads. This could be explained by the noisiness of the limb leads prior to pre-processing. The limb leads were noticeably more noisy before the pre-processing and although the signals were processed successfully, the QPRST components were less distinct for the limb leads than the pre-cordial leads. ### Features of ECG Signals that Predict CVD To determine baseline relationships between the more informative precordial leads--i.e., the leads positioned next to the heart--and the other leads, the average correlation coefficient between each \begin{table} \begin{tabular}{c|c c c} \hline \hline **Metric** & **Baseline** & **Seo et al. (2022) E1** & **Seo et al. (2022) E2** & **Seo et al. 
(2022) E3** \\ \hline FD & 7.77 & 9.062 & 8.124 & 6.071 \\ \hline \hline \end{tabular} \end{table} Table 4: Mean FD scores from our baseline model, our advanced model, and the three experiments that Seo et al. (2022) used. Figure 5: Representative generated signal by the baseline model. Each heartbeat of the signal clearly contains the P-Q segment, the QRS complex, and the S-T segment. \begin{table} \begin{tabular}{c|c c c} \hline \hline & **1** & **2** & **3** \\ \hline 1 & N/A & 0.53 & -0.33 \\ 2 & 0.53 & N/A & 0.48 \\ 3 & -0.33 & 0.48 & N/A \\ 4 & -0.83 & -0.88 & -0.11 \\ 5 & 0.80 & 0.05 & -0.77 \\ 6 & 0.14 & 0.86 & 0.82 \\ 7 & -0.38 & -0.35 & -0.01 \\ 8 & 0.02 & -0.08 & -0.15 \\ 9 & 0.30 & 0.29 & 0.02 \\ 10 & 0.45 & 0.46 & 0.06 \\ 11 & 0.55 & 0.52 & 0.01 \\ 12 & 0.57 & 0.53 & 0.01 \\ \hline \hline \end{tabular} \end{table} Table 3: The definition of each CNN layer of the GAN discriminator \begin{table} \begin{tabular}{c|c c c} \hline \hline & **4** & **5** & **6** \\ \hline 1 & -0.83 & 0.80 & 0.14 \\ 2 & -0.88 & 0.05 & 0.86 \\ 3 & -0.11 & -0.77 & 0.82 \\ 4 & N/A & -0.42 & -0.56 \\ 5 & -0.42 & N/A & -0.35 \\ 6 & -0.56 & -0.35 & N/A \\ 7 & 0.43 & -0.18 & -0.19 \\ 8 & 0.06 & 0.15 & -0.10 \\ 9 & -0.30 & 0.22 & 0.22 \\ 10 & -0.49 & 0.28 & 0.33 \\ 11 & -0.58 & 0.35 & 0.33 \\ 12 & -0.60 & 0.37 & 0.34 \\ \hline \hline \end{tabular} \end{table} Table 5: The average correlation coefficient between each of the first three precordial leads signals (Table 4(a)) and the other eleven leads. The average correlation coefficient between each of the other precordial leads signals of IV, V, and VI (Table 4(b)) and the other eleven leads. All patient data were included in the calculations. precordial lead and the other remaining eleven leads is calculated with consideration of all patients in the study prior to pre-processing (Table 5). The correlation coefficients taken here initially will eventually then be compared to the correlation coefficient results of the generated GAN model signals. Note that because this was a part of Max's individual project and we barely had enough time to finish the group experiment during this time (we are using our group member's GPU and we could not get time to perform the analysis) we could not repeat the analysis for the generated signals. This section is included once again to address the novelty of the project and will be completed for the final publication. ### Validation of Multiple-CVD Predictions This section is included once again to address the novelty of the project and will be completed for the final publication. ## 5 Analysis In order to understand the biological underpinnings of the model's performance and to assess the model's performance in predicting the signals of specific leads, we split the validation and testing sets in terms of the predicted lead to produce the advanced model. In other words, we train 12 generators (and consequently 12 discriminators) which are each specialized in generating a particular lead. For each predicted lead, the epoch that performed best on the development set corresponding to that lead is used on the test set for that lead (Figure 6). The batch size for the advanced model is 128. ### Performance Analysis In Table 6, the mean FD score of each lead's best model's performance on the testing set is compared to the best mean FD score of Seo et al. (2022). 
While our model's overall mean FD score of \(6.38\pm 0.1\) outperforms all three of the Seo et al. (2022) models' mean FD scores, further inspection of the specific leads in which our model performs better yields a distinct trend. For each of the limb leads (Table 6a), Seo et al. (2022) outperforms our model. Specifically, the signals generated for aVR and aVL by their model are much more similar to the patient's actual signal than those generated by our model; however, when examining the pre-cordial leads (Table 6b), our model consistently outperforms for every lead. Figure 6: The model's mean FD score on the validation set for the generated signal of each lead after each of the ten epochs. The mean FD scores for six limb leads (red) are on the left and the six pre-cordial leads (blue) are on the right. The best epoch for each lead is subsequently chosen (Table 6). When comparing within our model, only one of the two limb leads (AVR and AVF) betters the worst-performing pre-cordial lead (V6), and neither outperforms any of the other pre-cordial leads. This differentiation in model performance between the limb and pre-cordial leads can perhaps be explained by three factors: biological, mechanical, and computational. From a biological perspective, limb lead position is less defined amongst physicians (Tung, 2021), and thus, since the MyoVista dataset (4.1) was collected from multiple hospitals across multiple states, the positioning preferences between physicians could interfere with the model's ability to pinpoint predictive heuristics. Furthermore, the wavECG used to record the patient signals for the MyoVista dataset was designed to be able to identify left ventricular diastolic dysfunction or LVDD (Sengupta et al., 2018). LVDD is characterized by p-wave dispersion which is more identifiable through abnormalities in the pre-cordial leads (Taha et al., 2016). Consequently, the wavECG hardware may have been implicitly designed to be more sensitive to the pre-cordial leads and thus the model may be able to identify more well-defined features in those leads as opposed to the limb leads. Computationally, following the application of the pre-processing procedure (3.1), the denoised samples of our pre-cordial leads contain better-defined PQRST wave characteristics along with less noise overall when compared to the limb leads (Figure 7). In particular, the P-wave is far more defined in each heartbeat of the pre-cordial lead (Figure 6(b)) which also provides further credence to the aforementioned argument that the wavECG is more sensitive to the pre-cordial leads. ### Error Analysis To understand the trends the model identifies in each lead and the inability of the model to reproduce all characteristics of the PQRST complex, we performed a detailed manual error analysis of predicted examples from the lead best-predicted by the model (V5), the lead worst-predicted by the model (AVL), and a representative lead (V1). The lead for which the model performs best, V5, simply captures the downslope of the R peak and no other feature (Figure 8). This points to flaws in the FD score as the only metric to validate and test the model. Comparatively, the worst-predicted lead by the model, AVL, captures more features, notably a P-wave; however, it seems to predict multiple heartbeats instead of the intended single heartbeat (Figure 9). In this context, the depth dimension of the bidirectional LSTM generator (3.2) appears to fail in differentiating between the heights of the P-wave peak, R-peak, and T-wave peak.
The V1 lead, on the other hand, captures multiple features while representing only one heartbeat, despite having a middling FD score (Figure 10). The generated signal contains not only the down-slope of the previous R peak but also a clear P-wave whose peak is dampened compared to the predicted heartbeat's R peak; however, it still does not capture the T-wave. This can be attributed to the extensive zero-value padding that is sometimes appended to the end of each of the denoised signals (3.1) in order to make all the heartbeats the same length for model input. Since the T-wave peaks' magnitudes are generally dampened in comparison to the P-wave peaks and R-peak (Costa et al., 2021), the model could mischaracterize the T-wave as signal noise, and thus the generator perhaps never learned how to generate the T-wave. This could also point to flaws in our denoising procedure. Note that I could not find a way to provide distributions of the categorized errors in a figure (see below) because the automatic identification of the components of the PQRST complexes themselves would require the training of an entirely new deep learning model; however, for publication, I will try to annotate as many of the figures as possible. \begin{table} \end{table} Table 6: The epoch producing the best FD score of each lead that is subsequently used in the prediction of that lead's signal is presented. The mean FDs from the model's prediction of each lead are then calculated and compared to the mean FD from Seo et al. (2022) for each lead. Lead I is omitted from Seo et al. (2022) because their model is designed to take lead I as input. The better mean FD of the two models is bolded. Figure 7: Comparison of a representative denoised limb lead signal and a representative pre-cordial lead signal. Figure 8: A representative sample of a generated V5-lead signal. The V5 had the best mean predicted FD score during testing but fails to capture any PQRST complex features. Figure 9: A representative sample of a generated AVL-lead signal. The AVL-lead had the worst mean predicted FD score during testing and incorrectly generates heartbeats that have P-wave and T-wave peak magnitudes comparable to the R-peak. Figure 10: A representative sample of a generated V1-lead signal. Out of all the generated signals, the V1 correctly preserves the most PQRST signals, most notably a dampened P-wave and a distinct R-peak. ### Future Directions Because of the inability of the model to consistently and correctly characterize the necessary components of the PQRST complex for every lead, the model can be improved in future work in order to eventually identify conserved features that can be used as an input to a CVD-predictive model. This can be done by correcting the identified limitations mentioned in Section 5.2. Foremost, the model is limited because we did not optimize the pre-processing algorithm--most notably the denoising algorithm--and thus a more advanced pre-processing algorithm could perhaps denoise the signal better such that the T-wave is not lost in signal generation. Although many previous generative and predictive models that take ECG signal inputs employ Daubechies discrete wavelet transforms (DWT) (Hazra and Byun, 2020; Singh and Pradhan, 2021), other models incorporate other methods. For example, Zhu et al. (2019) employs a variational autoencoder to correctly capture the distribution of data before performing a DWT-like procedure.
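For reference, a minimal sketch of the kind of Daubechies DWT denoising referred to above, written with PyWavelets; the wavelet choice (`db4`), the decomposition level, and the universal soft-threshold rule are standard defaults assumed for illustration, not the exact procedure used in our pre-processing.

```python
import numpy as np
import pywt

def dwt_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold Daubechies DWT denoising of a 1-D ECG signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise level estimated from the finest detail coefficients (MAD estimator).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    # Universal (Donoho-Johnstone) threshold.
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]
```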
Perhaps by training, validating, testing, and comparing multiple models that each take signals with a different preprocessing procedure performed, an optimal procedure can be identified and implemented so that more PQRST complex features are not misinterpreted as noise during model training. Furthermore, heartbeats with R-peak differentials that can be classified as statistical outliers can be eliminated in order to prevent extensive zero-padding to the segmented ECG signals which could also aid with T-wave preservation. This would also remove the need for the resampling we perform to reduce the input shape of the input layers of the bidirectional LSTM. By allocating time to tuning the hyperparameters of the bidirectional LSTM, perhaps the depth layers can effectively generate P-wave and T-wave peaks with the correct magnitudes. Finally, our analysis also hints at the flaws in Frechet Distance as the evaluation metric for ECG signal generation and we will attempt to construct a novel evaluation metric. The hallmark of an accurately generated signal is the preservation of the PQRST complex and perhaps a metric that also measures the generated heartbeat's similarity to a standardized PQRST complex will prove successful. ## 6 Conclusion We present ECGNet that ultimately improves the accuracy of CVD-onset predictions by first improving the generation of a complete 12-lead ECG set from a single ECG lead signal input. Our generative adversarial network (GAN) model shows state-of-the-art accuracy in the generation of the pre-cordial lead signals--preserving distinct elements of the heartbeat's PQRST complex electrical signals. By applying cross-correlation analysis on ECGNet's GAN model's generated signals, latent features essential to the generation of accurate signals, and thus fundamental to the classification of CVD-onset, are identified. Note that the cross-correlation analysis and predictive CNN model were a part of Max's personal project and could not be completed during the semester's time frame; however, we chose to write about it hypothetically in the conclusion because it will be in the final publication and is essential in justifying the need for the project. Because of the identification of these hidden features, ECGNet allows cardiology to overcome the previous black box barriers that prevented the prediction of multiple-CVD targets. All our resources besides the dataset (due to HIPAA protections), but including models and source codes, are available through our open-source project at [https://github.com/maxbagga/ScarletEagle1](https://github.com/maxbagga/ScarletEagle1). Despite a decent performance, the GAN model still underperforms other state-of-the-art models in the generation of limb lead signals. We plan to tackle this challenge by collecting a second data set to train the limb lead generation with a different, non-MyoVista waveECG device. This will supplement the strengths of the waveECG in producing clearer pre-cordial lead signals while compensating for its tendencies to produce more noisy limb lead signals. Then, we will optimize preprocessing methods to give better limb lead inputs by experimenting with different denoising techniques and employing variational autoencoders to first capture the distribution of the data. 
Finally, we will aim to develop a new evaluation metric that accounts for preservation of the PQRST complex when assessing the accuracy of signal generation, since the current metric employed--Frechet Distance (FD)--only evaluates the location and ordering of signal points. Consequently, the leads whose generated signals preserve the most PQRST complex elements often receive only middling FD scores, a shortcoming that a novel metric would rectify. ## Acknowledgements We would like to thank Jinho Choi from Emory University, Naveena Yanamala from Carnegie Mellon University, and Partho Sengupta from Rutgers University for their help with the project.
2309.06990
Reheating constraints on modified quadratic chaotic inflation
The Reheating era of inflationary universe can be parameterized by various parameters like reheating temperature \(T_{\text{re}}\), reheating duration \(N_{\text{re}}\) and average equation of state parameter \(\overline{\omega }_{\text{re}}\), which can be constrained by observationally feasible values of scalar power spectral amplitude \(A_{\text{s}}\) and spectral index \(n_{\text{s}}\). In this work, by considering the quadratic chaotic inflationary potential with logarithmic-correction in mass, we examine the reheating era in order to place some limits on model's parameter space. By investigating the reheating epoch using Planck 2018+BK18+BAO data, we show that even a small correction can make the quadratic chaotic model consistent with latest cosmological observations. We also find that the study of reheating era helps to put much tighter constraints on model and effectively improves accuracy of model.
Sudhava Yadav, Rajesh Goswami, K. K Venkataratnam, Urjit A. Yajnik
2023-09-13T14:35:23Z
http://arxiv.org/abs/2309.06990v2
# Reheating constraints on modified quadratic chaotic inflation ###### Abstract The Reheating era of inflationary Universe can be parameterized by various parameters like reheating temperature \(T_{\rm re}\), reheating duration \(N_{\rm re}\) and average equation of state parameter \(\overline{\omega}_{\rm re}\), which can be constrained by observationally feasible values of scalar power spectral amplitude \(A_{\rm s}\) and spectral index \(n_{\rm s}\). In this work, by considering the quadratic chaotic inflationary potential with logarithmic-correction in mass, we examine the reheating era in order to place some limits on model's parameter space. By investigating the reheating epoch using Planck's 2018 data, we show that even a small correction can make the quadratic chaotic model consistent with latest cosmological observations. We also find that the study of reheating era helps to put much tighter constraints on model and effectively improves accuracy of model. ## 1 Introduction The inflationary paradigm [1, 2, 3, 4, 5] is an exciting and influential epoch of the cosmological universe. It has come up as an aid to resolve a range of well-known cosmological problems like flatness, horizon and monopole problems of famous cosmological big bang theory. The semi-classical theory of inflation generates seeds for Cosmic Microwave Background anisotropy and Large Scale Structures in the late universe [6, 7, 8]. Inflation predicts adiabatic, gaussian and almost scale invariant density fluctuations, which are validated by CMB observations like Cosmic Background Explorer (COBE) [9], Wilkinson Microwave Anisotropy Probe (WMAP) [10, 11] and Planck space probe [12, 13, 14, 15, 16, 17]. In the realm of inflationary cosmology, a typical scenario involves the presence of a scalar field, which is referred to as the inflaton (\(\phi\)), whose potential energy dominates the universe. In this picture, inflaton slowly rolls through its potential, and the coupling of quantum fluctuations of this scalar field with metric fluctuations is the source of primordial density perturbations called scalar perturbations. The tensor part of the metric has vacuum fluctuations resulting in primordial gravitational waves called tensor perturbations. During inflation, power spectra for both these perturbations depend on a potential called inflaton potential \(V(\phi)\). As Inflation ends, the universe reaches a highly nonthermal and frigid state with no matter content in it. However, the universe must be thermalized at extremely high temperature for big-bang nucleosynthesis (BBN) and baryogenesis. This is attained by'reheating'[18, 19, 20, 21, 22, 23, 24], transit between the inflationary phase and an era of radiation and matter dominance. There is no established science for reheating era and there is also a lack of direct observational data in favor of reheating. However, recent CMB data helped to obtain indirect bounds for various reheating parameters [25, 26, 27, 28, 29, 30, 31], and those parameters are: the reheating temperature (\(T_{\rm re}\)), the effective equation of state (EoS) parameter during reheating (\(\omega_{\rm re}\)) and lastly, the reheating duration, which can be written in the form of number of e-folds (\(N_{\rm re}\) ). It is challenging to bound the reheating temperature by LSS and CMB observations. However, its value is assumed to be higher than the electroweak scale for dark matter production at a weak scale. A lower limit has been set on reheat temperature i.e. 
\(\left(T_{\rm re}\sim 10^{-2}GeV\,\right)\) for a successful primordial nucleosynthesis (BBN) [32] and instantaneous reheating consideration allows us to put an upper bound i.e. \(\left(T_{\rm re}\sim 10^{16}GeV\,\right)\) for Planck's recent upper bound on tensor-to-scalar ratio (r). The value of second parameter, \(\omega_{\rm re}\), shifts from \(-\frac{1}{3}\) to 1 in various scenarios. It is 0 for reheating generated by perturbative decay of a large inflaton and \(\frac{1}{3}\) for instantaneous reheating. The next parameter in line is the duration of reheating phase, \(N_{re}\). Generally, it is incorporated by giving a range of \(N_{k}\), the number of e-foldings from Hubble crossing of a Fourier mode \(k\) to the termination of inflation. \(N_{k}\) has value in the range 46 to 70 in order to work out the horizon problem. These bounds arise by considering reheat temperature at electroweak scale and instantaneous reheating of the universe. A comprehensive analysis of higher bound on \(N_{k}\) is presented in [33, 34]. The relation between inflationary parameters and reheating can be derived by taking into consideration the progression of observable scales of cosmology from the moment of their Hubble crossing during inflation to the current time. We can deduce relations among \(T_{\rm re},N_{\rm re}\) and \(\omega_{\rm re}\), the scalar power spectrum amplitude (\(A_{s}\) ) and spectral index \(n_{s}\) for single-field inflationary models. Further, the constraints on \(T_{\rm re}\) and \(N_{\rm re}\) can be obtained from recent CMB data. Although plenty of inflationary models have been studied in recent years[35] and the inflationary predictions are in agreement with the recent CMB observations, there is still a need for a unique model. The most famous chaotic inflation with quadratic potential \(\left(\frac{1}{2}m^{2}\phi^{2}\right)\) is eliminated by recent cosmological observations as it predicts large tensor perturbations due to large potential energy it has during inflaton at large field amplitudes. Hence, lowering the potential at higher field values can help getting rid of this obstacle. Numerous hypotheses in this vein have been put forth [36, 37, 38, 39, 40, 41, 42, 43, 44, 45]. Radiative corrections provide an intriguing possibility [36, 37, 38] where, generally, the quadratic potential gets flatter as result of running of inflaton's quartic coupling. This article will rather examine a straightforward scenario in which the mass exhibits a running behaviour described as [46]: \[m^{2}(\phi)=m^{2}\left(1-{\rm K}\ln\left[\frac{\phi^{2}}{M^{2}}\right]\right), \tag{1}\] where M is large mass scale and K is some positive constant. The positive K and the negative sign in above equation is a defining characteristic of dominance of the coupling of inflaton field to fermion fields. Another interesting way to make such models compatible with observations is by extension of standard model as done in Ref. [47, 48]. Reheating is well known technique of constraining the inflationary models. There are various ways to analyse the reheating phase as available in literature e.g. one stage reheating study [31, 49], two stage reheating study [50, 51]. In Ref. [52] reheating was analysed through perturvative decay of inflaton to either bosonic or fermionic states through trilinear coupling [52]. 
Considering one stage reheating technique of constraining the models, we use various reheating parameters to put much tighter bounds on parameter space of quadratic chaotic inflationary model with a logarithmic-correction in mass in light of Planck's 2018 data [16, 17]. By demanding \(T_{\rm re}>100\) GeV for production of weak-scale dark matter and working in plausible range of average equation of state (EoS) parameter (\(-\frac{1}{3}\leq\overline{\omega}_{\rm re}\leq 1\)), we employ the derived relation between inflationary and reheating parameters and observationally feasible values of \(A_{s}\), \(n_{s}\) and r to place a limit on model's parameter space. It is a helpful and fairly new tool for putting relatively tighter constraints on the model and reducing its viable parameter space, providing significant improvement in accuracy of the model. Additionally, this technique well differentiate various inflation models as they can have the same forecasts for \(n_{s}\) and r, but definitely not for the same \(\omega_{\rm re}\), as the tightened constraints on \(n_{s}\) will result in an increasingly narrow permitted range of \(\omega_{\rm re}\) for a particular inflationary model. The organization of this paper is as follows: In Sec. 2 we discuss the dynamics and predictions of slow-roll inflation. We also derived the expressions for \(T_{\rm re}\) and \(N_{\rm re}\) as a function of \(\overline{\omega}_{\rm re}\) and other inflationary parame ters like (\(\Delta\)N\({}_{k}\) and \(V_{\rm end}\)). In section 3, the Subsec. 3.1 has our recreated data for reheating scenario of simple quadratic chaotic potential. In Subsec. 3.2, we discussed the various field domains within which inflation can occur for quadratic chaotic potential with logarithmic correction in mass and then we parameterized reheating for this model using \(T_{\rm re}\) and \(N_{\rm re}\) as a function of the scalar spectral index \(n_{s}\) for different \(\omega_{\rm re}\). We have also examined the observational limits and reheating parameters for both these models using Planck 2018 data in Sec. 3. Sec. 4 is reserved for discussion and conclusions. We will be working with \(\hbar=c=1\) units and the values of some standard parameters used are reduced Planck's mass \(M_{P}=\sqrt{\frac{1}{8\pi G}}\) = 2.435 \(\times\) 10\({}^{18}\) GeV, the redshift of matter radiation equality \(z_{\rm eq}\) =3402, \(g_{\rm re}\approx\) 100 [27] and the present value of Hubble parameter \(H_{o}=100\)h km \(s^{-1}\) Mpc\({}^{-1}\) with h = 0.68 [16, 17] ## 2 Parameterizing reheating in slow-roll inflationary models Reheating phase can be parameterized by assuming it been dominated by some fluid [53] of energy density \(\rho\) with pressure P and equation of state(EoS) parameter \(\omega_{\rm re}=\frac{P}{\rho}\) where \[\rho=\frac{\dot{\phi}^{2}}{2}+V(\phi),\hskip 28.452756ptP=\frac{\dot{\phi}^{2} }{2}-V(\phi). \tag{2}\] The continuity equation gives \[\dot{\rho}+3H(P+\rho)=0, \tag{3}\] \[\dot{\rho}+3H\rho\left(\omega_{\rm re}+1\right)=0. \tag{4}\] We analyze the dynamics of inflation by considering inflaton \(\phi\) with potential \(V(\phi)\) evolving slowly with slow-roll parameters \(\epsilon\) and \(\eta\). The approximation of Friedman equation using slow-roll conditions give \[3H\dot{\phi}+V^{\prime}(\phi)=0, \tag{5}\] \[H^{2}=\frac{V(\phi)}{3M_{P}^{2}}, \tag{6}\] where prime(\({}^{\prime}\)) denotes derivative w.r.t \(\phi\) and H = \(\frac{\dot{a}}{a}\) is Hubble parameter. 
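As a numerical illustration of the slow-roll system in Eqs. (5)-(6), the sketch below integrates the field in e-folds, \(d\phi/dN=-M_{P}^{2}V^{\prime}/V\), until the end-of-inflation condition \(\epsilon=1\) is met (with \(\epsilon\) the first slow-roll parameter defined in the next equation). The quadratic potential and the initial field value are placeholder choices in reduced Planck units, used only to show the mechanics.

```python
import numpy as np

M_P = 1.0                      # reduced Planck mass (units with M_P = 1)

def V(phi):                    # placeholder potential: 0.5 * m^2 * phi^2 with m = 1
    return 0.5 * phi**2

def dV(phi):
    return phi

def epsilon(phi):              # first slow-roll parameter, (M_P^2 / 2) (V'/V)^2
    return 0.5 * M_P**2 * (dV(phi) / V(phi))**2

def evolve(phi0, dN=1e-3):
    """Integrate dphi/dN = -M_P^2 V'/V (slow-roll Eqs. (5)-(6)) until epsilon = 1."""
    phi, N = phi0, 0.0
    while epsilon(phi) < 1.0:
        phi -= M_P**2 * dV(phi) / V(phi) * dN
        N += dN
    return phi, N

phi_end, N_total = evolve(phi0=16.0)
# For this potential: phi_end ~ sqrt(2) M_P and N_total ~ phi0^2 / (4 M_P^2) - 1/2,
# consistent with the quadratic-model expressions derived in Sec. 3.1 below.
```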
The definition of slow-roll parameter give \[\epsilon=\frac{M_{P}^{2}}{2}\left(\frac{V^{\prime}}{V}\right)^{2},\hskip 28.452756pt \eta=M_{P}^{2}\left(\frac{V^{\prime\prime}}{V}\right). \tag{7}\] The scalar spectral index \(n_{s}\), tensor spectral index \(n_{T}\) and tensor to scalar ratio \(r\) in terms of above slow-roll parameters satisfy the relations \[n_{s}=1-6\epsilon+2\eta,\hskip 28.452756ptn_{T}=-2\epsilon,\hskip 28.452756ptr =16\epsilon. \tag{8}\] Now, the number of e-foldings in between Hubble crossing of mode \(k\) and termination of inflation denoted by subscript "end" can be given as \[\Delta N_{k}=\ln\left(\frac{a_{\rm end}}{a_{\rm k}}\right)=\frac{1}{M_{P}^{2 }}\int_{\phi_{\rm end}}^{\phi_{k}}\frac{V}{V^{\prime}}\,d\phi, \tag{9}\] where \(a_{k}\) and \(\phi_{k}\) represents value of scale factor and inflaton at the point of time when \(k\) crosses the Hubble radius. The later part of eq. (9) is obtained using the slow-roll approximations \(\ddot{\phi}\ll 3H\dot{\phi}\) and \(V(\phi)\gg\dot{\phi}^{2}\). Similarly, \[N_{\rm re}=\ln\left(\frac{a_{\rm re}}{a_{\rm end}}\right), \tag{10}\] Here the quantity \(N_{\rm re}\) encrypts both, an era of preheating [54, 55, 56, 57, 58, 22, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78] as well as later thermalization process. An energy density controls the Universe's subsequent evolution and can be written as \[\rho_{\rm re}=\frac{\pi^{2}}{30}g_{\rm re}T_{\rm re}^{4}, \tag{11}\] where \(g_{\rm re}\) gives the actual count of relativistic species at termination of reheating epoch and \(T_{\rm re}\) is the reheating temperature. Now, in view of eq. (3) \[\rho_{\rm re}=\rho_{\rm end}{\rm e}^{-3N_{\rm re}(1+\overline{ \omega}_{\rm re})}, \tag{12}\] \[{\rm where}\ \overline{\omega}_{\rm re}=<\omega>=\frac{1}{N_{\rm re }}\int_{N_{\rm end}}^{N}\omega_{\rm re}(N)dN, \tag{13}\] Here \(\overline{\omega}_{\rm re}\) is average EoS parameter during reheating [49, 25]. Now, eq. (12) can be recast as \[\frac{a_{\rm re}}{a_{\rm end}}=e^{N_{\rm re}}=\left(\frac{\rho_{\rm re}}{ \rho_{\rm end}}\right)^{-\frac{1}{3(1+\overline{\omega}_{\rm re})}}. \tag{14}\] Using eq. (11) and eq. (14), reheating e-folds \(N_{\rm re}\) can be written as \[N_{\rm re}=\frac{1}{3\left(1+\overline{\omega}_{\rm re}\right)}\left\{\ln \left(\frac{3}{2}V_{\rm end}\right)-\ln\left(\frac{\pi^{2}}{30}g_{\rm re} \right)\right\}-\frac{4}{3\left(1+\overline{\omega}_{\rm re}\right)}\ln\left( T_{\rm re}\right). \tag{15}\] For some physical scale \(k\), the observed wavenumber '\(\frac{k}{a}\)' can be given in terms of above known quantities and the redshift during matter-radiation equality epoch (\(z_{\rm eq}\)) as [49] \[H_{k}=\frac{k}{a_{k}}=\left(1+z_{\rm eq}\right)\frac{k}{a_{o}}\rho_{\rm re}^{ \frac{3}{2\overline{\omega}_{\rm re}-1}{2\left(1+\overline{\omega}_{\rm re} \right)}}\rho_{\rm eq}{}^{-\frac{1}{4}}\left(\frac{3}{2}V_{\rm end}\right)^{ \frac{1}{3(1+\overline{\omega}_{\rm re})}}e^{\Delta N_{k}}. \tag{16}\] Using eq. (16), \(\Delta N_{k}\) can be given as \[\Delta N_{k}=\ln H_{k}-\ln\left(1+z_{\rm eq}\right)-\ln\left(\frac{k}{a_{o}} \right)-\frac{1}{3(\overline{\omega}_{\rm re}+1)}\ln\left(\frac{3}{2}V_{\rm end }\right)-\frac{3}{3\left(1+\overline{\omega}_{\rm re}\right)}\ln\left(\rho_ {\rm re}^{\frac{1}{4}}\right)+\ln\left(\rho_{\rm eq}^{\frac{1}{4}}\right). \tag{17}\] Inverting eq. (17), and using eq. 
(11) one can get a mutual relation among the numerous parameters introduced, \[\ln(T_{\rm re})=\frac{3}{3}\frac{\left(1+\overline{\omega}_{\rm re}\right)}{ \overline{\omega}_{\rm re}-1}\left\{\ln H_{k}-\ln\left(1+z_{\rm eq}\right)- \ln\frac{k}{a_{o}}-\Delta N_{k}+\ln\left(\rho_{\rm eq}^{\frac{1}{4}}\right) \right\}-\frac{1}{3\ \overline{\omega}_{re}-1}\ln\left(\frac{3}{2}V_{ end}\right)-\frac{1}{4}\ln\left(\frac{\pi^{2}}{30}g_{re}\right). \tag{18}\] The expression for \(T_{\rm re}\) from eq. (15) is substituted in eq. (18) to get the expression for \(N_{\rm re}\) as mentioned below \[N_{\rm re}=\frac{1}{3\ \overline{\omega}_{\rm re}-1}\ln\left(\frac{3}{2}V_{\rm end }\right)+\frac{4}{3\ \overline{\omega}_{\rm re}-1}\left\{\ln\left(\frac{k}{a_{o}}\right)+\Delta N _{k}+\ln\left(1+z_{\rm eq}\right)-\ln\left(\rho_{\rm eq}^{\frac{1}{4}}\right) -\ln H_{k}\right\}. \tag{19}\] eq. (18) and eq. (19) are the two key relationships for parameterizing reheating in slow-roll inflationary models. ## 3 Inflationary models ### Quadratic Chaotic inflationary model We are first considering simple quadratic chaotic potential before moving to its modified form. The quadratic chaotic potential [4] has the form \[V=\frac{1}{2}m^{2}\phi^{2}. \tag{20}\] The reheating study of this potential was already done in [49] in light of Planck's 2015 data, we are recreating the data by doing the similar study using Planck's 2018 data. Using eq. (7) slow-roll parameters for this potential can be given as \[\epsilon=\eta=\frac{2M_{P}^{2}}{\phi^{2}}. \tag{21}\] The Hubble parameter during the crossing of Hubble radius by scale \(k\) for this model can be written as \[H_{k}^{2}=\frac{1}{M_{P}^{2}}\left(\frac{V_{k}}{3-\epsilon_{k}}\right)=\frac{ 1}{2M_{P}^{2}}\left(\frac{m^{2}\phi_{k}^{2}}{3-2\frac{M_{P}^{2}}{\phi^{2}}} \right). \tag{22}\] where \(\phi_{k}\), \(\epsilon_{k}\) and \(V_{k}\) respectively represent the inflaton field, slow-roll parameter and potential during crossing of Hubble radius by mode \(k\). Using the condition \(\epsilon=1\), defining end of inflation, in eq. (21), we obtained \(\frac{\phi_{*}^{2}}{M_{P}^{2}}=2\) Now, corresponding to pivot scale \(k_{*}\), used in Planck collaboration, \(\frac{k_{*}}{a_{a}}=0.05Mpc^{-1}\), consider the mode \(k_{*}\) crossing the hubble radius at a point where the field has achieved the value \(\phi_{*}\) during inflation. The remaining number of e-folds persist subsequent to crossing of hubble radius by \(k_{*}\) are \[\Delta N_{*}\simeq\frac{1}{M_{P}^{2}}\int_{\phi_{\rm end}}^{\phi_{*}}\frac{V} {V^{\prime}}\,d\phi=\left[\left(\frac{\phi_{*}}{2M_{P}}\right)^{2}-\frac{1}{2 }\right]. \tag{23}\] The spectral index for this model can be easily obtained using eq. (8) as \[n_{s}=1-8\left(\frac{M_{P}^{2}}{\phi_{*}^{2}}\right). \tag{24}\] Now, the formulation for tensor-to-scalar ratio from eq. (8) gives \[r=32\frac{M_{P}^{2}}{\phi_{*}^{2}}. \tag{25}\] Moreover, this model yields the relation \[H_{*}=\pi M_{P}\sqrt{16A_{s}\frac{M_{P}^{2}}{\phi_{*}^{2}}}. \tag{26}\] The relation of field \(\phi\) and \(H\) eq. (6), and the condition for termination of inflation as used in eq. (23), along with eq. (26) gives expression for \(V_{\rm end}\) as \[V_{\rm end}(\phi)=\frac{1}{2}m^{2}\phi_{\rm end}^{2}=\frac{3H_{*}^{2}M_{P}^{ 2}\phi_{\rm end}^{2}}{\phi_{*}^{2}}=\frac{6H_{*}^{2}M_{P}^{4}}{\phi_{*}^{2}}. \tag{27}\] Now, the expressions for \(\Delta N_{*}\), \(r\), \(H_{*}\) and \(V_{\rm end}\) as a function of \(n_{s}\) can be obtained by putting the value of \(\phi_{*}\) from eq. 
(24) in eqs. (23), (25), (26) and (27), and then these expressions along with eqs. (18) and (19) gives number of reheating e-folds \(N_{\rm re}\) and reheating temperature \(T_{\rm re}\). Planck's 2018 value of \(A_{s}=2.1\times 10^{-9}\) and computed value of \(\rho_{\rm eq}^{\frac{1}{2}}=10^{-9}\)GeV [16, 17] have been used for calculation. The \(N_{\rm re}\) and \(T_{\rm re}\) versus \(n_{s}\) plots, along with Planck-2018 1\(\sigma\) bound on \(n_{s}\) i.e. \(n_{s}\)=0.965\(\pm\)0.004 (dark gray) and 2\(\sigma\) bound on \(n_{s}\) i.e. \(n_{s}\)=0.965\(\pm\)0.008 (light gray), for this model are presented graphically in figure 1 for a range of average EoS parameter during reheating. By demanding \(T_{\rm re}\geq 100\) GeV for production of weak-scale dark matter and solving eqs. (18) and (24), the bounds on \(n_{s}\) are obtained and are reflected on eq. (23) and eq. (25) to obtain bounds on \(\Delta N_{*}\) and r. All the obtained bounds are shown in table 1. For this model the bounds on \(n_{s}\) lies inside Planck-2018 \(2\sigma\) bound demanding \(\overline{\omega}_{\rm re}\) lies in the range \((-0.03\leq\overline{\omega}_{\rm re}\leq 1)\) and the corresponding range for r is \((0.172\geq{\rm r}\geq 0.117)\) while if we demand \(n_{s}\) to lie within \(1\sigma\) bound by Planck 2018 observation then the allowed range of \(\overline{\omega}_{\rm re}\) is \((0.09\leq\overline{\omega}_{\rm re}\leq 0.67)\) and the corresponding r values are \((0.156\geq{\rm r}\geq 0.124)\). Within these ranges of \(\overline{\omega}_{\rm re}\) the tensor-to-scalar ratio (r) is greater than the combined BICEP2/Keck and Planck's bound \((r<0.06)\)[59]. From figure 1(a), we can see for Planck's 2018 \(1\sigma\) bound on \(n_{s}\)(0.965\(\pm\)0.004), curves \((-\frac{1}{3}\leq\overline{\omega}_{\rm re}\leq 0)\) and \((\frac{2}{3}\leq\overline{\omega}_{\rm re}\leq 1)\) predicts \(T_{\rm re}>2.082\times 10^{6}\) GeV and \(T_{\rm re}>86.08\) GeV respectively while all values of reheating temperature are possible for \(0.135\leq\overline{\omega}_{re}\leq 0.577\). From table 1, we can see that all the r values are greater than the combined BICEP2/Keck and Planck's bound \((r<0.06)\)[59]. Hence, this model is incompatible with the data for any choice of \(\overline{\omega}_{re}\) taken. ### Modified quadratic chaotic inflation The quadratic chaotic inflationary potential with logarithmic-correction in mass term has the form [38, 46] \[V(\phi)=\frac{1}{2}m^{2}\left(1-{\rm K}\ln\frac{\phi^{2}}{M_{P}^{2}}\right) \phi^{2}=(M^{\prime})^{4}\left(1-{\rm K}\ln\frac{\phi^{2}}{M_{P}^{2}}\right) \frac{\phi^{2}}{M_{P}^{2}}, \tag{28}\] where \((M^{\prime})^{4}=m^{2}M_{P}^{2}/2\) and K is some positive constant. The positive K is a defining characteristic of dominance of fermion couplings. This work is inspired by Ref. 
[46], where the inflationary scenario of this \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & Average Equation of state & \(n_{s}\) & \(\Delta N_{*}\) & \(r\) \\ \cline{2-5} & \(-\frac{1}{3}\leq\overline{\omega}_{re}\leq 0\) & \(0.926\leq n_{s}\leq 0.958\) & \(26.47\leq\Delta N_{*}\leq 47.45\) & \(0.297\geq r\geq 0.166\) \\ \cline{2-5} \(\left(\frac{1}{2}m^{2}\phi^{2}\right)\) & \(0\leq\overline{\omega}_{re}\leq\frac{1}{6}\) & \(0.958\leq n_{s}\leq 0.963\) & \(47.45\leq\Delta N_{*}\leq 53.38\) & \(0.166\geq r\geq 0.148\) \\ \cline{2-5} & \(\frac{1}{6}\leq\overline{\omega}_{re}\leq\frac{2}{3}\) & \(0.963\leq n_{s}\leq 0.969\) & \(53.38\leq\Delta N_{*}\leq 63.99\) & \(0.148\geq r\geq 0.124\) \\ \cline{2-5} & \(\frac{2}{3}\leq\overline{\omega}_{re}\leq 1\) & \(0.969\leq n_{s}\leq 0.971\) & \(63.99\leq\Delta N_{*}\leq 68.10\) & \(0.124\geq r\geq 0.117\) \\ \hline \end{tabular} \end{table} Table 1: The permissible range for values of \(n_{s}\), \(\Delta N_{*}\) and \(r\) for Quadratic Chaotic inflationary potential \(\left(\frac{1}{2}m^{2}\phi^{2}\right)\) by demanding \(T_{re}\geq 100GeV\) potential was studied. We are considering this potential in context of reheating in light of Planck's 2018 data. We will start our discussion with various field domains [35] within which inflationary phenomena may occur for above potential. It is evident that the above-mentioned potential eq. (28) does not exhibit positive definiteness for all values of the field (\(\phi\)). The value of this potential becomes negative after a specific point \[\frac{\phi_{V=0}}{M_{P}}=\sqrt{e^{\frac{1}{\kappa}}}. \tag{29}\] The model can only be defined within a specific regime i.e., \(\phi<\phi_{V=0}\). On the contrary, the highest point of the potential function, where \(V^{\prime}=0\) (or can say \(\epsilon=0\)), corresponds to field value given as: \[\frac{\phi_{V^{\prime}=0}}{M_{P}}=\frac{\phi_{Top}}{M_{P}}=\sqrt{e^{\frac{1- \mathrm{K}}{\mathrm{K}}}}, \tag{30}\] The model has a sense provided the correction term doesn't have its dominance on the potential, hence the suitable regime is \(\phi<\phi_{Top}<\phi_{V=0}\). The potential versus \(\frac{\phi}{M_{P}}\) plot for four different values of K is depicted in figure 2. From figure 2 it can be seen that each K has specific viable regime in which the model is defined and have a sense and we will be working in these regions only. Now moving further, the slow-roll parameters for this potential can be given as \[\epsilon=2M_{P}^{2}\left(\frac{1-\mathrm{K}\left(1+\ln\frac{\phi^{2}}{M_{P}^ {2}}\right)}{\phi(1-\mathrm{K}\ln\frac{\phi^{2}}{M_{P}^{2}})}\right)^{2}, \tag{31}\] \[\eta=\frac{2M_{P}^{2}\left(-3\mathrm{K}+1-\mathrm{K}\ln\frac{\phi^{2}}{M_{P}^ {2}}\right)}{\phi^{2}\left(1-\mathrm{K}\ln\frac{\phi^{2}}{M_{P}^{2}}\right)}. \tag{32}\] After substituting the values of \(\epsilon\) eq. (31) and \(\eta\) eq. (32) in eq. (8), we can write scalar spectral index as \[n_{s}=\frac{-4\left(2-3\text{K}+3\text{K}^{2}\right)+\frac{\phi^{2}}{M_{P}^{2}}-2 \text{K}\left(-8+6\text{K}+\frac{\phi^{2}}{M_{P}^{2}}\right)\ln\frac{\phi^{2}}{M _{P}^{2}}+\text{K}^{2}\left(-8+\frac{\phi^{2}}{M_{P}^{2}}\right)\ln\left[\frac{ \phi^{2}}{M_{P}^{2}}\right]^{2}}{\frac{\phi^{2}}{M_{P}^{2}}\left(-1+\text{K} \ln\frac{\phi^{2}}{M_{P}^{2}}\right)^{2}}. 
\tag{33}\] The Hubble parameter during the crossing of Hubble radius by scale \(k\) can be written as \[H_{k}^{2}=\frac{1}{M_{P}^{2}}\left(\frac{V_{k}}{3-\epsilon_{k}}\right)=\left( \frac{\frac{1}{2}m^{2}\left(1-\text{K}\ln\frac{\phi_{k}^{2}}{M_{P}^{2}}\right) \frac{\phi_{k}^{2}}{M_{P}^{2}}}{3-2M_{P}^{2}\left(\frac{1-\text{K}\left(1+\ln \frac{\phi^{2}}{M_{P}^{2}}\right)}{\phi\left(1-\text{K}\ln\frac{\phi^{2}}{M_{P }^{2}}\right)}\right)^{2}}\right). \tag{34}\] Using the condition \(\epsilon=1\) defining end of inflation, we have obtained \(\frac{\phi_{\text{end}}}{M_{P}}\) for different values of K. The remaining number of e-folds persist subsequent to crossing of hubble radius by \(k_{*}\) till the termination of inflationary epoch can be given as \[\Delta N_{*}\simeq\frac{1}{M_{P}^{2}}\int_{\phi_{\text{end}}}^{\phi_{*}}\frac {V}{V}\,d\phi_{*}=\frac{1}{2M_{P}^{2}}\int_{\phi_{\text{end}}}^{\phi_{*}}\frac {\left(1-\text{K}\,\ln\frac{\phi^{2}}{M_{P}^{2}}\right)\phi}{1-\text{K}\left( 1+\ln\frac{\phi^{2}}{M_{P}^{2}}\right)}\,d\phi. \tag{35}\] Defining \(\frac{\phi_{*}}{M_{P}}=x\). The spectral index \(n_{s}\) eq. (33), at \(\phi=\phi_{*}\) in terms of \(x\) will have the form \[n_{s}=\frac{-4\left(2-3\text{K}+3\text{K}^{2}\right)+x^{2}-2\text{K}\left(-8+ 6\text{K}+x^{2}\right)\ln x^{2}+\text{K}^{2}\left(-8+x^{2}\right)\ln\left[x^ {2}\right]^{2}}{x^{2}\left(-1+\text{K}\ln x^{2}\right)^{2}}. \tag{36}\] The variation of \(\frac{\phi_{*}}{M_{P}}\) and \(\Delta\text{N}_{*}\) with \(n_{s}\) using eq. (36) and eq. (35) for 4 different values of K are shown in figure 3a and 3b respectively. Further in this model, we can write the tensor - to - scalar ratio and \(H_{*}\) as \[r=32\left(\frac{1-\text{K}\left(1+\ln x^{2}\right)}{x\left(1-\text{K}\ln x^{2} \right)}\right)^{2}. \tag{37}\] \[H_{*}=4\pi M_{P}\sqrt{A_{s}}\left(\frac{1-\text{K}\left(1+\ln x^{2}\right)}{x \left(1-\text{K}\ln x^{2}\right)}\right). \tag{38}\] Figure 4: The plots for \(T_{\rm re}\) and \(N_{\rm re}\) versus spectral index (\(n_{s}\)) for quadratic chaotic model with corrected mass \(V(\phi)=\frac{1}{2}m^{2}\left(1-{\rm K}\ln\frac{\phi^{2}}{M_{P}^{2}}\right)\phi ^{2}\) for different K and \(\overline{\omega}_{\rm re}\) values. The shaded regions and color codings are same as in figure 1 Defining \(\frac{\phi_{\rm end}}{M_{P}}=y\). The relation of field \(\phi\) and \(H\), and the condition for termination of inflation, along with eq. (38) gives expression for \(V_{\rm end}\) in terms of \(x\) and \(y\) as \[V_{\rm end}(\phi)=\frac{1}{2}m^{2}\left(1-{\rm K}\ln\left(\frac{\phi_{\rm end }^{2}}{M_{P}^{2}}\right)\right)\phi_{\rm end}^{2}\ =\frac{3H_{*}^{2}M_{P}^{2}\left(1-{\rm K}\ln y^{2}\right)y^{2}}{x^{2}\left(1-{ \rm K}\ln x^{2}\right)}. \tag{39}\] Now, the expressions for \(\Delta N_{*}\), \(r\), \(H_{*}\) and \(V_{\rm end}\) as function of \(n_{s}\) can be obtained by putting the value of \(x\) from eq. (36) and the value of y obtained using the condition for termination of inflation (\(\epsilon=1\)) in eqs. (35), (37), (38) and (39), and then these expressions along with eqs. (18) and(19) gives number of reheating e-folds \(N_{\rm re}\) and reheating temperature \(T_{\rm re}\). The \(N_{\rm re}\) and \(T_{\rm re}\) versus \(n_{s}\) plots, along with Planck-2018 bounds, for 4 different K values for this model are presented graphically in figure 4. By demanding \(T_{\rm re}>100\) GeV for production of weak-scale dark matter and solving eqs. (18) and (36), the bounds on \(n_{s}\) are obtained and are reflected on eq. (35) and eq. 
(37) to obtain bounds on \(\Delta N_{*}\) and r. All the obtained bounds for various choices of K are shown in table (2). The r versus \(n_{s}\) plots, along with Planck-2018 bounds, for a range of K values are presented graphically in figure 5. The figure 5 shows that the tensor- to scalar ratio is greater than the viable range (\(r<0.06\)) for K \(\leq 0.13\) while for K \(>0.16\) the \(n_{s}\) value is outside Planck's 2018 bound for any choice of \(\overline{\omega}_{\rm re}\). The value K = 0 gives us the normal quadratic chaotic potential. The allowed range of K satisfying Planck's 2018 constraints on both \(n_{s}\) and r for this model is found to be (\(0.13<{\rm K}\leq 0.16\)). Individually, the range of \(\overline{\omega}_{\rm re}\) for which our obtained data is compatible with Planck-2018 \(2\sigma\) bounds on \(n_{s}\) and the combined BICEP2/Keck and Planck's bound on r, i.e. (\(r<0.06\)) gives (\(0.352<\overline{\omega}_{\rm re}\leq 1\)) for K=0.14 and (\(0.009\leq\overline{\omega}_{\rm re}\leq 1\)) for K=0.15. For K \(\leq 0.13\), there is no compatibility with data for any value of \(\overline{\omega}_{\rm re}\) taken and K=0.16 is compatible with data for only \(\overline{\omega}_{\rm re}\)=1. Similarly, the \(1\sigma\) bound on \(n_{s}\) and \(r<0.06\) gives (\(0.352<\overline{\omega}_{\rm re}\leq 0.640\)) for K=0.14 and (\(0.212\leq\overline{\omega}_{\rm re}\leq 1\)) for K=0.15 while K=0.13 and K=0.16 are completely outside the combined \(1\sigma\) and r bounds. We have also found the viable range of the reheat temperature and number of e-foldings for each case which shows compatibility with Planck's 2018 \(1\sigma\) bound on \(n_{s}\) using figure 4 and the findings have been clearly presented in a tabular format in table 3. The table 3 shows that the curve corresponding to \(\overline{\omega}_{\rm re}=\frac{1}{6}\) for K=0.13 and 0.14 and the curves corresponding to (\(\frac{2}{3}\leq\overline{\omega}_{\rm re}\leq 1\)) for K=0.15, give every possible value of reheating temperature (\(10^{-2}\) GeV to \(10^{16}\) GeV) while K=0.16 shows incompatibility with data for all \(\overline{\omega}_{\rm re}\) taken. The \(n_{s}\) values for these \(\overline{\omega}_{\rm re}\) ranges are 0.966 and 0.964 for K=0.13 and 0.14 while it is (\(0.965<n_{s}\)\(\leq\) 0.966) for K=0.15 which sets limit on tensor to scalar ratio(r) and the obtained values of r are 0.083 and 0.068 for K=0.13 and 0.14 while it is (\(0.037\leq\) r \(\leq\) 0.033) for K=0.15 and only the r values for K=0.15 are satisfying the condition (\(r<0.06\)). ## 4 Discussion and conclusion In this work, we have considered a modified form of quadratic chaotic inflation. Our primary goal is to study the reheating phase in light of Planck's 2018 observations. For that, we have considered two parameters, namely duration of reheating \(N_{\rm re}\) and reheating temperature \(T_{\rm re}\) and obtained their variation as function of scalar spectral index \(n_{\rm s}\) by considering a suitable range of effective equation of state \(\overline{\omega}_{\rm re}\). By demanding \(T_{\rm re}>100\) GeV for production of weak-scale dark matter and allowing \(\overline{\omega}_{\rm re}\) to vary in the range (\(-\frac{1}{3}\leq\overline{\omega}_{\rm re}\leq 1\)), we tried to find the permissible ranges for \(n_{\rm s}\), \(\Delta N_{*}\) and tensor-to-scalar ratio(r) for our models. 
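For readers who wish to reproduce this kind of scan, the sketch below evaluates Eqs. (36) and (37) on a grid of \(x=\phi_{*}/M_{P}\) below \(\phi_{Top}/M_{P}\) (Eq. (30)) and reads off r at a target \(n_{s}\). The grid range, the K value, and the target \(n_{s}\) are illustrative choices, not the exact numerical procedure used for the figures and tables above.

```python
import numpy as np

def ns_of_x(x, K):
    """Scalar spectral index n_s(x), Eq. (36), with x = phi_* / M_P."""
    L = np.log(x**2)
    num = (-4 * (2 - 3 * K + 3 * K**2) + x**2
           - 2 * K * (-8 + 6 * K + x**2) * L
           + K**2 * (-8 + x**2) * L**2)
    return num / (x**2 * (-1 + K * L)**2)

def r_of_x(x, K):
    """Tensor-to-scalar ratio r(x), Eq. (37)."""
    L = np.log(x**2)
    return 32.0 * ((1 - K * (1 + L)) / (x * (1 - K * L)))**2

def r_at_ns(K, ns_target, n_grid=200_000):
    """Scan x below phi_Top / M_P (Eq. (30)) and return r where n_s is closest to target."""
    x_top = np.exp((1 - K) / (2 * K))
    x = np.linspace(1.5, 0.999 * x_top, n_grid)
    ns = ns_of_x(x, K)
    i = int(np.argmin(np.abs(ns - ns_target)))
    return r_of_x(x[i], K)

# Illustrative call: for K = 0.15 and n_s ~ 0.965 this lands near r ~ 0.03-0.04,
# in line with the K = 0.15 values quoted in the text; for small K it approaches
# the pure quadratic relation r = 4 (1 - n_s).
print(r_at_ns(K=0.15, ns_target=0.965))
```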
We first restudied the simple quadratic chaotic inflation using the most recent Planck's 2018 data and found that the condition \(T_{\rm re}>100\) GeV gives (\(-0.03\leq\overline{\omega}_{\rm re}\leq 1\)) for \(n_{s}\) to lie inside Planck-2018 \(2\sigma\) bounds while if we demand \(n_{s}\) to lie within \(1\sigma\) bounds than the allowed range of \(\overline{\omega}_{\rm re}\) is (\(0.09\leq\overline{\omega}_{\rm re}\leq 0.67\)). Within these ranges of \(\overline{\omega}_{\rm re}\), \(r\) is greater than the combined BICEP2/Keck and Planck's bound on r, i.e. (\(r<0.06\)). Since the normal quadratic chaotic potential is not favoring the observational data. We have considered a modified form of quadratic chaotic potential where a logarithmic correction containing a model parameter K is added to the mass term. We have found that for each value of model parameter K of the modified model, there is only a specific range of inflaton field (\(\phi\)) within which the model is defined and the correction part is not dominant over the actual quadratic term of potential. We have constrained ourself to only those regions for our analysis. By imposing the reheating conditions on this model, we found that the constraints on \(n_{\rm s}\) and r are consistent with Planck's 2018 data for only a particular range of K values and is found to be (\(0.13<\) K \(\leq 0.16\)), where each K value has a different range of \(\overline{\omega}_{\rm re}\) in which it is compatible with the data. The ranges of \(\overline{\omega}_{\rm re}\) for which our obtained data is compatible with Planck-2018 \(2\sigma\) bounds on \(n_{s}\) and the combined BICEP2/Keck and Planck's bound on r, i.e. (\(r<0.06\)) gives (\(0.352<\overline{\omega}_{\rm re}\leq 1\)) for K=0.14 and (\(0.009\leq\overline{\omega}_{\rm re}\leq 1\)) for K=0.15. For K \(\leq 0.13\), there is no compatibility with data for any value of \(\overline{\omega}_{\rm re}\) taken and K=0.16 is compatible with data for only \(\overline{\omega}_{\rm re}\)=1. Similarly, the \(1\sigma\) bound on \(n_{s}\) and \(r<0.06\) gives (\(0.352<\overline{\omega}_{\rm re}\leq 0.640\)) for K=0.14 and (\(0.212\leq\overline{\omega}_{\rm re}\leq 1\)) for K=0.15 while K=0.13 and K=0.16 are completely outside the combined \(1\sigma\) and r bounds. Also, from the plots showing the variation of \(T_{\rm re}\) with \(n_{s}\), we have found that different values of K and \(\overline{\omega}_{\rm re}\) give different ranges of reheating temperature as compatible with Planck's \(1\sigma\) bounds on \(n_{\rm s}\), but if we allow \(T_{\rm re}\) to vary over the whole range ( \(10^{-2}\) GeV to \(10^{16}\) GeV), then \(\overline{\omega}_{\rm re}\) is restricted to (\(0.047\leq\overline{\omega}_{\rm re}\leq 0.391\)) for K=0.13, (\(0.087\leq\overline{\omega}_{\rm re}\leq 0.559\)) for K=0.14 and (\(0.237\leq\overline{\omega}_{\rm re}\leq 1\)) for K=0.15 while K=0.16 shows incompatibility with Planck's 2018 \(1\sigma\) bounds on \(n_{\rm s}\) for all \(\overline{\omega}_{\rm re}\) taken. To conclude, the reheating study shows that the values of K close to 0.15 are more favorable ones and the \(\overline{\omega}_{\rm re}\) range satisfying the observational data for K=0.15 suggests the possible production of Feebly Interacting Massive Particle(FIMP) and Weakly Interacting Massive Particle(WIMP)-like dark matter particles [60, 61] and primordial black holes [62]. Elaborated study of possible particle production will be done in our future publications. 
The findings of the reheating study prove that even a small correction in mass term can help quadratic chaotic potential to favour Planck-2018 observations. Also, we have found that considering the reheating constraints, the average equation of state parameter \(\overline{\omega}_{\rm re}\) plays a vital role in defining the compatible range of reheating parameters, which effectively narrows the model's viable parameter space and significantly increases the model's accuracy. ## Acknowledgments SY would like to acknowledge the Ministry of Education, Government of India, for providing the JRF fellowship. UAY acknowledges support from an Institute Chair Professorship of IIT Bombay.
2301.13201
Numerical Issues for a Non-autonomous Logistic Model
The logistic equation has been extensively used to model biological phenomena across a variety of disciplines and has provided valuable insight into how our universe operates. Incorporating time-dependent parameters into the logistic equation allows the modeling of more complex behavior than its autonomous analog, such as a tumor's varying growth rate under treatment, or the expansion of bacterial colonies under varying resource conditions. Some of the most commonly used numerical solvers produce vastly different approximations for a non-autonomous logistic model with a periodically-varying growth rate changing signum. Incorrect, inconsistent, or even unstable approximate solutions for this non-autonomous problem can occur from some of the most frequently used numerical methods, including the lsoda, implicit backwards difference, and Runge-Kutta methods, all of which employ a black-box framework. Meanwhile, a simple, manually-programmed Runge-Kutta method is robust enough to accurately capture the analytical solution for biologically reasonable parameters and consistently produce reliable simulations. Consistency and reliability of numerical methods are fundamental for simulating non-autonomous differential equations and dynamical systems, particularly when applications are physically or biologically informed.
Marina Mancuso, Carrie Manore, Kaitlyn Martinez, Fabio Milner
2022-12-20T19:33:45Z
http://arxiv.org/abs/2301.13201v1
# Numerical Issues for a Non-autonomous Logistic Model ###### Abstract The logistic equation has been extensively used to model biological phenomena across a variety of disciplines and has provided valuable insight into how our universe operates. Incorporating time-dependent parameters into the logistic equation allows the modeling of more complex behavior than its autonomous analog, such as a tumor's varying growth rate under treatment, or the expansion of bacterial colonies under varying resource conditions. Some of the most commonly used numerical solvers produce vastly different approximations for a non-autonomous logistic model with a periodically-varying growth rate changing signum. Incorrect, inconsistent, or even unstable approximate solutions for this non-autonomous problem can occur from some of the most frequently used numerical methods, including the lsoda, implicit backwards difference, and Runge-Kutta methods, all of which employ a black-box framework. Meanwhile, a simple, manually-programmed Runge-Kutta method is robust enough to accurately capture the analytical solution for biologically reasonable parameters and consistently produce reliable simulations. Consistency and reliability of numerical methods are fundamental for simulating non-autonomous differential equations and dynamical systems, particularly when applications are physically or biologically informed. \(a\): A-1 Information Systems and Modeling, Los Alamos National Laboratory, Los Alamos NM, 87544 USA \(b\): School of Mathematical and Statistical Sciences, Arizona State University, Tempe AZ, 85287 USA \(c\): T-6 Theoretical Biology and Biophysics, Los Alamos National Laboratory, Los Alamos NM, 87544 USA _*Corresponding author: [email protected]_ ## 1 Introduction Density-dependent biological phenomena such as tumor growth, fishery management, and mosquito populations can all be modeled with logistic growth dynamics [6, 9, 11]. The classical logistic growth model was first conceptualized by Francois Verhulst and has two parameters- the net growth rate and the carrying capacity [17]. The net growth rate is the difference between the recruitment and removal rates of the population, and the carrying capacity is the upper threshold value of the population that can be biologically sustained. Under logistic growth, a population's rate of change decreases linearly as its size approaches the carrying capacity. Most applications using logistic growth dynamics assume the biological parameters remain constant with respect to time. However, one such extension of the logistic model is the non-autonomous version, which considers a time-dependent net growth rate and/or carrying capacity. Various theoretical aspects of non-autonomous logistic growth have been explored, including establishing the existence and uniqueness of a general solution [16] and stability of periodic solutions [5]. Additionally, Banks provides several applications of non-autonomous logistic models ranging from agricultural populations to railroad mileage [1]. In all aforementioned cases, however, the infimum of the time-varying net growth rate is assumed to be positive. Nevertheless, it may be more appropriate for some applications to allow for the possibility of a negative net growth rate to capture more general behavior. For example, tumor cells may die faster than they grow under treatment, or mosquito populations can experience negative net growth rates under seasonal temperature variation. 
However, employing a negative net growth rate creates additional modeling challenges by modifying the logistic growth's dynamic behavior. Rather than the carrying capacity acting as a maximum value representing a saturated population, this parameter becomes the minimum threshold necessary to sustain population growth under a negative net growth rate [2]. Numerical methods must retain behavioral characteristics of the systems they represent in order to be of value. Numerical methods for non-autonomous logistic growth should ensure that the simulated population remains nonnegative and bounded. Built-in numerical solvers in popular simulation software such as Matlab or Python for a case of the non-autonomous logistic model are not robust enough to capture the behavior of the analytical solution, and can produce biologically invalid or unstable results. While the continuous logistic model allows for unique solutions for initial value problems, numerically solving the ordinary differential equation makes it inherently discrete. As a result, numerical simulations may produce chaotic behavior as observed in the discrete logistic equation [2, 15]. Moreover, a simple, manually-programmed numerical solver avoids these aforementioned issues and outperforms the built-in numerical solvers available. The black box nature of built-in numerical solvers creates difficulties in pinpointing numerical inconsistencies of simulations. The motivating example of the non-autonomous logistic model takes the following form: \[P^{\prime}(t)=r(t)P(t)\left(1-\frac{P(t)}{K}\right), \tag{1.1}\] where \(P(t)\) is the population size at time \(t\), \(K\) is the carrying capacity, and \(r(t)\) is the time-varying intrinsic net growth rate with units of day\({}^{-1}\). A periodic function with a 365 day period models the time-varying growth rate: \[r(t)=r_{b}-r_{s}\cos\left(\frac{2\pi t}{365}\right), \tag{1.2}\] where \(r_{b}\) is the baseline, or mean net growth rate, and \(r_{s}\) is the amplitude scaling factor. No restrictions are employed on \(r_{b}\) or \(r_{s}\), but most examples shown here have \(|r_{b}|<|r_{s}|\) to allow \(r(t)\) to change sign. A realistic application for Eq. (1) may be to represent mosquito populations in a temperate climate, where temperature and precipitation are time-varying factors affecting mosquito growth in a nonlinear way [14]. When supplied with initial condition \(P(0)=P_{0}\), Eq. (1) becomes an initial value problem with the explicit analytical solution, \[P(t)=\frac{P_{0}K}{(K-P_{0})\exp^{-f(t)}+P_{0}}, \tag{2.1}\] with, \[f(t)=r_{b}t-\left(\frac{365r_{s}}{2\pi}\right)\sin\left(\frac{2\pi t}{365}\right). \tag{2.2}\] ## 2 Simulations of numerical solvers Six numerical solvers implemented in Python compare approximations of Eq. (1) to its analytical solution using various \(r_{b}\) and \(r_{s}\) values: 1. odeint: lsoda method [4], implemented with SciPy's odeint function [12] 2. LSODA: lsoda method, implemented with SciPy's solve_ivp function [13] 3. BDF: implicit backwards differentiation, implemented with SciPy's solve_ivp function [13] 4. RK23: second-order Runge-Kutta method, implemented with SciPy's solve_ivp function [13] 5. RK45: forth-order Runge-Kutta method, implemented with SciPy's solve_ivp function [13] 6. manual RK4: fourth-order Runge-Kutta method, manually-programmed [2, 15] The odeint and LSODA methods are flexible for stiff problems, and the first five tests employ adaptive or quasi-adaptive step size algorithms [12, 13]. 
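The comparison described here can be reproduced with a few lines of Python. In the sketch below, the right-hand side of Eq. (1), the growth rate of Eq. (1.2), and the analytical solution of Eqs. (2.1)-(2.2) are coded directly and compared against SciPy's RK45 via the root mean squared error; the growth-rate values resemble one of the parameter sets used in the next section, while the time horizon is an illustrative choice.

```python
import numpy as np
from scipy.integrate import solve_ivp

K_cap = 200_000               # carrying capacity K
P0 = 1.0                      # initial condition P(0)
r_b, r_s = -0.002, 0.25       # growth-rate parameters (PS2-like, for illustration)

def r(t):
    return r_b - r_s * np.cos(2 * np.pi * t / 365)

def rhs(t, P):                # Eq. (1)
    return r(t) * P * (1 - P / K_cap)

def analytical(t):            # Eqs. (2.1)-(2.2)
    f = r_b * t - (365 * r_s / (2 * np.pi)) * np.sin(2 * np.pi * t / 365)
    return P0 * K_cap / ((K_cap - P0) * np.exp(-f) + P0)

t_eval = np.arange(0.0, 3 * 365 + 1)
sol = solve_ivp(rhs, (0.0, t_eval[-1]), [P0], method="RK45", t_eval=t_eval)
rmse = np.sqrt(np.mean((sol.y[0] - analytical(t_eval)) ** 2))
print(f"RK45 RMSE against the analytical solution: {rmse:.2f}")
```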
The manual RK4 test uses a fixed step size of \(\Delta t=1\) day. The odeint, BDF, RK45, and manual RK4 methods are also used to simulate a more complex non-autonomous logistic growth model, in which both the net growth rate and the carrying capacity vary periodically with time. This more complex model does not have an analytical solution for comparison. ### Periodic growth rate with analytical solution The numerical approximations from the six tests using four Parameter Sets (PS1-PS4) are compared to their analytical solutions in Table 1. The manual RK4 method shows the closest approximation to the analytical solution across all parameter sets. Furthermore, the results of simulations using the manual RK4 method are very similar, and indeed more accurate, when using a decreasing step size \(\Delta t<1\), until round-off errors accumulate and halt the improvement in accuracy. Simulations for PS2 and PS3 from the six tests are shown in Figure 1 and Figure 2, respectively. PS2 is an example of simulations decaying to the zero equilibrium, while PS3 shows simulations approaching the carrying capacity equilibrium. Each numerical solver shows significantly different qualitative behavior in both parameter sets. The manual RK4 method was the only test to produce behavior consistent with the analytical solution for both parameter sets. The odeint method overshoots the magnitude of the oscillations for PS2, and falls short of capturing all oscillations for PS3. The LSODA method fails to capture all periodic oscillations in the true solution for both parameter sets. The RK23 method produces biologically unreasonable simulations for PS2. Moreover, the BDF and RK45 methods eventually become unbounded for PS3. This instability is the result of the carrying capacity becoming a threshold value when \(r(t)\) becomes negative [2]. In this case, the value \(K\) becomes the minimum value for a population to persist unbounded instead of a maximum value. If the population is numerically estimated to be above \(K\) and \(r(t)<0\), the population grows unbounded. Neither of these issues is observed for the odeint, LSODA, or manual RK4 methods, and they were likely unobserved previously because earlier research considered only nonnegative values of \(r(t)\)[16, 1, 5]. The vast differences in qualitative and quantitative behavior raise concerns about the accuracy of the numerical methods used in built-in solvers for nonlinear, non-autonomous differential equations. ### Periodic growth rate and carrying capacity The numerical issues observed from the built-in methods for simulating Eq. (1) are further exacerbated when a time-varying carrying capacity is added. When a periodically-varying carrying capacity is added to Eq. (1), then, \[P^{\prime}(t)=r(t)P(t)\left(1-\frac{P(t)}{K(t)}\right), \tag{3.1}\] where, \[K(t)=K_{b}-K_{s}\cos\left(\frac{2\pi t}{365}\right), \tag{3.2}\] and \(r(t)\) is as before. The \(K_{b}\) component represents the baseline carrying capacity, and \(K_{s}\) represents the carrying capacity's amplitude scaling factor. To avoid singularities, it is assumed that \(0<K_{s}<K_{b}\). Few have explored the case where both parameters vary in time [10, 3], but it is of interest to understand how external influences affect both components of the model. A maximum cutoff value for \(P(t)\) is implemented in the manual RK4 method to avoid potentially unbounded behavior that may occur with negative net growth rates.
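A minimal sketch of the manually-programmed, fixed-step (\(\Delta t=1\) day) RK4 described above, applied to Eq. (3) with both \(r(t)\) and \(K(t)\) periodic; each step keeps the population nonnegative and capped at the supremum of \(K(t)\), mirroring the cutoff just described. The parameter values are placeholders, not the simulation settings used for the figures.

```python
import numpy as np

r_b, r_s = 0.01, 0.25          # growth-rate parameters (placeholders)
K_b, K_s = 200_000, 50_000     # carrying-capacity parameters (placeholders)

def r(t):
    return r_b - r_s * np.cos(2 * np.pi * t / 365)

def K(t):
    return K_b - K_s * np.cos(2 * np.pi * t / 365)

def rhs(t, P):                 # Eq. (3)
    return r(t) * P * (1 - P / K(t))

def rk4(P0, t_end, dt=1.0, cap=K_b + K_s):
    """Fixed-step RK4 with the population kept in [0, sup K(t)] after every step."""
    ts = np.arange(0.0, t_end + dt, dt)
    Ps = np.empty_like(ts)
    Ps[0] = P0
    for i in range(len(ts) - 1):
        t, P = ts[i], Ps[i]
        k1 = rhs(t, P)
        k2 = rhs(t + dt / 2, P + dt * k1 / 2)
        k3 = rhs(t + dt / 2, P + dt * k2 / 2)
        k4 = rhs(t + dt, P + dt * k3)
        P_next = P + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        Ps[i + 1] = min(max(P_next, 0.0), cap)   # enforce nonnegative, bounded output
    return ts, Ps

ts, Ps = rk4(P0=1.0, t_end=5 * 365)
```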
The selected cutoff value is the supremum of the carrying capacity, \(\sup K(t)=K_{b}+K_{s}\), which denotes the maximum biologically relevant carrying capacity. Such biological constraints are easy to implement in manually-programmed numerical solvers, but difficult to incorporate for built-in solvers that have a black box framework. It can pay to write a numerical solver specifically designed to respect known theoretical features [7]. Although an explicit analytic solution does not exist for Eq. (3), intuition expects the simulations to produce oscillatory behavior. Simulations of Eq. (3) using the odeint, RK45, BDF, and manual RK4 methods are shown for two parameter sets in Figure 3. Results from the odeint and manual RK4 methods produce consistent oscillations for both parameter sets, while the RK45 and BDF methods produce erratic or non-smooth behavior. The BDF and RK45 tests estimate values above \(\sup K(t)\), even when the net growth rate remains nonnegative. \begin{table} \begin{tabular}{l c c c c c c c c} & \(r_{b}\) & \(r_{s}\) & odeint & LSODA & BDF & RK45 & RK23 & Manual RK4 \\ \hline PS1 & 0.05 & 0.15 & 14 & 497 & 913 & 552 & — & **5.62** \\ PS2 & -0.002 & 0.25 & 517 & 779 & 147 & 671 & — & **18.95** \\ PS3 & 0.01 & 0.25 & 1,355 & 6,765 & — & — & — & **23.89** \\ PS4 & 0.005 & 0.05 & 3.53 & 765 & 505 & 757 & 663 & **1.05** \\ \end{tabular} \end{table} Table 1: Root mean squared error (RMSE) values from numerical simulations of non-autonomous logistic model Eq. (1) for four Parameter Sets (PS1–PS4) with carrying capacity \(K=200,000\) and initial condition \(P(0)=1\). A dash ’—’ indicates that the simulation produced unbounded output and the root mean squared error could not be determined. Bold values show the lowest RMSE for the parameter set. Figure 2: Numerical approximations (red dashed curves) of Eq. (1) and analytical solutions (blue curves) using \(r_{b}=0.01\) and \(r_{s}=0.25\). Simulations have \(K=200,000\) with initial condition \(P(0)=1\). To enhance visual comparison, figures for the unstable simulations only show up to \(P(t)=400,000\). Figure 1: Numerical approximations (red dashed curves) of Eq. (1) and analytical solutions (blue curves) using \(r_{b}=-0.002\) and \(r_{s}=0.25\). Simulations have \(K=200,000\) with initial condition \(P(0)=1\). When comparing results between the odeint and manual RK4 methods, the time-varying growth rate does not affect the oscillations in \(K(t)\) when \(\inf r(t)>0\), which aligns with the conclusions found in [10]. Both tests also show \(\sup P(t)<\sup K(t)\). When \(r(t)\) switches signum and the oscillatory components of \(r(t)\) and \(K(t)\) are _in-phase_, the odeint simulation produces values exceeding \(\sup K(t)\). This issue is not observed for the manual RK4 method due to the implemented cut-off for \(P(t)\). Overall, the manually-programmed RK4 method can provide biologically reasonable behavior for a wider range of parameter values than the odeint method. ## 3 Discussion & conclusions Numerous disciplines use logistic models of varying complexity, and it can be tempting to implicitly trust the outputs from standardized, built-in numerical solvers. Although standardized, built-in numerical solvers can accurately simulate a variety of differential equation problems while providing a user-friendly interface [4], they may not be robust for solving all non-autonomous problems. 
Python's built-in ODE solvers fail to capture the true solution of a non-autonomous logistic model with a periodically-varying net growth rate changing signum, meanwhile a manually-programmed fourth-order Runge-Kutta method provides a closer approximation to the true solution. Numerical issues are further compounded with the added complexity of employing a periodic carrying capacity. The observed numerical issues from built-in solvers may be attributed to their black box nature. Firstly, the particular algorithms used for step size adjustment in the built-in methods appear to be sensitive to Eq. (1) when the net growth rate is negative, as seen by the BDF, RK45, and RK23 methods producing unbounded results for some Parameter Sets. All of the built-in methods tested allow for step size adjustment, whereas the manual RK4 method uses a fixed step size and outperformed the built-in solvers for a range of growth rate parameters. Secondly, the built-in methods were not flexible to the same range of parameter values for the more complex non-autonomous model Eq. (3). By incorporating the supremum of the carrying capacity to be the maximum value of the population in the manually-programmed method, simulations can produce biologically reasonable results when \(r(t)\) and \(K(t)\) are both _in-phase_ and _out-of-phase_. The black box framework of built-in numerical solvers makes it difficult to both identify the exact source of numerical issues as well as incorporate biological constraints. On the other hand, it is easier to diagnose numerical issues in manually-programmed solvers because the user knows how the inputs are being manipulated within the solver. Although results from this example only capture a specific case of non-autonomous logistic growth, it shows that even the most frequently used and trusted built-in numerical solvers may not be robust for some nonlinear and non-autonomous systems. Moreover, this example suggests that careful consideration is warranted when simulating nonlinear, non-autonomous differential equations that do not have an analytical solution for comparison, particularly for models that have direct physical or biological meaning. The aforementioned issues will be useful to consider when improving existing standardized numerical solvers. As the availability of data expands, the application of non-autonomous models becomes increasingly relevant for mechanistically modeling data-driven processes. The numerical issues encountered for this simple example of incorporating a periodic growth rate in a logistic model may foreshadow similar issues when modeling other applications which exhibit harmonic motion [8] or may not have an analytical solution for comparison. The awareness of numerical solver accuracy becomes paramount to the validation and interpretation of mathematical models. ## Acknowledgement This work was supported by the U.S. Department of Energy through the Los Alamos National Laboratory. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of U.S. Department of Energy (Contract No. 89233218CNA000001). Research presented in this article was supported by the Laboratory Directed Research and Development program of Los Alamos National Laboratory under project number 20210062DR.
2309.08765
Mining Patents with Large Language Models Elucidates the Chemical Function Landscape
The fundamental goal of small molecule discovery is to generate chemicals with target functionality. While this often proceeds through structure-based methods, we set out to investigate the practicality of orthogonal methods that leverage the extensive corpus of chemical literature. We hypothesize that a sufficiently large text-derived chemical function dataset would mirror the actual landscape of chemical functionality. Such a landscape would implicitly capture complex physical and biological interactions given that chemical function arises from both a molecule's structure and its interacting partners. To evaluate this hypothesis, we built a Chemical Function (CheF) dataset of patent-derived functional labels. This dataset, comprising 631K molecule-function pairs, was created using an LLM- and embedding-based method to obtain functional labels for approximately 100K molecules from their corresponding 188K unique patents. We carry out a series of analyses demonstrating that the CheF dataset contains a semantically coherent textual representation of the functional landscape congruent with chemical structural relationships, thus approximating the actual chemical function landscape. We then demonstrate that this text-based functional landscape can be leveraged to identify drugs with target functionality using a model able to predict functional profiles from structure alone. We believe that functional label-guided molecular discovery may serve as an orthogonal approach to traditional structure-based methods in the pursuit of designing novel functional molecules.
Clayton W. Kosonocky, Claus O. Wilke, Edward M. Marcotte, Andrew D. Ellington
2023-09-15T21:08:41Z
http://arxiv.org/abs/2309.08765v2
Mining Patents with Large Language Models Demonstrates Congruence of Functional Labels and Chemical Structures ###### Abstract Predicting chemical function from structure is a major goal of the chemical sciences, from the discovery and repurposing of novel drugs to the creation of new materials. Recently, new machine learning algorithms are opening up the possibility of general predictive models spanning many different chemical functions. Here, we consider the challenge of applying large language models to chemical patents in order to consolidate and leverage the information about chemical functionality captured by these resources. Chemical patents contain vast knowledge on chemical function, but their usefulness as a dataset has historically been neglected due to the impracticality of extracting high-quality functional labels. Using a scalable ChatGPT-assisted patent summarization and word-embedding label cleaning pipeline, we derive a Chemical Function (CheF) dataset, containing 100K molecules and their patent-derived functional labels. The functional labels were validated to be of high quality, allowing us to detect a strong relationship between functional label and chemical structural spaces. Further, we find that the co-occurrence graph of the functional labels contains a robust semantic structure, which allowed us in turn to examine functional relatedness among the compounds. We then trained a model on the CheF dataset, allowing us to assign new functional labels to compounds. Using this model, we were able to retrodict approved Hepatitis C antivirals, uncover an antiviral mechanism undisclosed in the patent, and identify plausible serotonin-related drugs. The CheF dataset and associated model offers a promising new approach to predict chemical functionality. ## 1 Introduction The overarching goal of drug discovery is to generate chemicals with specific functionality through the design, modification, and optimization of chemical structures (Li & Kang, 2020). Although functionality emerges from chemistry, the prediction of function based on structure is largely non-obvious (Martin et al., 2002). However, humans have long assessed chemicals, and thus relationships between chemical structure and chemical function are likely embedded in language itself. In this regard, Large Language Models (LLMs) have shown profound success in a variety of tasks, including machine translation and text summarization (Brown et al., 2020; OpenAI, 2023; Touvron et al., 2023). Machine translation has also recently been applied to drug discovery to translate from text descriptions of chemicals to their respective structures, and vice versa (Edwards et al., 2021, 2022; Zeng et al., 2022). LLMs have also been augmented for various biochemical knowledge prediction tasks (Bran et al., 2023; Fang et al., 2023; Christofidellis et al., 2023; Ross et al., 2022). However, LLMs have not yet been utilized at large scale for gathering data on chemical functionality. Chemical patents represent a rich source of implicit chemical knowledge, often preceding the scientific literature by several years (Senger, 2017; Ashenden et al., 2017), and interest in utilizing patent-extracted information for drug discovery has risen in recent years (Subramanian et al., 2023; Magarinos et al., 2023; Zhai et al., 2021). Efforts at the European Molecular Biology Laboratory (EMBL) have led to the SureChEMBL database that associates 20M+ unique molecules with the patents they were mentioned in (Papadatos et al., 2016). 
This seminal effort has allowed us to generate an LLM-assisted patent summarization method that automatically extracts high quality functional descriptors for a given chemical, leading to the extraction of 631,077 molecule-function pairs from just under 100,000 molecules and their 187,845 unique patents. The resultant **Chem**ical **F**unction (CheF) dataset has been used to explore the intrinsic semantic structure of patent functional label space and its mapping to chemical structure space, leading to a model able to predict chemical function from structure alone. ## 2 Related Work **Chemical to textual translation.** Large Language Models (LLMs) have recently been augmented with chemistry and biomolecular knowledge (Bran et al., 2023; Fang et al., 2023; Christofidellis et al., 2023; Zeng et al., 2022). These will serve as useful knowledge resources, but at present remain unlikely to be utilized for novel drug discovery tasks requiring high accuracy. Similar work investigated the translation of molecules to descriptive captions, and vice versa (Edwards et al., 2021, 2022; Su et al., 2022). The translation between language and chemical representations is a promising avenue for drug discovery. However, the present datasets for this task contain significant structural information mixed in with functional data, allowing a model trained on such to cheat when learning to annotate structures or generate molecules from descriptions. CheF aims to improve upon this by narrowing the scope of the problem to include only the most relevant information necessary for structure-based chemical function prediction. **Patent-based molecular data mining and prediction.** Building chemical datasets from patents is an established field. This often involves the extraction of chemical identities or reaction schemes (Senger et al., 2015; Papadatos et al., 2016; He et al., 2021; Sun et al., 2021), quantitative drug properties (Magarinos et al., 2023; Zhai et al., 2021), and chemical-disease relationships (Li et al., 2016). Recently, an LLM-based summarization pipeline was used to derive chemical function from patents to validate functional relevance of results from a machine learning-based chemical similarity search (Kosonocky et al., under review). We aim to expand upon this body of work through the large-scale LLM-based extraction of chemical functionality from a large corpus of patent literature. Recent work also focused on molecular generation from chemical subspaces derived from patents containing specific functional keywords (Subramanian et al., 2023). The CheF dataset compliments this work by serving as a superset of all functionally relevant patent keyword-based chemical subspaces. **Labeled chemical datasets.** Chemicals are complex interacting entities, and because of such there are many labels that can be associated with a given chemical. One common class of labels is a molecule's ability to bind specific receptors or enzymes, which can be used to train and evaluate chemical representation models (Fabian et al., 2020; Mysinger et al., 2012; Wu et al., 2018; Ross et al., 2022). Datasets linking chemicals to their functionality have emerged in recent years (Edwards et al., 2021; Huang et al., 2023). These datasets were compiled from existing databases containing annotations of well-studied chemicals, limiting the size of these curated datasets to \(<\)35K molecule-function pairs (Wishart et al., 2006; Li et al., 2016; Fu et al., 2015; Edwards et al., 2021). 
The CheF dataset aims to improve upon these existing datasets by automatically sourcing molecular function from patents to create a 100K molecule dataset of 631,077 molecule-function pairs, scalable to the entire SureChEMBL database of 20M+ patent-associated molecules (Papadatos et al., 2016). Further, due to its high coverage of chemical space, the CheF dataset may additionally serve as a global benchmark for the evaluation of chemical representation models. ## 3 Results Patents represent a rich source of chemical knowledge. We set out to create a large-scale database of chemicals correlated with their patent-derived molecular functionality. To do so, a random 100,000 molecules and their associated patents were chosen from the SureChEMBL database to create a Chemical Function (CheF) dataset (Fig. S1) (Papadatos et al., 2016). To ensure that patents were highly relevant to their respective molecule, only molecules with less than 10 patents were included in the random selection. This was done to avoid molecules like penicillin, which is associated with over 40,000 patents, few of which are relevant to penicillin itself. For each associated patent in the CheF dataset, the associated patent title, abstract, and description were scraped from Google Scholar and procedurally cleaned. ChatGPT (gpt-3.5-turbo) was then used to summarize each molecule's unstructured patent information into a set of 1-3 brief functional labels describing the patented molecule (Fig. 1a). The success of the GPT-assisted summarization was validated over 1,738 labels generated from a random 200 molecules from CheF. Of these labels, 99.6% had correct syntax and 99.8% were relevant to their respective patent (Table S1). 77.9% of the labels directly described the labeled molecule's function. However, this percentage increased to 98.2% when considering the functionality of the primary patented molecule, of which the labeled molecule is an intermediate (Table S1). This validation demonstrated that GPT-summarized literature can serve as a rich source of chemical functional annotations. The ChatGPT patent summarizations ultimately resulted in 104,607 functional labels. This was too large to have any predictive power, so measures were taken to consolidate these labels into a concise vocabulary. The labels were procedurally cleaned, reducing the number of labels to 39,854, and further consolidated by embedding each label with a language model (using OpenAI's text-embedding-ada-002) to group grammatically dissimilar, yet semantically similar labels together. The embeddings were clustered with DBSCAN using a cutoff that minimized the number of clusters without deterioration in cluster quality (e.g., avoiding the grouping of antiviral, antibacterial, & antifungal) (Fig. S2). Each cluster was then summarized with ChatGPT to obtain a single representative cluster label. The embedding-based clustering and summarization process was validated across the 500 largest clusters. It was found that 99.2% of the clusters contained semantically common elements, and 97.6% of the ChatGPT cluster summarizations were accurate and representative of their constituent labels (Table S2). The representative cluster labels were mapped back to the CheF dataset, resulting in 19,616 labels (Fig. 1b). To ensure adequate predictive power, labels appearing in less than 50 molecules were dropped from the dataset. The final CheF dataset consisted of 99,454 molecules and their 1,543 descriptive functional labels (Fig. 1). 
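A minimal sketch of the embedding-based label consolidation step is given below. The epsilon value of 0.34 is the one reported above, while the distance metric, the DBSCAN `min_samples` setting, and the toy labels are assumptions of this sketch; the final ChatGPT summarization of each cluster is only indicated in a comment.

```python
import numpy as np
from openai import OpenAI                 # openai>=1.0 client interface; the paper's exact tooling may differ
from sklearn.cluster import DBSCAN

labels = ["antiviral", "anti-viral agent", "hcv inhibitor", "electroluminescent material"]  # toy examples

client = OpenAI()                         # requires OPENAI_API_KEY in the environment
resp = client.embeddings.create(model="text-embedding-ada-002", input=labels)
vecs = np.array([d.embedding for d in resp.data])

# Group grammatically dissimilar but semantically similar labels; eps=0.34 as reported,
# metric and min_samples are assumptions of this sketch.
clusters = DBSCAN(eps=0.34, min_samples=2, metric="cosine").fit_predict(vecs)

# Collect labels by cluster id (-1 = noise / unmerged); each cluster would then be
# summarized into a single representative label with a ChatGPT call, as described above.
grouped = {}
for lab, cid in zip(labels, clusters):
    grouped.setdefault(int(cid), []).append(lab)
```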
### Functional labels map to natural clusters in chemical structure space

Molecular function nominally arises directly from structure, and we hypothesize that any successful dataset of functional labels should also cluster in structural space. This hypothesis was based in part on the observation that chemical functionality is often retained despite minor structural modifications to a functional molecule (Maggiora et al., 2014; Patterson et al., 1996). Further, due to molecules being patented via Markush structures, structurally similar molecules should be annotated with similar patent-derived functions. To evaluate this hypothesis, we first embedded the CheF dataset in structure space by converting the molecules to molecular fingerprints (binary vectors representing a molecule's substructures), allowing for the projection of the molecules into a 2D space using t-distributed Stochastic Neighbor Embedding (t-SNE) (Fig. 2a, 2c, 2e, 2g). Then, to determine if the CheF functional labels clustered in this structural space, the average maximum fingerprint Tanimoto (Jaccard) similarity was computed between the fingerprint vectors of each molecule containing a given label; this approach provides a measure of structural similarity between molecules that have the same functional label (Fig. 2b, 2d, 2f, 2h). This value was in turn compared to the average maximum similarity computed from a random equal-sized set of molecules to determine significance. Remarkably, some 1,210 of the 1,543 labels were found to cluster significantly in structural space (t-test, P \(<\) 0.05). To give an idea of the meaning of this correlation, inherent clustering in structure space was visualized for the labels 'hcv' (hepatitis C virus), 'electroluminescence', 'serotonin', and '5-ht' (5-hydroxytryptamine, the chemical name for serotonin) (Fig. 2a, 2c, 2e, 2g). For the label 'electroluminescence' there was one large cluster containing almost only highly conjugated molecules (Fig. 2c). For 'hcv' (hepatitis C virus), one can see multiple distinct communities, in part representing antivirals targeting different mechanisms of HCV replication. Clusters were observed for NS5A inhibitors, NS3 macrocyclic and peptidomimetic protease inhibitors, and nucleoside NS5B polymerase inhibitors (Fig. 2a, S3). The observed clustering of functional labels in structure space provided evidence that the CheF dataset labels had accurately captured structure-function relationships, validating our initial hypothesis, and that they were of relatively high quality with minimal noise.

Figure 1: **Patent label creation and cleaning. (a) Molecules are linked to their patents, which are summarized into brief functional labels using ChatGPT. (b) Summarized patents are cleaned with algorithmic, embedding-based, and LLM-based methods.**

Figure 2: **CheF labels cluster in structure space.** Molecules in the CheF dataset were projected based on molecular fingerprints and colored if the selected label was contained by the molecule's set of descriptors. Max fingerprint Tanimoto similarity from each molecule containing the selected label, to the other molecules containing that label, compared against the max fingerprint Tanimoto similarity for a random subset of molecules of the same size. This measured the degree of a label's clustering in structure space compared to a random control. Many of the labels strongly cluster in structural space, demonstrating the validity of the CheF dataset. (a) Molecules containing label 'hcv'. (b) Degree of clustering for 'hcv'.
(c) Molecules containing label ’electroluminescence’. (d) Degree of clustering for ’electroluminescence’. (e) Molecules containing label ’serotonin’. (f) Degree of clustering for ’serotonin’. (g) Molecules containing label ‘5-ht’. (h) Degree of clustering for ’5-ht’. ### Label co-occurrences reveal a knowledge graph of chemical function Patents often contain a great deal of potential joint contextual information on the application, structure, and mechanism for a given compound. We attempted to determine the extent to which the CheF dataset implicitly captured this joint semantic context by assessing a graph of co-occurring functional labels (Fig. 3). Each node in the graph represents a CheF functional label, and the relative positioning of these nodes represents the number of times given labels co-occur, with co-occurring labels being placed closer together. To avoid the visual overrepresentation of extremely frequently-occurring labels (i.e., inhibitor, cancer, kinase), the size of each node was scaled proportional to its connectivity (number of unique connections), rather than scaling to the frequency of co-occurrence. Modularity-based community detection identifies clusters within a graph by isolating strongly interconnected groups from the rest of the graph. This approach was employed on the label co-occurrence graph, and the resulting clusters were summarized into representative labels using GPT-4 to provide an unbiased semantic categorization (Table S3, S4, S5). The summarized labels were curated by the authors for validity and found to be representative of the constituent labels; and were consolidated further to succinctly represent the semantic categorization (Table S3). A semantic structure emerged in the co-occurrence graph in which distinct communities such as 'Electronic, Photochemical, & Stability', 'Antiviral & Cancer', and 'Neurodegenerative, Autoimmune, Inflammation, & Respiratory' could be observed (Fig. 3, Tables S3, S4, S5). Within communities, the fine-grained semantic structure also appeared to be coherent. For example, in the local neighborhood around 'hcv' the labels 'antiviral', 'ns' (nonstructural), 'hbv' (hepatitis B virus), 'hepatitis','replication', and 'protease' were found, all of which are known to be semantically relevant to hepatitis C virus (Fig. 3). The graph of patent-derived molecular functions represents a potentially valuable resource for the linguistic evaluation of chemical functionality and ultimately for drug discovery. ### Coherence of the patent semantic graph in chemical structure space To determine the extent to which the semantic label graph mapped to structural space, the coincidence between a given label's molecules and its 10 nearest neighboring labels' molecules was determined using the average maximum fingerprint Tanimoto similarity from each molecule containing a primary label to each molecule containing any of the 10 nearest neighbor labels (with \(<\)1,000 total abundance) (Fig. 4). This value was compared to the average maximum fingerprint Figure 3: **CheF labels capture semantic co-occurrence relationships.** Node sizes correspond to number of connected edges, and edge sizes correspond to number of co-occurrences in the CheF dataset. Modularity-based community detection was used to obtain 19 distinct communities. The communities broadly coincided with the semantic meaning of the contained labels, the largest 10 of which were summarized to representative categorical labels (Tables S3, S4, S5). 
Tanimoto similarity of a random subset of molecules of the same size to determine significance relative to a null control (Fig. 4b, 4d, 4f, 4h). This comparison indicated that molecules containing the nearest 10 neighboring co-occurring labels were closer to the primary label's molecules in structure space than a random set of molecules for 1,540 of the 1,543 labels (t-test, P \(<\) 0.05), meaning that label co-occurrence distance corresponds to distance in chemical structure space. The discovery of semantically structured communities, above, indicated that users can potentially move between labels to identify new compounds. Further, the coherence between structural and label spaces now suggests that users can move between these spaces to identify labels used to assess a compound's function and vice versa (Fig. 4a, 4c, 4e, 4g).

### Label-guided drug discovery

The molecule:label pairs were used to train a multi-label classification model that could predict CheF functional labels from molecular fingerprints. On a holdout test set, this model had positive predictive power for 1,530 of the 1,543 labels, greater than 0.80 ROC-AUC for 852 of the 1,543 labels, and greater than 0.90 ROC-AUC for 356 of the 1,543 labels (Fig. 5a). Given the nature of the training task and data, it can be surmised that the model implicitly learned the co-occurrence of labels, encapsulating the contextual information from the patent semantic graph in each prediction. The model herein can thus potentially be used to comprehensively annotate chemical functionality, even when the underlying data is fragmented or incomplete. As an example, for a known hepatitis C antiviral the model strongly predicted 'hcv', 'ns' (nonstructural), and 'inhibitor' (94%, 92%, and 79%, respectively) while predicting 'protease' and 'polymerase' with lower confidence (12% and 0.6%, respectively) (Fig. 5b) (Guo et al., 2012). The lower-confidence 'protease' and 'polymerase' predictions might suggest that the nonstructural NS5A protein, rather than the NS2/3 proteases or NS5B polymerase, was the likely target, a hypothesis that has been validated outside the patent literature, in the scientific literature (Ascher et al., 2014).

Figure 4: **Functional labels coincide in structure space with their co-occurring labels.** To measure the coincidence in chemical structure space between the primary and co-occurring labels, the max fingerprint Tanimoto similarity from each molecule containing the primary label to each molecule containing any of the 10 nearest neighbor labels (with \(<\)1,000 total abundance) was computed and compared against the max fingerprint Tanimoto similarity to a random subset of molecules of the same size. (a) Molecules containing neighboring labels to 'hcv'. (b) Degree of coincidence between 'hcv' and its neighboring labels. (c) Molecules containing neighboring labels to 'electroluminescence'. (d) Degree of coincidence between 'electroluminescence' and its neighboring labels. (e) Molecules containing neighboring labels to 'serotonin'. (f) Degree of coincidence between 'serotonin' and its neighboring labels. (g) Molecules containing neighboring labels to '5-ht'. (h) Degree of coincidence between '5-ht' and its neighboring labels.

This comprehensive model-based annotation of chemicals potentially allows for the discovery of new drugs from a simple label-guided search.
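A minimal sketch of such a query is given below. The hidden-layer sizes and dropout follow the architecture described in the Methods, while the fingerprint type and length, the label-to-index mapping, and the candidate molecules are assumptions of this sketch; in practice the trained weights would be loaded rather than initialized at random.

```python
import numpy as np
import torch
import torch.nn as nn
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

N_LABELS = 1543            # number of CheF labels
FP_BITS = 2048             # fingerprint length is an assumption of this sketch

class CheFClassifier(nn.Module):
    """Multi-label classifier matching the architecture described in the Methods."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FP_BITS, 512), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(512, 256), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(256, N_LABELS),
        )
    def forward(self, x):
        return self.net(x)   # raw logits; trained with BCEWithLogitsLoss

def fingerprint(smiles):
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=FP_BITS)  # Morgan/ECFP4 assumed
    arr = np.zeros((FP_BITS,), dtype=np.float32)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return torch.from_numpy(arr)

model = CheFClassifier()                       # in practice, load the trained weights here
model.eval()
label_index = {"5-ht": 0}                      # hypothetical mapping from label to output index
candidates = {"caffeine": "Cn1cnc2c1c(=O)n(C)c(=O)n2C"}
with torch.no_grad():
    scores = {name: torch.sigmoid(model(fingerprint(smi)))[label_index["5-ht"]].item()
              for name, smi in candidates.items()}
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```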
For example, the label '5-ht' (5-hydroxytryptamine, the chemical name for serotonin) was used to query the test set compounds, and a ranked list of the top 10 molecules most highly predicted to be relevant to the '5-ht' label were obtained, all with greater than 25% confidence (Fig. 5c). Five of these were already patented as serotonin receptor ligands (5-HT\({}_{1}\), 5-HT\({}_{3}\), 5-HT\({}_{6}\), 5-HT\({}_{7}\)), while the remaining five were patented without any reference to the serotonin receptor. Nonetheless, these latter compounds were found to have applications in treating anti-psychotic disorders, Alzheimer's disease, sexual dysfunction, and depression, all of which of course have associations with serotonin and its receptor. Similarly, the synonymous label'serotonin' was used to query the test set compounds, and a ranked list of the top 10 molecules was once again obtained. Of these, seven were patented as serotonin receptor ligands (5-HT\({}_{1}\), 5-HT\({}_{2}\), 5-HT\({}_{6}\)), two of which were also found in the '5-ht' top 10 (Fig. 5d). The remaining molecules were patented for dopamine receptor ligand binding, anxiety, and electroluminescence (Fig. 5d). The fact that different labels for the same compound led to very similar (and sometimes overlapping) outcomes is an internal validation of the method. Moving with facility between chemical and label spaces suggests experiments in which the identified compounds might bind to serotonin receptors or otherwise be synergistic with the function of serotonin. To more broadly examine the possibilities for label-based drug discovery by CheF, a set of 3,242 Stage-4 FDA approved drugs obtained from the OpenTargets database were passed through our model to obtain predicted functional labels (Fig. S4) (Ochoa et al., 2021). Some 15 of the top 16 drugs most highly predicted for 'hcv' were approved Hepatitis C Virus (HCV) antivirals, with the misprediction being Remdesivir, a SARS-CoV-2 antiviral that was originally investigated for HCV treatment. The remaining mispredictions in the top 50 results included 9 antivirals (1 HBV, 5 HIV, 1 VZV, 1 HSV, 1 CMV, 1 COVID-19), 2 protease inhibitors, 5 polymerase inhibitors, and 2 integrase inhibitors. Further, 9 of the mispredictions were ACE inhibitors and 2 were BTK inhibitors, both of which are peripherally associated with HCV through liver fibrosis mitigation and HCV reactivation, respectively (Corey et al., 2009; Mustafayev and Torres, 2022). Beyond showing the power of label-guided discovery, the remdesivir example suggests that label-guided drug discovery may serve as a useful paradigm for rapid antiviral repurposing to help mitigate future pandemics. ## 4 Discussion While _in silico_ drug discovery often proceeds through virtual screens, computational design, or property prediction and optimization (Hughes et al., 2011), we set out to investigate the practicality of label-guided drug discovery. We developed a ChatGPT patent summarization and embedding-based data cleaning pipeline to create a dataset of just under 100K diverse molecules and their patent-derived functional labels, the Chemical Function (CheF) dataset. A large proportion ( 78%) of the functional labels mapped to natural clusters in chemical structure space, indicating a useful coherence between chemical structure and label-derived function. Moreover, there was an intrinsic global semantic structure, with label co-occurrences corresponding to broader fields of functionalities. 
And finally, the fact that the label co-occurrence graph mapped with high fidelity onto structural space (99.8% of labels) indicated that labels can potentially be used to triangulate structures with novel functionalities. To encapsulate the relationship between structural and functional spaces, the CheF dataset was used to train a neural network to predict functional labels from molecular fingerprints. This model successfully predicted 1,530 of the 1,543 functional labels across the test set. In consequence, predicted functional annotations could be used to uncover potential mechanisms, carry out label-guided drug searches, and annotate predicted functionalities for a set of FDA-approved small molecule drugs, demonstrating the possibilities for using label-guided searches to repurpose drugs or identify combination therapies. Since the CheF dataset is scalable to the entire 20M molecule database, we anticipate that many of these predictions will only get better into the future. It should be noted that the CheF dataset has intrinsic limitations due to the nature of patent data. The dataset is biased toward patented molecules, and may have sparse representation of molecules with high utility but low patentability. Moreover, prophetic claims can lead to unconfirmed associations between molecules and their functionalities. These issues can potentially be resolved by the inclusion of scientific literature-derived labels, such as those available via PubMed. The CheF dataset and its analysis has yielded one of the first examples for how to use machine learning to perform robust label-guided small molecule discovery and repurposing. Models trained on CheF can now be used to automatically annotate chemicals, examine other functional features of drugs, such as potential side effects, and down-select candidates for medicinal chemistry and high-throughput screening. Moving between literature and physical spaces represents a promising paradigm for drug discovery in the age of machine learning. ## 5 Methods **Database creation.** The SureChEMBL database was shuffled and converted to chiral RDKit-canonicalized SMILES strings, removing malformed strings unable to be canonicalized (Weininger, 1988; Papadatos et al., 2016; Landrum et al., 2013). SMILES strings were converted to InChI keys Figure 5: **Drug discovery by predicting functional labels from structures. (a) Test set ROC-AUC and PR-AUC results from a neural network trained to predict CheF labels from molecular fingerprints. Labels sorted in descending order by ROC-AUC, displaying every 20 labels for clarity. Dotted black line indicates ROC-AUC random threshold. Across all 1,543 labels, the average ROC-AUC was 0.81 and average PR-AUC was 0.12. (b) The model implicitly captures the co-occurrence of labels and comprehensively annotates chemical functionality. Shown is a test set molecule patented for Hepatitis C antiviral treatment; true positives in green, false positives in red. The model highly predicts ’hcv’, ’ns’, and ‘inhibitor’, which in combination with the low-predicted ’protease’ and ’polymerase’ can be used to infer that the molecule acts on the NSSA to inhibit HCV replication, revealing a plausible mechanism not disclosed in the patent. (c-d) Label-based search identifies drug candidates. Shown are the 10 test set molecules most highly predicted to be labeled with ’5-ht’ or ‘serotonin’; true positives are shown in green, false positives in red. 
The mechanistically plausible false positives represent opportunities for drug discovery and repurposing, especially considering these have been patented for other neurological uses (i.e., anti-psychotic, Alzheimer’s, sexual dysfunction, & depression).** and used to obtain PubChem CIDs (Kim et al., 2023). To avoid excess summarization costs, and to avoid label dilution from over-patented molecules, only molecules with 10 or less patents were included. The first 100,000 molecules were selected as the dataset. For each patent ID, the patent title, abstract, and description were scraped from Google Scholar and then cleaned. The patent title, abstract, and first 3500 characters of the description were summarized into brief functional label descriptors using ChatGPT (GPT-3.5-turbo) from July 15th, 2023. GPT-3.5-turbo was chosen over GPT-4 due to its cheaper API cost and faster computation (Brown et al., 2020; OpenAI, 2023). The approximate cost per molecule was $0.005 using GPT-3.5-turbo. Responses from ChatGPT were converted into sets of labels and linked to their associated molecules. The summarizations were cleaned, split into individual words, and converted to lowercase. Plural words were converted to singular if their corresponding pair existed across all of the labels. The cleaned dataset was saved resulting in 29,854 unique labels for 99,454 molecules. To consolidate labels by semantic meaning, the vocabulary was embedded using OpenAI's text-embedding-ada-002 and clustered to group labels by similarity in word embedding space. DBSCAN clustering was performed on the word embeddings with a sweeping epsilon value (Ester et al., 1996). The authors manually found the epsilon for optimal clustering, determined to be at the minimum number of clusters without incorrect clusters appearing (e.g., avoiding the merging of antiviral, antibacterial, & antifungal). The optimal epsilon was found to be 0.34 for the dataset considered herein, resulting in a consolidation from 29,854 to 20,030 labels. GPT-3.5-turbo was used to create representative labels for each cluster (Brown et al., 2020). There was one very large cluster containing only IUPAC-derived structural terms whose constituent labels were removed to reduce excessive non-generalizable labels. All labels appearing in less than 50 molecules were dropped to ensure that all labels have sufficient predictive power. The final labels were mapped to their respective locations in the molecule dataset, resulting in a 99,454-molecule dataset with 1,543 unique functional labels known as the Chemical Functionality (CheF) dataset. This workflow was visualized with BioRender (Fig. 1). **Label space graph.** Label pairs co-occurring for the same molecule were counted across the entire CheF dataset. These counts were used as edge weights between label nodes to create a graph with Gephi (Bastian et al., 2009). The graph was further visualized using Gephi's force atlas, nooverlap, and label adjust methods (all default parameters). Modularity-based community detection was performed on this graph using 0.5 resolution resulting in 19 communities. The communities were manually reviewed to determine representative categorical labels. **Molecular structure space t-SNE.** The 100k molecular fingerprints were t-SNE projected into a 2-dimensional space using sckit-learn's t-SNE implementation with 500 perplexity. Molecules containing specific labels were colored. An interactive version of this has been hosted at chefdb.app. 
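A minimal sketch of this projection step is shown below; the fingerprint type and length are assumptions (the Methods specify only "molecular fingerprints"), the SMILES strings are stand-ins for the ~100K CheF molecules, and the perplexity of 500 is the value reported above, capped here so the toy example still runs.

```python
import numpy as np
from sklearn.manifold import TSNE
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fingerprint(smiles, n_bits=2048):
    # Morgan/ECFP4 fingerprints of assumed length; the study's exact choice may differ.
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.float64)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

smiles_list = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]   # stand-ins for the CheF molecules
X = np.vstack([fingerprint(s) for s in smiles_list])

# 2-D projection with the reported perplexity of 500; perplexity must stay below the
# number of samples, so it is capped for this toy input.
tsne = TSNE(n_components=2, perplexity=min(500, len(smiles_list) - 1), init="random")
coords = tsne.fit_transform(X)
```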
**Measuring structure space coincidence between label and its neighbors.** For each molecule containing a primary label, the maximum fingerprint Tanimoto similarity to any of its 10 neighboring labels was computed. A perfect co-occurrence / overlap between the primary label and the 10 neighboring labels would result in a value of 1.0. The null co-occurrence was calculated by computing the maximum fingerprint Tanimoto similarity to a random set of molecules of the same size. Significance was computed with an independent 2-sided t-test. Limiting the maximum abundance to 1,000 molecules for the neighboring was necessary to avoid polluting the analysis, as hyper-abundant labels would force the metric to 1.0. **Model training and inference.** A multi-label classification model was trained to predict the 1,543 CheF labels from molecular fingerprints of the 99,454 chemicals in the dataset. A random 10% test set was held out from all model training. The remaining 90% of the data was used to determine the optimal model architecture using 5-fold cross validation. The model with the lowest validation loss (BCEWithLogitsLoss) had 2 hidden layers with 512 and 256 dimensions respectively, trained for 5 epochs with 0.2 dropout, batch size of 32, and learning rate of 0.001. The test set was evaluated on this model, in which ROC-AUC and PR-AUC was calculated for each label as well as the macro average. The OpenTargets database was downloaded and filtered to contain only small molecule drugs that have reached Stage 4. These were then inferenced to obtain predicted CheF functional labels for all FDA-approved small molecule drugs (Ochoa et al., 2021). #### Acknowledgments The authors acknowledge the Biomedical Research Computing Facility at The University of Texas at Austin for providing high-performance computing resources. We would also like to thank AMD for the donation of critical hardware and support resources from its HPC Fund. This work was supported by the Welch Foundation (F-1654 to A.D.E., F-1515 to E.M.M.), the Blumberg Centennial Professorship in Molecular Evolution, the Reeder Centennial Fellowship in Systematic and Evolutionary Biology at The University of Texas at Austin, and the NIH (R35 GM122480 to E.M.M.). The authors would like to thank Aaron L. Feller and Charlie D. Johnson for useful criticism and discussion during the development of this project. ### Ethics statement Consideration for the dual use of machine learning-based chemical models often focuses on the identification of toxic chemicals and drugs of abuse. As patents typically describe the beneficial applications of molecules, it is unlikely that a model trained on CheF labels will be able to identify novel toxic compounds. Predicted functional labels for the chemical weapons VX and mustard gas were obtained with our model, containing no obvious indications of malicious properties. On the contrary, drugs of abuse are more easily identifiable, as the development of neurological compounds remains a lucrative objective. 5-MeO-DMT, LSD, fentanyl, and morphine all had functional labels of their primary mechanism predicted with moderate confidence. However, many benign molecules also predict these same labels, indicating that it may be quite challenging to intentionally discover novel drugs of abuse using CheF labels. ### Reproducibility statement The CheF dataset has been made publicly available under the MIT license at [https://doi.org/10.5281/zenodo.8350175](https://doi.org/10.5281/zenodo.8350175). 
An interactive visualization of the dataset can be found at chefdb.app. All code and data used herein may be found at [https://github.com/kosonocky/CheF](https://github.com/kosonocky/CheF).
2309.15052
Singlet-doublet Dirac fermion dark matter from Peccei-Quinn symmetry
Weakly Interacting Massive Particles (WIMPs) and axions are arguably the most compelling dark matter (DM) candidates in the literature. Here, we consider a model where the PQ symmetry solves the strong CP problem, generates radiatively Dirac neutrino masses, and gives origin to multicomponent dark sector. Specifically, scotogenic Dirac neutrino masses arise at one-loop level. The lightest fermionic mediator acts as the second DM candidate due to a residual $Z_2$ symmetry resulting from the PQ symmetry breaking. The WIMP DM component resembles the well-known singlet-doublet fermion DM. While the lower WIMP dark mass region is usually excluded, our model reopens that portion of the parameter space (for DM masses below $\lesssim 100$ GeV). Therefore, we perform a phenomenological analysis that addresses the constraints from direct searches of DM, neutrino oscillation data, and charged lepton flavor violating (LFV) processes. The model can be tested in future facilities where DM annihilation into SM particles is searched for by neutrino telescopes.
Robinson Longas, Andres Rivera, Cristian Ruiz, David Suarez
2023-09-26T16:30:16Z
http://arxiv.org/abs/2309.15052v2
# Singlet-doublet Dirac fermion dark matter from Peccei-Quinn symmetry ###### Abstract Weakly Interacting Massive Particles (WIMPs) and axions are arguably the most compelling dark matter (DM) candidates in the literature. Here, we consider a model where the PQ symmetry solves the strong CP problem, generates radiatively Dirac neutrino masses, and gives origin to multicomponent dark sector. Specifically, scotogenic Dirac neutrino masses arise at one-loop level. The lightest fermionic mediator acts as the second DM candidate due to a residual \(Z_{2}\) symmetry resulting from the PQ symmetry breaking. The WIMP DM component resembles the well-known singlet-doublet fermion DM. While the lower WIMP dark mass region is usually excluded, our model reopens that portion of the parameter space (for DM masses below \(\lesssim 100\) GeV). Therefore, we perform a phenomenological analysis that addresses the constraints from direct searches of DM, neutrino oscillation data, and charged lepton flavor violating (LFV) processes. The model can be tested in future facilities where DM annihilation into SM particles is searched for by neutrino telescopes. Dark matter, Axions, Neutrino masses Introduction There is some evidence that supports the existence of Dark Matter (DM) and provides a way to study physics beyond the Standard Model (SM) [1; 2; 3; 4; 5; 6]. However, the nature of DM remains obscure as its detection continues as one of the big open problems nowadays. Many DM candidates have been proposed over the last years, particularly the Weakly Interacting Massive Particles (WIMPs) [7]. A specific example is the singlet-doublet Dirac fermion presented in Ref [8]. In that work, the author showed that, for DM masses below \(\sim 100\) GeV, the region of the parameter space of the model was excluded because of the large coupling between the DM particle and the \(Z\) boson. Nonetheless, in this work, we show how this region is recovered. On the other hand, the non-observation of CP violation in the Quantum Chromodynamics (QCD) Lagrangian represents one of the most active research topics in high-energy physics, both theoretically and experimentally. From a theoretical point of view, the absence of CP violation in the QCD Lagrangian is dynamically explained by invoking the Peccei Quinn (PQ) mechanism [9], which considers the spontaneous breaking of an anomalous global U(1) symmetry with the associated pseudo-Nambu-Goldstone boson, the (QCD) axion [10; 11]. The axion is a promising candidate for being the main component of DM of the Universe thanks to a variety of production mechanisms [12]; for instance, via the vacuum misalignment [13; 14; 15]. Besides, it is remarkable that the physics behind the PQ mechanism can also explain other open questions such as neutrino masses [16; 17; 18; 19; 20; 21; 22]. For example, recent analysis that considers the PQ mechanism as responsible for the neutrino masses reveals that it is also possible to consistently provide a set of multicomponent scotogenic models with Dirac neutrinos [23]. Specifically, in these scenarios, one-loop Dirac neutrino masses are generated through the \(d=5\) effective operator \(\bar{L}\bar{H}N_{R}\sigma\)[24; 25] once the axion field \(\sigma\) develops a vacuum expectation value (VEV), while the contributions from the tree-level realizations of such operator are forbidden due to the charge assignment. As a further consequence of the PQ symmetry, the residual discrete symmetry stabilizes the lightest particle that mediates the neutrino masses. 
Since such a particle must be electrically neutral, this setup also accounts for a second DM species [26; 27; 28; 29; 30]. In this work, we enlarge the SM symmetry group with two new global symmetries: a \(\rm U(1)_{PQ}\) and a \(\rm U(1)_{L}\) lepton number. Moreover, we add three Weyl singlets \(\nu_{i}^{c}\) (\(i=1,2,3\)) that correspond to the right-handed partners of the SM neutrinos. Additionally, we consider one \(\rm SU(2)_{L}\) fermion singlet \(N\), two \(\rm SU(2)_{L}\) fermion doublets, \(\eta\), \(\psi\) and three scalar singlets \(S_{\alpha}\) (\(\alpha=1,2,3\)). Also, we consider a scalar singlet \(\sigma\) that contains the axion field and one exotic chiral down-type quark \(D\) that guarantees the realization of the hadronic KSVZ axion model [31; 32]. This model was cataloged as T1-2-B in Ref [23], where the spontaneous symmetry breaking of the PQ symmetry provides a mechanism for one-loop Dirac neutrino masses. In this paper, we perform a phenomenological analysis of the model by determining the viable parameter space from direct detection (DD) experiments, lepton flavor violating (LFV) processes, DM relic density, neutrino physics, and indirect detection searches in neutrino telescopes. The model easily satisfies these constraints. Also, a considerable portion of its parameter space will be tested by future experiments. The WIMP DM component is a mixture between the singlet and the doublet Dirac fermions that resembles the well-known singlet-doublet fermion DM [8; 33]. However, we show that since new DM annihilation channels lead to the correct relic density, the lower mass region (for DM mass below \(\lesssim 100\) GeV) is reopened. This paper is organized as follows. Section II describes the model and its constraints. Section III shows the DM phenomenology. Section IV contains numerical analysis and discusses the results. Finally, section V concludes. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline & \(L_{i}\) & \(e_{i}^{c}\) & \(\nu_{i}^{c}\) & \(N\) & \(N^{c}\) & \(\psi\) & \(\psi^{c}\) & \(\eta\) & \(\eta^{c}\) & \(S_{\alpha}\) & \(D\) & \(D^{c}\) & \(\sigma\) \\ \hline \hline \(\rm U(1)_{L}\) & 1 & -1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 & 0 & 0 & 0 & 0 \\ \hline \(\rm U(1)_{PQ}\) & 2 & -2 & 0 & -3 & 3 & -1 & 1 & 3 & -3 & 3 & -1 & -1 & 2 \\ \hline \hline \(\rm Z_{2}\) & + & + & + & - & - & - & - & - & - & - & - & + \\ \hline \end{tabular} \end{table} Table 1: Weyl particle content of the model with its lepton and PQ charge assignments. We also show the transformation under the remnant \(Z_{2}\) symmetry. The model We add to SM the right-handed partners of the neutrinos, the Weyl singlets \(\nu_{i}^{c}\) (\(i=1,2,3\)). Additionally, we consider as new particle content of the model: one SU(2)\({}_{\rm L}\) fermion singlet \(N\), two SU(2)\({}_{\rm L}\) fermion doublets, \(\eta\), \(\psi\) and three scalar singlets \(S_{\alpha}\) (\(\alpha=1,2,3\)). Those fields are required for the one-loop realization of neutrino masses. Also, we consider a scalar singlet \(\sigma\) that contains the axion field and one exotic chiral down-type quark \(D\) that guarantees the realization of the hadronic KSVZ axion model [31; 32]. The Weyl particle content of the model and the charge assignments under the global symmetries, U(1)\({}_{\rm L}\) and U(1)\({}_{\rm PQ}\), are displayed in Table 1. Notice that in this model, the SM model leptons have PQ charges, and the SM Higgs and the ordinary quarks are neutral under the global symmetries. 
The most general Lagrangian invariant under such symmetries is: \[\mathcal{L}\supset \ \left[\right.M_{N}NN^{c}+M_{\psi}\psi\cdot\psi^{c}+M_{\eta}\eta \cdot\eta^{c}+\lambda_{1}\psi\cdot\eta\sigma^{*}+\lambda_{2}\eta\cdot\psi^{c} \sigma+\kappa_{1}\eta\cdot HN^{c}+\kappa_{2}\eta^{c}\cdot\tilde{H}N\] \[+\left.h_{i\alpha}\psi^{c}\cdot L_{i}S_{\alpha}^{*}+f_{i\alpha} \nu_{i}^{c}NS_{\alpha}+y_{Q}D^{c}D\sigma+{\rm h.c}\left.\right]-\mathcal{V} \left(H,S_{\alpha},\sigma\right)\right.\,, \tag{1}\] where \((\cdot)\) means the dot product with the standard SU(2)\({}_{\rm L}\) metric, \(\tilde{H}=i\sigma_{2}H^{*}\) and \(\mathcal{V}\left(H,S_{\alpha},\sigma\right)\) is the scalar potential. \(h_{i\alpha}\) and \(f_{i\alpha}\) are the Yukawa couplings relevant for neutrino masses. The Yukawa couplings \(\lambda_{i}\) (\(i=1,2\)) control the interactions between the axion field and the Weyl fermions. When PQ symmetry is broken, they provide mass terms for dark sector fermions. On the other hand, the Yukawa couplings \(\kappa_{i}\) (\(i=1,2\)) mix the fermion singlet and the fermion doublet states. For this reason, they are crucial for the WIMP DM phenomenology [8; 33]. The scalar potential \(\mathcal{V}\left(H,S_{\alpha},\sigma\right)\) reads as: \[\mathcal{V}\left(H,S_{\alpha},\sigma\right)= \ -\mu_{1}^{2}|H|^{2}+\lambda_{H}|H|^{4}+\mu_{S_{\alpha}}^{2}|S_{ \alpha}|^{2}+\lambda_{S_{\alpha}}|S_{\alpha}|^{4}-\mu_{\sigma}^{2}|\sigma|^{2} +\lambda_{\sigma}|\sigma|^{4}\] \[+\lambda_{HS_{\alpha}}|H|^{2}|S_{\alpha}|^{2}\,. \tag{2}\] Here, we neglect the terms \(|H|^{2}|\sigma|^{2}\) and \(|S_{\alpha}|^{2}|\sigma|^{2}\) by rendering the respective quartic couplings small enough to avoid the scalar mixing between \(H\) and \(\sigma\). This mixing plays only a role for inflation signatures [34; 35]. We demand the stability of the scalar potential. It is bounded from below by imposing the copositivity conditions [36; 37]: \[\lambda_{H}\geq 0,\quad\lambda_{\sigma}\geq 0,\quad\lambda_{S_{ \alpha}}\geq 0,\] \[-\lambda_{HS_{\alpha}}+2\sqrt{\lambda_{H}\lambda_{S_{\alpha}}}\geq 0,\quad\sqrt{\lambda_{H}\lambda_{\sigma}}\geq 0,\quad\sqrt{\lambda_{S_{\alpha}} \lambda_{\sigma}}\geq 0, \tag{3}\] \[\lambda_{HS_{\alpha}}+\sqrt{\lambda_{\sigma}}+2\Bigg{[}\sqrt{ \lambda_{H}\lambda_{S_{\alpha}}\lambda_{\sigma}}+\sqrt{\Big{(}\lambda_{HS_{ \alpha}}+2\sqrt{\lambda_{H}\lambda_{S_{\alpha}}}\Big{)}\lambda_{\sigma}\sqrt{ \lambda_{H}\lambda_{S_{\alpha}}}}\Bigg{]}\geq 0\,,\;(\alpha=1,2,3)\,,\] together with \(\mu_{1}^{2}\geq 0\), \(\mu_{\sigma}^{2}\geq 0\) and \(\mu_{S_{\alpha}}^{2}\geq 0\) (\(\alpha=1,2,3\)). We write the scalar fields as: \[\sigma=\frac{1}{\sqrt{2}}\left(\rho+v_{\sigma}\right)e^{ia/v_{ \sigma}},\quad\;H=\begin{pmatrix}0\\ \frac{v+h}{\sqrt{2}}\end{pmatrix},\quad\;S_{\alpha}\,, \tag{4}\] where \(\rho\) stands for the radial component of the field \(\sigma\) whose mass is set by the PQ symmetry breaking scale \(v_{\sigma}\), whereas \(a\) is the CP-odd component of \(\sigma\) scalar that corresponds to the QCD axion field. In our notation, \(h\) is the SM Higgs boson with a VEV \(v\sim 246\) GeV. Also, at low energies, the scalar spectrum comprises three \(Z_{2}\)-odd scalars that are assumed to be in the diagonal basis \((S_{1},S_{2},S_{3})\), \[m_{S}^{2}=\begin{pmatrix}\mu_{S_{1}}^{2}+\frac{\lambda_{HS_{1}}} {2}v^{2}&0&0\\ 0&\mu_{S_{2}}^{2}+\frac{\lambda_{HS_{2}}}{2}v^{2}&0\\ 0&0&\mu_{S_{3}}^{2}+\frac{\lambda_{HS_{3}}}{2}v^{2}\end{pmatrix}\,. 
\tag{5}\] Although the lightest scalar state could be a proper DM candidate, we focus on fermionic DM. On the other hand, since both scalars \(\sigma\) and \(H\) acquire VEV, the Yukawa couplings: \(\lambda_{j},\kappa_{j}\) (\(j=1,2\)) in Eq. (1) mix the Weyl singlet and doublets. \(h_{i\alpha}\) and \(f_{i\alpha}\) represent pure interaction terms that generate Dirac neutrino mass terms (also, they may affect LFV processes). After symmetry breaking, the mass spectrum contains three neutral and two charged Dirac fermions. In the basis \(\Sigma_{L}=\left(\psi^{-}\;\;\eta^{-}\right)^{T}\), \(\Sigma_{R}=\left(\psi^{-c}\;\;\eta^{-c}\right)^{T}\) we have the mass matrix: \[\mathbf{M}_{\Sigma^{\pm}}=\begin{pmatrix}M_{\psi}&\frac{\kappa_ {1}v_{\sigma}}{\sqrt{2}}\\ \frac{\kappa_{2}v_{\sigma}}{\sqrt{2}}&M_{\eta}\end{pmatrix}\,. \tag{6}\] It follows that the charged fermion spectrum of this model comprises two states \(\chi_{1,2}^{\pm}\) with masses \(m_{\chi_{1,2}^{\pm}}=\frac{1}{2}\left[M_{\psi}+M_{\eta}\mp\sqrt{\left(M_{\psi} -M_{\eta}\right)^{2}+2\kappa_{1}\kappa_{2}v_{\sigma}^{2}}\right]\), where \(m_{\chi_{2}}^{\pm}>m_{\chi_{1}}^{\pm}\). For the neutral sector, in the basis \(\Xi_{L}=\left(N\;\;\psi^{0}\;\;\eta^{0}\right)^{T}\), \(\Xi_{R}=\left(N^{c}\;\;\psi^{0c}\;\;\eta^{0c}\right)^{T}\), we have the mass matrix: \[\mathbf{M}_{\Xi^{0}}=\begin{pmatrix}M_{N}&0&\frac{\kappa_{2}v}{ \sqrt{2}}\\ 0&M_{\psi}&\frac{\lambda_{1}v_{\mu}}{\sqrt{2}}\\ \frac{\kappa_{1}v}{\sqrt{2}}&\frac{\lambda_{2}v_{\mu}}{\sqrt{2}}&M_{\eta} \end{pmatrix}\,, \tag{7}\] that is diagonalized by a biunitary transformation, \(\chi_{L}=V_{L}\Xi_{L}\) and \(\chi_{R}=V_{R}\Xi_{R}\). Then, we obtain three neutral Dirac fermion mass eigenstates where their masses are given by: \[m_{\chi_{i}}^{\rm diag}\equiv{\rm diag}\left(m_{\chi_{1}},m_{ \chi_{2}},m_{\chi_{3}}\right)=V_{L}^{*}\mathbf{M}_{\Xi^{0}}V_{R}^{\dagger}\,. \tag{8}\] Our WIMP DM candidate is the lightest neutral state1, \(\chi_{1}\). Notice that the spectrum is quite similar to the one shown in Ref. [33]; but, in our model, there are two SU(2)\({}_{\rm L}\) vector-like fermions and a mass term that comes from the breaking of the PQ symmetry. For this reason, we have three neutral fermionic dark particles instead of two. Footnote 1: From now on, we set \(\chi\equiv\chi_{1}\) to represent the fermionic DM candidate. ### Neutrino masses The Yukawa Lagrangian in Eq. (1) leads to one-loop neutrino masses via the couplings \(f_{i\alpha}\) y \(h_{i\alpha}\) as displayed in Fig. 1. In the low-momentum limit, the neutrino mass matrix reads as: \[M_{ij}^{\nu}=\sum_{\alpha=1}^{3}h_{i\alpha}\Lambda_{\alpha}f_{ j\alpha}\Longleftrightarrow M^{\nu}=h\Lambda f^{T}\,, \tag{9}\] Figure 1: Feynman diagram for one-loop Dirac neutrino masses in the interaction basis. where the loop integral factor \(\Lambda_{\alpha}\) is given by: \[\Lambda_{\alpha}=\frac{1}{16\pi^{2}}\sum_{l=1}^{3}\left(V_{R}\right)_{l2}^{*}\ \left(V_{L}\right)_{l1}^{*}\ m_{\chi_{l}}\times\left[\frac{m_{\chi_{l}}^{2}\ln \left(m_{\chi_{l}}^{2}\right)-m_{S_{\alpha}}^{2}\ln\left(m_{S_{\alpha}}^{2} \right)}{m_{\chi_{l}}^{2}-m_{S_{\alpha}}^{2}}\right]\,, \tag{10}\] and the convergence of such a loop factor is guaranteed by the identity: \[\sum_{l=1}^{3}\left(V_{R}\right)_{l2}\ \left(V_{L}\right)_{l1}\ m_{\chi_{l}}=0\,. \tag{11}\] The Dirac neutrino mass matrix in Eq. 
(9) is diagonalized via a biunitary transformation \(m=U^{\dagger}M^{\nu}V\), where \(U\) and \(V\) are unitary matrices and \(m=\text{diag}(m_{1},m_{2},m_{3})\) is the diagonal matrix that contains three (or two) non-zero eigenvalues. They correspond to the masses of the neutrino mass eigenstates. In the basis where the charged lepton mass matrix is diagonal, the unitary matrix \(U\) is identified with the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix [38], whereas \(V\) is assumed diagonal without lost of generality. Furthermore, we further simplify our analysis by imposing one massless neutrino, \(m_{1}=0\) in the case of normal hierarchy (NH) and \(m_{3}=0\) in the case of inverted hierarchy (IH). The Yukawa couplings \(h_{i\alpha}\) are written in terms of \(f_{i\alpha}\) and the neutrino observables as (see appendix A for details): \[h=U_{\text{PMNS}}\sqrt{D}R\sqrt{D}\left(f^{T}\right)^{-1}\Lambda^{-1}\;, \tag{12}\] where, \[R=\left\{\begin{array}{l}\left(\begin{array}{ccc}0&0&0\\ 0&1&0\\ 0&0&1\end{array}\right)\ \text{for}\ \,\text{NH}\;,\\ \\ \left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ 0&0&0\end{array}\right)\ \text{for}\ \,\text{IH}\;,\\ \end{array}\right. \tag{13}\] and: \[\sqrt{D}=\left\{\begin{array}{l}\text{diag}(\sqrt{v},\sqrt{m_{2}},\sqrt{m_ {3}})\ \text{for}\ \,\text{NH}\,,\\ \text{diag}(\sqrt{m_{1}},\sqrt{m_{2}},\sqrt{v})\ \text{for}\ \,\text{IH}\,. \end{array}\right. \tag{14}\] where \(v\) is some non-vanishing arbitrary energy scale. ### Standard Model Constraints Current and coming experiments impose constraints on SM observables with sensitivity to new physics. One of them is the decay of SM Higgs boson into invisible particles. In the present model, the Higgs of the SM, \(h\), interacts with the singlet scalars \(S_{\alpha}\) through the scalar couplings \(\lambda_{HS_{\alpha}}\) and with the neutral Dirac fermion via the Yukawa couplings \(\kappa_{1}\) and \(\kappa_{2}\). Therefore, in the low mass regime for DM masses lighter than \(m_{h}/2\), the SM Higgs could decay to a fermion DM pair2. The partial decay width is given by: Footnote 2: We will not take into account the invisible Higgs decay into a scalar pair because we consider \(\lambda_{HS_{\alpha}}\sim 10^{-4}\) and consequently the corresponding amplitude width is negligible. \[\Gamma\left(h\rightarrow\chi\chi\right)\simeq \frac{3}{32\pi m_{h}}\sqrt{1-\frac{m_{\chi}^{2}}{m_{h}^{2}}}\left[ |V_{R_{11}}|^{2}|V_{L_{13}}|^{2}\left(m_{h}^{2}-2m_{\chi}^{2}\right)\left( \kappa_{1}^{2}+\kappa_{2}^{2}\right)\right.\] \[\left.-2m_{\chi}^{2}\kappa_{1}^{2}\kappa_{2}^{2}\left(V_{R_{13}} ^{*}V_{R_{11}}V_{L_{11}}^{*}V_{L_{13}}+V_{L_{13}}^{*}V_{L_{11}}V_{R_{11}}^{*} V_{R_{13}}\right)\right]\,, \tag{15}\] where \(m_{h}\) and \(m_{\chi}\) are the SM Higgs and the DM masses respectively. The branching ratio for the Higgs invisible decay is given by: \[\mathcal{B}_{h\rightarrow\text{inv}}=\frac{\Gamma\left(h\rightarrow\chi\chi \right)}{\Gamma_{h,\text{SM}}+\Gamma\left(h\rightarrow\chi\chi\right)}\,, \tag{16}\] where \(\Gamma_{h,\text{SM}}\sim 4.1\) MeV is the total decay width of the Higgs boson in the SM. The current limit from the Higgs invisible decay width is given by ATLAS \(\mathcal{B}_{h\rightarrow\text{inv}}<0.13\)[39] and CMS \(\mathcal{B}_{h\rightarrow\text{inv}}<0.19\)[40]. The prospects from the High Luminosity LHC (HL-LHC), \(\mathcal{B}_{h\rightarrow\text{inv}}<0.019\), and the Future Circular Colliders (FCC), \(\mathcal{B}_{h\rightarrow\text{inv}}<0.00024\), are summarized in Ref [41]. 
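As a numerical illustration, the sketch below builds the neutral mass matrix of Eq. (7), performs the biunitary diagonalization of Eq. (8) via a singular value decomposition, checks the identity of Eq. (11), and evaluates Eqs. (15)-(16) as printed; the benchmark parameter values and the SVD ordering convention are assumptions of this sketch, not a benchmark point of the analysis.

```python
import numpy as np

# Hypothetical benchmark values (GeV), chosen only to exercise the formulas.
M_N, M_psi, M_eta = 50.0, 600.0, 650.0
kappa1, kappa2 = 0.10, 0.10
lam1, lam2 = 1.0e-9, 1.0e-9
v, v_sigma = 246.0, 1.0e11
m_h, Gamma_h_SM = 125.0, 4.1e-3

# Neutral fermion mass matrix in the (N, psi0, eta0) basis, Eq. (7).
M = np.array([
    [M_N,                     0.0,                          kappa2 * v / np.sqrt(2)],
    [0.0,                     M_psi,                        lam1 * v_sigma / np.sqrt(2)],
    [kappa1 * v / np.sqrt(2), lam2 * v_sigma / np.sqrt(2),  M_eta],
])

# Biunitary (here real) diagonalization via SVD, Eq. (8); rows of V_L, V_R are mass
# eigenstates, reordered so that chi_1 is the lightest state (the DM candidate).
U, S, Wh = np.linalg.svd(M)
order = np.argsort(S)
masses = S[order]
V_L = U.T[order, :]
V_R = Wh[order, :]
m_chi = masses[0]

# Consistency check of the loop-convergence identity, Eq. (11).
assert abs(np.sum(V_R[:, 1] * V_L[:, 0] * masses)) < 1e-6

# Higgs invisible width, Eq. (15) as printed, and branching ratio, Eq. (16).
VR11, VL13, VR13, VL11 = V_R[0, 0], V_L[0, 2], V_R[0, 2], V_L[0, 0]
bracket = (abs(VR11) ** 2 * abs(VL13) ** 2 * (m_h**2 - 2 * m_chi**2) * (kappa1**2 + kappa2**2)
           - 2 * m_chi**2 * kappa1**2 * kappa2**2 * (VR13 * VR11 * VL11 * VL13
                                                     + VL13 * VL11 * VR11 * VR13))
Gamma_inv = 3.0 / (32 * np.pi * m_h) * np.sqrt(1 - m_chi**2 / m_h**2) * bracket
BR_inv = Gamma_inv / (Gamma_h_SM + Gamma_inv)
print(f"m_chi1 = {m_chi:.1f} GeV, B(h -> chi chi) = {BR_inv:.3f}")
```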
These limits will be imposed in section III to constrain the parameter space of this model. On the other hand, current experiments constrain the \(Z\) Gauge boson decay into invisible states. In our model, the decay rate for \(Z\) boson into DM fermions, when the DM fermions are lighter than \(m_{Z}/2\), is given by [42]: \[\Gamma\left(Z\rightarrow\chi\chi\right)=\sum_{l=1}^{3}\left|(V_{R})_{l1}|^{2} |(V_{L})_{l1}\right|^{2}\frac{gm_{Z}}{96\pi\,\text{c}_{W}^{2}}\left(1-\frac{m_ {\chi_{l}}^{2}}{m_{Z}^{2}}\right)^{\frac{3}{2}}\,. \tag{17}\] Current experiments show an upper bound for decay of \(Z\) boson into invisible states with a decay width [43]: \[\Gamma\left(Z\rightarrow\text{invisible}\right)=499.0\pm 1.5\,\text{MeV}\,. \tag{18}\] However, Section IV shows that the DM particle is mostly a singlet-like fermion in the low mass regime, and then this observable remains within the experimental limit. On the other hand, LFV processes are very sensitive to the contributions of new physics. Although the Diracness of neutrino masses is compatible with the conservation of the total lepton number, family lepton number violation is unavoidable due to neutrino oscillations. In this model, LFV processes that involve charged leptons are controlled by the Yukawa coupling \(h_{i\alpha}\) ( see Lagrangian in Eq. (1)). One of the most restrictive LFV processes is the radiative muon decay \(\mu\to e\gamma\), shown in Fig. 2 for \(i=2\) and \(j=1\). Following Ref. [44], we compute this branching ratio, \[\mathcal{B}(\mu\to e\gamma)=\frac{3\,\alpha}{32\pi G_{F}^{2}}\sum_{ \alpha=1}^{3}\sum_{k=1}^{2}\left|h_{2\alpha}F(x)h_{1\alpha}^{*}(U_{\Sigma_{R}} )_{1k}^{2}\right|^{2}\,, \tag{19}\] where \(G_{F}\) is the Fermi constant and \(\alpha=e/4\pi\) is the fine structure constant and \(U_{\Sigma_{R}}\) is the mixing matrix for right-handed charged fermions. The loop function \(F\) is given by, \[F(x)=\left(\frac{x^{3}-6x^{2}+3x+2+6x\ln(x)}{6(x-1)^{4}}\right)\,, \tag{20}\] where \(x=\left(\frac{m_{\chi k}^{\pm}}{m_{S_{\alpha}}}\right)^{2}\). One final observable, very sensitive to new physics, is the Peskin-Takeuchi, \(S,T\), and \(U\) parameters that render radiative corrections to masses of electroweak bosons. Because we consider a minor mixing angle between charged fermions, the \(S\), \(T\), and \(U\) parameters in our model are controlled by the charged fermions masses that satisfy the experimental bounds for all observables. Figure 2: Feynman diagram that contributes to the \(\mu\to e\gamma\) LFV process, where \(i\) and \(j\) are flavor indices. ## III Dark matter phenomenology In this model, there are two DM candidates: the axion (\(a\)) that is a natural candidate after the PQ symmetry breaking mechanism and the WIMP candidate that is the lightest \(Z_{2}\)-odd state, _i.e_, either the scalar \(S_{1}\) or the fermion \(\chi\). The relic density of axions is determined by its interactions with the gravitational background when the Universe expands. At temperatures above the QCD critical temperature, \(\Lambda_{\rm QCD}\sim 160\) MeV, the chiral symmetry is restored, then the axion is massless. The corresponding axion field is parameterized by the so-called misalignment angle \(\theta_{a}\equiv a/v_{\sigma}\). Later, as the temperature of the primordial plasma falls below the hadronic scale, the axion becomes a pseudo-Nambu-Goldstone boson and develops a mass due to non-perturbative effects [45; 46; 9]. 
When its mass becomes bigger than the Hubble expansion rate, the axion field begins to oscillate around its mean value. These coherent and spatially uniform oscillations correspond to a coherent state of nonrelativistic axions where they behave as a cold DM fluid since their energy density scales as ordinary matter [47; 48; 49]. The value of the component of the relic density provided by the axion strongly depends on the cosmological scenario. In other words, it is different if the PQ symmetry is broken after or during inflation. In a post-inflationary phase, the expected energy density depends on the misalignment angle and the scale of the PQ symmetry breaking \(v_{\sigma}\), so that \(\theta_{a}\) takes different values in different patches of the Universe, an average is \(\theta_{a}\sim\pi^{2}/3\). In this case, possible topological defects such as axion strings and domain walls contribute to the axion energy density [47; 50; 51; 12; 48]. Nevertheless, when the PQ symmetry is broken before the end of inflation, the topological defects are absent and the misalignment mechanism renders the axion relic density. In this scenario, the axion DM abundance is given by [52; 14]: \[\Omega_{a}h^{2}\approx 0.18\theta_{a}^{2}\left(\frac{v_{\sigma}}{10^{12}\ {\rm GeV}}\right)^{1.19}\,. \tag{21}\] From Eq. (21) follows that the axion can compose the total amount of the DM constituent if \(v_{\sigma}\sim 10^{12}\) GeV for \(\theta_{a}\sim\mathcal{O}(1)\). Under this premise, the axion window becomes \(m_{a}\sim(1-10)\)\(\mu\)eV. Nevertheless, the axion could give a subdominant contribution to the relic DM abundance for lower values of \(v_{\sigma}\). Thus, it allows a multicomponent DM scenario. In this work, the WIMP component dominates the relic abundance of DM in the Universe and the axion field plays a role in the generation of a Dirac mass term for neutrinos. The phenomenological study where the axion and the WIMP components are relevant was performed in Ref. [53]. In addition to the axion, this model leads to a second DM candidate because the lightest \(Z_{2}\)-odd state is stable. This can be accomplished by either the scalar \(S_{1}\) or the Dirac fermion \(\chi\). If the scalar \(S_{1}\) is the lightest state, the DM phenomenology is similar to the Majorana version presented in Ref. [54]. Conversely, for Dirac fermion DM, the DM phenomenology is given by the mixing between the singlet and the doublet fermion states that was studied in Refs. [33; 8]. If the lightest Dirac fermion in the dark sector is mainly singlet, the DM does not annihilate efficiently in the early Universe then its present abundance is greater than currently observed. On the other hand, if the DM candidate is mainly doublet, the correct relic abundance is recovered only for \(m_{\chi}\sim 1\) TeV. In the singlet-doublet scenario, it was shown in Ref. [8] that the region \(100\,\text{GeV}\lesssim m_{\chi}\lesssim 750\) GeV is still available thanks to coannihilations between fermions in the dark sector. However, the sizeable couplings between the DM candidate and the \(Z\) boson, exclude the low mass region, \(m_{\chi}\lesssim 100\) GeV. One of the features of the model presented here is that the region of the parameter space below \(m_{\chi}\lesssim 100\) GeV is recovered due to new DM annihilation channels. ### WIMP light dark matter window Unitarity condition imposes an upper bound of \(\sim 340\) TeV on the DM mass of thermal relics [55]. However, for low masses, limits are not so easily applied. Refs. 
[56; 57] show that fermion DM masses below a few GeV's are usually ruled out because the DM overcloses the Universe. Nevertheless, this limit can be evaded by considering new light mediators in thermal equilibrium with the DM candidate [58]3. Footnote 3: In this model, the \(S_{\alpha}\)\((\alpha=1,2,3)\) scalars are the light mediators, which in the low mass window have masses around \(\sim 1\) GeV. On the other hand, distortion to the CMB spectrum caused by energy injection to the primordial plasma, when DM couples to electrons, imposes a lower bound for the DM candidate mass of \(\sim 10\) GeV. Nonetheless, this limit could be evaded if DM annihilates to neutrinos as well as electrons4. Conversely, if the DM is in thermal equilibrium with neutrinos, electrons, and photons, and if it decouples when it is non-relativistic, there is a change in the effective number of neutrinos \(N_{\rm eff}\). A lower limit for the mass of the Dirac fermion DM of \(m_{\chi}\gtrsim 10\) MeV, was given in Ref. [59]. It is well-known that for DM masses below \(m_{\chi}\lesssim 5\) GeV, the direct detection searches are not sensitive to scattering between DM candidates and nuclei, even for the expected future experiments as DARWIN [60]. For this reason, we consider indirect searches that look for DM annihilation into a neutrino pair as shown in Fig. 3, and therefore extra neutrino flux is produced and detected by neutrino telescopes. There, neutrinos interact with the nuclei in the detector. After that, an electromagnetic signal is produced. The signal events are compared with measurements at the Super-Kamiokande (SK), Hyper-Kamiokande (HK), Deep Underground Neutrino Experiment (DUNE), and Jiangmen Underground Neutrino Observatory (JUNO). Such experiments derive an upper limit on the annihilation cross section of the DM into neutrinos (see Ref [61] for a review). The expected contribution for the neutrino flux from DM annihilation in the Milky Way halo is given by [61; 62]: \[\frac{d\Phi_{\nu\bar{\nu}}}{dE_{\nu}}=\frac{1}{16\pi m_{\chi}^{2}}\sum_{i} \langle\sigma v\rangle_{i}k\frac{dN_{i}}{dE_{\nu}}J\left(\Omega\right)\,, \tag{22}\] where \(k\) gives the electron-neutrino flavor factor, \(\langle\sigma v\rangle_{i}\) stands for the annihilation cross section into a final state \(i\), \(dN_{i}/dE_{\nu}\) is the neutrino spectral function for the final state \(i\) and \(J\left(\Omega\right)\) represents the astrophysical \(J\)-factor. In the galactic coordinates, \((b,l)\), the \(J\)-factor can be expressed as, \[J=\int d\Omega\int_{\rm l.o.s}\rho^{2}\left(r\right)dr\,, \tag{23}\] where \(\rho\left(r\right)\) is the DM density profile in the galactic halo. We consider here the Navarro-Frenk-White (NFW) profile. In this model, the thermally averaged DM annihilation cross section times the velocity reads as: \[\langle\sigma v\rangle\simeq\sum_{\alpha=1}^{3}\sum_{i=1}^{3}\frac{1}{32\pi} \frac{(m_{\nu_{i}}^{2}+m_{\chi}^{2})}{(m_{S_{\alpha}}^{2}+m_{\chi}^{2}-m_{\nu _{i}}^{2})^{2}}\sqrt{1-\frac{m_{\nu_{i}}^{2}}{m_{\chi}^{2}}}\left(f_{i\alpha} ^{2}+h_{i\alpha}^{2}\right)^{2}\,, \tag{24}\] where \(m_{\nu_{i}}\)\(i=1,2,3\) are the masses of the neutrinos, \(m_{\chi}\) is the DM mass, \(m_{S_{\alpha}}\) are the masses for the scalar mediators, \(f_{i\alpha}\) and \(h_{i\alpha}\) are the Yukawa couplings defined in Eq. (1). ## IV Results and Discussion To study the fermion DM phenomenology in this model, we scanned the free parameters of the model as shown in Table 2. 
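Before quoting the scan inputs in detail, it is useful to see the scale of Eq. (24) at a single low-mass point. The minimal sketch below evaluates the annihilation cross section into neutrinos with placeholder Yukawa couplings and GeV-scale mediator masses; every numerical input is an illustrative assumption, not an output of the scan.

```python
import numpy as np

GEV2_TO_CM3S = 1.17e-17        # conversion of <sigma v> from GeV^-2 to cm^3 s^-1

def sigma_v(m_chi, m_S, f, h, m_nu=np.zeros(3)):
    """Eq. (24): <sigma v> for chi chi -> nu nu, summed over scalars and flavors."""
    total = 0.0
    for a in range(3):          # scalar mediators S_alpha
        for i in range(3):      # neutrino mass eigenstates
            num = m_nu[i]**2 + m_chi**2
            den = (m_S[a]**2 + m_chi**2 - m_nu[i]**2)**2
            kin = np.sqrt(1.0 - m_nu[i]**2 / m_chi**2)
            total += num / den * kin * (f[i, a]**2 + h[i, a]**2)**2 / (32.0*np.pi)
    return total * GEV2_TO_CM3S

# Illustrative low-mass point: 30 MeV DM, ~GeV scalar mediators, O(0.05) Yukawas
m_chi = 0.030                                  # GeV
m_S = np.array([1.0, 1.3, 1.6])                # GeV (placeholders)
f = np.full((3, 3), 0.05)                      # placeholder Yukawa matrices
h = np.full((3, 3), 0.05)
print(f"<sigma v> = {sigma_v(m_chi, m_S, f, h):.2e} cm^3/s")
```

With these inputs the result is of the order of 10^-26 cm^3 s^-1, i.e. in the ballpark of the canonical thermal cross section discussed together with Fig. 6 below.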
We assumed \(\lambda_{\sigma}=\lambda_{S_{\alpha}}=\lambda_{HS_{\alpha}}=10^{-4}\), with \((\alpha=1,2,3)\)5. Moreover, the mass of the exotic quark, \(M_{Q}\) is set to \(M_{Q}\sim 10\) TeV along with \(y_{Q}=0.1\) to stay safe from LHC constraints [28]. Let us recall that the Yukawa couplings \(h_{i\alpha}\) are related to the Yukawa couplings \(f_{i\alpha}\), neutrino masses, and the PNMS mixing matrix elements, as shown in section II.1. We guarantee that the charged LFV observables remain within the current experimental bounds6. Regarding neutrino physics, we consider NH for neutrino masses and use the best-fit point values reported in Refs. [69; 70] for the \(\mathcal{CP}\) conserving \begin{table} \begin{tabular}{||c||} \hline \hline \(10^{-2}\,\text{Ge\kern-1.0ptV}\leq M_{N},M_{\psi},M_{\eta}\leq 2\,\text{TeV}\) \\ \hline \(m_{S_{1}}\geq 1.2\,M_{N}\) \\ \hline \(m_{S_{1}}\leq m_{S_{2}}\leq 2\,\text{TeV}\) \\ \hline \(m_{S_{3}}\leq m_{S_{2}}\leq 2\,\text{TeV}\) \\ \hline \(10^{2}\,\text{Ge\kern-1.0ptV}<\lambda_{1}v_{\sigma},\lambda_{2}v_{\sigma}<10^{ 3}\,\text{Ge\kern-1.0ptV}\) \\ \hline \(10^{-6}\leq f_{i\alpha},\kappa_{1},\kappa_{2}\leq 1\) \\ \hline \(10^{9}\,\text{Ge\kern-1.0ptV}\leq v_{\sigma}\leq 10^{13}\,\text{Ge\kern-1.0ptV}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Random sampling for the relevant free parameters used in the numerical analysis. Figure 3: Feynman diagram for DM annihilation into to a neutrino pair. case. Results are shown in Fig. 4, Fig. 5, and Fig. 6 which assume a total amount of WIMP DM component. Each point reproduces the observed DM relic density \(\Omega h^{2}=0.120\pm 0.001\) at \(3\sigma\)[71] and satisfies the current charged LFV bounds. Fig. 4 shows the direct detection cross section as a function of the DM mass. The color code represents the difference between the singlet and doublet fermion masses. The shaded region is excluded by the constraint of invisible Higgs decay into DM pair, \(\mathcal{B}\left(h\rightarrow\chi\chi\right)\)[39], while the continuous and dashed cyan lines are the prospect limits expected from high luminosity LHC and Future Circular Colliders (FCC) [41]. Moreover, we plot the direct detection current limits imposed by XENONnT (solid black line) [72] and PandaX-2021 (solid red line) [73] as well as the prospects that are expected from DARWIN (dashed black line) [60]. The blue line represents the coherent elastic neutrino scattering (neutrino floor) [74; 75]. In Fig. 4, a sizeable mixing between the singlet and the doublet states gives a small vector cou Figure 4: DM spin-independent cross sections as a function of the DM mass. The shaded region is excluded by the invisible Higgs decay into DM pair, \(\mathcal{B}\left(h\rightarrow\chi\chi\right)\)[39], while the continuous and dashed cyan lines are the prospect limits expected from high luminosity LHC and Future Circular Colliders (FCC) [41]. It is currently excluded from direct detection searches. Consequently, a small mixing, \(|\kappa_{1}-\kappa_{2}|\lesssim 10^{-4}\), is required for DM masses \(m_{\chi}\gtrsim 10\) GeV. Such small Yukawa couplings also guarantee that the contributions from the new fermions to the oblique parameters remain at \(3\sigma\) level [76], as we show in Fig. 5. This limit on the Yukawa parameters \((\kappa_{1},\kappa_{2})\) is equivalent to an upper limit on the difference between the \(M_{N}\) parameter and the DM mass, \(|M_{N}-m_{\chi}|\lesssim 10^{-3}\) GeV, _i.e_, the DM state should be mostly singlet, \(\chi=N\). 
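The statement that the DM state must be mostly singlet can be checked numerically for a single point. The sketch below diagonalizes a matrix with the texture of Eq. (7), using placeholder entries in GeV (the off-diagonal values stand for the kappa*vev and lambda*vev combinations, whose sizes are assumed here), and reports the lightest mass and its singlet content; as in Eq. (8), for real inputs the SVD factors give V_L and V_R directly.

```python
import numpy as np

# Placeholder mass parameters and mixing entries in GeV (texture of Eq. (7))
M_N, M_psi, M_eta = 60.0, 500.0, 700.0
e_Nd, e_dN = 0.5, 0.5           # small singlet-doublet entries (kappa*vev/sqrt(2)-type, assumed)
e_pe, e_ep = 300.0, 300.0       # doublet-doublet entries (lambda*vev/sqrt(2)-type, assumed)

M = np.array([[M_N,   0.0,   e_Nd],
              [0.0,   M_psi, e_pe],
              [e_dN,  e_ep,  M_eta]])

# Biunitary (singular-value) decomposition: M = U diag(s) Vh,
# hence diag(s) = U.T @ M @ Vh.T for real M, i.e. V_L = U.T and V_R = Vh in Eq. (8)
U, s, Vh = np.linalg.svd(M)
V_L = U.T
light = np.argmin(s)
singlet_fraction = abs(V_L[light, 0])**2     # overlap of the lightest state with N

print(f"m_chi = {s[light]:.4f} GeV,  |M_N - m_chi| = {abs(M_N - s[light]):.2e} GeV")
print(f"singlet fraction of chi_1 = {singlet_fraction:.6f}")
```

For small singlet-doublet entries the lightest mass sits within about 10^-3 GeV of M_N and the state is almost purely N, illustrating the behaviour described above.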
Also, from Fig 4, notice that the region of parameter space for the mass of the DM candidate below 100 GeV is recovered. Fig. 6 shows the DM annihilation cross section into a neutrino pair as a function of the DM mass, \(m_{\chi}\). We follow the notation in [61] and [62]. The shadow region is the current exclusions extracted from Olivares (\(\spadesuit\)), [77], Asai (\(\lozenge\)) [78] and Arguelles (\(\heartsuit\)) [61]. Whereas, the dash, dotted, and dot-dashed represent the future searches reported by Olivares [79], Bell (\(\bigstar\)) [80] and Klop (\(\clubsuit\)) [81]. In Fig. 6 the model reaches \(\langle\sigma v\rangle\sim 4\times 10^{-26}\) cm\({}^{3}\) s\({}^{-1}\) which corresponds to the canonical thermally averaged cross-section. Notice that the future sensitivity of neutrino telescopes Figure 5: Contour plot for scalar and fermion contributions to the EWPT parameters. The color code represents the mixing between the singlet and the doublet neutral fermion states. The black, blue and green ellipses represent the experimental constraints at 68% CL, 95% CL and 99% CL, respectively [76]. could test this model for DM masses 20 MeV\(\lesssim m_{\chi}\lesssim\) 30 MeV using HK searches and 30 MeV\(\lesssim m_{\chi}\lesssim\) 50 MeV using JUNO and DUNE combinations. Moreover, we emphasize that constraints from current DD searches do not affect that region of the parameter space. ## V Conclusions We analyze phenomenologically a model where the PQ mechanism is the solution to the strong CP problem and generates a Dirac mass term for neutrinos. Besides, a remnant \(Z_{2}\) symmetry guarantees the existence of a WIMP DM candidate in addition to the axion. The WIMP DM phenomenology resembles the well-known singlet-doublet fermion DM, but in our case, we show that the low mass regime is recovered because of the new annihilation channels that result from the PQ mechanism that allow DM annihilation into neutrinos. This model solves the DM problem and generates neutrino masses and it could be tested in future experiments of DD of DM. Furthermore, we demonstrate that future neutrino Figure 6: Thermally averaged DM annihilation cross section times velocity as a function of the DM mass and search limits imposed by DM annihilation into neutrinos in the Milky Way galaxy [61]. telescopes will test the MeV region for DM masses where DD searches are not sensitive. ###### Acknowledgements. We want to thank Walter Tangarife for very valuable feedback in the course of this work. The work of David Suarez and Robinson Longas is supported by Sostenibilidad UdeA, UdeA/CODI Grant 2020-33177, and Minciencias Grants CD 82315 CT ICETEX 2021-1080 and 80740-492-2021. ## Appendix A Texture of the Yukawa couplings involved in neutrino physics. In several Majorana neutrino models, the Yukawa parameters are related to the neutrino physics via the Casas-Ibarra parametrization [82]. A generalization of the Casas-Ibarra parametrization is called the master equation and was studied in Ref. [83] and can be used in many Majorana neutrino mass models. Following the motivation presented in Ref. [83], we study in this section a general solution for the Dirac neutrino mass couplings presented in Eq. (9). The mass matrix from any Dirac neutrino model can be written in the form: \[M^{\nu}=y_{1}\Lambda y_{2}\,, \tag{10}\] where \(\Lambda\) is a \(3\times 3\) complex matrix with dimension of mass and the Yukawa couplings \(y_{1},y_{2}\), are dimensionless \(3\times 3\) complex matrices. 
We assume \(\Lambda\) in the diagonal basis7. Note that the mass matrix structure in Eq. (10) contains several Dirac neutrino models and the results derived here can be applied to those models. On the other hand, the data coming from neutrino oscillation requires at least two non-zero eigenvalues for the \(M^{\nu}\) matrix. Then, the neutrino mass matrix must be \(\text{rank}\,(M^{\nu})\equiv\text{rank}\,\text{M}=2\) or \(\text{rank}\,\text{M}=3\). We will study both cases: NH and IH. Footnote 7: A general analysis for an arbitrary dimension of \(y_{1},y_{2}\) and \(\Lambda\) matrices will be left for future work. ### rank(M) = 2 In the case of two non-zero neutrino mass eigenstates, the mass matrix in Eq.(A1) can be diagonalized by a biunitary transformation, \[m=U^{\dagger}M^{\nu}V=U^{\dagger}y_{1}\Lambda y_{2}V=\left\{\begin{array}{l}{ \rm diag}(0,m_{2},m_{3})\;\;{\rm for}\;\;{\rm NH}\,,\\ {\rm diag}(m_{1},m_{2},0)\;\;{\rm for}\;\;{\rm IH}\,.\end{array}\right.\] (A2) After that, we follow the same strategy presented in Ref. [83] and we define the matrices: \[\sqrt{D}=\left\{\begin{array}{l}{\rm diag}(\sqrt{v},\sqrt{m_{2}},\sqrt{m_{3 }})\;\;{\rm for}\;\;{\rm NH}\,,\\ {\rm diag}(\sqrt{m_{1}},\sqrt{m_{2}},\sqrt{v})\;\;{\rm for}\;\;{\rm IH}\,,\end{array}\right.\] (A3) where \(v\) is some non-vanishing arbitrary energy scale. For example, in this work \(v\equiv v_{\sigma}\). It is worth mentioning that the analytical expressions found for the Yukawa couplings are independent of the choice of the energy scale \(v\) as it was discussed in [83]. If we multiply the expression in Eq. (A2) on the left and on the right side by \(\sqrt{D}^{-1}\), we obtain: \[\sqrt{D}^{-1}\sqrt{m}\sqrt{m}\sqrt{D}^{-1}\equiv R=\sqrt{D}^{-1}U^{\dagger}y_ {1}\Lambda y_{2}V\sqrt{D}^{-1}\,,\] (A4) where we use the definitions for \(m\) and \(\sqrt{D}\) to write the left side of the equation. With this, the matrix \(R\) is defined by: \[R=\left\{\begin{array}{l}\left(\begin{array}{ccc}0&0&0\\ 0&1&0\\ 0&0&1\end{array}\right)\;\;{\rm for}\;\;{\rm NH}\;,\\ \\ \left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ 0&0&0\end{array}\right)\;\;{\rm for}\;\;{\rm IH}\;.\end{array}\right.\] (A5) The expression in Eq. (A4) can be written in the form: \[R =\sqrt{D}^{-1}U^{\dagger}y_{1}\sqrt{\Lambda}\sqrt{\Lambda}y_{2}V \sqrt{D}^{-1}\] (A6) \[R =\left[\sqrt{\Lambda}y_{1}^{\dagger}U\sqrt{D}^{-1}\right]^{ \dagger}\left[\sqrt{\Lambda}y_{2}V\sqrt{D}^{-1}\right]\] (A7) \[R \equiv R_{1}^{\dagger}R_{2}\;,\] (A8) where we use the fact that \(\Lambda\) is a diagonal matrix and we define the matrices \(R_{1}\) and \(R_{2}\) as, \[R_{1} =\sqrt{\Lambda}y_{1}^{\dagger}U\sqrt{D}^{-1}\,, \tag{100}\] \[R_{2} =\sqrt{\Lambda}y_{2}V\sqrt{D}^{-1}\;. \tag{101}\] The existence of an inverse for the matrices \(R_{1}\) and \(R_{2}\) in Eq.(100) allows us to express the Yukawa coupling \(y_{1}\) (\(y_{2}\)) as a function of the Yukawa coupling \(y_{2}\) (\(y_{1}\)) and the neutrino oscillation observables. Alternatively, either \(y_{1}\) or \(y_{2}\) remains as a free parameter in the model as follow8: if we multiply Eq. (100) at the right by \(R_{2}^{-1}\), we extract \(y_{1}\) by using Eqs.(100), (101) and we obtain: Footnote 8: In Ref. [84] similar formulas are reported. The authors use a general parametrization for the matrices \(R_{1},R_{2}\) and the relation in Eq. (100). Here, conversely, we choose to write one Yukawa coupling in terms of the other one. \[y_{1}=U\sqrt{D}R\sqrt{D}V^{\dagger}y_{2}^{-1}\Lambda^{-1}\;. 
\tag{102}\] Conversely, if we multiply Eq.(100) at the left by \(\left(R_{1}^{\dagger}\right)^{-1}\), we extract \(y_{1}\) by using Eqs.(100), (101) and obtain, \[y_{2}=\Lambda^{-1}y_{1}^{-1}U\sqrt{D}R\sqrt{D}V^{\dagger}\;. \tag{103}\] For the case of the model we study here, the result in Eq. (12) is obtained as particular case of Eq. (102) by setting \(V=\mathbb{1}\), \(U=U_{\rm PMNS}\), \(y_{1}=h\) and \(y_{2}=f^{T}\). In short: \[h=U_{\rm PMNS}\sqrt{D}R\sqrt{D}\left(f^{T}\right)^{-1}\Lambda^{-1}\;. \tag{104}\] ### rank(M) = 3 In the case of three neutrinos are massive, we follow the same method. The mass matrix in Eq.(102) can be diagonalized by a biunitary transformation, \[m={\rm diag}(m_{1},m_{2},m_{3})=U^{\dagger}y_{1}\sqrt{\Lambda} \sqrt{\Lambda}y_{2}V^{\dagger}\;, \tag{105}\] \[\sqrt{m}\sqrt{m}=U^{\dagger}y_{1}\sqrt{\Lambda}\sqrt{\Lambda}y_{2 }V^{\dagger}\;. \tag{106}\] We then multiply the last expression at the right side and the left side by \(\sqrt{m}^{-1}\) and obtain, \[\sqrt{m}^{-1}\sqrt{m}\sqrt{m}\sqrt{m}^{-1}=\mathbb{1}_{3}=\sqrt{m}^{-1}U^{ \dagger}y_{1}\sqrt{\Lambda}\sqrt{\Lambda}y_{2}V^{\dagger}\sqrt{m}^{-1}\;, \tag{107}\] where \(\mathbb{1}_{3}\) is the \(3\times 3\) identity matrix. The expression in Eq. (16) is reorganized in the form: \[\left[\sqrt{\Lambda}y_{1}^{\dagger}U\sqrt{m}^{-1}\right]^{\dagger} \left[\sqrt{\Lambda}y_{2}V^{\dagger}\sqrt{m}^{-1}\right]=\mathbb{1}_{3}\;, \tag{17}\] \[R_{1}^{\dagger}R_{2}=\mathbb{1}_{3}\;. \tag{18}\] Again, the existence of an inverse for the matrices \(R_{1}\) and \(R_{2}\) in Eq. (18) allow us to set one of the Yukawa couplings in terms of the other one and the neutrino observable parameters as, \[y_{1}=UmV^{\dagger}y_{2}^{-1}\Lambda^{-1}\;\;\text{or}\;\;y_{2}=\Lambda^{-1}y _{1}^{-1}UmV^{\dagger}\;. \tag{19}\] The identifications \(V=\mathbb{1}\), \(U=U_{\text{PMNS}}\), \(y_{1}=h\) and \(y_{2}=f^{T}\) in the previous equation describe the Yukawa texture of the model presented here.
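As a quick numerical check of the parametrization derived above (Eq. (12) and its appendix generalization), the following sketch builds h from an arbitrary f, a diagonal Lambda and a stand-in mixing matrix, and verifies that h Lambda f^T has the required singular values (0, m2, m3) for NH and is independent of the arbitrary scale v. The mixing matrix, loop factors and Yukawa entries are placeholders, not the measured PMNS values or model predictions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative inputs: normal hierarchy with m1 = 0 (masses in GeV)
m2, m3 = 8.7e-12, 5.0e-11                      # ~0.0087 eV and ~0.05 eV
Lam = np.diag(rng.uniform(1e-3, 1e-2, 3))      # placeholder loop factors Lambda_alpha
f = rng.uniform(0.01, 0.1, (3, 3))             # placeholder Yukawa matrix f
V = np.eye(3)                                  # V taken as the identity

def rot(i, j, th):                             # elementary rotation, stand-in for U_PMNS
    R = np.eye(3)
    R[i, i] = R[j, j] = np.cos(th)
    R[i, j], R[j, i] = -np.sin(th), np.sin(th)
    return R

U = rot(1, 2, 0.85) @ rot(0, 2, 0.15) @ rot(0, 1, 0.58)
R_NH = np.diag([0.0, 1.0, 1.0])                # Eq. (13), normal hierarchy

def yukawa_h(v_scale):
    sqrtD = np.diag(np.sqrt([v_scale, m2, m3]))                        # Eq. (14)
    return U @ sqrtD @ R_NH @ sqrtD @ V.T @ np.linalg.inv(f.T) @ np.linalg.inv(Lam)

h = yukawa_h(1.0)
Mnu = h @ Lam @ f.T                            # Eq. (9)
print("singular values of M_nu:", np.sort(np.linalg.svd(Mnu, compute_uv=False)))
print("target (0, m2, m3)     :", np.sort([0.0, m2, m3]))
print("independent of v       :", np.allclose(h, yukawa_h(123.0)))
```

The zero entry of R kills the arbitrary scale exactly, which is the mechanism behind the v-independence claimed in the appendix.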
2309.05241
Measurement of the neutron timelike electromagnetic form factor with the SND detector
The results of the measurement of the $e^+e^- \to n \bar{n}$ cross section and effective neutron timelike form factor are presented. The data taking was carried out in 2020-2021 at the VEPP-2000 $e^+e^-$ collider in the center-of-mass energy range from 1891 to 2007 MeV. The general purpose nonmagnetic detector SND is used to detect neutron-antineutron events. The event selection is performed using the time-of-flight technique. The measured cross section is 0.4-0.6 nb. The neutron form factor in the energy range under study varies from 0.3 to 0.2.
SND Collaboration, M. N. Achasov, A. Yu. Barnyakov, E. V. Bedarev, K. I. Beloborodov, A. V. Berdyugin, D. E. Berkaev, A. G. Bogdanchikov, A. A. Botov, T. V. Dimova, V. P. Druzhinin, V. N. Zhabin, Yu. M. Zharinov, E. V. Kardapoltsev, A. S. Kasaev, D. P. Kovrizhin, I. A. Koop, A. A. Korol, A. S. Kupich, A. P. Kryukov, A. P. Lysenko, N. A. Melnikova, N. Yu. Muchnoy, A. E. Obrazovsky, E. V. Pakhtusova, K. V. Pugachev, S. A. Rastigeev, Yu. A. Rogovsky, S. I. Serednyakov, Z. K. Silagadze, I. K. Surin, Yu. V. Usov, A. G. Kharlamov, Yu. M. Shatunov, D. A. Shtol
2023-09-11T05:26:17Z
http://arxiv.org/abs/2309.05241v1
# Measurements of the neutron timelike electromagnetic form factor with the SND detector ###### Abstract The results of the measurement of the \(e^{+}e^{-}\to n\bar{n}\) cross section and effective neutron timelike form factor are presented. The data taking was carried out in 2020-2021 at the VEPP-2000 \(e^{+}e^{-}\) collider in the center-of-mass energy range from 1891 to 2007 MeV. The general purpose nonmagnetic detector SND is used to detect neutron-antineutrons events. The selection of \(n\bar{n}\) events is performed using the time-of-flight technique. The measured cross section is 0.4-0.6 nb. The neutron form factor in the energy range under study varies from 0.3 to 0.2. ## Introduction The internal structure of nucleons is described by electromagnetic formfactors. In the timelike region they are measured using the process of \(e^{+}e^{-}\) annihilation to nucleon-antinucleon pairs. The \(e^{+}e^{-}\to n\bar{n}\) cross section depends on two formfactors - electric and magnetic \(G_{M}\) : \[\frac{d\sigma}{d\Omega} = \frac{\alpha^{2}\beta}{4s}\bigg{[}|G_{M}(s)|^{2}(1+\cos^{2}\theta) \tag{1}\] \[+ \frac{1}{\gamma^{2}}|G_{E}(s)|^{2}\sin^{2}\theta\bigg{]}\] where \(\alpha\) is the fine structure constant, \(s=4E_{b}^{2}=E^{2}\), where \(E_{b}\) is the beam energy and \(E\) is the center-of-mass (c.m.) energy, \(\beta=\sqrt{1-4m_{n}^{2}/s}\), \(\gamma=E_{b}/m_{n}\), \(m_{n}\) is the neutron mass and \(\theta\) is the antineutron production polar angle. The total cross section has the following form: \[\sigma(s)=\frac{4\pi\alpha^{2}\beta}{3s}(1+\frac{1}{2\gamma^{2}})|F(s)|^{2}, \tag{2}\] where the effective form factor \(F(s)\) is introduced: \[|F(s)|^{2}=\frac{2\gamma^{2}|G_{M}(s)|^{2}+|G_{E}(s)|^{2}}{2\gamma^{2}+1}. \tag{3}\] The \(|G_{E}/G_{M}|\) ratio can be extracted from the analysis of the measured \(\cos\theta\) distribution in Eq. (1). At the threshold \(|G_{E}|=|G_{M}|\). The latest results on the neutron form factor near the threshold were obtained in experiments at the VEPP-2000 \(e^{+}e^{-}\) collider with the SND detector [1]. The same work provides a list of previous measurements. At the energy above 2 GeV new data have been obtained by the BESIII [2]. In this work the recent SND results on the \(e^{+}e^{-}\to n\bar{n}\) cross section and the neutron timelike formfactor with 4 times higher integrated luminosity than in previous measurement [1], are presented. ## I Collider, Detector, Experiment VEPP-2000 is \(e^{+}e^{-}\) collider [3] operating in the energy range from the hadron threshold (\(E\)=280 MeV) up to 2 GeV. The collider luminosity above the nucleon threshold at 1.87 GeV is of order of \(5\times 10^{31}\) cm\({}^{-2}\)s\({}^{-1}\). There are two collider detectors at VEPP-2000: SND and CMD-3. SND (Spherical Neutral Detector) [4] is a non-magnetic detector, including a tracking system, a spherical NaI(Tl) electromagnetic calorimeter (EMC) and a muon detector (Fig.1). The EMC is the main part of the SND used in the \(n\bar{n}\) analysis. The thickness of EMC is 34.7 cm (13.4 radiation length). The antineutron annihilation length in NaI(Tl) varies with energy from several cm close to the \(n\bar{n}\) threshold to \(\sim\)15 cm at the maximum available energy [5], so nearly all produced antineutrons are absorbed in the detector. The EMC is used to measure the event arrival time. Starting from 2019 a system of flash ADC modules [6], measuring the signal waveform, is installed on each of the 1640 EMC counters. 
When fitting the flash ADC output waveform, the time and amplitude of the signal in the counters are calculated. The event time is calculated as the energy weighted average time. The time resolution obtained with \(e^{+}e^{-}\rightarrow\gamma\gamma\) events is about 0.8 ns. This article presents the analysis results of data with the integrated luminosity of 80 pb\({}^{-1}\), collected in the energy range 1.89 - 2.0 GeV in 8 energy points. ## II Selection of \(n\bar{n}\) events Antineutron from the \(n\bar{n}\) pair in most cases annihilates, producing pions, nucleons, photons and other particles, which deposit up to 2 GeV in EMC. The neutron from the \(n\bar{n}\) pair release a small signal in EMC, which poorly visible against the background of a strong \(\bar{n}\) annihilation signal, so it is not taken into account. The \(n\bar{n}\) events are reconstructed as Figure 1: SND detector, section along the beams: (1) beam pipe, (2) tracking system, (3) aerogel Cherenkov counters, (4) NaI (Tl) crystals, (5) vacuum phototriodes, (6) iron absorber, (7) proportional tubes, (8) iron absorber, (9) scintillation counters, (10) VEPP-2000 focusing solenoids. multiphoton events. Main features of \(n\bar{n}\) events are absence of charged tracks and photons from the collision region and a strong imbalance in the event momentum. To create the \(n\bar{n}\) selection conditions we consider the sources of the background including the cosmic background, the background from \(e^{+}e^{-}\) annihilation processes and from the electron and positron beams in the collider. Based on these specific features of the \(e^{+}e^{-}\to n\bar{n}\) process, selection conditions were divided into three groups. In the first group the conditions are collected that suppress the background from the \(e^{+}e^{-}\) annihilation events. These include the condition of no charged tracks in an event, the limit on the total event momentum (p/\(2E_{b}>\)0.4), and a limitation on the transverse shower profile in EMC [7], which must be wider than that from the photons from the collision region. In the second group the selection conditions should suppress the cosmic background. Here the veto of muon system is included and special conditions, analyzing the energy deposition shape in EMC and removing cosmic events passing through the muon veto [1]. Basically, these are cosmic showers in EMC. The third group of selection cuts contains the restriction on the total energy deposition in EMC -- \(E_{dep}>E_{b}\). Such a restriction almost completely suppresses the beam background, although the detection efficiency also decreases by about 20%. The listed selection conditions are similar to those described in our recent work [1]. The only difference is that there is no limitation on the energy in the EMC third layer. This slightly increased the detection efficiency, although it did increase the cosmic background. After imposing the described selection conditions, we have about 400 events/\(pb\) left for further analysis. ## III Getting the number of \(n\bar{n}\) events The time spectra for selected data events are shown in Fig. 2. Zero time corresponds to events in the moment of beam collision. Three main components are distinguished in the time spectrum in the figures shown: a beam background at t=0, a cosmic background uniform in time, and a delayed signal from \(n\bar{n}\) events, wide in time. 
Respectively, the measured time spectra are fitted by the sum of these three components in the following form : \[F(t)=N_{n\bar{n}}H_{n\bar{n}}(t)+N_{\rm csm}H_{\rm csm}(t)+N_{\rm bkg}H_{\rm bkg}(t), \tag{4}\] where \(H_{n\bar{n}}\), \(H_{\rm csm}\) and \(H_{\rm bkg}\) are normalized histograms, describing time spectra for the \(n\bar{n}\) signal, cosmic and beam + physical background, respectively. \(N_{n\bar{n}}\), \(N_{\rm csm}\), and \(N_{\rm bkg}\) are the corresponding event numbers, obtained from the fit. The shape of the beam+physical background time spectrum \(H_{\rm bkg}\) is measured at the energies below the \(n\bar{n}\) threshold. The cosmic time spectrum \(H_{\rm csm}\) is measured with the lower EMC threshold \(0.9\cdot E_{b}\) in coincidence with the muon system signal. The \(H_{n\bar{n}}\) spectrum is calculated by the MC simulation the \(e^{+}e^{-}\to n\bar{n}\) process. Comparison of time spectra in data and MC gives a wider data time distribution for both \(e^{+}e^{-}\rightarrow\gamma\gamma\) and \(e^{+}e^{-}\to n\bar{n}\) events. For the \(e^{+}e^{-}\rightarrow\gamma\gamma\) this is due to the finite time resolution of our timing system [6], which is not adequately simulated. So we convolve the MC time spectrum with a Gaussian with \(\sigma_{\gamma\gamma}=0.8\) ns. For \(e^{+}e^{-}\to n\bar{n}\) events the covolution is done with \(\sigma_{nn}=1.5\)-2 ns depending on the energy. Moreover, in addition to the above, we correct the MC \(n\bar{n}\) time spectrum, since the shape of MC time spectrum \(H_{n\bar{n}}\) does not describe data well. This discrepancy is explained by the incorrect relationship between the processes of antineutron annihilation and scattering in MC, as well as by the incorrect description of the annihilation products. To modify Figure 2: The time distribution of selected data events (points with error bars) at \(E_{b}=945\) MeV (left panel) and at \(E_{b}=980\) MeV (right panel). The position \(t=0\) corresponds to the moment of beams collision. The wide peak to the right is the contribution of \(n\bar{n}\) events. The light-shaded histogram shows the cosmic background (uniform in time) and beam background (peak at \(t=0\)). The solid line is the result of the fit. the MC time spectrum, separate MC time spectra were plotted for the cases of the first \(\bar{n}\) interaction of scattering (\(H^{s}_{n\bar{n}}\)) and annihilation (\(H^{a}_{n\bar{n}}\)). The share of annihilation events in MC was about 33%. The annihilation gives the time spectrum close to the exponential while the scattering has delayed and more wide time spectrum with the non exponential shape. The \(H_{n\bar{n}}\) spectrum (Eq.4) was taken in fit as a linear sum of two spectra described above: \(H_{n\bar{n}}=\alpha H^{a}_{n\bar{n}}+(1-\alpha)H^{s}_{n\bar{n}}\). The value \(\alpha\) (the share of annihilation events) was the fit parameter. As a result of the fit this parameter turned out to be greater than in MC - \(\simeq\)60% and accordingly the proportion of scattering fell to \(\simeq\)40%. As can be seen in Fig.2, the modified MC time spectrum describes the data well. The visible cross section \(\sigma_{bg}\) of the beam+physical background, obtained during fitting, is about 7 pb and does not significantly depend on the beam energy. The main contribution into \(\sigma_{bg}\) comes from the processes with neutral kaons in the final state: \(e^{+}e^{-}\to K_{S}K_{L}\pi^{0}\), \(K_{S}K_{L}\eta\) and similar other. 
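The three-component decomposition of Eq. (4) can be mimicked with a small toy fit. In the sketch below the template shapes (a prompt Gaussian for the beam background, a flat cosmic component, and a delayed, smeared component for nn-bar) are illustrative stand-ins rather than the SND parametrizations, and the three yields are recovered by a binned least-squares fit.

```python
import numpy as np
from scipy.stats import norm, expon

rng = np.random.default_rng(0)
edges = np.linspace(-10.0, 40.0, 101)            # time bins in ns
t = 0.5 * (edges[:-1] + edges[1:])

def normed(w):                                   # normalized template, as in Eq. (4)
    return w / w.sum()

# Toy templates (shapes are assumptions, not the measured SND spectra)
H_bkg = normed(norm.pdf(t, loc=0.0, scale=1.0))                  # beam/physics bkg at t = 0
H_csm = normed(np.ones_like(t))                                  # cosmic: uniform in time
kernel = norm.pdf(np.linspace(-5.0, 5.0, t.size), 0.0, 1.8)      # ns-scale time resolution
H_nn = normed(np.convolve(expon.pdf(t, loc=2.0, scale=6.0), kernel, mode="same"))

# Pseudo-data generated with known yields, then Poisson-fluctuated
N_true = np.array([1000.0, 600.0, 250.0])        # (N_nn, N_csm, N_bkg)
data = rng.poisson(N_true[0]*H_nn + N_true[1]*H_csm + N_true[2]*H_bkg)

# Fit F(t) = N_nn*H_nn + N_csm*H_csm + N_bkg*H_bkg for the three yields
A = np.column_stack([H_nn, H_csm, H_bkg])
N_fit, *_ = np.linalg.lstsq(A, data, rcond=None)
print("true yields  :", N_true)
print("fitted yields:", np.round(N_fit, 1))
```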
The measured residual cosmic background rate has the intensity \(\sim\)0.01 Hz, which corresponds to the suppression of the number of cosmic events, that have pass the hardware selection in the detector electronics, approximately by \(2\times 10^{4}\) times. The numbers of found \(n\bar{n}\) events are listed in the Table 1 with the total number close to \begin{table} \begin{tabular}{c c c c c c c} N & \(E_{b}\)(MeV) & \(L\)(pb) & \(N_{n\bar{n}}\) & \(1+\delta\) & \(\varepsilon\) & \(\sigma\)(nb) & \(F_{n}\) \\ \hline 1 & 945.5 & 8.54 & \(676\pm 37\) & 0.746 & \(0.253\pm 0.021\) & \(0.420\pm 0.023\pm 0.036\) & \(0.322\pm 0.016\) \\ 2 & 950.3 & 8.86 & \(834\pm 37\) & 0.787 & \(0.246\pm 0.015\) & \(0.485\pm 0.022\pm 0.031\) & \(0.301\pm 0.012\) \\ 3 & 960.3 & 8.33 & \(767\pm 35\) & 0.840 & \(0.217\pm 0.013\) & \(0.506\pm 0.023\pm 0.032\) & \(0.266\pm 0.010\) \\ 4 & 970.8 & 8.07 & \(718\pm 34\) & 0.870 & \(0.229\pm 0.017\) & \(0.447\pm 0.021\pm 0.034\) & \(0.230\pm 0.011\) \\ 5 & 968.8 & 5.51 & \(524\pm 34\) & 0.870 & \(0.186\pm 0.020\) & \(0.589\pm 0.039\pm 0.065\) & \(0.267\pm 0.017\) \\ 6 & 980.3 & 7.70 & \(654\pm 37\) & 0.900 & \(0.216\pm 0.018\) & \(0.436\pm 0.025\pm 0.038\) & \(0.216\pm 0.011\) \\ 7 & 990.4 & 8.77 & \(624\pm 38\) & 0.920 & \(0.183\pm 0.019\) & \(0.422\pm 0.026\pm 0.045\) & \(0.204\pm 0.013\) \\ 8 & 1003.5 & 20.06 & \(1075\pm 50\) & 0.947 & \(0.151\pm 0.014\) & \(0.374\pm 0.018\pm 0.035\) & \(0.186\pm 0.010\) \\ \hline \end{tabular} \end{table} Table 1: The beam energy (\(E_{b}\)), integrated luminosity (\(L\)), number of selected \(n\bar{n}\) events (\(N_{n\bar{n}}\)), the factor taking into account radiative corrections and energy spread (\(1+\delta\)), corrected detection efficiency (\(\varepsilon\)), measured \(e^{+}e^{-}\to n\bar{n}\) cross section \(\sigma\), and neutron effective form factor (\(F_{n}\)). The quoted errors for \(N\), \(\sigma\) are statistical and systematic. For the detection efficiency, the systematic uncertainty is quoted. For \(F_{n}\), the combined statistical and systematic uncertainty is listed. 6000. The Table shows only statistical errors of the fitting. A source of systematic error in the \(n\bar{n}\) event number can be uncertainties in the magnitude and shape of the time spectrum of the beam and cosmic background. The error introduced by these sources is 15 events at \(E_{b}\)=1000 MeV and less than 8 events at lower energies. These values are much lower than statitistical errors in the Table I and are not taken into account in what follows. Figure 4: The corrected detection efficiency versus energy. Figure 5: The MC detection efficiency as a function of antineutron \(\cos\theta\) at \(E_{b}\)=960 MeV. Dotted vertical lines correspond to the polar angle cutoff. Figure 3: The antineutron \(\cos\theta_{a}\) distribution for data (points with error bars) and MC (horizontal line) at \(E_{b}=970\) MeV (left panel) and \(E_{b}=1000\) MeV (right panel). Dotted vertical lines correspond to the polar angle cutoff. Antineutron Angular Distribution The antineutron production angle \(\theta_{n}\) is determined by the direction of the event momentum with an accuracy of about 5 degrees. Distribution over \(\cos\theta_{n}\) for data and MC events is shown in Fig. 3. The MC simulation was done using Eq. (1) with the assumption \(G_{E}=G_{M}\). The detection efficiency in the selection interval \(36^{\circ}<\theta_{n}<144^{\circ}\) is 80%. It is seen from the Fig. 
3 that the data and MC distributions agree well with each other, which confirms the MC angular model. It is also worth noting that the previous measurements of the \(|G_{E}/G_{M}|\) value [1] do not contradict the hypothesis \(G_{E}=G_{M}\). ## V Detection Efficiency The detection efficiency \(\varepsilon\) versus energy under the accepted selection conditions (Section II) is shown in Fig. 4. When calculating \(\varepsilon\) we used the MC simulation of the \(e^{+}e^{-}\to n\bar{n}\) process with the GEANT4 toolkit [8], version 10.5. In addition, the simulation included the beam energy spread of \(\sim 1\) MeV and the emission of photons by the initial electrons and positrons. The simulation also took into account non-operating detector channels as well as overlaps of the beam background with recorded events. To do this, special superposition events were recorded during the experiment with a pulse generator synchronized with the moment of beam collision; these events were subsequently superimposed on MC events. The detection efficiency \(\varepsilon\) in Fig. 4 is corrected for the difference between the data and MC. This correction is discussed later. Numerical values of the efficiency are given in Table 1. A decrease of \(\varepsilon\) with energy can be explained by the energy dependence of the selection parameters, as well as by an increase in the energy leakage beyond the calorimeter. In Fig. 5 the angular detection efficiency is shown at the beam energy \(E_{b}=960\) MeV. The detection efficiency in our measurement is of the order of 20%. It is important to check how correctly the fraction of events outside the selection conditions is simulated. Corrections were calculated for the three groups of selection conditions described in Section II. To do this, we invert the selection conditions for each selection group and then calculate the corresponding corrections \(\delta\) to the detection efficiency at each of the 8 energy points as follows: \[\delta=\frac{n_{0}}{n_{0}+n_{1}}\frac{m_{0}+m_{1}}{m_{0}}, \tag{5}\] where \(n_{0}\) (\(n_{1}\)) is the number of \(n\bar{n}\) events determined with the standard (inverted) selection cuts. These numbers were calculated from the time spectra fits with Eq. (4), as described in Section III. The values \(m_{0}\) and \(m_{1}\) are the corresponding MC event numbers. Examples of the time spectra obtained with inverted conditions are shown in Fig. 6.
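A minimal sketch of the correction in Eq. (5): given the yields obtained with the standard and inverted cuts in data (n0, n1) and in MC (m0, m1), the data/MC efficiency correction for one selection group is a one-line formula. The event counts used below are invented for illustration and are not taken from the paper's tables.

```python
def efficiency_correction(n0, n1, m0, m1):
    """Eq. (5): delta = [n0 / (n0 + n1)] * [(m0 + m1) / m0] for one group of cuts."""
    return (n0 / (n0 + n1)) * ((m0 + m1) / m0)

# Purely illustrative yields (standard / inverted selection, data and MC)
n0, n1 = 676.0, 180.0
m0, m1 = 5200.0, 1250.0
print(f"delta = {efficiency_correction(n0, n1, m0, m1):.3f}")
```

Corrections of this type, computed per selection group and multiplied together, are what populate the total correction listed in Table 2 below.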
The first group of selection conditions includes the requirement of no charged tracks in \begin{table} \begin{tabular}{c c c c c c} & \(E_{b}\) (MeV) & \(\delta_{1}\) & \(\delta_{2}\) & \(\delta_{3}\) & \(\delta_{E}\) & \(\delta_{t}\) \\ \hline 1 & 945.5 & \(0.991\pm 0.022\) & \(1.292\pm 0.092\) & \(0.971\pm 0.038\) & \(1.005\pm 0.005\) & \(1.249\pm 0.102\) \\ 2 & 950.3 & \(0.977\pm 0.018\) & \(1.214\pm 0.062\) & \(0.985\pm 0.030\) & \(1.009\pm 0.009\) & \(1.179\pm 0.072\) \\ 3 & 960.3 & \(9.966\pm 0.019\) & \(1.077\pm 0.050\) & \(0.992\pm 0.028\) & \(1.012\pm 0.012\) & \(1.044\pm 0.062\) \\ 4 & 970.8 & \(0.949\pm 0.021\) & \(1.198\pm 0.061\) & \(0.980\pm 0.050\) & \(1.018\pm 0.018\) & \(1.134\pm 0.084\) \\ 5 & 968.8 & \(0.958\pm 0.027\) & \(1.031\pm 0.080\) & \(0.896\pm 0.044\) & \(1.018\pm 0.018\) & \(0.901\pm 0.097\) \\ 6 & 980.3 & \(0.997\pm 0.031\) & \(1.102\pm 0.073\) & \(0.986\pm 0.043\) & \(1.021\pm 0.021\) & \(1.106\pm 0.093\) \\ 7 & 990.4 & \(0.925\pm 0.033\) & \(1.131\pm 0.080\) & \(0.889\pm 0.041\) & \(1.024\pm 0.024\) & \(0.952\pm 0.099\) \\ 8 & 1003.5 & \(0.915\pm 0.024\) & \(1.065\pm 0.056\) & \(0.796\pm 0.028\) & \(1.028\pm 0.028\) & \(0.797\pm 0.073\) \\ \hline \end{tabular} \end{table} Table 2: The beam energy (\(E_{b}\)), the correction to the detection efficiency \(\delta_{1}\) from SND internal system, the correction \(\delta_{2}\) from SND external system, the correction \(\delta_{3}\) from the EMC thresholds, the correction \(\delta_{E}\) from the lost EMC energy, the correction \(\delta_{t}\) is the total correction. Figure 6: The event time spectra with inverted selection conditions. Left panel — inverted 2-nd group cut at \(E_{b}=970\) MeV, Right panel — inverted 3-d group cut with \(0.7E_{b}<E_{\rm cal}<E_{b}\) at \(E_{b}=960\) MeV. Light shaded histogram shows the background distribution. Dark shaded histogram in the right plot is \(n\bar{n}\) contribution. an event. When studying inverse selections we assume the presence of central charged tracks with \(D_{xy}>0.5\) cm, where \(D_{xy}\) is the distance between the track and the axis of the beams. A possible background from the related process \(e^{+}e^{-}\to p\bar{p}\) should be discussed here. In the energy region \(E_{b}>960\) MeV protons and antiprotons give central collinear tracks and rejected by the requirement \(D_{xy}>0.5\) cm, as well as events of other processes of \(e^{+}e^{-}\) annihilation with charged tracks. However at \(E_{b}<960\) MeV the protons and antiprotons are slow and stop at the collider vacuum pipe. In this case the antiproton annihilates with the production of charged tracks, wchich can be with \(D_{xy}>0.5\) cm. But here, too, the \(e^{+}e^{-}\to p\bar{p}\) background is suppressed by the fitting of time spectrum, since the annihilation delay time does not exceed 1 ns. For the second group the inverted selection conditions were used without changes. For the third group of selection conditions, a partial inversion was used, that is, the condition \(0.7E_{b}<E_{\rm cal}<E_{b}\) was applied. An additional correction arises from the events, in which the antineutrons pass the calorimeter without interaction, and from the events with a small calorimeter energy \(E_{\rm cal}<0.7E_{b}\). These events are not taken into analysis due to the large background and therefore not available for correction with the procedure described above. Their share in MC varies from 1.9% at the energy \(E_{b}\)=945 MeV to 8.5% at \(E_{b}\)=1000 MeV. 
It was previously noted in chapter III, that to desribe the shape of data time spectrum the contribution of the process of antineutron scattering in MC should be reduced by a factor of 1.5. With such a change, the proportion of events with \(E_{\rm cal}<0.7E_{b}\) in MC reduces to 1.4% at \(E_{b}\)=945 to 5.7% at \(E_{b}\)=1000 MeV. The difference between these values is taken into account as an additional correction \(\delta_{E}\) to the detection efficiency with the 100% of uncertainty. The measured by selection groups corrections \(\delta_{1},\delta_{2},\delta_{3}\), as well as \(\delta_{E}\), are multiplied \(\delta_{t}=\delta_{1}\delta_{2}\delta_{3}\delta_{E}\) and all are given in the Table 2. It can be seen, that the total efficiency correction \(\delta_{t}\) changes in limits 0.8--1.25 with energy, what is explained by the strong energy dependence of the antineutron absobtion length. The corrected detection efficiency is obtained from the MC efficiency by multiplying by the total correction \(\delta_{t}\). The values of the corrected efficiency are given along with systematic errors in the Table 1. Here, unlike our previous measurement [1], the corrections in different energy points are not correlated. ## VI The measured \(e^{+}e^{-}\to n\bar{n}\) cross section Using the number of \(n\bar{n}\) events \(N_{n\bar{n}}\), luminosity \(L\) and detection efficiency \(\varepsilon\) (Table 1), the visible cross section \(\sigma_{vis}(E)=N_{n\bar{n}}/L\varepsilon\) can be calculated. The Born cross section \(\sigma(E)\) we need is related to the visible cross \(\sigma_{vis}(E)\) in the following form : \[\sigma_{vis}(E) = \sigma(E)(1+\delta(E)) \tag{6}\] \[= \int_{-\infty}^{+\infty}G(E^{\prime},E)dE^{\prime}\] \[\int_{0}^{x_{max}}W(s,x)\sigma(s(1-x))dx,\] where \(W(s,x)\) is the radiator function [9] describing emission of photons with energy \(xE_{b}\) by initial electrons and positrons, \(G(E^{\prime},E)\) is a Gaussian function describing the c.m. energy spread. In function \(W(s,x)\) the contribution of the vacuum polarization is not taken into account, so our Born cross section is a "dressed" cross section. The factor \((1+\delta(E))\) takes into account both the radiative corrections and beam energy spread. This factor is calculated in each of 8 energy points using the Born cross section, obtained by the fitting of the visible cross section using Eq. 6. The energy dependence of the Born cross section is described by Eq.2, in which the neutron effective formfactor has a form of a second order polinomial function, as shown in more detail in the next chapter. The measured Born cross section is shown in the Fig. 7 and listed in the Table 1. The dominant contribution into systematic error is made by the detection efficiency correction error, given in the Table 2. Uncertainties in the value of luminosity (1%) and radiative correction (2%) are also taken into account. In Fig. 7 the total statistical and systematic error is shown. In comparison with our preceding work [1], the measured cross section has 2 times lower statistical error and 1.5 times lower systematic error. At the maximum energy \(E=2\) GeV our cross section is in good agreement with the last BESIII measurement [2]. ## VII The neutron effective timelike formfactor The effective neutron form factor calculated from the measured cross section using Eq. (2) is listed in the Table 1 and shown in Fig. 
8 as a function of the antineutron momentum together with the BESIII data [2] and the proton effective form factor measured by the BABAR experiment [10]. The curve in Fig. 8, approximating the form factor, is a second order polynomial \(|F_{n}|=a_{0}+a_{1}p_{n}+a_{2}p_{n}^{2}\), in which the parameters \(a_{i}\) are obtained from the fit and \(p_{n}\) is the antineutron momentum. The following parameter values are obtained: \(a_{0}=0.398\pm 0.022\), \(a_{1}=-0.713\pm 0.126\), \(a_{2}=0.268\pm 0.166\). When the fitted curve is extrapolated to zero momentum, the expected value of the neutron form factor at the threshold is \(a_{0}\simeq\)0.4. One can see from Fig. 8 that the proton form factor is noticeably larger than the neutron one and that their ratio near the threshold could be close to 3/2. ## VIII Summary The experiment to measure the \(e^{+}e^{-}\to n\bar{n}\) cross section and the neutron timelike form factor has been carried out with the SND detector at the VEPP-2000 \(e^{+}e^{-}\) collider in the energy region from 1891 to 2007 MeV. The measured \(e^{+}e^{-}\to n\bar{n}\) cross section varies with energy within 0.4-0.6 nb and agrees with the recent SND measurement [1], while having 2 times better statistical accuracy. At the maximum energy our cross section is in agreement with the latest BESIII measurement [2]. The neutron effective timelike form factor is extracted from the measured cross section using Eq. (2). The form factor decreases with energy from 0.3 to 0.2. In magnitude, the neutron form factor turns out to be noticeably smaller than the proton one. **ACKNOWLEDGMENTS**. This work was supported by the RSF grant No. 23-22-00011.
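As a closing numerical cross-check of Eq. (2) and of the quadratic parametrization of \(|F_{n}|\) quoted above, the short sketch below converts two of the measured cross sections back into effective form factors; the physical constants are standard, and only the two (E_b, sigma) pairs are taken from Table 1.

```python
import numpy as np

alpha, m_n = 1.0/137.036, 0.93957          # fine-structure constant, neutron mass (GeV)
GEV2_TO_NB = 0.3894e6                       # 1 GeV^-2 expressed in nb

def form_factor(E_b, sigma_nb):
    """Invert Eq. (2): effective |F| from the Born cross section (E_b in GeV, sigma in nb)."""
    s = (2.0 * E_b)**2
    beta = np.sqrt(1.0 - 4.0 * m_n**2 / s)
    gamma2 = (E_b / m_n)**2
    prefactor = 4.0*np.pi*alpha**2*beta/(3.0*s) * (1.0 + 0.5/gamma2) * GEV2_TO_NB
    return np.sqrt(sigma_nb / prefactor)

# Two points from Table 1 (E_b in GeV, sigma in nb); Table 1 quotes |F| = 0.322 and 0.186
for E_b, sigma in [(0.9455, 0.420), (1.0035, 0.374)]:
    p_n = np.sqrt(E_b**2 - m_n**2)                       # antineutron momentum in GeV
    F_fit = 0.398 - 0.713*p_n + 0.268*p_n**2             # quadratic fit quoted above
    print(f"E_b = {E_b:.4f} GeV:  |F| = {form_factor(E_b, sigma):.3f},  fit = {F_fit:.3f}")
```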
2308.16841
The degrees of the orientation-preserving automorphism groups of toroidal maps and hypermaps
This paper is an exploration of the faithful transitive permutation representations of the orientation-preserving automorphisms groups of highly symmetric toroidal maps and hypermaps. The main theorems of this paper give a list of all possible degrees of these specific groups. This extends prior accomplishments of the authors, wherein their focus was confined to the study of the automorphisms groups of toroidal regular maps and hypermaps. In addition the authors bring out the recently developed {\sc GAP} package {\sc corefreesub} that can be used to find faithful transitive permutation representations of any group. With the aid of this powerful tool, the authors show how Schreier coset graphs of the automorphism groups of toroidal maps and hypermaps can be easily constructed.
Maria Elisa Fernandes, Claudio Alexandre Piedade
2023-08-31T16:20:43Z
http://arxiv.org/abs/2308.16841v1
# The degrees of the orientation-preserving automorphism groups of toroidal maps and hypermaps ###### Abstract. This paper is an exploration of the faithful transitive permutation representations of the orientation-preserving automorphisms groups of highly symmetric toroidal maps and hypermaps. The main theorems of this paper give a list of all possible degrees of these specific groups. This extends prior accomplishments of the authors, wherein their focus was confined to the study of the automorphisms groups of toroidal regular maps and hypermaps. In addition the authors bring out the recently developed GAP package corefreesub that can be used to find faithful transitive permutation representations of any group. With the aid of this powerful tool, the authors show how Schreier coset graphs of the automorphism groups of toroidal maps and hypermaps can be easily constructed. **Keywords:** Chiral Toroidal Maps, Chiral Toroidal Hypermaps, Chiral Polyhedra, Permutation Groups, Schreier Coset Graphs. **2010 Math Subj. Class:** 05E18, 20B25, 52B11 ## 1. Introduction A faithful permutation representation of a group refers to a specific type of group action where each group element is represented by a unique permutation of a set \(X\), and no two distinct group elements result in the same permutation. In other words, this representation captures all the distinct elements and their actions within the group. The degree of this representation is the size of the set \(X\). A faithful permutation representation is valuable because it allows us to understand the structure and behavior of a group by studying how its elements permute the elements of a set. It's called "faithful" because it faithfully captures the group's structure without collapsing distinct group elements into the same permutation. This representation helps mathematicians and researchers analyze and classify groups, understand their properties, and explore their relationships with other mathematical objects. The study of minimal faithful degrees and permutation representations is an active area of research in group theory. For some groups, the minimal degree is relatively easy to compute, while for others, it remains an open question or requires sophisticated mathematical techniques. The automorphism groups of regular polytopes are string C-groups, smooth quotients of Coxeter groups with linear diagrams. In particular, these groups are generated by an ordered set of involutions and nonconsecutive involutions of this set commute. Faithful transitive permutation representations of string C-groups are represented by undirected Schreier coset graphs, satisfying some additional properties due the commuting property of the generators [16]. These graphs have been an important tool to discover examples of abstract regular polytopes and to accomplish comprehensive classifications of such geometric objects [6, 9, 8, 10, 2, 7]. This inspired the authors to investigate the different ways of representing a group by a graph (corresponding to a faithful transitive permutation representation). Their research initiated by the study of the automorphism groups of toroidal regular maps [11, 12]. Subsequently, they delved into regular hypermaps, followed by locally toroidal regular polytopes [13, 14]. In all these works, the authors exclusively focused their investigations on regular structures. Now they will expand their focus considering toroidal chiral maps and hypermaps. 
The second author with Delgado also constructed a package for GAP [15], named "corefreesub", to compute faithful transitive permutation representations of groups and their degrees, which is now available online [17]. The automorphism groups of toroidal chiral maps and hypermaps are \(2\)-generated groups. The two generators are rotations of the map, typically a face-rotation and a vertex-rotation. The orientation-preserving automorphims groups of toroidal regular maps, which are index two subgroups of the automorphism group of these maps will also be included in our classification. Similarly to what was done in our previous works we list all possible degrees of the orientation-preserving groups of automorphisms of toroidal maps. The correspondence between faithful transitive permutation representations and core-free subgroups, which is significant concept in group theory that relates group actions to subgroup structure, will be central in this work. For any faithful transitive permutation representation of a group, the stabilizer subgroup of the corresponding action is core-free. Conversely, for every core-free subgroup \(H\) of a group \(G\), there exists a faithful and transitive action of \(G\) on the set of cosets of \(H\). Thus in our classification we give all core-free indexes of the orientation-preserving automorphism groups, also known as rotational group, of toroidal maps and hypermaps. ## 2. Toroidal maps and hypermaps In this section, we provide a concise overview of toroidal maps and hypermaps, a topic that has been extensively explored by numerous authors[5, 1, 4, 3]. A common approach to creating a toroidal map is to use a rectangular grid that wraps around the torus. For that reason toroidal maps, which are embeddings of maps on the surface of a torus, are in correspondence with tesselations of the plane. There are three types of toroidal maps corresponding to the only three regular tessellations, whose basic building blocks are one of the following three regular polygons: the square, the triangle or the hexagon. Let \((0,1)\) and \((1,0)\) be unitary translations of the plane tesselation. Now consider a vector \((s_{1},s_{2})\) for some non-negative integers \(s_{1}\) and \(s_{2}\). The toroidal map that is obtained identifying opposite sides of a parallelogram with vertices \[(0,0),\,(s_{1},s_{2}),\,(s_{1}-s_{2},s_{1}+s_{2})\text{ and }(-s_{2},s_{1})\] for a quadrangular tesselation and \[(0,0),\,(s_{1},s_{2}),\,(-s_{2},s_{1}+s_{2})\text{ and }(s_{1}-s_{2},s_{1}+2s_{2})\] for the a triangular or a hexagonal tesselation. The resulting maps are denoted by \(\{4,4\}_{(s_{1},s_{2})}\) if the tiles of the plane tesselation are squares, \(\{3,6\}_{(s_{1},s_{2})}\) or \(\{6,3\}_{(s_{1},s_{2})}\) when the tiles are, respectively, triangles or hexagons (see the examples of Figure 1). Similarly a toroidal hypermap, an embedding of a hypergraph on the torus, is obtained from a regular hexagonal tesselation having vertices with two colors (see Figure 2). The toroidal hypermap associated with a vector \((s_{1},s_{2})\) is denoted by \((3,3,3)_{(s_{1},s_{2})}\). If a toroidal map or hypermap is identical to its mirror image it is _regular_, otherwise it is _chiral_. A toroidal map or hypermap associated with the vector \((s_{1},s_{2})\) is regular if and only if \(s_{1}s_{2}(s_{1}-s_{2})=0\). When \((s_{1},s_{2})\in\{(1,0),(0,1)\}\) we get a degenerated regular tesselation of the torus with either one or two faces. 
Moreover, with \((s_{1},s_{2})=(1,1)\) the action on the set of edges is never faithful. Let us consider the cases where the faces of the tesselations of the torus have the same shape as the ones of the corresponding planar tesselation. In what follows we assume that \((s_{1},s_{2})\notin\{(1,0),(0,1),(1,1)\}\). Plane tesselations are infinite regular polyhedra whose automorphism group is one of the Coxeter groups \([4,4]\), \([3,6]\) or \([6,3]\) [5]. The automorphism group of a regular hexagonal tesselation, having vertices with two colors, is also a Coxeter group having a triangular Coxeter diagram. The groups of automorphisms of toroidal maps and hypermaps are factorizations of these infinite Coxeter groups. A flag in a map (or hypermap) is a triple of mutually incident elements (vertex, edge, face) (or (hypervertex, hyperedge, hyperface)). Flags are _adjacent_ if they have exactly two elements in common. Figure 2 shows the hypermap \((3,3,3)_{(3,1)}\). Consider a flag \((x,y,z)\) (base flag) and its three adjacent flags \((x^{\prime},y,z)\), \((x,y^{\prime},z)\) and \((x,y,z^{\prime})\). Now let \(\rho_{0}\) be the reflection of the plane tesselation sending \((x,y,z)\) to \((x^{\prime},y,z)\), \(\rho_{1}\) the reflection sending \((x,y,z)\) to \((x,y^{\prime},z)\) and finally, \(\rho_{2}\) the reflection sending \((x,y,z)\) to \((x,y,z^{\prime})\). The group of automorphisms of the plane tesselation is generated by these three involutions. Consider the automorphisms \(a:=\rho_{0}\rho_{1}\), \(b:=\rho_{1}\rho_{2}\) and \(ab:=\rho_{0}\rho_{2}\), which are rotations: \(a\) is a counter-clockwise rotation around a face (or hyperface); \(b\) is a counter-clockwise rotation around a vertex (or hypervertex) and \(ab\) is a clockwise rotation around an edge (or hyperedge). The orders of \(a\), \(b\) and \(ab\) determine the Coxeter group. In the case of the maps the order of \(ab\) is \(2\) and the orders of \(a\) and \(b\) are in correspondence with the two parameters of the Coxeter group. For the bipartite hexagonal tesselation of the plane the order of \(a\), \(b\) and \(ab\) is \(3\). The rotations \(a\) and \(b\) are generators of the orientation-preserving automorphism group of the tesselation, commonly called the _rotational group_. Among these orientation-preserving automorphisms we find unitary translations sending a tile to an adjacent tile. Let \(u\) and \(v\) be unitary translations sending the origin \((0,0)\) to \((1,0)\) and \((0,1)\) respectively. The rotational group \(G\) of a toroidal map or hypermap is a factorization of the rotational group of the corresponding tesselation by the relation \(u^{s_{1}}v^{s_{2}}=id\) (where \(id\) denotes the identity of \(G\)). We consider the translations \(u\) and \(v\) as defined in Table 2. As the map \(\{6,3\}_{(s_{1},s_{2})}\) is the dual of \(\{3,6\}_{(s_{1},s_{2})}\), we get the rotational group of \(\{6,3\}_{(s_{1},s_{2})}\) by interchanging the rotations \(a\) and \(b\). Having this in mind, the results for the map \(\{6,3\}_{(s_{1},s_{2})}\) can be obtained from the corresponding results for the map \(\{3,6\}_{(s_{1},s_{2})}\). The subgroup \(T\) of \(G\) generated by \(u\) and \(v\) is abelian and is a normal subgroup of \(G\). Moreover, \(T\) acts regularly on the set \(V\) of vertices of the toroidal map, hence \(|V|=|T|\). In addition, \(G\) acts on the flags with two orbits, hence \(|G|=m|V|\) where \(m\) is the order of \(a\). The translations \(u\) and \(v\) are conjugate and have order \(\frac{|V|}{gcd(s_{1},s_{2})}\). 
When the map is regular, there exists an automorphism of \(G\) sending \(a\) to \(a^{-1}\) and \(b\) to \(b^{-1}\). In this case the group of automorphims of the map is twice bigger then its rotational group. In the chiral case the rotational group is precisely the group of automorphisms of the map. In what follows \(G=\langle a,b\rangle\) is the rotational group of a toroidal map or hypermap and \(T=\langle u,v\rangle\) is the group of translations defined in this section. We now assume that \(G\) has a faithful transitive permutation representation of degree \(n\). We will determine, for each toroidal map and hypermap, the possible values for the degree \(n\) of \(G\). Before we proceed we give some general results that work the same way for any toroidal embedding. ## 3. Preliminary Results One consequence of the definition of the translation group \(T\) is the following. **Proposition 3.1**.: _Any element of the translation subgroup \(T\) is of the form \(u^{i}v^{j}\) with \(i\in\{1,\ldots,|u|\}\) and \(j\in\{1,\ldots,gcd(s_{1},s_{2})\}\)._ Proof.: The index of \(\langle u\rangle\) in \(T\) is equal to \(gcd(s_{1},s_{2})\), thus it is suficient to prove that \(v^{gcd(s_{1},s_{2})}\in\langle u\rangle\). Let \(x,y\in\mathbb{Z}\) be such that \(gcd(s_{1},s_{2})=xs_{1}+ys_{2}\) (given by Bezout's identity). Consider first the toroidal map \(\{4,4\}_{(s_{1},s_{2})}\). Conjugating the equality \(u^{s_{1}}=v^{-s_{2}}\) by \(a\), we get \(u^{s_{2}}=v^{s_{1}}\). Hence \(v^{gcd(s_{1},s_{2})}=v^{xs_{1}+ys_{2}}=u^{-ys_{1}+xs_{2}}\in\langle u\rangle\). For the toroidal map \(\{3,6\}_{(s_{1},s_{2})}\) and hypermap \((3,3,3)_{(s_{1},s_{2})}\), conjugating the equality \(u^{s_{1}}=v^{-s_{2}}\) by \(a\), we get \(v^{s_{1}}=u^{-(s_{1}+s_{2})}\). Thus \(v^{gcd(s_{1},s_{2})}=u^{-xs_{1}-(x+y)s_{2}}\). Hence, \(v^{gcd(s_{1},s_{2})}\in\langle u\rangle\). Now as \(T\) is a normal subgroup of \(G\), \(T\) is fixed-point-free. Hence if \(T\) is transitive then it acts regularly on \(n\). In that case \(n=|T|\). In what follows we assume that \(n\neq|T|\). **Lemma 3.2**.: _If \(n\neq|T|\) then \(G\leq\operatorname{S_{k}}l\operatorname{S_{m}}\) where \(k\) is the size of a \(T\)-orbit and \(m\) is the number of \(T\)-orbits. Moreover \(m\) is a divisor of \(\frac{|G|}{|T|}\) and \(k=\frac{|T|}{d}\), where \(d\) is a divisor of \(gcd(s_{1},s_{2})\)._ Proof.: Suppose that \(n\neq|T|\), then \(T\) is intransitive and the \(T\)-orbits form a block system for \(G\). Let \(m\) be the number of block and \(k\) be the size of a block for this block system. We have that \(G\leq\operatorname{S_{k}}l\operatorname{S_{m}}\). Let us now determine the size of a block. Consider the induced action of \(G\) on the set of \(m\) blocks and its induced homomorphim \(f:G\to S_{m}\). As \(T\) lies in the kernel of this homomorphism, and \(Im(f)\cong G/ker(f)\), \(|Im(f)|\) is a divisor \(\frac{|G|}{|T|}\). Particularly, \(m\) is a divisor of \(\frac{|G|}{|T|}\). It remains to prove that \(k=|u|d\), where \(d\) is a divisor of \(gcd(s_{1},s_{2})\). Consider the actions \(\sigma\) and \(\tau\) of \(u\) and \(v\), respectively, on a block and let \(K:=\langle\sigma,\,\tau\rangle\). Let \(A:=|\sigma|\), \(B:=|K:\langle\sigma\rangle|\) and \(C:=|K:\langle\tau\rangle|\). We have that \(K\) has order \(AB\) and acts regularly on the block, hence \(k=AB\). 
As \(\sigma\) and \(\tau\) commute, we have the following \[K/\langle\sigma\rangle=\{\langle\sigma\rangle,\,\langle\sigma\rangle\tau, \langle\sigma\rangle\tau^{2},\ldots,\langle\sigma\rangle\tau^{B-1}\}\text{ and }\] \[K/\langle\tau\rangle=\{\langle\tau\rangle,\,\langle\tau\rangle\sigma, \langle\tau\rangle\sigma^{2},\ldots,\langle\tau\rangle\sigma^{C-1}\}.\] Thus \(B\) divides \(|\tau|\) and \(C\) divides \(A\). Let \(D:=A/C\). As \(k=AB=|\tau|C\) we have \(|\tau|=DB\). Now \[|u|=lcm(|\sigma|,|\tau|)=lcm(CD,\,BD)=D\,lcm(C,B)\] and \[k=AB=DCB=D\,lcm(C,B)\,gcd(C,B)=|u|\,gcd(C,B)=\frac{|T|\,gcd(C,B)}{gcd(s_{1},s_ {2})}\] Let us now prove that \(gcd(C,B)\) divides \(gcd(s_{1},s_{2})\). As both \(u^{s_{1}}\) and \(u^{s_{2}}\) are elements of \(\langle v\rangle\), we have that \(\sigma^{s_{1}}\) and \(\sigma^{s_{2}}\) must be elements of \(\langle\tau\rangle\), hence \(C\) must divide both \(s_{1}\) and \(s_{2}\), meaning it must divide \(gcd(s_{1},s_{2})\). Similarly \(\tau^{s_{1}}\) and \(\tau^{s_{2}}\) are elements of \(\langle\sigma\rangle\), and therefore \(B\) divides \(gcd(s_{1},s_{2})\). Consequently, \(gcd(C,B)\) is a divisor of \(gcd(s_{1},s_{2})\), as wanted. ## 4. Toroidal Maps of type \(\{4,4\}\) In this section let \(G\) be the rotational group of \(\{4,4\}_{(s_{1},s_{2})}\). **Proposition 4.1**.: _Let \(s_{1}+s_{2}>2\). The subgroups of \(G\), \(\langle a\rangle\), \(\langle b\rangle\) and \(\langle ab\rangle\) are core-free._ Proof.: Let \(H=\langle a\rangle\) and consider the intersection \(H\cap H^{b}=\langle a\rangle\cap\langle b^{-1}ab\rangle\). If \(x\in H\cap H^{b}\) and \(x\) is nontrivial then, for some \(i,j\in\{1,2,3\}\), \(x=a^{i}=b^{-1}a^{j}b\), or equivalently \(ba^{i}=a^{j}b\). This only can happen if \(s_{1}+s_{2}\leq 2\) which is not the case. For \(H=\langle b\rangle\) the intersections \(H\cap H^{a}\) is trivial and for \(H=\langle ab\rangle\) the intersection \(H\cap H^{b}\) is trivial. **Proposition 4.2**.: _Let \(d\) be a divisor of \(gcd(s_{1},s_{2})\). If \(s_{1}+s_{2}>2\), then \(\langle u^{s_{1}/d}v^{s_{2}/d}\rangle\) and \(\langle a^{2},u^{s_{1}/d}v^{s_{2}/d}\rangle\) are core-free subgroups of \(G\). Moreover these subgroups of \(G\) have indexes \(\frac{4(s_{1}^{2}+s_{2}^{2})}{d}\) and \(\frac{2(s_{1}^{2}+s_{2}^{2})}{d}\), respectively._ Proof.: Let \(H=\langle u^{s_{1}/d}v^{s_{2}/d}\rangle\), with \(d\) being a divisor of \(gcd(s_{1},s_{2})\). Note that \(|H|\) is \(d\) hence the index \(|G:H|\) is as in the statement of this proposition. Consider \(\gamma\in H\cap H^{a}\). Then \(\gamma=(u^{s_{1}/d}v^{s_{2}/d})^{i}=(v^{-s_{1}/d}u^{s_{2}/d})^{j}\), with \(i,j\in\{0,\ldots,d-1\}\). This implies that \((u^{s_{1}}v^{s_{2}})^{i/d}(u^{-s_{2}}v^{s_{1}})^{j/d}=id\). Geometrically this means that the origin \((0,0)\) and the vertex \((x,y)=i/d(s_{1},s_{2})+j/d(-s_{2},s_{1})\) are identical. As \(i,j\in\{0,\ldots,d-1\}\), this is only possible when \(i=j=0\). With this we have shown that \(H\cap H^{a}=\{id\}\). Now consider \(H=\langle a^{2},u^{s_{1}/d}v^{s_{2}/d}\rangle\), with \(d\) being a divisor of \(gcd(s_{1},s_{2})\). For \(s_{1}+s_{2}>2\) we have that \(a^{2}\notin T\) and we have the following equalities, which prove that \(\langle u^{s_{1}/d}v^{s_{2}/d}\rangle\) is a normal subgroup of \(H\). \[\begin{array}{l}a^{-2}ua^{2}=a^{-1}v^{-1}a=u^{-1}\\ a^{-2}va^{2}=a^{-1}ua=v^{-1}\end{array}\] Hence \(H=\langle u^{s_{1}/d}v^{s_{2}/d}\rangle\rtimes\langle a^{2}\rangle\). Let \(\gamma\in H\cap H^{a}\). 
\[\gamma=(u^{s_{1}/d}v^{s_{2}/d})^{i}(a^{2})^{l}=(v^{-s_{1}/d}u^{s_{2}/d})^{j}( a^{2})^{q},\] with \(i,j\in\{0,\ldots,d-1\}\) and \(l,q\in\{0,1\}\). Suppose that \((l,q)=(0,0)\). Then \(\gamma=(u^{s_{1}/d}v^{s_{2}/d})^{i}=(v^{-s_{1}/d}u^{s_{2}/d})^{j}\), which we gives \(\gamma=id\), as we have seen in the previous case. If \((l,q)\in\{(0,1),(1,0)\}\) then \(a^{2}\in T\), a contradiction. Hence \((l,q)=(1,1)\) and, consequently, \((i,j)=(0,0)\), giving that \(\gamma\in\langle a^{2}\rangle\). This proves that \(H\cap H^{a}\) is a subgroup of \(\langle a^{2}\rangle\). Using similar calculations we get that \(H^{b}\cap H^{ab}\leq\langle a^{2}\rangle^{b}\). Hence for \(s_{1}+s_{2}>2\), \(H\cap H^{a}\cap H^{b}\cap H^{ab}\) is trivial. Finally as \(|H|=2d\), we have that \(|G:H|=2\frac{s_{1}^{2}+s_{2}^{2}}{d}\). **Theorem 4.3**.: _Let \(s_{1}\) and \(s_{2}\) be nonnegative integers and \(D\) the set of divisors of \(gcd(s_{1},s_{2})\). Suppose that \(G\) is the rotational group of a toroidal map \(\{4,4\}_{(s_{1},s_{2})}\). The set of all possible degrees of a faithful transitive permutation representation of \(G\) is equal to_ \[\left\{s_{1}^{2}+s_{2}^{2}\right\}\cup\left\{\frac{2(s_{1}^{2}+s_{2}^{2})}{d} \,|\,d\in D\right\}\cup\left\{\frac{4(s_{1}^{2}+s_{2}^{2})}{d}\,|\,d\in D\right\}\] _when \(s_{1}+s_{2}>2\) and to \(\{8,16\}\) when \((s_{1},s_{2})\in\{(0,2),(2,0)\}\)._ Proof.: Let \(s_{1}+s_{2}>2\). By Proposition 4.1\(\langle b\rangle\) is core-free subgroup of \(G\). As \(|G:\langle b\rangle|=s_{1}^{2}+s_{2}^{2}\) there is a faithful transitive permutation representation of \(G\) of degree \(n=s_{1}^{2}+s_{2}^{2}\). If \(T\) is transitive then the degree of \(G\) is equals to the size of \(T\), which is \(s_{1}^{2}+s_{2}^{2}\). Then we may assume that \(T\) is intransitive. In this case, by Proposition 3.2, the degree of \(G\) is among the values given in the statement of this theorem. Finally, by Proposition 4.2, there exists a pair of core-free subgroups of \(G\) which have indexes equal to \(\frac{2(s_{1}^{2}+s_{2}^{2})}{d}\) and \(\frac{4(s_{1}^{2}+s_{2}^{2})}{d}\). The cases \((s_{1},s_{2})=(0,2)\) and \((s_{1},s_{2})=(2,0)\) can be computed using the "corefreesub" package [17]. ## 5. Toroidal Maps \(\{3,6\}\) In this section let \(G\) be the rotational group of \(\{3,6\}_{(s_{1},s_{2})}\). **Proposition 5.1**.: _Let \(s_{1}+s_{2}>2\). The subgroups of \(G\), \(\langle a\rangle\), \(\langle b\rangle\) and \(\langle ab\rangle\) are core-free._ Proof.: Let \(H=\langle a\rangle\) and consider the intersection \(H\cap H^{b}=\langle a\rangle\cap\langle b^{-1}ab\rangle\). If \(\gamma\in H\cap H^{b}\) then we have that \(\gamma=a^{i}=b^{-1}a^{j}b\), for \(i,j\in\{0,1,2\}\). Then we have \(ba^{i}=a^{j}b\) which is only possible when flags of adjacent faces are identified, but that is never the case when \(s_{1}+s_{2}>2\). Hence \(\gamma=id\). For \(H=\langle b\rangle\) (resp. \(H=\langle ab\rangle\)) the intersections \(H\cap H^{a}\) (resp. \(H\cap H^{b}\)) are trivial. **Proposition 5.2**.: _Let \(d\) be a divisor of \(gcd(s_{1},s_{2})\). If \(s_{1}+s_{2}>2\), then \(\langle u^{s_{1}/d}v^{s_{2}/d}\rangle\) and \(\langle b^{3},u^{s_{1}/d}v^{s_{2}/d}\rangle\) are core-free subgroups of \(G\). Moreover these subgroups of \(G\) have indexes \(\frac{6(s_{1}^{2}+s_{1}s_{2}+s_{2}^{2})}{d}\) and \(\frac{3(s_{1}^{2}+s_{1}s_{2}+s_{2}^{2})}{d}\), respectively._ Proof.: Let \(H=\langle u^{s_{1}/d}v^{s_{2}/d}\rangle\), with \(d\) being a divisor of \(gcd(s_{1},s_{2})\). 
Consider \(\gamma\in H\cap H^{a}\). Then \(\gamma=(u^{s_{1}/d}v^{s_{2}/d})^{i}=(v^{(-s_{2}-s_{1})/d}u^{s_{2}/d})^{j}\), with \(i,j\in\{0,\ldots,d-1\}\). Then \(u^{\frac{s_{1}i-s_{2}j}{d}}v^{\frac{s_{2}i+(s_{1}+s_{2})j}{d}}=id\). Geometrically, this implies that the origin \((0,0)\) and the point with coordinates \((s_{1},s_{2})i/d+(-s_{2},s_{1}+s_{2})j/d\) are vertices of the parallelogram used in the construction of the map. As \(i,j\in\{0,\ldots,d-1\}\), we must have \(i=j=0\). This proves that \(H\cap H^{a}\) is trivial. Now let \(H=\langle b^{3},u^{s_{1}/d}v^{s_{2}/d}\rangle\), with \(d\) being a divisor of \(gcd(s_{1},s_{2})\). Let us first prove that we can write \(H\) as a semi-direct product \(\langle u^{s_{1}/d}v^{s_{2}/d}\rangle\rtimes\langle b^{3}\rangle\). For \(s_{1}+s_{2}>2\) we have that \(b^{3}\notin T\) and the following equalities show that \(\langle u^{s_{1}/d}v^{s_{2}/d}\rangle\) is a normal subgroup of \(H\). \[b^{-3}ub^{3}=b^{-2}t^{-1}b^{2}=b^{-1}v^{-1}b=u^{-1}\] \[b^{-3}vb^{3}=b^{-2}wb^{2}=b^{-1}t^{-1}b=v^{-1}\] Let us prove that \(H\cap H^{b^{2}}\leq\langle b^{3}\rangle\). If \(\gamma\in H\cap H^{b^{2}}\), then \(\gamma=(b^{3})^{l}(u^{s_{1}/d}v^{s_{2}/d})^{i}=(b^{3})^{q}(v^{(-s_{1}-s_{2})/d}u^{s_{2}/d})^{j}\), with \(i,j\in\{0,\ldots,d-1\}\) and \(l,q\in\{0,1\}\). Now if \((l,q)=(0,0)\), then, as we have proven before, \((i,j)=(0,0)\), hence \(\gamma=id\). If \((l,q)\in\{(0,1),(1,0)\}\) then \(b^{3}\in T\), a contradiction. If \((l,q)=(1,1)\), then \((i,j)=(0,0)\) and \(\gamma=b^{3}\). Consequently, \(H\cap H^{b^{2}}\leq\langle b^{3}\rangle\), as claimed. Similarly we have \(H^{a^{-1}}\cap H^{b^{2}a^{-1}}\leq\langle ab^{3}a^{-1}\rangle\). As \(\langle b^{3}\rangle\cap\langle ab^{3}a^{-1}\rangle\) is trivial for \(s_{1}+s_{2}>2\), \(H\) is a core-free subgroup of \(G\), as wanted. Combining Lemma 3.2 and Proposition 5.1, to determine all the possibilities for the degree \(n\) of \(G\) it remains to consider the case \(m=2\), that is, the case where \(T\) has exactly two orbits. The following proposition shows that in that case \(n=2|T|=2(s_{1}^{2}+s_{1}s_{2}+s_{2}^{2})\). **Proposition 5.3**.: _If \(m=2\) then \(k=|T|\)._ Proof.: Suppose that \(m=2\). Let \(B_{1}\) and \(B_{2}\) be the orbits of \(T\) and, for \(i\in\{1,2\}\) denote by \(u_{i}\) and \(v_{i}\) the actions of \(u\) and \(v\) on the block \(B_{i}\), respectively. As \(a^{3}=id\), \(a\) must fix the blocks, and by transitivity of \(G\), \(b\) must swap the blocks. Then \(|u_{1}|=|v_{1}|\) and \(|u_{2}|=|u_{1}|\). Hence \(|u_{1}|=|u|\). Let \(K:=\langle u_{1},v_{1}\rangle\) and \(d:=|K:\langle u_{1}\rangle|=|K:\langle v_{1}\rangle|\). We have that \(d\) is a divisor of \(gcd(s_{1},s_{2})\). Let \(j\in\{0,\ldots,|u|-1\}\) be such that \(u_{1}^{d}=v_{1}^{j}\). Conjugating this equality by \(a\), \(b\) and \(ab\), respectively, we get the equalities \[v_{1}^{d}=u_{1}^{d-j},\,v_{2}^{d}=u_{2}^{d-j}\text{ and }u_{2}^{d}=u_{2}^{d-j}v_{2}^{j-d}.\] From the last two relations we have that \(u_{2}^{d}=v_{2}^{j}\). Hence, \(u^{d}=v^{j}\). From the proof of Proposition 3.1, we have that both \(d\) and \(j\) must be multiples of \(gcd(s_{1},s_{2})\). Since \(d\) must divide \(gcd(s_{1},s_{2})\), we get that \(d=gcd(s_{1},s_{2})\). As \(|u|=\frac{|T|}{gcd(s_{1},s_{2})}\) then the size of the block is \[k=|K|=|u|d=\frac{|T|}{gcd(s_{1},s_{2})}\cdot gcd(s_{1},s_{2})=|T|.\] **Theorem 5.4**.: _Let \(s_{1}\) and \(s_{2}\) be nonnegative integers and \(D\) the set of divisors of \(gcd(s_{1},s_{2})\). 
Suppose that \(G\) is the rotational group of a toroidal map \(\{3,6\}_{(s_{1},s_{2})}\). The set of all possible degrees of a faithful transitive permutation representation of \(G\) is equal to_ \[\left\{s_{1}^{2}+s_{1}s_{2}+s_{2}^{2},\,2(s_{1}^{2}+s_{1}s_{2}+s_{2}^{2})\right\}\cup\left\{\frac{3(s_{1}^{2}+s_{1}s_{2}+s_{2}^{2})}{d}\,|\,d\in D\right\}\cup\left\{\frac{6(s_{1}^{2}+s_{1}s_{2}+s_{2}^{2})}{d}\,|\,d\in D\right\}\] _when \(s_{1}+s_{2}>2\) and to \(\{6,8,12\}\) when \((s_{1},s_{2})\in\{(0,2),(2,0)\}\)._ Proof.: Let \(s_{1}+s_{2}>2\). By Proposition 5.1, \(\langle a\rangle\) and \(\langle b\rangle\) are core-free subgroups of \(G\). As \(|G:\langle a\rangle|=2(s_{1}^{2}+s_{1}s_{2}+s_{2}^{2})\) and \(|G:\langle b\rangle|=s_{1}^{2}+s_{1}s_{2}+s_{2}^{2}\) there is a faithful transitive permutation representation of \(G\) on the set of cosets of each of these two subgroups. If \(T\) is transitive then the degree of \(G\) is equal to the size of \(T\), which is \(s_{1}^{2}+s_{1}s_{2}+s_{2}^{2}\). Then we may assume that \(T\) is intransitive. Hence the remaining degrees given in this theorem are obtained from Lemma 3.2 and Propositions 5.2 and 5.3. The cases \((s_{1},s_{2})=(0,2)\) and \((s_{1},s_{2})=(2,0)\) can be computed using the "corefreesub" package [17]. ## 6. Toroidal Hypermaps \((3,3,3)\) In this section let \(G\) be the rotational group of the hypermap \((3,3,3)_{(s_{1},s_{2})}\). **Proposition 6.1**.: _Let \(d\) be a divisor of \(gcd(s_{1},s_{2})\). If \(s_{1}+s_{2}>2\), then \(\langle a\rangle\), \(\langle b\rangle\), \(\langle ab\rangle\) and \(\langle u^{s_{1}/d}v^{s_{2}/d}\rangle\) are core-free subgroups of \(G\)._ Proof.: The proof is similar to the proof of Propositions 5.1 and 5.2. **Theorem 6.2**.: _Let \(s_{1}\) and \(s_{2}\) be nonnegative integers with \((s_{1},s_{2})\notin\{(1,0),(0,1),(1,1)\}\) and \(D\) the set of divisors of \(gcd(s_{1},s_{2})\). Suppose that \(G\) is the rotational group of a toroidal hypermap \((3,3,3)_{(s_{1},s_{2})}\). The set of all possible degrees of a faithful transitive permutation representation of \(G\) is equal to_ \[\Big{\{}s_{1}^{2}+s_{1}s_{2}+s_{2}^{2}\Big{\}}\cup\Big{\{}\tfrac{3(s_{1}^{2}+s_{1}s_{2}+s_{2}^{2})}{d}\,|\,d\in D\Big{\}}.\] ## 7. Schreier coset graphs Let \(G=\langle g_{i}\,|\,i\in I\rangle\) be a finite group. Suppose that \(G\) has a faithful transitive permutation representation of degree \(n\) (which corresponds to a core-free subgroup of \(G\)). A _Schreier coset graph_ of \(G\) has \(n\) vertices and has a directed edge \((x,y)\) with label \(g_{i}\) whenever \(xg_{i}=y\). When \(g_{i}\) is an involution, the two directed edges \((x,y)\) and \((y,x)\) are replaced by a single undirected edge \(\{x,y\}\) with label \(g_{i}\). In this section, we give computational tools to represent Schreier coset graphs of any group, but as an example we consider automorphism groups of toroidal maps and hypermaps. In [11, 12] the authors gave some examples of Schreier coset graphs of toroidal regular maps. Due to the complexity of drawing Schreier coset graphs of toroidal chiral maps and hypermaps by hand, we leveraged the functionalities offered by the corefreesub GAP package [15, 17]. In what follows, we present code that can be executed using the GAP system, provided that the corefreesub package has been installed. As an example we obtain graphs of minimal degree for the map \(\{4,4\}_{(2,1)}\) and the hypermap \((3,3,3)_{(3,2)}\). The Schreier coset graphs obtained are represented in Figures 3 and 4. 
gap> LoadPackage("corefreesub");; F := FreeGroup("a","b");; s1 := 3 ;;; s2 := 2;; gap> G333 := F/[F.1^3, F.2^3, (F.1*F.2)^3, (F.1*F.2^-1)^s1*(F.1^-1*F.2)^s2];; gap> FTPRs333 := FaithfulTransitivePermutationRepresentations(G333); [ [ a, b ] -> [ (1,2,3)(4,10,11)(5,12,13)(6,14,15)(7,16,17)(8,18,19)(9,20,21) (22,37,38)(23,39,40)(24,41,25)(26,42,43)(27,44,45)(28,46,47)(29,48,30) (31,49,50)(32,51,52)(33,53,54)(34,55,35)(36,56,57), (1,4,5)(2,6,7)(3,8,9) (10,21,22)(11,23,24)(12,25,26)(13,27,14)(15,28,29)(16,30,31)(17,32,18) (19,33,34)(20,35,36)(37,57,48)(38,47,39)(40,53,52)(41,51,50)(42,49,56) (43,55,44)(45,54,46) ], [ a, b ] -> [ (1,2,3)(4,7,8)(5,9,10)(6,11,12)(13,19,17)(14,16,15), (2,4,5) (3,6,7)(8,13,14)(9,15,16)(10,17,11)(12,18,19) ] ] gap> DrawTeXFTPRGraph(FTPRs333[2],rec(layout := "neato", gen_name := ["a","b"])); ## 8. Acknowledgments The author Maria Elisa Fernandes was supported by the Center for Research and Development in Mathematics and Applications (CIDMA) through the Portuguese Foundation for Science and Technology (FCT - Fundacao para a Ciencia e a Tecnologia), references UIDB/04106/2020 and UIDP/04106/2020. The author Claudio Alexandre Piedade was partially supported by CMUP, member of LASI, which is Figure 4. A Schreier coset graph of \((3,3,3)_{(3,2)}\)
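To make the degree formulas of Theorems 4.3, 5.4 and 6.2 concrete, a minimal Python sketch (illustrative only; the helper names are ours and are not part of the corefreesub package) can tabulate the predicted degree sets for the non-degenerate cases \(s_{1}+s_{2}>2\). For \((3,3,3)_{(3,2)}\) it returns 19 and 57, matching the two permutation representations printed by the GAP session above.

```python
from math import gcd

def divisors(n):
    # all positive divisors of n >= 1
    return [d for d in range(1, n + 1) if n % d == 0]

def degrees_44(s1, s2):
    # Theorem 4.3 (case s1 + s2 > 2): degrees for the map {4,4}_(s1,s2)
    t = s1 ** 2 + s2 ** 2                      # |T|
    D = divisors(gcd(s1, s2))
    return sorted({t} | {2 * t // d for d in D} | {4 * t // d for d in D})

def degrees_36(s1, s2):
    # Theorem 5.4 (case s1 + s2 > 2): degrees for the map {3,6}_(s1,s2)
    t = s1 ** 2 + s1 * s2 + s2 ** 2
    D = divisors(gcd(s1, s2))
    return sorted({t, 2 * t} | {3 * t // d for d in D} | {6 * t // d for d in D})

def degrees_333(s1, s2):
    # Theorem 6.2: degrees for the hypermap (3,3,3)_(s1,s2)
    t = s1 ** 2 + s1 * s2 + s2 ** 2
    D = divisors(gcd(s1, s2))
    return sorted({t} | {3 * t // d for d in D})

print(degrees_44(2, 1))    # [5, 10, 20]
print(degrees_333(3, 2))   # [19, 57]
```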
2306.00557
Improving Protein-peptide Interface Predictions in the Low Data Regime
We propose a novel approach for predicting protein-peptide interactions using a bi-modal transformer architecture that learns an inter-facial joint distribution of residual contacts. The current data sets for crystallized protein-peptide complexes are limited, making it difficult to accurately predict interactions between proteins and peptides. To address this issue, we propose augmenting the existing data from PepBDB with pseudo protein-peptide complexes derived from the PDB. The augmented data set acts as a method to transfer physics-based context-dependent intra-residue (within a domain) interactions to inter-residue (between domain) interactions. We show that the distributions of inter-facial residue-residue interactions share overlap with intra residue-residue interactions, enough to increase the predictive power of our bi-modal transformer architecture. In addition, this data augmentation allows us to leverage the vast amount of protein-only data available in the PDB to train neural networks, in contrast to template-based modeling that acts as a prior.
Justin Diamond, Markus Lill
2023-05-31T17:04:27Z
http://arxiv.org/abs/2306.00557v1
# Improving Protein-peptide Interface Predictions in the Low Data Regime ###### Abstract We propose a novel approach for predicting protein-peptide interactions using a bi-modal transformer architecture that learns an inter-facial joint distribution of residual contacts. The current data sets for crystallized protein-peptide complexes are limited, making it difficult to accurately predict interactions between proteins and peptides. To address this issue, we propose augmenting the existing data from PepBDB with pseudo protein-peptide complexes derived from the PDB. The augmented data set acts as a method to transfer physics-based contextdependent intra-residue (within a domain) interactions to the inter-residual (between) domains. We show that the distributions of inter-facial residue-residue interactions share overlap with inter residue-residue interactions, enough to increase predictive power of our bi-modal transformer architecture. In addition, this dataaugmentation allows us to leverage the vast amount of protein-only data available in the PDB to train neural networks, in contrast to template-based modeling that acts as a prior. ## 1 Introduction Compared to the experimental protein or protein small molecule structures, like those in PDB (Berman et al., 2000), there is significantly less crystallized structures of protein-peptide complexes which makes it difficult to build a model like AlphaFold in this setting. Given that amino acids within and between domains share fundamental characteristics derive-able from Quantum Mechanics, there should be a transformation that improves the predictive power of protein-peptide interactions from purely intra-protein interactions. In principle, we want to determine the feasibility of such transfer-ability methods by making simple augmentation of intra-protein residues to look more like protein-peptide interactions. We take the augmentation as the cutting of a fake peptide sequence from a protein sequence to obtain pseudo protein-peptide complexes derived from the PDB to mimic that of the PepBDB (Huang et al., 2018). This could generalize to more heterogeneous contexts such as multiple peptides interactions or protein domain interfaces, making it more general and applicable to a wider range of problems. By learning the coupling information where covalent bonding patterns are muteable, our model can generalize to different sequences and even different types of molecules made up of amino acids, i.e. branching or cyclic peptides. We frame our problem as classification via contact predictions of inter-facial residues of a peptide - protein complex from sequence information alone, with the goal of predicting whether two residues between a protein and peptide are within some Angstrom distance. ## 2 Background CASP (Critical Assessment of Techniques for Protein Structure Prediction) conference was initiated in the early 90s to compare/develop the best computational techniques for in-silico prediction of protein structure. In 2020s, they declared this problem generally solved with the advent of Deep Learning techniques initially centered around template based models (Zhang et al., 2008), then including RESNET architectures (Xu, 2017) from Computer Vision and finalized via a variation of Large Language Models (LLM) seen in DeepMind's AlphaFold (Senior et al., 2020) and Facebook's Evolutionary Scale Models (Rives et al., 2019). 
The key take away was residue-residue co-evolutionary couplings obtained from large Multiple Sequence Alignments (MSAs) contain enough information to reconstruct 3D information of proteins with these new computational techniques. There are generally two direct successor problems: crystal protein-protein/peptide complex reconstruction and macromolecule dynamics i.e. computing equilibrium thermodynamic state variables. We look to advance the first. Macromolecule 3D reconstruction from amino-acid sequences beyond proteins remains a key goal. Structure reconstruction initially centered around transforming predicted residue-residue contacts information into an energy potential which is used to optimized a protein structure. DeepMind was the first to do this in a end-to-end differentiable manner. Most recently, diffusion models (Wu et al., 2022) have shown great reconstruction abilities. We generalize the problem setting to obtaining inter residue-residue information from intra residue-residue couplings. ## 3 Methods ### Bi-Modal Transformer The bi-modal transformer, similar to (Lu et al.,2016), takes in two inputs, the protein sequence \(X_{p}\) of length \(N\) and the peptide sequence \(X_{l}\) of length \(M\), both of which are tokenized into a sequence of amino acids. The transformer uses the guided-attention mechanism to compute the attention matrix between the protein and peptide sequences and at each layer, the protein and peptide sequences are alternated such that at one layer the protein sequence embedding is updated with respect to the peptide one, and the next vice versa. In the first layer, the attention matrix, used to update the protein sequence, between the protein and peptide sequences is computed as follows: \[A_{p,l}=\text{softmax}(\frac{Q_{p}K_{l}^{T}}{\sqrt{d_{k}}})V_{p}\] Where \(Q_{p}\), \(K_{l}\), \(V_{p}\) are the query, key, and value matrices for updating the protein sequence, and \(d_{k}\) is the dimension of key used to scale the dot-product attention. The hidden representation of the protein and peptide sequences are obtained by concatenating the attention matrices: \[H_{p}=\text{concat}(A_{p,l})\] In the next layer, the attention matrix between peptide and protein is computed: \[A_{l,p}=\text{softmax}(\frac{Q_{l}K_{p}^{T}}{\sqrt{d_{k}}})V_{l}\] Where \(Q_{l}\), \(K_{p}\), \(V_{l}\) are the query, key, and value matrices for the peptide sequence. \[H_{l}=\text{concat}(A_{l,p})\] And this process continues for a fixed number of layers. Finally, the objective function is obtained by performing logistic regression on the concatenated representation of the protein and peptide sequences: \[Y=\sigma(W[H_{p},H_{l}]+b)\] Where \(W\) is the weight matrix, \(b\) is the bias term, and \(\sigma\) is the sigmoid function. The overall objective of the model is to maximize the likelihood of the correct interaction between the protein and peptide sequences, given by: \[\mathcal{L}=\sum_{i=1}^{n}y_{i}\log p(y_{i}|x_{i})+(1-y_{i})\log(1-p(y_{i}|x_{i }))\] where \(n\) is the number of examples, \(x_{i}\) is the pair of protein and peptide sequences, and \(y_{i}\) is the true label indicating the interaction between the protein and peptide, which is defined as a 10 Angstrom cutoff threshold. 
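As a rough illustration of the guided-attention update described above, the following numpy sketch alternates cross-attention between the two sequences. It is a simplification under our own assumptions (single head, no residual connections or feed-forward blocks), and the value projection is taken from the attended sequence so that the matrix shapes compose, which differs slightly from the \(V_{p}\) written in the displayed equation above.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def guided_attention(H_query, H_context, Wq, Wk, Wv):
    """One guided-attention update: the query sequence attends to the context
    sequence (protein -> peptide at one layer, peptide -> protein at the next)."""
    Q = H_query @ Wq              # (N, d_k)
    K = H_context @ Wk            # (M, d_k)
    V = H_context @ Wv            # (M, d_v); values taken from the attended sequence
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]), axis=-1)   # (N, M) attention weights
    return A @ V                  # (N, d_v) updated query representation

# toy shapes: protein of length N, peptide of length M, embedding dimension d
rng = np.random.default_rng(0)
N, M, d = 50, 12, 32
H_p, H_l = rng.normal(size=(N, d)), rng.normal(size=(M, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
H_p_new = guided_attention(H_p, H_l, Wq, Wk, Wv)   # protein updated w.r.t. peptide
H_l_new = guided_attention(H_l, H_p, Wq, Wk, Wv)   # peptide updated w.r.t. protein
print(H_p_new.shape, H_l_new.shape)                # (50, 32) (12, 32)
```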
### Evaluation Metrics To compare the true and learned distribution of contacts, we use the softmax function on the sum of the output of the model over all peptide residues for each protein residue to obtain a probability distribution over protein residues indicating the propensity of this residue to be in contact with one of the peptide residues. The softmax function is defined as: \[P_{i}=\frac{e^{\sum_{j=1}^{M}x_{i,j}}}{\sum_{k=1}^{N}e^{\sum_{j=1}^{M}x_{k,j}}}\] Where \(P_{i}\) is the probability of protein residue \(i\) interacting with any peptide residue, \(x_{i,j}\) is the output of the model for the interaction between protein residue i and peptide residue j, N is the number of protein residues and M is the number of peptide residues. Once we have the true and predicted probability distributions, we can use the Kullback-Leibler divergence (KL divergence) to evaluate the goodness of the model. The KL divergence between two probability distributions \(P\) and \(Q\) is defined as: \[D_{KL}(P||Q)=\sum_{i=1}^{N}P_{i}\log\frac{P_{i}}{Q_{i}}\] This measures the dissimilarity between the true and predicted probability distributions. The lower the KL divergence, the better the model is at reproducing the true distribution of contacts. ### Creating Augmented Protein-peptide Complexes We create the augmented protein-peptide dataset by obtaining protein structures from the Protein Data Bank (PDB). For each structure, we calculate the distance matrix between all pairs of amino acid residues: \[D_{i,j}=||R_{i}-R_{j}||\] Where \(D_{i,j}\) is the distance between amino acid residues i and j, \(R_{i}\) and \(R_{j}\) are the coordinates of residues i and j respectively. We then transform the distance matrix into a probability distribution over protein residues being in contact with other residues by summing over one dimension of the matrix and performing a softmax: \[P_{i}=\frac{e^{\sum_{j=1}^{N}D_{i,j}}}{\sum_{k=1}^{N}e^{\sum_{j=1}^{N}D_{k,j}}}\] Where \(P_{i}\) is the probability of protein residue \(i\), \(D_{i,j}\) is the distance between amino acid residues \(i\) and \(j\), N is the number of amino acid residues. We then sample indices from this distribution, where the indices indicate where the protein will be split. We also sample a uniform distribution from 10-50 amino acids in length to describe the length of the segment to be cut out from the protein. The protein is then concatenated together, with the cutout segment becoming the pseudo-peptide sequence. From the full distance matrix and the cutout segment, we can also calculate the pseudo-distance matrix. ## 4 Results To evaluate the performance of our proposed method, we performed a series of experiments on a dataset of protein-peptide complexes. Our goal was to assess the ability of our method to improve contact predictions between the protein and peptide. In our first experiment, we predict contacts between residues of proteins and peptides determined at a 10 Angstrom distance threshold. We then apply gaussian smoothing with convolutional kernels with all the filter weights set to the value of 1, which has the effects of smoothing out the contact signals as the predictions can be sensitive to small changes in residue positioning. As can be seen in Figure 1, the augmented data at times seems to help improve accuracy by moving around contact groupings, sharpening the predictions, and refining the predictions. 
Although the individual protein-peptide contact predictions are not always improved (there are cases where predictions are worse), from the test loss described below, in general, the prediction accuracy improves. Take for instance the far-right predictions in Figure 1. The second to the left of the four shows a less resolved cluster of lower probability contacts towards the top compared to the augmented model. This indicates that the augmented models increase their confidence further from the true contact labels. In the middle four contact diagrams, in contrast, the augmented models show higher probabilities for some of the contact predictions which, retrospectively, accurately determine correct binding regions. It also suggests that adding a certain amount of extra augmented training data can increase accuracy further. However, certain precautions are necessary to make sure the model does not overtrain on the most frequent set of augmented protein-peptide complexes. Adding more than 30000 augmented examples did not noticeably improve the test losses. Figure 1: Predicted contacts (from left to right) of True, no augmented data, 20000 augmented examples, and 30000 augmented examples. The X axis is ligand residue indices and the Y axis is protein residue indices. Brighter yellow regions indicate higher probabilities of a residue-residue contact between the peptide and protein. In the second figure, we looked at the effects of data augmentation on test and validation loss. Some complexes were similar to each other, as measured by TM-score (Zhang et al., 2005), and if two or more complexes are over a certain similarity threshold, they are placed together in either training, validation, or test, but not in more than one. This explains why the validation loss is slightly higher than the test loss. The augmented models outperform the baseline, which shows a quick downtrend in prediction accuracy due to the small size of the training set; the augmented models mitigate this effect. In another experiment, we compare the KL divergence of contact distributions on the protein or peptide. As discussed above, to obtain distributions over the sequences of residues we sum over the peptide (or protein) sequences and apply a softmax to obtain a probability distribution over protein (or peptide) sequences, which are compared via the Kullback-Leibler divergence. This is shown in Table 1 in the appendix, which shows improvements in binding site distribution predictions with the augmented data. Lower KL divergence indicates that the augmented data could be useful to a degree, but care must be taken to not weigh the augmented samples too heavily during training. We also note that aggregating the contact probabilities to a distribution over peptide residues may be useful in deciphering which residues are most important for binding. ## 5 Discussion ### Summary of the results Our results show that by augmenting a bi-modal transformer network with pseudo protein-peptide complexes derived from the PDB, we are able to improve predictions of contacts between protein-peptide complexes and their binding site predictions slightly. This is an important step towards a better understanding of the physics of amino-acid interactions, and especially in the design of transfer learning methods capable of generalizing across heterogeneous amino acid complexes. 
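Referring back to the evaluation metric of Section 3.2, a small numpy sketch of the marginalisation and KL computation might look as follows (illustrative only; array names and the toy data are ours, not the paper's code).

```python
import numpy as np

def contact_marginal(scores):
    """Collapse an (N_protein, M_peptide) contact-score map to a distribution
    over protein residues, as in Section 3.2: sum over peptide residues, then softmax."""
    s = scores.sum(axis=1)
    s = s - s.max()                      # numerical stability
    p = np.exp(s)
    return p / p.sum()

def kl_divergence(p, q, eps=1e-12):
    # D_KL(p || q), with a small epsilon to avoid division by zero
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

# toy example: a sparse "true" binary contact map vs. noisy predicted scores
rng = np.random.default_rng(1)
true_contacts = (rng.random((50, 12)) < 0.05).astype(float)
pred_scores = true_contacts + 0.3 * rng.random((50, 12))
print(kl_divergence(contact_marginal(true_contacts), contact_marginal(pred_scores)))
```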
Figure 2: Validation and test error for baseline (purple, orange), +20000 (red, blue), and +30000 (black, blue) augmented data samples. ### Future Research Future directions could include adding Multiple Sequence Alignments to the model, which may help improve sequence embeddings by implicitly encoding intra-sequence evolutionary couplings, with the goal of obtaining evolutionary inter-residue couplings between sequences. Another direction would be to reconstruct protein-peptide complexes with these contact predictions used as constraints to be satisfied. Lastly, it is necessary to derive a similarity scheme between augmented examples so that the models do not overtrain on the most common augmented residue-residue contacts, which would ideally mean we need not weight the augmented dataset's loss by a decreasing factor. ## 6 Conclusion Our approach of augmenting a bi-modal transformer network with pseudo protein-peptide complexes represents a novel method for predicting protein-peptide interactions with limited data access. It highlights the efficiency of data augmentation for transfer learning. In principle, this approach is a form of data distillation, where knowledge is transferred from a conditional distribution to a generalized joint distribution, and is particularly useful in cases where the distribution of the data is unknown or difficult to obtain, and maximum likelihood methods are limited by the lack of data.
2301.00296
Local Einstein relation for fractals
We study single random walks and the electrical resistance for fractals obtained as the limit of a sequence of periodic structures. In the long-scale regime, power laws describe both the mean-square displacement of a random walk as a function of time and the electrical resistance as a function of length. We show that the corresponding power-law exponents satisfy the Einstein relation. For shorter scales, where these exponents depend on length, we find how the Einstein relation can be generalized to hold locally. All these findings were analytically derived and confirmed by numerical simulations.
J. L. Iguain, L. Padilla
2022-12-31T21:46:15Z
http://arxiv.org/abs/2301.00296v1
# Local Einstein relation for fractals ###### Abstract We study single random walks and the electrical resistance for fractals obtained as the limit of a sequence of periodic structures. In the long-scale regime, power laws describe both the mean-square displacement of a random walk as a function of time and the electrical resistance as a function of length. We show that the corresponding power-law exponents satisfy the Einstein relation. For shorter scales, where these exponents depend on length, we find how the Einstein relation can be generalized to hold locally. All these findings were analytically derived and confirmed by numerical simulations. _Keywords_: Fractals, Power-law behaviours, Einstein relation. ## 1 Introduction Fractals are characterized by quantities that exhibit power-law behaviour in space or time. More precisely, as scale invariance occurs for integer powers of a characteristic length, pure power laws are modulated by logarithmic periodic functions, that describe the departures from the main trend at intermediate scales. These modulations have been the object of recent interest and considerable effort has been devoted toward understanding the relation between log-periodicity and discrete-scale invariance [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]. For a given fractal and some related observables, which show (modulated) power-law behaviours, a problem of interest is to determine whether or not the exponents associated with these quantities are independent. Sometimes we can expect a relation as a consequence of underlying physical laws. This is, for example, the case of the mass \(m\), the electric resistance \(R\) and the mean-square-displacement (MSD) \(\Delta r^{2}\) for a single random walker. On a fractal, the first two grow with length \(l\) as \(m(l)\sim l^{d_{f}}\) and \(R(l)\sim l^{\zeta}\), while the last one grows with time \(t\) as \(\Delta r^{2}(t)\sim t^{2/d_{w}}\). The exponents \(d_{f}\), \(\zeta\) and \(d_{w}\) are known as the _fractal_, _resistance_ and _walk_ exponents, respectively, and these power-law behaviours hold for scales large enough to ensure self-similarity. In an \(d\)-dimensional euclidean space, the diffusion coefficient \(D\) and conductivity \(\sigma\) are related by the Einstein equation [14] \[\sigma=\frac{e^{2}\rho}{k_{B}T}D. \tag{1}\] Here, \(D=\lim_{t\rightarrow\infty}\Delta r^{2}(t)/2t\), \(\rho\) and \(e\) are the density and charge of mobile particles, \(T\) is the temperature and \(k_{B}\) is the Boltzmann constant. Equation (1) is one of the forms of the fluctuation-dissipation theorem, and can be used together with simple scaling heuristic arguments, to argue that the fractal, walk, and resistance exponents satisfy the _Einstein relation_[14] \[d_{f}=d_{w}-\zeta, \tag{2}\] This property has been shown to hold asymptotically for some finitely ramified fractals [15, 16]; which has been used to analyze the periodicity of the oscillations in dynamic observables, in the first attempts to understand log-periodic modulation [17]. Einstein relation was also investigated for random walks on weighted graphs [18], and, more recently, for karst networks structures [19]. A deterministic fractal can be obtained as the limit of a sequence of periodic structures. In this procedure, the period increases at every step as \(L^{n}\) (\(n=0,1,2,...\)), where \(L\) is a basic characteristic length scale. Self-similarity is manifested in power-law behaviours, which occur for long enough scales. 
However, this does not always hold for shorter lengths. Thus, the local slopes of the observables as a function of time or length, in log-log scales, are variable quantities, which approach constant values only asymptotically. In this work we argue that the local fractal, walk, and resistance exponents are related through an equation that generalizes (2). This generalization is obtained analytically, following the steady-state method for the calculation of the effective diffusion coefficients for periodic substrates [20]. To further strengthen our findings we perform numerical simulations for two models of fractals, which confirm the theoretical predictions. The paper is organized as follows. In Sec. 2 we relate the diffusion coefficient and the unit cell resistance for a periodic structure. In Sec. 3 we derive the Einstein relation for self-similar systems. In Sec. 4 we generalize this relation for scale-dependent exponents. In Sec. 5 we confirm the generalized relation by numerical simulations performed on models of asymptotically self-similar substrates. Finally, we give our conclusions in Sec. 6. ## 2 Periodic systems In this section we address the problem of the diffusion coefficient for a periodic substrate. We follow the steady-state method developed in reference [20]. We start by introducing the periodic substrate with unit cell of linear dimension \(l\), schematized in figure 1, where the points represent sites, and the arrows represent hopping rates. On this structure, a mobile particle can jump between connected sites according to the hopping rates \(k\)'s (for the sake of clarity only a few sites and arrows were highlighted). We focus on a steady-state of non-interacting particles flowing with a constant current density \(j\). As shown in [20], this steady-state consists of a set of microscopic currents distributed with the same periodicity as the substrate. In figure 1 two nearest-neighbor (NN) unit cells are depicted schematically where, for example, \(n_{s}^{(f)}\) represents the number of particles in site \(s\) (internal index) of cell \(f\). Figure 1: Two nearest-neighbor cells \(f\) and \(f+1\), for a periodic substrate with linear size period \(l\). The points represent sites, which can be occupied by mobile particles. The arrows represent hopping rates between pairs of sites. For clarity, only a few sites and hopping rates were highlighted. \(n_{r}^{(f)}\) corresponds to the number of particles in the internal site \(r\) of cell \(f\). Because of the mentioned periodicity, we get that for a given pair of connected sites with internal indices \(s\) and \(r\), \[i_{rs}^{(f)}=i_{rs}^{(f+1)}, \tag{3}\] where \(i_{rs}^{(f)}\) is the current from site \(s\) to site \(r\) in cell \(f\). In addition, as hopping rates do not depend on the cell either but only on the internal indices, the last equation can be rewritten as \[k_{sr}(n_{s}^{(f)}-n_{r}^{(f)})=k_{sr}(n_{s}^{(f+1)}-n_{r}^{(f+1)}), \tag{4}\] or \[n_{s}^{(f+1)}-n_{s}^{(f)}=n_{r}^{(f+1)}-n_{r}^{(f)}. \tag{5}\] Therefore, in the steady-state, the difference in the occupation number for a given site and the equivalent site in a NN cell is the same for all sites. The relation of the steady-state problem with the diffusion coefficient \(D\) is provided by Fick's law \[j=-D\frac{\Delta n}{l^{2}}, \tag{6}\] which is valid for distances larger than \(l\). Here \(\Delta n\) corresponds to the particle number difference for NN cells. 
Note that \(D\) also determines the mean-square displacement \(\Delta^{2}x\) of a single random walker on the same structure, which behaves as \[\Delta^{2}x(t)=2Dt; \tag{7}\] for time long enough for \(\Delta x\gg l\). Transforming the steady-state problem into an equivalent electrical problem is straightforward. Indeed, for particles of unitary electric charge, a mapping between Fick's law and Ohm's law results by identifying particle number with electrostatic potential (\(V_{a}=n_{a}\)) and hopping rate with conductance (\(k=1/R\)). In figure 2 we represent this mapping for every pair of connected sites. Following this analogy, we see Figure 2: Schematics of the equivalence between Fick’s law (left) and Ohm’s law (right). In the mapping particles have unitary charge, while the other quantities are related as \(V=n\), and \(R=1/k\). that in the electric problem, the potential difference for a pair of equivalent sites in NN cells takes the constant value \[\Delta V=n_{r}^{(i+1)}-n_{r}^{(i)}, \tag{8}\] and that the difference between particle populations \[\Delta n=\sum_{r=1}^{M}(n_{r}^{(i+1)}-n_{r}^{(i)})=M\Delta V, \tag{9}\] is proportional to the potential difference \(\Delta V\), where the constant of proportionality \(M\) corresponds to the number of sites per unit cell. Thus, according to equation (6), we can conclude that, given a periodic substrate with unit cell of linear dimension \(l\) and \(M\) sites, the diffusion coefficient and the potential difference between two equivalent sites in NN cells, are connected through the relation \[D=-j\frac{l^{2}}{M\Delta V}, \tag{10}\] where \(j\) is the steady-state current density. ## 3 Self-similar substrates Deterministic fractals are usually built by a recursive procedure, that results in a sequence of structures called _generations_. A generation consists of a periodic array of sites connected by bonds. The process begins with a basic periodic structure (zeroth generation). At every step the unit cell is scaled by a factor \(L\) and the building rules ensure that self-similarity is obtained after a large number of iterations. Following equation (10), the diffusion coefficient \(D_{p}\) for the generation \(p\) and the potential difference \(\Delta V_{p}\) between two equivalent points in NN unit cells are related as \[D_{p}=-j\frac{L^{2p}}{M_{p}\Delta V_{p}}, \tag{11}\] where \(M_{p}\) is the number of sites in the unit cell, and \(L^{p}\) is its linear dimension. Then, for two consecutive generations \(p\) and \(p+1\), through which the same steady-state current flows, we obtain \[\frac{D_{p}}{D_{p+1}}=L^{-2}\frac{M_{p+1}}{M_{p}}\frac{\Delta V_{p+1}}{\Delta V _{p}}. \tag{12}\] Now, since for a fractal the number of sites in a box with linear dimension \(l\) behaves as \(m(l)\sim l^{d_{f}}\) (i. e., \(d_{f}\) is the fractal dimension defined through box-counting), \(M_{p+1}/M_{p}=(L^{(p+1)}/L^{p})^{d_{f}}=L^{d_{f}}\), and the last equation can be rewritten as \[\frac{D_{p}}{D_{p+1}}=L^{d_{f}-2}\frac{\Delta V_{p+1}}{\Delta V_{p}}, \tag{13}\] As previously shown [7, 8], a perfect diffusive self-similar structure corresponds to a ratio \(D_{p}/D_{p+1}\) which does not depend on \(p\), i. e., \[\frac{D_{p}}{D_{p+1}}=1+\lambda, \tag{14}\] with \(\lambda\) a positive constant. In this model, the mean-square displacement for a single random walker behaves as \[\Delta^{2}x(t)=f(t)t^{2\nu}. 
\tag{15}\] The modulation \(f(t)\) is a log-periodic function, \(f(t\tau)=f(t)\), and both \(\nu\) and \(\tau\) can be analytically calculated in terms of \(L\) and \(\lambda\): \[\nu=\frac{1}{2+\frac{\log(1+\lambda)}{\log(L)}} \tag{16}\] \[\tau=L^{1/\nu} \tag{17}\] The important partial conclusion in the context of this work is that, according to above discussion, a perfect diffusive self-similar structure implies a power-law behaviour for the resistance as a function of length. Indeed, equations (13) and (14) leads to \[\frac{\Delta V_{p+1}}{\Delta V_{p}}=L^{1/\nu-d_{f}}, \tag{18}\] where we have used \(1+\lambda=L^{1/\nu-2}\), from equation (16). Thus, for a perfect diffusive self-similar fractal the potential difference, which corresponds to steady-state current, scales with length \(l\) as \[\Delta V\sim l^{\zeta}, \tag{19}\] where the exponent \(\zeta\) is given by \[\zeta=1/\nu-d_{f}; \tag{20}\] which is the Einstein relation (2), with \(d_{w}=1/\nu\). ## 4 Local exponents We consider now a generic substrate for which diffusive self-similarity is reached only asymptotically. Let us assume a ratio between consecutive diffusion coefficients, that depends on the generation \(p\), as \[\frac{D_{p}}{D_{p+1}}=1+\lambda_{p}. \tag{21}\] where, \(\{\lambda_{p}:\,p=1,2,...\}\) is a sequence of non-negative real numbers, with \(\lim_{p\rightarrow\infty}\lambda_{p}=\lambda\). Because of this limit, at long enough times a single random walk on this substrate will show a MSD behaviour as in equation (15), and, as pointed out before, for large enough lengths the potential difference will behave as in equation (19); with \(\nu\) and \(\zeta\) given by equations (16) and (20). In this section we focus on local exponents, which correspond to the slopes in log-log scales for finite length or time. As shown for example in [8], on a substrate on which diffusion coefficients for generations \(p\) and \(p+1\) satisfy equation (21), the MSD for a single random walker behaves as \[\Delta^{2}x(t)\sim t^{2\nu_{p}},\ \ \mbox{for}\ \ \ L^{p}\lesssim\Delta x \lesssim L^{p+1}, \tag{22}\] with the local exponent \(\nu_{p}\) given by \[\nu_{p}=\frac{1}{2+\frac{\log(1+\lambda_{p})}{\log(L)}}\ \cdot \tag{23}\] Then, after rearranging this equation as \(1+\lambda_{p}=L^{1/\nu_{p}-2}\), which corresponds to the left hand side of equation (13), we obtain \[\frac{\Delta V_{p+1}}{\Delta V_{p}}=L^{1/\nu_{p}-d_{f}}. \tag{24}\] Thus, we expect that the potential difference scales with length \(l\) as \[\Delta V(l)\sim l^{\zeta_{p}},\ \ \mbox{for}\ \ \ L^{p}\lesssim l\lesssim L^{p+1}, \tag{25}\] and that the local exponents satisfy the relation \[\zeta_{p}=1/\nu_{p}-d_{f}. \tag{26}\] Therefore, local slopes in log-log scales for the resistance as a function of length and for MSD of a single random walker as a function of time are related for all scales through equation (26); which generalizes the Einstein relation. ## 5 Numerical simulations We study numerically the steady-state that corresponds to a unitary current on two models, for which diffusive self-similarity appears asymptotically. At finite lengths, the local random-walk exponent \(\nu_{p}\) is not constant. Thus, we expect an also variable resistance exponent \(\zeta_{p}\), related to the former through equation (26). The first model is a substrate built on a square lattice. A random walk consists in a particle hopping among NN sites. If sites are connected by a bond, the hopping rate is \(k=1/4\). If the sites are not connected, the hopping rate is \(k=0\). 
A fractal is obtained by deleting some bonds. The characteristic scale factor is \(L=3\), and the unit cells for the first, the second and the third generations are depicted schematically in figure 3. For every generation the unit cell can be separated from the rest by cutting four bonds. As shown in a previous work, the mass on this structure shows a power-law behaviour with \(d_{f}=2\). However, the random walk exponent \(\nu_{p}\) grows with time and approaches a value \(\nu<1/2\) when \(t\to\infty\)[8]. We have run numerical simulations on the unit cell of the sixth generation, to reach the steady-state in which a unitary current flows between the left and right extremes. In figure 4 we plot with symbols the potential differences for lengths \(x=3^{i}\) (\(i=0,1,...,6\)), which are the unit cell linear sizes for the generations zero to six. In the same figure, we plot a line using the relation (26) and the numerical values for \(\nu_{p}\), which are the outcomes of random walk simulations reported in reference [8]. Notice that both data set fall on the same curve, which confirms the relation (26). The second model is a generalization of the one-dimensional self-similar model introduced in [7]. We start with a single random walk on a one-dimensional lattice, with a hopping rate \(k_{0}\) between any pair of NN sites. This homogeneous case corresponds to generation zero. We introduce a natural number \(L\) to build the other generations. In the first generation, we reset to \(k_{1}<k_{0}\) the hopping rate for every pair of sites \(j\) and \(j+1\), with \(mod(j,L)=0\). The other hopping rates remains as in zeroth generation. In the second generation, we reset to \(k_{2}<k_{1}\) the hopping rate for every pair of sites \(j\) and \(j+1\), with \(mod(j,L^{2})=0\). The other hopping rates remains as in first generation. This recursion follows indefinitely, in such a way that generation \(n\) is obtained from generation \(n-1\) after resetting to \(k_{n}<k_{n-1}\) the hopping rate for every pair of sites \(j\) and \(j+1\), with \(mod(j,L^{n})=0\). In figure 5 we show an schematics for \(L=5\). Figure 3: Substrate in two dimensions, which results in scale-dependent walk and resistance exponents. The schematics correspond to the unit cells for the first, second and third generations. The segments represent bonds between sites. If we ask for perfect self-similarity for diffusion, i. e. equation (14), the hopping rates are found iteratively as in reference [7]. For the more general case of equation (21), the sequence of hopping rates is given by \[\frac{1}{k_{i}}=\frac{1}{k_{i-1}}+\frac{L^{i}\lambda_{i-1}}{k_{0}}\prod_{j=0}^{ i-2}(1+\lambda_{j}),\quad\mbox{ for}\;\;i=1,2,3... \tag{27}\] We test the validity of the relation (26) among the local exponents for a family of Figure 4: Potential difference as a function of length for a unitary current flowing trough the unit cell of the sixth generation substrate in figure 3. The symbols correspond to simulations of the steady-state. The line was plotted with the exponents \(\zeta_{p}\) from equation (26) and the values of \(\nu_{p}\) which result from random-walk numerical simulations. Figure 5: Schematics of the one-dimensional random-walk model. We begin with a homogeneous lattice, and a hopping rate \(k_{0}\) between nearest-neighbor sites. Then, hopping rates are reset to \(k_{j}\) for transitions between sites \(j\) and \(j+1\) for every \(j\) such that \(mod(j,L^{n})=0\), and for \(n=1,2,...\). In this example, \(L=5\). 
substrates given by \[\lambda_{p}=\lambda\,(1-2^{-p/5.}). \tag{28}\] At short enough lengths these substrates are nearly homogeneous (\(\lambda_{p}\approx 0\) for \(p\ll 5\)), while, on the other extreme, self-similarity for diffusion is reached for lengths much larger than \(L^{5}\). The local random walk exponent (23) decreases with length and approaches asymptotically \(\nu\) in equation (16). Thus, the variation of \(\nu_{p}\) in space increases with \(\lambda\) and, because of equation (26), the same should occur with the variation of \(\zeta_{p}\). This is an interesting model, because the variation of the exponents with length can be adjusted through the parameter \(\lambda\). We have run numerical simulations for the steady-state that corresponds to a unitary current flowing on this model, with \(L=2\) and \(\lambda=1,2,4,5\). All substrates were built until generation 10. In figure 6-main we plot with symbols the potential difference as a function of the length \(x\), for \(x=2^{j}\) (\(j=0,1,...,9\)). The lines correspond to the exponents \(\zeta_{p}\) obtained from equations (26) and (23). Note the excellent agreement between theory and simulations. The inset in the same figure shows substructure of \(\Delta V\) for \(\lambda=2\). ## 6 Conclusions We have studied first the connection between single random walks and steady-state potential difference for substrates with spatial periodicity. Then, by considering a sequence of periodic systems, a common procedure for deterministic fractal construction, we find that the length dependent fractal, walk and resistance exponents, for the Figure 6: Potential difference as a function of length for unitary current on the one-dimensional model with \(\lambda_{p}=\lambda\,(1-2^{-p/5.})\), and \(L=2\). (Main) Symbols correspond to data obtained with numerical simulations on a tenth-generation substrate. Lines were drawn using the values of theoretical exponents. From bottom to top, \(\lambda=1\) (red), \(\lambda=2\) (green), \(\lambda=4\) (violet), \(\lambda=5\) (blue). (Inset) More detailed structure for \(\lambda=2\). substrate obtained in the infinite limit of this sequence, satisfy, at every length scale, the relation (26). This can be considered as a local version of the Einstein relation (2). We have tested our predictions numerically for two models. The first model is a fractal in two dimensions, while the the second is a fractal in one dimension. Both models lead to length-dependent exponents at intermediate scales. The excellent agreement between the outcomes of these simulations and the theoretical predictions supports the validity of the mentioned relation among exponents, not only in the asymptotic self-similar limit but also locally, for all length scales. We are grateful to H. O. Martin for useful discussions. This research was supported by the Universidad Nacional de Mar del Plata, 15/E1040, and the Consejo Nacional de Investigaciones Cientificas y Tecnicas, PIP1748/21.
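To make the relations of Sections 4 and 5 concrete, the sketch below (ours, not the authors' code) tabulates the local exponents of equations (23) and (26) and the hopping rates of equation (27) for the one-dimensional model with \(\lambda_{p}=\lambda(1-2^{-p/5})\), using \(L=2\) and \(\lambda=2\) as in the simulations, and taking \(d_{f}=1\) for the one-dimensional substrate (our assumption).

```python
import numpy as np

L = 2          # scale factor of the substrate
lam = 2.0      # asymptotic lambda
d_f = 1.0      # box-counting dimension assumed for the 1D model

p = np.arange(0, 11)
lam_p = lam * (1.0 - 2.0 ** (-p / 5.0))              # eq. (28)
nu_p = 1.0 / (2.0 + np.log1p(lam_p) / np.log(L))     # eq. (23), local walk exponent
zeta_p = 1.0 / nu_p - d_f                            # eq. (26), local resistance exponent

# hopping rates of the 1D model, eq. (27), with k_0 = 1
k = [1.0]
for i in range(1, len(p)):
    prod = np.prod(1.0 + lam_p[:i - 1])              # product over j = 0, ..., i-2
    k.append(1.0 / (1.0 / k[i - 1] + L ** i * lam_p[i - 1] * prod))

for pi, n, z, ki in zip(p, nu_p, zeta_p, k):
    print(f"p={pi:2d}  nu_p={n:.3f}  zeta_p={z:.3f}  k_p={ki:.3e}")
```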
2310.09295
On the impact of insurance on households susceptible to random proportional losses: An analysis of poverty trapping
In this paper, we consider a risk process with deterministic growth and multiplicative jumps to model the capital of a low-income household. Reflecting the high-risk nature of the low-income environment, capital losses are assumed to be proportional to the level of accumulated capital at the jump time. Our aim is to derive the probability that a household falls below the poverty line, i.e. the trapping probability, where "trapping" occurs when the level of capital of a household falls below the poverty line, into an area from which it is difficult to escape without external help. Considering the remaining proportion of capital to be distributed as a special case of the beta distribution, closed-form expressions for the trapping probability are obtained via analysis of the Laplace transform of the infinitesimal generator of the process. To study the impact of insurance on this probability, the introduction of an insurance product offering proportional coverage is presented. The infinitesimal generator of the insured process gives rise to non-local differential equations. To overcome this, we propose a recursive method for deriving a closed-form solution of the integro-differential equation associated with the infinitesimal generator of the insured process and provide a numerical estimation method for obtaining the trapping probability. Constraints on the rate parameters of the process that prevent certain trapping are derived in both the uninsured and insured cases using classical results from risk theory.
Kira Henshaw, Jorge M. Ramirez, José M. Flores-Contró, Enrique A. Thomann, Sooie-Hoe Loke, Corina Constantinescu
2023-09-22T14:00:02Z
http://arxiv.org/abs/2310.09295v1
On the impact of insurance on households susceptible to random proportional losses: An analysis of poverty trapping ###### Abstract In this paper, we consider a risk process with deterministic growth and multiplicative jumps to model the capital of a low-income household. Reflecting the high-risk nature of the low-income environment, capital losses are assumed to be proportional to the level of accumulated capital at the jump time. Our aim is to derive the probability that a household falls below the poverty line, i.e. the trapping probability, where "trapping" occurs when the level of capital of a household falls below the poverty line, into an area from which it is difficult to escape without external help. Considering the remaining proportion of capital to be distributed as a special case of the beta distribution, closed-form expressions for the trapping probability are obtained via analysis of the Laplace transform of the infinitesimal generator of the process. To study the impact of insurance on this probability, the introduction of an insurance product offering proportional coverage is presented. The infinitesimal generator of the insured process gives rise to non-local differential equations. To overcome this, we propose a recursive method for deriving a closed-form solution of the integro-differential equation associated with the infinitesimal generator of the insured process and provide a numerical estimation method for obtaining the trapping probability. Constraints on the rate parameters of the process that prevent certain trapping are derived in both the uninsured and insured cases using classical results from risk theory. JEL classification: G220; G520; O120. _Keywords--_ microinsurance; poverty traps; trapping probability; risk processes; proportional claims; proportional insurance. ## 1 Introduction Low-income households living close to, but above, the poverty line are extremely susceptible to entering extreme poverty, particularly in the event of a financial loss. This problem, and the true nature of low-income loss experience, must be studied in order to increase rates of poverty reduction. One indicator that can be used to assess financial stability is capital. In the low-income setting, where monetary wealth is often limited, the concept of capital should reflect all forms of capital that enable production, whether for trade or self-sustaining purposes. This may include land, property, physical and human capital, with health a form of capital in extreme cases where sufficient health services and food accessibility are not guaranteed (Dasgupta, 1997). With agricultural work often prevalent in low-income economies, the threat of catastrophic loss events, including floods, droughts, earthquakes and disease, is of great concern, particularly under this broad definition of capital. In contrast to losses relating to health, life or death, agricultural losses can immediately eliminate a high proportion of a household's ability to produce through loss of land and livestock, irrespective of their level of capital. In this paper, we study the behaviour of the capital of a low-income household under the assumption of proportional capital loss experience. Proportionality in loss experience captures the exposure of households of all capital levels to both catastrophic and low severity loss events.
This is particularly relevant in the low-income setting, where, in addition to low frequency, high severity events such as natural disasters, commonly occurring events, such as hospital admissions and household deaths, can be detrimental. To do this, we adopt the ruin-theoretic approach proposed in Kovacevic and Pflug (2011), using a ruin-type model with deterministic growth and multiplicative losses to represent household-level capital. At loss events, accumulated capital is reduced by a random proportion of itself, rather than by an amount of random value, as in Flores-Contro et al. (2022). Processes of this structure are typically referred to as a growth-fragmentation or growth-collapse processes, characterised by their growth in between the random collapse times at which downwards jumps occur. The randomly occurring jumps have random size dependent on the state of the process immediately before the jump. Our aim in adopting this model is to derive the probability that a household falls below the poverty line, where this probability mimics an insurer's ruin probability. To the best of our knowledge, only Kovacevic and Pflug (2011) and Flores-Contro et al. (2022) have, so far, studied this problem in the ruin-theoretic setting. As in this earlier work, in this paper, we consider the probability in two cases, one in which the household has no insurance coverage, and the other in which they are proportionally insured. We introduce insurance to assess its effectiveness as a measure of poverty reduction. Aligning with the low-income setting, proportional coverage is assumed to be provided through an inclusive insurance product, specifically designed to cater for those excluded from traditional insurance services or without access to alternative effective risk management strategies. This type of product, targeted towards low-income populations, is commonly referred to as microinsurance. In Flores-Contro et al. (2022), the risk process with deterministic growth and random-value losses is instead used to assess the impact of government premium subsidy schemes on the probability of falling below the poverty line. Although important, we do not consider the behaviour of a household below the poverty line. Households that live or fall below the poverty line are said to be in a poverty trap, where a poverty trap is a state of poverty from which it is difficult to escape without external help. Poverty trapping is a well-studied topic in development economics (the interested reader may refer to Azariadis and Stachurski (2005), Bowles et al. (2006), Kraay and McKenzie (2014), Barrett et al. (2016) and references therein for further discussion; see Matsuyama (2008) for a detailed description of the mechanics of poverty traps), however, for the purpose of this study, we use the term "trapping" only to describe the event that a household falls into poverty, focusing our interest on low-income behaviours above this critical line. In Kovacevic and Pflug (2011), estimates of the infinite-time trapping probability of a discretised version of the capital process adopted in this paper are obtained through numerical simulation. Azais and Genadot (2015) perform further numerical analysis on the same model, discussing applications to the capital setting of Kovacevic and Pflug (2011) and to population dynamics, where the critical level denotes extinction. In both cases, derivation of analytical solutions of infinitesimal generator equations is not attempted. 
Our main contribution is therefore in the derivation of closed-form solutions of the infinitesimal generator equations associated with risk processes of this type and, in the case of proportional insurance, in the proposition of a novel approach to deriving the trapping probability recursively. Due to the proportionality of losses, generators of the capital process no longer directly align with those of classical models used to describe the surplus process of an insurer. Obtaining the solution of the infinitesimal generator equation is therefore non-trivial. Traditionally a sum of independent random variables, random absolute losses are correlated with one another, and with the inter-arrival times of loss events. In addition, only the surplus of a household's capital above the critical capital grows exponentially. To ensure that the Lundberg equation is well-defined, and thus mitigate certain trapping, constraints on the parameters of the capital growth processes are derived. Laplace transform and derivative operators are then used to obtain the associated trapping probabilities, under no insurance coverage and proportional insurance coverage, respectively. Research on growth-collapse processes with applications outside the field of actuarial mathematics includes Altman et al. (2002) and Lopker and Van Leeuwaarden (2008) for congestion control in data networks, Eliazar and Klafter (2004) and Eliazar and Klafter (2006) for phenomena in physical systems, Derfel et al. (2009) for cell growth and division and Peckham et al. (2018) in a model of persistence of populations subject to random shock. Aligning with the Laplace transform approach adopted in the case of no insurance, Lopker and Van Leeuwaarden (2008) obtain the Laplace transform of the transient moments of a growth-collapse process, while Eliazar and Klafter (2004) consider the state of a growth-collapse process at equilibrium, computing Laplace transforms of the system and of the high- and low-levels of the growth-collapse cycle. Previous research on the impact of microinsurance mechanisms on the probability of falling below the poverty line from a non-ruin perspective has been undertaken through application of multi-equilibrium models and dynamic stochastic programming (Ikegami et al., 2017; Chantarat et al., 2017; Carter and Janzen, 2018; Liao et al., 2020; Janzen et al., 2021; Kovacevic and Semmler, 2021). With the exception of the latter, each of these studies considers the impact of subsidisation and the associated cost to the subsidy provider. Will et al. (2021) and Henshaw et al. (2023) extend the problem to the group-setting, assessing the impact of risk-sharing on the probability. Will et al. (2021) undertake a simulation-based study and Henshaw et al. (2023) propose a Markov modulated stochastic dissemination model of group wealth interactions, using a bivariate normal approximation to calculate the trapping probability. Notably, Kovacevic and Pflug (2011); Liao et al. (2020) and Flores-Contro et al. (2022) suggest that purchase of insurance and the associated need for premium payment increases the risk of falling below the poverty line for the most vulnerable. Barriers to microinsurance penetration that exist due to constraints on product affordability resulting from fundamental features of the microinsurance environment likely contribute to such observations. 
Limited consumer financial literacy and experience, product accessibility and data availability are examples of the unique characteristics that must be accounted for when designing effective and affordable microinsurance products. Through our analysis, we further investigate the case of proportional loss experience to assess the associated implications on the affordability of insurance. Janzen et al. (2021) optimise the level of insurance coverage across the population, observing that those in the neighbourhood of the poverty line do not optimally purchase insurance (without subsidies), instead suppressing their consumption and mitigating the probability of falling into poverty. This aligns with the increase in probability observed in the aforementioned studies, when those closest to the poverty line purchase insurance. Similarly, Kovacevic and Semmler (2021) derive the retention rate process that maximises the expected discounted capital, by allowing adjustments in the retention rate of the policyholder after each capital loss throughout the lifetime of the insurance contract. In this paper, however, the proportion of insurance coverage and the choice to insure is fixed across the population, as in Kovacevic and Pflug (2011); Chantarat et al. (2017) and Flores-Contro et al. (2022). An outline of the remainder of the paper is as follows. Section 2 introduces the capital growth model and its alignment with the classical Cramer-Lundberg model. This connection enables derivation of constraints on the parameters of the model that ensure the Lundberg equation is well-defined, thus preventing certain trapping. Derivation of the trapping probability for uninsured losses and \(\text{Beta}(\alpha,1)\) distributed remaining proportions of capital is presented in Section 3. The trapping probability for households covered by proportional insurance coverage is derived in Section 4 for \(\text{Beta}(1,1)\) distributed remaining proportions of capital. The non-locality of the differential equations associated with the infinitesimal generator of the insured process is highlighted and the recursive method for deriving the trapping probability proposed. Uninsured and insured trapping probabilities are compared in Section 5 and are presented alongside additional findings of interest. Concluding remarks are provided in Section 6. Throughout the paper, we use the term "insurance" to refer to any form of microinsurance product. Our analysis does not consider a specific type of product but can be tailored through the selection of appropriate parameters. ## 2 The capital model Construction of the capital model follows that of Kovacevic and Pflug (2011). Consider a household with accumulated capital \((X_{t})_{t\geq 0}\). Under the basic assumption that the household has no loss experience, their growth in accumulated capital is given by \[\frac{dX_{t}}{dt}=r\cdot\left[X_{t}-x^{*}\right]^{+}, \tag{1}\] where \([x]^{+}=\max(x,0)\). The dynamics in (1) are built on the assumption that a household's income (\(I_{t}\)) is split into consumption (\(C_{t}\)) and savings or investments (\(S_{t}\)), such that at time \(t\), \[I_{t}=C_{t}+S_{t}, \tag{2}\] where consumption is an increasing function of income: \[C_{t}=\begin{cases}I_{t},&\text{if }I_{t}\leq I^{*}\quad\text{(3a)}\\ I^{*}+a(I_{t}-I^{*}),&\text{if }I_{t}>I^{*}\quad\text{(3b)}\end{cases}\] for \(0<a<1\). The critical point below which a household consumes all of their income, with no facility for savings or investment, is denoted \(I^{*}\).
Accumulated capital is assumed to grow proportionally to the level of savings, such that \[\frac{dX_{t}}{dt}=cS_{t},\] (4) for \[0<c<1\], and income is generated through the accumulated capital, such that \[I_{t}=bX_{t},\] for \(b>0\). Combining (2), (3a), (3b) and (4) gives exactly the dynamics in (1), where the capital growth rate \(r=(1-a)\cdot b\cdot c>0\) incorporates household rates of consumption (\(a\)), income generation (\(b\)) and investment or savings (\(c\)), while \(x^{*}=I^{*}/b>0\) denotes the threshold below which a household lives in poverty. The notion of a household in this model setting may be extended for consideration of poverty trapping within economic units such as community groups, villages and tribes, in addition to the traditional household structure. Reflecting the ability of a household to produce, the level of accumulated capital of a household \(X_{t}\) is composed of land, property, physical and human capital. The poverty threshold \(x^{*}\) represents the amount of capital required to forever attain a critical level of income below which a household would not be able to sustain their basic needs, facing elementary problems relating to health and food security. We refer to this threshold as the critical capital or the poverty line. Since (1) is positive for all levels of capital greater than the critical capital, all points less than or equal to \(x^{*}\) are stationary, the level of capital remains constant if the critical capital is not met. In this basic model, stationary points below the critical capital are not attractors of the system if the initial capital exceeds \(x^{*}\), in which case the capital process grows exponentially with rate \(r\). In line with Kovacevic and Pflug (2011), we expand the dynamics of (1) under the assumption that households are susceptible to the occurrence of capital losses such as those highlighted in Section 1, including severe illness, the death of a household member or breadwinner and catastrophic events such as droughts, floods and earthquakes. The occurrence of loss events is assumed to follow a Poisson process with intensity \(\lambda\), where the capital process follows the dynamics of (1) in between events. On the occurrence of the \(i\)-th loss, the capital process experiences a downwards jump to \(X_{T_{i}}\cdot Z_{i}\), where \(Z_{i}\in[0,1]\) is the random proportion determining the remaining capital after loss \(i\) and \(X_{T_{i}}\) the level of capital accumulated up to the loss time. The sequence \(\{Z_{i}\}_{i=1}^{\infty}\) is a sequence of independent and identically distributed random variables with common distribution function \(G(z)\), independent of the Poisson process. In this paper, the proportion of capital remaining after each loss event \(Z_{i}\) is assumed to follow a beta distribution with parameters \(\alpha>0\) and \(\beta>0\). A household reaches the area of poverty if it suffers a loss large enough that the remaining capital is attracted into the poverty trap. Since a household's capital does not grow below the critical capital \(x^{*}\), households that fall into the area of poverty will never escape without external help. Once below the critical capital, households are exposed to the risk of falling deeper into poverty. However, in contrast to Flores-Contro et al. (2022) where random-valued losses are considered, the dynamics of the model do not allow for the possibility of negative capital due to the proportionality of loss experience. 
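For illustration only (this is not part of the original text), the deterministic ingredients of the model can be written out directly: the growth rate \(r=(1-a)\cdot b\cdot c\), the poverty line \(x^{*}=I^{*}/b\), and the exponential growth of the surplus above \(x^{*}\) implied by (1). The numerical values below mirror those quoted later in the figure captions (\(a=0.1\), \(b=1.4\), \(c=0.4\)); the choice \(I^{*}=1.4\) is assumed here so that \(x^{*}=1\).

```python
import numpy as np

def growth_rate(a, b, c):
    """Capital growth rate r = (1 - a) * b * c, obtained by combining (2)-(4)."""
    return (1.0 - a) * b * c

def poverty_line(I_star, b):
    """Critical capital x* = I*/b."""
    return I_star / b

def capital_between_losses(x0, t, r, x_star):
    """Deterministic solution of (1): the surplus above x* grows exponentially."""
    if x0 <= x_star:
        return x0                                   # stationary below the poverty line
    return (x0 - x_star) * np.exp(r * t) + x_star

r = growth_rate(a=0.1, b=1.4, c=0.4)                # 0.504, as in the figure captions
x_star = poverty_line(I_star=1.4, b=1.4)            # 1.0
print(r, x_star, capital_between_losses(2.0, 1.0, r, x_star))
```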
The structure of the process in-between loss events is derived through solution of the first order Ordinary Differential Equation (ODE) in (1). The stochastic capital process with deterministic exponential growth and multiplicative losses is then formally defined as follows: **Definition 2.1** (Kovacevic and Pflug (2011)).: _Let \(T_{i}\) be the \(i^{th}\) event time of a Poisson process \((N_{t})_{t\geq 0}\) with parameter \(\lambda\), where \(T_{0}=0\). Let \(Z_{i}\geq 0\) be a sequence of independent and identically distributed random variables with distribution function \(G(z)\), independent of the process \(N_{t}\). For \(T_{i-1}\leq t<T_{i}\), the stochastic growth process of the accumulated capital \(X_{t}\) is defined as_ \[X_{t}=\begin{cases}\left(X_{T_{i-1}}-x^{*}\right)e^{r\left(t-T_{i-1}\right)}+ x^{*},&\text{if }X_{T_{i-1}}>x^{*}\\ X_{T_{i-1}},&\text{otherwise.}\end{cases} \tag{5a}\] _At the jump times \(t=T_{i}\), the process is given by_ \[X_{T_{i}}=\begin{cases}[(X_{T_{i-1}}-x^{*})\,e^{r\left(T_{i}-T_{i-1}\right)}+x^{* }]\cdot Z_{i},&\text{if }X_{T_{i-1}}>x^{*}\\ X_{T_{i-1}}\cdot Z_{i},&\text{otherwise}.\end{cases}\] As in Kovacevic and Pflug (2011) and Flores-Contro et al. (2022), the aim of this paper is to study the probability that a household falls below the poverty line, i.e. the trapping probability. By Definition 2.1, the capital level of the household follows a piecewise deterministic Markov process (Davis, 1984, 2018) of compound Poisson-type, which is deterministic in-between the randomly occurring jump times at which large capital losses occur. The infinite-time trapping probability describes the distribution of the time at which a household becomes trapped, referred to as the trapping time. Given a household has initial capital \(x\), their trapping time, denoted \(\tau_{x}\), is given by \[\tau_{x}:=\inf\left\{t\geq 0:X_{t}<x^{*}|X_{0}=x\right\},\] where \(\tau_{x}\) is fixed at infinity if \(X_{t}\geq x^{*}\ \forall t\). It then follows that the trapping probability \(f(x)\) is given by \[f\left(x\right)=\mathbb{P}\left(\tau_{x}<\infty\right).\] Analysis of the trapping probability can be undertaken through study of the infinitesimal generator. The infinitesimal generator \(\mathcal{A}\) of the stochastic process \((X_{t})_{t\geq 0}\) as in Definition 2.1 is given by \[\mathcal{A}f(x)=r(x-x^{*})f^{\prime}(x)+\lambda\int_{0}^{1}[f(x\cdot z)-f(x)] dG(z), \tag{7}\] for \(x\geq x^{*}\). The remainder of the paper works towards solving \(\mathcal{A}f=0\), in line with the classical theorem of Paulsen and Gjessing (1997). Intuitively, the boundary conditions of the trapping probability are as follows: \[\lim_{x\to x^{*}}f(x)=1\quad\text{and}\quad\lim_{x\to\infty}f(x)=0, \tag{8}\] such that under the assumption that \(f(x)\) is a bounded and twice continuously differentiable function on \(x\geq x^{*}\), with a bounded first derivative, and since we consider only what happens above the critical capital \(x^{*}\), the theorem of Paulsen and Gjessing (1997) is applicable. Closed-form expressions for Laplace transforms of ruin (trapping) probabilities are often more easily obtained than for the probability itself. However, multiplication of the initial capital by the random proportion in the integral function makes Laplace transform methods typically used in risk theory no longer straightforward. Solution of the integro-differential equation in (7) has so far only been undertaken numerically, see, for example, Kovacevic and Pflug (2011). 
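Definition 2.1 can also be simulated directly: between Poisson loss times the surplus above \(x^{*}\) grows exponentially, and at each loss the capital is multiplied by an independent \(\text{Beta}(\alpha,1)\) factor. The sketch below is an illustration under assumed parameters, not the authors' code; it produces a finite-horizon Monte Carlo estimate of the trapping probability, which is a lower bound for the infinite-time probability studied analytically in the following sections.

```python
import numpy as np

def trapping_prob_mc(x0, x_star=1.0, r=0.504, lam=1.0, alpha=1.0,
                     horizon=500.0, n_paths=2000, seed=0):
    """Monte Carlo estimate of P(tau_x <= horizon) for the process of Definition 2.1."""
    rng = np.random.default_rng(seed)
    trapped = 0
    for _ in range(n_paths):
        x, t = float(x0), 0.0
        while True:
            dt = rng.exponential(1.0 / lam)              # inter-arrival time of losses
            if t + dt > horizon:
                break                                    # no further loss before the horizon
            t += dt
            if x > x_star:                               # deterministic growth, as in (5a)
                x = (x - x_star) * np.exp(r * dt) + x_star
            x *= rng.beta(alpha, 1.0)                    # multiplicative jump, Z ~ Beta(alpha, 1)
            if x < x_star:                               # the household falls below the poverty line
                trapped += 1
                break
    return trapped / n_paths

print(trapping_prob_mc(x0=2.0, alpha=1.0))
```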
In this paper, closed-form trapping probabilities are obtained through solution of (7) for special cases of remaining proportions of capital. First, note that there exists a relationship between the capital model of Definition 2.1 and the classical Cramer-Lundberg model. This enables specification of an upper bound on the trapping probability of the capital growth process \(X_{t}\) through Lundberg's inequality, derived in Lundberg (1926). Consider an adjustment of the capital process that is discretised at loss event times such that \(\tilde{X}_{i}=X_{T_{i}}\), i.e. the capital process studied in Kovacevic and Pflug (2011). Taking the logarithm of the adjusted process with critical capital \(x^{*}\) fixed at \(0\) yields \[L_{i}=L_{i-1}+r(T_{i}-T_{i-1})+\log(Z_{i})=\log x+rT_{i}+\sum_{i=1}^{N_{t}} \log(Z_{i}), \tag{9}\] where \(L_{i}\) is the logarithm of the \(i\)-th step in the discretised process \(\tilde{X}_{i}\) and \(\log(Z_{i})<0\). The model on the right-hand side of (9) is a version of the classical Cramer-Lundberg model introduced by Lundberg (1903) and Cramer (1930), which assumes an insurance company collects premiums continuously and pays claims of random size at random times. The corresponding surplus process is given by \[U_{t}=u+ct-\sum_{k=1}^{N_{t}}X_{k},\] where \(u\) is the initial capital, \(c\) the constant premium rate, \(X_{1},X_{2},...,X_{N_{t}}\) the random claim sizes and \(N_{t}\) the number of claims in the interval \([0,t]\). Claim sizes are assumed to be independent and identically distributed, \(N_{t}\) a homogeneous Poisson process and the sequence of claim sizes \(\{X_{k}\}_{k\in\mathbb{N}^{+}}\) and \(N_{t}\) independent. The net profit condition is a constraint that ensures, on average, that the capital gains of a household are superior to their losses. If this condition is not satisfied then trapping is certain. It is well-known in ruin theory that if the net profit condition holds, the process \(U_{t}\) converges to infinity almost surely as \(t\to\infty\) and there is a positive probability that \(U_{t}\geq 0\) for all \(t\). As a consequence of the net profit condition, it also holds that \(\lim_{u\to\infty}\psi(u)=0\), where \(\psi(u)\) is the ruin probability under the classical model. However, derivation of the net profit condition from the drift of \(U_{t}\) to infinity is not always straightforward. The Lundberg equation provides an alternative method for deriving the net profit condition. Assume that there exists a constant \(R>0\) such that the process \(\{e^{-RL_{i}}\}_{i\geq 0}\) is a martingale. The resulting equation is the Lundberg equation, and is given by \[\mathbb{E}[e^{-R\log(Z_{i})}]\mathbb{E}[e^{-Rr\tilde{T}_{i}}]=\mathbb{E}[e^{-R( \log(Z_{i})+r\tilde{T}_{i})}]=1,\] where \(\tilde{T}_{i}=T_{i}-T_{i-1}\) and the unique solution \(R\) is the adjustment coefficient. Thus, for \(R\) to exist, it must hold that \(\mathbb{E}[\log(Z_{i})+r\tilde{T}_{i}]>0\). In fact, for \(R\) to exist the net profit condition must hold. As such, the existence of \(R\) ensures that \(\lim_{u\to\infty}\psi(u)=0\). 
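As a side calculation (not taken from the paper), the Lundberg equation can be made explicit in the \(\text{Beta}(\alpha,1)\) case: with \(\tilde{T}_{i}\sim\text{Exp}(\lambda)\) one has \(\mathbb{E}[e^{-Rr\tilde{T}_{i}}]=\lambda/(\lambda+Rr)\) and \(\mathbb{E}[Z_{i}^{-R}]=\alpha/(\alpha-R)\) for \(R<\alpha\), so the martingale condition admits the positive root \(R=\alpha-\lambda/r\) precisely when \(\lambda/r<\alpha\), the constraint that reappears in Proposition 3.1 below. A minimal numerical check:

```python
from scipy.optimize import brentq

def lundberg_root(r, lam, alpha):
    """Positive solution R of E[Z^{-R}] * E[exp(-R r T)] = 1 for Z ~ Beta(alpha, 1)
    and T ~ Exp(lam); a positive root exists only if lam / r < alpha."""
    def lundberg(R):
        return (alpha / (alpha - R)) * (lam / (lam + R * r)) - 1.0
    return brentq(lundberg, 1e-9, alpha - 1e-9)   # search strictly inside (0, alpha)

r, lam, alpha = 0.504, 1.0, 5.0
print(lundberg_root(r, lam, alpha), alpha - lam / r)   # both approximately 3.016
```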
Then, if \(\mathbb{E}[\log(Z_{i})+r\tilde{T}_{i}]>0\), the logarithmic process in (9) converges to infinity almost surely, and \[\lim_{\log x\to\infty}\mathbb{P}(L_{i}<0|L_{0}=\log x)=0.\] Since \(\log x\to\infty\) implies \(x\to\infty\) it holds that \[\lim_{x\to\infty}f(x)\sim\lim_{x\to\infty}f(x|x^{*}=0)\leq\lim_{x\to\infty} \mathbb{P}(X_{t}<1|X_{0}=x)=\lim_{x\to\infty}\mathbb{P}(L_{i}<0|L_{0}=\log x) =0,\] where we have applied the equivalence of \(\tilde{X}_{i}\) and \(X_{t}\) at loss event times and the fact that asymptotically, the behaviour of the trapping probability \(f(x)\) remains unchanged for any \(x^{*}\). The upper boundary condition in (8) therefore holds if \(\mathbb{E}[\log(Z_{i})+r\tilde{T}_{i}]>0\). In Sections 3 and 4 we use the net profit condition to derive constraints on the parameters of the capital model for uninsured and proportionally insured households, respectively. The closed-form trapping probabilities are then derived through consideration of the associated infinitesimal generators for uninsured losses with \(\text{Beta}(\alpha,1)\) distributed remaining proportions of capital (Section 3) and proportionally insured losses with \(\text{Beta}(1,1)\) distributed remaining proportions of capital (Section 4). Laplace transform methods are applied in Section 3 and a derivative approach in Section 4, where a solution of the infinitesimal generator equation is derived recursively. ## 3 Derivation of trapping probability under no insurance coverage Under the assumption of remaining proportions of capital with distribution \(Z_{i}\sim\text{Beta}(\alpha,1)\), letting \(u=x\cdot z\) reduces the infinitesimal generator of the capital growth process in (7) to \[\mathcal{A}f(x)=r(x-x^{*})f^{\prime}(x)-\lambda f(x)+\frac{\lambda\alpha}{x^{ \alpha}}\int_{0}^{x}f(u)u^{\alpha-1}du, \tag{10}\] for \(x\geq x^{*}\). **Proposition 3.1**.: _Consider a household capital process as proposed in Definition 2.1 with initial capital \(x\geq x^{*}\), capital growth rate \(r\), loss intensity \(\lambda>0\) and remaining proportions of capital with distribution \(\text{Beta}(\alpha,1)\). The adjustment coefficient of the corresponding Lundberg equation exists if_ \[\frac{\lambda}{r}<\alpha. \tag{11}\] Proof.: For remaining proportions of capital with distribution \(\text{Beta}(\alpha,1)\), given that \(Z_{i}\) and \(\tilde{T}_{i}\) are independent and since \(\mathbb{E}[\log(Z_{i})]=\alpha\int_{0}^{1}\log(z)z^{\alpha-1}dz\), \(\mathbb{E}[\log(Z_{i})+r\tilde{T}_{i}]\) holds if and only if (11) is satisfied, as required. As \(\lambda\) specifies the number of claims per unit time, accounting for the fact that the mean loss size under \(\text{Beta}(\alpha,1)\) distributed remaining proportions of capital is \(1-(\alpha+1)^{-1}\), the ratio of capital loss to capital growth is \(\lambda\alpha/(r(\alpha+1))\). We now derive the trapping probability through solution of \(\mathcal{A}f(x)=0\) in line with the discussion of Section 2. Since households face certain trapping if the net profit condition is violated, our analysis focuses only on the region for which (11) holds. **Proposition 3.2**.: _Consider a household capital process as proposed in Definition 2.1 with initial capital \(x\geq x^{*}\), capital growth rate \(r\), loss intensity \(\lambda>0\) and remaining proportions of capital with distribution \(\text{Beta}(\alpha,1)\). 
The closed-form trapping probability is given by_ \[f(x)=\frac{\Gamma(\alpha)}{\Gamma\left(\frac{\lambda}{r}\right)\Gamma\left( \alpha-\frac{\lambda}{r}+1\right)}\left(\frac{x}{x^{*}}\right)^{\frac{\lambda }{r}-\alpha}{}_{2}F_{1}\left(\alpha-\frac{\lambda}{r},1-\frac{\lambda}{r}; \alpha-\frac{\lambda}{r}+1;\frac{x^{*}}{x}\right) \tag{12}\] _for \(\frac{\lambda}{r}<\alpha\), where \({}_{2}F_{1}(\cdot)\) is the Gauss hypergeometric function._ Proof.: Fix \(\mathcal{A}f(x)=0\) and take the Laplace transform, where the infinitesimal generator of the process for \(x\leq x^{*}\) is zero. Then, denoting \(F(s):=\int_{0}^{\infty}f(x)e^{-sx}ds\), \[s^{2}F^{(\alpha+1)}(s)+s\Big{(}\Big{(}\alpha+1+\frac{\lambda}{r}\Big{)}+x^{*} s\Big{)}F^{(\alpha)}(s)+\alpha\Big{(}x^{*}s+\frac{\lambda}{r}\Big{)}F^{( \alpha-1)}(s)=0, \tag{13}\] where \(F^{(n)}\) denotes the \(n\)-th derivative of \(F\). Letting \(y(s)=F^{(\alpha-1)}(s)\), such that \(y^{\prime}(s)=F^{(\alpha)}(s)\) and \(y^{\prime\prime}(s)=F^{(\alpha+1)}(s)\), and substituting \(y(s)=s^{-\alpha}w(s)\) reduces (13) to the second-order ODE \[sw^{\prime\prime}(s)+\Big{(}\Big{(}1+\frac{\lambda}{r}-\alpha\Big{)}+x^{*}s \Big{)}w^{\prime}(s)=0,\] which solves to give \[F^{(\alpha-1)}(s)=C_{1}x^{*(\frac{\lambda}{r}-\alpha)}s^{-\alpha}\gamma\left( \alpha-\frac{\lambda}{r},x^{*}s\right)+C_{2}s^{-\alpha}, \tag{14}\] where \(\frac{\lambda}{r}<\alpha\). Since \(F^{\prime}(s)=-\mathcal{L}(xf(x))\) it is possible to prove by induction that \(F^{(n)}(s)=(-1)^{n}\mathcal{L}(x^{n}f(x))\). As such, application of the inverse Laplace transform to (14), see, for example, Section (3.10) of Prudnikov et al. (1992), gives that the general solution of \(\mathcal{A}f(x)=0\) for \(\mathcal{A}f(x)\) in (10) is \[f(x)=\begin{cases}C_{2}\frac{(-1)^{1-\alpha}}{\Gamma(\alpha)}+C_{1}x^{*(\frac {\lambda}{r}-\alpha)}\frac{\Gamma\left(\alpha-\frac{\lambda}{r}\right)}{\Gamma (\alpha)}(-1)^{1-\alpha},&0<x<x^{*}\\ C_{2}\frac{(-1)^{1-\alpha}}{\Gamma(\alpha)}+C_{1}\frac{\left(\alpha-\frac{ \lambda}{r}\right)^{-1}}{\Gamma\left(\frac{\lambda}{r}\right)}(-1)^{1-\alpha} x^{\frac{\lambda}{r}-\alpha}{}_{2}F_{1}\Big{(}\alpha-\frac{\lambda}{r},1- \frac{\lambda}{r};\alpha-\frac{\lambda}{r}+1;\frac{x^{*}}{x}\Big{)},&x^{*}<x, \end{cases}\] for \(\text{Re}(-\lambda/r)<1\) and \(\text{Re}(\alpha,x^{*}),\text{Re}(s)>0\). Applying the boundary conditions on \(f(x)\) in (8) yields \[C_{2}=0\ \ \text{and}\ \ C_{1}=\frac{\Gamma(\alpha)}{\Gamma\left(\alpha- \frac{\lambda}{r}\right)}(-1)^{\alpha-1}x^{*(\alpha-\frac{\lambda}{r})},\] such that the closed-form trapping probability is given by (12), as required. The hypergeometric series corresponding to the solution in (12) has domain of convergence \(|x^{*}/x|<1\), such that the solution converges for all levels of capital in the domain of \(f(x)\). 
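Expression (12) involves only gamma functions and a Gauss hypergeometric function, so it is straightforward to evaluate numerically. The sketch below is illustrative only; it assumes SciPy's hyp2f1 and borrows the parameter values quoted in the figure captions (\(x^{*}=1\), \(\lambda=1\), \(r=0.504\)). The computed values approach \(1\) just above \(x^{*}\) and decrease in \(x\), as expected from the boundary conditions (8), and can be cross-checked against the Monte Carlo sketch given after Definition 2.1.

```python
import numpy as np
from scipy.special import gamma, hyp2f1

def trapping_prob_uninsured(x, x_star=1.0, r=0.504, lam=1.0, alpha=5.0):
    """Closed-form trapping probability (12) for Z ~ Beta(alpha, 1); requires lam/r < alpha."""
    q = lam / r
    prefactor = gamma(alpha) / (gamma(q) * gamma(alpha - q + 1.0))
    return (prefactor * (x / x_star) ** (q - alpha)
            * hyp2f1(alpha - q, 1.0 - q, alpha - q + 1.0, x_star / x))

xs = np.array([1.001, 1.5, 2.0, 3.0, 5.0, 10.0])
print(trapping_prob_uninsured(xs))      # decreasing in x; close to 1 near x* = 1
```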
**Corollary 3.1**.: _The closed-form trapping probability in (12) is equivalent to_ \[f(x)=1-\frac{\Gamma\left(\alpha\right)}{\Gamma\left(\frac{\lambda}{r}+1\right) \Gamma\left(\alpha-\frac{\lambda}{r}\right)}\left(1-\frac{x^{*}}{x}\right)^{ \frac{\lambda}{r}}{}_{2}F_{1}\left(\frac{\lambda}{r},1+\frac{\lambda}{r}-\alpha ;1+\frac{\lambda}{r};1-\frac{x^{*}}{x}\right) \tag{16}\] _for \(\frac{\lambda}{r}<\alpha\), where \({}_{2}F_{1}(\cdot)\) is the Gauss hypergeometric function._ Proof.: Apply the hypergeometric transform: \[{}_{2}F_{1}(a,b;c;z)= \frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)}z^{-a}{}_{2} F_{1}\left(a,a-c+1;a+b-c+1;1-\frac{1}{z}\right)\] \[+\frac{\Gamma(c)\Gamma(a+b-c)}{\Gamma(a)\Gamma(b)}(1-z)^{c-a-b}z^{ a-c}{}_{2}F_{1}\left(c-a,1-a;c-a-b+1;1-\frac{1}{z}\right),\] which holds for \(|\arg z|<\pi\) and \(|\arg(1-z)|<\pi\), to (12), where we extend the gamma function to negative non-integer values by the relation \[\Gamma(x):=\frac{1}{x}\Gamma(x+1),\] for \(x<0,x\notin\mathbb{Z}\). The two series corresponding to the resulting hypergeometric solutions have domain of convergence \(|1-x/x^{*}|<1\), such that the solutions diverge where \(x>2x^{*}\). Applying the relation \[{}_{2}F_{1}(a,b;c;z)=(1-z)^{c-a-b}{}_{2}F_{1}\left(c-a,c-b;c;z\right)\] and transforming via the formula \[{}_{2}F_{1}(a,b;c;z)=(1-z)^{-a}{}_{2}F_{1}\left(a,c-b;c;\frac{z}{z-1}\right)\] adjusts the domain of convergence of the corresponding hypergeometric series to \(|1-x^{*}/x|<1\), such that the solution converges over all \(x>x^{*}\), and gives (16), as required. For details of the hypergeometric solutions, their relation and transformation formulas and domains of convergence that are used throughout this paper, see Abramowitz and Stegun (1972) and Kristensson (2010). **Remark 3.1**.: _Substitution of \(\alpha=1\) into (12), or equivalently (16), yields the closed-form trapping probability under uniformly distributed remaining proportions of capital, i.e. the case \(Z_{i}\sim\text{Beta}(1,1)\)._ The closed-form trapping probability for households susceptible to proportional losses with \(\text{Beta}(\alpha,1)\) distributed remaining proportions of capital, as derived in Proposition 3.2, is presented in Figure 0(a) for varying initial capital \(x\) and shape parameter \(\alpha\). Note that the trapping probability tends to \(1\) as \(\lambda/r\) tends to \(\alpha\) in line with the constraint of Proposition 3.1. The low value of the rate parameter \(\lambda\) reflects the vulnerability of low-income households to both high and low frequency loss events, while aligning with the constraint in Proposition 3.1. Increasing \(\alpha\) increases the mean of the distribution of the remaining proportion of capital. Observation of a decreasing trapping probability with increasing \(\alpha\) is therefore intuitive and aligns with the reduction in loss. Figure 0(b) presents the same trapping probability for varying loss frequency \(\lambda\) and fixed \(\alpha=1\). In this case, remaining proportions of capital are uniformly distributed as in Section 4. Increasing the frequency of loss events increases the trapping probability, as is to be expected. Parameters \(a,b\) and \(c\) are selected to correspond with those in Flores-Contro et al. (2022). Particularly high levels of accumulated capital are not relevant in the microinsurance and poverty trapping context. 
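Because (12) and (16) are linked by the hypergeometric transformations used in the proof of Corollary 3.1, a quick numerical comparison (again, only an illustrative sketch under assumed parameters) is a convenient sanity check that the two representations agree.

```python
from scipy.special import gamma, hyp2f1

def f_eq12(x, x_star=1.0, r=0.504, lam=1.0, alpha=5.0):
    q = lam / r
    return (gamma(alpha) / (gamma(q) * gamma(alpha - q + 1.0))
            * (x / x_star) ** (q - alpha)
            * hyp2f1(alpha - q, 1.0 - q, alpha - q + 1.0, x_star / x))

def f_eq16(x, x_star=1.0, r=0.504, lam=1.0, alpha=5.0):
    q = lam / r
    z = 1.0 - x_star / x
    return (1.0 - gamma(alpha) / (gamma(q + 1.0) * gamma(alpha - q))
            * z ** q * hyp2f1(q, 1.0 + q - alpha, 1.0 + q, z))

for x in (1.2, 2.0, 5.0, 20.0):
    print(f"x = {x:5.1f}:  (12) gives {f_eq12(x):.10f}   (16) gives {f_eq16(x):.10f}")
```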
Figure 1: Trapping probability \(f(x)\) in (12) for \(x^{*}=1\) and \(Z_{i}\sim\text{Beta}(\alpha,1)\), considering: (a) \(\lambda=1\) and different values of \(\alpha\), (b) \(\alpha=1\) and different values of \(\lambda\). The value of the capital growth rate \(r=0.504\) is computed with \(a=0.1\), \(b=1.4\), \(c=0.4\). However, the asymptotic behaviour of the analytic trapping probability at infinity remains useful for understanding the behaviour of the function. Since \(\lim_{z\to 0}\,{}_{2}F_{1}(a,b;c;z)=1\), (12) behaves asymptotically like the power function \[\frac{\Gamma\left(\alpha\right)}{\Gamma\left(\frac{\lambda}{r}\right)\Gamma\left(\alpha-\frac{\lambda}{r}+1\right)}\left(\frac{x}{x^{*}}\right)^{\frac{\lambda}{r}-\alpha}, \tag{17}\] such that the uninsured trapping probability has power-law asymptotic decay as \(x\to\infty\). We now compare the decay of the household-level trapping probability under proportional losses and no insurance coverage with that of the exponentially distributed random-valued loss case of Flores-Contro et al. (2022). The equivalent uninsured trapping probability under random-valued losses for \(x\geq x^{*}\) is given by \[f(x)=\frac{\Gamma\left(\frac{\lambda}{r};\mu(x-x^{*})\right)}{\Gamma\left(\frac{\lambda}{r}\right)}, \tag{18}\] where \(\Gamma(a;z)\) is the upper incomplete gamma function: \(\Gamma(a;z):=\int_{z}^{\infty}e^{-t}t^{a-1}dt\). The probability in (18) follows \[\mu^{\frac{\lambda}{r}-1}(x-x^{*})^{\frac{\lambda}{r}-1}e^{-\mu(x-x^{*})}(1+\mathcal{O}(|\mu(x-x^{*})|^{-1})) \tag{19}\] asymptotically, where \(\mu\) is the exponential loss parameter. The limiting behaviour of the ratio of (19) to (17) is \[Cx^{\alpha-1}e^{-\mu(x-x^{*})}(1+\mathcal{O}(|\mu(x-x^{*})|^{-1})),\] for constant \(C=x^{*\lambda/r-\alpha}\mu^{\lambda/r-1}\Gamma\left(\lambda/r\right)\Gamma\left(\alpha-\lambda/r+1\right)\Gamma\left(\alpha\right)^{-1}\). The trapping probability in the random-valued case therefore decays at a faster rate than when a household experiences proportional losses, with the severity of this difference dependent on the parameters of the loss distributions. This result is intuitive, since proportional losses are more risky than random-valued losses at high capital levels due to the non-zero probability of a household losing all (or a high proportion) of its wealth. This is particularly severe in the uniform case of the following section, where high and low levels of proportional losses are equally likely. When \(\alpha=1\), the trapping probability in the random-valued case decays exponentially faster than in the proportional case. A comparison of the decay of the trapping probability under proportional losses against that of random-valued losses is provided in the inset of Figure 2, where the probabilities are plotted on the logarithmic scale. Here, the slower rate of decay under proportional losses is clearly observable. Figure 2 compares trapping probabilities under proportional (12) and random-valued (18) losses for a given set of parameters. Trapping probabilities for a number of exponential claim size distributions are compared with the trapping probability under proportional losses with an expected value of approximately \(16.7\%\) of accumulated capital.
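The random-valued benchmark (18) is simply a regularised upper incomplete gamma function, exposed in SciPy as gammaincc, so the decay comparison above can be reproduced in a few lines (illustrative parameters only): the exponential-loss probability decays essentially exponentially in \(x\), while the proportional-loss probability (12) decays only like a power of \(x\).

```python
import numpy as np
from scipy.special import gamma, gammaincc, hyp2f1

def f_random_valued(x, x_star=1.0, r=0.504, lam=1.0, mu=1.0):
    """Trapping probability (18) for Exp(mu) random-valued losses:
    the regularised ratio Gamma(lam/r; mu(x - x*)) / Gamma(lam/r)."""
    return gammaincc(lam / r, mu * (x - x_star))

def f_proportional(x, x_star=1.0, r=0.504, lam=1.0, alpha=5.0):
    """Trapping probability (12) for Beta(alpha, 1) proportional losses."""
    q = lam / r
    return (gamma(alpha) / (gamma(q) * gamma(alpha - q + 1.0))
            * (x / x_star) ** (q - alpha)
            * hyp2f1(alpha - q, 1.0 - q, alpha - q + 1.0, x_star / x))

xs = np.array([2.0, 5.0, 10.0, 20.0, 50.0])
print(f_random_valued(xs))      # rapid, essentially exponential decay
print(f_proportional(xs))       # much slower, power-law decay
```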
For random-valued claim sizes with an expected value of \(0.5\) (\(\mu=2\)) the trapping probability is greater than for proportional losses for the most vulnerable, however, as capital increases the trapping probability under proportional losses exceeds the random-valued case. If the expected claim size increases to \(1\) (\(\mu=1\)) the trapping probability for proportional losses is significantly lower than in the random-valued case at all levels of initial capital. Compared to the mean loss associated with beta distributed remaining proportions with \(\alpha=5\), an expected claim size of \(1\) is low with respect to high levels of initial capital. For \(x=6\) the two loss rates coincide. This therefore suggests that for equivalent loss size, the trapping probability for proportional losses is reduced in comparison to random-valued losses. However, for capital levels below this point random-valued losses account for a greater proportion of capital than the proportional loss case selected for comparison and thus the increased trapping probability is intuitive. Further analysis would be needed to validate the consistency in the reduction of the probability for equivalent losses. ## 4 Derivation of trapping probability under proportional insurance coverage In line with Kovacevic and Pflug (2011) and Flores-Contro et al. (2022), in this section, we extend the model under the assumption that capital losses are covered by a proportional insurance product. Consider the presence of a fixed premium insurance product that covers \(100\cdot(1-\kappa)\) percent of all household losses, where \(1-\kappa\) for \(\kappa\in(0,1]\) is the proportionality factor. Assume that coverage is purchased by all households. Under proportional insurance coverage, the critical capital (or poverty line) and capital growth rate associated with an insured household must account for the need for premium payments. As such, define \[r(\kappa,\lambda,\theta)=(1-a)\cdot(b-\pi(\kappa,\lambda,\theta))\cdot c\quad \text{and}\quad x^{*}(\kappa,\lambda,\theta)=\frac{I^{*}}{b-\pi(\kappa,\lambda,\theta)}, \tag{20}\] where \(\pi(\kappa,\lambda,\theta)\) is the premium rate and is calculated according to the expected value principle: \[\pi(\kappa,\lambda,\theta)=(1+\theta)\cdot(1-\kappa)\cdot\lambda\cdot \mathbb{E}\left[1-Z_{i}\right].\] For ease of presentation, throughout the remainder of the paper we denote the capital growth rate \(r:=r(\kappa,\lambda,\theta)\) and the critical capital \(x^{*}:=x^{*}(\kappa,\lambda,\theta)\). Parameters \(a,b\) and \(c\) are household rates of consumption, income generation and investment or savings as defined in Section 2 and the parameter \(\theta\) is the loading factor specified by the insurer. We assume that these parameters, and the critical income \(I^{*}\), are not changed by the introduction of insurance. However, due to the need for premium payments, the critical capital in the insured case is greater than that of an uninsured household, while the capital growth rate is reduced. The associated capital growth process has an analogous structure to that of Definition 2.1, with the remaining proportion of capital after each loss event instead denoted \(Y_{i}\), where \(Y_{i}=1-\kappa(1-Z_{i})\in[1-\kappa,1]\). As such, in between loss events, where \(T_{i-1}\leq t<T_{i}\), the capital growth process follows (5a) and (5b). 
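For \(Z_{i}\sim\text{Beta}(\alpha,1)\) the expected loss proportion is \(\mathbb{E}[1-Z_{i}]=1/(\alpha+1)\), so the premium and the insured parameters in (20) follow mechanically. The sketch below is illustrative; the parameter values repeat those quoted in the figure captions, with \(I^{*}=1.4\) assumed so that the uninsured poverty line is \(x^{*}=1\).

```python
def insured_parameters(kappa, lam, theta, a=0.1, b=1.4, c=0.4, I_star=1.4, alpha=1.0):
    """Premium, growth rate and poverty line (20) under proportional coverage 1 - kappa."""
    mean_loss = 1.0 / (alpha + 1.0)                     # E[1 - Z] for Z ~ Beta(alpha, 1)
    premium = (1.0 + theta) * (1.0 - kappa) * lam * mean_loss
    r_ins = (1.0 - a) * (b - premium) * c
    x_star_ins = I_star / (b - premium)
    return premium, r_ins, x_star_ins

# kappa = 1 recovers the uninsured parameters r = 0.504 and x* = 1
print(insured_parameters(kappa=1.0, lam=1.0, theta=0.5))
print(insured_parameters(kappa=0.3, lam=1.0, theta=0.5))
```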
At event times \(t=T_{i}\), the process is given by \[X_{T_{i}}=\begin{cases}[\left(X_{T_{i-1}}-x^{*}\right)e^{r\left(T_{i}-T_{i-1}\right)}+x^{*}]\cdot Y_{i},&\text{if }X_{T_{i-1}}>x^{*}\quad\text{(21a)}\\ X_{T_{i-1}}\cdot Y_{i},&\text{otherwise.}\quad\text{(21b)}\end{cases}\] Note that for \(\kappa=1\), the capital model in (21a) and (21b) and the parameters \(r\) and \(x^{*}\) exactly correspond to those of an uninsured household, as discussed in Section 3. Figure 2: Comparison between the trapping probability \(f(x)\) in (18) for random-valued losses with distribution \(\operatorname{Exp}(\mu)\) for different values of \(\mu\) and the trapping probability \(f(x)\) in (12) for proportional losses with distribution \(\operatorname{Beta}(5,1)\), with parameters \(x^{*}=1\), \(\lambda=1\) and \(r=0.504\) computed with \(a=0.1\), \(b=1.4\), \(c=0.4\). The inset presents the same curves on the logarithmic scale and on a wider domain. **Proposition 4.1**.: _Consider a household capital process defined by (5a) and (5b) in between loss events and by (21a) and (21b) at loss event times, with coverage proportionality factor \(1-\kappa\in(0,1]\). For initial capital \(x\geq x^{*}\), capital growth rate \(r\), loss intensity \(\lambda>0\) and remaining proportions of capital \(Z_{i}\) with distribution Beta\((\alpha,1)\), the adjustment coefficient of the corresponding Lundberg equation exists if_ \[\frac{r}{\lambda}>\frac{\kappa}{(\alpha+1)(1-\kappa)}{}_{2}F_{1}\left(1,\alpha+1;\alpha+2;-\frac{\kappa}{1-\kappa}\right), \tag{22}\] _where \({}_{2}F_{1}(\cdot)\) is the Gauss hypergeometric function._ Proof.: The condition that must hold for the adjustment coefficient \(R\) to exist under proportional insurance coverage, and thus for the net profit condition to be satisfied, is \[\mathbb{E}[r\tilde{T}_{i}+\log(1-\kappa(1-Z_{i}))]>0\iff\mathbb{E}[\log(1-\kappa(1-Z_{i}))]>-\frac{r}{\lambda}.\] For \(Z_{i}\sim\text{Beta}(\alpha,1)\), using integration by parts, \[\mathbb{E}[\log(1-\kappa(1-Z_{i}))]=-\kappa\int_{0}^{1}(1-\kappa+\kappa z)^{-1}z^{\alpha}dz,\] the right-hand side of which is the integral representation of a Gauss hypergeometric function, giving exactly (22), as required. **Remark 4.1**.: _For \(Z_{i}\sim\text{Beta}(1,1)\), the constraint for existence of the adjustment coefficient reduces to_ \[\frac{r}{\lambda}>1+\frac{1-\kappa}{\kappa}\ln(1-\kappa). \tag{23}\] The constraint on \(\lambda/r\) in (23) is presented in Figure 3a for varying \(\theta\) and Figure 3b for varying \(\alpha\). Note that the sensitivity of the constraint to the loading factor \(\theta\) increases for decreasing \(\kappa\) and thus increasing insurance coverage. In the experiments considered in Figure 3b, the constraint is bounded above by the uniform case, where \(\alpha=1\). This indicates that the parameter region in which certain trapping is prevented is greater for uniformly distributed remaining proportions of capital. In a similar manner, Figure 3a implies that lowering the loading factor \(\theta\) increases the region in which certain trapping is prevented when remaining proportions are uniformly distributed.
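Both forms of the net profit constraint are easy to evaluate; the sketch below (illustrative only) computes the right-hand side of (22) with SciPy's hyp2f1 and confirms that for \(\alpha=1\) it coincides with the explicit bound in (23).

```python
import numpy as np
from scipy.special import hyp2f1

def constraint_rhs_general(kappa, alpha):
    """Right-hand side of (22): lower bound on r/lambda for Z ~ Beta(alpha, 1)."""
    w = -kappa / (1.0 - kappa)
    return kappa / ((alpha + 1.0) * (1.0 - kappa)) * hyp2f1(1.0, alpha + 1.0, alpha + 2.0, w)

def constraint_rhs_uniform(kappa):
    """Right-hand side of (23), the alpha = 1 special case."""
    return 1.0 + (1.0 - kappa) / kappa * np.log(1.0 - kappa)

for kappa in (0.2, 0.5, 0.8):
    print(kappa, constraint_rhs_general(kappa, alpha=1.0), constraint_rhs_uniform(kappa))
```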
**Remark 4.2**.: _For \(\kappa=1\), since \({}_{2}F_{1}(a,b;c;1)=\Gamma(c)\Gamma(c-a-b)/(\Gamma(c-a)\Gamma(c-b))\), applying the identity_ \[{}_{2}F_{1}(a,b;c;z)=(1-z)^{-a}{}_{2}F_{1}\left(a,c-b;c;\frac{z}{z-1}\right),\] (22) _reduces to the uninsured constraint in (11)._ Figure 3: Upper boundary of the region defined by the constraint on \(\lambda/r\) in (23) for \(a=0.1\), \(b=1.4\), \(c=0.4\) with (a) fixed \(\alpha=1\) and different values of \(\theta\) and (b) fixed \(\theta=0.5\) and different values of \(\alpha\). We approach the derivation of the trapping probability of the insured process in a manner analogous to that described in Section 2, noting the adjustment in the domain of the random variable capturing the remaining proportion of capital. The infinitesimal generator corresponding to the capital process in (5a), (5b), (21a) and (21b) is given by \[\mathcal{A}f(x)=r(x-x^{*})f^{\prime}(x)+\lambda\int_{1-\kappa}^{1}[f(x\cdot y)-f(x)]d\tilde{G}(y), \tag{24}\] where \(\tilde{G}(y)=G\left(1-(1-y)/\kappa\right)\) is the distribution function of \(Y_{i}\). To derive the trapping probability under proportional insurance coverage we consider only the case \(\alpha=1\), i.e. \(Z_{i}\sim\text{Beta}(1,1)\), where remaining proportions of capital are uniformly distributed and \(d\tilde{G}(y)=dy/\kappa\). Solution of \(\mathcal{A}f(x)=0\) is again sought to obtain the trapping probability of the insured process, where \(f(x)\) is assumed to be a bounded and twice continuously differentiable function on \(x\geq x^{*}\) with a bounded first derivative and boundary conditions as in (8). Using equivalent arguments to those presented in the discussion of the net profit condition in Section 2, if (23) is satisfied the boundary condition \(\lim_{x\to\infty}f(x)=0\) holds. Households face certain trapping if the net profit condition is violated, therefore our analysis focuses only on the region in which (23) holds. Taking the derivative of the infinitesimal generator equation \(\mathcal{A}f(x)=0\) with respect to \(x\) for \(\mathcal{A}f(x)\) in (24) with \(d\tilde{G}(y)=dy/\kappa\), yields \[x\left(x-x^{*}\right)f^{\prime\prime}\left(x\right)+\left(\left(2-\frac{\lambda}{r}\right)x-x^{*}\right)f^{\prime}(x)+\frac{\lambda(1-\kappa)}{r\kappa}f(x)=\frac{\lambda(1-\kappa)}{r\kappa}f\left(\left(1-\kappa\right)x\right).\] As such, even in the simple case of uniformly distributed remaining proportions of capital, application of the differential operator induces a non-local term in the resulting differential equation. When taking the Laplace transform of \(\mathcal{A}f(x)=0\), as in Section 3, a non-local differential equation is also obtained. Derivation of the trapping probability is therefore highly intractable when adopting classical approaches. The non-locality is caused by the lower integral limit in (24). To overcome this, consider the following. If \(y\) is such that \(x\cdot y\leq x^{*}\) then \(f(x\cdot y)\) is known. In fact, for all \(y\in[1-\kappa,x^{*}/x]\) trapping occurs with the first loss, such that \(f(x\cdot y)=1\). For \(y\) in this interval, the integral in (24) is trivial. Exploiting this observation, we redefine the infinitesimal generator as a piecewise function with boundary at \(x=x^{*}/(1-\kappa)\), where \(1-\kappa\) is the lower bound of \(Y_{i}\). In this way, for \(x>x^{*}/(1-\kappa)\) a household cannot become trapped by the first loss for any realisation of \(Y_{i}\).
We therefore obtain a piecewise IDE that can be solved in a standard manner for \(x<x^{*}/(1-\kappa)\), but for \(x>x^{*}/(1-\kappa)\) the problem of non-locality remains. Our approach, as described below, partitions the domain of \(f(x)\) into subintervals such that the solution of \(\mathcal{A}f(x)=0\) for \(x\) in any given subinterval is informed by the solution in the previous subinterval. We begin by considering the two fundamental subintervals, divided where \(x=x^{*}/(1-\kappa)\). The behaviour of the capital process above the critical capital \(x^{*}\) determines a household's trapping probability, with only surplus capital above the critical capital growing exponentially. Thus, additionally consider the change of variable \(h(x)=f(x+x^{*})\) for \(x>0\). Then, for \(\tilde{x}=x-x^{*}>0\), the piecewise infinitesimal generator \(\mathcal{A}h(\tilde{x})\) is given by \[r(\tilde{x}+x^{*})\tilde{x}h^{\prime}(\tilde{x})-\lambda(\tilde{x}+x^{*})h(\tilde{x})+\frac{\lambda}{\kappa}\int_{(\tilde{x}+x^{*})(1-\kappa)}^{\tilde{x}+x^{*}}h(u-x^{*})du,\quad\tilde{x}>\frac{x^{*}\kappa}{1-\kappa} \tag{25a}\] \[r(\tilde{x}+x^{*})\tilde{x}h^{\prime}(\tilde{x})-\lambda(\tilde{x}+x^{*})h(\tilde{x})+\frac{\lambda}{\kappa}\int_{x^{*}}^{\tilde{x}+x^{*}}h(u-x^{*})du+\lambda x^{*}-\frac{\lambda\tilde{x}(1-\kappa)}{\kappa},\quad\tilde{x}<\frac{x^{*}\kappa}{1-\kappa}, \tag{25b}\] where the subintervals on the domain of \(\tilde{x}\) have interface at \(x=x^{*}/(1-\kappa)\). Under this change of variable and assuming \(r/\lambda\) satisfies (23), the trapping probability satisfies \(\mathcal{A}h(\tilde{x})=0\), with boundary conditions: \[\lim_{\tilde{x}\to 0}h(\tilde{x})=1\quad\text{and}\quad\lim_{\tilde{x}\to\infty}h(\tilde{x})=0.\] To solve this system, we consider the derivative of the piecewise IDE in (25a) and (25b). Fixing \(\mathcal{A}h(\tilde{x})=0\) and taking the derivative with respect to \(\tilde{x}\) gives \[\tilde{x}(\tilde{x}+x^{*})h^{\prime\prime}(\tilde{x})+\left(\left(2-\frac{\lambda}{r}\right)\tilde{x}+x^{*}\left(1-\frac{\lambda}{r}\right)\right)h^{\prime}(\tilde{x})+\frac{\lambda(1-\kappa)}{r\kappa}h(\tilde{x})=\frac{\lambda(1-\kappa)}{r\kappa}h((1-\kappa)\tilde{x}-x^{*}\kappa), \tag{26}\] for \(\tilde{x}>x^{*}\kappa/(1-\kappa)\) and \[\tilde{x}(\tilde{x}+x^{*})h^{\prime\prime}(\tilde{x})+\left(\left(2-\frac{\lambda}{r}\right)\tilde{x}+x^{*}\left(1-\frac{\lambda}{r}\right)\right)h^{\prime}(\tilde{x})+\frac{\lambda(1-\kappa)}{r\kappa}h(\tilde{x})=\frac{\lambda(1-\kappa)}{r\kappa} \tag{27}\] for \(\tilde{x}<x^{*}\kappa/(1-\kappa)\), where, as mentioned, we observe the non-local term \(h((1-\kappa)\tilde{x}-x^{*}\kappa)\) for \(\tilde{x}>x^{*}\kappa/(1-\kappa)\). First consider the homogeneous parts of (26) and (27), noting their equivalence. Under the change of variable \(m(z):=h(\tilde{x})\), where \(z=-\tilde{x}/x^{*}\), this homogeneous differential equation is exactly Gauss' hypergeometric differential equation: \[z(z-1)m^{\prime\prime}(z)+\left(\left(2-\frac{\lambda}{r}\right)z-1+\frac{\lambda}{r}\right)m^{\prime}(z)+\frac{\lambda(1-\kappa)}{r\kappa}m(z)=0, \tag{28}\] with known solutions.
We construct a general solution of (28) of the following form: \[m(z)= C_{1}\left(1-z\right)^{-a_{1}}{}_{2}F_{1}\left(a_{1},c_{1}-b_{1},c_{1} ;\frac{z}{z-1}\right)\] \[+C_{2}z^{1-c_{1}}\left(1-z\right)^{c_{1}-a_{1}-1}{}_{2}F_{1} \left(1+a_{1}-c_{1},1-b_{1},2-c_{1};\frac{z}{z-1}\right),\] where \({}_{2}F_{1}(\cdot)\) is the Gauss hypergeometric function and \[a_{1}+b_{1}=c_{1},\quad a_{1}\cdot b_{1}=\frac{\lambda(1-\kappa)}{r\kappa} \quad\text{and}\quad c_{1}=1-\frac{\lambda}{r}.\] The parameters \(a_{1}\) and \(b_{1}\) are complex conjugates: \[a_{1}=\frac{1}{2}\left(1-\frac{\lambda}{r}\right)\pm\frac{1}{2}\sqrt{\left(1+ \frac{\lambda}{r}\right)^{2}-\frac{4\lambda}{r\kappa}}\quad\text{and}\quad b _{1}=\frac{1}{2}\left(1-\frac{\lambda}{r}\right)\mp\frac{1}{2}\sqrt{\left(1+ \frac{\lambda}{r}\right)^{2}-\frac{4\lambda}{r\kappa}},\] with positive real part where \(\lambda/r<1\). Returning to the inhomogeneous differential equations in (26) and (27), let \[\mathcal{L}=r(\tilde{x})\frac{d^{2}}{d\tilde{x}^{2}}+p(\tilde{x})\frac{d}{d \tilde{x}}+q(\tilde{x}),\] where \(r(\tilde{x})=\tilde{x}(\tilde{x}+x^{*})\), \(p(\tilde{x})=(2-\lambda/r)\tilde{x}+x^{*}(1-\lambda/r)\) and \(q(\tilde{x})=\lambda(1-\kappa)/r\kappa\), denote the linear, second order operator for which \[u(\tilde{x}) =\left(1+\frac{\tilde{x}}{x^{*}}\right)^{-a_{1}}{}_{2}F_{1} \left(a_{1},a_{1},1-\frac{\lambda}{r};\frac{\tilde{x}}{\tilde{x}+x^{*}}\right),\] \[v(\tilde{x}) =\left(\frac{\tilde{x}}{\tilde{x}+x^{*}}\right)^{\frac{\lambda}{ r}}\left(1+\frac{\tilde{x}}{x^{*}}\right)^{-a_{1}}{}_{2}F_{1}\left(\frac{ \lambda}{r}+a_{1},\frac{\lambda}{r}+a_{1},1+\frac{\lambda}{r};\frac{\tilde{x}} {\tilde{x}+x^{*}}\right), \tag{29}\] forms the fundamental solution set, and let \[G(\tilde{x},x^{\prime})=\frac{u(x^{\prime})v(\tilde{x})-u(\tilde{x})v(x^{ \prime})}{r(x^{\prime})W(x^{\prime})} \tag{30}\] be the Green's function corresponding to \(\mathcal{L}\), where \(W(x)=u(x)v^{\prime}(x)-u^{\prime}(x)v(x)\) is the Wronskian of \(u\) and \(v\). The hypergeometric series corresponding to the functions \(u(\tilde{x})\) and \(v(\tilde{x})\) are well-defined on the domain \(|\tilde{x}/(\tilde{x}+x^{*})<1|\). Since this holds for all \(\tilde{x}>0\), the functions are well-defined over the whole domain. The system in (25a) and (25b) that is to be solved can therefore be characterised as follows: \[\mathcal{L}[h](\tilde{x})=\begin{cases}\frac{\lambda(1-\kappa)}{r\kappa}h((1- \kappa)\tilde{x}-x^{*}\kappa),&\tilde{x}>\frac{x^{*}\kappa}{1-\kappa}\\ \frac{\lambda(1-\kappa)}{r\kappa},&\tilde{x}<\frac{x^{*}\kappa}{1-\kappa}. \end{cases} \tag{31a}\] Now, let the surplus of capital above the critical capital \(\tilde{x}\in[0,\infty)\) be separated into subintervals \(I_{j}=[\tilde{x}_{j},\tilde{x}_{j+1}]\), where \(\{\tilde{x}_{j}\}_{j\in\mathbb{N}_{0}}\) is an increasing sequence and \(\tilde{x}_{0}=0\). Define a set of kernels recursively by \[\begin{cases}g_{1}(x,s_{1})=G(x,s_{1})\\ g_{j+1}(x,s_{1},...,s_{j+1})=G(x,s_{j+1})g_{j}(l(s_{j+1}),s_{1},...,s_{j}), \end{cases}\] for \(j\geq 1\). Then, the following theorem holds, where the proposition of a solution of the type (33) is informed by the solution of (27). **Theorem 4.1**.: _Consider a household capital process defined by (5a) and (5b) in between loss events and by (21a) and (21b) at loss event times, with coverage proportionality factor \(1-\kappa\in(0,1]\). 
Assume initial capital \(x\) such that \(\tilde{x}\geq 0\), capital growth rate \(r\) and loss intensity \(\lambda>0\) such that \(\lambda/r\) satisfies (23), and remaining proportions of capital with distribution Beta\((1,1)\). Then, a solution of \(\mathcal{A}h(\tilde{x})=0\) for the infinitesimal generator \(\mathcal{A}h(\tilde{x})\) in (25a) and (25b), that satisfies \(\lim_{\tilde{x}\to 0}h(\tilde{x})=1\), is given by the piecewise function_ \[h(\tilde{x})=1+Ay_{j}(\tilde{x}),\quad x\in I_{j} \tag{33}\] _for any constant \(A\), where the functions \(y_{j}(\tilde{x})\) are defined for \(\tilde{x}\geq\tilde{x}_{j}\) and are given by the recursion:_ \[\begin{cases}y_{0}(\tilde{x})=v(\tilde{x})\\ y_{j+1}(\tilde{x})=y_{j}(\tilde{x})+c^{j+1}\int_{\tilde{x}_{j+1}}^{\tilde{x}} \int_{\tilde{x}_{j}}^{l(s_{j+1})}\cdots\int_{\tilde{x}_{1}}^{l(s_{2})}g_{j+1} (\tilde{x},s_{1},..,s_{j+1})v(l(s_{1}))ds_{1}\cdots ds_{j+1},\end{cases} \tag{34b}\] _where \(c=\lambda(1-\kappa)/r\kappa\), \(\tilde{x}_{j+1}=(\tilde{x}_{j}+x^{*}\kappa)/(1-\kappa)\) and \(l(x)=(1-\kappa)x-x^{*}\kappa\)._ Proof.: First consider the integro-differential equation for the solution in the first interval \(I_{0}=[\tilde{x}_{0},\tilde{x}_{1}]\) given in (25b), where we define \(\tilde{x}_{0}\) and \(\tilde{x}_{1}\) to be the lower and upper limits of the first interval, namely \(0\) and \(x^{*}\kappa/(1-\kappa)\), respectively. Proposing an Ansatz \(h_{p}(\tilde{x})=C\) for the particular solution yields \(C=1\), such that the general solution of \(h(\tilde{x})\) for \(\tilde{x}\in I_{0}\) is exactly \[h(\tilde{x})= C_{1}u(\tilde{x})+C_{2}v(\tilde{x})+1.\] The lower boundary condition for \(h(\tilde{x})\) in this interval: \(\lim_{\tilde{x}\to 0}h(\tilde{x})=1\), then holds if and only if \(C_{1}=0\). Letting \(A=C_{2}\) and \(y_{0}(\tilde{x})=v(\tilde{x})\), \(h(\tilde{x})=1+Ay_{0}(\tilde{x})\) for \(\tilde{x}\in I_{0}\), as required. To solve in the upper part of the infinitesimal generator IDE, i.e. for intervals \(I_{j}=[\tilde{x}_{j},\tilde{x}_{j+1}]\) where \(j\geq 1\), consider (25a). By the solution in the interval \(I_{0}\), \(h((1-\kappa)\tilde{x}-x^{*}\kappa)\) is known where \[\tilde{x}_{0}<(1-\kappa)\tilde{x}-x^{*}\kappa<\tilde{x}_{1}\iff\tilde{x}_{1}< \tilde{x}<\frac{\tilde{x}_{1}+x^{*}\kappa}{1-\kappa}.\] As such, letting \(\tilde{x}_{2}:=(\tilde{x}_{1}+x^{*}\kappa)/(1-\kappa)\), a solution for (25a) can be obtained in the interval \(I_{1}=[\tilde{x}_{1},\tilde{x}_{2}]\). In fact, for any interval \(I_{j+1}\), a solution can be determined by observing the value of the function in the previous interval, since \(h((1-\kappa)\tilde{x}-x^{*}\kappa)\) for \(\tilde{x}>\tilde{x}_{j+1}\) is known, up to a point, by the solution in \(I_{j}\). It is simple to prove by induction that the upper limit of the \(j\)-th interval is given by \[\tilde{x}_{j+1}=\frac{\tilde{x}_{j}+x^{*}\kappa}{1-\kappa}. \tag{35}\] Suppose that \(\forall\tilde{x}\in I_{j}\) for \(j\geq 1\), \(\tilde{y}_{j}(\tilde{x})=h(\tilde{x})=1+Ay_{j}(\tilde{x})\). Then, by (31a), it must hold that \[\mathcal{L}[\tilde{y}_{j+1}](\tilde{x})=\frac{\lambda(1-\kappa)}{r\kappa} \tilde{y}_{j}((1-\kappa)\tilde{x}-x^{*}\kappa)\iff\mathcal{L}[y_{j+1}](\tilde {x})=cy_{j}(l(\tilde{x})) \tag{36}\] \(\forall\tilde{x}\geq\tilde{x}_{j+1}\), denoting \(c=\lambda(1-\kappa)/r\kappa\) and \(l(x)=(1-\kappa)x-x^{*}\kappa\). It therefore remains to prove that (36) holds when \(y_{j+1}(\tilde{x})\) is given by the recursion in (34b). 
To prove by induction, consider the case \(j=0\): \[\mathcal{L}[y_{1}](\tilde{x})=\mathcal{L}\left[y_{0}(\tilde{x})+c\int_{\tilde{x }_{1}}^{\tilde{x}}G(\tilde{x},s_{1})v(l(s_{1}))ds_{1}\right].\] By definition, \(\mathcal{L}[y_{0}](\tilde{x})=0\) when \(y_{0}\) is in the solution set and \(\mathcal{L}[\int^{\tilde{x}}G(\tilde{x},s)\phi(s)ds]=\phi(x)\). As such, \[\mathcal{L}[y_{1}](\tilde{x})=cv(l(\tilde{x}))=cy_{0}(l(\tilde{x})),\] as required. Assume (36) holds for \(j=k-1\). Then, \(\mathcal{L}[y_{k}](\tilde{x})=cy_{k-1}(l(\tilde{x}))\) for \(\tilde{x}\geq\tilde{x}_{k}\). Finally, consider the case \(j=k\). By (34b), \[\mathcal{L}[y_{k+1}](\tilde{x})= cy_{k-1}(l(\tilde{x}))\] \[+c^{k+1}\mathcal{L}\left[\int_{\tilde{x}_{k+1}}^{\tilde{x}}G(\tilde{x},s _{k+1})\int_{\tilde{x}_{k}}^{l(s_{k+1})}\cdots\int_{\tilde{x}_{1}}^{l(s_{2})}g_ {k}(l(s_{k+1}),s_{1},..,s_{k})v(l(s_{1}))ds_{1}\cdots ds_{k+1}\right],\] which, by definition of the Green's function, is equivalent to \[cy_{k-1}(l(\tilde{x}))+c^{k+1}\int_{\tilde{x}_{k}}^{l(\tilde{x})}\cdots\int_{ \tilde{x}_{1}}^{l(s_{2})}g_{k}(l(\tilde{x}),s_{1},..,s_{k})v(l(s_{1}))ds_{1} \cdots ds_{k+1}=cy_{k}(l(\tilde{x})),\] as required. **Remark 4.3**.: _For \(\kappa=1\), since \(\lim_{\kappa\to 1}x^{*}\kappa/(1-\kappa)=\infty\), the upper limit of the first subinterval \(\tilde{x}_{1}=\infty\). The integro-differential equation in (25b) therefore holds over the whole domain \(\tilde{x}>0\) and the solution in Theorem 4.1 reduces to \(h(\tilde{x})=1+Av(\tilde{x})\), the solution in the first interval \(I_{0}\). In this case, the constant \(A\) can be derived analytically such that the upper boundary condition on the trapping probability: \(\lim_{\tilde{x}\to\infty}h(\tilde{x})=0\), holds. The resulting trapping probability is exactly that of the uninsured case in (16) of Corollary 3.1._ The characterisation of the trapping probability \(f(x)\) satisfying (24) in the case of uniformly distributed proportional losses will follow from Theorem 4.1 if it can be shown that a solution of the form (33) tends to zero as \(\tilde{x}\to\infty\), in line with the upper boundary condition. Specifically, we define the piecewise function \[y(x)=y_{j}(x-x^{*}),\quad x-x^{*}\in I_{j} \tag{37}\] with \(y_{j}\) and \(I_{j}\) as in Theorem 4.1, and pose the following: **Conjecture 4.1**.: _The limit \(L:=\lim_{x\to\infty}y(x)\) exists and is different than zero._ If Conjecture 4.1 holds, then (33) yields that \[f(x)=1+Ay(x),\quad A=-\frac{1}{L} \tag{38}\] is the unique solution to \(\mathcal{A}f(x)=0\), \(f(x^{*})=1\), and \(\lim_{x\to\infty}f(x)=0\), as desired. Numerical computation of \(y(x)\) in (37) for large \(x\) is not a trivial matter, as the functions \(v\) and \(G\) in (29) and (30), respectively, are highly oscillatory for large values of \(\tilde{x}\). Nevertheless, our numerical experiments appear to indicate that Conjecture 4.1 holds. Moreover, if Conjecture 4.1 is assumed to hold, there exists a practical method for estimating the true value of \(A\) in (33) and for obtaining a very good approximation to \(f(x)\). Note that, by (34a) \[f(x)=1+Av(x+x^{*}),\quad x\in[x^{*},x^{*}/(1-\kappa)] \tag{39}\] which is easily computed for any value of \(A\). In addition, the process \((X_{t})_{t\geq 0}\) can be simulated to obtain estimates of the trapping probability for any initial capital \(x\in[x^{*},x^{*}/(1-\kappa)]\). As such, an estimate \(\hat{A}\) for the conjectured value of \(A\) can be estimated by fitting \(f(x)\) to the simulated data. 
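To make the estimation step concrete, the following is a minimal numerical sketch (not the code used for the figures in this paper) of evaluating the fundamental solution \(v\) in (29) and estimating \(A\) by least squares from simulated trapping frequencies. The parameter values and the "simulated" data points are illustrative placeholders, chosen so that \(a_{1}\) is real, and the argument of \(v\) is taken to be the surplus \(\tilde{x}\), as in the first-interval solution of Theorem 4.1.

```python
import numpy as np
from mpmath import hyp2f1, mp, sqrt as msqrt

# Illustrative sketch only: evaluate v(x~) from (29) and estimate A by least
# squares, as in the fitting procedure described above. All parameter values
# are placeholders; they are chosen so that a1 is real, but mpmath also
# handles complex hypergeometric parameters when a1 is complex.
mp.dps = 30
lam, r, kappa, xstar = 0.15, 0.5, 0.8, 1.0      # assumed illustrative values
lr = lam / r
a1 = 0.5 * (1 - lr) + 0.5 * msqrt((1 + lr)**2 - 4 * lr / kappa)

def v(xt):
    """Fundamental solution v of (29), evaluated at the surplus xt >= 0."""
    u = xt / (xt + xstar)                        # argument of 2F1, in (0, 1)
    pref = u**lr * (1 + xt / xstar)**(-a1)
    return float(pref * hyp2f1(lr + a1, lr + a1, 1 + lr, u))

# Placeholder "simulated" trapping frequencies p_hat at surpluses xt_i in I_0.
xt_pts = np.array([0.5, 1.0, 2.0, 3.0])
p_hat = np.array([0.55, 0.35, 0.18, 0.10])       # hypothetical Monte Carlo output

# Least-squares estimate of A in h = 1 + A v (closed form for a single slope).
v_pts = np.array([v(x) for x in xt_pts])
A_hat = np.sum(v_pts * (p_hat - 1.0)) / np.sum(v_pts**2)
trap_prob = lambda xt: 1.0 + A_hat * v(xt)       # estimate valid on I_0
```

With real simulation output in place of the placeholder points, the same one-parameter fit yields the estimate \(\hat{A}\) used below.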
A comparison between the trapping probability estimated via \(f(x)\) in (38) and simulated data is presented in Figure 4 for a given set of parameters. The trapping probability for proportionally insured households susceptible to proportional losses with \(\text{Beta}(1,1)\) distributed remaining proportions of capital, estimated via (38), is presented in Figure 5a for varying initial capital \(x\) and proportionality factor \(\kappa\). For small values of \(\kappa\) and at higher subintervals, calculation of the trapping probability is highly computationally intensive. In Figures 5a and 5b, trapping probabilities are estimated for the first four subintervals, i.e. \(I_{j}\) for \(0\leq j\leq 3\). The limits of \(I_{j}\) in (35) are functions of \(\kappa\). As such, changing the value of \(\kappa\) causes the trapping probability curves to terminate at different points, determined by the upper limit of \(I_{3}\), as can be observed in Figure 5a. Note that, in Figure 5a, as \(\kappa\) tends to zero the trapping probability tends towards a step function. This is indicative of the fact that for \(\kappa=0\) households have full insurance coverage and do not experience loss events, inducing a trapping probability that is zero-valued for all levels of capital above the critical capital due to the restriction on the premium that ensures positive capital growth. Increasing \(\kappa\) and thus decreasing the level of insurance coverage intuitively causes an increase in the trapping probability. Figure 5b presents the same trapping probability for varying loss frequency \(\lambda\) and fixed \(\kappa\), where half of every loss is insured. Increasing the frequency of loss events increases the trapping probability. For \(\lambda=0.5\), under the parameter set considered in this figure, \(\lambda/r\) is extremely close to one. Therefore, in the case of no insurance, households exhibiting this loss behaviour would be close to certain ruin. As presented in Figure 3, purchase of insurance eases this constraint, significantly reducing the probability of trapping. The fact that both figures presenting the estimated trapping probability are intuitive, provides further evidence for Conjecture 4.1. Figure 4: Comparison between the trapping probability estimated via \(f(x)\) in (38) and simulations of the capital process \(X_{t}\). Each simulation point is obtained from an ensemble of 2000 realisations of \(\{X_{t}:0\leq t\leq 500\}\) for different values of the initial capital \(X_{0}=x\). The vertical lines mark the subintervals \(x^{*}+I_{j}\), \(0\leq j\leq 3\) used in the construction of \(y\) in Theorem 4.1. The estimate \(\hat{A}=-3.556\) is obtained by fitting \(1+Av(x+x^{*})\) to the simulated data for \(x\) in the first subinterval, as shown in the inset. Parameters used are \(\lambda=1\) and \(\kappa=0.3\). The values of \(r(\kappa,\lambda,\theta)\) and \(x^{*}(\kappa,\lambda,\theta)\) are computed via (20) with \(a=0.1\), \(b=1.4\), \(c=0.4\), \(x^{*}=1\) and \(\theta=0.5\). ## 5 Discussion Figure 6 presents a comparison of trapping probabilities for the uninsured and insured capital processes as derived in (12) and (38), respectively, for two values of the parameter \(\lambda\). For \(\lambda=0.25\), the insured trapping probability lies below the uninsured at almost all levels of initial capital, decaying at a much faster rate. Only for initial capital extremely close to the critical capital does the uninsured probability lie below the insured. 
At the higher loss frequency of \(\lambda=0.5\), the uninsured trapping probability lies close to 1 throughout the range of initial capital considered, significantly higher than the equivalent probability for insured losses at all capital levels. Note that in this case, \(\lambda/r\) lies close to the uninsured constraint preventing certain trapping in (11). Sensitivity analysis on the trapping probabilities in (12) and (38) is presented in Figure 7 for low levels of initial capital and varying \(\kappa\) and \(\lambda\). Specifically, trapping probabilities for households with capital between \(x=x^{*}\), the uninsured poverty line, and \(x=x^{*}/(1-\kappa)\), the upper limit of the first subinterval \(I_{0}\), corresponding to the trapping probability in \(I_{0}\) given in (39), are presented. At this more granular level, the intersection point of the curves can be observed more clearly. This intersection point indicates when proportional insurance coverage is beneficial for reducing poverty trapping. In the estimation of the insured trapping probability, the increase in critical capital associated with the need for premium payment is accounted for through specification of \(x^{*}(\kappa,\lambda,\theta)\), where an insured household is deemed to be trapped when their capital falls below \(I^{*}/(b-\pi(\kappa,\lambda,\theta))\), where the critical income \(I^{*}=b\) under the assumption of no change in the basic model parameters due to the purchase of insurance. Thus, in the insured case, households with initial capital slightly above \(x^{*}\) have already become trapped. As in Kovacevic and Pflug (2011) and Flores-Contro et al. (2022) the increase in the trapping probabilities of the most vulnerable households when proportionally insured is observed in all cases considered. However, importantly, this increase occurs for a much smaller proportion of the low-income sample. Denoting the intersection point of the uninsured and insured trapping probabilities by \(x_{c}\), the significance of the distance between the intersection point and the critical capital \(x^{*}\) is presented in Figure 8 for varying \(\kappa\) and \(\lambda\). Considering three levels of the loading factor \(\theta\), the distance is positive under all sets of parameters tested. The depiction of \(x_{c}-x^{*}\) in this figure highlights that the level of capital at which insurance becomes beneficial lies much closer to the poverty line than for more extreme (Kovacevic and Pflug, 2011) and random-valued losses (Flores-Contro et al., 2022), with only small distances between the intersection point and the critical capital observed. These results suggest that purchase of proportional insurance for proportional losses is beneficial for a larger proportion of those closest to the poverty line. In particular, proportional coverage appears to be more affordable than classical coverage for random-valued losses. Our consideration of a poverty line that varies with the level of insurance coverage accounts for the fact that premium payments limit a household's level of capital. We therefore consider "extreme poverty" at Figure 5: Estimation of the trapping probability \(f(x)\) via (38) assuming Conjecture 4.1 for (a) \(\lambda=1\) for different values of \(\kappa\) and (b) \(\kappa=0.5\) for different values of \(\lambda\). Each curve is computed with the first three iterates of (34b) via numerical integration, the value of \(A\) is then estimated as explained in Figure 4. 
For each case, the values of \(r(\kappa,\lambda,\theta)\) and \(x^{*}(\kappa,\lambda,\theta)\) are computed via (20) with \(a=0.1\), \(b=1.4\), \(c=0.4\), \(x^{*}=1\) and \(\theta=0.5\) and \(\lambda\) is selected such that (23) holds. Figure 6: Comparison between the trapping probabilities of uninsured and insured households for \(\kappa=0.5\) and two different values of \(\lambda\). Solid curves are computed via (38) assuming Conjecture 4.1 and dashed curves via (12). For each case, the values of \(r(\kappa,\lambda,\theta)\) and \(x^{*}(\kappa,\lambda,\theta)\) are computed via (20) with \(a=0.1\), \(b=1.4\), \(c=0.4\), \(x^{*}=1\) and \(\theta=0.5\). Recall that for uninsured losses, by (11) it must hold that \(\lambda/r<1\). Figure 7: Comparison of the trapping probabilities of uninsured and insured households for small values of initial capital, \(x\in[1,x^{*}/(1-\kappa)]\) and different values of \(\kappa\) and \(\lambda\), showing the existence of a level \(x_{c}>x^{*}\) such that for \(1<x<x_{c}\) it is better for households not to insure. Solid curves are computed as in Figure 5b and dashed curves using expression (12). For each case, the values of \(r(\kappa,\lambda,\theta)\) and \(x^{*}(\kappa,\lambda,\theta)\) are computed via (20) with \(a=0.1\), \(b=1.4\), \(c=0.4\), \(x^{*}=1\) and \(\theta=0.5\). an individualised level. In Kovacevic and Pflug (2011) and Flores-Contro et al. (2022) the uninsured trapping probability is instead compared with the insured trapping probability for a fixed critical capital \(x^{*}\), irrespective of the parameters \(\kappa\), \(\lambda\) and \(\theta\). Such a specification could be used to consider trapping with respect to an international poverty line, which is fixed for all households. Under this alternative assumption, the trapping probability under proportional insurance coverage of Section 4 lies below the uninsured probability of Section 3 at all capital levels. In this case, the purchase of insurance therefore does not increase the probability of trapping for any household above the poverty line. Mathematical differences between the uninsured and insured capital processes and the associated parameter constraints may also provide indications of the impact of insurance. In Figure 3, the constraint that ensures existence of the Lundberg equation is presented. For uninsured losses with uniformly distributed remaining proportions of capital (\(Z_{i}\sim\text{Beta}(1,1)\)), by (11), an equivalent figure would display a horizontal line at \(\lambda=r\). For the case considered in Figure 3, \(r=0.504\). As such, for all levels of \(\theta\), there exists a region in which the uninsured constraint in (11) is violated, while the insured constraint in (23) is not. This indicates that for households without insurance, the Lundberg equation fails to be well-defined in more cases. Increasing the level of insurance coverage therefore increases the loss frequency for which the net profit condition is satisfied. As a result, certain trapping is avoided in more cases. Due to the increasing complexity of (34b) the constant \(A\) appears in an increasingly convoluted manner throughout the subintervals \(I_{j}\). As we move through \(I_{j}\) for increasing \(\tilde{x}\), estimation of the trapping probability under proportional insurance coverage becomes computationally intensive, particularly for small values of \(\kappa\). 
However, analysis of the algebraic decay of the trapping probability can provide further insight into the behaviour of the function at high capital levels. Solution of the transcendental equation \[r\gamma-\lambda+\frac{\lambda\alpha}{\kappa}\int_{1-\kappa}^{1}y^{\gamma}\left(1-\frac{1-y}{\kappa}\right)^{\alpha-1}dy=0, \tag{40}\] derived from \(\mathcal{A}f(x)=0\) for \(\mathcal{A}f(x)\) in (24) for \(Z_{i}\sim\)Beta\((\alpha,1)\) under the assumption of polynomial asymptotic decay to zero at infinity: \(f(x)\sim(x-x^{*})^{\gamma}\) as \(x\to\infty\) for constant \(\gamma\), highlights that for Beta\((1,1)\) distributed remaining proportions of capital, as in Section 4, as \(\kappa\) increases and households retain a higher risk level, the trapping probability decays more slowly as initial capital \(x\) approaches infinity. The same observation holds, though less markedly, for fixed \(\kappa\) and decreasing \(\lambda/r\). Solution of the transcendental equation in (40) for \(\alpha>0\) and \(\kappa=1\) yields that the trapping probability decays only if \(\lambda/r<\alpha\), providing exactly the Lundberg condition in the case of no insurance coverage. Figure 8: Estimated distance between \(x_{c}\) and \(x^{*}\), i.e., \(x_{c}-x^{*}\), for different values of \(\lambda\) and \(\kappa\) for which \(\lambda/r<1\) and (a) \(\theta=0.1\), (b) \(\theta=0.5\) and (c) \(\theta=0.9\), where \(x_{c}\) is the intersection point of the uninsured and insured trapping probabilities. For each case, the values of \(r(\kappa,\lambda,\theta)\) and \(x^{*}(\kappa,\lambda,\theta)\) are computed via (20) with \(a=0.1\), \(b=1.4\), \(c=0.4\) and \(x^{*}=1\). ## 6 Concluding remarks We have considered an adjustment of the capital process of Flores-Contro et al. (2022) in which low-income households are susceptible to losses proportional to their accumulated capital level, as in Kovacevic and Pflug (2011). Under the assumption of proportional losses we capture the exposure of households of all capital levels to both catastrophic and low-severity loss events, a feature particularly significant in the low-income setting. Typically considered to be protected from capital losses, households with higher levels of capital are still susceptible to large proportional losses on the occurrence of extreme events, particularly in agriculturally rich areas. In addition to high-severity loss events, low-income households closest to the poverty line experience large proportional losses due to events typically considered less severe in the high-income setting, such as hospital admissions and household deaths. Focusing on the probability that a household falls below the poverty line, referred to as the trapping probability, in the analysis of this paper we have solved, for the first time analytically, infinitesimal generator equations associated with a capital process with exponential growth and multiplicative jumps. We have considered two cases: (i) households with no insurance coverage and (ii) households with proportional insurance coverage. In both cases, closed-form solutions of the infinitesimal generator equations associated with the trapping probability were derived alongside constraints on the parameters of the model that prevent certain trapping. Through the derivation of these probabilities we provide insights into the impact of proportional insurance for proportional losses. Comparison between the proportional assumption of this paper and the random-valued assumption of Flores-Contro et al.
(2022) was additionally presented. For households with no insurance coverage, explicit trapping probabilities for \(\mathrm{Beta}(\alpha,1)\) distributed remaining proportions of capital were obtained using Laplace transform methods. In comparison to the corresponding trapping probability for random-valued losses, the proportional trapping probability exhibits a slower rate of decay, in line with the non-zero probability of high-income households losing a large proportion of their wealth. Consideration of proportional insurance coverage requires redefinition of the infinitesimal generator of the process. Even under the assumption of uniformly distributed remaining proportions of capital the structure of the proportional insurance product induces non-local functional terms in the derivative and Laplace transform of the infinitesimal generator. Classical methods for solving the infinitesimal generator to derive the trapping probability were therefore not applicable. To overcome this, we propose a recursive method for deriving a solution of the IDE and estimate the unique solution numerically through the conjecture of the existence of a limit. Although only analytic up to a constant, the estimated trapping probability performs well when compared with simulations of the capital process and provides intuitive results under sensitivity analysis. Future work will involve deriving a mathematical proof that this conjecture holds. Comparing trapping probabilities under no insurance coverage and proportional insurance coverage suggests that the increase in trapping probability observed under random-valued losses is less severe in this proportional case. This finding is in contrast to that of Kovacevic and Pflug (2011), where an increase in trapping probability similar to that of Flores-Contro et al. (2022) is observed under the same proportional model. However, this result is likely highly dependent on the specification of parameters. It should be noted that the distribution of the remaining proportion of capital considered in the numerical example of Kovacevic and Pflug (2011) is such that losses have an expected value of 88%, an extremely high proportion given a loss frequency parameter of 1. In turn, the associated premium rates are high and will constrain capital growth more significantly. The lower rate associated with the distribution selected for presentation in the analysis of this paper captures losses of varying severity, as is the experience of a low-income population, and will necessitate reduced premiums. Furthermore, when considering a critical capital that is fixed as in Kovacevic and Pflug (2011), irrespective of a household's insured status, the increase in trapping probability associated with purchase of insurance is not observed at any level of capital. Ultimately, the findings of this paper suggest that insurance for proportional losses is more affordable than coverage for losses of random value. This aligns with the idea that premiums are normalised to wealth under the proportional loss structure, thus improving the variability in the affordability of premiums characteristic of insurance for random-valued losses. As such, if the assumption of proportionality is correct, in the context of subsidisation, the proportion of the low-income population requiring full government support may be narrower than anticipated. Under consideration of a universal poverty line, such as the international poverty line, insurance is beneficial at all capital levels. 
However, when considering the impact of insurance at a more granular level, where the critical level increases with increasing coverage, insurance and the associated need for premium payments increase the probability of falling below the poverty line for those with capital just above the critical capital, in line with the findings of existing studies. ## Funding The work of K.H. and C.C. was supported by the Engineering and Physical Sciences Research Council (EPSRC) [grant number EP/W522399/1]; and the EPSRC and ESRC Centre for Doctoral Training on Quantification and Management of Risk and Uncertainty in Complex Systems Environments [grant number EP/L015927/1]. ## Acknowledgements E.T. thanks the colleagues in the Department of Mathematical Sciences at the University of Liverpool for their hospitality during his visit in Fall 2023, when part of this work was conducted.
2301.00282
Strongly correlated physics in organic open-shell quantum systems
Strongly correlated physics arises due to electron-electron scattering within partially-filled orbitals, and in this perspective, organic molecules in open-shell configuration are good candidates to exhibit many-body effects. With a focus on neutral organic radicals with a molecular orbital hosting a single unpaired electron (SOMO) we investigate many-body effects on electron transport in a single-molecule junction setup. Within a combination of density functional theory and many-body techniques, we perform numerical simulations for an effective model for which all the parameters, including the Coulomb tensor, are derived ab-initio. We demonstrate that the SOMO resonance is prone towards splitting, and identify a giant electronic scattering rate as the driving many-body mechanism, akin to a Mott metal-to-insulator transition. The nature of the splitting, and thus of the resulting gap, as well as the spatial distribution of the SOMO and its coupling to the electrodes, have dramatic effects on the transport properties of the junction. We argue that the phenomenon and the underlying microscopic mechanism are general, and apply to a wide family of open-shell molecular systems.
G. Gandus, D. Passerone, R. Stadler, M. Luisier, A. Valli
2022-12-31T20:37:46Z
http://arxiv.org/abs/2301.00282v1
# Strongly correlated physics in organic open-shell quantum systems ###### Abstract Strongly correlated physics arises due to electron-electron scattering within partially-filled orbitals, and in this perspective, organic molecules in open-shell configuration are good candidates to exhibit many-body effects. With a focus on neutral organic radicals with a molecular orbital hosting a single unpaired electron (SOMO) we investigate many-body effects on electron transport in a single-molecule junction setup. Within a combination of density functional theory and many-body techniques, we perform numerical simulations for an effective model for which all the parameters, including the Coulomb tensor, are derived _ab-initio_. We demonstrate that the SOMO resonance is prone towards splitting, and identify a _giant_ electronic scattering rate as the driving many-body mechanism, akin to a Mott metal-to-insulator transition. The nature of the splitting, and thus of the resulting gap, as well as the spatial distribution of the SOMO and its coupling to the electrodes, have dramatic effects on the transport properties of the junction. We argue that the phenomenon and the underlying microscopic mechanism are general, and apply to a wide family of open-shell molecular systems. ## I Introduction Strongly correlated electronic physics arises in partially occupied orbitals in the presence of competing energy scales. Due to the Coulomb repulsion, electrons display a collective behavior, leading to the breakdown of the single-particle picture and the emergence of complex quantum phenomena. Electronic correlations are also enhanced due to spatial confinement effects in low-dimensional and nanoscopic systems. While in solid-state physics the concept of a "strongly-correlated metal" is well-established, its analog for molecules is not obvious. In chemistry, the majority of stable organic molecules have closed-shell electronic configurations, and electrons are paired in delocalized molecular orbitals (MOs) that are either completely filled or empty. The energy difference between the frontier MOs, i.e., the highest occupied (HOMO) and the lowest unoccupied (LUMO) orbitals defines the spectral gap. In particular, \(\pi\)-conjugated systems display a wide HOMO-LUMO gap (\(\Delta\sim\) eV) which is controlled by the overlap of neighboring p\({}_{z}\) orbitals. A molecular system in an open-shell configuration (radical) is characterized by unpaired valence electrons residing in non-bonding singly-occupied MOs (SOMOs) found at intermediate energies between HOMO and LUMO. Radicals can form by breaking bonds or by adding/removing electrons (e.g., in photoinduced processes) and are intermediate products of chemical reactions. While open-shell configurations are typically associated with high chemical reactivity, there exist also species of relatively stable radicals, which possess interesting electronic, magnetic, and optical functionalities that are relevant to technological applications ranging from next-generation spintronics to quantum information [1; 2; 3]. Tremendous advances in the synthesis and characterization of organic radicals triggered recent experimental studies with organic species that are stable enough to be trapped in break-junctions [4; 5] or investigated with scanning tunneling spectroscopy [6; 7; 8; 9], which fueled a revival of interest in the molecular Kondo effect [4; 6; 7; 8; 9; 10; 11; 12]. 
There is a growing experimental and theoretical effort to unravel how many-body effects can dramatically influence electronic and transport properties in light of technological applications. In the context of molecular electronics, noteworthy organic radicals include triphenylmethyl [4; 5; 12], Blatter radical [13], polyacetylene [14; 15], benzyl [16; 17], together with the whole family of polycyclic hydrocarbons with non-Kekule structure [7; 18; 19; 20]. Molecular organic frameworks with transition-metal centers (e.g., iron-porphyrin) are also typically open-shell, and have been recently suggested as molecular transistors [21; 22]. From the theoretical point of view, in wide-gap semiconductors, the electron-electron scattering rate is low due to the lack of electronic states at the Fermi energy. The accuracy of _ab-inito_ prediction of the gap is a long-standing issue [23], and numerical simulations for insulators [24; 25] and molecules [25; 26; 27; 28; 29; 30; 31; 32] predict a many-body renormalization of the spectral gap. However, these effects do not change qualitatively the transport properties. In open-shell configurations instead, it can be expected that electron-electron scattering within the partially filled SOMO and many-body effects have a prominent role. In computational quantum chemistry, it is well-established that open-shell molecular configurations require careful treatment (see, e.g., [33] for an overview) but the accuracy of quantum chemical methods comes at a high numerical cost. Hence, we recently witnessed significant advances in developing alternative simulation schemes, that are suitable to describe complex devices relevant to molecular electronics [34; 35; 11]. In the endeavor to achieve predictive power and allow for a quantitative comparison with experiments, a suitable method should be _high-throughput_ -- i.e., scalable and automatized as much as possible, and able to describe a realistic chemical environment and many-body correlations within an _ab-initio_ framework. This would allow a cooperative effort between theory and experiments, and pave the path to future breakthroughs for next-generation quantum technologies. ## II SCOPE OF THIS WORK The scope of this work is to investigate the emergence of strongly correlated electron physics in the electronic and transport properties of single-molecule junctions. To this end, we have developed a comprehensive numerical workflow that combines density functional theory (DFT) with quantum field theoretical methods, and it is able to address the complexity of a realistic chemical environment as well as electronic correlation effects beyond the single-particle picture within an _ab-initio_ framework. With both aspects taken into account, we are able to unravel the origin of many-body transport effects in single-molecule junctions. The art of combining _ab-initio_ and many-body computational schemes lies in a transformation from non-orthogonal atomic orbitals (AOs) to recently introduced local orbitals (LOs) [36]. The LOs are by construction orthogonal within the same atom and localized in space. They take over the symmetries of the original AOs, while inheriting the information of the environment. This allows to represent the electronic wavefunction in a region of the spectrum close to the Fermi energy with a minimal set of orbitals, making them an ideal basis for many-body calculations. So far, LOs have been employed in the context of DFT [36]. 
In what follows, we also evaluate the Coulomb integrals that describe the electron-electron repulsion in the LO basis, and thus map the original Hamiltonian onto an effective many-body problem, which we can feasibly solve with appropriate numerical methods. This recipe is particularly suitable to address strong correlation effects in the transport properties of molecular junctions. In terms of applications, we focus on molecular break-junctions in which the central molecule bridging the electrodes is in an open-shell configuration, making them strong candidates to manifest many-body effects. Specifically, we select a linear and a cyclic molecular bridge, i.e., a polyene radical, and a benzene molecule substituted with a methylene (CH\({}_{2}\)) radical group. While both molecules are \(\pi\)-radicals with one electron in the SOMO, we show that many-body effects bring out profound differences. We identify the fingerprint of strong electronic correlations in the splitting of the SOMO resonance. The details of the splitting and the spatial distribution of the SOMO on the molecular backbone have dramatic consequences on the transport properties of the junction. Finally, we demonstrate that such a splitting cannot be obtained with less sophisticated techniques, such as many-body perturbation theory. We argue that this phenomenon and the underlying microscopic mechanism are general, and apply to a wide family of open-shell molecular systems. ## III Methods ### Local orbitals and low-energy models The LOs method [36] is a transformation-based approach that aims at retrieving hydrogen-like orbitals for atoms in molecules and solids. By construction, LOs are locally orthogonal on each atom. The starting point is a DFT calculation in an AOs basis set. The Hilbert space \(H\) is then spanned by a finite set of non-orthogonal orbitals \(\{\left|i\right\rangle\}\), i.e., with an overlap matrix \(\langle i|j\rangle=(\mathbf{S})_{ij}\neq\delta_{ij}\) for \(\left|i\right\rangle,\left|j\right\rangle\in H\). A set of LOs \(\{\left|m\right\rangle\}\in M\subseteq H\) can be obtained for any atom \(\alpha\) in subspace \(M\) by a subdiagonalization of the corresponding Hamiltonian sub-block \[\mathbf{H}_{\alpha}\left|m\right\rangle=\epsilon_{m}\mathbf{S}_{\alpha}\left|m\right\rangle \tag{1}\] The LOs are then linear combinations of AOs and are by definition orthogonal on each atom. This allows for a more natural physical interpretation of the LOs as atomic orbitals [36]. In order to obtain an _ab-initio_ effective model, we formally separate the Hilbert space into an active space (A) and an environment (E). The active space consists of a subset of LOs \(\{\left|a\right\rangle\}=A\subseteq M\) which are expected to describe the relevant physics close to the Fermi energy, and at the same time can be efficiently treated within quantum many-body techniques. Instead, the environment consists of all the remaining LOs and AOs, i.e., \(\{\left|e\right\rangle\}\in E\equiv H\setminus A\). Embedding the active space into the environment ensures that the effective model preserves all information of the original single-particle DFT Hamiltonian [36]. Finally, it is convenient to perform a Lowdin orthogonalization [37] of the LO \(\{\left|a\right\rangle\}\) states and redefine the \(A\) subspace in terms of this new orthonormal basis set with elements \[\left|a^{\perp}\right\rangle=\sum_{a}(\mathbf{S}^{-1/2})_{aa^{\perp}}\left|a\right\rangle.
\tag{2}\] Since the overlap between LOs on different atoms is typically low, i.e., \((\mathbf{S})_{ij}\ll 1\), the Lowdin orthonormalization of the active space results only in a weak deformation of the original LOs, which preserves their atomic-like symmetry. In practice, the LO low-energy model is constructed embedding the active subspace into the environment through a downfolding procedure [38; 39]. Taking into account the non-orthogonality between the \(A\) and \(E\) subspaces [34], we write the Green's function projected onto the \(A\) subspace as \[\mathbf{G}_{A}(z)=\mathbf{S}_{A}^{-1}\mathbf{S}_{AH}\mathbf{G}_{H}(z)\mathbf{S }_{HA}\mathbf{S}_{A}^{-1}, \tag{3}\] where \(z=E+i\eta\) is a complex energy with an infinitesimal shift \(\eta\to 0^{+}\). \(\mathbf{G}_{H}\) denotes the Green's function of the full Hilbert space, and \(\mathbf{S}_{AH}\) the overlap matrix between orbitals \(\left|a^{\perp}\right\rangle\in A\) and orbitals \(\left|i\right\rangle\in H\), while the overlap \(\mathbf{S}_{A}\) between the \(\left|a^{\perp}\right\rangle\) states is, by construction, the identity matrix and will be omitted in what follows for notational simplicity. The effect of the environment on the \(A\) subspace is described by the hybridization function \[\mathbf{\Delta}_{A}(z)=\mathbf{g}_{A}^{-1}(z)-\mathbf{G}_{A}(z)^{-1}, \tag{4}\] where \[\mathbf{g}_{A}=\left[z-\mathbf{H}_{A}\right]^{-1} \tag{5}\] is Green's function of the isolated \(A\) subspace. Rewriting \(\mathbf{G}_{A}\) in terms of \(\mathbf{\Delta}_{A}\) and using the definition of \(\mathbf{g}_{A}\) yields \[\mathbf{G}_{A}(z)=\left[z-\mathbf{H}_{A}-\mathbf{\Delta}_{A}(z)\right]^{-1}. \tag{6}\] Then, \(\mathbf{G}_{A}\) can be seen as the resolvent of an effective \(A\) subspace renormalized by the environment through a dynamical hybridization. The Green's function describes the physics of the whole system, projected onto a subspace. For a single-particle Hamiltonian, the partition above is arbitrary, and the procedure remains valid independently of the subset of LOs included in the active space. In the context of \(\pi\)-conjugated organic molecules, the projection onto a single p\({}_{z}\) LO per C atom (and possibly other species such as N or S) is usually sufficient to achieve a faithful representation of the frontier MOs, and hence suitable to describe the physics close to the Fermi energy [36]. The possibility of considering a restricted subset of LOs in the effective model is of pivotal importance in view of performing computationally-heavy many-body simulations. ### _cRPA and ab-initio_ Coulomb parameters In order to derive the electronic interaction parameters in the \(A\) subspace beyond the semi-local density approximations, we employ the constrained Random Phase Approximation (cRPA) [40; 34; 41]. Within the cRPA, we select a region \(R\supset A\) where the formation of electron-hole pairs is expected to screen the Coulomb interaction between the \(A\) electrons. Because of the strong local nature of the LOs, it is sufficient that \(R\) comprises the \(A\) subspace and few atoms nearby. Defining \(\mathbf{G}_{R}\) to be the Green's function projected onto the \(R\) subspace in analogy with Eq. 
(3), the screened Coulomb interaction at the RPA level is given by \[\mathbf{W}_{R}=\left[\mathbf{I}-\mathbf{V}_{R}\mathbf{P}_{R}\right]^{-1}\mathbf{V}_{R}, \tag{7}\] where \(\mathbf{V}_{R}\) is the bare Coulomb interaction \[(\mathbf{V}_{R})_{ij,kl}=\int\!\!dr\int\!\!dr^{\prime}\psi_{i}\left(r\right)\psi_{j}^{*}(r)\frac{e^{2}}{|r-r^{\prime}|}\psi_{k}^{*}(r^{\prime})\psi_{l}\left(r^{\prime}\right)\!, \tag{8}\] where \(\psi_{i}(r)\) are the orbitals in the \(R\) region, and \(\mathbf{P}_{R}\) is the static component of the polarizability \[(\mathbf{P}_{R})_{ij,kl}=-2i\int\frac{dz^{\prime}}{2\pi}\mathbf{G}_{ik}(-z^{\prime})\mathbf{G}_{lj}(z^{\prime}). \tag{9}\] The projection of \(\mathbf{W}_{R}\) onto the \(A\) subspace then yields the static screened interaction \(\mathbf{W}_{A}\). Since we aim at performing many-body simulations of the effective model, we need to partially unscreen the Coulomb parameters, eliminating from \(\mathbf{W}_{A}\) the screening channels arising from \(A\)-\(A\) transitions included in \(\mathbf{P}_{R}\), which will be treated at a more sophisticated level of theory. This can be done according to the following prescription \[\mathbf{U}_{A}=\mathbf{W}_{A}\big{[}\mathbf{I}+\mathbf{P}_{A}\mathbf{W}_{A}\big{]}^{-1}, \tag{10}\] using the polarization \(\mathbf{P}_{A}\) of the \(A\) electrons obtained from \(\mathbf{G}_{A}\) similarly to Eq. (9). The matrix elements in \(\mathbf{U}_{A}\) can therefore be regarded as the effective (partially screened) Coulomb parameters. ### Solutions of the low-energy models The Green's function of Eq. (6), together with the interaction parameters of Eq. (10), defines a low-energy model which can be solved with many-body techniques. Here, we propose two somewhat complementary strategies, i.e., exact diagonalization (ED) and the dynamical mean-field theory (DMFT) [42] as implemented within its real-space generalization (R-DMFT) for inhomogeneous systems [43; 44; 45; 46; 47]. #### ii.2.1 Exact diagonalization The ED technique requires a Hamiltonian formulation of the effective model. If the states of the active and embedding subspaces are energetically well-separated, it is possible to neglect the dynamical character of the hybridization function and construct an effective Hamiltonian as \[\mathbf{H}_{A}^{\rm eff}=\mathbf{H}_{A}+\mathbf{\Delta}_{A}(z=0). \tag{11}\] Including the screened Coulomb interaction, the model Hamiltonian then reads \[\begin{split} H&=\sum_{ij,\sigma}\left(\mathbf{H}_{A}^{\mathrm{eff}}-\mathbf{H}_{A}^{\mathrm{dc}}\right)_{ij}c_{i\sigma}^{\dagger}c_{j\sigma}\\ &+\frac{1}{2}\sum_{ijkl,\sigma\sigma^{\prime}}\left(\mathbf{U}_{A}\right)_{ij,kl}c_{j\sigma}^{\dagger}c_{k\sigma^{\prime}}^{\dagger}c_{l\sigma^{\prime}}c_{i\sigma},\end{split} \tag{12}\] where \(c_{i\sigma}^{(\dagger)}\) denote the annihilation (creation) operators of an electron at LO \(i\) with spin \(\sigma\), and the double-counting correction \(\mathbf{H}_{A}^{\mathrm{dc}}\) accounts for the interaction already included at the mean-field level by DFT (see Sec. III.4). The diagonalization of this Hamiltonian yields the many-body spectrum (eigenstates and eigenvalues) which can be used to construct the Green's function \(\mathbf{G}_{A}^{\mathrm{ED}}\) through its Lehmann representation [48].
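As a concrete illustration of this ED step, the minimal sketch below (illustrative only, with placeholder parameters \(\epsilon\) and \(U\) and a single correlated orbital rather than the full active space of Eq. (12)) diagonalizes the Hamiltonian in Fock space and assembles the zero-temperature retarded Green's function from a Lehmann sum; the single resonance splits into two features separated by roughly \(U\), the behavior discussed below for the SOMO.

```python
import numpy as np

# Minimal ED sketch (illustrative, not the production implementation): one
# correlated orbital (two spin-orbitals) with placeholder parameters eps and U.
# The retarded Green's function is built from a Lehmann representation in the
# spirit of the procedure described for G_A^ED.

def annihilation_ops(n_so):
    """Fermionic annihilation operators via a Jordan-Wigner-like string."""
    a = np.array([[0.0, 1.0], [0.0, 0.0]])
    z = np.diag([1.0, -1.0])
    ops = []
    for j in range(n_so):
        mats = [z] * j + [a] + [np.eye(2)] * (n_so - j - 1)
        op = mats[0]
        for m in mats[1:]:
            op = np.kron(op, m)
        ops.append(op)
    return ops

c_up, c_dn = annihilation_ops(2)
n_up, n_dn = c_up.T @ c_up, c_dn.T @ c_dn

U = 4.0                        # placeholder Coulomb repulsion
eps = -U / 2                   # placeholder level position (particle-hole symmetric)
H = eps * (n_up + n_dn) + U * (n_up @ n_dn)

E, V = np.linalg.eigh(H)
deg = np.where(np.abs(E - E[0]) < 1e-12)[0]    # (possibly degenerate) ground states

def green(z, c_op):
    """Retarded G(z) at T=0, averaged over degenerate ground states."""
    G = np.zeros_like(z, dtype=complex)
    for g in deg:
        gs = V[:, g]
        for m in range(len(E)):
            w_add = abs(V[:, m] @ (c_op.T @ gs))**2   # |<m|c^dag|gs>|^2
            w_rem = abs(V[:, m] @ (c_op @ gs))**2     # |<m|c|gs>|^2
            G += (w_add / (z - (E[m] - E[0]))
                  + w_rem / (z + (E[m] - E[0]))) / len(deg)
    return G

omega = np.linspace(-5, 5, 1001)
A_w = -green(omega + 0.05j, c_up).imag / np.pi    # two peaks split by ~U
```

The self-energy then follows from the Dyson equation, as described next.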
The many-body self-energy is obtained from the Dyson equation \[\mathbf{\Sigma}_{A}^{\mathrm{ED}}(z)=z-\mathbf{H}_{A}^{\mathrm{eff}}-\left[ \mathbf{G}_{A}^{\mathrm{ED}}(z)\right]^{-1}, \tag{13}\] and it describes both local \(\Sigma_{ii}\) and non-local \(\Sigma_{i\neq j}\) electronic correlations in the LO basis. An obvious advantage of ED over, e.g., quantum Monte Carlo [49], is that it provides direct access to retarded self-energy and Green's function, and hence the electron transmission function, without the need to perform an analytic continuation numerically, which is an intrinsically ill-defined problem [50]. Note that within ED, we obtain a many-body self-energy which is, by construction, spin-independent, i.e., \(\Sigma_{ij}^{\sigma}=\Sigma_{ij}^{\bar{\sigma}}\) since \(\mathbf{H}_{A}^{\mathrm{eff}}\) follows from a restricted DFT calculation. #### ii.3.2 Real-space DMFT The idea behind R-DMFT consists of mapping a many-body problem onto a set of auxiliary Anderson impurity models (AIMs) --one for each atom \(\alpha\)-- described by the projected Green's function [44; 45; 46] \[\mathbf{g}_{\alpha}^{\sigma}(z)=\left(\mathbf{G}_{A}^{\sigma}(z)\right)_{ \alpha}. \tag{14}\] The solution of AIM \(\alpha\) (see details below) yields a _local_ many-body self-energy \(\mathbf{\Sigma}_{\alpha}^{\sigma}(z)\), so that the self-energy of the \(A\) subspace is block diagonal in the atomic subspaces \[\mathbf{\Sigma}_{A}^{\sigma}(z)=\mathrm{diag}(\{\mathbf{\Sigma}_{\alpha}^{ \sigma}(z)\mid\alpha\in A\}). \tag{15}\] The set of auxiliary AIMs are coupled by the Dyson equation \[\mathbf{G}_{A}^{\sigma}(z)=\left[z+\mu-(\mathbf{H}_{A}-\mathbf{H}_{A}^{ \mathrm{dc}})-\mathbf{\Delta}_{A}(z)-\mathbf{\Sigma}_{A}^{\sigma}(z)\right]^ {-1}, \tag{16}\] where the Green's function \(\mathbf{G}_{A}^{\sigma}\) includes the many-body self-energy and the double-counting correction, and the chemical potential \(\mu\) is determined to preserve the DFT occupation of the \(A\) subspace. Finally, Eqs. (14-16) are iterated self-consistently starting with an initial guess (typically \(\mathbf{\Sigma}_{A}^{\sigma}=0\)) until convergence. More in detail, in AIM \(\alpha\) the impurity electrons interact through a screened local Coulomb repulsion projected onto atom \(\alpha\), i.e., \(\mathbf{U}_{\alpha}=(\mathbf{U}_{A})_{ij,kl}\mid i,j,k,l\in\alpha\)[51]. Moreover, the impurity is embedded in a self-consistent _bath_ of non-interacting electrons, which describes the rest of the electronic system, encoded in the hybridization function \[\mathbf{\Delta}_{\alpha}^{\sigma}(z)=z+\mu-(\mathbf{H}_{\alpha}-\mathbf{H}_{ \alpha}^{\mathrm{dc}})-\left[\mathbf{g}_{\alpha}^{\sigma}(z)\right]^{-1}- \mathbf{\Sigma}_{\alpha}^{\sigma}(z). \tag{17}\] Also within R-DMFT, it is convenient to use ED to solve the AIMs to have direct access to retarded functions. This requires to _discretize_ the hybridization function with a finite number of bath orbitals, described by orbital energies \(\epsilon_{m}^{\sigma}\) and hopping parameters to the impurity \(t_{mi}^{\sigma}\). 
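The discretization itself can be illustrated with a small fit (schematic, with a placeholder target hybridization; in production codes the fit is often performed on the Matsubara axis): a handful of bath levels \((\epsilon_{m},t_{m})\) are optimized so that \(\sum_{m}t_{m}^{2}/(z-\epsilon_{m})\) reproduces a given \(\mathbf{\Delta}(z)\).

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative sketch only: discretize a hybridization function Delta(z) with
# a few bath orbitals (eps_m, t_m). The "target" Delta below is a placeholder
# built from a dense pole representation of a flat band.
eta = 0.05
w = np.linspace(-3, 3, 400)
z = w + 1j * eta

dense_poles = np.linspace(-2, 2, 200)
dense_amps = np.full_like(dense_poles, 0.5**2 / len(dense_poles))
delta_target = (dense_amps / (z[:, None] - dense_poles)).sum(axis=1)

n_bath = 4
def delta_fit(params):
    eps_m, t_m = params[:n_bath], params[n_bath:]
    return (t_m**2 / (z[:, None] - eps_m)).sum(axis=1)

def residual(params):
    d = delta_fit(params) - delta_target
    return np.concatenate([d.real, d.imag])

x0 = np.concatenate([np.linspace(-1.5, 1.5, n_bath), 0.3 * np.ones(n_bath)])
fit = least_squares(residual, x0)
eps_m, t_m = fit.x[:n_bath], fit.x[n_bath:]     # discrete bath parameters
```

The fitted \((\epsilon_{m},t_{m})\) are the quantities entering the impurity Hamiltonian below.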
The hybridization parameters, together with the local Coulomb blocks \(\mathbf{U}_{\alpha}\), define the AIM Hamiltonian \[\begin{split} H_{\mathrm{AIM}}&=\sum_{ij,\sigma}\left(\mathbf{H}_{\alpha}-\mathbf{H}_{\alpha}^{\mathrm{dc}}\right)_{ij}c_{i\sigma}^{\dagger}c_{j\sigma}-\mu\sum_{i\sigma}c_{i\sigma}^{\dagger}c_{i\sigma}\\ &+\sum_{m,\sigma}\epsilon_{m}^{\sigma}a_{m\sigma}^{\dagger}a_{m\sigma}+\sum_{mi,\sigma}t_{mi}^{\sigma}(a_{m\sigma}^{\dagger}c_{i\sigma}+c_{i\sigma}^{\dagger}a_{m\sigma})\\ &+\frac{1}{2}\sum_{ijkl,\sigma\sigma^{\prime}}\left(\mathbf{U}_{\alpha}\right)_{ij,kl}c_{j\sigma}^{\dagger}c_{k\sigma^{\prime}}^{\dagger}c_{l\sigma^{\prime}}c_{i\sigma},\end{split} \tag{18}\] where \(c_{i\sigma}^{(\dagger)}\) and \(a_{m\sigma}^{(\dagger)}\) denote the annihilation (creation) operators of an electron at LO \(i\) with spin \(\sigma\), or at bath orbital \(m\) with spin \(\sigma\), respectively. Once the many-body spectrum of the AIM is known, the local self-energy is evaluated in terms of the local Green's function \(\mathbf{G}_{\alpha}^{\sigma}\) as \[\mathbf{\Sigma}_{\alpha}^{\sigma}(z)=\left[\mathbf{g}_{\alpha}^{\sigma}(z)\right]^{-1}-\left[\mathbf{G}_{\alpha}^{\sigma}(z)\right]^{-1}. \tag{19}\] At convergence, we define the R-DMFT self-energy as \[\mathbf{\Sigma}_{A}^{\sigma,\mathrm{R-DMFT}}(z)=\mathbf{\Sigma}_{A}^{\sigma}(z)-\mathbf{H}_{A}^{\mathrm{dc}}-\mu, \tag{20}\] so that it contains all shifts related to the density matrix. In terms of approximations, R-DMFT takes into account local electronic correlations (\(\Sigma_{ii}\)), neglecting non-local correlations (i.e., \(\Sigma_{ij}=0\)), but some degree of non-locality is retained as \(\Sigma_{ii}\neq\Sigma_{jj}\), and the AIMs are coupled through the self-consistent Dyson equation. Therefore, R-DMFT is suitable to treat intrinsically inhomogeneous systems [26; 46; 47; 52; 53; 54]. Moreover, R-DMFT is considerably lighter in terms of computational complexity with respect to the direct ED of the original many-body problem and can treat systems with hundreds of atoms in the active space, inaccessible to ED [44; 46; 26]. Finally, besides the restricted solution \(\mathbf{\Sigma}_{A}^{\sigma}=\mathbf{\Sigma}_{A}^{\bar{\sigma}}\), within R-DMFT we also have the freedom of breaking the spin degeneracy and describing magnetic solutions [28; 30; 55; 31; 34]. ### Double-counting correction The double-counting (DC) correction \(\mathbf{H}_{A}^{\mathrm{dc}}\) aims at eliminating the correlations in the \(A\) subspace included at a mean-field level by DFT, which are instead to be treated at a more sophisticated level of theory within the many-body simulations. Unfortunately, an analytical expression of the correlation effects accounted for within DFT is unknown, and therefore several approximations [47; 56; 57; 58] have been developed in the context of DFT+DMFT [59; 60] or DFT+U [61; 62]. For a single-orbital AIM (as in the case of the simulations in this work) the DC correction can be reasonably approximated within the fully localized limit (FLL) [63; 64; 57; 65] \[\left(\mathbf{H}_{A}^{\text{dc}}\right)_{ii}=(\mathbf{U}_{A})_{ii,ii}\bigg{(}n_{i}^{\text{DFT}}-\frac{1}{2}\bigg{)}, \tag{21}\] where \(n_{i}^{\text{DFT}}\) is the DFT occupation of orbital \(i\). Hence, we use this form of DC for the R-DMFT calculations. However, there is no established method for the general case of multi-site and multi-orbital Coulomb interaction, as is the case for ED.
Here, we propose a self-consistent procedure in which a set of local parameters is optimized to fulfill the condition \[(\mathbf{\Sigma}_{A})_{ii}(|z|\rightarrow\infty)=0. \tag{22}\] This approach ensures that the electronic properties at high energies, which are well described by a one-particle approach, are restored to the DFT level. ### Correlated quantum transport To describe the electronic transport properties, we use the non-equilibrium Green's function (NEGF) approach [66; 67]. In NEGF, we identify a device region surrounding the nanojunction's constriction and downfold the leads' electrons by virtue of an efficient recursive algorithm [68]. The corresponding Green's function reads \[\mathbf{G}_{D}(z)=\big{[}z\mathbf{S}_{D}-\mathbf{H}_{D}-\mathbf{\Sigma}_{L}(z)-\mathbf{\Sigma}_{R}(z)-\mathbf{\Sigma}_{D}(z)\big{]}^{-1}, \tag{23}\] where \(\mathbf{\Sigma}_{L(R)}\) is the self-energy describing the electrons in the left (right) electrodes, and \[\mathbf{\Sigma}_{D}(z)=\mathbf{S}_{DA}\mathbf{S}_{A}^{-1}\mathbf{\Sigma}_{A}(z)\mathbf{S}_{A}^{-1}\mathbf{S}_{AD} \tag{24}\] projects the many-body self-energy of the active space \(\mathbf{\Sigma}_{A}\) (i.e., obtained within either ED or R-DMFT) onto the device region. Following the generalization of the Landauer formula proposed by Meir and Wingreen [69], the conductance is given by \[G=G_{0}T(E_{F}), \tag{25}\] where \(G_{0}=e^{2}/h\) is the conductance quantum, and the transmission function is computed as \[T(E)=\text{Tr}[\mathbf{G}_{D}(z)\mathbf{\Gamma}_{L}(z)\mathbf{G}_{D}^{\dagger}(z)\mathbf{\Gamma}_{R}(z)], \tag{26}\] with \(\mathbf{\Gamma}_{L(R)}\) the anti-hermitian part of \(\mathbf{\Sigma}_{L(R)}\) \[\mathbf{\Gamma}_{L(R)}=i\big{[}\mathbf{\Sigma}_{L(R)}-\mathbf{\Sigma}_{L(R)}^{\dagger}\big{]}. \tag{27}\] While Eqs. (25)\(-\)(27) neglect the incoherent contributions (i.e., due to inelastic scattering) to the transmission that arise from the many-body self-energy [70; 71; 72; 35], they provide a good approximation of the low-bias transport properties, even in the presence of strong correlations within the \(A\) subspace [69; 34]. ## IV Computational details The structures were set up with the atomic simulation environment (ASE) software package [75] and the DFT calculations were performed with the GPAW package [76; 77; 78]. We performed a geometry optimization, and the atomic positions were relaxed until the forces on each atom were below 0.001 Hartree/Bohr (\(\approx 0.05\) eV/Å). For converging the electron density, we used an LCAO double-\(\zeta\) basis set, with a grid spacing of 0.2 Å, and the Perdew-Burke-Ernzerhof exchange-correlation functional [79]. For the electron transport calculations, we followed the method described in [68]. The leads were modeled by a three-layer-thick Au(111) slab sampled with a \(3\times 1\times 1\)\(k\)-point grid along the transport direction. The scattering region also includes one Au slab and an additional Au layer terminated by a four-atom Au tip, to which the molecule anchoring groups are attached. For all structures, the \(A\) subspace describing the effective model is composed of the p\({}_{z}\) LOs of the C and N atoms of the molecular bridge, while the \(R\) subspace for the cRPA calculation of the screened interaction includes the molecule and also extends to the Au atoms of the tip (see Fig. 1). Figure 1: (a) Schematics of the scattering region of the single-molecule junction, consisting of the molecular bridge and the Au electrodes.
The screening region (\(R\)) and the active space within the molecule (\(A\)) are highlighted. (b) Detailed structure of pentadienyl and benzyl radical, and Au electrodes. For pentadienyl, we also show schematically the mapping onto the C and N p\({}_{z}\) LOs. ## V Insights from ab-initio simulations In order to understand the many-body effects arising in the open-shell configuration, it is useful to recall some chemical and electronic properties of the pentadienyl and benzyl radicals, and how those are reflected by _ab-initio_ simulations. In particular, we look at the spatial distribution of the SOMO and at the _ab-initio_ Coulomb parameters projected onto the LOs of the active space. ### Structure of the SOMO The pentadienyl radical (C\({}_{5}\)H\({}_{7}\)) is a linear molecule, and the shortest polyene radical after allyl. It has three resonant structures. In each structure, the unpaired electron is hosted on one of the _odd_ C atoms. The delocalization of the unpaired electron along the molecular backbone contributes to the thermodynamic stability of the molecule [80; 81]. The structure we consider is obtained by substituting a hydrogen atom at each end of the chain by an amino group. By diagonalization of the AOs Hamiltonian in the subspace of the molecule, we find an eigenvalue just above the Fermi energy, corresponding to a partially occupied MO (i.e., the SOMO). The pentadienyl resonant structures and the projection of the SOMO onto the p\({}_{z}\) LOs of the active space are shown in Figs. 2(a,b), respectively. The SOMO reflects the resonant structures, with the largest projection on the odd C atoms and nodes at the even C atoms. It also displays a significant projection onto the anchoring groups, suggesting a strong coupling to the electrodes in the junction. The benzene molecule (C\({}_{6}\)H\({}_{6}\)) is a cyclic aromatic hydrocarbon and the archetypical building block for molecular electronics. For our analysis, we consider a related compound, the benzyl radical (C\({}_{6}\)H\({}_{5}\)CH\({}_{2}^{\bullet}\)), which is obtained by substituting a hydrogen atom with a methylene (CH\({}_{2}\)) group. The benzyl radical is also stabilized by resonance but, unlike pentadienyl, in both resonant structures the unpaired electron is hosted on the benzylic C, as illustrated in Fig. 2(c). We focus on the _meta_ configuration, in which the amino groups are substituted at the 1,3-positions of the aromatic ring, while the methylene group is substituted in the 5-position, i.e., along the longer branch of the ring (see also Fig. 1). As expected, we find an eigenvalue lying at the Fermi energy, corresponding to the SOMO shown in Fig. 2(d). The SOMO displays the largest projection at the p\({}_{z}\) LO of the benzylic C atom and has nodes at every other C (similarly to pentadienyl). However, it does not extend to the anchoring groups, thus suggesting a weak coupling to the electrodes. ### Coulomb parameters in the LO basis The partially screened Coulomb matrix projected onto the LO basis of the active space \(U_{ij}=(\mathbf{U}_{A})_{ij}\) is shown in Figs. 3(a,b) for the pentadienyl and the benzyl radicals, respectively. In both cases, the intra-orbital couplings \(U_{ii}\) are in the range of 4-5 eV and are slightly stronger for the atoms farther away from the metallic Au electrodes, due to the weaker screening effects. Similar values of the Coulomb repulsion are found for the anchoring groups.
However, as we shall see later, while the Cp\({}_{z}\) LOs are close to half-filling, the Np\({}_{z}\) LOs are almost full, resulting in weak correlation effects. ## VI Electron transport We start our analysis by looking at the electron transport properties of the pentadienyl and benzyl junctions. In particular, we compare the predictions of DFT and many-body simulations, where the Coulomb repulsion is treated at different levels of approximation. Figure 3: Partially screened Coulomb parameters \(U_{ij}=(\mathbf{U}_{A})_{ij}\) in the LO basis for the pentadienyl (a) and the benzyl (b) radicals. Figure 2: Resonances and SOMO isosurface (from p\({}_{z}\) LOs) of pentadienyl (a,b) and benzyl (c,d) radicals. In pentadienyl, the unpaired electron is hosted by one of the _odd_ C atoms of the polyene chain, which also display the largest contributions to the isosurface, while the _even_ C atoms correspond to nodes. In both benzyl resonant structures, the unpaired electron is hosted by the benzylic C, and the isosurface displays nodes on every other C, similarly as in pentadienyl. Isovalues: \(\pm 0.03\) au. ### Pentadienyl Within DFT, the transmission function displays a resonance close to the Fermi energy (denoted by \(E_{F}\)) corresponding to ballistic transport through the SOMO. The resonance is found at \(\epsilon_{\text{SOMO}}=70\) meV and has a width \(\Gamma_{\text{SOMO}}\approx 300\) meV, reflecting a significant hybridization of the SOMO with the states of the electrodes. The slight misalignment between the SOMO resonance and \(E_{F}\) yields a conductance \(G=5.7\times 10^{-1}\)\(G_{0}\) in each spin channel, see Fig. 4(a). This scenario changes as the SOMO resonance is split due to the Coulomb repulsion. However, depending on the splitting mechanism, we observe fundamentally different transport properties. Within spin-unrestricted R-DMFT calculations, the spin rotational symmetry is broken. The doublet degeneracy is lifted as the SOMO is split into an occupied state in the majority-spin channel (e.g., \(\downarrow\)-SOMO) and an unoccupied state in the minority-spin channel (\(\uparrow\)-SUMO). This approximation yields a magnetic insulator with a spin gap \(\Delta_{s}\approx 1.3\) eV and a magnetic moment \(\langle S_{z}\rangle\simeq 1/2\) due to the single unpaired electron. The spin-dependent splitting of a transmission feature, e.g., a resonance [16; 82; 17] or an antiresonance [30; 31], has been suggested as a suitable mechanism for the realization of organic spin filters. For pentadienyl, the splitting is approximately symmetric around the Fermi level, thus yielding a similar conductance in the two spin channels, \(G^{\uparrow}=1.9\times 10^{-2}\)\(G_{0}\) and \(G^{\downarrow}=1.5\times 10^{-2}\)\(G_{0}\), and a low spin-filtering efficiency. The spin-unrestricted R-DMFT transmission functions are shown in Fig. 4(a). Another possible mechanism to split the SOMO is obtained _without_ lifting the spin degeneracy (i.e., within either R-DMFT or ED). In this case, we find that the SOMO transmission resonance is split, revealing an underlying transmission node, see Fig. 4(b). Hence, many-body calculations predict a strong suppression of the conductance, by several orders of magnitude, in stark contrast with the single-particle picture, in which electron transport is dominated by a nearly-resonant ballistic channel.
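For reference, the transmission formula of Eq. (26) can be evaluated with a few lines of linear algebra. The sketch below uses a generic tight-binding bridge with wide-band leads and placeholder parameters (it is not the pentadienyl junction), and indicates where a many-body self-energy \(\mathbf{\Sigma}_{D}\) from ED or R-DMFT would enter through Eq. (23).

```python
import numpy as np

# Illustrative sketch: Landauer/Meir-Wingreen transmission
# T(E) = Tr[G Gamma_L G^+ Gamma_R] for a small tight-binding bridge with
# wide-band-limit leads. All parameters are placeholders.

n = 5                                   # device orbitals (e.g., pz LOs)
t = -1.0                                # nearest-neighbour hopping (placeholder)
H = t * (np.eye(n, k=1) + np.eye(n, k=-1))

gamma = 0.3                             # lead-induced broadening (placeholder)
Gam_L = np.zeros((n, n)); Gam_L[0, 0] = gamma
Gam_R = np.zeros((n, n)); Gam_R[-1, -1] = gamma
Sig_L, Sig_R = -0.5j * Gam_L, -0.5j * Gam_R

def transmission(E, Sigma_D=None):
    """Transmission at energy E; Sigma_D is an optional many-body self-energy."""
    Sigma_D = np.zeros((n, n), complex) if Sigma_D is None else Sigma_D
    G = np.linalg.inv((E + 1e-9j) * np.eye(n) - H - Sig_L - Sig_R - Sigma_D)
    return np.trace(G @ Gam_L @ G.conj().T @ Gam_R).real

energies = np.linspace(-3, 3, 601)
T = np.array([transmission(E) for E in energies])
conductance_G0 = transmission(0.0)      # zero-bias conductance in units of G0
```

A correlated calculation simply replaces the zero `Sigma_D` by the projected many-body self-energy, without changing the transmission expression.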
Note that the splitting is substantially larger in ED than in R-DMFT, and considering that the antiresonance is not aligned with \(E_{F}\), it also results in a much stronger suppression of the conductance \(G=8.1\times 10^{-4}\)\(G_{0}\) (ED) versus \(G=4.9\times 10^{-1}\)\(G_{0}\) (R-DMFT). This suggests that non-local effects play an important role, as it can be expected in low-dimensional systems [32; 27]. Since a linear \(\pi\)-conjugated molecule does not display any topological node, the pentadienyl node has been suggested to arise from destructive interference between different charged states of the molecule [14]. In Sec. VII, we discuss in detail the microscopic mechanism responsible for the splitting of the SOMO and for the transmission node, and show that they are intertwined. ### Benzyl In the case of benzene single-molecule junctions, there is more than one possible configuration for the ring to bridge the electrodes, depending on the position of the amino anchoring groups. We focus on the _meta_ configuration (i.e., amino groups substituted at the 1,3-positions of the aromatic ring) which is particularly relevant in the context of molecular electronics. Within DFT, the transmission function displays two striking features which can be readily identified in Figs. 5(a,b): a narrow asymmetric Fano resonance at \(\epsilon_{\text{Fano}}<10\) meV, close to \(E_{F}\), and a wide antiresonance at \(\epsilon_{\text{DQI}}\approx-0.8\) eV. Both features originate from quantum interference (QI) effects. Clarifying the nature of the resonances and highlighting their differences, will prove helpful in understanding how electronic correlations affect the transport properties and to shed light on the underlying microscopic mechanism. The Fano resonance has a characteristic asymmetric line shape and arises from the QI between the SOMO, which is mostly localized at the benzylic C atom, and the delocalized MOs on the molecular backbone, which have a strong overlap with the states of the metallic Au electrodes [83; 84; 85]. The antiresonance is the hallmark of destructive QI in the meta configuration and it is well-established in the literature, from both the experimental [86; 87; 88] and theoretical [89; 90; 91; 92; 93] points of view. It arises from the interference between the HOMO and LUMO of the ring itself [93]. There is a subtle interplay between the antiresonance and the functional groups (not necessarily radical). It is well-established that substituents and adsorbates affect the relative position of destructive interference features with respect to the Fermi energy. The chemical control of the antiresonance can be exploited Figure 4: Electron transmission function through the pentadienyl radical junction. DFT predicts a SOMO resonance close to \(E_{F}\). Taking into account the Coulomb repulsion beyond restricted DFT yields: (a) a splitting of the resonance into \(\downarrow\)-SOMO and \(\uparrow\)-SUMO due to spin-symmetry breaking; (b) a splitting of the resonance without symmetry breaking and a transmission node due to many-body effects. for a wide range of applications ranging from nanoelectronics [94] to chemical sensing [95; 96] In principle, the position of the antiresonance is also influenced by the substitution position in the ring (see, e.g., [94] and references therein), but this effect is of marginal relevance to the scope of the present work. The Fano resonance is indeed the transport signature of the SOMO. 
However, in contrast to pentadienyl, where the SOMO is delocalized along the molecular backbone and dominates the electron transport, in benzyl, the SOMO is mostly localized on the methylene functional group. It is therefore interesting to investigate the effect of the Coulomb repulsion and highlight the differences between the two cases. Within restricted DFT simulations, the narrow Fano resonance is partially concealed by the wider QI antiresonance. Breaking the spin symmetry within spin-unrestricted R-DMFT yields a pair of spin-split Fano resonances, as shown in Fig. 5(a). In the majority spin channel, \(\epsilon_{\mathrm{Fano}}^{\uparrow}<0\) falls within the transmission depletion caused by the antiresonance and the asymmetric Fano profile is clearly observable. Its counterpart in the minority spin channel is found above \(E_{F}\), i.e., \(\epsilon_{\mathrm{Fano}}^{\downarrow}>0\), and is still mostly concealed by the background transmission. Interestingly, the spin-symmetry breaking also induces spin-resolved QI antiresonances [30; 31; 97], but the splitting \(\epsilon_{\mathrm{DQI}}^{\downarrow}-\epsilon_{\mathrm{DQI}}^{\uparrow}\) is weaker than in the Fano case, since the spin imbalance yields \(\langle S_{z}\rangle\simeq 1/2\) on the p\({}_{z}\) LO of the benzylic C, and a weaker magnetization in the rest of the molecule. Not allowing the spin symmetry to break in the many-body simulations reveals another scenario, as shown in Fig. 5(b). The difference is twofold. We observe a splitting of the Fano resonance in both R-DMFT and ED (with the ED splitting being significantly larger) but no splitting is detected for the QI antiresonance, which is rather shifted further away from \(E_{F}\). This suggests that the microscopic mechanisms behind the splitting with and without spin-symmetry breaking are fundamentally different, as it distinguishes between the two QI features. Moreover, in contrast to the case of pentadienyl, the splitting of the SOMO in benzyl does not result in a strong suppression of the transmission within the SOMO-SUMO gap. The two observations above are deeply connected, and eventually, they can both be rationalized in terms of the spatial distribution of the SOMO. ## VII Microscopic mechanism ### Splitting of the SOMO So far, we have seen that the Coulomb repulsion induces a splitting of the SOMO of the organic radicals. In order to gain a deeper understanding of the electronic mechanism behind the splitting, and how it affects the transport properties of the junction, it is useful to look at the retarded self-energy in the LO basis \(\Sigma_{ij}=(\mathbf{\Sigma}_{A})_{ij}\), corresponding to \(\mathbf{\Sigma}_{A}^{\mathrm{ED}}\) and \(\mathbf{\Sigma}_{A}^{\sigma,\mathrm{R-DMFT}}\) in Eqs. (13, 20), respectively. The many-body effects encoded in the self-energy can be rationalized by interpreting the real part as an energy-dependent level shift, and the imaginary part as an effective electron-electron scattering rate. We argue that the mechanism discussed in the following is a common feature of organic radicals. Therefore, we discuss the pentadienyl and benzyl radicals in parallel and highlight the differences whenever necessary. In order to compare the different approximations, it is convenient to look at the trace of the self-energy matrix. Within spin-unrestricted R-DMFT, which is shown in Figs. 
6(a,d), the real part of the self-energy is weakly energy-dependent around \(E_{F}\), and determines a shift of the SOMO resonance in opposite directions for the two spin polarizations. The imaginary part is negligible (not shown) resulting in highly coherent SOMO and SUMO electronic excitations below and above \(E_{F}\). Note that the ground state of spin-unrestricted R-DMFT is two-fold degenerate, and it is invariant under a flip of all spins: \(\{\sigma_{i}\}\rightarrow\{\bar{\sigma}_{i}\}\). This picture is qualitatively analogous to that one can expect also at the single-particle level, i.e., within DFT+U. Many-body effects are weak, and the dominant effect arises from the spin-symmetry breaking, as both radicals are magnetic insulators with a Figure 5: Electron transmission function through the benzyl radical junction, displaying the Fano and antiresonance originating by quantum interference effects. (a) Breaking the spin symmetry results in the spin-splitting of both the Fano and the DQI features. (b) Including many-body effects beyond DFT, the Fano resonance is split (without symmetry-breaking) while the DQI antiresonance is shifted to lower energies. spin SOMO-SUMO gap. The scenario is completely different within restricted R-DMFT and ED, as shown in Figs. 6(b,c,e,f). There, the self-energy is dominated by a single resonance and its energy dependence can be well described within a one-pole approximation (OPA) \[\Sigma_{\text{OPA}}(E)=\frac{a}{E-E_{F}-\epsilon_{r}+\imath\gamma}. \tag{28}\] The OPA self-energy has a Lorentzian shape, where \(\epsilon_{r}\) and \(\gamma\) denote the resonant energy and the width of the resonance, whereas \(a\) controls the amplitude of the curve. The imaginary part of the self-energy plays the role of a _giant_ electron-electron scattering rate and suppresses electronic excitations around \(\epsilon_{r}\simeq\epsilon_{\text{SOMO}}\), while the real part redistributes the spectral weight towards higher energies. This many-body mechanism, akin to the Mott metal-to-insulator transition as described within DMFT [42], is at the origin of the splitting of the SOMO resonance. In organic radicals, the following hierarchy of emergent energy scales is realized: \(\Gamma_{\text{SOMO}}\ll\Delta\lesssim U_{\text{screened}}\), where the typical energy scale associated with the screened Coulomb repulsion \(U_{\text{screened}}\) significantly exceeds the narrow width of the SOMO resonance (\(\sim 10\)-\(100\) meV), and the HOMO-LUMO single-particle gap \(\Delta\) controlled by the C-C \(\pi\)-bonds (\(\sim\) eV). This sets the electrons in the SOMO deep within the strongly correlated regime. Such a general condition suggests this mechanism to be common to organic radicals with a single unpaired electron. Multi-radical molecules [98] and networks [99], may display different electronic and transport properties due to effective interactions between the unpaired electrons [7; 18; 19; 20]. ### Spatial structure of the electronic correlations While R-DMFT and ED seem to qualitatively describe the same many-body mechanism for the splitting of the SOMO, it is also interesting to look at the whole self-energy matrix. As discussed in Sec. III.3, within ED all elements \(\Sigma_{ij}\neq 0\), whereas within R-DMFT \(\Sigma_{ij}\propto\delta_{ij}\). Remarkably, all elements of the self-energy (irrespectively of the approximation) are well described by the OPA with the _same_ resonant energy \(\epsilon_{r}\), as shown in Figs. 7(a,e). 
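The splitting mechanism encoded in the one-pole self-energy of Eq. (28) can be illustrated with a few lines of code. The following is a minimal numerical sketch (not part of the _ab-initio_ workflow; all parameter values are illustrative): a single narrow resonance is dressed by \(\Sigma_{\rm OPA}\), and the pole of the self-energy depletes the spectral weight at \(\epsilon_{r}\) while redistributing it into two split peaks, the analogue of the SOMO/SUMO pair.

```python
import numpy as np

# Illustrative parameters (eV): a narrow level at eps0 with hybridization width Gamma,
# dressed by a one-pole self-energy, Eq. (28), with pole at eps_r and width gamma.
eps0, Gamma = 0.0, 0.05
a, eps_r, gamma = 0.25, 0.0, 0.003

def spectral_function(E, with_opa=True):
    """A(E) = -Im G(E) / pi for a single level, optionally dressed by Sigma_OPA."""
    sigma = a / (E - eps_r + 1j * gamma) if with_opa else 0.0
    G = 1.0 / (E - eps0 + 0.5j * Gamma - sigma)
    return -G.imag / np.pi

E = np.linspace(-1.5, 1.5, 3001)
A_bare = spectral_function(E, with_opa=False)   # a single Lorentzian at eps0
A_opa  = spectral_function(E, with_opa=True)    # weight suppressed at eps_r, pushed into two peaks

print("bare peak at E =", E[np.argmax(A_bare)])
print("dressed peaks at E =", E[E < 0][np.argmax(A_opa[E < 0])],
      "and", E[E > 0][np.argmax(A_opa[E > 0])])   # split by roughly 2*sqrt(a)
```

With these illustrative numbers the two dressed peaks appear near \(\pm\sqrt{a}\), well outside the bare resonance width, mimicking the hierarchy \(\Gamma_{\text{SOMO}}\ll U_{\text{screened}}\) discussed above.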
The off-diagonal elements (when non-zero) can have either sign since it is not determined by causality. It is then easy to have a comprehensive look at the self-energy by plotting the matrix \(\Sigma_{ij}(\epsilon_{r})\), as shown in Figs. 7(c,d,g,h). Indeed, looking at the ED self-energy matrix, clear patterns emerge. Along the diagonal, some elements \(\Sigma_{ii}\) are significantly larger than the others (note the logarithmic scale), and this asymmetry is mirrored by the off-diagonal elements. Upon close inspection, we can associate them with the p\({}_{z}\) LOs with the largest SOMO projection, thus confirming that the strongest many-body effects correlate with the spatial distribution of the SOMO. Within R-DMFT, we find an analogous pattern along the diagonal, as indicated in the insets. Despite its approximations (local Coulomb interaction, local correlations), it seems that R-DMFT tells qualitatively the same story as the full ED simulations. This advocates for a substantially local character of the microscopic mechanism, that can describe both the splitting of the SOMO and its consequences on electron transport, whereas non-local effects renormalize the splitting. ### Implications for electron transport The many-body mechanism behind the splitting of the SOMO is common to both the pentadienyl and benzyl radicals. However, its consequences on electron transport are dramatically different. In order to understand why, it is necessary to combine the insights from DFT with the knowledge about the spatial and energy structure of the self-energy. In pentadienyl, the SOMO is delocalized throughout the molecular backbone, and its large projection on the Figure 6: Trace of the retarded self-energy Tr[\(\Sigma(E)\)] in the LO basis for the pentadienyl (a,b,c) and benzyl (d,e,f) radicals (the real and imaginary parts are denoted by solid and dashed lines, respectively). Within spin-unrestricted R-DMFT (a,d) the self-energy displays a weakly energy-dependent real part, which is different in each spin sector, while the imaginary part is negligible (not shown). Within both R-DMFT (b,e) and ED (c,f) the self-energy is dominated by a single resonance at energy \(\epsilon_{r}\) (denoted by a solid grey line). \(\mathrm{p}_{z}\) LOs of the anchoring groups (see Fig.2(a)) ensures a substantial overlap with the states in the metallic electrodes. Hence, there is a transmission channel across the junction through the SOMO. The pole of the self-energy results in a zero of the corresponding Green's function. The suppression of the Green's function hinders electron transport at that energy and is at the origin of the transmission node [30; 31]. In contrast, in the benzyl radical, the SOMO has negligible projection on the amino groups (see Fig.2(d)) and transport is dominated by transmission channels involving the frontier MOs. Therefore, the splitting of the Fano resonance weakly affects those channels, and does not prevent the off-resonance transmission of electrons across the junction. The above picture can be essentially reproduced within the following tight-binding (TB) three-orbital model, which is schematically represented in Fig. 8(a). Let us consider three orbitals (\(\ell\), \(c\), \(r\)) that can be interpreted as the amino groups, left (\(\ell\)) and right (\(r\)), and the central molecule (\(c\)). The Hamiltonian in such a basis reads \[\mathbf{H}=\begin{pmatrix}\epsilon_{\ell}&t&t^{\prime}\\ t&\epsilon_{c}&t\\ t^{\prime}&t&\epsilon_{r}\end{pmatrix}. 
\tag{29}\] The hybridization to the electrodes is mediated by the external (\(\ell\), \(r\)) orbitals and, for the sake of this discussion, it is assumed to be energy-independent: \[\mathbf{\Gamma}_{L}=\begin{pmatrix}\Gamma&0&0\\ 0&0&0\\ 0&0&0\end{pmatrix},\ \mathbf{\Gamma}_{R}=\begin{pmatrix}0&0&0\\ 0&0&0\\ 0&0&\Gamma\end{pmatrix}. \tag{30}\] The Hamiltonian of the isolated system can be diagonalized to obtain the eigenvalues \(\epsilon_{\mathrm{HOMO}}\), \(\epsilon_{\mathrm{SOMO}}\), and \(\epsilon_{\mathrm{LUMO}}\). In light of the results shown in Fig. 7, the Green's function of the device \[\mathbf{G}_{D}(z)=\left[z-\mathbf{H}+\imath\mathbf{\Gamma}_{L}/2+\imath \mathbf{\Gamma}_{R}/2-\mathbf{\Sigma}_{D}(z)\right]^{-1} \tag{31}\] is dressed with an OPA self-energy \[\mathbf{\Sigma}_{D}(z)=\begin{pmatrix}0&0&0\\ 0&\Sigma_{\mathrm{OPA}}(z)&0\\ 0&0&0\end{pmatrix} \tag{32}\] which acts on the central part (see Fig. 7(a,e) for a connection with the _ab-initio_ simulations) and has a pole at \(\epsilon_{\mathrm{SOMO}}\). Within such a three-orbital model, the Landauer transmission in Eq. (26) simplifies to \[T(E)=\Gamma^{2}|G_{\ell r}(E)|^{2}, \tag{33}\] where \(G_{\ell r}=(\mathbf{G}_{D})_{\ell r}\) is the upper-right element of the Green's function, linking the orbitals connected to the Figure 7: Component of the ED self-energy \(\Sigma_{ij}(E)\) and its matrix representation at the resonant energy \(\mathrm{Im}\,\Sigma_{ij}(\epsilon_{r})\) in the LO basis for the pentadienyl (a,b,c,d) and benzyl (e,f,g,h) radicals. Each component of the self-energy (grey lines) is dominated by a single pole (a,b,e,f) at a resonant energy \(\epsilon_{r}\). Selected components \((i,j)\) are highlighted (color lines) and are labeled according to their index in the matrix. The matrix structure of the self-energy reflects the spatial distribution of the SOMO, i.e., the largest local (\(\Sigma_{ii}\)) and non-local (\(\Sigma_{ij\neq i}\)) self-energy contributions are found for the LOs with the largest projections to the SOMO (denoted by arrows, see also Fig. 2). Within R-DMFT (d,h) the self-energy is diagonal in the LO indices \(\Sigma_{ij}\propto\delta_{ij}\) and displays the same pattern. electrodes, and describes the only transmission channel across the junction. For the sake of simplicity, one can take \(-\epsilon_{\ell}=\epsilon_{r}=\epsilon\), and \(\epsilon_{c}\ll\epsilon\), which together with \(a\), \(\Gamma\), and \(\eta\) are kept fixed, whereas we choose the parameters \(t\) and \(t^{\prime}\) to describe two scenarios, which are representative of the pentadienyl and benzyl radicals. The results are shown in Fig. 8 and described in the following. The physics of the pentadienyl radical can be reproduced by choosing \(t\lesssim\epsilon\) and \(t^{\prime}=0\). The corresponding TB MOs are fairly delocalized throughout the system, as shown in Fig. 8(b). Hence, electron transport happens through sequential hopping processes through the \(c\) orbital. The transmission function, Fig. 8(c), displays a SOMO resonance which is split by including the OPA self-energy, revealing a transmission node within the SOMO-SUMO gap. The origin of the transmission node is ascribed to a zero of the Green's function at the SOMO energy \(G_{\ell r}(E\simeq\epsilon_{\text{SOMO}})\)[30, 31] as demonstrated in Fig. 8(e). 
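To make this scenario easy to reproduce, here is a minimal numerical sketch of the three-orbital model of Eqs. (29)-(33), using the pentadienyl-like parameters quoted in the caption of Fig. 8 (\(t^{\prime}=0\)); the SOMO energy is taken as the middle eigenvalue of \(\mathbf{H}\) and used as the pole position of \(\Sigma_{\rm OPA}\). This is only an illustration of the toy model, not the _ab-initio_ calculation.

```python
import numpy as np

# Pentadienyl-like parameters from the caption of Fig. 8 (all in eV):
eps, eps_c = 0.5, 0.25            # -eps_l = eps_r = eps; central orbital at eps_c
t, tprime = 0.5, 0.0              # t' = 0: transport must pass through the central orbital
a, Gamma, gamma = 0.25, 0.05, 0.003

H = np.array([[-eps,    t, tprime],
              [   t, eps_c,     t],
              [tprime,   t,   eps]])
Gam_L = np.diag([Gamma, 0.0, 0.0])     # Eq. (30)
Gam_R = np.diag([0.0, 0.0, Gamma])

e_homo, e_somo, e_lumo = np.linalg.eigvalsh(H)   # isolated three-orbital levels

def transmission(E, with_opa=True):
    """Landauer transmission T(E) = Gamma^2 |G_lr(E)|^2, Eqs. (31)-(33)."""
    Sigma = np.zeros((3, 3), dtype=complex)
    if with_opa:
        Sigma[1, 1] = a / (E - e_somo + 1j * gamma)   # OPA self-energy on the central orbital, Eq. (32)
    G = np.linalg.inv(E * np.eye(3) - H + 0.5j * (Gam_L + Gam_R) - Sigma)
    return Gamma**2 * abs(G[0, 2])**2

# Without the self-energy: a ballistic resonance through the SOMO.
# With it: the resonance is split and the transmission collapses at the SOMO energy.
print("T(e_SOMO) bare vs dressed:", transmission(e_somo, False), transmission(e_somo, True))
```

Switching to the benzyl-like values \(t=0.1\), \(t^{\prime}=0.5\) (the case discussed next) removes the collapse, since transport then proceeds through the direct \(\ell\)-\(r\) hopping rather than through the correlated central orbital.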
Instead, with the choice of parameters \(t\ll t^{\prime}\lesssim\epsilon\), one can describe the physics of the benzyl radical, characterized by an orbital \(c\), which is weakly coupled to the \(\ell-r\) molecular backbone. The corresponding SOMO is fairly localized on the central orbital, see Fig. 8(f). The transmission function displays a Fano resonance which is split by the OPA self-energy see Fig. 8(g). In contrast to the previous case, \(G_{\ell r}\) does not have a zero, and transport is dominated by a transmission channel that bridges the electrodes through the direct \(\ell\)-\(r\) hopping \(t^{\prime}\). Finally, note that in both scenarios above, many-body effects are negligible for the HOMO and LUMO resonances (corresponding to states which are completely filled and empty, respectively) even when the "correlated" \(c\) orbital has a sizable hybridization with \(\ell\) and \(r\), cfr. Figs. 8(c,d,g,h). Hence, the three-orbital model can reproduce all fundamental features of the radical junctions discussed in this work, and at the same time, provides a simple interpretation of the numerical simulations. ### Non-perturbative nature of the splitting Within ED and R-DMFT, the solution of the many-body problem (i.e., on the lattice or the auxiliary AIM) is _numerically exact_. This means that the Coulomb repulsion is taken into account in a _non-perturbative_ way. It is interesting to compare these results to a _perturbative_ approach, e.g., within the \(GW\) approximation [100, 101], which has been extensively and successfully applied to molecules [102, 103, 104, 105, 106, 107]. However, the question arises to which extent many-body perturbation theory approaches are able to describe the physics of open-shell systems [108]. Within \(GW\), the self-energy is computed to the lowest order in perturbation theory, as a convolution of the Green's function and the screened in Figure 8: Schematic representation of the three-orbital TB model with its parameter, and form of the OPA self-energy (a). Weight distribution and eigenvalues of the TB MOs for scenarios representative of the pentadienyl (b) and benzyl (f) radicals. The transmission function (c,g) obtained without (grey lines) and with (blue lines) the OPA self-energy captures all relevant features of the DFT and many-body simulations. The Green’s function \(G_{\ell r}\) is shown for specific energy ranges, which are relevant to explaining the spectral features associated with the HOMOs (d,h) and the SOMOs (e,i), as discussed in the text. Model parameters [eV]: \(\epsilon=0.5\), \(\epsilon_{c}=0.25\), \(a=0.25\), \(\Gamma=0.05\), \(\gamma=0.003\), common to both scenarios, \(t=0.5\), \(t^{\prime}=0\) (b,c,d) and \(t=0.1\), \(t^{\prime}=0.5\) (e,f,g). teraction. We compute the \(GW\) self-energy correction projected onto the \(A\) region \[\mathbf{\Sigma}(z)=\mathbf{G}_{A}(z)\mathbf{W}_{A}, \tag{34}\] as described in [68], and we consider the case of the pentadienyl radical without loss of generality. In Fig. 9 we see that neither \(G_{0}W_{0}\) nor the fully self-consistent \(GW\) approximation is able to induce a splitting of the SOMO resonance, and the numerical simulations rather result in a shift of the corresponding resonance above the Fermi energy. 
Hence, the many-body techniques we propose to investigate open-shell molecules are not only _sufficient_ but also _necessary_ for our goal, whereas less sophisticated approaches fall short in describing the electronic and transport properties arising from the strong electronic correlations within the SOMO. ## VIII Conclusions In this work, we have proposed a numerical method that combines _ab-initio_ with state-of-the-art many-body techniques and is able to address the complexity of a realistic chemical environment as well as electronic correlation effects beyond the single-particle picture. This method sheds light on the mechanism governing the electronic and transport properties of quantum junctions with organic molecules in an open-shell configuration. By considering a linear and a cyclic radical molecule, we derive a general understanding of the role of many-body effects in molecular radicals with a single unpaired electron, and we show that they have dramatic consequences on electron transport. We establish the microscopic mechanism behind the splitting of the SOMO resonance and unravel a clear link between the space-time structure of electron-electron correlations and the spatial distribution of the SOMO. We demonstrate this by proposing a minimal model, which is capable of grasping the microscopic mechanism and thus reproducing all relevant features of the transmission properties. Our work will pave the way toward a deeper and more comprehensive understanding of strongly correlated electron physics at the nanoscale. ## Acknowledgements We thank J. M. Tomczak for valuable discussions. This research is supported by the Austrian Science Fund (FWF) through project P 31631 (A.V., R.S.) and by the NCCR MARVEL funded by the Swiss National Science Foundation grant 51NF40-205602 (G.G., D.P., M.L.). Computational support from the Swiss Supercomputing Center (CSCS) under project ID s1119 is gratefully acknowledged.
2309.15913
All Loop Scattering As A Counting Problem
This is the first in a series of papers presenting a new understanding of scattering amplitudes based on fundamentally combinatorial ideas in the kinematic space of the scattering data. We study the simplest theory of colored scalar particles with cubic interactions, at all loop orders and to all orders in the topological 't Hooft expansion. We find a novel formula for loop-integrated amplitudes, with no trace of the conventional sum over Feynman diagrams, but instead determined by a beautifully simple counting problem attached to any order of the topological expansion. These results represent a significant step forward in the decade-long quest to formulate the fundamental physics of the real world in a radically new language, where the rules of spacetime and quantum mechanics, as reflected in the principles of locality and unitarity, are seen to emerge from deeper mathematical structures.
N. Arkani-Hamed, H. Frost, G. Salvatori, P-G. Plamondon, H. Thomas
2023-09-27T18:00:04Z
http://arxiv.org/abs/2309.15913v2
# All Loop Scattering as a Counting Problem ###### Abstract This is the first in a series of papers presenting a new understanding of scattering amplitudes based on fundamentally combinatorial ideas in the kinematic space of the scattering data. We study the simplest theory of colored scalar particles with cubic interactions, at all loop orders and to all orders in the topological 't Hooft expansion. We find a novel formula for loop-integrated amplitudes, with no trace of the conventional sum over Feynman diagrams, but instead determined by a beautifully simple counting problem attached to any order of the topological expansion. These results represent a significant step forward in the decade-long quest to formulate the fundamental physics of the real world in a radically new language, where the rules of spacetime and quantum mechanics, as reflected in the principles of locality and unitarity, are seen to emerge from deeper mathematical structures. ## 1 Introduction and Summary * 1.1 Kinematic space * 1.2 The First Miracle: Discovering Feynman diagrams * 1.3 An infinity of diagrams and the spectre of Gravity * 1.4 The Amplitudes * 1.5 The Second Miracle: The Counting Problem * 1.6 Back to the Amplitude! * 2 The partial amplitude expansion * 3 Momenta and curves * 3.1 Mountainscapes * 3.2 Intersections * 3.3 Momentum Assignments * 3.3.1 Aside on Homology * 3.4 Spirals * 4 The Feynman Fan * 4.1 Example: tree level at 5-points * 4.2 The Fan * 4.3 The Mapping Class Group * 4.3.1 Aside on automorphisms * 4.4 Example: the non-planar 1-loop propagator * 4.5 The Delta plane * 4.6 Example: the planar 1-loop propagator * 5 A Counting Problem For Curves * 5.1 Curve Matrices * 5.2 Headlight Functions * 5.3 Example: tree level at 5-points * 5.4 Example: the non-planar 1-loop propagator * 5.5 Spirals * 5.6 Example: the planar 1-loop propagator * 5.7 Example: the genus one 2-loop vacuum * 6 Integrand Curve Integrals * 6.1 Example: the tree level 5-point amplitude * 6.2 Example: the planar 1-loop propagator * 6.3 Example: the planar 1-loop 3-point amplitude * 7 Note on factorization * 7 Amplitude Curve Integrals * 7.1 Example: the planar 1-loop propagator * 7.2 Example: the non-planar 1-loop propagator * 7.3 Example: The non-planar 3-point amplitude * 7.4 Example: genus-one 2-loop amplitudes * 8 Modding Out by the Mapping Class Group * 8.1 Warm up * 8.2 A Tropical Mirzakhani kernel * 8.3 Example: the non-planar 1-loop propagator * 8.4 General Tropical Mirzakhani Kernels * 8.5 The General Iterative Method * 8.6 Example: the genus one 2-loop vacuum amplitude * 9 Examplitudes * 9.1 The non-planar 1-loop 3-point amplitude * 9.2 The genus one 2-loop vacuum amplitude * 9.3 The planar 2-loop tadpole * 9.4 The planar 3-loop vacuum amplitude * 10 A First Look at Recursion * 11 Outlook * A Deriving the Curve Integral Formula * B Factorization in detail * B.1 MCG invariant curve * B.2 MCG non-invariant curve * C The Surface Symanzik polynomials * C.1 The first surface Symanzik * C.2 The second surface Symanzik * D The Recursion Formula * E Recursion Examples * E.1 The 3-point non-planar 1-loop amplitude * E.2 The 2-loop vacuum at genus one * E.3 A comment on the 1-loop planar amplitudes * E.4 ## 1 Introduction and Summary Scattering amplitudes are perhaps the most basic and important observables in fundamental physics. The data of a scattering process--the on-shell momenta and spins of the particles--are specified at asymptotic infinity in Minkowski space. 
The conventional textbook formalism for computing amplitudes "integrates in" auxiliary structures that are not present in the final amplitude, including the bulk spacetime in which particle trajectories are imagined to live, and the Hilbert space in which the continuous bulk time evolution of the wavefunction takes place. These auxiliary structures are reflected in the usual formalism for computing amplitudes, using Feynman diagrams, which manifests the rules of spacetime (locality) and quantum mechanics (unitarity). As has been increasingly appreciated over the past three decades, this comes at a heavy cost--the introduction of huge redundancies in the description of physics, from field redefinitions to gauge and diffeomorphism redundancies, leading to enormous complexities in the computations, that conceal a stunning hidden simplicity and seemingly miraculous mathematical structures revealed only in the final result [1; 2; 3; 4; 5; 6; 7]. This suggests that we should find a radically different formulation for the physics of scattering amplitudes. The amplitudes should be the answer to entirely new mathematical questions that make no reference to bulk spacetimes and Hilbert space, but derive locality and unitarity from something more fundamental. A number of concrete examples of this have already been found in special cases. The discovery of deep and simple new structures in combinatorics and geometry has led to new definitions of certain scattering amplitudes, without reference to spacetime or quantum mechanics. Notably, the amplituhedron determines the scattering amplitudes in planar N =4 SYM, and associahedra and cluster polytopes determine colored scalar amplitudes at tree-level and one-loop [8; 9; 10; 11]. Up to now, these results have been limited in how much of the perturbative expansion they describe--at all loop orders for maximally supersymmetric theories, but only in the planar limit, and only through to one loop for non-supersymmetric theories. Furthermore, the connection between combinatorial geometry and scattering amplitudes at loop level has only been made through the integrand (pre-loop integration) of the amplitudes, and not the amplitudes themselves. Both of these limitations must be transcended to understand all aspects of particle scattering in the real world. This article is the first in a series reporting on what we believe is major new progress towards this goal. These ideas set the foundation for a number of other interrelated threads and results that will appear in various groups of papers. So we take this opportunity to give a birds-eye view of the nature of these developments and the new concepts that are driving this progress. Our departure point is a new formulation of a simple theory,--colored scalar particles with cubic interactions,--at all loop orders and to all orders in the topological 't Hooft expansion, in the form of what we call a _curve integral_. This approach has no hint of a sum over Feynman diagrams anywhere in sight and is instead associated with a simple counting problem defined at any order in the topological expansion. This counting problem defines a remarkable set of variables, \(u_{C}\), associated with every curve, \(C\), on a surface. The \(u\)-variables non-trivially define _binary geometries_[12] by dint of satisfying the remarkable non-linear equations [13] \[u_{C}+\prod_{D}u_{D}^{n(C,D)}=1, \tag{1}\] where \(n(C,D)\) is the intersection number of the curves \(C,D\). 
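As a small, concrete illustration of equation (1), anticipating the tree-level example worked out in Section 1.2 below: the five curves of the 5-point tree-level problem can be drawn as chords of a pentagon, and \(n(C,D)=1\) exactly when the two chords cross. The sketch below (a hypothetical numerical illustration, not taken from the text) solves the resulting \(u\)-equations by damped iteration and checks that the solution lies in the positive region, with every \(u_{C}\) between \(0\) and \(1\).

```python
import numpy as np

# Five curves of the 5-point tree-level problem, drawn as chords of a pentagon;
# n(C, D) = 1 exactly when the two chords cross, and 0 otherwise.
chords = [(1, 3), (1, 4), (2, 4), (2, 5), (3, 5)]

def crossing(c, d):
    (a, b), (x, y) = sorted(c), sorted(d)
    return not set(c) & set(d) and (a < x < b < y or x < a < y < b)

# Solve u_C = 1 - prod_D u_D^{n(C,D)}, Eq. (1), by damped fixed-point iteration.
u = {c: 0.5 for c in chords}
for _ in range(500):
    target = {c: 1.0 - np.prod([u[d] for d in chords if crossing(c, d)]) for c in chords}
    u = {c: 0.8 * u[c] + 0.2 * target[c] for c in chords}

for c in chords:
    lhs = u[c] + np.prod([u[d] for d in chords if crossing(c, d)])
    assert abs(lhs - 1.0) < 1e-10 and 0.0 < u[c] < 1.0   # a point of the positive region
print({c: round(u[c], 6) for c in chords})   # by symmetry, every u_C = (sqrt(5)-1)/2 here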
In the _positive region_, where all the \(u_{C}\) are non-negative, the \(u\)-equations force all the \(u_{C}\) to lie between \(0\) and \(1\): \(0\leq u_{C}\leq 1\). Of mathematical interest, this positive region is a natural and invariant compactification of _Teichmuller space_. This algebraic presentation of Teichmuller space is a counterpart to the famous synthetic compactification of Teichmuller spaces and surface-type cluster varieties given by Fock-Goncharov [14, 15](refneed). The new compactifications defined by the \(u_{C}\) variables are immediately relevant for physics, and lead to the new _curve integral_ formulation of all-loop amplitudes presented in this article. The curve integral does more than reformulate the perturbative series in a new way. It also exposes basic new structures in field theory. For instance, a striking consequence of our formulation is that amplitudes for large \(n\) particles at \(L\)-loops effectively factorise into a tree and a loop computation. The full large \(n\) amplitudes can be reconstructed from computations of \(n\)-point tree amplitudes and low-point \(L\)-loop amplitudes. Moreover, our curve integral formulas make manifest that amplitudes satisfy a natural family of differential equations in kinematic space. The solutions of these equations give novel and efficient recursion relations for all-loop amplitudes. This article focuses on colored scalar amplitudes. However, the results here have extensions to other theories. New curve integral formulations have been discovered for theories of colored scalar particles with arbitrary local interactions, as well as for the amplitudes of pions and non-supersymmetric Yang-Mills theories. These formulas reveal striking inter-relations between these theories, together with surprising hidden properties of their amplitudes that are made manifest by the curve integral formalism. Our results also have implications for the understanding of strings and UV completion. The counting problem at the heart of this paper not only defines QFT amplitudes, it also defines amplitudes for bosonic strings, via the \(u\)-variables, \(u_{C}\), mentioned above. This gives a combinatorial formulation of string amplitudes that makes no reference to worldsheet CFTs and vertex operators. This new approach to string amplitudes differs from the conventional theory in a very fundamental way. The \(u\)-variables, which are derived from a simple counting problem, have a beautiful and direct connection to the geometry of two-dimensional surfaces. But this connection is via the _hyperbolic geometry_ of Teichmuller space, and _not_ via the conventional picture of Riemann surfaces with a complex structure. The new string formulas are not just an exercise in passing between the complex and the hyperbolic pictures for Teichmuller space. We find that we can reproduce bosonic strings at loop level, but other choices are just as consistent, at least insofar as the field theory limit is concerned. This allows us to deform string amplitudes into a larger, but still highly constrained, space of interesting objects. This runs counter to the lore that string theory is an inviolable structure that cannot be modified without completely breaking it. Our larger class of string amplitudes transcends the usual strictures on spacetime dimension, as well as the famous instabilities of non-supersymmetric strings. 
Moreover, our new combinatorial-geometric point of view also makes it easier to recover particle amplitudes from strings in the \(\alpha^{\prime}\to 0\) limit. By contrast, recovering field theory from conventional string theory involves vastly (technically, infinitely!) more baggage than is needed [16]. There are several other related developments, including the discovery of a remarkable class of polytopes, _surfacehedra_, whose facet structure captures, mathematically, the intricate boundary structure of Teichmuller space, and, physically, the intricate combinatorics of amplitude singularities at all loop orders, and whose _canonical form_ determines (an appropriate notion of the) loop integrand at all orders in the topological expansion. The results of all these parallel threads of investigation will be presented in various groups of papers. We end this preview of coming attractions by explaining a quite different sort of motivation for our works that will be taken up in near-future work. The counting problem that lies at the heart of this paper has an entirely elementary definition. But the central importance of this counting problem will doubtless seem mysterious at first sight. It finds its most fundamental origin in remarkably simple but deep ideas from the "quiver representation theory" [17, 18] of (triangulated) surfaces. Arrows between the nodes of a quiver can be associated with maps between vector spaces attached to the nodes. Choosing compatible linear maps between the nodes defines a _quiver representation_. In this context, our counting problem is equivalent to counting the _sub-representations_ of these quiver representations. This perspective illuminates the mathematical structure underlying all of our formulas. But these ideas also hint at a fascinating prospect. The amplitudes we study are associated with the class of surface-type quivers, which are dual to triangulated surfaces. Nothing in our formulas forces this restriction on us: we are free to consider a much wider array of quivers. _All_ of these quivers can be associated with amplitude-like functions. This vast new class of functions enjoys an intricate (amplitude-like) structure of "factorisations" onto simpler functions. This amounts to a dramatic generalisation of the notion of an "amplitude", and in a precise sense also generalises the rules of spacetime and quantum mechanics to a deeper, more elementary, but more abstract setting. Having outlined this road map, we return to the central business of this first paper. We will study the simplest theory of \(N^{2}\) colored particles with any mass \(m\), grouped into an \(N\times N\) matrix \(\Phi^{I}_{J}\) with \(I,J=1,\cdots,N\). The Lagrangian, with minimal cubic coupling, is \[{\cal L}={\rm Tr}(\partial\Phi)^{2}+m^{2}{\rm Tr}(\Phi^{2})+g{\rm Tr}(\Phi^{3}), \tag{2}\] in any number \(D\) of spacetime dimensions. This theory is a simpler cousin of all theories of colored particles, including Yang-Mills theories, since the singularities of these amplitudes are the same for all such theories, only the _numerators_ differ from theory to theory. The singularities of amplitudes are associated with some of the most fundamental aspects of their conventional interpretation in terms of spacetime processes respecting unitarity. So understanding the amplitudes for this simple theory is an important step towards attacking much more general theories. 
We will show that _all_ amplitudes in this theory, for any number \(n\) of external particles, and to all orders in the genus (or \(1/N\)) expansion [19], are naturally associated with a strikingly simple counting problem. This counting problem is what allows us to give _curve integral_ formulas for the amplitudes at all orders. The curve integral makes it easy to perform the loop integrations and presents the amplitude as a single object. As an example, consider the single-trace amplitude for \(n\)-point scattering at 1-loop. Let the particles have momenta \(p_{i}^{\mu}\), \(i=1,...,n\). The curve integral for this amplitude (pre-loop integration) is \[\mathcal{A}_{n}^{\rm 1-loop}=\int d^{D}l\int\limits_{\sum_{i}t_{i}\geq 0}d^{n}t\exp\left[-\sum_{i=1}^{n}\alpha_{i}(l+p_{1}+\cdots+p_{i})^{2}-\sum_{i,j}\alpha_{i,j}(p_{i}+\cdots+p_{j-1})^{2}\right] \tag{3}\] where \[\alpha_{i,j} =f_{i,j}+f_{i+1,j+1}-f_{i,j+1}-f_{i+1,j}, \tag{4}\] \[\alpha_{i} =\alpha_{i,i+n},\] \[f_{i,j} =\max(0,t_{j},t_{j}+t_{j-1},\cdots,t_{j}+t_{j-1}+\cdots+t_{i+2}). \tag{5}\] The propagators that arise in the 1-loop Feynman diagrams are either loop propagators, with momenta \((l+p_{1}+\cdots+p_{i})\), or tree-like propagators, with momenta \((p_{i}+p_{i+1}+\cdots+p_{j-1})\). The exponential in (3) looks like a conventional Schwinger parametrisation integral, except that _all_ the propagators that arise at 1-loop are included in the exponent. Instead of Schwinger parameters, we have _headlight functions_: \(\alpha_{i}\) (for the loop propagators) and \(\alpha_{i,j}\) (for the tree propagators). The headlight functions are piecewise linear functions of the \(t_{i}\) variables. The magic is that (3) is a _single_ integral over an \(n\)-dimensional vector space. Unlike conventional Schwinger parametrisation, which is done one Feynman diagram at a time, our formulas make no reference to Feynman diagrams. Amazingly, the exponent in (3) breaks \(t\)-space into different cones where the exponent is linear. Each of these cones can be identified with a particular Feynman diagram, and the integral in that cone reproduces a Schwinger parameterisation for that diagram. This miracle is a consequence of the properties of the headlight functions \(\alpha_{i}(t)\) and \(\alpha_{i,j}(t)\). These special functions arise from a simple counting problem associated with the corresponding propagator. As in conventional Schwinger parametrisation, the dependence on the loop momentum variable, \(l^{\mu}\), in the curve integral, (3), is Gaussian. We can perform the loop integration to find a second curve integral for the amplitude (post loop integration), \[{\cal A}_{n}^{\rm 1-loop}=\int\limits_{\sum_{i}t_{i}\geq 0}d^{n}t\left(\frac{2\pi}{{\cal U}}\right)^{\frac{D}{2}}e^{-\frac{{\cal F}}{{\cal U}}}. \tag{6}\] In this formula, the polynomials \({\cal U}\) and \({\cal F}\) are given by \[{\cal U}=\sum_{i}\alpha_{i},\qquad{\cal F}=\sum_{i,j}\alpha_{i}\alpha_{j}(p_{i}+\cdots+p_{j-1})^{2}-\left(m^{2}\sum_{i}\alpha_{i}+2\sum_{i,j}\alpha_{i,j}\,p_{i}\cdot p_{j}\right){\cal U}. \tag{7}\] These polynomials are analogs of the familiar Symanzik polynomials, but whereas the Symanzik polynomials appear in individual Feynman integrals, the single curve integral above computes the whole amplitude. These 1-loop curve integrals generalise to all orders in perturbation theory, at any loop order and genus. In the rest of this introductory section we give a birds-eye view of the key formulas and results. 
### Kinematic space To begin with, we have to define the _kinematic space_ where all the action will take place. In our theory, each Feynman diagram is what is called a 'double-line notation diagram', 'ribbon graph' or 'fatgraph' in the literature; we will call them fatgraphs in what follows. Examples of fatgraphs are shown in Figure 1. Order by order, in the 't Hooft expansion, these Feynman diagrams get organised into partial amplitudes, labeled by their shared _color structure_. Conventionally, when we do a 't Hooft expansion, we think of these fat graphs as 'living on' or 'being drawn on' a surface with some genus and number of boundary components. We will think of them in a different way: a _single_ fat graph itself _defines_ a surface. In fact, we will use a single fat graph to define all the data we need to compute an amplitude! Take some fatgraph, \(\Gamma\), at any order in the 't Hooft expansion. Suppose that it has \(n\) external lines and \(E\) internal edges. Then this fat graph has loop order, \(L\), with \[E=n+3(L-1). \tag{8}\] Let the external lines have momenta \(p_{1},\ldots,p_{n}\), and introduce \(L\) loop variables, \(\ell_{1},\ldots,\ell_{L}\). Then, by imposing momentum conservation at each vertex of \(\Gamma\), we can find a consistent Figure 1: Fat graphs at tree-level, 1-loop single trace, 1-loop double trace, and 2-loop single trace, respectively. assignment of momenta to all edges of the fat graph in the usual way: if each edge, \(e\), gets a momentum \(p_{e}^{\mu}\), then whenever three edges, \(e_{1},e_{2},e_{3}\), meet at a vertex, we have \[p_{e_{1}}^{\mu}+p_{e_{2}}^{\mu}+p_{e_{3}}^{\mu}=0. \tag{9}\] For example, Figure 2 is an assignment of momenta to the edges of a tree graph. The amplitude itself depends on momenta only through Lorentz invariant combinations. So we want to define a collection of Lorentz invariant kinematic variables. Consider a curve, \(C\), drawn on the fatgraph \(\Gamma\) that starts at an external line, passes through the graph and exits at another external line. For example, the curve in Figure 3 starts at \(p_{2}\), and exits at \(p_{5}\). Every such curve can be assigned a unique momentum. It is given by the momentum of the first edge plus the sum of all momenta on the graph entering the curve 'from the left'. For example, in Figure 3, the curve starts with momentum \(p_{2}\), and then takes two right turns. At the first right turn, momentum \(p_{3}\) enters from the left. At the second right turn, momentum \(p_{4}\) enters from the left. The total momentum of the curve is then given by \[p_{C}^{\mu}=p_{2}^{\mu}+p_{3}^{\mu}+p_{4}^{\mu}. \tag{10}\] Notice that if we had gone in the opposite direction (starting at \(p_{5}\)), we would have got \[-p_{C}^{\mu}=p_{5}^{\mu}+p_{1}^{\mu}. \tag{11}\] But by total momentum conservation (\(p_{1}+\ldots+p_{n}=0\)), it does not matter which direction we take. For a general curve, \(C\), on any fatgraph, this rule can be written as: \[P_{C}^{\mu}=p_{\rm start}^{\mu}+\sum_{\rm right\,turns}p_{\rm from\,left}^{ \mu}. \tag{12}\] This rule assigns to every curve \(C\) on our fatgraph \(\Gamma\) some momentum, \(P_{C}^{\mu}\). Each \(P_{C}^{\mu}\) is a linear combination of external momenta, \(p_{i}\), and loop variables, \(\ell_{a}\). Each curve, \(C\), then also defines a Lorentz invariant kinematic variable \[X_{C}=P_{C}^{2}+m^{2}. \tag{13}\] Figure 2: A tree fat graph with momenta assigned to all edges. 
he collection of variables \(X_{C}\), for _all_ curves \(C\) on the fatgraph, defines a complete set of kinematic variables in our kinematic space. Modulo a small detail about how to deal with internal color loops, this completes the description of our kinematic space. It is significant in our story that we can use the momenta of a _single_ fat graph (or Feynman diagram) to define a complete set of kinematic variables \(X_{C}\). As we will see in more detail in Section 6, this basic idea ends up solving the long-standing problem of defining a good notion of loop integrand beyond the planar limit! ### The First Miracle: Discovering Feynman diagrams We now look for a question whose answer produces scattering amplitudes. We just saw how we can define all our kinematics using a single fatgraph. So with this starting point, what would make us consider _all_ possible Feynman diagrams (i.e. all spacetime processes)? And why should these be added together with equal weights (as demanded by quantum mechanics)? Amazingly, the answer to both of these fundamental questions is found right under our noses, once we think about how to systematically describe all the curves on our fatgraph. How can we describe a curve on our fat graph without drawing it? We can do this by labeling all the edges, or "roads", on the fatgraph. Any curve passes through a series of these roads. Moreover, at each vertex, we demand that the curve must turn either left or right: we do not allow our curves to do a 'U turn'. It follows that a curve is fully described by the order of the roads and turns it takes as it passes through the graph. For example, the curve in Figure 4 enters through edge '1', takes a left turn, goes down '\(x\)', takes a left turn, goes down '\(y\)', takes a right turn, and then exits via '4'. We can represent this information graphically as a _mountainscape_, where left turns are represented by upward slopes, and right turns are represented by downward slopes. The mountainscape for the curve in Figure 4 is shown in the Figure. Once again, let our fatgraph have \(E\) internal edges. To every curve \(C\), we will associate a vector \(\mathbf{g}_{C}\) in _curve space_. As a basis for this vector space, take \(E\) vectors \(\mathbf{e}_{i}\), associated to each internal edge. Then \(\mathbf{g}_{C}\) can be read off from the mountainscape for \(C\) using the following Figure 3: Every curve, \(C\), drawn on the fat graph inherits a momentum, \(P^{\mu}_{C}\), from the momenta assigned to the fat graph itself. rule: \[{\bf g}_{X}=\sum_{\rm peaks\,p}{\bf e}_{\rm p}-\sum_{\rm valleys\,v}{\bf e}_{ \rm v}. \tag{14}\] For example, the curve in Figure 4 has a peak at '\(y\)' and no valleys. So the \(g\)-vector for this curve is \[{\bf g}_{C}={\bf e}_{y}. \tag{15}\] Now consider _every_ curve that we can draw on the fatgraph in Figure 4. There are 10 possible curves. 5 of these are 'boundaries', and their g-vectors end up vanishing (because their mountainouses have no peaks or valleys). The remaining 5 curves are drawn in Figure 5. If we label the external lines, each curve can be given a name \(C_{ij}\) (\(i,j=1,2,3,4,5\)), where \(C_{ij}\) is the curve connecting \(i\) and \(j\). Their g-vectors are \[{\bf g}_{13}={\bf e}_{x},\ \ {\bf g}_{14}={\bf e}_{y},\ \ {\bf g}_{24}=-{\bf e}_{x}+{ \bf e}_{y},\ \ {\bf g}_{25}=-{\bf e}_{x},\ \ {\bf g}_{35}=-{\bf e}_{y}. \tag{16}\] If we draw these five g-vectors, we get Figure 6. This has revealed a wonderful surprise! 
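As an aside, the claims made next about these five vectors can be checked directly. The following small sketch (an illustration in Python, with the curves labeled by their endpoints) verifies that the adjacent pairs of \({\bf g}\)-vectors in (16) span unimodular cones and that these cones tile the two-dimensional curve space.

```python
import numpy as np

# g-vectors of the five curves, Eq. (16), and the five pairs that will be identified
# with the five tree-level Feynman diagrams, Eq. (17).
g = {"13": (1, 0), "14": (0, 1), "24": (-1, 1), "25": (-1, 0), "35": (0, -1)}
pairs = [("13", "14"), ("14", "24"), ("24", "25"), ("25", "35"), ("35", "13")]

# Each pair spans a unimodular cone: |det(g_1, g_2)| = 1.
for a, b in pairs:
    assert round(abs(np.linalg.det(np.array([g[a], g[b]])))) == 1

# Sorting the g-vectors by angle shows that the pairs above are exactly the cyclically
# adjacent ones, and that the angular gaps are all < pi and sum to 2*pi: the five cones
# are non-overlapping and cover the plane.
names = sorted(g, key=lambda c: np.arctan2(g[c][1], g[c][0]) % (2 * np.pi))
angles = [np.arctan2(g[c][1], g[c][0]) % (2 * np.pi) for c in names]
gaps = [(angles[(k + 1) % 5] - angles[k]) % (2 * np.pi) for k in range(5)]
assert pairs == [(names[k], names[(k + 1) % 5]) for k in range(5)]
assert all(0 < gap < np.pi for gap in gaps) and abs(sum(gaps) - 2 * np.pi) < 1e-12
print("five unimodular cones covering the plane")
```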
Our g-vectors have divided curve space into five regions or _cones_. These cones are spanned by the g-vectors for the following pairs of curves: \[(C_{13},C_{14}),\ (C_{14},C_{24}),\ (C_{24},C_{25}),\ (C_{25},C_{35}),\ {\rm and }\ (C_{35},C_{13}). \tag{17}\] These pairs of curves precisely correspond to _all_ the five Feynman diagrams of the 5-point tree amplitude! This is a general phenomenon. The collection of g-vectors for all the curves \(C\) on a fatgraph is called _the g-vector fan[20, 21, 22]_, or _the Feynman fan_, associated to that fatgraph. Each top-dimensional cone of the fan is spanned by an \(E-\)tuple of curves, \(C_{a_{1}},\cdots,C_{a_{E}}\) Figure 4: Describing a curve on a fatgraph (left) using a mountainouse diagram (right). Figure 5: The five (non boundary) curves that we can draw on a 5-point tree fatgraph. hese \(E-\)tuples of curves are precisely the propagators of Feynman diagrams. Moreover, the cones are non-overlapping, and together they densely cover the entire vector space! The g-vector fan is telling us that all the Feynman diagrams for the amplitude are combined in curve space. Even better, each of the cones in the g-vector fan have the same size. It is natural to measure the size of a cone, bounded by some g-vectors \({\bf g}_{1},\cdots,{\bf g}_{E}\), using the determinant of these vectors: \(\langle{\bf g}_{1}\cdots{\bf g}_{E}\rangle\). Remarkably, the cones of the g-vector fan all satisfy: \(\langle{\bf g}_{1}\cdots{\bf g}_{E}\rangle=\pm 1\). To summarise, starting with a _single_ fatgraph at any order in perturbation theory, simply recording the data of the curves on the fatgraph, via their g-vectors, brings _all_ the Feynman diagrams to life. Furthermore, we see why they are all naturally combined together into one object, since they collectively cover the entire curve space! This represents a very vivid and direct sense in which the most basic aspects of spacetime processes and the sum-over-histories of quantum mechanics arise as the answer to an incredibly simple combinatorial question. ### An infinity of diagrams and the spectre of Gravity An important novelty appears with the first non-planar amplitudes. Consider the double-trace one-loop amplitude at 2-points. A fatgraph for this amplitude is given in Figure 7. There are now infinitely many curves that we can draw on this fat graph: they differ from one another only in how many times they wind around the graph. The g-vector fan for this infinity of curves is shown in Figure 8. These g-vectors break curve space up into infinitely many cones. Each of these cones is bounded by a pair of g-vectors, \(g_{C_{m}}\) and \(g_{C_{m+1}}\), where \(C_{m}\) and \(C_{m+1}\) are two curves that differ by exactly one Figure 6: The collection of \({\bf g}\)-vectors for the fat graph in Figure 4 cuts the 2-dimensional vector space into five regions. _winding_. If we use our rule for the momenta of curves, (12), the momenta of these curves are \[P^{\mu}_{C_{m}}=mk^{\mu}+\ell^{\mu},\text{ and }P^{\mu}_{C_{m+1}}=(m+1)k^{\mu}+ \ell^{\mu}. \tag{18}\] So the momenta associated to each cone are related to each other by a translation in the loop variable, \(\ell^{\mu}\mapsto\ell^{\mu}+k^{\mu}\). It follows that _every_ cone in Figure 8 corresponds to a copy of the _same_ Feynman diagram. What has gone wrong? The g-vector fan is telling us to include infinitely many copies of one Feynman diagram. This is a consequence of the _mapping class group_ of the fat graph in Figure 7. 
The mapping class group of this fatgraph acts by increasing the winding of curves drawn on the fatgraph. In fact, this infinity of windings is the heart of the well-known difficulty in defining a loop integrand for non-planar amplitudes. Fortunately, as we will see, it is easy to _mod out_ by the action of the mapping class group, which we will do using what we call the _Mirzakhani trick[23]_. Getting rid of these infinities using the Mirzakhani trick is the final ingredient we need in order to define amplitudes directly from the combinatorics of Figure 8: The g-vector fan for the 2-point double-trace 1-loop fat graph, which has infinitely many regions. Figure 7: A double-trace 1-loop fat graph, which has infinitely many possible curves. a single fatgraph. As an aside, note that the infinite collection of cones in Figure 8 does not quite cover the entire vector space! The g-vectors asymptotically approach the direction \((-1,1)\), but never reach it. This is the beginning of fascinating story: it turns out that the vector \((-1,1)\) is the g-vector for the _closed_ curve that loops once around the fat graph. Nothing in our story asks us to consider these closed curves, but the g-vector fan forces them on us. Physically, these new closed curves are associated with the appearance of a new _uncoloured_ particle, \(\sigma\). These missing parts of the fan are then seen to have a life of their own: they tell us about a theory with uncoloured self-interactions, \(\sigma^{3}\), that is minimally coupled to our colored particle by an interaction \(\sigma\) Tr (\(\Phi\)). The appearance of \(\sigma\) is a scalar avatar of how the graviton is forced on us in string theory even if we begin only with open strings. From our perspective, however, this has absolutely nothing to do with the worldsheet of string theory; it emerges directly from the combinatorics defined by a fatgraph. ### The Amplitudes The g-vector fan gives a beautiful unified picture of all Feynman diagrams living in an \(E\)-dimensional vector space, _curve space_. This result suggests a natural formula for the full amplitude in the form of an integral over curve space. To find this formula, we need one extra ingredient. For every curve, \(C\), we will define a piecewise-linear _headlight function_, \(\alpha_{C}({\bf t})\). We will define the headlight function \(\alpha_{C}\) so that it "lights up" curve space in the direction \({\bf g}_{C}\), and vanishes in all other g-vector directions: \[\alpha_{C}({\bf g}_{D})=\delta_{C,D} \tag{19}\] This definition means that \(\alpha_{C}\) vanishes everywhere, except in those cones that involve \({\bf g}_{C}\). Moreover, \(\alpha_{C}\) is _linear_ inside any given cone of the Feynman fan. Using linear algebra, we can give an explicit expression for \(\alpha_{C}\) in any cone where it is non-vanishing. Suppose that the g-vectors of such a cone are \(({\bf g}_{C},{\bf g}_{D_{1}},\cdots,{\bf g}_{D_{E-1}})\). The unique linear function of \({\bf t}\) which evaluates to 1 on \({\bf g}_{C}\) and 0 on all the other g-vectors is \[\alpha_{C}=\frac{\langle{\bf t}\,{\bf g}_{D_{1}}\cdots{\bf g}_{D_{E-1}}\rangle }{\langle{\bf g}_{C}{\bf g}_{D_{1}}\cdots{\bf g}_{D_{E-1}}\rangle}. \tag{20}\] In what follows, imagine that we already know these functions, \(\alpha_{C}({\bf t})\). We now define an _action_, \(S\), given by a sum over all curves on a fatgraph: \[S({\bf t})=\sum_{C}\alpha_{C}({\bf t})X_{C},\qquad\mbox{with }X_{C}=P_{C}^{2}+m^{2}. 
\tag{21}\] Recall that \(P_{C}^{\mu}\) is the momentum we associate to a curve \(C\). If we restrict \(S({\bf t})\) to a single cone, bounded by some g-vectors, \({\bf g}_{C_{1}},\ldots,{\bf g}_{C_{E}}\), then the only \(\alpha\)'s that are non-zero in this cone are precisely \(\alpha_{C_{1}},\ldots,\alpha_{C_{E}}\). Moreover, \(S({\bf t})\) is linear in this cone. It is natural to parametrise the region inside this cone by \({\bf t}=\rho_{1}{\bf g}_{C_{1}}+\cdots\rho_{E}{\bf g}_{C_{E}}\), with \(\rho_{i}\geq 0\) positive. Then we can integrate \(\exp(-S)\) in this cone. The result is identical to the result of a standard Schwinger parametrisation for a single Feynman diagram: \[\int\limits_{\text{cone}}d^{E}t\,e^{-S}=\int\limits_{0}^{\infty}d^{E}\rho\,| \langle g_{C_{1}}\cdots g_{C_{E}}\rangle|\prod_{i=1}^{E}e^{-\rho_{i}X_{C_{i}}}= \prod_{i=1}^{E}\frac{1}{P_{C_{i}}^{2}+m^{2}}. \tag{22}\] The factor \(|\langle g_{X_{1}}\cdots g_{X_{E}}\rangle|\) is the Jacobian of the change of variables from \((t_{1},\cdots,t_{E})\) to \((\rho_{1},\cdots,\rho_{E})\). As we have remarked, the cones are _unimodular_ and these Jacobian factors are all equal to 1! In order to get the full amplitude, all we have to do now is integrate \(\exp(S)\) over the whole vector space, instead of restricting it to just a single cone. However, to account for the infinity resulting from the _mapping class group_, we also need to factor out this MCG action in our integral, which we denote by writing the measure as \[\frac{d^{E}t}{\text{MCG}}. \tag{23}\] Before doing the loop integrations, the full amplitude is then given by a _curve integral_: \[\mathcal{A}=\int d^{D}\ell_{1}\cdots d^{D}\ell_{L}\int\frac{d^{E}t}{\text{MCG }}\exp\left(-\sum_{X}\alpha_{X}(\mathbf{t})(p_{X}^{2}+m^{2})\right). \tag{24}\] The dependence on loop momenta in this formula is Gaussian. When we integrate the loop momenta, we find the final amplitude is given by a curve integral \[\mathcal{A}=\int\frac{d^{E}t}{\text{MCG}}\,\left(\frac{\pi^{L}}{\mathcal{U}( \alpha)}\right)^{\frac{D}{2}}\exp\left(\frac{\mathcal{F}(\alpha)}{\mathcal{U}( \alpha)}\right). \tag{25}\] \(\mathcal{U}(\alpha)\) and \(\mathcal{F}(\alpha)\) are homogeneous polynomials in the headlight functions. They are analogous to Symanzik polynomials, but are not associated with any particular Feynman diagram. We give simple formulas for \(\mathcal{A}\) and \(\mathcal{F}\) in Section 7. The key to using these curve integral formulas lies in how we mod out by the MCG. One way of doing this would be to find a _fundamental domain_ in \(\mathbf{t}\)-space that would single out one copy of each Feynman diagram. However, in practice this is no easier than enumerating Feynman diagrams. Instead, we will use an elegant way of modding out that we call _the Mirzakhani trick_, which is analogous to the Fadeev-Popov trick familiar from field theory. As we will see, any MCG invariant function, \(f\), can be integrated as, \[\int\frac{d^{E}t}{\text{MCG}}f=\int d^{E}t\,\mathcal{K}(\alpha)f, \tag{26}\] where the _Mirzakhani kernel_\(\mathcal{K}(\alpha)\) is a simple rational function of the \(\alpha_{C}\)'s.1 We will describe several formulas for these kernels. In all cases, \(\mathcal{K}\) has support on a finite region of the fan, so that only a small number of the \(\alpha_{C}\)'s is ever needed to compute the amplitude. We will also show how some of our methods for producing \(\mathcal{K}\) give new systematic recursive methods for computing amplitudes. 
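As a sanity check of the logic above, the following sketch evaluates the tree-level version of the curve integral (24) for the 5-point example of Section 1.2, where there is no loop integration and no MCG to mod out, and compares it with the sum over the five Feynman diagrams, one per cone of (17). The numerical values chosen for the kinematic variables \(X_{C}\) are hypothetical placeholders.

```python
import numpy as np

# g-vectors (Eq. 16) and the five cones / Feynman diagrams (Eq. 17).
g = {"13": (1, 0), "14": (0, 1), "24": (-1, 1), "25": (-1, 0), "35": (0, -1)}
cones = [("13", "14"), ("14", "24"), ("24", "25"), ("25", "35"), ("35", "13")]
# Hypothetical positive values for the kinematic variables X_C = P_C^2 + m^2.
X = {"13": 1.3, "14": 0.9, "24": 1.7, "25": 1.1, "35": 0.8}

# Sum over Feynman diagrams: each cone contributes the product of its two propagators.
sum_of_diagrams = sum(1.0 / (X[a] * X[b]) for a, b in cones)

# Curve integral: integrate exp(-S(t)) over the whole plane. In the cone spanned by
# (g_a, g_b) the action is S = rho_a X_a + rho_b X_b, with (rho_a, rho_b) the
# non-negative coordinates of t in that cone's basis (the Jacobians are all 1).
ts = np.linspace(-40.0, 40.0, 1201)
dt = ts[1] - ts[0]
T1, T2 = np.meshgrid(ts, ts, indexing="ij")
points = np.stack([T1.ravel(), T2.ravel()])
S = np.full(points.shape[1], np.inf)
for a, b in cones:
    M = np.array([g[a], g[b]], dtype=float).T      # columns are the cone's g-vectors
    rho = np.linalg.solve(M, points)
    inside = (rho[0] >= 0) & (rho[1] >= 0)
    S[inside] = (rho[0] * X[a] + rho[1] * X[b])[inside]
curve_integral = np.exp(-S).sum() * dt**2

print(sum_of_diagrams, curve_integral)   # the two numbers agree up to the grid discretization error
```

The agreement of the two numbers is exactly the statement that the single curve integral, evaluated cone by cone via (22), reproduces the sum over all Feynman diagrams.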
### The Second Miracle: The Counting Problem

We have given a formula, (25), for partial amplitudes at any order in the 't Hooft expansion of our theory. However, the meat of this formula is in the headlight functions, \(\alpha_{C}\). The problem is that headlight functions are, naively, hard to compute! The issue can already be seen at tree level. For \(n\)-points at tree level, the number of possible curves, \(C\), is \(\sim n^{2}\), whereas the number of Feynman diagrams (or cones) grows exponentially as \(\sim 4^{n}\). Each \(\alpha_{C}\) restricts to a different linear function on each of the \(\sim 4^{n}\) cones. So we would expect that it takes an exponentially-growing amount of work to compute all of the \(\alpha_{C}\)--about as much work as it would take us to just enumerate all the Feynman diagrams to begin with! So, is there an easier way to compute \(\alpha_{C}\)? This is where a second miracle occurs. It turns out that headlight functions can be computed efficiently by matrix multiplication. In fact, the calculation is completely _local to the curve_, in the sense that we only need to know the path taken by \(C\), and nothing else about the fatgraph it lives in. There are always many fewer curves than there are Feynman diagrams. This means that the amount of work to compute the \(\alpha_{C}\)'s should grow slower than the amount of work it takes to enumerate all Feynman diagrams. This way of computing \(\alpha_{C}\) is based on a simple combinatorial problem. For a curve, \(C\), draw its _mountainscape_. We are going to record all the ways in which we can pick a subset of letters of \(C\), subject to a special rule: if we pick a letter \(y\), we also have to pick any letters _downhill_ of \(y\). We will then define an _F polynomial_ for the curve, \(F(C)\), which records the valid subsets. For example, for the mountainscape in Figure 9(a), we get \[F=1+a+c+ac+abc. \tag{27}\] This is because we can choose the following subsets: no letters ("1"); just \(a\); just \(c\); \(a\) and \(c\) together; or finally we can pick \(b\), but if we do, we must also pick \(a\) and \(c\), which are both downhill of \(b\). In Figure 9(b), we get \[F=1+b+ab+bc+abc, \tag{28}\] because in this example we can choose: no letters; just \(b\); we can pick \(a\), but if we do we must also pick \(b\); we can pick \(c\), but we must then also pick \(b\); and finally we can pick both \(a\) and \(c\), but then we must also pick \(b\). Finally, we leave Figure 9(c) as an exercise. The result is \[F=1+a+d+ad+ab+abd+abcd. \tag{29}\]

Figure 9: Three mountainscapes.

In general, there is a fast method for computing \(F(C)\) by reading the mountainscape for \(C\) from left to right. Say the leftmost letter is \(Y\), and call the next letter \(y\). Then write \(F(C)=F_{\rm no}+F_{\rm yes}\), where we group the terms in \(F(C)\) according to whether they include \(Y\) (\(F_{\rm yes}\)) or not (\(F_{\rm no}\)). Similarly write \(f_{\rm no},f_{\rm yes}\) for what we would get starting instead from \(y\). Suppose that in our mountainscape we move "up" from \(Y\) to \(y\). Then if we do not pick \(Y\), then we cannot pick \(y\) either, since if we choose \(y\) we must choose \(Y\). On the other hand if we do choose \(Y\), we can either pick or not pick \(y\). Thus, in this case, we have \[F_{\rm no}=f_{\rm no},\qquad F_{\rm yes}=Y(f_{\rm no}+f_{\rm yes}). \tag{30}\] Similarly if, in our mountainscape, we move down from \(Y\) to \(y\), we find that \[F_{\rm no}=f_{\rm no}+f_{\rm yes},\qquad F_{\rm yes}=Yf_{\rm yes}.
\tag{31}\] In matrix form, we find that \[\left(\begin{array}{c}F_{\rm no}\\ F_{\rm yes}\end{array}\right)=M_{L,R}(Y)\left(\begin{array}{c}f_{\rm no}\\ f_{\rm yes}\end{array}\right), \tag{32}\] where \(M_{L}\) and \(M_{R}\) are the matrices \[M_{L}(Y)=\left(\begin{array}{cc}1&0\\ Y&Y\end{array}\right),\qquad M_{R}(Y)=\left(\begin{array}{cc}1&1\\ 0&Y\end{array}\right). \tag{33}\] Now suppose that the curve \(C\) is given explicitly by the following series of edges and turns: \[(y_{1},{\rm turn}_{1},y_{2},{\rm turn}_{2},\cdots,y_{m-1},{\rm turn}_{m-1},y_{m }), \tag{34}\] where \({\rm turn}_{i}\) is either a left or right turn, immediately following \(y_{i}\). Given (32), we find \[\left(\begin{array}{c}F_{\rm no}\\ F_{\rm yes}\end{array}\right)=M\left(\begin{array}{c}1\\ y_{m}\end{array}\right), \tag{35}\] where \[M(C)=M_{{\rm turn}_{1}}(y_{1})M_{{\rm turn}_{2}}(y_{2})\cdots M_{{\rm turn}_{m -1}}(y_{m-1}). \tag{36}\] So our counting problem is easily solved simply by multiplying a series of \(2\times 2\) matrices (equation 33) associated with the left and right turns taken by the curve \(C\). Suppose that the initial edge of \(C\), \(y_{1}\), and the final edge, \(y_{m}\), are external lines of the fatgraph. It is natural to write \(F(C)\) as a sum over four terms: \[F(C)=F_{\text{no},\,\text{no}}+F_{\text{no},\,\text{yes}}+F_{\text{yes},\,\text{ no}}+F_{\text{yes},\,\text{yes}}, \tag{37}\] where we group terms in \(F(C)\) according to whether they do or do not include the first and last edges: \(y_{1}\) and/or \(y_{m}\). Indeed, these terms are also the entries of the matrix \(M(C)\), \[M(C)=\left(\begin{array}{cc}F_{\text{no},\,\text{no}}&F_{\text{no},\text{ yes}}\\ F_{\text{yes},\,\text{no}}&F_{\text{yes},\text{yes}}\end{array}\right), \tag{38}\] if we now set \(y_{m}=1\). In fact, we will also set \(y=1\) for every external line of the fatgraph, and will reserve \(y\)-variables for internal edges of the fatgraph. Notice that \(\det M_{L}(y)=\det M_{R}(y)=y\), so that \[\det M(C)=\prod_{i=2}^{m-1}y_{i}. \tag{39}\] In other words, we have the identity \[F_{\text{no},\text{no}}F_{\text{yes},\text{yes}}=F_{\text{no},\text{yes}}F_{ \text{yes},\text{no}}+\prod_{i}y_{i}. \tag{40}\] Motivated in part by this identity, we will define \(u\)-variables for every curve, \[u_{C}=\frac{F(C)_{\text{no},\text{yes}}\,F(C)_{\text{yes},\text{no}}}{F(C)_{ \text{no},\text{no}}\,F(C)_{\text{yes},\text{yes}}}=\frac{M(C)_{12}M(C)_{21}} {M(C)_{11}M(C)_{22}}. \tag{41}\] These \(u_{C}\) variables are most interesting to us in the region \(y_{i}\geq 0\). Equation (40) implies that \(0\leq u_{C}\leq 1\) in this region. They vastly generalise the \(u\)-variables defined and studied in [24, 25]. We now define the headlight functions. We define them to capture the asymptotic behaviour of the \(u\)-variables when thought of as functions of the \(\mathbf{y}\) variables. We define \[\alpha_{C}=-\text{Trop}\ u_{C}. \tag{42}\] where Trop \(u_{C}\) is the so-called _tropicalization_ of \(u_{C}\). The idea of tropicalization is to look at how functions behave asymptotically in \(\mathbf{y}\)-space. To see how this works, parameterise the \(y_{i}\geq 0\) region by writing \(y_{i}=\exp t_{i}\), where the \(t_{i}\) are real variables. Then, as the \(t_{i}\) become large, Trop \(u_{C}\) is defined such that \[u_{C}(\mathbf{t})\to\exp\left(\text{Trop}\ u_{C}\right). \tag{43}\] For example, consider a simple polynomial, \(P(y_{1},y_{2})=1+y_{2}+y_{1}y_{2}=1+e^{t_{2}}+e^{t_{1}+t_{2}}\). 
As we go to infinity in \(\mathbf{t}=(t_{1},t_{2})\) in different directions, different monomials in \(P\) will dominate. In fact, we can write, as we go to infinity in \(\mathbf{t}\), \[P\to\exp\max(0,t_{2},t_{1}+t_{2}), \tag{44}\] and so Trop \((P)=\max(0,t_{2},t_{1}+t_{2})\). If we have a product of polynomials, \(F=\prod_{a}P_{a}^{c_{a}}\), then as we go to infinity in \({\bf t}\) we have \(F\to e^{\rm Trop(F)}\), where Trop \(F=\sum c_{a}{\rm Trop}\ (P_{a})\). Returning to headlight functions, our definition can also be written as \[\alpha_{C}={\rm Trop}\ (M(C)_{11})+{\rm Trop}\ (M(C)_{22})-{\rm Trop}\ (M(C)_{12})-{ \rm Trop}\ (M(C)_{21}). \tag{45}\] For example, consider again the \(n=5\) tree amplitude. Take the curve \(C\) from Figure 4 (left). This curve has path \((1,L,x,R,y,R,4)\). So it has a matrix (with \(y_{23},y_{15}\equiv 1\)) \[M(C)=M_{L}(1)M_{R}(x)M_{R}(y)=\left(\begin{array}{cc}1&1+y\\ 1&1+y+xy\end{array}\right). \tag{46}\] Using this matrix, we find that its \(u\)-variable is \[u_{C}=\frac{1+y}{1+y+xy}, \tag{47}\] and so its headlight function is \[\alpha_{C}=\max(0,t_{y},t_{x}+t_{y})-\max(0,t_{y}). \tag{48}\] Amazingly, this function satisfies the key property of the headlight functions: \(\alpha_{C}\) vanishes on every g-vector, except for its own g-vector, \({\bf g}_{C}=(1,0)\). ### Back to the Amplitude! We have now formulated how to compute all-order amplitudes in \({\rm Tr}\Phi^{3}\) theory as a counting problem. The final expression for the integrated amplitude at any order of the topological expansion associated with a surface \({\cal S}\) is given as \[{\cal A}=\int d^{E}t\,{\cal K}(\alpha)\left(\frac{\pi^{L}}{{\cal U}(\alpha)} \right)^{\frac{D}{2}}\exp\left(\frac{{\cal F}(\alpha)}{{\cal U}(\alpha)} \right), \tag{49}\] where \({\cal F}(\alpha),{\cal U}(\alpha)\) are homogeneous polynomials in the \(\alpha_{C}\)'s, \({\cal K}(\alpha)\) is the Mirzakhani kernel that mods out by the mapping-class-group, and crucially, each \(\alpha_{C}\) is determined entirely by the path of its curve, using a simple counting problem on the curve. The presence of \({\cal K}\) ensures that only a finite number of \(\alpha_{C}\)'s ever appear in our computations, which makes the formula easy to apply. There is no trace of the idea of'summing over all spacetime processes' in this formula. Instead, small combinatorial problems attached to the curves on a fatgraph, treated completely independently of each other, magically combine to produce local and unitary physics, pulled out of the platonic thin air of combinatorial geometry. Our goal in the rest of this paper is to describe these ideas systematically. Our focus in here will exclusively be on simply presenting the formulas for the amplitudes. This presentation will be fully self-contained, so that the interested reader will be fully equipped to find the curve integrals for the \({\rm Tr}\phi^{3}\) theory at any order in the topological expansion. The methods can be applied at any order in the topological expansion, but there are a number of novelties that need to be digested. We illustrate these subtleties one at a time, as we progress from tree level examples through to one and two loops, after which no new phenomena occur. We begin at tree-level to illustrate the basic ideas. At one-loop single-trace, we show how to deal with _spiralling_ curves. Then, as we have seen above, double-trace amplitudes at 1-loop expose the first example of the infinities associated with the mapping class group. 
Finally, we study the leading \(1/N\) correction to single-trace at 2-loops--the genus one amplitude--to show how to deal with a non-abelian mapping class group. This non-abelian example illustrates the generality and usefulness of the Mirzakhani trick. In all cases discussed in this paper we will use the smallest example amplitudes possible to illustrate the new conceptual points as they arise. The next paper in this series will give a similarly detailed set of formulae for amplitudes for any number of particles, \(n\). In this sense this first pair of papers can be thought of as a "user guide" for the formalism. A systematic accounting of the conceptual framework underlying these formulae, together with an exposition of the panoply of related developments, will be given in the later papers of this series.

## 2 The partial amplitude expansion

Consider a single massive scalar field with two indices in the fundamental and anti-fundamental representations of \(\mathrm{SU}(N)\), \(\phi=\phi_{J}^{I}\,t_{I}t^{J}\), and with a single cubic interaction, \[\mathcal{L}_{int}=g\mathrm{Tr}\left[\phi^{3}\right]=g\,\phi_{I}^{J}\phi_{J}^{K}\phi_{K}^{I}. \tag{1}\] The trace of the identity is \(\mathrm{Tr}(1)=\delta_{I}^{I}=N\). The propagator for the field \(\phi\) can be drawn as a double line and the Feynman diagrams are _fatgraphs_ with cubic vertices. The Feynman rules follow from (1). To compute the \(n\) point amplitude, \(\mathcal{A}_{n}\), fix \(n\) external particles with momenta \(k_{i}^{\mu}\) and colour polarisations \(t_{i}^{IJ}\). A fatgraph \(\Gamma\) with \(V\) cubic vertices contributes to the amplitude as \[(ig)^{V}\,C_{\Gamma}\,\mathrm{Val}(\Gamma), \tag{2}\] where \(C_{\Gamma}\) is the tensorial contraction of the polarisations \(t_{i}^{IJ}\) according to \(\Gamma\). The kinematical part is given by an integral of the form \[\mathrm{Val}(\Gamma)=\int\prod_{i=1}^{L}d^{D}\ell_{i}\prod_{\text{edges }e}\,\frac{1}{P_{e}^{2}+m^{2}}, \tag{3}\] for some assignment of loop momenta to the graph. Each momentum \(P_{e}^{\mu}\) is linear in the external momenta \(k_{i}^{\mu}\) and in the loop momentum variables \(\ell_{i}^{\mu}\). To find \(P_{e}^{\mu}\), the edges of \(\Gamma\) need to be oriented, so that momentum conservation can be imposed at each cubic vertex. The colour factors \(C_{\Gamma}\) organise the amplitude \(\mathcal{A}_{n}\) into partial amplitudes. This is because \(C_{\Gamma}\) depends only on the topology of \(\Gamma\) regarded as a surface, and forgets about the graph. Two graphs \(\Gamma_{1},\Gamma_{2}\) share the same colour factor, \(C_{\Sigma}\), if they correspond to the same marked surface, \(\Sigma=S(\Gamma_{1})=S(\Gamma_{2})\). The amplitude can therefore be expressed as \[\mathcal{A}_{n}=\sum_{L=0}^{\infty}(ig)^{n-2+2L}\sum_{\begin{subarray}{c}\Sigma\text{ s.t.}\\ h+2g=L+1\end{subarray}}C_{\Sigma}\mathcal{A}_{\Sigma}, \tag{4}\] where we sum over marked bordered surfaces \(\Sigma\) having \(n\) marked points on the boundary. At loop order \(L\), this second sum is over all surfaces \(\Sigma\) with \(h\) boundary components and genus \(g\), subject to the Euler characteristic constraint: \(h+2g=L+1\). The partial amplitudes appearing in (4) are \[\mathcal{A}_{\Sigma}=\sum_{\begin{subarray}{c}\Gamma\\ S(\Gamma)=\Sigma\end{subarray}}\text{Val}(\Gamma). \tag{5}\] Examples of some ribbon graphs \(\Gamma\) and their corresponding surfaces are shown in Figure 10. Our aim is to evaluate \(\mathcal{A}_{\Sigma}\).
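To make the topological sum in (4) concrete: at tree level (\(L=0\)) the constraint \(h+2g=1\) leaves only the disk, \((h,g)=(1,0)\), with all \(n\) marked points on its single boundary. At one loop, \(h+2g=2\) gives the annulus, \((h,g)=(2,0)\); its marked points can all lie on one boundary (single-trace) or be split between the two boundaries (double-trace). At two loops, \(h+2g=3\) allows both the genus-zero surface with three boundaries, \((h,g)=(3,0)\), and the genus-one surface with one boundary, \((h,g)=(1,1)\), which is the genus-one amplitude mentioned above.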
It is conventional to compute \(\text{Val}(\Gamma)\) using _Schwinger parameters_. Schwinger parameters are introduced via the identity \[\frac{1}{P^{2}+m^{2}}=\int_{0}^{\infty}d\alpha\,e^{-\alpha(P^{2}+m^{2})}. \tag{6}\] The integration over the loop variables \(\ell_{i}^{\mu}\) then becomes a Gaussian integral, and the result can be written as \[\text{Val}(\Gamma)=\int\limits_{\alpha_{i}\geq 0}d^{E}\alpha\,\left(\frac{2\pi}{\mathcal{U}_{\Gamma}}\right)^{\frac{D}{2}}\exp\left(\frac{\mathcal{F}_{\Gamma}}{\mathcal{U}_{\Gamma}}-m^{2}\sum_{i}\alpha_{i}\right), \tag{7}\] where \({\cal U}_{\Gamma}\) and \({\cal F}_{\Gamma}\) are the Symanzik polynomials of \(\Gamma\).

Figure 10: Feynman graphs \(\Gamma\) and the surfaces \(S(\Gamma)\) that label their colour factors.

The Symanzik polynomials depend on \(\Gamma\) regarded as a graph (i.e. forgetting that it is a surface). The first Symanzik polynomial is given by \[{\cal U}_{\Gamma}=\sum_{T}\prod_{e\not\in T}\alpha_{e}, \tag{8}\] where the sum is over all spanning trees, \(T\), of \(\Gamma\). The second Symanzik polynomial is given by a sum over all spanning 2-forests, \((T_{1},T_{2})\), which cut \(\Gamma\) into two tree graphs: \[{\cal F}_{\Gamma}=-\sum_{(T_{1},T_{2})}\left(\prod_{e\not\in T_{1}\cup T_{2}}\alpha_{e}\right)\left(\sum_{e\not\in T_{1}\cup T_{2}}P_{e}\right)^{2}, \tag{9}\] where \(P_{e}^{\mu}\) is the momentum of the edge \(e\). It can be shown that \({\cal F}_{\Gamma}\) depends only on the external momenta, and not on the loop momentum variables. The partial amplitudes \({\cal A}_{\Sigma}\) are given by sums over integrals of this form, as in (5). But it is the purpose of this paper to show how \({\cal A}_{\Sigma}\) can be written more compactly as a _single_ Symanzik-like integral. It does not work to naively sum the integrands of \({\rm Val}(\Gamma)\) for different Feynman diagrams \(\Gamma\). One problem is that there is no conventional way to relate the loop momentum variables for different Feynman graphs. We will see how this is solved by basic facts from surface geometry. Moreover, a simple counting problem associated to surfaces will allow us to define tropical functions we call _headlight functions_. These simple functions allow us to evaluate the full partial amplitude without enumerating the Feynman diagrams.

## 3 Momenta and curves

Curves on fatgraphs are the key building block for our formulation of amplitudes. In this section we show how a fatgraph can be used to assign momenta to its curves. This momentum assignment solves the problem of finding a consistent choice of momentum variables for all Feynman diagrams contributing to an amplitude. This generalizes the _dual momentum variables_ that can be used for planar amplitudes.

### Mountainscapes

A _curve_ is a path on the fatgraph that enters from an external line, passes through the fatgraph without self-intersections, and exits on an external line. It is sometimes useful to separately consider _closed curves_, which are paths on the fatgraph that form a closed loop. Curves are important because they define _triangulations_ of fatgraphs. A triangulation is a maximal collection of pairwise non-intersecting curves. The key point is that each triangulation of \(\Gamma\) corresponds, by graph duality, to some fatgraph \(\Gamma^{\prime}\). These fatgraphs \(\Gamma^{\prime}\) all have the same colour factor and so contribute, as Feynman diagrams, to the same amplitude.2 Figure 11 shows such a triangulation. A curve can be recorded as a word listing the edges it traverses and the left (\(L\)) or right (\(R\)) turns it makes between them. Reading this word from left to right, each left turn climbs uphill and each right turn descends, so a curve can also be drawn as a _mountainscape_, as in Figure 12. For example, the curve in Figure 12 is \[C=1LxRwRzRyLwL4.
\tag{3.1}\] ### Intersections Mountainscape diagrams encode the intersections of curves. In fact, it is not necessary to know the whole fatgraph in order to determine if two curves intersect: their mountainousscapes alone have all the data needed. For example, consider Figure 13. The two curves in Figure 13(a) are \[C=x_{2}RyLx_{4}\qquad\text{and}\qquad C^{\prime}=x_{1}LyRx_{3}. \tag{3.2}\] Figure 11: A triangulation of a fatgraph is a maximal set of curves that cuts the fatgraph into cubic vertices. These two mountainousscapes _overlap_ on the edge \(y\), which they share in common. For \(C\), \(y\) is a _peak_, whereas for \(C^{\prime}\), \(y\) is a _valley_. This is equivalent to the information that \(C\) and \(C^{\prime}\)_intersect_ at \(y\). By contrast, the two curves in Figure 13(b) are \[C=x_{1}LyLx_{4}\qquad\text{and}\qquad C^{\prime}=x_{2}RyRx_{3}. \tag{3.3}\] These curves also overlap on the edge \(y\). But \(y\) does not appear in these curves as a peak or valley. This is equivalent to the information that \(C\) and \(C^{\prime}\) do not intersect. In general, if two curves \(C\) and \(C^{\prime}\) intersect, their paths must overlap near the intersection. So suppose that \(C\) and \(C^{\prime}\) share some sub-path, \(W\), in common. Then \(C\) and \(C^{\prime}\)_intersect along_\(W\) only if \(W\) is a peak for one and a valley for the other. In other words, \(C\) and \(C^{\prime}\) intersect at \(W\) if they have the form \[C=W_{1}RWLW_{2}\qquad\text{and}\qquad C^{\prime}=W_{3}LWRW_{4}, \tag{3.4}\] or \[C=W_{1}LWRW_{2}\qquad\text{and}\qquad C^{\prime}=W_{3}RWLW_{4}, \tag{3.5}\] Figure 12: A curve on a fatgraph (left) and its mountainous diagram (right). Figure 13: A pair of intersecting curves (left), and a pair of non-intersecting curves (right). for some sub-paths \(W_{1},W_{2},W_{3},W_{4}\). The left/right turns are very important. If the two curves have the form, say, \[C=W_{1}RWRW_{2}\qquad\text{and}\qquad C^{\prime}=W_{3}LWLW_{4}, \tag{3.6}\] then they _do not intersect_ at \(W\). Using this general rule, we can find triangulations of fatgraphs using only the data of the curves. For every fatgraph \(\Gamma\), there are two special triangulations. Suppose that \(\Gamma\) has edges \(e_{i}\), \(i=1,\dots,E\). Let \(C_{i}\) be the curve that, starting from \(e_{i}\), turns right in both directions away from \(e_{i}\). Then \[C_{i}=\cdots LeLe^{\prime}Le_{i}Re^{\prime\prime}Re^{\prime\prime \prime}R\cdots. \tag{3.7}\] \(C_{i}\) has exactly one peak, which is at \(e_{i}\). The intersection rule, (3.4), shows that no pair of such curves \(C_{i},C_{j}\) (\(i\neq j\)) intersect. So the \(C_{i}\) give \(E\) nonintersecting curves, and these form a triangulation, \(T\). We can also consider the curves \[\tilde{C}_{i}=\cdots ReRe^{\prime}Re_{i}Le^{\prime\prime}Le^{ \prime\prime\prime}L\cdots, \tag{3.8}\] that turn left going in both directions away from \(e_{i}\). These \(\tilde{C}_{i}\) each have exactly one valley, at \(e_{i}\), and so they are mutually nonintersecting. Together, they give another triangulation of the fatgraph, \(\tilde{T}\). An example of these special triangulations is given in Figure 14. ### Momentum Assignments The edges of a fatgraph \(\Gamma\) are naturally decorated with momenta, induced by the _external momenta_ of the graph. Let \(\Gamma\) have \(n\) external momenta \(p_{1}^{\mu},\dots,p_{n}^{\mu}\), directed _into_ the graph (say). By imposing momentum conservation at each cubic vertex, we obtain a momentum \(p_{e}^{\mu}\) for every edge. 
If \(\Gamma\) has loops (i.e. \(E>n-3\)), then there is a freedom in the definition of the \(p_{e}^{\mu}\) that we parametrise by some \(L\)_loop momentum variables_, \(\ell_{1}^{\mu},\dots,\ell_{L}^{\mu}\). This is the standard rule for assigning momenta to a fatgraph, \(\Gamma\). Figure 14: The two special triangulations of a fatgraph, \(T\) and \(\tilde{T}\), are defined by curves with exactly one peak (left) and curves with exactly one valley (right). To go further, we now introduce a way to also assign a momentum to every _curve_ on \(\Gamma\). For a curve with an orientation, \(\overrightarrow{C}\), will assign a momentum \(P^{\mu}_{\overrightarrow{C}}\). This momentum assignment should satisfy two basic rules. If \(\overleftarrow{C}\) is the curve with reversed orientation (Figure 14(a)), then \[P^{\mu}_{\overleftarrow{C}}=-P^{\mu}_{\overrightarrow{C}}. \tag{3.9}\] And if three curves, \(\overrightarrow{C}_{1}\), \(\overrightarrow{C}_{2}\), \(\overrightarrow{C}_{3}\), cut out a cubic vertex (Figure 14(a)), then we impose momentum conservation at that vertex: \[P^{\mu}_{\overrightarrow{C}_{1}}+P^{\mu}_{\overrightarrow{C}_{2}}+P^{\mu}_{ \overrightarrow{C}_{3}}=0. \tag{3.10}\] The solution to satisfying both (3.9) and (3.10) is very simple, if we start with the momenta \(p^{\mu}_{e}\) assigned to the edges of \(\Gamma\). Suppose \(\overrightarrow{C}\) enters \(\Gamma\) via the external line \(i\). Then assign this curve \[P^{\mu}_{\overrightarrow{C}}=p^{\mu}_{i}+\sum_{\text{right turns}}p^{\mu}_{ \text{left}}, \tag{3.11}\] where \(p^{\mu}_{\text{left}}\) is the momentum of the edge incident on \(C\) from the left, at the vertex where \(\overrightarrow{C}\) makes a right turn. The momentum assignment, (3.11), can easily be checked to satisfy (3.9) and (3.10). For example, take the fatgraph in Figure 16. The assignment of momenta to the edges of the graph is shown in the Figure. The curve \(C_{0}\) in Figure 16 enters the graph with momentum \(p^{\mu}\). Then it turns left, traverses an edge, and then turns right. At the right turn, the momentum incident on the curve from the left is \(-p-\ell^{\mu}\). So the momentum assignment of this curve is \[P^{\mu}_{\overrightarrow{C}_{0}}=-\ell^{\mu}. \tag{3.12}\] The curve \(C_{1}\) in Figure 16 has two right turns. At its first right turn, it gains momentum \(p^{\mu}\). At its second right turn, it gains momentum \(-p^{\mu}-\ell^{\mu}\). So the momentum assignment of this curve is \[P^{\mu}_{\overrightarrow{C}_{1}}=p^{\mu}-\ell^{\mu}. \tag{3.13}\] Figure 15: (a) Reversing a curve reverses its momentum assignment. (b) The momenta of three curves that cut out a cubic vertex satisfy momentum conservation. or _any_ triangulation, \(T\), the above rules assign a momentum to every curve in the triangulation. By construction, these momenta satisfy momentum conservation at each of the cubic vertices cut out by \(T\). The upshot of this is that we can _re-use_ the same loop momentum variables, \(\ell_{1},...,\ell_{L}\), when assigning momenta to _any_ triangulation of \(\Gamma\). This simple idea makes it possible to do the loop integrations for all diagrams at once, instead of one Feynman diagram at a time, which is a key step towards our formulas for amplitudes. This idea also makes it possible to compute well-defined _loop integrands_, even beyond the planar limit. #### 3.3.1 Aside on Homology There is a more formal way to understand the assignment of momenta to curves: these momentum assignments are an avatar of the homology of the fatgraph. 
Let \(H_{1}(\Gamma,\Gamma_{\infty})\) be the homology of \(\Gamma\) (regarded as a surface) relative to the _ends_ of the external edges of the fatgraph, \(\Gamma_{\infty}\). An oriented curve \(\overrightarrow{C}\) represents a class \([\overrightarrow{C}]\in H_{1}(\Gamma,\Gamma_{0})\), and \[[\overrightarrow{C}]+[\overleftarrow{C}]=0 \tag{3.14}\] in homology. Moreover, if three curves cut out a cubic vertex, their classes satisfy \[[\overrightarrow{C}_{1}]+[\overrightarrow{C}_{2}]+[\overrightarrow{C}_{3}]=0 \tag{3.15}\] in homology. This means that a momentum assignment to curves satisfying (3.9) and (3.10) defines a linear map \[P:H_{1}(\Gamma,\Gamma_{\infty})\rightarrow\mathbb{R}^{1,D-1}, \tag{3.16}\] from \(H_{1}(\Gamma,\Gamma_{\infty})\) to Minkowski space. Figure 16: An assignment of momenta to the edges of a fatgraph (left) induces as assignment of momenta to curves on the fatgraph (right). ### Spirals The colour factor \(C_{\Gamma}\) is a product of trace factors \(\operatorname{tr}(t_{1}...t_{k})\) formed from the colour polarisations \(t_{i}\overset{J}{I}\). If \(\Gamma\) has a closed colour loop, this boundary contributes \(\operatorname{tr}(1)=N\) to the colour factor. For such a fatgraph, there are curves that infinitely spiral around this closed loop. These spiral curves can be treated just the same as all the other curves. In fact, the momentum assignment for spiral curves follows again from the same rule above, (3.11). Suppose that \(\Gamma\) has a closed colour loop, \(\beta\). Suppose that there are some \(m\geq 1\) edges incident on the loop, as in Figure 17. By momentum conservation, the momenta of these edges, \(p_{1},\ldots,p_{m}\), must sum up to zero: \(\sum_{i=1}^{m}p_{i}=0\). This ensures that (3.11) assigns a well-defined momentum to a curve that spirals around this boundary, because the contributions from the \(p_{i}^{\mu}\) vanish after every complete revolution. ## 4 The Feynman Fan For a fatgraph \(\Gamma\) with \(E\) edges \((e_{1},\ldots,e_{E})\), consider the \(E\)-dimensional vector space, \(V\), generated by some vectors, \(\mathbf{e}_{1},\ldots,\mathbf{e}_{E}\). To every curve \(C\) on the fatgraph, we can assign a _\(g\)-vector_, \(\mathbf{g}_{C}\in V\). These simple integer vectors contain all the key information about the curves on \(\Gamma\). Moreover, the \(g\)-vectors define a _fan_ in \(V\) that we can use to rediscover the Feynman diagram expansion for the amplitude. To define the \(g\)-vector of a curve, \(C\), consider the _peaks_ and _valleys_ of its mountainous. \(C\) has a _peak at \(e_{i}\)_ if it contains \[\cdots Le_{i}R\cdots. \tag{4.1}\] Figure 17: The momenta incident on a closed loop in a fatgraph sum to zero. This ensures that the assignment of momentum to a spiral curve is well defined. \[\cdots Re_{i}L\cdots. \tag{4.2}\] Let \(a_{C}^{i}\) be the number of times that \(C\) has a peak at \(e_{i}\), and let \(b_{C}^{i}\) be the number of times that \(C\) has a valley at \(e_{i}\). This information about the peaks and valleys is recorded by the \(g\)_-vector of \(C\)_, \[\mathbf{g}_{C}\equiv\sum_{i=1}^{E}g_{C}^{i}\,\mathbf{e}_{i},\qquad\text{where $g_{C}^{i}=a_{C}^{i}-b_{C}^{i}$}. \tag{4.3}\] Each curve has a distinct \(g\)-vector. The converse is even more surprising: a curve is completely specified by its \(g\)-vector. For example, consider the curve, \(C_{i}\), in the triangulation \(T_{\Gamma}\), which has only one peak, at \(e_{i}\). The \(g\)-vector for \(C_{i}\) is then \[\mathbf{g}_{C_{i}}=\mathbf{e}_{i}. 
\tag{4.4}\] So the \(g\)-vectors of this triangulation \(T_{\Gamma}\) span the positive orthant of \(V\). ### Example: tree level at 5-points Take the comb graph \(\Gamma\), with edges labelled by variables \(x\) and \(y\), as in Figure 18. The five curves on \(\Gamma\) are \[C_{13}=1LxR3,\qquad C_{14}=1LxLyR4,\qquad C_{24}=2RxLyR4, \tag{4.5}\] \[C_{25}=2RxLyL5,\qquad C_{35}=3RyL5. \tag{4.6}\] Counting the peaks and valleys of these mountainsscapes gives \[\mathbf{g}_{13}=\begin{bmatrix}1\\ 0\end{bmatrix},\ \mathbf{g}_{14}=\begin{bmatrix}0\\ 1\end{bmatrix},\ \mathbf{g}_{24}=\begin{bmatrix}-1\\ 1\end{bmatrix},\ \mathbf{g}_{25}=\begin{bmatrix}-1\\ 0\end{bmatrix},\ \mathbf{g}_{35}=\begin{bmatrix}0\\ -1\end{bmatrix}. \tag{4.7}\] These \(g\)-vectors are shown in Figure 19. They define a _fan_ in the 2-dimensional vector space. The top-dimensional cones of this fan are spanned by pairs of \(g\)-vectors, such as \(\mathbf{g}_{14}\) and \(\mathbf{g}_{24}\), whose corresponding curves define triangulations. Figure 18: The five curves on the \(n=5\) tree fatgraph. ### The Fan The \(g\)-vectors of all the curves on \(\Gamma\) together define an integer fan \(\mathfrak{F}\subset V\). To define a fan, we must specify all of its _cones_. We adopt the rule that two or more \(g\)-vectors span a cone in \(\mathfrak{F}\) if and only if their curves do not intersect. The main properties of \(\mathfrak{F}\) are: 1. It is a polyhedral fan that is dense \(V\).3 Footnote 3: A fan is _polyhedral_ if the intersection of any two cones is itself, if nonempty, a cone in the fan, and the faces of each cone are cones in the fan. A fan is _dense_ if any integer vector is contained in some cone of the fan. In general, irrational vectors are not always contained in our fans, but this will not play any role in this paper. 2. Its top dimensional cones are in 1:1 correspondence with triangulations. 3. The \(g\)-vectors of each top-dimensional cone span a parallelepiped of unit volume. Since the top-dimensional cones of \(\mathfrak{F}\) correspond to triangulations, and hence to Feynman diagrams, we call \(\mathfrak{F}\) the _Feynman fan_, or sometimes, the \(g\)_-vector fan_. The property that \(\mathfrak{F}\) is _polyhedral and dense_ means that every rational vector \(\mathbf{g}\in V\) is contained in _some_ cone in the fan. This implies that every such \(\mathbf{g}\) can be _uniquely_ written as a positive linear combination of \(g\)-vectors. In Section 5, we solve the problem of how to do this expansion explicitly. ### The Mapping Class Group The Feynman fan of a fat graph \(\Gamma\) inherits from \(\Gamma\) an action of a discrete, finitely generated group called the _mapping class group_, MCG. The MCG of a fatgraph, \(\Gamma\), is the group of homeomorphisms of \(\Gamma\), up to isotopy, that restrict to the identity on its boundaries. The action of MCG on the fatgraph can be studied by considering its action on curves. Since Figure 19: The Feynman fan for \(n=5\) tree level. we only ever consider curves up to homotopy, a group element \(\gamma\in\text{MCG}\) induces a map on curves \[\gamma:C\mapsto\gamma C. \tag{4.8}\] Since MCG acts via homeomorphisms, it does not affect curve intersections and non-intersections. If \(C\) and \(C^{\prime}\) are two non-intersecting curves, then \(\gamma C\) and \(\gamma C^{\prime}\) are likewise non-intersecting. Similarly, if \(C,C^{\prime}\) intersect, so do \(\gamma C\) and \(\gamma C^{\prime}\). 
This means that if some curves, \(C_{1},\ldots,C_{E}\), form a triangulation, so do their images under MCG. Moreover, if the triangulation \(\{C_{1},\ldots,C_{E}\}\) is dual to a fatgraph \(\Gamma^{\prime}\), then each image \(\{\gamma C_{1},\ldots,\gamma C_{E}\}\) is _also_ dual to the same fatgraph, \(\Gamma^{\prime}\). For example, take the 2-point non-planar fatgraph \(\Gamma\) in Figure 20. The MCG acts on \(\Gamma\) by _Dehn twists_ that increase the number of times a curve winds around the fatgraph. All triangulations of \(\Gamma\) are related to each other by the MCG and they are all dual to the same fatgraph (right in Figure 20). In general, if \(\Gamma\) has loop number \(L\), then MCG has a presentation with \(L\) generators [15]. These can be identified with Dehn twists around annuli in the fatgraph. The MCG action on curves induces a piecewise linear action on the vector space, \(V\), \[\gamma:\mathbf{g}_{C}\mapsto\mathbf{g}_{\gamma C}. \tag{4.9}\] It follows from the above properties of the MCG action on curves that the action of MCG on \(V\) leaves the fan \(\mathfrak{F}\) invariant (if we forget the labels of the rays). Furthermore, two top-dimensional cones of the fan correspond to the same Feynman diagram if and only if they are related by the MCG action. #### 4.3.1 Aside on automorphisms There is another discrete group that acts on the Feynman fan: the group of graph automorphisms, \(\text{Aut}(\Gamma)\). The elements of \(\text{Aut}(\Gamma)\) are permutations of the labels of the edges of \(\Gamma\). A permutation is an _automorphism_ if it leaves the list of fat vertices of \(\Gamma\) unchanged (including the vertex orientations). Each fat vertex can be described by a triple of edge labels with a cyclic orientation, \((ijk)\). \(\text{Aut}(\Gamma)\) has a linear action on \(V\) given by permuting the basis vectors \(\mathbf{e}_{1},\ldots,\mathbf{e}_{E}\). The action of \(\text{Aut}(\Gamma)\) leaves the fan invariant (again if we forget the labels of the rays). Figure 20: Triangulations (left) that are related to each other by the action of of the MCG. These triangulations are all dual to the same Feynman diagram (right). An example of a fatgraph with nontrivial automorphisms is Figure 21. In this example, cyclic permutations of the 3 edges preserve the fatgraph. Most fatgraphs that we will consider have trivial automorphism groups, and so the action of \(\mathrm{Aut}(\Gamma)\) will not play a big role in this paper. ### Example: the non-planar 1-loop propagator Take the 1-loop fatgraph \(\Gamma\) in Figure 22, with edges labeled by variables \(x\) and \(y\). Some of the curves on \(\Gamma\), \(C_{n}\), are shown in the Figure. These curves are related to each other by the action of MCG, which is generated by a Dehn twist, \(\gamma\). With the labelling in Figure 22, the action of \(\gamma\) is \[\gamma:C_{n}\mapsto C_{n+1}. \tag{4.10}\] There are infinitely many such curves on the fatgraph. The paths of the curves on \(\Gamma\) are \[C_{n} =1L(xLyR)^{n}xR2\qquad\text{for $n\geq 0$}, \tag{4.11}\] \[C_{n} =1Ry(RxLy)^{1+n}L2\qquad\text{for $n<0$},\] (4.12) \[\Delta =xLyR, \tag{4.13}\] where \(\Delta\) is the closed loop. Note that the curves \(C_{n}\) differ from one another by multiples of the closed path \(\Delta\). In this way, we can see the MCG directly in terms of the mountainouses of the curves. Figure 21: A fatgraph with \(|\mathrm{Aut}(\Gamma)|=3\). Cyclic permutations of the edges leave it unchanged. 
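The action of the mapping class group on this family can be made very concrete: since the words (4.11) differ only by repeated insertions of the closed path \(\Delta=xLyR\), a few lines of code suffice to generate them and to count their peaks and valleys. The following sketch (illustrative only, not code from the paper; the helper `g_vector` is our own) reproduces the \(g\)-vectors listed below in (4.14):

```python
# Sketch (not from the paper): build the words C_n = 1L(xLyR)^n xR2 of (4.11)
# and read off their g-vectors by counting peaks (..L e R..) and valleys
# (..R e L..) at the internal edges x and y, following (4.3).

def g_vector(word, internal=("x", "y")):
    g = {e: 0 for e in internal}
    for i in range(1, len(word) - 1):
        before, edge, after = word[i - 1], word[i], word[i + 1]
        if edge in g:
            if (before, after) == ("L", "R"):    # peak at this edge
                g[edge] += 1
            elif (before, after) == ("R", "L"):  # valley at this edge
                g[edge] -= 1
    return tuple(g[e] for e in internal)

for n in range(4):
    word = ["1", "L"] + ["x", "L", "y", "R"] * n + ["x", "R", "2"]
    print(n, g_vector(word))  # (1, 0), (0, 1), (-1, 2), (-2, 3), ...
```

For \(n\geq 0\), each Dehn twist inserts one more copy of \(\Delta\) and shifts the \(g\)-vector by \(\mathbf{g}_{\Delta}=(-1,1)\), which is exactly the piecewise-linear action on the fan described below.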
Figure 22: The infinite family of curves, \(C_{n}\), for the non-planar one loop propagator. Counting peaks and valleys in the mountainscapes, the \(g\)-vectors for these curves are: \[\mathbf{g}_{n} =\begin{bmatrix}-n+1\\ n\end{bmatrix}\qquad\text{for $n\geq 0$,} \tag{4.14}\] \[\mathbf{g}_{n} =\begin{bmatrix}n+1\\ -n-2\end{bmatrix}\qquad\text{for $n<0$,}\] (4.15) \[\mathbf{g}_{\Delta} =\begin{bmatrix}-1\\ 1\end{bmatrix}. \tag{4.16}\] These \(g\)-vectors define the fan in Figure 19. There are infinitely many rays of this fan. The action of MCG on curves lifts to a piecewise linear action on the fan, generated by the action of the Dehn twist \(\gamma\). \(\gamma\) acts on the fan as \[\mathbf{g}_{n+1} =\mathbf{g}_{n}+\mathbf{g}_{\Delta},\qquad\text{for $n\geq 0$,} \tag{4.17}\] \[\mathbf{g}_{0} =\mathbf{g}_{-1}+(1,1),\] (4.18) \[\mathbf{g}_{n+1} =\mathbf{g}_{n}-\mathbf{g}_{\Delta},\qquad\text{for $n<-1$.} \tag{4.19}\] This is (trivially) an isomorphism of the fan. ### The Delta plane A _closed curve_ is a curve \(\Gamma\) that forms a loop. For a closed curve \(\Delta\), consider the series of left and right turns that it makes. We can record this series of turns as a _cyclic word_, like \(W_{\Delta}=(RRLRL)\). Whenever \(RL\) appears in \(W_{\Delta}\) it corresponds to a _valley_ in the mountainous, which happens where the curve switches from turning right to turning left. Likewise, \(LR\) Figure 23: The Feynman fan for the non-planar 1-loop propagator. corresponds to a _peak_. If the cyclic word \(W_{C}\) has \(n\) occurrences of '\(RL\)', it must also have exactly \(n\) occurrences of '\(LR\)'. For example, the cyclic word \[(RRLRLLLRRLL), \tag{104}\] switches from right-to-left 3 times, and from left-to-right 3 times. In other words, the mountainscape for a closed curve has exactly as many peaks as valleys. It follows that the \(g\)-vector, \(\mathbf{g}_{\Delta}\), for any closed curve \(\Delta\) is normal to the vector \(\mathbf{n}=(1,1,1,...,1)^{T}\). We call the plane normal to \(\mathbf{n}\) the \(\Delta\)_plane_: \(V_{\Delta}\subset V\). For example, in the previous subsection, the closed curve \(\Delta\) had \(g\)-vector \(\mathbf{g}_{\Delta}=(-1,1)\), which is normal to the vector \((1,1)\). Finally, note that a closed curve that makes _only_ right-turns (resp. left-turns) corresponds to a path around a loop boundary of \(\Gamma\). These curves have no peaks and no valleys. So these loop boundaries are assigned zero \(g\)-vector. They are also assigned zero momentum (by the reasoning in Section 3.4). ### Example: the planar 1-loop propagator Take the 1-loop bubble diagram, \(\Gamma\), with edges \(x\) and \(y\), and external edges 1 and 2, as in Figure 24. Consider the four curves, \(C_{1},C_{2},S_{1},S_{2}\), shown in the Figure. These have paths \[C_{1} =1RxLyR1 \tag{105}\] \[C_{2} =2RyLxR2\] (106) \[S_{1}^{\prime} =1RxLyLxLyL\cdots\] (107) \[S_{2}^{\prime} =2RyLxLyLxL\cdots. \tag{108}\] Figure 24: Curves on the bubble fatgraph. The curves \(S_{1}^{\prime},S_{2}^{\prime}\) end in anticlockwise spirals around the closed loop boundary. There are also two curves, \(S_{1}\) and \(S_{2}\), which spiral _clockwise_ into the puncture: \[S_{1} =1LyRxRyR\cdots \tag{112}\] \[S_{2} =2LxRyRxR\cdots. 
\tag{113}\] Counting peaks and valleys, the \(g\)-vectors of these curves are \[\mathbf{g}_{C_{1}}=\begin{bmatrix}-1\\ 1\end{bmatrix},\ \mathbf{g}_{S_{1}}=\begin{bmatrix}1\\ 0\end{bmatrix},\ \mathbf{g}_{S_{2}}=\begin{bmatrix}0\\ 1\end{bmatrix},\ \mathbf{g}_{C_{2}}=\begin{bmatrix}1\\ -1\end{bmatrix},\ \mathbf{g}_{S_{1}^{\prime}}=\begin{bmatrix}0\\ -1\end{bmatrix},\ \mathbf{g}_{S_{2}^{\prime}}=\begin{bmatrix}-1\\ 0\end{bmatrix}. \tag{114}\] These \(g\)-vectors give the fan in Figure 25. Notice that the \(g\)-vectors of the curves \(C_{1},C_{2}\) lie on the Delta plane: \(x+y=0\). Including the anticlockwise spirals would lead to us counting every Feynman diagram twice. This is because the triangulation with \(C_{1},S_{1}\) is dual to the same diagram as the triangulation by \(C_{1},S_{1}^{\prime}\), and so on. To prevent overcounting, it makes sense to restrict to the part of the fan that involves only \(C_{1},S_{1}^{\prime},S_{2}^{\prime}\), and \(C_{2}\). This part of the fan is precisely the half space, \(x+y\leq 0\), cut out by the Delta plane. ## 5 A Counting Problem For Curves There is a natural counting problem associated to mountainousscapes, and this counting problem plays the central role in our amplitude computations. For a mountainscape, \(C\), the idea is to form subsets of \(C\) by _filling up_ the mountainscape from the bottom. A subset is valid if it includes everything _downhill_ of itself in the mountainscape. For example, consider the curve in Figure 26, \[C=1R2L3. \tag{115}\] Figure 25: The Feynman Fan for the 1-loop planar propagator. The valid subsets of \(C\), shown in the Figure, are \(2,1R2,2L3\), and \(1R2L3\). In other words, if 3 is in the subset, then 2 must also be included, because it is downhill of (left of) 3. Likewise, if 1 is in the subset, then 2 must also be included, because 2 is downhill of (right of) 3. This information can be summarised using a generating function or _\(F\)-polynomial_. Introduce variables \(y_{i}\), \(i=1,\ldots,E\), labelled by the edges of \(\Gamma\). Then the \(F\)-polynomial of a curve \(C\) is \[F_{C}=1+\sum_{C^{\prime}\subset C}\,\prod_{i\in C^{\prime}}y_{i}, \tag{100}\] where the sum is over all valid (non-empty) subsets of \(C\), including \(C\) itself. In the example, (101), we have four valid subsets, and the \(F\)-polynomial is \[F_{C}=1+y_{2}+y_{1}y_{2}+y_{2}y_{3}+y_{1}y_{2}y_{3}. \tag{101}\] ### Curve Matrices Consider a curve \(C\) that starts at any edge \(e_{i}\) and ends at any edge \(e_{j}\). It is natural to decompose its \(F\)-polynomial as a sum of four terms, \[F_{C}=F_{--}+F_{-+}+F_{+-}+F_{++}, \tag{102}\] where: \(F_{--}\) counts subsets that exclude the first and last edges; \(F_{-+}\) counts subsets that exclude the first edge and include the last edge; and so on. Now consider what happens if we _extend_\(C\) along one extra edge. Let \(C^{\prime}\) extend \(C\) by adding a left turn before \(i\): \[C^{\prime}=e_{k}LC, \tag{103}\] Figure 26: Two examples of mountainouses and their sub-mountainscapes. for some edge \(e_{k}\). The \(F\)-polynomial of \(C^{\prime}\) can be deduced using (5.4). Terms that involve \(y_{i}\)_must_ contain \(y_{k}\), since \(e_{k}\) is _downhill_ of \(e_{i}\) in the curve. So \[F_{C^{\prime}}=(1+y_{k})F_{--}+(1+y_{k})F_{-+}+y_{k}F_{+-}+y_{k}F_{++}. 
\tag{5.6}\] Similarly, if \(C^{\prime\prime}\) is obtained from \(C\) by adding a right turn before \(e_{i}\), then \(C^{\prime\prime}=e_{l}RC\), for some edge \(e_{l}\), and we find that the new \(F\)-polynomial is \[F_{C^{\prime\prime}}=F_{--}+F_{-+}+(1+y_{l})F_{+-}+(1+y_{l})F_{++}. \tag{5.7}\] This equation follows because any term not containing \(y_{i}\)_cannot_ contain \(y_{l}\), since \(e_{i}\) is _downhill_ of \(e_{l}\) in the curve. Equations (5.6) and (5.7) can be used to compute the \(F\)-polynomial for any curve. It simple to do implement this is by defining a _curve matrix_, whose entries are given by the decomposition, (5.4): \[M_{C}=\begin{bmatrix}F_{--}&F_{-+}\\ F_{+-}&F_{++}\end{bmatrix}. \tag{5.8}\] The curve matrix \(M_{C^{\prime}}\) is obtained from the curve matrix \(M_{C}\) via the matrix version of (5.6): \[M_{C^{\prime}}=\begin{bmatrix}1&0\\ y_{k}&y_{k}\end{bmatrix}M_{C}. \tag{5.9}\] The matrix multiplying \(M_{C}\) in this equation represents what happens when \(C\) is extended by adding a left turn at the start. Similarly, the matrix version of (5.7) is \[M_{C^{\prime\prime}}=\begin{bmatrix}1&1\\ 0&y_{l}\end{bmatrix}M_{C}, \tag{5.10}\] which represents what happens when \(C\) is adding a right turn at the start. It can be convenient to decompose the new matrices appearing in (5.9) and (5.10) as a product, \[\begin{bmatrix}1&0\\ y_{k}&y_{k}\end{bmatrix}=\begin{bmatrix}1&0\\ 0&y_{k}\end{bmatrix}\begin{bmatrix}1&0\\ 1&1\end{bmatrix},\qquad\begin{bmatrix}1&1\\ 0&y_{l}\end{bmatrix}=\begin{bmatrix}1&0\\ 0&y_{l}\end{bmatrix}\begin{bmatrix}1&1\\ 0&1\end{bmatrix}. \tag{5.11}\] Then, for any curve, \(C\), we can compute its curve matrix, \(M_{C}\), directly from the word specifying the curve. To do this, we just replace each turn and edge with the associated matrix: \[L\to\begin{bmatrix}1&0\\ 1&1\end{bmatrix},\qquad R\to\begin{bmatrix}1&1\\ 0&1\end{bmatrix},\qquad e_{i}\to\begin{bmatrix}1&0\\ 0&y_{i}\end{bmatrix}. \tag{5.12}\] Every curve matrix \(M_{C}\) is then a product of these simple matrices. For example, for the curve \(C=1R2L3\) considered above, its matrix is \[M_{C}=\begin{bmatrix}1&0\\ 0&y_{1}\end{bmatrix}\begin{bmatrix}1&1\\ 0&1\end{bmatrix}\begin{bmatrix}1&0\\ 0&y_{2}\end{bmatrix}\begin{bmatrix}1&0\\ 1&1\end{bmatrix}\begin{bmatrix}1&0\\ 0&y_{3}\end{bmatrix}=\begin{bmatrix}1+y_{2}&y_{2}y_{3}\\ y_{1}y_{2}&y_{1}y_{2}y_{3}\end{bmatrix}. \tag{5.13}\] The sum of the entries of this curve matrix recovers the curve's \(F\)-polynomial, (5.3). Curve matrices neatly factorise. If several curves all begin with the same word, \(W\), their words can be written as \(C_{i}=WC_{i}^{\prime}\). Their matrices are then \(M_{C_{i}}=M_{W}M_{C_{i}^{\prime}}\), so that we only have to compute \(M_{W}\) once to determine all the \(M_{C_{i}}\). Moreover, if we add extra legs to a fatgraph \(\Gamma\), to form a larger fatgraph, \(\Gamma^{\prime}\), the matrices \(M_{C}\) for the larger fatgraph can be obtained directly from the matrices for the smaller fatgraph. In practice, this is very useful, and allows us to exploit the methods in this paper to compute all-\(n\) formulas for amplitudes. [26] ### Headlight Functions It follows from the definition of \(M_{C}\), as a product of the matrices in (5.13), that \[\det M_{C}=\prod_{e\in C}y_{e}. \tag{5.14}\] Expanding the determinant, this gives \[1=\frac{F_{-+}F_{+-}}{F_{--}F_{++}}+\frac{\prod y_{e}}{F_{--}F_{++}}. 
\tag{5.15}\] Motivated in part by this identity, define the \(u\)_-variable_ of a curve \(C\) as the ratio \[u_{C}=\frac{F_{-+}F_{+-}}{F_{--}F_{++}}. \tag{5.16}\] These \(u\)-variables vastly generalise those studied in [24; 25], and (5.15) is a generalisation of the \(u\)_-equations_ studied there. The _headlight function_ of a curve \(C\) is the _tropicalization_ of the \(u\)-variable, \[\alpha_{C}=-\text{Trop }u_{C}. \tag{5.17}\] For a polynomial \(F(y)\), its tropicalization captures the behaviour of \(F\) at large values of \(y_{i}\). Parametrise the \(y_{i}\) as \(y_{i}=\exp t_{i}\). Then, in the large \(t\) limit, \[F(y)\to\exp\text{Trop }F(t). \tag{5.18}\] Figure 27: Getting a new fatgraph. For example, if \(F(y)=1+y_{1}+y_{1}y_{2}\), then Trop \(F(t)=\max(0,t_{1},t_{1}+t_{2})\). In practice, Trop \(F\) is obtained from \(F\) by replacing multiplication with addition, and replacing sums with taking the maximum. In terms of the matrix \(M_{C}\), the headlight function is \[\alpha_{C}=\text{Trop }M_{C}^{1,1}+\text{Trop }M_{C}^{2,2}-\text{Trop }M_{C}^{1,2}-\text{ Trop }M_{C}^{2,1}. \tag{111}\] Headlight functions satisfy the following remarkable property: \[\alpha_{C}(\mathbf{g}_{D})=\begin{cases}1&\quad\text{if }C=D\\ 0&\quad\text{otherwise.}\end{cases} \tag{112}\] This implies that headlight functions can be used to express any vector \(\mathbf{g}\in V\) as a positive linear combination of the generators of a cone of the Feynman fan, by writing \[\mathbf{g}=\sum_{C}\alpha_{C}(\mathbf{g})\,\mathbf{g}_{C}. \tag{113}\] This expansion has a geometrical interpretation. Any integer vector \(\mathbf{g}\in V\) corresponds to some curve (or set of curves), \(L\), possibly with self-intersections. Any intersections in \(L\) can be uncrossed on \(\Gamma\) using the _skein relations_. Repeatedly applying skein relations, \(L\) can be decomposed on the surface into a unique set of non-self-intersecting curves, and \(\alpha_{C}(g)\) is the number of times the curve \(C\) appears in this decomposition. ### Example: tree level at 5-points The curves for the 5-points tree level amplitude were given in Section 4.1. Their curve matrices, using the replacements (108), are \[C_{13} =LxR \longrightarrow M_{13} =\begin{bmatrix}1&1\\ 1&1+x\end{bmatrix}, \tag{114}\] \[C_{14} =LxLyR \longrightarrow M_{14} =\begin{bmatrix}1&1\\ 1+x&1+x+xy\end{bmatrix},\] (115) \[C_{24} =RxLyR \longrightarrow M_{24} =\begin{bmatrix}1+x&1+x+xy\\ x&x(1+y)\end{bmatrix},\] (116) \[C_{25} =RxLyL \longrightarrow M_{25} =\begin{bmatrix}1+x+xy&xy\\ x+xy&xy\end{bmatrix},\] (117) \[C_{35} =RyL \longrightarrow M_{35} =\begin{bmatrix}1+y&y\\ y&y\end{bmatrix}. \tag{118}\] Given these matrices, the headlight functions are \[\alpha_{13} =\max(0,x), \tag{5.27}\] \[\alpha_{14} =-\max(0,x)+\max(0,x,x+y),\] (5.28) \[\alpha_{24} =-\max(0,x,x+y)+\max(0,x)+\max(0,y),\] (5.29) \[\alpha_{25} =-x-\max(0,y)+\max(0,x,x+y),\] (5.30) \[\alpha_{35} =-y+\max(0,y). \tag{5.31}\] It can be verified that \(\alpha_{ij}(\mathbf{g}_{C})=1\) if \(C=C_{ij}\), and that otherwise \(\alpha_{ij}(\mathbf{g}_{C})=0\). For example, the values taken by \(\alpha_{24}\) are shown in Figure 28. ### Example: the non-planar 1-loop propagator The mountainousscapes for the non-planar 1-loop propagator are given in Section 4.4. Using these, we can compute the headlight functions, and find: \[\alpha_{n} =f_{n}-2f_{n-1}+f_{n-2}, n\geq 0, \tag{5.32}\] \[\alpha_{n} =g_{n}-2g_{n+1}+g_{n+2}, n<0. 
\tag{5.33}\] where the tropical functions \(f_{n}\) and \(g_{n}\) are given by \[f_{n} =\max(0,(n+1)x,(n+1)x+ny), \text{for }n\geq 0, \tag{5.34}\] \[g_{n} =\max(0,-(n+1)x,-(n+1)x-ny), \text{for }n\leq-1, \tag{5.35}\] with the following special cases: \[f_{-2}=0,\ \ f_{-1}=0,\ \ g_{1}=-2x-y,\ \ g_{0}=-x. \tag{5.36}\] Figure 28: The Schwinger parameter \(\alpha_{24}\) on the Feynman fan. A full derivation of these functions using the matrix method is given in Appendix F. It is easy to verify that these \(\alpha_{n}\) satisfy the key property: \[\alpha_{n}(\mathbf{g}_{m})=\begin{cases}1&\text{if }n=m\\ 0&\text{otherwise.}\end{cases} \tag{102}\] For example, take \(n,m\geq 0\). Then we find \[f_{n}(\mathbf{g}_{m})=\max(0,1+n-m), \tag{103}\] so that \[\alpha_{n}(\mathbf{g}_{m})=\max(0,1+n-m)+\max(0,-1+n-m)-2\max(0,n-m). \tag{104}\] This agrees with (102). ### Spirals Suppose \(C\) is a curve that ends in a spiral around a loop boundary of \(\Gamma\). If \(1,2,...,m\) are the edges around that boundary, \(C\) has the form \[C=W1L2L...LmL1L2L..., \tag{105}\] for some subpath \(W\). We can compute the transfer matrix for the infinite tail at the right end of \(C\). The path for one loop around the boundary is \[C_{\Delta}:=1L2L...LmL, \tag{106}\] and the matrix for this path is \[M_{\Delta}=\begin{bmatrix}1&0\\ F-1&y_{*}\end{bmatrix}, \tag{107}\] where \[y_{*}=\prod_{i=1}^{m}y_{i},\qquad\text{and}\qquad F=1+y_{1}+y_{1}y_{2}+...+y_{ 1}y_{2}...y_{m}. \tag{108}\] Now consider the powers, \(M_{\Delta}^{n}\). If \(y^{*}<1\), the limit as \(n\to\infty\) converges to \[M_{\Delta}^{\infty}\equiv\lim_{n\to\infty}M^{n}=\begin{bmatrix}1&0\\ F_{\infty}-1&0\end{bmatrix}, \tag{109}\] where \[F_{\infty}=1+\frac{y_{1}+y_{1}y_{2}+...+y_{1}y_{2}...y_{m}}{1-y_{*}}. \tag{110}\] The matrix for the curve \(C\) is then \[M_{C}=M_{W}M_{\Delta}^{\infty}. \tag{111}\] We can use the formula (109) when computing the matrix for any curve that ends in a spiral: the spiralling part can be replaced by \(M_{\Delta}^{\infty}\) directly. If the curve also _begins_ with a spiral, this spiral contributes a factor of \((M_{\Delta}^{\infty})^{T}\) to the beginning of the matrix product. ### Example: the planar 1-loop propagator We can put these formulas to work for the planar 1-loop propagator. The curves for this amplitude are given in Section 4.6. Evaluating the curve matrices gives: \[M_{C_{1}} =\begin{bmatrix}1+x&1+x+xy\\ x&x+xy\end{bmatrix}, M_{C_{2}} =\begin{bmatrix}1+y&1+y+xy\\ y&y+xy\end{bmatrix}, \tag{5.47}\] \[M_{S^{\prime}_{1}} =\begin{bmatrix}\frac{1+x}{1-xy}&0\\ \frac{x(1+y)}{1-xy}&0\end{bmatrix}, M_{S^{\prime}_{2}} =\begin{bmatrix}\frac{1+y}{1-xy}&0\\ \frac{y(1+x)}{1-xy}&0\end{bmatrix}. \tag{5.48}\] The headlight functions are \[\alpha_{C_{1}} =\max(0,x)+\max(0,y)-\max(0,x,x+y), \tag{5.49}\] \[\alpha_{C_{2}} =\max(0,x)+\max(0,y)-\max(0,y,x+y),\] (5.50) \[\alpha_{S^{\prime}_{1}} =-x-\max(0,y)+\max(0,x),\] (5.51) \[\alpha_{S^{\prime}_{2}} =-y-\max(0,x)+\max(0,y). \tag{5.52}\] Once again, using the \(g\)-vectors from Section 4.6, we verify that these functions satisfy \[\alpha_{C}(\mathbf{g}_{D})=\begin{cases}1&\text{ if }C=D\\ 0&\text{ otherwise.}\end{cases} \tag{5.53}\] ### Example: the genus one 2-loop vacuum We now introduce a more complicated example: the 2-loop vacuum amplitude at genus one. A fatgraph for this amplitude, \(\Gamma\), is given in Figure 29. The colour factor of this graph has only one factor, \(\operatorname{tr}(1)\), because \(\Gamma\) only has one boundary. 
In fact, the curves on \(\Gamma\) must all begin and end in spirals around this one boundary. Using Figure 29 we can identify the curves which have precisely _one valley_ in their mountainscape: i.e. which only have one switch from Figure 29: The 2-loop vacuum graph with genus one. turning right to turning left. These three curves are \[C_{1/0} =(wRzRxR)^{\infty}w(LxLzLw)^{\infty}, \tag{111}\] \[C_{0/1} =(xRwRzR)^{\infty}x(LzLwLx)^{\infty},\] (112) \[C_{1/1} =(zRxRw)^{\infty}z(LwLxLz)^{\infty}. \tag{113}\] These curves are non-intersecting and form a triangulation. The surface associated to \(\Gamma\) is the torus with one puncture, and the labels we assign to these curves are inspired by drawing the curves on the torus, pictured as a quotient of a \(\mathbb{Z}^{2}\) lattice. Besides \(C_{1/1}\), we find that the only other curve compatible with both \(C_{1/0}\) and \(C_{0/1}\) is \[C_{-1/1}=(xRwRzR)^{\infty}xLzRx(LzLwLx)^{\infty}. \tag{114}\] This curve has a peak at \(z\), but no peaks at either \(x\) or \(w\) (which is what would result in an intersection with \(C_{1/0}\) or \(C_{0/1}\)). As we will see later, the four curves \(C_{1/0},C_{0/1},C_{1/1},C_{-1/1}\) are all we need to compute the 2-loop vacuum genus one amplitude. Evaluating these curves' matrices gives \[M_{1/0} =\begin{bmatrix}\frac{1+x+xz}{1-xzw}&0\\ 0&0\end{bmatrix}, M_{0/1} =\begin{bmatrix}\frac{1+z+zw}{1-xzw}&0\\ 0&0\end{bmatrix}, \tag{115}\] \[M_{1/1} =\begin{bmatrix}\frac{1+w+wx}{1-xzw}&0\\ 0&0\end{bmatrix}, M_{-1/1} =\begin{bmatrix}\frac{1+2x(1+z)+x^{2}(1+3z+(3+2w)z^{2}+(1+w)^{2}z ^{3})}{(1-wxz)^{2}}&0\\ 0&0\end{bmatrix}. \tag{116}\] The headlight functions for these curves are \[\alpha_{1/1} =\max(0,w,w+x)-\max(0,w+z+x), \tag{117}\] \[\alpha_{1/0} =\max(0,x,x+z)-\max(0,w+z+x),\] (118) \[\alpha_{0/1} =\max(0,z,z+w)-\max(0,w+z+x),\] (119) \[\alpha_{-1/1} =\max(0,2x,2x+3z,2x+3z+2w)-2\max(0,w+z+x). \tag{120}\] ## 6 Integrand Curve Integrals We want to compute the partial amplitudes of our theory. For some fatgraph \(\Gamma\), let \(\mathcal{A}\) be the amplitude that multiplies the colour factor \(c_{\Gamma}\). The momentum assignment rule in Section 3.3 defines one set of loop momentum variables for all propagators contributing to the amplitude, even beyond planar diagrams. This means that \(\mathcal{A}\) can be obtained as the integral of a single _loop integrand_\(\mathcal{I}\): \[\mathcal{A}=\int\left(\prod_{i=1}^{L}d^{D}\ell_{i}\right)\mathcal{I}. \tag{121}\] However, beyond planar diagrams, there is a price to pay for introducing our momentum assignment. For any triangulation by curves, \(C_{1},C_{2},...,C_{E}\), we associate the product of propagators \[\frac{1}{X_{C_{1}}X_{C_{2}}\ldots X_{C_{E}}}, \tag{108}\] where \(X_{C}\) is given by the momentum assignment rule. If we sum over every such term, (108), for all triangulations of \(\Gamma\), we obtain some rational function \(\mathcal{I}_{\infty}\). But the loop integral of \(\mathcal{I}_{\infty}\) is not well defined if \(\Gamma\) has a nontrivial mapping class group, \(\mathrm{MCG}\). This is because two triangulations related by the \(\mathrm{MCG}\) action integrate to the _same_ Feynman diagram. So the loop integral of \(\mathcal{I}_{\infty}\) contains, in general, infinitely many copies of each Feynman integral. Fortunately, we can compute integrands \(\mathcal{I}\) for the amplitude by 'dividing by the volume of \(\mathrm{MCG}\)'. As a function, \(\mathcal{I}\) is not uniquely defined. But all choices for \(\mathcal{I}\) integrate to the same amplitude. 
We will compute integrands \(\mathcal{I}\) using the headlight functions, \(\alpha_{C}\). The formula takes the form of a _curve integral_, \[\mathcal{I}=\int\frac{d^{E}t}{\mathrm{MCG}}\,e^{-S(\mathbf{t})}. \tag{109}\] Here, \(E\) is the number of edges of the fatgraph \(\Gamma\). We call it a _curve integral_ because the integral is over the \(E\)-dimensional vector space, \(V\), whose integral points correspond to curves (or collections of curves) on \(\Gamma\). As discussed in Section 4.2, the mapping class group \(\mathrm{MCG}\) has a piecewise linear action on \(V\), and we mod out by this action in the integral. We call \(S(t)\) the _curve action_. It is given by a sum \[S(\mathbf{t})=\sum_{C}\alpha_{C}(\mathbf{t})X_{C}, \tag{110}\] where we sum over all curves, \(C\), on the fatgraph.4 For a general derivation of this curve integral formula, see Appendix A. In this section, we show how to practically use (109) to compute some simple amplitudes. Footnote 4: We exclude _closed curves_ from this sum. Including the closed curves corresponds to coupling our colored field to an uncolored scalar particle. For simplicity, we delay the discussion of uncolored amplitudes In fact, (109) also makes the loop integrals easy to do. This leads to a direct curve integral formula for the amplitude \(\mathcal{A}\), which we study in Section 7. Later, in Section 10, we also show that the integrands \(\mathcal{I}\) can be computed recursively, starting from the curve integral formula, (109). This result generalises the standard _forward limit_ method for 1-loop amplitudes to _all_ orders in the perturbation series. ### Example: the tree level 5-point amplitude Curve integrals give new and simple amplitude formulas, even at tree level. Take the same fatgraph studied in Sections 4.1, 5.3 and 6.1. The kinematic variables for the curves on this graph are \((i<j-1)\) \[X_{ij}=(k_{i}+...+k_{j-1})^{2}+m^{2}. \tag{100}\] Then the amplitude, given by (101), is \[\mathcal{A}(12345)=\int dy_{1}dy_{2}\,Z, \tag{101}\] where \[-\log Z=\alpha_{13}X_{13}+\alpha_{14}X_{14}+\alpha_{24}X_{24}+ \alpha_{25}X_{25}+\alpha_{35}X_{35}. \tag{102}\] Using the formulas for \(\alpha_{ij}\) from Section 5.3, \(Z\) can be further simplified to \[\log Z=X_{25}\,x+X_{35}\,y+s_{13}f_{13}+s_{14}f_{14}+s_{24}f_{24}, \tag{103}\] where \(s_{ij}=2k_{i}\cdot k_{j}\) and the \(f_{ij}\) are the simple functions \[f_{13}=\max(0,x),\qquad f_{14}=\max(0,x,x+y),\qquad f_{24}=\max( 0,y). \tag{104}\] The 5-point amplitude is then \[\mathcal{A}(12345)=\int dy_{1}dy_{2}\,\exp\left(X_{25}\,x+X_{35} \,y+s_{13}f_{13}+s_{14}f_{14}+s_{24}f_{24}\right). \tag{105}\] It is already interesting to note that the formula for the amplitude has been written in terms of the simple functions \(f_{13},f_{14},f_{24},y_{1},y_{2}\), and the Mandelstam invariants \(s_{ij}\). These \(s_{ij}\) are automatically summed together by the formula to form the appropriate poles of the tree level amplitude. ### Example: the planar 1-loop propagator Consider again the 1-loop planar propagator (Sections 4.6 and 5.6). The amplitude is \[\mathcal{A}=\int d^{D}\ell\int\limits_{x+y\leq 0}dxdyZ, \tag{106}\] where \[-\log Z=\alpha_{C_{1}}X_{C_{1}}+\alpha_{C_{2}}X_{C_{2}}+\alpha_{S _{1}^{\prime}}X_{S_{1}^{\prime}}+\alpha_{S_{2}^{\prime}}X_{S_{2}^{\prime}}. \tag{107}\] We can assign the momenta of the curves to be \[P_{C_{1}}=0,\ \ P_{S_{1}^{\prime}}=\ell,\ \ P_{S_{2}^{\prime}}= \ell+k,\ \ P_{C_{2}}=0. 
\tag{108}\] Substituting these momenta (with \(k^{2}+m^{2}=0\)) into the integrand gives \[-\log Z=\ell^{2}(\alpha_{S_{1}^{\prime}}+\alpha_{S_{2}^{\prime}}) +2\ell\cdot k\alpha_{S_{2}^{\prime}}+m^{2}(\alpha_{C_{1}}+\alpha_{C_{2}}+ \alpha_{S_{1}^{\prime}}). \tag{109}\] At this point, we can either integrate over \(x+y\leq 0\), or do the loop integral. Doing the loop integral first is a Gaussian integral, which gives \[\mathcal{A}=\int\limits_{x+y\leq 0}dxdy\left(\frac{\pi}{\alpha_{S^{\prime}_{1}}+ \alpha_{S^{\prime}_{2}}}\right)^{\frac{D}{2}}\exp\left(k^{2}\frac{\alpha_{S^{ \prime}_{2}}^{2}}{\alpha_{S^{\prime}_{1}}+\alpha_{S^{\prime}_{2}}}-m^{2}( \alpha_{C_{1}}+\alpha_{C_{2}}+\alpha_{S^{\prime}_{1}})\right). \tag{110}\] This resembles the Symanzik formula for a single Feynman integral, but instead includes contributions from all three Feynman diagrams for this amplitude. Finally, substituting the headlight functions gives \[\mathcal{A}=\int\limits_{x+y\leq 0}dxdy\left(\frac{-\pi}{x+y}\right)^{ \frac{D}{2}}\exp\left[m^{2}\frac{(\max(0,y)-y-\max(0,x))^{2}}{x+y}+m^{2}(2 \max(0,y)+x)\right]. \tag{111}\] It is not immediately obvious that this reproduces the Feynman integrals for this amplitude. But note that, for example, restricting the domain of the integral to the negative orthant gives \[\int\limits_{x,y\leq 0}dxdy\left(\frac{-\pi}{x+y}\right)^{\frac{D}{2}}\exp \left(m^{2}\left(\frac{y^{2}}{x+y}+x\right)\right). \tag{112}\] After writing \[\frac{y^{2}}{x+y}+x=-\frac{xy}{x+y}+(x+y), \tag{113}\] this recovers the Feynman integral for the bubble graph. By extending the integral to the full region, \(x+y\leq 0\), we recover not just this bubble integral, but the full amplitude! ### Example: the planar 1-loop 3-point amplitude For a more complicated planar example, consider the 1-loop planar 3-point amplitude, with the fatgraph \(\Gamma\), in Figure 30. There are nine curves on this graph: three curves \(C_{i,i+2}\), connecting external lines \(i,i+2\); three curves \(C_{i,i}\), which loop around and come back to external line \(i\); and three curves \(C_{i,0}\) that start from the external line \(i\) and end in a spiral around the closed loop. In the planar sector, a convenient way to assign momenta is to use _dual variables_. Let \(z_{i}^{\mu}\) (\(i=1,2,3\)) be dual variables for the external lines, and \(z_{0}\) be the dual variable for the closed loop. Then curves from external lines \(i\) to \(j\) have \[X_{i,j}=(z_{j}-z_{i})^{2}+m^{2}, \tag{114}\] whereas a curve from \(i\) that ends in a spiral around the loop has \[X_{i,0}=(z_{i}-z_{0})^{2}+m^{2}. \tag{115}\] If the external momenta are \(p_{1},p_{2},p_{3}\), then we can take \(z_{1}=0,z_{2}=p_{1},z_{3}=p_{1}+p_{2}\). The closed loop variable, \(z_{0}\), can be used as a loop momentum variable. The 3-point one-loop planar amplitude is then \[{\cal A}=\int d^{D}z_{0}\int\limits_{\sum t_{i}\geq 0}d{\bf t}\,Z, \tag{108}\] where (taking cyclic indices mod 3) \[-\log Z=\sum_{i=1}^{3}\alpha_{i,i+2}X_{i,i+2}+\sum_{i=1}^{3}\alpha_{i,i}X_{i,i }+\sum_{i=1}^{3}\alpha_{i,0}X_{i,0}. \tag{109}\] The headlight functions for these curves are \[\alpha_{i,0}=t_{i}+g_{i+1}-g_{i}, \tag{110}\] \[\alpha_{i,i+2}=g_{i}-f_{i}-f_{i+1},\] (111) \[\alpha_{i,i}=f_{i+1}+h_{i}-g_{i}-g_{i+1}, \tag{112}\] where \[f_{i}=\max(0,t_{i}), \tag{113}\] \[g_{i}=\max(0,t_{i},t_{i}+t_{i+1}),\] (114) \[h_{i}=\max(0,t_{i},t_{i}+t_{i+1},t_{i}+t_{i+1}+t_{i+2}). \tag{115}\] Figure 30: A fatgraph for the 3-point 1-loop planar amplitude. 
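Since the headlight functions above are just nested maxima, they are easy to tabulate numerically. The following short Python sketch is a direct transcription of the formulas just given, with indices taken cyclically mod 3; the only check performed is that the three spiralling \(\alpha_{i,0}\) telescope to \(t_{1}+t_{2}+t_{3}\), which follows immediately from their definition. It is intended only to show how the piecewise-linear data entering the curve action can be evaluated at a point.

```python
import numpy as np

def f(t, i):
    return max(0.0, t[i % 3])

def g(t, i):
    return max(0.0, t[i % 3], t[i % 3] + t[(i + 1) % 3])

def h(t, i):
    return max(0.0, t[i % 3], t[i % 3] + t[(i + 1) % 3],
               t[i % 3] + t[(i + 1) % 3] + t[(i + 2) % 3])

def headlights(t):
    """Headlight functions of the planar 1-loop 3-point fatgraph, keyed by curve type."""
    a = {}
    for i in range(3):
        a['spiral_%d' % (i + 1)] = t[i] + g(t, i + 1) - g(t, i)                 # alpha_{i,0}
        a['chord_%d' % (i + 1)] = g(t, i) - f(t, i) - f(t, i + 1)               # alpha_{i,i+2}
        a['loop_%d' % (i + 1)] = f(t, i + 1) + h(t, i) - g(t, i) - g(t, i + 1)  # alpha_{i,i}
    return a

t = np.array([0.7, -1.2, 0.4])
a = headlights(t)
# transcription check: the spiralling alphas telescope to t_1 + t_2 + t_3
assert abs(sum(a['spiral_%d' % i] for i in (1, 2, 3)) - t.sum()) < 1e-12
print(a)
```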
### Note on factorization The integrands defined by curve integrals factorise in the correct way. Take again the curve integral \[\mathcal{I}=\int\frac{d^{E}t}{\text{MCG}}\,Z. \tag{112}\] In Appendix B, we show that the residue at \(X_{C}=0\) is given by \[\text{Res}_{X_{C}=0}\,\mathcal{I}=\int\frac{d^{E-1}t}{\text{MCG}^{\prime}}Z^{\prime}, \tag{113}\] which is now the curve integral for the fatgraph \(\Gamma_{C}\), obtained by cutting \(\Gamma\) along \(C\). In this formula, \(\text{MCG}^{\prime}\) is the MCG of \(\Gamma_{C}\), and the momentum \(P_{C}^{\mu}\) of the curve \(C\) is put on shell. In the fatgraph \(\Gamma_{C}\), the curve \(C\) gives two new boundaries, which are assigned momenta \(\pm P_{C}^{\mu}\). For example, before loop integration, the non-planar 1-loop fatgraph \(\Gamma\) has loop integrand \[\mathcal{I}=\int dxdy\,\exp\left(-\sum_{n=-\infty}^{\infty}\alpha_{n}X_{n}\right). \tag{114}\] Here, the momenta of the curves are \(P_{n}^{\mu}=\ell^{\mu}+nk^{\mu}\). Consider the \(X_{0}=0\) pole. The parameter \(\alpha_{0}\) vanishes outside \(x\geq 0\). In this region, the only non-vanishing parameters are \(\alpha_{1}\) and \(\alpha_{-1}\). The residue at \(X_{0}=0\) is then \[\text{Res}_{X_{0}=0}\mathcal{I}=\int dy\,\exp\left(-\alpha_{1}^{\prime}X_{1}-\alpha_{-1}^{\prime}X_{-1}\right), \tag{115}\] where the restriction to \(x=0\) gives \(\alpha_{1}^{\prime}=\max(0,y)\) and \(\alpha_{-1}^{\prime}=y-\max(0,y)\). This is the \(n=4\) tree level amplitude, with external momenta \(k^{\mu},\ell^{\mu},-k^{\mu},-\ell^{\mu}\). The two propagators are \(X_{1}=(k+\ell)^{2}+m^{2}\) and \(X_{-1}=(k-\ell)^{2}+m^{2}\). ## 7 Amplitude Curve Integrals Following the previous section, the curve integral formula for the full amplitude is \[\mathcal{A}=\int\frac{d^{E}\mathbf{t}}{\text{MCG}}\int\left(\prod d^{D}\ell_{a}\right)\exp(-S(\mathbf{t})). \tag{116}\] The loop integration variables, \(\ell_{a}\), appear quadratically in the curve action \(S(\mathbf{t})\). So, if we perform the loop integral _before_ performing the curve integral over the \(t_{i}\), it is a Gaussian integral. The result is a curve integral \[\mathcal{A}=\int\frac{d^{E}\mathbf{t}}{\text{MCG}}\,\left(\frac{\pi^{L}}{\mathcal{U}}\right)^{\frac{D}{2}}\exp\left(\frac{\mathcal{F}_{0}}{\mathcal{U}}-\mathcal{Z}\right), \tag{117}\] where \(\mathcal{U},\mathcal{F}_{0}\) and \(\mathcal{Z}\) are homogeneous polynomials in the \(\alpha_{C}\)'s that we call _surface Symanzik polynomials_. The curve integral (7.2) resembles the Schwinger form of a single Feynman integral, but it integrates to the full amplitude. Once again, it is important to mod out by the action of the mapping class group, to ensure that the integral does not overcount Feynman diagrams. We now summarise how to compute the surface Symanzik polynomials, \(\mathcal{U},\mathcal{F}_{0},\mathcal{Z}\). Suppose that a choice of loop momentum variables, \(\ell_{a}^{\mu}\), has been fixed. The momentum assigned to a curve \(C\) is of the form \[P_{C}^{\mu}=K_{C}^{\mu}+\sum h_{C}^{a}\ell_{a}^{\mu}, \tag{7.3}\] for some integers \(h_{C}^{a}\). These \(h_{C}^{a}\) can be understood geometrically in terms of intersections between \(C\) and a basis of \(L\) closed curves on the fatgraph. Using the \(h_{C}^{a}\) intersection numbers, define an \(L\times L\) matrix \[A^{ab}=\sum_{C}h_{C}^{a}h_{C}^{b}\alpha_{C}, \tag{7.4}\] and an \(L\)-dimensional vector (with momentum index \(\mu\)) \[B^{a,\mu}=\sum_{C}h_{C}^{a}\alpha_{C}K_{C}^{\mu}.
\tag{7.5}\] Then the surface Symanzik polynomials are \[\mathcal{U}=\det A,\qquad\frac{\mathcal{F}_{0}}{\mathcal{U}}=B_{\mu}^{a}\left(A^{-1}\right)_{ab}B^{b,\mu},\qquad\mathcal{Z}=\sum_{C}\alpha_{C}\left(K_{C}^{2}+m^{2}\right). \tag{7.6}\] These arise in the usual way by performing the Gaussian integral, as discussed in detail in Appendix C. In fact, the surface Symanzik polynomials have simple expressions when expanded as a sum of monomials. For a set of curves, \(\mathcal{S}=\{C_{1},...,C_{L}\}\), write \(\alpha_{\mathcal{S}}\) for the corresponding monomial \[\alpha_{\mathcal{S}}=\prod_{i=1}^{L}\alpha_{C_{i}}. \tag{7.7}\] The determinant, \(\det A\), can be expanded to give \[\mathcal{U}=\sum_{\begin{subarray}{c}\mathcal{S}\text{ cuts}\,\Sigma\\ \text{to disk}\end{subarray}}\alpha_{\mathcal{S}}\,, \tag{7.8}\] where we sum over all sets \(\mathcal{S}\) whose curves cut \(\Gamma\) down to a tree fatgraph. In other words, \(\mathcal{U}\) is the sum over all _maximal cuts_ of the graph \(\Gamma\). Moreover, using the Laplace expansion of the matrix inverse, \(\mathcal{F}_{0}\) can be expanded to find \[\mathcal{F}_{0}=\sum_{\begin{subarray}{c}\mathcal{S}^{\prime}\text{ cuts}\,\,\Sigma\\ \text{to }2\text{ disks}\end{subarray}}\alpha_{\mathcal{S}^{\prime}}\left(\sum_{C\in\mathcal{S}^{\prime}}K_{C}^{\mu}\right)^{2}, \tag{7.9}\] where the sum in this formula is now over sets \(\mathcal{S}^{\prime}\) of \(L+1\) curves that factorise \(\Gamma\) into two disjoint tree graphs. Each monomial in the sum is multiplied by the total momentum flowing through the factorisation channel. A complete derivation of (7.8) and (7.9) is given in Appendix C. ### Example: the planar 1-loop propagator We return to the planar 1-loop propagator (Sections 4.6, 5.6, 6.2). Of the four curves \(C_{1},C_{2},S_{1}^{\prime},S_{2}^{\prime}\), only \(S_{1}^{\prime}\) and \(S_{2}^{\prime}\) carry loop momentum and cut \(\Gamma\) open to a tree. The first surface Symanzik polynomial is therefore \[\mathcal{U}=\alpha_{S_{1}^{\prime}}+\alpha_{S_{2}^{\prime}}. \tag{7.10}\] The \(B\)-vector is \[B^{\mu}=\alpha_{S_{2}^{\prime}}k^{\mu}, \tag{7.11}\] so that the second surface Symanzik polynomial is \[\mathcal{F}_{0}=\alpha_{S_{2}^{\prime}}^{2}k^{2}. \tag{7.12}\] Finally, \[\mathcal{Z}=m^{2}(\alpha_{S_{1}^{\prime}}+\alpha_{C_{1}}+\alpha_{C_{2}}). \tag{7.13}\] The amplitude is then given by the curve integral \[\mathcal{A}=\int\limits_{x+y\geq 0}dxdy\left(\frac{\pi}{\alpha_{S_{1}^{\prime}}+\alpha_{S_{2}^{\prime}}}\right)^{\frac{D}{2}}\exp\left(\frac{\alpha_{S_{2}^{\prime}}^{2}k^{2}}{\alpha_{S_{1}^{\prime}}+\alpha_{S_{2}^{\prime}}}-m^{2}\left(\alpha_{S_{1}^{\prime}}+\alpha_{C_{1}}+\alpha_{C_{2}}\right)\right). \tag{7.14}\] This again recovers the formula (6.15), which we obtained by direct integration in the previous section. ### Example: the non-planar 1-loop propagator We return to the non-planar 1-loop propagator (Sections 4.4 and 5.4). The momentum of the curve \(C_{n}\) is \[P_{n}^{\mu}=\ell^{\mu}+np^{\mu}. \tag{7.15}\] Every curve \(C_{n}\) cuts \(\Gamma\) to a tree graph with 4 external legs. So the first surface Symanzik polynomial is \[\mathcal{U}=\sum_{n=-\infty}^{\infty}\alpha_{n}, \tag{7.16}\] where \(\alpha_{n}\) is the headlight function for \(C_{n}\). Every pair of distinct curves \(C_{n},C_{m}\) cuts \(\Gamma\) into two trees, and so \[\mathcal{F}_{0}=\sum_{n,m=-\infty}^{\infty}nm\alpha_{n}\alpha_{m}p^{2}. \tag{7.17}\] Finally, \[\mathcal{Z}=\sum_{n=-\infty}^{\infty}\alpha_{n}(m^{2}+n^{2}p^{2}).
\tag{7.18}\] The amplitude is then \[\mathcal{A}=\int\frac{dxdy}{\text{MCG}}\left(\frac{\pi}{\mathcal{U}} \right)^{\frac{D}{2}}\exp\left(\frac{\mathcal{F}_{0}}{\mathcal{U}}-\mathcal{Z} \right). \tag{7.19}\] The MCG acts on the fan in this case as \(\mathbf{g}_{n}\mapsto\mathbf{g}_{n+1}\). A fundamental domain for this action is clearly the positive orthant, spanned by \(\mathbf{g}_{0},\mathbf{g}_{1}\). In this orthant, the surface Symanzik polynomials are \[\mathcal{U} =x+y, \tag{7.20}\] \[\mathcal{F}_{0} =y^{2}p^{2},\] (7.21) \[\mathcal{Z} =xm^{2}. \tag{7.22}\] So we find \[\mathcal{A}=\int\limits_{x,y\geq 0}dxdy\left(\frac{\pi}{x+y} \right)^{D/2}\exp\left(m^{2}\left(-\frac{y^{2}}{x+y}-x\right)\right), \tag{7.23}\] where we have put \(p^{\mu}\) on shell, \(p^{2}+m^{2}=0\). Or, equivalently, \[\mathcal{A}=\int\limits_{x,y\geq 0}dxdy\left(\frac{\pi}{x+y} \right)^{D/2}\exp\left(-p^{2}\frac{xy}{x+y}-m^{2}(x+y)\right). \tag{7.24}\] ### Example: The non-planar 3-point amplitude Even at 1-loop, it is not always easy to identify the fundamental domain of the MCG. To see the problem, consider the non-planar one-loop 3-point amplitude. Let the first trace factor have external particle \(p_{1}^{\mu}\), and the second trace factor have \(p_{2}^{\mu}\) and \(p_{3}^{\mu}\). The curves, \(C_{ij}^{n}\), connecting a pair of distinct start and end points, \(i,j\), are labelled by the number of times, \(n\), they loop around the graph. The curves \(C_{22}\) and \(C_{33}\) begin and end at the same edge, and are invariant under the MCG. Then, for a specific choice of loop momentum variable, we find the momentum assignments \[P_{12}^{n}=np_{1}^{\mu},\qquad P_{13}^{n}=np_{1}^{\mu}-p_{2}^{\mu},\qquad P_{22}=0,\qquad P_{33}=0. \tag{7.25}\] We can readily give the curve integral formula for the amplitude, \[\mathcal{A}=\int\frac{dxdydz}{\text{MCG}}\left(\frac{\pi}{ \mathcal{U}}\right)^{\frac{D}{2}}\exp\left(\frac{\mathcal{F}_{0}}{\mathcal{U }}-\mathcal{Z}\right), \tag{7.26}\] where the surface Symanzik polynomials are \[\mathcal{U}=\sum_{n=-\infty}^{\infty}\alpha_{13}^{n}+\alpha_{12 }^{n},\qquad\mathcal{F}_{0}=B^{\mu}B^{\mu},\qquad\mathcal{Z}=m^{2}\left( \alpha_{22}+\alpha_{33}+\sum_{n=-\infty}^{\infty}\alpha_{12}^{n}\right). \tag{7.27}\] In the formula for \({\cal F}_{0}\), the \(B\)-vector is \[B^{\mu}=\sum_{n=-\infty}^{\infty}\,np_{1}^{\mu}\alpha_{12}^{n}+(np_{1}^{\mu}-p_{2 }^{\mu})\alpha_{13}^{n}. \tag{111}\] However, at this point we confront the problem of quotienting by MCG. The MCG is generated by \[{\bf g}_{12}^{n}\mapsto{\bf g}_{12}^{n+1},\ {\bf g}_{13}^{n}\mapsto{\bf g}_{13}^{n+1}, \tag{112}\] and it leaves \({\bf g}_{22}\) and \({\bf g}_{33}\) invariant. Naively, we might want to quotient by the MCG by restricting the integral to the region spanned by: \({\bf g}_{12}^{0},{\bf g}_{13}^{0},{\bf g}_{22},{\bf g}_{33}\). However, this region is too small. It does not include any full cones of the Feynman fan. We could also try restricting the integral to the region spanned by: \({\bf g}_{12}^{0},{\bf g}_{13}^{0},{\bf g}_{12}^{1},{\bf g}_{13}^{1},{\bf g}_{2 2},{\bf g}_{33}\). But this region is too large! The amplitude has _three_ Feynman diagrams, but this region contains _four_ cones, so it counts one of the diagrams twice. As this example shows, it is already a delicate problem to explicitly specify a fundamental domain for the MCG action. ### Example: genus-one 2-loop amplitudes The problem of modding by MCG becomes even more acute for non-planar amplitudes. 
The genus one 2-loop vacuum amplitude, considered in Section 5.7, is computed by a 3-dimensional curve integral. But the MCG action in this case is an action of \(\mathrm{SL}_{2}\mathbb{Z}\). The action on \(g\)-vectors is of the form \[{\bf g}_{p/q}\mapsto{\bf g}_{(ap+bq)/(cp+dq)},\qquad\mbox{for}\ \begin{bmatrix}a&b\\ c&d\end{bmatrix}\in\mathrm{SL}_{2}\mathbb{Z}. \tag{113}\] For the vacuum amplitude, a simple example of a fundamental region is the region spanned by \({\bf g}_{1/0},{\bf g}_{0/1}\), and \({\bf g}_{1/1}\). However, for the \(n\)-point genus one 2-loop amplitude, identifying a fundamental region of this \(\mathrm{SL}_{2}\mathbb{Z}\)-action becomes very difficult. In the next section, we present a simple method to compute the integrals in our formulas, for any MCG action. ## 8 Modding Out by the Mapping Class Group Our formulas for amplitudes and integrands take the form of integrals over \(\mathbb{R}^{E}\) modulo the action of the Mapping Class Group, MCG, \[{\cal A}=\int\frac{d^{E}t}{\mathrm{MCG}}\,f(t), \tag{114}\] for some MCG-invariant function, \(f(t)\). One way to evaluate this integral is to find a fundamental domain for the MCG action. But it is tricky to identify such a region in general. Instead, it is convenient to mod out by the MCG action by defining a kernel, \({\cal K}\), such that \[{\cal A}=\int d^{E}t\,{\cal K}(t)f(t). \tag{115}\] In this section, we find kernels, \(\mathcal{K}\), that can be used at all orders in perturbation theory, for all Mapping Class Groups. ### Warm up Consider the problem of evaluating an integral modulo a group action on its domain. For example, suppose \(f(x)\) is invariant under the group of translations, \(T\), generated by \(x\mapsto x+a\), for some constant, \(a\). We want to evaluate an integral \[I=\int\limits_{\mathbb{R}/T}dxf(x). \tag{110}\] One way to do this is to restrict to a fundamental domain of \(T\): \[I=\int\limits_{0}^{a}dxf(x). \tag{111}\] But we can alternatively find a kernel \(\mathcal{K}(x)\) such that \[I=\int\limits_{-\infty}^{\infty}dx\,\mathcal{K}(x)f(x). \tag{112}\] One way to find such a kernel is to take a function \(g(x)\) with finite support around \(0\), say. Then we can write \[1=\frac{\sum_{n=-\infty}^{\infty}g(x-na)}{\sum_{n=-\infty}^{\infty}g(x-na)}, \tag{113}\] provided that \(\sum_{n=-\infty}^{\infty}g(x-na)\) is nowhere vanishing. Inserting this into (110), \[I=\int\limits_{\mathbb{R}/T}dx\,\frac{\sum_{n=-\infty}^{\infty}g(x-na)}{\sum_ {n=-\infty}^{\infty}g(x-na)}f(x)=\int\limits_{-\infty}^{\infty}dx\,\frac{g(x) }{\sum_{n=-\infty}^{\infty}g(x-na)}f(x). \tag{114}\] So that we can use \[\mathcal{K}(x)=\frac{g(x)}{\sum_{n=-\infty}^{\infty}g(x-na)} \tag{115}\] as a kernel to quotient out by the translation group. For example, suppose that we take \(g(x)=\Theta(x+a)\Theta(-x+a)\), where \(\Theta(x)\) is the Heaviside function. Inserting this into (114) gives \[I=\int\limits_{-a}^{a}dx\,\frac{1}{2}f(x). \tag{116}\] The domain of this integral contains two copies of a fundamental domain for \(T\), but this is compensated for by the \(1/2\) coming from \(\mathcal{K}(x)\) to give the correct answer. ### A Tropical Mirzakhani kernel The headlight functions, \(\alpha_{C}\), give a very natural solution to the problem of defining an integration kernel, \(\mathcal{K}\). Consider the case when MCG has _one generator_. Let \(\mathcal{S}\) be the set of curves which are _not_ invariant under MCG. 
The sum of their headlight functions, \[\rho=\sum_{C\in\mathcal{S}}\alpha_{C}, \tag{111}\] is itself a MCG-invariant function. Moreover, \(\rho\) does not vanish on any top-dimensional cone (because no diagram can be formed without using at least one propagator from \(\mathcal{S}\)). So we can consider inserting the function \[1=\frac{\rho}{\rho} \tag{112}\] into our integrals. The set \(\mathcal{S}\) is the disjoint union of cosets under the MCG action, by the Orbit-Stabilizer theorem. When MCG has a single generator, these cosets are easy to describe. MCG does not alter the endpoints of curves. So if \(C_{ij}\in\mathcal{S}\) is a curve connecting external lines \(i\) and \(j\), the orbit of \(C_{ij}\) is a coset of \(\mathcal{S}\). By the Orbit-Stabilizer theorem, these cosets are disjoint. So \(\rho\) can be resumed as \[\rho=\sum_{i,j}\sum_{\gamma\in\text{MCG}}\alpha_{\gamma C_{ij}}. \tag{113}\] Given this, we can mod out by the MCG action by defining \[\mathcal{K}=\sum_{i,j}\frac{\alpha_{C_{ij}}}{\rho}, \tag{114}\] where we choose a distinguished representative, \(C_{ij}\), for each coset. We call (114) a _tropical Mirzakhani kernel_, because it is a tropical version of the kernel introduced by Mirzakhani to compute Weil-Petersson volumes [23]. Each headlight function, \(\alpha_{C_{ij}}\), is non-vanishing in a convex region \(V_{C_{ij}}\) that is spanned by all the cones in the fan that contain \(\mathbf{g}_{C_{ij}}\). These regions _over-count_ the diagrams, but this over-counting is corrected by the kernel, \(\mathcal{K}\). ### Example: the non-planar 1-loop propagator As a sanity check, let us repeat the calculation of the non-planar 1-loop propagator from Section 7.2, but now using the tropical Mirzakhani kernel. The MCG has one generator, and no curves are MCG-invariant. So take the set \(\mathcal{S}\) to be the set of all curves, \(C_{n}\), and write \[\rho=\sum_{n=-\infty}^{\infty}\alpha_{n}. \tag{115}\] Choose \(C_{0}\), say, as the coset representative (all other curves are in the orbit of \(C_{0}\)). Then the tropical Mirzakhani kernel, (8.13), is \[\mathcal{K}=\frac{\alpha_{0}}{\rho}. \tag{8.15}\] Using this kernel, we find a pre-loop-integration integrand, \[\mathcal{I}=\int dxdy\,\mathcal{K}(x,y)\exp\left(-\sum_{i=-\infty}^{\infty} \alpha_{i}X_{i}\right). \tag{8.16}\] The headlight functions for this example were given in (5.33). In particular, \(\alpha_{0}=\max(0,x)\), which is vanishing outside of the region \(x\geq 0\). In this region, the only other non-vanishing headlight functions are \[\alpha_{-1}=\max(0,y)\qquad\text{and}\qquad\alpha_{1}=-y+\max(0,y). \tag{8.17}\] The formula is therefore \[\mathcal{I}=\int\limits_{x\geq 0}dxdy\,\frac{x}{x+|y|}\text{exp}\left(- \alpha_{-1}X_{-1}-\alpha_{0}X_{0}-\alpha_{1}X_{1}\right). \tag{8.18}\] We can now perform the loop integral. Recall that \(X_{n}=(\ell+nk)^{2}+m^{2}\). Using this, the exponent, \(Z\), in (8.18) is \[-\log Z=\rho\,\ell^{2}+2\ell\cdot k(\alpha_{1}-\alpha_{-1})+m^{2}\alpha_{0}. \tag{8.19}\] The Gaussian integral gives \[\mathcal{A}=\int\limits_{x\geq 0}dxdy\frac{x}{x+|y|}\left(\frac{\pi}{x+|y|} \right)^{\frac{D}{2}}\exp\left(k^{2}\frac{|y|^{2}}{x+|y|}-m^{2}x\right). \tag{8.20}\] This doesn't immediately look like the Feynman integral for the 1-loop bubble. However, writing \[\frac{2x}{x+y}=1+\frac{x-y}{x+y}, \tag{8.21}\] we find \[\mathcal{A}=\int\limits_{x,y\geq 0}dxdy\left(\frac{\pi}{x+y}\right)^{\frac{D}{ 2}}\exp\left(k^{2}\frac{y^{2}}{x+y}-m^{2}x\right). 
\tag{8.22}\] since the integrand over \(x,y\geq 0\) is even under \(x\leftrightarrow y\), whereas \(x-y\) is odd. This is still not exactly the same as the conventional integral. To recover the conventional form, note that the exponent can be rewritten as \[-\frac{y^{2}}{x+y}-x=\frac{xy}{x+y}-(x+y). \tag{8.23}\] ### General Tropical Mirzakhani Kernels Tropical Mirzakhani kernels can be defined to _any_ mapping class group, with more than one generator. Fix some fatgraph \(\Gamma\), with mapping class group MCG. A conceptually simple way to define a kernel is to consider the set of \(L\)-tuples of curves that cut \(\Gamma\) to a tree graph. These define the _first Symanzik polynomial_, \[\mathcal{U}=\sum_{\begin{subarray}{c}S\\ \text{cuts to tree}\end{subarray}}\alpha_{S}, \tag{110}\] which can also be computed as a determinant of a matrix (Section 7). This function does not vanish on top-dimensional cones of the Feynman fan, since every diagram contains a subset of propagators that cut \(\Gamma\) to a tree. We can therefore insert \[1=\frac{\mathcal{U}}{\mathcal{U}} \tag{111}\] into our integrals. Under the MCG action, the set of \(L\)-tuples appearing in \(\mathcal{U}\) is partitioned into cosets. Each coset represents an MCG-inequivalent way of cutting \(\Gamma\) down to a tree. By choosing a representative \(L\)-tuple for each such loop cut, we arrive at a kernel \[\mathcal{K}=\sum_{\text{distinct loop cuts}}\frac{\alpha_{S}}{\mathcal{U}}. \tag{112}\] Our integrals can then be computed as a sum over maximal cuts: \[\mathcal{A}=\int\frac{d^{E}y}{\text{MCG}}\mathcal{I}=\sum_{\text{distinct loop cuts}}\int d^{E}y\,\frac{\alpha_{S}}{\mathcal{U}}\,\mathcal{I}. \tag{113}\] The disadvantage of this formula is that it can be difficult to systematically identify a set of MCG-inequivalent maximal cuts. ### The General Iterative Method A more systematic way to quotient out by MCG is to break the MCG-action one generator at a time. This iterative method has the advantage of being completely algorithmic. To apply the method, pick a trace-factor of \(\Gamma\), \(\beta\), which has some external particles, \(1,...,m\). Let \(\mathcal{S}_{\beta}\) be the set of curves that have at least one endpoint in \(\beta\), excluding any curves that are MCG-invariant, and write \[\rho_{\beta}=\sum_{C\in\mathcal{S}_{\beta}}\alpha_{C}. \tag{114}\] \(\rho_{\beta}\) is MCG-invariant. This is because the MCG action does not alter the endpoints of a curve. The set \(\mathcal{S}_{\beta}\) therefore has a coset decomposition. For each MCG orbit in \(\mathcal{S}_{\beta}\), pick a representative curve, so that \[\rho_{\beta}=\sum_{i=1}^{k}\sum_{\gamma\in\text{MCG}(\Sigma)}\alpha_{\gamma C _{i}}, \tag{115}\] for some \(k=|\mathcal{S}_{\beta}/\text{MCG}(\Sigma)|\) coset representatives \(C_{1},...,C_{k}\). We give more details about how to pick a set of coset representatives below. Every top-dimensional cone is generated by at least _one_ curve from the set \(\mathcal{S}_{\beta}\), because otherwise that cone would not correspond to a complete triangulation of \(\Gamma\). This means that \(\rho_{\beta}\) is non-vanishing everywhere, except on some lower-dimensional cones. Away from this vanishing locus, we can write \[1=\frac{\rho_{\beta}}{\rho_{\beta}}. \tag{112}\] Given this, we define a tropical Mirzakhani kernel \[\mathcal{K}_{\beta}=\sum_{i=1}^{k}\frac{\alpha_{C_{i}}}{\rho_{ \beta}}. \tag{113}\] This has the effect of breaking the MCG symmetry of the integrand, and reducing us to evaluating simpler integrals. 
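The mechanism behind this kernel is the same '\(1=\rho/\rho\)' trick as in the warm-up of Section 8.1, and it can be sanity-checked numerically in that simpler setting. The sketch below is an illustration only; the bump function, the periodic integrand and the grid are arbitrary choices. It verifies that weighting by \(g(x)/\sum_{n}g(x-na)\) reproduces the integral over one fundamental domain of the translation group.

```python
import numpy as np

a = 1.0
f = lambda x: 2.0 + np.cos(2 * np.pi * x / a)          # any a-periodic integrand
g = lambda x: ((x >= -a) & (x <= a)).astype(float)     # bump of finite support around 0

x, dx = np.linspace(-a, a, 400_001, retstep=True)      # g vanishes outside [-a, a]
denom = sum(g(x - n * a) for n in range(-3, 4))        # sum of translates; nowhere zero here
I_kernel = np.sum(g(x) / denom * f(x)) * dx            # kernel-weighted integral over R
I_domain = np.sum(f(np.linspace(0, a, 400_001))) * (a / 400_000)  # integral over [0, a]
print(I_kernel, I_domain)                              # both approach 2a = 2.0
```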
In particular, we have \[\mathcal{A}=\int\frac{d^{E}t}{\text{MCG}}\,\mathcal{I}=\sum_{i=1}^{k}\,\int\frac{d^{E}t}{\text{Stab}(C_{i})}\,\frac{\alpha_{C_{i}}}{\rho_{\beta}}\,\mathcal{I}, \tag{114}\] where \(\text{Stab}(C_{i})\leq\text{MCG}\) is the _stabilizer subgroup_ for \(C_{i}\). The factor \[\frac{\alpha_{C_{i}}}{\rho_{\beta}} \tag{115}\] is _itself_ invariant under \(\text{Stab}(C_{i})\). So the integrals, \[\int\frac{d^{E}t}{\text{Stab}(C_{i})}\,\frac{\alpha_{C_{i}}}{\rho_{\beta}}\,\mathcal{I}, \tag{116}\] can themselves be evaluated by finding a Mirzakhani kernel for the new group, \(\text{Stab}(C_{i})\). This iterative method ultimately yields an integral with no group action, \[\mathcal{A}=\int\frac{d^{E}y}{\text{MCG}}\,\mathcal{I}=\int d^{E}y\,\mathcal{K}\,\mathcal{I}, \tag{117}\] where \(\mathcal{K}\) is a sum of products of kernels of the form (115). To complete the description of the iterative method, we describe how to choose coset representatives from the set \(\mathcal{S}_{\beta}\). The curves in this set break into two subsets, as in Figure 31:

1. Curves \(C\) whose endpoints lie in two distinct trace factors. These curves cut \(\Gamma\) to a fatgraph \(\Gamma_{C}\) which has one fewer trace factor.

2. Curves \(C\) with both endpoints in the same trace factor. These curves cut \(\Gamma\) to a fatgraph \(\Gamma_{C}\) with one lower genus.

Both of these subsets have decompositions into cosets specified by the endpoints of the curves. So, for every pair of particles, \(i,j\) (with \(i\) in trace factor \(\beta\)), pick _any_ curve \(C^{0}_{ij}\) connecting them. These can be taken as coset representatives. The caveat is that, if \(i,j\) are both in trace factor \(\beta\), we must choose a curve \(C^{0}_{ij}\) which is not MCG-invariant. An MCG-invariant curve generates a trivial coset. The first step to break the MCG is then to insert the kernel \[\sum_{i\in\beta}\sum_{j}\frac{\alpha^{0}_{ij}}{\sum_{C\in\mathcal{S}_{\beta}}\alpha_{C}}. \tag{100}\] For amplitudes involving a large number of external particles, this iterative method naively requires a lot of work (growing like \(n^{L}\) with the number of particles, \(n\)). However, this apparent complexity goes away completely if we choose an appropriate fatgraph, \(\Gamma\), for our calculation. We use this to obtain simple formulas for amplitudes at all-\(n\) in a separate paper, [26]. But for now we will focus on low-point amplitudes, to illustrate the method in its simplest form. ### Example: the genus one 2-loop vacuum amplitude As an example, we briefly describe what happens for the genus one 2-loop vacuum amplitude (Sections 5.7 and 7.4). The MCG is now \(\mathrm{SL}_{2}\mathbb{Z}\). In this case, there is only _one_ coset to consider, since every curve is related to every other by \[\mathbf{g}_{p/q}\mapsto\mathbf{g}_{(ap+bq)/(cp+dq)},\qquad\text{for }\left[\begin{matrix}a&b\\ c&d\end{matrix}\right]\in\mathrm{SL}_{2}\mathbb{Z}. \tag{101}\] For the first step of the iteration, we can take any curve, say \(C_{1/0}\), as a coset representative. The kernel for the first step is \[\mathcal{K}_{1/0}=\frac{\alpha_{1/0}}{\sum_{C}\alpha_{C}}. \tag{102}\] Figure 31: The two types of curves that are not invariant under the MCG, drawn here on the surface \(S(\Gamma)\) associated to a fatgraph: curves connecting distinct trace factors (right), and topologically nontrivial curves that begin and end on the same trace factor (left).
The subgroup that leaves \(C_{1/0}\) invariant is \[\operatorname{Stab}C_{1/0}=\left\{\begin{bmatrix}1&n\\ 0&1\end{bmatrix}\ :\ n\in\mathbb{Z}\right\}<\operatorname{SL}_{2}\mathbb{Z}. \tag{108}\] The curves compatible with \(C_{1/0}\) form a single coset for the action of this subgroup. So, for the second step, we can choose just one of them, \(C_{0/1}\), say, as a coset representative. The kernel for the second step is \[\mathcal{K}_{0/1}=\frac{\alpha_{0/1}}{\sum_{C^{\prime}}\alpha_{C^{\prime}}}, \tag{109}\] where we sum only over curves, \(C^{\prime}\), that are non-intersecting with \(C_{1/0}\). The final kernel is simply \[\mathcal{K}=\frac{\alpha_{1/0}}{\alpha_{1/0}+\alpha_{0/1}+\alpha_{1/1}+\alpha_ {-1/1}}\,\frac{\alpha_{0/1}}{\alpha_{0/1}+\alpha_{1/1}+\alpha_{-1/1}}, \tag{110}\] where the simplification arises because \(C_{1/1}\) and \(C_{-1/1}\) are the only curves compatible with both \(C_{1/0}\) and \(C_{0/1}\). ## 9 Examplitudes We now show how to use the tropical Mirzakhani kernels to evaluate curve integrals. We give detailed low-dimensional examples of amplitudes up to 3 loops. ### The non-planar 1-loop 3-point amplitude The formula for the 1-loop non-planar 3-point amplitude was given in Section 7.3. However, we did not show how to quotient by the MCG. Using the tropical Mirzakhani kernel, we now find the formula \[\mathcal{A}=\int d^{3}t\,\mathcal{K}\,\left(\frac{\pi}{\mathcal{U}}\right)^{ \frac{D}{2}}\,\exp\left(\frac{\mathcal{F}_{0}}{\mathcal{U}}-\mathcal{Z}\right), \tag{111}\] where the Mirzakhani kernel is \[\mathcal{K}=\frac{\alpha_{12}^{0}+\alpha_{13}^{0}}{\rho}, \tag{112}\] with \(\rho\) the sum over all \(\alpha_{C}\) (except for those curves which are invariant under the MCG, namely \(C_{22}\), \(C_{33}\)). The surface Symanzik polynomials are, as before, \[\mathcal{U}=\sum_{n=-\infty}^{\infty}\alpha_{13}^{n}+\alpha_{12}^{n},\qquad \mathcal{F}_{0}=B_{\mu}B^{\mu},\qquad\mathcal{Z}=m^{2}\left(\alpha_{22}+ \alpha_{33}+\sum_{n=-\infty}^{\infty}\alpha_{12}^{n}\right). \tag{113}\] In the formula for \(\mathcal{F}_{0}\), the \(B\)-vector is \[B^{\mu}=\sum_{n=-\infty}^{\infty}np_{1}^{\mu}\alpha_{12}^{n}+(np_{1}^{\mu}-p_ {2}^{\mu})\alpha_{13}^{n}. \tag{114}\] Let us first see why (114) is a Mirzakhani kernel. The MCG has one generator. It leaves \(C_{22}\) and \(C_{33}\) invariant, but acts non-trivially on the set \(\{C_{12}^{n},C_{13}^{n}\}\) of all curves that connect the first trace factor to the second trace factor. \(\rho\) is the sum of \(\alpha_{C}\) for all these curves, \[\rho=\sum_{n=-\infty}^{\infty}\left(\alpha_{12}^{n}+\alpha_{13}^{n}\right). \tag{115}\] This set has two MCG cosets, labelled by the start and end points of the curves. We can take \(C_{12}^{0}\) and \(C_{13}^{0}\) as the two coset representatives. \(C_{12}^{0}\), for instance, represents the coset of all curves that begin at 1 and end at 2. (Recall Section 8.) Naively, it looks as if (114) involves infinitely many \(\alpha_{C}\), which it would be laborious to compute. However, the Mirzakhani kernel ensures that only a few \(\alpha_{C}\) are needed. To see how this works, consider, say, the first term in the kernel, \[\mathcal{K}_{12}=\frac{\alpha_{12}^{0}}{\rho}. \tag{116}\] In the region where \(\alpha_{12}^{0}\neq 0\), all other \(\alpha_{C}\) are vanishing, except for: \[\alpha_{12}^{-1},\ \alpha_{12}^{1},\ \alpha_{13}^{0},\ \alpha_{13}^{1},\ \alpha_{22}. 
\tag{117}\] So in this region, \(\mathcal{U}\) and \(B^{\mu}\) simplify to \[\mathcal{U} =\alpha_{12}^{0}+\alpha_{12}^{1}+\alpha_{12}^{-1}+\alpha_{13}^{0 }+\alpha_{13}^{1}, \tag{118}\] \[B^{\mu} =-k_{1}^{\mu}\alpha_{12}^{-1}-k_{2}^{\mu}\alpha_{13}^{0}+(k_{1}^ {\mu}-k_{2}^{\mu})\alpha_{13}^{1}. \tag{119}\] When we compute these \(\alpha\)'s, using the matrix method, we find that they become simple functions in the region \(x>0\), where \(\alpha_{12}^{0}\) is non-zero. In this region, we have \(\alpha_{12}^{0}=x\). Moreover, the remaining 5 headlight functions become \[\alpha_{13}^{1} =-\max(0,y)+\max(0,y,y+z), \alpha_{13}^{0} =\max(0,y), \tag{120}\] \[\alpha_{12}^{1} =-y-\max(0,z)+\max(0,y,y+z), \alpha_{12}^{-1} =-z+\max(0,z),\] (121) \[\alpha_{22} =-\max(0,y,y+z)+\max(0,y)+\max(0,z). \tag{122}\] These are precisely the headlight functions for the 5-point tree amplitude! We could have anticipated this, because cutting \(\Gamma\) along \(C_{12}^{0}\) yields a 5-point tree graph. Using these tree-like headlight functions, we can compute the contribution of \(\mathcal{K}_{12}\) to the curve integral, (114). The contribution from the second term in the Mirzakhani kernel is similar. In this example, we find that we only need to know the headlight functions \(\alpha_{C}\) for _tree level_ amplitudes, in order to compute the full 1-loop amplitude! In fact, we can prove that this happens _in general_. Suppose a monomial, \(\alpha_{S}\) (for some set of \(L\) curves \(S\)), appears in the numerator of the kernel \(\mathcal{K}\). In the region where \(\alpha_{S}\neq 0\), all remaining \(\alpha_{C}\)'s simplify to become headlight functions for the tree-fatgraph obtained by cutting \(\Gamma\) along all the curves in \(S\). This general phenomenon is computationally very useful, and we study it in greater detail elsewhere. ### The genus one 2-loop vacuum amplitude We have already mentioned the 2-loop genus one vacuum computation in Sections 5.7 and 7.4. We now have all the tools to compute it properly. The result is the following simple integral \[\mathcal{A}=\int\limits_{x,y\geq 0}dxdydz\,\mathcal{K}\,\left(\frac{\pi^{2}}{ \mathcal{U}}\right)^{\frac{D}{2}}\exp\left(-\mathcal{Z}\right), \tag{111}\] where the kernel is (as given in Section 8.6) \[\mathcal{K}=\frac{\alpha_{1/0}}{\alpha_{1/0}+\alpha_{0/1}+\alpha_{1/1}+\alpha _{-1/1}}\frac{\alpha_{0/1}}{\alpha_{0/1}+\alpha_{1/1}+\alpha_{-1/1}}, \tag{112}\] and now with surface Symanzik polynomials \[\mathcal{U} =\det A, \tag{113}\] \[\mathcal{Z} =m^{2}(\alpha_{1/0}+\alpha_{0/1}+\alpha_{1/1}+\alpha_{-1/1}). \tag{114}\] Note that the region where \(\alpha_{1/0}\alpha_{0/1}\neq 0\) is, in the coordinates of Section 5.7, \(x,y\geq 0\). This is why the curve integral is restricted to this region. To see how this curve integral comes about, we need to understand how to assign momenta to the curves. The easiest way to assign momenta is to use the homology of curves on the torus, Section 3.3.1. Assign the A-cycle momentum \(\ell_{1}\) and the B-cycle momentum \(\ell_{2}\). The curve \(C_{p/q}\) wraps the A-cycle \(q\) times and the \(B\)-cycle \(p\) times, and so it has momentum \(p\ell_{1}+q\ell_{2}\) giving \[X_{p/q}=(p\ell_{1}+q\ell_{2})^{2}+m^{2}. \tag{115}\] With this momentum assignment, the matrix \(A\), which records the dependence on chosen basis of loops, is \[A^{ab}=\begin{bmatrix}\alpha_{1,0}+\alpha_{1,1}+\alpha_{-1,1}&\alpha_{1,1}- \alpha_{-1,1}\\ \alpha_{1,1}-\alpha_{-1,1}&\alpha_{0,1}+\alpha_{1,1}+\alpha_{-1,1}\end{bmatrix}. 
\tag{116}\] Moreover, the momentum assigned to the curves has no non-loop part, so that \[\mathcal{Z}=m^{2}\sum_{C}\alpha_{C}, \tag{117}\] which restricts to (113) in the region \(x,y\geq 0\). We now evaluate the amplitude. Once again, we will be aided by a striking simplification of the headlight parameters. The headlight parameters were given in Section 5.7. But in the region \(x,y\geq 0\), \(\alpha_{1/1}\) and \(\alpha_{-1/1}\) simplify to become tree-like headlight functions: \[\alpha_{1/1}=-\max(0,z)\qquad\text{and}\qquad\alpha_{-1/1}=z-\max(0,z). \tag{118}\] This corresponds to the fact that cutting \(\Gamma\) along \(C_{1/0}\) and \(C_{0/1}\) gives a 4-point tree graph. Substituting these into \({\cal U}\) and \({\cal Z}\) gives \[{\cal U}=\det A=xy+y|z|+|z|x,\qquad\text{and}\qquad{\cal Z}=m^{2}(x+y+|z|). \tag{111}\] So the vacuum amplitude is simply \[{\cal A}=\int\limits_{x,y\geq 0}dxdydz\frac{xy}{(x+y+|z|)(y+|z|)}\left( \frac{\pi^{2}}{xy+y|z|+|z|x}\right)^{\frac{D}{2}}\,\exp\left(-m^{2}(x+y+|z|) \right). \tag{112}\] It is not obvious that this is the correct answer. In the conventional calculation, the amplitude receives just a single contribution: the vacuum sunset Feynman diagram. Our formula resembles, but is not the same, as the Schwinger parameterisation for this diagram. To see that they are the same, note that \[\frac{xy}{y+z}+(\text{permutations of }x,y,z)=x+y+z. \tag{113}\] It follows from this, and using that the integral above is symmetric in \(z\), that \[{\cal A}=\frac{1}{3}\int\limits_{x,y,z\geq 0}dxdydz\left(\frac{\pi^{2}}{xy+y|z| +|z|x}\right)^{\frac{D}{2}}\,\exp\left(-m^{2}(x+y+|z|)\right). \tag{114}\] This is \(1/3\) times the vacuum sunset integral. The factor of \(1/3\) corresponds to the fact that graph has \(|\text{Aut}(\Gamma)|=3\). ### The planar 2-loop tadpole We can compute the planar 2-loop tadpole amplitude using the fatgraph \(\Gamma\) in Figure 32. The curves on this fatgraph can be labelled by their endings. We have two loop boundaries, labelled \(2,3\) in the Figure. The curves are then \(C_{23},C_{22},C_{33},C_{12}^{n},C_{13}^{n}\), where \(n\) indexes how many times the curves \(C_{12}^{n},C_{13}^{n}\) loop around before beginning their spiral. As usual, we will only need a small number of these curves to compute the amplitude. Because \(\Gamma\) is planar, we can introduce dual variables \(z_{1}^{\mu},z_{2}^{\mu},z_{3}^{\mu}\) to parametrise the momenta of the curves. The propagator factors are then \[X_{12}^{n}=(z_{2}-z_{1})^{2}+m^{2},\ \ X_{13}^{n}=(z_{3}-z_{1})^{2}+m^{2},\ \ X_{23}=(z_{3}-z_{2})^{2}+m^{2}. \tag{115}\] It is convenient to take \(z_{3}-z_{1}\) and \(z_{2}-z_{1}\) as our loop momentum variables. The curve integral for the amplitude is then \[{\cal A}=\int d^{4}t\,{\cal K}\,\left(\frac{\pi^{2}}{{\cal U}}\right)^{\frac{ D}{2}}\,\exp(-{\cal Z}), \tag{116}\] where \[\mathcal{U}=\det A,\qquad\text{and}\qquad\mathcal{Z}=m^{2}\left(\alpha_{23}+\alpha_{ 22}+\alpha_{33}+\sum_{n}(\alpha_{12}^{n}+\alpha_{13}^{n})\right). \tag{111}\] Moreover, using the momenta assignments from the dual variables, (110), \(A\) is the \(2\times 2\) matrix \[A=\begin{bmatrix}\alpha_{23}+\sum_{n=-1}^{1}\alpha_{12}^{n}&\alpha_{23}\\ \alpha_{23}&\alpha_{23}+\sum_{n=-1}^{1}\alpha_{13}^{n}\end{bmatrix}. \tag{112}\] \(\mathcal{U}\) is the determinant of \(A\), and each monomial in this determinant corresponds to a pair of curves that cut \(\Gamma\) to a 5-point tree graph. 
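Before expanding this determinant, it is worth seeing explicitly how the maximal-cut monomials arise from it. In the small sympy sketch below (an aside, with the two \(n\)-sums lumped into single symbols \(S_{12}=\sum_{n}\alpha_{12}^{n}\) and \(S_{13}=\sum_{n}\alpha_{13}^{n}\)), the \(\alpha_{23}^{2}\) terms cancel in the determinant, and every surviving monomial pairs two curves; imposing \(\alpha_{C}\alpha_{D}=0\) for intersecting pairs, as in the expansion that follows, then picks out the four families of compatible pairs.

```python
import sympy as sp

a23, S12, S13 = sp.symbols('alpha_23 S_12 S_13')
A = sp.Matrix([[a23 + S12, a23],
               [a23, a23 + S13]])
print(sp.expand(A.det()))   # S_12*S_13 + S_12*alpha_23 + S_13*alpha_23
```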
Using the fact that \(\alpha_{C}\alpha_{D}=0\) if \(C,D\) intersect, we find \[\mathcal{U}=\sum_{n=-\infty}^{\infty}\left(\alpha_{23}\alpha_{12}^{n}+\alpha_{23}\alpha_{13}^{n}+\alpha_{12}^{n}\alpha_{13}^{n}+\alpha_{12}^{n}\alpha_{13}^{n+1}\right). \tag{113}\] Here, we have chosen a convention for the index \(n\) such that \(C_{12}^{n},C_{13}^{n+1}\) are compatible, but \(C_{12}^{n},C_{13}^{n-1}\) intersect. The MCG has one generator, which acts on the index \(n\). So it is clear that the monomials in \(\mathcal{U}\) can be decomposed into four cosets (corresponding to the four terms in the sum). We therefore get a Mirzakhani kernel (of the type discussed in Section 8.4) \[\mathcal{K}=\frac{\mathcal{U}_{0}}{\mathcal{U}}, \tag{114}\] with \[\mathcal{U}_{0}=\alpha_{23}\alpha_{12}^{0}+\alpha_{23}\alpha_{13}^{0}+\alpha_{12}^{0}\alpha_{13}^{0}+\alpha_{12}^{0}\alpha_{13}^{1}. \tag{115}\] In the region where \(\mathcal{U}_{0}\neq 0\), only 12 \(\alpha_{C}\)'s are non-vanishing. In fact, each monomial in \(\mathcal{U}_{0}\) defines a maximal cut of \(\Gamma\), which cuts \(\Gamma\) to a 5-point tree graph. See Figure 33. \(\mathcal{A}\) is the sum of four terms, \[\mathcal{A}=\mathcal{A}_{C_{23},C_{12}^{0}}+\mathcal{A}_{C_{23},C_{13}^{0}}+\mathcal{A}_{C_{12}^{0},C_{13}^{0}}+\mathcal{A}_{C_{12}^{0},C_{13}^{1}}, \tag{116}\] each corresponding to a different maximal cut of the fatgraph. Figure 32: A planar 2-loop tadpole graph. For instance, \({\cal A}_{C_{23},C_{12}^{0}}\) is given by the curve integral over the region \(\alpha_{23}\alpha_{12}^{0}\neq 0\). In this region, only 5 other \(\alpha_{C}\)'s are non-vanishing. The curves correspond to the five curves on the 5-point tree graph obtained by cutting along \(C_{23},C_{12}^{0}\). The 5 curves compatible with \(C_{23},C_{12}^{0}\) are \[C_{12}^{1},\ C_{12}^{-1},\ C_{13}^{0},\ C_{13}^{1},\ C_{22}. \tag{111}\] In this region, the headlight functions simplify to the expressions for the \(\alpha_{C}\)'s of the tree graph. So, as in previous examples, the curve integral only sees the headlight functions of the 5-point tree-level problem. Explicitly, in coordinates, we can take (in this region) \(\alpha_{23}=w,\ \alpha_{12}^{0}=x\), and \[\alpha_{13}^{1}=-\max(0,y)+\max(0,y,y+z), \qquad \alpha_{13}^{0}=\max(0,y), \tag{112}\] \[\alpha_{22}=-y-\max(0,z)+\max(0,y,y+z), \qquad \alpha_{12}^{1}=-z+\max(0,z), \tag{113}\] \[\alpha_{12}^{-1}=-\max(0,y,y+z)+\max(0,y)+\max(0,z). \tag{114}\] Abbreviating \[f_{1}=\max(0,y),\ f_{2}=\max(0,y,y+z),\ f_{3}=\max(0,z), \tag{115}\] the \(A\) matrix restricts in this region to \[A^{\prime}=\begin{bmatrix}w-z+f_{1}-f_{2}+2f_{3}&w\\ w&w+f_{2}\end{bmatrix}, \tag{116}\] and \({\cal Z}\) restricts to \[{\cal Z}^{\prime}=m^{2}(w+x-y-z+f_{1}+f_{2}+f_{3}). \tag{117}\] Figure 33: A maximal cut of the planar 2-loop tadpole graph. The curve \(C_{12}^{0}\) cuts \(\Gamma\) to a 3-point 1-loop graph, and the curve \(C_{23}\) cuts this further to a 5-point tree graph. The contribution of this term to the amplitude is then \[\mathcal{A}_{C_{23},C_{12}^{0}}=\int\limits_{w,x\geq 0}dwdxdydz\,\frac{wx}{\det A^{\prime}}\,\left(\frac{\pi^{2}}{\det A^{\prime}}\right)^{\frac{D}{2}}\exp(-\mathcal{Z}^{\prime}). \tag{111}\] The other 3 cuts are similarly computed. ### The planar 3-loop vacuum amplitude We now consider a 3-loop example. The 3-loop vacuum amplitude can be computed using the 3-loop fatgraph, \(\Gamma\), in Figure 34. The curves on \(\Gamma\) all begin and end in a spiral.
There are four loop boundaries, labelled \(a=1,2,3,4\) in the Figure, that the curves can spiral around. Let \(C_{ab}^{\delta}\) be the curves that begin spiralling around \(a\), and end spiralling around \(b\). There are infinitely many such curves, all related by the action of the MCG. In fact, the MCG action in this case is quite complicated: it is an action of the braid group \(B_{3}\). However, using a tropical Mirzakhani kernel, we can still compute the amplitude. The momentum assignment to the curves is easy to describe, because \(\Gamma\) is a planar graph. Introduce dual momentum variables, \(z_{a}^{\mu}\), associated to the four boundaries, \(a=1,2,3,4\). Then the propagator for \(C_{ab}^{\delta}\) is just \[X_{ab}=(z_{b}^{\mu}-z_{a}^{\mu})^{2}+m^{2}. \tag{112}\] We can choose any three \(z_{a}\) to be our loop momentum variables. Our formula for the amplitude is then \[\mathcal{A}=\int d^{6}t\,\mathcal{K}\,\left(\frac{\pi^{3}}{\mathcal{U}}\right) ^{\frac{D}{2}}\,\exp(-\mathcal{Z}), \tag{113}\] where the surface Symanzik polynomials are \[\mathcal{U}=\det^{\prime}\tilde{A},\qquad\mathcal{Z}=m^{2}\sum\alpha_{ab}^{ \delta}. \tag{114}\] Figure 34: Three loop. Here, we take a slightly different approach to presenting \({\cal U}\), adapted to the planar case, by using a reduced determinant, \(\det^{\prime}\), which excludes a row and column. The \(4\times 4\) matrix \(\tilde{A}\) is (for \(a\neq b\)) \[\tilde{A}_{ab}=\sum_{\delta}\alpha^{\delta}_{ab},\qquad\tilde{A}_{aa}=-\sum_{c \neq a}\tilde{A}_{ac}. \tag{111}\] By the matrix-tree theorem, the reduced determinant, \(\det^{\prime}\tilde{A}\), turns into a sum over all maximal cuts of the fatgraph \(\Gamma\). In this case, a maximal cut is given by any three non-intersecting curves, \(\{C^{\delta}_{ab},C^{\delta^{\prime}}_{cd},C^{\delta^{\prime\prime}}_{ef}\}\), such that the pairs,--\(ab\), \(cd\), \(ef\),--span a tree on the set \(\{1,2,3,4\}\). So \(\det^{\prime}\tilde{A}\) indeed recovers the definition of \({\cal U}\) as the sum over maximal cuts of the fatgraph. Explicitly, it takes the form \[{\cal U}=\sum_{\delta,\delta^{\prime},\delta^{\prime\prime}}\sum_{\rm trees} \alpha^{\delta}_{ab}\alpha^{\delta^{\prime}}_{cd}\alpha^{\delta^{\prime\prime}} _{ef} \tag{112}\] We can now use this formula for \({\cal U}\) to define a Mirzakhani kernel, \({\cal K}\). This set of triples appearing in \({\cal U}\) can be decomposed as a sum of cosets under the MCG. The MCG-action leaves the starts and ends of each curve unchanged. So we find that there are 16 MCG-inequivalent maximal cuts of \(\Gamma\), corresponding to the \(4^{2}\) distinct labelled trees in the set \(\{1,2,3,4\}\). For each such labelled tree, we choose a coset representative. \[\alpha^{0}_{ab}\alpha^{0}_{cd}\alpha^{0}_{ef}, \tag{113}\] where the pairs \(ab,cd,ef\) define the tree, and \(C^{0}_{ab},C^{0}_{cd},C^{0}_{ef}\) is some choice of 3 non-intersecting curves. Let \({\cal U}_{0}\) be the sum of monomials for these 16 coset representatives. It has the form \[{\cal U}^{0}=\sum_{\rm 12~{}perms}\alpha_{12}\alpha_{23}\alpha_{34}+\sum_{ \rm 4~{}perms}\alpha_{14}\alpha_{24}\alpha_{34}. \tag{114}\] Then \[{\cal K}=\frac{{\cal U}_{0}}{{\cal U}} \tag{115}\] is our Mirzakhani kernel. 
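The reduced determinant introduced here is expanded in the next paragraph using the matrix-tree theorem; that combinatorial statement is easy to check directly. The sympy sketch below is an aside: the \(\delta\)-multiplicity of curves between a given pair of boundaries is suppressed, so there is one symbol per pair. It builds the \(4\times 4\) matrix with off-diagonal entries \(\alpha_{ab}\) and diagonal entries \(-\sum_{c\neq a}\alpha_{ac}\), and verifies that its reduced determinant expands into 16 monomials, one for each labelled tree on \(\{1,2,3,4\}\), up to the overall sign convention for \(\det^{\prime}\).

```python
import sympy as sp
from itertools import combinations

w = {e: sp.Symbol('a_%d%d' % e) for e in combinations(range(1, 5), 2)}
weight = lambda i, j: w[tuple(sorted((i, j)))]

A = sp.zeros(4, 4)
for i in range(1, 5):
    for j in range(1, 5):
        if i != j:
            A[i - 1, j - 1] = weight(i, j)
for i in range(4):
    A[i, i] = -sum(A[i, j] for j in range(4) if j != i)

U = sp.expand(A[1:, 1:].det())   # reduced determinant: delete one row and one column
print(len(U.args))               # 16 monomials, one per labelled tree on {1,2,3,4}
print(U.args[0])                 # e.g. a_12*a_13*a_14 (up to the overall sign)
```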
An exercise in the intersection rules for mountainousspaves shows that the following 6 curves are sufficient to build each of the 16 maximal cuts: \[C^{0}_{14} =(xRyR)^{\infty}x(LvLwLuLx)^{\infty}, \tag{116}\] \[C^{0}_{24} =(uRyRvRzR)^{\infty}u(LxLvLwLu)^{\infty},\] (117) \[C^{0}_{34} =(wRzR)^{\infty}w(LuLxLvLw)^{\infty},\] (118) \[C^{0}_{12} =(yRxR)^{\infty}y(LuLzLvLy)^{\infty},\] (119) \[C^{0}_{23} =(zRuRyRvR)^{\infty}z(LwLz)^{\infty},\] (120) \[C^{0}_{13} =(RyRx)^{\infty}LvR(zLwL)^{\infty}. \tag{121}\] This is because all of these curves are pairwise compatible. Using these curves, we can define a restricted matrix (for \(a\neq b\)) \[\tilde{A}^{0}_{ab}=\alpha^{0}_{ab},\qquad\tilde{A}^{0}_{aa}=-\sum_{c\neq a} \tilde{A}^{0}_{ac} \tag{110}\] so that, by the matrix-tree theorem, \(\mathcal{U}^{0}=\text{det}^{\prime}\tilde{A}^{0}\). Our Mirzakhani kernel is then \[\mathcal{K}=\frac{\text{det}^{\prime}\tilde{A}^{0}}{\text{det}^{\prime}\tilde {A}}. \tag{111}\] For each of the 16 monomials in \(\mathcal{U}^{0}\) we get a contribution to \(\mathcal{A}\). For instance, take the monomial \[\alpha^{0}_{12}\alpha^{0}_{23}\alpha^{0}_{34}, \tag{112}\] corresponding to the tree \(1-2-3-4\). The associated contribution to \(\mathcal{A}\) only involves \(\alpha_{C}\) for curves \(C\) compatible with this maximal cut. This maximal cut gives a tree fatgraph, with colour ordering \((123432)\).5 So this contribution to the amplitude involves only the 9 headlight functions for this 6-point tree fatgraph. Footnote 5: Cutting a curve that ends in a spiral around a loop boundary creates a new external line on that boundary. Finally, note that by permutation symmetry (with respect to the dual variables \(z_{a}\)), we only really need to evaluate two of the maximal cuts in our formula, say: \[\alpha^{0}_{12}\alpha^{0}_{23}\alpha^{0}_{34}\qquad\text{and}\qquad\alpha^{0}_{ 14}\alpha^{0}_{24}\alpha^{0}_{34}. \tag{113}\] Then \[\mathcal{A}=12\,\mathcal{A}_{12,23,34}+4\,\mathcal{A}_{14,24,34}, \tag{114}\] where each of \(\mathcal{A}_{12,23,34}\) and \(\mathcal{A}_{14,24,34}\) can be computed knowing only the headlight functions for a 6-point tree graph. ## 10 A First Look at Recursion The tropical Mirzakhani kernels dramatically simplify the task of evaluating our amplitudes. Using these kernels, our formulas for amplitudes at \(L\) loops end up expressed in terms of the headlight functions, \(\alpha_{C}\), that we have already computed for lower loop level amplitudes. In this section, we show an alternative way to apply the Mirzakhani kernels to compute amplitudes, by using them to define a powerful recursion relation for the integrands, \(\mathcal{I}\). Fix a fatgraph \(\Gamma\). Its associated (pre-loop-integration) integrand is given by the curve integral \[\mathcal{I}=\int\frac{d^{n}t}{\text{MCG}}Z,\qquad Z=\exp\left(-\sum_{C}\alpha _{C}X_{C}\right). \tag{115}\] To evaluate the curve integral, we introduce a tropical Mirzakhani kernel, as above. Take, for example, some trace factor \(\beta\). The non-separating curves with endpoints on \(\Gamma\) form a set \(\mathcal{S}_{\beta}\), and which can be partitioned into MCG orbits with some coset representatives \(C_{1},\ldots,C_{k}\). Each of these curves, \(C_{i}\), cuts \(\Gamma\) to a fat graph \(\Gamma_{C_{i}}\) with a smaller number of loops. The Mirzakhani kernel \(\mathcal{K}_{\beta}\) then gives \[\mathcal{I}=\sum_{i=1}^{k}\,\int\frac{d^{n}t}{\text{MCG}}\,\frac{\alpha_{C_{i} }}{\rho}Z. 
\tag{10.2}\] Introducing an auxiliary parameter, \(\xi\), the \(1/\rho\) can be incorporated into the exponential using \[\frac{1}{\rho}=\int\limits_{0}^{\infty}d\xi\,e^{-\rho\xi}. \tag{10.3}\] Equation (10.2) then implies the following recursion formula: \[\mathcal{I}=\int\limits_{0}^{\infty}d\xi\,\sum_{i=1}^{k}\frac{-1}{(X_{C_{i}}+ \xi)^{2}}\mathcal{I}_{\Gamma_{C_{i}}}(X_{C}^{\prime}), \tag{10.4}\] where the new dual variables \(X_{C}^{\prime}\) appearing in the integrand \(I_{\Gamma_{C_{i}}}(X_{C}^{\prime})\) are given by \[X_{C}^{\prime}=\begin{cases}X_{C}+\xi&\text{ if }C\in\mathcal{S}_{\beta}\\ X_{C}&\text{ else.}\end{cases} \tag{10.5}\] This formula, (10.4), is a completely recursive way to obtain the rational functions \(\mathcal{I}\) to all orders in the perturbation series. A detailed derivation of (10.4) is given in Appendix D. For example, consider again the 1-loop non-planar propagator computed in Section 7.2. The curves on \(\Gamma\) are \(\mathcal{S}=\{C_{n}\}\) as before, and their associated dual variables are \[X_{n}=(\ell+nk)^{2}. \tag{10.6}\] The MCG has just one generator, and so we will only need to apply the global forward limit once. Taking \(C_{0}\) as our coset representative, (10.4) gives \[\mathcal{I}_{\Gamma}=\int\limits_{0}^{\infty}d\xi\frac{-1}{(X_{0}+\xi)^{2}} \mathcal{I}_{\Gamma_{C_{0}}}(X_{1}+\xi,X_{-1}+\xi), \tag{10.7}\] where \(\Gamma_{C_{0}}\) is the 4-point tree graph obtained by cutting \(\Gamma\) along \(C_{0}\). The curves \(C_{1}\) and \(C_{-1}\) become the two possible propagators of \(\Gamma_{C_{0}}\): on \(\Gamma\), \(C_{1}\) and \(C_{-1}\) are the only two curves that do not intersect \(C_{0}\). So we have, \[\mathcal{I}_{\Gamma}=-\int\limits_{0}^{\infty}d\xi\left(\frac{1}{(X_{0}+\xi) ^{2}}\frac{1}{X_{1}+\xi}+\frac{1}{(X_{0}+\xi)^{2}}\frac{1}{X_{-1}+\xi}\right). \tag{10.8}\] Evaluating the \(\xi\) integral gives the following formula for the integrand: \[\mathcal{I}_{\Gamma}=\frac{1}{X_{0}(X_{1}-X_{0})}+\frac{1}{X_{0}(X_{-1}-X_{0})}. \tag{10.9}\] Here we see the appearance of _linearised propagators_, of the form \(1/(X_{C}-X_{C_{i}})\). Such linearised propagators have arisen in previous studies of forward limit [27; 28; 29; 30; 31; 32]. In the full sum, these linearised propagators sum to give back the ordinary loop integrand after identifications made using shifts of the loop momenta. In our current example, the loop momentum shift \(\ell\mapsto\ell+k\) shifts the dual variables by \(X_{n}\mapsto X_{n+1}\). Applying this shift to the second term in (10.9) gives \[\mathcal{I}_{\Gamma}^{\prime}=\frac{1}{X_{0}(X_{1}-X_{0})}+\frac{1}{X_{1}(X_{ 0}-X_{1})}=\frac{1}{X_{0}X_{1}}. \tag{10.10}\] For higher loop integrands, we can use multiple iterations of (10.4) to write \(\mathcal{I}\) as a sum over some tree amplitudes, with various shifts in the kinematic variables. Note that the recursion, (10.4), continues to hold even when the \(X_{C}\) variables are not all distinct. For example, if all \(X_{C}\) are set equal to a constant, \(X_{C}=X\), then \(\mathcal{I}_{\Gamma}=C_{\Gamma}/X^{E}\), where \(C_{\Gamma}\) is the number of Feynman diagrams contributing to the amplitude. In this case, (10.4) can be used to recursively compute the number of diagrams. Moreover, the recursion (10.4) also holds when there are higher poles in the integrand, arising from diagrams like bubbles. We give a more complete analysis of these recursions elsewhere. 
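Both remarks at the end of this section are easy to check in the example just computed. The sympy sketch below (an aside) verifies the recombination of the linearised propagators in the shifted integrand, and shows that setting all \(X_{C}\) equal produces \(1/X^{2}\), i.e. one Feynman diagram with \(E=2\) propagators, as expected from the diagram-counting remark.

```python
import sympy as sp

X0, X1, X = sp.symbols('X0 X1 X', positive=True)

# the two forward-limit terms after the loop-momentum shift used above
I_shifted = 1/(X0*(X1 - X0)) + 1/(X1*(X0 - X1))
print(sp.simplify(I_shifted - 1/(X0*X1)))        # 0: linearised propagators recombine

# diagram counting: with all X_C set equal, the integrand becomes (#diagrams)/X**E
print(sp.limit(I_shifted.subs(X1, X), X0, X))    # X**(-2): one diagram, E = 2
```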
## 11 Outlook The new representation of all-loop amplitudes we have studied in this paper has implications far beyond our understanding of scalar amplitudes, and has consequences for the understanding of particle and string scattering generally. We highlight a number of directions that are especially primed for immediate development. The magic of the _curve integral_ formulas is that integrals over an \(O(n)\) dimensional space, of an action built from \(O(n^{2})\) piecewise linear functions, automatically reproduce the full amplitudes, which are conventionally sums over \(O(4^{n})\) Feynman diagrams. The novelty of this formalism over conventional field theory must therefore become most manifest in the limit \(n\to\infty\) of a large number of particles. In examples, we have found evidence that the external kinematical data can be chosen so that the large-\(n\) limits of the curve integrals are smooth, leading to formulas for amplitudes in the large-\(n\) limit in terms of _tropical path integrals_. Studying this limit might lead to a new understanding of the emergence of strings from colored particles at strong coupling. At strong coupling, the scattering for a small number of particles is exponentially small, and the amplitude is instead dominated by the emission of a huge number of particles, approximating field configurations that should more continuously connect to a string worldsheet picture. Even at finite \(n\) the curve integral formalism offers radically new methods to compute amplitudes. For instance, it allows amplitudes to be evaluated numerically by direct integration, thus avoiding the generation of Feynman diagrams altogether. The geometric properties of the fan suggest a new search for an optimal numerical integration strategy, uplifting recent breakthroughs in the numerical evaluation of Feynman integrals in parametric form to entire amplitudes [33, 34]. A second frontier ripe for immediate investigation is an understanding of gravity and gravity-like amplitudes. Just as the \(\mathrm{tr}\phi^{3}\) theory is a model for general colored amplitudes, a scalar model for gravity is given by an uncolored scalar \(\sigma\) with cubic self-interaction \(\sigma^{3}\). In special cases, it is now standard to think of uncolored and colored theories as related by double-copy or 'gravity = gauge\({}^{2}\)' formulas [35]. The stringy origin of these formulas, the KLT relations, is deeply connected to thinking about the string worldsheet in a fundamentally _complex_ fashion as a Riemann surface with a complex structure. But there are many reasons why our formulation of uncolored amplitudes will involve a very different sort of treatment. As we alluded to in the introduction, the existence of \(\sigma\) is forced on us in the most elementary way by the structure of the Feynman fan, which has lower-dimensional 'holes' that are beautifully completed by adding in new vectors corresponding to \(\sigma\) particles. This does not remotely have the flavor of 'gravity = gauge\({}^{2}\)'. Moreover, as alluded to in the introduction, the \(u\)-variables central to our story are deeply connected to the string worldsheet (and Teichmüller space), but via _hyperbolic geometry_ and _not_ through the conventional picture of Riemann surfaces with complex structure. All of this dovetails nicely with the many observations, in examples of gravity amplitudes, that there is vastly more structure to gravity amplitudes than is suggested by the 'gravity=gauge\({}^{2}\)' slogan.
The striking way in which \(\sigma\) is forced on us in our story is a new departure point for uncovering more of this hidden structure. Finally, our results here strongly suggest that there is way to describe fundamental particle physics in the real world from a more elementary starting point, with spacetime and quantum mechanics appearing as emergent principles. We believe that we have taken a major new step in this direction with the results we have begun to introduce in this paper. A number of major challenges remain before we can reach this goal. The first is to understand how fermions arise from this new perspective, which has so far only been applied to bosonic scattering. For Standard Model physics, describing chiral fermions will be especially interesting and important. Another challenge is that the key structures in our formulas stem from a fatgraph, which is most immediately connected to the adjoint representation of \(U(N)\) gauge theories. But the quantum numbers of the Standard Model are more interesting. For instance, in the \(SO(10)\) grand unified theory, the matter lives in ten fundamentals (higgses) together with three \(\mathbf{16}\)'s for the fermions. How might the amplitudes for matter in these representations emerge from elementary combinatorial foundations? We especially thank Song He and Thomas Lam for countless stimulating conversations on the topics of this paper over many years. We also thank Sebastian Mizera and Hofie Hannesdottir for many discussions, and Song He, Carolina Figueiredo, Daniel Longenecker, Qu Cao and Jin Dong for ongoing interactions related to the material of this paper over the past year. NAH is supported by the DOE under grant DE-SC0009988; further crucial contributions to his work were made possible by the Carl B. Feinberg cross-disciplinary program in innovation at the IAS. NAH also expresses sincere thanks to HF, PGP, GS and HT for restraining themselves from strangling him during the completion of this work. PGP is supported by ANR grant CHARMS (ANR-19-CE40-0017) and by the Institut Universitaire de France (IUF). PGP worked on this project while participating in _Representation Theory: Combinatorial Aspects and Applications_ at the Centre for Advanced Study, Oslo. HF is supported by Merton College, Oxford. During this project HF received additional support from ERC grant GALOP (ID: 724638). During this project GS was supported by Brown University, Providence, the Perimeter Institute, Waterloo, and the Institute for Advanced Study, Princeton. GS was also funded by the European Union's Horizon 2020 research and innovation programs _Novel structures in scattering amplitudes_ (No. 725110) of Johannes Henn. GS thanks the groups of C. Anastasiou and N. Beisert at ETH Zurich for hospitality during the worst phase of the COVID-19 pandemic. HT was supported by NSERC Discovery Grant RGPIN-2022-03960 and the Canada Research Chairs program, grant number CRC-2021-00120. ## Appendix A Deriving the Curve Integral Formula To see why (116) is correct, let us write the amplitude explicitly. Write \[X_{C}=P_{C}^{2}+m^{2} \tag{117}\] for the propagator factor associated to curve \(C\) (with momentum \(P_{C}^{\mu}\)). Fix some fatgraph \(\Gamma\) with some color factor \(C_{\Gamma}\). 
The associated partial amplitude can be expressed with just one overall loop integration as \[\mathcal{A}=\int\prod_{i=1}^{L}d^{D}\ell_{i}\left(\sum_{\Gamma^{\prime}}\prod_{C}\frac{1}{X_{C}}\right), \tag{A.2}\] where the sum is over exactly one of every fatgraph \(\Gamma^{\prime}\) that has color factor \(C_{\Gamma^{\prime}}=C_{\Gamma}\). The integrand in this formula can be written as an integral over _curve space_, \(V\). To do this, recall that every top dimensional cone of the Feynman fan corresponds to some triangulation of \(\Gamma\). Any vector \(\mathbf{g}\in V_{\Gamma}\) can be expanded as a sum of the generators of the cone that it is in using \[\mathbf{g}=\sum_{C}\alpha_{C}(\mathbf{g})\,\mathbf{g}_{C}, \tag{A.3}\] where \(\alpha_{C}\) are the headlight functions and \(\mathbf{g}_{C}\) are the \(g\)-vectors of the curves, \(C\). Consider the function on \(V\) given by \[Z=\exp\left(-\sum_{C}\alpha_{C}(\mathbf{t})X_{C}\right), \tag{A.4}\] where the sum in the exponent is over all open curves \(C\). Let \(T\) be a triangulation corresponding to some top-dimensional cone, with curves \(C_{1},...,C_{E}\). Restricting \(Z\) to this cone gives \[Z|_{\text{cone}}=\exp\left(-\sum_{i=1}^{E}\alpha_{C_{i}}(\mathbf{t})X_{C_{i}}\right), \tag{A.5}\] which follows from (A.3). Moreover, the generators of this top dimensional cone span a parallelepiped of unit volume, so there exist corresponding coordinates \(y^{\prime}_{1},...,y^{\prime}_{E}\) such that \(d^{E}y=d^{E}y^{\prime}\) and so that any vector in this cone can be written as \[\mathbf{g}=\sum_{i=1}^{E}y^{\prime}_{i}\mathbf{g}_{C_{i}}. \tag{A.6}\] The integral of \(Z\) over this cone is then \[\int\limits_{\text{cone}}d^{E}y\,Z=\int\limits_{\geq 0}d^{E}y^{\prime}\,\exp\left(-\sum_{i=1}^{E}y^{\prime}_{i}X_{C_{i}}\right)=\prod_{i=1}^{E}\frac{1}{X_{C_{i}}}. \tag{A.7}\] It follows from this that the partial amplitude (A.2) can be written as a curve integral over curve space: \[\mathcal{A}=\int\frac{d^{E}\mathbf{t}}{\text{MCG}}\int\prod_{i=1}^{L}d^{D}\ell_{i}\,Z. \tag{A.8}\] In this formula, we integrate over curve space modulo the action of the mapping class group. This ensures that we count each fatgraph \(\Gamma\) only once. We explain how to compute these curve integrals, with non-trivial MCG actions, in Section 8.

## Appendix B Factorization in detail

In the text, the factorization of the curve integral formula for integrands \(\mathcal{I}\) is stated in (6.30). This formula gives the residue of the pole \(1/X_{C}\). To derive the formula, there are two possible cases to consider: either \(C\) is MCG-invariant, or not.

### MCG invariant curve

Suppose \(C\) is MCG-invariant. The \(X_{C}\) pole arises from the part of the integral over the region of curve space where \(\alpha_{C}>0\). Since \(\text{Stab}(C)=\text{MCG}(\Gamma)\), the MCG action has a well-defined restriction to this region and we have a well-defined curve integral \[\mathcal{I}^{\prime}=\int\limits_{\alpha_{C}>0}\frac{d^{E}t}{\text{MCG}}Z. \tag{B.1}\] To compute \(\mathcal{I}^{\prime}\), take a triangulation containing \(C\), with curves \(C,D_{1},...,D_{E-1}\). Take coordinates adapted to this cone: \[\mathbf{g}=t_{C}\mathbf{g}_{C}+\sum_{i=1}^{E-1}t^{\prime}_{i}\mathbf{g}_{D_{i}}. \tag{B.2}\] By the unit volume property, the integration measure is \[d^{E}t=dt_{C}d^{E-1}t^{\prime}. \tag{B.3}\] In these coordinates, the restriction of \(Z\) to this region is \[Z|_{t_{C}>0}=e^{-t_{C}X_{C}}\,\exp\left(-\sum_{D|C}\alpha_{D}X_{D}\right), \tag{B.4}\] where the sum is over \(D\) that do not intersect \(C\).
For these curves, \(\alpha_{D}({\bf g}+{\bf g}_{C})=\alpha_{D}({\bf g})\), so that the only \(t_{C}\)-dependence is in the \(\exp(-t_{C}X_{C})\) factor. Write \(\alpha^{\prime}_{D}=\alpha_{D}|_{t_{C}=0}\), for the headlight functions restricted to \(t_{C}=0\). \(\alpha^{\prime}_{D}\) is the headlight function of \(D\) considered as a curve on the cut fatgraph \(\Gamma_{C}\). The \(t_{C}\) integral gives \[{\cal I}^{\prime}=\frac{1}{X_{C}}\int\frac{d^{E-1}t^{\prime}}{ \text{MCG}}Z_{C}, \tag{100}\] where \[Z_{C}=\exp\left(-\sum_{D|C}\alpha^{\prime}_{D}X_{D}\right). \tag{101}\] The full curve integral \({\cal I}\) is \({\cal I}={\cal I}^{\prime}+\dots\), where the \(\dots\) has no \(X_{C}\) pole. So \[\text{Res}_{X_{C}=0}I=\int\frac{d^{E-1}t^{\prime}}{\text{MCG}}Z_{C}, \tag{102}\] where, on the RHS, \(P_{C}^{\mu}\) is put on shell (\(X_{C}\to 0\)). ### MCG non-invariant curve If \(\text{Stab}(C)<\text{MCG}\), we can use a Mirzakhani kernel to evaluate the \(1/X_{C}\) pole. We choose \(C\) as one of the coset representatives, so that the Mirzakhani kernel is \[{\cal K}=\frac{\alpha_{C}}{\rho}+\dots. \tag{103}\] Then \[\int\frac{d^{E}t}{\text{MCG}}Z=\int\frac{d^{E}t}{\text{Stab}C} \,\frac{\alpha_{C}}{\rho}Z+\dots, \tag{104}\] where the \(\dots\) are all terms without a \(1/X_{C}\) pole. To guarantee that \(X_{C}\) only appears in the first term, we can choose the other coset representatives \(C_{1},...,C_{L-1}\) so that all of these are curves that intersect \(C\). We can put the \(1/\rho\) in the numerator, by introducing an auxiliary integration variable \(\xi\): \[\int\frac{d^{E}t}{\text{MCG}}Z=\int\limits_{0}^{\infty}d\xi\int \frac{d^{E}t}{\text{Stab}(C)}\,\alpha_{C}e^{-\xi\rho}Z+\dots. \tag{105}\] Changing variables as before, and integrating over \(t_{C}\) gives \[\int\frac{d^{E}t}{\text{MCG}}Z=\int\limits_{0}^{\infty}d\xi\frac{ -1}{(X_{C}+\xi)^{2}}\int\frac{d^{E-1}t^{\prime}}{\text{Stab}(C)}\,Z^{\prime}+\dots, \tag{106}\] where \(Z^{\prime}\) is obtained from \(Z\) by shifting \(X_{D}\mapsto X_{D}+\xi\) for all \(D\) in the Mirzakhani set. Finally, integrating over \(\xi\), and using \[\prod_{i=1}^{m}\frac{1}{X_{i}+\xi}=\sum_{i=1}^{m}\frac{1}{X_{i}+ \xi}\prod_{j\neq i}\frac{1}{X_{j}-X_{i}}, \tag{107}\] we find \[\int\frac{d^{E}t}{\text{MCG}}Z\rightarrow\frac{1}{X_{C}}\int\frac{d^{E-1}t^{ \prime}}{\text{Stab}(C)}\,Z_{C}+\dots, \tag{111}\] where \(-\log Z_{C}\) is the curve action given by summing over all curves, \(D\), compatible with \(C\): \[-\log Z_{C}=\sum_{D}\alpha_{D}X_{D}. \tag{112}\] Note that this calculation does not apply if the integrand has higher poles in \(X_{C}\), such as if \(X_{C}\) is a bubble propagator for a planar diagram. ## Appendix C The Surface Symanzik polynomials Fixing an assignment of momenta to the curves gives explicit formulas for the all the propagator factors \[X_{C}=\left(K_{C}^{\mu}+\sum_{a=1}^{L}h_{C}^{a}\ell_{a}^{\mu}\right)^{2}+m^{2}, \tag{113}\] in terms of one set of loop momentum variables \(\ell_{a}^{\mu}\). 
In terms of these loop variables, the curve action, \[-\log Z=\sum_{C}\alpha_{C}X_{C}, \tag{114}\] becomes \[-\log Z=\ell_{a}^{\mu}A^{ab}\ell_{b}^{\mu}+2B_{\mu}^{a}\ell_{a}^{\mu}+{\cal Z}, \tag{115}\] where \(A,B,{\cal Z}\) are all linear functions in the generalised Schwinger parameters: \[A^{ab} =\sum_{C}h_{C}^{a}h_{C}^{b}\alpha_{C} \tag{116}\] \[B_{\mu}^{a} =\sum_{C}h_{C}^{a}\alpha_{C}K_{C\,\mu} \tag{117}\] \[{\cal Z} =\sum_{C}\alpha_{C}(K_{C}^{2}+m^{2}) \tag{118}\] Performing the Gaussian integral over the \(\ell_{a}\) variables, in \(D\) dimensions, gives \[{\cal A}=\int\frac{d^{E}{\bf t}}{\text{MCG}}\,\left(\frac{\pi^{L}}{\det A}\right)^{\frac{D}{2}}\exp\left(B^{T}A^{-1}B-{\cal Z}\right). \tag{119}\] So we identify the surface Symanzik polynomials: \[{\cal U}=\det A,\qquad\text{and}\qquad\frac{{\cal F}_{0}}{{\cal U}}=B^{T}A^{-1}B. \tag{120}\] These are the formulas used in the main text. In this appendix, we consider the explicit expansions of \({\cal U}\) and \({\cal F}_{0}\) in monomials.

### The first surface Symanzik

Since \(A^{ab}\) is linear in the parameters \(\alpha_{C}\), the determinant \(\det A\) is homogeneous of degree \(L\). For a set of curves \(S=\{C_{1},...,C_{L}\}\), let us find the coefficient in \(\det A\) of the monomial \[\alpha_{S}=\prod\alpha_{C_{i}}. \tag{112}\] By the definition of the determinant, this coefficient is \[\det A=\ldots+\alpha_{S}\,\left(\det\left.h\right|_{S}\right)^{2}+\ldots\,, \tag{113}\] where \[\det\left.h\right|_{S}=\epsilon_{i_{1}...i_{L}}h^{i_{1}}_{C_{1}}...h^{i_{L}}_{C_{L}}. \tag{114}\] Note that the ordering of the curves \(C_{1},...,C_{L}\) does not matter, because this determinant only enters the formula for \(\det A\) as a square. We now make two observations. Firstly, \(\det h|_{S}\) is only non-zero if the curves in \(S\) cut \(\Gamma\) to a tree graph. Secondly, for any conventional choice of loop variables (defined below), the determinants \(\det h|_{S}\) are all either \(0\) or \(\pm 1\). So the result is that \(\mathcal{U}\) is given by \[\mathcal{U}=\sum_{\begin{subarray}{c}S\text{ cuts }\Gamma\\ \text{to tree}\end{subarray}}\alpha_{S}. \tag{115}\] For the first statement, consider \(L=1\). Then all curves have momenta of the form \[P_{C}=h^{1}_{C}\ell_{1}+K^{\mu}_{C}. \tag{116}\] If \(h^{1}_{C}=0\), cutting \(\Sigma\) along \(C\) breaks it into two parts: one part with \(L=1\), and a second part with \(L=0\) (i.e. a disk). Whereas, if \(h^{1}_{C}\neq 0\), cutting \(\Gamma\) along \(C\) cuts the loop open, giving a new surface with \(L=0\) (i.e. a disk). So at 1-loop the first Symanzik polynomial is \[\mathcal{U}=\sum_{\begin{subarray}{c}C\text{ cuts }\Gamma\\ \text{to tree}\end{subarray}}\alpha_{C}\,\left(h^{1}_{C}\right)^{2}. \tag{117}\] For \(L>1\), the determinant \(\det\left.h\right|_{S}\) is nonzero if and only if the linear transformation (in \(H_{1}(\Gamma,\partial\Gamma)\)) from \([L_{1}],...,[L_{L}]\) to \([C_{1}],...,[C_{L}]\) is invertible. By induction from the \(L=1\) case, this means that the curves in \(S\) cut \(\Gamma\) to a disk. So \[\mathcal{U}=\sum_{\begin{subarray}{c}S\text{ cuts }\Gamma\\ \text{to tree}\end{subarray}}\alpha_{S}\,\left(\det\left.h\right|_{S}\right)^{2}. \tag{118}\] Secondly, it turns out that \((\det h|_{S})^{2}\) is either \(0\) or \(1\). We sketch how to prove this by fixing any genus \(g\) fatgraph with \(h\) trace-factor components. The loop order of such a fatgraph is \[L=2g+h-1.
\tag{119}\] A natural basis of loop-carrying curves can be given by picking some \(2g\) curves \(A_{i},B_{i}\) wrapping the \(A,B\)-cycles of the graph, and \(h-1\) curves \(C_{i}\) connecting the \(h\) trace factors. These give a set, \(S\), of \(L\) cures that cut \(\Gamma\) to a tree, so \((\det h|_{S})^{2}\)=1. Moreover, we can choose our momentum assignment such that \[P_{A_{i}}=\ell_{2i-1},\qquad P_{B_{i}}=\ell_{2i},\qquad P_{C_{i}}=\ell_{2g+i}. \tag{120}\] Now consider the momenta of Dehn twists of these curves. For instance, taking one of the \(C_{i}\), a Dehn twist \(\gamma\) around one of its trace-factors gives a new curve \[P_{\gamma C_{i}}=P_{C_{i}}\pm k_{\text{tf}}, \tag{108}\] where \(k_{\text{tf}}\) is the total momentum of the trace factor. Moreover, any product of Dehn twists acting on a pair of A,B-cycles acts on their momenta as \(\text{SL}_{2}\mathbb{Z}\): \[\begin{bmatrix}\ell_{2i-1}\\ \ell_{2i}\end{bmatrix}\mapsto X\begin{bmatrix}\ell_{2i-1}\\ \ell_{2i}\end{bmatrix}, \tag{109}\] for some \(X\in\text{SL}_{2}\mathbb{Z}\). In this way, we find that the momenta of any set, \(S^{\prime}\), that cuts \(\Gamma\) to a tree, is obtained from the momenta of \(S\) via translations by non-loop momenta, and \(\text{SL}_{2}\mathbb{Z}\) transformations. Both of which leave the determinant unchanged: \[(\det h|_{S^{\prime}})^{2}=(\det h|_{S})^{2}=1. \tag{110}\] ### The second surface Symanzik The second surface Symanzik polynomial is \[\frac{\mathcal{F}_{0}}{\mathcal{U}}=B^{T}A^{-1}B. \tag{111}\] The Laplace formula evaluates the inverse as \[\left(A^{-1}\right)^{ij}=\frac{(-1)^{i+j}}{\det A}|A|^{ij}, \tag{112}\] where \(|A|^{ij}\) is the \(i,j\) minor. Since \(\mathcal{U}=\det A\), \[\mathcal{F}_{0}=2\sum_{C,D}\alpha_{C}\alpha_{D}K_{C}\cdot K_{D}\sum_{i,j}(-1)^ {i+j}h_{C}^{i}h_{D}^{j}|A|_{ij}. \tag{113}\] As above, again write \(S=\{C_{1},...,C_{L}\}\) for a set of \(L\) curves and \(\alpha_{S}\) for the associated monomial. The minors of \(A\) are \[|A|_{ij}=\sum_{S}\sum_{C\in S}\frac{\alpha_{S}}{\alpha_{C}}\,|h_{S}|_{C}^{i}|h _{S}|_{C}^{j}, \tag{114}\] where \(|h_{S}|_{C}^{i}\) is the \((i,C)\) minor of the matrix \(h|_{S}=[h_{C_{1}}^{i}|...|h_{C_{L}}^{i}]\). By the definition of the determinant, \[\sum_{i=1}^{L}(-1)^{i}h_{D}^{i}|h_{S}|_{C}^{i}=\det h_{S_{C\to D}}, \tag{115}\] where \(S_{C\to D}\) is the set obtained from \(S\) by replacing \(C\) with \(D\). Substituting (114) into (113), and using the identity (115), gives (after reordering the summations) \[\mathcal{F}_{0}=2\sum_{\begin{subarray}{c}\mathcal{S}^{\prime}\\ |\mathcal{S}^{\prime}|=L+1\end{subarray}}\alpha_{\mathcal{S}^{\prime}}\left( \sum_{C\in\mathcal{S}^{\prime}}\left(\det h_{S^{\prime}\setminus C}\right)K_{ C}^{\mu}\right)^{2}, \tag{116}\] where the sum is restricted to sets of \(L+1\) curves \(\mathcal{S}^{\prime}\) such that _any_\(L\) subset of \(\mathcal{S}^{\prime}\) gives a nonvanishing determinant \(\det h_{S^{\prime}\setminus C}\). We make three observations to simplify this formula. First, by the previous section, any \(L\)-subset of \(S^{\prime}\) that has nonvanishing determinant cuts \(\Gamma\) to a tree graph. It follows that the sum in this formula is over sets \(\mathcal{S}^{\prime}\) that _factorize_\(\Gamma\) into two trees! Secondly, by the previous subsection, since each of the sets \(S^{\prime}\backslash C\) cuts \(\Gamma\) to a tree, the determinants are all \[\det h_{S^{\prime}\backslash C}=\pm 1. 
\tag{102}\] In fact, finally, note that both the vectors \(h_{C}^{i}\) and the momenta \(K_{C}^{\mu}\) are defined with respect to an orientation of \(C\). For any subset \(\mathcal{S}^{\prime}\), these orientations can be chosen so that all the determinants \(\det h_{S^{\prime}\backslash C}\) are positive (say). For this choice, \[\det h_{S^{\prime}\backslash C}=1. \tag{103}\] Combining these three observations, the final formula for \(\mathcal{F}_{0}\) is \[\mathcal{F}_{0}=\sum_{\begin{subarray}{c}S^{\prime}\text{ cuts }\Gamma\\ \text{to 2 trees}\end{subarray}}\alpha_{S^{\prime}}\left(\sum_{C\in S^{ \prime}}K_{C}^{\mu}\right)^{2}, \tag{104}\] for an allowed choice of orientations of the momenta \(K_{C}\). ## Appendix D The Recursion Formula For a fatgraph \(\Gamma\), the curve integral for integrands is \[\mathcal{I}=\int\frac{d^{E}t}{\text{MCG}}Z, \tag{105}\] with \[-\log Z=\sum_{C}\alpha_{C}X_{C}. \tag{106}\] For some trace factor \(\beta\) of \(\Gamma\), we have the set of curves \(\mathcal{S}\) that have one or two endpoints in \(\beta\). Under the MCG, this set has some, say \(k\), coset representatives, \(C_{i}\) (\(i=1,\ldots,k\)). Then \[\mathcal{I}=\int\frac{d^{E}t}{\text{MCG}}Z=\sum_{i=1}^{k}\int\frac{d^{E}t}{ \text{Stab}(C_{i})}\frac{\alpha_{C_{i}}}{\rho}Z, \tag{107}\] where \[\rho=\sum_{C\in\mathcal{S}}\alpha_{C}. \tag{108}\] Introducing an auxiliary parameter, \(\xi\), we re-write this as \[\mathcal{I}=\sum_{i=1}^{k}\int\limits_{0}^{\infty}d\xi\int\frac{d^{E}t}{ \text{MCG}}\,\alpha_{C_{i}}Z(\xi). \tag{109}\] where the new integrand is \[-\log Z(\xi)=\sum_{C\in\mathcal{S}}\alpha_{C}(X_{C}+\xi)+\sum_{D\not\in \mathcal{S}}\alpha_{D}X_{D}. \tag{110}\] Integrating over the \(\alpha_{C_{i}}\) direction in each term curve integral gives \[\mathcal{I}=\sum_{i=1}^{k}\int\limits_{0}^{\infty}d\xi\frac{-1}{(X_{C_{i}}+\xi) ^{2}}\int\frac{d^{n-1}t^{\prime}}{\text{Stab}(C_{i})}\,Z^{\prime}(\xi), \tag{102}\] where \[-\log Z^{\prime}(\xi)=\sum_{C\in\mathcal{S},C\neq C_{i}}\alpha^{\prime}_{C}(X_ {C}+\xi)+\sum_{D\not\in\mathcal{S}}\alpha^{\prime}_{D}X_{D}, \tag{103}\] and \(\alpha^{\prime}_{C}\) are the headlight functions obtained after integrating out the \(\mathbf{g}_{C_{i}}\) direction. These are the headlight functions for the fatgraph \(\Gamma_{C_{i}}\) obtained by cutting along \(C_{i}\). Note that we can evaluate the \(\xi\) integral using identities such as \[\prod_{i=1}^{m}\frac{1}{X_{i}+t}=\sum_{i=1}^{m}\frac{1}{X_{i}+t} \prod_{j\neq i}\frac{1}{X_{j}-X_{i}}. \tag{104}\] When all the \(X_{C}\) propagator factors are distinct (i.e. there are no higher poles), we can perform the integral to find \[\mathcal{I}=\sum_{i=1}^{k}\frac{1}{X_{C_{i}}}\int\frac{d^{n-1}t^{ \prime}}{\text{Stab}(C_{i})}\,Z^{\prime}(-X_{C_{i}}), \tag{105}\] ## Appendix E Recursion Examples ### The 3-point non-planar 1-loop amplitude Take \(\Gamma\) to be the 3-point non-planar 1-loop diagram considered in Section 9.1. The curves are \(C^{n}_{12},C^{n}_{13},C_{22},C_{33}\). For the Mirzakhani method, we have two cosets, with representatives \(C^{0}_{12},C^{0}_{13}\). Cutting \(\Gamma\) along \(C^{0}_{12}\) gives a 5-point tree fatgraph \(\Gamma_{C^{0}_{12}}\). The curves compatible with \(C^{0}_{12}\) are \[C^{1}_{12},C^{0}_{13},C^{-1}_{12},C^{-1}_{13},C_{22}. \tag{106}\] The global forward limit then computes \(I_{\Gamma}\) as \[I_{\Gamma}=\frac{1}{X^{0}_{12}}I_{\Gamma_{C^{0}_{12}}}(X^{1}_{12} -X^{0}_{12},X^{0}_{13}-X^{0}_{12},X^{-1}_{12}-X^{0}_{12},X^{-1}_{13}-X^{0}_{1 2},X_{22})+(2\leftrightarrow 3). 
\tag{107}\] But the 5-point tree amplitude is \[I(X_{1},X_{2},X_{3},X_{4},X_{5})=\sum_{i=1}^{5}\frac{1}{X_{i}X_{ i+1}}. \tag{108}\] So the integrand is \[I_{\Gamma}=\frac{1}{X^{0}_{12}(X^{1}_{12}-X^{0}_{12})(X^{0}_{13} -X^{0}_{12})}+\frac{1}{X^{0}_{12}(X^{0}_{13}-X^{0}_{12})(X^{-1}_{12}-X^{0}_{1 2})}+\frac{1}{X^{0}_{12}(X^{-1}_{12}-X^{0}_{12})(X^{-1}_{13}-X^{0}_{12})}\\ +\frac{1}{X^{0}_{12}(X^{-1}_{13}-X^{0}_{12})X_{22}}+\frac{1}{X^{0 }_{12}X_{22}(X^{1}_{12}-X^{0}_{12})}+(2\leftrightarrow 3). \tag{109}\] The momenta are explicitly \[P^{n}_{12}=\ell+nk_{1},\qquad P^{n}_{13}=\ell+k_{2}+nk_{1},\qquad P _{22}=k_{1},\qquad P_{33}=k_{1}+k_{2}. \tag{110}\] ### The 2-loop vacuum at genus one The 2-loop genus 1 vacuum amplitude has already been computed in Section 9.2. Take again to be the 2-loop genus one vacuum diagram. The curves of are, with momentum (111) Every curve is in the same MCG-orbit. Pick, say, as the coset representative. The curves compatible with are for. Cutting along along -loop non-planar diagram, and the curves can be identified with the curves we called'in the previous example. Applying the global forward limit once gives (112) However, we have already computed the 1-loop non-planar integrand, and found, up to loop-momentum shifts, that it is given by (113) Using this result in (112) gives (114) Loop re-definitions of and can be used to cyclically permute the labels. Summing over the possible three cyclic permutations (and dividing by 3) gives (115) The factor is expected because. We therefore recover of the Feynman integral of the sunrise vacuum diagram. ### A comment on the 1-loop planar amplitudes Our formula for the 1-loop planar amplitudes can be computed directly, without topological recursion. The global Schwinger formula gives a well defined loop integrand for these amplitudes, without linearized propagators. However, we can arrive at a forward-limit-like formula for the 1-loop integrand by inserting the 'trivial' Mirzakhani kernel (116) into the curve integral. Here, is the headlight function of, the curve from to the internal loop boundary,. Equation (116) then allows us to write the 1-loop planar n-point amplitude as a sum of disk amplitudes, with linearized propagators. Evaluating the integral, using the recursion (10.4), the integrand is (117) where are the tree-level partial amplitudes, but now with linearized propagators. Details for the non-planar 1-loop propagator The matrix for the curves \(C_{n}\) with \(n\geq 0\) is \[M_{n}=LD_{x}(LD_{y}RD_{x})^{n}R. \tag{102}\] Taking the transpose, we see that \(M_{n}^{T}=M_{n}\). In particular, \[M_{0}=\begin{bmatrix}1&1\\ 1&1+x\end{bmatrix}. \tag{103}\] Given \(M_{0}\), we can compute \(M_{n}\) using \[M_{n+1}=M_{n}B_{+1},\qquad\text{where}\ \ B_{+1}=R^{-1}LD_{y}RD_{x}R= \begin{bmatrix}0&-xy\\ 1&1+x+xy\end{bmatrix}. \tag{104}\] It follows that we can write \[M_{n}=\begin{bmatrix}F_{n-2}&F_{n-1}\\ F_{n-1}&F_{n}\end{bmatrix}, \tag{105}\] where \[F_{n+2}=(1+x+xy)F_{n+1}-xyF_{n}, \tag{106}\] with initial conditions \(F_{-2}=1,F_{-1}=1\). The first few examples are \[F_{0} =1+x, \tag{107}\] \[F_{1} =1+2x+x^{2}+x^{2}y,\] (108) \[F_{2} =1+3x+3x^{2}+x^{3}+2x^{2}y+2x^{3}y+x^{3}y^{2}. \tag{109}\] Similarly, the matrix for the curves \(C_{n}\) with \(n<0\) is given by \[M_{n}=RD_{y}(RD_{x}LD_{y})^{-n-1}L,\qquad n<0. \tag{110}\] These matrices are again symmetric, and \[M_{-1}=\begin{bmatrix}1+y&y\\ y&y\end{bmatrix}. 
\tag{111}\] We can evaluate \(M_{n}\) using \[M_{n-1}=M_{n}B_{-1},\qquad\text{where}\ B_{-1}=L^{-1}RD_{x}LD_{y} L=\begin{bmatrix}1+x+xy&xy\\ -1&0\end{bmatrix}. \tag{112}\] This implies that \(M_{n}\) (\(n<0\)) has the form, \[M_{n}=\begin{bmatrix}G_{n}&xyG_{n+1}\\ xyG_{n+1}&(xy)^{2}G_{n+2}\end{bmatrix}, \tag{113}\] where the polynomials \(G_{n}\) are determined by the recursion \[G_{n}=(1+x+xy)G_{n+1}-xyG_{n+2}, \tag{114}\] with initial condition \(G_{1}=1/(x^{2}y)\) and \(G_{0}=1/x\). The first few polynomials are \[G_{-1} =1+y, \tag{111}\] \[G_{-2} =1+x+2xy+xy^{2},\] (112) \[G_{-3} =(1+x+xy)^{2}+x^{2}y(1+y)^{2}. \tag{113}\] We now need to compute the tropicalizations of the polynomials \(F_{n}\) (\(n\geq-2\)) and \(G_{n}\) (\(n\leq 1\)). Write \[f_{n}=\text{Trop }F_{n},\qquad\text{and}\qquad g_{n}=\text{Trop }G_{n}. \tag{114}\] Then, for \(n\geq 0\), we find \[f_{n}=\max(0,(n+1)x,(n+1)x+ny), \tag{115}\] which follows by induction using that \[f_{n+2}=\max(\max(0,x,x+y)+f_{n+1},\max(0,x+y)+f_{n}). \tag{116}\] Similarly, for \(n\leq-1\), \[g_{n}=\max(0,-(n+1)x,-(n+1)x-ny). \tag{117}\] We also have that \[f_{-2}=0,\ \ f_{-1}=0,\ \ g_{1}=-2x-y,\ \ g_{0}=-x. \tag{118}\] The headlight functions are \[\alpha_{n} =-f_{n}+2f_{n-1}-f_{n-2},\qquad n\geq 0, \tag{119}\] \[\alpha_{n} =-g_{n}+2g_{n+1}-g_{n+2},\qquad n<0. \tag{120}\]
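As a concrete cross-check of the formulas above, the short script below (an illustrative aid, not part of the derivation; all function and variable names are ours) builds the polynomials \(F_{n}\) from the recursion and verifies numerically that their tropicalizations agree with the stated closed form \(f_{n}=\max(0,(n+1)x,(n+1)x+ny)\); the headlight functions \(\alpha_{n}\) then follow from the difference formula given above.

```python
import random

def F_polys(N):
    """Exponent dictionaries {(a, b): coeff} for F_n(x, y), n = -2..N, built from
    F_{n+2} = (1 + x + x*y) F_{n+1} - x*y F_n with F_{-2} = F_{-1} = 1."""
    polys = {-2: {(0, 0): 1}, -1: {(0, 0): 1}}
    for n in range(0, N + 1):
        new = {}
        for (a, b), c in polys[n - 1].items():          # (1 + x + x y) * F_{n+1}
            for da, db in [(0, 0), (1, 0), (1, 1)]:
                new[(a + da, b + db)] = new.get((a + da, b + db), 0) + c
        for (a, b), c in polys[n - 2].items():          # - x y * F_n
            new[(a + 1, b + 1)] = new.get((a + 1, b + 1), 0) - c
        polys[n] = {k: v for k, v in new.items() if v != 0}
    return polys

def trop(poly, x, y):
    """Tropicalization: max of a*x + b*y over the monomials x^a y^b of the polynomial
    (the surviving coefficients here are positive)."""
    return max(a * x + b * y for (a, b) in poly)

def f_closed(n, x, y):
    return max(0.0, (n + 1) * x, (n + 1) * x + n * y)

polys = F_polys(8)
for _ in range(500):
    x, y = random.uniform(-3, 3), random.uniform(-3, 3)
    assert all(abs(trop(polys[n], x, y) - f_closed(n, x, y)) < 1e-9 for n in range(9))

# headlight functions from the difference formula, e.g. alpha_2 at a sample point,
# using f_{-1} = f_{-2} = 0 as stated above:
f = lambda n, x, y: 0.0 if n < 0 else f_closed(n, x, y)
alpha_2 = -f(2, 0.7, -1.3) + 2 * f(1, 0.7, -1.3) - f(0, 0.7, -1.3)
```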
2307.01212
Of Spiky SVDs and Music Recommendation
The truncated singular value decomposition is a widely used methodology in music recommendation for direct similar-item retrieval or embedding musical items for downstream tasks. This paper investigates a curious effect that we show naturally occurring on many recommendation datasets: spiking formations in the embedding space. We first propose a metric to quantify this spiking organization's strength, then mathematically prove its origin tied to underlying communities of items of varying internal popularity. With this new-found theoretical understanding, we finally open the topic with an industrial use case of estimating how music embeddings' top-k similar items will change over time under the addition of data.
Darius Afchar, Romain Hennequin, Vincent Guigue
2023-06-30T15:19:33Z
http://arxiv.org/abs/2307.01212v1
# Of Spiky SVDs and Music Recommendation

###### Abstract

The truncated singular value decomposition is a widely used methodology in music recommendation, for direct similar-item retrieval and for embedding musical items for downstream tasks. This paper investigates a curious effect that we show naturally occurring on many recommendation datasets: spiking formations in the embedding space. We first propose a metric to quantify this spiking organization's strength, then mathematically prove its origin tied to underlying communities of items of varying internal popularity. With this new-found theoretical understanding, we finally open the topic with an industrial use case of estimating how music embeddings' top-k similar items will change over time under the addition of data.

This spiking effect is understudied and goes unnoticed when using the cosine distance for similarity, since the normalization squashes all points onto a hypersphere (Srivastava et al., 2017). However, we show that leveraging the norm of vectors may be more insightful than it first appears and that they should not be discarded. In detail, we prove that spikes represent communities, and embeddings' norms stem from their varying importance within that community (_i.e.,_ intra-popularity). These two latter features are notably prevalent in music data (Krause et al., 2019; Krizhevsky et al., 2019) and substantiate the need for theoretical support in this context. As an opening to our results, we show that the norm is strongly indicative of the stability of embeddings, which has industrial implications on how music track representations evolve through time. To our knowledge, this spiking behavior has not been leveraged for recommendation before.

The paper proceeds as follows: we first propose a unified metric to quantify this spiking effect, then we mathematically prove the equivalence of spiking geometry with the presence of communities and intra-popularity within each community. We discuss and open our formalization with a practical case of evaluating the stability of music embeddings. All experiments may be reproduced via our code repository at [https://github.com/deezer/spiky_svd](https://github.com/deezer/spiky_svd).

## 2. Geometry of the spikes

We present and compute SVDs on several datasets and show the emergence of spike formations. We then propose establishing a measure of the "spikiness" of a distribution of embeddings, to quantify the strength of this effect.

Figure 1. Naturally emerging “spikes” formations across several independently computed SVDs. Since the embedding vectors are high-dimensional, we can only display some projections of the points on slices of dimensions (indicated above).

### Background on the SVD

We first provide the reader with a brief theoretical background. The _Truncated Singular Value Decomposition_ (SVD) is a factorization technique that aims to approximate a matrix \(M\in\mathbb{R}^{n\times m}\) as the product of three matrices \(U\in\mathbb{R}^{n\times f}\),
The "truncated" term refers to the optimal solution for \(\hat{M}\) where only the \(f\) largest singular values and eigenvectors of \(M\) are retained for the decomposition (Eckart-Young-Mirsky theorem (Eckart and Young, 1970)). In the context of music collaborative filtering, several remarks can be made. \(M\) is often a large sparse matrix (\(n\) and \(m\) are frequently counted in millions) and often either denotes track-user, artist-user, or track-playlist interactions. \(M\) is often positive, and sometimes additionally binary (_e.g._, with the formalism of implicit feedbacks (Srivastava et al., 2017)). Since the goal is often to leverage \(\hat{M}\) for similar item retrieval rather than rebuilding \(M\), the term \(V\) is often ignored, and only the similarities between each row of \(U\) or \(U\Sigma\) are considered. The close-form solution to the SVD is such that \(U\) is the \(f\) dominant eigenvectors of \(MM^{*}\). Therefore, it does not matter whether we compute the SVD on \(M\) or \(MM^{*}\) to retrieve \(U\). With an abuse of notations, we identify \(M\) to its symmetrized version \(MM^{*}\) since it does not impact the resulting \(U\). ### Datasets In order to demonstrate that spiking SVDs are not tied to a particular setting, we use six recommendation datasets: Spotify Million Playlist (Paylist, 2017) (Spotify), The Art of the Mix (Mikrishnan, 2017) (AoTM-2011), Cornell's playlists (Cornell, 2017), LastFM-360K (Cornell, 2017). We also include the classic MovieLens-25M dataset (Mikrishnan, 2017) to hint at an extension of our result to other settings, and a private dataset from _Deezer_ containing twelve SVDs computed over one year in an industrial context. This latter dataset will be further exploited in section 4 to evaluate the stability of embedding representations. We compute SVDs using the same setting for all six datasets. We use the symmetrized version \(MM^{*}\) of the matrix. To curb popularity biases, denoise, and sparsify the matrix, we filter the top-\(k\) highest non-null interactions per item. As commonly done in information retrieval, we apply a _positive pointwise mutual information_ (PPMI) normalization to the resulting matrix, which further helps with popularity biases and retrieval performances (Krizhevsky et al., 2017) and has been shown to be equivalent to modern skip-gram formulations with negative sampling (Krizhevsky et al., 2017), which is common for music embeddings (Beng et al., 2016; Chen et al., 2017; Chen et al., 2017). We choose \(f=128\) for the decomposition and take \(E=U\Sigma\) as the resulting embeddings2. The rows of \(E=(e_{1},...e_{n})\) provide a latent representation for each item. Footnote 2: Any other choice of normalization as \(U\Sigma P\) with \(p\in[0,1]\) simply scales the embedding space differently but does not impact the presence of spikes. The obtained SVDs are displayed in Figure 1. Since we cannot lay out all 128 dimensions in a single figure, we report curious readers to our repository for a detailed visualization. As a general comment, it seems that the spikiness of embeddings is stronger on dominant dimensions (_i.e._, eigenvectors associated with the higher singular values and hence the first indexes) and tends to degrade to a blurry cloud of points in the higher indexes. This seems to align with the literature on robust SVD (Srivastava et al., 2017) that dominant components first reconstruct underlying structures and then reconstruct noisy perturbations. 
The _Deezer_ and _Spotify_ datasets exhibit the cleanest spikes overall. ### Spikiness metric We could not find an off-the-shelf definition of spikes of points that worked in high-dimension3. In our context, it seems suited to study the embeddings with the highest norms as many candidate peaks for spikes. We can then measure whether many other points with a lower norm are collinear to them4. If a small number of peaks are collinear to most of the rest of the distribution, it means that the distribution is spiky overall. Since the distributions are noisy, we must instead rely on approximate collinearity. We set some thresholds: \(\theta\) and \(\rho\). We then denote by \(e^{*}\) a given embedding with a high norm and say that any other vector \(e\) belongs to the spike if \(\cos(e^{*},e)>\cos(\theta)\). We iterate for candidate peaks \(e^{*}\) in descending order of norm until we have captured a ratio \(\rho\) of points in the distribution. This can be seen as a greedy heuristic to the partial set cover problem5. The obtained number of spikes is divided by \(n\) - the total number of embeddings - and denoted Spk. We have \(0<\text{Spk}\leq\rho\), where the lower bound occurs when all vectors form a single spike, and the upper bound means that no two points are collinear. Footnote 5:... which is notoriously NP-complete in high dimension and thus could not be solved exactly with our number of considered points. The results are given in Table 1 for \(\cos\theta=0.9\) and \(\rho=50\%\). Though this measure is computed with a heuristic and constitutes an upper bound to possibly more optimal choices of peaks to cover the distribution, the obtained results seem consistent with our intuitions on the spikiness of each SVD. The _Deezer_ and _Spotify_ come up as the most spiky. To fully spell out their obtained result, half of the millions of embedding vectors' directions - that could be arbitrarily anything in 128 dimensions - fall under one of 361 and 1395 spikes. We also confirm that the embeddings are spikier in their first dimensions and then exhibit a sudden increase of Spk when the noise dominates in later dimensions. We have shown a systematic spiking effect happening over multiple computed SVDs. The result that a significantly low number of spikes may cluster half of the music tracks is highly reminiscent of _spectral clustering_ (but with spikes) and hints at the presence of structured groups in the data. ## 3. Modeling We next demonstrate a formal equivalence between the spiking structure in the SVD-embedding space and the presence of communities of varying node degrees in the graph associated with \(\hat{M}\). This enables us to get network properties from the embeddings without explicitly building a graph, for which we show a practical impact for recommendations. ### Preliminaries We define some notations and outline some basic properties to be used in the next section. #### 3.1.1. Spikes in \(M\) As argued in section 2.1, we may assume \(M\) to be symmetric since we only seek to retrieve similarities from the "left side" of \(M\). We denote by \(S^{n}(\mathbb{R}_{+})\) the space of symmetric matrices of \(\mathbb{R}_{+}^{n\times n}\). Applying the spectral theorem, \(M\) is diagonalizable as \(PDP^{*}\), with \(P\) an orthonormal basis of eigenvectors. We rank the eigenvalues (\(d_{i}\)) and corresponding vectors in decreasing order of absolute value. 
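(As an aside, the greedy Spk estimate of Section 2.3 can be sketched in a few lines of Python; the version below is an illustrative simplification - a brute-force \(O(n^{2})\) cosine computation - rather than the exact code used for Table 1.)

```python
import numpy as np

def spk(E, cos_theta=0.9, rho=0.5):
    """Greedy estimate of Spk: walk candidate peaks by decreasing norm and count
    how many are needed to cover a fraction rho of all points within angle theta."""
    norms = np.linalg.norm(E, axis=1)
    directions = E / np.maximum(norms, 1e-12)[:, None]   # unit vectors
    order = np.argsort(norms)[::-1]                       # peaks tried by decreasing norm
    covered = np.zeros(len(E), dtype=bool)
    target = int(rho * len(E))
    n_spikes = 0
    for i in order:
        if covered.sum() >= target:
            break
        if covered[i]:
            continue
        cos = directions @ directions[i]                  # cosines with candidate peak
        covered |= cos > cos_theta
        n_spikes += 1
    return n_spikes / len(E)
```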
We then have two useful lemmas:

**Lemma 3.1**.: _Matrix \(M\) has a truncated SVD given by \(\hat{M}=U\Sigma V^{*}\) such that:_ \[U=P_{1..f}\qquad\Sigma=|D_{1..f,1..f}|=diag(\sigma_{1},...\sigma_{f})\qquad V=P_{1..f}\,sign(D_{1..f,1..f})\] _where \(|.|\) denotes the term-by-term absolute value and sign\(()\) the sign function, since the SVD requires \(\Sigma\) to be positive._

Proof.: We identify a desired decomposition. Note that the unitary eigenvectors of \(P\) are not uniquely defined, and thus neither is the decomposition.

\begin{table} \begin{tabular}{l|c c c c c c|c} \hline \hline Dataset & Deezer & Spotify & AoTM-2011 & Cornell & LastFM-360K & Movielens-25M & \(\mathcal{N}(0,1)\) \\ \hline Spk@128 & 0.04\% & 0.14\% & 18.1\% & 4.55\% & 2.42\% & 7.09\% & 50\% = \(\rho\) \\ Spk@64 & 150/\(n<0.01\%\) & 228/\(n<0.03\%\) & 11.4\% & 1.59\% & 0.49\% & 3.05\% & 50\% \\ Spk@32 & 50/\(n<0.01\%\) & 55/\(n<0.01\%\) & 2.31\% & 0.49\% & 0.14\% & 1.12\% & 50\% \\ Spk@16 & 23/\(n<0.01\%\) & 23/\(n<0.01\%\) & 0.93\% & 0.09\% & 0.06\% & 0.02\% & 49.4\% \\ \hline \hline \end{tabular} \end{table} Table 1. Estimated Spk on several datasets: needed ratio of points that are peaks of spikes to capture \(\rho=50\%\) of the total distribution of \(n\) embeddings, with \(cos\theta=0.9\). The parameter after the “@” indicates the number of first embedding dimensions used to compute the metric.

**Definition 3.2** (Spike formation).: A set of vectors \((e_{1},...e_{k})\) is said to form a spike - as studied in \(2.2\) - if we may rewrite them as \((\alpha_{1}\vec{s},...\alpha_{k}\vec{s})\) with \(\vec{s}\) a unitary vector and \((\alpha_{1},...\alpha_{k})\in\mathbb{R}_{+}^{k}\) their distance from the origin.

**Lemma 3.3**.: _Let \(M\) have a decomposition \(\hat{M}=U\Sigma V^{*}\). If the rows \((u_{i_{1}},...u_{i_{k}})\) of \(U\) belong to a spike, then the rows \((i_{1},...i_{k})\) of \(U\Sigma^{p}\) and of \(V\) also form a spike:_ \[u_{i}\Sigma^{p}=\alpha_{i}\left(\vec{s}\cdot\Sigma^{p}\right)=\alpha_{i}\vec{s}^{\prime}\qquad\quad v_{i}=u_{i}\cdot\text{sign}(D_{..f,..f})=\alpha_{i}\left(\vec{s}\cdot\text{sign}(D_{..f,..f})\right)=\alpha_{i}\vec{s}^{\prime}\] The spiking effect is insensitive to any choice of embedding as \(U\Sigma^{p}\) or \(V\): only the directions of the spikes change. In particular, the vector \(\vec{s}^{\prime}\) of \(V\) remains unitary. In the rest of this section, we arbitrarily fix the embeddings as \(U\Sigma\).
#### 3.1.2. Community graph models

The Stochastic Block Model (SBM) is an important model introduced in (Kang and Schuster, 2017) as a means to generate random graphs according to an underlying structure in communities. Given a set of \(n\) nodes indexed as \(\llbracket 1,n\rrbracket=\{1,2,...n\}\), a partition \(C:\llbracket 1,n\rrbracket\rightarrow\llbracket 1,K\rrbracket\) of the nodes into \(K\) communities, and a matrix \(B\in\mathcal{S}^{K}([0,1])\) of edge probabilities, the SBM model samples an adjacency matrix \(A\in\mathcal{S}^{n}(\{0,1\})\) such that \(A_{i,j}=A_{j,i}\sim\mathcal{B}(B_{C(i),C(j)})\), with \(\mathcal{B}\) the Bernoulli law. On average, we thus obtain a matrix with a constant edge degree within and between any pair of communities of nodes. While there is not a unique way of defining communities (Kang and Schuster, 2017; Wang and Schuster, 2017), it is usually agreed that the edge density within each community must be higher than that with other ones (Kang and Schuster, 2017), which translates as \(B_{i,i}>B_{i,j}\) for all \(i\neq j\). This last property is known as _assortativity_. Music data has been shown to be particularly assortative (Kang and Schuster, 2017). The _Degree-Corrected SBM_ (DCBM) (Kang and Schuster, 2017) introduces an additional set of parameters \((\alpha_{i})_{1}^{n}\in[0,1]^{n}\) allowing the edge probability to vary inside communities. Concisely, the DCBM proposes to sample edges between \(i\) and \(j\) with a probability \(\alpha_{i}\alpha_{j}B_{C(i),C(j)}\). An example is depicted in Figure 2. In order to make the parameters identifiable, we may renormalize \(B\) such that \(B_{i,i}=1\). For further properties on the DCBM, we refer readers to (Kang and Schuster, 2017; Wang and Schuster, 2017).

### Equivalence of spikes and degree-varying communities

We may finally spell out our main theorem linking recommendation embeddings and modeling of communities:

**Theorem 3.4**.: _Let \(M\in\mathcal{S}^{n}(\mathbb{R}_{+})\) have an SVD \(\hat{M}=U\Sigma V^{*}\) as defined in section 2.1. Let \(E=U\Sigma=(e_{1},...e_{n})\) be embeddings that exhibit \(K\) distinct spikes within their rows whose directions are given by \(\vec{s}_{1},...\vec{s}_{K}\). If row \(e_{i}\) belongs to a spike, let us define \(C(i)\) as its corresponding spike index. We additionally denote \(n^{\prime}\leq n\) the total number of points belonging to spikes. For ease of notation, we rearrange \(E\) and \(V\) such that the \(n^{\prime}\) first rows belong to spikes. Then, the submatrix \(\tilde{M}=|\hat{M}_{1..n^{\prime},1..n^{\prime}}|\) is proportional to the mean value of a DCBM._

Proof.: For all \(i\in\llbracket 1,n^{\prime}\rrbracket,e_{i}=\alpha_{i}\vec{s}_{C(i)}\). Similarly, \(v_{i}=\alpha_{i}\vec{s}_{C(i)}^{\prime}\), thanks to lemma (3.3). Then, for all \(i,j\in\llbracket 1,n^{\prime}\rrbracket^{2}\), \[(\hat{M})_{i,j}=(EV^{*})_{i,j}=\alpha_{i}\alpha_{j}(\vec{s}_{C(i)},\vec{s}_{C(j)}^{\prime})\qquad\quad|\hat{M}_{1..n^{\prime},1..n^{\prime}}|=K^{2}\cdot\text{DCBM}\left((C(i))_{i=1}^{n^{\prime}}\,;\,B\,;\,\left(\frac{\alpha_{i}}{K}\right)_{i=1}^{n^{\prime}}\right)\] with \(K=\max_{i}\alpha_{i}\), \(B\in[0,1]^{K\times K}\) the matrix \(B_{k,l}=|\langle\vec{s}_{k},\vec{s}_{l}^{\prime}\rangle|\), and \(\frac{\alpha_{i}}{K}\in[0,1]\) as required. In practice, there are good reasons to believe that \(\hat{M}\) will be (roughly) non-negative and that the absolute value can be dropped in \(\tilde{M}\).
When \(f=1\), the Perron-Frobenius theorem (Kang and Schuster, 2017) ensures the non-negativity, which is also trivially true when \(f\geq\text{rank}(M)\). In between, we may expect negative values to principally arise from punctual approximation artifacts of null values of \(M\), rather than large negative blocks having \(\langle\vec{s}_{k},\vec{s}_{l}^{\prime}\rangle<0\).

_Interpretation._ We have shown that the sub-matrix \(\hat{M}_{1\ldots n^{\prime},1\ldots n^{\prime}}\) follows a structure in communities given by a DCBM, as illustrated in Figure 2 with a synthetic example. If we think geometrically, the DCBM is merely a matrix with constant blocks that is multiplied element-wise by the rank-1 matrix \([\alpha_{1},...\alpha_{n^{\prime}}]^{*}[\alpha_{1},...\alpha_{n^{\prime}}]\). Our interpretation of this theorem is the following. Since \(M\) is often sparse (_cf._ section 2.2), rebuilding as many communities as possible with rank-1 matrices is intuitively a better way to minimize \(\|M-\hat{M}\|_{F}^{2}\) than "spending" many eigenvectors on a single community to fine-tune it with a matrix of higher rank. The widespread emergence of spike formations we uncovered across multiple datasets would thus stem from this general underlying prioritization of rank-1 approximation of communities. This coincidentally results in a DCBM with convenient theoretical properties that we can now use. We leave the detection and analysis of possible rank \(>1\) approximations out of the scope of this paper.

_Related work._ We found theorem (3.4) to be related to a result from community detection that the diagonalization of DCBM matrices leads to the appearance of spikes (Srivastava and Srivastava, 2017; Srivastava and Srivastava, 2017). Let us stress the difference. We prove the converse (_i.e._, spikes \(\Rightarrow\) DCBM), which is novel and in a more general case since a portion of \(M\) is noisy (\(n^{\prime}<n\)). In spirit also, the former literature aims to correct this effect - seen as noise - while we instead try to exploit it to gain knowledge about \(\hat{M}\).

### Practical advice

The duality of spike formations with DCBM communities has many convenient properties for recommendation. We highlight some corollaries of our theorem that provide practical advice when using embeddings computed with an SVD.

#### 3.3.1. Fast degree estimation

As a first remark, the \((\alpha)_{i}\) parameters of our theorem's DCBM may be easily retrieved as the norm \(\|e_{i}\|_{2}\) of embeddings that belong to a spike. Access to the degree of a node of the denoised matrix \(\hat{M}\) without explicitly building a graph may open doors for many benefits. For instance, the topology of a graph may be estimated from its degree distribution (_e.g._, whether the graph is scale-free), which has been suggested to indicate whether a music graph is of a rather social or informational nature (Bordes and Srivastava, 2017). Degree centrality has also been related to popularity biases in music data (Bordes and Srivastava, 2017; Srivastava and Srivastava, 2017) or used to create hierarchies (Bordes and Srivastava, 2017). More generally, the degree of a node is a strong predictor of its robustness to change in a graph (Golovolov et al., 2017; Golovolov and Srivastava, 2017). In section 4, we will further exploit such properties of stability.

Figure 2. Illustration of theorem (3.4) with a synthetic matrix with \(n=1000\) and \(n^{\prime}=3\times 200\) points forming three spikes.
We highlight the three direction vectors \(\vec{s}_{\{1,2,3\}}\) in \(E\) and their corresponding communities \(C_{\{1,2,3\}}\) in \(\hat{M}\) (reindexed for clarity). #### 3.3.2. Cosine similarity or dot product? The literature on community detection often seeks to curb the effect of the degree heterogeneity of the DCBM (_e.g._, [18, 28, 36]). One popular method is to normalize the eigenvectors with a spherical clustering [30]. Normalizing on a hypersphere turns out to correspond to the widespread use of cosine similarity in recommendation settings. It effectively removes the \((\alpha)_{i}\) terms, turning the DCBM into a simple SBM and thus attributing a unique value to all the items of a given community (the cosine similarity between their community direction \(\vec{s}_{i}\) and that of the reference embedding to retrieve items from). From the recommendation lens, both unpopular and popular items will be reweighted with an equal relevance value. This may be useful to improve _discovery_ objectives - _e.g._, improving coverage and serendipity metrics [20], but conversely promotes filter bubbles by design [34, 48]. On the flip side of the coin, measuring the internal popularity of a node within a community - _i.e._, its "importance" [18] - may be helpful to navigate between communities while displaying their most representative items. This setting is, in particular, more suited for artist similarity and cold-start applications [41, 42]. There, a dot product will always return the item with the highest norm of the community as top-similar, but may also merge popular items from nearby communities in the retrieved items. The dot product thus seems more suited to propose a _panopticon view_ of the many available items, but with the adverse risk of being biased by popularity and thus making long-tail items harder to access. As always, this choice of hyper-parameter depends on the task at hand and desired criteria to optimize. #### 3.3.3. Tuning of \(f\) The Spk metric we proposed in 2.3 may constitute a promising _offline metric_ on optimally choosing the truncation parameter \(f\) in the SVD, _i.e._, finding when recommendation performances are saturated despite adding more embedding dimensions. In the light of theorem (3.4), we can now reinterpret our metric as an approximation of \(K\), where we have implicitly assumed that \(\hat{n}^{\prime}=\rho n\) points should be considered as being modeled by a DCBM. We have additionally used the empirical heuristic that noise was prevalent in the lower norm values of \(E\). When the chosen \(\hat{n}^{\prime}\) comes close to the ground-truth \(n^{\prime}\), and the hypothesis starts to fail, noise starts to permeate, and the number of spikes grows linearly with the number of points (similar to the baseline Normal distribution where collinear points are rare). In Table 1, this effect was striking between _Deezer_ and _Spotify_, where the Spk metric was proportional between the two datasets for low values of \(f\), then exploded at \(f=128\) for the latter. We conclude that \(f=64\) for Spotify and \(f=128\) for Deezer constitutes an appropriate choice to capture clean communities in the embeddings. This metric makes the assumption to treat non-spikes as uninformative noise, which dismisses potential higher-ranking approximations that may hold further information on the data. However, we leave this last aspect for future work. ## 4. 
Opening: Industrial use-case

We open our results with an industrial use case to illustrate a few of the previously highlighted properties. Specifically, designing an industrial recommender system involves many challenges (_e.g._, scalability constraints) that can render adopting a complex machine learning model cumbersome [2]. Insights like the one we propose can thus be valuable for better tuning and evaluating recommenders leveraging simple item similarity models, such as the SVD, without changing the computation pipeline. We had access to a history of twelve music track embedding snapshots from _Deezer_, spanning between May 2022 and May 2023, computed on user data with a truncated SVD as described in 2.2. These SVDs contain 1.5 million common music tracks and have \(f=128\). Music data is particularly prone to exhibiting community effects - linked to its many historical and cultural groundings [13, 26], but it is also ever-changing. New trends emerge every day (_e.g._, _lo-fi_ music, _phonk_), others vanish (_e.g._, _emo_, _jumpstyle_), or are sometimes revivified (_e.g._, fans of Kate Bush thanks to _Stranger Things_' OST), or shape one of many types of evolution (Birn et al., 2017). It is thus natural to wonder how relevant some embeddings computed at a given date will remain in the following months as the underlying data changes. In section 3.3, we stressed that the estimated degree is considered a good predictor of the robustness of a node in a graph. We thus propose to link both notions and show that embeddings of higher norms have better stability in time than their lower-norm counterpart. Using the oldest SVD of May 2022 as a reference, we partition the embeddings equally into five according to their norm. We sample 1000 points in each partition and compute their top-500 similar items in each SVD - using either the cosine similarity or the dot product - among the almost 300,000 track candidates in each partition. We finally compute the mean Jaccard index (IoU) (Kumar et al., 2017) to quantify the relative similarity between the top items at the reference date and at each subsequent compared date. The results are given in Figure 3. We can see that conditioning our results on several norm ranges reveals strikingly different dynamics that could not have been inferred solely from a mean analysis. The dot product notably displays the most disparate curves where low-norm embeddings quickly become irrelevant while higher-norm ones remain over 80% similar after almost a year. Note that our point is not to state that change is necessarily undesirable in the embeddings, though we could expect well-established music genres to remain stable (_e.g._, jazz music tracks). Instead, we envision this type of conditioned analysis as a means to better evaluate music recommenders by discriminating between various dynamics of embeddings - and thus some expected associated properties.

## 5. Conclusion

Embeddings are commonly used in music recommendation. Yet, they are mostly used in a black-box manner, and many practices rely on empirical knowledge. In this paper, we have proven that the widespread emergence of spikes in the embeddings of music items was tied to an underlying graph structure of degree-varying communities. Our insights are particularly relevant for music data, where such structure is found naturally due to cultural and historical groundings.
Reframing a community detection model through the recommendation lens enabled us to formulate novel practical insights for music embeddings (_e.g._, cosine or dot product, stability) that are better grounded in theory. We hope to see more work bridging the gap between graph theory and recommendation. Our future work includes the analysis of rank \(>1\) approximation effects on embeddings. Figure 3. Stability through time of the top-500 of _Deezer_ embeddings starting in May 2022 and compared with eleven successive embeddings within five-week steps until May 2023. We condition the recommendation on a partition of the embedding norms (indicated on the right). Both the cosine similarity and dot product are considered. 95% confidence intervals are displayed.
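A schematic version of the stability computation behind Figure 3 is given below (an illustrative simplification: retrieval here scans all items rather than the per-partition candidate pools of Section 4, items are assumed to be indexed identically across snapshots, and all names are ours):

```python
import numpy as np

def topk_ids(E, query_idx, k=500, use_dot=True):
    """Top-k similar items (dot product or cosine) for one query row of E."""
    X = E if use_dot else E / np.linalg.norm(E, axis=1, keepdims=True)
    scores = X @ X[query_idx]
    scores[query_idx] = -np.inf                      # exclude the query itself
    return set(np.argpartition(scores, -k)[-k:])

def mean_jaccard(E_ref, E_new, queries, k=500, use_dot=True):
    """Mean IoU of top-k neighbour sets between a reference SVD and a later one."""
    ious = []
    for q in queries:
        a, b = topk_ids(E_ref, q, k, use_dot), topk_ids(E_new, q, k, use_dot)
        ious.append(len(a & b) / len(a | b))
    return float(np.mean(ious))

# usage sketch: partition items into quintiles of ||e_i|| in the reference SVD,
# sample 1000 queries per quintile, and track mean_jaccard against each later SVD
```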
2301.13598
Economic Predictive Control with Periodic Horizon for Water Distribution Networks
This paper deals with the control of pumps in large-scale water distribution networks with the aim of minimizing economic costs while satisfying operational constraints. Finding a control algorithm in combination with a model that can be applied in real-time is a challenging problem due to the nonlinearities presented by the pipes and the network sizes. We propose a predictive control algorithm with a periodic horizon. The method provides a way for the economic operation of large water networks with a small linear model. Economic Predictive control with a periodic horizon and a terminal state constraint is constructed to keep the state trajectories close to an optimal periodic trajectory. Barrier terms are also included in the cost function to prevent constraint violations. The proposed method is tested on the EPANET implementation of the water network of a medium size Danish town (Randers) and shown to perform as intended under varying conditions.
Mirhan Ürkmez, Carsten Kallesøe, Jan Dimon Bendtsen, John Leth
2023-01-31T12:55:09Z
http://arxiv.org/abs/2301.13598v1
# Economic Predictive Control with Periodic Horizon for Water Distribution Networks

###### Abstract

This paper deals with the control of pumps in large-scale water distribution networks with the aim of minimizing economic costs while satisfying operational constraints. Finding a control algorithm in combination with a model that can be applied in real-time is a challenging problem due to the nonlinearities presented by the pipes and the network sizes. We propose a predictive control algorithm with a periodic horizon. The method provides a way for the economic operation of large water networks with a small linear model. Economic Predictive control with a periodic horizon and a terminal state constraint is constructed to keep the state trajectories close to an optimal periodic trajectory. Barrier terms are also included in the cost function to prevent constraint violations. The proposed method is tested on the EPANET implementation of the water network of a medium size Danish town (Randers) and shown to perform as intended under varying conditions.

A common way to make the problem tractable is approximating the nonlinear pipe equations with some sort of linear equations or inequalities. In Baunsgaard et al. (2016), pipe equations are linearized around an operating point, and model predictive control (MPC) is applied. In Wang et al. (2018), an EMPC is applied to a network where the nonlinear pipe equations are relaxed into a set of linear inequalities. Before simplifying the system model, the network structure is also simplified in Fiedler et al. (2020). A hierarchical clustering method is used to represent the original network, which originally had 378 junctions, with a smaller one. A system model is derived from the simplified structure using a Deep Neural Network (DNN) structure. Lagrangian relaxation is used to approximate the original problem in Ghaddar et al. (2015).

In this paper, a way for optimal pump scheduling of large-scale WDNs is presented. To control the pumps, a linear model of the system is derived. Then, a predictive control method with a periodic horizon is constructed. Barrier functions are utilized to prevent constraint violation due to the model-plant mismatch. With the introduction of the periodic horizon and the terminal state constraint, the chance of finding a feasible solution is increased by keeping trajectories close to an optimal periodic trajectory. The method is applied to a medium-sized Danish town's network (Randers). The outline of the rest of the paper is as follows. The network model is given in Section 2. The proposed control method is explained in Section 3. The experimental results are presented in Section 4. The paper is concluded with final remarks in Section 5.

## 2 Network Model

A typical water distribution network consists of pipes, pumps, tanks, junction nodes and reservoirs. Water in the network flows from high hydraulic head to low head. Hydraulic head is a measure of the fluid pressure and is equal to the height of a fluid in a static column at a point. Hydraulic head loss occurring in a pipe can be approximated by the Hazen-Williams equation as \[\Delta h=h_{1}-h_{2}=Kq|q|^{0.852} \tag{1}\] where \(K\) is the pipe resistance that depends on the physical features of a pipe such as diameter and length, \(q\) is the flow rate, and \(h_{1}\) and \(h_{2}\) are the heads at the two ends of the pipe. At each node \(j\), the mass conservation law is satisfied.
It can be expressed as \[\sum_{i\in\mathcal{N}_{j}}q_{ij}=d_{j} \tag{2}\] where \(q_{ij}\) is the flow entering the node \(j\) from node \(i\) and \(d_{j}\) is the demand at node \(j\), which is the water requested by the user at node \(j\). The symbol \(\mathcal{N}_{j}\) denotes the set of neighbor nodes of node \(j\). Note that \(q_{ij}\) is positive if the flow is from node \(i\) to the neighbor node \(j\) and negative vice versa. Tanks are storage elements that provide water to the users. In the network, tanks are elevated so that water can be pressurized enough to be delivered to the consumers. The change in the water level of a tank is dependent on the flow coming from neighbor nodes and can be written for the tank \(j\) as \[A_{j}\dot{h}_{j}=\sum_{i\in\mathcal{N}_{j}}q_{ij} \tag{3}\] where \(A_{j}\) is the cross-sectional area and \(h_{j}\) is the level of the tank. Tank levels change according to the flow passing through the pipes connected to the tanks. Those flows are determined by a set of pipe head loss equations (1), and mass balance equations (2) throughout the whole network. As Equation (1) is nonlinear, the flows through pipes connected to the tanks are nonlinear functions \(f_{i}\) of the demand at each node, tank levels, and the amount of water coming from the pumps. Explicit forms of those nonlinear functions could be derived if the vector \(d=[d_{1},d_{2}...]^{T}\) containing the demands of all the nodes is known, which is not possible unless demand data for all nodes are available. In our work, we assume that the total demand of the zones that are supplied by the pumps can be estimated through available data with time series analysis methods, but we do not require the vector \(d\) to be known. Since the functions \(f_{i}\) cannot be found without the vector \(d\), we approximate them using linear models and write the tank level change equations as \[\dot{h}(t)=Ah(t)+B_{1}u(t)+B_{2}d_{a}(t) \tag{4}\] where \(h(t)\in\mathbb{R}^{n}\) includes tank levels, \(A\in\mathbb{R}^{n\times n}\), \(B_{1}\in\mathbb{R}^{n\times m}\), \(B_{2}\in\mathbb{R}^{n\times 1}\) are constant system matrices, \(d_{a}(t)\) is the aggregated demand of the controlled zone at time \(t\), and \(u(t)\in\mathbb{R}^{m}\) is the input containing pump flows. The reason we chose a linear model is to increase the chance of finding a feasible solution for the controller, which is posed as an optimization problem in the next section. Although capturing the full dynamics of a large-scale network is not possible with a linear model, the proposed control method is designed to compensate for model inaccuracies and we have observed that it was enough to control the system while satisfying the constraints.

## 3 Periodic Horizon Control

In this section, a predictive control algorithm for pump scheduling is presented to minimize the economic costs. The aim is to run the pumps when the electricity price is low and let tanks deliver water when the price is high while also satisfying input and output constraints.
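Before stating the controller, a small sketch of how the linear model (4) can be discretized and simulated is given below (illustrative only: the paper does not prescribe a particular discretization scheme, and all matrices here are placeholders rather than values identified for the Randers network):

```python
import numpy as np
from scipy.linalg import expm

def discretize(A, B1, B2, dt):
    """Zero-order-hold discretization of h_dot = A h + B1 u + B2 d_a
    (a standard augmented-matrix construction; one possible choice)."""
    n, m = A.shape[0], B1.shape[1]
    M = np.zeros((n + m + 1, n + m + 1))
    M[:n, :n], M[:n, n:n + m], M[:n, n + m:] = A, B1, np.reshape(B2, (n, 1))
    Md = expm(M * dt)
    return Md[:n, :n], Md[:n, n:n + m], Md[:n, n + m:]

def simulate(Ad, B1d, B2d, h0, u_seq, d_seq):
    """Roll the discrete tank-level model forward over a horizon."""
    h = [np.asarray(h0, dtype=float)]
    for u, d in zip(u_seq, d_seq):
        h.append(Ad @ h[-1] + B1d @ np.asarray(u) + (B2d * d).ravel())
    return np.array(h)
```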
The problem at time \(t\) is posed as \[\min_{u^{t}_{0},u^{t}_{1}\cdots u^{t}_{N(t)-1}}\sum_{j=0}^{N(t)-1 }J(h^{t}_{j},u^{t}_{j}) \tag{5a}\] \[h^{t}_{j}=A_{d}h^{t}_{j-1}+B_{d1}u^{t}_{j-1}+B_{d2}d_{a}(j-1)\] (5b) \[h^{t}_{0}=h(t)\] (5c) \[u^{t}_{j}\in\mathcal{U}\subseteq\mathbb{R}^{m}\] (5d) \[h^{t}_{j}\in\mathcal{H}\subseteq\mathbb{R}^{n}\] (5e) \[h^{t}_{N(t)}\in\mathcal{H}_{tf}\subseteq\mathbb{R}^{n} \tag{5f}\] where \(J(h^{t}_{j},u^{t}_{j})\) is the economic cost function, \(h^{t}=[h^{t}_{1}\cdots h^{t}_{N(t)}]\in\mathbb{R}^{n\times N(t)}\) is the predicted future states, \(u^{t}_{j}\) is the input vector, \(N(t)\) is the prediction horizon, \(\mathcal{U}\subseteq\mathbb{R}^{m}\) and \(\mathcal{H}\subseteq\mathbb{R}^{n}\) denotes the input and state constraints respectively and \(\mathcal{H}_{tf}\subseteq\mathbb{R}^{n}\) is the terminal state set. The continuous system (4) is discretized and (5b) is obtained. The optimization problem (5) is solved at every time step separated by \(\Delta_{t}\) and the first term \(u_{0}^{t}\) of the optimal input sequence \(\mathbf{u}^{\mathbf{t}}=[u_{0}^{t}\cdots u_{N(t)-1}^{t}]\in\mathbb{R}^{m\times N (t)}\) is applied to the system. Input constraints come from the physical limitations and working principles of the pumps. A pump can not provide water in the opposite direction and it can deliver a maximum amount of water per unit of time. These conditions are expressed as \[\mathcal{U}=\{[u_{1},\cdots u_{m}]\in\mathbb{R}^{m}\mid 0\leq u_{1}\leq \overline{u}_{1},\cdots 0\leq u_{m}\leq\overline{u}_{m}\} \tag{6}\] where \(\overline{u}_{1}\cdots\overline{u}_{m}\) are upper flow limits. Tank levels are also constrained so that there is always enough water in the tanks in case of an emergency and there is no overflow of water. The set \(\mathcal{H}\) can be defined as \[\mathcal{H}=\{[h_{1},\cdots h_{2}]\in\mathbb{R}^{n}\mid\tilde{h}_{1}\leq h_{1 }\leq\overline{h}_{1},\cdots\tilde{h}_{n}\leq h_{n}\leq\overline{h}_{n}\} \tag{7}\] The cost function includes the electricity costs of the pumps. The power provided to the network by the pump \(i\) is equal to \(q_{pi}(p_{i}^{out}-p_{i}^{in})\), where \(q_{pi}\) is the pump flow, \(p_{out}^{i}\) and \(p_{in}^{i}\) are the outlet and inlet pressures of the pump \(i\). The inlet pressures \(p^{in}=[p_{1}^{in},p_{2}^{in}]\) are the pressures of the related reservoirs and are assumed to be constant. The outlet pressures \(p^{out}=[p_{1}^{out},p_{2}^{out}]\) are given as the output of the linear model \[p^{out}(t)=A_{p}h(t)+B_{p}u(t) \tag{8}\] where \(A_{p}\) and \(B_{p}\) are found using system identification on data generated by the EPANET model. Electricity cost at time \(t\) is then found by multiplying total power consumption \(u(t)^{T}(p^{out}(t)-p^{in}(t))\) with the electricity price \(c(t)\). We acknowledge a certain degree of model-plant mismatch by using a linear model (4) to represent the whole network. This causes actual states \(h(t)\) to be different than the predicted states \(h^{t}\). We know that the predicted states satisfy the state constraints (7) since they are the solution to the optimization problem 5, but the actual states might violate them. To ensure the satisfaction of the state constraints with the model-plant mismatch, we introduce new terms to the cost function. First, we rewrite state constraints (7) as \[C_{i}(h)\leq 0,\quad i=0,1,\cdots 2\times n-1 \tag{9}\] where \(C_{0}(h)=\tilde{h}_{1}-h_{1}\) and the rest of the \(C_{i}\) functions are chosen in a similar manner. 
The cost function terms are then defined as \[J_{hi}(h)=e^{a_{i}(C_{i}(h)+b_{i})}\quad i=0,1,\cdots 2\times n-1 \tag{10}\] where \(a_{i},b_{i}\in\mathbb{R}_{>0}\). This can be seen as an exponential barrier function. The parameters \(a_{i},b_{i}\) determine a dangerous region close to the boundaries of the state constraints where the cost function \(J_{hi}\) attains high values. The predicted optimal state trajectories \(h^{t}\) do not enter the dangerous region if possible because of the high cost values in the dangerous region. Then, the actual states \(h(t)\) do not violate the state constraints (7) assuming the difference between the predicted state and the actual state is small. If the state trajectory enters one of the dangerous regions at any step due to the model-plant mismatch, then the cost function will try to drive the trajectory out of the region. The overall cost function includes both the electricity expense term and the constraint barrier functions and it can be expressed as \[J(h(t),u(t))=c(t)u(t)^{T}(p^{out}(t)-p^{in}(t))+\sum_{i=0}^{2\times n-1}J_{hi}(h(t)) \tag{11}\] Both the electricity price \(c(t)\) and the total water demand \(d_{a}(t)\) signals can be viewed as consisting of a periodic signal with a period of 1 day and a relatively small deviation signal. This can be leveraged to find a feasible controller. Suppose a sequence of inputs can be found for some initial tank levels such that the levels after 1 day are equal to the initial levels. In that case, the problem after 1 day is the same as in the beginning, assuming the deviation signals of the electricity price and the demand are zero, i.e., assuming both signals are purely periodic. Then, the input sequence from the previous day could be applied and would produce the same path for the tank levels. Taking into account the deviation signals and supposing that a solution exists such that the levels after 1 day are close to the initial levels, the input sequence from the previous day could be a good starting point to search for a feasible solution if the map from the initial conditions and the demand profile to the optimal input sequences is continuous. Therefore, we choose a terminal state constraint for the end of each day to increase the chance of finding a feasible solution. Now, the remaining problem is to decide which tank levels the trajectories should return to at the end of each day. We define the optimal periodic trajectory of the system as the solution of \[(\mathbf{u}^{*},\mathbf{h}^{*})=\operatorname*{arg\,min}_{u_{i},h_{i}} \sum_{i=0}^{(T_{day}/\Delta_{t})-1}J(h_{i},u_{i}) \tag{12a}\] \[h_{i}=A_{d}h_{i-1}+B_{d1}u_{i-1}+B_{d2}d_{a}^{*}(i-1)\] (12b) \[u_{i}\in\mathcal{U}\subseteq\mathbb{R}^{m}\] (12c) \[h_{i}\in\mathcal{H}\subseteq\mathbb{R}^{n}\] (12d) \[h_{0}=h_{T_{day}/\Delta_{t}} \tag{12e}\] where \(T_{day}\) is the duration of a whole day and \(d_{a}^{*}\) is the average daily demand profile obtained from past measurements. The resulting state trajectory \(\mathbf{h}^{*}=[h_{0}^{*}\cdots h_{T_{day}/\Delta_{t}}^{*}]\in\mathbb{R}^{n\times(T_{day}/\Delta_{t}+1)}\) is the optimal periodic trajectory because of the constraint (12e). The terminal set \(\mathcal{H}_{tf}\) and the prediction horizon \(N(t)\) are chosen to make the tank levels at the end of each day close to \(h_{T_{day}/\Delta_{t}}^{*}\). Figure 1: Predicted state trajectories \(h^{t},h^{t+\Delta_{t}}\) at times \(t,t+\Delta_{t}\). Sampling time \(\Delta_{t}\), prediction horizons \(N(t),N(t+\Delta_{t})\) and the applied inputs \(u_{0}^{t},u_{0}^{t+\Delta_{t}}\) are shown. The true state \(h(t+\Delta_{t})\) and the predicted state \(h_{1}^{t}\) are indicated to emphasize the deviation from the prediction. The terminal set \(\mathcal{B}_{r}(h_{T_{day}/\Delta_{t}}^{*})\) is also illustrated.
At any time \(t\), \(t+N(t)\Delta_{t}\) should be equal to the end of the day. \(\mathcal{H}_{tf}\) and \(N(t)\) could be written as \[\mathcal{H}_{tf} =\mathcal{B}_{r}(h_{T_{day}/\Delta_{t}}^{*}) \tag{13a}\] \[N(t) =(T_{day}-t\bmod T_{day})/\Delta_{t} \tag{13b}\] where \(\mathcal{B}_{r}(h_{T_{day}/\Delta_{t}}^{*})\) is the open ball centered at \(h_{T_{day}/\Delta_{t}}^{*}\) with radius \(r\). Note that \(N(t)\) changes so that \(t+N(t)\Delta_{t}\) is the end of the day for all \(t\). With these definitions, the condition (13a) translates to the tank levels at the end of the day being close to the final point of the optimal periodic trajectory, \(h_{T_{day}/\Delta_{t}}^{*}\), as shown in Figure 1. Therefore, not only is the chance of finding a feasible solution increased, but the solutions are also kept around the optimal periodic trajectory \(\mathbf{h}^{*}\). If the problem (5) becomes infeasible at any time step \(t\), we apply the second term of the input sequence from the previous step, \(u_{1}^{t-\Delta_{t}}\). The reason behind this choice is as follows: if we apply the optimal control input \(u_{0}^{t-\Delta_{t}}\) to the network model (4) at time \(t-\Delta_{t}\), then the optimal sequence at the next time step will be \(\mathbf{u}^{t}=[u_{1}^{t-\Delta_{t}}\cdots u_{N(t-\Delta_{t})-1}^{t-\Delta_{t}}]\) following Bellman's principle of optimality. Then, at time \(t\), \(u_{1}^{t-\Delta_{t}}\) will be applied to the system as calculated at \(t-\Delta_{t}\). Assuming the model-plant mismatch is small enough, \(u_{1}^{t-\Delta_{t}}\) is still a good input candidate if the problem is infeasible at time \(t\). ## 4 Application The presented method is applied to the WDN of Randers, a Danish city, which is shown in Figure 2. The network consists of 4549 nodes and 4905 links connecting them. There are 8 pumping stations in the network, 6 of which are shown in the figure whereas the other 2 are stationed where the tanks are placed. The goal is to derive the schedules for 2 of the pumping stations while the other pumps are already working according to some predetermined strategies. The stations to be controlled are shown in red in the figure. Their task is to deliver water mostly to the High Zone (HZ) and Low Zone (LZ). However, connections exist between HZ-LZ and the rest of the city, so we cannot think of the system as composed of entirely isolated networks. There are also 3 tanks in the HZ. While 2 of them are directly connected via pipes, the third one stands alone as shown in the figure. The overall structure of the Randers WDN with the tanks and pumps to be controlled is given in Figure 3. There are 3 water tanks in the network, 2 of which are directly connected with a pipe. The tank level changes can be written by applying the mass conservation law (3) to the tanks in Figure 3 as \[A_{1}\dot{h}_{1} =q_{1down}+q_{1up}+q_{inter} \tag{14a}\] \[=f_{1}(h_{1},h_{2},h_{3},q_{p1},q_{p2},d),\] \[A_{2}\dot{h}_{2} =q_{2down}+q_{2up}-q_{inter}\] (14b) \[=f_{2}(h_{1},h_{2},h_{3},q_{p1},q_{p2},d),\] \[A_{3}\dot{h}_{3} =q_{3}=f_{3}(h_{1},h_{2},h_{3},q_{p1},q_{p2},d), \tag{14c}\] where \(d\) is the vector containing the demands of all the nodes, \(q_{p1},q_{p2}\) are the pump flows, \(A_{1},A_{2},A_{3}\) are the cross-sectional areas of the tanks and \(f_{1},f_{2},f_{3}\) are nonlinear flow functions.
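Before simplifying this tank model, the main controller ingredients defined in Section 3 can be collected into a short Python sketch. This is an illustration only: the optimization problem (5) itself is abstracted behind a hypothetical `solve_ocp` callable, while the horizon (13b), the barrier terms (10), the stage cost (11), and the infeasibility fallback follow the definitions above; the barrier parameters and sampling time match the values used in the experiments below.

```python
import numpy as np

T_DAY, DT = 24.0, 1.0          # day length and sampling time [h]

def horizon(t):
    """Prediction horizon N(t) = (T_day - t mod T_day) / dt, cf. (13b)."""
    return int(round((T_DAY - t % T_DAY) / DT))

def barrier(h, h_min, h_max, a=80.0, b=0.3):
    """Exponential barrier terms (10) for the box constraints h_min <= h <= h_max."""
    C = np.concatenate([h_min - h, h - h_max])   # the C_i(h) <= 0 form of (9)
    return float(np.sum(np.exp(a * (C + b))))

def stage_cost(h, u, price, p_out, p_in, h_min, h_max):
    """Economic stage cost (11): electricity expense plus barrier terms."""
    return price * float(u @ (p_out - p_in)) + barrier(h, h_min, h_max)

def control_step(t, h, prev_sequence, solve_ocp):
    """One receding-horizon step: solve (5); if infeasible, fall back to the
    second element of the previous optimal input sequence, as described in Section 3."""
    N = horizon(t)
    sequence = solve_ocp(h, N)                   # assumed to return None if (5) is infeasible
    if sequence is None:
        return prev_sequence[1], prev_sequence[1:]
    return sequence[0], sequence
```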
Water levels at the two connected tanks are almost equal, \(h_{1}\thickapprox h_{2}\), all the time since the pipe connecting the two tanks is large enough to oppose the water flows coming from the neighbor nodes. That enables us to consider \(h_{1},h_{2}\) together as \[(A_{1}+A_{2})\dot{h}_{1,2}\thickapprox q_{1down}+q_{2down}+q_{up}=f_{1}+f_{2}. \tag{15}\] We have used the EPANET model of the network to generate the data required for approximating \(f_{1}+f_{2}\) and \(f_{3}\). The model is simulated with various tank level initial conditions and flow rates of the 2 pumping stations to be controlled. The control laws for the remaining pumping stations are already defined in the EPANET model. Then, the linear model (4) is fitted to the simulation data using least squares. The state variables for the model are \(h(t)=[h_{1,2}(t),h_{3}(t)]\in\mathbb{R}^{2}\) and the inputs are \(u(t)=[q_{p1}(t),q_{p2}(t)]\in\mathbb{R}^{2}\). The total demand of the High and Low Zone is used as the aggregated demand \(d_{a}\) in the model since mainly those areas are supplied by the controlled pumps. Figure 2: Water Distribution Network of Randers. The pumping stations to be controlled are shown in red. Tanks are shown with a 'T' shaped symbol in yellow. Figure 3: Structure of the WDN. ### Simulation Results The proposed control method is tested on the EPANET model of the Randers water network. The EPANET-Matlab toolkit (Eliades et al. (2016)) is used to set the flow of the 2 pumps at each time step and simulate the network. The remaining pumps are controlled with rule-based control laws that are previously defined in EPANET. The parameters of the exponential barrier functions \(J_{hi}\) are chosen as \(a_{i}=80\), \(b_{i}=0.3\) for all \(i\). It is assumed that the electricity prices are known in advance during the test. Tank levels \(h_{1},h_{2}\) have a maximum value of 3 m while \(h_{3}\) has 2.8 m. Tanks are required to be at least half full. Maximum pump flows are set to 100. The sampling time \(\Delta_{t}\) is set to 1 hour in the experiments, so the control input is recalculated each hour. We assume that the total demand \(d_{a}(t)\) of HZ and LZ can be estimated up to 1 day ahead from available data. Although we do not have historical data on the demand, we imitate this behaviour by using a slightly perturbed version of the real demand used in the EPANET simulation during the MPC calculations. The perturbations are adapted from a real demand data set of a small Danish facility. The normalized difference between the average demand and the demand of a random day in the data set is added to the EPANET demand to replicate the estimated demand. In each experiment a different day from the data set is used, so the assumed estimated demand is different each time. The simulation results when the presented method is applied to the EPANET model are given in Figure 4. The initial tank levels are equal to \(h_{T_{day}/\Delta_{t}}^{*}\) in the simulation. The top plot shows the evolution of tank levels along with the upper and lower thresholds. It is seen that the thresholds are not violated and, moreover, tank levels are not getting too close to them, which was the idea behind the exponential barrier functions. Both the real demand and the assumed estimated demand of HZ and LZ are shown in the second plot. The total applied pump flows and the electricity prices are shown in the following plots. The expected result is pump flows being higher when electricity prices are low, and lower when they are high, which seems to be the case as can be seen in the plot.
Pump flows drop significantly when prices are at their peak and they reach their highest value at the end of the day when prices are low. A more aggressive controller can be obtained by picking a smaller \(b_{i}\) value for the barrier functions at the expense of risking constraint violation. In Figure 5, the tank level simulation results and control inputs for different initial conditions and different assumed estimated demands are given. The electricity price profile is the same as before. It is seen that the algorithm is able to control the network in various cases while satisfying the constraints. Regardless of the initial tank levels, the pumping profiles are similar: high pump flows close to midnight and in the middle of the day. The only exception is the bottom plot. In the beginning, prices are low but pump flows are not high. This can be attributed to the water levels \(h_{1},h_{2}\) being close to the upper thresholds and the water demand being low in the beginning. The assumption that the optimal input sequences \(U(t)\) would not diverge a lot from the one found in the previous step, \(U(t-\Delta_{t})\), is the reason we apply \(u_{1}^{t-\Delta_{t}}\) at time \(t\) if the problem (5) is infeasible at time \(t\). This assumption is tested with initial conditions \(h_{1,2,3}=h_{T_{day}/\Delta_{t}}^{*}\). In Figure 6, the total pump flows \([1,1]^{T}u_{i}^{t},\ i=0\cdots N(t)-1\), of the found optimal input sequences \(U(t),\ t=0,\Delta_{t}\cdots T_{day}-\Delta_{t}\), are given, except for the steps where the problem was infeasible. It can be seen that \(u_{1}^{t-\Delta_{t}}\) is close to \(u_{0}^{t}\) for all \(t\), which shows that our assumption is valid at least for this experiment. Finally, the ability of the algorithm to decrease economic costs is tested with various initial conditions. For each case, a demand follower pumping strategy is used as a benchmark. The flow of the 2 pumps is adjusted with trial and error for each demand follower such that the total flow of the 2 pumps is equal to the water demand at each time step and the tank levels satisfy the terminal constraint (13a). The demand follower is a natural candidate to be a benchmark method since providing as much water as demanded is an intuitive idea and the constraints in (5) can be satisfied with manual adjustments of pump flows. The relative economic costs are presented in Table 1. As can be seen, the proposed algorithm saves between 40% and 45% of the cost with different demand profiles. \begin{table} \begin{tabular}{c|c} **Proposed Method** & **Demand Follower** \\ \hline 0.5967 & 1 \\ \hline 0.5745 & 1 \\ \hline 0.5826 & 1 \\ \hline 0.5558 & 1 \\ \end{tabular} \end{table} Table 1: Relative economic costs of the proposed method and the demand follower strategy for various demand profiles. Figure 4: Sample simulation. (a) evolution of tank levels through 1 day with upper and lower level thresholds; (b) real total demand of HZ and LZ used in the EPANET simulation and the demand used in the MPC calculations; (c) total flow provided by the 2 pumps; (d) electricity price. ## 5 Conclusion We have presented a predictive control algorithm with a periodic horizon for WDNs. The aim is to minimize the economic cost and satisfy the operational constraints. A linear model is used to represent the Randers WDN to increase the chance of finding a solution to the problem (5) at the expense of a model-plant mismatch. A periodic horizon is introduced to the predictive control formulation to keep the resulting state trajectories around the optimal periodic trajectory.
Barrier functions are used to prevent constraint violation since there is a model-plant mismatch. The presented algorithm is tested on the Randers WDN using EPANET. It is shown in various situations that the method is able to find an economic solution where the pump flows are adjusted according to the electricity prices. Also, it is shown that the system trajectories do not enter the dangerous zones introduced by the barrier functions as long as the predicted demand and the actual demand are somewhat close. As future work, we plan to work on theoretical guarantees for the existence of solutions to the proposed method. Also, the robustness of periodic horizon control of periodic systems with barrier functions will be investigated.
2308.16855
Space Partitioning Schemes and Algorithms for Generating Regular and Spiral Treemaps
Treemaps have been widely applied to the visualization of hierarchical data. A treemap takes a weighted tree and visualizes its leaves in a nested planar geometric shape, with sub-regions partitioned such that each sub-region has an area proportional to the weight of its associated leaf nodes. Efficiently generating visually appealing treemaps that also satisfy other quality criteria is an interesting problem that has been tackled from many directions. We present an optimization model and five new algorithms for this problem, including two divide and conquer approaches and three spiral treemap algorithms. Our optimization model is able to generate superior treemaps that could serve as a benchmark for comparing the quality of more computationally efficient algorithms. Our divide and conquer and spiral algorithms either improve the performance of their existing counterparts with respect to aspect ratio and stability or perform competitively. Our spiral algorithms also expand their applicability to a wider range of input scenarios. Four of these algorithms are computationally efficient as well with quasilinear running times and the last algorithm achieves a cubic running time. A full version of this paper with all appendices, data, and source codes is available at osf.io.
Mehdi Behroozi, Reyhaneh Mohammadi, Cody Dunne
2023-08-31T16:57:27Z
http://arxiv.org/abs/2308.16855v1
# Space Partitioning Schemes and Algorithms for Generating Regular and Spiral Treemaps ###### Abstract Treemaps have been widely applied to the visualization of hierarchical data. A treemap takes a weighted tree and visualizes its leaves in a nested planar geometric shape, with sub-regions partitioned such that each sub-region has an area proportional to the weight of its associated leaf nodes. Efficiently generating visually appealing treemaps that also satisfy other quality criteria is an interesting problem that has been tackled from many directions. We present an optimization model and five new algorithms for this problem, including two divide and conquer approaches and three spiral treemap algorithms. Our optimization model is able to generate superior treemaps that could serve as a benchmark for comparing the quality of more computationally efficient algorithms. Our divide and conquer and spiral algorithms either improve the performance of their existing counterparts with respect to aspect ratio and stability or perform competitively. Our spiral algorithms also expand their applicability to a wider range of input scenarios. Four of these algorithms are computationally efficient as well with quasilinear running times and the last algorithm achieves a cubic running time. A full version of this paper with all appendices, data, and source codes is available at osf.io (hyperlink). Regular and Spiral Treemaps, Space Partitioning Optimization, Data Visualization, Computational Geometry ## 1 Introduction Treemaps are well-known planar structures for the visualization of tree-structured hierarchical data with a space-filling approach. A nested treemap maps the nodes of a tree to a nested non-overlapping structure made up of simply connected planar figures, usually rectangles or convex polygons, where the areas of these figures are proportional to some attribute of the nodes of the tree. Another way of looking at treemap layouts is that they partition a given planar region into a number of sub-regions with given areas. Treemaps were introduced by Shneiderman in 1991 as a space-filling technique to visualize space usage within computer directory structures [1, 2]. Fig. 1 shows an example of tree-structured data and an associated treemap generated with the original Slice-and-Dice algorithm. Shneiderman's approach was to make a rectangular treemap, in which a rectangle was recursively divided with alternating horizontal and vertical cuts into rectangular sub-regions. Slice-and-Dice is simple to implement, but suffers from some issues, particularly creating long and skinny rectangles. The poor aspect ratio raises issues with interpreting any associated visual encodings [3]. Over the years several treemap variants have been proposed. These include polygonal treemaps where the sub-regions are simple polygons [4], convex treemaps where the sub-regions are convex polygons [5], Voronoi treemaps where the sub-regions are Voronoi cells [6, 7], cascaded treemaps where the sub-regions are cascaded as opposed to nested [8], spiral treemaps where the layout presents a spiral structure [9, 10], and treemaps with space-filling fractal-like curves such as Gosper [11] and Hilbert curves [12]. Treemaps are used in a wide range of application domains including software visualization; file/directory structures; news and multimedia visualization; and visualization of financial data such as stocks, budgets, and import/export trades.
Its associated spatial partitioning problem also has a wide range of applications such as designing service districts, land allocation in farming reforms, and matrix multiplication algorithms in parallel computing. Beyond area encoding, treemaps can be enhanced to encode additional attributes using color, shade, filling pattern, label, and nesting/cascading structure. Creative and new applications of treemaps for different visualization purposes are continuously being proposed using a variety of treemap algorithms (see [13] for a survey on treemap algorithms). A regular treemapping problem with rectangular sub-regions could be defined as follows. Given a rectangle \(R\) with \(\text{Area}(R)=A\) and a hierarchical list of areas \(A_{1},...,A_{n}\) with \(\sum_{i=1}^{n}A_{i}=A\) arranged in a tree data structure \(T\) we want to layout \(n\) non-overlapping rectangles with areas \(A_{1},...,A_{n}\) inside \(R\) in a nested way that follows the structure of \(T\) and the resulting rectangles are as square as possible with \(\cup_{i=1}^{n}R_{i}=R\). The squareness of the rectangles is important practically and aesthetically in data visualization. In this paper, we also study another variant of this problem, where we relax the partitioning condition (\(\cup_{i=1}^{n}R_{i}=R\)) and instead require the layout to follow certain _spiral_ patterns. In other words, for regular treemaps we are given an input region while for the spiral treemaps, there is no input region and instead we aim to form a spiral structure with rectangular sub-regions of given areas. For regular treemaps we propose an optimization model, one divide & conquer algorithm, and one dynamic programming algorithm and for the spiral treemaps we propose three constructive algorithms that mimic the "golden spiral" structure in different ways. Our divide and conquer and dynamic programming algorithms take a _subdivision_ approach and our spiral algorithms take a _packing_ approach, while all proposed algorithms fall under _space-filling_ treemaps according to the existing design space categories [13, 14, 15]. #### Contributions, Outline, and Supplemental Materials In this paper, we consider nested rectangular, convex, and spiral treemaps. Our main contributions, listed in presentation order, are: 1. **A treemap optimization model** that minimizes the total perimeter of the component rectangles, thus favoring square sub-regions (Section 3). To the best of our knowledge, this is the most comprehensive model in the literature of the rectangular treemapping problem, incorporating several variants of the problem. 2. **Two subdivision treemap algorithms** based on divide & conquer and dynamic programming that resolve the shortcoming of the prior divide & conquer approach when faced with uneven area distributions (Section 4). 3. **Three spiral treemap algorithms** that mimic the Symmetric Spirals that are ubiquitous in nature and famous for their visual attractiveness (Section 5). Spiral algorithm _construct_ treemaps independent of the input region. Removing this restriction can lead to both significant improvements or deterioration depending on the problem instance. 4. **The usage of Hausdorff distance** as a metric for stability is established for the first time. 5. **A performance comparison** of treemap algorithms based on aspect ratio and stability (Section 6). We demonstrate that our optimization model generates superior treemaps and could serve as a benchmark for assessing more computationally efficient approaches. 
Also, our other treemap algorithms offer improved aspect ratio vs. prior methods, while staying competitive on stability. Finally, usage examples on several diverse datasets (Section 7) are also provided for further comparison of output visualizations. A full version of this paper with all appendices, data, and source code is available at osf.io (hyperlink). ## 2 Background ### _Notational Conventions_ The following notational conventions are assumed throughout this paper. By the bounding box (rectangle) of a region \(C\), we mean the minimum-area axis-aligned bounding box (rectangle) of \(C\) and we show it with \(\square(C)\). The width and height of a region \(C\) are defined as the width and height of \(\square(C)\) and are denoted by \(\operatorname{width}(C)\) and \(\operatorname{height}(C)\). \(\operatorname{Area}(C)\) denotes the area of a region \(C\) and \(\operatorname{Perim}(C)\) shows its perimeter. We define the perimeter of a rectangle \(R\) by \(\operatorname{Perim}(R)=\operatorname{width}(R)+\operatorname{height}(R)\). We also define the aspect ratio of a rectangle \(R\) as \[\operatorname{AR}(R)=\max\left\{\frac{\operatorname{width}(R)}{\operatorname{height}(R)},\frac{\operatorname{height}(R)}{\operatorname{width}(R)}\right\},\] i.e., \(\operatorname{AR}\geq 1\) and the aspect ratio of a square is one. We define the aspect ratio of a non-rectangular polygon \(P\) as the aspect ratio of its bounding rectangle \(\square(P)\). Finally, \(|S|\) shows the cardinality (size) of set \(S\). ### _Evaluation Criteria_ Depending on the application area, the goal in generating a treemap, and thus the evaluation metrics for the results, could be different. Generally, the utility of a treemap-generating algorithm for decision-makers could be evaluated according to the following metrics: **Space-filling--**The main goal of most treemap algorithms is to present information in a 2D space in an efficient, space-filling way and minimize unused space [16]. This enables the presentation of a lot of information in a concise format. The goal in some other treemap algorithms is to show _containment_ between parent and child nodes, where unused spaces can be present. Here, we only consider space-filling treemaps. **Aspect ratio--**Treemap algorithms should avoid generating thin and elongated sub-regions as much as possible. A planar geometric shape is called _"fat"_ if the aspect ratio of its bounding rectangle is one or close to one (almost square). Using convex or rectangular fat shapes for a treemap collectively forms a visually appealing pattern, makes mouse selection easier [17, 18], allows larger labels, and can make area comparison tasks more accurate [3]. Treemap algorithms have often been designed or evaluated in terms of their ability to minimize both the maximum and average aspect ratio (AR) of the sub-regions, e.g., Squarified by Bruls et al. [17]. In Section 5 we further discuss the experiments of Kong et al. [3]--including their finding that both perfect squares and rectangles with extreme aspect ratios can create misjudgment in area comparison tasks. Fig. 1: Hierarchical data (a) modeled as a height-3 weighted tree & (b) the associated treemap visualization produced by Shneiderman's original Slice-and-Dice algorithm [1, 2]. **Stable--**In many applications, the data of interest may change frequently or on a dynamic basis. Thus stability in the face of dynamically changing data is desired, meaning that changes to the layout are minimal if the data is just slightly perturbed [19, 20].
This helps the user to track what has changed without completely distorting their mental map, as defined by Misue et al. [21]. To measure stability we need a distance measure between corresponding pairs of sub-regions in two layouts, i.e., a measure of how much a sub-region moves across the treemap layouts generated before and after a perturbation of the areas. Several such measures have been used for treemaps that can be categorized as: (1) _absolute_ metrics that measure how much individual rectangles move/change (e.g., _average Euclidean distance change_ (between corners) [19, 22], _location drift_ [12], _corner travel distance_ [23]), and (2) _relative_ metrics that measure how much positions of pairs of rectangles change relative to each other, usually by comparing their centers or aspect ratios (e.g., _average Euclidean distance change_ (between centroids) [24, 25], _average angular displacement_ [24], _relative direction change_ [26], _relative position change_ [27], and _average aspect ratio change_ and _relative parent change_ [28]). Some of these measures fail or perform poorly in certain cases. For example, two skinny rectangles can form a cross (which shows a significant move of the corners and edges), while having the same centers. Here we establish the maximum and average _Hausdorff distance_ (HD) as our metrics for stability. We define the _Hausdorff distance_ between two compact sets \(S_{1}\) and \(S_{2}\), both in the same metric space, as \[\mathrm{HD}(S_{1},S_{2})=\max\left\{\sup_{x\in S_{1}}\inf_{y\in S_{2}}\|x-y\|,\sup_{y\in S_{2}}\inf_{x\in S_{1}}\|x-y\|\right\},\] where \(\|\cdot\|\) is the Euclidean norm, \(\sup\) is the supremum and \(\inf\) the infimum. This is illustrated in Fig. 2. HD provides a measure of distance between two corresponding sub-regions in two layouts. It shows the maximum of the maximum distances of each sub-region in one layout to the nearest point of its corresponding sub-region in the other layout. This is the first use of this stability measure for treemaps and it is an absolute metric in the same spirit as the Euclidean distance change metric proposed by Shneiderman & Wattenberg [19]. **Ordered--**Preserving an input order in the output can help with recognizing the tree structure and any hierarchy more quickly, as well as increasing the readability and scannability of individual items [19]. A treemap is _"ordered"_ if its sub-regions are scannable in close to the same order as was given. Whether preserving order is essential depends on the application domain. For example, it is faster to find a node with a given label if the nodes are ordered alphabetically, or faster to find bonds that are paying out soon if they are ordered by maturity date. We leave generating ordered treemaps as a future research direction for the algorithms developed in this paper. **Encoding Multivariate Data--**Conveying maximum information through size, color, shade, placement location, adjacency relations, label, filling pattern, and nesting structure of the layout can enhance the utilization of treemaps [29]. Most algorithms could be easily enhanced with respect to this metric with some simple modifications, but these are outside the focus of this paper. Here we focus solely on the first three metrics for treemaps: space-filling, low aspect ratio, and stable. For a survey on user studies on the effectiveness of treemaps see [30].
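For axis-aligned rectangles the Hausdorff distance defined above is straightforward to compute: since the distance from a point to a convex set is a convex function, each directed supremum is attained at a corner of the rectangle. The following is a minimal Python sketch of this computation (an illustration only, not the implementation used in our experiments).

```python
from math import hypot

def point_rect_dist(p, r):
    """Euclidean distance from point p = (x, y) to axis-aligned rectangle r = (xmin, ymin, xmax, ymax)."""
    x, y = p
    xmin, ymin, xmax, ymax = r
    dx = max(xmin - x, 0.0, x - xmax)
    dy = max(ymin - y, 0.0, y - ymax)
    return hypot(dx, dy)

def corners(r):
    xmin, ymin, xmax, ymax = r
    return [(xmin, ymin), (xmin, ymax), (xmax, ymin), (xmax, ymax)]

def hausdorff(r1, r2):
    """HD(R, R') for axis-aligned rectangles; each directed supremum is attained at a corner."""
    d12 = max(point_rect_dist(c, r2) for c in corners(r1))
    d21 = max(point_rect_dist(c, r1) for c in corners(r2))
    return max(d12, d21)

# Example: a sub-rectangle that moved after the areas were perturbed.
print(hausdorff((0, 0, 2, 1), (0.5, 0, 2.5, 1.2)))  # approx. 0.539
```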
Fig. 2: The Hausdorff distance between rectangles \(R\) & \(R^{\prime}\) is defined as \(\mathrm{HD}(R,R^{\prime})=\max(d_{1},d_{2})\), where \(d_{1}\) is the largest Euclidean distance of a point in \(R\) to its closest point in \(R^{\prime}\) and \(d_{2}\) is the largest Euclidean distance of a point in \(R^{\prime}\) to its closest point in \(R\). ### _Treemap Algorithms_ The first treemap algorithm (and the visualization approach in general) was developed by Shneiderman [1, 2]. His Slice-and-Dice algorithm performs an alternating sequence of vertical and horizontal cuts. Recursively, the direction of the layout for each level of the tree is reversed, with cuts vertically at even levels and horizontally at odd levels. If the tree has only one level other than the root then all cuts will be vertical. This creates the so-called slices and dices. An example of Slice-and-Dice is shown in Fig. 1b. However, Slice-and-Dice is prone to generating thin, _"skinny"_ rectangles, which do poorly on our aspect ratio criteria. #### 2.3.1 Aspect Ratio Optimization Wattenberg's Cluster algorithm [22] was the first to address aspect ratio. Cluster was presented as a modification of Slice-and-Dice that avoids high-aspect-ratio rectangles by employing both vertical and horizontal partitions at each level of the hierarchy and by clustering similar rectangles adjacent or close to each other. Wattenberg used Cluster for what was likely the first stock market treemap [31]. The Squarified algorithm by Bruls et al. [17] similarly focused on generating sub-regions that are as close as possible to squares. It takes an input rectangle and the desired areas of the sub-rectangles. First, the desired areas are sorted in descending order. Then, in each iteration the algorithm places rectangles either vertically or horizontally according to a subset of areas from the sorted list. The subset is selected depending on the width and height of the remaining area of the input rectangle and such that the maximum aspect ratio of the rectangles in this subset is minimized. Nguyen and Huang [32] took a similar approach with a flexible acceptable maximum aspect ratio in each mentioned subset and some adjustments to the weights of sub-regions. A third approach to this problem was developed by Vernier & Nigay [33]. Their Modifiable treemaps algorithm allocates to each node of the tree a bounding box with a fixed aspect ratio, which can be modified by the user. It then creates a new set of recursive steps between two recursive steps of Slice-and-Dice [1, 2]. At each iteration, Modifiable generates several options from the current set of sub-rectangles and picks the one with the minimum sum of differences between the fixed aspect ratio and the aspect ratio of each contained rectangle.
Split then divides the input rectangle with a vertical or horizontal cut, depending on whether the width is larger than the height, then recurses. Introducing the stability criteria, Shneiderman & Wattenberg's Pivot[19] and the Bederson et al.'s Strip[35] algorithms take compromise approaches that still preserve ordering and good aspect ratios. Pivot selects one of the input areas as the pivot element, places it as a square-like sub-rectangle to partition the input rectangle \(R\) into three areas, and then recursively lays out rectangles from three lists \(L_{1}\)-\(L_{3}\) into each of the areas. \(L_{1}\) includes all sub-rectangles whose index in the input order is less than the index of the pivot. \(L_{2}\) and \(L_{3}\) are chosen such that the indices of sub-rectangles in \(L_{2}\) is less than the indices of sub-rectangles in \(L_{3}\) and the aspect ratio of the pivot is as close to \(1\) as possible. Strip adds rectangles one-by-one to strips, just like Squarified[17], but fixes the orientation of strips throughout and omits the initial sorting step to preserve order. Bederson et al. also introduce Quantum[35], which is specially suited for cases where all leaf nodes should have the same size (e.g., for photo browsers). For preserving order, Wood and Dykes took another approach to modify Squarified by assigning to each node a two dimensional location and proposed OrderedSquarified[24]. In a similar approach, Tu & Shen[9] introduce Spiral treemaps that tries to preserve order and continuity while minding the aspect ratios as the rectangles are being added. The algorithm works just like the Strip algorithm except that starting from the top left corner of \(R\) the direction of the strip alternates in an east-south-west-north manner. Other space-filling curves are also considered for generating rectangular treemaps. Tak and Cockburn proposed Hilbert and Moore treemaps[12] based on Hilbert and Moore fractal-like curves and showed that they perform well with respect to stability measures. Fractal-like treemaps is also used by D'Ambros et al.[36] to visualize the evolution of software systems which can be used for over preservation. The main goal in these space-filling curve-based treemaps is to preserve order and maximize stability and _readability_, where the latter is a measure to show the benefit of ordered layouts and quantifies the simplicity of visually scanning a treemap layout by counting e.g. the number of changes in direction a viewer's eye must make when scanning the rectangles in order[35]. In contrast to these, our spiral algorithms do not aim to preserve order and instead try to construct a balance between stability and aspect ratios. For this reason and the fact that they generally perform poorly with respect to aspect ratio, as shown in [9, 12, 23, 27], we skip Spiral, Hilbert, and Moore in our computational comparisons. Vernier et al. developed Greedy Insertion[37], aiming to balance aspect ratio and stability criteria, although the results suggests that the algorithm performs well on stability while scoring poorly on aspect ratio. It inserts sub-rectangles into the given rectangle one-by-one greedily and in the given order. Sondag et al. proposed Incremental[27] that tries to achieve stability through local moves when an initial layout is given. For a thorough comparative evaluation of dynamic treemap algorithms, we refer the reader to Vernier et al.[23]. 
This study runs an extensive computational experiments over more than 2000 data sets and compares 14 well-known treemap algorithms, summarized in Fig. 8 - Fig. 11 in [23], and suggests that Squarified[17] almost consistently outperforms all other well-known algorithms with respect to aspect ratio (our main metric in this paper). This is also demonstrated by the computational results over two large data sets summarized in Fig. 21 - Fig. 26 in the paper[27]. The latter paper also shows that it performs reasonably well with respect to stability on one of the large data sets and performs weaker on the other one. For this reason we compare our results with those Squarified. For the same reason and their parent nodes. It then follows the recursive steps of Squarified but instead of selecting nodes from a sorted list of areas, it selects the node closest to the current position in the layout rectangle. Duarte et al. [41] generalized the concept of order preservation to neighborhood preservation. They developed the Nmap algorithm that uses a slice and scale strategy to take a spatial data set in which each data point is assigned a location \((x,y)\in\mathbb{R}^{2}\) and a weight \(p\in\mathbb{R}\) and present it as a treemap where the closeness of rectangles tends to follow the closeness of the data points in \(\mathbb{R}^{2}\), while maintaining the visual quality. Unlike most treemap algorithms which are _constructive_, the Incremental by Sondag et al. [27] uses a local search. It can take an initial layout generated by any of the constructive algorithms and then tries to improve through a series of local moves like stretching and flipping, followed by the readjustment of areas. As a result, it can potentially generate all possible layouts--including _non-sliceable_ layouts, which are those that cannot be recursively sliced into two parts having at least one rectangle as a center of a windmill pattern. The algorithm generates layouts that score well both in aspect ratio and stability. However, the output quality and computational time greatly depends on the quality of the initial constructed layout as it takes \(\mathcal{O}(n^{2})\) to move from one layout to another--in addition to solving several linear equality systems for readjusting areas in each iteration. Non-rectangular treemaps have also been very popular. Balzer et al. [6, 7] developed an algorithm for generating Voronoi treemaps, where the planar figures representing the leaf nodes are the cells of a Voronoi diagram instead of rectangles. The algorithm is based on computing a weighted centroidal Voronoi tessellation (CVT) in which the weight of each sub-region is adjusted so the area of each cell is within a threshold \(e\) of the input (desired) areas. They provide an variant using additively-weighted Voronoi diagrams and power diagrams. Voronoi treemaps enable layouts within areas of arbitrary shape, such as triangles and circles. However, computation of CVTs is expensive and not suitable for dynamic updates in the input data. Others tried to improve the computational efficiency and other aspects of Voronoi treemaps [20, 42, 43, 44, 45]. Wang et al. [46] developed an orthogonal Voronoi treemap in which borders of cells are axis-aligned. Similarly, Wang et al. [47] proposed algorithms for generating Voronoi treemaps based on Manhattan (\(\ell_{1}\)) and Chebychev (\(\ell_{\infty}\)) distances. Liang et al. 
[4] extend Engdahl's Split[34] and the polygonal division idea of Nguyen & Huang [48] and use Divide & Conquer to create polygonal, angular, and--relevant to this paper--rectangular treemaps that could be laid out in almost any container shape. The use of space-filling fractal-like curves may also produce non-rectangular layouts such as Jigsaw treemaps [49] and Gosper treemaps [11]. Later, Chaturvedi et al. [10] proposed two heuristic-based treemap layouts which did not have any space-filling guarantees. Here, we only consider deterministic data, but rectangular treemaps are also produced in presence of uncertainty [50], in which case the layout structure could be overlapping. For a review on different variants of treemaps see [15]. #### 2.3.4 Performance Considerations Despite the wide variety of rectangular and non-rectangular treemap algorithms, less attention has been paid to performance guarantees. Nagamochi & Abe [51] consider the problem of partitioning a rectangle \(R\) into \(n\) rectangles with specified areas \(A_{1},...,A_{n}\) with objective functions such as the sum of the perimeters of the sub-rectangles (PERI-SUM), the maximum perimeter of the sub-rectangles (PERI-MAX), and the maximum aspect ratio of the sub-rectangles. They then present an \(\mathcal{O}(n\log n)\) algorithm with a \(1.25\) approximation factor for PERI-SUM, a \(2/\sqrt{3}\) approximation factor for PERI-MAX, and aspect ratio of at most \(\max\{\operatorname{AR}(R),3,1+\max_{i=1,...,n-1}\frac{A_{i+1}}{A_{i}}\}\). The algorithm recursively partitions \(R\) into two rectangles \(R_{1}\) and \(R_{2}\) such that \(\operatorname{Area}(R_{1})\geq\frac{1}{3}\operatorname{Area}(R)\) and \(\operatorname{Area}(R_{2})\geq\frac{1}{3}\operatorname{Area}(R)\)--except when the maximum input area is greater than \(\frac{1}{3}\operatorname{Area}(R)\), in which case \(\operatorname{Area}(R_{1})\) may not meet this condition. If the input rectangle is a square, this improves the \(1.75\) approximation factor and the computational complexity of Beaumont et al.'s algorithm for PERI-SUM [52], while extending the results for both PERI-SUM and PERI-MAX to the case where the input rectangle is not necessarily a square. Bui et al. [53] study the same problems under a strict rule that all sub-rectangles must be constructed by two-stage guillotine cuts, first with cuts parallel to the longer edge of \(R\) and then with cuts perpendicular to the first layer of cuts. Onak & Sidiropoulos [5] present an algorithm using convex polygons as the sub-regions, where the aspect ratio of each polygon, defined as \(\operatorname{AR}(P)=\frac{\operatorname{diam}(P)^{2}}{\operatorname{vol}(P)}\), is small. For a tree with \(n\) leaf nodes and depth \(d\), this aspect ratio is of \(\mathcal{O}\left((d\cdot\log n)^{17}\right)\). This weak bound was later improved to \(\mathcal{O}(d+\log n)\) in [54, 55] and to \(\mathcal{O}(d)\) in [56]. They further prove that orthoconvex treemaps; where the sub-regions representing leaf node are rectangles, L, and S-shapes, and sub-regions representing internal nodes are orthoconvex polygons; can be constructed with constant aspect ratio. For this paper, note the time complexities of Squarified [17]--and Divide & Conquer[4]--\(\mathcal{O}(n\log n)\). ## 3 Rectangular Treemapping as an Optimization Model Treemapping is essentially a geometric optimization problem. 
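As a concrete illustration of the row-packing idea behind Squarified [17], which serves as our main baseline, the following is a compact Python sketch. It is our own illustrative rendering of the published description (areas sorted in descending order, rows grown greedily while the worst aspect ratio does not deteriorate, each row laid along the shorter side of the remaining rectangle) and not the authors' reference implementation; areas are assumed positive and summing to the area of the input rectangle.

```python
def worst_ratio(row, side):
    """Worst aspect ratio in a row of areas laid out along a side of the given length."""
    s = sum(row)
    return max((side * side * max(row)) / (s * s), (s * s) / (side * side * min(row)))

def layout_row(row, rect):
    """Place one row of areas along the shorter side of rect = (x, y, w, h)."""
    x, y, w, h = rect
    s = sum(row)
    placed = []
    if w >= h:                          # row becomes a vertical strip of width s / h
        strip, yy = s / h, y
        for a in row:
            placed.append((x, yy, strip, a / strip))
            yy += a / strip
        rect = (x + strip, y, w - strip, h)
    else:                               # row becomes a horizontal strip of height s / w
        strip, xx = s / w, x
        for a in row:
            placed.append((xx, y, a / strip, strip))
            xx += a / strip
        rect = (x, y + strip, w, h - strip)
    return placed, rect

def squarify(areas, rect):
    """Squarified-style layout of areas (summing to the area of rect) inside rect."""
    areas = sorted(areas, reverse=True)
    result, row = [], []
    while areas:
        side = min(rect[2], rect[3])
        a = areas[0]
        if not row or worst_ratio(row + [a], side) <= worst_ratio(row, side):
            row.append(a)               # adding a does not worsen the row, so keep it
            areas.pop(0)
        else:
            placed, rect = layout_row(row, rect)
            result.extend(placed)
            row = []                    # start a new row in the shrunken rectangle
    placed, _ = layout_row(row, rect)
    return result + placed

print(squarify([6, 6, 4, 3, 2, 2, 1], (0, 0, 6, 4)))
```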
There are many closely-related geometric and space partitioning optimization problems that include packing, covering, and tiling--generally focused on minimizing wasted space or optimally allocating geographical resources. Related problems include: cutting stock; knapsack; bin packing; guillotine; disk covering; polygon covering; kissing number; strip packing; square packing; squaring the square; squaring the plane; and, in 3D space, cubing the cube and tetrahedron packing. The treemapping problem--specifically with the goal of minimizing the maximum aspect ratio of all sub-rectangles--was noted as NP-hard by Bruls et al. [17]. de Berg et al. later proved the problem is strongly NP-hard with a reduction from the square packing problem [56]. The related treemapping problem of minimizing the total perimeter of all sub-rectangles was proved by Beaumont et al. [57] to be NP-hard, using a reduction from the problem of partitioning a set of integers into two subsets of equal sum. Given this computational complexity, several heuristics have been developed for generating treemaps efficiently, as reviewed in the last section. In this paper, we will also introduce several new heuristic algorithms. Optimization approaches is understudied in the literature of the treemapping problem. Zhao and Lu [58] modeled circular treemapping problem as an optimization problem and developed a variational layout heuristic algorithm based on power diagram to solve it. Carrizosa et al. [59] proposed a specific type of treemap called space-filling box-connected map (SBM), with orthogonal sub-regions that are made of grid cells and have to satisfy the so-called _box connectivity constraints_, and modeled it as a mixed integer nonlinear program (MINLP) in which the objective is to minimize total dissimilarities between sub-regions as measured by various distance functions defined specifically for such sub-regions. They then solved it heuristically using a Large neighborhood Search algorithm. Fried et al. [60] proposed a \(p\)-norm (energy function) minimization model for the problem of assigning a set of visual objects to a set spatial locations in form of a 2D grid. The pairwise distances between objects are given and the goal is to minimize the discrepancies between the given distances and the mapped distances measured by the Euclidean distances between the grid cells. They then solve the problem using a heuristic called _IsoMatch_. Fugenschuh and Fugenschuh [61] modeled a metal sheet product design problem as a 2D grid slicing problem as a binary integer program that in essence has some similarities with the border length minimization in a grid-based orthogonal treemapping problem. Fugenschuh et al. [62] considered the problem of minimizing total border length in partitioning a rectangle to a set of sub-rectangled with given areas. They reiterated an MINLP model previously presented and linearized as a mixed integer linear program in [63], modified the approximation algorithm of [51], and used the solutions provided by that as advanced starters and showed how this helps to speed-up the optimization process. We noticed this work after finalizing this research and it seems to be the closest in the literature to our model, which is more detailed, explicit, and comprehensive. Our model incorporates several variants of rectangular treemapping and considers a much bigger feasibility region. This gives the user much more flexibility in generating treemaps with specifics designs or characteristic requirements. 
In order to provide a baseline for comparison, we propose an integer nonlinear Optimization Model that minimizes the total perimeter of all sub-rectangles. This is motivated by the fact that the perimeter of a rectangle is minimized when it is a square. Note that this is a proxy for minimizing the average aspect ratio and the optimal solution will consist of square-like rectangles. One could consider minimizing the maximum perimeter or the maximum aspect ratio as well. Nevertheless, the solution to these problems may not be the same. To see this, consider an example with four rectangles and two solutions with aspect ratios \(\{1,1,1,5\}\) and \(\{1,2,3,4\}\). The former is a better solution concerning the average aspect ratio, while the latter is better regarding the maximum aspect ratio. To formally define our problem, suppose we are given a rectangle \(R\) with \(\text{width}(R)=w\) and \(\text{height}(R)=h\) aligned orthogonally to the \(x\) and \(y\) axes with its lower-left corner at the origin. We are also given numeric areas \(A_{1},...,A_{n}\) s.t. \(\sum_{i=1}^{n}A_{i}=\text{Area}(R)\). We want to partition \(R\) into \(n\) non-overlapping rectangular sub-regions \(R_{1},...,R_{n}\) with areas \(A_{1},...,A_{n}\) in a way that the total perimeter of all rectangles (cut length) is minimized. One of the advantages of our Optimization Model is that it can be easily adjusted to incorporate specific constraints or other evaluation metrics. One could change the objective function from total perimeter to maximum aspect ratio or area weighted average aspect ratio or any other metric that could be constructed as a function of dimensions or coordinates of the sub-regions. Here, we consider a parametrized objective function to take advantage of this flexibility. We consider an area-weighted total perimeter. We can switch between the total perimeter and area-weighted version using a binary parameter \(\alpha\in\{0,1\}\). We also incorporate another degree of flexibility in our objective function which may be very useful in practice. Often we may want to compromise the aspect ratio slightly and in return gain rectangles that are more horizontal, suitable for longer labels, or more aligned with each other, making it easier to compare the areas. In order to do that, we define our objective function as a weighted-average of total area-weighted perimeter and the number of rectangles having greater width than height. Greater values for the parameter \(\beta\geq 0\) puts more weight on creating more horizontal rectangles. Similarly, we might be interested to modify the objective function to incorporate our stability metric as well to make a trade-off between perimeter (aspect ratio) and stability. We must point out that since all of sub-regions in the generated treemaps are rectangles and thus convex, for any pair \(R,R^{\prime}\), the function \(\text{HD}(R,R^{\prime})\) is convex. However, since \(R^{\prime}\) is the corresponding sub-rectangle to \(R\) in the updated layout after perturbing areas (i.e., areas changing over time), the exact formula of this metric and the way it should be added to the objective function and then the solution analysis of that model is beyond the scope of this paper. However, we can easily incorporate some design preferences such as closeness of some sub-regions. Depending on the application, we may be interested in having certain sub-rectangles close to each other in the final treemap. Let, \(\mathbf{v}^{i}\) denote the lower-left vertex of \(R_{i}\). 
We can measure the closeness between \(R_{i}\) and \(R_{j}\) by the Euclidean distance between their lower-left corners, i.e., \(\|\mathbf{v}^{i}-\mathbf{v}^{j}\|\). We can also consider additional positioning and shape constraints. For example, here we allow the user to opt, via parameter \(\delta_{i}\in\{0,1\},\ \forall i\), to fix the position of one sub-rectangle to one of the corners of the input rectangle. We also allow the user to require two particular sub-rectangles to be adjacent, i.e., sharing a corner, an edge (two corners), or part of an edge, by adjusting parameters \(\eta_{ij},\theta_{ij}\in\{0,1\},\ \forall i,j\). Note that one could do this for multiple rectangles too; however, it may increase the risk of infeasibility. In the following we present an MINLP model for the considered problem. The parameters for any rectangles \(R_{i},R_{j}\) with \(i,j\in\{1,...,n\}\) are: \[\alpha =\left\{\begin{array}{ll}1&\text{area-weighted objective;}\\ 0&\text{otherwise.}\end{array}\right.\] \[\beta :\text{ a weight parameter for horizontality}\] \[\gamma_{ij} =\left\{\begin{array}{ll}1&\text{if $R_{i}$ is preferred to be close to $R_{j}$;}\\ 0&\text{otherwise.}\end{array}\right.\] \[\delta_{i} =\left\{\begin{array}{ll}1&\text{if $R_{i}$ has to be on the lower left corner;}\\ 0&\text{otherwise.}\end{array}\right.\] \[\text{ We must have }\delta_{1}+\cdots+\delta_{n}\leq 1\] \[\eta_{ij} =\left\{\begin{array}{ll}1&\text{if $R_{j}$ must be adjacent to and to the right of $R_{i}$;}\\ 0&\text{otherwise.}\end{array}\right.\] \[\text{ We must have }\eta_{ij}+\eta_{ji}\leq 1.\] \[\theta_{ij} =\left\{\begin{array}{ll}1&\text{if $R_{j}$ must be adjacent to and above $R_{i}$;}\\ 0&\text{otherwise.}\end{array}\right.\] \[\text{ We must have }\theta_{ij}+\theta_{ji}\leq 1.\] The decision variables, for any rectangles \(R_{i},R_{j}\) with \(i,j\in\{1,...,n\}\), are: \[w_{i} :\text{ width of $R_{i}$}\] \[h_{i} :\text{ height of $R_{i}$}\] \[v^{i} :\text{ lower-left corner of $R_{i}$ with $v^{i}=(v^{i}_{x},v^{i}_{y})$}\] \[x_{ij} =\left\{\begin{array}{ll}1&\text{if $v^{j}_{x}<v^{i}_{x}+w_{i}$;}\\ 0&\text{otherwise.}\end{array}\right.\] \[y_{ij} =\left\{\begin{array}{ll}1&\text{if $v^{j}_{y}<v^{i}_{y}+h_{i}$;}\\ 0&\text{otherwise.}\end{array}\right.\] \[z_{i} =\left\{\begin{array}{ll}1&\text{if $h_{i}\leq w_{i}$;}\\ 0&\text{otherwise.}\end{array}\right.\] We can write our optimization problem as: \[\text{minimize}\sum_{i=1}^{n} \left((1-\alpha+\alpha A_{i})((w_{i}+h_{i})-\beta z_{i})\right)\] \[+\sum_{i,j=1}^{n}\gamma_{ij}\|\mathbf{v}^{i}-\mathbf{v}^{j}\| \text{ s.t.}\] \[\log A_{i}-\log w_{i}-\log h_{i} \leq 0, \forall i \tag{1}\] \[v^{i}_{x}+w_{i} \leq w, \forall i\] (2) \[v^{i}_{y}+h_{i} \leq h, \forall i\] (3) \[v^{i}_{x}+v^{i}_{y} \leq(w+h)(1-\delta_{i}), \forall i\] (4) \[v^{i}_{x}-v^{j}_{x}+w_{i} \leq wx_{ij}, \forall i,j\] (5) \[v^{i}_{x}-v^{j}_{x}+w_{i} \geq(\epsilon-w(1-x_{ij}))(1-\eta_{ij}),\,\forall i,j\] (6) \[x_{ij} \leq 1-\eta_{ij}\quad\forall i,j\] (7) \[v^{i}_{y}-v^{j}_{y}+h_{i} \leq hy_{ij}, \forall i,j\] (8) \[v^{i}_{y}-v^{j}_{y}+h_{i} \geq(\epsilon-h(1-y_{ij}))(1-\theta_{ij}),\,\forall i,j\] (9) \[y_{ij} \leq 1-\theta_{ij}\quad\quad\quad\forall i,j\] (10) \[x_{ij}+x_{ji}+y_{ij}+y_{ji} \leq 3, \forall i,j\] (11) \[w_{i} \geq h_{i}-h(1-z_{i}), \forall i\] (12) \[h_{i} \geq w_{i}-wz_{i}, \forall i\] (13) \[v^{i}_{x},\,v^{i}_{y},\,w_{i},\,h_{i} \geq 0, \forall i\] \[x_{ij},\,y_{ij} \in\{0,1\}, \forall i,j\] \[z_{i} \in\{0,1\}, \forall i\] where \(\epsilon>0\) is a very small real number.
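A reduced core of this model (the area constraints in convex log form, the boundary constraints, and the non-overlap constraints) could be prototyped as in the following Python/Pyomo sketch. This is an illustration only, not our actual implementation: the preference, corner, adjacency, and horizontality features (parameters \(\alpha,\beta,\gamma,\delta,\eta,\theta\)) are omitted, and the solver name is a placeholder for whichever MINLP solver is available to the reader.

```python
import math
from pyomo.environ import (ConcreteModel, RangeSet, Var, Binary, Objective,
                           Constraint, SolverFactory, log, minimize)

def build_core_model(areas, W, H, eps=1e-3):
    """Core of the treemap MINLP: area, boundary, and non-overlap constraints only."""
    n = len(areas)
    m = ConcreteModel()
    m.I = RangeSet(0, n - 1)
    m.w = Var(m.I, bounds=(eps, W))        # widths
    m.h = Var(m.I, bounds=(eps, H))        # heights
    m.vx = Var(m.I, bounds=(0, W))         # lower-left x coordinates
    m.vy = Var(m.I, bounds=(0, H))         # lower-left y coordinates
    m.x = Var(m.I, m.I, domain=Binary)     # relative-position indicators
    m.y = Var(m.I, m.I, domain=Binary)

    # Total (half-)perimeter objective, favoring square-like sub-rectangles.
    m.obj = Objective(expr=sum(m.w[i] + m.h[i] for i in m.I), sense=minimize)
    # Area constraints in convex log form, cf. constraint (1).
    m.area = Constraint(m.I, rule=lambda m, i:
                        log(m.w[i]) + log(m.h[i]) >= math.log(areas[i]))
    # Stay inside the layout rectangle, cf. (2)-(3).
    m.right = Constraint(m.I, rule=lambda m, i: m.vx[i] + m.w[i] <= W)
    m.top = Constraint(m.I, rule=lambda m, i: m.vy[i] + m.h[i] <= H)
    # Big-M linking of positions to the indicator variables, cf. (5) and (8).
    m.sepx = Constraint(m.I, m.I, rule=lambda m, i, j: Constraint.Skip if i == j
                        else m.vx[i] - m.vx[j] + m.w[i] <= W * m.x[i, j])
    m.sepy = Constraint(m.I, m.I, rule=lambda m, i, j: Constraint.Skip if i == j
                        else m.vy[i] - m.vy[j] + m.h[i] <= H * m.y[i, j])
    # At least one separating direction per pair, cf. (11).
    m.no_overlap = Constraint(m.I, m.I, rule=lambda m, i, j: Constraint.Skip if i >= j
                              else m.x[i, j] + m.x[j, i] + m.y[i, j] + m.y[j, i] <= 3)
    return m

# model = build_core_model([6, 6, 4, 3, 2, 2, 1], W=6, H=4)
# SolverFactory("couenne").solve(model)    # placeholder: any available MINLP solver
```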
Constraint (1), written in its convex form, is to ensure that each rectangle \(R_{i}\) with width \(w_{i}\) and height \(h_{i}\) will have its associated area \(A_{i}=w_{i}\times h_{i}\). Note that this constraint will be active at optimality. Constraints (2) and (3) guarantee that the ending point of any rectangle on either axis does not violate the width and height of \(R\). Constraint (4) allows the solver to fix one rectangle's position to have its lower-left corner on the origin. Constraints (5)-(11) are added to avoid overlapping among rectangles as well as to enforce adjacencies if required. Finally, constraints (12) and (13) determine whether a sub-rectangle is horizontal. Fig. 3 shows an implementation of this new model over random examples in layout boxes with aspect ratios 1, 2, and 3, and compares the sensitivity of the results to one of the parameters, i.e., different values of \(\beta\). A major advantage of this Optimization Model approach is that it focuses on the optimal placement of each sub-region as opposed to dividing the region by guillotine cuts. Therefore, it explores all possible layouts including the _non-sliceable_ layouts (see Figs. 3(a) and 3(b)). This problem is NP-hard, as mentioned above and proved in [57]. In the rest of this paper, we provide several suboptimal but efficient algorithms for this optimization problem. In Section 4 we will present our subdivision-based algorithms and in Section 5 we will develop our packing-based algorithms. In these algorithms, similar to our Optimization Model, minimizing the total perimeter (and thus the relevant aspect ratio measures) is our main goal; stability is more of a secondary metric for comparison and is not directly targeted. However, by construction, stability has a higher weight in our spiral algorithms. We compare these to Optimization Model, which gives the absolute best treemaps concerning aspect ratios (when the input region is given), Squarified[17], which has some of the best performance in the literature with respect to creating square-like rectangles, and Divide & Conquer[4], due to its similar approach and the fact that it also performs well and can lay out rectangles in any container shape.

## 4 Subdivision Algorithms

### _A Modified Divide and Conquer Algorithm_

Given the structure of the problem, which is based on a tree, it is natural to take a divide and conquer approach. Our divide and conquer algorithm is in the same spirit as Liang et al.'s Divide & Conquer[4] algorithm, which appears to be an extension of Engdahl's Split[34] and the best existing divide and conquer approach to treemapping. Divide & Conquer tries to divide the input areas into two lists of equal weights in each iteration and recursively continues until each sublist has only one area. However, this is sensitive to the input parameters and works well only when there is no extreme area. E.g., take an \(8\times 4\) rectangle to be partitioned into 18 sub-rectangles with areas \(\{15,1,1,1,...,1\}\). The algorithm sets \(S_{1}=\{R_{1},R_{2}\}\) and \(S_{2}=\{R_{3},...,R_{18}\}\) with \(\sum_{i:R_{i}\in S_{1}}A_{i}=\sum_{i:R_{i}\in S_{2}}A_{i}=16\). This will generate a very thin rectangle \(R_{2}\). In the rectangular version of the algorithm we would have \(\operatorname{Perim}(R_{2})=2\times(4+1/4)=8.5\) and \(\operatorname{AR}(R_{2})=16\). It is clear that setting \(S_{1}=\{R_{1}\}\) and \(S_{2}=\{R_{2},R_{3},...,R_{18}\}\) could generate a much better result with the maximum aspect ratio very close to 1.
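The arithmetic behind this example can be verified directly; the short sketch below reproduces the numbers quoted above and is purely illustrative.

```python
# The 8 x 4 example: areas {15, 1, 1, ..., 1} (18 areas in total, summing to 32).
areas = [15] + [1] * 17
assert sum(areas) == 32

# Equal-weight split: S1 = {A1, A2} and S2 = {A3, ..., A18}, each summing to 16,
# so a vertical cut yields two 4 x 4 halves; inside the left half A2 becomes a sliver.
w2, h2 = 1 / 4, 4
print("Perim(R2) =", 2 * (w2 + h2))                   # 8.5
print("AR(R2)    =", max(w2, h2) / min(w2, h2))       # 16.0

# Splitting off only A1 = 15 instead: the first piece is (15/4) x 4.
w1, h1 = 15 / 4, 4
print("AR(R1)    =", max(w1, h1) / min(w1, h1))       # ~1.07
```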
An alternative approach is to divide the list of areas in half according to the indices, i.e., \(S_{1}=\{R_{1},...,R_{8}\}\) and \(S_{2}=\{R_{9},...,R_{18}\}\) for our example above, and then recur on each sub-list. But this suffers from the same problem as Divide & Conquer[4] with extreme areas. However, we propose a remedy. In our new Modified D&C algorithm we first divide the input set of areas into two equally weighted lists \(S_{1}=\{A_{1},...,A_{k-1},A_{k}\}\) and \(S_{2}=\{A_{k+1},A_{k+2},...,A_{n}\}\). We then check an additional condition that could change the list that \(A_{k}\) or \(A_{k+1}\) belongs to. For some real \(c>0\), if \(|A_{\max\{1,k-1\}}-A_{k}|>c\cdot|A_{k}-A_{\min\{k+1,n\}}|\), we pick the better of adding \(A_{k}\) to \(S_{2}\) or \(A_{k+1}\) to \(S_{1}\) by running both. Otherwise, we make no change and continue the divide and conquer as usual. Parameter \(c\) can be adjusted by the user and gives additional flexibility to the algorithm (in our experiments we use \(c=2\)). Intuitively, this algorithm tries to avoid generating long and skinny rectangles at the division points, so a significant improvement in the maximum aspect ratio metric is expected. The pseudocode for Modified D&C, for the case where cuts are either vertical or horizontal (depending on the width and height of the remaining segment), is shown in Alg. 1. As with Divide & Conquer, it can be easily modified and generalized to handle polygonal and angular cuts and to have no restriction on the input layout container shape. This Modified D&C algorithm remedies the issue in Divide & Conquer in the same computational time of \(\mathcal{O}(n\log n)\).

### _A Dynamic Programming Approach_

We can improve the quality even further. Here, we present a new Dynamic Programming algorithm that dynamically picks the best dividing point to minimize the objective function. First, we sort the list of areas in non-increasing order. Then, in each step, we divide the areas into two sub-lists. With these, we dissect \(R\) into two sections by a guillotine cut so that each has an area equal to the total area of its associated list. We find the dividing point in the list of areas in each iteration using the recursive equation:

\[\begin{array}{ll}P_{(i)}&:\text{ the perimeter of sub-rectangle }R_{i}\\ P_{(i,j)}&:\text{ the total perimeter of sub-rectangles }R_{i},\dots,R_{j}\\ P_{(1,n)}&=\min_{1\leq k<n}\{P_{(1,k)}+P_{(k+1,n)}\}\\ P_{(i,j)}&=\left\{\begin{array}{ll}P_{(i)}&\text{ if }i=j\\ \min_{i\leq k<j}\{P_{(i,k)}+P_{(k+1,j)}\}&\text{ if }i<j\end{array}\right.\end{array} \tag{14}\]

This approach of course leads to a high quality solution due to considering a much larger feasible space, but it also leads to an exponential running time of \(\mathcal{O}(3^{n})\). The reason is that, unlike partitioning a discrete set of numbers, each subset of areas \(\{A_{i},...,A_{j}\}\) could represent several different subproblems depending on the location of those sub-rectangles and the sequence of vertical and horizontal cuts that produced them.

Fig. 3: An illustration of treemaps generated by our Optimization Model for three different weight factors \(\beta=0,\ 0.05,\ 0.1\), while all other parameters are set to zero. Larger values of \(\beta\) generate more horizontal rectangles to fit longer labels. Note the _non-sliceable_ layouts in cases: (a), (b), and (i). For each treemap, the total perimeter of the component subrectangles is shown with the best (shortest) and second-best values shown using color.
```
Algorithm 1: Modified D&C (R, L, c)
Generates a rectangular treemap with the given areas and bounding rectangle
according to a modified divide and conquer approach.

Input:  rectangle R, a list of n areas L = {A_1, A_2, ..., A_n} with
        sum_{i=1}^{n} A_i = Area(R), and a real constant c.
Output: partition R_1, R_2, ..., R_n with areas A_1, A_2, ..., A_n.

Function main(R, L):
    let w and h denote the width and height of R
    if |L| = 1 then return (w + h, R)
    sort L in non-increasing order and reindex the sorted areas as A_1, ..., A_n
    return Partition(R, L, 1, n, c)

Procedure Partition(Q, L, start, stop, c):
    let w and h denote the width and height of Q
    set S_1 = 0 and S_2 = sum_{i=start}^{stop} A_i
    for k = start to stop - 1:
        set S_1 = S_1 + A_k and S_2 = S_2 - A_k
        if |S_1 - S_2| < |(S_1 + A_{k+1}) - (S_2 - A_{k+1})| then break
    if A_{max(start, k-1)} - A_k > c * (A_k - A_{min(k+1, stop)}) then
        // candidate split 1: move A_k to the second list
        set S_1'  = S_1 - A_k      and S_2'  = S_2 + A_k
        // candidate split 2: move A_{k+1} to the first list
        set S_1'' = S_1 + A_{k+1}  and S_2'' = S_2 - A_{k+1}
        for each candidate split (S_1*, S_2*):
            if w > h then divide Q vertically into Q_1 (width S_1*/h, height h, left)
                          and Q_2 (width w - S_1*/h, height h, right)
            else divide Q horizontally into Q_1 (width w, height S_1*/w, top)
                          and Q_2 (width w, height h - S_1*/w, bottom)
        return the better of
            Partition(Q_1', L, start, k-1, c)  ∪  Partition(Q_2', L, k, stop, c)
        and
            Partition(Q_1'', L, start, k+1, c) ∪  Partition(Q_2'', L, k+2, stop, c)
    else
        if w > h then divide Q vertically into Q_1 (width S_1/h, height h, left)
                      and Q_2 (width w - S_1/h, height h, right)
        else divide Q horizontally into Q_1 (width w, height S_1/w, top)
                      and Q_2 (width w, height h - S_1/w, bottom)
        return Partition(Q_1, L, start, k, c) ∪ Partition(Q_2, L, k+1, stop, c)
```

### _Comparison of Recursive Subdivision Approaches_

We compare the results of the divide and conquer algorithms and the dynamic programming algorithm with Optimization Model using several examples. Consider a tree with 7 leaves with weights \((w_{1},...,w_{7})=(0.1277,\,0.0837,\,0.0922,\,0.2235,\,0.2845,\,0.0994,\,0.0890)\), which add up to 1.
Assume, without loss of generality, that we want to visualize it in a treemap with a unit square as the layout container in a way that minimizes the total perimeter of the rectangles. Figs. 5(a)-5(d) show the results using Optimization Model, our dynamic programming and divide and conquer algorithms, and Divide & Conquer[4]. Optimization Model results in the lowest total perimeter, followed by Dynamic Programming. Figs. 5(e)-5(h) compare the same approaches for an extreme random example \((w_{1},...,w_{7})=(0.0795,\,0.0709,\,0.1074,\,0.1121,\,0.3980,\,0.1023,\,0.1298)\), where one weight (area) is much larger. In this extreme case, Dynamic Programming performs very well and in fact finds the optimal solution. We repeat the comparison for the treemap of market capitalization of different sectors in the U.S. Stock Market (Figs. 5(i)-5(l)) and ratios of population of the states in the U.S. Midwest (Figs. 5(m)-5(p)). In both cases, Optimization Model performs best, followed by Dynamic Programming. We have only been comparing the results by the total perimeter metric--the objective function of Optimization Model. It should be mentioned that when the optimal solution was sliceable, Dynamic Programming was able to find it (Fig. 5(f)). We can generally say that when the optimal solution is sliceable, Dynamic Programming can either find it or get very close to it. However, just like most other treemap algorithms, it does not explore non-sliceable layouts. Divide & Conquer and Modified D&C performed more or less similarly to each other on these examples. Section 6 details more comprehensive comparisons that show both Dynamic Programming and Modified D&C beat the Divide & Conquer of [4].

## 5 Constructive Algorithms for Spiral Treemaps

Most of the existing algorithms, including the algorithms we introduced in the previous section, generate treemaps that lack an overall pattern. Such patterns could make treemaps more appealing visually, make it easier to compare the size of different sub-regions, and facilitate spotting specific sub-regions (increasing the readability as described in Bederson et al. [35]). In contrast, spiral treemaps place the sub-regions in a way that the overall layout resembles a spiral centered inside the container. As opposed to treemaps created by guillotine cuts or by sequential placement of rectangles in horizontal or vertical directions, spiral treemaps have not been adequately studied [9]. Here we present three new algorithms for generating spiral treemaps with rectangular sub-regions. An experiment conducted by Kong et al. [3] suggests that rectangles with large aspect ratios \(\geq 9/2\) reduce the accuracy of area comparison tasks, particularly when the rectangles have different orientations. They also show that the accuracy of the perception is equally poor when comparing squares. What actually seems to help is having a distribution of reasonable non-square aspect ratios. Their results particularly show that optimizing towards a 3/2 aspect ratio performs better than Squarified. Lu et al. [64] present an algorithm that, instead of targeting squares like Squarified, tries to reach a layout with rectangles having aspect ratios as close as possible to the golden ratio \(\phi=\frac{1+\sqrt{5}}{2}\simeq 1.618\)--so-called golden rectangles. This is also close to the 3/2 ratio conjectured by Kong et al. Golden ratios, golden rectangles, and golden spirals have been discovered in nature, the human body, and galaxies.
Fig. 4: Treemaps generated using our Dynamic Programming approach laid out in a rectangle or hexagon. (a) and (b) show the market capitalization of each sector of the U.S. Stock Market after closing on 2020-12-18. (c) and (d) display ratios of population of 12 states in the Midwest based on 2010 U.S. Census data.

Figure 5: These visualizations show the effect of applying four treemap layout algorithms to four weighted, single level trees. The treemaps are generated and laid out in a unit box using either our Optimization Model, our dynamic divide and conquer algorithm (Dynamic Programming), our modified divide and conquer algorithm (Modified D&C), or the divide and conquer approach of Liang et al. [4] (Divide & Conquer). Two of the trees have seven leaves with random weights (areas, given by \(L=\{\ldots\}\)), with the second (Extreme) example containing a weight much larger than the others. The third tree shows the market capitalization of different sectors in the U.S. Stock Market after closing on 2020-12-18. The final tree shows ratios of state population in the U.S. Midwest from the 2010 U.S. Census. For each treemap, the total perimeter of the component subrectangles is shown with the best (shortest) and second-best values shown using color. Note that Optimization Model results in the shortest perimeter, and our Dynamic Programming performs the best among the other algorithms. Our Modified D&C gives better results than Divide & Conquer on two instances including the extreme example (for which it is designed), and performs weaker on the other two instances. The extreme example shows the largest gap between the two algorithms, as expected, and the gap on the maximum aspect ratio metric is even larger. Moreover, Optimization Model produces _non-sliceable_ layouts in three cases: (a), (i), and (m).

```
Algorithm: Dynamic Programming partition
Input:  rectangle R and a list of n areas L = {A_1, A_2, ..., A_n} with
        sum_{i=1}^{n} A_i = Area(R).
Output: partition R_1, R_2, ..., R_n with areas A_1, A_2, ..., A_n.

Function main(R, L):
    let w and h denote the width and height of R
    if n = 1 then return (w + h, R)
    sort L in non-increasing order and reindex the sorted areas as A_1, ..., A_n
    // pBest is a structured array that stores the best perimeter and best partition
    // for a block A_start, ..., A_stop in a rectangle with width w and height h
    // and its lower-left corner on the origin
    let pBest = { }
    return Partition(R, L, 1, n)

Procedure Partition(Q, L, start, stop):
    let w and h denote the width and height of Q
    let q = (q_x, q_y) be the lower-left corner of Q and q_0 = (0, 0)
    if the tuple (start, stop, w, h) exists in pBest.ID then
        let k be the index in pBest.ID that stores (start, stop, w, h)
        return (pBest.Perim(k), pBest.Par(k) translated by q)
    set bestPrm = an arbitrarily large number
    if start = stop then
        set bestPrm = w + h and bestPar = Q
    else
        for k = start to stop - 1:
            set S = sum_{i=start}^{k} A_i
            divide Q vertically into Q_1 (w_1 = S/h, h_1 = h, left) and
                                     Q_2 (w_2 = w - S/h, h_2 = h, right)
            divide Q horizontally into Q_3 (w_3 = w, h_3 = S/w, top) and
                                       Q_4 (w_4 = w, h_4 = h - S/w, bottom)
            (Prm_1, Par_1) = Partition(Q_1, L, start, k); (Prm_2, Par_2) = Partition(Q_2, L, k+1, stop)
            (Prm_3, Par_3) = Partition(Q_3, L, start, k); (Prm_4, Par_4) = Partition(Q_4, L, k+1, stop)
            take the cheaper of the vertical split (Prm_1 + Prm_2) and the
            horizontal split (Prm_3 + Prm_4); if it improves bestPrm, update
            bestPrm and bestPar accordingly
    store (start, stop, w, h), bestPrm, and bestPar (shifted to the origin) in pBest
    return (bestPrm, bestPar)
```
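A compact memoized realization of this procedure can be sketched as follows; it is illustrative only, returns the optimal value rather than the layout, and uses \(w+h\) as the perimeter proxy of a single piece, as in the pseudocode above.

```python
from functools import lru_cache
from itertools import accumulate

def dp_min_perimeter(areas, W, H):
    """Best total (w + h) perimeter proxy achievable by guillotine cuts over the
    contiguous blocks of the non-increasingly sorted areas, cf. recurrence (14)."""
    A = sorted(areas, reverse=True)
    prefix = [0] + list(accumulate(A))

    @lru_cache(maxsize=None)
    def P(i, j, w, h):
        if i == j:
            return w + h
        best = float("inf")
        for k in range(i, j):
            s = prefix[k + 1] - prefix[i]          # total area of the block A_i..A_k
            # vertical cut: left piece of width s/h, right piece of width w - s/h
            best = min(best, P(i, k, s / h, h) + P(k + 1, j, w - s / h, h))
            # horizontal cut: top piece of height s/w, bottom piece of height h - s/w
            best = min(best, P(i, k, w, s / w) + P(k + 1, j, w, h - s / w))
        return best

    return P(0, len(A) - 1, W, H)

print(dp_min_perimeter([0.5, 0.25, 0.25], 1, 1))   # unit square, three areas
```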
### _Square-Bundled Spiral Treemap Algorithm_

As in Symmetric Spiral, we first sort the areas in ascending order and place the first two rectangles so that their aspect ratio is 2 (i.e., \(\rho_{s}=2\)). The changes in direction also follow the same procedure. However, instead of placing each rectangle in a spiral pattern we place bundled square-like rectangles in a spiral pattern. Bundles are formed in each step by finding a subset of remaining rectangles that makes the bundle as close as possible to a square--given that one side of the bundled rectangle is already fixed to one side of the union of already-placed rectangles. This makes our treemap as close as possible to the Fibonacci spiral, where each new box is a square with an edge length equal to the width (height) of the union of the previously-placed rectangles. The layout inside a bundle can be created using any rectangular treemapping algorithm as a subroutine. In our implementation we use Squarified. Our choice comes from our observation that for square or square-like input regions Squarified has an advantage. Pseudocode for Square-Bundled Spiral is shown in Alg. 4 and its computational complexity is \(\mathcal{O}(n\log n)\) due to the sorting step; the rest of the algorithm runs in \(\mathcal{O}(n)\). In Alg. 4 we also follow a clockwise-outward-growing rotation for forming the spiral structure.

### _Strip-Bundled Spiral Treemap Algorithm_

As an alternative approach for improving Symmetric Spiral, we also develop Strip-Bundled Spiral. The key difference is how many rectangles we place at each change in direction.
By considering the side of the current union where the next rectangle has to be placed as a strip, with its width (height) fixed to this side of the current union and its height (width) being flexible, we can add additional rectangles one-by-one, using the flexibility of the strip, to optimize the aspect ratio before changing the direction and moving to the next side. As in Symmetric Spiral and Square-Bundled Spiral, we first sort the areas in ascending order and add the first two rectangles with \(\rho_{s}=2\). In each step, we create a strip on the top, right, bottom, or left side of the union of already-placed rectangles. We then add the remaining rectangles to the strip in order until the maximum aspect ratio of the rectangles does not further improve. Then we change direction, create the next strip, and repeat this procedure until all rectangles are placed. Pseudocode for Strip-Bundled Spiral is shown in Alg. 5 and its computational complexity is also \(\mathcal{O}(n\log n)\) due to the sorting step; the rest of the algorithm runs in \(\mathcal{O}(n)\). In Alg. 5 we again follow a clockwise-outward-growing rotation for forming the spiral structure.

```
Algorithms 4 (Square-Bundled Spiral) and 5 (Strip-Bundled Spiral)
Input:  a list of n areas L = {A_1, A_2, ..., A_n}.
Output: a rectangle R partitioned into n sub-rectangles R_1, R_2, ..., R_n with
        areas A_1, A_2, ..., A_n such that Area(R) = sum_{i=1}^{n} A_i.
```

### _Comparison of Spiral Algorithms_

Here we compare the results of the above three spiral algorithms with Squarified, i.e., one of the best existing algorithms with respect to aspect ratio, as shown in, e.g., [23, 27]. As previously seen in [27], and as we observe here too, it can also perform reasonably well on stability depending on the data set and the measure of stability. Since the main objective of all four algorithms is to minimize aspect ratio, we examine the maximum aspect ratio (maxAR). We also compare their stability using the maximum Hausdorff distance (maxHD). (Section 6 provides a more comprehensive comparison.) Fig. 6 illustrates the output treemaps of our spiral algorithms compared with Squarified on a single level tree of 60 nodes with random weights. It is clear that Square-Bundled Spiral and Strip-Bundled Spiral are more visually appealing than Symmetric Spiral. Here, both improved spiral algorithms score better on aspect ratio and stability than Squarified. Symmetric Spiral is particularly bad on aspect ratio, as expected, but better than Squarified and the other two spiral algorithms on stability. Another example in Fig. 7 shows treemaps of the number of COVID-19 diagnosed cases in 52 different U.S. states and territories as of 2020-05-20, generated using the same four algorithms. The data is collected from the Centers for Disease Control and Prevention (CDC). In this example, Squarified has the best aspect ratio with Strip-Bundled Spiral a close second. All of our spiral algorithms score
better than Squarified for stability. Although Symmetric Spiral created poor aspect ratios, in this application it may be preferred to the other approaches, since it makes the sorted order, which matters more in this application, easier to follow. Finally, these examples also demonstrate our expectation that Square-Bundled Spiral is the closest to the Fibonacci spiral, as the distribution of the edge lengths of its blocks is closer to the Fibonacci sequence. One could try to study the impact of choices other than Squarified for the layout inside each bundled block.

## 6 Experiments and Results

Moving beyond usage examples, we here computationally compare treemap algorithms based on aspect ratio and stability. We consider the following evaluation criteria. For aspect ratio: total perimeter, maximum and average aspect ratio (maxAR and avgAR), and area-weighted average aspect ratio (AWAR). For stability: the maximum and average Hausdorff distance (maxHD and avgHD). As discussed in Section 2.3, we compare our algorithms against Optimization Model and the best-of-breed alternative Squarified[17], whose superior quality with respect to aspect ratio measures is demonstrated by extensive computational results summarized in Figs. 8-11 in [23] and Figs. 21 and 26 in [27]. It is also clear from [27], and we observe it here too, that it can perform reasonably well on stability depending on the data set and the measure of stability. We also compare our results with the more recent algorithm Divide & Conquer[4], which follows a similar approach and performs well when compared to Squarified. Moreover, again as explained in Section 2.3, we skip space-filling curve-based algorithms for this comparison as they aim to preserve order and maximize readability and stability, and generally perform poorly with respect to aspect ratio metrics, as shown in [9, 12, 23, 27]. To make the optimization results comparable with the algorithmic results, we set all parameters in our Optimization Model to be zero.
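For reference, the aspect-ratio measures used in this comparison can be computed directly from the rectangle dimensions of a layout; the following small sketch is illustrative only and assumes AWAR is the area-weighted mean of the per-rectangle aspect ratios.

```python
def aspect_ratio(w, h):
    return max(w, h) / min(w, h)

def layout_metrics(rects):
    """rects: list of (w, h) pairs. Returns total perimeter, maxAR, avgAR, and
    the area-weighted average aspect ratio (AWAR)."""
    ars = [aspect_ratio(w, h) for w, h in rects]
    areas = [w * h for w, h in rects]
    total_area = sum(areas)
    return {
        "total_perimeter": sum(2 * (w + h) for w, h in rects),
        "maxAR": max(ars),
        "avgAR": sum(ars) / len(ars),
        "AWAR": sum(a * r for a, r in zip(areas, ars)) / total_area,
    }

print(layout_metrics([(1, 1), (2, 0.5), (0.5, 2)]))
```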
We generated 25 random test problems for the comparison. These are listed in synthetic_data.csv in the supplemental material at osf.io (hyperlink)2 and in a table in the appendices at osf.io (hyperlink). Since treemapping is NP-hard and finding an optimal solution for large problems is challenging, we avoided cases with more than 12 subregions. This is to ensure we can include Optimization Model in the comparison. To further ease visual comparison, each weighted tree in our problems has a height of 1. Deeper trees can be easily laid out by running each of the algorithms recursively, as shown in Section 7.2.

Footnote 2: osf.io/y8gpm/view_only=4e5edb49f893410a866f9fcf4e71cb5

Fig. 6: Our three spiral treemap algorithms compared with Squarified for making a rectangular treemap. The data shown is a single level tree of 60 randomly-weighted nodes. Bundled rectangles are shown using thicker lines. Symmetric Spiral has the worst maximum aspect ratio (maxAR), but our other spiral algorithms beat Squarified. They also have better maximum Hausdorff distances (maxHD) than Squarified, with Symmetric Spiral doing best. The best and second-best values are shown using color.

Fig. 7: Our three spiral treemap algorithms compared with Squarified for making a rectangular treemap. The data shown is the count of diagnosed cases of COVID-19 in 52 different U.S. states and territories from the CDC as of 2020-05-20. Squarified has the best maximum aspect ratio (maxAR), followed by Strip-Bundled Spiral. However, our spiral algorithms have better maximum Hausdorff distances (maxHD). The best and second-best values are shown using color. Note: we are not advocating this visualization for pandemic data.

In order to analyze the stability of the treemaps constructed by each algorithm, we built the treemaps for each sample and then perturbed the input areas for a few rounds at three levels of perturbation: small, medium, and high. We then built new treemaps with the perturbed areas and calculated the Hausdorff distance between the corresponding pairs of rectangles in the two treemaps. For the small level of perturbation, we added a random variable between 0 and 1 multiplied by 0.01 to each previous area and then normalized the areas so that they again sum to 1. For the medium and high levels of perturbation we did the same, but with 0.05 and 0.1 instead of 0.01.
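This procedure can be sketched as follows; the helper names are illustrative, and the Hausdorff distance between two axis-aligned rectangles is computed using the fact that, for convex sets, the directed distances are attained at corner points.

```python
import random

def perturb(areas, scale=0.01):
    """Small/medium/high perturbation: scale = 0.01 / 0.05 / 0.1, then renormalize to sum 1."""
    noisy = [a + scale * random.random() for a in areas]
    s = sum(noisy)
    return [a / s for a in noisy]

def point_to_rect(px, py, rect):
    """Euclidean distance from a point to an axis-aligned rectangle (x, y, w, h)."""
    x, y, w, h = rect
    dx = max(x - px, 0, px - (x + w))
    dy = max(y - py, 0, py - (y + h))
    return (dx * dx + dy * dy) ** 0.5

def hausdorff(r1, r2):
    """Hausdorff distance between two axis-aligned rectangles (x, y, w, h)."""
    def corners(r):
        x, y, w, h = r
        return [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]
    d12 = max(point_to_rect(px, py, r2) for px, py in corners(r1))
    d21 = max(point_to_rect(px, py, r1) for px, py in corners(r2))
    return max(d12, d21)

areas = [0.5, 0.3, 0.2]
print(perturb(areas, 0.01))
print(hausdorff((0, 0, 1, 1), (0.1, 0, 1, 1)))
```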
Our Dynamic Programming performs considerably better than Squarified [17] on aspect ratio measures and only slightly weaker on stability measures. This is besides the fact that the input region in 52% of our test problems was a unit box which favors Squarified [17]. Dynamic Programming found the optimal solution in 3 out 25 test problems, which is also a certificate to show that these 3 problems had sliceable optimal solutions. Our Modified D&C outperforms Divide & Conquer [4] on all but one measure and beats Squarified [17] by a wide margin on maxAR measure. As expected our modification to the divide and conquer approach of [4] shows its most significance on the maximum aspect ratio which amounts to more than 17%. Our Dynamic Programming and Modified D&C significantly improve on Divide & Conquer [4], despite the fact that in our test problems we did not consider any extreme instance, as described in Section 4.2, for which Divide & Conquer of [4] would perform particularly weak. As we expected, our spiral algorithms perform much better on stability measures than aspect ratio measures. Symmetric Spiral performs particularly weak on total perimeter and maxAR, due to stretching some rectangles. The modified spiral algorithms Square-Bundled Spiral and Strip-Bundled Spiral perform better on the aspect ratio measures. However, they perform weaker than the non-spiral approaches. This is because (1) the sub-regions placed last are more prone to poor aspect ratios and (2) the sub-regions are sorted in ascending order of their areas giving more weight to those sub-regions placed last. Stability analysis shows that all of our spiral algorithms generally perform very well, which was again expected due to the packing framework and spiral structure. By following a spiral structure, we believe they also provide additional aesthetic appeal vs. other approaches. They maximize _symmetry_ and _fractal-like patterns_ in the treemaps. It should be also mentioned that removing the sorting step, we can easily modify these algorithms to make them order-preserving. However, this may come at some cost on the side of aspect ratios. One could consider evaluation metrics such as readability measure introduced in [35] and the Fractal Value as defined in Eq. (1) in [36], and evaluate the quality of our spiral algorithms that way. We should also mention that in some of our test problems, our spiral algorithms performed better than all other approaches including Optimization Model even on the optimization objective (total perimeter). This is because spiral algorithms are not restricted to the input container. Since this lack of restriction to input region can lead to both significant improvements or deterioration, we can claim that the performance of our spiral algorithms highly depend on the problem instances. They performed weaker on average on our test problems, although they were able to find solutions even better than our Optimization Model on some instances. However, they should be seen as alternatives that have the potential to generate superior results depending on the problem instance. In our spiral algorithms, an adjustable parameter \(\rho_{s}\) sets the aspect ratio of the first two placed rectangles. To mimic the Fibonacci/Golden spiral we set \(\rho_{s}=2\). To check the sensitivity of the algorithm to \(\rho_{s}\)--while still approximating a golden spiral--we tried the golden ratio, i.e., \(\rho_{s}=\phi=\frac{1+\sqrt{5}}{2}\simeq 1.618\), and tested the aforementioned data & measures. 
The last three rows of Table I show that this change led to mixed results; Symmetric Spiral is improved on all metrics, while Square-Bundled Spiral and Strip-Bundled Spiral are improved on some metrics and deteriorated on others. The changes in the stability metrics, in both directions, were significant. More sensitivity analysis could be done by setting \(\phi\leq\rho_{s}\leq 2\), while maintaining the overall golden spiral structure. Besides the quality metrics, run time is also important to consider. Our Modified D&C, Symmetric Spiral, Square-Bundled Spiral, and Strip-Bundled Spiral have a time complexity of \(\mathcal{O}(n\log n)\), as do the best-of-breed Squarified [17] and Divide & Conquer [4]. The running time of our Dynamic Programming, which as discussed earlier is designed for quality rather than computational efficiency, is \(\mathcal{O}(n^{3})\). All of our algorithms scale very well, although to a lesser degree for Dynamic Programming, as is apparent from these worst-case running times. The running time of the quasilinear algorithms on all of our test cases, including the 25 random test problems, the examples of size 60 and 52 in Fig. 6 and Fig. 7, and the 220-leaf-node example in Section 7.2, was between fractions of a second and a few seconds, similar to that of Divide & Conquer [4] and Squarified [17]. This time for Dynamic Programming increases to a few minutes.

## 7 Usage Examples

Here we present several usage examples to demonstrate the utility of our treemap algorithms on realistic data, in contrast to the random data in our computational experiments. All of these datasets are available at osf.io (hyperlink).

### _Single Level Weighted Trees_

We previously showed treemaps of single level weighted trees. This is to ease comparisons, but deeper trees can be laid out via recursive application as we will see in Section 7.2.

**Stock Market--**Stock market capitalization by sector has long been used as an example for treemap algorithms, starting with Wattenberg's Map of the Market [31]. Our data is from the U.S. Stock Market after closing on 2020-12-18. Figs. 4(a) and 4(b) and Fig. 5 show the associated treemaps using our Optimization Model and the three divide and conquer approaches.

**U.S. Census--**From the 2010 U.S. Census data we extracted 12 states in the Midwest. The treemaps in Figs. 4(c) and 4(d) and Fig. 5 show the proportion of the total population that each of these states contributes. The treemaps again use Optimization Model and the three divide and conquer approaches.

**COVID-19--**Similar to the state population data, this example shows the proportion of the total diagnosed cases of COVID-19 in each of the 52 U.S. states and territories as of 2020-05-20, according to the CDC. We show this data in Fig. 7 using our three spiral approaches.

### _Multi-Level Trees_

Our treemap algorithms can be easily applied to multi-level trees as well. This is straightforward for our divide and conquer and dynamic programming algorithms, as they are based on a subdivision approach and the space for the parent is formed prior to forming the space for the children. However, our spiral algorithms, which follow a packing approach, require a bit of explanation. We start from the bottom level and construct the rectangles for the immediate parents of the leaf nodes following our spiral structure. Once we have all the rectangles in the next level constructed, we make the rectangles for their parents and recursively iterate this until the rectangle for the root is formed.
The only thing to consider here is that we will only set \(\rho_{s}=2\) or \(\rho_{s}=\frac{1+\sqrt{5}}{2}\) at the bottom level and relax this step for the higher levels. Note that we can also do this in the reverse direction and start from the root and work towards the bottom level. We could also start from any middle level and work towards both directions of the tree. However, we must fix the aspect ratio of the initial two rectangles only in the level we start from. This is to avoid having unused spaces in the treemap. Here, we will show the behavior of our treemap algorithms on a large unbalanced height-4 tree with 220 leaf nodes. We use the Flare data. Flare [72] is an ActionScript library for making web-based interactive Flash visualizations. It is the spiritual successor to the Prefuse [73] Java library and an early predecessor of D3 [74]. The nodes in our tree are the ActionScript 3 classes in the library, arranged by the class hierarchy. Each node has a weight for the size of the class. This data is widely available as part of many online visualizations,[3, 4, 5, 6, 7, 8, 9] and in particular treemaps generated with various algorithms including Slice-and-Dice and Squarified.[10, 11] In Fig. 10 we show treemaps of the Flare data generated by our algorithms, Squarified, and Divide & Conquer. See the figure caption for a comparative discussion. We exclude Optimization Model due to its long run time. For the spiral algorithms we illustrate the results for \(\rho_{s}=2\). Table II shows the comparison of metrics between the algorithms on this large data set. Our Dynamic Programming still provides the best overall performance on this data set.

## 8 Conclusion

We present an Optimization Model and five new treemapping algorithms. We believe that Optimization Model, which minimizes the total sub-rectangle perimeter as the objective function, is the first optimization approach to treemapping. Despite having only the total perimeter as the objective function, it performs the best on all of our aspect ratio measures. Our computational experiments show superior results vs. Squarified[17] and Divide & Conquer[4] on four metrics for aspect ratio. It also outperformed our five new algorithms. As such, we recommend Optimization Model be used as a benchmark for comparing future algorithms on aspect ratio related measures. Our new Dynamic Programming and Modified D&C significantly improve on Divide & Conquer[4] for almost all measures, albeit with additional computational complexity for Dynamic Programming, despite avoiding the extreme instances for which Divide & Conquer is particularly vulnerable. The three proposed spiral algorithms had weaker performance on aspect ratio measures but very good performance on stability. They were also able to find solutions even better than our Optimization Model on aspect ratio metrics for a few instances. This is due to the lack of restriction to the input region and suggests that the performance of our spiral algorithms is problem dependent. They also provide an arrangement that mimics the Fibonacci/Golden spiral, which we believe is more appealing and could be easier to follow.

Fig. 9: Comparison of different algorithms regarding maximum and average aspect ratios and maximum and average Hausdorff distances. The algorithms are color coded and labeled with SQ (Squarified[17]), DC (Divide & Conquer[4]), MDC (Modified D&C), DP (Dynamic Programming), SS (Symmetric Spiral), SQBS (Square-Bundled Spiral), STBS (Strip-Bundled Spiral). We see that taking both metrics into consideration, our DP performs the best.
Our MDC beats both DC and SQ on the aspect ratio measure but performs weaker than SQ on stability. Our two bundled spiral algorithms perform relatively well with respect to both the maximum and average criteria. The symmetric spiral algorithm performs very well on stability but very weakly on aspect ratio. For SS we used \(\rho_{s}=\phi=\frac{1+\sqrt{5}}{2}\simeq 1.618\) and for the bundled spirals we used the results from \(\rho_{s}=2\).

Figure 10: Comparison of our five treemap algorithms, Squarified, and Divide & Conquer, each showing the Flare [72] visualization library class hierarchy. The data is an unbalanced height-4 weighted tree, with 220 leaf nodes representing classes with weights given by the class size. We can see that Symmetric Spiral made some unfortunate choices with regards to aspect ratio, in particular for the many skinny children of the query\(\rightarrow\)methods class (top). However, the spiral in Symmetric Spiral is much more apparent than the spirals in the other spiral treemaps, where rectangles are predominantly fat. The spiral nature becomes more readily visible when there are many similar-weight siblings, e.g., the children and grandchildren of vis (left in Square-Bundled Spiral, right in Strip-Bundled Spiral). All of our approaches stand in stark contrast with Squarified, which pushes the smaller rectangles to the top-right, e.g., the children of util.

However, user studies are necessary to validate this claim. In this paper we assumed that \(\text{AR}=1\) is the best for sub-rectangles, although we pursued other aspect ratios for blocks of sub-rectangles in our spiral algorithms to make the overall layout more appealing. Making sub-rectangles as square as possible is a widely accepted goal [23]. However, as suggested by [3, 64] and many other works in other fields, it could be more desirable to have aspect ratios close to 3/2 or the golden ratio \(\phi=\frac{1+\sqrt{5}}{2}\simeq 1.618\) for the sub-rectangles as well. One future research direction is to consider the impact of this change in the objective. Here we only evaluated treemapping algorithms on 6 measures of aspect ratio and stability. To further explore the strengths and weaknesses of these algorithms, we encourage future researchers to use additional quality metrics and more datasets. Furthermore, one could tweak different parts of the proposed spiral algorithms, e.g., the sorting step, to investigate the sensitivity of the output quality to such changes. To our knowledge, all divide and conquer treemapping approaches, including ours, initially sort the input areas and as such they do not preserve input order. An avenue for future research would be algorithmically making a trade-off between aspect ratio, stability, and preserving order. A machine learning approach similar to that of [75] could also help to achieve the right trade-off. Another interesting direction for future research is to consider equal areas being laid out as convex polygonal subregions inside an input region that is not necessarily a rectangle. Clearly, this is very easy when \(R\) is a rectangle. However, this becomes quite complex when \(R\) is a general convex region. An approach similar to that of [76] for partitioning a convex polygon into equal area sub-regions could be adopted to incorporate the additional objective of minimizing the total perimeter of all sub-regions.
## Acknowledgments The authors gratefully acknowledge support from a Tier-1 grant from Northeastern University. The authors also thank Ben Shneiderman for his valuable comments and Zachary Danziger for his efficient MATLAB function on calculating Hausdorff distance.
2310.20175
LFAA: Crafting Transferable Targeted Adversarial Examples with Low-Frequency Perturbations
Deep neural networks are susceptible to adversarial attacks, which pose a significant threat to their security and reliability in real-world applications. The most notable adversarial attacks are transfer-based attacks, where an adversary crafts an adversarial example to fool one model, which can also fool other models. While previous research has made progress in improving the transferability of untargeted adversarial examples, the generation of targeted adversarial examples that can transfer between models remains a challenging task. In this work, we present a novel approach to generate transferable targeted adversarial examples by exploiting the vulnerability of deep neural networks to perturbations on high-frequency components of images. We observe that replacing the high-frequency component of an image with that of another image can mislead deep models, motivating us to craft perturbations containing high-frequency information to achieve targeted attacks. To this end, we propose a method called Low-Frequency Adversarial Attack (LFAA), which trains a conditional generator to generate targeted adversarial perturbations that are then added to the low-frequency component of the image. Extensive experiments on ImageNet demonstrate that our proposed approach significantly outperforms state-of-the-art methods, improving targeted attack success rates by a margin from 3.2\% to 15.5\%.
Kunyu Wang, Juluan Shi, Wenxuan Wang
2023-10-31T04:54:55Z
http://arxiv.org/abs/2310.20175v2
# LFAA: Crafting Transferable Targeted Adversarial Examples with Low-Frequency Perturbations ###### Abstract Deep neural networks are susceptible to adversarial attacks, which pose a significant threat to their security and reliability in real-world applications. The most notable adversarial attacks are transfer-based attacks, where an adversary crafts an adversarial example to fool one model, which can also fool other models. While previous research has made progress in improving the transferability of untargeted adversarial examples, the generation of targeted adversarial examples that can transfer between models remains a challenging task. In this work, we present a novel approach to generate transferable targeted adversarial examples by exploiting the vulnerability of deep neural networks to perturbations on high-frequency components of images. We observe that replacing the high-frequency component of an image with that of another image can mislead deep models, motivating us to craft perturbations containing high-frequency information to achieve targeted attacks. To this end, we propose a method called Low-Frequency Adversarial Attack (LFAA), which trains a conditional generator to generate targeted adversarial perturbations that are then added to the low-frequency component of the image. Extensive experiments on ImageNet demonstrate that our proposed approach significantly outperforms state-of-the-art methods, improving targeted attack success rates by a margin from 3.2% to 15.5%. ## 1 Introduction Deep neural networks (DNNs) have achieved remarkable progress in numerous domains [9]. However, the security and dependability of DNNs remain difficult to ensure when confronted with adversarial examples [6, 28]. Adversarial examples are created with malicious intent and include slight alterations that are imperceptible to the human eye but are sufficient to deceive deep models into making incorrect predictions. This weakness poses a threat to security-sensitive applications such as face recognition. The study of adversarial attacks can be divided into two categories: white-box attacks and black-box attacks. White-box attacks have access to the victim model's architecture and parameters to create adversarial perturbations. In contrast, black-box attacks have limited or no knowledge of the victim model. Existing white-box attacks use gradient information to generate adversarial examples [6, 19]. Adversarial examples created on white-box models can deceive unknown neural models [23], making black-box attacks feasible. Several methods have been proposed to enhance the transferability of adversarial examples [30, 2, 16], and they have achieved remarkable performance in the untargeted black-box scenario, where they only aim to produce different predictions. However, these methods are not satisfactory in the targeted attack scenario, where adversaries intend to trick the models into making specific predictions. Several approaches have been proposed to improve the transferability of adversarial examples in the targeted setting [39, 14, 32, 18], with the generator-based method being the most effective. This approach trains a generator based on the source model to produce adversarial examples for the target class [36, 24]. Prior research on targeted attacks [6, 39, 14] has focused on generating adversarial perturbations, but these perturbations typically do not contain any information from the target class image, leading to overfitting of the models. 
To overcome this issue, it is important to modify the texture and shape of the input image to match the characteristics of the target label image. By incorporating such information, adversaries can generate adversarial examples with improved transferability compared to directly adding noise. The high-frequency component of an image, as depicted in Fig. 1, typically captures the detailed textures and noise, while the low-frequency component represents the object's shape. Since the background of an image is unnecessary for classification and can easily cause overfitting, it is essential to directly manipulate the image frequency domain information to embed the target class image information into the input image, which can improve transferability. With this insight in mind, we propose a novel generation-based attack called Low-Frequency Adversarial Attack (LFAA). The method trains a conditional generator to produce adversarial frequency domain information corresponding to the target label, which is then added to the low-frequency component of the input image to generate adversarial examples (see Fig. 2).

Figure 1: The frequency components of the original image and of adversarial images crafted by LFAA.

As depicted in Fig. 1, the perturbation modifies the high-frequency texture and blurs the low-frequency component to emphasize the high-frequency component. As a result, the classifier misclassifies the high-frequency component of the adversarial examples as the target label image, indicating that we have embedded the target label information in the frequency component. Specifically, we optimize the generator by minimizing the classification loss between the predictions of the adversarial examples and the target label. In summary, we highlight our contributions as follows:

* We demonstrate that modifying the frequency domain information can effectively mislead deep models.
* We propose a novel attack called Low-Frequency Adversarial Attack (LFAA), which is the first generation-based targeted attack that generates adversarial frequency information to embed target label information into the source image.
* Empirical evaluations on the ImageNet dataset demonstrate that LFAA achieves much better transferability than state-of-the-art targeted attacks.

## 2 Related Work

In this section, we review related adversarial attacks in the black-box setting, frequency-based attacks, and defenses related to our proposed attack.

**Adversarial Attack** Szegedy _et al_. were the first to demonstrate the vulnerability of deep neural networks to adversarial examples [28]. Subsequently, several works have investigated the susceptibility of deep models to black-box attacks, where access to the target model is restricted. Such attacks can be categorized into three types: (a) score-based attacks, which can access the predicted probability [11]; (b) decision-based attacks, which can only obtain the predicted label [1]; and (c) transfer-based attacks, which are effective in real-world settings where the attacker cannot query the target model. In this approach, the adversary crafts the adversarial examples on a white-box model and transfers them to fool the target model [16, 30]. Therefore, the transferability of adversarial examples is crucial for deceiving unknown models. The Fast Gradient Sign Method (FGSM) was the first gradient-based attack, which generates perturbations in the direction of the gradient [6]. The iterative FGSM (I-FGSM) is an extension of FGSM that produces better white-box performance but worse black-box performance.
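As a concrete illustration of such gradient-based targeted attacks, a minimal targeted I-FGSM update can be sketched as follows, assuming a PyTorch classifier with inputs in \([0,1]\); the hyperparameter values are illustrative.

```python
import torch

def targeted_ifgsm(model, x, target, eps=16/255, alpha=2/255, steps=10):
    """Minimal targeted I-FGSM sketch: descend the loss of the target class and
    keep the result inside the L_inf eps-ball around the clean image x."""
    x_adv = x.clone().detach()
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                     # move toward the target label
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)   # project into the eps-ball
            x_adv = x_adv.clamp(0, 1)                               # keep a valid image
    return x_adv.detach()
```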
However, to avoid overfitting to the white-box model, several methods have been proposed to escape local maxima. For instance, MI-FGSM [2] adds momentum to I-FGSM to stabilize the optimization process. Lin _et al_. further improve the method by incorporating the Nesterov accelerated gradient, which provides an effective lookahead strategy [16]. In addition to gradient-based attacks, data augmentation techniques have also been found to be effective in improving the transferability of adversarial examples. For instance, the Diverse Input Method (DIM) [35] rescales the input image to random sizes and adds padding to a fixed size before calculating the gradient. The Transition Invariant Attack [3] approximates the gradient calculation for a set of translated images by convolving the gradient with a Gaussian kernel. Admix [30] mixes the input image with a small portion of images from other categories. Some methods also focus on modifying surrogate models to enhance the transferability of iterative methods. For example, skip connections [33] have been used to improve the transferability of adversarial examples. Finally, other attacks aim to increase the feature difference between the source image and adversarial examples [32]. Although the methods mentioned above have shown good performance in untargeted attacks, their performance in targeted attacks is usually poor. To address this, several researchers have proposed advanced loss functions for targeted attacks. Li _et al_. [14] adopt Poincare distance and Triplet loss to replace cross-entropy, which has a vanishing gradient in iterative targeted attacks. Zhao _et al_. [39] use the logits output of the target class as the loss function and perform a large number of iterations to achieve state-of-the-art performance. Gao _et al_. [4] minimize the distance between the features of the target sample and adversarial examples in the reproducing kernel Hilbert space, which is transition-invariant. Another branch of attack is generative-based attacks, where attackers train a generator to generate perturbations given an input image. Generative-based attacks are more efficient because they can learn the adversarial pattern of a target label using a large dataset. The adversarial pattern depends on the entire data distribution and is not limited to a single image, which may overfit the source model. UAP [20] was the first proposed method to fool models by learning universal noise, GAP [24] learns a generator that can produce image-agnostic perturbations for targeted attacks, CDA [21] uses relativistic training objectives to boost cross-domain transferability, and C-GSP [36] uses a conditional generator to generate targeted adversarial perturbations. Our method belongs to the generative-based method, but instead of generating perturbations directly, we generate frequency content that can easily fool deep models. We seek to learn the frequency components of a target class image based on a data distribution. **Frequency-based Attack** According to recent studies, researchers have explored the generalization and adversarial vulnerability of deep neural networks (DNNs) from a frequency perspective [37, 31, 25]. These studies indicate that DNNs can capture high-frequency components that are imperceptible to humans. Yin _et al_. [37] demonstrate that naturally trained models are vulnerable to perturbations on high-frequency components, while adversarially trained models are less sensitive to such perturbations. 
As a result, several approaches have been proposed to craft adversarial examples from the perspective of frequency. For example, Long _et al_. [18] perturb input images with Gaussian noise in the frequency domain as a data augmentation technique, then convert them back to the spatial domain for gradient calculation to enhance transferability. Guo _et al_. [7] restrict the search space to the low-frequency domain to craft adversarial examples, showing that low-frequency components are important in model decision making for query-based attacks. Sharma _et al_. [25] craft adversarial examples by randomly masking low-frequency components and demonstrate that even adversarially trained models are still vulnerable to low-frequency perturbations. Finally, Zhang _et al_. [38] propose a method of crafting adversarial examples by replacing the high-frequency portion of an image with a handcrafted adversarial patch. However, this method cannot be used for targeted attacks, and the selection of the adversarial patch remains a concern. **Adversarial Defense** As the threat of adversarial attacks on deep neural networks (DNNs) continues to increase, various defense methods have been proposed to mitigate this problem. Adversarial training, which involves injecting adversarial examples into the training process, is a promising method that has shown success in improving model robustness [6, 19]. Tramer _et al_. [29] introduced ensemble adversarial training, which uses adversarial examples generated on multiple models to improve the robustness of the resulting model. Another approach to adversarial defense involves using denoising fil ters, which remove strange patterns from adversarial examples before feeding them to the classifier. For example, Liao _et al_. [15] proposed a High-level representation guided denoiser (HGD) to suppress perturbations, while Naseer _et al_. [22] trained a neural representation purifier (NRP) that learns to purify perturbed input images. Other defense methods utilize input transformations to mitigate the effects of adversarial perturbations, such as random resizing and padding (R&P) [35], and feature distillation [17]. ## 3 Method In this section, we will present the adversarial setup and provide a comprehensive description of our proposed LFAA. ### Preliminaries We begin by considering an image \(\mathbf{x}\in\mathbf{R}^{H\times W\times C}\) from a dataset \(\mathcal{X}\). An adversarial example \(\mathbf{x}_{adv}\) is generated such that it is nearly identical to the original image, i.e., \(\left\|\mathbf{x}-\mathbf{x}_{adv}\right\|_{p}\leq\epsilon\), where \(\left\|\left\|\right\|_{p}\) is the \(L_{p}\) norm distance and \(\epsilon\) is the perturbation budget. We adopt the \(L_{\infty}\) distance metric in this work. We consider a classification neural network \(\mathcal{F}\) with parameters \(\phi\) and loss function \(\mathbf{J}\) that is trained to classify images into a set of classes \(\mathcal{C}=c_{1},c_{2},\dots,c_{d}\). The function \(\mathcal{F}_{\theta}:\mathbf{R}^{H\times W\times C}\rightarrow\mathbf{R}^{d}\) maps an image to a class probability vector with \(d\) classes. The predicted class for a given sample image \(\mathbf{x}\) is \(\text{argmax}_{i\in C}\mathcal{F}_{\phi}(\mathbf{x})_{i}\). A targeted adversarial example is generated to mislead the classifier \(\mathcal{F}\) into predicting a target label \(c\in\mathcal{C}\). 
The optimization problem for generating a targeted adversarial example can be formulated as follows: \[\mathbf{x}_{adv}=\operatorname*{argmin}_{\left\|\mathbf{x}_{adv}-\mathbf{x}\right\|\leq\epsilon}J(\mathbf{x}_{adv},c;\phi) \tag{1}\] Some methods, like FGSM [6], solve this problem by adding a perturbation along the direction of the gradient: \[\mathbf{x}_{adv}=\operatorname{clip}_{\mathbf{x},\epsilon}\left[\mathbf{x}-\operatorname{sign}(\nabla_{x}J(\mathbf{x},c;\phi))\right]. \tag{2}\] where \(\operatorname{clip}_{\mathbf{x},\epsilon}\) denotes the function that clips its argument into the \(\epsilon\)-ball of \(\mathbf{x}\). ### Frequency filter Prior research has revealed that CNNs trained on the ImageNet dataset have a strong bias towards the texture and shape of an object [5]. As illustrated in Fig. 1, the high-frequency component of an image represents its texture, while the low-frequency component represents its shape. To separate the different frequency components of an input image, various methods such as the DCT and the Fourier transform can be employed. In this study, we use convolution with a low-pass Gaussian filter \(W\) as an approximation to obtain the low-frequency part of the image, i.e., \(W*\mathbf{x}\). \(W\) is a \((4k+1)\times(4k+1)\) kernel matrix, and its formulation is as follows: \[W_{i,j}=\frac{1}{2\pi\sigma^{2}}\exp\left(-\frac{i^{2}+j^{2}}{2\sigma^{2}}\right) \tag{3}\] where \(\sigma=k\) denotes the radius of \(W\). By using a larger \(\sigma\), more high-frequency parts will be filtered. The high-frequency part of an image can be obtained by subtracting its low-frequency part from the original image, i.e., \(\mathbf{x}-W*\mathbf{x}\). Inspired by these findings, several works have been proposed to analyze and attack the vulnerability of deep models from the perspective of frequency, such as Fourier analysis [37], high-frequency perturbations [31], and practical frequency-based attacks [38]. These works show that deep models are vulnerable to perturbations in both high-frequency and low-frequency components. However, there is still a lack of discussion on how to use frequency components to craft targeted class perturbations. As Fig. 1 suggests, image models are highly sensitive to high-frequency components, which makes them capable of predicting these components accurately. To leverage this capability, we attempted to replace the high-frequency components of an image with the high-frequency components of a target class image. Specifically, we used the expression \(W*\mathbf{x}+(\mathbf{x}_{target}-W*\mathbf{x}_{target})\) to replace the high-frequency components, as described in Tab. 1. We sampled 1000 images from ImageNet [13] and measured the performance of targeted and untargeted attacks after replacing the high-frequency components with those from a 'bull mastiff' image. Our results showed a promising attack success rate for untargeted attacks, but the performance for targeted attacks was unsatisfactory. However, it was still better than several existing methods, such as MI-FGSM. The results described above demonstrate the feasibility of embedding critical frequency information to achieve a targeted attack. Yin _et al_.
[37] find that crafting only high-frequency content in a perturbation results in a small and easily denoised perturbation, which can be classified by adversarially trained models. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Metric & ResNet101 & DenseNet121 & Vgg19\({}_{BN}\) & MobileNet\({}_{v2}\) & Inc-v3 \\ \hline UASR (\%) & 21.4 & 40.1 & 35.4 & 41.5 & 32.5 \\ TASR (\%) & 6.10 & 0.30 & 1.60 & 0.40 & 0.9 \\ \hline \hline \end{tabular} \end{table} Table 1: Untargeted and targeted attack success rates of several pre-trained models predicting adversarial images with a substituted high-frequency component Figure 2: The procedure of our proposed LFAA However, Long _et al_. [18] suggest addressing the vulnerability of deep models by considering different frequency components. Thus, perturbations should also affect the low-frequency content of an image. Merely replacing the low-frequency content of an image with that of a target class image may cause the resulting image to fall outside the \(\epsilon\)-ball of the original image \(\mathbf{x}\). Conversely, the high-frequency components of an image can be considered as small perturbations that can be entirely replaced by a targeted texture. Moreover, to incorporate the targeted information, the low-frequency components must be altered. A generator can be trained to create perturbations based on these two principles. ### LFAA Taking inspiration from the previous discussion, we suggest a generative method that directly perturbs the low-frequency component of images to create adversarial examples. This perturbation will be added directly to the low-frequency portion of the image to distort the low-frequency content and include targeted class information while eliminating the high-frequency information of the original image. Besides, Yang _et al_. [36] have pointed out that many existing generative methods train multiple generators for multiple target labels, which is computationally inefficient. They propose a conditional generative approach that enables multi-target class attacks by training only one generator and generating perturbations based on the target class and input images. We hence adopt their generator architecture. As shown in Fig. 2, we train a conditional generator \(\mathcal{G}_{\theta}\) with parameters \(\theta\) on the entire dataset and class labels. The generator learns a mapping \((\mathcal{X},\mathbf{R}^{d})\rightarrow\mathbf{R}^{H\times W\times C}\). Given a sample input image \(\mathbf{x}\) and the one-hot encoding \(\mathbb{1}_{c}\in\mathbf{R}^{d}\) of the target class label \(c\), \(\mathcal{G}_{\theta}\) outputs an adversarial perturbation containing the high-frequency texture of the given target class and noise that perturbs the low-frequency shape. To create an adversarial example, the generated perturbation is directly added to the low-frequency component of the image, obtained with the filter of Eq. 3, and then projected onto the \(\epsilon\)-ball of the original image \(\mathbf{x}\): \[\mathbf{x}_{adv}=\mathrm{clip}_{\mathbf{x},\epsilon}(\mathcal{G}_{\theta}(\mathbf{x},\mathbb{1}_{c})+W*\mathbf{x}) \tag{4}\] Given a pretrained network \(\mathcal{F}_{\phi}\) and a dataset \(\mathcal{X}\), the training objective is \[\mathbf{E}_{\begin{subarray}{c}\mathbf{x}\sim\mathcal{X}\\ c\sim\mathcal{C}\end{subarray}}[\mathrm{CE}(\mathcal{F}_{\phi}(\mathrm{clip}_{\mathbf{x},\epsilon}(\mathcal{G}_{\theta}(\mathbf{x},\mathbb{1}_{c})+W*\mathbf{x})),c)] \tag{5}\] where \(\mathrm{CE}\) is the cross-entropy loss.
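As a rough PyTorch sketch (not the authors' code), the construction in Eq. 4 and the objective in Eq. 5 could be realized along the following lines; the `generator` and `classifier` interfaces, the kernel parameters, and the pixel range are assumptions made for illustration only.

```
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def lfaa_forward(generator, classifier, x, target, eps, num_classes=1000, kernel=17, sigma=4.0):
    """Sketch of Eqs. (4)-(5): perturb the low-frequency part of x and score it with CE loss."""
    one_hot = F.one_hot(target, num_classes).float()        # 1_c
    low_freq = TF.gaussian_blur(x, kernel, sigma)           # W * x, low-pass filter of Eq. (3)
    delta = generator(x, one_hot)                            # G_theta(x, 1_c), assumed interface
    x_adv = low_freq + delta                                  # perturbation added to the low-frequency part
    x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)    # project onto the eps-ball of x (Eq. 4)
    x_adv = x_adv.clamp(0, 1)                                 # assumes inputs scaled to [0, 1]
    loss = F.cross_entropy(classifier(x_adv), target)        # CE(F_phi(x_adv), c) of Eq. (5)
    return x_adv, loss
```

A training loop would then minimize `loss` over the generator parameters with a stochastic optimizer, as summarized in Algorithm 1 below.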
By minimizing this objective, the generator can learn the patterns of target class images that force the classifier to make targeted predictions based on a data distribution. As a result, the generated frequency information is independent of the input image and more generalizable to different models. The optimization procedure is outlined in Algorithm 1. ``` 0: A classifier \(\mathcal{F}_{\phi}\) with parameters \(\phi\), a randomly initialized generative network \(\mathcal{G}_{\theta}\), a training dataset \(\mathcal{X}\), a target label set \(\mathcal{C}\), and a perturbation budget \(\epsilon\). 0: adversarial generator \(\mathcal{G}_{\theta}\) 1: repeat 2: Randomly sample a batch of images \(\mathbf{x}\sim\mathcal{X}\); 3: Randomly sample a batch of classes \(c\sim\mathcal{C}\); 4: Forward the images \(\mathbf{x}\) and one-hot encodings \(\mathbb{1}_{c}\) to the generator \(\mathcal{G}_{\theta}\) to obtain the perturbed images \(\mathbf{x}_{adv}\) by Eq. 4; 5: Forward the perturbed images to the classifier \(\mathcal{F}_{\phi}\) to calculate the loss by Eq. 5; 6: Backward pass and update the parameters of \(\mathcal{G}_{\theta}\); 7: until \(\mathcal{G}_{\theta}\) converges; 8: return \(\mathcal{G}_{\theta}\); ``` **Algorithm 1** LFAA Our proposed method, LFAA, and C-GSP [36] generate adversarial patterns using cross-entropy loss as the loss function. The patterns generated by both methods are structural, repeated, and semantic, with the noise containing information about the target class image. In contrast, methods such as MI-FGSM [6] generate random noise that lacks semantic information and can easily overfit a specific model. This is why generative models, which learn the pattern of the target class image with respect to the data distribution, often outperform gradient-based methods. Furthermore, our method focuses more on the high-frequency texture and perturbs the low-frequency part of the image, which is more general and less biased. The noise generated by our method is more generalizable to other models and performs better than other generative models, such as CDA. ## 4 Experiment This section presents our experimental results on the ImageNet dataset [13], assessing the efficacy of our proposed LFAA method for targeted black-box attacks. We describe our experimental setup and implementation details in Section 4.1, followed by our evaluations of transferability in Section 4.2, real-world vision systems in Section 4.3, and adversarial defenses in Section 4.4. We also present an ablation study in Section 4.5. ### Experimental Setup **Dataset** This paper uses the ImageNet dataset for both training and testing purposes. We trained the generator on 10,000 randomly selected images from the ImageNet training set and evaluated its performance on 1000 images belonging to 1000 categories from the ImageNet validation dataset [13]. **Models** We adopt six popular models pre-trained on ImageNet, _i.e_., ResNet50 [9], Vgg-19\({}_{BN}\) [12], DenseNet-121 [10], Inception-v3 (Inc-v3) [27], Inception-v4 (Inc-v4) [26], and Inception-ResNet-v2 (IncRes-v2) [26]. To evaluate the robustness of our attacks against various defense mechanisms, we consider several adversarial defenses, including adversarially trained models, denoising defenses, and input-transformation defense methods. Figure 3: The Adversarial Perturbation (first row) and Adversarial examples (second row) crafted by different methods Specifically, we consider the adversarially trained model Inc-v3\({}_{adv}\) and the ensemble adversarially trained network IncRes-v2\({}_{ens2}\) [29].
For denoising defense, we consider HGD [15] and NRP [22]. For input transformation defense, we consider R&P [34], NIPS-43 1, and FD [17]. Footnote 1: [https://github.com/anllumns/nips-2017/tree/master/nmd](https://github.com/anllumns/nips-2017/tree/master/nmd) **Baselines** To evaluate the effectiveness of our proposed LFAA, we choose several attacks method including MI-FGSM[2], DIM [35], TIM [3] and several competitive methods on improving the transferability of target attack including Po-Trip [14] and Logits attack [39]. We also consider several generative approaches _i.e_., CDA [21], and C-GSP[36]. **Implementation Details** In our experiments, we set the maximum perturbation budget \(\epsilon\) to be 16. For all the baseline methods, we follow the implementation details specified in their respective papers. Our proposed LFAA is based on ResNet architecture[9] and we adopt the same architecture as a previous generative method[36]. Specifically, our generator \(\mathcal{G}_{\theta}\) generates perturbations on the low-frequency part of the image with the same input size as the original image. The size of the Gaussian kernel \(k\) is set to be 17\(\times\)17 _i.e_. \(k=4\). The classifier \(\mathcal{F}\phi\) used in our experiments is a standard pre-trained model on ImageNet, and we fixed the parameters \(\phi\) in the classifier \(\mathcal{F}\) while training the generator \(\mathcal{G}\theta\). We use the Adam optimizer with a learning rate of 5\(\times\)10\({}^{-4}\), \(\beta_{1}=0.5\), and \(\beta_{2}=0.999\). All experiments were conducted on a GeForce RTX 3090 GPU using a PyTorch implementation. ### Evaluation on Targeted Transferability We compare the target transferability of our proposed LFAA method with several other attack methods, including MI-FGSM[2], DIM [35], TIM [3], Logits[39], Po+Trip[14], CDA[21] and C-GSP[36]. For training-free approaches, we craft adversarial adversaries on three standard trained models and test them on six models. For generative approaches, we train the generator using three standard trained models _i.e_. ResNet50, DenseNet-121, and VGG-19\({}_{BN}\). For each target label, we generated adversarial examples for each image and evaluated their targeted attack success rate on other models. Fig. 6 displays some crafted adversarial examples by LFAA. We randomly sample four target labels from the ImageNet class set, and train generators based on each label. We evaluate the targeted attack success rate, that is the success rate of the victim models to make targeted predictions. The results are summarized in Tab. 2, each column of this table represents the model to be attacked, while each row indicates the attacker generates the adversarial examples based on the corresponding methods. Our experiments show that instance-specific methods _i.e_. MI-FGSM, DIM, TIM, Po-Trip, and Logits, perform well in white-box targeted attacks, and they outperform generation-based attacks in white-box setting. As gradient-based approaches directly update adversarial examples by following the gradient direction from the original image. In contrast, the generative approach learns the semantic patterns from the entire dataset, which may not cover all possible cases encountered during the attack, and hence results in better performance in white-box setting. However, in black-box settings, they tend to overfit specific models, which makes it difficult to transfer the adversarial examples to other models. 
Generative approaches, including CDA, C-GSP, and our proposed LFAA, have higher black-box targeted attack success rates than the most powerful instance-specific method, Logits, with a range from 3.2% to 76.3%. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Model & Attack & ResNet-50 & DenseNet-121 & Vgg19\({}_{BN}\) & Inc-v3 & Inc-v4 & IncRes-v2 \\ \hline \multirow{8}{*}{ResNet-50} & MI-FGSM & 98.4* & 0.4 & 0.2 & 0.1 & 0.1 & 0.1 \\ & DIM & 84.5* & 2.4 & 1.1 & 0.8 & 0.2 & 0.1 \\ & TIM & 99.9* & 1.3 & 0.8 & 0.5 & 0.4 & 0.5 \\ & Po-Trip & **100.0*** & 3.8 & 2.5 & 1.0 & 0.2 & 0.6 \\ & Logits & 98.1* & 9.8 & 3.9 & 1.5 & 0.9 & 2.5 \\ & CDA & 77.3* & 29.2 & 35.1 & 4.2 & 7.0 & 2.8 \\ & C-GSP & 92.4* & 55.2 & 36.2 & 29.9 & 24.3 & 11.5 \\ & LFAA & 95.8* & **80.1** & **66.6** & **38.9** & **33.8** & **15.2** \\ \hline \multirow{8}{*}{DenseNet121} & MI-FGSM & 0.8 & **100.0*** & 0.5 & 0.5 & 0.2 & 0.3 \\ & DIM & 2.6 & 89.4* & 2.6 & 1.8 & 0.5 & 0.1 \\ & TIM & 1.0 & **100.0*** & 0.4 & 0.3 & 0.5 & 0.3 \\ & Po-Trip & 2.6 & **100.0*** & 1.1 & 1.0 & 0.4 & 0.4 \\ & Logits & 4.3 & **100.0*** & 2.6 & 2.5 & 1.2 & 1.1 \\ & CDA & 55.6 & 89.5* & 27.3 & 22.3 & 8.7 & 2.4 \\ & C-GSP & 51.7 & 92.7* & 33.6 & 29.6 & 17.9 & 12.1 \\ & LFAA & **72.9** & 94.5* & **48.7** & **33.2** & **39.2** & **26.0** \\ \hline \multirow{8}{*}{Vgg19\({}_{BN}\)} & MI-FGSM & 0.4 & 0.3 & 99.9* & 0.2 & 0.2 & 0.2 \\ & DIM & 0.7 & 0.7 & 83.4* & 0.4 & 0.3 & 0.2 \\ & TIM & 0.4 & 0.4 & **100.0*** & 0.2 & 0.1 & 0.6 \\ & Po-Trip & 0.7 & 0.7 & **100.0*** & 0.6 & 0.3 & 0.3 \\ & Logits & 1.8 & 2.2 & **100.0*** & 0.7 & 1.0 & 0.9 \\ & CDA & 10.3 & 12.3 & 96.3* & 0.7 & 1.1 & 0.1 \\ & C-GSP & 19.7 & 22.6 & 92.0* & 8.2 & **11.7** & 1.3 \\ & LFAA & **29.6** & **29.3** & 93.7* & **11.2** & 7.7 & **1.8** \\ \hline \hline \end{tabular} \end{table} Table 2: Targeted attack success rates (%) on six models with various adversarial attacks. The adversaries are crafted on ResNet-50, DenseNet-121 and Vgg-19\({}_{BN}\) respectively. * indicates white-box attacks. Compared to instance-specific methods, the average black-box targeted attack success rate of LFAA is significantly better, with a margin ranging from 14.9% to 45.5%. LFAA is also competitive with generative models: the average targeted success rate of LFAA is higher than that of the most powerful generative approach, C-GSP, by a margin ranging from 3.2% to 15.5%. These results show that LFAA is superior in generating transferable adversarial examples and support the claim that our approach can significantly improve transferability. To further validate the performance of LFAA, we measured the average targeted attack success rate within a family of models. We trained the generator on ResNet50 and Vgg19\({}_{BN}\) and transferred it to models within the same family (_e.g._, ResNet50 \(\rightarrow\) ResNet101, Vgg19\({}_{BN}\rightarrow\) Vgg16\({}_{BN}\)). The results are shown in Fig. 4: our method and C-GSP have similar targeted attack success rates in the white-box setting, but in the black-box setting our method performs better than C-GSP on the ResNet family, with a clear margin ranging from 18.7% to 28.1%, and on the Vgg family the average targeted attack success rate is higher by a margin ranging from 5.5% to 34.4%. This means that, both across different architectures and within the same architecture family, our method is superior to the state-of-the-art method C-GSP. These results further validate the effectiveness of our proposed LFAA.
To evaluate LFAA's ability to generate latent representations for targeted labels in the presence of a large label set by using a randomly generated target label dataset. Each image was randomly assigned a target label from a pool of 100 randomly selected labels. To address the challenges posed by large label sets, we follow the approach proposed by Yang _et al_. [36] to divide the label set into smaller and more diverse groups. We train a conditional generator for each group and evaluate the transferability of LFAA using two models, each trained on 50 diverse categories. We perform the targeted attack using the corresponding model for the corresponding group when the target label falls into that group. To ensure a fair comparison, we also evaluated our approach against instance-based methods, and due to computing resources, most generative approaches require training multiple generators for multiple labels, which was not feasible in our case, so we only evaluated LFAA against C-GSP. We present the results of our evaluation in Tab. 3, where we summarize the targeted attack success rates of targeted attacks. In the case of instance-based methods, Logits still has the best transferability, but it is weaker than the generative-approach C-GSP. Our LFAA exhibited the strongest transferability, outperforming the most powerful C-GSP with a range of 6.4% to 21.8%. Our approach extends the capabilities of C-GSP. Our evaluation demonstrated that LFAA can effectively handle large label sets and further confirmed the superiority of LFAA. ### Evaluation on Real-world Recognition System The majority of previous works [8, 11] have used score-based attacks to fool image recognition systems in the real world, which require thousands of queries to the victim system. In contrast, we conduct an assessment of the effectiveness of targeted transfer attacks by LFAA on the widely-used Google Cloud Vision API. In particular, we generate the adversaries by our conditional generator trained on DenseNet-121 using 100 target classes and transfer the adversarial examples to fool the vision system. The API provides a list of labels with corresponding scores indicating the confidence of the model for each label. It only returns labels with a score of 50% or higher and at most 10 labels are shown. Fig. 5 displays one of the examples where we generated the adversarial examples using the target label "rock snake." The recognition system classified the adversarial images as "snake," which validates that our generator can learn the semantic information of the target class to fool the real-world system. ### Evaluation on Defense Method To thoroughly evaluate the effectiveness of our proposed method, we assess the attack performance of LFAA against several defense mechanisms, including adversarial training where we consider adversarially trained models Inception-v3 (Inc-v3\({}_{adv}\)), and ensemble adversarially trained network Inception-Resnet-v2(IncRes-v2\({}_{ens2}\)), as well as input-transformation based defenses (R&P, NIPS-r3, and FD), denoising methods (HGD and NRP). We compared our approach with C-GSP. The results are presented in Tab. 4, where we report the average target attack success rates against each defense method. The LFAA method performs slightly worse against adversarially trained models, particularly IncRes-v\(2ens2\), but has stronger transferability against Inc-v\(3adv\). 
For input-transformation based defense methods, LFAA outperforms C-GSP and achieves a higher average targeted attack success rate, with a margin of 15% against the three defense methods. For denoising methods, LFAA has slightly weaker transferability against NRP, but it can still bypass HGD with a higher targeted attack success rate, with a margin of 6.6%. Overall, LFAA has a higher average targeted attack success rate against all seven defense methods than C-GSP, with a margin of 7.4%. LFAA contains more semantics regarding the target label, and the underlying pattern still persists under different defense methods; hence, LFAA can effectively bypass some adversarial defense methods. These findings further confirm the effectiveness of our proposed LFAA method. Figure 4: Targeted attack success rates (%) on eight models by C-GSP and LFAA; the generators are trained on ResNet-50 and Vgg19\({}_{BN}\). Figure 5: Successful targeted adversarial images on Google Cloud Vision generated by LFAA; the given target class is rock snake. \begin{table} \begin{tabular}{c c c c c c} \hline Attack & ResNet50 & Vgg19\({}_{BN}\) & Inc-v3 & Inc-v4 & IncRes-v2 \\ \hline MI-FGSM & 2.5 & 1.5 & 0.9 & 1.3 & 2.0 \\ DIM & 0.8 & 0.6 & 0.6 & 0.3 & 0.6 \\ TIM & 1.0 & 0.4 & 0.3 & 0.5 & 0.3 \\ Po-Trip & 2.5 & 1.5 & 0.9 & 1.3 & 2.0 \\ Logits & 5.8 & 3.6 & 1.8 & 1.6 & 2.7 \\ C-GSP & 18.0 & 16.3 & 5.2 & 3.6 & 2.8 \\ LFAA & **39.8** & **31.0** & **18.0** & **10.9** & **9.2** \\ \hline \end{tabular} \end{table} Table 3: Targeted attack success rates (%) on six models under the single-model setting with various attack methods. The adversaries are crafted on DenseNet-121. ### Ablation Study To gain further insight into the performance improvement of LFAA, we conduct ablation and hyper-parameter studies, in which we train our generator on DenseNet-121 and validate the generated adversarial examples on the six models. **On the effectiveness of the Gaussian kernel size \((4k+1)\times(4k+1)\)** To capture the low-frequency components of the image, a Gaussian low-pass filter is utilized as an approximation. As shown in Fig. 7, the performance of transferring adversarial examples to the Vgg19\({}_{BN}\) and Inception-v3 models significantly drops as \(k\) increases above 5. On the other hand, when \(k\) is less than or equal to 3, the performance of transferring adversarial examples from DenseNet-121 to ResNet50 is not satisfactory. Although \(k=4\) shows weak performance on some models, it achieves the highest average targeted attack success rate. Therefore, \(k=4\) is chosen as the hyperparameter for the Gaussian low-pass filter. ## 5 Conclusion In this paper, we introduce a novel method called LFAA for generating transferable targeted adversarial examples that exploit the vulnerability of deep neural models from a frequency perspective. The proposed approach is capable of generating perturbations that can cause misclassification on multiple black-box target models and real-world vision systems, regardless of the image's source class. LFAA trains a conditional generator to generate targeted adversarial perturbations, which are then added to the low-frequency components of the image. Experimental results on ImageNet demonstrate that LFAA significantly outperforms state-of-the-art methods. This work suggests that different frequency components play a crucial role in deep learning models, and that targeted attacks based on perturbing these components can be an effective and efficient approach for generating transferable attacks.
2309.03817
The Generalized Riemann Hypothesis from zeros of a single L-function
For each primitive Dirichlet character $\chi$, a hypothesis ${\rm GRH}^\dagger[\chi]$ is formulated in terms of zeros of the associated $L$-function $L(s,\chi)$. It is shown that for any such character, ${\rm GRH}^\dagger[\chi]$ is equivalent to the Generalized Riemann Hypothesis.
William D. Banks
2023-09-07T16:13:46Z
http://arxiv.org/abs/2309.03817v1
# The Generalized Riemann Hypothesis from zeros of a single \(L\)-function ###### Abstract. For each primitive Dirichlet character \(\chi\), a hypothesis \(\mathtt{GRH}^{\dagger}[\chi]\) is formulated in terms of zeros of the associated \(L\)-function \(L(s,\chi)\). It is shown that for any such character, \(\mathtt{GRH}^{\dagger}[\chi]\) is equivalent to the Generalized Riemann Hypothesis. MSC Numbers: Primary: 11M06, 11M26; Secondary: 11M20 Keywords: Generalized Riemann Hypothesis, zeta function, Dirichlet \(L\)-function, zeros Data Availability Statement: Data sharing not applicable to this article as no datasets were generated or analysed during the current study Potential Conflicts of Interest: NONE Research Involving Human Participants and/or Animals: NONE _Dedicated to Hugh Montgomery and Bob Vaughan_ ## 1. Introduction An old result of Sprindzuk [12, 13] (which he obtained by developing ideas of Linnik [9]) states that, under the Riemann Hypothesis (RH), the Generalized Riemann Hypothesis (GRH) holds for _all_ Dirichlet \(L\)-functions provided some suitable conditions on the vertical distribution of the zeros of \(\zeta(s)\) are met. More precisely, the _Linnik-Sprindzuk theorem_ asserts that GRH is equivalent to the validity of both RH and the hypothesis that, for any rational number \(\xi:=h/k\) with \(0<|h|\leqslant k/2\) and \((h,k)=1\), and any real \(\varepsilon>0\), the bound \[\sum_{\rho=\frac{1}{2}+i\gamma}|\gamma|^{i\gamma}\mathrm{e}^{-i\gamma-\pi|\gamma|/2}(x+2\pi i\xi)^{-\rho}+\frac{\mu(k)}{\phi(k)}\frac{1}{x\sqrt{2\pi}}\ll x^{-1/2-\varepsilon}\] holds for \(x\to 0^{+}\). Similar results have been attained by Fujii [4, 5, 6], Suzuki [14], Kaczorowski and Perelli [8], and the author [1, 2]. In the present paper, we establish an analogous result in which \(\zeta(s)\) is replaced by the Dirichlet \(L\)-function \(L(s,\chi)\) attached to an arbitrary primitive character \(\chi\). To formulate the theorem, we introduce some notation. In what follows, \(C_{c}^{\infty}(\mathbb{R}^{+})\) denotes the space of smooth functions \(\mathcal{B}:\mathbb{R}^{+}\to\mathbb{C}\) with compact support in \(\mathbb{R}^{+}\). As usual, we write \(\mathbf{e}(u)\coloneqq\mathrm{e}^{2\pi iu}\) for all \(u\in\mathbb{R}\). Let \(\chi\) be a primitive character modulo \(q\), and put \[\kappa_{\chi}\coloneqq\begin{cases}0&\text{if }\chi(-1)=+1,\\ 1&\text{if }\chi(-1)=-1,\end{cases}\quad\tau(\chi)\coloneqq\sum_{a\bmod q}\chi(a)\mathbf{e}(a/q),\quad\epsilon_{\chi}\coloneqq\frac{\tau(\chi)}{i^{\kappa_{\chi}}\sqrt{q}}. \tag{1.1}\] We recall the asymmetric form of the functional equation \[L(s,\chi)=\mathcal{X}_{\chi}(s)L(1-s,\overline{\chi}),\] where \[\mathcal{X}_{\chi}(s)\coloneqq\epsilon_{\chi}2^{s}\pi^{s-1}q^{1/2-s}\Gamma(1-s)\sin\tfrac{\pi}{2}(s+\kappa_{\chi}). \tag{1.2}\] In particular, if \(\mathbb{1}\) is the trivial character given by \(\mathbb{1}(n)=1\) for all \(n\in\mathbb{Z}\), then the functional equation \(\zeta(s)=\mathcal{X}_{\mathbb{1}}(s)\zeta(1-s)\) holds with \[\mathcal{X}_{\mathbb{1}}(s)\coloneqq 2^{s}\pi^{s-1}\Gamma(1-s)\sin\tfrac{\pi s}{2}.\] Finally, for a fixed primitive character \(\chi\) modulo \(q\), consider the following two hypotheses concerning the zeros of \(L(s,\chi)\). The first hypothesis is Hypothesis \(\mathtt{GRH}[\chi]\): _If \(L(\beta+i\gamma,\chi)=0\) and \(\beta>0\), then \(\beta=\tfrac{1}{2}\)._ Note that \(\mathtt{GRH}\) is equivalent to the assertion that \(\mathtt{GRH}[\chi]\) holds for all primitive characters \(\chi\).
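Before stating the second hypothesis, it may help to see the quantities in (1.1) in one concrete case; this worked example is only illustrative and is not part of the original argument. Take \(q=4\) and let \(\chi\) be the (unique) primitive character modulo \(4\), so that \(\chi(1)=1\) and \(\chi(3)=-1\). Then \(\chi(-1)=\chi(3)=-1\), whence \(\kappa_{\chi}=1\); moreover \[\tau(\chi)=\chi(1)\mathbf{e}(1/4)+\chi(3)\mathbf{e}(3/4)=i-(-i)=2i,\qquad\epsilon_{\chi}=\frac{\tau(\chi)}{i^{\kappa_{\chi}}\sqrt{q}}=\frac{2i}{2i}=1,\] so that \(|\tau(\chi)|=\sqrt{q}\) and \(|\epsilon_{\chi}|=1\), as expected for a primitive character.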
The second (stronger) hypothesis is Hypothesis \(\mathtt{GRH}^{\dagger}[\chi]\): _Hypothesis \(\mathtt{GRH}[\chi]\) is true, and for any rational number \(\xi\coloneqq h/k\) with \(h,k>0\) and \((h,k)=1\), any \(\mathcal{B}\in C_{c}^{\infty}(\mathbb{R}^{+})\), and any \(\varepsilon>0\), the bound_ \[\sum_{\rho=\frac{1}{2}+i\gamma}\xi^{-\rho}\mathcal{X}_{\overline{\chi}}(1-\rho )\mathcal{B}\Big{(}\frac{\gamma}{2\pi\xi X}\Big{)}+C_{\mathcal{B}}\widetilde{ C}_{\chi,\xi}X\underset{\chi,\xi,\mathcal{B},\varepsilon}{\ll}X^{1/2+\varepsilon} \tag{1.3}\] _holds for \(X\to\infty\), where_ \[C_{\mathcal{B}}\coloneqq\int_{0}^{\infty}\mathcal{B}(u)\,du,\qquad\widetilde{ C}_{\chi,\xi}\coloneqq\begin{cases}\dfrac{\overline{\chi}(h)\chi(k)\mu(k)\,q}{ \phi(qk)}&\text{if }(h,q)=1,\\ 0&\text{otherwise},\end{cases} \tag{1.4}\] _the sum in (1.3) runs over the complex zeros \(\rho=\frac{1}{2}+i\gamma\) of \(L(s,\chi)\)\((\)counted with multiplicity\()\), and the implied constant in (1.3) depends only on \(\chi\), \(\xi\), \(\mathcal{B}\), and \(\varepsilon\)._ Our aim in this note is to prove the following theorem. Theorem 1.1.: _For each primitive character \(\chi\), the hypotheses \(\mathtt{GRH}^{\dagger}[\chi]\) and \(\mathtt{GRH}\) are equivalent._ Theorem 1.1 generalizes the main result of [2], in which the author showed that \(\mathtt{GRH}^{\dagger}[\mathbb{1}]\) and \(\mathtt{GRH}\) are equivalent. Corollary 1.2.: _If \(\mathtt{GRH}^{\dagger}[\chi]\) holds for one primitive character \(\chi\), then it holds for all primitive characters._ We emphasize that \(\mathtt{GRH}^{\dagger}[\chi]\) is formulated entirely in terms of the zeros of a single \(L\)-function \(L(s,\chi)\). If \(\chi\) and \(\psi\) are different primitive characters, it is reasonable to predict that the zeros of \(L(s,\chi)\) and \(L(s,\psi)\) are unrelated, and that there is no reason _a priori_ that the hypotheses \(\mathtt{GRH}^{\dagger}[\chi]\) and \(\mathtt{GRH}^{\dagger}[\psi]\) should be connected. Nevertheless, \(\mathtt{GRH}^{\dagger}[\chi]\) and \(\mathtt{GRH}^{\dagger}[\psi]\) are actually equivalent in view of Corollary 1.2. ## 2. Preliminaries We continue to use the notation introduced in SS1. Below, \(\chi\) always denotes an arbitrary (but fixed) primitive Dirichlet character of modulus \(q\geqslant 1\). Throughout the paper, implied constants in the symbols \(\ll\), \(O\), etc., may depend on various parameters as indicated by the notation (see, e.g., (1.3)), but such constants are independent of all other parameters. ### The function \(\mathcal{X}_{\chi}(s)\) **Lemma 2.1**.: _Let \(\mathcal{I}\) be a bounded interval in \(\mathbb{R}\). Uniformly for \(c\in\mathcal{I}\) and \(t\geqslant 1\), we have_ \[\mathcal{X}_{\chi}(1-c-it)=\tau(\chi)q^{c-1}\mathrm{e}^{-\pi i/4}\exp\Big{(} it\log\Big{(}\frac{qt}{2\pi}\mathrm{e}\Big{)}\Big{)}\Big{(}\frac{t}{2\pi} \Big{)}^{c-1/2}\big{\{}1+O_{\mathcal{I}}(t^{-1})\big{\}}. \tag{2.1}\] Proof.: Replacing \(s\) by \(1-s\) in (1.2) gives \[\mathcal{X}_{\chi}(1-s)=\epsilon_{\chi}2^{1-s}\pi^{-s}q^{s-1/2}\Gamma(s)\sin \tfrac{\pi}{2}(1-s+\kappa_{\chi}).\] Let \(s=c+it\), \(t\geqslant 1\). 
Using Stirling's formula for the gamma function \[\Gamma(s)=\sqrt{2\pi}\,s^{s-1/2}\mathrm{e}^{-s}\{1+O(t^{-1})\}\] (see, e.g., Montgomery and Vaughan [10, Theorem C.1]) along with the estimates \[(s-\tfrac{1}{2})\log s=(c-\tfrac{1}{2})\log t+c+(t\log t-\tfrac{\pi}{4})i+ \tfrac{\pi is}{2}+O(t^{-1})\] and \[\sin\tfrac{\pi}{2}(1-s+\kappa_{\chi})=\tfrac{1}{2}i^{\kappa_{\chi}}\mathrm{e} ^{-\pi is/2}\{1+O(\mathrm{e}^{-\pi t})\},\] and recalling (1.1), a straightforward computation leads to (2.1). The following lemma is due to Gonek [7, Lemma 2]; the proof is based on the stationary phase method. **Lemma 2.2**.: _Uniformly for \(c\in[\tfrac{1}{10},2]\) and \(a<b\leqslant 2a\), we have_ \[\int_{a}^{b}\exp\Big{(}it\log\Big{(}\frac{t}{u\mathrm{e}}\Big{)}\Big{)}\Big{(} \frac{t}{2\pi}\Big{)}^{c-1/2}dt=(2\pi)^{1-c}u^{c}\mathrm{e}^{-iu+\pi i/4} \cdot\mathbf{1}_{a,b}(u)+\widetilde{E}(a,b,u),\] _where_ \[\mathbf{1}_{a,b}(u):=\begin{cases}1&\text{if }u\in(a,b],\\ 0&\text{otherwise},\end{cases}\] _and_ \[\widetilde{E}(a,b,u)\ll a^{c-1/2}+\frac{a^{c+1/2}}{|a-u|+a^{1/2}}+\frac{b^{c+ 1/2}}{|b-u|+b^{1/2}}.\] The next lemma is a variant of Conrey, Ghosh, and Gonek [3, Lemma 1]. **Lemma 2.3**.: _Uniformly for \(v>0\) and \(c\in[\tfrac{1}{10},2]\), we have_ \[\frac{1}{2\pi i}\int_{c+i}^{c+iT}v^{-s}\mathcal{X}_{\chi}(1-s)\,ds=\begin{cases} \frac{\tau(\chi)}{q}\,\mathbf{e}(-v/q)+E(q,T,v)&\text{if }\tfrac{q}{2\pi}<v\leqslant \tfrac{qT}{2\pi},\\ E(q,T,v)&\text{otherwise},\end{cases}\] _where_ \[E(q,T,v)\ll\frac{q^{c-1/2}}{v^{c}}\bigg{(}T^{c-1/2}+\frac{T^{c+1/2}}{|T-2\pi v/q| +T^{1/2}}\bigg{)}.\] Proof.: Using Lemma 2.1, we have \[\frac{1}{2\pi i}\int_{c+i}^{c+iT}v^{-s}\mathcal{X}_{\chi}(1-s)\, ds=\frac{1}{2\pi}\int_{1}^{T}v^{-c-it}\mathcal{X}_{\chi}(1-c-it)\,dt\\ =\frac{\tau(\chi)q^{c-1}\mathrm{e}^{-\pi i/4}}{2\pi v^{c}}\bigg{(} \int_{1}^{T}\exp\Big{(}it\log\Big{(}\frac{qt}{2\pi v\mathrm{e}}\Big{)}\Big{)} \Big{(}\frac{t}{2\pi}\Big{)}^{c-1/2}\,dt+O(T^{c-1/2})\bigg{)},\] and the result follows by applying Lemma 2.2 with \(u:=2\pi v/q\). ### Essential bound Lemma 2.4.: _For any \(t\geqslant 2\), there is a real number \(t_{*}\in[t,t+1]\) such that_ \[\frac{L^{\prime}}{L}(\sigma\pm it_{*},\chi)\ll(\log qt)^{2}\qquad(-1\leqslant \sigma\leqslant 2).\] Proof.: See [10, Lemmas 12.2 and 12.7]. ### Conditional results Lemma 2.5.: _The following statements are equivalent:_ 1. GRH _is true;_ 2. RH _is true, and for any primitive character_ \(\psi\neq\mathbb{1}\)_, any_ \(\mathcal{B}\in C_{c}^{\infty}(\mathbb{R}^{+})\)_, and any_ \(\varepsilon>0\)_, we have_ \[\sum_{n}\Lambda(n)\psi(n)\mathcal{B}(n/X)\underset{\psi,\mathcal{B}, \varepsilon}{\ll}X^{1/2+\varepsilon};\] (2.2) 3. RH _is true, and for any nonprincipal character_ \(\psi\)_, any_ \(\mathcal{B}\in C_{c}^{\infty}(\mathbb{R}^{+})\)_, and any_ \(\varepsilon>0\)_, the bound (_2.2_) holds._ Proof.: The equivalence \((i)\Longleftrightarrow(ii)\) is the content of [2, Lemma 2.2], and the implication \((iii)\Longrightarrow(ii)\) is obvious. Using the simple bound \[\sum_{\begin{subarray}{c}n\leqslant N\\ (n,M)\neq 1\end{subarray}}\Lambda(n)\ll\log M\cdot\log N\qquad(M,N\geqslant 1), \tag{2.3}\] the implication \((ii)\Longrightarrow(iii)\) is immediate. 
Lemma 2.6.: _Under_ GRH_, for any \(\xi\in\mathbb{Q}^{+}\), \(\mathcal{B}\in C_{c}^{\infty}(\mathbb{R}^{+})\), and \(\varepsilon>0\), we have_ \[\frac{\tau(\overline{\chi})}{q}\sum_{n}\Lambda(n)\chi(n)\mathbf{e}(-n\xi/q)\mathcal{B}(n/qX)=C_{\mathcal{B}}\widetilde{C}_{\chi,\xi}X+O_{\chi,\xi,\mathcal{B},\varepsilon}(X^{1/2+\varepsilon}), \tag{2.4}\] _where \(C_{\mathcal{B}}\) and \(\widetilde{C}_{\chi,\xi}\) are defined by (1.4)._ Proof.: Let \(\xi:=h/k\) with \(h,k>0\) and \((h,k)=1\). Using (2.3), we see that the sum in (2.4) is equal to \[\sum_{(n,qk)=1}\Lambda(n)\chi(n)\mathbf{e}(-nh/qk)\mathcal{B}(n/qX)+O_{\chi,\xi,\mathcal{B}}(1),\] and the latter sum can be expressed as \[\sum_{\begin{subarray}{c}a\bmod qk\\ (a,qk)=1\end{subarray}}\mathbf{e}(-ah/qk)\chi(a)\sum_{n\equiv a\bmod qk}\Lambda(n)\mathcal{B}(n/qX)\\ =\frac{1}{\phi(qk)}\sum_{\begin{subarray}{c}a\bmod qk\\ (a,qk)=1\end{subarray}}\mathbf{e}(-ah/qk)\chi(a)\sum_{\psi\bmod qk}\overline{\psi}(a)\sum_{n}\Lambda(n)\psi(n)\mathcal{B}(n/qX),\] where the middle sum runs over all characters \(\psi\) modulo \(qk\). By Lemma 2.5\((iii)\) the contribution from all nonprincipal characters \(\psi\) is \(O_{\chi,\xi,\mathcal{B},\varepsilon}(X^{1/2+\varepsilon})\). On the other hand, for the principal character \(\psi_{0}\), the contribution is \[\frac{C}{\phi(qk)}\sum_{(n,qk)=1}\Lambda(n)\mathcal{B}(n/qX)=\frac{C}{\phi(qk)}\sum_{n}\Lambda(n)\mathcal{B}(n/qX)+O_{\chi,\xi,\mathcal{B}}(1),\] where we used (2.3) again, and \[C\coloneqq\sum_{\begin{subarray}{c}a\bmod qk\\ (a,qk)=1\end{subarray}}\mathbf{e}(-ah/qk)\chi(a).\] By [10, Theorem 9.12], \[C=\begin{cases}\overline{\chi}(-h)\chi(k)\mu(k)\tau(\chi)&\quad\text{if }(h,q)=1,\\ 0&\quad\text{otherwise},\end{cases}\] and by [2, Lemma 2.3], \[\sum_{n}\Lambda(n)\mathcal{B}(n/qX)=C_{\mathcal{B}}\cdot qX+O_{\chi,\mathcal{B}}(X^{1/2}\log^{2}X).\] To finish the proof, observe that \[\frac{\tau(\overline{\chi})}{q}\cdot\frac{C}{\phi(qk)}\cdot q=\widetilde{C}_{\chi,\xi},\] which follows from the well known relation \[\tau(\chi)\tau(\overline{\chi})=\chi(-1)q \tag{2.5}\] for the Gauss sums defined in (1.1). ## 3. Twisting the von Mangoldt function The results of this section are _unconditional_. Theorem 3.1.: _For any \(\xi\in\mathbb{R}^{+}\) and \(T\geqslant 2q^{2}\), we have_ \[\sum_{\begin{subarray}{c}\rho=\beta+i\gamma\\ 0<\gamma\leqslant T\end{subarray}}\xi^{-\rho}\mathcal{X}_{\overline{\chi}}(1-\rho)+\frac{\tau(\overline{\chi})}{q}\sum_{n\leqslant qT/2\pi\xi}\Lambda(n)\chi(n)\mathbf{e}(-n\xi/q)\underset{\xi}{\ll}(qT)^{1/2}\log^{2}T,\] _where the first sum runs over complex zeros \(\rho=\beta+i\gamma\) of \(L(s,\chi)\)\((\)counted with multiplicity\()\)._ Proof.: For any \(u>0\), let \[\Sigma_{1}(u)\coloneqq\sum_{\begin{subarray}{c}\rho=\beta+i\gamma\\ 0<\gamma\leqslant u\end{subarray}}\xi^{-\rho}\mathcal{X}_{\overline{\chi}}(1-\rho),\qquad\Sigma_{2}(u)\coloneqq\frac{\tau(\overline{\chi})}{q}\sum_{n\leqslant qu/2\pi\xi}\Lambda(n)\chi(n)\mathbf{e}(-n\xi/q).\] Our goal is to show that \[\Sigma_{1}(T)+\Sigma_{2}(T)\underset{\xi}{\ll}(qT)^{1/2}\log^{2}T. \tag{3.1}\] According to Lemma 2.4, there is a number \(t_{\circ}\in[2,3]\) such that \[\frac{L^{\prime}}{L}(\sigma\pm it_{\circ},\chi)\ll\log^{2}2q\qquad(-1\leqslant\sigma\leqslant 2). \tag{3.2}\] Let \(t_{\circ}\) be fixed in what follows.
Note that \[\Sigma_{1}(t_{\circ})\underset{\xi}{\ll}q^{3/2}\log 2q, \tag{3.3}\] since \(|\xi^{-\rho}\mathcal{X}_{\overline{\chi}}(1-\rho)|\ll_{\xi}q^{1/2}\) for all zeros \(\rho=\beta+i\gamma\) with \(0<\gamma\leqslant t_{\circ}\) (see Lemma 2.1), and there are only \(O(q\log 2q)\) such zeros (see Siegel [11, Theorem III]). By the Chebyshev bound, we also have \[\Sigma_{2}(t_{\circ})\underset{\xi}{\ll}q^{1/2}. \tag{3.4}\] Using Lemma 2.4 again, for any \(T_{*}\geqslant 2q^{2}\), there is a number \(T\in[T_{*},T_{*}+1]\) for which \[\frac{L^{\prime}}{L}(\sigma\pm iT,\chi)\ll\log^{2}qT\ll\log^{2}T\qquad(-1 \leqslant\sigma\leqslant 2). \tag{3.5}\] To prove (3.1) in general, one can assume without loss of generality that \(T\) has the property (3.5). Indeed, let \(T_{*}\geqslant 2q^{2}\) be arbitrary, and suppose \(T\in[T_{*},T_{*}+1]\) satisfies both (3.1) and (3.5). By Lemma 2.1, the bound \(\mathcal{X}_{\overline{\chi}}(1-\rho)\ll(q\gamma)^{1/2}\) holds uniformly for all complex zeros \(\rho=\beta+i\gamma\) of \(L(s,\chi)\) with \(\gamma\geqslant 1\); therefore, \[\big{|}\Sigma_{1}(T_{*})-\Sigma_{1}(T)\big{|}\leqslant\sum_{ \begin{subarray}{c}\rho=\beta+i\gamma\\ T_{*}<\gamma\leqslant T_{*}+1\end{subarray}}\big{|}\xi^{-\rho}\mathcal{X}_{ \overline{\chi}}(1-\rho)\big{|}\underset{\xi}{\ll}(qT_{*})^{1/2}\log T_{*}\] since the number of zeros with \(T_{*}<\gamma\leqslant T_{*}+1\) is at most \(O(\log T_{*})\). Moreover, \[\big{|}\Sigma_{2}(T_{*})-\Sigma_{2}(T)\big{|}\leqslant\frac{|\tau(\overline{ \chi})|}{q}\sum_{qT_{*}/2\pi\xi<n\leqslant q(T_{*}+1)/2\pi\xi}\big{|}\Lambda(n )\chi(n)\mathbf{e}(-n\xi/q)\big{|}\underset{\xi}{\ll}q^{1/2}\log T_{*}.\] Combining the preceding two bounds and (3.1), we deduce that \[\Sigma_{1}(T_{*})+\Sigma_{2}(T_{*})\underset{\xi}{\ll}(qT_{*})^{1/2}\log^{2}T _{*},\] which shows that (3.1) holds with \(T_{*}\) in place of \(T\). From now on, we can assume that \(T\) satisfies (3.5). Put \(c\coloneqq 1+\frac{1}{\log qT}\), and let \(\mathcal{C}\) be the following rectangle in \(\mathbb{C}\): \[c+it_{\circ}\ \longrightarrow\ c+iT\ \longrightarrow\ -1+iT\ \longrightarrow\ -1+it_{\circ}\ \longrightarrow\ c+it_{\circ}.\] Note that, by (3.2) and (3.5), neither \(t_{\circ}\) nor \(T\) is the ordinate of a zero of \(L(s,\chi)\). By Cauchy's theorem, \[\Sigma_{1}(T)-\Sigma_{1}(t_{\circ})=\frac{1}{2\pi i}\oint_{\mathcal{ C}}\frac{L^{\prime}}{L}(s,\chi)\,\xi^{-s}\mathcal{X}_{\overline{\chi}}(1-s)\,ds\] \[\qquad\qquad=\frac{1}{2\pi i}\bigg{(}\!\int_{c+it_{\circ}}^{c+iT} \!+\!\int_{c+iT}^{-1+iT}\!+\!\int_{-1+iT_{\circ}}^{-1+it_{\circ}}\!+\!\int_{-1+ it_{\circ}}^{c+it_{\circ}}\bigg{)}\frac{L^{\prime}}{L}(s,\chi)\,\xi^{-s}\mathcal{X}_{ \overline{\chi}}(1-s)\,ds\] \[\qquad\qquad=I_{1}+I_{2}+I_{3}+I_{4}\quad(\text{say}),\] and so by (3.3), we have \[\Sigma_{1}(T)=I_{1}+I_{2}+I_{3}+I_{4}+O_{\xi}(q^{3/2}\log 2q). \tag{3.6}\] We estimate the four integrals \(I_{j}\) separately. 
Noting that \[\frac{L^{\prime}}{L}(s,\chi)=-\sum_{n}\frac{\Lambda(n)\chi(n)}{n^{s}}\qquad( \sigma>1),\] and applying Lemma 2.3, we get that \[I_{1} =-\sum_{n}\Lambda(n)\chi(n)\cdot\frac{1}{2\pi i}\int_{c+it_{ \circ}}^{c+iT}(n\xi)^{-s}\mathcal{X}_{\overline{\chi}}(1-s)\,ds\] \[=-\frac{\tau(\overline{\chi})}{q}\sum_{qt_{\circ}/2\pi\xi<n\leq qT /2\pi\xi}\Lambda(n)\chi(n)\mathbf{e}(-n\xi/q)-\sum_{n}\Lambda(n)\chi(n)E(q,T, n\xi)\] \[=\Sigma_{2}(t_{\circ})-\Sigma_{2}(T)+O\bigg{(}\frac{(qT)^{c-1/2} }{\xi^{c}}(E_{1}+E_{2})\bigg{)},\] where \[E_{1}\coloneqq\sum_{n}\frac{\Lambda(n)}{n^{c}}\qquad\text{and}\qquad E_{2} \coloneqq\sum_{n}\frac{\Lambda(n)}{n^{c}}\frac{T}{|T-2\pi n\xi/q|+T^{1/2}}.\] Note that \((qT)^{c-1/2}\asymp(qT)^{1/2}\), and recall that \(\Sigma_{2}(t_{\circ})\) satisfies (3.4). Also, \[E_{1}=-\frac{\zeta^{\prime}}{\zeta}(c)\ll\frac{1}{c-1}=\log qT\ll\log T. \tag{3.7}\] To bound \(E_{2}\), we split the integers \(n\geqslant 2\) into three disjoint sets: \[S_{1} \coloneqq\{n\geqslant 2:|T-2\pi n\xi/q|>\tfrac{1}{2}T\},\] \[S_{2} \coloneqq\{n\geqslant 2:|T-2\pi n\xi/q|\leqslant T^{1/2}\},\] \[S_{3} \coloneqq\{n\geqslant 2:T^{1/2}<|T-2\pi n\xi/q|\leqslant\tfrac{1}{2} T\}.\] Using (3.7), we have \[\sum_{n\in S_{1}}\frac{\Lambda(n)}{n^{c}}\frac{T}{|T-2\pi n\xi/q|+T^{1/2}}\ll \log T,\] which is acceptable. For each \(n\in S_{2}\), we have \[n\underset{\xi}{\asymp}qT,\qquad\frac{\Lambda(n)}{n^{c}}\ll\frac{\log qT}{( qT)^{c}}\ll\frac{\log T}{qT},\qquad\frac{T}{|T-2\pi n\xi/q|+T^{1/2}}\asymp T^{1/2}.\] Since \(|S_{2}|\ll_{\xi}qT^{1/2}\), it follows that \[\sum_{n\in S_{2}}\frac{\Lambda(n)}{n^{c}}\frac{T}{|T-2\pi n\xi/q|+T^{1/2}}\,\ll \limits_{\xi}\log T.\] Similarly, \[\sum_{n\in S_{3}}\frac{\Lambda(n)}{n^{c}}\frac{T}{|T-2\pi n\xi|+T^{1/2}}\,\ll \limits_{\xi}\frac{\log T}{q}\sum_{n\in S_{3}}\frac{1}{|T-2\pi n\xi/q|+T^{1/2}}.\] The last sum is bounded by \[\ll\sum_{T^{1/2}\leqslant k\leqslant\frac{1}{2}T}\sum_{\begin{subarray}{c}n \geqslant 1\\ k<|T-2\pi n\xi/q|\leqslant k+1\end{subarray}}\frac{1}{|T-2\pi n\xi/q|+T^{1/2}} \,\ll\,\sum_{\xi}\sum_{T^{1/2}\leqslant k\leqslant\frac{1}{2}T}\frac{q}{k} \ll q\log T,\] hence \[\sum_{n\in S_{3}}\frac{\Lambda(n)}{n^{c}}\frac{T}{|T-2\pi n\xi|+T^{1/2}}\,\ll \,\log^{2}T.\] Putting everything together, we deduce that \[I_{1}=-\Sigma_{2}(T)+O_{\xi}\big{(}(qT)^{1/2}\log^{2}T\big{)}.\] Next, by Lemma 2.1, we have the uniform bound \[\mathcal{X}_{\overline{\chi}}(1-\sigma-it)\ll(qt)^{\sigma-1/2}\qquad(-1 \leqslant\sigma\leqslant c,\ t\geqslant 1). \tag{3.8}\] Recalling (3.5), it follows that \[I_{2}=-\frac{1}{2\pi i}\int_{-1+iT}^{c+iT}\frac{L^{\prime}}{L}(s,\chi)\,\xi^{ -s}\mathcal{X}_{\overline{\chi}}(1-s)\,ds\,\ll\,(qT)^{1/2}\log^{2}T.\] Similarly, by [10, Lemmas 12.4 and 12.9], we have the bound \[\frac{L^{\prime}}{L}(-1+it,\chi)\ll\log 2qt\qquad(1\leqslant t\leqslant T).\] Taking \(\sigma\coloneqq-1\) in (3.8), we get that \[I_{3}=-\frac{1}{2\pi i}\int_{-1+it_{\circ}}^{-1+iT}\frac{L^{\prime}}{L}(s, \chi)\,\xi^{-s}\mathcal{X}_{\overline{\chi}}(1-s)\,ds\ll\xi^{-1}\int_{t_{ \circ}}^{T}\frac{\log 2qt}{(qt)^{3/2}}\,dt\,\ll\,1.\] Finally, using (3.8) and (3.2), we get that \[I_{4}=\frac{1}{2\pi i}\int_{-1+it_{\circ}}^{c+it_{\circ}}\frac{L^{\prime}}{L} (s,\chi)\,\xi^{-s}\mathcal{X}_{\overline{\chi}}(1-s)\,ds\,\ll\,q^{1/2}\log^{2 }2q.\] Combining (3.6) and the above estimates for the integrals \(I_{j}\), we obtain (3.1), and the proof is complete. Let \(\log_{+}u:=\max\{\log u,1\}\) for all \(u>0\). 
Corollary 3.2.: _For any \(\xi\in\mathbb{R}^{+}\) and \(T>0\), we have_ \[\sum_{\begin{subarray}{c}\rho=\beta+i\gamma\\ 0<\gamma\leqslant T\end{subarray}}\xi^{-\rho}\mathcal{X}_{\overline{\chi}}(1-\rho)+\frac{\tau(\overline{\chi})}{q}\sum_{n\leqslant qT/2\pi\xi}\Lambda(n)\chi(n)\mathbf{e}(-n\xi/q)\,\ll\,T^{1/2}\log_{+}^{2}T,\] _where the first sum runs over complex zeros \(\rho=\beta+i\gamma\) of \(L(s,\chi)\)\((\)counted with multiplicity\()\)._ Corollary 3.3.: _For any \(\xi\in\mathbb{R}^{+}\), \(\mathcal{B}\in C_{c}^{\infty}(\mathbb{R}^{+})\), and \(X>0\), we have_ \[\sum_{\rho=\beta+i\gamma}\xi^{-\rho}\mathcal{X}_{\overline{\chi}}(1-\rho)\mathcal{B}\Big{(}\frac{\gamma}{2\pi\xi X}\Big{)}+\frac{\tau(\overline{\chi})}{q}\sum_{n}\Lambda(n)\chi(n)\mathbf{e}(-n\xi/q)\mathcal{B}(n/qX)\underset{\chi,\xi,\mathcal{B}}{\ll}X^{1/2}\log_{+}^{2}X,\] _where the first sum runs over the complex zeros \(\rho=\beta+i\gamma\) of \(L(s,\chi)\)\((\)counted with multiplicity\()\)._ Proof.: Let \(\Sigma_{1}\) and \(\Sigma_{2}\) be the step functions defined in the proof of Theorem 3.1, and put \[\Sigma_{3}(u):=\Sigma_{2}(2\pi\xi u/q)=\frac{\tau(\overline{\chi})}{q}\sum_{n\leqslant u}\Lambda(n)\chi(n)\mathbf{e}(-n\xi/q).\] Applying Corollary 3.2 with \(T\coloneqq 2\pi\xi Xu\), we have \[\Sigma_{1}(2\pi\xi Xu)+\Sigma_{2}(2\pi\xi Xu)\underset{\chi,\xi}{\ll}(Xu)^{1/2}\log_{+}^{2}(Xu). \tag{3.9}\] Using Riemann-Stieltjes integration, we have \[\sum_{\rho=\beta+i\gamma}\xi^{-\rho}\mathcal{X}_{\overline{\chi}}(1-\rho)\mathcal{B}\Big{(}\frac{\gamma}{2\pi\xi X}\Big{)}=\int_{0}^{\infty}\mathcal{B}\Big{(}\frac{u}{2\pi\xi X}\Big{)}\,d\Sigma_{1}(u)\\ =\int_{0}^{\infty}\mathcal{B}(u)\,d\Sigma_{1}(2\pi\xi Xu)=-\int_{0}^{\infty}\mathcal{B}^{\prime}(u)\Sigma_{1}(2\pi\xi Xu)\,du,\] and \[\frac{\tau(\overline{\chi})}{q}\sum_{n}\Lambda(n)\chi(n)\mathbf{e}(-n\xi/q)\mathcal{B}(n/qX)=\int_{0}^{\infty}\mathcal{B}(u/qX)\,d\Sigma_{3}(u)\] \[=\int_{0}^{\infty}\mathcal{B}(u)\,d\Sigma_{3}(qXu)=-\int_{0}^{\infty}\mathcal{B}^{\prime}(u)\Sigma_{3}(qXu)\,du.\] Summing these expressions and using (3.9), the result follows. ## 4. Proof of Theorem 1.1 Throughout the proof, we fix \(q\geqslant 1\) and a primitive character \(\chi\) modulo \(q\), and thus, any implied constants in the symbols \(\ll\), \(O\), etc., are independent of \(\chi\) (and \(q\)). In particular, \(q\ll 1\). First, assume GRH. Then \(\mathtt{GRH}[\chi]\) is true. For any \(\xi\in\mathbb{Q}^{+}\), \(\mathcal{B}\in C_{c}^{\infty}(\mathbb{R}^{+})\), and \(\varepsilon>0\), we have by Lemma 2.6: \[\frac{\tau(\overline{\chi})}{q}\sum_{n}\Lambda(n)\chi(n)\mathbf{e}(-n\xi/q)\mathcal{B}(n/qX)=C_{\mathcal{B}}\widetilde{C}_{\chi,\xi}X+O_{\xi,\mathcal{B},\varepsilon}(X^{1/2+\varepsilon}). \tag{4.1}\] Also, Corollary 3.3 states that \[\sum_{\rho=\frac{1}{2}+i\gamma}\xi^{-\rho}\mathcal{X}_{\overline{\chi}}(1-\rho)\mathcal{B}\Big{(}\frac{\gamma}{2\pi\xi X}\Big{)}+\frac{\tau(\overline{\chi})}{q}\sum_{n}\Lambda(n)\chi(n)\mathbf{e}(-n\xi/q)\mathcal{B}(n/qX)\underset{\xi,\mathcal{B},\varepsilon}{\ll}X^{1/2+\varepsilon}.\] Combining these results, we obtain (1.3), which proves the validity of \(\mathtt{GRH}^{\dagger}[\chi]\). Conversely, suppose \(\mathtt{GRH}^{\dagger}[\chi]\) is true.
Then \(\mathtt{GRH}[\chi]\) is true, and for any \(\xi\in\mathbb{Q}^{+}\), \(\mathcal{B}\in C_{c}^{\infty}(\mathbb{R}^{+})\), and \(\varepsilon>0\), we have \[\sum_{\rho=\frac{1}{2}+i\gamma}\xi^{-\rho}\mathcal{X}_{\overline{\chi}}(1-\rho) \mathcal{B}\Big{(}\frac{\gamma}{2\pi\xi X}\Big{)}+C_{\mathcal{B}}\widetilde{C }_{\chi,\xi}X\underset{\xi,\mathcal{B},\varepsilon}{\ll}X^{1/2+\varepsilon}\] Using Corollary 3.3 again, it follows that (4.1) holds for our character \(\chi\). Now let \(\psi\) be an arbitrary primitive character of modulus \(\tilde{q}\geqslant 1\), and let \(\vartheta\) be the character modulo \(q\tilde{q}\) defined by \(\vartheta(n)\coloneqq\psi(n)\overline{\chi}(n)\) for all \(n\in\mathbb{Z}\). Put \[\mathcal{W}\coloneqq\tau(\overline{\vartheta})\sum_{n}\Lambda(n)\psi(n) \mathcal{B}(n/X). \tag{4.2}\] Note that (see, e.g., [10, Theorem 9.10]) \[\tau(\overline{\vartheta})=\chi(\tilde{q})\mu(\tilde{q})\tau(\chi). \tag{4.3}\] Using (2.3), we have \[\mathcal{W}=\tau(\overline{\vartheta})\sum_{(n,q\tilde{q})=1}\Lambda(n) \vartheta(n)\chi(n)\mathcal{B}(n/X)+O_{\psi,\mathcal{B}}(1).\] When \((n,q\tilde{q})=1\), we have (see [10, Theorem 9.5]) \[\vartheta(n)\tau(\overline{\vartheta})=\sum_{a\bmod q\tilde{q}}\overline{ \vartheta}(a)\mathbf{e}(an/q\tilde{q})=\sum_{\begin{subarray}{c}1\leqslant a \leqslant q\tilde{q}\\ (a,q\tilde{q})=1\end{subarray}}\overline{\vartheta}(-a)\mathbf{e}(-an/q \tilde{q}),\] and so we get that \[\mathcal{W}=\sum_{\begin{subarray}{c}1\leqslant a\leqslant q\tilde{q}\\ (a,q\tilde{q})=1\end{subarray}}\overline{\vartheta}(-a)\sum_{(n,q\tilde{q})=1 }\Lambda(n)\chi(n)\mathbf{e}(-an/q\tilde{q})\mathcal{B}(n/X)+O_{\psi,\mathcal{ B}}(1).\] Applying (4.1) with \(X/q\) instead of \(X\), and \(\xi\coloneqq a/\tilde{q}\) for \(1\leqslant a\leqslant q\tilde{q}\) with \((a,q\tilde{q})=1\), we derive that \[\mathcal{W}=\frac{C_{\mathcal{B}}X}{\tau(\overline{\chi})}\sum_{ \begin{subarray}{c}1\leqslant a\leqslant q\tilde{q}\\ (a,q\tilde{q})=1\end{subarray}}\overline{\vartheta}(-a)\widetilde{C}_{\chi,a /\tilde{q}}+O_{\psi,\mathcal{B},\varepsilon}(X^{1/2+\varepsilon}).\] Finally, when \((a,q\tilde{q})=1\), we have \[\overline{\vartheta}(-a)\widetilde{C}_{\chi,\xi}=\overline{\psi}(-a)\chi(-a) \cdot\frac{\overline{\chi}(a)\chi(\tilde{q})\mu(\tilde{q})\,q}{\phi(q\tilde{ q})}=\overline{\psi}(-a)\cdot\frac{\tau(\overline{\vartheta})\tau(\overline{ \chi})}{\phi(q\tilde{q})},\] where we used the relations (2.5) and (4.3) in the last step; consequently, \[\mathcal{W}=C_{\mathcal{B}}\,\frac{\tau(\overline{\vartheta})}{\phi(q\tilde{ q})}\,X\sum_{\begin{subarray}{c}1\leqslant a\leqslant q\tilde{q}\\ (a,q\tilde{q})=1\end{subarray}}\overline{\psi}(-a)+O_{\psi,\mathcal{B}, \varepsilon}(X^{1/2+\varepsilon}). \tag{4.4}\] Comparing (4.2) and (4.4), we conclude that the estimates \[\sum_{n}\Lambda(n)\mathcal{B}(n/X)=C_{\mathcal{B}}X+O_{\mathcal{B},\varepsilon }(X^{1/2+\varepsilon}) \tag{4.5}\] and \[\sum_{n}\Lambda(n)\psi(n)\mathcal{B}(n/X)\underset{\psi,\mathcal{B}, \varepsilon}{\ll}X^{1/2+\varepsilon}\qquad(\psi\neq\mathbb{1})\] hold for all choices of \(\mathcal{B}\in C_{c}^{\infty}(\mathbb{R}^{+})\) and \(\varepsilon>0\). Since (4.5) implies RH, this verifies statement Lemma 2.5\((ii)\), and by the lemma, GRH follows.
2309.06947
Spin glass theory and its new challenge: structured disorder
This paper first describes, from a high level viewpoint, the main challenges that had to be solved in order to develop a theory of spin glasses in the last fifty years. It then explains how important inference problems, notably those occurring in machine learning, can be formulated as problems in statistical physics of disordered systems. However, the main questions that we face in the analysis of deep networks require to develop a new chapter of spin glass theory, which will address the challenge of structured data.
Marc Mézard
2023-09-13T13:30:27Z
http://arxiv.org/abs/2309.06947v1
# Spin glass theory and its new challenge: structured disorder ###### Abstract This paper first describes, from a high level viewpoint, the main challenges that had to be solved in order to develop a theory of spin glasses in the last fifty years. It then explains how important inference problems, notably those occurring in machine learning, can be formulated as problems in statistical physics of disordered systems. However, the main questions that we face in the analysis of deep networks require to develop a new chapter of spin glass theory, which will address the challenge of structured data. ###### Contents * 1 Spin glasses * 1.1 A few well known landmarks from the ferromagnetic Ising model * 1.2 Spin glasses * 1.3 First Challenge: Ensembles of samples * 1.4 Second Challenge: Inhomogeneity * 1.5 Third Challenge: The many-valleys landscape * 1.6 Fourth Challenge: Out of equilibrium dynamics * 2 Statistical Physics of Inference * 2.1 Machine learning as a statistical physics problem * 2.2 Data as disorder * 2.3 Surprises * 3 The new challenge of spin-glass theory: structured disorder * 3.1 Effective dimension * 3.2 Correlations * 3.3 Combinatorial and hierarchical structure * 4 Conclusion ## 1 Spin glasses Statistical physics is more or less one and a half century old. Its creation was based on renouncing to follow the trajectories of single particles and moving rather to a coarser, statistical description of systems with many interacting particles. This radical move allowed to handle the specific effects that emerge when the number of particles becomes large, as summarized in Phil Anderson's famous paper "More is different" [1]. One of its great achievements is the understanding and analysis of phase transitions, and the discovery of universality classes at second order phase transitions, where the divergence of the correlation length wipes out many of the microscopic details of the particles. About fifty years ago, statistical physics developed a new research direction, the one of strongly disordered systems. An important building piece of its construction is the theory of spin glasses, magnetic systems with disordered interactions. In this section we shall mention some of the formidable challenges that had to be solved in order to develop a theory of spin glasses, keeping to the case of classical systems (parallel developments in the field of quantum statistical physics deserve a separate presentation). As is well known, magnetism has played an important role in the development of statistical physics. The solution by Onsager of a "simple" model of ferromagnet, the Ising model in two dimensions, was crucial in establishing the concept of spontaneous symmetry breaking. And the understanding of Ising models in \(d\)-dimensions, including non-integer values of \(d\), also played a major rule in the development of the renormalization group. ### A few well known landmarks from the ferromagnetic Ising model It is useful to set the stage and prepare the discussion of spin glasses, starting with a very short sketch of the well known case of ferromagnetism, which can be found in any standard book of statistical physics. In the Ising model, two-states spins \(s_{i}=\pm 1\) located on the \(N=L^{d}\) vertices of a hypercubic \(d\)-dimensional lattice interact through pair interactions, with an interaction energy which is a sum over pairs of adjacent spins \[E(s)=-J\sum_{(ij)}s_{i}s_{j} \tag{1}\] where \(J>0\) is the ferromagnetic coupling constant. 
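As an elementary numerical illustration of the energy (1) (this snippet is not from the paper), one can evaluate \(E(s)\) for a spin configuration on an \(L\times L\) lattice with periodic boundary conditions:

```
import numpy as np

def ising_energy(s, J=1.0):
    """Energy of Eq. (1) for spins s[i, j] = +/-1 on an L x L periodic lattice."""
    # count each nearest-neighbour pair once: right and down neighbours
    right = np.roll(s, -1, axis=1)
    down = np.roll(s, -1, axis=0)
    return -J * np.sum(s * right + s * down)

L = 8
rng = np.random.default_rng(0)
s = rng.choice([-1, 1], size=(L, L))                     # a random configuration
print(ising_energy(s), ising_energy(np.ones((L, L))))    # random vs fully magnetized (-2*J*L**2)
```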
The order parameter is the magnetization density \[M=\frac{1}{N}\sum_{i}\langle s_{i}\rangle \tag{2}\] where \(\langle s_{i}\rangle\) is the expectation of the spin \(s_{i}\) with respect to the Boltzmann measure \(P(s)=(1/Z)e^{-\beta E(s)}\). This measure is even under the simultaneous flipping of all the spins \(s_{i}\rightarrow-s_{i}\), therefore for a fixed \(N\) one has \(M=0\) at any inverse temperature \(\beta\). On the other hand, if one adds a small symmetry breaking term to the energy, \(E(s)\to E(s)-B\sum_{i}s_{i}\), then \[\lim_{B\to 0^{\pm}}\lim_{N\rightarrow\infty}M=\pm M^{*} \tag{3}\] with \(M^{*}(\beta)>0\) in the low temperature phase \(\beta>\beta_{c}\), where the inverse critical temperature \(\beta_{c}\) is finite for \(d\geq 2\). This is the ferromagnetic phase transition, associated with the phenomenon of spontaneous symmetry breaking. In order to get a first qualitative understanding of this phase transition, one can use the mean field approximation. Starting from the exact relation \[\langle s_{i}\rangle=\langle\tanh\beta\left(B+J\sum_{j\in D_{i}}s_{j}\right)\rangle \tag{4}\] where \(D_{i}\) is the set of neighbours of spin \(i\), one neglects fluctuations, substituting the expectation value of the \(\tanh\) by the \(\tanh\) of the expectation value (this is the mean field). Seeking a homogeneous solution \(\langle s_{i}\rangle=M\) (which is correct far from the boundaries, or using periodic boundary conditions), one finds \[M=\tanh\beta\left(B+zJM\right) \tag{5}\] where \(z=|D_{i}|\) is the number of neighbors of each spin. This equation predicts a ferromagnetic phase transition when \(B\to 0\), with an inverse critical temperature \(\beta_{c}=1/(zJ)\). The mean field approximation is better in larger dimensions; it actually becomes exact when \(d\to\infty\), while it wrongly predicts the existence of a phase transition when \(d=1\). A popular model where the mean field approximation becomes exact is the Curie-Weiss model, where all the pairs of spins interact with a rescaled coupling \(J=\tilde{J}/N\). In this model the mean field equation \(M=\tanh\beta\left(B+\tilde{J}M\right)\) is exact and the phase transition takes place at \(\beta_{c}=1/\tilde{J}\). ### Spin glasses A simple model for spin glasses is the Edwards-Anderson model [12]. This has the same ingredients as the Ising model, except that the coupling constant between two spins \(i,j\) depends on the pair. The energy becomes \[E(s)=-\sum_{(ij)}J_{ij}s_{i}s_{j} \tag{6}\] Depending on the pair \((ij)\), the coupling constant can be ferromagnetic (\(J_{ij}>0\), favoring the alignment of spins at low temperatures), or antiferromagnetic (\(J_{ij}<0\), favoring spins pointing in opposite directions at low temperatures). With respect to the ferromagnetic case this modification is crucial, and it poses a number of remarkable challenges that had to be solved in order to elaborate a theory of spin glasses. This elaboration is a remarkable achievement which culminated in the solution by Parisi of the mean-field Sherrington-Kirkpatrick model [47] (Parisi's Nobel lecture [43] gives a nice summary, and the recent book [7] gives an idea of the applications that it has had in several branches of science). In this paper, I shall not enter into any detail of spin glass theory, but adopt a high-level point of view, trying to point out the four most important challenges. ### First Challenge: Ensembles of samples The first challenge that can be identified is the characterization of a spin glass sample.
In order to define the energy, and therefore the Boltzmann probability, one needs to know all the coupling constants \(\mathcal{J}=\{J_{ij}\}_{1\leq i<j\leq N}\). If the interactions are short range, this is a number of parameters which grows proportionnally to the size of the system \(N\). This raises two problems. On the one hand, for macroscopic systems it is impossible to even write the energy function: the description of a given sample requires to know a number of parameters of the order of the Avogadro number. On the other hand, for each new sample characterized by these couplings \(\mathcal{J}\), there is a new Boltzmann probability \[P_{\mathcal{J}}(s)=\frac{1}{Z_{\mathcal{J}}}e^{\beta\sum_{(ij)}J_{ij}s_{i}s_{j}} \tag{7}\] In a step that mimics the one which was taken when statistical physics was first introduced, this double problem was solved by introducing a second level of probability, namely a probability distribution in the space of samples. The couplings \(\mathcal{J}\) are supposed to be generated from a probability distribution \(\mathcal{P}(\mathcal{J})\). A given realization of \(\mathcal{J}\) is a sample. For instance in the Edwards-Anderson model [12] one assumes that for each pair \((ij)\) of neighboring spins we draw \(J_{ij}\) independently at random, from a distribution with probability density \(\rho\) In the SK model each of the \(N(N-1)/2\) couplings \(J_{ij}\) is drawn at random from a normal distribution with mean \(0\) and variance \(1/N\). We have now two levels of probability. First one draws a sample \(\mathcal{J}\) generated from the probability \(\mathcal{P}(\mathcal{J})\). Then, one studies the Boltzmann law \(P_{\mathcal{J}}(s)\) for this sample. The averages of spin configurations with respect to \(P_{\mathcal{J}}(s)\) are called thermal averages, while the averages over samples, with respect to \(\mathcal{P}(\mathcal{J})\), are called quenched averages. I'll call \(\mathcal{P}(\mathcal{J})\) the quenched probability, to distinguish it from Boltzmann's probability. Then one is lead to make a distinction between two types of properties. On the one hand, there are properties which depend on the sample. For instance, the ground state configuration of spins, the one which minimizes the energy, obviously depends on \(\mathcal{J}\). Actually, all the details of the energy landscape depend on \(\mathcal{J}\). On the other hand, some properties turn out to be'self-averaging', meaning that they are the same, for almost all samples (with a quenched probability that goes to one in the large \(N\) limit). For instance in the EA or SK model the internal energy density \[U_{\mathcal{J}}=\frac{1}{N}\sum_{s}P_{\mathcal{J}}(s)E_{\mathcal{J}}(s) \tag{8}\] is self averaging (this is easily proven in EA because one can cut a sample into many pieces and neglect the interactions between pieces which are of relative order surface to volume, it is less easy for the SK model [47]). This means that the distribution of \(U_{\mathcal{J}}\) (when one picks a sample at random from the quenched probability) has a probability density that concentrates, when \(N\to\infty\), around a given value \(u\) that depends only on the inverse temperature \(\beta\) and on the statistical properties of the distribution of \(\mathcal{J}\). The typical sample-to-sample fluctuations of \(U_{\mathcal{J}}\) around this value are of order \(1/\sqrt{N}\). In the limit \(\beta\to\infty\), this also implies that the ground state energy density is self-averaging. 
The same is true for all the extensive thermodynamic properties. For instance the magnetization density in presence of a magnetic field, or its linear dependence at small fields, the magnetic susceptibility, are self-averaging. This property of self-averageness is crucial: it is the reason why the measurements of magnetic susceptibilities or specific heat of two distinct spin glass samples with the same statistical properties (take for instance two sample of CuMn with \(1\%\) of Mn) give the same result: these are reproducible measurements because the measured property is self-averaging. Notice that, for the properties which are not self-averaging, one can study their quenched distribution. A typical example is the order parameter function that we shall discuss below [36]. ### Second Challenge: Inhomogeneity The second challenge that spin-glass theory had to face is inhomogeneity. The lesson we learn from detailed studies of the SK model is the following. For a typical sample \(\mathcal{J}\), there exists a low-temperature'spin-glass' phase in which the spins develop non-zero local magnetizations: \[\langle s_{i}\rangle=m_{i} \tag{9}\] Because of the disorder in the coupling constants \(J_{ij}\), contrarily to the ferromagnetic case these magnetizations are not uniform. Analyzing a spin glass order in detail thus requires to use as order parameter the set of all the magnetizations. This is a \(N\)-component order parameter. Thouless Anderson and Palmer were able to write a closed system of \(N\) equations that relate all these components [53]. The TAP equations, which generalize (5) to the spin-glass case, are: \[m_{i}=\tanh\left[\beta\left(\sum_{j}J_{ij}m_{j}-\beta(1-q)m_{i}\right)\right] \tag{10}\] where \(q=(1/N)\sum_{j}m_{j}^{2}\). With respect to the naive mean field equations \(m_{i}=\tanh\left[\beta\sum_{j}J_{ij}m_{j}\right]\), they are characterized by the appearance of the "Onsager reaction term". This basically says that, when one computes the mean of the local magnetic field on site \(i\), one should subtract from the naive estimate \(\sum_{j}J_{ij}m_{j}\) the part of \(m_{i}\) which is polarized by \(i\) itself. This means using a "cavity" magnetization \(m_{j}^{c}=m_{j}-\chi_{j}J_{ji}m_{i}\) where \(\chi_{j}=\beta(1-m_{j}^{2})\) is the local magnetic susceptibility of an Ising spin. When \(N\) is not too large, say a few tens of thousands, TAP-like equations can be used as an algorithm, they can be solved by iteration using a specific iteration schedule that was found by Bolthausen [6]. This gives information on the behavior of a given sample \(\mathcal{J}\). On the other hand, when \(N\) is very large, for instance of the order of the Avogadro number, one cannot write explicitely or solve the TAP equations. One must use a statistical study of the properties of these solutions. It turns out that this cannot be done directly on the TAP equations themselves, because the Onsager reaction term creates subtle correlations. The cavity method [37, 38] allows to circumvent this problem, by first analysing the statistics of the cavity field, the field acting on a spin in absence of this spin. This allows to build a full solution to the problem. ### Third Challenge: The many-valleys landscape Keeping to the SK model, it was found that there actually exist many different'states' where the system can freeze, and therefore many solutions of the TAP equations. 
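For concreteness, here is a minimal numerical sketch of how one such solution can be searched for on a single SK sample, by a plain damped fixed-point iteration of the TAP equations (10); whether and how fast this converges depends on the temperature and on the damping, and Bolthausen's convergent schedule mentioned above is more subtle than what is done here.

```python
# Minimal sketch: damped fixed-point iteration of the TAP equations (10) on one SK sample.
# (Illustrative only; Bolthausen's convergent iteration schedule is not reproduced here.)
import numpy as np

rng = np.random.default_rng(1)
N, beta, damping = 500, 1.2, 0.3

# One SK sample: symmetric couplings J_ij ~ N(0, 1/N), zero diagonal.
J = np.triu(rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N)), k=1)
J = J + J.T

m = 0.1 * rng.standard_normal(N)                # small random initial magnetizations
for it in range(5000):
    q = np.mean(m ** 2)
    h = J @ m - beta * (1.0 - q) * m            # local field including the Onsager reaction term
    m_new = np.tanh(beta * h)
    if np.max(np.abs(m_new - m)) < 1e-9:
        break
    m = (1.0 - damping) * m + damping * m_new

print(it, np.mean(m ** 2))                      # iteration count and self-overlap of the solution
```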
Each state \(\alpha\) is characterized by \(N\) magnetizations \(m_{i}^{\alpha}\), so the order parameter is actually a \(N\)-component vector. This generalizes the situation of the ferromagnet. Instead of two states, identified by their average magnetization, we have many states. In each of them the average magnetization in absence of external field, \((1/N)\sum_{i}m_{i}^{\alpha}\), vanishes in the thermodynamic limit. Defining these states correctly is actually difficult. If one parallels the construction of the two pure states that we introduced in (3) for the ferromagnet, the natural generalization is to introduce for each state \(\alpha\) a site-dependent small magnetic field \(B_{i}^{\alpha}\) and take the limit where all these local fields go to zero after the thermodynamic limit. This leads to \[m_{i}^{\alpha}=\lim_{B^{\alpha}\to 0}\lim_{N\to\infty}\langle s_{i}\rangle_{B^{ \alpha}} \tag{11}\] The weakness of this definition is that we do not know how to choose the local orientations of \(B_{i}^{\alpha}\): on which site should they be positive and on which site should they be negative? Solving this problem requires knowing the signs of \(m_{i}^{\alpha}\). So while this definition of the order parameter is interesting, in practice it is useless. The replica method which was used to solve the SK model [37, 41] actually has an interesting interpretation from this point of view. The idea is that, if we do not know the preferred orientations where the spins will polarize, the systems knows them. So, for theoretical understanding, one can introduce, for a given sample \(\mathcal{J}\), two replicas of spins, \(s\) and \(\sigma\), with the same energy function \(E_{\mathcal{J}}\). In this noninteracting system, the probability of the two configurations is \[P_{\mathcal{J}}(s,\sigma)=\frac{1}{Z_{\mathcal{J}}^{2}}e^{-\beta(E_{ \mathcal{J}}(s)+E_{\mathcal{J}}(\sigma))} \tag{12}\] One can introduce the overlap \(q=(1/N)\sum_{i}s_{i}\sigma_{i}\), and ask what is the distribution \(P_{\mathcal{J}}(q)\) of this overlap, in the thermodynamic limit \(N\to\infty\). In the high temperature paramagnetic phase, \(P_{\mathcal{J}}(q)=\delta(q-q_{0})\) where \(q_{0}=0\) in absence of an external field, but it becomes \(q_{0}>0\) in presence of a uniform field. In the spin glass phase, \(P_{\mathcal{J}}(q)\) becomes non trivial, it has a support \([q_{0},q_{1}]\) with \(q_{1}>q_{0}\), and it fluctuates from sample to sample. So this is a non-self-averaging quantity [36]. Its quenched average, \(P(q)=\int d\mathcal{J}\mathcal{P}(\mathcal{J})P_{\mathcal{J}}(q)\), is the order parameter for the spin glass phase. It is this order parameter which appears naturally, and is computed in the replica method with replica symmetry breaking, as shown in Parisi's seminal work [42]. A simple way to define the existence of a spin glass phase using two replicas is to introduce a small coupling between them. The energy of a pair of configurations \(s,\sigma\) now becomes: \[E_{\mathcal{J}}^{\epsilon}(s,\sigma)=E_{\mathcal{J}}(s)+E_{\mathcal{J}}(\sigma)- \epsilon\sum_{i}s_{i}\sigma_{i} \tag{13}\] Sampling the pairs of configurations with the corresponding Boltzmann weight, one can compute the expectation value of the overlap \(\langle q\rangle^{\epsilon}=\int dq\ P_{\mathcal{J}}^{\epsilon}(q)q\). 
Taking the limit \(\epsilon\to 0_{\pm}\) after the thermodynamic limit, one finds \[q_{1}=\lim_{\epsilon\to 0^{+}}\lim_{N\to\infty}\langle q\rangle^{\epsilon}\ \ ;\ \ q_{0}=\lim_{\epsilon\to 0^{-}}\lim_{N\to\infty}\langle q \rangle^{\epsilon} \tag{14}\] These are the two limits of the support of \(P(q)\). The existence of a spin glass phase is signalled by \(q_{1}>q_{0}\). This definition gives a very intuitive interpretation to the use of replicas: one takes two replicas coupled by a small attractive interaction (\(\epsilon>0\)). When this interaction vanishes, if the spins in each of the two replicas remain correlated, this signals the spin glass phase. This criterion can also be used in glassy systems without disorder, like structural glasses [15, 35]. The whole "landscape structure" of the spin glass phase can be analysed as follows: in a given sample there exist many pure states \(\alpha\). Each of them is characterized by the \(N\)-dimensional vector of magnetizations \(m^{\alpha}=\{m_{1}^{\alpha},...,m_{N}^{\alpha}\}\), and its free energy \(F^{\alpha}\). All the states that contribute to the thermodynamics have the same free-energy density \(\lim_{N\to\infty}F^{\alpha}/N\), but they have finite free energy differences \(F^{\alpha}-F^{\gamma}=O(1)\), and therefore each state contributes to the Boltzmann measure with a weight \(P^{\alpha}\). Therefore \[P(q)=\int d\mathcal{J}\mathcal{P}(\mathcal{J})\sum_{\alpha,\gamma}P^{\alpha}P ^{\gamma}\delta\left(q-q^{\alpha\gamma}\right) \tag{15}\] Various types of glassy phases are characterized by different types of \(P(q)\) functions, two extremes being the simple "one-step RSB"characteristic of the structural glass transition ones having only two \(\delta\) peaks at \(q_{0}\) and \(q_{1}\)[9, 11, 20], and the "full-RSB" which occurs in the SK model and where the support of \(P(q)\) is the full interval \([q_{0},q_{1}]\), with an infinity of states organized in a hierarchical structure called ultrametric [36], and a \(\delta\) peak of \(P(q)\) at the Edwards Anderson order parameter \(q=q_{1}\), which gives the typical size of the states. ### Fourth Challenge: Out of equilibrium dynamics The last big challenge that spin glass theory had to face was the one of equilibrium. The whole description that I gave so far is based on the idea that a given sample of a spin glass can be characterized by the Boltzmann measure \(P_{\mathcal{J}}(s)\). However this is true only in the case where the system reaches equilibrium. Experiments precisely teach us that equilibrium is not reached in the spin glass phase. For instance, measuring the magnetic susceptibility by first cooling the system to a temperature \(T<T_{sg}\) and then adding a small uniform magnetic field \(B\) gives a "zero-field-cooled susceptibility" \(\chi_{ZFC}\) which is different from the one found by placing the sample in the magnetic field \(B\) at high temperature (above the spin glass transition \(T_{sg}\)), and then cooling it to \(T\). This last procedure gives a "field-cooled" susceptibility \(\chi_{FC}\) which is in general larger than \(\chi_{ZFC}\). In both cases the measurement of the susceptibility is done at the same point \(T,B\) of the phase diagram, but the results differ, proving that the spin glass is out of equilibrium. Then a very legitimate question is: how can the equilibrium theory be of any use? 
One of the first successes of the Parisi theory has been to give a qualitative explanation of this difference by assuming that the FC susceptibility corresponds to the reaction of the system when perturbing an equilibrium which is a superposition of pure states, while the ZFC susceptibility corresponds to a perturbation within one pure state (see e.g. [43]). In fact, if one introduces a constrained perturbation to an SK spin glass, in which the system reacts to a small magnetic field, but it is constrained to remain at an overlap larger than \(q\) from its initial state, then the corresponding susceptibility is \[\chi(q)=\beta\int_{q}^{1}dq^{\prime}\ P(q^{\prime})(1-q^{\prime}) \tag{16}\] which gives in the two limiting cases: \[\chi_{ZFC}=\beta\left[1-q_{1}\right]\ \ ;\ \ \chi_{FC}=\beta\left[1-\int_{0}^{1}dq^{\prime}\ P(q^{\prime})q^{\prime}\right] \tag{17}\] so that \(\chi_{FC}\geq\chi_{ZFC}\), as observed. One can also go beyond, and try to study directly the dynamics of mean-field models like the SK model. In the spin glass phase, the time to reach equilibrium diverges in the thermodynamic limit. One can then study what happens on various diverging time-scales, as in the first works of Sompolinsky and Zippelius [49]. An alternative approach, which gives very interesting insight, is to solve the out of equilibrium dynamics, as was proposed initially by Cugliandolo and Kurchan [10]. Focusing again on the mean-field models, one can derive a closed set of equations for the two-time correlation \(C(t_{w}+t,t_{w})=(1/N)\sum_{i}\langle s_{i}(t_{w}+t)s_{i}(t_{w})\rangle\) and the two-time response function \(R(t_{w}+t,t_{w})\), which is the linear response measured at time \(t_{w}+t\) of a system which has started its dynamics at time \(0\), and to which a small magnetic field has been added at the time \(t_{w}\). In systems which reach their equilibrium, after a long waiting time \(t_{w}\), the functions \(C\) and \(R\) become time-translation invariant, i.e. they depend only on the measurement time \(t\). This invariance is broken in the spin glass phase: the \(t\) dependence of these two functions depends on the age \(t_{w}\) of the system, and they keep evolving when \(t_{w}\) increases, a phenomenon called aging which is often observed in glassy systems. The simplest scenario of aging would be one in which \(C\) and \(R\) become functions of \(t/t_{w}^{a}\). For instance approximate \(t/t_{w}\) scaling, with \(a=1\), is often observed. In connection with its hierarchical static structure, the SK model shows a more complicated behaviour, with various time-scales characterized by distinct exponents \(a\) playing a role. In connection with the aging phenomenon, one also finds a modification of the standard fluctuation-dissipation theorem (FDT). In an equilibrium system, at large enough \(t_{w}\), the standard FDT relation between fluctuation and response is \(R(t)=\beta(C(0)-C(t))\) (notice that we use here an integrated response function, as defined above). In the spin glass phase, this is modified and becomes a relation between \(C(t_{w},t)\) and \(R(t_{w},t)\) that holds when both the waiting time \(t_{w}\) and the measurement time \(t\) are large: \[\frac{\partial R(t_{w},t)}{\partial t}=-\beta X(C(t_{w},t))\frac{\partial C(t_{w},t)}{\partial t} \tag{18}\] The function \(X(C)\) is the "fluctuation dissipation ratio". When computed from spin-glass theory, one finds that it is equal to the total probability of an overlap larger than \(C\): \[X(C)=\int_{C}^{1}P(q)dq \tag{19}\] It can thus be measured by plotting parametrically \(R\) versus \(C\).
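As a small numerical illustration of Eqs. (17) and (19), one can take an assumed toy two-peak, one-step-RSB-like order parameter \(P(q)=(1-w)\,\delta(q-q_{0})+w\,\delta(q-q_{1})\) (all parameter values below are arbitrary) and tabulate the two susceptibilities and the fluctuation-dissipation ratio:

```python
# Toy illustration (assumed two-peak P(q)) of chi_ZFC, chi_FC and X(C), Eqs. (17) and (19).
import numpy as np

beta, q0, q1, w = 2.0, 0.1, 0.7, 0.4            # illustrative parameters

mean_q = (1 - w) * q0 + w * q1
chi_ZFC = beta * (1 - q1)                       # single-state (zero-field-cooled) response
chi_FC = beta * (1 - mean_q)                    # equilibrium (field-cooled) response

def X(C):
    """Fluctuation-dissipation ratio (19): probability of an overlap larger than C."""
    return np.where(C < q0, 1.0, np.where(C < q1, w, 0.0))

print(chi_ZFC, chi_FC, chi_FC > chi_ZFC)        # the FC susceptibility is the larger one
print(X(np.array([0.05, 0.4, 0.9])))            # plateaus at 1, w, 0
```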
We have thus a way to measure the equilibrium order parameter \(P(q)\) from an out of equilibrium measurement of correlation and response. This was done by [23] and the reader can find a discussion in [43]. ## 2 Statistical Physics of Inference ### Machine learning as a statistical physics problem Spectacular recent developments of artificial intelligence are based on machine learning. I'll sketch here the formal framework of supervised learning, in order to relate it to the statistical physics of disordered systems. Recent introductions can be found in [27, 57]. Machine learning aims at learning a function from a \(d\) dimensional input \(\xi\in\mathbb{R}^{d}\) to a \(k\) dimensional output \(y\). Usually one is interested in a large dimensional input like an image, so \(d\) is large, in practice it can be \(10^{6}\) or more, and a small dimensional output. Taking the famous example of handwritten digits, the input could be an image of a digit, and the output would be the digit. In a one-hot encoding, one would use \(k=10\) and the digit \(r\) would be associated to \(y_{r}=1\) and \(y_{r^{\prime}}=0\) for \(r^{\prime}\neq r\). One thus wants to learn a target function \(y=f_{t}(\xi)\). Actually in practical applications we do not have a full definition of the function, but we have examples, in the form of a database of input-output pairs \(\mathcal{D}=\{\xi^{\mu},y^{\mu}\}\), with \(\mu\in\{1,...,P\}\). Modern deep networks are based on artificial neurons organized in layers. Each neuron in a layer is a simple unit that receives a signal from the neurons in the previous layer, applies a nonlinear function and sends this processed signal to the neurons of the next layer (see Fig. 1). Figure 1: **Top: typical structure of a feedforward neural network. The input (image of a cat) is presented on the left layer. Data is processed, layer after layer, until the output is given on the right. Bottom: Each layer is built from artificial neurons. They receive inputs from the neurons of the previous layer on their left hand side, these inputs are weighted, and the linear combination of the reweighted inputs is then transformed by a non-linear transfer function. Two examples of such functions are shown here: the ReLU function and a sigmoid.** The activity of neuron \(i\) in layer \(r\) is given by \[x_{i}^{r}=\psi_{i}^{r}\left(\sum_{j}W_{ij}^{r}x_{j}^{r-1}\right) \tag{20}\] where \(W^{r}\) is a matrix of the "synaptic efficacies" between neurons in layer \(r-1\) and \(r\). The nonlinear function \(\psi_{i}^{r}\) can be for instance a sigmoid or a rectified linear unit. Usually it depends on the layer \(r\) but not on the precise neuron in the layer. The layer \(0\) is the input, \(x^{0}=\xi\), and the layer \(L\) is the output, \(x^{L}=y\); other layers are called hidden layers. A given realization of the neural network is given by its architecture (the depth \(L\) and the width of each layer), the choice of nonlinear functions, and the values of the weights \(W=\{W^{r}\},\ r\in\{1,...,L\}\). We shall denote by \(N\) the total number of weights. If the width of each hidden layer is constant and equal to \(h\), then \(N=dh+(L-2)h^{2}+hk\). A given network with parameters \(W\) implements a function from input to output \(y=f(W,\xi)\). In most applications the architecture is chosen by the engineer, based on previous experience, but the weights are learnt.
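The forward pass (20) is simple enough to write out explicitly. Here is a minimal sketch, in which the layer widths, the ReLU nonlinearity, the linear read-out at the last layer and the random weights are all arbitrary illustrative choices:

```python
# Minimal sketch of the forward pass (20) through a small fully connected network.
import numpy as np

rng = np.random.default_rng(0)
widths = [784, 128, 64, 10]                     # input d, two hidden layers, output k

# One weight matrix W^r per layer, here drawn at random for illustration.
W = [rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_out, n_in))
     for n_in, n_out in zip(widths[:-1], widths[1:])]

def forward(xi, W):
    x = xi                                      # layer 0 is the input
    for r, Wr in enumerate(W):
        pre = Wr @ x                            # weighted sum over the previous layer
        # ReLU in the hidden layers, linear read-out at the last layer (an arbitrary choice).
        x = pre if r == len(W) - 1 else np.maximum(pre, 0.0)
    return x

y = forward(rng.standard_normal(784), W)
print(y.shape)                                  # (10,)
```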
Indeed, machine learning designates the process by which the parameters (in this case the weights) are not given to the machine, but the machine learns them from data. This learning is called the training phase. In order to train a network one defines a "loss function" \(L(W)\) which measures the errors made by the machine with parameters \(W\) on the database. For instance one could use a quadratic loss \[L_{\mathcal{D}}(W)=\sum_{\mu=1}^{P}\left(y^{\mu}-f(W,\xi^{\mu})\right)^{2} \tag{21}\] but many other choices are possible. Then the training phase consists in finding the \(W\) that minimize the loss. In practice, people use a form of gradient descent called stochastic gradeint descent, in which one moves in the \(W\) landscape using iteratively noisy versions of the gradient computed from partial sums involving some batches of \(\mu\) indices. Once the learning has been done, one can use the couplings \(W^{*}\) which have been found during training, and see how well the network generalize when it is presented some new data that it has never seen. The test -or generalization- loss has the same expression as (21), but with new, previously unseen, input-output pairs. ### Data as disorder One can also introduce a probability distribution in the space of weights \(W\), of the form \[P_{\mathcal{D}}(W)=\frac{1}{Z_{\mathcal{D}}}P_{0}(W)e^{-\beta L_{\mathcal{D}} (W)} \tag{22}\] where \(P_{0}(W)\) is a prior on the weights, one can choose it as a factorized prior \(P_{0}(W)=\prod_{r,i,j}\rho(W_{ij}^{r})\) (for instance one can use for \(\rho\) a gaussian if one wants to avoid too large weights). One could also normalize \(\sum_{j}(W_{ij}^{r})^{2}=1\), but I will keep here for simplicity to the factorized case. The parameter \(\beta\) is an auxiliary inverse temperature parameter. When \(\beta\) is large this probability distribution is concentrated on the sets of weights \(W\) which minimize the loss. This formalism amounts to an approach of statistical physics in the space of weights. It was pioneered by Elizabeth Gardner [16, 17] in the study of the simplest network, the perceptron which has no hidden unit (and is therefore limited to linearly separable tasks). In recent applications of deep networks like the large language model Chat-GPT, the total number of weights can be of order \(10^{11}\), and the total number of operations used in order to train a network on extremely large databases of basically all available text is easy to remember, it is of the order of \(10^{24}\), a "mole of operations". So the distribution (22) is a measure in a large \(N\)-dimensional space. The elementary variables, the weights, are real valued variables with a measure \(\rho\). They are coupled through an energy which is the loss \(L_{\mathcal{D}}(W)\). This energy is a complicated function of the variables \(W\), and it depends on a large set of parameters, namely all the input-output pairs \(\mathcal{D}\). Therefore the measure (22) has all the ingredients of a statistical physics systems with a quenched disorder which is the data. Having in mind our discussion of disordered systems, we can immediately identify several questions. One can work on a given data base (a given sample) and ask about the landscape for learning, and for generalization. But clearly, for theoretical studies, one would like to have an ensemble of samples, which is a probability measure in the space of inputs, and for each input the corresponding output. 
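As a toy example of such an ensemble, anticipating the "teacher rule" setting discussed below, one can draw inputs i.i.d. from a Gaussian and produce the outputs with a fixed teacher perceptron; fresh pairs drawn from the same ensemble then define the test loss. The Gaussian inputs, the sign nonlinearity and all sizes in this sketch are arbitrary illustrative choices:

```python
# Toy data ensemble of the teacher type: xi ~ N(0, I_d), y = sign(W_t . xi / sqrt(d)).
import numpy as np

rng = np.random.default_rng(0)
d, P = 200, 1000
W_teacher = rng.standard_normal(d)              # fixed teacher weights defining the rule

def draw_database(P):
    xi = rng.standard_normal((P, d))            # inputs drawn i.i.d. from the ensemble
    y = np.sign(xi @ W_teacher / np.sqrt(d))    # corresponding outputs
    return xi, y

xi_train, y_train = draw_database(P)
xi_test, y_test = draw_database(P)              # previously unseen pairs define the test loss

def quadratic_loss(W_student, xi, y):
    """Quadratic loss (21) of a student perceptron on a set of input-output pairs."""
    return np.sum((y - np.sign(xi @ W_student / np.sqrt(d))) ** 2)

print(quadratic_loss(rng.standard_normal(d), xi_test, y_test))  # a random student does poorly
```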
With such a setup of a _data ensemble_, one can draw a database by choosing inputs independently at random from the ensemble, one can study the property of self-averageness (which properties of the optimal network \(W^{*}\) are dependent on the precise realization of the database, and which ones are not - in the large \(N\) limit?). The test loss becomes easy to define: it is the expectation value of the loss, over pairs of input-output generated from the data ensemble. Working with a single dataset or with a data ensemble are two rather different approaches. In many practical applications the engineer's approach is to use a single dataset, and the definition of an ensemble is not obvious. For instance if one wants to identify if there is a cat or a dog on an image, so far what is done is use huge databases of images of cats and dogs, randomly chosing part of them for training, and another part for the test phase. However, in such a single database setup it is not easy to develop a theory: on the one hand, there is a risk of developing a theory which is too much tailored to this precise database, and from which one cannot draw general conclusions; also, one cannot use probabilities to compute the generalization error. So the use of data ensembles is clearly welcome from a theoretical point of view, but then one faces the difficulty of defining the ensemble in such a way that it will include some essential features of real databases, but it should be smooth enough so that one can interpolate through it reasonably (the ensemble which would use a probability law that is a sum of \(\delta\) peaks on each point of a database is useless), and simple enough that it can be studied. So the question of finding good ensembles is a fundamental question of how to model the "world", ie the set of all possible inputs that can be presented. Here by modelling one intends it in a physics' approach, namely being able to identify the key features that should be incorporated into the ensemble, neglecting less important "details". Interestingly, this quest for modeling of data meets with an important recent direction of development of machine learning, which are generative models. In parallel to, and in symbiosis with the supervised machine learning that I have briefly exposed, very significant progress has been made on generative models. These aim at generating data'similar to' a given database. Among the processes that have been explored, one can mention Generative Adversarial Networks (GAN), or the very physical generative diffusion models which take a database, degrade it using a Langevin process until it has been transformed into pure noise, and then reverse the Langevin process to reconstruct artificial data from noise (see [50, 51, 48]); for a recent review see [56], and for a statistical physics perspective: [5]). ### Surprises #### 2.3.1 Perceptrons Training a neural network in supervised learning amounts to finding the ground state of a strongly disordered system. One can thus ask what properties of spin glasses one can find in neural networks. Early studies on perceptrons have provided important benchmarks. Two main categories of tasks have been studied: learning arbitrary labels, or learning from a "teacher rule". In both cases, the database consists of \(P\) independent input datapoints each consisting of \(d\) iid numbers from a distribution \(\rho_{0}(\xi)\). 
In the case of learning from arbitrary labels, for each input one generates a desired output which is drawn randomly, independently from the input. In this case one studies only the training phase, since generalization has no meaning. In the case of learning a teacher rule, one generates the output from a "teacher" set of weights, \(W_{t}\), through \(y=f(W_{t},\xi)\). The quality of training can be monitored by computing some distance between \(W^{*}\) and \(W_{t}\), like for instance \(|W^{*}-W_{t}|^{2}/|W_{t}|^{2}\). The behaviour of the training depends a lot on the a priori measure on the weights, \(\rho\). If \(\rho\) is gaussian, or imposes a spherical constraint, the training problem is convex and the landscape is simple. The training and generalization errors decrease continuously with \(\alpha=P/N\). If \(\rho\) is discrete, corresponding to Ising spins, then the training on random labels shows a replica symmetry breaking phase at \(\alpha_{c}=0.83\) [26] which has a strange nature. On the one hand, the typical configurations at \(\alpha\) close to \(\alpha_{c}\) are isolated points, building a golf-course potential [25]. On the other hand, atypical, exponentially rare regions of phase space concentrate a large number of neighboring solutions [2], and are easy to find. As far as learning a teacher rule is concerned, with binary synapses, the generalization error shows a first order phase transition to perfect generalization [21] when \(\alpha\) is larger than a threshold \(\alpha_{g}\simeq 1.25\). The phase diagram can be studied rigorously, and when \(\alpha\gtrsim 1.5\) one can also use iterative message-passing algorithms based on the cavity-TAP method [34] in order to find the optimal weights defined by the teacher [4]. #### 2.3.2 Deep networks The recent experimental successes of deep networks have triggered a lot of analyses, but the situation is less clear than in perceptrons. So far, statistical physics approaches can be used efficiently in multilayer networks either when the transfer functions are linear [31], or when there is a single layer of learnable parameters with a size that diverges in the thermodynamic limit, like for instance in committee machines or parity machines. Also, the empirical observations of the learning process show a picture which is rather different from the usual spin glass landscape. The first observation is that very complex functions involving billions of weights are learnable from examples, using the simple stochastic gradient descent algorithm. This means that the loss function which is optimized is not as rough as one would have in a spin glass. Typically, stochastic gradient descent, when initiated from generic initial conditions (with small weights), finds a set of weights \(W^{*}\) not far from the initial condition, which has a small loss. Surprisingly, in this large dimensional space, the set of weights with small loss is not sparse, as one could have expected from our experience of optimizing in large dimensions. Once the network has been trained, typically using a number \(N\) of weights that is of the same order as the number of points in the database, one must study its generalization properties. From a statistics perspective, what has been done in the training phase is fitting a complicated \(N\) dimensional function using \(P\) datapoints. This is possible because \(N\) is large, but one should expect to be in a regime of overfitting, and therefore of poor generalization. This is not the case.
Actually, increasing the depth of the network, and therefore the number \(N\) of fitting parameters, one observes that the generalization errors keeps decreasing, while it should shoot-up in the overfitting regime. Among all these minima of the loss, some generalize better than others, and this seems to be correlated with the flatness of the landscape around the minimum [3]. These facts indicate that deep-learning landscapes are rather different from the ones that have been explored in spin glass theory or in perceptrons. What are the ingredients responsible for the relatively easy training and the lack of overfitting in deep networks? Three directions are being explored: 1) the architecture of the networks, and in particular the importance of using deep enough networks, with many layers; in practice the design of the architecture, including the choice of nonlinearities, is an engineer's decision based on previous experience; 2) the learning algorithm; stochastic gradient descent started from weights with small values seems efficient at finding out first the main pair correlation in the data, then gradually improving [45]; 3) The structure of data: practical problems deal with highly structured data, whether they are text, image, amino-accid sequences. In the next section I shall focus on this last point, argue about the relevance of structured data and describe the challenge it poses to statistical physics. ## 3 The new challenge of spin-glass theory: structured disorder Data is highly structured, and a major objective is to develop mathematical models for the datasets on which neural networks are trained. Most theoretical results on neural networks do not model the structure of the training data. Statistical learning theory [39, 54] usually provides bounds that hold in the worst case, but are far from describing typical properties seen in experiments. On the other hand, traditional statistical physics approaches use a setup where inputs are either drawn component-wise i.i.d. from some probability distribution, or are gaussian distributed [13, 46]. Labels are either random or given by some random, but fixed function of the input. Despite providing valuable insights, these approaches ignore key structural properties of real-world datasets. In recent years several aspects of data structure have been explored, and the first ensembles of structured data have started to be developed. The challenge is of course to create ensembles which contain some of the essential structure, but are at the same time simple enough to be analysed. I will mention here three categories of data properties which are being studied: effective dimensionality, correlations, and combinatorial/hierarchical structure. ### Effective dimension Let us consider perhaps the simplest canonical problem of supervised machine learning: classifying the handwritten digits in the MNIST database using a neural network [28]. The input patterns are images with \(28\times 28\) pixels, so _a priori_ we work in the high-dimensional space \(\mathbb{R}^{784}\). However, the inputs that may be interpreted as handwritten digits, and hence constitute the "world" of our problem, span but a lower-dimensional manifold within \(\mathbb{R}^{784}\). Although this manifold is not easily defined, its dimension can be estimated based on the distance between neighboring points in the dataset [8, 19, 30, 52]. 
In fact, if we consider \(P\) independent datapoints in a \(D\) dimensional space, we expect that the distance between nearest neighbors scales like \(P^{-1/D}\). Analysing the MNIST data base, one finds the effective dimension to be around \(D\approx 15\), much smaller than \(N=784\). The "perceptual submanifold" associated with each digit also has an effective dimension, ranging from \(\approx 7\) for the digit \(1\) to \(\approx 13\) for the digit \(8\)[22]. Therefore the task of identifying a handwritten digit consists in finding these ten perceptual submanifolds, embedded in the 15-dimensional "world" manifold of handwritten digits. Of course, the problem is that these manifolds are nonlinear, folded, and it is hard to find them (see [14] for algorithmic approaches). The same phenomenon of reduction in effective dimension is found in other datasets. For instance, images in CIFAR10 are defined in dimension \(N=1024\), but have an effective dimension \(D\approx 35\). In most machine learning problems, the effective "world" on which we train our networks has an effective dimension \(D\ll N\) (in fact, a good practice would be to train the networks so that they can identify when they see an input which is far from the world in which they were trained, and refuse to give an answer in such cases). A simple attempt at including this effective dimensionnality in ensemble of data is the "hidden manifold model" [18]. In this model, the seed \(s\) of a datapoint is generated iid in a \(D\)-dimensional "latent" space, for instance from a gaussian distribution. Then the datapoint components \(\xi_{i}\) are generated as \[\xi_{i}=g\left(\sum_{r=1}^{D}F_{ir}s_{r}\right) \tag{23}\] where \(F_{ir}\) are given and define the model, as well as \(g\) which is a nonlinear function. It turns out that, when the components of \(F\) are well balanced (and in particular if they are generated iid from a well behaved distribution), one can generalize the statistical physics studies of the perceptrons or shallow networks to data which has this hidden manifold structure. The reason is that the inputs of hidden units actually receive an input which becomes gaussian distributed. This "gaussian equivalence theorem" allows to use the whole traditional spin-glass machinery. It also tells that this kind of model has its limitations, as it is equivalent to some type of gaussian distributed inputs. Note that the hidden manifold structure of data defined in (23) can receive a different interpretation, where one would like to learn from the latent signal \(s\) in \(D\) dimensions, but one first projects it to a \(N\)-dimensional space of random features which are fixed, and not learnt, a problem which has been studied in detail when the matrix \(F\) is generated from a random matrix ensemble [33] (but gaussian equivalence holds beyond this, as long as matrix elements are well balanced, like for instance in Hadamard transformation). Actually the construction of hidden manifolds can be elaborated by using, instead of (23), an iterative construction based on several layers of projections, as done in the GAN approach. In that case gaussian equivalence is conjectured to hold, although it has not been proven yet [18]. 
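To make the two ideas of this subsection concrete, here is a minimal sketch that generates data from the hidden manifold construction (23) and then checks that a nearest-neighbour-distance estimate of the effective dimension comes out close to \(D\) rather than \(N\); the choices \(g=\tanh\), Gaussian \(F\), and the particular simplified estimator used at the end are all illustrative assumptions:

```python
# Hidden manifold data (23) plus a crude nearest-neighbour estimate of the effective dimension.
import numpy as np

rng = np.random.default_rng(0)
D, N, P = 15, 784, 2000
F = rng.standard_normal((N, D)) / np.sqrt(D)    # fixed random projection defining the model

s = rng.standard_normal((P, D))                 # latent seeds in the D-dimensional space
xi = np.tanh(s @ F.T)                           # datapoints xi_i = g(sum_r F_ir s_r) in R^N

# Distances to the first and second nearest neighbours of each point.
sq = np.sum(xi ** 2, axis=1)
d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * xi @ xi.T, 0.0)
np.fill_diagonal(d2, np.inf)
two_nn = np.partition(d2, 1, axis=1)[:, :2]
r1, r2 = np.sqrt(two_nn[:, 0]), np.sqrt(two_nn[:, 1])

# Maximum-likelihood estimate based on the ratio of the two distances (an assumed,
# simplified form of the nearest-neighbour estimators cited above).
mu = r2 / r1
print(P / np.sum(np.log(mu)))                   # roughly of order D, far below N
```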
### Correlations From the database, one can construct the empirical pair correlation \(C_{ij}\) between two components of the input \[C_{ij}=\frac{1}{P}\sum_{\mu=1}^{P}\xi_{i}^{\mu}\xi_{j}^{\mu} \tag{24}\] as well as higher order correlations (here we assume that we use centered data, in which the empirical mean of \(\xi_{i}\) has been subtracted). A distinguishing property of practical datasets is that correlations are highly structured, and some of this structure is already seen at the level of the pair correlation. For instance, if one diagonalizes the matrix of pair correlations, which is of Wishart type, one finds a spectrum of eigenvalues which differs notably from the Marchenko-Pastur one that would be obtained if the components \(\xi_{i}^{\mu}\) were distributed independently and identically. Instead, one typically gets a power law distribution of the large eigenvalues, and it has been argued that this power-law scaling is actually related to the power-law decay of the loss with respect to either \(N\) or \(P\), found in large language models [32]. A simple attempt at including this spectral structure in an ensemble of data is to use random Wishart matrices with a power-law distributed spectrum [29, 32]. Note that this power-law scaling (with small exponents) of the eigenvalues of the correlation matrix points to the existence of some type of long-range correlations. In fact, very structured and long-range correlations in data are very important, and the recently developed "attention mechanism" is precisely built in order to handle such correlations [55]. These are of a type which is rather different from what one is used to in statistical physics. The easiest way to illustrate them is through language models. In these models, one decomposes the sentences into tokens (typically words or, for composite words, portions of words) and the language models are trained on a large corpus, at the task of taking a text, interrupting it somewhere, and giving the best guess of the next token. Clearly the simplest approach would be to sample the conditional probability distribution: take the previous \(k\) tokens before the interruption, look in the database at sentences which have exactly this sequence of \(k\) tokens, and compute from this database the most probable next token. This approach was started very early on, by Shannon himself. But clearly it is limited to small values of \(k\): beyond \(k\) of the order of a couple of dozens, one does not have the statistics to infer the conditional probability. But it turns out that key tokens, which are crucial for guessing the next one, can be found much earlier in the text. Take for instance this sentence written above: "_Instead, one typically gets a power law distribution of the large eigenvalues, and it has been argued that this power-law scaling is actually related to the power-law decay of the_". In order to guess the next word, 'loss', it would be useful to focus on portions of this paper which appear much earlier, where the loss is defined. It is this type of long-range correlation that is handled by the attention mechanism. ### Combinatorial and hierarchical structure A third distinctive property of datasets used in practice is their combinatorial nature. Imagine for instance a photo of a lecture hall: it is composed of a group of students, each sitting at his desk.
Then each student is "composed" of head, chest, arms, and each head is "composed" of eyes, nose, mouth, hair, and the eyes are "composed" by pigmented epithelial cells, etc. This is actually typical, and most of the images that we want to analyse have this type of combinatorial structure with a hierarchy of features and subfeatures related to the scale at which one looks. This stucture is also related to the decoding that happens when learning from images with a deep network: one typically finds that the first layers of the network decode small scales elements like edges, and going further into the network one gradually identifies larger scale properties, until in the final layers one is able to decide the content of an image. Interestingly, the same type of analysis, from small scale to larger scales, takes place in the sequence of visual areas used in the brains of primates. One also finds the same combinatorial/hierarchical structure in text for instance, and also in protein sequences with their primary, secondary and ternary structures. The first attempts at building ensembles with combinatorial/hierarchical properties are still rather rudimentary. An easy case, although not very realistic, is the one of linear structures. Interestingly, one can show that an associative memory network [24] trying to store such hierarchical patterns can be mapped onto a layered network where the first layers analyze the small scale features, and the information is then built gradually to larger scales, by combining smaller scale features of previous layers. Very recently, simple nonlinear versions of combinatorial/hierarchical data ensembles have started to be explored [40, 44]. ## 4 Conclusion Constructing a theory of deep learning is an important challenge, both from the theoretical point of view, but also for applications: only a solid theory will be able to turn a deep network prediction from a black-box best guess into a statement which can be explained and justified, and whose worst-case behaviour can be controlled. The main high-level challenge that is faced in deep network is the one of emergence: how is the information gradually elaborated when it is processed from layer to layer in the network? How is it encoded collectively? Contemporary networks are working in a high dimensional regime, and what we need is a good control of the representations obtained from data of probability distributions in large dimensions. This is typically a problem of statistical physics. One big question is whether we will be able to elaborate a statistical physics of deep network which is based on a not-too-large number of order parameters, that can be controlled statistically, as was done in spin glasses. In order to be relevant, this appraoch to deep networks must be able to take into account important ingredients of the real 'world', and in particular its structure. So far spin glass theory has been developed mostly for ensembles in which the coupling constants are identically and independently distributed. It is known that more stuctured ensembles can be very hard to study. This is the case for instance of the EA model: in this model, the fact that the spins are coupled only among nearest neighbours on a cubic lattice is a type of Euclidean structure, and this problem has not been solved exactly so far. 
A fascinating new challenge of spin glass theory is to develop new ensembles of correlated disorder, including some of the most relevant ingredients that are found in real databases, like long-range correlations, hierarchy, combinatorial structures, effective dimensions, while being able to keep some analytic control of the problem.
2303.17811
Zero-shot Referring Image Segmentation with Global-Local Context Features
Referring image segmentation (RIS) aims to find a segmentation mask given a referring expression grounded to a region of the input image. Collecting labelled datasets for this task, however, is notoriously costly and labor-intensive. To overcome this issue, we propose a simple yet effective zero-shot referring image segmentation method by leveraging the pre-trained cross-modal knowledge from CLIP. In order to obtain segmentation masks grounded to the input text, we propose a mask-guided visual encoder that captures global and local contextual information of an input image. By utilizing instance masks obtained from off-the-shelf mask proposal techniques, our method is able to segment fine-detailed instance-level groundings. We also introduce a global-local text encoder where the global feature captures complex sentence-level semantics of the entire input expression while the local feature focuses on the target noun phrase extracted by a dependency parser. In our experiments, the proposed method outperforms several zero-shot baselines of the task and even the weakly supervised referring expression segmentation method with substantial margins. Our code is available at https://github.com/Seonghoon-Yu/Zero-shot-RIS.
Seonghoon Yu, Paul Hongsuck Seo, Jeany Son
2023-03-31T06:00:50Z
http://arxiv.org/abs/2303.17811v2
# Zero-shot Referring Image Segmentation with Global-Local Context Features ###### Abstract Referring image segmentation (RIS) aims to find a segmentation mask given a referring expression grounded to a region of the input image. Collecting labelled datasets for this task, however, is notoriously costly and labor-intensive. To overcome this issue, we propose a simple yet effective zero-shot referring image segmentation method by leveraging the pre-trained cross-modal knowledge from CLIP. In order to obtain segmentation masks grounded to the input text, we propose a mask-guided visual encoder that captures global and local contextual information of an input image. By utilizing instance masks obtained from off-the-shelf mask proposal techniques, our method is able to segment fine-detailed instance-level groundings. We also introduce a global-local text encoder where the global feature captures complex sentence-level semantics of the entire input expression while the local feature focuses on the target noun phrase extracted by a dependency parser. In our experiments, the proposed method outperforms several zero-shot baselines of the task and even the weakly supervised referring expression segmentation method with substantial margins. Our code is available at [https://github.com/Seonghoon-Yu/Zero-shot-RIS](https://github.com/Seonghoon-Yu/Zero-shot-RIS). ## 1 Introduction Recent advances of deep learning has revolutionised computer vision and natural language processing, and addressed various tasks in the field of vision-and-language [4, 19, 27, 28, 36, 43, 50]. A key element in the recent success of the multi-modal models such as CLIP [43] is the contrastive image-text pre-training on a large set of image and text pairs. It has shown a remarkable zero-shot transferability on a wide range of tasks, such as object detection [9, 10, 13], semantic segmentation [7, 12, 59, 63], image captioning [40], visual question answering (VQA) [47] and so on. Despite its good transferability of pre-trained multi-modal models, it is not straightforward to handle dense prediction tasks such as object detection and image segmentation. A pixel-level dense prediction task is challenging since there is a substantial gap between the image-level contrastive pre-training task and the pixel-level downstream task such as semantic segmentation. There have been several attempts to reduce gap between two tasks [44, 54, 63], but these works aim to fine-tune the model consequently requiring task-specific dense annotations, which is notoriously labor-intensive and costly. Referring image segmentation is a task to find the specific region in an image given a natural language text describing the region, and it is well-known as one of challenging vision-and-language tasks. Collecting annotations for this task is even more challenging as the task requires to collect precise referring expression of the target region as well as its dense mask annotation. Recently, a weakly-supervised referring image segmentation method [48] is proposed to overcome this issue. However, it still requires high-level text expression annotations pairing with images for the target datasets and the performance of the method is far from that of the supervised methods. To tackle this issue, in this paper, we focus on zero-shot transferring from the pre-trained knowledge Figure 1: Illustrations of the task of referring image segmentation and motivations of global-local context features. 
To find the grounded mask given an expression, we need to understand the relations between the objects as well as their semantics. of CLIP to the task of referring image segmentation. Moreover, this task is challenging because it requires high-level understanding of language and comprehensive understanding of an image, as well as a dense instance-level prediction. There have been several works for zero-shot semantic segmentation [7, 12, 59, 63], but they cannot be directly extended to the zero-shot referring image segmentation task because it has different characteristics. Specifically, the semantic segmentation task does not need to distinguish instances, but the referring image segmentation task should be able to predict an instance-level segmentation mask. In addition, among multiple instances of the same class, only one instance described by the expression must be selected. For example, in Figure 1, there are two cats in the input image. If the input text is given by _"a cat is lying on the seat of the scooter"_, the cat with the green mask is the proper output. To find this correct mask, we need to understand the relation between the objects (_i.e. "lying on the seat"_) as well as their semantics (_i.e. "cat", "scooter"_). In this paper, we propose a new baseline of zero-shot referring image segmentation task using a pre-trained model from CLIP, where global and local contexts of an image and an expression are handled in a consistent way. In order to localize an object mask region in an image given a textual referring expression, we propose a mask-guided visual encoder that captures global and local context information of an image given a mask. We also present a global-local textual encoder where the local-context is captured by a target noun phrase and the global context is captured by a whole sentence of the expressions. By combining features in two different context levels, our method is able to understand a comprehensive knowledge as well as a specific trait of the target object. Note that, although our method does not require any additional training on CLIP model, it outperforms all baselines and the weakly supervised referring image segmentation method with a big margin. Our main contributions can be summarised as follows: * We propose a new task of zero-shot referring image segmentation based on CLIP without any additional training. To the best of our knowledge, this is the first work to study the zero-shot referring image segmentation task. * We present a visual encoder and a textual encoder that integrates global and local contexts of images and sentences, respectively. Although the modalities of two encoders are different, our visual and textual features are dealt in a consistent way. * The proposed global-local context features take full advantage of CLIP to capture the target object semantics as well as the relations between the objects in both visual and textual modalities. * Our method consistently shows outstanding results compared to several baseline methods, and also outperforms the weakly supervised referring image segmentation method with substantial margins. ## 2 Related Work Zero-shot Transfer.Classical zero-shot learning aims to predict unseen classes that have not seen before by transferring the knowledge trained on the seen classes. Early works [3, 14, 34] leverage the pre-trained word embedding [5, 39] of class names or attributes and perform zero-shot prediction via mapping between visual representations of images and this word embedding. 
Recently, CLIP [43] and ALIGN [19] shed a new light on the zero-shot learning via large-scale image-text pre-training. They show the successive results on various downstream tasks via zero-shot knowledge transfer, such as image captioning [40], video action localization [51], image-text retrieval [1] and so on. Contrary to classical zero-shot learning, zero-shot transfer has an advantage of avoiding fine-tuning the pre-trained model on the task-specific dataset, where collecting datasets is time-consuming. There have been several works that apply CLIP encoders directly with tiny architectural modification without additional training for semantic segmentation [63], referring expression comprehension [49], phrase localization [25] and object localization [17]. Our work is also lying on the line of this research field. Zero-shot Dense Prediction Tasks.Very recently, with the success of pre-training models using large-scale image-text pairs, there have been several attempts to deal with dense prediction tasks with CLIP, _e.g_. object detection [9, 10, 13, 24, 30, 45], semantic segmentation [22, 29, 37, 42, 58, 59, 63, 64] and so on. These dense prediction tasks, however, are challenging since CLIP learns image-level features not pixel-level fine-grained features. In order to handle this issue, ViLD [13] introduces a method which crop the image to contain only the bounding box region, and then extract the visual features of cropped regions using CLIP to classify the unseen objects. This approach is applied in a wide range of dense prediction tasks which are demanded the zero-shot transfer ability of CLIP [7, 9, 10, 12, 49, 59]. While this method only considers the cropped area, there are several methods [25, 63] to consider the global context in the image, not only just the cropped region. Adapting CLIP [25] proposed the phrase localization method by modifying CLIP to generate high-resolution spatial feature maps using superpixels. MaskCLIP [63] modifies the image encoder of CLIP by transforming the value embedding layer and the last linear layer into two 1\(\times\)1 convolutional layers to handle pixel-level predictions. In this work, we focus on extracting both global and local context visual features with CLIP. Referring Image Segmentation.Referring image segmentation aims to segment a target object in an image given a natural linguistic expression introduced by [18]. There have been several fully-supervised methods for this task, where images and expressions are used as an input, and the target mask is given for training [2, 20, 33, 55, 60, 62]. Most of works [6, 61, 11, 23, 60] focuses on how to fuse those two features in different modalities extracted from independent encoders. Early works [32, 26] extract multi-modal features by simply concatenating visual and textual features and feed them into the segmentation networks [35] to predict dense segmentation masks. There have been two branches of works fusing cross-modal features; an attention based encoder fusion [60, 11, 57] and a cross-modal decoder fusion based on a Transformer decoder [6, 54, 61]. Recently, a CLIP-based approach, which learns separated image and text transformer using a contrastive pre-training, has been proposed [54]. Those fully supervised referring image segmentation methods show good performances in general, but they require dense annotations for target masks and comprehensive expressions describing the target object. 
To address this problem, TSEG [48] proposed a weakly-supervised referring image segmentation method which learns the segmentation model using text-based image-level supervisions. However, this method still requires high-level referring expression annotations with images for specific datasets. Therefore, we propose a new baseline for zero-shot referring image segmentation without any training or supervisions. ## 3 Method In this section, we present the proposed method for zero-shot referring image segmentation in detail. We first show an overall framework of the proposed method (3.1), and then discuss the detailed methods for extracting visual features (3.2) and textual features (3.3) to encode global and local contextual information. ### Overall Framework To solve the task of referring image segmentation, which aims to predict the target region grounded to the text description, it is essential to learn image and text representations in a shared embedding space. To this end, we adopt CLIP to leverage the pre-trained cross-modal features for images and natural language. Our framework consists of two parts as shown in Fig 2: (1) global-local visual encoder for visual representation, and (2) global-local natural language encoder for referring expression representation. Given a set of mask proposals generated by an unsupervised mask generator [52, 53], we first extract two visual features in global-context and local-context levels for each mask proposal, and then combine them into a single visual feature. Our global-context visual features can comprehensively represent the masked area as well as the surrounding region, while the local-context visual features can capture the representation of the specific masked region. This acts key roles in the referring image segmentation task because we need to focus a small specific target region using a comprehensive expression of the target. At the same time, given a sentence of expressing the target, our textual representation is extracted by the CLIP text encoder. In order to understand a holistic expression of the target as well as to focus on the target object itself, we first extract a key noun phrase from a sentence using a dependency parsing provided by spaCy [16], and then combine a global sentence feature and a local target noun phrase feature. Note that, our visual and text encoders are designed to handle both global-context and local-context information in a consistent way. Since our method is built on CLIP where the visual and textual features are embedded in the common embedding Figure 2: Overall framework of our global-local CLIP. Given an image and an expression as inputs, we extract global-local context visual features using mask proposals, and also we extract a global-local context textual feature. After computing the cosine similarity scores between all global-local context visual features and a global-local context textual feature, we choose the mask with the highest score. space, we can formulate the objective of our zero-shot image referring segmentation task as follows. 
Given inputs of an image \(I\) and a referring expression \(T\), our method finds the mask that has the maximum similarity between its visual feature and the given textual feature among all mask proposals: \[\hat{m}=\arg\max_{m\in M(I)}\text{sim}(\mathbf{t},\mathbf{f}_{m}), \tag{1}\] where \(\text{sim}(\cdot,\cdot)\) is a cosine similarity, \(\mathbf{t}\) is the proposed global-local textual feature for a referring expression \(T\), \(\mathbf{f}\) is the mask-guided global-local visual feature, and \(M(I)\) is a mask proposal set for a given image \(I\). ### Mask-guided Global-local Visual Features To segment the target region related to the referring expression, it is essential to understand a global relationship between multiple objects in the image as well as local semantic information of the target. In this section, we demonstrate how to extract global and local-context features using CLIP, and how to fuse them. Since CLIP is designed to learn image-level representation, it is not well-suited for a pixel-level dense prediction such as an image segmentation. To overcome the limitation of using CLIP, we decompose the task into two sub-tasks: mask proposal generation and masked image-text matching. In order to generate mask proposals, we use the off-the-shelf mask extractor [53] which is the unsupervised instance-level mask generation model. By using mask proposals explicitly, our method can handle fine-detailed instance-level segmentation masks with CLIP. Global-context Visual Features.For each mask proposals, we first extract global-context visual features using the CLIP pre-trained model. The original visual features from CLIP, however, is designed to generate one single feature vector to describe the whole image. To tackle this issue, we modify a visual encoder from CLIP to extract features that contain information from not only the masked region but also surrounding regions to understand relationships between multiple objects. In this paper, we use two different architectures for the visual encoder as in CLIP: ResNet [15] and Vision Transformer (ViT) [8]. For the visual encoder with the ResNet architecture, we denote a visual feature extractor without a pooling layer as \(\phi_{\text{tf}}\) and its attention pooling layer as \(\phi_{\text{att}}\). Then the visual feature, \(\mathbf{f}\), using the visual encoder of CLIP, \(\phi_{\text{CLIP}}\), can be expressed as follows: \[\mathbf{f}=\phi_{\text{CLIP}}(I)=\phi_{\text{att}}(\phi_{\text{tf}}(I)), \tag{2}\] where \(I\) is a given image. Similarly, since ViT has multiple multi-head attention layers, we divide this visual encoder into two parts: last \(k\) layers and the rest. We denote the former one by \(\phi_{\text{att}}\), and the later one by \(\phi_{\text{f}}\) for ViT architectures based on CLIP. Then given an image \(I\) and a mask \(m\), our global-context visual feature is defined as follows: \[\mathbf{f}_{m}^{G}=\phi_{\text{att}}(\phi_{f}(I)\odot\bar{m}), \tag{3}\] where \(\bar{m}\) is the resized mask scaled to the size of the feature map, and \(\odot\) is a Hadamard product operation. We illustrate more details of this masking strategy for each architecture of CLIP in Section 4.1 and Figure 3. We refer to it as the global context visual feature, because the entire image is passed through the encoder and the feature map at the last layer contain the holistic information about the image. 
Although we use mask proposals to obtain the features only on masked regions on the feature map, these features already have comprehensive information about the scene. Local-context Visual Features.To obtain local-context visual features given a mask proposal, we first mask the image and then crop the image to obtain a new image surrounding only an area of the mask proposal. After cropping and masking the image, it is passed to the visual encoder of CLIP to extract our local-context visual feature \(\mathbf{f}_{m}^{L}\): \[\mathbf{f}_{m}^{L}=\phi_{\text{CLIP}}(\mathcal{T}_{\text{crop}}(I\odot m)), \tag{4}\] Figure 3: Detailed illustration of our mask-guided global-context visual encoders in ResNet and ViT architectures: (a) Masked attention pooling in ResNet, (b) Token masking in ViT. where \(\mathcal{T}_{crop}(\cdot)\) denotes a cropping operation. This approach is commonly used in zero-shot semantic segmentation methods [7, 59]. Since this feature focuses on the masked region in the image where irrelevant regions are removed, it concentrates only on the target object itself. Global-local Context Visual features.We aggregate global- and local-context features over masked regions to obtain one single visual feature that describe a representation of masked regions of the image. The global-local context visual feature is computed as follows: \[\mathbf{f}_{m}=\alpha\;\mathbf{f}_{m}^{G}+(1-\alpha)\;\mathbf{f}_{m}^{L}, \tag{5}\] where \(\alpha\in[0,1]\) is a constant parameter, \(m\) is a mask proposal, \(\mathbf{f}^{G}\) and \(\mathbf{f}^{L}\) are global-context and local-context visual features in Eq. (3) and Eq. (4), respectively. As in Eq. (1), the score for each mask proposal is then obtained by computing similarity between our global-local context visual features and the textual feature of the expression described in the next section. ### Global-local Textual Features Similar to the visual features, it is important to understand a holistic meaning as well as the target object noun in given expressions. Given a referring expression \(T\), we extract a global sentence feature, \(\mathbf{t}^{G}\), using the pre-trained CLIP text encoder, \(\psi_{\text{CLIP}}\), as follows: \[\mathbf{t}^{G}=\psi_{\text{CLIP}}(T). \tag{6}\] Although the CLIP text encoder can extract the textual representation aligning with the image-level representation, it is hard to focus on the target noun in the expression because the expression of this task is formed as a complex sentence containing multiple clauses, _e.g._ "a dark brown leather sofa behind a foot stool that has a laptop computer on it". To address this problem, we exploit a dependency parsing using spaCy [16] to find the target noun phrase, NP(\(T\)), given the text expression \(T\). To find the target noun phrase, we first find all noun phrases in the expression, and then select the target noun phrase that contains the root noun of the sentence. After identifying the target noun phrase in the input sentence, we extract the local-context textual feature from the CLIP textual encoder: \[\mathbf{t}^{L}=\psi_{\text{CLIP}}(\text{NP}(T)). \tag{7}\] Finally, our global-local context textual feature is computed by a weighted sum of the global and local textual features described in Eq. (6) and Eq. (7) as follows: \[\mathbf{t}=\beta\;\mathbf{t}^{G}+(1-\beta)\;\mathbf{t}^{L}, \tag{8}\] where \(\beta\in[0,1]\) is a constant parameter, \(\mathbf{t}^{G}\) and \(\mathbf{t}^{L}\) are global sentence and local noun-phrase textual features, respectively. 
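To make the scoring pipeline of Eq. (1), Eq. (5) and Eq. (8) concrete, the following is a minimal sketch of the final selection step, assuming the CLIP visual and textual features, the FreeSOLO mask proposals, and the spaCy noun-phrase extraction have already been computed upstream; the function names and default weights are illustrative placeholders rather than part of the released implementation.

```python
import torch
import torch.nn.functional as F

def fuse(global_feat: torch.Tensor, local_feat: torch.Tensor, w: float) -> torch.Tensor:
    # Weighted sum of a global-context and a local-context feature
    # (Eq. (5) on the visual side, Eq. (8) on the textual side).
    return w * global_feat + (1.0 - w) * local_feat

def select_mask(vis_global, vis_local, txt_sentence, txt_noun_phrase,
                alpha: float = 0.85, beta: float = 0.5):
    """Eq. (1): return the index of the mask proposal whose fused visual feature
    has the highest cosine similarity with the fused textual feature."""
    t = fuse(txt_sentence, txt_noun_phrase, beta)        # global-local textual feature
    scores = []
    for f_g, f_l in zip(vis_global, vis_local):          # one feature pair per mask proposal
        f = fuse(f_g, f_l, alpha)                        # global-local visual feature
        scores.append(F.cosine_similarity(f, t, dim=-1))
    scores = torch.stack(scores)
    return int(torch.argmax(scores)), scores
```

With CLIP ViT-B/32 each feature is a 512-dimensional vector, so the whole selection reduces to a handful of weighted sums and cosine similarities per image.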
## 4 Implementation Details

We use the unsupervised instance segmentation method FreeSOLO [53] to obtain mask proposals, and the shorter side of an input image is set to 800. For CLIP, the size of an image is set to 224×224. The number of masking layers \(k\) in ViT is set to 3. We set \(\alpha=0.85\) for RefCOCOg, 0.95 for RefCOCO and RefCOCO+, and \(\beta=0.5\) for all datasets.

### Masking in Global-context Visual Encoder

We use both ResNet-50 and ViT-B/32 architectures for the CLIP visual encoder. The masking strategies of the global-context visual encoder for these two architectures are mostly similar but have small differences, described next.

Masked Attention Pooling in ResNet [15]. In the ResNet-based visual encoder of the original CLIP, the global average pooling layer is replaced by an attention pooling layer. This attention pooling layer has the same architecture as the multi-head attention in a Transformer. The _query_ of the attention pooling layer is computed by a global average pooling operation on the feature maps extracted by the ResNet backbone. The _key_ and _value_ of the attention pooling layer are given by the flattened feature map. In our masked attention pooling, we mask the feature map using a given mask before computing _query_, _key_ and _value_. After masking the feature maps, we compute _query_, _key_ and _value_, which are then fed into the multi-head attention layer. A detailed illustration of our masked attention pooling in ResNet is shown in Figure 3(a).

Token Masking in ViT [8]. Following ViT, we divide an image into grid patches, embed the patches with a linear layer and positional embeddings to get tokens, and then process those tokens with a series of Transformer layers. To capture the global context of images, we mask tokens in only the last \(k\) Transformer layers. The tokens are reshaped and masked by a given mask proposal, and then flattened and fed to the subsequent Transformer layer. As ViT has a class token (CLS), we use the final output feature of this CLS token as our global-context visual representation. The detailed method of our token masking in ViT is also shown in Figure 3(b). In our experiments, we use the ViT-B/32 architecture for the backbone of our ViT-based visual encoder, and we apply token masking to the last 3 layers of the visual encoder. We show the performances with respect to the location of the token masking layers in the supplementary materials.

## 5 Experiments

### Datasets and Metrics

We evaluate our method on RefCOCO [41], RefCOCO+ [41] and RefCOCOg [21, 38], where the images and masks in the MS-COCO [31] dataset are used to annotate the ground truth of the referring image segmentation task. RefCOCO, RefCOCO+ and RefCOCOg have 19,994, 19,992 and 26,711 images with 142,210, 141,564 and 104,560 referring expressions, respectively. RefCOCO and RefCOCO+ have shorter expressions, with an average of 1.6 nouns and 3.6 words per expression, while RefCOCOg expresses more complex relations with longer sentences and has an average of about 2.8 nouns and 8.4 words. The detailed statistics of those datasets are provided in our supplementary materials. For the evaluation metrics, we use the overall Intersection over Union (oIoU) and the mean Intersection over Union (mIoU), which are the common metrics for the referring image segmentation task. The oIoU is measured by the total area of intersection divided by the total area of union, where the total area is computed by accumulating over all examples.
In our ablation study, we use oIoU since most supervised RIS methods [23, 6] adopt it. We also report mIoU as in [48], which computes the average IoU across all examples and thus takes object sizes into account.

### Baselines

We compare our method against the following zero-shot baselines, all of which score the same set of mask proposals.

* **Activation Map:** The first baseline obtains activation maps from the similarity scores of image-text pairs; we mask the maps, aggregate the scores for all mask proposals, and select the mask with the highest score.
* **Score Map:** The second baseline is the method extracting a dense score map as in MaskCLIP [63]. As in MaskCLIP, to obtain dense score maps without pooling, the _value_ linear layer and the last linear layer in the attention pooling are transformed into two consecutive 1×1 convolution layers. The feature map extracted from ResNet is forwarded to those two layers to get a language-compatible dense image feature map, on which we compute cosine similarity with CLIP's textual feature. After obtaining a score map, we project the mask proposals onto the score map. The scores in the mask area are averaged, and then we select the mask with the maximum score.
* **Region Token in ViT:** The third baseline is the method used in Adapting CLIP [25]. Similar to Adapting CLIP, we use region tokens for each mask proposal in all Transformer layers of CLIP's visual encoder instead of using superpixels. We finally compute the cosine similarity between the class token of each mask proposal and CLIP's textual feature, and then choose the mask with the highest score.
* **Cropping:** The last baseline is our local-context visual features described in Section 3.2. Cropping and masking is a commonly used approach for extracting mask- or box-region features with CLIP in a range of zero-shot dense prediction tasks [13, 9, 49, 59, 7]. Therefore, we consider cropping as one of the zero-shot RIS baselines.

### Results

Main Results. We report the referring image segmentation performance of our global-local CLIP and the other baselines on RefCOCO, RefCOCO+ and RefCOCOg in terms of the oIoU and mIoU metrics in Table 1. For a fair comparison, all methods including the baselines use FreeSOLO [53] mask proposals to produce the final output mask. The experimental results show that our method outperforms the other baseline methods by substantial margins. Our method also surpasses the weakly supervised referring image segmentation method (TSEG) [48] in terms of mIoU1. We also show the upper-bound performance of using FreeSOLO, where the scores are computed by the IoU between the ground-truth masks and their maximum-overlap mask proposals. Although there is still a gap compared to the fully supervised referring image segmentation methods, our method improves performance significantly compared to the baselines with the same upper bound.

Footnote 1: We only compare mIoU scores with TSEG since it reports only mIoU scores in the paper.

Zero-shot Evaluation on Unseen Domain. To verify the effectiveness of our method in a more practical setting, we report the zero-shot evaluation results with SoTA supervised methods [60, 54] on the test split of PhraseCut [56] in Figure 4 (left). Note that RefCOCO contains expressions for only 80 salient object classes, whereas PhraseCut covers a variety of additional visual concepts, _i.e_. 1272 categories in the test set. Our method outperforms both supervised methods, even though our models were never trained under RIS supervision. When evaluated on a subset of classes that are not seen in the RefCOCO datasets (_Unseen_ column), the supervised methods show significant performance degradation, whereas our method works robustly on this subset.
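For reference, the two metrics reported throughout this section can be computed with the short sketch below: oIoU accumulates intersection and union areas over the whole evaluation set before dividing, whereas mIoU averages the per-example IoUs. The input format (lists of binary masks) is an illustrative assumption.

```python
import numpy as np

def evaluate_masks(preds, gts):
    """preds, gts: lists of binary numpy arrays of shape (H, W), one pair per example."""
    total_inter, total_union, per_example = 0, 0, []
    for p, g in zip(preds, gts):
        p, g = p.astype(bool), g.astype(bool)
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        total_inter += inter
        total_union += union
        per_example.append(inter / union if union > 0 else 0.0)
    oiou = total_inter / total_union if total_union > 0 else 0.0  # accumulate, then divide
    miou = float(np.mean(per_example))                            # divide per example, then average
    return oiou, miou
```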
Figure 5: Qualitative results with different levels of visual features. COCO instance GT masks are used as mask proposals to validate the effect of the global-local context visual features. Figure 6: Qualitative results with different levels of textual features using COCO Instance GT mask proposals. **Comparison to supervised methods in few-shot Setting.** We also compare our model to two supervised RIS methods [54, 60] in a few-shot learning setting, where the training set includes \(k\) instances for each of 80 classes in RefCOCO2. Note that the supervised methods use additional forms of supervision in training, whereas our method does not require any form of training or additional supervision; thus this setting is even disadvantageous to our method. Figure 4 (right) shows oIoU while varying \(k\) on RefCOCOg. The results clearly show that our method outperforms both supervised methods with large margins when \(k\) is small, and the gaps narrow as \(k\) gets larger (64 and 256 for LAVT [60] and CRIS [54], respectively). Note that it covers about 10% of the training set when \(k=64\) and the same trends hold for both RefCOCO and RefCOCO+. Footnote 2: we use object classes in RefCOCO GT annotation. This is to cover all salient objects in the dataset during the few-shot training. ### Ablation Study **Effects of Mask Quality.** To show the impact of the proposed method without considering the mask quality of the mask generators, we evaluate the performance of our method and the baselines with COCO instance GT masks in Table 2. Our approach has demonstrated superior performance compared to all baselines and has shown a performance improvement of over 3.5%, particularly on RefCOCOg which includes longer expressions. We believe that our method performs well on challenging examples that involve complex expressions, such as those with multiple clauses, which require an understanding of both the language and the scene. **Effects of Global-Local Context Features.** We also study the effects of global-local context features in both visual and textual modalities and show the results in Table 3. For this analysis, we use RefCOCOg as it contains more complex expressions with multiple clauses. Among all combinations of two modalities, using both global-local context features in the visual and textual domains leads to the best performance. **Qualitative Analysis.** We demonstrate several results that support the effectiveness of our global-local context visual features in Figure 5. To show this effect more clearly, we use COCO instance GT masks as mask proposals. When using only local-context visual features, the predicted mask tends to focus on the instance that shares the same class as the target object. However, when using only global-context visual features, the predicted mask tends to capture the context of the expression but may focus on a different object class. By combining global and local context, our method successfully finds the target mask. We also demonstrate the effectiveness of our global-local context textual features in Figure 6. Furthermore, we compare the qualitative results of our method with baseline methods in Figure 7. Our proposed global-local CLIP outperforms the baseline methods in identifying the target object by taking into account the global context of the image and expression. 
## 6 Conclusion

In this paper, we propose a simple yet effective zero-shot referring image segmentation framework focusing on transferring knowledge from the image-text cross-modal representations of CLIP. To tackle the difficulty of the referring image segmentation task, we propose global-local context encodings to compute similarities between images and expressions, where both the target object semantics and the relations between the objects are handled in a unified framework. The proposed method significantly outperforms all baseline methods as well as the weakly supervised method.

**Acknowledgement.** This work was supported by the IITP grants (No.2019-0-01842, No.2021-0-02068, No.2022-0-00926) funded by MSIT, the ISTD program (No.20018334) funded by MOTIE, and the GIST-MIT Research Collaboration grant funded by GIST, Korea.

Figure 7: Qualitative results of our method compared with several baselines. Note that all methods use mask proposals generated by FreeSOLO.
2309.12932
Different Regular Black Holes: Geodesic Structures of Test Particles
This paper investigates the metric of previously proposed regular black holes, calculates their effective potentials, and plots the curves of the effective potentials. By determining the conserved quantities, the dynamical equations for particles and photons near the black hole are derived. The analysis encompasses timelike and null geodesics in different spacetimes, including bound geodesics, unstable circular geodesics, stable circular geodesics, and escape geodesics. The findings are presented through figures and tables. Furthermore, the bound geodesics of the four regular black hole spacetimes are analyzed, examining the average distance of particle orbits from the center of the event horizon, the precession behavior of the perihelion, and the probability of particles appearing inside the outer event horizon during motion. Based on these analyses, a general formula is proposed, which yields the existing metrics when specific parameter values are chosen. The impact of parameter variations on the effective potential and geodesics is then computed using this new formula.
Zihan Xi, Chen Wu, Wenjun Guo
2023-09-22T15:30:57Z
http://arxiv.org/abs/2309.12932v1
# Different Regular Black Holes: Geodesic Structures of Test Particles ###### Abstract This paper investigates the metric of previously proposed regular black holes, calculates their effective potentials, and plots the curves of the effective potentials. By determining the conserved quantities, the dynamical equations for particles and photons near the black hole are derived. The analysis encompasses timelike and null geodesics in different spacetimes, including bound geodesics, unstable circular geodesics, stable circular geodesics, and escape geodesics. The findings are presented through figures and tables. Furthermore, the bound geodesics of the four regular black hole spacetimes are analyzed, examining the average distance of particle orbits from the center of the event horizon, the precession behavior of the perihelion, and the probability of particles appearing inside the outer event horizon during motion. Based on these analyses, a general formula is proposed, which yields the existing metrics when specific parameter values are chosen. The impact of parameter variations on the effective potential and geodesics is then computed using this new formula. Key words: black hole, effective potential, geodesic structure, Introduction Black holes are a significant astronomical entity that holds great research value. Accurately determining their properties through meticulous calculations and thorough observations is crucial for advancing our understanding of these phenomena. N. Heidari has proposed a new analytical method for computing the quasinormal modes of black holes by employing the Rosen-Morse potential to estimate the quasi-normal frequencies of Schwarzschild black holes. By performing numerical calculations and comparisons, the authors have demonstrated that this approach outperforms previous techniques in terms of accuracy[1].Moreover, this method offers a valuable contribution to our understanding of the physical properties of black holes and promotes further advancements in related research areas. Chen Wu's investigation of gravitational perturbations and quasinormal mode (QNM) frequencies around some regular black holes suggests that the Wentzel-Kramers-Brillouin (WKB) approximation and asymptotic iteration method can be used to perform a detailed analysis of the frequencies of gravitational QNMs. His research results indicate that the imaginary part of the quasinormal frequencies as a function of the charge parameter exhibits different monotonic behaviors for different black hole spacetimes. Moreover, the article provides an asymptotic expression for gravitational QNMs using the Eikonal limit method and proves the stability of gravitational perturbations in these spacetimes[2]. The findings of this research article significantly enhance our understanding of the properties of the gravitational field and the stability of black holes. Sometimes breakthroughs in other fields can also advance our understanding of black holes. After the discovery of gravitational waves in 2015, researchers found that by analyzing the gravitational wave signals, they could determine some characteristics of black holes that were previously unproven, such as the high mass of black hole binaries and the near-absence of spin in black hole binaries. The combination of gravitational waves and black hole research is a popular research direction in the study of black holes [3; 4; 5]. In order to obtain more precise fundamental properties of black holes, it is crucial to identify more reliable methods. 
Zening Yan employed three distinct methods in their investigation of the Schwarzschild-Tangherlini black hole spacetime and numerically validated that the third-order WKB approximation outperformed higher-order WKB approximations in their study [6]. Such research on methods can be of great assistance to future researchers. As the ultimate properties of a black hole are uniquely determined by its mass, charge, and angular momentum, researchers often choose to study charged black holes [7], rotating black holes [8], or black holes surrounded by other matter [9]. Mubasher Jamila investigated the dynamics of particles around a Schwarzschild-like black hole surrounded by dark energy and an external magnetic field. They found that, regardless of the charge of the test particles, the radius of their innermost stable circular orbit (ISCO) and their orbital frequency were strongly influenced by the magnetic field [10]. These findings have important theoretical and practical implications for furthering our understanding of black hole physics and noncommutative geometry, among other fields. Studying multiple types of black holes is therefore crucial, as the properties of different black hole spacetimes vary significantly. Yen-Kheng Lim investigated the geodesic equations of charged and uncharged particles in the Ernst metric and found that their orbits can only be stable when the electric field strength is below a certain critical value [11]. In the study of black holes, investigating geodesics is particularly important. Sheng Zhou and others investigated the geodesic structures of test particles in the Bardeen spacetime. By analyzing the effective potential, they identified the timelike and null geodesic trajectories in the Bardeen spacetime and described the possible orbits of particles and photons using diagrams [12]. E. Kapsabelis investigated the geodesics of the Schwarzschild-Finsler-Randers (SFR) spacetime and compared their model with the corresponding model in general relativity. They found small differences in the deflection angles between the SFR spacetime and general relativity, which can be attributed to the anisotropic metric structure of the model and the Randers term [13]. Jiri Podolsky investigated some properties of the extreme Schwarzschild-de Sitter spacetime. By studying geodesics and the geodesic deviation equation, they concluded that a specific group of observers can escape the singularity of the black hole. Their paper also proposed a synchronous coordinate system, which provides a basis for the further development of black hole no-hair theory [14].

## II Regular black hole and orbit equation

### Regular black hole

The general line element representing a spherically symmetric regular BH is given by

\[ds^{2}=-f(r)dt^{2}+f(r)^{-1}dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2}), \tag{1}\]

Where \(t\), \(r\), \(\theta\), \(\phi\) represent ordinary spacetime spherical coordinates, and the lapse function \(f(r)\) is determined by the specific spacetime. Chi Zhang conducted an investigation of several regular black hole spacetimes [15]. This article selects the following regular black hole spacetimes.

The first is the new regular black hole spacetime discovered by Hayward in 2006 [16]. This black hole is similar to the one discovered by Bardeen [17]. Its characteristic is that, as the radius tends to infinity, the correction term in the lapse function rapidly approaches zero, indicating that the black hole does not exert a significant gravitational influence on the surrounding space in regions far from the event horizon.
The lapse function is

\[f(r)=\left(1-\frac{2Mr^{2}}{r^{3}+2\alpha^{2}}\right) \tag{2}\]

Where \(\alpha\) is assumed to be a positive constant and \(M\) represents the mass of the black hole. The number of event horizons can be 0, 1, or 2, depending on the relative values of \(M\) and \(\alpha\).

Eloy Ayon-Beato proposed a regular exact black hole solution, which is a charged black hole with a source that satisfies the weak energy condition in nonlinear electrodynamics [18]. The lapse function for this black hole is given as

\[f(r)=\left(1-\frac{2Mr^{2}}{(r^{2}+q^{2})^{\frac{3}{2}}}+\frac{q^{2}r^{2}}{(r^{2}+q^{2})^{2}}\right) \tag{3}\]

Where \(q\) represents the charge.

Bronnikov introduced nonlinear electrodynamics, considering the Born-Infeld theory, and constructed a regular black hole [19; 20]. The lapse function for this black hole is given as

\[f(r)=\left(1-\frac{2M}{r}\left(1-\tanh\frac{r_{0}}{r}\right)\right) \tag{4}\]

Where \(r_{0}\) is related to the electric charge.

Afterwards, following the work of Bronnikov [19; 20], Dymnikova established a regular spherically symmetric charged black hole [21]. He considered the coupling of nonlinear electrodynamics with general relativity. The lapse function for Dymnikova's solution is given as

\[f(r)=\left(1-\frac{4M}{\pi r}\left(\arctan\frac{r}{r_{0}}-\frac{rr_{0}}{r^{2}+r_{0}^{2}}\right)\right) \tag{5}\]

Where \(r_{0}=\frac{\pi}{8}\frac{q^{2}}{M}\) is a defined length scale and \(q\) represents the charge. The black holes selected in this study have been organized into Table 1, with the parameters used for each black hole provided.

### Orbit equation

The general line element representing spherically symmetric regular black holes is given by (1), and the corresponding Lagrangian can be obtained from the variational principle as:

\[\mathcal{L}=\frac{1}{2}\left[-f\left(r\right)\dot{t}^{2}+f\left(r\right)^{-1}\dot{r}^{2}+r^{2}\left(\dot{\theta}^{2}+\sin^{2}\theta\dot{\varphi}^{2}\right)\right] \tag{6}\]

For photons, \(\mathcal{L}=0\). For particles, by choosing the affine parameter \(\lambda\) to be the proper time \(\tau\), \(\mathcal{L}=\frac{1}{2}\). Define

\[\eta=\left\{\begin{array}{ll}0&\left(\text{photons}\right)\\ 1&\left(\text{particles}\right)\end{array}\right. \tag{7}\]

Then, the Lagrangian is \(\mathcal{L}=\frac{\eta}{2}\). Choosing \(\tau\) as the affine parameter, the Euler-Lagrange equation is

\[\frac{d}{d\tau}\frac{\partial\mathcal{L}}{\partial\dot{x}^{\nu}}-\frac{\partial\mathcal{L}}{\partial x^{\nu}}=0 \tag{8}\]

Since the metric is static and spherically symmetric, it is not a function of the time \(t\) or the azimuthal angle \(\varphi\), yielding

\[\frac{\partial\mathcal{L}}{\partial\dot{t}}=-f\left(r\right)\dot{t}=-E \tag{9}\]

\[\frac{\partial\mathcal{L}}{\partial\dot{\varphi}}=r^{2}\sin^{2}\theta\dot{\varphi}=L \tag{10}\]

Where \(E\) and \(L\) are two conserved quantities. If we choose the initial condition as \(\theta=\frac{\pi}{2}\), then we have \(\dot{\theta}=0\), \(\ddot{\theta}=0\). In this case, we can derive the orbital equation in the equatorial plane selected in this paper.
\[\dot{r}^{2}=E^{2}-f\left(r\right)\left(\eta+\frac{L^{2}}{r^{2}}\right) \tag{11}\] \begin{table} \begin{tabular}{c c c c} \hline \hline \multicolumn{1}{c}{Lapse function} & \multicolumn{1}{c}{Extremal condition} & \multicolumn{1}{c}{Reference} & \multicolumn{1}{c}{Originator} \\ \hline \(f=\left(1-\frac{2Mr^{2}}{r^{3}+2\alpha^{2}}\right)\) & \(\alpha\approx 1.06\) & [16] & Hayward \\ \(f=\left(1-\frac{2Mr^{2}}{(r^{2}+q^{2})^{\frac{3}{2}}}+\frac{q^{2}r^{2}}{(r^{2} +q^{2})^{2}}\right)\) & \(q\approx 0.63\) & [18] & Ayón-Beato and García \\ \(f=\left(1-\frac{2M}{r}\left(1-\tanh\frac{r_{0}}{r}\right)\right)\) & \(r_{0}\approx 0.55\) & [19; 20] & Bronnikov \\ \(f=\left(1-\frac{4M}{\pi r}\left(\arctan\frac{r}{r_{0}}-\frac{rr_{0}}{r^{2}+r_ {0}^{2}}\right)\right)\) & \(r_{0}=0.45\) & [21] & Dymnikova \\ \hline \hline \end{tabular} \end{table} Table 1: The black holes selected in this paper we define \(f\left(r\right)\left(\eta+\frac{L^{2}}{r^{2}}\right)\) as the effective potential \(V_{eff}^{2}\), and equation (11) can be rewritten as \(\dot{r}^{2}=E^{2}-V_{eff}^{2}\). Making the substitution \(r=\frac{1}{u}\), the orbital equation can be transformed into \[\left(\frac{du}{d\varphi}\right)^{2}=\frac{E^{2}}{L^{2}}-\frac{f\left(\frac{1 }{u}\right)\eta}{L^{2}}-f\left(\frac{1}{u}\right)u^{2} \tag{12}\] ## III The geodesic structure of different spacetimes ### Hayward spacetime When \(\eta=1\), it corresponds to a timelike geodesics, and its effective potential is \[V_{eff}^{2}=\left(1-\frac{2Mr^{2}}{r^{3}+2\alpha^{2}}\right)\left(1+\frac{L^{ 2}}{r^{2}}\right) \tag{13}\] The orbital equation for the particle is \[\left(\frac{du}{d\varphi}\right)^{2}=\frac{E^{2}-\eta}{L^{2}}+\frac{2\eta Mu+2 ML^{2}u^{3}}{L^{2}\left(1+2\alpha^{2}u^{3}\right)}-u^{2} \tag{14}\] Upon further differentiation, the second-order orbital equation can be obtained \[\frac{d^{2}u}{d\varphi^{2}}=-\frac{m\left(4\alpha^{3}u^{3}-1\right)+L^{2}u \left[\left(2\alpha^{2}u^{3}+1\right)^{2}-3mu\right]}{\left(2\alpha^{2}Lu^{3} +L\right)^{2}} \tag{15}\] Numerical analysis of the equation reveals the types of orbits and provides insights into how changes in the parameters of the Hayward black hole spacetime affect the geodesic structure. As shown in Fig.1,\(E_{I}^{2}\) and \(E_{II}^{2}\)represent critical energy values. Depending on the particle's energy satisfying different conditions, three different types of motion orbits can be observed, namely bound orbits, circular orbits, and escape orbits. In the case of bound orbits: when \(E_{I}^{2}<E^{2}<E_{II}^{2}\), the effective potential curve indicates the presence of two types of bound orbits, as shown in Fig.2, at this energy level. (1) The particle is confined in a bound orbit within the range \(r_{A}<r<r_{B}\), where \(r_{A}\) and \(r_{B}\) represent the pericenter and apocenter of the planetary orbit, respectively. This orbit has a self-intersection point and exhibits counterclockwise precession. (2) The particles are bound in a bound orbit within the range of \(r_{C}<r<r_{D}\), where the distance between \(r_{C}\) and \(r_{D}\) is much larger compared to (1). The orbit has one self-intersection point and exhibits a greater degree of clockwise precession compared to (1). In the case of circular orbits: as shown in Fig.3 and Fig.4, at critical energy levels, two types of circular orbits can be observed. For higher critical energy, the orbit is an unstable circular orbit, while for lower critical energy, the orbit is a stable circular orbit. 
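A brief numerical sketch of this kind of analysis is shown below: it evaluates \(V_{eff}^{2}\) from Eq. (13) on a radial grid and locates its interior extrema, whose values give the critical energies \(E_{I}^{2}\) (stable circular orbit) and \(E_{II}^{2}\) (unstable circular orbit) that separate bound, circular, and escape orbits. The grid and the parameter choice \(\alpha=1.0\) are illustrative assumptions; \(M=1\) and \(L=3.5\) mirror the values quoted in the tables later in this section.

```python
import numpy as np

M, L = 1.0, 3.5  # illustrative values, matching those quoted in the tables

def f_hayward(r, alpha=1.0):
    # Hayward lapse function, Eq. (2)
    return 1.0 - 2.0 * M * r**2 / (r**3 + 2.0 * alpha**2)

def v_eff_sq(r, f, eta=1):
    # Effective potential squared: V_eff^2 = f(r) * (eta + L^2 / r^2); eta=1 timelike, eta=0 null
    return f(r) * (eta + L**2 / r**2)

r = np.linspace(1.0, 50.0, 200_000)
v = v_eff_sq(r, f_hayward)

# Interior local maxima/minima of V_eff^2 give the radii of unstable/stable circular orbits.
is_max = (v[1:-1] > v[:-2]) & (v[1:-1] > v[2:])
is_min = (v[1:-1] < v[:-2]) & (v[1:-1] < v[2:])
for label, sel in (("local maximum -> E_II^2", is_max), ("local minimum -> E_I^2", is_min)):
    for radius, value in zip(r[1:-1][sel], v[1:-1][sel]):
        print(f"{label}: r = {radius:.3f}, V_eff^2 = {value:.4f}")
```

Replacing `f_hayward` with any of the other three lapse functions in Table 1 reproduces the corresponding effective potential curves.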
(1) As depicted in Fig.3, when \(E=E_{II}\), it orbits on an unstable circular orbit with \(r=r_{B}\). Any slight perturbation causes the orbit to transition into two other types of orbits, where the particle is bound between \(r_{A}<r<r_{B}\) or the particle is bound between \(r_{B}<r<r_{C}\). (2) As illustrated in Fig.4, when \(E=E_{I}\), the particle's motion follows a stable circular orbit. Even in this case, there are still two scenarios. In one scenario, the particle moves on a bound orbit, confined between \(r_{A}<r<r_{B}\), and the orbit exhibits self-intersection points. In another scenario, the particle moves on a stable circular orbit at \(r=r_{C}\). In the case of escape orbits : when the particle's energy \(E\) is greater than the critical energy \(E_{II}\), three different types of escape orbits occur. As depicted in Fig.5, when \(E=E_{A}^{2}\), the particle's orbit is a curved escape orbit. The particle follows a curved path from infinity towards the vicinity of the black hole, experiences deflection along the orbit, forms self-intersection points with the previous orbit, and then returns to infinity. When \(E=E_{B}^{2}\), the particle's orbit does not curve, and deflection only occurs near the central region. After forming self-intersection points, the particle follows a straight-line path back to infinity. When \(E=E_{C}^{2}\), the particle's orbit is a straight-line orbit without self-intersection points. After deflection, the particle returns to infinity along a straight path. ### Ayon-Beato and Garcia spacetime In the Ayon-Beato and Garcia spacetime, when \(\eta=1\), it corresponds to timelike geodesics, and the effective potential is given by \[V_{eff}^{2}=\left(1-\frac{2Mr^{2}}{\left(r^{2}+q^{2}\right)^{\frac{3}{2}}}+ \frac{q^{2}r^{2}}{\left(r^{2}+q^{2}\right)^{2}}\right)\left(1+\frac{L^{2}}{r^ {2}}\right) \tag{16}\] When the energy equals \(E\), the bound orbit exhibits a significantly greater precession of the particle's orbit pericenter compared to the bound orbit in the Hayward spacetime, as illustrated in Fig.6.Its circular and escape orbits resemble those in the Hayward spacetime.When the energy \(E\) is equal to \(E_{II}\), small perturbations cause changes in the particle's orbit. When the energy \(E\) is equal to \(E_{I}\), the particle's orbit is either a bound orbit or a stable circular orbit. For \(E\) exceeding \(E_{II}\), the particle's orbit becomes an escape orbit. ### Bronnikov spacetime In Bronnikov spacetime, when \(\eta=1\), the trajectory corresponds to timelike geodesics, with the effective potential denoted as \[V_{eff}^{2}=\left(1-\frac{2M}{r}\left(1-\tanh\frac{r_{0}}{r}\right)\right)\left( 1+\frac{L^{2}}{r^{2}}\right) \tag{17}\] The effective potential can be illustrated as depicted in Figure 7, wherein the presence of three types of orbits persists: bound orbit, circular orbit, and escape orbit. The specific details of these orbits are presented in Table 2. When \(\eta=0\), it corresponds to null geodesics, characterized by an effective potential denoted as \[V_{eff}^{2}=\left(1-\frac{2M}{r}\left(1-\tanh\frac{r_{0}}{r}\right)\right) \tag{18}\] As shown in Fig.8, when \(E^{2}=E_{II}^{2}\), the particle's orbit is an unstable circular orbit. Perturbations can alter the orbit's configuration. However, in contrast to the particle's behavior, photons are confined between \(r_{A}\) and \(r_{B}\), or the photon orbit is a circular orbit at \(r=r_{B}\). Then, the curvature of the orbit gradually decreases, extending to infinity. 
When \(E_{II}^{2}<E^{2}\), as depicted in Fig.9, the photon's orbit becomes an escape orbit. When \(E^{2}=E_{A}^{2}\), photons curve near the black hole, experiencing deflection upon entering the event horizon. After exiting the event horizon, the photon's orbit forms two self-intersection points before returning along the curve to infinity. As the energy increases, the photon's orbit becomes a straight line with only one self-intersection point. With further energy increase, the self-intersection \begin{table} \begin{tabular}{c c} \hline Energy & The situation of geodesics \\ \hline \(E^{2}=E_{I}^{2}\) & Bound orbit and stable circle orbit \\ \(E_{I}^{2}<E^{2}<1\) & Two types of bounded orbits \\ \(1<E^{2}<E_{II}^{2}\) & Bound orbit and escape orbit \\ \(E_{II}^{2}<E^{2}\) & Escape orbit \\ \end{tabular} \end{table} Table 2: The category of null geodesic types in the Bronnikov spacetime with \(r_{0}=0.55\),\(L=3.5\),\(M=1\),\(E_{I}^{2}=0.910\),\(E_{II}^{2}=1.11\) points vanish, and the photon enters along a straight line, experiencing deflection near the outer event horizon, and ultimately returning along a straight path to infinity. ### Dymnikova spacetime In the Dymnikova spacetime, the effective potential is denoted as \[V_{eff}^{2}=\left(1-\frac{4M}{\pi r}\left(\arctan\frac{r}{r_{0}}-\frac{rr_{0}}{ r^{2}+r_{0}^{2}}\right)\right)\left(\eta+\frac{L^{2}}{r^{2}}\right) \tag{19}\] As illustrated in Fig.10, when \(\eta=1\) in the Dymnikova spacetime, it corresponds to timelike geodesics. The orbital behavior is similar to that in the Bronnikov spacetime. Depending on the energy level, there can exist bounded orbits, stable circular orbits, and escape orbits. When \(\eta=0\), it corresponds to null geodesics, and the specific orbital characteristics are depicted in Table 3. As depicted in Fig.11, when \(\eta=1\) and the energy is taken at an intermediate value between the two extrema, six types of timelike bound geodesics can be obtained. By performing calculations, the average distance of the particle orbit from the center of the event horizon, the precession of the perihelion of the orbit, and the probability of the particle appearing inside the outer event horizon during motion can be determined. The specific data is presented in Table 4. From this table, we can observe that for cases (I), (III), (V), and (VI), the difference between the average distance from the center of the event horizon and the midpoint between the perihelion \(r_{A}\) and aphelion \(r_{B}\), namely \(\frac{1}{2}\left(r_{A}+r_{B}\right)\), is small. This suggests that the particle orbits are relatively evenly distributed between the perihelion and aphelion. On the other hand, cases (II) and (IV) exhibit an average distance closer to the perihelion, indicating a denser distribution of orbits in the vicinity of the perihelion. \begin{table} \begin{tabular}{c c} Energy & The situation of geodesics \\ \hline \(0<E^{2}<E_{II}^{2}\) & Bound orbit and escape orbit \\ \(E^{2}=E_{II}^{2}\) & Unstable circular bound orbit and unstable circular escape orbit \\ \(E_{II}^{2}<E^{2}\) & Escape orbit \\ \end{tabular} \end{table} Table 3: The category of null geodesic types in the Dymnikova spacetime with \(r_{0}=0.55\),\(L=3.5\),\(M=1\),\(E_{II}^{2}=0.86\) Furthermore, it can be observed that cases (I) and (II) exhibit counterclockwise precession, while the others exhibit clockwise precession. Additionally, case (III) demonstrates the smallest precession angle of the perihelion, while case (IV) exhibits the largest precession angle. 
This observation may be related to the properties of the Ayon-Beato and Garcia spacetime. The table also reveals that the probability of the particle orbit being inside the outer event horizon is smaller compared to it being outside the outer event horizon. This implies that particles are more likely to move outside the event horizon. ### New metric formula After conducting research on the aforementioned four spacetimes, this paper concludes with a derived formula for the effective potential \[V_{eff}^{2}=\left(1-\frac{\alpha r^{2}}{\left(r^{2}+\beta\right)^{\frac{3}{2}} }+\frac{\beta r^{2}}{\left(r^{2}+\beta\right)^{2}}\right)\left(\eta+\frac{L^{ 2}}{r^{2}}\right) \tag{20}\] The parameter \(\alpha\) is associated with the mass \(M\), while \(\beta\) is related to the charge \(q\). Fig.12 presents the plots of the effective potential for different values of \(\alpha\), ranging from 1.6 to 2.7 in increments of 0.1, with \(\beta\) fixed at 0.3 and 0.35. It can be observed that as \(\alpha\) increases, the number of extremal points in the effective potential gradually reduces from three to one. By varying the parameters of the formula, the effective potential for the four previously selected spacetimes in this study can be obtained. The specific results are presented in Table \begin{table} \begin{tabular}{c c c c c} Geodesics & Originator & Average distance & Precession angle & Probability \\ \hline (I) & Hayward spacetime & 2.436 & Counterclockwise 0.23\(\pi\) & \\ (II) & Hayward spacetime & 7.524 & Counterclockwise 0.31\(\pi\) & \\ (III) & Ayón-Beato and Garcia spacetime & 1.509 & Clockwise 0.08\(\pi\) & 34.27\(\%\) \\ (IV) & Ayón-Beato and Garcia spacetime & 10.269 & Clockwise 0.6\(\pi\) & 0\(\%\) \\ (V) & Bronnikov spacetime & 1.200 & Clockwise 0.46\(\pi\) & 37.27\(\%\) \\ (VI) & Dymnikova spacetime & 1.101 & Clockwise 0.25\(\pi\) & 37.97\(\%\) \\ \end{tabular} \end{table} Table 4: The six types of timelike bound geodesics, their respective average distances from the center of the event horizon, precession angles of the perihelion, and probabilities of the orbit being inside the outer event horizon. Table 6 presents the relationship between the values of \(\alpha\) and the number of types of bound orbits when \(\beta\) is set to 0.3 or 0.35, and the energy is taken at an intermediate value between the two extrema. When \(\beta\) is set to 0.3 or 0.35, the bound geodesics for \(\alpha\) values ranging from the minimum to the maximum, with the energy taken at an intermediate value, are depicted in Fig.12. By performing calculations, the precession angles of the perihelion for each bound geodesics can be obtained. The specific data is presented in Table 7. 
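The effect of varying \(\alpha\) and \(\beta\) in Eq. (20) can be explored with the brief sketch below, which counts the extrema of the generalized \(V_{eff}^{2}\) on a radial grid for \(\beta=0.3\) and \(\beta=0.35\) as \(\alpha\) sweeps the same range as in Fig. 12; the number of extrema relates to the number of bound-orbit types reported in Table 6. The grid range and resolution are assumptions made for illustration.

```python
import numpy as np

def v_eff_sq(r, alpha, beta, eta=1, L=3.5):
    # Generalized lapse function and effective potential of Eq. (20)
    f = 1.0 - alpha * r**2 / (r**2 + beta) ** 1.5 + beta * r**2 / (r**2 + beta) ** 2
    return f * (eta + L**2 / r**2)

def count_extrema(alpha, beta, r_min=0.1, r_max=50.0, n=200_000):
    r = np.linspace(r_min, r_max, n)
    dv = np.diff(v_eff_sq(r, alpha, beta))
    return int(np.sum(dv[1:] * dv[:-1] < 0))  # slope sign changes = number of extrema

for beta in (0.3, 0.35):
    for alpha in np.arange(1.6, 2.71, 0.1):
        print(f"beta = {beta:.2f}, alpha = {alpha:.1f}: {count_extrema(alpha, beta)} extrema")
```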
\begin{table} \begin{tabular}{c c c} \(\beta\) & \(\alpha\) & Originator \\ \hline 0.3 & 1.664 & Ayón-Beato and Garcia spacetime \\ 0.3 & 1.737 & Bronnikov spacetime \\ 0.3 & 1.771 & Dymnikova spacetime \\ 0.3 & 2.391 & Hayward spacetime \\ 0.35 & 1.838 & Ayón-Beato and Garcia spacetime \\ 0.35 & 1.917 & Bronnikov spacetime \\ 0.35 & 1.953 & Dymnikova spacetime \\ 0.35 & 2.637 & Hayward spacetime \\ \end{tabular} \end{table} Table 5: The values of the parameters in the formula for each spacetime \begin{table} \begin{tabular}{c c c} \(\beta\) & \(\alpha\) & The number of types of bound orbits \\ \hline 0.3 & \(1.6\leq\alpha<1.634\) & One \\ 0.3 & \(1.634\leq\alpha<2.170\) & Two \\ 0.3 & \(2.170\leq\alpha\leq 2.7\) & One \\ 0.35 & \(1.6\leq\alpha<1.745\) & One \\ 0.35 & \(1.745\leq\alpha<2.196\) & Two \\ 0.35 & \(2.196\leq\alpha\leq 2.7\) & One \\ \end{tabular} \end{table} Table 6: The relationship between the values of \(\alpha\) and \(\beta\) and the number of types of bound orbits. ## IV Conclusion This paper investigates a selection of regular black hole spacetimes[15], specifically focusing on four regular black hole spacetimes. Starting from the general line element of spherically symmetric regular black holes, the corresponding Lagrangian is derived using the variational principle. After determining the conserved quantities, setting \(\theta=\frac{\pi}{2}\) allows the derivation of the orbital equations in the equatorial plane. By further differentiating these equations, the second-order orbital equations are obtained. By analyzing the orbital equations, this paper identifies different types of orbits in all cases, including bound orbits, stable circular orbits, unstable circular orbits, and escape orbits. The cases of \(\eta=1\) and \(\eta=0\) are discussed separately for both timelike and null geodesics in different spacetimes. The research focuses on the analysis of bound orbits in the aforementioned four regular black hole spacetimes. Through analyzing and calculating particle orbit data, the average distance of the particle orbit from the center of the event horizon is determined. By analyzing the periodicity of the orbits, the precession behavior of the perihelion is obtained. Finally, by analyzing the particle's position during motion, the probability of the particle appearing inside the outer event horizon is determined. In conclusion, this paper presents a new metric formula. When specific values of the parameters \(\alpha\) and \(\beta\) are chosen, existing metrics can be obtained. The impact of parameter variations of \(\alpha\) and \(\beta\) on the effective \begin{table} \begin{tabular}{c c c c} Timelike bound geodesics & \(\beta\) & \(\alpha\) & \multicolumn{2}{c}{Precession angle of the perihelion} \\ \hline (I) & 0.3 & 1.8 & 0.49 \\ (I) & 0.3 & 1.9 & 0.27 \\ (I) & 0.3 & 2.0 & 0.11 \\ (I) & 0.3 & 2.1 & 0.99 \\ (I) & 0.35 & 1.8 & 0.51 \\ (I) & 0.35 & 1.9 & 0.32 \\ (I) & 0.35 & 2.0 & 0 \\ (I) & 0.35 & 2.1 & 0.67 \\ \end{tabular} \end{table} Table 7: The precession angles of the perihelion for bound geodesics with continuous variations of \(\alpha\), when \(\beta\) is set to 0.3 and 0.35 potential and geodesics is calculated and analyzed.
2309.11585
SpeechAlign: a Framework for Speech Translation Alignment Evaluation
Speech-to-Speech and Speech-to-Text translation are currently dynamic areas of research. In our commitment to advance these fields, we present SpeechAlign, a framework designed to evaluate the underexplored field of source-target alignment in speech models. The SpeechAlign framework has two core components. First, to tackle the absence of suitable evaluation datasets, we introduce the Speech Gold Alignment dataset, built upon a English-German text translation gold alignment dataset. Secondly, we introduce two novel metrics, Speech Alignment Error Rate (SAER) and Time-weighted Speech Alignment Error Rate (TW-SAER), which enable the evaluation of alignment quality within speech models. While the former gives equal importance to each word, the latter assigns weights based on the length of the words in the speech signal. By publishing SpeechAlign we provide an accessible evaluation framework for model assessment, and we employ it to benchmark open-source Speech Translation models. In doing so, we contribute to the ongoing research progress within the fields of Speech-to-Speech and Speech-to-Text translation.
Belen Alastruey, Aleix Sant, Gerard I. Gállego, David Dale, Marta R. Costa-jussà
2023-09-20T18:46:37Z
http://arxiv.org/abs/2309.11585v2
# SpeechAlign: a Framework for Speech Translation Alignment Evaluation

###### Abstract

Speech-to-text Translation (S2TT) and Speech-to-speech Translation (S2ST) refer to the task of converting spoken language into, respectively, written text or speech in a different language. These tasks are increasingly popular and can be used for applications such as subtitling videos in a different language, translating between languages that do not have a written form, and, in general, ensuring seamless communication among people worldwide. The initial approach to S2TT and S2ST involved the integration of distinct models, forming what is nowadays known as a cascade system (Ney, 1999). These systems consist of an Automatic Speech Recognition (ASR) model that transcribes the spoken sentence, and a Machine Translation (MT) model that translates the sentence into another language. In the case of S2ST, an additional speech synthesizer is needed to generate the corresponding speech from the translated text. However, recent advancements have led to the development of end-to-end models that perform translation from speech to text or to speech without requiring an intermediate transcription step or a translated transcript. Known as direct Speech Translation systems, these models have quickly progressed, and currently they can achieve state-of-the-art results comparable to those of cascade models (Ansari et al., 2020; Bentivogli et al., 2021). Nevertheless, the performance of both cascade and end-to-end architectures remains far from optimal compared to text translation systems, indicating that research in these areas is still ongoing. The recent growth of end-to-end models and the shift in the field towards using them has raised the need to understand their inner workings. One related task is source-target alignment, which involves analysing how models use the provided source to make predictions, and whether they follow common human intuition in this process. This alignment task has been widely explored in the context of text translation (Ghader and Monz, 2017; Ferrando et al., 2022). The task is commonly evaluated using Alignment Error Rate (Och and Ney, 2003), a metric that measures the differences between a gold-standard alignment and a hypothesized one.

Figure 1: Example of a S2ST alignment in the Speech Gold Alignment dataset.

For this aim, human-labeled alignment datasets have been published in the context of text translation, such as (Vilar et al., 2006) for translation between English and German. In speech-related fields, little interpretability work regarding alignments has been done.
Some previous studies have focused on analysing the self-attention in the encoder of speech recognition, (Zhang et al., 2021; Shim et al., 2022) and speech translation (Alastruey et al., 2022) systems. However, these models' decoder, and consequently, its alignment capabilities, have yet to be explored, potentially due to the absence of suitable datasets and metrics for evaluating the task in this setting. In light of this, we introduce SpeechAlign framework1, which serves as a solution to the stated lack of resources. SpeechAlign is formed of two core components: a novel dataset and an evaluation framework founded on our proposed metrics. Footnote 1: Available on request. Link will be published upon conference acceptance. The dataset, named Speech Gold Alignment, is specifically created to evaluate alignment in S2TT and S2ST. This dataset is an extension of the text translation gold alignment dataset introduced by Vilar et al. (2006). To create it, we employ a Text-to-Speech (TTS) model to generate synthetic speech for the sentences in the dataset. The utilization of a TTS model offers a main advantage: apart from generating audio, it also provides timestamps denoting the beginning and end of each word. Annotating such timestamps would be very resource-intensive if using non-synthetic audios. Gathering the audios and the timestamps, we are able to build the Speech Gold Alignment dataset, formed of samples such as the one shown in Figure 1. In terms of metrics, we adapt the AER for the speech domain, introducing two novel metrics: Speech Alignment Error Rate (SAER) and Time-weighted SAER (TW-SAER). These metrics quantify the alignment error models have, with the key distinction that the former treats each word equally, and the latter factors in word durations. To sum up, the main contribution of this paper is the release of SpeechAlign, a framework designed to simplify metrics computation using our dataset. Additionally, we employ this framework to benchmark various open-source models. Through these efforts, we aim to contribute to the exploration of alignments in the domain of speech translation. ## 2 Related Work Over the past decades, considerable interest has been directed toward comprehending the alignment capabilities of text translation models. In this trajectory, both datasets and metrics have been developed to evaluate this task. Numerous authors have published alignment datasets (Lambert et al., 2005; Vilar et al., 2006; Kruijff-Korbayova et al., 2006; Graca et al., 2008; Macken, 2010; Holmqvist and Ahrenberg, 2011) for the evaluation of alignments in translations in languages such as English, Spanish, German, Dutch, and Czech. In this work, we hone in on the dataset introduced by Vilar et al. (2006)2 for text translation between English and German. This dataset comprises 508 paired sentences in the specified languages, along with precise information regarding the alignment of words between these two languages. These sentences are sourced directly from the EuroParl dataset (Koehn, 2005), which contains transcripts and translations of speeches delivered in the European Parliament. We opt for this dataset due to its coverage of the English-German translation pair, which is extensively studied in the field of speech translation (Agarwal et al., 2023). Moreover, our work requires the generation of speech utterances for the sentences in the dataset. 
Focusing on well-resourced languages like English and German provides greater confidence in the quality of the speech generated by the TTS model. Footnote 2: [https://www-i6.informatik.rwth-aachen.de/goldAlignment/](https://www-i6.informatik.rwth-aachen.de/goldAlignment/) As for metrics, a singular measure has predominantly been used to evaluate alignments. Alignment Error Rate (AER), introduced by Och and Ney (2003), is a measure of alignment quality between a source sentence and its translation. It is calculated as the ratio of alignment errors, where an alignment error occurs when a unit in the translated sentence is not aligned with the correct unit in the source. The score is computed based on a manually annotated gold-standard alignment of a parallel corpus. Given a reference alignment, consisting of a set \(S\) of "Sure", unambiguous alignment points, and a set \(P\) of "Possible", ambiguous alignment points, with \(S\subseteq P\), the AER of an alignment \(A\) is defined to be: \[\text{AER}(S,P;A)=1-\frac{|A\cap S|+|A\cap P|}{|A|+|S|} \tag{1}\] ## 3 Speech Gold Alignment Dataset The dataset we introduce, _Speech Gold Alignment_, extends the bilingual text alignment dataset presented by Vilar et al. (2006) by adding speech utterances to each pair of English and German sentences. Additionally, for each audio file, the dataset contains a dictionary defining all the words in each sentence and their corresponding start and end time stamps, what gives us a text-to-audio mapping. Once we have this, we incorporate the gold alignment correspondences from the original dataset to obtain the alignments between speech segments. This augmented dataset, can either be considered as two distinct datasets supporting S2TT from English to German (and vice versa), or as a unified S2ST dataset by combining both S2TT alignments. In Figure 2 we show the three different modalities of the dataset. Figure 1(a) shows a sample from the original dataset presented by Vilar et al. (2006), and Figures 1(b) and 1(c) show our extension for S2TT and S2ST settings respectively. As part of the SpeechAlign framework, we publish a pipeline to prepare the dataset, following the steps that are described in section 3.1. ### Methodology The construction of this dataset can be divided into two primary steps. First, we employed the VITS model Kim et al. (2021) to generate synthetic speech for all the sentences, as detailed in section 3.1.1. Subsequently, we aligned each word to its corresponding time interval in the produced speech signal, as explained in section 3.1.2. While integrating the datasets, we found specific cases where alignment was not immediate or direct. We address these complexities in section 3.1.3. #### 3.1.1 Speech Generation To produce synthetic speech for the sentences in the Gold Alignment dataset, we employed the VITS model. This TTS system uses a phonemizer to obtain the phonemes corresponding to the input sequence. Then, to generate the speech output, the model uses a stochastic duration predictor that assigns a duration to each phoneme. The chosen duration is randomly sampled from each phoneme's durations distribution. By doing this, the model is able to synthesize natural speech and can generate different speech utterances for the same input text. To build our dataset, we generated separate synthetic versions for the 508 sentences in both English and German. 
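As a point of reference for how such audio can be produced, the snippet below is a minimal sketch using the Coqui TTS toolkit (which, as noted just below, the authors rely on); the model identifiers and example sentences are assumptions rather than the authors' exact configuration, and the per-phoneme durations later used for alignment come from VITS internals that this high-level API does not expose.

```python
# Minimal sketch of generating the synthetic utterances with Coqui TTS VITS checkpoints.
# The model identifiers below are assumptions and may differ from the ones actually used.
from TTS.api import TTS

sentences_en = ["I am in favour of the proposal."]   # hypothetical example sentence
sentences_de = ["Ich bin für den Vorschlag."]        # hypothetical example sentence

tts_en = TTS(model_name="tts_models/en/ljspeech/vits")   # LJ Speech voice (assumed id)
tts_de = TTS(model_name="tts_models/de/thorsten/vits")   # Thorsten voice (assumed id)

for i, text in enumerate(sentences_en):
    tts_en.tts_to_file(text=text, file_path=f"en_{i:04d}.wav")
for i, text in enumerate(sentences_de):
    tts_de.tts_to_file(text=text, file_path=f"de_{i:04d}.wav")
```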
In English, we utilized LJ Speech (Ito and Johnson, 2017), while for the German language, the Thorsten voice (Muller and Kreutz, 2021) was employed. This task was done using the VITS model available through the Coqui toolkit (Eren and The Coqui TTS Team, 2021). Figure 2: Original alignment by Vilar et al. (2006) and our extensions. #### 3.1.2 Word-Audio Matching The Gold Alignment dataset constitutes a word-to-word alignment reference, to which we add our newly generated audios produced by VITS. Nevertheless, to achieve an alignment between speech intervals, we first need to establish a linkage between audio segments and words in the original dataset. The approach followed to accomplish this starts by acquiring intermediate representations from VITS. Specifically, we gather the output generated by the phonemizer, which is the phonemized sentence, as well as the output of the duration predictor. This predictor creates a dictionary containing the duration, in integer units, of each phoneme. With this information in hand, we perform a two-step matching procedure that ultimately yields the mapping from audio to words, via the intermediate representation of phonemes: 1. Phoneme-Word Matching. In this stage, we focus on aligning the phonemes with the words present in the original dataset. 2. Phoneme-Audio Matching. In this phase, we establish a time mapping between the audio and its corresponding sequence of phonemes. Figure 3 provides a visual representation of the sequential steps followed for deriving both the waveform and the alignment between words and audio, which constitute the dataset we present. With the basic steps outlined, we now dive deeper into the details of each of the phases to obtain the audio-word matching. Phoneme-Word Matching. The goal of this phase is to achieve a mapping between the sequence of phonemes extracted from the phonemizer and the sequence of words in the original dataset (Vilar et al., 2006). To do so, we use blank spaces as delimiters for words in the phoneme sequence, and we monotonically map them with the sequence of words. It is important to note that the original dataset underwent tokenization through Moses, introducing some challenges in this process that are outlined in detail in Section 3.1.3. Phoneme-Audio Matching. After obtaining the correspondence between words and phonemes, we now need to map phonemes to the audio. Ideally, the entire audio must be partitioned into separate time intervals, each containing the pronunciation of a single word. To accomplish this, it is necessary to compute the overall duration of each individual word. To compute the total duration of each word, we take the output of the duration predictor and sum the durations in units of all the phonemes belonging to the same word. As previously stated, blank spaces are used as delimiters between words in the phoneme transcription. Consequently, the duration assigned to a blank space is equally distributed and added to the neighboring words, both preceding and succeeding the blank space. The same approach applies to units attributed to punctuation marks, which we decided not to include in our alignment dataset given that they cannot be found explicitly in speech utterances. Next, our objective is to establish the corresponding word duration in seconds based on their duration in units. To achieve this, we divide the total length of the audio by the aggregate duration in units of all the phonemes in the sentence.
This computation establishes a correlation between VITS duration units and the equivalent time in seconds. Using this derived relationship, we convert the word durations from units to seconds and find the start and end times for each word. #### 3.1.3 Special Cases During the two phases of the dataset construction we encounter some special challenges that need special handling. Phonemic Fusion. In the majority of instances, phonemized words align with the original text words, primarily through sentence segmentation using blank spaces. Nevertheless, in certain cases the phonemizer merges adjacent words during phonetic transcription, creating what we name _phonemic fusion_. This occurrence is primarily observed in short English words such as prepositions, articles, and pronouns, which are pronounced seamlessly without pauses. Table 1 provides examples of this phenomenon. In such instances, we first determine the combined duration of these merged words and subsequently distribute the total time among the constituent words proportionally to their length. While this approach may not be entirely precise, we believe the approximation is sufficient, given its applicability to very short words and few cases. Phonemic Fragmentation. Furthermore, we have encountered a phenomenon contrasting with _phonemic fusion_. The phonemizer carries out a normalization process on the text before phonemization. Occasionally, this normalization procedure results in the conversion of single words into multiple words, a phenomenon we refer to as _Phonemic Fragmentation_. This behavior is particularly noticeable in cases involving numbers, percentages, years, and similar elements. To address this matter, we aggregate the durations of all the split words and attribute the total duration to the original solitary word. Possessives ('s). The original Gold Alignment dataset does not provide alignments for natural sentences, but for sentences tokenized with Moses. However, VITS works on natural text, and this mismatch created some difficulties during the matching process. This is the case of words such as "Parliament's", which is considered a single word when dealing with VITS ("Parliament's" \(\rightarrow\) /pɑːləmənts/), but is actually two different words with independent alignments in the original dataset ("Parliament 's"), due to Moses tokenization. This is a case of _Phonemic Fusion_ (and is addressed as such). However, unlike the previously shown cases caused by the phonemizer, this fusion stems from the tokenization in the original dataset. Percent Sign (%). A similar behaviour arises when dealing with percent signs. These signs appear alongside numbers in natural text ("34%"), but in the Gold Alignment dataset, they are separate tokens due to Moses tokenization ("34 %"). However, as illustrated in Table 1, percents are a case of _Phonemic Fragmentation_, with the phonemizer breaking this construction into multiple phonemized words ("34%" \(\rightarrow\) /θɜːtifɔː pəsɛnt/). In these particular cases of _Phonemic Fragmentation_, we aim to separate the expanded phonetic text into two segments: the first containing the phonemized words associated with the number (/θɜːtifɔː/), and the last containing the phonemized word corresponding to the percent (/pəsɛnt/). In these instances, the merging of time intervals encompasses all words except the final one in the expansion.
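Before turning to the last special case, the core units-to-seconds conversion from Section 3.1.2 can be sketched as follows; function and variable names are illustrative, and the proportional splitting of fused words is omitted for brevity.

```python
# A minimal sketch, under the paper's assumptions, of turning per-word durations in
# VITS integer units into word-level start/end times in seconds.
from typing import List, Tuple

def word_timestamps(word_durations_units: List[float],
                    audio_length_seconds: float) -> List[Tuple[float, float]]:
    """word_durations_units: total duration units per word, with blank-space units
    already split between the neighbouring words as described in Section 3.1.2."""
    total_units = sum(word_durations_units)
    seconds_per_unit = audio_length_seconds / total_units   # units-to-seconds ratio
    timestamps, cursor = [], 0.0
    for units in word_durations_units:
        start = cursor
        cursor = start + units * seconds_per_unit
        timestamps.append((start, cursor))
    return timestamps

# Example: three words covering a 1.2 s utterance.
print(word_timestamps([10, 25, 13], 1.2))
```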
German Phonemizer. In our utilization of the German phonemizer, we have noticed that it occasionally produces inaccurate phonetic transcriptions for certain single input words. These inaccuracies tend to occur with special symbols (e.g., "%", "/"), years (e.g., "1996"), acronyms (e.g., "EU", "Nr"), compound nouns (e.g., "EU-Staate"), among others. To rectify these inaccuracies in phonetic transcription, we have replaced specific words in the input sentences with their expanded and "spoken" format ("EU" \(\rightarrow\) "E U", "1996" \(\rightarrow\) "nineteen ninety six"). This adjustment assists the phonemizer in producing more accurate transcriptions. Table 1: Examples of _phonemic fusion_ (e.g., the words "I am" merged into a single phonemized unit) and _phonemic fragmentation_ (e.g., "34%" expanded into several phonemized words). ### Dataset Quality Assessment Within this section, we aim to examine the quality of the synthetic audio produced by VITS. We conduct an assessment comparing the EuroParl ST (Iranzo-Sanchez et al., 2020) test set and our own synthesized data, which is also derived from a subset of the EuroParl dataset (Koehn, 2005). With this aim, we evaluate the performance of the Whisper Tiny model (Radford et al., 2022) on the task of speech recognition on these two datasets. This strategy allows us to understand the implications of using synthetic audio without the influence of content domain. We choose to perform this evaluation in the setting of speech recognition, and not in translation, because of the simplicity of the former due to its monotonic alignment process. This ensures that the overall model performance and the complexity of the task are less likely to influence the results. We have opted to conduct this evaluation using the smallest Whisper model. Our hypothesis behind this choice is that if no issues arise in the smallest model, they are unlikely to manifest in larger models. In Table 2, we present the Word Error Rate (WER) results obtained on both datasets, and we observe that our synthetic audios result in a lower WER than the standard EuroParl ST dataset. Consequently, we can conclude that the synthesized data does not pose a problem and appears to be easily handled by the models, possibly due to the clarity of the generated audios compared to European Parliament recordings. ## 4 Proposed Evaluation The objective of this section is to define an evaluation procedure and metrics that are able to assess models' ability to establish source-target alignments. To analyse this capability, our focus is on the contribution maps generated by the models. These maps indicate the relationship between source and target tokens, such that the contribution of a source token to a target one is always a non-negative value, and that the sum of contributions from all source tokens to a target token must equal 1 (i.e.
attention weights or more advanced interpretability methods (Kobayashi et al., 2021; Ferrando et al., 2022)). Then, to measure the alignments, we build new metrics around the intuition of the Alignment Error Rate (AER) score, initially introduced by Och and Ney (2003) and defined in Section 2. However, extracting the alignments from the contribution map and adapting AER for speech sequences is not a straightforward process. \begin{table} \begin{tabular}{l c c} \hline \hline **Dataset** & **Language** & **WER** \\ \hline EuroParl ST & En & 29.7 \\ Speech Gold Alignment & En & 3.9 \\ EuroParl ST & De & 31.0 \\ Speech Gold Alignment & De & 23.1 \\ \hline \hline \end{tabular} \end{table} Table 2: Quality assessment results. ### Preprocessing The metric of AER assesses the error rate between a hypothesis and a target alignment. Hence, to compute this score, we need a gold alignment dataset. In most text alignment datasets, such as the one we extend (Vilar et al., 2006), these alignments are provided as word-to-word relations. Consequently, the hypothesis alignment needs to be structured in a word-to-word format too. However, in speech settings, the system input tokens correspond to frames of a spectrogram or ranges of a waveform. As a consequence, the contribution maps usually capture token-to-token interactions, where each token is a speech frame. Thus, a conversion process is necessary to derive word-to-word alignments from a token-to-token contribution map, so that the alignment can be evaluated with an AER score. Nonetheless, a similar challenge is faced in the setting of text translation, where tokens are often sub-words rather than complete words. In this case, the conversion from tokens to words involves a two-step process. When dealing with sub-words in the source, their contributions are aggregated by summing them together. This approach is rooted in the principle that the combined contribution of two tokens to a target is the sum of their individual contributions. Handling sub-words in the target sequence proves to be more complex. Each token has a distinct distribution of contributions across the source. To address this, the average of each sub-word distribution is computed. By following this approach, we are able to effectively establish the alignment between words despite the presence of sub-word units. In the case of speech, we propose to employ a similar approach when aggregating tokens from each word, in order to obtain a word-to-word contributions plot. Leveraging our dataset, which provides details on the correspondence between segments of input/output audio and individual words, we define which tokens correspond to each word under the assumption of a linear relation between tokens and audio, disregarding any overlap. Doing this allows us to employ a similar approach to the one used for merging sub-words, but this time, we apply it to the set of all tokens linked to a single word. Given a contributions map \(C\) where \(c_{i,j}\) is the contribution of the \(j\)-th source token to the \(i\)-th prediction, the resulting word-to-word contributions map is computed using our dataset as shown in Algorithm 1. In Figures 4 and 5 we show an example of a contributions map before and after the preprocessing. Following this conversion and before computing the alignment scores, we derive the hard alignments. This is accomplished by aligning each target word with the source word that has the highest contribution.
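A compact sketch of this preprocessing, in the spirit of Algorithm 1 but not a verbatim reimplementation, is shown below; the token-to-word maps are assumed to be derived from the dataset's word timestamps under the linear token-audio relation described above.

```python
# Source tokens belonging to a word are summed, target tokens of a word are averaged,
# and hard alignments are taken as the argmax over source words.
import numpy as np

def word_contributions(C, src_word_of_token, tgt_word_of_token):
    """C[i, j]: contribution of the j-th source token to the i-th prediction.
    src_word_of_token / tgt_word_of_token map each token index to a word index."""
    n_src_words = max(src_word_of_token) + 1
    n_tgt_words = max(tgt_word_of_token) + 1
    W = np.zeros((n_tgt_words, n_src_words))
    counts = np.zeros(n_tgt_words)
    for i in range(C.shape[0]):
        for j in range(C.shape[1]):
            W[tgt_word_of_token[i], src_word_of_token[j]] += C[i, j]  # sum over source tokens
        counts[tgt_word_of_token[i]] += 1
    return W / counts[:, None]                                        # average over target tokens

def hard_alignments(W):
    # each target word is aligned to the source word with the highest contribution
    return {(i, int(np.argmax(W[i]))) for i in range(W.shape[0])}
```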
### Speech Alignment Error Rate Once we have the hard alignments, we define the Speech Alignment Error Rate (SAER) in the same manner the AER is defined. This is, given a set set \(S\) of unambiguous alignments, a set \(P\) of ambiguous alignment and a set \(A\) of hypothesis alignment: \[SAER=1-\frac{|A\cap S|+|A\cap P|}{|A|+|S|} \tag{2}\] However, it's important to note that SAER doesn't fully address a key aspect in the speech setting - the noticeable disparity in the number of different tokens that form each word, which corresponds to audio durations. Instead, when computing SAER each word contributes equally to the final score, regardless of its duration. ### Time-Weighted SAER To address the limitations of the SAER, we define the Time-weighted SAER, a metric that accounts for the variability in word durations. To do so, we introduce a new element - the incorporation of a weight for each alignment. These weights are defined using the area of each alignment, as shown in Figure 6, and defined as follows: \[w_{i,j}=\begin{cases}s_{j}\cdot s_{i}&\text{if S2ST}\\ s_{j}\cdot 1&\text{if S2TT}\end{cases} \tag{3}\] where \(w_{i,j}\) is the weight of an alignment between the \(j\)-th source word and the \(i\)-th target word, and \(s_{i}\), \(s_{j}\) is the duration in seconds of these words respectively. Therefore, given a set set \(S\) of unambiguous alignments and a set \(P\) of ambiguous alignment, the TW-SAER is defined as the sum of areas of the alignments in \(A\cap S\) plus the sum of Figure 4: Example of the token-to-token attention weights of a S2TT decoder layer on Whisper Small. Figure 5: Example in Figure 4 after the preprocessing, obtaining a word-to-word contributions map. areas in \(A\cap P\), divided by the total alignment area of \(A\) and \(S\): \[TW-SAER=1-\frac{\sum_{i,j\in A\cap S}w_{i,j}+\sum_{i,j\in A\cap P}w_{i,j}}{\sum_{ i,j\in A}w_{i,j}+\sum_{i,j\in S}w_{i,j}} \tag{4}\] By including the weights we account for the temporal duration of each word within the audio, refining our evaluation process. Note that SAER and TW-SAER are equivalent if \(w_{i,j}=1\)\(\forall i,j\). ## 5 SpeechAlign The main contribution of this paper is the release of SpeechAlign, an accessible open-source framework that encompasses the Speech Gold Alignment dataset presented in section 3 and the SAER and TW-SAER metrics defined in section 4. This tool seamlessly handles raw token-to-token alignment maps and computes both proposed alignment error rates. This framework is versatile, and can be used in attention weights or more sophisticated contribution maps. The pipeline starts by taking the given contribution maps and converts them into word-to-word equivalents. To achieve this, the alignment dataset is utilized to account for varying word durations. Following this conversion, we derive the hard alignments. The outcome is a definitive set of hypothesis alignments, that are used to compute both SAER and TW-SAER scores. To enhance the comprehension of the process, we include a notebook for visualization of the alignments and contributions maps. This tool can be used to visualize token-to-token and the extracted word-to-word representations, and also the obtained hard alignments. By publishing this framework, we aim to facilitate the use of our dataset by other researchers. Finally, using SpeechAlign, we benchmark some S2TT models. For simplicity, we decide to analyze alignments based on models' cross-attention weights. 
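For reference while reading the benchmark scores, the two metrics of Equations 2-4 can be sketched as follows, assuming the hard alignments and per-word durations (in seconds) have already been extracted; helper names and the s2tt flag are illustrative.

```python
# Minimal sketch of SAER and TW-SAER over alignment pairs (i, j) = (target word, source word).
def saer(A: set, S: set, P: set) -> float:
    return 1.0 - (len(A & S) + len(A & P)) / (len(A) + len(S))

def tw_saer(A: set, S: set, P: set,
            src_dur: dict, tgt_dur: dict, s2tt: bool = True) -> float:
    # weight of an alignment: source duration alone for S2TT, or source times target
    # duration for S2ST, following Equation 3
    def w(i, j):
        return src_dur[j] * (1.0 if s2tt else tgt_dur[i])
    num = sum(w(i, j) for (i, j) in A & S) + sum(w(i, j) for (i, j) in A & P)
    den = sum(w(i, j) for (i, j) in A) + sum(w(i, j) for (i, j) in S)
    return 1.0 - num / den
```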
We decide not benchmark the S2ST task due to the current lack of open-source models, being the recently published SeamlessM4T (SeamlessCommunication et al., 2023) the only one available as of now. This model comprises two consecutive Transformers, each containing its own decoder. Consequently, it presents significant challenges in terms of obtaining a contributions map based on attention weights, and developing further interpretability methods lies beyond scope of this paper. Models BenchmarkingTable 3 presents an evaluation of various sizes of the Whisper model (Radford et al., 2022) on De-En S2TT. Each model's performance is assessed through the BLEU score on our test set, and the SAER and TW-SAER. The latter are computed on the attention weights of each decoder layer, and in Table 3 we report the best obtained score. This analysis uncovers a correlation between the performance metrics and the alignment score. This correlation is also observed to align with the model's size. Intriguingly, an outlier appears in the case of the Large V2 model, which fails to deliver proper alignment despite achieving the highest performance metrics. ## 6 Conclusion In conclusion, this paper introduces SpeechAlign, a framework to evaluate alignment in speech models. Figure 6: TW-SAER weights. \begin{table} \begin{tabular}{l c|c c c} \hline \hline **Size** & **Parameters** & **SAER**(\%, \(\downarrow\)) & **TW-SAER**(\%,\(\downarrow\)) & **BLEU**(\(\uparrow\)) \\ \hline Tiny & 39M & 75.3 & 70.1 & 3.6 \\ Base & 74M & 72.9 & 67.8 & 8.4 \\ Small & 244M & 70.7 & 65.7 & 15.4 \\ Medium & 769M & 69.5 & 64.1 & 20.2 \\ Large & 1.55B & 68.9 & 63.5 & 22.1 \\ Large-v2 & 1.55B & 77.0 & 70.6 & 22.3 \\ \hline \hline \end{tabular} \end{table} Table 3: Benchmarking of different sizes of Whisper models on De-En S2TT. SpeechAlign has two main components. Firstly, we've created the Speech Gold Alignment dataset, being the first of its kind and created to address the lack of suitable evaluation data for the task. Secondly, we have presented the two first evaluation metrics for speech alignment, Speech Alignment Error Rate (SAER) and Time-weighted Speech Alignment Error Rate (TW-SAER), to assess how well speech models perform on the alignment task. SpeechAlign provides an accessible way to evaluate speech models, and we have used it to benchmark various open-source models. ## Limitations Our study only focuses on high resource languages, German and English, stemming from the limitation of the original Gold Alignment dataset [23]. Additionally, it's worth noting the potential challenges that might arise due to the limited domain covered by the dataset, centered on speeches from the European Parliament. This might cause some issues when evaluating models trained with a limited amount of data and from other domains. However, this case would be a limitation of the model itself, denoting its restricted capabilities. Using data from European Parliament's speeches is a common practice in the Machine Translation field [13, 14].
2306.17771
Precision Anti-Cancer Drug Selection via Neural Ranking
Personalized cancer treatment requires a thorough understanding of complex interactions between drugs and cancer cell lines in varying genetic and molecular contexts. To address this, high-throughput screening has been used to generate large-scale drug response data, facilitating data-driven computational models. Such models can capture complex drug-cell line interactions across various contexts in a fully data-driven manner. However, accurately prioritizing the most sensitive drugs for each cell line still remains a significant challenge. To address this, we developed neural ranking approaches that leverage large-scale drug response data across multiple cell lines from diverse cancer types. Unlike existing approaches that primarily utilize regression and classification techniques for drug response prediction, we formulated the objective of drug selection and prioritization as a drug ranking problem. In this work, we proposed two neural listwise ranking methods that learn latent representations of drugs and cell lines, and then use those representations to score drugs in each cell line via a learnable scoring function. Specifically, we developed a neural listwise ranking method, List-One, on top of the existing method ListNet. Additionally, we proposed a novel listwise ranking method, List-All, that focuses on all the sensitive drugs instead of the top sensitive drug, unlike List-One. Our results demonstrate that List-All outperforms the best baseline with significant improvements of as much as 8.6% in hit@20 across 50% test cell lines. Furthermore, our analyses suggest that the learned latent spaces from our proposed methods demonstrate informative clustering structures and capture relevant underlying biological features. Moreover, our comprehensive empirical evaluation provides a thorough and objective comparison of the performance of different methods (including our proposed ones).
Vishal Dey, Xia Ning
2023-06-30T16:23:25Z
http://arxiv.org/abs/2306.17771v1
# Precision Anti-Cancer Drug Selection via Neural Ranking ###### Abstract. Personalized cancer treatment requires a thorough understanding of complex interactions between drugs and cancer cell lines in varying genetic and molecular contexts. To address this, high-throughput screening has been used to generate large-scale drug response data, facilitating data-driven computational models. Such models can capture complex drug-cell line interactions across various contexts in a fully data-driven manner. However, accurately prioritizing the most sensitive drugs for each cell line still remains a significant challenge. To address this, we developed neural ranking approaches that leverage large-scale drug response data across multiple cell lines from diverse cancer types. Unlike existing approaches that primarily utilize regression and classification techniques for drug response prediction, we formulated the objective of drug selection and prioritization as a drug ranking problem. In this work, we proposed two neural listwise ranking methods that learn latent representations of drugs and cell lines, and then use those representations to score drugs in each cell line via a learnable scoring function. Specifically, we developed a neural listwise ranking method, List-One, on top of the existing method ListNet. Additionally, we proposed a novel listwise ranking method, List-All, that focuses on all the sensitive drugs instead of the top sensitive drug, unlike List-One. Our results demonstrate that List-All outperforms the best baseline with significant improvements of as much as 8.6% in hit@20 across 50% test cell lines. Furthermore, our analyses suggest that the learned latent spaces from our proposed methods demonstrate informative clustering structures and capture relevant underlying biological features. Moreover, our comprehensive empirical evaluation provides a thorough and objective comparison of the performance of different methods (including our proposed ones). CCS Concepts: Computing methodologies → Learning to rank; Neural networks; Applied computing → Bioinformatics. ACM Reference Format: Vishal Dey and Xia Ning. 2023. Precision Anti-Cancer Drug Selection via Neural Ranking. In Proceedings of the 22nd International Workshop on Data Mining in Bioinformatics (BIOKDD '23). ACM, New York, NY, USA, 10 pages. [https://doi.org/XXXXXXXXXXXX](https://doi.org/XXXXXXXXXXXX) The rest of the manuscript is organized as follows. Section 2 presents the related work on computational methods in anti-cancer drug response prediction and drug prioritization. Section 3 presents the proposed listwise methods and Section 4 describes the datasets, baseline methods, experimental settings and evaluation metrics. Section 5 presents an overall comparison of all methods in one experimental setting across both datasets and detailed analyses of embeddings. Section 6 concludes the paper. ## 2. Related Works ### Computational Methods in Drug Response Prediction With an increasing abundance of large-scale drug response data and advanced high-throughput screening(Han et al., 2017), data-driven computational approaches have been developed for drug response prediction in cancer cell lines. Following pan-cancer studies(Moh et al., 2017), these approaches have been extended beyond single-drug or single-cell line modeling to jointly leverage the drug response data across multiple drugs and cell lines. This enables such approaches to capture the interactions among multiple drugs, among multiple cell lines, and between drugs and cell lines.
Typically, these approaches either focus on regression(Rendle et al., 2015) which estimates the drug responses for a given cell line, or on classification(Dong et al., 2016) which predicts whether a drug is sensitive or not in a given cell line. These approaches employ various machine learning techniques such as kernel methods(Krishnan et al., 2017), matrix factorization(Krishnan et al., 2017), and deep learning(Rendle et al., 2015; Wang et al., 2017). We refer the readers to a comprehensive survey(Han et al., 2017) for broader coverage of the existing literature in this area. In contrast to the most popular approaches toward drug response prediction, our work is more related to LeToR approaches since it naturally models drug selection and prioritization. ### LeToR methods in Drug Prioritization Unlike the aforementioned regression and classification methods, LeToR methods for drug prioritization are relatively under-explored (Han et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017). LeToR methods focus on learning to appropriately score the candidate drugs and to optimize different objectives so as to achieve accurate ranking. LeToR methods can be broadly categorized into three approaches: pointwise(Krishnan et al., 2017), pairwise(Krishnan et al., 2017) and listwise(Wang et al., 2017). In fact, the pointwise approach typically performs inferior to both pairwise and listwise approaches(Wang et al., 2017) since the ranking structure is not explicitly leveraged. One of the popular pairwise ranking approaches for drug prioritization, pLETORg(Krishnan et al., 2017), do not explicitly leverage auxiliary information such as molecular structures, which are known to be well correlated to activity(Krishnan et al., 2017), drug-likeliness(Dong et al., 2016), and other pharmacological properties(Krishnan et al., 2017). This may hinder such models to learn the above-mentioned structure-activity correlations, a key to many aspects in drug discovery(Han et al., 2017). In addition to pairwise approaches, listwise approaches have been utilized in recent works. Kernelized Rank Learning (KRL)(Krishnan et al., 2017) is a listwise LeToR method that optimizes an upper bound of the Normalized Discounted Cumulative Gain (NDCG@k), and learns to approximate the drug sensitivities via a kernelized linear regression. However, KRL notably underperforms pLETORg across multiple experimental settings as demonstrated by He et al(He et al., 2017). Another neural listwise ranking method developed by Prasse et al.(Prasse et al., 2017) optimizes a smooth approximation of NDCG@k. However, the experiments from this study are not adequately comprehensive and may not be directly comparable to other studies in the literature due to their usage of multi-omics profiles and customized definitions of ground-truth drug relevance scores, which deviates from the standard approach in other studies. Additionally, the proposed method was not evaluated against state-of-the-art pointwise or pairwise approaches. Furthermore, the experiments were limited to one experimental setting ('Cell cold-start'), which may restrict the generalizability of their findings. ## 3. Methods Table 1 presents the key notations used in the manuscript. Drugs are indexed by \(i\) and \(j\) in the set of drugs \(\mathcal{D}\), and cell lines are denoted by \(c\in\mathcal{C}\). In this manuscript, \(\mathcal{D}_{c}^{+}\) / \(\mathcal{D}_{c}^{-}\) indicate the set of sensitive and insensitive drugs, respectively, in the cell line \(c\). 
For example, \(d_{i}\in\mathcal{D}_{c}^{+}\) denotes a sensitive drug \(i\) in cell line \(c\); \(d_{j}\in\mathcal{D}_{c}^{-}\) denotes an insensitive drug \(j\) in cell line \(c\). In this section, we proposed two listwise learning-to-rank methods (List-One and List-All) for anti-cancer drug selection and prioritization. We first introduce the overall architecture of our methods in Section 3.1, and then discuss each component in detail in Sections 3.2 and 3.3. We discuss each of our proposed methods and their ranking optimization process in subsequent sections. ### Overall Framework In order to select and prioritize sensitive drugs in each cell line, our proposed LeToR methods optimize different objectives that inducing the correct ranking structure among the top-sensitive or all involved drugs in each cell line. Figure 1 presents an overall scheme of our proposed methods. To induce the correct ranking structure, each method learns to accurately score drugs in each cell line using the learned cell line and drug embeddings. The embeddings and scoring function are learned in a fully data-driven manner from the drug response data. Intuitively, the cell line latent space embeds the genomic and response information of cell lines, while the drug latent space embeds the structural and sensitivity information for drugs. The cell line embeddings are initially learned from the gene expression profiles using a pre-trained auto-encoder model GeneAE (Section 3.2). The drug embeddings are learned from the molecular fingerprints (Section 3.3). During training, the cell line and drug embeddings are then used and updated to correctly score drugs against each cell line using a learnable scoring function (Section 3.4.1). Note that List-One and List-All utilize the same scoring function, however, optimize separate ranking objectives. \begin{table} \begin{tabular}{l l} \hline \hline Notation & Definition \\ \hline \(\mathcal{C}\) & Set of cell lines \\ \(\mathcal{D}\) & Set of drugs \\ \(d_{i}\) & Drug \(i\) \\ \(\mathcal{D}_{c}^{+}\) / \(\mathcal{D}_{c}^{-}\) & Set of sensitive/insensitive drugs for a cell line \(c\) \\ \(\mathbf{u}_{c}\) / \(\mathbf{v}_{d}\) & embedding for cell line \(c\)/drug \(d\) \\ \hline \hline \end{tabular} \end{table} Table 1. Notations and Definitions ### Pretraining for Cell Line Embeddings In order to learn rich informative cell line embeddings, we pretrain a stacked auto-encoder framework GeneAE, similar to existing gene expression auto-encoder frameworks(Wang et al., 2019; Wang et al., 2019). Figure 2 presents the architecture of GeneAE. GeneAE embeds the rich genomic information into a latent space, and learns the complex and non-linear interactions among genes. Specifically, GeneAE leverages the gene expression profile \(\mathbf{x}_{c}\) to learn a low-dimensional embedding \(\mathbf{u}_{c}\) via the encoder GeneE, followed by reconstruction of the expression profile from \(\mathbf{u}_{c}\) via the decoder GeneD. These embeddings out of the pretrained GeneE are used to score drugs in each cell line during the downstream ranking (Section 3.4.1). Such embeddings can be utilized as transferable representations of cell lines that can potentially enable better generalizability of downstream drug scoring/ranking models. In summary, these embeddings can potentially improve the performance of drug ranking models by leveraging the shared biological features across cell lines. 
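A minimal sketch of the two encoders in Figure 1 is given below; the hidden sizes and embedding dimension are illustrative assumptions, and the details of pretraining GeneE and of the scoring function follow in the next subsections.

```python
# Two simple fully connected encoders standing in for GeneE and DrugE; layer sizes are
# assumptions, not the authors' configuration.
import torch.nn as nn

class GeneE(nn.Module):
    """Cell line encoder: the encoder half of GeneAE, pretrained on gene expression
    reconstruction (Section 3.2) and finetuned during ranking optimization."""
    def __init__(self, n_genes: int, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_genes, 512), nn.ReLU(), nn.Linear(512, dim))
    def forward(self, x):
        return self.net(x)       # u_c: cell line embedding

class DrugE(nn.Module):
    """Drug encoder: a fully connected network over Morgan count fingerprints (Section 3.3)."""
    def __init__(self, n_bits: int = 2048, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_bits, 512), nn.ReLU(), nn.Linear(512, dim))
    def forward(self, fp):
        return self.net(fp)      # v_d: drug embedding
```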
As any other pretraining module(Wang et al., 2019), GeneAE has two training stages: pretraining, and finetuning. During pretraining, the model parameters are learned via back-propagation by minimizing the reconstruction error (here, MSE) from the actual and reconstructed gene expression data as, \[\min\nolimits_{\theta_{\text{GeneE}},\theta_{\text{GeneD}}}\frac{1}{|C|}\sum_{c \in\mathcal{C}}||\mathbf{x}_{c}-\mathbf{\widetilde{x}}_{c}||_{2}^{2}, \tag{1}\] where \(\mathbf{x}_{c}\) denotes the input gene expression of cell line \(c\), \(\mathbf{\widetilde{x}}_{c}=\texttt{GeneD}(\texttt{GeneE}(\mathbf{x}_{c}))\) denotes the corresponding reconstructed gene expression, \(\theta_{\text{GeneE}}\) and \(\theta_{\text{GeneD}}\) denote the learnable parameters of GeneE and GeneD, respectively, and \(C\) denotes the set of input cell lines. During pretraining, parameters of both GeneE and GeneD modules are learned, and the learned parameters of GeneE are transferred and finetuned during the optimization for downstream ranking tasks. The finetuning of pretrained GeneE adapts the output embeddings toward the specific downstream ranking. ### Embedding Drugs from Fingerprints In this work, molecular fingerprints(Wang et al., 2019) are leveraged to learn informative drug embeddings via a low-dimensional projection. These fingerprints are discrete feature vectors representing the presence of molecular substructures in a drug given a fixed vocabulary. Typically, such fingerprints need to be high-dimensional to sufficiently capture all relevant structural information. On the other hand, the drug embeddings can selectively encode relevant structural information (specific to the ranking task) as non-linear functions of input fingerprints. These embeddings are further used to score drugs in each cell line, and are learned during ranking optimization given Figure 1. Overall Framework. The cell line embeddings \(\mathbf{u}_{c}\) and drug embeddings \(\mathbf{v}_{d_{i}}\) are used to score the drugs \(d_{i}\in\mathcal{D}_{c}\). Each ranking method utilizes a different ranking objective and thus utilizes the scores \(f_{c}(d_{i})\) differently. The pretrained encoder GeneE is finetuned during ranking optimization. Figure 2. Pretraining framework. Given a gene expression \(\mathbf{x}_{c}\), GeneAE reconstructs it as \(\mathbf{\widetilde{x}}_{c}\) through the auto-encoder, meanwhile learning an embedding \(\mathbf{u}_{c}\) in the latent space. drug response data across multiple cell lines. Such learned embeddings enable accurate drug scoring since similar drugs in terms of structures and sensitivities across multiple cell lines obtain similar embeddings. To learn such embeddings of dimension \(M\) (\(M\) is a hyperparameter), we used a fully connected neural network DrugE. As inputs to DrugE, we used Morgan count fingerprints(Mikolov et al., 2015) with radius = 3 and 2,048 bits. While graph neural networks (GNNs)(Shi et al., 2017) have demonstrated promising empirical performance in molecular prediction tasks(Shi et al., 2017), we observed inferior ranking performance from the drug embeddings learned from GNNs compared to those learned from fingerprints, from preliminary experiments. This is possibly due to the limited number of unique drugs in our datasets. ### Listwise Ranking for Top-One Drug: List-One We adopted the standard ListNet(Chen et al., 2016) objective to develop a neural listwise ranking method, List-One. 
List-One considers the entire ranking structure at a time and focuses on accurately estimating the top-one probability of drugs. The top-one probability of a drug \(d\), denoted as \(p_{c}(d)\), is its probability of being ranked at the top given the scores of all involved drugs in the cell line \(c\). Formally, the predicted top-one probability denoted as \(p_{c}^{f}(d)\) is defined as follows: \[p_{c}^{f}(d)=\frac{\exp\left(f_{c}(d)\right)}{\sum_{d_{j}\in\mathcal{D}_{c}}\exp\left(f_{c}(d_{j})\right)}, \tag{2}\] where \(f_{c}(d)\) denotes the score of drug \(d\) in cell line \(c\) (which will be discussed later in Section 3.4.1). The top-one probabilities are optimized using the cross-entropy loss as follows: \[\min_{\Theta}-\sum_{c\in\mathcal{C}}\left[\sum_{d\in\mathcal{D}_{c}}p_{c}^{top}(d)\log(p_{c}^{f}(d))\right], \tag{3}\] where \(\Theta\) denotes the learnable parameters in the model; and \(p_{c}^{top}(d)\) denotes the ground-truth top-one probability of drug \(d\) in the cell line \(c\) according to the ground-truth drug responses. In this study, the drug responses are quantified with Area Under the dose-response Curve denoted as AUC, where smaller AUC values indicate higher drug sensitivities. \(p_{c}^{top}(d)\) is computed via Equation 2 by replacing \(f_{c}(d)\) with the negated AUCs (since lower AUCs indicate higher drug sensitivities). Minimizing the above loss reduces the discrepancy between the predicted and the ground-truth top-one probability distribution over drugs in each cell line. This results in an accurate estimation of the top-one probability, which enables an accurate selection of the most sensitive drug in each cell line. During the optimization, List-One (and List-All presented in the following section similarly) finetunes GeneE (Section 3.2) and learns DrugE (Section 3.3). This enables the cell line and drug embeddings out of GeneE and DrugE, respectively, to encode task-relevant information. #### 3.4.1. Drug Scoring In order to score the drug \(d\) in a cell line \(c\), we used a parameterized bilinear function denoted as \(f_{c}(d)\). The function \(f_{c}(d)\) is parameterized via a learnable weight matrix \(\mathbf{W}\in\mathbb{R}^{|\mathbf{u}_{c}|\times|\mathbf{v}_{d}|}\), and is applied over \(\mathbf{u}_{c}\in\mathbb{R}^{|\mathbf{u}_{c}|}\) and \(\mathbf{v}_{d}\in\mathbb{R}^{|\mathbf{v}_{d}|}\) as follows: \[f_{c}(d)=\mathbf{u}_{c}^{\intercal}\mathbf{W}\mathbf{v}_{d}, \tag{4}\] where \(\mathbf{W}\) is learned via backpropagation in an end-to-end manner during optimization. Intuitively, the learnable full-rank weight matrix \(\mathbf{W}\) in the bilinear scoring function can capture complex and relevant interactions between the two latent vectors. Once the scores for all drugs in a cell line \(c\) (i.e., \(\mathcal{D}_{c}\)) are obtained, the drugs \(\mathcal{D}_{c}\) are sorted based on such scores in descending order. The most sensitive drugs in \(c\) will have higher scores than the insensitive ones in \(c\). Note that ranking-based methods such as our proposed ones will achieve optimal ranking performance as long as these scores induce the correct ranking structure; the scores do not need to be exactly identical to the drug response scores (i.e., AUC values).
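Under the definitions above, a minimal sketch of the bilinear scoring (Equation 4) and the List-One objective (Equations 2-3) for a single cell line might look as follows; tensor names and shapes are illustrative.

```python
# Bilinear drug scoring and the List-One cross-entropy over top-one probabilities.
import torch
import torch.nn.functional as F

def bilinear_scores(u_c: torch.Tensor, V_d: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    """u_c: (dim_c,) cell line embedding; V_d: (n_drugs, dim_d) drug embeddings;
    W: (dim_c, dim_d) learnable weight matrix. Returns f_c(d) for every drug d."""
    return V_d @ (W.T @ u_c)                     # u_c^T W v_d, computed for all drugs at once

def list_one_loss(scores: torch.Tensor, auc: torch.Tensor) -> torch.Tensor:
    """Cross-entropy between ground-truth and predicted top-one probabilities.
    auc holds the ground-truth AUCs of the same drugs (lower = more sensitive)."""
    p_true = F.softmax(-auc, dim=0)              # ground truth from negated AUCs
    log_p_pred = F.log_softmax(scores, dim=0)    # predicted top-one log-probabilities
    return -(p_true * log_p_pred).sum()
```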
### Listwise Ranking for All Sensitive Drugs: List-All Since List-One focuses solely on the top-ranked drug, it may lead to suboptimal performance in terms of selecting all the sensitive drugs in each cell line, as demonstrated in our experiments (Section 5.1), To address this, we proposed a new listwise neural ranking method, List-All with an objective that can optimize the selection of all sensitive drugs in each cell line. List-All leverages the entire ranking structure at a time and follows a similar architecture to List-One. But unlike List-One, List-All estimates the probability of a drug being sensitive given the scores of all drugs, where higher scores induce higher probabilities. Since the estimated probability of each drug being sensitive is dependent on the scores of all other drugs in the list, List-All is a listwise ranking method. List-All aims to minimize the distance between such estimated score-induced probabilities and the ground-truth sensitivity labels across all drugs in each cell line. Specifically, List-All is trained by minimizing the following loss: \[\min_{\Theta}-\sum_{c\in\mathcal{C}}\left[\sum_{d\in\mathcal{D}_{c}}l_{c}(d) \log(s_{c}^{f}(d))\right], \tag{5}\] where \(\Theta\) denotes the learnable parameters in the model; \(l_{c}(d)\) is a binary sensitivity label indicating whether drug \(d\) is sensitive in cell line \(c\); and \(s_{c}^{f}(d)\) denotes the probability of drug \(d\) to be sensitive in the cell line \(c\). Formally, \(s_{c}^{f}(d)\) is computed from the predicted scores via the parameterized softmax as: \[s_{c}^{f}(d)=\frac{\exp\left(f_{c}(d)/\tau\right)}{\sum_{d_{j}\in\mathcal{D}_{c}} \exp\left(f_{c}(d)/\tau\right)}, \tag{6}\] where \(\tau\) is the temperature (a scaling factor \(>0\)) that controls the softness/sharpness of the score-induced probability distribution while maintaining the relative ranks. A lower scaling factor results in a sharper probability distribution with higher probabilities on very few drugs. Note that the scaling factor can also be applied similarly in Equation 2, however, we observed no notable performance difference empirically. For List-All, we fix \(\tau\) to 0.5. Note that the optimization objective (Equation 5) resembles the ListNet objective (Equation 3) in the sense that both aim to minimize the cross-entropy between two score-induced empirical probability distributions. ## 4. Materials In this section, we present the data sets and baselines used in Sections 4.1 and 4.2, respectively; the experimental setting in Section 4.3; and the evaluation metrics adopted to evaluate ranking performance in Section 4.4. ### Dataset We collected the drug response data set from the Cancer Therapeutic Response Portal version 2 (CTRP)1(Cran et al., 2017). We focused on this data set because it covers a large number of cell lines and drugs compared to other available data sets2. We utilized the Cancer Cell Line Encyclopedia version 22Q1 (CCLE)3(Cran et al., 2017) for the gene expression data. CCLE provides multi-omics data (genomic, transcriptomic and epigenonic) for more than 1,000 cancer cell lines. However, in this study, we only used the gene expression (transcriptonic) data following (Krishnan et al., 2017). The drug responses are measured using AUC sensitivity scores, with lower AUC indicating higher sensitivity of a drug in a cell line. For the drugs with missing responses in a cell line, the corresponding drug-cell line pairs were not included in the training of models. 
For the cell lines that could not be mapped to CCLE, those cell lines and their associated drug responses were excluded from our experiments. Since CTRP has more cell lines than other available datasets in the literature, it is an appropriate choice for evaluating computational methods in drug selection for new cell lines, which is the primary focus of this work. This work is motivated based on the belief that such a setup is more relevant to real-life scenarios where the goal is to suggest potential anti-cancer drugs for new patients. Footnote 1: [https://ctd2-data.nc.nih.gov/Public/Broad/CTRPv2](https://ctd2-data.nc.nih.gov/Public/Broad/CTRPv2)\(0\)2015_ctd2_ExpandedDataset/ (accessed on 01/20/22) Footnote 2: Due to space limitations, we present the results only on one dataset in this workshop paper. Additional results for other experimental settings and datasets will be published in a forthcoming full-paper version. ### Baselines We use two strong baseline methods: PLETORg(Krishnan et al., 2017) and DeepCDR(Krishnan et al., 2017). Unlike our proposed methods, PLETORg is a pairwise ranking approach that learns the drug and cell line embeddings by explicitly pushing sensitive drugs to the top of the ranking list and by further optimizing the ranking structure among the sensitives. Unlike PLETORg, our proposed methods leverage drug structural and gene expression information to learn more informative embeddings that may enable improved ranking performance. Additionally, different from PLETORg, our proposed methods utilize a learnable scoring function to capture the complex interactions between embeddings. While PLETORg explicitly enforces similarity regularization on cell line embeddings using the gene expression-based similarity of cell lines, our methods enforce such genomic similarity by embedding cell lines in the latent space via the pretrained GeneE. Different from our proposed methods and the baseline PLETORg, DeepCDR, one of the state-of-the-art regression models for anti-cancer drug response prediction, learns to estimate the exact response scores of every drug in each cell line. ### Experimental Setting According to the setting of He et al. (He et al., 2017), a percentile labeling scheme was used to label drugs as sensitive or insensitive. The sensitivity threshold for each cell line was determined as the top-5 percentile of its drug responses. In order to assess the ranking performance on new cell lines, we employed a leave-cell-lines-out (LCO) validation setting such that this setting resembles the real-world scenario when known drugs are investigated for their sensitivity or anti-cancer potential in new patients. We randomly split all the cell lines from each cancer type into five folds. In each run, we used the four folds from each cancer type for training and the other fold for testing. We use the cell lines from all the cancer types for 4 folds and their corresponding drug response data collectively for training. We use the cell lines in the other left-out fold as new (unseen) cell lines for model testing. This process was repeated five times with each fold serving as the test fold exactly once. For PLETORg, we follow the 'leave-one-out' setup(He et al., 2017) in that the cell line embeddings were learned only for the training cell lines, and the embeddings for the test cell lines were interpolated from the nearest neighboring training cell lines in the latent space. 
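A minimal sketch of the leave-cell-lines-out split described above is shown below; the data-frame layout and column name are assumptions made for illustration.

```python
# Cell lines are split into five folds within each cancer type, so that every fold
# serves as the unseen test set exactly once.
import pandas as pd
from sklearn.model_selection import KFold

def lco_folds(cell_lines: pd.DataFrame, n_splits: int = 5, seed: int = 0):
    """cell_lines: one row per cell line with a 'cancer_type' column (assumed name)."""
    folds = [[] for _ in range(n_splits)]
    for _, group in cell_lines.groupby("cancer_type"):
        kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
        for fold_id, (_, test_idx) in enumerate(kf.split(group)):
            folds[fold_id].extend(group.index[test_idx])
    return folds   # folds[k] holds the test cell lines of run k; the rest are for training
```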
For our proposed methods, we use the gene expression profiles of only the training cell lines to pretrain GeneAE. ### Evaluation Metrics In order to evaluate the ranking performance, we generated the true ranking list using the ground-truth AUC response values and the predicted ranking list using the estimated scores out of the models. We then compared the two ranking lists (or a portion of them) using popular evaluation metrics: average precision at \(K\) (AP@K), and average hit at \(K\) (AH@K), which are commonly used in Information Retrieval systems. Higher AP@K and AH@K indicate better drug selection where the top-ranked drugs in the predicted ranking list are sensitive. In addition to AP and AH, concordance index (CI) and concordance index among the sensitive drugs (sCI)(Krishnan et al., 2017) are also used to evaluate the overall quality of the predicted ranking structure among all drugs and sensitive drugs, respectively. Note that high CI and SCI values do not necessarily result in high AP or AH since the ranking structure can be well preserved without pushing the few sensitive drugs (which constitutes only 5% of the total drugs) to the very top. On the other hand, high AP/AH indicates the most sensitive drugs are ranked at the top, but this does not necessarily result in high CI/sCI. In this work, since we primarily focus on identifying the top-\(k\) most sensitive drugs in each cell line, we prioritize and emphasize the AP and AH metrics over the SCI and CI metrics when evaluating and interpreting our results. ## 5. Results ### Overall Comparison Table 3 shows that, overall, List-All consistently outperforms all other methods in most metrics. Specifically, List-All achieved \begin{table} \begin{tabular}{c|c|c c c c} \hline \hline Dataset & \(|C|\) & \(|\mathcal{D}|\) & \#AUCs & \#d/C & c/\(\mathcal{D}\) & m\% \\ \hline CTRP & 809 & 545 & 357,544 & 442 & 656 & 18.9 \\ \hline \hline \end{tabular} \end{table} Table 2. Dataset Overview the best AH@\(K\) scores with impressive results of 2.7142, 4.3119, 7.7577, 12.7728, and 18.3331 for \(K=3\), 5, 10, 20, and 40, respectively. Following List-All, the other listwise ranking method, List-One achieved the second-best performance in terms of AH@\(K\). Overall, List-All achieved better AH@10, AH@20 and AH@40 over List-One, whereas both methods achieved competitive hit rates up to \(K\leq 5\). This suggests that List-All is particularly effective in pushing almost all sensitive drugs to the top while List-One was able to push only a few most sensitive drugs. This is further reflected in the consistent improvements observed in AH and AP. Compared to List-One, List-All improved H@20, H@20, and H@40 for 33.9% (55), 44.6% (72) and 39.5% (64) of 162 new cell lines by 3.8%, 3.0% and 1.4%, respectively. Such superior performance of List-All over List-One can be attributed to the ability of List-All to accurately estimate the probability of drugs being sensitive in each cell line while List-One focuses solely on the most sensitive (i.e., top-ranked) drug while ignoring the other sensitive drugs. Furthermore, List-All outperformed the best baseline method, PLETORg, across all metrics. Moreover, compared to PLETORg, both List-All and List-One demonstrated significantly better or competitive performance in AH and AP. This implies that all our proposed methods can improve the ranking performance over PLETORg by explicitly leveraging auxiliary information such as gene expression profiles and molecular fingerprints. 
## 5. Results

### Overall Comparison

Table 3 shows that, overall, List-All consistently outperforms all other methods in most metrics. Specifically, List-All achieved the best AH@\(K\) scores with impressive results of 2.7142, 4.3119, 7.7577, 12.7728, and 18.3331 for \(K=3\), 5, 10, 20, and 40, respectively. Following List-All, the other listwise ranking method, List-One, achieved the second-best performance in terms of AH@\(K\). Overall, List-All achieved better AH@10, AH@20 and AH@40 over List-One, whereas both methods achieved competitive hit rates up to \(K\leq 5\). This suggests that List-All is particularly effective in pushing almost all sensitive drugs to the top, while List-One was able to push only a few most sensitive drugs. This is further reflected in the consistent improvements observed in AH and AP. Compared to List-One, List-All improved H@10, H@20, and H@40 for 33.9% (55), 44.6% (72) and 39.5% (64) of 162 new cell lines by 3.8%, 3.0% and 1.4%, respectively. Such superior performance of List-All over List-One can be attributed to the ability of List-All to accurately estimate the probability of drugs being sensitive in each cell line, while List-One focuses solely on the most sensitive (i.e., top-ranked) drug and ignores the other sensitive drugs. Furthermore, List-All outperformed the best baseline method, PLETORg, across all metrics. Moreover, compared to PLETORg, both List-All and List-One demonstrated significantly better or competitive performance in AH and AP. This implies that all our proposed methods can improve the ranking performance over PLETORg by explicitly leveraging auxiliary information such as gene expression profiles and molecular fingerprints.

Table 3. Overall ranking performance of the PLETORg, DeepCDR, List-One, and List-All variants on new cell lines, in terms of AP@\(K\) and AH@\(K\) for \(K\in\{1,3,5,10,20,40,60\}\).

Specifically, compared to PLETORg, the best-performing method, List-All, demonstrated statistically significant improvement in H@20, H@40 and H@60 for 50.6% (82), 53.0% (86) and 45.7% (74) of 162 new cell lines by 8.6%, 6.3% and 5.0%, respectively, while achieving marginally better hit rates for \(K<20\). Additionally, List-All achieved better AP@20 and AP@40 than PLETORg, improving P@20 and P@40 for 53.7% (87) and 58.0% (94) of 162 new cell lines by 1.1% and 3.0% on average, respectively. Such consistent and significant improvement across multiple AH and AP metrics on a large percentage of cell lines provides strong evidence that List-All clearly outperforms the best baseline method PLETORg in drug selection and prioritization. These results suggest that List-All can effectively leverage the drug structure information and the entire ranking structure to learn richer latent representations while focusing on learning to select all the sensitive drugs in a cell line. The consistent sub-par performance of PLETORg compared to List-All could be due to the fact that PLETORg only focuses on the pairwise relative ordering without considering the overall ranking structure. Since there are significantly more insensitive drugs than sensitive ones in each cell line, such pairwise methods may struggle to preserve the ordering between pairs of sensitive and insensitive drugs, thereby leading to a sub-optimal selection of all sensitive drugs. Overall, all ranking-based methods outperformed the state-of-the-art regression model, DeepCDR, across all metrics. This indicates that learning to estimate the exact drug responses while obtaining a lower overall MSE does not necessarily guarantee accurate score estimation for the sensitive drugs, which constitute only 5% of all drugs in a cell line. This leads to sub-par performance of DeepCDR in terms of selecting and prioritizing the most sensitive drugs in a cell line.

### Study of Cell Line embeddings

We evaluated the quality of cell line embeddings based on their ability to capture the drug response profiles. In order to quantitatively evaluate this, we computed the pairwise similarities of cell lines in two different ways: 1) using the radial basis function (RBF) kernel on the learned cell line embeddings out of the best baseline, PLETORg, and the best method, List-All, denoted as simc-B and simc-M, respectively; and 2) using Spearman rank correlation on the ranked lists of drugs given their drug response profiles, denoted as simc-R. Intuitively, simc-B and simc-M are higher for pairs of cell lines that are close in their corresponding latent spaces. Meanwhile, simc-R is higher for pairs of cell lines if they share similar drug response profiles. Note that since every drug may not have recorded responses in each cell line, simc-R for a pair of cell lines \(p\) and \(q\) is computed from drug response data of the shared set of drugs, \(\mathcal{D}_{p}\cap\mathcal{D}_{q}\), whose responses are recorded for both cell lines. We hypothesize that the latent space captures the ranking structures across drugs, implying that the cell lines close in the latent space have similar drug response profiles (i.e., simc-B and simc-M are well correlated with simc-R). In order to validate our hypothesis, we calculated the Pearson correlations between simc-B and simc-R, denoted as corrc(B,R), and between simc-M and simc-R, denoted as corrc(M,R). We observed that the pairwise cell line similarities induced by their drug response profiles are better correlated with the similarities induced in the latent space learned by List-All compared to PLETORg (Pearson correlations corrc(M,R) vs. corrc(B,R): 0.162 vs. 0.151). This suggests that List-All learns informative cell line embeddings that can capture the overall ranking structure more effectively than PLETORg. Intuitively, List-All may benefit from using gene expression profiles to learn cell line embeddings, because cell lines with similar gene expression profiles typically demonstrate similar drug response profiles.
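A minimal sketch of the correlation analysis above, assuming a dictionary of cell line embeddings, a per-cell-line dictionary of drug AUC responses, and a list of cell line pairs; the RBF bandwidth is an illustrative choice rather than the value used in the paper.

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

def rbf_similarity(z_p, z_q, gamma=1.0):
    """sim_c-M style similarity: RBF kernel on two cell line embedding vectors."""
    return np.exp(-gamma * np.sum((z_p - z_q) ** 2))

def response_similarity(auc_p, auc_q):
    """sim_c-R style similarity: Spearman correlation over drugs measured in both cell lines."""
    shared = sorted(set(auc_p) & set(auc_q))
    return spearmanr([auc_p[d] for d in shared], [auc_q[d] for d in shared]).correlation

def latent_vs_response_correlation(embeddings, responses, pairs):
    """corr_c(M,R): Pearson correlation between latent-space and response-profile similarities."""
    sim_m = [rbf_similarity(embeddings[p], embeddings[q]) for p, q in pairs]
    sim_r = [response_similarity(responses[p], responses[q]) for p, q in pairs]
    return pearsonr(sim_m, sim_r)[0]
```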
Although PLETORg uses a weighted regularizer to constrain cell line embeddings based on their genomic similarity, it may not fully capture the complex relationships between the gene expressions of two cell lines. Explicitly learning embeddings from gene expressions allows List-All to extract more nuanced task-relevant relationships and a desired notion of similarity between cell lines.

We further evaluated the quality of their latent spaces in more detail with respect to clustering compactness and different cancer types. We applied a 20-way clustering using CLUTO(Liu et al., 2019) on the embeddings. Figure 3 presents the intra-cluster similarities, i.e., simc-R vs. simc-M averaged across cell lines in each cluster. We observed that compact clusters in the latent space contained cell lines with more similar drug-ranking structures. This is evident from the fitted line with a positive slope as shown in Figure 3. This further supports our hypothesis that the cell line latent space learned by List-All effectively captures the drug response profiles.

Figure 3. Scatter plot of intra-cluster similarities, computed as the average of simc-R and simc-M within each cluster.

Moreover, our clustering analysis can uncover unobvious or previously unknown similarities among cell lines from different cancer types, which may not be apparent from the observed drug response data. Figure 4(a) presents the average pairwise similarities with respect to the clustering distribution of cell lines grouped by different cancer types. The cell line similarities in this matrix are computed using the Jaccard coefficient on the normalized distribution of cell lines over the top-10 compact clusters. Intuitively, the color in each cell in Figure 4(a) indicates the degree of clustering overlap between cell lines from two different cancer types. In other words, if cell lines from two different cancer types are clustered together or distributed identically over multiple clusters, they will be more similar and will have darker shades in the respective cell in this figure. For example, the cell lines from kidney and ovary cancer types are often clustered together, as is the case for bladder and gastric cancer types. Figure 4(b) presents the average pairwise simc-R similarities among cell lines from different cancer types. Specifically, if cell lines from two different cancer types share similar drug ranking structures (i.e., high simc-R) on average, the corresponding pair of cancer types in this figure will have a darker shade. Overall, we observed a moderate correlation between the cluster overlap-based similarities (Figure 4(a)) and the drug ranking structure-based similarities (Figure 4(b)) with a Pearson correlation of 0.493 (\(p\)-value = 1e-33).

Figure 4. Comparison of two pairwise similarity matrices across different cancer types.

Additionally, Figure 4(a) can provide clinically significant and valuable insights by uncovering similarities between cell lines of different cancer types even though their drug-ranking structures or drug response profiles do not exhibit significant similarities. For instance, the liver cancer cell lines tend to be clustered with cell lines of different cancer types such as bone, brain, breast and lung cancers (Figure 4(a)) even though their drug ranking structures are apparently different (Figure 4(b)).
As a matter of fact, several studies((1; 25)) in the medical literature provide evidence of secondary liver cancers (i.e., metastatic liver tumors) spreading from primary tumors of breast and lung origins. Moreover, the most common mutations causing liver cancer (namely, TP53, CTNNB1, AXIN1, ARID1A, CDKN2A and CCND1 genes)((20) are commonly associated with multiple cancers. Similar observations can be made from the figure for prostate, bladder, sarcoma, and thyroid cancers, which also tend to be co-occurring according to reports in the literature (8; 11; 29). We further validate that the cell line latent space in List-All is capable of grouping cell lines based on their cancer types, and in fact, does so better than that in PLETORg. In order to validate this, we calculated \(k\)-nearest-neighbor accuracy of a cell line \(c\) in the latent space, denoted as \(\texttt{acc}_{\texttt{kNN}}(c)\), as follows: \[\texttt{acc}_{\texttt{kNN}}(c)=\frac{1}{k}\sum_{c^{\prime}\in\texttt{kNN}(c,k) }\mathbb{I}[\texttt{Cancer}(c^{\prime})=\texttt{Cancer}(c)], \tag{7}\] where \(\texttt{kNN}(c,k)\) returns \(k\)-nearest neighbors of a cell line \(c\) in the latent space, \(\mathbb{I}\) is the indicator function, and \(\texttt{Cancer}(c)\) returns the cancer type of cell line \(c\). Specifically, \(\texttt{acc}_{\texttt{kNN}}(c)\) is the expected fraction of \(k\) nearest neighboring cell lines that share the same cancer type as the cell line \(c\). We observed that the average \(\texttt{acc}_{\texttt{kNN}}\) over all unseen cell lines were higher in List-All compared to PLETORg (List-All vs. PLETORg: 0.364 vs. 0.162, 0.181 vs. 0.110, 0.126 vs 0.082) for \(k=1,3,5\), respectively. This suggests that the latent space in List-All is better clustered with respect to cancer types, even though the cancer type information is never fed to the model. This is likely due to the fact that List-All incorporates the gene expression profile during pretraining, and typically cell lines from the same cancer type tend to share similar gene expression profiles. In summary, not only the latent space in List-All clusters cell lines based on their drug ranking structures, it also maps cell lines from the same cancer types (i.e., of the same origin) to close proximity. These properties of the latent space can have potential clinical applications, such as determining cancer types for cell lines with unknown origin, and matching such cell lines with those having known cancer types for additional wet-lab experiments. ### Study of Drug embeddings We evaluated the quality of drug embeddings based on the extent to how well the latent space captures the sensitivity profiles of drugs across cell lines. The sensitivity profile of a drug was defined as a binary embedding, with a value of 1 indicating that the drug is sensitive in a cell line, and 0 if insensitive. To quantitatively evaluate the quality of the latent space, we calculated the pairwise similarities of drugs in two ways: 1) using the RBF kernel on the learned drug embeddings out of the best baseline method, PLETORg, and the best method, List-All, denoted as \(\texttt{sim}_{\texttt{q}}\)-B and \(\texttt{sim}_{\texttt{q}}\)-M, respectively; and, 2) using the Jaccard coefficient on the corresponding sensitivity profiles of drugs across cell lines, denoted as \(\texttt{sim}_{\texttt{q}}\)-S. 
Clearly, \(\texttt{sim}_{\texttt{q}}\)-B and \(\texttt{sim}_{\texttt{q}}\)-M are higher for drug pairs that are close in their corresponding latent spaces; \(\texttt{sim}_{\texttt{q}}\)-S is higher for drug pairs if they share similar sensitivity profiles across many cell lines. It is important to note that not all drugs have recorded responses in each cell line. Thus, \(\texttt{sim}_{\texttt{q}}\)-S for a pair of drugs \(p\) and \(q\) was calculated from the sensitivity profiles of the shared set of cell lines for which the responses were recorded for both drugs. We hypothesize that the learned latent space locally captures the sensitivity profiles of drugs. In other words, drugs that are close in the latent space have similar sensitivity profiles, leading to a correlation between the similarities in the latent space (i.e., \(\texttt{sim}_{\texttt{q}}\)-B and \(\texttt{sim}_{\texttt{q}}\)-M) and those computed from the sensitivity profiles (i.e., \(\texttt{sim}_{\texttt{q}}\)-S). To test this hypothesis, we computed the Pearson correlations among the three similarities as follows: 1) correlation between \(\texttt{sim}_{\texttt{q}}\)-B and \(\texttt{sim}_{\texttt{q}}\)-S, denoted as \(\texttt{corr}_{\texttt{q}}(\texttt{B},\texttt{S})\); and, 2) correlation between \(\texttt{sim}_{\texttt{q}}\)-M and \(\texttt{sim}_{\texttt{q}}\)-S, denoted as \(\texttt{corr}_{\texttt{q}}(\texttt{M},\texttt{S})\). We observed that the pairwise drug similarities induced by cell sensitivity profiles are better correlated to the pairwise similarities induced in the latent space learned by List-All compared to PLETORg (Pearson correlations \(\texttt{corr}_{\texttt{q}}(\texttt{M},\texttt{S})\) and \(\texttt{corr}_{\texttt{q}}(\texttt{B},\texttt{S})\): 0.906 vs. 0.352). This suggests that List-All can learn effective drug embeddings that can better capture the sensitivity profiles compared to PLETORg. This may be due to the fact that List-All leverages molecular fingerprints, unlike PLETORg, to learn drug embeddings that can encode structural information; and it is well known that structurally similar drugs tend to exhibit similar sensitivities. Figure 5. Scatter plot of intra-cluster similarities, computed as the average of \(\texttt{sim}_{\texttt{q}}\)-S and \(\texttt{sim}_{\texttt{q}}\)-M within each cluster. Figure 6. Comparison of two pairwise similarity matrices across different MoAs. Furthermore, we evaluated the quality of drug embeddings out of List-All via clustering. We applied a 10-way clustering (using CLUTO) on the drug embeddings. Figure 5 presents the intra-cluster similarities, simq-S vs. simq-M averaged across all drugs in each cluster. We observed that the compact clusters in the latent space contained drugs with similar sensitivity profiles. This further supports our previous hypothesis that the latent space for drugs effectively captures the sensitivity profiles. Furthermore, we studied the drug clusters in more detail and identified some qualities of the latent space with respect to uncovering the mechanism of action (MoA) of drugs. Figure 5(a) presents the average pairwise similarities among drugs grouped by different MoAs, where the similarities are computed using the Jaccard coefficient on the normalized distribution of drugs across clusters (Figure 5). In other words, if drugs with different MoAs are clustered together or co-occurs over multiple clusters, they are considered similar and have darker shades in the respective cells in this figure. 
Figure 5(b) presents the average pairwise simq-S similarities among drugs with different MoAs. Notably, we find that certain MoAs, such as MTOR, RARA, EGFR, and NAMP, exhibit similarities in their clustering patterns, suggesting potential shared characteristics or pathways. Similarly, we observe similarities among ABL1, BRDT, and FLT1, as well as BCL2 and AURK. These findings might indicate potential commonalities among drugs with different MoAs, even when their sensitivity profiles may not be similar (Figure 5(b)). We further examined clusters A and B depicted in Figure 5 to gain deeper insights into their characteristics. Despite both clusters being compact, they exhibit distinct characteristics in terms of their simq-S similarities. Figure 7 presents the sensitivity profiles of all the drugs in each cluster. Clearly, compared to cluster B, most drugs in cluster A share multiple cell lines in which they are sensitive, thus resulting in higher simq-S for cluster A than for cluster B. Specifically, from our preliminary fact-checking, we found that many drugs in cluster A (left-most drugs in Figure 6(a)) share some common pathways such as EGFR and mTOR. This suggests that these drugs possess broader effectiveness across multiple cancer types. In summary, our analysis reveals that the latent drug space learned by List-All captures similarities in sensitivity profiles, molecular structures, and pathway mechanisms. These findings highlight the potential for exploring synergistic effects among drugs with different MoAs and developing novel therapeutic strategies.

## 6. Conclusion

In this work, we developed two listwise neural ranking methods to select anti-cancer drugs out of all known drugs for new cell lines. Our experiments suggest that our listwise ranking method, List-All, can select all the sensitive drugs instead of the few topmost sensitive drugs. Moreover, our experimental comparison with strong ranking and regression baselines demonstrated the efficacy of formulating drug selection as a LeToR problem. Notably, our method, List-All, demonstrated significant improvements over the baseline PLETORg in average hit rates across a large proportion of cell lines. Additionally, by leveraging deep networks and pre-training techniques, our methods can learn informative embeddings. Our analyses of such learned embeddings revealed commonalities among cell lines and among drugs from different cancer types and MoAs, respectively. Overall, our work represents a step forward in the development of robust and effective methods for precision anti-cancer drug selection. Future work may explore leveraging 3D molecular structures, multiple modalities, or pretrained chemical foundation models to further enhance ranking performance.

## 7. Code Availability

The processed data and code are publicly available at [https://github.com/ninglab/DrugRanker](https://github.com/ninglab/DrugRanker). All the software required to execute the code is freely available.
2309.11591
Continuous Levels of Detail for Light Field Networks
Recently, several approaches have emerged for generating neural representations with multiple levels of detail (LODs). LODs can improve the rendering by using lower resolutions and smaller model sizes when appropriate. However, existing methods generally focus on a few discrete LODs which suffer from aliasing and flicker artifacts as details are changed and limit their granularity for adapting to resource limitations. In this paper, we propose a method to encode light field networks with continuous LODs, allowing for finely tuned adaptations to rendering conditions. Our training procedure uses summed-area table filtering allowing efficient and continuous filtering at various LODs. Furthermore, we use saliency-based importance sampling which enables our light field networks to distribute their capacity, particularly limited at lower LODs, towards representing the details viewers are most likely to focus on. Incorporating continuous LODs into neural representations enables progressive streaming of neural representations, decreasing the latency and resource utilization for rendering.
David Li, Brandon Y. Feng, Amitabh Varshney
2023-09-20T19:02:20Z
http://arxiv.org/abs/2309.11591v1
# Continuous Levels of Detail for Light Field Networks ###### Abstract Recently, several approaches have emerged for generating neural representations with multiple levels of detail (LODs). LODs can improve the rendering by using lower resolutions and smaller model sizes when appropriate. However, existing methods generally focus on a few discrete LODs which suffer from aliasing and flicker artifacts as details are changed and limit their granularity for adapting to resource limitations. In this paper, we propose a method to encode light field networks with continuous LODs, allowing for finely tuned adaptations to rendering conditions. Our training procedure uses summed-area table filtering allowing efficient and continuous filtering at various LODs. Furthermore, we use saliency-based importance sampling which enables our light field networks to distribute their capacity, particularly limited at lower LODs, towards representing the details viewers are most likely to focus on. Incorporating continuous LODs into neural representations enables progressive streaming of neural representations, decreasing the latency and resource utilization for rendering.

## 1 Introduction

In the past few years, implicit neural representations [29, 31] have become a popular technique in computer graphics and vision for representing high-dimensional data such as 3D shapes with signed distance fields and 3D scenes captured from multi-view cameras. Light Field Networks (LFN) [40] are able to represent 3D scenes with support for real-time rendering as each pixel of a rendered image only requires a single evaluation through the neural network. In computer graphics, levels of detail (LODs) are commonly used to optimize the rendering process by reducing resource utilization for smaller distant objects in a scene. LODs prioritize resources to improve the overall rendering performance. In streaming scenarios, LODs can prioritize and reduce network bandwidth usage. While LODs for implicit neural representations are beginning to be explored [6, 7, 22, 24, 28], most existing work focuses on offering a few discrete LODs which have three drawbacks for streaming scenarios. First, with only a few LODs, switching between them can result in flicker or popping effects as details are changed.

## 2 Related Work

Implicit neural representations are also referred to as coordinate networks or neural fields. Among these representations, neural radiance fields (NeRFs) and light field networks (LFNs) are both able to represent colored 3D scenes with view-dependent appearance effects. Neural radiance fields (NeRFs) [29] employ differentiable volume rendering to encode a 3D scene into a multi-layer perceptron (MLP) neural network.
By learning the density and color of the scene and using a positional encoding, NeRF can perform high-quality view synthesis, rendering the scene from arbitrary camera positions, while maintaining a very compact representation. However, the original NeRF implementation has many drawbacks, such as slow rendering times, which has limited its practicality. With an incredible amount of interest in neural rendering, many follow-up works have been proposed to improve NeRFs with better rendering performance [9, 35, 46], better quality [1], generalizability [17], and deformations [32, 33, 34]. Additionally, feature grid methods [30, 46] enable learning scenes in seconds and rendering in real-time. Importance sampling [48] can achieve faster learning with fewer training rays.

**Light Field Networks.** Light Field Networks (LFNs) [4, 5, 12, 27, 40] encode light fields [16, 23] by directly learning the 4D variant of the plenoptic function for a scene. Specifically, LFNs directly predict the emitted color for a ray, which eliminates the need for volume rendering, making light fields much faster to render compared to other neural fields. Earlier work in light field networks focuses on forward-facing scenes using the common two-plane parameterization for light fields. SIGNET [12, 13] uses Gegenbauer polynomials to encode light field images and videos. NeuLF [27] proposes adding a depth branch to encode light fields from a sparser set of images. Plücker coordinates have been used [15, 40] to represent 360-degree light fields.

### Levels of Detail

Several methods have been proposed for neural representations with multiple levels of detail. NGLOD [41] encodes signed distance functions into a multi-resolution octree of feature vectors. VQAD [42] adds vector quantization with a feature codebook and presents results on NeRFs. BACON [28] encodes LODs with different Fourier spectra for images and radiance fields. PINs [22] develop a progressive Fourier feature encoding to improve reconstruction and provide progressive LODs. MINER [36] trains neural networks to learn regions within each scale of a Laplacian pyramid representation. Streamable Neural Fields [7] propose growing neural networks to represent increasing spectral, spatial, or temporal sizes. Progressive Multi-Scale Light Field Networks [24] train a light field network to encode light fields at multiple resolutions. To generate arbitrary intermediate LODs, existing methods blend outputs across discrete LODs. With only a few LODs, the performance does not scale smoothly since the next discrete LOD must be computed entirely. Our method offers continuous LODs with hundreds of performance levels allowing for finer adaptation to resource limitations.

## 3 Method

Our method primarily builds upon _Light Field Networks_ (LFNs) [40]. Specifically, we represent rays \(\mathbf{r}\) in Plücker coordinates \((\mathbf{r}_{d},\mathbf{r}_{o}\times\mathbf{r}_{d})\) which are input to a multi-layer perceptron (MLP) neural network without any positional encoding. The MLP directly predicts RGBA color values without any volume rendering or other accumulation. Each light field network is trained to overfit a single static scene.

Figure 2: LFNs directly predict the RGB color for each ray in a single inference using Plücker coordinates, avoiding the dozens to hundreds of inferences required by NeRFs.
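A minimal PyTorch-style sketch of this single-evaluation rendering: a ray is mapped to 6D Plücker coordinates and one MLP pass yields RGBA. The layer sizes follow the experimental setup reported later (nine hidden layers with LayerNorm and ReLU, maximum width 512), but the class/function names, the normalization of the ray direction, and the exact LayerNorm/ReLU ordering are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def plucker(ray_o, ray_d):
    """Map ray origins/directions of shape (N, 3) to 6D Plücker coordinates (d, o x d)."""
    d = F.normalize(ray_d, dim=-1)
    return torch.cat([d, torch.cross(ray_o, d, dim=-1)], dim=-1)

class TinyLFN(nn.Module):
    """Illustrative LFN: Plücker ray -> MLP -> RGBA, one evaluation per pixel."""
    def __init__(self, width=512, depth=10):
        super().__init__()
        layers, in_dim = [], 6
        for _ in range(depth - 1):
            layers += [nn.Linear(in_dim, width), nn.LayerNorm(width), nn.ReLU()]
            in_dim = width
        layers += [nn.Linear(in_dim, 4)]   # RGBA output, no volume rendering
        self.net = nn.Sequential(*layers)

    def forward(self, ray_o, ray_d):
        return self.net(plucker(ray_o, ray_d))
```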
### Arbitrary-scale Arbitrary-position Sampling with Summed Area Tables

In order to reduce aliasing and flickering artifacts when rendering at smaller resolutions, e.g. when an object is far away from the user, lower levels of detail need to be processed with filtering to the appropriate resolution. In prior work, multi-scale LFNs [24] are trained on images resized to \(1/2\), \(1/4\), and \(1/8\) scale using area downsampling. During training, rays are sampled from the full-resolution image while colors are sampled from lower-resolution images using bilinear sampling. While training on lower-resolution light fields yields multi-scale light field networks, the bilinear subsampling of the light field may not provide accurate filtered colors for intermediate positions. As shown in Figure 3, colors for higher-resolution rays get averaged over a larger area when performing bilinear subsampling in between low-resolution pixels. Another method for generating multi-scale light fields is to apply a filter at full resolution to get a spatially accurate anti-aliased sample for each pixel location. Naively precomputing and caching full-resolution copies of each light field image at each scale would significantly increase memory usage. Computing the average pixel color for each sampled ray at training time would require additional computation. Summed area tables [8, 26] can be used to efficiently sample pixels at arbitrary scales and positions, allowing us to sample from filtered versions of the training image without caching multiple copies. Sampling from a summed area table is a constant time operation, giving us an average over any axis-aligned rectangular region with only four samples. With additional samples, summed-area tables can also be used to apply higher-order polynomial (e.g. cubic) filters [18, 19] or Gaussian filters [20] for even better anti-aliasing, though we only use box filtering in our implementation.

Figure 3: An illustration of discrete and summed-area table sampling. (a) Sampling from a discrete resolution requires linear interpolation from a downsampled image to the target scale and position. (b) Summed area tables allow us to sample at both arbitrary scales and positions without significant additional memory or compute.
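A minimal NumPy sketch of box-filtered sampling from a summed-area table as described above, using four lookups per sample. The nearest-integer box bounds and the relation between filter radius and target scale are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def summed_area_table(img):
    """Inclusive 2D prefix sums over an (H, W, C) image."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sample(sat, cy, cx, radius):
    """Mean color over the axis-aligned box around (cy, cx), clipped to the image,
    computed from four summed-area table lookups."""
    h, w = sat.shape[:2]
    y0 = max(int(cy - radius), 0) - 1       # exclusive top corner
    x0 = max(int(cx - radius), 0) - 1       # exclusive left corner
    y1 = min(int(cy + radius), h - 1)       # inclusive bottom corner
    x1 = min(int(cx + radius), w - 1)       # inclusive right corner
    def at(y, x):
        return sat[y, x] if (y >= 0 and x >= 0) else 0.0
    total = at(y1, x1) - at(y0, x1) - at(y1, x0) + at(y0, x0)
    return total / ((y1 - y0) * (x1 - x0))

# Usage: a smaller target scale implies a larger filter radius (e.g. ~0.5 / scale pixels).
img = np.random.rand(64, 64, 3).astype(np.float32)
sat = summed_area_table(img)
color = box_sample(sat, cy=10.0, cx=20.0, radius=4.0)
```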
### Continuous Levels of Detail

While previous neural field methods offer static levels of detail corresponding to fixed scales [6, 24, 42] or fixed spectral frequency bands [7, 28], our goal is to generate a finer progression with continuous levels of detail. Continuous levels of detail enable smoother transitions and more precise adaptation to resolution and resource requirements. Following existing work [7, 24, 47], we encode levels of detail using different widths of a single multi-layer perceptron neural network. Unlike Mip-NeRF [1, 2], this enables optimized performance with smaller neural networks at lower levels of detail. However, for continuous levels of detail, we propose two changes. First, we map the desired level of detail to every available width to extend a few levels of detail to hundreds of levels of detail as shown in Figure 4(a). Second, we propose neuron masking which fades in new neurons to enable true continuous quality adjustments.

Figure 4: Illustrations of our method to achieve continuous levels of detail.

**LOD to Scale Mapping.** Li _et al_. [24] train multi-scale LFNs which use width factors \(1/4\), \(2/4\), \(3/4\), and \(4/4\) (\(128\), \(256\), \(384\), \(512\) widths) to encode \(1/8\), \(1/4\), \(1/2\), and \(1/1\) scale light fields respectively. To extend this to arbitrary widths, we formulate the following equations which describe the correspondence between network width factor \(w\) and light field scale \(s\):

\[s = 2^{4w-4} \tag{1}\]
\[w = \tfrac{1}{4}\left(\log_{2}(s)+4\right) \tag{2}\]

By using the above equations, we can assign a unique scale to each width sub-network in our multi-scale light field network. Since this is a one-to-one invertible mapping, we can also compute the ideal level of detail to use for rendering at any arbitrary resolution. In our experiments, we use a minimum width of 25% of nodes corresponding to a scale of \(1/8\) to ensure a reasonable minimum quality and training image size. As an example, for a network with 512-width hidden layers, the lowest level of detail uses only 128 neurons of each hidden layer while the highest uses 512.

**Neuron Masking.** Since neural networks have discrete widths, it is necessary to map continuous levels of detail to discrete widths. Hence, we propose to use neuron masking to provide true continuous levels of detail with discrete-sized neural networks. As weights corresponding to each new width become available, we propose to apply alpha-blending on neurons corresponding to the width. This alpha-blending enables features from existing neurons to continuously transition, representing any intermediate level of detail between the discrete widths. Given feature \(\mathbf{f}\) and fractional LOD \(\alpha=l-\lfloor l\rfloor\), the new feature \(\mathbf{f}^{\prime}\) with neuron masking is the element-wise product:

\[\mathbf{f}^{\prime}=(1,...,1,\alpha)^{\top}\odot\mathbf{f} \tag{3}\]

### Saliency-based Importance Sampling

With continuous LODs representing light fields at various scales, the capacity of the LFN is constrained at lower LODs. Hence, details such as facial features may only resolve at higher levels of detail. To maximize the apparent fidelity, the capacity of the network should be distributed towards the most salient regions, _i.e._ the areas where viewers are most likely to focus. We propose to use saliency-based importance sampling which focuses training on salient regions of the light field. For all foreground pixels, we assign a base sampling weight \(\lambda_{f}\) and add a weight of \(\lambda_{s}*s\) based on the pixel saliency \(s\). Specifically, for a given foreground pixel \(x\) in a training image with saliency \(s\), we sample from the probability density:

\[p(x)=\lambda_{f}+\lambda_{s}*s \tag{4}\]

In our experiments, we use \((\lambda_{f},\lambda_{s})=(0.4,0.6)\) which yields reasonable results. At each iteration, we sample 67% of rays in each batch from foreground pixels using the above density. The remaining 33% of rays are uniformly sampled from background pixels.

## 4 Experiments

We conduct several experiments to evaluate whether our light field networks with continuous LODs overcome the problems with discrete LODs. We also conduct quality and performance evaluations to determine the compute and bandwidth overhead associated with continuous LODs.

### Experimental Setup

We conduct our experiments using five light field datasets. Scenes are captured using 240 cameras with \(40\times 6\) layout around the scene and a \(4032\times 3040\) resolution per camera. Each dataset includes camera parameters extracted using COLMAP [37, 38] and is processed with background matting. Of the 240 images, we use 216 for training, 12 for validation, and 12 for testing.
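A minimal sketch of the saliency-weighted ray sampling of Eq. (4) in the method section above, with hypothetical array names (a flattened per-pixel saliency map and foreground mask for one training image); the 67%/33% foreground/background split and \((\lambda_{f},\lambda_{s})=(0.4,0.6)\) follow the text.

```python
import numpy as np

def sample_training_rays(saliency, foreground, batch=8192,
                         lam_f=0.4, lam_s=0.6, fg_fraction=0.67):
    """Pick pixel indices for one batch: saliency-weighted foreground rays
    plus uniformly sampled background rays."""
    fg_idx = np.flatnonzero(foreground)
    bg_idx = np.flatnonzero(~foreground)
    p = lam_f + lam_s * saliency[fg_idx]     # Eq. (4), unnormalized
    p /= p.sum()
    n_fg = int(round(batch * fg_fraction))
    fg_rays = np.random.choice(fg_idx, size=n_fg, p=p, replace=True)
    bg_rays = np.random.choice(bg_idx, size=batch - n_fg, replace=True)
    return np.concatenate([fg_rays, bg_rays])
```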
We generate saliency maps using the mit1003 pretrained network1 of Kroner _et al_. [21]. Footnote 1: From [https://github.com/alexanderkroner/saliency](https://github.com/alexanderkroner/saliency) For our model, we use an MLP with nine hidden layers and one output layer. Each hidden layer uses LayerNorm and ReLU. We use a minimum width of 128 and a maximum width of 512 for variable-size layers. Our models are trained using a squared L2 loss for the RGBA color with 8192 rays per batch. In all of our experiments, we train using the Adam optimizer with the learning rate set to 0.001 and exponentially decayed by \(\gamma=0.98\) after each epoch. We train for 100 epochs. Each of our models is trained using a single NVIDIA RTX 2080 Ti GPU. Our PyTorch implementation and processed datasets are available at [https://augmentariumlab.github.io/continuous-lfn/](https://augmentariumlab.github.io/continuous-lfn/). ### Ablation Experiments Our ablation experiments evaluate how each aspect of our method affects the final rendered quality. First, we replace the discrete resolution sampling in discrete-scale light field networks [24] with our summed area table sampling. Next, we add continuous LODs training which is enabled by arbitrary-scale filtering with summed-area tables. Finally, we compare the prior two setups with our full method which also includes saliency-based importance sampling. ### Transitions across LODs With continuous LODs, our method allows smooth transitions across LODs as additional bytes are streamed over the network or as the viewer approaches the subject. To quantitatively evaluate the smoothness of the transitions, we use the reference-based temporal flicker metric of Winkler _et al_. [45]. This flicker metric first computes the difference \(d\) between the processed images and reference images for two consecutive frames. Next, a difference image \(c=d_{n}-d_{n-1}\) is computed across consecutive frames. The 2D discrete Fourier transform of the image \(c\) is computed and values are summed based on the radial frequency spectrum into low and high-frequency sums: \(s_{L}\) and \(s_{H}\). Finally, the flicker metric is computed by adding these together: Flicker \(=s_{L}+s_{H}\). We compare against three discrete-scale baselines with 4, 8, and 16 levels of detail, with 8 and 16 LODs trained using summed-area table sampling. In our continuous LOD case, we render views at the highest LOD corresponding to each discrete width (i.e. LOD 1.0, 2.0,..., 385.0), using the static ground truth view as the reference frames. Flicker values are computed for each LOD using the transition from the next lower LOD and then averaged across all test views. Our flicker results are shown in Figure 6(b). With only four LODs, the discrete-scale LFN method has three transitions, each with large model deltas (up to 3.5 MB) and high flicker values. Additional levels of detail reduce the model delta sizes and the flicker values with our continuous LOD method minimizing the model delta sizes and the flicker values. With our method, the LOD can be transitioned in small (\(\leq\) 32 KB) gradual steps. Quantitative PSNR and SSIM results are shown in Table 1. First, we see that adding summed-area table filtering to discrete-scale light field networks with four scales results in slightly improved PSNR and SSIM results while enabling arbitrary-scale sampling. Training a continuous LOD network impacts the performance at the original four LODs but allows us to have continuous LODs. 
Adding importance sampling allows us to focus on salient regions without significantly impacting the quantitative results.

Table 1: Quantitative Training Ablation Results at 1/8, 1/4, 1/2, and 1/1 scales. Each scale is evaluated at its corresponding LOD.

Figure 7: Plots showing the effects of transitioning across LODs. Transitioning with discrete LODs leads to larger network traffic spikes and more flickering.

Qualitative results of our saliency-based importance sampling ablation are shown in Figure 5. We see that details along faces appear at earlier LODs when using saliency for importance sampling. All of these details resolve at the highest LODs with and without using importance sampling.

### Rendering Performance

We evaluate the rendering performance by rendering training views across each LOD. For our rendering benchmarks, we use half-precision inference and skip empty rays with the auxiliary network which evaluates ray occupancy. Rendering performance results across the LODs are shown in Figure 8(b). We observe that as the LOD increases according to the width of the neural network, rendering times increase as well. When rendering from a discrete-scale light field network with only four LODs, the user or application would need to select either the next higher or lower LOD, compromising on either the performance or the quality. With continuous LODs, software incorporating our light field networks would be able to gradually increase or decrease the LOD to maintain a better balance between performance and quality. In cases where the ideal model size is not known, continuous LODs allow dynamic adjusting of the LOD to satisfy a target frame rate. In our PyTorch implementation, we observe that LODs with odd model widths have a slower render time than LODs with even model widths. LODs with model widths that are a multiple of eight perform slightly faster than other even model widths.

## 5 Discussion

By requiring light field networks to output reasonable results at each possible hidden layer width and incorporating neuron masking, we can achieve continuous LODs. However, this places additional constraints on the network as it needs to produce additional outputs. In our experiments, we observe slightly worse PSNR and SSIM results at the specific LODs corresponding to the \(1/8\) and \(1/4\) scales compared to the discrete-scale LFN which is trained with only four LODs. This is expected due to the additional constraints and less supervision at those specific LODs. The goal of our importance sampling procedure is to improve the quality of the salient regions of the light field rather than to maximize quantitative results.

Figure 8: Plots showing our quantitative evaluation results. With continuous LODs, the LOD can be dynamically adjusted to maximize the quality based on available resources.

Light field networks require additional cameras compared to neural radiance fields due to the lack of multi-view consistency prior provided by volume rendering. Hence, training light field networks requires additional cameras or regularization [14] compared to NeRF methods. Furthermore, light field networks do not use positional encoding [43] and thus cannot represent high-frequency details as faithfully as NeRF methods. As the primary goal of our work is to enable highly granular rendering trade-offs with more levels of detail, we leave these limitations to future work.

## 6 Conclusion

In this paper, we introduce continuous levels of detail for light field networks using three techniques.
First, we introduce summed area table sampling to sample colors from arbitrary scales of an image without generating multiple versions of each training image in a light field. Second, we achieve continuous LODs by combining arbitrary-width networks with neuron masking. Third, we train using saliency-based importance sampling to help details in the salient regions of the light field resolve at earlier LODs. With our method for continuous LODs, we hope to make light field networks more practical for 6DoF desktop and virtual reality applications [10, 11, 25]. ## Acknowledgments We would like to thank Jon Heagerty, Sida Li, and Barbara Brown for developing our light field datasets as well as the anonymous reviewers for the valuable comments on the manuscript. This work has been supported in part by the NSF Grants 18-23321, 21-37229, and 22-35050 and the State of Maryland's MPower initiative. Any opinions, findings, conclusions, or recommendations expressed in this article are those of the authors and do not necessarily reflect the views of the research sponsors. ## References * [1] Jonathan T. Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, and Pratul P. Srinivasan. Mip-NeRF: A multiscale representation for anti-aliasing neural radiance fields, 2021. * [2] Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, and Peter Hedman. Mip-NeRF 360: Unbounded anti-aliased neural radiance fields. In _2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 5460-5469, 2022. doi: 10.1109/CVPR52688.2022.00539. * [3] Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, and Peter Hedman. Zip-NeRF: Anti-aliased grid-based neural radiance fields. _ICCV_, 2023. * [4] Junli Cao, Huan Wang, Pavlo Chemerys, Vladislav Shakhrai, Ju Hu, Yun Fu, Denys Makoviichuk, Sergey Tulyakov, and Jian Ren. Real-time neural light field on mobile devices, 2022. URL [https://arxiv.org/abs/2212.08057](https://arxiv.org/abs/2212.08057). * Chandramouli et al. [2021] Paramanand Chandramouli, Hendrik Sommerhoff, and Andreas Kolb. Light field implicit representation for flexible resolution reconstruction, 2021. URL [https://arxiv.org/abs/2112.00185](https://arxiv.org/abs/2112.00185). * Chen et al. [2021] Zhang Chen, Yinda Zhang, Kyle Genova, Sean Fanello, Sofien Bouaziz, Christian Hane, Ruofei Du, Cem Keskin, Thomas Funkhouser, and Danhang Tang. Multiresolution deep implicit functions for 3D shape representation. In _2021 IEEE/CVF International Conference on Computer Vision (ICCV)_, pages 13067-13076, 2021. doi: 10.1109/ICCV48922.2021.01284. * ECCV 2022_, pages 595-612, Cham, 2022. Springer Nature Switzerland. ISBN 978-3-031-20044-1. * Crow [1984] Franklin C. Crow. Summed-area tables for texture mapping. _SIGGRAPH Comput. Graph._, 18(3):207-212, jan 1984. ISSN 0097-8930. doi: 10.1145/964965.808600. URL [https://doi.org/10.1145/964965.808600](https://doi.org/10.1145/964965.808600). * Deng et al. [2022] Nianchen Deng, Zhenyi He, Jiannan Ye, Budmonde Duinkharjav, Praneeth Chakravarthula, Xubo Yang, and Qi Sun. FoV-NeRF: Foveated neural radiance fields for virtual reality. _IEEE Transactions on Visualization and Computer Graphics_, pages 1-11, 2022. doi: 10.1109/TVCG.2022.3203102. * Du et al. [2019] Ruofei Du, David Li, and Amitabh Varshney. Geollery: A mixed reality social media platform. In _Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems_, CHI '19, page 1-13, New York, NY, USA, 2019. Association for Computing Machinery. 
ISBN 9781450359702. doi: 10.1145/3290605.3300915. URL [https://doi.org/10.1145/3290605.3300915](https://doi.org/10.1145/3290605.3300915). * Du et al. [2019] Ruofei Du, David Li, and Amitabh Varshney. Project geollery.com: Reconstructing a live mirrored world with geotagged social media. In _The 24th International Conference on 3D Web Technology_, Web3D '19, page 1-9, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450367981. doi: 10.1145/3329714.3338126. URL [https://doi.org/10.1145/3329714.3338126](https://doi.org/10.1145/3329714.3338126). * Feng and Varshney [2021] Brandon Yushan Feng and Amitabh Varshney. SIGNET: Efficient neural representation for light fields. In _2021 IEEE/CVF International Conference on Computer Vision (ICCV)_, pages 14204-14213, 2021. doi: 10.1109/ICCV48922.2021.01396. * Feng and Varshney [2022] Brandon Yushan Feng and Amitabh Varshney. Neural subspaces for light fields. _IEEE Transactions on Visualization and Computer Graphics_, pages 1-11, 2022. doi: 10.1109/TVCG.2022.3224674. * Feng et al. [2022] Brandon Yushan Feng, Susmija Jabbireddy, and Amitabh Varshney. VIISTER: View interpolation with implicit neural representations of images. In _SIGGRAPH Asia 2022 Conference Papers_, SA '22, New York, NY, USA, 2022. Association for Computing Machinery. ISBN 9781450394703. doi: 10.1145/3550469.3555417. URL [https://doi.org/10.1145/3550469.3555417](https://doi.org/10.1145/3550469.3555417). * Feng et al. [2021] Brandon Yushan Feng, Yinda Zhang, Danhang Tang, Ruofei Du, and Amitabh Varshney. PRIF: Primary ray-based implicit function. In _European Conference on Computer Vision_, pages 138-155. Springer, 2022. doi: 10.1007/978-3-031-20062-5_9. URL [https://doi.org/10.100782F978-3-031-20062-5_9](https://doi.org/10.100782F978-3-031-20062-5_9). * Gortler et al. [1996] Steven J. Gortler, Radek Grzeszczuk, Richard Szeliski, and Michael F. Cohen. The lumigraph. In _Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques_, SIGGRAPH '96, page 43-54, New York, NY, USA, 1996. Association for Computing Machinery. ISBN 0897917464. doi: 10.1145/237170.237200. URL [https://doi.org/10.1145/237170.237200](https://doi.org/10.1145/237170.237200). * Gu et al. [2022] Jiatao Gu, Lingjie Liu, Peng Wang, and Christian Theobalt. StyleNeRF: A style-based 3d aware generator for high-resolution image synthesis. In _International Conference on Learning Representations_, 2022. * Heckbert [1986] Paul S. Heckbert. Filtering by repeated integration. In _Proceedings of the 13th Annual Conference on Computer Graphics and Interactive Techniques_, SIGGRAPH '86, page 315-321, New York, NY, USA, 1986. Association for Computing Machinery. ISBN 0897911962. doi: 10.1145/15922.15921. URL [https://doi.org/10.1145/15922.15921](https://doi.org/10.1145/15922.15921). * Hensley et al. [2005] Justin Hensley, Thorsten Scheuermann, Greg Coombe, Montek Singh, and Anselmo Lastra. Fast summed-area table generation and its applications. _Computer Graphics Forum_, 24(3):547-555, 2005. doi: [https://doi.org/10.1111/j.1467-8659.2005.00880](https://doi.org/10.1111/j.1467-8659.2005.00880). x. URL [https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-8659.2005.00880.x](https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-8659.2005.00880.x). * Kovesi [2010] Peter Kovesi. Fast almost-gaussian filtering. In _2010 International Conference on Digital Image Computing: Techniques and Applications_, pages 121-125, 2010. doi: 10.1109/DICTA.2010.30. * Kroner et al. 
[2020] Alexander Kroner, Mario Senden, Kurt Driessens, and Rainer Goebel. Contextual encoder-decoder network for visual saliency prediction. _Neural Networks_, 129:261-270, 2020. ISSN 0893-6080. doi: [https://doi.org/10.1016/j.neunet.2020.05.004](https://doi.org/10.1016/j.neunet.2020.05.004). URL [http://www.sciencedirect.com/science/article/pii/S0893608020301660](http://www.sciencedirect.com/science/article/pii/S0893608020301660). * Landgraf et al. [2022] Zoe Landgraf, Alexander Sorkine Hornung, and Ricardo S Cabral. PINs: Progressive implicit networks for multi-scale neural representations. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, _Proceedings of the 39th International Conference on Machine Learning_, volume 162 of _Proceedings of Machine Learning Research_, pages 11969-11984. PMLR, 17-23 Jul 2022. URL [https://proceedings.mlr.press/v162/landgraf22a.html](https://proceedings.mlr.press/v162/landgraf22a.html). * Levoy and Hanrahan [1996] Marc Levoy and Pat Hanrahan. Light field rendering. In _Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques_, SIGGRAPH '96, page 31-42, New York, NY, USA, 1996. Association for Computing Machinery. ISBN 0897917464. doi: 10.1145/237170.237199. URL [https://doi.org/10.1145/237170.237199](https://doi.org/10.1145/237170.237199). * Li and Varshney [2022] David Li and Amitabh Varshney. Progressive multi-scale light field networks. In _2022 International Conference on 3D Vision (3DV)_, pages 231-241, 2022. doi: 10.1109/3DV57658.2022.00035. * Li et al. [2020] David Li, Eric Lee, Elijah Schwelling, Mason G. Quick, Patrick Meyers, Ruofei Du, and Amitabh Varshney. Meteovis: Visualizing meteorological events in virtual reality. In _Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems_, CHI EA '20, page 1-9, New York, NY, USA, 2020. Association for Computing Machinery. ISBN 9781450368193. doi: 10.1145/3334480.3382921. URL [https://doi.org/10.1145/3334480.3382921](https://doi.org/10.1145/3334480.3382921). * Li et al. [2021] David Li, Ruofei Du, Adharsh Babu, Camelia D. Brumar, and Amitabh Varshney. A log-rectilinear transformation for foveated 360-degree video streaming. _IEEE Transactions on Visualization and Computer Graphics_, 27(5):2638-2647, 2021. doi: 10.1109/TVCG.2021.3067762. * Li et al. [2022] Zhong Li, Liangchen Song, Celong Liu, Junsong Yuan, and Yi Xu. NeuLF: Efficient Novel View Synthesis with Neural 4D Light Field. In Abhijeet Ghosh and Li-Yi Wei, editors, _Eurographics Symposium on Rendering_. The Eurographics Association, 2022. ISBN 978-3-03868-187-8. doi: 10.2312/sr.20221156. * Lindell et al. [2022] David B. Lindell, Dave Van Veen, Jeong Joon Park, and Gordon Wetzstein. Bacon: Band-limited coordinate networks for multiscale scene representation. In _2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 16231-16241, 2022. doi: 10.1109/CVPR52688.2022.01577. * Mildenhall et al. [2020] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. In _ECCV_, 2020. doi: 10.1007/978-3-030-58452-8_24. * Muller et al. [2022] Thomas Muller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. _ACM Trans. Graph._, 41(4), jul 2022. ISSN 0730-0301. doi: 10.1145/3528223.3530127. 
URL [https://doi.org/10.1145/3528223.3530127](https://doi.org/10.1145/3528223.3530127). * Park et al. [2019] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. DeepSDF: Learning continuous signed distance functions for shape representation. In _2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 165-174, 2019. doi: 10.1109/CVPR.2019.00025. * Park et al. [2021] Keunhong Park, Utkarsh Sinha, Jonathan T. Barron, Sofien Bouaziz, Dan B Goldman, Steven M. Seitz, and Ricardo Martin-Brualla. Nerfies: Deformable neural radiance fields. In _2021 IEEE/CVF International Conference on Computer Vision (ICCV)_, pages 5845-5854, 2021. doi: 10.1109/ICCV48922.2021.00581. * Park et al. [2021] Keunhong Park, Utkarsh Sinha, Peter Hedman, Jonathan T. Barron, Sofien Bouaziz, Dan B Goldman, Ricardo Martin-Brualla, and Steven M. Seitz. HyperNeRF: A higher-dimensional representation for topologically varying neural radiance fields. _ACM Trans. Graph._, 40(6), dec 2021. ISSN 0730-0301. doi: 10.1145/3478513.3480487. URL [https://doi.org/10.1145/3478513.3480487](https://doi.org/10.1145/3478513.3480487). * Pumarola et al. [2021] Albert Pumarola, Enric Corona, Gerard Pons-Moll, and Francesc Moreno-Noguer. D-nerf: Neural radiance fields for dynamic scenes. In _2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 10313-10322, 2021. doi: 10.1109/CVPR46437.2021.01018. * Reiser et al. [2021] Christian Reiser, Songyou Peng, Yiyi Liao, and Andreas Geiger. KiloNeRF: Speeding up neural radiance fields with thousands of tiny mlps. In _2021 IEEE/CVF International Conference on Computer Vision (ICCV)_, pages 14315-14325, 2021. doi: 10.1109/ICCV48922.2021.01407. * Saragadam et al. [2022] Vishwanath Saragadam, Jasper Tan, Guha Balakrishnan, Richard G. Baraniuk, and Ashok Veeraraghavan. MINER: multiscale implicit neural representations. _CoRR_, abs/2202.03532, 2022. URL [https://arxiv.org/abs/2202.03532](https://arxiv.org/abs/2202.03532). * Schonberger and Frahm [2016] Johannes L. Schonberger and Jan-Michael Frahm. Structure-from-motion revisited. In _2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 4104-4113, 2016. doi: 10.1109/CVPR.2016.445. * Schonberger et al. [2016] Johannes Lutz Schonberger, Enliang Zheng, Marc Pollefeys, and Jan-Michael Frahm. Pixelwise view selection for unstructured multi-view stereo. In _European Conference on Computer Vision (ECCV)_, 2016. doi: 10.1007/978-3-319-46487-9_31. * Sitzmann et al. [2020] Vincent Sitzmann, Julien N. P. Martel, Alexander W. Bergman, David B. Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. In _Proceedings of the 34th International Conference on Neural Information Processing Systems_, NIPS'20, Red Hook, NY, USA, 2020. Curran Associates Inc. ISBN 9781713829546. * Sitzmann et al. [2021] Vincent Sitzmann, Semon Rezchikov, Bill Freeman, Josh Tenenbaum, and Fredo Durand. Light field networks: Neural scene representations with single-evaluation rendering. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, _Advances in Neural Information Processing Systems_, volume 34, pages 19313-19325. Curran Associates, Inc., 2021. URL [https://proceedings.neurips.cc/paper_files/paper/2021/file/allce019e96a4c60832eadd755a17a58-Paper.pdf](https://proceedings.neurips.cc/paper_files/paper/2021/file/allce019e96a4c60832eadd755a17a58-Paper.pdf). * Takikawa et al. 
[2021] Towaki Takikawa, Joey Litalien, Kangxue Yin, Karsten Kreis, Charles Loop, Derek Nowrouzezahrai, Alec Jacobson, Morgan McGuire, and Sanja Fidler. Neural geometric level of detail: Real-time rendering with implicit 3d shapes. In _2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 11353-11362, 2021. doi: 10.1109/CVPR46437.2021.01120. * Takikawa et al. [2022] Towaki Takikawa, Alex Evans, Jonathan Tremblay, Thomas Muller, Morgan McGuire, Alec Jacobson, and Sanja Fidler. Variable bitrate neural fields. In _ACM SIGGRAPH 2022 Conference Proceedings_, SIGGRAPH '22, New York, NY, USA, 2022. Association for Computing Machinery. ISBN 9781450393379. doi: 10.1145/3528233.3530727. URL [https://doi.org/10.1145/3528233.3530727](https://doi.org/10.1145/3528233.3530727). * Tancik et al. [2020] Matthew Tancik, Pratul Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan Barron, and Ren Ng. Fourier features let networks learn high frequency functions in low dimensional domains. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, _Advances in Neural Information Processing Systems_, volume 33, pages 7537-7547. Curran Associates, Inc., 2020. URL [https://proceedings.neurips.cc/paper_files/paper/2020/file/55053683268957697aa39fba6f231c68-Paper.pdf](https://proceedings.neurips.cc/paper_files/paper/2020/file/55053683268957697aa39fba6f231c68-Paper.pdf). * Tewari et al. [2022] A. Tewari, J. Thies, B. Mildenhall, P. Srinivasan, E. Tretschk, W. Yifan, C. Lassner, V. Sitzmann, R. Martin-Brualla, S. Lombardi, T. Simon, C. Theobalt, M. Niessner, J. T. Barron, G. Wetzstein, M. Zollhofer, and V. Golyanik. Advances in neural rendering. _Computer Graphics Forum_, 41(2):703-735, 2022. doi: [https://doi.org/10.1111/cgf.14507](https://doi.org/10.1111/cgf.14507). URL [https://onlinelibrary.wiley.com/doi/abs/10.1111/cgf.14507](https://onlinelibrary.wiley.com/doi/abs/10.1111/cgf.14507). * 378. International Society for Optics and Photonics, SPIE, 2003. doi: 10.1117/12.512550. URL [https://doi.org/10.1117/12.512550](https://doi.org/10.1117/12.512550). * Yu et al. [2021] Alex Yu, Sara Fridovich-Keil, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. Plenoxels: Radiance fields without neural networks, 2021. * Yu and Huang [2019] Jiahui Yu and Thomas Huang. Universally slimmable networks and improved training techniques. In _2019 IEEE/CVF International Conference on Computer Vision (ICCV)_, pages 1803-1811, 2019. doi: 10.1109/ICCV.2019.00189. * Zhang et al. [2022] Wenyuan Zhang, Ruofan Xing, Yunfan Zeng, Yu-Shen Liu, Kanle Shi, and Zhizhong Han. Fast learning radiance fields by shooting much fewer rays. _arXiv preprint arXiv:2208.06821_, 2022. Supplementary for Continuous Levels of Detail for Light Field Networks David Li University of Maryland, College Park Maryland, USA Brandon Y. Feng [https://brandonyfeng.github.io](https://brandonyfeng.github.io) Amittabh Varshney [https://www.cs.umd.edu/~varshney/](https://www.cs.umd.edu/~varshney/) ## 1 Additional Details The pseudocode for our training algorithm is shown in Algorithm 2. For our experiments, we use a neural network with 10 layers and continuous levels of detail from 1.0 up to 385.0. The parameters for our network are laid out in Table 1. 
```
Data: Training images with poses
Result: Trained LFN with continuous LODs
lfn <- InitializeLFN()
optimizer <- Adam(lfn)
for epoch <- 1 to num_epochs do
  for images <- GetImageBatch() do
    sat <- ComputeSAT(images)
    ray_pdf <- ComputeRayPDF(images)
    for rays, colors <- SampleRays(ray_pdf) do
      low_lod, low_lod_scale <- SampleLOD()
      low_lod_colors <- SampleSAT(sat, rays, low_lod_scale)
      loss <- L2(lfn(rays, max_lod), colors) + L2(lfn(rays, low_lod), low_lod_colors)
      loss.backward()
      optimizer.step()
    end for
  end for
end for
```
**Algorithm 2** Training Procedure Pseudocode for Continuous LOD LFNs

## 2 Additional Results

We present some qualitative results in Figure 1. Additional qualitative results are available on our supplementary webpage.

### Comparison to NeRF

Neural radiance fields use volume rendering and 3D scene coordinates which provide 3D scene structure and multi-view consistency at the cost of requiring dozens to hundreds of evaluations per ray. Two continuous LOD methods for NeRFs are Mip-NeRF [1] and Zip-NeRF [2]. Mip-NeRF uses integrated positional encoding to approximate a canonical frustum around a ray while Zip-NeRF uses multisampling of a feature grid. Both of these methods are targeted solely toward anti-aliasing and flicker reduction rather than towards resource adaptivity. Hence, the entire model must be downloaded for rendering and the performance per pixel is the same at each scale. Furthermore, neither method is directly applicable to light field networks, which rely on the spectral bias of ReLU MLPs and thus are incompatible with positional encoding and feature grids. For reference purposes, we present quantitative results using Mip-NeRF [1] on our datasets in Table 2. We train Mip-NeRF for 1 million iterations with a batch size of 1024 rays with the same 67% foreground and 33% background split in each batch. We also use the same training and test split for each dataset as in our experiments.

\begin{table} \begin{tabular}{l r r r} \hline \hline Level of Detail & 1.0 & \(\ell\) & 385.0 \\ \hline Model Layers & 10 & 10 & 10 \\ Layer Width & 128 & \(127+\lceil\ell\rceil\) & 512 \\ Parameters & 135,812 & \(\approx 9*(127+\ell)^{2}\) & 2,116,100 \\ Model Size (MB) & 0.518 & \(\approx 36*(127+\ell)^{2}/2^{20}\) & 8.072 \\ Target Scale & \(1/8\) & \(2^{4\left(\frac{127+\ell}{512}\right)-4}\) & 1 \\ \hline \hline \end{tabular} \end{table} Table 1: Model Parameters for Each Level of Detail.
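The relationships in Table 1 can be checked with a few lines of Python. The snippet below is only an illustration of the table's approximate formulas (parameter count \(\approx 9(127+\ell)^{2}\), model size \(\approx 36(127+\ell)^{2}/2^{20}\) MB, and target scale \(2^{4(127+\ell)/512-4}\)); the exact end-point values reported in the table (e.g., 135,812 parameters at \(\ell=1\)) come from the actual network and differ slightly from these approximations.

```python
# Illustrative only: evaluates the approximate per-LOD formulas from Table 1.
def lod_profile(lod: float):
    width = 127 + lod  # layer width at this level of detail (ceil(lod) for integer LODs)
    return {
        "layer_width": width,
        "approx_params": int(9 * width ** 2),         # ~ 9 * (127 + lod)^2
        "approx_size_mb": 36 * width ** 2 / 2 ** 20,  # ~ 4 bytes per parameter
        "target_scale": 2 ** (4 * width / 512 - 4),   # 1/8 at lod=1, 1 at lod=385
    }

for lod in (1.0, 129.0, 257.0, 385.0):
    p = lod_profile(lod)
    print(f"LOD {lod:5.1f}: width {p['layer_width']:5.0f}, "
          f"~{p['approx_params']:,} params, ~{p['approx_size_mb']:.2f} MB, "
          f"scale {p['target_scale']:.3f}")
```

The sampled LODs 129, 257, and 385 land on the 1/4, 1/2, and full-resolution target scales used in the comparison of Table 2.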
\begin{table} \begin{tabular}{l r r r r} \hline \hline Model & 1/8 & 1/4 & 1/2 & 1/1 \\ \hline Continuous LOD LFN & 28.06 & 29.79 & 28.44 & 27.40 \\ Mip-NeRF & 24.81 & 24.95 & 24.35 & 23.86 \\ \hline \multicolumn{5}{c}{(a) PSNR (dB) at 1/8, 1/4, 1/2, and 1/1 scale.} \\ \hline Model & 1/8 & 1/4 & 1/2 & 1/1 \\ \hline Continuous LOD LFN & 0.8380 & 0.8751 & 0.8487 & 0.8455 \\ Mip-NeRF & 0.6819 & 0.6735 & 0.6451 & 0.6374 \\ \hline \multicolumn{5}{c}{(b) SSIM at 1/8, 1/4, 1/2, and 1/1 scale.} \\ \end{tabular} \end{table} Table 2: Average Rendering Quality Comparison

Figure 1: Qualitative results rendering our continuous LFNs at multiple levels of detail for two datasets.

In our experiments, we observe that with our sampling scheme, Mip-NeRF is not able to separate the foreground and background cleanly as shown in Figure 2 which leads to worse PSNR and SSIM results. In general, NeRF-based methods are better able to perform view-synthesis with high-frequency details due to their use of positional encoding and their 3D structure. MLP-based methods such as Mip-NeRF typically have a compact size (\(\leq\) 10 MB) but suffer from slow rendering times on the order of tens of seconds per image. Feature-grid NeRFs such as Instant-NGP [4], Plenoxels [5], and Zip-NeRF [2] can achieve real-time rendering but at the cost of larger model sizes (\(\geq\) 30 MB). Factorized feature grids such as TensorRF [3] promise both fast rendering and small model sizes. Note that the goal of our paper is to enable more granularity with continuous levels of detail for rendering and streaming purposes rather than improving view-synthesis quality.
2306.17765
Speculative SAT Modulo SAT
State-of-the-art model-checking algorithms like IC3/PDR are based on uni-directional modular SAT solving for finding and/or blocking counterexamples. Modular SAT solvers divide a SAT-query into multiple sub-queries, each solved by a separate SAT solver (called a module), and propagate information (lemmas, proof obligations, blocked clauses, etc.) between modules. While modular solving is key to IC3/PDR, it is obviously not as effective as monolithic solving, especially when individual sub-queries are harder to solve than the combined query. This is partially addressed in SAT modulo SAT (SMS) by propagating unit literals back and forth between the modules and using information from one module to simplify the sub-query in another module as soon as possible (i.e., before the satisfiability of any sub-query is established). However, bi-directionality of SMS is limited because of the strict order between decisions and propagation -- only one module is allowed to make decisions, until its sub-query is SAT. In this paper, we propose a generalization of SMS, called SPEC SMS, that speculates decisions between modules. This makes it bi-directional -- decisions are made in multiple modules, and learned clauses are exchanged in both directions. We further extend DRUP proofs and interpolation, these are useful in model checking, to SPEC SMS. We have implemented SPEC SMS in Z3 and show that it performs exponentially better on a series of benchmarks that are provably hard for SMS.
Hari Govind V K, Isabel Garcia-Contreras, Sharon Shoham, Arie Gurfinkel
2023-06-30T16:18:00Z
http://arxiv.org/abs/2306.17765v1
# Speculative SAT Modulo SAT ###### Abstract State-of-the-art model-checking algorithms like IC3/PDR are based on uni-directional modular SAT solving for finding and/or blocking counterexamples. Modular SAT-solvers divide a SAT-query into multiple sub-queries, each solved by a separate SAT-solver (called a module), and propagate information (lemmas, proof obligations, blocked clauses, etc.) between modules. While modular solving is key to IC3/PDR, it is obviously not as effective as monolithic solving, especially when individual sub-queries are harder to solve than the combined query. This is partially addressed in SAT modulo SAT (SMS) by propagating unit literals back and forth between the modules and using information from one module to simplify the sub-query in another module as soon as possible (i.e., before the satisfiability of any sub-query is established). However, bi-directionality of SMS is limited because of the strict order between decisions and propagation - only one module is allowed to make decisions, until its sub-query is SAT. In this paper, we propose a generalization of SMS, called specSMS, that _speculates_ decisions between modules. This makes it bi-directional - decisions are made in multiple modules, and learned clauses are exchanged in both directions. We further extend DRUP proofs and interpolation, these are useful in model checking, to specSMS. We have implemented specSMS in Z3 and show that it performs exponentially better on a series of benchmarks that are provably hard for SMS. ## I Introduction IC3/PDR [3] is an efficient SAT-based Model Checking algorithm. Among many other innovations in IC3/PDR is the concept of a modular SAT-solver that divides a formula into multiple _frames_ and each frame is solved by an individual SAT solver. The solvers communicate by exchanging proof obligations (i.e., satisfying assignments) and lemmas (i.e., learned clauses). While modular reasoning in IC3/PDR is very efficient for a Model Checker, it is not as efficient as a classical monolithic SAT-solver. This is not surprising since modularity restricts the solver to colorable refutations [11], which are, in the worst case, exponentially bigger than unrestricted refutations. On the positive side, IC3/PDR's modular SAT-solving makes interpolation trivial, and enables generalizations of proof obligations and inductive generalization of lemmas - both are key to the success of IC3/PDR. This motivates the study of modular SAT-solving, initiated by SMS [1]. Our strategic vision is that our study will contribute to improvements in IC3/PDR. However, in this paper, we focus on modular SAT-solving in isolation. In modular SAT-solving, multiple solvers interact to check satisfiability of a partitioned CNF formula, where each part of the formula is solved by one of the solvers. In this paper, for simplicity, we consider the case of two solvers \(\langle S_{\text{s}},S_{\text{m}}\rangle\) checking satisfiability of a formula pair \(\langle\Phi_{\text{s}},\Phi_{\text{m}}\rangle\). \(S_{\text{m}}\) is a _main_ solver and \(S_{\text{s}}\) is a _secondary_ solver. In the notation, the solvers are written right-to-left to align with IC3/PDR, where the main solver is used for frame 1 and the secondary solver is used for frame 0. When viewed as a modular SAT-solver, IC3/PDR is uni-directional. First, \(S_{\text{m}}\) finds a satisfying assignment \(\sigma\) to \(\Phi_{\text{m}}\) and only then, \(S_{\text{s}}\) extends \(\sigma\) to an assignment for \(\Phi_{\text{s}}\). 
Learned clauses, called _lemmas_ in IC3/PDR, are only shared (or copied) from the secondary solver \(S_{\text{s}}\) to the main solver \(S_{\text{m}}\). SAT Modulo SAT (SMS) [1] is a modular SAT-solver that extends IC3/PDR by allowing inter-modular unit propagation and conflict analysis: whenever an interface literal is placed on a trail of any solver, it is shared with the other solver and both solvers run unit propagation, exchanging unit literals. This makes modular SAT-solving in SMS bi-directional as information flows in both directions between the solvers. Bi-directional reasoning can simplify proofs, but it significantly complicates conflict analysis. To manage conflict analysis, SMS does not allow the secondary solver \(S_{\text{s}}\) to make any decisions before the main solver \(S_{\text{m}}\) is able to find a complete assignment to its clauses. As a result, learned clauses are either local to each solver, or flow only from \(S_{\text{s}}\) to \(S_{\text{m}}\), restricting the structure of refutations similarly to IC3/PDR. Both IC3/PDR and SMS require \(S_{\text{m}}\) to find a complete satisfying assignment to \(\Phi_{\text{m}}\) before the solving is continued in \(S_{\text{s}}\). This is problematic since \(\Phi_{\text{m}}\) might be hard to satisfy, causing them to get stuck in \(\Phi_{\text{m}}\), even if considering both formulas together quickly reveals the (un)satisfiability of \(\langle\Phi_{\text{s}},\Phi_{\text{m}}\rangle\). In this paper, we introduce specSMS -- a modular SAT-solver that employs a truly bi-directional reasoning. specSMS builds on SMS, while facilitating deeper communication between the modules by (1) allowing learnt clauses to flow in both directions, and (2) letting the two solvers interleave their decisions. The key challenge is in the adaptation of conflict analysis to properly handle the case of a conflict that depends on decisions over local variables of both solvers. Such a conflict cannot be explained to either one of the solvers using only interface clauses (i.e., clauses over interface variables). It may, therefore, require backtracking the search without learning any conflict clauses. To address this challenge, specSMS uses _speculation_, which tames decisions of the secondary solver that are interleaved with decisions of the main solver. If the secondary solver satisfies all of its clauses during speculation, a _validation_ phase is employed, where the main solver attempts to extend the assignment to satisfy its unassigned clauses. If speculation leads to a conflict which depends on local decisions of both solvers, _refinement_ is employed to resolve the conflict. Refinement ensures progress even if no conflict clause can be learnt. With these ingredients, we show that specSMS is sound and complete (i.e., always terminates). To certify specSMS's result when it determines that a formula is unsatisfiable, we extract a _modular_ clausal proof from its execution. To this end, we extend DRUP proofs [12] to account for modular reasoning, and devise a procedure for trimming modular proofs. Such proofs are applicable both to specSMS and to SMS. Finally, we propose an interpolation algorithm that extracts an interpolant [4] from a modular proof. Since clauses are propagated between the solvers in both directions, the extracted interpolants have the shape \(\bigwedge_{i}(C_{i}\Rightarrow cls_{i})\), where \(C_{i}\) are conjunctions of clauses and each \(cls_{i}\) is a clause. Original SMS is implemented on top of MiniSAT. 
For this paper, we implemented both SMS and specSMS in Z3 [5], using the extendable SAT-solver interface of Z3. Thanks to its bi-directional reasoning, specSMS is able to efficiently solve both sat and unsat formulas that are provably hard for existing modular SAT-solvers, provided that speculation is performed at the right time. specSMS relies on a user-provided callback to decide when to speculate. Developing good heuristics for speculation is a topic of future work. In summary, we make the following contributions: (i) the specSMS algorithm that leverages bi-directional modular reasoning (Sec. III); (ii) modular DRUP proofs for specSMS (Sec. IV-A); (iii) proof-based interpolation algorithm; (iv) user interface to guide speculation and search (Sec. V); and (v) implementation and validation (Sec. VI). ## II Motivating examples In this section, we discuss two examples in which both IC3/PDR-style uni-directional reasoning and SMS-style shallow bi-directional reasoning are ineffective. The examples illustrate why existing modular reasoning gets stuck. To better convey our intuition, we present our problems at word level using bit-vector variables directly, without explicitly converting them to propositional variables. **Example 1**: Consider the following modular sat query: \(\langle\varphi_{in},\varphi_{\mathrm{SHA-1}}\rangle\), where \(\varphi_{in}\triangleq(in=in_{1})\vee(in=in_{2})\), \(in\) is a 512-bit vector, \(in_{1}\), \(in_{2}\) are 512-bit values, \(\varphi_{\mathrm{SHA-1}}\triangleq(\mathrm{SHA-1}_{circ}(in)=\mathrm{SHA-1}_{ in_{1}})\), \(\mathrm{SHA-1}_{circ}(in)\) is a circuit that computes \(\mathrm{SHA-1}\) of \(in\), and \(\mathrm{SHA-1}_{in_{1}}\) is the 20 byte \(\mathrm{SHA-1}\) message digest of \(in_{1}\). Checking the satisfiability of \(\varphi_{in}\wedge\varphi_{\mathrm{SHA-1}}\) is easy because it contains both the output and the input of the \(\mathrm{SHA-1}\) circuit. However, existing modular SAT-solvers attempt to solve the problem starting by finding a complete satisfying assignment to \(\varphi_{\mathrm{SHA-1}}\). This is essentially the problem of inverting the \(\mathrm{SHA-1}\) function, which is known to be very hard for a SAT-solver. The improvements in SMS do not help. There are no unit clauses in \(\varphi_{in}\). On the other hand, specSMS proceeds as follows: (1) when checking satisfiability of \(\varphi_{\mathrm{SHA-1}}\), it decides to speculate, (2) it starts checking satisfiability of \(\varphi_{in}\), branches on variables \(in\), finds an assignment \(\sigma\) to \(in\) and unit propagates \(\sigma\) to \(\varphi_{\mathrm{SHA-1}}\), (3) if there is a conflict in \(\varphi_{\mathrm{SHA-1}}\), it learns the conflict clause \(in\neq in_{2}\), and (4) it terminates with a satisfying assignment \(in=in_{1}\). Speculation in step (1) is what differentiates specSMS from IC3/PDR and SMS. The specifics of when exactly specSMS speculates is guided by the user, as explained in Sec. V. \(\Box\) **Example 2**: Speculation is desirable for unsatisfiable formulas as well. Consider the modular sat query \(\langle\varphi_{+},\varphi_{-}\rangle\), where \(\varphi_{+}\triangleq(a<0\Rightarrow x)\land(a\geq 0\Rightarrow x)\wedge PHP_{32}^{1}\) and \(\varphi_{-}\triangleq(b<0\Rightarrow\neg x)\land(b\geq 0\Rightarrow\neg x)\wedge PHP_{32}^{2}\). Here, \(a\) and \(b\) are 32-wide bitvectors and local to the respective modules. 
\(PHP_{32}\) encodes the problem of fitting \(32\) pigeons into \(31\) holes and \(PHP_{32}^{1}\) and \(PHP_{32}^{2}\) denote a partitioning of \(PHP_{32}\) into 2 problems such that both formulas contain all variables. The modular problem \(\langle\varphi_{+},\varphi_{-}\rangle\) is unsatisfiable, \(x\) and \(PHP_{32}^{1}\) being two possible interpolants. IC3/PDR and SMS only find the second interpolant. This is because, all satisfying assignments to \(\varphi_{-}\) immediately produce a conflict in \(PHP_{32}^{1}\) part of \(\varphi_{+}\). However, learning an interpolant containing \(x\) requires searching (i.e., deciding) in both \(\varphi_{+}\) and \(\varphi_{-}\). specSMS, with proper guidance, solves the problem by (1) deciding on all \(b\) variables in \(\varphi_{-}\), (2) switching to speculation, (3) branching on all \(a\) variables in \(\varphi_{+}\) to hit a conflict in \(x\), and, finally (4) learning the conflict clause \(x\). \(\Box\) These examples highlight the need to speculate while doing modular reasoning. While speculation by itself is quite powerful, it requires proper guidance to be effective. We explain how the user can provide such a guidance in Sec. V. ## III Speculative SAT Modulo SAT This section presents specSMS -- a modular bi-directional SAT algorithm. For simplicity, we restrict our attention to the case of two modules. However, the algorithm easily generalizes to any sequence of modules. ### _Sat Modulo Sat_ We assume that the reader has some familiarity with internals of a MiniSAT-like SAT solver [6] and with SMS [1]. We give a brief background on SMS, highlighting some of the key aspects. SMS decides satisfiability of a partitioned CNF formula \(\langle\Phi_{\mathrm{s}},\Phi_{\mathrm{m}}\rangle\) with a set of shared interface variables \(I\). It uses two modules \(\langle S_{\mathrm{s}},S_{\mathrm{m}}\rangle\), where \(S_{\mathrm{m}}\) is a _main_ module used to solve \(\Phi_{\mathrm{m}}\), and \(S_{\mathrm{s}}\) is a _secondary_ module to solve \(\Phi_{\mathrm{s}}\). Each module is a SAT solver (with a slightly extended interface, as described in this section). We refer to them as _modules_ or _solvers_, interchangeably. Each solver has its own clause database (initialized with \(\Phi_{i}\) for \(i\in\{\mathrm{m},\mathrm{s}\}\)), and a trail of literals, just as a regular SAT solver. The solvers keep their decision levels in sync. Whenever a decision is made in one solver, the decision level of the other solver is incremented as well (adding a _null_ literal to its trail if necessary). Whenever one solver back-jumps to level \(i\), the other solver back-jumps to level \(i\) as well. Assignments to interface variables are shared between the solvers: whenever such a literal is added to the trail of one solver (either as a decision or due to propagation), it is also added to the trail of the other solver. SMS requires that \(S_{\mathsf{s}}\) does not make any decisions, until \(S_{\mathsf{m}}\) finds a satisfying assignment to its clauses. Inter-modular propagation and conflict analysisThe two key features of SMS are inter-modular unit propagation (called PropagateAll in [1]) and the corresponding inter-modular conflict analysis. In PropagateAll, whenever an interface literal is added to the trail of one solver, it is added to the trail of the other, and both solvers run unit propagation. Whenever a unit literal \(\ell\) is copied from the trail of one solver to the other, the reason for \(\ell\) in the destination solver is marked using a marker ext. 
This indicates that the justification for the unit is external to the destination solver1. Propagation continues until either there are no more units to propagate or one of the solvers hits a conflict. Footnote 1: This is similar to theory propagation in SMT solvers. Conflict analysis in SMS is extended to account for units with no reason clauses. If such a literal \(\ell\) is used in conflict analysis, its reason is obtained by using \(\mathsf{AnalyzeFinal}(\ell)\) on the other solver to compute a clause \((s\Rightarrow\ell)\) over the interface literals. This clause is copied to the requesting solver and is used as the missing reason. Multiple such clauses can be copied (or learned) during analysis of a single conflict clause - one clause for each literal in the conflict that is assigned by the other solver. In SMS, it is crucial that \(\mathsf{AnalyzeFinal}(\ell)\) always succeeds to generate a reason clause over the interface variables. This is ensured by only calling \(\mathsf{AnalyzeFinal}(\ell)\) in the \(S_{\mathsf{s}}\) solver on literals that were added to the trail when \(S_{\mathsf{s}}\) was not yet making decisions. This can happen in one of two scenarios: either \(S_{\mathsf{m}}\) hits a conflict due to literals propagated from \(S_{\mathsf{s}}\), in which case \(\mathsf{AnalyzeFinal}\) is invoked in \(S_{\mathsf{s}}\) on each literal marked ext in \(S_{\mathsf{m}}\) that is involved in the conflict resolution to obtain its reason; or \(S_{\mathsf{s}}\) hits a conflict during unit propagation, in which case it invokes \(\mathsf{AnalyzeFinal}\) to obtain a conflict clause over the interface variables that blocks the partial assignment of \(S_{\mathsf{m}}\). In both cases, new reason clauses are always copied from \(S_{\mathsf{s}}\) to \(S_{\mathsf{m}}\). We refer the reader to [1] for the pseudo-code of the above inter-modular procedures for details. ### _Speculative Sat Modulo Sat_ specSMS extends SMS [1] by a combination of _speculation_, _refinement_, and _validation_. During the search in the main solver \(S_{\mathsf{m}}\), specSMS non-deterministically _speculates_ by allowing the secondary solver \(S_{\mathsf{s}}\) to extend the current partial assignment of \(\Phi_{\mathsf{m}}\) to a satisfying assignment of \(\Phi_{\mathsf{s}}\). If \(S_{\mathsf{s}}\) is unsuccessful (i.e., hits a conflict), and the conflict depends on a combination of a _local_ decision of \(S_{\mathsf{m}}\) with some decision of \(S_{\mathsf{s}}\), then the search reverts to \(S_{\mathsf{m}}\) and its partial assignment is _refined_ by forcing \(S_{\mathsf{m}}\) to decide on an interface literal from the conflict. On the other hand, if \(S_{\mathsf{s}}\) is successful, solving switches to the main solver \(S_{\mathsf{m}}\) that _validates_ the current partial assignment by extending it to all of its clauses. This either succeeds (meaning, \(\langle\Phi_{\mathsf{s}},\Phi_{\mathsf{m}}\rangle\) is sat), or fails and another _refinement_ is initiated. Note that the two subcases where \(S_{\mathsf{s}}\) is unsuccessful but the reason for the conflict is either local to \(S_{\mathsf{s}}\) or local to \(S_{\mathsf{m}}\) are handled as in SMS. Search modesspecSMS controls the behavior of the solvers and their interaction through _search modes_. Each solver can be in one of the following search modes: Decide, Propagate, and Finished. In Decide, written \(D^{i}\), the solver treats all decisions below level \(i\) as assumptions and is allowed to both make decisions and do unit propagation. 
In Propagate, written \(P\), the solver makes no decisions, but does unit propagation whenever new literals are added to its trail. In Finished, written \(F\), the clause database of the solver is satisfied; the solver neither makes decisions nor propagates unit literals. The pair of search modes of both modules is called the _state_ of specSMS, where we add a unique state called _unsat_ for the case when the combination of the modules is known to be unsatisfiable. The possible states and transitions of specSMS are shown in Fig. 1. States _unsat_ and \(\langle F,F\rangle\) are two final states, corresponding to unsat and sat, respectively. In all other states, exactly one of the solvers is in a state \(D^{i}\). We refer to this solver as _active_. The part of the transition system highlighted in yellow corresponds to SMS, and the green part includes the states and transitions that are unique to specSMS.

Fig. 1: State transitions of specSMS. A state \(\langle P,D^{0}\rangle\) means that the secondary solver \(S_{\mathsf{s}}\) is in propagate mode and the main solver \(S_{\mathsf{m}}\) is in solve mode. Each edge is guarded with a condition. The condition \(S_{\mathsf{m}}:\textsc{sat}\) means that \(S_{\mathsf{m}}\) found a full satisfying assignment to \(\Phi_{\mathsf{m}}\). The condition \(S_{\mathsf{m}}:\textsc{c}\oplus\mathit{z}\) means that \(S_{\mathsf{m}}\) hit a conflict at a decision level below \(j\). The four states in yellow correspond to SMS; the two states in green are unique to specSMS.

_Normal execution with bi-directional propagation:_ specSMS starts in the state \(\langle P,D^{0}\rangle\), with the main solver being active. In this state, it can proceed like SMS by staying in the yellow region of Fig. 1. We call this _normal execution with bi-directional propagation_, since (only) unit propagation goes between solvers.

_Speculation:_ What sets specSMS apart is speculation: at any non-deterministically chosen decision level \(i\), specSMS can pause deciding on the main solver and activate the secondary solver (i.e., transition to state \(\langle D^{i},P\rangle\)). During speculation, only the secondary solver makes decisions. Since the main solver does not have a full satisfying assignment to its clauses, the secondary solver propagates assignments to the main solver and vice-versa. Speculation terminates when the secondary solver \(S_{\mathsf{s}}\) either: (1) hits a conflict that cannot be resolved by inter-modular conflict analysis; (2) hits a conflict below decision level \(i\); or (3) finds a satisfying assignment to \(\Phi_{\mathsf{s}}\). Case (1) is most interesting, and is what makes specSMS differ from SMS. Note that a conflict clause is not resolved by inter-modular conflict analysis only if it depends on an external literal on the trail of \(S_{\mathsf{s}}\) that cannot be explained by an interface clause from \(S_{\mathsf{m}}\). This is possible when both \(S_{\mathsf{m}}\) and \(S_{\mathsf{s}}\) have partial assignments during speculation. So the conflict might depend on the _local_ decisions of \(S_{\mathsf{m}}\). This cannot be communicated to \(S_{\mathsf{s}}\) using only interface variables.

_Refinement:_ In specSMS, this is handled by modifying the Reason method in the solvers to fail (i.e., return ext) whenever AnalyzeFinal returns a non-interface clause. Additionally, the literal on which AnalyzeFinal failed is recorded in a global variable \(\mathit{refineLit}\). This is shown in Alg. 1.
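To make the mechanism concrete, the following is a rough, self-contained Python rendering of the Reason wrapper of Alg. 1 and of the failure path that records \(\mathit{refineLit}\). The data structures here (plain dictionaries, string literals, a lambda standing in for the other module's AnalyzeFinal) are illustrative choices for this sketch, not the Z3 implementation; the toy call at the bottom mirrors the situation in Example 3 below, where the only available explanation for an external literal mentions a local decision variable of the other module.

```python
# Hypothetical sketch only; names and data structures are illustrative.
EXT = 'ext'                    # marker: the reason lives in the other module
INTERFACE = {'i', 'j', 'k'}    # shared (interface) variables in the toy example
refine_lit = None              # literal that will force a refinement

def reason(lit, reasons, other_analyze_final, clauses):
    """Return a reason clause for lit. If the stored reason is external, ask
    the other module for an explanation; if that explanation mentions a
    non-interface variable, record the literal and fail by returning EXT."""
    global refine_lit
    if reasons[lit] == EXT:
        c = other_analyze_final(lit)              # stand-in for AnalyzeFinal(lit)
        if any(v.lstrip('-') not in INTERFACE for v in c):
            refine_lit = lit                      # cannot be explained over I
            return EXT
        clauses.append(c)                         # copy the clause over (AddClause)
        reasons[lit] = c
    return reasons[lit]

# Toy usage: '-j' was propagated from the other module, whose explanation for
# it involves the local (non-interface) decision variable 'a'.
reasons = {'-i': ['-z', '-i'], '-j': EXT}
other = lambda lit: {'-j': ['-a', 'i', '-j']}[lit]
print(reason('-j', reasons, other, []))   # prints 'ext'
print(refine_lit)                         # prints '-j'
print(reason('-i', reasons, other, []))   # prints ['-z', '-i'] (a local reason)
```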
The inter-modular conflict analysis is modified to exit early whenever Reason fails to produce a justification. At this point, specSMS exits speculation and returns to the initial state \(\langle P,D^{0}\rangle\): both solvers back-jump to the decision level \(i\) at which speculation was initiated, and \(S_{\mathsf{m}}\) is forced to decide on \(\mathit{refineLit}\). We call this transition a _refinement_ because the partial assignment of the main solver \(S_{\mathsf{m}}\) (which we view as an _abstraction_) is updated (a.k.a., refined) based on information that was not available to it (namely, a conflict with a set of decisions in the secondary solver \(S_{\mathsf{s}}\)). Since \(\mathit{refineLit}\) was not decided on in \(S_{\mathsf{m}}\) prior to speculation, deciding on it is a new decision that ensures progress in \(S_{\mathsf{m}}\). The next speculation is possible only under strictly more decisions in \(S_{\mathsf{m}}\) than before, or when \(S_{\mathsf{m}}\) back-jumps and flips an earlier decision. We illustrate the refinement process on a simple example:

**Example 3**.: Consider the query \(\langle\Phi_{\mathsf{s}},\Phi_{\mathsf{m}}\rangle\) with: \[\begin{array}{c}\Phi_{\mathsf{s}}(z,i,j,k)\mbox{:}\\ \overline{z}\vee\overline{i}\qquad\qquad(3)\\ i\lor j\vee\overline{k}\qquad\qquad(4)\end{array}\qquad\begin{array}{c}\Phi_{\mathsf{m}}(a,i,j,k)\mbox{:}\\ \overline{a}\lor i\vee\overline{j}\qquad\qquad(1)\\ j\lor k\qquad\qquad(2)\end{array}\] First, \(S_{\mathsf{m}}\) decides \(a\) (at level 1), which causes no propagations. Then, specSMS enters speculative mode, transitions to \(\langle D^{1},P\rangle\) and starts making decisions in \(S_{\mathsf{s}}\). \(S_{\mathsf{s}}\) decides \(z\) and calls PropagateAll. Afterwards, inter-modular unit propagation places \(\neg i\) on the trail of \(S_{\mathsf{s}}\) (from clause (3)) and \(\neg j\) and \(k\) on the trail of \(S_{\mathsf{m}}\) (from clauses (1) and (2)), which makes clause (4) in \(S_{\mathsf{s}}\) conflicting. The external literals \(\neg j\) and \(k\) cannot be explained over the interface variables, since their reasons in \(S_{\mathsf{m}}\) trace back to the local decision \(a\); Reason therefore fails, the literal is recorded in \(\mathit{refineLit}\), and specSMS refines by back-jumping and forcing \(S_{\mathsf{m}}\) to decide on it. \(\Box\)

```
1: function Reason(lit)
2:   if reason[lit] = ext then
3:     c <- other.AnalyzeFinal(lit)
4:     if exists v in c such that v not in I then
5:       refineLit <- lit
6:       return ext
7:     AddClause(c)
8:     reason[lit] <- c
9:   return reason[lit]
```
**Algorithm 1** The Reason method in modular SAT solvers inside specSMS

Case (2) is similar to what happens in SMS when a conflict is detected in \(S_{\mathsf{s}}\). The reason for the conflict is below level \(i\), which is below the level of any decision of \(S_{\mathsf{s}}\). Since decision levels below \(i\) are treated as assumptions in \(S_{\mathsf{s}}\), calling AnalyzeFinal in \(S_{\mathsf{s}}\) returns an interface clause \(c\) that blocks the current assignment in \(S_{\mathsf{m}}\). The clause \(c\) is added to \(S_{\mathsf{m}}\). The solvers back-jump to the smallest decision level \(j\) that makes \(c\) an asserting clause in \(S_{\mathsf{m}}\). Finally, specSMS moves to \(\langle P,D^{0}\rangle\).

_Validation:_ Case (3), like Case (1), is unique to specSMS.
While all clauses of \(S_{\mathsf{s}}\) are satisfied, the current assignment might not satisfy all clauses of \(S_{\mathsf{m}}\). Thus, specSMS enters _validation_ by switching to the configuration \(\langle F,D^{M}\rangle\), where \(M\) is the current decision level. \(S_{\mathsf{m}}\) then becomes active and starts deciding and propagating. This continues until one of two things happens: (3a) \(S_{\mathsf{m}}\) extends the assignment to satisfy all of its clauses, or (3b) a conflict that cannot be resolved with inter-modular conflict analysis is found. In case (3a), specSMS transitions to \(\langle F,F\rangle\) and declares that \(\langle\Phi_{\mathsf{m}},\Phi_{\mathsf{s}}\rangle\) is sat. Case (3b) is handled exactly the same as Case (1) - the literal on the trail without a reason is stored in \(\mathit{refineLit}\), specSMS moves to \(\langle P,D^{0}\rangle\), back-jumps to the level at which speculation was started, and \(S_{\mathsf{m}}\) is forced to decide on \(\mathit{refineLit}\).

**Theorem 1**.: _specSMS terminates. If it reaches the state \(\langle F,F\rangle\), then \(\Phi_{\mathsf{s}}\wedge\Phi_{\mathsf{m}}\) is satisfiable and the join of the trails of \(\langle S_{\mathsf{s}},S_{\mathsf{m}}\rangle\) is a satisfying assignment. If it reaches the state \(\mathit{unsat}\), \(\Phi_{\mathsf{s}}\wedge\Phi_{\mathsf{m}}\) is unsatisfiable._

## IV Validation and interpolation

In this section, we augment specSMS with an interpolation procedure. To this end, we first introduce modular DRUP proofs, which are generated from specSMS in a natural way. We then present an algorithm for extracting an interpolant from a modular trimmed DRUP proof in the spirit of [11].

### _DRUP proofs for modular SAT_

Modular DRUP proofs - a form of clausal proofs [9] - extend (monolithic) DRUP proofs [12]. A DRUP proof [12] is a sequence of steps, where each step either asserts a clause, deletes a clause, or adds a new Reverse Unit Propagation (RUP) clause. Given a set of clauses \(\Gamma\), a clause \(cls\) is an RUP for \(\Gamma\), written \(\Gamma\vdash_{\mathit{UP}}\mathit{cls}\), if \(\mathit{cls}\) follows from \(\Gamma\) by unit propagation [8]. For a DRUP proof \(\pi\), let \(\texttt{assert}(\pi)\) denote all clauses of the asserted commands in \(\pi\); then \(\pi\) shows that all RUP clauses of \(\pi\) follow from \(\texttt{assert}(\pi)\). If \(\pi\) contains a \(\bot\) clause, then \(\pi\) certifies that \(\texttt{assert}(\pi)\) is unsat. A Modular DRUP proof is a sequence of clause addition and deletion steps, annotated with indices \(\mathit{idx}\) (m or s). Intuitively, steps with the same index must be validated together (within the same module \(\mathit{idx}\)), and steps with different indices may be checked independently. The steps are:

1. (asserted, \(\mathit{idx}\), \(\mathit{cls}\)) denotes that \(\mathit{cls}\) is asserted in \(\mathit{idx}\),
2. (rup, \(\mathit{idx}\), \(\mathit{cls}\)) denotes adding RUP clause \(\mathit{cls}\) to \(\mathit{idx}\),
3. (cp(\(\mathit{src}\)), \(\mathit{dst}\), \(\mathit{cls}\)) denotes copying a clause \(\mathit{cls}\) from \(\mathit{src}\) to \(\mathit{dst}\), and
4. (del, \(\mathit{idx}\), \(\mathit{cls}\)) denotes removing clause \(\mathit{cls}\) from \(\mathit{idx}\).

We denote the prefix of length \(k\) of a sequence of steps \(\pi\) by \(\pi^{k}\). Given a sequence of steps \(\pi\) and a formula index \(\mathit{idx}\), we use \(\mathit{act\_clauses}(\pi,\mathit{idx})\) to denote the set of active clauses with index \(\mathit{idx}\).
Formally, \[\{\mathit{cls}\mid\exists c_{j}\in\pi\cdot\] \[(c_{j}=(t,\mathit{idx},\mathit{cls})\land(t=\text{asserted} \lor t=\text{rup}\lor t=\text{cp(\_)}))\] \[\land\ \ \neg\exists c_{k}\in\pi\cdot k>j\wedge c_{k}=(\text{del}, \mathit{idx},\mathit{cls})\}\] A sequence of steps \(\pi=c_{1},\dots,c_{n}\) is a _valid modular DRUP proof_ iff for each \(c_{i}\in\pi\): 1. if \(c_{i}=(\text{rup},\mathit{idx},\mathit{cls})\) then \(\mathit{act\_clauses}(\pi^{i},\mathit{idx})\vdash_{\mathit{UP}}\mathit{cls}\), 2. if \(c_{i}=(\text{cp(\mathit{idx})},\_\mathit{cls})\) then \(\mathit{act\_clauses}(\pi^{i},\mathit{idx})\vdash_{\mathit{UP}}\mathit{cls}\), and 3. \(c_{|\pi|}\) is either \((\text{rup},\text{m},\bot)\) or \((\text{cp(s)},\text{m},\bot)\). Let \(\texttt{asserted}(\pi,idx)\) be the set of all asserted clauses in \(\pi\) with index \(idx\). **Theorem 2**: _If \(\pi\) is a valid modular DRUP proof, then \(\texttt{asserted}(\pi,\textsf{s})\land\texttt{asserted}(\pi,\textsf{m})\) is unsatisfiable. \(\Box\)_ Modular DRUP proofs may be validated with either one or two solvers. To validate with one solver we convert the modular proof into a monolithic one (i.e., where the steps are asserted, rup, and del). Let \(\texttt{modDRUP2DRUP}\) be a procedure that given a modular DRUP proof \(\pi\), returns a DRUP proof \(\pi^{\prime}\) that is obtained from \(\pi\) by (a) removing \(idx\) from all the steps; (b) removing all cp steps; (c) removing all del steps. Note that del steps are removed for simplicity, otherwise it is necessary to account for deletion of copied and non-copied clauses separately. **Lemma 1**: _If \(\pi\) is a valid modular DRUP proof then \(\pi^{\prime}=\texttt{modDRUP2DRUP}(\pi)\) is a valid DRUP proof. \(\Box\)_ Modular validation is done with two monolithic solvers working in lock step: (asserted, \(\mathit{cls},\mathit{idx}\)) steps are added to the \(\mathit{idx}\) solver; (rup, \(\mathit{idx}\), \(\mathit{cls}\)) steps are validated locally in solver \(\mathit{idx}\) using all active clauses (asserted, copied, and rup); and for \((\text{cp(\mathit{src})},\mathit{dst},\mathit{cls})\) steps, \(\mathit{cls}\) is added to \(\mathit{dst}\) but not validated in it, and \(\mathit{cls}\) is checked to exist in the \(\mathit{src}\) solver. From now on, we consider only valid proofs. We say that a (valid) modular DRUP proof \(\pi\) is a proof of unsatisfiability of \(\Phi_{\textsf{s}}\land\Phi_{\textsf{m}}\) if \(\texttt{asserted}(\pi,\textsf{s})\subseteq\Phi_{\textsf{s}}\) and \(\texttt{asserted}(\pi,\textsf{m})\subseteq\Phi_{\textsf{m}}\) (inclusion here refers to the sets of clauses). specSMS produces modular DRUP proofs by logging the clauses that are learnt, deleted, and copied between solvers. Note that in SMS clauses may only be copied from \(S_{\textsf{s}}\) to \(S_{\textsf{m}}\), but in specSMS they might be copied in both directions. **Theorem 3**: _Let \(\Phi_{\textsf{s}}\) and \(\Phi_{\textsf{m}}\) be two Boolean formulas s.t. \(\Phi_{\textsf{s}}\land\Phi_{\textsf{m}}\models\bot\). specSMS produces a valid modular DRUP proof for unsatisfiability of \(\Phi_{\textsf{s}}\land\Phi_{\textsf{m}}\). \(\Box\)_ _Trimming modular DRUP proofs._ A step in a modular DRUP proof \(\pi\) is _core_ if removing it invalidates \(\pi\). Under this definition, del steps are never core since removing them does not affect validation. Alg. 2 shows an algorithm to trim modular DRUP proofs based on backward validation. 
The inputs are two modular solvers \(S_{\textsf{m}}\) and \(S_{\textsf{s}}\) in a final conflicting state, and a valid modular DRUP proof \(\pi=c_{1},\dots,c_{n}\). The output is a trimmed proof \(\pi^{\prime}\) such that all steps of \(\pi^{\prime}\) are core. We assume that the reader is familiar with MiniSAT [6] and use the following solver methods: Propagate exhaustively applies the unit propagation (UP) rule by resolving all unit clauses; ConflictAnalysis analyzes the most recent conflict and marks which clauses are involved in the conflict; IsOnTrail checks whether a clause is an antecedent of a literal on the trail; Enqueue enqueues one or more literals on the trail; IsDeleted, Delete, Revive check whether a clause is deleted, delete a clause, and add a previously deleted clause, respectively; SaveTrail, RestoreTrail save and restore the state of the trail. Alg. 2 processes the steps of the proof backwards, rolling back the states of the solvers. \(M_{\mathit{idx}}\) marks which clauses were relevant to derive clauses in the current suffix of the proof. While the proof is constructed through inter-modular reasoning, the trimming algorithm processes each of the steps in the proof completely locally. During the backward construction of the trimmed proof, steps that include unmarked clauses are ignored (and, in particular, not added to the proof). For each (relevant) rup step, function CHR_RUP, using ConflictAnalysis, adds clauses to \(M\). del steps are never added to the trimmed proof, but the clause is revived from the solver. For cp steps, if the clause was marked, it is marked as used for the solver it was copied from and the step is added to the proof. Finally, asserted clauses that were marked are added to the trimmed proof. Note that, as in [11], proofs may be trimmed in different ways, depending on the strategy for ConflictAnalysis. The following theorem states that trimming preserves validity of the proof.

**Theorem 4**: _Let \(\Phi_{\mathbf{s}}\) and \(\Phi_{\mathbf{m}}\) be two formulas such that \(\Phi_{\mathbf{s}}\wedge\Phi_{\mathbf{m}}\models\bot\). If \(\pi\) is a modular DRUP proof produced by solvers \(S_{\mathbf{s}}\) and \(S_{\mathbf{m}}\) for \(\Phi_{\mathbf{s}}\wedge\Phi_{\mathbf{m}}\), then a trimmed proof \(\pi^{\prime}\) by Alg. 2 is also a valid modular DRUP proof for \(\Phi_{\mathbf{s}}\wedge\Phi_{\mathbf{m}}\)._

Fig. 2 shows a trimmed proof after specSMS is executed on \(\langle\psi_{0},\psi_{1}\rangle\) such that \(\psi_{0}\triangleq((s_{1}\wedge l_{1})\Rightarrow s_{2})\wedge((s_{1}\wedge\neg l_{1})\Rightarrow s_{2})\wedge((s_{3}\wedge l_{2})\Rightarrow s_{4})\wedge((s_{3}\wedge\neg l_{2})\Rightarrow s_{4})\) and \(\psi_{1}\triangleq(\neg s_{1}\Rightarrow l_{1})\wedge(\neg s_{1}\Rightarrow\neg l_{1})\wedge((s_{2}\wedge l_{2})\Rightarrow s_{3})\wedge((s_{2}\wedge\neg l_{2})\Rightarrow s_{3})\wedge(s_{4}\Rightarrow l_{3})\wedge(s_{4}\Rightarrow\neg l_{3})\).

### _Interpolation_

Given a modular DRUP proof \(\pi\) of unsatisfiability of \(\Phi_{\mathbf{s}}\wedge\Phi_{\mathbf{m}}\), we give an algorithm to compute an interpolant of \(\Phi_{\mathbf{s}}\wedge\Phi_{\mathbf{m}}\). For simplicity of the presentation, we assume that \(\pi\) has no deletion steps; this is the case in trimmed proofs, but we can also adapt the interpolation algorithm to handle deletions by keeping track of active clauses. Our interpolation algorithm relies only on the clauses copied between the modules.
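Before walking through the details, the following is a minimal, self-contained Python sketch of the extraction idea described in the next paragraphs (Alg. 3 in the paper). For simplicity it uses the coarser support set that the correctness argument below also justifies -- all backward clauses copied so far -- rather than the exact \(\sup\) bookkeeping of Alg. 3. Proof steps are plain tuples and clauses are frozensets of string literals, which are illustrative choices, and the toy proof loosely follows the clauses of the running example rather than the full Fig. 2 proof.

```python
# Illustrative sketch only: extract an interpolant of the shape
# "backward clauses imply forward clauses" from a modular DRUP proof.
def extract_interpolant(proof):
    """proof: ordered list of (kind, clause) with kind in
    {'asserted_m', 'asserted_s', 'rup_m', 'rup_s', 'cp_m_to_s', 'cp_s_to_m'};
    clauses are frozensets of literals such as 's1' / '-s1'.
    Returns a list of (support, clause) pairs, each read as: the conjunction
    of the clauses in `support` implies `clause`."""
    backward = []   # clauses copied from m to s so far (L_B of the prefix)
    itp = []
    for kind, clause in proof:
        if kind == 'cp_m_to_s':
            backward.append(clause)
        elif kind == 'cp_s_to_m':
            # coarse support: every backward clause copied so far;
            # Alg. 3 instead records the exact subset used to derive `clause`
            itp.append((tuple(backward), clause))
    return itp

toy_proof = [
    ('cp_m_to_s', frozenset({'-s2', 's3'})),   # backward clause: s2 => s3
    ('rup_s',     frozenset({'-s3', 's4'})),   # learned locally in module s
    ('rup_s',     frozenset({'-s1', 's4'})),   # learned in s using the backward clause
    ('cp_s_to_m', frozenset({'-s1', 's4'})),   # forward copy to module m
    ('rup_m',     frozenset()),                # module m derives the empty clause
]
for support, clause in extract_interpolant(toy_proof):
    print(sorted(map(sorted, support)), '=>', sorted(clause))
```

Running it prints the single implication \((s_{2}\Rightarrow s_{3})\Rightarrow(s_{1}\Rightarrow s_{4})\), which has exactly the shape discussed next: backward clauses imply forward clauses.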
Notice that whenever a clause is copied from module \(i\) to module \(j\), it is implied by all the clauses in \(\Phi_{i}\) together with all the clauses that have been copied from module \(j\). We refer to clauses copied from \(S_{\mathbf{m}}\) to \(S_{\mathbf{s}}\) as _backward_ clauses and clauses copied from \(S_{\mathbf{s}}\) to \(S_{\mathbf{m}}\) as _forward_ clauses. The conjunction of forward clauses is unsatisfiable with \(S_{\mathbf{m}}\). This is because, in the last step of \(\pi\), \(\bot\) is added to \(S_{\mathbf{m}}\), either through rup or by cp \(\bot\) from \(S_{\mathbf{s}}\). Since all the clauses in module \(\mathsf{m}\) are implied by \(\Phi_{\mathbf{m}}\) together with forward clauses, this means that the conjunction of forward clauses is unsatisfiable with \(\Phi_{\mathbf{m}}\). In addition, all forward clauses were learned in module \(\mathbf{s}\), with support from backward clauses. This means that every forward clause is implied by \(\Phi_{\mathbf{s}}\) together with the subset of the backward clauses used to derive it. Intuitively, we should therefore be able to learn an interpolant with the structure: backward clauses imply forward clauses. Alg. 3 describes our interpolation algorithm. It traverses a modular DRUP proof forward. For each clause _cls_ learned in module \(\mathbf{s}\), the algorithm collects the set of backward clauses used to learn _cls_. This is stored in the \(\sup\) data structure -- a mapping from clauses to sets of clauses. Finally, when a forward clause \(c\) is copied, it adds \(\sup(c)\Rightarrow c\) to the interpolant.

**Example 4**: We illustrate our algorithm using the modular DRUP proof from Fig. 2. On the first cp step (\(\text{cp}(\mathsf{m}),\mathsf{s},s_{2}\Rightarrow s_{3}\)), the algorithm assigns the \(\sup\) for clause \(s_{2}\Rightarrow s_{3}\) as itself (line 8). The first clause learnt in module \(\mathsf{s}\), \((\mathsf{rup},\mathsf{s},s_{3}\Rightarrow s_{4})\), is derived from just the clauses in module \(\mathsf{s}\) and no backward clauses. Therefore, after RUP, our algorithm sets \(\sup(s_{3}\Rightarrow s_{4})\) to \(\top\) (line 12). The second clause learnt in module \(\mathsf{s}\), \(s_{1}\Rightarrow s_{4}\), is derived from module \(\mathsf{s}\) with the support of the backward clause \(s_{2}\Rightarrow s_{3}\). Therefore, \(\sup(s_{1}\Rightarrow s_{4})=\{s_{2}\Rightarrow s_{3}\}\). When this clause is copied forward to module \(\mathsf{m}\), the algorithm updates the interpolant to be \((s_{2}\Rightarrow s_{3})\Rightarrow(s_{1}\Rightarrow s_{4})\).

Fig. 2: An example of a modular DRUP proof. Clauses are written in human-readable form as implications, instead of in the DIMACS format.

Next, we formalize the correctness of the algorithm. Let \(L_{B}(\pi)=\{\mathit{cls}\mid(\mathsf{cp}(\mathsf{m}),\mathsf{s},\mathit{cls})\in\pi\}\) be the set of clauses copied from module \(\mathsf{m}\) to \(\mathsf{s}\) and \(L_{F}(\pi)=\{\mathit{cls}\mid(\mathsf{cp}(\mathsf{s}),\mathsf{m},\mathit{cls})\in\pi\}\) be the set of clauses copied from module \(\mathsf{s}\) to \(\mathsf{m}\). From the validity of modular DRUP proofs, we have that:

**Lemma 2**: _For any step \(c_{i}=(\mathsf{cp}(\mathsf{s}),\mathsf{m},\mathit{cls})\in\pi\), \((L_{B}(\pi^{i})\land\Phi_{\mathsf{s}})\Rightarrow\mathit{cls}\), and for any step \(c_{j}=(\mathsf{cp}(\mathsf{m}),\mathsf{s},\mathit{cls})\in\pi\), \((L_{F}(\pi^{j})\land\Phi_{\mathsf{m}})\Rightarrow\mathit{cls}\). \(\Box\)_

For any clause \(\mathit{cls}\) copied from one module to the other, we use the shorthand \(\sharp(\mathit{cls})\) to refer to the position of the copy command in the proof \(\pi\). That is, \(\sharp(\mathit{cls})\) is the smallest \(k\) such that \(c_{k}=(\mathsf{cp}(i),j,\mathit{cls})\in\pi\). The following is an invariant in a valid modular DRUP proof:

**Lemma 3**: \[\forall\mathit{cls}\in L_{F}(\pi)\cdot\big{(}(\Phi_{\mathsf{m}}\land L_{F}(\pi^{\sharp(\mathit{cls})}))\Rightarrow L_{B}(\pi^{\sharp(\mathit{cls})})\big{)}\quad\Box\]

These properties ensure that adding \(L_{B}(\pi^{\sharp(\mathit{cls})})\Rightarrow\mathit{cls}\) for every forward clause \(\mathit{cls}\) results in an interpolant. Alg. 3 adds \((\sup(\mathit{cls})\Rightarrow\mathit{cls})\) as an optimization. Correctness is preserved since \(\sup(\mathit{cls})\) is a subset of \(L_{B}(\pi^{\sharp(\mathit{cls})})\) that together with \(\Phi_{\mathsf{s}}\) suffices to derive \(\mathit{cls}\) (formally, \(\sup(\mathit{cls})\land\Phi_{\mathsf{s}}\vdash_{\mathit{UP}}\mathit{cls}\)).

**Theorem 5**: _Given a modular DRUP proof \(\pi\) for \(\Phi_{\mathsf{s}}\land\Phi_{\mathsf{m}}\), \(\mathit{itp}\triangleq\{\sup(c)\Rightarrow c\mid c\in L_{F}(\pi)\}\) is an interpolant for \(\langle\Phi_{\mathsf{s}},\Phi_{\mathsf{m}}\rangle\). \(\Box\)_

Proof: Since all copy steps are over interface variables, the interpolant is also over interface variables. By Lemma 2 (and the soundness of the \(\sup\) optimization), \(\Phi_{\mathsf{s}}\Rightarrow\mathit{itp}\). Next, we prove that \((\Phi_{\mathsf{m}}\wedge\mathit{itp})\Rightarrow\bot\). From Lemma 3, we have that for all \(c\in L_{F}(\pi)\), \((\Phi_{\mathsf{m}}\land L_{F}(\pi^{\sharp(c)}))\Rightarrow\sup(c)\). Therefore, \((\Phi_{\mathsf{m}}\land L_{F}(\pi^{\sharp(c)})\land(\sup(c)\Rightarrow c))\Rightarrow c\). \(\blacksquare\)

It is much simpler to extract interpolants from modular DRUP proofs than from arbitrary DRUP proofs. This is not surprising since the interpolants capture exactly the information that is exchanged between solvers. The interpolants are not in CNF, but can be converted to CNF after extraction.

## V Guiding specSMS via solver callbacks

As the reader may have noticed, deciding when to switch to speculative mode is non-trivial. Heuristics may be implemented, as typically done in SAT solvers, but we consider this out of the scope of this paper. Instead, we provide an interface for users to guide specSMS based on solver callbacks. This scheme has recently proven useful to guide SMT solving and to define custom theories in Z3 [2]. Users may provide a function NextSplit to guide the solver in whether to speculate and over which variables to decide. specSMS calls NextSplit whenever the next decision is about to be made. specSMS expects NextSplit to return \(\mathit{none}\) (default, in which case the underlying heuristics are used) or a pair \((\mathit{ch\_mode},\mathit{Vars})\) where \(\mathit{ch\_mode}\) is a Boolean that indicates a change to speculative mode and \(\mathit{Vars}\) is a (possibly empty) set of variables to assign. To implement NextSplit, users can ask the solver if a variable has been assigned using the IsFixed function. We illustrate the API with some examples.

**Example 5**: Consider modular queries of the following form: \(\langle\psi_{in}(\ell,in),\psi_{\mathrm{SHA-1}}(\mathit{in},\mathit{out})\rangle\), where \(\ell\) is a 2-bit vector, \(\mathit{in}\) is a 512-bit vector (shared), and \(\mathit{out}\) is a 160-bit vector.
\(\psi_{in}\) encodes that there are four possible messages: \[\psi_{in}\triangleq(\ell=0\land\mathit{in}=\mathit{msg}_{0})\lor(\ell=1\land\mathit{in}=\mathit{msg}_{1})\lor(\ell=2\land\mathit{in}=\mathit{msg}_{2})\lor(\ell=3\land\mathit{in}=\mathit{msg}_{3})\] and \(\psi_{\mathrm{SHA-1}}(\mathit{in},\mathit{out})\) encodes the \(\mathrm{SHA-1}\) circuit together with some hash: \[\psi_{\mathrm{SHA-1}}\triangleq(\mathrm{SHA-1}_{circ}(\mathit{in},\mathit{out})\wedge\mathit{out}=\mathit{shaVal})\] Roughly, the modular query \(\langle\psi_{in},\psi_{\mathrm{SHA-1}}\rangle\) asks whether the \(\mathrm{SHA-1}\) of any \(\mathit{msg}_{i}\) is \(\mathit{shaVal}\). As we saw in Sec. II, we are interested in using speculation in queries of this form in order to avoid the hard problem of inverting SHA-1 (as required by SMS, for example). We can guide the solver with the function NextSplit\(()\triangleq(\top,\ell)\). Speculation is useful for such queries both in cases where the formulas are satisfiable and unsatisfiable. If unsat, only the 4 inputs for SHA-1 specified by \(\mathit{msg}_{i}\) need to be considered, avoiding the expensive hash inversion problem. If sat, only two decisions on the bits of \(\ell\) result in fully assigning \(\mathit{in}\), which results again in just checking the hash.

**Example 6**: Next, consider a different form of modular query: \(\langle\gamma_{in}(\ell,x,in),\gamma_{\mathrm{SHA-1}}(\mathit{in},x,\mathit{out})\rangle\), where \(x\) is a \(512\)-bit vector, \(\ell\) is a \(160\)-bit vector, the \(\mathit{chks}_{i}\) are 512-bit values, and the remaining variables are the same as in \(\psi_{in}\) and \(\psi_{\mathrm{SHA-1}}\), and \[\gamma_{in}\triangleq\mathrm{SHA-1}_{circ}(x,\ell)\land\big{(}(\ell=\mathit{chks}_{0}\land\mathit{in}=\mathit{msg}_{0})\lor(\ell=\mathit{chks}_{1}\land\mathit{in}=\mathit{msg}_{1})\lor(\ell=\mathit{chks}_{2}\land\mathit{in}=\mathit{msg}_{2})\lor(\ell=\mathit{chks}_{3}\land\mathit{in}=\mathit{msg}_{3})\big{)}\] \[\gamma_{\mathrm{SHA-1}}\triangleq(x=1\lor x=4)\wedge\mathrm{SHA-1}_{circ}(\mathit{in},\mathit{out})\land\mathit{out}=\mathit{shaVal}\] This is an example where bi-directional search is necessary to efficiently solve the query. If deciding only on \(\gamma_{\mathrm{SHA-1}}\), we encounter the hard problem of inverting \(\mathrm{SHA-1}_{circ}\); if speculating directly in \(\gamma_{in}\), we encounter the same problem, since an assignment for \(x\) needs to be found, based on the four values for \(\ell\). Accordingly, in this case, we are not interested in speculating immediately, but rather first decide on the value of \(x\) in the main solver and then speculate. The following NextSplit implementation captures this idea: \[\text{NextSplit}()\triangleq\textbf{if}(\textbf{not}\ \text{IsFixed}(x))\ (\bot,x)\] \[\textbf{else}\ \textbf{if}(\textbf{not}\ \text{IsFixed}(\ell))\ (\top,\ell)\] \[\textbf{else}\ \textit{none}\] This example gives an intuition for the instances on which specSMS is better than SMS. Even if SMS is guided by NextSplit, at least one inversion of \(\mathrm{SHA}\)-\(1_{circ}\) would have to be computed.

## VI Implementation and Validation

We have implemented specSMS (and SMS) inside the extensible SAT-solver of Z3 [5]. For SMS, we simply disable speculation. The code is publicly available on GitHub2. Footnote 2: [https://github.com/hgvk94/z3/tree/psms](https://github.com/hgvk94/z3/tree/psms). We have validated specSMS on a set of handcrafted benchmarks, based on Ex.
5, using the query \(\langle\psi_{in},\psi_{\mathrm{SHA}-1}\rangle\). In the first set of experiments, we check sat queries by generating one \(\textit{msg}_{i}\) in \(\psi_{in}\) that matches _shaVal_. In the second set, we check unsat queries, by ensuring that no \(\textit{msg}_{i}\) matches _shaVal_. To evaluate performance, we make \(\psi_{\mathrm{SHA}-1}\) harder to solve by increasing the number of rounds of the \(\mathrm{SHA}\)-\(1\) circuit encoded in the \(\mathrm{SHA}\)-\(1_{circ}\) clauses. We used \(\text{SAT-encoding}\)3 to generate the \(\mathrm{SHA}\)-\(1_{circ}\) with different numbers of rounds (SAT-encoding supports 16 to 40 rounds). Footnote 3: Available at [https://github.com/saeedmi/SAT-encoding](https://github.com/saeedmi/SAT-encoding). We use the same guidance for both SMS and specSMS: \(\text{NextSplit}()\triangleq(\top,\ell)\). This means that once the secondary solver is active, both specSMS and SMS branch on the \(\ell\) variables first. However, SMS does not use speculation. Thus, it only switches to the secondary solver after finding a satisfying assignment to the main solver. In contrast, in specSMS, the secondary solver becomes active immediately due to speculation, and, accordingly, the search starts by branching on the \(\ell\) variables. Results for each set of the queries are shown in Tab. I. Column "# rounds" shows the number of \(\mathrm{SHA}\)-\(1\) rounds encoded in \(\psi_{\mathrm{SHA}-1}\). The problems quickly become too hard for SMS. At the same time, specSMS solves all the queries quickly. Furthermore, the run-time of specSMS appears to grow linearly with the number of rounds. The experiments show that prioritizing decisions on \(\ell\), which is effective in specSMS with speculation, is ineffective in SMS. This is expected since this guidance becomes relevant to SMS only after the main solver has guessed a satisfying assignment to \(\psi_{\mathrm{SHA}-1}\). This essentially amounts to SMS noticing the guidance only after inverting the \(\mathrm{SHA}\)-\(1\) circuit, which defeats the purpose of the guidance. As far as we can see, no other guidance can help SMS since there are no good variables to branch on in the main solver and SMS does not switch to the secondary solver until the main solver is satisfied.

## VII Conclusion and Future Work

Modular SAT-solving is crucial for efficient SAT-based unbounded Model Checking. Existing techniques, embedded in IC3/PDR [3] and extended in SMS [1], trade the efficiency of the solver for the simplicity of conflict resolution. In this paper, we propose a new modular SAT-solver, called specSMS, that extends SMS with truly bi-directional reasoning. We show that it is provably more efficient than SMS (and, therefore, IC3/PDR). We implement specSMS in Z3 [5], and extend it with DRUP-style [12] proofs and proof-based interpolation. We believe this work is an avenue to future efficient SAT- and SMT-based Model Checking algorithms. In this paper, we rely on user callbacks to guide specSMS on when to start speculation and (optionally) what variables to prefer in the decision heuristic. This is sufficient to show the power of bi-directional reasoning over the uni-directional reasoning of IC3/PDR and SMS. However, this does limit the general applicability of specSMS. In the future, we plan to explore guiding speculation by the number of conflicts in the main solver, possibly using a strategy similar to the one used for guiding restarts in a modern CDCL SAT-solver [6].
A much earlier version of speculation, called _weak abstraction_, has been implemented in the Spacer Constrained Horn Clause (CHC) solver [10]. Since Spacer extends IC3/PDR to SMT, the choice of speculation is based on theory reasoning. Speculation starts when the main solver is satisfied modulo some theories (e.g., Linear Real Arithmetic or Weak Theory of Arrays). Speculation often prevents Spacer from being stuck in any one SMT query. However, Spacer has no inter-modular propagation and no _refinement_. If _validation_ fails, speculation is simply disabled and the query is tried again without it. We hope that extending specSMS to theories can be used to make Spacer heuristics much more flexible and effective. DPLL(T)-style [7] SMT-solvers can be seen as modular SAT-solvers where the main module is a SAT solver and the secondary solver is a theory solver (often EUF-solver that is connected to other theory solvers such as a LIA solver). This observation credited as an intuition for SMS [1]. In modern SMT-solvers, all decisions are made by the SAT-solver. For example, if a LIA solver wants to split on a bound of a variable \(x\), it first adds a clause \((x\leq(b-1)\lor x\geq b)\), where \(b\) is the desired bound, to the SAT-solver and then lets the SAT-solver branch on the clause. specSMS extends this interaction by allowing the secondary solver (i.e., the theory solver) to branch without going back to the main solver. Control is returned to the main solver only if such decisions tangle local decisions of the two solvers. We hope that the core ideas of specSMS can be lifted to SMT and allow more flexibility in the interaction between the DPLL-core and theory solvers. \begin{table} \begin{tabular}{c c c c c c} \hline \hline & \multicolumn{3}{c}{time (s) – sat} & \multicolumn{3}{c}{time (s) – unsat} \\ \# rounds & SMS & specSMS & \# rounds & SMS & specSMS \\ \hline 16 & 1.96 & 0.41 & 16 & 0.77 & 0.65 \\ 21 & – & 0.66 & 21 & – & 0.89 \\ 26 & – & 0.66 & 26 & – & 0.91 \\ 31 & – & 0.81 & 31 & – & 1.08 \\ 36 & – & 1.01 & 36 & – & 1.45 \\ 40 & – & 1.16 & 40 & – & 1.77 \\ \hline \hline \end{tabular} \end{table} TABLE I: Solving time with a timeout of 600s.
2307.00130
Information Extraction in Domain and Generic Documents: Findings from Heuristic-based and Data-driven Approaches
Information extraction (IE) plays a very important role in natural language processing (NLP) and is fundamental to many NLP applications that are used to extract structured information from unstructured text data. Heuristic-based searching and data-driven learning are the two mainstream implementation approaches. However, little attention has been paid to the influence of document genre and length on IE tasks. To fill this gap, in this study we investigated the accuracy and generalization abilities of heuristic-based searching and data-driven learning in performing two IE tasks: named entity recognition (NER) and semantic role labeling (SRL) on domain-specific and generic documents of different lengths. We posited two hypotheses: first, short documents may yield better accuracy results compared to long documents; second, generic documents may exhibit superior extraction outcomes relative to domain-dependent documents due to training document genre limitations. Our findings reveal that no single method demonstrated overwhelming performance in both tasks. For named entity extraction, data-driven approaches outperformed symbolic methods in terms of accuracy, particularly in short texts. In the case of semantic role extraction, we observed that the heuristic-based searching method and the data-driven model with syntax representation surpassed the performance of the purely data-driven approach that considers only semantic information. Additionally, we discovered that different semantic roles exhibited varying accuracy levels with the same method. This study offers valuable insights for downstream text mining tasks, such as NER and SRL, when addressing various document features and genres.
Shiyu Yuan, Carlo Lipizzi
2023-06-30T20:43:27Z
http://arxiv.org/abs/2307.00130v1
# Information Extraction in Domain and Generic Documents: Findings from Heuristic-based and Data-driven Approaches ###### Abstract Information extraction (IE) plays a very important role in natural language processing (NLP) and is fundamental to many NLP applications that are used to extract structured information from unstructured text data. Heuristic-based searching and data-driven learning are the two mainstream implementation approaches. However, little attention has been paid to the influence of document genre and length on IE tasks. To fill this gap, in this study we investigated the accuracy and generalization abilities of heuristic-based searching and data-driven learning in performing two IE tasks: named entity recognition (NER) and semantic role labeling (SRL) on domain-specific and generic documents of different lengths. We posited two hypotheses: first, short documents may yield better accuracy results compared to long documents; second, generic documents may exhibit superior extraction outcomes relative to domain-dependent documents due to training document genre limitations. Our findings reveal that no single method demonstrated overwhelming performance in both tasks. For named entity extraction, data-driven approaches outperformed symbolic methods in terms of accuracy, particularly in short texts. In the case of semantic role extraction, we observed that the heuristic-based searching method and the data-driven model with syntax representation surpassed the performance of the purely data-driven approach that considers only semantic information. Additionally, we discovered that different semantic roles exhibited varying accuracy levels with the same method. This study offers valuable insights for downstream text mining tasks, such as NER and SRL, when addressing various document features and genres. information extraction, named entity extraction, semantic roles, heuristic approach, data-driven. ## I Introduction Information Extraction (IE) serves as a crucial element in the realm of natural language processing, enabling the procurement of significant semantic details from vast quantities of textual data [11][26]. The IE system can be distilled into three principal components: Semantic Role Labeling (SRL), Named Entity Recognition (NER), and Relation Extraction (RE), each with its unique functionality [11]. NER, as a critical stage of the IE process, distinguishes and categorizes pertinent entity constituents within unstructured data. It meticulously classifies recognized named entities into preordained categories, embodying diverse forms such as individuals, organizations, and locations, along with cardinal and ordinal entities [15]. Simultaneously, RE operates to determine the associations among the categorized named entities, thus elaborating on the interconnectedness and interdependencies within the extracted information [1]. SRL, on the other hand, takes on the responsibility of discerning the semantic roles within a sentence autonomously. By identifying components like the subject, predicate, and object, or by comprehending the 'predicate-argument' structure, it systematically answers the question 'who did what to whom' [18]. By harmonizing these distinct components, the Information Extraction (IE) process elucidates a tri-dimensional interpretation of the textual data, thus manifesting comprehension of the inherent semantic structures. Two principal methodologies have been delineated for executing Information Extraction (IE) tasks.
The first, a heuristic approach, is predominantly employed to extract IE components from textual data. However, this approach suffers from a significant limitation -- a constrained generalization capacity to extract information from unseen data. The second methodology adopts a data-driven approach to identify and model patterns of IE components within the textual data. Despite its efficacy, this approach encounters a significant drawback linked to the quality of the training data and the performance of model training. These dependencies may restrict the overall effectiveness and adaptability of this approach. Current IE research tends to focus on improving model architecture using public data and demonstrating how and why proposed models enhance task performance on public data. However, there is a discernible lack of emphasis on exploring the applicability of these proposed models on realistic datasets such as textbooks, news articles, scholarly papers, and other literary resources. This discrepancy establishes a substantial chasm between academic research endeavors and the practical application of these models on authentic datasets. Bridging the gap between academic research and practical application, this study integrates both heuristic and data-driven methodologies in IE tasks. To delve deeper into the performance metrics of these approaches, we diversify the corpus of examined textual data in terms of genre and length. So, our investigation scrutinizes the efficacy of IE in different length of domain-independent (generic) and domain-specific documents, utilizing heuristic and data-driven methodologies. Given that the heuristic approach to relationship extraction between named entities necessitates the inclusion of a verb list considered indicative of noun relationships [19] - a method akin to the one deployed in Semantic Role Labeling (SRL), our research places particular emphasis on evaluating the performance of NER and SRL. In summary, this study will investigate the following aspects: 1. NER/SRL performance of heuristic approaches and data-driven methods on different-domain documents. 2. The influence of document length on NER/SRL tasks for taxonomy and data-driven methods. The hypothesis underlining this study postulates that heuristic approaches, heavily reliant on predefined rules for recognizing and categorizing named entities, may present high accuracy. However, these may falter when confronted with unidentified words or phrases, resulting in an incomplete outcome in the IE task. Conversely, data-driven methodologies, capitalizing on learning language characteristics and optimizing parameters, may demonstrate superior generalization capabilities, fortifying the robustness of the model. The validity of our hypothesis is corroborated by the findings that emerged from the study: * Data-driven techniques manifested superior performance in NER task compared to heuristic strategies in terms of classification accuracy. * The length of a document was found to significantly influence the NER task within a data-driven approach. * On the other hand, document length had a marginal impact on the SRL task. * In the SRL task, syntactic information within the data-driven methodology was found to play a pivotal role. The results of this research may serve as guidance for NER/SRL applications in different domains and document lengths, enabling the effective utilization of both accuracy and generalization. 
The structure of this study unfolds in the following sequence: Section II introduces the related work pertaining to NER and SRL tasks, encompassing their definitions as well as traditional approaches employed to execute these tasks. In Section III, a detailed exposition of the toolkit and model selected to perform heuristic and data-driven IE tasks is provided. Section IV presents the evaluative metrics employed to assess the results of the IE tasks in real data application scenarios. Following this, Section V delineates the training results obtained from the data-driven approach and conveys the inference results for IE tasks derived from both heuristic and data-driven methodologies. Finally, in Section VI, we conduct an in-depth analysis of the inference results and perform an ablation study of the SRL task utilizing an off-the-shelf pipeline. ## II Related Work Both NER and SRL tasks share the common goal of assigning labels to each word in the input sequence. These tasks, which produce output label sequences of the same length as the input sequences, are referred to as sequence labeling tasks [10]. In this section, we will give a short review of definition of named entity recognition and semantic role labeling and the traditional processing approaches correspondingly. ### _Ner_ #### Ii-A1 Definition of NER Originally, named entities (NEs) referred to anything that could be denoted by a proper name [10]. The term has since been broadened to encompass key elements that convey significant information in text data, such as temporal (date and time) and numeric (price) expressions [10]. NER involves identifying and categorizing NEs present in the text and extracting the entity span with labels [10]. NER serves as a critical component in numerous semantic-based natural language processing tasks. For instance, in knowledge graph construction, named entities connect the analyzed text to structured databases such as wikibase, while in causal inference, events and participants can be extracted as named entities. Classical and modern algorithms for named entity tasks will be introduced in the latter half of this chapter and the methods section. #### Ii-A2 Traditional Approach of NER Traditional approaches to NER have included rule-based methods, which depend on a set of hand-crafted rules and dictionaries. For example, a rule might be that any capitalized word followed by 'Inc.' is likely an organization. Another approach includes statistically modeling techniques, such as the Hidden Markov Model (HMM) and Conditional Random Field (CRF). Rule-based methods require manual crafting of rules and maintaining dictionaries, which is a labor-intensive process. These rules and dictionaries also need to be updated continually as language evolves and new entities emerge. Plus, the rules formulated for a particular language or domain frequently lack efficacy when applied to a different context. This deficiency in adaptability and scalability across diverse languages and domains hinders the effectiveness of rule-based methods. Statistical approach can struggle when there's not enough labeled data for training, a common issue known as the sparse data problem. In the case of NER, this is particularly problematic for rare entity types, where there are few examples in the training data. Another drawback of statistical approach is generalization problem, the statistical model may not generalize well to unseen data or new entity types. 
They are typically trained on specific corpora and might fail to accurately identify entities that are not well-represented in the training data. ### _Srl_ #### Ii-B1 Definition of SRL Semantic role pertains to the relationship between an argument element and the root predicate word in a semantic context [6]. The task of SRL involves assigning labels to words or phrases in a sentence, indicating their semantic roles (SRs), such as the action performer, action receiver, and the action itself. Semantic roles can be represented in various formats. In the Semantic Web standard, the tripartite structure, known as semantic triples, adopts the SPO format (subject-predicate-object) [13]. Alternative semantic role representations also exist. In 1968, Charles J. Fillmore introduced the FrameNet project, which systematically described predicate frames and their associated semantic roles [18]. FrameNet's semantic role representation employs the 'arguments-predicate' structure, where arguments correspond to the subject and object, and the predicate retains the same meaning as in the SPO format. #### Ii-B2 Traditional Approach of SRL Feature-based SRL bears similarities to NER, employing 'gazetteers' or 'name lists' as parsers. Traditional SRL approaches utilize rule-based methods, traversing syntax trees or syntactic dependency trees to exclude words that are unlikely to be arguments. Subsequently, techniques such as Maximum Entropy models are used to identify all arguments belonging to a predicate from candidate arguments [10]. The algorithm resembles NER-based transition models like HMM and CRF, relying on training data from a given annotation, such as PropBank or FrameNet, with the predicate as the root node and other characters as child nodes. Traditional methods can struggle with arguments that are far from the predicate in the sentence structure, a phenomenon known as long-distance dependencies. ## III Research Design and Methodology In the method section, we will delineate the data pre-processing steps, including tokenization, stop word removal, and customized cleaning. Additionally, we will introduce the toolkits employed in the heuristic and data-driven approaches. Finally, we will detail the methods of the heuristic and data-driven approaches in NER and SRL tasks, encompassing the rules adopted in the heuristic approach, and the training data and training strategy in the data-driven approach. ### _Data Preprocessing_ Data preprocessing, and in particular, text data preprocessing, plays a pivotal role in NLP tasks. This is primarily due to the inherently unstructured nature of text data, which is often replete with elements that can be characterized as 'noise.' These elements, which include special characters, punctuation, and HTML tags (in the case of web-sourced data), may complicate the learning process of the models by introducing unnecessary complexity and ambiguity. By removing or normalizing these elements through preprocessing, the text data can be transformed into a cleaner and more structured format. This streamlined format not only eases the model's learning process but also enhances its capacity to discern and learn effective patterns within the data. ### _Toolkits for IE_ spaCy: We employ spaCy's empty English model 'en' and 'EntityRuler' pipeline to create a customized NER dataset for fine-tuning the data-driven model. The empty English model solely provides tokenization functionality without any pre-trained features.
NLTK: We utilize WordNet in conjunction with the NLTK module to identify hypernyms for words in named entities or SPO triples. coreNLP: Because the superior POS and dependency parsing performance among commonly used parsing frameworks [21], we opted for coreNLP to extract nouns and SPO triples in this research. We did not utilize any other pre-trained or trained results from the selected library beyond POS and dependency parsing. ### _Heuristic Approach_ #### Iii-C1 Heuristic NER Heuristic approach has been widely used in diversified fields, such as vulnerability detection in cybersecurity [29]. The heuristic approach in this research involves assigning named entities to their corresponding hypernyms, which represent the entity class. To identify named entities from both domain-independent and domain-specific documents using a taxonomy approach, we extract all nouns present in the text data. * We employ coreNLP to extract all nouns present in a sentence. (In this case, we utilize the top-frequency nouns related to our benchmark words to construct the entity network) ``` 1:procedureHeuristic_NER(input_sequence \(s\) ) 2: Create list \(NN\), initially empty 3:\(tokens\gets coreNLP\_token(s)\) 4:for\(token\) in \(tokens\)do 5:if\(token.pos=\)'NNP' then 6:\(NN\gets NN+token.word\) 7:endif 8:endfor 9:endprocedure ``` **Algorithm 1** Heuristic NER #### Iii-C2 Heuristic SRL In the SRL task, we employ regular expressions to extract the'subject, predicate, object' components from the dependency parsing and POS components in coreNLP. By leveraging trained POS and dependency parsers, we establish several rules in the form of regular expressions to parse and extract semantically significant roles in the text. We utilize the 'enhancedPlusPlusDependencies' component in coreNLP. The extraction process consists of the following steps: 1. Based on the predicted predicate, if the 'dep' contains'sub', we extract the token as'subject.' If the 'dep' is'subj:pass', indicating passive voice, we prepend 'be' to the predicate; If a negation modifier 'neg' is present for the predicate, we prepend 'not' to the predicate. 2. If an 'obj' dependency relation exists for the predicate, the token is extracted as the 'object' related to the predicate. 3. If no 'obj' dependency is associated with the detected predicate, we explore the second hierarchy by searching for 'obl', an oblique argument corresponding to an adverbial attaching to a verb [5]. 4. If a 'compound' dependency relationship exists with'subj', 'obj', 'obl', we extract the 'compound' token along with the subject or object token as a compound token. If the 'dep' type includes 'or' or 'and', indicating multiple equivalent subjects or objects in the sentence, we insert 'and' or 'or' between the tokens. 
``` 1:procedureHeuristic_SRL(input_sequence \(s\) ) 2: Create list \(predicate\), initially empty 3: Dependencies \(dependecies\gets coreNLP(s)\) 4:\(predicate\gets predicate\)+\(dependencies[\)'root'\(]\) 5:for\(edge\) in \(dependencies[\)'edge'\(]\)do 6:if'sub'\(\in edge.dep\)then 7:\(sub\gets edge.target\) 8:endif 9:if'subj:pass'\(\in edge.dep\)then 10:\(predicate\leftarrow\)'be'\(+predicate\) 11:\(sub\gets edge.target\) 12:endif 13:if'neg'\(\in edge.dep\)then 14:\(predicate\leftarrow\)'not'\(+predicate\) 15:endif 16:if'obj'\(\in edge.dep\)then 17:\(obj\gets edge.target\) 18:else 19:if'obl'\(\in edge.dep\)then 20:\(obj\gets edge.target\) 21:endif 22:endif 23:endfor 24:for\(edge\) in \(dependencies[\)'edge'\(]\)do 25:if'compound'\(\in edge.dep\)then 26:if\(sub\in\{edge.target,edge.source\}\)then 27:\(sub\gets s[edge.source]+s[edge.target]\) 28:endif 29:if\(obj\in\{edge.target,edge.source\}\)then 30:\(obj\gets s[edge.source]+s[edge.target]\) 31:endif 32:endif 33:endif 34:endfor 35:if\(sub\) is a number then 36:\(sub\gets s[sub]\) 37:endif 38:\(obj\gets s[obj]\) 39:endif 40:return\(sub+predicate+obj\) 41:endprocedure ``` **Algorithm 2** Heuristic SRL ### _Data-driven Method_ NER focuses on extracting NEs from documents, while SRL aims to extract SRs from documents. Traditionally, NER is considered a token classification task [17], while SRL assigns arguments to their corresponding predicates. The key difference lies in the annotation patterns of the training datasets for these two tasks. NER training datasets label named entity tokens with their respective entity types, while SRL datasets tag subject, object, and predicate tokens with their SRs. In the data-driven approach, we treat both tasks as token classification and train NER and SRL models on their respective datasets. The base model employed here is XLNet [30], a transformer-based model that achieved a 97.54% F-score on the NER task [28], the highest among other deep learning models [15]. As an autoregressive model, XLNet considers all mask permutations in each sentence, resulting in superior contextual feature learning compared to the original BERT model. In previous NER tasks, XLNet was implemented solely as an embedding layer followed by bi-LSTM or CRF layers [28]. However, in this research, we aim to investigate the model architecture's expressive ability in token classification tasks. We utilize XLNet to perform NER and SRL tasks to examine whether the autoregressive model can effectively and accurately capture language features and assess its generalization ability. #### Iii-D1 Ner For the NER task, we used the public NER data OntoNotes v5 and customized training data to fine-tune XLNet separately. We employed the pre-trained XLNet tokenizer 'xlnet-base-cased' and padded each sentence to a maximum length of 128 tokens. Following the official/default preprocessing method, we used '0' for pad token IDs and '-100' for pad labels [9]. Subsequently, we created the dataset with the following properties: 'input_ids', 'attention mask', 'labels', and 'label_mask'. In the fine-tuning approach, we froze the first 10 XLNet layers and retrained the 11th and 12th layers. A classifier was added on top of the base model, as shown in Fig1. The rationale for selecting the last two XLNet layers is that they are more susceptible to loss changes due to their proximity to the final output. By retraining the last two layers in XLNet, we fine-tuned the model with computational efficiency while maintaining relatively high performance. 
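As a rough illustration of the partial fine-tuning described above, the sketch below freezes the first ten XLNet blocks and leaves the last two blocks and the classification head trainable. It is not the authors' training script: the attribute path `model.transformer.layer`, the label count, and the optimizer setup are assumptions based on the Hugging Face `transformers` API and the configuration reported later in Table V, and may need adjustment for a particular library version.

```python
# Minimal sketch (not the original training code): partial fine-tuning of
# XLNet for token classification, freezing the first 10 transformer blocks.
import torch
from transformers import XLNetForTokenClassification

NUM_LABELS = 7  # hypothetical; set to the size of the actual tag set

model = XLNetForTokenClassification.from_pretrained(
    "xlnet-base-cased", num_labels=NUM_LABELS
)

# Freeze the embedding table and the first 10 of the 12 transformer blocks.
for param in model.transformer.word_embedding.parameters():
    param.requires_grad = False
for block in model.transformer.layer[:10]:
    for param in block.parameters():
        param.requires_grad = False

# Only the last two blocks and the token-classification head remain trainable.
trainable_params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable_params, lr=5e-5)
```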
Fig. 1: XLNet_NER
We adapted the primary architecture of 'XLNetForTokenClassification' from Hugging Face with the following modifications: * To address the imbalanced entity class issue present in both public and customized training data, we employed weighted cross-entropy loss for gradient calculation. Consequently, classes with a higher quantity of data in the total dataset are assigned lower weights when calculating loss. * Another modification involved disregarding the padded label '-100' during loss calculation to prevent seemingly good model training performance. We also excluded '-100' in the label evaluation process to obtain a more objective training result. * In the cross-entropy loss, we opted for 'sum' as the loss reduction method. This choice was made to propagate a larger value back to the model. #### Iii-C2 Srl We utilized the conll2012_ontonotesv5 dataset for training the SRL model. In this dataset, semantically significant tokens were labeled with ['B-ARG0', 'B-ARG1', 'B-ARG2', 'I-ARG0', 'I-ARG1', 'I-ARG2', 'B-V'], which is analogous to named entity labeling. Consequently, we hypothesized that the model trained for NER tasks should also be effective for SRL tasks. A survey by Li et al. [16] on the role of syntax in SRL compared graph-based models and pre-trained language models for SRL tasks. The results demonstrated that pre-trained language models, particularly XLNet and ALBERT, outperformed other methods (f1 = 89.8 vs. 91.6 on CoNLL05 WSJ, f1 = 85.4 vs. 85.1 on CoNLL05 Brown, f1 = 88.3 vs. 88.7 on CoNLL12). Given that we employed XLNet for the data-driven NER task, and the performance difference in SRL between XLNet and ALBERT is minimal, we continued to use XLNet as the base pre-trained model for fine-tuning the SRL model with the conll2012_ontonotesv5 dataset. The training strategy followed the NER fine-tuning pipeline (Fig. 2). ## IV Evaluation Metrics In this evaluation section, we present the evaluation metrics for assessing the performance of symbolic and data-driven approaches for NER and SRL tasks in both domain-specific and generic documents. Since NER and SRL have distinct task objectives, we did not apply general metrics to both tasks. Instead, we devised customized metrics to better evaluate the application results, which can represent the performance of symbolic and data-driven methods. ### _Evaluation Procedure_ As the documents in this research are not sourced from public datasets, we employed a human expert evaluation protocol. The evaluation aims to address two main aspects: 1) The performance of heuristic and data-driven approaches for information extraction tasks in both domain-independent and domain-specific documents, and 2) The impact of text data length on NER and SRL tasks in both domain-independent and domain-specific documents. The evaluation protocol is as follows: 1. We categorized our documents into four groups: short domain-independent, long domain-independent, short domain-specific, and long domain-specific text data. 2. Subsequently, we processed the four document groups using NER and SRL tasks in both symbolic and data-driven models. 3. Based on the obtained results, we employed the following metrics to assess the performance of NER and SRL tasks in symbolic and data-driven models. ### _NER Evaluation_ For the NER task, we use human expert-annotated NER text as the benchmark and calculate the matched NER labels from both the symbolic and data-driven NER results.
For the symbolic approach, we have human experts highlight the top 100 entity tokens in the ordered token list of the documents. Then, we compare the annotated token list with the extracted entities. If the annotated entities are present in the symbolic extracted entities, we consider the entity extraction as correct (true positive). If not, we treat the un-extracted entities in the annotated token list as false negatives, while the remaining taxonomy-extracted entities are considered false positives. Since we only extracted nouns in the documents, we do not have a true negative evaluation, which means accuracy is not applicable for the taxonomy methods' evaluation. For the data-driven approach, we have human experts assess the performance of the extracted entities. Specifically, for domain-independent results, we only consider the extracted entities without their entity types, because, in domain-independent documents, there is no domain scope to guide entity extraction. For domain-specific results, human experts focus solely on domain-related entities; other entities that are not within the domain scope are considered false positives. The specific quantification metrics are computed as follows: \[accuracy\_ner=\frac{\#correct\_NER\ +\ \#correct\_NON\_NER}{\#tokens}\] \[precision\_ner=\frac{\#correct\_NER}{\#correct\_NER\ +\ \#uncorrect\_NER}\] \[recall\_ner=\frac{\#correct\_NER}{\#correct\_NER\ +\ \#unextracted\_NER}\] \[F1\_ner=\frac{2\times(recall\_ner\times precision\_ner)}{recall\_ner\ +\ precision\_ner}\]
Fig. 2: XLNet_SRL
Although the F-1 score effectively represents precision and recall, we aim to analyze these two evaluations in detail for different approaches and domains. For instance, if our primary concern is the completeness of information extracted from the corpus, we would focus on recall. Conversely, if we are more concerned about inaccurate information that may impact downstream information extraction and knowledge graph construction, we would prioritize precision. The F-1 score represents both the accuracy and generalization power of the methods. Therefore, we present all metric results to address various research concerns. ### _SRL Evaluation_ We consider the subject-predicate-object annotations provided by human experts as the benchmark and compare the SPO or SRL triples extracted using the rule-based and data-driven approaches, respectively. We adopt two evaluation metrics for this aspect. One is the rigid accuracy measurement, which is calculated based on the probability of correct arguments and predicates in a single SRL triple: \[rigid\_accuracy\_SRL=\frac{\sum\#correct\_SRL}{\sum\#bench\_SRL}\] The other metric involves evaluating semantic roles separately, assessing predicate extraction and argument extraction independently. As transition-based models utilize the predicate as the root to optimize the model by maximizing the conditional probability between the root predicate and its associated arguments [10], we evaluate the extracted predicates and arguments in the results. The formulas are as follows: \[accuracy\_SRL\_verb=\frac{\sum\#correct\_predicate}{\sum\#bench\_predicate}\] \[accuracy\_SRL\_argument=\frac{\sum\#correct\_argument}{\sum\#bench\_argument}\] ## V Experiment and Results Analysis In this section, we present the document statistics of the inference data utilized in the symbolic approach and the evaluation of our trained models in the data-driven approach.
Subsequently, we report the results of the experiments conducted, including the performance of the NER and SRL models on the test data. ### _Document Features_ Our research objective is to examine the IE performance on domain-independent and domain-specific documents of varying lengths. Consequently, we analyze four documents and provide their features in Table I. ### _Training Data_ #### V-B1 Training data for NER I: Customized Neuroscience Training dataset We utilized spaCy's 'EntityRuler' pipe as an annotation tool and selected three medical glossaries as named entities with corresponding entity class names. The three glossaries represent distinct neuroscience concepts: 'BrainAnatomy', 'MedicalTerm', and 'NeuroDisorder'. We crawled terms from websites, converted them to lowercase letters, and stored both full names and abbreviations separately when applicable. Subsequently, we added the terms from the three glossaries to the 'EntityRuler' pipe in spaCy's blank English model 'en' with their respective entity names: 'BrainAnatomy', 'MedicalTerm', and 'NeuroDisorder'. The blank English model in spaCy contains only a tokenizer without any pre-trained components [24]. We saved the customized English model as'spacy/neurosci_cus' for subsequent dataset annotation (Fig.3) The annotated data will be employed as training data in the data-driven approach. We gathered 37 open-source clinical neurology publications online and processed the text data according to the protocol outlined in Section III-A. Using the saved spaCy model'spacy/neurosci_cus', we annotated the corpus in a spaCy training dataset format and converted it into BILUO format. To maintain consistency with other public datasets used in this research, which follow the BIO format for entities, we converted the dataset to its final BIO version using the subsequent approach: Original BILUO format: B = Beginning I/M = Inside / Middle L/E = Last / End O = Outside U/W/S = Unit-length / Whole / Singleton Original BIO format: B = Beginning (beginning of a named entity) I = Inside (inside a named entity) O = Outside (outside of a named entity) Conversion criteria: We modified the L tag to I since the last word in a named entity span can also be considered within the span. Additionally, a unit-length or single-word named entity can be regarded as the beginning of itself, so we changed U to B. Consequently, the customized NER training data in BILUO format was converted into BIO format. As a result, we obtained 4057 annotated sentences in the domain of clinical neuroscience. The distribution of entity categories, excluding non-entities, is illustrated in (Fig.4). The numerical distribution of entity categories is as follows: Table II #### Iv-C2 Training data for NER II: Public data (OntoNotes5) for NER We utilized the OntoNotes v5 dataset [14] as the public/generic training data for the NER task. OntoNotes v5 was annotated on a large corpus encompassing various genres, such as news, telephone records, weblogs, broadcast, and talk shows [14]. OntoNotes v5 also serves as one of the training datasets employed by spaCy and other packages for training their NER pipelines, and it is widely used in other NER tasks. OntoNotes v5 contains four sets of training data with 37 entity types, as shown in Table III. After examining the entity distribution, we opted for train.02 as our training dataset due to its relatively balanced distribution among the 37 entity types as in III. 
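The BILUO-to-BIO conversion criteria described for the customized dataset above amount to two tag rewrites (L to I, and U to B) with all other tags left unchanged. A minimal helper, shown below, makes the mapping explicit; the function name and the use of '-' as the tag separator are illustrative assumptions rather than the authors' actual conversion script.

```python
# Illustrative sketch of the BILUO -> BIO conversion described above:
# L-X becomes I-X (the last token of a span is still inside the span) and
# U-X becomes B-X (a single-token entity is the beginning of itself).
def biluo_to_bio(tags):
    bio = []
    for tag in tags:
        if tag.startswith("L-"):
            bio.append("I-" + tag[2:])
        elif tag.startswith("U-"):
            bio.append("B-" + tag[2:])
        else:  # B-, I-, and O tags are kept as they are
            bio.append(tag)
    return bio

# Example with entity classes from the customized neuroscience dataset:
print(biluo_to_bio(["B-BrainAnatomy", "L-BrainAnatomy", "O", "U-NeuroDisorder"]))
# -> ['B-BrainAnatomy', 'I-BrainAnatomy', 'O', 'B-NeuroDisorder']
```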
Fig. 3: Annotated Training Data Pipeline
Fig. 4: Customized NER
#### Iv-C3 Training data for SRL: conll2012_ontonotesv5 In the SRL data-driven approach, we used conll2012_ontonotesv5 as the training dataset, which is the extended version of OntoNotes v5.0 [8]. Besides the original v4 train/dev and v9 test data, it has a corrected version v12 train/dev/test data in English only. In the data-driven approach, we use the conll2012_ontonotesv5 v12. The total number of training documents is 10539, and each document has 200 sentences on average. In line with the NER training dataset size, we take the first nine documents with around 4900 sentences. After removing non-annotated data, we have 4083 sentences in total (NER is 4057). Another thing we need to pre-process in the conll2012_ontonotesv5 data is that the 'srl_frames' field has one to several annotated SRL relations for one sentence. For example, for the sentence 'We respectfully invite you to watch a special edition of Across China.' there are two 'srl_frames': 1. 'verb': 'invite', 'frames': ['B-ARG0', 'B-ARGM-MNR', 'B-V', 'B-ARG1', 'B-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'O'], 2. 'verb': 'watch', 'frames': ['O', 'O', 'O', 'B-ARG0', 'O', 'B-V', 'B-ARG1', 'I-ARG1', 'I-ARG1', 'I-ARG1', 'I-ARG1', 'I-ARG1', 'I-ARG1', 'I-ARG1', 'O'] In this scenario, we replicate the same sentence for all 'srl_frames' annotated for that sentence. This is because for each 'predicate-argument' structure in the 'srl_frames'-sentence pairs, the predicate serves as the center, forming a semantically significant sentence with corresponding arguments. After 'broadcasting' the sentence to each 'srl_frames' entry, the total number of training samples becomes 10,708. The 'srl_frames' format in the CoNLL-2012 OntoNotes5 dataset follows the PropBank frame labels [2], using a BIO format [8]. In PropBank frame labels, ARG0 represents 'PROTO-AGENT' [2], which corresponds to the subject in SPO, ARG1 represents 'PROTO-PATIENT' [2], which corresponds to the object in SPO, and ARG2 typically represents 'benefactive, instrument, attribute, or end state' [2]. If ARG1 is not available in a sentence, we extract ARG2 as a supplementary object in the SRL task. The 'V' label represents the target [2]. Moreover, the 'srl_frames' format in CoNLL-2012 OntoNotes5 follows the BIO format, such as 'B-ARG0' and 'I-ARG0'. The total number of syntax tag types is 70; however, we only focus on the subject-predicate-object roles, which implies that the final tags for the SRL task selected in the training dataset are: 'B-ARG0', 'B-ARG1', 'B-ARG2', 'I-ARG0', 'I-ARG1', 'I-ARG2', and 'B-V'. The numerical distribution of SRL tags is presented in Table IV, and the SRL tags distribution is depicted in Fig. 5. ### _Training Result_ We employed ADAM as the optimizer for our models. ADAM is an adaptive gradient algorithm that leverages first- and second-order momentum estimates of the gradient to adjust the learning rates of the distinct parameters within the neural network [12]. Due to the adaptive learning rate corresponding to different gradients, we set the same initial learning rate for both NER and SRL tasks. Since the NER training data (customized and public data) is smaller than the SRL training data, we assigned different batch sizes and training epochs for these two tasks.
The basic model training configuration is presented in Table V: \begin{table} \begin{tabular}{|l||l|l|} \hline Task & \multicolumn{2}{c|}{NER} & SRL \\ \hline Data & Cus & Pub & Pub \\ \hline Learning Rate & 5e-5 & 5e-5 & 5e-5 \\ optimizer & adam & adam & adam \\ batch\_size & 128 & 128 & 256 \\ epochs & 100 & 100 & 150 \\ \hline \end{tabular} \end{table} TABLE V: model configuration Fig. 5: conll2012_ontonotes5_SRL Due to our adoption of'sum' as the loss reduction method, which signifies that losses accumulate during training, both training and validation losses are relatively large. The validation loss being much smaller than the training loss indicates that our model did not overfit the training dataset. The results for NER and SRL models trained on customized training data and public data are presented in Table VI. It is evident that the NER model outperforms the SRL model in terms of all evaluation metrics. This may be attributed to the training strategy, which will be discussed in more detail in the subsequent discussion section. For the NER model, training on public data yields better performance than the model trained on the customized dataset. This could be a result of the imbalanced-entity class present in the customized training data. ### _Inference Result_ #### Iv-D1 Ner The NER evaluation benchmark is based on ordered tokens derived from the original text. We employed the spaCy blanket English model 'en' for tokenizing the corpus. The spaCy blanket English model provides only tokenization without involving any pretrained or trained modules. Table V displays the number of tokens in each analyzed document. For the domain-dependent long corpus, due to redundant information generated during the conversion from PDF to.txt, we set the evaluation coverage to 200 tokens. For other documents, the evaluation coverage is limited to 100 tokens. In the rule-based approach, we extracted nouns from sentences and mapped them to their corresponding hypernyms in WordNet. To evaluate the taxonomy NER task, we assessed the extracted nouns and the semantic relation between the extracted noun (hyponym) and its hypernym. If the semantic relation between the hyponym and hypernym aligns with the semantic meaning in the original text, we consider it to be correctly extracted; otherwise, it is deemed incorrect extraction. For data-driven methods, we have four sets of results: 'customized-domain-trained domain-specific NER result', 'public-generic-trained domain-specific NER result', 'customized-domain-trained generic NER result', 'public-generic-trained generic NER result'. We only consider the extracted entities, disregarding the entity class in the evaluation, which means we do not account for the 'hypernym-hyponym' relation in the evaluation section. We have four criteria: true positive (TP: correct_NER), true negative (TN: correct_nonNER), false positive (FP: uncorrect_NER), and false negative (FN: unextracted_NER). If an extracted entity is also labeled as an entity in the evaluation bench list, it is tagged as TP; if the extracted entity is not labeled as an entity in the evaluation bench list, it is tagged as FP; if the token is labeled as an entity in the evaluation bench list but not extracted, it is tagged as FN; if the token is neither labeled as an entity nor extracted as an entity, the token is tagged as TN. The evaluation results are presented in Table VII and VIII. 
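Given the token-level tagging rules above, the counts and the metrics from Section IV reduce to a simple comparison between the benchmark entity tokens and the extracted ones. The sketch below is only a plausible reading of that procedure; the function names and the set-based matching are assumptions, not the authors' evaluation scripts.

```python
# Illustrative sketch (not the authors' evaluation code): tag each token as
# TP / FP / FN / TN following the rules above, then compute the Section IV metrics.
def ner_confusion(tokens, bench_entities, extracted_entities):
    bench, extracted = set(bench_entities), set(extracted_entities)
    tp = fp = fn = tn = 0
    for tok in tokens:
        if tok in extracted and tok in bench:
            tp += 1      # extracted and labeled as an entity in the benchmark
        elif tok in extracted:
            fp += 1      # extracted but not in the benchmark list
        elif tok in bench:
            fn += 1      # in the benchmark list but not extracted
        else:
            tn += 1      # neither labeled nor extracted
    return tp, fp, fn, tn

def ner_metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / max(tp + fp + fn + tn, 1)
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    return accuracy, precision, recall, f1
```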
As mentioned in the methodology section, because we only extracted nouns in the rule-based methods, we do not have true negatives in the symbolic approach, leaving the symbolic accuracy section blank. Data-driven methods demonstrate better performance than symbolic approaches for both customized training datasets and public training datasets. Regarding F1 score, short documents exhibit better named entity extraction performance than long documents in both specific and generic domains (e.g., generic domain: 0.6522 vs. 0.4696). NER models trained on customized data do not perform better than NER models trained on generic data (e.g., customized f1 short vs. public f1 short: 0.5113 vs. 0.5517). #### Iv-D2 Srl We selected 10-15 sentences as the evaluation sample set, employing the keyword evaluation method. This means that if the extracted subject, predicate, and object contain the keyword present in the evaluation benchmark SPO triples, we consider it to be correctly extracted. SRL evaluation does not differentiate between domain-specific and generic documents. The following rules are applied for the evaluation of SRL extraction: First, we randomly select samples (sentences with corresponding SRs) from the results. For complex sentences, there may be multiple SRs (multiple subject-predicate-object triples). We consider all the SPO triples along with the original sentence. Subsequently, we have four criteria: true positive (TP), true negative (TN), false positive (FP), and false negative (FN). TP indicates that the keywords in the extraction align with human extractions. TN for the subject and object depends on the predicate; if there is an FP in the predicate, the subject and object are considered TN. Otherwise, if there is an FP in the subject or object, the undetected predicate will be labeled as TN. The evaluation results for both approaches are presented in Table IX. The SRL model training results are not as satisfactory as those of the NER model. The model failed to accurately extract SPO triples in the inference phase. This outcome may be attributed to the training strategy and will be addressed in the discussion section. The symbolic approach outperforms the data-driven method in the SRL task. According to the F-1 score, document length does not significantly impact extraction performance in either domain. Moreover, there is no substantial difference in accuracy between domain-specific and generic documents, which could be due to the dependency-parsing model employed in CoreNLP. CoreNLP's dependency parsing is a transition-based parser [4] that relies on the root predicate and utilizes three types of transitions ('LEFT-ARC', 'RIGHT-ARC', and 'SHIFT') to generate the dependency parse. Consequently, this dependency parser is predicate-oriented. In the discussion section, we further assess the predicate extraction performance and argument performance separately in Table XII. Additionally, we will explore a BERT based SRL extraction pipeline to further analyze our SRL training strategy, as presented in Table X. ## VI Implications of Findings In this research, we investigated the extraction of semantic roles and named entities in short and long text data using both symbolic and data-driven approaches. Based on our results, we found that no single approach demonstrated overwhelming performance compared to the other. Symbolic and data-driven methods exhibit different strengths in NER and SRL tasks. 
### _Heuristic Approach for NER_ In the NER task, we extracted nouns from sentences and mapped them to their hypernyms in WordNet. The results showed that the extracted nouns did not align with the entities that humans expect to find in the documents, especially for domain-dependent documents. The reason lies in the unknown words such as 'cytoarchitecture' [31] in the transition-based model. As most of the domain-specific terminology expected to be extracted has never appeared in CoreNLP's training dataset, and the transition-based model lacks generalization capabilities for unseen words, many domain-specific entities are not identified and extracted in the heuristic approach. Text length does not significantly impact extraction performance in the heuristic approach. However, subtle differences can still be inferred from the results. For domain-independent documents, short documents exhibit better extraction results than long documents (F1 score in 'nyt' vs. 'hp': 0.3 vs. 0.03). A similar conclusion can be drawn for domain-dependent documents, although the influence of length is not as strong (F1 score in'short_brain' vs. 'long_brain': 0.04 vs. 0). Nonetheless, domain-independent documents yield better extraction results than domain-dependent documents, regardless of document length. This may be due to the higher percentage of unknown words in long domain-specific documents compared to long generic documents. ### _Data-Driven for NER_ In the NER task, we did not consider the entity class in the evaluation metrics, which means the evaluation results only reflect the entity extraction performance for each model. Compared to symbolic methods, the data-driven approach demonstrates better performance in the NER task, regardless of the domain of the training dataset. Specifically, XLNet fine-tuned on generic training data exhibits better performance on domain-independent documents than on domain-specific documents in terms of F1 score (short_nyt 0.65 and long_hp 0.57 vs. short_brain 0.55 and long_brain 0.20); a similar result is observed for XLNet fine-tuned on domain-customized training data (short_nyt 0.65 and long_hp 0.47 vs. short_brain 0.51 and long_brain 0.21). These results indicate that the training data domain may not significantly impact entity extraction, regardless of the entity class. This could be due to the model architecture and training strategy, where the NER model learns the structure and features of entities in the text instead of focusing on specific domains. Another reason could be the homogeneity of the training data; if entities in the domain-specific training dataset are not very similar, the NER model may not fully learn the features of the entities, leading to unsuccessful identification and extraction. The reason we did not include the entity class in the evaluation is that we could not obtain any domain-specific entity class from models trained on domain-independent (generic) training data. Document length plays an important role in NER extraction, regardless of the domain. Short documents in both domains exhibit better performance than long documents, for the following reasons: 1. Reduced complexity: short documents have fewer tokens, which means the model only needs to process fewer words, making NER prediction easier and faster. 2. Better representation: short documents typically have a higher concentration of relevant information, allowing the NER model to better represent the entities in the text. 
In contrast, longer documents may contain more irrelevant information, increasing noise in the input and making it more challenging for the model to identify entities accurately. ### _Data-Driven for SRL and Reverse Ablation Study_ In the SRL task, which is a non-domain-related NLP task but heavily dependent on POS, coreNLP demonstrates better performance than the fine-tuned XLNet model. In this research, our training strategy for the SRL task is the same as token classification/sequence labeling tasks, using the spanned SRL labels in Table IV without integrating syntax role information during training. Based on our results, semantic role extraction exhibits weak performance when only considering span-dependency. The results also indicate that an SRL training strategy should incorporate both syntax information and span dependency. To verify our hypothesis, we tested four sets of inference data on a standalone SRL extraction pipeline: AllenNLP. AllenNLP is a BERT-based model [22] with a linear classification layer on top of the transformer architecture and a BiLSTM layer. It segments the SRL training into predicate extraction and 'arguments-predicate' structure extraction. The former task follows a similar training strategy to what we used in this research: sequence labeling. They feed the sequence into a pretrained BERT model and obtain a contextual representation, the 'predicate indicator' embedding, to distinguish predicates and non-predicates in sentences. After the first step, they use the'sentence-predicate' pair as input in the following format: [[CLS] sentence [SEP] predicate [SEP]], where [CLS] represents the beginning of the sentence and [SEP] signifies the end of the sentence. Traditional transformer-based models have one [CLS] and one [SEP] in each tokenized sentence; the [SEP] in AllenNLP before the predicate is used to encode the sentence in a predicate-aware manner via the attention mechanism [22]. They then encode the sentence with labeled arguments through a one-layer BiLSTM to obtain hidden states and feed them into a linear classifier. The main differences between AllenNLP and our method lie in two aspects: * We do not separate the predicate from the whole sentence and concatenate the sentence embedding and predicate embedding. Thus, we lose the 'contextual representation' of the predicate. * We treat predicates and arguments as equally important labels. In AllenNLP, they assign more weight to predicates and use hidden states to encode the argument's contextual information and positional information. The verified SRL results are shown in Table XI and XII. Based on F-1 scores, AllenNLP exhibits better semantic role extraction performance on domain-specific documents than coreNLP. However, the accuracy score is significantly higher in coreNLP than in AllenNLP. CoreNLP does not include semantic representation in models but directly adopts syntax representation (dependency parsing).In comparison, AllenNLP combines contextual and predicate information by concatenating contextual embeddings with predicate embeddings, employing an indirect syntax representation. Nonetheless, its accuracy is lower than that of coreNLP, which relies solely on syntax encoding. This observation reinforces our hypothesis, underlining the essential role of syntax in SRL tasks. Another finding is that document length does not affect semantic role extraction in coreNLP and AllenNLP, unlike in NER tasks. Possible reasons for this include: 1. 
Sentence-level processing: SRL models typically process text at the sentence level, rather than the document level, so the length of the document does not influence the model's performance. 2. Context independence: SRL models are trained to identify semantic roles based on the context of words within a sentence, rather than the sentence's position in the document. Sentence-level contextual information is used for making predictions, so the length of the document does not affect the model's ability to discern semantic relationships within a sentence. As SRL tasks involve sentence-level processing, sentence segmentation is crucial for downstream semantic role extraction. Based on our experiments, we observed that AllenNLP demonstrates superior performance in sentence segmentation. For instance, when examining the same sentence: _'In clinical practice and clinical trials, the assessment of new or enlarging T2w lesions is often used to monitor disease activity, although the correlation between T2w lesion load and disability seems to be moderate at best [11].'_ * in AllenNLP, the sentence is _'In clinical practice and clinical trials the assessment of new or enlarging T2w lesions is often used to monitor disease activity although the correlation between T2w lesion load and disability seems to be moderate at best Li et al 2006'_[7]. * in coreNLP, the sentence is _'clinical practice clinical trials assessment enlarging lesions used monitor disease activity correlation lesion load disability moderate best li et al. gadolinium enhancing lesions mri sequences gadolinium gd application used determine areas breakdown bloodbrain barrier indicative acute disease activity figure '_. [7] From the aforementioned example, AllenNLP exhibits enhanced sentence segmentation capabilities compared to coreNLP. This is because coreNLP omits certain conjunctions, such as _'in'_ and _'and'_, and combines content from different sentences, such as _'gadolinium enhancing lesions mri sequences gadolinium gd application used determine areas breakdown bloodbrain barrier indicative acute disease activity figure'_. Also, for long dependency sentences, AllenNLP also has better segment performance than coreNLP. For example, for the sentence: _'Clinical Neurology is intended to introduce medical students and house officers to the field of neurology and to serve them as a continuing resource in their work on the wards and in the clinics'_[23] * AllenNLP extracted two sets of SRLs for the above sentence: \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline 'Clinical Neurology', & 'introduce', &'medical students and house officers', \\ \hline 'Clinical Neurology', &'serve', & 'as a continuing resource in their work on the wards and in the clinics', \\ \hline \end{tabular} * Whereas for coreNLP, the same sentence is segmented as _'stroke appendix clinical examination common isolated peripheral nerve disorders index preface clinical neurology intended introduce medical students house officers field neurology serve continuing resource work wards clinics'_[23] and extracted SPOs are as follows: \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline nerve, disorder, appendix, examination, index, preface, neurology & intend & introduce \\ \hline \end{tabular} AllenNLP outperforms coreNLP, particularly in the context of long sentences. This difference impacts predicate identification accuracy and the successful extraction of corresponding arguments. 
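For readers unfamiliar with the sentence-predicate input format discussed above, the fragment below shows how a standard BERT tokenizer's text-pair interface produces the [CLS] sentence [SEP] predicate [SEP] layout. This is only an illustration of the encoding, not AllenNLP's actual implementation, and the model name is an assumption.

```python
# Illustration of the [CLS] sentence [SEP] predicate [SEP] input layout
# (not AllenNLP's code) using a generic BERT tokenizer's text-pair interface.
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
sentence = "Clinical Neurology is intended to introduce medical students to the field of neurology."
predicate = "introduce"

encoding = tokenizer(sentence, predicate)
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))
# -> ['[CLS]', 'clinical', 'neurology', ..., '[SEP]', 'introduce', '[SEP]']
```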
Upon evaluating predicate and argument extraction, we discovered that predicate extraction performs better than argument extraction across all evaluation metrics, including considerations of domain and length. This outcome can be attributed to several factors: 1. Predicates are typically single words or simple phrases (two-three words) with 'VERB' as their only part of speech (POS). These attributes of predicates render predicate extraction a binary classification task, where the classifier only needs to label a word as a verb or non-verb. 2. In contrast, arguments often comprise multiple words or complex phrases (several words) and include various POS, such as nouns, adjectives, adverbs, pronouns, prepositions, conjunctions, interjections, numerals, articles, and determiners. Consequently, argument extraction is a multi-class classification task with increased entropy during the learning process. Both transition-based coreNLP and transformer-based AllenNLP models are predicate-oriented in their training. While the transition-based model directly utilizes POS and dependency parsing information, the transformer-based model learns POS and dependency parsing through feed-forward and backward propagation, potentially leading to information leakage during the learning process. Nevertheless, both models exhibit superior predicate extraction compared to argument extraction, which may be attributed to the aforementioned reasons. ## VII Future Direction In our forthcoming research, we aim to integrate knowledge graph technology with Named Entity Recognition (NER) and Semantic Role Labeling (SRL) for a comprehensive exploration of relation extraction. The fundamental premise is that knowledge graphs, owing to their ability to furnish additional contextual details concerning entities and their interrelations, could significantly enrich the training process of NER and SRL models, thereby enhancing their performance and predictive accuracy. Furthermore, given the remarkable success of large-scale pre-trained language models such as GPT-3 [3] and LLAMA [25], another focus of our research will be to investigate optimal methodologies for fine-tuning these models specifically for NER and SRL tasks. It is anticipated that such an approach can leverage the inherent capabilities of these models, propelling them towards superior performance in entity recognition and semantic role labeling. ## VIII Conclusion Information extraction is a critical component in knowledge graph construction, where the accuracy and generalization capabilities of models significantly impact the quality of extracted information. According to our research findings, when the task goal is to extract NER from domain-specific documents, it is advisable to divide the documents into relatively short sections and employ data-driven methods for named entity extraction. On the other hand, for extracting semantic roles from domain-specific documents, it is recommended to either utilize a symbolic approach to extract semantic roles in a rule-based manner or develop a model that integrates both semantic and syntactic information for semantic role extraction.
2309.13386
Polygamy relation of quantum correlations with equality
We provide a generalized definition of polygamy relations for any quantum correlation measures. Instead of the usual polygamy inequality, a polygamy relation with equality is given by introducing the polygamy weight. From the polygamy relation with equality, we present polygamy inequalities satisfied by the $\beta$th $(\beta>0)$ power of the quantum correlation measures. Taking concurrence of assistance as an example, we further illustrate the significance and advantages of these relations. We also obtain a polygamy relation with equality by considering the one-to-group entanglements for any quantum entanglement measures that do not satisfy the polygamy relations. We demonstrate that such relations for tripartite states can be generalized to multipartite systems.
Zhi-Xiang Jin, Bing Yu, Xue-Na Zhu, Shao-Ming Fei, Cong-Feng Qiao
2023-09-23T14:09:52Z
http://arxiv.org/abs/2309.13386v1
# Polygamy relation of quantum correlations with equality ###### Abstract We provide a generalized definition of polygamy relations for any quantum correlation measures. Instead of the usual polygamy inequality, a polygamy relation with equality is given by introducing the polygamy weight. From the polygamy relation with equality, we present polygamy inequalities satisfied by the \(\beta\)th (\(\beta>0\)) power of the quantum correlation measures. Taking concurrence of assistance as an example, we further illustrate the significance and advantages of these relations. We also obtain a polygamy relation with equality by considering the one-to-group entanglements for any quantum entanglement measures that do not satisfy the polygamy relations. We demonstrate that such relations for tripartite states can be generalized to multipartite systems. ## I Introduction As one of the essential resources in quantum information processing, quantum correlation is widely used in quantum tasks, such as quantum teleportation [1], quantum key distribution [2; 3], and entanglement swapping [4; 5]. The fundamental difference between quantum correlation and classical correlation lies in the shareability of the resources. With classical correlations, the resources can be freely shared by many different individuals, while for quantum correlations, the resources cannot be shared among individuals freely. Therefore, it is particularly important to study the quantum correlation shareability or distribution among multipartite systems and their implications in quantum information processing and quantum computing [6; 7; 8; 9; 10; 11]. Quantum entanglement, one of the most important quantum correlations, plays a crucial role in quantum communication and quantum information processing [12; 13; 14; 15; 16; 17; 18; 19]. The restriction on entanglement shareability among multiparty systems is called the monogamy of entanglement; that is, with respect to a given entanglement measure, if two subsystems are more entangled, they would be less entangled with the rest subsystems. The monogamy of entanglement is a crucial property that guarantees the quantum key distribution security since the limited shareability restricts the amount of information that an eavesdropper could potentially obtain by the secret key extraction. It also plays an imperative role in many other fields of physics, such as quantum cryptography [20; 21], phase detection [22; 23; 24], condensed matter physics [25; 26], and even black-hole physics [27; 28]. Conversely, the entanglement of assistance, a dual concept to bipartite entanglement measures, has been shown to have a dually monogamous (polygamous) property in multipartite quantum systems. Polygamy inequality was first obtained in terms of the tangle \(E_{a}\) of assistance [29] among three-qubit systems. For a three-qubit state \(\rho_{ABC}\), \[E_{a}(\rho_{A|BC})\leq E_{a}(\rho_{AB})+E_{a}(\rho_{AC}), \tag{1}\] where \(\rho_{AB}=\mathrm{Tr}_{C}(\rho_{ABC})\) and \(\rho_{AC}=\mathrm{Tr}_{B}(\rho_{ABC})\). Intuitively, Eq. (1) indicates that the entanglements of assistance between \(A\) and \(BC\) cannot exceed the sum of the individual pairwise entanglements of assistance between \(A\) and each of the remaining parties \(B\) or \(C\). In fact, monogamy relations provide an upper bound for bipartite shareability of quantum correlations in a multipartite system, while polygamy relations set a lower bound for the distribution of bipartite quantum correlations. 
Variations of the polygamy inequality and the generalizations to \(N\)-partite systems have been established for several quantum correlations [30; 31; 32; 33]. Nevertheless, to some extent, the inequality (1) captures the spirit of polygamy as a distinctive property of entanglement of assistance since its validity is not universal but rather depends on the detailed quantum states, the kind of correlations, and the measures used. For example, the GHZ-class states do not satisfy the inequality (1) for any quantum entanglement measures. In this paper, we study the general polygamy relations of arbitrary quantum correlations. Given a measure \(Q\) of general quantum correlation, we classify it as polygamous if there exists a nontrivial continuous function \(g\) such that the generalized polygamy relation, \[Q(\rho_{A|BC})\leq g(Q(\rho_{AB}),Q(\rho_{AC})), \tag{2}\] is satisfied for any state \(\rho_{ABC}\), where \(g\) is a continuous function of variables \(Q(\rho_{AB})\) and \(Q(\rho_{AC})\). For convenience, we denote \(Q(\rho_{A|BC})=Q_{A|BC}\) and \(Q_{AB}\). For a particular choice of the function \(g(x,y)=x+y\), we recover the polygamy inequality (1) from (2). As \(Q\) is a measure of quantum correlation, it is nonincreasing under partial trace. Thus, we have \(Q_{A|BC}\geq\max\{Q_{AB},Q_{AC}\}\) for any states. Specifically, the quantum correlation distribution is confined to a region smaller than a square with a side length \(Q_{A|BC}\). For any state \(\rho_{ABC}\), if there exists a nontrivial function \(g\) such that the generalized polygamy equality \(Q_{A|BC}=g(Q_{AB},Q_{AC})\) is satisfied, we call the quantum correlation measure \(Q\) polygamous. ## II polygamy relation with equality and its properties Consider the function \(g(x,y)=x+y\) as a rubber band. For two fixed endpoints \((Q_{A|BC},0)\) and \((0,Q_{A|BC})\), one obtains different types of functions by moving the point \((\frac{Q_{A|BC}}{2},\frac{Q_{A|BC}}{2})\) to the point \((Q_{A|BC},Q_{A|BC})\) or to the origin \((0,0)\), as shown in Fig. 1. For any \(0<k\leq Q_{A|BC}\), \((k,k)\) is a point on the dotted line in Fig. 1. We have the following trade-off between the values of \(Q_{AB}\) and \(Q_{AC}\), \[\begin{cases}\frac{Q_{A|BC}-k}{k}Q_{AB}+Q_{AC}=Q_{A|BC},&Q_{AB}\leq Q_{AC}\\ Q_{AB}+\frac{Q_{A|BC}-k}{k}Q_{AC}=Q_{A|BC},&Q_{AC}\leq Q_{AB},\end{cases} \tag{3}\] where "k" is different for different lines in Fig. 1, but the same with the symmetric lines on both sides of the dashed line. In fact, we only need to consider the range of \(0<k\leq\frac{Q_{A|BC}}{2}\) because the original polygamy inequality is always satisfied for \(\frac{Q_{A|BC}}{2}<k\leq Q_{A|BC}\) (blue region in Fig. 1). The equation with respect to the diagonal solid line in the square is given by \(Q_{AB}+Q_{AC}=Q_{A|BC}\), and the triangle area above the diagonal solid line (blue region) satisfies \(Q_{AB}+Q_{AC}>Q_{A|BC}\). On the contrary, the triangle area below the diagonal solid line (orange, yellow and white regions) satisfies the inequality \(Q_{AB}+Q_{AC}<Q_{A|BC}\). The distribution of quantum correlation for all quantum states should be checked to verify whether a quantum correlation measure \(Q\) is polygamous. If the quantum correlation for all states is distributed on the upper triangle area, it means that \(Q\) satisfies the polygamy relations. 
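As a small numerical illustration of Eq. (3) and the regions in Fig. 1, the sketch below takes values of \(Q_{A|BC}\), \(Q_{AB}\) and \(Q_{AC}\) (made up for this example), solves Eq. (3) for the parameter \(k\) of the line passing through the point \((Q_{AB},Q_{AC})\), and checks whether the point lies in the upper (blue) triangle where inequality (1) holds.

```python
# Illustrative sketch (not from the paper): given numerical values of
# Q_{A|BC}, Q_{AB}, Q_{AC}, solve Eq. (3) for the parameter k of the line
# through the point (Q_{AB}, Q_{AC}) and test whether inequality (1) holds.
def k_line_parameter(q_abc, q_ab, q_ac):
    """Return the k in (0, Q_{A|BC}] whose line from Eq. (3) passes through the point."""
    lo, hi = sorted((q_ab, q_ac))   # Eq. (3) weights the smaller of the two values
    # ((Q_{A|BC} - k)/k) * lo + hi = Q_{A|BC}  =>  k = Q_{A|BC} * lo / (Q_{A|BC} - hi + lo)
    return q_abc * lo / (q_abc - hi + lo)

def in_blue_region(q_abc, q_ab, q_ac):
    """Inequality (1): Q_{AB} + Q_{AC} >= Q_{A|BC}; equivalent to k >= Q_{A|BC}/2."""
    return q_ab + q_ac >= q_abc

# Example with made-up numbers satisfying Q_{A|BC} >= max(Q_{AB}, Q_{AC}).
q_abc, q_ab, q_ac = 1.0, 0.3, 0.5
k = k_line_parameter(q_abc, q_ab, q_ac)
print(f"k = {k:.3f}, blue region: {in_blue_region(q_abc, q_ab, q_ac)}")
# k = 0.375 < Q_{A|BC}/2, so this point lies below the diagonal and violates inequality (1).
```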
Due to the importance of polygamy relations, it is also interesting to ask whether a quantum correlation measure \(Q\) with some quantum correlation distribution in the lower triangular region can be superactivated to be polygamous. If \(\rho\) does not satisfy the polygamous relations with respect to some quantum correlation measure \(Q\), can some of these quantum correlation measures be polygamous, i.e., is there a hidden polygamy relation? Thus, it is interesting to study a quantum correlation measure \(Q\) when its quantum correlation is distributed in the lower triangular region and determine ways to characterize its polygamy relation. Inspired by Ref. [34], we propose a parameterized polygamy relation of quantum correlation with equality, which is more powerful than the original ones. From Eq. (3), we can define a polygamy relation with equality as follows. **Definition 1.** Let \(Q\) be a measure of quantum correlation. \(Q\) is said to be polygamous if for any state \(\rho_{ABC}\), \[Q_{A|BC}=\gamma Q_{AB}+Q_{AC} \tag{4}\] for some \(\gamma\) (\(\gamma\geq 0\)) with \(Q_{AB}\leq Q_{AC}\). We call \(\gamma\) the polygamy weight with respect to the quantum correlation measure \(Q\). Eq. (4) yields a generalized polygamy relation without inequality. For different states, one can always obtain different \(\gamma\). Thus, \(\gamma\) in Eq. (4), in fact, is the biggest constant taken over all states that saturate the above equality. The polygamy weight \(\gamma\) defined in Eq. (4) establishes the connections among \(Q_{A|BC}\), \(Q_{AB}\) and \(Q_{AC}\) for a tripartite state \(\rho_{ABC}\). If \(0\leq\gamma\leq 1\), then the polygamy inequality (1) is obviously true from (4), and the corresponding quantum correlation distribution is confined to the blue region shown in Fig. 1. The case of \(\gamma>1\) is beyond the original polygamy inequality and the corresponding regions of the quantum correlation distribution are in the orange, yellow and white regions in Fig. 1. When \(\gamma\rightarrow\infty\), we have \(Q_{A|BC}>Q_{AC}\geq Q_{AB}=0\) according to the definition (4). In this situation, we say that the quantum correlation measure \(Q\) is non-polygamous. The corresponding quantum correlation distribution is located at the coordinate axis in Fig. 1; that is, when \(\gamma\rightarrow\infty\), \(Q\) is not likely to be polygamous. By contrast, \(\gamma\to 1\) implies that \(Q\) is more likely to be polygamous. In Ref. [34], the authors present a monogamy weight \(\mu\), which can be viewed as a dual parameter to the polygamy weight \(\gamma\), where \(\mu\to 1\) implies that the entanglement measure \(E\) is more likely to be monogamous, while \(\mu\to 0\) means that \(E\) is not likely to be monogamous. If an entanglement measure \(E\) is not only polygamous but also monogamous, then it must satisfy \(\mu=\gamma=1\). This happens for some classes of states and entanglement measures, such as \(W\)-class states with the entanglement measure concurrence or negativity. Thus, the parameter \(\gamma\) has an operational interpretation of the ability to be polygamous for a quantum correlation measure \(Q\). Given two quantum correlation measures \(Q^{\prime}\) and \(\tilde{Q}\) with polygamy weights \(\gamma_{1}\) and \(\gamma_{2}\), respectively. We say that \(Q^{\prime}\) has a higher polygamy score than \(\tilde{Q}\) if \(\gamma_{1}\geq\gamma_{2}\). In contrast to the monogamy score proportional to its monogamy ability in Ref. 
[34], the polygamy ability is inversely proportional to the magnitude of its weight score. That is, \(\gamma_{1}\geq\gamma_{2}\) leads to \(Q^{\prime}\preceq\tilde{Q}\), where \(Q^{\prime}\preceq\tilde{Q}\) means that \(\tilde{Q}\) has a stronger ability than \(Q^{\prime}\) to be polygamous. Thus, \(\gamma\) characterizes the polygamy property of a given quantum correlation measure \(Q\). We have the following relation for the \(\beta\)th power \(Q^{\beta}\) (\(\beta>0\)) of \(Q\) (see the proof in the Appendix). **Theorem 1** A quantum correlation measure \(Q\) is polygamous according to definition (4) if and only if there exists \(\beta>0\) such that \[Q^{\beta}_{A|BC}\leq Q^{\beta}_{AB}+Q^{\beta}_{AC} \tag{5}\] for any state \(\rho_{ABC}\). _Remark._ Theorem 1 shows that the polygamy weight \(\gamma\) in Eq. (4) has a one-to-one correspondence to the polygamy power in inequality (5) for a given quantum correlation measure \(Q\). In [8], the authors show that there exists a real number \(p\) such that \(Q^{y}\) (\(0\leq y\leq p\)) is polygamous. That is to say, the polygamy weight \(\gamma\) corresponds one-to-one to \(p\) in [8]. Combining Eqs. (4) and (5), one gets \(\gamma=(1+K^{\beta})^{\frac{1}{\beta}}-K\) with \(K=\frac{Q_{AC}}{Q_{AB}}>1\) if \(\beta\) saturates the inequality (5), which implies that the polygamy weight \(\gamma\) and the real number \(p\) in [8] are inversely proportional (see Fig. 2). In [34], the authors give a monogamy relation of entanglement \(E\), \(E_{A|BC}=\mu E_{AB}+E_{AC}\) with \(E_{AB}\leq E_{AC}\). At first glance, it looks the same as the one in our Definition 1; however, the ranges of the monogamy weight \(\mu\) and the polygamy weight \(\gamma\) are completely different. In [34], the monogamy weight takes values \(0<\mu\leq 1\), since the case \(\mu>1\) is obviously true by the CKW inequality [11]. By contrast, the range of the polygamy weight \(\gamma\) in this paper is \(\gamma>1\), since the relation is obviously true for \(0\leq\gamma\leq 1\). This may explain the difference between monogamy and polygamy relations, based on the polygamy (monogamy) weight \(\gamma\) (\(\mu\)), from another perspective. The polygamy relation defined in Eq. (4) can be generalized to multipartite systems. For any \(N\)-partite state \(\rho_{AB_{1}B_{2}\ldots B_{N-1}}\), we obtain the following result if \(Q\) satisfies Eq. (4) for any tripartite state (see the proof in the Appendix). **Theorem 2**. For any \(N\)-partite state \(\rho_{AB_{1}\cdots B_{N-1}}\), generally assume that \(Q_{AB_{i}}\geq Q_{A|B_{i+1}\cdots B_{N-1}}\) for \(i=1,2,\cdots,m\), and \(Q_{AB_{j}}\leq Q_{A|B_{j+1}\cdots B_{N-1}}\) for \(j=m+1,\cdots,N-2\), \(\forall\,1\leq m\leq N-3\), \(N\geq 4\). If \(Q\) satisfies relation (4) for tripartite states, then \[Q_{A|B_{1}B_{2}\cdots B_{N-1}}\leq Q_{AB_{1}}+\Gamma_{1}Q_{AB_{2}}+\cdots+\Gamma_{m-1}Q_{AB_{m}}+\Gamma_{m}(\gamma_{m+1}Q_{AB_{m+1}}+\cdots+\gamma_{N-2}Q_{AB_{N-2}}+Q_{AB_{N-1}}),\] where \(\Gamma_{k}=\Pi_{i=1}^{k}\gamma_{i}\), \(k=1,2,\cdots,N-2\), and \(\gamma_{i}\) denotes the polygamy weight of the \((N+1-i)\)-partite state \(\rho_{AB_{1}\cdots B_{N-i}}\). In Theorem 2 we have assumed that some \(Q_{AB_{i}}\geq Q_{A|B_{i+1}\cdots B_{N-1}}\) and some \(Q_{AB_{j}}\leq Q_{A|B_{j+1}\cdots B_{N-1}}\) for the \(N\)-partite state \(\rho_{AB_{1}\cdots B_{N-1}}\). If all \(Q_{AB_{i}}\geq Q_{A|B_{i+1}\cdots B_{N-1}}\) for \(i=1,2,\cdots,N-2\), then we have \(Q_{A|B_{1}\cdots B_{N-1}}=Q_{AB_{1}}+\Gamma_{1}Q_{AB_{2}}+\cdots+\Gamma_{N-2}Q_{AB_{N-1}}\). 
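The one-to-one correspondence between the polygamy weight \(\gamma\) of Eq. (4) and the polygamy power \(\beta\) of inequality (5) can be checked numerically. The sketch below (illustrative only; the value of \(K\) is made up) evaluates \(\gamma=(1+K^{\beta})^{1/\beta}-K\) for a saturating \(\beta\), and conversely recovers \(\beta\) from \(\gamma\) by bisection, reproducing the inverse relation indicated in Fig. 2.

```python
# Numerical sketch (illustrative, not from the paper) of the one-to-one map of
# Theorem 1 / Fig. 2.  With K = Q_AC / Q_AB > 1 and Q_{A|BC} = gamma*Q_AB + Q_AC,
# saturation of inequality (5) reads (gamma + K)**beta = 1 + K**beta.
def gamma_from_beta(beta, K):
    return (1.0 + K**beta) ** (1.0 / beta) - K

def beta_from_gamma(gamma, K, tol=1e-10):
    """Bisection for the beta that saturates (5) for a given gamma and K."""
    f = lambda b: (gamma + K) ** b - K**b - 1.0
    lo, hi = 1e-6, 1.0
    while f(hi) < 0:            # expand the bracket until a sign change is found
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

K = 2.0
for beta in (0.5, 1.0, 2.0, 4.0):
    g = gamma_from_beta(beta, K)
    print(f"beta = {beta:4.1f} -> gamma = {g:.4f} -> recovered beta = {beta_from_gamma(g, K):.4f}")
# Larger beta gives smaller gamma: the two parameters vary inversely, as stated above.
```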
In the following, as an application, we consider the concurrence of assistance to illustrate the advantages of (4) and the calculation of the polygamy weight \(\gamma\). For a bipartite state \(\rho_{AB}\), the concurrence of assistance is defined by \(C_{a}(\rho_{AB})=\max\limits_{\{p_{i},|\psi_{i}\rangle_{AB}\}}\sum_{i}p_{i}C(| \psi_{i}\rangle_{AB})\), where the maximum is taken over all possible pure state decompositions of \(\rho_{AB}=\sum\limits_{i}p_{i}|\psi_{i}\rangle_{AB}\langle\psi_{i}\). For any pure states \(\rho_{AB}=|\psi\rangle_{AB}\langle\psi|\), one has \(C_{a}(\rho_{AB})=C(|\psi\rangle_{AB})\), where \(C(|\psi\rangle_{AB})\) is the concurrence of the state \(|\psi\rangle_{AB}\), \(C(|\psi\rangle_{AB})=\sqrt{2\left[1-\text{Tr}(\rho_{A}^{2})\right]}\), and \(\rho_{A}=\text{Tr}_{B}(|\psi\rangle_{AB}\langle\psi|)\). Figure 2: One-to-one mapping between the polygamy weight \(\gamma\) in Eq. (4) and the polygamy power \(p\) in [8] for a given quantum correlation measure \(Q\). Figure 1: For any tripartite state \(\rho_{ABC}\) and quantum correlation measure \(Q\), one gets the inequality (1) for \(g(x,y)=x+y\), which holds with the range of values of \(Q_{AB}\) and \(Q_{AC}\) given by the blue triangular. In the blue region, the equality (4) also holds for \(0\leq\gamma\leq 1\). Inequality is no longer satisfied in the red, yellow, and white regions. However, the relation (4) holds for \(\gamma>1\): the orange region matches \(1<\gamma\leq 2\), the yellow region matches \(2<\gamma\leq 3\), and the white region matches \(\gamma>3\). In other words, any quantum correlation measure \(Q\) is polygamous in the sense of (4) if the quantum correlation distribution is confined to a region strictly smaller than the square with side length \(Q_{A|BC}\). For convenience, we denote \(C_{a}(\rho_{AB})=C_{aAB}\) and \(C_{a}(\rho_{A|BC})=C_{aA|BC}\). Let us consider the three-qubit state \(\rho=|\psi\rangle\langle\psi|\) in the generalized Schmidt decomposition form, \[|\psi\rangle = \lambda_{0}|000\rangle+\lambda_{1}e^{i\varphi}|100\rangle+\lambda _{2}|101\rangle \tag{6}\] \[+\lambda_{3}|110\rangle+\lambda_{4}|111\rangle,\] where \(\lambda_{i}\geq 0\), \(i=0,1,2,3,4\) and \(\sum\limits_{i=0}^{4}\lambda_{i}^{2}=1.\) We have \(C_{aA|BC}=2\lambda_{0}\sqrt{\lambda_{2}^{2}+\lambda_{3}^{2}+\lambda_{4}^{2}}\), \(C_{aAB}=2\lambda_{0}\sqrt{\lambda_{2}^{2}+\lambda_{4}^{2}}\) and \(C_{aAC}=2\lambda_{0}\sqrt{\lambda_{3}^{2}+\lambda_{4}^{2}}\). According to the polygamy relation (4), we have \[\gamma=\sqrt{1+\frac{\lambda_{3}^{2}}{\lambda_{2}^{2}+\lambda_{4}^{2}}}-\sqrt{ \frac{\lambda_{3}^{2}+\lambda_{4}^{2}}{\lambda_{2}^{2}+\lambda_{4}^{2}}} \tag{7}\] with \(\lambda_{2}\leq\lambda_{3}\). Let \(g(x,y)=\sqrt{1+\frac{y^{2}}{1+x^{2}}}-\sqrt{\frac{1+y^{2}}{1+x^{2}}}\), with \(x=\frac{\lambda_{2}}{\lambda_{4}}\) and \(y=\frac{\lambda_{3}}{\lambda_{4}}\). We obtain \(\gamma\leq\sqrt{2}-1\), with the equality saturated when \(x\rightarrow\infty,\ y\rightarrow\infty\) (i.e., \(\lambda_{4}=0\), \(\lambda_{2}=\lambda_{3}\neq 0\)) (see Fig 3). In other words, the W-type states (\(\lambda_{4}=0\) in (6)) saturate the maximal value of the polygamy weight for concurrence of assistance in Eq. (7), \(\gamma_{C_{a}}=\sqrt{2}-1<1\). The corresponding quantum correlation distribution is confined to the blue region in Figure 1. From the conclusion in [35; 36], the W-type states just satisfy the equality in (5), and the corresponding polygamy power is \(\beta_{C_{a}}=2\). 
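The bound \(\gamma_{C_{a}}=\sqrt{2}-1\) can also be checked numerically from the closed-form expressions quoted above for the state (6). The sketch below samples Schmidt coefficients at random (the sampling scheme is an arbitrary choice made for this illustration), evaluates \(C_{aA|BC}\), \(C_{aAB}\) and \(C_{aAC}\), and confirms that the weight from Eq. (4) never exceeds \(\sqrt{2}-1\).

```python
# Numerical check (illustrative) of Eq. (7): sample Schmidt coefficients of the
# state (6), evaluate the closed-form concurrences of assistance quoted above,
# and verify that the polygamy weight never exceeds sqrt(2) - 1.
import numpy as np

rng = np.random.default_rng(0)
max_gamma = 0.0
for _ in range(100_000):
    lam = rng.random(5)
    lam /= np.linalg.norm(lam)                 # enforce sum_i lambda_i^2 = 1
    l0, l1, l2, l3, l4 = lam
    ca_abc = 2 * l0 * np.sqrt(l2**2 + l3**2 + l4**2)
    ca_ab  = 2 * l0 * np.sqrt(l2**2 + l4**2)
    ca_ac  = 2 * l0 * np.sqrt(l3**2 + l4**2)
    small, large = sorted((ca_ab, ca_ac))
    if small > 1e-12:                          # Eq. (4) multiplies gamma by the smaller term
        gamma = (ca_abc - large) / small
        max_gamma = max(max_gamma, gamma)

print(f"largest sampled gamma = {max_gamma:.4f}  (bound sqrt(2)-1 = {np.sqrt(2)-1:.4f})")
```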
The polygamy inequality of entanglement, \(C_{A|BC}^{2}\leq{C_{aAB}^{\ 2}}+{C_{a}^{\ 2}}_{AC}\), was first introduced in [29] for a three-qubit pure state \(|\psi\rangle_{ABC}\). Since concurrence is equal to the concurrence of assistance for pure states, one gets \(C_{aA|BC}\leq C_{aAB}+C_{aAC}\). From Eq. (4) one obtains \(C_{aA|BC}\leq(\sqrt{2}-1)\min\{C_{aAB},C_{aAC}\}+\max\{C_{aAB},C_{aAC}\}\). Expectedly, our result is better than \(C_{aA|BC}\leq C_{aAB}+C_{aAC}\) derived from Ref. [29] except for the states such that \(\min\{C_{aAB},C_{aAC}\}=0\). As another quantum correlation, we consider the tangle of assistance defined by \(\tau_{a}(\rho_{AB})=\max_{\{p_{i},|\psi_{i}\rangle_{AB}\}}\sum_{i}p_{i}\tau(| \psi_{i}\rangle_{AB})\), where the tangle \(\tau(|\psi\rangle_{AB})\) is given by \(\tau(|\psi\rangle_{AB})=2\left[1-\text{Tr}(\rho_{A}^{2})\right]\)[11; 38], and the maximum is taken over all possible pure-state decompositions of \(\rho_{AB}=\sum\limits_{i}p_{i}|\psi_{i}\rangle\langle\psi_{i}|\) with \(p_{i}\geq 0\), \(\sum\limits_{i}p_{i}=1\). Similarly, we have \[\gamma=\frac{\lambda_{2}^{2}}{\lambda_{2}^{2}+\lambda_{4}^{2}}. \tag{8}\] Thus, from Eq. (8), we have \(\gamma_{\tau_{a}}=1\) for W-type states that satisfy the equality in (5) with polygamy power \(\beta_{\tau_{a}}=1\). From the above calculation, we have \(\gamma_{C_{a}}<\gamma_{\tau_{a}}\), which implies \(\tau_{a}\preceq C_{a}\), i.e., the polygamy ability of concurrence of assistance is stronger than that of tangle of assistance. Whereas the polygamy power of the tangle of assistance is smaller than that of concurrence of assistance, i.e., \(\beta_{\tau_{a}}<\beta_{C_{a}}\), since they are inversely proportional (Fig. 2). Here, we present a definition for polygamy relation of entanglement with equality in Eq. (4), which consists of a substantial and definitive addition to the present understanding of the properties for quantum correlations. It also coincides with the results in [8]: there exist real numbers \(\alpha\) and \(\beta\) such that the quantum correlation measure \(Q^{x}\) (\(x\geq\alpha\)) and \(Q^{y}\) (\(0\leq y\leq\beta\)) satisfy monogamy and polygamy relations, respectively. ## IV polygamy relation of quantum entanglement From definition (4) and Theorem 1, generally, an entanglement measure \(E\) may not be polygamous (also see [31]). Taking the concurrence \(C\) as an example, for the state (6) one obtains \(\gamma_{C}=\sqrt{1+x^{2}+y^{2}}-x\), where \(x=\frac{\lambda_{2}}{\lambda_{3}}\) and \(y=\frac{\lambda_{4}}{\lambda_{3}}\). When \(\lambda_{3}\to 0\), \(\gamma_{C}\rightarrow\infty\), i.e., \(C\) cannot be polygamous. In fact, consider separable states with \(E_{AC}=0\). Eq. (4) will not hold unless \(E_{A|BC}=E_{AB}\) for all separable states. This is not possible for separable states like \(|\psi\rangle_{ABC}=\frac{1}{\sqrt{3}}(|000\rangle+|110\rangle+|111\rangle)\), for which we have \(C_{A|BC}=\frac{2\sqrt{2}}{3}>C_{AB}\approx 0.667\) and \(C_{AC}=0\) by using the formula of concurrence for a two-qubit mixed state \(\rho\), \(C(\rho)=\max\{0,\ \eta_{1}-\eta_{2}-\eta_{3}-\eta_{4}\}\), with \(\eta_{1}\), \(\eta_{2}\), \(\eta_{3}\) and \(\eta_{4}\) the square roots of the eigenvalues of \(\rho(\sigma_{y}\otimes\sigma_{y})\rho^{\star}(\sigma_{y}\otimes\sigma_{y})\) in nonincreasing order, \(\sigma_{y}\) is the Pauli matrix and \(\rho^{\star}\) is the complex conjugate of \(\rho\). 
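The separable-state example above can be reproduced directly from the concurrence formula just quoted. The following self-contained numerical sketch (an illustration, not part of the original derivation) builds \(|\psi\rangle_{ABC}=(|000\rangle+|110\rangle+|111\rangle)/\sqrt{3}\), traces out subsystems, and evaluates \(C_{A|BC}=2\sqrt{2}/3\), \(C_{AB}\approx 0.667\) and \(C_{AC}=0\).

```python
# Illustrative sketch of the two-qubit concurrence formula quoted above, applied
# to the reduced states of |psi>_{ABC} = (|000> + |110> + |111>)/sqrt(3).
import numpy as np

def concurrence_2qubit(rho):
    """Wootters' formula: C(rho) = max{0, eta1 - eta2 - eta3 - eta4}."""
    sy = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(sy, sy)
    R = rho @ YY @ rho.conj() @ YY
    etas = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R).real)))[::-1]
    return max(0.0, etas[0] - etas[1] - etas[2] - etas[3])

def ptrace(rho, keep, n=3):
    """Partial trace of an n-qubit density matrix, keeping the qubits listed in `keep`."""
    rho = rho.reshape([2] * (2 * n))
    for done, q in enumerate(sorted(set(range(n)) - set(keep))):
        rho = np.trace(rho, axis1=q - done, axis2=q - done + n - done)
    d = 2 ** len(keep)
    return rho.reshape(d, d)

psi = np.zeros(8, dtype=complex)
psi[[0b000, 0b110, 0b111]] = 1 / np.sqrt(3)          # qubit order |A B C>
rho = np.outer(psi, psi.conj())

rho_A = ptrace(rho, keep=[0])
print("C_A|BC =", np.sqrt(2 * (1 - np.trace(rho_A @ rho_A).real)))   # 2*sqrt(2)/3 ~ 0.943
print("C_AB   =", concurrence_2qubit(ptrace(rho, keep=[0, 1])))      # about 0.667
print("C_AC   =", concurrence_2qubit(ptrace(rho, keep=[0, 2])))      # 0
```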
Another example is the generalized \(n\)-partite GHZ-class states admitting the multipartite Schmidt decomposition [37; 39], \(|\psi\rangle_{A_{1}A_{2}\cdots A_{n}}=\sum_{i}\lambda_{i}|i_{1}\rangle\otimes|i_{2}\rangle\otimes\cdots\otimes|i_{n}\rangle\), \(\sum_{i}\lambda_{i}^{2}=1\), \(\lambda_{i}>0\). These states always give rise to non-polygamous relations for any entanglement measure, since \(E(|\psi\rangle_{A_{1}|A_{2}\cdots A_{n}})>0\) and \(E(\rho_{A_{1}A_{i}})=0\) for all \(i=2,\cdots,n\) and any entanglement measure \(E\). Although an entanglement measure \(E\) may not be polygamous, we can consider the restrictions among all one-to-group entanglements, i.e., the entanglements between a single partite and the remaining ones in an arbitrary multipartite system [40]. We take tripartite states \(\rho_{ABC}\) as an example. Assume \(E_{A|BC}\geq E_{B|AC}\geq E_{C|AB}\). Under the restrictions among all one-to-group entanglements, \(E\) is polygamous if \(E_{A|BC}\leq E_{B|AC}+E_{C|AB}\) is satisfied, and non-polygamous otherwise. When \(E_{C|AB}=0\), one has \(E_{A|BC}=E_{B|AC}\), and \(E_{A|BC}\leq E_{B|AC}+E_{C|AB}\) is obviously satisfied. For the case \(E_{C|AB}>0\), \(E_{A|BC}\leq E_{B|AC}+E_{C|AB}\) may not hold for some entanglement measures. In the following, we introduce a definition for the polygamy relation with equality among all one-to-group entanglements. **Definition 2.** Let \(E\) be a measure of quantum entanglement. \(E\) is said to be polygamous if for any state \(\rho_{ABC}\), \[E_{A|BC}=E_{B|AC}+\delta\,E_{C|AB} \tag{9}\] for some \(\delta>0\), where \(E_{A|BC}\geq E_{B|AC}\geq E_{C|AB}\). We call \(\delta\) the polygamy weight with respect to the entanglement measure \(E\). From the definition, if \(E_{C|AB}=0\) for a quantum state \(\rho_{ABC}\), Eq. (9) holds for any \(\delta>0\). Otherwise, one can always choose \(\delta(\rho_{ABC})=\frac{E_{A|BC}-E_{B|AC}}{E_{C|AB}}\) such that the entanglement measure \(E\) is polygamous. Thus, there always exists a \(\delta\) (\(\delta=\max_{\rho_{ABC}}\delta(\rho_{ABC})\)) such that \(E_{A|BC}\leq E_{B|AC}+\delta E_{C|AB}\) is satisfied for all states \(\rho_{ABC}\). In Ref. [40], the authors investigate the polygon relationship among the one-to-group marginal entanglement measures of pure qubit systems. Entanglement measures including the von Neumann entropy [16], concurrence [17], negativity [18], and the normalized Schmidt weight satisfy the symmetric inequalities \(E_{i|jk}\leq E_{j|ik}+E_{k|ij}\) with \(i\neq j\neq k\in\{A,B,C\}\). In fact, the result in [40] is a special case of Eq. (9) with \(\delta=1\). The cases with \(\delta<1\) are beyond the results in [40]. Moreover, Eq. (9) applies not only to pure states but also to mixed states. In the following, we use concurrence as an example to show the advantage of Eq. (9). Let us consider the state \(|\psi\rangle_{ABC}=\sin\theta\cos\phi|000\rangle+\sin\theta\sin\phi|101\rangle+\cos\theta|110\rangle\). We have \(C_{A|BC}=2\sin\theta\cos\phi\sqrt{\sin^{2}\theta\sin^{2}\phi+\cos^{2}\theta}\), \(C_{B|AC}=\sin 2\theta\) and \(C_{C|AB}=2\sin\theta\sin\phi\sqrt{\sin^{2}\theta\cos^{2}\phi+\cos^{2}\theta}\). From Eq. (9), we obtain \(\delta_{C}=\frac{\cos\phi\sqrt{\sin^{2}\theta\sin^{2}\phi+\cos^{2}\theta}-\cos\theta}{\sin\phi\sqrt{\sin^{2}\theta\cos^{2}\phi+\cos^{2}\theta}}\). Suppose \(C_{A|BC}\geq C_{B|AC}\geq C_{C|AB}\), which implies that the parameters \(\theta\) and \(\phi\) are in the triangle-like region shown in Fig. 4. 
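For this example state, the weight \(\delta\) of Definition 2 can be evaluated directly from the three one-to-group concurrences quoted above. The short sketch below (an illustration, not part of the original derivation) sorts the three values and reads off \(\delta\) from Eq. (9); at \(\theta=\phi=\pi/4\) it reproduces the value \(2/\sqrt{3}-1\) discussed in the following paragraph.

```python
# Illustrative sketch: evaluate the three one-to-group concurrences quoted above
# for |psi>_{ABC} = sin(t)cos(p)|000> + sin(t)sin(p)|101> + cos(t)|110>, and read
# off the weight delta of Definition 2 / Eq. (9) after sorting them.
import numpy as np

def one_to_group_concurrences(theta, phi):
    c_a = 2 * np.sin(theta) * np.cos(phi) * np.sqrt(np.sin(theta)**2 * np.sin(phi)**2 + np.cos(theta)**2)
    c_b = np.sin(2 * theta)
    c_c = 2 * np.sin(theta) * np.sin(phi) * np.sqrt(np.sin(theta)**2 * np.cos(phi)**2 + np.cos(theta)**2)
    return c_a, c_b, c_c

def delta_weight(theta, phi):
    smallest, middle, largest = sorted(one_to_group_concurrences(theta, phi))
    if smallest < 1e-12:
        return None   # when the smallest term vanishes, Eq. (9) holds for any delta > 0
    return (largest - middle) / smallest

theta = phi = np.pi / 4
print(f"delta_C = {delta_weight(theta, phi):.4f}  (2/sqrt(3) - 1 = {2/np.sqrt(3) - 1:.4f})")
```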
If \(|\psi\rangle_{ABC}\) is a separable state, it corresponds to the point \(P\) in Fig. 4, that is, either (i) \(\phi=0\) or (ii) \(\theta=\frac{\pi}{2}\) or (iii) \(\theta=\frac{\pi}{2}\) and \(\phi=0\). For example, when \(\phi=0\), the related state \(|\psi\rangle_{ABC}\) is changed into a separable state \(\sin\theta|000\rangle+\cos\theta|110\rangle\). For all these cases, the polygamy weight \(\delta_{C}=1\); otherwise \(\delta_{C}<1\). For example, setting \(\theta=\phi=\frac{\pi}{4}\) one obtains \(\delta_{C}=\frac{2}{\sqrt{3}}-1<1\). Remark: In the above example, we only discussed the condition \(C_{A|BC}\geq C_{B|AC}\geq C_{C|AB}\). In fact, there are other five cases, \(C_{A|BC}\geq C_{C|AB}\geq C_{B|AC}\), \(C_{B|AC}\geq C_{A|BC}\geq C_{C|AB}\), \(C_{B|AC}\geq C_{C|AB}\geq C_{A|BC}\), \(C_{C|AB}\geq C_{A|BC}\geq C_{B|AC}\), and \(C_{C|AB}\geq C_{B|AC}\geq C_{A|BC}\). One can similarly analyze these cases and obtain the same results. Concerning the polygamy power \(\eta\), we have the following familiar inequality (see the proof in Appendix). **Theorem 3** Let \(E\) be a measure of entanglement. \(E\) is polygamous according to the definition (9) if and only if there exits \(\eta>0\) such that \[E^{\eta}_{A|BC}\leq E^{\eta}_{B|AC}+E^{\eta}_{C|AB} \tag{10}\] for any state \(\rho_{ABC}\). The polygamy weight \(\gamma\) functions as a bridge in characterizing the polygamous ability among different quantum correlation measures. A quantum correlation measure \(Q\) with a smaller \(\gamma\) is more likely to be polygamous. Thus, the polygamy weight \(\gamma\) gives the physical meaning of the coefficients introduced in Ref. [30] for the weighted polygamy relations. Particularly, quantum entanglement measures cannot be polygamous since the entanglement of the reduced GHZ-class states is \(0\). Nevertheless, we provide polygamy relations with equality and obtain a polygamy relation for tripartite systems based on one-to-group entanglement. Conclusion Quantum correlation is a fundamental property of multipartite systems. For a given quantum correlation measure \(Q\), we have introduced a new definition of the polygamy relation with equality, which characterizes the precise division of the quantum correlation distribution. The non-polygamous quantum correlation distribution is only located on the coordinate axis, as shown in Fig. 1; the blue region satisfies both our notion of polygamy (4) and the usual one (1), whereas the orange, yellow, and white regions violate the inequality (1) but still satisfy our polygamy relation (4). The advantage of our notion of polygamy is that one can determine which quantum correlation measure is more likely to be polygamous by comparing the polygamy weights. We have used the concurrence of assistance and tangle of assistance as examples, showing that the concurrence of assistance is more likely to be polygamous than the tangle of assistance since the weight of the tangle of assistance is larger than that of concurrence of assistance. However, by using \(Q^{\beta}\) for some \(\beta>0\), we have shown that our polygamy relation can reproduce the conventional polygamy inequalities such as (1). Furthermore, it has been shown that quantum entanglement measures cannot be polygamous as the GHZ-class reduced states are separable. Inspired by Ref. [40], considering the one-to-group entanglements between a single partite and the remaining ones, we have provided the polygamy relations with equality and obtained a polygamy relation for tripartite systems. 
Generalizing our results to multipartite systems, we have obtained Theorem 2 for \(N\)-partite states. Our results may shed new light on polygamy properties related to other quantum correlations. Acknowledgments. This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grants 12301582, 12075159, 12171044, 11975236, and 12235008; Beijing Natural Science Foundation (Grant No. Z190005); Academician Innovation Platform of Hainan Province; Guangdong Basic and Applied Basic Research Foundation 2020A1515111007; Start-up Funding of Guangdong Polytechnic Normal University No. 2021SDKYA178; Start-up Funding of Dongguan University of Technology No. 221110084.
2310.00299
RelBERT: Embedding Relations with Language Models
Many applications need access to background knowledge about how different concepts and entities are related. Although Knowledge Graphs (KG) and Large Language Models (LLM) can address this need to some extent, KGs are inevitably incomplete and their relational schema is often too coarse-grained, while LLMs are inefficient and difficult to control. As an alternative, we propose to extract relation embeddings from relatively small language models. In particular, we show that masked language models such as RoBERTa can be straightforwardly fine-tuned for this purpose, using only a small amount of training data. The resulting model, which we call RelBERT, captures relational similarity in a surprisingly fine-grained way, allowing us to set a new state-of-the-art in analogy benchmarks. Crucially, RelBERT is capable of modelling relations that go well beyond what the model has seen during training. For instance, we obtained strong results on relations between named entities with a model that was only trained on lexical relations between concepts, and we observed that RelBERT can recognise morphological analogies despite not being trained on such examples. Overall, we find that RelBERT significantly outperforms strategies based on prompting language models that are several orders of magnitude larger, including recent GPT-based models and open source models.
Asahi Ushio, Jose Camacho-Collados, Steven Schockaert
2023-09-30T08:15:36Z
http://arxiv.org/abs/2310.00299v2
# RelBERT: Embedding Relations with Language Models ###### Abstract Many applications need access to background knowledge about how different concepts and entities are related. Although Knowledge Graphs (KG) and Large Language Models (LLM) can address this need to some extent, KGs are inevitably incomplete and their relational schema is often too coarse-grained, while LLMs are inefficient and difficult to control. As an alternative, we propose to extract relation embeddings from relatively small language models. In particular, we show that masked language models such as RoBERTa can be straightforwardly fine-tuned for this purpose, using only a small amount of training data. The resulting model, which we call RelBERT, captures relational similarity in a surprisingly fine-grained way, allowing us to set a new state-of-the-art in analogy benchmarks. Crucially, RelBERT is capable of modelling relations that go well beyond what the model has seen during training. For instance, we obtained strong results on relations between named entities with a model that was only trained on lexical relations between concepts, and we observed that RelBERT can recognise morphological analogies despite not being trained on such examples. Overall, we find that RelBERT significantly outperforms strategies based on prompting language models that are several orders of magnitude larger, including recent GPT-based models and open source models. 1 Footnote 1: Source code to reproduce our experimental results and the model checkpoints are available in the following repository: [https://github.com/asahi417/relbert](https://github.com/asahi417/relbert). ## 1 Introduction Recognizing the lexical relationship between two words has long been studied as a fundamental task in natural language processing (NLP) [1]. As a representative early example, DIRT [2] first collects sentences in which two given target words co-occur (e.g. _London_ and _U.K._) and then uses the dependency paths between the two words to model their relationship. Along similar lines, Latent Relational Analysis (LRA [1]) relies on templates expressing lexical patterns to characterise word pairs (e.g. _[head word] is the capital of [tail word]_), thus again relying on sentences where the words co-occur. After the advent of word embeddings [3; 4; 5], most approaches for modelling relations relied on word vectors in one way or another. A common strategy to model the relation between two words was to take the vector difference between the embeddings of each word [3; 6; 7]. For example, the relationship between "King" and "Queen" is the gender difference, which can be captured by \(\mathsf{wv}(\mathrm{King})-\mathsf{wv}(\mathrm{Queen})\), where \(\mathsf{wv}(X)\) denotes the embedding of word \(X\). Although the vector difference of word embeddings quickly gained popularity, it has been shown that the latent space of such relation vectors is noisy, with nearest neighbours often corresponding to different relationships [8; 9; 10]. The limitations of word embedding differences can also clearly be seen on SAT [11], a well-known benchmark involving word pair analogies (see SS 4.1), where the accuracy of word vector differences is particularly disappointing [12]. Knowledge graphs (KGs) such as Wikidata [13] and ConceptNet [14] are also closely related to the study of relation understanding. In contrast to the aforementioned methods, KGs rely on symbolic representations. 
They use a fixed relational schema to explicitly encode the relationships between words or entities. KGs are more interpretable than embeddings, but they usually have the drawback of being highly incomplete. Moreover, due to their use of a fixed relational schema, KGs are inherently limited in their ability to capture subtle and fine-grained differences between relations, which is essential for many knowledge-intensive tasks. For example, Table 1 shows two instances of the SAT analogy task, where the relationships found in the query are abstract. When trying to solve such questions with a KG, we may have access to triples such as (_wing_, _UsedFor_, _air_), but this kind of knowledge is not sufficient to solve the given question, e.g. since all of (_lung_, _UsedFor_, _breath_), (_engine_, _UsedFor_, _jet_), and (_flipper_, _UsedFor_, _water_) make sense. The issue here is that the relationship _UsedFor_ is too vague and does not describe the relationship between _wing_ and _air_ accurately enough to solve the task. Such limitations of KGs have recently been tackled with pre-trained language models (LMs) [15; 16; 17], which have been shown to implicitly capture various forms of factual knowledge [18; 19; 20]. In particular, some researchers have proposed to extract KG triples from LMs [21; 22; 23], which offers a compelling strategy for automatically enriching existing KGs. However, the resulting representations are still too coarse-grained for many applications. LLMs are also inefficient and difficult to control. In this paper, we propose a framework to distill relational knowledge from a pre-trained LM in the form of relation embeddings. Specifically, we obtain the relation embedding of a given word pair by feeding that word pair to the LM using a fixed template and by aggregating the corresponding contextualised embeddings. We fine-tune the LM using a contrastive loss, in such a way that the relation embeddings of word pairs that have a similar relationship become closer together, while moving further away from the embeddings of unrelated word pairs. Our main models, which we refer to as _RelBERT_, are based on RoBERTa [24] as the underlying LM and are fine-tuned on a modified version of a dataset about relational similarity from SemEval 2012 Task 2 [25]. Despite the conceptual simplicity of this approach, the resulting model outperforms all the baselines including LRA [1], word embeddings [3; 4; 5], and large scale language models such as OPT [26; 27] and T5 [17; 28] in the zero-shot setting (i.e. without any task-specific validation or additional model training). For example, RelBERT achieves 73% accuracy on the SAT analogy benchmark, which outperforms the previous state-of-the-art by 17 percentage points, and GPT-3 [16] by 20 percentage points (in the zero-shot setting). The strong performance of RelBERT is especially remarkable given the small size of the considered training set. Moreover, being based on relatively small language models, RelBERT is surprisingly efficient. In fact, the version of RelBERT based on RoBERTaBASE has 140 million parameters, but already outperforms GPT-3, which has 175 billion parameters, across all the considered benchmarks. Overall, we test RelBERT in nine diverse analogy benchmarks and find that it achieves the best results in all cases. Crucially, the fixed-length vector formulation of RelBERT enables a flexibility not found in standard language models, and can be leveraged for multiple applications. 
For instance, RelBERT proves competitive to the state-of-the-art in lexical relation classification, outperforming all previous embedding approaches. To further understand the capability of RelBERT, we analyse its performance from various perspectives. One important finding is that RelBERT performs strongly even on relation types that are fundamentally different from the ones in the training data. For instance, while the training data involves standard lexical relations such as hypernymy, synonymy, meronymy and antonymy, the model is able to capture morphological relations, and even factual relations between named entities. Interestingly, our standard RelBERT model, which is trained on the relation similarity dataset, achieves similar or better results for the latter type of relations than models that are trained on relations between named entities. Moreover, even if we remove all training examples for a given lexical relation (e.g. hypernymy), we find that the resulting model is still capable of modelling that relationship. These findings highlight the generalisation ability of RelBERT for recognizing word pairs with unseen relation types by extracting relation knowledge from the pre-trained LM, rather than merely generalising the examples from the training data. MotivationRelations play a central role in many applications. For instance, many question answering models currently rely on ConceptNet for modelling the relation between the concepts that are mentioned in the question and a given candidate answer [29; 30; 31]. Commonsense KGs are similarly used to provide additional context to computer vision systems, e.g. for generating scene graphs [32; 33] and for visual question answering [34]. Many recommendation \begin{table} \begin{tabular}{l l l} \hline \hline Query: & wing:air & Query: & perceptive:discern \\ \hline Candidates: & (1) & arm:hand \\ & (2) & lung:breath \\ & **(3)** & **flipper:water** \\ & (4) & cloud:sky \\ & (5) & engine:jet \\ \hline \hline \end{tabular} \begin{tabular}{l l l} \hline \hline Query: & & perceptive:discern \\ \hline Candidates: & (1) & determined:hesitate \\ & (2) & authoritarian:heed \\ & (3) & abandoned:neglect \\ & (4) & restrained:rebel \\ & **(5)** & **persistent:persevere** \\ \hline \hline \end{tabular} \end{table} Table 1: Two examples of analogy task from the SAT dataset, where the candidate in bold characters is the answer in each case. systems also rely on knowledge graphs to identify and explain relevant items [35; 36]. Other applications that rely on knowledge graphs, or on modelling relationships more broadly, include semantic search [37; 38], flexible querying of relational databases [39], schema matching [40], completion and retrieval of Web tables [41] and ontology completion [42]. Many of the aforementioned applications rely on knowledge graphs, which are incomplete and limited in expressiveness due to their use of a fixed relation schema. Relation embeddings have the potential to address these limitations, especially in contexts which involve ranking or measuring similarity, where extracting knowledge by prompting large language models (LLMs) cannot replace vector-based representations. Relation embeddings can also provide a foundation for systems that rely on analogical reasoning, where we need to identify correspondences between a given scenario and previously encountered ones [43]. 
Finally, by extracting relation embeddings from LMs, we can get more insight into what knowledge is captured by such models, since these embeddings capture the knowledge from the model in a more direct way than what is possible with prompting based methods [18; 19]. Indeed, the common prediction-based model probing techniques [18; 19] can easily be manipulated by adversarial inputs, e.g. involving negation [44]. Accordingly, recent studies have focused on identifying language model parameters that represent factual knowledge about named entities [45]. We believe that relation embeddings can be seen in a similar light, offering the potential for more direct analysis of the knowledge captured by language models. Structure of the paperAfter discussing the related work in SS 2, we first introduce RelBERT, our framework for extracting relation embeddings from fine-tuned LMs, in SS 3. We then describe our two main evaluation tasks in SS 4: analogy questions and lexical relation classification. SS 5 presents our experimental setup. Subsequently, SS 6 compares the results of RelBERT with baselines, including a large number of recent LLMs. To better understand the generalisation ability of RelBERT, in SS 7.1 we conduct an experiment in which certain relation types are excluded from the training set and then evaluate the model on the excluded relation type. In addition to the main experiment, we compare RelBERT with conversational LMs and few-shot prompting strategies in SS 7.2. As the learning process of RelBERT can be affected by many factors, we provide a comprehensive analysis of RelBERT fine-tuning in SS 7.3. Finally, we provide a qualitative analysis of the relation embedding space of RelBERT SS 7.4. This paper extends our earlier conference paper [46] in several ways: 1) we now consider two additional losses for training RelBERT; 2) we evaluate on four additional benchmarks; 3) we consider several alternative training sets; 4) we extensively compare against recent language models of sizes up to 30B parameters; 5) we analyse the role of training data in the RELBERT fine-tuning process. We find that, surprisingly, RelBERT achieves a non-trivial performance on named entities, despite only being trained on concepts. Moreover, on analogies between concepts, even the smallest RelBERT model, with 140M parameters, substantially outperforms all the considered LMs. ## 2 Related Work ### Unsupervised Relation Discovery Modelling how different words are related is a long-standing challenge in NLP. An early approach is DIRT [2], which encodes the relation between two nouns as the dependency path connecting them. The idea is that two such dependency paths are similar if the sets of word pairs with which they co-occur are similar. Along the same lines, [47] cluster named entity pairs based on the bag-of-words representations of the contexts in which they appear. In [48], a generative probabilistic model inspired by LDA [49] was proposed, in which relations are viewed as latent variables (similar to topics in LDA). Turney [1] proposed a method called latent relational analysis (LRA), which uses matrix factorization to learn relation embeddings based on co-occurrences of word pairs and dependency paths. Matrix factorization is also used in the Universal Schema approach from Riedel et al. [50], which represents entity pairs by jointly modelling (i) the contexts of occurrences of entity pairs in a corpus and (ii) the relational facts that are asserted about these entities in a given knowledge base. 
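The matrix-factorisation idea behind LRA and Universal Schema can be illustrated with a toy example: rows index word pairs, columns index the lexical patterns they co-occur with, and a truncated SVD of the count matrix gives each pair a low-dimensional relation embedding. The pairs, patterns and counts below are made up purely for illustration and are not from any of the cited resources.

```python
# Toy sketch of the matrix-factorisation idea behind LRA / Universal Schema:
# rows are word pairs, columns are lexical patterns, and a truncated SVD of the
# co-occurrence counts yields low-dimensional relation embeddings.
import numpy as np

pairs    = ["london:uk", "paris:france", "wheel:car", "wing:plane"]
patterns = ["X is the capital of Y", "X , the capital of Y", "X is part of Y", "Y has a X"]

counts = np.array([
    [12.0, 5.0, 1.0, 0.0],   # london:uk
    [10.0, 7.0, 0.0, 0.0],   # paris:france
    [ 0.0, 0.0, 9.0, 6.0],   # wheel:car
    [ 0.0, 0.0, 7.0, 8.0],   # wing:plane
])

# Rank-2 truncated SVD: the relation embedding of each pair is a row of U * S.
U, S, Vt = np.linalg.svd(counts, full_matrices=False)
rel_emb = U[:, :2] * S[:2]

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print("capital-of vs capital-of:", cos(rel_emb[0], rel_emb[1]))   # high similarity
print("capital-of vs part-of   :", cos(rel_emb[0], rel_emb[2]))   # near zero
```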
After the introduction of Word2Vec, several approaches were proposed that relied on word embeddings for summarising the contexts in which two words co-occur. For instance, [51] introduced a variant of the GloVe word embedding model, in which relation vectors are jointly learned with word vectors. In SeVeN [52] and RELATIVE [53], relation vectors are computed by averaging the embeddings of context words, while pair2vec [54] uses an LSTM to summarise the contexts in which two given words occur, and [55] learns embeddings of dependency paths to encode word pairs. Another line of work is based on the idea that relation embeddings should facilitate link prediction, i.e. given the first word and a relation vector, we should be able to predict the second word [56; 57]. ### Language Models for Relational Knowledge The idea of extracting relational knowledge from pre-trained LMs has been extensively studied. For instance, [18] uses BERT for link prediction. They use a manually defined prompt for each relation type, in which the tail entity is replaced by a \(<\)mask\(>\) token. To complete a knowledge graph triple such as (_Dante_, _born-in_,?) they create the input "_Dante was born in_\(<\)mask\(>\)" and then look at the predictions of BERT for the masked token to retrieve the correct answer. The results of this analysis suggest that BERT captures a substantial amount of factual knowledge, a finding which has inspired a line of work in which LMs are viewed as knowledge bases. Later, the analysis from [18] has been improved by adding instances with negation in [44], and extended to non-English languages in [58]. Some works have also looked at how relational knowledge is stored. In [59], it is argued that the feed-forward layers of transformer-based LMs act as neural memories, which would suggest that e.g. "the place where Dante is born" is stored as a property of Florence. Some further evidence for this view is presented in [60]. What is less clear is whether relations themselves have an explicit representation, or whether transformer models essentially store a propositionalised knowledge graph. The results we present in this paper suggest that common lexical relations (e.g. hypernymy, meronymy, has-attribute), at least, must have some kind of explicit representation, although it remains unclear how they are encoded. In [61], they analyse the ability of BERT to identify word pairs that belong to a given relation. In our earlier work [12], we have evaluated the ability of LMs to directly solve analogy questions. The main finding was that LMs are poor at solving analogy questions with a vanilla perplexity based approach, although results can be improved with a carefully-tuned scoring function. In [62], they extended this analysis by evaluating the sensitivity of language models to the direction of a word pair (e.g. by checking whether the model can distinguish the word pair _London:U.K._ from the word pair _U.K.:London_), the ability to recognize which entity type can form a specific relation type (e.g. the head and tail entity of the _born-in_ relation should be person and location) and the robustness to some adversarial examples. Their main findings were that LMs are capable of understanding the direction and the type of a relationship, but can be distracted by simple adversarial examples. For instance, both _Paris:France_ and _Rome:France_ were predicted to be instances of the _capital-of_ relation. 
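The prompt-based probing described above can be reproduced in a few lines. The sketch below is a minimal illustration (not the setup used in [18]), assuming the Hugging Face transformers library is installed; the prompt follows the (Dante, born-in, ?) example from the text.

```python
# Minimal sketch of LAMA-style factual probing as described above, assuming the
# Hugging Face `transformers` library is available.
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")
for pred in fill("Dante was born in <mask>.", top_k=5):
    print(f"{pred['token_str']!r:>12}  p = {pred['score']:.3f}")
```

The ranked completions give a rough picture of what factual knowledge the masked LM encodes for this relation, which is exactly the kind of signal the probing studies above analyse.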
Given the observation that LMs capture an extensive amount of relational knowledge, LMs have been used for tasks such as KG completion, and even for generating KGs from scratch. For instance, in [63], a triple is first converted into a sentence, by choosing a template based on the log-likelihood estimates of a causal language model (CLMs). The resulting sentence is then fed into a masked LM to estimate the plausibility of the triple based on the log-likelihood of the masked token prediction of the head and tail words. However, this approach is inefficient to use in practice, since all the candidate triples have to be tested one by one. To avoid such issues, [64] proposed to directly extract plausible triples using a pre-trained LM. Given a large corpus such as Wikipedia, they parse every sentence in the corpus to find plausible triples with a pre-trained LM. First, a single sentence is fed to an LM to obtain the attention matrix, and then for every combination of two words in the sentence, they find intermediate tokens in between the two words, which contribute to predict the two words, by decoding the attention matrix. In the end, the word pairs are simply filtered by the corresponding attention score, and the resulting word pairs become the triples extracted from the sentence, where the intermediate tokens of each pair are regarded as describing the relationship. Instead of extracting triples from a corpus, [65] proposed to use LMs to complete a triple by generating a tail word given a head word and a relation. They manually create a number of templates for each relation type, where a single template contains a placeholder to be filled by a head word. Each template is fed to a pre-trained LM to predict the tail word. As a form of post-filtering, they use a pre-trained LM to score the factual validity of the generated triples with another prompt to enhance the precision. Unlike the method proposed in [64], which extracts an entire triple, [65] assumes that the head and the relation are given, so it is more suited to KG completion, while [64] is rather aimed at constructing KGs from scratch. Recently, [66] proposed a two-step process for learning a KG in which relations are represented as text descriptions. In the first step, sentences in Wikipedia that explicitly describe relations between entities are identified. To improve the coverage of the resource, in the second step, T5 [17] is used to introduce additional links. Specifically, they use a fusion-in-decoder [67] to generate descriptions of the relationship between two entities, essentially by summarising the descriptions of the paths in the original KG that connect the two entities. Where the aforementioned works extract KGs from LMs, conversely, there has also been a considerable amount of work on infusing the knowledge of existing KGs into LMs. Early approaches introduced auxiliary tasks that were used to train the LM alongside the standard language modelling task, such as entity annotation [68] and relation explanation [69] based on KGs. ERNIE [70] is a masked LM similar to BERT, but they employ a masking strategy that focuses on entities that are taken from a KG, unlike BERT, which randomly masks tokens during pre-training. In addition to the entity-aware masking scheme, LUKE [71] conditions internal self-attention by entity-types. It achieved better results than vanilla LMs in many downstream tasks. 
Since it is computationally demanding to train LMs from scratch, there is another line of work that relies on fine-tuning existing LMs. For instance, [72] fine-tuned BERT based on the cross-attention between the embeddings from BERT and an entity linking model. Their model learned a new projection layer to generate entity-aware contextualized embeddings. ### Modelling Analogy Modelling analogies has a long tradition in the NLP community. The aforementioned LRA model [1], for instance, was motivated by the idea of solving multiple-choice analogy questions. Despite its simplicity, LRA achieved a strong performance on the SAT benchmark, which even GPT-3 is not able to beat in the zero-shot setting [16]. The idea of using word vector differences for identifying analogies was popularised by [3]. The core motivation of using word embeddings for modelling analogies dates back to connectionism theory [73], where neural networks were thought to be capable of learning emergent concepts [74; 75] with distributed representations across a semantic embedding space [76]. More recent works have proposed mathematical justifications and experiments to understand the analogical reasoning capabilities of word embeddings, by attempting to understand their linear algebraic structure [77; 78; 79] and by explicitly studying their compositional nature [80; 81; 82; 83]. Recently, the focus has shifted to modelling analogies using LMs. For instance, [84] proposed E-KAR, a benchmark for analogy modelling which essentially follows the same multiple-choice format as SAT, except that an explanation is provided for why the analogy holds and that some instances involve word triples rather than word pairs. In addition to the task of solving analogy questions, they also consider the task of generating explanations for analogies. Both tasks were found to be challenging for LMs. In [85], they used prompt engineering to generate analogies with GPT-3. They consider two analogy generation tasks: (i) generating an explanation with analogies for a target concept such as "Explain Bohr's atomic model using an analogy", and (ii) generating an explanation of how two given concepts are analogous to each other such as "Explain how Bohr's atomic model is analogous to the solar system". They argue that GPT-3 is capable of both generating and explaining analogies, but only if an optimal prompt is chosen, where they found the performance to be highly sensitive to the choice of prompt. In [86], they used LMs to find analogies between the concepts mentioned in two documents describing situations or processes from different domains. To improve the quality of analogies generated by LMs, [87] proposed an LM based scoring function to detect low-quality analogies. They start from manually-crafted templates that contain the information of the domain (e.g. "Machine Learning") and the target concept (e.g. "Language Model"). The templates are designed so that LMs can generate explanations of the target concept involving analogies. Once they generate analogies with the templates, they evaluate the generated analogies from the perspectives of analogical style, meaningfulness, and novelty, to identify which analogies to keep. The evaluation of the analogies is then used to improve the templates, and the low-quality analogies are re-generated with the improved templates. 
The evaluation relies on automatic metrics, and the template re-writing is done via prompting to edit the current template with the feedback, so the process can be iterated to repeatedly improve low-quality analogies. In [88], they fine-tuned masked LMs to solve analogy questions. The embedding for a given word pair \((w_{1},w_{2})\) is obtained as the contextualised representation of the \(<\)mask\(>\) token with the prompt "\(w_{1}<\)mask\(>\)\(w_{2}\)". To fine-tune LMs on analogy questions, they convert the task into a binary classification of \(A\):\(B\):\(C\):\(D\) as an analogy or not, where (\(A\),\(B\)) is the query word pair and (\(C\),\(D\)) is a candidate word pair. With the binary analogy classification formulation, they fine-tune an LM with a linear layer on top of the word pair embeddings of query and candidate word pairs. They use the resulting fine-tuned model to annotate more instances as a form of data augmentation and continue to fine-tune the model on the generated pseudo dataset. ## 3 RelBERT We now introduce our proposed RelBERT model, a fine-tuned LM encoder of the BERT family for modelling relational similarity. The input to RelBERT consists of a word pair, which is fed to the LM using a prompt. The LM itself is fine-tuned to map this input to a vector that encodes how the two given words are related. We will refer to this vector as a _relation embedding_. A schematic overview of the RelBERT model is shown in Figure 1. Our overall strategy is explained in more detail in SS 3.1, while the details of the fine-tuning process are provided in SS 3.2. ### Overall Strategy To obtain the relation embedding of a word pair \((h,t)\), we need to construct a suitable input for the language model. While it is possible to simply use the pair \((h,t)\) as input, similar to what is done by COMET [89], better results can be achieved by converting the word pair into a more or less naturally sounding sentence. This is true, in particular, because the amount of high-quality data that is available for training RelBERT is relatively limited, as we will see in SS 3.3. We thus need to manually create a template with placeholders for the two target words, which somehow expresses that we are interested in modelling the relationship between the two words. Such a strategy has already been proven effective for factual knowledge probing [18] and text classification [90; 91; 92], among many others. Since we will rely on fine-tuning the LM, the exact formulation of the prompt matters less than in zero-shot settings. However, we found that performance suffers when the prompt is too short, in accordance with [61] and [19], or when the prompt is nonsensical (i.e. when it does not express the idea of modelling a relationship). With this in mind, we will use the following five templates for our main experiments2: Footnote 2: In our previous paper [46], we evaluated different types of prompts, including automatically-generated ones. For this paper, we tried to extend this initial analysis but the results were inconclusive. This suggests that the choice of prompt may be somewhat less important than we had initially assumed. This view is also supported by the recent analysis in in [93], which showed that the LM can be successfully fine-tuned to learn relation embeddings with short and uninformative prompts, with only a very small degradation in quality. We will come back to the analysis of prompt importance in § 7.3.5. 1. 
Today, I finally discovered the relation between **[h]** and **[t]** : **[h]** is the <mask> of **[t]**
2. Today, I finally discovered the relation between **[h]** and **[t]** : **[t]** is **[h]**'s <mask>
3. Today, I finally discovered the relation between **[h]** and **[t]** : <mask>
4. I wasn't aware of this relationship, but I just read in the encyclopedia that **[h]** is the <mask> of **[t]**
5. I wasn't aware of this relationship, but I just read in the encyclopedia that **[t]** is **[h]**'s <mask>

where <mask> is the LM's mask token, and **[h]** and **[t]** are slots that are filled with the head word \(h\) and tail word \(t\) from the given word pair. As a final step, we construct the relation embedding \(\mathbf{x}_{(h,t)}\) from the contextualised representation of the prompt in the LM's output layer. In particular, we have experimented with the following three strategies:

* We take the contextualised representation of the <mask> token as the relation embedding (_mask_).
* We average the contextualised embeddings across all tokens from the prompt (_average_).
* We average the contextualised embeddings across all tokens from the prompt except for the <mask> token (_average w.o. mask_).

In the following, we explain the training objective and how the model is trained.

Figure 1: Schematic overview of the RelBERT model. A word pair is presented to an LM encoder using a prompt. A relation vector, capturing how the two input words are related, is then obtained by aggregating the contextualised embeddings from the output layer.

### Training Objective

The LM encoder used in RelBERT is initialised from a pre-trained RoBERTa model, which was shown to be more effective than BERT in our previous work [46]. It is then fine-tuned using a contrastive loss, based on the idea that word pairs which belong to the same relation should have similar relation embeddings, whereas word pairs belonging to different relations should have embeddings that are further apart. To this end, we assume access to a set of positive training examples \(\mathcal{P}_{r}\) and a set of negative examples \(\mathcal{N}_{r}\), for a number of relations \(r\in\mathcal{R}\). In particular, \(\mathcal{P}_{r}\) contains word pairs \((h,t)\) which belong to relation \(r\), whereas \(\mathcal{N}_{r}\) contains examples of word pairs which do not. We consider three different loss functions to implement this idea.

_Triplet Loss._ The _triplet loss_ [94] relies on training data in the form of triples \((a,p,n)\), where \(a\) is called the anchor, \(p\) is a positive example, and \(n\) is a negative example. The aim of this loss is to ensure that the distance between \(a\) and \(p\) is smaller, by some margin, than the distance between \(a\) and \(n\). In our case, the elements \(a\), \(p\) and \(n\) correspond to word pairs, where \(a,p\in\mathcal{P}_{r}\) and \(n\in\mathcal{N}_{r}\) for some relation \(r\). Let us write \(\mathbf{x}_{a}\) for the relation embedding of a word pair \(a\). We then have the following loss:

\[L_{\mathrm{tri}}=\sum_{r\in\mathcal{R}}\sum_{(a,p,n)\in\mathcal{P}_{r}\times\mathcal{P}_{r}\times\mathcal{N}_{r}}\max\left(0,\left\|\mathbf{x}_{a}-\mathbf{x}_{p}\right\|-\left\|\mathbf{x}_{a}-\mathbf{x}_{n}\right\|+\Delta\right) \tag{1}\]

where \(\Delta>0\) is the margin and \(\left\|\cdot\right\|\) is the \(l^{2}\) norm.
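To make these two ingredients concrete, the following is a minimal sketch (not the released RelBERT implementation) of how a relation embedding can be obtained with template 1 and the _average w.o. mask_ strategy, and how the triplet loss of Eq. (1) is then computed for a single \((a,p,n)\) triple. It assumes the HuggingFace `transformers` library and a RoBERTa checkpoint; the helper names are illustrative.

```python
# A minimal sketch: encode word pairs with a prompt template and compute the
# triplet loss of Eq. (1). Assumes the `transformers` library and a RoBERTa
# checkpoint; the names encode_pair and TEMPLATE are illustrative.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "roberta-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)

# Template 1, with <mask> replaced by the tokenizer's actual mask token.
TEMPLATE = ("Today, I finally discovered the relation between {h} and {t} : "
            "{h} is the " + tokenizer.mask_token + " of {t}")

def encode_pair(head: str, tail: str) -> torch.Tensor:
    """Return a relation embedding using the 'average w.o. mask' strategy."""
    prompt = TEMPLATE.format(h=head, t=tail)
    inputs = tokenizer(prompt, return_tensors="pt")
    hidden = model(**inputs).last_hidden_state[0]           # (seq_len, dim)
    is_mask = inputs["input_ids"][0] == tokenizer.mask_token_id
    return hidden[~is_mask].mean(dim=0)                     # average, mask token excluded

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Single-triple version of Eq. (1), using the l2 norm."""
    d_pos = torch.norm(anchor - positive, p=2)
    d_neg = torch.norm(anchor - negative, p=2)
    return torch.clamp(d_pos - d_neg + margin, min=0.0)

# Example: anchor and positive from the same relation, negative from another.
x_a = encode_pair("Paris", "France")
x_p = encode_pair("Rome", "Italy")
x_n = encode_pair("sofa", "couch")
print(triplet_loss(x_a, x_p, x_n))
```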
_InfoNCE._ Information noise contrastive estimation (InfoNCE) [95] addresses two potential limitations of the triplet loss. First, while the triplet loss only considers one negative example at a time, InfoNCE can efficiently contrast each positive example with a whole batch of negative examples. Second, while the triplet loss uses the \(l^{2}\) norm, InfoNCE relies on the cosine similarity, which tends to be better suited for comparing embeddings. The InfoNCE loss can be defined as follows:

\[L_{\mathrm{nce}}=\sum_{r\in\mathcal{R}}\sum_{(a,p)\in\mathcal{P}_{r}\times\mathcal{P}_{r}}\left(-\log\frac{\exp\left(\frac{\cos(\mathbf{x}_{a},\mathbf{x}_{p})}{\tau}\right)}{\exp\left(\frac{\cos(\mathbf{x}_{a},\mathbf{x}_{p})}{\tau}\right)+\sum_{n\in\mathcal{N}_{r}}\exp\left(\frac{\cos(\mathbf{x}_{a},\mathbf{x}_{n})}{\tau}\right)}\right) \tag{2}\]

where \(\tau\) is a temperature parameter to control the scale of the exponential and \(\cos\) is the cosine similarity.

_InfoLOOB._ Info-leave-one-out bound (InfoLOOB) [96] is a variant of InfoNCE, in which the positive example is omitted from the denominator. This is aimed at preventing the saturation of the loss value, which can occur with InfoNCE due to dominant positives. Applied to our setting, the loss is as follows:

\[L_{\mathrm{loob}}=\sum_{r\in\mathcal{R}}\sum_{(a,p)\in\mathcal{P}_{r}\times\mathcal{P}_{r}}\left(-\log\frac{\exp\left(\frac{\cos(\mathbf{x}_{a},\mathbf{x}_{p})}{\tau}\right)}{\sum_{n\in\mathcal{N}_{r}}\exp\left(\frac{\cos(\mathbf{x}_{a},\mathbf{x}_{n})}{\tau}\right)}\right) \tag{3}\]

\begin{table} \begin{tabular}{l c c c c} \hline \hline Dataset & RelSim & ConceptNet & NELL & T-REX \\ \hline \#relations (train/val/test) & 89/89/- & 28/18/16 & 31/4/6 & 721/602/24 \\ Average \#positive examples per relation & 14.7/3.7/- & 20,824/66/74 & 177/219/225 & 1,767/529/4 \\ Relation Hierarchy & True & False & False & False \\ Domain & Concepts & Concepts & Named entities & Named entities \\ \hline \hline \end{tabular} \end{table} Table 2: Statistics of the training sets that are considered for RelBERT, including the number of relations, the average number of triples per relation in the training / validation / test sets, and the number of unique positive triples; we also specify whether the relations are organised in a hierarchy, the domain from which the entities are coming.

### Training Data

As described in SS 3.2, to train RelBERT we need positive and negative examples of word pairs belonging to particular relations. In this section, we describe the four datasets that we considered for training RelBERT. The main properties of these datasets are summarised in Table 2. We now present each dataset in more detail. For each dataset, we have a training and validation split, which are used for training RelBERT and for selecting the hyperparameters. In addition, for most of the datasets, we also select a test set, which will be used for evaluating the model (see SS 4.1).

_RelSim._ The Relational Similarity Dataset (RelSim)3 was introduced for SemEval 2012 Task 2 [25]. It contains crowdsourced judgements about 79 fine-grained semantic relations, which are grouped into 10 parent categories. Table 3 shows word pairs randomly sampled from the highest ranked word pairs in each parent category of RelSim. For each semantic relation, a list of word pairs is provided in RelSim, with each word pair being assigned a prototypicality score4. To convert this dataset into the format that we need for training RelBERT, we consider the 79 fine-grained relations and the 10 parent relations separately. For the fine-grained relations, we choose the 10 most prototypical word pairs, i.e.
the word pairs with the highest scores, as positive examples, while the 10 lowest ranked word pairs are used as negative examples. For the parent relations, the set of positive examples contains the positive word pairs of each of the fine-grained relations that belong to the parent relation. The negative examples for the parent relations are taken to be the positive examples of the other relations. This is because the parent relations are mutually exclusive, whereas the semantic distinction between the fine-grained relations is often very subtle. From the resulting dataset, we randomly choose 80% of the word pairs for training, and we keep the remaining 20% as a validation set. Footnote 3: Our preprocessed version of this dataset is available at [https://huggingface.co/datasets/relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity); the original dataset is available at [https://sites.google.com/site/semeval2012task2/download](https://sites.google.com/site/semeval2012task2/download).

_ConceptNet._ ConceptNet5 [97] is a commonsense knowledge graph. It encodes semantic relations between concepts, which can be single nouns or short phrases. The knowledge graph refers to a total of 34 different relations. Since the original ConceptNet contains more than two million triples, we employ the version released by [98], where the triples are filtered by their confidence score. We use the _test set_ consisting of the 1200 most confident tuples as an evaluation dataset, the _dev1_ and _dev2_ sets consisting of the next 1200 most confident tuples as our validation set, and the _training set_ consisting of 600k tuples as our training set6. We have disregarded any triples with negated relations such as _NotCapableOf_ or _NotDesires_, because they essentially indicate the lack of a relationship. The positive examples for a given relation are simply the word pairs which are asserted to have this relation in the knowledge graph. The negative examples for a given relation are taken to be the positive examples for the other relations, i.e. \(\mathcal{N}_{r}=\{(a,b)\in\mathcal{P}_{\hat{r}}|\hat{r}\in\mathcal{R}\backslash\{r\}\}\).

\begin{table} \begin{tabular}{l l} \hline \hline Relation & Examples \\ \hline Case Relation & [designer, fashions], [preacher, parishioner], [hunter, rifle] \\ Meronym (Part-Whole) & [building, wall], [team, player], [movie, scene] \\ Antonym (Contrast) & [smooth, rough], [difficult, easy], [birth, death] \\ Space-Time & [refrigerator, food], [factory, product], [pool, swimming] \\ Representation & [diploma, education], [groan, pain], [king, crown] \\ Hypernym (Class Inclusion) & [furniture, chair], [furniture, chair], [flower, daisy] \\ Synonym (Similar) & [couch, sofa], [sadness, melancholia], [confident, arrogance] \\ Attribute & [steel, strong], [glass, shattered], [miser, greed] \\ Non Attribute & [empty, full], [incomprehensible, understood], [destitution, abundance] \\ Cause-Purpose & [tragedy, tears], [fright, scream], [battery, laptop] \\ \hline \hline \end{tabular} \end{table} Table 3: Examples of word pairs from each parent relation category in the RelSim dataset.

_NELL-One._ NELL [99] is a system to collect structured knowledge from the web. The authors of [100] compiled and cleaned up the latest dump file of NELL at the time of publication to create a knowledge graph, called NELL-One7, for one-shot relational learning.
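Before turning to the remaining datasets in detail, the following is a minimal sketch of the positive/negative construction described above, in which the negatives of a relation are simply the positives of all other relations; the input format (a mapping from relation names to word pairs) is illustrative rather than the actual dataset loader.

```python
# A minimal sketch of the negative-example construction described above:
# N_r consists of the positive pairs of every relation other than r.
# The toy data uses ConceptNet-style relation names purely for illustration.
from typing import Dict, List, Tuple

WordPair = Tuple[str, str]

def build_negatives(positives: Dict[str, List[WordPair]]) -> Dict[str, List[WordPair]]:
    return {
        rel: [pair
              for other_rel, pairs in positives.items() if other_rel != rel
              for pair in pairs]
        for rel in positives
    }

positives = {
    "MadeOf": [("bottle", "plastic"), ("book", "paper")],
    "AtLocation": [("fish", "water"), ("book", "shelf")],
}
negatives = build_negatives(positives)
print(negatives["MadeOf"])  # the positives of the other relations
```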
We employ NELL-One with its original split from [100], which avoids any overlap between the relation types appearing in the test set, on the one hand, and the relation types appearing in the training and validation sets, on the other hand. Similar as for ConceptNet, the positive examples for a given relation are the word pairs that are asserted to belong to that relation in the training set, whereas the negative examples for a relation are the positive examples of the other relations in the training set. Footnote 7: Our preprocessed version of this dataset is available at [https://huggingface.co/datasets/relbert/nell_relational_similarity](https://huggingface.co/datasets/relbert/nell_relational_similarity); the original dataset is available at [https://github.com/xwhan/One-shot-Relational-learning](https://github.com/xwhan/One-shot-Relational-learning) _T-REX_. T-REX8[101] is a knowledge base that was constructed by aligning Wikipedia and Wikidata. It contains a total of 20 million triples, all of which are aligned with sentences from introductory sections of Wikipedia articles. We first remove triples if either their head or tail is not a named entity, which reduces the number of triples from 20,877,472 to 12,561,573, and the number of relations from 1,616 to 1,470. Then, we remove relations with fewer than three triples, as we need at least three triples for each relation type to enable fine-tuning SS 5.1, which reduces the number of triples to 12,561,250, and the number of relations to 1,237. One problem with this dataset is that it contains a number of distinct relations which intuitively have the same meaning. For example, the relations _band_ and _music by_ both represent "A song played by a musician". Therefore, we manually mapped such relations onto the same type. Note that this is useful because the strategy for selecting negative examples when training RelBERT implicitly assumes that relations are disjoint. For the same reason, we manually removed relations that subsume more specific relations. For example, the relationship _is a_ refers to "hypernym of", but T-REX also covers more specific forms of hypernymy such as _fruit of, religion_, and _genre_. Another example is _is in_, which models the relation "located in", but T-REX also contains finer-grained variants of this relation, such as _town_, _state_, _home field_, and _railway line_. We thus remove triples involving relations such as _is a_ and _is in_. This filtering resulted in a reduction to 12,410,726 triples with 839 relation types. We use this dataset, rather than Wikidata itself, because the fact that a triple is asserted in the introductory section of a Wikipedia article suggests that it expresses salient knowledge. This is important because our aim in fine-tuning RelBERT is to distill relational knowledge from the pre-trained LM itself, rather than to learn the knowledge from the training set. We thus ideally want to limit the training data to word pairs whose relationship is captured by the pre-trained LM. The number of times an entity appears in this dataset can be used as an estimate of the salience of that entity. With this in mind, we removed all triples involving entities that appear less than five times in the dataset, which further reduces the number of triples to 1,616,065. To create a test set, we randomly chose 34 relation types and we manually selected around 100 verified triples from those relation types. 
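Putting the T-REX filtering steps above together, the following rough sketch illustrates the per-relation and entity-frequency thresholds stated in the text; the triple format, the named-entity check, and the manual relation mapping are placeholders rather than the actual preprocessing script.

```python
# A rough sketch of the T-REX filtering described above. Thresholds follow the
# text (at least 3 triples per relation, entities appearing at least 5 times);
# `relation_map` stands in for the manual merging/removal of relation types.
from collections import Counter

def filter_trex(triples, relation_map=None, min_triples_per_relation=3, min_entity_freq=5):
    """triples: iterable of (head, relation, tail) with named-entity arguments."""
    relation_map = relation_map or {}
    mapped = []
    for h, r, t in triples:
        r = relation_map.get(r, r)
        if r is None:              # relations mapped to None are removed manually
            continue
        mapped.append((h, r, t))
    rel_counts = Counter(r for _, r, _ in mapped)
    mapped = [x for x in mapped if rel_counts[x[1]] >= min_triples_per_relation]
    ent_counts = Counter(e for h, _, t in mapped for e in (h, t))
    return [x for x in mapped
            if ent_counts[x[0]] >= min_entity_freq and ent_counts[x[2]] >= min_entity_freq]
```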
The training and validation sets are created by splitting the remaining relations 80:20. Footnote 8: Our preprocessed version of this dataset is available at [https://huggingface.co/datasets/relbert/nell_relational_similarity](https://huggingface.co/datasets/relbert/nell_relational_similarity); the original dataset is available at [https://github.com/xwhan/One-shot-Relational-learning](https://github.com/xwhan/One-shot-Relational-learning)

## 4 Evaluation Tasks

We evaluate RelBERT on two relation-centric tasks: analogy questions (unsupervised) and lexical relation classification (supervised). In this section, we describe these tasks and introduce the benchmarks included in our evaluation.

### Analogy Questions

Analogy questions are multiple-choice questions, where a given word pair is provided, referred to as the _query pair_, along with a set of candidate answers. The task then consists of predicting which of the candidate answers is most analogous to the query pair. In other words, the task is to find the word pair whose relationship best resembles the relationship between the words in the query [11]. To solve this task using RelBERT, we simply predict the candidate answer whose relation embedding is most similar to the embedding of the query pair, in terms of cosine similarity.9 A minimal code sketch of this selection step is included after the descriptions of the six datasets below. For our evaluation, we first consider the following six analogy question datasets10:

* **SAT**: The SAT exam is a US college admission test. Turney [11] collected a benchmark of 374 word analogy problems, consisting primarily of problems from these SAT tests. Each instance has five candidates. The instances are aimed at college applicants, and are thus designed to be challenging for humans.
* **U2**: Following [102], who used word analogy problems from an educational website11, we compiled analogy questions from the same resource12. In particular, they used UNIT 2 of the analogy problems from the website, which have the same form as those from the SAT benchmark, but rather than college applicants, they are aimed at children in grades 4 to 12 from the US school system (i.e. from age 9 onwards). We split the dataset into 24 questions for validation and 228 questions for testing. Each question has 4 answer candidates. Footnote 11: [https://www.englishforeveryone.org/Topics/Analogies.html](https://www.englishforeveryone.org/Topics/Analogies.html)
* **U4**: We have collected another benchmark from the UNIT 4 problems on the same website that was used for the U2 dataset. These UNIT 4 problems are organised into 5 difficulty levels: high-beginning, low-intermediate, high-intermediate, low-advanced and high-advanced. The low-advanced level is stated to be at the level of the SAT tests, whereas the high-advanced level is stated to be at the level of the GRE test (which is used for admission into graduate schools). The resulting U4 dataset has 48 questions for validation and 432 questions for testing. Each question has 4 answer candidates.
* **Google**: The Google analogy dataset [103] has been one of the most commonly used benchmarks for evaluating word embeddings13. This dataset contains a mix of semantic and morphological relations such as _capital-of_ and _singular-plural_, respectively. The dataset was tailored to the evaluation of word embeddings in a predictive setting. We constructed word analogy problems from the Google dataset by choosing for each correct analogy pair a number of negative examples.
To obtain sufficiently challenging negative examples, for each query pair (e.g. _Paris-France_) we extracted three negative instances:

1. a pair of two random head words from the input relation type (e.g. _Rome-Oslo_);
2. a pair of two random tail words from the input relation type (e.g. _Germany-Canada_);
3. a random word pair from a relation type of the same high-level category (i.e. semantic or morphological) as the input relation type (e.g. _Argentina-peso_).

The resulting dataset contains 50 validation and 500 test questions, each with 4 answer candidates. Footnote 12: We use the dataset from the website with permission limited to research purposes. Footnote 13: The original data is available at [https://aclweb.org/aclwiki/Google_analogy_test_set_(State_of_the_art)](https://aclweb.org/aclwiki/Google_analogy_test_set_(State_of_the_art)).

\begin{table} \begin{tabular}{l c c} \hline \hline Dataset & Avg. \#Answer Candidates & \#Questions \\ \hline SAT & 5/5 & - / 374 \\ U2 & 4/4 & 24 / 228 \\ U4 & 4/4 & 48 / 432 \\ Google & 4/4 & 50 / 500 \\ BATS & 4/4 & 199 / 1,799 \\ SCAN & 72/74 & 178 / 1,616 \\ NELL-One & 5/7 & 400 / 600 \\ T-REX & 74/48 & 496 / 183 \\ ConceptNet & 19/17 & 1,112 / 1,192 \\ \hline \hline \end{tabular} \end{table} Table 4: Main statistics of the analogy question datasets, showing the average number of answer candidates, and the total number of questions (validation / test).

* **BATS**: The coverage of the Google dataset is known to be limiting, and BATS [6] was developed in an attempt to address its main shortcomings. BATS includes a larger number of concepts and relations, which are split into four categories: lexicographic, encyclopedic, and derivational and inflectional morphology14. We follow the same procedure as for the Google dataset to convert BATS into the analogy question format. The resulting dataset contains 199 validation and 1,799 test questions, each with 4 answer candidates. Footnote 14: The original data is available at [https://vecto.space/projects/BATS/](https://vecto.space/projects/BATS/)
* **SCAN**: The relation mapping problem [104] is to find a bijective mapping between a set of relations from some source domain and a corresponding set of relations from a given target domain. SCAN15 [105] is an extension of the problems that were collected in [104]. While [104] contains 10 scientific and 10 metaphorical domains, SCAN extends them with another 443 metaphorical domains and 2 scientific domains. A single SCAN instance contains a list of the source and the target words (\(\mathbf{a}=[a_{1},\ldots,a_{m}]\) and \(\mathbf{b}=[b_{1},\ldots,b_{m}]\)). We convert such an instance into an analogy question, where the query is \([a_{i},a_{j}]\), the ground truth is \([b_{i},b_{j}]\), and the negative candidates are \([b_{\hat{i}},b_{\hat{j}}]\) for \(\{(\hat{i},\hat{j})\in\{1,\ldots,m\}\times\{1,\ldots,m\}|(\hat{i},\hat{j})\neq(i,j)\}\). This results in 178 and 1,616 questions for the validation and test sets, respectively. The number of answer candidates per question is 74 on average, which makes this benchmark particularly challenging. Footnote 15: The original dataset is available at [https://github.com/taczin/SCAN_analogies](https://github.com/taczin/SCAN_analogies).
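As noted at the start of this subsection, our analogy predictions are obtained by comparing relation embeddings with cosine similarity. The following minimal sketch illustrates this selection step; it reuses the illustrative `encode_pair` helper from the sketch in SS 3.2 and is not the actual evaluation script.

```python
# A minimal sketch of the zero-shot analogy procedure: choose the candidate
# pair whose relation embedding is most similar (by cosine similarity) to the
# embedding of the query pair. `encode_pair` is the illustrative helper
# defined in the earlier sketch; this is not the official evaluation code.
import torch

def solve_analogy(query, candidates):
    """query: (head, tail); candidates: list of (head, tail) word pairs."""
    q = encode_pair(*query)
    sims = [float(torch.cosine_similarity(q, encode_pair(*c), dim=0))
            for c in candidates]
    return max(range(len(candidates)), key=lambda i: sims[i])

query = ("word", "language")
candidates = [("paint", "portrait"), ("note", "music"),
              ("tale", "story"), ("week", "year")]
best = solve_analogy(query, candidates)
print(candidates[best])  # ideally ("note", "music"): words make up a language as notes make up music
```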
In addition, we also converted the validation and test splits of T-REX, NELL and ConceptNet, which were introduced \begin{table} \begin{tabular}{l l l} \hline \hline Dataset & Domain & Example \\ \hline SAT & College Admission Test & [beauty, aesthete, pleasure, hedonist] \\ \hline \multirow{6}{*}{U2} & Grade4 & [rock, hard, water, wet] \\ & Grade5 & [hurricane, storm, table, furniture] \\ & Grade6 & [microwave, heat, refrigerator, cool] \\ & Grade7 & [clumsy, grace, doubtful, faith] \\ & Grade8 & [hidden, visible, flimsy, sturdy] \\ & Grade9 & [panacea, cure, contagion, infect] \\ & Grade10 & [grain, silo, water, reservoir] \\ & Grade11 & [thwart, frustrate, laud, praise] \\ & Grade12 & [lie, prevaricate, waver, falter] \\ \hline \multirow{6}{*}{U4} & Low Intermediate & [accident, unintended, villain, evil] \\ & Low Advanced & [galleon, sail, quarantine, isolate] \\ & High Beginning & [salesman, sell, mechanic, repair] \\ & High Intermediate & [classroom, desk, church, pew] \\ & High Advanced & [erudite, uneducated, fervid, dispassionate] \\ \hline \multirow{6}{*}{BATS} & Inflectional Morphology & [neat, neater, tasty, tastier] \\ & Derivational Morphology & [available, unavailable, interrupted, uninterrupted] \\ & Encyclopedic Semantics & [stockholm, sweden, belgrade, Serbia] \\ & Lexicographic Semantics & [elephant, herd, flower, bouquet] \\ \hline \multirow{2}{*}{Google} & Encyclopedic Semantics & [Canada, dollar, Croatia, kuna] \\ & Morphological & [happy, happily, immediate, immediately] \\ \hline \multirow{2}{*}{SCAN} & Metaphor & [grounds for a building, solid, reasons for a theory, rational] \\ & Science & [conformance, breeding, adaptation, mating] \\ \hline NELL & Named Entities & [Miami Dolphins, Cam Cameron, Georgia Tech, Paul Johnson] \\ \hline T-REX & Named Entities & [Washington, Federalist Party, Nelson Mandela, ANC] \\ \hline ConceptNet & Concepts & [bottle, plastic, book, paper] \\ \hline \hline \end{tabular} \end{table} Table 5: An example from each domain of the analogy question benchmarks. in SS 3.3, into the format of analogy questions. Note that the validation split is used in our ablation study to compare RelBERT training sets, but not used in the main experiment, where we solve analogy questions in the zero-shot setting. Thus, we do not consider approaches that require validation as well as training data, such as [12]. These analogy questions were constructed by taking two word pairs from the same relation type, one of which is used as the query while the other is used as the correct answer. To create negatives for each positive pair, we take \(N\) pairs from each of the other relations. We also add the reversed answer pair to the negative (i.e. for the positive pair \((h,t)\) we would add \((t,h)\) as a negative), so the number of the negative pairs is \(|\mathcal{R}|\times N+1\) in each split. To create benchmarks with different characteristics, we used \(N=2\) for T-REX and \(N=1\) for Nell and ConceptNet. Table 4 summarises the main features of the analogy question datasets, and Table 5 shows an example from each category and dataset. ### Lexical Relation Classification We consider the supervised task of relation classification. This task amounts to classifying word pairs into a pre-defined set of possible relation types.16 To solve this task, we train a multi-layer perceptron (MLP) with one hidden layer, which takes the RelBERT relation embedding of the word pair as input. 
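As an illustration of this setup, the sketch below trains such a classifier on precomputed relation embeddings, using scikit-learn's `MLPClassifier` as a stand-in for the actual implementation; the embeddings could, for instance, be obtained with the illustrative `encode_pair` helper from SS 3.

```python
# A minimal sketch (not the actual experimental code): an MLP with one hidden
# layer trained on precomputed RelBERT relation embeddings for lexical relation
# classification. Hidden size and learning rate correspond to values from the
# tuning grid used in the experiments.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_relation_classifier(embeddings, labels, hidden_size=100, lr=0.001):
    """embeddings: list/array of relation vectors; labels: relation type names."""
    clf = MLPClassifier(hidden_layer_sizes=(hidden_size,),
                        learning_rate_init=lr,
                        solver="adam",
                        max_iter=500)
    clf.fit(np.stack(embeddings), labels)
    return clf

# Usage (illustrative): embeddings of pairs such as ("car", "wheel") labelled
# "Meronym", ("flower", "daisy") labelled "Hypernym", etc.
# clf = train_relation_classifier(train_embs, train_labels)
# test_predictions = clf.predict(np.stack(test_embs))
```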
The RelBERT encoder itself is frozen, since our focus is on evaluating the quality of the RelBERT relation embeddings. We consider the following widely-used multi-class relation classification benchmarks: K&H+N [106], BLESS [107], ROOT09 [108], EVALution [109], and CogALex-V Subtask 2 [110]. Table 6 shows the size of the training, validation and test splits for each of these datasets, as well as the kinds of relations they cover. The hyperparameters of the MLP classifier are tuned on the validation split of each dataset. In particular, we tune the learning rate from \([0.001,0.0001,0.00001]\) and the hidden layer size from \([100,150,200]\). CogALex-V has no validation split, so for this dataset we employ the default configuration of Scikit-Learn [111], which uses a 100-dimensional hidden layer and is optimized using Adam with a learning rate of 0.001. Footnote 16: Preprocessed versions of the datasets are available at [https://huggingface.co/datasets/relbert/lexical_relation_classification](https://huggingface.co/datasets/relbert/lexical_relation_classification)

\begin{table} \begin{tabular}{l l l l l l} \hline \hline & \multicolumn{1}{c}{BLESS} & CogALex & EVALution & K\&H+N & ROOT09 \\ \hline Antonym & - & 241 / 360 & 1095 / 90 / 415 & - & - \\ Attribute & 1892 / 143 / 696 & - & 903 / 72 / 322 & - & - \\ Co-hyponym & 2529 / 154 / 882 & - & - & 18134 / 1313 / 6349 & 2222 / 162 / 816 \\ Event & 2657 / 212 / 955 & - & - & - & - \\ Hypernym & 924 / 63 / 350 & 255 / 382 & 1327 / 94 / 459 & 3048 / 202 / 1042 & 2232 / 149 / 809 \\ Meronym & 2051 / 146 / 746 & 163 / 224 & 218 / 13 / 86 & 755 / 48 / 240 & - \\ Possession & - & - & 377 / 25 / 142 & - & - \\ Random & 8529 / 609 / 3008 & 2228 / 3059 & - & 18319 / 1313 / 6746 & 4479 / 327 / 1566 \\ Synonym & - & 167 / 235 & 759 / 50 / 277 & - & - \\ \hline \hline \end{tabular} \end{table} Table 6: Number of instances for each relation type across training / validation / test sets of all lexical relation classification datasets.

## 5 Experimental Setting

In this section, we explain the RelBERT training details (SS 5.1) and we introduce the baselines for analogy questions (SS 5.2) and lexical relation classification (SS 5.3). Throughout this paper, we rely on the weights that were shared by HuggingFace [112] for all pre-trained LMs. A complete list of the models we used can be found in Appendix A.

### RelBERT Training

In our experiments, we consider a number of variants of RelBERT, which differ in terms of the pre-trained LM that was used for initialising the model, the loss function (SS 3.2), and the training data (SS 3.3). In each case, RelBERT is trained for 10 epochs. Moreover, we train one RelBERT model for each of the five prompt templates (SS 3.1). The final model is obtained by selecting the epoch and prompt template that achieved the best performance on the validation split, in terms of accuracy.17 The default configuration for RelBERT is to fine-tune a RoBERTa\({}_{\text{BASE}}\) or RoBERTa\({}_{\text{LARGE}}\) model using InfoNCE on the RelSim dataset. We will refer to the resulting models as RelBERT\({}_{\text{BASE}}\) and RelBERT\({}_{\text{LARGE}}\) respectively. The other hyper-parameters are fixed as follows. When using the triplet loss, we set the margin \(\Delta\) to 1, the learning rate to 0.00002 and the batch size to 32. When using InfoNCE or InfoLOOB, we set the temperature \(\tau\) to 0.5, the learning rate to 0.000005 and the batch size to 400.
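For concreteness, the default configuration described above can be summarised as follows; this is only a sketch, and the field names are illustrative rather than those of the released training scripts.

```python
# Default RelBERT fine-tuning settings as stated in the text (illustrative
# field names; not the released configuration files).
DEFAULT_CONFIG = {
    "base_model": "roberta-large",   # RelBERT-LARGE (use "roberta-base" for RelBERT-BASE)
    "training_data": "RelSim",
    "loss": "InfoNCE",               # alternatives: "triplet", "InfoLOOB"
    "epochs": 10,
    "n_templates": 5,                # one model per template; best kept by validation accuracy
    "temperature": 0.5,              # InfoNCE / InfoLOOB
    "learning_rate": 5e-6,           # InfoNCE / InfoLOOB (triplet: margin=1, lr=2e-5, batch=32)
    "batch_size": 400,
}
```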
In all cases, we fix the random seed to 0 and use Adam [113] as the optimiser. To select the aggregation strategy, as a preliminary experiment, we fine-tuned RelBERT\({}_{\text{BASE}}\) with each of the three strategies suggested in SS 3.2. As we found that _average w.o. mask_ achieved the best accuracy on the validation set of RelSim, we used this as the default aggregation strategy. Footnote 17: The best template and epoch for each model are specified in Appendix B.

### Baselines for Analogy Questions

We now introduce the baselines we considered for solving analogy questions.

_Latent Relation Analysis._ Latent Relation Analysis (LRA) [1] takes inspiration from the seminal Latent Semantic Analysis (LSA) model for learning document embeddings [114]. The key idea behind LSA was to apply Singular Value Decomposition (SVD) to a document-term co-occurrence matrix to obtain low-dimensional vector representations of documents. LRA similarly uses SVD to learn relation vectors. In particular, the method also constructs a co-occurrence matrix, where the rows now correspond to word pairs and the columns correspond to lexical patterns. Each matrix entry captures how often the corresponding word pair appears together with the corresponding lexical pattern in a given corpus. To improve the quality of the representations, a PMI-based weighting scheme is used, and the method also counts occurrences of synonyms of the words in a given pair. To solve an analogy question, we can compute the LRA embeddings of the query pair and the candidate pairs, and then select the answer whose embedding is closest to the embedding of the query pair, in terms of cosine similarity. For LRA, we only report the published results from [1] for the SAT dataset.

_Word Embedding._ Since the introduction of the Word2Vec models [3], word analogies have been a popular benchmark for evaluating different word embedding models. This stems from the observation that in many word embedding models, the relation between two words \(A\) and \(B\) is to some extent captured by the vector difference of their embeddings. Letting \(\mathsf{wv}(A)\) be the word embedding of a word \(A\), we can thus learn relation embeddings of the form \(\mathbf{x}_{(A,B)}=\mathsf{wv}(B)-\mathsf{wv}(A)\). Using these embeddings, we can solve word analogy questions by again selecting the answer candidate whose embedding is most similar to that of the query pair, in terms of cosine similarity. Since some of the analogy questions include rare words, common word embedding models such as word2vec [3] and GloVe [4] suffer from out-of-vocabulary issues. We therefore used fastText [5] trained on Common Crawl with subword information, which can handle out-of-vocabulary words by splitting them into smaller chunks of characters18. Footnote 18: The embedding model is available at [https://fasttext.cc/](https://fasttext.cc/).

_Language Models._ To solve analogy questions using a pre-trained language model, we proceed as follows. Let \((A,B)\) be the query pair. For each answer candidate \((C,D)\) we construct the sentence "\(A\) is to \(B\) what \(C\) is to \(D\)", following [16]. We then compute the perplexity of each of these sentences, and predict the candidate that gives rise to the lowest perplexity. The exact computation depends on the type of language model that is considered.
For CLMs [16; 115; 116], such as those in the GPT family, the perplexity of a sentence \(\boldsymbol{s}\) can be computed as follows:

\[f(\boldsymbol{s})=\exp\left(-\frac{1}{t}\sum_{j=1}^{t}\log P_{\text{clm}}(s_{j}|\boldsymbol{s}_{j-1})\right) \tag{4}\]

where \(\boldsymbol{s}\) is tokenized as \([s_{1}...s_{t}]\) and \(P_{\text{clm}}(s|\boldsymbol{s})\) is the likelihood from a CLM's next-token prediction. For masked language models (MLMs), such as those in the BERT family [15; 24], we instead use pseudo-perplexity [117], which is defined as in (4) but with \(P_{\text{mask}}(s_{j}|\boldsymbol{s}_{\setminus j})\) instead of \(P_{\text{clm}}(s_{j}|\boldsymbol{s}_{j-1})\), where \(\boldsymbol{s}_{\setminus j}=[s_{1}\ldots s_{j-1}\,\langle\text{mask}\rangle\,s_{j+1}\ldots s_{t}]\) and \(P_{\text{mask}}(s_{j}|\boldsymbol{s}_{\setminus j})\) is the pseudo-likelihood [118] that the masked token is \(s_{j}\). Finally, for encoder-decoder LMs (ED LMs) [17; 119], we split the template into two parts: the phrase "\(A\) is to \(B\)" is fed into the encoder, and we then use the decoder to compute the perplexity of the phrase "\(C\) is to \(D\)", using the probability \(P_{\text{clm}}\) of the decoder, conditioned on the encoder output. We compare GPT-2 [116], GPT-J [120], OPT [26], OPT-IML [27] as CLMs, BERT [15] and RoBERTa [24] as MLMs, and T5 [17], Flan-T5 [28], Flan-UL2 [121] as ED LMs.

_OpenAI Models._ OpenAI19 released a commercial API to provide access to their private in-house models such as GPT-3, GPT-4, and ChatGPT (GPT-3.5-turbo). We have used this API to obtain results for those models. For GPT-3, we use the May 2023 endpoint of davinci, the largest GPT-3 model, and follow the same approach as for the public LMs, as explained above (i.e. choose the candidate with the lowest perplexity). We also include the zero-shot results of GPT-3 that were reported in the original GPT-3 paper [16], which we refer to as GPT-3\({}_{\text{original}}\). Note that the models that can be accessed via the OpenAI API are subject to change every six months, which unfortunately limits the reproducibility of the reported results. For the conversational LMs, i.e. ChatGPT and GPT-4, the API does not allow us to compute perplexity scores. We therefore do not include them in our main experiments, but an analysis of these models will be provided in SS 7.2.1. Footnote 19: [https://openai.com/](https://openai.com/)

### Baselines for Lexical Relation Classification

LexNet [122] and SphereRE [123] are the current state-of-the-art (SotA) classifiers on the considered lexical relation classification datasets. Both methods rely on static word embeddings [124; 3]. LexNet trains an LSTM [125] on the word pair, treating it as a sequence of two words, where each word is mapped to a feature vector consisting of a number of lexical features, such as its part-of-speech tag and its word embedding. SphereRE employs hyperspherical learning [126] on top of the word embeddings, learning a feature map from the word embeddings of a word pair to a relation embedding that is distributed over a hyperspherical space. In addition to those SotA methods, we use a simple baseline based on word embeddings. Specifically, we train an MLP with a hidden layer in the same way as explained in SS 4.2.
As possible input representations for this classifier, we consider the concatenation of the word embeddings (_cat_) and the vector difference of the word embeddings (_diff_), possibly augmented with the component-wise product of the word embeddings (_cat+dot_ and _diff+dot_), which has been shown to provide improvements in lexical relation classification tasks [127]. We experiment with word embeddings from GloVE20[4] and fastText21. Finally, we include the results of pair2vec [54], which is a relation embedding model that was trained by aligning word pair embeddings with LSTM-based encodings of sentences where the corresponding word pairs co-occur. Footnote 20: The embedding model is available from [https://nlp.stanford.edu/projects/glove/](https://nlp.stanford.edu/projects/glove/). Footnote 21: We use the same embedding model used in § 5.2. ## 6 Experimental Results We report the experimental results for the analogy questions benchmarks in SS 6.1 and for lexical relation classification in SS 6.2. ### Results on Analogy Questions Table 7 shows the results for each analogy question benchmark in terms of accuracy. We can see that RelBERT substantially outperforms the baselines in all cases, where RelBERT\({}_{\text{BASE}}\) is the best for T-REX, and RelBERT\({}_{\text{LARGE}}\) is the best for the remaining datasets. Remarkably, in the case of SAT, none of the pre-trained LMs is able to outperform LRA, a statistical baseline which is almost 20 years old. Moreover, on the Google, SCAN and NELL datasets the LM baselines are outperformed by fastText, a static word embedding model. This clearly shows that LMs struggle with identifying analogies in the zero-shot setting. SCAN and ConceptNet overall emerge as the most challenging benchmarks, which can be largely explained by the large number of answer candidates. Even with the best model, RelBERT\({}_{\text{LARGE}}\), the accuracy is only 27.2% on SCAN and 47.5% on ConceptNet. For T-REX, which also involves a large number answer candidates, we can see that the LM baselines are clearly outperformed by RelBERT. Comparing the LM baselines, we find that ED LMs such as Flan-T5\({}_{\text{XXL}}\) and Flan-UL2 achieve the best overall results, although CLMs such as GPT-J and OPT\({}_{\text{20B}}\) are also competitive among the larger models. For the LM baselines, unsurprisingly there is a strong correlation between model size and performance. To see this impact more closely, Figure 2 plots the accuracy of each LM in function of model size. We can see that the RelBERT models achieve the best result despite being two orders of magnitude smaller than Flan-T5\({}_{\text{XXL}}\) and Flan-UL2. Interestingly, RoBERTa usually outperforms the other LM baselines of comparable size, except on NELL and SCAN. This suggests that the strong performance of RelBERT is at least in part due to the use of RoBERTa as the underlying model. Our analysis in SS 7.3.4 will provide further support for this hypothesis. The CLMs (GPT-2, GPT-J, OPT, and OPT-IML) behave rather similarly. They improve as the model size increases, but they are generally worse than the ED LMs and MLMs. Finally, the graphs in Figure 2 make it particularly clear how much the LMs are struggling to compete with fastText in some of the datasets. For example, Flan-T5 and OPT-IML generally outperform fastText only for the largest models, while none of the LMs outperform fastText in SCAN and NELL. Figure 2: The accuracy on each analogy question dataset in function of the number of parameters in each LM. 
Given the superior performance of LMs in many downstream tasks, it is surprising to see LMs underperforming a static word embedding model. Finally, we can confirm that RelBERT outperforms GPT-3\({}_{\text{davinci}}\) in all the datasets. Prediction BreakdownWe now analyse the performance of RelBERT on different categories of analogy questions, considering the categories that were listed in Table 5. First, the results for some of the categories are shown in Table 9, along with some baselines. The results show that both RelBERT models can achieve a high accuracy for morphological relationships, despite not being explicitly trained on such relations. This ability appears to increase along with the model size, as RelBERT\({}_{\text{LARGE}}\) outperforms RelBERT\({}_{\text{BASE}}\) by around 10 percentage points on the morphological relations from BATS, and 6 percentage points on the morphological relations from Google. Figure 3 and Figure 4 show the accuracy along with the difficulty level in U2 and U4. Although we cannot see a clear signal in U2, we can see that models struggle more when the difficulty level is increased in U4, especially for RelBERT\({}_{\text{BASE}}\). Note that the U2 test is designed for children, while U4 is for college students. As comparison systems, we included the accuracy breakdown of fastText as a word embedding baseline, and Flan-UL2 as the best LM baseline. RelBERT\({}_{\text{LARGE}}\) consistently outperforms Flan-UL2 in all cases. The performance of fastText is more inconsistent, showing a strong performance on the morphological relations of the Google analogy dataset, as well as the encyclopedic portion of BATS, but performing poorly in the lexical portion of BATS (34.0 compared to RelBERT\({}_{\text{Large}}\)'s 72.4) and in the metaphors of SCAN \begin{table} \begin{tabular}{l l c c c c c} \hline \hline Model & & BLESS & CogALexV & EVALution & K\&H+N & ROOT09 \\ \hline pair2vec & & 81.7 & 76.9 & 50.5 & 96.9 & 82.9 \\ \hline \multirow{5}{*}{GloVe} & _cat_ & 93.3 & 73.5 & 58.3 & 94.9 & 86.5 \\ & _cat+dot_ & 93.7 & 79.2 & 57.3 & 95.1 & 89.0 \\ & _cat+dot+pair2vec_ & 92.6 & 81.1 & 59.6 & 95.7 & 89.4 \\ & _diff_ & 91.5 & 70.8 & 56.9 & 94.4 & 86.3 \\ & _diff+dot_ & 92.9 & 78.5 & 57.9 & 94.8 & 88.9 \\ & _diff+dot+pair2vec_ & 92.2 & 80.2 & 57.4 & 95.5 & 89.4 \\ \hline \multirow{5}{*}{fastText} & _cat_ & 92.9 & 72.4 & 57.9 & 93.8 & 85.5 \\ & _cat+dot_ & 93.2 & 77.4 & 57.8 & 94.0 & 88.5 \\ & _cat+dot+pair2vec_ & 91.5 & 79.3 & 58.2 & 94.3 & 87.8 \\ & _diff_ & 91.2 & 70.2 & 55.5 & 93.3 & 86.0 \\ & _diff+dot_ & 92.9 & 77.8 & 57.4 & 93.6 & 88.9 \\ & _diff+dot+pair2vec_ & 90.8 & 79.0 & 57.8 & 94.2 & 88.1 \\ \hline \multirow{2}{*}{SotA} & LexNET & 89.3 & - & 60.0 & 98.5 & 81.3 \\ & SphereRE & **93.8** & - & 62.0 & **99.0** & 86.1 \\ \hline RelBERT\({}_{\text{BASE}}\) & 90.0 & 83.7 & 64.2 & 94.0 & 88.2 \\ RelBERT\({}_{\text{LARGE}}\) & 92.0 & **85.0** & **68.4** & 95.6 & **90.4** \\ \hline \hline \end{tabular} \end{table} Table 8: Micro F1 score (%) for lexical relation classification. 
\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{2}{c}{Google} & \multicolumn{3}{c}{BATS} & \multicolumn{2}{c}{SCAN} \\ \cline{2-7} & Encyclopedic & Morphological & Encyclopedic & Lexical & Morphological & Metaphor & Science \\ \hline Random & 25.0 & 25.0 & 25.0 & 25.0 & 25.0 & 2.4 & 2.8 \\ fastText & 92.6 & 96.1 & 71.6 & 34.0 & 88.5 & 18.9 & 31.9 \\ Flan-UL2 & 94.4 & 89.8 & 68.0 & 60.2 & 85.8 & 11.9 & 18.8 \\ \hline RelBERT\({}_{\text{BASE}}\) & 93.0 & 86.3 & 57.8 & 62.9 & 80.3 & 23.4 & 35.0 \\ RelBERT\({}_{\text{LARGE}}\) & 98.6 & 92.6 & 71.3 & 72.4 & 90.0 & 24.8 & 35.6 \\ \hline \hline \end{tabular} \end{table} Table 9: The accuracy of RelBERT on each domain of three analogy question datasets with random expectation, fastText, and Flan-UL2 as baselines. (18.9 compared to 24.8). ### Lexical Relation Classification Table 8 shows the micro F1 score for the lexical relation classification datasets. We can see that RelBERT\({}_{\text{LARGE}}\) is in general competitive with the SotA approaches. For two (EVALution and ROOT09) out of the four lexical relation classification datasets that have SotA results22, RelBERT\({}_{\text{LARGE}}\) achieves the best results. Moreover, for these two datasets, even RelBERT\({}_{\text{BASE}}\) outperforms the SotA methods. In terms of reproducible word and pair embedding baselines, RelBERT\({}_{\text{LARGE}}\) provides better results in all datasets except for BLESS (word embeddings) and K&H+N (pair2vec). We see a consistent improvement in accuracy when going from RelBERT\({}_{\text{BASE}}\) to RelBERT\({}_{\text{LARGE}}\). Footnote 22: These SotA results are reported from the original papers, and thus we could not reproduce in similar conditions. ## 7 Analysis In this section, we analyse the capability of RelBERT from different aspects. We investigate the generalisation ability of RelBERT for unseen relations in SS 7.1. In SS 7.2, we compare RelBERT with conversational LMs and few-shot learning. Then, we analyse the effect of different design choices in the model architecture in SS 7.3. Finally, in SS 7.4 we present a qualitative analysis, where among others we show a visualization of the latent representation space of relation vectors. Figure 4: The accuracy of RelBERT for each domain of U4 analogy question. Figure 3: The accuracy of RelBERT for each domain of U2 analogy question. ### Generalization Ability of RelBERT RelBERT, when trained on RelSim, achieves competitive results on named entities (i.e. Nell-One and T-REX), despite the fact that RelSim does not contain any examples involving named entities. This is one of the most interesting aspects of RelBERT, as it shows that the model learns to infer the relation based on the knowledge from the LM, instead of memorizing the word pairs from the training set. 
To understand the generalisation ability of RelBERT in \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Dataset\(\backslash\)Excluded Relation & Antonym & Attribute & Hypernym & Meronym & Synonym & _Full_ \\ \hline _BLESS_ & & & & & & \\ - Attribute & 91.6 & 90.7 & 90.6 & 91.6 & 90.9 & 91.5 \\ - Co-hyponym & 94.6 & 95.3 & 95.5 & 94.0 & 93.8 & 93.5 \\ - Event & 84.1 & 84.2 & 84.0 & 82.2 & 84.1 & 83.6 \\ - Hypernym & 92.6 & 93.5 & 93.5 & 91.3 & 93.1 & 93.1 \\ - Meronym & 85.7 & 86.8 & 87.5 & 85.3 & 86.7 & 85.0 \\ - Random & 92.1 & 92.5 & 92.1 & 91.7 & 91.6 & 91.9 \\ - Average (macro) & 90.1 & 90.5 & 90.5 & 89.3 & 90.0 & 89.8 \\ - Average (micro) & 90.6 & 90.9 & 90.8 & 89.9 & 90.3 & 90.2 \\ \hline _CogALexV_ & & & & & & \\ - Antonym & 60.5 & 64.0 & 62.9 & 67.9 & 63.3 & 68.2 \\ - Hypernym & 56.6 & 55.5 & 56.2 & 56.7 & 56.4 & 59.3 \\ - Meronym & 70.3 & 69.5 & 65.4 & 70.3 & 70.5 & 64.5 \\ - Random & 91.9 & 92.7 & 92.3 & 93.2 & 91.6 & 92.4 \\ - Synonym & 39.0 & 44.0 & 42.5 & 44.4 & 42.2 & 45.4 \\ - Average (macro) & 63.7 & 65.1 & 63.9 & 66.5 & 64.8 & 66.0 \\ - Average (micro) & 82.4 & 83.5 & 83.0 & 84.2 & 82.7 & 83.7 \\ \hline _EVALution_ & & & & & & \\ - Attribute & 80.7 & 81.7 & 80.4 & 80.3 & 81.6 & 82.7 \\ - Antonym & 72.0 & 74.3 & 73.3 & 75.2 & 73.8 & 73.6 \\ - Hypernym & 57.7 & 59.3 & 58.5 & 60.3 & 59.1 & 57.5 \\ - Meronym & 68.3 & 71.6 & 69.0 & 64.5 & 66.9 & 68.8 \\ - Possession & 66.7 & 70.3 & 66.4 & 66.0 & 63.5 & 67.4 \\ - Synonym & 40.6 & 42.9 & 37.4 & 42.9 & 37.5 & 41.0 \\ - Average (macro) & 63.6 & 65.7 & 63.7 & 64.3 & 63.0 & 64.5 \\ - Average (micro) & 64.1 & 65.9 & 64.3 & 65.3 & 63.9 & 65.0 \\ \hline _K\&H+N_ & & & & & \\ - Co-hyponym & 95.7 & 96.0 & 94.2 & 96.1 & 94.6 & 95.1 \\ - Meronym & 63.9 & 63.9 & 57.7 & 59.8 & 62.4 & 56.7 \\ - Random & 96.1 & 95.9 & 94.9 & 96.0 & 95.3 & 95.3 \\ - Average (macro) & 86.7 & 86.8 & 84.0 & 86.1 & 85.6 & 84.5 \\ - Average (micro) & 95.0 & 95.0 & 93.6 & 95.1 & 94.1 & 94.3 \\ \hline _ROOT09_ & & & & & & \\ - Co-hyponym & 96.9 & 97.3 & 96.4 & 97.3 & 95.9 & 95.8 \\ - Hypernym & 80.3 & 80.3 & 79.0 & 81.8 & 79.2 & 79.5 \\ - Random & 89.7 & 89.7 & 89.3 & 89.8 & 89.0 & 88.8 \\ - Average (macro) & 89.0 & 89.1 & 88.2 & 89.7 & 88.0 & 88.0 \\ - Average (micro) & 89.2 & 89.3 & 88.5 & 89.7 & 88.2 & 88.2 \\ \hline \hline \end{tabular} \end{table} Table 10: F1 score for each relation type of all the lexical relation classification datasets from RelBERT\({}_{\text{BASE}}\) models fine-tuned on the RelSim without a specific relation. The _Full_ model on the right most column is the original RelBERT\({}_{\text{BASE}}\) model fine-tuned on full RelSim. The result of the relation type where the model is fine-tuned on RelSim without it, is emphasized by underline. more depth, we conduct an additional experiment, where we explicitly exclude a specific relation from RelSim when training RelBERT. Specifically, we train RelBERT on a number of variants of RelSim, where each time a different relation type is excluded. We then test the resulting model on the lexical relation classification datasets. We focus this analysis on the _Antonym_, _Attribute_, _Hypernym_, _Meronym_, and _Synonym_ relations, as they are covered by both RelSim (see Table 3) and at least one of the lexical relation classification datasets (see Table 6). We train RelBERTBASE with InfoNCE on the different RelSim variants. Table 10 shows the results. It can be observed that the performance reduces by at most a few percentage points after removing a given target relation. 
In some cases, we can even see that the results improve after the removal. Hypernym is covered by all the datasets except K&H+N, and the largest decrease can be seen for CogALexV, which is around 3 percentage points. Meronym is covered by all the datasets except ROOT09. After removing the Meronym relation from the training data, the F1 score on meronym prediction increases in three out of four datasets. A similar pattern can be observed for the synonym relation, where the model that was trained without the synonym relation achieves better results than the model trained on the full RelSim dataset. On the other hand, for antonym and attribute, we can see that removing these relations from the training data leads to somewhat lower results on these relations. The average F1 scores over all the relation types are also competitive with, and often even better than those for the full model. These results clearly support the idea that RelBERT can generalise beyond the relation types it is trained on. ### Additional Baselines We now analyse the performance of two types of additional models: the conversational LMs from OpenAI in SS 7.2.1 and models that rely on few-shot demonstrations in SS 7.2.2. #### 7.2.1 ChatGPT and GPT-4 GPT-4 and ChatGPT are two conversational LMs released by OpenAI23. As for GPT-3, these models are private and can only be accessed through the OpenAI API. Unlike GPT-3 however, we cannot obtain perplexity scores (or raw model output that would allow us to compute perplexity) through the API. Therefore, we instead ask those models directly to choose the best answer candidate, using the following two text prompts: Footnote 23: [https://openai.com/](https://openai.com/) 1. Answer the question by choosing the correct option. Which of the following is an analogy? 1) \(A\) is to \(B\) what _C_\({}_{1}\) is to _D_\({}_{1}\) 2) \(A\) is to \(B\) what _C_\({}_{2}\) is to _D_\({}_{2}\) 3) \(A\) is to \(B\) what _C_\({}_{3}\) is to _D_\({}_{3}\)... \(\kappa)\) \(A\) is to \(B\) what _C_\({}_{\kappa}\) is to _D_\({}_{\kappa}\) The answer is 2. Only one of the following statements is correct. Please answer by choosing the correct option. 1) The relation between \(A\) and \(B\) is analogous to the relation between _C_\({}_{1}\) and _D_\({}_{1}\) 2) The relation between \(A\) and \(B\) is analogous to the relation between _C_\({}_{2}\) and _D_\({}_{2}\) 3) The relation between \(A\) and \(B\) is analogous to the relation between _C_\({}_{3}\) and _D_\({}_{3}\)... \(\kappa)\) The relation between \(A\) and \(B\) is analogous to the relation between _C_\({}_{\kappa}\) and _D_\({}_{\kappa}\) The answer is \begin{table} \begin{tabular}{l c c} \hline \hline & ChatGPT & GPT-4 \\ \hline Prompt 1 & 34.7 & 62.5 \\ Prompt 2 & 45.7 & 79.6 \\ \hline \hline \end{tabular} \end{table} Table 11: The accuracy on SAT analogy question for ChatGPT and GPT-4 with the different prompts. where (_A_,_B_) is the query word pair, and [\(C_{i},D_{i}\)]\({}_{i=1,\ldots,\kappa}\) are the candidate word pairs. We manually parse the outputs returned by the model. As GPT-4 is the most expensive endpoint at the moment, we only report the accuracy on the SAT analogy question dataset. Table 11 shows the result, and we can see that GPT-4 achieves state-of-the-art results with one of the prompts, with ChatGPT being considerably worse. However, the gap between two prompts is more than 15 percentage points, which shows that choosing the right prompt is critical when using GPT-4. 
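To illustrate how these prompts are sent to the conversational models, the sketch below queries the chat endpoint with the second prompt style and parses the returned option number; it assumes the `openai` Python package (v1-style client) with an API key in the environment, and it is not the exact script used to obtain the reported numbers.

```python
# A sketch of querying a conversational model with the second prompt style
# above and crudely parsing the chosen option (the text mentions that outputs
# were parsed manually). Assumes the `openai` package, v1-style client.
import re
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

def build_prompt(query, candidates):
    a, b = query
    lines = ["Only one of the following statements is correct. "
             "Please answer by choosing the correct option."]
    for i, (c, d) in enumerate(candidates, start=1):
        lines.append(f"{i}) The relation between {a} and {b} is analogous to "
                     f"the relation between {c} and {d}")
    lines.append("The answer is")
    return "\n".join(lines)

def ask_model(query, candidates, model="gpt-4"):
    prompt = build_prompt(query, candidates)
    response = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}])
    answer = response.choices[0].message.content
    match = re.search(r"\d+", answer)   # extract the chosen option number
    return int(match.group()) if match else None

# Example (SAT-style, cf. Table 5):
# ask_model(("beauty", "aesthete"), [("pleasure", "hedonist"), ...])
```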
#### 7.2.2 Few-shot Learning In the main experiment (SS 6), we used the LM baselines in a zero-shot setting. However, recent LLMs often perform better when a few examples are provided as part of the input [16; 28]. The idea is to provide a few (input,output) pairs at the start of the prompt, followed by the target input. This strategy is commonly referred to as few-shot learning or in-context learning. It is most effective for larger LMs, which can recognize the pattern in the (input,output) pairs and apply this pattern to the target input [26; 27; 28]. Since RelBERT is fine-tuned on RelSim, for this experiment we provide example pairs to the LM input which are taken from RelSim as well. We focus on the SAT benchmark and the Flan-T5XXL model, which was the best-performing LM on SAT in the main experiments. We consider [1; 5; 10]-shot learning. The demonstrations in each experiment are randomly chosen from the training split of RelSim. We use the same template as for the zero-shot learning, both to describe the examples and to specify the target input. For example, in the 5-shot learning setting, a complete input to the model with five demonstrations of [\(\hat{A}_{i},\hat{B}_{i},\hat{C}_{i},\hat{D}_{i}\)]\({}_{i=1\ldots 5}\) and the target query of [\(A,B\)] is shown as below. \(\hat{A}_{1}\) is to \(\hat{B}_{1}\) what \(\hat{C}_{1}\) is to \(\hat{D}_{1}\) \(\hat{A}_{2}\) is to \(\hat{B}_{2}\) what \(\hat{C}_{2}\) is to \(\hat{D}_{2}\) \(\hat{A}_{3}\) is to \(\hat{B}_{3}\) what \(\hat{C}_{3}\) is to \(\hat{D}_{3}\) \(\hat{A}_{4}\) is to \(\hat{B}_{4}\) what \(\hat{C}_{4}\) is to \(\hat{D}_{4}\) \(\hat{A}_{5}\) is to \(\hat{B}_{5}\) what \(\hat{C}_{5}\) is to \(\hat{D}_{5}\) \(A\) is to \(B\) what We run each experiment for five different random seeds (i.e. five different few-shot prompts for each setting). Table 12 shows the results. Somewhat surprisingly, the few-shot models consistently perform worse than the zero-shot model, which achieved an accuracy of 52.4 in the main experiment. #### 7.2.3 Multiple-choice Prompt In our main experiment, we compute perplexity separately on each candidate, but the task can be formatted using a multiple-choice question answering prompt as well. Such a prompt provides more information to the LMs, but it requires them to understand the question properly. Following a typical template to solve multiple-choice question answering in the zero-shot setting [16; 28], we use the following text prompt Which of the following is an analogy? 1) \(A\) is to \(B\) what \(C_{1}\) is to \(D\) \begin{table} \begin{tabular}{l c c c c} \hline \hline & Flan-T5XXL & Flan-UL2 & OPT-IML30B & OPT-IML30B \\ \hline Analogical Statement & 52.4 & 50.0 & 48.9 & 48.9 \\ Multi-choice QA & 35.8 & 40.6 & 27.3 & 31.3 \\ \hline \hline \end{tabular} \end{table} Table 13: The accuracy with multiple-choice prompting compared to the vanilla prompting strategy with the analogical statement (\(A\) is to \(B\) what \(C\) is to \(D\)) on SAT. 2) \(A\) is to \(B\) what \(C_{2}\) is to \(D_{2}\) 3) \(A\) is to \(B\) what \(C_{3}\) is to \(D_{3}\) ... \(\kappa\)) \(A\) is to \(B\) what \(C_{\kappa}\) is to \(D_{\kappa}\) The answer is where (\(A\),\(B\)) is the query word pair, and \([C_{i},D_{i}]_{i=1,\ldots,\kappa}\) are the candidate word pairs. Table 13 shows the accuracy on SAT, for the four best performing LMs on SAT in the main experiment. We can see that the multiple-choice prompt is substantially worse for all the LMs. 
### Ablation Analysis In this section, we analyse how the performance of RelBERT depends on different design choices that were made. We look at the impact of the training dataset in SS 7.3.1; the loss function in SS 7.3.2; the number of negative samples for the InfoNCE loss in SS 7.3.3; the base language model in SS 7.3.4; the prompt templates in SS 7.3.5; and the impact of random variations in SS 7.3.6. Throughout this section, we use RoBERT\({}_{\text{BASE}}\) for efficiency. #### 7.3.1 The Choice of Datasets RelSim is relatively small and does not cover named entities, although the RelBERT model trained on RelSim still performed the best on T-REX and NELL-One in the main experiments. Here we present a comparison with a number of alternative training sets, to see whether better results might be possible. We are primarily interested to see whether the performance on NELL and T-REX might be improved by training RelBERT on the training splits of these datasets. We fine-tune RoBERT\({}_{\text{BASE}}\) on three datasets introduced in SS 3.3: NELL-One, T-REX and ConceptNet. We use InfoNCE in each case. The results are summarised in Table 14. We can see that training RelBERT on RelSim leads to the best results on most datasets, and the best result on average by a large margin. This is despite the fact that RelSim is significantly smaller than the other datasets (see Table 2). It is particularly noteworthy that training on RelSim outperforms training on ConceptNet even on the ConceptNet test set, even though ConceptNet contains several relation types that are not covered by RelSim. However, when it comes to the relationships between named \begin{table} \begin{tabular}{l c c c c} \hline \hline Dataset & RelSim & NELL & T-REX & ConceptNet \\ \hline \multicolumn{5}{l}{_Analogy Question_} \\ SAT & **59.9** & 36.1 & 46.8 & 44.9 \\ U2 & **59.6** & 39.9 & 42.5 & 42.5 \\ U4 & **57.4** & 41.0 & 44.0 & 41.0 \\ BATS & **70.3** & 45.5 & 51.0 & 62.0 \\ Google & **89.2** & 67.8 & 75.0 & 81.0 \\ SCAN & **25.9** & 14.9 & 19.6 & 21.8 \\ NELL & 62.0 & **82.5** & 72.2 & 66.2 \\ T-REX & 66.7 & 69.9 & **83.6** & 44.8 \\ ConceptNet & **39.8** & 10.3 & 18.8 & 22.7 \\ \hline Average & **59.0** & 45.3 & 50.4 & 47.4 \\ \hline \multicolumn{5}{l}{_Lexical Relation Classification_} \\ BLESS & **90.0** & 88.6 & 89.3 & 88.8 \\ CogALexV & **83.7** & 78.6 & 82.8 & 82.8 \\ EVALution & 64.2 & 58.7 & **64.7** & 62.8 \\ K\&H+N & 94.0 & 94.8 & **95.2** & 95.0 \\ ROOT09 & 88.2 & 86.6 & 88.2 & **88.8** \\ \hline Average & 84.0 & 81.5 & **84.1** & 83.6 \\ \hline \hline \end{tabular} \end{table} Table 14: The results on analogy questions (accuracy) and lexical relation classification (micro F1 score) of RelBERT with different training datasets, where the best result across models in each dataset are shown in bold. entities, and the NELL and T-REX benchmarks in particular, training on RelSim underperforms training on NELL or T-REX. #### 7.3.2 The Choice of Loss Function In this section, we compare the performance of three different loss functions for training RelBERT. In particular, we fine-tune RoBERT\({}_{\text{BASE}}\) on RelSim, and we consider the triplet loss and InfoLOOB, in addition to InfoNCE (see SS 3.2 for more in detail). Table 15 shows the result of RelBERT fine-tuned with each of the loss functions. We can see that none of the loss functions consistently outperforms the other. On average, InfoNCE achieves the best results on the analogy questions. 
The difference with InfoLOOB is small, which is to be expected given that InfoNCE and InfoLOOB are closely related. While the triplet loss performs worse on average, it still manages to achieve the best results in four out of nine analogy datasets. For the relation classification experiments, the results are much closer, with InfoNCE now performing slightly worse than the other loss functions. #### 7.3.3 The Choice of the Number of Negative Samples The variant of InfoNCE that we considered for training RelBERT relies on in-batch negative samples, i.e. the negative samples for a given anchor pair correspond to the other word pairs that are included in the same batch. The number of negative samples that are considered thus depends on the batch size. In general, using a larger number of negative samples tends to benefit contrastive learning strategies, but it comes at the price of an increase in memory requirement. Here we analyse the impact of this choice, by comparing the results we obtained for different batch sizes. We train RelBERT\({}_{\text{BASE}}\) on RelSim with batch sizes from [25, 50, 100, 150, 200, 250, 300, 350, 400, 450, 500], where the batch size 400 corresponds to our main RelBERT\({}_{\text{BASE}}\) model. The results are shown in Table 16, and visually illustrated in Figure 5 and Figure 6. Somewhat surprisingly, the correlation between batch size and performance is very weak. For analogy questions, there is a weak positive correlation. The Spearman \(\rho\) correlation to the batch size for T-REX is 0.6 with p-value 0.047, but in other datasets, correlations are not significant (i.e. p-values are higher than 0.05). Indeed, even a batch size of 25 is sufficient to achieve close-to-optimal results. For lexical relation classification, the Spearman correlation is not significant for any of the datasets. \begin{table} \begin{tabular}{l c c c} \hline \hline Dataset & Triplet & InfoNCE & InfoLOOB \\ \hline \multicolumn{4}{l}{_Analogy Question_} \\ SAT & 54.5 & **59.9** & 58.8 \\ U2 & 55.3 & **59.6** & 57.5 \\ U4 & **58.6** & 57.4 & 56.3 \\ BATS & **72.6** & 70.3 & 67.6 \\ Google & 86.4 & **89.2** & 83.8 \\ SCAN & **29.5** & 25.9 & 27.0 \\ NELL & **70.7** & 62.0 & 67.5 \\ T-REX & 45.4 & **66.7** & 65.6 \\ ConceptNet & 29.4 & 39.8 & **40.0** \\ \hline Average & 55.8 & **59.0** & 58.2 \\ \hline \multicolumn{4}{l}{_Lexical Relation Classification_} \\ BLESS & 88.7 & 90.0 & **91.0** \\ CogALexV & 80.5 & **83.7** & 83.3 \\ EVALution & **67.7** & 64.2 & 65.8 \\ K\&H+N & 93.1 & 94.0 & **94.9** \\ ROOT09 & **90.3** & 88.2 & 89.3 \\ \hline Average & 84.1 & 84.0 & **84.9** \\ \hline \hline \end{tabular} \end{table} Table 15: The results on analogy questions (accuracy) and lexical relation classification (micro F1 score) with different loss functions, where the best result in each dataset is shown in bold. #### 7.3.4 The Choice of Language Model Thus far, we have only considered RoBERTa as the base language model for training RelBERT. Here we compare RoBERTa with two alternative choices: BERT [15] and ALBERT [128]. We compare BERTBASE, ALBERTBASE, and RoBERTBASE, fine-tuned on RelSim with InfoNCE. Table 17 shows the result. RoBERTBASE is found to consistently achieve the best results on analogy questions, with a surprisingly large margin. RoBERTBASE also achieved the best result, on average, for lexical relation classification, although in this case it only achieves the best results in two out of five datasets. 
ALBERT consistently has the worst performance, struggling even on the relatively easy Google dataset. These results clearly show that the choice of the LM is of critical importance for the performance of RelBERT. \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline Batch Size & 25 & 50 & 100 & 150 & 200 & 250 & 300 & 350 & 400 & 450 & 500 \\ \hline \multicolumn{13}{l}{_Analogy Question_} \\ SAT & 56.1 & 59.1 & 53.2 & 55.6 & 56.4 & 57.2 & 57.5 & 58.3 & **59.9** & 57.2 & 57.2 \\ U2 & 55.3 & 56.1 & 46.1 & 53.9 & 56.1 & **60.1** & 56.1 & 56.6 & 59.6 & 59.6 & 55.7 \\ U4 & 56.5 & 57.4 & 52.5 & 54.9 & 57.2 & **59.0** & 57.2 & 54.2 & 57.4 & 58.3 & 56.5 \\ BATS & 71.7 & 69.6 & 66.9 & 72.3 & 69.7 & 70.9 & 72.0 & **73.7** & 70.3 & 69.2 & 70.8 \\ Google & 88.2 & 87.4 & 78.6 & 88.0 & **90.2** & 89.6 & 86.4 & 86.2 & 89.2 & 86.6 & 89.8 \\ SCAN & 25.0 & 27.2 & 26.2 & 30.9 & 26.5 & 25.3 & 28.8 & **31.6** & 25.9 & 25.7 & 25.1 \\ NELL & 67.0 & 66.5 & 71.5 & **77.7** & 66.0 & 63.7 & 75.0 & **77.7** & 62.0 & 62.8 & 63.5 \\ T-REX & 59.6 & 60.7 & 57.9 & 57.9 & 62.3 & 60.1 & 57.9 & 60.1 & 66.7 & **69.4** & 62.8 \\ ConceptNet & 36.9 & 39.3 & 31.1 & 29.4 & 40.5 & 39.6 & 31.0 & 31.4 & 39.8 & 38.9 & **42.8** \\ \hline Average & 57.4 & 58.1 & 53.8 & 57.8 & 58.3 & 58.4 & 58.0 & 58.9 & **59.0** & 58.6 & 58.2 \\ \hline \multicolumn{13}{l}{_Lexical Relation Classification_} \\ BLESS & 90.3 & 90.7 & 90.3 & 90.6 & 90.6 & 89.5 & **91.3** & 90.6 & 89.6 & 90.4 & 89.7 \\ CogALexV & 66.0 & **67.9** & 65.9 & 62.5 & 65.2 & 64.7 & 65.4 & 64.4 & 65.8 & 65.4 & 63.6 \\ EVALution & 64.8 & 62.1 & **65.0** & 62.9 & 63.6 & 64.6 & 63.8 & 63.2 & 62.9 & 62.8 & 64.6 \\ K\&H+N & 86.0 & 84.8 & 87.0 & **87.5** & 85.3 & 85.2 & 85.2 & 87.2 & 84.6 & 85.4 & 85.1 \\ ROOT09 & 87.7 & 88.8 & 87.7 & 88.6 & **89.4** & 88.8 & 88.8 & 88.6 & 87.9 & 88.2 & 88.0 \\ \hline Average & 79.0 & 78.9 & **79.2** & 78.4 & 78.8 & 78.6 & 78.9 & 78.8 & 78.2 & 78.4 & 78.2 \\ \hline \hline \end{tabular} \end{table} Table 16: The results on analogy questions (accuracy) and lexical relation classification (micro F1 score) with different batch size (negative samples) at InfoNCE, where the best result across models in each dataset is shown in bold. Figure 5: The results on analogy questions in function of the batch size. #### 7.3.5 The Choice of Prompt Template Our main experiment relies on the five prompt templates introduced in SS 3.1, where we choose the best among these five templates based on the validation loss. We now analyse the impact of these prompt templates. We focus this analysis on RelBERTBASE, i.e. RoBERTBASE fine-tuned with InfoNCE on RelSim. For this configuration, the template that was selected based on validation accuracy is Today, I finally discovered the relation between **[h]** and **[t]** : **[h]** is the <mask> of **[t]** We experiment with a number of variations of this template. First, we will see whether the length of the template plays an important role, and in particular whether a similar performance can be achieved with shorter templates. Subsequently we also analyse to what extent the wording of the template matters, i.e. whether similar results are possible with templates that are less semantically informative. The Effect of LengthWe start from the best template chosen for RelBERTBASE, and shorten it while preserving its meaning as much as possible. Specifically, we considered the following variants: 1. Today, I finally discovered the relation between **[h]** and **[t]** : **[h]** is the <mask> of **[t]** 2. 
I discovered the relation between **[h]** and **[t]**: **[h]** is the <mask> of **[t]** 3. the relation between **[h]** and **[t]**: **[h]** is the <mask> of **[t]** 4. I discovered: **[h]** is the <mask> of **[t]** 5. **[h]** is the <mask> of **[t]** For each of the templates, we fine-tune RoBERTBASE with InfoNCE on RelSim. The results are summarised in Table 18. We find that template D outperforms the original template A on average, both for analogy questions and for lexical relation classification. In general, we thus find no clear link between the length of the template and the resulting performance, although the shortest template (template E) achieves by far the worst results. This suggests that, while longer templates are not necessarily better, using templates which are too short may be problematic. The Effect of SemanticsWe now consider variants of the original template in which the anchor phrase "the relation" is replaced by a semantically meaningful distractor, i.e. we consider templates of the following form: Today, I finally discovered <semantic phrase> between **[h]** and **[t]** : **[h]** is the <mask> of **[t]** Figure 6: The results on lexical relation classification in function of the batch size. where <semantic phrase> is a placeholder for the chosen anchor phrase. We randomly chose 10 phrases (four named entities and six nouns) to play the role of this anchor phrase. For each of the resulting templates, we fine-tune \begin{table} \begin{tabular}{l c c c} \hline \hline & BERTBASE & ALBERTBASE & RoBERTaBASE \\ \hline \multicolumn{4}{l}{_Analogy Question_} \\ SAT & 44.7 & 40.4 & **59.9** \\ U2 & 36.8 & 35.5 & **59.6** \\ U4 & 40.0 & 38.7 & **57.4** \\ BATS & 54.9 & 59.2 & **70.3** \\ Google & 72.2 & 56.4 & **89.2** \\ SCAN & 23.7 & 21.2 & **25.9** \\ NELL & 56.7 & 47.7 & **62.0** \\ T-REX & 49.2 & 32.8 & **66.7** \\ ConceptNet & 27.1 & 25.7 & **39.8** \\ \hline Average & 45.0 & 39.7 & **59.0** \\ \hline \multicolumn{4}{l}{_Lexical Relation Classification_} \\ BLESS & **90.9** & 88.0 & 90.0 \\ CogALexV & 80.7 & 78.3 & **83.7** \\ EVALution & 61.8 & 58.1 & **64.2** \\ K\&H+N & **95.5** & 92.9 & 94.0 \\ ROOT09 & **88.7** & 85.6 & 88.2 \\ \hline Average & 83.5 & 80.6 & **84.0** \\ \hline \hline \end{tabular} \end{table} Table 17: The results on analogy questions (accuracy) and lexical relation classification (micro F1 score) of RelBERT with different LMs, where the best results across models in each dataset are shown in bold. 
\begin{table} \begin{tabular}{l c c c c} \hline \hline Template & A (Original) & B & C & D & E \\ \hline \multicolumn{4}{l}{_Analogy Question_} \\ SAT & **59.9** & 58.3 & 56.7 & 59.4 & 46.3 \\ U2 & **59.6** & 57.9 & 57.9 & 56.6 & 44.7 \\ U4 & 57.4 & 57.6 & 54.9 & **60.0** & 46.8 \\ BATS & 70.3 & 69.6 & 70.0 & **73.9** & 65.6 \\ Google & 89.2 & 88.0 & 89.4 & **93.4** & 81.8 \\ SCAN & 25.9 & 24.8 & 23.9 & **27.1** & 25.2 \\ NELL & 62.0 & **65.5** & 64.2 & **65.5** & 59.2 \\ T-REX & **66.7** & 62.3 & 60.1 & 56.8 & 48.1 \\ ConceptNet & **39.8** & 39.0 & 37.8 & 39.5 & 32.4 \\ \hline Average & 59.0 & 59.4 & 58.8 & **61.7** & 51.7 \\ \hline \multicolumn{4}{l}{_Lexical Relation Classification_} \\ BLESS & 89.9 & 89.2 & 89.2 & **90.5** & 88.2 \\ CogALexV & 65.7 & 65.3 & 66.7 & **69.6** & 63.3 \\ EVALution & **65.1** & 64.8 & 63.1 & 64.9 & 63.0 \\ K\&H+N & 85.3 & 85.3 & 83.7 & 86.2 & **86.9** \\ ROOT09 & 89.1 & 87.6 & 88.9 & **89.8** & 87.8 \\ \hline Average & 79.0 & 78.4 & 78.3 & **80.2** & 77.8 \\ \hline \hline \end{tabular} \end{table} Table 18: The results on analogy questions (accuracy) and lexical relation classification (micro F1 score) of RelBERT fine-tuned with different length of templates, where the best results across models in each dataset are shown in bold. RoBERT\({}_{\text{BASE}}\) with InfoNCE on the RelSim dataset. Table 19 shows the result. We can see that the best results are obtained with the original template, both for analogy questions and for lexical relation classification. Nevertheless, the difference in performance is surprisingly limited. The largest decrease is 2.8 in the average for analogy questions and 1.0 in the average for lexical relation classification, which is smaller than the differences we observed when using the shortest template, or when changing the LM SS 7.3.4 or the loss function SS 7.3.2. #### 7.3.6 The Choice of Random Seed In this section, we investigate the stability of RelBERT training, by comparing the results we obtained for different random seeds. We use a fixed random seed of 0 as default in the main experiments. Here we include results for two other choices of the random seed. We train both of RelBERT\({}_{\text{BASE}}\) and RelBERT\({}_{\text{LARGE}}\). However, different from the main experiments, for this analysis we reduce the batch size from 400 to 100 for RelBERT\({}_{\text{LARGE}}\), to reduce the computation time. Table 20 shows the result. We observe that the standard deviation is higher for RelBERT\({}_{\text{BASE}}\) than for RelBERT\({}_{\text{LARGE}}\). For example, the accuracy on T-REX differs from 46.4 to 66.7 for RelBERT\({}_{\text{BASE}}\), while only ranging between 63.4 and 67.8 for RelBERT\({}_{\text{LARGE}}\). We can also see that there is considerably less variation in performance for lexical relation classification, compared to analogy questions. ### Qualitative Analysis We qualitatively analyse the latent space of the RelBERT relation vectors. For this analysis, we focus on the test splits of the ConceptNet and NELL-One datasets. We compute the relation embeddings of all the word pairs in these datasets using RelBERT\({}_{\text{BASE}}\) and RelBERT\({}_{\text{LARGE}}\). As a comparison, we also compute relation embeddings using fastText, in the same way as in SS 5.2. First, we visualize the relation embeddings, using tSNE [129] to map the embeddings to a two-dimensional space. Figure 7 shows the resulting two-dimensional relation embeddings. 
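A minimal sketch of how such a two-dimensional map can be produced from a set of pair embeddings is shown below, assuming scikit-learn and matplotlib; the random placeholder vectors and labels stand in for the RelBERT (or fastText) relation embeddings and their gold relation types, and the perplexity value is illustrative.

```python
# A minimal sketch, assuming scikit-learn and matplotlib; `vectors` and `labels`
# are placeholders for relation embeddings and their relation types.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
vectors = rng.normal(size=(200, 768))   # placeholder pair embeddings
labels = rng.integers(0, 5, size=200)   # placeholder relation types

points = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(vectors)
for relation in np.unique(labels):
    mask = labels == relation
    plt.scatter(points[mask, 0], points[mask, 1], s=8, label=f"relation {relation}")
plt.legend()
plt.savefig("relation_embedding_tsne.png")
```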
The plots clearly show how the different relation types are separated much more clearly for RelBERT than for fastText. For ConceptNet, in particular, we can see that the fastText representations are mixed together. Comparing RelBERT\({}_{\text{LARGE}}\) \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline \multirow{2}{*}{Phrase} & \multicolumn{2}{c}{_the_} & \multicolumn{2}{c}{_Napoleon_} & \multirow{2}{*}{_football_} & \multirow{2}{*}{_Italy_} & \multirow{2}{*}{_Cardiff_} & \multicolumn{2}{c}{_the earth_} & \multirow{2}{*}{_pizza_} & \multirow{2}{*}{_subway_} & \multirow{2}{*}{_ocean_} & \multirow{2}{*}{_Abraham_} & \multicolumn{2}{c}{_the_} \\ & & \multicolumn{2}{c}{_spaceship_} & & & & & & & & & \\ \hline \multicolumn{11}{l}{_Analogy Question_} \\ SAT & 57.2 & 56.7 & 56.1 & 58.0 & 57.2 & 59.6 & 57.0 & 59.6 & 56.7 & **59.9** & **59.9** \\ U2 & 55.3 & 58.3 & 56.6 & 55.3 & 57.5 & 58.3 & 55.7 & 57.0 & 57.5 & 56.1 & **59.6** \\ U4 & 56.9 & **58.1** & 56.5 & 56.2 & 55.1 & 56.9 & 56.7 & 55.3 & 55.8 & 57.2 & 57.4 \\ BATS & **71.2** & 69.1 & 69.5 & 68.8 & 69.5 & 69.9 & 68.5 & 69.8 & 68.9 & 69.5 & 70.3 \\ Google & 87.2 & 85.4 & 87.6 & 85.2 & 86.8 & **89.2** & 85.6 & 87.6 & 86.0 & **89.2** & **89.2** \\ SCAN & 25.6 & **26.8** & 25.7 & 26.1 & 26.2 & 22.6 & 25.8 & 26.6 & 25.6 & 24.6 & 25.9 \\ NELL & 64.8 & 61.7 & 63.8 & 63.5 & 60.0 & 64.7 & 65.8 & 63.3 & 63.3 & **66.2** & 62.0 \\ T-REX & 63.4 & 51.9 & 57.4 & 59.6 & 55.7 & 56.3 & 59.0 & 60.1 & 60.7 & 60.7 & **66.7** \\ ConceptNet & 39.6 & 39.4 & 38.5 & 36.9 & 37.9 & 39.3 & 38.5 & 38.8 & 38.8 & 38.3 & **39.8** \\ \hline Average & 57.9 & 56.4 & 56.9 & 56.6 & 56.2 & 57.4 & 57.0 & 57.6 & 57.0 & 58.0 & **59.0** \\ \hline \hline \multicolumn{11}{l}{_Lexical Relation Classification_} \\ BLESS & 89.9 & 89.7 & 90.0 & 90.0 & 89.2 & **91.4** & 89.7 & 89.7 & 90.6 & 89.3 & 89.9 \\ CogALexV & 63.4 & 64.7 & **66.5** & 65.7 & 66.3 & 65.3 & 66.0 & 65.3 & 65.3 & 64.1 & 65.7 \\ EVALution & 63.6 & 63.7 & 64.0 & 63.8 & 62.4 & 63.5 & 64.5 & 63.3 & 63.5 & 64.0 & **65.1** \\ K\&H+N & 84.8 & 86.2 & **86.4** & 85.9 & 85.3 & 84.8 & 84.7 & 84.6 & 85.1 & 85.0 & 85.3 \\ ROOT09 & 88.5 & 88.9 & 89.4 & 89.0 & 89.2 & 88.7 & 89.1 & 89.5 & 88.9 & **89.6** & 89.1 \\ \hline Average & 78.0 & 78.6 & 79.3 & 78.9 & 78.5 & 78.7 & 78.8 & 78.5 & 78.7 & 78.4 & **79.0** \\ \hline \hline \end{tabular} \end{table} Table 19: The results on analogy questions (accuracy) and lexical relation classification (micro F1 score) of RelBERT fine-tuned with random phrase to construct the template, where the best results across models in each dataset are shown in bold. and RelBERTBASE, there is no clear difference for NELL-One. For ConceptNet, we can see that RelBERT\({}_{\text{LARGE}}\) leads to clusters which are somewhat better separated. Second, we want to analyse whether RelBERT vectors could model relations in a more fine-grained way than existing knowledge graphs. We focus on ConceptNet for this analysis. We cluster the RelBERT vectors of the word pairs in each relation type using HDBSCAN [130]. We focus on three relation types: _AtLocation_, _CapableOf_, and _IsA_. These are the relation types with the highest number of instances, among those for which HDBSCAN yielded more than one cluster. We obtained two clusters for each of these relation types. Table 21 shows some examples of word pairs in each cluster. For _AtLocation_, RelBERT separates the word pairs depending on whether the head denotes a living thing. 
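The clustering step described above can be sketched as follows, assuming the hdbscan package; the placeholder vectors stand in for the RelBERT embeddings of the word pairs labelled with a single relation type such as _AtLocation_, and the min_cluster_size value is illustrative (the comparison with the fastText clusters continues below).

```python
# A minimal sketch, assuming the `hdbscan` package; with real embeddings each
# non-noise cluster would group word pairs that express the relation similarly.
import numpy as np
import hdbscan

rng = np.random.default_rng(0)
vectors = rng.normal(size=(300, 768))  # placeholder embeddings for one relation type

clusterer = hdbscan.HDBSCAN(min_cluster_size=15, metric="euclidean")
cluster_ids = clusterer.fit_predict(vectors)  # -1 marks points treated as noise

for cluster in sorted(set(cluster_ids) - {-1}):
    members = np.flatnonzero(cluster_ids == cluster)
    print(f"cluster {cluster}: {len(members)} word pairs")
```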
On the other hand, fastText captures a surface feature and forms a cluster where the word pairs have "zoo" as tails. All other word pairs are mixed together in the first cluster. For _CapableOf_, RelBERT\({}_{\text{LARGE}}\) again distinguishes the word pairs based on whether the head entity denotes a living thing. For RelBERT\({}_{\text{BASE}}\), the clusters are not separated as clearly, while fastText again focuses on the presence of particular words such as "cat" and "dog" in this case. For _IsA_, RelBERT\({}_{\text{LARGE}}\) yields a cluster that specifically focuses on geolocations. RelBERT\({}_{\text{BASE}}\) puts pairs with the words "sport" or "game" together. In the case of fastText, all pairs were clustered together. Overall, fastText tends to catch the surface features of the word pairs. RelBERT\({}_{\text{LARGE}}\) seems to find meaningful distinctions, although at least in these examples, they are focused on the semantic types of the entities involved rather than any specialisation of the relationship itself. The behaviour of RelBERT\({}_{\text{BASE}}\) is similar, albeit clearly noisier. ## 8 Conclusion We have proposed a strategy for learning relation embeddings, i.e. vector representations of pairs of words which capture their relationship. The main idea is to fine-tune a pre-trained language model using a relational similarity dataset covering a broad range of semantic relations. In our experimental results, we found the resulting relation embeddings to be of high quality, outperforming state-of-the-art methods on most analogy questions and relation \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{RelBERT\({}_{\text{BASE}}\)} & \multicolumn{4}{c}{RelBERT\({}_{\text{LARGE}}\)} \\ \cline{2-10} Random Seed & 0 & 1 & 2 & Average & 0 & 1 & 2 & Average \\ \hline \multicolumn{10}{l}{_Analogy Question_} \\ SAT & 59.9 & 54.3 & 55.9 & 56.7 \(\pm\)2.9 & 68.2 & 68.4 & 71.9 & 69.5 \(\pm\)2.1 \\ U2 & 59.6 & 51.8 & 55.7 & 55.7 \(\pm\)3.9 & 67.5 & 65.4 & 68.0 & 67.0 \(\pm\)1.4 \\ U4 & 57.4 & 53.7 & 54.4 & 55.2 \(\pm\)2.0 & 63.9 & 65.0 & 66.4 & 65.1 \(\pm\)1.3 \\ BATS & 70.3 & 65.3 & 67.3 & 67.6 \(\pm\)2.5 & 78.3 & 79.7 & 80.4 & 79.5 \(\pm\)1.1 \\ Google & 89.2 & 79.4 & 85.6 & 84.7 \(\pm\)5.0 & 93.4 & 93.4 & 94.8 & 93.9 \(\pm\)0.8 \\ SCAN & 25.9 & 25.9 & 23.3 & 25.1 \(\pm\)1.5 & 25.9 & 27.0 & 29.1 & 27.4 \(\pm\)1.6 \\ NELL & 62.0 & 71.0 & 63.5 & 65.5 \(\pm\)4.8 & 60.5 & 67.3 & 66.3 & 64.7 \(\pm\)3.7 \\ T-REX & 66.7 & 55.2 & 46.4 & 56.1 \(\pm\)10.1 & 67.8 & 65.0 & 63.4 & 65.4 \(\pm\)2.2 \\ ConceptNet & 39.8 & 27.4 & 30.5 & 32.6 \(\pm\)6.4 & 43.3 & 44.8 & 48.7 & 45.6 \(\pm\)2.8 \\ \hline Average & 59.0 & 53.8 & 53.6 & 55.5 \(\pm\)3.1 & 63.2 & 64.0 & 65.4 & 64.2 \(\pm\)1.1 \\ \hline \multicolumn{10}{l}{_Lexical Relation Classification_} \\ BLESS & 90.0 & 91.4 & 90.7 & 90.7 \(\pm\)0.7 & 91.5 & 92.4 & 91.7 & 91.9 \(\pm\)0.5 \\ CogALexV & 83.7 & 81.5 & 81.1 & 82.1 \(\pm\)1.4 & 84.9 & 86.5 & 86.4 & 85.9 \(\pm\)0.9 \\ EVALution & 64.2 & 63.3 & 63.1 & 63.5 \(\pm\)0.6 & 66.9 & 69.0 & 67.8 & 67.9 \(\pm\)1.0 \\ K\&H+N & 94.0 & 94.7 & 94.3 & 94.3 \(\pm\)0.4 & 95.1 & 95.3 & 95.7 & 95.3 \(\pm\)0.3 \\ ROOT09 & 88.2 & 88.3 & 89.5 & 88.7 \(\pm\)0.7 & 89.2 & 89.5 & 91.5 & 90.1 \(\pm\)1.3 \\ \hline Average & 84.0 & 83.8 & 83.7 & 83.9 \(\pm\)0.1 & 85.5 & 86.5 & 86.6 & 86.2 \(\pm\)0.6 \\ \hline \hline \end{tabular} \end{table} Table 20: Result of RelBERT\({}_{\text{BASE}}\) and RelBERT\({}_{\text{LARGE}}\) with three runs with different random seed, and the average and the 
standard deviation in each dataset. classification benchmarks, while maintaining the flexibility of an embedding model. Crucially, we found that RelBERT is capable of modelling relationships that go well beyond those that are covered by the training data, including morphological relations and relations between named entities. Being based on RoBERTa\({}_{\text{LARGE}}\), our main RelBERT model has 354M parameters. This relatively small size makes RelBERT convenient and efficient to use in practice. Surprisingly, we found RelBERT to significantly outperform language models which are several orders of magnitude larger. While many NLP tasks can now be solved by prompting LLMs, learning explicit representations remains important for tasks that require transparency or efficiency. For instance, we envision that RelBERT can play an important role in the context of semantic search, e.g. to find relevant context for retrieval augmented LMs [131]. Explicit representations also matter for tasks that cannot easily be described using natural language instructions, such as ontology alignment [132] and completion [133; 134], where relation embeddings should intuitively also be clearly useful. More generally, RelBERT has the potential to improve applications that currently rely on commonsense KGs such as ConceptNet, e.g. commonsense question answering with smaller LMs [29] and scene graph generation [33]. ## Acknowledgments Steven Schockaert has been supported by EPSRC grants EP/V025961/1 and EP/W003309/1. Jose Camacho-Collados is supported by a UKRI Future Leaders Fellowship.
2310.20149
The NEO Surveyor Near Earth Asteroid Known Object Model
The known near-Earth object (NEO) population consists of over 32,000 objects, with a discovery rate of over 3000 NEOs per year. An essential component of the next generation of NEO surveys is an understanding of the population of known objects, including an accounting of the discovery rate per year as a function of size. Using a near-Earth asteroid (NEA) reference model developed for NASA's NEO Surveyor (NEOS) mission and a model of the major current and historical ground-based surveys, an estimate of the current NEA survey completeness as a function of size and absolute magnitude has been determined (termed the Known Object Model; KOM). This allows for an understanding of the intersection of the known catalog of NEAs and the objects expected to be observed by NEOS. The current NEA population is found to be $\sim38\%$ complete for objects larger than 140m, consistent with estimates by Harris & Chodas (2021). NEOS is expected to catalog more than two thirds of the NEAs larger than 140m, resulting in $\sim76\%$ of NEAs cataloged at the end of its 5 year nominal survey (Mainzer et al., 2023), making significant progress towards the US Congressional mandate. The KOM estimates that $\sim77\%$ of the currently cataloged objects will be detected by NEOS, with those not detected contributing $\sim9\%$ to the final completeness at the end of its 5 year mission. This model allows for placing the NEO Surveyor mission in the context of current surveys to more completely assess the progress toward the goal of cataloging the population of hazardous asteroids.
Tommy Grav, Amy K. Mainzer, Joseph R. Masiero, Dar W. Dahlen, Tim Spahr, William F. Bottke, Frank J. Masci
2023-10-31T03:35:18Z
http://arxiv.org/abs/2310.20149v1
# The NEO Surveyor Near Earth Asteroid Known Object Model ###### Abstract The known near-Earth object (NEO) population consists of over 32,000 objects, with a discovery rate of over 3000 NEOs per year. An essential component of the next generation of NEO surveys is an understanding of the population of known objects, including an accounting of the discovery rate per year as a function of size. Using a near-Earth asteroid (NEA) reference model developed for NASA's NEO Surveyor (NEOS) mission and a model of the major current and historical ground-based surveys, an estimate of the current NEA survey completeness as a function of size and absolute magnitude has been determined (termed the Known Object Model; KOM). This allows for an understanding of the intersection of the known catalog of NEAs and the objects expected to be observed by NEOS. The current NEA population is found to be \(\sim 38\%\) complete for objects larger than 140m, consistent with estimates by Harris and Chodas (2021). NEOS is expected to catalog more than two thirds of the NEAs larger than 140m, resulting in \(\sim 76\%\) of NEAs cataloged at the end of its 5 year nominal survey (Mainzer et al., 2023), making significant progress towards the US Congressional mandate. The KOM estimates that \(\sim 77\%\) of the currently cataloged objects will be detected by NEOS, with those not detected contributing \(\sim 9\%\) to the final completeness at the end of its 5 year mission. This model allows for placing the NEO Surveyor mission in the context of current surveys to more completely assess the progress toward the goal of cataloging the population of hazardous asteroids. Tommy Grav, Amy K. Mainzer, Joseph R. Masiero, Dar W. Dahlen, Tim Spahr, William F. Bottke, Frank J. Masci
2309.10076
Tamagawa numbers of quasi-split groups over function fields
We use Morris' theory of Eisenstein series for reductive groups over global function fields in order to extend Harder's computation of Tamagawa numbers to quasi-split groups.
Ralf Köhl, M. M. Radhika, Ankit Rai
2023-09-18T18:43:45Z
http://arxiv.org/abs/2309.10076v1
# Tamagawa numbers of quasi-split groups over function fields ###### Abstract. We use Morris' theory of Eisenstein series for reductive groups over global function fields in order to extend Harder's computation of Tamagawa numbers to quasi-split groups. ###### Contents
* 1 Introduction
* 2 Basics and Notation
* 2.1 Generalities about quasi-split reductive groups
* 2.2 Dual groups
* 2.3 Haar measures
* 2.4 Quasi-characters on tori
* 3 Determining the Tamagawa numbers
* 3.1 Eisenstein series
* 3.2 Intertwining operators
* 3.3 Prerequisites for the computation
* 3.4 A final computation
* A Dual groups and restriction of scalars
* B Quasi-split tori in simply connected groups
* C A lemma
## 1. Introduction Let \(F\) be a global field and \(\mathbb{A}\) be the adeles over \(F\). For an algebraic group \(G\) defined over \(F\), an invariant \(\tau(G)\in\mathbb{R}\) called the Tamagawa number can be associated to \(G\). This is the volume of the space \(G(F)\backslash G(\mathbb{A})\) with respect to a certain left \(G(F)\)-invariant Haar measure on \(G(\mathbb{A})\) called the Tamagawa measure. It was conjectured by Weil that for an absolutely simple simply connected algebraic group \(G\) over a global field, the Tamagawa number \(\tau(G)\) equals \(1\). This was first proved for split groups over number fields by Langlands [1] and over function fields by Harder [1]. The proof given by Langlands was rewritten in the adelic language for quasi-split groups by Rapoport [14] and Lai [15], thus giving a unified proof for the split and quasi-split groups over a number field. Using Arthur's trace formula, Kottwitz [16] proved Weil's conjecture over number fields. The proof of Weil's conjecture over function fields for any semisimple group \(G\) was given by Gaitsgory-Lurie [10] by a method different from the one used in the earlier works of Langlands, Lai, Rapoport and Kottwitz. In another direction, the theory of Eisenstein series was developed for general reductive groups over function fields in the works of Morris [17, 18]. Now that this theory is well developed, it is natural to proceed as in the works of Harder and Lai to directly prove Weil's conjecture for quasi-split groups over function fields. The present article should be considered as a contribution towards confirming Weil's conjecture for function fields via the strategy used for number fields. The main theorem of this article is as follows. **Theorem 1.1**.: _Let \(F\) be a function field of a smooth projective curve over \(\mathbb{F}_{q}\) where \(q\neq 2\) and \(G\) is a quasi-split semisimple simply connected group over \(F\). Then_ \[\tau(G)=1.\] For non-quasi-split groups, either the methods of Kottwitz will have to be used or, alternatively, some other way of establishing that the Tamagawa number does not change when passing to inner forms will have to be found. However, given the unsatisfactory state of the trace formula over function fields, at the moment one cannot proceed further with the methods of Kottwitz. Nevertheless, some progress towards Arthur's trace formula over function fields has been made in [14]. Tamagawa [13] originally observed that the group \(SO_{q}(\mathbb{A})\) can be endowed with a natural measure such that the Minkowski-Siegel formula is equivalent to the assertion that the Tamagawa number (i.e., the volume with respect to this natural measure) be \(2\). 
Weil [15] subsequently observed that for simply connected groups one should expect the value \(1\), which as - outlined above - has been confirmed by Kottwitz [16] for number fields and by Gaitsgory-Lurie [1] for function fields. The organization of the article is as follows. Section SS2.1 recalls the basics on reductive groups over global and local fields, root systems and sets up the notation for the subsequent sections. In Section SS2.3 the Tamagawa measure for semisimple groups, and more generally for reductive groups is defined following the work [10] of Oesterle. Section SS2.4 deals with quasi-characters on tori. The aim of SS3 is to prove Theorem 1.1. Section SS3.1 contains generalities on Eisenstein series. In Section SS3.2 we follow the methods of Lai and Rapoport [12, 13] for computing certain intertwining operators for groups over functions fields and thus, obtain precise information about their poles and zeros (See Theorem 3.3). Section SS3.3 and SS3.4 is devoted to proving the main theorem. The Appendix comprises the proofs of a few technical lemmas used in the main content of this article. These results are well-known and has been added with the hope of improving the exposition of this article. After preparing the present paper we learned from G. Prasad that our results have also been achieved by E. Kushnirsky in the unpublished part of his PhD thesis [11]. ## 2. Basics and Notation ### Generalities about quasi-split reductive groups Let \(F\) be a function field of a smooth projective curve defined over \(\mathbb{F}_{q}\), \(q\neq 2\), of genus \(g\). Let \(F^{sep}\) be a separable closure of \(F\) and \(\bar{F}\) be the algebraic closure. For any place \(v\) of \(F\), let \(F_{v}\) denote the corresponding local field, \(k(v)\) be the residue field at \(v\), \(\mathcal{O}_{v}\) be the ring of integers in \(F_{v}\), and \(\pi_{v}\) or \(\pi_{F_{v}}\) be a uniformizer of \(F_{v}\). Let \(G\) be a quasi-split group defined over \(F\), and \(B\subset G\) be a \(F\)-Borel subgroup fixed throughout this article. Let \(B=A\cdot N\) be a Levi decomposition, where \(N\) is the unipotent radical and \(A\) is a maximal torus defined over \(F\). Let \(\overline{N}\) be the opposite unipotent radical. Assume that the maximal torus \(A\) has been so chosen that the maximal split subtorus \(A_{d}\) of \(A\) is the maximal split torus of \(G\). For any place \(v\) of \(F\), let \(K_{v}\) denote a special maximal compact subgroup of \(G(F_{v})\) which always exists by Bruhat-Tits theory. If \(G\) is unramified at \(v\), then choose \(K_{v}\) to be the hyperspecial maximal compact subgroup. It is known that a reductive algebraic group \(G\) is unramified at almost all places. In other words, for almost all places \(v\) of \(F\), the group \(G\times_{F}F_{v}\) admits a smooth reductive model over \(\operatorname{Spec}(\mathcal{O}_{v})\) and \(K_{v}=G(\mathcal{O}_{v})\). Let \(S\) denote the set of places of \(F\) such that \(G\) is unramified outside \(S\). Let \(\mathsf{G},\mathsf{B},\mathsf{N}\) and \(\mathsf{K}\) respectively denote the groups \(G(\mathbb{A}),B(\mathbb{A}),N(\mathbb{A})\), and \(\underset{v}{\prod}K_{v}\). We have the Iwasawa decomposition \[\mathsf{G}=\mathsf{K}\mathsf{B}.\] Recall that a quasi-character is a continuous homomorphism from \(A(F)\backslash A(\mathbb{A})\) to \(\mathbb{C}^{\times}\). 
A character \(\lambda:A\to\mathbb{G}_{m}\), defined over \(F\), gives a quasi-character \(\lambda:A(F)\backslash A(\mathbb{A})\to q^{\mathbb{Z}}\) defined to be the composite map \[A(F)\backslash A(\mathbb{A})\to F^{\times}\backslash\mathbb{A}^{\times}\to q^{ \mathbb{Z}}.\] Denote with \(X^{*}(A)\) (resp. \(X^{*}(A_{d})\)) the group of characters of the torus \(A\) (resp. \(A_{d}\)) defined over the field of definition of \(A\) (resp. \(A_{d}\)), and with \(\Lambda(A)\) the set of quasi-characters of \(A\). #### 2.1.1. Root systems Let \(G\supset B\supset A\) be as before. Let \(\Pi_{F}\subset X^{*}(A_{d})\) be the subset of non-trivial weights of \(A_{d}\) on \(\mathfrak{g}\). Let \(X_{*}(A_{d})\) be the set of cocharacters of \(A_{d}\) and \(\Pi_{F}^{\vee}\subset X_{*}(A_{d})\) be the set of coroots. The root data \((\Pi_{F},X^{*}(A_{d}))\) can be enhanced to the tuple \((X^{*}(A_{d}),\Pi_{F},X_{*}(A_{d}),\Pi_{F}^{\vee})\) called the relative root datum. Denote the absolute root datum by \((X^{*}(A\times F^{sep}),\Pi,\)\(X_{*}(A\times F^{sep}),\Pi^{\vee})\). Let \(X_{+}^{*}(A\times F^{sep})\) and \(X_{-}^{*}(A\times F^{sep})\) respectively denote the weight lattice of the universal cover and the root lattice of \(G\times F^{sep}\). In the sequel we shall assume \(G\) is simply connected unless stated otherwise. Note that this assumption implies \(X_{-}^{*}(A\times F^{sep})=X_{+}^{*}(A\times F^{sep})\). Let \(\Pi^{+}\) and \(\Pi_{F}^{+}\) respectively denote the set of positive absolute roots and the set of positive relative roots of \(G\) with respect to \(B\). Let \(\rho\) be the half sum of positive relative roots counted with multiplicity. We can also define \(\rho\) to be the element of \(X^{*}(A_{d})\) or \(X^{*}(A)\) given by \(a\mapsto\det\big{(}\mathrm{Ad}(a|_{\mathrm{Lie}(N)})\big{)}^{1/2}\). Let \(W_{F}:=N_{G}(A)(F)/Z_{G}(A)(F)\) be the relative Weyl group and \(W=N_{G}(A)(F^{sep})/Z_{G}(A)(F^{sep})\) be the absolute Weyl group. We have an embedding \(W_{F}\hookrightarrow W\). Recall that there is a \(W_{F}\)-equivariant positive definite bilinear form \(\langle\cdot,\cdot\rangle:X^{*}(A_{d})_{\mathbb{R}}\times X^{*}(A_{d})_{ \mathbb{R}}\to\mathbb{R}\) such that, the coroot \(a^{\vee}\) corresponding to the root \(a\in\Pi_{F}\) is the element \(2a/\langle a,a\rangle\) under the isomorphism \(X^{*}(A_{d})\simeq X_{*}(A_{d})\) given by \(\langle\cdot,\cdot\rangle\). The set \((\mathbb{Z}\Pi_{F}^{\vee})^{*}\subset X^{*}(A_{d})_{\mathbb{Q}}\), defined under the pairing \(X^{*}(A_{d})_{\mathbb{Q}}\times X_{*}(A_{d})_{\mathbb{Q}}\to\mathbb{Q}\), is called the relative weight lattice of \(G\). #### 2.1.2. Groups over local fields Given a place \(v\) of \(F\), the group \(G\times_{F}F_{v}\) is quasi-split as \(G\) was assumed to be quasi-split. Furthermore, if \(G\times_{F}F_{v}\) splits over an unramified extension \(E\) of \(F_{v}\), then \(G\) admits a smooth reductive model over \(\mathcal{O}_{v}\) and thus a canonical choice of maximal hyperspecial compact subgroup \(K_{v}=G(\mathcal{O}_{v})\). If \(G\times_{F}F_{v}\) does not split over an unramified extension then it is possible to construct a _parahoric_ (see footnote 1) group scheme over \(\mathrm{Spec}(\mathcal{O}_{v})\) and we define \(K_{v}:=G(\mathrm{Spec}(\mathcal{O}_{v}))\). This is again a maximal compact subgroup of \(G(F_{v})\). We assume these choices have been made and fixed for the rest of the article. 
In the later sections we will need a classification of quasi-split groups over function fields of characteristic \(\neq 2\). Thang [14] gives a complete classification of these groups which was started in the seminal work of Bruhat-Tits [1]. According to the table in [14], up to central isogeny there are two quasi-split absolutely simple algebraic groups of relative rank \(1\). They are 1. \(SL_{2}\) 2. \(SU(3,E_{v}/F_{v})\), where \(E_{v}/F_{v}\) is a quadratic extension. ### Dual groups We will recall the definition of the dual groups and setup a few more notation here. Let \(G\) be a quasi-split group over any field \(F\), \(A\) be a maximal torus in \(G\) defined over \(F\), and let \(E/F\) be a separable extension such that \(G\times_{F}E\) is a split reductive group. Let \(\Psi(G):=(X^{*}(A\times_{F}E),\Pi_{E},X_{*}(A\times_{F}E),\Pi_{E}^{\vee})\) be the root datum of the split reductive group. Consider the dual root datum \(\Psi(G)^{\vee}:=(X_{*}(A\times_{F}E),\Pi_{E}^{\vee},X^{*}(A\times_{F}E),\Pi_{E})\) to which is associated a connected semisimple group \(\widehat{G}\) over \(\mathbb{C}\). Observe that \(\mathrm{Gal}(E/F)\) acts on the root datum \(\Psi(G)\) and consequently, we get a Galois action on the dual root datum \(\Psi(G)^{\vee}\). This will induce an action of \(\mathrm{Gal}(E/F)\) on the associated dual group \(\widehat{G}\) as explained below. Let \(\widehat{A}\) be the maximal torus of \(\widehat{G}\). Then the construction of the Langlands dual gives a canonical identification \(\eta:\widehat{A}(\mathbb{C})\to(X^{*}(A\times_{F}E)\otimes\mathbb{C})^{\times}\). Let \(\Delta\subset\Pi_{E}\) be the set of simple roots. For \(\alpha_{i}\in\Delta\) choose the vectors \(X_{\alpha_{i}^{\vee}}\in\widehat{\mathfrak{g}}:=\operatorname{Lie}(\widehat{G})\) such that for every \(\sigma\in\operatorname{Gal}(E/F)\), \(\sigma(X_{\alpha_{i}^{\vee}})=X_{\sigma\alpha_{i}^{\vee}}\). This gives a pinning \((X_{*}(A\times_{F}E),\Pi_{E}^{\vee},X^{*}(A\times_{F}E),\Pi_{E},\{X_{\alpha^{ \vee}}\}_{\alpha^{\vee}\in\Delta^{\vee}})\) of \(\widehat{G}\) equipped with a \(\operatorname{Gal}(E/F)\) action. Since \(\operatorname{Gal}(E/F)\) acts on the dual root datum, this action can be lifted to an action on the group \(\widehat{G}\) using the splitting of the short exact sequence \[1\to\operatorname{Inn}(\widehat{G})\to\operatorname{Aut}(\widehat{G})\to \operatorname{Aut}(\Psi(G)^{\vee})\to 1\] provided by the pinning. ### Haar measures Let \(\omega\) be a left invariant differential form on \(G\) of degree \(\dim(G)\) defined over \(F\). This induces a form \(\omega_{v}\) on \(G\times_{F}F_{v}\). Denote by \(\operatorname{ord}_{e}(\omega_{v})\) the number \(n\) such that \((\omega_{v})_{e}(\wedge^{\dim(G)}\operatorname{Lie}(G))=\pi_{v}^{n}\). The form \(\omega_{v}\) defines a left \(G(F_{v})\)-invariant measure on \(G(F_{v})\) denoted by \(\overline{\mu}_{v,\omega}\). For all places \(v\notin S\), normalize \(\overline{\mu}_{v,\omega}\) as follows \[\overline{\mu}_{v,\omega}(G(\mathcal{O}_{v}))=\frac{\sharp G(k(v))}{(\sharp k( v))^{\dim(G)+\operatorname{ord}_{e}(\omega_{v})}}\] (cf. [1, SS2.5]). For \(v\in S\), we refer the reader to [1, SS10.1.6] for the construction of the Haar measure \(\overline{\mu}_{v,\omega}\) (denoted \(\operatorname{mod}(\omega_{v})\) in Bourbaki) on \(G(F_{v})\). We need more preliminaries before defining the Tamagawa measure on \(\mathsf{G}\). 
For \(v\notin S\) denote by \(L_{v}(s,X^{*}(G))\) the local Artin \(L\)-function associated to the \(\operatorname{Gal}(F^{sep}/F)\)-representation \(X^{*}(G\times_{F}F^{sep})\otimes\mathbb{C}\), where \(X^{*}(G\times_{F}F^{sep})\) denotes the group of characters of \(G\) defined over \(F^{sep}\). Renormalize the measure \(\overline{\mu}_{v,\omega}\) on \(G(F_{v})\) to \(L_{v}(1,X^{*}(G))\overline{\mu}_{v,\omega}\), and denote the renormalized measure by \(\mu_{v,\omega}\). The unnormalized Tamagawa measure on \(\mathsf{G}\) is then defined to be the measure \(\overline{\mu}:=\prod_{v}\mu_{v,\omega}\). Let \(L^{S}(s,X^{*}(G))\) denote the product of local \(L\)-functions outside the set of ramified places \(S\). We normalize \(\overline{\mu}\) to \[\mu:=q^{\dim(G)(1-g)}\frac{\overline{\mu}}{\lim_{s\to 1}(1-s)^{\operatorname{rk}X^ {*}(G)}L^{S}(s,X^{*}(G))},\] and call it the Tamagawa measure of \(\mathsf{G}\). This measure is independent of the choice of \(\omega\) (cf. [1, Def. 4.7]). When \(G\) is semisimple, \(\mathsf{G}\) is a unimodular group and hence, the measure \(\mu\) descends to \(G(F)\backslash\mathsf{G}\). The Tamagawa number is then defined as \[\tau(G):=\operatorname{vol}_{\mu}(G(F)\backslash\mathsf{G}).\] To extend the definition of the Tamagawa measure for a general reductive group \(G\) we proceed as follows. Consider the kernel \(\mathsf{G}_{1}\) of the homomorphism \(\mathsf{G}\xrightarrow{\mathfrak{I}}\hom_{\mathbb{Z}}(X^{*}(G),q^{\mathbb{Z}})\) defined by \(g\mapsto\big{(}\chi\mapsto\|\chi(g)\|\big{)}\) where \(g:=(g_{v})\in\mathsf{G}\). The image of \(\mathsf{G}\) under \(\mathfrak{I}\) is of finite index (see [1, SS5.6 Prop.]), and the Tamagawa number of \(G\) is defined as \[\tau(G):=\frac{\operatorname{vol}_{\mu}(G(F)\backslash\mathsf{G}_{1})}{(\log q )^{\operatorname{rk}X^{*}(G)}[\hom_{\mathbb{Z}}(X^{*}(G),q^{\mathbb{Z}}): \mathfrak{I}(\mathsf{G})]}.\] Choose a Haar measure on \(F_{v}\) such that \(\operatorname{vol}(\mathcal{O}_{v})=1\). Let \(\overline{da}\) and \(\overline{dn}\) be the unnormalized Tamagawa measures on \(A(\mathsf{A})\) and \(\mathsf{N}\) respectively. Let \(dk\) be the unique left invariant (and hence right invariant) Haar measure on \(\mathsf{K}\) such that \(\operatorname{vol}_{dk}(\mathsf{K})=1\). Using the Iwasawa decomposition \(\mathsf{G}=\mathsf{N}A(\mathsf{A})\mathsf{K}\), \(\rho^{-2}(a)\overline{dn}\overline{da}dk\) is a left invariant Haar measure on \(\mathsf{G}\). Thus, there exists a positive constant \(\kappa\) such that \[\overline{\mu}=\kappa\rho^{-2}(a)\overline{dn}\,\overline{da}dk.\] Let \(w_{0}\) be the longest element of the Weyl group that sends all the positive roots to the negative roots and \(\dot{w}_{0}\) be a representative in \(N_{G}(A)(F)\) such that \(\dot{w}_{0v}\) belongs to \(K_{v}\) for all \(v\notin S\). Then \(N(F_{v})A(F_{v})\dot{w}_{0}N(F_{v})\) is a dense open subset and has full measure. Thus, comparing the measures \(\mu_{v,\omega}\) and \(\rho^{-2}(a)dn_{v}da_{v}dn_{v}^{\prime}\) (see footnote 2) we get \[\mu_{v,\omega}=c_{v}\rho^{-2}(a)dn_{v}da_{v}dn_{v}^{\prime},\] where \(c_{v}=\frac{L_{v}(1,X^{*}(G))}{L_{v}(1,X^{*}(A))}\) when \(v\notin S\), and \(c_{v}=1\) otherwise. ### Quasi-characters on tori Let \(A\) be a torus as before and \(r=F\text{-rk}(G)\). 
The map \[\mathfrak{I}: A(\mathbb{A})\to\hom(X^{*}(A),q^{\mathbb{Z}})\] \[a\mapsto(\chi\mapsto\|\chi(a)\|)\] defined in [10] induces a map \[\mathfrak{I}_{\mathbb{C}}^{*}: X^{*}(A)\otimes\mathbb{C}\xrightarrow{}\hom(A(\mathbb{A})/A( \mathbb{A})_{1},\mathbb{C}^{\times})\] \[\sum_{i}c_{i}\chi_{i}\xrightarrow{}(a\mapsto\prod_{i}\|\chi_{i}( a)\|^{c_{i}}).\] The map \(\mathfrak{I}_{\mathbb{C}}^{*}\) is surjective and \(X^{*}(A)\otimes\frac{2\pi\iota}{\log q}\mathbb{Z}\) is a finite index subgroup of \(\ker(\mathfrak{I}_{\mathbb{C}}^{*})\). Both these assertions follow from the existence of the commutative diagram below (1) where the vertical arrows are induced by the inclusion \(A_{d}\subset A\). The right vertical arrow is an isomorphism since the obvious inclusion \(A_{d}(\mathbb{A})/A_{d}(\mathbb{A})_{1}\hookrightarrow A(\mathbb{A})/A( \mathbb{A})_{1}\) is an isomorphism. This follows from the fact that the anisotropic part of the torus is contained in \(A(\mathbb{A})_{1}\). The left vertical arrow is an isomorphism since the torus \(A\) is quasi-split, which implies that the map \(X^{*}(A)\to X^{*}(A_{d})\) is injective and the image is of finite index. Because the kernel of the bottom arrow in (1) is known to be \(X^{*}(A_{d})\otimes\frac{2\pi\iota}{\log q}\mathbb{Z}\) (see [1]), \(X^{*}(A)\otimes\frac{2\pi\iota}{\log q}\mathbb{Z}\) is a finite index subgroup of the kernel of the top arrow. The induced map on the quotient is again denoted \(\mathfrak{I}_{\mathbb{C}}^{*}\) \[X^{*}(A)\otimes\mathbb{C}/\left(X^{*}(A)\otimes\frac{2\pi\iota}{\log q} \mathbb{Z}\right)\xrightarrow{\mathfrak{I}_{\mathbb{C}}^{*}}\hom(A(\mathbb{A} )/A(\mathbb{A})_{1},\mathbb{C}^{\times}). \tag{2}\] Fix a coordinate system on \(X^{*}(A)\otimes\mathbb{C}/\left(X^{*}(A)\otimes\frac{2\pi\iota}{\log q} \mathbb{Z}\right)\) as follows. Let \(\{\varpi_{i}\}\) be the fundamental weights of the group \(G\). Denote by \([\varpi_{i}]\) the sum over \(\operatorname{Gal}(E/F)\)-orbit of \(\varpi_{i}\). Since \(G\) is assumed to be simply connected we have the equality \(X^{*}(A\times F^{sep})=\oplus_{i}\mathbb{Z}\varpi_{i}\). Moreover, \(G\) is quasi-split and hence by Lemma B.1\(A\) is a quasi-split torus. Now, using [10, Thm. 2.4] we get that \([\varpi_{i}]\) is a \(\mathbb{Z}\)-basis of \(X^{*}(A)\). The above choice of coordinate system induces the isomorphism \[\mathbb{Z}^{r}\xrightarrow{\xi}X^{*}(A). \tag{3}\] A small computation shows that \(\xi(1,1,\ldots,1)=\rho\). For \(\lambda\in\Lambda(A)\) define \(\Re\lambda(t):=|\lambda(t)|\in\mathbb{R}\) and \[\Lambda_{\sigma}(A):=\{\lambda\in\Lambda(A)\mid\Re(\lambda)=\sigma\}.\] The latter is a translate of \(\Lambda_{0}(A)\) which is the Pontryagin dual of \(A(F)\backslash A(\mathbb{A})\). Equip \(\Lambda_{0}(A)\) with the Haar measure \(d\lambda\) that is dual to the measure on \(A(F)\backslash A(\mathbb{A})\) induced by \(\overline{da}\). The measure on \(\Lambda_{\sigma}(A)\) is then the unique left \(\Lambda_{0}(A)\)-invariant measure such that the volume remains the same. We fix this measure for the future computations. #### 2.4.1. 
Comparison of measures on quasi-characters The short exact sequence \[1\to A(F)\backslash A(\mathbb{A})_{1}\to A(F)\backslash A(\mathbb{A})\to A( \mathbb{A})/A(\mathbb{A})_{1}\to 1\] of locally compact abelian groups gives the exact sequence \[1\to\hom(A(\mathbb{A})/A(\mathbb{A})_{1},S^{1})\to\Lambda_{0}(A)\to\hom(A(F) \backslash A(\mathbb{A})_{1},S^{1})\to 1.\] Since the last term is discrete we get the equality \(\hom(A(\mathbb{A})/A(\mathbb{A})_{1},S^{1})=\Lambda_{0}(A)^{\circ}\). The pullback of the measure \(d\lambda|_{\Lambda_{0}(A)^{\circ}}\) along the map \(\mathfrak{I}_{\mathbb{C}}^{*}\), denoted by \(d\lambda|_{\frac{X^{*}(A)\otimes\mathbb{R}}{2\pi/\log(q)X^{*}(A)}}\), can be compared with the dual measure on \(X^{*}(A)\otimes\mathbb{R}\). Arguing as in [10, Lemma 6.7] we get the following: **Lemma 2.1**.: \[d\lambda|_{\frac{X^{*}(A)\otimes\mathbb{R}}{2\pi/\log(q)X^{*}(A)}}=\frac{[ \hom(X^{*}(A),q^{\mathbb{Z}}),\operatorname{im}\mathfrak{I}]}{\operatorname {vol}_{\overline{da}}(A(F)\backslash A(\mathbb{A})_{1})}\left(\frac{\log q}{2 \pi}\right)^{r}dz_{1}\wedge\cdots\wedge dz_{r}.\] Proof.: Recall the map \(\mathbb{C}^{r}\xrightarrow{\xi}X^{*}(A)\otimes\mathbb{C}\) in (3) giving the isomorphism \(\mathbb{C}^{r}/\frac{2\pi}{\log q}\mathbb{Z}^{r}\simeq X^{*}(A)\otimes \mathbb{C}/\frac{2\pi}{\log q}X^{*}(A)\). Equip the latter space with the measure that assigns mass \(1\) to the fundamental domain \(X^{*}(A)\otimes\mathbb{R}/\frac{2\pi}{\log q}X^{*}(A)\), which under the above isomorphism equals the measure \(\left(\frac{\log q}{2\pi}\right)^{r}dz_{1}\wedge\cdots\wedge dz_{r}\). Denote by \(\mathfrak{I}^{\vee}:(\hom(X^{*}(A),q^{\mathbb{Z}}))^{\vee}\to\hom(A(\mathbb{A })/A(\mathbb{A})_{1},S^{1})\) the map induced by \(\mathfrak{I}\) on the Pontryagin dual. We get the following short exact sequence \[1\to(\hom(X^{*}(A),q^{\mathbb{Z}})/\operatorname{im}\mathfrak{I})^{\vee}\to( \hom(X^{*}(A),q^{\mathbb{Z}}))^{\vee}\xrightarrow{\mathfrak{I}^{\vee}}\hom(A (\mathbb{A})/A(\mathbb{A})_{1},S^{1})\to 1\] The term in the middle is isomorphic to \(X^{*}(A)\otimes\mathbb{R}/\frac{2\pi}{\log q}X^{*}(A)\) and the first term is abstractly isomorphic to \(\hom(X^{*}(A),q^{\mathbb{Z}})/\operatorname{im}\mathfrak{I}\) since it is finite. Note that the quotient measure on \(A(\mathbb{A})/A(\mathbb{A})_{1}\) is \(\operatorname{vol}_{\overline{da}}(A(F)\backslash A(\mathbb{A})_{1})\) times the counting measure and hence, the dual measure \(d\lambda\) assigns the mass \(1/\operatorname{vol}_{\overline{da}}(A(F)\backslash A(\mathbb{A})_{1})\) to \(\hom(A(\mathbb{A})/A(\mathbb{A})_{1},S^{1})\). The pullback of this measure along \(\mathfrak{I}_{\mathbb{C}}^{*}\) is a Haar measure which assigns mass \(\frac{\hom(X^{*}(A),q^{\mathbb{Z}})/\operatorname{im}\mathfrak{I}]}{\operatorname {vol}_{\overline{da}}(A(F)\backslash A(\mathbb{A})_{1})}\) to \((\hom(X^{*}(A),q^{\mathbb{Z}}))^{\vee}\), whereas the Haar measure \(\left(\frac{\log q}{2\pi}\right)^{r}dz_{1}\wedge\cdots\wedge dz_{r}\) assigns it mass \(1\). Hence the claim. ## 3. Determining the Tamagawa numbers ### Eisenstein series Let \(\varphi:\mathbb{N}B(F)\backslash\mathbb{G}/\mathbb{K}\to\mathbb{C}\) be a compactly supported measurable function. Let \(\varphi_{v}\) denote the local components of \(\varphi\) such that \(\varphi=\prod\limits_{v}\varphi_{v}\). 
For any \(\lambda\in\Lambda(A)\) the Fourier transform is defined as \[\widehat{\varphi}(\lambda)(g):=\int\limits_{A(F)\backslash A(\mathbb{A})} \varphi(ag)\lambda^{-1}(a)\rho^{-1}(a)\overline{da}.\] Let \(\widehat{\varphi}(\lambda)_{v}\) denote the restriction of \(\widehat{\varphi}(\lambda)\) to \(G(F_{v})\). Then for \(g=(g_{v})_{v}\) we have \(\widehat{\varphi}(\lambda)(g)=\prod\limits_{v}\widehat{\varphi}(\lambda)_{v}( g_{v})\). Note that \(\widehat{\varphi}(\lambda)(g)\) is determined by its value at \(1\) and we denote this value simply by \(\widehat{\varphi}(\lambda)\). On applying Fourier inversion \[\varphi(g)=\int_{\mathbb{R}(\lambda)=\lambda_{0}}\widehat{\varphi}(\lambda)(g)d\lambda,\] where \(\lambda_{0}\) is such that for any coroot \(\alpha^{\vee}\) the composite \(F^{*}\backslash\mathbb{A}^{*}\xrightarrow{\alpha^{\vee}}A(F)\backslash A( \mathbb{A})\xrightarrow{|\lambda|}\mathbb{R}^{\times}\) given by \(|\cdot|^{s_{\alpha}}\) satisfies \(s_{\alpha}>1\). For \(\varphi\) as above define the theta series \[\theta_{\varphi}(g):=\sum\limits_{\gamma\in B(F)\backslash G(F)}\varphi(\gamma g).\] The above series converges uniformly on compact subsets of \(G(F)\backslash\mathbb{G}\) (see [11, SS2.3]). In fact, the support of \(\theta_{\varphi}\) is compact and hence, it is in \(L^{2}(G(F)\backslash\mathbb{G})\). Define \[E(g,\widehat{\varphi}(\lambda)):=\sum\limits_{\gamma\in B(F)\backslash G(F)} \widehat{\varphi}(\lambda)(\gamma g).\] We have \[\theta_{\varphi}(g)=\sum_{\gamma\in B(F)\backslash G(F)}\varphi( \gamma g) =\sum_{\gamma\in B(F)\backslash G(F)}\int_{\Re(\lambda)=\lambda_{0}} \widehat{\varphi}(\lambda)(\gamma g)d\lambda\] \[\stackrel{{*}}{{=}}\int_{\Re(\lambda)=\lambda_{0}} \sum_{\gamma\in B(F)\backslash G(F)}\widehat{\varphi}(\lambda)(\gamma g)d\lambda\] \[=\int_{\Re(\lambda)=\lambda_{0}}E(g,\widehat{\varphi}(\lambda))d\lambda.\] The assumption on \(\lambda_{0}\) is used in the equality marked with \(*\) above. The Eisenstein series \(E(g,\widehat{\varphi}(\lambda))\) is a priori defined on the domain \(\lambda\in X^{*}(A)\otimes\mathbb{C}/\left(X^{*}(A)\otimes\frac{2\pi_{t}}{ \log q}\mathbb{Z}\right)\) and \(\Re(\lambda)-\rho\in C\) (see footnote 3), but can be continued meromorphically to all of \(X^{*}(A)\otimes\mathbb{C}/\left(X^{*}(A)\otimes\frac{2\pi_{t}}{\log q}\mathbb{ Z}\right)\) (see footnote 4). Footnote 3: \(C\) is the positive Weyl chamber; or in other words \((\Re(\lambda),\alpha)>(\alpha,\rho)\). For \(w\in W_{F}\), let \(\dot{w}\) denote a lift to \(N_{G}(A)(F)\). Set \(\dddot{w}N=\dot{w}N\dot{w}^{-1}\) and \(N^{\dot{w}}=\dot{w}\overline{N}\dot{w}^{-1}\cap N\). Recall the definition of the local and global intertwining operators, \[\big{(}M_{v}(w,\lambda)\widehat{\varphi}(\lambda)_{v}\big{)}(g_{v} ) =\int\limits_{N^{\dot{w}}(F_{v})}\widehat{\varphi}(\lambda)_{v}( \dot{w}n_{v}g_{v})dn_{v}\quad\text{ for any }g_{v}\in G(F_{v}), \tag{4}\] \[\big{(}M(w,\lambda)\widehat{\varphi}(\lambda)\big{)}(g) =\int\limits_{\dddot{w}N(F)\cap N(F)\backslash\mathsf{N}}\widehat {\varphi}(\lambda)(\dot{w}ng)\overline{dn}\] \[=\operatorname{vol}\!\left(\dddot{w}N(F)\cap N(F)\backslash( \dddot{w}\mathsf{N}\cap\mathsf{N})\right)\int\limits_{\dot{w}\mathsf{N}\dot{w} \cap\mathsf{N}}\widehat{\varphi}(\lambda)(\dot{w}ng)\overline{dn}\] \[=\operatorname{vol}\!\left(\dddot{w}N(F)\cap N(F)\backslash( \dddot{w}\mathsf{N}\cap\mathsf{N})\right)\int\limits_{\mathsf{N}\dot{w}} \widehat{\varphi}(\lambda)(\dot{w}ng)\overline{dn}. 
\tag{5}\] The last equality in (5) follows from \(\dddot{w}N(F)\cap N(F)\backslash\mathsf{N}=\dddot{w}N(F)\cap N(F)\backslash \big{(}\dddot{w}\mathsf{N}\cap\mathsf{N}\big{)}\mathsf{N}^{\dot{w}}\), and the left \(\mathsf{N}\)-invariance of \(\widehat{\varphi}(\lambda)\). Observe that for \(w=w_{0}\) the group \(\dddot{w}\mathsf{N}\cap\mathsf{N}\) is trivial and hence combining equations (4) and (5) we get \[M(w_{0},\lambda)=\prod_{v}M_{v}(w_{0},\lambda).\] ### Intertwining operators The aim of this subsection is to prove the following: **Theorem 3.3**.: _The intertwining operator \(M(w_{0},\lambda)\) has a simple pole along each of the hyperplanes \(s_{i}=1\) in the region \(1-\epsilon<\Re(s_{i})<1+\epsilon\) for some \(\epsilon>0\). In particular, \(M(w_{0},s\rho)\) has a pole of order \(F\text{-}\text{rk}(G)\) at \(s=1\)._ We reduce the calculation of the integrals defining certain intertwining operators to the case of quasi-split semisimple simply connected rank \(1\) groups following [10, 11]. The strategy used by Lai and Rapoport is shown to work in a similar manner over function fields. As a result we obtain Theorem 3.3 which is crucial to implement the arguments of Harder in order to prove Theorem 3.12. We remark here that, unlike the strategy followed in the present article, Harder explicitly computes an expression for the Eisenstein series (see [10, SS2.3]) and concludes Theorem 3.3 as a corollary of his results. #### 3.2.1 Local intertwining operators We will require the computation of the local intertwining operators \(M_{v}(w_{0},\rho)\) for the ramified and unramified places of \(F\). ##### 3.2.1.1 \(M_{v}(w_{0},\rho)\) for ramified places Let \(G\), \(F\) and \(S\) be as in Section 2. Thus, for any \(v\notin S\) the group \(G\times_{F}F_{v}\) splits over an unramified extension of \(F_{v}\). Let \(\dot{w}_{0}\) denote a representative in \(N_{G}(A)(F)\) of the longest Weyl group element \(w_{0}\in W_{F}\) as in 2.3. Let \(\mathbb{A}_{S}\) denote the ring of adeles over \(F\) with trivial component outside \(S\). For \(n\in N(\mathbb{A}_{S})\) we write the Iwasawa decomposition of \(\dot{w}_{0}n\) as \[\dot{w}_{0}n=n_{1}(n)a(n)k(n)\in N(\mathbb{A})A(\mathbb{A})\mathbb{K}.\] **Proposition 3.4**.: _For any finite set \(S^{\prime}\) containing the set of ramified places, let \(M_{S^{\prime}}(w_{0},\rho)=\prod\limits_{v\in S^{\prime}}M_{v}(w_{0},\rho)\). 
Then_ \[M_{S^{\prime}}(w_{0},\rho)=\int_{(\dot{w}_{0}N)(\mathbb{A}_{S^{\prime}})}|\rho |^{2}(a(n))\overline{dn}=\kappa\left(\prod\limits_{v\notin S^{\prime}}\text{ vol}(K_{v})\right)^{-1}\left(\prod\limits_{v\in S^{\prime}}c_{v}\right).\] Proof.: Let \(f\) be a right \(\mathbb{K}\)-invariant function on \(\mathbb{G}\) defined for any \(g=nak\in\mathbb{G}\) as follows \[f(g)=\begin{cases}0&\text{if $g_{v}\notin K_{v}$ for some $v\notin S^{\prime}$,}\\ h(n_{S^{\prime}},a_{S^{\prime}})&\text{otherwise, where $n_{S^{\prime}}=(n_{v})_{v \in S^{\prime}},a_{S^{\prime}}=(a_{v})_{v\in S^{\prime}}$}\\ &\text{and $h:N(\mathbb{A}_{S^{\prime}})\times A(\mathbb{A}_{S^{\prime}})\to \mathbb{R}$ is any integrable function}\end{cases}\] Using the equality \(\overline{\mu}=\kappa|\rho|^{-2}(a)\overline{dn}\,\overline{da}dk\) we get \[\int\limits_{\mathbb{G}}f(g)\overline{\mu} =\kappa\int\limits_{\mathbb{N}A(\mathbb{A})\mathbb{K}}f(nak)|\rho |^{-2}(a)\overline{dn}\,\overline{da}dk\] \[=\kappa\int\limits_{N(\mathbb{A}_{S^{\prime}})A(\mathbb{A}_{S^{ \prime}})}h(n_{S^{\prime}},a_{S^{\prime}})|\rho|^{-2}(a)\overline{dn}\, \overline{da}.\] The largest Bruhat cell \(B\dot{w}_{0}N\) has full measure with respect to \(\overline{\mu}\) and hence the left hand side of the above integral equals the integral on this cell. Using the Iwasawa decomposition of \(\dot{w}_{0}n^{\prime}\in\dot{w}_{0}N\) in the Bruhat decomposition of \(g\) we get, \(na\dot{w}_{0}n^{\prime}=nan_{1}(n^{\prime})a(n^{\prime})k(n^{\prime})\). In the following we omit the subscript \(S^{\prime}\) in the integrand for convenience. We further let \(\kappa^{\prime}:=\left(\prod_{v\notin S^{\prime}}\operatorname{vol}(K_{v})\right) \left(\prod_{v\in S^{\prime}}c_{v}\right)\). \[\int\limits_{\mathsf{B}\dot{w}_{0}\mathsf{N}}f(g)\overline{\mu} =\kappa^{\prime}\int\limits_{B(\mathsf{A}_{S^{\prime}})\dot{w}_{0} N(\mathsf{A}_{S^{\prime}})}\!\!f(nan_{1}(n^{\prime})a^{-1}aa(n^{\prime})k(n^{ \prime}))|\rho|^{-2}(a)\overline{dn}\,\overline{da}\,\overline{dn^{\prime}}\] \[=\kappa^{\prime}\int\limits_{\dot{w}_{0}N(\mathsf{A}_{S^{\prime}} )}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! intertwining operators to the case of \(F\)-rank one groups. Here we will give an explicit computation of \(M_{v}(w_{0},\lambda)\) for \(F\)-rank one groups. Fix \(\lambda\in X^{*}(A)\otimes\mathbb{C}\) and suppose that \(\hat{t}\in\widehat{A}\) is such that for any \(\mu\in X_{*}(A\times_{F_{v}}E_{v})\) the equality \(\hat{t}(\mu)=|\pi_{v}|^{\lambda(\cdot\mu)}\) holds. 
Let \(\widehat{\mathfrak{u}}\) be the Lie subalgebra of \(\widehat{\mathfrak{g}}\) corresponding to the unipotent radical \(N\). **Theorem 3.7**.: _Suppose \(E_{v}/F_{v}\) is the unramified extension that splits \(G\) and let \(\sigma\in\operatorname{Gal}(E_{v}/F_{v})\) be the Frobenius element. Then_ \[M_{v}(w_{0},\lambda)=\frac{\det\big{(}I-|\pi_{v}|\mathrm{Ad}(\sigma\hat{t})| _{\widehat{\mathfrak{u}}}\big{)}}{\det\big{(}I-\mathrm{Ad}(\sigma\hat{t})|_{ \widehat{\mathfrak{u}}}\big{)}}. \tag{7}\] Proof.: The proof will be done in stages by verifying the above formula first for semisimple rank \(1\) groups and then for higher rank groups. **Step 1.** The theorem is true in the case of absolutely simple simply connected groups of semisimple \(F_{v}\)-rank \(1\). We quote the results of Rapoport and Lai below. **Proposition 3.8** ([10], SS4.4(a)).: _The intertwining operator \(M_{v}(w_{0},s\rho)\) for the group \(SL_{2}\) is given by_ \[M_{v}(w_{0},s\rho)=\frac{(1-q^{-s-1})}{(1-q^{-s})}.\] **Proposition 3.9** ([11], Prop. 3.4).: _Let \(E_{v}/F_{v}\) be a quadratic unramified extension of \(F_{v}\) and \(SU(3,E_{v}/F_{v})\) be the quasi-split group defined over \(F_{v}\). Suppose \(2\) is invertible in \(F_{v}\). Then_ \[M_{v}(w_{0},s\rho)=\frac{(1-q^{-2s-2})(1+q^{-2s-1})}{(1-q^{-2s})(1+q^{-2s})}.\] **Step 2.** If the theorem is true for \(G\) then it is true for any central isogeny \(\widetilde{G}\to G\). Let \(\widetilde{G}\to G\) be a central isogeny. The notation \(\widetilde{\ }\) will denote the corresponding objects for \(\widetilde{G}\). It is clear that the right hand side of (7) is the same for \(\widetilde{G}\) and \(G\). Further, the isogeny induces the isomorphisms \(\widetilde{W}_{F_{v}}\xrightarrow{\sim}W_{F_{v}}\) and \(X^{*}(A)\otimes\mathbb{C}\xrightarrow{\sim}X^{*}(\widetilde{A})\otimes \mathbb{C}\), where the image of \(\rho\) under the latter isomorphism is \(\widetilde{\rho}\). Also, the images of \(\widetilde{N},\ \widetilde{A}\) and \(\widetilde{K}\) are \(N,\ A\) and \(K\) respectively; and \(\widetilde{\widetilde{N}}\xrightarrow{\sim}\overline{N}\). Thus the image of \(\widetilde{nak}\) maps to \(nak\) which is the Iwasawa decomposition. **Step 3.** Let \(G=\operatorname{Res}_{E^{\prime}_{v}/F_{v}}G^{\prime}\) for a quasi-split simply connected semisimple group \(G^{\prime}\) defined over \(E^{\prime}_{v}\) which splits over \(E_{v}\) and let the degree of the unramified extension \(E^{\prime}_{v}/F_{v}\) be \(n\). If the theorem is true for \(G^{\prime}\) then it is true for \(G\). Alphabets with superscript \({}^{\prime}\) will denote the corresponding objects for the group \(G^{\prime}\). The Weyl groups \(W^{\prime}_{E^{\prime}_{v}}\) and \(W_{F_{v}}\) are the same. Also, if \(A=R_{E^{\prime}_{v}/F_{v}}(A^{\prime})\) we can identify \(\widehat{A}\) with \(\prod_{\operatorname{Gal}(E^{\prime}_{v}/F_{v})}\widehat{A^{\prime}}\). We have \(\widehat{\mathfrak{u}}=\prod_{\operatorname{Gal}(E^{\prime}_{v}/F_{v})} \widehat{\mathfrak{u}^{\prime}}\). Since \(\lambda\in X^{*}(A)\otimes\mathbb{C}\) we get that \(\widehat{t}\in\widehat{A}\) is mapped to a diagonal element \(\operatorname{diag}(\widehat{t}^{\prime},\widehat{t}^{\prime},\cdots,\widehat {t}^{\prime})\in\prod_{\operatorname{Gal}(E^{\prime}_{v}/F_{v})}\widehat{A^ {\prime}}\) under the identification above. 
\[I-\mathrm{Ad}(\sigma\hat{t})=\left(\begin{array}{ _converges for any \(\lambda\in X^{*}(A_{0})\otimes\mathbb{C}\) with \(\operatorname{Re}(\langle\lambda,\alpha^{\vee}\rangle)>0\) for all \(\alpha\in\Pi^{\prime}_{+}(P)\). There exists a constant depending on the choice of Haar measure and up to this constant the value is_ \[\prod_{\alpha\in\Pi^{\prime}_{+}(P)}\int\limits_{\overline{N}(\alpha)(F_{v})} \Phi^{\lambda(\alpha)}(\bar{n}_{v})d\bar{n}_{v}.\] _If the semisimple group \(G\) is the local place of a semisimple group defined over a global field and if the Haar measure is deduced from the Tamagawa measure then this constant is 1 for almost all places \(v\)._ As an application of the above we have a straightforward generalization of Proposition 3.9. We refer the interested readers to [10]. #### 3.2.2. **Proof of Theorem 3.3** The proof of theorem will be completed in two steps. First step is via explicit computations for \(F\)-rank one groups, and the second step is using the method of Bhanu-Murthy and Gindikin-Karpelevitch for reduction of higher rank case to that of rank one groups. #### 3.2.2.1. **Case of rank one groups** In the relative rank one groups there are four cases as described below. The expression for the intertwining operators should be understood to hold upto finitely many local factors which are holomorphic in the region \(1-\epsilon<s<1+\epsilon\) according to the Lemma 3.6. For certain meromorphic functions \(f_{1}(s)\), \(f_{2}(s)\), \(f_{3}(s)\), and \(f_{4}(s)\) of \(s\in\mathbb{C}\), which are holomorphic near \(s=1\), we have the following list of intertwining operators. 1. \(G=SL_{2}\), \[M(w_{0},s\rho)=\frac{\zeta_{F}(s)}{\zeta_{F}(s+1)}=\zeta_{F}(s)f_{1}(s)\] 2. \(G=SU(3,E/F)\) where \(E\) is a quadratic extension of \(F\), \[M(w_{0},s\rho)=\zeta_{E}(s)f_{2}(s)\] 3. \(G=\operatorname{Res}_{E^{\prime}/F}(SL_{2})\) then it follows from the proof of step 3 in Theorem 3.7 that \[M(w_{0},s\rho)=\frac{\zeta_{E}(s)}{\zeta_{E}(s+1)}=\zeta_{E}(s)f_{3}(s)\] 4. \(G=\operatorname{Res}_{E^{\prime}/F}SU(3,E/F)\) where \(E\) is a quadratic extension of \(E^{\prime}\) \[M(w_{0},s\rho)=\zeta_{E^{\prime}}(s)f_{4}(s).\] It is clear from the above list that the theorem holds for the \(F\)-rank one groups. #### 3.2.2.2. **Case of higher rank groups** We denote by \(M^{G(\alpha)}(w_{0},\lambda)\) the intertwining operator for the \(F\)-rank one semisimple simply connected group \(G(\alpha)\subset G\) where \(w_{0}\) is the largest element in the Weyl group of \(G(\alpha)\). Writing \(\lambda=(s_{1},s_{2},\ldots,s_{r})\) in the coordinate system given by \(\xi\) (refer (3)), Theorem 3.10 implies the following equality upto a scalar \[M(w_{0},\lambda)=\prod_{\begin{subarray}{c}\alpha_{i}\text{ positive}\\ \text{and simple}\end{subarray}}M^{G(\alpha)}(w_{0},\lambda|_{G(\alpha)})\prod_{ \begin{subarray}{c}\alpha\text{ positive, indi-}\\ \text{visible and nonsimple}\end{subarray}}M^{G(\alpha)}(w_{0},\lambda|_{G( \alpha)})\] For \(s_{i}\) in the region \(1-\epsilon<s_{i}<1+\epsilon\), the poles of \(M(w_{0},(s_{i}))\) are determined by the poles of the operators on the right hand side. In the case when \(\alpha=\alpha_{i}\) is a positive simple root then \(\lambda|_{G(\alpha_{i})}=s_{i}\). If \(\alpha\) is not a simple root then \(\Re(\lambda|_{G(\alpha)})\) lies outside the domain \((1-\epsilon,1+\epsilon)\). Note that \(G(\alpha)\) is isomorphic to one of the four cases discussed above upto central isogeny. 
Reading the poles of the intertwining operators on the right hand side from the list for rank one groups, we can see that \(M(w_{0},\lambda)\) has simple poles along the hyperplanes \(s_{i}=1\) when \((\Re(s_{1}),\Re(s_{2}),\ldots,\Re(s_{r}))\in(1-\epsilon,1+\epsilon)^{r}\). The second part follows by restricting to the case of \(\lambda=s\rho=(s,s,\ldots,s)\). ### Prerequisites for the computation For any \(h\in\mathcal{H}(G)\), the Hecke algebra, we define \[T_{h}(\theta_{\varphi})(g):=\int\limits_{A(F)\backslash A(\mathbb{A})}h(a^{-1}) \theta_{\varphi}(ag)\overline{da}.\] The operator \(T_{h}\) enjoys the following property as can be seen from the integral representation above. **Proposition 3.11**.: _The operator \(T_{h}\) defines a self-adjoint bounded operator on the closed subspace of \(L^{2}(G(F)\backslash\mathbb{G},\overline{\mu})\) generated by the function \(\theta_{\varphi}\), such that if \(\widehat{\psi}(\lambda)=\widehat{h}(\lambda)\widehat{\varphi}(\lambda)\), then \(\theta_{\psi}=T_{h}(\theta_{\varphi})\). The norm of \(T_{h}\) is bounded above by \(\widehat{h}(\rho)\)._ Proof.: The existence of the operator \(T_{h}\) follows from [12, Lemma pp.136]. Let \(\mathcal{E}^{\vee}\) be the closure of the subspace of \(L^{2}(G(F)\backslash\mathbb{G})\) generated by the pseudo-Eisenstein series \(\theta_{\varphi}\) where \(\varphi\) is a compactly supported function on \(A(\mathbb{A})/A(F)\). Then the constant function belongs to \(\mathcal{E}^{\vee}\) (See [12, Ch. II, SS1.12]). The main theorem of this section is the computation of the projection of the pseudo-Eisenstein series \(\theta_{\varphi}\) onto the constant function. Choose \(h\in\mathcal{H}(G)\) as below and consider the positive normal operator \(T:=T_{h}\circ(T_{h})^{*}/(\widehat{h}(\rho))^{2}\). 1. Choose a place \(v_{0}\notin S\) : via Satake isomorphism there exists \(h_{v_{0}}\in\mathcal{H}(G\times_{F}F_{v_{0}})\) such that it's Fourier transform satisfies \(\widehat{h}_{v_{0}}(s_{v_{0}})=\sum_{w\in W_{F}}(\sharp k(v_{0}))^{-\langle \rho,ws_{v_{0}}\rangle}\). 2. At places \(v\neq v_{0}\) define \(h_{v}\) to be the characteristic function of \(K_{v}\). Following Harder, we prove: **Theorem 3.12**.: _The sequence of positive normal operators \(T^{n}:\mathcal{E}^{\vee}\to\mathcal{E}^{\vee}\) converges to the operator \(P:\mathcal{E}^{\vee}\to\mathcal{E}^{\vee}\) which is the projection onto the constant functions. Explicitly_ \[P(\theta_{\varphi})=c\,\log(q)^{r}\mathrm{res}_{s=1}E(g,s\rho)\widehat{\varphi }(s\rho)=cc^{\prime}\,\log(q)^{r}\lim_{s\to 1}(s-1)^{r}M(w_{0},s\rho) \widehat{\varphi}(s\rho),\] _where \(c\) and \(c^{\prime}\) are the constants satisfying \(d\lambda|_{\frac{X^{*}(A)\otimes\mathbb{R}}{2\pi/\log(q)X^{*}(A)}}=c\left( \frac{\log q}{2\pi}\right)^{r}dz_{1}\wedge\cdots\wedge dz_{r}\) and \(\mathrm{res}_{\lambda=\rho}(E(x,\lambda)\widehat{\varphi}(\lambda))=c^{\prime }\lim_{s\to 1}(s-1)^{r}M(w_{0},s\rho)\widehat{\varphi}(s\rho)\)._ Proof.: We have the following equality from the above theorem \[T^{n}(\theta_{\varphi})(g)=\int_{\Lambda_{\sigma}(A)}E(g,\lambda)\widehat{h}( \lambda)^{2n}\widehat{h}(\rho)^{-2n}\widehat{\varphi}(\lambda)d\lambda.\] Note that the residue of the Eisenstein series \(E(g,\widehat{\varphi}(\lambda))\) at \(\lambda=\rho\) is a constant function. The proof henceforth is completely analogous to the proof given in [13, pp. 301, 303]. We summarize the main steps below. 
In the equations below \(\sigma^{\prime}\) is a real quasi-character such that \(\sigma^{\prime}_{i}<1\) for some \(i\) where \(\sigma^{\prime}_{i}\) are the coordinates of \(\sigma^{\prime}\) in the coordinate system given by \(\xi\), and \(\widetilde{E}(g,\lambda)\) denotes the Eisenstein series or its residue.

\[T^{n}(\theta_{\varphi})(g) =c\,\log(q)^{r}\,\,\mathrm{res}_{\lambda=\rho}(E(g,\lambda)\widehat{\varphi}(\lambda))+\sum T^{n}\left(\int\limits_{\Lambda_{\sigma^{\prime}}(A)}\widetilde{E}(g,\lambda)\widehat{\varphi}(\lambda)d\lambda\right)\]
\[=c\,\log(q)^{r}\,\,\mathrm{res}_{\lambda=\rho}(E(g,\lambda)\widehat{\varphi}(\lambda))+\sum\left(\int\limits_{\Lambda_{\sigma^{\prime}}(A)}\widetilde{E}(g,\lambda)\widehat{\varphi}(\lambda)\left(\frac{\widehat{h}(\lambda)}{\widehat{h}(\rho)}\right)^{2n}d\lambda\right)\]

Note that for \(\lambda\in\Lambda_{\sigma^{\prime}}(A)\), the inequality \(\widehat{h}(\lambda)<\widehat{h}(\rho)\) holds (See Lemma C.1). Hence we get

\[\lim_{n\to\infty}T^{n}(\theta_{\varphi})=c\,\log(q)^{r}\,\,\mathrm{res}_{\lambda=\rho}(E(x,\lambda)\widehat{\varphi}(\lambda)). \tag{9}\]

The above limit and the equality are to be understood as pointwise convergence. Proposition 3.11 implies that the spectrum of the self-adjoint positive operator \(T\) is concentrated on \([0,1]\) and hence \(T^{n}\to P\) where \(P\) is the projection onto the subspace

\[\{e\in\mathcal{E}^{\vee}\mid Te=e\}.\]

This observation of Harder coupled with the pointwise convergence result from equality (9) implies that the equality (9) in fact holds in \(L^{2}(G(F)\backslash G(\mathbb{A}))\). This finishes the proof of the first equality. Following the arguments in [10, pp.289, 290] we get that \(\mathrm{res}_{s=1}E(g,s\rho)\widehat{\varphi}(s\rho)=q^{\dim(N)(1-g)}\mathrm{res}_{s=1}E^{B}(g,s\rho)\widehat{\varphi}(s\rho)\). Now using the formula for the constant term from Lemma 3.1 and observing that the intertwining operators \(M(w,s\rho)\) have poles of order \(<r\) for \(w\neq w_{0}\) we get the second equality with \(c^{\prime}=q^{\dim(N)(1-g)}\).

### A final computation

We will complete the proof of the Weil conjecture in the case of quasi-split groups over function fields in this section. We begin with the equality

\[P\theta_{\varphi}= \,cc^{\prime}\log(q)^{r}\,\lim_{t\to 1}(t-1)^{r}M(w_{0},t\rho)\widehat{\varphi}(t\rho)\]
\[= \,cc^{\prime}\,\log(q)^{r}\,\mathrm{res}_{s=1}(L^{S}(s,X^{*}(A)))\lim_{t\to 1}\frac{M(w_{0},t\rho)\widehat{\varphi}(t\rho)}{L(t,A)}\]
\[= \,cc^{\prime}\,\log(q)^{r}\,\mathrm{res}_{s=1}(L^{S}(s,X^{*}(A)))\lim_{t\to 1}\frac{M^{S}(w_{0},t\rho)\widehat{\varphi^{S}}(t\rho)}{L^{S}(t,X^{*}(A))}\prod_{v\in S}M_{v}(w_{0},\rho)\widehat{\varphi_{v}}(t\rho)\]
\[= \,cc^{\prime}\,\log(q)^{r}\,\mathrm{res}_{s=1}(L^{S}(s,X^{*}(A)))\left(\prod_{v\notin S}\mathrm{vol}(K_{v})\widehat{\varphi^{S}}(t\rho)\right)\left(\kappa\left(\prod_{v\notin S}\mathrm{vol}(K_{v})\right)^{-1}\prod_{v\in S}\widehat{\varphi_{v}}(t\rho)\right)\]
\[= \,cc^{\prime}\,\log(q)^{r}\,\mathrm{res}_{s=1}(L^{S}(s,X^{*}(A)))\kappa\widehat{\varphi}(\rho).\]

Since \(P\) is the projection operator onto the constants we have the equality \((\theta_{\varphi},1)=(P\theta_{\varphi},1)\).
The right hand side equals \(q^{-\dim(G)(1-g)}\tau(G)cc^{\prime}\,\log(q)^{r}\mathrm{res}_{s=1}(L(s,X^{*}(A )))\kappa\widehat{\varphi}(\rho)\). Since we can surely have functions \(\varphi\) with \(\widehat{\varphi}(\rho)\neq 0\), we get the equality \[\tau(G)=\frac{q^{(\dim(G)-\dim(N))(1-g)}}{cc^{\prime}\,\log(q)^{r}\mathrm{res} _{s=1}(L(s,X^{*}(A)))}=\tau(A).\] The last equality follows from the explicit value of \(c\) obtained in SS2.4.1 and of \(c^{\prime}\) obtained in the proof of Theorem 3.12. We know from [11, Ch.II, Theorem 1.3(d)] that \(\tau(\mathrm{Res}_{E/F}(\mathbb{G}_{m}))=\tau(\mathbb{G}_{m})\) and hence \(\tau(A)=1\). Using Lemma SSB and the fact that the Tamagawa number of split tori is \(1\) we get that \[\tau(G)=\tau(A)=1.\] ## Appendix A Dual groups and restriction of scalars Let \(E\supset E^{\prime}\supset F\) be a tower of unramified extensions of local fields. Let \(A^{\prime}\) be a torus defined over \(E^{\prime}\) which splits over \(E\) and consider \(A=\mathrm{Res}_{E^{\prime}/F}A^{\prime}\). We have the \(\mathrm{Gal}(E/E^{\prime})\)-equivariant isomorphism \[\widehat{A}^{\mathrm{Gal}(E/E^{\prime})}\cong\prod_{\mathrm{Gal}(E^{\prime}/F )}\widehat{A^{\prime}}^{\mathrm{Gal}(E/E^{\prime})} \tag{10}\] and the action of \(\mathrm{Gal}(E^{\prime}/F)\) is given by permuting the indices. Hence \[\widehat{A}^{\mathrm{Gal}(E/F)}\cong\widehat{A^{\prime}}^{\mathrm{Gal}(E/E^{ \prime})}.\] The inclusion \(\widehat{A^{\prime}}^{\operatorname{Gal}(E/E^{\prime})}\hookrightarrow\widehat{A}^{ \operatorname{Gal}(E/E^{\prime})}\) can be identified under the isomorphism (10) with the diagonal embedding \(\widehat{A^{\prime}}^{\operatorname{Gal}(E/E^{\prime})}\hookrightarrow\prod_{ \operatorname{Gal}(E^{\prime}/F)}\widehat{A^{\prime}}^{\operatorname{Gal}(E/E^ {\prime})}\). Define the map \(\eta:\widehat{A}(\mathbb{C})\to X^{*}(A\times\bar{F})\otimes\mathbb{C}\) by the condition that \(\mu(\widehat{t})=|\pi_{F}|^{(\eta(\widehat{t}),\mu)}\) for all \(\mu\in X_{*}(A\times\bar{F})\). Similarly, we may define \(\eta^{\prime}:\widehat{A^{\prime}}(\mathbb{C})\to X^{*}(A^{\prime}\times\bar{F })\otimes\mathbb{C}\). **Lemma A.1**.: _Let \(\widehat{t}\in\widehat{A}^{\operatorname{Gal}(E^{\prime}/F)}\), \(\widehat{t}^{\prime}\in\widehat{A^{\prime}}^{\operatorname{Gal}(E/E^{\prime})}\) be such that under the isomorphism (10) we have \(\widehat{t}=(\widehat{t}^{\prime},\widehat{t}^{\prime},\ldots,\widehat{t}^{ \prime})\). Further assume that \(\lambda=\eta(\widehat{t})\) and \(\operatorname{Nm}_{E^{\prime}/F}(\lambda^{\prime})=\lambda\), then \(\eta^{\prime}(\widehat{t}^{\prime^{n}})=\lambda^{\prime}\)._ Proof.: Note that there is the following commutative diagram (11) where the left most vertical arrow is an isomorphism given by the adjunction of restriction and extension of scalars. For \(\widehat{t}\), \(\lambda\), \(\lambda^{\prime}\) as in the statement of the lemma, and \(\mu\in X_{*}(A)\), \[\mu(\widehat{t})=|\pi_{F}|^{(\eta(\widehat{t}),\mu)}=|\pi_{F}|^{(\operatorname {Nm}_{E^{\prime}/F}(\lambda^{\prime}),\mu)}=|\pi_{F}|^{n(\lambda^{\prime},\mu) }=|\pi_{E^{\prime}}|^{(\lambda^{\prime},\mu)}.\] Recall that \(\widehat{t}=(\widehat{t}^{\prime},\widehat{t}^{\prime},\ldots,\widehat{t}^{ \prime})\), hence \(\mu(\widehat{t})=\mu(\widehat{t}^{\prime})^{n}\) for any \(\mu\in X_{*}(A)=X_{*}(A^{\prime})\). Thus, we get \(|\pi_{E^{\prime}}|^{(\lambda^{\prime},\mu)}=\mu(\widehat{t}^{\prime^{n}})\). 
Hence by definition of \(\eta^{\prime}\) we get \(\eta^{\prime}(\widehat{t}^{\prime^{n}})=\lambda^{\prime}\). ## Appendix B Quasi-split tori in simply connected groups We state the following lemma from [12, Lemma 6.1.2] for the sake of completeness **Lemma B.1**.: _Suppose \(G\) is a simply connected quasi-split group over a field \(F\). Let \(A\) be a maximal torus defined over \(F\) which is contained in a Borel subgroup defined over \(F\). Then \(A\) is a product of tori of the form \(\operatorname{Res}_{E_{i}/F}\mathbb{G}_{m}\), where \(E_{i}/F\) are finite separable extension of \(F\)._ Proof.: Let \(X^{*}(A\times F^{sep})\) be the set of characters of \(A\) defined over \(F^{sep}\). Then the Galois group \(\operatorname{Gal}(F^{sep}/F)\) acts on the group \(X^{*}(A\times F^{sep})\). When \(G\) is quasi-split the restriction map \(\Pi_{F^{sep}}\to\Pi_{F}\) is surjective and the fibers are exactly the \(\operatorname{Gal}(F^{sep}/F)\)-orbits. This implies that the set of absolute simple roots restricting to a given relative simple root is permuted by the Galois group \(\operatorname{Gal}(F^{sep}/F)\). We may use [12, Exercise 13.1.5(4)] to conclude the lemma. ## Appendix C A lemma **Lemma C.1**.: _The inequality \(\widehat{h}(\lambda)<\widehat{h}(\rho)\) holds._ Proof.: The proof follows as in [12, Lemma 3.2.3] which in turn depends on [12, Lemma 3.2.1(ii)]. We need only prove an analogous result to the latter Lemma for the quasi-split case. That is, to show that \[\{\sigma=(\sigma_{i})\mid 1-\epsilon<\Re(\sigma_{i})\leq 1\;\forall\;i\}\subset \{\sigma\mid\Re(\sigma)\in\operatorname{ConvHull}(W_{F}\cdot\rho)\}.\] Note that the restriction map \(X^{*}(A\times\bar{F})\twoheadrightarrow X^{*}(A)\) in our chosen coordinate system (3) can be identified with the map 'average over the Galois orbits'. This is a convex map and hence preserves convex domains. Since the lemma is known for the convex hull of the Weyl conjugate of \(\rho\) in \(X^{*}(A\times\bar{F})\), the lemma follows in the quasi-split case as well.
2309.06614
The Right Angled Artin Group Functor as a Categorical Embedding
It has long been known that the combinatorial properties of a graph $\Gamma$ are closely related to the group theoretic properties of its right angled artin group (raag). It's natural to ask if the graph homomorphisms are similarly related to the group homomorphisms between two raags. The main result of this paper shows that there is a purely algebraic way to characterize the raags amongst groups, and the graph homomorphisms amongst the group homomorphisms. As a corollary we present a new algorithm for recovering $\Gamma$ from its raag.
Chris Grossack
2023-09-12T22:03:01Z
http://arxiv.org/abs/2309.06614v2
# The Right Angled Artin Group Functor as a Categorical Embedding ###### Abstract It has long been known that the combinatorial properties of a graph \(\Gamma\) are closely related to the group theoretic properties of its _right angled artin group_ (raag). It's natural to ask if the graph _homomorphisms_ are similarly related to the group homomorphisms between two raags. The main result of this paper shows that there is a purely algebraic way to characterize the raags amongst groups, and the graph homomorphisms amongst the group homomorphisms. As a corollary we present a new algorithm for recovering \(\Gamma\) from its raag. ## 1 Introduction For us, a _graph_\(\Gamma\) with underlying vertex set \(V\) is a symmetric, reflexive relation on \(V\). A _graph homomorphism_ from a graph \((V,\Gamma)\) to \((W,\Delta)\) is a function \(\varphi:V\to W\) so that \((v_{1},v_{2})\in\Gamma\implies(\varphi v_{1},\varphi v_{2})\in\Delta\). These assemble into a category, which we call \(\mathsf{Gph}\). Given a graph \(\Gamma\) with vertex set \(V\), we can form a group \(A\Gamma\), the _right angled artin group_ (raag) associated to \(\Gamma\), defined as \[A\Gamma\triangleq\langle v\in V\mid[v_{1},v_{2}]=1\text{ whenever }(v_{1},v_{2})\in\Gamma\rangle.\] For example, if \(K_{n}\) is a complete graph on \(n\) vertices then \(AK_{n}\cong\mathbb{Z}^{n}\). If \(\Delta_{n}\) is a discrete graph on \(n\) vertices \(A\Delta_{n}\cong\mathbb{F}_{n}\) is a free group on \(n\) generators. If \(\square\) is the graph with \(4\) vertices \(a,b,c,d\) and four edges \((a,b)\), \((b,c)\), \((c,d)\), and \((d,a)\) then \(A\square\cong\langle a,c\rangle\times\langle b,d\rangle\cong\mathbb{F}_{2}\times \mathbb{F}_{2}\). In this sense, raags allow us to _interpolate_ between free and free abelian groups. Raags are of particular interest to geometric group theorists because of their connections to the fundamental groups of closed hyperbolic \(3\)-manifolds [48] and to the mapping class groups of hyperbolic surfaces [33]. Moreover, raags were instrumental in the resolution of the Virtual Haken Conjecture [4] due to their close connection with the CAT(0) geometry of cube complexes. See [9] for an overview. Importantly, the combinatorial structure of \(\Gamma\) is closely related to the algebraic structure of \(A\Gamma\), with useful information flow in both directions. For instance, the cohomology of \(A\Gamma\) is the _exterior face algebra_ of \(\Gamma\)[45], \(A\Gamma\) factors as a direct product if and only if \(\Gamma\) factors as a join of two graphs [47], and we can compute the Bieri-Neumann-Strebel invariant \(\Sigma^{1}(A\Gamma)\) from just information in \(\Gamma\)[39]. This correspondence can be pushed remarkably far, and recently it was shown that _expander graphs1_ can be recognized from the cohomology of their raags [23]! For more information about the close connection between the combinatorics of \(\Gamma\) and the algebra of \(A\Gamma\), see [23, 36]. Footnote 1: which are really sequences of graphs With this context, it is natural to ask whether the combinatorics of graph homomorphisms are _also_ closely connected to the algebra of group homomorphisms between raags. For a particular example, one might ask if there is a purely algebraic way to recognize when a group homomorphism between raags is \(A\varphi\) for some homomorphism \(\varphi\) of their underlying graphs. The main result of this paper shows that the answer is _yes_ in a very strong sense. 
We prove that the raag functor \(A\) is an equivalence between the category of graphs \(\mathsf{Gph}\) and the category of groups equipped with a coalgebra structure2 that we will describe shortly. As corollaries, we obtain a new way of recognizing the raags amongst the groups, and the graph homomorphisms amongst the group homomorphisms. This moreover gives a new algorithm for recovering the underlying graph of a raag from nothing but its isomorphism type.

Footnote 2: A kind of _descent data_

Crucial for the proof of this theorem is the fact that \(A:\mathsf{Gph}\to\mathsf{Grp}\) has a right adjoint, the _commutation graph_ functor \(C:\mathsf{Grp}\to\mathsf{Gph}\) which sends a group \(G\) to the graph whose vertices are elements of \(G\) and where \((g_{1},g_{2})\in CG\iff[g_{1},g_{2}]=1\). This is surely well known to experts3 but is not often mentioned in the literature. This is likely because of the common convention that graphs have no self loops, whereas the adjunction requires us to work with graphs with a self loop at each vertex. Of course, this does not appreciably change the combinatorics, and we feel it is a small price to pay for the categorical clarity this adjunction provides.

Footnote 3: It’s implicit in the “universal property of raags” given in [34], for instance, and is stated as such in [47]

Unsurprisingly, the commutation graph and related constructions have already been of interest to combinatorialists for many years [7, 19, 26, 5, 14], and the complement of the commutation graph was even the subject of a (now proven) conjecture of Erdos [43]. With the commutation graph functor \(C\) defined, we can state the main result of this paper:

**Theorem**.: _The right angled artin group functor \(A:\mathsf{Gph}\to\mathsf{Grp}\) is comonadic._

_That is, \(A\) is an equivalence of categories between \(\mathsf{Gph}\) and the category \(\mathsf{Grp}_{AC}\) of groups equipped with an \(AC\)-coalgebra structure, and group homomorphisms that are moreover \(AC\)-cohomomorphisms._

The group \(ACG\) is freely generated by symbols \([g]\) for each \(g\in G\), with relations saying \([g][h]=[h][g]\) in \(ACG\) if and only if \(gh=hg\) in \(G\). Write \(\epsilon_{G}:ACG\to G\) for the map sending each \([g]\mapsto g\). Additionally, write \(\delta:ACG\to AC(ACG)\) for the map sending each \([g]\mapsto[[g]]\). Now merely unwinding the category theoretic definitions gives the following corollary:

**Corollary** (Main Corollary).: _An abstract group \(G\) is isomorphic to a raag if and only if it admits a group homomorphism \(\mathfrak{g}:G\to ACG\) so that the following two diagrams commute:_

\[\begin{array}{ccc}G&\xrightarrow{\ \mathfrak{g}\ }&ACG\\ &{\scriptstyle 1_{G}}\searrow&\downarrow{\scriptstyle\epsilon_{G}}\\ &&G\end{array}\qquad\qquad\begin{array}{ccc}G&\xrightarrow{\ \mathfrak{g}\ }&ACG\\ {\scriptstyle\mathfrak{g}}\downarrow&&\downarrow{\scriptstyle\delta_{G}}\\ ACG&\xrightarrow{\ AC\mathfrak{g}\ }&AC(ACG)\end{array}\]

_Moreover, a group homomorphism \(f:G\to H\) between raags is \(A\varphi\) for some graph homomorphism \(\varphi\) if and only if it respects these structure maps in the sense that_

\[\begin{array}{ccc}G&\xrightarrow{\ f\ }&H\\ {\scriptstyle\mathfrak{g}}\downarrow&&\downarrow{\scriptstyle\mathfrak{h}}\\ ACG&\xrightarrow{\ ACf\ }&ACH\end{array}\]

_commutes._

**Remark**.: _In particular, there is a purely algebraic way to recognize the raags amongst the groups and the image of the graph homomorphisms amongst the group homomorphisms between raags._

_This additionally gives us a new way to recover \(\Gamma\) from the abstract isomorphism class of \(A\Gamma\), and shows it is decidable (even efficient!) to check whether any particular group homomorphism between raags came from a graph homomorphism._

Our proof uses some category theory that might not be familiar to all readers, so in Section 3 we will briefly review the machinery of _comonadic descent_, which is the main technical tool for the proof (which is the subject of Section 4).
First, though, in Section 2 we give an example to show that category theory is not needed in order to apply our results. This section might also be of interest to those learning category theory looking for toy examples of comonadic descent, since it is usually applied in more complicated situations than this4. Lastly, in Section 5 we discuss the algorithmic consequences of the main result.

Footnote 4: Indeed, this is how the author came upon this result.

Throughout this paper, we make the notational convention that graph theoretic concepts are written with greek letters and group theoretic concepts with roman letters. The coalgebraic structure maps are written in fraktur font.

## 2 An Instructive Example

It's important to note that applying this result requires no knowledge of the deep category theory used in its proof. Let's begin with a simple example of how the result can be used to detect whether a group homomorphism came from a graph homomorphism or not.

Let \(\Gamma=\{v\}\) and \(\Delta=\{w\}\) be two one-vertex graphs. Then \(A\Gamma=\langle v\rangle\) and \(A\Delta=\langle w\rangle\), and we want to detect when a homomorphism between these groups came from a homomorphism of their underlying graphs. Recall that \(CG\), the commutation graph of \(G\), has a vertex \([g]\) for each \(g\in G\), with an edge relating \([g]\) and \([h]\) exactly when \(g\) and \(h\) commute in \(G\). So \(C\langle v\rangle\) is a complete graph on \(\mathbb{Z}\) many vertices labelled by \([v^{n}]\). Then the group \(ACG\) is freely generated by the symbols \([g]\), for \(g\in G\), subject to relations saying \([g][h]=[h][g]\) in \(ACG\) if and only if \(gh=hg\) in \(G\). So \(AC\langle v\rangle\) is the free abelian group with generators \([v^{n}]\).

It's not hard to see that the map \(\mathfrak{v}:\langle v\rangle\to AC\langle v\rangle\) sending \(v\mapsto[v^{1}]\) satisfies the axioms from the Main Corollary. The existence of such a \(\mathfrak{v}\) tells us that \(\langle v\rangle\) must be a raag, which it is5.

Footnote 5: More generally, if we know \(\Gamma\), then the map \(A\Gamma\to AC(A\Gamma)\) sending each generator \(\gamma\mapsto[\gamma]\) will always satisfy the axioms.

Let's first look at a map that _does_ come from a graph homomorphism, for instance \(f:\langle v\rangle\to\langle w\rangle\) given by \(fv=w\). The corollary says to consult the following square:

\[\begin{array}{ccc}\langle v\rangle&\xrightarrow{\ v\mapsto w\ }&\langle w\rangle\\ {\scriptstyle v\mapsto[v^{1}]}\downarrow&&\downarrow{\scriptstyle w\mapsto[w^{1}]}\\ AC\langle v\rangle&\xrightarrow{\ [v^{n}]\mapsto[w^{n}]\ }&AC\langle w\rangle\end{array}\]

and since this is quickly seen to commute, we learn that \(f\) is of the form \(A\varphi\) for some graph homomorphism (as indeed it is).

Next, let's look at a map which _doesn't_ come from a graph homomorphism, like \(f:\langle v\rangle\to\langle w\rangle\) given by \(fv=w^{2}\). Now our square is

\[\begin{array}{ccc}\langle v\rangle&\xrightarrow{\ v\mapsto w^{2}\ }&\langle w\rangle\\ {\scriptstyle v\mapsto[v^{1}]}\downarrow&&\downarrow{\scriptstyle w\mapsto[w^{1}]}\\ \langle v^{n}\mid[v^{n},v^{m}]=1\rangle&\xrightarrow{\ [v^{n}]\mapsto[w^{2n}]\ }&\langle w^{n}\mid[w^{n},w^{m}]=1\rangle\end{array}\]

which does _not_ commute (even though it seems to at first glance). Indeed, if we chase the image of \(v\) around the top right of the square, then we see

\[v\mapsto w^{2}\mapsto[w^{1}]^{2}\]

If instead we chase around the lower left of the square, we get:

\[v\mapsto[v^{1}]\mapsto[w^{2}]\]

Since \([w^{1}]^{2}\neq[w^{2}]\) in this group (recall \(AC\langle w\rangle\) is freely generated by the symbols \([w^{n}]\)), we have successfully detected that \(f\) did _not_ come from a graph homomorphism! Importantly, this same approach works even if we merely know the coalgebra structures on \(G\) and \(H\).
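To make the preceding check concrete, here is a minimal computational sketch (purely illustrative, not part of the results of this paper), assuming we record an element of the free abelian group \(AC\langle w\rangle\) by the exponents of its generators \([w^{n}]\); the helper names `chase_top_right`, `chase_bottom_left`, and `square_commutes` are invented for this illustration.

```python
from collections import Counter

# Elements of AC<w> (free abelian on the symbols [w^n]) are recorded as
# Counters sending n to the exponent of the generator [w^n].

def chase_top_right(k: int) -> Counter:
    # v |-> f(v) = w^k |-> structure map of <w>: w^k |-> [w^1]^k
    return Counter({1: k}) if k != 0 else Counter()

def chase_bottom_left(k: int) -> Counter:
    # v |-> [v^1] |-> (ACf)([v^1]) = [f(v)] = [w^k]
    return Counter({k: 1})

def square_commutes(k: int) -> bool:
    # Does the cohomomorphism square for f(v) = w^k commute on the generator v?
    return chase_top_right(k) == chase_bottom_left(k)

assert square_commutes(1)        # f(v) = w comes from a graph homomorphism
assert not square_commutes(2)    # f(v) = w^2 does not
assert not square_commutes(-1)   # nor does f(v) = w^{-1}
```

Of course in this tiny example the answer is clear by hand; the point is only that the check is a finite computation on generators.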
Thus we don't need to know their underlying graphs to detect the graph homomorphisms6!

Footnote 6: Though we will see later that the coalgebra structure actually lets us recover the underlying graphs as well.

As a last aside, let's mention what the structure map \(\mathfrak{g}:G\to ACG\) does. Elements of \(ACG\) are formal words in the elements of \(G\). Then, intuitively, \(\mathfrak{g}(g)=[\gamma_{1}][\gamma_{2}]\cdots[\gamma_{k}]\) decomposes \(g\) as a formal product of the vertices making up \(g\). This means that we can recover the vertices of \(\Gamma\) as those \(g\) so that \(\mathfrak{g}(g)=[g]\) is a word of length \(1\), as we prove in Section 5.

## 3 A Brief Review of Comonadic Descent

Recall that an _adjunction_ \((L:\mathcal{C}\rightarrow\mathcal{D})\dashv(R:\mathcal{D}\rightarrow\mathcal{C})\) is a pair of functors equipped with a natural isomorphism

\[\operatorname{Hom}_{\mathcal{D}}(LC,D)\cong\operatorname{Hom}_{\mathcal{C}}(C,RD).\]

Of particular interest for us is the adjunction \(A\dashv C\) specifying the universal property of raags.

Recall moreover that a _comonad_ \(W:\mathcal{D}\to\mathcal{D}\) is a functor equipped with natural transformations \(\epsilon:W\Rightarrow 1_{\mathcal{D}}\) and \(\delta:W\Rightarrow WW\) so that the counit and coassociativity diagrams of natural transformations commute, that is,

\[\epsilon W\circ\delta=1_{W}=W\epsilon\circ\delta\qquad\text{and}\qquad\delta W\circ\delta=W\delta\circ\delta.\]

A _coalgebra_ for \(W\) is an object \(X\in\mathcal{D}\) together with a structure map \(\mathfrak{x}:X\to WX\) satisfying the dual conditions \(\epsilon_{X}\circ\mathfrak{x}=1_{X}\) and \(W\mathfrak{x}\circ\mathfrak{x}=\delta_{X}\circ\mathfrak{x}\), and a _cohomomorphism_ of coalgebras is an arrow of \(\mathcal{D}\) commuting with the structure maps; these assemble into a category \(\mathcal{D}_{W}\). For any adjunction \(L\dashv R\) with unit \(\eta:1_{\mathcal{C}}\Rightarrow RL\), the composite \(LR\) is a comonad, every \(LX\in\mathcal{D}\) is a \(LR\)-coalgebra, where the structure map is given by \(L\eta_{X}:LX\to LRLX\), and every \(L\varphi\) is a \(LR\)-cohomomorphism.

In our special case, \(\eta:\Gamma\to CA\Gamma\) is the map sending each \(v\in\Gamma\) to \(v^{1}\in CA\Gamma\). Then the above says that the functor \(A:\mathsf{Gph}\to\mathsf{Grp}\) factors through the category of coalgebras \(\mathsf{Grp}_{AC}\) as follows:

\[\mathsf{Gph}\xrightarrow{\ A\ }\mathsf{Grp}_{AC}\xrightarrow{\ U\ }\mathsf{Grp},\qquad\Gamma\mapsto(A\Gamma,A\eta_{\Gamma})\mapsto A\Gamma.\]
**Theorem** (Beck, 1968).: _To show that a left adjoint \((L:\mathcal{C}\to\mathcal{D})\dashv(R:\mathcal{D}\to\mathcal{C})\) witnesses \(L\) as an equivalence of categories \(\mathcal{C}\simeq\mathcal{D}_{LR}\)8, it suffices to show_

Footnote 8: Such an adjunction \(L\dashv R\) is called _comonadic_.

1. \(L\) _reflects isomorphisms (that is, whenever_ \(L\varphi:L\Gamma\cong L\Delta\) _is an isomorphism in_ \(\mathcal{D}\)_, then_ \(\varphi\) _must have already been an isomorphism in_ \(\mathcal{C}\)_)_
2. \(\mathcal{C}\) _has, and_ \(L\) _preserves, equalizers of coreflexive pairs_9__

Footnote 9: We will recall the definition of a coreflexive pair in section 4

This gives us our outline for proving the main theorem:

**Theorem** (Main Theorem).: _The right angled artin group functor \(A:\mathsf{Gph}\to\mathsf{Grp}\) restricts to an equivalence of categories \(A:\mathsf{Gph}\simeq\mathsf{Grp}_{AC}\) between the category of graphs and the full subcategory of groups equipped with an \(AC\)-coalgebra structure._

Proof.: By Beck's comonadicity theorem, it suffices to check the two conditions above. Condition (1) is a classical result due to Droms [20], so it remains to check (2). It's well known that \(\mathsf{Gph}\) is complete10, and thus has all equalizers.

Footnote 10: one quick way to see this is to note that it's _topologically concrete_ in the sense of [2]

In the next section we'll recall the definition of a coreflexive pair, and show that \(A\) really does preserve their equalizers. This will complete the proof.
## 4 The Raag Functor Preserves Equalizers of Coreflective Pairs A _coreflexive pair_ is a pair of arrows with a common retract. That is, a diagram where \(\rho\alpha=1_{\Gamma}=\rho\beta\). Now, we want to show that if \(\Theta\) is the equalizer of \(\alpha\) and \(\beta\), as computed in \(\mathsf{Gph}\), then \(A\Theta\) should still be the equalizer of \(A\alpha\) and \(A\beta\), as computed in \(\mathsf{Grp}\). For ease of notation, we will confuse \(\alpha\) and \(\beta\) with \(A\alpha\) and \(A\beta\), since \((A\alpha)(v_{1}^{n_{1}}v_{2}^{n_{2}}\cdots v_{k}^{n_{k}})=(\alpha v_{1})^{n_{ 1}}(\alpha v_{2})^{n_{2}}\cdots(\alpha v_{k})^{n_{k}}\). Now, \(\Theta\) is quickly seen to be the full subgraph of \(\Gamma\) on the vertices where \(\alpha v=\beta v\). So then \(A\Theta=\langle v\mid\alpha v=\beta v\rangle\leq A\Gamma\). If instead we compute the equalizer of \(A\alpha\) and \(A\beta\) in \(\mathsf{Grp}\), we get \(G=\{g\mid\alpha g=\beta g\}\leq A\Gamma\). So showing that \(A\Theta=G\) amounts to showing that, provided \(\alpha\) and \(\beta\) admit a common retract \(\rho\), each \(g\) with \(\alpha g=\beta g\) is a word in those vertices \(v\) with \(\alpha v=\beta v\). **Theorem 1**.: _The right angled artin group functor \(A\) preserves equalizers of coreflexive pairs_ Proof.: Since \(\rho\) is a graph homomorphism, we see that \(v\) and \(w\) are \(\Gamma\)-related if and only if \(\alpha v\) and \(\alpha w\) (equivalently \(\beta v\) and \(\beta w\), equivalently \(\alpha v\) and \(\beta w\)) are \(\Delta\)-related. Thus \(v\) and \(w\) commute in \(A\Gamma\) if and only if their images under \(\alpha\) and \(\beta\) commute in \(A\Delta\). In Theorem 3.9 of her thesis [27], Green proves that elements of \(A\Gamma\) have a normal form as words in the vertices of \(\Gamma\)11. Following the exposition of Koberda [34] and others, we call a word \(w\in A\Gamma\)_central_ if the letters in \(w\) pairwise commute. This happens if and only if the letters in \(w\) form a clique in \(\Gamma\). We say that \(w\) is in _central form_ if it is a product of central words \(w=w_{1}w_{2}\cdots w_{k}\). If we stipulate that we are "left greedy" in the sense that no letter in \(w_{i+1}\) commutes with each letter of \(w_{i}\)12, then the central form is unique up to commuting the letters in each \(w_{i}\). See also Section 3.3 of [15] for a summary. Footnote 11: In fact, she proves something slightly more general Footnote 12: so that we first make \(w_{1}\) as long as possible, then make \(w_{2}\) as long as possible, and so on Now suppose that \(\alpha g=\beta g\). Fix such a central form \(g=w_{0}w_{1}\ldots w_{k}\), and look at \[(\alpha w_{0})(\alpha w_{1})\ldots(\alpha w_{k})=(\beta w_{0})(\beta w_{1}) \ldots(\beta w_{k})\] these representations of \(\alpha g=\beta g\) are both minimal length, as we could hit a shorter representation with \(\rho\) in order to get a shorter representation for \(g\). Then uniqueness of the central form says that each \(\alpha w_{i}\) and \(\beta w_{i}\) are equal up to permuting the letters in each. We restrict attention to each \(w_{i}=\gamma_{1}^{n_{1}}\gamma_{2}^{n_{2}}\ldots\gamma_{k}^{n_{k}}\) separately, say \[(\alpha\gamma_{1}^{n_{1}})(\alpha\gamma_{2}^{n_{2}})\ldots(\alpha\gamma_{k}^{n _{k}})=\delta_{1}^{n_{1}}\delta_{2}^{n_{2}}\ldots\delta_{k}^{n_{k}}=(\beta \gamma_{1}^{n_{1}})(\beta\gamma_{2}^{n_{2}})\ldots(\beta\gamma_{k}^{n_{k}})\] If we can show that actually \(\alpha\gamma_{i}=\beta\gamma_{i}\) for each \(i\), then we'll be done. 
But \(\alpha\) and \(\beta\) give injections from \(\{\gamma_{1}\ldots,\gamma_{k}\}\) to \(\{\delta_{1},\ldots,\delta_{k}\}\), which are in fact bijections since we're dealing with finite sets of the same cardinality. Moreover, by assumption \(\rho\) provides an inverse for \(\alpha\)_and_ for \(\beta\)! Then \(\alpha\) and \(\beta\) must be the same map on this set, and in particular each \(\gamma_{i}\) satisfies \(\alpha\gamma_{i}=\beta\gamma_{i}\), as desired. ## 5 Can we Really Compute These? It is well known that the problem "is a finitely presented group \(G\) isomorphic to a raag" is undecidable. Indeed, being isomorphic to a raag is a _Markov property_ in the sense of Definition 3.1 in [41] so Theorem 3.3 in the same paper guarantees this problem is undecidable. Let's work with the next best thing, then, and suppose we're given a finitely presented group \(G\) and a promise that it _is_ a raag (though we are not given its underlying graph). How much can we learn about the combinatorics of its underlying graph from just \(G\)? First, we must find an \(AC\)-coalgebra structure on \(G\) - that is, a group homomorphism \(\mathfrak{g}:G\to ACG\) satisfying the conditions from Figure 1. Since \(ACG\) is a raag, it has solvable word problem, so we can enumerate all homomorphisms \(G\to ACG\) and check if they satisfy the axioms. We will eventually find such a \(\mathfrak{g}\) since we were promised that \(G\) is abstractly isomorphic to a raag, so this algorithm terminates. Recall also that that if we happen to already know the underlying graph that we have an explicit formula for the coalgebra structure. The unique map sending each generator \(\gamma\in A\Gamma\) to \([\gamma]\in ACA\Gamma\) always works. Once we know the coalgebra structures on \(G\) and \(H\), we can already efficiently check whether a group homomorphism \(f:G\to H\) came from a graph homomorphism. **Theorem 2**.: _Given a homomorphism \(f:G\to H\) between finitely presented groups13 where \((G,\mathfrak{g})\) and \((H,\mathfrak{h})\) are moreover \(AC\)-coalgebras, then there is an algorithm deciding whether \(f\) is \(A\varphi\) for \(\varphi\) a graph homomorphism of the graphs presenting \(G\) and \(H\)._ Footnote 13: Recall that these presentations may have nothing to do with the underlying graphs Proof.: By the equivalence \(\mathsf{Gph}\simeq\mathsf{Grp}_{AC}\), this amounts to checking if \(f\) is a cohomomorphism - that is, whether the square commutes. Of course, we can check this on the (finitely many) generators of \(G\), and the claim now follows from the fact that \(ACH\) is a raag14, and thus has solvable word problem [15]. Footnote 14: We have to be a bit careful, since \(CH\) is infinite, so that \(ACH\) is not finitely generated. However, the images of each generator of \(G\) will land in a finite subgraph of \(CH\), so we can do our computation inside the raag associated to that finite subgraph. **Corollary 1**.: _There is an algorithm to recover \(\Gamma\) from the mere isomorphism type \(G\) of \(A\Gamma\)._ Proof.: We know that the vertices of \(\Gamma\) are in bijection with graph homomorphisms from the one-vertex graph \(1\) to \(\Gamma\). By the equivalence \(\mathsf{Gph}\simeq\mathsf{Grp}_{AC}\), this amounts to cohomomorphisms \(\mathbb{Z}\to G\), which one can explicitly calculate to be those elements \(g\in G\) so that \(\mathfrak{g}(g)=[g]\). 
Since we know that the number of vertices of \(\Gamma\) is equal to the rank of the abelianization \(G^{\mathrm{ab}}\), we can keep checking elements of \(G\) to see if \(\mathfrak{g}(g)=[g]\). This algorithm terminates because once we've found \(\mathrm{rk}(G^{\mathrm{ab}})\) many such elements, we must have found all of them. Finally, we see that the following conditions are equivalent: 1. Two elements \(g_{1},g_{2}\) represent adjacent elements in \(\Gamma\) 2. \(g_{1}\) and \(g_{2}\) commute in \(G\) 3. \([g_{1}]\) and \([g_{2}]\) commute in \(ACG\) 4. There is a cohomomorphism from \(A(\bullet-\bullet)\) to \(G\) sending the two vertices to \(g_{1}\) and \(g_{2}\) ## 6 Conclusion It has been well known for some time now that the combinatorics of a graph \(\Gamma\) are reflected in the algebra of its raag \(A\Gamma\), but the question of how the combinatorics of graph homomorphisms relates to group homomorphisms between raags remains fertile ground. In this paper we've shown that the connection remains strong, by showing that the category of (reflexive) graphs embeds faithfully as an explicit subcategory of the category of groups. More speculatively, while this paper focused on the comonad \(AC:\mathsf{Grp}\to\mathsf{Grp}\), we suspect there is a future role to be played by the monad \(CA:\mathsf{Gph}\to\mathsf{Gph}\). Indeed, Kim and Koberda conjecture in [32] that embeddings \(A\Gamma\to A\Delta\) exist exactly when \(\Gamma\) embeds into a graph \(\Delta^{e}\) which they call the _extension graph_. This graph is closely related to the monad graph \(CA\Delta\) (indeed, it's the full subgraph of \(CA\Delta\) on the conjugates of generators), as we might expect since maps \(A\Gamma\to A\Delta\) are in natural bijection with maps \(\Gamma\to CA\Delta\). It would be interesting to see if category theoretic techniques can be brought to bear on this conjecture. ## Acknowledgements The author would like to thank Matt Durham, Jacob Garcia, Thomas Koberda, and Peter Samuelson for helpful conversations and encouragement during the writing of this paper.
2309.12661
Homotopy commutativity in symmetric spaces
We extend the former results of Ganea and the two of the authors with Takeda on the homotopy commutativity of the loop spaces of Hermitian symmetric spaces such that the loop spaces of all irreducible symmetric spaces but $\mathbb{C}P^3$ are not homotopy commutative.
Daisuke Kishimoto, Yuki Minowa, Toshiyuki Miyauchi, Yichen Tong
2023-09-22T07:01:46Z
http://arxiv.org/abs/2309.12661v1
# Homotopy commutativity in symmetric spaces ###### Abstract. We extend the former results of Ganea and the two of the authors with Takeda on the homotopy commutativity of the loop spaces of Hermitian symmetric spaces such that the loop spaces of all irreducible symmetric spaces but \(\mathbb{C}P^{3}\) are not homotopy commutative. Key words and phrases:homotopy commutativity, symmetric space, Samelson product, Whitehead product 2010 Mathematics Subject Classification: 55P35, 55Q15 ## 1. Introduction It is an interesting problem to determine whether or not a given H-space is homotopy commutative. The most celebrated result on this problem is Hubbuck's torus theorem [10] which states that a connected finite H-space is homotopy commutative if and only if it is homotopy equivalent to a torus. For studying this problem for infinite H-spaces, we need to fix a particular class, as there are vast classes of infinite H-spaces, each of which has its special feature. We consider the loop space of a simply-connected finite complex, which is an infinite H-space whenever the underlying finite complex is non-contractible. Ganea [7] studied the homotopy nilpotency, including the homotopy commutativity, of the loop space of a complex projective space, and showed that the loop space of \(\mathbb{C}P^{n}\) is homotopy commutative if and only if \(n=3\). Recently, Golasinski [8] studied the loop space of a homogeneous space, and in particular, he proved that complex Grassmannians are homotopy nilpotent. However, from this result, we cannot deduce the homotopy commutativity in most cases. Recall that every symmetric space decomposes into a product of irreducible symmetric spaces, and as in [5], irreducible symmetric spaces are classified such that they are the homogeneous spaces \(G/H\) in Table 1 below. Then complex projective spaces and complex Grassmanians are the irreducible symmetric spaces AIII. The symmetric spaces AIII, BDI, CI, DIII, EIII, EVII are called irreducible Hermitian symmetric spaces, which are exactly symmetric spaces having almost complex structures. Continuing the works of Ganea [7] and Golasinski [8], two of the authors and Takeda [15] studied the homotopy commutativity of the loop spaces of Hermitian symmetric spaces, and obtained that the loop spaces of all irreducible Hermitian symmetric spaces but \(\mathbb{C}P^{3}\) are not homotopy commutative. In this paper, we study the homotopy commutativity of all irreducible symmetric spaces, and extend the above result on Hermitian symmetric spaces as: **Theorem 1.1**.: _The loop spaces of all irreducible symmetric spaces but \(\mathbb{C}P^{3}\) are not homotopy commutative._ As in Lemma 2.1 below, if we can find a non-trivial Whitehead product in a path-connected space \(X\), then we can deduce that the loop space of \(X\) is not homotopy commutative. In [15], the existence of a non-trivial Whitehead product is proved by applying criteria in terms of rational homotopy theory and Steenrod operations. In this paper, these criteria will be employed too, where the rational homotopy criterion will be elaborated. However, these criteria are not applicable to the symmetric spaces AII and EIV, where they are H-spaces when localized at any odd prime. Then we will develop a new cohomological technique for showing the existence of a non-trivial Whitehead product, which applies to AII and EIV. 
**Acknowledgement**.: The authors were partially supported by JSPS KAKENHI Grant Number JP17K05248 and JP1903473 (Kishimoto), JST, the establishment of university fellowships towards the creation of science technology innovation, Grant Number JPMJFS2123 (Minowa), and JST SPRING Grant Number JPMJSP2110 (Tong). ## 2. Rational homotopy In this section, we consider the existence of a non-trivial Whitehead product by using rational homotopy theory, and apply it to the symmetric spaces EII, EV, EVI, EVIII, EIX, FI. We start by showing a sufficient condition for a loop space not being homotopy commutative. **Lemma 2.1**.: _Let \(X\) be a path-connected space. If there is a non-trivial Whitehead product in \(X\), then the loop space of \(X\) is not homotopy commutative._ Proof.: By the adjointness of Whitehead products and Samelson products, if there is a non-trivial Whitehead product in \(X\), then there is a non-trivial Samelson product in \(\Omega X\), implying that the loop space of \(X\) is not homotopy commutative. \begin{table} \begin{tabular}{|l|l l l|} \hline & \(G\) & \(H\) & \\ \hline AI & \(\mathrm{SU}(n)\) & \(\mathrm{SO}(n)\) & \((n\geq 2)\) \\ AII & \(\mathrm{SU}(2n)\) & \(\mathrm{Sp}(n)\) & \((n\geq 2)\) \\ AIII & \(\mathrm{U}(m+n)\) & \(\mathrm{U}(m)\times\mathrm{U}(n)\) & \((m,n\geq 1)\) \\ BDI & \(\mathrm{SO}(m+n)\) & \(\mathrm{SO}(m)\times\mathrm{SO}(n)\) & \((m,n\geq 2)\) \\ DIII & \(\mathrm{SO}(2n)\) & \(\mathrm{U}(n)\) & \((n\geq 2)\) \\ CI & \(\mathrm{Sp}(n)\) & \(\mathrm{U}(n)\) & \((n\geq 2)\) \\ CII & \(\mathrm{Sp}(m+n)\) & \(\mathrm{Sp}(m)\times\mathrm{Sp}(n)\) & \((m,n\geq 1)\) \\ EI & \(\mathrm{E}_{6}\) & \(\mathrm{PSp}(4)\) & \\ EII & \(\mathrm{E}_{6}\) & \(\mathrm{SU}(6)\times\mathrm{SU}(2)\) & \\ EIII & \(\mathrm{E}_{6}\) & \(\mathrm{Spin}(10)\cdot S^{1}\) & \((\mathrm{Spin}(10)\cap S^{1}\cong\mathbb{Z}_{4})\) \\ EIV & \(\mathrm{E}_{6}\) & \(\mathrm{F}_{4}\) & \\ EV & \(\mathrm{E}_{7}\) & \(\mathrm{SU}(8)/\{\pm I\}\) & \\ EVI & \(\mathrm{E}_{7}\) & \(\mathrm{Spin}(12)\cdot\mathrm{SU}(2)\) & \((\mathrm{Spin}(12)\cap\mathrm{SU}(2)\cong\mathbb{Z}_{2})\) \\ EVII & \(\mathrm{E}_{7}\) & \(\mathrm{E}_{6}\cdot S^{1}\) & \((E_{6}\cap S^{1}\cong\mathbb{Z}_{3})\) \\ EVIII & \(\mathrm{E}_{8}\) & \(\mathrm{S}(16)\) & \\ EIX & \(\mathrm{E}_{8}\) & \(\mathrm{E}_{7}\cdot\mathrm{SU}(2)\) & \((E_{7}\cdot\mathrm{SU}(2)\cong\mathbb{Z}_{2})\) \\ FI & \(\mathrm{F}_{4}\) & \(\mathrm{Sp}(3)\cdot\mathrm{Sp}(1)\) & \((\mathrm{Sp}(3)\cap\mathrm{Sp}(1)\cong\mathbb{Z}_{2})\) \\ FII & \(\mathrm{F}_{4}\) & \(\mathrm{Spin}(9)\) & \\ G & \(\mathrm{G}_{2}\) & \(\mathrm{SO}(4)\) & \\ \hline \end{tabular} \end{table} Table 1. Irreducible symmetric spaces We consider rational homotopy theory. For a positively graded vector space \(V\) over \(\mathbb{Q}\), let \(\Lambda V\) denote the free commutative graded algebra generated by \(V\). The following lemma follows from [6, Proposition 13.16]. **Lemma 2.2**.: _Let \((\Lambda V,d)\) be the minimal Sullivan model for a simply-connected space \(X\) of finite rational type. If there is \(x\in V\) such that \(dx\) is decomposable and includes the term \(yz\) for \(0\neq y,z\in V\), then there are \(\alpha\in\pi_{m}(X)\otimes\mathbb{Q}\) and \(\beta\in\pi_{n}(X)\otimes\mathbb{Q}\) such that the Whitehead product \([\alpha,\beta]\) is non-trivial in \(\pi_{m+n-1}(X)\otimes\mathbb{Q}\), where \(|y|=m\) and \(|z|=n\)._ The following lemma enables us to apply Lemma 2.2, in the special case, only by looking at rational cohomology. 
**Lemma 2.3**.: _Let \(X\) be a simply-connected finite complex such that_ \[H^{*}(X;\mathbb{Q})=\mathbb{Q}[x_{1},\dots,x_{n}]/(\rho_{1},\dots,\rho_{n})\] _where \(|x_{1}|,\dots,|x_{n}|\) are even and all \(\rho_{1},\dots,\rho_{n}\) are decomposable. Then the minimal Sullivan model for \(X\) is given by_ \[\Lambda(x_{1},\dots,x_{n},y_{1},\dots,y_{n}),\quad dx_{i}=0,\quad dy_{i}=\rho_ {i}.\] Proof.: Since \(X\) is a finite complex, \(H^{*}(X;\mathbb{Q})\) is a finite dimensional vector space. Then \(\rho_{1},\dots,\rho_{n}\) is a regular sequence. Let \(A\) be the differential graded algebra in the statement. Since \(\rho_{1},\dots,\rho_{n}\) are decomposable, \(A\) is a minimal Sullivan algebra. Let \(I_{k}\) denote the degree \(>k\) part of \(\Lambda(x_{1},\dots,x_{n})\), and let \[A_{k}=(\Lambda(x_{1},\dots,x_{n})/I_{k})\otimes\Lambda(y_{1},\dots,y_{n}).\] Then we get a sequence \(0=A_{-1}\gets A_{0}\gets A_{1}\leftarrow\cdots\) of surjections which yields a spectral sequence \(E_{r}\) converging to \(H^{*}(A)\). It is easy to see that each \(y_{i}\) is transgressive and the transgression image of \(y_{i}\) is \(\rho_{i}\). Then since \(\rho_{1},\dots,\rho_{n}\) is a regular sequence, we get an isomorphism \[E_{\infty}=E_{\infty}^{*,0}\cong H^{*}(X;\mathbb{Q}).\] Since \(E_{\infty}=E_{\infty}^{*,0}\), the extension problem is trivial, so \(H^{*}(A)\cong H^{*}(X;\mathbb{Q})\). Thus by [19, Lemma 4.2], \(X\) is formal. There is a map \(f\colon A\to H^{*}(A)\) of differential graded algebras, where \(f(x_{i})=x_{i}\) and \(f(y_{i})=0\). Clearly, the induced map \(f^{*}\colon H^{*}(A)\to H^{*}(A)\) is surjective, hence an isomorphism because \(H^{*}(A)\cong H^{*}(X;\mathbb{Q})\) is a finite dimensional vector space. Thus since \(X\) is formal, the differential graded algebra \(A\) is the minimal Sullivan model for \(X\), completing the proof. Now we consider the homotopy commutativity of symmetric spaces EII, EV, EVI, EVIII, EIX, FI by applying Lemmas 2.2 and 2.3. **Proposition 2.4**.: _The loop space of EII is not homotopy commutative._ Proof.: In [11], Ishitoya determined the integral cohomology of EII. In particular, we have \[H^{*}(\mathrm{EII};\mathbb{Q})=\mathbb{Q}[x_{4},x_{6},x_{8}]/(\rho_{16},\rho_{ 18},\rho_{24}),\quad|x_{i}|=|\rho_{i}|=i\] where \(\rho_{16},\rho_{18},\rho_{24}\) are decomposable and \(\rho_{16}\) includes the term \(x_{8}^{2}\). Then by Lemmas 2.2 and 2.3, there is a non-trivial Whitehead product in EII, so by Lemma 2.1, the proof is finished. **Proposition 2.5**.: _The loop spaces of EV and EVIII are not homotopy commutative._ Proof.: Let \(G\) and \(H\) be as in Table 1 for EV and EVIII. Then there is an isomorphism \[H^{*}(G/H;\mathbb{Q})\cong H^{*}(BT;\mathbb{Q})^{W(H)}/(\widetilde{H}^{*}(BT; \mathbb{Q})^{W(G)})\] where \(T\) is a common maximal torus of \(G,H\) and \(W(G),W(H)\) denote the Weyl groups of \(G,H\). So by [9, 29], we get \[H^{*}(\mathrm{EV};\mathbb{Q}) =\mathbb{Q}[x_{6},x_{8},x_{10},x_{14}]/(\rho_{20},\rho_{24},\rho_{ 28},\rho_{36})\] \[H^{*}(\mathrm{EVIII};\mathbb{Q}) =\mathbb{Q}[y_{8},y_{12},y_{16},y_{20}]/(\mu_{36},\mu_{40},\mu_{4 8},\mu_{60})\] where \(|x_{i}|=|y_{i}|=|\rho_{i}|=|\mu_{i}|=i\), all \(\rho_{i},\mu_{i}\) are decomposable, and \(\rho_{20},\mu_{40}\) include the terms \(x_{10}^{2},y_{20}^{2}\), respectively. Hence by Lemmas 2.2 and 2.3, there are non-trivial Whitehead products in EV and EVIII. Thus by Lemma 2.1, the proof is finished. 
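As a quick illustration of how Lemmas 2.2 and 2.3 work together (a sanity check only; it is not needed for the propositions in this section), take \(X=S^{2}\). Then

\[H^{*}(S^{2};\mathbb{Q})=\mathbb{Q}[x_{2}]/(x_{2}^{2}),\]

so Lemma 2.3 gives the minimal Sullivan model \((\Lambda(x_{2},y_{3}),d)\) with \(dx_{2}=0\) and \(dy_{3}=x_{2}^{2}\). Since \(dy_{3}\) contains the term \(x_{2}\cdot x_{2}\) with \(x_{2}\in V\), Lemma 2.2 yields \(\alpha\in\pi_{2}(S^{2})\otimes\mathbb{Q}\) with \([\alpha,\alpha]\neq 0\) in \(\pi_{3}(S^{2})\otimes\mathbb{Q}\), recovering the classical fact that \([1_{S^{2}},1_{S^{2}}]\neq 0\) used in the proof of Proposition 3.3 below.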
**Proposition 2.6**.: _The loop spaces of_ EVI_,_ EIX_,_ FI _are not homotopy commutative._ Proof.: We consider \(\mathrm{FI}=\mathrm{F}_{4}/\mathrm{Sp}(3)\cdot\mathrm{Sp}(1)\). In [13], the integral cohomology of \(\mathrm{F}_{4}/\mathrm{Sp}(3)\cdot S^{1}\) is determined, and in particular, we have \[H^{*}(\mathrm{F}_{4}/\mathrm{Sp}(3)\cdot S^{1};\mathbb{Q})=\mathbb{Q}[x_{2},x _{8}]/(\rho_{16},\rho_{24}),\quad|x_{i}|=|\rho_{i}|=i\] where \(\rho_{16}\) includes the term \(x_{8}^{2}\). Then by Lemmas 2.2 and 2.3, there are \(\alpha,\beta\in\pi_{8}(\mathrm{F}_{4}/\mathrm{Sp}(3)\cdot S^{1})\otimes\mathbb{Q}\) such that \([\alpha,\beta]\neq 0\) in \(\pi_{15}(\mathrm{F}_{4}/\mathrm{Sp}(3)\cdot S^{1})\otimes\mathbb{Q}\). Consider the homotopy exact sequence for a fibration \[\mathrm{Sp}(1)/S^{1}=S^{2}\to\mathrm{F}_{4}/\mathrm{Sp}(3)\cdot S^{1} \xrightarrow{q}\mathrm{FI}.\] Then since \(\pi_{*}(S^{2})\otimes\mathbb{Q}=0\) for \(*\geq 4\), the induced map \(q_{*}\colon\pi_{*}(\mathrm{F}_{4}/\mathrm{Sp}(3)\cdot S^{1})\otimes\mathbb{Q} \to\pi_{*}(\mathrm{FI})\otimes\mathbb{Q}\) is an isomorphism for \(*\geq 5\). So there are \(\tilde{\alpha},\tilde{\beta}\in\pi_{8}(\mathrm{FI})\otimes\mathbb{Q}\) such that \(q_{*}(\tilde{\alpha})=\alpha\) and \(q_{*}(\tilde{\beta})=\beta\), hence we get \[q_{*}([\tilde{\alpha},\tilde{\beta}])=[q_{*}(\tilde{\alpha}),q_{*}(\tilde{ \beta})]=[\alpha,\beta].\] Thus we obtain \([\tilde{\alpha},\tilde{\beta}]\neq 0\), and by Lemma 2.1, the loop space of FI is not homotopy commutative. By [22, 23], we have \[H^{*}(\mathrm{E}_{7}/\mathrm{Spin}(12)\cdot S^{1};\mathbb{Q}) =\mathbb{Q}[x_{2},x_{8},x_{12}]/(\rho_{24},\rho_{28},\rho_{36})\] \[H^{*}(\mathrm{E}_{8}/\mathrm{E}_{7}\cdot S^{1};\mathbb{Q}) =\mathbb{Q}[y_{2},y_{12},y_{20}]/(\mu_{40},\mu_{48},\mu_{60})\] where \(|x_{i}|=|y_{i}|=|\mu_{i}|=i\), all \(\rho_{i},\mu_{i}\) are decomposable, and \(\rho_{24},\mu_{40}\) include the terms \(x_{12}^{2},y_{20}^{2}\), respectively. Then quite similarly to the FI case, we can see that the loop spaces of EVI and EIX are not homotopy commutative, completing the proof. ## 3. Steenrod operations In this section, we use the Steenrod operations to show the existence of a non-trivial Whitehead product, and apply this criterion to AI, BDI, CII, EI, FII, G. The following lemma is proved in [15]. Let \(QH^{*}(X)\) denote the module of indecomposables of \(H^{*}(X)\). **Lemma 3.1**.: _Suppose that maps \(\alpha\colon\Sigma A\to X\) and \(\beta\colon\Sigma B\to X\) satisfy the following conditions:_ 1. _there are_ \(a,b\in H^{*}(X;\mathbb{Z}_{p})\) _such that_ \(\alpha^{*}(a)\neq 0\) _and_ \(\beta^{*}(b)\neq 0\)_;_ 2. \(\beta^{*}(a)=0\) _for_ \(p=2\)_;_ 3. \(A=B\)_,_ \(\alpha=\beta\)_,_ \(a=b\) _for_ \(|a|=|b|\) _and_ \(p\) _odd;_ 4. \(\dim QH^{k}(X;\mathbb{Z}_{p})=1\) _for_ \(|a|=k\)_;_ 5. _there is_ \(x\in H^{*}(X;\mathbb{Z}_{p})\) _and a cohomology operation_ \(\theta\) _such that_ \(\theta(x)\) _is decomposable and includes the term_ \(ab\) 6. 
\(\theta(H^{n}(\Sigma A\times\Sigma B;\mathbb{Z}_{p}))=0\) _for_ \(|x|=n\)_._ _Then the Whitehead product \([\alpha,\beta]\) is non-trivial._ ### Ai, Bdi, and Cii To apply Lemma 3.1 to AI and BDI, we consider the map \[g\colon\mathbb{R}P^{n-1}\to\mathrm{SO}(n),\quad[x_{1}:\cdots:x_{n}]\mapsto(I- 2X)\mathrm{diag}(-1,1,\ldots,1)\] defined in [30], where \[X=\frac{1}{x_{1}^{2}+\cdots+x_{n}^{2}}\begin{pmatrix}x_{1}x_{1}&x_{1}x_{2}& \cdots&x_{1}x_{n}\\ x_{2}x_{1}&x_{2}x_{2}&\cdots&x_{2}x_{n}\\ \vdots&\vdots&&\vdots\\ x_{n}x_{1}&x_{n}x_{2}&\cdots&x_{n}x_{n}\end{pmatrix}\] and \(\mathrm{diag}(a_{1},\ldots,a_{n})\) stands for the diagonal matrix with diagonal entries \(a_{1},\ldots,a_{n}\). Note that \(I-2X\) is the reflection in the hyperplane through the origin associated to \((x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\). **Lemma 3.2**.: _Let \(c\colon\mathrm{SO}(n)\to\mathrm{SU}(n)\) denote the inclusion. Then the composite_ \[\mathbb{R}P^{n-1}\xrightarrow{g}\mathrm{SO}(n)\xrightarrow{c}\mathrm{SU}(n)\] _is null-homotopic._ Proof.: As in [31], we consider the map \(h\colon\Sigma\mathbb{C}P^{n-1}\to\mathrm{SU}(n)\) defined by \[h([z_{1}:\cdots:z_{n}],t)=\] \[\left(I-2e^{-\pi\sqrt{-1}\left(t-\frac{1}{2}\right)}\cos\pi\left( t-\frac{1}{2}\right)Z\right)\mathrm{diag}\left(-e^{2\pi\sqrt{-1}\left(t-\frac{1}{2 }\right)},1,\ldots,1\right)\] where \[Z=\frac{1}{|z_{1}|^{2}+\cdots+|z_{n}|^{2}}\begin{pmatrix}z_{1}\overline{z_{1}} &z_{1}\overline{z_{2}}&\cdots&z_{1}\overline{z_{n}}\\ z_{2}\overline{z_{1}}&z_{2}\overline{z_{2}}&\cdots&z_{2}\overline{z_{n}}\\ \vdots&\vdots&&\vdots\\ z_{n}\overline{z_{1}}&z_{n}\overline{z_{2}}&\cdots&z_{n}\overline{z_{n}} \end{pmatrix}.\] Then for the inclusion \[i\colon\mathbb{R}P^{n-1}\to\Sigma\mathbb{C}P^{n-1},\quad[x_{1},\ldots,x_{n}] \mapsto\left([x_{1},\ldots,x_{n}],\frac{1}{2}\right)\] there is a commutative diagram Thus since the inclusion \(i\colon\mathbb{R}P^{n-1}\to\Sigma\mathbb{C}P^{n-1}\) is null-homotopic, the composite \(c\circ g\) is null-homotopic too. Now we are ready to apply Lemma 3.1 to AI and BDI. **Proposition 3.3**.: _The loop space of \(\mathrm{AI}=\mathrm{SU}(n)/\mathrm{SO}(n)\) for \(n\geq 2\) is not homotopy commutative._ Proof.: Since \(\mathrm{SU}(2)/\mathrm{SO}(2)=S^{2}\) and \([1_{S^{2}},1_{S^{2}}]\neq 0\), it follows from Lemma 2.1 that the loop space of \(\mathrm{SU}(2)/\mathrm{SO}(2)\) is not homotopy commutative. Then we assume \(n\geq 3\). Let \(\iota\colon\mathrm{SU}(n)/\mathrm{SO}(n)\to B\mathrm{SO}(n)\) denote the natural map. By [20, Theorem 6.7], the mod \(2\) cohomology of \(\mathrm{SU}(n)/\mathrm{SO}(n)\) is given by \[H^{*}(SU(n)/SO(n);\mathbb{Z}_{2})=\Lambda(v_{2},v_{3},\ldots,v_{n}),\quad v_{i }=\iota^{*}(w_{i})\] where \(w_{i}\in H^{i}(B\mathrm{SO}(n);\mathbb{Z}_{2})\) denotes the \(i\)-th Stiefel-Whitney class. Let \(\bar{g}\colon\Sigma\mathbb{R}P^{n-1}\to BSO(n)\) denote the adjoint of the map \(g\colon\mathbb{R}P^{n-1}\to\mathrm{SO}(n)\). Then by Lemma 3.2, the composite \[\Sigma\mathbb{R}P^{n-1}\xrightarrow{\bar{g}}B\mathrm{SO}(n)\xrightarrow{c}B \mathrm{SU}(n)\] is null-homotopic, so the map \(\bar{g}\) lifts to a map \(\tilde{g}\colon\Sigma\mathbb{R}P^{n-1}\to\mathrm{SU}(n)/\mathrm{SO}(n)\) through \(\iota\), up to homotopy, because there is a homotopy fibration \[\mathrm{SU}(n)/\mathrm{SO}(n)\xrightarrow{\iota}B\mathrm{SO}(n)\xrightarrow{ c}B\mathrm{SU}(n).\] By [30], we have \(\bar{g}^{*}(w_{i})=\Sigma u^{i-1}\) for \(i=2,\ldots,n\), where \(u\) is a generator of \(H^{1}(\mathbb{R}P^{n-1};\mathbb{Z}_{2})\cong\mathbb{Z}_{2}\). 
Then we get \[\tilde{g}^{*}(v_{i})=\Sigma u^{i-1}.\] (1) Suppose \(n\equiv 0,3\mod 4\). Let \(h=\tilde{g}|_{\Sigma\mathbb{R}P^{1}}\). Clearly, we have \(h^{*}(v_{n})=0\) and \(h^{*}(v_{2})=\Sigma u\). By the Wu formula, we have \(\mathrm{Sq}^{2}\,w_{n}=w_{2}w_{n}\), implying \[\mathrm{Sq}^{2}\,v_{n}=v_{2}v_{n}.\] On the other hand, we have \(\mathrm{Sq}^{2}(\Sigma u\otimes\Sigma u^{n-3})=0\), implying \(\mathrm{Sq}^{2}(H^{n}(\Sigma\mathbb{R}P^{1}\times\Sigma\mathbb{R}P^{n-1};\mathbb{Z}_{2}))=0\). Thus by Lemma 3.1, we get \([h,\tilde{g}]\neq 0\). (2) Suppose \(n\equiv 1,2\mod 4\). Let \(h=\tilde{g}|_{\Sigma\mathbb{R}P^{2}}\). Then we have \(h^{*}(v_{n})=0\) and \(h^{*}(v_{3})=\Sigma u^{2}\). By the Wu formula, we have \(\mathrm{Sq}^{3}\,w_{n}=w_{3}w_{n}\), implying \[\mathrm{Sq}^{3}\,v_{n}=v_{3}v_{n}.\] On the other hand, we have \(\mathrm{Sq}^{3}(\Sigma u^{2}\otimes\Sigma u^{n-4})=0\) and \(\mathrm{Sq}^{3}(\Sigma u\otimes\Sigma u^{n-5})=0\), implying \(\mathrm{Sq}^{3}(H^{n}(\Sigma\mathbb{R}P^{2}\times\Sigma\mathbb{R}P^{n-1};\mathbb{Z}_{2}))=0\). Thus by Lemma 3.1, the Whitehead product \([h,\tilde{g}]\) is non-trivial. By (1) and (2) together with Lemma 2.1, we obtain that the loop space of \(\mathrm{SU}(n)/\mathrm{SO}(n)\) for \(n\geq 3\) is not homotopy commutative, completing the proof. **Proposition 3.4**.: _The loop space of \(\mathrm{BDI}=\mathrm{SO}(m+n)/\mathrm{SO}(m)\times\mathrm{SO}(n)\) for \(m,n\geq 2\) is not homotopy commutative._ Proof.: We may assume \(m\geq n\). The case \(n=2\) is proved in [15], so we also assume \(n\geq 3\). Consider the map \(\bar{g}\colon\Sigma\mathbb{R}P^{n-1}\to B\mathrm{SO}(n)\) in the proof of Proposition 3.3. Quite similarly to the proof of Proposition 3.3, we can show that \([\bar{g}|_{\Sigma\mathbb{R}P^{1}},\bar{g}]\neq 0\) for \(n\equiv 0,3\mod 4\) and \([\bar{g}|_{\Sigma\mathbb{R}P^{2}},\bar{g}]\neq 0\) for \(n\equiv 1,2\mod 4\). Thus we obtain \([\bar{g},\bar{g}]\neq 0\). Since the natural map \(\iota\colon\mathrm{SO}(m+n)/\mathrm{SO}(m)\times\mathrm{SO}(n)\to B\mathrm{SO}(n)\) is an \(n\)-equivalence, the map \(\bar{g}\) lifts to a map \(\tilde{g}\colon\Sigma\mathbb{R}P^{n-1}\to\mathrm{SO}(m+n)/\mathrm{SO}(m)\times\mathrm{SO}(n)\) through \(\iota\), up to homotopy. Then since \(\iota_{*}([\tilde{g},\tilde{g}])=[\bar{g},\bar{g}]\neq 0\), we get \([\tilde{g},\tilde{g}]\neq 0\). Thus by Lemma 2.1, the loop space of \(\mathrm{SO}(m+n)/\mathrm{SO}(m)\times\mathrm{SO}(n)\) for \(m\geq n\geq 3\) is not homotopy commutative, completing the proof. It remains to consider \(\mathrm{CII}\). **Proposition 3.5**.: _The loop space of \(\mathrm{CII}=\mathrm{Sp}(m+n)/\mathrm{Sp}(m)\times\mathrm{Sp}(n)\) for \(m,n\geq 1\) is not homotopy commutative._ Proof.: Recall that the cohomology of \(B\mathrm{Sp}(n)\) is given by \[H^{*}(B\mathrm{Sp}(n))=\mathbb{Z}[q_{1},\ldots,q_{n}]\] where \(q_{i}\) denotes the \(i\)-th symplectic Pontrjagin class. Let \(Q_{n}\) denote the quaternionic quasi-projective space in the sense of James [14]. Then we have \[H^{*}(Q_{n})=\langle x_{1},\dots,x_{n}\rangle\] such that the natural map \(g\colon Q_{n}\to\operatorname{Sp}(n)\) satisfies \(g^{*}(\sigma(q_{i}))=x_{i}\), where \(\sigma\) denotes the cohomology suspension. Let \(\bar{g}\colon\Sigma Q_{n}\to B\mathrm{Sp}(n)\) be the adjoint of the map \(g\). Then we get \[\bar{g}^{*}(q_{i})=\Sigma x_{i}. \tag{3.1}\] We aim to show \([\bar{g},\bar{g}]\neq 0\). (1) Suppose that \(n\) is divisible by an odd prime \(p\).
Since the natural map \(c\colon B\mathrm{Sp}(n)\to B\mathrm{SU}(n)\) satisfies \(c^{*}(c_{2i})=(-1)^{i}q_{i}\), the mod \(p\) Wu formula in [24] shows \[\mathcal{P}^{1}q_{n}=(-1)^{\frac{p-1}{2}}q_{\frac{p-1}{2}}q_{n}\] in mod \(p\) cohomology. We also have that \(\mathcal{P}^{1}q_{n-\frac{p-1}{2}}=0\), implying \(\mathcal{P}^{1}(H^{4n}(\Sigma Q_{\frac{p-1}{2}}\times\Sigma Q_{n};\mathbb{Z}_ {p}))=0\) by (3.1). Then by Lemma 3.1, we obtain \([\bar{g}|_{\Sigma Q_{\frac{p-1}{2}}},\bar{g}]\neq 0\), implying \([\bar{g},\bar{g}]\neq 0\). (2) Suppose that \(n\) is a power of \(2\). Quite similarly to the above case, we have \[\mathrm{Sq}^{4}\,q_{n}=q_{1}q_{n}\] in mod \(2\) cohomology. We also have that \(\mathrm{Sq}^{4}\,q_{n-1}\) is decomposable, implying \(\mathrm{Sq}^{4}(H^{4n}(\Sigma Q_{1}\times\Sigma Q_{n};\mathbb{Z}_{2}))=0\) by (3.1), where \(Q_{1}=S^{3}\). Then by Lemma 3.1, we obtain \([\bar{g}|_{\Sigma Q_{1}},\bar{g}]\neq 0\), implying \([\bar{g},\bar{g}]\neq 0\). For \(\mathrm{CII}=\mathrm{Sp}(m+n)/\mathrm{Sp}(m)\times\mathrm{Sp}(n)\), we may assume \(m\geq n\). Then the natural map \(\iota\colon\mathrm{CII}\to B\mathrm{Sp}(n)\) is an \((4n+2)\)-equivalence, implying that the map \(\bar{g}\) lifts to a map \(\hat{g}\colon\Sigma Q_{n}\to\mathrm{CII}\) through \(\iota\), up to homotopy. Since \[\iota_{*}([\tilde{g},\tilde{g}])=[\iota_{*}(\tilde{g}),\iota_{*}(\tilde{g})]= [\bar{g},\bar{g}]\neq 0,\] we get \([\tilde{g},\tilde{g}]\neq 0\). Thus by Lemma 2.1, the loop space of \(\mathrm{CII}\) is not homotopy commutative, completing the proof. _Remark 3.6_.: We may prove Proposition 3.5 by using the result of Bott [2] through the natural map \(\mathrm{CII}\to B\mathrm{Sp}(n)\). ### EI, Fii and G We consider the symmetric spaces EI, FII and G. **Proposition 3.7**.: _The loop space of \(\mathrm{EI}\) is not homotopy commutative._ Proof.: By [12], the mod \(5\) cohomology of \(\mathrm{EI}\) is given by \[H^{*}(\mathrm{EI};\mathbb{Z}_{5})=\mathbb{Z}_{5}[x_{8}]/(x_{8}^{3})\otimes \Lambda(x_{9},x_{17}).\] Recall that the mod \(5\) cohomology of \(B\mathrm{PSp}(4)\) and \(B\mathrm{E}_{6}\) are given by \[H^{*}(B\mathrm{PSp}(4);\mathbb{Z}_{5}) =\mathbb{Z}_{5}[q_{1},q_{2},q_{3},q_{4}], |q_{i}|=4i\] \[H^{*}(B\mathrm{E}_{6};\mathbb{Z}_{5}) =\mathbb{Z}_{5}[y_{4},y_{12},y_{16},y_{20},y_{24},y_{32}], |y_{i}|=i.\] We consider the Serre spectral sequence associated with a homotopy fibration \[\mathrm{EI}\xrightarrow{\iota}B\mathrm{PSp}(4)\to B\mathrm{E}_{6}.\] Then by degree reasons, we get \(\iota^{*}(q_{2})=x_{8}\). Then since \(\mathcal{P}^{1}q_{2}=q_{2}^{2}\), we obtain \(\mathcal{P}^{1}x_{8}=x_{8}^{2}\). Let \(g\colon S^{8}\to\mathrm{EI}_{(5)}\) be a map detecting \(x_{8}\), where \(-_{(5)}\) stands for the \(5\)-localization. Then since \(\mathcal{P}^{1}(H^{*}(S^{8}\times S^{8};\mathbb{Z}_{5}))=0\), we can apply Lemma 3.1 and get that the Whitehead product \([g,g]\) is non-trivial. Thus by Lemma 2.1, the proof is finished. **Proposition 3.8**.: _The loop space of \(\mathrm{FII}\) is not homotopy commutative._ Proof.: Since FII is the Cayley projective plane \(\mathbb{O}P^{2}\), its mod \(5\) cohomology is given by \[H^{*}(\operatorname{FII};\mathbb{Z}_{5})=\mathbb{Z}_{5}[q]/(q^{3}),\quad|q|=8. 
\tag{3.2}\] Consider the homotopy fibration \[\operatorname{F}_{4}/\mathrm{Spin}(9)\xrightarrow{\iota}B\mathrm{Spin}(9) \to B\mathrm{F}_{4}.\] Since mod \(5\) cohomology of \(B\mathrm{F}_{4}\) and \(B\mathrm{Spin}(9)\) are given by \[H^{*}(B\mathrm{F}_{4};\mathbb{Z}_{5}) =\mathbb{Z}_{5}[x_{4},x_{12},x_{16},x_{24}]\] \[H^{*}(B\mathrm{Spin}(9);\mathbb{Z}_{5}) =\mathbb{Z}_{5}[p_{1},p_{2},p_{3},p_{4}]\] where \(|x_{i}|=i\) and \(p_{i}\) is the \(i\)-th Pontrjagin class, the standard spectral sequence argument shows \(q=\iota^{*}(p_{2})\). Then since \(\mathcal{P}^{1}p_{2}=p_{2}^{2}+2p_{2}p_{1}^{2}\), we have \(\mathcal{P}^{1}q=q^{2}\). On the other hand, we have \(\mathcal{P}^{1}(H^{*}(S^{8}\times S^{8};\mathbb{Z}_{5}))=0\). Let \(g\colon S^{8}\to\operatorname{FII}\) denote the bottom cell inclusion. Then by Lemma 3.1, we get \([g,g]\neq 0\). Thus by Lemma 2.1, the loop space of \(\operatorname{FII}\) is not homotopy commutative. **Proposition 3.9**.: _The loop space of \(\mathrm{G}\) is not homotopy commutative._ Proof.: Let \(\iota\colon\mathrm{G}\to B\mathrm{SO}(4)\) denote the natural map. In [4], the mod \(2\) cohomology of \(\mathrm{G}\) is given by \[H^{*}(\mathrm{G};\mathbb{Z}_{2})=\mathbb{Z}_{2}[x_{2},x_{3}]/(x_{2}^{3}+x_{3}^ {2},x_{2}x_{3}),\quad\iota^{*}(w_{i})=x_{i}\] where \(w_{i}\) is the \(i\)-th Stiefel-Whitney class. Then by the Wu formula, we have \(\mathrm{Sq}^{1}\,x_{2}=x_{3}\), implying that the \(3\)-skeleton of \(\mathrm{G}\) is \(M=S^{2}\cup_{2}e^{3}\). Let \(g\colon M\to\mathrm{G}\) denote the inclusion. Then we have \(g^{*}(x_{i})=u_{i}\) for \(i=2,3\), where \(u_{i}\) is a generator of \(H^{i}(M;\mathbb{Z}_{2})\cong\mathbb{Z}_{2}\) for \(i=2,3\). Clearly, we have \(\mathrm{Sq}^{2}(H^{*}(S^{2}\times M;\mathbb{Z}_{2}))=0\). By the Wu formula, we also have \(\mathrm{Sq}^{2}\,x_{3}=x_{2}x_{3}\). Then by Lemma 3.1, we get that the Whitehead product \([g|_{S^{2}},g]\) is non-trivial. Thus by Lemma 2.1, the proof is finished. ## 4. Partial projective plane In this section, we introduce a partial projective plane, and use it to prove the existence of a non-trivial Whitehead product. We apply this technique to AII and EIV, to which the techniques in the previous sections do not apply. Let \(X\) be a path-connected space. We say that a map \(g\colon A\to X\) is a generating map if it induces an isomorphism \[g^{*}\colon QH^{*}(X)\xrightarrow{\cong}\widetilde{H}^{*}(A).\] For example, the map \(h\colon\Sigma\mathbb{C}P^{n-1}\to\mathrm{SU}(n)\) defined in the proof of Proposition 3.3 is a generating map. A generating map is of particular importance in the study of the multiplicative structure of a localized Lie group [16, 17, 18, 25, 26]. Let \(g\colon A\to X\) be a generating map such that \(A=\Sigma B\). Assume that the Whitehead product \([g,g]\colon B\star B\to X\) is trivial. Then there is a homotopy commutative diagram Fixing the map \(\mu\), we define the partial projective plane of \(X\), denoted by \(\widehat{P}_{2}X\), by the cofiber of the Hopf construction \[H(\mu)\colon A\star A\to\Sigma X,\quad(x,y,t)\mapsto(\mu(x,y),t).\] If \(X\) is an H-space and \(\mu\) is the restriction of the multiplication of \(X\), then \(\widehat{P}_{2}X\) is a subspace of the projective plane of \(X\). We show a condition for the existence of a non-trivial Whitehead product by using the partial projective plane. Let \[\mathrm{Sq}=1+\mathrm{Sq}^{1}+\mathrm{Sq}^{2}+\cdots.\] **Lemma 4.1**.: _Let \(X\) be a path-connected space satisfying the following conditions:_ 1. 
_the mod_ \(2\) _cohomology of_ \(X\) _is generated by elements_ \(x_{1},\ldots,x_{n}\) _of odd degree;_ 2. \(\mathrm{Sq}\,x_{i}\) _is a linear combination of_ \(x_{1},\ldots,x_{n}\) _for_ \(i=1,\ldots,n\)_._ _If there is a generating map \(g\colon A=\Sigma B\to X\) and \(\min\{|x_{1}|,\ldots,|x_{n}|\}\neq 2^{k}-1\) for any \(k\geq 1\), then the Whitehead product \([g,g]\) is non-trivial._ Proof.: We assume \([g,g]=0\) and deduce a contradiction. By assumption, we have the partial projective plane \(\widehat{P}_{2}X\), where we fix a map \(\mu\colon A\times A\to X\) extending the map \(g+g\colon A\lor A\to X\), up to homotopy. By degree reasons, each \(x_{i}\in H^{*}(X;\mathbb{Z}_{2})\) is primitive in the sense that \[\mu^{*}(x_{i})=g^{*}(x_{i})\otimes 1+1\otimes g^{*}(x_{i}).\] Quite similarly to [3], we can see that \[H(\mu)^{*}(\Sigma x)=\Sigma\mu^{*}(x)-\Sigma(g^{*}(x)\otimes 1)-\Sigma(1\otimes g^{*}(x))\] for any \(x\in H^{*}(X)\), where we identify \(A\star A\) with \(\Sigma A\wedge A\). Then we get \(H(\mu)^{*}(\Sigma x_{i})=0\). By definition, there is an exact sequence \[\cdots\to H^{*-1}(A\star A;\mathbb{Z}_{2})\xrightarrow{\delta}H^{*}(\widehat{P}_{2}X;\mathbb{Z}_{2})\xrightarrow{\iota^{*}}H^{*}(\Sigma X;\mathbb{Z}_{2})\xrightarrow{H(\mu)^{*}}H^{*}(A\star A;\mathbb{Z}_{2})\to\cdots\] where \(\iota\colon\Sigma X\to\widehat{P}_{2}X\) denotes the inclusion. Since \(H(\mu)^{*}(\Sigma x_{i})=0\), there is \(y_{i}\in H^{*}(\widehat{P}_{2}X)\) such that \(\iota^{*}(y_{i})=\Sigma x_{i}\). So by [27], we get \[\delta(g^{*}(x_{i})\star g^{*}(x_{j}))=y_{i}y_{j}.\] On the other hand, since \(\widehat{P}_{2}X\) has category \(\leq 2\), we have \(y_{i}y_{j}y_{k}=0\) for any \(i,j,k\). Then \(H^{*}(\widehat{P}_{2}X;\mathbb{Z}_{2})\) contains the truncated polynomial ring \[R=\mathbb{Z}_{2}[y_{1},y_{2},\ldots,y_{n}]/(y_{1},y_{2},\ldots,y_{n})^{3}.\] Since \(\mathrm{Sq}\,x_{i}\) is a linear combination of \(x_{1},\ldots,x_{n}\), \(\mathrm{Sq}\,y_{i}\) is a linear combination of \(y_{1},\ldots,y_{n}\) modulo \(\mathrm{Im}\,\delta\). Then since \(\mathrm{Im}\,\delta\) is a vector space spanned by \(y_{i}y_{j}\) for \(1\leq i\leq j\leq n\), we get \[\mathrm{Sq}(R)\subset R.\] Thus by [28], we obtain that \(\min\{|y_{1}|,|y_{2}|,\ldots,|y_{n}|\}\) must be \(2^{k}\) for some \(k\geq 1\), completing the proof because \(|y_{i}|=|x_{i}|+1\). Now we consider the symmetric spaces \(\mathrm{AII}\) and \(\mathrm{EIV}\). **Proposition 4.2**.: _The loop space of \(\mathrm{AII}=\mathrm{SU}(2n)/\mathrm{Sp}(n)\) for \(n\geq 2\) is not homotopy commutative._ Proof.: By [20, Theorem 6.7] together with the Wu formula, the mod \(2\) cohomology of \(\mathrm{AII}\) is given by \[H^{*}(\mathrm{AII};\mathbb{Z}_{2})=\Lambda(x_{5},x_{9},\ldots,x_{4n-3}),\quad|x_{4i+1}|=4i+1\] such that \(\mathrm{Sq}\,x_{4i+1}\) is a linear combination of \(x_{5},\ldots,x_{4n-3}\) for \(i=1,\ldots,n-1\). Mukai and Oka [21] showed that there is a generating map \(g\colon\Sigma\mathbb{H}P^{n-1}\to\mathrm{SU}(2n)/\mathrm{Sp}(n)\). Then since \(\min\{|x_{5}|,\ldots,|x_{4n-3}|\}=5\), it follows from Lemma 4.1 that the Whitehead product \([g,g]\) must be non-trivial. Thus the proof is finished by Lemma 2.1.
**Proposition 4.3**.: _The loop space of \(\mathrm{EIV}\) is not homotopy commutative._ Proof.: In [1], Araki showed there is a cell decomposition \(\mathrm{EIV}=(\Sigma\mathbb{O}P^{2})\cup e^{26}\), and the cohomology of \(\mathrm{EIV}\) is given by \[H^{*}(\mathrm{EIV};\mathbb{Z})=\Lambda(x_{9},x_{17}),\quad|x_{i}|=i.\] Then the inclusion \(g\colon\Sigma\mathbb{O}P^{2}\to\mathrm{EIV}\) is a generating map. By the above cell decomposition, we have \(\mathrm{Sq}\,x_{9}=x_{9}+x_{17}\), so by the Adem relation, we have \(\mathrm{Sq}\,x_{17}=x_{17}\) too. Thus by Lemma 4.1, the Whitehead product \([g,g]\) must be non-trivial, so the proof is finished by Lemma 2.1. Finally, we are ready to prove Theorem 1.1. Proof of Theorem 1.1.: Combine [15, Theorem 1.1] and Propositions 2.4, 2.5, 2.6, 3.3, 3.4, 3.5, 3.7, 3.8, 3.9, 4.2, and 4.3.
2302.00117
Real Estate Property Valuation using Self-Supervised Vision Transformers
The use of Artificial Intelligence (AI) in the real estate market has been growing in recent years. In this paper, we propose a new method for property valuation that utilizes self-supervised vision transformers, a recent breakthrough in computer vision and deep learning. Our proposed algorithm uses a combination of machine learning, computer vision and hedonic pricing models trained on real estate data to estimate the value of a given property. We collected and pre-processed a data set of real estate properties in the city of Boulder, Colorado and used it to train, validate and test our algorithm. Our data set consisted of qualitative images (including house interiors, exteriors, and street views) as well as quantitative features such as the number of bedrooms, bathrooms, square footage, lot square footage, property age, crime rates, and proximity to amenities. We evaluated the performance of our model using metrics such as Root Mean Squared Error (RMSE). Our findings indicate that these techniques are able to accurately predict the value of properties, with a low RMSE. The proposed algorithm outperforms traditional appraisal methods that do not leverage property images and has the potential to be used in real-world applications.
Mahdieh Yazdani, Maziar Raissi
2023-01-31T21:54:15Z
http://arxiv.org/abs/2302.00117v1
# Real Estate Property Valuation using Self-Supervised Vision Transformers ###### Abstract The use of Artificial Intelligence (AI) in the real estate market has been growing in recent years. In this paper, we propose a new method for property valuation that utilizes self-supervised vision transformers, a recent breakthrough in computer vision and deep learning. Our proposed algorithm uses a combination of machine learning, computer vision and hedonic pricing models trained on real estate data to estimate the value of a given property. We collected and pre-processed a data set of real estate properties in the city of Boulder, Colorado and used it to train, validate and test our algorithm. Our data set consisted of qualitative images (including house interiors, exteriors, and street views) as well as quantitative features such as the number of bedrooms, bathrooms, square footage, lot square footage, property age, crime rates, and proximity to amenities. We evaluated the performance of our model using metrics such as Root Mean Squared Error (RMSE). Our findings indicate that these techniques are able to accurately predict the value of properties, with a low RMSE. The proposed algorithm outperforms traditional appraisal methods that do not leverage property images and has the potential to be used in real-world applications. _keywords_: housing price prediction model, hedonic model, self-supervised vision transformers, computer vision, deep neural networks, real estate property appraisal, regression analysis, image data. ## 1 Introduction In many families, residential property is one of the most important components of a household's wealth (see e.g., Arvanitidis (2014)). As a result, home prices are of great interest to both current and potential homeowners. The property prices are not only important to stakeholders, but also to insurance companies, property developers, appraisers, tax assessors, brokers, banks, mort-gage lenders, and policy makers (see e.g., Frew and Jud (2003) and Yazdani (2020)). Therefore, accurate predictions and trend analyses in real estate prices can aid these groups in making informed decisions (see e.g., Fan et al. (2018)). For several decades, the estimation of real estate assets has relied on hedonic pricing models (see e.g., Rosen (1974), Del Giudice et al. (2017), Yazdani (2021a), and Krol (2020)). Hedonic models are one of the most widely accepted methods for estimating house prices and are commonly used to recover the implicit prices of house attributes. The urban property markets contain high variation in the structural, locational, neighborhood, and environmental attributes. Most hedonic models employ quantitative features such as the number of bedrooms, bathrooms, square footage, lot square footage, property age, etc. to appraise house prices (see e.g., Geng et al. (2015), Lasota et al. (2011), and Del Giudice et al. (2017)). As alternative tools, various machine learning and deep learning algorithms such as artificial neural networks (ANN), \(k\) nearest neighbor (kNN), bounded fuzzy possibilistic method (BFPM), random forest (RF), and support vector regression (SVR) have been proposed for house price valuation and real estate property price prediction models (see e.g., references in Bigus (1996), Lenk et al. (1997), Kauko et al. (2002), Curry et al. (2002), Pagourtzi et al. (2003), Limsombunchai (2004), Peterson and Flanagan (2009), Yazdani et al. 
(2014), Zhou and Troyanskaya (2015), Islam and Asami (2009), Selim (2011), Morano and Tajani (2013), Yazdani and Choros (2018), Ali et al. (2015), Park and Bae (2015), Ceh et al. (2018), Poursaeed et al. (2018), Hong et al. (2020), and Pai and Wang (2020)). These studies have yielded mixed results. For example, Selim (2011) compared the prediction performances of the hedonic price regression and ANN models for the prediction of dwelling prices in Turkey and found that ANN performed better. Similarly, Yao et al. (2018) integrated a convolutional neural network with RF to analyze the housing market in Shenzhen and found promising results for the application of machine learning and deep learning algorithms. Park and Bae (2015) has compared the performance among several machine learning algorithms such as Repeated Incremental Pruning to Produce Error Reduction, Naive Bayesian, and AdaBoost to identify better forecasting models. This study has shown the promising application of machine learning and deep learning algorithms in housing markets. Yazdani (2021b) compared the performance of artificial neural networks, random forest, and \(k\) nearest neighbor approaches to that of hedonic method on house price prediction in the city of Boulder, Colorado. This study demonstrated that random forest and artificial neural networks algorithms can be better alternatives over the hedonic regression analysis for prediction of the house prices in the city of Boulder, Colorado. However, Kontrimas and Verikas (2011) empirically studied several different models on structured features such as house type, size, house age, etc. Their findings indicate that linear regression surpasses that of neural network methods and linear regression may be a better alternative. However, these models do not fully account for the complexity of the housing market decision-making process. Homebuyers take into account not only structural factors, socio-economic status of the neighborhood, environmental amenities, and location, but also evaluate the interior and exterior of properties such as appliances, house structure, etc. The visual appearance of houses, which is likely one of the most important factors in a buyer's decision, is often ignored in hedonic models. This could be partly due to lack of availability of house images or difficulty in quantifying visual content and incorporating it into hedonic methods. The real estate appraisal process could benefit from the introduction of images into the models as it can represent the overall house construction style and quality. The recent development of robust computer vision algorithms makes it possible to analyze unstructured data such as images. As images are high-dimensional, deep learning methods are needed to transform them into structured data. With deep learning, image features can be quantitatively described and included in appraisal models. In image-related tasks, Convolutional Neural Networks (CNNs) such as LeNet [LeCun et al. (1998)], AlexNet [Krizhevsky et al. (2017)], VGG [Simonyan and Zisserman (2014)], Inception [Szegedy et al. (2017)], ResNet [He et al. (2016)], DenseNet [Huang et al. (2017)], Xception [Chollet (2017)], MobileNet [Howard et al. (2017)], and EfficientNet [Tan and Le (2019)] have been historically employed. However, with the advent of Vision Transformers ViT [Dosovitskiy et al. (2020)] there is a shift towards using transformer-based models for image related tasks. 
These models have shown to achieve state-of-the-art results on various benchmarks and are increasingly being adopted in industry as well. Vision Transformers are a recent breakthrough in computer vision (CV) and deep learning that have been shown to be superior to CNNs in some cases. Vision Transformers are able to learn more global features from images because of their self-attention mechanism and are therefore more effective for tasks such as transfer learning. Transfer learning is a machine learning technique that allows a model trained on one task to be applied to a different but related task. This is achieved by transferring the knowledge gained from one data set to another. This can be done by using the weights and biases of a pre-trained model as the starting point for training a new model, or by fine-tuning the pre-trained model on the new task. Transfer learning can save a significant amount of time and resources, as well as improve the performance of the new model. It is particularly useful when there is limited data available for the new task, as the pre-trained model can provide a good starting point for learning the new task (see Raissi (2023)). In this paper, we propose a novel approach to property valuation that leverages the power of self-supervised vision transformers, a recent breakthrough in computer vision and deep learning. Self-supervised deep learning enables the model to learn features from raw data without the need for manual annotations. This means that the model is able to learn from a wider range of data and can discover more general and abstract features. In contrast, supervised learning relies on labeled data which is typically more limited and specific to the task it was labeled for. Additionally, self-supervised learning methods can learn the underlying structure of the data, which can be useful for a wide range of tasks. The representations learned from self-supervised learning are learned from the data itself and are not dependent on the specific task, allowing them to be more generalizable. This can make the features learned through self-supervised deep learning transfer better to new tasks than features learned through supervised learning. Our algorithm leverages self-supervised vision transformers from the computer vision literature to perform transfer learning and extract quantitative features from qualitative images. This enables us to combine machine learning, computer vision, and hedonic pricing models, all trained on a data set of real estate properties from Boulder, Colorado. This data set includes both qualitative images and quantitative features such as structural factors, socio-economic status of the neighborhood, environmental amenities, and location. We evaluate the performance of our model using metrics such as Root Mean Squared Error (RMSE), and our results show that this technique can accurately predict property values with a low RMSE. In summary, this paper presents a new method for property valuation that utilizes self-supervised vision transformers and outperforms traditional appraisal methods that do not incorporate property images, making it a valuable tool for real-world applications. The paper is organized as follows. Section 2 showcases the data set, collected by the authors, which is novel and unique. In Section 3, we provide an overview of the machine learning, computer vision, and hedonic pricing models applied. Section 4 discusses the results obtained from these techniques. 
Finally, Section 5 offers conclusions and implications. ## 2 Data This study incorporates both qualitative images and quantitative features to enhance the accuracy of the house price prediction models. The real estate data sets used were collected from various sources, including Multiple Listing Service databases1, Public School Ratings2, Colorado Crime Rates and Statistics Information3, CrimeReports4, WalkScore5, Street View 6, recolorado7 and US Census Bureau8. We merged all data sets obtained from various websites. To isolate the influence of time on property prices, the data used in this study is restricted to houses sold in a single year between January 1, 2019 and December 31, 2019 (see Eckert et al. (1990)). Our collected data set consists of 1061 residential properties sold in the city of Boulder, Colorado in 2019. During the screening process, we determined that four of the properties were in poor condition and in need of rebuilding, so we removed those four observations. Additionally, we excluded the only furnished property, which was sold with a lot of luxury furnishings, among all the transactions, which were sold unfurnished. Records associated with 30 reported horse properties and 4 duplicate transactions were also eliminated. In our screening process, we encountered missing data points for various variables such as the number of bedrooms, bathrooms, parking, Lot Area, HOA fees, Solar Power, and Pool, Bathtub, Sauna, or Jacuzzi. To address these missing values, we updated some of the data by visiting different websites. However, a few observations still had missing data points for the continuous variable Lot Area and the dummy variables Solar Power and Pool, Bathtub, Sauna, or Jacuzzi. To avoid sample size reduction and sample selection bias (see e.g., Hill (2013)), we chose to impute the missing values with the mean for the Lot Area and the mode for the dummy variables. We also identified outliers and applied Winsorization to reduce their impact on the analysis. The data cleaning process left us with a sample of 1018 observations. Descriptive statistics for the variables in this data set, are summarized in Tables 1 and 2. These statistics include the mean, standard deviation, minimum and maximum values, as well as the relative standard deviation (the coefficient of variation), which represents the extent of variability in relation to the average of the variable. \begin{table} \begin{tabular}{l l l l l l} \hline \hline \multicolumn{1}{c}{Variables} & \multicolumn{6}{c}{Aggregate Level} \\ \cline{2-6} & Mean & St. Dev. & Min & Max & Coefficient of Variation \\ \hline \hline House Price (\$) & 896,332 & 679,195.8 & 112,897 & 7,200,000 & 76\% \\ Lot Area (SqFt) & 18,367 & 84,909.71 & 0 & 1,577,744 & 462.29\% \\ Living Area (SqFt) & 2,264 & 1,398.79 & 416 & 10,354 & 61.78\% \\ Age (year) & 43.12 & 21.46 & 1 & 98 & 50\% \\ Full Bathroom & 1.55 & 0.76 & 0 & 3 & 49\% \\ Half Bathroom & 0.41 & 0.51 & 0 & 2 & 124\% \\ \(\frac{3}{4}\) Bathroom & 0.64 & 0.69 & 0 & 2 & 108\% \\ Parking & 1.68 & 0.71 & 0 & 3 & 42\% \\ HOA Fees (annually) (\$) & 1,693.32 & 2,033.16 & 0 & 7,113 & 120\% \\ Drive to CBD (minute) & 11.42 & 6.91 & 1 & 26 & 61\% \\ Walk to E.School (minute) & 21.37 & 17.31 & 2 & 68 & 81\% \\ Walk to M.school (minute) & 33.21 & 27.72 & 2 & 96 & 83\% \\ Walk to H.school (minute) & 46.94 & 32.92 & 4 & 122 & 70\% \\ Married (\%) & 42.97 & 16.87 & 9.90 & 70.30 & 40\% \\ Median Household Inc. 
(\$) & 61,137.44 & 20,891.56 & 19,985 & 96,406 & 34\% \\ Neighborhood’s Population & 43,641.85 & 46,872.42 & 888 & 99,081 & 107\% \\ \hline Sample size & 1018 & & & & \\ \hline \hline \end{tabular} \end{table} Table 1: Descriptive Statistics for Numerical Variables. Our data set also included property images. The images include detailed interior shots of rooms like the living room, dining room, bedrooms, and bathrooms, \begin{table} \begin{tabular}{l c c c c} \hline \hline \multicolumn{1}{c}{Variables} & \multicolumn{4}{c}{Citywide Level} \\ \cline{2-5} & Levels & Description & Frequency & Percent \\ \hline \hline Pool, Bathtub, Sauna, or Jacuzzi & 0 & No & 658 & 64.64 \\ 1 & Yes & 360 & 35.36 \\ Solar Power & 0 & No & 724 & 71.12 \\ 1 & Yes & 294 & 28.88 \\ Nearest E.School Rank & 1 & A & 334 & 32.81 \\ 2 & B & 548 & 53.83 \\ 3 & C & 136 & 13.36 \\ Nearest M.School Rank & 1 & A & 125 & 12.28 \\ 2 & B & 633 & 62.18 \\ 3 & C & 260 & 25.54 \\ Nearest H.School Rank & 1 & A & 1018 & 100 \\ & 1 & Central & 230 & 22.59 \\ & 2 & North & 238 & 23.38 \\ Region & 3 & South & 143 & 14.05 \\ 4 & East & 200 & 19.65 \\ & 5 & Gunbarrel & 125 & 12.28 \\ 6 & Rural & 82 & 8.06 \\ & 0 & bedroom & 3 & 0.29 \\ & 1 & 1 bedroom & 72 & 7.07 \\ Number of Bedrooms & 2 & 2 bedrooms & 253 & 24.85 \\ 3 & 3 bedrooms & 285 & 28.00 \\ & 4 & 4 bedrooms & 249 & 24.46 \\ & 5 & 5 bedrooms & 127 & 12.48 \\ & 6 & 6 bedrooms & 27 & 2.65 \\ & 7 & 7 bedrooms & 2 & 0.20 \\ Property Type & 1 & Condominum & 324 & 31.83 \\ 2 & Town - Home & 90 & 8.84 \\ & 3 & Single-Family & 604 & 59.33 \\ Neighborhood’s Crime Level & 1 & Highest crime rate & 82 & 8.06 \\ & 2 & Middle crime rate & 505 & 49.61 \\ & 3 & Lowest crime rate & 431 & 42.34 \\ \hline Sample size & - & 1018 & 100 & - \\ \hline \hline \end{tabular} \end{table} Table 2: Descriptive Statistics for Categorical Variables. as well as exterior images showcasing the architectural style, texture of the building materials, and the design of windows and doors. Additionally, we also have street view images which give us a sense of the surrounding neighborhood and the overall aesthetic of the area. Some sample images from a representative property listing in our data set are depicted in Fig. 1. It is worth mentioning that the number of images per listing can vary and 14 properties were found to have no accompanying images. These were excluded from the data set, reducing the final sample size from 1018 to 1004. City of Boulder is divided into seven different geographical locations; central Boulder, downtown Boulder, old north Boulder, north Boulder, south Boulder, east Boulder, Gunbarel, and rural areas. With city development, the old north Boulder neighborhood is in central Boulder. We explored the geographical location of each property by making use of Google Maps. The property types in the housing market in the city of Boulder are classified as condominiums, town-homes, and single-family houses. Figure 2 plots the city of Boulder on the map and the property types. The sample includes 604 single-family houses, 324 condominiums, and 90 townhomes. Single-family properties range in price from \(\$216,575\) to \(\$7,200,000\) with an average price of \(\$1,160,321\). Townhomes range from \(\$115,000\) to \(\$1,421,000\) with an average price of \(\$627,960\), while condominiums range from \(\$112,897\) to \(\$2,600,000\) with an average price of \(\$478,751\). On average, single-family homes have 3.82 bedrooms, with 0.17% having no bedroom and 24.51% having 5 or more. 
Townhomes have 2.97 bedrooms on average, with 7.78% having 5 or more. Condominiums have 1.98 bedrooms on average, with 0.62% Figure 1: Some sample images from a representative property listing in our data set. having no bedroom and \(0.31\%\) having \(5\) or more. \(3.31\%\) of single-family homes have solar power and \(18.05\%\) have a pool, bathtub, sauna, or jacuzzi. Information about solar power and amenities in townhomes and condominiums is limited. In the locational submarkets of Boulder, the average age of dwellings varies from \(30\) years in North Boulder to \(59\) years in Central Boulder. In the spatial submarkets, the North Boulder region has \(238\) transactions with a house price range of \(\$134,306\) to \(\$4,500,000\) and an average price of \(\$807,214\). The Central Boulder region has \(230\) transactions with a house price range of \(\$115,000\) to \(\$7,200,000\) and an average price of \(\$1,256,235\). Table 3 provides more information about the descriptive statistics of the house prices in different submarkets. From Table 3 we learn that the deviation in residential property prices is lower in North, South, East, and Gunbarrel submarkets compared to the overall market level. However, the house price difference is higher in Central Boulder and rural areas. As mentioned earlier, the housing market in Boulder is classified into condominiums, town-homes, and single-family houses. To account for differences in property type and location, categorical variables are added to the models using one-hot encoding. The data is then split into training, validation, and test data sets using random sampling. Figure 2: City of Boulder on the map and the various property types. **Note**: The house prices have been recorded in the US dollars ($). ## 3 Methodology Our data set includes a wide variety of images for each property, including detailed interior shots of rooms like the living room, dining room, bedrooms, and bathrooms, as well as exterior images showcasing the architectural style, texture of the building materials, and the design of windows and doors. Additionally, we also have street view images which give us a sense of the surrounding neighborhood and the overall aesthetic of the area. To make the most of this wealth of information, we take these images and extract their corresponding feature vectors by feeding them through a pre-trained Vision Transformer or a CNN (e.g., ResNet). Once the feature vectors have been extracted, we then aggregate them using an average pooling mechanism. This process allows us to combine the information from all of the images and create a single, representative feature vector for each property. This is an important step because it allows us to effectively capture the most important information from all of the images in a concise and manageable format. We will then train a hedonic model (i.e., Ridge regression) using the pooled extracted image features and the other quantitative features such as structural factors, socio-economic status of the neighborhood, environmental amenities, and location. This combination of image features and quantitative data allows us to have a more complete understanding of each property, and enables us to make more accurate predictions about house values. Overall, this process of extracting, aggregating, and training on image features is a crucial step in our efforts to predict house values and gain valuable insights into the real estate market. This process is depicted in Fig. 3. 
\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{House Price} \\ \cline{2-5} Market Level & Mean & St. Dev. & Min & Max \\ \hline \hline Citywide & \(896,332\) & \(679,195.8\) & \(112,897\) & \(7,200,000\) \\ Single-Family & \(1,160,321\) & \(739,401.9\) & \(216,575\) & \(7,200,000\) \\ Town-Home & \(627,960\) & \(246,470\) & \(115,000\) & \(1,421,000\) \\ Condominum & \(478,751\) & \(299,644.7\) & \(112,897\) & \(2,600,000\) \\ Central & \(1,256,235\) & \(921,806.1\) & \(115,000\) & \(7,200,000\) \\ North & \(807,214\) & \(517,334\) & \(134,306\) & \(4,500,000\) \\ South & \(920,577\) & \(528,865.8\) & \(243,000\) & \(4,550,000\) \\ East & \(646,616\) & \(373,423.8\) & \(112,897\) & \(3,350,000\) \\ Gunbarrel & \(593,812\) & \(307,783.2\) & \(194,585\) & \(1,995,051\) \\ Rural & \(1,173,444\) & \(929,252.6\) & \(425,000\) & \(5,779,000\) \\ \hline \hline \end{tabular} \end{table} Table 3: Summary Statistics of House Prices in Different Submarkets. To extract image features, we make use of the latest advancements in computer vision and machine learning by leveraging Vision Transformer (ViT) [Dosovitskiy et al. (2020)] and ResNet [He et al. (2016)] models that have been trained in a self-supervised manner on the ImageNet [Russakovsky et al. (2015)] data set. This data set contains a large number of images across a wide range of categories and is widely used as a benchmark for training and evaluating computer vision models. The self-supervised learning technique used for this work is self-DIstillation with NO labels (DINO) [Caron et al. (2021)]. DINO shares the same overall structure as recent self-supervised learning approaches, such as the ones proposed in Caron et al. (2020) Chen et al. (2020), Chen and He (2021) He et al. (2020) and Grill et al. (2020), that have been proposed in the literature. These approaches are designed to learn visual representations from large-scale image data sets without the need for manual annotation (i.e., Figure 3: In our data set, for each property there exist multiple images. Namely, interior images (e.g., living room, dining room, bedroom, bathroom), exterior images (e.g., house architectural style, the texture of the building material, the style of windows and doors) and street views. We take those images and extract their corresponding feature vectors by feeding them through a pre-trained Vision Transformer or a CNN (e.g., ResNet). We then aggregate the extracted feature vectors using an average pooling mechanism. We will then train a hedonic model (i.e., Ridge regression) using the pooled extracted image features and the other quantitative features (e.g., structural factors, socio-economic status of the neighborhood, environmental amenities, and location) to predict house values. labeling). DINO [Caron et al. (2021)] also shares some similarities with knowledge distillation [Hinton et al. (2015)], a technique that has been widely used to improve the performance of deep neural networks by transferring knowledge from a larger and more powerful model (i.e., teacher) to a smaller and more efficient one (i.e., student). The DINO framework also utilizes two networks, a student and a teacher, to extract features from input images. Here, both networks have the same architecture but different parameters. DINO is illustrate in Fig. 4 for simplicity with one single pair of views. However, the model actually takes multiple different random transformations of an input image and passes them to the student and teacher networks. 
The output of the teacher network is centered using a mean computed over the batch. Each network outputs a feature vector, which is then normalized using a temperature softmax over the feature dimension. The similarity between the student and teacher networks is measured using a cross-entropy loss. To ensure that the gradients are only propagated through the student network, a stop-gradient operator is applied on the teacher network. The teacher's parameters are updated using an exponential moving average (EMA) of the student's parameters. This approach allows for the efficient transfer of knowledge from the teacher network to the student network, ultimately leading to the improvement of the performance of the student network. This provides a clear and detailed overview of the steps involved in the framework and how they are interconnected, making it easier to understand the workings of the method and how it can be applied to different tasks. Overall, DINO is an innovative framework that combines the best of both worlds: self-supervised learning and knowledge distillation. It allows us to learn powerful visual representations from large-scale data sets. By using these pre-trained models, we can take advantage of the knowledge they have already learned from the ImageNet data and apply it to our specific task, which is image feature extraction for predicting house values. The Vision Transformer (ViT) model is a new architecture that has been shown to be highly effective in self-supervised learning. On the other hand, the ResNet model is a classic architecture that has been widely used in various computer vision tasks. Both models can be trained in a self-supervised manner using the DINO framework. Given the established reputation and proven effectiveness of the ResNet architecture, it is widely understood in the field. In light of this, we will not delve into its details within this document but instead, we will focus on providing a comprehensive and in-depth explanation of the Vision Transformer, which is a cutting-edge technique in the following. The ViT architecture is based on the Transformer architecture [Vaswani et al. (2017)], originally developed for natural language processing, but it has been adapted for computer vision. As illustrated in Fig. 5, the key idea behind ViT is to split an image into fixed-size patches, linearly embed each of them, add position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder [Vaswani et al. (2017)], this allows the model to maintain a consistent representation of the input image. To extract features using the ViT architecture, we use the standard approach of adding an extra classification (CLS) token to the sequence, this token is then used as input to a linear layer, which produces the final output of the model. The key insight of the Transformer architecture as depicted in Fig. 6 is that it allows the model to process input sequences in parallel, which greatly improves the model's ability to handle long-distance dependencies. Transformer Encoders (see Fig. 6) are neural network architectures that were introduced in the 2017 paper "Attention is All You Need" by Google researchers [20]. The key innovation in this architecture is the use of self-attention mechanisms, which allow the model to weigh the importance of different parts of the input sequence when encoding it. 
The self-attention mechanism works by computing a set of attention weights for each element in the input sequence, which indicate how much each element should be taken into account when encoding the sequence. These attention weights are computed using a dot-product operation between the input elements and a set of learnable parameters called "keys", "queries" and "values". The dot-product scores are then passed through a softmax function to obtain the attention weights, which are used to weight the input elements before they are combined to form the final Figure 4: DINO is a self-supervised approach that uses two networks, a student and a teacher, to extract features from input images. Both networks have the same architecture but different parameters. The student network is trained using stochastic gradient descent (SGD) to mimic the teacher network’s output, which is measured by a cross-entropy loss. The teacher’s parameters are updated with an exponential moving average (EMA) of the student’s parameters. encoded representation. The transformer encoder architecture also includes a multi-layer perceptron (MLP) and a residual connection, which allows the model to better capture the dependencies between the input elements. This architecture has been used in a variety of natural language processing (NLP) tasks, such as machine translation, text summarization, and language modeling, and has achieved state-of-the-art performance on many of them. The ViT architecture also relies on Transformer Encoders and their attention mechanism. ## 4 Results This section compares the performance of various computer vision architectures, trained in a self-supervised manner using the DINO framework, as image feature extractors for transfer learning (see Table 4). The performance of each model is measured using Root Mean Square Error (RMSE), which is a widely used metric for evaluating the performance of predictive models. This comparison is made against a baseline hedonic model (i.e., Ridge regression) that only uses quantitative features such as number of bedrooms, bathrooms, square footage, lot size, property age, crime rates, and proximity to amenities. In contrast, the other models in Table 4 include both image features extracted from various computer vision models and the aforementioned quantitative features. This combination of features provides a more comprehensive view of the Figure 5: The Vision Transformer (ViT) architecture takes an image, splits it into fixed-size patches, embeds each patch linearly, adds position embeddings, and feeds the resulting sequence of vectors to a Transformer encoder. To perform classification, an extra learnable classification (CLS) token is added to the sequence. data, which can lead to improved predictions. The "Alpha" column in Table 4 displays the constant that multiplies the L2 term in Ridge regression, which is a classical linear model. This constant controls the strength of regularization, which helps prevent overfitting, a common problem in machine learning where a model becomes too closely fitted to the training data and fails to generalize well to new, unseen data. The "Improvement over Baseline" column in Table 4 shows the percentage improvement in RMSE over the baseline for each of the other models. The results in this column demonstrate that all models incorporating image features perform better than the baseline model, as indicated by their lower RMSE values on the test data. 
Of all the architectures in the table, the one with the best performance is ViT-B/8, with an improvement of 10.63% over the baseline. This highlights the potential of computer vision models in transfer learning, as they can be used to extract meaningful image features that can be combined with other features to improve the accuracy of predictive models. Table 5 provides information about the configurations of different computer vision architectures used as image feature extractors in this work. The columns in the table are labeled "Blocks", "Dim", "Heads", "# Tokens", "# Params (M)", and "Im/". The "Blocks" column refers to the number of Transformer blocks in the network. The "Dim" column refers to the channel dimension of the network. The "Heads" column represents the number of heads in the multi-head attention mechanism. The "# Tokens" column indicates the length of the token sequence when the network is fed with inputs of a resolution of \(224\times 224\) center-cropped from property images. The "# Params" column specifies the total number of Figure 6: Transformer encoders are neural network architectures that use self-attention mechanisms to weigh the importance of different parts of the input sequence when encoding it. The self-attention mechanism computes attention weights for each element in the input sequence using dot-product operation between the input elements and learnable parameters called “keys”, “queries” and “values”. These attention weights are then used to weight the input elements before they are combined to form the final encoded representation. The transformer encoder architecture also includes a multi-layer perceptron (MLP) and a residual connection, which allows the model to better capture the dependencies between the input elements. This architecture has been widely used in Natural Language and Computer Vision tasks and achieve state-of-the-art performance on many of them. parameters in the network, excluding the projection head Caron et al. (2021). Finally, the "Im/s" column lists the inference time of the network on a NVIDIA V100 GPU, with 128 samples processed in each forward pass. The table is intended to provide a clear and concise overview of the network configurations, allowing readers to easily compare and understand the differences between the different models. The ViT architecture takes as input a grid of non-overlapping contiguous image patches of resolution \(N\times N\). In this paper, \(N=16\) ("/16") or \(N=8\) ("/8"). In Tables 4 and 5, "-S" refers to ViT small and "-B" indicates the ViT base architecture. The findings presented in this paper (see Table 4) align with the previously published research (see e.g., Caron et al. (2021)), which shows that models with a larger size using images divided into smaller patches (e.g., ViT-B/8) tend to have better performance. Moreover, all ViT models outperform ResNet despite being trained using the same self-supervised technique, namely DINO. One reason why ViT may perform better than ResNet is its use of the self-attention mechanism. Unlike traditional convolutional neural networks (CNNs) such as ResNet, ViT employs self-attention mechanisms to directly model relationships between all elements in the input sequence, rather than just neighboring elements. This allows ViT to capture more complex and global dependencies in the input data, resulting in improved performance. However, it should be noted that all ViT models are slower feature extractors than ResNet as illustrated in Table 5. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Architecture & Train-RMSE & Val-RMSE & Test-RMSE & Alpha & Improvement over Baseline \\ \hline Baseline & \$117.28 & \$130.34 & \$117.09 & 40 & 0.00\% \\ \hline ResNet-50 & \$57.57 & \$130.93 & \$108.85 & 360 & 7.00\% \\ \hline ViT-B/16 & \$77.07 & \$126.48 & \$106.62 & 350 & 8.94\% \\ \hline ViT-S/16 & \$86.60 & \$124.30 & \$106.15 & 290 & 9.34\% \\ \hline ViT-S/8 & \$75.63 & \$119.79 & \$105.74 & 100 & 9.69\% \\ \hline ViT-B/8 & \$73.88 & \$126.23 & \$104.64 & 320 & 10.63\% \\ \hline \end{tabular} \end{table}

Table 4: This table compares the performance of various computer vision architectures as image feature extractors for transfer learning using Root Mean Square Error (RMSE) as the evaluation metric. The baseline hedonic model (Ridge regression) serves as a comparison, using only quantitative features such as number of bedrooms, bathrooms, square footage, lot size, property age, crime rates, and proximity to amenities. In contrast, the other models incorporate both the extracted image features and the quantitative features. The results show that the baseline architecture has the highest RMSE on the test data, while the other models perform better, with lower RMSE values. The “Alpha” column displays the constant multiplying the L2 term in Ridge regression, which controls regularization strength, while the “Improvement over Baseline” column shows the improvement in RMSE in percentage over the baseline architecture. Out of all the architectures, ViT-B/8 achieves the best performance with the lowest RMSE on the test data set and an improvement of 10.63% over the baseline.

The property images are transformed using computer vision techniques and used as additional inputs with quantitative features like number of rooms, square footage, age, crime rates, etc. These combined features are fed into a hedonic model (Ridge Regressor) to predict the property value. Incorporating image features increases the total number of variables, and as a consequence the number of parameters in the hedonic model, and makes it prone to overfitting, which is why we use validation data to determine the strength of regularization through the constant multiplying the L2 term in Ridge regression. This helps prevent overfitting. The best model is chosen based on the hyper-parameter "Alpha" (i.e., the hyper-parameter controlling the regularization strength) that results in the lowest RMSE on the validation data, as shown in Fig. 7. The RMSE numbers in Table 4 are reported in dollars because the purpose of the model is to estimate the value of real estate properties. The RMSE is a measure of the difference between the predicted value and the actual value of a property. When the RMSE is reported in dollars, it provides a clear and intuitive understanding of the magnitude of the error in the prediction. For example, an RMSE of $100 means that on average, the model's predictions are off by $100 per square foot. This means that the model's prediction error for a property with 2,000 square feet of living space would be $100 \(\times\) 2,000 = $200,000. A 2,000-square-foot residential property in Boulder, CO could be worth above $2,000,000. A reduction in the RMSE of $1 per square foot, achieved by a more accurate model, would mean that the average prediction error for a property with 2,000 square feet of living space would decrease from $100 \(\times\) 2,000 = $200,000 to $99 \(\times\) 2,000 = $198,000. This leads to a $2,000 difference in the valuation.
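The pipeline described above can be summarized in a short sketch: image features from a frozen backbone are concatenated with the quantitative features, a Ridge regressor is fit for each candidate "Alpha", and the value with the lowest validation RMSE is kept. The data below are synthetic stand-ins and the alpha grid is an assumption made only for illustration; this is not the authors' code.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n_train, n_val = 400, 100

# Synthetic stand-ins: frozen-backbone image embeddings and hedonic features.
img_feats = rng.normal(size=(n_train + n_val, 384))   # e.g. a ViT-S embedding size
tab_feats = rng.normal(size=(n_train + n_val, 7))     # bedrooms, bathrooms, sqft, ...
price_psf = 300 + tab_feats @ rng.normal(size=7) + rng.normal(scale=50, size=n_train + n_val)

X = np.hstack([img_feats, tab_feats])                 # combined feature vector per property
X_tr, X_va = X[:n_train], X[n_train:]
y_tr, y_va = price_psf[:n_train], price_psf[n_train:]

def rmse(y_true, y_pred):
    return float(np.sqrt(mean_squared_error(y_true, y_pred)))

# Select the regularization strength "Alpha" by validation RMSE, as in Fig. 7.
best_alpha, best_rmse = min(
    ((a, rmse(y_va, Ridge(alpha=a).fit(X_tr, y_tr).predict(X_va)))
     for a in (10, 40, 100, 290, 320, 360)),
    key=lambda pair: pair[1],
)
print(f"selected alpha={best_alpha}, validation RMSE=${best_rmse:.2f} per square foot")
```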
Such improvements can have important implications for the real estate industry, as they can result in more accurate pricing and better informed decisions for buyers, sellers, and lenders. This work proposes a new AI-based method for property valuation in real estate. The use of self-supervised vision transformers, machine learning, computer vision, and hedonic pricing models trained on real estate data is expected to improve the accuracy of property value estimation, outperforming traditional appraisal methods. The method has potential for real-world applications and its significance lies in the importance of accurate property valuation for the functioning of the real estate market. Improved property valuation methods can result in more efficient and fair transactions and better investment decisions. The use of AI in property valuation can have a positive impact on the real estate market and the economy as a whole.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Model & Blocks & Dim & Heads & \# Tokens & \# Params (M) & Im/s \\ \hline ResNet-50 & - & 2048 & - & - & 23 & 1237 \\ \hline ViT-S/16 & 12 & 384 & 6 & 197 & 21 & 1007 \\ \hline ViT-S/8 & 12 & 384 & 6 & 785 & 21 & 180 \\ \hline ViT-B/16 & 12 & 768 & 12 & 197 & 85 & 312 \\ \hline ViT-B/8 & 12 & 768 & 12 & 785 & 85 & 63 \\ \hline \end{tabular} \end{table}

Table 5: This table outlines the configurations of different networks. It lists the number of Transformer blocks as “Blocks”, the channel dimension as “Dim”, and the number of heads in multi-head attention as “Heads”. The length of the token sequence for inputs with a resolution of 224x224 is listed as “# Tokens”. The total number of parameters (excluding the projection head) is listed as “# Params (M)” in millions. The “Im/s” column shows the throughput (images processed per second) for forward passes of 128 samples on an NVIDIA V100 GPU [Caron et al. (2021)].

## 5 Concluding Remarks and Future Works

In conclusion, this paper proposed a new method for property valuation utilizing self-supervised vision transformers, a recent breakthrough in computer vision and deep learning. The proposed algorithm uses a combination of machine learning, computer vision and hedonic pricing models trained on real estate data to estimate the value of a given property. We collected and pre-processed a data set of real estate properties in the city of Boulder, Colorado and used it to train and test our algorithm. Our data set consisted of qualitative images as well as quantitative features such as the number of bedrooms, bathrooms, square footage, lot square footage, property age, crime rates, and proximity to amenities.

Figure 7: The best hedonic model is chosen based on the hyper-parameter “Alpha” (i.e., the hyper-parameter controlling the regularization strength) that results in the lowest RMSE on the validation data.

We evaluated the performance of our model using metrics such as Root Mean Squared Error (RMSE). Our findings indicate that these techniques are able to accurately predict the value of properties, with a low RMSE. The proposed algorithm outperforms traditional appraisal methods that do not leverage property images and has the potential to be used in real-world applications. The use of AI in the real estate industry has been growing in recent years, and our research highlights the potential for self-supervised vision transformers to revolutionize the property valuation process.
With continued development and refinement, this algorithm could become a valuable tool for real estate professionals, making the process of property valuation more efficient and accurate. Additionally, this research is a step towards creating more fair and accurate models for property valuation that are not susceptible to human bias. We believe that our proposed algorithm has the potential to make a significant impact on the real estate industry and we look forward to seeing it being used in real-world applications. In future work, making use of data sets from different regions and cities for property valuation will be crucial in enhancing the generalizability and accuracy of the proposed algorithm. Fine-tuning the model to these data sets could further improve its performance. Implementing the algorithm in real-world scenarios and gathering feedback from real estate professionals will offer valuable insights into its practicality and efficacy. Furthermore, incorporating other computer vision techniques such as object detection and semantic segmentation is also a potential direction. Additionally, leveraging textual data such as property descriptions can also be explored. The proposed algorithm has the potential to revolutionize the property valuation process, but further research is necessary to fully tap into its potential.
2310.00306
A survey on the Riemann-Lebesgue integrability in non-additive setting
We present in this survey some results regarding Riemann-Lebesgue integrability with respect to arbitrary non-additive set functions.
Anca Croitoru, Alina Gavrilut, Alina Iosif, Anna Rita Sambucini
2023-09-30T08:39:57Z
http://arxiv.org/abs/2310.00306v1
# A survey on the Riemann-Lebesgue integrability in non-additive setting

###### Abstract.

We present some results regarding the Riemann-Lebesgue integral of a vector (real resp.) function relative to an arbitrary non-additive set function. Then these results are generalized to the case of Riemann-Lebesgue integrable interval-valued multifunctions.

Key words and phrases: Riemann-Lebesgue integral, inequalities, convergence of integral, Interval-valued (set) multifunction, Non-additive set function. 2020 Mathematics Subject Classification: 28B20, 28C15, 49J53

## 1. Introduction

The theory of non-additive set functions and nonlinear integrals has become an important tool in many domains such as: potential theory, subjective evaluation, optimization, economics, decision making, data mining, artificial intelligence, accident rates estimations (e.g. [20, 21, 38, 44, 53, 56, 60, 63, 75, 79, 80, 81, 84]). In the literature several methods of integration for (multi)functions based on extensions of the Riemann and Lebesgue integrals have been introduced and studied (see for example, [2, 3, 4, 5, 6, 7, 8, 9, 13, 14, 15, 17, 18, 25, 29, 30, 31, 32, 33, 34, 36, 37, 39, 40, 41, 42, 43, 44, 55, 58, 68]). In this context, Kadets and Tseytlin [48] have introduced the absolute Riemann-Lebesgue \(|RL|\) and unconditional Riemann-Lebesgue RL integrability, for Banach valued functions with respect to countably additive measures. According to [48], in finite measure spaces, the Bochner integrability implies \(|RL|\) integrability, which is stronger than RL integrability, which in turn implies Pettis integrability. Contributions in this area are given in [10, 16, 17, 22, 26, 27, 33, 47, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 76, 77, 78, 83]. Interval Analysis, as a particular case of Set-Valued Analysis, was introduced by Moore [57], motivated by its applications in computational mathematics (i.e. numerical analysis). Interval-valued multifunctions and multimeasures are involved in various applied sciences, such as statistics, biology, theory of games, economics, social sciences and software. They are also used, for example, in signal and image processing, since the discretization of a continuous signal causes different sources of uncertainty and ambiguity, as we can see in [23, 24, 46, 49, 52, 85, 82]. In this chapter, we study the Riemann-Lebesgue integral with respect to an arbitrary set function, not necessarily countably additive. We present some of its classical properties, which are then extended to interval-valued multifunctions.

## 2. Preliminaries

The _common refinement_ of two finite or countable partitions \(P=\{E_{i}\}\) and \(P^{\prime}=\{G_{j}\}\) is the partition \(P\wedge P^{\prime}:=\{E_{i}\cap G_{j}\}\). A countable tagged partition of \(S\) is a family \(\{(E_{n},s_{n}),n\in\mathbb{N}\}\) such that \((E_{n})_{n}\) is a countable partition of \(S\) and \(s_{n}\in E_{n}\) for every \(n\in\mathbb{N}\).
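As a concrete illustration of the two notions just recalled, the common refinement \(P\wedge P^{\prime}=\{E_{i}\cap G_{j}\}\) can be computed directly for finite partitions; the sketch below uses frozensets over a small finite ground set and is meant only to mirror the definition, not the countable partitions used in the sequel.

```python
from itertools import product

def common_refinement(P, Q):
    """Common refinement P ∧ Q = {E ∩ G : E in P, G in Q}, discarding empty intersections."""
    return {E & G for E, G in product(P, Q) if E & G}

S = frozenset(range(10))
P = {frozenset(range(0, 5)), frozenset(range(5, 10))}                  # {0,...,4}, {5,...,9}
Q = {frozenset(x for x in S if x % 2 == 0), frozenset(x for x in S if x % 2 == 1)}

blocks = sorted(sorted(block) for block in common_refinement(P, Q))
print(blocks)   # [[0, 2, 4], [1, 3], [5, 7, 9], [6, 8]] -- finer than both P and Q
```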
All over this chapter, without any additional assumptions, \(\nu:\mathcal{C}\to[0,\infty)\) will be a set function, such that \(\nu(\emptyset)=0.\) As in [12, Definitions 2 and 3], \(\nu:\mathcal{C}\to[0,\infty)\) is said to be: * _monotone_ if \(\nu(A)\leq\nu(B)\), for every \(A,B\in\mathcal{C}\), with \(A\subseteq B\) (such non additive measures are called also capacities or fuzzy measures); * _subadditive_ if \(\nu(A\cup B)\leq\nu(A)+\nu(B)\), for every \(A,B\in\mathcal{C}\), with \(A\cap B=\emptyset\); * _a submeasure_ (in the sense of Drewnowski [35]) if \(\nu\) is monotone and subadditive; * \(\sigma\)_-subadditive_ if \(\nu(A)\leq\sum\limits_{n=0}^{+\infty}\nu(A_{n})\), for every sequence of (pairwise disjoint) sets \((A_{n})_{n\in\mathbb{N}}\subset\mathcal{C}\), with \(A=\bigcup\limits_{n=0}^{+\infty}A_{n}\). * _finitely additive_ if \(\nu(A\cup B)=\nu(A)+\nu(B)\), for every disjoint sets \(A,B\in\mathcal{C}\); * \(\sigma\)_-additive_ if \(\nu(\bigcup\limits_{n=0}^{+\infty}A_{n})=\sum\limits_{n=0}^{+\infty}\nu(A_{n})\), for every sequence of pairwise disjoint sets \((A_{n})_{n\in\mathbb{N}}\subset\mathcal{C}\); * _order-continuous_ (shortly, _o-continuous_) if \(\lim\limits_{n\to+\infty}\nu(A_{n})=0\), for every decreasing sequence of sets \((A_{n})_{n\in\mathbb{N}}\subset\mathcal{C}\), with \(\bigcap\limits_{n=0}^{+\infty}A_{n}=\emptyset\) (denoted by \(A_{n}\searrow\emptyset\)); * _exhaustive_ if \(\lim\limits_{n\to+\infty}\nu(A_{n})=0\), for every sequence of pairwise disjoint sets \((A_{n})_{n\in\mathbb{N}}\subset\mathcal{C}\). * _null-additive_ if, for every \(A,B\in\mathcal{C}\), \(\nu(A\cup B)=\nu(A)\) when \(\nu(B)=0\). Moreover a set function \(\nu:\mathcal{C}\to[0,\infty)\) satisfies: **(\(\boldsymbol{\sigma}\)):**: the property \(\boldsymbol{\sigma}\) if for every \(\{E_{n}\}_{n}\subset\mathcal{C}\) with \(\nu(E_{n})=0\), for every \(n\in\mathbb{N}\) we have \(\nu(\cup_{n=0}^{\infty}E_{n})=0\); **(E):**: the condition **(E)** if for every double sequence \((B_{n}^{m})_{n,m\in\mathbb{N}^{*}}\subset\mathcal{C}\), such that for every \(m\in\mathbb{N}^{*}\), \(B_{n}^{m}\searrow B^{m}\,(n\to\infty)\) and \(\nu(\cup_{m=1}^{\infty}B^{m})=0\), there exist two increasing sequences \((n_{p})_{p},(m_{p})_{p}\subset\mathbb{N}\) such that \(\lim\limits_{k\to\infty}\nu(\bigcup_{p=k}^{\infty}B_{n_{p}}^{m_{p}})=0\). The property \(\boldsymbol{\sigma}\) is a consequence of the countable subadditivity and it will be needed in some of our results. Observe that the condition **(E)** was given, for example, in [51], in order to give sufficient and necessary conditions to obtain Egoroff's Theorem for suitable non additive measures. See also [59] for null additive set functions and related questions. An example of a set function that satisfies the condition **(E)** can be found in [51, Example 3.3]. A property \((P)\) holds \(\nu\)-almost everywhere (denoted by \(\nu\)-a.e.) if there exists \(E\in\mathcal{C}\), with \(\nu(E)=0\), so that the property \((P)\) is valid on \(S\setminus E\). A set \(A\in\mathcal{C}\) is said to be an atom of a set function \(\nu:\mathcal{C}\to[0,\infty)\) if \(\nu(A)>0\) and for every \(B\in\mathcal{C}\), with \(B\subseteq A\), we have \(\nu(B)=0\) or \(\nu(A\backslash B)=0\). We associate to \(\nu:S\to[0,\infty)\) the following set functions. 
(See [12, Definition 4]) * The variation \(\overline{\nu}\) of \(\nu\) is the set function \(\overline{\nu}:\mathcal{P}(S)\to[0,+\infty]\) defined by \[\overline{\nu}(E)=\sup\{\sum\limits_{i=1}^{n}\|\nu(A_{i})\|\},\] for every \(E\in\mathcal{P}(S)\), where the supremum is extended over all finite families of pairwise disjoint sets \(\{A_{i}\}_{i=1}^{n}\subset\mathcal{C}\), with \(A_{i}\subseteq E\), for every \(i\in\{1,\ldots,n\}\). The set function \(\nu\) is said to be _of finite variation_ (on \(\mathcal{C}\)) if \(\overline{\nu}(S)<+\infty\). * the semivariation \(\widetilde{\nu}\) of \(\nu\) is the set function \(:\mathcal{P}(S)\to[0,+\infty]\) defined for every \(A\subseteq S\), by \[\widetilde{\nu}(A)=\inf\{\overline{\nu}(B);\ A\subseteq B,\ B\in\mathcal{C}\}.\] **Remark 2.1**.: ([27, Remark 1]) Let \(\nu:\mathcal{C}\to[0,+\infty)\) be a non additive measure. Then \(\overline{\nu}\) is monotone and super-additive on \(\mathcal{P}(S)\), that is \[\overline{\nu}(\bigcup_{i\in I}A_{i})\geq\sum_{i\in I}\overline{\nu}(A_{i}),\] for every finite or countable partition \(\{A_{i}\}_{i\in I}\) of \(S\). If \(\nu\) is finitely additive, then \(\overline{\nu}(A)=\nu(A)\), for every \(A\in\mathcal{C}\). If \(\nu\) is subadditive (\(\sigma\)-subadditive, resp.), then \(\overline{\nu}\) is finitely additive (\(\sigma\)-additive, resp.). Moreover, for every \(\nu\), \(\nu_{1}\), \(\nu_{2}:\mathcal{C}\to\mathbb{R}\) and every \(\alpha\in\mathbb{R}\), \(\bullet\)\(\overline{\nu_{1}\pm\nu_{2}}\leq\overline{\nu_{1}}+\overline{\nu_{2}};\)\(\overline{\alpha\,\nu}=|\alpha|\overline{\nu}.\) For all unexplained definitions, see for example [9, 11]. Let \(\mathscr{M}(S)\) be the set of all non negative submeasures on \((S,\mathcal{C})\). Let \((m_{n})_{n}\subset\mathscr{M}(S)\), we will use the symbol \(m_{n}\uparrow\) to indicate that \(m_{n}\leq m_{n+1}\) for every \(n\in\mathbb{N}\). **Definition 2.2**.: _A sequence \((\nu_{n})_{n}\subset\mathscr{M}(S)\) setwise converges to \(\nu\in\mathscr{M}(S)\) if for every \(A\in\mathcal{C}\)_ \[\lim_{n\to\infty}\overline{\nu_{n}-\nu}(A)=0. \tag{2}\] In the \(\sigma\)-additive case, the setwise convergence is given by \(\lim_{n\to\infty}\nu_{n}(A)=\nu(A)\) for every \(A\in\mathcal{C}\), see for example [31]. Since \(|\nu_{n}(A)-\nu(A)|\leq\overline{\nu_{n}-\nu}(A)\) for every \(A\in\mathcal{C}\), the convergence given in Definition 2.2 implies the one of [31]; the converse does not hold in general. Neverthless, the two definitions coincide, from [12, Remark 1], if \(\mu\), \(\mu_{n}\), for all \(n\in\mathbb{N}\), are finitely additive and non negative. Finally, if \(S\) is a locally compact Hausdorff topological space, we denote by \(\mathcal{K}\) the lattice of all compact subsets of \(S\), \(\mathcal{B}\) the Borel \(\sigma\)-algebra (i.e.,the smallest \(\sigma\)-algebra containing \(\mathcal{K}\)) and \(\mathcal{O}\) the class of all open sets. **Definition 2.3**.: A set function \(\nu:\mathcal{B}\to[0,\infty)\) is called regular if for every set \(A\in\mathcal{B}\) and every \(\varepsilon>0\) there exist \(K\in\mathcal{K}\) and \(D\in\mathcal{O}\) such that \(K\subseteq A\subseteq D\) and \(\nu(D\setminus K)<\varepsilon\). ## 3. 
The Riemann-Lebesgue integrability As in Kadets and Tseytlin [48, Definition 4.5] (for scalar functions) and Potiyrala [65, Definition 7] and Kadets and Tseytlin [47] (for vector functions), we introduce the following definition: **Definition 3.1**.: ( [12, Definition 5]) A vector function \(f:S\to X\) is called _absolutely (unconditionally_ resp.) _Riemann-Lebesgue_ (\(|RL|\)) (\(RL\) resp.) _\(\nu\)-integrable_ (on \(S\)) if there exists \(a\in X\) such that for every \(\varepsilon>0\), there exists a countable partition \(P_{\varepsilon}\) of \(S\), so that for every countable partition \(P=\{A_{n}\}_{n\in\mathbb{N}}\) of \(S\) with \(P\geq P_{\varepsilon}\), * \(f\) is bounded on every \(A_{n}\), with \(\nu(A_{n})>0\) and * for every \(s_{n}\in A_{n}\), \(n\in\mathbb{N}\), the series \(\sum_{n=0}^{+\infty}f(s_{n})\nu(A_{n})\) is absolutely (unconditionally resp.) convergent and \[\Big{\|}\sum_{n=0}^{+\infty}f(s_{n})\nu(A_{n})-a\Big{\|}<\varepsilon.\] The vector \(a\) is called _the absolute (unconditional) Riemann-Lebesgue \(\nu\)-integral of \(f\) on \(S\)_ and it is denoted by \(\int_{S}\!f\,\mathrm{d}\nu\)\(\Big{(}{}^{RL}\!\!\!\int_{S}\!f\,\mathrm{d}\nu\) resp.\(\Big{)}\). We denote by the symbol \(|RL|_{\nu}^{1}(X)\) the class of all \(X\)-valued function that are \(|RL|\) integrable with respect to \(\nu\) and in an analogous way we denote the class of all functions that are \(RL\)\(\nu\)-integrable. **Remark 3.2**.: (see [12, Remark 2]) Obviously if \(a\) exists, then it is unique. Moreover, if \(h\) is \(|RL|\)\(\nu\)-integrable, then \(h\) is \(RL\)\(\nu\)-integrable and if \(X\) is finite dimensional, then \(|RL|\)\(\nu\)-integrability is equivalent to \(RL\)\(\nu\)-integrability. In this case, it is denoted by \(RL\). We remember also the following in the countably additive case: **3.2.a):**: Kadets and Tseytlin [48] introduced the \(|RL|\)\(\nu\)-integral and the \(RL\)\(\nu\)-integral for functions with values in a Banach space relative to a measure. They proved that if \((S,\mathcal{C},\nu)\) is a finite measure space, then the following implications hold: \[L_{\nu}^{1}(X)\subset|RL|_{\nu}^{1}(X)\subset RL_{\nu}^{1}(X)\subset P_{\nu} (X).\] where \(L_{\nu}^{1}(X)\), and \(P_{\nu}(X)\) denotes respectively the Bochner and the Pettis integrability. **3.2.b):**: If \(X\) is a separable Banach space, then \[L^{1}_{\nu}(X)=|RL|^{1}_{\nu}(X)\subset RL^{1}_{\nu}(X)=P_{\nu}(X.)\] **3.2.c):**: If \((S,\mathcal{C},\nu)\) is a \(\sigma\)-finite measure space, then the Birkhoff integrability coincides with \(RL\)\(\nu\)-integrability ( [65]). **3.2.d):**: If \(h:[a,b]\to\mathbb{R}\) is Riemann integrable, then \(h\) is \(RL\)-integrable ( [65, Corollary 17]). The converse is not valid: for example the function \(h:[0,1]\to\mathbb{R}\), \(h=\chi_{[0,1]\cap\mathbb{Q}}\) is \(RL\)-integrable but it is not Riemann integrable ( [65, Example 19]). ### Some properties of \(RL\)\(\nu\)-integrability In this section we present some results contained in [9, 12] regarding Riemann-Lebesgue integrability of vector functions with respect to an arbitrary non-negative set function, pointed out its remarkable properties. We begin with a characterization of \(|RL|\)-integrability. **Theorem 3.3**.: _Let \(g,h\in|RL|^{1}_{\nu}(X)\) and \(\alpha,\beta\in\mathbb{R}\). 
Then:_ **(3.3.a):**: _If_ \(h\) _is_ \(|RL|\)__\(\nu\)_-integrable on_ \(S\)_, then_ \(h\) _is_ \(|RL|\)__\(\nu\)_-integrable on every_ \(E\in\mathcal{C}\)__ ( [12, Theorem 1.a])_;_ **(3.3.b):**: \(h\) _is_ \(|RL|\)__\(\nu\)_-integrable on every_ \(E\in\mathcal{C}\) _if and only if_ \(h\chi_{E}\) _is_ \(|RL|\)__\(\nu\)_-integrable on_ \(S\)_. In this case, by,_ _[_12_, Theorem 1.b]__,_ \[{}_{(|RL|)}\int_{E}h\,\mathrm{d}\nu={}_{(|RL|)}\int_{S}h\chi_{E}\,\mathrm{d}\nu.\] ( _The same holds for_ \(RL\)_-integrability_ )_. Moreover, by_ _[_12_, Theorem 3]__,_ **(3.3.c):**: \(\alpha g+\beta h\in|RL|^{1}_{\nu}(X)\) _and_ \[{}_{(|RL|)}\int_{S}(\alpha g+\beta h)\,\mathrm{d}\nu=\alpha\cdot{}_{(|RL|)} \int_{S}\!g\,\mathrm{d}\nu+\beta\cdot{}_{(|RL|)}\int_{S}\!h\,\mathrm{d}\nu,\] **(3.3.d):**: \(h\in|RL|^{1}_{\alpha\nu}(X)\) _for_ \(\alpha\in[0,+\infty)\) _and_ \[{}_{(|RL|)}\int_{S}\!h\,\mathrm{d}(\alpha\nu)=\alpha{}_{(|RL|)}\int_{S}\!h\, \mathrm{d}\nu.\] **(3.3.e):**: _Suppose_ \(h\in|RL|^{1}_{\nu_{i}}(X)\) _for_ \(i=1,2\)_. By_ ( [12, Theorem 4])__\(h\in|RL|^{1}_{\nu_{1}+\nu_{2}}(X)\) _and_ \[{}_{(|RL|)}\int_{S}h\,\mathrm{d}(\nu_{1}+\nu_{2})={}_{(|RL|)}\int_{S}\!h\, \mathrm{d}\nu_{1}+{}_{(|RL|)}\int_{S}h\,\mathrm{d}\nu_{2}.\] _Similar results also hold for the \(RL\)\(\nu\)-integrability._ Proof.: We report here only the proofs of (3.3.a) and (3.3.b). Fix any \(A\in\mathcal{C}\) and denote by \(J\) the integral of \(h\) on \(S\); then, fixed any \(\varepsilon>0\), there exists a partition \(P_{\varepsilon}\) of \(S\), such that, for every finer partition it is \[\left\|\sum_{n=0}^{+\infty}h(t_{n})\nu(A_{n})-J\right\|\leq\varepsilon.\] Now, denote by \(P_{0}\) any partition finer than \(P^{\prime}\) and also finer than \(\{A,S\setminus A\}\), and by \(P_{A}\) the partition of \(A\) consisting of all the elements of \(P_{0}\) that are contained in \(A\). Next, let \(\Pi_{A}\) and \(\Pi^{\prime}_{A}\) denote two partitions of \(A\) finer than \(P_{A}\), and extend them with a common partition of \(S\setminus A\) (also with the same _tags_) in such a way that the two resulting partitions, denoted by \(\Pi\) and \(\Pi^{\prime}\), are both finer than \(P^{\prime}\). So, if we denote by \[\sigma(h,\Pi):=\sum_{n=0}^{\infty}h(t_{n})\nu(A_{n}),\quad A_{n}\in\Pi, \tag{3}\] then \[\|\sigma(h,\Pi)-\sigma(h,\Pi^{\prime})\|\leq\|\sigma(h,\Pi)-J\|+\|J-\sigma(h, \Pi^{\prime})\|\leq 2\varepsilon.\] Now, setting: \[\alpha_{1}:=\sum_{I\in\Pi_{A}}h(t_{I})\nu(I),\quad\ \ \alpha_{2}:=\sum_{I\in\Pi^{ \prime}_{A}}h(t^{\prime}_{I})\nu(I),\quad\ \ \beta:=\sum_{I\in\Pi,I\subset A^{c}}h(\tau_{I})\nu(I),\] (with obvious meaning of the symbols), one has \[2\varepsilon\geq\|\alpha_{1}+\beta-(\alpha_{2}+\beta)\|=\|\alpha_{1}-\alpha_{ 2}\|.\] By the arbitrariness of \(\Pi_{A}\) and \(\Pi^{\prime}_{A}\), this means that the sums \(\sigma(h,\Pi_{A})\) satisfy a Cauchy principle in \(X\), and so the first claim follows by completeness. Now, let us suppose that \(f\) is \(|RL|\)\(\nu\)-integrable on \(A\in\mathcal{C}\). Then for every \(\varepsilon>0\) there exists a partition \(P_{A}^{\varepsilon}\in\mathcal{P}_{A}\) so that for every partition \(P_{A}=\{B_{n}\}_{n\in\mathbb{N}}\) of \(A\) with \(P_{A}\geq P_{A}^{\varepsilon}\) and for every \(s_{n}\in B_{n},n\in\mathbb{N}\), we have \[\left\|\sum_{n=0}^{\infty}h(s_{n})\nu(B_{n})-{}_{(|RL|)}\int_{A}h\,\mathrm{d} \nu\right\|<\varepsilon. \tag{4}\] Let us consider \(P_{\varepsilon}=P_{A}^{\varepsilon}\cup\{S\setminus A\}\), which is a partition of \(S\). 
If \(P=\{A_{n}\}_{n\in\mathbb{N}}\) is a partition of \(S\) with \(P\geq P_{\varepsilon}\), then without any loss of generality we may write \(P=\{C_{n},D_{n}\}_{n\in\mathbb{N}}\) with pairwise disjoint \(C_{n},D_{n}\) such that \(A=\cup_{n=0}^{\infty}C_{n}\) and \(\cup_{n=0}^{\infty}D_{n}=S\setminus A.\) Now, for every \(u_{n}\in A_{n},n\in\mathbb{N}\) we get by (4): \[\left\|\sum_{n=0}^{\infty}h\chi_{A}(u_{n})\nu(A_{n})-{}_{(|RL|)} \int_{A}h\,\mathrm{d}\nu\right\|=\] \[= \left\|\sum_{n=0}^{\infty}h\chi_{A}(t_{n})\nu(C_{n})+\sum_{n=0}^{ \infty}h\chi_{A}(s_{n})\nu(D_{n})-{}_{(|RL|)}\int_{A}h\,\mathrm{d}\nu\right\|=\] \[= \left\|\sum_{n=0}^{\infty}h(t_{n})\nu(C_{n})-{}_{(|RL|)}\int_{A}h \,\mathrm{d}\nu\right\|<\varepsilon,\] where \(t_{n}\in C_{n},s_{n}\in D_{n},\) for every \(n\in\mathbb{N},\) which says that \(f\chi_{A}\) is \(|RL|\)\(m\)-integrable on \(S\) and \({}_{(|RL|)}\int_{S}h\chi_{A}\,\mathrm{d}\nu:={}_{(|RL|)}\int_{A}h\,\mathrm{d}\mu.\) Finally, suppose that \(f\chi_{A}\) is \(|RL|\)\(\nu\)-integrable on \(S\). Then for every \(\varepsilon>0\) there exists \(P_{\varepsilon}=\{B_{n}\}_{n\in\mathbb{N}}\in\mathcal{P}\) so that for every \(P=\{C_{n}\}_{n\in\mathbb{N}}\) partition of \(S\) with \(P\geq P_{\varepsilon}\) and every \(t_{n}\in C_{n},n\in\mathbb{N},\) we have \[\left\|\sum_{n=0}^{\infty}h\chi_{A}(t_{n})\nu(C_{n})-{}_{(|RL|)}\int_{S}h\chi _{A}\,\mathrm{d}\nu\right\|<\varepsilon. \tag{5}\] Let us consider \(P_{A}^{\varepsilon}=\{B_{n}\cap A\}_{n\in\mathbb{N}},\) which is a partition of \(A\). Let \(P_{A}=\{D_{n}\}_{n\in\mathbb{N}}\) be an arbitrary partition of \(A\) with \(P_{A}\geq P_{A}^{\varepsilon}\) and \(P=P_{A}\cup\{S\setminus A\}.\) Then \(P\) is a countable partition finer than \(P_{\varepsilon}.\) Let us take \(t_{n}\in D_{n},\)\(n\in\mathbb{N}\) and \(s\in S\setminus A\). By (5) we obtain \[\left\|\sum_{n=0}^{\infty}h(t_{n})\nu(D_{n})-{}_{(|RL|)}\int_{S} h\chi_{A}\,\mathrm{d}\nu\right\|=\] \[= \left\|\sum_{n=0}^{\infty}h\chi_{A}(t_{n})\nu(D_{n})+h\chi_{A}(s) \nu(S\setminus A)-{}_{(|RL|)}\int_{S}\!h\chi_{A}\,\mathrm{d}\nu\right\|<\varepsilon,\] which assures that \(f\) is \(|RL|\)\(m\)-integrable on \(A\) and \[{}_{(|RL|)}\int_{A}h\,\mathrm{d}\nu:={}_{(|RL|)}\int_{S}h\chi_{A}\,\mathrm{d}\nu.\] In particular the \(|RL|\)\(\nu\)-integrability with respect to a set function of finite variation allows to obtain the following properties. **Theorem 3.4**.: ( [12, Proposition 1, Theorems 2 and 5, Corollary 2]) _Let \(\nu:S\to[0,\infty)\) be of finite variation. If we suppose that \(h:S\to X\) is bounded_ **(3.4.a):** _then \(h\in|RL|_{\nu}^{1}(X)\) and_ \[\left\|{}_{(|RL|)}\int_{S}h\,\mathrm{d}\nu\right\|\leq\sup_{s\in S}\|h(s)\| \cdot\overline{\nu}(S).\] **(3.4.b):**_If \(h=0\)\(\nu\)-a.e., then \(h\in|RL|^{1}_{\nu}(X)\) and \({}_{(|RL|)}\int_{S}\!h\,\mathrm{d}\nu=0.\) Moreover let \(g,h:S\to X\) be vector functions._ **(3.4.c):**_If \(\underset{s\in S}{\sup}\|g(s)-h(s)\|<+\infty\), \(g\in|RL|^{1}_{\nu}(X)\) and \(g=h\;\nu\)-a.e., then \(h\in|RL|^{1}_{\nu}(X)\) and_ \[{}_{(|RL|)}\int_{S}\!g\,\mathrm{d}\nu={}_{(|RL|)}\int_{S}\!h\,\mathrm{d}\nu.\] **(3.4.d):**_If \(g,h\in|RL|^{1}_{\nu}(X)\) then_ \[\left\|{}_{(|RL|)}\int_{S}\!g\,\mathrm{d}\nu-{}_{(|RL|)}\int_{S}\!h\,\mathrm{d} \nu\right\|\leq\underset{s\in S}{\sup}\|g(s)-h(s)\|\cdot\overline{\nu}(S).\] Proof.: We prove here (3.4.b). From the boundedness of \(h\) let \(M\in[0,\infty)\) so that \(\|h(s)\|\leq M\), for every \(s\in S\). If \(M=0\), then the conclusion is obvious. 
Suppose \(M>0.\) Let us denote \(A=\{s\in S:h(s)\neq 0\}\). Since \(h=0\)\(\nu\)-ae, we have \(\widetilde{\nu}(A)=0\). Then, for every \(\varepsilon>0\), there exists \(B_{\varepsilon}\in\mathcal{C}\) so that \(A\subseteq B_{\varepsilon}\) and \(\overline{\nu}(B_{\varepsilon})<\varepsilon/M.\) Let us take the partition \(P_{\varepsilon}=\{C_{n}\}_{n\in\mathbb{N}}\) of \(B_{\varepsilon}\), and let \(C_{0}=S\setminus B_{\varepsilon}\) and add \(C_{0}\) to \(P_{\varepsilon}\). Let \(P=\{A_{n}\}_{n\in\mathbb{N}}\) be an arbitrary partition of \(S\) so that \(P\geq P_{\varepsilon}\). Without any loss of generality, we suppose that \(P=\{D_{n},E_{n}\}_{n\in\mathbb{N}}\subset\mathcal{C}\), with pairwise disjoint sets \(D_{n},E_{n}\) such that \[\bigcup_{n\in\mathbb{N}}D_{n}=C_{0}\qquad\bigcup_{n\in\mathbb{N}}E_{n}=B_{ \varepsilon}.\] Let \(t_{n}\in D_{n},s_{n}\in E_{n}\), for every \(n\in\mathbb{N}\), Then we can write \[\big{\|}\sum_{n=0}^{\infty}h(t_{n})\nu(D_{n})+\sum_{n=0}^{\infty} h(s_{n})\nu(E_{n})\big{\|}=\|\sum_{n=0}^{\infty}h(s_{n})\nu(E_{n})\|\leq\] \[\leq \sum_{n=0}^{\infty}\|h(s_{n})\|\nu(E_{n})\leq M\cdot\overline{\nu }(B_{\varepsilon})<\varepsilon,\] which ensures that \(h\) is \(|RL|\)\(\nu\)-integrable and \({}_{(|RL|)}\int_{T}\!h\,\mathrm{d}\nu=0.\) The next theorem shows that the integral of a real function is monotone with respect to the integrands and to the set functions (see [12, Theorems 6 and 7]) in the following way. **Theorem 3.5**.: _Let \(g,h\in|RL|^{1}_{\nu}(\mathbb{R})\) such that \(g(s)\leq h(s),\) for every \(s\in S,\) then_ **(3.5.a):**_\({}_{(|RL|)}\!\int_{S}\!g\,\mathrm{d}\nu\leq{}_{(|RL|)}\!\int_{S}\!h\, \mathrm{d}\nu.\) Let \(\nu_{1}\), \(\nu_{2}:\mathcal{C}\to[0,+\infty)\) be set functions such that \(\nu_{1}(A)\leq\nu_{2}(A)\), for every \(A\in\mathcal{C}\) and \(h\in|RL|^{1}_{\nu_{1}}(\mathbb{R}^{+}_{0})\) for \(i=1,2\) Then_ (3.5.b): \({}_{(|RL|)}\!\int_{S}\!h\,\mathrm{d}\nu_{1}\leq{}_{(|RL|)}\!\int_{S}\!h\,\mathrm{d} \nu_{2}\). Proof.: We prove here (3.5.a). Let \(\varepsilon>0\) be arbitrary. Since \(g,h\in|RL|_{\nu}^{1}(\mathbb{R})\), there exists a countable partition \(P_{0}\) so that for every \(P=\{C_{n}\}_{n\in\mathbb{N}},P\geq P_{0}\) and every \(t_{n}\in C_{n},n\in\mathbb{N}\), the series \(\sum_{n=0}^{\infty}g(t_{n})\nu(C_{n})\), \(\sum_{n=0}^{\infty}h(t_{n})\nu(C_{n})\) are absolutely convergent and \[\max\left\{\left|{}_{(|RL|)}\int_{S}g\,\mathrm{d}\nu-\sum_{n=0}^{\infty}g(t_{n} )\nu(C_{n})\right|,\,\left|{}_{(|RL|)}\int_{S}h\,\mathrm{d}\nu-\sum_{n=0}^{ \infty}h(t_{n})\nu(C_{n})\right|\right\}<\frac{\varepsilon}{3}.\] Therefore \[{}_{(|RL|)}\int_{S}g\,\mathrm{d}\nu-{}_{(|RL|)}\int_{S}h\, \mathrm{d}\nu= {}_{(|RL|)}\int_{S}g\,\mathrm{d}\nu-\sum_{n=0}^{\infty}g(t_{n}) \nu(C_{n})+\sum_{n=0}^{\infty}g(t_{n})\nu(C_{n})+\] \[- \sum_{n=0}^{\infty}h(t_{n})\nu(C_{n})+\sum_{n=0}^{\infty}h(t_{n}) \nu(C_{n})-{}_{(|RL|)}\int_{S}h\,\mathrm{d}\nu<\] \[< \frac{2\varepsilon}{3}+\Big{[}\sum_{n=0}^{\infty}g(t_{n})\nu(C_{n })-\sum_{n=0}^{\infty}h(t_{n})\nu(C_{n})\Big{]}\leq\varepsilon\] since, by the hypothesis, \(\sum_{n=0}^{\infty}g(t_{n})\nu(C_{n})\leq\sum_{n=0}^{\infty}h(t_{n})\nu(C_{n})\). Consequently, \[{}_{(|RL|)}\int_{S}g\,\mathrm{d}\nu-{}_{(|RL|)}\int_{S}h\,\mathrm{d}\nu\leq 0.\] For every \(h:S\to X\) that is \(|RL|\) (\(RL\) resp.) 
\(\nu\)-integrable on every set \(E\in\mathcal{C}\), we consider the \(|RL|\) integral operator \(T_{h}:\mathcal{C}\to X\), defined for every \(E\in\mathcal{C}\) by, \[T_{h}(E)={}_{(|RL|)}\int_{E}h\,\mathrm{d}\nu\quad\big{(}T_{h}(E)={}_{(RL)} \int_{E}h\,\mathrm{d}\nu\quad\text{resp.}\big{)} \tag{6}\] We point out that, even without the additive condition for the set function \(\nu\), the indefinite integral is additive thanks to Theorem 3.3. In the next theorem we present some properties of the set function \(T_{h}\). **Theorem 3.6**.: ( [12, Theorem 8]) _Let \(h\in|RL|_{\nu}^{1}(X)\). If \(h\) is bounded, and \(\nu\) is of finite variation then_ **(3.6.a):**: \(\bullet\)_\(T_{h}\) is of finite variation too;_ \(\bullet\)_\(\overline{T_{h}}\ll\overline{\nu}\) in the \(\varepsilon-\delta\) sense;_ \(\bullet\) _Moreover, if_ \(\overline{\nu}\) _is o-continuous (exhaustive resp.), then_ \(T_{h}\) _is also o-continuous (exhaustive resp.)._ **(3.6.b):**: _If_ \(h:S\to[0,\infty)\) _is nonnegative and_ \(\nu\) _is scalar-valued and monotone, then the same holds for_ \(T_{h}\) Proof.: (3.6.a). In order to prove that \(T_{h}\) is of finite variation let \(\{A_{i}\}_{i=1,\ldots,n}\) be a pairwise partition of \(S\) and \(M=\sup\limits_{s\in S}\lVert h(s)\rVert.\) By Theorem 3.4.a) and Remark 2.1, we have \[\sum\limits_{i=1}^{n}\|T_{h}(A_{i})\|\leq M\cdot\sum\limits_{i=1}^{n}\overline{ \nu}(A_{i})\leq M\cdot\overline{\nu}(S).\] So \(\overline{T_{h}}(S)\leq M\)\(\overline{\nu}(S),\) that yields \(\overline{T_{h}}(S)<+\infty\). Now the absolute continuity in the \(\varepsilon-\delta\) sense follows from Theorem 3.4.a). Let \(M\) as before. If \(M=0\), then \(h=0\), hence \(T_{h}=0\). If \(M>0\), by Theorem 3.4.a) we have \(\|T_{h}(A)\|\leq M\cdot\overline{\nu}(A),\) for every \(A\in\mathcal{C}.\) So the o-continuity of \(T_{h}\) follows from that of \(\overline{\nu}\). The proof of the exhaustivity is similar. For (3.6.b) let \(A,B\in\mathcal{C}\) with \(A\subseteq B\) and \(\varepsilon>0\). Since \(h\) is \(\nu\)-integrable on \(A\), there exists a countable partition \(P_{1}=\{C_{n}\}_{n\in\mathbb{N}}\) of \(A\) so that for every other finer countable partition \(P=\{A_{n}\}_{n\in\mathbb{N}}\), of \(A\) and every \(t_{n}\in A_{n},n\in\mathbb{N}\), the series \(\sum\limits_{n=0}^{\infty}h(t_{n})\nu(A_{n})\) is absolutely convergent and \[\left|T_{h}(A)-\sum\limits_{n=0}^{\infty}h(t_{n})\nu(A_{n})\right|<\frac{ \varepsilon}{2}. \tag{7}\] Since \(f\) is \(\nu\)-integrable on \(B\), let \(P_{2}=\{D_{n}\}_{n\in\mathbb{N}}\) a countable partition of \(B\) with the same meaning as before for the set \(A\). Let \(\widetilde{P}_{1}=\{C_{n},B\setminus A\}_{n\in\mathbb{N}}\) and \(\widetilde{P}_{1}\wedge P_{2}\) (both countable partitions of \(B\)). Let \(P=\{E_{n}\}_{n\in\mathbb{N}}\) be an arbitrary countable partition of \(B\), with \(P\geq\widetilde{P}_{1}\wedge P_{2}\). We observe that \(P^{{}^{\prime\prime}}=\{E_{n}\cap A\}_{n\in\mathbb{N}}\) is also a partition of \(A\) and \(P^{{}^{\prime\prime}}\geq P_{1}\). 
If \(t_{n}\in E_{n}\cap A,n\in\mathbb{N}\) we have \[\max\left\{|T_{h}(B)-\sum\limits_{n=0}^{\infty}h(t_{n})\nu(E_{n})|,\,|T_{h}(A )-\sum\limits_{n=0}^{\infty}h(t_{n})\nu(E_{n}\cap A)|\right\}<\frac{ \varepsilon}{2}.\] Therefore \[T_{h}(A)-T_{h}(B) \leq \left|T_{h}(A)-\sum\limits_{n=0}^{\infty}h(t_{n})\nu(E_{n}\cap A )\right|+\] \[+ \left[\sum\limits_{n=0}^{\infty}h(t_{n})\nu(E_{n}\cap A)-\sum \limits_{n=0}^{\infty}h(t_{n})\nu(E_{n})\right]+\] \[+ \left|\sum\limits_{n=0}^{\infty}h(t_{n})\nu(E_{n})-T_{h}(B)\right|<\] \[< \varepsilon+\left[\sum\limits_{n=0}^{\infty}h(t_{n})m(E_{n}\cap A )-\sum\limits_{n=0}^{\infty}h(t_{n})m(E_{n})\right].\] Since, by the hypotheses, \(\sum\limits_{n=0}^{\infty}h(t_{n})\nu(E_{n}\cap A)\leq\sum\limits_{n=0}^{\infty}h( t_{n})\nu(E_{n})\), then \(T_{h}(A)\leq T_{h}(B)\). ### Comparison with other types of integrability In the non additive case there are other types of integral that can be considered. We present here some comparative results with the Gould and Birkhoff simple ones. We recall that **Definition 3.7**.: ( [9, Definition 3.2]) A vector function \(h:S\to X\) is called _Birkhoff simple \(\nu\)-integrable (on \(S\))_ if there exists \(b\in X\) such that for every \(\varepsilon>0\), there exists a countable partition \(P_{\varepsilon}\) of \(S\) so that for every other countable partition \(P=\{A_{n}\}_{n\in\mathbb{N}}\) of \(S\), with \(P\geq P_{\varepsilon}\) and every \(s_{n}\in A_{n},n\in\mathbb{N}\), it holds \[\limsup_{n\to+\infty}\Big{\|}\sum\limits_{k=0}^{n}h(s_{k})\nu(A_{k})-b\Big{\|} <\varepsilon.\] The vector \(b\) is denoted by \((Bs){\int_{S}}h\,\mathrm{d}\nu\) and it is called _the Birkhoff simple integral_ of \(h\) (on \(S\)) with respect to \(\nu\). Let \((\mathcal{P},\leq)\) be the family of all finite partitions of \(S\) ordered by the relation "\(\leq\)" given in (1). Given a vector function \(h:S\to X\), we denote by \(\sigma(P)\) the finite sum: \(\sigma(P):=\sum\limits_{i=1}^{n}h(s_{i})\nu(E_{i})\), for every finite partition of \(S\), \(P=\{E_{i}\}_{i}^{n}\in\mathcal{P}\) and every \(s_{i}\in E_{i},i\in\{1,\ldots,n\}\). Following [43] **Definition 3.8**.: _A function \(h:S\to X\) is called Gould \(\nu\)-integrable(on \(S\)) if there exists \(a\in X\) such that for every \(\varepsilon>0\), there exists a finite partition \(P_{\varepsilon}\) of \(S\), so that for every other finite partition \(P=\{E_{i}\}_{i=1}^{n}\) of \(S\), with \(P\geq P_{\varepsilon}\) and every \(s_{i}\in E_{i},i\in\{1,\ldots,n\},\) we have \(\|\sigma(P)-a\|<\varepsilon\). The vector \(a\) is called the Gould integral of \(h\) with respect to \(\nu\), denoted by \((G){\int_{S}}h\,\mathrm{d}\nu\)._ Observe that \(h:S\to X\) is Gould \(\nu\)-integrable (on \(S\)) if and only if the net \((\sigma(P))_{P\in(\mathcal{P},\leq)}\) is convergent in \(X\), The limit of \((\sigma(P))_{P}\) is exactly the integral \((G)\int_{S}h\,\mathrm{d}\nu\). Let \(Bs_{\nu}^{1}(S)\) and \(G_{\nu}^{1}(S)\) be respectively the families of Birkhoff simple, Gould integrable functions. 
In general \(RL_{\nu}^{1}(S)\subset Bs_{\nu}^{1}(S)\) and the two integrals coincide, this is proved in [12, Theorem 9], while for what concernes the comparison between RL and Gould integrability for bounded functions we have the following relations: * \(RL_{\nu}^{1}(X)=G_{\nu}^{1}(X)\) when \(\nu\) is of finite variation and defined on a complete \(\sigma\)-additive measure by [12, Proposition 2]; * \(RL_{\nu}^{1}(\mathbb{R})=G_{\nu}^{1}(\mathbb{R})\) when \(\nu\) is of finite variation, monotone and \(\sigma\)-subadditive [12, Theorem 10]. * \(RL^{1}_{\nu}(\mathbb{R})\subset G^{1}_{\nu}(\mathbb{R})\) on each atom \(A\in\mathcal{C}\) when \(\nu\) is monotone, null additive and satisfies property (\(\sigma\)), see [12, Theorem 11]. In all the cases the two integrals coincide. Without the \(\sigma\)-additivity of \(\nu\), the second equivalence \(RL^{1}_{\nu}(\mathbb{R})=G^{1}_{\nu}(\mathbb{R})\) for bounded functions does not hold. Suppose \(S=\mathbb{N}\), with \(\mathcal{C}=\mathcal{P}(\mathbb{N})\) and \[\nu(A)=\left\{\begin{array}{ll}0,&card(A)<+\infty\\ 1,&card(A)=+\infty,\end{array}\right.\quad\text{for every }A\in\mathcal{C}.\] Then, the constant function \(h=1\) is RL integrable and then Birkhoff simple integrable and \((Bs)\int_{S}h\,\mathrm{d}\nu=0\). However, \(h\) is not Gould-integrable. In fact, if \(P_{\varepsilon}\) is any finite partition of \(\mathbb{N}\), some of its sets are infinite, so the quantity \(\sigma(P_{\varepsilon})\) is exactly the number of the infinite sets belonging to \(P_{\varepsilon}\). So the quantity \(\sigma(P)\) is unbounded when \(P\) runs over the family of all finer partitions of \(P_{\varepsilon}\). ### Convergence results In this subsection we want to quote some sufficient conditions in order to obtain, under suitable hypotheses, a convergence result of this type \[\lim_{n\to\infty}{}_{(|RL|)}\int_{S}h_{n}\,\mathrm{d}\nu={}_{(|RL|)}\int_{S} \lim_{n\to\infty}h_{n}\,\mathrm{d}\nu,\] for sequences of Riemann-Lebesgue integrable functions. We assume \(\nu\) of finite variation unless otherwise specified. Let \(p\in[1,\infty)\) be fixed. For every real valued function \(h:S\to\mathbb{R}\), with \(|h|^{p}\in RL^{1}_{\nu}(\mathbb{R})\), we associate the following number: \[\|h\|_{p}=\Big{(}{}_{(|RL|)}\int_{S}|h|^{p}\,\mathrm{d}\nu\Big{)}^{\frac{1}{p}}. \tag{8}\] **Theorem 3.9**.: _Let \(h,h_{n}:S\to X\), \(\nu:\mathcal{C}\to[0,+\infty)\) and \(p\in[1,+\infty)\)._ ([27, Theorem 5])**:**: _If \(h,h_{n}\in|RL|^{1}_{\nu}(X)\) for every \(n\in\mathbb{N}\) and \(h_{n}\) converges uniformly to \(h\); or_ ([27, Theorem 6])**:**: _If \(X=\mathbb{R}\), \(\sup_{s\in S,n\in\mathbb{N}}\big{\{}h(s),h_{n}(s)\big{\}}<+\infty\) and \(h_{n}\stackrel{{\widetilde{\nu}}}{{\to}}h\); or_ ([27, Theorem 8])**:**: _If \(X=\mathbb{R}\), \(\nu\) is monotone and \(\widetilde{\nu}\) satisfies condition_ **(E)**_, \(\sup_{s\in S,n\in\mathbb{N}}\big{\{}h(s),h_{n}(s)\big{\}}<+\infty\) and \(h_{n}\stackrel{{\nu-\text{\rm{a}}\varepsilon}}{{\longrightarrow }}h\)_ _then_ \[\lim_{n\to\infty}{}_{(|RL|)}\int_{S}h_{n}\,\mathrm{d}\nu={}_{(|RL|)}\int_{S}h\, \mathrm{d}\nu. \tag{9}\] _Finally_ ([27, Theorem 7]): _If \(X=\mathbb{R}\), \(\nu\) is countable subadditive (not necessarily of finite variation), \(\chi_{E}\cdot|h_{n}-h|^{p}\in|RL|_{\nu}^{1}(\mathbb{R})\), for every \(E\in\mathcal{C}\) and \(\|h_{n}-h\|_{p}\to 0\). Then \(h_{n}\stackrel{{\overline{\nu}}}{{\to}}h\)._ Proof.: We give only the proof of [27, Theorem 6]. 
According to [12, Proposition 1]\(g\), \(g_{n}\), \(g_{n}-g\in|RL|_{\nu}^{1}(\mathbb{R})\), for every \(n\in\mathbb{N}\). Let \(\alpha\in(0,+\infty)\) such that: \[\sup_{s\in S,n\in\mathbb{N}}\big{\{}|g(s)|,\,|g_{n}(s)-g(s)|\big{\}}<\alpha.\] Let \(\varepsilon>0\) be fixed. By hypothesis, there is \(n_{0}(\varepsilon)\in\mathbb{N}\) such that for every \(n\geq n_{0}(\varepsilon)\) \[\widetilde{\nu}(\{s\in S;\ |g_{n}(s)-g(s)|\geq\varepsilon/4\overline{\nu}(S)\}< \varepsilon/4\alpha.\] Then, there exists \(A_{n}\in\mathcal{C}\) such that \(\{s\in S;|g_{n}(s)-g(s)|\geq\varepsilon/4\overline{\nu}(S)\}\subset A_{n}\) and \(\widetilde{\nu}(A_{n})=\overline{\nu}(A_{n})<\varepsilon/4\alpha.\) Using [12, Theorem 3 and Corollary 1], for every \(n\geq n_{0}(\varepsilon)\), it holds that: \[\left|(|RL|)\int_{S}g_{n}d\nu-\left(|RL|\right)\int_{S}g\,\mathrm{ d}\nu\right|\leq\left|(|RL|)\int_{A_{n}}(g_{n}-g)\,\mathrm{d}\nu\right|+ \left|(|RL|)\int_{A_{n}^{c}}(g_{n}-g)\,\mathrm{d}\nu\right|\leq\] \[\leq \overline{\nu}(A_{n})\cdot\sup_{s\in A_{n}}|g_{n}(s)-g(s)|+ \overline{\nu}(A_{n}^{c})\cdot\sup_{s\in A_{n}^{c}}|g_{n}(s)-g(s)|<\varepsilon,\] and this yields the (9). The following theorem establishes a Fatou type result for sequences of Riemann-Lebesgue integrable functions. **Theorem 3.10**.: ([27, Theorem 9] ) _Suppose \(\nu:\mathcal{C}\to[0,+\infty)\) is a monotone set function of finite variation such that \(\widetilde{\nu}\) satisfies_ **(E)**_. For every \(n\in\mathbb{N}\), let \(h_{n}:S\to\mathbb{R}\) be such that \((h_{n})\) is uniformly bounded. Then_ \[\left(|RL|\right)\int_{S}(\liminf_{n}h_{n})\,\mathrm{d}\nu\leq\liminf_{n} \Big{(}\left(|RL|\right)\int_{S}h_{n}\,\mathrm{d}\nu\Big{)}. \tag{10}\] and its consequence **Corollary 3.11**.: ([27, Theorem 10] ) _Suppose \(p\in(1,+\infty)\) and \(\nu:\mathcal{C}\to[0,+\infty)\) is a monotone set function of finite variation such that \(\widetilde{\nu}\) satisfies_ (E)_. Let \(h\), \(h_{n}:S\to\mathbb{R}\) be such that \(h\) is bounded and \((h_{n})_{n}\) is pointwise convergent to \(h\). Let_ \[g_{n}=2^{p-1}(|h_{n}|^{p}+|h|^{p})-|h_{n}-h|^{p},\] _such that \((g_{n})\) is uniformly bounded, \(|h|^{p},|h_{n}|^{p},|h_{n}-g|^{p},g_{n},\inf_{k\geq n}g_{k}\in|RL|_{\nu}^{1}( \mathbb{R})\), for every \(n\in\mathbb{N}\) and \(\|h_{n}\|_{p}\longrightarrow\|h\|_{p}.\) Then_ \[\|h_{n}-h\|_{p}\longrightarrow 0.\] ### Holder and Minkowski type inequalities In the end of this section, we expose a result on the reverse inequalities of Holder and Minkowski type in Riemann-Lebesgue integrability. First of all we need that \(\nu\) satisfies the following property **Definition 3.12**.: _The set function \(\nu:\mathcal{C}\to[0,\infty)\) is called RL-integrable if for all \(E\in\mathcal{C},\chi_{E}\in RL^{1}_{\nu}(\mathbb{R})\) and \(\int_{S}\chi_{E}\,\mathrm{d}\nu=\nu(E).\)_ **Theorem 3.13**.: ([27, Theorem 4] and [28, Theorem 3.4]) _Let \(\nu:\mathcal{C}\to[0,\infty)\) be a countable subadditive RL-integrable set function and let \(g,h:S\to\mathbb{R}\) be measurable functions. Let \(p,q\in(1,\infty)\), with \(p^{-1}+q^{-1}=1.\)_ **(3.13.a):** _If_ \(g\cdot h\in RL^{1}_{\nu}(\mathbb{R})\)_, then_ \[\|g\cdot h\|_{1}\leq\|g\|_{p}\cdot\|h\|_{q}\quad\text{\rm(H\"{o}lder Inequality)}.\] **(3.13.b):** _Let \(p\in[1,\infty)\). 
If \(|g+h|^{p},|g+h|^{q(p-1)},|g|^{p}\) and \(|h|^{p}\) are in \(\in RL^{1}_{\nu}(\mathbb{R})\), then_ \[\|g+h\|_{p}\leq\|g\|_{p}+\|h\|_{p}\quad\text{\rm(Minkowski Inequality)}.\] _Let \(p,q\in(0,\infty)\) such that \(0<p<1\) and \(p^{-1}+q^{-1}=1.\)_ **(3.13.c):** _If \(g\cdot h,|g|^{p},|h|^{q}\in RL^{1}_{\nu}(\mathbb{R})\) and \(0<\text{\rm({\it RL})}\int_{S}|h|^{q}\,\mathrm{d}\nu\), then_ \[\|g\cdot h\|_{1}\geq\|g\|_{p}\cdot\|h\|_{q}\quad\text{\rm(Reverse H\"{o}lder Inequality)}.\] **(3.13.d):** _If \(|g+h|^{p},|g+h|)^{(p-1)q},|g|^{p}\) and \(|h|^{p}\) are RL-integrable, then_ \[\big{\|}\,|g|+|h|\,\big{\|}_{p}\geq||g||_{p}+||h||_{p}\quad\text{\rm(Reverse Minkowski Inequality)}.\] According to [27, Remark 4], for \(p\in[1,\infty)\), the function \(\|\cdot\|_{p}\) defined in (8) is a seminorm on the linear space of measurable \(RL\)-integrable functions. Proof.: We give here only the proof of the reverse part. **(3.13.c):** **:**: If \(\text{\rm({\it RL})}\int_{S}g^{p}\,\mathrm{d}\nu=0\), then according to [27, Theorem 3] it follows \(g\cdot h=0\)\(\nu-a.e.\) In this case, the inequality of integrals is satisfied. Consider \(\text{\rm({\it RL})}\int_{S}g^{p}\,\mathrm{d}\nu>0\). We replace \(a=\frac{|g|}{\text{\rm({\it RL})}\int_{S}|g|^{p}\,\mathrm{d}\nu)^{\frac{1}{p}}}\) and \(b=\frac{|h|}{\text{\rm({\it RL})}\int_{S}|h|^{q}\,\mathrm{d}\nu)^{\frac{1}{q}}}\) in the reverse Young inequality \(ab\geq\frac{a^{p}}{p}+\frac{b^{q}}{q}\), for every \(a,b>0\) and for every \(0<p<1\) with \(\frac{1}{p}+\frac{1}{q}=1\) (see for example [1, 19]). Then \[\frac{|gh|}{\text{\rm({\it RL})}\int_{S}|g|^{p}\,\mathrm{d}\nu)^{\frac{1}{p}} \text{\rm({\it({\it RL})}\int_{S}|h|^{q}\,\mathrm{d}\nu)^{\frac{1}{q}}}}\geq \frac{|g|^{p}}{\text{\rm p({\it({\it RL})}\int_{S}|g|^{p}\,\mathrm{d}\nu)}}+ \frac{|h|^{q}}{\text{\rm({\it({\it RL})}\int_{S}|h|^{q}\,\mathrm{d}\nu)}}.\] Applying [12, Theorems 3 and 6 ] it holds \[\frac{\text{\rm({\it RL})}\int_{S}|gh|\,\mathrm{d}\nu}{\text{\rm({\it({\it RL })}\int_{S}|g|^{p}\,\mathrm{d}\nu)^{\frac{1}{p}}\text{\rm({\it({\it RL})} \int_{S}|h|^{q}\,\mathrm{d}\nu)^{\frac{1}{q}}}}}\geq\frac{\text{\rm({\it RL}) }\int_{S}|g|^{p}\,\mathrm{d}\nu}{\text{\rm({\it RL})}\int_{S}|h|^{q}\,\mathrm{ d}\nu}+\frac{\text{\rm({\it RL})}\int_{S}|h|^{q}\,\mathrm{d}\nu}{\text{\rm({ \it RL})}\int_{S}|h|^{q}\,\mathrm{d}\nu}=1\] and the conclusion yields. **(3.13.d):** By (3.13.c), it results: \[{}^{(RL)}\int_{S}(|g|+|h|)^{p}\,\mathrm{d}\nu = {}^{(RL)}\int_{S}(|g|+|h|)^{p-1}(|g|+|h|)\,\mathrm{d}\nu\geq\] \[\geq {}^{(RL)}\int_{S}(|g|+|h|)^{q(p-1)}\,\mathrm{d}\nu)^{\frac{1}{q}} ({}^{(RL)}\int_{S}|g|^{p}\,\mathrm{d}\nu)^{\frac{1}{p}}+\] \[+ {}^{(RL)}\int_{S}(|g|+|h|)^{q(p-1)}\,\mathrm{d}\nu)^{\frac{1}{q}} (\int_{S}|h|^{p}\,\mathrm{d}\nu)^{\frac{1}{p}}=\] \[= {}^{(RL)}\int_{S}(|g|+|h|)^{q(p-1)}\,\mathrm{d}\nu)^{\frac{1}{q}} (\|g\|_{p}+\|h\|_{p}).\] Dividing the above inequality by \(({}^{(RL)}\int_{S}(|g|+|h|)^{q(p-1)}\,\mathrm{d}\nu)^{\frac{1}{q}}\), we obtain the Reverse Minkowski inequality. ## 4. Interval-valued Riemann-Lebesgue integral In the last years, a particular attention was addressed to the study of interval-valued multifunctions and multimeasures because of their applications in statistics, biology, theory of games, economics, social sciences and software. Interval-valued multifunctions have been applied also to some new directions, involving signal and image processing. 
Motivated by the large number of fields in which the interval-valued multifunctions can be involved, we present some classic properties for the Riemann-Lebesgue integral of an interval-valued multifunction with respect to an interval-valued set multifunction. We begin by recalling some preliminaries. The symbol \(ck(\mathbb{R})\) denotes the family of all non-empty, convex, compact subsets of \(\mathbb{R}\), by convention, \(\{0\}=[0,0]\). We consider on \(ck(\mathbb{R})\) ( [45]) the Minkowski addition \[A\oplus B:=\{a+b\ |\ a\in A,\ b\in B\},\quad\mbox{ for every }A,B\in ck(\mathbb{R})\] and the multiplication by scalars \[\lambda A=\{\lambda a\ |\ a\in A\},\quad\mbox{ for every }\lambda\in\mathbb{R},A\in ck(\mathbb{R}).\] \(d_{H}\) denotes the Hausdorff distance in \(ck(\mathbb{R})\) and it is defined for every \(A,B\in ck(\mathbb{R})\), in the following way: \[d_{H}(A,B)=\max\{e(A,B),\,e(B,A)\},\] where \(e(A,B)=\sup\{d(x,B),\,x\in A\}\). We use the symbol \(\|A\|_{H}\) to denote \(d_{H}(A,\{0\})\). In particular, for closed intervals we have \[d_{H}([r,s],[x,y])=\max\{|x-r|,|y-s|\},\quad\text{for every }r,s,x,y\in \mathbb{R};\] \[d_{H}([0,s],[0,y])=|y-s|,\quad\text{for every }s,y\in\mathbb{R}_{0}^{+};\] \[\|[r,s]\|_{\mathcal{H}}=s,\quad\text{for every }r,s\in\mathbb{R}_{0}^{+}.\] In the subfamily \(L(\mathbb{R})\) of intervals in \(ck(\mathbb{R})\) the following operations are also considered, for every \(r,s,x,y\in\mathbb{R}\) (see the Interval Analysis in [57]): **i):**: \([r,s]^{\star}[x,y]=[rx,sy]\); **ii):**: \([r,s]\subseteq[x,y]\) if and only if \(x\leq r\leq s\leq y\); **iii):**: \([r,s]\preceq[x,y]\) if and only if \(r\leq x\) and \(s\leq y\); (weak interval order, [44]) **iv):**: \([r,s]\wedge[x,y]=[\min\{r,x\},\min\{s,y\}]\); **v):**: \([r,s]\vee[x,y]=[\max\{r,x\},\max\{s,y\}]\). In general there is no relation between the weak interval order and the inclusion; they only coincide on the subfamily \([0,s],s\geq 0\). For every pair of sequences of real numbers \((u_{n})_{n},(v_{n})_{n}\) such that \(0\leq u_{n}\leq v_{n}\), for every \(n\in\mathbb{N}\), we define: **vi):**: \(\inf_{n}[u_{n},\,v_{n}]=[\inf_{n}u_{n},\,\inf_{n}v_{n}]\); **vii):**: \(\sup_{n}[u_{n},\,v_{n}]=[\sup_{n}u_{n},\,\sup_{n}v_{n}]\); **viii):**: \(\liminf_{n}[u_{n},\,v_{n}]=[\liminf_{n}u_{n},\,\liminf_{n}v_{n}]\). We consider \((ck(\mathbb{R}_{0}^{+}),d_{H},\preceq)\), namely the space \(ck(\mathbb{R}_{0}^{+})\) is endowed with the Hausdorff distance and the weak interval order. For two set functions \(\nu_{1},\nu_{2}:\mathcal{C}\rightarrow\mathbb{R}_{0}^{+}\) with \(\nu_{1}(\emptyset)=\nu_{2}(\emptyset)=0\) and \(\nu_{1}(A)\leq\nu_{2}(A)\) for every \(A\in\mathcal{C}\) the set multifunction \(\Gamma:\mathcal{C}\to L(\mathbb{R}_{0}^{+})\) defined by \[\Gamma(A)=\big{[}\nu_{1}(A),\nu_{2}(A)\big{]},\qquad\text{ for every }A\in\mathcal{C}. \tag{11}\] is called an interval-valued set function. In this case, \(\Gamma\) is of finite variation if and only if \(\nu_{2}\) is of finite variation. Let \(\Gamma:\mathcal{C}\to L(\mathbb{R}_{0}^{+})\). We say that \(\Gamma\) is an interval-valued multisubmeasure if * \(\Gamma(\emptyset)=\{0\}\); * \(\Gamma(A)\preceq\Gamma(B)\) for every \(A,B\in\mathcal{C}\) with \(A\subseteq B\) (monotonicity); * \(\Gamma(A\cup B)\preceq\Gamma(A)\oplus\Gamma(B)\) for every disjoint sets \(A,B\in\mathcal{C}\). (subadditivity). By [61, Remark 3.6]\(\Gamma\) is a multisubmeasure with respect to \(\preceq\) if and only if \(\nu_{i}\), \(i=1,2\) are submeasures. 
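Before turning to the integral itself, the interval operations recalled above are easy to model concretely: Minkowski addition acts endpoint-wise, \(\lambda A\) scales both endpoints, \(d_{H}([r,s],[x,y])=\max\{|x-r|,|y-s|\}\), and \(\preceq\) compares endpoints separately. The small Python class below is only an illustrative model of \(L(\mathbb{R}_{0}^{+})\); it is not taken from any library.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __post_init__(self):
        assert 0 <= self.lo <= self.hi, "elements of L(R_0^+) satisfy 0 <= lo <= hi"

    def __add__(self, other):                       # Minkowski addition A ⊕ B
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def scale(self, lam):                           # λA, for λ >= 0
        return Interval(lam * self.lo, lam * self.hi)

    def weak_le(self, other):                       # [r,s] ⪯ [x,y]  iff  r <= x and s <= y
        return self.lo <= other.lo and self.hi <= other.hi

    def d_H(self, other):                           # Hausdorff distance between intervals
        return max(abs(self.lo - other.lo), abs(self.hi - other.hi))

A, B = Interval(1.0, 3.0), Interval(2.0, 5.0)
print(A + B)                    # Interval(lo=3.0, hi=8.0)
print(A.scale(2.0))             # Interval(lo=2.0, hi=6.0)
print(A.weak_le(B), A.d_H(B))   # True 2.0
```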
**Definition 4.1**.: _It is said that \(\Gamma\) is a \(d_{H}\)-multimeasure if for every sequence of pairwise disjoint sets \((A_{n})_{n}\subset\mathcal{C}\) such that \(\cup_{n}^{\infty}A_{n}=A\)_ \[\lim_{n\to\infty}d_{H}(\sum_{k=1}^{n}\Gamma(A_{k}),\Gamma(A))=0,\] ### The interval-valued RL-integral and its properties In what follows, all the interval-valued set functions we consider are multisubmeasures. Given \(h_{1},h_{2}:S\to\mathbb{R}_{0}^{+}\) with \(h_{1}(s)\leq h_{2}(s)\) for all \(s\in S\), let \(H:S\to L(\mathbb{R}_{0}^{+})\) be the interval-valued multifunction defined by \[H(s):=\big{[}h_{1}(s),h_{2}(s)\big{]},\qquad\text{for every $s\in S$}. \tag{12}\] \(H\) is bounded if and only if \(h_{2}\) is bounded. If \(H,G:S\to L(\mathbb{R}_{0}^{+})\) are as in (12) so that \(G\preceq H\) or \(G\subset H\) and \(H\) is bounded, then \(G\) is bounded too. For every countable tagged partition \(P=\{(B_{n},s_{n}),n\in\mathbb{N}\}\) of \(S\) we denote by \[\sigma_{H,\Gamma}(P) = \sum_{n=1}^{\infty}H(s_{n})\,\raisebox{1.0pt}{\text{\textvisibles}} \,\Gamma(B_{n})=\sum_{n=1}^{\infty}\big{[}h_{1}(s_{n})\nu_{1}(B_{n}),h_{2}(s_{ n})\nu_{2}(B_{n})\big{]}=\] \[= \Big{\{}\sum_{n=1}^{\infty}y_{n},y_{n}\in\big{[}h_{1}(s_{n})\nu_{ 1}(B_{n}),h_{2}(s_{n})\nu_{2}(B_{n})\big{]},n\in\mathbb{N}\Big{\}}.\] The set \(\sigma_{H,\Gamma}(P)\) is closed and convex in \(\mathbb{R}_{0}^{+}\), so it is an interval \(\big{[}h_{1,H,\Gamma}^{P},h_{2,H,\Gamma}^{P}\big{]}\). **Definition 4.2**.: A multifunction \(H:S\to L(\mathbb{R}_{0}^{+})\) is called Riemann-Lebesgue (RL in short) integrable with respect to \(\Gamma\) (on \(S\)) if there exists \([c,d]\in L(\mathbb{R}_{0}^{+})\) such that for every \(\varepsilon>0\), there exists a countable partition \(P_{\varepsilon}\) of \(S\), so that for every tagged partition \(P=\{(B_{n},s_{n})\}_{n\in\mathbb{N}}\) of \(S\) with \(P\geq P_{\varepsilon}\), * the series \(\sigma_{H,\Gamma}(P)\) is convergent and * \(d_{H}(\sigma_{H,\Gamma}(P),[c,d])<\varepsilon\). The interval \([c,d]\) is called the Riemann-Lebesgue integral of \(H\) with respect to \(\Gamma\) and it is denoted \[[c,d]={}_{(RL)}\int_{S}H\ \mathrm{d}\Gamma.\] The symbol \(RL_{\Gamma}^{1}(L(\mathbb{R}_{0}^{+}))\) denotes the class of all interval-valued functions that are Riemann-Lebesgue integrable with respect to \(\Gamma\) on \(S\). **Example 4.3**.: ([27, Example 5]) Suppose \(S=\{s_{n}\,|\,n\in\mathbb{N}\}\) is countable, \(\{s_{n}\}\in\mathcal{C}\), for every \(n\in\mathbb{N}\), and let \(H:S\to L(\mathbb{R}_{0}^{+})\) be such that the series \(\sum_{n=0}^{\infty}h_{i}(s_{n})\nu_{i}(\{s_{n}\}),\,i\in\{1,2\},\) are convergent. Then \(H\) is \(RL\) integrable with respect to \(\Gamma\) and \[{}^{(RL)}\int_{S}H\,\,\mathrm{d}\Gamma=\left[\sum_{n=0}^{\infty}h_{1}(s_{n}) \nu_{1}(\{s_{n}\}),\sum_{n=0}^{\infty}h_{2}(s_{n})\nu_{2}(\{s_{n}\})\right].\] Observe moreover that, in this case, the \(RL\)-integrability of \(H\) with respect to \(\Gamma\) implies that the product \(H\,{\boldsymbol{\cdot}}\,H\) is integrable in the same sense, where \((H\,{\boldsymbol{\cdot}}\,H)(s)=[h_{1}^{2}(s),h_{2}^{2}(s)],\) for every \(s\in S\). In particular, if \(H\) is a discrete or countable interval-valued signal, then the integral \({}_{(RL)}\int_{S}H\,{\boldsymbol{\cdot}}\,H\,\mathrm{d}\Gamma\) represents the energy of the signal, see for example [22, Example 2]. 
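For a countable space as in Example 4.3, the interval-valued integral reduces to a pair of series, one for each endpoint, which is also the content of Theorem 4.4 below. The truncated numerical sketch that follows uses arbitrary illustrative choices of \(h_{1},h_{2},\nu_{1},\nu_{2}\) on \(S=\mathbb{N}\); it is not tied to any particular application.

```python
import math

# Illustrative data on S = {0, 1, 2, ...}, with h1 <= h2 and nu1 <= nu2 pointwise.
nu1 = lambda n: 2.0 ** -(n + 2)
nu2 = lambda n: 2.0 ** -(n + 1)
h1  = lambda n: 1.0
h2  = lambda n: 1.0 + 1.0 / (n + 1)

def interval_RL_integral(terms: int = 60):
    """Truncated series giving [sum h1(s_n) nu1({s_n}), sum h2(s_n) nu2({s_n})],
    i.e. the interval-valued RL integral computed on the partition into singletons."""
    lo = sum(h1(n) * nu1(n) for n in range(terms))
    hi = sum(h2(n) * nu2(n) for n in range(terms))
    return lo, hi

print(interval_RL_integral())   # approximately (0.5, 1.6931...)
print(0.5, 1 + math.log(2))     # exact endpoints of the interval [1/2, 1 + ln 2]
```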
If \(\Gamma\) is of finite variation and \(H:S\to L(\mathbb{R}_{0}^{+})\) is bounded and such that \(H=\{0\}\)\(\Gamma\)-a.e., then \(H\) is \(\Gamma\)-integrable and \({}_{(RL)}\int_{S}H\,\mathrm{d}\Gamma=\{0\}.\) In the sequel, some properties of interval-valued \(RL\) integrable multifunctions ([22]) are presented for \(\Gamma\) as in (11) and the multifunctions as in (12). The following theorem shows a characterization of the \(RL\) integrability. **Theorem 4.4**.: ( [22, Proposition 2]) _An interval-valued multifunction \(H=[h_{1},h_{2}]\) is \(RL\) integrable with respect to \(\Gamma\) on \(S\) if and only if \(h_{1}\) and \(h_{2}\) are \(RL\) integrable with respect to \(\nu_{1}\) and \(\nu_{2}\) respectively and_ \[{}^{(RL)}\int_{S}H\,\mathrm{d}\Gamma=\Big{[}{}_{(RL)}\int_{S}h_{1}\,\mathrm{d }\nu_{1},{}^{(RL)}\int_{S}h_{2}\,\mathrm{d}\nu_{2}\Big{]}.\] Proof.: By the \(RL\) integrability of \(H\) there exists \([a,b]\) such that for every \(\varepsilon>0\), there exists a countable partition \(P_{\varepsilon}\) of \(S\), so that for every tagged partition \(P=\{(A_{n},t_{n})\}_{n\in\mathbb{N}}\) of \(S\) with \(P\geq P_{\varepsilon}\), the series \(\sigma_{H,\Gamma}(P)\) is convergent and \[\max\{|\sum_{n=1}^{\infty}h_{1}(t_{n})\nu_{1}(A_{n})-a|,\,|\sum_{ n=1}^{\infty}h_{2}(t_{n})\nu_{2}(A_{n})-b|\,\}=\] \[d_{H}(\sum_{n=1}^{\infty}\big{[}h_{1}(s_{n})\nu_{1}(B_{n}),h_{2}( s_{n})\nu_{2}(B_{n})\big{]}<\varepsilon.\] for every tagged partition \(P=\{(A_{n},t_{n})\}_{n\in\mathbb{N}}\) of \(S\) with \(P\geq P_{\varepsilon}\) and then \(h_{i}\) are \(RL\) integrable with respect to \(\nu_{i}\), \(i=1,2\). So the first implication follows from the convexity of the \(RL\) integral. For the converse, for every \(\varepsilon>0\), let \(P_{\varepsilon,h_{i}},i=1,2\) be two countable partitions verifying the \(RL\) integrability definition for \(h_{i},i=1,2\) respectively. Let \(P_{\varepsilon}\geq P_{\varepsilon,h_{1}}\wedge P_{\varepsilon,h_{2}}\) be a countable partition of \(S\), then, for every finer partition and for every \(t_{n}\in B_{n}\) it is \[\left|\sum_{n=0}^{+\infty}h_{i}(t_{n})\nu_{i}(B_{n})-{}_{(RL)}\int_{S}h_{i}d\nu_ {i}\right|<\varepsilon,\quad i=1,2.\] Since \(h_{i}\), \(i=1,2\) are selections of \(H\) then \[d_{H}\left(\sigma_{H,\Gamma}(P),\left[{}_{(RL)}\int_{S}h_{1}d\nu_{1},\,{}_{(RL )}\int_{S}h_{2}d\nu_{2}\right]\right)\leq\varepsilon\] and then the only if assertion follows. According to Theorem 3.4 and the previous theorem, if \(\nu_{2}\) is of finite variation and \(h_{2}\) is bounded, then \(H\) is RL integrable. Another consequence of the previous theorem, together with Theorem 3.3.b) is the inheritance of the \(RL\) integral on the subsets \(E\in\mathcal{C}\). In fact **Corollary 4.5**.: _Let \(H\in RL^{1}_{\Gamma}(L(\mathbb{R}^{+}_{0}))\), then \(H\) is \(RL\) integrable with respect to \(\Gamma\) on every \(E\in\mathcal{C}\). Moreover, \(H\) is \(RL\) integrable with respect to \(\Gamma\) on \(E\in\mathcal{C}\) if and only if \(H\chi_{E}\) is \(RL\) integrable with respect to \(\Gamma\) on \(S\). In this case, for every \(E\in\mathcal{C}\),_ \[{}_{(RL)}\int_{E}H\ \mathrm{d}\Gamma={}_{(RL)}\int_{S}H\chi_{E}\ \mathrm{d}\Gamma.\] Moreover the RL integral is homogeneous with respect to both interval-valued multifunctions \(H\) and \(\Gamma\). 
**Theorem 4.6**.: ( [22, Remark 7, Theorem 1 and Proposition 8]) _If \(H,H_{1},H_{2}\in RL^{1}_{\Gamma}(L(\mathbb{R}^{+}_{0}))\) then for every \(\alpha\in[0,\infty)\):_ **(4.6.a):**: \(\alpha H\in RL^{1}_{\Gamma}(L(\mathbb{R}^{+}_{0}))\) _and_ \[{}_{(RL)}\int_{S}\alpha H\,\mathrm{d}\Gamma=\alpha{}_{(RL)}\int_{S}H\, \mathrm{d}\Gamma.\] **(4.6.b):**: \(H\in RL^{1}_{\alpha\Gamma}(L(\mathbb{R}^{+}_{0}))\) _and_ \[{}_{(RL)}\int_{S}H\,\mathrm{d}(\alpha\Gamma)=\alpha\int_{S}H\, \mathrm{d}\Gamma.\] **(4.6.c):**: \(H_{1}\oplus H_{2}\in RL^{1}_{\Gamma}(L(\mathbb{R}^{+}_{0}))\) _and_ \[{}_{(RL)}\int_{S}(H_{1}\oplus H_{2})\,\mathrm{d}\Gamma={}_{(RL)} \int_{S}H_{1}\,\mathrm{d}\Gamma\oplus{}_{(RL)}\int_{S}H_{2}\,\mathrm{d}\Gamma.\] Proof.: We give here the proof of (4.6.c). Namely we prove that for every pair of interval-valued multifunctions \(H_{1},H_{2}\), which are \(RL\) integrable with respect to \(\Gamma\) we have that \[{}_{(RL)}\int_{S}(H_{1}\oplus H_{2})\,\mathrm{d}\Gamma={}_{(RL)} \int_{S}H_{1}\,\mathrm{d}\Gamma\oplus{}_{(RL)}\int_{S}H_{2}\,\mathrm{d}\Gamma. \tag{13}\] Let \(\varepsilon>0\) be fixed. Since \(H_{1},H_{2}\) are \(RL\) integrable with respect to \(\Gamma\), there exists a countable partition \(P_{\varepsilon}\in\mathcal{P}\) such that for every \(P=\{A_{n}\}_{n\in\mathbb{N}}\geq P_{\varepsilon}\) and every \(t_{n}\in A_{n}\), \(n\in\mathbb{N}\), the series \(\sigma_{H_{i},\Gamma}(P)\), \(i=1,2\) are convergent and \[d_{H}\left(\sigma_{H_{i},\Gamma}(P),{}_{(RL)}\int_{S}H_{i}\,\mathrm{d}\Gamma \right)<\frac{\varepsilon}{2},\qquad i=1,2.\] Then \(\sigma_{H_{1}\oplus H_{2},\Gamma}(P)\) is convergent and, by [45, Proposition 1.17], \[d_{H}\left(\sigma_{H_{1}\oplus H_{2},\Gamma}(P),{}_{(RL)}\int_{S}H_{1}\, \mathrm{d}\Gamma\oplus{}_{(RL)}\int_{S}H_{2}\,\mathrm{d}\Gamma\right)<\varepsilon.\] So \(H_{1}\oplus H_{2}\) is \(RL\) integrable with respect to \(\Gamma\) and formula (13) is satisfied. If \(H\in RL_{\Gamma}^{1}(L(\mathbb{R}_{0}^{+}))\), then we may consider \(T_{H}:\mathcal{C}\to L(\mathbb{R}_{0}^{+})\) defined by \[T_{H}(E)={}_{(RL)}\int_{E}H\,\mathrm{d}\Gamma,\qquad\text{for every $E\in \mathcal{C}$}. \tag{14}\] In the following theorem we present some properties of the interval-valued integral set operator \(T_{H}\). **Theorem 4.7**.: _Let \(\Gamma:\mathcal{C}\to L(\mathbb{R}_{0}^{+})\) be so that \(\nu_{2}\) is of finite variation and \(H:S\to L(\mathbb{R}_{0}^{+})\) is bounded. Then the following properties hold:_ **(4.7.a):**: \(T_{H}\) _is a finitely additive multimeasure, i.e. for every_ \(A,B\in\mathcal{C}\)_, with_ \(A\cap B=\emptyset\) _it is_ \(T_{H}(A\cup B)=T_{H}(A)\oplus T_{H}(B).\)__(__[_22_, Theorem 1]__)._ **(4.7.b):**: _Let_ \(G,H\in RL_{\Gamma}^{1}(L(\mathbb{R}_{0}^{+}))\)_. 
Then, for every_ \(E\in\mathcal{C}\)_, by_ _[_22_, Propositions 4 and 5]__,_ _if_ \(G\preceq H\)_, then_ \(T_{G}(E)\preceq T_{H}(E)\)_;_ _if_ \(G\subseteq H\)_, then_ \(T_{G}(E)\subseteq T_{H}(E)\)_._ _Moreover, by_ _[_22_, Corollary 1]__, for every_ \(E\in\mathcal{C}\)_:_ \[T_{G\wedge H}(E)\preceq T_{G}(E)\wedge T_{H}(E);\quad T_{G}(E)\lor T_{H}(E) \preceq T_{G\lor H}(E)\] _Finally from_ _[_22_, Propositions 6 and 7, Theorem 2]_we have that_ **(4.7.c):**: \(\bullet\)__\(\|T_{H}(S)\|_{\mathcal{H}}={}_{(RL)}\int_{S}h_{2}\,\mathrm{d}\nu_{2}={}_{(RL)} \int_{S}\|H\|_{\mathcal{H}}\,\mathrm{d}\|\Gamma\|_{\mathcal{H}}.\)__ \(\bullet\)__\(\overline{T}_{H}(S)={}_{(RL)}\int_{S}h_{2}\,\mathrm{d}\nu_{2}.\)__ \(\bullet\)__\(T_{H}\ll\overline{\Gamma}\) _(in the_ \(\varepsilon\) _-_ \(\delta\) _sense) and_ \(T_{H}\) _is of finite variation._ \(\bullet\) _If moreover_ \(\Gamma\) _is o-continuous (exhaustive resp.), then_ \(T_{H}\) _is also o-continuous (exhaustive resp.)._ \(\bullet\) _If_ \(\Gamma\) _is monotone, then_ \(T_{H}\) _is monotone too._ \(\bullet\) _If_ \(\Gamma\) _is a_ \(d_{H}\)_-multimeasure, then_ \(T_{H}\) _is countably additive._ Proof.: We point out that the additivity of \(T_{H}\) is indipendent of the additivity of \(H\). In fact, by Corollary 4.5 we have that \(T_{H}(A)\in L(\mathbb{R}_{0}^{+})\) for every \(A\in\mathcal{C}\) Moreover for every \(A,B\in\mathcal{C}\) with \(A\cap B=\emptyset\), by Theorem 4.6.c) \[T_{H}(A\cup B) = {}_{(RL)}\int_{S}\!H\chi_{A\cup B}\,\mathrm{d}\Gamma={}_{(RL)} \int_{S}\!(H\chi_{A}\oplus H\chi_{B})\,\mathrm{d}\Gamma=\] \[= {}_{(RL)}\int_{S}\!H\chi_{A}\mathrm{d}\Gamma\oplus{}_{(RL)}\int_{ S}\!H\chi_{B}\,\mathrm{d}\Gamma=T_{H}(A)\oplus T_{H}(B).\] The RL integral is additive and monotone with respect to the weak interval order and the inclusion one relative to \(\Gamma\), as we can see in the following theorem. **Theorem 4.8**.: ( [22, Theorems 3 and 4]) _Let \(\Gamma_{1},\,\Gamma_{2}:\mathcal{A}\to L(\mathbb{R}_{0}^{+})\) be multisubmeasures of finite variation, with \(\Gamma_{1}(\emptyset)=\Gamma_{2}(\emptyset)=\{0\}\) and suppose \(H,G:S\to L(\mathbb{R}_{0}^{+})\) are bounded multifunctions. Then the following properties hold for every \(E\in\mathcal{C}\):_ **(4.8.a):**: _If_ \(\Gamma:=\Gamma_{1}\oplus\Gamma_{2}\)_, then_ \({}_{(RL)}\int_{E}H\,\mathrm{d}\Gamma={}_{(RL)}\int_{E}H\,\mathrm{d}\Gamma_{1} \oplus{}_{(RL)}\int_{E}H\,\mathrm{d}\Gamma_{2}\)_._ **(4.8.b):**: _If_ \(\Gamma_{1}\preceq\Gamma_{2}\)_, then_ \({}_{(RL)}\int_{E}H\,\mathrm{d}\Gamma_{1}\preceq{}_{(RL)}\int_{E}H\,\mathrm{d} \Gamma_{2}\)_._ **(4.8.c):**: _If_ \(\Gamma_{1}\subseteq\Gamma_{2}\)_, then_ \({}_{(RL)}\int_{E}H\,\mathrm{d}\Gamma_{1}\subseteq{}_{(RL)}\int_{E}H\,\mathrm{ d}\Gamma_{2}\)_._ **(4.8.d):**: \[d_{H}\Big{(}{}_{(RL)}\int_{S}G\,\mathrm{d}\Gamma,{}_{(RL)}\int_{S}H\,\mathrm{d} \Gamma\Big{)}\leq\sup_{s\in S}d_{H}(G(s),H(s))\cdot\overline{\Gamma}(S).\] ### Convergence results In the following we present some results of [22, 26, 27] regarding convergent sequences of Riemann-Lebesgue integrable interval-valued multifunctions. Firstly we recall the definitions of convergence almost everywhere and convergence in measure for interval-valued multimeasures. **Definition 4.9**.: Let \(\nu:\mathcal{C}\to[0,\infty)\) be a set function with \(\nu(\emptyset)=0,\,H:S\to L(\mathbb{R}_{0}^{+})\) a multifunction and a sequence of interval-valued multifunctions \(H_{n}:S\to L(\mathbb{R}_{0}^{+})\), for every \(n\in\mathbb{N}\). 
It is said that: **(4.9.i):**: \((H_{n})_{n}\) converges \(\nu\)-almost everywhere to \(H\) on \(S\) ( \(H_{n}\stackrel{{\nu-a.e.}}{{\longrightarrow}}H\)) if there exists \(B\in\mathcal{C}\) with \(\nu(B)=0\) and \(\lim\limits_{n\to\infty}d_{H}(H_{n}(s),H(s))=0,\) for every \(s\in S\setminus B\). **(4.9.ii):**: \((H_{n})\)\(\nu\)-converges to \(H\) on \(S\) ( \(H_{n}\stackrel{{\nu}}{{\longrightarrow}}H\)) if for every \(\delta>0,\,B_{n}(\delta)=\{s\in S;d_{H}(H_{n}(s),H(s))\geq\delta\}\in \mathcal{C}\) and \(\lim\limits_{n\to\infty}\nu(B_{n}(\delta))=0.\) **Theorem 4.10**.: ( [27, Theorem 11]) _Let \(\Gamma:\mathcal{C}\to L(\mathbb{R}_{0}^{+})\), \(\Gamma=[\nu_{1},\nu_{2}],\) so that \(\nu_{2}\) is of finite variation. Let \(H=[h_{1},h_{2}],\,H_{n}=[h_{1}^{(n)},h_{2}^{(n)}]:S\to L(\mathbb{R}_{0}^{+})\) be multifunctions such that \(\sup\{h_{2}(s),h_{2}^{(n)}(s),s\in S,\,n\in\mathbb{N}\}<+\infty\) and \(H_{n}\stackrel{{\widetilde{\Gamma}}}{{\rightarrow}}H\). Then_ \[\lim\limits_{n\to\infty}d_{H}\Big{(}{}_{(RL)}\int_{S}H_{n}\,\mathrm{d}\Gamma, {}_{(RL)}\int_{S}H\,\mathrm{d}\Gamma\Big{)}=0.\] **Theorem 4.11**.: ([27, Theorem 12]) _Suppose \(\nu:\mathcal{C}\to[0,\infty)\) is monotone, of finite variation and \(\widetilde{\nu}\) satisfies_ **(E)**_. Let \(H=[h_{1},h_{2}],\,H_{n}=[h_{1}^{(n)},h_{2}^{(n)}]:S\to L(\mathbb{R}_{0}^{+})\) be multifunctions such that \(\sup\{h_{2}(s),h_{2}^{(n)}(s),s\in S,\,n\in\mathbb{N}\}<+\infty\) and \(H_{n}\overset{\nu-\mathrm{e}e}{\to}H\), then_ \[\lim_{n\to\infty}d_{H}\Big{(}{}^{(RL)}\int_{S}H_{n}\,\mathrm{d}\nu,{}^{(RL)} \int_{S}H\,\mathrm{d}\nu\Big{)}=0.\] **Theorem 4.12**.: ([27, Theorem 13]) _Let \(\Gamma:=[\nu_{1},\nu_{2}]:\mathcal{C}\to L(\mathbb{R}_{0}^{+})\) with \(\nu_{1}\), \(\nu_{2}\) monotone set functions satisfying_ **(E)** _and \(\nu_{2}\) of finite variation. Let \(H=[h_{1},h_{2}],\,H_{n}=[h_{1}^{(n)},h_{2}^{(n)}]:S\to L(\mathbb{R}_{0}^{+})\) be multifunctions such that \(\sup\{h_{2}(s),h_{2}^{(n)}(s),s\in S,\,n\in\mathbb{N}\}<+\infty\) and \(H_{n}\overset{\widetilde{\Gamma}-\mathrm{e}e}{\to}H\), then_ \[\lim_{n\to\infty}d_{H}\Big{(}{}^{(RL)}\int_{S}H_{n}\,\mathrm{d}\Gamma,{}^{(RL) }\int_{S}H\,\mathrm{d}\Gamma\Big{)}=0.\] A Fatou type theorem for sequences of RL integrable interval-valued multifunctions holds. **Theorem 4.13**.: ([27, Theorem 14]) _Suppose \(\nu:\mathcal{C}\to[0,\infty)\) is monotone with \(0<\overline{\nu}(S)<\infty\) and \(\widetilde{\nu}\) satisfies_ **(E)**_. For every \(n\in\mathbb{N}\), let \(H_{n}=[h_{1}^{(n)},h_{2}^{(n)}]\) be such that \((h_{2}^{(n)})_{n}\) is uniformly bounded. Then_ \[{}^{(RL)}\int_{S}(\liminf_{n}H_{n})\,\mathrm{d}\nu\preceq{}^{(RL)}\liminf_{n} \int_{S}H_{n}\,\mathrm{d}\nu.\] In the sequel, some Lebesgue type theorems are presented. **Theorem 4.14**.: (Monotone Convergence, [26, Proposition 1]) _Suppose \(\Gamma=[\nu_{1},\nu_{2}]\) with \(\nu_{i}\in\mathscr{M}(S),\,i\in\{1,2\}\) of finite variation. For every \(n\in\mathbb{N}\), let \(H_{n}=[h_{1}^{(n)},h_{2}^{(n)}]\) be a multifunction such that \((h_{2}^{(n)})\) is uniformly bounded and \(H_{n}\preceq H_{n+1}\) for every \(n\in\mathbb{N}\). Then_ \[{}^{(RL)}\int_{S}\bigvee_{n}H_{n}\,\mathrm{d}\Gamma=\bigvee_{n}{}^{(RL)}\int_ {S}H_{n}\,\mathrm{d}\Gamma.\] It holds also a convergence type theorem for varying multisubmeasures. **Theorem 4.15**.: ([26, Theorem 4.2]) _Let \((H_{n})_{n}:=([h_{1}^{(n)},h_{2}^{(n)}])_{n}\) be a sequence of bounded multifunctions, and \((\Gamma_{n})_{n}:=([\nu_{1}^{(n)},\nu_{2}^{(n)}])_{n}\) a sequence of multisubmeasures. 
Suppose there exist an interval-valued multisubmeasure \(\Gamma:=[\nu_{1},\nu_{2}]\), with \(\nu_{2}\) of finite variation, and a bounded multifunction \(H:=[h_{1},h_{2}]\) such that:_ **(4.15.a):**: \(H_{n}\preceq H_{n+1}\) _for every_ \(n\in\mathbb{N}\) _and_ \(d_{H}(H_{n},H)\to 0\) _uniformly on_ \(S\)_,_ **(4.15.b):**: \(\Gamma_{n}\preceq\Gamma_{n+1}\preceq\Gamma\) _for every_ \(n\in\mathbb{N}\) _and_ \((\Gamma_{n})_{n}\) _setwise converges to_ \(\Gamma\) _(namely_ \(\lim_{n}\Gamma_{n}(A)=\Gamma(A)\) _for every_ \(A\in\mathcal{C}\)_)._ _Then_ \[\lim_{n\to\infty}d_{H}\Big{(}{}^{(RL)}\int_{S}H_{n}\,\mathrm{d}\Gamma_{n},{}^{( RL)}\int_{S}H\,\mathrm{d}\Gamma\Big{)}=0.\] Proof.: By 4.15.b) we have \(\nu_{i}^{(n)}\leq\nu_{i}^{(n+1)}\leq\nu_{i}\) for every \(i=1,2\) and for every \(n\in\mathbb{N}\), moreover \(\lim_{n\to\infty}\overline{\nu}_{i}^{(n)}(A)=\overline{\nu}_{i}(A)\) for every \(A\in\mathcal{C}\) and \(i=1,2\). Since \(H_{n},H\) are bounded and \(\nu_{2}\) is of finite variation then, by [12, Proposition 1] and [22, Proposition 2], \(H_{n}\), \(H\in RL_{\Gamma_{k}}^{1}(L(\mathbb{R}_{0}^{+}))\cap RL_{\Gamma}^{1}(L(\mathbb{ R}_{0}^{+}))\) for every \(n,k\in\mathbb{N}\). Let \(\varepsilon>0\) be fixed and let \(n(\varepsilon)\) be such that \(d_{H}(H_{n},H)\leq\varepsilon\) for every \(n\geq n(\varepsilon)\). By [22, Theorem 3], for every \(n\geq n(\varepsilon)\), \[d_{H}\left({}^{(RL)}\int_{S}H_{n}\,\mathrm{d}\Gamma_{n},\,{}^{( RL)}\int_{S}H\,\mathrm{d}\Gamma\right)\leq\] \[\leq d_{H}\left({}^{(RL)}\int_{S}H_{n}\,\mathrm{d}\Gamma_{n},\,{}^{( RL)}\int_{S}H\,\mathrm{d}\Gamma_{n}\right)+d_{H}\left({}^{(RL)}\int_{S}H\, \mathrm{d}\Gamma_{n},\,{}^{(RL)}\int_{S}H\,\mathrm{d}\Gamma\right)\leq\] \[\leq \varepsilon\overline{\Gamma}_{n}(S)+d_{H}\left({}^{(RL)}\int_{S}H \,\mathrm{d}\Gamma_{n},\,{}^{(RL)}\int_{S}H\,\mathrm{d}\Gamma\right)\leq\] \[\leq \varepsilon\overline{\nu}_{2}(S)+d_{H}\left(\left[{}^{(RL)}\int_{ S}h_{1}\mathrm{d}\nu_{1}^{(n)},\,{}^{(RL)}\int_{S}h_{2}\mathrm{d}\nu_{2}^{(n)} \right],\left[{}^{(RL)}\int_{S}h_{1}\mathrm{d}\nu_{1},\,{}^{(RL)}\int_{S}h_{2} \mathrm{d}\nu_{2}\right]\right).\] We have to evaluate \[d_{H}\left(\left[{}^{(RL)}\int_{S}h_{1}\mathrm{d}\nu_{1}^{(n)},{} ^{(RL)}\int_{S}h_{2}\mathrm{d}\nu_{2}^{(n)}\right],\left[{}^{(RL)}\int_{S}h_{1 }\mathrm{d}\nu_{1},\,{}^{(RL)}\int_{S}h_{2}\mathrm{d}\nu_{2}\right]\right)=\] \[=\max_{i=1,2}\left\{{}^{(RL)}\int_{S}h_{i}\mathrm{d}\nu_{i}-{}^{( RL)}\int_{S}h_{i}\mathrm{d}\nu_{i}^{(n)}\right\}\] Using now [26, Lemma 4.1] the last term tends to \(0\) for \(n\to\infty\) and so \[\lim_{n\to\infty}d_{H}\left({}^{(RL)}\int_{S}H_{n}\,\mathrm{d}\Gamma_{n},\,{} ^{(RL)}\int_{S}H\,\mathrm{d}\Gamma\right)=0.\] Analogously to [22, Remark 3], Theorem 4.15 can be extended to the bounded sequences \((H_{n})_{n}\) converging \(\overline{\Gamma}\)-almost uniformly on \(S\). **Corollary 4.16**.: ( [26, Corollary 1]) _Let \((H_{n})_{n}:=([h_{1}^{(n)},h_{2}^{(n)}])_{n}\) be a sequence of bounded multifunctions and \((\Gamma_{n})_{n}:=([\nu_{1}^{(n)},\nu_{2}^{(n)}])_{n}\), be a sequence of multisubmeasures. 
Suppose there exist a multisubmeasure \(\Gamma=[\nu_{1},\nu_{2}]\) with \(\nu_{2}\) of finite variation and a bounded multifunction \(H=[h_{1},h_{2}]\) such that:_ **(4.16.a):**: \(H_{n}\preceq H_{n+1}\) _for every_ \(n\in\mathbb{N}\) _and_ \(d_{H}(H_{n},H)\to 0\)__\(\Gamma\)_-almost uniformly on_ \(S\)_,_ **(4.16.b):**: \(\Gamma_{n}\preceq\Gamma_{n+1}\preceq\Gamma\)_, for every_ \(n\in\mathbb{N}\) _and_ \((\Gamma_{n})\) _setwise converges to_ \(\Gamma\)_._ _Then_ \[\lim_{n\to\infty}d_{H}\Big{(}{}^{(RL)}\int_{S}H_{n}\,\mathrm{d}\Gamma_{n},\,{} ^{(RL)}\int_{S}H\,\mathrm{d}\Gamma\Big{)}=0.\] **Remark 4.17**.: We can observe that the results of Theorem 4.15 and Corollary 4.16 are still valid if we assume that \(\Gamma_{n+1}\succeq\Gamma_{n}\succeq\Gamma\) for every \(n\in\mathbb{N}\), with the additional hypothesis that \(\sup_{n}\,\overline{\Gamma}_{n}(S)<+\infty\). Moreover, in Corollary 4.16, if \(\Gamma_{n}=\Gamma=\nu\) in the condition (4.16.a), then the monotonicity could be omitted. In particular, in the finitely additive case, we obtain **Theorem 4.18**.: ([26, Theorem 4.4]) _Let \(\nu:\mathcal{C}\to[0,\infty)\) be finitely additive and of finite variation. Let \(H=[h_{1},h_{2}],\,H_{n}=[h_{1}^{(n)},h_{2}^{(n)}]:S\to L(\mathbb{R}_{0}^{+})\) be multifunctions such that \(\sup\{h_{2}(s),h_{2}^{(n)}(s),s\in S,\,n\in\mathbb{N}\}<+\infty\) and \(H_{n}\xrightarrow{\ddot{\nu}}H\). Then_ \[\lim_{n\to\infty}d_{H}\Big{(}{}^{(RL)}\int_{S}H_{n}\,\mathrm{d}\nu,{}^{(RL)} \int_{S}H\,\mathrm{d}\nu\Big{)}=0.\] ### Convergence results on atoms Finally, the field of atoms in measure theory has many applications and has been studied by many authors (e.g., [50, 59, 62]). In order to obtain convergence results on atoms we suppose \(S\) is a locally compact Hausdorff topological space. We denote by \(\mathcal{K}\) the lattice of all compact subsets of \(S\), \(\mathcal{B}\) the Borel \(\sigma\)-algebra (i.e. the smallest \(\sigma\)-algebra containing \(\mathcal{K}\)) and \(\mathcal{O}\) the class of all open sets. **Definition 4.19**.: The set multifunction \(\Gamma:\mathcal{B}\to L(\mathbb{R}_{0}^{+})\) is said to be regular if for every set \(A\in\mathcal{B}\) and every \(\varepsilon>0\) there exist \(K\in\mathcal{K}\) and \(D\in\mathcal{O}\) such that \(K\subseteq A\subseteq D\) and \(\|\Gamma(D\setminus K)\|_{\mathcal{H}}<\varepsilon\). We observe that the regularity of \(\Gamma\) is equivalent to the regularity of \(\nu_{2}\). **Definition 4.20**.: _It is said that \(B\in\mathcal{C}\) is an atom of an interval-valued multifunction \(\Gamma:\mathcal{C}\to L(\mathbb{R}_{0}^{+})\) if \(\{0\}\preceq\Gamma(B),\{0\}\neq\Gamma(B)\) and for every \(C\in\mathcal{C}\), with \(C\subseteq B\), we have \(\Gamma(C)=\{0\}\) or \(\Gamma(B\setminus C)=\{0\}\)._ **Theorem 4.21**.: ([27, Theorem 15]) _Let \(\Gamma:\mathcal{B}\to L(\mathbb{R}_{0}^{+})\) be a regular multisubmeasure of finite variation and satisfying property \((\boldsymbol{\sigma})\) and let \(H:S\to L(\mathbb{R}_{0}^{+})\) be bounded. If \(B\in\mathcal{B}\) is an atom of \(\Gamma\), then \({}^{(RL)}\int_{B}H\,\mathrm{d}\Gamma=H(b)^{\boldsymbol{\cdot}}\Gamma(\{b\}),\) where \(b\in B\) is the single point resulting by [50, Corollary 4.7]._ Proof.: Firstly, we prove the uniqueness of \(b\in B\). Because \(\Gamma\) is an interval-valued regular multisubmeasure, then the set functions \(\nu_{1}\) and \(\nu_{2}\) are null-additive an regular too. Suppose \(B\in\mathcal{B}\) is an atom of \(\Gamma\). Then, \(B\) is an atom of \(\nu_{1}\) and \(\nu_{2}\). 
According to [50, Corollary 4.7], for \(\nu_{i},i\in\{1,2\}\), there exists a unique point \(b_{i},i\in\{1,2\}\) such that \(\nu_{i}(\{b_{i}\})=\nu_{i}(B)\) and \(\nu_{i}(B\setminus\{b_{i}\})=0\), for \(i\in\{1,2\}\). We prove that \(b_{1}=b_{2}\). If it is not true then \(\{b_{1}\}\subset B\setminus\{b_{2}\}\). By the monotonicity of \(\nu_{2}\) we have \(\nu_{2}(\{b_{1}\})\leq\nu_{2}(B\setminus\{b_{2}\})=0\). Since \(\nu_{1}\leq\nu_{2}\) then \(\nu_{1}(\{b_{1}\})=0\), but \(\nu_{1}(\{b_{1}\})=\nu_{1}(B)>0\), and we have a contradiction. Therefore, there is only one point \(b\in B\) such that \(\nu_{i}(\{b\})=m_{i}(B)\) and \(\nu_{i}(B\setminus\{b\})=0\), for \(i\in\{1,2\}\). By the \(RL_{\Gamma}\)-integrability of \(H\), then \(h_{1}\) is \(RL_{\nu_{1}}\)-integrable and \(h_{2}\) is \(RL_{\nu_{2}}\)-integrable. According to [12, Theorem 11] and Subsection 3.2, \(h_{1},h_{2}\) are Gould integrable in the sense of [43], and moreover: \[{}_{(RL_{\nu_{1}})}\int_{B}h_{1}\mathrm{d}\nu_{1}=(G)\int_{B}h_{1}\mathrm{d} \nu_{1},\quad{}_{(RL_{\nu_{2}})}\int_{B}h_{2}\mathrm{d}\nu_{2}=(G)\int_{B}h_{2 }\mathrm{d}\nu_{2},\] where \((G)\int_{B}h_{1}\mathrm{d}\nu_{1}\), \((G)\int_{B}h_{2}\mathrm{d}\nu_{2}\) are the Gould integrals of \(h_{1},\,h_{2}\) respectively. Applying now [10, Theorem 3] and [27, Remark 5], we have \[(RL_{\Gamma})\int_{B}H\mathrm{d}\Gamma=(G)\int_{B}H\mathrm{d}\Gamma=H(b) \Gamma(\{b\}).\] **Theorem 4.22**.: ( [27, Theorem 16]) _Let \(\Gamma:\mathcal{B}\to L(\mathbb{R}_{0}^{+})\) be a regular multisubmeasure of finite variation and satisfying property \((\boldsymbol{\sigma})\). Let \(H:S\to L(\mathbb{R}_{0}^{+})\) be bounded and, for every \(n\in\mathbb{N}\), let \(H_{n}=[u_{n},v_{n}]\) be such that \((v_{n})_{n}\) is uniformly bounded. If \(B\in\mathcal{B}\) is an atom of \(\Gamma\) and \(H_{n}(b)\xrightarrow{d_{H}}H(b)\), where \(b\in B\) is the single point resulting by Theorem 4.21, then_ \[\lim_{n\to\infty}d_{H}\Big{(}{}_{(RL})\int_{B}H_{n}\,\mathrm{d}\Gamma,\,{}_{( RL)}\int_{B}H\,\mathrm{d}\Gamma\Big{)}=0.\] Proof.: By Theorem 4.21, there exists a unique point \(b\in B\) such that: \[\Gamma(B\setminus\{b\})=\{0\},\quad{}_{(RL)}\int_{B}Hd\Gamma=H(b)\cdot\Gamma( B).\] Similarly, for every \(n\in\mathbb{N}\), there is a unique \(b_{n}\in B\) such that: \[\Gamma(B\setminus\{b_{n}\})=\{0\},\quad{}_{(RL)}\int_{B}H_{n}\mathrm{d}\Gamma =H_{n}(b_{n})\cdot\Gamma(B).\] If there exists \(n_{0}\in\mathbb{N}\) such that \(b_{n_{0}}\neq b\), this means that \(\{b_{n_{0}}\}\subset B\setminus\{b\}\), and by the monotonicity of \(\Gamma\), it follows that: \(\Gamma(\{b_{n_{0}}\})\preceq\Gamma(B\setminus\{b\})=\{0\}\); however, this is not possible since \(\Gamma(\{b_{n_{0}}\})=\Gamma(B)\neq\{0\}\). Therefore, for every \(n\in\mathbb{N}\), \(b_{n}=b\). Then: \[d_{H}\left({}_{(RL_{\Gamma})}\int_{B}H_{n}\mathrm{d}\Gamma,\,{}_{(RL_{\Gamma} )}\int_{B}H\mathrm{d}\Gamma\right)\leq d_{H}(H_{n}(b),H(b))\cdot\overline{ \Gamma}(B)\longrightarrow 0,\quad\text{ for }n\to\infty.\]
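To illustrate Theorem 4.21 on a concrete case (the set functions below are chosen only for this example, and we assume they satisfy the regularity and \((\boldsymbol{\sigma})\) hypotheses required there): let \(S=[0,1]\) with its Borel \(\sigma\)-algebra, fix \(b\in S\) and \(0<\alpha\leq\beta\), and define \(\Gamma(A):=[\alpha\chi_{A}(b),\beta\chi_{A}(b)]\) for every \(A\in\mathcal{B}\). Every \(B\in\mathcal{B}\) with \(b\in B\) is an atom of \(\Gamma\): for \(C\in\mathcal{B}\), \(C\subseteq B\), either \(b\in C\), and then \(\Gamma(B\setminus C)=\{0\}\), or \(b\notin C\), and then \(\Gamma(C)=\{0\}\). For every bounded \(H=[h_{1},h_{2}]\), Theorem 4.21 then gives \[{}_{(RL)}\int_{B}H\,\mathrm{d}\Gamma=H(b)\,{\boldsymbol{\cdot}}\,\Gamma(\{b\})=\big{[}\alpha\,h_{1}(b),\ \beta\,h_{2}(b)\big{]},\] and, for uniformly bounded \(H_{n}\) with \(H_{n}(b)\xrightarrow{d_{H}}H(b)\), Theorem 4.22 yields the convergence of the corresponding integrals over \(B\).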
2309.13219
Waiting for Dr. Godot: how much and who responds to predicted health care wait times?
Asymmetric information in healthcare implies that patients could have difficulty trading off non-health and health related information. I document effects on patient demand when predicted wait time is disclosed to patients in an emergency department (ED) system. I use a regression discontinuity where EDs with similar predicted wait times display different online wait times to patients. I use impulse response functions estimated by local projections to demonstrate effects of the higher wait time. I find that an additional thirty minutes of wait time results in 15% fewer waiting patients at urgent cares and 2% fewer waiting patients at EDs within 3 hours of display. I find that the type of patient that stops using emergency care is triaged as having lower acuity and would have used an urgent care. However, I find that at very high wait times there are declines in all acuity patients including sick patients.
Stephenson Strobel
2023-09-22T23:51:28Z
http://arxiv.org/abs/2309.13219v1
# Waiting for Dr. Godot: how much and who responds to predicted health care wait times? ###### Abstract Asymmetric information in healthcare implies that patients could have difficulty trading off non-health and health related information. I document effects on patient demand when predicted wait time is disclosed to patients in an emergency department (ED) system. I use a regression discontinuity where EDs with similar predicted wait times display different online wait times to patients. I use impulse response functions estimated by local projections to demonstrate effects of the higher wait time. I find that an additional thirty minutes of wait time results in 15% fewer waiting patients at urgent cares and 2% fewer waiting patients at EDs within 3 hours of display. I find that the type of patient that stops using emergency care is triaged as having lower acuity and would have used an urgent care. However, I find that at very high wait times there are declines in all acuity patients including sick patients. JEL: I11, D24, J22 Keywords: demand for health; healthcare technologies; emergency wait times.
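For readers unfamiliar with the estimator named above, the sketch below illustrates impulse responses estimated by local projections: for each horizon, the outcome at \(t+h\) is regressed on the period-\(t\) shock plus controls. It is purely illustrative; the simulated data, variable names, and the single lagged control are chosen here for the example and do not reproduce the paper's ED data, regression-discontinuity design, or specification.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative local-projection impulse responses: for each horizon h,
# regress y_{t+h} on the period-t shock and a lagged control.
# The data-generating process and names below are hypothetical.
rng = np.random.default_rng(0)
T = 500
shock = rng.normal(size=T)        # e.g. a jump in the displayed wait time at t
y = np.zeros(T)                   # e.g. number of patients waiting
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] - 0.3 * shock[t] + rng.normal()

horizons = 6                      # e.g. hours after the wait time is displayed
irf = []
for h in range(horizons):
    dep = y[1 + h:]                                        # y_{t+h}
    X = sm.add_constant(np.column_stack([shock[1:T - h],   # shock_t
                                         y[:T - 1 - h]]))  # lagged control y_{t-1}
    irf.append(sm.OLS(dep, X).fit().params[1])             # response at horizon h
print(np.round(irf, 2))
```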
2309.15668
A New Centralized Multi-Node Repair Scheme of MSR codes with Error-Correcting Capability
Minimum storage regenerating (MSR) codes, with the MDS property and the optimal repair bandwidth, are widely used in distributed storage systems (DSS) for data recovery. In this paper, we consider the construction of $(n,k,l)$ MSR codes in the centralized model that can repair $h$ failed nodes simultaneously with $e$ out $d$ helper nodes providing erroneous information. We first propose the new repair scheme, and give a complete proof of the lower bound on the amount of symbols downloaded from the helped nodes, provided that some of helper nodes provide erroneous information. Then we focus on two explicit constructions with the repair scheme proposed. For $2\leq h\leq n-k$, $k+2e\leq d \leq n-h$ and $d\equiv k+2e \;(\mod{h})$, the first one has the UER $(h, d)$-optimal repair property, and the second one has the UER $(h, d)$-optimal access property. Compared with the original constructions (Ye and Barg, IEEE Tran. Inf. Theory, Vol. 63, April 2017), our constructions have improvements in three aspects: 1) The proposed repair scheme is more feasible than the one-by-one scheme presented by Ye and Barg in a parallel data system; 2) The sub-packetization is reduced from $\left(\operatorname{lcm}(d-k+1, d-k+2,\cdots, d-k+h)\right)^n$ to $\left((d-2e-k+h)/h\right)^n$, which reduces at least by a factor of $(h(d-k+h))^n$; 3) The field size of the first construction is reduced to $|\mathbb{F}| \geq n(d-2e-k+h)/h$, which reduces at least by a factor of $h(d-k+h)$. Small sub-packetization and small field size are preferred in practice due to the limited storage capacity and low computation complexity in the process of encoding, decoding and repairing.
Shenghua Li, Maximilien Gadouleau, Jiaojiao Wang, Dabin Zheng
2023-09-27T14:08:01Z
http://arxiv.org/abs/2309.15668v1
# A New Centralized Multi-Node Repair Scheme of MSR codes with Error-Correcting Capability ###### Abstract Minimum storage regenerating (MSR) codes, with the MDS property and the optimal repair bandwidth, are widely used in distributed storage systems (DSS) for data recovery. In this paper, we consider the construction of \((n,k,l)\) MSR codes in the centralized model that can repair \(h\) failed nodes simultaneously with \(e\) out \(d\) helper nodes providing erroneous information. We first propose the new repair scheme, and give a complete proof of the lower bound on the amount of symbols downloaded from the helped nodes, provided that some of helper nodes provide erroneous information. Then we focus on two explicit constructions with the repair scheme proposed. For \(2\leq h\leq n-k\), \(k+2e\leq d\leq n-h\) and \(d\equiv k+2e\ (\mod h)\), the first one has the UER \((h,d)\)-optimal repair property, and the second one has the UER \((h,d)\)-optimal access property. Compared with the original constructions (Ye and Barg, _IEEE Tran. Inf. Theory_, Vol. 63, April 2017), our constructions have improvements in three aspects: 1) The proposed repair scheme is more feasible than the one-by-one scheme presented by Ye and Barg in a parallel data system; 2) The sub-packetization is reduced from \(\left(\mathrm{lcm}(d-k+1,d-k+2,\cdots,d-k+h)\right)^{n}\) to \(\left(\left(d-2e-k+h\right)/h\right)^{n}\), which reduces at least by a factor of \((h(d-k+h))^{n}\); 3) The field size of the first construction is reduced to \(|\mathbb{F}|\geq n(d-2e-k+h)/h\), which reduces at least by a factor of \(h(d-k+h)\). Small sub-packetization and small field size are preferred in practice due to the limited storage capacity and low computation complexity in the process of encoding, decoding and repairing. **Keywords:** MSR codes; multi-node failures; repair bandwidth; universally error-resilient; centralized model ## I Introduction Owing to the optimal trade-off between the failure tolerance and storage overhead, maximum distance separable (MDS) codes are widely used in large-scale distribution storage systems (DSS). In such a system, a data file is stored across \(n\) nodes, and the information of any \(k\) nodes can reconstruct the original file (MDS property), i.e., the system can tolerate \(r=n-k\) erasures. Though having high storage efficiency, traditional MDS codes were illustrated to have low repair efficiency by Dimakis et al. in [1]. Two important metrics for the repair efficiency are the amount of data downloaded and the amount data accessed during the repair process, respectively. The former is called _repair bandwidth_, indicating the network usage, and the latter measures the disk input-output cost. In [1], the cut-set bound on repair bandwidth for a single failed node was derived, and regenerating codes were defined as those achieving the best trade-off between the repair bandwidth and storage overhead. An important subclass of regenerating codes is _minimum storage regenerating_ (MSR) code, which has the MDS property and the optimal repair bandwidth. Constructions of MSR codes were proposed in [1] - [7], [12] - [23], and the references therein. Multiple failed nodes are more frequent than a single failed node in practical DSS. Specially, in some systems, the repair of erased nodes is only triggered once the number of failed nodes exceeds a determined threshold. Thus, it is usually desirable to repair multiple erasures efficiently in DSS. There are two main models for repairing multi-node failures. 
One is the centralized model (CEM) where a repair center is assumed to reconstruct all the failed nodes ([2] - [11] ), and the repair bandwidth is the amount of data downloaded from the helped nodes for the repair. The other is the cooperative model (COM) where each failed node downloads data from the helper nodes firstly, then they exchange information among themselves to finish the repair ([14] - [18] ). Thus, the amount of data communicated among the failed nodes is also included in the repair bandwidth in the COM. The cut-set bounds under these two models are obtained in [2] and [14], respectively. In this paper we only consider the centralized model, and will introduce the corresponding bounds later. In the digital network era, the nodes in DDS are vulnerable to attacks from intruders. Since the intruders may be the helper nodes, another basic repair issue of MDS codes is the case where information from some helper nodes is erroneous ([7], [19] - [22] ). Therefore, it is of vital importance to study the repair for multi-node failures with error-correcting capability. ### _Cut-set Bounds_ MDS array codes [25] are a special subclass of MDS codes that have been extensively studied. An \((n,k,l)\) MDS array code over a field \(\mathbb{F}\) can be viewed as a vector code with the MDS property, where each coordinate of the codeword is an \(l\)-dimension vector over \(\mathbb{F}\). The parameter \(l\) is called the _sub-packetization_ of the code, which is desired to be smaller in practice due to low complexity in the process of encoding and repairing. Though being scalar MDS codes, Reed-Solomon (RS) codes can be viewed as vector codes over some subfield of \(\mathbb{F}\), and the sub-packetization \(l\) is defined as the degree of \(\mathbb{F}\) over the subfield [13]. The lower bound of sub-packetization for MSR codes is proved to be exponential ([5], [26], [27] ). Let \(\mathcal{C}=(C_{1},C_{2},\cdots,C_{n})\) be an \((n,k,l)\) MDS array code over a finite field \(\mathbb{F}\). Let \(\mathcal{E}\subset[n]\ (=\{1,\cdots,n\}),|\mathcal{E}|=h\) and \(\mathcal{R}=[n]\setminus\mathcal{E},|\mathcal{R}|=d\) be the set of indices of the failed nodes and the helper nodes, respectively. Under the centralized model, the repair center recover the values of the failed nodes by downloading \(\beta_{j},j\in\mathcal{R}\) symbols of \(\mathbb{F}\) from each helper node \(C_{j},j\in\mathcal{R}\). Thus, the repair bandwidth is defined by \[\beta(\mathcal{E},\mathcal{R})=\sum_{j\in\mathcal{R}}\beta_{j}. \tag{1}\] The lower bound on the repair bandwidth is called the _cut-set_ bound since it is obtained from the cut-set bound in network information theory. In [1], [2] and [6], the following inequality for a single node and multiple nodes were derived respectively. \[\beta(\mathcal{E},\mathcal{R})\geq\frac{dhl}{h+d-k}. \tag{2}\] The \((h,d)\)-repair bandwidth of \(\mathcal{C}\) ([7] ) is defined by \[\beta(h,d)=\max_{\mathcal{E}\cap\mathcal{R}=\emptyset,|\mathcal{E}|=h,| \mathcal{R}|=d}\beta(\mathcal{E},\mathcal{R}).\] If \(\beta(h,d)\) meets the bound (2) with equality, we say that \(\mathcal{C}\) has the \((h,d)\)-optimal repair property, and is called an \((h,d)\)-MSR code. Suppose that there exists erroneous information occurring in at most \(e\) out of \(d\) helper nodes. 
Let \(\beta(\mathcal{E},\mathcal{R},e)\) be the smallest amount of symbols of \(\mathbb{F}\) downloaded from the helper nodes \(\{C_{i},i\in\mathcal{R}\}\) to repair the failed nodes \(\{C_{i},i\in\mathcal{E}\}\) as long as the number of erroneous nodes in helper nodes is no more than \(e\). Define the universally error-resilient (UER) \((h,d)\)-repair bandwidth of \(\mathcal{C}\) as \[\beta(h,d,e)=\max_{\mathcal{E}\cap\mathcal{R}=\emptyset,|\mathcal{E}|=h,| \mathcal{R}|=d}\beta(\mathcal{E},\mathcal{R},e). \tag{3}\] It was shown in [19] and [20] that \(\beta(1,d,e)\geq(dl)/(d-2e-k+1)\) for \(d\geq k+2e\), which can be generalized for multiple failed nodes by a similar way. That is, \[\beta(h,d,e)\geq\frac{dhl}{d-2e-k+h}, \tag{4}\] for any nonnegative integer \(e\), \(h\geq 1\) and \(d\geq k+2e\). If \(\beta(h,d,e)\) meets the bound (4) with equality, we say that \(\mathcal{C}\) has the UER \((h,d)\)-optimal repair property, and is called a UER \((h,d)\)-MSR code. To our knowledge, a complete proof of (4) was not given. Then, we will restate it as a theorem in next section, and the sufficient and necessary condition for equality will also be derived. In general, the amount of data accessed is larger than that of data downloaded in the repair process since the data downloaded may be a function of the data accessed. If the two are equal for an aforementioned MSR code, it is called more accurately \((h,d)\)-optimal access code and UER \((h,d)\)-optimal access code, respectively. In many applications, it is desired to have a data centre responsible for the repair. For instance, in a rack-based system, there exists a data centre in each rack which undertakes repairing the multiple-node failures in one rack ([22] - [24] ). Furthermore, The centralized repair model has applications in other communications, such as efficient secret sharing and broadcast ([6], [8] ). [2] first proposed the issue of repairing multi-node failures, derived the lower bound of repair bandwidth and constructed the optimal codes by using the asymptotic interference alignment (IA) technique. [3] and [6] proposed the approaches to constructions of MSR codes with multi-node failures in the CEM from known MSR codes with a single node failure and cooperative MSR codes, respectively. [3] constructed MSR codes with multi-node failures by using the product-matrix (PM) technique. For ZigZag (ZZ) codes ([12] ), [4] studied the optimal repair for multi-node failures, and [6] discussed the special cases for two and three node failures. Due to being used widely in practice, Reed-Solomon (RS) codes for multi-node failures have been extensively studied to seek a trade-off between the sub-packetization and the repair bandwidth. [5] presented RS codes with optimal repair bandwidth for repairing multi-node failures in the CEM, and the sub-packetization \(l=r!\prod_{i=1}^{n}p_{i}\approx n^{n}\), where \(p_{i}\) is the \(i\)-th smallest prime satisfying some properties. While taking the sub-packetization to be on the order of \(\log(n)\), [10] and [11] proposed the repair schemes of RS codes with multi-node failures. In [7], the authors considered two constructions of UER \((h,d)\)-MSR codes for all \(h\leq r\) and \(k\leq d\leq n-h\) simultaneously. Although not giving the explicit constructions, they proved that the amount of data downloaded met the bound (4) under the one-by-one repair scheme. 
That is to say, a failed node is first repaired by the helped nodes, then a second failed node is repaired by the helped nodes and the first repaired node, then a third failed node is repaired by the helped nodes and the first two repaired nodes, \(\cdots\), finally he last remaining one is repaired by the helper nodes and all the previous repaired nodes. The sub-packetization of the codes is \((\mathrm{lcm}(d-k+1,d-k+2,\cdots,d-k+h))^{n}\), which is at least \((d-k+h)^{n}(d-k+h-1)^{n}\). [9] proposed a complete system supporting both single and concurrent failure recovery, and showed that the system achieves the minimum bandwidth for most concurrent failure patterns. In this paper, we study the simultaneous repair for multi-node failures, not the one-by-one repair. Repairing simultaneously means fairness for failed nodes. Moreover, without dependencies, it is feasible for parallel processing, which will greatly improve the efficiency. Although there exist simultaneous repairs in the CEM, the sub-packetizations of MSR codes are very large ([2], [5] ), huge finite field size is required for some of them ([2], [4] due to the Schwartz-Zippel Lemma ), or just for some special \(d\) ([4] ), and they all have no consideration for the intruders. Large sub-packetization and huge finite field size can significantly increase the computational complexity of encoding and decoding. As far as we know, this paper is the first to focus on the simultaneous repair for multi-node failures with error-correcting capability. Comparisons of our MSR array codes and some known MSR constructions with multi-node failures in the CEM are shown in Table I, where \(s=(d-2e-k+h)/h\) and \(s^{\prime}=\mathrm{lcm}(d-k+1,\cdots,d-k+h)\). The main contributions of this paper are the proposed repair scheme and two constructions of MSR codes which have small sub-packetization and small field size. They are summarized as follows. 1) We propose a repair scheme with error-correcting capability of MDS codes for multi-node failures tolerance (see Definition 1) and give a complete proof of the lower bound (4) on the repair bandwidth (see Theorem 1). In the repair scheme, the process of repair is divided into \(a\) groups. In each group, \(l^{\prime}=l/a\) coordinates in each failed node are recovered, and the total amount of symbols downloaded from each helper node in all groups is exactly \(l/s\), which can ensure all coordinates in all failed nodes are obtained and the repair bandwidth meets the lower bound. 2) Construction 1 gives a class of \((n,k,l=s^{n})\) MDS array codes with the UER \((h,d)\)-optimal repair property (see Theorem 2), where \(s=(d-2e-k+h)/h\) is an integer. The sub-packetization and field size of codes are respectively set to be \(s^{n}\), \(sn\). Compared with the construction in [7], they reduce at least by factors of \((h(d-k+h))^{n}\) and \(h(d-k+h)\), respectively. The parity-check matrices of the codes are associated with diagonal matrices, and another \((d-2e-k+n,d-2e,s^{n-h})\) MDS code is obtained for correcting \(e\) errors. 3) Construction 2 gives a class of \((n,k,l=s^{n})\) MDS array codes with the UER \((h,d)\)-optimal access property (see Theorem 3), where \(s=(d-2e-k+h)/h\) is an integer. The sub-packetization is set to be \(s^{n}\), which reduces at least by a factor of \((h(d-k+h))^{n}\) compared with the construction in [7]. The parity-check matrices of the codes are associated with permutation matrices, and another \((n-h,d-2e,s^{n-h})\) MDS code is obtained for correcting \(e\) errors. 
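These comparisons are easy to reproduce numerically. The short script below is ours; it simply evaluates the sub-packetization and field-size expressions quoted above and in Table I, using the parameters \((n,k,h,d,e)=(11,3,2,7,1)\) that reappear in Example 1 of Section III.

```python
from math import lcm

# Sub-packetization and field-size comparison for (n,k,h,d,e) = (11,3,2,7,1).
n, k, h, d, e = 11, 3, 2, 7, 1
s_ours = (d - 2 * e - k + h) // h                 # s = 2
s_yb = lcm(*range(d - k + 1, d - k + h + 1))      # lcm(5,6) = 30, as in [7]
print("sub-packetization      :", s_ours ** n, "vs", s_yb ** n)  # 2^11 vs 30^11
print("reduction factor       :", (s_yb // s_ours) ** n)         # (3*5)^11, cf. Remark 4
print("field-size lower bound :", n * s_ours, "vs", n * s_yb)    # 22 vs 330
```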
The rest of this paper is organized as follows. Section II proposes the repair scheme firstly. Then two constructions of UER \((h,d)\)-MSR codes for \(2\leq h\leq r\) and any corresponding reasonable \(d\) are presented in Sections III and IV respectively. Finally, Section V concludes this paper. ## II Proposed Repair Framework In this section, we will illustrate and define the repair scheme with error-correcting capability for multiple failures tolerance in the CEM, then give a complete proof to the lower bound (4). For ease of reading, we first introduce some notations used throughout in this paper. * \(\mathbb{F}\): a finite field. * \(\mathbb{F}_{p^{m}}\): the finite field of \(p^{m}\) elements for prime \(p\) and integer \(m\geq 1\). * \(\mathbb{Z}_{m}\): the integer ring with \(m\) elements. * \([a,b]\): the set \(\{a,a+1,\cdots,b\}\) with two integers \(a\) and \(b\), \(a\leq b\). If \(a=1\), \([b]\) is used in short. * \(n,k,r=n-k,l\): the number of total nodes, systematic nodes, parity nodes, and symbols in each node, respectively. * \(h,d,e\): the number of failed nodes, total helper nodes, helper nodes with erroneous information respectively. Assume that \(h\leq r\) and \(k+2e\leq d\leq n-h\). * \(s=(d-2e-k+h)/h\). For any \(a\in[0,s^{n}-1]\), let \((a_{n},a_{n-1},\cdots,a_{1})\) be its \(s\)-ary expansion form of length \(n\). For integers \(u_{j}\in[0,s-1]\), \(j\in[n]\), \(a(i_{1},i_{2},\cdots,i_{j};u_{1},u_{2},\cdots,u_{j})\) denotes the \(i_{1}\)-th, \(\cdots\), \(i_{j}\)-th digits of \(a\) are replaced with \(u_{1},u_{2},\cdots,u_{j}\), respectively. * \(\{e_{a}:a=0,1,\cdots,l-1\}\): the standard basis of \(\mathbb{F}^{l}\) over \(\mathbb{F}\). * \(|\Phi|\): the cardinality of a set \(\Phi\). In the CEM, when \(h\) nodes are failed, a repair center is responsible for recovering the data stored in the failed nodes. If there exist adversaries in the system, the errors in the helper nodes must be corrected firstly. Thus, we need to obtain another array code with the minimum distance \(d_{\min}\) satisfying \(\lfloor(d_{\min}-1)/2\rfloor\geq e\). Let \(\mathcal{C}=(C_{1},C_{2},\cdots,C_{n})\) be an \((n,k,l)\) MDS array code over \(\mathbb{F}\), where \(C_{i}=(c_{i,0},c_{i,1},\cdots,c_{i,l-1})\) is a vector of length \(l\). Suppose the set of indices of the failed nodes and helper nodes are \(\mathcal{E}=\{i_{1},\cdots,i_{h}\}\subset[n]\) and \(\mathcal{R}=\{q_{1},\cdots,q_{d}\}\subset[n]\setminus\mathcal{E}\), respectively. For the sake of repairing all the failed nodes simultaneously, the coordinates to be recovered are divided into \(a\) groups. In each group, \(l^{\prime}=l/a\) coordinates in each failed node will be recovered by downloading the same amount of symbols from each helper node. Provided that the information associated with \(d\) helper nodes will form a \((d,d-2e)\) array code, which can correct \(e\) errors, the repair in each group are well done. After the repairs in all groups are finished, \(l\) coordinates in each node are obtained. Figure 1 illustrates the process of the repair, where \(\beta^{\prime}_{ij}(i\in[d],j\in[a])\) represents the number of symbols downloaded from helper node \(q_{i}\) in the \(j\)th group, and \(m_{bj}(i_{b}\in[h],j\in[a])\) represents the set of symbols recovered of failed node \(i_{b}\) in the \(j\)-th group. Thus, we have \[\bigcup_{j=1}^{a}m_{bj}=\{c_{i_{b},0},c_{i_{b},1},\cdots,c_{i_{b},l-1}\}, \tag{5}\] for all \(b\in[h]\). We now mathematically formalize this repair scheme. 
**Definition 1**: _A centralized repair scheme of \((n,k,l)\) array code \(\mathcal{C}\) over \(\mathbb{F}\) with error-correcting capability for \(h\)-node failures tolerance is defined as follows. For each set \(\mathcal{E}\subset[n],|\mathcal{E}|=h\) and set \(\mathcal{R}\subset[n]\setminus\mathcal{E},|\mathcal{R}|=d\),_ \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Construction & \(l\) & Field size & \(d\) & UER & Scheme \\ \hline IA \((n,k)\) Codes [2] & \(\begin{array}{c}(d-k+h)m^{N},\;\mathrm{where}\\ N=(d-k+h)(k-h),\\ m\rightarrow\infty.\end{array}\) & large enough & \(k\leq d\leq n-h\) & NO & Simul. \\ \hline ZZ \((n,m+1)\) Codes [4] & \(r^{m}\) & large enough & \(=n-h\) & NO & Simul. \\ \hline RS \((n,k)\) Codes [5] & \(\approx n^{n}\) & very large: \(p^{l}\) & \(k\leq d\leq n-h\) & NO & One. \\ \hline \((n,k)\) Codes 1 [7] & \((s^{\prime})^{n}\) & \(\geq ns^{\prime}\) & \(k+2e\leq d\leq n-h\) & YES & One. \\ \hline \((n,k)\) Codes 2 [7] & \((s^{\prime})^{n}\) & \(\geq(n+1)\) & \(k+2e\leq d\leq n-h\) & YES & One. \\ \hline \((n,k)\) Codes 1 our & \(s^{n}\) & \(\geq ns\) & \(\begin{array}{c}k+2e\leq d\leq n-h\\ d=k+2e\;(\mod h)\end{array}\) & YES & Simul. \\ \hline \((n,k)\) Codes 2 our & \(s^{n}\) & \(\geq(n+1)\) & \(\begin{array}{c}k+2e\leq d\leq n-h\\ d\equiv k+2e\;(\mod h)\end{array}\) & YES & Simul. \\ \hline \end{tabular} \end{table} TABLE I: Comparison of MSR codes for multi-node failures tolerance in the CEM 1. The data associated with the nodes \(C_{i},i\in\mathcal{R}\) forms a \((d,d-2e)\) array code. 2. There exist two positive integers \(l^{\prime},a\), s.t. \(l=al^{\prime}\), and functions \[f_{i,j}:\ \mathbb{F}^{l}\rightarrow\mathbb{F}^{\beta^{\prime}_{ij}},\ \ i\in\mathcal{R},\ j\in[a]\] and \[g_{j}:\ \mathbb{F}^{\sum\limits_{i\in\mathcal{R}}\beta^{\prime}_{ij}} \rightarrow\mathbb{F}^{hl^{\prime}},\ \ j\in[a]\] such that \[\{(c_{i,0},c_{i,1},\cdots,c_{i,l-1}):\ i\in\mathcal{E}\}=\bigcup\limits_{j=1 }^{a}g_{j}(\{f_{i,j}((c_{i,0},c_{i,1},\cdots,c_{i,l-1})),i\in\mathcal{R}\}).\] The repair bandwidth of this scheme is defined by \[\sum\limits_{j=1}^{a}\sum\limits_{i\in\mathcal{R}}\beta^{\prime}_{ij}. \tag{6}\] Note that (6) agrees with (1), here \(\beta_{i}=\sum\limits_{j=1}^{a}\beta^{\prime}_{ij}\), and the lower bound with error-correcting capability for multiple failures tolerance is given by (4). For a single failure, the bound was discussed in [19] and [20]. For multiple failures, (4) has been stated (see [7]), but a complete proof is not given to our knowledge. Here, Theorem 1 discuss the lower bound again and a complete proof is given based on the fact below. **Lemma 1** ([22]): _Let \(\mathcal{C}\) be an \((n,k,l)\) MDS array code over \(\mathbb{F}\). Suppose that \(h\) failed nodes need to be repaired by \(d\) helper nodes, \(e\) of them providing erroneous information. Then, the total amount of symbols over \(\mathbb{F}\) downloaded from any \(d-2e-k+h\) helper nodes for the repair should be at least \(hl\)._ **Theorem 1**: _Follow notations introduced above. Then the minimum repair bandwidth for multi-node failures with error-correcting capability defined in (3) satisfies (4). Moreover, the equality holds if and only if the number of symbols downloaded from each helper node is the same, i.e.,_ \[\beta_{i}=(hl)/(d-2e-k+h), \tag{7}\] _for all \(i\in\mathcal{R}\)._ _Proof._ Let \(\mathcal{I}\subset\mathcal{R},|\mathcal{I}|=k+2e-h\) be a subset of helper nodes. Then \(|\mathcal{R}\setminus\mathcal{I}|=d-2e-k+h\). 
By Lemma 1, we have \[\sum\limits_{i\in\mathcal{R}\setminus\mathcal{I}}\beta_{i}\geq hl, \tag{8}\] where \(\beta_{i}\) is the number of symbols downloaded from the node \(i\).
Fig. 1: Repair for multiple-node failures simultaneously in the CEM: \(h\) failed nodes, \(e\) out of \(d\) helper nodes with erroneous information.
Summing the left-hand side over all \((k+2e-h)\)-subsets of \(\mathcal{R}\), we have \[\sum_{\begin{subarray}{c}\mathcal{I}\subset\mathcal{R}\\ |\mathcal{I}|=k+2e-h\end{subarray}}\sum_{i\in\mathcal{R}\setminus\mathcal{I}}\beta_{i}=\sum_{i\in\mathcal{R}}\ \sum_{\begin{subarray}{c}\mathcal{I}\subset\mathcal{R}\setminus\{i\}\\ |\mathcal{I}|=k+2e-h\end{subarray}}\beta_{i}=\sum_{i\in\mathcal{R}}\binom{d-1}{k+2e-h}\beta_{i}=\binom{d-1}{k+2e-h}\sum_{i\in\mathcal{R}}\beta_{i}.\] Together with (8), we obtain \[\binom{d-1}{k+2e-h}\sum_{i\in\mathcal{R}}\beta_{i}\geq\binom{d}{k+2e-h}hl,\] which simplifies to (4). Next, we prove the second part. It is clear that the equality in (4) holds if \(\beta_{i}=(hl)/(d-2e-k+h)\) for each \(i\in\mathcal{R}\). Note that the bound (4) holds with equality if and only if (8) holds with equality for each \(\mathcal{I}\subset\mathcal{R}\), \(|\mathcal{I}|=k+2e-h\). Suppose that the equality in (4) holds with \(\beta_{i^{*}}\neq(hl)/(d-2e-k+h)\) for some \(i^{*}\in\mathcal{R}\), say \(\beta_{i^{*}}<(hl)/(d-2e-k+h)\). Let \(\mathcal{J}_{1}\) be a \((d-2e-k+h)\)-subset of \(\mathcal{R}\) containing \(i^{*}\). Then, since \(\sum_{i\in\mathcal{J}_{1}}\beta_{i}=hl\), there must exist \(j_{1}\in\mathcal{J}_{1}\) such that \(\beta_{j_{1}}>(hl)/(d-2e-k+h)\). Let \(\mathcal{J}_{2}\) be another \((d-2e-k+h)\)-subset of \(\mathcal{R}\) containing \(j_{1}\) but not \(i^{*}\). We also have \(\sum_{i\in\mathcal{J}_{2}}\beta_{i}=hl\), so there must exist \(j_{2}\in\mathcal{J}_{2},j_{2}\neq i^{*}\), such that \(\beta_{j_{2}}<(hl)/(d-2e-k+h)\). Set \(\mathcal{J}_{3}=\{j_{2}\}\cup(\mathcal{J}_{1}\setminus\{j_{1}\})\), then \(\sum_{i\in\mathcal{J}_{3}}\beta_{i}<hl\). With \(|\mathcal{J}_{3}|=d-2e-k+h\), this contradicts (8), which completes the proof. \(\Box\) Since the amount of data accessed on the helper nodes is at least the amount of data downloaded from them, the bound (4) is also a lower bound for the amount of data accessed. Moreover, the second construction in this paper shows that the lower bound is tight. _Remark 1:_ Note that the repairs in all groups can be carried out at the same time if they are mutually independent. Since the repair of each group in our scheme only relies on helper nodes, our scheme can be applied in parallel processing. In [7], the repair scheme is one-by-one, and the failed nodes already repaired become helper nodes of the next failed node to be repaired. Thus, the repair under their scheme cannot be done in parallel. _Remark 2:_ In this paper, \(s=(d-2e-k+h)/h\) and \(l=s^{n}\). From Theorem 1, the repair bandwidth bound is achieved if \(\beta_{i}=l/s\) for each \(i\in\mathcal{R}\), which coincides exactly with that in [1]. In the repair process of our array codes, set \(a=s^{h-1}\) and \(\beta_{ij}^{\prime}=s^{n-h}\) for all \(i\in[d],j\in[a]\). Thus, \(\beta_{i}=l/s\) for every \(i\in\mathcal{R}\), and the codes are MSR codes. _Remark 3:_ Note that (5) and (7) are the key to constructing MSR codes with error-correcting capability for multi-node failures tolerance. Since any \(d\) helper nodes lead to a \((d,d-2e)\) array code, the \(d\) helper nodes used in the repair of each group can be different, as long as (5) and (7) are still satisfied.
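A small script of ours makes the bookkeeping of Remark 2 concrete, using the parameters \((n,k,h,d,e)=(11,3,2,7,1)\) that reappear in Example 1 below; it only evaluates the expressions already stated above.

```python
# Sanity check of the grouping described in Remark 2 against the bound (4),
# with the parameters of Example 1: (n, k, h, d, e) = (11, 3, 2, 7, 1).
n, k, h, d, e = 11, 3, 2, 7, 1
s = (d - 2 * e - k + h) // h      # s = 2
l = s ** n                        # sub-packetization l = 2^11
a = s ** (h - 1)                  # number of repair groups
beta_group = s ** (n - h)         # beta'_{ij}: symbols per helper node per group
beta_i = a * beta_group           # total symbols downloaded from each helper node
assert beta_i == l // s                                  # condition (7)
assert d * beta_i == d * h * l // (d - 2 * e - k + h)    # bound (4) met with equality
print(s, l, a, beta_i, d * beta_i)  # 2 2048 2 1024 7168
```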
## III MSR Codes with the UER \((h,d)\)-Optimal Repair Property In this section, we first give the general array code construction [7] used in this paper. Let \(\mathcal{C}\in\mathbb{F}^{ln}\) be an \((n,k,l)\) array code with nodes \(C_{i}\in\mathbb{F}^{l},i\in[n]\), where \(C_{i}\) is a column vector \((c_{i,0},c_{i,1},\cdots,c_{i,l-1})^{T}\). The code \(\mathcal{C}\) is defined by the following parity-check form: \[\mathcal{C}=\{(C_{1},C_{2},\cdots,C_{n}):\ \sum_{i=1}^{n}A_{t,i}C_{i}=0,\ \ t\in[0,r-1]\}, \tag{9}\] where \(A_{t,i}\) is an \(l\times l\) matrix over \(\mathbb{F}\) for \(t\in[0,r-1]\) and \(i\in[n]\). The parity-check matrix in this paper is a block Vandermonde matrix, i.e., \[A_{t,i}=A_{i}^{t},\ \ t\in[0,r-1],i\in[n], \tag{10}\] where \(A_{i},i\in[n]\) are \(l\times l\) nonsingular matrices. By convention, \(A^{0}=I\) is used. We can characterize the MDS property of \(\mathcal{C}\) by its parity-check matrix. The code \(\mathcal{C}\) defined by (9) and (10) is MDS if and only if every \(r\times r\) block submatrix of \[\left[\begin{array}{cccc}I&I&\cdots&I\\ A_{1}&A_{2}&\cdots&A_{n}\\ \vdots&\vdots&\vdots&\vdots\\ A_{1}^{r-1}&A_{2}^{r-1}&\cdots&A_{n}^{r-1}\end{array}\right]\] is invertible. In [7], Ye and Barg established the following criterion for invertibility of block matrices. **Lemma 2** ([7] ): _Let \(B_{1},\cdots,B_{r}\) be \(l\times l\) matrices such that \(B_{i}B_{j}=B_{j}B_{i}\) for all \(i,j\in[r]\). The matrix_ \[M_{r}=\left[\begin{array}{cccc}I&I&\cdots&I\\ B_{1}&B_{2}&\cdots&B_{r}\\ \vdots&\vdots&\vdots&\vdots\\ B_{1}^{r-1}&B_{2}^{r-1}&\cdots&B_{r}^{r-1}\end{array}\right]\] _is invertible if and only if \(B_{i}-B_{j}\) is invertible for all \(i\neq j\)._ In this following, we discuss the simultaneous repair of multi-node failures for the MDS array codes associated with some diagonal matrices. **Construction 1**: _Let \(\mathbb{F}\) be a finite field of size \(|\mathbb{F}|\geq sn\). Let \(\{\lambda_{i,j}\}_{i\in[n],\ j\in[0,s-1]}\) be \(sn\) distinct elements in \(\mathbb{F}\). Consider the \((n,k,l=s^{n})\) array code \(\mathcal{C}\) defined by (9) and (10), where_ \[A_{i}=\sum_{a=0}^{l-1}\lambda_{i,a_{i}}e_{a}e_{a}^{T},\ \ i\in[n]. \tag{11}\] From (11), we have \[A_{i}^{t}=\sum_{a=0}^{l-1}\lambda_{i,a_{i}}^{t}e_{a}e_{a}^{T},\ \ t\in[0,r-1],\ i\in[n].\] _Then, the parity-check equations (9) coordinatewise can be rewritten as:_ \[\sum_{i=1}^{n}\lambda_{i,a_{i}}^{t}c_{i,a}=0, \tag{12}\] _for all \(t\in[0,r-1],a\in[0,l-1]\). Namely,_ \[\left[\begin{array}{cccc}1&1&\cdots&1\\ \lambda_{1,a_{1}}&\lambda_{1,a_{2}}&\cdots&\lambda_{1,a_{n}}\\ \vdots&\vdots&\vdots&\vdots\\ \lambda_{1,a_{1}}^{r-1}&\lambda_{1,a_{2}}^{r-1}&\cdots&\lambda_{1,a_{n}}^{r-1 }\end{array}\right]\left[\begin{array}{c}c_{1,a}\\ c_{2,a}\\ \vdots\\ c_{n,a}\end{array}\right]=0\] _for all \(a\in[0,l-1]\). To meet the MDS property, it is required that every \(r\) columns of the matrix above have rank \(r\). Since the submatrix composed of every \(r\) columns is a Vandermonde matrix, the code \(\mathcal{C}\) in Construction 1 have the MDS property from \(\lambda_{i,a_{i}}\neq\lambda_{j,a_{j}}\) for any \(a\in[0,l-1]\) and \(i\neq j\), \(i,j\in[n]\). 
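Before turning to the repair property, the per-coordinate MDS criterion above is easy to verify numerically. The following sketch is ours and uses small hypothetical parameters (not those of Example 1): it builds the scalars \(\lambda_{i,a_{i}}\) of (12) over \(\mathbb{F}_{11}\) and checks that every \(r\times r\) sub-block of the coordinatewise check matrix is invertible.

```python
from itertools import combinations

# Illustrative check of the coordinatewise MDS criterion behind Construction 1,
# for small hypothetical parameters: n = 4, k = 2 (so r = 2), s = 2,
# sub-packetization l = s**n = 16, over F_p with p = 11 >= s*n.
p = 11
n, k, s = 4, 2, 2
r = n - k
l = s ** n
# s*n distinct elements lambda_{i,j} of F_p (any distinct choice works)
lam = {(i, j): s * (i - 1) + j + 1 for i in range(1, n + 1) for j in range(s)}

def digit(a, i):
    """i-th digit a_i (1-indexed, least significant first) of the s-ary expansion of a."""
    return (a // s ** (i - 1)) % s

def invertible_mod_p(rows):
    """True iff the square matrix (given as rows) is invertible over F_p."""
    M = [list(row) for row in rows]
    m = len(M)
    for c in range(m):
        piv = next((j for j in range(c, m) if M[j][c] % p != 0), None)
        if piv is None:
            return False
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], p - 2, p)
        for j in range(c + 1, m):
            f = (M[j][c] * inv) % p
            M[j] = [(x - f * y) % p for x, y in zip(M[j], M[c])]
    return True

# For each coordinate a, (12) is a Vandermonde-type check in the scalars
# lambda_{i, a_i}; the MDS property needs every r x r sub-block invertible.
for a in range(l):
    cols = [[pow(lam[(i, digit(a, i))], t, p) for t in range(r)] for i in range(1, n + 1)]
    for sub in combinations(cols, r):
        assert invertible_mod_p(zip(*sub))
print("all r x r sub-blocks of the coordinatewise parity-check matrix are invertible")
```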
Then, we derive the optimal repair property of \(\mathcal{C}\) as follows._ **Theorem 2**: _The code \(\mathcal{C}\) given by Construction 1 has the UER \((h,d)\)-optimal repair property for \(k+2e\leq d\leq n-h\) and \(d\equiv k+2e\ (\mod h)\)._ _Proof._ Let the set of indices of failed nodes be \(\mathcal{E}=\{i_{1},i_{2},\cdots,i_{h}\},1\leq i_{1}<\cdots<i_{h}\leq n\). For a fixed vector \((b_{1},b_{2},\cdots,b_{h-1})\in\mathbb{Z}_{s}^{h-1}\), replacing \(a\) with \(a(i_{1},\cdots,i_{h};\ u,u\oplus b_{1},\cdots,u\oplus b_{h-1})\) in (12), we obtain \[\sum_{j=1}^{h}\lambda_{i,u\oplus b_{j-1}}^{t}c_{i_{j},a(i_{1},\cdots,i_{h};\ u,u\oplus b_{1},\cdots,u\oplus b_{h-1})}=- \sum_{i\in[n]\setminus\mathcal{E}}\lambda_{i,a_{i}}^{t}c_{i,a(i_{1},\cdots,i_ {h};\ u,u\oplus b_{1},\cdots,u\oplus b_{h-1})} \tag{13}\] for all \(t\in[0,r-1]\), where we set \(b_{0}=0\). Summing the above over \(u=0,1,\cdots,s-1\), we get \[\sum_{j=1}^{h}\sum_{u=0}^{s-1}\lambda_{i_{j},u\oplus b_{j-1}}^{t}c_{i_{j},a(i_ {1},\cdots,i_{h};\ u,u\oplus b_{1},\cdots,u\oplus b_{h-1})}=-\sum_{i\in[n] \setminus\mathcal{E}}\lambda_{i,a_{i}}^{t}\sum_{u=0}^{s-1}c_{i,a(i_{1},\cdots,i_ {h};\ u,u\oplus b_{1},\cdots,u\oplus b_{h-1})}, \tag{14}\] for all \(t\in[0,r-1]\). Note that \((hs+n-h)-r=d-2e\) and \(\{u\oplus b_{i}:u=0,\cdots,s-1\}=[0,s-1]\) for any \(b_{i}\in[0,s-1],i\in[h-1]\). Then, (14) gives a \((d-2e-k+n,d-2e,s^{n-h})\) MDS array code \(\mathcal{C}^{\prime}\) from different \(\lambda_{i,j},i\in[n],j\in[0,s-1]\). Moreover, choosing any \(d\) columns from \(\mathcal{C}^{\prime}\) also constitutes a \((d,d-2e,s^{n-h})\) MDS array code, which can correct \(e\) errors. Therefore, if we download any \(d\) out of \(n-h\) elements in the set \[\left\{\sum_{u=0}^{s-1}c_{i,a(i_{1},i_{2},\cdots,i_{h};\;u,u\oplus b_{1}, \cdots,u\oplus b_{h-1})}:\;i\in[n]\setminus\mathcal{E}\right\},\] we can recover the coordinates \(\{a(i_{1},i_{2},\cdots,i_{h};\;u,u\oplus b_{1},\cdots,u\oplus b_{h-1}):\;u=0, 1\cdots,s-1\}\) of all the failed nodes as long as the number of erroneous nodes among the helper nodes is no more than \(e\). When \((b_{1},b_{2},\cdots,b_{h-1})\) runs through all elements in \(\mathbb{Z}_{s}^{h-1}\), all coordinates of the failed nodes are recovered. The amount of total downloaded data is \(s^{h-1}\cdot d\cdot s^{n-h}=d\cdot s^{n-1}=dhl/(d-2e-k+h)\), which meets the bound (4). This completes the proof. From the right hand side of (14), the symbol downloaded from each helper node for the repair is the sum of \(s\) coordinates of the node. Then, the amount accessed is \(s\) times that of downloaded. **Corollary 1**: _For the code \(\mathcal{C}\) given by Construction 1, suppose that there exist at most \(e\) of \(d\) helper nodes providing erroneous information in the process of repairing \(h\) failed nodes. Then, the total number of symbols over \(\mathbb{F}\) accessed from helper nodes for the repair is \(dl\)._ **Example 1**: _Let \((n,k,h,d,e)=(11,3,2,7,1)\), then \(r=8\) and \(s=(d-2e-k+h)/h=2\). Let \(\mathbb{F}=\mathbb{F}_{23}\), and \(\lambda_{1,0},\lambda_{1,1},\lambda_{2,0},\lambda_{2,1},\cdots,\lambda_{11,0}, \lambda_{11,1}\) be \(22\) different elements of \(\mathbb{F}\). Consider the \((11,3,2^{11})\) array code \(\mathcal{C}\) over \(\mathbb{F}\) defined by Construction 1._ Assume that nodes 1, 2 are failed. Let \(b_{1}=0\) in (13), we obtain \[\lambda_{1,u}^{t}c_{1,a(1,2;\;u,u)}+\lambda_{2,u}^{t}c_{2,a(1,2;\;u,u)}=-\sum_ {i=3}^{11}\lambda_{i,a_{i}}^{t}c_{i,a(1,2;\;u,u)}\] for all \(t\in[0,7]\) and \(u=0,1\). 
Then we get \[(\lambda_{1,0}^{t}c_{1,a(1,2;\;0,0)}+\lambda_{1,1}^{t}c_{1,a(1,2;\;1,1)})+( \lambda_{2,0}^{t}c_{2,a(1,2;\;0,0)}+\lambda_{2,1}^{t}c_{2,a(1,2;\;1,1)})=-\sum _{i=3}^{11}\lambda_{i,a_{i}}^{t}(c_{i,a(1,2;\;0,0)}+c_{i,a(1,2;\;1,1)})\] for all \(t\in[0,7]\), which gives a \((13,5,2^{9})\) MDS array code \(\mathcal{C}_{0}\). Then any \(7\) columns of \(\mathcal{C}_{0}\) also form a \((7,5,2^{9})\) MDS array code, which can correct \(e(=1)\) error. Thus, any \(7\) columns in \(\mathcal{C}_{0}\) can represent all columns in \(\mathcal{C}_{0}\) as long as the number of erroneous columns is no more than \(e\). So we obtain all values in the set \[\{c_{1,(-,0,0)},c_{1,(-,1,1)},\;c_{2,(-,0,0)},c_{2,(-,1,1)}\}\] by downloading \(2^{9}\) symbols from each helper node, where the symbol \({}^{\prime}-^{\prime}\) indicates that the upper 9 digits of the coordinate can take the value \(0\) or \(1\). Let \(b_{1}=1\), another \((13,5,2^{9})\) MDS array code \(\mathcal{C}_{1}\) will be derived in a similar way, and all values in the set \[\{c_{1,(-,0,1)},c_{1,(-,1,0)},\;c_{2,(-,0,1)},c_{2,(-,1,0)}\}\] are also obtained by downloading \(2^{9}\) symbols from each helper node. Thus, the repair bandwidth is \(2\cdot 7\cdot 2^{9}=7\cdot 2^{10}\), which satisfies the lower bound (4). The total amount of symbols accessed is \(7\cdot 2^{11}\). **Remark 4**: _For the same parameters \(n,k,h,d\) in Example 1, the sub-packetization of MSR array codes in [7] is_ \[(\mathrm{lcm}(d-k+1,d-k+2))^{n}=(\mathrm{lcm}(5,6))^{n}=(2\cdot 3\cdot 5)^{11}.\] _The sub-packetization level of our array code is reduced by a factor of \((3\cdot 5)^{11}\). For \(h=2,e=1\), our code at least reduces the sub-packetization by a factor of \((2(d-k+2))^{n}\). Note that this factor will increase as \(h\) or \(e\) increases. Since \((\mathrm{lcm}(d-k+1,d-k+2,\cdots,d-k+h))^{n}\geq((d-k+h-1)(d-k+h))^{n}\) for \(h\geq 2\), our code reduces the sub-packetization in [7] at least by a factor of \((h(d-k+h))^{n}\). Moreover the size of \(\mathbb{F}\) in Example 1 is 23, but the corresponding size of the field in [7] is required to be at least \(11\cdot(\mathrm{lcm}(5,6))=330\). In general, the size of \(\mathbb{F}\) in Construction 1 is at least \(h(d-k+h)\) times less than that in [7]. Less sub-packetization and smaller finite field are better for MSR code in practice._ ## IV MSR Codes with the UER \((h,d)\)-Optimal Access Property In this section, we discuss the simultaneous repair of multi-node failures for the MDS array code associated with some permutation matrices. **Construction 2**: _Let \(\mathbb{F}\) be a finite field of size \(|\mathbb{F}|\geq n+1\) and \(\gamma\) be a primitive element of \(\mathbb{F}\). Consider the \((n,k,l=s^{n})\) array code \(\mathcal{C}\) defined by \((9)\) and \((10)\), where the matrices are given by_ \[A_{i}=\sum\limits_{a=0}^{l-1}\lambda_{i,a_{i}}e_{a}e_{a(i;a_{i}\oplus 1)}^{T},\ \ i\in[n], \tag{15}\] _where \(\oplus\) denotes addition modulo \(s\). Here, \(\lambda_{i,0}=\gamma^{i}\) for all \(i\in[n]\) and \(\lambda_{i,u}=1\) for all \(i\in[n]\) and all \(u\in[s-1]\)._ From (15), we have \[A_{i}^{t}=\sum\limits_{a=0}^{l-1}\beta_{i,a_{i},t}e_{a}e_{a(i;a_{i}\oplus t)}^ {T},\ \ t\in[0,r-1],i\in[n]\] where \(\beta_{i,u,0}=1\) and \(\beta_{i,u,t}=\prod\limits_{v=u}^{u\oplus(t-1)}\lambda_{i,v}\) for \(t\in[r-1]\) and \(u\in[0,s-1]\). 
Thus, the parity-check equations (9) coordinatewise can be rewritten as: \[\sum\limits_{i=1}^{n}\beta_{i,a_{i},t}c_{i,a(i;a_{i}\oplus t)}=0,\ \ \mathrm{for\ all}\ t\in[0,r-1],a\in[0,l-1]. \tag{16}\] One can check that \(A_{i}A_{j}=A_{j}A_{i}\) and \(A_{i}-A_{j}\) are invertible for any \(i,j\in[n],i\neq j\). Then, the code \(\mathcal{C}\) given by Construction 2 is an MDS array code from Lemma 2. By a little computation, the following properties are obtained, which will be used for later computation. \[A_{i}^{s}=\gamma^{i}I, \prod\limits_{u=0}^{s-1}\lambda_{i,u}=\gamma^{i}\neq 1,\ i\in[n],\] \[\beta_{i,u,t}=\left\{\begin{array}{ll}1&t=0,\\ \prod\limits_{v=u}^{u\oplus(t-1)}\lambda_{i,v}&t\in[s-1],\\ \gamma^{j^{t}}\beta_{i,u,t^{\prime}}&t\in[s,r-1],\ j=\lfloor\frac{t}{s} \rfloor,\ t^{\prime}=t-j\cdot s.\end{array}\right. \tag{17}\] For Construction 2, we first give an special example with \(h=s=2,e=0\) and \(d=n-h\) to illustrate the idea of our repair process. **Example 2**: _Let \(n=6,k=h=2,d=4\), and \(e=0\). Then \(r=4\) and \(s=(d-k+h)/h=2\). By Construction 2, we can obtain a \((6,2,2^{6})\) MSR code over a finite field \(\mathbb{F}\) with \(|\mathbb{F}|\geq 7\). Set \(\mathbb{F}=\mathbb{F}_{7}\) and \(\gamma=3\)._ Assume that nodes 1 and 2 are failed. Next, we show how to recover them. For some fixed \(a\in[0,2^{6}-1]\) with \(a_{1}=a_{2}=0\), we obtain four equations on the coordinates of the failed nodes by (16) for all \(t\in[0,3]\) as follows. \[\left\{\begin{array}{llll}c_{1,(-,0,0)}&+&c_{2,(-,0,0)}&=&-\sum\limits_{j=3 }^{6}c_{j,a}\\ \gamma c_{1,(-,0,1)}&+&\gamma^{2}c_{2,(-,1,0)}&=&-\sum\limits_{j=3}^{6}\gamma ^{j}c_{j,a(j;a_{j}\oplus 1)}\\ \gamma c_{1,(-,0,0)}&+&\gamma^{2}c_{2,(-,0,0)}&=&-\sum\limits_{j=3}^{6}\gamma ^{j}c_{j,a}\\ \gamma^{2}c_{1,(-,0,1)}&+&\gamma^{4}c_{2,(-,1,0)}&=&-\sum\limits_{j=3}^{6} \gamma^{2j}c_{j,a(j;a_{j}\oplus 1)}\end{array}\right.\] where the symbol \({}^{\prime}-^{\prime}\) denotes all other upper digits of the coordinate, and the coefficients are derived from (17). Then, by accessing the values in the set \(\{c_{j,a}:\ a_{1}=a_{2}=0,j\in[3,6]\}\), we can determine \(c_{1,(-,0,0)}\) and \(c_{2,(-,0,0)}\) by the 1st and 3rd equations, and determine \(c_{1,(-0,1)}\) and \(c_{2,(-,1,0)}\) by the 2nd and 4th equations, since their coefficient matrices are invertible from \(\gamma\neq\gamma^{2}\). Similarly, \(c_{1,(-,1,1)}\), \(c_{2,(-,1,1)}\), \(c_{1,(-,1,0)}\) and \(c_{2,(-,0,1)}\) can be determined by accessing the values in the set \(\{c_{j,a}:\ a_{1}=a_{2}=1,\ j\in[3,6]\}\). Note that the parity-check equations for coordinate-elements in the set \[\{a:\ a_{1}=a_{2}=b,\ \ b=0,1\} \tag{18}\] lead to the two digits in the coordinates of nodes 1 and 2 running over \(\mathbb{Z}_{2}^{2}\) exactly once. Thus, all symbols of nodes 1 and 2 can be obtained, and the number of symbols downloaded from four helped nodes is \(4\cdot 2\cdot 2^{4}=(dhl)/(d-k+h)\), which achieves the bound (2). In general, we first show that the code \(\mathcal{C}\) can be repaired optimally by downloading the data from the \(n-h\) available nodes without error. Here, a subset of \(\mathbb{Z}_{s}^{h}\) similar to (18) is required to ensure that all coordinates of the failed nodes are recovered and the repair bandwidth achieves the bound (2). Let \[\Gamma(h,s)=\left\{(a_{h},\cdots,a_{2},a_{1}):\ \sum_{i=1}^{h}a_{i}\equiv 0\ ( \mod s),\ \ a_{i}\in\mathbb{Z}_{s},i\in[h]\right\}. 
\tag{19}\] An element \((a_{h},\cdots,a_{2},a_{1})\) in \(\Gamma(h,s)\) corresponds to a group in our repair scheme, and will be the sub-coordinate \((a_{i_{h}},\cdots,a_{i_{2}},a_{i_{1}})\) of \(a\) in (21). An important property of \(\Gamma(h,s)\) is given as follows. **Lemma 3**: _Let \(\Gamma(h,s)\) be defined in (19). Then_ \[\bigcup_{e\in\Gamma(h,s)}\bigcup_{t=0}^{s-1}e(i;e_{i}\oplus t)=\mathbb{Z}_{s}^{h}, \tag{20}\] _for all \(i\in[h]\)._ _Proof._ Note that any two elements in \(\Gamma(h,s)\) are different at least at two digits. Thus, \[\left(\bigcup_{t=0}^{s-1}e(i;e_{i}\oplus t)\right)\bigcap\left(\bigcup_{t=0}^{s-1}e^{\prime}(i;e^{\prime}_{i}\oplus t)\right)=\emptyset,\quad e,e^{\prime}\in\Gamma(h,s),\ e\neq e^{\prime}\] for all \(i\in[h]\). So the lemma holds from \(|\Gamma(h,s)|=s^{h-1}\) and \(|\bigcup_{t=0}^{s-1}e(i;e_{i}\oplus t)|=s\) for any \(e\in\Gamma(h,s)\) and \(i\in[h]\). \(\Box\) In the following, let \(\mathcal{E}=\{i_{1},i_{2},\cdots,i_{h}\},1\leq i_{1}<\cdots<i_{h}\leq n\) denote the indices of the \(h\) failed nodes, where \(2\leq h\leq r\). We rewrite the parity equations (16) as \[\sum_{j=1}^{h}\beta_{i_{j},a_{i_{j}},t}c_{i_{j},a(i_{j};\;a_{i_{j}}\oplus t)}=-\sum_{i\in[n]\setminus\mathcal{E}}\beta_{i,a_{i},t}c_{i,a(i;\;a_{i}\oplus t)} \tag{21}\] for all \(t\in[0,r-1]\) and \(a\in[0,l-1]\). For each \(j\in[n]\setminus\mathcal{E}\), let \[B_{j,h,s}=\left\{c_{j,a}:\;a\in[0,l-1],\;(a_{i_{h}},\cdots,a_{i_{2}},a_{i_{1}})\in\Gamma(h,s)\right\} \tag{22}\] denote the set of symbols accessed from the available node \(j\). **Lemma 4**: _The \(h\) failed nodes indexed by \(\mathcal{E}\) can be recovered from the values in the sets \(B_{j,h,s}\), \(j\in[n]\setminus\mathcal{E}\), defined by (22)._ _Proof._ Fix \(b=(b_{h},\cdots,b_{2},b_{1})\in\Gamma(h,s)\) and consider the equations (21) for all \(a\) with \((a_{i_{h}},\cdots,a_{i_{2}},a_{i_{1}})=b\) and all \(t\in[0,hs-1]\). By (17), grouping these equations according to \(t\ (\mod s)\) yields, for each group, \(h\) linear equations in \(h\) unknown coordinates of the failed nodes, whose coefficient matrix is of the form \[\left[\begin{array}{cccc}x_{i_{1}}&x_{i_{2}}&\cdots&x_{i_{h}}\\ x_{i_{1}}\gamma^{i_{1}}&x_{i_{2}}\gamma^{i_{2}}&\cdots&x_{i_{h}}\gamma^{i_{h}}\\ \vdots&\vdots&&\vdots\\ x_{i_{1}}\gamma^{(h-1)i_{1}}&x_{i_{2}}\gamma^{(h-1)i_{2}}&\cdots&x_{i_{h}}\gamma^{(h-1)i_{h}}\end{array}\right],\] which
By linearity, for any polynomial \(P(x)\) of degree \(\leq h\), we have \[\sum\limits_{j=1}^{n}P(\gamma^{j})A_{j}^{m}C_{j}=0.\] With \(P(x)=\prod\limits_{k=1}^{h}(x-\gamma^{i_{k}})\), we obtain \[\sum\limits_{j=1}^{n}\prod\limits_{k=1}^{h}(\gamma^{j}-\gamma^{i_{k}})A_{j}^{m }C_{j}=0.\] Since the set of roots of \(P(x)\) is exactly \(\{\gamma^{j},j\in\mathcal{E}\}\), we obtain the desired result. \(\Box\) **Theorem 3**: _The code \(\mathcal{C}\) given by Construction 2 has the UER \((h,d)\)-optimal access property for \(h\leq r\), \(k+2e\leq d\leq n-h\) and \(d\equiv k+2e\ (\mod h)\)._ _Proof._ Note that \(s=(d-2e-k+h)/h\) being an integer implies \(d\equiv k+2e\ (\mod h)\). From Lemma 4, we can repair the failed nodes if all values in the set \(B_{j,h,s},j\in[n]\setminus\mathcal{E}\) defined by (22) are known. Let \(l^{\prime}=s^{n-h}\). For a fixed \(b\in\Gamma(h,s)\), define a function \(f_{b}:[0,l^{\prime}-1]\rightarrow[0,l-1]\) as \[f_{b}(a)=(a_{n-h},\cdots,a_{i_{h}-h+1},b_{h},a_{i_{h}-h},\cdots,a_{i_{2}-1},b_ {2},a_{i_{2}-2},\cdots,a_{i_{1}},b_{1},a_{i_{1}-1},\cdots,a_{1}),\] where \(a\) is an element in \([0,l^{\prime}-1]\) with the \(s\)-ary expansion \((a_{n-h},a_{n-h-1},\cdots,a_{1})\). For a fixed \(b\in\Gamma(h,s)\), define the column vector \(C_{j}^{(b,\mathcal{E})}\in\mathbb{F}^{l^{\prime}}\) as \[C_{j}^{(b,\mathcal{E})}=(c_{j,f_{b}(0)},c_{j,f_{b}(1)},\cdots,c_{j,f_{b}(l^{ \prime}-1)})^{T},\] for all \(j\in[n]\setminus\mathcal{E}\). To prove the theorem, we only need to show that the vectors \[\left(C_{1}^{(b,\mathcal{E})},\cdots,C_{i_{1}-1}^{(b,\mathcal{E})},C_{i_{1}+1} ^{(b,\mathcal{E})},\cdots,C_{i_{h}-1}^{(b,\mathcal{E})},C_{i_{h}+1}^{(b, \mathcal{E})},\cdots,C_{n}^{(b,\mathcal{E})}\right) \tag{25}\] form an \((n-h,d-2e,l^{\prime})\) MDS array code for all \(b\in\Gamma(h,s)\). Let \(\{e_{a}^{(l^{\prime})}:a=0,1,\cdots,l^{\prime}-1\}\) be the standard basis of \(\mathbb{F}^{l^{\prime}}\) over \(\mathbb{F}\). Define \(l^{\prime}\ \times\ l^{\prime}\) matrices: \[B_{j}=\left\{\begin{array}{ll}\sum\limits_{a=0}^{l^{\prime}-1}\lambda_{j,a_{j}}e _{a}^{(l^{\prime})}(e_{a(j;a_{j}\oplus 1)}^{(l^{\prime})})^{T},&j\in[i_{1}-1]\\ \sum\limits_{a=0}^{l^{\prime}-1}\lambda_{j+1,a_{j}}e_{a}^{(l^{\prime})}(e_{a(j ;a_{j}\oplus 1)}^{(l^{\prime})})^{T},&j\in[i_{1},i_{2}-2]\\ \sum\limits_{a=0}^{l^{\prime}-1}\lambda_{j+2,a_{j}}e_{a}^{(l^{\prime})}(e_{a(j ;a_{j}\oplus 1)}^{(l^{\prime})})^{T},&j\in[i_{2}-1,i_{3}-2]\\ \cdots\\ \sum\limits_{a=0}^{l^{\prime}-1}\lambda_{j+h-1,a_{j}}e_{a}^{(l^{\prime})}(e_{ a(j;a_{j}\oplus 1)}^{(l^{\prime})})^{T},&j\in[i_{h-1}-1,i_{h}-2]\\ \sum\limits_{a=0}^{l^{\prime}-1}\lambda_{j+h,a_{j}}e_{a}^{(l^{\prime})}(e_{a(j ;a_{j}\oplus 1)}^{(l^{\prime})})^{T},&j\in[i_{h}-1,n-h]\\ \end{array}\right. \tag{26}\] Thus, (23) implies the following equations \[\begin{array}{ll}&\sum\limits_{j=1}^{i_{1}-1}\left(\prod\limits_{k=1}^{h}( \gamma^{j}-\gamma^{i_{k}})\right)B_{j}^{m}C_{j}^{(b,\mathcal{E})}+\sum\limits _{j=i_{1}}^{i_{2}-2}\left(\prod\limits_{k=1}^{h}(\gamma^{j+1}-\gamma^{i_{k}}) \right)B_{j}^{m}C_{j+1}^{(b,\mathcal{E})}\\ +&\sum\limits_{p=2}^{h-1}\sum\limits_{j=i_{p}-1}^{i_{p+1}-2}\left(\prod \limits_{k=1}^{h}(\gamma^{j+p}-\gamma^{i_{k}})\right)B_{j}^{m}C_{j+p}^{(b, \mathcal{E})}\\ +&\sum\limits_{j=i_{h}-1}^{n-h}\left(\prod\limits_{k=1}^{h}(\gamma^{j+h}- \gamma^{i_{k}})\right)B_{j}^{m}C_{j+h}^{(b,\mathcal{E})}=0\end{array} \tag{27}\] for all \(m\in[0,r-hs-1]\) and all \(b\in\Gamma(h,s)\). 
Next, we observe that \(B_{j}\) is invertible for any \(j\in[n-h]\), \(B_{j_{1}}B_{j_{2}}=B_{j_{2}}B_{j_{1}}\), and \(B_{j_{1}}-B_{j_{2}}\) is invertible for any \(j_{1},j_{2}\in[n-h]\), \(j_{1}\neq j_{2}\) (see Proposition 1 in Appendix A). Furthermore, \(r-hs=r-h\cdot(d-2e-k+h)/h=(n-h)-(d-2e)\). Thus, \[\left[\begin{array}{cccc}I&I&\cdots&I\\ B_{1}&B_{2}&\cdots&B_{n-h}\\ \vdots&\vdots&\vdots&\vdots\\ B_{1}^{r-h.s-1}&B_{2}^{r-h.s-1}&\cdots&B_{n-h}^{r-h.s-1}\end{array}\right]\] is a parity-check matrix of an \((n-h,d-2e,l^{\prime})\) MDS array code. Moreover, the coefficients before \(B_{j}\) in (27) are nonzero for \(\gamma^{x}\neq\gamma^{y},x\neq y\). Since multiplying each block column with a nonzero constant does not change the MDS property, the vectors in (25) form an \((n-h,d-2e,l^{\prime})\) MDS array code \(\mathcal{C}_{b}\), \(b\in\Gamma(h,s)\). Therefore, if we access any \(d\) out of \(n-h\) vectors in this code, we can reconstruct the \(n-h\) vectors and further recover the \(h\) failed nodes as long as the number of erroneous nodes among the helper nodes is no more than \(e\). The amount of total data accessed is \(d\cdot|\Gamma(h,s)|\cdot l^{\prime}=d\cdot s^{n-1}=(dhl)/(d-2e-k+h)\), which meets the bound (4). This completes the proof. **Example 3**: _Let \((n,k,h,d,e)=(15,4,3,9,1)\), then \(r=11\) and \(s=(d-2e-k+h)/h=2\). Let \(\mathbb{F}=\mathbb{F}_{2^{4}}\) and \(\gamma\) be a primitive element of \(\mathbb{F}\). Consider the \((15,4,2^{15})\) array code \(\mathcal{C}\) over \(\mathbb{F}\) defined by Construction 2._ Assume that nodes 1, 2 and 3 are failed. For some fixed \(a\in[0,2^{15}-1]\) with \((a_{3},a_{2},a_{1})\in\Gamma(3,2)\) (see (19)), we obtain six equations on the coordinates of the failed nodes by (16) for \(t\in[0,hs-1=5]\) as follows. (Here, we set \(a=(-,000)\), and '\(-\)' stands for all other binary bits) \[\left\{\begin{array}{lcrclcrcl}c_{1,(-,0,0,0)}&+&c_{2,(-,0,0,0)}&+&c_{3,(-,0,0,0)}& =&-\sum\limits_{j=4}^{15}c_{j,a}\\ \gamma c_{1,(-,0,0,1)}&+&\gamma^{2}c_{2,(-,0,1,0)}&+&\gamma^{3}c_{3,(-,1,0,0)}& =&-\sum\limits_{j=4}^{15}\gamma^{j}c_{j,a(j;a_{j}\oplus 1)}\\ \gamma c_{1,(-,0,0,0)}&+&\gamma^{2}c_{2,(-,0,0,0)}&+&\gamma^{3}c_{3,(-,0,0,0)}& =&-\sum\limits_{j=4}^{15}\gamma^{j}c_{j,a}\\ \gamma^{2}c_{1,(-,0,0,1)}&+&\gamma^{4}c_{2,(-,0,1,0)}&+&\gamma^{6}c_{3,(-,1,0,0 )}&=&-\sum\limits_{j=4}^{15}\gamma^{2j}c_{j,a(j;a_{j}\oplus 1)}\\ \gamma^{2}c_{1,(-,0,0,0)}&+&\gamma^{4}c_{2,(-,0,0,0)}&+&\gamma^{6}c_{3,(-,0,0,0 )}&=&-\sum\limits_{j=4}^{15}\gamma^{2j}c_{j,a}\\ \gamma^{3}c_{1,(-,0,0,1)}&+&\gamma^{6}c_{2,(-,0,1,0)}&+&\gamma^{9}c_{3,(-,1,0, 0)}&=&-\sum\limits_{j=4}^{15}\gamma^{3j}c_{j,a(j;a_{j}\oplus 1)}\end{array}\right. \tag{28}\] where the coefficients are derived from (17). Then with the values in the set \(\{c_{j,a}:\ a_{1}=a_{2}=a_{3}=0,\ j\in[4,15]\}\), we can determine \(c_{1,(-,0,0,0)}\), \(c_{2,(-,0,0,0)}\), \(c_{3,(-,0,0,0)}\) by the 1st, 3rd and 5th equations, and determine \(c_{1,(-,0,0,1)}\), \(c_{2,(-,0,1,0)}\) and \(c_{3,(-,1,0,0)}\) by the 2nd, 4th and 6th equations, since their coefficient matrices are invertible from \(\gamma\neq\gamma^{2}\neq\gamma^{3}\). However, only \(d=9\) helper nodes are set and one of them may provide the erroneous information. To this end, we will show there exists another \((12,7,2^{12})\) MDS array code associated with all available nodes. Let \(b=(0,0,0)=(a_{3},a_{2},a_{1}),\ \mathcal{E}=\{1,2,3\}\). 
Let \(C_{j}^{(b,\mathcal{E})}=(c_{j,0},\ c_{j,8},\ \cdots,\ c_{j,2^{15}-8})^{T}\in \mathbb{F}^{2^{12}},j\in[4,15]\). Define \(\mathcal{C}_{b}=(C_{4}^{\ (b,\mathcal{E})},C_{5}^{\ (b,\mathcal{E})},\cdots,C_{15}^{ \ (b,\mathcal{E})})\). By the method in the proof of Theorem 3, \(\mathcal{C}_{b}\) is proved to be an \((n-h,d-2e,s^{n-h})=(12,7,2^{12})\) MDS array code. Then any \(9\) columns of \(\mathcal{C}_{b}\) also form a \((9,7,2^{12})\) MDS array code, which can correct \(e(=1)\) error. Thus, any \(9\) columns in \(\mathcal{C}_{b}\) can represent all columns in \(\mathcal{C}_{b}\) as long as the number of erroneous columns is no more than \(e\). So by (28), we obtain the values \(\{c_{1,(-,0,0,0)},c_{1,(-,0,0,1)},\ c_{2,(-,0,0,0)},c_{2,(-,0,1,0)},\ c_{3,(-,0,0,0)},c_{3,(-,1,0,0)}\}\) by downloading \(s^{n-h}\) symbols from each helper node. Lemma 3 implies that all symbols of nodes 1, 2, and 3 can be obtained by taking the parity-check equations for all coordinate-elements in the set \(\{a:\ (a_{3},a_{2},a_{1})\in\Gamma(3,2)\}\). The total number of symbols downloaded from helped nodes is \(d\cdot s^{h-1}s^{n-h}=9\cdot 2^{14}\), which achieves the bound (4). **Remark 5**: _For the same parameters \(n,k,h,d\) in Example 3, the sub-packetization of MSR array codes in [7] is_ \[\left(\mathrm{lcm}(d-k+1,d-k+2,d-k+3)\right)^{n}=\left(\mathrm{lcm}(6,7,8) \right)^{n}=(3\cdot 7\cdot 2^{3})^{15}.\] _The sub-packetization level of our array code is reduced by a factor of \((3\cdot 7\cdot 2^{2})^{15}\). Reducing the sub-packetization of MSR codes is of great significance in practice. For \(h=3,e=1\), our code at least reduces the sub-packetization by a factor of \((3(d-k+2))^{n}\), and \((3(d-k+2)(d-k+3))^{n}\) in the best scenario._ ## V Conclusion In this paper, a new explicit centralized repair scheme is proposed, and applied in two constructions of MSR codes associated with diagonal matrices and permutation matrices respectively. For \(h\) failed nodes, \(d\) helper nodes with \(e\) adversaries, the sub-packetization \(l\) is assigned \(\left((d-2e-k+h)/h\right)^{n}\). When \(h=1\), the codes are same as those codes for a single failed node in [7]. When \(h\geq 2\), the sub-packetization of our codes is smaller than that of codes in [7], where \(l\) is \(\left(\mathrm{lcm}(d-k+1,\cdots,d-k+h)\right)^{n}\), and at least \((d-k+h)^{n}(d-k+h-1)^{n}\) for any \(h\geq 2\). Unfortunately, since \((d-2e-k+h)/h\) is required to be an integer, our constructions only work for \(d\equiv k+2e\ (\mod h)\) helper nodes to repair \(h\) failed nodes, and cannot work for an arbitrary \(k+2e\leq d\leq n-h\). Therefore, our next task is to find an explicit centralized repair scheme for more admissible parameters \(h,d\) with small sub-packetization. ## Appendix A The properties of \(B_{j}\) in (26) **Proposition 1**: _Let \(B_{j},j\in[n-h]\) be defined by (26), then_ 1. \(B_{j}\) _is invertible for every_ \(j\in[n-h]\)_,_ 2. \(B_{j_{1}}B_{j_{2}}=B_{j_{2}}B_{j_{1}}\) for any \(j_{1},j_{2}\in[n-h]\) and \(j_{1}\neq j_{2}\), 3. \(B_{j_{1}}-B_{j_{2}}\) is invertible for any \(j_{1},j_{2}\in[n-h]\) and \(j_{1}\neq j_{2}\). Proof.: For easy clarifying the key idea of the proof, we only prove the properties in the case \(h=2\). For other cases, it can be proved in a similar way. Let \(n^{\prime}=n-2\) and \(l^{\prime}=s^{n^{\prime}}\). We first verify the invertibility of \(B_{j}\) for every \(j\in[n^{\prime}]\). 
According to (26), \(|B_{j}|\) is equal to \(\pm\gamma^{j\cdot l^{\prime}/s}\), \(\pm\gamma^{(j+1)\cdot l^{\prime}/s}\) or \(\pm\gamma^{(j+2)\cdot l^{\prime}/s}\) depending on the range of \(j\), and hence nonzero. Thus, \(B_{j}\) is invertible for every \(j\in[n^{\prime}]\). Next, we verify the properties ii) and iii). Without loss of generality, assume that \(j_{1}\in[i_{1}-1]\) and \(j_{2}\in[i_{1},i_{2}-2]\). For the commutativity, we have \[B_{j_{1}}B_{j_{2}}=B_{j_{2}}B_{j_{1}}=\sum_{a=0}^{l^{\prime}-1}\lambda_{j_{1},a_{j_{1}}}\lambda_{j_{2}+1,a_{j_{2}}}e_{a}^{(l^{\prime})}(e_{a(j_{1},j_{2};\,a_{j_{1}}\oplus 1,\,a_{j_{2}}\oplus 1)}^{(l^{\prime})})^{T}.\] We now prove the invertibility of \(B_{j_{1}}-B_{j_{2}}\). Suppose that \(B_{j_{1}}x=B_{j_{2}}x\) for some vector \(x\in\mathbb{F}^{l^{\prime}}\). Let \(\{e_{a}^{(l^{\prime})}:a=0,1,\cdots,l^{\prime}-1\}\) be the standard basis of \(\mathbb{F}^{l^{\prime}}\) over \(\mathbb{F}\), and \(x=\sum_{a=0}^{l^{\prime}-1}x_{a}e_{a}^{(l^{\prime})}\), where \(x_{a}\in\mathbb{F}\). Then \[B_{j_{1}}x=\sum_{a=0}^{l^{\prime}-1}\lambda_{j_{1},a_{j_{1}}}x_{a(j_{1};a_{j_{1}}\oplus 1)}e_{a}^{(l^{\prime})},\ B_{j_{2}}x=\sum_{a=0}^{l^{\prime}-1}\lambda_{j_{2}+1,a_{j_{2}}}x_{a(j_{2};a_{j_{2}}\oplus 1)}e_{a}^{(l^{\prime})}.\] Therefore, \[\lambda_{j_{1},a_{j_{1}}}x_{a(j_{1};a_{j_{1}}\oplus 1)}=\lambda_{j_{2}+1,a_{j_{2}}}x_{a(j_{2};a_{j_{2}}\oplus 1)} \tag{29}\] for all \(a\in[0,l^{\prime}-1]\). Since \(\lambda_{i,u}\neq 0\) for all \(i\in[n]\) and \(u\in[0,s-1]\), we can rewrite (29) as \[x_{a}=\frac{\lambda_{j_{2}+1,a_{j_{2}}}}{\lambda_{j_{1},a_{j_{1}}\ominus 1}}x_{a(j_{1},j_{2};\,a_{j_{1}}\ominus 1,\,a_{j_{2}}\oplus 1)}.\] Repeating this operation, we obtain \[x_{a}=\frac{\lambda_{j_{2}+1,a_{j_{2}}}}{\lambda_{j_{1},a_{j_{1}}\ominus 1}}\cdot\frac{\lambda_{j_{2}+1,a_{j_{2}}\oplus 1}}{\lambda_{j_{1},a_{j_{1}}\ominus 2}}x_{a(j_{1},j_{2};\,a_{j_{1}}\ominus 2,\,a_{j_{2}}\oplus 2)}=\cdots=\frac{\lambda_{j_{2}+1,a_{j_{2}}}}{\lambda_{j_{1},a_{j_{1}}\ominus 1}}\cdots\frac{\lambda_{j_{2}+1,a_{j_{2}}\oplus(s-1)}}{\lambda_{j_{1},a_{j_{1}}\ominus s}}x_{a(j_{1},j_{2};\,a_{j_{1}}\ominus s,\,a_{j_{2}}\oplus s)}=\frac{\prod_{u=0}^{s-1}\lambda_{j_{2}+1,a_{j_{2}}\oplus u}}{\prod_{u=0}^{s-1}\lambda_{j_{1},a_{j_{1}}\ominus u}}x_{a}=\gamma^{j_{2}+1-j_{1}}x_{a}\] for all \(a\in[0,l^{\prime}-1]\). Since \(\gamma^{j_{2}+1-j_{1}}\) is not equal to \(0\) or \(1\), we have \(x_{a}=0\) for all \(a\in[0,l^{\prime}-1]\). Thus, \((B_{j_{1}}-B_{j_{2}})x=0\) implies \(x=0\), and we conclude that \(B_{j_{1}}-B_{j_{2}}\) is invertible. This completes the proof.
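The sub-packetization comparison made in Remarks 4 and 5 (and summarized in the conclusion) is straightforward to tabulate; the short sketch below does so for the parameters of Examples 1 and 3. The helper-function names are ours, and the sketch is purely an arithmetic companion to the remarks.

```python
# Sub-packetization comparison from Remarks 4 and 5: the constructions above
# versus the multi-node repair codes of [7]. The parameter tuples are those of
# Examples 1 and 3; the function names are illustrative only.
from math import lcm

def subpacketization_ours(n, k, h, d, e):
    s, rem = divmod(d - 2 * e - k + h, h)
    assert rem == 0, "the construction requires d = k + 2e (mod h)"
    return s ** n

def subpacketization_prior(n, k, h, d):
    return lcm(*range(d - k + 1, d - k + h + 1)) ** n

for (n, k, h, d, e) in [(11, 3, 2, 7, 1), (15, 4, 3, 9, 1)]:
    ours = subpacketization_ours(n, k, h, d, e)
    prior = subpacketization_prior(n, k, h, d)
    # For these parameters the ratio is exact: 15^11 and 84^15 respectively.
    print(f"(n,k,h,d,e)=({n},{k},{h},{d},{e}): l = {ours} vs {prior} (ratio {prior // ours})")
```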
2310.00460
Analysis of osteoporotic tissue using combination nonlinear optical imaging
Currently, a large number of stored tissue samples are unavailable for spectroscopic study without the time consuming and destructive process of paraffin removal. Instead, a structurally sensitive technique, sum frequency generation, and a chemically sensitive technique, coherent anti-Stokes Raman scattering enables imaging through the paraffin. This method is demonstrated by imaging collagen in mouse tibia. We introduce a statistical method for separating images by quality and, with the aid of machine learning, distinguish osteoporotic and healthy bone. This method has the potential to verify the results of previous studies and reduce new sample production by allowing retesting results with spectroscopy.
Bryan Semon, Michael Jaffe, Haifeng Wang, Lauren Priddy, Gombojav Ariunbold
2023-09-30T18:43:04Z
http://arxiv.org/abs/2310.00460v1
# Analysis of osteoporoportic tissue using combination nonlinear optical imaging ###### Abstract Currently, a large number of stored tissue samples are unavailable for spectroscopic study without the time consuming and destructive process of paraffin removal. Instead, a structurally sensitive technique, sum frequency generation, and a chemically sensitive technique, coherent anti-Stokes Raman scattering enables imaging through the paraffin. This method is demonstrated by imaging collagen in mouse tibia. We introduce a statistical method for separating images by quality and, with the aid of machine learning, distinguish osteoporotic and healthy bone. This method has the potential to verify the results of previous studies and reduce new sample production by allowing retesting results with spectroscopy. **Keywords:** Spectroscopy, Non-linear optics, CARS, SFG, Imaging, Microscopy, Machine Learning ## 1 Introduction There are approximately one billion formalin-fixed, paraffin-embedded (FFPE) tissue samples currently in storage worldwide.[1,2] This wealth of data is largely inaccessible to spectroscopic techniques as the paraffin itself has a strong signal in many of the same regions as biological material.[3-5] While there are methods to remove the paraffin[5], stain the tissue before embedding[6], or digitally remove the wax contribution[7-9], each has significant drawbacks. Removing the wax is time consuming and, since the preservation is being undone, leaves the sample effectively unable to be stored again.[5] Staining the sample is far more useful for optical imaging as most stains give off an exceptionally strong fluorescence signal, obscuring any other spectroscopic signal.[6] Digitally removing the wax will never perfectly recover the obscured data as well as tending to introduce artifacts.[9] In this paper we introduce a method of imaging collagen in FFPE samples that is both label-free and nondestructive. This is done by combining two spectroscopic techniques coherent anti-Stokes Raman scattering (CARS) and sum frequency generation (SFG). CARS is a scattering process that generates unique spectra from the vibrotational states of molecules. It produces the same spectral peaks as traditional Raman but with \(\sim\)10\({}^{6}\)-fold increase in signal generation.[10] SFG is a multiphoton absorption and reemission process wherein two photons are absorbed, and one photon is emitted. The emitted photon has energy equal to the sum of the incident photons. Since the input beams are spectrally broad for SFG generation, our signal is correspondingly broad. Combining the structural sensitivity of SFG[11] and the chemical sensitivity of CARS [12,13] enabled mapping of both the paraffin and the collagen. This technique also allows for the construction of arbitrary sized mosaic images to be created in an automated way, allowing for large scale features to be captured while maintaining the high resolution of microscopy. 
To demonstrate this, we imaged collagen in the tibiae of both healthy mice and mice with alcohol-induced osteoporosis, which involves structural changes in the bone such as a decrease in enzymatic crosslinking and an overall decrease in bone density.[14-17] Given the large sample size and relative subtlety of the expected structural changes, a machine learning model was used to classify the samples, as machine learning typically excels under these circumstances.[18] A statistical method to separate images by quality was also used, the Spatial Q Test.[19,20] This significantly reduces the size of the data set by automating the removal of images that do not contain collagen. We believe that being able to extract additional information from previously stored samples is invaluable, allowing for the continued reuse of already prepared samples both to verify old studies and to develop new insights based on the original studies using different techniques. ## 2 Methods ### Experimental setup An ultrafast Ytterbium-doped fiber pulse laser (Clark-MXR) was used to generate the CARS and SFG signal used here. The initial beam was centered at 1035 nm and had a repetition rate of 1 MHz and an initial power of 9.6 W. This beam was passed into a non-colinear optical parametric amplifier (NOPA) to create the three beams necessary: a 1035 nm pump beam, an \(\sim\)800 nm Stokes beam, and a 517 nm probe beam. The initial beam had a significant spectral width, so the probe beam needed to be passed through a pulse shaper. After the shaper, its spectral width was 10 cm-1. The Stokes and pump beams also passed through adjustable neutral density filters to reduce power and avoid degradation of the sample. All three beams were recombined with dichroic mirrors and focused onto the sample through a 10 cm achromatic lens. At the sample, the pump beam had a power of 50 mW, the Stokes 35 mW, and the probe 4 mW. The signal was collected by a long working distance objective lens with a magnification of 50x. The objective lens was infinity-corrected, requiring the use of a tube lens (20 cm focal length) to form an image. Two filters were used in combination to remove any light above 500 nm. In order to do near-simultaneous imaging in CARS, SFG, and optical, automatic shutters were needed to block beams. Two beam blocks were 3D printed and motor controls were inserted. The motors were controlled by an Arduino that also communicated with the electron multiplying charge-coupled device (EMCCD) and the sample stage. With this setup, an image was taken with all three beams present (CARS), the probe shutter was closed and another image taken (SFG), and then the pump shutter was closed, taking an optical (~800 nm) image. The stage was then moved to a new position and the process repeated. In this way, arbitrarily large mosaic images of the sample in CARS, SFG, and optical were created. Since the images were gathered nearly simultaneously, minimal error (e.g., through beam conditions changing, sample degradation, or misalignment of the sample) was introduced. Figure 1: Simplified diagram of the experimental setup highlighting the Arduino-controlled shutters, including the process of obtaining samples. ### Sample Preparation The animal protocol for this study was approved by the Institutional Animal Care and Use Committee at the University of Southern California (Los Angeles, CA).
Alcoholic hepatitis was induced in 8-week-old male C57B/6 mice by feeding a solid Western diet high in cholesterol and saturated fat (HCFD) or regular mouse chow (control) ad libitum for two weeks. Implantation of an intragastric (IG) catheter was performed and IG feeding of ethanol and a high-fat liquid diet (corn oil as 37.1 Cal% [calorie percentage]) at 60% of total daily caloric intake was initiated. Non-alcohol-treated (control) mice were fed a similar high-fat diet. The remaining 40 Cal% was consumed by ad libitum intake of diet high in cholesterol and saturated fat.[21] The amount of alcohol administered to achieve sufficient ethanol intake and blood alcohol levels (BALs) while minimizing the risk of over intoxication was increased in a step-wise progression over 4 weeks. The amount of ethanol fed through the IG catheter increased to 33 g/kg/day over a four week-period from an initial dose of 22.7 g/kg/day.[21] Beginning the second week of the IG feeding, ethanol IG infusion was withdrawn for 5-6 hours and a bolus (3.5-5 g/kg) of ethanol equivalent to that which was withdrawn was given IG, thus mimicking a situation seen in binge drinking in people. The pathology noted in these mice includes a 40-fold to 80-fold increase in osteopontin (OPN) mRNA in the liver. Osteopontin has been associated with the development of bone-related disorders such as osteoporosis. Osteopontin is a phosphoprotein normally secreted by osteoblasts and regulates bone mass by changing local bone remodeling. Abnormal expression of OPN is involved in the development of several metabolic bone disorders, including osteoporosis. [22] Tibiae were harvested, snap-frozen, and placed in 10% neutral buffered formalin for 48 hours followed by dehydration in 70% and 95% ethanol. Samples were then embedded in paraffin at ~58\({}^{\circ}\)C and then cooled at room temperature. The tissue and paraffin were sectioned at 5um with a microtome warmed to 37\({}^{\circ}\)C and the ribbon of tissue/paraffin placed in a warm water bath at 40-45\({}^{\circ}\)C. During this process, the paraffin was removed. The tissue samples are then placed on a glass slide and dried at 37\({}^{\circ}\)C overnight. The slides were then placed on a warming block at 65\({}^{\circ}\)C to melt the wax and bond the tissue to the glass slide. The slide and tissue sample were then stained with hematoxylin and eosin stain. ### Q score analysis Q score provides a quick and statistically valid way to evaluate the relative heterogeneity of an area within an image in comparison with a larger area in the same image.[23,24] An image of quality should have high heterogeneity (areas with high signal and areas with low/no signal). This is especially true of collagen since it comprises long molecules with gaps in between.[24] The Q score is a measure of the variance of subregions compared to the variance of the area as a whole. The spatial Q score is defined by the following equation[25]: \[Q=1-\frac{\sum_{j}^{M}N_{vj}\sigma_{vj}^{2}}{N_{\nu}\sigma_{v}^{2}}\] Where N\({}_{\rm vj}\) is the number of data points (pixels in this case) in the j\({}^{\rm th}\) subset of v, \(\sigma_{vj}^{2}\) is the variance of the j\({}^{\rm th}\) subset of v, N\({}_{\rm v}\) is the number of data points in v, \(\sigma_{v}^{2}\) is the variance of v, and M is the number of subsets. Each image of the mosaic can be divided into subregions and then each image can have its Q score calculated. 
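As a concrete reading of the definition above, a minimal Python sketch of the spatial Q score for an already binarized image is given below. The binarization threshold and the subregion size are the free choices discussed next; the function name and the toy example are ours and are not part of the study.

```python
# A minimal sketch of the spatial Q score defined above. The image is assumed
# to have been binarized already (thresholding is discussed in the next
# paragraph); the subregion size is a free choice. Names are ours.
import numpy as np

def spatial_q_score(binary_image, subregion=10):
    """Q = 1 - (sum_j N_vj * var_vj) / (N_v * var_v) over non-overlapping subregions."""
    v = np.asarray(binary_image, dtype=float)
    n_v, var_v = v.size, v.var()
    if var_v == 0:                       # completely empty (or completely full) image
        return 0.0
    weighted = 0.0
    rows, cols = v.shape
    for r in range(0, rows, subregion):
        for c in range(0, cols, subregion):
            block = v[r:r + subregion, c:c + subregion]
            weighted += block.size * block.var()
    return 1.0 - weighted / (n_v * var_v)

# Example: a structured (striped, collagen-like) binary image scores higher
# than uniform random noise.
rng = np.random.default_rng(0)
stripes = np.tile(np.repeat([1, 0], 10), (100, 5))        # 100 x 100 alternating stripes
noise = (rng.random((100, 100)) > 0.5).astype(int)
print(spatial_q_score(stripes), spatial_q_score(noise))   # close to 1.0 vs close to 0.0
```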
Importantly, the spatial Q test allows only binary inputs, so all pixels must be assigned either 1 or 0. To do this, a threshold was picked between 0 and 1 and everything was sorted either to 1 if it was above the threshold or 0 if it was below. The size and therefore number of subregions is also a choice. The size of the subregions should be of a similar order to the size of the structure of interest. In order for this to be statistically valid, these choices should not have an impact on the relative scores of samples. ### Machine Learning Machine learning was performed in MATLAB using the machine learning toolbox. The provided 'OptimizeHyperparameters' function was used to obtain a rough estimate of the best method, learning cycles, leaf size, etc. From there, parameters were manually tuned to minimize the misclassification rate. A deep tree was selected, as the difference between osteoporotic bone and the control bone was expected to be extremely subtle and computation time was not a significant factor. The images were processed and analyzed in the following way: First the background levels were subtracted from each image. Since the area in which the laser forms signal is significantly smaller than the full image, pixels from outside the signal forming region were selected as a background. The images were then cropped down to a 100 by 100 pixel region containing only the signal forming region and their maximum brightness was normalized to one. Empty images (i.e., images containing no bone sample) were then excluded via Q score, reducing the number of images from 1620 (45 samples with a 36-by-36 mosaic each) to 669 images that contained SFG signal. This was approximately equally split between control and osteoporotic with 327 of the former and 342 of the latter. A variety of image measurements were taken including Q score, I\({}^{2}\) score, energy, and entropy (see Figure 4 for a full list). Measurements for contrast, correlation, energy, and homogeneity were calculated from an eight-level grey scale co-occurrence matrix. This data was labeled and then randomly split with 30% of the data being withheld for testing while 70% was used for training. ## 3 Results and Discussion ### SFG vs CARS SFG imaging is a powerful tool to image collagen fibers without the paraffin adding noise to the image. Figure 2 shows a comparison of the same area imaged with CARS, SFG, and optical. There was a significant amount of CARS signal exclusively from the paraffin, which had strong Raman lines around 3000 cm-1. SFG, however, is more structurally sensitive than chemically sensitive. This means it generates signal only in the collagen fibers, not the paraffin. By comparing the two images, the paraffin was easily distinguishable from collagen. The optical image is used for background subtraction. Since the CARS image had the beams necessary to create an SFG and optical image, the CARS image will always be a stack of all three images, while the SFG will be a stack of the SFG and optical images. Figure 2: Image of mouse tibia in CARS with SFG overlay in magenta (a), CARS (b), SFG (c), and optical (d). The striations present in (c) and highlighted in (a) are collagen fibers. ### Analysis of Q score and Machine Learning Outcomes The Q scores for the osteoporotic bone and control bone had a similar average. Visual inspection of the various images agreed with this similarity. Interestingly, the osteoporotic bone had a higher variance in Q scores than the control bone.
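For reference, a rough Python analogue of the feature-extraction and ensemble-training pipeline described in the Machine Learning subsection above is sketched below; scikit-image and scikit-learn stand in for the MATLAB toolboxes, the feature list is abbreviated, and all names are our own, so this is an approximation of the reported analysis rather than a reproduction of it.

```python
# A rough Python analogue of the classification pipeline described above (the
# study itself used MATLAB's machine learning toolbox). Eight-level GLCM
# texture features and image entropy feed an AdaBoost tree ensemble with a
# 70/30 train-test split; the feature set shown here is abbreviated.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.measure import shannon_entropy
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def texture_features(image):
    """Feature vector for one background-subtracted, normalized 100x100 crop."""
    levels = (np.clip(image, 0, 1) * 7).round().astype(np.uint8)   # 8-level grey scale
    glcm = graycomatrix(levels, distances=[1], angles=[0, np.pi / 2],
                        levels=8, symmetric=True, normed=True)
    feats = [graycoprops(glcm, prop)[0, a]                          # horizontal / vertical
             for prop in ("contrast", "correlation", "energy", "homogeneity")
             for a in (0, 1)]
    feats.append(shannon_entropy(image))
    # The spatial Q score from the earlier sketch would be appended here as a
    # further predictor (it was the strongest one in Figure 4).
    return np.array(feats)

def train_bone_classifier(images, labels):
    """images: iterable of 2-D arrays; labels: 0 = control, 1 = osteoporotic."""
    X = np.stack([texture_features(im) for im in images])
    X_tr, X_te, y_tr, y_te = train_test_split(X, np.asarray(labels),
                                              test_size=0.3, random_state=0)
    clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=None), n_estimators=96)
    clf.fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)
```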
Looking at this variance for differing thresholds of the Q score shows that this was not a statistical anomaly, and that the Q score results hold independent of the choice of threshold. Note that the very low and very high thresholds were omitted, as too much noise starts to be included or too much signal is omitted. The best performing machine learning model achieved a misclassification rate of 18.9%. It used the AdaBoostM1 method with a minimum leaf size of 1, 96 learning cycles, and 65 maximum splits. Figure 4 shows the predictor importance of the best performing ensemble. As can be seen, both the vertical and horizontal correlation were more important than their similar measurements. The Bisque score and entropy were also notably important predictors. The strongest predictor, however, was the Q score. This lends credence to the idea that Q score is a useful measurement of image quality, though on its own it was insufficient to distinguish osteoporotic bone from control (healthy) bone. Figure 4: The relative importance of predictors in the best performing model (AdaBoostM1). The importance was calculated by summing the change in node risk when splitting on a predictor. Figure 3: Box plot of Q scores at a 50% threshold (left). The difference in variance of Q scores for osteoporotic and healthy bone (right). For most thresholds osteoporotic bone has a higher variance in Q scores. The 81% accuracy achieved is lower than we expected for a machine learning model. This we attribute to several key factors. First, sample-to-sample (i.e., biological) variation; the extent of osteoporosis across individual animals is unknown. Whether or not the structural change due to osteoporosis is uniformly spread across each bone (i.e., within-sample variation) is also unknown. We image a very small area (\(\sim\)1600 \(\mu\mathrm{m}^{2}\)) of each bone; therefore, it is possible that some images of ostensibly osteoporotic bone are actually images of relatively healthy portions of the bone. Second, although we have over 600 'good images' of bone, we only worked with forty-five bones in total. Because of the tiling/stitching image method, the twenty-five images from each bone are adjacent to one another, which further exacerbates the issue of limited imaging area. ## 5 Conclusion SFG imaging provides a method to avoid paraffin contamination in spectral analysis of FFPE tissue, enabling the distinction of the strong Raman signature of the paraffin from the strong SFG signal of the collagen. This novel combinatorial method would allow for more efficient chemical imaging of the large repository of FFPE tissue currently in storage. Imaging in this way produces massive data sets that demand a level of automation. The spatial Q score offers a quick and automated way to evaluate large numbers of images and determine their quality, greatly reducing the size of the data set that must be analyzed in detail. Machine learning also shows promise in separating osteoporotic bone from healthy bone, which may aid in disease diagnoses and general analysis of these large data sets. ## Acknowledgements We would like to thank Dr. Hidekazu Tsukamoto at the Keck School of Medicine at the University of Southern California for his help in preparing the mice. ## Conflict of Interest Statement The authors have no relevant conflicts of interest to disclose.
2309.12107
A Computational Analysis of Vagueness in Revisions of Instructional Texts
WikiHow is an open-domain repository of instructional articles for a variety of tasks, which can be revised by users. In this paper, we extract pairwise versions of an instruction before and after a revision was made. Starting from a noisy dataset of revision histories, we specifically extract and analyze edits that involve cases of vagueness in instructions. We further investigate the ability of a neural model to distinguish between two versions of an instruction in our data by adopting a pairwise ranking task from previous work and showing improvements over existing baselines.
Alok Debnath, Michael Roth
2023-09-21T14:26:04Z
http://arxiv.org/abs/2309.12107v1
# A Computational Analysis of Vagueness in Revisions of Instructional Texts ###### Abstract _WikiHow_ is an open-domain repository of instructional articles for a variety of tasks, which can be revised by users. In this paper, we extract pairwise versions of an instruction before and after a revision was made. Starting from a noisy dataset of revision histories, we specifically extract and analyze edits that involve cases of vagueness in instructions. We further investigate the ability of a neural model to distinguish between two versions of an instruction in our data by adopting a pairwise ranking task from previous work and showing improvements over existing baselines. ## 1 Introduction Instructional texts aim to describe the actions necessary to accomplish a task or goal, in as clear and concise a manner as possible. _WikiHow_1 is an extensive compendium of instructional guides for various topics and domains. Any user may edit the articles, and _WikiHow_ collates these revision histories. The edit history of such informal instructional articles is a source of user-generated data that can help identify possible reasons and necessities for editing. _wikiHowToImprove_(Anthonio et al., 2020) is a dataset that compiles revision histories for the analysis of linguistic phenomena that occur in edits of instructional texts, ranging from the correction of typos and grammatical errors to the clarification of ambiguity and vagueness. Footnote 1: [https://www.wikihow.com/](https://www.wikihow.com/) In this paper, we focus on cases of lexical vagueness, defined as "lexeme[s] with a single but non-specific meaning" (Tuggy, 1993), which can potentially cause misunderstandings in instructional texts. Specifically, we study vagueness based on the change in the main verb in the original and revised version of an instruction. We say that an instruction was vague if, upon revision, the revised main verb is contextually more specific than the original version. Some examples of vague and clarified instructions are provided in Table 1. As indicated by the examples, the revised verb is usually more specific in that it provides additional information on how or why an action needs to be taken. The classification of vague and clarified instructions is a first step towards automatic text editing for clarification based on linguistic criteria such as ambiguity and vagueness at a sentence level. Existing tools for text editing focus on text simplification and fact editing (Malmi et al., 2019), while others are designed for grammatical error correction (Xie et al., 2018). Our work acts as the first step towards automated editing based on linguistic criteria by identifying vague instructions and differentiating them from "clarified" ones. Our use of the _wikiHowToImprove_ corpus also utilizes a resource of edit pairs, therefore introducing a new dataset for the linguistic study of vagueness as well as exploring the general versatility of such corpora. Our contributions are to create a dataset of vague and clarified instructions, provide an analysis based on semantic frames, and demonstrate the first results of a neural model's ability to dis \begin{table} \begin{tabular}{l l} \hline \hline **Original Sentence** & **Revised Sentence** \\ \hline Then, **make** the floor and walls of your house. & Then, **design** the floor and walls of your house. \\ \hline When you **go** to the Hogwarts park... & When you **visit** the Hogwarts park... \\ \hline **Get** a flexible single cord. & **Purchase** a flexible single cord. 
\\ \hline \hline \end{tabular} \end{table} Table 1: Examples of vague instructions and their more clarified versions from the _wikiHowToImprove_ Dataset tinguish the two versions. We create and analyze the dataset by extracting relevant instances from _wikiHowToImprove_, using POS tags, dependency features, and edit distance as constraints, as well as FrameNet frames as features (Section 3). We then devise a pairwise ranking task, where we train and evaluate different neural models and analyze their performance based on frame relations and differences in distributional word representations (Section 4). ## 2 Related Work Our paper focuses on revisions in wikiHow for a specific linguistic phenomenon, namely vagueness. The motivation to use revision histories as corpora for NLP tasks was introduced by Ferschke et al. (2013). The task of defining and categorizing edit intentions has been explored well for the Wikipedia edits corpus (Yang et al., 2016, 2017). More recently, Anthonio et al. (2020) performed a similar categorization on the revisions in _WikiHow_. Traditional computational analyses of vague statements have been based on logical representations (DeVault and Stone, 2004; Tang, 2008). In contrast, our focus is on vagueness in terms of lexical changes in revisions, which is more similar to previous analyses that considered the context-dependent resolution of vague expressions such as colour references (Meo et al., 2014). Other computational approaches to vagueness include, the detection of vague sense definitions in ontological resources (Alexopoulos and Pavlopoulos, 2014) and website privacy policies (Lebanoff and Liu, 2018) as well as the verification of historical documents (Vartan, 2019). Our approach to identifying and classifying vagueness is analyzed using FrameNet frames which provide specialized relations among conceptual categories, in a manner similar to recent advances in neural models that use sentence-level information to perform hyponymy-hypermymy classification. Roller et al. (2018) analyzes lexicosyntactic pattern-based instances of word-specific hypernymy-hyponymy constructions. Snow et al. (2004) explores the extraction of predefined patterns for hypernyms and hyponyms in the same sentence, while Shwartz et al. (2016) incorporates distributional methods for their classification using sentence-level features. ## 3 Data Creation, Preprocessing, and Analysis _WikiHow_ articles mostly contain instructions, but also include descriptions, explanations, and other non-instructional sentences that provide additional context. The _wikiHowToImprove_ corpus (Anthonio et al., 2020) is an unfiltered corpus of revision histories. Therefore, we first need to extract those revisions where the original and revised versions are both instructional sentences, which can be done based on syntactic properties (SS3.1). We then use a FrameNet parser to determine the frames (and their relationships) evoked by the root verb in the original and revised version of an instruction (SS3.2). The final extracted data consists of only those revisions where the root verb has been modified to be more specific to the sentence. This extracted corpus consists of 41,615 sentences. ### Data Extraction and Cleaning _wikiHowToImprove_ is a noisy source of data with misspellings, non-standard abbreviations, grammatical errors, emoticons, etc. In order to use the data for our task, we first perform some cleaning and preprocessing. 
We filter the typos and misspellings in the dataset by comparing all the vocabulary words to words in the English dictionary using the _Enchant_ python API2. After filtering the typos, we POS tag and dependency parse the data using the _Stanza_ library3(Qi et al., 2020). We discard all sentence pairs where the sentences are shorter than four or longer than 50 words. Footnote 2: [https://pyenchant.github.io/](https://pyenchant.github.io/) Footnote 3: [https://stanfordnlp.github.io/stanza/](https://stanfordnlp.github.io/stanza/) We then create a sub-corpus of instructional sentences by extracting those edit pairs in which both the original and revised version of a sentence fulfill at least one of the following criteria: * imperative form--the root verb has no nominal subject (e.g. "Please finish the task"); * instructional indicative form--the nominal subject of the root verb is 'you,' 'it' or 'one' (e.g. "You should finish the task"); * passive form with 'let'--the sentence is in passive voice, and the root verb is 'let' (e.g. "Let the paper be put on the table."). Finally, we retain only those sentence pairs whose character edit distance is smaller than 10. This filter was added after empirical tests to accommodate changes in the verb form and syntactic frame while ensuring that there are little to no additional edits (often just vandalism or spam). ### Verb Frame Analysis We perform an analysis of verb frame relations from this extracted corpus using the FrameNet hierarchy Baker et al. (1998). In order to identify evoked frames from the data, we use the INCEpTION Project's neural _FrameNet Tools_ parser4Klie et al. (2018); Markard and Klie (2020). FrameNet Tools identifies the frame-evoking elements, the evoked frames, and the context elements' roles in these frame for a given sentence. In this work, we ignore role assignments and only consider predictions of evoked frames, which we found to be generally reliable in our data.5 Footnote 4: [https://github.com/inception-project/frames](https://github.com/inception-project/frames) Footnote 5: Although automatic frame identification is noisy, the tools used here are implementations of the unimodal model presented in Botschen et al. (2018), which achieves a high accuracy of over 88%. We extract the frame of the root verb in the original and revised sentences. For each pair, we identify the frame relation, if any, using the NLTK FrameNet API6Schneider and Wooters (2017). We found that most edits could be categorized into one of the following frame relations between the frames evoked by the original and revised verb frames: Footnote 6: [http://www.nltk.org/howto/framenet.html](http://www.nltk.org/howto/framenet.html) 1. **Subframe-of**: The original frame refers to a complex scenario that consists of several individual states, one of which is the revised frame. (e.g. Traversing\(\rightarrow\)Arriving: "_Go_ to the thumbs up log." is revised to "_Visit_ the thumbs up log.") 2. **Inherits-from**: The frame of the revised verb elaborates on the frame evoked by the original verb (e.g., Deciding\(\rightarrow\)Choosing: "_Determine_ the card you want to buy" is revised to "_Choose_ which card you want to buy.") 3. **Uses**: The frame of the revised verb uses or weakly inherits properties of the original verb frame (e.g., Perception_active\(\rightarrow\)Scrutiny: "Look_ for the best fit for your taste" is revised to "_Search_ for the best fit for your taste."). 
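A compact sketch of the filters just described, written against the Stanza interface mentioned above, is given below. The dependency-label and part-of-speech tests are our reading of the three criteria (the authors' exact implementation may differ), and the edit-distance utility is one of several interchangeable Levenshtein packages.

```python
# Sketch of the sentence-level filters described in this subsection: both
# versions must be instructional (imperative, instructional indicative, or a
# "let" passive), 4-50 tokens long, and within character edit distance 10.
# The deprel/UPOS checks are our interpretation of the criteria above.
import editdistance          # pip install editdistance (an assumed stand-in utility)
import stanza                # requires stanza.download("en") once

nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse")

def is_instructional(sentence):
    root = next(w for w in sentence.words if w.deprel == "root")
    subjects = [w for w in sentence.words
                if w.head == root.id and w.deprel.startswith("nsubj")]
    imperative = root.upos == "VERB" and not subjects
    indicative = any(w.text.lower() in {"you", "it", "one"} for w in subjects)
    let_passive = root.lemma == "let" and any(
        w.deprel.endswith(":pass") for w in sentence.words)
    return imperative or indicative or let_passive

def keep_pair(original, revised):
    docs = [nlp(original), nlp(revised)]
    if any(len(d.sentences) != 1 for d in docs):
        return False
    sents = [d.sentences[0] for d in docs]
    if any(not (4 <= len(s.words) <= 50) for s in sents):
        return False
    if editdistance.eval(original, revised) >= 10:
        return False
    return all(is_instructional(s) for s in sents)

# Example pair from Table 1.
print(keep_pair("Get a flexible single cord.", "Purchase a flexible single cord."))
```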
We also find cases of contextually relevant clarifications for phrasal verbs, such as "_Make_ your bed" vs. "_Fix_ your bed..." which are not covered in FrameNet. Further, there are cases in which the FrameNet Tools parser did not identify the main verb or could not assign a frame. For instance, the verb _compel_ as in "you may feel _compelled_..." is not in FrameNet. We categorize these instances, which are fewer in number than the other categories, under a single **Other** category and leave further inspection to future work. A distribution of instances over categories is shown in Table 2. Apart from instances from the 'Other' category, we indeed found the main verbs in the revised versions of a sentence to be more specific than in the original versions. ## 4 Pairwise Ranking Experiments In this section, we investigate if a neural model can distinguish between the original and revised version of the same instruction. We describe a neural architecture that uses a joint representation designed for comparing two versions of a sentence before predicting an output. We compare our results to a standard BiLSTM-Attention model used in previous work Anthonio et al. (2020). ### System and Training Details The initial components of our system are two BiLSTM modules, LSTM\({}_{1A}\) and LSTM\({}_{1B}\), that each takes one version of a sentence as input. The individual BiLSTMs are followed by a joint layer LSTM\({}_{AB}\) and an additional layer of BiLSTM modules, LSTM\({}_{2A}\) and LSTM\({}_{2B}\), that re-encode the sentence based on the joint representations. \begin{table} \begin{tabular}{l r r r r} \hline \hline Relation & **Total** & **Train** & **Test** & **Val** \\ \hline Usage & 15,243 & 11,084 & 2,194 & 1,965 \\ Inheritance & 13,166 & 9,179 & 2,008 & 1,793 \\ Subframe & 9,481 & 6,835 & 1,720 & 926 \\ Other & 3,925 & 2,833 & 649 & 443 \\ \hline Total & 41,615 & 30,044 & 6,237 & 5,334 \\ \hline \hline \end{tabular} \end{table} Table 2: Number of sentences in the extracted dataset and distribution of FrameNet relations between original and revised verbs. We also show the distribution of train, test and validation for each frame relation. The final layer is trained to predict for each sentence, whether it is the original or revised version, labeling them \(0\) or \(1\), respectively. In practice, we first encode versions \(A\) and \(B\) of an instruction using FastText embeddings or BERT. The embedded sentences \(S_{A}\) and \(S_{B}\) are then passed through LSTM\({}_{1A}\) and LSTM\({}_{1B}\) one (sub-word) token at a time. The hidden layers \(\textbf{h}_{1A}\) and \(\textbf{h}_{1B}\) are then concatenated and passed through LSTM\({}_{AB}\), whose output \(\textbf{h}_{AB}\) is then concatenated again with the original hidden states to re-encode each sentence version in LSTM\({}_{2A}\) and LSTM\({}_{2B}\). Lastly a classification layer, trained using a cross-entropy objective, transforms the final representations \(\textbf{h}_{2A}\) and \(\textbf{h}_{2B}\) into a real-valued output score using self-attention, which is normalized by softmax and rounded to \(\{0,1\}\). The equations below give a simplified summary of our implementation.7 Footnote 7: We will make the code available upon publication. 
\[\textbf{h}_{1A} =\text{LSTM}_{1A}(S_{A}) \tag{1}\] \[\textbf{h}_{1B} =\text{LSTM}_{1B}(S_{B})\] (2) \[\textbf{h}_{AB} =\text{LSTM}_{AB}(\textbf{h}_{1A}\cdot\textbf{h}_{1B})\] (3) \[\textbf{h}_{2A} =\text{LSTM}_{2A}(\textbf{h}_{AB}\cdot\textbf{h}_{1A})\] (4) \[\textbf{h}_{2B} =\text{LSTM}_{2B}(\textbf{h}_{AB}\cdot\textbf{h}_{1B})\] (5) \[l_{A} =\left[\frac{\exp(\textbf{w}^{\top}\textbf{h}_{2A})}{\exp( \textbf{w}^{\top}\textbf{h}_{2A})+\exp(\textbf{w}^{\top}\textbf{h}_{2B})}\right]\] (6) \[l_{B} =\left[\frac{\exp(\textbf{w}^{\top}\textbf{h}_{2B})}{\exp( \textbf{w}^{\top}\textbf{h}_{2A})+\exp(\textbf{w}^{\top}\textbf{h}_{2B})}\right] \tag{7}\] Training DetailsWe experiment with both FastText Grave et al. (2018) and BERT Devlin et al. (2019), using representations with a dimensionality of 300 components. The BiLSTMs modules LSTM\({}_{1A},\text{LSTM}_{1B},\text{LSTM}_{2A}\) and LSTM\({}_{2B}\) each comprise one hidden layer with 256 components, whereas the joint LSTM\({}_{AB}\) comprises one layer with 512 components. We train for 5 epochs with a batch size of 32 and a learning rate of \(10^{-5}\). The model is trained with a dropout of 0.2 for regularization. No dropout is applied to any BiLSTM layers or the self-attention layer. For training, development, and testing, we split our data according to the existing partition given in _wikiHowToImprove_.8 The resulting split consists of 30,044 sentence revision pairs in the training set, 6,237 pairs in the test set, and 5,334 pairs in the validation set. Footnote 8: [https://github.com/irshadbhat/wikiHowToImprove](https://github.com/irshadbhat/wikiHowToImprove) ### Results and Discussion Table 3 shows the results of the pairwise ranking task. We find that our proposed model with BERT embeddings is the most accurate model for this task by a margin of about 7%. We compare our results against the baseline provided by Anthonio et al. (2020), which also makes use of ranking and a BiLSTM architecture. In contrast to our model, their baseline is a simple BiLSTM-Attention classification model using FastText embeddings. It does not use an intermediate joint representation to compare representations of two versions of an instruction. The baseline model has the advantage of being trained on individual sentences, but the increase in model accuracy for training sentence pairs by sharing context highlights the efficacy of the training regime. Their model provides an accuracy of about 64.08% when trained and evaluated on the filtered corpus. Our model with FastText embeddings achieves an accuracy of 71.16% (\(+7.08\%\)), which shows the relative importance of the joint representation. DiscussionWe find that version pairs that involve a subframe relation are the easiest to distinguish across our model using both FastText and BERT, while pairs involving the usage relation are most often confused. The model using BERT embeddings performs better than the FastText-based model on revisions that do not involve any frame-to-frame relations according to FrameNet (referred to as 'other' in Table 2). In Table 4, we provide examples where the model failed using both FastText and BERT. We observe that the models fail to correctly distinguish between sentences when the main verbs are synonymous. 
The embeddings of the most commonly confused verb pairs, which include \(\langle\)allow, \(\langle\)choose, decide\(\rangle\) and \(\langle\)create, make\(\rangle\), have \begin{table} \begin{tabular}{l c c} \hline \hline **Model Description** & **Dataset** & **Accuracy** \\ \hline Anthonio et al. (2020) & Entire & 74.50\% \\ \hline Anthonio et al. (2020) & Filtered & 64.08\% \\ Our Model + FastText & Filtered & 71.16\% \\ Our Model + BERT & Filtered & **78.40\%** \\ \hline \hline \end{tabular} \end{table} Table 3: Results of the pairwise ranking task, on the full wikiHowToImprove dataset (Entire) and our subset of instructional sentences (Filtered). a cosine similarity of \(0.8\) or higher, while the average cosine similarity between the representation of verb pairs is \(0.47\). This insight shows that embeddings by themselves might be insufficient for this classification task. In future work, we will explore additional features such as indicator features derived from the discourse context (e.g., the position of a sentence) and from the FrameNet resource (e.g., properties of the frames evoked in a sentence). ## 5 Conclusion In this paper, we extracted a corpus of clarifications of instructions from the _wikiHowToImprove_ corpus. We described a methodology for extracting version pairs of a sentence that are both instructional. We then identified cases in which a revision has clarified a vague instruction by analyzing the relationship between the frames evoked by the 'original' verb and the'revised' verb. In our experiments, we adopted a simple pairwise ranking task, in the same vein as performed by Anthonio et al. (2020) on the entire _wikiHowToImprove_ dataset. We extended a simple BiLSTM architecture with a joint component and explored different embeddings methods, observing that both modifications lead to improvements over baselines presented in previous work. We hope that our methodology of extracting linguistically interesting cases of revisions from a noisy dataset can be extended to more phenomena and other corpora in future work. This direction has the potential of paving the way for developing automated revision and editing methods beyond typo, style, and grammar correction. ## Acknowledgements The research presented in this paper was funded by the DFG Emmy Noether program (RO 4848/2-1).
2309.07278
Topological protection of Majorana polaritons in a cavity
Cavity embedding is an emerging paradigm for the control of quantum matter, offering avenues to manipulate electronic states and potentially drive topological phase transitions. In this work, we address the stability of a one-dimensional topological superconducting phase to the vacuum quantum fluctuations brought by a global cavity mode. By employing a quasi-adiabatic analytical approach completed by density matrix renormalization group calculations, we show that the Majorana end modes evolve into composite polaritonic modes while maintaining the topological order intact and robust to disorder. These Majorana polaritons keep their non-abelian exchange properties and protect a twofold exponentially degenerate ground state for an open chain.
Zeno Bacciconi, Gian Marcello Andolina, Christophe Mora
2023-09-13T19:49:07Z
http://arxiv.org/abs/2309.07278v1
# Topological protection of Majorana polaritons in a cavity ###### Abstract Cavity embedding is an emerging paradigm for the control of quantum matter, offering avenues to manipulate electronic states and potentially drive topological phase transitions. In this work, we address the stability of a one-dimensional topological superconducting phase to the vacuum quantum fluctuations brought by a global cavity mode. By employing a quasi-adiabatic analytical approach completed by density matrix renormalization group calculations, we show that the Majorana end modes evolve into composite polaritonic modes while maintaining the topological order intact and robust to disorder. These Majorana polaritons keep their non-abelian exchange properties and protect a twofold exponentially degenerate ground state for an open chain. _Introduction_ - In recent years the possibility of controlling quantum matter by cavity embedding has attracted a lot of attention [1; 2; 3; 4]. Strong coupling to cavity vacuum fluctuations has been predicted to affect material properties in many different contexts such as superconductivity [5; 6; 7], ferro-electricity [8; 9; 10] and topology [11; 12; 13; 14; 15]. It has been shown experimentally that cavity embedding can modify the critical temperature of a charge density wave transition [16], magneto-transport properties [17] and induce the breakdown of topological protection in integer quantum hall transport [18]. In this context, a single-particle electron-photon Chern number was introduced in Ref. [19]. Addressing topological properties with a global cavity mode is a subtle issue. As a general rule, the robustness of topological properties is ensured by the locality of perturbations. Coupling to a cavity is inherently non-local, and therefore, there is no guarantee that quantum fluctuations preserve topological protection. A contrasting argument in the context of Majorana fermions is that they bear no charge and therefore couple inefficiently to a cavity electric field [20] (see also Refs. [21; 22; 23; 24] in the context of microwave resonators). Naive expectations relying on the weak effect of vacuum fluctuations of single-mode cavities on extensive quantities [25; 26; 27] should also be taken with care since topological edge states are intrinsically not extensive. In this letter, we address this issue by studying a one-dimensional toy model of a topological superconductor [28; 29], featuring Majorana end states, strongly coupled to a single-mode cavity, and therefore interacting [30; 31; 32; 33] via long-range forces. We discuss two models for the cavity, either with an electric field [17; 18] or a magnetic field coupling [34; 35; 36]. Both models respect the fermionic parity \(\mathbb{Z}_{2}\) symmetry of the superconductor [28]. Our approach to studying these many-body topological properties is twofold. We first employ analytical arguments, based on quasi-adiabatic continuation approach [37; 38], to establish the resilience of the topological phase to the all-to-all interaction mediated by the cavity mode. The edge modes transform into Majorana polaritons [21] with a light component and are no longer purely fermionic objects. We also perform controlled Density Matrix Renormalization Group (DMRG) numerical simulations [39; 40; 41] with a mixed cavity-matter Matrix Product State (MPS) ansatz [14; 42; 43; 44] implementing the \(\mathbb{Z}_{2}\) fermionic parity. 
We identify four markers for topological order [38; 45; 46]: (i) ground state degeneracy, (ii) entanglement spectrum degeneracy, (iii) non-local edge-edge correlations, and (iv) robustness to local symmetry-preserving perturbations, and demonstrate that they all survive strong cavity quantum fluctuations. We moreover confirm the hybrid nature of the dressed Majorana end operators. Our main finding is that the topological superconducting state is robust to the coupling to the cavity, by adapting its Majorana edge modes, as long as fermionic parity is preserved and no gap closing occurs upon gradually increasing the strength of cavity coupling. Figure 1: Sketch of the two different cavity embeddings. The ladder couples either (a) to a quantized electric field, (b) a quantized magnetic field. (c) Band structure of the cavity-free Hamiltonian \(\hat{H}_{0}\) with (full green line) and without (dashed black line) superconducting pairing. _The model_ - The starting point of our discussion is a tight-binding model for a one-dimensional topological superconductor. We employ a toy model of spinless electrons hopping on a square ladder geometry [44] in the presence of an external magnetic field and a superconducting pairing along the rung of the ladder. The Hamiltonian reads: \[\hat{H}_{0}= -t\sum_{j=1}^{L-1}e^{i\sigma\phi_{ext}/2}\hat{c}^{\dagger}_{\sigma,j}\hat{c}_{\sigma,j+1}-t_{\perp}\sum_{j=1}^{L}\hat{c}^{\dagger}_{+,j}\hat{c}_{-,j}+\] \[+\Delta\sum_{j=1}^{L}\hat{c}^{\dagger}_{+,j}\hat{c}^{\dagger}_{-,j} +\mu\sum_{\sigma,j=1}^{L}\hat{c}^{\dagger}_{j,\sigma}\hat{c}_{j,\sigma}\;+\text {h.c.}\;, \tag{1}\] where \(\Delta\) is the pairing strength, \(\mu\) the chemical potential, \(t\) the intraleg hopping, \(t_{\perp}\) the interleg hopping, \(\phi_{ext}\) the external magnetic flux per plaquette and \(\hat{c}_{j,\sigma}\) annihilates an electron on the leg \(\sigma=\pm\) and rung \(j=1,..,L\) with \(+\) is the top leg and \(L\) the number of rungs. While unconventional, this model can be straightforwardly mapped to the nanowire model [47; 48] with strong Rashba spin-orbit coupling and proximity-induced superconductivity, a system that has undergone extensive experimental investigation [49]. In the ten-fold non-interacting classification [50], the model Eq. (1) falls into class D, protected only by particle-hole symmetry. It has a \(\mathbb{Z}_{2}\) topological invariant and allows for a topological phase with Majorana end states. However, within a many-body context, the true symmetry protecting the topological phase is fermionic parity. We now add a single-mode cavity with the bare Hamiltonian \(\hat{H}_{c}=\hbar\omega_{c}\hat{a}^{\dagger}\hat{a}\). In order to draw general conclusions, we examine two distinct physical realizations concerning the vector potential in the cavity: a constant magnetic (\(B\)) component along \(z\) or a constant electric (\(E\)) component along \(y\) (Fig 1). The light-matter coupling is achieved through a Peierls substitution [51; 52; 53; 54], where the hoppings are dressed as [55]: \[\text{B}: \hat{c}^{\dagger}_{\sigma,j}\hat{c}_{\sigma,j+1}\to e^{ig_{B}( \hat{a}+\hat{a}^{\dagger})}\hat{c}^{\dagger}_{\sigma,j}\hat{c}_{\sigma,j+1}\;, \tag{2}\] \[\text{E}: \hat{c}^{\dagger}_{+,j}\hat{c}_{-,j}\to e^{ig_{E}(\hat{a}+\hat{a}^{ \dagger})}\hat{c}^{\dagger}_{+,j}\hat{c}_{-,j}\;, \tag{3}\] depending on the scenario. In the following discussion, when referring to both couplings, we will use \(g\) as a combined notation for \(g_{E}\) and \(g_{B}\). 
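As a check on the cavity-free limit, Eq. (1) can be diagonalized directly at the Bogoliubov-de Gennes (BdG) level. The following NumPy sketch is not taken from the paper: it assumes the standard BdG construction, adds the Hermitian conjugate only to the hopping and pairing terms, omits the cavity entirely, and uses the parameter values fixed later in the text (\(t=t_{\perp}=1\), \(\mu=-1\), \(\Delta=0.4\), \(\phi_{ext}=0.6889\pi\)); in the topological phase a pair of near-zero Majorana end modes is expected.

```python
import numpy as np

# Parameters as quoted in the text: t = t_perp = 1, mu = -1, Delta = 0.4, phi_ext = 0.6889*pi
L, t, t_perp, mu, Delta = 48, 1.0, 1.0, -1.0, 0.4
phi = 0.6889 * np.pi

def idx(sigma, j):            # sigma = 0 (top leg "+") or 1 (bottom leg "-"), rung j = 0..L-1
    return 2 * j + sigma

N = 2 * L
h = np.zeros((N, N), dtype=complex)   # normal (particle-conserving) block
D = np.zeros((N, N), dtype=complex)   # antisymmetric pairing block

for j in range(L):
    for sigma, sgn in ((0, +1), (1, -1)):
        h[idx(sigma, j), idx(sigma, j)] = mu                              # chemical potential
        if j < L - 1:                                                     # intraleg hopping with flux
            h[idx(sigma, j), idx(sigma, j + 1)] = -t * np.exp(1j * sgn * phi / 2)
    h[idx(0, j), idx(1, j)] = -t_perp                                     # interleg hopping
    D[idx(0, j), idx(1, j)] = +Delta                                      # rung pairing c+_{+,j} c+_{-,j}
    D[idx(1, j), idx(0, j)] = -Delta
h = h + h.conj().T - np.diag(h.diagonal())                                # add h.c. to the hoppings only

# BdG matrix in the (c, c^dagger) basis; its spectrum comes in +/-E pairs
H_bdg = np.block([[h, D], [-D.conj(), -h.conj()]])
E = np.sort(np.abs(np.linalg.eigvalsh(H_bdg)))
print("two smallest |E| (near-zero Majorana end modes expected):", E[:2])
print("bulk gap estimate:", E[2])
```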
The full Hamiltonian is \(\hat{H}=\hat{H}_{0}+\hat{H}_{c}\) with either the dressing of Eq. (2) or Eq. (3). We are interested in a mesoscopic regime and do not scale \(g\) with the system size. Our choice is motivated by the nature of strongly confined cavity modes in nanophotonics, such as split-ring resonators, where there are usually a few, energetically well-separated modes with a significant coupling to the electrons [18]. No-go theorems [56; 25] prevent photon condensation, _i.e._ a coherent non-zero \(\langle\hat{a}\rangle\), for the electric field coupling, whereas \(\langle\hat{a}\rangle\neq 0\) can emerge in the magnetic case [57; 58; 59]. In the latter case, the coherent part of the field simply renormalizes \(\phi_{ext}\) and can potentially drive the system out of a topological state or vice-versa. Although very interesting, this effect does not come from quantum fluctuations and can described semiclassically [44]. We henceforth fix \(\omega_{c}=t=t_{\perp}=-\mu=1\), \(\Delta=0.4\) and \(\phi_{ext}=0.6889\pi\) such that \(\langle\hat{a}\rangle\) remains close to zero and the fermionic chain is in a topological phase. _Majorana dressing_ - We first present analytical arguments that elucidate the resilience of topological order to cavity quantum fluctuations. In the absence of a cavity, the model exhibits Majorana edge modes in its topological phase. The Majorana fermionic operators \(\hat{\gamma}^{0}_{L}\) and \(\hat{\gamma}^{0}_{R}\) permute the even and odd parity sectors and protect the ground state (exponential) degeneracy. We employ the theory of quasi-adiabatic continuation [37; 38] to show that the Majorana operators \[\hat{\gamma}_{L}=\mathcal{V}\hat{\gamma}^{0}_{L}\mathcal{V}^{\dagger}\qquad \hat{\gamma}_{R}=\mathcal{V}\hat{\gamma}^{0}_{R}\mathcal{V}^{\dagger}\;, \tag{4}\] undergo a continuous transformation as the coupling to the cavity is gradually enhanced. The unitary operator \(\mathcal{V}\) maps the ground state manifold in the absence of cavity to the one with the cavity. Importantly, under the assumption that the gradual increase of the cavity coupling maintains both the \(\mathbb{Z}_{2}\) symmetry and a finite spectral gap, it can be shown [55] that \(\mathcal{V}\) preserves fermionic locality such that the two deformed Majorana modes remain localized on the two ends of the chain. In addition, \(\hat{\gamma}_{L}\) and \(\hat{\gamma}_{R}\) acquire a finite entanglement with the cavity mode \(\hat{a}\) from Eq. (4) and a polaritonic character associated with photonic excitations. They also keep satisfying the Clifford algebra, as \(\mathcal{V}_{1}\) is unitary, and they permute the even- and odd-parity ground states. Aside from the deformed edge Majorana modes, we also need to prove the persistence of the ground state degeneracy. Denoting \(P_{0}\) (\(P=\mathcal{V}P_{0}\mathcal{V}^{\dagger}\)) the projector onto the ground state manifold without (with) cavity, it is known that [37; 28] \[P_{0}\mathcal{O}P_{0}=\lambda\,P_{0}, \tag{5}\] up to exponential corrections with the system size. Here, \(\mathcal{O}\) is a local \(\mathbb{Z}_{2}\)-symmetric operator and \(\lambda\) its eigenvalue in the ground state. Physically, Eq. (5) simply states that a local perturbation cannot distinguish the two ground states because of topological order. Applying Eq. (5) for \(\mathcal{O}=\hat{H}_{0}\) recovers the ground state twofold degeneracy. \(\mathcal{V}^{\dagger}H\mathcal{V}\) is also local for fermions as \(\mathcal{V}\) maintains locality. 
Therefore, we obtain that the even and odd parity states are still exponentially degenerate in the presence of the cavity, or \[P\hat{H}P=\mathcal{V}P_{0}\mathcal{V}^{\dagger}\hat{H}\mathcal{V}P_{0}\mathcal{V}^{\dagger}=E_{gs}\mathcal{V}P_{0}\mathcal{V}^{\dagger}=E_{gs}P \tag{6}\] where \(E_{gs}\) is the energy of the two ground states. Furthermore, with no cavity, \([\hat{\gamma}^{0}_{\alpha},\hat{H}_{0}]=0\) (up to exponential corrections), which implies that the twofold degeneracy extends to the whole spectrum and the Majorana operators are called strong edge modes [38; 60; 61]. Such a vanishing commutator is no longer guaranteed in the presence of the cavity. The deformed \(\hat{\gamma}_{\alpha}\) are then weak edge modes as they do not enforce a twofold degeneracy for excited states, only in the ground state manifold. Remarkably, the above arguments based on the quasi-adiabatic continuation are very general. They show that any topological superconductor with Majorana end modes is robust to the presence of a cavity, as long as the coupling conserves parity (\(\mathbb{Z}_{2}\)) and there is an adiabatic path without gap closing to a cavity-free limit. A more explicit polaritonic form can be given to the Majorana fermions from Eq. (4) in perturbation theory \[\hat{\gamma}_{\alpha}\simeq\hat{\gamma}_{\alpha}^{0}+\sum_{n=(\sigma,j)}\left(\Psi_{\alpha}^{1+}(n)\hat{c}_{n}^{\dagger}\hat{a}^{\dagger}+\Psi_{\alpha}^{1-}(n)\hat{c}_{n}\hat{a}^{\dagger}+\text{h.c.}\right)\;, \tag{7}\] assuming weak coupling to the cavity. The wavefunctions [55] \(\Psi_{\alpha}^{1+}(n)\) and \(\Psi_{\alpha}^{1-}(n)\) decay exponentially far from the edges as illustrated in Fig. 4(a). _Signatures of topology_ - In order to test the validity of the discussion above and probe possible non-perturbative effects, we further investigate the Hamiltonian \(\hat{H}\) with DMRG calculations. We use a hybrid light-matter MPS in which the \(\mathbb{Z}_{2}\) fermionic parity symmetry is implemented [55], separating the exponentially degenerate even and odd parity sectors. We specifically investigate the four markers, labeled (i)-(iv) in the introduction, as signatures of the topological phase. The ground state energy splitting \(\Delta E_{gs}=|E_{0}^{gs}-E_{1}^{gs}|\) remains exponentially small in the system size up to relatively strong light-matter couplings \(g\), confirming point (i). This is reported in Figs. 2(a,e) where the ground state energy difference between the two parity sectors is computed. For small \(g\), this energy difference can also be evaluated using perturbation theory [55], confirming its exponential scaling. Interestingly, we also observe a few ground state parity switches [62; 28] as a function of \(g\) (not shown). The second signature (ii) is the twofold degeneracy in the entanglement spectrum of the half-chain bipartition. For light-matter systems, one of the two partitions has to include the photon. As shown in Fig. 2(b,f) the \(g=0\) degeneracy is not broken at finite coupling. The entanglement spectrum nonetheless changes, signalling the presence of finite light-matter entanglement. Edge-edge correlations (iii) are shown in Fig. 2(c,g) where the correlator \(G_{\sigma,\sigma^{\prime}}^{p}(i,j)=\bra{p}\hat{c}_{\sigma,i}^{\dagger}\hat{c}_{\sigma^{\prime},j}\ket{p}\) is calculated on the ground state \(\ket{p}\) with parity \(p=\{0,1\}\). The revival on the opposite edge reveals the presence of the two end Majorana fermions that both permute the ground states.
We verify (iv) by adding local disorder to the model. Without loss of generality, we consider a Gaussian-distributed chemical potential, centered around \(\mu\), and with variance \(\overline{\delta\mu_{\sigma,i}\delta\mu_{\sigma^{\prime},j}}=W\delta_{i,j}\delta_{\sigma,\sigma^{\prime}}\), where \(\overline{\mathcal{A}}\) denotes the disorder average of \(\mathcal{A}\). As an indicator for edge-edge correlations, we introduce the quantity \[Q=\sum_{\sigma,\sigma^{\prime}}|G_{\sigma,\sigma^{\prime}}^{0}(1,L)-G_{\sigma,\sigma^{\prime}}^{1}(1,L)|\;. \tag{8}\] For \(g=0\), we have exactly \(G_{\sigma,\sigma^{\prime}}^{0}(1,L)=-G_{\sigma,\sigma^{\prime}}^{1}(1,L)\) as a result of the anticommutation of the two Majorana fermions. Figure 2: The top (bottom) row shows DMRG results for the magnetic (electric) coupling. (a,e) Ground state energy splitting \(\Delta E_{gs}=|E_{0}^{gs}-E_{1}^{gs}|\) for different coupling strengths. The mean-field (MF) result is also shown for comparison. (b,f) Entanglement spectra \(\xi_{s}=-\log\lambda_{s}\) for a half-chain bipartition. The twofold degeneracies come from \((even,even)\) and \((odd,odd)\) parity resolved partitions. (c,g) Correlation function for the top leg. (d,h) Energy gap \(\Delta E_{p}\) out of the ground state manifold for both parities, compared with MF (red). The error bars \(\sigma_{DMRG}\simeq 3\cdot 10^{-4}\) are evaluated from the square root of the discarded weight. \(L=48\) except for (a,e). In Fig. 3, we show the indicator \(Q\) and the ground state energy splitting \(\Delta E_{gs}\) for one disorder realization at each disorder strength (Fig. 3(a,c)) and their disorder average (Fig. 3(b,d)). The results hardly depend on the strength of cavity coupling, reflecting the robust topological phase. Interestingly, Kohn's theorem yields a different behaviour in the quantum Hall effect, where disorder and cavity collaborate to diminish topological protection [11; 15; 63]. Here, disorder plays no role in enhancing the cavity effect on Majorana fermions. Finally, we test numerically the absence of gap closing as the coupling to the cavity is increased. The gap to the first excited state in each parity sector is shown in Fig. 2(d,h). The splitting of the excited state degeneracy in the magnetic case suggests the evolution from strong to weak edge modes for the Majorana fermions. This could be related to the predicted [64] sensitivity of the cavity damping to the parity of excited electronic states. The comparison with a mean-field approach [55] highlights the need for light-matter entanglement to quantitatively address strong coupling and the many-body nature of the spectrum. _Majorana polaritons_ - As explored through the quasi-adiabatic continuation approach, the Majorana operators \(\hat{\gamma}_{L}\) and \(\hat{\gamma}_{R}\) become increasingly entangled with cavity photons as the light-matter coupling intensifies. This is explicit in Eq. (7) in perturbation theory for \(g\ll 1\). The polaritonic character of the edge Majorana can be probed with connected matrix elements, such as \(\bra{0}\hat{c}_{n}\hat{a}\ket{1}_{c}=\bra{0}\hat{c}_{n}\hat{a}\ket{1}-\bra{0}\hat{a}\ket{0}\bra{0}\hat{c}_{n}\ket{1}\), which are vanishing at zero coupling \(g=0\).
To leading order in perturbation theory, we find [55] for instance (\(n=(\sigma,j)\)): \[\bra{0}\hat{c}_{n}\hat{a}\ket{1}_{c}\simeq\psi_{L}^{1+}(n)+i\psi_{R}^{1+}(n)\;, \tag{9}\] and other combinations of \(\psi_{\alpha}^{1\pm}(n)\) (\(\alpha=L/R\)) [65] are obtained from the connected matrix elements of \(\hat{c}_{n}\hat{a}^{\dagger}\), \(\hat{c}_{n}^{\dagger}\hat{a}\) and \(\hat{c}_{n}^{\dagger}\hat{a}^{\dagger}\) between \(\ket{0}\) and \(\ket{1}\). They are calculated using DMRG, as illustrated in Fig. 4(a,c), and clearly demonstrate localization at the edges as well as photon entanglement. We further quantify the photon mixing by introducing the weights of each component of the Majorana polaritons: \[N_{0}^{\alpha}=\sum_{n}|\psi_{\alpha}^{0}|^{2}\qquad N_{1}^{\alpha}=\sum_{n}|\psi_{\alpha}^{1+}|^{2}+|\psi_{\alpha}^{1-}|^{2}\;, \tag{10}\] where \(N_{0}^{R}=N_{0}^{L}\) and \(N_{1}^{R}=N_{1}^{L}\) by symmetry. \(\psi_{\alpha}^{0}(n)\) are the purely electronic components of the Majorana operators. The weights calculated from DMRG are shown in Fig. 4(b,d). At small coupling \(g\), the missing weight \(1-N_{0}\) from single-fermion contributions is exactly matched by the single-photon polariton sector measured by \(N_{1}\). A deviation between these two lines becomes apparent at strong couplings, indicating additional contributions involving multi-photon and multi-fermion operators. At weak coupling, DMRG aligns well with perturbation theory. Figure 3: Magnetic cavity: DMRG results (\(g_{B}>0\)) and exact results (\(g_{B}=0\)) for topological markers as a function of disorder strength \(W\). (a) Edge-edge correlator \(Q\) from Eq. (8) and (c) ground state degeneracy \(\Delta E_{gs}\) for a single disorder realization at each strength \(W\). The corresponding disorder averaged quantities are shown in (b,d) with \(N_{\text{dis}}=20\,(1000)\) realizations for \(g_{B}=0.1\,(0)\). Error bars indicate two standard deviations and \(L=48\). Figure 4: (a,c) Matrix elements revealing the hybrid nature of Majorana polaritons, with zero (\(\mathcal{O}=\hat{c}_{+,j}\)) and one photon (\(\mathcal{O}=\hat{c}_{+,j}\hat{a}\)). (b,d) Evolution of the total zero- and one-photon weights \(N_{0}\) and \(N_{1}\), Eq. (10), with light-matter coupling for the magnetic (b) and electric (d) cavities. Here \(L=48\). _Discussion -_ We have shown that a one-dimensional topological superconductor with Majorana end modes is protected against the vacuum quantum fluctuations of an embedding cavity mode, despite its long-range nature. The quasi-adiabatic approach explains this protection and reveals that the Majorana modes evolve into Majorana-polaritons to maintain topological order. We confirm this with DMRG simulations where topological markers are shown to persist through the cavity coupling, and the photonic component of Majorana-polaritons is demonstrated up to strong cavity coupling. The main difference is that the Majorana-polaritons are no longer assured to be strong edge modes. Instead, they can transition into weak edge modes where only the ground state is doubly degenerate. In the strong cavity coupling regime, mean field techniques prove to be insufficient, making it crucial to account for light-matter entanglement. Our argumentation is highly general and is applicable to any 1D phase featuring end Majorana states, regardless of the nature of the cavity coupling, electric or magnetic. Even though not explicitly discussed, the results can be straightforwardly generalized to multi-mode cavities.
The crucial prerequisite however is the absence of fermionic parity breaking induced by the cavity coupling. Our results suggest that a qubit using Majorana polaritons also requires control over the cavity due to their hybrid nature [19]. We also anticipate that the topological insensitivity to vacuum cavity fluctuations will extend to other topological phases [66], in higher dimensions. For instance, in 2D class \(A\) models, such as the quantum Hall effect, the topology is robustly protected by a many-body Chern number [67]. This does not contradict recent works where finite-size effect and disorder [11; 68] or coupling to external degrees of freedom [63] have been advocated to predict the loss of conductance quantization observed experimentally [18]. It would be interesting to build a comprehensive classification of cavity-embedded fermionic models in the spirit of recent classifications of interacting models [31; 32]. _Acknowledgments-_ We acknowledge fruitful discussions with M. Dalmonte, C. Ciuti, T. Chanda, G. Chiriaco, O. Dmytruk and M. Schiro. The DMRG numerical implementation is done via the ITensor library [69]. G.M.A. acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 101002955 - CONQUER).
2301.10126
Inward Bound: Bulges from High Redshifts to the Milky Way
With over 200 registered participants, this fully online conference allowed theorists and observers across the globe to discuss recent findings on the central structures of disc galaxies. By design, this conference included experts on the Milky Way, local and high-redshift galaxies, and theoretical aspects of galaxy formation and evolution. The need for such a broad range of expertise stems from the important advances that have been made on all fronts in recent years. One of the main goals of this meeting was accordingly to bring together these different communities, to find a common ground for discussion and mutual understanding, to exchange ideas, and to efficiently communicate progress.
Dimitri A. Gadotti, Elena Valenti, Francesca Fragkoudi, Anita Zanella, Lodovico Coccato, Camila de Sá-Freitas, Stella-Maria Chasiotis-Klingner
2022-10-31T18:02:33Z
http://arxiv.org/abs/2301.10126v1
# Inward Bound: Bulges from High Redshifts to the Milky Way ###### Abstract With over 200 registered participants, this fully online conference allowed theorists and observers across the globe to discuss recent findings on the central structures of disc galaxies. By design, this conference included experts on the Milky Way, local and high-redshift galaxies, and theoretical aspects of galaxy formation and evolution. The need for such a broad range of expertise stems from the important advances that have been made on all fronts in recent years. One of the main goals of this meeting was accordingly to bring together these different communities, to find a common ground for discussion and mutual understanding, to exchange ideas, and to efficiently communicate progress. Like many other meetings, this conference had to be postponed twice since 2020 because of the COVID-19 pandemic. Although the original plan was to have an in-person meeting, both the Scientific Organising Committee (SOC) and the Local Organising Committee (LOC) felt that further postponing the conference would be too detrimental. During the two years of the pandemic, the LOC had gained sufficient experience to be able to devise a format that would facilitate the meeting's intended goal of fostering discussions between the different communities. The meeting consisted of 31 pre-recorded talks (made available to registered participants a week before the start of the conference), as well as six live sessions, held on the Monday, Wednesday, and Friday, which included 12 invited talks, four review talks, and four discussion sessions. The live sessions took place in the morning and early evening in Europe, to enable the participation of colleagues from time zones in the Americas and Australia/Asia. Those sessions were recorded and made available immediately afterwards. This allowed participants in different time zones to be up to date with all the live sessions, while a Slack workspace allowed the participants to have further asynchronous interactions and discussions. We were pleased to see that this setup worked very well in fostering numerous and deep discussions as intended. The pre-recorded talks and recordings of the live sessions are now publicly available to the community2. In what follows we summarise some of the main discussion topics and outcomes of the workshop. Footnote 2: [https://www.sds.org/](https://www.sds.org/) ## The Milky Way A consensus has been reached that the Milky Way has primarily a BP formed from the bar, with stellar populations born in situ (see talks by Paola Di Matteo, Francesca Fragkoudi and Meissa Ness).
However, there is still a healthy debate around the ages of the stellar populations in the central regions (see talks by Tommaso Marchetti, Michael Rich, Akvaro Rojas-Ariaga and Maruana Zoccali). The contribution of the halo to the old population is clear but it is still difficult to quantify. This leaves an open question: is there still space for a low-mass, old, central spheroidial structure that is not part of the halo? In other words, is there room yet for a CB in the Milky Way? (See the talks by Cristina Chiappini and Madeline Lucey). In this context, we still lack a comprehensive characterisation of the most metal-poor population (with [Fe/H] \(<\) -1), from both the modelling and observational sides, even though significant progress has recently been made (see talks by Anike Arentsen, Andrea Kunder, Giulia Pagnini and Jason Sanders). ## Formation scenarios While it is still unclear what is the physical mechanism that produces BPs from the inner parts of bars (i.e., whether they form from buckling instabilities or orbital resonances), it is well established that BPs are simply the vertically thicker inner parts of bars (see talks by Sandor Kruk and alira Mendez-Abreu). Likewise, there is mounting evidence that NDs form via gas inflow produced by bar-driven processes (see talks by Dimitri
2309.12675
Vision Transformers for Computer Go
Motivated by the success of transformers in various fields, such as language understanding and image analysis, this investigation explores their application in the context of the game of Go. In particular, our study focuses on the analysis of the Transformer in Vision. Through a detailed analysis of numerous points such as prediction accuracy, win rates, memory, speed, size, or even learning rate, we have been able to highlight the substantial role that transformers can play in the game of Go. This study was carried out by comparing them to the usual Residual Networks.
Amani Sagri, Tristan Cazenave, Jérôme Arjonilla, Abdallah Saffidine
2023-09-22T07:35:37Z
http://arxiv.org/abs/2309.12675v1
# Vision Transformers for Computer Go ###### Abstract Motivated by the success of transformers in various fields, such as language understanding and image analysis, this investigation explores their application in the context of the game of Go. In particular, our study focuses on the analysis of the Transformer in Vision. Through a detailed analysis of numerous points such as prediction accuracy, win rates, memory, speed, size, or even learning rate, we have been able to highlight the substantial role that transformers can play in the game of Go. This study was carried out by comparing them to the usual Residual Networks. ## 1 Introduction Due to a huge game tree complexity, the game of Go has been an important source of work in the perfect information setting. In 2007, search algorithms have been able to increase drastically the performance of computer Go programs [11, 12, 20, 16]. In 2016, a groundbreaking achievement occurred when AlphaGo became the first program to defeat a skilled professional Go player [26]. Currently, the level of play of such algorithms is far superior to those of any human player [26, 27, 28]. Over the years, various significant advances have been made to improve performance in the game of Go [3, 34, 32, 31, 33]. Many of these innovations find their roots in other domains, notably in computer vision, where the recognition and interpretation of the Go board's image serve as fundamental inputs. Algorithms such as ResNet [18, 3] and MobileNet [19, 7, 5] have demonstrated exceptional performance by harnessing groundbreaking developments in computer vision. However, it is worth noting that one remarkable advancement in the realm of computer vision remains relatively untapped for Computer Go: _transformers_[30]. Transformers represent a groundbreaking leap in deep learning, reshaping how various tasks in natural language processing (NLP), computer vision, and beyond are approached. Initially developed for NLP tasks, transformers introduce a departure from conventional sequential methods by employing self-attention mechanisms. These mechanisms simultaneously capture intricate interdependencies among all elements in a sequence. This ability to understand nuanced relationships over long distances, without relying on recurrent or convolutional structures, has propelled transformers to the forefront of AI research. Notably, transformers have not only advanced language understanding, exemplified by models like BERT [13], but have also expanded their utility to image analysis, as seen in Vision Transformers (ViTs) [15] and other transformer-based models. EfficientFormer [21], a transformer-based model, achieves high performance and matches MobileNet's speed on mobile devices, proving that well-designed transformers can deliver low latency in computer vision tasks. In this paper, we propose to analyze the impact of using Transformer methods in the game of Go. To do this, we use the EfficientFormer architecture. Our study analyses were done in comparison with other state-of-the-art vision architectures in Go such as Residual Networks on a wide range of criteria including prediction accuracy, win rates, memory, speed, architecture size, and even learning rate. Thanks to that, we observe that EfficientFormer is better than Residual Networks on CPU and plays on par on GPU. We introduce Computer Go in Section 2 and the network architectures used throughout the paper in Section 3. In Section 4, we present our results and the last section summarizes our work and future work. 
## 2 Computer Go The game of Go is a turn-taking strategic board game of perfect information, played by two players. One player adds black stones to a vacant intersection of the board and the opponent adds white stones. After being placed, a player's stones cannot move. A group of contiguous stones is removed if and only if the opponent surrounds the group on all orthogonally adjacent points. The players aim at capturing the most territory and the game ends when no player wishes to move any further. There exist multiple rules for scoring. We have used the Chinese rule in our experiments: the winner of the game is defined by the number of stones that a player has on the board, plus the number of empty intersections surrounded by that player's stones and komi (bonus added to the second player as compensation for playing second). Even though the rules are relatively simple, the game of Go is known as an extremely complex one in comparison to other board games such as Chess. On the standard board of size \(19\times 19\), the number of legal positions has been estimated to be \(2.1\times 10^{170}\). Algorithms based on Monte Carlo Tree Search (MCTS) [1] have been achieving excellent performance in the game of Go for many years. Combining deep reinforcement learning and MCTS as introduced in the _AlphaGo_ series programs [26, 28, 27] has been widely applied. The neural network takes an image of the board as input and produces two outputs: a probability distribution over moves (policy head) and a vector of score prediction for every player (value head) (see Fig. 1). ## 3 Network Architectures ### Residual Network Residual Networks are the standard networks for games [3, 28]. They are used in combination with MCTS to evaluate the leaves of the search tree and to give a prior on the possible moves. In order to speed up the computation of the evaluation and of the prior the networks are usually run on a batch of states [6]. The residual layer used for image classification adds the input of the layer to the output of the layer. It uses two convolutional layers before the addition. The ReLU layers are put after the first convolutional layer and after the addition. The residual layer is shown in Figure 2. We will experiment with this kind of residual layer for our Go networks. ### Transformer Transformers are advanced neural network architectures that leverage the concept of self-attention to process and understand complex sequences of data, such as language. Self-attention allows a transformer model to analyze different elements within a sequence and determine their relative importance in relation to Figure 1: AlphaZero network architecture Figure 2: The residual block. one another. By calculating attention scores based on the similarity of these elements, the model can dynamically weigh their significance and understand how they interrelate. Transformers employ multiple self-attention mechanisms (multihead self-attention) operating in parallel, enabling them to capture intricate patterns, dependencies, and contextual nuances across the entire input sequence. Transformer was originally proposed as a sequence-to-sequence model [29] for machine translation. Later works show that Transformer-based pre-trained models (PTMs) [23] can achieve state-of-the-art performances on various tasks. As a consequence, Transformer has become the go-to architecture in NLP, especially for PTMs. 
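As a minimal illustration of the self-attention operation described above (omitting the multi-head split, the learned query/key/value projections, and normalization layers), the core computation can be sketched as follows; the board-sized dimensions in the toy usage are our choice, not taken from the paper:

```python
import torch

def self_attention(q, k, v):
    """Scaled dot-product attention: similarity scores between all positions,
    a softmax weighting, then a weighted sum of the value vectors."""
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)  # pairwise similarities
    weights = torch.softmax(scores, dim=-1)                  # relative importance of each position
    return weights @ v                                       # context-mixed representations

# Toy usage: treat the 361 intersections of a 19x19 board as "tokens" with 64 features each.
x = torch.randn(1, 361, 64)
out = self_attention(x, x, x)   # self-attention: queries, keys and values all come from x
print(out.shape)                # torch.Size([1, 361, 64])
```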
In addition to language related applications, Transformer has also been adopted in CV [22, 2, 15], audio processing [14, 17, 10] and even other disciplines, such as chemistry [25] and life sciences [24]. If in the field of Natural Language Processing the mechanism of attention of the Transformers tried to capture the relationships between different words of the text to be analyzed, in Computer Vision the Vision Transformers try instead to capture the relationships between different portions of an image. The mechanism of self-attention, integral to transformers, enables them to excel in tasks ranging from language translation and sentiment analysis to summarization and beyond. ### Efficient Former The EfficientFormer model is a big step forward in making transformer architectures work better for tasks that need real-time results, especially on devices with not much computing power. By adding a dimension-consistent plan, the model can easily switch between different ways of organizing its parts, like in 4D and 3D setups. This way of thinking helps the EfficientFormer model break free from the old rules about how fast transformers can make decisions. This leads to making the time it takes for predictions much shorter. By focusing on making predictions happen fast, a set of EfficientFormer models emerges, each achieving a careful equilibrium between performance and latency. This change in approach reaches its peak with models like EfficientFormer-l1, which impressively demonstrates outstanding top-1 accuracy on benchmarks like ImageNet-1K. At the same time, it manages to keep inference latency remarkably low on mobile devices, aligning closely with the efficiency of optimized versions of MobileNet. The complete range of EfficientFormer models, taken together, significantly underscores the possibilities of tapping into transformers' capabilities for practical real-world uses. The network starts with a convolution stem as patch embedding, followed by MetaBlock (MB) as shown by Figure 3. The MB\({}^{4D}\) and MB\({}^{3D}\) contain different token mixer configurations, _i.e._, local pooling or global multi-head self-attention, arranged in a dimension-consistent manner. EfficientFormer is available in various sizes, denoted as l1, l3, l7, and l9. Each size is linked to a tuple of information where the first information is the width and the second information is the depth. The width is a list designing different dimensionalities (number of channels) of the feature vectors processed by different layers and blocks within the neural network. The width represents the number of blocks in different levels of the EfficientFormer architecture. The sizes are the following: * 'l1' : ([48, 96],[3, 4]) * 'l3' : ([64, 128],[4, 6]) * 'l7' : ([96, 192,],[6, 8]) * 'l9'; ([128,256],[8,10]) ### Adaptation for the game of Go In this paper, we took the same work on EfficientFormer model of Li _et al._[21] and adapt the transformer mechanism to the Go game prediction. This necessitated modifying the final layers, which were originally designed for tasks like classification or segmentation, and instead, replacing them with layers tailored for policy and value (_i.e._, the probability of winning the game) prediction. This modification transformed the tasks into a dual output setting, combining multiclass classification and regression functionalities. The value head uses Global Average Pooling followed by two Dense layers [4, 8]. 
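A minimal sketch of such a value head is given below. Only the Global-Average-Pooling-plus-two-Dense structure comes from the text; the framework (PyTorch), the channel and hidden sizes, and the final sigmoid (to keep the output in the \([0,1]\) win-rate range used as a training target) are assumptions.

```python
import torch
import torch.nn as nn

class ValueHead(nn.Module):
    """Value head: global average pooling over the board, then two dense layers."""

    def __init__(self, channels=96, hidden=64):
        super().__init__()
        # channels: number of feature planes produced by the backbone (value here is arbitrary)
        self.fc1 = nn.Linear(channels, hidden)
        self.fc2 = nn.Linear(hidden, 1)

    def forward(self, features):              # features: (batch, channels, 19, 19)
        pooled = features.mean(dim=(2, 3))    # global average pooling over the 19x19 board
        return torch.sigmoid(self.fc2(torch.relu(self.fc1(pooled))))  # predicted White win rate
```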
The policy head uses a 1x1 convolution to a single plane that defines the convolutional policy [8]. Another significant adjustment involved the downsampling and embedding layers commonly used in image classification tasks to detect features by reducing the image size before feeding it into the transformer. However, in the context of Go, the input board's dimensions were fixed at \(19\times 19\), and it was imperative to preserve this size throughout the training process to avoid losing critical information. Therefore, to retain the richness of the board data, the height and width of the board were maintained during training, ensuring that no valuable details were lost in the process. This tailored architectural approach played a pivotal role in optimizing the models for Go game prediction. Figure 3: Overview of EfficientFormer architecture [21]. ## 4 Experimental Results ### Dataset The data used for training comes from the Katago Go program self played games [31]. There are 1,000,000 different games in total in the training set. The input data is composed of 31 19x19 planes (color to play, ladders, current state on two planes, two previous states on four planes). The output targets are the policy (a vector of size 361 with 1.0 for the move played, 0.0 for the other moves), and the value (close to 1.0 if White wins, close to 0.0 if Black wins). ### Experimental Information In order to compare the different network architectures we trained them on 500 epochs. One epoch uses \(100,000\) states randomly selected from the Katago dataset with two labels: a one-hot encoding of the Katago move and an evaluation between 0 and 1 by Katago of the winrate for White. The training is done with Adam and cosine annealing [9] without restarts. Cosine annealing leads to better convergence by modifying the learning rate of Adam. In the next tables, we denote Residual(X,Y), the Residual Network of X blocks of Y planes and we denote Efficient(lX), the lX architecture of EfficientFormer. Among the different metrics used, we compute the accuracy, mean squared error (MSE), mean absolute error (MAE), and when possible the winning rate against an opponent. The winning rate is more informative than the other because it combines the impact of improving policy and value network. Accuracy measures the closeness of strategies between the policy network and Katego data. ### Training and Playing Table 1 gives the comparison of some learning rates for small residual and transformer networks. For the Transformer, the best learning rate is observed at 0.002 whereas for the Residual Network, the best learning rate is observed at 0.0002. For Residual Networks, the learning rate tends to be significantly lower in comparison to transformers. Table 2 gives the latency and peak memory on GPU for the different network architectures we tested. The experiments were carried out on a _RTX 2080 Ti_ with 11 Go of Memory. It is worth mentioning that the Residual Networks and Transformers we examined exhibit similar latency characteristics, however, it can be observed that the memory usage is 3 times greater with Transformers. Table 3 gives the number of parameters, the number of evaluations per second, and the CPU latency on an _Epyc_ server of the networks we tested. It should be noted that Transformers with far fewer parameters than Residual Networks nevertheless achieve comparable evaluation per second. Furthermore, large Transformers demonstrate superior CPU performance when compared to large Residual Networks. 
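Assuming a PyTorch implementation (the paper does not name its framework), the optimizer setup described in the experimental information above (Adam with cosine annealing and no restarts over 500 epochs) could be configured along these lines; the stand-in model and the empty epoch body are placeholders:

```python
import torch
import torch.nn as nn

model = nn.Conv2d(31, 361, kernel_size=1)   # stand-in for any of the Go networks (31 input planes)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-3)                     # 0.002 as in Table 1
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=500)  # cosine annealing, no restarts

for epoch in range(500):
    # one epoch = one pass over 100,000 randomly sampled states (training-loop body omitted here)
    optimizer.step()      # placeholder for the gradient updates performed during the epoch
    scheduler.step()      # anneal the learning rate once per epoch
```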
Table 4 gives the accuracy, the MSE, and the MAE for the different networks. Additionally, it reveals the winning rate of the l9 Efficient Former when pitted against the competition, assuming either CPU or GPU hardware. For the 256 planes Residual Network the learning rate was set to 0.00005 since the learning rates above could not learn the value. The l9 Efficient Former outperforms its counterparts across various metrics and excels particularly on CPU. When leveraging GPU hardware, it performs at par with the largest Residual Network. Lastly, Table 5 charts the evolution of GPU latency concerning batch size variation for the different network configurations. Large batch sizes are relevant to self-play in Alpha Zero style [28, 31]. Smaller batch sizes are relevant to normal play with batch parallel MCTS [6]. This analysis sheds light on how network performance scales with batch size changes. The Residual Networks use relatively more playouts since they parallelize better with current GPU hardware and software. \begin{table} \begin{tabular}{|l r r r r|} \hline Network & Learning Rate & Batch & Accuracy & MSE & MAE \\ \hline Residual(10,128) & 0.0008 & 64 & 44.51\% & 0.1209 & 0.2959 \\ Residual(10,128) & 0.0004 & 64 & 46.25\% & 0.0657 & 0.1927 \\ Residual(10,128) & 0.0002 & 64 & 46.57\% & 0.0642 & 0.1900 \\ Residual(10,128) & 0.0001 & 64 & 45.61\% & 0.0708 & 0.2035 \\ \hline Efficient(11) & 0.004 & 64 & 45.72\% & 0.0766 & 0.2097 \\ Efficient(11) & 0.002 & 64 & 45.89\% & 0.0698 & 0.1973 \\ Efficient(11) & 0.001 & 64 & 45.43\% & 0.0720 & 0.2022 \\ \hline \end{tabular} \end{table} Table 1: Learning rate tuning for different small network architectures over 100 epochs of \(100,000\) states \begin{table} \begin{tabular}{|l r r r r|} \hline Network & GPU & Latency & Evaluations per second on GPU & Peak Memory \\ \hline Residual(10,128) & 0.0890 & 719 & 436,656,640 \\ Residual(20,128) & 0.0943 & 679 & 350,025,728 \\ Residual(20,256) & 0.1185 & 540 & 452,578,816 \\ Residual(40,256) & 0.1580 & 405 & 529,187,072 \\ \hline Efficient(11) & 0.0958 & 668 & 1,101,474,048 \\ Efficient(13) & 0.1106 & 579 & 1,148,030,976 \\ Efficient(17) & 0.1307 & 490 & 1,159,418,368 \\ Efficient(19) & 0.1700 & 376 & 1,179,129,088 \\ \hline \end{tabular} \end{table} Table 2: Latency and peak memory on a RTX 2080 Ti GPU with 11 Go for different architectures and networks of different sizes. The latency and the peak memory are measured using a batch of 64 states. They are averaged over 100 calls to predict after a warmup of 100 previous calls. The latency is the average time in seconds to make a forward pass on a batch of 64 states. \begin{table} \begin{tabular}{|l r r r r r r r|} \hline Network & Learning Rate & Batch & Accuracy & MSE & MAE & WinCPU & WinGPU \\ \hline Residual(10,128) & 0.0002 & 64 & 49.12\% & 0.0534 & 0.1649 & 33.5\% & 20.4\% \\ Residual(20,128) & 0.0002 & 64 & 50.29\% & 0.0516 & 0.1618 & 31.6\% & 25.8\% \\ Residual(20,256) & 0.00005 & 64 & 52.50\% & 0.0476 & 0.1518 & 30.6\% & 51.0\% \\ Residual(40,256) & 0.00005 & 32 & 51.27\% & 0.0499 & 0.1586 & 8.9\% & 34.7\% \\ \hline Efficient(I1) & 0.002 & 64 & 49.35\% & 0.0553 & 0.1659 & 11.6\% & 8.1\% \\ Efficient(I3) & 0.002 & 64 & 51.28\% & 0.0484 & 0.1519 & 31.0\% & 19.4\% \\ Efficient(I7) & 0.002 & 64 & 53.01\% & 0.0440 & 0.1422 & 50.4\% & 38.3\% \\ Efficient(I9) & 0.001 & 64 & 54.29\% & 0.0405 & 0.1351 & - & - \\ \hline \end{tabular} \end{table} Table 4: Comparison of networks for 500 epochs of 100,000 states per epoch and a batch size of 64. 
The winrate WinCPU is the result of 800 randomized matches on CPU against Efficient(I9) with 10 seconds of CPU per move for both sides. The GPU winrate is calculated by using the same GPU time for both networks. The Accuracy, MSE and MAE were computed on a set of 50,000 states sampled from 50,000 games that were never seen during training. \begin{table} \begin{tabular}{|l r r r r|} \hline Network & Batch & GPU & Latency & Evaluations per second on GPU & Peak Memory \\ \hline Efficient(I9) & 32 & 0.128 & 250 & 589,454,592 \\ Efficient(I9) & 64 & 0.168 & 381 & 1,141,404,672 \\ Efficient(I9) & 128 & 0.224 & 571 & 2,297,159,168 \\ Efficient(I9) & 256 & 0.346 & 740 & 4,359,236,608 \\ Efficient(I9) & 512 & 0.583 & 878 & 8,672,660,992 \\ Efficient(I9) & 1024 & 1.062 & 964 & 17,121,701,376 \\ \hline Residual(20,256) & 32 & 0.111 & 288 & 253,801,472 \\ Residual(20,256) & 64 & 0.126 & 508 & 548,938,240 \\ Residual(20,256) & 128 & 0.159 & 805 & 800,936,192 \\ Residual(20,256) & 256 & 0.227 & 1128 & 1,566,134,528 \\ Residual(20,256) & 512 & 0.368 & 1391 & 2,954,716,416 \\ Residual(20,256) & 1024 & 0.667 & 1535 & 4,793,448,960 \\ \hline \end{tabular} \end{table} Table 5: Evolution of the A6000 GPU latency with the size of the batch. The latency and the peak memory are the median values of 7 runs. Each run is the average over 100 forwards after a warmup of 100 forwards. ## 5 Conclusion EfficientFormer's architecture showcases remarkable parameter efficiency, especially when compared to the Residual Network architecture, particularly in larger networks. This translates into superior performance on CPU, making it the preferred choice in this domain. Interestingly, when it comes to GPU utilization, both architectures perform at a similar level, especially for the largest networks in our experimentation. Moreover, it is worth highlighting that the EfficientFormer architecture we explored for Go is not limited to this particular game; it exhibits versatility and applicability to a wide range of other games and domains.
2309.05309
Simba: A Scalable Bilevel Preconditioned Gradient Method for Fast Evasion of Flat Areas and Saddle Points
The convergence behaviour of first-order methods can be severely slowed down when applied to high-dimensional non-convex functions due to the presence of saddle points. If, additionally, the saddles are surrounded by large plateaus, it is highly likely that the first-order methods will converge to sub-optimal solutions. In machine learning applications, sub-optimal solutions mean poor generalization performance. They are also related to the issue of hyper-parameter tuning, since, in the pursuit of solutions that yield lower errors, a tremendous amount of time is required to select the hyper-parameters appropriately. A natural way to tackle the limitations of first-order methods is to employ the Hessian information. However, methods that incorporate the Hessian do not scale or, if they do, they are very slow for modern applications. Here, we propose Simba, a scalable preconditioned gradient method, to address the main limitations of the first-order methods. The method is very simple to implement. It maintains a single precondition matrix that is constructed as the outer product of the moving average of the gradients. To significantly reduce the computational cost of forming and inverting the preconditioner, we draw links with the multilevel optimization methods. These links enable us to construct preconditioners in a randomized manner. Our numerical experiments verify the scalability of Simba as well as its efficacy near saddles and flat areas. Further, we demonstrate that Simba offers a satisfactory generalization performance on standard benchmark residual networks. We also analyze Simba and show its linear convergence rate for strongly convex functions.
Nick Tsipinakis, Panos Parpas
2023-09-11T08:53:22Z
http://arxiv.org/abs/2309.05309v1
Simba: A Scalable Bilevel Preconditioned Gradient Method for Fast Evasion of Flat Areas and Saddle Points ###### Abstract The convergence behaviour of first-order methods can be severely slowed down when applied to high-dimensional non-convex functions due to the presence of saddle points. If, additionally, the saddles are surrounded by large plateaus, it is highly likely that the first-order methods will converge to sub-optimal solutions. In machine learning applications, sub-optimal solutions mean poor generalization performance. They are also related to the issue of hyper-parameter tuning, since, in the pursuit of solutions that yield lower errors, a tremendous amount of time is required on selecting the hyper-parameters appropriately. A natural way to tackle the limitations of first-order methods is to employ the Hessian information. However, methods that incorporate the Hessian do not scale or, if they do, they are very slow for modern applications. Here, we propose Simba, a scalable preconditioned gradient method, to address the main limitations of the first-order methods. The method is very simple to implement. It maintains a single precondition matrix that it is constructed as the outer product of the moving average of the gradients. To significantly reduce the computational cost of forming and inverting the preconditioner, we draw links with the multilevel optimization methods. These links enables us to construct preconditioners in a randomized manner. Our numerical experiments verify the scalability of Simba as well as its efficacy near saddles and flat areas. Further, we demonstrate that Simba offers a satisfactory generalization performance on standard benchmark residual networks. We also analyze Simba and show its linear convergence rate for strongly convex functions. saddle free optimization preconditioned gradient methods coarse-grained models deep learning ## 1 Introduction We focus on solving the following optimization problem, commonly referred to as empirical risk minimization: \[\min_{\mathbf{x}\in\mathbb{R}^{n}}f(\mathbf{x}):=\frac{1}{m}\sum_{i=1}^{m}f_{ i}(\mathbf{x}). \tag{1}\] We assume that the model parameters are available in a _decoupled_ form, that is, \(\mathbf{x}=\{\mathbf{x}^{l}:l=1,\ldots L\}\), for some positive integer \(L\). The decoupled parameter setting naturally emerges for the training of a deep neural network (DNN) where \(L\) corresponds to the number of layers. In large-scale optimization scenarios, stochastic gradient descent method (SGD) and its variants have become a standard tool due to their simplicity and low per-iteration and memory costs. While easy to implement, they are nevertheless accompanied with important shortcomings. Below, we discuss these limitations and argue that they mainly stem from the absence of the Hessian matrix (or an appropriate approximation that leverages its structural properties) in their iterative scheme. **Shortfall I: Slow evasion of saddle points.** The prevalence of saddle points in high dimensional neural networks has been examined in the past and it has been shown that the number of saddles increases exponentially with the dimension \(n\)Dauphin et al. (2014). Consequently, developing algorithms that are capable of escaping saddle points is a primary goal when it comes to the training of DNNs. While stochastic first-order methods have theoretical guarantees of escaping saddle points almost always Panageas et al. (2019), it remains unclear whether they can achieve this efficiently in practice. 
As a result, given the significantly larger number of saddle points compared to local minima in high-dimensional settings, it is likely that stochastic first-order methods will converge to a saddle point rather than the desired local minimum. The most natural way to ensure convergence to a local minimum in the presence of saddles is by incorporating the Hessian matrix to scale the gradient vector. The Cubic Newton Nesterov et al. (2018) is a key method in non-convex optimization known to be able to escape saddles in a single iteration resulting in rapid convergence to a local minimum. The Cubic Newton method achieves this using the inverse of a regularized version of the Hessian matrix to premultiply the gradient vector. However, the method has expensive iterates and does not scale in high dimensions. Nevertheless, its theory indicates that methods employing accurate approximations of the regularized Hessian are expected to effectively evade saddle points and converge rapidly. **Shortfall II: Slow convergence rate near flat areas and the vanishing gradient issue.** The convergence behaviour of first-order methods can be significantly affected near local minima that are surrounded by plateaus Boyd and Vandenberghe (2004), Nesterov (2004). For instance, this occurs, when its Hessian matrix has most of the eigenvalues equal to zero. Therefore, to effectively navigate on such a flat landscape is to employ the inverse Hessian matrix. For instance, the Newton method pre-multiplies the gradient with the inverse Hessian to perform a local change of coordinates which significantly accelerates the convergence rate. Intuitively, one can expect a similar convergence behaviour by replacing the Hessian matrix with a preconditioned matrix that retains its structure. On the other hand, the vanishing gradient issue is commonly observed in the training of DNNs for which activation functions that range in \([0,1]\) are required. In such cases, the gradient value computed through the back-propagation decreases exponentially as the number of layers \(L\) of the network increase yielding no progress for the SGD. Hence, similar to addressing the slow-convergence issue near flat areas, employing a preconditioner is expected to mitigate the vanishing gradient issue and thus accelerate the convergence of the SGD method. To tackle the above shortfalls, the diagonal (first- or second-order) methods were introduced Duchi et al. (2011); Tieleman et al. (2012); Zeiler (2012); Kingma and Ba (2014); Zaheer et al. (2018); Yao et al. (2021); Ma (2020); Jahani et al. (2021); Liu et al. (2023). These algorithms aim to improve the behavior of the SGD near saddles or flat areas by preconditioning the gradient with a diagonal vector that mimics the hessian matrix. However, the Hessian matrices of deep neural networks need not be sparse, which means that a diagonal approximation of the Hessian may be inadequate. This raises concerns about the ability of diagonal methods to converge to a local minimum. A more promising class of algorithms consists of those preconditioned gradient methods that maintain a matrix instead of the potentially poor diagonal approximation. As noted earlier, the Cubic Newton method can effectively address the aforementioned shortfalls but it does not scale in large-scale settings. 
Other methods that explicitly use or exploit the Hessian matrix, such as Newton or Quasi-Newton methods Nesterov (2004), Broyden (1967), or those based on randomization and sketching Erdogdu and Montanari (2015); Pilanci and Wainwright (2017); Xu et al. (2016, 2020) still suffer from the same limitations as the Cubic Newton. A potentially promising way to address the storage and computational limitations of the Newton type methods is to employ precondition matrices based on the first-order information Duchi et al. (2011); Gupta et al. (2018). However, to the best of our knowledge, the performance of such methods near saddles or flat areas is yet to be examined. Another possible solution is to utilize the multilevel framework in optimization which significantly reduces the cost of forming and inverting the Hessian matrix Tsipinakis and Parpas (2021); Tsipinakis et al. (2023). But, it remains unclear whether the multilevel Newton-type methods can be efficiently applied to the training of modern deep neural networks. In this work, we introduce Simba, a Scalable Iterative **M**inimization **B**ilevel **A**lgorithm that addresses the above shortfalls and scales to high dimensions. The method is very simple to implement. It maintains a precondition matrix where its inverse square root is used to premultiply the moving average of the gradient. In particular, the precondition matrix is defined as the outer product of the exponential moving average (EMA) of the gradients. We explore the link between the multilevel optimization methods and the preconditioned gradient methods which enables us to construct preconditioners in a randomized manner. By leveraging this connection, we significantly reduce the cost and memory requirements of these methods to make them more efficient in time for large-scale settings. We propose performing a Truncated Singular Value decomposition (T-SVD) before constructing the inverse square root preconditioner. Here, we retain only the first few, say \(r\), eigenvalues and set the remaining eigenvalues equal to the \(r^{\text{th}}\) eigenvalue. This approach aims to construct meaningful preconditioners that do not allow for zero eigenvalues, and thus expecting a faster escape rate from saddles or flat areas. Our algorithm is inspired by SigmaSVD Tsipinakis et al. (2023); Shampoo Gupta et al. (2018) and SGD with momentum Bubeck et al. (2015). Simba is presented in Algorithm 1. A Pytorch implementation of Simba is available at [https://github.com/Ntsip/simba](https://github.com/Ntsip/simba). ### Related Work The works most closely related to ours is Shampoo Gupta et al. (2018) and AdaGrad Duchi et al. (2011). However, both methods have important differences to our approach which are discussed below. AdaGrad is an innovative adaptive first-order method which performs elementwise division of the gradient with the square root of the accumulated squared gradient. Adagrad is known to be suitable for sparse settings. However, the version of the method that retains a full preconditioner matrix is rarely used in practice due to the increased computational cost. Moreover, Shampoo, unlike our approach, maintains multiple preconditioners, one for each tensor dimension (i.e., two for matrices while we always use one). In addition, it computes a full SVD for each preconditioner. Further, all preconditioners lie in the original space dimensions. Given the previous considerations, Shampoo becomes computationally expensive when applied to very large DNNs. 
In contrast, our algorithm addresses the computational issues of Shampoo. Another difference between Shampoo and our approach is the way the preconditioners are defined. Shampoo defines the preconditioners as a sum of the outer product of gradients, whereas we employ the exponential moving average. In addition, the performance of Shampoo around saddles and flat areas is yet to be examined. We conjecture that Shampoo may not effectively improve the behaviour of the first-order methods near plateaus due to the monotonic increase of the eigenvalues in the preconditioners. Hessian-based preconditioned methods have been also proposed to improve convergence of the optimization methods near saddle point Reddi et al. (2018); Dauphin et al. (2014); O'Leary-Roseberry et al. (2020). However, these methods rely on second-order information which can be prohibitively expensive for very large DNNs. Therefore, they are more suitable to apply in conjunction with a fast first-order method and employ the Hessian approximation when first-order methods slow down. However, our method is efficient from the beginning of the training. A notable second-order optimization method is K-FAC Martens and Grosse (2015). K-FAC approximates the Fisher information matrix of each layer using the Kronecker product of two smaller matrices that are easier to handle. However, K-FAC is specifically designed for the training of generative (deep) models while our method is general and applicable to any stochastic optimization regime. As far as the adaptive diagonal methods are concerned, Adam and RMSprop were introduced to alleviate the limitations of AdaGrad. RMSprop scales the gradient using an averaging of the squared gradient. On the other hand, Adam scales the EMA of the gradient by the EMA of the squared gradient. Adam iterations has been shown to significantly improve the performance of first-order diagonal methods in the optimization of DNNs. Another method, named Yogi, has similar updates as AdaGrad but allows for the accumulated squared root not be monotonically increasing. To provide EMA of the gradient with a more informative scaling, several approaches have been proposed that replace the squared gradient with an approximation of the diagonal of the Hessian matrix Jahani et al. (2021); Yao et al. (2021); Ma (2020). AdafHessian Yao et al. (2021) approximates the Hessian matrix by its diagonal matrix using the Hutchinson's method while Apollo Ma (2020) based on the variational technique using the weak secant equation. Further, OASIS Jahani et al. (2021) is an adaptive modification of AdaHessian that automatically adjusts learning rates. A recent work called Sophia Liu et al. (2023) modifies AdaHessian direction by clipping it by a scalar. The authors also provide alternatives on the diagonal Hessian matrix approximation which they suggest to compute every \(10\) iterations to reduce the computational costs. However, it is still unclear whether the diagonal methods offer satisfactory performance near saddles points and flat areas, or if they effectively tackle the vanishing gradient issue in practical applications. ### Contributions The main contribution of this paper is the development of a scalable method that addresses the aforementioned shortfalls by empirical evidence. The method is scalable for training DNNs as it maintains only one preconditioned matrix at each iteration which lies in the subspace (coarse level). Thus, we significantly reduce the cost of forming and computing the SVD compared to Shampoo. 
The fact that we use a randomized T-SVD and requiring the most informative eigenvalues further reduces the total computational cost. The numerical experiments demonstrate that Simba can have cheaper iterates than AdaHessian and Shampoo in large-scale settings and deep architectures. In particular, in these scenarios, the wall-clock time of our method can be as much as \(25\) times less than Shampoo and \(2\) times less than AdaHessian. We, in addition, illustrate that Simba offers comparable, if not better, generalization errors against the state-of-the-art methods in standard benchmark ResNet architectures. Further, the numerical experiments show that our method has between two and three times more expensive iterates than Adam. This is expected due to outer products and the randomized T-SVD. However, this is a reasonable price to pay to improve the escape rate from saddles and flat areas of the existing preconditioned gradient methods. Therefore, Simba is suitable in problems where diagonal methods suffer from at least of one of the previous shortfalls. In such cases, we demonstrate that diagonal methods require much larger wall-clock time to reach an error as low as our method achieves. Hence, we argue that, besides the encouraging preliminary empirical results achieved by Simba, this paper highlights the limitations of diagonal methods, which are likely to get trapped near saddle points and thus return sub-optimal solution, particularly in the presence of large plateaus. This emphasizes the need to develop scalable algorithms that utilize more sophisticated preconditioners instead of relying on poor diagonal approximations of the Hessian matrix. Simba is accompanied with a simple convergence analysis assuming strongly convex functions. ## 2 Description of the Algorithm In this section, we begin by discussing the standard Newton-type multilevel method and its main components, i.e., the hierarchy of coarse models and the linear operators that will used to transfer information from coarse to fine model and vice versa. Then, as all the necessary ingredients are in place, we present the proposed algorithm. Even though, we assume a bilevel hierarchy, extending Simba to several level is straightforward. ### Background Knowledge - Multilevel Methods We will be using the multilevel terminology to denote \(f\) as the _fine_ model and also assume that a _coarse_ model that lies in lower dimensions is available. In particular, the coarse model is a mapping \(F:\mathbb{R}^{n_{\ell}}\rightarrow\mathbb{R}\), where \(n_{\ell}<n\). The subscript \(\ell\) will be used to denote quantities, i.e., vectors and scalars, that belong to the coarse level. We assume that both fine and coarse models are bounded from below so that a minimizer exists. In order to reduce the computational complexity of Newton type methods, the standard multilevel method attempts to solve (1) by minimizing the coarse model to obtain search directions. This process is described as follows. First, one needs to have access to linear prolongation and restriction operators to transfer information from the fine to coarse model and vice versa. We denote \(\mathbf{P}\in\mathbb{R}^{n\times n_{\ell}}\) be the prolongation and \(\mathbf{R}\in\mathbb{R}^{n_{\ell}\times n}\) the restriction operator and assume that they are full rank and \(\mathbf{P}=\mathbf{R}^{T}\). Given a point \(\mathbf{x}_{k}\) at iteration \(k\), we move to the coarse level with initialization point \(\mathbf{y}_{0}=\mathbf{R}\mathbf{x}_{k}\). 
Subsequently, we minimize the coarse model \(F\) to obtain \(\mathbf{y}^{*}\) and construct a search direction by: \[\mathbf{d}_{k}:=\mathbf{P}_{k}(\mathbf{y}^{*}-\mathbf{y}_{0}). \tag{2}\] To compute the search directions effectively, the standard multilevel method considers the Galerkin model as a choice of the coarse model: \[F(\mathbf{y}):=\langle\mathbf{R}_{k}\nabla f(\mathbf{x}_{k}),\mathbf{y}- \mathbf{y}_{0}\rangle+\frac{1}{2}\langle\mathbf{R}_{k}\nabla^{2}f(\mathbf{x}_{ k})\mathbf{P}_{k}(\mathbf{y}-\mathbf{y}_{0}),\mathbf{y}-\mathbf{y}_{0}\rangle. \tag{3}\] It can be shown that combining (2) and (3) we obtain a closed form solution for the coarse direction: \[\mathbf{d}_{k}=-\mathbf{P}_{k}\left(\mathbf{R}_{k}\nabla^{2}f(\mathbf{x}_{k}) \mathbf{P}_{k}\right)^{-1}\mathbf{R}_{k}\nabla f(\mathbf{x}_{k})\] However, it may be ineffective to employ solely the above strategy when computing the search direction if \(\mathbf{R}\) is fixed or defined in a deterministic fashion at each iteration. For instance, this can occur when \(\nabla f(\mathbf{x})\in\mathrm{null}(\mathbf{R})\) while \(\|\nabla f(\mathbf{x}_{k})\|_{2}\neq 0\), which implies no progress for the multilevel algorithm. The following conditions were introduced to prevent the multilevel algorithm from taking an ineffective coarse step: \[\|\mathbf{R}\nabla f(\mathbf{x})\|>\xi\|\nabla f(\mathbf{x})\|\quad\text{and} \quad\|\mathbf{R}\nabla f(\mathbf{x})\|>e,\] where \(\xi\in(0,\min(1,\|\mathbf{R}\|))\) and \(e>0\) are user defined parameters. Thus, the standard multilevel algorithm computes the coarse direction when the above conditions are satisfied and the fine direction (Newton) otherwise. On the other hand, it is expected that the multilevel algorithm will construct effective coarse directions when \(\mathbf{R}\) is selected randomly at each iteration. In particular, it has been demonstrated that the standard multilevel method can perform always coarse steps without compromising its superlinear or composite convergence rate (see for example Ho et al. (2019), Tsipinakis and Parpas (2021), Tsipinakis et al. (2023)). ### Simba The standard multilevel algorithm is well suited for strictly convex functions since the Hessian matrix is always positive definite. Due to the absence of positive-definiteness in general non-convex problems, Newton type methods may converge to a saddle point or a local maximum since they cannot guarantee the descent property of the search directions. They can even break down when the Hessian matrix is singular. These limitations have been efficiently tackled in the recent studies Nesterov and Polyak (2006). In multilevel literature, SigmaSVD Tsipinakis et al. (2023) addresses these limitations by using a truncated low-rank modification of the standard multilevel algorithm that efficiently escapes saddle points and flat areas. While SigmaSVD has been shown to perform well near saddles and flat areas, it requires \(\mathcal{O}(mn_{\ell}^{2})\) operations to form the reduced Hessian matrix which is prohibitively expensive for modern deep neural network models. Here, to alleviate this burden, we replace the expensive Hessian matrix with the outer product of the gradients. 
For this purpose, we consider the following coarse model: \[F(\mathbf{y}):=\langle\mathbf{R}_{k}\mathbf{G}_{k},\mathbf{y}-\mathbf{y}_{0} \rangle+\frac{1}{2}\langle(\mathbf{R}_{k}\mathbf{H}_{k}\mathbf{P}_{k})^{\frac {1}{2}}(\mathbf{y}-\mathbf{y}_{0}),\mathbf{y}-\mathbf{y}_{0}\rangle, \tag{4}\] where \(\quad\mathbf{H}_{k}:=\mathbf{G}_{k}\mathbf{G}_{k}^{T}\) and \(\mathbf{G}_{k}:=\nabla f(\mathbf{x}_{k})\). Thus, by defining \(\hat{\mathbf{d}}_{\ell}:=\mathbf{y}^{*}-\mathbf{y}_{0}\) and \(\mathbf{Q}_{\ell,k}:=\mathbf{R}_{k}\mathbf{H}_{k}\mathbf{P}_{k}\), the coarse direction can be computed explicitly: \[\hat{\mathbf{d}}_{k} =\mathbf{P}_{k}\left(\operatorname*{arg\,min}_{\mathbf{d}_{\ell }\in\mathbb{R}^{n_{\ell}}}\langle\mathbf{R}_{k}\mathbf{G}_{k},\mathbf{d}_{ \ell}\rangle+\frac{1}{2}\langle\mathbf{Q}_{\ell,k}^{\frac{1}{2}}\mathbf{d}_{ \ell},\mathbf{d}_{\ell}\rangle\right)\] \[=-\mathbf{P}_{k}\mathbf{Q}_{\ell,k}^{-\frac{1}{2}}\mathbf{R}_{k} \mathbf{G}_{k}.\] To construct \(\mathbf{Q}_{\ell,k}^{-\frac{1}{2}}\), we perform a randomized truncated SVD to obtain the first \(r+1\) eigenvalues and eigenvectors, that is, \([\mathbf{U}_{r+1},\mathbf{\Lambda}_{r+1}]=\text{T-SVD}(\mathbf{Q}_{\ell,k})\)Halko et al. (2011). Subsequently, we bound the diagonal matrix of eigenvalues from below by a scalar \(m>0\): \[\left[\mathbf{\Lambda}_{r+1}\right]_{i,m}:=\begin{cases}\mathbf{\Lambda}_{i},& \mathbf{\Lambda}_{i}\geq m\\ m,&\text{otherwise},\end{cases} \tag{5}\] to obtain the new modified diagonal matrix as \(\mathbf{\Lambda}_{r+1}^{m}\). We then construct \(\mathbf{Q}_{\ell,k}^{-\frac{1}{2}}\) by treating all its eigenvalues below the \(r\)-th equal to \(r+1\) eigenvalue. Formally, we define \[\mathbf{Q}_{\ell,k}^{-\frac{1}{2}}:=\left[\mathbf{\Lambda}_{r+1}\right]_{r+1,m }^{-\frac{1}{2}}\mathbf{\Lambda}_{n_{\ell}}+\mathbf{U}_{r}\left(\left[\mathbf{ \Lambda}_{r+1}^{m}\right]^{-\frac{1}{2}}-\left[\mathbf{\Lambda}_{r+1}\right]_{ r+1,m}^{-\frac{1}{2}}\mathbf{\Lambda}_{r}\right)\mathbf{U}_{r}^{T}. \tag{6}\] The key component when forming the precondition matrix as above is that we do not allow for zero eigenvalues since zero eigenvalues indicate the presence of flat areas where the convergence rate of optimization methods significantly decays. Hence, by requiring \(\mathbf{Q}_{\ell,k}\succeq m\mathbf{\Lambda}_{n_{\ell}}\), we ensure that directions which correspond to flat curvatures will turn into directions whose curvature is positive, anticipating an accelerated convergence to the local minimum or a fast escape rate from saddles. Similar techniques for constructing the preconditioners have been employed in Tsipinakis et al. (2023), Paternain et al. (2019) where it is demonstrated numerically that the algorithms can rapidly escape saddle points and flat areas in practical applications. The computational cost of forming the preconditioner matrix is \(\mathcal{O}(n_{\ell}^{2})\) which is significantly smaller than computing the Hessian matrix. Moreover, the randomized SVD requires \(\mathcal{O}(rn_{\ell}^{2})\) operation. In addition, the method can be trivially modified to employ the EMA of the gradients to account for the averaged history of the local information. Moreover, the EMA of the gradient is used to construct more informative preconditioner matrices. 
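To make the construction above concrete, the following is a minimal NumPy sketch of a single coarse step for one layer whose gradient is a matrix (the structure exploited in point (c) below). It is an illustration under stated assumptions rather than the released implementation: the function name and example sizes are made up, a dense SVD stands in for the randomized T-SVD of Halko et al. (2011), the constant term in Eq. (6) is read as the floored \((r+1)\)-th eigenvalue times the identity, and the restriction operator is the row-sampled identity matrix that Definition 2.1 below makes precise. The momentum variant simply feeds the EMA of the gradients into the same routine.

```python
import numpy as np

def simba_coarse_direction(G, n_ell, r, m_eps, rng):
    """One illustrative coarse step for a matrix-shaped layer gradient, Eqs. (4)-(6).

    G     : (q, d) gradient (or EMA of gradients) of one layer
    n_ell : coarse dimension, n_ell < q
    r     : number of retained eigenvalues in the truncated SVD
    m_eps : lower bound m on the eigenvalues, Eq. (5)
    """
    q, _ = G.shape
    # Restriction by uniform row sampling (row-subsampled identity, Definition 2.1):
    # applying R to G amounts to selecting n_ell rows of G.
    idx = rng.choice(q, size=n_ell, replace=False)
    G_ell = G[idx, :]                                  # R G, shape (n_ell, d)

    # Q_ell = (R G)(R G)^T; its eigenvalues are the squared singular values of G_ell.
    # A dense SVD is used here only to keep the sketch short; the paper uses a
    # randomized truncated SVD instead.
    U, s, _ = np.linalg.svd(G_ell, full_matrices=False)
    lam = s ** 2
    k = min(r + 1, lam.size)
    lam_m = np.maximum(lam[:k], m_eps)                 # eigenvalue flooring, Eq. (5)

    # Inverse square-root preconditioner, Eq. (6): eigenvalues beyond the r-th are
    # treated as equal to the floored (r+1)-th eigenvalue (assumed identity scaling).
    tail = lam_m[-1] ** -0.5
    U_r = U[:, :k - 1]
    head = lam_m[:k - 1] ** -0.5
    Q_inv_sqrt = tail * np.eye(n_ell) + U_r @ np.diag(head - tail) @ U_r.T

    # Coarse direction d = -P Q^{-1/2} R G, prolonged back by zero-filling the
    # rows that were not sampled.
    d = np.zeros_like(G)
    d[idx, :] = -Q_inv_sqrt @ G_ell
    return d

# Hypothetical usage: x <- x + t * d for a layer with a (512, 256) weight matrix.
rng = np.random.default_rng(0)
G = rng.standard_normal((512, 256))
d = simba_coarse_direction(G, n_ell=128, r=20, m_eps=1e-8, rng=rng)
```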
Specifically, to obtain the accelerated algorithm, we set a momentum parameter \(\beta\in(0,1)\) and replace \(\mathbf{G}_{k}\) in (4) with the EMA update: \[\mathbf{G}_{k}:=\beta\mathbf{G}_{k-1}+\nabla f(\mathbf{x}_{k}),\] where \(\mathbf{G}_{0}=0\) and \(k\geq 1\). Simba with momentum is presented in Algorithm 1. We emphasize that the method is scalable for solving large deep neural network models. We describe the three components that render the iterations of Simba efficient: **(a)** The operations can be performed in a decoupled manner. This means that preconditioners can be computed independently at each layer, leading into forming much smaller matrices that can be efficiently stored during the training. **(b)** The restriction operator is constructed based on uniform sampling without replacement from the rows of the identity matrix, \(\mathbf{I}_{n}\). This is described formally in the following definition: **Definition 2.1**.: Let the set \(S_{n}=\{1,2,\ldots,n\}\). Sample uniformly without replacement \(n_{\ell}<n\) elements from \(S_{n}\) to construct \(S_{n_{\ell}}=\{s_{n_{1}},s_{n_{2}},\ldots,s_{n_{\ell}}\}\). Then, the \(i^{\text{th}}\) row of \(\mathbf{R}\) is the \(s_{n_{i}}^{\text{th}}\) row of the identity matrix \(\mathbf{I}_{n}\) and \(\mathbf{P}=\mathbf{R}^{T}\). This way of constructing the restriction operator yields efficient iterations since in practice the coarse model in (4) can be formed by merely sampling \(n_{\ell}\) elements from \(\mathbf{x}_{k}\) and \(\nabla f(\mathbf{x}_{k})\) which has negligible computational cost. **(c)** Taking advantage of the parameter structure. Methods that employ full matrix preconditioners further reduce the memory requirement by exploiting the matrix or tensor structure of the parameters (for details see Shampoo Gupta et al. (2018)). For instance, for a matrix setting, if the parameter \(\mathbf{X}\in\mathbb{R}^{q\times d}\) and select \(\mathbf{R}\in\mathbb{R}^{n_{\ell}\times q}\), where \(n_{\ell}<q\), then we obtain \(\mathbf{G}_{\ell,k}:=\mathbf{RG}_{k}\in\mathbb{R}^{n_{\ell}\times d}\) and \(\mathbf{Q}_{\ell,k}\in\mathbb{R}^{n_{\ell}\times n_{\ell}}\). This results in \(\mathcal{O}((r+d)n_{\ell}^{2})\) operations for forming the preconditioner and applying the randomized SVD. Note that this number is much smaller than \(\mathcal{O}((d+q)(d^{2}+q^{2}))\) of Shampoo. ## 3 Convergence Analysis In this section we provide a simple convergence analysis of Simba when it generates sequences using the coarse model in (4) (without momentum). We show a linear convergence rate when the coarse model is constructed in both deterministic and randomized manner. Our analysis is based on the classical theory that assumes strongly convex functions and Lipschitz continuous gradients. In deterministic scenarios, the method is expected to alternate between coarse and fine steps to always reduce the value function. For this reason, we present two convergence results: **(a)** when the coarse step is always accepted, and **(b)** when only fine steps are taken. Hence the complete convergence behaviour of Simba is provided. On the other hand, when the prolongation operator is defined randomly at each iteration, multilevel methods expected to converge using coarse steps only Ho et al. (2019); Tsipinakis and Parpas (2021); Tsipinakis et al. (2023). Our numerical experiments also verify this observation. Moreover, we derive the number of steps needed for the method to reach accuracy \(\epsilon>0\) for both cases. We begin by stating our assumptions. 
**Assumption 3.1**.: _There are scalars \(0<\mu<L<+\infty\) such that \(f\) is \(\mu\)-strongly convex and has \(L\)-Lipschitz continuous gradients, that is:_

1. (\(L\)-Lipschitz continuity) for all \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{n}\) \[\|\nabla f(\mathbf{y})-\nabla f(\mathbf{x})\|_{2}\leq L\|\mathbf{y}-\mathbf{x}\|_{2}\]
2. (\(\mu\)-strong convexity) for all \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{n}\) \[f(\mathbf{y})\geq f(\mathbf{x})+\langle\nabla f(\mathbf{x}),\mathbf{y}-\mathbf{x}\rangle+\frac{\mu}{2}\|\mathbf{y}-\mathbf{x}\|_{2}^{2}\]

Next we state the assumptions on the prolongation and restriction operators.

**Assumption 3.2**.: _For the restriction and prolongation operators \(\mathbf{R}\) and \(\mathbf{P}\) it holds that \(\mathbf{P}=\mathbf{R}^{T}\) and \(\operatorname{rank}(\mathbf{P})=n_{\ell}\)._

Figure 1: Convergence behaviour of various algorithms for the non-linear least-squares problem.

The assumptions on the linear operators are not restrictive for practical applications. For instance, Definition 2.1 on \(\mathbf{P}\) and \(\mathbf{R}\) satisfies Assumption 3.2. The following assumption ensures that the algorithm always selects effective coarse directions.

**Assumption 3.3**.: _There exist \(e>0\) and \(\xi\in(0,\min(1,\|\mathbf{R}\|_{2}))\) such that if \(\|\nabla f(\mathbf{x})\|_{2}\neq 0\) it holds_ \[\|\mathbf{R}\nabla f(\mathbf{x})\|_{2}>\xi\|\nabla f(\mathbf{x})\|_{2}\quad\text{and}\quad\|\mathbf{R}\nabla f(\mathbf{x})\|_{2}>e.\]

For our convergence result below we will need the following quantity: we denote \(\omega:=\max\{\|\mathbf{R}\|_{2},\|\mathbf{P}\|_{2}\}\).

**Theorem 3.4**.: _Let \(f:\mathbb{R}^{n}\to\mathbb{R}\) be a function such that Assumption 3.1 holds. Suppose also that Assumptions 3.2 and 3.3 hold. Moreover, given \(\mathbf{Q}_{\ell,k}^{-\frac{1}{2}}\) in (6), define_ \[\hat{\mathbf{d}}_{k}:=-\mathbf{P}_{k}\mathbf{Q}_{\ell,k}^{-\frac{1}{2}}\mathbf{R}_{k}\nabla f(\mathbf{x}_{k}),\] _and suppose that the sequence \((\mathbf{x}_{k})_{k\in\mathbb{N}}\) is generated by \(\mathbf{x}_{k+1}=\mathbf{x}_{k}+\frac{\xi^{2}m}{L\sqrt{M}\omega^{4}}\hat{\mathbf{d}}_{k}\). Then, there exists \(\hat{c}\in(0,1)\) such that_ \[f(\mathbf{x}_{k+1})-f(\mathbf{x}^{*})\leq\hat{c}(f(\mathbf{x}_{k})-f(\mathbf{x}^{*})).\] _Moreover, at most_ \[\hat{K}=\frac{\log\left((f(\mathbf{x}_{0})-f(\mathbf{x}^{*}))/\epsilon\right)}{\log(1/\hat{c})}\] _iterations are required for this process to reach accuracy \(\epsilon\)._

Proof.: Recall from the statement of the theorem that the sequence is generated by \(\mathbf{x}_{k+1}=\mathbf{x}_{k}+t_{k}\hat{\mathbf{d}}_{k}\), where \(\hat{\mathbf{d}}_{k}=-\mathbf{P}_{k}\mathbf{Q}_{\ell,k}^{-\frac{1}{2}}\mathbf{R}_{k}\nabla f(\mathbf{x}_{k})\), \(t_{k}=\frac{\xi^{2}m}{\omega^{4}L\sqrt{M}}\) and \(\mathbf{Q}_{\ell}^{-1/2}\) is defined in (6). We also define the quantity \[\hat{\lambda}(\mathbf{x}):=\sqrt{\nabla f(\mathbf{x})^{T}\mathbf{P}\mathbf{Q}_{\ell}^{-\frac{1}{2}}\mathbf{R}\nabla f(\mathbf{x})},\] which satisfies \(\hat{\lambda}(\mathbf{x})\geq 0\). Below, we collect two general results that will be useful later in the proof. Given the Lipschitz continuity in Assumption 3.1, one can prove the following inequality (Nesterov et al. (2018)): \[f(\mathbf{y})\leq f(\mathbf{x})+\nabla f(\mathbf{x})^{T}(\mathbf{y}-\mathbf{x})+\frac{L}{2}\|\mathbf{y}-\mathbf{x}\|_{2}^{2}. \tag{7}\] Similarly, from the strong convexity we can obtain \[f(\mathbf{y})\leq f(\mathbf{x})+\nabla f(\mathbf{x})^{T}(\mathbf{y}-\mathbf{x})+\frac{1}{2\mu}\|\nabla f(\mathbf{x})-\nabla f(\mathbf{y})\|_{2}^{2}.\] Replacing \(\mathbf{y}\) and \(\mathbf{x}\) with \(\mathbf{x}_{k}\) and \(\mathbf{x}^{*}\), respectively, we have that \[f(\mathbf{x}_{k})-f(\mathbf{x}^{*})\leq\frac{1}{2\mu}\|\nabla f(\mathbf{x}_{k})\|_{2}^{2}. \tag{8}\] Furthermore, by construction, \(\mathbf{Q}_{\ell,k}\) is bounded. It is bounded from below by \(m\) from (5) and it is bounded from above by the Lipschitz continuity of the gradients. Then, there exists \(M\geq m>0\) such that for all \(k\geq 1\) we have that \[m\mathbf{I}\preceq\mathbf{Q}_{\ell,k}\preceq M\mathbf{I}.\] The above inequality implies \[\frac{1}{\sqrt{M}}\mathbf{P}_{k}\mathbf{R}_{k}\preceq\mathbf{P}_{k}\mathbf{Q}_{\ell,k}^{-\frac{1}{2}}\mathbf{R}_{k}\preceq\frac{1}{\sqrt{m}}\mathbf{P}_{k}\mathbf{R}_{k}.\] Using the above inequality we can obtain the following bound \[\|\hat{\mathbf{d}}_{k}\|_{2}=\|\mathbf{P}_{k}\mathbf{Q}_{\ell,k}^{-\frac{1}{2}}\mathbf{R}_{k}\nabla f(\mathbf{x}_{k})\|_{2}\leq\|\mathbf{P}_{k}\mathbf{Q}_{\ell,k}^{-\frac{1}{2}}\mathbf{R}_{k}\|_{2}\|\nabla f(\mathbf{x}_{k})\|_{2}\leq\frac{1}{\sqrt{m}}\|\mathbf{P}_{k}\mathbf{R}_{k}\|_{2}\|\nabla f(\mathbf{x}_{k})\|_{2}\leq\frac{\omega^{2}}{\sqrt{m}}\|\nabla f(\mathbf{x}_{k})\|_{2}. \tag{9}\] Similarly we can show a lower bound on \(\hat{\lambda}(\mathbf{x})\): \[\hat{\lambda}(\mathbf{x})^{2}=\nabla f(\mathbf{x})^{T}\mathbf{P}\mathbf{Q}_{\ell}^{-\frac{1}{2}}\mathbf{R}\nabla f(\mathbf{x})\geq\frac{1}{\sqrt{M}}\|\mathbf{R}\nabla f(\mathbf{x})\|_{2}^{2}\geq\frac{\xi^{2}}{\sqrt{M}}\|\nabla f(\mathbf{x})\|_{2}^{2}, \tag{10}\] where the last inequality follows from Assumption 3.3. Using now (7) and the fact that \(\hat{\lambda}(\mathbf{x})^{2}=-\nabla f(\mathbf{x})^{T}\hat{\mathbf{d}}_{k}\) we obtain \[f(\mathbf{x}_{k+1})\leq f(\mathbf{x}_{k})-t_{k}\hat{\lambda}(\mathbf{x}_{k})^{2}+t_{k}^{2}\frac{L}{2}\|\hat{\mathbf{d}}_{k}\|_{2}^{2}.\] Combining the above inequality with (10) and (9) we have that \[f(\mathbf{x}_{k+1})\leq f(\mathbf{x}_{k})-t_{k}\frac{\xi^{2}}{\sqrt{M}}\|\nabla f(\mathbf{x}_{k})\|_{2}^{2}+t_{k}^{2}\frac{L\omega^{4}}{2m}\|\nabla f(\mathbf{x}_{k})\|_{2}^{2}=f(\mathbf{x}_{k})-\frac{\xi^{4}m}{2\omega^{4}ML}\|\nabla f(\mathbf{x}_{k})\|_{2}^{2},\] where the equality follows from the definition of \(t_{k}\). Adding and subtracting \(f(\mathbf{x}^{*})\) in the above relationship and incorporating inequality (8) we obtain \[f(\mathbf{x}_{k+1})-f(\mathbf{x}^{*})\leq\hat{c}(f(\mathbf{x}_{k})-f(\mathbf{x}^{*})),\] where \(\hat{c}:=1-\frac{\xi^{4}m\mu}{\omega^{4}ML}\). Since \(\xi<\omega\), \(m<M\) and \(\mu<L\), we have \(\hat{c}\in(0,1)\). Unravelling the last inequality we get \[f(\mathbf{x}_{k})-f(\mathbf{x}^{*})\leq\hat{c}^{k}(f(\mathbf{x}_{0})-f(\mathbf{x}^{*})),\] and thus \(\lim_{k\to\infty}f(\mathbf{x}_{k})=f(\mathbf{x}^{*})\). Finally, solving for \(k\) the inequality \(\hat{c}^{k}(f(\mathbf{x}_{0})-f(\mathbf{x}^{*}))\leq\epsilon\), we conclude that at most \[\hat{K}=\frac{\log\left((f(\mathbf{x}_{0})-f(\mathbf{x}^{*}))/\epsilon\right)}{\log(1/\hat{c})}\] steps are required for this process to achieve accuracy \(\epsilon\). 
A direct consequence of the above theorem is convergence in expectation when \((\mathbf{x}_{k})_{k\geq 1}\) is generated randomly via a random prolongation matrix, e.g., see Definition 2.1. In this case we can guarantee that \[\mathbb{E}[f(\mathbf{x}_{k})]-f(\mathbf{x}^{*})\leq\hat{c}^{k}(f(\mathbf{x}_{0})-f(\mathbf{x}^{*})),\] which implies that \(\lim_{k\to\infty}\mathbb{E}[f(\mathbf{x}_{k})]=f(\mathbf{x}^{*})\). Theorem 3.4 effectively shows the number of steps required for the method to reach the desired accuracy when the coarse direction is always effective. However, this may not always be true, which, in deterministic settings, implies no progress for the method. As discussed previously, \(\xi\) and \(e\) can be viewed as user-defined parameters that prevent the method from taking ineffective coarse steps. Given fixed \(\xi\) and \(e\), the method performs iterations at the fine level if one of the two conditions in Assumption 3.3 is violated. At the fine level, the method constructs the preconditioner as \[\mathbf{Q}_{k}:=\nabla f(\mathbf{x}_{k})\nabla f(\mathbf{x}_{k})^{T},\] and then \(\mathbf{Q}_{k}^{-\frac{1}{2}}\) is constructed exactly as \(\mathbf{Q}_{\ell,k}^{-\frac{1}{2}}\) in (6). The next theorem shows a linear rate and derives the number of steps required when the method takes only fine steps. 

**Theorem 3.5**.: _Let \(f:\mathbb{R}^{n}\to\mathbb{R}\) be a function such that Assumption 3.1 holds. Suppose also that Assumption 3.2 holds. Moreover, define_ \[\mathbf{d}_{k}:=-\mathbf{Q}_{k}^{-\frac{1}{2}}\nabla f(\mathbf{x}_{k}),\] _and suppose that the sequence \((\mathbf{x}_{k})_{k\in\mathbb{N}}\) is generated by \(\mathbf{x}_{k+1}=\mathbf{x}_{k}+\frac{m}{L\sqrt{M}}\mathbf{d}_{k}\). Then, there exists \(c\in(0,1)\) such that_ \[f(\mathbf{x}_{k+1})-f(\mathbf{x}^{*})\leq c(f(\mathbf{x}_{k})-f(\mathbf{x}^{*})).\] _Moreover, at most_ \[K=\frac{\log\left((f(\mathbf{x}_{0})-f(\mathbf{x}^{*}))/\epsilon\right)}{\log(1/c)}\] _iterations are required for this process to reach accuracy \(\epsilon\)._ 

\begin{table} \begin{tabular}{c|c} \hline Algorithm & Mean \(\pm\) Std \\ \hline Adam & \(0.0158\pm 0.0016\) \\ AdaHessian & \(0.0325\pm 0.0029\) \\ Simba & \(0.0097\pm 0.001\) \\ Apollo & \(0.0556\pm 0.0018\) \\ \hline \end{tabular} \end{table} Table 1: Mean training error and standard deviation for the non-linear least-squares problem. The results were obtained over 5 runs with random initialization from \(\mathcal{N}(0,1)\). 

Proof.: The proof follows analogously to that of Theorem 3.4. The difference is the term that controls the linear rate, which is now given by \(c:=1-\frac{m\mu}{ML}\). It holds \(0<c\leq\hat{c}<1\). 

As expected, Theorem 3.5 shows a faster linear rate since the entire local information is employed during the training. Combining Theorems 3.4 and 3.5, we provide the complete picture of the linear convergence rate of the proposed method. 

## 4 Numerical Experiments 

In this section we validate the efficiency of Simba on a number of machine learning problems. Our goal is to illustrate that Simba outperforms the state-of-the-art diagonal optimization methods for problems with saddle points and flat areas or when the gradients are vanishing. For this purpose, we consider a non-linear least-squares problem and two deep autoencoders where optimization methods often converge to suboptimal solutions. 
In addition, we demonstrate that our method is efficient and offers comparable, if not better, generalization errors compared to the state-of-the-art optimization methods on standard benchmark ResNets using the CIFAR10 and CIFAR100 datasets. 

**Algorithms and set up:** We compare Simba with momentum (Algorithm 1) against Adam Kingma and Ba (2014), AdaHessian Yao et al. (2021), Apollo Ma (2020) and Shampoo Gupta et al. (2018) on a Tesla T4 GPU with 16G of RAM. The GPU time for Shampoo is not comparable to that of the other algorithms and hence its behaviour is reported only for the autoencoder problems. For all algorithms, the learning rate was selected using a grid search over \(t_{k}\in\) {1e-4, 5e-4, 1e-3, 5e-3, 1e-2, 5e-2, 1e-1, 5e-1, 1}. For all the algorithms we set the momentum parameters to their default values. For Simba we set \(r=20\) in all experiments. Throughout all experiments the batch size is set to \(128\). For all algorithms we comprehensively tuned \(\mathrm{eps}\); the default value is selected when others do not yield an improvement. In our case we denote \(m\equiv\mathrm{eps}\). 

### Non-linear least-squares 

Given a training dataset \(\{\mathbf{a}_{i},b_{i}\}_{i=1}^{m}\), \(\mathbf{a}_{i}\in\mathbb{R}^{n}\) and \(b_{i}\in\mathbb{R}\), we consider solving the following non-linear least-squares problem \[\min_{\mathbf{x}\in\mathbb{R}^{n}}\frac{1}{m}\sum_{i=1}^{m}\left(b_{i}-g(\mathbf{a}_{i}^{T}\mathbf{x})\right)^{2},\quad g(\omega):=\frac{1}{1+\exp(\omega)},\] which is a non-convex optimization problem. Here, we consider the Gisette dataset 1 for which \(m=6000\) and \(n=5000\). Furthermore, for Adam we select \(t_{k}=0.001\) and \(\mathrm{eps}=10^{-8}\) while for Apollo we set \(t_{k}=0.01\) and \(\mathrm{eps}=10^{-4}\). For AdaHessian, \(t_{k}=0.1\) and the hessian power parameter is set to \(0.5\). For Simba we select \(n_{\ell}=250\), \(t_{k}=0.05\) and \(\mathrm{eps}=10^{-12}\). 

\begin{table} \begin{tabular}{c|c|c|c} \hline Algorithm & CURVES & MNIST & Seconds CURVES/MNIST \\ \hline Adam & \(0.0057\pm 0.0017\) & \(0.0431\pm 0.0007\) & \(22/176\) \\ AdaHessian & \(0.0165\pm 0.0049\) & \(0.0604\pm 0.0044\) & \(48/224\) \\ Shampoo & \(-\) & \(-\) & \(606/12,442\) \\ Apollo & \(0.02309\pm 0.00159\) & \(0.0672\pm 6.3\times 10^{-7}\) & \(31/197\) \\ Simba & \(0.00250\pm 0.0005\) & \(0.0043\pm 8.1\times 10^{-3}\) & \(54/467\) \\ \hline \end{tabular} \end{table} Table 2: Mean training error and standard deviation over \(20\) and \(5\) runs for the CURVES and MNIST autoencoders, respectively. The last column reports the wall-clock time for a single run of the two autoencoder problems. 

Figure 2: Convergence behaviour of various algorithms for the CURVES and MNIST autoencoders. 

The performance of the optimization methods appears in Figure 1. Observe that this problem has several flat areas or saddle points which slow down the convergence of all algorithms. Nevertheless, the method with the best behaviour is Simba, which enjoys a much faster escape rate from the saddle points and thus always returns lower training errors. This can also be observed for different initialization points. In Table 1 we report the average training error and the standard deviation over \(5\) runs. The method that comes closest to Simba is Adam. 
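For reference, a minimal NumPy sketch of the objective and gradient pair minimized in this experiment is given below. The function name is illustrative, and the random data at the bottom merely stands in for the Gisette features and labels.

```python
import numpy as np

def nls_loss_and_grad(x, A, b):
    """f(x) = (1/m) * sum_i (b_i - g(a_i^T x))^2 with g(w) = 1 / (1 + exp(w))."""
    m = A.shape[0]
    g = 1.0 / (1.0 + np.exp(A @ x))          # g(a_i^T x) for all samples
    r = b - g                                # residuals
    loss = np.mean(r ** 2)
    # dg/dw = -g(1 - g), hence d/dx (b_i - g_i)^2 = 2 (b_i - g_i) g_i (1 - g_i) a_i
    grad = (2.0 / m) * (A.T @ (r * g * (1.0 - g)))
    return loss, grad

# Stand-in data with the Gisette dimensions (m = 6000 samples, n = 5000 features).
rng = np.random.default_rng(0)
A = rng.standard_normal((6000, 5000))
b = rng.integers(0, 2, size=6000).astype(float)
loss, grad = nls_loss_and_grad(np.zeros(5000), A, b)
```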
Further, from Figure 1 we see that, although the wall-clock time of Simba is two to three times increased compared to its competitors, the total time of our method is much better due to its fast escape rate near saddles and flat areas. Further, in Figure 4 in the supplementary material we compare the performance of Simba for different sizes in the coarse model. We observed that even Simba with \(n_{\ell}=25\) (i.e., updating only \(25\) parameters at each iteration) enjoys better escape rate than its competitors. This indicates that diagonal methods perform poorly near saddles and flat areas and highlights the importance of constructing meaningful preconditioners. Moreover, observe that the best wall-clock time is achieved with only \(5\%\) of the dimensions. This indicates that Simba significantly reduces the computational cost of preconditioned methods without compromising the convergence rate. ### Deep autoencoders In this section we investigate the performance of Simba on two deep autoencoder optimization problems that arise from (Hinton and Salakhutdinov [2006]), named CURVES and MNIST. Both optimization problems are considered difficult to solve and have become a standard benchmark for optimization methods due to the presence of saddle points and the issue of the vanishing gradients. The CURVES2 autoencoder consists of an encoder with layer size \(28\times 28,400,200,100,50,25,6\), a symmetric decoder which totals to \(0.8\)M parameters. For all layers the \(\mathrm{relu}\) activation function is applied. The network was trained on \(20,000\) images and tested on \(10,000\). We set the learning rate equal to \(0.001,1,0.05,0.5\) and \(0.01\) for Adam, Apollo, AdaHessian, Shampoo and Simba, respectively. The \(\mathrm{eps}\) parameter is selected \(10^{-15}\) and \(10^{-8}\) for Apollo and Simba, respectively, and set to its default value for the rest algorithms. The hessian power parameter for AdaHessian is selected \(1\). Further, the MNIST3 autoencoder consists of an encoder with layer size \(28\times 28,1000,500,250,30\) and a symmetric decoder which totals to \(2.8\)M parameters. In this network we use the sigmoid activation function to all layers. The training set for the MNIST dataset consists of \(60,000\) images while the test set has \(10,000\) images. Here, \(t_{k}\) is set equal to \(0.001,0.9,0.1,0.05\) and \(0.05\) for Adam, Apollo, AdaHessian, Shampoo and Simba, respectively. The \(\mathrm{eps}\) parameter is set equal to \(10^{-8}\) for Simba and to its default value for all other algorithms; the hessian power of AdaHessian is set equal to \(1\). For both autoencoder problems the coarse model size parameter is set \(n_{\ell}=0.5N\). Footnote 2: Dataset available at: www.cs.toronto.edu/~jmartens/digs3pts_1.mat Footnote 3: Dataset available at: [https://pytorch.org/vision/main/datasets.html](https://pytorch.org/vision/main/datasets.html) The comparison between the optimization algorithms for the two deep autoencoders appears in Table 2 and Figure 2. Clearly, Simba performs the best among the optimization algorithms resulting in the lowest training error in both CURVES and MNIST autoencoders. As a result, Simba performs better than its competitors on the test set too. It is evident that all the other methods get stuck in stationary points with large errors. In these points we observed that the Figure 3: Convergence behaviour of various algorithms on CIFAR10 using ResNet110 and CIFAR100 using ResNet9. 
\begin{table} \begin{tabular}{c|c|c|c|c} \hline & \multicolumn{2}{c|}{ResNet-110 - CIFAR10} & \multicolumn{2}{c}{ResNet-9 - CIFAR100} \\ \hline Algorithm & Training Error & Accuracy & Training Error & Accuracy \\ \hline Adam & \(0.0284\pm 0.0060\) & \(91.03\pm 0.16\) & \(0.0237\pm 0.0005\) & \(73.91\pm 0.18\) \\ AdaHessian & \(0.0560\pm 0.0427\) & \(91.34\pm 1.04\) & \(0.0081\pm 0.0036\) & \(72.93\pm 0.66\) \\ Apollo & \(0.0315\pm 0.0066\) & \(91.18\pm 0.25\) & \(0.0076\pm 0.0003\) & \(73.70\pm 0.19\) \\ Simba & \(0.0075\pm 0.0023\) & \(92.07\pm 0.09\) & \(0.0133\pm 0.0001\) & \(75.41\pm 0.22\) \\ \hline \end{tabular} \end{table} Table 3: Mean training error and accuracy \(\pm\) standard deviation of various optimization algorithms over \(5\) on CIFAR10 and CIFAR100, respectively. gradients become almost zero which results in a very slow progress of Adam, AdaHessian, Apollo and Shampoo. On the other, Simba enjoys a very fast escape rate from such points and thus it yields a very fast decrease in the training error. In addition, Table 2 reports the wall-clock time for each optimization algorithm. We see that Simba has an increased per-iteration costs (by a factor between two and three compared to Adam), nevertheless this is a reasonable price to pay for achieving a good decrease in the value function. The averaged behaviour of Shampoo is missing due to its expensive iterations. For this reason we omit Shampoo from the following experiments. ### Residual Neural Networks In this section we report comparisons on the convergence and generalization performance between the optimization algorithms using ResNet1104 and ResNet95 with CIFAR-10 and CIFAR-100 datasets, respectively. ResNet110 consists of 110 layers and a total of \(1.7\)M parameters while ResNet9 has 9 layers and \(30\)M parameters. Hence, our goal is to investigate the performance of Simba on benchmark problems and different architectures, i.e., deep and wide networks. Footnote 4: Network implementation available at: [https://github.com/akamaster/pytorch_resnet_cifar10](https://github.com/akamaster/pytorch_resnet_cifar10) For ResNet110 we set an initial learning rate parameter equal to \(0.001,0.1,0.1,0.01\) for Adam, Apollo, AdaHessian and Simba, respectively. We train the network for a total number of \(60\) epochs and decrease the learning rate by a factor of \(5\) at epochs \(30\) and \(40\). The hessian power parameter for AdaHessian is selected \(1\) and for Simba the hyper-parameter \(\mathrm{eps}\) is set to \(10^{-8}\) while \(n_{\ell}=0.5N\). The convergence and generalization performance between the optimization algorithms is illustrated in Figure 3 and Table 3. We see that Simba is able to achieve results that are comparable, if not better, with that of Adam, AdaHessian and Apollo in both training and generalization errors. The total GPU time of Simba is about three times larger than that of Adam, which is the fastest algorithm, but it is considerably smaller than that of AdaHessian (see Figure 5 in the supplement). For ResNet9, we set an initial value on \(t_{k}\) as follows: \(0.0005,0.001,0.05,0.005\) for Adam, Apollo, AdaHessian and Simba, respectively. We train the network for \(60\) epochs and use cosine annealing to determine the learning rate at each epoch and set minimum \(t_{k}\) value equal to \(0.01t_{k},0.1t_{k},0.01t_{k},0.02t_{k}\) for Adam, Apollo, AdaHessian and Simba, respectively. The hessian power, \(\mathrm{eps}\) and \(n_{\ell}\) parameters are set as above. 
Since classifying CIFAR100 is a much more difficult problem than CIFAR10, we use weight decay and gradient clipping, with values fixed at \(0.001\) and \(0.01\), respectively, for all algorithms, to improve the accuracy. Figure 3 and Table 3 indicate that Simba is able to offer very good generalization results on a difficult task. Further, we see that Simba is less than two times slower in wall-clock time than Adam and Apollo but again considerably faster than AdaHessian (see Figure 6 in the supplement). Figure 6 also shows that the total GPU time of Simba for reaching the desired classification accuracy is comparable to that of Adam and Apollo, which indicates the efficiency of our method on large-scale optimization problems. 

## 5 Conclusions 

We present Simba, a scalable bilevel optimization algorithm that addresses the limitations of first-order methods in non-convex settings. We provide empirical evidence of our method's fast escape rate from saddles. Numerical results also indicate that Simba achieves good generalization errors on modern machine learning applications. Convergence guarantees for Simba under strong convexity assumptions are also established. As future work, we aim to apply Simba to the training of LLMs such as GPT.
2303.17756
Effect of interpolation kernels and grid refinement on two way-coupled point-particle simulations
The predictive capability of two way--coupled point-particle Euler-Lagrange model in accurately capturing particle-flow interactions under grid refinement, wherein the particle size can be comparable to the grid size, is systematically evaluated. Two situations are considered, (i) uniform flow over a stationary particle, and (ii) decaying isotropic turbulence laden with Kolmogorov-scale particles. Particle-fluid interactions are modeled using only the standard drag law, typical of large density-ratio systems. A zonal, advection-diffusion-reaction (Zonal-ADR) model is used to obtain the undisturbed fluid velocity needed in the drag closure. Two main types of interpolation kernels, grid-based and particle size--based, are employed. The effect of interpolation kernels on capturing the particle-fluid interactions, kinetic energy, dissipation rate, and particle acceleration statistics are evaluated in detail. It is shown that the interpolation kernels whose width scales with the particle size perform significantly better under grid refinement than kernels whose width scales with the grid size. Convergence with respect to spatial resolution is obtained with the particle size--based kernels with and without correcting for the self-disturbance effect. While the use of particle size--based interpolation kernels provide spatial convergence and perform better than kernels that scale based on grid size, small differences can still be seen in the converged results with and without correcting for the particle self-disturbance. Such differences indicate the need for self-disturbance correction to obtain the best results, especially when the particles are larger than the grid size.
Nathan A. Keane, Sourabh V. Apte, Suhas S. Jain, Makrand A. Khanwale
2023-03-31T00:50:23Z
http://arxiv.org/abs/2303.17756v1
# Effect of interpolation kernels and grid refinement on two way-coupled point-particle simulations ###### Abstract The predictive capability of two way-coupled point-particle Euler-Lagrange model in accurately capturing particle-flow interactions under grid refinement, wherein the particle size can be comparable to the grid size, is systematically evaluated. Two situations are considered, (i) uniform flow over a stationary particle, and (ii) decaying isotropic turbulence laden with Kolmogorov-scale particles. Particle-fluid interactions are modeled using only the standard drag law, typical of large density-ratio systems. A zonal, advection-diffusion-reaction (Zonal-ADR) model is used to obtain the undisturbed fluid velocity needed in the drag closure. Two main types of interpolation kernels, grid-based and particle size-based, are employed. The effect of interpolation kernels on capturing the particle-fluid interactions, kinetic energy, dissipation rate, and particle acceleration statistics are evaluated in detail. It is shown that the interpolation kernels whose width scales with the particle size perform significantly better under grid refinement than kernels whose width scales with the grid size. Convergence with respect to spatial resolution is obtained with the particle size-based kernels with and without correcting for the self-disturbance effect. While the use of particle size-based interpolation kernels provide spatial convergence and perform better than kernels that scale based on grid size, small differences can still be seen in the converged results with and without correcting for the particle self-disturbance. Such differences indicate the need for self-disturbance correction to obtain the best results, especially when the particles are larger than the grid size. keywords: particle-laden flows, point-particle model, computational methods, self-disturbance correction + Footnote †: journal: International Journal of Multiphase Flow ## 1 Introduction Many engineering, biological, and environmental applications involve disperse particle-laden flows, wherein small size solid particles, liquid droplets, or gaseous bubbles are dispersed in a fluid flow, such as sediment transport, fluidized beds, spray injectors in gas-turbine combustion chambers, cavitation, among others. When the number of dispersed particles is very large, on the order of \(\mathcal{O}(10^{6})\)-\(\mathcal{O}(10^{9})\), the point-particle (PP) approach (Maxey & Riley, 1983) is commonly employed owing to its simplicity and affordability. In standard point-particle models, the particles are assumed to be spherical, significantly smaller than the grid size or the length scale of the smallest resolved flow features, have low volume loading, and be modeled as point sources of mass, momentum, and energy. Particle dynamics are modeled by solving the Maxey-Riley equations with force closures for drag, lift, added mass, pressure, history forces, among others (Maxey, 1987). In order to couple the particle and fluid phases in a two-way coupled framework, the fluid velocity interpolated at the particle location is used in the force closure models, and a reaction force from the particles is added to the fluid momentum equation (the energy interactions are also represented in a similar way for compressible flows with heat transfer). 
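Since the only particle force closure used in this work is the standard drag law, a minimal sketch of that closure is included below for orientation. The Schiller-Naumann finite-Reynolds-number correction shown here, as well as the function and argument names, are assumptions made for illustration and are not taken from the solver used in the paper.

```python
import numpy as np

def standard_drag_acceleration(u_seen, u_p, d_p, rho_p, rho_f, mu_f):
    """Drag acceleration on a spherical particle from the standard drag law.

    u_seen : undisturbed fluid velocity interpolated to the particle location (E2L)
    u_p    : particle velocity
    In a two way-coupled solver, the equal-and-opposite reaction force (particle
    mass times the returned acceleration, with a minus sign) is distributed back
    to the surrounding control volumes (L2E).
    """
    slip = np.asarray(u_seen) - np.asarray(u_p)
    re_p = rho_f * np.linalg.norm(slip) * d_p / mu_f   # particle Reynolds number
    phi = 1.0 + 0.15 * re_p ** 0.687                   # Schiller-Naumann correction (assumed form)
    tau_p = rho_p * d_p ** 2 / (18.0 * mu_f)           # Stokes particle response time
    return phi * slip / tau_p                          # du_p/dt due to drag alone
```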
Many important studies have used this point-particle model to investigate particle-turbulence interactions (Squires & Eaton, 1990; Elghobashi, 1991; Elghobashi & Truesdell, 1993; Boivin et al., 1998; Ferrante & Elghobashi, 2003). The particle force closures are typically based on the relative slip velocity at the particle location, which involves the difference between _the undisturbed fluid velocity_ seen by the particle and the particle velocity, especially for low-volume loadings. The undisturbed fluid velocity seen by the \(p^{th}\) particle is defined as what the flow velocity would be in the absence of the \(p^{th}\) particle, but with all other particles present. The undisturbed fluid velocity associated with each particle is not readily available. It is a standard practice to simply use the two way-coupled fluid velocity for computing particle force closures (Apte et al., 2003). When particles are very small compared to the grid size (\(D_{p}/\Delta\ll 1\)), where \(D_{p}\) is the particle diameter and \(\Delta\) is the grid size, the above approximation (using two-way coupled fluid velocity) does not result in significant error, as this is closer to the original assumptions of a point-particle model. However, when the particle size becomes comparable to the grid size (\(D_{p}/\Delta\sim 1\)), using the two way-coupled disturbed flow field in the force closure models can lead to significant errors in particle and fluid statistics, especially at low particle Reynolds numbers. Burton & Eaton (2005) conducted particle-resolved direct numerical simulations of a single, fixed particle of size \(D_{p}\approx 2\eta\) in decaying isotropic turbulence, where \(\eta\) is the Kolmogorov length scale. They found that the instantaneous error in modeled particle force varied between 15-30% with standard point-particle model without correction for the particle self-disturbance. Hwang & Eaton (2006) in a study on homogeneous, isotropic turbulence modulation by small, heavy particles concluded that the extra dissipation caused by particles of size comparable to the Kolmogorov scale was grossly underestimated by the point-particle model that do not correct for the self-disturbance created by the particle. In addition, these errors typically increase with decrease in grid size, resulting in different particle-fluid interactions under grid convergence (Horwitz & Mani, 2020). In a direct or large-eddy simulation of particle-laden flows, the particle size can be comparable to or even larger than the local grid resolution. For example, in wall bounded flows fine grids are needed in the wall-normal direction for these computations. The particle size can be several times larger than the finest wall-normal grid resolution, and neglecting the effect of particle self-disturbance in closure models can lead to large errors. Similarly, in simulations of spray injectors, the droplet sizes can be much larger than the grid resolution near the injector (Moin and Apte, 2006). Several recent studies have been devoted to quantifying the effect of self-disturbance when the particle becomes comparable to the grid size (Gualtieri et al., 2015; Horwitz and Mani, 2016, 2018; Esmaily and Horwitz, 2018; Fukada et al., 2018; Liu et al., 2019; Pakseresht et al., 2020; Pakseresht and Apte, 2021; Horwitz et al., 2022; Balachandar and Liu, 2023; Apte, 2022). 
In addition to obtaining the undisturbed fluid velocity, the closure models for point-particle dynamics require an interpolation kernel that interpolates the fluid properties from the surrounding control volumes to the Lagrangian point-particle location, for example, to calculate the slip-velocity used in the force closures for the equations of particle motion. Similarly, in two-way coupled simulations, an interpolation kernel is used to distribute the particle forces back to the background Eulerian grid. Several different interpolation kernels have been used for the Eulerian-grid-to-Lagrangian-particle-location (E2L) and Lagrangian-particle-to-Eulerian-grid (L2E) interpolations. In general, these interpolation kernels can be classified into two categories based on the kernel width used: (i) grid-based kernels, wherein the kernel width is based on local grid size and is independent of the particle size, and (ii) particle size-based kernel, wherein the kernel width scales with the particle size. Typically the E2L and L2E interpolation functions are identical, following the kinetic energy conservation principles identified by Sundaram and Collins (1996). However, majority of their analysis involved cases with particle size much smaller than the grid, \(D_{p}/\Delta\ll 1\). The grid-based interpolation kernels have been commonly used because of their ease of implementation on complex anisotropic or unstructured grids in three dimensions. These include trilinear (Ferrante and Elghobashi, 2005), cubic splines (Horwitz and Mani, 2020), clipped Gaussian kernel (Apte et al., 2003, 2009), among others. A Delta-function-based interpolation kernel with compact support (Roma et al., 1999) has been widely used in immersed boundary-based methods. The main advantage of grid-based interpolations is that their compact support requires only the nearest neighbors of the control volume within which the particle center is located, making it attractive for complex grids. However, for anisotropic and unstructured grids, the grid-based interpolations can lead to asymmetric interpolation weights and potentially impact the accuracy of the simulations. However, these effects are not significant if the particle size is much smaller than the grid size. Recently, Horwitz and Mani (2020) conducted a detailed evaluation of the numerical methods and the effect of grid size, relative to particle size on fluid-particle interactions predicted by point-particle models. They used grid-based interpolation kernels, with and without correcting for the particle self-disturbance field, in homogeneous isotropic turbulence. For a particle size to a Kolmogorov-scale ratio (\(D_{p}/\eta\)) of 0.25, the grid size (\(D_{p}/\Delta\)) was varied over the range of 0.25-1, and statistics of fluid kinetic energy, dissipation rate, particle kinetic energy, and particle acceleration were evaluated. Different grid-based interpolation kernels, e.g., trilinear, fourth-order Lagrange, and cubic splines, for E2L and L2E interpolation were used. All these interpolation kernels have grid-based interpolation stencils, thus as the grid is refined and the particle-to-grid size ratio (\(D_{p}/\Delta\)) increases, the interpolation stencil becomes narrower, localizing the effect of the particle on the fluid and vice versa. It was shown that, in the absence of any correction model for self-disturbance, the fluid and particle statistics do not converge as the grid was refined. 
The fluid-particle interactions were better predicted when the grid is coarser (smaller \(D_{p}/\Delta\)), and hence neglecting the self-disturbance effect resulted in a smaller error on coarser grids. As the grid is refined, the self-disturbance field becomes stronger, because the particle reaction force is distributed over a smaller region due to localized interpolation kernels, resulting in a larger error. In contrast, when a correction model was used to obtain an estimate of the undisturbed fluid velocity, consistent results were obtained for all kernels as the grid was refined, emphasizing the importance of self-disturbance correction, especially when the particle size is comparable to the grid size and grid-based interpolation kernels are used. Particle size-based interpolation kernels, wherein the kernel width is proportional to the particle size, have also been used with Gaussian function (Lomholt et al., 2002; Deen et al., 2009; Gualtieri et al., 2015; Finn et al., 2016; Vreman, 2016; Fukada et al., 2018; Pakseresht & Apte, 2019). These are commonly employed in dense particle-laden flows, wherein the volumetric displacement by the presence of the particles is typically accounted for by using volume-filtered Navier-Stokes equations (Apte et al., 2008; Capecelatro & Desjardins, 2013; Finn et al., 2016) to obtain a void fraction field smaller than unity. Vreman (2016) investigated particle-turbulence interactions of 64 fixed particles in a statistically stationary homogenous isotropic turbulence using fully resolved direct numerical simulations, and used the data to compare predictions from point-particle models without any correction for the self-disturbance. The particle size was twice as large as the grid size. A simple top-hat interpolation kernel was used to distribute the force to the grid cells and varied the kernel width over \(\frac{1}{2}D_{p}\), \(2D_{p}\), and \(4D_{p}\). The kernel width proportional to \(4D_{p}\) was shown to be able to capture the point-particle model predictions on turbulence attenuation well in comparison to the fully resolved data. On the other hand, when the kernel width was smaller, the point-particle model underpredicted the turbulence attenuation. Without correcting for the self-disturbance, a kernel-width much larger than the particle size was able to sample the fluid velocity from the undisturbed region, resulting in better predictions. This study shows the effect of the particle size-based kernel widths in point-particle models, however, their impact in obtaining grid-convergent results has not been fully investigated. The top-hat kernel results in sudden changes in interpolation weights and is not appropriate for moving particles. The main advantage of the particle size-based kernels is that irrespective of the grid, the region of influence of the particle on the fluid remains the same. However, if the particle is comparable to the local grid size a kernel width larger than the particle size may require several neighbors of the control volume containing the particle, and hence such an interpolation kernel is computationally expensive, especially for complex, unstructured grids. The main goal of the present work is to evaluate the accuracy and predictive capability of grid-based and particle size-based interpolation kernels for point-particle models with varying grid refinement, with and without accounting for the self-disturbance created by the particle. 
The present work focuses on two main hypotheses when particle sizes are comparable to the grid size (\(D_{p}\sim\Delta\)). **Hypothesis 1**: _The kernel width used in E2L and L2E interpolations should scale with the particle size and should be independent of the grid resolution. This keeps the region of influence of the particle the same, irrespective of the grid size._ **Hypothesis 2**: _Using a kernel width much larger than the particle size for E2L interpolation will sample the fluid velocity from a region with reduced influence from the self-disturbance field, thus improving the estimate of the fluid velocity at the particle location even without any correction model. However, a localized kernel for L2E with width about the size of the particle may be able to capture the reaction of the particle on the fluid more accurately._ The first hypothesis stems from the basic idea that the region of influence of the particle on the fluid flow should remain the same irrespective of the grid resolution employed. The second hypothesis suggests using different kernel widths for the E2L and L2E interpolations. For E2L, if the kernel width is much larger than the particle size, the interpolation kernel will sample fluid velocities from a region that is not disturbed by the particle. On the other hand, if the particle force is distributed over the same large region, the local effect of the particle on the fluid may not be well captured. Hence, a localized kernel for L2E with a width about the size of the particle may provide a better representation of the effect of the particle on the fluid flow. To test these hypotheses, a detailed evaluation is conducted of the grid-based and particle size-based interpolation kernels and their widths with varying grid refinement. Two canonical test cases are studied: flow over a stationary particle, and particle-turbulence interaction in decaying, homogeneous isotropic turbulence at low volume loadings. Different interpolation kernels are used to quantify their effects on the predictive capability of the point-particle model with varying grid refinement, with and without the correction for self-disturbance. A grid resolution-based, compact, three-point Roma delta function is given as \[\mathcal{G}^{\sigma}(\mathbf{x}_{cv}-\mathbf{x}_{p})=\left\{\begin{array}{ll}\frac{1}{6}\left(5-3|r|-\sqrt{-3(1-|r|)^{2}+1}\right),&0.5\leq|r|\leq 1.5,\ r=|\mathbf{x}_{cv}-\mathbf{x}_{p}|/\Delta\\ \frac{1}{3}\left(1+\sqrt{-3r^{2}+1}\right),&|r|\leq 0.5\\ 0,&\text{otherwise}.\end{array}\right. \tag{1}\] This kernel is second-order, smoother than trilinear interpolation, and commonly used in immersed-boundary methods (Roma et al., 1999). The second kernel is based on the particle size and is defined as a clipped Gaussian function, \[\mathcal{G}^{\sigma}(\mathbf{x}_{cv}-\mathbf{x}_{p})=\left\{\begin{array}{ll}\frac{1}{\sigma\sqrt{2\pi}}\exp\left[-\left(\frac{\mathbf{x}_{cv}-\mathbf{x}_{p}}{\sqrt{2}\,\sigma}\right)^{2}\right],&|r|\leq 3\sigma,\;r=|\mathbf{x}_{cv}-\mathbf{x}_{p}|\\ 0,&\text{otherwise},\end{array}\right. \tag{2}\] which provides smoother interpolations. The kernel width depends on a chosen standard deviation, \(\sigma\), which scales with the particle diameter, i.e., \(\sigma=cD_{p}\), where \(c=\mathcal{O}(1)\). It is important to note that Gaussian functions are non-compact and have long tails. In practice, however, the function is clipped once the weights become small, as in Eq. (2).
The weights are then normalized to enforce the conservation condition, \(\int_{\mathcal{V}}\mathcal{G}^{\sigma}dV=1\). In a Gaussian kernel, 99.7% of the interpolation weights are within \(\pm 3\sigma\); thus, choosing this as a cutoff point for the tails retains the majority of the kernel and ensures that the weights are very small at the filter cutoff, resulting in a smoother transition. Cutting off the filter tails too short would result in larger discontinuities in the interpolation weights at the cutoff point, possibly affecting the smoothness of particle and flow statistics. Two different kernel widths are used for this particle size-based kernel, corresponding to Gauss1 (\(\sigma=cD_{p}=\sqrt{2/\pi}D_{p}\)) and Gauss2 (\(\sigma=cD_{p}=1.5D_{p}\)). The Gauss1 kernel is commonly used in force-coupling methods (Lomholt et al., 2002), wherein \(\sigma\) is chosen such that the fluid velocity at the particle location matches the rigid-body motion of the particle, approximately enforcing a boundary condition at the particle location (Gualtieri et al., 2015; Maxey & Patel, 2001). The Gauss2 kernel gives a width similar to the clipped fourth-order polynomial function used by Deen et al. (2009). Figure 1 shows the comparison of weights under grid refinement for these three interpolation kernels, where each line marker represents the center of a control volume.
Figure 1: Interpolation weight distribution with grid refinement keeping the particle size unchanged: (a,d) Roma, (b,e) Gauss1, and (c,f) Gauss2. The x-axis is normalized with respect to grid size (top panel, a-c) and particle size (bottom panel, d-f). Different colors/symbols represent different grid resolutions: \(D_{p}/\Delta=0.5\) (coarse), \(D_{p}/\Delta=1.0\) (medium), and \(D_{p}/\Delta=2.66\) (fine). Each marker represents the center of a control volume.
In figures 1(a-c), the \(x\)-axis is normalized by the grid size, \(\Delta\), whereas in figures 1(d-f), the \(x\)-axis is normalized by the particle size, \(D_{p}\). It is useful to understand the extent of the interpolation kernel and its smoothness under grid refinement for a particle of a fixed size. As seen from figure 1(a), under grid refinement (\(\Delta\) is decreased), the Roma delta function has the same weights for the nearest neighbors of the control volume containing the particle. Since the grid is refined, this kernel distributes the weights in a narrower region, independent of the particle size to grid ratio. The particle size-based Gauss1 and Gauss2 kernels, on the other hand, change the shape and weights under grid refinement, as seen in figures 1(b,c). However, the extent of the distribution of the weights is unchanged with varying grid refinement, as the particle size is fixed. Thus, for the finest grid, the kernel becomes smoother and distributes weights to several control volumes on either side of the particle. The weights under grid refinement, normalized by the fixed particle size, are also informative. As the grid is refined, the Roma delta kernel becomes narrower, as shown in figure 1(d), thus changing the region of influence of the particle under grid refinement. In contrast, the particle size-based Gauss1 and Gauss2 kernels become smoother and distribute the weights over the same region dictated by the kernel width, keeping the region of influence of the particle the same. It is thus clear that, for a grid-based kernel, the two-way coupling force distribution (L2E) and
interpolation of the fluid velocity at the particle location (E2L) become highly localized as the grid is refined. This is true for any grid-based kernel, although the Roma delta kernel is shown and used in this work. Such a kernel is sufficient when the particle is much smaller than the grid size (\(D_{p}/\Delta\ll 1\)); however, it can give rise to a large disturbance field for \(D_{p}\sim\Delta\), as the reaction force from the particle is distributed in a very narrow region. On the other hand, for the particle size-based Gauss1 and Gauss2 kernels, the local weights become smaller under grid refinement, whereas the kernel distributes the reaction force over a region that scales with the particle size. Thus, for \(D_{p}\sim\Delta\), the particle size-based kernel will distribute weights to several neighbors on each side of the particle, which can be computationally expensive for parallel implementations as well as for complex, unstructured grids, wherein a linked list of several neighboring cells needs to be carried. However, when the particle becomes much smaller than the grid size (\(D_{p}/\Delta\ll 1\), not shown), the particle size-based kernel will produce a sharp delta function at the particle location, resulting in zero or very small weights at the neighboring grid control volumes. This, however, is consistent with the fundamental assumptions employed in a point-particle approach. The effect of these kernels on the flow-particle interactions is clearly illustrated with a simple test case of uniform flow over a stationary particle in Section 3.1. In the present work, evaluation of the point-particle models with and without self-disturbance correction is conducted under grid refinement, wherein the particle size becomes larger than the grid size. The self-disturbance correction is obtained by using an advection-diffusion-reaction (ADR) model equation for the disturbance field created by a particle and solved using the Zonal-ADR approach developed by Apte (2022). Results obtained from the grid-based Roma interpolation kernel are compared with those obtained from the particle size-based Gaussian kernels. The rest of the paper is arranged as follows. A brief description of the mathematical formulation for the disturbance field and its zonal implementation for each particle is given in Section 2. Results for flow over a stationary particle are given in Section 3.1, followed by particle-laden decaying isotropic turbulence corresponding to the particle-resolved direct numerical simulation (PR-DNS) study of Mehrabadi et al. (2018), at low Reynolds numbers and few particles, summarized in Section 3.2.1. The examination of particle-laden decaying isotropic turbulence is extended to a higher Reynolds number and a larger number of particles in Section 3.2.2. Lastly, conclusions are given in Section 4.
## 2 Mathematical Formulation
The mathematical formulation for the self-disturbance correction in the point-particle model for low volume loadings is discussed in brief.
The formulation is based on the incompressible Navier-Stokes equations, \[\frac{\partial u_{j}}{\partial x_{j}} = 0, \tag{3}\] \[\rho_{g}\left(\frac{\partial u_{i}}{\partial t}+\frac{\partial u_{j}u_{i}}{\partial x_{j}}\right) = -\frac{\partial p}{\partial x_{i}}+\mu\frac{\partial^{2}u_{i}}{\partial x_{j}^{2}}+\underbrace{\left(-\sum_{q=1}^{N_{p}}\mathcal{G}^{\sigma}(\mathbf{x}_{cv}-\mathbf{x}_{q})F_{i,q}^{t}\right)}_{\dot{S}_{i}}, \tag{4}\] where \(\rho_{g}\) is the density of the fluid, \(\mu\) is the dynamic viscosity, \(p\) is the pressure, and \(u_{i}\) is the two-way coupled fluid velocity that includes the disturbances created by all \(N_{p}\) particles through the net interphase source term (\(\dot{S}_{i}\)) based on all forces (\(F_{i,q}^{t}\)), except gravity, acting on the particle located at \(\mathbf{x}_{q}\) and projected onto the Eulerian grid control volumes located at \(\mathbf{x}_{cv}\), using the interpolation kernel \(\mathcal{G}^{\sigma}\), with \(\sigma\) being the kernel width. The projection function satisfies the conservation condition, \(\int_{\mathcal{V}}\mathcal{G}^{\sigma}d\mathcal{V}=1\), where the integration is over the whole fluid volume (\(\mathcal{V}\)). For the point-particle model, the Lagrangian equations for the motion of the \(p^{\text{th}}\) particle are \[\frac{dx_{i,p}}{dt}=u_{i,p},\ \ \ m_{p}\frac{du_{i,p}}{dt}=m_{p}\left(1-\frac{\rho_{g}}{\rho_{p}}\right)g_{i}+F_{i,p}^{t}, \tag{5}\] where \(x_{i,p}\) and \(u_{i,p}\) are the particle position and velocity, respectively; \(m_{p}\) is the mass of the particle, and \(F_{i,p}^{t}\) represents forces acting on the particle including the drag, lift, added mass, pressure, and history, among others. In the present work, only the non-linear drag force is used. The standard drag force on the particle \(p\) can be modeled as \[F_{i,p}^{\text{drag}}=m_{p}\frac{u_{i@\mathbf{x}_{p}}^{u,p}-u_{i,p}}{\tau_{r}},\ \ \tau_{r}=\frac{1}{f}\frac{\rho_{p}D_{p}^{2}}{18\mu},\ \ \ f=1+0.15Re_{p}^{0.687}, \tag{6}\] where \(u_{i@\mathbf{x}_{p}}^{u,p}\) represents the undisturbed fluid velocity seen by the particle \(p\) and interpolated to the particle location (\(\mathbf{x}_{p}\)), \(D_{p}\) is the particle diameter, \(\tau_{r}\) is the particle relaxation time, and \(f\) is the Schiller and Naumann nonlinear correction for the drag coefficient based on the particle Reynolds number, \(Re_{p}=\rho_{g}D_{p}|u_{i@\mathbf{x}_{p}}^{u,p}-u_{i,p}|/\mu\). The undisturbed flow field seen by a particle \(p\) [denoted by superscript \((\cdot)^{u,p}\)] can be obtained by excluding the reaction force from the \(p^{\text{th}}\) particle (Apte, 2022), \[\frac{\partial u_{j}^{u,p}}{\partial x_{j}} = 0, \tag{7}\] \[\rho_{g}\left(\frac{\partial u_{i}^{u,p}}{\partial t}+\frac{\partial u_{i}^{u,p}u_{j}^{u,p}}{\partial x_{j}}\right) = -\frac{\partial p^{u,p}}{\partial x_{i}}+\mu\frac{\partial^{2}u_{i}^{u,p}}{\partial x_{j}^{2}}-\sum_{q=1,q\neq p}^{N_{p}}\mathcal{G}^{\sigma}(\mathbf{x}_{cv}-\mathbf{x}_{q})F_{i,q}^{t}. \tag{8}\] Subtracting Eqs. (3-4) from the corresponding Eqs.
(7-8), the self-disturbance field [denoted by superscript \((\cdot)^{d,p}\)] created by the particle \(p\) is given as \[\frac{\partial u_{j}^{d,p}}{\partial x_{j}} = 0, \tag{9}\] \[\rho_{g}\left(\frac{\partial u_{i}^{d,p}}{\partial t}+\frac{\partial u_{j}u_{i}^{d,p}}{\partial x_{j}}+\frac{\partial u_{j}^{d,p}u_{i}^{u,p}}{\partial x_{j}}\right) = -\frac{\partial p^{d,p}}{\partial x_{i}}+\mu\frac{\partial^{2}u_{i}^{d,p}}{\partial x_{j}^{2}}+\mathcal{G}^{\sigma}(\mathbf{x}_{cv}-\mathbf{x}_{p})F_{i,p}^{t}, \tag{10}\] where \[u_{i}^{d,p}=u_{i}^{u,p}-u_{i},\ \ \ p^{d,p}=p^{u,p}-p. \tag{11}\] Note that in the self-disturbance equation, only the interaction force from the particle \(p\) is needed. There is also a nonlinear interaction between the disturbance field for particle \(p\) and the two-way coupled velocity (\(u_{i}\)) as well as the undisturbed velocity (\(u_{i}^{u,p}\)). Pakseresht & Apte (2021) derived similar equations for a _single_ particle system and showed accurate reconstruction of the undisturbed flow field. Solving the above set of equations for each particle in a multi-particle system, although possible, is expensive, as it requires the solution of a Poisson system for the pressure disturbance. Pakseresht & Apte (2021) first suggested approximating the pressure and viscous terms in the disturbance equation as a diffusion term with effective viscosity \(K_{\mu}\mu\). The idea for such an approximation stems from the fact that in the Stokes flow limit, the pressure contribution to the drag force on a spherical particle is exactly half of the viscous contribution and has the same form as the viscous force. Hence, to match the drag force in the Stokes limit, \(K_{\mu}=1.5\) was used. In general, \(K_{\mu}\) will vary based on the Reynolds number. Recent PR-DNS studies by Ganguli & Lele (2019) showed that the ratio of pressure to viscous contribution to drag remains roughly the same up to a particle Reynolds number of 10. Hence, \(K_{\mu}=1.5\) is used for all cases considered in this work, and good predictions up to \(Re_{p}=100\) suggest that the assumption is reasonable even for large Reynolds numbers. The approximate disturbance equations for the particle \(p\) then become \[\rho_{g}\left(\frac{\partial u_{i}^{d,p}}{\partial t}+\frac{\partial u_{j}u_{i}^{d,p}}{\partial x_{j}}+\frac{\partial u_{j}^{d,p}u_{i}^{u,p}}{\partial x_{j}}\right) = K_{\mu}\mu\frac{\partial^{2}u_{i}^{d,p}}{\partial x_{j}^{2}}+\mathcal{G}^{\sigma}(\mathbf{x}_{cv}-\mathbf{x}_{p})F_{i,p}^{t}. \tag{12}\] The above nonlinear, unsteady advection-diffusion-reaction (ADR) equations are solved in addition to Eqs. (3-4), to obtain the undisturbed velocity from Eq. (11).
### The Zonal-ADR method
The ADR equations for each particle \(p\) only need the net force acting on the particle, \(F_{i,p}^{t}\), and can be solved separately. In addition, the undisturbed fluid velocity is only needed at the particle location to compute the particle forces. These two aspects are exploited by solving the ADR equations in a small zone surrounding the particle in the Zonal-ADR approach. Details of the approach, numerical implementation, verification, and validation studies can be found in Apte (2022) and only a brief summary is given here. A Cartesian, collocated grid-based, second-order, fractional time-stepping solver has been developed (Finn et al., 2016) and used in the present work. An overset-grid algorithm is devised for the zonal solution and details of this numerical implementation can be found in Apte (2022).
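To illustrate how the pieces of the formulation fit together in a single coupling step, the short sketch below (the function and variable names are illustrative assumptions, not the actual overset solver routines) recovers the undisturbed velocity at the particle location from the stored disturbance field via Eq. (11), evaluates the Schiller-Naumann drag of Eq. (6), and returns the per-cell contributions of the reaction force to the source term of Eq. (4) using precomputed, normalized kernel weights.

```python
import numpy as np

def undisturbed_velocity(u_twoway_at_p, u_disturbance_at_p):
    """Eq. (11): u^{u,p} = u + u^{d,p}, both interpolated (E2L) to the particle location."""
    return u_twoway_at_p + u_disturbance_at_p

def schiller_naumann_drag(u_undisturbed, u_p, d_p, rho_p, rho_g, mu, m_p):
    """Eq. (6): drag force on particle p based on the undisturbed slip velocity."""
    slip = u_undisturbed - u_p
    re_p = rho_g * d_p * np.linalg.norm(slip) / mu
    f = 1.0 + 0.15 * re_p**0.687                 # Schiller-Naumann nonlinear correction
    tau_r = rho_p * d_p**2 / (18.0 * mu * f)     # particle relaxation time
    return m_p * slip / tau_r

def reaction_source(force, weights, cv_volumes):
    """L2E step: per-control-volume momentum source -G^sigma * F of Eq. (4), with the
    kernel approximated by normalized weights divided by the cell volume."""
    return [-w / v * force for w, v in zip(weights, cv_volumes)]
```

In the Zonal-ADR approach, `u_disturbance_at_p` would come from interpolating the zonal solution of Eq. (12) to the particle location; without a correction model it is simply taken as zero, which corresponds to the "no correction" configuration evaluated in the results below.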
In the present case, the overset grid and the flow-solver grids are exactly aligned, and only the control volumes surrounding the particle location where the ADR equations need to be solved are tagged for each particle. As the particle moves, this region where the disturbance equation is solved is updated. A zone containing \(\pm cni\), \(\pm cnj\), \(\pm cnk\) control volumes is used, where \(cni\), \(cnj\), and \(cnk\) correspond to the extent of the number of overset-grid points around the particle in the \(x\), \(y\), and \(z\) directions, respectively. Typically, the values of \(cni\), \(cnj\), and \(cnk\) depend on the particle size as well as the local grid resolution. A typical value of \(\pm 6\)-\(8\) grid points is found to be sufficient for all the test cases studied in the present work. The viscous and the other advection terms are approximated using the second-order Crank-Nicolson scheme, using the same spatial discretization algorithm as the baseline flow solver. The third term in the ADR Eq. (12) represents the advection of the disturbance by the undisturbed flow velocity (\(u_{i}^{u,p}\)). This nonlinear term is treated explicitly by using the undisturbed flow velocity from the previous time step. The disturbance field is generated by the particle reaction force and, in the absence of any advection, it is simply diffused away from the particle. In the presence of advection, a convective outflow boundary condition is applied to boundaries of the overset grid that have outflux due to the two-way coupled velocity field, whereas no new disturbance is introduced at the influx boundaries. If there are walls present in the domain, the disturbance velocity field at the wall also goes to zero due to the no-slip condition. The Zonal-ADR approach requires the storage of the two-way coupled velocity field on the overset grid for each particle. Using only a few grid cells around the particle, the solution of the ADR equations is reasonably fast and its overhead is modest (around 20%) with proper particle-load balancing.
## 3 Results
To test Hypothesis 1 given in Section 1, and to understand the effect of different kernels and kernel widths for E2L and L2E interpolations, a simple test case of a stationary particle in a uniform flow is investigated at parameters (Reynolds number, ratio of \(D_{p}\) to \(\Delta\)) representative of those in the subsequent isotropic turbulence cases. The stationary particle test case is investigated with and without the Zonal-ADR correction, to quantify the effect of the kernels and kernel widths under grid refinement. Next, particle-turbulence interactions are investigated at low volume loadings in decaying isotropic turbulence laden with monodispersed, Kolmogorov-scale, spherical particles at a low initial Reynolds number, \(Re_{\lambda,0}=27\). The parameters are chosen corresponding to the particle-resolved DNS (PR-DNS) study by Mehrabadi et al. (2018) and the effect of different kernels and kernel widths under grid refinement is evaluated. For most of these cases, the E2L and L2E interpolation functions are identical. To test Hypothesis 2, different E2L and L2E interpolation kernels are used, with a narrower Roma delta function for L2E and a wider Gauss2 kernel for E2L (denoted as Roma-Gauss2). The effect of these non-symmetric interpolation kernels is again evaluated for the stationary particle case, as well as the decaying isotropic case.
Finally, taking the results from the lower Reynolds number case as validation and motivation for the choice of interpolation kernels, the decaying isotropic turbulence cases are extended to a higher Reynolds number case with Kolmogorov-scale particles, with an order of magnitude more particles, two different Stokes numbers, and a range of grid sizes.
### Flow over a stationary sphere
In this section, flow over a stationary particle is investigated at a particle Reynolds number (\(Re_{p}\)) of 1, which is representative of the Reynolds numbers obtained in the isotropic turbulence case discussed later in Section 3.2.1. A particle of size \(D_{p}=2\pi/96\) is placed at the center of a cubic domain of length \(2\pi\). A grid size of \(\Delta=D_{p}=2\pi/96\) is used for the baseline computations. A systematic grid refinement study, keeping the particle parameters the same, is carried out with grid sizes of \(2\pi/48\), \(2\pi/96\), \(2\pi/192\), and \(2\pi/256\). The main goal of this study is to quantify the effect of grid refinement on predicting the drag force on the particle using different interpolation kernels, namely, (i) Roma, (ii) Gauss1, (iii) Gauss2, and (iv) Roma-Gauss2. The E2L and L2E interpolation functions are the same for the first three cases, whereas Roma-Gauss2 uses the Roma function for L2E and Gauss2 for E2L interpolations. Table 1 shows the relative error in computing the undisturbed fluid velocity at the particle location (\(u_{@p}^{un}\)) with and without the Zonal-ADR correction, compared to the true value [corresponding to the inlet velocity (\(u_{\text{in}}\)) of 1], the relative error in the particle drag force \(F_{p}\), and the actual disturbed two-way coupled velocity at the particle location normalized by the inlet velocity (\(u_{@p}^{2w}/u_{\text{in}}\)). Without the Zonal-ADR correction, the errors in particle force and the undisturbed fluid velocity at the particle location are large (10-60%) and the errors get worse with grid refinement, especially when the grid-based Roma delta function is used. This result is consistent with the main conclusions of Horwitz and Mani (2020), who used trilinear interpolation for particle-laden isotropic turbulence and observed that coarser meshes resulted in better predictions and the prediction errors became worse with grid refinement. As the grid is refined, the Roma kernel becomes narrower and distributes the particle reaction force to the nearest neighbors of the control volume containing the particle [see also figures 1(a,d)]. This creates a strong disturbance field and, without any correction, results in a large error. With the Zonal-ADR correction, even for a grid-based Roma interpolation kernel, particle force and undisturbed fluid velocity errors are significantly smaller for all grid refinements. However, when the grid resolution is much finer than the particle size (\(D_{p}/\Delta=2.66\)), a negative two-way fluid velocity at the particle location is observed, which is unphysical based on the flow around a particle obtained from particle-resolved simulations.
This suggests that, with a very narrow interpolation kernel, a large particle reaction force creates an extremely strong disturbance, resulting in negative fluid velocity at the particle location.
\begin{table}
\begin{tabular}{|c c|c c c|c c c|}
\hline
 & & \multicolumn{3}{c|}{No Correction} & \multicolumn{3}{c|}{Zonal-ADR} \\
\cline{3-8}
\(D_{p}/\Delta\) & Interpolation & \% Error \(u_{@p}^{un}\) & \% Error \(\|F_{p}\|\) & \(u_{@p}^{2w}/u_{in}\) & \% Error \(u_{@p}^{un}\) & \% Error \(\|F_{p}\|\) & \(u_{@p}^{2w}/u_{in}\) \\
\hline \hline
0.5 & Roma & 16.3 & 17.56 & 0.84 & 1.24 & 1.36 & 0.798 \\
0.5 & Gauss1 & 16.3 & 17.5 & 0.837 & 0.83 & 1.35 & 0.882 \\
0.5 & Gauss2 & 10.4 & 11.2 & 0.897 & 0.35 & 0.9 & 0.87 \\
0.5 & Roma-Gauss2 & 13.0 & 13.78 & 0.872 & 0.59 & 0.63 & 0.81 \\
\hline
1 & Roma & 33.7 & 35.8 & 0.66 & 0.87 & 0.9 & 0.47 \\
1 & Gauss1 & 22.5 & 24.6 & 0.76 & 0.6 & 0.67 & 0.7 \\
1 & Gauss2 & 11.5 & 12.5 & 0.88 & 0.35 & 0.38 & 0.87 \\
1 & Roma-Gauss2 & 16.5 & 17.7 & 0.84 & 0.59 & 0.63 & 0.81 \\
\hline
2 & Roma & 53.4 & 55.9 & 0.47 & 5.4 & 5.8 & -0.16 \\
2 & Gauss1 & 23.1 & 24.6 & 0.77 & 0.1 & 0.1 & 0.3 \\
2 & Gauss2 & 13.8 & 14.9 & 0.86 & 0.9 & 0.98 & 0.83 \\
2 & Roma-Gauss2 & 19.6 & 21.3 & 0.8 & 0.13 & 0.16 & 0.25 \\
\hline
2.66 & Roma & 60.9 & 63.32 & 0.39 & 7.1 & 7.6 & -0.558 \\
2.66 & Gauss1 & 16.3 & 17.5 & 0.837 & 1.47 & 1.6 & 0.625 \\
2.66 & Gauss2 & 10.3 & 11.2 & 0.897 & 0.97 & 1.05 & 0.82 \\
2.66 & Roma-Gauss2 & 12.8 & 13.7 & 0.872 & 1.4 & 1.3 & 0.724 \\
\hline
\end{tabular}
\end{table}
Table 1: Effect of grid refinement on the undisturbed fluid velocity, the magnitude of the particle drag force, and the disturbed fluid velocity at the particle location for different interpolation kernels. The first two columns in each group are percent relative errors; \(u_{@p}^{2w}/u_{in}\) is the two-way coupled velocity at the particle location normalized by the inlet velocity.
With correction, the reaction force, \(F_{p}\), is larger than without correction, as the correction scheme provides the undisturbed fluid velocity at the particle location reliably. Even though the Zonal-ADR correction scheme reduces the error in the undisturbed fluid velocity, the two-way coupled velocity field at the particle location goes in the reverse direction for particles much larger than the grid. Without correction, however, for these large particles, the two-way coupled velocity at the particle location is in the main flow direction, because the reaction force is underpredicted. Overall, this suggests that the two-way coupled velocity will be incorrect or unphysical both with and without correction when using a Roma kernel and when the particle size is larger than the grid size. Hence, the grid-based kernel should not be used when the particle size is larger than the grid size. On the other hand, when the interpolation kernel width is based on the particle size (Gauss1 and Gauss2), the errors are reduced even when no correction is used, and they remain small with varying grid refinement. With the Zonal-ADR correction, the errors are significantly lower (\(<1.5\%\)) compared to the Roma interpolation, and the two-way fluid velocity at the particle location remains physical and does not become negative. Interestingly, even the Roma-Gauss2 (L2E-E2L) interpolation results in small errors compared to the Roma (L2E-E2L) kernel, even without the Zonal-ADR correction, and the errors are comparable to the Gauss1 or Gauss2 kernels for L2E-E2L interpolation without correction.
This suggests that, without correction, using a larger kernel width based on the particle size for E2L and a narrower kernel for the L2E distribution provides a reasonable approximation, as postulated in Hypothesis 2: a wider stencil, proportional to the particle size, samples the fluid velocity at the particle location from a region less affected by the self-disturbance of the particle, resulting in better predictions. Numerical errors produced by not using the same interpolation stencils for L2E and E2L (Sundaram & Collins, 1996) are compensated by the better prediction of the undisturbed fluid velocity obtained with particle size-based kernels. However, symmetric (E2L-L2E) kernels, such as Gauss1 and Gauss2, generally give smaller errors. With correction, the non-symmetric or symmetric particle-based kernels provide similar results, and the errors are still much lower than without correction.
### Decaying isotropic turbulence
In this section, decaying isotropic turbulence laden with Kolmogorov-scale particles is investigated with and without the self-disturbance correction under systematic grid refinement for the different interpolation kernels. First, a low Reynolds number study is performed corresponding to the particle-resolved DNS of Mehrabadi et al. (2018) for validation, followed by a higher Reynolds number case with a large number of particles.
#### 3.2.1 Lower Reynolds number validation case
Particle-laden, decaying, isotropic turbulence corresponding to the particle-resolved data of Mehrabadi et al. (2018), at a Taylor microscale Reynolds number of \(Re_{\lambda}\approx 27\), is investigated. The computational domain is a triply periodic cubic box of side length \(2\pi\). The initial Kolmogorov length scale, \(\eta_{0}\), is set to \(2\pi/96\). The initial condition for each simulation is a divergence-free random field whose energy spectrum obeys Pope's model spectrum (Pope, 2000), \[E(\kappa)=C\epsilon^{2/3}\kappa^{-5/3}f_{L}(\kappa L)f_{\eta}(\kappa\eta), \tag{13}\] where \(C=1.5\) is a model constant, \(\kappa\) is the wavenumber, \(\epsilon\) is the dissipation rate, and \(L\) and \(\eta\) are the large eddy and Kolmogorov length scales, respectively. The functions \(f_{L}\) and \(f_{\eta}\) determine the shape of the energy-containing and dissipative ranges of the energy spectrum and are defined as \[f_{L}(\kappa L) = \left[\frac{\kappa L}{[(\kappa L)^{2}+c_{L}]^{1/2}}\right]^{5/3+p_{0}}, \tag{14}\] \[f_{\eta}(\kappa\eta) = \exp\left[-\beta\left([(\kappa\eta)^{4}+c_{\eta}^{4}]^{1/4}-c_{\eta}\right)\right]. \tag{15}\]
\begin{table}
\begin{tabular}{|c|c|c|}
\hline
 & Validation case & Higher Reynolds number case \\
\hline
\(N^{3}\) & \(96^{3}\), \(144^{3}\), \(192^{3}\), \(256^{3}\) & \(128^{3}\), \(192^{3}\), \(256^{3}\), \(384^{3}\), \(512^{3}\) \\
\(L\) & \(2\pi\) & \(2\pi\) \\
\(\mu\) & \(0.02058\) & \(0.005245\) \\
\(k_{max}\eta_{0}\) & \(3.14\), \(4.7\), \(6.28\), \(8.36\) & \(1.5\), \(2.25\), \(3.0\), \(4.5\), \(6\) \\
\(k_{f,0}\) & \(1.0045\), \(1.0335\), \(1.0335\) & \(1.0008\) \\
\(\epsilon_{f,0}\) & \(0.4169\), \(0.46086\), \(0.46659\), \(0.42697\) & \(0.4463\), \(0.4564\), \(0.4618\), \(0.4651\), \(0.4665\) \\
\(c_{L}\) & \(0.415\) & \(1.212\), \(1.213\), \(1.832\), \(1.686\), \(1.772\) \\
\(c_{\eta}\) & \(0.4\) & \(0.419\), \(0.417\), \(0.420\), \(0.420\) \\
\(Re_{\lambda,0}\) & \(27\) & \(53.4\), \(52.8\), \(52.5\), \(52.3\), \(52.2\) \\
\(D_{p}/\Delta\) & \(1\), \(1.5\), \(2\), \(2.66\) & \(0.477\), \(0.716\), \(0.955\), \(1.432\), \(1.91\) \\
\(St_{\eta,0}\) & \(100\) & \(10\), \(100\) \\
\(\rho_{p}/\rho_{f}\) & \(1800\) & \(180\), \(1800\) \\
\(N_{p}\) & \(1689\) & \(36796\) \\
\(\phi\) & \(0.001\) & \(0.001\) \\
\(D_{p}/\eta_{0}\) & \(1.0\) & \(1.0\) \\
\(\phi_{m}\) & \(1.8\) & \(0.18\), \(1.8\) \\
\hline
\end{tabular}
\end{table}
Table 2: Initial flow and particle parameters corresponding to the different decaying isotropic turbulence cases.
The model constants \(p_{0}=2.0\) and \(\beta=5.2\) are the same as suggested by Pope (2000), while \(c_{L}\) and \(c_{\eta}\) are determined to match the energy and dissipation rate required for the chosen \(Re_{\lambda,0}\) and \(D_{p}=\eta_{0}\). These constants change for different grid resolutions and grid arrangements (a collocated arrangement is used in the present study). Apart from \(c_{L}\), \(c_{\eta}\), \(k_{f,0}\), and \(\epsilon_{f,0}\), all other parameters for the model spectrum are similar to those used by Mehrabadi et al. (2018). A summary of the simulation parameters in this section is given in Table 2. For the baseline case, the grid size (\(\Delta\)) is selected to be the same as the initial Kolmogorov scale, which is equal to the particle size (\(D_{p}=\eta_{0}\)). The particle-to-fluid density ratio is \(\rho_{p}/\rho_{f}=1800\), and the volume and mass loading are \(\phi=0.001\) and \(\phi_{m}=1.8\), respectively, giving a total of \(N_{p}=1689\) particles in the domain. This results in a large particle Stokes number, \(St_{\eta,0}=(1/18)(\rho_{p}/\rho_{f})(D_{p}/\eta_{0})^{2}=100\). Particle dynamics is based only on the drag force modeled using the standard Schiller-Naumann drag correlation. Keeping all the above parameters the same, the grid is systematically refined (\(\Delta=2\pi/96\), \(2\pi/144\), \(2\pi/192\), and \(2\pi/256\)), and simulations are carried out for the four different interpolation kernels described before. Thus, \(D_{p}/\Delta\) ranges between 1 and 2.66 from the coarsest to the finest resolution. Following Mehrabadi et al. (2018), particles are injected at random positions with the initialization of the flow field using the Pope spectrum, and the particle velocity is set equal to the fluid velocity interpolated to the particle location using trilinear interpolation. For all cases, the magnitude of the velocity derivative skewness rapidly approaches a value of around 0.5 in about 0.1 turnover times.
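As a reference for how the initial fields are generated, the model spectrum of Eqs. (13)-(15) can be evaluated directly; the short sketch below (the function name is illustrative) uses the constants quoted above (\(C=1.5\), \(p_{0}=2.0\), \(\beta=5.2\)) together with the dissipation rate and fitted constants \(c_{L}\) and \(c_{\eta}\) from Table 2 for the baseline \(96^{3}\) validation grid, with the large-eddy length scale taken here, for illustration only, as the domain size.

```python
import numpy as np

def pope_spectrum(kappa, eps, L, eta, c_L, c_eta, C=1.5, p0=2.0, beta=5.2):
    """Model energy spectrum E(kappa) of Eqs. (13)-(15) (Pope, 2000)."""
    f_L = (kappa * L / np.sqrt((kappa * L)**2 + c_L))**(5.0 / 3.0 + p0)
    f_eta = np.exp(-beta * (((kappa * eta)**4 + c_eta**4)**0.25 - c_eta))
    return C * eps**(2.0 / 3.0) * kappa**(-5.0 / 3.0) * f_L * f_eta

# Baseline validation grid: 96^3 in a box of size 2*pi, so resolved wavenumbers are 1..48
kappa = np.arange(1.0, 49.0)
E = pope_spectrum(kappa, eps=0.4169, L=2.0 * np.pi, eta=2.0 * np.pi / 96.0,
                  c_L=0.415, c_eta=0.4)
k_resolved = np.trapz(E, kappa)   # estimate of the resolved kinetic energy (cf. k_{f,0} in Table 2)
```

A divergence-free random velocity field whose modes are rescaled to match this spectrum then provides the initial condition for each decaying turbulence simulation.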
Figure 2(a,b) shows the temporal evolution of particle kinetic energy normalized by the initial fluid phase kinetic energy obtained using the grid-based Roma interpolation kernel for four different grid sizes with and without any self-disturbance correction. Without any correction model, as the grid is refined, the prediction of particle kinetic energy becomes more inaccurate, a result consistent both with the stationary particle test case presented in Section 3.1 and with the conclusions of Horwitz and Mani (2020). As the grid is refined, the Roma interpolation kernel becomes narrow, adding a large reaction force in the neighborhood of the particle and creating a strong disturbance field. Since the fluid velocity is sampled using the same interpolation kernel and without any correction, the particle force is underpredicted, resulting in the slower decay of the particle kinetic energy. With the Zonal-ADR correction, however, all grid resolutions predict the temporal evolution of the particle kinetic energy very similar to the PR-DNS data. Thus, even for this grid-based, narrow interpolation kernel, the Zonal-ADR correction captures the fluid-particle interactions fairly accurately. The small mismatch compared to the PR-DNS data in Figure 2(b) may be attributed to the lack of knowledge of all the exact parameters for the initial spectrum used in PR-DNS, and the use of a collocated grid-based solver in the present work. Although the energetics of the particle-fluid interactions are captured fairly well with the correction scheme, the grid-based interpolation kernel can add large reaction force for particles larger than the grid size (\(D_{p}/\Delta>1\)) and result in unphysical two-way coupled velocity field (as was seen in the stationary particle simulation in Section 3.1). This could adversely affect particle and fluid statistics, e.g., see discussion on particle acceleration statistics in Section 3.2.2. The effect of using interpolation kernel widths proportional to the particle size is investigated next. Figure 3(a) shows the temporal evolution of normalized particle kinetic energy using the four interpolation kernels with and without the correction scheme on the \(192^{3}\) grid. Even without any correction, the kinetic energy decay rate is reasonably well captured by the Gauss1, Gauss2, and Roma-Gauss2 interpolation kernels. The kinetic energy is slightly overpredicted without correction. This is because the fluid velocity at the particle location contains the self-disturbance and results in a smaller particle force, similar to the stationary particle case. With correction, however, the results follow the PR-DNS study reasonably well for all kernels, with Gauss2 being closest to the PR-DNS, whereas the grid-based Roma kernel gives a larger deviation. Table 3 documents the particle kinetic energy at different times with and without the correction model for different interpolation kernels compared to the PR-DNS study. Also shown is the PP-DNS study conducted by Mehrabadi et al. (2018), using the correction scheme developed by Horwitz and Mani (2018), and predictions from the E&H correction scheme (Esmaily and Horwitz, 2018) with trilinear interpolation implemented in the present solver. Figure 3(b) shows the temporal evolution of the net dissipation rate normalized by the initial dissipation rate for the various interpolation kernels with and without correction. As shown by Sundaram and Collins (1996) and Mehrabadi et al. 
(2018), the evolution equation for the mixture kinetic energy of the system, \(e_{m}=(1-\phi)\rho_{f}k^{(f)}+\phi\rho_{p}k^{(p)}\), is \[\frac{de_{m}}{dt}=\underbrace{(1-\phi)\frac{1}{V}\int_{V}\mu\mathbf{u}_{f}\nabla^{2}\mathbf{u}_{f}dV}_{-e^{(f)}}+\underbrace{\frac{1}{V}\sum_{i=1}^{N_{p}}F_{i}\cdot(u_{p,i})}_{\Pi^{(p)}}-\underbrace{(1-\phi)\frac{1}{V}\sum_{i=1}^{N_{p}}F_{i}\cdot(u_{f@p,i})}_{\Pi^{(f)}}, \tag{16}\] where \(F_{i}\) is the force acting on a particle, \(-e^{(f)}\) is the kinetic energy dissipation rate resolved on the grid, \(\Pi^{(p)}\) is the particle kinetic energy dissipation rate, and \(\Pi^{(f)}\) is the interphase kinetic energy transfer term between the particle and fluid. Here, \(-e^{(*)}=\Pi^{(p)}-\Pi^{(f)}\) represents the additional dissipation near the particle surfaces (Sundaram & Collins, 1996). In the present work, the net dissipation rate [\(e^{(net)}=-e^{(f)}+\Pi^{(p)}-\Pi^{(f)}\)] is computed from the rate of change of the fluid and particle kinetic energy [left-hand side of Eq. (16)] for numerical accuracy reasons.
\begin{table}
\begin{tabular}{|c|c|c|c|c c c|c c c|}
\hline
\multirow{2}{*}{\(te_{0}^{(f)}/k_{0}^{(f)}\)} & \multirow{2}{*}{PR-DNS} & PP-DNS & E\&H & \multicolumn{3}{c|}{No Correction} & \multicolumn{3}{c|}{Zonal-ADR} \\
 & & (Trilinear) & (Trilinear) & Roma & Gauss1 & Gauss2 & Roma & Gauss1 & Gauss2 \\
\hline
0.54 & 0.9303 & 0.9267 & 0.9246 & 0.9645 & 0.9410 & 0.9351 & 0.9236 & 0.9283 & 0.9279 \\
2.7 & 0.5172 & 0.4732 & 0.4701 & 0.7028 & 0.5562 & 0.5272 & 0.4693 & 0.4977 & 0.5010 \\
4.87 & 0.2855 & 0.2423 & 0.2453 & 0.5141 & 0.3331 & 0.3004 & 0.2483 & 0.2670 & 0.2743 \\
6.55 & 0.1821 & - & 0.1562 & 0.4081 & 0.2302 & 0.1999 & 0.1564 & 0.1714 & 0.1754 \\
\hline
\end{tabular}
\end{table}
Table 3: Normalized particle kinetic energy (\(k^{(p)}/k_{0}^{(f)}\)) at different times predicted with and without the Zonal-ADR model with different interpolation kernels (Gauss1: \(\sigma/D_{p}=0.8\); Gauss2: \(\sigma/D_{p}=1.5\)), compared against the PR-DNS and PP-DNS of Mehrabadi et al. (2018). E\&H denotes the implementation of the correction scheme of Esmaily & Horwitz (2018) in the present solver.
Figure 3: Temporal evolution of (a) normalized particle kinetic energy and (b) net dissipation rate using the Zonal-ADR model (solid lines) and no model (dashed lines) with the Roma, Gauss1, Gauss2, and Roma-Gauss2 interpolation kernels on the \(192^{3}\) grid. The PR-DNS data of Mehrabadi et al. (2018) and predictions using the Esmaily & Horwitz (2018) correction scheme with trilinear interpolation implemented in the present solver are also shown for comparison. The Gauss1 and Gauss2 lines for Zonal-ADR nearly overlap.
Without correction, the grid-based Roma interpolation significantly underpredicts the net dissipation rate. However, particle size-based interpolation kernels capture the trends of the PR-DNS data even without correction. The peak in the dissipation rate, created mainly by the no-slip conditions and resultant flow disturbance in PR-DNS, is also well captured by the point-particle model with correction. The slight overprediction in peak compared to the PR-DNS is attributed to not having the same exact initial conditions as in the PR-DNS.
This is also confirmed by implementing a different correction scheme (Esmaily & Horwitz, 2018), which uses trilinear interpolation, into the same flow solver, which shows a similar peak in the dissipation rate with correction. Figure 4 shows the contributions of the resolved dissipation rate and interphase energy transfer [\(-e^{(f)}-\Pi^{(f)}\)] and the particle kinetic energy dissipation rate to the net dissipation rate for two interpolation kernels (Gauss1 and Roma-Gauss2) with and without correction. All parts of the dissipation rate are well captured by the particle-based interpolation kernel, even with the asymmetric Roma-Gauss2 interpolation. This suggests that sampling the fluid velocity from a region less perturbed by the self-disturbance of the particle compensates for the errors introduced by nonsymmetric interpolation in the kinetic energy conservation, especially when the particles are on the order of the grid size.
#### 3.2.2 Higher Reynolds number case
The prior cases allowed testing of Hypotheses 1 and 2 against a simple, uniform flow over a stationary particle as well as decaying isotropic turbulence laden with Kolmogorov-scale particles for which PR-DNS data are available. However, due to the low Reynolds number and the constraint of keeping the particle size equal to the Kolmogorov scale, those results were obtained with a small number of particles. In this section, Hypothesis 1 is tested at a higher Reynolds number, \(Re_{\lambda,0}\approx 52\), for two different Stokes numbers and a larger number of particles, while maintaining low volume loading. This Reynolds number is still somewhat limited by the desire to model Kolmogorov-scale particles and maintain a reasonable number of grid points under grid refinement. Test cases were designed to sufficiently resolve the small scales, ensuring \(k_{max}\eta>1\), where \(k_{max}=\pi/\Delta\) (Balachandar & Maxey, 1989; Yeung & Pope, 1988). In particular, the most restrictive, i.e., coarsest, case has \(k_{max}\eta=1.5\). The higher Reynolds number allows many more Kolmogorov-scale particles while maintaining low volume loading. Under grid refinement, a wide range of particle size to grid size ratios (\(D_{p}/\Delta\)) is studied to test Hypothesis 1. A summary of the simulations in this section is given in Table 2. The domain is once again a triply periodic cube of side length \(2\pi\), with initial flow conditions for each case coming from a divergence-free random field sampled from Pope's model energy spectrum (Pope, 2000). Five grid resolutions are chosen to span a range of particle-to-grid size ratios. The particle size is chosen to equal the initial Kolmogorov scale (\(D_{p}=\eta_{0}\)), which is held constant across all five grid refinements (\(\Delta=2\pi/128\), \(2\pi/192\), \(2\pi/256\), \(2\pi/384\), and \(2\pi/512\)), resulting in a particle-to-grid size ratio (\(D_{p}/\Delta\)) that spans the range 0.477-1.91. Two different particle-to-fluid density ratios are tested (\(\rho_{p}/\rho_{f}=180\) and 1800) in order to achieve particle Stokes numbers of \(St_{\eta,0}=10\) and 100, respectively. In all cases, a volume fraction of \(\phi=0.001\) is used, resulting in mass loadings of \(\phi_{m}=0.18\) and 1.8, respectively, for the two Stokes numbers, giving a total of \(N_{p}=36796\) particles in the domain. Again, particle dynamics is based only on the drag force modeled using the standard Schiller-Naumann drag correlation. In the lower Reynolds number cases, validating against the PR-DNS of Mehrabadi et al.
(2018), the Gauss1 and Roma-Gauss2 cases provide similar results and both compare favorably to the PR-DNS results when combined with the Zonal-ADR correction. The wider Gauss2 kernel produced statistics slightly closer to the PR-DNS, especially in the uncorrected case, due to its wide region of influence, but the differences in the corrected results are small. Due to its wider stencil, the Gauss2 kernel requires more neighboring cells for larger particles, increasing the computational cost. Thus, opting for both consistency between the E2L and L2E interpolation kernels (which results in energy conservation) and a balance between accuracy and computational efficiency, the higher Reynolds number cases are only run for the grid-based Roma interpolation kernel and the particle size-based Gauss1 interpolation kernel.
Figure 4: Temporal evolution of the contributions of the resolved fluid dissipation rate (\(\epsilon^{(f)}\)), the particle dissipation rate (\(\Pi^{(p)}\)), and the interphase energy transfer (\(-\Pi^{(f)}\)) to the net normalized dissipation rate (\(\epsilon^{(net)}/\epsilon_{0}^{(f)}\)) using the Zonal-ADR correction (solid lines) and no model (dashed lines) with the Gauss1 and Roma-Gauss2 interpolation kernels. The PR-DNS data of Mehrabadi et al. (2018) are also shown for comparison.
Particles are initially injected at random positions and the initial particle velocity is set equal to the fluid velocity interpolated to the particle location using trilinear interpolation. Similar to the low Reynolds number cases, particles are injected at the start of decay. The magnitude of the velocity derivative skewness of these cases rapidly approaches a value of around 0.5 in about 0.1 turnover times. The general results for the \(St=10\) case follow those from the \(St=100\) case. In order to keep the same number of particles and volume loading between cases, the mass loading decreases to 0.18 for the \(St=10\) case, resulting in the particle phase having a much weaker overall effect on the fluid phase. For this reason, some of the differences between cases are smaller; however, the main patterns remain consistent with the \(St=100\) case, and hence some of the results for the \(St=10\) case have been omitted for brevity. Figure 5(a,b) shows the temporal evolution of fluid kinetic energy for \(St=100\), normalized by its initial value, for the corrected and uncorrected cases calculated using the (i) grid-based Roma and (ii) particle size-based Gauss1 interpolation kernels for all five grid refinements. The self-similar Kolmogorov decay rate, \(t^{-10/7}\) (Kolmogorov, 1941), is shown as a reference against the corresponding decay rate of the unladen case. The fluid kinetic energy calculated with the uncorrected, grid-based Roma interpolation kernel diverges as the grid is refined, with the rate of decay decreasing as the grid size is decreased. By employing the Zonal-ADR correction scheme with Roma interpolation, the fluid kinetic energy is much more grid-converged, with very slight differences possibly due to slight differences in the initial fields for the different grid resolutions. When the same flow realizations are instead run with the particle size-based Gauss1 interpolation kernel, even the uncorrected cases exhibit grid convergence across all resolutions. Again, there are slight variations at later times, but not as definitive as in the corresponding grid-based Roma interpolation.
In general, even with the Gauss1 interpolation scheme, the uncorrected cases result in slightly less fluid kinetic energy decay than their Zonal-ADR corrected counterparts, although since the volume loading in these simulations is fairly dilute, the differences in the observed fluid kinetic energy are small. The effect of correcting for the self-disturbance field created by a particle is much more profound on the particle statistics themselves, and could be expected to have more effect on fluid statistics at heavier volume loadings than studied here.
Figure 5: Temporal evolution of normalized fluid kinetic energy decay in the \(Re_{\lambda,0}=52\), \(St=100\) case using the (a) grid-based Roma interpolation kernel and (b) particle size-based Gauss1 interpolation kernel. Solid lines represent cases with the Zonal-ADR model while dashed lines represent the uncorrected cases. Different colors/symbols represent different grid sizes: \(D_{p}/\Delta=0.477\), 0.716, 0.955, 1.432, and 1.91. Kolmogorov's self-similar decay rate, \(t^{-10/7}\), is shown as a reference for the unladen case.
Similarly, figure 6(a,b) shows the temporal evolution of the particle kinetic energy for \(St=100\), normalized by its initial value, for both the Zonal-ADR corrected and uncorrected cases calculated using the (a) grid-based Roma and (b) particle size-based Gauss1 interpolation kernels for different grid sizes. These results are consistent with the lower Reynolds number cases in Section 3.2.1. As explained in the prior section, in the absence of any self-disturbance correction, decreasing the grid size relative to the particle size, in conjunction with a compact, grid-size-based interpolation kernel, results in the underprediction of forces due to the highly localized sampling of the largely disturbed fluid velocity. As the grid is refined, the interpolation kernel becomes even smaller relative to the particle; hence this underprediction of the force is magnified with each grid refinement, resulting in particle, and to a lesser extent fluid, statistics that diverge with grid refinement. Adding the Zonal-ADR correction to the grid-based interpolation kernel model leads to grid-converged predictions of particle kinetic energy for all grid resolutions. Small variations can still be seen in the corrected results for some grid sizes, but these differences are small, especially considering they are seen on a semi-log scale. With the particle size-based Gauss1 filter, grid-converged statistics are observed for both the Zonal-ADR corrected and the uncorrected cases. However, even though the uncorrected cases show grid convergence, the converged value differs from the converged value obtained with the Zonal-ADR correction scheme. Without any correction, there is still a slight underprediction of the force on the particles based on the undisturbed fluid velocity, which results in this difference. Such a difference is small and only appears at later times. The effect of the Zonal-ADR correction and choice of interpolation kernel across grid refinement on the net dissipation rate is examined next. As previously discussed, due to the nature of point-particle modeling, the net dissipation is not fully captured as resolved fluid dissipation. Instead, as with the lower Reynolds number cases, the net dissipation rate is calculated as the rate of change of the mixture kinetic
energy (left-hand side of Eq. [16]) (Mehrabadi et al., 2018; Horwitz and Mani, 2020).
Figure 6: Temporal evolution of normalized particle kinetic energy decay in the \(Re_{\lambda,0}=52\), \(St=100\) case using the (a) grid-based Roma interpolation kernel and (b) particle size-based Gauss1 interpolation kernel. Solid lines represent cases with the Zonal-ADR model while dashed lines represent the uncorrected cases. Different colors/symbols represent different grid sizes: \(D_{p}/\Delta=0.477\), 0.716, 0.955, 1.432, and 1.91.
Figure 7(a,b) shows the net dissipation rate in the \(St=100\) cases, normalized by its initial value, for both Zonal-ADR corrected and uncorrected runs calculated using the (i) grid-based Roma and (ii) particle size-based Gauss1 interpolation kernels for different grid sizes. Once again, the results diverge with grid refinement when using an uncorrected model with the grid-based Roma interpolation kernel. Here, increasing the resolution relative to the particle size results in a lower net dissipation rate in the uncorrected Roma cases, with significant underpredictions at the higher grid resolutions. Retaining the Roma interpolation kernel but employing the Zonal-ADR correction, the results are more grid-converged than the uncorrected cases, but now the two highest resolution cases seem to predict slightly larger peak net dissipation than the other cases. As shown in the stationary particle case, when the grid size is smaller than the particle size and a grid-based Roma kernel is used, the two-way coupled velocity field at the particle location can be unphysical, even with a correction scheme. This may contribute to the larger peaks in the net dissipation rate. Switching to the particle size-based Gauss1 interpolation kernel with the Zonal-ADR correction, however, the two-way coupled velocity field is more physical, as also shown in the stationary sphere case. This seems to eliminate this difference, and similar predictions are seen across all grid resolutions, indicating that the corrected grid-based Roma model for the two highest-resolution cases was overpredicting net dissipation. Similar to the previous results, using the Gauss1 kernel with no correction model also seems to produce grid-converged predictions of the net dissipation rate; however, they are consistently, though slightly, lower than the net dissipation predicted when using the Zonal-ADR correction. Figure 7(c,d) shows the same plots for the \(St=10\) case. The main patterns remain consistent, but the differences are much smaller due to the low mass loading in these cases. One feature that is not present in this lower Stokes case is the deviation of the two highest resolution cases in the Zonal-ADR corrected, Roma interpolation kernel case (Figure 7[c]). The poorer performance of the Roma kernel when particles are larger than the grid may have been mitigated due to the smaller effect of particles in general for this case. As previously seen, the effect of correcting for particle self-disturbance fields, especially at the volume loading studied here, is more clearly evident in particle statistics than it is in fluid statistics alone.
To that end, another important particle statistic that can be examined is the particle acceleration, as it is directly related to the particle force, which depends on the force closures and hence on the undisturbed fluid velocity entering the slip velocity between the particle and its surrounding fluid. Figure 8(a,b) shows the particle root-mean-square (rms) acceleration in the \(St=100\) cases, normalized by the initial Kolmogorov acceleration of the corresponding unladen case, for both Zonal-ADR corrected and uncorrected runs calculated using the (i) grid-based Roma and (ii) particle size-based Gauss1 interpolation kernels for different grid sizes. These particle acceleration statistics clearly highlight the effect of the correction and of the interpolation kernel choice, potentially because acceleration involves the rate of change of velocity, whereas kinetic energy is more of an integral quantity. A clear divergence in particle rms acceleration with decreasing grid size is seen in the uncorrected, grid-based Roma interpolation kernel results. Similar to the net dissipation results, the two highest resolutions overpredict particle acceleration in the Zonal-ADR, grid-based Roma case, while the other three resolutions produce near-identical results. Notably, these two high resolutions, which still deviate after using the Zonal-ADR correction with the Roma kernel, are the two cases in which the particles are strictly larger than the grid (\(D_{p}/\Delta=1.432\) and \(1.91\)). As mentioned earlier, this again can be attributed to the overprediction of the force on the particle, just as in the stationary particle case with the Roma kernel, wherein the overprediction of the force results in a negative two-way coupled velocity at the particle location. The middle case (\(D_{p}/\Delta=0.955\)) shows grid-converged results with the other two coarser grids. This potentially indicates that, even with a correction scheme, using a compact, grid-based interpolation kernel can introduce errors when the particle is larger than the grid size. In contrast, when using the particle size-based Gauss1 interpolation kernel, the particle rms acceleration statistics show excellent grid convergence across all grid sizes for both the uncorrected and the Zonal-ADR corrected cases. This highlights the importance of a particle size-based interpolation kernel when particles are on the order of the grid size, and especially when larger than the grid, even after correcting for the particle self-disturbances. The impact of the Zonal-ADR correction over the uncorrected approach is still highlighted very clearly here, with the uncorrected results showing a consistent underprediction of particle rms acceleration relative to the Zonal-ADR corrected results. Figure 8(c,d) shows the same particle rms acceleration plots for the \(St=10\) cases. Here, the general patterns observed in the \(St=100\) case hold true. While the mass loading is much lower, the effect of the correction and interpolation kernel choice is still evident in the particle acceleration. The main difference here is that the lower Stokes number results in much larger accelerations than in the \(St=100\) case, but this shows that the observed results are consistent across both Stokes numbers. Particle acceleration statistics are investigated further by examining the distribution of individual particle acceleration events.
Figure 9(a-d) shows the probability density functions (PDFs) of particle acceleration magnitude for the \(Re_{\lambda,0}\approx 52\), \(St=100\) case for the (a) uncorrected model with Roma interpolation kernel, (b) uncorrected model with Gauss1 interpolation kernel, (c) Zonal-ADR corrected model with Roma interpolation kernel, and (d) Zonal-ADR corrected model with Gauss1 interpolation kernel. When the grid-based Roma interpolation kernel is used in the absence of any correction, the PDF becomes narrower with increased grid resolution. Thus, when particles increase in size relative to the grid size, there are fewer high acceleration events and, as verified in particle rms acceleration, the peak acceleration events are smaller. Using the particle size-based Gauss1 interpolation kernel with the uncorrected model results in grid-converged PDFs. In the Zonal-ADR case with Roma interpolation kernel, the correction results in grid converged PDFs for all cases except the most resolved case. Here, this case results in more high acceleration events than it should, which was also seen in the larger peak particle rms acceleration for the same case. Combining the Zonal-ADR correction with the particle size-based Gauss1 interpolation kernel results in grid converged results with a slightly wider tail than the converged PDFs of the uncorrected Gauss1 case, again indicating the importance of the Zonal-ADR correction scheme in addition to the particle size-based interpolation kernel. Figure 8: Temporal evolution of normalized particle rms acceleration in the \(Re_{\lambda,0}=52\) for \(St=100\) (top panel) and \(St=10\) (bottom panel): (a,c) grid-based Roma interpolation kernel and (b,d) particle size–based Gauss1 interpolation kernel. Solid lines represent cases with the Zonal-ADR model while dashed lines represent the uncorrected cases. Particle rms acceleration is normalized by the initial Kolmogorov acceleration of the corresponding unladen case. Different colors/symbols represent different grid sizes: (\(\bullet\)) \(D_{p}/\Delta=0.477\), (\(\blacktriangle\)) \(D_{p}/\Delta=0.716\), (\(\blacksquare\)) \(D_{p}/\Delta=0.955\), (\(\bigstar\)) \(D_{p}/\Delta=1.432\), (\(\bullet\)) \(D_{p}/\Delta=1.91\). The normalized Kolmogorov acceleration of the unladen case is included as a reference (\(\_\)). Figure 9: Particle acceleration PDFs in the \(Re_{\lambda,0}=52\), \(St=100\) case at normalized time \(t\epsilon_{0}^{(f)}/k_{0}^{(f)}\approx 0.92\) using the (a) uncorrected model with Roma interpolation kernel, (b) uncorrected model with Gauss1 interpolation kernel, (c) Zonal-ADR model with Roma interpolation kernel, and (d) Zonal-ADR model with Gauss1 interpolation kernel. Particle acceleration is the magnitude using all three components and is normalized by the initial Kolmogorov acceleration of the corresponding unladen case. Different colors/symbols represent different grid sizes: (\(\bullet\)) \(D_{p}/\Delta=0.477\), (\(\blacktriangle\)) \(D_{p}/\Delta=0.716\), (\(\blacksquare\)) \(D_{p}/\Delta=0.955\), (\(\blacktriangle\)) \(D_{p}/\Delta=1.432\), (\(\bullet\)) \(D_{p}/\Delta=1.91\).
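A companion diagnostic to the rms history is the instantaneous PDF of acceleration magnitudes shown in Figure 9. The sketch below builds such a PDF from a single snapshot with a plain density-normalized histogram; the binning, the normalization by the unladen Kolmogorov acceleration, and the variable names are again assumptions made for illustration.

```python
import numpy as np

def acceleration_pdf(acc_snapshot, eps0, nu, bins=50):
    """PDF of |a| / a_eta at one time instant (cf. Figure 9).

    acc_snapshot : ndarray of shape (n_particles, 3), particle accelerations
    """
    a_eta = (eps0**3 / nu) ** 0.25                      # unladen Kolmogorov acceleration
    a_mag = np.linalg.norm(acc_snapshot, axis=-1) / a_eta
    pdf, edges = np.histogram(a_mag, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, pdf
```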
## 4 Conclusions In several particle-laden flow computations with direct or large-eddy simulation of the fluid flow, particle sizes comparable to the grid size or the smallest resolved flow scale are commonly encountered, e.g., sprays and droplets near the injector nozzle (Moin & Apte, 2006; Apte et al., 2009), wall-bounded particle-laden flows (Ferrante & Elghobashi, 2005), or transport of suspended sediments (Finn et al., 2016). Although the point-particle approach was not originally developed for this limit, its use in such situations is common, as it allows computation of large numbers of particle trajectories. However, applying a point-particle approach when the particle size is comparable to the Kolmogorov scales requires additional considerations. These considerations include correcting for the self-disturbance field and carefully choosing the interpolation kernels for interaction forces in the two-way coupling. This work thoroughly evaluates the effect of the aforementioned considerations on the accuracy of the point-particle models in the limit of \(D_{p}/\Delta\sim\mathcal{O}(1)\). It is shown that the grid-based interpolation kernels, which vary based on the local grid resolution, irrespective of the particle size, significantly underpredict the interaction force and hence the decay rates of particle kinetic energy and the magnitude of particle acceleration, especially when no model is used for the self-disturbance correction. Furthermore, as these grid-based kernels only depend on grid size, a clear divergence in fluid and particle statistics is observed with grid refinement in the absence of any particle self-disturbance correction. Particle size-based kernels, wherein the kernel widths scale with the particle size, can better capture these interactions even without any correction model, producing grid-converged results. As the grid is refined and the particle size becomes larger than the grid size, a kernel width proportional to particle size keeps the region of particle-fluid interaction unchanged and allows sampling of the fluid parameters from a region that is less affected by the self-disturbance field. While convergent under grid refinement, the particle kinetic energy decay rate, net dissipation rate, and magnitude of particle rms acceleration are slightly underpredicted with no self-disturbance correction model. With correction for self-disturbance, both the grid-based and the particle size-based kernels provide grid-converged results. Therefore, while the use of a particle size-based interpolation kernel may mitigate some of the errors resulting from the use of the two-way-coupled disturbed fluid velocity in the force closures, there is still a need for self-disturbance correction for particles comparable to the grid size. However, when particles become larger than the grid size, the grid-based kernels distribute the force computed based on the undisturbed fluid velocity in a narrow region, resulting in a locally large value of the force, even though the global value is conserved. This increased force can result in an unphysical two-way-coupled velocity at the particle location with the grid-based kernel. This is also clearly observed in the stationary particle case with the grid-based Roma kernel for particles larger than the grid and with correction for the self-disturbance. Such an effect was shown to be absent with the particle size-based kernel, even for particles larger than the grid size.
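The practical difference between the two kernel-width conventions discussed above can be seen in a toy 1-D sketch: a grid-based kernel narrows as the grid is refined and concentrates the coupling force onto fewer and fewer cells, while a particle size-based kernel keeps the interaction region tied to \(D_{p}\). The Gaussian form and the specific width factors below are assumptions made purely for illustration; they are not the Roma or Gauss1 kernels used in the simulations.

```python
import numpy as np

def gaussian_weights(x_nodes, xp, sigma):
    """Normalized Gaussian weights centered on the particle position xp."""
    w = np.exp(-0.5 * ((x_nodes - xp) / sigma) ** 2)
    return w / w.sum()

Dp = 0.4                                  # particle diameter (arbitrary units)
for Delta in (0.5, 0.25, 0.125):          # successively refined grids
    x_nodes = np.arange(-2.0, 2.0 + Delta, Delta)
    w_grid = gaussian_weights(x_nodes, 0.0, sigma=0.5 * Delta)  # grid-based width (assumed factor)
    w_part = gaussian_weights(x_nodes, 0.0, sigma=0.5 * Dp)     # particle size-based width (assumed factor)
    print(f"Delta={Delta:.3f}: peak weight grid-based={w_grid.max():.2f}, "
          f"particle-based={w_part.max():.2f}")
```

Refining the grid drives the grid-based peak weight toward unity, i.e., the force is deposited into essentially a single cell once \(D_{p}>\Delta\), whereas the particle size-based weights remain spread over the same physical region; this mirrors the locally large force and the unphysical two-way-coupled velocity described above.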
These results suggest that a particle size-based interpolation kernel should be used in conjunction with a correction model for the self-disturbance field to obtain the best results. With particle size-based kernels, the predictions without self-disturbance corrections are grid converged, but the turbulent kinetic energy and dissipation rate are slightly underpredicted. Implementing the particle size-based interpolation kernel requires knowledge of several neighboring control volumes, beyond the nearest neighbors surrounding the particle, especially if the particle size is comparable to the grid size. For complex unstructured grids and parallel computing, this can lead to increased memory and computational time. In such cases, grid-based kernels that depend only on nearest neighbors are a practical option, provided a self-disturbance correction model is used to obtain consistent and grid-converged results. This is especially important for any application which involves particle-laden flow near boundaries, as computational grids are refined in the wall-normal direction to resolve the flow and thus particles can range from being subgrid to larger than the grid size within the same domain. ### Acknowledgments Computing time on TACC's Frontera and Purdue's Anvil is appreciated. N. K. and S. V. A. acknowledge NSF award #1851389. S. S. J. acknowledges funding support from Boeing Co. The authors thank Jeremy Horwitz, Mohammad Mehrabadi, and Shankar Subramaniam for the PR-DNS data. A preliminary version of this work has been published (Apte et al., 2022) as the Center for Turbulence Research Proceedings of the Summer Program (CTR-PSP) and is available online.
2310.00312
Kerr-Vaidya type radiating black holes in semi-classical gravity with conformal anomaly
Static black holes in the conformal anomaly-sourced semi-classical General Relativity in four dimensions were recently extended to rotating, stationary solutions. These quantum-corrected black holes show different features compared to the Kerr black hole and call for further extensions. Here we remove the condition of stationarity and find radiating (Kerr-Vaidya-type) solutions in the same theory augmented with a cosmological constant. As long as the coupling constant $\alpha$ of the $A$-type trace anomaly is non-zero, we show that $i)$ the cosmological constant is bounded from above, i.e. $\Lambda \le \frac{3}{4 \alpha}$; $ii)$ static black holes exist but they may not be unique; $iii)$ static black holes do not satisfy the second law of black hole thermodynamics; $iv)$ static black holes may have unstable inner horizons; $v)$ in the nonstationary and axially symmetric case, the stability of the event horizon and the second law of black hole thermodynamics are problematic.
Metin Gurses, Bayram Tekin
2023-09-30T08:53:31Z
http://arxiv.org/abs/2310.00312v2
# Kerr-Vaidya type radiating black holes in semi-classical gravity with conformal anomaly ###### Abstract Static black holes in the conformal anomaly-sourced semi-classical General Relativity in four dimensions were recently extended to rotating, stationary solutions. These quantum-corrected black holes show different features compared to the Kerr black hole and call for further extensions. Here we remove the condition of stationarity and find radiating (Kerr-Vaidya-type) solutions in the same theory augmented with a cosmological constant. As long as the coupling constant \(\alpha\) of the \(A\)-type trace anomaly is non-zero, we show that \(i)\) the cosmological constant is bounded from above, i.e. \(\Lambda\leq\frac{3}{4\alpha}\); \(ii)\) static black holes exist but they may not be unique; \(iii)\) static black holes do not satisfy the second law of black hole thermodynamics; \(iv)\) static black holes may have unstable inner horizons; \(v)\) in the nonstationary and axially symmetric case, the stability of the event horizon and the second law of black hole thermodynamics are problematic. ## I Introduction In the absence of a consistent framework for quantum gravity, a semi-classical approach to General Relativity (GR) (where matter fields are taken to be quantum fields, while the geometry is kept classical) has borne much fruit since the early 1970s, especially within the context of black holes, which amplify quantum effects: Hawking radiation [1] removes the utter dullness in the lives of stationary and static black holes of classical GR and makes them evaporate and shrink. One particular semi-classical approximation is built on the conformal anomaly, for which one has the full knowledge of the trace of the expectation value of the energy-momentum tensor operator within any quantum state describing classically conformally invariant fields. Recently, Fernandes [2] found stationary and axially-symmetric rotating black hole solutions in GR, without a cosmological constant, but with a source that comes from the trace anomaly induced by the 1-loop effects of the quantum fields within the semi-classical approximation. These solutions (of which uniqueness is not yet known) demonstrate various novel features in contrast to their vacuum GR limits, such as the violation of the Kerr bound and non-symmetric event horizons. The solutions given in [2] generalize the earlier static and spherically symmetric black hole solutions in the same theory given by Cai _et al._[3], where an important stumbling block in finding the solutions of the anomaly-sourced theory (to be explained below) was also circumvented. In [4], a negative cosmological constant was introduced, and the static, spherically symmetric solutions were found and their thermodynamics was studied. The current state of the trace-anomaly sourced semi-classical GR was nicely described in [2] and we invite the interested reader to refer to that work, but here let us briefly describe the theory. It is well-known that even if one starts from a classically conformally invariant theory (say a theory of massless fields conformally coupled to gravity in four dimensions), the symmetry does not generally survive quantization (more properly regularization) at the one-loop level [5]. The Weyl-scaling invariance of the metric is lost, and this shows itself in the non-zero trace of the expectation value of the energy-momentum tensor (in any quantum state).
Even without a detailed knowledge of the massless quantum fields coupled to gravity, we know that in four dimensions, the trace anomaly is expressed purely in terms of the curvature invariants of the background spacetime with the metric \(g_{\mu\nu}\) as \[\langle\psi|T|\psi\rangle:=g^{\mu\nu}\langle\psi|T_{\mu\nu}|\psi\rangle=\frac{\beta}{2}C_{\mu\nu\sigma\rho}C^{\mu\nu\sigma\rho}-\frac{\alpha}{2}{\cal G}, \tag{1}\] where \(C_{\mu\nu\sigma\rho}\) are the components of the Weyl tensor in some coordinates, and \({\cal G}\) is the Gauss-Bonnet scalar defined in terms of the Riemann, Ricci tensors and the scalar curvature as \[{\cal G}:=R_{\mu\nu\sigma\rho}R^{\mu\nu\sigma\rho}-4R_{\mu\nu}R^{\mu\nu}+R^{2}. \tag{2}\] In (1), the constants \(\alpha\) and \(\beta\) are the only inputs coming from the underlying conformal field theory, and are known explicitly in terms of the number of massless fields [6]. The fact that the right-hand side of (1) does not have information about the quantum fields (except their number, just mentioned) is a blessing: one can study the backreaction of these fields (in a semi-classical setting of course) on the geometry they happen to live in. But it is clear that without the full energy-momentum tensor, one can find only the solutions to the trace of the field equations. That trace equation is only a _necessary_ condition, but not a _sufficient_ one in general. That means the general solution to the trace of the field equations will involve the correct solution as a subclass, but it probably will be further restricted by the full theory. This is the apparent impasse in considering the anomaly-sourced field equations, and generically there is no known solution yet: one must compute the full tensor \(\langle\psi|T_{\mu\nu}|\psi\rangle\), which does involve details of the quantum fields. So even at the semi-classical level, it is very hard to compute the backreaction of the quantum corrections to a generic background metric \(g_{\mu\nu}\). Furthermore, as a second issue, one does need good reasons to calculate such corrections to the classical background. The main question is the following: can there be macroscopic effects of these corrections, for example in a black hole geometry where strong gravity amplifies apparently small effects? This second issue was settled in the affirmative in [7]: the quantum conformal anomaly can have macroscopic effects. Let us also note that in a recent work [8], extremal black holes were shown to amplify quantum effects, generically, not just the ones coming from the conformal anomaly. Therefore, there is ample reason to study the backreaction of the conformal anomaly in a black hole background. Regarding the first issue, that of the full energy-momentum tensor, a tentative but very useful resolution is the following: assume some symmetry in the background geometry together with some simplifying assumptions, such as staticity, spherical symmetry, stationarity, _etc._, to fix the total energy-momentum tensor. (For this discussion see the relevant literature in [2]). In this work, we remove the important assumption of _stationarity_, that is, we assume that the spacetime does not have a time-like Killing vector field, and this leads to a generalization of the rotating stationary black holes of Fernandes [2], and the static black holes of Cai _et al._[3; 4] to dynamical black holes with radiation either emitted by the black hole or absorbed by it. We also include a cosmological constant generalizing the solution in [4].
As we shall discuss, our solution describes a quantum-corrected version of the spherically symmetric and rotating Vaidya-type radiating solution [9]. The layout of the paper is as follows: In Section II, we discuss the non-static spherically symmetric solution, that is the conformal anomaly sourced (radiating, or radiation absorbing) Vaidya metric, In Section III we discuss the rotating version, and we delegate the rather long expression of the full energy-momentum tensor to the Appendix. ## II Spherically symmetric radiating solution We consider the conformal anomaly-sourced cosmological Einstein gravity (in the units \(8\pi G=1=c\)) as our semi-classical field equations \[R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R+\Lambda g_{\mu\nu}=\langle\psi|T_{\mu\nu}| \psi\rangle, \tag{3}\] together with the usual covariant conservation equation \(\nabla^{\mu}\langle\psi|T_{\mu\nu}|\psi\rangle=0\) that comes from the requirement that diffeomorphism invariance survives regularization. The field equations are augmented with the trace anomaly equation (1). Then the trace of (3) is a single constraint on the geometry of the underlying spacetime: \[4\Lambda-R-\frac{\beta}{2}C_{\mu\nu\sigma\rho}C^{\mu\nu\sigma\rho}+\frac{ \alpha}{2}\mathcal{G}=0. \tag{4}\] Let us note again that the constants \(\alpha,\beta\) contain information about the quantum fields, but we do not need their explicit forms here. We first consider a spherically symmetric, but non-static metric \[ds^{2}=-\left(1-2m(v,r)\right)dv^{2}+2\epsilon dvdr+r^{2}\left(d\theta^{2}+ \sin^{2}\theta d\phi^{2}\right), \tag{5}\] where \(v\) is the retarded/advanced null coordinate. For the special case of \(m(v,r)=m(v)/r\), and \(\epsilon=-1\), the metric describes a Vaidya black hole [9] emitting radiation, while for \(\epsilon=1\) the radiation is absorbed by the black hole; both of these cases are quite relevant as they describe dynamical black holes with Hawking radiation; or light/ultrarelativistic dust accretion, respectively. Then the Ricci scalar, Gauss-Bonnet combination, and the square of the Weyl tensor, which are independent of the sign of \(\epsilon\), can be found to be \[R=2\frac{\left(r^{2}m\right)^{\prime\prime}}{r^{2}},\qquad\mathcal{G}=\frac{8 \left(m^{2}\right)^{\prime\prime}}{r^{2}},\qquad C_{\mu\nu\sigma\rho}C^{\mu \nu\sigma\rho}=\frac{4r^{2}}{3}\left(\left(\frac{m}{r}\right)^{{}^{\prime \prime}}\right)^{2}, \tag{6}\] where \(m^{\prime}=\partial_{r}m(v,r)\). Observe that no derivative with respect to the null coordinate appears in these curvature invariants, even though they appear in the curvature components. For the Vaidya black hole case, that is when \(m(v,r)=m(v)/r\), one has a null-dust source. \[G_{\mu\nu}=T_{\mu\nu}=\frac{2\epsilon}{r^{2}}\partial_{v}m(v)\delta^{v}_{\mu} \delta^{v}_{\nu},\qquad\mbox{ for the Vaidya black hole}. \tag{7}\] On the other hand, generically, for (5), one has a non-null energy-momentum tensor. \[{G^{\mu}}_{\nu}=-\frac{2}{r^{2}}\left(\begin{array}{cccc}\partial_{r}(rm)&0&0&0 \\ -r\partial_{v}m&\partial_{r}(rm)&0&0\\ 0&0&r\partial_{r}(r\partial_{r}m)&0\\ 0&0&0&r\partial_{r}(r\partial_{r}m)\end{array}\right). \tag{8}\] The trace equation (4) yields a single non-linear ODE: \[4\Lambda-2\frac{(r^{2}m)^{\prime\prime}}{r^{2}}-\frac{2\beta r^{2}}{3}\left( \left(\frac{m}{r}\right)^{{}^{\prime\prime}}\right)^{2}+4\alpha\frac{(m^{2})^ {\prime\prime}}{r^{2}}=0. 
\tag{9}\] For \(\beta\neq 0\), an exact analytical solution is not available,1 so as in [2; 3; 4], we set \(\beta=0\) (that is the vanishing of the \(B\)-type anomaly) and consider only the \(A\)-type anomaly case with \(\alpha\neq 0\). Then, (9) becomes Footnote 1: See [10] for a recent numerical approach to this problem. \[2\Lambda r^{2}-\left(r^{2}m\right)^{\prime\prime}+2\alpha\left(m^{2}\right)^ {\prime\prime}=0, \tag{10}\] which gives a quadratic equation for \(m(v,r)\) \[2\alpha m^{2}-r^{2}m+\frac{\Lambda}{6}r^{4}=p(v)r+q(v), \tag{11}\] where \(p\) and \(q\) are arbitrary differentiable functions of the null coordinate \(v\). The general solution, for \(\alpha\neq 0\), is \[m(v,r)=\frac{r^{2}}{4\alpha}\pm\frac{r^{2}}{4\alpha}\sqrt{1-\frac{4}{3}\Lambda \alpha+8\alpha\left(\frac{p(v)}{r^{3}}+\frac{q(v)}{r^{4}}\right)}, \tag{12}\] which is a generalization of the Cai _et al._[3] metric to the non-static case. Note that the _plus_ branch is a new solution that diverges as \(\alpha\to 0\), while the _minus_ branch smoothly goes over to the \(\alpha=0\) solution, which is \[m(v,r)=\frac{\Lambda r^{2}}{6}-\frac{p(v)}{r}-\frac{q(v)}{r^{2}},\hskip 28.452756pt \mbox{for $\alpha=0$}. \tag{13}\] The reality of \(m(v,r)\)in (12) requires that for all \(v\) and \(r\), one has \[1-\frac{4}{3}\Lambda\alpha+8\alpha\left(\frac{p(v)}{r^{3}}+\frac{q(v)}{r^{4}} \right)\geq 0. \tag{14}\] For example as \(r\rightarrow\infty\), one must have \(1-\frac{4}{3}\Lambda\alpha\geq 0\), and for \(\alpha>0\), this sets an upper bound on the cosmological constant of the de Sitter space in terms of the anomaly coefficient. Namely, it must satisfy \(\Lambda\leq\frac{3}{4\alpha}\). Asymptotically, as \(r\rightarrow\infty\), the two branches of (12) behave as \[m(v,r)\rightarrow\frac{\left(1\pm\sqrt{\mu}\right)r^{2}}{4\alpha}+\frac{p(v)}{ \sqrt{\mu}r}+\frac{q(v)}{\sqrt{\mu}r^{2}}+\mathcal{O}\left(\frac{1}{r^{3}} \right), \tag{15}\] where \(\mu=1-\frac{4\Lambda\alpha}{3}\). This asymptotic behavior shows that at a constant \(v\) coordinate, the spacetime is asymptotically a Reissner-Nordstrom-de Sitter (or anti-de Sitter) manifold, with the following identifications of the effective cosmological constant, mass and electric charge: \[\Lambda_{\rm eff}=\frac{3\left(1\pm\sqrt{\mu}\right)}{2\alpha},\quad M(v)=\frac {p(v)}{\sqrt{\mu}},\quad Q^{2}=-\frac{2q(v)}{\sqrt{\mu}}. \tag{16}\] This interpretation requires \(p(v)\geq 0\) and \(q(v)\leq 0\). The asymptotic behavior of the scalar curvature is as follows: \[R=\frac{6\left(1\pm\sqrt{\mu}\right)}{\alpha}\mp\frac{24\alpha p(v)^{2}}{\mu^ {3/2}r^{6}}+\mathcal{O}\left(\frac{1}{r^{7}}\right). \tag{17}\] Similarly, the asymptotic behavior of the Gauss-Bonnet combination reads as \[\mathcal{G}=\frac{6\left(1\pm\sqrt{\mu}\right)^{2}}{\alpha^{2}}\mp\frac{48p(v )^{2}}{\mu^{3/2}r^{6}}+\mathcal{O}\left(\frac{1}{r^{7}}\right). \tag{18}\] On the other hand, near \(r\to 0\), the behavior of the scalar curvature is \[R=\pm\frac{2\sqrt{2}q(v)}{r^{2}\sqrt{\alpha q(v)}}\pm\frac{3\sqrt{2}p(v)}{r \sqrt{\alpha q(v)}}+\frac{3\left(4\mp\frac{\sqrt{2}p(v)^{2}\sqrt{\alpha q(v)} }{q(v)^{2}}\right)}{2\alpha}+\mathcal{O}\left(r\right), \tag{19}\] while the Gauss-Bonnet combination diverges as \[\mathcal{G}=\pm\frac{4\sqrt{2}\sqrt{\alpha q(v)}}{\alpha^{2}r^{2}}\pm\frac{6 \sqrt{2}p(v)q(v)}{r(\alpha q(v))^{3/2}}+\left(\frac{6(1+\sqrt{\mu})}{\alpha^{2 }}\mp\frac{3\sqrt{2}p(v)^{2}}{(\alpha q(v))^{3/2}}\right)+\mathcal{O}\left(r \right). 
\tag{20}\] Observe that both \(R\) and \(\mathcal{G}\) require \(q(v)>0\) near \(r=0\) in contrast to the \(r\to\infty\) expansion. Let us study the event horizon of this metric defined as a null surface \(\mathcal{H}(v,r)=\) constant.2 Footnote 2: Note that, in (5), the \(2m(v,r)=1\) is not a null surface, it is not the event horizon, but it is the marginally trapped surface or the apparent horizon, see [11] for the geometry of the Vaidya spacetime. \[g^{\mu\nu}\,\partial_{\mu}\mathcal{H}\partial_{\nu}\mathcal{H}=\mathcal{H}^{ \prime}\bigg{(}(1-2m)\mathcal{H}^{\prime}+2\epsilon\partial_{v}\mathcal{H} \bigg{)}=0. \tag{21}\] For \(\mathcal{H}:=r-r_{H}(v)=0\), the location of the event horizon is given by a nonlinear first-order differential equation \[1-2m-2\epsilon\frac{d\,r_{H}}{dv}=0 \tag{22}\] which explicitly reads \[\epsilon\frac{d\,r_{H}}{dv}=\frac{1}{2}-\frac{r_{H}^{2}}{4\alpha}\mp\frac{r_{ H}^{2}}{4\alpha}\sqrt{\mu+8\alpha\left(\frac{p(v)}{r_{H}^{3}}+\frac{q(v)}{r_{H}^{4}} \right)}. \tag{23}\] The exact analytical solution of this non-linear ODE is not available in its full generality. But let us make some remarks on the solutions. 1. The theorem on the existence and uniqueness of the first-order differential equations guarantees that the above equation with the appropriate initial condition has unique a solution provided that \(p(v)\) and \(q(v)\) are continuous functions and \(\mu\,r_{H}^{4}(v)+8\alpha\left(p(v)\,r_{H}+q(v)\right)\neq 0\) for all \(v\geq 0\). 2. On the other hand, if \(\mu\,r_{H}^{4}(v)+8\alpha\left(p(v)\,r_{H}+q(v)\right)=0\) then (23) may not have unique solutions. As two examples, we have \(r_{H}=\sqrt{2\alpha}\) (which is possible if \(p\) and \(q\) are constants and one has \(q=-p\sqrt{2\alpha}-\frac{\alpha\mu}{2}\)); and as a second solution we have \[r_{H}(v)=\sqrt{2\alpha}\tanh\left(\frac{\epsilon v+v_{0}}{2\sqrt{2\alpha}} \right),\] (24) where \(v_{0}\) is an integration constant. This is possible if \[q(v)=-p(v)\sqrt{2\alpha}\tanh\left(\frac{v\epsilon+v_{0}}{2\sqrt{2\alpha}} \right)-\frac{1}{2}\alpha\mu\tanh^{4}\left(\frac{v\epsilon+v_{0}}{2\sqrt{2 \alpha}}\right).\] (25) For different values of \(\epsilon\), the positivity of \(r_{H}\) requires different intervals for \(v\). When \(\epsilon=1\) (the case of the absorption of radiation by the black hole) then \(v\in[-v_{0},\infty)\) and when \(\epsilon=-1\) (the case of the emission of radiation by the black hole) then \(v\in(-\infty,v_{0}]\). Both cases are expected since for \(\epsilon=1\), \(r_{H}\) continues to keep getting larger, while for \(\epsilon=-1\), the process must stop at some future \(v_{0}\) as \(r_{H}\) becomes zero. 3. Since the right-hand side of (23) changes sign in the interval \(v\geq 0\), this differential equation implies that the function \(r_{H}\) is decreasing or increasing with respect to \(v\) for certain intervals. Hence the horizon area \(A=4\pi r_{H}^{2}\) exhibits a similar behavior. This means that the above differential equation does not satisfy the area law of black hole mechanics, or the second law of black hole thermodynamics, i.e.; \(\frac{dA}{dv}>0\) is not valid for all \(v\geq 0\) as is expected for dynamical black holes. 4. When the right-hand side of (23) vanishes, the corresponding solutions are the critical points or the equilibrium solutions of \(r_{H}\). These solutions correspond to the static horizons of Cai _et al._[3]. Let's assume that the static horizon is located at \(r_{H}^{0}\). 
Within our formalism, we can check the important question of the linear stability of these equilibrium solutions or the stability of the static horizons. For this purpose, let \(r_{H}(v)=r_{H}^{0}+\varepsilon r_{1}(v)\), where \(r_{1}\) is a function that satisfies the linearized form of (23): \[\frac{dr_{1}}{dv}=w\,r_{1},\hskip 14.226378ptw:=-\left[\frac{r_{H}^{0}}{2\alpha}\mp\frac{1}{\sqrt{\delta}}\left(\frac{\mu}{2\alpha}(r_{H}^{0})^{3}+8\alpha p_{0}\right)\right],\] (26) where \(\delta:=\mu(r_{H}^{0})^{4}+8\alpha(p_{0}r_{H}^{0}+q_{0})\). In (26), we considered both \(p\) and \(q\) near their static values \(p_{0}\) and \(q_{0}\), respectively. Equation (26) implies that the static outer horizon (with the plus sign) is stable, but the stability of the inner horizon (with the minus sign) depends on the numerical values of the constants \(\mu\) and \(\alpha\). 5. Thermodynamics of dynamical black holes is a developing subject, and it is not easy to properly define concepts like temperature and surface gravity, even in the case of quasi-equilibrium. Therefore, we shall only note one proposal for computing the surface gravity of the solution we found above. For this, we follow [12]; see [13] for a nice review of this topic, where other proposals were also discussed. Consider a spherically symmetric metric of the form \[ds^{2}=-A^{2}(v,r)\Delta(v,r)dv^{2}+2A(v,r)dvdr+r^{2}d\Omega_{2}^{2}, \tag{27}\] and let \[\Delta(v,r):=1-2m(v,r). \tag{28}\] Then the surface gravity on the marginally trapped surface (apparent horizon), or the trapping horizon, for which \(\Delta(v,r_{H})=0\), according to [12] is given as \[\kappa:=\frac{A}{4rm(v,r)}\biggl{(}1-2r\partial_{r}m(v,r)-2m(v,r)\biggr{)}+\frac{\dot{A}}{A},\qquad\qquad\dot{A}=\partial_{v}A, \tag{29}\] which, for our metric, yields \[\kappa:=-\partial_{r}m(v,r)|_{r_{A}}\qquad\mbox{ evaluated at }\quad m(v,r_{A})=\frac{1}{2}, \tag{30}\] where \(r_{A}\) is the radius of the apparent horizon, which is equivalent to the radius of the event horizon \(r_{A}=2m\) in the Schwarzschild black hole case in this coordinate system. For the Schwarzschild black hole case, \(m(v,r)=m/r\), so that \(m(v,r_{A})=1/2\) gives \(r_{A}=2m\), and hence (30) yields the expected constant value \(\kappa=1/(4m)\), which is proportional to the Hawking temperature. But in our case, \(\kappa\) is a non-trivial function of the null coordinate \(v\). ## III Radiating Rotating Solutions Let us now consider the axially symmetric but non-stationary metric in the coordinates \((v,r,\theta,\phi)\), where \(v\) is null; for definiteness we will not introduce \(\epsilon\), but it can be easily incorporated. The generic line element under these assumptions reads \[ds^{2}=-\left(1-\frac{2rm}{\Sigma}\right)(dv-a\sin^{2}\theta d\phi)^{2}+2(dv-a\sin^{2}\theta d\phi)(dr-a\sin^{2}\theta d\phi)+\Sigma(d\theta^{2}+\sin^{2}\theta d\phi^{2}), \tag{31}\] where \(\Sigma:=r^{2}+a^{2}\cos^{2}\theta\) and \(m=m(v,r,\theta)\). Then (4) reduces to \[\frac{2}{\Sigma}\frac{\partial^{2}}{\partial r^{2}}\left(-rm+2\alpha\frac{r^{2}m^{2}\xi}{\Sigma^{3}}\right)-4\Lambda=0, \tag{32}\] with \(\xi:=r^{2}-3a^{2}\cos^{2}\theta\). As in the spherically symmetric case, (32) also gives a quadratic equation for \(m\): \[rm-2\alpha\frac{r^{2}m^{2}\xi}{\Sigma^{3}}+\Lambda\left(r^{2}\Sigma-\frac{5r^{4}}{6}\right)=pr+q, \tag{33}\] where \(p=p(v,\theta)\) and \(q=q(v,\theta)\) are arbitrary functions of \(v\) and \(\theta\).
The solutions are \[m(v,r,\theta)=\frac{\Sigma^{3}}{4\alpha\xi r}\left(1\pm\sqrt{1-\frac{8\alpha \xi}{\Sigma^{3}}\left(pr+q-\Lambda\left(\Sigma r^{2}-\frac{5}{6}r^{4}\right) \right)}\right). \tag{34}\] which is a non-stationary generalization of the one given in [2] with, also, a nonzero cosmological constant. Observe that the cosmological constant drastically changes the solution. Let us now calculate the horizon structure of this solution. A null surface defined by \({\cal H}(v,r,\theta)=\) constant satisfies \(g^{\mu\nu}\,\partial_{\mu}{\cal H}\partial_{\nu}{\cal H}=0\), which, for the metric (31), becomes \[a^{2}\sin^{2}\theta(\partial_{v}{\cal H})^{2}+\left(-2rm+a^{2}+r^{2}\right)( \partial_{r}{\cal H})^{2}+2\left(a^{2}+r^{2}\right)\partial_{r}{\cal H} \partial_{v}{\cal H}+(\partial_{\theta}{\cal H})^{2}=0, \tag{35}\] and \(m\) given in (34) should be inserted in this equation. It is a highly non-trivial PDE. We can make a further assumption for the null horizon's coordinates: that is \({\cal H}=r-r_{H}(v,\theta)=0\), then the horizon equation reduces to \[a^{2}\sin^{2}\theta(\partial_{v}r_{H})^{2}+\left(-2r_{H}m+a^{2}+r_{H}^{2} \right)-2\left(a^{2}+r_{H}^{2}\right)\partial_{v}r_{H}+(\partial_{\theta}r_{H })^{2}=0. \tag{36}\] Let us make some remarks on this equation. Since \(r_{H}(v,\theta)\) satisfies a first-order nonlinear partial differential equation, it is still very hard to get a closed-form solution, but by linearizing around the stationary solution 3 we can find approximate solutions. To this end, let \(\epsilon\) be a small parameter and expand the non-stationary horizon radius \(r_{H}(v,\theta)\) around the stationary one as Footnote 3: This stationary solution corresponds to the one given in [2] for the case \(\Lambda=0\), otherwise, it is more general than that solution. \[r_{H}(v,\theta)=r_{H}^{0}(\theta)+\epsilon r_{1}(v,\theta)+{\cal O}(\epsilon^ {2}), \tag{37}\] where \(r_{H}^{0}(\theta)\) is the \(v\)-independent horizon function that satisfies \[(\partial_{\theta}r_{H}^{0})^{2}+\left(-2r_{H}^{0}m^{0}(r,\theta)+a^{2}+{(r_ {H}^{0})}^{2}\right)=0. \tag{38}\] where \(m^{0}(r,\theta)\) follows from (34) with constant \(q\) and \(p\). Then \(r_{1}(v,\theta)\) satisfies the following linear, but still a partial differential, equation \[-\left(a^{2}+(r_{H}^{0})^{2}\right)\,\partial_{v}\,r_{1}+(\partial_{\theta}\, r_{H}^{0})\,\partial_{\theta}\,r_{1}=(m_{0}+\zeta r_{H}^{0}-r_{H}^{0})r_{1}, \tag{39}\] where we set \(m(v,r,\theta)=m_{0}(v,r)+\epsilon\zeta(r,\theta)r_{1}(v,\theta)\) with \(\zeta\) a cumbersome but known function from the expansion (34). Assuming \(\partial_{\theta}\,r_{H}^{0}\neq 0\), we can solve (39) with the following ansatz: \[r_{1}(v,\theta)=e^{\rho(\theta)}\,f(v,\theta), \tag{40}\] which leads to \[\rho(\theta)=\int_{\theta_{0}}^{\theta}\,\frac{m_{0}+(\zeta-1)r_{H}^{0}}{ \partial_{\theta}r_{H}^{0}}\,d\theta, \tag{41}\] and \(f(v,\theta)=f(\eta)\) is an arbitrary function of \(\eta\) which is given as \[\eta:=v+\int_{\theta_{0}}^{\theta}\,\frac{a^{2}+(r_{H}^{0})^{2}}{\partial_{ \theta}\,r_{H}^{0}}\,d\theta. \tag{42}\] Since the linearized solution \(r_{1}(v,\theta)\) contains the arbitrary function \(f\) depending on \(v\) and \(\theta\), it may not be bounded for certain values of \(v\) and \(\theta\). Hence the radiating extension of the rotating black hole, as in the case of the static black studied in Section II, violates the second law of the black hole thermodynamics, and the stability of the horizons is not guaranteed. 
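The quadratic relations and the special horizon profile quoted above lend themselves to a small symbolic consistency check. The snippet below is a minimal sketch using sympy (it is not part of the paper): it confirms that both branches of Eq. (12) satisfy Eq. (11), that both branches of Eq. (34) satisfy Eq. (33), and that the tanh profile (24) with \(q(v)\) chosen as in (25) solves the horizon equation (23), for which the square-root term then vanishes. Here \(\mu\) is treated as an independent symbol, since the identities hold for any value of \(\mu\).

```python
import sympy as sp

v, r, th, a, alpha, Lam, mu, v0 = sp.symbols('v r theta a alpha Lambda mu v_0', positive=True)
p, q = sp.Function('p')(v), sp.Function('q')(v)

# (i) spherical case: both branches of Eq. (12) satisfy the quadratic (11)
root = sp.sqrt(1 - sp.Rational(4, 3)*Lam*alpha + 8*alpha*(p/r**3 + q/r**4))
for s in (+1, -1):
    m = r**2/(4*alpha)*(1 + s*root)
    assert sp.simplify(sp.expand(2*alpha*m**2 - r**2*m + Lam*r**4/6 - p*r - q)) == 0

# (ii) rotating case: both branches of Eq. (34) satisfy the quadratic (33)
P, Q = sp.Function('p')(v, th), sp.Function('q')(v, th)
Sig, xi = r**2 + a**2*sp.cos(th)**2, r**2 - 3*a**2*sp.cos(th)**2
root = sp.sqrt(1 - 8*alpha*xi/Sig**3*(P*r + Q - Lam*(Sig*r**2 - sp.Rational(5, 6)*r**4)))
for s in (+1, -1):
    m = Sig**3/(4*alpha*xi*r)*(1 + s*root)
    quadr = r*m - 2*alpha*r**2*m**2*xi/Sig**3 + Lam*(r**2*Sig - sp.Rational(5, 6)*r**4) - P*r - Q
    assert sp.simplify(sp.expand(quadr)) == 0

# (iii) the tanh profile (24), with q(v) from (25), solves the horizon equation (23)
for eps in (+1, -1):                      # +1: absorption, -1: emission
    u = (eps*v + v0) / (2*sp.sqrt(2*alpha))
    rH = sp.sqrt(2*alpha)*sp.tanh(u)
    qv = -p*sp.sqrt(2*alpha)*sp.tanh(u) - alpha*mu*sp.tanh(u)**4/2
    assert sp.simplify(mu*rH**4 + 8*alpha*(p*rH + qv)) == 0        # root term vanishes
    assert sp.simplify(eps*sp.diff(rH, v) - sp.Rational(1, 2) + rH**2/(4*alpha)) == 0

print("Eqs. (12), (34) and the profile (24)-(25) pass the symbolic checks")
```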
Let us study the asymptotic structure of the two curvature invariants of the radiating metric. * as \(r\rightarrow\infty\), one has \[R =4\Lambda+\frac{30r^{2}}{\alpha\sin\theta}+\frac{6a^{2}\cos^{2} \theta}{\alpha\sin\theta}+\mathcal{O}\left(\frac{1}{r^{2}}\right),\] \[\mathcal{G} =\frac{112r^{4}}{\alpha^{2}\sin^{2}\theta}-\frac{112r^{2}a^{2} \cot^{2}\theta}{\alpha^{2}}-\frac{32a^{4}\cos^{2}\theta\cot^{2}\theta}{\alpha^ {2}}-\frac{48p(v)}{\alpha r\sin\theta}+\mathcal{O}\left(\frac{1}{r^{2}}\right).\] (43) * The expressions as \(r\to 0\) are cumbersome. So we assume \(q(v)=0\) for the sake of depicting purposes \[R =-\frac{8\alpha\tan\theta\sec^{7}\theta p(v)^{2}}{a^{8}}+\frac{6a^ {2}\cos\theta\cot\theta}{\alpha}+4\Lambda+\mathcal{O}\left(r\right),\] \[\mathcal{G} =\frac{48\sec^{6}\theta p(v)^{2}}{a^{6}}-\frac{32a^{4}\cos^{2} \theta\cot^{2}\theta}{\alpha^{2}}+O\left(r\right).\] (44) Both of these curvature invariants are finite, unlike the Kerr-black hole case which has a ring-like singularity. ## IV Conclusions Motivated by two recent developments, that is the construction of stationary rotating black hole solutions of the conformal anomaly sourced General Relativity [2], and the observation of the amplification of quantum corrections by extremal black holes [8], we have studied here radiating non-rotating and rotating black hole solutions of the \(A\)-type anomaly sourced General Relativity with a cosmological constant. Our solutions generalize the rotating solution of [2] and the spherically symmetric solutions of [3; 4] to the non-stationary case as akin to Vaidya's generalization of the Schwarzschild metric, and a similar generalization of the Kerr metric. The metrics we have found are highly non-trivial: we have found the event horizon equations but we could only solve them analytically in the linearized approximation of their stationary counterparts. Numerical investigation of these equations and a proper understanding of the geometric structure of these black holes would be valuable. ## V Appendix A: Energy momentum tensor The metric in (31) can be written in the Kerr-Schild form \[g_{\mu\nu}=g_{\mu\nu}^{0}+\frac{2mr}{\Sigma}\,\lambda_{\mu}\,\lambda_{\nu}, \tag{45}\] where \(\lambda_{\mu}:=(1,0,0,-a\sin^{2}\theta)\) is a null vector and \(g_{\mu\nu}^{0}\) is the flat metric withe line element \[ds^{2}=-(dv-a\sin^{2}\theta d\phi)^{2}+2(dv-a\sin^{2}\theta d\phi)(dr-a\sin^{2 }\theta d\phi)+\Sigma(d\theta^{2}+\sin^{2}\theta d\phi^{2}). \tag{46}\] Following a similar discussion as in [2], one can rewrite the Einstein tensor and hence energy-momentum tensor components as \[\langle T_{\mu\nu}\rangle=\lambda_{\mu}\zeta_{\nu}+\lambda_{\nu}\zeta_{\mu}+ \rho_{4}\,n_{\mu}\,n_{\nu}+\rho_{6}\,k_{\mu}\,k_{\nu}+\mu\,g_{\mu\nu}, \tag{47}\] or more explicitly \[\left\langle T_{\mu\nu}\right\rangle=\frac{\rho_{6}}{a^{2}\,\sin^{4} \theta}\lambda_{\mu}\,\lambda_{\nu}+\lambda_{\mu}\left(\zeta_{\nu}-\frac{\rho_{6 }}{a^{2}\,\sin^{4}\theta}\,t_{\nu}\right)+\lambda_{\nu}\left(\zeta_{\mu}-\frac {\rho_{6}}{a^{2}\,\sin^{4}\theta}\,t_{\mu}\right)+\rho_{4}\,n_{\mu}\,n_{\nu}\] \[+\frac{\rho_{6}}{a^{2}\,\sin^{4}\theta}\,t_{\mu}\,t_{\nu}+\mu\,g_ {\mu\nu}, \tag{48}\] where \(\zeta_{\mu}:=\rho_{2}\,n_{\mu}+\rho_{3}\,m_{\mu}+\rho_{5}\,k_{\mu}\) and \[\lambda_{\mu}=(1,0,0,-a\sin^{2}\theta),\qquad\lambda^{\mu}=(0,1,0,0)\qquad t_{\mu}=(1,0,0,0),\] \[m_{\mu}=(0,1,0,0),\qquad\qquad n_{\mu}=(0,0,1,0),\qquad\quad k_{ \mu}=(0,0,0,1). 
\tag{49}\] One can see that \[\lambda^{\mu}\left\langle T_{\mu\nu}\right\rangle=-\frac{2r^{2}\partial_{r}m }{\Sigma^{2}}\,\lambda_{\nu}. \tag{50}\] The functions \(\rho_{2},\rho_{3},\rho_{4},\rho_{5},\rho_{6}\), \(\mu\) and the \(R\) curvature are given by \[\rho_{2}=\frac{1}{\Sigma^{2}}\,\left(2a^{2}\cos^{2}\theta\,m_{ \theta}-2a^{2}r\cos\theta\sin\theta m_{v}+r^{2}\Sigma m_{\tau\theta}-\Sigma m _{\theta}\right),\] \[\rho_{3}=\frac{1}{\sin\theta\,\Sigma^{2}\,(2mr-\Sigma)}\left(2a^{ 2}\Sigma\,\sin^{3}\theta\,m_{vr}+a^{2}r\Sigma\sin^{3}\theta\,m_{rr}\right.\] \[\left.+2a^{2}\sin^{3}\theta\,(-2a^{2}\sin^{2}\theta+2a^{2}-\Sigma )\,m_{r}\right.\] \[\left.+r\,\cos\theta\,(4a^{2}\sin^{2}\theta+\Sigma)\,m_{\theta}+a^ {2}r\sin^{3}\theta\,\Sigma\,m_{vv}+2\sin\theta(2a^{4}\,\cos^{2}\theta\,\sin^{2 }\theta+\right.\right.\] \[\left.\left.2a^{2}\Sigma\,\cos^{2}\theta-a^{2}\Sigma-\Sigma^{2} )\,m_{v}+r\sin\theta\,\Sigma\,m_{\theta\theta}\right),\] \[\rho_{4}=\frac{1}{\sin\theta\,\Sigma\,(2mr-\Sigma)}\left(2a^{2}r \sin^{3}\theta\,m_{vr}+\sin\theta\,\Sigma\,(-2a^{2}m\sin^{2}\theta+a^{2}r\sin ^{2}\theta+2a^{2}m-2m\Sigma+r\Sigma)\,m_{rr}\right.\] \[\left.+2\sin\theta\,(2a^{4}\sin^{2}\theta\cos^{2}\theta+4a^{2}mr \sin^{2}\theta-3a^{2}\Sigma\,\sin^{2}\theta-4a^{2}mr+2a^{2}\Sigma+2mr\Sigma- \Sigma^{2})\,m_{r}+\right.\] \[\left.r\cos\theta\,(4a^{2}\cos^{2}\theta+\Sigma)\,m_{\theta}+a^{ 2}r\Sigma\,\sin^{3}\theta\,m_{vv}+2\sin\theta\,(2a^{4}\cos^{2}\theta\sin^{2} \theta-2a^{2}\Sigma\,\sin^{2}\theta+a^{2}\Sigma-\Sigma^{2})\,m_{v}\right.\] \[\left.+r\Sigma\,\sin\theta\,m_{\theta\theta}\right),\] \[\rho_{5}=\frac{1}{\Sigma^{2}\,(2mr-\Sigma)}\left(a\,\sin^{2} \theta\,\Sigma\,(2a^{2}m\,\cos^{2}\theta+2a^{2}r\sin^{2}\theta+r\Sigma-2m \Sigma)\,m_{vr}\right.\] \[\left.a\sin^{2}\theta\,\Sigma\,(2a^{2}m\cos^{2}\theta+a^{2}r\sin ^{2}\theta-2m\Sigma+r\Sigma)\,m_{rr}+2a\sin^{2}\theta\,(2a^{4}\cos^{2}\theta \sin^{2}\theta-4a^{2}mr\cos^{2}\theta+\right.\right.\] \[\left.3a^{2}\Sigma\,\cos^{2}\theta-a^{2}\Sigma+2mr\,\Sigma- \Sigma^{2})\,m_{r}+a\cos\theta\,\sin\theta\,(4a^{2}m\cos^{2}\theta+4a^{2}r \sin^{2}\theta-4m\Sigma+3r\Sigma)\,m_{\theta}\right.\] \[\left.+a^{3}r\Sigma\sin^{4}\theta\,m_{vv}+a\sin^{2}\theta\,(4a^{ 4}\cos^{2}\theta\,\sin^{2}\theta-4a^{2}mr\cos^{2}\theta+6a^{2}\Sigma\cos^{2} \theta-2a^{2}\Sigma+2mr\Sigma-3\Sigma^{2})\,m_{v}\right.\] \[\left.\left.+ar\Sigma\sin^{2}\theta\,m_{\theta\theta}\right),\] \[\rho_{6}=\frac{1}{\Sigma\,(2mr-\Sigma)}\left(-2a^{2}r\Sigma\sin^ {4}\theta\,m_{vr}+\Sigma\,\sin^{2}\theta\,(2a^{2}m\sin^{2}\theta-a^{2}r\sin^{ 2}\theta-2a^{2}m+2m\Sigma-r\Sigma)\,m_{rr}\right.\] \[\left.2\sin^{2}\theta\,(-2a^{2}\sin^{2}\theta\,\cos^{2}\theta+4a^{ 2}mr\cos^{2}\theta+3a^{2}\Sigma\sin^{2}\theta-2a^{2}\Sigma-2mr\Sigma+\Sigma^{2 })\,m_{r}+\right.\] \[\left.-r\sin\theta\cos\theta\,(4a^{2}\sin^{2}\theta+\Sigma)\,m_{ \theta}-a^{2}r\Sigma\sin^{4}\theta\,m_{vv}\right.\] \[\left.+2\sin^{2}\theta\,(-2a^{4}\sin^{2}\theta\cos^{2}\theta+2a^{ 2}\Sigma\sin^{2}\theta-a^{2}\Sigma+\Sigma^{2})\,m_{v}\right.\] \[\left.-r\Sigma\sin^{2}\theta\,m_{\theta\theta}\right),\] \[\mu=\frac{1}{\sin\theta\,\Sigma^{2}\,(2mr-\Sigma)}\left(-2a^{2}r \,\Sigma\sin^{3}\theta\,m_{vr}-a^{2}r\,\sin^{3}\theta\,\Sigma\,m_{rr}-r\cos \theta\,(\Sigma+4a^{2}\,\sin^{2}\theta)\,m_{\theta}\right.\] \[\left.2\sin\theta\,(-2a^{4}\cos^{2}\theta\,\sin^{2}\theta+2a^{2}mr \cos^{2}\theta-2a^{2}\,\cos^{2}\theta\,\Sigma+a^{2}\,\Sigma-2mr\Sigma+\Sigma^{2 })\,m_{r}-a^{2}r\,\sin^{3}\theta\,\Sigma\,m_{vv}\right.\] 
\[\left.+2\sin\theta\,(-2a^{4}\cos^{2}\theta\,\sin^{2}\theta-2a^{2} \,\Sigma\,\cos^{2}\theta+a^{2}\,\Sigma+\Sigma^{2})\,m_{v}-r\,\sin\theta\, \Sigma\,m_{\theta\theta}\right),\] \[R=\frac{1}{\Sigma\,(-2mr+\Sigma)}\left(2(2a^{2}m\cos^{2}\theta-2m \Sigma+r\Sigma)\,m_{rr}+4(-2mr+\Sigma)\right), \tag{51}\] with the identity \(\rho_{6}=\sin^{2}\theta\,\rho_{4}+\sin^{2}\theta\,\Sigma\,(R+2\rho_{3}+4\mu)\). Here \(m_{r}:=\partial_{r}m\)_etc._. **Energy Conditions**: Let \(U^{\mu}\) be any timelike or a null vector in the spacetime geometry, then it is not possible to say anything about the sign of the term \(U^{\mu}U^{\nu}\,\langle T_{\mu\nu}\rangle\), because \[U^{\mu}U^{\nu}\,\langle T_{\mu\nu}\rangle=2(U\cdot\lambda)(U\cdot\zeta)+\rho_{ 4}(U\cdot n)^{2}+\rho_{6}(U\cdot k)^{2}+\mu U^{2}, \tag{52}\] where \(U\cdot\lambda=U^{\mu}\,\lambda_{\mu}\), \(U\cdot k=U^{\mu}\,k_{\mu}\), \(U\cdot n=U^{\mu}\,n_{\mu}\), \(U\cdot\zeta=U^{\mu}\,\zeta_{\mu}\) and \(U^{2}=U^{\mu}U_{\mu}\). For an arbitrary timelike or null vector \(U^{\mu}\), the right-hand side of (52) can take any sign. Hence we we cannot deduce any of the energy conditions for the energy momentum tensor given in (47). In any case, we know that all the known energy conditions can be violated by quantum fields.
2309.10062
SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language Models
In this work, we introduce SMART-LLM, an innovative framework designed for embodied multi-robot task planning. SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language Models (LLMs), harnesses the power of LLMs to convert high-level task instructions provided as input into a multi-robot task plan. It accomplishes this by executing a series of stages, including task decomposition, coalition formation, and task allocation, all guided by programmatic LLM prompts within the few-shot prompting paradigm. We create a benchmark dataset designed for validating the multi-robot task planning problem, encompassing four distinct categories of high-level instructions that vary in task complexity. Our evaluation experiments span both simulation and real-world scenarios, demonstrating that the proposed model can achieve promising results for generating multi-robot task plans. The experimental videos, code, and datasets from the work can be found at https://sites.google.com/view/smart-llm/.
Shyam Sundar Kannan, Vishnunandan L. N. Venkatesh, Byung-Cheol Min
2023-09-18T18:17:56Z
http://arxiv.org/abs/2309.10062v2
# SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language Models ###### Abstract In this work, we introduce SMART-LLM, an innovative framework designed for embodied multi-robot task planning. SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language Models (LLMs), harnesses the power of LLMs to convert high-level task instructions provided as input into a multi-robot task plan. It accomplishes this by executing a series of stages, including task decomposition, coalition formation, and task allocation, all guided by programmatic LLM prompts within the few-shot prompting paradigm. We create a benchmark dataset designed for validating the multi-robot task planning problem, encompassing four distinct categories of high-level instructions that vary in task complexity. Our evaluation experiments span both simulation and real-world scenarios, demonstrating that the proposed model can achieve promising results for generating multi-robot task plans. The experimental videos, code, and datasets from the work can be found at [https://sites.google.com/view/smart-llm/](https://sites.google.com/view/smart-llm/). ## I Introduction In recent years, multi-robot systems have gained prominence in various applications, from housekeeping tasks [1] to search and rescue missions [2] and warehouse automation [3]. These systems, composed of multiple autonomous robots, can greatly enhance efficiency, scalability, and adaptability in numerous tasks. Typically, these robot arrays exhibit heterogeneity in terms of types and skill levels among individual agents. Consequently, the overall system complexity is heightened, emphasizing the critical importance of skillful task allocation among these agents. Effective allocation of complex tasks among multiple agents involves several crucial steps, including task decomposition, assigning sub-tasks to suitable agents, and ensuring correct task sequencing [4]. This proficiency requires access to external knowledge or domain-specific information about the task. Traditional multi-robot task planning often struggles with diverse tasks and complex environments [4], relying on fixed algorithms. These challenges intensify when tasks are described in natural language, as such descriptions can lack precision and completeness. Take, for instance, the task presented in Fig. 1: "_Closing the laptop and watching TV in a dinly lit room_". Notably, this task description does not explicitly mention turning off the lights before watching TV. Given the incomplete and ambiguous nature of the instruction, it is crucial to leverage extensive prior knowledge to interpret the task and aid in efficient task planning. Large language models (LLMs), such as GPT-4 [5], GPT-3.5 [6] and Llama2 [7], have demonstrated remarkable capabilities in understanding natural language, logical reasoning, and generalization. This presents exciting opportunities for enhancing comprehension and planning in multi-robot systems. In this paper, we introduce SMART-LLM, an innovative mechanism for task assignment to embodied agents using LLMs. SMART-LLM provides LLMs with Python programming scripts that encapsulate intricate robot skills and environmental details, including object information. It also provides practical examples of task decomposition and allocation based on the robot's capabilities and the environment. Leveraging programming language structures, SMART-LLM taps into the vast dataset of internet code snippets and documentation available to LLMs. As illustrated in Fig. 
1, when dealing with a complex task, SMART-LLM divides the task into sub-tasks, each related to specific objects or actions. These sub-tasks are then combined and delegated to suitable robots with the necessary skills to perform them. The main contributions of this work are three-fold: * **Multi-Robot Task Planning Framework** for integrating task decomposition, coalition formation, and skill-based task assignment, by leveraging LLMs. * **Benchmark Dataset:** A benchmark dataset designed for evaluating multi-agent task planning systems, covering a spectrum of tasks, ranging from elemental to complex ones in the AI2-THOR [8] simulation platform. * **Implementation and Evaluation** of the framework in both simulated and real-world settings, undergoing thorough testing across a wide array of tasks. Fig. 1: **An overview of SMART-LLM: Smart Multi-Agent Robot Task planning using Large Language Models (LLM). Given a high-level instruction, SMART-LLM decomposes the instruction into sub-tasks assigning them to individual robots based on their specific skills and capabilities, and orchestrating their execution in a coherent and logical sequence.** ## II Related Works **Multi-Robot Task Planning.** Multi-robot task planning is important in robotics, requiring effective coordination among robots. Typically, the process of multi-robot task planning encompasses four distinct phases: task decomposition, coalition formation, task allocation, and task execution [4]. Task decomposition entails the subdivision of a given task into manageable sub-tasks. These decomposition methods can either be task-specific [9] or necessitate copious amounts of data for generating policies [10]. A newer, intuitive approach involves using natural language to describe tasks, using pre-trained language models to break them into sub-tasks, and predicting their order over time [11, 12]. In coalition formation and task allocation, efficiently assigning decomposed tasks to multiple agents is crucial for effective completion. To this end, a plethora of methodologies have been employed, encompassing negotiation [13], auctioning [14], consensus-based strategies [15] and reinforcement learning [16]. Degrees of automation, contingent on the number of task-planning steps a method can execute, have been conceptualized [4]. Most methods predominantly fall into the first or second level of automation. The first level exclusively automates task execution [17, 18]. Meanwhile, the second level automates either task allocation and execution [19, 20]; or coalition formation and execution [21]. The third level of automation encompasses coalition, allocation, and execution but does not involve task decomposition [22, 23]. In a pioneering stride towards the fourth level of automation [24], a method adeptly manages all four facets of task planning using natural language prompts and Long Short Term Memory (LSTM). Existing methods in the literature often have shortcomings, such as not covering all task planning steps, requiring extensive task-specific demonstration data for model training [24], or being limited to specific tasks, lacking generalizability. Our method stands out by efficiently performing all four task-planning steps and utilizing LLMs to generalize across various tasks through a few-shot training approach. **LLMs for robotics.** Large Language Models excel in generalization, commonsense reasoning [25, 6], and are increasingly sought after for inclusion in robotics systems [26, 27]. 
They play a vital role in crafting task plans for robots, making use of few- or zero-shot learning methods [6]. Various techniques for generating these robotic task plans using LLMs have emerged, encompassing value function-based approaches [27, 28] and context-driven prompts [29, 30, 31, 32]. Moreover, LLMs have found utility in providing feedback and refining task plans to enhance robot performance [33, 34, 35]. While LLMs excel at creating flexible task plans, they face challenges when applied to larger multi-agent teams. In the realm of multi-agent systems, progress has been made in enhancing agent cooperation with the use of LLMs [36, 37, 38]. These approaches involve equipping individual agents with their own LLMs to improve interactions and boost their collaborative skills. However, these methods prioritize improving multi-agent system efficiency but do not tackle the specific task of creating task plans for multi-robot teams. These plans involve assigning and sequencing tasks for individual robots based on their skills and the environment's condition. Our approach focuses on task decomposition and allocation in a heterogeneous robot team, considering individual robot skills. We achieve multi-robot task planning without the need for separate LLMs per robot. This simplifies planning and provides a unified solution for multi-robot task coordination. ## III Problem Formulation Given a high-level language instruction \(I\), the goal of this work is to understand the instruction, compute the necessary steps for task completion, and formulate a task plan that enables its execution. Tasks are executed in a manner that maximizes the utilization of the available robots, by performing tasks in parallel when feasible. These tasks are performed in an environment \(E\) that encapsulates numerous entities and objects. We assume that the given instruction \(I\) can be successfully executed in the environment \(E\). To execute the task, we have a set of \(N\) heterogeneous embodied robot agents \(\mathbb{R}=\{R^{1},R^{2},...,R^{N}\}\). Let \(\Delta\) be the set of all skills or actions that an agent may be capable of performing. In this work, we assume that robot skills, \(\Delta\), are either pre-implemented in the system or that there are available API calls to execute these skills. The agents possess diverse sets of skills, \(\mathbb{S}=\{S^{1},S^{2},...,S^{N}\}\), that they can perform, each subject to specific constraints. Here, \(S^{n}\) represents the list of skills of robot \(R^{n}\), and \(S^{n}\subseteq\Delta\), for \(n=1,2,...,N\). For instance, for the robot skill PickUpObject, there may be constraints on the maximum mass that a robot can pick. Now, the instruction \(I\) can be decomposed into a temporally ordered set of \(K\) sub-tasks, \(\mathbb{T}=\{T_{t_{1}}^{1},T_{t_{2}}^{1},...,T_{t_{j}}^{K}\}\), based on the robot skills, \(\Delta\), and the environment \(E\), where \(t_{j}\) denotes the temporal order of a sub-task and \(j\leq K\). It is worth noting that some of the sub-tasks can be executed in parallel, having the same temporal precedence. Let \(T_{S}^{k}\) be the list of skills needed by a robot to complete a sub-task, \(T_{t_{j}}^{k}\), where \(T_{S}^{k}\subseteq\Delta\) and \(T_{t_{j}}^{k}\in\mathbb{T}\). Based on \(T_{S}^{k}\), the sub-task can be allocated to a robot \(R\) with skills \(S\) if \(T_{S}^{k}\subseteq S\), where \(R\in\mathbb{R}\) and \(S\in\mathbb{S}\).
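As a purely illustrative sketch of the allocation condition just stated (and not of the paper's LLM-based pipeline), the snippet below checks which robots can individually take on a sub-task by testing the skill-subset condition \(T_{S}^{k}\subseteq S^{n}\); the robot names and all skill names other than PickUpObject are hypothetical examples.

```python
from typing import Dict, List, Set

def single_robot_candidates(needed_skills: Set[str],
                            robots: Dict[str, Set[str]]) -> List[str]:
    """Robots whose individual skill set covers every skill the sub-task needs."""
    return [name for name, skills in robots.items() if needed_skills <= skills]

# hypothetical robots and skills (illustrative only, not from the benchmark)
robots = {
    "robot_1": {"GoToObject", "PickUpObject", "PutObject"},
    "robot_2": {"GoToObject", "SwitchOn", "SwitchOff"},
    "robot_3": {"GoToObject", "OpenObject", "CloseObject"},
}
sub_task_skills = {"GoToObject", "SwitchOff"}        # e.g. "turn off the floor light"
print(single_robot_candidates(sub_task_skills, robots))   # -> ['robot_2']
```

If the returned list is empty, no single robot qualifies, which is exactly the situation handled next by forming a coalition of robots.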
In cases where no single robot satisfies this constraint, a team of two or more robots is required to perform the sub-task. In such scenarios, we form a team of \(Q\) robots, \(\mathbb{A}=\{A^{1},A^{2},...,A^{Q}\}\), each possessing skills \(\mathbb{S}_{A}=\{S_{A}^{1},S_{A}^{2},...,S_{A}^{Q}\}\), such that \(T_{S}^{k}\subseteq\bigcup\mathbb{S}_{A}\). ## IV Methodology The proposed approach utilizes LLMs to perform _Task Decomposition_, _Coalition Formation_, and _Task Allocation_ within the context of multi-robot task planning. Our approach employs Pythonic prompts to guide the LLM in generating code for task decomposition and allocation. We provide concise examples with line-by-line comments and block comments giving task summaries for each step, aiding the LLM in understanding and producing code effectively. ### _Stage 1: Task Decomposition_ In this stage, we decompose the given instruction, \(I\), into a set of independent sub-tasks, \(\mathbb{T}\), along with a sequence of actions for performing each sub-task. To decompose a task, we provide information about the environment, \(E\) (including objects and other entities present in the environment), and a list of primitive skills, \(\Delta\), that robots can perform. This information about the environment and the robot's skills is utilized to decompose the task such that it can be performed in that environment using the skills possessed by the robots. Following the initial few-shot LLM prompting, we provide the LLM with various pieces of information: details about the robot's skills, information about the environment, several examples of sample tasks, and corresponding Python code-based decomposed plans. The LLM takes all this information along with the input task, \(I\), that needs to be decomposed and generates the sub-tasks, \(\mathbb{T}\). In the Stage 1 block of Fig. 2 corresponding to task decomposition, the purple box corresponds to the list of robot skills, \(\Delta\); the blue box corresponds to details about the environment, \(E\); green box corresponds to the decomposed task samples given as part of the prompt; and red box corresponds to the given instruction \(I\). The red box in the Stage 2 block of Fig. 2 is the output from the LLM, corresponding to the sub-tasks, \(\mathbb{T}\). ### _Stage 2: Coalition Formation_ Coalition formation is used to form robot teams to perform each of the sub-tasks computed through task decomposition. In task decomposition, the primary task is broken down into sub-tasks, \(\mathbb{T}\) based on common sense and the various entities present in the environment, \(E\). However, this initial Fig. 2: **System Overview: SMART-LLM consists of four key stages: i) Task Decomposition: a prompt consisting of robot skills, objects, and task decomposition samples is combined with the input instruction. This is then fed to the LLM model to decompose the input task; ii) Coalition Formation: a prompt consisting of a list of robots, objects available in the environment, sample decomposed task examples along with corresponding coalition policy describing the formation of robot teams for those tasks, and decomposed task plan for the input task from the previous stage, is given to the LLM, to generate a coalition policy for the input task; iii) Task Allocation: a prompt consisting of sample decomposed tasks, their coalition policy and allocated task plans based on the coalition policy is given to the LLM, along with coalition policy generated for the input task. 
The LLM then outputs an allocated task plan based on this information; and iv) Task Execution: based on the allocated code generated, the robot executes the tasks. ”...” is used for brevity.** breakdown does not take into account the specific skills of individual robots, \(S^{n}\), or their capabilities to perform each sub-task. Therefore, in this stage, we prompt the LLM to analyze the list of skills needed to perform each sub-task, \(T_{S}^{k}\), and the skills of individual robots, \(S^{n}\) to identify the suitable robot(s) for each sub-task. To achieve this, we prompt the LLM with samples of decomposed tasks and corresponding coalition formation policies that describe how available robots can be assigned to the sub-tasks. The coalition policy consists of statements regarding whether robots possess all the necessary skills to perform a sub-task and how any skill gaps in a single robot's ability to perform a sub-task can be addressed by involving additional robots. The samples we include encompass various cases: * In scenarios, where a single robot possesses all the required skills to perform a sub-task, leading to a one-to-one assignment of robots to tasks. * Instances where no single robot possesses all the skills needed for a sub-task, resulting in multiple robots collaborating on the same task. * Cases where a robot possesses the necessary skills for a sub-task but is constrained by certain limitations (for example, a robot with a maximum weight limit for a pick-up task). In such cases, additional robots are employed to overcome these constraints. By presenting these samples along with the decomposed task, \(\mathbb{T}\), and a list of available robots \(\mathbb{R}\) and their skills \(\mathbb{S}\), the LLM generates a new coalition formation policy that outlines how the given robots can be assigned to perform the input task. The Stage 2 block of Fig. 2 corresponding to coalition formation, the green box represents the sample decomposed tasks given as part of the prompt; the blue box shows the available robots \(\mathbb{R}\) and their skills \(\mathbb{S}\) along with details about the environment \(E\); the orange box delineates a general summary of the coalition policy, whereas in the experiments we utilize actual coalition policy for the sample decomposed tasks; and the red box is the decomposed task for which a coalition policy needs to be generated. The red box in the Stage 3 block of Fig. 2 is the output from the LLM, corresponding to coalition formation policy for the sub-tasks, \(\mathbb{T}\) and the instruction \(I\). ### _Stage 3: Task Allocation_ Task allocation involves the precise assignment of either a specific robot or a team of robots to individual sub-tasks, guided by the coalition formation policy established in the preceding phase. Similar to the previous stages, a prompt consisting of decomposed task samples, coalition formation policies, and allocated plans for those tasks is constructed. By incorporating the decomposed sub-tasks, \(\mathbb{T}\), and the previously generated coalition formation policy for the given input task, \(I\), we instruct the LLM to distribute robots to each sub-task according to the coalitions and produce executable code. Depending on the coalition policy, a sub-task may be allocated to either a single robot or a group of robots. The Stage 3 block in Fig. 
2 shows sample decomposed plans (green box), the list of available robots and their skills (blue box), their coalition policies (orange box), and their allocated plans (violet box) used as part of the prompt, along with the coalition policy for input task (red box), to generate the final executable code in the Stage 4 block (red box). ### _Stage 4: Task Execution_ The LLM generates task plans for multi-robot teams through task allocation, which are then executed by an interpreter with either a virtual or physical team of robots. These plans are executed by making API calls to the robots' low-level skills, ensuring the efficient execution of the tasks. As shown in Stage 4 of Fig. 2, the allocated task plan (red box) for the example task \(I=\)_"turn off the desk and floor light and watch TV"_ is executed by a team of three robots in a certain temporal order. In this stage, the figure also displays the sequence of robot views as they perform the task along with captions indicating the ongoing task step. Captions marked in green correspond to specific actions completed by the robot. ## V Experiments ### _Benchmark Dataset_ To evaluate the performance of SMART-LLM and facilitate a quantitative comparison with other baseline methods, we created a benchmark dataset tailored for the evaluation of natural language-based task planning in multi-robot scenarios. This dataset originates from environments and actions within AI2-THOR [8], a deterministic simulation platform for typical household activities. The dataset encompasses 36 high-level instructions that articulate tasks and corresponding AI2-THOR floor plans, providing the spatial context for task execution. Given the multi-robot facet of our dataset, we include information on the number of robots available to perform a task and a comprehensive list of their respective skills. The number of available robots for each task ranges from 1 to 4, with varying individual skills, allowing for scalability evaluation of task planning methods. In the dataset, we also include the final ground truth states for the tasks, capturing the definitive states of relevant objects and their conditions within the environment after task completion. This ground truth delineates a set of symbolic goal conditions crucial for achieving task success. It includes details such as the object's position in the environment and its conditions like heated, cooked, sliced, or washed after the task is correctly executed. In addition to the final ground truth states, we provide data on the number of transitions in robot utilization during task execution. Transitions occur when one group of robots completes their sub-tasks, allowing another group to take over. This quantifies the utilization of the multi-robot system. If tasks are not appropriately parallelized during experiments and robots are not fully utilized, sub-tasks may be performed sequentially rather than concurrently, resulting in more transitions in robot utilization compared to the ground truth utilization. To evaluate the performance of our proposed method across diverse task complexities, our dataset comprises four task categories: * **Elemental Tasks** are designed for a single robot. In these scenarios, a single robot is assumed to possess all the necessary skills and abilities, eliminating the need for coordination with multiple robots. * **Simple Tasks** involve multiple objects and can be decomposed into sequential or parallel sub-tasks but not both concurrently. 
Again, all the robots possess all the necessary skills. * **Compound Tasks** are similar to Simple Tasks, with flexibility in execution strategies (sequential, parallel, or hybrid). However, the robots are heterogeneous, possessing specialized skills and properties, allowing individual robots to handle sub-tasks that match their skills and properties. * **Complex Tasks** are intended for heterogeneous robot teams and resemble Compound Tasks in their characteristics like task decomposition, multi-robot engagement, and the presence of multiple objects. Unlike Compound Tasks, individual robots cannot independently perform sub-tasks due to limitations in their skills or properties, necessitating strategic team assignments to leverage their combined capabilities for effective task completion. The dataset comprises 6 tasks categorized as elemental tasks, 8 tasks as simple tasks, 14 tasks as compound tasks, and 8 tasks as complex tasks. ### _Simulation Experiments_ Our method's validation takes place within the AI2-THOR simulated environment, where we employ our benchmark dataset for rigorous evaluation and comparative analysis against baseline approaches. SMART-LLM leverages the robust capabilities of GPT-4 [5] as its core language model, serving as the foundation for prompt processing and task plan generation. Our experimental setup encompasses a varied set of example prompts, including 5 Pythonic plan examples for task decomposition, 3 for coalition formation, and 4 for task allocation. These example prompts cover tasks that can be parallelized using threading, tasks that can only be executed sequentially, and tasks that involve both parallel and sequential execution. This diverse range of examples is strategically tailored to mirror the inherent complexities present in distinct phases of multi-robot task planning. It is worth noting that the example prompts were distinct from the tasks in the dataset and were based on different AI2-THOR floorplans not included in the dataset. Consequently, all the tasks in the dataset are considered unseen during testing. We also compare our method to two alternative baselines. In the first baseline, we replace GPT-4 with GPT-3.5 as the language model backbone. The second baseline uses our task decomposition method and prompts, it randomly assigns sub-tasks to available robots. ### _Real-Robot Experiments_ In our experiments involving mobile robots for visibility coverage problems [39], we have regions of fixed area that require visibility coverage. These regions vary in size, and our robots possess different visibility capabilities. Consequently, the number of robots required for complete coverage depends on the region's area and the robots' visibility capabilities. We assume our robots possess fundamental low-level skills, such as GoToLocation and Patrol, to perform these tasks. For task planning in this scenario, we utilize the same prompt samples from the simulation experiment that are based on the AI2-THOR simulator. ### _Evaluation Metrics_ We employ five evaluation metrics: Success Rate (_SR_), Task Completion Rate (_TCR_), Goal Condition Recall (_GCR_), Robot Utilization (_RU_), and Executability (_Exe_), following the methodology of [29]. Our evaluations are based on the dataset's final ground truth states, which we compare to the achieved states post-execution to assess task success. * _Exe_ is the fraction of actions in the task plan that can be executed, regardless of their impact on task completion. 
* _RU_ evaluates the efficiency of the robot team by comparing the experiment's transition count to the dataset's ground truth transition count. _RU_ equals 1.0 when they match, 0 when transitions equal sub-task count, and falls between 0 and 1 otherwise. * _GCR_ is quantified using the set difference between ground truth final state conditions and final state achieved, divided by the total number of task-specific goals in the dataset. * _TCR_ indicates task completion, irrespective of the robot utilization. If _GCR_ = 1, then _TCR_ = 1 else 0. * _SR_ is success rate and is 1 when both _GCR_ and _RU_ are 1, else it is 0. The task is considered successful when completed with appropriate robot utilization. ## VI Results and Discussion ### _Simulation Experiments_ Table I summarizes the average results across each category in the dataset for our method and baseline methods on unseen dataset tasks Our method consistently outperforms across all task categories. It achieves a perfect 100% success rate in elemental tasks, showcasing effective task decomposition. In simple tasks, it attains a perfect _TCR_ score of 1.0 but has a lower _SR_ of 0.62 due to sequential execution instead of parallel execution by two robots, impacting _RU_. For compound and complex tasks, our method succeeds 70% \begin{table} \begin{tabular}{l c c c c c c c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Methods**} & \multicolumn{4}{c}{**Elemental**} & \multicolumn{4}{c}{**Simple**} & \multicolumn{4}{c}{**Compound**} & \multicolumn{4}{c}{**Complex**} \\ \cline{2-13} & **SR** & **TCR** & **GCR** & **RU** & **Exe** & **SR** & **TCR** & **GCR** & **RU** & **Exe** & **SR** & **TCR** & **GCR** & **RU** & **Exe** & **SR** & **TCR** & **GCR** & **RU** & **Exe** \\ \hline SMART-LLM(GPT-4) & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 0.62 & 1.00 & 1.00 & 0.62 & 1.00 & 0.69 & 0.76 & 0.85 & 0.92 & 1.00 & 0.71 & 0.85 & 0.92 & 1.00 & 0.97 \\ SMART-LLM(GPT-3.5) & 0.83 & 0.83 & 0.83 & 1.00 & 0.91 & 0.62 & 0.87 & 0.93 & 0.62 & 0.95 & 0.42 & 0.50 & 0.61 & 0.71 & 0.85 & 0.14 & 0.28 & 0.35 & 0.85 & 0.62 \\ Decomp(ours) + Rand & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 0.37 & 0.62 & 0.62 & 0.37 & 0.60 & 0.08 & 0.16 & 0.25 & 0.41 & 0.37 & 0.00 & 0.00 & 0.15 & 0.85 & 0.38 \\ \hline \hline \end{tabular} \end{table} TABLE I: Evaluation of SMART-LLM and baselines in AI2-THOR simulator for different categories of tasks in the benchmark dataset. of the time but occasionally struggles with task sequencing and robot team assignment. Including additional prompt samples could potentially address these issues, but GPT-4's token limit prevents us from doing so. Notably, GPT-4 outperforms GPT-3.5, particularly in complex tasks requiring logical reasoning and diverse sub-skills, making it superior for complex task allocation. Our decomposition method, employing random allocation, generally falters for skill-based task assignments due to its inability to consider the environment's state and the robot's skills. This underscores the importance of employing a reasoning-based task allocation approach, particularly when dealing with heterogeneous robot teams. **Infeasible Scenarios.** In addition to the results presented in Table I, we conducted assessments involving more intricate tasks for which none of the robots possessed the required skills. This particular scenario is not included in Table I because no feasible code can be generated for the metrics to be measured. 
Notably, our approach utilizing the GPT-4 backbone exhibited the capacity to discern this situation and refrained from generating any task allocation plan. In contrast, our method employing GPT-3.5 produced a task allocation plan involving robots ill-suited for the designated tasks. This disparity underscores the enhanced logical reasoning capabilities of GPT-4 in recognizing and responding to such scenarios. **Variability in Performance.** The inherent non-deterministic characteristics of LLM introduce a degree of variability in its outcomes [40]. To assess this variability, we conducted 5 separate runs, each on a randomly selected task from every category within our dataset. Table II provides the mean and standard deviations of the results observed across these trials for our approach. For elemental, simple, and complex tasks, our method consistently yielded comparable results. Nevertheless, in the case of complex scenarios, we encountered inconsistency, leading to occasional failures in robot task allocation. **Ablation Study.** We utilized a benchmark dataset to evaluate different variations of our method, examining the impact of comments (both line-by-line and task summaries) in Python prompts. We validated our method with prompts lacking such comments. Additionally, we studied the influence of the coalition formation stage by removing it and directly allocating tasks based on task decomposition output. Table III summarizes the ablations of our method. Removing comments generally reduces the success rate, underlining the value of natural language instructions with code. Notably, when comments are removed, task decomposition and allocation perform similarly across simple and elemental tasks but suffer in compound and complex tasks, indicating that comments aid in understanding reasoning and logical structures. The removal of coalition formation led to a decrease in the success rate. This decline was primarily attributed to the absence of detailed rational reasoning for task allocation. Without coalition formation, elemental tasks deviated the most, and the success rate dropped from 1.0 to 0.66, as all task allocation samples involved scenarios requiring robot teaming, leading to unnecessary multi-robot allocation for elemental tasks. ### _Real-Robot Experiments_ In real-robot experiments, we tested our method for coverage visibility tasks with regions of different areas and robots with different visibility areas. When tested across various tasks, our method correctly allocated an appropriate number of robots. Despite this task being completely unseen, our method executed it seamlessly using real robots, bridging the gap between simulation and real-world applications. In Fig. 3, for the instruction _"patrol the regions"_, one or more robots are assigned to regions based on their visibility, and they patrol those regions. ## VII Conclusions and Future Work In our research, we delve into the potential of LLMs in the realm of generating task plans for heterogeneous robot teams. Our approach introduces prompting techniques, tailored to enhance the efficiency of the four key stages of multi-robot task planning. Each prompt takes into account the attributes of the environment and the capabilities of the individual robots, to generate a task plan. Our experiments validate that the proposed method can handle task instructions of varying complexities. 
Notably, our approach exhibits remarkable adaptability, allowing it to seamlessly generalize to new and unexplored environments, robot types, and task scenarios. This method streamlines the transition from simulations to real-world robot applications, enabling task plan samples from simulations to be used for generating task plans for real robot systems. In the future, we aim to enhance our work by implementing dynamic task allocation among robots and exploring multi-agent LLM frameworks for task planning. \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Method** & **SR** & **TCR** & **CCR** & **RU** & **Exe** \\ \hline Elemental & 1.00\(\pm\)0.00 & 1.00\(\pm\)0.00 & 1.00\(\pm\)0.00 & 1.00\(\pm\)0.00 & 1.00\(\pm\)0.00 \\ Simple & 1.00\(\pm\)0.00 & 1.00\(\pm\)0.00 & 1.00\(\pm\)0.00 & 1.00\(\pm\)0.00 & 1.00\(\pm\)0.00 \\ Compound & 1.00\(\pm\)0.00 & 1.00\(\pm\)0.00 & 1.00\(\pm\)0.00 & 1.00\(\pm\)0.00 & 1.00\(\pm\)0.00 \\ Complex & 0.48\(\pm\)0.40 & 0.48\(\pm\)0.40 & 0.73\(\pm\)0.22 & 1.00\(\pm\)0.00 & 0.81\(\pm\)0.15 \\ \hline \hline \end{tabular} \end{table} TABLE II: Variability in Performance. Fig. 3: **Real-Robot Experiment: a) team of robots and the regions to be patrolled; b) robots after task planning and patrolling their respective regions allocated based on visibility area.** \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Method** & **SR** & **TCR** & **GCR** & **RU** & **Exe** \\ \hline Ours & 0.75 & 0.90 & 0.94 & 0.88 & 0.99 \\ No Comments & 0.48 & 0.65 & 0.73 & 0.75 & 0.78 \\ No Summary & 0.61 & 0.74 & 0.80 & 0.78 & 0.81 \\ No Comm. \& Summ. & 0.41 & 0.61 & 0.66 & 0.59 & 0.69 \\ No Coalition & 0.60 & 0.68 & 0.75 & 0.85 & 0.82 \\ \hline \hline \end{tabular} \end{table} TABLE III: Ablation Studies.
2308.00158
MTUncertainty: Assessing the Need for Post-editing of Machine Translation Outputs by Fine-tuning OpenAI LLMs
Translation Quality Evaluation (TQE) is an essential step of the modern translation production process. TQE is critical in assessing both machine translation (MT) and human translation (HT) quality without reference translations. The ability to evaluate or even simply estimate the quality of translation automatically may open significant efficiency gains through process optimisation. This work examines whether the state-of-the-art large language models (LLMs) can be used for this purpose. We take OpenAI models as the best state-of-the-art technology and approach TQE as a binary classification task. On eight language pairs including English to Italian, German, French, Japanese, Dutch, Portuguese, Turkish, and Chinese, our experimental results show that fine-tuned gpt3.5 can demonstrate good performance on translation quality prediction tasks, i.e. whether the translation needs to be edited. Another finding is that simply increasing the sizes of LLMs does not lead to apparent better performances on this task by comparing the performance of three different versions of OpenAI models: curie, davinci, and gpt3.5 with 13B, 175B, and 175B parameters, respectively.
Serge Gladkoff, Lifeng Han, Gleb Erofeev, Irina Sorokina, Goran Nenadic
2023-07-31T21:13:30Z
http://arxiv.org/abs/2308.00158v6
# Predictive Data Analytics with AI: assessing the need for post-editing ###### Abstract Translation Quality Evaluation (TQE) is an essential step of modern translation production process. TQE is critical in assessing both machine translation (MT) and human translation (HT) quality without reference translations. Ability to evaluate or even simply estimate the quality of translation automatically may open significant efficiency gains through process optimization. This work examines whether the state-of-the-art large language models (LLMs) can be used for this purpose. We take OpenAI models as the best state of the art technology and approach TQE as a binary classification task. On **eight language pairs** including English to Italian, German, French, Japanese, Dutch, Portuguese, Turkish, and Chinese, our experimental results show that fine-tuned _gpt3.5_ can demonstrate good performance on translation quality prediction tasks, i.e. _whether the translation needs to be edited_. Another finding is that simply increasing the sizes of LLMs does not lead to apparent better performances on this task by comparing the performance of three different versions of OpenAI models: _curie_, _davinci_, and _gpt3.5_ with 13B, 175B, and 175B parameters, respectively. ## 1 Introduction Most modern translation projects include post-editing (PE) of machine-translation (MT) output [14, 15]. Instead of translating from scratch, the MT+PE process increases productivity and allows to speed up global content delivery [1, 14]. However, in regulated industries and many other scenarios raw MT output is not suitable for final publication due to the inevitable errors caused by inherently stochastic nature of neural MT (NMT) [14, 13]. Hallucinations, incorrect terminology, factual and accuracy errors, small and large, as well as many other types of mistakes are inevitable to varying degrees of extent, and therefore for premium quality publication human revision is required. MT output serves as input for a professional human translator, who reviews and revises the MT proposals to eliminate factual errors and ensure that the quality of translated material conforms to the customer specifications. At the same time even with those languages that are not handled well by MT, there is a significant portion of segments that are not changed after human review. This portion varies from 10% to 70% in some cases 1, and the question arises, "Is it possible to use machine learning (ML) methods to mark these segments and save time for human reviser and make them focus on those segments that need attention instead"? In other words, _Is it possible to capture editing distance patterns from data of prior editing of this material, which already has been made_? This could further speed up the translation process and decrease the costs while preserving the premium quality of the translated product. Footnote 1: logrusglobal.com statistics This problem is also closely related to the traditional MT quality estimation (QE) shared task that has been held with the Workshop of MT (WMT) series since 2012 [11, 12, 13, 14], where both token-level and segment-level QE were carried out. From practical application and industrial usage, we formulate the problem into a single classification task, i.e. we are trying to solve classification task to answer if the translated segment (sentence) needs to be edited, or not. 
With the development of current large language models (LLMs), we choose OpenAI models as state-of-the-art LLMs to examine their capabilities for this task. In this work, our first experimental investigation is on "Predictive Data Analytics with AI: assessing the need for post-editing of MT output by fine-tuning OpenAI LLMs". We also follow up with experiment which explores "if the size of sample or LLM matters in such a task" by experimenting with three OpenAI models: _curie_, _davinci_, and _gpt3.5_, with parameter sizes varying from 13B to 175B. The rest of this paper is designed as below. Section 2 introduces related work to ours including MT-QE-related shared task and challenge events, Section 3 presents our methodology design and pilot study using two language pairs, Section 4 extends the experimental investigation with six more language pairs, section 5 discusses experiment on English-Japanese news content with the increasing sizes of training and testing corpus and explores two more OpenAI LLMs with varying model sizes, and Section 6 concludes this paper with future work and research perspectives. ## 2 Related Work The Quality Evaluation (QE) of MT output has always been a critical topic for MT development due to its critical role in assessing quality in the process of training. In many cases evaluation has to be done without seeing the reference translations. In many practical situations, reference translations are not available or even not possible to acquire, i.e. it is not practical to "manufacture" them for evaluation. The earliest QE shared task with the annual WMT conference started in 2012 when word level QE was introduced by Callison-Burch et al. (2012) to estimate if the translated tokens need to be edited or not, such as deletion, substitution, or keeping it as it is. In the later development of QE, a sentence-level task was introduced to predict the overall segment translation scores, which are to be correlated with human judgement scores, such as using Direct Assessment Graham et al. (2015). In WMT-2022, a new task on binary sentence-level classification was also introduced to predict if a translated output has critical errors to be fixed on English-German and Portuguese-English language pairs Zerva et al. (2022). The recent methods used for such QE tasks included prompt-based learning using XLM-R by KU X Upstage Korea University, Korea & Upstage) from Eo et al. (2022), Direct Assessment and MQM features integration into fine-tuning on XLM-R and InfoXLM Chi et al. (2021) by the Alibaba team Bao et al. (2022), and incorporating a word-level sentence tagger and explanation extractor on top of the COMET framework by Rei et al. (2022), in addition to historical statistical methods such as support vector machine (SVM), Naive Bayes classifier (NB), and Conditional Random Fields (CRFs) by Han et al. (2013). However, to the best of our knowledge, this work is the first to investigate the OpenAI LLMs with varying sizes on such MT error prediction tasks with positive outcomes. ## 3 Methodology and Experiments As shown in the system diagram in Figure 1, we first collect the historical post-editing data from our past projects on eight languages of Enterprise Resource Planning (ERP) content translation on English\(\rightarrow\)German, French, Italian, Japanese, Dutch, Portuguese, Turkish, and Chinese (DE, FR, IT, JA, NL, PT, TR, ZH). 
This project was completed by using an MT engine to automatically translate the source into the eight languages, followed up by post-editing by professional linguists. Two examples of MT and PE in English-Italian and English-German languages as Pilot Experiments are shown in Figure 2 and 3. Regarding MT system selection, since the content was from the ERP domain, we used the SAP STH as our MT engine. 2 Footnote 2: [https://www.sap.com/](https://www.sap.com/) SAP is an enterprise resource planning, automation and business software company. With this data from real world translation Figure 1: LLMB2PEN Methodology Design on Fine-tuning LLMs for Binary Prediction of Post-editing Need on Translations. project we used API to fine-tune the OpenAI _curie_ model for our classification task. The input is the triple set (English source, MT outputs, post-edited "gold standard") we prepared in Phase 1. The goal of this step is to optimise the weights of the model parameters for our classification task. The custom fine-tuned model produced as result of LLMB2PEN (LLM for Binary Prediction of Post-editing Need) method is created in our private space on the OpenAI account. We did not apply "prompt engineering" for this task by doing zero-shot, one-shot or few-shot training; we did a full scale fine-tuning of OpenAI LLMs via API. It is important to note that we did not simply train the LLM for edit distance either; instead, the model was trained to learn whether the strings were edited or not taking into account the full content of the string and the entire context of the training data. One of the reasons that we did not use prompting is that "Prompt Engineering" of ChatGPT-3 is limited by 3,000 tokens, and with ChatGPT-4 the context has been increased to 25,000 tokens, but still very significant limitation remains. OpenAI documentation states that 100 tokens = 75 words, meaning that average sentence is 20 tokens, therefore 3000 tokens is only 150 sentences, or 75 translation units of bilingual text, or 50 segment triples of source, target and reference. The context of 25,000 sentences is only about 150 segment triples. Also, fine-tuning is a deeper process of adjusting model's weights, and not just an in-context learning. That's why we chose fine-tuning method, which is not constrained by such limitations. For our classification experiment we took about 4000 lines of bilingual data in triples of source, target and reference, and split it into train (large) and test (smaller) sets with a ratio of 9:1. There were no specific selection criteria for the data because we took the entire project dataset after project completion. (Please, note that since we used the entire data from actual project, and split the data set as 9:1, the sizes of test sets are not round and slightly different for different languages.) We also combined source sentences in groups of length, so that test data set has the same distribution of sentences by their length as training dataset. Figure 3: EN-DE Examples on MT and Post-Editing Figure 2: EN-IT Examples on MT and Post-Editing Since the average sentence size is about 17 words, the training dataset contained about 35000 words of source data, 35000 words of MT output, and 35000 words of post-edited human reference. It is also important to note what the model learns in this case - in such an experiment it learns not to translate, but to spot MT translation errors that were made by the specific MT engine in a specific language pair on particular content. 
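To make the data-preparation step concrete, the sketch below shows one way the (source, MT output, post-edited reference) triples could be turned into labelled prompt/completion JSONL files with a 9:1 split. It is an illustration rather than the exact pipeline used here: the record layout follows the legacy OpenAI completion-model fine-tuning convention, the example triples are hypothetical, and the actual upload and fine-tuning calls depend on the API and model version.

```python
# Illustrative preparation of (source, MT, post-edited) triples as prompt/completion
# JSONL records for a binary "needs post-editing" classifier, with a 9:1 split.
import json
import random

def build_records(triples):
    """triples: iterable of (source, mt_output, post_edited_reference) strings."""
    records = []
    for src, mt, pe in triples:
        label = " edit" if mt.strip() != pe.strip() else " keep"   # edited vs. left as is
        prompt = f"Source: {src}\nMT: {mt}\n\n###\n\n"             # fixed separator ends the prompt
        records.append({"prompt": prompt, "completion": label})
    return records

# Hypothetical example triples; real data would be the project's ERP segments.
triples = [
    ("Press the Save button.", "Premere il pulsante salva.", "Premere il pulsante Salva."),
    ("Open the file.", "Aprire il file.", "Aprire il file."),
]
records = build_records(triples)
random.seed(0)
random.shuffle(records)
split = int(0.9 * len(records))                                    # 9:1 train/test split
for name, part in (("train.jsonl", records[:split]), ("test.jsonl", records[split:])):
    with open(name, "w", encoding="utf-8") as fh:
        for rec in part:
            fh.write(json.dumps(rec, ensure_ascii=False) + "\n")
```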
### Outputs on EN-DE/IT As a first step, we trained the _curie_ LLM model using our data for two language pairs: English-Italian and English-German. To illustrate the results of prediction with our LLMB2PEN method, we draw the confusion matrix for both language pairs on Figures 4 and 5. In the Confusion Matrix, from the top left corner in a clockwise direction, the 1st quadrant means True Negative (**TN**): segment is predicted as not requiring editing and it does not indeed require post-editing. The 2nd quadrant is False Positive (FP): segments which are predicted as requiring editing, but in reality they do not, that is **FP** means that the segment is correct but wrongly flagged for post-editing. The 3rd quadrant is True Positive (**TP**) - reflect the situation when segment is correctly flagged as requiring post-editing. The fourth quadrant is False Negative (**FN**): segment is predicted as correct, while in reality it does require post-editing. So the first and third are successful classifications, and the other two are incorrect classifications. It does worth mentioning that if segment is incorrectly predicted as requiring post-editing, this only leads to small increase of post-editing cost, while False Negative predictions represent the consumer's risk to see substandard segments as not corrected in the final product. So in the context of our task we are much more concerned with the share of False Negatives in the test classification dataset. In the Italian situation shown on Figure 4, you can see that the model predicts correctly that there are much more translated sentences that need to be edited (TP=503) than sentences that do not need to be edited (TN=191). In incorrectly predicted categories, there are 67 sentences that need to be edited but predicted as good, and there are 81 translated sentences that do not need to be edited, but the prediction says they have to be reviewed. In the English-German set from Figure 5, the situation is the opposite: there are more translated sentences that do not need to be edited (442) than prescribed for review (256) in the correct predictions. In the wrong prediction categories, such numbers are 90 and 46 respectively. The prediction **accuracy** of the LLMB2PEN model on our designed task is **(TP+TN)/Total =** (503+191)/842 = 82.42% for English-Italian MT, and (442+256)/834 = 83.69% for English-German MT. Overall, our LLMB2PEN method shows that the English-German output is clearly better than the English-Italian. However, if we only count the Type II errors (incorrect prediction that the segments should NOT be edited), then the corresponding error rates will be 67/842 = 8% for Italian and 90/834 = 10% for German. ### Discussion The first and foremost finding is that the fine-tuned model actually learned enough information to make a very significant prediction of whether the segment has to be edited or not. It should be noted that such successful classification holds promise of a viable method to significantly reduce the volume of post-editing efforts and therefore time and costs. 
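For reference, the accuracy and Type II figures reported above follow directly from the confusion-matrix counts in Figures 4 and 5; the short snippet below (added purely for illustration) reproduces them.

```python
# Accuracy and Type II (missed-edit) rates recomputed from the reported confusion matrices.
counts = {
    "EN-IT": {"TP": 503, "TN": 191, "FP": 81, "FN": 67},
    "EN-DE": {"TP": 256, "TN": 442, "FP": 46, "FN": 90},
}
for pair, c in counts.items():
    total = sum(c.values())
    accuracy = (c["TP"] + c["TN"]) / total    # (TP+TN)/Total
    type2 = c["FN"] / total                   # segments wrongly passed as "no edit needed"
    print(f"{pair}: accuracy {accuracy:.2%}, Type II error {type2:.1%}")
# EN-IT: accuracy 82.42%, Type II error 8.0%
# EN-DE: accuracy 83.69%, Type II error 10.8%
```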
There is, however, a problem: while it is acceptable to present the editor with segments that are predicted as requiring editing but in reality do not require editing (the second quadrant, FP), the real consumer's risk comes from the segments that have been predicted as not requiring editing and made their way to the final product, but in reality contain errors (the fourth quadrant, FN).

Figure 4: EN-IT Confusion Matrix of LLMB2PEN, _curie_ model: Clockwise from top-left corner (TN, FP, TP, FN)

Such segments represent a significant portion of segments predicted as not requiring post-editing: **FN/(TN+FN)** = 67/(191+67) = 67/258 = 26% of "leave as is" (let's call them "LAI") segments for Italian, and 90/(442+90) = 90/532 = 16.9% for German. It is possible that for specific language pairs and MT engines the portion of the LAI segments will decrease with the increase of the training data size and further fine-tuning, but it is unlikely to become zero, since with neural models the error rate is never zero. Two strategies can be considered for implementing such prediction in production: 1. The LAI segments are excluded from the human loop and go into publication unvetted, but not straight away, as they advance through the workflow along with all the other segments. In this scenario, the potential error rate ceiling for final content will be **FN/Total** = FN/(TP+FN+TN+FP) = 8% for Italian, i.e. 67/(81 + 67 + 191 + 503) = 67/842, and 10.8% = 90/(90 + 46 + 442 + 256) = 90/834 for German. It is not possible to predict what the actual error rate in those 8% and 10.8% of segments that will not be reviewed would be, or the severity of errors in them. It is, obviously, the customer's decision whether this is an acceptable level of consumer risk for their situation (domain, type of content, audience, etc.). Additional risk assessment may need to be carried out. The savings on post-editing volume in this scenario would be (TN+FN)/Total = (191+67)/842 = 30.1% for Italian and (442+90)/834 = 63.8% for German. 2. All LAI segments are marked as "100% MT matches" in a CAT tool. With this approach, translators are requested to review them, but at a lower per-word rate, using the traditional approach which is well familiar to translation providers. In this scenario the reduction of the total time, effort, and cost can be estimated as follows: without this approach, translators working on the Edit Distance Calculation (EDC) model will get lower payment (which can vary from 10% to 40% with different payment models) for unchanged segments. In this scenario, translators may be asked to review such LAI segments but are paid only a small part of the full rate for the review of such segments. A simple proportion allows us to calculate the savings in the second scenario: if we take the full payment for all the segments as 100% of post-editing costs, and assume that 10% pay reflects adequate pay for the review of LAI segments that are marked as such, the volume of post-editing decreases by 27.6% for Italian and 57.4% for German with zero error rate of the final product (no producer's or consumer's risk). This estimate of a potential economy with a guarantee of zero error rate begs for further research and implementation of this method. ## 4 Extended Experiments ### On Six More Language Pairs We hereby also present extended experimental results using six more language pairs obtained with the LLMB2PEN method for translation editing distance prediction.
These language pairs include English-to-French, Japanese, Dutch, Portuguese, Turkish, and Chinese (EN\(\rightarrow\)FR/JA/NL/PT/TR/ZH), whose results are listed in Figures 6, 7, 8, 9, 10, and 11 respectively. From the results presented in the figures, in general, the ratio of correct predictions (TP+TN) is much higher than that of mis-predictions (FN+FP) across all these language pairs, as for English-Italian and English-German in the pilot studies. On one hand, the following language pairs have more True Positive than True Negative predicted segments (as for English-Italian in the pilot study): English-Japanese, English-Portuguese, and English-Chinese. On the other hand, the remaining language pairs have more TN than TP (as for English-German): English-French and English-Dutch, except for English-Turkish, which has a comparable number of segments between the TP (347) and TN (353) labels. This finding also indicates that the language pairs with a high number of TN labels are still much more challenging for MT system development to produce more correct outputs, i.e., English to French, Dutch, and Turkish. Earlier research findings from Gladkoff et al. (2022) on TQE conclude that 200+ segments can be a sufficient amount of data to reflect the MT system quality.

Figure 5: EN-DE Confusion Matrix of LLMB2PEN, _curie_ model: Clockwise from top-left corner (TN, FP, TP, FN)

## 5 Different LLMs on EN-JA News Domain In the subsequent experiment we used a different news-items translation corpus from other projects, translated from English to Japanese. In this experiment, we repeated the fine-tuning of the OpenAI _gpt3.5turbo_ model on datasets of different sizes: 2000 pairs, 4000 pairs, and 6000 pairs. Figure 12 shows the confusion matrix for the training set of 6000 bilingual EN-JA translation pairs in the news domain. We ran several experiments with varying training set sizes, with results shown in Figure 13. These results are interesting because, although False Positive prediction does not improve with the increase of the training set, in the context of the need for post-editing the False Negative category matters much more, because we are interested in better prediction of those segments which do NOT require post-editing. And, as we see from the experimental data, the prediction of FN improves from almost 20% to 12%-15% with the increase of the training set from 2000 bilingual segments to 6000 bilingual segments.

Figure 8: EN-NL Confusion Matrix of LLMB2PEN, _curie_ model: Clockwise from top-left corner (TN, FP, TP, FN) Figure 6: EN-FR Confusion Matrix of LLMB2PEN, _curie_ model: Clockwise from top-left corner (TN, FP, TP, FN) Figure 7: EN-JA Confusion Matrix of LLMB2PEN, _curie_ model: Clockwise from top-left corner (TN, FP, TP, FN)

### Comparison of performance on different OpenAI models It was also interesting to see how the extra-large LLMs (xLLMs) from OpenAI, the _davinci_ and _gpt3.5turbo_ models, perform on the same task in comparison to the _curie_ model we used earlier. These three LLMs have parameter sizes of around 13B, 175B, and 175B respectively. So we used the same English-Italian data from our original experiment to compare the performance of the different models on the same EN-IT dataset.
Figure 11: EN-ZH Confusion Matrix of LLMB2PEN, _curie_ model: Clockwise from top-left corner (TN, FP, TP, FN)

Figure 12: EN-JA news items Confusion Matrix of LLMB2PEN, _gpt3.5turbo_ model: Clockwise from top-left corner (TP, FN, TN, FP)

Figure 10: EN-TR Confusion Matrix of LLMB2PEN, _curie_ model: Clockwise from top-left corner (TN, FP, TP, FN)

Figure 9: EN-PT Confusion Matrix of LLMB2PEN, _curie_ model: Clockwise from top-left corner (TN, FP, TP, FN)

Figure 14 shows the comparison of these three LLMs regarding their confusion matrices and parameter sets. Surprisingly, their performances on predicting MT errors are very close, i.e. the larger-sized _davinci_ model and the extra-large sized _gpt3.5turbo_ did not demonstrate much improvement in model classification accuracy. Their correct labels (TP+TN) are (694, 699, 706) respectively out of all 842 labels, which results in the accuracy ratios 82.42%, 83.02%, and 83.85%. In comparison to the much smaller _curie_ model with 12 layers of Transformer and 768 hidden units, the xLLM _gpt3.5turbo_ only achieved a 1.43 point (83.85%-82.42%) increase of the accuracy score despite using 175 layers of Transformer and 4096 hidden units. The explanation for this may probably be found in the fact that the fine-tuning loss on this classification task drops down very quickly. Figure 15 shows the fine-tuning loss on the _gpt3.5turbo_ model. As can be seen from this graph, only 100 steps are sufficient to bring the loss to almost zero, and then all other steps contribute very little to the classification quality improvement. As we can see, there is no need to use larger models since the results hardly improve as compared with the _curie_ model. ## 6 Conclusions and Future Work In this work, to investigate the LLMs' capability of predicting MT output errors, we fine-tuned GPT models via the OpenAI API. We formulated the task as a classification challenge using prepared historical post-editing data on English-Italian and English-German for pilot studies. The experimental output using fine-tuned LLMB2PEN demonstrated promising results. We also analysed the possible solutions for addressing the error rates, i.e. whether prediction errors can be ignored and published without review, or whether such segments should be reviewed by the linguists at a lower rate, and how much saving can be achieved for the client who uses this process, in comparison to 100% post-editing without using the LLMB2PEN method. In the extended experiments, we added six more language pairs including English-to-French, Japanese, Dutch, Portuguese, Turkish, and Chinese, in total resulting in eight, and summarised our findings by classifying the language pairs. We also compared GPT models of different sizes, and the experimental results surprisingly show that the larger LLMs (_davinci_ and _gpt3.5turbo_) do not improve on the accuracy of the much smaller _curie_ model by an apparent margin, but cost much more. In the future, we are going to work on response rate and training times to see whether the model can continue learning as _being fed with more consecutive chunks of data_ for the same languages, to implement an ongoing learning of prediction. In addition, we plan to carry out the LLMB2PEN fine-tuning on other language pairs for which we have historical data. We intend to explore to what extent the model is capable of absorbing data for several languages, i.e. one fine-tuned multilingual model serving several language pairs.
To further extend this project, it will also be interesting to explore and check whether the LLMB2PEN method can help to identify human-introduced errors or translationese. ## Limitations In this work, we reported MT QE experiments using eight language data translated from English. The positive results produced from the OpenAI models can be further enhanced by more language pairs, as well as broader domains of the corpus. The main limitation of the method is non-zero fine-tuning time. The fine-tuning takes about 20 minutes and therefore cannot be made continuous, which has to be done periodically, in batches. This hardly can be overcome, but deployment methods can be applied to quickly replace the older fine-tuned models with the newer ones. ## Ethical Statement This work has no ethical concerns since we did not disclose identifiable private user data. ## Acknowledgements We thank Georg Kirchner, Globalization Technology Manager at Dell Technologies, for the valuable comments on the initial manuscript. LH and GN are grateful for the support from the grant "Assembling the Data Jigsaw: Powering Robust Research on the Causes, Determinants and Outcomes of MSK Disease". The project has been funded by the Nuffield Foundation, but the views expressed are those of the authors and not necessarily the Foundation. Visit www.nuffieldfoundation.org. LH and GN were also supported by the grant "Integrating hospital outpatient letters into the healthcare data space" (EP/V047949/1; funder: UKRI/EPSRC).
2304.06824
Even and odd self-similar solutions of the diffusion equation for infinite horizon
In the description of transport phenomena, diffusion represents an important aspect. In certain cases the diffusion may appear together with convection. In this paper we study the diffusion equation with the self-similar Ansatz. With an appropriate change of variables we find original solutions of the diffusion equation for an infinite horizon. Here we present the even solutions of the diffusion equation for the boundary conditions considered. For completeness, the odd solutions, derived in previous works, are also mentioned. Finally, the diffusion equation with a constant source term is discussed, which likewise has both even and odd solutions.
L. Mátyás, I. F. Barna
2023-03-31T08:01:49Z
http://arxiv.org/abs/2304.06824v1
# Even and odd self-similar solutions of the diffusion equation for infinite horizon ###### Abstract. In the description of transport phenomena an important aspect represents the diffusion. In certain cases the diffusion may appear together with convection. In this paper we study the diffusion equation with the self similar Ansatz. With an appropriate change of variables we found original solutions of diffusion equation for infinite horizon. Here we present the even solutions of diffusion equation for the boundary conditions presented. For completeness the odd solutions are also mentioned as well, as part of the previous works. Finally, the diffusion equation with constant source term is discussed, which also has even and odd solutions, too. ## 1. Introduction It is an evidence that mass diffusion or heat conduction is a fundamental physical process which attracted enormous intellectual interest for mathematicians, physicists and engineers in the last two century. The existing literature about mass and heat diffusion is immense, we just mention some basic textbooks [1, 2, 3, 4, 5]. Regular diffusion is the corner stone of many scientific discipline like, surface growth [6, 7, 8], reactions diffusion [9] or even flow problems in porous media. In our last two papers we gave an exhaustive summary about such processes with numerous relevant reviews [10, 11]. In connection with thermal diffusion [12, 13] it is also possible the presence of heat and mass transfer simultaneously, which may lead to cross effects [14]. Relevant applications related to general issues of heat transfer or engineering one may find in [15]. Important diffusive phenomena occur in universe [16] which is another field of interest. The study of population dynamics or biological processes [17, 18, 19] also involves diffusive processes, especially in spatial extended systems. In environmental sciences, the effects of spreading, distribution and adsorption of particulate matter or pollutants is also relevant [20, 21, 22, 23]. Furthermore from practical purposes diffusion coefficients have been measured in food sciences as well [24]. As new applications in the last decades diffusion gained ground in social sciences as well. As examples we can mention diffusion of innovations [25, 26], diffusion of technologies and social behavior [27] or even diffusion of cultures, humans or ideas [28, 29]. Aspects related to diffusion one may also find in the theory of pricing [30, 31]. The structure of the network has also a crucial role which influences the spread of innovations, ideas or even computer viruses [32]. Parallel to such diffusion activities generalization of heat-transport equations were done by Van and coauthors [33] e.g. forth order partial differential equations (PDE)s were formulated to elaborate the problem of non-regular heat conduction phenomena. Such spirit of the times clearly show that investigation of diffusion (and heat conduction) is still an important task. Having in mind that diffusion can be a general, three dimensional process beyond Cartesian symmetry here we investigate the one dimensional diffusion equation. The change in time of variable \(C(x,t)\) is influenced by the presence of it in the neighbors: \[\frac{\partial C(x,t)}{\partial t}=D\frac{\partial^{2}C(x,t)}{\partial x^{2}}, \tag{1}\] where \(D\) is the diffusion coefficient which should have positive real value. One assumes that \(C(x,t)\) is a sufficiently smooth function together with existing derivatives, regarding both variables. 
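Although the present work proceeds analytically, Eq. (1) can also be integrated numerically; the short sketch below (added for illustration, not part of the original derivation) uses the standard explicit forward-time, centred-space scheme, which is stable for \(D\Delta t/\Delta x^{2}\leq 1/2\).

```python
# Explicit FTCS integration of dC/dt = D d^2C/dx^2 on a large interval (illustrative).
import numpy as np

D, L, nx = 2.0, 10.0, 201
x = np.linspace(-L, L, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / D                      # D*dt/dx^2 = 0.4 <= 1/2, within the stability limit

C = np.exp(-x**2)                         # a smooth initial profile C(x, 0)
for _ in range(2000):
    lap = (np.roll(C, -1) - 2.0 * C + np.roll(C, 1)) / dx**2
    C = C + D * dt * lap
    C[0] = C[-1] = 0.0                    # mimic the infinite horizon: C -> 0 far from the origin
print(C.max())                            # the peak decays in time, as in the Gaussian solutions below
```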
In this general form one may observe that if \(C(x,t)\) is a solution, then \(C(x,t)+C_{0}\) is also solution, where \(C_{0}\) is a constant. For finite horizon or interval, in case the concentration is fixed at the two ends \(C(x=0,t)=C_{0}\) and \(C(x=L,t)=C_{0}\) the solutions are \[C_{k}(x,t)=C_{0}+e^{-D\frac{x^{2}k^{2}t}{L^{2}}}\cdot\sin\left(\frac{k\pi}{L}x \right), \tag{2}\] where \(k=1,2,3...\), it can be any positive integer number. In general, beyond \(C_{0}\) any linear combination of the product of the exponent and sine for different \(k\) is a solution. For finite horizon, in the case when the density is fixed to zero on both ends the solutions are changed to \[C_{k}(x,t)=C_{0}+e^{-D\frac{x^{2}n^{2}t}{L^{2}}}\cdot\cos\left(\frac{n\pi}{L} x\right), \tag{3}\] where \(n=1,2,3...\), can be any positive integer number. Thanks to the Fourier theorem, with the help of Eq. (2) and Eq. (3) arbitrary diffusion profile can be approximated on a closed interval. These are well-known analytic results and can be found in any usual physics textbooks like [1, 2]. In the present study - with the help of the self-similar Ansatz - we are going to present generic symmetric solutions for infinite horizon. These solutions have their roots at the very beginning of the theory, in the form of the Gaussian [1, 2]: \[C(x,t)=\text{Const.}\cdot\frac{1}{\sqrt{t}}e^{-\frac{x^{2}}{4Dt}}. \tag{4}\] For infinite horizon there are also certain works which present a given aspect of the diffusion, and it may arrive to a slightly more general aspect than the classical solution presented above [34]. In the following we will go much beyond that point and will present and analyze completely new type of solutions. Finally, the corresponding Green's function will be given which makes it possible to handle physically relevant arbitrary initial condition problems. ## 2. Theory and Results In case of infinite horizon, when we want to derive the corresponding solutions we make the following self-similar transformation: \[C(x,t)=t^{-\alpha}f\left(\frac{x}{t^{\beta}}\right)=t^{-\alpha}f(\eta). \tag{5}\] Note, that the spatial coordinate \(x\) now runs along the whole real axis. This kind of Ansatz have been applied by Sedov [35] later also used by Raizer and Zel'dowich [36] For certain systems Barenblatt applied it successfully [37] as well. We have also used it for linear or non-linear partial differential equation (PDE) systems, which are from fluid mechanics [38, 39, 40] or quantum mechanical systems [41]. In certain cases the equation of state of the fluid also plays a role [42, 43]. Diffusion related applications of the self-similar analysis method can be found in relatively recent works, too [44, 45, 46]. The transformation takes into account the (4) formula, and before the function \(f\), instead of \(1/\sqrt{t}\) there is a generalized function \(1/t^{\alpha}\), and in the argument of \(f\), the fraction \(x/t^{\beta}\) is possible, with a \(\beta\) which should be determined later. We evaluate the first and second derivative of relation (5), and insert it in the equation of diffusion (1). This yields the following ordinary differential equation (ODE) \[-\alpha t^{-\alpha-1}f(\eta)-\beta t^{-\alpha-1}\eta\frac{df(\eta)}{d\eta}=Dt ^{-\alpha-2\beta}\frac{d^{2}f(\eta)}{d\eta^{2}}. \tag{6}\] The reasoning is self-consistent if all three terms has the same decay in time. 
This is possible if \[\alpha=\mbox{arbitrary real number},\ \ \beta=1/2, \tag{7}\] and yields the following ODE \[-\alpha f-\frac{1}{2}\eta f^{\prime}=Df^{\prime\prime}. \tag{8}\] This ODE is a kind of characteristic equation, with the above presented change of variable. One can observe that for \(\alpha=1/2\) this equation can be written as \[-\frac{1}{2}\left(\eta f\right)^{\prime\prime}=Df^{\prime\prime}. \tag{9}\] Figure 1. The solution \(C(x,t)\) for \(\alpha=1/2\) which is the usual Gaussian solution of Eq. (12). If this equation is integrated once \[\text{Const}_{0}-\frac{1}{2}\eta f=Df^{\prime}, \tag{10}\] where \(\text{Const}_{0}\) is an arbitrary constant, which may depend on certain conditions related to the problem. If we take this \(\text{Const}_{0}=0\), then one arrives to the generic solution \[f=f_{0}e^{-\frac{\eta^{2}}{4D}}, \tag{11}\] where \(f_{0}\) is a constant. Inserting this form of \(f\) in form of \(C(x,t)\) given by Eq. (5) - for \(\alpha=1/2\) as it was mentioned earlier - one gets an even solution for the space variable: \[C(x,t)=f_{0}\frac{1}{t^{\frac{1}{2}}}e^{-\frac{x^{2}}{4Dt}}. \tag{12}\] By this we have recovered the generic Gaussian solution, which can be seen on figure (1). If we want to find further solutions, the equation (8) has to be solved for general \(\alpha\). The general solution for infinite horizon of (8) can be written as: \[f(\eta)=\eta\cdot e^{-\frac{\eta^{2}}{4D}}\left(c_{1}M\left[1-\alpha,\frac{3}{ 2},\frac{\eta^{2}}{4D}\right]+c_{2}U\left[1-\alpha,\frac{3}{2},\frac{\eta^{2} }{4D}\right]\right), \tag{13}\] where \(c_{1}\) and \(c_{2}\) are arbitrary real integration constants and \(M(,,)\) and \(U(,,)\) are the Kummer's functions. For exhaustive details consult [47]. If \(\alpha\) are positive integer numbers, then both special functions \(M\) and \(U\) are finite polynomials in terms of the third argument \(\frac{\eta^{2}}{4D}\) \[f(\eta)=\eta\cdot e^{-\frac{\eta^{2}}{4D}}\left(\kappa_{0}+\kappa_{1}\frac{ \eta^{2}}{4D}+...+\kappa_{n-1}\cdot\left[\frac{\eta^{2}}{4D}\right]^{n-1}\right). \tag{14}\] These gives the _odd solutions_ of the diffusion equation for \(\alpha=n\), (where \(n\) positive integer), in terms of the space variable. It follows for the complete solution \(C(x,t)\) \[C(x,t)=\frac{1}{t^{n}}f(\eta)=\frac{1}{t^{n}}\frac{x}{\sqrt{t}}e^{-\frac{x^{2 }}{4Dt}}\cdot\left(\kappa_{0}+\kappa_{1}\frac{x^{2}}{4Dt}+...+\kappa_{n-1} \cdot\left[\frac{x^{2}}{4Dt}\right]^{n-1}\right). \tag{15}\] These odd solutions have been studied thoroughly by Matyas and Barna in previous works ([10, 11]) and for completeness, we present these solution in Appendix A. For the _even solutions_, we denote by \(g(\eta)\) the following function \[f(\eta)=\eta\cdot e^{-\frac{\eta^{2}}{4D}}g(\eta), \tag{16}\] Inserting this equation into Eq. (8), we have \[\eta g^{\prime\prime}+2g^{\prime}-\frac{\eta^{2}}{2D}g^{\prime}+(\alpha-1) \frac{\eta}{D}g=0. \tag{17}\] In concordance with Eq. (13), we get the general solution \[g(\eta)=\left(c_{1}M\left[1-\alpha,\frac{3}{2},\frac{\eta^{2}}{4D}\right]+c_{ 2}U\left[1-\alpha,\frac{3}{2},\frac{\eta^{2}}{4D}\right]\right). \tag{18}\] At this point we make the conjecture from the forms of \(U\) and \(M\), that if we had the classical spatially even solution for \(\alpha=1/2\), than the next spatially even solution is for \(\alpha=3/2\), with the form of \(g\) \[g(\eta)=K_{0}\frac{1}{\eta}+K_{1}\eta, \tag{19}\] where \(K_{0}\) and \(K_{1}\) are arbitrary constants, which should be determined later. 
Inserting this form of \(g\) into (17), we find that the form (19) fulfills the equation (17) if \[K_{1}=-\frac{1}{2D}K_{0}. \tag{20}\] We obtain the same result if we insert the form \[f(\eta)=\eta\cdot e^{-\frac{\eta^{2}}{4D}}\left(K_{0}\frac{1}{\eta}+K_{1}\eta\right), \tag{21}\] directly into the equation (8). By this, for \(\alpha=3/2\), we get for the function \(f\) \[f(\eta)=K_{0}\cdot\eta\cdot e^{-\frac{\eta^{2}}{4D}}\left(\frac{1}{\eta}-\frac{1}{2D}\eta\right)=K_{0}\cdot e^{-\frac{\eta^{2}}{4D}}\left(1-\frac{1}{2D}\eta^{2}\right). \tag{22}\] Substituting this form into (5) one gets \[C(x,t)=K_{0}\frac{1}{t^{\frac{3}{2}}}e^{-\frac{x^{2}}{4Dt}}\left(1-\frac{1}{2D}\frac{x^{2}}{t}\right). \tag{23}\] This result is visualized on Figure (2). If we follow the case \(\alpha=5/2=2.5\), then the following form for the function \(g(\eta)\) can be considered \[g(\eta)=K_{0}\cdot\frac{1}{\eta}+K_{1}\cdot\eta+K_{2}\cdot\eta^{3}. \tag{24}\] Figure 2. The solution \(C(x,t)\) for \(\alpha=3/2\) in the form of Eq. (23). If we insert this form in the equation (17) the following relations for the constants \(K_{0}\), \(K_{1}\) and \(K_{2}\) can be derived \[K_{1}=-\frac{K_{0}}{D}, \tag{25}\] and \[K_{2}=-\frac{K_{1}}{12D}=\frac{K_{0}}{12D^{2}}. \tag{26}\] By this, we get for the \(g(\eta)\) \[g(\eta)=K_{0}\left(\frac{1}{\eta}-\frac{1}{D}\eta+\frac{1}{12D^{2}}\eta^{3}\right). \tag{27}\] Correspondingly the final form of \(f(\eta)\) for \(\alpha=2.5\) is \[f(\eta)=K_{0}\cdot e^{-\frac{\eta^{2}}{4D}}\left(1-\frac{1}{D}\eta^{2}+\frac{1}{12D^{2}}\eta^{4}\right). \tag{28}\] Inserting this form into (5) one gets \[C(x,t)=K_{0}\frac{1}{t^{\frac{5}{2}}}e^{-\frac{x^{2}}{4Dt}}\left(1-\frac{1}{D}\frac{x^{2}}{t}+\frac{1}{12D^{2}}\frac{x^{4}}{t^{2}}\right). \tag{29}\] This result can be seen on Figure (3). If we follow the case \(\alpha=7/2=3.5\), then the following form for the function \(g(\eta)\) can be considered: \[g(\eta)=K_{0}\cdot\frac{1}{\eta}+K_{1}\cdot\eta+K_{2}\cdot\eta^{3}+K_{3}\cdot\eta^{5}. \tag{30}\] Figure 3. The solution \(C(x,t)\) for \(\alpha=5/2\) in the form of Eq. (29). If we replace this form into the equation (17) the next relations among the constants \(K_{0}\), \(K_{1}\), \(K_{2}\) and \(K_{3}\) can be derived: \[K_{1}=-\frac{3}{2}\frac{K_{0}}{D}, \tag{31}\] for the next coefficient \[K_{2}=-\frac{K_{1}}{6D}=\frac{K_{0}}{4D^{2}}. \tag{32}\] Finally for the third coefficient one gets \[K_{3}=-\frac{K_{2}}{30D}=-\frac{K_{0}}{120D^{3}}. \tag{33}\] Inserting these coefficients into the formula (30), one obtains the following expression \[g(\eta)=K_{0}\bigg{(}\frac{1}{\eta}-\frac{3}{2D}\cdot\eta+\frac{1}{4D^{2}}\cdot\eta^{3}-\frac{1}{120D^{3}}\cdot\eta^{5}\bigg{)}. \tag{34}\] This form of \(g\) yields, by Eq. (16), for the function \(f\) \[f(\eta)=K_{0}\cdot e^{-\frac{\eta^{2}}{4D}}\left(1-\frac{3}{2D}\eta^{2}+\frac{1}{4D^{2}}\eta^{4}-\frac{1}{120D^{3}}\eta^{6}\right). \tag{35}\] Inserting this form into (5) one gets \[C(x,t)=K_{0}\frac{1}{t^{\frac{7}{2}}}e^{-\frac{x^{2}}{4Dt}}\left(1-\frac{3}{2D}\frac{x^{2}}{t}+\frac{1}{4D^{2}}\frac{x^{4}}{t^{2}}-\frac{1}{120D^{3}}\frac{x^{6}}{t^{3}}\right). \tag{36}\] This result is clearly visualized on Figure 4. It is evident that including higher terms in the finite series of Eq. (30) the solutions for \(\alpha=9/2,11/2\), etc. can be evaluated in a direct way. Figure 4. The function \(C(x,t)\) for the value \(\alpha=7/2\). For completeness we present the shape functions \(f(\eta)\) on Figure 5. 
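The even solutions obtained above can be checked independently by inserting them back into the diffusion equation (1). A short sympy sketch of such a check (the dictionary layout and symbol names are ours; the expressions are Eqs. (23), (29) and (36) with \(K_{0}=1\)):

```python
import sympy as sp

x, t, D = sp.symbols('x t D', positive=True)
g = sp.exp(-x**2 / (4*D*t))

even_solutions = {
    sp.Rational(3, 2): t**sp.Rational(-3, 2) * g * (1 - x**2/(2*D*t)),                     # Eq. (23)
    sp.Rational(5, 2): t**sp.Rational(-5, 2) * g * (1 - x**2/(D*t) + x**4/(12*D**2*t**2)),  # Eq. (29)
    sp.Rational(7, 2): t**sp.Rational(-7, 2) * g * (1 - 3*x**2/(2*D*t) + x**4/(4*D**2*t**2)
                                                      - x**6/(120*D**3*t**3)),             # Eq. (36)
}

for alpha, C in even_solutions.items():
    residual = sp.diff(C, t) - D * sp.diff(C, x, 2)   # left-hand side of Eq. (1)
    print(alpha, sp.simplify(residual))               # every residual simplifies to 0
```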
Note that solutions with higher \(\alpha\) values have more oscillations and quicker decay. The same features appear for the odd solutions as well. As we can see, at this point the solutions fulfill the boundary condition \(C(x,t)\to 0\) if \(x\to\pm\infty\), for positive \(\alpha\) values. The general initial value problem can be solved with the usage of the Green's functions formalism. According to the standard theory of the Green's functions the solution of the diffusion equation (1) can be obtained via the following convolution integral: \[C(x,t)=\frac{1}{2\sqrt{\pi t}}\int_{-\infty}^{+\infty}w(x_{0})G(x-x_{0})dx_{0}, \tag{37}\] where \(w(x_{0})\) defines the initial condition of the problem, \(C|_{t=0}=w(x_{0})\). The Green's function for diffusion is well-defined and can be found in many mathematical textbooks, e.g. [48, 49, 51, 50], \[G(x-x_{0})=\exp\left[-\frac{(x-x_{0})^{2}}{4tD}\right]. \tag{38}\] On the other hand, the Gaussian function is a fundamental solution of diffusion. Figure 5. Even shape functions \(f(\eta)\) of Eq. (16) for three different self-similar \(\alpha\) exponents. The black, blue and red curves are for \(\alpha=1/2,3/2\) and \(5/2\) numerical values, with the same diffusion constant (\(D=2\)), respectively. Note, shape functions with larger \(\alpha\)s have more zero transitions. We will show that for positive half-integer \(\alpha\) values the integral of the shape functions gives zero on the whole axis and on the half-axis as well. We will see in the following that for some special forms of the initial conditions, like polynomials, Gaussians, sines or cosines, the convolution integral can be done analytically. In the following we evaluate the convolution integral for \(\alpha=1/2\). As an example for the initial condition problem we may consider the following smooth function with a compact support: \[w(x_{0})=\frac{\mathrm{Heaviside}(3-x_{0})\cdot\mathrm{Heaviside}(3+x_{0})\cdot(9-x_{0}^{2})}{9}. \tag{39}\] This initial condition is a typical initial distribution for diffusion, as one can see on Figure 6a). The convolution integral for \(\alpha=1/2\) reads \[C(x,t)=\frac{1}{2\sqrt{\pi t}}\int_{-\infty}^{+\infty}\frac{\mathrm{Heaviside}(3-x_{0})\cdot\mathrm{Heaviside}(3+x_{0})\cdot(9-x_{0}^{2})}{9}\cdot e^{-\frac{(x-x_{0})^{2}}{4Dt}}dx_{0}. \tag{40}\] The result of this evaluation is \[C(x,t)=\tfrac{1}{2\sqrt{\pi t}}\bigg[\sqrt{\pi t}\ \mathrm{erf}\left(\tfrac{3+x}{2\sqrt{t}}\right)+\tfrac{2}{9}xt\ e^{-\frac{6x+x^{2}+9}{4t}}+\tfrac{2}{3}t\ e^{-\frac{6x+x^{2}+9}{4t}}-\tfrac{2}{9}t^{\frac{3}{2}}\sqrt{\pi}\ \mathrm{erf}\left(\tfrac{3+x}{2\sqrt{t}}\right)-\tfrac{1}{9}x^{2}\sqrt{\pi t}\ \mathrm{erf}\left(\tfrac{3+x}{2\sqrt{t}}\right)-\sqrt{\pi t}\ \mathrm{erf}\left(\tfrac{x-3}{2\sqrt{t}}\right)-\tfrac{2}{9}xt\ e^{-\frac{-6x+x^{2}+9}{4t}}+\tfrac{2}{3}t\ e^{-\frac{-6x+x^{2}+9}{4t}}+\tfrac{2}{9}t^{\frac{3}{2}}\sqrt{\pi}\ \mathrm{erf}\left(\tfrac{x-3}{2\sqrt{t}}\right)+\tfrac{1}{9}x^{2}\sqrt{\pi t}\ \mathrm{erf}\left(\tfrac{x-3}{2\sqrt{t}}\right)\bigg], \tag{41}\] which is presented on Figure 6b). ## 3. The properties of the shape functions and solutions In the following we study some properties of the shape functions \(f(\eta)\) and of the complete solutions \(C(x,t)\). First we consider the \(L^{1}\) integral norms. For the case \(\alpha=1/2\) we have \[\int_{-\infty}^{\infty}f(\eta)d\eta=\int_{-\infty}^{\infty}f_{0}e^{-\frac{\eta^{2}}{4D}}d\eta=f_{0}\,2\sqrt{\pi D}. \tag{42}\] The constant \(f_{0}\) is chosen, depending on the problem. 
If \(C\) stands for the density which diffuses, \(f_{0}\) in the above integral is related to the total mass of the system. Correspondingly \[\int_{-\infty}^{\infty}C(x,t)dx=\int_{-\infty}^{\infty}f_{0}\frac{1}{\sqrt{t}}e^{-\frac{x^{2}}{4Dt}}dx=f_{0}\,2\sqrt{\pi D}. \tag{43}\] For the case \(\alpha=3/2\) \[\int_{-\infty}^{\infty}f(\eta)d\eta=\int_{-\infty}^{\infty}K_{0}\cdot e^{-\frac{\eta^{2}}{4D}}\left(1-\frac{1}{2D}\eta^{2}\right)d\eta=0. \tag{44}\] It is interesting to see that the integral of the first even shape function beyond the Gaussian is zero. An even more remarkable feature is, however, that \[\int_{-\infty}^{0}f(\eta)d\eta=\int_{0}^{\infty}f(\eta)d\eta=0. \tag{45}\] So the oscillations, i.e. the positions of the zero transitions, divide the function in such a way that the integral gives zero not only on the whole real axis \((-\infty,\infty)\) but on the half axes \((0,\infty)\) and \((-\infty,0)\) as well. Evaluating the same type of integrals for the corresponding solution \(C(x,t)\) we have \[\int_{-\infty}^{\infty}C(x,t)dx=\int_{0}^{\infty}C(x,t)dx=\int_{-\infty}^{0}C(x,t)dx=\int_{-\infty}^{\infty}K_{0}\cdot\frac{1}{t^{3/2}}e^{-\frac{x^{2}}{4Dt}}\left(1-\frac{1}{2D}\frac{x^{2}}{t}\right)dx=0,\] at any time point (and for any diffusion coefficient \(D\)). The same property is true for all possible higher harmonic solutions if \(\alpha\) is a positive half-integer number \(\alpha=(2n+1)/2\) with \(n\in\mathbb{N}\). This property has far-reaching consequences. The linearity of the regular diffusion equation and this additional property of this even series of solutions make it possible to perturb the usual Gaussian in such a way that the total number of particles is conserved during the diffusion process, while the initial distribution can be changed significantly. One can see from the final form of the solutions \(C(x,t)_{\alpha}\sim\frac{1}{t^{\alpha}}\) that these perturbations are, however, short-lived, because they have a quicker decay than the standard Gaussian solution. Figure 6. a) The initial condition (39) b) The convolution integral for \(\alpha=1/2\) of Eq. (40). For completeness we present a \(C(x,t)\) solution which is a linear combination of the first two even solutions \(\alpha=1/2,3/2\) in the form of \[C(x,t)=\frac{60}{t^{\frac{1}{2}}}e^{-\frac{x^{2}}{4t}}-\frac{0.001}{t^{\frac{3}{2}}}e^{-\frac{x^{2}}{4t}}\left(1-\frac{x^{2}}{2t}\right), \tag{46}\] on Fig. (7). Note that coefficients with different orders of magnitude had to be applied to reach a visible effect when the sum of two functions with different power-law decay has to be visualised. As a second property we investigate the cosine Fourier transform of the shape functions: \[C_{\alpha}(k)=\int_{-\infty}^{\infty}\cos(k\cdot\eta)f_{\alpha}(\eta)d\eta. \tag{47}\] It can be shown with direct integration that the Fourier transform is \[C_{\alpha=\frac{2N+1}{2}}(k)\propto l\cdot\sqrt{\pi}\cdot\frac{k^{2N}\cdot D^{N}\cdot e^{-k^{2}D}}{\sqrt{\frac{1}{D}}}, \tag{48}\] for all positive integers \(N\in\mathbb{N}\setminus\{0\}\), where \(l\) is a real constant. This means that qualitatively the spectra for all positive half-integer \(\alpha\) are similar. They start from zero, have a global positive maximum and a quick decay to zero. It is generally known from spectral analysis that pulses of finite length have band spectra which have a minimal, a maximal and a central frequency. In Appendix A the corresponding normalization coefficients are given for the odd functions as well. 
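The vanishing integrals of Eqs. (44)-(45) are easy to reproduce numerically for the first few even shape functions. A minimal scipy sketch (ours; \(D=2\) and \(K_{0}=1\) are arbitrary choices, and the polynomials are Eqs. (22), (28) and (35)):

```python
import numpy as np
from scipy.integrate import quad

D = 2.0   # arbitrary diffusion constant, K0 is set to 1

even_shapes = {
    1.5: lambda e: np.exp(-e**2/(4*D)) * (1 - e**2/(2*D)),                      # Eq. (22)
    2.5: lambda e: np.exp(-e**2/(4*D)) * (1 - e**2/D + e**4/(12*D**2)),          # Eq. (28)
    3.5: lambda e: np.exp(-e**2/(4*D)) * (1 - 3*e**2/(2*D) + e**4/(4*D**2)
                                            - e**6/(120*D**3)),                  # Eq. (35)
}

for alpha, f in even_shapes.items():
    half, _ = quad(f, 0, np.inf)
    full, _ = quad(f, -np.inf, np.inf)
    print(alpha, half, full)   # both integrals vanish to quadrature accuracy
```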
## 4. The diffusion equation with constant source At this point we try to find solutions of the diffusion equation, mainly with the self similar Ansatz, where on the right hand side, there is a constant source term. \[\frac{\partial C(x,t)}{\partial t}=D\frac{\partial^{2}C(x,t)}{\partial x^{2}} +n, \tag{49}\] Figure 7. The function \(C(x,t)\), solution of Eq. (46) For this equation one also apply the self-similar transformation (5), and we get a modified equation relative to the homogeneous one \[-\alpha t^{-\alpha-1}f(\eta)-\beta t^{-\alpha-1}\eta\frac{df(\eta)}{d\eta}=Dt^{- \alpha-2\beta}\frac{d^{2}f(\eta)}{d\eta^{2}}+n. \tag{50}\] The free term on the r.h.s. has no explicit time decay, consequently we expect the same from the other terms, which means \[-\alpha-1 = 0 \tag{52}\] \[-\alpha-2\beta = 0. \tag{51}\] The two equations have to be fulfilled simultaneously. Solving these equations, we get the following values for \(\alpha\) and \(\beta\): \[\alpha=-1\text{ and }\beta=\frac{1}{2} \tag{53}\] Inserting these values to the equation (50), we get the following ODE \[f(\eta)-\frac{1}{2}\eta\frac{df(\eta)}{d\eta}=D\frac{d^{2}f(\eta)}{d\eta^{2}}+n. \tag{54}\] We emphasize, that we arrived to this equation by a self-similar transformation. At this point we observe, that if we shift the function \(f\) by a constant, and introduce the function \(h\): \[h(\eta)=f(\eta)-n \tag{55}\] we arrive to a slightly modified equation \[h(\eta)-\frac{1}{2}\eta\frac{dh(\eta)}{d\eta}=D\frac{d^{2}h(\eta)}{d\eta^{2}}. \tag{56}\] One may observe, that if the transformation \(\eta\to-\eta\) and \(h(-\eta)=h(\eta)\) is applied, the equation still remains the same, consequently we expect at least one even solution. If we look for the even solution by polynomial expansion \[h(\eta)=A+B\eta^{2}+... \tag{57}\] then we get by direct substitution \[A=2\cdot B\cdot D. \tag{58}\] This means, that the even solution reads as follows \[h(\eta)=B(2D+\eta^{2}) \tag{59}\] where \(B\) is a constant depending on initial conditions. Furthermore, we observe, that the transformation \(\eta\to-\eta\) and \(h(-\eta)=-h(\eta)\) also leaves the equation (56) unchanged. This means, that it is worthwhile to look for an odd solution, too. The odd solution of the equation is \[h(\eta)=2D\,\eta\,e^{-\frac{\eta^{2}}{4D}}+\sqrt{\pi}\,(2D^{3/2}+\sqrt{D}\, \eta^{2})\,erf\left(\frac{1}{2}\frac{\eta}{\sqrt{D}}\right) \tag{60}\] The form of this odd solution one can see on Figure 8. If \(n\) is positive in the equation (49), then we can talk about a source in the equation, and if \(n\) is negative, than we say that there is a sink in the diffusion process. The sink can be considered physical by the time \(C(x,t)\geq 0\). Diffusive systems with sinks have been studied in ref. [52], and water purification by adsorption also means a process with change of concentration is space and decrease in time [53]. The general solution for the shape function can be obtained from the linear combination of the even and odd solutions presented above \[h(\eta)=\kappa_{1}\biggl{[}2D\,\eta\,e^{-\frac{\eta^{2}}{4D}}+\sqrt{\pi}\,(2D^{ 3/2}+\sqrt{D}\,\eta^{2})\,erf\left(\frac{1}{2}\frac{\eta}{\sqrt{D}}\right) \biggr{]}+\kappa_{2}[2D+\eta^{2}] \tag{61}\] where \(\kappa_{1}\) and \(\kappa_{2}\) are constants depending on the initial or boundary conditions of the problem. 
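Before assembling the full solution, one may verify directly that the even form (59) and the odd form (60) indeed solve Eq. (56), which by linearity also covers the combination (61). A minimal sympy check (our own sketch; the symbol names and the free constant \(B\) are arbitrary):

```python
import sympy as sp

eta, D, B = sp.symbols('eta D B', positive=True)

h_even = B * (2*D + eta**2)                                                  # Eq. (59)
h_odd = (2*D*eta*sp.exp(-eta**2/(4*D))
         + sp.sqrt(sp.pi)*(2*D**sp.Rational(3, 2) + sp.sqrt(D)*eta**2)
           * sp.erf(eta/(2*sp.sqrt(D))))                                     # Eq. (60)

for h in (h_even, h_odd):
    residual = h - eta*sp.diff(h, eta)/2 - D*sp.diff(h, eta, 2)              # Eq. (56)
    print(sp.simplify(residual))   # -> 0 for both solutions
```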
Inserting this shape function to the general solution (5), we get for the final form of \(C(x,t)\) in the presence of a constant source \[C(x,t)=t\cdot\biggl{[}\kappa_{1}\biggl{(}2D\,\eta\,e^{-\frac{x^{2}}{4Dt}}+ \sqrt{\pi}\,\left(2D^{3/2}+\sqrt{D}\,\frac{x^{2}}{t}\right)\,erf\left(\frac{1}{ 2}\frac{x}{\sqrt{Dt}}\right)\biggl{)}+\kappa_{2}\left(2D+\frac{x^{2}}{t} \right)+n\biggr{]} \tag{62}\] For relatively shorter times, the general solution has interesting features depending on the weight of even or the odd part of the solution, as one can see on figure 9. The long time behavior is dominated by the constant of the even solution and the source term. Correspondingly for sufficiently long times the relation \(C(x,t)\sim(2\kappa_{2}\,D+n)\cdot t\) characterizes the dynamics, as one can see on Figure 9b). ## 5. Summary and Outlook Applying the well-known self-similar Ansatz - together with and additional change of variables - we derived symmetric solutions for the one dimensional diffusion equations. Using the Fourier series analogy we might say that these solutions may be considered as possible higher harmonics of the fundamental Gaussian solution. As unusual properties we found that the integral of these solutions give zero on both the half and the whole real axis as well. Thanks to the linearity of the diffusion Figure 8. The shape function \(h(\eta)\), described by Eq. (60), the odd solution of Eq. (56). equation these kind of functions can be added to the particle (or energy) conserving fundamental Gaussian solution therefore new kind of particle diffusion processes can be described. Due to the higher \(\alpha\) self-similar exponents these kind of solutions give relevant contributions only at smaller time coordinates, because the corresponding solutions decay quicker than the usual Gaussian solution. These kind of solutions can be also evaluated for two or three dimensional, cylindrical or spherical symmetric systems as well. Work is in progress to apply this kind of analysis to more sophisticated diffusion systems as well. We hope that our new solutions have far reaching consequences and they will be successfully applied in other scientific disciplines like quantum mechanics, quantum field theory in physics, in probability theory or in financial mathematics in the near future. ## 6. Acknowledgments One of us (I.F. Barna) was supported by the NKFIH, the Hungarian National Research Development and Innovation Office. - The authors declare no conflict of interest. - Both authors contributed equally to every detail of the study. -There was no extra external founding. ## 7. Appendix For completeness and for direct comparison we show the first five odd shape functions \(f(\eta)\) and the corresponding solutions \(C(x,t)\): \[f(\eta) = erf\left(\frac{\eta}{2\sqrt{D}}\right),\] \[f(\eta) = \kappa_{0}\cdot\eta\cdot e^{-\frac{\eta^{2}}{4D}},\] \[f(\eta) = \kappa_{0}\cdot\eta\cdot e^{-\frac{\eta^{2}}{4D}}\cdot\left(1- \frac{1}{6D}\eta^{2}\right),\] \[f(\eta) = \kappa_{0}\cdot\eta\cdot e^{-\frac{\eta^{2}}{4D}}\cdot\left(1- \frac{1}{3D}\eta^{2}+\frac{1}{60}\frac{1}{D^{2}}\eta^{4}\right), \tag{63}\] \[f(\eta) = \kappa_{0}\cdot\eta\cdot e^{-\frac{\eta^{2}}{4D}}\cdot\left(1- \frac{1}{2D}\eta^{2}+\frac{1}{20}\frac{1}{D^{2}}\eta^{4}-\frac{1}{840}\frac{1 }{D^{3}}\eta^{6}\right),\] for \(\alpha=0,1,2,3,4..\mathbb{N}\). The first case with the change of variable \(x/\sqrt{t}\) with no \(\alpha\) (or implicitly \(\alpha=0\)) dates back to Boltzmann [54], as it is also mentioned by [55] and [56]. 
All integrals of the functions from (63) on the whole real axis give zero: \[\int_{-\infty}^{\infty}f_{\alpha}(\eta)d\eta=0, \tag{64}\] however on the half-axis: \[\int_{0}^{\infty}f_{\alpha=0}(\eta)d\eta=\infty, \tag{65}\] and for additional non-zero integer \(\alpha\)s we get: \[\int_{0}^{\infty}f_{\alpha}(\eta)d\eta=\frac{D}{\alpha-1/2}. \tag{66}\] Integrals on the opposite half-axis \((-\infty,0]\) have the same value with a negative sign, respectively. The forms for the odd \(C(x,t)\)s are the following: \[C(x,t) = \mathrm{erf}\left(\frac{x}{2\sqrt{Dt}}\right),\] \[C(x,t) = \left(\frac{\kappa_{1}x}{t^{\frac{3}{2}}}\right)e^{-\frac{x^{2}}{4Dt}},\] \[C(x,t) = \left(\frac{\kappa_{1}x}{t^{\frac{5}{2}}}\right)e^{-\frac{x^{2}}{4Dt}}\left(1-\frac{x^{2}}{6Dt}\right),\] \[C(x,t) = \left(\frac{\kappa_{1}x}{t^{\frac{7}{2}}}\right)e^{-\frac{x^{2}}{4Dt}}\left(1-\frac{x^{2}}{3Dt}+\frac{x^{4}}{60(Dt)^{2}}\right), \tag{67}\] \[C(x,t) = \left(\frac{\kappa_{1}x}{t^{\frac{9}{2}}}\right)e^{-\frac{x^{2}}{4Dt}}\left(1-\frac{x^{2}}{2Dt}+\frac{x^{4}}{20(Dt)^{2}}-\frac{x^{6}}{840(Dt)^{3}}\right).\] The space integrals \(\int_{-\infty}^{\infty}C_{\alpha}(x,t)dx=0\) for all positive integer \(\alpha\)s. On the positive half-axis for \(\alpha=0\) the integral of the error function is infinite, while for positive \(\alpha\)s it is: \[\int_{0}^{\infty}C_{\alpha}(x,t)dx=\frac{Dt^{\frac{1}{2}-\alpha}}{\alpha-\frac{1}{2}}. \tag{68}\] These are well-defined values for finite \(D\), \(t\) and \(\alpha\). On the \((-\infty,0]\) half axis the sign is opposite. An additional detailed analysis of the odd functions was presented in our former study [11].
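The half-axis values of Eq. (66) can be confirmed numerically as well. A short scipy sketch (our own, with the arbitrary choices \(D=2\) and \(\kappa_{0}=1\)) integrates the odd shape functions listed in Eq. (63):

```python
import numpy as np
from scipy.integrate import quad

D = 2.0   # arbitrary diffusion constant, kappa_0 = 1

odd_shapes = {
    1: lambda e: e*np.exp(-e**2/(4*D)),
    2: lambda e: e*np.exp(-e**2/(4*D))*(1 - e**2/(6*D)),
    3: lambda e: e*np.exp(-e**2/(4*D))*(1 - e**2/(3*D) + e**4/(60*D**2)),
    4: lambda e: e*np.exp(-e**2/(4*D))*(1 - e**2/(2*D) + e**4/(20*D**2) - e**6/(840*D**3)),
}

for alpha, f in odd_shapes.items():
    half, _ = quad(f, 0, np.inf)
    full, _ = quad(f, -np.inf, np.inf)
    # half-axis value matches D/(alpha - 1/2) of Eq. (66); whole-axis integral is 0
    print(alpha, half, D/(alpha - 0.5), full)
```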
2309.05321
Unified tensor network theory for frustrated classical spin models in two dimensions
Frustration is a ubiquitous phenomenon in many-body physics that influences the nature of the system in a profound way with exotic emergent behavior. Despite its long research history, the analytical or numerical investigations on frustrated spin models remain a formidable challenge due to their extensive ground state degeneracy. In this work, we propose a unified tensor network theory to numerically solve the frustrated classical spin models on various two-dimensional (2D) lattice geometry with high efficiency. We show that the appropriate encoding of emergent degrees of freedom in each local tensor is of crucial importance in the construction of the infinite tensor network representation of the partition function. The frustrations are thus relieved through the effective interactions between emergent local degrees of freedom. Then the partition function is written as a product of a one-dimensional (1D) transfer operator, whose eigen-equation can be solved by the standard algorithm of matrix product states rigorously, and various phase transitions can be accurately determined from the singularities of the entanglement entropy of the 1D quantum correspondence. We demonstrated the power of our unified theory by numerically solving 2D fully frustrated XY spin models on the kagome, square and triangular lattices, giving rise to a variety of thermal phase transitions from infinite-order Berezinskii-Kosterlitz-Thouless transitions, second-order transitions, to first-order phase transitions. Our approach holds the potential application to other types of frustrated classical systems like Heisenberg spin antiferromagnets.
Feng-Feng Song, Tong-Yu Lin, Guang-Ming Zhang
2023-09-11T09:09:51Z
http://arxiv.org/abs/2309.05321v1
# Unified tensor network theory for frustrated classical spin models in two dimensions ###### Abstract Frustration is a ubiquitous phenomenon in many-body physics that influences the nature of the system in a profound way with exotic emergent behavior. Despite its long research history, the analytical or numerical investigations on frustrated spin models remain a formidable challenge due to their extensive ground state degeneracy. In this work, we propose a unified tensor network theory to numerically solve the frustrated classical spin models on various two-dimensional (2D) lattice geometry with high efficiency. We show that the appropriate encoding of emergent degrees of freedom in each local tensor is of crucial importance in the construction of the infinite tensor network representation of the partition function. The frustrations are thus relieved through the effective interactions between emergent local degrees of freedom. Then the partition function is written as a product of a one-dimensional (1D) transfer operator, whose eigen-equation can be solved by the standard algorithm of matrix product states rigorously, and various phase transitions can be accurately determined from the singularities of the entanglement entropy of the 1D quantum correspondence. We demonstrated the power of our unified theory by numerically solving 2D fully frustrated XY spin models on the kagome, square and triangular lattices, giving rise to a variety of thermal phase transitions from infinite-order Brezinskii-Kosterlitz-Thouless transitions, second-order transitions, to first-order phase transitions. Our approach holds the potential application to other types of frustrated classical systems like Heisenberg spin antiferromagnets. ## I Introduction Frustrated spin systems have become an extremely active field of theoretical and experimental research in the last decades characterized by complex low-energy physics and fascinating emergent phenomena [1; 2; 3]. A system is regarded as frustrated when conflicting interaction terms are present, featured by the inability to minimize total energy by concurrently reducing the energy of each group of interacting degrees of freedom. Frustration underlies non-trivial behavior across physical systems or more general many-body systems, as the minimization of local conflicts gives rise to new degrees of freedom [4; 5]. Classical frustrated spin systems can be understood as simplified quantum mechanical models which employ classical spins to investigate the behavior of strongly correlated magnetic systems with competing interactions. The existence of frustration depends on the lattice geometry and/or the nature of the interactions [6]. For example, the anti-ferromagnetic (AF) Ising model defined by a set of spins of \(s=\pm 1\) is frustrated on the triangular and kagome lattices with massive ground-state degeneracy [7; 8]. However, AF Ising models are not frustrated on the 2D square lattice because the lattice is bipartite and the energy can be simply minimized by the Neel configuration of alternating spins. Frustration also depends on the dimension of the spin variables. For the frustrated AF XY spin systems composed of planar vectors \(\vec{s}=(\sin\theta,\cos\theta)\), the ground-state configuration is usually highly degenerate with new symmetries induced from non-collinear patterns. 
The new degrees of freedom can give rise to rich and complex phases at finite temperatures, which have been studied over the past decades on the square [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24], the triangular [25; 26; 27; 28; 29; 30; 31; 32; 33] and the kagome lattices [34; 35; 36; 37; 38; 39; 40]. The study of frustrated classical spin systems is important not only for understanding the emergent behavior of physical systems like spin glasses [41; 42] but also for general optimization problems across multiple disciplines [43]. Considerable efforts have been made in the investigation of the fundamental properties of frustrated classical spin systems. Despite decade-long efforts, a generic approach to dealing with frustrated spin systems with both high accuracy and high efficiency is still lacking. Well-established methods such as Monte Carlo simulations, mean-field theories, and renormalization group techniques, have made significant contributions to the study of the classical frustrated spin models. However, they have encountered many difficulties such as low efficiency or limited applications [44; 45; 39; 46]. Recent progress in the tensor network methods provides new computational approaches for studying classical lattice models with strong frustrations [47; 48; 24; 40; 49]. It is found that the construction of the tensor network of the partition function is nontrivial for frustrated systems compared to the standard formulation. For example, the ground state local rules should be encoded in the local tensors to satisfy the ground state configurations induced by geometrical frustrations [47]. In the frustrated Ising models, a linear searching algorithm based on a Hamiltonian tessellation has been proposed to find the proper transitional invariant unit [48; 49]. In the frustrated XY models, the idea of splitting of \(U(1)\) spins and dual transformations have been developed to overcome the convergence issues [44; 24]. Although these techniques make a success in specific models, they seem to be very tricky. Thus, one wonders whether there exists a general framework to treat frustrated classical spin models. Here, we generalize the underlying principles of the tensor network representation to make it applicable to generic frustrated classical spin systems. When comprising the whole tensor network of the partition function, the crucial point is that the emergent degrees of freedom induced by frustrations should be encoded in the local tensors. In this way, the massive degeneracy is characterized by emergent dual variables such as height variables in the AF Ising model on the triangular lattice [50; 51] and chiralities in frustrated XY models [38; 40]. The emergent variables capture the freedom of a group of interacting spins under the constraint of frustrations. In the sense of coarse-graining, the local tensors carry the effective interactions between emergent local degrees of freedom. The local tensors usually sit on the dual sites of the original lattice which can be constructed from dual transformations. It is worth noting that the dual transformations should be imposed on the whole cluster of a number of spins in correspondence with the emergent dual variables. We demonstrate the power of the generalized theory of tensor network representation by applying it to fully frustrated XY models on the kagome, triangular, and square lattices. 
First of all, we can express the infinite 2D tensor network as a product of 1D transfer matrix operators, which can be contracted efficiently by recently developed tensor network algorithms under optimal variational principles [52; 53; 54]. Then, from the singularity of the entanglement entropy of the 1D quantum transfer operator, various phase transitions can be determined with great accuracy according to the same criterion [55]. Finally we find that a broad array of emergent physics has been treated including various types of phase transitions from first-order, second-order to the Berezinskii-Kosterlitz-Thouless (BKT) phase transitions. The complex phase structures of the frustrated XY systems are revisited and clarified with new tensor network solutions. The present approach holds the potential application to next-nearest-neighbor frustrated spin systems and other types of classical spins like Heisenberg antiferromagnet. The rest of the paper is organized as follows. In Sec. II, we introduce the theory of tensor network representations for classical frustrated spin models with two concrete examples. After constructing the tensor networks of Ising spin antiferromagnets on the kagome and triangular lattices, we perform the numerical calculation of the residual entropy of the frustrated Ising models, which are comparable to the exact results. In Sec. III, we apply the unified theory to the fully frustrated XY spin models on the kagome, square, and triangular lattices, and present the numerical results for the determination of the finite temperature phase diagram of frustrated XY systems, especially the AF triangular XY model and the modified square XY model. Finally in Sec. IV, we discuss the future generalizations of the method and give our conclusions. In the Appendix, we outline the detailed tensor network methods for numerical calculations. ## II Tensor network representations of 2D statistical models ### Emergent degrees of freedom Tensor networks have proven to be a very potent tool in the study of strongly correlated quantum models as well as classical statistical mechanics. To implement this powerful method, the first step is to convert the partition function of a classical lattice model with local interactions into a tensor network representation. The standard construction of the tensor network is conducted by putting a matrix on each bond of the original lattice accounting for the Boltzmann weight of the nearest-neighboring interactions [56]. For a generic spin model with nearest-neighbor interactions \[H=\sum_{\langle i,j\rangle}h(s_{i},s_{j}), \tag{1}\] the partition function can be decomposed into a tensor network as a product of local Boltzmann weights, \[Z=\sum_{\{s_{i}\}}\mathrm{e}^{-\beta H(\{s_{i}\})}=\sum_{\{s_{i}\}}\prod_{\langle i,j\rangle}W(s_{i},s_{j}), \tag{2}\] where \(\langle i,j\rangle\) refers to the nearest neighbors, \(s_{i}\) are the spin variables, and the interaction matrices are given by \[W(s_{i},s_{j})=\mathrm{e}^{-\beta h(s_{i},s_{j})}, \tag{3}\] whose row and column indices are the spin variables shown in Fig. 1. The \(\delta\) tensors on the lattice vertexes ensure all indices of \(W\) take the same value at the joint point. Furthermore, we perform the Schmidt decomposition on the symmetric matrix \(W\) \[W(s_{i},s_{j})=(U\sqrt{S})(\sqrt{S}V^{\dagger})=V_{a}(s_{i},s_{k})V_{b}(s_{k}, s_{j}), \tag{4}\] and the partition function can be cast into the uniform tensor network representation as shown in Fig. 
1 \[Z=\mathrm{tTr}\prod_{i}O_{s_{1},s_{2}}^{s_{3},s_{4}}(i) \tag{5}\] by grouping all V matrices that connect to the \(\delta\) tensors \[O_{s_{1},s_{2}}^{s_{3},s_{4}}=\sum_{s_{k}}V_{b}(s_{1},s_{k})V_{b}(s_{2},s_{k}) V_{a}(s_{k},s_{3})V_{a}(s_{k},s_{4}). \tag{6}\] The standard representation has been successfully applied to many lattice statistical models without frustration [55; 56; 57; 58; 59]. However, it cannot be implemented directly in the frustrated spin models, where the tensor network contraction algorithms fail to converge. It was found that the proper encoding of the ground state local rules in local tensors was crucial for the contraction to converge. To fulfill the physics of the ground state manifold, a linear algorithm was proposed to search for the optimal Hamiltonian tessellation for Ising antiferromagnets [48; 49]. The key point is that the energy of all local ground state configurations should be simultaneously minimized under the splitting of the global Hamiltonian into local groups of interactions. And the local tensors are constructed as translational units coinciding with the local clusters of the tessellation. In order to extend tensor network approaches to generic frustrated classical spin models, we should understand the ground state local rules from a more fundamental perspective of emergent degrees of freedom. In frustrated systems, new degrees of freedom often emerge as a result of the minimization of local conflicts. The ground state of frustrated spin systems is highly degenerate because a number of spins can behave as free spins. Such freedom can therefore be represented by a set of emergent variables describing the effective interactions induced by frustrations. For some models, the emergent variables can be derived directly like height variables in the AF Ising triangular model [50; 51] and chiralities in frustrated XY models [38; 40]. For the spin models with more complicated interactions, the emergent variables may not be explicitly expressed but they can still be characterized by local tensors composed of a cluster of local interactions [48; 49]. This idea generalizes tensor network approaches readily to classical frustrated systems of both discrete and continuous spins. Before discussing the tensor network construction of the frustrated spin model, we give some examples of emergent degrees of freedom by revisiting the exactly solvable frustrated models. One of the simplest frustrated spin models is the AF Ising model on the kagome lattice \[H=J\sum_{\langle i,j\rangle}\sigma_{i}\sigma_{j}, \tag{7}\] where \(J>0\) denotes the AF interactions between nearest-neighbor spins \(s_{i}=\pm 1\) as displayed in Fig. 2 (a). The kagome AF Ising model is disordered at all temperatures with an extensive ground state degeneracy characterized by a finite residual entropy [8]. To minimize the energy of each triangular plaquette, three spins should obey the ground state local rule of "two up one down, one down two up" as shown in Fig. 2 (a). Besides directly focusing on the local spin configurations, the physics of the model can be understood from the emergent degrees of freedom on the triangle centers. A set of charge variables can be defined at each triangle \[Q_{u}=\sum_{i\in\Delta}s_{i},\quad Q_{d}=-\sum_{i\in\nabla}s_{i}, \tag{8}\] where \(\Delta\) and \(\nabla\) denote the upward and downward triangles. The Hamiltonian can then be expressed as \[H=\frac{J}{2}\sum_{p\in\Delta(\nabla)}(Q_{p}^{2}-3) \tag{9}\] in terms of the topological charges \(Q_{p}\). 
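The charge rewriting can be verified configuration by configuration. The following minimal Python sketch (ours, not from the paper; the coupling, the temperature and the variable names are arbitrary choices) checks the identity behind Eq. (9) for a single triangle and shows how, at low temperature, the Boltzmann weight singles out the \(Q=\pm 1\) states of the "two up one down, one up two down" rule:

```python
import numpy as np
from itertools import product

J, beta = 1.0, 20.0   # AF coupling and a low temperature, chosen for illustration

for s1, s2, s3 in product([1, -1], repeat=3):
    Q = s1 + s2 + s3
    e_bonds = J * (s1*s2 + s2*s3 + s3*s1)   # bond energy of one triangle
    e_charge = 0.5 * J * (Q**2 - 3)         # charge form of Eq. (9)
    assert np.isclose(e_bonds, e_charge)    # the two expressions agree

# relative Boltzmann weights of the eight triangle configurations at low temperature
w = {conf: np.exp(-beta * J * (conf[0]*conf[1] + conf[1]*conf[2] + conf[2]*conf[0]))
     for conf in product([1, -1], repeat=3)}
wmax = max(w.values())
for conf, weight in w.items():
    print(conf, weight / wmax)   # the six Q = +-1 states keep weight 1, while the two
                                 # aligned Q = +-3 states are suppressed as exp(-4*beta*J)
```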
Although there seems to be no explicit interaction between charges in the Hamiltonian, the variables \(Q_{p}\) are not independent because the shared spin between the neighboring triangles should be the same. The constraints between neighboring charges can be naturally represented by a link between local tensors as a Kronecker delta tensor in the language of tensor networks. In this way, the interactions between Ising spins are transformed into a charge model including the self-energy of the charges and the effective interactions between these charges. The charge variables can take four values \(Q=\pm 1,\pm 3\) at finite temperatures. In the zero temperature limit, the charges of \(Q=\pm 3\) are energetically suppressed. The "two up one down, one up two down" rule corresponds to charge variables \(Q=\pm 1\) allowed by the ground state manifold. The emergent charge variables can also be applied to the triangular lattice in the same spirit as the case of the kagome lattice. The triangular AF Ising model in Fig. 3 (a) can be transformed into \[H=\frac{J}{2}\sum_{\langle i,j\rangle\in p}s_{i}s_{j}=\frac{J}{4}\sum_{p}\left( Q_{p}^{2}-3\right), \tag{10}\] where the only difference is that each nearest-neighbor triangles share two same spins. The charges variables help us to understand why the tiling of \(p\in\Delta(\nabla)\) is crucial for the triangular lattices [48]. The reason is that the tessellation of only one type of triangle fails to characterize the interactions between the emergent charge variables. Figure 1: The standard construction of the tensor network. (a) The \(W\) matrix represents the Boltzmann weight on each link, and the \(\delta\) tensor on each site represents the sharing of the same spin between neighboring \(W\) matrices. (b) The tensor network representation of the partition function composed of uniform local tensors. (c) The local tensor \(O\) is built by the singular value decomposition (SVD) on each \(W\) matrix and the grouping of the \(V\) matrices connecting to the \(\delta\) tensors. ### General principle for tensor network construction Now we can build up a general principle for the tensor network representation of frustrated spin models. The key point is that the emergent degrees of freedom should be encoded in each local tensor in the construction of the infinite tensor network for the partition function. Since the emergent degrees of freedom is universal in frustrated systems, the generic approach can be applied to classical frustrated systems of both discrete and continuous symmetries. Moreover, the finite-temperature properties can also be probed when the interactions among emergent degrees of freedom are faithfully captured. In practice, it is not necessary to write down the explicit model of the interactions between emergent variables. The effective interactions are implicit in the connections between local tensors. Each local tensor constituting the Boltzmann weight should carry the emergent degrees of freedom corresponding to a unit cluster of spins. From this perspective, the breakdown of standard construction in the triangular Ising model [48] can be understood: the emergent degrees of freedom located on the downward triangles are lost in the infinite tensor net Figure 3: Tensor network representations of the Ising anti-ferromagnet on a triangular lattice. (a) One of the massive degenerate ground state configurations. (b) The \(W_{\nabla}\) and \(W_{\Delta}\) tensors are defined on the center of the triangles. 
The pink \(\delta\) tensor represents a six-legged Kronecker delta tensor which connects the \(W_{\nabla}\) and \(W_{\Delta}\) tensors surrounding it. (c)-(d) The construction of row-to-row transfer matrix by splitting the six-legged \(\delta\) tensors vertically and regrouping the index of a pair of neighboring \(W_{\nabla}\) and \(W_{\Delta}\) tensor into an \(I^{\prime}\) tensor. (e)-(f) The construction of the local uniform tensor O by splitting \(I^{\prime}\) horizontally and grouping with \(\delta^{u}\) and \(\delta^{d}\) tensors. (g) The details of the operations on local tensors during the construction procedure. Figure 2: Tensor network representation of the AF Ising model on a kagome lattice. (a) One of the ground state configurations on the kagome lattice with \(Q=\pm 1\) charges on each triangle. (b) Putting the \(W_{\Delta}\) (\(W_{\nabla}\)) tensors on the centers of the upward (downward) triangles to represent the self-energy of the charge variables, where the \(\delta\) tensors between the nearest neighbor triangles can be translated into the connections of tensor legs directly. (c) The tensor network representation of the partition function composed of uniform local \(O\) tensors. (d) The construction of \(O\) tensor by contracting neighboring \(W_{\Delta}\) and \(W_{\nabla}\) tensors. work contraction. We summarize the general procedure to construct the tensor network representation of the frustrated spin models as follows: i). Identify the emergent degree of freedom, usually located on the dual site, and the corresponding geometry cluster composed of classical spins. ii). Reformulate the partition function into the form of \[Z=\sum_{\{s\}}\prod_{c}W_{c}(c)\prod_{\langle c,c^{\prime}\rangle}W_{l}(c,c^{ \prime})\delta_{c,c^{\prime}} \tag{11}\] where \(c\) enumerates all the clusters, \(W_{c}\) and \(W_{l}\) correspond to the Boltzmann weight of all the spin configurations \(\{s\}\) within a cluster and between neighboring clusters, and \(\delta\) tensors ensure the shared spins between different clusters be the same. For continuous spins, the \(W\) tensors should be transformed onto a discrete basis via the Fourier transformation. iii). Split and regroup the \(W\) tensors to build regular local tensors constituting an infinite uniform tensor network representation of the partition function. ### Kagome and triangular AF Ising models as two examples The general principle can be applied directly to classical frustrated models with discrete symmetries. The tensor network representation of the kagome AF Ising model (7) can be built simply based on the emergent charge variables defined in (8). As displayed in Fig. 2 (b), we first split the global Boltzmann weight into local Boltzmann weights on each triangle. Then the partition function of the AF Ising model can be written as \[Z=\sum_{\{s_{i}\}}\prod_{p}W_{p}(s_{1},s_{2},s_{3}), \tag{12}\] where the Boltzmann weight on each upward and downward triangle is expressed by a three-legged \(W\) tensor \[W_{p}(s_{1},s_{2},s_{3})=\mathrm{e}^{-\beta J(s_{1}s_{2}+s_{2}s_{3}+s_{3}s_{1 })}. \tag{13}\] The constraint of sharing the same spin between a pair of neighboring \(W\) tensors is imposed by the Kronecker delta tensor. Then the transitional invariant local tensor \(O\) is achieved by combining a pair of upward and downward triangles \[O^{s_{3},s_{4}}_{s_{1},s_{2}}=\sum_{s_{5}}W_{\Delta}(s_{1},s_{2},s_{5})W_{ \nabla}(s_{5},s_{3},s_{4}) \tag{14}\] as displayed in Fig. 
2 (d), and the uniform tensor network representation of the partition function in Fig. 2 (c) is given by \[Z=\mathrm{tTr}\prod_{i}O^{s_{3},s_{4}}_{s_{1},s_{2}}(i) \tag{15}\] where "tTr means the tensor contraction over all auxiliary links and \(i\) denotes the sites of the transitional invariant unit. The above tensor network can be contracted efficiently using standard algorithms for infinite systems with extremely high accuracy [52; 53; 55]. In the zero temperature limit, the tensor \(W\) can be reduced to the same tensor obtained in the Ref. [48], yielding a residual entropy of \(S_{0}\approx 0.501833\), consistent with the exact result [8]. For the triangular AF Ising model displayed in Fig. 3 (a), the tensor network representation can be constructed in a similar way. The only difference is that each spin is shared by six surrounding triangles. As shown in Fig. 3 (b), the constraint between the triangular plaquettes is realized through the six-legged delta tensors \[\delta_{s_{1},s_{2},s_{3},s_{4},s_{5},s_{6}}=\begin{cases}1,&s_{1}=s_{2}=s_{3} =s_{4}=s_{5}=s_{6}\\ 0,&\text{otherwise}\end{cases} \tag{16}\] and the tensor \(W\) is defined in the same way as the kagome AF Ising model Eq. (13). To construct a row-to-row transfer matrix, we split the six-legged delta tensors vertically as two four-legged delta tensors \[\delta_{s_{1},s_{2},s_{3},s_{4},s_{5},s_{6}}=\sum_{s_{7}=\pm 1}\delta^{u}_{s_{1}, s_{2},s_{3},s_{7}}\delta^{d}_{s_{7},s_{4},s_{5},s_{6}} \tag{17}\] as shown in Fig. 3 (c). Then a pair of \(W_{\Delta}\) and \(W_{\nabla}\) are grouped into a tensor \(I^{\prime}\) as shown in Fig. 3 (d). The tensor \(I^{\prime}\) can be further split horizontally as displayed in Fig. 3 (e) \[I^{\prime}=LR \tag{18}\] by a singular-value decomposition \[I^{\prime}=USV^{\dagger}, \tag{19}\] where \(U\) and \(V^{\dagger}\) are three-legged unitary tensors, \(S\) is a semi-positive diagonal matrix and \[L=U\sqrt{S},\quad R=\sqrt{S}V^{\dagger}. \tag{20}\] Finally, the regular local tensor \(O\) is obtained by grouping \(\delta^{u}\), \(\delta^{d}\), and a pair of \(L\) and \(R\) tensors. The details are depicted in Fig. 3 (g). This gives a uniform tensor-network representation of the partition function \[Z=\mathrm{tTr}\prod_{i}O^{s_{3},s_{4}}_{s_{1},s_{2}}(i) \tag{21}\] as displayed in Fig. 3 (f). Although the local tensor \(O\) is slightly different from the one constructed by the method of Hamiltonian tessellation [48], the tensor network is well defined and can be readily generalized to frustrated systems with continuous symmetries discussed in the following parts. As shown in Fig. 4 (a), standard contraction algorithms [52; 53; 60] display a nice convergence at both zero temperature and finite temperatures. The numerical calculation of the expectation value of the magnetization \[m=\langle s_{i}\rangle=\frac{1}{N}\sum_{i}s_{i} \tag{22}\] is found to be zero under all temperatures, indicating the absence of the long-range order (LRO). Moreover, the ground state residual entropy is calculated as displayed in Fig. 4 (b) \[S_{0}=\frac{1}{N}\ln Z_{0}\approx 0.323065, \tag{23}\] in good agreement with the exact result [7]. ## III Tensor network theory for 2D fully frustrated XY spin models ### Duality transformation and split of \(U(1)\) spins In this section, we demonstrate the power of the generic idea of emergent degrees of freedom by the implementations in the frustrated model with a continuous \(U(1)\) symmetry. The frustrated XY models, to some extent, are "less frustrated" than the Ising ones. 
The XY spins have more freedom to rotate on the plane to minimize local conflict interactions, but the Ising spins are constrained to only two orientations. That is why there exists quasi-LRO in the frustrated XY spin models at low temperatures, while the frustrated Ising models are usually disordered even at zero temperature. Despite a long history of investigations [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40], many properties of the frustrated XY spin systems are still not well understood. In both frustrated and non-frustrated XY models, a widely accepted and established analytical tool is the 2D Coulomb gas representation [61; 62; 63]. However, the form of Coulomb gas formulation is obtained through an approximate approach [61; 62] and it is hard to directly represent the charge variables by original phase variables [22; 64]. Instead, we can comprehend the topological charge, located on the dual sites, as a coarse-grained degree of freedom formed by a cluster of phase variables located on the original plaquette. This understanding serves as a fundamental perspective for constructing the tensor network of the frustrated XY spin models. Our tensor network approach provides a universal tool to deal with frustrated systems on various lattice geometries. We can reformulate the partition function into a general form of in the same way as the Ising case \[Z=\prod_{i}\int\frac{d\theta_{i}}{2\pi}\prod_{p}W_{p}(\{\theta_{p}\}) \tag{24}\] where \(p\) denotes the plaquette of the lattice and \(W_{p}\) corresponds to the Boltzmann weight of the elementary cluster. However, different from the Ising case studied in the Ref. [48], one may encounter two technical issues when constructing a tensor network based on (24). First, the indices of local tensors are continuous spin variables, which is hard to treat in the framework of tensor networks. So the Fourier transformation is necessary to bring the local tensors onto a discrete basis. Second, the Kronecker delta functions describing the constraints of the sharing spins are changed to the Dirac delta functions. For the Ising spin cases, the shared spins are split and connected directly by the Kronecker delta functions. Such a strategy cannot be simply extended for the case of continuous spins because the loops of the Dirac delta functions are not well defined. This problem can be overcome by introducing an auxiliary spin connecting to the shared spins between different clusters. ### AF XY spin model on a kagome lattice To describe the Josephson junction array under a uniform external magnetic field [9; 37], the frustrated XY model on a kagome lattice in Fig. 5 (a) is defined by the Hamiltonian \[H=-J\sum_{\langle i,j\rangle}\cos(\theta_{i}-\theta_{j}-A_{ij}) \tag{25}\] where \(J>0\) is the coupling strength, \(i\) and \(j\) are the lattice sites, and the summation is over all pairs of the nearest neighbors. The frustration in this model is induced by the gauge field defined on the lattice bond satisfying \(A_{ij}=-A_{ji}\). The case of full frustration corresponds to Figure 4: Numerical results of the Ising anti-ferromagnet on the triangular lattice, the bond dimension of uniform MPS is \(D=100\). (a) Convergence of the VUMPS algorithm at \(T=1\) and \(T=0\). \(|g|\) is the convergence measure in the VUMPS algorithm and \(N\) is the number of iteration steps. (b) \(\ln Z_{0}\) as a function of temperature. 
The residual entropy per site is \(S_{0}(D=100)=0.3230659\), which is the same as the exact result to seven decimal places. one-half flux quantum per plaquette, \[f=\frac{1}{2\pi}\sum_{\langle i,j\rangle\in\Delta}A_{ij}=\frac{1}{2}, \tag{26}\] where the sum is taken around the perimeter of a plaquette. We can choose the fixed gauge condition of \(A_{ij}=\pm\pi\) on each bond of the triangular plaquettes, and the model is transformed into an AF XY model on the kagome lattice \[H=J\sum_{i,j}\cos(\theta_{i}-\theta_{j}). \tag{27}\] The ground state of this model can be obtained by simultaneously minimizing the energy on each elementary triangle. As shown in Fig. 5(a), the phase difference between each pair of neighboring spins should be \(\pm 2\pi/3\). which gives rise to the emergent degrees of freedom of chiralities \(\tau=\pm 1\), corresponding to the anti-clockwise and clockwise rotation of the spins around the plaquette. The ground state of the AF XY model on a kagome lattice has a massive accidental degeneracy described by the fluctuations of the chiralities. To capture the emergent degrees of freedom induced by frustrations in the construction of the tensor network, we divide the Hamiltonian into local terms on each triangle: \[H=\sum_{p}H_{p}, \tag{28}\] where \(H_{p}\) includes all the interactions within an elementary triangle \[H_{p}=J\sum_{\langle i,j\rangle\in p}\cos(\theta_{i}-\theta_{j}). \tag{29}\] The partition function can now be written as \[Z=\prod_{i}\int\frac{d\theta_{i}}{2\pi}\prod_{p}W_{p}, \tag{30}\] where \(W_{p}=\mathrm{e}^{-\beta H_{p}}\) is a three-legged tensor with continuous \(U(1)\) indices and the constraint of sharing the same spin at the corners is realized by the Dirac delta function \(\delta(\theta_{i}-\theta^{\prime}_{i})\), as shown in Fig. 5 (b). To transform the local tensors onto a discrete basis, we employ the duality transformation to the whole upward triangles \[I_{n_{1},n_{2},n_{3}}=\prod_{i=1}^{3}\int\frac{d\theta_{i}}{2\pi}W_{\Delta}( \theta_{1},\theta_{2},\theta_{3})U_{n_{1}}(\theta_{1})U_{n_{2}}(\theta_{2})U _{n_{3}}(\theta_{3}),\] and the downward triangles \[I^{\prime}_{n_{1},n_{2},n_{3}}=\prod_{i=1}^{3}\int\frac{d\theta_{i}}{2\pi}W_{ \nabla}(\theta_{1},\theta_{2},\theta_{3})U^{\dagger}_{n_{1}}(\theta_{1})U^{ \dagger}_{n_{2}}(\theta_{2})U^{\dagger}_{n_{3}}(\theta_{3}),\] where \[U_{n}(\theta)=\mathrm{e}^{-in\theta} \tag{31}\] are the basis of the Fourier transformation. Since \(W_{p}\) is unchanged under the spin reflection of \(\theta\rightarrow-\theta\), we have \(I_{n_{1},n_{2},n_{3}}=I^{\prime}_{n_{1},n_{2},n_{3}}\) as displayed in Fig. 5 (c). Meanwhile, the duality transformation on the Dirac delta function gives the Kronecker delta function \[\int\frac{d\theta}{2\pi}U^{\dagger}_{n_{1}}(\theta)U_{n_{2}}(\theta)=\delta_{ n_{1},n_{2}}. \tag{32}\] Finally, the translation-invariant local tensor \(O\) is achieved by combining a pair of \(I\) tensors and we arrive at the the uniform tensor network representation of Figure 5: Tensor network representation of the fully frustrated XY model on the kagome lattice. (a) One of the ground state configurations. The positive and minus signs denote the chiralities on the triangular plaquettes. (b) The tensor network with continuous indices. The \(W_{\Delta}\) and \(W_{\nabla}\) tensors represent the Boltzmann weight on up and down-type triangles. The \(\delta\) matrix represents the Dirac delta function. 
(c) The construction of tensor network with discrete indices by making Fourier transformation on each triangle plaquette and integrating out \(\{\theta\}\) variables. (d) The tensor network representations composed of local uniform tensor \(O\), where the \(O\) tensor is built by combining two neighboring \(I\) tensors the partition function \[Z=\mathrm{t}\mathrm{Tr}\prod_{i}O_{s_{1},s_{2}}^{s_{3},s_{4}}(i) \tag{33}\] as shown in Fig. 5 (d). In fact, the same tensor network has been also obtained in a less straightforward way with the help of the infinite summation, where the interactions between emergent variables can be seen clearly [40]. A direct comparison to the problematic standard construction in Ref. [40] demonstrates the importance of encoding the emergent degree of freedom in the local tensors: besides the proper Hamiltonian tessellation, the duality transformation is also necessary to capture the essential physics of the chiralities. In the framework of tensor networks, the entanglement entropy of the fixed-point MPS for the 1D quantum correspondence exhibits singularity at the critical temperatures, offering a sharp criterion to determine possible phase transitions in the thermodynamic limit. As shown in Fig. 6, by employing the tensor network method outlined in the Appendix, the entanglement entropy \(S_{E}\) develops only one sharp singularity at the critical temperature \(T_{c}\simeq 0.075J\), indicating that a single BKT phase transition takes place at a rather low temperature. The peak positions are almost unchanged with different MPS bond dimensions ranging from \(D=60\) to \(120\). Thus, the transition temperature is determined with high precision, which is in good agreement with theoretical predictions for the unbinding temperature of \(1/3\) vortex pairs [36; 38; 40]. The low-temperature phase of the model can be interpreted as the presence of charge-\(6\)e superconductivity (SC) in the absence of charge-\(2\)e SC [40]. ### Fully Frustrated XY spin model on a square lattice The fully frustrated XY (FFXY) spin model on a 2D square lattice can be defined with gauge fields on the lattice bonds \[H=-J\sum_{\langle i,j\rangle}\cos(\theta_{i}-\theta_{j}-A_{ij}), \tag{34}\] where the full frustration corresponds to the uniform gauge field of \(A_{ij}=\pi/4\) on each bond of the square plaquettes. As displayed in Fig. 7 (b), the minimum of the Hamiltonian is obtained when all gauge-invariant phase differences between nearest-neighbor spins \(\phi_{i,j}=\theta_{i}-\theta_{j}-A_{ij}\) equal to \(\pm\pi/4\). The ground state can be characterized by a checkerboard pattern of chiralities \(\tau=\pm 1\) defined by \(\sum_{\square}\phi_{i,j}=\tau\pi\). Another degenerate state can be obtained by switching the positive and negative chiralities. Therefore, in addition to the \(U(1)\) symmetry, the chiralities give rise to an emergent \(Z_{2}\) degeneracy of the ground state of the FFXY model on a square lattice [41; 42; 65]. To obtain the tensor network representation of the partition function, we first divide the global Hamiltonian Figure 6: The entanglement entropy as a function of temperature under different MPS bond dimensions for the AF XY spin model on the kagome lattice. Figure 7: (a) The fully frustrated XY model on a square lattice. The arrows on the links correspond to the gauge field \(A_{ij}\) with the value of \(\pm\frac{\pi}{4}\). The sign of \(A_{ij}\) is denoted by the direction of the arrow. 
(b) The ground state of the FFXY model on a square lattice with a checkboard pattern of chirality. (c) The ground state of the modified XY model for \(\frac{\pi}{4}<\frac{\pi}{4}\). (d) The ground state of the modified XY model for \(\frac{\pi}{4}>\frac{\pi}{8}\). The \(0,\pm\) signs correspond to the topological charges located on the centers of the plaquettes. into a tessellation of local Hamiltonian on each square where the emergent variables live \[H=\sum_{\Box}H_{\Box}, \tag{35}\] and the local cluster of interactions is given by \[H_{\Box}=-\frac{J}{2}\sum_{\langle i,j\rangle\in\Box}\cos(\theta_{i}-\theta_{j} -A_{ij}). \tag{36}\] Then the tensor network can be expressed as a product of local Boltzmann weights on each plaquette as shown in Fig. 8 (a) \[Z=\prod_{i}\int\frac{d\theta_{i}}{2\pi}\prod_{\Box}W_{\Box} \tag{37}\] where \(W_{p}(\theta_{1},\theta_{2},\theta_{3},\theta_{4})=\mathrm{e}^{-\beta H_{\Box}}\) is a four-legged tensor with a continuous \(U(1)\) indices. Different from the corner-shared case of the kagome lattice, particular attention should be paid to the split of the shared spins among four square plaquettes. To avoid the formation of loops of the Dirac delta functions among four \(W\) tensors \[\delta(\theta_{a}-\theta_{b})\delta(\theta_{b}-\theta_{c})\delta(\theta_{c}- \theta_{d})\delta(\theta_{d}-\theta_{a})\] with \(\theta_{a}\), \(\theta_{b}\), \(\theta_{c}\) and \(\theta_{d}\) representing the four replicas of the shared spin, we put an auxiliary spin \(\theta_{i}^{\prime}\) connecting to the shared spins \[\delta(\theta_{a}-\theta^{\prime})\delta(\theta_{b}-\theta^{\prime})\delta( \theta_{c}-\theta^{\prime})\delta(\theta_{d}-\theta^{\prime})\] in a star shape as shown in Fig. 8 (b). Then we transform the local tensor \(W_{p}\) to the discrete basis \[I_{n_{1},n_{2},n_{3},n_{4}} = \prod_{i=1}^{4}\int\frac{d\theta_{i}}{2\pi}W_{\Delta}(\theta_{1}, \theta_{2},\theta_{3},\theta_{4}) \tag{38}\] \[\cdot U_{n_{1}}(\theta_{1})U_{n_{2}}(\theta_{2})U_{n_{3}}(\theta _{3})U_{n_{4}}(\theta_{3}),\] where \(U_{n}(\theta)\) are the Fourier basis defined in (31). As shown in Fig. 8 (f), the constraint of the star-shaped Dirac delta functions (38) can be reduced to a four-legged Kronecker delta tensor via \[\delta_{n_{1}+n_{2}+n_{3}+n_{4},0}=\int\frac{d\theta^{\prime}}{2\pi}U_{n_{1}} (\theta^{\prime})U_{n_{2}}(\theta^{\prime})U_{n_{3}}(\theta^{\prime})U_{n_{4} }(\theta^{\prime})\] characterizing the conservation law of \(U(1)\) charges. As a result, we get the tensor network representation composed of local tensors of discrete indices as displayed in Fig. 8 (c). Furthermore, the \(\delta\) tensors are split vertically as shown in Fig. 8 (d), \[\delta_{n_{1}+n_{2}+n_{3}+n_{4},0}=\sum_{n_{5}}\delta^{u}_{n_{1}+n_{2}-n_{5}, 0}\delta^{d}_{n_{3}+n_{4}+n_{5},0} \tag{39}\] Figure 8: Tensor network representation of the FFXY model on a square lattice. (a) The tensor network with continuous indices, where the \(W\) tensors account for the Boltzmann weight on each square and the pink dot tensor accounts for the integration of the shared \(\theta\) variables among four plaquettes. The black dotted line denotes the original square lattice. (b) The auxiliary spin \(\theta^{\prime}\) connecting the copied spins of four nearby plaquettes, the \(\delta\) matrices represent the Dirac delta functions. (c) The tensor network with discrete indices obtained from Fourier transformations on the \(W\) tensors and the integrations on the \(\theta\) variables. 
(d) The row-to-row transfer matrix built by splitting the \(\delta\) tensors vertically. (e) The uniform tensor network representation composed of local tensor \(O\). (f) The details of the operations on the local tensors in the construction of the tensor network. and the \(I\) tensors are decomposed horizontally by SVD \[I_{n_{1},n_{2},n_{3},n_{4}}=\sum_{n_{5}}L_{n_{1},n_{2},n_{5}}R_{n_{5},n_{3},n_{4}} \tag{40}\] as displayed in Fig. 8 (f). Finally, the regular local tensor \(O\) in the uniform tensor network of Fig. 8 (e) is obtained by grouping the relevant component tensors. One might rotate the network in Fig. 8 (c) by 45 degrees and group the local tensors in the red dotted line to directly make up a four-legged translation-invariant local tensor. However, the standard contraction algorithms fail to converge under this construction because the linear transfer matrix is non-Hermitian. Another key insight is that such a construction does not take into account the checkerboard-like ground state configurations, where only two chiralities are included in the transitional unit. Actually, although the procedure of the construction is different, the tensor network in Fig. 8 turns out to share the same transfer matrix as the one obtained in the Ref. [24]. To prove it, we split the \(U(1)\) spins in the vertical direction using the relation \[\int d\theta_{i}f(\theta_{i})=\iint d\theta_{i}d\theta_{i}^{\prime}\delta( \theta_{i}-\theta_{i}^{\prime})f(\theta_{i}^{\prime}), \tag{41}\] where the spin \(\theta_{i}^{\prime}\) is a copy of spin \(\theta_{i}\) connected by the Dirac delta function as shown in Fig. 9 (a). The delta tensor on a link can be further decomposed by the Fourier basis \[\delta(\theta-\theta^{\prime})=\frac{1}{2\pi}\sum_{n}U_{n}^{\dagger}(\theta^{ \prime})U_{n}(\theta), \tag{42}\] as displayed in Fig. 9 (b). Now we can define the row to row transfer matrix as three stripes of \(U\), \(W\) and \(U^{\dagger}\) tensors as shown in Fig. 9 (c). It is easy to see that the transfer matrix is Hermitian just like the one constructed in Ref. [24] since the \(W\) tensors are real and symmetric. Using the Fourier transformation again, we get the same \(I\) and \(\delta\) tensors in Fig. 9 (c) as those displayed in Fig. 8 (d). Once the proper tensor network representation is obtained, the numerical calculations can be efficiently performed as illustrated in the Appendix. As shown in Fig. 10, the entanglement entropy \(S_{E}\) develops two sharp singularities at two critical temperatures \(T_{c1}\) and \(T_{c2}\), which strongly indicates the existence of two phase transitions at two different temperatures. As the singularity positions vary with the MPS bond dimension \(D\), the critical temperatures \(T_{c1}\) and \(T_{c2}\) can be determined precisely by extrapolating the bond dimension \(D\) to infinite. Moreover, we find that the critical temperatures \(T_{c1}\) and \(T_{c2}\) exhibit different scaling behaviors in the linear extrapolation, implying that the two phase transitions belong to different kinds of universality classes. The lower transition temperature \(T_{c1}\) varies linearly on the inverse square of the logarithm of the bond dimension, while the higher transition temperature \(T_{c2}\) has a linear variance with the inverse bond dimension. The different scaling behavior agrees well with the different critical behavior of the BKT and 2D Ising universality classes [24]. 
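In practice, this extrapolation amounts to two linear fits; the short numpy sketch below illustrates it, where the input arrays are the singularity positions read off from the \(S_{E}(T)\) scans at each bond dimension and the scaling forms are the ones quoted above (the choice of fit window is left to the user).

```python
import numpy as np

def extrapolate_critical_temperatures(D, Tc1_D, Tc2_D):
    """Extrapolate the two entanglement-entropy singularities to D -> infinity.
    Assumes T_c1 scales linearly in 1/ln(D)^2 (BKT-like singularity) and
    T_c2 linearly in 1/D (Ising-like singularity), as described in the text."""
    D = np.asarray(D, dtype=float)
    x1 = 1.0 / np.log(D) ** 2   # BKT-like scaling variable
    x2 = 1.0 / D                # Ising-like scaling variable
    # np.polyfit returns [slope, intercept]; the intercept is the x -> 0 limit,
    # i.e. the estimate at infinite bond dimension
    Tc1_inf = np.polyfit(x1, np.asarray(Tc1_D, dtype=float), 1)[1]
    Tc2_inf = np.polyfit(x2, np.asarray(Tc2_D, dtype=float), 1)[1]
    return Tc1_inf, Tc2_inf
```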
Figure 9: (a) Tensor network representation of the partition function with the split of \(U(1)\) phase variables vertically. The \(\delta\) matrices represent the Dirac delta functions. (b) The decomposition of the Dirac delta function into \(U\) matrix and \(U^{\dagger}\) matrices. (c) The row-to-row transfer matrix on the continuous basis and the discrete basis, respectively. Figure 10: The entanglement entropy for the FFXY model on the square lattice develops two singularities indicating the existence of two phase transitions with the increasing of MPS bond dimensions. ### AF XY spin model on a triangular lattice The frustrated XY spin model on a triangular lattice under a fixed gauge condition of \(A_{ij}=\pi\) on each triangular plaquette can be transformed into an AF XY spin model. As shown in Fig. 11 (a), the angle between each pair of the nearest-neighbor spins should be \(\pm 2\pi/3\) to achieve the minimum of the ground state energy. Like the FFXY model on the square lattice, the elementary triangular plaquettes can be characterized by alternating chiralities of \(\tau=\pm 1\). The translation-invariant unit of the spin configuration forms a \(3\times 3\) cluster larger than the original lattice. The tensor network can be constructed in the same way as the FFXY spin model on the square lattice. First, we decompose the Hamiltonian into local terms on each triangle \[H=\sum_{p}H_{p},\ \ \ \ H_{p}=\frac{J}{2}\sum_{\langle i,j\rangle\in p}\cos( \theta_{i}-\theta_{j}). \tag{43}\] The partition function can be expressed as a product of local Boltzmann weights \[Z=\prod_{i}\int\frac{d\theta_{i}}{2\pi}\prod_{p}W_{p}, \tag{44}\] where \(W_{p}=\mathrm{e}^{-\beta H_{p}}\) defined on the centers of the triangles are three-legged tensors sharing the same \(U(1)\) spin at the joint corners as shown in Fig. 11 (b). Then the \(W\) tensors and the Dirac delta functions are transformed onto a discrete basis by the Fourier transformations, as displayed in Fig. 11 (c). To achieve a transition-invariant unit, we take a parallelogram cell circled by the red line and reorganize the local tensors within it. As shown in Fig. 11 (g), the six-legged delta tensor is decomposed into three smaller delta tensors \[\delta_{n_{1}+n_{2}+n_{3}+n_{4}+n_{5}+n_{6},0}\] \[= \sum_{m_{1},m_{2}}\delta_{n_{1}+n_{2},m_{1}}\delta_{m_{1}+n_{3}+m _{2}+n_{6},0}\delta_{n_{4}+n_{5},m_{2}},\] where the bond dimension of the \(m\)-indexed leg is bigger than the \(n\)-indexed leg denoted by a thicker line. At the same time, a pair of \(I_{\Delta}\) and \(I_{\nabla}\) tensors are grouped together into a four-legged \(I\) tensor \[I_{n_{1},m_{2},n_{3},m_{4}} = \sum_{n_{2},n_{4},n_{5},n_{6}}\delta_{n_{2}+n_{4},m_{2}}\] \[(I_{\Delta})_{n_{1},n_{2},n_{5}}\delta_{n_{5}+n_{6},m_{4}}(I_{ \nabla})_{n_{3},n_{4},n_{6}},\] and the tensor network is transformed to a relatively structured form in Fig. 11 (d). Following the same procedure of a vertical split of the \(\delta\) tensors and a horizontal split of the \(I\) tensors, we obtain the uniform tensor network in Fig. 11 (f). Note that the Fourier transformation must be performed on each triangular plaquette first to ensure the emergence of the dual variables. Otherwise, if we directly choose a parallelogram including a pair of neighboring triangles and then build the tensor network based Figure 11: The tensor network representation of the FFXY model on a triangular lattice. (a) One of the ground state spin configurations. 
The chiralities denoted by plus and minus signs on the centers of the triangular plaquettes form an AF pattern. (b) The tensor network with continuous indices. (c) The tensor network is transformed onto a discrete basis through the Fourier transformation. A parallelogram unit cell is circled in the red line. (d) The \(I\) tensors is constructed by grouping a pair of \(I\)s and \(I_{\Delta}\) tensors. (e) The vertical split of the \(\delta\) tensors into \(\delta^{d}\) and \(\delta^{u}\) and the horizontal split of the \(I\) tensors into \(L\) and \(R\). (f) The tensor network representation composed of uniform local \(O\) tensors. (g) Details of the transformations of local tensors. on the local Boltzmann weight of \[W_{\not{\bigtriangleup}}(\theta_{1},\theta_{2},\theta_{3},\theta_{4})\] \[=\exp\Big{\{}-\frac{\beta J}{2}[\cos(\theta_{1}-\theta_{2})+\cos( \theta_{2}-\theta_{3})\] \[+\cos(\theta_{3}-\theta_{4})+\cos(\theta_{4}-\theta_{1})+2\cos( \theta_{1}-\theta_{3})]\Big{\}},\] the infinite contraction of the tensor network will not give the right results. The reason is that the construction of local tensors with a finite bond cut-off can be regarded as a coarse-grained procedure that should be performed exactly on the clusters of spin corresponding to the emergent degrees of freedom. As shown in Fig. 12, the entanglement entropy \(S_{E}\) also develops two sharp singularities at two critical temperatures \(T_{c1}\) and \(T_{c2}\), and the critical temperatures have the same scaling behavior as the FFXY model on the square lattice. From the linear extrapolation, the critical temperatures are estimated to be \(T_{c1}\simeq 0.5060J\) and \(T_{c2}\simeq 0.5116J\). The critical temperature \(T_{c1}\) agrees well with previous Mont Carlo results [66] obtained by BKT fitting and \(T_{c2}\) is slightly lower than a recent estimation [66; 67] of \(T_{c2}\simeq 0.512J\). The properties of the two distinct phase transitions can be further elucidated through the thermodynamic quantities. The results of the specific heat are presented in Fig. 13 (a). Around the critical temperature \(T_{c1}\), the specific heat exhibits a small bump, indicating a higher-order continuous phase transition. By comparison, the specific heat displays a sharp divergence at \(T_{c2}\), implying a second-order phase transition. For the high-temperature side \(T>T_{c2}\), the specific heat can be fitted well by the logarithmic behavior of a second-order Ising transition. The specific heat between \(T_{c1}\) and \(T_{c2}\) does not fit well with the logarithmic form due to the close proximity of the two transitions. The breaking of \(Z_{2}\) symmetry at \(T_{c2}\) can be demonstrated by the expectation values of the chiralities. As shown in Fig. 13 (b), below the critical temperature \(T_{c2}\), the chiral order parameter \[m=\frac{1}{N}\sum_{p}(-1)^{x+y}\tau_{p} \tag{45}\] associated with the chiral degrees of freedom establishes a non-zero value, corresponding to the checkerboard pattern of chirality on upward and downward triangles. When approaching the critical temperature \(T_{c2}\) from the low-temperature side, the order parameter vanishes continuously as \(m\sim t^{\beta}\) with \(t=1-T/T_{c2}\). The critical exponent \(\beta\simeq 0.1238\) is in good agreement with the critical exponent \(\beta=1/8\) for the 2D Ising universality class. 
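Once the plaquette chiralities are measured from the contracted network (via the impurity tensors of the Appendix), the order parameter of Eq. (45) and the exponent fit reduce to a few lines of numpy; the sketch below assumes the chiralities are collected into a 2D array with one entry per elementary plaquette.

```python
import numpy as np

def staggered_chirality(tau):
    """Chiral order parameter of Eq. (45): tau is an Lx-by-Ly array of plaquette
    chiralities tau_p = +-1; the staggered factor (-1)^(x+y) picks out the
    checkerboard (AF) pattern of chirality."""
    x, y = np.indices(tau.shape)
    return float(np.mean(((-1.0) ** (x + y)) * tau))

def ising_beta_exponent(T, m, Tc2):
    """Fit m ~ t^beta with t = 1 - T/Tc2 from order-parameter data below Tc2;
    the choice of fit window near the transition is left to the caller."""
    t = 1.0 - np.asarray(T, dtype=float) / Tc2
    m = np.asarray(m, dtype=float)
    mask = (t > 0) & (m > 0)
    beta_exp = np.polyfit(np.log(t[mask]), np.log(m[mask]), 1)[0]  # slope = beta
    return beta_exp
```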
The nature of the phase transition at \(T_{c1}\) can be revealed in the change of the behavior of the spin-spin correlation functions defined as \[G(r)=\langle\cos(\theta_{i}-\theta_{i+r})\rangle. \tag{46}\] A comparison of correlation functions below and above \(T_{c1}\) is displayed in Fig. 13(c) and (d). Below \(T_{c1}\), the spin-spin correlation function exhibits a power-law decay, implying a close binding between vortices and anti-vortices. In contrast, for \(T>T_{c1}\) the correlation function displays an exponential decay, indicating the destruction of phase coherence between vortices due to the unbinding of vortex pairs. Thus, the phase transition at \(T_{c1}\) belongs to the universality class of the BKT transition. Figure 12: For the AF XY model on the triangular lattice, the entanglement entropy as a function of temperature develops two peaks when the MPS bond dimension \(D\) is increased. Inset: The singularity temperatures \(T_{c1}\) and \(T_{c2}\) of the entanglement entropy fitted for MPS bond dimensions from \(D=80\) to \(160\). Figure 13: (a) The specific heat shows a small bump around \(T_{c1}\) but a logarithmic divergence at \(T_{c2}\). (b) The symmetry breaking of chirality at \(T_{c2}\). The inset is the fitting of the Ising critical exponent. (c) The spin-spin correlation function shows a power-law decay below \(T_{c1}\) (d) The spin-spin correlation function shows an exponential decay above \(T_{c1}\). ### Modified XY model on a square lattice The unified tensor network methods can be employed in the study of frustrated spin models with more complex interactions. One such model is the modified XY model defined on a 2D square lattice [68; 69] \[H=-J\sum_{\langle i,j\rangle}\cos(\theta_{i}-\theta_{j})-\mu\sum_{p}\tau_{p}^{2}, \tag{47}\] where the first term is the original XY model of ferromagnetic coupling \(J>0\), and the second term tunes the vortex fugacity through the chemical potential \(\mu\). The spin current circulating around each single square plaquette is defined as \[\tau_{p}=\sum_{\langle i,j\rangle\in p}\sin(\theta_{i}-\theta_{j}).\] It is well-known that the original XY spin model can be mapped into an interacting Coulomb gas with a vortex-core energy fixed in the low-density limit [61; 62]. And the underlying physics at large vortex density is of general interest both theoretically and experimentally. In the area of theoretical investigations, the possible extension of BKT theory under a large vortex fugacity was discussed, where non-BKT behavior and the occurrence of first-order transition were proposed [63; 70; 71; 72]. Actually a generalization of 2D XY spin model with a "crossed-product" operator acting on the plaquettes had been introduced to adjust the core energy of the vortices [73]. Subsequently, the numerical explorations of a Coulomb gas model on the square and triangular lattices as well as in the continuous limit showed a rich phase diagram with novel critical behaviors of an ordered-charge lattice [74; 75; 76]. Moreover, the similar physics has been investigated in 3D XY spin models, where a term acting on the plaquette was introduced to regulate the energy of vortex strings [77; 78]. The experiments in superconducting thin films revealed a significant deviation of the vortex-core energy from the predictions in the original XY model [79]. It was found that an accurate consideration of the vortex-core energy is of great importance for the experimental identification of the BKT transition [80]. 
Apart from the widely known superfluid phase and normal phase, the measurement of the third sound mode in \({}^{4}\)He thin films suggested the existence of a new phase [81]. To provide a theoretical explanation for this phenomenon, researchers have proposed a fascinating concept involving the formation of a lattice composed of vortices and anti-vortices, with a remarkably low vortex core energy [82; 83]. The existence of vortex-antivortex lattice has also been proposed in other systems such as ultra-cold atoms [84] and polariton fluids [85]. To understand the role of the modified interaction term \(\tau_{p}\), we can make a simple analysis of the ground state. The ground state structure can be determined by the ratio of \(\mu/J\) tuning the spin currents in the system which effectively modulates the vortex fugacity. As illustrated in Fig. 7(c)-(d), when \(\mu/J<1/8\), the ground state is identical to that at \(\mu=0\), corresponding to the ground state of the original XY model where all spins align parallel to each other. As we further increase the chemical potential to \(\mu/J>1/8\), the ground state is characterized by maximizing \(\tau_{p}\) on each plaquette, resulting in a phase difference of \(\phi_{12}=\phi_{23}=\phi_{34}=\phi_{41}=\pm\pi/2\). This ground state has the same ground state degeneracy as the FFXY spin model on a square lattice. From the perspective of vorticity, the ground state at \(\mu/J<1/8\) has zero vorticity at each plaquette termed as the vortex vacuum state, whereas the ground state at \(\mu/J>1/8\) has a checkerboard pattern of vorticity equal to \(\pm 1\) called the vortex-antivortex crystal. Hence, the zero-temperature ground state structure of the modified XY spin model is analogous to the 2D dense coulomb gas on the square lattice [74]. The square term \(\tau_{p}^{2}\) gives rise to multiple types of interaction including the nearest-neighbor interactions, next-nearest-neighbor interactions, and four-body interactions. Although it seems difficult to treat the four-body interactions, there is still a well-defined vorticity on each plaquette from the viewpoint of emergent degrees of freedom. Therefore we can choose each square plaquette as Figure 14: The global phase diagram of the modified XY spin model. The BKT transition point A of the original XY spin model is determined as \((0.893,0)\). The exact solvable point B between the vortex vacuum phase and vortex lattice phase at zero temperature is given by \((0.0,0.125)\). As the temperature increases, depending on the chemical potential, the vortex lattice can melt through three possible way. Below the point C at \((0.64,0.142)\), the vortex lattice experiences a first-order transition into the vortex vacuum phase and then undergoes a BKT transition into the disordered phase, while the CD line is the first-order transition. The point D is a tricritical point determined as \(\mu/J\simeq 0.21\). Above this point D, the first order transition line is separated into two extremely close transition lines, belonging to the BKT transition and Ising transition, respectively. Inset shows the enlarged results around the point \(D\). an elementary cluster and replace the \(H_{p}\) and \(W_{p}\) by \[H_{p}=-\frac{J}{2}\sum_{(i,j)\in p}\cos(\theta_{i}-\theta_{j})-\mu\tau_{p}^{2}, \quad W_{p}=\mathrm{e}^{-\beta H_{p}}. \tag{48}\] Then the tensor network of the partition function can be constructed following the procedure outlined in Fig. 8. 
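As a concrete illustration of this step, the sketch below builds the plaquette Boltzmann weight of Eq. (48), including the vortex-fugacity term, and Fourier-transforms it into a four-legged tensor with truncated \(U(1)\) charge indices, in direct analogy with Eq. (38). The quadrature grid, the charge cutoff, and the phase convention \(U_{n}(\theta)=\mathrm{e}^{in\theta}\) are illustrative choices for this sketch rather than the values used in our calculations.

```python
import numpy as np

def modified_xy_plaquette_tensor(beta, J=1.0, mu=0.3, n_max=3, n_theta=32):
    """Fourier transform of the plaquette Boltzmann weight W_p = exp(-beta*H_p)
    of Eq. (48) into a four-legged tensor with U(1) charge indices |n| <= n_max.
    The theta integrals are approximated by a uniform grid of n_theta points."""
    thetas = 2.0 * np.pi * np.arange(n_theta) / n_theta
    t1, t2, t3, t4 = np.meshgrid(thetas, thetas, thetas, thetas, indexing="ij")
    # bonds (1,2), (2,3), (3,4), (4,1) around one square plaquette; the factor
    # J/2 accounts for each bond being shared by two plaquettes
    bonds = [(t1, t2), (t2, t3), (t3, t4), (t4, t1)]
    H = -0.5 * J * sum(np.cos(a - b) for a, b in bonds)
    tau_p = sum(np.sin(a - b) for a, b in bonds)   # spin current around the plaquette
    H -= mu * tau_p ** 2                           # vortex-fugacity term of Eq. (48)
    W = np.exp(-beta * H)
    # Fourier basis U_n(theta) = exp(i*n*theta); the 1/n_theta factor stands in
    # for the d(theta)/(2*pi) integration measure on the discretized circle
    ns = np.arange(-n_max, n_max + 1)
    U = np.exp(1j * np.outer(ns, thetas)) / n_theta
    return np.einsum("abcd,ia,jb,kc,ld->ijkl", W, U, U, U, U, optimize=True)
```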
The singular behavior of the entanglement entropy corresponding to the 1D transfer operator offers a sharp criterion to determine all possible phase transitions in the thermodynamic limit and the complete phase diagram is thus determined as presented in Fig. 14. In the upper plane of the phase diagram, the entanglement entropy along the chemical potential \(\mu=0.3J\) is displayed in Fig. 15 (a). There exist two distinct peaks, corresponding to the BKT and Ising transition, respectively. These two phase transitions are extremely close to each other as shown by the zoomed inset in Fig. 14. Upon further reducing the chemical potential to \(\mu\simeq 0.20J\), two separated peaks merge into a single peak, as displayed in Fig. 15 (b). The merging point is denoted as the point \(D\) in the global phase diagram. The low-temperature phase with large \(\mu\) is called the vortex-lattice phase due to the checkerboard pattern of vortices and anti-vortices coexisting with the SC order. The chiral LRO is demonstrated by the finite expectation value of chiralities (45) as shown in Fig. 16 (b) and Fig. 17 (b). The SC order is characterized by the quasi-LRO of \(U(1)\) spins, where the spin-spin correlation function (46) displays a power-law decay as displayed in Fig. 16 (d). The melting of the vortex lattice undergoes two steps into the disordered phase with an intermediate non-SC vortex-lattice phase. In the non-SC vortex-lattice phase, the chiral LRO survives but the phase coherence between vortices is destroyed. Such a two-step procedure has been extensively investigated in the FFXY models [41; 42; 65]. Below the point \(D\), the phase boundaries are determined by a combined analysis of the entanglement entropy and free energy. We find that the fixed-point equations have two different solutions across the critical point depending on the initial states we start from. The proper solution is chosen with a lower free energy density. As shown in Fig. 15 (c), along the line \(T=0.8J\), the entanglement entropy exhibits a peak at \(\mu\simeq 0.080J\) corresponding to the BKT transition and a discontinuous jump at \(\mu\simeq 0.153J\) associated to a first-order phase transition. The free energy density of Fig. 15 (e) displays an inflection point of a first-order transition at \(\mu\simeq 0.153J\), demonstrating that the entanglement entropy can serve as a powerful criterion for the determination of the first-order phase transition. Besides, we find that the position of the first-order transition is nearly unchanged with increasing bond dimensions, in good agreement with the behavior of the entanglement entropy. As the temperature decreases, the BKT transition line \(CA\) and the first-order transition line \(CD\) become closer and finally merge into a single first-order transition line \(CB\) at the tricritical point \(C\) with \(T\simeq 0.640J\) and \(\mu\simeq 0.142J\). As shown in Fig. 15 (d), along the line \(T=0.64J\), the entanglement entropy shows a discontinuous jump just above the peak position of \(\mu\simeq 0.142J\). The corresponding free energy density is displayed in Fig. 15 (f) with an evident cusp point. Across the transition line \(CD\), the vortex lattice melts directly into the disordered phase via a first-order transition, where the chiral LRO and spin quasi-LRO break down simultaneously. As is shown in Fig. 
17 (a) and (b), both the thermal entropy density \(S\) and the chiral order parameter \(m\) develop a discontinuous jump at the transition point of \(\mu\simeq 0.18\) and \(T\simeq 1.052\). A comparison between the spin-spin correlation functions across the line \(CD\) is displayed in Fig. 17 (c) and (d). For a given temperature of \(T=1.05J\) in the vortex-lattice phase, the correlation function \(G(r)\) displays a power-law behavior. But in the disordered phase with \(T=1.06J\), the correlation function behaves in an exponential way. We should point out that the existence of a novel continuous transition arising from the merging of BKT and Ising transitions [86; 87; 88; 89; 90] is not found here. Figure 15: (a) The entanglement entropy as a function of temperature along \(\mu=0.3J\). (b) The entanglement entropy as a function of temperature along \(\mu\simeq 0.20J\). (c) The entanglement entropy as a function of chemical potential along \(T=0.8J\). (d) The entanglement entropy as a function of chemical potential along \(T=0.64J\). (e) The free energy density as a function of chemical potential along \(T=0.8J\). (f) The free energy density as a function of chemical potential along \(T=0.64J\). At low temperatures, the phase boundary \(CB\) belongs to a first-order transition between the vortex-lattice phase and the vortex-vacuum phase. As shown in Fig. 16(b), when going down along \(T=0.1J\) line, the chiral order parameter \(m\) exhibits a discontinuous jump to zero at \(\mu\simeq 0.128J\). Since the vortex fugacity is greatly suppressed by decreasing the chemical potential \(\mu\), the vortex density drops dramatically, driving the system into the vortex-vacuum phase. Note that the "vortex vacuum" just means that there is no excitation of free vortices but the charge-neutral vortex-antivortex pairs can still be excited. The excitation of vortex-antivortex pairs destroys the LRO of the \(U(1)\) spins and gives rise to the well-known BKT quasi-LRO state. As can be seen in Fig. 16 (c)-(d), the spin-spin correlation function displays a power-law decay in both the vortex-lattice and vortex-vacuum phases. When the temperature further decreases, the first-order transition line \(CB\) behaves in a linear way. Such a linear behavior is displayed in Fig. 16 (a), where the extrapolation to the zero temperature gives \(\mu=0.125J\) in the inset. The terminal point \(B\) is determined at \(T=0\) and \(\mu=0.125J\), consistent with our previous analysis of the ground state. Finally, the transition line \(CA\) separating the vortex-vacuum and disordered phase is the conventional BKT transition, driven by the dissociation of vortex-antivortex pairs. The inverse process, when the system is cooling from a disordered phase, pairs of vortex and anti-vortex appear and further condensed into a square vortex lattice is analogous to the theoretical proposal in ultracold Fermi gases [84]. The rich phase diagram of the modified XY model provides important insights into the formation of the vortex lattice and the complex melting process. By tuning the vortex chemical potential, the unconventional phase transitions in SC lattice are investigated thoroughly in the orientational \(U(1)\) phase variables. A more comprehensive study should take into account the positional order since the vortex lattice may also melt via the Kosterlitz-Thouless-Halperin-Nelson-Young procedure [82; 83; 84; 81; 89; 91; 92; 93]. 
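In practice, the phase boundaries discussed above are traced out by scanning \(S_{E}\) along cuts of the phase diagram: local maxima signal continuous (BKT or Ising) transitions, discontinuous jumps signal first-order ones, and whenever two fixed-point branches coexist the one with the lower free energy density is retained. A schematic sketch of such a scan is given below; the jump threshold is an illustrative parameter, not a value used in the paper.

```python
import numpy as np

def locate_transitions(T, S_E, jump_tol=0.1):
    """Scan the entanglement entropy S_E(T) along a cut of the phase diagram.
    Returns (peaks, jumps): temperatures of local maxima (candidate continuous
    transitions) and midpoints of discontinuous jumps between neighbouring
    data points (candidate first-order transitions)."""
    T = np.asarray(T, dtype=float)
    S_E = np.asarray(S_E, dtype=float)
    peaks = [T[i] for i in range(1, len(T) - 1)
             if S_E[i] > S_E[i - 1] and S_E[i] > S_E[i + 1]]
    jumps = [0.5 * (T[i] + T[i + 1]) for i in range(len(T) - 1)
             if abs(S_E[i + 1] - S_E[i]) > jump_tol]
    return peaks, jumps
```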
## IV Discussion and outlook In this paper, we have developed a generic tensor network approach to study the frustrated classical spin models with both discrete and continuous degrees of freedom on a wide range of 2D lattices. The key point for a contractible tensor network representation of the partition function is that the emergent degrees of freedom induced by frustrations should be encoded in the local tensors comprising the infinite network. In this way, the massive degeneracy can be described by the interactions between emergent dual variables representing a cluster of interacting spins under the constraint of frustrations. We showed that a common process can be applied to the construction of the tensor network based on ideas of emergent degrees of freedom and duality transformations. We demonstrated the power of our method by applying it to a large array of classical frustrated Ising models and fully frustrated XY spin models on the kagome, triangular and square lattices in the whole temperature range. Figure 16: (a) The free energy density as a function of chemical potential at different temperatures. Inset: linear extrapolation of critical chemical potential as a function of temperature. (b) The checkboard-like chirality pattern along \(T=0.1J\). (c) The spin-spin correlation function shows an exponential decay at \(T\simeq 0.1J\) and \(\mu\simeq 0.1J\) in the vortex-vacuum phase. (d) The spin-spin correlation function shows a power-law decay at \(T\simeq 0.1J\) and \(\mu\simeq 0.15J\) in the vortex-lattice phase. Figure 17: (a) The thermal entropy density as a function of temperature along \(\mu=0.18J\). (b) The chirality on \(2\times 2\) sublattice as a function of temperature along \(\mu=0.18J\). (c) The spin-spin correlation function at \(T\simeq 1.05J\), and \(\mu\simeq 0.18J\) in vortex lattice phase shows a power-law decay. (d) The spin-spin correlation function at \(T\simeq 1.06J\) and \(\mu\simeq 0.18J\) in disordered phase displays an exponential decay. Our tensor network approach turned out to be a natural generalization of the previous solutions of frustrated spin systems [40; 48; 49; 24] but from a more fundamental basis. Then the partition function is expressed in terms of a product of 1D transfer matrix operator, whose eigen equation was solved by the algorithms based on matrix product states rigorously. The singularity of the entanglement entropy for the 1D quantum analog provides a stringent criterion to determine various phase transitions with high accuracy. Apart from the good agreement with previous findings, our numerical results offer new clarification of the phase structure of the AF triangular XY model and the modified XY model. The generic tensor network approach provides a promising way to deal with some remaining open questions on frustrated systems. First, our method should be applicable to frustrated spin models with longer-range interactions where emergent degrees of freedom play an important role in characterizing the collective behavior. For example, a range of novel classical spin liquid phase in the \(J_{1}\)-\(J_{2}\)-\(J_{3}\) Ising model at the fine-tune point can be understood by topological charges with the nearest neighbor interaction and hence can be solved directly from our tensor network approach [94; 95; 96]. Second, the long-standing problems in uniform frustrated XY spin models may be solved by our generic construction. 
Any frustration ratio \(f\in[0,1]\) can be encoded by a suitable gauge field on the lattice bonds, and the corresponding tensor network can then be built with the same standard construction. Finally, we point out that our construction can be extended to other models in any dimension with emergent degrees of freedom. For instance, the classical Heisenberg antiferromagnet [97; 98] may be investigated in the future, where the basis for the duality transformation would be spherical harmonic functions. We believe that further development of the tensor network approach presented in this work will lead to the solution of a number of problems in frustrated systems that were previously difficult to solve. ###### Acknowledgements. The authors are very grateful to Tao Xiang for his stimulating discussions. The research is supported by the National Key Research and Development Program of MOST of China (2017YFA0302902). ## Appendix A Tensor network calculations of the physical quantities ### Linear transfer matrix method Once the proper tensor network representations for the frustrated models are obtained, the contraction of the infinite tensor network can be performed efficiently. One of the best practices to contract a translation-invariant tensor network in the thermodynamic limit is the algorithm of uniform matrix product states, where the leading eigenvector of the row-to-row transfer matrix is calculated using a set of optimized eigensolvers [52; 53; 60]. Due to the emergent phenomena in frustrated systems, the lattice symmetry is usually spontaneously broken with a larger translation-invariant unit composed of new degrees of freedom. The relevant 2D tensor network should consist of a larger unit cell of multiple tensors that matches the translational symmetry. For example, a \(2\times 2\) plaquette structure of \(O\) tensors is necessary to represent the checkerboard ground state of the FFXY model on square lattices, and a \(3\times 3\) structure for the triangular AF XY model. The fixed-point equation for the enlarged transfer operator can be accurately solved by the multiple lattice-site VUMPS algorithm with only a linear growth in computational cost [54]. For a translation-invariant cluster consisting of \(n_{x}\times n_{y}\) local tensors, the whole transfer matrix is formed by \(n_{y}\) rows of linear transfer matrices \[\mathcal{T}=T^{(y+n_{y}-1)}\cdots T^{(y)}, \tag{16}\] where each row of the component transfer matrix is defined by \[T^{(y)}=\mathrm{t}\mathrm{Tr}\left(\cdots O^{(x,y)}O^{(x+1,y)}\cdots\right) \tag{17}\] with \(x=0,\cdots,n_{x}-1\), and \(y=0,\cdots,n_{y}-1\). The transfer operator \(\mathcal{T}\) can be regarded as the matrix product operator (MPO) for a 1D quantum spin chain, whose logarithmic form can be mapped to a 1D quantum system with complicated spin-spin interactions \[\hat{H}_{1D}=-\frac{1}{\beta}\ln\mathcal{T}. \tag{18}\] In this way, the correspondence between the finite-temperature 2D statistical model and the 1D quantum model at zero temperature is established. The eigenequation can be expressed as \[\mathcal{T}|\Psi(A)\rangle^{(y)}=\Lambda_{\mathrm{max}}|\Psi(A)\rangle^{(y)}, \tag{19}\] where \(|\Psi(A)\rangle^{(y)}\) is the leading eigenvector represented by matrix product states (MPS) made up of an \(n_{x}\)-site unit cell of local \(A\) tensors with auxiliary bond dimension \(D\) \[|\Psi(A)\rangle^{(y)}=\sum_{\{i\}}\mathrm{Tr}(\cdots A^{i_{(x,y)}}A^{i_{(x+1,y)}}\cdots)|\cdots i_{(x,y)}\cdots\rangle \tag{20}\] satisfying \(A^{(x,y)}=A^{(x,y+n_{y})}=A^{(x+n_{x},y)}\) [52]. 
The big eigenequation can be further decomposed into a set of smaller eigen-equations displayed in Fig. 18 (a) as \[T^{(y)}|\Psi(A)\rangle^{(y)}=\Lambda_{y}|\Psi(A)\rangle^{(y+1)}, \tag{21}\] with a total eigenvalue \[\Lambda_{\mathrm{max}}=\prod_{y=0}^{n_{y}-1}\Lambda_{y}. \tag{22}\] The key process of the algorithm is summarized in Figs. 18 (b)-(e), including sequentially solving the left and right fixed points of the channel operators \[\mathbb{T}_{L}^{(x,y)}F_{L}^{(x,y)} =\lambda_{(x,y)}F_{L}^{(x+1,y)}, \tag{111}\] \[\mathbb{T}_{R}^{(x,y)}F_{R}^{(x,y)} =\lambda_{(x,y)}F_{R}^{(x-1,y)}, \tag{112}\] and the updating of the central tensors \[H_{A_{C}}^{(x,y)}A_{C}^{(x,y)} =\lambda A_{C}^{(x,y+1)}, \tag{113}\] \[H_{C}^{(x,y)}C^{(x,y)} =C^{(x,y+1)}. \tag{114}\] Note that, when solving the fixed point eigen equation (111)-(114), one may not directly use the linear transfer matrix composed by the uniform local tensor \(O\), but the interior structure should be explored. This will significantly reduce the computational complexity. ### Physical quantities From the fixed-point MPS for the 1D quantum transfer operator, various physical quantities can be estimated accurately. The entanglement properties can be detected via the Schmidt decomposition of \(|\Psi(A)\rangle^{(y)}\) which bipartites the relevant 1D quantum state of the MPO, and the entanglement entropy can be determined directly from the singular values \(s_{\alpha}\) as \[S_{E}=-\sum_{\alpha=1}^{D}s_{\alpha}^{2}\ln s_{\alpha}^{2}, \tag{115}\] in correspondence to the quantum entanglement measure. Moreover, the expectation value of a local observable can be evaluated by inserting the corresponding impurity tensor into the original tensor network for the partition function. The impurity tensors can be obtained simply by introducing an unbalanced delta tensor to replace the original delta tensor characterizing the constraints of sharing spins. For Ising spins, the expectation value of a local spin at site \(j\) can be expressed as \[\langle s_{j}\rangle=\frac{1}{Z}\sum_{\{s_{i}=\pm 1\}}\mathrm{e}^{-\beta E(\{s_{ i}\})}s_{j} \tag{116}\] where \(E(\{s_{i}\})\) is the energy of a state under a given spin configuration \(\{s_{i}\}\). The \(s_{j}\) term just changes the Kro Figure 19: (a) The imbalanced delta tensors as a result of imbalanced currents introduced by the local observables. (b) The vertical split of the imbalanced delta tensors. (c) The construction of the impurity tensors from imbalanced delta tensors. (d) Two impurity tensors are introduced into the original tensor network. (e) Expectation of a local observable by contracting the leading eigenvectors of the channel operators. (f) Two-point correlation functions calculated by contracting a sequence of channel operators. Figure 18: The key steps of the multi-site VUMPS algorithm. (b) and (c) Eigen-equations to update the left and right environmental fixed points of the channel operators. (e) and (f) Eigen-equations to update the central tensors based on the new environment. necker delta tensor from the form of (16) to \[\delta_{s_{1},s_{2},\cdots,s_{n}}=\begin{cases}s_{1},&s_{1}=s_{2}=\cdots=s_{n}\\ 0,&\text{otherwise}\end{cases}. \tag{14}\] For XY spins, the expectation value of \(\text{e}^{iq\theta}\) can be calculated by introducing imbalanced currents into the original delta tensors from the conservation form of (39) to \[\delta^{q}=\delta_{n_{1}+n_{2}+n_{3}+n_{4}+q,0} \tag{15}\] as displayed in Fig. 19 (a). 
Accordingly, the vertical splitting of the delta tensor in (39) should be modified to \[\delta_{n_{1}+n_{2}+n_{3}+n_{4}+q,0}=\sum_{n_{5}}\delta^{u}_{n_{1}+n_{2}-n_{5},0}\delta^{d}_{n_{3}+n_{4}+n_{5}+q,0} \tag{16}\] as shown in Fig. 19 (b). Then the impurity tensors can be constructed in the same way by including the imbalanced delta tensors, as depicted in Fig. 19 (c). The tensor network containing two impurity tensors is displayed in Fig. 19 (d) as an example. Using the MPS fixed point, the contraction of the tensor network containing the impurity tensor is reduced to a trace of an infinite sequence of channel operators, which can be further squeezed into a contraction of a small network. As shown in Fig. 19 (e), the expectation value of a single local observable is expressed as a contraction of only five tensors. Likewise, the expectation value of the two-point correlation function \[G(r)=\langle\cos(n\theta_{i}-m\theta_{i+r})\rangle \tag{17}\] can be reduced to a trace of a row of channel operators containing two impurity tensors, as shown in Fig. 19 (f).
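As a final illustration of the Appendix, the entanglement entropy used as the transition criterion throughout this work follows directly from the Schmidt values of the fixed-point MPS. Below is a minimal sketch, assuming the fixed point is kept in mixed-canonical form so that the singular values of the center matrix \(C\) are the Schmidt coefficients; this storage convention is an assumption made for the example.

```python
import numpy as np

def entanglement_entropy(C):
    """Entanglement entropy S_E = -sum_a s_a^2 ln s_a^2 from the singular values
    of the center matrix C of the fixed-point MPS (normalized so that the
    squared Schmidt values sum to one)."""
    s = np.linalg.svd(np.asarray(C), compute_uv=False)
    p = (s / np.linalg.norm(s)) ** 2
    p = p[p > 1e-16]                     # drop numerically zero weights
    return float(-np.sum(p * np.log(p)))
```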
2309.03750
PBP: Path-based Trajectory Prediction for Autonomous Driving
Trajectory prediction plays a crucial role in the autonomous driving stack by enabling autonomous vehicles to anticipate the motion of surrounding agents. Goal-based prediction models have gained traction in recent years for addressing the multimodal nature of future trajectories. Goal-based prediction models simplify multimodal prediction by first predicting 2D goal locations of agents and then predicting trajectories conditioned on each goal. However, a single 2D goal location serves as a weak inductive bias for predicting the whole trajectory, often leading to poor map compliance, i.e., part of the trajectory going off-road or breaking traffic rules. In this paper, we improve upon goal-based prediction by proposing the Path-based prediction (PBP) approach. PBP predicts a discrete probability distribution over reference paths in the HD map using the path features and predicts trajectories in the path-relative Frenet frame. We applied the PBP trajectory decoder on top of the HiVT scene encoder and report results on the Argoverse dataset. Our experiments show that PBP achieves competitive performance on the standard trajectory prediction metrics, while significantly outperforming state-of-the-art baselines in terms of map compliance.
Sepideh Afshar, Nachiket Deo, Akshay Bhagat, Titas Chakraborty, Yunming Shao, Balarama Raju Buddharaju, Adwait Deshpande, Henggang Cui
2023-09-07T14:45:41Z
http://arxiv.org/abs/2309.03750v2
# PBP: Path-based Trajectory Prediction for Autonomous Driving ###### Abstract Trajectory prediction plays a crucial role in the autonomous driving stack by enabling autonomous vehicles to anticipate the motion of surrounding agents. Goal-based prediction models have gained traction in recent years for addressing the multimodal nature of future trajectories. Goal-based prediction models simplify multimodal prediction by first predicting 2D goal locations of agents and then predicting trajectories conditioned on each goal. However, a single 2D goal location serves as a weak inductive bias for predicting the whole trajectory, often leading to poor map compliance, i.e., part of the trajectory going off-road or breaking traffic rules. In this paper, we improve upon goal-based prediction by proposing the _Path-based prediction (PBP) approach. PBP predicts a discrete probability distribution over reference paths in the HD map using the path features and predicts trajectories in the path-relative Frenet frame. We applied the PBP trajectory decoder on top of the HiVT scene encoder and report results on the Argoverse dataset. Our experiments show that PBP achieves competitive performance on the standard trajectory prediction metrics, while significantly outperforming state-of-the-art baselines in terms of map compliance. ## 1 Introduction To safely navigate through traffic while offering passengers a smooth ride, autonomous vehicles need the ability to predict the trajectories of surrounding agents. There is inherent uncertainty in predicting the future, making this a challenging task. Agent trajectories tend to be highly non-linear over long prediction horizons. Additionally, the distribution of future trajectories is multimodal; in a given scene an agent could have multiple plausible goals and could take various paths to each goal. In spite of these challenges, agent motion is not completely unconstrained. Vehicles tend to follow the direction of motion ascribed to their lanes, make legal turns and lane changes, and stop at stop signs and crosswalks. Bi-cyclists tend to use the bike lane, and pedestrians tend to walk along sidewalks and crosswalks. High-definition (HD) maps of traffic scenes efficiently represent such constraints on agent motion and have thus been a critical component of autonomous driving datasets [2, 3, 6, 11, 38]. In fact, it has been shown in many prior works [27, 29, 1, 15, 18, 25] that a key requirement of the trajectory prediction task for a real-world autonomous driving system is to predict _map-compliant_ trajectories - trajectories that don't go off-road or violate traffic rules over long prediction horizons. For example, incorrectly predicting a non-map-compliant trajectory that encroaches into the oncoming traffic lane could cause the ego vehicle to brake hard or even make dangerous maneuvers on the road. As a result, prediction map compliance w.r.t. the provided HD map is central to our proposed approach and experimental evaluation. Prior works have leveraged HD maps for trajectory prediction in two distinct ways. First, the HD map is often used as an input to the model. Early works [26, 5, 7] use rasterized HD maps and CNN encoders. More recent works directly encode vectorized HD maps using PointNet encoders [39, 12], graph neural networks [20] or transformer layers [21, 23, 24, 46]. The map encoding is then used by a multimodal prediction header to output \(K\) trajectories and their probabilities. 
A drawback of multimodal prediction headers is that they need to learn a complex one-to-many mapping from the entire scene context to multiple future trajectories, often leading to non-map-compliant predictions. To address this shortcoming, a few recent works additionally use the HD map for _goal-based prediction_[13, 14, 43, 16, 41]. Goal-based prediction models associate each mode of the trajectory distribution to a 2D goal location sampled from the HD map. They predict a discrete distribution over the sampled goals, and then predict trajectories conditioned on each goal. This simplifies the mapping learned by the prediction header, and also makes each mode of the trajectory distribution more interpretable. However, 2D goal locations serve as a weak inductive bias to condi tion predictions, and may lead to imprecise trajectories for each goal. In this work, we seek to improve upon goal-based trajectory prediction. We argue that _reference paths_ rather than 2D goals are the appropriate HD map element to condition predicted trajectories. We define reference paths as segments of lane centerlines close to the agent of interest that the agent may follow over the prediction horizon. We propose a novel path classifier that predicts a discrete probability distribution over the candidate reference paths and a trajectory completion module that predicts trajectories conditioned on each path in the Frenet frame. Figure 1 shows an overview of our approach. In particular, our approach has two key advantages over goal-based prediction: 1. **Path features instead of goal features:** We predict trajectories conditioned on feature descriptors of the entire reference path instead of just 2D goal locations. This is a more informative feature descriptor and leads to more map-compliant trajectories over longer prediction horizons compared to goal-based prediction. 2. **Prediction in the Frenet frame:** The reference paths allow us to predict trajectories in the Frenet frame relative to the path. Compared to the Cartesian frame with varying lane locations and curvatures, predictions in the Frenet frame have much lower variance, which leads to more map-compliant trajectories that better generalize to novel scene layouts. Our path-based trajectory decoder is modular by design and could be used with any existing scene encoder such as VectorNet [12], LaneGCN [20], Scene Transformer [24], Wayformer [23], etc. Here, we build our decoder on top of the recently proposed HiVT encoder [46] that achieved competitive results on the Argoverse dataset [6] and has a publicly available code base. Our results on the Argoverse dataset show that our path-based decoder achieves competitive performance in terms of the standard minADE, minFDE, and miss rate metrics, while significantly outperforming the HiVT baseline and goal-based prediction in terms of map compliance metrics. Our contributions can be summarized as follows: * We propose a novel path-based trajectory prediction (PBP) approach that improves upon traditional goal-based prediction. * We applied our PBP trajectory decoder on top of the HiVT [46] scene encoder. The resulting model achieves the best map compliance metric on the Argoverse leaderboard while being competitive in terms of prediction error metrics. * We present extensive ablation studies comparing different trajectory decoder approaches on the Argoverse validation set. 
## 2 Related work **Map-compliant trajectory prediction:** Leveraging the HD-map and predicting map-compliant trajectories has been the focus of a large number of works on trajectory prediction. Several works have proposed novel HD map encoders [10, 12, 20, 24, 39, 46], trajectory decoders conditioned on HD maps [9, 13, 14, 42, 43, 16, 34], and even novel metrics and auxiliary loss functions for map-compliance [15, 29, 25, 27, 8, 1]. In this work, we propose a path-based prediction approach that significantly improves prediction map compliance. **Goal-free multimodal prediction:** The distribution of future trajectories is multimodal due to unknown intents of agents. Machine learning models for trajectory prediction thus need to learn a one-to-many mapping from the HD map and past states of agents, to multiple future trajectories. Figure 1: **Overview of path-based prediction. Path-based prediction predicts trajectories conditioned on _reference paths_ rather than 2D goals. We sample reference paths using the lane network from HD maps, predict a discrete distribution over the sampled paths, and predict future trajectories in the Frenet frame relative to the paths. Finally, we transform the trajectories back to the Cartesian frame relative to the target agent to obtain multimodal predictions.** Prior work has addressed this using two approaches. The first approach is to implicitly learn the trajectory distribution using latent variable models such as GANs [17, 30, 44], CVAEs [19, 31], and normalizing flows [27, 28], where samples from the model represent plausible future trajectories. The other common approach is to use a multimodal regression header that outputs a fixed number of trajectories along with their probabilities [20, 24, 46, 7]. Such models are trained using the winner takes all/variety loss [17]. Some recent works [23, 33, 35], use DETR-like learned tokens [4] to output \(K\) distinct trajectories. **Goal-based prediction:** Goal-based prediction models [13, 14, 43, 41, 16] partly address the above limitations by associating each mode of the trajectory distribution to a 2D goal in the HD map. TNT [43] samples a sparse set of goals along lane centerlines. LaneRCNN [41] uses nodes in a lane graph to predict goal locations. HOME [13] and GHOME [14] predict goal heatmaps along a grid and graph representation of the HD map, and sample goal locations to optimize for the minFDE or miss rate metrics. Finally, DenseTNT [16] first predicts a dense goal heatmap along lanes, before using a second learned model to sample goals from the heatmap. We improve upon goal-based prediction models by conditioning our predictions on reference paths in the HD map rather than goals. Reference paths provide our trajectory decoder with more informative feature descriptors than 2D goal coordinates, and additionally allow us to predict in the path-relative Frenet frame. **Frenet frame trajectory decoding:** There are some existing models that predict trajectories in path-relative Frenet frame, such as GoalNet [42], DAC [22], and WIMP [18]. PBP has two key differences from those works. First, PBP has a different definition of its reference paths from those works. The reference paths in GoalNet, DAC, and WIMP are fixed-lengthed paths in the lane level. To generate the reference paths, GoalNet and DAC start from the agent's current position and search along the lane graph for a fixed distance. 
Such reference paths only capture the agent's high-level intention (e.g., go straight or turn right) but do not capture other uncertainties such as change of speed profiles. As a result, GoalNet, DAC, and WIMP all predict \(M\) trajectory modes within each reference path to achieve multimodal prediction. On the other hand, PBP's reference paths are sequences of lane segments with variable lengths, and PBP relies entirely on its path classification to achieve multimodal prediction since a reference path can uniquely define a predictive mode. To highlight the difference, PBP considers around 600 candidate reference paths per agent, while GoalNet and DAC only consider less than three reference paths per agent. Second, DAC [22] and WIMP [18] do not have a learned path classification module to predict path probabilities or a path classification loss as a training objective. DAC uses a heuristic algorithm to rank paths based on the distance-along-lane score and centerline-yaw score, and WIMP finds only one single closest reference path for each agent using a heuristic algorithm. On the other hand, PBP has a path classification module that predicts the probability distribution over all candidate paths. PRIME [32] also predicts trajectories in the Frenet frame, but it uses a model-based trajectory generator (a quartic polynomial) to sample trajectories. In contrast, PBP's trajectory generator is entirely learned, allowing it to generate a variety of motion profiles in the Frenet frame. ## 3 PBP: Path-based prediction ### Problem statement The objective of a trajectory prediction model is to forecast the future trajectories of a set of agents in the scene, given their past history positions and map context. We denote the past history positions of an agent \(a\) by \(\{\mathbf{P}^{a}\}_{Past}=\{\mathbf{P}^{a}_{-T^{\prime}+1},\mathbf{P}^{a}_{-T^{\prime}+2},\cdots,\mathbf{P}^{a}_{0}\}\) where \(\mathbf{P}^{a}_{t}=(x^{a}_{t},y^{a}_{t})\) is a 2-D coordinate position, and \(T^{\prime}>0\) is the past history length. The map context \(\mathcal{M}\) is represented as a set of discretized lane segments \(\{l_{j}\}_{j=1}^{L}\) and their connections. The prediction model is required to forecast the future state of each agent \(\{\mathbf{P}^{a}\}_{Future}=\{\mathbf{P}^{a}_{1},\mathbf{P}^{a}_{2},\cdots,\mathbf{P}^{a}_{T}\}\) over the time horizon \(T>0\). In order to capture the uncertainties of the agents' future behaviors, the model will output \(K\) trajectory predictions and their probabilities \(\{p_{k}\}_{k=1}^{K}\) for each agent. ### Overall architecture The overall architecture of our PBP model is illustrated in Figure 2, which consists of four main components. The scene encoder generates agent and map embeddings from agent-map and agent-agent interactions (Section 3.3). The candidate path sampler samples the candidate paths from the map for each agent (Section 3.4). The path classifier predicts the probability of each sampled path (Section 3.5). Finally, the trajectory regressor decodes trajectories conditioned on the selected paths (Section 3.6). ### Scene encoding The scene encoder module creates agent feature vectors from the scene for each agent. In this work, we borrowed the scene encoder module from the HiVT model [46], a recently proposed trajectory prediction model that achieves state-of-the-art performance on Argoverse. The HiVT scene encoder represents each scene as a set of vectorized entities. 
It uses this representation to encode the scene by hierarchical aggregation of the spatial-temporal information. First, rotational invariant local feature vectors are encoded for each agent with a transformer module to aggregate neighboring agents' information as well as local map structure. Next, global interactions between agents are aggregated into each agent's feature vector to capture the scene-level context. The outputs of the encoder are the feature vectors for each agent denoted by \(\mathbf{F_{a}}\). ### Candidate sampling The objective of the candidate sampling module is to create a set of candidate reference paths for each agent by traversing the lane graph. A reference path is defined as a sequence of connected lane segments \(r_{i}=\{l_{i,1},l_{i,2},\cdots,l_{i,R_{i}}\}\). The starting point of the reference path for an agent \(a\) is supposed to be in the vicinity of the agent's current location \(\mathbf{P}_{0}^{a}\), and the endpoint is supposed to be in the vicinity of the agent's future trajectory endpoint \(\mathbf{P}_{T}^{a}\), as is illustrated in Figure 1. To select the candidate reference path for an agent \(a\), we first select a set of _seed lane segments_ that will be considered as the path starting points. We used a simple heuristic to select the seed lane segments by picking the lane segments that are within a distance range of the agent's current location and have their lane directions within a range of the agent's current heading. By picking the seed lanes this way, we will have candidate paths starting from not only the agent's current lane but also the neighbor lanes, which allows the model to predict lane-changing trajectories. From the seed lane segments, we run a breadth-first search to find the candidate paths. The output of the candidate sampling module is a set of candidate reference paths for each agent, denoted as \(\mathcal{R}^{a}=\{r_{i}^{a}\}\). ### Path classification Given the set of candidate reference paths, the path classification module predicts the probability distribution over them using the agent and path features. To encode the features \(\mathbf{F}_{p,i}\) of a path \(r_{i}=\{l_{i,1},l_{i,2},\cdots,l_{i,R_{i}}\}\), we pick three lane segments in the path, the start segment \(l_{i,1}\), the middle segment \(l_{i,R_{i}//2}\), and the end segment \(l_{i,R_{i}}\). For each segment, we use the middle point coordinate and the direction vectors as the raw feature. We also add the total length of the path (in meters) as an additional feature. We then encode those raw features with an MLP to a feature vector \(\mathbf{F}_{p}\). In addition to the agent and path features, we also create an agent-path pair feature that captures the interactions between the agent and the path. We use the distance vectors and angle deltas from the agent's current location to the start, middle, and end segments of the path as the raw features. We then use another MLP network to encode them to an agent-path pair feature vector \(\mathbf{F}_{a,(p,i)}\) We concatenate the agent feature \(\mathbf{F}_{a}\), path feature \(\mathbf{F}_{p}\) Figure 2: **Model architecture:** Our model consists of four key modules. The scene encoder encodes the agent history and HD map information (Section 3.3). The candidate path sampler samples candidate paths for each agent from the lane graph (Section 3.4). The path classifier predicts a discrete distribution over the reference paths (Section 3.5). 
Finally, the trajectory regressor decodes trajectory predictions in the path-relative Frenet frame conditioned on the paths (Section 3.6). and agent-path pair feature \(\mathbf{F}_{a,(p,i)}\) together and run them through another MLP network to predict the probability distribution over all candidate paths of the agent, trained with the cross-entropy loss as \(\mathcal{L}_{cls}\). We decide the ground-truth reference path \(r^{a}_{GT}\) of the agent \(a\) based on its ground-truth future trajectory \(\{\mathbf{P}^{a}\}_{Future}\), similar to the ground-truth goal selection in goal-based prediction. At inference time, we apply the non-maximum suppression (NMS) technique to sample a set of \(K\) diverse paths to decode the trajectory predictions. ### Frenet frame trajectory decoding The trajectory regressor module decodes trajectories conditioned on the reference paths. One key difference between our trajectory regressor and the one used in traditional goal-based prediction [13, 14, 16, 43, 41] is that it has the information of the whole reference path instead of just the final goal endpoint. To leverage this path information, we designed our trajectory regressor to decode trajectories in the path-relative Frenet frame. For each selected reference path \(r^{a}_{i}\), the trajectory regressor predicts a trajectory in path-relative Frenet frame, with longitudinal component \(\{\hat{s}^{a}_{t}\}_{t=1\cdots T}\) and lateral component \(\{\hat{d}^{a}_{t}\}_{t=1\cdots T}\), whose inputs include agent features \(\mathbf{F}_{a}\), path features \(\mathbf{F}_{p,i}\), and agent history in Frenet frame \(\mathbf{P}^{a}_{past,r^{a}_{i}}\). During training, we use a teacher-forcing technique and train the trajectory regressor using the ground-truth reference path \(r^{a}_{GT}\). We transform the ground-truth trajectory \(\mathbf{P}^{a}_{Future}\) to the Frenet frame w.r.t. \(r^{a}_{GT}\), with longitudinal component \(\{s^{a}_{t}\}_{t=1\cdots T}\) and lateral component \(\{d^{a}_{t}\}_{t=1\cdots T}\). The loss function is defined as smooth \(L1\) losses of the longitudinal and lateral components in the Frenet frame: \[\mathcal{L}^{a}_{reg}=\sum_{t=1}^{T}\mathcal{L}_{L1}(s^{a}_{t},\hat{s}^{a}_{t })+\lambda_{lateral}\mathcal{L}_{L1}(d^{a}_{t},\hat{d}^{a}_{t}) \tag{1}\] The total loss is a weighted sum of the path classification loss and the trajectory regression loss over all agents in the scene: \[\mathcal{L}=\sum_{a\in\text{Agents}}\lambda_{cls}\mathcal{L}^{a}_{cls}+ \mathcal{L}^{a}_{reg} \tag{2}\] After predicting the trajectories in the Frenet frame, we transform them back to the Cartesian frame using the corresponding reference path, using the formulas in [37]. ## 4 Experiments ### Dataset We evaluate our model using the public Argoverse dataset [6]. Argoverse includes track histories of agents published at 10 Hz and vectorized HD maps. The task involves predicting the future trajectory of a focal agent in each scenario over a prediction horizon of 3 seconds, conditioned on 2 seconds of track histories and the HD map of the scene. ### Implementation details We implemented our path-based prediction decoder on top of the open-source HiVT-64 scene encoder [46]. We followed a similar training scheme as the original HiVT model for PBP and its variants. We trained the models on 8 AWS T4 GPUs for 64 epochs with a batch size of 4. We used the Adam optimizer with a learning rate of 0.0005 and a decay weight of 0.0001. 
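Our decoder outputs waypoints in the Frenet frame of the selected reference path; as a reference for this step, the sketch below shows a simplified mapping from Frenet to Cartesian coordinates, assuming a piecewise-linear reference path. It is a stand-in for illustration, not the exact formulas of [37] used in our implementation.

```python
import numpy as np

def frenet_to_cartesian(s, d, path_xy):
    """Map Frenet coordinates (s: arc length along the path, d: signed lateral
    offset, positive to the left) to Cartesian points, for a reference path
    given as an (N, 2) polyline of centerline points."""
    path_xy = np.asarray(path_xy, dtype=float)
    seg = np.diff(path_xy, axis=0)                       # segment vectors
    seg_len = np.linalg.norm(seg, axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])    # arc length at each vertex
    pts = []
    for si, di in zip(np.atleast_1d(s), np.atleast_1d(d)):
        i = int(np.clip(np.searchsorted(cum, si, side="right") - 1, 0, len(seg) - 1))
        t = (si - cum[i]) / max(seg_len[i], 1e-9)
        base = path_xy[i] + t * seg[i]                   # point on the centerline at arc length si
        tangent = seg[i] / max(seg_len[i], 1e-9)
        normal = np.array([-tangent[1], tangent[0]])     # left-hand unit normal
        pts.append(base + di * normal)
    return np.array(pts)
```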
The agent feature vector is concatenated with the path features as well as the agent center and velocity in the Frenet frame, and a distance feature defined by the vector pointing from the agent location to the endpoint of the path in the Frenet frame and embedded via two-layer MLP. The concatenated feature vector is fed to the decoder which is a two-layer MLP with a hidden layer of size 128 activated by ReLU and an output layer of size 61, the first 60 nodes predict 30 waypoints of future trajectory and the last node predicts the score of the trajectory. We predict one trajectory for each selected reference path. During the training, we chose the ground truth reference path as the nominated reference path to decode the trajectory prediction. In the inference phase, the nominated paths will be the top selected reference paths selected by the NMS algorithm. ### Metrics **Best-of-K metrics:** We report results using the standard metrics used for multimodal trajectory prediction: minADE\({}_{K}\), minFDE\({}_{K}\) and miss rate (MR\({}_{K}\)). The standard metrics compute prediction errors using the best of \(K\) predicted trajectories, in order to not penalize diverse but plausible modes predicted by the model. The minADE\({}_{K}\) metric averages the L2 norms of displacement errors between the ground truth and the best mode over the prediction horizon. The minFDE\({}_{K}\) metric computes the L2 norm of the displacement error between the final predicted waypoint of the best mode and the final waypoint in the ground truth. Finally, miss rate computes the fraction of all predictions where none of the \(K\) predicted trajectories are within 2 meters of the ground truth. We report results for \(K\)=1 and \(K\)=6, following the convention used in Argoverse. **Map compliance metrics:** A key limitation of the standard best-of-k metrics is that they fail to penalize implausible predictions, even if they veer off-road or violate lane directions. Ideally, we want all \(K\) predictions to be plausible and map-compliant. Thus, we additionally report two map-compliance metrics. _Offroad rate_ measures the fraction of the predicted waypoints at a given horizon falling outside the drivable area. This is closely related to Argov erse's drivable area compliance (DAC) metric, but our off-road rate metric measures each individual waypoint and can report map compliance as a function of the prediction horizon as in Figure 3. _Lane deviation_ measures the L2 distance between a predicted waypoint and the nearest lane centerline. It captures map compliance signals even when the waypoint is inside the drivable area. We report the two map-compliance metrics averaged over all waypoints along the whole prediction horizon and all \(K=6\) trajectories. ### Decoder ablation study We first perform a set of controlled experiments comparing our PBP model with path classification and Frenet frame trajectory decoder against the following alternative prediction decoders. * _Multimodal regression:_ This is the original HiVT-64 model [46]. It directly regresses multimodal predictions with the winner-takes-all loss. * _Anchor-based:_ This decoder is used in MultiPath [5]. It predicts offsets with respect to fixed anchor trajectories. We obtain the anchors using K-means clustering on the train set. * _Goal-based:_ The goal-based prediction decoder [43, 16, 41] uses only the goal endpoint features (no path features) in its goal classification module and decodes trajectories conditioned on goal endpoints (no Frenet frame). 
* _PBP in Cartesian frame:_ This decoder performs path classification as in PBP but decodes trajectories in the Cartesian frame instead of the Frenet frame. For fair comparisons, we implemented all decoders using the same HiVT-64 encoder as PBP. The results are shown in Table 1, and we observe the following. **Significantly better map compliance.** PBP and goal-based prediction achieve significantly lower offroad rates and lane deviation errors than multimodal regression and anchor-based decoders. This effect is even more pronounced over longer prediction horizons, as shown in Figure 3. **Advantage over goal-based prediction.** Compared to goal-based prediction, PBP achieves overall lower prediction errors in terms of minFDE and MR and better map compliance metrics, because of the usage of richer path features. From Figure 3, goal-based prediction has strong map compliance at the final waypoint (i.e., goal endpoint), but it has higher offroad rates at the intermediate waypoints than PBP because of the missing path information. **Slightly worse mode diversity than goal-free decoders.** PBP's minFDE\({}_{6}\) metric is slightly worse than the multimodal regression baseline by 1%. This lower diversity is \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Decoder} & \multirow{2}{*}{\(\text{minFDE}_{1}\)} & \multirow{2}{*}{MR\({}_{1}\)} & \multirow{2}{*}{\(\text{minFDE}_{6}\)} & \multirow{2}{*}{MR\({}_{6}\)} & Offroad & Lane \\ & & & & & rate & dev. \\ \hline Multimodal regression [7, 46] & 2.93 & 0.481 & **0.996** & 0.101 & 0.069 & 0.510 \\ Anchor-based [5] & 2.93 & 0.491 & 1.019 & 0.096 & 0.068 & 0.503 \\ Goal-based [43, 16, 41] & **2.82** & 0.488 & 1.095 & 0.107 & 0.008 & **0.386** \\ \hline PBP in Cartesian frame & 2.84 & 0.479 & 1.048 & 0.099 & 0.005 & 0.389 \\ PBP (Ours) & **2.82** & **0.473** & 1.008 & **0.095** & **0.004** & **0.386** \\ \hline \hline \end{tabular} \end{table} Table 1: Decoder ablations on Argoverse validation set. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Model & minADE\({}_{1}\) & minFDE\({}_{1}\) & MR\({}_{1}\) & minADE\({}_{6}\) & minFDE\({}_{6}\) & MR\({}_{6}\) & DAC \\ \hline TNT [43] & 2.174 & 4.959 & 0.710 & 0.910 & 1.446 & 0.166 & 0.9889 \\ DenseTNT [16] & 1.679 & 3.632 & 0.584 & 0.882 & 1.282 & 0.126 & 0.9875 \\ GoHOME [14] & 1.689 & 3.647 & 0.572 & 0.943 & 1.450 & 0.105 & 0.9811 \\ PRIME [32] & 1.911 & 3.822 & 0.587 & 1.219 & 1.558 & 0.115 & 0.9898 \\ HiVT-128 [46] & 1.598 & 3.532 & 0.547 & 0.773 & 1.169 & 0.127 & 0.9888 \\ MultiPath++ [33] & 1.623 & 3.614 & 0.564 & 0.790 & 1.214 & 0.132 & 0.9876 \\ DCMS [40] & **1.477** & **3.251** & **0.532** & 0.766 & 1.135 & 0.109 & 0.9902 \\ Wayformer [23] & 1.636 & 3.656 & 0.572 & 0.767 & 1.162 & 0.119 & 0.9893 \\ QCNet [45] & 1.523 & 3.342 & 0.526 & **0.734** & **1.067** & **0.106** & 0.9887 \\ \hline PBP (Ours) & 1.626 & 3.562 & 0.535 & 0.855 & 1.325 & 0.145 & **0.9930** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison to the state-of-the-art models on the Argoverse leaderboard because PBP's predictions are constrained to lanes (as is shown in Figure 4). We argue that it is a fair trade-off to have more map-compliant predictions for real-world autonomous driving applications. ### Comparison against the state-of-the-art We submitted our PBP model to the Argoverse leaderboard. Table 2 reports our results along with the top entries on the leaderboard. 
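As a reference for how the best-of-\(K\) numbers in Tables 1 and 2 are defined, here is a small NumPy sketch following the metric descriptions in Section 4.3; it is illustrative only and not the official Argoverse evaluation code.

```python
import numpy as np

def best_of_k_metrics(preds, gt, miss_threshold=2.0):
    """preds: [K, T, 2] predicted trajectories, gt: [T, 2] ground truth.
    Returns (minADE_K, minFDE_K, miss) for a single agent."""
    dists = np.linalg.norm(preds - gt[None], axis=-1)  # [K, T] waypoint errors
    min_ade = dists.mean(axis=1).min()                 # best average displacement
    fde = dists[:, -1]                                 # final displacement per mode
    min_fde = fde.min()
    miss = float(min_fde > miss_threshold)             # 1 if no mode ends within 2 m
    return min_ade, min_fde, miss

# Example with K=6 random trajectories over a 30-step horizon.
print(best_of_k_metrics(np.random.randn(6, 30, 2), np.random.randn(30, 2)))
```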
Our model achieves the highest drivable area compliance (DAC) on the leaderboard, outperforming state-of-the-art in terms of map compliance, while being competitive in terms of minADE\({}_{1}\), minFDE\({}_{1}\), and MR\({}_{1}\). Those results are consistent with our ablation study results on the validation set. PBP's top-\(6\) metrics are slightly worse than the top leaderboard submissions, but note that most of them used extensive model ensembling (e.g., [40, 45, 23, 36, 23]), while our submission used only one single pair of encoder and decoder. ### Qualitative examples Figure 4 shows a few qualitative comparisons between the HiVT-64 baseline (using multimodal regression) and PBP. The results show PBP predicts map-compliant trajectories from all modes, while HiVT-64 has many offroad predictions. The example on the last row shows that PBP is able to correctly predict lane-changing trajectories, because the path candidates also contains paths on the neighbor lanes. ## 5 Conclusion and discussion In this paper, we propose PBP, a novel path-based prediction approach. In contrast to the traditional goal-based prediction approaches, PBP performs classification on the whole reference path instead of just the goal endpoint. The additional reference path information improves the path classification accuracy and allows PBP to decode trajectories in the path-relative Frenet frame. Evaluation results show that the path-based prediction approach makes the trajectory predictions significantly more map-compliant compared to the traditional multimodal regression and goal-based prediction approaches, while maintaining competitive or better prediction accuracy. One limitation of our path-based prediction approach is that its predictions are limited to map-compliant agents. In practice, one can use another goal-free regression model for those non-map-compliant agents and train a trajectory selector module to select between PBP and goal-free predictions. Figure 4: **Qualitative comparison between original HiVT-64 and PBP. The first column shows the predictions from HiVT-64, and the second column shows the predictions from PBP. The blue, green, and red lines represent past history, ground-truth, and top-6 prediction trajectories, respectively.** Figure 3: Offroad rate.
2309.13543
Substituting Data Annotation with Balanced Updates and Collective Loss in Multi-label Text Classification
Multi-label text classification (MLTC) is the task of assigning multiple labels to a given text, and has a wide range of application domains. Most existing approaches require an enormous amount of annotated data to learn a classifier and/or a set of well-defined constraints on the label space structure, such as hierarchical relations which may be complicated to provide as the number of labels increases. In this paper, we study the MLTC problem in annotation-free and scarce-annotation settings in which the magnitude of available supervision signals is linear to the number of labels. Our method follows three steps, (1) mapping input text into a set of preliminary label likelihoods by natural language inference using a pre-trained language model, (2) calculating a signed label dependency graph by label descriptions, and (3) updating the preliminary label likelihoods with message passing along the label dependency graph, driven with a collective loss function that injects the information of expected label frequency and average multi-label cardinality of predictions. The experiments show that the proposed framework achieves effective performance under low supervision settings with almost imperceptible computational and memory overheads added to the usage of pre-trained language model outperforming its initial performance by 70\% in terms of example-based F1 score.
Muberra Ozmen, Joseph Cotnareanu, Mark Coates
2023-09-24T04:12:52Z
http://arxiv.org/abs/2309.13543v1
Substituting Data Annotation with Balanced Updates and Collective Loss in Multi-label Text Classification + ###### Abstract Multi-label text classification (MLTC) is the task of assigning multiple labels to a given text, and has a wide range of application domains. Most existing approaches require an enormous amount of annotated data to learn a classifier and/or a set of well-defined constraints on the label space structure, such as hierarchical relations which may be complicated to provide as the number of labels increases. In this paper, we study the MLTC problem in annotation-free and scarce-annotation settings in which the magnitude of available supervision signals is linear to the number of labels. Our method follows three steps, (1) mapping input text into a set of preliminary label likelihoods by natural language inference using a pre-trained language model, (2) calculating a signed label dependency graph by label descriptions, and (3) updating the preliminary label likelihoods with message passing along the label dependency graph, driven with a collective loss function that injects the information of expected label frequency and average multi-label cardinality of predictions. The experiments show that the proposed framework achieves effective performance under low supervision settings with almost imperceptible computational and memory overheads added to the usage of pre-trained language model outperforming its initial performance by 70% in terms of example-based F1 score. ## 1 Introduction Multi-label text classification (MLTC) is the task of selecting the correct subset of labels for each text sample in a corpus. MLTC has numerous applications, such as tagging articles with the most relevant labels or recommending related search engine queries (Tsoumakas and Katakis, 2007; Varma, 2018). The majority of the literature (Liu et al., 2017; Nam et al., 2017; You et al., 2019; Ozmen et al., 2022) addresses the MLTC problem in a supervised setting, relying upon an abundance of annotated data. Despite their impressive classification performance on benchmark research datasets, most of these methods remain inapplicable in real-world applications due to the high cost of annotation. More recently, there has been an increasing focus on the single-label text classification problem with less (Gururangan et al., 2019) or no (Meng et al., 2020) annotated data. The adaptation of methods to the multi-label scenario, however, is not straightforward and often results in significant performance deterioration. One exceptional study by Shen et al. (2021) considers hierarchical multi-label text classification without annotated data, but the algorithm requires a strict label taxonomy. Such extensive, and restrictive, prior information on the label space structure is not generally available. In this work, we study generalized multi-label text classification, focusing on the limited annotated data setting while avoiding assumptions about the availability of strong structural information. We use a pre-trained language model to obtain preliminary label predictions using a natural language inference (NLI) framework. Pre-trained language models are trained on large-scale corpora which makes them better at recognizing patterns and relationships in natural language and allows them to handle rare words and phrases that may not appear frequently in a specific training dataset. 
We develop a framework that incorporates label dependencies and easily obtained supervision signals to adapt the predictions made by the pre-trained language model to the contextual properties of the specific data under study. Our experiments show that the proposed framework is efficient and effective in terms of improving the prediction performance. In summary, our key contributions are: 1. We develop a framework for multi-label text classification in two limited supervision settings: 1. (1) label descriptions and (2) expected label observation probabilities and average subset cardinality or, 2. (1) label descriptions and (2) a small set of annotated data. 2. We use multiple external linguistic knowledge bases: (1) a pre-trained language model that provides preliminary label likelihoods; (2) a set of pre-trained word embeddings to calculate signed label dependency graph. 3. We propose a model that updates preliminary likelihoods by modelling label dependencies based on balance theory and by effectively using weak supervision signals through aggregated predictions. ## 2 Related Work We identify three relevant lines of research: (1) zero-shot multi-label text classification; (2) weakly supervised single-label text classification; and (3) weakly supervised hierarchical multi-label text classification. Zero-Shot Multi-Label Text Classification.Zero-shot learning refers to a model's ability to recognize new labels that are not seen during training. This is typically achieved by learning semantic relationships between labels and input text through external linguistic knowledge bases (Yin et al., 2019). Many zero-shot learning methodologies have been developed for single-label text classification (Yin et al., 2019; Zhang et al., 2019, 2022; Ding et al., 2022). The zero-shot multi-label text classification problem remains much less explored. Most existing work specializes in biomedical text classification, namely Automatic ICD (i.e., International Classification of Diseases) coding (Rios and Kavuluru, 2018; Song et al., 2020). Rios and Kavuluru (2018) use label descriptions to generate a feature vector for each label and employ a two layer graph convolutional network (GCN) (Kipf and Welling, 2017) to encode the hierarchical label structure. Song et al. (2020) propose a framework that exploits the hierarchical structure and label descriptions to construct relevant keyword sets through an adversarial generative model. Although the setting is similar to ours in terms of problem definition, these methods rely heavily on substantial annotated data being available during training. Weakly Supervised Single-Label Text Classification.This problem assumes that there is no access to annotated data, but the full label set, with label names, and descriptions or keywords, is available. Usually, methods employ an iterative approach, building a vocabulary of keywords for each label and evaluating the overlap between the label vocabularies and input text content. Meng et al. (2020) use a pre-trained language model, BERT (Devlin et al., 2019), to generate a list of alternative words for each label. By comparing the text, the list of candidate replacements, it is determined which words in the text are potentially class-indicative. The method is effective, but computationally very expensive, since it requires running the pre-trained language model on every word in the labelled corpus. In addition, adaptation to a multi-label scenario requires training a binary classifier for each label. 
Mekala and Shang (2020) argue that forming the keyword vocabulary for labels independent from the context of the input text makes it impossible for the model to differentiate between different usages of the same word. By using BERT (Devlin et al., 2019) to build context vectors, they propose a method that can associate different meanings with different labels. Zhang et al. (2021) observe that treating keywords of labels independently ignores the information embedded in keyword correlations. By building a keyword graph, the method they propose can take into account correlations using a graph neural network classifier. Existing techniques for weakly supervised text classification do not consider the multi-label classification task, and are challenging to extend to this setting because they do not account for label dependencies. The reliance on keywords limits the scope of their applicability. Weakly Supervised Hierarchical Multi-label Text Classification.For the setting where labels are organized in a hierarchical structure, most recent methods train a multi-label classifier in a supervised fashion using GCN based architectures to encode the hierarchical relations (Peng et al., 2018; Huang et al., 2019; Zhou et al., 2020). Few methods have addressed the weakly supervised setting; an exception is the method proposed by Shen et al. (2021), which requires only label surface names in addition to category-subcategory relations represented as a directed acyclic graph. The method involves calculating a similarity score between each document-label pair using a pre-trained NLI model, and then traversing the hierarchy tree based on the similarity scores. The approach performs very well, but is limited to the setting where there is a strict hierarchy among the labels. ## 3 Problem Statement Given a set of labels \(\mathcal{L}=\{l,\varphi_{l}\}_{l=1}^{L}\) where \(\varphi_{l}\in\Phi\) represents the textual description of the \(l^{\text{th}}\) label and \(L\) is the total number of unique labels, in multi-label text classification a sample \(i\) is associated with input text content \(\vartheta_{i}\in\Theta\) and a subset of labels \(S_{i}\subset\mathcal{L}\). The aim is to design a classifier that can predict output labels by input text content \(f:\Theta\mapsto\mathbb{S}=\mathcal{P}(\mathcal{L})\backslash\emptyset\) where \(\mathcal{P}(.)\) denotes the power set function. Let \(\hat{S}_{i}\) denote the label subset predicted by the classifier for sample \(i\) i.e., \(f(\vartheta_{i})=\hat{S}_{i}\). The quality of estimation can be evaluated by a variety of performance metrics that measure the similarity between the predicted \(\hat{S}_{i}\) and the ground-truth label subset \(S_{i}\). In our experiments we employ Hamming distance between binary label vectors as the primary performance metric, but we compare algorithms using multiple other assessment criteria. In this work, we consider three scenarios with different levels of supervision used for learning \(f(\cdot)\) during training. Overall, the available supervision resources under consideration are defined as follows: * _Contextual resources_ * _Training data:_ We are given a collection of samples \(\{\vartheta_{i}\}_{i\in\mathcal{D}}\) without the ground-truth labels for learning \(f(\cdot)\). * _Label descriptions:_ Labels are meaningful, i.e., they are not a set of codes or indexes, and there is a sequence of words associated with each label that provides a description. 
We denote the set of possible labels and their corresponding descriptions by \(\mathcal{L}=\{l,\varphi_{l}\}_{l=1}^{L}\), where \(\varphi_{l}\) represents the textual description of the \(l^{\text{th}}\) label and \(L\) is the total number of unique labels. We assume that \(L\) is known and covers both training and test samples. * _Average subset cardinality:_ The expected number of labels per sample \(\kappa=\mathbb{E}\left(|S|\right)\) is provided. * _Label observation probabilities:_ For each label, the _a priori_ probability of inclusion of that label in a subset, \(\lambda_{l}=p(l\in S)\), is provided. * _Annotated data:_ There is a set of training data \(\{\vartheta_{i},S_{i}\}_{i\in\mathcal{D}_{\text{A}}}\) with provided ground-truth label subsets, such that \(\mathcal{D}_{\text{A}}\subset\mathcal{D}_{\text{train}}\) and \(|\mathcal{D}_{\text{A}}|\ll|\mathcal{D}_{\text{train}}|\). * _External resources_ * _Tokenizer:_ We are given access to a pre-trained tokenization function with vocabulary \(\mathcal{V}\) which is able to convert the input text content and label descriptions into a sequence of tokens, i.e., given an input text \(\tau\in\Theta\cup\Phi\), \(f_{\text{tokenizer}}(\tau)=(t_{1},\dots,t_{s})\) such that \(t_{i}\in\mathcal{V}\) and \(s\) is the length of the input sequence. * _Language model:_ We are given access to a pre-trained natural language inference model \(f_{\text{NLI}}(\mathcal{H},\mathcal{P})\) with vocabulary \(\mathcal{V}\) that calculates true (entailment) \(q\), undetermined (neutral) \(\tilde{q}\) or false (contradiction) \(\bar{q}\) probabilities of a hypothesis sequence \(\mathcal{H}=(h_{1},\dots,h_{s_{h}})\) where \(h_{i}\in\mathcal{V}\), given a premise sequence \(\mathcal{P}=(p_{1},\dots,p_{s_{p}})\) where \(p_{j}\in\mathcal{V}\), such that \(q+\tilde{q}+\bar{q}=1\). * _Word embeddings:_ We are given access to a set of pre-trained \(d\)-dimensional word embeddings for the (tokens composing) label descriptions, i.e., \(f_{\text{WE}}(t)=\mathbf{e}\) where \(\mathbf{e}\in\mathbb{R}^{d}\) denotes the embedding of token \(t\in\mathcal{V}\). We consider three different scenarios of supervision. In all scenarios, we are given _training data_, _label descriptions_ and _external resources_. The inputs of each scenario are summarized in Table 1. We use a test set \(\{j,\vartheta_{j},S_{j}\}_{j\in\mathcal{D}_{\text{test}}}\) such that \(\mathcal{D}\cap\mathcal{D}_{\text{test}}=\emptyset\) to evaluate the performance in all cases. * _Annotation-Free:_ In this scenario, we do not use any annotated data to learn the classifier but require supervision on average subset cardinality and label observation probabilities. * _Scarce-Annotation:_ In this scenario, we have access to a small set of annotated data used for training; however, average subset cardinality and label observation probabilities are not provided. * _Domain-Supervisor:_ In this scenario, both a small set of annotated data and information regarding average subset cardinality and label observation probabilities are available. 
\begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline Problem Setting & \multicolumn{4}{c}{Contextual Supervision} & \multicolumn{4}{c}{External Supervision} \\ & \(\mathcal{D}\) & \(\mathcal{L}\) & \(\kappa\) & \(\lambda_{l}\) & \(\mathcal{D}_{\text{A}}\) & \(f_{\text{Tokenizer}}\) & \(f_{\text{NLI}}\) & \(f_{\text{WE}}\) \\ \hline Annotation-Free & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ Scarce-Annotation & \(\checkmark\) & \(\checkmark\) & & & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ Domain-Supervisor & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of problem settings with varying supervision signals ## 4 Methodology Our proposed framework Balanced Neighbourhoods and Collective Loss (BNCL) for multi-label text classification consists of three components: (1) _input transformation_, which maps input text into preliminary label predictions by natural language inference; (2) _parameter preparation_, which involves calculation of a label dependency graph and mean data statistics; and (3) _model update_, which updates the predictions obtained at the first stage. In this section, we share the details for each of these procedures. ### Input Transformation The aim in natural language inference (NLI) is to determine whether a _hypothesis_ is true (entailment), undetermined (neutral) or false (contradiction) based on a given _premise_. Yin et al. (2019) formulate text classification as an NLI problem by treating input text as a premise and converting labels into hypotheses. To exemplify, let us consider a topic detection task on customer reviews with two possible topics of 'product' and 'delivery', for which the premise-hypothesis pairs could be developed as follows: \begin{tabular}{l l} Premise: & Hypothesis: & Anticipated NLI output: \\ \hline The material is very soft. & This review is about **product**. & entailment \\ The material is very soft. & This review is about **delivery**. & contradiction \\ The parcel did not arrive on time. & This review is about **delivery**. & entailment \\ \end{tabular} When multiple true classes are not allowed (i.e., single-label classification), the entailment probabilities are compared and the largest is selected as the predicted class. In the multi-label scenario the entailment and contradiction probabilities are compared for each label independently in terms of entailment and contradiction probabilities, i.e., the problem is converted to binary relevance by ignoring neutral probabilities. For text classification, a predicted neutral for a label-specific hypothesis can be interpreted as the hesitancy of the language model to make a decision. The initial component of our proposed framework involves transforming the input text samples into a set of label-specific hypothesis probabilities using the NLI approach. This operation translates the input feature space into a 3-channel label space (i.e., entailment, neutral and contradiction probabilities). The procedure can be summarized as follows: **Step 1: Convert corpus into premises, labels into hypotheses.** Following Yin et al. 
(2019), we build the hypothesis corresponding to label \(l\) as "This is about \(\varphi_{l}\)", where \(\varphi_{l}\) is the label description, and we calculate the corresponding sequence of tokens \(\mathcal{H}_{l}=f_{\text{tokenizer}}(\text{``This is about }\varphi_{l}\text{''})\). Similarly, we treat each input text with content \(\vartheta\) as a premise and calculate the corresponding sequence of tokens, \(\mathcal{P}=f_{\text{tokenizer}}(\vartheta)\). **Step 2: Query premise and hypothesis pairs.** Given a premise \(\mathcal{P}\), we query \(f_{\text{NLI}}(.)\) with all hypotheses \(\{\mathcal{H}_{l}\}_{l\in\mathcal{L}}\) to calculate \(\{(q_{l},\tilde{q}_{l},\bar{q}_{l})\}_{l\in\mathcal{L}}\). So now an input text is represented as a set of entailment, neutral and contradiction probabilities over labels: \[\vartheta\;\xrightarrow{\;\mathcal{L}=\{l,\varphi_{l}\}_{l=1}^{L}\;}\;\mathbf{q}=(q_{1},\ldots,q_{L}),\;\tilde{\mathbf{q}}=(\tilde{q}_{1},\ldots,\tilde{q}_{L}),\;\bar{\mathbf{q}}=(\bar{q}_{1},\ldots,\bar{q}_{L})\in[0,1]^{L}\enspace, \tag{1}\] where \(q_{l}\), \(\tilde{q}_{l}\) and \(\bar{q}_{l}\) correspond to the probability of the hypothesis corresponding to label \(l\) being true, undetermined, or false, respectively, given the premise \(\vartheta\). Note that \(q_{l}+\tilde{q}_{l}+\bar{q}_{l}=1\), so if the representation is reduced to entailment and contradiction probabilities there is no loss of information. Predicting entailment, neutral and contradiction probabilities for label-hypotheses by this procedure does not require any training with labelled data, and therefore does not incur any substantial annotation cost. However, the predictions rely only on external language-modelling resources and are not tailored to the context associated with the dataset. We argue that the classification decisions can be enhanced by (1) modelling label dependencies with the help of label-specific features; and (2) incorporating supervision that is cheaper to obtain than a large amount of annotated data. ### Parameter Preparation Given label-hypothesis representations of inputs \(\{(\mathbf{q}_{i},\tilde{\mathbf{q}}_{i},\bar{\mathbf{q}}_{i})\}_{i\in\mathcal{D}}\) as features, we learn an update function which requires (1) a signed label dependency graph \(\mathcal{G}=(\mathcal{V},\mathcal{E}^{+},\mathcal{E}^{-})\), where labels are represented as vertices \(\mathcal{V}=\{1,\ldots,L\}\), and edges are defined as tuples \((u,v)\in\mathcal{E}^{+}\) (\(\in\mathcal{E}^{-}\)) indicating a positive (negative) dependency edge between labels \(u\) and \(v\); and (2) average subset cardinality \(\kappa\) and label observation probabilities \(\lambda_{l}\) as dataset-specific hyper-parameters. We form the label dependency graph using the following procedure. Given label descriptions \(\mathcal{L}\) and word embeddings \(f_{\text{WE}}\), for each label \(l\) in \(\mathcal{L}=\{l,\varphi_{l}\}\), the label description is tokenized and the corresponding label embedding is calculated as the average of the word embeddings of the tokens that compose the label, i.e., \(\mathbf{e}_{l}=\sum_{t\in\mathcal{T}_{l}}\mathbf{e}_{t}/s_{l}\), where \(\mathcal{T}_{l}=f_{\text{tokenizer}}(\varphi_{l})\) denotes the sequence of tokens and \(s_{l}=|\mathcal{T}_{l}|\) is the length of the sequence. Afterwards, we calculate the cosine similarity between the embeddings of all label pairs \(u,v\in\mathcal{L}\) by \(d_{u,v}=\frac{\mathbf{e}_{u}\cdot\mathbf{e}_{v}}{\|\mathbf{e}_{u}\|\,\|\mathbf{e}_{v}\|}\). 
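A minimal sketch of this label-embedding step, assuming `tokenize` and `word_vectors` are placeholders for the tokenizer \(f_{\text{tokenizer}}\) and a GloVe-style embedding lookup \(f_{\text{WE}}\) (not a specific library API):

```python
import numpy as np

def label_embedding(description, tokenize, word_vectors):
    """e_l: mean of the word embeddings of the tokens in a label description."""
    tokens = tokenize(description)
    return np.stack([word_vectors[t] for t in tokens]).mean(axis=0)

def label_similarities(label_descriptions, tokenize, word_vectors):
    """d_{u,v}: pairwise cosine similarities between all label embeddings."""
    E = np.stack([label_embedding(d, tokenize, word_vectors) for d in label_descriptions])
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    return E @ E.T  # L x L similarity matrix

# Toy usage with a whitespace tokenizer and random 50-d "embeddings".
rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=50) for w in ["interest", "rates", "unemployment"]}
print(label_similarities(["interest rates", "unemployment"], str.split, vocab).round(2))
```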
Finally, the distances between label pairs are binarized by comparison to positive and negative edge thresholds \(\delta^{+}\) and \(\delta^{-}\): \(\mathcal{E}^{+}=\{(u,v):d_{u,v}\geq\delta^{+}\}\) and \(\mathcal{E}^{-}=\{(u,v):d_{u,v}\leq\delta^{-}\}\). Average subset cardinality and label observation probabilities are assumed to be provided in the _annotation-free_ and _domain-supervisor_ settings. In _scarce-annotation_, we estimate both average statistics using the annotated set of data, i.e., \(\hat{\kappa}=\frac{\sum_{i\in\mathcal{D}_{\text{A}}}|S_{i}|}{|\mathcal{D}_{\text{A}}|}\) and \(\hat{\lambda}_{l}=\frac{\sum_{i\in\mathcal{D}_{\text{A}}}\mathbf{1}_{l\in S_{i}}}{|\mathcal{D}_{\text{A}}|}\). ### Model Update Given the signed label dependency graph \(\mathcal{G}=(\mathcal{V},\mathcal{E}^{+},\mathcal{E}^{-})\), let \(\mathbf{A}^{+}\in\{0,1\}^{L\times L}\) and \(\mathbf{A}^{-}\in\{0,1\}^{L\times L}\) denote the adjacency matrices corresponding to positive \(\mathcal{E}^{+}\) and negative \(\mathcal{E}^{-}\) edges, respectively. \[\mathbf{A}^{+}_{ij}=\begin{cases}1,&\text{if there is a positive edge between $i$ and $j$}\\ 0,&\text{otherwise}\end{cases},\quad\mathbf{A}^{-}_{ij}=\begin{cases}1,&\text{if there is a negative edge between $i$ and $j$}\\ 0,&\text{otherwise}\end{cases} \tag{2}\] Since the label dependency graph is signed, finding the \(k\)-hop neighbourhoods for \(k{>}1\) requires considering the interaction of negative and positive edges. Derr et al. (2018) extend the graph convolutional network (GCN) (Kipf and Welling, 2017) to signed networks based on balance theory, which states that a triad is balanced if and only if the number of negative edges is even; _the friend of my friend is my friend and the enemy of my enemy is my friend._ Based on balance theory, we define the \(k\)-hop dependencies \(\mathbf{D}^{(k,+)}\) and \(\mathbf{D}^{(k,-)}\) recursively as follows: \[\mathbf{D}^{(k,+)}=\left(\mathbf{A}^{+}\right)^{\mathrm{T}}\mathbf{D}^{(k-1,+)}+\left(\mathbf{A}^{-}\right)^{\mathrm{T}}\mathbf{D}^{(k-1,-)} \tag{3}\] \[\mathbf{D}^{(k,-)}=\left(\mathbf{A}^{+}\right)^{\mathrm{T}}\mathbf{D}^{(k-1,-)}+\left(\mathbf{A}^{-}\right)^{\mathrm{T}}\mathbf{D}^{(k-1,+)} \tag{4}\] where \(\mathbf{D}^{(1,+)}=\mathbf{A}^{+}\) and \(\mathbf{D}^{(1,-)}=\mathbf{A}^{-}\). For \(k=2\), this procedure corresponds to the following neighbourhoods: \[\mathbf{D}^{(1,+)}=\mathbf{A}^{+}\rightarrow\text{friends}\] \[\mathbf{D}^{(1,-)}=\mathbf{A}^{-}\rightarrow\text{enemies}\] \[\mathbf{D}^{(2,+)}=\left(\mathbf{A}^{+}\right)^{\mathrm{T}}\mathbf{A}^{+}+\left(\mathbf{A}^{-}\right)^{\mathrm{T}}\mathbf{A}^{-}\rightarrow\text{friends of friends + enemies of enemies}\] \[\mathbf{D}^{(2,-)}=\left(\mathbf{A}^{+}\right)^{\mathrm{T}}\mathbf{A}^{-}+\left(\mathbf{A}^{-}\right)^{\mathrm{T}}\mathbf{A}^{+}\rightarrow\text{friends of enemies + enemies of friends}\] Finally, balanced neighbourhoods for label \(v\in\mathcal{V}\) at hop \(k\in\{1,\ldots,K\}\) are formed as follows: \[\mathcal{N}^{(k,+)}_{v}=\{u:\mathbf{D}^{(k,+)}_{uv}>0\text{ for }u\in\mathcal{V}\}, \tag{5}\] \[\mathcal{N}^{(k,-)}_{v}=\{u:\mathbf{D}^{(k,-)}_{uv}>0\text{ for }u\in\mathcal{V}\}. \tag{6}\] For each sample associated with entailment \(\mathbf{q}=(q_{1},\ldots,q_{L})\) and contradiction \(\bar{\mathbf{q}}=(\bar{q}_{1},\ldots,\bar{q}_{L})\) probabilities, we initialize its hidden representation by \(\mathbf{h}^{(0)}=\mathbf{q}\) and \(\bar{\mathbf{h}}^{(0)}=\bar{\mathbf{q}}\). 
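The signed-graph construction and the balanced \(k\)-hop recursion of Eqs. (3)–(6) can be sketched as follows; removing self-loops is an assumption made here for clarity rather than something stated in the text.

```python
import numpy as np

def signed_adjacency(D, delta_pos, delta_neg):
    """Binarize label similarities into positive/negative adjacency matrices."""
    A_pos = (D >= delta_pos).astype(int)
    A_neg = (D <= delta_neg).astype(int)
    np.fill_diagonal(A_pos, 0)  # assumption: no self-loops
    np.fill_diagonal(A_neg, 0)
    return A_pos, A_neg

def balanced_neighbourhoods(A_pos, A_neg, K):
    """Balanced k-hop dependencies (Eqs. 3-4) and neighbourhood masks (Eqs. 5-6):
    friend-of-friend / enemy-of-enemy paths stay positive, mixed paths turn negative."""
    D_pos, D_neg = A_pos.copy(), A_neg.copy()
    masks = [(D_pos > 0, D_neg > 0)]
    for _ in range(2, K + 1):
        D_pos, D_neg = (A_pos.T @ D_pos + A_neg.T @ D_neg,
                        A_pos.T @ D_neg + A_neg.T @ D_pos)
        masks.append((D_pos > 0, D_neg > 0))
    return masks  # masks[k-1][0][u, v] is True iff u is in N_v^(k,+)
```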
Given the balanced neighbourhoods \(\mathcal{N}^{(k,+)}_{v}\) and \(\mathcal{N}^{(k,-)}_{v}\), the hidden states \(\mathbf{h}^{(k)}=(h^{(k)}_{1},\ldots,h^{(k)}_{L})\) and \(\bar{\mathbf{h}}^{(k)}=(\bar{h}^{(k)}_{1},\ldots,\bar{h}^{(k)}_{L})\) are updated at layer \(k\) as follows: \[h^{(k)}_{v}=h^{(k-1)}_{v}+f_{\text{ReLU}}\left(\sum_{u\in\mathcal{N}^{(k,+)}_{v}}\mathbf{W}^{(k,+)}_{uv}h^{(k-1)}_{u}\right)+f_{\text{ReLU}}\left(\sum_{u\in\mathcal{N}^{(k,-)}_{v}}\overline{\mathbf{W}}^{(k,-)}_{uv}\bar{h}^{(k-1)}_{u}\right), \tag{7}\] \[\bar{h}^{(k)}_{v}=\bar{h}^{(k-1)}_{v}+f_{\text{ReLU}}\left(\sum_{u\in\mathcal{N}^{(k,-)}_{v}}\mathbf{W}^{(k,-)}_{uv}h^{(k-1)}_{u}\right)+f_{\text{ReLU}}\left(\sum_{u\in\mathcal{N}^{(k,+)}_{v}}\overline{\mathbf{W}}^{(k,+)}_{uv}\bar{h}^{(k-1)}_{u}\right), \tag{8}\] where \(\mathbf{W}^{(k,+)},\mathbf{W}^{(k,-)},\overline{\mathbf{W}}^{(k,+)},\overline{\mathbf{W}}^{(k,-)}\in\mathbb{R}^{L\times L}\) are learnable weights and \(f_{\text{ReLU}}(.)\) denotes the Rectified Linear Unit function, i.e., \(f_{\text{ReLU}}(x)=\max(0,x)\). Figure 1 depicts the layer updates. During the testing phase, for a sample \(i\), the set of predicted labels is determined by comparing the entailment and contradiction probabilities of each label independently, i.e., \(\hat{S}_{i}=\{l:p_{i,l}>\bar{p}_{i,l},\text{ for all }l\in\{1,\ldots,L\}\}\). Loss Function. Given the average subset cardinality \(\kappa\) and label observation probabilities \(\lambda_{l}\), the updates of entailment and contradiction probabilities are guided by a loss function composed of four components. Denote the final hidden states for a sample \(i\in\mathcal{D}\) by \(\mathbf{p}_{i}=\mathbf{h}_{i}^{(K)}\) and \(\bar{\mathbf{p}}_{i}=\bar{\mathbf{h}}_{i}^{(K)}\), where \(K\) is the total number of layers. The four components of the loss function are constructed as follows: * By definition, entailment and contradiction are mutually exclusive events for each sample and each label. Therefore, the sum of the two cannot be greater than one, and as the sum becomes closer to zero, the neutral probability (hesitation to make a decision on the presence or absence of the label for a given sample) increases. Since hesitancy is undesirable, we penalize the deviation of their summation from 1: \[\mathbb{L}_{1}=\sum_{i\in\mathcal{D}}||\mathbf{p}_{i}+\bar{\mathbf{p}}_{i}-\mathbf{1}||_{2}.\] (9) The entailment and contradiction representations are initialized to be non-negative. Since all the messages passed are non-negative, the hidden states remain non-negative (Equation 7). This, together with the \(\mathbb{L}_{1}\) term, motivates the model to protect the probability interpretation of hidden states while reducing the neutral probability. * For a specific training set, the classification decisions on training samples can impact each other. For example, a rare label may have very low entailment probability for all samples. Assuming the training data are representative, the samples to tag with that label can be selected by taking into account the expected observation probability. To ensure this, we penalize the difference between observed and expected probability for each label over training instances: \[\mathbb{L}_{2}=\sum_{l=1}^{L}\left(|\mathcal{D}|\times\lambda_{l}-\sum_{i\in\mathcal{D}}\mathbf{1}_{p_{i,l}>\bar{p}_{i,l}}\right)^{2},\] (10) where \(\mathbf{1}_{p_{i,l}>\bar{p}_{i,l}}\) is \(1\) if the entailment probability of label \(l\) on sample \(i\) is larger than its contradiction probability. 
Figure 1: A toy example illustrating balanced neighbourhood update layers. Red and green edges represent negative and positive neighbourhoods at each layer. The initial and updated entailment, neutral, and contradiction probabilities are provided in tables for a sample with ground truth labels “A”, “B” and “C”. Edge attributes represent the learned weights that correspond to entailment (upper sequence) and contradiction (lower sequence) state updates. The initial prediction accepts “B”, rejects “D” and “F”, and does not make a decision on “A”, “C” and “E”. The first hop update with the signed label dependency graph improves the prediction by adding “C” to the predictions but rejects the relevant label “A”. At the second hop, “A” is connected to one high-entailment label “B” positively (as it is the enemy of an enemy) and one high-contradiction label “D” negatively (as it is the friend of an enemy), which helps to improve the prediction by adding “A” to the predictions. Note that the neutral probabilities drop through the updates. * It would be undesirable for some samples to have very high subset cardinality while others have zero. Therefore, we penalize the deviation from average subset cardinality for each sample: \[\mathbb{L}_{3}=\sum_{i\in\mathcal{D}}\left(\kappa-\sum_{l=1}^{L}\mathbf{1}_{p_{i,l}>\bar{p}_{i,l}}\right)^{2}.\] (11) * In _scarce-annotation_ and _domain-supervisor_ settings, we have a small set of annotated data \(\mathcal{D}_{\text{A}}\). Let \(\mathbf{y}_{i}=(y_{i,1},\dots,y_{i,L})\) denote the binary vector that corresponds to ground truth labels \(S_{i}\) of sample \(i\): \[\mathbb{L}_{4}=\sum_{i\in\mathcal{D}_{\text{A}}}\sum_{l=1}^{L}-y_{i,l}\log(p_{i,l})-(1-y_{i,l})\log(\bar{p}_{i,l}).\] (12) In order to make the term \(\mathbf{1}_{p_{i,l}>\bar{p}_{i,l}}\) differentiable, we use a sharpened version of the sigmoid with a constant \(C>1\): \[\frac{1}{1+e^{-C\times(p_{i,l}-\bar{p}_{i,l})}}\approx\mathbf{1}_{p_{i,l}>\bar{p}_{i,l}}. \tag{13}\] The final loss function follows: \[\mathbb{L}=\mathbb{L}_{1}+\alpha_{2}\mathbb{L}_{2}+\alpha_{3}\mathbb{L}_{3}+\alpha_{4}\mathbb{L}_{4}, \tag{14}\] where \(\{\alpha_{j}\}_{j=2}^{4}\) are hyperparameters used to scale the individual components of the loss function (sketched in code below). ## 5 Experiments Datasets. For our experiments we use two multi-label text classification datasets: Reuters21578 (Lewis et al., 2004), which is a collection of newswire stories; and StackEx-Philosophy (Charte and Charte, 2015), which is a collection of posts in Stack Exchange Philosophy forums. The dataset statistics are provided in Appendix A. For both datasets, the label set is formed by the topics of the sample texts. For example, in Reuters21578 "interest rates" and "unemployment" are label descriptions, and in StackEx-Philosophy "ethics" and "skepticism" are label descriptions. Footnote 1: available at [https://archive.ics.uci.edu/ml/datasets/reuters-21578+text+categorization+collection](https://archive.ics.uci.edu/ml/datasets/reuters-21578+text+categorization+collection) Footnote 2: available at [https://archive.org/download/stackexchange/philosophy.stackexchange.com.7z](https://archive.org/download/stackexchange/philosophy.stackexchange.com.7z) Metrics. In addition to Hamming accuracy (HA), we use example-based F1 score (ebF1), subset accuracy (ACC), micro-averaged F1 score (miF1), and macro-averaged F1 score (maF1) as metrics to evaluate the performance of our method. Subset accuracy measures the fraction of times that an algorithm identifies the correct subset of labels for each instance. 
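Returning to the collective loss of Eqs. (9)–(14), a minimal PyTorch-style sketch is given below; the tensor shapes, the small epsilon used to stabilise the logarithms, and the default sharpening constant are illustrative assumptions rather than the authors' code.

```python
import torch

def collective_loss(p, p_bar, lam, kappa, alpha2, alpha3, C=10.0):
    """Annotation-free terms: L1 (Eq. 9) + alpha2*L2 (Eq. 10) + alpha3*L3 (Eq. 11),
    using the sharpened sigmoid of Eq. (13) as a differentiable indicator.
    p, p_bar: [N, L] entailment / contradiction states; lam: [L]; kappa: scalar."""
    ind = torch.sigmoid(C * (p - p_bar))                   # ~ 1[p > p_bar]
    l1 = torch.norm(p + p_bar - 1.0, dim=1).sum()          # penalize residual "neutral" mass
    l2 = ((p.shape[0] * lam - ind.sum(dim=0)) ** 2).sum()  # match label observation frequencies
    l3 = ((kappa - ind.sum(dim=1)) ** 2).sum()             # match expected subset cardinality
    return l1 + alpha2 * l2 + alpha3 * l3

def annotated_loss(p, p_bar, y, eps=1e-8):
    """L4 (Eq. 12): cross-entropy term on the small annotated subset D_A."""
    return -(y * torch.log(p + eps) + (1 - y) * torch.log(p_bar + eps)).sum()

# Toy usage: 4 samples, 6 labels.
p, p_bar = torch.rand(4, 6), torch.rand(4, 6)
print(collective_loss(p, p_bar, lam=torch.full((6,), 0.3), kappa=2.0, alpha2=0.1, alpha3=0.5))
```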
The example-based F1 score is aggregated over samples and the macro-averaged F1 score over labels. The micro-averaged F1 score takes the average of the F1 score weighted by the contribution of each label, and thus takes label imbalance into account. Expressions for the metrics are provided in equations 16-20 in Appendix B. Baselines. To the best of our knowledge, the problem settings under consideration in this study have not been explored directly in the literature. The weakly supervised text classification problem (in which only label descriptions are given) has been studied primarily in the single-label classification context (Meng et al., 2020; Mekala and Shang, 2020; Zhang et al., 2021; Zeng et al., 2022). Some works impose constraints such as the requirement that all label-indicative keywords should be seen in the corpus. We attempted to adapt one of the state-of-the-art methods, LOTClass (Meng et al., 2020), to the multi-label scenario, but encountered errors regarding these constraints. Execution was only possible if more than half of the labels in the dataset were excluded. TaxoClass (Shen et al., 2021) considers multi-label classification with no annotated data, but it requires a hierarchy tree which represents category-subcategory types of relations between labels. Furthermore, all labels must be aligned exactly with the hierarchy tree. We compare to: * **0Shot-TC (Yin et al., 2019)** (multi-label version), which uses the NLI formulation of the text classification task. Estimated entailment/contradiction probabilities per label are used directly to make classification decisions. Since the same formulation is used for our input transformation, this comparison reveals the impact of our "model update" module on multi-label classification performance in contrast to using raw language model output. * **ML-KNN (Zhang and Zhou, 2007)**, a multi-label classifier originally designed for the supervised setting, which finds the nearest examples to a test instance using the k-Nearest Neighbors algorithm and then selects the assigned labels using Bayesian inference. * **ML-ARAM (Benites and Sapozhnikova, 2015)**, a multi-label classifier designed for the supervised setting, which uses Adaptive Resonance Theory (ART) based clustering and Bayesian inference to calculate label probabilities. Experimental settings. We examine performance in three different experimental settings, as identified in the problem statement. In "Annotation-Free", no annotated data is available for training, but we assume knowledge of average subset cardinality and label observation probabilities. In "Scarce-Annotation", a small annotated dataset is available. In our experiments, the annotated dataset size is set to \(L\). In "Domain-Supervisor", in addition to the annotated dataset, knowledge concerning the average subset cardinality and label observation probabilities is available. Implementation. We transform the input using the pre-trained model BART (Lewis et al., 2020), fine-tuned on MNLI (Williams et al., 2018), a large corpus composed of hypothesis-premise pairs, and its corresponding tokenizer. For both datasets, the maximum sequence length of the tokenizer is set to 128. We use GloVe (Pennington et al., 2014) to generate word embeddings to calculate the label graph from the label descriptions. The positive and negative edge thresholds \(\delta^{+}\) and \(\delta^{-}\) that control label graph density are set by top-bottom percentiles of the overall distribution of the distances between label embeddings. 
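For the input-transformation step, a hedged sketch of how the per-label entailment/neutral/contradiction probabilities could be obtained with an MNLI-finetuned BART checkpoint from the `transformers` library follows; the exact checkpoint name is an assumption, since the paper does not specify one.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "facebook/bart-large-mnli"  # assumed checkpoint; any BART/MNLI model works similarly
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()
label2id = {k.lower(): v for k, v in model.config.label2id.items()}

@torch.no_grad()
def nli_probs(premise, label_descriptions, max_length=128):
    """Return per-label (q, q_tilde, q_bar): entailment / neutral / contradiction
    probabilities of the hypothesis 'This is about {label}.' given the input text."""
    hypotheses = [f"This is about {phi}." for phi in label_descriptions]
    enc = tokenizer([premise] * len(hypotheses), hypotheses, return_tensors="pt",
                    padding=True, truncation=True, max_length=max_length)
    probs = model(**enc).logits.softmax(dim=-1)
    return (probs[:, label2id["entailment"]],
            probs[:, label2id["neutral"]],
            probs[:, label2id["contradiction"]])

q, q_tilde, q_bar = nli_probs("The parcel did not arrive on time.", ["product", "delivery"])
```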
For each dataset, it is selected from the following list of percentile pairs \([(5\%,95\%),(10\%,90\%),(30\%,70\%)]\). When the average subset cardinality and label observation probabilities are assumed to be provided, they are calculated based on the whole set of training data. In the scarce-annotation and domain-supervisor settings, the size of the annotated dataset is \(L\) (i.e., \(|\mathcal{D}_{\mathrm{A}}|=|\mathcal{L}|=L\).). The annotated examples are randomly selected from the training set. The sigmoid sharpening factor \(C\) is set to 10. The procedure to select this value was: (1) sample a small set of examples; (2) compare their entailment and contradiction probabilities to determine predicted label subsets; (3) calculate the sharpened sigmoid function value, successively increasing the integer \(C\) by one; and (4) choose the smallest integer \(C\) such that the output for all sample/predicted label pairs is greater than 0.9999. The loss function scaling factors \(\{\alpha_{j}\}_{j=2}^{4}\) are tuned using grid search over \(\alpha_{2},\alpha_{3}\in\{0.1,0.5,1\}\) and \(\alpha_{4}\in\{1,10,100\}\). The selected values for both datasets are at \(\alpha_{2}=0.1,\alpha_{3}=0.5,\alpha_{4}=100\). If not stated otherwise, the number of update layers is set to 2 because a smaller number caused validation performance to be too sensitive to label graph density, and a greater number reduced the performance on the validation data. The model is trained with a batch size of 128 for 30 epochs as it is observed that validation performance does not improve after 30 epochs. The Adam (Kingma and Ba, 2015) optimizer is used to compute gradients and update parameters with the initial learning rate of \(1\times 10^{-3}\) and beta coefficients of \((0.8,0.9)\). The learning rate is updated with a step size 10 for a 10% decay rate. The results for ML-KNN and ML-ARAM are obtained by implementations provided in the scikit-learn library (Pedregosa et al., 2011). These algorithms are both trained using the full sets of training data. In the comparison with these supervised algorithms, we train our algorithm using \(50\%\) of the annotations. Comparison with 0Shot-MLTC.Table 2 compares the performance of 0Shot-TC adapted to multi-label scenario and the performance of our proposed method, BNCL. We examine how BNCL performs in three settings. In the annotation-free setting, we see that BNCL achieves much better performance for all metrics. This indicates how valuable it is to construct the signed label dependency graph and use it to update the embeddings using the signed graph convolution network. Moving from the annotation-free to the scarce-annotation setting, subset accuracy improves by 63% and 43%, example-based F1 score by 37% and 25% and micro-averaged F1 score by 31% and 25%, on Reuters21578 and StackEx-Philosophy, respectively. This shows that having a small set of annotated data is very helpful. There is no meaningful performance difference between the scarce-annotation and domain-supervisor settings, which suggests that when a small amount of annotation is available, there is no need for supervision in terms of average label subset cardinality and label observation probabilities. 
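For reference, the training configuration described above corresponds to roughly the following setup; the `torch.nn.Linear` module is only a stand-in for the BNCL update layers and the batch is random data.

```python
import torch

model = torch.nn.Linear(10, 10)  # placeholder for the balanced-neighbourhood update layers
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.8, 0.9))
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.9)  # 10% decay every 10 epochs

for epoch in range(30):                           # 30 epochs, batch size 128 in the paper
    loss = model(torch.randn(128, 10)).sum()      # stand-in for the collective loss on one batch
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```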
\begin{table} \begin{tabular}{c c c c c c c c|c c c c} \hline \hline & & \multicolumn{6}{c}{_Reuters21578_} & \multicolumn{6}{c}{_StackEx-Philosophy_} \\ & & ACC & HA & ebF1 & miF1 & mraF1 & ACC & HA & ebF1 & miF1 & mraF1 \\ \hline 0Shot-MLTC & & 0.0834 & 0.9799 & 0.2778 & 0.2981 & 0.1844 & 0.001 & 0.8802 & 0.0924 & 0.0665 & 0.1528 \\ \hline BNCL & mean & 0.3159 & 0.9917 & 0.4613 & 0.5053 & 0.2184 & 0.0382 & 0.9902 & 0.2119 & 0.2423 & 0.2292 \\ _Annotation-Free_ & _std_ & _0.0446_ & _0.0007_ & _0.0393_ & _0.0343_ & _0.0034_ & _0.0035_ & _0.0001_ & _0.0042_ & _0.0035_ & _0.0054_ \\ \hline BNCL & mean & **0.5083** & **0.9944** & 0.6318 & 0.6595 & 0.2340 & 0.0547 & **0.9913** & **0.2655** & **0.3024** & 0.2304 \\ _Scarce-Amnotation_ & _std_ & _0.0134_ & _0.0001_ & _0.0131_ & _0.0101_ & _0.0091_ & _0.0030_ & _0.0002_ & _0.0055_ & _0.0047_ & _0.0091_ \\ \hline BNCL & mean & 0.5078 & **0.9944** & **0.6320** & **0.6606** & **0.2353** & **0.0551** & **0.9913** & 0.2648 & 0.3021 & **0.2309** \\ _Domain-Supervisor_ & _std_ & _0.0120_ & _0.0001_ & _0.0145_ & _0.0108_ & _0.0085_ & _0.0028_ & _0.0001_ & _0.0058_ & _0.0059_ & _0.0090_ \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison between the proposed method and 0Shot-MLTC in the Annotation-Free setting. The table also shows performance for the Scarce-Annotation and Domain-Supervisor settings. All results are calculated over 10 random initialization on the original train-test data splits. Comparison to Supervised Learning Methods.Table 3 compares the results of two supervised baseline methods which are trained with full set of training data to BNCL, which is trained with various ratios of data annotation. We find that BNCL with only 50% annotation achieves similar performance to the supervised baseline methods. We also find that BNCL with 100% performs better than the supervised methods, even though it is designed for the limited annotation settings and uses less annotated data in this setting. Sensitivity Study - Annotation Level.In order to observe the impact of the amount of annotated data, we perform a sensitivity study by changing the level of annotation in the domain-supervisor setting. In Figure 2, the performance for different levels of annotation is presented for both datasets in terms of Hamming accuracy, example-based F1 score, and micro-averaged F1 score. The pattern observable for all three metrics on both datasets is that even a small set of annotated data (as little as 1% of the training data) is capable of improving the performance. But as the amount of annotation increases, the improvement diminishes, and beyond 30%, there is much less value in further annotation. Ablation Study - Loss Function Components.In order to understand the impact of individual loss components, we perform an ablation study. We examine the impact of removing \(\mathbb{L}_{2}\), which targets matching of the label observation probabilities, and \(\mathbb{L}_{3}\), which aims to balance the subset cardinality. The study is conducted in the annotation-free setting on the StackEx-Philosophy dataset. 
Table 4 compares the results of the original loss function to configurations \begin{table} \begin{tabular}{l c c c c c|c c c c c} \hline \hline & \multicolumn{6}{c}{_Reuters21578_} & \multicolumn{6}{c}{_StackEx-Philosophy_} \\ & ACC & HA & ebF1 & miF1 & mzF1 & ACC & HA & ebF1 & miF1 & mzF1 \\ \hline ML-KNN & 0.6513 & 0.9956 & 0.7101 & 0.7228 & 0.2536 & **0.0904** & **0.9925** & 0.2866 & 0.3198 & 0.0993 \\ ML-ARAM & 0.4742 & 0.9923 & 0.6734 & 0.6265 & 0.1633 & 0.0622 & 0.9888 & 0.2045 & 0.1796 & 0.0075 \\ BNCL-100\% & **0.6674** & **0.9961** & **0.7772** & **0.720** & **0.2784** & 0.0803 & 0.9917 & **0.3394** & **0.3749** & 0.2387 \\ BNCL-50\% & 0.6039 & 0.9954 & 0.7120 & 0.7286 & 0.2589 & 0.0723 & 0.9917 & 0.3283 & 0.3642 & **0.2586** \\ BNCL-20\% & 0.5618 & 0.9949 & 0.6881 & 0.7028 & 0.2459 & 0.0643 & 0.9916 & 0.2948 & 0.3241 & 0.2418 \\ BNCL-5\% & 0.3784 & 0.9922 & 0.5763 & 0.5853 & 0.2439 & 0.0331 & 0.9880 & 0.2696 & 0.2813 & 0.2266 \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison with supervised baseline methods. BNCL outperforms the baselines with in the 100% Annotation setting and achieves equivalent performance for 50% Annotation. Performance degradation as the annotation level decreases is graceful. Figure 2: Sensitivity study with different amounts of annotated data (1) without the \(\mathbb{L}_{2}\) component; (2) without the \(\mathbb{L}_{3}\) component; and (3) without both. The results show that using label observation probabilities to guide the update of label-hypothesis probabilities significantly improves the performance. On the other hand, removing the \(\mathbb{L}_{3}\) component that is associated with balancing subset cardinality results in a relatively small deterioration in accuracy, and actually improves the example based and micro-averaged F1 scores. This is likely due to the power law distribution that label subset cardinalities follow in the StackEx-Philosophy dataset. Constraining each sample's label subset cardinality to an expected level may harm the example based performance especially in terms of most frequent labels (note that miF1 weighs infrequent labels less compared to mAF1). When both components are missing the performance drops dramatically. This indicates that either component can provide valuable regularizing information, but without both, the model update module is drawn towards poor representations. ## 6 Conclusion Summary and Contributions.In this study, we propose a framework for multi-label text classification in the absence of strong supervision signals. Our framework performs transfer learning using external knowledge bases, and exploits the benefits of modelling the dependencies between labels in order to focus the external supervision on domain-specific properties of the data. We project input text onto a label-hypothesis probability space using a pre-trained language model and then update representations using the guidance of label dependencies and aggregated predictions over the training data. To the best of our knowledge this is the first work that considers weakly supervised multi-label text classification problem when the label space is not strictly structured according to a set of label hierarchies. Limitations and Future Work.Extreme Multi-label Learning (XML) involves finding the most relevant subset of labels for each data point from an extremely large label set. The number of labels can scale to thousands or millions. 
Using our method in an extreme classification setting would be infeasible due to the computational overhead of the input transformation process (we need to calculate probabilities for every candidate label for every text example). One future work direction we would like to explore is developing an active learning based framework in order to select the labels to query for each input text. Another limitation associated with the proposed method is the inability to handle noise in the values provided by a domain-supervisor with respect to average subset cardinality or label observation probabilities. We also do not take into account any uncertainty in the signed label graph constructed from the label descriptions. Therefore, desirable follow up work involves improving our methodology by incorporating Bayesian approaches to account for uncertainty in the estimated parameters and label dependency graph. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline & & ACC & HA & ebf1 & miF1 & mAF1 \\ \hline BNCL & mean & **0.0382** & **0.9902** & 0.2119 & 0.2423 & **0.2292** \\ _Original_ & _std_ & _0.0035_ & _0.0001_ & _0.0042_ & _0.0035_ & _0.0054_ \\ \hline BNCL & mean & 0.0236 & 0.9899 & 0.1921 & 0.2222 & 0.2267 \\ _Removing_\(\mathbb{L}_{2}\) & _std_ & _0.0028_ & _0.0001_ & _0.0048_ & _0.0040_ & _0.0063_ \\ \hline BNCL & mean & 0.0347 & 0.9865 & **0.2373** & **0.2458** & 0.2025 \\ _Removing_\(\mathbb{L}_{3}\) & _std_ & _0.0085_ & _0.0018_ & _0.0136_ & _0.0032_ & _0.0083_ \\ \hline BNCL & mean & 0.0004 & 0.7996 & 0.0509 & 0.0374 & 0.0736 \\ _Removing_\(\mathbb{L}_{2}\) _and_\(\mathbb{L}_{3}\) & _std_ & _0.0005_ & _0.0092_ & _0.0027_ & _0.0016_ & _0.0034_ \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation study for loss function components on StackEx-Philosophy dataset
2309.08605
The IA Guide: A Breakdown of Intrinsic Alignment Formalisms
We summarize common notations and concepts in the field of Intrinsic Alignments (IA). IA refers to physical correlations involving galaxy shapes, galaxy spins, and the underlying cosmic web. Its characterization is an important aspect of modern cosmology, particularly in weak lensing analyses. This resource is both a reference for those already familiar with IA and designed to introduce someone to the field by drawing from various studies and presenting a collection of IA formalisms, estimators, modeling approaches, alternative notations, and useful references.
Claire Lamman, Eleni Tsaprazi, Jingjing Shi, Nikolina Niko Šarčević, Susan Pyne, Elisa Legnani, Tassia Ferreira
2023-09-15T17:58:42Z
http://arxiv.org/abs/2309.08605v2
# The Ia Guide: ###### Abstract We summarize common notations and concepts in the field of Intrinsic Alignments (IA). IA refers to physical correlations involving galaxy shapes, galaxy spins, and the underlying cosmic web. Its characterization is an important aspect of modern cosmology, particularly in weak lensing analyses. This resource is both a reference for those already familiar with IA and designed to introduce someone to the field by drawing from various studies and presenting a collection of IA formalisms, estimators, modeling approaches, alternative notations, and useful references. ###### Contents * 1 Introduction * 1.1 Reviews * 2 Ellipticity * 2.1 Ellipticity: 2D Formalism * 2.2 Ellipticity: Modeling and 3D Formalism * 2.3 Ellipticity: Additional Notations * 2.4 Ellipticity: References * 3 Shear * 3.1 Shear: Formalism * 3.2 Shear: Additional Notation * 3.3 Shear: References * 4 IA Correlation Function Notation * 4.1 Correlations: Formalism * 4.2 Correlations: Additional Notations * 4.3 Correlations: References * 5 IA Correlation Function Estimators * 5.1 IA Correlation Function: Formalism * 5.2 IA Correlation Function: Additional Notations * 5.3 IA Correlation Function: References * 6 3D IA Power Spectrum * 6.1 3D IA Power Spectrum: Formalism * 6.2 3D IA Power Spectrum: References * 7 2D IA Power Spectrum * 7.1 2D IA Power Spectra: Additional Notations * 7.2 2D IA Power Spectra: References * 8 Modeling * 8.1 Alignment Amplitude * 8.2 Linear Alignment model (LA) * 8.3 Nonlinear Alignment model (NLA) * 8.4 Tidal Alignment and Tidal Torquing model (TATT) * 8.5 Effective Field Theory (EFT) model * 8.6 Halo model * 8.7 Modeling: Observational status * 8.8 Self-calibration * 9 IA Applications * 9.1 Redshift-space distortion bias * 9.2 Cosmological applications ## 1 Introduction This resource is a condensed overview of quantities relevant for describing the _intrinsic alignment_ (IA) of galaxies. For scientists new to the field, it is a useful starting place which contains a broad introduction to IA and helpful references with more details and derivations. It is also structured to be a quick reference for those already familiar with IA. This is not a review article and not necessarily intended to be read beginning-to-end. Sections 2-7 each contain common formalisms of an IA estimator, brief pedagogical explanations with practical advice, alternative notations, and useful references. The remaining sections summarize IA modeling and applications. Terms in teal are hyperlinked to glossary entries at the end of the document. IA refers to correlations between galaxy shapes and between galaxy shapes and the underlying dark matter distribution. These arise naturally within our current understanding of galaxy formation, as confirmed by hydrodynamic simulations (Kiessling et al., 2015; Bhowmick et al., 2020; Samuroff et al., 2021). In the case of elliptical galaxies, shapes are elongated along the external gravitational field (Croft & Metzler, 2000; Catelan et al., 2001). The shapes of spiral galaxies are typically associated with their angular momentum, which arises from the torque produced by the external gravitational field (Heavens et al., 2000; Codis et al., 2015). The alignments of spiral galaxy shapes are much weaker than for ellipticals and so far have not been directly observed (Zjupa et al., 2020; Johnston et al., 2019; Samuroff et al., 2022). 
Some studies use galaxy spin instead of shapes to measure alignment (Lee, 2011), though here we focus on shapes as they can be more directly connected to cosmic shear and are most commonly used in observational studies. Lee & Erdogdu (2007) provides a pedagogical overview of the physics and formalisms of spin alignments. While IA can be used as a cosmological probe, historically it is most often studied as a contaminant of weak lensing. As light travels to us from distant galaxies, it is bent by the gravitational field of the large-scale structure of the Universe and thus we observe distorted galaxy images. When this effect is too small to be detected for individual galaxies, we say that we are in the weak lensing regime. The resulting shear in these galaxy shapes is known as cosmic shear, and is a primary tool used to probe cosmological parameters (Refregier, 2003). The correlations between observed galaxy shapes used to measure weak lensing are difficult to separate from those that arise from IA. Studies show that IA can account for a 30% error on the matter power spectrum amplitude as measured by cosmic shear (Hirata et al., 2007), making IA one of the most significant sources of systematic errors in weak lensing measurements. **Additional note: galaxy types** Throughout this guide, we refer to galaxies as "early-type" or "late-type", "blue" or "red", and "elliptical" or "spiral", depending on the reference we are following. Early-type galaxies are usually elliptical or lenticular and tend to be redder in color. Late-type galaxies are spiral and typically blue. The terminology "early" and "late" does not refer to the age of the galaxy, but to their ordering in the Hubble Sequence (also known as "The Tuning Fork") when Hubble initially thought that ellipticals evolve into spirals (Hubble, 1926). Although sometimes used interchangeably, it's important to keep in mind that populations may be defined differently in different analyses. ### Reviews Here is a list of available reviews and primers on IA. A Zotero group of key IA papers can be found below1. Footnote 1: zotero.org/groups/4989025/ia_key_papers * "The intrinsic alignment of galaxies and its impact on weak gravitational lensing in an era of precision cosmology" Troxel & Ishak (2015) _The first comprehensive review on intrinsic alignments, presenting extensive documentation of commonly used formalisms and the role of IA in precision cosmology._ * "Galaxy alignments: An overview" Joachimi et al. (2015) _Broad synopsis of IA, including physical motivations, a historical overview, and main trends._ * "Galaxy alignments: Observations and impact on cosmology" Kirk et al. (2015) _Descriptions of formalisms for measuring shapes and IA tracers, overview of IA observations, and discussion of cosmological impacts and mitigation._ * "Galaxy alignments: Theory, modelling and simulations" Kiessling et al. (2015) _Detailed overview of common models and IA in \(N\)-body and hydrodynamic simulations._ ## 2 Ellipticity IA studies model simulated, three-dimensional (3D) galaxy shapes as triaxial ellipsoids, and observed galaxies as their projected shape on the sky: two-dimensional (2D) ellipses. This 2D ellipticity is quantified in terms of the lengths of the major and minor axes of the ellipse (\(a\) and \(b\), respectively, with \(b\leq a\)) and the orientation angle of the major axis, \(\theta\), as shown in Figure 2. 
For the detection of IA, ellipticity is typically measured relative to directions tracing the tidal field (e.g., positions of galaxy overdensities in real data, or reconstructed tidal fields in simulations), often as a function of transverse separation \(r_{p}\). By convention, the alignment signal is highest for very elongated shapes (larger axis ratio) that point along the direction of the tidal field. While the formalisms below are standard, the methods of fitting shapes to observations vary across surveys and can impact the resulting IA signal. The signal is also correlated with the clustering of the galaxy sample and depends on how far along the Line of Sight (LOS) the measurement is averaged over. Therefore, it is more common to use IA correlation functions rather than a relative ellipticity, although ellipticity is a component of most estimators.

### Ellipticity: 2D Formalism

There are two different ways in which the ellipticity of 2D shapes is commonly quantified. We will refer to these as \(\varepsilon\) and \(\chi\) and to ellipticity generically as \(\epsilon\). In the rest of the document \(\epsilon\) can be taken to stand for either of the two ellipticity definitions. These are defined as: \[\varepsilon=\frac{a-b}{a+b}\exp(2\mathrm{i}\theta) \tag{1}\] \[\chi=\frac{a^{2}-b^{2}}{a^{2}+b^{2}}\exp(2\mathrm{i}\theta)\,. \tag{2}\] Both quantities are often referred to as the ellipticity, but \(\chi\) is also known as the distortion (Mandelbaum et al., 2014) or the normalized polarization (Viola et al., 2014). The definition \(\varepsilon\) (but usually denoted \(\epsilon\)) is often used in weak lensing studies because it is an unbiased estimator of the cosmic shear, \(\gamma\), whereas \(\chi\) must be adjusted by the responsivity \(\mathcal{R}\), which quantifies the response of the ellipticity to an applied gravitational shear: \[\gamma=\frac{\left\langle\chi\right\rangle}{2\mathcal{R}}\,. \tag{3}\] \(\mathcal{R}=1-\chi_{\mathrm{rms}}\) (rms is the root mean square) and is typically \(\approx 0.9\) depending on the galaxy sample (Singh & Mandelbaum, 2016). In later parts of this guide, \(\epsilon\) is used generally. It is assumed that where the \(\chi\) definition is intended, it will have been adjusted for the responsivity so \(\mathcal{R}\) is not explicitly shown. Ellipticity is a complex quantity which can be broken up into its real and imaginary components, \(\epsilon_{1}\) and \(\epsilon_{2}\): \[\epsilon=\epsilon_{1}+\mathrm{i}\epsilon_{2}\,, \tag{4}\] where \[\epsilon_{1} =|\epsilon|\cos(2\theta) \tag{5}\] \[\epsilon_{2} =|\epsilon|\sin(2\theta)\,. \tag{6}\] The factor of 2 arises because ellipticity is a spin-2 quantity, which means that it is invariant under rotations of integer multiples of \(\pi\) (not \(2\pi\)). See Figure 3. The angle \(\theta\) is usually defined as East of North. Its range can be \(0-\pi\) or \(\pm\frac{\pi}{2}\).

Figure 2: The quantities \(a\), \(b\) and \(\theta\) which define the shape and orientation of an ellipse.

\(\epsilon_{1}\) represents the orientation of an ellipse relative to the direction where \(\theta=0\) and \(\epsilon_{2}\) represents the orientation relative to the direction where \(\theta=\frac{\pi}{4}\). Note that \(\epsilon_{1}\) and \(\epsilon_{2}\) contain the same information (Figure 3). When measured relative to another galaxy or the direction of the tidal field, they are usually denoted as \(\epsilon_{+}\) and \(\epsilon_{\times}\), with the subscripts respectively read as "plus" and "cross".
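To make the 2D formalism above concrete, here is a minimal numerical sketch of Eqs. (1)-(6) and the responsivity correction of Eq. (3). It is not part of the original guide: the function names, the example axis ratio, and the responsivity value \(\mathcal{R}\approx 0.9\) are illustrative assumptions, and the sign convention for \(\epsilon_{+}\) (positive for elongation along the reference direction) follows the one described in this section.

```python
import numpy as np

def complex_ellipticity(a, b, theta):
    """Both common 2D ellipticity definitions for axis lengths a >= b and
    position angle theta in radians (Eqs. 1-2). The ratios are unchanged
    whether a and b are full axis lengths or semi-axes."""
    phase = np.exp(2j * theta)
    eps = (a - b) / (a + b) * phase                # varepsilon, Eq. (1)
    chi = (a**2 - b**2) / (a**2 + b**2) * phase    # chi (distortion), Eq. (2)
    return eps, chi

def plus_cross(e, phi):
    """Rotate a complex ellipticity into the frame defined by the direction phi
    (e.g. towards a neighbouring galaxy or along the tidal field); with this
    convention e_plus > 0 corresponds to elongation along that direction."""
    rotated = e * np.exp(-2j * phi)
    return rotated.real, rotated.imag

# Example: axis ratio b/a = 0.5 with the major axis at 30 degrees East of North.
eps, chi = complex_ellipticity(1.0, 0.5, np.radians(30.0))
e1, e2 = eps.real, eps.imag          # Eqs. (5)-(6)
shear_est = chi / (2 * 0.9)          # Eq. (3); <chi> is really a sample average, R ~ 0.9 assumed
e_plus, e_cross = plus_cross(eps, np.radians(30.0))  # aligned case: e_plus > 0, e_cross ~ 0
```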
\(\epsilon_{+}>0\) indicates an alignment along the tidal field direction, and \(\epsilon_{+}<0\) indicates a tangential orientation as seen in gravitational shear (Section 3). \(\epsilon_{\times}\) is equivalent to \(\epsilon_{2}\). On average, \(\epsilon_{\times}\) is 0 in the real Universe. ### Ellipticity: Modeling and 3D Formalism Ellipticity from particlesSimulated galaxies and halos are usually modeled as triaxial ellipsoids composed of "particles". To summarize the shape of these objects, it's common to use the inertia tensor, I, which is computed by summing over the positions \(i,j\in(x,y,z)\) of \(N\) particles. To better approximate the object's position and shape, this sum is usually weighted by particle mass or luminosity (when applicable). For weights \(w_{k}\) which sum to \(W\), this form of the moment of inertia tensor is (Samuroff et al., 2021) \[I_{ij}=\frac{1}{W}\sum_{k=1}^{N}w^{k}x_{i}^{k}x_{j}^{k}\,. \tag{7}\] It's also common to weight \(I_{ij}\) by distance from the center of the object. This center-weighting produces the reduced inertia tensor (Chisari et al., 2015), \[\tilde{I}_{ij}=\frac{1}{W}\sum_{k=1}^{N}w^{k}\frac{x_{i}^{k}x_{j}^{k}}{x^{l}x _{l}}\,. \tag{8}\] Note that \(l\) are dummy indices that are contracted over for summation. As a result, the denominator of the above fraction represents the distance of particle \(k\) from the object's center of mass. The reduced inertia tensor can provide a better approximation of the shape of the object at its center (Joachimi et al., 2013). In Eqs. (7) and (8), the weighting kernel is inherently round, and thus produces a bias in the estimators. To avoid this limitation, studies often iteratively rescale the lengths of the ellipse axes while keeping the volume constant. (Mandelbaum et al., 2015; Schneider et al., 2012; Tenneti et al., 2014). An ellipsoid can be constructed using the eigenvectors and eigenvalues of \(I_{ij}\), which can then be projected along the \(z\)-axis into its 2D second moments \(Q_{11}\), \(Q_{22}\), \(Q_{12}+Q_{21}\)(Bartelmann & Schneider, 2001). Figure 3: Visualization of the real and imaginary components of ellipticity, as described in Section 2. These functionally contain the same information. \(\epsilon_{1}\) is maximum when a shape is highly elongated and exactly aligned with the angle that the ellipticity is defined relative to, most commonly North. \(\epsilon_{2}\) is maximum when the shape is aligned with \(\pi/4\) away from the principal angle. The projected ellipticity \(\chi\) is obtained via \[\chi=\frac{(Q_{11}-Q_{22},2Q_{12})}{Q_{11}+Q_{22}+2\sqrt{\det{\bf Q}}}\;. \tag{9}\] Note that the denominator is different if the alternative definition of ellipticity is used - see Section 2.1 and Mandelbaum et al. (2014). **Projected Ellipticity from angular momentum** Late-type galaxies are typically modeled as circular discs and their ellipticity is often assumed to be aligned with their angular momentum vector (tidal torquing or spin alignment), \({\bf L}=L_{i}=\left\{L_{x},L_{y},L_{\parallel}\right\}^{\tau}\). \(\tau\) denotes the transpose vector and \(i=1,2,3\) the three spatial directions (Figure 4). To obtain the projected shape of a spiral galaxy along the LOS, or \(L_{\parallel}\), the orientation angle \(\theta\) is given by: \[\theta=\frac{\pi}{2}+\arctan\left(\frac{L_{y}}{L_{x}}\right)\,, \tag{10}\] and the axis ratio: \[\frac{b}{a}=\frac{|L_{\parallel}|}{|{\bf L}|}+r_{\text{edge-on}}\sqrt{1- \frac{L_{\parallel}^{2}}{|{\bf L}|^{2}}}\;. 
\tag{11}\] \(r_{\text{edge-on}}\) describes the ratio of the disc thickness to disc diameter, which is approximately equivalent to the axis ratio for a galaxy viewed edge-on; this contribution is expected to be significant for galaxies with bulges (Joachimi et al., 2013). Assuming linear tidal torquing, a halo's spin is written as (Lee & Pen, 2008) \[L_{i}\propto\epsilon_{ijk}I_{l}^{k}T^{jl}\,, \tag{12}\] where \(i,j,k=1,2,3\) in the three spatial directions, \(\epsilon_{ijk}\) the Levi-Civita symbol and \(T^{jl}\) the gravitational tidal shear. The latter, which is a symmetric tensor, is defined as (e.g. Blazek et al., 2011) \[T_{ij}=\frac{\partial^{2}\Phi}{\partial x_{i}\partial x_{j}}\,, \tag{13}\] where \(\Phi\) is the gravitational potential, \(x\) represents comoving Cartesian coordinates and the indices \(\{i,j\}=\{1,2,3\}\) indicate the three spatial directions. Following Eq. (12), tidal torquing leads to quadratic alignments of galaxy shapes with the tidal shear. Figure 4: The set up for obtaining the projected shape of a spiral galaxy. In linear theory, the angular momentum vector \({\bf L}\) of the galaxy is aligned along the direction of tidal stretching. The projected axis ratio, \(b/a\), is a function of \({\bf L}\) and the ratio of the disk’s intrinsic thickness to its diameter (not shown here). ### Ellipticity from tidal field Early-type galaxies are considered to be triaxial ellipsoids whose axes align with the underlying gravitational tidal shear, \(T_{ij}\)(Catelan et al., 2001). In order to derive the predicted galaxy ellipticities given \(T_{ij}\), we can project the 3D tidal field along two axes at the location of each galaxy. The convention is to project along the galaxy's North Pole distance, \(\phi_{1}\), and right ascension, \(\phi_{2}\) (see 5). The latter is the angle complementary to declination. In this setting, \(\epsilon_{1}>0\) corresponds to east-west elongation and \(\epsilon_{2}>0\) corresponds to northeast-southwest elongation. We first consider a Cartesian orthonormal basis, \((\hat{\mathbf{x}},\hat{\mathbf{y}},\hat{\mathbf{z}})\), at the location of each galaxy. We then rotate this basis into \((\hat{\mathbf{n}},\hat{\boldsymbol{\phi}}_{\boldsymbol{1}},\hat{\boldsymbol{ \phi}}_{\boldsymbol{2}})\), such that \(\hat{\mathbf{n}}\) is parallel to the LOS to the galaxy. These two bases are related by \[\hat{\boldsymbol{\phi}}_{\boldsymbol{1}} = \cos\phi_{1}\cos\phi_{2}\,\hat{\mathbf{x}}+\cos\phi_{1}\sin\phi_{ 2}\,\hat{\mathbf{y}}-\sin\phi_{1}\,\hat{\mathbf{z}}\,, \tag{14}\] \[\hat{\boldsymbol{\phi}}_{\boldsymbol{2}} = -\sin\phi_{2}\,\hat{\mathbf{x}}+\cos\phi_{2}\,\hat{\mathbf{y}}. \tag{15}\] The next step is to intermediately define the linear combinations, \[m^{i}_{+} = \frac{1}{\sqrt{2}}\left(\hat{\boldsymbol{\phi}}_{\boldsymbol{2}}^ {i}-i\hat{\boldsymbol{\phi}}_{\boldsymbol{1}}^{i}\right) \tag{16}\] \[m^{i}_{-} = \frac{1}{\sqrt{2}}\left(\hat{\boldsymbol{\phi}}_{\boldsymbol{2}}^ {i}+i\hat{\boldsymbol{\phi}}_{\boldsymbol{1}}^{i}\right)\,, \tag{17}\] Having defined the above fields, we can decompose the 3D tidal shear into the rotated basis, as (Schmidt & Jeong, 2012) \[T_{\pm}=\sum_{i=1}^{3}\sum_{j=1}^{3}m^{i}{}_{\mp}m^{j}{}_{\mp}T_{ij}\,, \tag{18}\] where \(T_{\pm}\) are the 2D ellipticities, such that (Tsaprazi et al., 2022) \[\epsilon_{1}\pm i\epsilon_{2}=-\frac{C_{1}}{4\pi G}T_{\pm}\,, \tag{19}\] in the nonlinear alignment model, for example, described in Section 8.3. 
Here \(C_{1}\) is the IA amplitude (introduced in Section 8.1) and \((\epsilon_{1},\epsilon_{2})\) the observed galaxy ellipticities. ### Ellipticity: Additional Notations * \(\epsilon\): frequently used rather than \(\varepsilon\) and sometimes instead of \(\chi\). * \(\epsilon\): sometimes used equivalently to \(\epsilon\). * \(\epsilon_{\rm T}\) or \(\epsilon_{\rm t}\): the tangential component of ellipticity, equivalent to \(\epsilon_{+}\). * \(\eta\): sometimes used instead of \(\chi\). * \(\phi\): sometimes used for the orientation angle. * \(a,b\): sometimes denote the length of the semi-axes of the ellipse, rather than the full axes lengths. * \(q\): the ratio of the ellipse axes, \(b/a\), with \(b\leq a\). Figure 5.— Representation of the Northern Celestial Hemisphere. The observer is indicated by \(\mathbf{O}\) and a given observed galaxy by **galaxy**. The purple basis indicates the rotated basis defined in Eq. (14) and Eq. (15), whereas \(\phi_{1}\) is the angle from the North Pole.
2309.07536
Asai-Flach classes and p-adic L-functions
We prove a formula for the Bloch-Kato logarithm of the bottom class in the Asai-Flach Euler system associated to a quadratic Hilbert modular form. We show that this can be expressed as a value, outside the interpolation range, of the p-adic Asai L-function constructed in the prequel paper arXiv:2307.07004.
Giada Grossi, David Loeffler, Sarah Livia Zerbes
2023-09-14T09:02:33Z
http://arxiv.org/abs/2309.07536v1
# Asai-Flach classes and \(p\)-adic L-functions ###### Abstract. We prove a formula for the Bloch-Kato logarithm of the bottom class in the Asai-Flach Euler system associated to a quadratic Hilbert modular form. We show that this can be expressed as a value, outside the interpolation range, of the \(p\)-adic Asai \(L\)-function constructed in the prequel paper [1]. Key words and phrases:Hilbert modular varieties, \(p\)-adic modular forms, higher Hida theory, coherent cohomology of Shimura varieties 2020 Mathematics Subject Classification: 11F41, 11F33, 11G18, 14G35 Supported by European Research Council Consolidator Grant No. 101001051 (ShimBSD) (Loeffler) and US National Science Foundation Grant No. DMS-1928930 (all authors). This strategy was successfully carried out for the Euler system of Beilinson-Flach elements (attached to a Rankin-Selberg convolution of modular forms) in [1, 10]; and more recently in the \(\operatorname{GSp}_{4}\) setting in [1, 11]. The Asai representation can be seen as a "twisted version" of the Rankin-Selberg convolution, and the construction of the Euler system classes is recognisably a direct generalisation of the Beilinson-Flach case; but the techniques used to construct \(p\)-adic \(L\)-functions and prove the reciprocity law for Rankin-Selberg convolutions do not straightforwardly generalise, since they rely on the decomposition of the Shimura variety for \(\operatorname{GL}_{2}\times\operatorname{GL}_{2}\) as a product of two factors, and new techniques are required. Hence the present paper follows a strategy rather closer to the \(\operatorname{GSp}_{4}\) case, relying on the use of _higher Hida theory_, which has been developed for Hilbert modular varieties at split primes by the first author [1]; and its "overconvergent" version, _higher Coleman theory_. The higher Coleman theory we use is based on the general results of [1], with some slight modifications to allow \(p\)-adic interpolation at only one of the primes above \(p\), for which we refer to [11]. (We remark that a related problem is also considered in the paper [11], where Skinner and two of the present authors computed a pairing which also involves the Bloch-Kato logarithm of the Asai-Flach class, but projected into a different piece of the de Rham cohomology which can be computed using only conventional, rather than higher, Coleman theory. The earlier paper gives an explicitly computable formula in terms of \(p\)-adic Hilbert modular forms, but one which does not seem to have any straightforward relationship to \(p\)-adic \(L\)-values, in contrast to the formula of Theorem 7.7.6.) ## 2. Conventions Throughout this paper \(F\) denotes a real quadratic field, and we fix an enumeration of the embeddings \(F\hookrightarrow\mathbf{R}\) as \(\sigma_{1},\sigma_{2}\). ### Algebraic groups As in [10], define \(G=\operatorname{Res}_{F/\mathbf{Q}}(\operatorname{GL}_{2})\), and \(H=\operatorname{GL}_{2}\), with \(\iota:H\hookrightarrow G\) the natural embedding. We write \(B_{H}\) and \(B_{G}\) for the upper-triangular Borel subgroups. Recall that if \(V\) is an algebraic representation of \(B_{H}\), then \(V\) gives rise to a vector bundle on the Shimura variety \(Y_{H}\) (for any sufficiently small level group), endowed with an action of Hecke correspondences. 
There are two possible normalisations for this functor, and we normalise so that the defining \(2\)-dimensional representation of \(H\) (restricted to \(B_{H}\)) maps to the relative de Rham _cohomology_ sheaf of the universal elliptic curve \(\mathcal{E}\to Y_{H}\) (rather than the relative homology, which is the other convention in use). There is an analogous construction for \(G\) as long as we restrict to representations trivial on the norm-one subgroup \(\{(\begin{smallmatrix}x\\ x\end{smallmatrix}):N_{F/\mathbf{Q}}(x)=1\}\) of \(Z_{G}\), cf. [10, SS3.2c]. ### Algebraic representations and the Clebsch-Gordan map For the group \(G\), we shall always work with representations over fields containing \(F\). Given two representations \(V,V^{\prime}\) of \(\operatorname{GL}_{2}\), we write \(V\boxtimes V^{\prime}\) for the tensor product of \(V,V^{\prime}\), endowed with a \(G\)-action as the tensor product of the actions via \(\sigma_{1}\) on \(V\) and via \(\sigma_{2}\) on \(V^{\prime}\). Given integers \(k_{1},k_{2}\geqslant 0\), and \(0\leqslant j\leqslant\min\{k_{1},k_{2}\}\), let \[V_{G}\coloneqq\operatorname{Sym}^{k_{1}}W_{G}\boxtimes\operatorname{Sym}^{k_{2 }}W_{G}\qquad\text{and}\qquad V_{H}=\operatorname{Sym}^{t}W_{H},\quad t=k_{1 }+k_{2}-2j,\] where \(W_{H}\) and \(W_{G}\) denote the defining \(2\)-dimensional representations of \(\operatorname{GL}_{2}/\mathbf{Q}\) and \(\operatorname{GL}_{2}/F\) respectively. The representation \(V_{H}\) has a canonical basis \((v^{a}w^{t-a})_{0\leqslant a\leqslant t}\), where \(v,w\) are the two standard basis vectors of \(W_{H}\); and is equipped with a decreasing \(B_{H}\)-stable weight filtration \(\operatorname{Fil}^{n}V_{H}=\langle\{v^{a}w^{t-a}:a\geqslant n\}\rangle\). Similarly, \(V_{G}\) is equipped with a bi-filtration (a decreasing filtration indexed by \(\mathbf{Z}^{2}\)) arising from the weight vector filtrations on each factor. There is a non-zero morphism of \(H\)-representations (the _Clebsch-Gordan map_), unique up to scalars, \[\operatorname{CG}^{[k_{1},k_{2},j]}:V_{H}\to V_{G}\otimes\det{}^{-j}.\] See (7.1) below for explicit formulae. If we equip the one-dimensional representation \(\det^{-j}\) with a filtration concentrated in degree \(-j\), then this map respects the filtrations, and hence induces a map on the graded pieces. ## 3. The Asai motive and its realisations ### Automorphic representations of \(G\) **Definition 3.1.1**.: _Let \(\Pi\) be the **unitary** cuspidal automorphic representation of \(G(\mathbf{A})\) generated by a holomorphic Hilbert modular newform of weight \((k_{1}+2,k_{2}+2)\), for some integers \(k_{1},k_{2}\geqslant 0\), and some level \(\mathcal{N}_{\Pi}\trianglelefteq\mathcal{O}_{F}\)._ We shall suppose throughout this work that the following conditions hold: * \(\Pi\) is not of CM type, * the central character of \(\Pi\) is trivial (and hence \(k_{i}\) are both even). Both of these could almost certainly be relaxed with a little extra work, but we shall not pursue this here. The representation \(\Pi\) (and hence the integers \((k_{1},k_{2})\)) will remain fixed throughout the remainder of this paper. We define \(a_{\mathfrak{n}}^{\circ}(\Pi)\), for each ideal \(\mathfrak{n}\trianglelefteq\mathcal{O}_{F}\), to be the Hecke eigenvalues of the new vector of \(\Pi\), normalised in the analytic fashion, so that \(\lvert a_{\mathfrak{p}}^{\circ}(\Pi)\rvert\leqslant 2\) for primes \(\mathfrak{p}\nmid\mathcal{N}_{\Pi}\). 
For each prime \(\mathfrak{p}\nmid\mathcal{N}_{\Pi}\) we let \(\alpha_{\mathfrak{p}}^{\circ}\) and \(\beta_{\mathfrak{p}}^{\circ}\) be the Satake parameters of \(\Pi_{\mathfrak{p}}\), normalised so that \(\lvert\alpha_{\mathfrak{p}}^{\circ}\rvert=\lvert\beta_{\mathfrak{p}}^{\circ} \rvert=1\) and \(\alpha_{\mathfrak{p}}^{\circ}+\beta_{\mathfrak{p}}^{\circ}=a_{\mathfrak{p}}^{ \circ}(\Pi)\). By a rationality theorem due to Shimura [14, Proposition 1.2], there exists a number field \(L\subset\mathbf{C}\) (depending on \(\Pi\)) such that the quantities \(N(\mathfrak{n})^{1/2}a_{\mathfrak{n}}^{\circ}\) lie in \(L\), for all \(\mathfrak{n}\leqslant\mathcal{O}_{F}\). Extending \(L\) if necessary, we may assume that \(L\) also contains the images of the embeddings \(\sigma_{i}:F\hookrightarrow\mathbf{R}\) (this is automatic if \(k_{1}\neq k_{2}\)). **Definition 3.1.2**.: _For \(n\in\mathbf{Z}_{\geqslant 1}\), we define \(a_{n}(\Pi)\) by_ \[a_{n}(\Pi)=n^{(k_{1}+k_{2}+2)/2}a_{n}^{\circ}(\Pi).\] One checks that \(a_{n}(\Pi)\in\mathcal{O}_{L}\) for all \(n\). (However, unless we make the stronger assumption that \(k_{1}=k_{2}\bmod 4\), we cannot extend this normalisation of the Hecke eigenvalues to include ideals \(\mathfrak{n}\) which do not come from \(\mathbf{Q}\), without introducing infinitely many square roots to \(L\).) ### The Asai \(L\)-function For each rational prime \(\ell\), let us set \(\Pi_{\ell}=\bigotimes_{v\mid\ell}\Pi_{v}\), considered as a representation of \(G(\mathbf{Q}_{\ell})=\prod_{v\mid\ell}\operatorname{GL}_{2}(F_{v})\). Associated to \(\Pi_{\ell}\) we have an _Asai \(L\)-factor_1, \(L_{\operatorname{As}}(\Pi_{\ell},s)\). We write \(L_{\operatorname{As}}(\Pi,s)=\prod_{\ell\text{ prime}}L_{\operatorname{As}}( \Pi_{\ell},s)\) (without any Archimedean factors) for the Asai \(L\)-function; note that this Dirichlet series has coefficients in \(L\). Footnote 1: This can be defined as the “common denominator” of a family of zeta-integrals, or alternatively via the local Langlands correspondence, but both definitions give the same \(L\)-factor. For \(\ell\) split this is one of the defining properties of the local Langlands correspondence; for \(\ell\) non-split, see [10, Theorem 3.3]. Since the Euler factors of \(L_{\operatorname{As}}\) at primes dividing \(\mathcal{N}_{\Pi}\) are hard to describe explicitly, we shall also consider the "imprimitive" Asai \(L\)-function \[L_{\operatorname{As}}^{\operatorname{imp}}\left(\Pi,s\right)=\zeta_{(N)}(2s) \sum_{n\geqslant 1}a_{n}^{\circ}(\Pi)n^{-s},\] where the notation \((N)\) signifies removing the Euler factors at the primes dividing \(N=\mathcal{N}_{\Pi}\cap\mathbf{Z}\). Note that this Dirichlet series has coefficients in \(L\). The following result is standard: **Lemma 3.2.1**.: _The function \(L_{\operatorname{As}}^{\operatorname{imp}}\left(\Pi,s\right)/L_{\operatorname {As}}\left(\Pi,s\right)\) is a product of polynomials in \(\ell^{-s}\) for \(\ell\mid N\). All of its zeroes have real part in \(\{0,-\frac{1}{2},-1\}\). _ ### De Rham and coherent cohomology groups We let \(U_{1}(\mathcal{N}_{\Pi})\) denote the open compact subgroup \(\{g\in G(\mathbf{A}_{\mathrm{f}}):g=\left(\begin{smallmatrix}5&*\\ 0&\frac{1}{2}\end{smallmatrix}\right)\bmod\mathcal{N}_{\Pi}\}\), so that the space of \(U_{1}(\mathcal{N}_{\Pi})\)-invariants of \(\Pi_{\mathrm{f}}\) (the new subspace of \(\Pi_{\mathrm{f}}\)) is one-dimensional. 
We assume, for simplicity2 that \(\mathcal{N}_{\Pi}\) does not divide \(6\operatorname{disc}_{F/\mathbf{Q}}\), so that \(U_{1}(\mathcal{N}_{\Pi})\) is _sufficiently small_ in the sense of [11, Definition 2.2.1]; thus the Hilbert modular variety \(Y_{G}\) of level \(U_{1}(\mathcal{N}_{\Pi})\) is a smooth quasiprojective variety defined over \(\mathbf{Q}\). Footnote 2: The excluded cases may be dealt with via the usual trick of introducing full level \(\mathfrak{h}\) structure, for an auxiliary ideal \(\mathfrak{h}\), and then taking invariants under \(\operatorname{GL}_{2}(\mathcal{O}/\mathfrak{h})\); we leave the details to the interested reader. **Definition 3.3.1**.: _Let \(\mathcal{V}_{G}\) denote the vector bundle with connection on \(Y_{G}\) corresponding to the algebraic representation3\(V_{G}\); and set_ Footnote 3: More precisely, we need to twist by an appropriate character to make the action of norm-one units trivial, but the resulting vector bundle is independent of the choice up to a canonical isomorphism. \[D_{L}^{\operatorname{As}}(\Pi)=H_{\operatorname{dR},c}^{2}\Big{(}Y_{G}, \mathcal{V}_{G}\Big{)}[\Pi_{\mathrm{f}}],\] _which is 4-dimensional over \(L\)._ The de Rham cohomology can be computed via the coherent cohomology of a smooth toroidal compactification \(X_{G}\) of \(Y_{G}\). We can then compute de Rham cohomology using the logarithmic de Rham complex \(\mathcal{V}_{G}\otimes\Omega_{X_{G}}^{\bullet}\langle D\rangle\), where \(D=X_{G}-Y_{G}\) is the boundary divisor; and we can compute compactly-supported de Rham cohomology using the "minus-log" complex \(\mathcal{V}_{G}\otimes\Omega^{\bullet}_{X_{G}}\langle-D\rangle\), where \(\Omega^{\bullet}\langle-D\rangle\coloneqq\Omega^{\bullet}\langle D\rangle(-D)\). We denote the corresponding cohomologies by \[R\Gamma_{\mathrm{dR}}\left(X_{G},\mathcal{V}_{G}\langle D\rangle \right) \coloneqq R\Gamma\left(X_{G},\mathcal{V}\otimes\Omega^{\bullet}_{X_{G }}\langle D\rangle\right)\cong R\Gamma_{\mathrm{dR}}(Y_{G},\mathcal{V}_{G}),\] \[R\Gamma_{\mathrm{dR}}(X_{G},\mathcal{V}_{G}\langle-D\rangle) \coloneqq R\Gamma(X_{G},\mathcal{V}\otimes\Omega^{\bullet}_{X_{G }}\langle-D\rangle)\cong R\Gamma_{\mathrm{dR},c}(Y_{G},\mathcal{V}_{G}).\] Rather than explicitly working with the full de Rham complex, it is more convenient to work with the _dual BGG complex_ \[\mathrm{BGG}^{\bullet}=\left[\omega^{(-k_{1},-k_{2})}\longrightarrow\omega^{ (-k_{1},k_{2}+2)}\oplus\omega^{(k_{1}+2,-k_{2})}\longrightarrow\omega^{(k_{1 }+2,k_{2}+2)}\right]\] and its compactly-supported analogue \(\mathrm{BGG}^{\bullet}(-D)\), which are quasi-isomorphic to the logarithmic de Rham complexes \(V\otimes\Omega^{\bullet}\langle D\rangle\) and \(V\otimes\Omega^{\bullet}(-D)\) respectively. (We shall give explicit formulae for the differentials of the BGG complex, and the quasi-isomorphism relating it to the de Rham complex, in Section 7.1 below, but we shall not need this just yet.) The BGG complex is equipped with a natural decreasing \(\mathbf{Z}^{2}\)-filtration \(\mathrm{Fil}^{\bullet\bullet}\), which gives rise to a \(\mathbf{Z}^{2}\)-filtration on its cohomology. 
We can (and do) normalise so that the nontrivial graded pieces are in bidegrees \(\{(0,0),(k_{1}+1,0),(0,k_{2}+1),(k_{1}+1,k_{2}+1)\}\), and are given by \[\mathrm{Gr}^{(k_{1}+1,k_{2}+1)} =H^{0}\left(X_{G},\omega^{(k_{1}+1,k_{2}+1)}(-D)\right)[\Pi_{t}],\] \[\mathrm{Gr}^{(k_{1}+1,0)} =H^{1}\left(X_{G},\omega^{(k_{1}+2,-k_{2})}(-D)\right)[\Pi_{t}], \mathrm{Gr}^{(0,k_{2}+1)} =H^{1}\left(X_{G},\omega^{(-k_{1},k_{2}+2)}(-D)\right)[\Pi_{t}],\] \[\mathrm{Gr}^{(0,0)} =H^{2}\left(X_{G},\omega^{(-k_{1},-k_{2})}(-D)\right)[\Pi_{t}],\] where \(\omega^{(r,s)}\) is the sheaf of Hilbert modular forms of weight \((r,s)\) (so that \(\omega^{(r,s)}(-D)\) is the subsheaf of cusp forms). The induced single filtration, with graded pieces in degrees \(\{0,k_{1}+1,k_{2}+1,k_{1}+k_{2}+2\}\), is the Hodge filtration. **Definition 3.3.2**.: _We let \(\nu\) be a basis of the 1-dimensional \(L\)-vector space \(H^{1}\left(X_{G},\omega^{(-k_{1},k_{2}+2)}\right)[\Pi_{t}]\)._ _Remark 3.3.3_ (Occult periods).: After base-extension to \(\mathbf{C}\), these graded pieces have canonical bases arising from the comparison between sheaf cohomology and Dolbeault cohomology. The one which will interest us is \(\mathrm{Gr}^{(0,k_{2}+1)}\), which is spanned by the differential form associated to the real-analytic Hilbert modular form \(\mathcal{F}^{\mathrm{ah},1}\) (anti-holomorphic at the place \(\sigma_{1}\) and holomorphic at \(\sigma_{2}\)) having the same Fourier-Whittaker coefficients as the holomorphic newform \(\mathcal{F}\) generating \(\Pi\); see Lemma 5.2.1 of [10] for further details. Harris' _occult period_ for \(\Pi\) at \(\sigma_{1}\) (cf. [1]) is the ratio between \(\nu\) and \(\mathcal{F}^{\mathrm{ah},1}\), well-defined as an element of \(\mathbf{C}^{\times}/L^{\times}\). \(\diamond\) ### The Asai Galois representation We now fix (for the remainder of this paper) a prime \(p\), and a prime \(\mathfrak{P}\) of the coefficient field \(L\) above \(p\). **Definition 3.4.1**.: 1. _Let_ \(V^{\mathrm{As}}_{\mathfrak{P}}(\Pi)\) _denote the four-dimensional Asai Galois representation associated to_ \(\Pi\) _as in_ _[_10_, Definition 4.4.2]__, defined as the_ \(\Pi_{t}\)_-eigenspace in the_ \(p\)_-adic etale cohomology of the Hilbert modular variety_ \(Y_{G,1}(\mathcal{N}_{\Pi})\otimes\overline{\mathbf{Q}}\) _(with coefficients in the etale local system of_ \(L_{\mathfrak{P}}\)_-vector spaces determined by_ \((k_{1},k_{2})\)_)._ 2. _Let_ \(D^{\mathrm{As}}_{\mathfrak{P}}(\Pi)=L_{\mathfrak{P}}\otimes_{L}D^{\mathrm{As} }_{L}(\Pi)\)_, so that there is a canonical comparison isomorphism (compatible with the filtrations)_ \[\mathbf{D}_{\mathrm{dR}}\left(\mathbf{Q}_{p},V^{\mathrm{As}}_{\mathfrak{P}}( \Pi)\right)\cong D^{\mathrm{As}}_{\mathfrak{P}}(\Pi).\] _Remark 3.4.2_.: More precisely, this is \(M_{L_{\mathfrak{P}}}(\mathcal{F})\) in the notation of _op.cit._, where \(\mathcal{F}\) is the normalised newform generating \(\Pi\). \(\diamond\) By results of Brylinski-Labesse and Nekovar recalled in _op.cit._, the Galois representation \(V^{\mathrm{As}}_{\mathfrak{P}}(\Pi)\) can be characterised, up to isomorphism, as the unique semisimple Galois representation whose \(L\)-series is \(L_{\mathrm{As}}\left(\Pi,s-\frac{k_{1}+k_{2}+2}{2}\right)\). (However, since we want to consider Euler system classes for \(V^{\mathrm{As}}_{\mathfrak{P}}(\Pi)\), it is important to fix not only an abstract isomorphism class of Galois representations but a specific realisation of this isomorphism class in etale cohomology.) 
By Poincare duality for the Hilbert modular surface, there is a canonical nondegenerate symmetric bilinear form \[\lambda:\mathrm{Sym}^{2}\left(V^{\mathrm{As}}_{\mathfrak{P}}(\Pi)\right)\to L_{ \mathfrak{P}}(-2-k_{1}-k_{2}), \tag{3.1}\] equivariant for the action of \(\mathrm{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})\). (The pairing is symmetric since \(V^{\mathrm{As}}_{\mathfrak{P}}(\Pi)\) appears in cohomology in even degree.) _Remark 3.4.3_.: By [11, Proposition 9.4.3], our hypotheses on \(\Pi\) imply that \(V_{\mathfrak{P}}^{\operatorname{As}}(\Pi)\) is either irreducible, or the direct sum of a character and a three-dimensional irreducible representation, with the latter case occurring if and only if \(\Pi\) is a twist of a base-change from \(\operatorname{GL}_{2}/\mathbf{Q}\). \(\diamond\) ### The split-prime case For the remainder of this paper, we suppose that \(p\)**is split in \(F\)**, and \(\Pi\)**is unramified at the primes above \(p\)**. Since the embeddings \(\sigma_{i}\) of \(F\) take values in \(L\subset\mathbf{C}\), and we have fixed a prime \(\mathfrak{P}\mid p\) in \(L\), we can number the primes above \(p\) as \(\mathfrak{p}_{1},\mathfrak{p}_{2}\) such that \(\sigma_{i}(\mathfrak{p}_{i})\subset\mathfrak{P}\). We define \[a_{\mathfrak{p}_{i}}(\Pi)=p^{(k_{i}+1)/2}a_{\mathfrak{p}_{i}}^{\circ}(\Pi)\in L,\qquad\text{so }a_{p}(\Pi)=a_{\mathfrak{p}_{1}}(\Pi)a_{\mathfrak{p}_{2}}(\Pi).\] (Note this normalisation depends on the choice of prime \(\mathfrak{P}\mid p\), and the \(a_{\mathfrak{p}_{i}}\) are integral at \(\mathfrak{P}\), but possibly not at other primes of \(L\) above \(p\).) We likewise define \[\alpha_{i}=p^{\frac{1+k_{i}}{2}}\alpha_{\mathfrak{p}_{i}}^{\circ}\] and similarly \(\beta_{i}\). Extending \(L\) if necessary, we may suppose that the \(\alpha_{i}\) and \(\beta_{i}\) also lie in \(L\). **Corollary 3.5.1**.: _The representation \(V_{\mathfrak{P}}^{\operatorname{As}}(\Pi)\) is crystalline at \(p\), so \(D_{\mathfrak{P}}(\Pi)\) is naturally a filtered \(\varphi\)-module; and the eigenvalues of \(\varphi\) on this module are the pairwise products \(\{\alpha_{1}\alpha_{2},\dots,\beta_{1}\beta_{2}\}\). \(\square\)_ **Lemma 3.5.2**.: _Let \(0\leqslant j\leqslant\min(k_{1},k_{2})\) be an integer. Then \(p^{j}\) is not an eigenvalue of \(\varphi\) on \(D_{\mathfrak{P}}^{\operatorname{As}}(\Pi)\). Moreover, if \(p^{(1+j)}\) is an eigenvalue of \(\varphi\), then we must have \(k_{1}=k_{2}=j\)._ Proof.: This follows from the fact that the Satake parameters \(\alpha_{i}^{\circ}\), \(\beta_{i}^{\circ}\) have complex absolute value \(1\). This implies that for the Galois representation \(V_{\mathfrak{P}}^{\operatorname{As}}(\Pi)^{*}(-j)\), the Bloch-Kato subspaces \(H_{\operatorname{e}}^{1}(\mathbf{Q}_{p},-)\) and \(H_{\operatorname{f}}^{1}(\mathbf{Q}_{p},-)\) agree; and these are also equal to \(H_{\operatorname{g}}^{1}\), except possibly in the boundary case \(k_{1}=k_{2}=j\) in which case \(H_{\operatorname{g}}^{1}\) can be strictly larger. Moreover, the inverse of the Bloch-Kato exponential map for \(V_{\mathfrak{P}}^{\operatorname{As}}(\Pi)^{*}(-j)\) is an isomorphism \[\log:H_{\operatorname{f}}^{1}\left(\mathbf{Q}_{p},V_{\mathfrak{P}}^{ \operatorname{As}}(\Pi)^{*}(-j)\right)\stackrel{{\cong}}{{ \longrightarrow}}\left(\operatorname{Fil}^{1+j}D_{\mathfrak{P}}^{ \operatorname{As}}(\Pi)\right)^{*}=\left(\operatorname{Fil}^{1}D_{\mathfrak{P} }^{\operatorname{As}}(\Pi)\right)^{*}, \tag{3.2}\] with both sides \(3\)-dimensional over \(L_{\mathfrak{P}}\). 
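To spell out the numerology used here, in a short sketch assuming only the normalisations \(\alpha_{i}=p^{\frac{1+k_{i}}{2}}\alpha_{\mathfrak{p}_{i}}^{\circ}\) and \(\lvert\alpha_{\mathfrak{p}_{i}}^{\circ}\rvert=\lvert\beta_{\mathfrak{p}_{i}}^{\circ}\rvert=1\) fixed above: every eigenvalue \(\lambda\) of \(\varphi\) on \(D^{\mathrm{As}}_{\mathfrak{P}}(\Pi)\) is one of the pairwise products \(\alpha_{1}\alpha_{2},\alpha_{1}\beta_{2},\beta_{1}\alpha_{2},\beta_{1}\beta_{2}\), so under any complex embedding
\[|\lambda|\;=\;p^{\frac{1+k_{1}}{2}}\,p^{\frac{1+k_{2}}{2}}\;=\;p^{\frac{k_{1}+k_{2}+2}{2}}.\]
Since \(j\leqslant\min(k_{1},k_{2})\) gives \(j<\frac{k_{1}+k_{2}+2}{2}\), the value \(p^{j}\) can never be an eigenvalue; and \(p^{1+j}\) can only occur when \(1+j=\frac{k_{1}+k_{2}+2}{2}\), i.e. \(k_{1}+k_{2}=2j\), which together with \(j\leqslant\min(k_{1},k_{2})\) forces \(k_{1}=k_{2}=j\), as asserted in Lemma 3.5.2.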
### Partial Frobenii We may identify \(D_{\mathfrak{P}}^{\operatorname{As}}(\Pi)\), as a Frobenius module (forgetting the filtration), with the rigid cohomology of the special fibre of \(Y_{G}\) at \(p\). This special fibre has two commuting endomorphisms, the **partial Frobenii**\(\varphi_{1}\) and \(\varphi_{2}\) at the primes \(\mathfrak{p}_{i}\), whose composite is the Frobenius \(\varphi\); more precisely, \(\varphi_{i}\) corresponds to sending a Hilbert-Blumenthal abelian surface \(A\) to the quotient \(A/\left(\ker(\varphi_{A})\cap A[\mathfrak{p}_{i}]\right)\). We refer to [12] or [13] for detailed accounts of this construction. The operators \(\varphi_{i}\) induce commuting linear operators on \(D_{\mathfrak{P}}^{\operatorname{As}}(\Pi)\), with \(\varphi=\varphi_{1}\varphi_{2}\); and it follows from the "partial Eichler-Shimura" comparison result proved in [13] that for each \(i\) we have \[(\varphi_{i}-\alpha_{i})(\varphi_{i}-\beta_{i})=0\quad\text{on }D_{\mathfrak{P}}^{ \operatorname{As}}(\Pi).\] One checks easily that the partial Frobenii satisfy \(\lambda(\varphi_{i}x,\varphi_{i}y)=p^{k_{i}+1}\lambda(x,y)\), where \(\lambda\) is the Poincare duality form. This identifies the \(\beta_{i}\)-generalised eigenspace with the dual of that for \(\alpha_{i}\). Hence, if \(\alpha_{i}\neq\beta_{i}\), the \(\varphi_{i}=\alpha_{i}\) and \(\varphi_{i}=\beta_{i}\) eigenspaces are both \(2\)-dimensional, and each is isotropic with respect to \(\lambda\). _Remark 3.6.1_.: We are using a slightly different normalisation of the partial Frobenii here from [12]: the \(\varphi_{i}\) here is \(p^{-t_{i}}\varphi_{i}\) in the notation of _op.cit._, where \((t_{1},t_{2})\) is an auxiliary choice of integers such that \(w=k_{i}+2t_{i}\) is independent of \(i\). This reflects the fact that the \(\alpha_{i}\) of _op.cit._ is \(p^{(w+1)/2}\alpha_{\mathfrak{p}_{i}}^{\circ}\), while \(\alpha_{i}\) here is \(p^{(k_{i}+1)/2}\alpha_{\mathfrak{p}_{i}}^{\circ}\). The present normalisation is more convenient for comparison with higher Hida theory, since it matches the minimal integral normalisation of the \(U_{\mathfrak{p}_{i}}^{\prime}\) operators. \(\diamond\) ### A lifting of \(\nu\) We write \(v_{p}\) for the valuation on \(L_{\mathfrak{P}}\) normalized by \(v_{p}(p)=1\); and we fix an ordering of the Satake parameters at \(\mathfrak{p}_{1}\). Then we have \(0\leqslant v_{p}(\alpha_{1})\leqslant k_{1}+1\). **Proposition 3.7.1**.: _Suppose that \(\alpha_{1}\neq\beta_{1}\), and \(v_{p}(\alpha_{1})<k_{1}\). 
Then the vector space_ \[D_{\mathfrak{P}}(\Pi)^{(\varphi_{1}=\alpha_{1})}\cap\operatorname{Fil}^{(0,k_{2} +1)}D_{\mathfrak{P}}(\Pi)\] _is one-dimensional, and surjects onto \(\operatorname{Gr}^{(0,k_{2}+1)}D_{\mathfrak{P}}(\Pi)\)._ _Hence there exists a uniquely determined vector \(\nu_{\operatorname{dR}}\in D_{\mathfrak{P}}\) with the following properties:_ * \(\varphi_{1}\nu_{\operatorname{dR}}=\alpha_{1}\cdot\nu_{\operatorname{dR}}\)_;_ * \(\nu_{\operatorname{dR}}\in\operatorname{Fil}^{(0,k_{2}+1)}D_{\mathfrak{P}}\)_;_ * _the image of_ \(\nu_{\mathrm{dR}}\) _in the graded piece_ \[\operatorname{Gr}^{(0,k_{2}+1)}D_{\mathfrak{P}}\cong L_{\mathfrak{P}}\otimes_{L}H ^{1}\left(X_{G},\omega^{(-k_{1},k_{2}+2)}\right)[\Pi_{\mathrm{f}}]\] _coincides with the_ \(\nu\) _of Definition_ 3.3.2_._ Proof.: This is a special case of the first main theorem of [10], which states that (over totally-real fields of any degree) the \(\mathbf{Z}^{d}\)-indexed filtration of \(D^{\mathrm{As}}_{\mathfrak{P}}(\Pi)\) has a canonical splitting given by intersecting with partial Frobenius eigenspaces. The assumption \(v_{p}(\alpha_{1})<k_{1}\) is the _strictly small slope_ assumption of _op.cit._. _Remark 3.7.2_.: Note that \(\ker\left((\varphi-\alpha_{1}\alpha_{2})(\varphi-\alpha_{1}\beta_{2}):D_{p} \to D_{p}\right)\) contains \(D^{(\varphi_{1}=\alpha_{1})}_{p}\), but it may be larger; this always occurs if \(\Pi\) is a twist of a base-change from \(\mathbf{Q}\) (so that \(\alpha_{1}/\beta_{1}=\alpha_{2}/\beta_{2}\)). ## 4. Definition of the \(p\)-adic regulator ### Euler system classes Let \(0\leqslant j\leqslant\min(k_{1},k_{2})\). We refer to [10, Definition 4.4.6] for the definition of the _etale Asai-Flach class_ \[\operatorname{AF}^{[\Pi,j]}_{\mathrm{et}}\in H^{1}\left(\mathbf{Z}[1/\Sigma], V^{\mathrm{As}}_{\mathfrak{P}}(\Pi)^{*}(-j)\right),\] where \(\Sigma\) is the set of primes dividing \(pN\operatorname{disc}(F)\). (More precisely, this class is defined for any normalised eigenform \(\mathcal{F}\), not necessarily new, and we are defining \(\operatorname{AF}^{[\Pi,j]}_{\mathrm{et}}\) as \(\operatorname{AF}^{[\mathcal{F},j]}_{\mathrm{et}}\) where \(\mathcal{F}\) is the unique newform generating \(\Pi\), consistently with our definition of the Galois representation \(V^{\mathrm{As}}_{\mathfrak{P}}(\Pi)\).) _Remark 4.1.1_.: In _op.cit._ we showed that \(\operatorname{AF}^{[\Pi,j]}_{\mathrm{et}}\) was the \(m=1\) case of a family of classes defined over \(\mathbf{Q}(\mu_{m})\) with norm-compatibility properties as \(m\) varies, but we shall not use this here. _Note 4.1.2_.: We briefly recall the definition of \(\operatorname{AF}^{[\Pi,j]}_{\mathrm{et}}\). Letting \(\mathcal{V}_{G}\) and \(\mathcal{V}_{H}\) denote the relative Chow motives (over \(L\)) associated to the representations \(V_{G}\) and \(V_{H}\) defined in Section 2.2, we have a pushforward map \[\iota^{[j]}_{*}:H^{1}_{\mathrm{mot}}\left(Y_{H,1}(N),\mathcal{V}_{H}(1+t) \right)\otimes_{\mathbf{Q}}L\longrightarrow H^{3}_{\mathrm{mot}}\left(Y_{G,1}( \mathcal{N}_{\Pi}),\mathcal{V}_{G}(2+k_{1}+k_{2}-j)\right),\] where \(t=k_{1}+k_{2}-2j\) as before. We have an analogous pushforward map in etale cohomology with \(L_{\mathfrak{P}}\)-coefficients, and the two are compatible via the etale regulator map \(r_{\mathrm{et}}\). 
Since the \(\Pi_{\mathrm{f}}\)-generalised eigenspace in the cohomology of \(Y_{G,1}(\mathcal{N}_{\Pi})\) vanishes outside degree \(2\), the Hochschild-Serre spectral sequence gives a projection map from \(H^{3}_{\mathrm{et}}\left(Y_{G,1}(\mathcal{N}_{\Pi}),\mathcal{V}_{G}(2+k_{1}+k _{2}-j)\right)\) to \(H^{1}\left(\mathbf{Z}[1/\Sigma],V^{\mathrm{As}}_{\mathfrak{P}}(\Pi)^{*}(-j)\right)\). We can then define \(\operatorname{AF}^{[\Pi,j]}_{\mathrm{et}}\) as the image of the weight \(t\) Eisenstein class under this chain of maps. ### Localisation at \(p\) and syntomic cohomology **Proposition 4.2.1**.: _The localisation of \(\operatorname{AF}^{[\Pi,j]}_{\mathrm{et}}\) at \(p\) lies in the Bloch-Kato subspace_ \[H^{1}_{\mathrm{f}}(\mathbf{Q}_{p},V^{\mathrm{As}}_{\mathfrak{P}}(\Pi)^{*}(-j) )\subseteq H^{1}(\mathbf{Q}_{p},V^{\mathrm{As}}_{\mathfrak{P}}(\Pi)^{*}(-j)).\] Proof.: This follows from the comparison between etale and syntomic cohomology, since the map from syntomic to etale cohomology factors through the Bloch-Kato exponential map; cf. [10, Proposition 5.4.1] in the analogous case of Beilinson-Flach elements. Now fix a choice of \(\alpha_{1}\) satisfying the conditions of Proposition 3.7.1, and let \(\nu_{\mathrm{dR}}\in\operatorname{Fil}^{(0,k_{2}+1)}D_{\mathfrak{P}}(\Pi)\) be the ensuing lifting of \(\nu\). Our goal will be to compute the pairing \[\left\langle\nu_{\mathrm{dR}},\log\left(\operatorname{loc}_{p}\operatorname{ AF}^{[\Pi,j]}_{\mathrm{et}}\right)\right\rangle_{D_{\mathfrak{P}}(\Pi)}, \tag{4.1}\] where \(\log\) is the Bloch-Kato logarithm (3.2), and \(\left\langle-,-\right\rangle_{D_{\mathfrak{P}}(\Pi)}\) denotes the canonical pairing between \(D^{\mathrm{As}}_{\mathfrak{P}}(\Pi)\) and its dual. Let \(\mathbb{Y}_{G}\) denote the canonical \(\mathbf{Z}_{p}\)-model of \(Y_{G,1}(\mathcal{N}_{\Pi})\). This is a smooth \(\mathbf{Z}_{p}\)-scheme, and we may choose an arithmetic toroidal compatification \(\mathbb{X}_{G}\) such that \((\mathbb{Y}_{G},\mathbb{X}_{G})\) is a _smooth pair_ over \(\mathbf{Z}_{p}\) in the sense of [10] (i.e. \(\mathbb{X}_{G}\) is smooth, and the cuspidal divisor \(\mathbb{X}_{G}-\mathbb{Y}_{G}\) is a smooth normal-crossing divisor relative to \(\operatorname{Spec}\mathbf{Z}_{p}\)). Thus Besser's theory of _rigid syntomic_ and _finite-polynomial_ cohomology applies to \(\mathbb{Y}_{G}\), and rigid-syntomic cohomology has a natural comparison to etale cohomology. **Notation 4.2.2**.: _Let \(P(T)=\left(1-\frac{T}{\alpha_{1}\alpha_{2}}\right)\left(1-\frac{T}{\alpha_{1} \beta_{2}}\right)\in L[T]\), and let \(P_{1+j}(T)=P(p^{1+j}T)\)._ **Proposition 4.2.3**.: _There is a unique lift \(\nu_{\mathrm{fp}}\) of \(\nu_{\mathrm{dR}}\) to the space_ \[H^{2}_{\mathrm{fp},c}\left(\mathbb{Y}_{G},\mathcal{V}_{G};1+j,P_{1+j}\right)[ \Pi_{f}].\] _Remark 4.2.4_.: The above group is actually independent of \(j\) in the range \(0\leqslant j\leqslant\min(k_{1},k_{2})\): Besser's cohomology for twist \(r\) and polynomial \(Q\) is defined using the mapping fibre of \(Q(p^{-\tau}\varphi)\), and we have \(P_{1+j}(p^{-1-j}\varphi)=P(\varphi)\) for any \(j\). However, different values of \(j\) will correspond to the etale cohomology of different twists of \(V_{\Phi}^{\mathrm{As}}(\Pi)\). \(\diamond\) Proof.: Since \(\Pi\) is cuspidal, the \(\Pi_{\mathrm{f}}\)-generalised eigenspace in de Rham (or, equivalently, rigid) cohomology vanishes outside degree \(2\). 
So the natural map \[H^{2}_{\mathrm{fp},c}\big{(}\mathbb{Y}_{G},\mathcal{V}_{G};1+j,P\big{)}\to \mathrm{Fil}^{(1+j)}\,H^{2}_{\mathrm{dR},c}\big{(}\mathbb{Y}_{G},\mathcal{V}_ {G}\big{)}^{P(\varphi)=0}\] is an isomorphism after localising at the \(\Pi_{\mathrm{f}}\)-eigenspace. Since \(P(\varphi)\) annihilates \(\nu_{\mathrm{dR}}\) the result follows. The compatibility of etale and syntomic Abel-Jacobi maps for smooth pairs (cf. Proposition 5.4.1 of [13]) then implies that \[\begin{split}\left\langle\nu_{\mathrm{dR}},\,\log\left(\mathrm{ loc}_{p}\,\mathrm{AJ}^{[\Pi,j]}_{\mathrm{et}}\right)\right\rangle_{D_{\mathfrak{ p}}(\Pi)}&=\left\langle\nu_{\mathrm{dR}},\,\log\circ\mathrm{pr}^{ \mathrm{As}}_{\Pi}\circ_{t}^{[j]}\left(\mathrm{Eis}^{t}_{\mathrm{et},N}\right) \right\rangle_{\mathrm{dR},Y_{G}}\\ &=\left\langle\nu_{\mathrm{fp}},\,\iota^{[j]}\left(\mathrm{Eis}^ {t}_{\mathrm{syn},N}\right)\right\rangle_{\mathrm{fp},\mathbb{Y}_{G}}\\ &=\left\langle\iota^{[j],*}(\nu_{\mathrm{fp}}),\,\mathrm{Eis}^{t }_{\mathrm{syn},N}\right\rangle_{\mathrm{fp},\mathbb{Y}_{H}},\end{split} \tag{4.2}\] where the last equality follows from the adjunction between pushforward and pullback. Note that \[\iota^{[j],*}(\nu_{\mathrm{fp}})\in H^{2}_{\mathrm{fp},c}(\mathbb{Y}_{H}, \mathcal{V}_{H};1+j,P_{1+j}),\] and the coefficient module \(\mathcal{V}_{H}\) depends on \(j\). ## 5. Lifting \(\nu\) to the \(\mathfrak{p}_{1}\)-ordinary locus In this section, we shall lift the coherent class \(\nu\) (and its cousins \(\nu_{\mathrm{dR}}\) and \(\nu_{\mathrm{fp}}\)) from the cohomology of the full variety \(Y_{G}\), to cohomology groups associated to certain open subsets of the special fibre of \(Y_{G}\). ### Geometry of the Hilbert modular variety Let \(Y_{G,0}\) be the special fibre of \(\mathbb{Y}_{G}\), which is a smooth \(\mathbb{F}_{p}\)-variety; and similarly for the compactification \(X_{G,0}\). **Notation 5.1.1**.: * _For_ \(i=1,2\)_, denote by_ \(X_{G,0}^{i-\mathrm{ss}}\subset X_{G,0}\) _the_ \(\mathfrak{p}_{i}\)_-supersingular locus (the vanishing locus of the partial Hasse invariant, as constructed in_ _[_11_, SS3.2]__)._ * _Let_ \(X_{G,0}^{i-\mathrm{ord}}\) _be the complement of_ \(X_{G,0}^{i-\mathrm{ss}}\)_, and_ \(X_{G,0}^{\mathrm{ord}}=X_{G}^{1-\mathrm{ord}}\cap X_{G,0}^{2-\mathrm{ord}}\)_._ _We write \(Y_{G,0}^{i-\mathrm{ord}}\) etc for the intersection of these subvarieties with \(Y_{G,0}\subset X_{G,0}\)._ The following results on the geometry of the supersingular loci are well-known (see e.g. [11]): **Lemma 5.1.2**.: _For \(i=1,2\), \(X_{G,0}^{i-\mathrm{ss}}\) is a smooth codimension 1 closed subscheme of \(X_{G,0}\), disjoint from the toroidal boundary; and \(X_{G,0}^{1-\mathrm{ss}}\cap X_{G,0}^{2-\mathrm{ss}}\) is a smooth closed subvariety of codimension \(2\) (i.e. a finite disjoint union of points)._ _Remark 5.1.3_.: The preimage of either \(X_{G,0}^{1-\mathrm{ord}}\) or \(X_{G,0}^{2-\mathrm{ord}}\) under the finite map \(\iota:X_{H,0}\to X_{G,0}\) is the ordinary locus \(X_{H,0}^{\mathrm{ord}}\). \(\diamond\) **Proposition 5.1.4**.: _The extension-by-0 map_ \[R\Gamma_{\mathrm{rig},c}(Y_{G,0}^{1-\mathrm{ord}},\mathcal{V}_{G})\longrightarrow R \Gamma_{\mathrm{rig},c}(Y_{G,0},\mathcal{V}_{G})\cong R\Gamma_{\mathrm{dR},c}(Y _{G},\mathcal{V}_{G})\] _is a quasi-isomorphism on the \(\Pi\) generalised eigenspace for the prime-to-\(Np\) Hecke operators._ Proof.: This is a special case of Proposition 4.3 of [13]. 
**Notation 5.1.5**.: _Write \(\nu_{\mathrm{dR}}^{(1-\mathrm{ord})}\in H^{2}_{\mathrm{rig},c}(Y_{G,0}^{1- \mathrm{ord}},\mathcal{V}_{G})\) for the preimage of \(\nu_{\mathrm{dR}}\) under the isomorphism in Proposition 5.1.4._ ### Coherent cohomology of the 1-ordinary locus We write \(\mathcal{X}_{G}\) for the dagger space associated to \(X_{G}/\mathbf{Q}_{p}\); and we denote the tubes in \(\mathcal{X}_{G}\) of the various subvarieties of \(X_{G,0}\) considered above by the corresponding superscripts on \(\mathcal{X}_{G}\), so \(\mathcal{X}_{G}^{i-\mathrm{ord}}\) is the tube of \(X_{G,0}^{i-\mathrm{ord}}\) in \(\mathcal{X}_{G}\) etc. **Theorem 5.2.1**.: _Suppose \(v_{p}(\alpha_{1})<k\), and \(\alpha_{1}\neq\beta_{1}\). Then there exists a unique class_ \[\nu^{1-\mathrm{ord}}\in H^{1}_{c}\left(\mathcal{X}_{G}^{(1-\mathrm{ord})}, \omega^{(-k_{1},k_{2}+2)}(-D)\right)\] _satisfying the following properties:_ 1. _it is a_ \(\varphi_{1}\)_-eigenvector with eigenvalue_ \(\alpha_{1}\)_;_ 2. _its image in_ \(H^{1}(\mathcal{X}_{G},\omega^{(-k_{1},k_{2}+2)}(-D))\) _is_ \(\nu\)_._ Proof.: This is an instance of Proposition 5.2 of [12]. _Remark 5.2.2_.: Observe that we have chosen a lifting of the de Rham class \(\nu_{\mathrm{dR}}\) to \(X_{G,0}^{(1-\mathrm{ord})}\) characterised by information about the action of Hecke operators _away_ from \(p\); and, separately, we have lifted the coherent class \(\nu\) to \(\mathcal{X}_{G}^{(1-\mathrm{ord})}\) using information about the action of the Frobenius \(\varphi_{1}\) at \(\mathfrak{p}_{1}\). So it is not obvious how these liftings are related, and our next task is to find a way to reconcile the two, which we will carry out in Section 5.4 - see Proposition 5.4.1 below. \(\diamond\) ### Comparison with a higher-level class We now compare the class \(\nu^{1-\mathrm{ord}}\) with algebraic coherent classes at level \(\mathfrak{p}_{1}\). If \(X_{G}(\mathfrak{p}_{1})\) denotes the Shimura variety of level \(U_{1}(N_{\Pi})\cap\{(\mathop{\circ}\limits_{\ast}^{\ast}\mathop{\mathrm{mod}} \limits\mathfrak{p}_{1}\}\), then the special fibre \(X_{G}(\mathfrak{p}_{1})_{0}\) has a stratification with three strata, \[X_{G}(\mathfrak{p}_{1})_{0}=X_{G}(\mathfrak{p}_{1})_{0}^{m}\cup X_{G}( \mathfrak{p}_{1})_{0}^{\mathrm{\acute{e}t}}\cup X_{G}(\mathfrak{p}_{1})_{0}^{ \mathrm{\alpha}},\] on which the level structure is multiplicative, etale, or \(\alpha_{p}\) respectively; and this gives a corresponding decomposition of the dagger space \(\mathcal{X}_{G}(\mathfrak{p}_{1})\), which restricts to an isomorphism of dagger spaces \[\mathcal{X}_{G}(\mathfrak{p}_{1})^{m}\stackrel{{\cong}}{{ \longrightarrow}}\mathcal{X}_{G}^{1-\mathrm{ord}}.\] (The inverse map is given by the "canonical subgroup" construction.) From the functoriality of pushforward maps we have a commutative square of cohomology groups (all with coefficients in \(\omega^{-k_{1},k_{2}+2}(-D)\)) where the horizontal maps are extension-by-zero (i.e. pushforwards along open embeddings) and the right-hand vertical map is pushforward along the natural degeneracy map \(\pi_{\mathfrak{p}_{1}}:\mathcal{X}_{G}(\mathfrak{p}_{1})\to\mathcal{X}_{G}\). By [12, Lemma 5.7], our Frobenius lift \(\varphi_{1}\) on \(H^{1}_{c}(\mathcal{X}_{G}^{1-\mathrm{ord}})\) corresponds to the Hecke operator \(U^{\prime}_{\mathfrak{p}_{1}}\) at level \(\mathcal{X}_{G}(\mathfrak{p}_{1})\). 
Hence we have the following compatibility: **Proposition 5.3.1**.: _The image of \(\nu^{1-\mathrm{ord}}\) in \(H^{1}(\mathcal{X}_{G}(\mathfrak{p}_{1}))\) is the unique class which lies in the \(\Pi_{\mathrm{f}}\)-eigenspace away from \(p\), is a \((U^{\prime}_{\mathfrak{p}_{1}}=\alpha_{1})\)-eigenvector, and maps to \(\nu\) under the trace map. _ We briefly compare this with the choice of basis vector used in [12, SS6-7], since this will be needed for our final formula. In SS7.3 of _op.cit._ we define a \(U^{\prime}_{\mathfrak{p}_{1}}=\alpha_{1}\) eigenvector \(\check{\nu}_{\Pi,\alpha}\in H^{1}(\mathcal{X}_{G}(\mathfrak{p}_{1}))\), depending on a choice of basis \(W^{(p)}_{\mathfrak{f}}\) of the Whittaker model away from \(p\infty\). If we choose this basis to be the normalised new-vector, then by construction we have \[\check{\nu}_{\Pi,\alpha}=\left(1-\tfrac{\beta_{1}}{U^{\prime}_{\mathfrak{p}_{ 1}}}\right)\pi^{\ast}_{\mathfrak{p}_{1}}(\nu).\] Since \(\check{\nu}_{\Pi,\alpha}\) and the image of \(\nu^{1-\mathrm{ord}}\) lie in the same one-dimensional space, we may compare them by computing their images in \(H^{1}(\mathcal{X}_{G})\). An elementary computation shows that the map \[\pi_{\mathfrak{p}_{1},\ast}\circ\left(1-\tfrac{\beta}{U^{\prime}_{\mathfrak{p} _{1}}}\right)\circ\pi^{\ast}_{\mathfrak{p}_{1}}\] acts on the \(\Pi\)-eigenspace as multiplication by \(p(1-\tfrac{\beta_{1}}{p\alpha_{1}})\) (which is not zero, since \(\alpha_{1}/\beta_{1}\) has complex absolute value \(1\)); so we have \[\check{\nu}_{\Pi,\alpha}=p(1-\tfrac{\beta_{1}}{p\alpha_{1}})\cdot\mathrm{ image}\left(\nu^{1-\mathrm{ord}}\right).\] ### The de Rham spectral sequence for \(\mathcal{X}_{G}^{1-\mathrm{ord}}\) Since \((\mathcal{X}_{G},\mathcal{V}_{G})\) is a smooth pair, and our coefficient system \(\mathcal{V}_{G}\) extends to a vector bundle on \(\mathcal{X}_{G}\) whose connection has log poles along the boundary divisor \(D\), we can compute rigid cohomology of \(Y_{G,0}\) using the analytification of the BGG complex on \(\mathcal{X}_{G}\) (just as we did for de Rham cohomology above). By taking the mapping fibre of the restriction map we obtain the same result for compactly-supported cohomology of \(Y_{G,0}^{1-\mathrm{ord}}\); that is, we have \[R\Gamma_{\mathrm{rig.c}}(Y_{G,0}^{1-\mathrm{ord}},\mathcal{V})\cong R\Gamma_{ \mathrm{dR},c}(\mathcal{X}_{G}^{1-\mathrm{ord}},\mathcal{V}(-D))\cong R\Gamma_ {c}(\mathcal{X}_{G}^{1-\mathrm{ord}},\mathrm{BGG}^{\bullet}(-D)).\] This gives rise to a first-quadrant spectral sequence converging to \(H^{*}_{\mathrm{rig.c}}(Y_{G,0}^{1-\mathrm{ord}},\mathcal{V}_{G})\), whose \(E_{1}^{nn}\) terms are \(H^{n}_{c}\left(\mathcal{X}_{G}^{(1-\mathrm{ord})},\mathrm{BGG}^{m}(-D)\right)\). We denote by \(\widetilde{R\Gamma}_{\mathrm{dR},c}(\mathcal{X}_{G}^{1-\mathrm{ord}},\mathcal{ V}(-D))\) the cohomology of the truncated complex \[\tau_{\geqslant 1}\,\mathrm{BGG}^{\bullet}(-D)=\left[0\longrightarrow\omega^{( -k_{1},k_{2}+2)}\oplus\omega^{(k_{1}+2,-k_{2})}\longrightarrow\omega^{(k_{1} +2,k_{2}+2)}\right](-D),\] which is quasi-isomorphic to the filtered de Rham complex \(\operatorname{Fil}^{1+j}\mathrm{BGG}^{\bullet}(-D)\), for any \(j\) in our range. 
Since \(\mathcal{X}_{G}\) is connected and non-compact, \(H^{0}_{c}(\mathcal{X}_{G}^{(1-\mathrm{ord})},-)\) is zero for all locally-free sheaves, and so we obtain an isomorphism \[\alpha_{\mathrm{dR}}^{(1-\mathrm{ord})}:H^{1}_{c}\left(\mathcal{X}_{G}^{(1- \mathrm{ord})},\mathrm{BGG}^{1}(-D)\right)^{\nabla=0}\cong\widetilde{H}_{ \mathrm{dR},c}^{2}(\mathcal{X}_{G}^{1-\mathrm{ord}},\mathcal{V}(-D)).\] Moreover, the inclusion of the subcomplex \(\tau_{\geqslant 1}\,\mathrm{BGG}\) into the full BGG complex gives a commutative square of maps in which the top horizontal arrow is compatible, via \(\alpha_{\mathrm{dR}}^{(1-\mathrm{ord})}\), with the natural map \[H^{1}_{c}\left(\mathcal{X}_{G}^{(1-\mathrm{ord})},\mathrm{BGG}^{1}(-D)\right) \to H^{1}_{c}\left(\mathcal{X}_{G},\mathrm{BGG}^{1}(-D)\right)= \operatorname{Fil}^{1}H^{2}_{\mathrm{dR}}/\operatorname{Fil}^{k_{1}+k_{2}+2}.\] Since the partial Frobenius \(\varphi_{1}\) lifts to \(\mathcal{X}_{G}^{(1-\mathrm{ord})}\), there is an action of \(\varphi_{1}\) on both of the spaces in the left-hand column, compatible with the action on \(H^{2}_{\mathrm{dR},c}(\mathcal{X}_{G}^{1-\mathrm{ord}},\mathcal{V}(-D),1+j)\) given by comparison with the rigid cohomology of \(X_{G,0}\). **Proposition 5.4.1**.: _If \(\nu^{1-\mathrm{ord}}\) is as in Theorem 5.2.1, then the class \((\nu^{1-\mathrm{ord}},0)\) in_ \[H^{1}_{c}\left(\mathcal{X}_{G}^{(1-\mathrm{ord})},\mathrm{BGG}^{1}(-D)\right)= H^{1}_{c}\left(\mathcal{X}_{G}^{(1-\mathrm{ord})},\omega^{-k_{1},k_{2}+2}(-D) \right)\oplus H^{1}_{c}\left(\mathcal{X}_{G}^{(1-\mathrm{ord})},\omega^{k_{1} +2,-k_{2}}(-D)\right)\] _is in the kernel of \(\nabla\), and hence defines a class in \(\widetilde{H}_{\mathrm{dR},c}^{2}(\mathcal{X}_{G}^{(1-\mathrm{ord})},\mathcal{ V}(-D))\). The image of this class in \(H^{2}_{\mathrm{rig.c}}(\mathcal{Y}_{G,0}^{(1-\mathrm{ord})},\mathcal{V}_{G})\), under the left vertical map of the above diagram, is \(\nu_{\mathrm{dR}}^{(1-\mathrm{ord})}\)._ Proof.: We first show that \(\nu^{1-\mathrm{ord}}\) is in the kernel of \(\nabla\). This follows from the fact that it has strictly small slope for \(\varphi_{1}\): the slopes of \(\varphi_{1}\) on \(\omega^{(k_{1}+2,k_{2}+2)}\) are all at least \(k_{1}+1\), and the operator \(\nabla\) commutes with the Frobenius, so it must be zero on all Frobenius eigenspaces of slope smaller than \(k_{1}+1\). This shows that \(\nu^{1-\mathrm{ord}}\) has a well-defined image in \(H^{2}_{\mathrm{rig.c}}(Y_{G,0}^{(1-\mathrm{ord})},\mathcal{V}_{G})\). Let us temporarily write \(\hat{\nu}_{\mathrm{dR}}^{(1-\mathrm{ord})}\) for this image; our goal is to show that it coincides with \(\nu_{\mathrm{dR}}^{1-\mathrm{ord}}\). Since the latter is characterised as the unique lifting of \(\nu_{\mathrm{dR}}\) compatible with Hecke operators away from \(p\mathcal{N}\), it suffices to show that \(\hat{\nu}_{\mathrm{dR}}^{(1-\mathrm{ord})}\) lies in the correct Hecke eigenspace, and that it maps to \(\nu_{\mathrm{dR}}\) in \(H^{2}_{\mathrm{dR},c}(Y_{G},\mathcal{V}_{G})\). It follows readily from the construction of \(\nu^{1-\mathrm{ord}}\) that it lies in the \(\Pi\)-eigenspace for the Hecke operators away from \(\mathfrak{p}_{1}\) (including \(T_{\mathfrak{p}_{2}}\)), since these operators commute with \(\varphi_{1}\). So \(\hat{\nu}_{\mathrm{dR}}^{(1-\mathrm{ord})}\) has the correct Hecke action. 
Moreover, its image in \(H^{2}_{\mathrm{dR},c}(\mathcal{V}_{G},\mathcal{V}_{G})\) is in the \(\varphi_{1}=\alpha_{1}\) eigenspace (because \(\nu^{1-\mathrm{ord}}\) is); and it lies in \(\operatorname{Fil}^{(0,1+k_{2})}\), and maps to \(\nu\) in \(\operatorname{Gr}^{(0,1+k_{2})}\), so it must be equal to \(\nu_{\mathrm{dR}}\). ### FP-cohomology of the \(\mathfrak{p}_{1}\)-ordinary locus We now consider a modified form of Besser's finite-polynomial cohomology, namely _Gros fp-cohomology_, for the \(\mathfrak{p}_{1}\)-ordinary locus (with compact supports). This cohomology, denoted by \(\widetilde{R\Gamma}_{\mathrm{fp},c}(\mathcal{X}_{G}^{(1-\mathrm{ord})}, \mathcal{V}(-D);1+j,P_{1+j})\) can be defined as the mapping fibre of the map \[\widetilde{R\Gamma}_{\mathrm{dR},c}(\mathcal{X}_{G}^{(1-\mathrm{ord})}, \mathcal{V}(-D))\xrightarrow{P_{1+j}(p^{-1-j}\varphi)}R\Gamma_{\mathrm{rig},c}( X_{G,0}^{(1-\mathrm{ord})},\mathcal{V}(-D)).\] (As before, this is in fact independent of \(j\) in the stated range, despite the notations.) Note that although \(\varphi_{1}\) lifts to \(\mathcal{X}_{G}^{1-\mathrm{ord}}\), the full Frobenius \(\varphi\) does not; so although \(R\Gamma_{\mathrm{rig},c}(X_{G,0}^{(1-\mathrm{ord})},\mathcal{V}(-D))\) is isomorphic to de Rham cohomology of \(\mathcal{X}_{G}^{1-\mathrm{ord}}\), the action of Frobenius (given by the functoriality of rigid cohomology) cannot be'seen' via this description. For proper schemes such as \(\mathbb{X}_{G}\), there is no difference between Gros fp-cohomology and the usual fp-cohomology, so there is an extension-by-zero map \[\widetilde{R\Gamma}_{\mathrm{fp},c}(\mathcal{X}_{G}^{(1-\mathrm{ord})}, \mathcal{V}(-D);1+j,P_{1+j})\to R\Gamma_{\mathrm{fp},c}(\mathbb{X}_{G}, \mathcal{V}_{G}\langle-D\rangle;1+j,P_{1+j}).\] **Proposition 5.5.1**.: _There exists a class_ \[\tilde{\nu}_{\mathrm{fp}}^{1-\mathrm{ord}}\in\widetilde{H}_{\mathrm{fp},c}^{2 }(\mathcal{X}_{G}^{(1-\mathrm{ord})},\mathcal{V}(-D);1+j,P_{1+j})\] _with the following properties:_ * _Its image in_ \(H_{\mathrm{fp},c}^{2}(\mathbb{X}_{G},V_{G}\langle-D\rangle;1+j,P_{1+j})\) _is the_ \(\nu_{\mathrm{fp}}\) _of Proposition_ 4.2.3_._ * _Its image in_ \(\widetilde{H}_{\mathrm{dR},c}^{2}(\mathcal{X}_{G}^{(1-\mathrm{ord})},\mathcal{ V}(-D))\) _is the class_ \((\nu^{1-\mathrm{ord}},0)\) _of Proposition_ 5.4.1_._ _Remark 5.5.2_.: The reader may be relieved to hear that the class \(\tilde{\nu}_{\mathrm{fp}}^{1-\mathrm{ord}}\) is "the ultimate among liftings of \(\nu\)": all other variants of \(\nu\) will be images of this one. \(\diamond\) Proof.: From the mapping-fibre definition of Gros fp-cohomology we have a long exact sequence \[\cdots\to H_{\mathrm{rig},c}^{1}(X_{G,0}^{1-\mathrm{ord}},\mathcal{V}(-D))\to \widetilde{H}_{\mathrm{fp},c}^{2}(\mathcal{X}_{G}^{(1-\mathrm{ord})},\mathcal{ V}(-D);1+j,P_{1+j})\to\widetilde{H}_{\mathrm{dR},c}^{2}(\mathcal{X}_{G}^{(1- \mathrm{ord})},\mathcal{V}(-D))\to\dots,\] in which the boundary map is \(P(\varphi)\circ\iota\). Moreover, this is compatible under extension-by-zero with the corresponding sequence for fp-cohomology of \(\mathbb{X}_{G}\). We have seen that the image of \((\nu^{1-\mathrm{ord}},0)\) under \(\iota\) is the class \(\nu_{\mathrm{dR}}^{1-\mathrm{ord}}\), which is annihilated by \(P(\varphi)\). Hence it lifts to \(\widetilde{H}_{\mathrm{fp},c}^{2}\). 
This lift is unique up to the image of an element of \(H_{\mathrm{rig},c}^{1}(X_{G,0}^{1-\mathrm{ord}},\mathcal{V}(-D))\); but from Proposition 5.1.4 it follows that this group has trivial \(\Pi_{\mathrm{f}}\)-eigenspace for the prime-to-\(p\) Hecke operators, so there is a unique Hecke-equivariant lifting of \((\nu^{1-\mathrm{ord}},0)\) to Gros-fp cohomology. The image of this class under extension-by-\(0\) is therefore a Hecke-equivariant lifting of \(\nu_{\mathrm{dR}}\) to fp-cohomology of \(\mathbb{X}_{G}\), so it must be \(\nu_{\mathrm{fp}}\). It is easily checked that \(\iota^{-1}(\mathcal{X}_{G}^{\mathrm{ord}})=\mathcal{X}_{H}^{\mathrm{ord}}\), so we have the following: **Corollary 5.5.3**.: _The pairing (4.1) is equal to \(\left\langle\iota^{[j],*}\left(\tilde{\nu}_{\mathrm{fp}}^{1-\mathrm{ord}} \right),\widetilde{\mathrm{Eis}}_{\mathrm{syn}}^{-t,\mathrm{ord}}\right\rangle\), where \(\widetilde{\mathrm{Eis}}_{\mathrm{syn}}^{t,\mathrm{ord}}\) is the image of \(\mathrm{Eis}_{\mathrm{syn}}^{t}\) in the Gros fp-cohomology of \(\mathcal{X}_{H}^{\mathrm{ord}}\). _ Here we define \(\iota^{[j],*}\) for classes in Gros fp-cohomology using the quasi-isomorphism from the BGG complex to the full de Rham complex. We shall give explicit formulae in Section 7.1 below, but first we need to give an explicit form for \(\tilde{\nu}_{\mathrm{fp}}^{1-\mathrm{ord}}\), which can only be done after restricting to \(\mathcal{X}_{G}^{\mathrm{ord}}\subset\mathcal{X}_{G}^{1-\mathrm{ord}}\). ## 6. Restricting to the fully-ordinary locus ### Cohomology with partial compact support We now consider the cohomology of the fully ordinary locus \(X_{G,0}^{\mathrm{ord}}\). Since the complement \(X_{G,0}-X_{G,0}^{\mathrm{ord}}\) is the disjoint union of a closed subvariety \(X_{G,0}^{(1-\mathrm{ss})}\) and the open subvariety \(X_{G,0}^{(2-\mathrm{ss})}\cap X_{G,0}^{(1-\mathrm{ord})}\), we can apply the formalism of [13, SS13] to define "cohomology of \(\mathcal{X}_{G}^{\mathrm{ord}}\) with compact support towards \(\mathcal{X}_{G}^{(1-\mathrm{ss})}\)" (with coefficients in any abelian sheaf on \(\mathcal{X}_{G}\)). We write this as \(R\Gamma_{c1}(\mathcal{X}_{G},-)\). By construction, this comes equipped with a restriction map \[R\Gamma_{c}\left(\mathcal{X}_{G}^{1-\mathrm{ord}},-\right)\to R\Gamma_{c1} \left(\mathcal{X}_{G}^{\mathrm{ord}},-\right),\] which fits into a triangle whose third term is the compactly-supported cohomology of \(\mathcal{X}_{G}^{1-\mathrm{ord}}\cap\mathcal{X}_{G}^{2-\mathrm{ss}}\). In particular, by the same argument as Proposition 5.1.4, we have isomorphisms \[R\Gamma_{\mathrm{rig},c1}\left(\mathcal{X}_{G}^{\mathrm{ord}},\mathcal{V}(-D) \right)[\Pi_{\mathrm{f}}]\longleftarrow R\Gamma_{\mathrm{rig},c}\left( \mathcal{X}_{G}^{1-\mathrm{ord}},\mathcal{V}(-D)\right)[\Pi_{\mathrm{f}}] \longrightarrow R\Gamma_{\mathrm{rig},c}\left(\mathcal{X}_{G},\mathcal{V}(-D) \right)[\Pi_{\mathrm{f}}].\] The advantage of working with \(\mathcal{X}_{G}^{\mathrm{ord}}\) is that both \(\varphi_{1}\) and \(\varphi_{2}\) have liftings. 
**Notation 6.1.1**.: _Write \(\nu^{\rm ord}\in H^{1}_{c1}\Big{(}\mathcal{X}^{\rm ord}_{G},\omega^{(-k_{1},k_{2}+ 2)}(-D)\Big{)}\) for the image of \(\nu^{1-\rm ord}\) under the above restriction map._ _Note 6.1.2_.: Over the ordinary locus, we have commuting liftings of both \(\varphi_{1}\) and \(\varphi_{2}\), which both act on \(R\Gamma_{c1}\left(\mathcal{X}^{\rm ord}_{G},\omega^{(\dots)}\right)\); and the operator \(T_{\mathfrak{p}_{2}}\) decomposes as \(T_{\mathfrak{p}_{2}}=U_{\mathfrak{p}_{2}}+\varphi_{2}\), with \(U_{\mathfrak{p}_{2}}\circ\varphi_{2}=p^{k_{2}+1}\langle\mathfrak{p}_{2}\rangle\). \(\diamond\)_ **Corollary 6.1.3**.: _The class \(P(\varphi)\cdot\nu^{\rm ord}\) lies in the kernel of the Hecke operator \(U_{\mathfrak{p}_{2}}\)._ Proof.: We have \(\varphi=\varphi_{1}\varphi_{2}\). The result follows easily from Theorem 5.2.1 and Note 6.1.2, using the fact that \(\nu^{(1-\rm ord)}\) is a \(T_{\mathfrak{p}_{2}}\)-eigenvector. ### The Poznan spectral sequence We now recall a spectral sequence (introduced in [10]) relating Gros fp-cohomology to coherent cohomology. Here Gros fp-cohomology is defined in the same way as for the \(\mathfrak{p}_{1}\)-ordinary locus above, but now with \(c1\)-support. **Definition 6.2.1**.: _We define groups \(\mathscr{C}^{m,n}_{\rm fp,c1}(\mathcal{X}^{\rm ord}_{G},\mathcal{V}(-D);1+j,P _{1+j})\), for \(m,n\geqslant 0\), by_ \[\mathscr{C}^{m,n}_{\rm fp,c1}(\dots)=H^{n}_{c1}\left(\mathcal{X}^{\rm ord}_{G },(\tau_{\geqslant 1}\operatorname{BGG})^{m}(-D)\right)\oplus H^{n}_{c1} \left(\mathcal{X}^{\rm ord}_{G},\operatorname{BGG}^{m-1}(-D)\right);\] _and we define differentials \(\mathscr{C}^{m,n}_{\rm fp,c1}(\dots)\to\mathscr{C}^{m+1,n}_{\rm fp,c1}(\dots)\) by_ \[(x,y)\mapsto\left(\nabla x,P(\varphi/p^{n})\iota(x)-\nabla y\right),\] _where \(\iota\) is the inclusion of \(\operatorname{Fil}^{1+j}\operatorname{BGG}^{\bullet}\) into \(\operatorname{BGG}^{\bullet}\), and \(\nabla\) the differential of the BGG complex._ Note that \(\mathscr{C}^{m,n}_{\rm fp,c1}\) is zero for \(m\leqslant 0\) (this is obvious for \(m\leqslant-1\), and holds for \(m=0\) since \(\operatorname{Fil}^{(1+j)}\operatorname{BGG}^{0}=0\)). It is also zero for \(n\leqslant 0\), since \(H^{0}_{c1}\) vanishes for locally-free sheaves. **Proposition 6.2.2**.: _There is a first-quadrant spectral sequence, the Poznan spectral sequence, with_ \[{}^{\mathbb{P}_{\bullet}}E^{mn}_{1}=\mathscr{C}^{m,n}_{\rm fp,c1}(\mathcal{X} ^{\rm ord}_{G},\mathcal{V}(-D);1+j,P_{1+j}),\] _and the differentials on the \(E_{1}\) page given by the formula above. This spectral sequence abuts to the Gros fp-cohomology \(\widetilde{H}^{m+n}_{\rm fp,c1}(\mathcal{X}^{\rm ord}_{G},\mathcal{V}(-D);1+j,P_{1+j})\)._ **Definition 6.2.3**.: _We define a coherent fp-pair (of degree \((m,n)\), twist \(1+j\) and \(c1\)-support) to be an element of the kernel of the differential \(\mathscr{C}^{m,n}_{\rm fp,c1}\to\mathscr{C}^{m+1,n}_{\rm fp,c1}\); we write the group of these as \(\mathscr{Z}^{m,n}_{\rm fp,c1}(\mathcal{X}^{\rm ord}_{G},\mathcal{V}(-D);1+j,P _{1+j})\)._ Thus an fp-pair is a pair of elements \[x\in H^{n}_{c1}(\mathcal{X}^{\rm ord}_{G},\mathscr{F}il^{1+j}\operatorname{ BGG}^{m}(-D)),\qquad y\in H^{n}_{c1}(\mathcal{X}^{\rm ord}_{G},\mathscr{F}il^{1+j} \operatorname{BGG}^{m-1}(-D))\] which satisfy \[\nabla(x)=0\qquad\text{and}\qquad\nabla(y)=P(p^{-1-j}\varphi)\iota(x). 
\tag{6.1}\] _Note 6.2.4_.: Given \(x\), the equation (6.1) does not determine the element \(y\) uniquely: it is determined up to an element of \(H^{n}_{c1}(\mathcal{X}^{\rm ord}_{G},\operatorname{BGG}^{m-1}(-D))^{\nabla=0}\). \(\diamond\)_ **Proposition 6.2.5**.: _For \(0\leqslant j\leqslant\min\{k_{1},k_{2}\}\), the spectral sequence gives rise to an isomorphism_ \[\alpha_{G,\rm fp}:\mathscr{Z}^{1,1}_{\rm fp,c1}(\mathcal{X}^{\rm ord}_{G}, \mathcal{V}(-D);1+j,P_{1+j})\xrightarrow{\cong}\widetilde{H}^{2}_{\rm fp,c1}( \mathcal{X}^{\rm ord}_{G},\mathcal{V}(-\mathcal{D});1+j,P_{1+j}).\] Proof.: Since the \(E_{1}\) page of the spectral sequence is zero for \(m\leqslant 0\), the term \({}^{\mathbb{P}_{\bullet}}E^{1,1}_{2}\) is the kernel of the differential on \({}^{\mathbb{P}_{\bullet}}E^{1,1}_{1}\), which is the group of fp-pairs. Since this is the only nonzero term with \(m+n=2\), and the incoming and outgoing differentials at \({}^{\mathbb{P}_{\bullet}}E^{1,1}_{r}\) are trivially zero for all \(r\geqslant 1\), we conclude that \({}^{\mathbb{P}_{\bullet}}E^{1,1}_{2}\) coincides with \(\widetilde{H}^{2}\) of the abutment, as required. **Corollary 6.2.6**.: _Every cohomology class in \(\widetilde{H}^{2}_{\rm fp,c1}(\mathcal{X}^{\rm ord}_{G},\mathcal{V}(- \mathcal{D});1+j,P_{1+j})\) can be uniquely represented by a coherent fp-pair of degree \((1,1)\)._ Proof.: Since the \(E_{1}\) terms of the spectral sequence are supported in the region \(m,n\geqslant 1\), there are no other terms of total degree \(2\) except \((m,n)=(1,1)\); and clearly \(E^{(1,1)}_{2}=E^{(1,1)}_{\infty}\) since the differentials on the \(E_{2}\) page and beyond land outside this region. An exactly analogous argument shows that for the truncated de Rham cohomology groups \(\widetilde{H}^{i}_{\rm dR,c1}\) (the hypercohomology of \(\tau_{\geqslant 1}\operatorname{BGG}^{\bullet}(-D)\)), we have an isomorphism \[H^{1}_{c1}(\mathcal{X}^{\rm ord}_{G},\operatorname{BGG}^{1}(-D))\xrightarrow{ \alpha_{G,\rm rig}}\widetilde{H}^{2}_{\rm dR,c1}(\mathcal{X}^{\rm ord}_{G}, \mathcal{V}(-\mathcal{D}),1+j).\] **Lemma 6.2.7**.: _Let \(0\leqslant j\leqslant\min\{k_{1},k_{2}\}\). We have a commutative diagram_ _where the vertical arrows are the natural projection maps._ Proof.: Clear from the constructions. ### Construction of a coherent fp-pair **Notation 6.3.1**.: _Define_ \[\tilde{\nu}_{\rm fp}^{(\rm ord)}\in\widetilde{H}_{{\rm fp},c1}^{2}(\mathcal{X }_{G}^{\rm ord},\mathcal{V}\langle-\mathcal{D}\rangle;1+j,P_{1+j})\] _to be the restriction of the class \(\tilde{\nu}_{\rm fp}^{(1-{\rm ord})}\) constructed above._ **Corollary 6.3.2**.: _There exists a uniquely determined class_ \[\xi\in H^{1}_{c1}(\mathcal{X}_{G}^{\rm ord},\omega^{(-k_{1},-k_{2})}(- \mathcal{D})),\] _which is independent of \(j\), such that \((\nu^{(\rm ord)},\xi)\) forms an fp-pair representing the class \(\tilde{\nu}_{\rm fp}^{(\rm ord)}\), and such that \(\xi\) lies in the \(\Pi\)-eigenspace for the Hecke operators away from \(pN\)._ Proof.: The existence of \(\xi\) is immediate from Corollary 6.2.6 and Lemma 6.2.7; the independence from \(j\) is clear by construction. 
Now, if \(\xi^{\prime}\) is another element such that \((\nu^{(\rm ord)},\xi^{\prime})\) also represents \(\tilde{\nu}_{\rm fp}^{(\rm ord)}\), then \[\xi-\xi^{\prime}\in H^{1}_{c1}(\mathcal{X}_{G}^{\rm ord},{\rm BGG}^{0}(-D))^{ \nabla=0}\cong H^{1}_{{\rm rig},c1}(X_{G,0},\mathcal{V}\langle-\mathcal{D} \rangle).\] As we have seen, the \(\Pi_{\rm f}\)-eigenspace in this cohomology is zero, so there is a unique choice of \(\xi\) which is Hecke-equivariant. **Lemma 6.3.3**.: _The element \(\xi\) has the following properties:_ \[\varphi_{1}.\xi=\alpha_{1}\,\xi\qquad\text{and}\qquad U_{{\mathfrak{p}}_{2}}. \xi=0. \tag{6.2}\] Proof.: We first show the latter statement. We deduce from Corollary 6.1.3 that \[U_{{\mathfrak{p}}_{2}}.\cdot\xi\in H^{1}_{c1}(\mathcal{X}_{G}^{\rm ord},{\rm B GG }^{0}(-\mathcal{D}))^{\nabla=0}[\Pi_{f}^{\prime}].\] But as observed above, this space is zero. The former statement follows analogously, by considering the element \((\varphi_{1}-\alpha_{1})\cdot\xi\). We now lift these classes from the BGG complex to the full de Rham complex: **Definition 6.3.4**.: 1. _Write_ \(\dot{\xi}\) _for the image of_ \(\xi\) _in_ \(H^{1}_{c1}(\mathcal{X}_{G}^{\rm ord},\mathcal{V}(-D))\)_._ 2. _Similarly, for_ \(0\leqslant j\leqslant\min\{k_{1},k_{2}\}\) _write_ \(\dot{\nu}_{j}^{(\rm ord)}\) _for the image of_ \(\nu^{(\rm ord)}\) _in_ \[H^{1}_{c1}(\mathcal{X}_{G}^{\rm ord},\mathscr{F}il^{\prime}\mathcal{V}\otimes \Omega^{1}\langle-\mathcal{D}\rangle).\] **Lemma 6.3.5**.: _The element \(\dot{\xi}\) also satisfies_ \[\varphi_{1}.\dot{\xi}=\alpha_{1}\,\dot{\xi}\qquad\text{and}\qquad U_{{\mathfrak{ p}}_{2}}.\dot{\xi}=0. \tag{6.3}\] The following result will be very important for the regulator evaluation: **Proposition 6.3.6**.: _We have_ \[U_{p}\circ\iota^{*}(\dot{\xi})=0.\] Proof.: As maps \(H^{\bullet}(\mathcal{X}_{G}^{\rm ord})\to H^{\bullet}(\mathcal{X}_{H}^{\rm ord})\), we have the following identity: \[U_{p}\circ\iota^{*}\circ\varphi_{{\mathfrak{p}}_{1}}=\iota^{*}\circ(\langle{ \mathfrak{p}}_{1}\rangle U_{{\mathfrak{p}}_{2}}). \tag{6.4}\] Since \(\dot{\xi}\) is an eigenvector of \(\varphi_{{\mathfrak{p}}_{1}}\) with non-zero eigenvalue, and it is in the kernel of \(\varphi_{{\mathfrak{p}}_{2}}\), the Proposition follows. ## 7. Expression via coherent cohomology ### Relating the BGG and de Rham complexes We need to recall some formulae relating the de Rham and BGG complexes for \(\operatorname{GL}_{2}\). Let \(k\in\mathbf{Z}_{\geqslant 0}\). The BGG complex for weight \(k\geqslant 0\) is given by \(\left[\omega^{-k}\xrightarrow{\Theta}\omega^{k+2}\right]\), where \(\Theta\) is a differential operator given in terms of \(q\)-expansions by \[\Theta=\tfrac{(-1)^{k}}{k!}\theta^{k+1},\qquad\theta=q\tfrac{\mathrm{d}}{ \mathrm{d}q}.\] (Cf. [16, Remark 2.17] for example.) The map \(\operatorname{BGG}^{\bullet}\to\operatorname{DR}^{\bullet}\) is given as follows. * In degree \(1\), it is given by the tensor product of the natural inclusion \(\omega^{k}=\operatorname{Fil}^{k}(\operatorname{Sym}^{k}\mathcal{W}_{H}) \hookrightarrow\mathcal{W}_{H}\) and the Kodaira-Spencer isomorphism \(\omega^{2}\cong\Omega^{1}(D)\). * In degree \(0\), a section \(s\) of \(\omega^{-k}\) is mapped to the unique section of \(\operatorname{Sym}^{k}\mathcal{W}_{H}\) whose image in \(\operatorname{Sym}^{k}\mathcal{W}_{H}/\operatorname{Fil}^{0}\cong\omega^{-k}\) is \(s\), and whose image under the differential \(\nabla\) lands in \(\omega^{k}\otimes\Omega^{1}(D)\). 
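For orientation, the action of these operators on \(q\)-expansions is completely explicit (a standard fact, consistent with the definition of \(\Theta\) above): writing \(\mathcal{G}=\sum_{n\geqslant 0}a_{n}q^{n}\), we have
\[
\theta(\mathcal{G})=\sum_{n\geqslant 1}n\,a_{n}q^{n},\qquad\text{so}\qquad \Theta(\mathcal{G})=\tfrac{(-1)^{k}}{k!}\sum_{n\geqslant 1}n^{k+1}a_{n}q^{n};
\]
and the standard operators \(U_{p}\) and \(V_{p}\) act by \(U_{p}\big(\sum a_{n}q^{n}\big)=\sum a_{pn}q^{n}\) and \(V_{p}\big(\sum a_{n}q^{n}\big)=\sum a_{n}q^{pn}\). These formulae are recalled only as bookkeeping for the \(\theta^{k_{1}-j}\) twists and the \(p\)-depletion appearing in Section 7.7.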
We now recall (and somewhat reformulate) some results from SS4 of [16] giving a completely explicit description of these maps. We write \(\mathscr{X}_{H}^{\mathrm{ord}}\) for the ordinary locus _as a classical rigid space_ (not a dagger space), i.e. neglecting overconvergence. Passing to the Igusa tower \(\mathscr{H}_{H}\) (the canonical \(\mathbf{Z}_{p}^{\times}\)-covering of \(\mathscr{X}_{H}^{\mathrm{ord}}\) parametrising ordinary elliptic curves with a trivialization of their formal group), we obtain a canonical section \(v\) of \(\omega\), corresponding to the invariant differential form \(\frac{dT}{T}\) on the Tate curve \(\mathbf{G}_{m}/q^{2}\); note that \(\varphi^{*}(v)=pv\). If we let \(w\) be the unique section such that \(\nabla(v)=u\otimes\xi\), where \(\xi\) the local basis of \(\Omega^{1}_{\mathscr{X}_{H}^{\mathrm{ord}}}(D)\) corresponding to \(\frac{\mathrm{d}q}{q}\), then \(v\) and \(w\) are a basis of sections of \(\mathcal{W}_{H}\) over the Igusa tower, with \(w\) spanning the unit-root subspace (we have \(\varphi^{*}(w)=w\)) and \(\nabla w=0\). Hence we obtain a basis of sections \((v^{a}w^{k-a})_{0\leqslant a\leqslant k}\) of \(\operatorname{Sym}^{k}\mathcal{W}_{H}\) in which the actions of \(\nabla\) and \(\varphi\) are completely explicit. In these coordinates, the map from \(\omega^{-k}\) to \(\operatorname{Sym}^{k}\mathcal{W}_{H}\) sends a section \(\mathcal{G}\) of \(\omega^{-k}\) to the section \[\sum_{i=0}^{k}\tfrac{(-1)^{i}}{i!}\theta^{i}(\mathcal{G})\cdot v^{i}w^{k-i}.\] One verifies easily that the image of this under \(\nabla\) is \(\theta^{k+1}(\mathcal{G})v^{k}\otimes\xi\), as expected. (Note that it is not _a priori_ obvious that such a sum is overconvergent if \(\mathcal{G}\) is, since neither \(\theta\) nor the local bases \(v^{i}w^{k-i}\) have any meaning outside the ordinary locus.) ### The Eisenstein class as a coherent fp-pair Recall that the \(\operatorname{GL}_{2}\) Eisenstein class \(\operatorname{Eis}_{\operatorname{syn},N}^{t}\) lies in \(H^{1}_{\operatorname{syn}}(\mathbb{Y}_{H},\mathcal{V}_{H}(1+t))\), where \(\mathbb{Y}_{H}\) is the \(\mathbf{Z}_{p}\)-model of the modular curve \(Y_{1}(N)\), and \(\mathcal{V}_{H}\) is the sheaf corresponding to the \(t\)-th symmetric power of the standard representation. If we restrict to the open subscheme \(\mathbb{Y}_{H}^{\mathrm{ord}}\) given by removing the supersingular points of the special fibre (so \(\mathbb{Y}_{H}^{\mathrm{ord}}\) has the same generic fibre as \(\mathbb{Y}_{H}\)), and work with Gros syntomic cohomology, then we have a convenient explicit description: a class \(x\in\widetilde{H}^{1}_{\operatorname{syn}}(\mathcal{X}_{1}(N)^{\mathrm{ord}}, \mathcal{V}_{H},1+t)\) is given by a pair \((x_{0},x_{1})\), where \[x_{0}\in H^{0}(\mathcal{X}_{1}(N)^{\mathrm{ord}},\mathcal{V}),\qquad x_{1}\in H ^{0}(\mathcal{X}_{1}(N)^{\mathrm{ord}},\operatorname{Fil}^{t}\mathcal{V} \otimes\Omega^{1}(D)),\qquad\nabla x_{0}=(1-p^{-1-t}\varphi)x_{1}.\] The sheaf \(\operatorname{Fil}^{t}\mathcal{V}\otimes\Omega^{1}(D)\) is simply \(\omega^{t+2}\), so \(x_{1}\) is an overconvergent \(p\)-adic modular form of weight \(t+2\); via this description, \(\varphi\) acts on overconvergent forms as \(p^{t+1}\langle p\rangle V_{p}\), where \(\langle p\rangle\) is the diamond operator for \(p\bmod N\). We let \(\widetilde{\operatorname{Eis}}_{\operatorname{syn},N}^{t,\mathrm{ord}}\) be the image of \(\operatorname{Eis}_{\operatorname{syn},N}^{t}\) in this group; we shall now write down an explicit representing pair. 
The following is a reformulation of Theorem 4.5.7 of [16]; the formulations differ because we are using symmetric powers here rather than symmetric tensors (the basis vector \(v^{[r,s]}\) of _op.cit._ corresponds to \(\frac{v^{r}w^{r}}{r!^{s!}}\) in our present notation) and because we use a slightly different notation for Eisenstein series following [10]. **Proposition 7.2.1**.: _The Eisenstein class \(\widetilde{\operatorname{Eis}}_{\operatorname{syn},N}^{t,\mathrm{ord}}\) is represented by the pair of sections \((\epsilon_{0}^{t},\epsilon_{1}^{t})\) whose restrictions to \(\mathscr{X}_{H}^{\mathrm{ord}}\) are_ \[\epsilon_{0} =-N^{k}\sum_{u=0}^{t}\frac{(-1)^{t-u}}{u!}\theta^{u}\left(E_{0,1/ N}^{-t,\mathrm{ord}}\right)\cdot v^{t-u}w^{u},\] \[\epsilon_{1} =-\tfrac{N^{t}}{t!}F_{0,1/N}^{t+2}\cdot v^{t}\otimes\xi.\] _Here \(F_{0,1/N}^{t+2}\) is the algebraic Eisenstein series with \(q\)-expansion_ \[\zeta(-1-t)+\sum_{n>0}q^{n}\sum_{d|n}(\tfrac{n}{d})^{t+1}(\zeta_{N}^{d}+(-1)^{t} \zeta_{N}^{-d}),\] _and \(E_{0,1/N}^{-t,\operatorname{ord}}\) is an ordinary \(p\)-adic Eisenstein series of weight \(-t\), satisfying \(\theta^{t+1}(E_{0,1/N}^{-t,\operatorname{ord}})=(1-\langle p\rangle V_{p})F_{0, 1/N}^{t+2}\). _ _Remark 7.2.2_.: Note that the individual terms \(\theta^{u}(E_{0,1/N}^{-t,\operatorname{ord}})\) are \(p\)-adic modular forms, but they are not overconvergent (unless \(u=0\)). Nonetheless, \(\epsilon_{0}\) is an overconvergent section of \(\mathcal{V}_{H}\); the non-overconvergence arises because the sections \(v,w\) we are using to trivialize \(\mathcal{V}_{H}\) over the ordinary locus are not themselves overconvergent. \(\diamond\) ### Explicit formulae for the Clebsch-Gordan map The map \(\operatorname{CG}^{[j]}\) is given explicitly by the following formula (c.f. [10, Prop. 5.1.2])4: for \(0\leqslant s\leqslant t\), we have Footnote 4: The factorials appear slightly different from _op.cit._ since we are here working with symmetric powers \(v^{m}\) rather than symmetric tensors \(v^{[m]}\). This is also the reason for the presence of \(t!\) in the formula for the Eisenstein series. \[\iota^{[j]}(v^{s}w^{t-s})= \tag{7.1}\] \[\sum_{\begin{subarray}{c}0\leqslant r\leqslant k_{1}-j\\ 0\leqslant r^{\prime}\leqslant k_{2}-j\\ r+r^{\prime}=s\end{subarray}}\sum_{i=0}^{j}(-1)^{i}\frac{s!(t-s)!}{r!(r^{ \prime})!(k_{1}-r-j)!(k_{2}-r^{\prime}-j)!i!(j-i)!}v^{r+i}w^{k_{1}-r-i}\boxtimes v ^{r^{\prime}+j-i}w^{k_{2}-r^{\prime}-j+i}\otimes e_{-j}.\] **Lemma 7.3.1**.: _For given values of \(k_{1},k_{2},j\), the image of \(\iota^{[j]}(v^{s}w^{t-s})\) in the line spanned by the basis vector \(v^{k_{1}}\boxtimes w^{k_{2}}\otimes e_{-j}\) is zero for all \(s\) except \(s=k_{1}-j\), in which case it is equal to \(\frac{(-1)^{j}}{j!}\cdot v^{k_{1}}\boxtimes w^{k_{2}}\otimes e_{-j}\)._ Proof.: This is a superficially modified version of Proposition 5.1.2 of [10]. _Remark 7.3.2_.: In particular, this shows that the Clebsch-Gordan map is not in general defined integrally (i.e. does not respect the lattice given by the \(\mathbf{Z}\)-span of the basis vectors). However, the coefficient \(j!\) is the worst possible - one can check that \(j!\iota^{[j]}\) is integral. \(\diamond\) ### Reduction to a pairing in coherent cohomology Recall that we want to evaluate the pairing \[\left\langle(\iota^{[j]})^{*}(\nu_{\operatorname{fp}}),\,\operatorname{Eis}_{ \operatorname{syn},N}^{t}\right\rangle. 
\tag{7.2}\] Now we have \[(\ref{eq:1}) =\left\langle(\iota^{[j]})^{*}\left(\nu_{\operatorname{fp}}^{(1- \operatorname{ord})}\right),\,\operatorname{Eis}_{\operatorname{syn},N}^{t, \operatorname{ord}}\right\rangle.\] \[=\left\langle(\iota^{[j]})^{*}\left(\widetilde{\nu}_{\operatorname {fp}}^{(1-\operatorname{ord})}\right),\,\widetilde{\operatorname{Eis}}_{ \operatorname{syn},N}^{t,\operatorname{ord}}\right\rangle\] \[=\left\langle(\iota^{[j]})^{*}\left(\check{\nu}_{j}^{( \operatorname{ord})},\check{\xi}\right),\,(\epsilon_{1},\epsilon_{0})\right\rangle. \tag{7.3}\] Here, (7.3) takes values in the group \[\widetilde{H}_{\operatorname{fp},c}^{2}(\mathcal{X}_{H}^{\operatorname{ord}}, \mathbf{Q}_{p}(-\mathcal{D}_{H}),2;P_{j})\cong^{\operatorname{tr}}\mathbf{Q}_ {p}.\] We evaluate (7.3) using the formalism of cup products in fp-cohomology (cf. [1]). We have \[P_{j}(st)=a(t,s)P_{j}(t)+b(t,s)(1-s)\] with \(a(t,s)=s\) and \[b(t,s)=\frac{P_{j}(st)-sP_{j}(t)}{1-s}=1-b\cdot t^{2}s,\qquad b=\frac{p^{2j+2}} {\alpha_{1}^{2}\alpha_{2}\beta_{2}}=\frac{\beta_{1}}{\alpha_{1}}\cdot\frac{1}{p ^{(k_{1}+k_{2}-2j)}}.\] Hence \[P_{j}(p^{-1})\times\left\langle(\iota^{[j]})^{*}\left(\check{ \nu}_{j}^{(\operatorname{ord})},\check{\xi}\right),\,(\epsilon_{1}^{t}, \epsilon_{0})\right\rangle\] \[=(\iota^{[j]})^{*}(\check{\xi})\cup\varphi_{H}^{*}(\epsilon_{1}^{ t})+\left(1-\frac{\beta_{1}\cdot({{\varphi_{H}^{*}}}^{2}\otimes\varphi_{H}^{*})}{p ^{k_{1}+k_{2}-2j}\langle p\rangle\alpha_{1}}\right)\left((\iota^{[j]})^{*}( \check{\nu}_{j}^{(\operatorname{ord})})\cup\epsilon_{0}^{t}\right).\] **Lemma 7.4.1**.: _We have \((\iota^{[j]})^{*}(\hat{\xi})\cup\varphi_{H}^{*}(\epsilon_{0}^{t})=0\)._ Proof.: Observe that \(U_{p}\) acts on the top-degree rigid cohomology as multiplication by a power of \(p\). But we also have \[U_{p}\left((\iota^{[j]})^{*}(\hat{\xi})\cup\varphi_{H}^{*}(\epsilon_{0}^{t}) \right)=U_{p}\left((\iota^{[j]})^{*}(\hat{\xi})\right)\cup\epsilon_{0}^{t},\] which is equal to zero by Proposition 6.3.6. The Proposition follows. **Proposition 7.4.2**.: _Equation (7.3) is equal to_ \[\frac{\left(1-\frac{\beta_{1}}{\alpha_{1}\alpha_{2}}\right)\left(1-\frac{p^{j }}{\alpha_{1}\beta_{2}}\right)}{\left(1-\frac{p^{j}}{\alpha_{1}\beta_{2}} \right)\left(1-\frac{p^{j}}{\alpha_{1}\beta_{2}}\right)}\left\langle(\iota^{[ j]})^{*}(\dot{\nu}_{j}^{(\mathrm{ord})}),\epsilon_{0}^{t}\right\rangle.\] Proof.: Since any \(p\)-depleted form will pair to \(0\) with a form that's in a direct sum of finite-slope \(\varphi\)-eigenspaces, we only care about the \(\epsilon_{0}\) term modulo exact forms. We also only care about its projection to the \(\langle p\rangle=1\) eigenspace, because we are pairing it with a class in the \(\Pi\)-eigenspace and \(\Pi\) has trivial central character. On the space \(H^{2}_{\mathrm{rig},c}(\mathcal{X}_{H}^{\mathrm{ord}}(-\mathcal{D}),\,\mathbf{Q }_{p}(2))\), the Frobenius acts as \(p^{-1}\), and \(\langle p\rangle\) acts trivially; so we can replace \(\frac{\beta_{1}\cdot(\varphi_{H}^{*^{2}}\otimes\varphi_{H}^{*})}{p^{k_{1}+2 \cdot 2^{-j}}(p)\alpha_{1}}\) with \(\frac{\beta_{1}\cdot(1\otimes\varphi_{H}^{*^{2}})}{p^{k_{1}+2\cdot 2^{-j}}(p) \alpha_{1}}\). The operator \(\varphi^{-1}\) makes sense modulo \(p\)-depleted forms, and in this quotient we have \[\varphi^{-1}\cdot\epsilon_{0}^{m}=p^{m+1}\langle p\rangle^{-1}\epsilon_{0}^{m}.\] So we are done. 
_Remark 7.4.3_.: Note that \(\dot{\nu}_{j}^{(\mathrm{ord})}\) makes sense over the \(\mathfrak{p}_{1}\)-ordinary locus \(\mathcal{X}_{G}^{1-\mathrm{ord}}\); the antiderivative \(\dot{\xi}\) is only defined over the fully ordinary locus \(\mathcal{X}_{G}^{\mathrm{ord}}\), but this term has disappeared from our formula. So we can also interpret the pairing of Proposition 7.4.2 as a cup-product in the cohomology of \(\mathcal{X}_{G}^{1-\mathrm{ord}}\), which will allow us to compare with the construction of \(p\)-adic \(L\)-functions from [10]. \(\diamond\) ### A partial unit root splitting We saw above that over \(\mathscr{X}_{H}^{\mathrm{ord}}\) (the ordinary locus as a classical rigid space) the Hodge filtration of \(\mathcal{V}_{H}\) has a canonical splitting given by the unit-root subspace. For \(X_{G}\) we have the more refined structure of a \(\mathbf{Z}^{2}\)-filtration, and we can ask for splittings of either factor. We state below the case of interest. **Proposition 7.5.1**.: _Over \(\mathscr{X}_{G}^{1-\mathrm{ord}}\), the natural inclusion map \(\omega^{(k_{1},-k_{2})}\hookrightarrow\mathcal{V}_{G}/\operatorname{Fil}^{k_{ 1}+1}\) admits a canonical splitting._ Proof.: We have a projection map from \(\mathcal{V}_{G}\) onto \(\mathcal{V}_{G}/\operatorname{Fil}^{(0,1)}\mathcal{V}_{G}\cong\operatorname{ Sym}^{k_{1}}\mathcal{W}_{G}\boxtimes\omega^{-k_{2}}\). So it suffices to show that the filtration on \(\operatorname{Sym}^{k_{1}}\mathcal{W}_{G}\) is splittable over the \(1\)-ordinary locus; but this is clear, since a complement is provided by the unit-root subspace for the operator \(\varphi_{1}\). ### Comparison of pushforward maps To begin with, we observe that there exist pushforward-maps in coherent cohomology: **Proposition 7.6.1**.: _There exists a pushforward map on coherent cohomology_ \[\iota_{\mathrm{coh},*}:H^{0}(\mathscr{X}_{H}^{\mathrm{ord}},\omega^{k_{1}-k_{ 2}})\longrightarrow H^{1}(\mathscr{X}_{G}^{1-\mathrm{ord}},\omega^{k_{1}+2,-k_{ 2}}).\] Proof.: Clear since the conormal sheaf of the embedding \(\iota\) is isomorphic to \(\omega^{2}\). We want to compare this map with the pushforward maps \(\iota_{*}^{[j]}\) on the cohomology of the de Rham sheaves. **Notation 7.6.2**.: _We use the notation of Section 2.2, and we write_ * \(\mathcal{V}_{G}=\operatorname{Sym}^{[k_{1},k_{2}]}\mathcal{H}(\mathcal{A})_{ \mathbf{Q}_{p}}\)_;_ * \(\mathcal{V}_{H}=\operatorname{Sym}^{t}\mathcal{H}(\mathcal{E})_{\mathbf{Q}_{p}}\) _for_ \(0\leqslant j\leqslant\min\{k_{1},k_{2}\}\)_._ _for for vector bundles attached to \(V_{H}\) and \(V_{G}\), respectively._ **Proposition 7.6.3**.: _Pushforward along \(\iota^{[\beta]}\) induces a commutative diagram_ Proof.: For the commutativity of the top square, we note that the Clebsch-Gordan map is compatible with filtrations and hence induces a commutative diagram of \(B_{H}\)-representations The identification of the diagonal map as \(\frac{1}{j!}t_{\mathrm{coh},*}\) arises similarly, using the explicit formula (7.1). 
**Proposition 7.6.4**.: _The diagram in Proposition 7.6.3 is compatible with the unit root splitting_ \[u_{H}:H^{0}(\mathscr{X}_{H}^{\mathrm{ord}},V_{H}/\operatorname{Fil}^{k_{1}-j+ 1}V_{H})\longrightarrow H^{0}(\mathscr{X}_{H}^{\mathrm{ord}},\operatorname{ Gr}^{k_{1}-j}V_{H})\] _and the partial unit root splitting_ \[u_{G}:H^{1}(\mathscr{X}_{G}^{\mathrm{1-ord}},V_{G}/\operatorname{Fil}^{k_{1}+ 1}V_{G}\otimes\,\Omega^{1})\longrightarrow H^{1}(\mathscr{X}_{G}^{\mathrm{1- ord}},\omega^{k_{1}+2,-k_{2}}):\] _we have a commutative diagram_ Proof.: Over \(\mathscr{X}_{H}^{\mathrm{ord}}\), we have a canonical splitting of the Hodge filtration of \(\mathcal{W}_{H}\), as above. To prove the proposition, it is sufficient to check that the unit root splittings induce a commutative diagram which boils down to an explicit computation with the Clebsch-Gordan map using equation (7.1). It follows from the classicity theorem of higher Hida theory proved in [11] that the natural restriction map (forgetting overconvergence) \[H^{1}(\mathcal{X}_{G}^{\mathrm{1-ord}},\omega^{k_{1}+2,-k_{2}})\to H^{1}( \mathscr{X}_{G}^{\mathrm{1-ord}},\omega^{k_{1}+2,-k_{2}})\] is an isomorphism on the ordinary (slope \(0\)) eigenspace for \(U_{\mathfrak{p}_{1}}\). If \(\Pi\) is ordinary at \(\mathfrak{p}_{1}\) (and \(\alpha_{1}\) is, necessarily, the unit root) then pairing with \(\nu^{\mathrm{ord}}\) factors through this eigenspace, so we obtain the following: **Corollary 7.6.5**.: _Assume \(\Pi\) is ordinary at \(\mathfrak{p}_{1}\). Then the linear functional on \(H^{0}(\mathcal{X}_{H}^{\mathrm{ord}},V_{H}/\operatorname{Fil}^{k_{1}-j+1}V_{H})\) given by pairing with \((\iota^{[j],*})(\iota^{\mathrm{ord}})\) factors through the unit root splitting into \(:H^{0}(\mathcal{X}_{H}^{\mathrm{ord}},\operatorname{Gr}^{k_{1}-j}V_{H})\), and it is given by_ \[\frac{k_{1}!\,k_{2}!}{j!}\cdot\,\left\langle\nu^{(1-\mathrm{ord})},\iota_{ \mathrm{coh},*}(-)\right\rangle_{\mathcal{X}_{G}^{1-\mathrm{ord}}}.\qed\] _Remark 7.6.6_.: The factors \(k_{i}!\) arise from the pairing of the basis vectors \(v^{k_{i}}\) and \(w^{k_{i}}\) of \(\operatorname{Sym}^{k_{i}}W_{G}\). \(\diamond\) ### Relation to \(p\)-adic \(L\)-functions We use Corollary 7.6.5 in order to relate the formula in Proposition 7.4.2 to values of \(p\)-adic \(L\)-functions. We assume henceforth that \(\Pi\) is ordinary at \(\mathfrak{p}_{1}\). **Proposition 7.7.1**.: _The pairing (7.2) is given by_ \[\left\langle(\iota^{[j]})^{*}(\nu_{\mathrm{fp}}),\,\mathrm{Eis}^{t}_{ \operatorname{syn},N}\right\rangle=\\ \frac{N^{(k_{1}+k_{2}-2j)}(-1)^{k_{2}-j+1}\left(1-\frac{\beta_{1 }}{p\alpha_{1}}\right)}{\left(1-\frac{p^{j}}{\alpha_{1}\alpha_{2}}\right) \left(1-\frac{p^{j}}{\alpha_{1}\beta_{2}}\right)}\binom{k_{1}}{j}\,k_{2}!\, \cdot\left(\nu^{(\mathrm{ord})}\cup\iota_{\mathrm{coh},*}(\theta^{(k_{1}-j)}E ^{-t,\mathrm{ord}}_{0,1/N})\right).\] Compare Corollary 6.5.7 in [10]. Proof.: We deduce from Proposition 7.2.1 that the image of \(\epsilon_{0}^{k_{1}+k+2-2j}\) under projection to \(H^{0}(\mathcal{X}_{H}^{\mathrm{ord}},\operatorname{Gr}^{k_{1}-j}\mathcal{V}_{H})\) is given by \(-N^{k}(-1)^{k_{2}-j}\frac{(t!)}{(k_{1}-j)!}\theta^{k_{1}-j}(E^{-t,\mathrm{ord }}_{0,1/N})\). Combining this with Proposition 7.4.2 and Corollary 7.6.5 gives the result. 
_Note 7.7.2_.: By adjunction, we can write the formula in Proposition 7.7.1 as \[\left\langle(\iota^{[j]})^{*}(\nu_{\mathrm{fp}}),\,\mathrm{Eis}^{ t}_{\operatorname{syn},N}\right\rangle=\\ \frac{N^{(k_{1}+k_{2}-2j)}(-1)^{k_{2}-j+1}\left(1-\frac{\beta_{1 }}{p\alpha_{1}}\right)}{\left(1-\frac{p^{j}}{\alpha_{1}\alpha_{2}}\right) \left(1-\frac{p^{j}}{\alpha_{1}\beta_{2}}\right)}\binom{k_{1}}{j}\,k_{2}!\, \cdot\,\left(\iota^{*}_{\mathrm{coh}}(\nu^{(\mathrm{ord})})\cup\theta^{(k_{1} -j)}E^{-t,\mathrm{ord}}_{0,1/N}\right).\] \(\diamond\) In order to relate this formula to a (non-critical) value of a \(p\)-adic \(L\)-function, we need to replace \[\theta^{(k_{1}-j)}E^{-t,\mathrm{ord}}_{0,1/N}=E^{j-k_{2},\mathrm{ord}}_{0,1/N}\] by its \(p\)-depletion \[E^{j-k_{2},[p]}_{0,1/N}=(1-\varphi\circ U_{p})E^{j-k_{2},\mathrm{ord}}_{0,1/N}.\] We adapt the argument from [10, SS6.5]. Let \(V_{\mathfrak{p}_{2}}=p^{-1-k_{2}}\langle\mathfrak{p}_{2}\rangle^{-1}\varphi_{2}\), so \(V_{\mathfrak{p}_{2}}\) is a right inverse of \(U_{\mathfrak{p}_{2}}\). Let \(\lambda,\mu\) be constants such that \[U_{\mathfrak{p}_{2}}(\nu^{\mathrm{ord}})=\lambda\nu^{\mathrm{ord}}-\mu V_{ \mathfrak{p}_{2}}(\nu^{\mathrm{ord}})\] (explicitly, we have \(\lambda=a_{\mathfrak{p}_{2}}(\mathcal{F})\) and \(\mu=p^{1+k_{2}}\)), and let \(\gamma=p^{k_{1}-j}\). Using an analogue of Lemma 6.5.8 from _op.cit._, we deduce the following result: **Lemma 7.7.3**.: _We have_ \[\iota^{*}_{\mathrm{coh}}(\nu^{\mathrm{ord}})\cup\theta^{k_{1}-j}E^{-t,[p]}_{0,1 /N}=\left(1-\lambda\gamma p^{-k_{1}}\beta_{\mathfrak{p}_{1}}\cdot V_{p}+\mu \gamma^{2}(p^{-k_{1}}\beta_{\mathfrak{p}_{1}})^{2}V_{p}^{2}\right)\left(\iota^{* }_{\mathrm{coh}}(\nu^{\mathrm{ord}})\cup\theta^{k_{1}-j}E^{-t,\mathrm{ord}}_{0,1 /N}\right).\] Proof.: Arguing as in SS6.5 in _op.cit._, we see that \[\iota^{*}_{\mathrm{coh}}(\nu^{\mathrm{ord}})\cup\theta^{k_{1}-j}E^ {-t,[p]}_{0,1/N} =\left(1-\lambda\gamma p^{-k_{1}}\beta_{\mathfrak{p}_{1}}\cdot V_{p}+\mu \gamma^{2}(p^{-k_{1}}\beta_{\mathfrak{p}_{1}})^{2}V_{p}^{2}\right)\left(\iota^{* }_{\mathrm{coh}}(\nu^{\mathrm{ord}})\cup\theta^{k_{1}-j}E^{-t,\mathrm{ord}}_{0,1 /N}\right)\] \[+(1-V_{p}U_{p})\left(\mu\varphi^{2}\iota^{*}_{\mathrm{coh}}(\nu^ {\mathrm{ord}})\cup\theta^{k_{1}-j}E^{-t,\mathrm{ord}}_{0,1/N}-\iota^{*}_{ \mathrm{coh}}(\nu^{\mathrm{ord}})\cup\varphi\theta^{k_{1}-j}E^{-t,\mathrm{ord}}_{0,1/N}\right).\] But the operator \(1-V_{p}U_{p}\) acts as the zero map, which proves the result. We deduce the following formula: **Proposition 7.7.4**.: _We have_ \[\left\langle(\iota^{[j]})^{*}(\nu_{\mathrm{fp}}),\,\mathrm{Eis}^{t}_{ \mathrm{syn},N}\right\rangle=N^{(k_{1}+k_{2}-2j)}(-1)^{k_{2}-j+1}\times\] \[\qquad\qquad\frac{\left(1-\frac{\beta_{1}}{p\alpha_{1}}\right)}{ \left(1-\frac{p^{j}}{\alpha_{1}\alpha_{2}}\right)\left(1-\frac{p^{j}}{\alpha_{ 1}\beta_{2}}\right)\left(1-\frac{\beta_{1}\alpha_{2}}{p^{j+1}}\right)\left(1- \frac{\beta_{1}\beta_{2}}{p^{j+1}}\right)}\binom{k_{1}}{j}\,k_{2}!\cdot\left( \iota^{*}_{\mathrm{coh}}(\nu^{(\mathrm{ord})})\cup\theta^{k_{1}-j}E^{-t,[p]}_{0,1/N}\right).\] We can also express this in terms of the class \(\tilde{\nu}_{\Pi,\alpha}\) on the higher-level variety \(\mathcal{X}_{G}(\mathfrak{p}_{1})\) described above. 
If \(\pi_{\mathfrak{p}_{1}}\) denotes the degeneracy map \(\mathcal{X}_{G}(\mathfrak{p}_{1})\to\mathcal{X}_{G}\), then we have \(\mathrm{pr}_{\mathfrak{p}_{1},*}\left(\tilde{\nu}_{\Pi,\alpha}\right)=p\left( 1-\frac{\beta_{1}}{p\alpha_{1}}\right)\nu^{(\mathrm{ord})}\); so the formula in Proposition 7.7.4 is equivalent to \[\left\langle(\iota^{[j]})^{*}(\nu_{\mathrm{fp}}),\,\mathrm{Eis}^{t }_{\mathrm{syn},N}\right\rangle=N^{(k_{1}+k_{2}-2j)}(-1)^{k_{2}-j+1}p^{-1}\times\] \[\qquad\frac{1}{\left(1-\frac{p^{j}}{\alpha_{1}\alpha_{2}}\right) \left(1-\frac{p^{j}}{\alpha_{1}\beta_{2}}\right)\left(1-\frac{\beta_{1}\alpha _{2}}{p^{j+1}}\right)\left(1-\frac{\beta_{1}\beta_{2}}{p^{j+1}}\right)} \binom{k_{1}}{j}\,k_{2}!\cdot\left(\iota^{*}_{\mathrm{coh}}\circ\mathrm{pr}_{ \mathfrak{p}_{1},*}\left(\nu^{(\mathrm{ord})}_{\mathfrak{l}\mathfrak{w}( \mathfrak{p}_{1})}\right)\cup\theta^{k_{1}-j}E^{-t,[p]}_{0,1/N}\right).\] We now recall the definition of the (imprimitive) \(p\)-adic Asai \(L\)-function in [13, Def. 7.3.1]. For a \(p\)-adic character \(\sigma:\mathbf{Z}_{p}^{\times}\to\mathbf{C}_{p}^{\times}\), we have \[L^{\mathrm{imp}}_{p,\mathrm{Asai}}(\Pi,\sigma)=\tfrac{1}{p}(\sqrt{D})^{-1-(k_{1 }+k_{2})/2-\sigma}(-1)^{\sigma}\left\langle\tilde{\nu}_{\Pi,\alpha},\iota^{(p )}_{\mathrm{coh},*}\left(\mathcal{E}_{k_{1}-k_{2}}(\sigma)\right)\right\rangle, \tag{7.4}\] where * \(X_{H}(p)\) is the modular curve of level \(\Gamma_{1}(N)\cap\Gamma_{0}(p)\); * \(\iota^{(p)}:\,X_{H}(p)\to X_{G}(\mathfrak{p}_{1})\) is the natural embedding induced by \(\iota\); * \(\mathcal{E}_{k_{1}-k_{2}}\) is a \(p\)-adic family of \(p\)-depleted Eisenstein series, of constant weight \(k_{1}-k_{2}\), defined in [13, SS7.2], with the Schwartz function \(\Phi^{(p)}\) chosen as in SS7.3.2 of \(op.cit.\); at \(\sigma=-\tfrac{t}{2}\), this specialises to \(\theta^{k_{1}-j}\left(E^{-t,[p]}_{0,1/N}\right)\). **Lemma 7.7.5**.: _The following diagram is cartesian:_ _so \(\pi_{\mathfrak{p}_{1}}^{*}\circ\iota_{*}=\iota_{*}^{(p)}\circ\pi_{p}^{*}\)._ Comparing the previous proposition with (7.4), we hence deduce the main theorem of this paper: **Theorem 7.7.6**.: _Let \(0\leqslant j\leqslant\min\{k_{1},k_{2}\}\), and let \(t=k_{1}+k_{2}-2j\geqslant 0\). Then we have_ \[\left\langle\nu_{\mathrm{dR}},\,\log\left(\mathrm{loc}_{p}\,\mathrm{AF}^{[\Pi, j]}_{\mathrm{\acute{e}}t}\right)\right\rangle=\frac{(\sqrt{D})^{j+1}(-1)^{(k_{1}-k_{2})/2+1}N ^{t}\binom{k_{1}}{j}\,k_{2}!}{\left(1-\frac{p^{j}}{\alpha_{1}\alpha_{2}} \right)\left(1-\frac{p^{j}}{\alpha_{1}\beta_{2}}\right)\left(1-\frac{\beta_{1} \alpha_{2}}{p^{j+1}}\right)\left(1-\frac{\beta_{1}\beta_{2}}{p^{j+1}}\right)} \cdot L^{\mathrm{imp}}_{p,\mathrm{Asai}}\left(\Pi,-\frac{t}{2}\right).\] _Remark 7.7.7_.: Note that in [13] we also defined a primitive \(p\)-adic \(L\)-function, interpolating the critical values of the primitive Asai \(L\)-function (with the optimal Euler factors at ramified primes). One can likewise modify the construction of the Asai-Flach class by incorporating more general test data away from \(p\) (see [10]), and prove a refinement of Theorem 7.7.6, relating Asai-Flach classes with appropriate "optimal" test data to non-critical values of the primitive \(p\)-adic Asai \(L\)-function. We leave the details to the interested reader. \(\diamond\)
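As a purely illustrative aid (not part of the argument), the scalar factor appearing in Theorem 7.7.6 can be transcribed into a small numerical helper. This is a sketch under the assumption that the parameters \(\alpha_{i},\beta_{i}\) are supplied as complex numbers (in applications they live in a \(p\)-adic field), and that \(k_{1}-k_{2}\) is even as in the theorem; all names are ours.

```python
from math import comb, factorial

def asai_interpolation_factor(p, j, k1, k2, N, sqrt_D, a1, b1, a2, b2):
    """Transcribes the factor relating <nu_dR, log loc_p AF^{[Pi,j]}> to
    L^imp_{p,Asai}(Pi, -t/2) in Theorem 7.7.6, with t = k1 + k2 - 2j."""
    t = k1 + k2 - 2 * j
    numerator = (sqrt_D ** (j + 1)) * ((-1) ** ((k1 - k2) // 2 + 1)) * (N ** t) \
        * comb(k1, j) * factorial(k2)
    euler = (1 - p ** j / (a1 * a2)) * (1 - p ** j / (a1 * b2)) \
        * (1 - b1 * a2 / p ** (j + 1)) * (1 - b1 * b2 / p ** (j + 1))
    return numerator / euler
```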
2309.11354
Self-supervised learning unveils change in urban housing from street-level images
Cities around the world face a critical shortage of affordable and decent housing. Despite its critical importance for policy, our ability to effectively monitor and track progress in urban housing is limited. Deep learning-based computer vision methods applied to street-level images have been successful in the measurement of socioeconomic and environmental inequalities but did not fully utilize temporal images to track urban change as time-varying labels are often unavailable. We used self-supervised methods to measure change in London using 15 million street images taken between 2008 and 2021. Our novel adaptation of Barlow Twins, Street2Vec, embeds urban structure while being invariant to seasonal and daily changes without manual annotations. It outperformed generic embeddings, successfully identified point-level change in London's housing supply from street-level images, and distinguished between major and minor change. This capability can provide timely information for urban planning and policy decisions toward more liveable, equitable, and sustainable cities.
Steven Stalder, Michele Volpi, Nicolas Büttner, Stephen Law, Kenneth Harttgen, Esra Suel
2023-09-20T14:35:23Z
http://arxiv.org/abs/2309.11354v2
# Self-supervised learning unveils change in urban housing from street-level images ###### Abstract Cities around the world face a critical shortage of affordable and decent housing. Despite its critical importance for policy, our ability to effectively monitor and track progress in urban housing is limited. Deep learning-based computer vision methods applied to street-level images have been successful in the measurement of socioeconomic and environmental inequalities but did not fully utilize temporal images to track urban change as time-varying labels are often unavailable. We used self-supervised methods to measure change in London using 15 million street images taken between 2008 and 2021. Our novel adaptation of Barlow Twins, Street2Vec, embeds urban structure while being invariant to seasonal and daily changes without manual annotations. It outperformed generic embeddings, successfully identified point-level change in London's housing supply from street-level images, and distinguished between major and minor change. This capability can provide timely information for urban planning and policy decisions toward more liveable, equitable, and sustainable cities. The global urban housing crisis emerged as a pressing issue in recent decades, with cities worldwide facing a critical shortage of affordable and decent housing [2, 3, 4]. It has serious implications for health, social mobility, and economic productivity [5]. Evidence suggests the lack of affordable high-quality housing exacerbates other inequalities, worsens homelessness, contributes to declining birth rates, and forces essential workers out of cities [6, 7, 8, 9, 10, 11, 12, 13]. The United Nations (UN) recognizes the urgency; universal access to adequate, safe, and affordable housing is now part of the Sustainable Development Goals (SDGs) [14]. City governments allocate resources to affordable housing initiatives and regenerating and expanding the housing supply. Timely measurements at high spatial resolution are crucial, yet largely lacking, for tracking progress and informing interventions [15]. Further research into methods allowing measurement of housing supply from emerging sources of low-cost large-scale data is essential to support timely data-driven decision-making for local governments and achieving global goals towards more liveable, equitable, and sustainable cities [16]. One of the barriers to effective tracking of housing is the fragmentation of housing data across disparate sources. The most comprehensive measure of housing stock is the census, yet it is only conducted every ten years in many countries. Although household surveys provide more frequent estimates of housing quality and costs, they lack spatial granularity due to their limited sample sizes. Local governments responsible for housing permits hold relevant data, yet building a unified dataset requires targeted collaborative efforts from multiple private and public sector players. In addition, new development data captures only a part of housing supply change and excludes demolitions, renewals, and regeneration. As a result, cities and researchers lack a comprehensive data source to effectively track the housing supply and its affordability to assess interventions as they are implemented. The availability of emerging sources of affordable large-scale image data, combined with advances in computer vision methods relying on deep learning, holds great potential for accelerating and improving urban measurements [17]. 
Street-level images attracted particular attention as they capture urban environments as experienced by their residents and can provide very high spatial and temporal resolution [18, 19]. Prior research focused on the application of supervised deep learning methods to street-level images, which require high-quality ground truth measurements (labels) as a starting point. Successful urban applications included measurement of socio-demographics [20, 21, 22], trees and green space [23, 24, 25, 26, 27, 28], housing prices [29], crime rates [21, 30], pollution levels and sources [31, 32, 33, 34], perceptions [35, 36, 37, 38], density and walkability [39, 40, 41, 42, 43, 44], road safety [45], accessibility [46], and travel patterns [47, 48, 49, 50]. Even though mapping providers such as Google, Baidu, and Mapillary have been collecting and archiving multi-year street-level images of several cities for over a decade [51, 52, 47, 18, 53, 26], many of these studies were cross-sectional. So far, researchers have been limited by the difficulty of attaining temporally coherent and spatially dense label data at scale, as required by supervised methods. Therefore, existing research has not fully explored the potential of the temporal dimension of street-level images for studying urban change. On the other hand, self-supervised representation learning methods are being increasingly investigated as a way to extract meaningful information from large sets of structured but unlabelled data [54]. They also differ from traditional unsupervised methods (e.g., auto-encoders) [55] as they learn latent data representations by optimizing auxiliary tasks that expose intrinsic data structures, and have found greater success in discriminative tasks [56]. Previous works have applied these methods to aerial and satellite images [57, 58, 59, 60] for land cover change detection [61, 62, 63, 64, 65, 66]. Comparatively little research has applied self-supervised learning methods to street-level images, where the focus has primarily been on learning image embeddings that contrast over geographical proximity and cross-modal embeddings for cross-sectional prediction tasks [67, 56]. Measurement of change from street images holds value not only for data-rich cities that often lack timely measurement data but also for developing regions experiencing rapid urbanization and facing data scarcity [68].

In this study, we measured neighborhood change from street-level images using self-supervised representation learning (Fig. 1). We used 15.3 million images taken between the years 2008 and 2021 across the London metropolitan area in the UK. We applied our proposed Street2Vec method, in which we adapted Barlow Twins [1] to learn effectively from spatio-temporal street-level images. We designed Street2Vec to ensure it remains invariant to common but irrelevant variations across images such as illumination conditions, seasonality, and the presence of vehicles and people in images. We constrained its focus on learning visual features that relate to urban building structure. Once the Street2Vec representations were learned, we computed the degree of change based on the cosine distance between the embeddings of two images captured a decade apart, in 2008 and 2018, as this was the largest time span in our dataset with a substantial overlap of image locations. We systematically assessed the performance of our Street2Vec approach. First, we visually and quantitatively evaluated the change we detected in Opportunity Areas (OAs), which were announced and received financial incentives from the Mayor of London as key locations with potential for new homes, jobs, and infrastructure, and compared it to the change detected for all other areas in Greater London. Second, we manually labeled 1,449 image pairs from 2008 and 2018 with respect to five different categories corresponding to degrees of urban change (see Supplementary Information for the detailed definitions). We then compared the change detection performance of our proposed Street2Vec approach with a baseline feature extractor pre-trained on ImageNet, which uses the same backbone Convolutional Neural Network (CNN) ResNet-50 architecture [69]. Finally, we visualized the spatial distribution of neighborhood clusters from Street2Vec embeddings. Our models successfully identified both subtle and significant urban transformations corresponding to changes in housing stock at the neighborhood level in London, outperforming the generic pre-trained feature extractor. Our change detection results are based on 329,031 point locations in London for which we had images from both 2008 and 2018.

Figure 1: Overview of the full pipeline. (a) Our proposed Street2Vec method, in which we apply Barlow Twins [1] to street-level images. (b) Illustration of our application of the trained Street2Vec model for mapping urban change between 2008 and 2018.

## Results

### Change in Opportunity Areas

As part of the spatial development strategy published and periodically updated by the Mayor of London since 2004, Opportunity Areas (OAs) in Greater London are identified as key locations with potential for new homes, jobs, and infrastructure (london.gov.uk/programmes-strategies/planning/implementing-london-plan/londons-opportunity-areas). New developments in these neighborhoods have been actively incentivized by the Mayor and Local Authorities through various measures, including investments in transport links. To evaluate the effectiveness of our model, we estimated the point-level change between 2008 and 2018 as measured by cosine distances between Street2Vec embeddings. We expected to see higher numbers of locations with larger change within OAs (i.e., higher values for the median and 75th quantile). We show a comparison of distributions of point-level change in Fig. 2 for all OAs separately along with all other areas combined. We found that OAs have significantly higher levels of median change (\(p<0.01\)). We also found that OAs have significantly higher values for the 75th quantile when compared to other areas in London (\(p<0.01\)). While we do not have ground truth data to validate point-level change detection, our results demonstrate the success of our model in highlighting the neighborhoods where we would expect the largest change in London. Our findings revealed substantial variation as captured by cosine distances, with certain areas experiencing substantial neighborhood change (e.g., Kings Cross and St. Pancras, Tottenham Court Road), along with areas lying on newly built transportation infrastructure such as the Northern Line extension or the Elizabeth Line (e.g., Battersea and Woolwich). Other areas showed limited change or are lagging in planning. This also provides important information for local governments, as it highlights areas that have not experienced the anticipated level of development despite incentives, enabling targeted interventions, and areas that have redeveloped organically without targeted policy interventions.
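To make the change-scoring step concrete, the following is a minimal sketch of how per-location change scores and the OA versus non-OA comparison can be computed from the learned embeddings. It assumes the 2008 and 2018 Street2Vec embeddings are available as row-aligned arrays; the variable names, the placeholder data, and the choice of a Mann-Whitney U test are illustrative assumptions on our part rather than details taken from the paper's released code.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def cosine_change(emb_a: np.ndarray, emb_b: np.ndarray) -> np.ndarray:
    """Cosine distance between row-aligned embeddings of the same locations."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    return 1.0 - np.sum(a * b, axis=1)

# Placeholder inputs: (n_locations, d) embeddings for 2008 and 2018,
# plus a boolean mask marking locations inside Opportunity Areas.
rng = np.random.default_rng(0)
emb_2008 = rng.normal(size=(1000, 128))
emb_2018 = rng.normal(size=(1000, 128))
is_oa = np.zeros(1000, dtype=bool)
is_oa[:200] = True

change = cosine_change(emb_2008, emb_2018)
oa, non_oa = change[is_oa], change[~is_oa]
print(f"median change   OA: {np.median(oa):.3f}   non-OA: {np.median(non_oa):.3f}")
print(f"75th percentile OA: {np.percentile(oa, 75):.3f}   non-OA: {np.percentile(non_oa, 75):.3f}")
stat, p = mannwhitneyu(oa, non_oa, alternative="greater")
print(f"Mann-Whitney U one-sided p-value: {p:.3g}")
```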
### Subtle shifts vs. major new developments

We investigated whether our method can also identify relatively subtle changes in the built environment, ranging from the renewal of existing homes to the regeneration of entire neighborhoods. We expected major new housing to produce stronger visual signals in the images. However, a lot of the change in housing happens through shifts in the existing housing stock, especially in cities with rich histories like London. This type of change may show weaker visual signals in street-level images, potentially making it more challenging to monitor effectively. Tracking subtler changes, such as new coffee chains or repainted facades, is critical as they may lead to undesired outcomes such as displacement caused by processes like gentrification.

A visual investigation of image examples such as in Figure 3(b) revealed that our model can indeed distinguish between different levels of structural change in the built environment. For example, the first image pair demonstrates that our model attributes close to zero change to pairs that show considerable visual differences in lighting conditions or seasonality. Minor structural alterations like the construction of fences, such as in the second image pair of the first row, lead to small but non-zero cosine distances. The second row shows examples of image pairs with slightly higher cosine distances, and they indeed begin to exhibit stronger changes such as a new paint job on the facade and the widening of the pavement. Newly constructed buildings are visible in the third row, while the fourth row with very high cosine distances captures complete reconstructions of streets or neighborhoods. The fifth row shows examples of pairs with the highest distance values, where we found instances of extreme structural change and also many images that were rotated, corrupted, or anomalous in other ways. All of this was desired behavior when designing our proposed Street2Vec method.

Figure 2: Change detected from Street2Vec embeddings in London: (a) map of predicted mean change for all Middle Super Output Areas (MSOAs), (b) map of predicted mean change for Opportunity Areas (OAs) announced by the Mayor of London as areas with substantial potential for new developments, (c) distribution of point-level change detected in OAs in London compared with non-opportunity areas (Non-OA) in London. In (a) and (b), darker red colors correspond to higher levels of predicted change.

As we did not have suitable ground truth data for a quantitative evaluation, we manually labeled 1,449 image pairs from the years 2008 and 2018 according to stratified random sampling based on five value ranges from our model predictions. We used five classes for labeling, representing an ordinal scale of change between street-level panoramas: (1) minimal irrelevant change, (2) noticeable but irrelevant change, (3) minor urban change, (4) major urban change, and (5) anomalies in images (see Supplementary Information for detailed explanations). After labeling the image pairs according to our classification, we computed the empirical histogram of the cosine distances for each class. To have a basis for comparison, we also extracted representations and computed corresponding cosine distances from a baseline CNN architecture [69], which was only trained for classification on the ImageNet dataset [70] and has not been trained to learn urban representations.
This test aims to assess whether Street2Vec has indeed learned more suitable representations for capturing urban change, or if a simpler yet coherent visual representation would have sufficed to distinguish between varying levels of change. We found that the mean cosine distances from Street2Vec followed the expected order, successfully distinguishing between minimal, irrelevant, minor, and major change (Table 1). While there is overlap between classes (Fig. 4), these results demonstrate that our model learns visual features of relevant urban change, corresponding to semantics related to urban structures while disregarding (or putting less emphasis on) irrelevant visual differences. Street2Vec embeddings also outperformed generic features learned from the baseline CNN model. The mean distances from the baseline CNN model also followed the correct order, however, the cosine distance distribution did not show the desired spread which would allow to accurately separate change classes. One notable observation regarding Street2Vec embedding distributions is the emergence of two peaks for the "irrelevant change" and "minor urban change" classes in Fig. 4. While we cannot determine with certainty why these occur, this may be related to intra-label variances and our own biases during the labeling process. Finally, as expected, both models effectively identify anomalous images. #### Visualizing the neighbourhood clusters To further interpret the learned embeddings from Street2Vec, we visualized their spatial distribution using 10,000 randomly sampled street-level panoramas (taken at any point in time to get unbiased temporal and spatial coverage) based on their position in a Uniform Manifold Approximation and Projection (UMAP) [71] to two dimensions (see Methods). Similar colors in Fig. 5 can be interpreted to represent similar neighborhoods according to our model. Our coloring reveals interesting spatial patterns even though geographical coordinates were never explicitly given as a model input. In Fig. 5(b), the city center is clustered together in different shades of green. Light green colors gradually change to dark green, to red, to blue and magenta colors moving outward from the center towards the suburbs. The light blue points appear to follow London motorways. To provide an intuitive understanding of the information captured by UMAP's two embedding dimensions, Fig. 5(a) showcases sample images situated at the farthest extremes of the UMAP latent dimensions distribution. In the first (horizontal, x-axis) dimension, the three images with the lowest values come from residential areas with low-rise buildings (red points in the map), and the three images with the highest values are from roads and do not show any nearby settlements (light blue points in the map). Therefore, a possible interpretation for the first dimension of the UMAP could be "habitableness". In the second (vertical, y-axis) dimension, images with the lowest values come from the urban core near the center (light green points in the map), while the ones with the highest values are suburban-looking scenes with noticeable vegetation content (magenta points in the map). It seems likely that this dimension captures some form of "urbanization" in the images. 
However, such visual interpretations are qualitative and UMAP projections from a \begin{table} \begin{tabular}{l r r r r r} \hline \hline & Minimal & Irrelevant & Minor urban & Major urban & Anomalous \\ & Class 1 & Class 2 & Class 3 & Class 4 & Class 5 \\ \hline Street2Vec & 0.090 & 0.205 & 0.424 & 0.592 & 0.838 \\ Baseline & 0.151 & 0.201 & 0.228 & 0.275 & 0.685 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of mean cosine distances per class for embeddings from Street2Vec and the baseline CNN model pre-trained on ImageNet. high-dimensional space down to two dimensions are difficult to interpret as they are bound to lose information, whereas the overall modes of variations in the street-level images learned by Street2Vec are likely much more complex. ## Discussion To our knowledge, this is the first application of self-supervised deep learning methods to demonstrate successful use of temporal street-level images for measuring urban change. We have shown that street-level images can capture changes in the built environment without the need for manual labels. We also found that our models can distinguish between changes in housing from regeneration and renewal of existing homes that may have weaker visual signals (minor urban change), and larger housing development projects (major urban change). Our approach can be readily applied to existing street-level image datasets, which are already available worldwide and undergo periodic updates by commercial providers. While researchers currently face access restrictions, the demonstrated success of our proposed method has the potential to capture the attention of data owners. This could lead to cost-effective partnerships or integration into existing data pipelines, resulting in the creation of a comprehensive global dataset. Consequently, an invaluable tracking tool would be developed, facilitating the measurement of progress toward achieving universal access to adequate, safe, and affordable housing on a global scale. This tool would not only benefit local governments and countries but also contribute significantly to the attainment of SDGs. The study has several strengths. It used a publicly available large dataset of street-level images being consistently collected since 2007. This image dataset had substantial spatio-temporal coverage on an urban scale for the Greater London metropolitan area with sufficient power to investigate Figure 3: (a) Histogram of cosine distances between the embeddings of images in 2008 and 2018. (b) Example image pairs with increasing cosine distances between them. neighbourhood-level progress. The study employed our proposed Street2Vec adapted from the Barlow Twins [1]. This self-supervised learning technique has demonstrated its efficacy in pre-training expansive models for multiple computer vision tasks such as image classification, object detection, and image segmentation. Prior research on image-based urban measurement methods relied on the availability of high-quality labels, which are often lacking. In fact, our motivation for developing an image-based proxy to characterize urban change stemmed from the scarcity of ground truth data, even in data-rich cities of the developed world. The main novelty of our proposed measure of urban change is that it does not require labels, and only makes use of street-level images and its readily available metadata on acquisition year and location. 
Our study demonstrated the effectiveness of the proposed Street2Vec approach by quantitatively comparing the detected change in London neighborhoods designated by the city for potential new housing and job opportunities, with others. For example, we found that the housing policy and investments have been successful in initiating change in many OAs in London, such as King's Cross, while not all OAs experienced similar outcomes. This analytical capability is essential for informing local governments, particularly for tracking areas where existing incentives have fallen short in stimulating housing development, as well as for identifying organic changes that extend beyond policy-driven areas and are therefore challenging to monitor. Furthermore, our approach successfully identifies subtle urban change from the regeneration and the renewal of existing homes and neighborhoods. These changes may exhibit weaker visual signals, yet are crucial for cities like London with rich histories and aging housing stocks. In such cities, regeneration efforts may inadvertently lead to undesirable outcomes such as population displacement caused by processes of gentrification. A number of potential limitations could have influenced our results and could impact a wider adoption of our proposed method as a standalone tracking tool. First, the assumption that all changes in the housing stock will have visual signals captured by street images may not always hold because renewal projects such as energy efficiency improvements or increasing housing capacity within existing buildings, or differences in uses may not always result in visible changes in external Figure 4: Histogram of cosine distances per class, for our model (blue) and for the baseline (orange). Figure 5: (a) UMAP [71] projection space for 10,000 randomly sampled street-level images from London that have been processed by Street2Vec. We color the points in a circular manner according to their position in this space. Note that we omit axes and scales, as the absolute values of the UMAP embedding are meaningless and the plotting is purely qualitative. On the extremes of the two UMAP dimensions, we plot the three images corresponding to the minimal or maximal values, respectively. (b) The same points on the map of London. Similar colors mean that two data points are close according to the learned representations. views. Therefore, ideally, image-based tracking should be combined with other data sources to address such weaknesses. In addition, anomalous images as well as rare weather events such as snow in London influence change detection results that need to be taken into account for interpretations. Thereby, future research can focus on improving the discriminatory power of Street2Vec for these rare occurrences, but also improving the detection of specific urban change, using potentially weak labels (with minimal or cheap human intervention) or generative models to improve self-supervised learning. Street-level imagery is predominantly collected and controlled by major private sector players (e.g., Google, Baidu, Bing) providing extensive coverage globally. However, they are also increasingly imposing access restrictions, even for non-commercial uses intended for the public good. Cities can require or incentivize better access, support crowd-sourced initiatives, or use their own assets to enhance monitoring. 
The coverage of collected data may be biased towards areas with substantial urban change as the original purpose of street-level images was to provide up-to-date mapping services, but this is less problematic as the it is well aligned with our primary interest on change. Still, coverage may be problematic in some countries where there is high public scrutiny of data privacy (e.g., Germany) and restricted access in developing areas such as slums. While our approach is generalizable to most cities around the world, accessing data for evaluating local government performances may present greater challenges, as London's geospatial data initiatives are considered among the most comprehensive and advanced in the world. Finally, further research is needed to better understand how learned representations from Street2Vec could be used for other downstream tasks of critical importance for city governments such as tracking climate resilience, affordability, and displacement. ## Methods ### Image data We used a total of 15,335,000 images at 3,833,750 locations in London spanning the years from 2008 to 2021. We accessed the images using the Google Street View Application Programming Interface (API). We first created a 50m grid using the street network obtained from Open Street Maps (OSM) within the city boundary shape file for the Greater London Authority. We then used the API to retrieve the unique panorama ids (panoIDs) of images near each grid point acquired by Google. For each sampled panoID location, we used four images (of size 600x600 pixels) representing four orientations of a 360\({}^{\circ}\) panorama (0\({}^{\circ}\), 90\({}^{\circ}\), 180\({}^{\circ}\), 270\({}^{\circ}\)) to cover all directions within a view. For change detection, we used images from all 329,031 locations for which we had images from both 2008 and 2018. ### Street2Vec: Self-supervised learning from street-level images In our setting, we have a large set of geolocated images, for which the only information available is the acquisition year and location. When labels are available, it is common to directly train a fully supervised model approximating the function of interest. However, in the present study, we are interested in estimating the degree of urban change between two images from the same location taken at different years where we don't have ground truth (label) data. This problem setting is particularly well suited for a relatively recent paradigm in machine and deep learning, known as _self-supervised representation learning_ (SSL in short). The goal of SSL is to learn vector representations (i.e., _embeddings_) of inputs that can represent the relevant information in a condensed form, without the need for target labels. The learning problem is set up in a way that a model can be learned using standard supervised losses (e.g., cross-entropy, mean squared error, etc.) using surrogate labels generated from the data itself rather than from an external source (hence, self-supervised learning). Our goal is to learn representations of street-level images that can capture and discover structural urban change but are invariant to common but irrelevant variations such as lighting conditions, seasonality, or vehicles and people appearing in the images. To achieve this, we propose an adaptation of the Barlow Twins method from [1] to temporal street-level images: Street2Vec. 
In the original paper [1], the Barlow Twins method is applied to two different sets of content-preserving artificial distortions on images, such as cropping, color jittering, and horizontal flipping, among others. The model is trained by forcing it to learn the same representations for these different sets of modified images. This is semantically coherent since every batch of images is transformed to obtain two slightly different versions of the same images that do not alter the relevant content in them. Both distorted batches are fed through a ResNet-50 model [69], and a Multi-Layer Perceptron (MLP, also known as fully connected layers) projector network, consisting of three layers with batch normalization and rectified linear unit (ReLU) nonlinearities between them. The resulting two batches of embedding vectors \(Z^{A}\) and \(Z^{B}\) are first standardized to zero mean and unit standard deviation along their batch dimensions, and then multiplied among them to compute the cross-correlation matrix \(\mathcal{C}\) of the embedding feature dimensions, averaged over the batch dimension: \[\mathcal{L}_{\text{\it BT}}\triangleq\sum_{i}^{D}(1-\mathcal{C}_{ii})^{2}+ \lambda\sum_{i}^{D}\sum_{j\neq i}^{D}\mathcal{C}_{ij}^{2}\;, \tag{1}\] where \(D\) is the embedding feature dimension, \(\mathcal{C}\) is the cross-correlation matrix, and \(\lambda\) is a tunable hyperparameter controlling how much off-diagonal correlations are penalized. The cross-correlation matrix \(\mathcal{C}\) is defined as follows: \[\mathcal{C}_{ij}\triangleq\frac{\sum_{b}^{N}z_{b,i}^{A}z_{b,j}^{B}}{\sqrt{\sum _{b}^{N}\left(z_{b,i}^{A}\right)^{2}}\sqrt{\sum_{b}^{N}\left(z_{b,j}^{B}\right) ^{2}}}\;, \tag{2}\] where \(N\) is the batch dimension, and \(z_{b,i}^{A}\) and \(z_{b,j}^{B}\) are elements of the mean-centered embeddings \(Z^{A}\) and \(Z^{B}\). The loss function in Eq. (1) serves the purpose of learning image embeddings where the different dimensions are uncorrelated (i.e., each representing some "new" properties of the data that are unrelated to the others) but the entries of the same embedding dimension from the two batches are maximally correlated. The intuition behind this second part is that we want the embeddings to be invariant to the specific distortions applied to the input image. Through this method, the model is learning to represent relevant image contents in vectorized form. In our setting, instead of applying a set of predefined artificial distortions to the images, we take two images from the same location, captured in different years as input. That is, at each training step, we sample a new batch of street-level images from random locations and points in time and then sample the second batch of images (the "distorted" samples) from the same locations, but taken in a different year if at least two images from different years are available for the selected coordinates. In the rare case where we only have one image from a single year at a given coordinate location, we simply apply some small amounts of color jittering (i.e., small random changes in brightness, contrast, saturation, and hue) to the sampled image, to obtain a second, artificially distorted one. After that, we apply the Barlow Twins method as explained above, to learn aligned image embeddings with uncorrelated feature dimensions. See Fig. 1 for an overview. We name this approach Street2Vec, as we learn visual vector embeddings from street-level images. 
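To make the training objective concrete, the following is a minimal sketch (in PyTorch; ours, not the authors' released implementation) of the Barlow Twins loss of Eqs. (1)-(2) as applied to a batch of temporal image-pair embeddings; the function name and numerical details such as the stabilizing epsilon are illustrative choices.

```python
import torch

def barlow_twins_loss(z_a: torch.Tensor, z_b: torch.Tensor, lam: float = 0.005) -> torch.Tensor:
    """Barlow Twins objective of Eqs. (1)-(2) for two (N, D) batches of embeddings,
    here taken from the same locations photographed in different years (or from a
    color-jittered copy when only one year is available)."""
    n = z_a.shape[0]
    # Standardize each embedding dimension over the batch (zero mean, unit std).
    z_a = (z_a - z_a.mean(dim=0)) / (z_a.std(dim=0) + 1e-9)
    z_b = (z_b - z_b.mean(dim=0)) / (z_b.std(dim=0) + 1e-9)
    c = (z_a.T @ z_b) / n                                 # cross-correlation matrix C, shape (D, D)
    on_diag = torch.diagonal(c)
    invariance = ((1.0 - on_diag) ** 2).sum()             # pull C_ii towards 1
    redundancy = (c ** 2).sum() - (on_diag ** 2).sum()    # push C_ij (i != j) towards 0
    return invariance + lam * redundancy
```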
Our assumption is that on average, street-level images taken at two different time instants will have strong visual appearance variations representing changes that are not the focus of this study such as lighting conditions, seasonality, people, or cars, but no or only minimal change in urban structural elements. Of course, we cannot completely rule out any structural change between any two of those images and in fact, our primary interest is to identify locations where structural change is captured by street-level images. However, we expect that these cases occur much less frequently. Therefore, we posit that our model implicitly learns representations that are invariant to irrelevant change, but sensitive to urban structural elements, without labels explicitly highlighting such changes. We define irrelevant change to include lighting conditions, seasonal change in vegetation or clouds, snow, change in the view resulting from the relative position of the camera, and occlusion of built environment features by cars, vegetation, or individuals (see Supplementary Information for more detailed descriptions). We trained our model on a single GPU, performing one pass over all our data, which took about 30 hours to complete. Longer training did not result in noticeably improved model performance. To maximize the information content within each model input, we concatenate all four available street-level image orientations (north, east, south, west) into a single panoramic view. Each orientation of size 600x600 pixels is resized to 128x128 pixels, resulting in a size of 128x512 pixels when we concatenate all four orientations that we give as one input to our model. Because of memory limitations, the largest batch size we could use was 48. Moreover, we used an embedding dimensionality of 1024 and kept \(\lambda=0.005\), as in [1]. Once the model is learned and the training converges, we then project every image into the learned embedding space. To perform change detection, we compare single image embeddings at all locations of interest and extract a single summary statistic representing a notion of deviation or distance. The farther the embeddings are, the more likely the location represented by the pair of images is to have undergone structural urban changes. Conversely, the closer the embeddings are, the more likely the images are to either not have changed or to display only irrelevant change which the model has learned to become invariant to.

#### Measurement of change

Our main objective is to utilize the learned embeddings to detect structural changes in London neighborhoods captured by street-level imagery. To analyze our ability to perform this task, we selected two years, 2008 and 2018, that are spaced furthest apart among the years for which we had a considerable amount of image pairs from the same locations available. For every location where we have images for 2008 and 2018, we compute the cosine distance \((d_{\text{cos}}(\cdot,\cdot))\) between their Street2Vec embeddings as our change metric.
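As a concrete illustration of this change metric (formally defined in Eq. (3) below), a minimal NumPy sketch is given here; the array names and shapes are illustrative assumptions.

```python
import numpy as np

def change_metric(emb_2008: np.ndarray, emb_2018: np.ndarray) -> np.ndarray:
    """Per-location cosine distance between Street2Vec embeddings of the same
    locations in two years; inputs have shape (num_locations, D)."""
    num = np.sum(emb_2008 * emb_2018, axis=1)
    den = np.linalg.norm(emb_2008, axis=1) * np.linalg.norm(emb_2018, axis=1)
    return 1.0 - num / den   # near 0: unchanged; larger values: more structural change
```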
The cosine distance between two vectors \(\mathbf{x}\) and \(\mathbf{y}\) is defined as 1 minus the cosine similarity \((s_{\text{cos}}(\cdot,\cdot))\), as follows: \[d_{\text{cos}}(\mathbf{x},\mathbf{y})=1-s_{\text{cos}}(\mathbf{x},\mathbf{y}) =1-\frac{\mathbf{x}^{T}\mathbf{y}}{\|\mathbf{x}\|_{2}\|\mathbf{y}\|_{2}}\;, \tag{3}\] and ranges from 0 (the vectors \(\mathbf{x}\) and \(\mathbf{y}\) are perfectly collinear and codirectional, the angle between them is 0) to 2 (\(\mathbf{x}\) and \(\mathbf{y}\) are completely opposite and the angle between them is \(\pi\)). However, note that in our setting, the maximum cosine distances are around 1 (\(\mathbf{x}\) and \(\mathbf{y}\) are orthogonal). We used the cosine distance because we have relatively high-dimensional embeddings where other distance metrics like Euclidean distances would be very sensitive to large deviations in only a few dimensions, even if the two embeddings would be very similar in most other dimensions. Since we assume each of our uncorrelated embedding dimensions to capture (equally) important information, we prefer capturing some change in many of them to capturing a lot of change in only a few. #### Clustering neighborhoods We used a non-linear dimension reduction technique like UMAP instead of a linear dimension reduction technique like Principal Component Analysis (PCA) [72]. In the Street2Vec representation learning method, we minimize a learning objective that, if perfectly optimized, decorrelates all feature dimensions of the embeddings. In such case, PCA would not be able to find a projection of the data that summarizes any meaningful information (in terms of variance) and it would reduce to a simple rotation of the embeddings. Even though perfect decorrelation will never be achieved in practice, it is still substantial enough that the eigenvalues of the first two principal components are only marginally larger than the reciprocal of the embedding dimensionality. For this reason, we employ the nonlinear manifold learning technique UMAP, which is able to find more complex relationships on the manifold of the learned representations of our geolocated images while meaningfully projecting into a lower-dimensional space (2-dimensional in this case). In that space, two neighboring points have similar embeddings, while two far apart points likely have strongly different ones. Therefore, for each street-level planar panorama, two close-by data points represent similar urban structures, according to our model. ## Data and code availability Datasets used in this paper are publicly available and sources are provided in the main manuscript. Manual labeling The code for Street2Vec training and creating the change metric is available at [https://gitlab.renkulab.io/deeplnafrica/Street2Vec](https://gitlab.renkulab.io/deeplnafrica/Street2Vec) ## Competing interests The authors declare no competing interests.
2309.06628
Epistemic Modeling Uncertainty of Rapid Neural Network Ensembles for Adaptive Learning
Emulator embedded neural networks, which are a type of physics informed neural network, leverage multi-fidelity data sources for efficient design exploration of aerospace engineering systems. Multiple realizations of the neural network models are trained with different random initializations. The ensemble of model realizations is used to assess epistemic modeling uncertainty caused due to lack of training samples. This uncertainty estimation is crucial information for successful goal-oriented adaptive learning in an aerospace system design exploration. However, the costs of training the ensemble models often become prohibitive and pose a computational challenge, especially when the models are not trained in parallel during adaptive learning. In this work, a new type of emulator embedded neural network is presented using the rapid neural network paradigm. Unlike the conventional neural network training that optimizes the weights and biases of all the network layers by using gradient-based backpropagation, rapid neural network training adjusts only the last layer connection weights by applying a linear regression technique. It is found that the proposed emulator embedded neural network trains near-instantaneously, typically without loss of prediction accuracy. The proposed method is demonstrated on multiple analytical examples, as well as an aerospace flight parameter study of a generic hypersonic vehicle.
Atticus Beachy, Harok Bae, Jose Camberos, Ramana Grandhi
2023-09-12T22:34:34Z
http://arxiv.org/abs/2309.06628v1
# Epistemic Modeling Uncertainty of Rapid Neural Network Ensembles for Adaptive Learning ###### Abstract **Emulator embedded neural networks, which are a type of physics informed neural network, leverage multi-fidelity data sources for efficient design exploration of aerospace engineering systems. Multiple realizations of the neural network models are trained with different random initializations. The ensemble of model realizations is used to assess epistemic modeling uncertainty caused due to lack of training samples. This uncertainty estimation is crucial information for successful goal-oriented adaptive learning in an aerospace system design exploration. However, the costs of training the ensemble models often become prohibitive and pose a computational challenge, especially when the models are not trained in parallel during adaptive learning. In this work, a new type of emulator embedded neural network is presented using the rapid neural network paradigm. Unlike the conventional neural network training that optimizes the weights and biases of all the network layers by using gradient-based backpropagation, rapid neural network training adjusts only the last layer connection weights by applying a linear regression technique. It is found that the proposed emulator embedded neural network trains near-instantaneously, typically without loss of prediction accuracy. The proposed method is demonstrated on multiple analytical examples, as well as an aerospace flight parameter study of a generic hypersonic vehicle.** _Keywords:_ Machine Learning, Neural Network, Multifidelity, Active Learning, Aircraft Design ## 1 Introduction Design exploration for high-performance aircraft requires computational modeling of multiple unconventional design configurations. Models must capture aerodynamics, structure, and propulsion, as well as interactions between these disciplines. A common design exploration technique is to sample the expensive physics-based models in a design of experiments and then use the sample data to train an inexpensive meta-model. Conventional metamodels include regression models such as ridge regression [1], Lasso [2], Polynomial Chaos Expansion [3], Gaussian process regression (GPR) or kriging [4, 5, 6], and neural networks [7]. However, many simulation evaluations are needed for the design of experiments because of the large number of independent parameters for each design and the complex responses resulting from interactions across multiple disciplines. Because high-fidelity simulations are expensive, the total computational costs can easily become computationally intractable. Computational cost reduction is often achieved using Multi-Fidelity Methods (MFM) and Active Learning (AL). MFMs work by supplementing High-Fidelity (HF) simulations with less accurate but inexpensive Low-Fidelity (LF) simulations. AL involves intelligent generation of training data in an iterative process: rebuilding the metamodel after each HF sample is added, and then using the metamodel to determine the best HF sample to add next. When performing Multi-Fidelity (MF) modeling [8, 9], the usual strategy is to generate many affordable Low-Fidelity (LF) samples to capture the design space and correct them using a small number of expensive High-Fidelity (HF) samples. MF modeling is a more cost-effective way of training an accurate surrogate model than using a single fidelity, which will suffer from sparseness with only HF samples and inaccuracy with only LF samples. 
One method for generating LF data is Model Order Reduction (MOR). Nachar et al. [10] used this technique to generate LF data for a multi-fidelity kriging model. Reduced order models [11, 12, 13] are constructed by approximating the low-dimensional manifold on which the solutions lie. The manifold can be approximated linearly using Proper Orthogonal Decomposition, or nonlinearly using various methods such as an autoencoder neural network [14]. This approximation enables vastly decreasing the model degrees of freedom, which in turn reduces the computational costs of a simulation. Typically, reduced order models are used to give final predictions instead of being used as LF models in a multi-fidelity surrogate. However, this limits the aggressiveness with which the model can be reduced without introducing unacceptable errors into the final predictions. Even after very aggressive reduction, reduced order models can still provide valuable information about trends when used as LF models. LF models can also be constructed by coarsening the mesh, simplifying the physics, or utilizing historical data from a similar problem. When performing AL, location-specific epistemic uncertainty information is critical for determining where to add additional samples. Kriging is popular in large part because it returns this uncertainty. Cokriging [15, 16] is a popular type of MFM which extends kriging to multiple fidelities. It typically performs well but has difficulties if the LF function is not well correlated with the HF function. It also cannot incorporate more than a single LF function unless they fall into a strict hierarchy known beforehand. As an alternative to cokriging, a localized-Galerkin kriging approach has been developed which can combine multiple nonhierarchical LF functions and enable adaptive learning [17, 18]. However, kriging and kriging-based methods have fundamental numerical limitations. Fitting a kriging model requires optimization of the hyperparameters \(\theta\), where a different \(\theta\) parameter exists for each dimension. Additionally, each evaluation of the loss function requires inverting the covariance matrix, an operation with computational complexity on the order of \(O(d^{3})\), where \(d\) is the number of dimensions. This makes kriging-based methods poorly suited for modeling functions above a couple of dozen dimensions, especially if the number of training samples is high. In this work, we use Neural Networks (NNs) to avoid the limitations of kriging. A common NN architecture, feed forward fully-connected, is shown in Fig. 1. A neural network is a structure composed of layers of neurons with weighted connections to the neurons of other layers. Each neuron takes as input the weighted sum of neurons in the previous layer, plus the neuron's bias term. This input is then transformed by the neuron's activation function, and output in turn to the neurons of the next layer. Figure 1: Illustration of a fully connected multi-layer NN. A single hidden layer with sufficiently many neurons will enable a NN to model any continuous function. Training a NN involves adjusting the many weights and biases to output accurate predictions on training data. Typically, weight optimization is performed using a gradient descent algorithm such as Adam, with the gradients calculated via backpropagation. The more weights and biases a NN has, the higher the degree of flexibility. This allows NNs to function as universal approximators. 
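As a minimal illustration of the forward pass just described, the following sketch (ours, in NumPy, not tied to any particular implementation) computes the output of a fully connected feed-forward network:

```python
import numpy as np

def forward_pass(x, weights, biases, activation=np.tanh):
    """Fully connected feed-forward network: each layer takes the weighted sum of
    the previous layer's outputs plus a bias, then applies the activation function."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = activation(h @ W + b)           # hidden layers
    return h @ weights[-1] + biases[-1]     # linear output layer
```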
Specifically, the Universal Approximation Theorem (UAT) states that a feed-forward neural network with a single hidden layer can approximate any continuous function to any degree of accuracy over a bounded region if sufficiently many neurons with nonpolynomial activation functions are included in the hidden layer [19, 20, 21]. However, the UAT does not put upper bounds on the error of a neural network with a limited number of neurons. Therefore, in practice, the number of layers and neurons in each layer is typically selected based on experience or trial and error. Unfortunately, using conventional NNs for aerospace engineering design studies faces several practical drawbacks. In aerospace applications, physical experiments or high-fidelity simulations are typically expensive, meaning HF data will be sparse. Therefore, design studies often require extrapolating beyond the bounds of the HF data, which interpolators such as NNs are poorly suited to. One approach for mitigating these practical problems is to use Physics Informed Neural Networks (PINNs) [22, 23]. These combine physics information with a NN model. Typically, the physics information takes the form of differential equations, such as governing equations and boundary conditions. PINNs have various structures, such as NNs modifying parameters of the differential equations to improve accuracy, or NNs adding corrections and detail on top of a simplified model that does not capture the full physics. The physics information constrains the model, alleviating overfitting and increasing the accuracy of extrapolations beyond the HF data points. PINNs can also use physics models other than differential equations, such as first-principle models, data-driven models, and expert knowledge models. PINNs can also include multiple fidelities of data, for instance, by training a NN on LF training data and using the output as an input to the PINN trained on HF data [24, 25, 26]. Recently, the authors proposed the Emulator Embedded Neural Network (E2NN) [27, 28] a generic framework for combining any mix of physics-based models, experimental data, and any other information sources. Instead of using an LF model as an input to the main NN, E2NN embeds emulator models into its hidden layer neurons intrusively. A neuron with an LF model embedded into it is called an emulator. The E2NN is trained to transform and mingle the information from multiple emulators within the NN by finding the optimal connection weights and biases. This has an effect like that of a standard PINN architecture, reducing overfitting and enabling accurate extrapolations beyond sparse HF data points. E2NN performed well in the tests of the reference papers, handling non-stationary function behaviors, capturing a high dimensional Rosenbrock function with low-quality LF models, and successfully modeling stress in a Generic Hypersonic Vehicle (GHV) engineering example. To enable AL, it is necessary to capture epistemic prediction uncertainty for a data acquisition metric. Using a Bayesian NN [29, 30] is one option for obtaining epistemic learning uncertainty. In a Bayesian NN, the weights are assigned probability distributions rather than fixed values. Therefore, the prediction output is not a point estimate but a Gaussian distribution. Bayesian NNs are trained through backpropagation, but they require significantly higher computational costs than conventional NNs to optimize the probability density distribution of weights and biases. 
Ensemble methods [31, 32] combine multiple models to improve accuracy and provide uncertainty information. Accuracy improves more when the errors of the models are less correlated. In the extreme case where \(M\) models have equal and entirely uncorrelated errors, averaging the models decreases error by a factor of \(1/M\)[33]. Error correlation decreases when models are less similar, which can be achieved by training on different subsets of data or using models with different underlying assumptions [34]. Uteva et al. [35] tested an active learning scheme of sampling where two different GPR models disagreed the most. Because the GPR models were trained on different sets of data, they had different \(\theta\)-hyperparameters and thus different behavior. This method outperformed sampling the location of maximum predicted variance of a single GPR model. Lin et al. [36] combined two NNs with different architectures into an ensemble, and added adaptive samples at the locations of maximum disagreement. Christiano et al. [37] combined three neural networks in an ensemble and added additional training samples where the variance of the predictions was maximized. While existing ensemble methods can output a location of maximum variance for adaptive learning, they do not output a predictive probability distribution like GPR does. Such a predictive distribution is highly useful, offering additional insight into the model and enabling the use of probabilistic acquisition functions such as Expected Improvement. In this work, we present a formal statistical treatment to extract such a probability distribution. Specifically, we estimate the epistemic learning uncertainty by combining multiple E2NN models and calculating a Bayesian predictive distribution. Training a single E2NN model typically requires several minutes to reach convergence, which means updating the entire ensemble can be time-consuming. Therefore, a method combining Neural Networks and linear regression is explored to alleviate the cost of neural network training. Early exploration of the method was performed by Schmidt et al. [38]. The method combines two basic steps: first, creating a neural network with random weights and biases, and second, setting the last layer of weights using a linear regression technique such as ordinary least squares or ridge regression. These steps are far cheaper than backpropagation-based neural network training, enabling accelerated retraining of the E2NN ensemble during active learning. The next section, Section 2, contains a brief review of E2NN. Section 3 covers the proposed approach of adaptive and rapid learning, which is followed by Section 4 showing fundamental mathematical demonstrations and a practical aerospace engineering example involving predicting the aerodynamic performance of the generic hypersonic vehicle. ## 2 Review of Emulator Embedded Neural Networks The E2NN has low fidelity models, called emulators, embedded in specific neurons of the neural network's architecture. The emulators contribute to regularization and preconditioning for increased accuracy. The emulators can take the form of any low-cost information source, such as reduced/decomposed physics-based models, legacy equations, data from a related problem, models with coarsened mesh or missing physics, etc. 
An emulator embedded in the last hidden layer can only be scaled before being added to the response, while an emulator embedded in the second-to-last hidden layer can be transformed through any functional mapping (by the Universal Approximation Theorem [19, 20, 21]) before being added to the response. The simplest solution when selecting the architecture is to embed the emulator in all hidden layers and allow the E2NN training to select which instances are useful. The flexibility of E2NN in selecting between an arbitrary number of LF models, each embedded in multiple hidden layers, enables wide applicability to problems with LF models of high, low, or unknown accuracy. The architecture of E2NN is illustrated in Fig. 2. Figure 2: Architecture of an Emulator Embedded Neural Network. ## 3 Proposed Approach: Adaptive Learning with Non-Deterministic Emulator Embedded Neural Network The main technical contribution of this work is a novel method for combining predictions of an ensemble of models to approximate epistemic modeling uncertainty. This uncertainty information lowers training data costs by enabling Adaptive Learning (AL). For optimization problems, the Expected Improvement (EI) metric identifies adaptive sampling locations of maximum information gain. The EI assessment requires a probability density function, which is estimated by training an ensemble of E2NN models and estimating the underlying distribution of predictions. Greater disagreement among individual E2NN realizations is typically due to a lack of training samples and implies greater epistemic uncertainty. A second contribution involved speeding up E2NN training hundreds of times using Rapid Neural Network (RaNN) training, allowing fast model updating during AL iterations. Ensemble uncertainty estimation is discussed in Subsection 3.1, and AL using EI is explained in Subsection 3.2 Finally, the RaNN methodology is summarized in Subsection 3.3, and practical considerations for avoiding numerical errors are discussed in Subsection 3.4. ### Proposed Approach for Assessing Epistemic Modeling Uncertainty of E2NN The epistemic modeling uncertainty is estimated using an ensemble of E2NN models. The E2NN model predictions agree at the training points, but the predictions between points depend on the random initializations of the weights and biases. In other words, the E2NN model predictions are samples drawn from a stochastic prediction function in the design space. A higher magnitude of disagreement implies greater epistemic modeling uncertainty. This suggests a useful assumption: model the epistemic uncertainty as the aleatoric uncertainty of the E2NN model predictions. Specifically, assume the two pdfs are approximately equal at each prediction point \(x\): \[p(y_{true}(x))\approx p(y_{E2NN}(x)) \tag{1}\] However, finding the exact aleatoric distribution of \(y_{E2NN}(x)\) requires training an infinite number of E2NN models. Instead, we use a finite number of model predictions at a point \(x\) as data \(D(x)\) to construct a posterior predictive distribution (ppd) for the E2NN predictions. This ppd is then used as an estimate of the epistemic pdf of the true function. \[p(y_{true}(x))\approx p(y_{E2NN}(x)|D(x)) \tag{2}\] Finding the ppd requires introducing a second assumption: The aleatoric E2NN predictions are normally distributed at each point \(x\) \[y_{E2NN}(x)\sim N(\mu,\sigma) \tag{3}\] The process of combining multiple E2NN model predictions to estimate the epistemic uncertainty is illustrated in Fig. 3. 
For the general case of finding a posterior predictive distribution for a normal distribution from which we have \(n\) iid data points \(D\) but no additional information, we include a proof in Appendix A. First, we need a prior and a likelihood. The prior for the mean and variance is a normal-inverse-chi-squared distribution (\(NI\chi^{2}\)) \[p(\mu,\sigma^{2})=NI\chi^{2}(\mu_{0},\kappa_{0},\nu_{0},\sigma_{0}^{2})=N(\mu|\mu_{0},\sigma^{2}/\kappa_{0})\cdot\chi^{-2}(\sigma^{2}|\nu_{0},\sigma_{0}^{2}) \tag{4}\] Here \(\mu_{0}\) is the prior mean and \(\kappa_{0}\) is the strength of the prior mean, while \(\sigma_{0}^{2}\) is the prior variance and \(\nu_{0}\) is the strength of the prior variance. The likelihood of the \(n\) iid data points is the product of the likelihoods of the individual data points. By Bayes' rule, the joint posterior distribution for \(\mu\) and \(\sigma^{2}\) is proportional to the prior times the likelihood. Selecting the constants \(\kappa_{0}\), \(\sigma_{0}\) and \(\nu_{0}\) for an uninformative prior and integrating to marginalize out \(\mu\) and \(\sigma\) in the posterior yields the posterior predictive distribution. Given \(n\) iid samples, this is given by a t-distribution \[p(y|D)=t_{n-1}\left(\bar{y},\frac{1+n}{n}s^{2}\right) \tag{5}\] where \(\bar{y}\) is the sample mean and \(s^{2}\) is the sample variance. \[s^{2}=\frac{1}{n-1}\sum_{i}(y_{i}-\bar{y})^{2} \tag{6}\] The final pdf of \(y\) given the data \(D\) is a t-distribution instead of a normal distribution because it combines both Bayesian epistemic and aleatoric uncertainty. The epistemic component of the uncertainty can be reduced by increasing the number of samples. As the number of samples or ensemble models \(n\) approaches \(\infty\), the pdf of Eq. 5 will approach a normal distribution with the correct mean and standard deviation. For \(n=2\) samples (\(\nu=1\)) this yields a Cauchy, or Lorentzian, distribution, which has tails so heavy that the variance \(\sigma^{2}\) and mean \(\mu\) are undefined [39]. For \(n=3\) samples (\(\nu=2\)) the mean \(\mu\) is defined but the variance \(\sigma^{2}\) is infinite. Therefore, more than 3 ensemble models should always be used to estimate the mean when an uninformative prior is used. Substituting Eq. 5 into Eq. 1 yields the final estimate of the epistemic modeling uncertainty distribution, \[p(y_{true}(x))=t_{n-1}\left(\bar{y}_{E2NN}(x),\frac{1+n}{n}\cdot s_{E2NN}(x)^{2}\right) \tag{7}\] where \(\bar{y}_{E2NN}(x)\) is the sample mean prediction of the \(n\) E2NN models in the ensemble, \[\bar{y}_{E2NN}(x)=\frac{1}{n}\sum_{i}y_{E2NN_{i}}(x) \tag{8}\] and \(s_{E2NN}(x)^{2}\) is the sample variance of E2NN model predictions in the ensemble. \[s_{E2NN}(x)^{2}=\frac{1}{n-1}\sum_{i}(y_{E2NN_{i}}(x)-\bar{y}_{E2NN}(x))^{2} \tag{9}\] Use of an uninformative prior results in conservative and robust estimation of the epistemic modeling uncertainty.

Figure 3: Illustration of using multiple E2NN model realizations to estimate the underlying aleatoric probability distribution. This estimate is used as an approximation of the epistemic modeling uncertainty.
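A minimal NumPy/SciPy sketch (ours, not the authors' code) of this ensemble-to-distribution step, Eqs. (7)-(9), together with the Student's t Expected Improvement acquisition derived in the next subsection (Eqs. (13)-(14)):

```python
import numpy as np
from scipy import stats

def predictive_t(y_ens: np.ndarray):
    """Posterior predictive t-distribution (Eqs. 7-9) from the n ensemble
    predictions y_ens at a single input point."""
    n = y_ens.size
    y_bar = y_ens.mean()                      # Eq. (8): ensemble mean
    s2 = y_ens.var(ddof=1)                    # Eq. (9): sample variance
    scale = np.sqrt((1.0 + n) / n * s2)       # scale factor in Eq. (7)
    return y_bar, scale, n - 1                # mean, scale, degrees of freedom

def expected_improvement_t(f_min, mu, sigma, nu):
    """Expected Improvement under a Student's t predictive distribution
    (Eqs. 13-14); requires nu > 1, i.e., at least three ensemble members."""
    z = (f_min - mu) / sigma
    return (f_min - mu) * stats.t.cdf(z, df=nu) + \
        nu / (nu - 1.0) * (1.0 + z ** 2 / nu) * sigma * stats.t.pdf(z, df=nu)
```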
### Adaptive Learning (AL) Using Expected Improvement

Adaptive learning allows data collection to be focused in areas of interest and areas with high uncertainty, reducing the number of samples required for design exploration and alleviating computational costs. Typically, an acquisition function is used to measure the value of information gained from adding data at a new location. The overall process requires a few simple steps:

1. Generate HF responses from an initial design of experiments.
2. Use sample data to build an ensemble of E2NN models.
3. Maximize the acquisition function using an optimization technique.
4. If the maximum acquisition value is above tolerance, add a training sample at the location and go to step 2. Otherwise, stop because the optimization has converged.

The criteria for where to add additional data depend on the design exploration goals. A common goal is global optimization. Global minimization seeks to find the minimum of a function in \(d\)-dimensional space. \[x_{opt}=\operatorname*{argmin}_{x\in\mathbb{R}^{d}}y(x) \tag{10}\] For this objective, Expected Improvement (EI) is applicable [4, 18]. Informally, EI at a design point is the amount by which the design will, on average, improve over the current optimum. Formally, this is calculated by integrating the product of the degree and likelihood of every level of improvement. Likelihoods are given by the predictive probability distribution. Any new sample worse than the current optimum yields an improvement of 0. The general expression for EI is given by Eq. 11 and illustrated in Fig. 4. \[EI(x)=\int_{-\infty}^{\infty}\mathrm{pdf}(y)\cdot\max(y_{current\ opt}-y,0)\cdot dy \tag{11}\] The EI value for a Gaussian predictive probability distribution is expressed as the closed-form expression, \[EI(x)=\left(f_{min}-\hat{y}(x)\right)\cdot\Phi\left(\frac{f_{min}-\hat{y}(x)}{\sigma_{z}(x)}\right)+\sigma_{z}(x)\cdot\phi\left(\frac{f_{min}-\hat{y}(x)}{\sigma_{z}(x)}\right) \tag{12}\] where \(\phi(\cdot)\) and \(\Phi(\cdot)\) are the pdf and cdf of the standard normal distribution, respectively; \(\hat{y}(x)\) and \(\sigma_{z}(x)\) are the mean and standard deviation of the predictive probability distribution, respectively; and \(f_{min}\) is the current optimum. The current optimum can be defined as either the best sample point found so far, or the best mean prediction of the current surrogate. We use the former definition. The two definitions approach each other as the adaptive learning converges on the optimum.

Figure 4: Illustration of Expected Improvement calculation for adaptive learning.

Unlike a kriging model, an E2NN ensemble returns a Student's t-distribution instead of a Gaussian distribution. The resulting formulation of EI in this case is [40] \[E[I(x)]=(f_{min}-\mu)\Phi_{t}(z)+\frac{\nu}{\nu-1}\left(1+\frac{z^{2}}{\nu}\right)\sigma\phi_{t}(z) \tag{13}\] where the t-score of the best point found so far is \[z=\frac{f_{min}-\mu}{\sigma} \tag{14}\] and \(\phi_{t}(\cdot)\) and \(\Phi_{t}(\cdot)\) are the pdf and cdf of the standard Student's t-distribution, respectively. Also, \(\mu\), \(\sigma\), and \(\nu\) are the mean, scale factor, and degrees of freedom of the predictive t-distribution.

### Training of E2NNs As Rapid Neural Networks

Typically, NNs are trained using auto-differentiation and backpropagation. The main problem in NN training is saddle points, which look like local minima because all gradients are zero and the objective function is increasing along almost all direction vectors. However, in a tiny fraction of possible directions the objective function decreases. If a NN has millions of weights, determining which direction to follow to escape from the saddle point is difficult. Optimizers such as Adam use momentum and other techniques to reduce the chance of getting stuck at a saddle point.
However, a single E2NN model still takes significant computational time to reach convergence, requiring minutes to hours for a moderately sized neural network. Depending on the number of samples and the dimensionality of the problem, training an ensemble of E2NNs for adaptive learning will introduce significant computational costs. To increase speed, the E2NN models can be trained as Rapid Neural Networks (RaNNs). Essentially, this involves initializing a neural network with random weights and biases in all layers and then training only the last layer connections. The last layer's weights and bias are trained by formulating and solving a linear regression problem such as ridge regression, skipping the iterative training process entirely. This accelerates training multiple orders of magnitude. These models are sometimes referred to as extreme learning machines, a term coined by Dr. Guang-Bin Huang, although the idea existed previously. An early example of RaNN by Schmidt et al. in 1992 utilized a feed-forward network with one hidden layer [38]. The weights connecting the input to the first layer were set randomly, and the weights connecting the hidden layer to the output were computed by minimizing the sum of squared errors. The optimal weights are analytically computable using standard matrix operations, resulting in very fast training. Saunders et al. [41] used ridge regression instead of least squares regression when computing the weights connecting the hidden layer to the output. Huang et al. later developed equivalent methods [42, 43], and demonstrated that like conventional NNs, RaNNs are universal approximators [44], with the ability to capture any continuous function over a bounded region to arbitrary accuracy if a sufficient number of neurons exist in the hidden layer. However, despite being universal approximators, RaNNs require many more hidden layer neurons to accurately approximate a complex function than NNs trained with backpropagation and gradient descent. This is because the hidden layer neurons are essentially functions that are linearly combined to yield the NN prediction. Backpropagation intelligently selects these functions, while RaNN relies on randomly chosen functions and therefore requires more of them to construct a good fit. If fewer neurons and higher robustness are desired for the final model, an ensemble of E2NN models can be trained using backpropagation after the adaptive learning has converged. In this work, we propose applying the random initialization and linear regression techniques to E2NN models instead of standard NNs. Multiple realizations of E2NN with RaNN training are combined into an ensemble, enabling uncertainty estimation and adaptive learning. The proposed framework of adaptive learning is demonstrated in multiple example problems in Section 4. ### Practical Considerations for Avoiding Large Numerical Errors Setting the last layer weights using linear regression sometimes causes numerical issues when capturing highly nonlinear functions, even when enough neurons are included. If ridge regression or LASSO is used, the E2NN model will not interpolate the training data points. If no regularization is used, the weights become very large to force interpolation. Large positive values are added to large negative values, resulting in round-off error of the E2NN prediction. The resulting fit is not smooth, but jitters up and down randomly. Numerical stability can be improved by using a Fourier activation function such as \(\sin(x)\). 
This is reminiscent of a Fourier series, which can capture even non-continuous function to arbitrary accuracy. In fact, a NN with Fourier activation functions and a single hidden layer with \(n\) neurons can capture the first \(n\) terms of a Fourier series. Each neuron computes a different term of the Fourier Series, with the first layer of weights controlling frequency, the bias terms controlling offset, and the second layer weights controlling amplitudes. However, when using rapid training only the last layer weights are optimized. As points are added to a highly nonlinear function, interpolation becomes more difficult and numerical instability is introduced despite the Fourier activation. This is counteracted by increasing the frequency of the Fourier activation function, which enables more rapid changes of the fit. For smooth or benign functions, the Swish activation function tends to outperform Fourier. Therefore, we include some E2NN models using Swish and some using Fourier within the ensemble. Any model with any weights of magnitude above the tolerance of 100 are considered unstable and dropped from the ensemble. Additionally, any model with NRMSE on the training data above the tolerance of 0.001 is dropped from the ensemble, where NRMSE is defined as \[NRMSE=\sqrt{\frac{\sum_{i=1}^{N}(\hat{y}_{i}-Y_{i})^{2}}{\sum_{i=1}^{N}(\bar{ Y}-Y_{i})^{2}}} \tag{15}\] for predictions \(\hat{y}_{i}\) and \(N\) training samples \(Y_{i}\). If over half of the Fourier models are dropped from the ensemble, all Fourier models have their frequencies increased and are retrained. These changes eliminate noisy and unstable models from the ensemble, as well as modifying models to remain stable as new points are added to the training data. ## 4 Numerical Experiments In aerospace applications, the costs of HF sample generation, i.e., computational fluid dynamics simulation, aeroelasticity analysis, coupled aerothermal analysis, etc., are typically far higher than the costs of generating LF samples and training NN models. Therefore, in the following examples, we compare the cost of various prediction models in terms of the number of HF samples required, rather than the computer wall-clock or GPU time. We assume that enough LF samples are collected to train an accurate metamodel, which is used to cheaply compute the emulator activations whenever the neural network makes predictions. In the following examples, we use a fully connected feed-forward neural network architecture. All LF functions are embedded as emulators in all hidden layers. The input variables are scaled to \([-1,1]\). The weights are initialized using the Glorot normal distribution. Biases are initialized differently for Swish and Fourier activation functions. Fourier biases are initialized uniformly between \([0,2\pi]\) to set the functions to a random phase or offset, while Swish biases are initialized uniformly on the region \([-4,4]\). In total, each ensemble contains 16 E2NN models with a variety of architectures and activation functions. Identical models make the same assumptions about underlying function behavior when deciding how to interpolate between points. If these assumptions are incorrect, the error will affect the whole ensemble. Therefore, including dissimilar models results in more robust predictions. Four activation functions and two different architectures yield 8 unique NN models. Each unique model is included twice for a total of 16 E2NN models in the ensemble. 
The four activation functions are \(\text{swish}(x)\), \(\sin(\text{scale }x)\), \(\sin(1.1\text{ }scale\ x)\), and \(\sin(1.2\text{ }scale\ x)\). The two architectures include a small network and a large network. The small network has a single hidden layer with \(2n\) neurons, where \(n\) is the number of training samples. This means the number of neurons is dynamically increased as new training samples are added. The large network has two hidden layers, where the first hidden layer has 200 neurons, and the second hidden layer has 5000 neurons. Having most neurons in the second hidden layer enables more of the NN weights to be adjusted by linear regression. The large and small NNs use different scale terms for the Fourier activation functions. The large NN scale term is increased whenever more than half the large Fourier NNs have bad fits, and the small NN scale term is increased whenever over half the small Fourier NNs have bad fits. Because numerical instability is already corrected for, we do not use ridge regression. Instead, we perform unregularized linear regression using the Moore-Penrose inverse with a numerically stabilized \(\Sigma\) term. ### One-dimensional analytic example with a linearly deviated LF model An optimization problem with the following form is considered. \[x_{opt}=\operatorname*{argmin}_{x\in[0,1]}y_{HF}(x) \tag{16}\] Here \(x\) is a design variable on the interval \([0,1]\). The high-fidelity function \(y_{HF}(x)\) and its low-fidelity counterpart \(y_{LF}(x)\) are given in Eqs. 17 and 18. These functions have been used previously in the literature when discussing MF modeling methods [17, 45]. \[y_{HF}(x)=(6x-2)^{2}\sin(12x-4) \tag{17}\] \[y_{LF}(x)=0.5y_{HF}(x)+10(x-0.5)-5 \tag{18}\] As shown in Fig. 4(a), the initial fit uses three HF samples at \(x=[0,0.5,1]\) and an LF function which is linearly deviated from the HF function. The 16 E2NN models used in the ensemble are shown in Fig. 4(b). Three ensemble models are outliers, significantly overestimating the function value near the optimum. Two of these models nearly overlap, looking like a single model. All three of these inferior models are small NNs with Fourier activation functions. The mean and 95% probability range of the predictive t-distribution are both shown in Fig. 5(a). From this t-distribution, the Expected Improvement is calculated in Fig. 5(b). A new sample is added at the location of maximum EI. The true optimum is \(x_{opt}=0.7572\), \(y_{opt}=-6.0207\). As shown in Fig. 6(a), the first adaptive sample at \(x=0.7545\) lands very near the optimum, and the retrained ensemble's mean prediction is highly accurate. Figure 5: Initial problem and fitting of the E2NN model. After the first adaptive sample, the maximum EI is still above tolerance. Therefore, an additional sample is added as shown in Fig. 6(b). The second adaptive sample at \(x=0.7571\) is only \(10^{-4}\) from the true optimum. After the second adaptive sample, the maximum EI value falls below tolerance, and the adaptive sampling converges. ### Two-dimensional analytical example The proposed ensemble method is compared with the popular kriging method for minimization of the following two-dimensional function. \[y_{HF}(x_{1},x_{2})=\sin(21(x_{1}-0.9)^{4})\cos(2(x_{1}-0.9))+(x_{1}-0.7)/2+2*x_ {2}^{2}\sin(x_{1}x_{2}) \tag{19}\] This nonstationary test function was introduced in [46, 47][49, 50]. The kriging method uses only HF training samples during optimization, but E2NN makes use of the following LF function. 
\[y_{LF}(x_{1},x_{2})=\frac{y_{HF}(x_{1},x_{2})-2+x_{1}+x_{2}}{1+0.25x_{1}+0.5x_{2}} \tag{20}\] Figure 6: Initial model and expected improvement. Figure 7: Iterative model as adaptive samples are added. The independent variables \(x_{1}\) and \(x_{2}\) are constrained to the intervals \(x_{1}\in[0.05,1.05]\), \(x_{2}\in[0,1]\). The LF function exhibits nonlinear deviation from the HF function as shown in Fig. 8. Eight training samples are selected to build the initial model using Latin hypercube sampling. The initial E2NN ensemble prediction is shown in Fig. 8(a). The resulting EI is shown in Fig. 8(b), and the ensemble prediction after a new sample is added is shown in Fig. 8(c). The initial fit is excellent, with only a small difference between the mean prediction and HF function. After the first adaptive sample is added, the optimum is accurately pinpointed. To compare the performance of adaptive learning with single fidelity kriging, EGO with a kriging model is run with the same initial set of 8 points. The best kriging sample is shown in Fig. 9(a). After adding a sample near the location of the optimum, the kriging model still does not capture the underlying trend of the HF model, as shown in Fig. 9(c). The E2NN ensemble maintains higher accuracy over the design space by leveraging the LF emulator. For the E2NN ensemble, the algorithm adds one more sample further up the valley that the optimum lies in, and then terminates because the expected improvement converges below tolerance. The final fit is shown in Fig. 10(a). The kriging model adds 29 samples before it converges, and still doesn't find the exact optimum, as shown in Fig. 10(b). Figure 8: Comparison of HF and LF functions for a nonstationary test function. Figure 9: Adaptive sampling of E2NN ensemble. ### Three-dimensional CFD example using a Hypersonic Vehicle Wing This example explores modeling the Lift to Drag Ratio (\(CL/CD\)) of a wing of the Generic Hypersonic Vehicle (GHV) given various flight conditions. The GHV was developed at the Wright-Patterson Air Force Base to allow researchers outside the Air Force Research Laboratory to perform hypersonic modeling studies. The parametric geometry of the wing used was developed by researchers at the Air Force Institute of Technology [48] and is shown in Fig. 12. For this example, we studied the maximum lift-to-drag ratio of the GHV wing design with respect to three operational condition variables: Mach number (normalized as \(x_{1}\)), Angle of Attack (normalized as \(x_{2}\)) and Altitude (normalized as \(x_{3}\)). The Mach number ranges from \([1.2,4.0]\), while the angle of attack ranges from \([-5^{\circ},8^{\circ}]\) and the altitude ranges from \([0,50\text{ km}]\). The speed of sound decreases with altitude, so the same Mach number denotes a lower speed at higher altitude. Atmospheric properties were calculated using the scikit-aero python library based on the U.S. 1976 Standard Atmosphere. FUN3D [49] performed the CFD calculations. We used Reynolds Averaged Navier Stokes (RANS) with a mesh of \(272,007\) tetrahedral cells for the HF model, and Euler with a mesh of \(29,643\) tetrahedral cells for the LF model. To enable rapid calling of the LF model during each NN evaluation, \(300\) evaluations of the LF model were performed and used to train a GPR model. This GPR model was then used as the LF function. The HF and LF meshes are compared in Fig. 13. 
A comparison of the HF and LF models (Fig. 14) shows that the Angle of Attack (\(x_{2}\)) is the most influential variable, followed by Mach number (\(x_{1}\)), with Altitude (\(x_{3}\)) contributing little effect. The models show similar trends, except for Mach number, which is linear according to the LF model and quadratic according to the HF model. The captured viscous effects and finer mesh enable the HF model to capture more complexity resulting from the underlying physics. The problem is formulated as minimizing the negative of the lift-to-drag ratio rather than maximizing the lift-to-drag ratio, following optimization convention. Both the ensemble and GPR models are initialized with 10 HF samples selected using Latin Hypercube Sampling. Five different optimization runs are completed for each method, using the same 5 random LHS initializations to ensure a fair comparison. The optimization convergence as samples are added is shown in Fig. 15. Both models start with the same initial optimum values because they share the same initial design of experiments. However, the E2NN ensemble model improves much more quickly than GPR, and converges to a better optimum. E2NN requires only 11 HF samples to reach a better optimum on average than GPR reached after 51 HF samples. In some cases, the GPR model converges prematurely before finding the optimum solution. E2NN much more consistently finds the optimum of \(-13\), which corresponds to a lift-to-drag ratio of 13. Figure 12: Illustration of GHV wing used in CFD analysis. Figure 13: Comparison of HF and LF meshes used in CFD analysis. Figure 14: Comparison of HF and LF CFD models. Figure 15: Comparison of E2NN and GPR convergence towards optimum \(-CL/CD\). Average values across all 5 runs are shown as thick lines, while the individual run values are shown as thin lines. ## 5 Conclusions In this research, we present a novel framework of adaptive machine learning for engineering design exploration using an ensemble of Emulator Embedded Neural Networks (E2NN) trained as rapid neural networks. This approach allows for the assessment of modeling uncertainty and expedites learning based on the design study's objectives. The proposed framework is successfully demonstrated with multiple analytical examples and an aerospace problem utilizing CFD analysis for a hypersonic vehicle. The performance of the proposed E2NN framework is highlighted when compared to a single-fidelity kriging optimization. E2NN exhibited superior robustness and reduced computational costs in converging to the true optimum. The central contribution in this work is a novel technique for approximating epistemic modeling uncertainty using an ensemble of E2NN models. The ensemble predictions are analyzed using Bayesian statistics to produce a t-distribution estimate of the true function response. The uncertainty estimation methodology allows for active learning, which reduces the training data required through efficient goal-oriented design exploration. This is a sequential process of intelligently adding more data based on the ensemble, and then rebuilding the ensemble. The training of the ensemble is drastically accelerated by applying the Rapid Neural Network (RaNN) methodology, enabling individual E2NNs to be trained near-instantaneously. This speedup is enabled by setting some neural network weights to random values and setting others using fast techniques such as linear regression and ridge regression.
The essential components of the proposed framework are 1) the inclusion of emulators, 2) the ensemble uncertainty estimation, 3) active learning, and 4) the RaNN methodology. These components work together to make the overall methodology feasible and effective. For instance, the active learning would not be possible without the uncertainty estimation. Emulators make the ensemble more robust by stabilizing individual E2NN fits, resulting in fewer defective or outlier fits and better uncertainty estimation. The inclusion of the emulators and the use of adaptive sampling reduces the cost of training data generation more than either method could individually. Finally, the RaNN methodology accelerates training of individual E2NN models by hundreds to thousands of times, preventing the many re-trainings required by the ensemble and adaptive sampling methodologies from introducing significant computational costs. All these techniques combine synergistically to create a robust and efficient framework for engineering design exploration. **Declaration of Competing Interests** The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. **Research Data** No data was used for the research described in this article. The computer codes used for the numerical examples are available at [https://github.com/AtticusBeachy/multi-fidelity-nn-ensemble-examples](https://github.com/AtticusBeachy/multi-fidelity-nn-ensemble-examples). **Acknowledgement** This research was sponsored by the Air Force Research Laboratory (AFRL) and Strategic Council for Higher Education (SOCHE) under agreement FA8650-19-2-9300. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied by the Strategic Council for Higher Education and AFRL or the U.S. Government.
2307.16829
Effects of Grain Magnetic Properties and Grain Growth on Synthetic Dust Polarization of MHD Simulations in Protostellar Environments
Thermal dust polarization is a powerful tool to probe magnetic fields ($\textbf{B}$) and grain properties. However, a systematic study of the dependence of dust polarization on grain properties in protostellar environments is not yet available. In this paper, we post-process a non-ideal MHD simulation of a collapsing protostellar core with our updated POLARIS code to study in detail the effects of iron inclusions and grain growth on thermal dust polarization. We found that superparamagnetic (SPM) grains can produce high polarization degree of $p \sim 10-40\%$ beyond $\sim 500$ au from the protostar because of their efficient alignment by magnetically enhanced Radiative Torque mechanism. The magnetic field tangling by turbulence in the envelope causes the decrease in $p$ with increasing emission intensity $I$ as $p\propto I^{\alpha}$ with the slope $\alpha \sim -0.3$. But within 500 au, SPM grains tend to have inefficient internal alignment (IA) and be aligned with $\textbf{B}$ by RATs only, producing lower $p \sim 1\%$ and a steeper slope of $\alpha \sim -0.6$. For paramagnetic (PM) grains, the alignment loss of grains above $1\mu m$ in the inner $\sim 200$ au produces $p << 1\%$ and the polarization hole with $\alpha \sim -0.9$. Grain growth can increase $p$ in the envelope for SPM grains, but cause stronger depolarization for SPM grains in the inner $\sim 500$ au and for PM grains in the entire protostellar core. Finally, we found the increase of polarization angle dispersion function $S$ with iron inclusions and grain growth, implying the dependence of B-field strength measured using the DCF technique on grain alignment and grain properties.
Nguyen Chau Giang, Thiem Hoang
2023-07-31T16:48:31Z
http://arxiv.org/abs/2307.16829v2
Effects of Grain Magnetic Properties and Grain Growth on Synthetic Dust Polarization of MHD Simulations in Protostellar Environments ###### Abstract Thermal dust polarization is a powerful tool to probe magnetic fields (\(\mathbf{B}\)), grain magnetic properties, and grain sizes. However, a systematic study of the dependence of synthetic dust polarization on grain properties in protostellar environments is not yet available. In this paper, we post-process a non-ideal MHD simulation of a collapsing protostellar core with our updated POLARIS to study in detail the effects of grain magnetic properties and grain growth on dust polarization. We found that superparamagnetic (SPM) grains can produce high polarization degree \(p\sim 10-40\%\) beyond \(\sim 500\) au because of their efficient magnetic alignment by the magnetically enhanced Radiative Torque (MRAT) mechanism. The magnetic field tangling due to turbulence in the envelope causes the decrease of \(p\) with emission intensity \(I\) as \(p\propto I^{\alpha}\) with the slope \(\alpha\sim-0.3\). But within 500 au, SPM grains tend to have inefficient internal alignment and be aligned with \(\mathbf{B}\) by the RAdiative Torque mechanism only, producing lower \(p\sim 1\%\) and a steeper slope of \(\alpha\sim-0.6\). For paramagnetic (PM) grains, their weak internal and external alignment produces \(p<<1\%\), and the depolarization happens with a steep slope of \(\alpha\sim-0.9\) owing to the alignment loss of large grains toward the protostar. Grain growth can help to increase \(p\) and weaken the depolarization effect caused by turbulence in the envelope for SPM grains. But for SPM grains within \(\sim 500\) au and for PM grains, increasing \(a_{\rm max}\) enhances the depolarization effect due to the increasing amount of large grains with inefficient alignment. Finally, we found that the polarization angle dispersion function \(S\) increases with increasing iron inclusions and \(a_{\rm max}\). Our findings reveal the dependence of magnetic field strength measured using the Davis-Chandrasekhar-Fermi technique on grain alignment and grain properties. keywords: stars: formation, magnetic fields, low-mass stars, dust extinction, polarization ## 1 Introduction Magnetic fields (\(\mathbf{B}\)) are thought to play an essential role in the collapse of dense cores to form protostars and the formation of protostellar disks and planetary systems (Shu et al., 1987; McKee & Ostriker, 2007; Allen et al., 2003; Tsukamoto et al., 2022). The dominant effect of magnetic fields over gravity produces strong magnetic pressure, which prevents the collapse of the core in the early stage (Nakano & Nakamura, 1978; Krumholz & Federrath, 2019), but also helps to maintain the cloud structure against the destruction from radiation and mechanical feedback from protostars later (Krumholz et al., 2014; Pabst et al., 2019). The dominance of magnetic energy over turbulent kinetic energy also helps to guide the infalling gas motion inside the self-collapsing cloud, which shapes the structure of clouds and filaments (Hennebelle & Ciardi, 2009; Pattle et al., 2018), protostellar cores and disks, and controls the coherence of magnetic field morphology across different scales in star-forming regions (Fiedler & Mouschovias, 1993; Galli & Shu, 1993a; Galli & Shu, 1993b; Allen et al., 2003; Kataoka et al., 2012; Seifried et al., 2015).
During the stage of Class 0/I Young Stellar Objects (YSOs), magnetic fields are predicted to help transport the disk angular momentum outward via the magnetic braking effect (Mestel & Spitzer Jr, 1956; Mouschovias & Paleologou, 1979a; Mouschovias & Paleologou, 1979b; Basu & Mouschovias, 1994; Allen et al., 2003; Mellon & Li, 2008a). Basically, the coupling between the fast-rotating disk and the poloidal component of magnetic fields produces the magneto-centrifugal force that pushes matter inside the disk away along the disk rotation axis, forming a protostellar outflow (Pudritz & Norman, 1983; Bally, 2016). The removal of matter along the outflow saves the life of protostellar disks from being fragmented by the strong centrifugal force and allows the remaining matter to continue feeding the growth of the central protostar (Galli et al., 2006; Li et al., 2014). However, if the rotation axis is aligned with \(\mathbf{B}\), the magnetic braking can become too efficient in removing the disk angular momentum. As a result, the remaining matter in the disk will be quickly accreted onto the protostar that stops the protostellar disk formation, named as the magnetic braking catastrophe (Allen et al., 2003; Galli et al., 2006; Mellon & Li, 2008b; Hennebelle & Fromang, 2008; Machida et al., 2011). One of the possible scenarios to solve the above problem is to introduce the initial misalignment between the rotational axis and the magnetic field orientation (Hennebelle & Ciardi, 2009; Joos et al., 2012; Krumholz et al., 2013). Besides, the effect of non-ideal magnetohydrodynamic (MHD), i.e., ambipolar diffusion (Duffin & Pudritz, 2009; Mellon & Li, 2009), Ohmic dissipation (Dapf et al., 2012, Machida et al., 2014; Tomida et al., 2015; Lam et al., 2019), the Hall effect (Tsukamoto et al., 2015; Wurster et al., 2018), magnetic reconnection (Santos-Lima et al., 2012; Li et al., 2014); or turbulence motion inside the Keplerian disk (Santos-Lima et al., 2012; Santos-Lima et al., 2013; Seifried et al., 2015), are also supposed to be able to solve the magnetic braking catastrophe. However, different scenarios lead to different outcomes (in terms of disk size, outflow strength, and magnetic field morphology). Consequently, accurate measurements of both the morphology and strength of magnetic fields in all spatial scales of protostellar environments become a key for accurately understanding the role of magnetic fields in regulating the core collapsing and guiding the star, disk, and outflow formation. The most popular technique to measure magnetic fields in protostellar environments is to use thermal emission from magnetically aligned dust grains (Lazarian & Hoang, 2007; Andersson et al., 2015; Lazarian et al., 2015; Hull & Zhang, 2019). This technique relies on the alignment of the longest axis of irregular dust grains with the magnetic field direction, which produces polarized thermal dust emission with the polarization vector \(\mathbf{P}\perp\mathbf{B}\). Measurements of magnetic fields in protostellar environments using thermal dust polarization are carried out from the core scale of 0.1 pe using single-disk millimeter telescopes such as James Clerk Maxwell Telescope (JCMT, Matthews et al., 2009), down to the protostellar scale of \(\sim 1000\) au by interferometric telescopes as Jansky Very Large Array (JVLA), Combined Array for Research in Millimeter-wave Astronomy (CARMA, Bock et al., 2006), Submillimeter Array (SMA, Ho et al., 2004). 
Especially the operation of the Atacama Large Millimeter/submillimeter Array (ALMA) recently allowed us to resolve the protostellar disk with a high resolution of up to \(10-100\) au, opening the golden area for exploring the role of magnetic fields in regulating the collapse of the protostellar core and the star and planet formation. Polarimetric observations by JCMT, CARMA, JVLA, SMA, and ALMA toward several low-mass class 0/I Young Stellar Objects (YSOs) are consistent with the theoretical prediction of the magnetically-regulated collapsing core (i.e., Allen et al., 2003; Hull et al., 2017), in which magnetic fields are strongly consistent from the core to the envelope scale (Ching et al., 2017; Hull et al., 2014; Davidson et al., 2014; Hull et al., 2017), and clearly follow the hourglass-shaped within \(\sim 500-1000\) au around the protostar (Girart et al., 1999; Girart et al., 2006; Goncalves et al., 2008; Rao et al., 2009; Stephens et al., 2013; Maury et al., 2018; Kwon et al., 2019; Sadavoy et al., 2019). Some of the other sources, in contrast, are consistent with the dynamically-regulated collapse scenario (i.e., the case of Class 0 protostar Serpens SMM1 studied by Hull et al., 2017), which shows no correlation of \(\mathbf{B}\)-fields from the core to envelope scale and no hourglass-shaped \(\mathbf{B}\)-fields around their host protostars (see review by Hull & Zhang, 2019). Focusing on observations probed by ALMA, the situation becomes much more complicated since magnetic fields reveal the significant deviation from the envelope of \(\sim 1000\) au to the disk scale of \(\sim 100\) au (Hull et al., 2014; Cox et al., 2015; Cox et al., 2018; Takahashi et al., 2019; Sadavoy et al., 2019). For example, Class 0 Protobinary System L1448 IRS 2 studied by Kwon et al. (2019) clearly reveals the change from the hourglass-shaped \(\mathbf{B}\)-fields in the envelope to the pinched fields along the major disk axis within \(\sim 100\) au. Similarly, magnetic fields from Class 0 YSOs OMC-3/MMS 6 studied by Takahashi et al. (2019) show the significant transition from the spiral pattern within \(\sim 800\) au to the radial field at \(\sim 200\) au, to the pinched field in the disk scale within \(\sim 100\) au. These authors suggest that the pinched or circular field inside the disk of Class 0/I YSOs could be induced by the formation of the toroidal \(\mathbf{B}\)-fields wrapping by the fast rotation disk. But it also could be explained by self-scattering of thermal emission from sub-millimeter grains (Yang et al., 2016; Cox et al., 2018; Takahashi et al., 2019; Sadavoy et al., 2018; Sadavoy et al., 2019; Lam et al., 2021), whose polarization signal does not contain any magnetic field information. Furthermore, observations of dust polarization toward the inner 100 au region of NGC 1333 IRAS4A studied by Ko et al. (2020) and OMC-3/MMS 6 studied by Liu (2021) even reveal the \(90^{\circ}\) flipping of the polarization pattern between millimeter and submillimeter wavelengths, which increases the difficulty on understanding the origin of polarization signal obtain in this area. The uncertainty of the polarization origin inside the inner \(\sim 100\) au region raises the serious question of whether we could continue to use dust polarization to trace and study the role of magnetic fields in developing the disk and outflow around the protostar. 
Besides, the exact origin behind the reduction of the polarization degree \(p(\%)\) toward the center of protostellar cores is still not well understood in spite of many efforts in the last decade (Hull et al., 2014; Cox et al., 2018; Takahashi et al., 2019). This depolarization effect is usually assigned to the effect of increasing turbulence or decreasing the grain alignment efficiency in the innermost region of the core, but how much each effect contributes to the depolarization is still not quantified in detail. Furthermore, ALMA recently report the detection of high \(p\sim 20-40\%\) in the envelope scale of Class 0/I YSOs (Kwon et al., 2019; Le Gouellec et al., 2020; Le Gouellec et al., 2023). Since dust grains are expected to be inefficiently aligned with \(\mathbf{B}\) in dense environments (Lazarian, 2007), the detection of high \(p>10\%\) in the envelope challenges our understanding of grain alignment in such regions. The key to accurately extracting information from dust polarization is to perform the synthetic observation of polarized dust emission based on the accurate model of grain alignment. Back to the physics of grain alignment, this phenomenon basically includes two major stages: 1) the internal alignment, which brings the grain angular momentum \(\mathbf{J}\) to be aligned with the major axis of inertia moment \(a_{1}\), and 2) the external alignment which brings \(\mathbf{J}\) to be aligned with \(\mathbf{B}\) (see Hoang et al., 2022 for more detail). For grains containing iron, i.e., silicate grains, or what we call paramagnetic material (PM grains), the internal alignment is induced by the Barnett relaxation (Purcell, 1979; Roberge et al., 1993). In particular, the precession of \(\mathbf{J}\) and the grain magnetic moment \(\mu_{\rm Bar}\) gained by the Barnett effect (Barnett, 1915) around \(a_{1}\) in the grain inertia frame conducts the dissipation of rotational energy (Purcell, 1979), which pushes grains back to the most stable state where they rotate around \(a_{1}\) (the shortest axis of grains), or \(\mathbf{J}\parallel a_{1}\). For the external alignment, the interaction between \(\mu_{\rm Bar}\) and \(\mathbf{B}\) first induces the magnetic torque, which drives the Larmor precession of \(\mathbf{J}\) around \(\mathbf{B}\)(Hoang & Lazarian, 2016). Since grains can stably precess and receive \(\mathbf{B}\) as the axis of alignment, Radiative Torque (RATs) (Lazarian & Hoang, 2007; Hoang & Lazarian, 2008; Lazarian et al., 2015; Andersson et al., 2015), or magnetic relaxation (Davis Jr & Greenstein, 1951) can help \(\mathbf{J}\) to align with \(\mathbf{B}\). The principle of the latter mechanism is based on the dissipation of rotational energy induced by the rotating grain magnetic moment inside magnetized environments (Davis Jr & Greenstein, 1951). For the former mechanism, the alignment between \(\mathbf{J}\) and \(\mathbf{B}\) is induced by the alignment torque component of the radiative torque formed by the interaction between an anisotropic radiation field with irregular dust grains (Dolginov & Mitrofanov, 1976; Draine, 1996; Lazarian & Hoang, 2007; Hoang & Lazarian, 2008). However, since RAT has the spinning torque component which can spin up grains to suprathermal rotation (Hoang & Lazarian, 2008), this mechanism allows them to stably align with magnetic fields regardless of the randomization from gas collisions. 
Consequently, RAT is the most effective mechanism driving the external alignment of PM grains with \(\mathbf{B}\) in the diffuse medium as interstellar medium and molecular clouds (Reissl et al., 2017; Seifried et al., 2018; Reissl et al., 2020). However, RATs only can drive a part of grains to have perfect external alignment with \(\mathbf{B}\) at their superthermal rotation (or called high-\(J\) attractors), which is parameterized by a quantity \(f_{\rm high-J}\)(Hoang and Lazarian, 2008). The value of \(f_{\rm high-J}\) for PM grains varies between 0.2 - 0.7 (Lazarian and Hoang, 2008; Hoang and Lazarian, 2016; Herranen et al., 2021; Lazarian and Hoang, 2021), which depends complexly on the grain size, grain shape, grain composition, and their orientation with the radiation and magnetic field direction. Reissl et al. (2016) combined the three dimensions (3D) Radiative Transfer code with the RAT alignment physics and published the code POLArized Radiative Simulation (POLARIS), allowing users to simulate polarized dust emission from aligned dust grains by RATs. Since dust polarization provides both information about magnetic field and dust physics, confronting synthetic dust polarization from POLARIS with observational data is key for accurately extracting magnetic field morphology and strength, probing dust properties, and testing the grain alignment physics. In particular, by considering the RAT alignment, Brauer et al. (2016) found that the extinction by very large aligned dust grains (VLGs) of \(\sim 10-100\,\mu\)m inside the dense protostellar core with the total gas mass \(M_{\rm gas}\geq 8M_{\odot}\) can produce the polarization hole around the protostar at submillimeter wavelengths. Furthermore, the presence of aligned VLGs in this environment could induce the 90\({}^{\circ}\) flipping of the polarization pattern between the millimeter and submillimeter wavelengths owing to the change in the polarization mechanism from dichroic emission to dichroic extinction. Similarly, Valdivia et al. (2019) found that the existence of large grains of \(a\sim 10-50\,\mu\)m around YSOs is required to reproduce \(p\sim 1-5\%\) within \(\sim 1000\) au scale found in the study of Hull et al. (2014). Besides, Valdivia et al. (2022) found that if dust grains up to \(a_{\rm max}=20\,\mu\)m can be efficiently aligned with \(\mathbf{B}\)by RATs, polarized dust emission can trace well magnetic field orientation in thousand scales to the protostar position regardless of the difference in the initial setup of protostellar cores. The presence of VLGs in protostellar environments is supported via the spectral energy distribution (SED) fitting of thermal dust emission (Kwon et al., 2009; Galametz et al., 2019). However, whether large micron-sized grains above \(1\,\mu\)m could have efficient magnetic alignment by RATs in dense protostellar environments is still not clearly convincing. Furthermore, by comparing the model of RAT alignment and perfect alignment with observational data, Le Gouellec et al. (2020) found that RATs itself could not explain the detected polarization fraction (with high \(p\sim 5-40\%\) in the envelope scale) summarised from the survey of Class 0/I YSOs by SMA, JVLA, ALMA (i.e., from Hull et al., 2017; Gouellec et al., 2019; Hull et al., 2020; Maury et al., 2018; Sadavoy et al., 2018; Sadavoy et al., 2019). Instead, the perfect alignment model can reproduce well ALMA data around class 0/I YSOs. The recent study of dust polarization in protostellar outflow by Le Gouellec et al. 
(2023b) also confirms the above conclusions, in which VLGs inside the outflow cavity wall must be perfectly aligned with \(\mathbf{B}\) in order to reproduce the detection of highly polarized thermal dust emission and high \(p(\%)\) in this area (Maury et al., 2018; Hull et al., 2017; Hull and Zhang, 2019; Gouellec et al., 2019; Le Gouellec et al., 2023). Why grains can be perfectly aligned with magnetic fields in protostellar environments and whether RATs can still align large micron-sized grains in such dense areas is still unclear. In the framework of RATs, the study of radiative alignment for interstellar grains with magnetic inclusions by Hoang and Lazarian (2016) found that for superparamagnetic material (SPM grains, which contain iron atoms in the form of clusters), their enhanced magnetic susceptibility (Jones and Spitzer, 1967) can strengthen the Larmor precession and the magnetic relaxation over the gas randomization. The presence of iron inclusions enhances the magnetic alignment and allows a higher fraction of grains to be aligned with \(\mathbf{B}\) at high-\(J\) attractors via the joint action between RATs and enhanced magnetic relaxation, named Magnetic RAdiative Torque (MRAT) alignment (Hoang and Lazarian, 2008; Hoang and Lazarian, 2016). With a high amount of iron clusters locked inside dust grains, MRAT alignment could be strong enough to drive all grains satisfying the suprathermal condition to perfect alignment with \(\mathbf{B}\), regardless of the difference in grain properties and their initial orientation in space. Recently, Hoang (2022) and Hoang et al. (2022) have revisited the grain alignment theory for both PM and SPM grains in protostellar environments. They found that PM grains, which are efficiently aligned with magnetic fields in the ISM and MCs by RATs, are not able to have the magnetic alignment in protostellar environments due to the strong gas randomization on their Larmor precession (Hoang and Lazarian, 2016). Furthermore, they tend to have slow internal relaxation due to their weak Barnett relaxation, whose alignment between \(\mathbf{J}\) and \(a_{1}\) is still undetermined (Hoang and Lazarian, 2009). However, for SPM grains, their enhanced magnetic susceptibility can significantly strengthen both the Larmor precession and Barnett relaxation, allowing more large grains to have fast internal relaxation and be able to be aligned with \(\mathbf{B}\) in dense environments (Hoang and Lazarian, 2016; Hoang, 2022; Hoang et al., 2022). In addition, they show that SPM grains prefer to be aligned with \(\mathbf{B}\) by MRAT alignment. If grains contain a high amount of iron inclusions, MRAT alignment could be the mechanism driving the perfect grain alignment in protostellar environments, as found in the studies of Le Gouellec et al. (2020) and Le Gouellec et al. (2023b). Taking into account the complexity and the strong dependence of the grain alignment state on the embedded iron inclusions, Giang et al. (2022) incorporated the grain magnetic properties and the detailed grain alignment physics into the POLARIS code and used it to study the effect of iron inclusions on dust polarization in the Bok Globule model (Brauer et al., 2016).
The significant improvement in this version is that now we can model both the internal and external alignment of grains consistently based on the level of embedded iron inclusions, instead of fixing the RAT alignment efficiency as in previous studies using the original POLARIS version (Brauer et al., 2016; Valdivia et al., 2019; Le Gouellec et al., 2020; Le Gouellec et al., 2023b). Our results of grain alignment are basically consistent with the analytical studies of Hoang (2022) and Hoang et al. (2022). In particular, we found that large micron-sized grains up to \(100\,\mu\)m could have perfect magnetic alignment on the \(\sim 1000-10000\) au scale if they are SPM with high embedded iron clusters and are aligned with \(\mathbf{B}\) by MRAT alignment (Le Gouellec et al., 2020; Le Gouellec et al., 2023b). This perfect alignment of grains with magnetic fields explains the detection of \(p\sim 10-40\%\) in this area by ALMA (Kwon et al., 2019; Le Gouellec et al., 2023b), as predicted in Hoang et al. (2022). Furthermore, we found that toward the disk scale of \(\sim 100\) au, PM grains do not have the magnetic alignment, while SPM grains tend to align imperfectly with \(\mathbf{B}\) due to the increase of gas randomization. As a result, we suggest that the reduction of grain alignment efficiency is the major mechanism driving the polarization hole in both optically thin and optically thick protostellar cores. However, our previous study used the idealized uniform model of magnetic fields inside the core, which may overestimate the impact of iron inclusions, and eliminates the effects of turbulence and the geometrical effect of magnetic fields on polarized dust emission. In addition, we found that VLGs tend to be inefficiently aligned with \(\mathbf{B}\) due to the strong gas randomization on both their internal and external alignment. Therefore, in contrast to the positive correlation between grain growth activities and the observed degree of polarization (i.e., larger \(a_{\rm max}\) induces larger \(p\)) found in Valdivia et al. (2019, 2022) and Le Gouellec et al. (2023b), we predict grain growth will suppress polarized dust emission. However, this effect is not yet well studied in Giang et al. (2022). Therefore, our following paper aims to analyze in detail the effects of grain alignment, grain growth, and physical properties of environments on the polarized dust emission from the realistic MHD simulation of the protostellar core. The structure of our paper is organized as follows. We first describe the MHD datacube in Section 2, our model setups, and how to post-process the MHD simulation with POLARIS in Section 3. We then show results for the grain alignment in Section 4. The effects of iron inclusions and maximum grain size on the polarization map, the variation of \(p(\%)\), the polarization angle dispersion function \(S\), and the alignment efficiency characterized by the quantity \(p\times S\) with intensity are shown in Sections 5, 6 and 7, respectively. The influence of iron inclusions on dichroic extinction at submillimeter wavelengths is studied in Section 8. Further discussions and conclusions of our study are given in Sections 9 and 10, respectively. ## 2 MHD simulation of collapsing cores ### Simulation setup and gas density distribution We use a datacube that shows the clearest formation of a protostellar disk from the series of non-ideal MHD simulations of the self-collapsing low-mass core from Lam et al. (2019) (Model M1.0A10.0, Table 1).
The simulation starts with a cubic box of size 5000 au, containing a spherical core of total gas mass \(M=0.5M_{\odot}\) and a radius of 2000 au. The core's gas density follows the Bonnor-Ebert density profile, and it is assumed to rotate around the z direction with an angular velocity of \(\Omega=6\times 10^{-14}\,\mathrm{s}^{-1}\). Initial magnetic fields are assumed to be uniform along the z direction. The core is supercritical with the mass-to-flux ratio of \(\lambda\sim 2.6\), and turbulence is transonic with the sonic Mach number \(M_{s}=1\). The ambipolar diffusion coefficient of the simulation is 10 times higher than the standard value of \(Q_{\mathrm{A},0}=95.2\,\mathrm{g}^{1/2}\,\mathrm{cm}^{-3/2}\), assuming the ion-neutral drag coefficient of \(\gamma=3.5\times 10^{13}\,\mathrm{cm}^{3}\,\mathrm{g}^{-1}\,\mathrm{s}^{-1}\) and the cosmic ray ionization rate of \(10^{-17}\,\mathrm{s}^{-1}\) (Shu, 1992). The core is assumed to collapse isothermally with the gas temperature of \(T_{\mathrm{B}}=10\) K. Lam et al. (2019) used the ATHENA code and uniform cubic grids of \(512^{3}\), which corresponds to a spatial resolution of \(\Delta x=20\) au, to simulate the collapse of the dense core and the formation of the protostellar disk. After a sink particle forms, they rebin the cubic box of \(256^{3}\) around the sink particle to the new size of \(512^{3}\) to better capture the disk formation with a higher resolution of \(\Delta x=5\) au. To perform synthetic observations of dust polarization in both the protostellar core and disk, we take the same snapshot used in Lam et al. (2021), when the cloud has evolved for 0.158 Myr. In this snapshot, the protostellar disk with a radius of \(\sim 100\) au is clearly formed around the sink particle, whose mass is about \(M_{\mathrm{star}}=0.22M_{\odot}\). The mean gas density distribution on the x\(-\)z and x\(-\)y planes taken from the snapshot is shown in Figure 1. One can see the existence of the disk on the equatorial plane (at \(z\sim 0\)) when looking along the edge-on direction (left panel) and the clear accretion of material into the disk scale of 100 au in the anti-clockwise direction when looking along the face-on direction (right panel). The gas density inside the disk is about \(n_{\mathrm{H}}\sim 10^{8}-10^{9}\,\mathrm{cm}^{-3}\) and it decreases outward to \(n_{\mathrm{H}}\sim 10^{5}-10^{6}\,\mathrm{cm}^{-3}\) at \(500-2500\) au. ### Mean magnetic field orientation Figure 2 shows the mean orientation of magnetic fields integrated along the y- and z-direction overplotted with the density-weighted magnetic field strength \(\langle B\rangle\) on the x\(-\)z and x\(-\)y planes, respectively. The value of \(\langle B\rangle\) is given by: \[\langle B\rangle=\frac{\int_{\mathrm{los}}Bn_{\mathrm{H}}dl}{\int_{\mathrm{los}}n_{\mathrm{H}}dl}, \tag{1}\] where \(B\) and \(n_{\mathrm{H}}\) are the magnetic field strength and the gas density in each cell along the observed direction. We weight the magnetic field strength (and later other quantities, Section 4) with gas density to better visualize the amplification of \(\mathbf{B}\)-fields in dense regions due to the magnetic flux freezing effect. For the mean orientation of magnetic fields on the plane of the sky (POS), we use the same approach proposed by Valdivia et al. (2022) to emphasize the magnetic field structure in dense regions and to later compare with the inferred magnetic fields from dust polarization in Section 9.2.2.
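As a minimal illustration of Eq. (1) (not part of POLARIS or our pipeline; the array names and axis ordering are assumptions), the density-weighted line-of-sight average of any cell quantity can be computed as:

```python
import numpy as np

def density_weighted_los_average(quantity, n_H, axis=0):
    # Eq. (1): <Q> = sum(Q * n_H * dl) / sum(n_H * dl) along the line of sight;
    # a uniform cell size dl cancels in the ratio.
    return np.sum(quantity * n_H, axis=axis) / np.sum(n_H, axis=axis)

# Example: mean field strength <B> on the x-y plane, integrating along z,
# assuming B and n_H are 3D arrays indexed as [z, y, x].
# B_mean_xy = density_weighted_los_average(B, n_H, axis=0)
```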
We first compute the density-weighted sine and cosine of \(2\phi_{\mathrm{B}}\), \(\langle\sin(2\phi_{\mathrm{B}})\rangle\) and \(\langle\cos(2\phi_{\mathrm{B}})\rangle\), along the y- and z-directions by using Equation (1), where \(\phi_{\mathrm{B}}\) is the angle of the projected magnetic field on the POS with respect to the North direction in each cell. The mean angle of \(\mathbf{B}\) to the North direction from the MHD simulation, \(\langle\phi_{\mathrm{B}}\rangle\), is then given by: \[\langle\phi_{\mathrm{B}}\rangle=\frac{1}{2}\arctan\frac{\langle\sin(2\phi_{\mathrm{B}})\rangle}{\langle\cos(2\phi_{\mathrm{B}})\rangle}. \tag{2}\] Due to the magnetic flux freezing, the initial uniform magnetic fields along the z-direction are dragged by infalling gas toward the center, forming the hourglass-shaped magnetic fields seen in the x\(-\)z plane (left panel) and the spiral pattern seen in the x\(-\)y plane (right panel). The magnetic field along the outflow (left panel) is slanted toward the right, and the large-scale hourglass-shaped magnetic field is bent to be pinched along the disk within \(\sim 200\) au due to the formation of the toroidal \(\mathbf{B}\) component wrapped by the fast rotation of the disk. The interaction between the outflow and infalling gas also develops the small-scale toroidal pattern seen on the x\(-\)y plane. ## 3 Synthetic polarization observations of MHD simulations with the updated POLARIS We perform synthetic multi-wavelength observations of polarized dust emission from the protostellar core and disk by post-processing the above MHD simulation with the updated POLARIS code introduced by Giang et al. (2022). Here, we first introduce our setup for the radiation source and the dust model in Section 3.1, then we briefly explain our working flow in POLARIS from Sections 3.2 to 3.4. ### Radiation source and dust model We consider the stellar radiation from the sink particle to be the unique heating source in our modeling. The contribution from the interstellar radiation field is neglected due to its inefficient effect on heating dust grains on scales \(<2500\) au inside the protostellar core. For the stellar radiation source, we assume the sink particle is a black body with the radius of \(R_{\mathrm{star}}=2.5R_{\odot}\) and temperature of \(T_{\mathrm{star}}=9707\) K, which corresponds to the stellar luminosity \(L_{\mathrm{star}}=50L_{\odot}\) (a low-mass protostar). The choice of \(L_{\mathrm{star}}\) is motivated by the study of Le Gouellec et al. (2023b), who found that the high \(p\sim 20\%\) in the outflow cavity of Class 0/I low-mass YSOs can only be reproduced if the protostar has \(L_{\mathrm{star}}\sim 50-100L_{\odot}\). We consider the wide radiation spectrum with the lower cutoff at \(\lambda_{\mathrm{min}}=0.1\,\mathrm{\mu m}\), at which UV photons are mainly absorbed by Hydrogen atoms, and the upper cutoff at \(\lambda_{\rm max}=2\) mm, at which the dust-radiation interactions become insignificant. We assume the source emits \(N_{\rm photon}=10^{7}\) photons per wavelength. For the dust model, we assume the uniform distribution of dust grains in the entire protostellar core with the typical dust-to-gas mass ratio \(\eta=0.01\) found in the ISM. Dust grains are assumed to be in the composite form with \(67.5\%\) of silicate and \(32.5\%\) of graphite. Dust grains follow the standard Mathis-Rumpl-Nordsieck (MRN) size distribution \(dn/da=n_{\rm H}Ca^{-3.5}\) (Mathis et al., 1977), with \(C\) the normalization constant determined from the value of \(\eta=0.01\) (see Giang et al., 2022 for details).
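For illustration only (this is not the POLARIS normalization routine; the bulk grain density and the gas mass per hydrogen nucleus are assumed values), \(C\) can be fixed by requiring that the dust mass per H nucleus equals \(\eta\) times the gas mass per H nucleus:

```python
import numpy as np

M_H = 1.6726e-24     # proton mass [g]
MU_GAS = 1.4         # assumed gas mass per H nucleus, in units of m_H
RHO_BULK = 3.0       # assumed bulk density of composite grains [g cm^-3]

def mrn_normalization(a_min_cm, a_max_cm, eta=0.01):
    """Normalization C of dn/da = n_H * C * a^-3.5 (per H nucleus).
    Dust mass per H: int (4/3) pi rho a^3 * C a^-3.5 da
                   = (4/3) pi rho C * 2 * (sqrt(a_max) - sqrt(a_min))."""
    size_integral = 2.0 * (np.sqrt(a_max_cm) - np.sqrt(a_min_cm))
    return eta * MU_GAS * M_H / ((4.0 / 3.0) * np.pi * RHO_BULK * size_integral)

# Example: a_min = 3.5 Angstrom, a_max = 1 micron (sizes in cm)
C = mrn_normalization(3.5e-8, 1.0e-4)
```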
We choose the composite dust model because sub-micron grains with different compositions can easily collide and stick together in protostellar environments to form larger ones via dust coagulation (Okuzumi et al., 2012; Kataoka et al., 2013). For the size distribution, we consider the wide range from the minimum size of \(a_{\rm min}=3.5\) Å to the maximum size varying from \(a_{\rm max}=1\) \(\mu\)m to \(a_{\rm max}=100\) \(\mu\)m. The choice of VLGs within the 2500 au scale stems from the detection of large grains via the SED fitting (see, e.g., Kwon et al., 2009; Miotello et al., 2014; Galametz et al., 2019; Liu, 2021) as well as the synthetic observations of dust polarization (see, e.g., Valdivia et al., 2019; Le Gouellec et al., 2023b). Figure 1: 2D map of the mean gas density obtained on the x\(-\)z plane (left panel) and x\(-\)y plane (right panel). The material in the envelope is accumulated into the equatorial plane under the contraction from gravity, forming the protostellar disk of radius \(\sim\) 100 au around the protostar with the mean gas density of \(n_{\rm H}\sim 10^{8}-5\times 10^{9}\) cm\({}^{-3}\). The gas density in the envelope is lower, \(n_{\rm H}\sim 10^{5}-10^{6}\) cm\({}^{-3}\). Figure 2: The mean orientation of magnetic fields on the x\(-\)z plane (left panel) and x\(-\)y plane (right panel) (black segments) overplotted over the density-weighted magnetic field strength (color code) integrated along the y and z-direction, respectively. In the x\(-\)z plane, the magnetic field changes from the hourglass-shaped field along the z direction to the pinched field perpendicular to the outflow in the center, induced by the development of the toroidal field from the fast rotation of the disk. On the x\(-\)y plane, the magnetic field follows the spiral pattern driven by the infall and accretion motion of material from the envelope onto the central protostar. The magnetic field strength increases continuously from \(B\sim 100\mu G\) in the envelope to \(B\sim 5\times 10^{3}\mu G\) in the disk due to the magnetic flux freezing. ### Monte-Carlo Radiative Transfer and Dust temperature calculation The first step of our post-processing is to perform the radiative transfer of photons emitted from the sink particle using the Monte-Carlo technique introduced by Lucy (1999). POLARIS simulates the scattering, absorption, and dust emission processes for all dust grains in the MHD simulation box. For each absorption event, POLARIS enforces energy conservation between the absorbed stellar radiation and the re-emitted IR radiation of grains by immediately correcting the grain temperature and sending one lower-energy photon into the grid space. The energy density distribution inside each cell is updated continuously as photons enter, interact with dust grains, and leave the cell, until the end of the simulation. POLARIS stores the direction of each photon inside the cell to calculate the anisotropic degree \(\gamma_{\rm rad}\) and the radiative torques required to model the grain alignment by RATs in the later step. After the radiative transfer simulation, we calculate the absorption rate for each grain size using the energy density distribution stored in each cell. The grain temperature is determined by solving the energy conservation between radiative heating and cooling of dust grains.
The average dust temperature \(T_{\rm d}\) inside the cell is calculated by integrating the grain temperature over the grain size distribution, and the gas temperature is given by multiplying \(T_{\rm d}\) by a correlation factor. We consider \(T_{\rm g}=T_{\rm d}\) in our simulation because gas-grain collision is the main source of gas heating in protostellar environments. ### Grain alignment physics in the updated POLARIS Taking into account the complexity of grain alignment physics in protostellar environments (Hoang, 2022; Hoang et al., 2022), we use the updated version of POLARIS introduced by Giang et al. (2022) to model in detail the alignment of grains with \(\mathbf{B}\) in the MHD model of the protostellar core and disk. To understand the role of grain magnetic properties, we consider three types of grains: paramagnetic (PM) grains with the iron fraction \(f_{\rm p}=0.1\), superparamagnetic (SPM) grains with a moderate value of \(N_{\rm cl}=100\) iron atoms/cluster, and SPM grains with the high value of \(N_{\rm cl}=10^{4}\) iron atoms/cluster. We assume the same volume filling factor of iron clusters \(\phi_{\rm sp}=0.1\) for the two types of SPM grains, corresponding to \(\sim 30\%\) of the iron abundance locked inside dust grains in the form of iron clusters. Following the RAT theory, we first calculate the radiative torques acting on each grain size based on the radiation field obtained from the Monte-Carlo radiative transfer simulation (see Lazarian & Hoang, 2007; Hoang & Lazarian, 2008; Reissl et al., 2016). Then we calculate the maximum rotational rate that grains gain by RATs (Hoang & Lazarian, 2014) and select the grain size that rotates at least three times faster than its thermal rotation as the minimum alignment size \(a_{\rm align}\) (Hoang & Lazarian, 2008). The maximum alignment size \(a_{\rm max,JB}^{\rm Lar}\) is determined based on the competition between the Larmor precession and the gas randomization. In detail, grains rotating suprathermally by RATs are only considered to have the magnetic alignment if their Larmor precession timescale is at least ten times smaller than the gas damping timescale (Yang, 2021; Giang et al., 2022). Grains within the alignment range are considered to have fast internal relaxation if their Barnett relaxation timescale is smaller than the gas damping timescale, which defines the size range \(a_{\rm min,\,al}-a_{\rm max,\,al}\) (Hoang, 2022; Hoang et al., 2022). Grains beyond this range will have slow internal relaxation. Taking into account the dependence of the Barnett relaxation on the grain rotational rate, we determine in detail the internal alignment of each grain size in their alignment state at both high-\(J\) and low-\(J\) attractors (see Giang et al. 2022 for detailed calculations). Following the RAT paradigm, grains can have the external alignment with \(\mathbf{B}\) by RATs or MRAT alignment depending on their magnetic properties (Hoang & Lazarian, 2016; Hoang et al., 2022). Giang et al. (2022) determine the external alignment mechanism for each grain size based on the magnetic relaxation parameter \(\delta_{\rm m}\), which characterizes the magnetic relaxation efficiency over the gas randomization. As shown in Figure 3, MRAT will be the major alignment mechanism for grains having \(\delta_{\rm m}>1\), and RATs will play the main role in driving the alignment of small grains having inefficient magnetic relaxation, i.e., \(\delta_{\rm m}<1\).
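The selection logic described above can be summarized in a short sketch (an illustration of the decision rules only; the timescales themselves come from the RAT, Larmor, Barnett, and gas-damping formulas, which are not reproduced here, and the function name is hypothetical):

```python
def classify_grain(omega_rat_over_thermal, tau_larmor, tau_barnett, tau_gas):
    """Decision rules for one grain size, following the criteria in the text:
    - minimum alignment size: RATs spin the grain to >= 3x its thermal rotation,
    - magnetic alignment (a <= a_max,JB^Lar): Larmor precession is at least
      ten times faster than gas damping,
    - fast internal relaxation: Barnett relaxation beats gas damping."""
    suprathermal = omega_rat_over_thermal >= 3.0
    magnetically_aligned = suprathermal and (tau_larmor <= 0.1 * tau_gas)
    fast_internal_relaxation = tau_barnett < tau_gas
    return {"aligned": magnetically_aligned,
            "fast_internal_relaxation": fast_internal_relaxation}
```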
For RAT alignment, we assume about \(\sim 25\%\) of aligned dust grains to be efficiently aligned with \(\mathbf{B}\) at high-\(J\) attractors (or \(f_{\rm high-J}=0.25\)). For MRAT alignment, we consider a higher \(f_{\rm high-J}=0.5\) if grains have \(1<\delta_{\rm m}<10\), and \(f_{\rm high-J}=1\) for grains having very efficient magnetic relaxation with \(\delta_{\rm m}\geq 10\). POLARIS determines the overall alignment degree for each grain size with magnetic fields (Hoang & Lazarian, 2014) by using the Rayleigh reduction factor \(R\) (Greenberg, 1968) given by: \[R=f_{\rm high-J}Q_{\rm X}^{\rm high-J}Q_{\rm J}^{\rm high-J}+(1-f_{\rm high-J})Q_{\rm X}^{\rm low-J}Q_{\rm J}^{\rm low-J}, \tag{3}\] where \(Q_{\rm X}\) and \(Q_{\rm J}\) characterize the internal and external alignment degree. For grains having fast internal relaxation, i.e., grains within \(a_{\rm min,\,al}-a_{\rm max,\,al}\), their internal alignment is perfect if grains align with \(\mathbf{B}\) at high-\(J\) attractors, but it will be imperfect for grains at low-\(J\) attractors due to their internal thermal fluctuations (Purcell, 1979). We consider \(Q_{\rm X}^{\rm high-J}=1\) for the former case and describe \(Q_{\rm X}^{\rm low-J}\) for the latter case by the local thermal equilibrium (TE) Boltzmann distribution (Lazarian & Roberge, 1997). For grains having slow internal relaxation, i.e., grains beyond the range \(a_{\rm min,\,al}-a_{\rm max,\,al}\), they can still have right IA if they are aligned with \(\mathbf{B}\) at high-\(J\) attractors, but grains can have right IA or wrong IA (i.e., grains rotating around their longest axis) if they have the magnetic alignment at low-\(J\) attractors (Hoang & Lazarian, 2009). We characterize the right IA by using the positive value of \(Q_{\rm X}^{\rm high-J}\) (for grains at high-\(J\)) and \(Q_{\rm X}^{\rm low-J}\) (for grains at low-\(J\)), and characterize the wrong IA by using the negative \(Q_{\rm X}^{\rm low-J}\). Figure 3: Illustration of the variation of \(f_{\rm high-J}\) with grain sizes within the alignment range \(a_{\rm align}-a_{\rm max,\,JB}^{\rm Lar}\). The green area determines the size of grains being aligned with \(\mathbf{B}\) by the MRAT mechanism. We consider \(f_{\rm high-J}=1\) for grains having a high magnetic relaxation ratio \(\delta_{\rm m}\geq 10\), and \(f_{\rm high-J}=0.5\) for grains having \(1<\delta_{\rm m}<10\). The red area determines the size range for RAT alignment with \(f_{\rm high-J}=0.25\). The maximum size for grains having \(f_{\rm high-J}=1\) and \(f_{\rm high-J}=0.5\) by MRAT alignment is denoted by \(a_{\rm max,\,JB}^{\rm DG,1}\) and \(a_{\rm max,\,JB}^{\rm DG,0.5}\), respectively. The value of \(Q_{\rm X}\) for grains having slow internal relaxation is controlled by hand in POLARIS due to the difficulty of studying the dynamics of grains without internal relaxation (Hoang and Lazarian, 2009). However, we expect this value to be close to zero due to their weak internal alignment degree. We use a similar setup to Giang et al. (2022), with \(Q_{\rm X}^{\rm high-J}=0.15\) for grains at high-\(J\), and \(Q_{\rm X}^{\rm low-J}=0.05\) for the case of right IA and \(Q_{\rm X}^{\rm low-J}=-0.1\) for the case of wrong IA at low-\(J\) attractors, to study the effect of grains with slow internal relaxation on the observed polarization signal, named as models rIA and wIA.
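These prescriptions combine into the Rayleigh reduction factor as in the short sketch below (illustrative only; the function names are ours, not POLARIS routines, and the example Q values follow the rIA/wIA settings quoted above):

```python
def f_high_j(delta_m):
    """f_high-J prescription: RAT alignment for delta_m < 1, MRAT otherwise."""
    if delta_m >= 10.0:
        return 1.0    # very efficient magnetic relaxation
    if delta_m > 1.0:
        return 0.5    # moderate magnetic relaxation
    return 0.25       # RAT alignment only

def rayleigh_reduction(f_hj, qx_high, qj_high, qx_low, qj_low):
    """Rayleigh reduction factor R of Eq. (3)."""
    return f_hj * qx_high * qj_high + (1.0 - f_hj) * qx_low * qj_low

# Example: a grain with slow internal relaxation, delta_m = 5, perfect external
# alignment (Q_J = 1), and wrong IA at low-J (model wIA):
R = rayleigh_reduction(f_high_j(5.0), qx_high=0.15, qj_high=1.0,
                       qx_low=-0.1, qj_low=1.0)
```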
For the external alignment between \(\mathbf{J}\) and \(\mathbf{B}\), we consider perfect alignment for grains at both high and low-\(J\) attractors, or \(Q_{\rm J}^{\rm high-J}=Q_{\rm J}^{\rm low-J}=1\). The adoption of \(Q_{\rm J}^{\rm high-J}=1\) for the high-\(J\) state is reasonable due to the grain suprathermal rotation. In terms of \(Q_{\rm J}^{\rm low-J}=1\), this adoption may overestimate the realistic external alignment efficiency of grains at low-\(J\). However, since the net polarization signal is dominated by the emission from grains at high-\(J\), the overestimated external alignment efficiency of grains at low-\(J\) may not significantly affect our final results. Therefore, it is acceptable to adopt \(Q_{\rm J}^{\rm low-J}=1\). Finally, knowing the internal and external alignment degrees of all grain sizes within the alignment range, we can thus determine the overall alignment degree of grains with \(\mathbf{B}\) using Equation (3). The Rayleigh reduction factor \(R\) will be zero for grains beyond the range of alignment. The summary of parameters used in our model is given in Table 1. Besides the two models of grain alignment rIA and wIA described above, we also consider the ideal case in which all grains with \(a>a_{\rm align}\) have perfect alignment with \(\mathbf{B}\), named model PA. By comparing model PA with models rIA and wIA, one can thus separate the effect of turbulence and magnetic field geometry from the grain alignment efficiency, which helps to clarify the contribution of each factor to the dust polarization. The summary of our model names and their setups is shown in Table 2. ### Polarized Radiative Transfer of Stokes Parameters Given the alignment degree and alignment direction of all grain sizes with the ambient magnetic field, we finally perform the synthetic observation of dust polarization by solving the polarized radiative transfer of Stokes parameters (Reissl et al., 2016). We place the plane detector with \(256\times 256\) pixels at 100 pc from the source to observe the envelope on the 2500 au scale and then the disk on the 500 au scale. Each pixel on the detector covers an area with a side length of \(\Delta x=9.4\) au when observing the full map on the 2500 au scale and \(\Delta x=1.95\) au when zooming onto the protostellar disk. We perform the multi-wavelength observations with \(\lambda=2\) mm, \(870\,\mu\)m, \(450\,\mu\)m, and \(250\,\mu\)m with different inclination angles from the face-on to the edge-on direction to understand behaviors of dust polarization at different wavelengths along different lines of sight (LOS). The polarization degree in terms of percentage \(p(\%)\) in each pixel is given by: \[p(\%)=\frac{\sqrt{Q^{2}+U^{2}}}{I}\times 100\%, \tag{4}\] and the polarization angle \(\psi\) in units of radians is: \[\psi=\frac{1}{2}\arctan\frac{U}{Q} \tag{5}\] where \(I\) is the first Stokes parameter describing the thermal intensity, and \(Q\) and \(U\) describe the linear polarization states of dust emission. We calculate the polarization angle dispersion function \(S\) to study the effect of B-field tangling on the observed polarization map. For each cell at position \(x\), we calculate the difference in \(\psi\) between this cell and its neighbors within the sphere of lag \(\delta\), given by (Alina et al., 2016): \[S(x,\delta)=\sqrt{\frac{1}{N}\sum_{i=1}^{N}[\psi(x+\delta_{i})-\psi(x)]^{2}}, \tag{6}\] where \(N\) is the total number of cells inside the spherical window.
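The observable quantities in Eqs. (4)-(6) can be evaluated from the synthetic Stokes maps as in the following sketch (a minimal NumPy illustration, not POLARIS output handling; the circular pixel window and the angle-wrapping step are our assumptions):

```python
import numpy as np

def polarization_degree(I, Q, U):
    """Eq. (4): polarization degree in percent."""
    return np.sqrt(Q**2 + U**2) / I * 100.0

def polarization_angle(Q, U):
    """Eq. (5): polarization angle in radians (arctan2 preserves the quadrant)."""
    return 0.5 * np.arctan2(U, Q)

def dispersion_function(psi, i, j, lag):
    """Eq. (6): dispersion S at pixel (i, j) over neighbors within `lag` pixels."""
    ny, nx = psi.shape
    sq_diffs = []
    for di in range(-lag, lag + 1):
        for dj in range(-lag, lag + 1):
            if (di == 0 and dj == 0) or di**2 + dj**2 > lag**2:
                continue
            ii, jj = i + di, j + dj
            if 0 <= ii < ny and 0 <= jj < nx:
                d = psi[ii, jj] - psi[i, j]
                # wrap the difference into (-pi/2, pi/2]; not explicit in Eq. (6)
                d = (d + np.pi / 2) % np.pi - np.pi / 2
                sq_diffs.append(d**2)
    return np.sqrt(np.mean(sq_diffs)) if sq_diffs else 0.0
```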
We choose \(\delta=4\Delta_{\rm x}\) with \(\Delta_{x}\) the region covered by one pixel inside the detector plane to understand the effect of B-field tangling in cases of very high resolution. ## 4 Numerical Results for Grain Alignment In this section, we first show the distribution of the radiation field strength and dust temperature on the x\(-\)y plane. We then show results about grain alignment with \(a_{\rm max}=100\,\mu\)m for PM grains and SPM grains from Section 4.2 to Section 4.4. Results of grain alignment with different maximum grain sizes from \(a_{\rm max}=1\,\mu\)m to \(a_{\rm max}=100\,\mu\)m are shown in Appendix A. \begin{table} \begin{tabular}{l l l} \hline Quantity & Symbol & Value \\ \hline \multicolumn{3}{c}{**Radiation sources**} \\ \hline Stellar radius & \(R_{\rm star}\) & \(2.5R_{\odot}\) \\ Effective temperature & \(T_{\rm star}\) & \(9707\) K \\ Stellar luminosity & \(L_{\rm star}\) & \(50L_{\odot}\) \\ \hline \multicolumn{3}{c}{**Dust model**} \\ \hline Grain axial ratio & \(s\) & 0.5 \\ Dust-to-gas mass ratio & \(\eta\) & 0.01 \\ Grain size distribution & \({\rm dn}/{\rm da}\) & \(Ca^{-3.5}\) \\ Minimum grain size & \(a_{\rm min}\) & \(3.5\)Å \\ Maximum grain size & \(a_{\rm max}\) & \(1,\,5,10,20,50,100\mu m\) \\ Fraction of silicate & & \(67.5\%\) \\ Fraction of graphite & & \(32.5\%\) \\ \hline \multicolumn{3}{c}{**Grain magnetic properties**} \\ \hline Iron fraction & \(f_{\rm p}\) & 0.1 \\ Iron atom/cluster & \(N_{\rm cl}\) & 100 \(\&\)\(10^{4}\) \\ Volume filling factor & \(\phi_{\rm up}\) & 0.1 \\ of iron clusters & & \\ \hline \multicolumn{3}{c}{**Internal alignment degree**} \\ \hline \multicolumn{3}{c}{Grains with fast internal relaxation} \\ \hline High-\(J\) attractors & \(Q_{\rm X}^{\rm high-J}\) & 1 \\ Low-\(J\) attractors & \(Q_{\rm X}^{\rm low-J}\) & TE Boltzmann distribution \\ \hline \multicolumn{3}{c}{**External alignment degree**} \\ \hline High-\(J\) attractors & \(Q_{\rm J}^{\rm high-J}\) & 1 \\ Low-\(J\) attractors & \(Q_{\rm J}^{\rm low-J}\) & 1 \\ \hline \end{tabular} \end{table} Table 1: Setup for the radiation field and dust model in POLARIS ### Radiation field and dust temperature The left to right panel of Figure 4 shows the spatial distribution of the radiation field strength \(U\), the anisotropic degree \(\gamma_{\rm rad}\), and the dust temperature \(T_{\rm d}\) on the x\(-\)y plane, in which each quantity is weighted with gas density along the \(z\)-direction. The radiation field strength is strongest in the center where the protostar forms and decreases outward due to the increasing extinction from surrounding dust grains. The radiation field is highly isotropic, \(\gamma_{\rm rad}\leq 0.2\), within the inner 100 au due to the strong scattering between stellar radiation and dust grains inside the dense disk. Then it will become highly anisotropic when moving outward, i.e., \(\gamma_{\rm rad}\sim 0.8\), due to weak interactions of thermal emission from warm grains in the center with \(T_{\rm d}\geq 100\) K with colder grains in the envelope with \(T_{\rm d}\sim 40\) K. ### Grain Alignment Size Range Figure 5 shows the map of the weight-density minimum alignment size \(a_{\rm align}\) on the x\(-\)y plane obtained in the full 2500 au scale (left panel) and in the inner 500 au region (right panel). Around the protostar, sub-micron grains of \(a_{\rm align}\sim 0.2\,\mu\)m (right panel) can be aligned with \(\mathbf{B}\)due to the efficient RATs in the strong stellar radiation field. 
The alignment size then increases quickly to \(a_{\rm align}\sim 1-8\,\mu\)m inside the disk (right panel) and decreases again to \(a_{\rm align}\sim 0.1-0.8\,\mu\)m on the envelope scale (left panel). The reduced RAT efficiency in the disk is caused by the attenuation of the radiation field strength, the increased isotropy of the radiation field, and the enhanced gas damping by the high gas density. In the protostellar envelope, more sub-micron grains can rotate suprathermally and achieve magnetic alignment by RATs due to the increased anisotropy of the radiation field and the decreased gas damping. Figure 6 shows the density-weighted maximum alignment size \(a_{\rm max,JB}^{\rm Lar}\) calculated on the x\(-\)y plane for PM grains (left panel) and SPM grains with \(N_{\rm cl}=10^{4}\) (right panel). For both PM and SPM grains, one can clearly see that large grains inside the disk are harder to align with \(\mathbf{B}\) than grains elsewhere due to the strong gas randomization of the Larmor precession. However, grains with more embedded iron inclusions are more likely to align with \(\mathbf{B}\) due to the enhanced Larmor precession from their larger magnetic susceptibility. For example, PM grains are not aligned with \(\mathbf{B}\) in the disk, i.e., \(a_{\rm max,JB}<a_{\rm align}\) (Figure 5), and only grains below \(10\,\mu\)m can have the magnetic alignment in the envelope. In contrast, all SPM grains with \(N_{\rm cl}=10^{4}\) up to \(a_{\rm max}=100\,\mu\)m beyond the disk can be aligned with \(\mathbf{B}\). Inside the inner 100 au, grains up to \(30\,\mu\)m can have the magnetic alignment, but larger grains have random orientations because of their slow Larmor precession driven by their higher moment of inertia. ### Grain sizes with fast internal relaxation Figures 7 and 8 show the density-weighted maximum size at which grains have fast internal relaxation for PM grains (left panel) and SPM grains (right panel). The first figure is for grains at high-\(J\) attractors \(a_{\rm max,al}^{\rm high-J}\) and the second is for grains at low-\(J\) attractors \(a_{\rm max,al}^{\rm low-J}\). One can see that at both high and low-\(J\) attractors, large grains in the protostellar disk are less likely to have fast internal relaxation than grains in the envelope due to the stronger gas randomization during their internal alignment stage. The maximum size for fast internal relaxation increases for grains with more embedded iron inclusions due to the enhanced Barnett relaxation by higher magnetic susceptibility. Furthermore, this size significantly increases if grains can be aligned with \(\mathbf{B}\) at high-\(J\) attractors. For example, beyond \(\sim 200\) au, for SPM grains with \(N_{\rm cl}=10^{4}\), only small grains below \(1-3\,\mu\)m can have fast internal relaxation at low-\(J\) attractors (Figure 8, right panel), while this value can exceed \(100\,\mu\)m for grains at high-\(J\) attractors (Figure 7, right panel). \begin{table} \begin{tabular}{c c c c c c} \hline **Model name** & **Alignment range** & **Slow internal relaxation** & \(Q_{\rm X}^{\rm high-J}\) & \(Q_{\rm X}^{\rm low-J}\) & \(f_{\rm high-J}\) \\ \hline PA & \(a_{\rm align}-a_{\rm max}\) & No & – & – & 1 \\ rIA & \(a_{\rm align}-a_{\rm max,JB}^{\rm Lar}\) & Yes & 0.15 & 0.05 & Depends on grain magnetic properties \\ wIA & \(a_{\rm align}-a_{\rm max,JB}^{\rm Lar}\) & Yes & 0.15 & -0.1 & Depends on grain magnetic properties \\ \hline \end{tabular} Note: Values of \(Q_{\rm X}^{\rm low-J}\) and \(Q_{\rm X}^{\rm high-J}\) here are for the internal alignment degree of grains with slow internal relaxation.
\end{table} Table 2: Parameters of the grain alignment model Figure 4: Spatial distribution of the density-weighted radiation field strength \(U\) (left panel), the mean anisotropy degree \(\gamma_{\rm rad}\) (central panel), and the mean dust temperature \(T_{\rm d}\) (right panel) on the x\(-\)y plane. The radiation field strength decreases continuously outward due to the extinction of surrounding dust grains, inducing the decrease of dust temperature from \(T_{\rm d}\geq 100\) K around the protostar to \(T_{\rm d}\sim 20\) K at 2500 au. The radiation field is highly isotropic with \(\gamma_{\rm rad}\sim 0.1-0.4\) inside the disk, resulting from the strong interaction between dust grains and the stellar radiation field, but it becomes more anisotropic when moving outward, i.e., \(\gamma_{\rm rad}\sim 0.9\), due to the weak interaction between thermal dust emission at infrared wavelengths and the cold dust grains there. The huge difference in the value of \(a_{\rm max,al}\) between high and low-\(J\) is caused by the difference in the rotation rate. In particular, grains rotating suprathermally experience faster Barnett relaxation and are thus more likely to have fast internal relaxation in dense environments. That also explains why we see the slight increase of \(a_{\rm max,al}^{\rm high-J}\) toward the strong stellar radiation source in the case of high-\(J\) attractors (Figure 7). ### Effective grain sizes of MRAT alignment Figure 9 shows the density-weighted maximum size for which 50% of grains can have perfect alignment with \(\mathbf{B}\) by MRAT alignment, \(a_{\rm max,IB}^{\rm DG,0.5}\), for PM (left panel) and SPM grains (right panel). Obviously, RAT is the major mechanism driving the alignment of PM grains in protostellar environments, i.e., \(a_{\rm max,IB}^{\rm DG,0.5}<<a_{\rm align}\) (Figure 5), owing to their weak magnetic relaxation strength. On the contrary, all SPM grains with \(N_{\rm cl}=10^{4}\) can have perfect alignment with magnetic fields by MRAT alignment in the envelope as a result of the enhanced magnetic relaxation by iron inclusions. Figure 5: The density-weighted minimum alignment size \(a_{\rm align}\) on the x–y plane obtained in the full 2500 au scale (left panel) and in the inner 500 au region (right panel), assuming \(a_{\rm max}=100\,\mu\)m. The contour line shows the boundary of \(a_{\rm align}=0.16\,\mu\)m in the envelope, 0.45 \(\mu\)m in the inner part of the envelope, and 2.7 \(\mu\)m found in the disk. Sub-micron grains of \(a_{\rm align}\sim 0.2\,\mu\)m around the protostar can be aligned with \(\mathbf{B}\) due to the efficient RATs in the strong stellar radiation field. However, this value increases quickly to \(a_{\rm align}\sim 8\,\mu\)m inside the disk due to the inefficient RATs induced by the high gas damping, then it decreases again to \(a_{\rm align}\sim 0.7\,\mu\)m in the envelope because of the reduced gas density there. Figure 6: Distribution of the maximum alignment size \(a_{\rm max,JB}^{\rm Lar}\) weighted with gas density on the x–y plane for PM grains (left panel) and SPM grains with \(N_{\rm cl}=10^{4}\) (right panel). The maximum alignment size generally reduces toward the center due to the significant increase in gas randomization.
Large PM grains of \(a>10\,\mu\)m basically cannot be aligned with \(\mathbf{B}\) in the entire protostellar core due to the weak Larmor precession, but this problem can be overcome if grains are SPM with a high amount of iron inclusions. However, within the inner \(\sim 200\) au region, MRAT alignment is the major alignment mechanism only for micron-sized grains below \(a<20\,\mu\)m. For larger grains, RATs play the main role in driving the magnetic alignment due to the reduced magnetic relaxation caused by the larger moment of inertia of VLGs. ## 5 Effects of iron inclusions and grain growth on synthetic polarization map We next move to analyze the effect of iron inclusions and maximum grain size on the inferred magnetic field map from dust polarization. We place the detector at the inclination angle \(\Theta=45^{\circ}\) (to the North direction) and observe the object at an optically thin wavelength \(\lambda=2\) mm. We first analyze results from the full-scale observation of 2500 au in Section 5.1, then from the zoom-in scale of 500 au around the protostar in Section 5.2. ### Protostellar envelope Figure 10 shows the effects of iron inclusions and maximum grain size on the polarization degree and the inferred magnetic field direction obtained in the entire protostellar core. In each column, we fix the maximum grain size and show the comparison between model PA (upper row) and model rIA for PM grains (second row) and SPM grains with \(N_{\rm cl}=100\) (third row) and \(N_{\rm cl}=10^{4}\) (fourth row). For each dust model, we show results for different maximum grain sizes of \(a_{\rm max}=1\,\mu\)m, \(5\,\mu\)m, \(10\,\mu\)m, \(50\,\mu\)m, and \(a_{\rm max}=100\,\mu\)m, from left to right, respectively. The color code shows the polarization degree in units of percentage \(p\) (%), and white segments show the inferred magnetic field direction obtained by rotating the polarization vector \(\mathbf{P}\) by \(90^{\circ}\). Figure 7: Spatial distribution of the density-weighted maximum size at which grains have fast internal relaxation at high-\(J\) attractors \(a_{\rm max,al}^{\rm high-J}\) for PM grains (left panel) and SPM grains with \(N_{\rm cl}=10^{4}\) (right panel) on the x\(-\)y plane. Generally, large grains inside the disk tend to have slow internal relaxation, i.e., smaller \(a_{\rm max,al}^{\rm high-J}\), due to the strong gas randomization. The size range of grains having fast internal relaxation is extended to larger sizes if grains have higher magnetic susceptibility. However, it is insufficient to cause large SPM grains above \(20\,\mu\)m inside the disk to have fast internal relaxation. Figure 8: Similar results as Figure 7 but for grains aligning with \(\mathbf{B}\) at low-\(J\) attractors \(a_{\rm max,al}^{\rm low-J}\). The size range of grains having fast internal relaxation reduces toward the center region and can extend to larger sizes with an increasing amount of iron clusters locked inside dust grains. However, all micron-sized grains above \(1\,\mu\)m tend to have slow internal relaxation at low-\(J\) attractors in the entire protostellar core regardless of the grain magnetic properties due to their slow rotation in space. #### 5.1.1 Effects of grain magnetic properties For \(a_{\rm max}=1\,\mu\)m (first column), the inferred magnetic field from model PA follows the spiral pattern in the anticlockwise direction driven by the gas infall inside the rotating collapsing core.
The map of polarization degree also shows the spiral structure with \(p\sim 5\%\) inside the arm along the South-West and North-East direction and \(p\sim 10-15\%\) in the remaining parts. The complex map of polarization degree in the protostellar envelope is a result of the projection effect of the hourglass-shaped magnetic fields on the POS with the inclination angle \(\Theta=45^{\circ}\). In model rIA (second to fourth rows, first column), polarized dust emission from both PM and SPM grains can generally recover the spiral \(\mathbf{B}\)-field pattern because all grains up to \(1\,\mu\)m can be aligned with \(\mathbf{B}\) on the envelope scale (Figure 11). However, PM grains produce a uniformly low \(p\sim 1-5\%\) in the entire envelope because they are aligned with \(\mathbf{B}\) by RATs and large grains above \(0.5\,\mu\)m have inefficient internal alignment (IA) by slow internal relaxation (Figures 12, 13, upper left panel). In contrast, SPM grains can produce higher \(p\geq 10\%\) and reflect well the spiral pattern in the polarization degree map as model PA owing to their perfect alignment with \(\mathbf{B}\) by efficient MRAT alignment (Figures 12, 13, and 14, center and lower left panels). #### 5.1.2 Effects of maximum grain size In the upper panels for model PA (first row), one can see the clear increase of the polarization fraction from \(p\sim 10-15\%\) to \(p\sim 15-40\%\) when the maximum grain size increases from \(a_{\rm max}=1\,\mu\)m to \(a_{\rm max}=100\,\mu\)m. The positive correlation between \(p\) and \(a_{\rm max}\) for model PA is due to the extension of the alignment range to larger sizes. However, when we take into account the realistic alignment of PM grains (second row), the rise of \(p\) with \(a_{\rm max}\) only happens when grains grow from \(a_{\rm max}=1\,\mu\)m to \(a_{\rm max}=10\,\mu\)m. Further growth in grain size, in contrast, weakens the observed polarization fraction due to the misalignment of VLGs above \(10\,\mu\)m in the envelope (Figure 11, upper panels). For SPM grains with low \(N_{\rm cl}=100\), \(p\) is higher than in the case of PM grains because VLGs can be aligned with \(\mathbf{B}\), but the dependence of \(p\) on \(a_{\rm max}\) is still the same because large grains still have inefficient IA by slow internal relaxation (Figure 13, center row) and are aligned with \(\mathbf{B}\) by RATs (Figure 14, center row). For SPM grains with high \(N_{\rm cl}=10^{4}\), one can obtain the continuous increase of \(p\) with \(a_{\rm max}\) as in model PA because all grains above \(a_{\rm align}\) can achieve perfect magnetic alignment by efficient MRAT alignment here (Figures 12, 13, lower panels). The effects of iron inclusions and maximum grain size on the inferred magnetic fields on the thousands-of-au scale from model wIA are similar to those from model rIA (see the maps in Appendix C). This is because polarized dust emission from wrongly aligned dust grains in the envelope is subdominant to the polarized dust emission from grains with the right IA there. ### Protostellar disk #### 5.2.1 Model rIA Figure 11 shows similar results to Figure 10, but zoomed into the central 500 au region, which reveals dust polarization in the protostellar disk. In model PA with \(a_{\rm max}=1\,\mu\)m, the inferred magnetic field from dust polarization continues to follow the spiral pattern driven by the accretion of material from the inner part of the envelope onto the disk.
The obtained polarization fraction is quite low of \(p<5\%\) because of the narrow alignment range there (\(a_{\rm align}\sim 0.95\,\mu\)m, Figure 12, left panel). When grains grow to above \(a_{\rm max}\geq 5\,\mu\)m, \(p\) increases due to the extension of the alignment range to large sizes, Figure 9: Spatial distribution of the maximum size that 50% of grains can be aligned with \(\mathbf{B}\) by MRAT alignment \(a_{\rm max,IB}^{\rm DG,0,5}\) for PM grains (left panel) and SPM grains with \(N_{\rm cl}=10^{4}\) (right panel). Visually, PM grains are aligned with \(\mathbf{B}\) by RATs due to the weak magnetic relaxation, i.e., \(a_{\rm max,IB}^{\rm DG,0,5}\sim 0.005\,\mu\)m, but SPM grains can be aligned via the MRAT alignment due to the enhanced magnetic relaxation by iron inclusions. However, the perfect alignment of VLGs cannot happen in the disk even if grains contain a high amount of iron clusters due to the high gas randomization there. from \(p\sim 15\%\) for \(a_{\rm max}=5\,\mu\)m to \(p\sim 30-35\%\) for \(a_{\rm max}=100\)\(\mu\)m. The inferred \(\mathbf{B}\)-fields are similar to the case of \(a_{\rm max}=1\,\mu\)m. But moving toward the disk scale, \(\mathbf{B}\) vectors change to be pinched along the disk minor axis, which could be explained by the projection effect of \(\mathbf{B}\)-fields on the POS. The reason why we do not obtain this feature in the case of \(a_{\rm max}=1\,\mu\)m is because we do not have enough tracers inside the disk need to reflect well the change of \(\mathbf{B}\)-fields around the protostar. Taking into account the realistic model of grain alignment, one can see that for PM and SPM grains with low \(N_{\rm cl}=100\), their dust polarization from all models of \(a_{\rm max}\) only can reflect the spiral \(\mathbf{B}\)-field pattern beyond \(\sim 200\) au. The deviation of \(\mathbf{B}\)-fields inside the disk is not revealed because of the alignment loss of micron-sized grains within \(\sim 200\) au around the protostar (see Figure 12 and Figure 13, upper and central panels). The polarization degree obtained in \(\sim 500\) au region is low of \(p<3\%\) and \(p\) clearly decreases with increasing maximum grain sizes as a result of increasing amount of grains with inefficient IA by slow internal relaxation (Figures 14, 15, upper panels). For SPM grains with high \(N_{\rm cl}=10^{4}\), polarized dust emission from all models of \(a_{\rm max}\) clearly reveals the change from the spiral \(\mathbf{B}\)-fields pattern beyond \(\sim 200\) au to the pinched field along the disk minor axis as found in model PA. It is because micron-sized grains having high amounts of iron inclusions can have magnetic alignment in the disk scale by their fast Larmor precession (Figure 13, upper panels). However, their polarization degree is lower than model PA, i.e., \(p\sim 2.5-20\%\), because dust grains are not able to have perfect alignment with \(\mathbf{B}\) inside the disk (see Appendix A). In addition, the increase in maximum grain size from \(1\,\mu\)m to \(10\,\mu\)m can help to increase \(p\) due to the extension of the alignment range. However, Figure 10: The inferred magnetic field map obtained in the entire protostellar core from dust polarization at 2mm with the inclination angle of \(45^{\circ}\). The color code shows the polarization degree \(p\) (%) and white segments show magnetic field orientation obtained by rotating the polarization vector \(\mathbf{P}\) by \(90^{\circ}\). 
The first row shows results for model PA, while the second to fourth rows show results for model rIA of PM grains and SPM grains with \(N_{\rm cl}=100\) and \(N_{\rm cl}=10^{4}\), respectively. For each dust model, we show results with different maximum grain sizes, from \(a_{\rm max}=1\,\mu\)m on the left to \(a_{\rm max}=100\,\mu\)m on the right. Dust polarization from both model PA and rIA reveals the spiral magnetic field pattern driven by the gas infalling motion inside the rotating, collapsing core. However, in terms of polarization degree, model rIA for PM grains reveals much lower \(p\) (%) compared with model PA and model rIA for SPM grains due to their weak alignment with \(\mathbf{B}\). Besides, for grains with low iron inclusions, the polarization degree decreases with increasing \(a_{\rm max}\) owing to the increased amount of grains with inefficient alignment. But for SPM grains with \(N_{\rm cl}=10^{4}\), \(p\) (%) can increase with \(a_{\rm max}\) as in model PA due to the extension of the efficient alignment range to larger sizes. beyond \(a_{\rm max}=10\,\mu\)m, further grain growth will suppress polarized dust emission due to the alignment loss of VLGs and the presence of micron-sized grains with inefficient internal and external alignment with magnetic fields. #### 5.2.2 Model wIA Figure 12 shows the effects of model wIA, iron inclusions (from top to bottom), and maximum grain size (from left to right) on the inferred magnetic field morphology in the inner 500 au region. Similar to model rIA, only SPM grains with high \(N_{\rm cl}=10^{4}\) can produce a detectable polarization degree above 1% in this area and are able to trace the change of magnetic fields inside the protostellar disk as model PA. Grain growth inside the inner 500 au region helps to increase \(p\) for grains having a high amount of iron inclusions but decreases \(p\) for grains with low magnetic susceptibility. One can see that for model wIA, the inferred magnetic field from SPM grains with low \(N_{\rm cl}=100\) is more distorted than the results found in model rIA, with some areas showing \(B\) vectors perpendicular to the large-scale spiral pattern, e.g., at the edge of the disk in the West direction (center panel). Furthermore, magnetic fields within 100 au inferred from dust polarization of SPM grains with high \(N_{\rm cl}=10^{4}\) are pinched along the disk major axis, which differs by \(90^{\circ}\) from the field obtained from model PA and rIA. The change in the inferred magnetic fields in the above areas results from the wrong interpretation of the polarization signal originating from grains with the wrong IA, whose polarization vectors \(P\) already show the magnetic field direction. That explains why rotating \(P\) in these regions by \(90^{\circ}\) yields a \(90^{\circ}\) difference in \(B\) compared with the results from model PA and model rIA. The effect of wrongly aligned dust grains on dust polarization becomes prominent once grains grow above 5 \(\mu\)m because micron-sized grains always tend to have slow internal relaxation and to be aligned with \(B\) at low-\(J\) attractors inside the disk by the strong gas randomization around the protostar. Figure 11: Effect of maximum grain size on the polarization degree map and the inferred magnetic field orientation from dust polarization obtained in the inner 500 au region, from model PA (upper panels) and rIA for PM and SPM grains (second to fourth rows).
In model PA, for \(a_{\rm max}\geq 5\,\mu\)m, dust polarization reveals the complex magnetic field map with \(B\) vectors change from the spiral pattern to be concentric along the disk minor axis due to the projection effect of \(B\)-fields on the POS. In model rIA, only SPM grains with \(N_{\rm cl}=10^{4}\) can reveal a similar magnetic field map as model PA because only they can be aligned with \(B\) in the disk. Grains with lower levels of iron inclusions do not reveal the magnetic field information within 200 au around the protostar because they cannot be aligned with \(B\) there. ## 6 Effects of iron inclusions and grain growth on synthetic dust polarization degree We next move to analyze the effect of iron inclusions and grain growth on the properties of polarized dust emission. The first section is to study on the variation of the polarization degree in unit of percentage \(p(\%)\) with intensity normalized to the maximum intensity \(I/I_{\rm max}\), the second section is for the variation of polarization angle dispersion function \(S\) in unit of degrees with \(I/I_{\rm max}\), and the last section is for the quantity \(p\times S\) which describes the efficiency of grain alignment with \(I/I_{\rm max}\). ### Effects of grain magnetic properties The left panel of Figure 13 shows the variation of the mean polarization degree \(p(\%)\) (color lines) with \(I/I_{\rm max}\) obtained in the entire protostellar core at \(\lambda=2\) mm for model PA (black line) and model rIA, assuming different grain magnetic properties and \(a_{\rm max}=10\)\(\mu\)m. The boundary of the inner 100 au, 200 au, 400 au, and 1000 au from the protostar are roughly marked in the upper x-axis to distinguish the variation of \(p\) with \(I/I_{\rm max}\) in the envelope (beyond 400au) and disk scale. In model PA, the polarization degree beyond 400 au slightly decreases with increasing intensity, from \(\sim 30\%\) to \(\sim 10\%\), due to the effect of turbulence and the disorganization of \(B\)-fields along the LOS. Moving toward the inner region, \(p\) continues to reduce to \(p\sim 8\%\) at \(\sim 100\) au then increases again to \(\sim 20\%\) owing to the narrower of the alignment range in the dense protostellar disk and the enhanced alignment of small grains by efficient RATs near the protostar, respectively (Figures 5). Taking the realistic model of grain alignment (model rIA) into account, one obtains a similar reduction of \(p\) with increasing \(I/I_{\rm max}\), but the slope of \(p-I\) is much steeper compared with model PA due to the significant reduction of the grain alignment efficiency toward the center. The obtained polarization degree clearly depends on the magnetization of dust grains which directly controls the efficiency of grain alignment with \(B\). For instance, PM grains (red line) show \(p\sim 4\%\) in the envelope because they are only aligned with \(B\) by RATs and almost all of them have inefficient IA by slow internal relaxation. Moving toward the inner region, their polarization degree Figure 12: Same as Figure 11 but for model wIA. Similar to model rIA, only SPM grains with \(N_{\rm cl}=10^{4}\) can reveal the complex orientation of magnetic fields driven by the gas dynamic in the central 500 au region. However, their inferred magnetic field within 200 au is pinched along the disk major axis, which is \(90^{\circ}\) difference from one obtained in model rIA and model PA (Figure 11). 
The incorrectly inferred magnetic fields in the disk from model wIA are caused by the wrong interpretation of polarized dust emission emitted from wrongly aligned dust grains, whose polarization vectors already trace the magnetic field direction, i.e., \(\mathbf{P}\|\mathbf{B}\). Figure 14: Effect of maximum grain size on the variation of \(p(2\mathrm{mm})(\%)\) with \(I/I_{\mathrm{max}}\), assuming \(\Theta=45^{\circ}\). The upper left panel shows results for model PA, the upper right panel is for model rIA with SPM grains having high \(N_{\mathrm{cl}}=10^{4}\), the lower left panel is for SPM grains with low \(N_{\mathrm{cl}}=100\), and the lower right panel is for PM grains. In model PA, \(p\) increases with increasing \(a_{\mathrm{max}}\) due to the extension of the alignment range. However, in model rIA, grain growth in dense environments suppresses the observed degree of polarization due to the increasing amount of large grains with inefficient magnetic alignment. The decrease of \(p\) with increasing \(a_{\mathrm{max}}\) appears clearer for grains containing lower levels of iron inclusions. Figure 13: Left panel: variation of the mean polarization degree \(p(\%)\) with normalized intensity \(I/I_{\mathrm{max}}\) at \(2\mathrm{mm}\) for model PA (black line) and model rIA with different magnetic properties of grains, assuming \(a_{\mathrm{max}}=10\,\mu\mathrm{m}\) and \(\Theta=45^{\circ}\). Right panel: similar results as the left panel but for model wIA. The rough positions of 1000, 400, 200, and 100 au from the protostar are marked on the upper x-axis. In general, the polarization degree obtained from all models tends to decrease with increasing intensity toward the central region. However, for model PA, the depolarization effect is only caused by the narrowing of the alignment range toward the disk (Figure 5). For model rIA and wIA, the reduction of \(p\) with \(I/I_{\mathrm{max}}\) is faster than in model PA due to the reduced grain alignment efficiency in the central region. The value of \(p(\%)\) is smaller and the depolarization appears more prominent with a decreasing amount of iron locked inside dust grains. In addition, model wIA produces slightly lower \(p(\%)\) compared with model rIA due to the additional suppression of polarized dust emission arising from the coexistence of grains with the right and wrong IA.
And \(p\) produced from grains with higher embedded iron inclusions is higher owing to the enhanced magnetic alignment of dust grains. However, one can see that reduction of \(p(\%)\) versus \(I/I_{\rm max}\) is slightly stronger in model wIA, which arises from the self-suppression of dust polarization signal radiating from grains with right and wrong IA in the protostellar core (Figure 12). ### Effects of grain growth The upper left panel of Figure 14 shows the effect of maximum grain size on the variation of \(p-I/I_{\rm max}\) for model PA. As the maximum grain size increases, the overall polarization degree obtained in thousands au scale around the protostar increases due to the extension of the alignment range toward larger sizes. Besides, the grain growth eliminates the effect of alignment range on the values of \(p\) obtained in the inner region. In detail, in a model with \(a_{\rm max}=1\,\mu\)m, the increase of alignment size toward the disk by increasing gas density (Figure A1) induces the decrease of \(p\) from \(\sim 8\%\) at \(\sim 400\) au to \(p\sim 0.7\%\) at \(\sim 100\) au. Then, the decrease of \(a_{\rm align}\) toward the protostar by increasing RATs efficiency (Figure A1) induces the increase of \(p\sim 0.7\%\) at \(\sim 100\) au to \(p\sim 1\%\) at the peak of dust emission. However, as grains grow to \(\sim 10\,\mu\)m, the above decrease and increase of \(p\) with \(I/I_{\rm max}\) becomes less prominent. The obtained polarization degree then becomes a constant in the entire \(\sim 400\) au region if the maximum size exceeds \(\sim 20\,\mu\)m, in which the alignment range is too large to feel the change of \(a_{\rm align}\) with gas density and radiation field strength. The upper right panel of Figure 14 shows similar results as the upper left panel but for the model rIA of SPM grains with high \(N_{\rm cl}=10^{4}\). Beyond \(\sim 400\) au, larger \(a_{\rm max}\) induces higher \(p\) as model PA because SPM grains with high embedded iron clusters can achieve perfect magnetic alignment by efficient MRAT alignment. As increasing intensity toward the center, the polarization degree obtained from all models of \(a_{\rm max}\) decreases as a result of the reduced grain alignment efficiency (Figure 13). The increase of \(a_{\rm max}\) from \(1\,\mu\)m to \(2\,\mu\)m helps to increase \(p\) by about twice the order of magnitude due to the extension of the alignment range. However, the further growing of \(a_{\rm max}\) from \(2\,\mu\)m to \(100\,\mu\)m does not clearly induce the increase of \(p\) as model PA because the alignment range now covers large grains with slow internal relaxation (Figure A3 and A4). Besides, as grains grow to above \(50\,\mu\)m, VLGs within 100 au are not able to be aligned with \(\mathbf{B}\) (Figure A2), inducing the slight reduction of \(p\) with \(a_{\rm max}\) (blue line). The lower panels of Figure 14 show results for model rIA with SPM grains with low \(N_{\rm cl}=100\) (left panel) and with PM grains (right panel). Obviously, the depolarization effect induced by the reduced grain alignment efficiency appears in all models of \(a_{\rm max}\), and it clearly becomes more prominent with increasing \(a_{\rm max}\). The suppression of grain growth on \(p\) is stronger for SPM grains with lower \(N_{\rm cl}\) because of the dominance of large grains with slow internal relaxation. And for PM grains, it is caused by the alignment loss of VLGs around the protostar. 
For example, for SPM grains with low \(N_{\rm cl}=100\), \(p\) at \(\sim 200\) au will decrease by a factor of \(\sim 2\), from \(p\sim 2\%\) for \(a_{\rm max}=1\,\mu\)m to \(p\sim 1\%\) for \(a_{\rm max}=100\,\mu\)m. For PM grains, the reduction is much more significant, with a continuous reduction from \(p\sim 0.3\%\) at \(\sim 200\) au for \(a_{\rm max}=1\,\mu\)m to \(p\sim 0.01\%\) if grains grow to \(a_{\rm max}=100\,\mu\)m. ### On the slope of \(p-I\) The slope of the \(p-I\) relation is an important quantity for characterizing the effects of grain alignment, grain properties (e.g., size and shape), and B-fields on the observed polarization fraction. To connect the above factors with the slope of \(p-I\), we separate the \(p-I\) relation at \(\lambda=2\) mm found in Sections 6.1 and 6.2 into three segments, which characterize the areas of \(>500\) au (envelope scale), \(200-500\) au, and \(<200\) au. Then we fit each segment with the power law \(p\sim I^{\alpha}\), in which \(\alpha\) characterizes the slope of the \(p-I\) relation. The fitting is done using the lmfit package in Python (Newville et al., 2021); a minimal numerical sketch of this segment-wise fit is given below. We show the details of our fitting in Appendix F. Figure 15 shows the variation of \(\alpha\) with \(N_{\rm cl}\) for different maximum grain sizes from \(a_{\rm max}=1\,\mu\)m to \(a_{\rm max}=100\,\mu\)m from model rIA. The left panel is for the slope of \(p-I\) beyond \(\sim 500\) au (\(\alpha_{>500}\)), the center panel is for the area within \(\sim 200-500\) au (\(\alpha_{200-500}\)), and the right panel is for the disk scale within 200 au (\(\alpha_{<200}\)). In general, the reduction of \(p\) versus intensity in the entire protostellar core for model rIA is shallower, i.e., less negative \(\alpha\), with increasing \(N_{\rm cl}\) due to the enhanced grain alignment by iron inclusions. Beyond 500 au (left panel), for grains with low \(N_{\rm cl}<10\), \(\alpha_{>500}\) decreases continuously from \(-0.4\) for \(a_{\rm max}=1\,\mu\)m to \(-0.6\) for \(a_{\rm max}=100\,\mu\)m due to the misalignment of grains above \(10\,\mu\)m in the envelope (Figure A2, upper panels). For grains with \(N_{\rm cl}>10\), \(\alpha_{>500}\) becomes less negative because of the increased magnetic alignment degree of grains by their higher magnetic susceptibility (Figure A2, upper panels). The value of \(\alpha_{>500}\) increases with increasing maximum size, from \(\sim-0.4\) for \(a_{\rm max}=1\,\mu\)m to \(\sim-0.25\) for \(a_{\rm max}=100\,\mu\)m, owing to the extension of the alignment range to larger sizes. Since SPM grains are able to have perfect alignment with \(\mathbf{B}\) by MRAT alignment, turbulence and the projection effect of \(\mathbf{B}\)-fields are the major factors controlling the slope of \(p-I\) in this envelope region (Figures 6, 7, 9, right panels). Moving toward the inner \(200-500\) au region (center panel), the value of \(\alpha_{200-500}\) for grains having low \(N_{\rm cl}<10\) decreases significantly to \(-0.6\) for \(a_{\rm max}=1\,\mu\)m and even \(-1\) for \(a_{\rm max}=100\,\mu\)m due to the misalignment of large grains there. Grains with higher \(N_{\rm cl}\geq 10\) still show similar slopes as in the envelope because they still have efficient magnetic alignment there. Inside the disk within \(\sim 200\) au (right panel), \(\alpha_{<200}\) for all values of \(N_{\rm cl}\) and \(a_{\rm max}\) decreases significantly to smaller values due to the reduced grain alignment efficiency in dense environments.
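For concreteness, the following is a minimal sketch of the segment-wise power-law fit described above, using lmfit's Model interface. The intensity cuts used here to stand in for the \(>500\) au, \(200-500\) au, and \(<200\) au zones, the starting values, and the array names are illustrative placeholders, not the actual segmentation of Appendix F.

```python
import numpy as np
from lmfit import Model

def powerlaw(I_norm, A, alpha):
    """p = A * (I/Imax)**alpha, the form fitted in each segment."""
    return A * I_norm**alpha

def fit_segment(I_norm, p, lo, hi):
    """Fit the p-I slope alpha over one segment lo <= I/Imax < hi."""
    sel = (I_norm >= lo) & (I_norm < hi) & np.isfinite(p) & (p > 0)
    model = Model(powerlaw)
    params = model.make_params(A=np.median(p[sel]), alpha=-0.5)
    result = model.fit(p[sel], params, I_norm=I_norm[sel])
    return result.params["alpha"].value, result.params["alpha"].stderr

# Hypothetical intensity cuts standing in for the three radial zones
# (>500 au, 200-500 au, <200 au); the paper's zones are defined by projected radius.
segments = {">500 au": (1e-3, 1e-2), "200-500 au": (1e-2, 1e-1), "<200 au": (1e-1, 1.0)}

# I_norm and p would be the flattened I/Imax and p(%) maps from the synthetic observation:
# for name, (lo, hi) in segments.items():
#     alpha, err = fit_segment(I_norm, p, lo, hi)
#     print(f"{name}: alpha = {alpha:.2f} +/- {err:.2f}")
```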
For grains with low \(N_{\rm cl}<10\), one gets very low \(\alpha\sim-0.8\) up to \(-1\) due to the misalignment of grains with magnetic fields. For grains with higher \(N_{\rm cl}\), the reduced internal alignment efficiency and the conversion from MRAT alignment to RAT results in smaller \(\alpha_{\rm<200}\sim[-0.5,-0.7]\). The slope of \(p-I\) in the inner 500 au becomes steeper, from \(\alpha_{\rm\leq 200}\sim-0.6\) for \(a_{\rm max}=1\,\mu\)m to \(\alpha_{\rm\leq 200}\sim-0.8\) if grains grow to \(a_{\rm max}=100\,\mu\)m, owing to the increasing amount of grains with inefficient alignment with magnetic fields around the protostar. ## 7 Effect of iron inclusions and grain growth on \(S\) and \(p\times S\) ### \(S-I/l_{\rm max}\) relation The left panel of Figure 16 shows the variation of the polarization angle dispersion function \(S\) in unit of degrees with intensity obtained at 2 mm for model PA (black curve) and model rIA, assuming different grain magnetic properties and \(a_{\rm max}=10\,\mu\)m. The angle dispersion function increases from \(S\sim 3\) degrees at \(\sim 1000\) au to peak in the outer edge of the disk of \(\sim 100\) au then decreases toward the central region. The rise of \(S\) in the envelope is caused by the increased turbulence in the contact areas between different infalling gas flows and between infalling and outflowing material. And the decrease of \(S\) in the disk could be understood by the formation of the well-ordered toroidal \(\mathbf{B}\)-fields around the protostar wrapped by the fast rotation of the protostellar disk. However, \(S\) seems not only to depend on the distortion of magnetic fields by turbulence but also depends positively on the grain magnetic properties. Particularly, \(S\) for PM grains only slightly increases from \(S\sim 3\) degrees in the envelope to a constant \(S\sim 5\) degrees inside the disk, but \(S\) for SPM grains with \(N_{\rm cl}=10^{4}\) increases significantly from \(S\sim 3\) degrees in the envelope to the peak at \(S\sim 25\) degrees in the outer edge of the disk and decreases to \(S\sim 10\) degrees near the protostar as model PA. The variation of \(S\) with \(N_{\rm cl}\) is caused by the difference in the area where dust polarization can trace magnetic fields. In detail, PM grains only can trace \(\mathbf{B}\)-fields beyond 500 au (Figure 11, second row), thus, their angle dispersion function only can reflect the turbulence level inside the envelope, which explains why \(S\) only slightly increases by few degrees toward the center. In contrast, polarized dust emission from SPM grains with higher \(N_{\rm cl}\) carries more information on magnetic fields and turbulence levels in the innermost region of the core (Figure 11, third and fourth rows), inducing the clearer reflection of the variation of \(S\) with \(I/I_{\rm max}\) as model PA. The right panel of Figure 16 shows similar results as the left panel but for model wIA. Generally, the polarization angle dispersion function increases with increasing intensity and shows higher values for grains containing higher levels of iron inclusions. However, \(S\) obtained from model wIA is slightly higher than the results from model rIA, whose additional distortion in the polarization pattern is induced by the superposition of polarization signal from grains with right and wrong IA. 
Furthermore, \(S\) obtained inside the disk for SPM grains with \(N_{\rm cl}=10^{4}\) is even higher than in model PA, i.e., a maximum of \(S\sim 33\) degrees for SPM grains versus \(S\sim 20\) degrees for model PA; the excess \(S\) arises from the areas where the polarization vectors \(\mathbf{P}\) change suddenly from \(\mathbf{P}\perp\mathbf{B}\) (emission by grains with the right IA) to \(\mathbf{P}\|\mathbf{B}\) (emission by grains with the wrong IA) (Figure 12, third and fourth rows). Figure 17 shows the comparison of the effect of grain growth on the \(S-I\) relation between model PA (upper left panel) and model rIA. The upper right panel is for SPM grains with high \(N_{\rm cl}=10^{4}\), the lower left panel is for SPM grains with low \(N_{\rm cl}=100\), and the lower right panel is for PM grains. In model PA, \(S\) obtained from models with higher \(a_{\rm max}\) is higher because the \(\mathbf{B}\)-field tangling by turbulence is reflected better with an increasing number of good tracers from perfectly aligned dust grains. For model rIA, increasing \(a_{\rm max}\) also induces higher \(S\) as in model PA for SPM grains. However, this feature only happens on the envelope scale beyond \(\sim 200\) au, where large grains can have efficient magnetic alignment by MRAT alignment. Moving toward the inner 200 au region, \(S\) decreases with increasing \(a_{\rm max}\) due to the reduced area where dust grains can trace \(\mathbf{B}\)-fields and the turbulence level (similar to the case of PM grains, Figure 16). The loss of turbulence information contained in dust polarization becomes stronger for grains containing lower levels of iron inclusions (lower right panel). ### \(p(\%)\times S-I/I_{\rm max}\) relation Figure 18 shows the effect of iron inclusions on the variation of \(p\times S\) with \(I/I_{\rm max}\) at \(\lambda=2\) mm for model PA (black line), model rIA (left panel), and model wIA (right panel), assuming \(a_{\rm max}=10\,\mu\)m. \(p\) here is the polarization fraction, and \(S\) is in units of degrees. In the PA model, \(p\times S\) slightly decreases with \(I/I_{\rm max}\) beyond \(\sim 1000\) au then increases continuously toward the center owing to the significant rise of \(S\) with intensity (Figure 16). The reduction of \(p\times S\) in the envelope implies that turbulence is subdominant to the projection effect of magnetic fields in decreasing the observed polarization degree (Figure 13). The rise of \(p\times S\) in the inner region could be interpreted as a sign of the increasing alignment degree around the protostar. However, taking the realistic model of grain alignment into account, one can see that within the \(\sim 200\) au region, \(p\times S\) clearly reduces toward the center for all models of grain magnetic properties owing to the significant reduction of grain alignment efficiency with increasing gas randomization. The reduction of \(p\times S\) versus \(I/I_{\rm max}\) is more prominent for grains with lower magnetic susceptibility as Figure 15: Variation of the slope of the \(p-I\) relation obtained from model rIA at 2mm: \(\alpha_{>500}\) (left panel), \(\alpha_{200-500}\) (central panel), and \(\alpha_{\leq 200}\) (right panel) as a function of \(a_{\rm max}\) for different grain magnetic properties, assuming \(\Theta=45^{\circ}\). In general, \(\alpha_{>500}\), \(\alpha_{200-500}\), and \(\alpha_{\leq 200}\) increase with an increasing amount of iron inclusions inside grains but tend to decrease with increasing maximum size.
The reduction of \(p-I\) is steeper in the central region due to the higher significant effect of gas randomization on both internal and external alignment of dust grains in the innermost region of the protostellar core. Figure 16: Effect of iron inclusions on the variation of the polarization angle dispersion function \(S\) in unit of degrees with normalized intensity \(I/I_{\rm max}\) for model rA (left panel) and model wIA (right panel), assuming \(a_{\rm max}=10\,\mu\)m and \(\Theta=45^{\circ}\). The result from model PA is shown in the black line for comparison. In general, \(S\) obtained from model PA, rA, and wIA increases with increasing intensity due to the increased turbulence toward the center, then it decreases toward the protostar due to the formation of the well-ordered toroidal field inside the fast rotating protostellar disk (Figure 2). But in model rIA and wIA, \(S\) obtained from aligned dust grains with lower levels of embedded inclusions is smaller than \(S\) obtained from grains with higher embedded iron inclusions owing to the smaller area where they can trace \(\mathbf{B}\)-fields. Besides, the wrong interpretation of the polarization signal from wrong aligned dust grains (model wIA) also can induce extra dispersion in the polarization pattern compared with results in model PA. Figure 17: Effect of maximum grain size on the variation of \(S\) with \(I/I_{\rm max}\) obtained in model PA (upper left panel) and model rIA for SPM grains with high \(N_{\rm cl}=10^{4}\) (upper right panel), SPM grains with low \(N_{\rm cl}=100\) (lower left panel), and PM grains (lower right panel). In model rIA for SPM grains, \(S\) obtained in the envelope increases with increasing \(a_{\rm max}\) due to the increased amount of good tracer of magnetic fields. But in the inner region, \(S\) decreases with increasing \(a_{\rm max}\) because large grains now are not coupled well with fields, which reduces the information of \(\mathbf{B}\)-fields and turbulence carried inside dust polarization. The results are obtained for grains having smaller magnetic susceptibility. Figure 19: Effect of maximum grain size on the variation of \(p\times S\) with normalized intensity obtained from model PA (upper left panel) and model rIA for SPM grains with high \(N_{\rm cl}=10^{4}\) (left panel), SPM grains with \(N_{\rm cl}=100\) (lower left panel), and for PM grains (lower right panel). For SPM grains, \(p\times S\) obtained in the envelope increases with increasing \(a_{\rm max}\) as found in model PA due to the extension of amount of grains with efficient magnetic alignment. However, \(p\times S\) obtained in the inner region decreases with increasing \(a_{\rm max}\) because of the enhanced amount of VLGs which are inefficiently aligned with magnetic fields. The reduction of the grain alignment degree with grain growth becomes more significant for grains having lower levels of iron inclusions. Figure 18: Variation of the grain alignment efficiency determined by the multiple between the polarization fraction \(p\) with angle dispersion function \(S\)\(p\times S\) with normalized intensity \(I/I_{\rm max}\) obtained from model PA (black line), model rIA (color lines, left panel) and wIA (right panel) for different grain magnetic properties, assuming \(a_{\rm max}=10\)\(\mu\)m, and \(\Theta=45^{\circ}\). 
In contrast to the rise of \(p\times S\) with increasing intensity toward the central region as model PA, model rIA and wIA clearly show the reduction of \(p\times S\) with intensity toward the protostar as the evidence of the reduced grain alignment efficiency by increasing gas randomization. The value of \(p\times S\) is smaller and the reduction of \(p\times S\) with \(I/I_{\rm max}\) is stronger for grains containing lower levels of iron inclusions. a consequence of their stronger suppression in \(p\) (Figure 13) and \(S\) (Figure 16). The variation of \(p\times S\) shares similar tendencies between model rIA (left panel) and wIA (right panel), but model wIA reveals a slightly steeper slope of \(p\times S\) with \(I/I_{\rm max}\) because of lower \(p\) induced by the co-existence of grains with right and wrong IA (Figures 13, right panel). Figure 19 shows the difference in the effect of grain growth on the variation of \(p\times S\) with \(I/I_{\rm max}\) for model PA (upper left panel) and model rIA (remaining panels). In model PA, the increase in maximum grain size increases values of \(p\times S\) in the entire \(\sim 1000\) au due to the rise of \(p\) and \(S\) with \(a_{\rm max}\) (Figures 14 and 17, upper left panel). At \(<400\) au, one can see that for \(a_{\rm max}=1\,\mu\)m, \(p\times S\) decreases from \(\sim 400\) au to \(\sim 100\) au as a result of the narrow alignment range by high gas density, then it increases again toward the protostar position owing to the enhanced RAT efficiency. As \(a_{\rm max}\) increases, this feature disappears due to the extension of the alignment range to larger sizes. And starting with \(a_{\rm max}\geq 10\,\mu\)m, \(p\times S\) even rises with intensity within 400 au region, which could be interpreted as the sign of the high alignment degree of dust grains around the protostar (Figure 18, black line). Taking the realistic grain alignment model into account, \(p\times S\) obtained from all models of grain magnetic properties and maximum grain size, generally, always declines toward the center due to the reduction of the grain alignment efficiency to the protostar. Beyond \(\sim 400\) au, \(p\times S\) obtained from the model of SPM grains (upper right and lower left panels) increases with increasing \(a_{\rm max}\) as model PA because they have efficient alignment with \(\mathbf{B}\) by MRAT alignment. But for PM grains, \(p\times S\) declines with \(a_{\rm max}\) because of their inefficient alignment by RATs in the envelope scale. Moving toward the central 400 au region, grain growth induces the steeper reduction of \(p\times S\) with intensity, reflecting well the weak alignment degree of almost aligned dust grains there. The suppression of grain growth on the quantity \(p\times S\) is stronger for grains containing the lower amount of iron inclusions. ## 8 Effects of dichroic extinction polarization Next, we move to study the properties of dust polarization at optically thick wavelengths where dichroic extinction becomes important. Indeed, the disk becomes optically thick at \(450\,\mu\)m (see the optical depth in the inner 500 au in Appendix C1), but here we show results for at \(250\,\mu\)m where the dust polarization shows the clear imprint of dichroic extinction. The results for \(\lambda=450\,\mu\)m are in Appendix C3. 
We first study the effect of iron inclusions and maximum grain size on the inferred magnetic fields from dust polarization within the inner 500 au region in Section 8.1, and then on the variation of \(p\) obtained from Section 8.1 with gas column density \(N_{\rm H}\) in section 8.2. We choose \(N_{\rm H}\) to be the parameter here instead of the intensity of dust emission as in Section 6 because the polarization signal originating from dichroic extinction is more sensitive with gas density than intensity. Thus, it is easier to understand how the density structure of the disk affects the observed polarization fraction. ### Polarization map #### 8.1.1 Model rIA Figure 20 shows the effect of the dust model, iron inclusions, and maximum grain size on the magnetic field map inferred from dust polarization at \(250\,\mu\)m. The color code shows the polarization degree, and white segments show the inferred magnetic field direction by rotating the polarization vector \(\mathbf{P}\) by \(90^{\circ}\), other setups are similar as in Figure 11. In model PA (upper panels), from \(a_{\rm max}=1\,\mu\)m to \(a_{\rm max}=10\,\mu\)m, dust polarization observed at \(250\,\mu\)m for model PA reveals the similar inferred magnetic field map as detected at optically thin 2 mm (Figure 11, first column), with \(\mathbf{B}\) vectors follow the spiral pattern for \(a_{\rm max}=1\,\mu\)m, and the change from the spiral field to the pinched field along the disk minor axis for \(a_{\rm max}\sim 5-10\,\mu\)m. When grains grow to above \(50\,\mu\)m, one gets the similar inferred spiral \(\mathbf{B}\)-fields within 200-500 au as cases of \(a_{\rm max}=1-10\,\mu\)m, but \(\mathbf{B}\) vectors change \(90^{\circ}\) from parallel to be perpendicular to the disk minor axis in the inner 200 au region. The \(90^{\circ}\) flipping of \(\mathbf{B}\) vectors in the disk is caused by the change in the polarization mechanism from dichroic emission to dichroic extinction, which is activated by the presence of VLGs above \(50\,\mu\)m inside the optically thick disk. In the term of the polarization degree, \(p\) slightly declines from the outer edge of the disk at 200 au to the trough at about 100 au from the protostar and increases again toward the center, which corresponds to the transition of the polarization mechanism from dichroic emission (beyond 100 au) to dichroic extinction (within 100 au). For model rIA, PM and SPM grains with low \(N_{\rm cl}=100\) (second and third rows) generally cannot reveal in detail the morphology of magnetic fields inside the disk as obtained at 2mm due to the alignment loss inside the disk. But for SPM grains with high \(N_{\rm cl}=10^{4}\) (fourth row), one can clearly obtain the change of \(\mathbf{B}\)-fields within 500 au driven by gas dynamic and the \(90^{\circ}\) flipping of \(\mathbf{B}\)-fields caused by dichroic extinction as grains grow from \(a_{\rm max}=1-10\,\mu\)m to \(a_{\rm max}=50-100\,\mu\)m as found in model PA. However, one does not clearly see the decrease and increase of \(p\) with the disk structure as model PA because of the weak coupling of VLGs with \(\mathbf{B}\) within 100 au around the protostar. #### 8.1.2 Model wIA Figure 21 shows similar results as Figure 20 but for model wIA. Similar to model rIA, only SPM grains with high \(N_{\rm cl}=10^{4}\) (fourth row) can reveal the change of \(\mathbf{B}\)-fields driven by gas dynamic and grain growth within 500 au. 
However, when grains grow to \(a_{\rm max}=50-100\,\mu\)m, surprisingly, the inferred magnetic field inside the disk does not change from parallel to perpendicular to the disk minor axis as model PA even though VLGs here are large enough to activate the effect of dichroic extinction at submillimeter wavelengths. The polarization signal in this case still comes from the emission of VLGs with the wrong IA (Figure 12, lower right panel), which could be explained by the weak extinction of VLGs induced by the co-existence of absorber with right and wrong IA along the LOS. ### Polarization degree and gas column density The upper left panel of Figure 22 shows the comparison of the relation \(p(250\,\mu\)m) \(-N_{\rm H}\) obtained in the inner 500 au region between model PA (black line) and model rIA (colors lines) with different grain magnetic properties, assuming \(a_{\rm max}=1\,\mu\)m. In model PA, the polarization degree declines continuously from \(p\sim 20\%\) at 500 au to \(p\sim 1\%\) in the densest region inside the disk due to the narrower of the alignment range (similar to results from 2mm, Figure 14, case \(a_{\rm max}=1\,\mu\)m). In model rIA, SPM grains with high \(N_{\rm cl}=10^{4}\) share similar behavior and the same polarization degree as model PA. SPM grains with lower \(N_{\rm cl}=100\) and PM grains also show the same reduction of \(p\) with increasing \(N_{\rm H}\), but their polarization degree is much lower due to the inefficient alignment of sub-micron grains with \(\mathbf{B}\) around the protostar. The upper right panel of Figure 22 shows similar results as the upper left panel but for \(a_{\rm max}=10\,\mu\)m. In the PA model, \(p\) obtained in the inner 500 au region generally be higher than the case of \(a_{\rm max}=1\,\mu\)m due to the extension of the alignment range. However, they still show the decline of \(p\) with \(N_{\rm H}\) as a result of the increasing extinction effect from large micron-sized aligned dust grains. In model rIA, one also obtains the reduction of \(p\) with gas column density, but with a much steeper slope than model PA due to the additional contribution from the low alignment degree of micron-sized grains with \(\mathbf{B}\) here. Grains with lower embedded iron inclusions produce smaller \(p(\%)\) and stronger declination of \(p\) with \(N_{\rm H}\) due to their weaker alignment degree with magnetic fields. The lower left panel of Figure 22 shows similar results as the upper right panel but for \(a_{\rm max}=50\,\mu\)m. In model PA, the polarization fraction at \(250\,\mu\)m decreases from \(p\sim 10\%\) at 500 au to \(p\sim 5\%\) when \(N_{\rm H}\sim 0.05/N_{\rm H,max}\) (\(\sim 150\) au) then increases again to \(p\sim 10\%\) when \(N_{\rm H}\sim 0.5N_{\rm H,max}\) (\(\sim 100\) au). The decrease and increase of \(p\) with gas column density from 500 to \(\sim 100\) au are driven by the increased extinction efficiency of aligned VLGs, which changes the polarization mechanism from dichroic emission in the outer edge of the disk to dichroic extinction inside the disk (Figure 20, first row). Within \(\lesssim 100\) au, \(p(250\,\mu\)m) slightly decreases again from \(p\sim 10\%\) to \(p\sim 7\%\) with increasing \(N_{\rm H}/N_{\rm H,max}\) due to the trap of polarization signal by high gas-radiation interactions. In model rIA for SPM grains with \(N_{\rm cl}=10^{4}\), one also sees the decrease and increase of \(p\) (\(250\,\mu\)m) with increasing \(N_{\rm H}\) as model PA. 
However, the rise of \(p\) with \(N_{\rm H}\) within \(\sim 150\) au is not prominent as model PA because, in reality, VLGs are not well coupled with \(\mathbf{B}\) enough to clearly activate the effect of dichroic extinction inside the disk. For grains with lower values of \(N_{\rm cl}\), \(p(250\,\mu\)m) just simply decreases continuously with Figure 20: Effect of grain size and grain magnetic properties on the inferred magnetic field map obtained in the inner 500 au region at \(250\,\mu\)m. The color code shows the polarization degree \(P(\%)\), and white segments show the \(\mathbf{B}\) vectors obtained by rotating the polarization vector by \(90^{\circ}\). The upper row shows results for model PA, the second to fourth rows are for model rIA with different grain magnetic properties, the left column is for \(a_{\rm max}=1\,\mu\)m, and the right column is for \(a_{\rm max}=100\,\mu\)m. In model PA, the inferred magnetic field direction in the inner 200 au changes from parallel to perpendicular to the disk minor axis when the maximum grain size exceeds \(a_{\rm max}\geq 50\,\mu\)m. The change in \(\mathbf{B}\) vectors results from the change in the polarization mechanism from dichroic emission (at optically thin wavelengths at 2mm) to dichroic extinction (at optically thick disk at \(250\,\mu\)m). In model rIA, only SPM grains with high \(N_{\rm cl}=10^{4}\) can reveal the change in the polarization pattern caused by dichroic extinction at submillimeter wavelengths. PM and SPM grains with \(N_{\rm cl}=100\) do not reveal this feature because VLGs above \(10\,\mu\)m are not able to be aligned with \(\mathbf{B}\)inside the disk. increasing gas column density due to the misalignment of VLGs within \(\sim 200\) au around the protostar. The tendency of \(p-N_{\rm H}\) and the values of \(p\) for all realistic models of grain magnetic properties are the same as grains grow from \(a_{\rm max}=50\,\mu\)m to \(a_{\rm max}=100\,\mu\)m (lower right panel). ## 9 Discussion In this section, we will discuss more details about our results of dust polarization properties in protostellar cores and disks and their implications on connecting dust physics and magnetic fields with observations. ### Grain alignment in protostellar cores The alignment of grains with magnetic fields is the key to using polarized dust emission to trace magnetic fields and study dust physics. However, in contrast to the well-determined behavior of aligned dust grains in the diffuse medium (Reissl et al., 2020), the strong gas randomization in very dense, protostellar environments would reduce the magnetic alignment of grains, which hinders the usage of dust polarization to trace magnetic fields (Hoang, 2022). Recently, detailed theoretical studies of grain alignment by Hoang (2022); Hoang et al. (2022) reveal the complex picture of grain alignment in protostellar environments and strongly emphasize the importance of superparamagnetic inclusions for enhancing both the internal and external alignment of grains with magnetic fields. Our results from Giang et al. (2022), and in Section 4.2 of this paper (see also Appendix A) show that grain alignment within 2500 au around the protostar is consistent with the theoretical prediction in Hoang et al. (2022). In particular, we found that PM grains are not aligned with \(B\) within 200 au around the protostar for all values of \(a_{\rm max}=1-100\,\mu\)m. 
Micron-sized grains below \(10\,\mu\)m beyond \(\sim 200\) au can have magnetic alignment by RATs, but almost all of them have inefficient IA by slow internal relaxation at both high and low-\(J\) attractors. In contrast, SPM grains can be aligned with \(\mathbf{B}\) in the entire \(\sim 2500\) au around the protostar. On the envelope scale, SPM grains up to \(\sim 100\,\mu\)m can have perfect alignment by MRAT and efficient IA at high-\(J\) attractors due to the strong Barnett relaxation mechanism. However, moving toward the disk scale, SPM grains larger than \(\sim 30\,\mu\)m cannot have magnetic alignment, and micron-sized grains at both high and low-\(J\) attractors tend to have inefficient IA by slow internal relaxation due to the strong gas randomization effect. Indeed, grains at high-\(J\) attractors may have fast internal relaxation owing to their fast rotation rate around the protostar (Giang et al., 2022). However, with the very high gas density of up to \(n_{\rm H}\sim 10^{9}-10^{11}\,{\rm cm}^{-3}\) within the protostellar disk, gas randomization is still strong enough to eliminate the fast rotation by RATs, forcing grains to have slow internal relaxation in this region. Besides, SPM grains can only be aligned with \(\mathbf{B}\) by RATs in the inner 200 au region due to the reduced magnetic relaxation.

Figure 21: Similar results as Figure 20 but for model wIA. Obviously, only SPM grains with high \(N_{\rm cl}\) can reveal in detail the change of magnetic fields from the inner part of the envelope to the disk scale because they can have magnetic alignment around the protostar. However, the inferred magnetic fields at 250 \(\mu\)m inside the disk are similar to the results at 2 mm (Figure 12), with \(B\) vectors still perpendicular to the disk minor axis for all values of \(a_{\rm max}\) from \(1\,\mu\)m to \(100\,\mu\)m. This polarization signal originates from the emission of grains with the wrong IA, which is not strongly suppressed by the extinction of VLGs with the wrong IA as in the case of model rIA.

In conclusion, one cannot simply assume fast internal relaxation for aligned dust grains in protostellar environments as in the case of the diffuse medium and molecular clouds. The alignment size range is determined not only by the minimum alignment size by RATs, \(a_{\rm align}\), but also by the rate of Larmor precession compared to the gas randomization, \(a_{\rm max,JB}^{\rm Lar}\). Moreover, RATs are not the primary origin behind the magnetic alignment of all grain sizes in protostellar environments: the alignment mechanism differs for different grain sizes, depending strongly on the grain magnetic properties and the gas density in the local environment. Since the alignment mechanism determines the observed degree of dust polarization (Section 6), the internal alignment determines the orientation of polarization vectors with respect to local magnetic fields, and the alignment range determines the area where dust polarization can trace \(B\)-fields (Section 5). Therefore, detailed modeling of grain alignment together with grain magnetic properties is required to accurately interpret the observed polarization signal, measure \(B\)-fields, and constrain dust physics in protostellar environments.

### Where dust polarization can trace magnetic fields and recovery rate?

Here, we discuss in what region within the protostellar core dust polarization can be used to reliably trace magnetic fields.
Figure 22: Comparison about the variation of \(p\,(250\,\mu{\rm m})\) with normalized column density \(N_{\rm H}/N_{\rm H,0}\) between model PA and model rIA for \(a_{\rm max}=1\,\mu\)m, \(a_{\rm max}=10\,\mu\)m, \(a_{\rm max}=50\,\mu\)m, and \(a_{\rm max}=100\,\mu\)m, from left to right, from top to bottom, respectively. In model PA, for \(a_{\rm max}\sim 5\,\mu\)m, \(p\,(250\,\mu{\rm m})\) decreases with increasing \(N_{\rm H}\) due to the extinction of large grains. As \(a_{\rm max}\) increases to \(50-100\,\mu\)m, \(p\,(250\,\mu{\rm m})\) decreases then increases again to the center due to the change of polarization mechanism from dichroic emission to dichroic extinction (Figure 20, upper row). However, taking into account the realistic model of grain alignment, \(p\,(250\,\mu{\rm m})\) just simply decreases toward the center, and the reduction of \(p\,(250\,\mu{\rm m})\) with \(N_{\rm H}\) is stronger than model PA due to the significant reduction of the grain alignment in the innermost region of the protostar. The reduction of \(p\) with \(N_{\rm H}\) in model rIA is stronger for grains containing lower levels of iron inclusions. #### 9.2.1 Inferred magnetic fields In Section 5.1, we found that in general, PM and SPM grains can trace the large-scale magnetic field in the envelope of \(\sim 500-1000\) au scale because they can be aligned with \(\mathbf{B}\) there. For SPM grains, most of them have fast internal relaxation and tend to be aligned with \(\mathbf{B}\) at high-\(J\) attractors. As a result, their polarization vectors are always perpendicular to the orientation of magnetic fields. For PM grains, \(\sim 75\%\) of grains will be aligned with \(\mathbf{B}\) at low-\(J\) attractors by RATs. However, the remaining \(25\%\) can have perfect alignment with \(\mathbf{B}\) (Figure 40, upper panels), allowing the net polarization vectors still be perpendicular to the magnetic field direction. Thus, in the thousand au scale, one can confidently rotate \(\mathbf{P}\) by \(90^{\circ}\) to infer again the pattern of magnetic fields regardless of the magnetic properties of dust grains. In the protostellar disk, the situation is different. In Section 5.2, we show that PM grains cannot trace \(\mathbf{B}\)-fields within a few hundred au around the protostar because they are not able to be aligned with \(\mathbf{B}\) in this area. The observed polarization signal only provides the magnetic field information in the envelope, and their polarization vectors are mostly perpendicular to \(\mathbf{B}\)-fields. For SPM grains, extending the maximum alignment size to above \(\sim 5\,\mu\)m allows their polarized dust emission to bring more information on magnetic fields from the protostellar disk. For example, with \(\Theta=45^{\circ}\), SPM grains with \(N_{\rm cl}=100\) can trace the spiral pattern of magnetic fields beyond \(\sim 200\) au from the protostar (Figure 11, third row), while SPM grains with higher \(N_{\rm cl}=10^{4}\) are able to trace the change of \(\mathbf{B}\)-fields from the spiral pattern to the uniform field along the disk minor axis in the inner 200 au region as model PA (Figure 11, fourth row). However, the alignment of SPM grains with \(\mathbf{B}\) in the disk accidentally activates the problem of grains with slow internal relaxation on the net polarization vectors. In detail, \(50-75\%\) of large SPM grains inside the disk are aligned with \(\mathbf{B}\) at low-\(J\) attractors and such grains tend to have slow internal relaxation (Figures 43 and 46, lower panels). 
If these grains can still have the right IA, they will produce \(\mathbf{P}\perp\mathbf{B}\). Thus, rotating \(\mathbf{P}\) by \(90^{\circ}\) is reasonable and gives the correct magnetic field orientation as inferred from model PA (Figure 11, fourth row). In the opposite case, \(\sim 50-75\%\) of aligned grains will have wrong IA at low-\(J\) attractors, while the remaining grains have right but inefficient IA at high-\(J\) attractors. The net polarization signal is then dominated by the emission from wrongly aligned dust grains with polarization vectors \(\mathbf{P}\|\mathbf{B}\). Consequently, rotating \(\mathbf{P}\) by \(90^{\circ}\) yields an incorrect inferred pattern of magnetic fields in the disk (Figure 12, fourth row). In conclusion, dust polarization from aligned dust grains always traces magnetic fields on the thousand-au scale, but the origin of the polarized dust emission obtained on the few-hundred-au scale needs to be carefully quantified based on the grain magnetic properties. Giang et al. (2022) showed that we can constrain the amount of iron inside grains via the observed polarization degree; however, regarding the orientation of \(\mathbf{P}\) with respect to \(\mathbf{B}\), we do not yet clearly understand which grain configurations and environmental conditions support the right and wrong IA. Nevertheless, based on the results from Sections 5.2 and 8.1, we suppose that we can roughly estimate the overall alignment direction of grains with \(\mathbf{B}\)-fields via multi-wavelength observations. We will discuss this issue in detail in Section 9.5.1.

#### 9.2.2 Recovery rate

To quantify how well dust polarization can capture the orientation of protostellar magnetic fields, Valdivia et al. (2022) calculated the magnetic field angle difference between MHD simulations and synthetic observations of dust polarization in different cores with different initial setups of magnetic fields and turbulence. They concluded that dust polarization from aligned dust grains is a robust tracer of magnetic fields on the \(\sim 500-1000\) au scale, with the ability to recover \(\sim 90\%\) of the magnetic field orientation in this region. However, that paper considers standard RAT theory with a constant \(f_{\rm high-J}=0.25\) and assumes that all grains from \(a_{\rm align}\) to \(a_{\rm max}=20\,\mu\)m have fast internal relaxation and can be aligned with \(\mathbf{B}\). As discussed in Section 9.1, this assumption is not always valid, even in the protostellar envelope. Therefore, even though grains can generally trace \(\mathbf{B}\)-fields on the thousand-au scale, the recovery rate of dust polarization must be examined again with the new grain alignment model to accurately evaluate the ability of dust polarization to probe magnetic fields in protostellar environments. Following Valdivia et al. (2022), we calculate the angle difference \(\Delta\phi\) between the integrated magnetic field from the MHD simulation shown in the right panel of Figure 2 and the inferred magnetic field from dust polarization, \(\phi_{\rm B,syn}\), derived by rotating \(\mathbf{P}\) by \(90^{\circ}\). Here, we consider the viewing direction to be face-on (\(\Theta=0^{\circ}\)) and take polarized dust emission to trace magnetic fields where \(\Delta\phi\leq 20^{\circ}\). The recovery rate is then defined as the ratio of the area where dust polarization can capture the \(\mathbf{B}\)-field orientation to the studied area.
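For concreteness, a minimal sketch of this recovery-rate calculation is given below; the array names, the map size in the example, and the optional mask are illustrative, while the \(90^{\circ}\) rotation of \(\mathbf{P}\) and the \(20^{\circ}\) threshold follow the definition above.

```python
import numpy as np

def recovery_rate(phi_B_mhd, phi_pol, threshold_deg=20.0, mask=None):
    """Fraction of the (masked) map where the B-field inferred from dust
    polarization matches the MHD field orientation.

    phi_B_mhd : 2-D array of simulated B-field position angles [deg]
    phi_pol   : 2-D array of polarization position angles [deg]
    The inferred field is obtained by rotating the polarization angle by 90 deg;
    since orientations are defined modulo 180 deg, the difference is wrapped to
    [0, 90] deg before applying the threshold."""
    dphi = np.abs(phi_B_mhd - (phi_pol + 90.0)) % 180.0
    dphi = np.minimum(dphi, 180.0 - dphi)
    good = dphi <= threshold_deg
    if mask is not None:
        good = good[mask]
    return good.mean()

# Illustrative use with synthetic maps standing in for the real data:
rng = np.random.default_rng(0)
phi_mhd = rng.uniform(0.0, 180.0, size=(256, 256))
phi_pol = (phi_mhd - 90.0 + rng.normal(0.0, 10.0, size=(256, 256))) % 180.0
print(f"recovery rate = {recovery_rate(phi_mhd, phi_pol):.1%}")
```

The ring-by-ring values discussed below follow from the same function by passing a radial mask (e.g. \(r<100\) au) instead of the full map.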
Table 3 shows the recovery rate of dust polarization from PM grains (first column), SPM grains with low \(N_{\rm cl}=100\) (second column), and SPM grains with high \(N_{\rm cl}=10^{4}\) (third column). Three first rows show results obtained from model PA, rIA, and wIA with \(a_{\rm max}=10\,\mu\)m, and the next three rows are for the model with \(a_{\rm max}=100\,\mu\)m. In general, the recovery rate is smaller when we consider the realistic model of grain alignment in protostellar environments. In particular, PM grains only can cover \(\sim 70-75\%\) of magnetic fields in protostellar cores due to their weak alignment degree with magnetic fields. In contrast, SPM grains can recover \(80-90\%\) of \(\mathbf{B}\) within \(\sim 2500\) au around the protostar depending on the amount of iron inside dust grains. The recovery rate slightly declines with increasing \(a_{\rm max}\) due to \begin{table} \begin{tabular}{l c c c c} \hline \hline **Model** & **Grain type** & \(<100\) **au** & **100-500 au** & **500-2000 au** \\ \hline PA & & 93.37\% & 90.90\% & 82.72\% \\ rIA & PM & 44.58\% & 59.39\% & 73.70\% \\ SPM, \(N_{\rm cl}=100\) & 54.82\% & 82.70\% & 82.59\% \\ SPM, \(N_{\rm cl}=10^{4}\) & 78.61\% & 89.60\% & 82.72\% \\ wIA & PM & 43.37\% & 56.32\% & 71.19\% \\ SPM, \(N_{\rm cl}=100\) & 45.78\% & 75.39\% & 82.26\% \\ SPM, \(N_{\rm cl}=10^{4}\) & 32.83\% & 86.44\% & 82.72\% \\ \hline \hline \end{tabular} \end{table} Table 4: Recovery rate in concentric rings focusing in the envelope and disk scale for different grain magnetic properties, assuming \(a_{\rm max}=10\,\mu\)m \begin{table} \begin{tabular}{l c c c} \hline \hline **Model** & **PM** & **SPM**, \(N_{\rm cl}=100\) & **SPM**, \(N_{\rm cl}=10^{4}\) \\ \hline PA, \(a_{\rm max}=10\,\mu\)m & & 83.39\% & \\ rIA & 72.31\% & 83.51\% & 84.92\% \\ wIA & 64.30\% & 78.02\% & 83.25\% \\ PA, \(a_{\rm max}=100\,\mu\)m & & 83.76\% & \\ rIA & 71.83\% & 82.46\% & 83.53\% \\ wIA & 69.26\% & 81.25\% & 82.92\% \\ \hline \hline \end{tabular} \end{table} Table 3: Effects of maximum grain size and dust model on the recovery rate of the entire protostellar core the increased amount of VLGs with inefficient magnetic alignment. This parameter is also smaller for model wIA as a result of the superposition of polarization signal from grains with the right and wrong IA along the LOS. Table 4 shows similar results as Table 3 but for different concentric rings of thickness from 500-2000au (the envelope scale), 100-500 au, and within 100 au (the protostellar disk), assuming \(a_{\rm max}=100\)\(\mu\)m. In general, the accuracy of \(\mathbf{B}\)-field morphology inferred from dust polarization decreases for grains containing low amounts of iron inclusions and for \(\mathbf{B}\)-fields in the inner \(<500\) au region. In particular, PM grains can recover \(\sim 70\%\) beyond \(>500\) au and \(\sim 45\%\) in the disk, while SPM grains can recover higher percentages of \(\sim 80-90\%\) in the envelope and \(\sim 50-78\%\) inside the disk. The recovery rate declines with increasing maximum grain sizes, and it declines for model wIA. However, one can see that for SPM grains with \(N_{\rm cl}=10^{4}\) in model wIA, the recovery rate within \(<100\) au is very small of \(\sim 30\%\) due to the wrong interpretation of dust polarization radiating from wrong aligned grains (Figure 12, lower right panel). 
This problem thus emphasizes the importance of accurately determining the alignment direction of grains with \(\mathbf{B}\)-fields before deciding how to obtain \(\mathbf{B}\) vectors from \(\mathbf{P}\).

### Origin of the polarization hole

One of the most long-standing puzzles in protostellar environments is the reduction of the polarization degree toward the center, known as the polarization hole (Henning et al., 2001; Girart et al., 2006; Hull et al., 2014; Cox et al., 2018; Galametz et al., 2018; Kwon et al., 2019; Ko et al., 2020). By fitting the relation of \(p-I\) with the power law \(p\sim I^{\alpha}\), the depolarization appears to vary from case to case, from a shallow \(\alpha\sim-0.4\) on the 1000 au scale of L1448 IRS2 (Kwon et al., 2019), to \(\sim-0.6\) in a Bok globule (Henning et al., 2001), to a very steep \(\alpha\sim-0.97\) in NGC 2024 FIR 5 (Lai et al., 2002) and \(\sim-1\) for the inner region of L1448 IRS2 (Kwon et al., 2019). The mechanism driving the depolarization effect is still unclear, but it is usually attributed either to geometric origins, such as the projection effect of magnetic fields (Kataoka et al., 2012) or the effect of turbulence, or to physical origins, such as the decrease of grain alignment efficiency due to gas randomization (Hoang and Lazarian, 2016; Brauer et al., 2016), the change in dust population and composition (Brauer et al., 2016), or the extinction of VLGs (Brauer et al., 2016; Liu et al., 2016; Ko et al., 2020; Liu, 2021). However, the above studies and discussions assume the alignment model of the diffuse medium, which is not valid in protostellar environments (Section 9.1). Recently, Giang et al. (2022) indicated that the alignment loss for grains containing low amounts of iron inclusions, and the inefficient internal and external alignment of grains with higher magnetic susceptibility around the protostar, are the main origins producing the polarization hole at both optically thin and optically thick wavelengths. However, our previous study used a uniform magnetic field model and a lower gas density profile than the results obtained in MHD simulations and observations of protostellar disks. Thus, that conclusion may overestimate the effect of the grain alignment efficiency and underestimate the effects of turbulence, magnetic field geometry, gas density, and dichroic extinction in producing the polarization hole. By post-processing the realistic MHD simulation of the protostellar core and disk with POLARIS, in Section 6, for the fixed value of \(\Theta=45^{\circ}\), one can clearly see that, in general, the reduced grain alignment efficiency toward the center suppresses the polarization degree much more strongly than the effects of turbulence and magnetic field morphology. The low grain alignment degree is also directly implied by the continuous decrease of the quantity \(p\times S\) with intensity toward the central region (Figures 18 and 19). To understand in detail the mechanism behind the behavior of \(p\) with intensity, we determine the slope \(\alpha\) in three separate regions: beyond 500 au (envelope scale), between 200 and 500 au, and within 200 au (disk scale) (Section 6.3). We found that for \(\Theta=45^{\circ}\), the slope beyond \(\sim 200\) au for grains containing high levels of embedded iron inclusions is shallow, with \(p\sim I^{-0.2}\) (Figure E1, left and central panels), and the corresponding polarization degree is rather high, \(p\sim 10-40\%\), as found in model PA (Figure 13).
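The slope \(\alpha\) quoted throughout this section is simply the gradient of a straight-line fit in log–log space. A minimal sketch is given below; the flattened intensity, polarization, and radius arrays are placeholders for the synthetic-map pixels, and the zone boundaries repeat the radii used above.

```python
import numpy as np

def pI_slope(I, p, mask=None):
    """Fit p ~ I**alpha over the selected pixels and return alpha
    (linear regression of log10 p against log10 I)."""
    if mask is not None:
        I, p = I[mask], p[mask]
    good = (I > 0) & (p > 0)
    alpha, _ = np.polyfit(np.log10(I[good]), np.log10(p[good]), 1)
    return alpha

# With flattened maps I, p and a radius array r [au], the three zones used in
# the text would be fitted as, e.g.:
#   alpha_env  = pI_slope(I, p, mask=(r > 500.0))
#   alpha_mid  = pI_slope(I, p, mask=(r > 200.0) & (r <= 500.0))
#   alpha_disk = pI_slope(I, p, mask=(r <= 200.0))
```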
The high \(p\geq 10\%\) obtained on the envelope scale implies that SPM grains there could achieve perfect alignment with \(\mathbf{B}\) by MRAT alignment. Under this condition, the depolarization is entirely induced by the projection effect of magnetic fields on the POS and the \(\mathbf{B}\)-field tangling by turbulence. In contrast, for grains with lower magnetic susceptibility, the polarization–intensity curve is much steeper, with \(p\sim I^{-0.4}-I^{-0.5}\), and the corresponding polarization fraction is low, \(p\sim 1-10\%\). The latter implies that most of the grains inside the envelope still have magnetic alignment, but with inefficient IA, and that they are only aligned with \(\mathbf{B}\) by RATs. In this case, the increasing amount of grains with inefficient IA toward denser regions is the primary reason for the steep reduction of \(p\) with intensity beyond \(\sim 200\) au. In Appendix E, we change the inclination angle \(\Theta\) from the face-on to the edge-on direction and arrive at the same conclusion.

Moving into the inner 200 au, the grain alignment efficiency dominates over turbulence and the projection effect of magnetic fields in producing the depolarization, regardless of the differences in grain magnetic properties, maximum grain size, and inclination angle (Figures 15 and E1, right panel). In particular, for grains with low levels of iron inclusions, the alignment loss around the protostar is the primary reason for the reduction of \(p\) with intensity in this region, with a steep slope \(\alpha\sim-1\). For grains with higher magnetic susceptibility, which can have magnetic alignment even on the disk scale, the reduced IA efficiency and the transition from MRAT alignment to RATs toward the disk are the main reasons for the polarization hole. As the maximum grain size increases, the amount of grains with inefficient IA and the possibility of grains losing their alignment with \(\mathbf{B}\) increase. As a consequence, grain growth further emphasizes the role of the grain alignment degree in producing the depolarization within a few hundred au around the protostar. This holds not only at optically thin wavelengths: we found that even at optically thick wavelengths, the reduced alignment efficiency of VLGs works more effectively than dichroic extinction in eliminating dust polarization emitted by aligned dust grains (Section 8.2 and Giang et al., 2022). Since micron-sized grains above \(\sim 1\,\mu\)m are much more likely to have inefficient IA by slow internal relaxation, and may even be misaligned with \(\mathbf{B}\) in protostellar disks, the poor magnetic alignment of dust grains should be the key factor producing the depolarization effect observed in protostellar cores, rather than the effects of magnetic field morphology, turbulence, and inclination angle, which vary from core to core and with observing conditions.

### Dichroic extinction and the variation of polarization pattern with wavelengths

As proposed by Brauer et al. (2016), dichroic extinction from aligned VLGs can reduce polarized thermal emission at submillimeter wavelengths and replace dichroic emission as the dominant polarization mechanism if grains grow above 10 \(\mu\)m. The change in the polarization mechanism induces a \(90^{\circ}\) difference in the polarization pattern between optically thick and optically thin wavelengths. Ko et al.
(2020) and Liu (2021) suggest that this is the mechanism behind the \(90^{\circ}\) flipping of \(\mathbf{P}\) obtained in the inner 100 au region of IRAS4A and OMC-2 MMS 6. However, this explanation is based on the assumption that VLGs have efficient magnetic alignment even in very dense disks with \(n_{\rm H}\sim 10^{8}-10^{10}\,{\rm cm}^{-3}\). In Section 8.1, we found that in model PA, dichroic extinction can reduce polarized dust emission if grains grow above \(a_{\rm max}\geq 5\,\mu{\rm m}\) (Figure 22, upper panels). But it can only become the primary polarization mechanism at optically thick wavelengths and induce the \(90^{\circ}\) flipping of the polarization pattern if grains grow above \(50\,\mu{\rm m}\) (Figures 11 and 20, upper panels). Taking into account the realistic alignment of grains in dense environments, the flipping of the polarization pattern for \(a_{\rm max}\geq 50\,\mu{\rm m}\) can only be activated if grains contain a high amount of iron clusters and all aligned dust grains have the right IA (Figures 11 and 20, bottom panels). In the presence of aligned dust grains with the wrong IA, the change in the polarization pattern does not appear even at optically thick wavelengths (Figure 21, bottom panels). For grains with low levels of iron inclusions, dichroic extinction is deactivated (Figure 21) because of the alignment loss of micron-sized grains within \(\sim 200\) au around the protostar (Figure A2, upper panels). The strong correlation between the grain alignment state and the observed properties of dust polarization thus emphasizes the importance of understanding the grain magnetic properties and their alignment direction with magnetic fields in protostellar environments. This analysis cannot be neglected and should be carried out before further interpreting other, more complicated features of polarized dust emission in star-forming regions.

On the other hand, Giang et al. (2022) showed that polarization flipping with wavelength can happen if the dominant source of polarized dust emission changes from VLGs with the wrong IA to micron-sized grains with the right IA. This phenomenon appears at optically thin wavelengths, which is the key to distinguishing it from the change in polarization mechanism that happens at optically thick wavelengths. However, our new results show that the coexistence of grains with right and wrong IA only makes the polarization pattern more disordered, without clearly revealing the change of \(\mathbf{P}\) with wavelength suggested by Giang et al. (2022). In Figure 11, one can see the \(90^{\circ}\) flipping of \(\mathbf{P}\) at 2 mm due to the change in IA along the outer edge of the disk in the north-east direction at \(\sim 200\) au. This feature occurs for SPM grains with low \(N_{\rm cl}=100\), but it is not prominent. We suppose the difference is due to the higher gas density inside the disk, which induces a higher amount of grains with slow internal relaxation than in our previous model. However, since the internal alignment state of grains depends on the gas density profile, magnetic field strength, and stellar radiation, more synthetic observations with different MHD simulations of cloud-collapsing cores must be carried out to generalize the effect of grains with slow internal relaxation on dust polarization.
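In practice, the presence or absence of such a wavelength-dependent \(90^{\circ}\) flip can be quantified directly from two position-angle maps. The sketch below is one possible way to do so; the map names, the inner-region radius, and the \(45^{\circ}\) decision boundary are illustrative choices rather than values adopted in this work.

```python
import numpy as np

def median_angle_offset(psi_a, psi_b, mask):
    """Median offset between polarization position angles [deg] measured at two
    wavelengths inside a chosen region.  Orientations are defined modulo 180 deg,
    so the offset is wrapped to [0, 90] deg: values near 0 deg mean the pattern
    is unchanged with wavelength, values near 90 deg indicate the flip discussed
    above."""
    dpsi = np.abs(psi_a[mask] - psi_b[mask]) % 180.0
    return np.median(np.minimum(dpsi, 180.0 - dpsi))

# e.g. with position-angle maps psi_2mm and psi_250um and a radius map r [au]:
# offset = median_angle_offset(psi_2mm, psi_250um, mask=(r < 100.0))
# flipped = offset > 45.0   # crude two-way classification of the inner region
```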
### Probing grain properties with multiwavelength dust polarization #### 9.5.1 Alignment direction of grains with magnetic fields As discussed in Sections 9.2.1 and 9.4, determining the alignment direction of grains with magnetic fields is the key for accurately inferring \(B\) vectors from dust polarization. This analysis is very important for the case that grains are probed to trace \(B\)-fields inside the disk scale. In Section 9.4, we found that the polarization pattern in the inner region will be the constant with wavelengths if 1) the maximum grain size is below \(\sim 50\,\mu{\rm m}\), and 2) grains have wrong IA (see Figures 12 and 21). The first parameter can be estimated via the dust opacity index or via the dust polarization (which we will discuss in the following section). If \(a_{\rm max}\) is above \(\sim 50\,\mu{\rm m}\), the consistent polarization pattern across multi-wavelength observations in the innermost region of the protostellar core could be the sign of the dominant existence of grains with the wrong IA there. In this case, we should not rotate \(P\) because the polarization pattern already reveals the magnetic field morphology in studied areas. On the contrary, the \(90^{\circ}\) flipping of \(P\) in the disk scale with wavelengths indicates that all aligned dust grains have the right IA. In this case, rotating \(P\) obtained at optically thin wavelengths by \(90^{\circ}\) and keeping \(P\) obtained at optically thick wavelengths provide us the accurate map of \(B\)-fields. #### 9.5.2 Constraining the grain magnetic properties and grain growth Giang et al. (2022) suggested using the dust polarization degree to classify grain type (PM or SPM grains) and infer again the level of iron inclusions locked inside dust grains based on the positive correlation between \(p\) and the grain magnetic properties. However, as shown in Figures 13 and 14, at \(\lambda=2\) mm, both \(p\) and \(a_{\rm max}\) coupling together to constraining the observed degree of polarization, making the situation become more complicated. In detail, \(p\) generally increases with increasing iron inclusions. However, for grains having high embedded iron inclusion, \(p\) obtained in the envelope increases with increasing \(a_{\rm max}\), but \(p\) obtained inside the inner \(\sim 500\) au region decreases with increasing maximum size. For grains containing lower levels of iron inclusions, the increasing maximum size reduces \(p(\%)\), and the suppression of grain growth on dust polarization happens stronger for grains with lower magnetic susceptibility. Using the standard RAT alignment theory, Valdivia et al. (2019) found the increase of \(p\) from \(870\,\mu{\rm m}\) to \(1.3{\rm mm}\) if grains grow to above \(a_{\rm max}\geq 20\,\mu{\rm m}\) in \(\sim 2000\) au region around the protostar. This feature could be used as a sign of grain growth activities in protostellar environments (Yen et al., 2020). In Figure 23, we show the variation of the mean polarization degree \(\langle p\rangle(\%)\) found in the envelope beyond \(500\) au to the protostar with wavelengths from \(250\,\mu{\rm m}\) to \(2\) mm for different maximum grain size from \(a_{\rm max}=1\,\mu{\rm m}\) to \(a_{\rm max}=100\,\mu{\rm m}\). The left panel is the result for SPM grains with high \(N_{\rm cl}=10^{4}\), the central panel is for SPM grains with low \(N_{\rm cl}=100\), and the right panel is for PM grains. 
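A small sketch of how such a mean polarization spectrum and its overall trend with wavelength could be extracted from a set of maps is given below; the sampled wavelengths, the annulus boundaries, and the array names are assumptions made for illustration, while the interpretation of a rising versus falling spectrum follows Figure 23.

```python
import numpy as np

def mean_polarization_spectrum(p_maps, r, r_in=500.0, r_out=2000.0):
    """Mean polarization degree in an annulus (here the envelope, 500-2000 au)
    at each wavelength; p_maps is a list of 2-D p(%) maps, r a map of radii [au]."""
    ring = (r >= r_in) & (r <= r_out)
    return np.array([np.nanmean(pm[ring]) for pm in p_maps])

# Illustrative wavelength grid spanning the range shown in Figure 23:
wavelengths_um = np.array([250.0, 450.0, 850.0, 1300.0, 2000.0])
# spec  = mean_polarization_spectrum(p_maps, r)
# trend = np.polyfit(np.log10(wavelengths_um), spec, 1)[0]
# trend > 0 (rising spectrum)  -> consistent with grain growth to a_max >~ 50 um
# trend < 0 (falling spectrum) -> consistent with a_max of order 1 um
```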
The results for \(p-\lambda\) relation in the inner \(100-500\) au and in the disk scale within \(100\) au are shown in Appendix D. Excluding the complex variation of \(\langle p\rangle\) with \(a_{\rm max}\) and grain magnetic properties, one can clearly see that for both SPM and PM grains, \(\langle p\rangle\) decreases toward longer wavelengths if the maximum size is low of \(a_{\rm max}=1\,\mu{\rm m}\). As \(a_{\rm max}\) increases, the decrease of \(\langle p\rangle\) with \(\lambda\) becomes weaker because of increasing polarized dust emission from large grains at long wavelengths, then \(\langle p\rangle\) changes to increase with increasing wavelengths as grains grow to \(a_{\rm max}\geq 50\,\mu{\rm m}\). The results for the inner \(500\) au region are generally consistent with the tendency of \(\langle p\rangle-\lambda\) for different \(a_{\rm max}\) as found in the envelope scale (see Appendix D). Thus, we suppose that the maximum grain size (or the grain growth activities) could be recognized by observing multiwavelength dust polarization from submillimeter to millimeter wavelengths (Yen et al., 2020). Indeed, the most popular technique to detect the sign of grain growth is via the SED fitting (i.e., Kwon et al., 2009, Liu, 2021) or estimated from the detection of dust polarization from self-scattering inside the protostellar disk (reference). Although there is still a discrepancy of \(a_{\rm max}\) estimated from the two above methods, if we can first constrain the maximum size, the grain type and amount of iron inclusions locked inside dust grains can be estimated by fitting the observed polarization degree with simulated results from different dust models. Otherwise, two quantities will be the free parameters. In this case, we suggest looking into the polarization spectrum (Figure 23, Valdivia et al., 2019) and the overall degree of dust polarization (Figure 13), also the slope of \(p-I\) (Section 6.3) to have the first constraints on \(a_{\rm max}\) and grain magnetic properties. Then, the accurate values of the above quantities can be found by confronting the synthetic dust polarization degree from POLARIS with observational data. Based on the consistency between grain properties - grain alignment degree - synthetic dust polarization implemented inside the updated POLARIS version, now we are able to accurately extract both information on magnetic properties and grain growth inside the star-forming region. One can have a look at Bich Ngoc et al. (2023) and Akshaya & Hoang (2023) who carry the first numerical interpretation of grain magnetic properties and maximum grain size via dust polarization in the massive filament G11.11\(-\)0.12 and the Galactic center. ### Factor affects dust polarization degree Observations toward Class 0/I YSOs show the wide range of polarization degree (see the summarised distribution of \(p\) in Le Gouellec et al., 2020), from few percent above 1% to very high \(p\sim 30-40\%\) in the thousand au scales around the protostar, reduces to low \(p\sim 0.1\%\) to \(\sim 1\%\) in the disk scale (Henning et al., 2001, Hull et al., 2014, Cox et al., 2018, Galametz et al., 2018, Sadavoy et al., 2019). Brauer et al. (2016) show that the increased gas density could induce stronger dichroic extinction efficiency, producing lower levels of polarization degree around the protostar. Brauer et al. (2016) and Valdivia et al. 
(2019) found that an increasing maximum grain size could induce a larger \(p\) in the entire protostellar core due to the extension of the alignment range. However, taking into account the realistic model of grain alignment in protostellar environments, we found that increasing \(a_{\rm max}\) helps to increase \(p\) only for grains with a high amount of iron inclusions, and only on the envelope scale (Figure 14, upper right panel). Within a few hundred au around the protostar for grains with high iron inclusions, and for grains with low magnetic susceptibility, grain growth suppresses dust polarization and produces lower levels of polarization (Figure 14, lower panels). Besides the maximum grain size, the grain magnetic properties directly affect the grain alignment degree (Section 4.2) and strongly control the polarization fraction over the entire thousand-au region around the protostar (Giang et al., 2022, Figures 13 and 15). The projection effect of magnetic fields and the tangling by turbulence can also play a role in determining the observed \(p\) (Section 9.3, Appendix E).

Besides iron inclusions, maximum grain size, \(\mathbf{B}\)-field morphology, and turbulence level, the dust composition is also an important factor in determining the observed degree of polarization. Brauer et al. (2016) found that composite dust models with and without aligned graphite grains can differ by a factor of two. In Valdivia et al. (2019), by using a composite dust model including aligned silicate grains and unaligned graphite grains, they found a clear distinction between the \(p\) produced by the model with \(a_{\rm max}=1\,\mu\)m and with \(a_{\rm max}=50\,\mu\)m in the entire 2000 au region (see their Figure 3, right panels). They conclude that grains must grow above \(\sim 10-50\,\mu\)m to reproduce the detected level of \(p\sim 1-5\%\) found in the sample of Class 0/I YSOs from Cox et al. (2018). However, in Figure 14, we show that beyond 200 au, the model with \(a_{\rm max}=1\,\mu\)m can also produce a similarly high \(p\geq 5\%\) as the model with \(a_{\rm max}=100\,\mu\)m if both silicate and graphite grains are aligned with \(\mathbf{B}\) together. The overall \(p\) for all \(a_{\rm max}\) in our results is also much larger than the results from Valdivia et al. (2019), implying the importance of understanding the dust model in interpreting dust polarization. This problem needs to be studied in more detail in the future, which could reveal a new way for us to constrain the dust model via dust polarization.

Figure 23: Left panel: variation of the mean polarization degree beyond 500 au from 250 \(\mu\)m to 2 mm for different maximum grain sizes, assuming SPM grains with high \(N_{\rm cl}=10^{4}\). Central and right panels: similar results as the left panel but for SPM grains with low \(N_{\rm cl}=100\) and for PM grains. One can see that \(p\) clearly decreases with increasing wavelength if the maximum size is as small as \(a_{\rm max}\sim 1\,\mu\)m, due to the weak emission of small grains at millimeter wavelengths. As the maximum grain size increases, the decrease of \(p\) with \(\lambda\) becomes weaker, and \(p\) then increases toward millimeter wavelengths if \(a_{\rm max}\geq 50\,\mu\)m. The tendency of the polarization curve is similar for both SPM and PM grains, but the rise of \(p\) with wavelength for PM grains is not as prominent as in the case of SPM grains due to the weak alignment of VLGs in protostellar environments.
Another factor that could affect the dust polarization degree is the magnetic field strength and the stellar radiation field strength. Le Gouellec et al. found an extension of the alignment range, i.e., a smaller \(a_{\rm align}\) for larger stellar luminosity along the outflow cavity wall, which could help to increase both the polarized dust emission and the degree of polarization in this area. Hoang et al. (2022) showed that stronger stellar radiation could allow more grains to rotate faster and to have fast internal relaxation at high-\(J\) attractors. But it also increases the dust temperature, which could suppress both the Barnett relaxation and the magnetic relaxation. Thus, whether a strong stellar radiation field supports or suppresses dust polarization is still unclear. On the other hand, a high magnetic field strength inside protostellar cores and disks can strengthen the magnetic relaxation and allow more large grains to be aligned with \(\mathbf{B}\) by the MRAT mechanism. Further studies taking into account different models of magnetic fields and turbulence are required to generalize the effect of \(\mathbf{B}\)-fields on polarized dust emission.

In addition, grains are unlikely to be stably aligned with respect to the radiation field direction, because all aligned dust grains at low-\(J\) attractors have slow internal relaxation in protostellar environments. Most importantly, even if the radiation precession timescale is shorter than the Larmor precession timescale, grains are still not able to align with the radiation field if the gas damping timescale is too short compared to their radiation precession timescale. The predictions of the polarization patterns from different alignment mechanisms and of the alignment direction of grains are quantitatively described in Hoang et al. (2022). However, the superposition of polarization signals originating from different sources is much more complicated than the above estimation. Therefore, more alignment mechanisms need to be incorporated into POLARIS, and more detailed synthetic observations need to be performed to accurately quantify their contributions to producing dust polarization in protostellar disks. This work essentially sets up a catalog that we can apply to interpret observational data.

## 10 Summary

In this study, we post-processed an MHD simulation of a protostellar core with our updated POLARIS code to study the effects of grain magnetic properties and maximum grain sizes on synthetic dust polarization from magnetically aligned dust grains. Our main findings are summarized as follows:

1. We found that only PM grains below \(10\,\mu\)m can have magnetic alignment beyond \(\sim 200\) au by RATs, but most of them have inefficient internal alignment (IA) by slow internal relaxation. In contrast, SPM grains with a high amount of iron inclusions can have perfect magnetic alignment in the envelope by an efficient MRAT mechanism. However, on the disk scale of \(\sim 100\) au around the protostar, very large grains (VLGs) above \(20\,\mu\)m are not aligned with \(B\), their IA becomes inefficient at both low and high-\(J\) attractors, and they are only aligned with \(B\) by RATs due to the strong gas randomization there.

2. We found a positive correlation between the degree of polarization \(p\) and the level of iron inclusions locked inside dust grains. Besides, for grains with high levels of embedded iron inclusions, \(p\) observed in the envelope tends to increase with grain growth, \(a_{\rm max}\).
However, toward the inner region, \(p\) only slightly increases with the maximum size up to \(a_{\rm max}=20\,\mu\)m, and a further increase in \(a_{\rm max}\) decreases the polarization degree, \(p\). For grains with low embedded iron inclusions, grain growth tends to reduce \(p\), and this anti-correlation trend becomes more prominent for grains with lower magnetic susceptibility. 3. We found that for SPM grains with high iron inclusions, turbulence and geometrical effect of magnetic fields is the major origin driving the depolarization effect in the envelope scale with the weak relation \(p\sim I^{-0.3}\). In the inner \(500\) au region, the depolarization is mainly caused by the reduced IA and the change of external alignment from MRAT to RATs, resulting in the relation \(p\sim I^{-0.7}\). For grains with a lower level of iron inclusions, the reduction of the IA efficiency is the origin behind the depolarization in thousands au scale, with \(p\sim I^{-0.5}\), and the alignment loss is the origin of the polarization hole found in the central region with very steep relation \(p\sim I^{-1}\). 4. We found the increase in the polarization angle dispersion function, \(S\), with increasing the grain magnetic susceptibility due to the broadening in the region where grains can be aligned with magnetic fields. Grain growth also affects \(S\). Further numerical studies on the effects of grain alignment and growth on \(S\) are needed to achieve accurate measurements of the \(B\)-field strength using the DCF method. 5. Dust polarization is a robust tool for tracing the magnetic field orientation in the envelope scale, with their polarization vectors always being \(\mathbf{P}\perp\mathbf{B}\). In the inner \(200\) au around the protostar, only dust polarization from grains with high embedded iron inclusions can be used to infer again \(B\) field orientation around the protostar. However, the alignment direction of grains with \(B\) must be examined carefully before deciding how to get \(B\) from \(P\). 6. We found that SPM grains can produce high \(p\sim 1-40\%\) in the entire protostellar cores and show the rise in the polarization spectrum toward millimeter wavelengths if grains grow to above \(a_{\rm max}\geq 50\,\mu\)m. In contrast, PM grains produce low \(p<1\%\) around the protostar and the decrease of \(p(\%)\) with increasing \(\lambda\) for all values of \(a_{\rm max}\). These features could be treated as a sign of grain growth and to constrain the level of iron locked inside dust grains via dust polarization. In addition, we might be able to determine the alignment direction of grains with magnetic fields by observing the inner \(200\) au region from optically thick to optically thin wavelengths. 7. The effect of dichroic extinction is reduced due to the inefficient alignment of VLGs with magnetic fields inside the disk, which reduces the role of dichroic extinction on producing the polarization hole at submillimeter wavelengths. In addition, dichroic extinction only can become the major source of polarization and produce the \(90^{\circ}\) flipping of the polarization pattern with wavelengths in the inner \(\sim 100\) au region if 1) grains grow to above \(a_{\rm max}\geq 50\,\mu\)m, 2) they are SPM with a high amount of iron clusters, and 3) all of them have right IA. ## Acknowledgements We thank Jeong-Gyu Kim for stimulating discussions and Ka Ho Lam for sharing with us the MHD simulation datacube. 
We thank the members of the Vietnam Astrophysics Research Network (VARNET) for various useful discussions and comments. T.H. is supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. 2019R1A2C1087045).

## Data Availability

The data underlying this article will be shared on reasonable request to the corresponding author.
2301.13407
A compact and highly collimated atomic/molecular beam source
We describe the design, characterization and application of a simple, highly collimated and compact atomic/molecular beam source. This source is based on a segmented capillary design, constructed using a syringe needle. Angular width measurements and free molecular flow simulations show that the segmented structure effectively suppresses atoms travelling in off-axis directions, resulting in a narrow beam of Helium atoms having a width of 7 mrad (full width half maximum). We demonstrate an application of this source by using it for monitoring real-time changes in surface coverage on a clean Cu(110) surface exposed to oxygen, by measuring specular reflectivity of the Helium beam generated using this source.
Geetika Bhardwaj, Saurabh Kumar Singh, Pranav R. Shirhatti
2023-01-31T04:54:14Z
http://arxiv.org/abs/2301.13407v1
# A compact and highly collimated atomic/molecular beam source ###### Abstract **Abstract:** We describe the design, characterization and application of a simple, highly collimated and compact atomic/molecular beam source. This source is based on a segmented capillary design, constructed using a syringe needle. Angular width measurements and free molecular flow simulations show that the segmented structure effectively suppresses atoms travelling in off-axis directions, resulting in a narrow beam of Helium atoms having a width of 7 mrad (full width half maximum). We demonstrate an application of this source by using it for monitoring real-time changes in surface coverage on a clean Cu(110) surface exposed to oxygen, by measuring specular reflectivity of the Helium beam generated using this source. + Footnote †: Author to whom correspondence should be addressed. ## I Introduction Atomic and molecular beam techniques find widespread use in areas ranging from fundamental scientific studies to important technological applications. They play a crucial role in high resolution atomic and molecular spectroscopy measurements, studies involving cold atoms, understanding energy transfer in intermolecular and molecule - surface collisions, dynamics of chemical reactions, surface chemistry, thin film and coating technology to name a few [1; 2]. In the context of understanding physical and chemical process on surfaces, atomic/molecular beam - surface scattering experiments are very valuable. For example, techniques based on Helium atom scattering (HAS) from surfaces provide a wide range of information ranging from changes in surface coverage, phonon energy spectrum and dynamics, surface adsorbate motion, crystalline nature and even structural features by means of microscopy [3; 4]. In particular, specular reflection of He atoms from surfaces is a highly sensitive technique to measure small changes in adsorbate coverage, especially on flat single crystal surfaces. Here, diffuse scattering of incident He atoms caused by adsorbate induced disorder on the surface leads to a decrease in specular reflected He signal with increasing adsorbate coverage. Typically, diffuse scattering cross sections of He from surface adsorbates are much larger than that expected from Van der Waals radii and are of the order of 100 A\({}^{2}\) per adsorbed molecule [5; 6; 7]. As a result, small changes in surface coverage of the order of 0.01 monolayer (ML) can be detected. Further, use of thermal energy He beams typically having incidence kinetic energy \(<\) 100 meV means that this technique is soft and non-destructive. These features make specular He reflectivity an excellent tool for measuring sticking probabilities of adsorbates on surfaces. An elegant strategy to measure surface coverage using He reflectivity was demonstrated by Higgins and co-workers [8]. In their studies of quantum state resolved chemisorption of CH\({}_{4}\) on a Pt(111) surface they used a seeded beam of CH\({}_{4}\) in He, at an incidence angle of 45\({}^{\circ}\). Change in specular scattered He was used to estimate the surface coverage resulting from the dissociation of CH\({}_{4}\) and to evaluate the initial sticking probabilities. It should be noted that a large incidence angle of 45\({}^{\circ}\) (needed for using He specular reflection as a probe) limits the kinetic energy associated with the normal component of incident momentum, thereby making the study of reactions with large incidence energy thresholds very difficult using this approach. 
Using an independent He beam at a large angle from the surface normal (for an increased diffuse scattering cross section) as a probe, with the incident molecular beam (reactant) near the surface normal, can in principle circumvent this limitation. However, a typical design for producing a well-collimated He beam consists of a series (2-3) of differentially pumped vacuum chambers with a large footprint. This makes it very difficult to integrate a well-collimated He atom source with molecule-surface scattering experiments. Recent work by Li and co-workers [9], where they demonstrate an extremely compact, on-chip collimated atomic beam source using segmented micro-channels etched on a silicon wafer, provides a route to overcome the above difficulty. Building further on the ideas presented by Li and co-workers, we present a design of a simple, compact and highly collimated atom beam source which can be easily fabricated and incorporated into molecule - surface scattering experiments. In our case, the atom beam source is based on a stainless steel capillary (a commercially available syringe needle), machined to have a segmented structure. Further, we demonstrate an application of this compact and highly collimated atom beam source for measuring real-time surface coverage changes in the case of dissociative chemisorption of oxygen on a clean Cu(110) surface, using specular He reflection as a probe.

## II Experimental setup

### Segmented capillary based atomic beam source

Figure 1a shows a schematic diagram of our compact atomic beam source. A commercially available stainless steel capillary (syringe needle) with 0.5 mm inner diameter and 0.25 mm wall thickness was machined using a hand-held grinding tool to make openings along its walls, resulting in a segmented structure with an overall length of 50 mm (Fig. 1b). The segmented capillary was mounted on a pair of metal plates (B and C) which were supported using two 6 mm (M6) grub screws mounted on a CF40 flange with a cone-shaped opening in its center. A leak-tight seal between the outer surface of the capillary and the supporting plates was achieved using vacuum compatible glue (Torr-Seal, two part epoxy sealant). A picture of this assembly is shown in Fig. 1c. The segmented capillary consists of three stages acting as long thin channels, each 10 mm in length (segments 1-3, Fig. 1b). These are separated by two 10 mm long segments with openings along the walls on diametrically opposite sides, acting as pumping ports. These openings allow the removal of atoms travelling in off-axis directions, eventually leading to a highly collimated beam emerging from the outlet. Optimal dimensions of the capillary and individual segments were decided with the help of free molecular flow simulations carried out using the software Molflow+ [10; 11].

### Beam width characterization

Figure 2a shows a schematic diagram of the experimental setup used for measuring the width of the atomic beam generated using the segmented capillary source. This source was inserted in a CF40 4-way cross (source chamber), with a needle valve outside to control the He flow into the capillary. The source chamber was attached to a larger vacuum chamber, a CF100 4-way cross (detection chamber). The detector assembly consisted of a differentially pumped sampling tube (25.4 mm diameter) connected to a mass spectrometer (SRS RGA 200), with a slit-shaped opening (approximately 1 mm width and 20 mm length).
The sampling tube and the mass spectrometer were mounted on a single-axis linear manipulator, allowing the detection assembly to be moved in a vertical plane (perpendicular to the atomic beam), thereby enabling angular width measurement. The source chamber, detection chamber and the sampling tube were pumped using turbo molecular pumps (denoted by C, D and E) with nominal speeds of 80 l/s (HiPace 80, Pfeiffer Vacuum), 400 l/s (HiPace 400, Pfeiffer Vacuum) and 80 l/s (HiPace 80, Pfeiffer Vacuum), respectively. All the turbo pumps were backed by a single rotary vane pump with 11 m\({}^{3}\)/hr pumping speed (Duo 11, Pfeiffer Vacuum). Typical steady state pressures (with beam off) in the source and detection chambers were 2\(\times\)10\({}^{-8}\) mbar and 1\(\times\)10\({}^{-8}\) mbar, respectively. To generate the He atom beam, the needle valve was opened in a controlled manner to maintain a steady state pressure of around 3\(\times\)10\({}^{-5}\) mbar in the source chamber. The ultimate background partial pressure of He in the sampling tube was \(\sim\) 3\(\times\)10\({}^{-11}\) mbar with the beam off and, with the beam on, a maximum He signal (at the peak) of 3\(\times\)10\({}^{-9}\) mbar was observed.

Figure 1: (a) Schematic diagram of the segmented capillary based atom beam source. Plates A, B and C were used to hold the segmented needle and were secured using a pair of M6 grub screws. The entire assembly was mounted on a CF40 flange such that the He beam outlet was positioned at the center of a cone-shaped aperture on the flange. (b) A detailed view of the segmented capillary structure, made using a syringe needle. Its overall length was 50 mm, the inner diameter was 0.5 mm and the wall thickness was 0.25 mm. Its walls were machined to create open regions of 10 mm length (pumping ports) between successive segments. He gas was leaked into the inlet side in a controlled manner using a needle valve and a collimated beam emerged from the outlet side. (c) Actual picture of our compact atom beam source.

### Surface coverage and sticking probability measurement using Helium reflectivity

Sticking probability measurements of oxygen on the Cu(110) surface, using He reflectivity, were performed in an independent ultrahigh vacuum (UHV) chamber. This custom designed UHV chamber is a part of a recently built experimental setup for quantum state resolved molecule-surface scattering and reactivity measurements (Fig. 2b). A Cu(110) single crystal (99.9999% pure, 10 mm diameter and 2 mm thickness), cut to a precision better than 0.1\({}^{\circ}\) and polished to a roughness lower than 10 nm (MaTeck Material Technologie & Kristalle GmbH), was used as a target sample. It was mounted on a 4-axis differentially pumped manipulator (XYZ\(\Theta\)) using a pair of 0.25 mm diameter tungsten wires which enabled sample heating. The sample manipulator is equipped with electrical and thermocouple feedthroughs for resistive heating and monitoring the sample temperature (using a K-type thermocouple). This UHV chamber is also equipped with an Ar ion source (IS40, Prevac) for surface cleaning via sputtering and an Auger electron spectrometer (AES, Model: SMG600, OCI Vacuum Microengineering) for checking the surface chemical composition. Additional vacuum chambers denoted by F, G and H in Fig. 2b correspond to a double differentially pumped system, built for quantum state selected molecular beam - surface scattering experiments.
These stages were not used in the present work and were isolated from the UHV chamber using a custom-built Teflon sealed sliding valve. Initial surface cleaning was done using repeated sputtering and annealing cycles, similar to that reported previously [12]. Thereafter, for day-to-day operation, the sample surface was subjected to Ar ion sputtering for a duration of 30 minutes (0.4 \(\mu\)A ion current) at 3 keV ion energy. Under these conditions, impurity levels (mainly carbon) were found to be below the detection threshold of AES. Subsequently, the surface was annealed at 800 K for 20-30 min and allowed to cool down to 300-310 K before conducting the He reflectivity measurements. The base pressure of the UHV chamber in these measurements was \(1.5\times 10^{-9}\) mbar. Under these conditions, we observed that the sample remained clean (as measured by AES) for a duration of 4 hours, which is sufficient for the sticking probability measurements under consideration (typically 15 minutes per measurement). The source chamber with the segmented capillary was mounted on one of the arms of the UHV chamber. Alignment of the capillary with the target surface was checked by sending a laser beam through the capillary inlet. The appropriate sample position was determined by ensuring that the light beam exiting the capillary lands on the center of the sample surface and the reflected light beam enters the sampling aperture. Thereafter, the chamber was pumped and baked out to reach the UHV conditions needed for sticking probability measurements.

Figure 2: (a) Schematic diagram of the experimental setup to characterize the beam width. The segmented capillary based source was placed in the CF40 four-way cross, which was attached to the detection chamber. A differentially pumped sampling tube having a slit (1 mm width, 20 mm length) was moved in a plane perpendicular to the beam using a linear manipulator, for beam width measurement. The source to sampling plane distance was 160 mm. (b) Schematic diagram of the ultra high vacuum chamber setup used to measure the real-time surface coverage of Cu(110) exposed to oxygen, using specular reflection of He. The source chamber was attached at an angle of 50\({}^{\circ}\) (from the target surface normal) and the source to target distance was 130 mm. The specular reflected flux of He from the Cu(110) surface was detected in a similar manner as in (a). This chamber is equipped with an ion source and an Auger electron spectrometer for sample cleaning (sputtering) and chemical composition analysis, respectively.

A well collimated He beam emerging from the capillary outlet, denoted by A in Fig. 2b, was made incident on a clean Cu(110) surface at an incidence angle of 50\({}^{\circ}\) from the surface normal. Specularly reflected He atoms entered a differentially pumped sampling tube through a slit (1 mm width, 20 mm length) and were detected using a mass spectrometer (SRS RGA 200), in a manner similar to that used for beam width characterization. The UHV chamber was pumped by a turbomolecular pump (HiPace 700 H, Pfeiffer Vacuum) which was backed by a dry roots pump (ACP 15, Pfeiffer Vacuum). The He source and the detection stage were pumped by independent turbomolecular pumps (HiPace 80, Pfeiffer Vacuum). These were backed by a rotary vane pump (Duo 11, Pfeiffer). Steady state pressures in the source and the UHV chamber were 2\(\times\)10\({}^{-8}\) mbar and 1.5\(\times\)10\({}^{-9}\) mbar with the He beam off.
With the He beam on, these values were 2\(\times\)10\({}^{-6}\) mbar and 2\(\times\)10\({}^{-9}\) mbar (Cu(110) surface moved away), respectively. Under these conditions, the background partial pressure of He in the sampling tube was 5\(\times\)10\({}^{-11}\) mbar (He beam off) and increased to 8\(\times\)10\({}^{-11}\) mbar with the He beam on. With the Cu(110) surface in the optimal position, the specular reflected signal of He from a clean Cu(110) surface typically corresponded to 2\(\times\)10\({}^{-10}\) mbar, as seen by the mass spectrometer. For the surface coverage dependent He reflectivity and sticking probability measurements, high purity oxygen gas was leaked into the chamber using a precision leak valve (simultaneously with the He beam on). The steady state background pressure with the oxygen leaking in was set to 1\(\times\)10\({}^{-8}\) mbar, corresponding to an incident oxygen flux of approximately 0.01 ML per second. The time integrated oxygen pressure was measured by an ionization gauge and was used to evaluate the incident oxygen dose on the sample. The change in reflected He signal, normalized to the reflectivity of the clean surface (I/I\({}_{0}\)), and the incident oxygen dose were used to obtain the sticking probabilities and the diffuse elastic scattering cross section.

## III Results and Discussions

### Segmented capillary source characterization

Figure 3a shows a snapshot of free molecular flow simulations (using Molflow+) of He flowing through a segmented capillary structure with dimensions similar to that used in our measurements. The two dimensional texture plot (top) depicts the spatial distribution of the generated He beam, mapped at a distance of 160 mm (same as in the beam width measurements) from the exit plane. The texture plot shown below depicts the distribution resulting from a single capillary with the same dimensions (diameter = 0.5 mm and length = 50 mm, no segmented structure). Quite clearly, the segmented structure leads to a much narrower and well collimated beam. Figure 3b depicts a comparison between the angular distributions of the output flux obtained from the experiment and simulations. It is quite evident that the experimental observations are reproduced well by these simulations. Most importantly, a narrow beam with an angular width of 0.7\({}^{\circ}\) (full width half maximum, FWHM), corresponding to a beam size of 1.9 mm at a distance of 160 mm, is observed in our measurements. Considering a source size of 0.5 mm and that the sampling slit is 1 mm wide, we estimate the angular divergence of the He beam to be 7 mrad (FWHM). Based on the pressure changes and the observed beam width, we estimate the flux of He atoms in the beam to be \(\sim 10^{18}\) atoms/(sec str). Differential pumping stages enabled by the open segments effectively suppress the broad tail-like feature in the angular distribution expected for long thin capillaries [1]. Such a collimated beam is well-suited for He atom reflectivity measurements, providing a high signal to background ratio.

Figure 3: (a) Arrangement of the segmented capillaries used for free molecular flow simulations (using Molflow+). The dimensions are the same as those used in the experiments. The texture plots on the top and bottom show the spatial distribution of He atoms (obtained using simulations) emerging from the segmented and a single long capillary of the same overall dimensions, respectively. (b) Comparison of angular distributions obtained from simulations and experiments. The observed width (FWHM) of the angular distributions is 0.7\({}^{\circ}\) and 0.8\({}^{\circ}\) from experiments and simulations, respectively. The position 0\({}^{\circ}\) corresponds to the centerline of the beam.
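The collimating action of the open segments can also be illustrated with a toy free-molecular Monte Carlo model, sketched below. This is not the Molflow+ calculation used in this work: it is a simplified two-dimensional (planar channel) picture with purely diffuse (cosine-law) wall re-emission and no gas-phase collisions, in which any atom reaching the wall line of an open segment is treated as pumped away. The channel dimensions follow the values given above; the particle number and random seed are arbitrary choices.

```python
import math
import random

random.seed(1)

HALF_GAP = 0.25      # channel half-width, mimicking the 0.5 mm bore [mm]
LENGTH = 50.0        # overall channel length [mm]
CLOSED = [(0.0, 10.0), (20.0, 30.0), (40.0, 50.0)]   # walled segments [mm]
# the 10-20 mm and 30-40 mm stretches act as the open pumping ports

def lambertian():
    """2-D cosine-law emission angle about a surface normal [rad]."""
    return math.asin(random.uniform(-1.0, 1.0))

def trace(segmented, max_bounces=500):
    """Follow one atom; return its exit angle from the axis [rad] or None if lost."""
    y, z = random.uniform(-HALF_GAP, HALF_GAP), 0.0
    ang = lambertian()                       # injection: cosine law about the axis
    dy, dz = math.sin(ang), math.cos(ang)
    for _ in range(max_bounces):
        if abs(dy) < 1e-12:                  # travelling parallel to the walls
            return math.atan2(dy, dz) if dz > 0 else None
        wall = HALF_GAP if dy > 0 else -HALF_GAP
        z_hit = z + dz * (wall - y) / dy     # axial position of the next wall strike
        if dz > 0 and z_hit >= LENGTH:
            return math.atan2(dy, dz)        # escapes through the outlet
        if dz <= 0 and z_hit <= 0.0:
            return None                      # escapes back through the inlet
        y, z = wall, z_hit
        if segmented and not any(lo <= z <= hi for lo, hi in CLOSED):
            return None                      # removed through an open pumping port
        ang = lambertian()                   # diffuse re-emission from the wall
        dy = -math.cos(ang) if wall > 0 else math.cos(ang)
        dz = math.sin(ang)
    return None

N = 20000
for tag, seg in [("segmented", True), ("plain tube", False)]:
    exits = [a for a in (trace(seg) for _ in range(N)) if a is not None]
    if exits:
        med = sorted(abs(math.degrees(a)) for a in exits)[len(exits) // 2]
    else:
        med = float("nan")
    print(f"{tag:10s}: transmission {len(exits) / N:.3%}, "
          f"median |exit angle| {med:.2f} deg")
```

Even this crude model shows the expected qualitative behaviour: the segmented geometry transmits fewer atoms overall, but confines the transmitted atoms to a far narrower angular range than a plain tube of the same length and bore.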
The position 0\({}^{\circ}\) corresponds to the centerline of the beam. collimated beam with is well-suited for He atom reflectivity measurements providing a high signal to background ratio. ### Oxidation of Cu(110) monitored using He reflectivity Surface cleanliness of the Cu(110) sample was checked using AES (Fig. 4). The lower (blue) and the middle (red) curves represent the spectrum obtained before and after sample cleaning (Ar ion sputtering), respectively. Once the major contaminants carbon (272 eV) and oxygen (503 eV) were removed and only prominent features corresponding to copper (700-920 eV) remained, the sample was annealed (800 K, 30 min) and cooled down to 300 K. Following this, the clean Cu(110) surface was exposed to oxygen, leaked into the chamber through a precision leak valve, corresponding to a dose of approximately 25 ML. Subsequent AES measurements (Fig. 4, grey curve) showed that the surface is covered with oxygen. It was also noted that under these conditions the surface oxygen coverage was saturated and no further increase in oxygen signal was observed with additional oxygen exposure. This is consistent with previous studies where dissociative chemisorption of oxygen on Cu(110) has been studied and on exposures greater than 10 ML the surface coverage was observed to be saturated [13; 14; 15; 16] Having established appropriate conditions for preparing the oxygenated Cu(110) surface in a controlled manner, we measured the evolution of surface coverage in real-time using specular He reflection as a probe. Upon exposure to oxygen, normalized reflected He signal (I/I\({}_{0}\)) decreased with time and ultimately reached a steady state value of about 0.4 times its initial value (Fig. 5a). These observations clearly show that as the oxygen coverage on the Cu(110) surface builds up, He reflectivity decreases. At longer times (\(>\) 1000 sec), the surface was saturated with oxygen (as confirmed by AES) and no further change in He reflectivity was observed. This decrease in specular reflected He signal is attributed to increased diffuse scattering of incident He atoms on the oxygen covered Cu(110) surface. These observations are consistent with those expected from several He scattering studies reported previously [3; 6], where the adsorbate induced disorder on the surface leads to reduced specular reflection of He atoms. In order to evaluate surface coverage and initial sticking probability using this data, a relation among He reflectivity and surface coverage needs to be established first. Previous studies using low energy electron diffraction show that saturation of oxygen on Cu(110) surface corresponds to a coverage of 0.5 ML. Using this information and the fact that steady state He reflected signal corresponds to an oxygen saturated surface (as confirmed using AES), we obtain the following relation: I/I\({}_{0}\) = 1 corresponds to zero coverage and the steady state He reflectivity (\(\sim\)0.4) following oxygen exposure corresponds to 0.5 ML oxygen coverage (saturation). Figure 5b shows a plot of surface coverage vs oxygen dose obtained using the above method. Oxygen dose in terms of monolayers was estimated from the change in background pressure (after leaking in oxygen) and considering surface atom density of Cu(110) to be 1.09\(\times\)10\({}^{15}\) atoms/cm\({}^{2}\). Quite clearly, the observed trend follows the expected behaviour where the rate of sticking is proportional to the number of unoccupied adsorption sites available. Red line (Fig. 
5b) corresponds to a best fit using a model \(0.5(1-e^{-k\phi_{i}})\), where \(\phi_{i}\) represents the incident O\({}_{2}\) dose; the corresponding initial sticking probability is \(S_{0}=k/4\) (considering that the saturation coverage is 0.5 ML and each O\({}_{2}\) dissociation gives rise to two O-atoms adsorbed on the surface). The dashed blue curve represents a linear fit to the initial part of the curve, which results in an initial sticking probability of 0.35 (= slope/2). It should be noted that for the same system, a sticking probability of 0.23 has been reported previously [17; 15; 18]. These systematic differences likely arise from the fact that in our case the oxygen dosage estimation was made using the pressure values directly obtained from the ion gauge using the typical gas sensitivity factors, without any additional calibration. Nonetheless, we checked the repeatability of our observations by performing an additional five independent measurements using the same method (see SI-1). Overall, the \(S_{0}\) values obtained ranged from 0.33 to 0.37, showing good consistency among the results and validating the utility of this method. Additionally, based on the initial rate of change in I/I\({}_{0}\) with respect to O-atom coverage, we estimate the diffuse elastic scattering cross section (= \(\frac{d(I/I_{0})}{d\phi}\)) [5] of He from the adsorbed O-atoms to be 90 Å\({}^{2}\) (assuming \(S_{0}\) = 0.23).

Figure 4: Auger electron spectra of the Cu(110) surface, measured before cleaning (lower curve, blue), after cleaning (middle curve, red) and after the oxygen dosing (approximately 25 ML). Characteristic peaks at 272 eV and 503 eV correspond to carbon and oxygen on the sample while peaks in the 700 - 920 eV region correspond to Cu. Inset shows a zoomed view of the peaks resulting from oxygen.

## IV Concluding remarks

In this work, we have successfully demonstrated the design, development and characterization of a simple, compact atomic beam source based on a segmented capillary design. It produces a highly collimated beam of He atoms with an angular divergence of 7 mrad and a brightness of \(\sim 10^{18}\) atoms/(sec str). Further, we demonstrate an application of this compact atomic beam source by using it for measuring the real-time surface coverage and initial sticking probability of oxygen on a clean Cu(110) surface, by means of measuring its He reflectivity. The compact footprint and relatively simple design, unlike conventional differentially pumped vacuum chamber systems, allow for easier integration into our molecule-surface scattering experimental setup. This design is flexible in the sense that the angular width can be easily adjusted by choosing an appropriate L/d ratio and segment length, without having to make any major changes to the vacuum chamber itself. We believe that this design is very valuable for quantum state resolved chemisorption experiments, currently being developed in our lab. Here, He reflectivity measurements using such a compact and simple source allow for a very practical way of measuring surface coverage in a real-time, non-destructive, highly sensitive and universal manner. This design can be further improved by using an optical window at the inlet side of the capillary. This will allow the in-vacuum alignment (using a laser beam) of the atomic beam, which is currently not possible in the present setup. 
Further, using two sequential slits for additional differential pumping in the detection setup is expected to provide a much higher background rejection, leading to a higher signal to background ratio and ultimately better detection sensitivity. These improvements will be considered for future versions of this set up. Additionally, we also envisage that such a design is potentially useful for real-time, non destructive monitoring of the growth of thin films on atomically flat surfaces, especially where layer by layer growth occurs. Also, such a design is expected to be generally useful in several situations where a highly directed flux of atoms/molecules is required with a compact footprint. ## Supplementary Information * SI-1: Repeated surface coverage measurements ## Data availability All relevant data related to the current study are available from the corresponding author upon reasonable request. ## Acknowledgements This work was partly supported by intramural funds at TIFR Hyderabad from the Department of Atomic Energy and Scientific and Engineering Research Board, Department of Science and Technology (grant numbers: ECR/2018/001127 and CRG/2020/003877). We thank Rakesh Moodike (institute workshop) for suggesting the Figure 5: (left) Specular He signal (normalized) of the Cu(110) surface as a function of time, as the surface was exposed to oxygen (red arrow). The reflected He signal as observed by the mass spectrometer is also shown on the y-axis on the right (b) Surface coverage of O-atoms as a function of oxygen exposure (in ML), obtained using the data shown in the left panel. Red curve shows a fit to a model assuming 0.5 ML as the saturation coverage. Blue line (dashed) shows a linear fit to the initial part of the curve. We estimate the initial sticking probability to be 0.35 (using linear fit). use of stainless steel capillary, fabricating the segmented structure and components for the detection assembly. ## Author Contributions GB performed the simulations, designed the necessary components and characterized the performance of the segmented capillary source with inputs from PRS. SKS contributed to preparing the UHV chamber used to conduct the sticking probability measurements with inputs from PRS. GB and SKS performed the sticking probability measurements using He reflectivity and analyzed the data. PRS conceptualized the project. GB and PRS prepared the manuscript with inputs from SKS. All authors discussed the results, analysis and contributed to the manuscript.
2309.07250
All you need is spin: SU(2) equivariant variational quantum circuits based on spin networks
Variational algorithms require architectures that naturally constrain the optimisation space to run efficiently. In geometric quantum machine learning, one achieves this by encoding group structure into parameterised quantum circuits to include the symmetries of a problem as an inductive bias. However, constructing such circuits is challenging as a concrete guiding principle has yet to emerge. In this paper, we propose the use of spin networks, a form of directed tensor network invariant under a group transformation, to devise SU(2) equivariant quantum circuit ansätze -- circuits possessing spin rotation symmetry. By changing to the basis that block diagonalises SU(2) group action, these networks provide a natural building block for constructing parameterised equivariant quantum circuits. We prove that our construction is mathematically equivalent to other known constructions, such as those based on twirling and generalised permutations, but more direct to implement on quantum hardware. The efficacy of our constructed circuits is tested by solving the ground state problem of SU(2) symmetric Heisenberg models on the one-dimensional triangular lattice and on the Kagome lattice. Our results highlight that our equivariant circuits boost the performance of quantum variational algorithms, indicating broader applicability to other real-world problems.
Richard D. P. East, Guillermo Alonso-Linaje, Chae-Yeun Park
2023-09-13T18:38:41Z
http://arxiv.org/abs/2309.07250v1
# All you need is spin: SU(2) equivariant variational quantum circuits based on spin networks ###### Abstract Variational algorithms require architectures that naturally constrain the optimisation space to run efficiently. In geometric quantum machine learning, one achieves this by encoding group structure into parameterised quantum circuits to include the symmetries of a problem as an inductive bias. However, constructing such circuits is challenging as a concrete guiding principle has yet to emerge. In this paper, we propose the use of _spin networks_, a form of directed tensor network invariant under a group transformation, to devise SU(2) equivariant quantum circuit ansatze - circuits possessing spin rotation symmetry. By changing to the basis that block diagonalises SU(2) group action, these networks provide a natural building block for constructing parameterised equivariant quantum circuits. We prove that our construction is mathematically equivalent to other known constructions, such as those based on twirling and generalised permutations, but more direct to implement on quantum hardware. The efficacy of our constructed circuits is tested by solving the ground state problem of SU(2) symmetric Heisenberg models on the one-dimensional triangular lattice and on the Kagome lattice. Our results highlight that our equivariant circuits boost the performance of quantum variational algorithms, indicating broader applicability to other real-world problems. ## 1 Introduction Variational algorithms are prominent across physics as well as computer science with particularly fruitful applications in machine learning, condensed matter physics, and quantum chemistry [1, 2, 3, 4]. In such areas, a parameterized function, often called an ansatz, is used to model a probability distribution or a quantum state, and parameters are optimised by minimising a cost function. However, this simple principle does not work without properly chosen ansatze when dealing with a huge parameter space [5]. For this reason, researchers often incorporate an _inductive bias_ into their algorithms [6]. An inductive bias is prior knowledge about the system under investigation that can be included in the algorithm to restrict our function classes. Thus the parameterised function favours a better class of outputs for a given target problem. In classical machine learning, for example, it is known that the great success of convolutional neural networks (CNNs) is based on the fact that they contain 'layers', essentially parameterised maps, which encode the idea that the content of an image does not change when shifted. Specifically, these convolutional layers are (approximately) translation equivariant: When one shifts the input state by \(n\) pixels up and \(m\) bits down, the output is also shifted in the same way [7, 8]. Geometric deep learning naturally extends this framework to arbitrary groups [9], suggesting the use of group equivariant layers for learning data with symmetric properties. Neural networks consisting of group equivariant layers have indeed reported better performance for classifying images [7], point clouds [10], and in the modelling of dynamical systems [11]. More broadly they have also been used in a general variational context for tasks such as identifying the ground state of molecules [12]. Recently, the idea of geometric machine learning has been combined with quantum machine learning (QML). 
Generally speaking, QML algorithms [13] hope to find an advantage over classical algorithms in ML tasks by exploiting the quantum nature of Hilbert space using parameterised quantum circuits. Despite its potential, however, the trainability and generalization performance of QML algorithms without tailored circuit ansatze often scale poorly, limiting their usability for more than tens of qubits [14]. Because of this, recent studies introduced geometric quantum machine learning (GQML) as a guiding principle for constructing a quantum circuit ansatz. The literature shows these symmetry-informed circuits have been successful in offering better trainability and generalization performance [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]. In the GQML setup, the symmetry group \(\mathrm{SU}(2)\) is particularly interesting as it naturally arises in quantum systems with rotational symmetry. It also corresponds to a natural symmetry of qubits which can be seen as a product of spin-\(\frac{1}{2}\) states. While QML algorithms with the \(\mathrm{SU}(2)\) symmetry have been previously studied in Refs. [22, 24, 26], implementing the proposed circuit ansatze in quantum hardware was not straightforward. For example, Ref. [24] proposed twirling as a constructive principle for equivariant gates, but computing this twirling formula for a many-qubit gate is highly non-trivial as it involves the summation over the symmetric group (thus over \(n!\) terms). In contrast, Ref. [26] showed that a certain form of elements in an algebra generated by the symmetric group (formally written as \(\mathbb{C}[S_{n}]\)) can be seen as \(\mathrm{SU}(2)\) equivariant quantum circuits. Nonetheless, these circuits do not admit a simple decomposition to few-qubit gates (implementable on quantum hardware). In this paper, we propose an alternative approach to construct \(\mathrm{SU}(2)\) equivariant circuits. Our circuit ansatze, dubbed _spin-network circuits_, are inspired by spin networks, \(\mathrm{SU}(2)\) equivariant tensor networks. A core tool for us will be the _Schur_ gate (or map, we will use these terms interchangeably) which sends us from a qubit basis to a spin-basis. For example for two qubits, it provides the following mapping \(|J=0,J_{z}=0\rangle=|01\rangle-|10\rangle\), \(|J=1,J_{z}=1\rangle=|00\rangle\), \(|J=1,J_{z}=0\rangle=|01\rangle+|10\rangle\), and \(|J=1,J_{z}=-1\rangle=|11\rangle\) where \(J\) is the total angular momentum of two qubits and the \(J_{z}\) is its \(z\)-direction component. The advantage of this basis is that it leaves the matrix representations block-diagonal in the total angular momenta [27]. We make use of this by applying certain unitaries to these blocks that allow us to directly parameterise the equivariant maps that make up spin networks. This approach to parameterising equivariant maps via their block decomposition as a QML method coincides directly with what is highlighted in Refs. [28, 22]. Furthermore, we prove that our circuit is mathematically equivalent to other constructions using the representation theory of \(\mathrm{SU}(2)\). In particular, we prove that both our gates and gates from the twirling formula [22, 24] can be written in the form of generalised permutations as introduced in Refs. [20, 26]. When restricted to unitary operators, all the three constructions give the same set of gates. Our main theoretical tool is the Schur-Weyl duality which, roughly speaking, posits a duality between SU(2) and the symmetric group \(S_{n}\). While Refs. 
[19, 28, 22] already introduced a general theory of equivariant circuits for arbitrary Lie groups, thus presenting a part of our results in a slightly different manner, we develop a theory specifically for the SU(2) group and provide a concrete example using the three-qubit equivariant gate. We additionally show that the proposed three-qubit gates can be useful for solving a real-world problem with supporting numerical results for SU(2) symmetric models. While our circuits can be used for usual machine learning tasks, e.g., classifying rotationally invariant data, we choose the problem of finding the ground state of SU(2) symmetric Hamiltonians as it provides a better benchmark platform for classically simulated QML models (with \(\sim 20\) qubits). In particular, we solve the Heisenberg model on one-dimensional triangular and Kagome lattices, which have the SU(2) symmetry but are tricky for Monte Carlo based classical algorithms due to the sign problem [29, 30]. We show that our circuit ansatze give accurate ground states with a common parameter optimization technique, which demonstrates the efficiency of our method and justifies the use of our SU(2) equivariant circuits for appropriately symmetric variational and QML problems more generally. The paper is organised as follows. In Sec. 2, we introduce the preliminaries needed to understand the other sections: The representation theory for SU(2), spin coupling, and spin networks. In Sec. 3, we introduce our ansatze termed _spin-network circuits_ which are parameterisable unitary quantum circuits that are also spin networks. To this end the aforementioned Schur gate will be introduced which will be a core technical component in creating our parameterisations. We also concretely present the two and three-qubit unitary _vertex_ gates. In Sec. 4, results are presented showing that all SU(2) equivariant unitaries are a form of generalised permutation. This directly connects the work here with that on permutational quantum computing (PQC) [31, 32] and in particular PQC+ as outlined in Ref. [26]. We also discuss the relation with the twirling method introduced in Ref. [24] showing how all SU(2) equivariant gates, i.e., generalised permutations, are the same as the set of all unitary gates generated by twirled Hermitian operators. Next, in Sec. 5, we present the efficacy of the introduced vertex gates by solving the Heisenberg model defined on one-dimensional triangular lattice as well as the two-dimensional Kagome lattice. We then discuss the implications of our results and the connections to the broader literature with a particular focus on PQC+ and loop quantum gravity in Sec. 6 and conclude with a short remark in Sec. 7. Overall the new contributions of this work are the following: We introduce an SU(2) equivariant quantum circuit ansatz based on spin networks. We provide a number of numerical simulations validating their efficacy; in particular by solving the Heisenberg model on the Kagome lattice. We connect the theory of equivariant operators as seen in the geometric quantum machine learning literature [22] to the work done on PQC+ [20]. ## 2 Preliminaries Groups and their representationThroughout the paper, we are interested in quantum gates that are equivariant under the SU(2) group transformation. The group SU(2) itself is part of a larger class of groups known as SU(\(N\)) is made up of \(N\times N\) unitary matrices with a determinant of 1. 
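As a concrete handle on these objects, the following minimal NumPy/SciPy sketch (the helper name `random_su2` is ours and purely illustrative) draws a random element \(e^{-i\phi\,\boldsymbol{\sigma}\cdot\hat{n}/2}\) of SU(2) and checks the two defining properties just mentioned: unitarity and unit determinant.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def random_su2(rng):
    """Random SU(2) element exp(-i*phi*(n.sigma)/2) for a random axis n and angle phi."""
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)
    phi = rng.uniform(0, 2 * np.pi)
    return expm(-1j * phi * (n[0] * X + n[1] * Y + n[2] * Z) / 2)

rng = np.random.default_rng(0)
U = random_su2(rng)
assert np.allclose(U.conj().T @ U, np.eye(2))  # unitary
assert np.isclose(np.linalg.det(U), 1.0)       # determinant 1
```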
Formally, we can define an SU(2) equivariant gate as a quantum gate \(T\) satisfying \[U^{\otimes n}T=TU^{\otimes n}, \tag{1}\] for all \(U\in\mathrm{SU}(2)\), where \(n\) is the number of qubits in a circuit. If we consider a circuit \(C\) constructed with those gates, thus satisfying \(CU^{\otimes n}=U^{\otimes n}C\), one can create an \(\mathrm{SU}(2)\)-invariant output state given an \(\mathrm{SU}(2)\)-invariant input state. If \(\ket{\psi_{0}}\) is an input state satisfying \(\ket{\psi_{0}}=U^{\otimes n}\ket{\psi_{0}}\) (we will see an example of such states in Sec. 3), we have \[U^{\otimes n}C\ket{\psi_{0}}=CU^{\otimes n}\ket{\psi_{0}}=C\ket{\psi_{0}}. \tag{2}\] Thus such a circuit \(C\) can be used for learning tasks involving rotationally invariant data, e.g., finding ground states of Heisenberg spin models or classifying point-sets. The symmetry we consider here is tightly connected to the notion of groups and their representation. Recall that a group \(G=\{g_{i}\}\) is a set with a map acting on two of its elements \(g_{1}\cdot g_{2}=g_{3}\) such that there is an identity \(e\cdot g=g\), the operations are associative \(g_{1}\cdot(g_{2}\cdot g_{3})=(g_{1}\cdot g_{2})\cdot g_{3}\), and there is an inverse for all elements \(g\cdot g^{-1}=e\). It is also natural to consider an action of a group on a vector. For example, a rotation \(R\in\mathrm{SO}(3)\) acts on a three dimensional (real) vector and transforms it. This type of action (on a vector space) is called a _representation_ of a group. Formally speaking, a group representation is a map \(R:G\to\mathrm{GL}(V)\) from the group to the space of invertible linear maps of a vector space \(V\) (or equivalently, invertible matrices of dimension \(N\) if \(\dim(V)=N\)) such that \(R(g_{1}\cdot g_{2})=R(g_{1})\cdot R(g_{2})\). In essence, it is a map from the group to linear maps that preserves the group structure. For a system with a single qubit, a simple map \(R(U)=U\) for \(U\in\mathrm{SU}(2)\) already defines a representation. One can readily extend this representation to a \(n\)-qubit system by defining \(\tilde{R}(U)=U^{\otimes n}\), which is also a representation (as \(\tilde{R}(U_{1}U_{2})=(U_{1}U_{2})^{\otimes n}=U_{1}^{\otimes n}U_{2}^{\otimes n }=\tilde{R}(U_{1})\tilde{R}(U_{2})\)). We can then see that to find \(\mathrm{SU}(2)\) equivariant gates for an \(n\)-qubit system, we have to pay attention to the representation \(\tilde{R}\). Studying the representation of symmetry introduces the concept of _irreducible representations_ (irreps, for short). Firstly, a sub-representation \(W\) of \(V\) is a subspace \(W\leq V\) which satisfies \(R(g)W=\{R(g)w:w\in W\}\subseteq W\) for all \(g\in G\). Then we say a representation \(R:G\to\mathrm{GL}(V)\) is irreducible if it does not have any non-trivial sub-representations, i.e., if \(W\leq V\) and \(R(g)W=\{R(g)w:w\in W\}\subseteq W\) for all \(g\in G\), then \(W=0\) or \(W=V\). Thus by decomposing \(n\)-qubit system to vector spaces of different spin numbers (which is always possible by the Peter-Weyl theorem), we may be able to find a structure of equivariant gates. Indeed as we shall see, the _Schur map_ sends equivariant operators into a block diagonal form. This form will allow us to explicitly design such maps. From qubits to spinsA spin is an irreducible representation of the \(\mathrm{SU}(2)\) group. This vector space is spanned by basis vectors \(\{\ket{J,J_{z}}:-J\leq J_{z}\leq J\}\) where \(2J\) is an integer (e.g., \(J=0\), \(J=\frac{1}{2}\), \(J=1\), \(J=3/2\), etc.). 
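Before moving on, a small numerical sanity check of the equivariance condition in Eq. (1) and of the invariant-input-state property in Eq. (2) may be helpful (a sketch of ours, using plain NumPy/SciPy): the SWAP gate, which merely exchanges the two spin-\(\frac{1}{2}\) factors, commutes with \(U^{\otimes 2}\), and the singlet \(|01\rangle-|10\rangle\) is left invariant by \(U^{\otimes 2}\).

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices and a generic SU(2) element U = exp(-i (a X + b Y + c Z) / 2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
U = expm(-1j * (0.4 * X + 1.3 * Y - 0.7 * Z) / 2)
UU = np.kron(U, U)  # the representation U (x) U on two qubits

# SWAP is SU(2) equivariant: SWAP U^{x2} = U^{x2} SWAP  (Eq. (1) with T = SWAP)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
assert np.allclose(SWAP @ UU, UU @ SWAP)

# The singlet (|01> - |10>)/sqrt(2) is an SU(2)-invariant input state, cf. Eq. (2)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
assert np.allclose(UU @ singlet, singlet)
```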
Physically \(J\) and \(J_{z}\) correspond to the quantised total angular momentum and the angular momentum in the \(z\)-direction, respectively (though the \(z\)-direction is a convention choice, any would do). For each allowed value of \(J\), we call the corresponding vector space a spin-\(J\) system. A qubit is naturally identified as a spin-\(\frac{1}{2}\) particle, by a mapping \(\ket{0}=\ket{J=\frac{1}{2};J_{z}=\frac{1}{2}}\) and \(\ket{1}=\ket{J=\frac{1}{2};J_{z}=-\frac{1}{2}}\). When we take two qubits, we are thinking of the basis elements \(\{\ket{00},\ket{01},\ket{10},\ket{11}\}\). Consider the angular momentum of two qubits (or two spin-\(\frac{1}{2}\) particles, equivalently). It is well known that when one considers two spin-systems of momenta \(J_{1}\) and \(J_{2}\) in terms of their joint angular momentum the possible total angular momentum \(J\) measurements range from \(J=\ket{J_{1}-J_{2}}\) to \(J_{1}+J_{2}\). Thus two qubits have the two total angular momentum possibilities of \(J=0\) and \(J=1\). To get the full basis, we must also include the possible \(J_{z}\) values that range from \(-J\) to \(J\) in steps of 1 [33]. In general, we can always move from a basis of qubits to a basis of angular momenta by considering pairwise coupling of qubits and subsequent spins, which amounts to considering the possible angular momentum outcomes of a measurement of each pairing. This coupling scheme is depicted in Fig. 1. For more than two spins, we will have a choice of the order in which we do this. The different orders of pairing the spin-systems amount to different bases (as they correspond to different choices of complete measurements) which we can describe by branching tree-like structures. In Fig. 2 we can see this for three qubits. In later discussion, we will use \(J_{\mathcal{J}}=\mathbb{C}^{2\mathcal{J}+1}\) to denote a spin-\(\mathcal{J}\) system. For example, \(J_{1/2}=\mathbb{C}^{2}\) is a vector space for spin-\(\frac{1}{2}\) system, i.e., a qubit. Spin networksWe now consider a generalization of equivariant gates using the notion of multi-linear maps. Let us first recall properties of spin-\(1/2\) kets and bras under \(g\in\mathrm{SU}(2)\): \[\ket{a}\xrightarrow{g}g\ket{a} \tag{3}\] \[\bra{b}\xrightarrow{g}\bra{b}g^{\dagger}, \tag{4}\] where \(g=e^{-i\phi\boldsymbol{\sigma}\cdot\hat{n}/2}\in\mathrm{SU}(2)\). Here, \(\boldsymbol{\sigma}=\{\sigma_{x},\sigma_{y},\sigma_{z}\}\) is a vector of \(2\times 2\) Pauli matrices, \(\hat{n}\) is a normal vector indicating the direction of the rotation, and \(\phi\) is the angle we rotate. By identifying kets as vectors and bras as dual vectors, we can generalize the above principle by considering an arbitrary spin-\(\mathcal{J}\) system given as \(V=J_{\mathcal{J}}=\mathbb{C}^{2\mathcal{J}+1}\). Then \(\ket{a}\in V\) and \(\bra{b}\in V^{*}\) changes to \[\ket{a}\xrightarrow{g}R(g)\ket{a} \tag{5}\] \[\bra{b}\xrightarrow{g}\bra{b}R(g)^{\dagger} \tag{6}\] under the group transformation, where \(R(g)\) is a representation of \(g\in\mathrm{SU}(2)\). Specifically, it is a \(2\mathcal{J}+1\) by \(2\mathcal{J}+1\) unitary matrix given by \(e^{-i\phi\boldsymbol{\mathcal{J}}\cdot\hat{n}}\) which is a representation of \(e^{-i\phi\boldsymbol{\sigma}\cdot\hat{n}/2}=g\in\mathrm{SU}(2)\). Here, \(\boldsymbol{J}=\{J_{x},J_{y},J_{z}\}\) is a vector of \(2\mathcal{J}+1\) by \(2\mathcal{J}+1\) spin matrices satisfying \([J_{a},J_{b}]=i\epsilon_{abc}J_{c}\) for all \(a,b,c\in\{x,y,z\}\) where \(\epsilon_{abc}\) is the Levi-Civita symbol. 
The above principle also induces group transformation formulas for other expressions. For example, one can see that the inner product \(\bra{a}\!b\rangle\) is invariant under the group transform as \[\bra{b}\!a\xrightarrow{g}\bra{b}R(g)^{\dagger}R(g)\ket{a}=\bra{b}\!a\rangle. \tag{7}\] Figure 1: Graphical presentation of the basis constructed by combining angular momentum of two spin-\(\frac{1}{2}\) systems and the possible outcomes of total and \(z\)-directed angular momenta. These can be seen as two spin networks, corresponding to the two possible total angular momentum values on the bottom edge, with specific \(\ket{J;J_{z}}\) states chosen for the bottom edges Hilbert spaces. Note that the last equality is obtained as \(R(g)\) is unitary. Next, let us consider a linear map \(T:V\to V\). As \(T\) can be written as \(T=\sum_{ij}t_{ij}\left|i\right\rangle\left\langle j\right|\in V\otimes V^{*}\), we know it changes to \[T\xrightarrow{g}R(g)TR(g)^{\dagger} \tag{8}\] under the transformation. We now add a constraint that a linear map \(T\) also preserves the group structure. In other words, we require \(T\) to satisfy \[R(g)(T\left|a\right\rangle)=T(R(g)\left|a\right\rangle) \tag{9}\] for all \(g\in G\) and \(\left|a\right\rangle\in V\), which implies that \(R(g)^{\dagger}TR(g)=T\) (or equivalently, \(T=R(g)TR(g)^{\dagger}\)). As \(R(g)TR(g)^{\dagger}\) is nothing but \(T\) after the group transformation, a linear map preserving the group Figure 2: Graphical depiction of a coupling basis of three qubits, where the pairwise coupling of the spaces proceeds from the left (other possibilities give alternative bases). Each row of trees is indexed by the possible total angular momenta that can occur for each composition of two systems. The elements in the rows correspond to the different states these correspond to given a final \(J_{z}\) value on the spaces at the bottom of the trees. Note how the top two rows of diagrams index spaces with the same total angular momentum at the base but that the patterns of coupling that form them are distinct. In Sec. 4, we will see that this allows for the mixing of such states because SU(2) equivariant maps cannot distinguish the two spin coupling structures. Note that in absence of specifying the \(J_{z}\) values the set of diagrams on each row correspond to three separate spin networks as the SU(2) invariance on three-valent networks reduces to spin-coupling rules, this is discussed in more detail in Appendix A. structure is a matrix that is invariant under the group transformation (given by conjugation with \(R(g)\)). One may further extend this property to multilinear maps (tensors). For example, a two-qubit gate is a linear map \(T\) between \(V^{\otimes 2}\) and \(V^{\otimes 2}\) (where \(V=J_{1/2}=\mathbb{C}^{2}\) in the standard formulation). If we add the equivariant condition to this gate, i.e., \(R(g)^{\otimes 2}T=TR(g)^{\otimes 2}\), this is nothing but the condition for a group-structure preserving map. As a two-qubit gate \(T\) can be considered as an element of \(V^{\otimes 2}\otimes(V^{*})^{\otimes 2}\), \(T\) becomes \[T\xrightarrow{g}R(g)^{\otimes 2}T(R(g)^{\dagger})^{\otimes 2}=T, \tag{10}\] under the group transformation, where the last equality is from the equivariant condition. Thus there is one-to-one correspondence between group-structure preserving maps and group-invariant tensors1. 
In Figure 3: A three valent spin network as typically presented in the broader literature: an edge labelled graph (though directed this is often suppressed in depictions since the spaces are isomorphic). In the three-valent case the edge labels are spins such that around any vertex they meet the Clebsch-Gordan conditions \(j_{1}+j_{2}+j_{3}\in\mathbb{N}\) and \(|j_{1}-j_{2}|\leqslant j_{3}\leqslant j_{1}+j_{2}.\) which can be shown to exactly match when the vertex is an invariant subspace of SU(2) (See Appendix A for more details). other words, if we consider a general (possibly non-unitary) linear map between \(V^{\otimes n}\) and \(V^{\otimes m}\) (where \(n\) and \(m\) can be different integers) preserving the group structure, it can be seen as a group-invariant tensor with \(n\) input legs and \(m\) output legs [34, 35] (often called a tensor of type \((n,m)\)). Now we consider a tensor network which consists of SU(2) invariant tensors with contraction edges that run over irreps of SU(2). This special type of network is called a "spin network"; an example from the broader literature can be seen in Fig. 3. These were originally introduced by Penrose [36] in the very different context of a combinatorial derivation of space-time. In modern physics, they are typically discussed as the basis of quantised space in the covariant formulation of loop quantum gravity [37] (though not the focus of this work, interested readers can look Appendix C for the connection). Roughly, a spin network is a directed graph where each edge has an associated spin and each vertex \(v\) has an associated equivariant map from the tensor product of the incoming spins to the tensor product of the outgoing spins. Formally, we describe this as a graph detailing the connectivity of vertices \(v\) with incoming edges \(e_{in}\) and outgoing ones \(e_{out}\) such that for every vertex, there is an associated map \(T_{v}\) such that \(T_{v}\in\bigotimes_{i\in e_{in}}\bigotimes_{o\in e_{out}}J_{j_{i}}\otimes J_{ j_{o}}^{*}\), where \(J_{j_{i}}\) and \(J_{j_{o}}\) are the incoming and outgoing respective Hilbert spaces. We further require \(T_{v}\) to satisfy the equivariant condition \[\bigotimes_{i\in e_{in}}\bigotimes_{o\in e_{out}}T_{v}\left(R_{j_{i}}(g)J_{j_{ i}}\otimes J_{j_{o}}\right)=\bigotimes_{i\in e_{in}}\bigotimes_{o\in e_{out}}T_{v} \left(J_{j_{i}}\otimes R_{j_{o}}(g)J_{j_{o}}\right)\hskip 28.452756pt\forall g \in G,\quad\forall v, \tag{11}\] where \(R_{j_{i}}(g)\) and \(R_{j_{o}}(g)\) are the representations of the group element \(g\) acting on the \(J_{j_{i}}\) and \(J_{j_{o}}\), respectively. From the discussion above, each map associated with a vertex (\(T_{v}\)) can be regarded as a group-invariant tensor. In this way, spin networks are a form of tensor network where the composing tensors are elements in the invariant sub-spaces of a group and the contraction is over spin-spaces of size \(2J+1\). For a more detailed description of these objects, we direct the reader to Appendix A. For our interests, it is sufficient to say that we can build a quantum circuit that is inherently SU(2) equivariant by restricting to specific spin networks whose vertices can be interpreted as parameterised qubit unitaries. Within the literature, spin networks that form binary trees have been particularly prominent. The simplest example is of the kind seen in Fig. 
1 where we ignore the specification of the \(J_{z}\) state at the bottom and focus only on the total angular momentum (so there are just two unique diagrams from this perspective). A more general example is provided by Fig. 2 where we have three spin-spaces coming together which naturally leads to three possible spin networks, specifically one for each row. The reason the columns are not different networks is because they amount to fixing a choice of \(J_{z}\) value on one edge, which is a choice of contraction index (i.e., final projection). Thus such a fixing does not alter the spin-spaces in the definition of the network2. It should be noted that spin networks have previously been considered in the broader quantum information literature as diagrammatic qubit maps and as variational maps for numerical investigations of LQG on quantum computers Refs. [38, 39, 40, 41] though never as general SU(2) equivariant variational ansatze. Footnote 2: The careful reader might note that here we are simultaneously looking at diagrams that correspond to the rules of angular momentum addition and saying these match to the definition of the vertices being SU(2) invariant sub-spaces. The connection is outlined in Appendix C where we see that the invariant spaces can be decomposed in terms of Clebsch-Gordan coefficients which are the exact same elements used in deriving angular momentum decompositions. Spin-network circuits In this section, we outline circuit ansatze designed based on the principles of spin networks. To show their utility, we present concrete examples which in turn are used for our simulations further below in Sec. 5. Due to the circuits' mathematical equivalence to certain types of spin network, they are explicitly SU(2) equivariant. While the core ideas are outlined here, we discuss the finer points, related concepts, and generalisations in Appendix A. Our circuits, termed _spin-network circuits_, are a specific form of spin network. They are spin networks where all vertices have an even number of external wires, and every wire in the network is spin-\(\frac{1}{2}\), and so are formed of qubits. Among all external wires for each vertex, half are inputs, and the other half are outputs; the combination of these vertices amounts to a quantum circuit. For this reason, when viewed as a quantum circuit, we refer to the vertices as _vertex gates_. Critically, the vertices of a spin network are equivariant maps between the input and output edges, which is a direct consequence of the definition given in Eq. (11). This means the resultant circuit is also equivariant. An important property of spin networks with vertices with more that three edges is that they can be parameterised (see Appendix A). By training over these parameters we thus arrive at a trainable equivariant network. Schur gate and two qubit vertex gateThe simplest spin-network circuit is built from vertex gates acting solely on two qubits. To understand the structure of this gate, and its later generalisations, we first require the two-qubit Schur gate as a prerequisite [42]: \[S_{2}=\begin{pmatrix}1&0&0&0\\ 0&\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}&0\\ 0&0&0&1\\ 0&\frac{1}{\sqrt{2}}&-\frac{1}{\sqrt{2}}&0\end{pmatrix} \tag{12}\] This gate is a unitary operator that maps the computational basis of two qubits to the spin-basis of their combined \(J\) and \(J_{z}\) angular momenta. 
As qubits can be seen as spin-\(\frac{1}{2}\) spaces, with spin-up and spin-down being assigned to \(0\) and \(1\) respectively, then qubit registers correspond to tensor products of spin-\(\frac{1}{2}\) irreps. While these are individually irreducible, their product is not and so can be block diagonalised into irreducible components. In the case of two qubits, it is often typical to write that \(J_{\frac{1}{2}}\otimes J_{\frac{1}{2}}\simeq J_{0}\oplus J_{1}\) which says that a tensor product of two spin-\(\frac{1}{2}\) spaces is isomorphic to the direct sum of a spin-\(0\) and a spin-\(1\) space telling us that there is a unitary map between them. The two qubit Schur gate performs exactly this map. Looking at this in terms of the computational basis, the two qubit Schur gate maps the computational basis states to the following basis (where we often drop the normalisation in later exposition): \(|J=1,J_{z}=1\rangle=|00\rangle\), \(|J=1,J_{z}=0\rangle=\frac{1}{\sqrt{2}}(|01\rangle+|10\rangle)\), \(|J=1,J_{z}=-1\rangle=|11\rangle\), and \(|J=0,J_{z}=0\rangle=\frac{1}{\sqrt{2}}(|01\rangle-|10\rangle)\), which is occasionally referred to as the triplet/singlet basis3. In general, though trivially in the two-qubit case, we can say that the two-qubit Schur map sends us to the sequentially coupled basis of two qubits exactly as depicted in Fig. 1. As was discussed in Sec. 2 above, this amounts to two different binary spin networks with the \(J_{z}\) values specified on the base as first outlined in Ref. [43]. Footnote 3: For reasons of the different total angular momentum states energies separating under the presence of an external magnetic field. The two-qubit Schur gate from Eq. (12) is the simplest Schur map that sends us from the tensor product of qubits to the direct sum of spins. Precisely, the general form of prescription: \[S_{n}:J_{\frac{1}{2}}^{\otimes n}\rightarrow\bigoplus_{k}J_{k} \tag{13}\] where we understand \(J_{\frac{1}{2}}^{\otimes n}\) as the Hilbert space corresponding to \(n\) qubits and \(k\) ranges over the irreducible representations of \(\mathrm{SU}(2)\) that make up the space in the spin-basis where we note that _irreps can repeat, in which case we say there is a multiplicity4_. Footnote 4: More formally the Schur map implements the isomorphism given in Theorem 2 below. The matrix elements of the Schur map can be obtained by using _Clebsch-Gordan coefficients_ and coupling paths of qubits. Each Clebsch-Gordan coefficient \(\langle j_{1}m_{1}j_{2}m_{2}\mid JM\rangle=c_{j_{1}m_{1}j_{2}m_{2}}^{JM}\) corresponds to the projection of two particular spin-states into their combined angular momenta. Thus its matrix entries correspond to the Clebsch-Gordan coefficients that result from projecting coupled spin systems (specifically one spin-\(\frac{1}{2}\) qubit with whatever angular momentum has been reached by previous spin-couplings) into a particular total \(J\) value. Each coefficient that gets multiplied corresponds to a vertex in the coupling diagrams that index each of the spin-basis elements (such as those seen in Fig. 2), i.e., each element of the Schur map can be obtained by multiplying the Clebsch-Gordan coefficients associated with each vertex of the spin-coupling diagram. As an example, let us consider the three-qubit case. Here each element in the matrix of the Schur map corresponds to \(c_{j_{1},m_{1};j_{2},m_{2}}^{j,M}c_{j^{\prime},m^{\prime};j_{3},m_{3}}^{j,M}\) for some choice of \(j^{\prime}\in\{0,1\}\) and \(-j^{\prime}\leq m^{\prime}\leq j^{\prime}\). 
Here \(j^{\prime}\) stands for the resulting spin from coupling the first two qubits, which leads to possible total spin momenta \(j^{\prime}=0\) and \(j^{\prime}=1\). In the following, we focus on the spin-0 case (\(j^{\prime}=0\)). This corresponds to the coefficient \(c_{\frac{1}{2},m_{1};\frac{1}{2},m_{2}}^{0,0}\). When we in turn couple with the third qubit, the only possible outcome for the total angular momentum is \(\frac{1}{2}\), so the combined coupling coefficient for these total angular momenta is \(c_{\frac{1}{2},m_{1};\frac{1}{2},m_{2}}^{0,0}\,c_{0,0;\frac{1}{2},m_{3}}^{\frac{1}{2},M}\). These choices single out a particular recoupling path with associated final \(J_{z}\) values on the root (as seen in Fig. 1) and so a row in the matrix. The computational basis, equivalently the \(J_{z}\) values for the individual qubits, fixes the columns (for more on this see Ref. [44]). For practical implementations, it is important to note that the Schur gate can be implemented in polynomial time, and the literature already contains examples of specific methods to do this [44, 45]. In the case of two qubits there is only a single coefficient to consider in each element of the matrix, and so we have the following:

\[S_{2}=\begin{pmatrix}c_{\frac{1}{2},\frac{1}{2};\frac{1}{2},\frac{1}{2}}^{1,1}&c_{\frac{1}{2},\frac{1}{2};\frac{1}{2},-\frac{1}{2}}^{1,1}&c_{\frac{1}{2},-\frac{1}{2};\frac{1}{2},\frac{1}{2}}^{1,1}&c_{\frac{1}{2},-\frac{1}{2};\frac{1}{2},-\frac{1}{2}}^{1,1}\\ c_{\frac{1}{2},\frac{1}{2};\frac{1}{2},\frac{1}{2}}^{1,0}&c_{\frac{1}{2},\frac{1}{2};\frac{1}{2},-\frac{1}{2}}^{1,0}&c_{\frac{1}{2},-\frac{1}{2};\frac{1}{2},\frac{1}{2}}^{1,0}&c_{\frac{1}{2},-\frac{1}{2};\frac{1}{2},-\frac{1}{2}}^{1,0}\\ c_{\frac{1}{2},\frac{1}{2};\frac{1}{2},\frac{1}{2}}^{1,-1}&c_{\frac{1}{2},\frac{1}{2};\frac{1}{2},-\frac{1}{2}}^{1,-1}&c_{\frac{1}{2},-\frac{1}{2};\frac{1}{2},\frac{1}{2}}^{1,-1}&c_{\frac{1}{2},-\frac{1}{2};\frac{1}{2},-\frac{1}{2}}^{1,-1}\\ c_{\frac{1}{2},\frac{1}{2};\frac{1}{2},\frac{1}{2}}^{0,0}&c_{\frac{1}{2},\frac{1}{2};\frac{1}{2},-\frac{1}{2}}^{0,0}&c_{\frac{1}{2},-\frac{1}{2};\frac{1}{2},\frac{1}{2}}^{0,0}&c_{\frac{1}{2},-\frac{1}{2};\frac{1}{2},-\frac{1}{2}}^{0,0}\end{pmatrix}=\begin{pmatrix}1&0&0&0\\ 0&\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}&0\\ 0&0&0&1\\ 0&\frac{1}{\sqrt{2}}&-\frac{1}{\sqrt{2}}&0\end{pmatrix}\]

which indeed matches the definition of the two qubit Schur gate in Eq. (12). Once we are in the spin basis, we can elegantly construct the two-qubit vertex gate by applying a phase solely on the spin-0, or singlet, element \(\left|J=0,J_{z}=0\right\rangle\) (see Lemma 1 below). Intuitively, if a map is \(\mathrm{SU}(2)\) equivariant, so that one can isolate and apply group representations before or after the map, then the different spin-irreps should not interact under the mapping and remain differentiated; as matrices, this is why the map is block diagonal in the spin basis. For the two-qubit case, up to a global phase, this amounts to just a phase on one of the spaces:

\[P_{2}(\theta)=\left(\begin{array}{c|c}\mathbb{1}_{3}&0\\ \hline 0&e^{i\theta}\end{array}\right) \tag{14}\]

In terms of spin networks, which we recall are equivariant maps, the Schur gate is sending us to the two possible coupling options: two qubits coupling to spin-0 or to spin-1. 
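The block structure can be checked directly in a few lines (our own sketch; the row ordering follows Eq. (12)): conjugating \(U\otimes U\) by \(S_{2}\) produces a \(3\times 3\) spin-1 block and a \(1\times 1\) spin-0 block, and the latter is trivial because \(\det U=1\), which is exactly the entry that the phase in \(P_{2}(\theta)\) acts on.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
U = expm(-1j * (0.9 * X - 0.2 * Y + 0.5 * Z) / 2)   # a generic SU(2) element

s = 1 / np.sqrt(2)
S2 = np.array([[1, 0, 0, 0],    # <J=1, Jz=+1|
               [0, s, s, 0],    # <J=1, Jz= 0|
               [0, 0, 0, 1],    # <J=1, Jz=-1|
               [0, s, -s, 0]],  # <J=0, Jz= 0|
              dtype=complex)

B = S2 @ np.kron(U, U) @ S2.conj().T   # U (x) U written in the spin basis

# No mixing between the spin-1 (rows/cols 0-2) and spin-0 (row/col 3) sectors
assert np.allclose(B[:3, 3], 0) and np.allclose(B[3, :3], 0)
# The spin-0 block is the 1x1 identity, since the singlet picks up det(U) = 1
assert np.isclose(B[3, 3], 1.0)
```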
In isolation5, these correspond to two possible spin networks. The parameterised gate \(P_{2}(\theta)\), applies a phase on the spin-0 network. In Sec. 4, we will see this structure completely characterises the possible unitary equivariant maps. To understand how this phase manages to isolate only one part of the spin space, we need to look again at representations. The spin-basis is always such that any group representation in this basis (up to row permutation depending on your exact basis choices and Schur gate, which can vary a little in the literature) is block diagonal. Each individual block is associated to a particular total angular momentum \(J\)_and_ a way of arriving at it by sequentially coupling spin-1/2s as seen in Fig. 2. In this way, given a tensor product of \(n\)-spins, each block corresponds to one of the \(2J+1\) dimensional spin spaces of its direct product decomposition as seen in Eq.(13). As we now know, for the case of two qubits, we either have spin-0 or spin-1 and so this block decomposition resembles the following: Footnote 5: An equivariant gate acting on two or more qubits can be regarded as a spin network with more than three legs. One can specify intermediate vertex choices for such a network, which introduces a sub-network structures. \[\left(\begin{array}{ccc}&&0\\ \text{spin-1}&&0\\ &&0\\ \hline 0&0&0&\text{spin-0}\end{array}\right) \tag{15}\] The block diagonal structure is critical for our SU(2) equivariant ansatze. As we will see below, their general structure is to apply parameterised maps that act independently on blocks of different sizes (which are different irreducible representations) and as unitaries that mix those parts of repeated blocks of the same irreducible representation when they correspond to the same \(J_{z}\) value. Indeed this structure completely characterises equivariant maps, as is shown below in Sec. 4. As such we can create an equivariant ansatz for SU(2), i.e., spin rotation symmetry. We note it bears some resemblance with work seen in Ref. [22]. Figure 4: Depiction of a parameterised gate \(V(\theta)\in\text{Inv}_{\text{SU(2)}}(J_{\frac{1}{2}}\otimes J_{\frac{1}{2}} \otimes J_{\frac{1}{2}}\otimes J_{\frac{1}{2}})\) living in the basis block diagonal in the space of SU(2) equivariant unitaries on two qubits and therefore a four-valent spin network vertex. We can see it is composed of a superposition of two three-valent spin networks indexed by the possible internal spin-0 or spin-1 edge (see Appendix C for details on spin network decompositions). On the right hand side we allude to the geometric interpretation of the basis where the couplings correspond to triangles of different quantised edge length (again see Appendix C). This leads us to the definition of a vertex gate. **Definition 1**.: _The two qubit vertex gate \(V_{2}(\theta)\) is composed as follows:_ _where \(S_{2}\) is the two qubit Schur gate and \(P_{2}(\theta)\) is the controlled phase seen in Eq. (14)._ What we have created is specific two-qubit gates that live in the space of equivariant maps from, and to, the tensor product of two spin-\(\frac{1}{2}\)s, these can be seen depicted in Fig. 4. These, by definition, are elements of the vertices of a four-valent spin network with edges fixed as qubits. We can see the spin network as corresponding to an operator formed by sequential gate operations as seen in Fig. 5 Three and more qubit vertex gatesEvery even valence spin network vertex admits a possible vertex gate (though 2 is trivial; see Appendix C). 
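Before turning to larger vertices, Definition 1 can be written out at the matrix level as \(V_{2}(\theta)=S_{2}^{\dagger}P_{2}(\theta)S_{2}\) (a minimal sketch of ours, with illustrative helper names); the check below confirms that the resulting gate is unitary and commutes with \(U\otimes U\). It also verifies the closed form \(V_{2}(\theta)=e^{i\theta\,|s\rangle\langle s|}\), the exponential of the singlet projector — equivalently, up to a global phase, an exponentiated SWAP (Heisenberg exchange) interaction — which already hints at the connection to permutations developed in Sec. 4.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

s = 1 / np.sqrt(2)
S2 = np.array([[1, 0, 0, 0],
               [0, s, s, 0],
               [0, 0, 0, 1],
               [0, s, -s, 0]], dtype=complex)       # two-qubit Schur gate, Eq. (12)

def P2(theta):
    """Phase on the spin-0 (singlet) block only, Eq. (14)."""
    return np.diag([1.0, 1.0, 1.0, np.exp(1j * theta)])

def V2(theta):
    """Two-qubit vertex gate of Definition 1, written back in the computational basis."""
    return S2.conj().T @ P2(theta) @ S2

theta = 0.7
V = V2(theta)
U = expm(-1j * (1.1 * X + 0.3 * Y - 0.6 * Z) / 2)   # a generic SU(2) element
UU = np.kron(U, U)

assert np.allclose(V.conj().T @ V, np.eye(4))  # unitarity
assert np.allclose(V @ UU, UU @ V)             # SU(2) equivariance, Eq. (1)

# Closed form: V2(theta) = exp(i * theta * |singlet><singlet|)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
proj = np.outer(singlet, singlet.conj())
assert np.allclose(V, expm(1j * theta * proj))
```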
A second, more subtle, example is the three-qubit Schur gate \(S_{3}\).

\[S_{3}=(c_{j_{1},m_{1};j_{2},m_{2}}^{j_{4},m_{4}}c_{j_{4},m_{4};j_{3},m_{3}}^{J,M})=\begin{pmatrix}1&0&0&0&0&0&0&0\\ 0&\frac{1}{\sqrt{3}}&\frac{1}{\sqrt{3}}&0&\frac{1}{\sqrt{3}}&0&0&0\\ 0&0&0&\frac{1}{\sqrt{3}}&0&\frac{1}{\sqrt{3}}&\frac{1}{\sqrt{3}}&0\\ 0&0&0&0&0&0&0&1\\ 0&\sqrt{\frac{2}{3}}&-\frac{1}{\sqrt{6}}&0&-\frac{1}{\sqrt{6}}&0&0&0\\ 0&0&0&\frac{1}{\sqrt{6}}&0&\frac{1}{\sqrt{6}}&-\sqrt{\frac{2}{3}}&0\\ 0&0&-\frac{1}{\sqrt{2}}&0&\frac{1}{\sqrt{2}}&0&0&0\\ 0&0&0&-\frac{1}{\sqrt{2}}&0&\frac{1}{\sqrt{2}}&0&0\end{pmatrix} \tag{16}\]

Again we have a parameterised \(P_{3}(\vec{\theta})\) rotation applied in the spin basis. In the parameterised gate we define a three-qubit unitary that acts on the two spin-\(\frac{1}{2}\) spaces that come from the block diagonal decomposition of three qubits \(J_{\frac{1}{2}}\otimes J_{\frac{1}{2}}\otimes J_{\frac{1}{2}}\simeq J_{\frac{3}{2}}\oplus J_{\frac{1}{2}}\oplus J_{\frac{1}{2}}\).

Figure 5: A four-valent spin-network circuit that can be trained over the free parameters in its vertex gates. The curved qubit wires serve only to highlight the idea that such spin-network circuits are both spin networks and quantum circuits.

The difference between this gate and the one above is that the two-qubit vertex gate lacks multiplicities, i.e., multiple blocks of the same size, meaning the only option is to have a phase on each different block. If we have multiple blocks of the same size, this indicates that there are multiple sub-spaces of the state space with the same total angular momentum and that multiple states exist with the same quantum numbers \(\ket{J;J_{z}}\). In terms of SU(2) equivariant maps, these are states that we can interchange without altering the structure of the space; this implies that our vertex gates are not just phases on differing blocks but also unitaries that mix the multiple copies of \(\ket{J;J_{z}}\) (see Fig. 2 for how our unitaries act on this space and Sec. 4 for the theoretical background). As an example, for our three qubit space we have one spin-\(\frac{3}{2}\) space and two spin-\(\frac{1}{2}\) spaces, so it suffices to have a single unitary acting to mix the two \(\ket{\frac{1}{2},J_{z}}\) states. The general matrix has the following form:

\[P_{3}(\vec{\theta})=\left(\begin{array}{c|c}\mathbb{1}_{4}&0_{4}\\ \hline 0_{4}&U_{2}(\vec{\theta})\otimes\mathbb{1}_{2}\end{array}\right)=\left(\begin{array}{c|c}\mathbb{1}_{2}&0_{2}\\ \hline 0_{2}&U_{2}(\vec{\theta})\end{array}\right)\otimes\mathbb{1}_{2} \tag{17}\]

where \(U_{2}(\vec{\theta})\) is a unitary matrix of dimension two, implying this gate has four real parameters. One might imagine that there could be a relative phase here on the isolated spin-\(\frac{3}{2}\) space, but (up to a global phase) this is a sub-case of the unitary acting on the two spin-\(\frac{1}{2}\) components. We note that this gate can be written as a ControlledUnitary gate between the first and second qubits (and acting trivially on the third qubit), where the ControlledUnitary is generated by \(\{\ket{1}\bra{1}\otimes\mathbb{1}_{2},\ket{1}\bra{1}\otimes X,\ket{1}\bra{1}\otimes Y,\ket{1}\bra{1}\otimes Z\}\). This leads to the three qubit vertex gate definition.

**Definition 2**.: _The three qubit vertex gate is composed as follows:_ _where \(S_{3}\) is the three qubit Schur gate and \(P_{3}(\vec{\theta})\) is the controlled unitary seen in Eq. (17)._

Our construction extends to arbitrary \(k\)-qubit gates. 
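A matrix-level sketch of the three-qubit case (ours; \(V_{3}(\vec{\theta})=S_{3}^{\dagger}P_{3}(\vec{\theta})S_{3}\) is assumed by direct analogy with Definition 1, and the parameterisation of \(U_{2}\) below is just one convenient choice): \(P_{3}\) leaves the spin-\(\frac{3}{2}\) block untouched and applies a \(2\times 2\) unitary across the two spin-\(\frac{1}{2}\) copies, and the resulting gate commutes with \(U^{\otimes 3}\).

```python
import numpy as np
from scipy.linalg import expm, block_diag

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

a, b, c = 1 / np.sqrt(3), 1 / np.sqrt(6), 1 / np.sqrt(2)
S3 = np.array([                                   # rows ordered as in Eq. (16)
    [1, 0, 0, 0, 0, 0, 0, 0],                     # |3/2, +3/2>
    [0, a, a, 0, a, 0, 0, 0],                     # |3/2, +1/2>
    [0, 0, 0, a, 0, a, a, 0],                     # |3/2, -1/2>
    [0, 0, 0, 0, 0, 0, 0, 1],                     # |3/2, -3/2>
    [0, np.sqrt(2 / 3), -b, 0, -b, 0, 0, 0],      # first  |1/2, +1/2>
    [0, 0, 0, b, 0, b, -np.sqrt(2 / 3), 0],       # first  |1/2, -1/2>
    [0, 0, -c, 0, c, 0, 0, 0],                    # second |1/2, +1/2>
    [0, 0, 0, -c, 0, c, 0, 0]], dtype=complex)    # second |1/2, -1/2>

def P3(t):
    """Eq. (17): identity on the spin-3/2 block, a 2x2 unitary mixing the two spin-1/2 copies."""
    u2 = expm(-1j * (t[0] * X + t[1] * Y + t[2] * Z + t[3] * I2))  # four real parameters
    return block_diag(np.eye(4), np.kron(u2, I2))

def V3(t):
    """Three-qubit vertex gate of Definition 2, written in the computational basis."""
    return S3.conj().T @ P3(t) @ S3

V = V3([0.3, -0.8, 0.5, 0.2])
U = expm(-1j * (0.7 * X - 1.2 * Y + 0.4 * Z) / 2)   # a generic SU(2) element
U3 = np.kron(np.kron(U, U), U)

assert np.allclose(V.conj().T @ V, np.eye(8))  # unitarity
assert np.allclose(V @ U3, U3 @ V)             # commutes with U (x) U (x) U
```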
In general these spin-network circuits have the following shape: a \(k\)-qubit vertex gate applies the Schur gate \(S_{k}\), a block unitary \(P_{k}(\vec{\theta})\) in the spin basis, and then \(S_{k}^{\dagger}\),

\[V_{k}(\vec{\theta})=S_{k}^{\dagger}\,P_{k}(\vec{\theta})\,S_{k}. \tag{18}\]

Here, \(\vec{\theta}\) is the vector of trainable parameters. These are the free variables needed to parameterise \(P_{k}(\vec{\theta})=\oplus_{i=1}^{l}(U_{i}\otimes\mathbb{1}_{d_{i}})\), where the direct sum runs over the \(l\) different irreps that make up the spin basis of \(k\) qubits, each \(U_{i}\in\mathrm{U}(m_{i})\) is a unitary of the size of the multiplicity of the \(i^{th}\) representation, and \(d_{i}\) is the dimension of the \(i^{th}\) irrep (i.e., \(2J+1\) where \(J\) is the spin number of the subspace). These unitaries specifically act to mix the states with the same \(J_{z}\) value between the repeated irreps (again see Sec. 4). As any arbitrary \(k\)-qubit gate can be decomposed into \(\mathcal{O}(k)\) elementary gates [46], one can implement a spin-network circuit with a given parameter \(\vec{\theta}\) using quantum hardware with a constant overhead (as \(k\) is constant). However, it is generally difficult to decompose a spin-network circuit with arbitrary \(\vec{\theta}\) into single- and two-qubit parameterised quantum gates with a fixed structure, and so this is a compilation task that requires further study (i.e., finding a circuit with single- and two-qubit parameterised gates that generate the equivariant gate). An interesting question is how the few-qubit gates introduced in this section act on the global \(\mathrm{SU}(2)\) subspace. For example, let us consider a spin-3 irreducible subspace of 8 qubits (e.g., a state \(\cos(\theta)|11111110\rangle+\sin(\theta)|11011111\rangle\) lives in this subspace). How can we write down the matrix form of the gate in this subspace? In the following section we answer this question by outlining the theory of \(\mathrm{SU}(2)\) equivariant gates from a global perspective. Interestingly, we will show that all \(\mathrm{SU}(2)\) equivariant gates are the generalised permutations introduced in Ref. [20].

## 4 Equivariant gates from representation theory

In the previous section, we have introduced the Schur map for constructing gates that commute with the \(\mathrm{SU}(2)\) group action. However, the transformed basis from the Schur map only block diagonalises the \(\mathrm{SU}(2)\) action, and an additional parameterised unitary gate (introduced as \(P(\theta)\)) acting between the blocks was necessary to build an equivariant gate. In this section, we completely characterise all possible forms of such unitary gates by developing a general theory of \(\mathrm{SU}(2)\) equivariant operations. Furthermore, using the representation theory of \(\mathrm{SU}(2)\) and the duality between the permutation group \(S_{n}\) and \(\mathrm{SU}(2)\), we prove that \(\mathrm{SU}(2)\) equivariant operations are generalised permutations (which we formally define below), and conversely, that all generalised permutations are also equivariant operators. Using this result, we prove that our construction of equivariant gates gives the identical set of gates as the twirling formula and parameterised permutations introduced in Refs. [20, 24]. We further answer the question raised at the end of the previous section using this identification. 
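A numerical preview of this claim (our own sketch; the coefficients below are arbitrary): exponentiating a Hermitian combination of qubit permutations — a generalised permutation in the sense defined below — yields a unitary that commutes with \(U^{\otimes n}\).

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def transposition(a, b, n):
    """SWAP of qubits a and b on n qubits, via the Pauli form (sigma_a . sigma_b + 1)/2."""
    out = 0.5 * np.eye(2 ** n, dtype=complex)
    for P in (X, Y, Z):
        ops = [I2] * n
        ops[a] = ops[b] = P
        term = ops[0]
        for op in ops[1:]:
            term = np.kron(term, op)
        out += 0.5 * term
    return out

n = 3
t12, t23 = transposition(0, 1, n), transposition(1, 2, n)

# A Hermitian combination of permutations (transpositions and 3-cycles) ...
H = 0.4 * t12 + 1.1 * t23 + 0.3 * (t12 @ t23 + t23 @ t12)
# ... exponentiates to an SU(2) equivariant unitary
Q = expm(-1j * H)

U = expm(-1j * (0.5 * X + 0.9 * Y - 1.4 * Z) / 2)   # a generic SU(2) element
Un = np.kron(np.kron(U, U), U)
assert np.allclose(Q.conj().T @ Q, np.eye(2 ** n))  # unitary
assert np.allclose(Q @ Un, Un @ Q)                  # commutes with U^{(x)3}
```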
As this section is rather technical and not directly related to simulation results, the readers may directly jump to later sections. ### Equivariant operations as the center of a representation Let us start with the definition of the commutant algebra. **Definition 3**.: _For a given representation \(R:T\to\mathrm{GL}(\mathbb{C}^{n})\), we define the commutant algebra \(C(R)\) as_ \[C(R)=\{T\in\mathcal{M}_{n}(\mathbb{C}):TR(g)=R(g)T\text{ for all }g\in G\}, \tag{19}\] _where \(\mathcal{M}_{n}(\mathbb{C})\) is the set of \(n\times n\) complex matrices._ One can verify that \(C(R)\) indeed forms an algebra (under matrix addition and multiplication). This tells us that equivariant gates for \(U^{\otimes N}\) with \(U\in\mathrm{SU}(2)\) are nothing but unitary operators in \(C(U^{\otimes N})\). Throughout the rest of this subsection, we will construct a complete set of equivariant gates. To achieve this, it will be practical to pay closer attention to the structure of the commutant algebra. To this end we consider the following lemmas. **Lemma 1** (Schur's lemma).: _A homomorphism preserving the group structure \(f\in\operatorname{Hom}_{G}(V,W)\) is a homomorphism satisfying \(f(gv)=gf(v)\) for all \(g\in G\) and \(v\in V\). If \(V\) and \(W\) are two irreducible representations of a group \(G\) over \(\mathbb{C}\), then \(f\) must be \(c\mathbb{1}\) for \(c\in\mathbb{C}\) or \(0\)._ In short, a structure-preserving map between two irreps is either proportional to the identity (which implies that the vector space \(V\) and \(W\) are essentially the same) or zero (they are different irreps). A proof can be found in Refs. [47, 33]. As \(T\in\operatorname{Hom}_{G}(V,W)\) in Definition 3 is a linear map, the condition \(TR(g)=R(g)T\) can be written in terms of matrices. From this we can more easily construct the commutant algebra for some simple cases, showing for example that the commutant of a direct sum of differing irreps is a direct sum of two scaled identity maps. **Lemma 2**.: _Let \(R^{(1)}\) and \(R^{(2)}\) be different irreducible representations of a group \(G\) with dimensions \(d_{1}\) and \(d_{2}\), respectively. Let us consider a representation \(R=R^{(1)}\oplus R^{(2)}\), written as_ \[R(g)=\begin{pmatrix}R^{(1)}(g)&0\\ 0&R^{(2)}(g)\end{pmatrix}. \tag{20}\] _Then we have_ \[C(R)=\{c_{1}\mathbb{1}_{d_{1}}\oplus c_{2}\mathbb{1}_{d_{2}}:c_{1},c_{2}\in \mathbb{C}\}. \tag{21}\] Proof.: Let \(T\) be a matrix with internal blocks \(T_{1,1},T_{1,2},T_{2,1},T_{2,2}\) given by \[T=\begin{pmatrix}T_{1,1}&T_{1,2}\\ T_{2,1}&T_{2,2}\end{pmatrix}. \tag{22}\] If \(TX=XT\), \[T_{1,1}R^{(1)} =R^{(1)}T_{1,1},\qquad T_{1,2}R^{(2)}=R^{(1)}T_{1,2},\] \[T_{2,1}R^{(1)} =R^{(2)}T_{2,1},\qquad T_{2,2}R^{(2)}=R^{(2)}T_{2,2}.\] Using Schur's lemma, we obtain \(T_{1,1}=c_{1}\mathbb{1}\), \(T_{2,2}=c_{2}\mathbb{1}\), \(T_{1,2}=T_{2,1}=0\). The situation is more complicated in cases where we have a direct sum of the same representation. In this case we find that the commutant is not simply a direct sum but allows for mixing between the irreps. As we see further below, this will correspond to mixing between elements of the repeated irreps which are the same. **Lemma 3**.: _We now consider a direct sum of the same representation \(R=R^{(1)}\oplus R^{(1)}\). Then we have_ \[C(R)=\mathcal{M}_{2}(\mathbb{C})\otimes\mathbb{1}_{d_{1}}. \tag{23}\] Proof.: As before, we write \(T\in C(R)\) in a block-diagonal matrix. Then \(TR=RT\) gives \[T_{i,j}R^{(1)}=R^{(1)}T_{i,j}. 
\tag{24}\] Schur's lemma implies that each \(T_{i,j}\) is proportional to \(\mathbb{1}\), i.e., \(T_{i,j}=c_{i,j}\mathbb{1}\) for \(c_{i,j}\in\mathbb{C}\). Thus we have \[T=\begin{pmatrix}c_{1,1}\mathbb{1}&c_{1,2}\mathbb{1}\\ c_{2,1}\mathbb{1}&c_{2,2}\mathbb{1}\end{pmatrix}=\begin{pmatrix}c_{1,1}&c_{1,2 }\\ c_{2,1}&c_{2,2}\end{pmatrix}\otimes\mathbb{1}. \tag{25}\] Now let us generalise the above results. Let \(R\) be a representation of \(G\) on \(V\). Then Maschke's theorem (for finite groups) or the Peter-Weyl Theorem (for Lie groups) asserts that \(V\) is decomposable into a direct sum of irreducible representations \[V\simeq m_{1}R^{(1)}\oplus m_{2}R^{(2)}\oplus\cdots m_{k}R^{(k)}, \tag{26}\] where \(mR=R\oplus R\cdots\oplus R\) signifies \(m\) repetitions of the same representation, and \(\{R^{(i)}\}\) are the different irreducible representations. Applying the above lemmas gives the following theorem. **Theorem 1**.: _Under the decomposition given by Eq. (26), the commutant is given by_ \[C(R)=\{\oplus_{i=1}^{k}(M_{i}\otimes\mathbb{1}_{d_{i}}):M_{i}\in \mathcal{M}_{m_{i}}(\mathbb{C})\text{ for all }i\} \tag{27}\] _where each \(d_{i}\) is the dimension of the representation \(R^{(i)}\)._ Given that a square matrix \(M\oplus N\) is unitary iff \(M\) and \(N\) are both unitary matrices, we obtain the following corollary. **Corollary 1**.: _All unitary operators commuting with \(R\) are given by_ \[C(R)\cap\mathrm{U}(d)=\{\oplus_{i=1}^{k}(U_{i}\otimes\mathbb{1}_ {d_{i}}):U_{i}\in\mathrm{U}(m_{i})\text{ for all }i\}, \tag{28}\] _where \(d=\mathrm{dim}V=\sum_{i=1}^{k}m_{i}d_{i}\) is the dimension of \(V\)._ The Corollary tells us the exact form of intermediate unitary gates \(P(\theta)\) we should use for \(\mathrm{SU}(2)\) equivariant gates, which is evident from the following example. **Example 1**.: _For a system with three qubits, we can decompose the space under \(\mathrm{SU}(2)\) as_ \[(\mathbb{C}^{2})^{\otimes 3}\simeq J_{3/2}\oplus J_{1/2}\oplus J_{1/2}, \tag{29}\] _where \(J_{s}\) is a space of total spin \(s\) with dimension \(2s+1\). Note that the basis transformation from the computational basis to the total spin basis is nothing but the Schur transformation given in the previous section [Eq. 3]. We can now see that the unitary operators that commute with \(\mathrm{SU}(2)\) are given (up to a global phase) by_ \[\begin{pmatrix}\mathbb{1}_{4}&0_{4}\\ \hline 0_{4}&U_{2}\otimes\mathbb{1}_{2}\end{pmatrix}, \tag{30}\] _which is the gate we defined in the previous section._ ### \(\mathrm{SU}(2)\) equivariant gates are generalised permutations We now completely characterise \(\mathrm{SU}(2)\) equivariant gates for \(n\) qubits using the above results, by computing the multiplicity of each representation. Our main tool is the Schur-Weyl duality which posits the duality between the irreducible representation of the symmetric group \(S_{n}\) and that of \(\mathrm{SU}(2)\). Thus the multiplicity is given by the dimension of the corresponding irreducible representation of \(S_{n}\). Let us first define two group actions. For \(U\in\mathrm{SU}(2)\), we define its action on \((\mathbb{C}^{2})^{\otimes n}\) as \[U(|v_{1}\rangle\otimes|v_{2}\rangle\otimes\cdots\otimes|v_{n} \rangle)=|Uv_{1}\rangle\otimes|Uv_{2}\rangle\otimes\cdots|Uv_{n}\rangle\,, \tag{31}\] where each \(v_{i}\) is a vector in \(\mathbb{C}^{2}\). In matrix form, this action is nothing but \(U^{\otimes N}\). Another group we consider is the symmetric group \(S_{n}\). 
For \(\alpha\in S_{n}\), we define \[\alpha(|v_{1}\rangle\otimes|v_{2}\rangle\otimes\cdots\otimes|v_{n}\rangle)=|v_{ \alpha^{-1}(1)}\rangle\otimes|v_{\alpha^{-1}(2)}\rangle\otimes\cdots\otimes|v_ {\alpha^{-1}(n)}\rangle\,. \tag{32}\] We can also write down a matrix representation of this group action. Let us consider a transposition \(\tau=(a,b)\in S_{n}\) first, which just swaps the \(a\)-th and \(b\)-th qubit. In matrix form, this operation is written as \[\tau=\frac{1}{2}\mathbf{\sigma}^{a}\cdot\mathbf{\sigma}^{b}+\frac{1}{2}\mathbb{1}, \tag{33}\] where \(\mathbf{\sigma}^{i}=\{\sigma_{x}^{i},\sigma_{y}^{i},\sigma_{z}^{i}\}\) is a vector of Pauli matrices acting on the \(i\)-th qubit. As any permutation \(\alpha\) in \(S_{n}\) can be decomposed into transpositions, i.e., \(\alpha=\tau_{k}\cdots\tau_{2}\tau_{1}\) where each \(\tau_{i}=(a_{i},b_{i})\) is a transposition, we obtain \[\alpha=\big{(}\frac{1}{2}\mathbf{\sigma}^{a_{k}}\cdot\mathbf{\sigma}^{b_{k}}+\frac{1} {2}\mathbb{1}\big{)}\cdots\big{(}\frac{1}{2}\mathbf{\sigma}^{a_{2}}\cdot\mathbf{\sigma }^{b_{2}}+\frac{1}{2}\mathbb{1}\big{)}\big{(}\frac{1}{2}\mathbf{\sigma}^{a_{1}} \cdot\mathbf{\sigma}^{b_{1}}+\frac{1}{2}\mathbb{1}\big{)}. \tag{34}\] A crucial property of those two group actions is that they commute with each other, i.e., \(U\alpha=\alpha U\). One can easily check this for a product state \[U\alpha(|v_{1}\rangle\otimes\cdots\otimes|v_{n}\rangle) =U(|v_{\alpha^{-1}(1)}\rangle\otimes\cdots\otimes|v_{\alpha^{-1}( n)}\rangle)\] \[=|Uv_{\alpha^{-1}(1)}\rangle\otimes\cdots\otimes|Uv_{\alpha^{-1} (n)}\rangle\] \[=\alpha(|Uv_{1}\rangle\otimes\cdots\otimes|Uv_{n}\rangle)\] \[=\alpha U(|v_{1}\rangle\otimes\cdots\otimes|v_{n}\rangle),\] which can be extended linearly to all vectors in the space. Thus it follows that a permutation is an \(\mathrm{SU}(2)\) equivariant operation. This fact is also the basis of the Schur-Weyl duality which we introduce below. Inspired by Ref. [26], we further consider an operator \[Q=e^{\sum_{i=1}^{k}c_{i}\alpha_{i}}=\sum_{n=0}^{\infty}\frac{1}{n!}(\sum_{i=1 }^{k}c_{i}\alpha_{i})^{n}, \tag{35}\] where \(c_{i}\in\mathbb{C}\), which we call generalised permutations. From the expansion, we see that \(Q\) also commutes with \(U\in\mathrm{SU}(2)\), which implies that \(Q\) is an \(\mathrm{SU}(2)\) equivariant operation as well (albeit not unitary, in general). If we further restrict unitarity, i.e., an operator \(e^{\sum_{i}c_{i}\alpha_{i}}\) with Hermitian \(\sum_{i}c_{i}\alpha_{i}\), such an operator is an element of the set given by Eq. (28). We now prove the converse of the above statement, which is the main result of this section: All \(\mathrm{SU}(2)\) equivariant unitary operators can also be written as a form of \(\exp[\sum_{i=1}^{k}c_{i}\alpha_{i}]\). Even though this can be understood as a consequence of von Neumann's double commutation theorem (see e.g., Ref. [48]), here we provide a constructive proof with a concrete example. The first ingredient for the proof is the _Schur-Weyl duality_. **Theorem 2** (Schur-Weyl duality).: _Under the group actions of \(U\in\mathrm{SU}(2)\) and the symmetric group \(\alpha\in S_{n}\), the tensor-product space decomposes into a direct sum of tensor products of irreducible modules6_ Footnote 6: A vector space where the scalars are a ring. that determine each other. Precisely, we can write_ \[(\mathbb{C}^{2})^{\otimes n}\simeq\bigoplus_{D}\pi_{n}^{D}\otimes J_{D} \tag{36}\] _where the summation is over the Young diagram \(D\) with \(n\) boxes and at most two rows. 
For each \(D\) with \(r_{1}\) boxes in the first row and \(r_{2}\) boxes in the second row, \(J_{D}\) is the irreducible representation of \(\mathrm{SU}(2)\) with total spin \(J=(r_{1}-r_{2})/2\), and \(\pi_{n}^{D}\) is the irreducible representation of the symmetric group associated with the given Young diagram \(D\)._ We formally introduce the Young diagram and the irreducible representation of \(S_{n}\) in Appendix B. However, for the rest of discussion in this section, it is fine to skip the details and only consider the dimension of \(\pi_{n}^{D}\), as we show in the following Corollary. **Corollary 2**.: _From the Schur-Weyl duality, one obtains_ \[(\mathbb{C}^{2})^{\otimes n}\simeq\bigoplus_{i=0}^{\lfloor n/2\rfloor}m_{i}J_ {s_{i}} \tag{37}\] _where \(m_{i}\) is the dimension of the irreducible representation of \(S_{n}\) whose Young diagram \(D_{i}\) has \(n-i\) boxes in the first row and \(i\) boxes in the second row, and \(s_{i}=n/2-i\) is the total spin._ _The dimension of the irreducible representation can be computed using the Hook length formula. After some steps, one can obtain_ \[m_{i}=\begin{cases}1,&\text{if i=0},\\ \binom{n}{i}-\binom{n}{i-1},&\text{otherwise}.\end{cases} \tag{38}\] We can now apply Corollary 1 to this decomposition to obtain all possible \(\mathrm{SU}(2)\) equivariant gates. Precisely, we obtain \[U=\Big{\{}\bigoplus_{i=0}^{\lfloor n/2\rfloor}(U_{i}\otimes \mathbb{1}_{d_{i}}):U_{i}\in U(m_{i})\Big{\}} \tag{39}\] In addition, as each \(U(m_{i})\) has \(m_{i}^{2}\) independent generators, the total number of parameters is given by \[\sum_{i=0}^{\lfloor n/2\rfloor}m_{i}^{2}=\frac{1}{n+1}\binom{2n}{n} \tag{40}\] Note that Ref. [22] also presents the same result. We also note that, for a quantum gate, we can subtract one from this formula as there is a redundancy for the global phase. Another ingredient we need is the completeness of the irreducible representation. **Theorem 3** (The density theorem [49]).: _Let \(V=\mathbb{C}^{n}\) be an irreducible finite dimensional representation of a group \(G\), i.e., there is a map \(R:G\rightarrow\mathrm{GL}(\mathbb{C}^{n})\). Then \(\{R(g):g\in G\}\) spans \(\mathcal{M}_{n}(\mathbb{C})\)._ See, e.g., Ref. [50] for a proof. The theorem implies that for any \(M\in\mathrm{M}_{n}(\mathbb{C})\), we can find \(g_{i}\in G\) and \(c_{i}\in\mathbb{C}\) such that \(M=\sum_{i=1}^{k}c_{i}R(g_{i})\) when \(\mathbb{C}^{\otimes n}\) is the irreducible representation of \(G\). Using the Schur-Weyl duality and the density theorem, we now prove the equivalence between a generalised permutation group action and \(\mathrm{SU}(2)\) equivariant unitary gates. **Theorem 4**.: _For any \(\mathrm{SU}(2)\) equivariant unitary gate \(T\), we can find \(c_{i}\in\mathbb{C}\) and \(\alpha_{i}\in S_{n}\) such that_ \[T=e^{\sum_{i=1}^{k}c_{i}\alpha_{i}}. \tag{41}\] Proof.: First, from Corollary 2, we obtain \[(\mathbb{C}^{2})^{\otimes n}\simeq\bigoplus_{i=0}^{\lfloor N/2\rfloor}m_{i}J_{ s_{i}}. \tag{42}\] Then let \(H\) be the generator of \(T\), i.e., \(T=e^{iH}\) and \(H\) is a Hermitian matrix. Looking at Corollary 1, we can move from the description of equivariant unitaries to their generators and see that \(H\) can be written as \[H=\bigoplus_{i}h_{i}\otimes\mathbb{1}_{2s_{i}+1}=\sum_{i}h_{i}P_{i} \tag{43}\] where \(h_{i}\) is a hermitian matrix in \(\mathcal{M}_{m_{i}}(\mathbb{C})\) and \(P_{i}\) is a projector onto a subspace with total spin \(2s_{i}+1\). 
From the density theorem, one can find \(\{c_{ij}\in\mathbb{R}\}\) and \(\{\alpha_{ij}\in S_{n}\}\) such that \(h_{i}=\sum_{j}c_{ij}\alpha_{ij}\) for each \(i\). Moreover each projector \(P_{i}\) can be written as \[P_{i}=\prod_{j\neq i}\frac{J^{2}-s_{j}(s_{j}+1)}{s_{i}(s_{i}+1)-s_{j}(s_{j}+1)}, \tag{44}\] where \(\mathbf{J}=\sum_{i=1}^{n}\mathbf{\sigma}^{i}/2\) is the total spin operator and \(J^{2}=\mathbf{J}\cdot\mathbf{J}\). As \(J^{2}\) has eigenvalues \(s_{i}(s_{i}+1)\) for each subspace \(J_{s_{i}}\), one can verify that the given operator is indeed a projector. After rewriting \[J^{2}=\frac{1}{4}\big{(}3n+\sum_{i\neq j}\mathbf{\sigma}^{i}\cdot\mathbf{\sigma}^{j} \big{)}=\frac{4n-n^{2}}{4}+\sum_{i>j}(i,j) \tag{45}\] where \((i,j)\) is a transposition, we see that \(J^{2}\in\mathbb{R}[S_{n}]\). If we again look at Eq. (43), we can now see that as \(h_{i},P_{i}\in\mathbb{R}[S_{n}]\) our unitary \(T=e^{iH}\) is indeed an exponentiated sum of permutations with coefficients in \(\mathbb{C}\). ### Twirling and permutations In Ref. [24], the Twirling method is proposed to construct an equivariant unitary gate. For a given Hermitian matrix \(H\) which is the generator of a unitary gate \(V=\exp(iH)\) and a Lie group \(\mathcal{G}\), one obtains an equivariant version of it using the twirling formula: \[\mathcal{T}_{U}[H]=\int d\mu(g)R(g)HR(g)^{\dagger}, \tag{46}\] where \(\mu(g)\) is the Haar measure for the Lie group \(\mathcal{G}\). Then \(\mathcal{T}_{U}[H]\) commutes with any \(h\in\mathcal{G}\) due to a defining property of the Haar measure, and so does the gate \(\exp\{i\mathcal{T}_{U}[H]\}\). We now show that for \(\mathcal{G}=\mathrm{SU}(2)\), the twirling formula yields a generalised permutation. For a Hermitian matrix \(H\in\mathcal{M}_{2^{n}}(\mathbb{C})\), we obtain \[\mathcal{T}_{U}[H] =\int d\mu(g)R(g)HR(g)^{\dagger}\] \[=\int_{U}dU^{\otimes n}H(U^{\dagger})^{\otimes n}\] \[=\sum_{\sigma,\tau\in S_{n}}\mathcal{W}g(\sigma^{-1}\tau,d) \mathrm{Tr}[H\tau]\sigma, \tag{47}\] where \(d=2^{n}\) is the dimension of the Hilbert space, \(\mathcal{W}g(\sigma,d)\) is the Weingarten function, and we identified \(\sigma,\tau\in S_{n}\) as an operator using the representation (see e.g., Refs. [48, 51] for the explanation how the last line is obtained). Ultimately this is a permutation scaled by a real coefficient as required. Furthermore, as \(\mathcal{T}_{U}[H]\) is also Hermitian by definition, we know that \(\mathcal{T}_{U}[H]\) is a Hermitian element of \(\mathbb{C}[S_{n}]\), which can be a generator for an equivariant unitary gate. On the other hand, all generators of equivariant gates can be obtained from the twirling formula. In the spin-basis, we know that each generator of an equivariant gate is given by Eq. (43), i.e., \(H\simeq\oplus_{i}h_{i}\otimes I_{d_{i}}\) (where the dimension of \(h_{i}\) and \(d_{i}\) are obtained from the Schur-Weyl duality). As this is an element of the commutant [Eq. (27)], \(H\) is also equivariant, i.e., \(HU^{\otimes N}=U^{\otimes N}H\), so \(\mathcal{T}_{U}[H]=H\). In other words, the set of all generators of equivariant gates and the set of all twirled generators are the same: \[\big{\{}H\in\mathcal{M}_{2^{n}}(\mathbb{C}):U^{\otimes n}e^{iH}=e ^{iH}U^{\otimes n}\text{ for all }U\in\mathrm{SU}(2)\text{ and }H=H^{\dagger}\big{\}}\] \[\qquad=\big{\{}\mathcal{T}_{U}[H]:H\in\mathcal{M}_{2^{n}}( \mathbb{C})\text{ and }H^{\dagger}=H\big{\}}. 
\tag{48}\] ### Revisiting three-qubit \(\mathrm{SU}(2)\) equivariant gates In this subsection, using the three-qubit vertex gate as an example, we illustrate how to represent our equivariant gates as elements of \(\mathbb{C}[S_{n}]\). We apply Theorem 4 to the three-qubit gate we have found in Sec. 3, using the Schur map given in Eq. (16). A direct consequence of the Schur transform is that it defines invariant subspaces under \(U^{\otimes 3}\) for any \(U\in\mathrm{SU}(2)\), given by \(J_{3/2}=\mathrm{span}\{S_{3}^{\dagger}\ket{0},S_{3}^{\dagger}\ket{1},S_{3}^{ \dagger}\ket{2},S_{3}^{\dagger}\ket{3}\}\), \(J_{1/2}^{a}=\mathrm{span}\{S_{3}^{\dagger}\ket{4},S_{3}^{\dagger}\ket{5}\}\), and \(J_{1/2}^{b}=\mathrm{span}\{S_{3}^{\dagger}\ket{6},S_{3}^{\dagger}\ket{7}\}\). From the structure of \(P(\vec{\theta})\), we know the gate has four generators given by \(\{G_{I}:=\mathbf{0}_{4}\oplus\mathbb{1}_{4},G_{X}:=\mathbf{0}_{4}\oplus(X \otimes\mathbb{1}_{2}),G_{Y}:=\mathbf{0}_{4}\oplus(Y\otimes\mathbb{1}_{2}),G_{ Z}:=\mathbf{0}_{4}\oplus(Z\otimes\mathbb{1}_{2})\}\), where \(\mathbf{0}_{4}\) acts on \(J_{3/2}\) whereas \(X,Y,Z\) mixes \(J_{1/2}^{a}\) and \(J_{1/2}^{b}\). One can also see that a permutation in \(S_{3}\) mixes subspaces \(J_{1/2}^{a}\) and \(J_{1/2}^{b}\) (whereas it acts trivially on \(J_{3/2}\) subspace). A matrix representation of a permutation for \(\{J_{1/2}^{a},J_{1/2}^{b}\}\) is obtained by applying each permutation to a basis vector, which is given as \[(1,2) =\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\otimes\mathbb{1}_{2}=Z\otimes\mathbb{1}_{2} \tag{49}\] \[(2,3) =\begin{pmatrix}-1/2&-\sqrt{3}/2\\ -\sqrt{3}/2&1/2\end{pmatrix}\otimes\mathbb{1}_{2}=-\frac{1}{2}Z\otimes\mathbb{1 }_{2}-\frac{\sqrt{3}}{2}X\otimes\mathbb{1}_{2}\] (50) \[(1,3) =\begin{pmatrix}-1/2&\sqrt{3}/2\\ \sqrt{3}/2&1/2\end{pmatrix}\otimes\mathbb{1}_{2}=-\frac{1}{2}Z\otimes\mathbb{1 }_{2}+\frac{\sqrt{3}}{2}X\otimes\mathbb{1}_{2}, \tag{51}\] Each matrix should be read as follows. For example, if we apply \((2,3)\) to \(S_{3}^{\dagger}\ket{4}\), we have \[(2,3)S_{3}^{\dagger}\ket{4}=-\frac{1}{2}S_{3}^{\dagger}\ket{4}-\frac{\sqrt{3} }{2}S_{3}^{\dagger}\ket{6}, \tag{52}\] where the coefficients are from the first column of the matrix representation of \((2,3)\). Note that the permutation transforms \(S_{3}^{\dagger}\ket{5}\) exactly the same way (but mixes \(S_{3}^{\dagger}\ket{5}\) and \(S_{3}^{\dagger}\ket{7}\)). Using the above expressions, remaining elements are obtained as follows (where we dropped \(\otimes\mathbb{1}_{2}\) to simplify the notation): \[(1,2,3) =(1,2)(2,3)=-\frac{1}{2}\mathbb{1}-i\frac{\sqrt{3}}{2}Y \tag{53}\] \[(1,3,2) =(1,2)(1,3)=-\frac{1}{2}\mathbb{1}+i\frac{\sqrt{3}}{2}Y. \tag{54}\] Thus we have \[I=1, X=-\frac{2}{\sqrt{3}}[(2,3)+1/2(1,2)] \tag{55}\] \[Y=i\frac{1}{\sqrt{3}}[2(1,2,3)+1], Z=(1,2). \tag{56}\] However, these operators cannot be generators of our gate as they do not annihilate the \(J=3/2\) subspace (recall that our generators have \(\mathbf{0}_{4}\) on the \(J_{3/2}\) subspace). Thus we need a projector to the \(J=1/2\) subspace, which is given by \[P_{J=1/2}=\frac{J^{2}-15/4}{3/4-15/4}=\frac{5}{4}-\frac{1}{3}J^{2} \tag{57}\] where \(J^{2}\) is \[J^{2}=\frac{1}{4}[\boldsymbol{\sigma}_{1}+\boldsymbol{\sigma}_{2}+ \boldsymbol{\sigma}_{3}]^{2}=\frac{3}{4}+[(1,2)+(2,3)+(1,3)]. 
\tag{58}\] By combining the projector and expressions of Pauli operators in \(J=1/2\) subspaces, we can write three generators as \[G_{I} =1-\frac{1}{3}[(1,2)+(2,3)+(1,3)] \tag{59}\] \[G_{X} =-\frac{2}{\sqrt{3}}\Bigl{[}-\frac{1}{2}+(2,3)+\frac{1}{2}(1,2)- \frac{1}{2}(1,2,3)-\frac{1}{2}(1,3,2)\Bigr{]}\] (60) \[G_{Y} =i\frac{1}{\sqrt{3}}\Bigl{[}1+2(1,2,3)-(1,2)-(2,3)-(1,3)\Bigr{]}\] (61) \[G_{Z} =(1,2)-\frac{1}{3}[1+(1,3,2)+(1,2,3)] \tag{62}\] One can check that each generator annihilates the \(J_{3/2}\) subspace (e.g., \(G_{X}\left|000\right>=0\)), and acts like a Pauli gate between the \(J_{1/2}^{a}\) and \(J_{1/2}^{b}\) subspaces (e.g., \(G_{X}S_{3}^{\dagger}\left|5\right>=S_{3}^{\dagger}\left|7\right>\)). Also note that, as there is a freedom in choosing two \(J=1/2\) subspaces (any unitary mixtures between \(J_{1/2}^{a}\), \(J_{1/2}^{b}\) are also valid subspaces), the exact form of generators \(\{G_{I},G_{X},G_{Y},G_{Z}\}\) depends on the specific choice of the Schur gate \(S_{3}\) (which is from Eq. (16) for our case). To summarise, any SU(2) equivariant gate on the three qubit can be written as \[V(\vec{\theta})=S_{3}^{\dagger}P(\vec{\theta})S_{3}=\exp\Bigl{[}i\bigl{\{} \theta_{0}G_{I}+\theta_{1}G_{X}+\theta_{2}G_{Y}+\theta_{3}G_{Z}\bigr{\}} \Bigr{]}, \tag{63}\] which is a generalised permuatation from Eq. (59-62). We now answer the question raised at the end of the previous section. If we apply our three-qubit gate to 3rd, 4th, and 7th qubits among 8 qubits, we first obtain its representation as a generalised permutation between those qubits, and apply it to basis vectors of a global spin subspaces. For example, \(G_{X}\) for those qubits are given as \[G_{X}^{(3,4,7)}=-\frac{2}{\sqrt{3}}\Bigl{[}-\frac{1}{2}+(4,7)+\frac{1}{2}(3,4 )-\frac{1}{2}(3,4,7)-\frac{1}{2}(3,7,4)\Bigr{]}. \tag{64}\] Then one can construct its matrix form in a certain subspace (e.g. one of the \(J_{2}\) subspaces) by applying it to the basis vectors of the subspace. Then the gate \(\exp[-i\theta G_{X}^{(3,4,7)}]\) can be reconstructed by applying the exponential map. We finalise this section by introducing an alternative description of these generators using the scalar products. For three operator vectors \(\boldsymbol{\sigma}_{1}\), \(\boldsymbol{\sigma}_{2}\), \(\boldsymbol{\sigma}_{3}\), the only possible scalar operators (that are invariant under the group transformation) obtained from those operators are \(\boldsymbol{\sigma}_{1}\cdot\boldsymbol{\sigma}_{2}\), \(\boldsymbol{\sigma}_{2}\cdot\boldsymbol{\sigma}_{3}\), \(\boldsymbol{\sigma}_{1}\cdot\boldsymbol{\sigma}_{3}\), and \(\boldsymbol{\sigma}_{1}\cdot(\boldsymbol{\sigma}_{2}\times\boldsymbol{\sigma} _{3})\) up to constant factors, where \(A\times B\) is the cross product between two vectors. Thus another possible representation of a parameterised three-qubit equivariant gate is \[W=\exp\bigl{[}i(\theta_{12}\boldsymbol{\sigma}_{1}\cdot\boldsymbol{\sigma}_{ 2}+\theta_{23}\boldsymbol{\sigma}_{2}\cdot\boldsymbol{\sigma}_{3}+\theta_{13} \boldsymbol{\sigma}_{1}\cdot\boldsymbol{\sigma}_{3})+i\phi\boldsymbol{\sigma} _{1}\cdot(\boldsymbol{\sigma}_{2}\times\boldsymbol{\sigma}_{3})\bigr{]}. \tag{65}\] Then it can be shown that this gate is the same as \(V(\vec{\theta})\) up to a global phase. 
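The permutation expressions (59)-(62) and the checks mentioned above are straightforward to verify numerically. The following NumPy sketch is illustrative only and is not tied to the specific Schur gate of Eq. (16): it builds the three transpositions as SWAP operators via Eq. (33), assembles \(G_{I},G_{X},G_{Y},G_{Z}\), and confirms that they commute with \(U^{\otimes 3}\) for a random \(U\in\mathrm{SU}(2)\), annihilate \(|000\rangle\), and obey the expected Pauli algebra between the two \(J=1/2\) subspaces (\(G_{X}^{2}=G_{I}\) and \(G_{X}G_{Y}=iG_{Z}\)).

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [X, Y, Z]

def kron(*ops):
    out = np.eye(1, dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

def swap(a, b, n=3):
    # transposition (a,b) as (sigma_a . sigma_b + 1)/2, cf. Eq. (33)
    dot = sum(kron(*[p if q in (a, b) else I2 for q in range(n)]) for p in PAULIS)
    return 0.5 * dot + 0.5 * np.eye(2 ** n, dtype=complex)

one = np.eye(8, dtype=complex)
P12, P23, P13 = swap(0, 1), swap(1, 2), swap(0, 2)
C123, C132 = P12 @ P23, P12 @ P13                # 3-cycles, following Eqs. (53)-(54)

G_I = one - (P12 + P23 + P13) / 3                                           # Eq. (59)
G_X = -2 / np.sqrt(3) * (-one / 2 + P23 + P12 / 2 - C123 / 2 - C132 / 2)    # Eq. (60)
G_Y = 1j / np.sqrt(3) * (one + 2 * C123 - P12 - P23 - P13)                  # Eq. (61)
G_Z = P12 - (one + C132 + C123) / 3                                         # Eq. (62)

rng = np.random.default_rng(0)
U = expm(-1j * sum(c * p for c, p in zip(rng.normal(size=3), PAULIS)))      # random SU(2) element
U3 = kron(U, U, U)
ket000 = np.zeros(8, dtype=complex); ket000[0] = 1.0

for G in (G_I, G_X, G_Y, G_Z):
    assert np.allclose(G @ U3, U3 @ G)      # SU(2) equivariance
    assert np.allclose(G @ ket000, 0)       # annihilates |000>, which lies in the J = 3/2 subspace
assert np.allclose(G_I @ G_I, G_I)          # G_I projects onto the two J = 1/2 copies
assert np.allclose(G_X @ G_X, G_I)          # Pauli algebra on the J = 1/2 sector
assert np.allclose(G_X @ G_Y, 1j * G_Z)
print("all checks passed")
```

With these generators in hand, the equivalence between \(W\) and \(V(\vec{\theta})\) claimed above can be shown as follows.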
Using \[(\boldsymbol{\sigma}_{1}\cdot\boldsymbol{\sigma}_{2})(\boldsymbol {\sigma}_{2}\cdot\boldsymbol{\sigma}_{3}) =\sum_{a\in\{x,y,z\}}\sum_{c\in\{x,y,z\}}\sigma_{1}^{a}\sigma_{2}^ {a}\sigma_{2}^{c}\sigma_{3}^{c}\] \[=\sum_{a\in\{x,y,z\}}\sum_{c\in\{x,y,z\}}\delta_{ac}\sigma_{1}^{ c}\sigma_{3}^{c}+i\sum_{b\in\{x,y,z\}}\epsilon_{abc}\sigma_{1}^{a}\sigma_{2}^{b} \sigma_{3}^{c}\] \[=\boldsymbol{\sigma}_{1}\cdot\boldsymbol{\sigma}_{3}+i\boldsymbol {\sigma}_{1}\cdot(\boldsymbol{\sigma}_{2}\times\boldsymbol{\sigma}_{3}), \tag{66}\] and Eq. (33), we obtain \[2i\boldsymbol{\sigma}_{1}\cdot(\boldsymbol{\sigma}_{2}\times\boldsymbol{\sigma}_ {3})=[\boldsymbol{\sigma}_{1}\cdot\boldsymbol{\sigma}_{2},\boldsymbol{\sigma}_ {2}\cdot\boldsymbol{\sigma}_{3}]=[2(1,2)-1,2(2,3)-1]=4(1,2,3)-4(1,3,2). \tag{67}\] In addition, we need another identity \(P_{J=3/2}^{2}=P_{J=3/2}\), which gives \[(1,2,3)+(1,3,2)=(1,2)+(2,3)+(1,3)-1. \tag{68}\] Note that this equality only implies that the LHS and RHS act the same on our vector space. Of course, they are different elements in \(\mathbb{C}[S_{n}]\). Combining all these together, we can write each generator of \(W\) in terms of \(\{G_{I},G_{X},G_{Y},G_{Z}\}\) as \[\boldsymbol{\sigma}_{1}\cdot\boldsymbol{\sigma}_{2} =2(1,2)-1=1-2G_{I}+2G_{Z} \tag{69}\] \[\boldsymbol{\sigma}_{2}\cdot\boldsymbol{\sigma}_{3} =2(2,3)-1=1-2G_{I}-\sqrt{3}G_{X}-G_{Z}\] (70) \[\boldsymbol{\sigma}_{3}\cdot\boldsymbol{\sigma}_{1} =2(1,3)-1=1-2G_{I}+\sqrt{3}G_{X}-G_{Z}\] (71) \[\boldsymbol{\sigma}_{1}\cdot(\boldsymbol{\sigma}_{2}\times \boldsymbol{\sigma}_{3}) =-\frac{i}{2}[4(1,2,3)-4(1,3,2)]=-2\sqrt{3}G_{Y}, \tag{72}\] which implies that \(W\) is just another parameterisation of \(V(\vec{\theta})\) (up to a global phase). ## 5 Numerical Simulations In this section, we numerically demonstrate the efficacy of our equivariant gates for solving quantum many-body Hamiltonians. Our Hamiltonians are Heisenberg models (which are rotationally invariant) defined on frustrated lattices. Even though the Heisenberg models are toy models, they play an important role in understanding the low-temperature physics of some exotic materials [52]. All numerical simulations in this section were performed using the PennyLane [53] software package with the Lightning [54] plugin. Relevant source code is available in a GitHub repository [55]. ### One dimensional triangular lattice Let us first consider a one-dimensional triangular lattice as shown in Fig. 6. The Hamiltonian we want to solve is \[H=J_{1}\sum_{i=1}^{n}\bigl{[}\sigma_{i}^{x}\sigma_{i+1}^{x}+\sigma_{i}^{y} \sigma_{i+1}^{y}+\sigma_{i}^{z}\sigma_{i+1}^{z}\bigr{]}+J_{2}\sum_{i=1}^{n} \bigl{[}\sigma_{i}^{x}\sigma_{i+2}^{x}+\sigma_{i}^{y}\sigma_{i+2}^{y}+\sigma _{i}^{z}\sigma_{i+2}^{z}\bigr{]}, \tag{73}\] where we impose the periodic boundary condition \(\sigma_{n+1}^{x,y,z}=\sigma_{1}^{x,y,z}\). Figure 6: A one dimensional triangular lattice. We solve the Heisenberg model defined on this lattice using the equivariant gates. The interaction strength between qubits linked with solid lines is given by \(J_{1}\) whereas those between qubits linked with dashed lines are \(J_{2}\). Throughout the section, we fix \(J_{1}=1\) and consider \(J_{2}\in\{0,0.44\}\). When \(J_{2}=0\), the Hamiltonian can be transformed into a stoquastic form [56] and a classical algorithm, the variational quantum Monte Carlo (vQMC) with a simple complex-valued restricted Boltzmann machine (RBM), can find the ground state energy extremely accurately [57].
In contrast, such a transformation does not work for \(J_{2}>0\)[29] and the vQMC with the RBM deviates from the true ground state. We here choose \(J_{2}=0.44\) as a recent study [58] reported that such a deviation is maximised near this value. Still, we note that the density matrix renormalization group can faithfully solve our model as the model is one-dimensional. We compare the performance of two ansatze for solving this Hamiltonian. The first ansatz only uses the two-qubit vertex gates, which is given by \[\ket{\psi(\{\theta\})}=\prod_{i=p}^{1}\Bigl{[}\prod_{j=1}^{n}V_{j,j+2}(\theta_ {i,j+n})\prod_{j=1}^{n/2}V_{2j,2j+1}(\theta_{i,j+n/2})\prod_{j=1}^{n/2}V_{2j-1,2j}(\theta_{i,j})\Bigr{]}\ket{\psi_{0}}, \tag{74}\] where \(V_{kl}\) is the two-qubit vertex gate acting on the \(k\)-th and \(l\)-th qubits and \(\ket{\psi_{0}}=(\ket{01}-\ket{10})^{\otimes n/2}/\sqrt{2}^{n/2}\) is a series of singlets. As \(\ket{\psi_{0}}\) is SU(2) invariant and our circuit is SU(2) equivariant, the output state is also SU(2) invariant. The ansatz has a total of \(2np\) parameters where \(p\) is the number of blocks in the ansatz. Likewise, we also define the second ansatz which consists of the three-qubit vertex gates as \[\ket{\psi(\{\theta_{i,j}\})}=\prod_{i=p}^{1}\Bigl{[}\prod_{j=1}^{n}V_{j,j+1,j+ 2}(\{\theta_{i,4j-3},\theta_{i,4j-2},\theta_{i,4j-1},\theta_{i,4j}\})\Bigr{]} \ket{\psi_{0}}, \tag{75}\] where \(V_{j,j+1,j+2}\) is the three-qubit vertex gate acting on qubits \(\{j,j+1,j+2\}\). Also recall that the three-qubit vertex gate has four parameters, so the ansatz has in total \(4np\) parameters. Figure 7: Normalised converged energies as functions of the total number of parameters in a given ansatz for \(J_{2}=0.0\) (left) and \(J_{2}=0.44\) (right). Each datapoint represents the converged energy obtained from an initial parameter. We now solve the Hamiltonian from Eq. (73) with \(n=20\) for two different values of \(J_{2}\in\{0.0,0.44\}\) using the two proposed ansatze by simulating variational quantum eigensolvers (VQEs) on a classical simulator. For each ansatz, we optimise the parameters by minimizing \(\langle H\rangle\) using the Adam optimiser. We then compute the converged normalised energies \(\tilde{E}=(\langle H\rangle-E_{\text{GS}})/|E_{\text{GS}}|\) where \(E_{\text{GS}}\) is the true ground state energy obtained from exact diagonalization. For the ansatz with two-qubit vertex gates, we use the number of blocks \(p=[2,4,6,8,10]\). On the other hand, \(p=[1,2,3,4,5]\) are used for the ansatz with three-qubit vertex gates. In addition, inspired by Ref. [59], we initialise the parameters using samples from the distribution \(\mathcal{U}_{[0,\alpha]}/(\text{total number of parameters})\) where \(\mathcal{U}_{[0,\alpha]}\) is the uniform distribution between \(0\) and \(\alpha\), and \(\alpha\) is a hyperparameter giving a relative scaling. We also note that our simulation is performed by computing exact gradients (without shot noise), which is more efficient for classical simulators. For \(16\) random initial parameters, we plot the converged normalised energies in Fig. 7 as a function of the total number of parameters. We observe that the converged normalised energies from the ansatz with three-qubit vertex gates are generally closer to the true ground state energy. In particular, when \(J_{2}=0.44\), the converged energy from the three-qubit vertex gates decreases as the number of parameters increases, whereas that from the two-qubit vertex gates flattens out.
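As a concrete illustration of this setup, the sketch below is a minimal PennyLane version of the experiment and is not the released implementation [55]. It assumes that, up to an irrelevant global phase, the one-parameter two-qubit vertex gate is the Heisenberg-exchange gate \(\exp[-i\theta\,\boldsymbol{\sigma}_{k}\cdot\boldsymbol{\sigma}_{l}]\) (by Corollary 1 this is the general SU(2) equivariant two-qubit gate up to a phase), implemented as a product of commuting Ising gates; the system size, number of blocks and optimiser settings are illustrative only.

```python
import pennylane as qml
from pennylane import numpy as np

n, p = 8, 2                 # illustrative sizes; the paper uses n = 20 and p up to 10
J1, J2 = 1.0, 0.44

# Heisenberg Hamiltonian of Eq. (73): nearest- and next-nearest-neighbour bonds on a ring.
coeffs, ops = [], []
for i in range(n):
    for J, d in ((J1, 1), (J2, 2)):
        for P in (qml.PauliX, qml.PauliY, qml.PauliZ):
            coeffs.append(J)
            ops.append(P(i) @ P((i + d) % n))
H = qml.Hamiltonian(coeffs, ops)

def vertex2(theta, a, b):
    # Two-qubit SU(2) equivariant vertex gate, taken (up to a global phase) as
    # exp(-i theta (XX + YY + ZZ)); the three factors commute, so the product is exact.
    qml.IsingXX(2 * theta, wires=[a, b])
    qml.IsingYY(2 * theta, wires=[a, b])
    qml.IsingZZ(2 * theta, wires=[a, b])

dev = qml.device("default.qubit", wires=n)

@qml.qnode(dev)
def energy(params):                       # params has shape (p, 2 * n), cf. Eq. (74)
    for a in range(0, n, 2):              # |psi_0>: product of singlets (|01> - |10>)/sqrt(2)
        qml.PauliX(wires=a + 1)
        qml.Hadamard(wires=a)
        qml.CNOT(wires=[a, a + 1])
        qml.PauliZ(wires=a)
    for i in range(p):                    # one ansatz block per iteration
        for j in range(n // 2):
            vertex2(params[i, j], 2 * j, 2 * j + 1)
        for j in range(n // 2):
            vertex2(params[i, n // 2 + j], 2 * j + 1, (2 * j + 2) % n)
        for j in range(n):
            vertex2(params[i, n + j], j, (j + 2) % n)
    return qml.expval(H)

params = np.random.uniform(0, 0.1, size=(p, 2 * n)) / (2 * n * p)   # small-angle initialisation
opt = qml.AdamOptimizer(stepsize=0.05)
for _ in range(200):
    params, E = opt.step_and_cost(energy, params)
print("converged <H> =", E)
```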
This example shows that using a multi-qubit vertex gate is helpful even for solving a Hamiltonian with two-body interactions. We expect that this is because the circuit ansatz with three-qubit vertex gates is more expressive than that of two-qubit vertex gates when the same number of parameters is provided. Figure 8: The Kagome lattice. We choose a unit cell with \(N=18\) spins enclosed by blue lines. Red links indicate the singlets which we use as an initial state. Our variational circuit is constructed by applying three-qubit vertex gates to each triangle (\(a\)-\(f\) and \(A\)-\(F\)). See the main text for details. ### Kagome lattice We now extend the previous result to study the model on the Kagome lattice. We consider an \(n=18\) unit cell from the lattice with periodic boundary condition. Our choice of the unit cell is depicted in Fig. 8. Formally, the Hamiltonian of the system is written as \[H=\sum_{\langle i,j\rangle}\bigl{[}X_{i}X_{j}+Y_{i}Y_{j}+Z_{i}Z_{j}\bigr{]} \tag{76}\] where the summation is over all nearest neighbours in the lattice. We construct an ansatz using three-qubit vertex gates as \[|\psi(\{\theta_{i,j}\})\rangle=\prod_{i=p}^{1}\prod_{j=A}^{F}V_{j}(\theta_{i,j })\prod_{j=a}^{f}V_{j}(\theta_{i,j})|\psi_{0}\rangle \tag{77}\] where \(V_{a,\cdots,f}\) (\(V_{A,\cdots,F}\)) are the three-qubit vertex gates acting on vertices of each triangle \(a\) to \(f\) (\(A\) to \(F\), respectively; see Fig. 8). As each block has 12 gates, the total number of parameters is \(48p\) (recall that each three-qubit vertex gate has four parameters). We also use a series of singlets as an initial state, where each singlet is indicated by a red link in Fig. 8. Formally, we can write \[|\psi_{0}\rangle=\frac{1}{\sqrt{2}^{n/2}}\bigotimes_{\{i,j\}\in S}(|01\rangle- |10\rangle)_{ij}\] where \(S\) is the set of all links. We numerically optimise the parameters of the circuit by minimizing \(\langle H\rangle\). The Adam optimiser is used with the same parameter initialization techniques as in the previous example. We plot the converged Figure 9: Converged normalised energies as a function of circuit depths for the Heisenberg model on the Kagome lattice. For each value of \(p\), 18 random initial parameters are sampled. For each random initial parameter, full VQE simulation is performed and the converged energy is shown. normalised energies as a function of \(p\) in Fig. 9. The plot shows that the best converged energies decrease nearly exponentially with \(p\). The smallest converged normalised energy is \(\tilde{E}\approx 5.7\times 10^{-4}\) obtained from \(p=24\), which is comparable to data obtained in Refs. [60, 61] using different ansatze. To summarise, we have shown that the three-qubit vertex gate introduced in the previous sections is useful for solving the Heisenberg model on different lattices. Given the efficacy of our equivariant gates for solving the ground state problem, we also expect that one can construct a QML model using our gates to classify rotationally invariant datasets such as point clouds [62]. However, as a QML model for those datasets without classical pre-processing requires a large number of qubits beyond the reach of a classical simulator (which is about \(\lesssim 30\) qubits), we leave it for future study. ## 6 Connections and discussions Throughout the previous sections, we have introduced an elegant construction method for SU(2) equivariant quantum circuits based on the Schur transformation. 
Those circuits can be naturally seen as a spin network, which is a tensor network of group-invariant tensors. We have further developed a theory of the SU(2) equivariant gates from the Schur-Weyl duality, relating our gates to other known constructions based on the twirling formula and generalised permutations. As spin networks and quantum circuits for permutations appear in lots of different contexts in the field of high energy physics and theoretical quantum computations, we discuss various connections to other fields of research as well as possible future directions of study in the following. ### PQC, PQC+, and non-classical heuristic algorithms The idea of taking spins and coupling them is reminiscent of a computational model already seen in the literature. This idea is at the heart of what we mentioned above as permutational quantum computing (PQC), which is centered around the computational class PQC and the closely related PQC+[31, 20]. This class of problems is important as it provides strong evidence that the transition from permutations to exponentiated sums of the generators of permutations marks a transition to classically hard sampling tasks. **The PQC model.** In short, PQC is a model of quantum computing that is intimately tied to the structure of a _binary tree_ coupling of spins. The original idea stemmed from the notion that spin networks could form a model of quantum computing [43]. In an attempt to extract a formal computational class from this model, PQC was introduced which only considers tree-like structures [31]. To achieve this, we take \(n\) spins and choose a particular ordering to add the qubits to the already coupled spins (which we can see as a choice of what sequence of spins to apply the \(J^{2}\) operator to). The possible outcomes of this chosen order of spin recoupling, along with the addition of the possible total angular momentum outcomes, give an alternative basis. PQC is the computational class of problems described as a permutation circuit set between two coupled spin-basis states. Given a permutation operator \(U_{\sigma}\) representing the unitary composed of swap gates implementing the permutation \(\sigma\in S_{n}\), PQC is the set of problems written as: \[\left\langle v^{\prime}\left|U_{\sigma}\right|v\right\rangle=\left\langle b^{ \prime}\left|S^{\dagger}U_{\sigma}S\right|b\right\rangle \tag{78}\] where \(b\) is some binary label for the computational basis and \(S\) is the Schur gate. The reason the Schur gate is a core component of PQC is that PQC states are simply elements of the spin basis. The Schur gate is the preparation procedure that sends qubit basis states to spin states. In the PQC literature, these states are often presented by PQC coupling diagrams of the kind seen in Fig. 2. Practically, a standard PQC calculation is merely the inner product between two Schur gates applied to some computational basis states with some SWAP gates in between them. It was shown that this model is in fact classically simulable, in large part due to the particular tree-like structure of binary spin-recoupling and the restrictions this tacitly forces on the Clebsch-Gordan coefficients dictating their coupling [32]. An immediate observation we can make, given our above discussion on spin networks, is that PQC diagrams, which we take to be sequentially coupled spin-\(1/2\)s, are spin networks with their external wires fixed to specific \(J_{z}\) values.
Each PQC basis element is a member of the collection of spin networks of the same tree structure permissible by the recoupling of their spins and a \(J_{z}\)-value angular state at the end of the tree. **PQC+.** Despite the initial disappointment that PQC was in fact classically simulable, it has been generalised to a broader model that is believed to be unlikely to have this property. The extended model is known as PQC+ where instead of working with a permutation \(\sigma\in S_{n}\), we work with unitaries generated by sums of elements of the permutation algebra \(\mathbb{C}[S_{n}]\): this is composed of elements \(f=\sum_{i}c_{i}\sigma_{i}\) with \(U_{f}=e^{if}=e^{i\sum_{k}c_{k}\sigma_{k}}\), so in the end computations are defined in the following manner: \[\left\langle v^{\prime}\left|U_{f}\right|v\right\rangle. \tag{79}\] As was mentioned above, the belief in the resilience of this model to 'dequantisation'7 rests on the fact that PQC+ is capable of approximately computing unitary \(S_{n}\) Fourier coefficients in polynomial time; the details can be found in Ref. [26]. The general idea is that, much like in a traditional Fourier transform, to calculate the Fourier coefficient of any element one must get the component from every element in the original basis, so in the worst case classically one must go through as many components as there are basis elements. For an \(S_{n}\) Fourier transform there are a permutational number of elements8; as such, even an approximate classical polynomial-time algorithm to compute the worst case is unlikely. Figure 10: A PQC calculation is an expectation value of a permutation of qubits in the spin-coupling basis. It is this property that relates to claims of super-exponential speed-up as permutational complexity grows considerably faster than the exponential. For more details we direct the reader to Refs. [26, 20] where one also finds some practical application of this in condensed matter calculations in accessing coefficients relevant for the Heisenberg chain. **Spin-network circuits as _non-classical heuristics_.** The major observation in the work on PQC+ is that, for a Hamiltonian written as \(H=\sum_{i}c_{i}\sigma_{i}\), we can approximate \(\langle u|\exp(-itH)|v\rangle\) in polynomial time using a quantum circuit. As the Hamiltonian is in the space \(\mathbb{C}\left[S_{n}\right]\) (the algebra of permutations), we are computing \(\mathbb{C}\left[S_{n}\right]\) Fourier coefficients in polynomial time. This computation of a Fourier coefficient using the best-known classical algorithm requires one to run over all of \(S_{n}\), which is super-exponential in size; thus this suggests a super-exponential speed-up. This argument extends to elements of the form of our parameterised vertex gates - telling us that the paths through parameter space our vertex gates move through are in general classically inaccessible. This motivates us to introduce the term _Non-classical heuristics_ - parameterised ansatze that are defined as moving through spaces that cannot be accessed classically in polynomial time. We should note however that this idea does not tell us if moving through this space is a useful thing to do; the space may still be barren. We have shown that the form of these problems matches those of \(\mathrm{SU}(2)\) (perhaps more generally \(\mathrm{SU}(d)\)) equivariant gates which are of direct practical interest.
The principle then is that there could be practical problems, such as \(\mathrm{SU}(2)\) equivariant optimisation problems, for which we can design quantum circuit heuristics, such as spin-network circuits, that cannot be replicated classically because of the maps they implement cannot be replicated in polynomial time. In terms of the approaches to machine learning presented in PQC+ to-date and our spin-network circuits, it should be noted that there is a technical distinction between the methods used. The PQC+ focuses on tuning the coefficients \(c_{i}\) of the exponent \(\sum_{i}c_{i}\sigma_{i}\in\mathbb{C}\left[S_{n}\right]\). In our spin-network circuits we parameterise the \(\mathrm{SU}(2)\) distinguishable spin-spaces and mix spin irreps of the same \(J\)-value in the Schur-Weyl decomposition via unitaries (see Corollary 1). Though both exist in the same space the way in which one moves through that parameter space is very different. ### Further directions Mixed valency networksIn this work, we have focused on the traditional spin network perspective where the same valency exists throughout the graph. In the usual contexts for spin networks, there is a Figure 11: A PQC+ calculation is the exponent of a linear combination of the generators of permutations. We note that, previously in Fig. 10, the permuted wires stand for the actual permutation, while here they stand for the generators. physical motivation for this (see Appendix C). However, from a quantum algorithms perspective, there is no fundamental reason not to mix the valencies. While it is true that larger vertex maps are likely more expressive than small ones as they are generated by a larger set of permutations, it could also be possible that an architecture with small vertex ones are advantageous for practical training. \(G\)-NetworksThe idea of graphs with edges indexed by representations of a group in the manner presented here is more general than \(\mathrm{SU}(2)\). The most obvious extension is to \(\mathrm{SU}(N)\) for which many of the technical elements used in the \(\mathrm{SU}(2)\) still remain. In particular, we have generalised Clebsch-Gordan coefficients. Thus we can still decompose products of irreps into block diagonal form allowing us to express the idea of coupling two representations and presenting this as a collection of irrep indexed diagrams. These can then be parameterised in the manner used throughout this paper to create general parameterised equivariant maps suitable for machine learning. In the specific case of \(\mathrm{SU}(N)\), there is reason to believe that the same hopes of finding algorithms particularly suited to quantum computing remains: Namely because the speed-up arguments presented in Ref. [20] apply to \(\mathrm{SU}(N)\). From an applications perspective this would allow for this research to connect to condensed matter physics which would be an excellent candidate domain for such non-classical heuristic algorithms [63, 64, 65]. Leaving \(\mathrm{SU}(2)\) for higher dimensions, however, is not without complications. One striking difference is that while with \(\mathrm{SU}(2)\) we have one irreducible representation per dimension, the size of which identifies the representation, for \(\mathrm{SU}(N)\) the irreps are identified by 'highest weights' which are \(N-1\) (half) integers that provide representations only in certain dimensions. While this may be surmountable, it is likely that general \(G\)-networks will be markedly more complex than spin networks. 
Implicitly, we are relying on the ability to construct all representations from irreducible ones, which tells us that our groups of interest will typically also need a notion of compactness, or that the situation of interest is restricted to elements where irreducible deconstruction can be relied upon. Without this guarantee, we cannot expect that it is enough to identify a structure of irreducible representations to construct the other representations. An interesting perspective on this direction is that it can be seen as fusing the perspective of equivariant QML algorithms with work done in tensor networks. Indeed a spin network is essentially a tensor network decomposition of some map where the tensors involved are always \(\mathrm{SU}(2)\) invariant. The general version of this through \(G\)-networks is essentially tensor network decomposition of \(G\)-equivariant maps into \(G\)-invariant 'harmonic' tensors. Quantum GravityWhile the connection to the field of Loop Quantum Gravity (LQG) has only been indirectly alluded to in this work, it holds a natural significance. In LQG, space itself is a quantum state on which geometric operators act to give values for length, area, angle, and volume. The basis of its state space is made up of spin networks. A more detailed explanation of this can be found in Appendix C. As with all theories of quantum gravity, LQG faces a general lack of decisive experimental data. However, our research demonstrates that quantum computing has the potential to represent some of the fundamental mathematical structures that underlie the quantised nature of space in LQG. This opens up the possibility for exploring these structures numerically using quantum computing devices. While in LQG, the dynamics of spin networks often involve broader groups such as \(\mathrm{SL}(2;\mathbb{C})\) that correspond to relativistic symmetries, we still find value in the \(\mathrm{SU}(2)\) (Euclidean) models. This is because even in the most developed LQG models addressing quantised relativistic space-time, the states of space themselves are still projected onto \(\mathrm{SU}(2)\)[66]. In summary, though tackling the full dynamics directly might prove challenging through this approach, exploring the kinematic aspects is well within reach. Interestingly, the PQC literature already contains the treatment of a limited class of spin networks to calculate the Ponzano-Regge amplitudes [31] which are the transition amplitudes for the topological quantum field theory know as the Ponzano-Regge model which itself is studied as a model for quantum gravity [67]. In this context, spin networks are not viewed as states, but rather as transition maps in a 2+1 euclidean gravity setup, i.e., non-relativistic dynamics over lower dimensions (see Appendix C for details). While there might be an absence of the full group of relativistic symmetries, investigating even a simplification of these transition amplitudes and the associated objects, termed spinfoams, could yield valuable insights. An additional observation, mentioned above in a different context, is the possibility of generalising the models we have explored. This includes considering networks with mixed vertices or looking into groups like SU(\(N\)), which extend beyond what is typically seen in LQG. Indeed in LQG, even models with vertices larger than four are considered exotic. The exploration of the properties of this wider class of models could prove useful in quantised gravity. 
Such generalisation would be in the spirit of the work done on probabilistic theories [68, 69]. Those studies often consider a diverse landscape of theories similar to quantum mechanics in order to discover why quantum mechanics, in particular, is seen in nature. Investigations of different valency spin networks could proceed along similar lines. ## 7 Conclusion In this paper, we have put forward a theoretically motivated ansatze based on spin networks, a form of SU(2) equivariant tensor networks. This offers a way to design SU(2) equivariant variational quantum algorithms which are natural for rotationally invariant quantum systems, based on the Schur map induced by a spin-coupling diagram. Furthermore, we show that our approach leads to the same parameter spaces as generated by the twirling formula but in a direct manner that avoids the twirling computation for many-qubit gates which is highly non-trivial. For the two and three-qubit gate cases, we further justify our approach with numerical results solving the ground state problem of the SU(2) symmetric Heisenberg models on the one-dimensional triangular lattice and on the Kagome lattice. Connecting to the broader literature we also show that SU(2) equivariant gates are identical to the generalised permutations discussed in the context of PQC+ [26]. The connection to PQC+ is also used to argue for how our ansatze moves through a parameter space that a classical algorithm would find difficult to access. The original observation in Ref. [26] showed that the expectation value of generalised permutations in the spin-basis calculates \(S_{n}\) Fourier coefficients in polynomial time (a possible super-exponential speed-up) and this is now extended by our work to SU(2) equivariant gates. This leads to our introduction of the term _non-classical heuristics_ for quantum variational techniques which can be argued to access regions of the parameter space that are classically intractable. It is our hope that future research in this direction can extend this notion to rigorous complexity arguments by finding a task with SU(2) symmetry that is solvable by SU(2) equivariant circuits where no known efficient classical algorithm exists. For example, Ref. [70] has proven quantum advantage in an ML task by designing a dataset whose classification task is convertible to the discrete logarithm problem which is efficiently solvable by a QML algorithm, yet an efficient classical algorithm is deemed impossible (unless discrete logarithm problem is in BPP). Similarly, we expect that it is possible to design an ML task related to the Fourier transformation over \(S_{n}\), establishing rigorous quantum advantage arguments in this domain as well. ## Acknowledgements RDPE and CYP contributed equally to this work. RDPE would like to acknowledge useful conversation and comments from Sergii Strelchuk, Deepak Vaid, and Pierre Martin-Dussaud. CYP thanks Seongwook Shin for helpful comments. All authors thank the Xanadu QML and software research teams at large with special thanks to Maria Schuld, David Wierichs, Joseph Bowles, and David Wakeham. This research used resources of the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
2309.04759
High pressure-temperature proton migration in P-3 brucite [Mg(OH)2]: Implication for electrical conductivity in deep mantle
Hydrous minerals contribute largely to the transport and distribution of water into the mantle of earth to regulate the process of deep-water cycle. Brucite is one of the simplest layered dense hydrous mineral belonging to MgO-SiO2-H2O ternary system, which contains significant amount of water in the form of OH- groups, spanning a wide range of pressure stability. Simultaneously, the pressure (p) and temperature (T) induced mobility of protons within the layered structure of brucite is crucial for consequences on electrical conductivity of the mantle. Using ab initio molecular dynamics (AIMD) simulations, we investigate the diffusion of H in high-pressure trigonal P-3 polymorph of brucite in a combined p-T range of 10-85 GPa and 1250-2000K, relevant to the mantle of earth. The AIMD simulations reveal an unusual pressure-dependence of the proton migration in brucite characterized by maximum H-diffusion in the pressure range of 72-76 GPa along different isotherms. We predict that in the P-3 brucite the H mobility is onset only when a critical hydrostatic pressure is attained. The onset pressure is observed to drop with increasing temperature. The H-diffusion in brucite phase at elevated p-T takes place in such a manner that the process results in the amorphization of the H-sublattice, without disturbing the Mg- and O-sublattices. This selective amorphization yields a pool of highly mobile protons causing a subsequent increment in the electrical conductivity in P-3 brucite. Our calculated values of conductivity are compared with ex-situ geophysical magnetic satellite data indicating that brucite can be present in larger quantities in the lower mantle than previously observed. This hydroxide phase can occur as segregated patches between the dominant constituents e.g., silicates and oxides of the lower mantle and thus can explain the origin of high electrical conductivity therein.
Sudip Kumar Mondal, Pratik Kumar Das, Nibir Mandal
2023-09-09T11:07:15Z
http://arxiv.org/abs/2309.04759v1
High pressure-temperature proton migration in P3 brucite [Mg(OH)\({}_{2}\)]: Implication for electrical conductivity in deep mantle ###### Abstract Hydrous minerals contribute largely to the transport and distribution of water into the mantle of earth to regulate the process of deep-water cycle. Brucite is one of the simplest layered dense hydrous mineral belonging to MgO-SiO2-H2O ternary system, which contains significant amount of water in the form of OH- groups, spanning a wide range of pressure stability. Simultaneously, the pressure (p) and temperature (T) induced mobility of protons within the layered structure of brucite is crucial for consequences on electrical conductivity of the mantle. Using ab initio molecular dynamics (AIMD) simulations, we investigate the diffusion of H in high-pressure trigonal P-3 polymorph of brucite in a combined p-T range of 10-85 GPa and 1250-2000K, relevant to the mantle of earth. AIMD simulations reveal an unusual pressure-dependence of the proton migration in brucite characterized by maximum H-diffusion in the pressure range of 72-76 GPa along different isotherms. We predict that in the P-3 brucite the H mobility is onset only when a critical hydrostatic pressure is attained. The onset pressure is observed to drop with increasing temperature. The H-diffusion in brucite phase at elevated p-T takes place in such a manner that the process results in the amorphization of the H-sublattice, without disturbing the Mg- and O-sublattices. This selective amorphization yields a pool of highly mobile protons causing a subsequent increment in the electrical conductivity in P-3 brucite. Our calculated values of conductivity are compared with ex-situ geophysical magnetic satellite data indicating that brucite can be present in larger quantities in the lower mantle than previously observed. This hydroxide phase can occur as segregated patches between the dominant constituents e.g., silicates and oxides of the lower mantle and thus can explain the origin of high electrical conductivity therein. ## 1 Introduction The presence of small elements and water have significant effect on the mineralogical structure, composition and dynamics[1, 2] of the Earth's mantle which manifests itself in terms of varying melting temperature[3, 4], elastic properties[5, 6, 7, 8, 9, 10, 11, 12, 13], electrical conductivity[14, 15, 16, 17, 18, 19], viscosity, diffusional motion of atoms[20, 21, 22, 23, 24, 25, 26, 27] in minerals. It is widely accepted that the carrier of hydrogen into deep earth are a batch of hydrous minerals such as dense hydrous mineral silicates (DHMSs)[28], nominally anhydrous minerals (NAMs)[4, 29] and \(\delta\)-AlOOH[30, 31]. However, apart from phase D (ideal formula MgSi\({}_{2}\)H\({}_{2}\)O\({}_{6}\)) most of the hydrous minerals are reported to decompose at high pressures corresponding to the cold subducting slabs [32, 33, 34, 35]. Electrical conductivities of DHMSs are observed to increase with pressure suggesting a higher mobility of H atoms. This observation indicates that pressure may act as an ally to enhance movement of protons in crystalline mineral phases. Although quantitatively rare in the mantle of the earth, brucite is an archivpe hydrous and layered mineral of the MgO-SiO\({}_{2}\)-H\({}_{2}\)O ternary system (MSH), which is the most rich in its ability to potentially host water and water-derived species in the mantle[36, 37]. 
Under ambient conditions brucite assumes a trigonal crystalline structure (space group: P\(\overline{3}\)m1) where Mg\({}^{2+}\) and OH- are arranged in layers. Pressure induced proton frustration in P\(\overline{3}\)m1 brucite has been investigated by Raugei et al. [38] and Mookherjee and Stixrude[39]. The former study showed that under elevated pressure H moves in the _ab_ plane and localizes separately at three equivalent positions, in contrast to a single position at low pressure. However, at around 1 GPa brucite undergoes a structural transition to a less symmetric trigonal structure (space group: P\(\overline{3}\))[40] leading to a change of the dynamical positional disorder of the proton to a static one[41]. The layered structure of P\(\overline{3}\)m1 brucite has motivated several researchers to study the diffusion of protons, the resulting electrical conductivities[37, 42] and its dehydration properties[43], whereas the proton diffusion in P\(\overline{3}\) brucite still remains unexplored. Static DFT calculations and novel structure searching methods have demonstrated that P\(\overline{3}\) brucite has a larger p-T stability field compared to its low pressure predecessor and that around 19 GPa it transforms to a new tetragonal P4\({}_{1}\)2\({}_{1}\)2 phase[41]. Nevertheless, this new phase is yet to be experimentally verified. Hermann and Mookherjee [41] also reported that P\(\overline{3}\) brucite decomposes into MgO + H\({}_{2}\)O (liquid) and MgO + ice VII mixtures at high p-low T and high p-high T conditions respectively. However, barring the ex-situ geophysical survey of Kirby et al. [44] and subduction zone thermal models of Bina et al. [45], the presence of the ice VII phase in the deep interior of the earth remains highly debated. The high pressure x-ray diffraction study of brucite by Fei et al. [46] reveals a smooth diffraction pattern; however, in the same paper the authors deduced that at room temperature brucite would decompose into periclase (MgO) and water at 27 GPa. These apparently contradictory results indicate that the decomposition is associated with a high kinetic barrier and brucite is likely to be stable at even higher pressures. Pressure induced enhancement of proton transport in P\(\overline{3}\)m1 brucite has been reported by Guo and Yoshino[37] but only up to 13 GPa, corresponding to the upper mantle regime. Recently, Schaack et al. [47] have demonstrated that in P\(\overline{3}\) brucite nuclear quantum effects play a major role in the mobility of H and that it reaches a maximum at 67 GPa at room temperature. For the most part, an exhaustive account of the H-diffusion in brucite at high pressure combined with high temperature and its implication for the deep earth is still lacking in the literature. On the other hand, electrical conductivities of DHMSs and nominally anhydrous minerals (NAMs) like olivine and its high-pressure polymorphs ringwoodite and wadsleyite cannot account for the high conductivity zones in the lower mantle. To explain this difference, there should be some other minerals or mineral aggregates which can notably contribute to the electrical conductivity of the deep earth. It is important to note that the P\(\overline{3}\) brucite structure consists of hollow 2-D wells parallel to the ab-plane which are devoid of Mg and O atoms. These channels can offer a significant passage for the unhindered motion of H atoms and thereby enhance the electrical conductivity of the mantle.
This study uses ab initio molecular dynamics (AIMD) to systematically explore the diffusion of H in P\(\overline{3}\) brucite in the range 10-80 GPa and 1250-2000 K. The diffusion coefficient of H is calculated at the chosen _p-T_ conditions, which reveals an anomalous relation of the diffusion coefficients with pressure along the isotherms. The reasons for the differences in the diffusivities of H are elucidated. The results demonstrate that the onset temperature of H-diffusion in brucite is largely influenced by the confining pressure; the characteristic anisotropy of the H diffusion in brucite is also addressed. Finally, the proton-induced electrical conductivity is calculated and compared with the deep-mantle electrical conductivity to assess how far the presence of brucite can affect it. ## 2 Computational methodology We have performed static density-functional theory calculations to obtain structures of P\(\overline{3}\) brucite at desired pressures up to 80 GPa using the Vienna ab initio simulation package (VASP)[48, 49]. The local structural relaxation calculations were performed using the generalized gradient approximation in the Perdew-Burke-Ernzerhof [50] formalism to model electronic exchange-correlation effects, together with the projector-augmented wave (PAW) [51] implementation. The PAW-GGA potentials are used to describe the ionic cores of H, O and Mg, whose valence electronic configurations were 1s\({}^{1}\), 2s\({}^{2}\)2p\({}^{4}\) and 3s\({}^{2}\)3p\({}^{0}\), respectively, with core cut-off radii of 1.1, 1.52 and 2.0 A. The simulations employ a regular \(\Gamma\)-centered 5\(\times\)5\(\times\)5 grid of Monkhorst-Pack[52] k points to sample the electron density in reciprocal space and a kinetic energy cut-off of 625 eV, along with a cut-off of 610 eV for the augmented part. These parameters ensured that the precision of the energy calculation is typically better than 1 meV/atom in energy and better than 0.5 GPa in pressure. To capture the effect of temperature on the motion of atoms, we have performed AIMD simulations as implemented in VASP. The isokinetic NVT ensemble is chosen at a given temperature, T, keeping a fixed volume, V, and number of atoms, N, in the simulation box. The simulations were performed at volumes corresponding to the desired pressures. The ionic temperatures during the simulations were kept steady by employing a Nosé-Hoover thermostat[53, 54]. Each AIMD simulation was run for 8000-10000 timesteps, with each timestep being equal to 1 femtosecond, resulting in a total simulation time ranging from 8 to 10 picoseconds. AIMD simulations are typically sensitive to the size of the system under study. To account for this size effect, all the AIMD simulations were carried out on a 2\(\times\)2\(\times\)3 supercell containing 180 atoms, obtained from the fully relaxed conventional unit cells from the previous static DFT calculations. Only the \(\Gamma\) point was used to sample the reciprocal space of the supercells. The constraint imposed by the fixed volume along with increasing temperature affects the resultant pressure. This study considers the thermal pressure corrections and observes them to be within 1.3-2.8% of the target pressure. The images of the crystal structures are rendered using VESTA and the images of the trajectories of the H-atoms are extracted from the Visual Molecular Dynamics (VMD) suite. 
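To make the supercell construction above concrete, here is a minimal Python sketch using the ASE library; the file names, and the MD parameters mentioned in the comments, are illustrative assumptions rather than the authors' actual input files.

```python
from ase.io import read, write

# Hypothetical file name: a relaxed P-3 brucite unit cell exported from a
# prior static relaxation.  The P-3 cell contains 15 atoms (three Mg(OH)2
# formula units), so a 2 x 2 x 3 replication gives the 180 atoms quoted above.
unit_cell = read("brucite_P-3_relaxed.cif")

supercell = unit_cell.repeat((2, 2, 3))
print(len(supercell), "atoms in the supercell")  # expected: 180

# Write a VASP-style POSCAR that an NVT AIMD run could start from; the 1 fs
# timestep and the 8000-10000 steps quoted in the text would be set in the MD
# engine's own input (e.g. POTIM and NSW in VASP), not here.
write("POSCAR_supercell", supercell, format="vasp")
```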
## 3 Results and Discussion ### Crystal structure and equation of state The crystal structure of P\(\overline{3}\) brucite consists of Mg\({}^{2+}\) cations and OH\({}^{-}\) anions arranged in layers, assuming an overall trigonal structure. The protons are located in channels in between the edge-sharing MgO\({}_{6}\) octahedra (Figure 1). The transition from P\(\overline{3}\)m1 to P\(\overline{3}\) brucite, the latter being a maximal subgroup of the former, is characterised by the removal of mirror planes owing to the spatial disorder of the proton distribution. In the lattice structure, the Mg atoms occupy two distinct crystallographic positions: Wyckoff site 1a (0, 0, 0) and 2d (1/3, 2/3, \(x_{\rm Mg}\)), whereas the O and H atoms occupy general Wyckoff sites 6g and 6i respectively. The most interesting characteristic of the P\(\overline{3}\) phase is the occurrence of two different H-H distances[39], both of which decrease with increasing pressure up to a certain threshold value, beyond which one of them starts to stretch as a consequence of the increasing O-H--O angle. Under ambient conditions, the lattice parameters are calculated as \(a=b=5.48529\) A, \(c=4.79506\) A and \(\alpha=\beta=90^{\circ}\), \(\gamma=\)120\({}^{\circ}\); these calculated values are in close agreement with previous findings [39, 41]. The weak interaction between the MgO\({}_{6}\) layers is responsible for the higher compressibility of this phase along its \(c\)-axis. The 2\({}^{\rm nd}\) order Birch-Murnaghan equation of state fit (Supplementary Figure S1) yields a bulk modulus of 46.25 (\(\pm\) 1.7) GPa with a pressure derivative of 5.03, which agrees well with the DFT results of Hermann and Mookherjee [41]. The calculated equilibrium volume per formula unit for this phase, 42.02 A\({}^{3}\), is also consistent with their findings. ### Proton transport mechanisms Pressure-induced proton frustration and its effect on the mobility of H in brucite have been studied previously, both computationally[38] and experimentally [39, 55, 56], albeit limited to the P\(\overline{3}\)m1 phase. Quench experiments have also been performed to elucidate the phase stability of P\(\overline{3}\)m1 brucite[40]. Despite those previous studies, the migratory behaviour of H in the high-pressure P\(\overline{3}\) structure and its effects on the electrical conductivity of the deep Earth remain almost unexplored. We have performed AIMD simulations to investigate the kinematic behaviour of the protons in P\(\overline{3}\) brucite at pressures corresponding to the upper and lower mantle. The diffusion constants are calculated from the slope of the mean squared displacement (MSD) versus time curve. It is important to emphasise here that the calculated MSD displays different line segments of varying slope (Figure 2 and Supplementary Figures S2-S5). In order to minimize the error, the final slope was calculated as an average of the slopes of the MSD in several non-overlapping time intervals. Figure S2 shows our calculated MSD of the hydrogen atoms at several pressures at a temperature of 1250 K, where no notable movement of protons was observed below 43 GPa; this onset decreases to 28 GPa when the temperature is increased to 1500 K, as evident from Figure 2. The onset pressure for proton mobility decreases further to 10.1 and 13.4 GPa when the temperatures are set to 1750 K and 2000 K, respectively (Supplementary Figures S3-S5). However, irrespective of the temperature to which crystalline brucite is subjected, the maximum movement of the H-atoms is observed in the range 73.6-75.6 GPa. 
Thus, the MSDs of hydrogen are observed to display an anomalous correlation with confining pressure, decreasing in magnitude upon further increase in pressure. The disorder in the proton distribution in P\(\overline{3}\) brucite can be categorized into two distinct, non-exclusive types: (a) dynamic disorder, in which each hydrogen jumps from one to another of the three symmetrically equivalent sites; (b) static disorder, in which each hydrogen atom is stationary at any of the three symmetrically equivalent positions[39]. With increasing pressure this disorder changes from a dominantly dynamic one to a dominantly static one. The interlayer OH\({}^{-}\) distances are observed to be much more sensitive to pressure compared to the intralayer OH\({}^{-}\) distances. At elevated pressure the interaction between OH\({}^{-}\) groups becomes much stronger and results in the reversal of the proton disorder in the hydrogen sublattice. At the same time it brings the interlayer H-atoms close to each other to form H-bonds, which are short-lived and very weak [38], aiding the protons to hop between O atoms of the Mg-O layers facing each other. The proton diffusion mechanism in brucite is a complicated process, influenced by nuclear quantum effects [47], that involves two stages: 1) the dissociation of a covalent O-H bond to form another distinct O-H bond, and 2) the reorientation, i.e., the jump of the proton from one of the three equivalent 6i sites in order to move from an initial O to the nearest one. Pressure has antipodal effects on these two stages. Rising pressure enhances the nuclear quantum effects and increases the dissociation of O-H bonds. On the other hand, the reorientation process is mostly controlled by temperature, whereas pressure tends to localize the proton in a certain orientation, making this motion unfavourable. The dissociation of O-H bonds creates a quasi 2-D proton layer between adjacent MgO layers. At lower pressures two quasi 2-D layers of H atoms are formed near each Mg-O layer and protons move back and forth between them and also throughout the layers. However, at elevated temperature and at pressures between 73-76 GPa these two layers merge and become indistinguishable. We argue that our MSDs at pressures below 75 GPa represent the characteristic back-and-forth movement of protons between two such layers as well as the thermally activated motion of protons within each of these layers. At 73-76 GPa the formation of one indistinguishable layer, populated with a large number of mobile protons, enhances the protonic movement. Below the onset pressure at each temperature only the reorientation occurs, resulting in a back-and-forth motion of the H atoms and a negligible net movement. Although these AIMD calculations do not take the nuclear quantum effects into account explicitly, the results are in good agreement with those of Schaack et al.[47] ### Proton diffusion coefficients We have systematically calculated the diffusion coefficient of H (D\({}_{\rm H}\)) at various pressure and temperature conditions using the well-known Einstein relation \[D_{H}=\frac{MSD}{2nt} \tag{1}\] where MSD is the mean squared displacement of the H-atoms, \(t\) is time, and \(n=1,2,3\) depending on the dimensionality of the system under study, which in our case is set to 3. Table 1 lists our calculated D\({}_{\rm H}\). As the temperature increases the diffusion coefficients also increase. Notably, at each temperature we found D\({}_{\rm H}\) to assume its maximum value in the pressure range 73-76 GPa. 
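As an illustration of Eq. (1), the following Python sketch (with synthetic numbers rather than the simulation data of Table 1) estimates a diffusion coefficient from the slope of an MSD-versus-time curve, averaging the slope over several non-overlapping windows as described above.

```python
import numpy as np

def diffusion_coefficient(t_ps, msd_A2, windows, n_dim=3):
    """Estimate D from MSD(t) as the average slope over several
    non-overlapping time windows, using D = slope / (2 * n_dim)."""
    slopes = []
    for t0, t1 in windows:
        mask = (t_ps >= t0) & (t_ps <= t1)
        slope, _ = np.polyfit(t_ps[mask], msd_A2[mask], 1)  # slope in A^2/ps
        slopes.append(slope)
    d_A2_per_ps = np.mean(slopes) / (2 * n_dim)
    return d_A2_per_ps * 1e-20 / 1e-12  # convert A^2/ps to m^2/s

# Synthetic MSD: roughly 2 A^2/ps slope plus noise over a 10 ps trajectory.
t = np.linspace(0.0, 10.0, 1000)
msd = 2.0 * t + 0.1 * np.random.default_rng(0).normal(size=t.size)
D = diffusion_coefficient(t, msd, windows=[(2, 4), (5, 7), (8, 10)])
print(f"D_H ~ {D:.2e} m^2/s")
```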
Figure 3 shows the variation of D\({}_{\rm H}\) with reference to the mantle pressure conditions and the different zones of the mantle. Several NAMs and transition-zone silicates are experimentally observed to host H in their crystalline lattices as substitutional point defects. Olivine aggregates, which are the dominant species in the Earth's upper mantle, host nominal amounts of water, and at higher temperatures the H atoms are found to diffuse through the lattice. Discontinuities in seismic wave velocities establish that olivine undergoes a transition to wadsleyite at 14 GPa and further to ringwoodite at 24 GPa. Similar to their low-pressure counterpart, these transition-zone silicates also demonstrate H-mobility at elevated temperatures. Nevertheless, their calculated D\({}_{\rm H}\) are one to two orders of magnitude lower than what we have observed in the case of brucite at comparable pressure-temperature conditions. These differences in D\({}_{\rm H}\) can be attributed to the distinctive crystal structures and the different classes of mechanisms at play in promoting the H-diffusion process. Olivine and ringwoodite belong to the category of nesosilicates whereas wadsleyite is a sorosilicate. All of these minerals are characterised by the presence of Mg/Fe octahedra and Si tetrahedra forming a network-like structure. In those silicates H must diffuse through asymmetric channels formed by the cationic polyhedra. In contrast, brucite is a layered hydroxide mineral which offers unhindered motion of protons through the layer between the MgO\({}_{6}\) octahedra. In addition, the H-diffusion mechanisms in the two classes of minerals have remarkable differences. In silicates a net diffusion of H atoms is realised only when an H atom jumps from one substitutional vacancy to the next. This whole process is thermally controlled and relies on the simultaneous creation and annihilation of vacancies together with the probabilistic hopping of H through those vacancies. In contrast, the H-diffusion in brucite is initiated and largely regulated by the pressure-induced amorphization of the H-sublattice. This pressure-induced amorphization creates a pool of mobile H atoms between adjacent MgO\({}_{6}\) octahedral layers, depleted of any Mg or O atoms that could restrict the H-mobility. The combined effect of the structure and the mechanism of H-diffusion gives rise to diffusional free energy barriers ranging from 1.66 eV to 3.14 eV for H in ringwoodite and wadsleyite [27], respectively. In contrast, the dissociative and rotational free energy barriers for H in brucite are of the order of 0.01-0.11 eV and 0.03-0.10 eV at room temperature [47], and are expected to drop to even lower values when the temperature rises. Clearly, the higher migration barrier in silicates makes H-diffusion in them kinetically restricted and energetically less favoured, and demands relatively higher temperatures to initiate as compared to brucite. The apparently free flow of protons and the lower migration barrier are thus responsible for the observed high D\({}_{\rm H}\) in the brucite phase. The initially high value of D\({}_{\rm H}\) observed on the 2000 K isotherm in the low-pressure regime can be attributed to the incongruent melting of brucite. Supplementary Figures S6 and S7 show the variations of the MSDs of Mg and O atoms at several pressures along the 2000 K isotherm. 
At a low pressure of \(\sim\)13 GPa both the MSDs of Mg and O atoms shoot upwards, together with the MSD of the H atoms (Supplementary Figure S5), indicating that the entire crystalline structure undergoes melting at this pressure and temperature. However, at higher pressures the melting point of brucite increases, and only the H atoms occur in a mobile state. Supplementary Figure S8 illustrates the MSD of the comparatively heavy atoms at 1250 K. It is important to note that at 1250 K the MSDs of both the Mg and O atoms oscillate around a small value, indicating the stretching and shortening of Mg-O bonds as the Mg and O atoms execute thermally activated vibrational motions. ### Anisotropy in proton diffusion The proton diffusion in brucite is highly anisotropic in nature. Figures 4a and 4b demonstrate that the movements of almost all of the H-atoms are restricted to planes parallel to the crystallographic \(ab\)-plane, with hardly any out-of-plane motion observed. The MgO\({}_{6}\) polyhedral network here acts as a barrier restricting motions of the H-atoms parallel to the \(c\)-axis. We have calculated the axis-decomposed diffusion coefficients along the three axes, viz. D\({}_{[100]}\), D\({}_{[010]}\) and D\({}_{[001]}\). Further, they are normalized with respect to D\({}_{\rm H}\) at the corresponding p-T conditions as d'\({}_{[100]}\) = D\({}_{[100]}\)/D\({}_{\rm H}\), d'\({}_{[010]}\) = D\({}_{[010]}\)/D\({}_{\rm H}\) and d'\({}_{[001]}\) = D\({}_{[001]}\)/D\({}_{\rm H}\), respectively. d'\({}_{[001]}\) is found to be negligible in magnitude, asserting that proton diffusion along the c-axis contributes almost nothing. The plot of the d'\({}_{[100]}\)/d'\({}_{[010]}\) ratio as a function of pressure in the range 30-90 GPa shows no obvious correlation of this ratio with pressure or temperature (Figure 4c). Comparable values of d'\({}_{[100]}\) and d'\({}_{[010]}\) are only obtained at some specific \(p\)-\(T\) points, e.g. 40 GPa-1500 K, 70 GPa-1250 K, 80 GPa-1500 K and 80 GPa-1750 K. At those \(p\)-\(T\) points the movements of H along the \(a\)- and \(b\)-axes are equal in magnitude. d'\({}_{[100]}\)/d'\({}_{[010]}\) at 2000 K exhibits the maximum anisotropy in diffusion at and above 70 GPa, indicating that protons are much more prone to move along the \(a\)-axis than along the \(b\)-axis. Figure 1 shows that the distributions of H-atoms in between the MgO\({}_{6}\) layers are identical when viewed along the \(a\)- or \(b\)-axis. When the pressure-induced proton disorder and the amorphization of the H-sublattice set in, the \(a\)- and \(b\)-directions become distinguishable in terms of proton mobility. The H-atom diffusion parallel to the \(ab\)-plane thus becomes asymmetric without showing any systematic directional preference. ### Analysis of pair distribution functions (PDF) To explain the unconventionally high diffusivity of H in P\(\overline{3}\) brucite, we analysed the evolution of the pair distribution functions (PDF) under varied pressure-temperature conditions (Fig. 5). Our PDF analysis reveals that, over the entire range of temperature, the H-H PDF (blue line, Fig. 5) shows a peak at \(\sim\)1.6 A between 11-43 GPa, characteristic of the typical nearest-neighbour H-H distance in brucite. On further compression the peak moves toward \(\sim\)1.45-1.50 A. Notably, the spread of the shoulder in the H-H PDF at larger distances, and the lack of structure therein except for the first peak, indicate the pressure-induced amorphization of the H-sublattice. In contrast, the O-Mg PDF (green line, Fig. 
5) demonstrates a first peak around 1.8-2.0 A, commensurate with the typical O-Mg bond lengths [55, 56] and the contraction of those lengths due to the increasing external pressure on brucite. The second peak, located around \(\sim\)3.3-3.65 A, represents the second-nearest-neighbour O-Mg distances. It is important to emphasize that both the first and second peaks appearing in the O-Mg PDF are sharp. These sharp peaks are indicative of the fact that even at higher pressure and temperature conditions, under which the H-sublattice promptly amorphizes, the Mg-O sublattice still retains its crystalline form, i.e., the underlying layered structure of the Mg-O framework remains unaltered at the onset of the fast proton conduction. Additionally, increasing pressure narrows and sharpens the first O-Mg peak, which suggests that the temperature-induced vibrational motions of the O and Mg atoms are restricted to a higher degree. The O-H PDF (orange line, Fig. 5) contains further intriguing details and shows an atypical variation with pressure at different temperatures. At the lower pressure of 11 GPa, the first peak in the O-H PDF is the sharpest and highest in amplitude; it occurs at \(\sim\)0.98 A, consistent with the O-H bond length in P\(\overline{3}\) brucite at that pressure [56]. The right shoulder of the O-H PDF drops to zero and rises to another peak around 2.5 A [55], located near the trio of second-nearest-neighbour O atoms, i.e., the O atoms from the next Mg-O layer in the direction of the H-O bond along the c-axis. As the pressure is increased from 11 GPa to 80 GPa, we observe a slight stretching of the O-H bonds from 0.98 A to 1.02 A due to the shrinking of the interlayer distances, which increases the O-H--O angle, as observed by Parise et al.[56, 57]. Between 50-70 GPa, another nearest-neighbour peak, located at 1.6 A, approaches the first peak and the interlayer O-H peak appears at \(\sim\)2.35 A. This is the direct effect of the shortening of the Mg-O interlayer distances, which enhances the strength of the O-H interaction [40]. At low pressures, the H-atoms are well localized, as observed in previous experiments [8, 55, 56, 57, 58]. However, in addition to the amorphization of the H-sublattice, elevated pressure encourages delocalization of the H-atom positions, as evident from Fig. 5c-e. The O-H PDF in Fig. 5d shows that, apart from the pronounced peak at 1 A, two other peaks of relatively lower amplitude appear within 2.5 A. This pressure-induced delocalization of the H-atoms thus creates more probable positions for the H-atoms to jump to. The increase in the stochastic H-jump frequency due to the larger number of available jump sites positively influences D\({}_{\text{H}}\), as observed in the sharp rise in D\({}_{\text{H}}\) starting from 40 GPa along the different isotherms. To elucidate further, increased pressure forces the adjacent O-H layers to coalesce. From the abrupt decrease in the O-H peak distances we infer that the H atoms, which required a jump distance of almost 1.5 A to produce a net H-diffusion in the 10-20 GPa range, need only traverse a distance of 0.5 A at the higher pressures of 40-70 GPa. This observation is also on par with our previous claim that at high pressure the 2-D channels between adjacent Mg-O layers are populated by highly mobile H atoms. Beyond 70 GPa the 1\({}^{\text{st}}\) and 2\({}^{\text{nd}}\) peaks in the O-H PDF merge and give rise to a less sharp peak spread over almost 0.6 A. 
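As an aside to the PDF analysis above, the sketch below shows one minimal way such a pair distribution function can be computed from atomic positions; it assumes an orthorhombic periodic box for brevity, whereas the actual trigonal P\(\overline{3}\) cell would require the full lattice-vector treatment, so it is illustrative only.

```python
import numpy as np

def pair_distribution(pos_a, pos_b, box, r_max=4.0, dr=0.02):
    """Pair distribution g(r) between two sets of atomic positions (in A),
    assuming an orthorhombic periodic box given as (Lx, Ly, Lz)."""
    edges = np.arange(0.0, r_max + dr, dr)
    hist = np.zeros(len(edges) - 1)
    for a in pos_a:
        d = pos_b - a
        d -= box * np.round(d / box)          # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r[(r > 1e-6) & (r < r_max)], bins=edges)[0]
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    rho = len(pos_b) / np.prod(box)           # ideal-gas normalisation
    g = hist / (len(pos_a) * rho * shell_vol)
    return 0.5 * (edges[1:] + edges[:-1]), g

# Tiny usage example: two atoms 1.0 A apart in a 10 A cubic box.
r, g = pair_distribution(np.array([[0.0, 0.0, 0.0]]),
                         np.array([[1.0, 0.0, 0.0]]),
                         box=np.array([10.0, 10.0, 10.0]))
print(r[np.argmax(g)])  # close to 1.0
```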
At this pressure, the O-H layers come so close to each other that the O-H bonds are strengthened [58] and the delocalization of the H positions starts to disappear. One possible reason for such behaviour is that the rotation and spatial rearrangement of the O-H bonds take precedence over the simultaneous creation and annihilation of O-H bonds, which reduces the jump probability of the H-atoms, eventually resulting in a lower D\({}_{\text{H}}\) compared to that at 70 GPa. ### Electrical conductivity To estimate the contribution of the conductivity of the H-atoms, \(\sigma_{H}\), to the apparent bulk conductivity of the system, we have used the alternative form of the Nernst-Einstein equation \[\sigma_{H}=\frac{D_{H}Z_{H}^{2}e^{2}C_{H}}{k_{B}T} \tag{2}\] where \(Z_{H}\) and \(e\) are the valence of the diffusing species and the charge of an electron, respectively, \(k_{B}\) is the Boltzmann constant, and \(T\) is the simulation temperature. \(C_{H}\) is the concentration of H atoms, i.e., the number of atoms per unit volume. The increased proton diffusion in brucite results in an increase in the electrical conductivity (\(\sigma\)) as both the temperature and pressure are raised (Figure 6). The electrical conductivities of dry and wet P\(\overline{3}\)m1 brucite have been measured experimentally by Gasc et al. [42], but only up to a pressure of 2 GPa. At these pressures the wet sample is found to show an electrical conductivity in the range 10\({}^{-2}\) to 10\({}^{-3}\) S/m at 1173 K, which is considerably lower than our calculated values. Guo and Yoshino [37] performed a similar study on crystalline brucite and observed a maximum conductivity of 32 S/m at pressures of 11-13 GPa. This value is comparable to our calculated electrical conductivities at the _p-T_ points 28 GPa-1500 K, 10 GPa-1750 K and 18.7 GPa-2000 K, respectively. Even much higher values of \(\sigma\) have been observed experimentally in the DHMS phase A, phase D and the superhydrous phase B by Guo and Yoshino [59]. Phase A features a \(\sigma\) of 55 S/m at 10 GPa in the temperature range 500-900 K, whereas phase D shows an electrical conductivity of 1342 S/m at 22 GPa in the same temperature range. We have obtained comparable values of the electrical conductivity in P\(\overline{3}\) brucite, but only in the pressure range 50-60 GPa and between 1500-2000 K. This indicates that such high conductivities are not unusual. In fact, the removal of the mirror plane and the lowering of symmetry in the pressure-induced P\(\overline{3}\)m1 to P\(\overline{3}\) transition in brucite allow more space for H to diffuse rapidly. At pressures higher than 60 GPa, our calculated values of \(\sigma\) surpass the conductivities of the DHMS phases. Although the diffusion of H is characterised by a maximum in the range 73-76 GPa, for 1250 K and 2000 K we observe that the maximum \(\sigma\) is attained beyond this pressure range despite the lowering of the diffusion coefficients. For the other two temperatures the trend of the variation of \(\sigma\) is qualitatively similar to what we observed for D\({}_{\text{H}}\) at those temperatures. In Figure 6 we compare our calculated \(\sigma\) values with the mantle electrical conductivity derived from magnetic satellite measurements by Constable and Constable[60]. In the low-pressure regime our \(\sigma\) values are in good agreement with their data. Their observation shows a seemingly rapid increase of \(\sigma\) at around 50 GPa; however, this study is limited to 60 GPa in pressure. 
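As a numerical illustration of Eq. (2), the following Python sketch converts a diffusion coefficient into a protonic conductivity via the Nernst-Einstein relation; the input numbers are assumed, order-of-magnitude values, not entries of Table 1 or Figure 6.

```python
K_B = 1.380649e-23           # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19   # elementary charge, C

def nernst_einstein_sigma(d_h, c_h, temperature, z_h=1):
    """sigma_H = D_H * Z_H^2 * e^2 * C_H / (k_B * T), returned in S/m.
    d_h in m^2/s, c_h in protons per m^3, temperature in K."""
    return d_h * (z_h * E_CHARGE) ** 2 * c_h / (K_B * temperature)

# Assumed, order-of-magnitude inputs: D_H ~ 1e-9 m^2/s and a proton
# concentration of ~5e28 m^-3 at 1500 K (not values from this study).
sigma = nernst_einstein_sigma(d_h=1e-9, c_h=5e28, temperature=1500.0)
print(f"sigma_H ~ {sigma:.0f} S/m")
```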
The calculated \(\sigma\) also demonstrates a rapid increase in a similar range of pressure, although it does not converge to a fixed value as the pressure is increased further. Our observations, together with the identification of high \(\sigma\) in DHMS by Guo and Yoshino [59], indicate the presence of proton-disordered brucite in the upper and lower mantle regions of the Earth. The mean electrical conductivity of the mantle ranges from 10\({}^{-4}\) to 10\({}^{3}\) S/m[61], which is lower than our calculated conductivities of brucite beyond 60 GPa, corresponding to the lower mantle. Thus, this study infers that the amount of brucite in the shallow mantle could be moderate to high but reasonably small in the lower mantle, and that brucite could occur as independent pockets at those depths. ## 4 Conclusion This study has systematically investigated the proton diffusion behaviour in P\(\overline{3}\) brucite in the high-pressure and high-temperature regime. The study reveals an anomalous behaviour of hydrogen diffusion in which the diffusion constants increase up to a certain pressure and then exhibit maxima in the 73-76 GPa pressure range across all the isotherms. In this pressure range two separate layers of protons between the MgO\({}_{6}\) octahedral sheets emerge and coalesce with each other. This coalescence of the proton layers generates a large number of free protons. At high temperature the hydrogen sublattice amorphizes, leaving the Mg and O atoms static at their lattice sites. The degree of amorphization increases with increasing temperature and thus yields highly mobile protons. Beyond this pressure range, the coalescence of the proton layers becomes ineffective, thereby reducing the diffusion constant. The arrangement of H in the layered structure of P\(\overline{3}\) brucite is identical along the crystallographic a- and b-axes. The calculated anisotropy in proton migration thus reveals no systematic axial preference but rather points towards the random thermal motion of protons, apart from the fact that no net diffusion of protons is observed along the c-axis, which was present in P\(\overline{3}\)m1 brucite[37]. AIMD calculations are used to evaluate the apparent contribution of the protonic conductivity to the electrical conductivity of brucite under varied pressure-temperature conditions. While the diffusion constants are observed to increase steadily with temperature, the electrical conductivities show a more complex variation. For 1500 K and 1750 K, the maximum of the conductivity coincides with the same p-T points where the diffusion constant shows its maxima, whereas for 1250 K and 2000 K the conductivities are observed to increase further with pressure. At pressures corresponding to the upper mantle the conductivity features very high values comparable to several DHMS phases. Comparison with geomagnetic data [60, 61] allows us to conclude that, apart from the predominant constituents of the mantle such as silicates and oxides, brucite can also be present in the Earth's mantle in small amounts.
2309.15654
The Complexity of Resilience Problems via Valued Constraint Satisfaction Problems
Valued constraint satisfaction problems (VCSPs) constitute a large class of computational optimisation problems. It was shown recently that, over finite domains, every VCSP is in P or NP-complete, depending on the admitted cost functions. In this article, we study cost functions over countably infinite domains whose automorphisms form an oligomorphic permutation group. Our results include a hardness condition based on a generalisation of pp-constructability as known from classical CSPs and a polynomial-time tractability condition based on the concept of fractional polymorphisms. We then observe that the resilience problem for unions of conjunctive queries (UCQs) studied in database theory, under bag semantics, may be viewed as a special case of the VCSPs that we consider. We obtain a complexity dichotomy for the case of incidence-acyclic UCQs and exemplarily use our methods to determine the complexity of a query that had remained open in the literature. Further, we conjecture that our hardness and tractability conditions match for resilience problems for UCQs.
Manuel Bodirsky, Žaneta Semanišinová, Carsten Lutz
2023-09-27T13:41:00Z
http://arxiv.org/abs/2309.15654v3
# The complexity of resilience problems via valued constraint satisfaction problems ###### Abstract. Valued constraint satisfaction problems (VCSPs) are a large class of computational optimisation problems. If the variables of a VCSP take values from a finite domain, then recent results in constraint satisfaction imply that the problem is in P or NP-complete, depending on the set of admitted cost functions. Here we study the larger class of cost functions over countably infinite domains that have an oligomorphic automorphism group. We present a hardness condition based on a generalisation of pp-constructability as known for (classical) CSPs. We also provide a universal-algebraic polynomial-time tractability condition, based on the concept of fractional polymorphisms. We apply our general theory to study the computational complexity of resilience problems in database theory (under bag semantics). We show how to construct, for every fixed conjunctive query (and more generally for every union of conjunctive queries), a set of cost functions with an oligomorphic automorphism group such that the resulting VCSP is polynomial-time equivalent to the resilience problem; we only require that the query is _connected_ and show that this assumption can be made without loss of generality. For the case where the query is _acyclic_, we obtain a complexity dichotomy of the resilience problem, based on the dichotomy for finite-domain VCSPs. To illustrate the utility of our methods, we exemplarily settle the complexity of a (non-acyclic) conjunctive query whose computational complexity remained open in the literature by verifying that it satisfies our tractability condition. We conjecture that for resilience problems, our hardness and tractability conditions match, which would establish a complexity dichotomy for resilience problems for (unions of) conjunctive queries. The first two authors have been funded by the European Research Council (Project POCOCOP, ERC Synergy Grant 101071674) and by the DFG (Project FinHom, Grant 467967530). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them. The third author was supported by the DFG project LU 1417/3-1 QTEC. ## 1. Introduction 
Our notion of expressive power (and fractional polymorphisms) can be used to also study the parametrised complexity of all of these problems. **Outline.** The article is organised from the general to the specific, starting with VCSPs in full generality (Section 2), then focussing on valued structures with an oligomorphic automorphism group (Section 3), for which our notion of expressive power (Section 4) leads to polynomial-time reductions. Our general hardness condition, which also builds upon the notion of expressive power, is presented in Section 5. To study the expressive power and to formulate general polynomial-time tractability results, we introduce the concept of _fractional polymorphisms_ in Section 6 (they are probability distributions over operations on the valued structure). We take inspiration from the theory of VCSPs for finite-domain valued structures, but apply some non-trivial modifications that are specific to the infinite-domain setting (because the considered probability distributions are over uncountable sets). We then present a general polynomial-time tractability result (Theorem 7.17) which is phrased in terms of fractional polymorphisms. Section 8 applies the general theory to resilience problems. We illustrate the power of our approach by settling the computational complexity of a resilience problem for a concrete conjunctive query from the literature (Section 8.5). Section 9 closes with open problems for future research. ## 2. Preliminaries ### Valued Structures The set \(\{0,1,2,\dots\}\) of natural numbers is denoted by \(\mathbb{N}\), the set of rational numbers is denoted by \(\mathbb{Q}\), the set of non-negative rational numbers by \(\mathbb{Q}_{\geq 0}\) and the set of positive rational numbers by \(\mathbb{Q}_{>0}\). We use analogous notation for the set of real numbers \(\mathbb{R}\) and the set of integers \(\mathbb{Z}\). We also need an additional value \(\infty\); all we need to know about \(\infty\) is that * \(a<\infty\) for every \(a\in\mathbb{Q}\), * \(a+\infty=\infty+a=\infty\) for all \(a\in\mathbb{Q}\cup\{\infty\}\), and * \(0\cdot\infty=\infty\cdot 0=0\) and \(a\cdot\infty=\infty\cdot a=\infty\) for \(a>0\). Let \(C\) be a set and let \(k\in\mathbb{N}\). A _weighted relation of arity \(k\) over \(C\)_ is a function \(R\colon C^{k}\to\mathbb{Q}\cup\{\infty\}\). 
We write \(\mathscr{R}_{C}^{(k)}\) for the set of all weighted relations of arity \(k\), and define \[\mathscr{R}_{C}:=\bigcup_{k\in\mathbb{N}}\mathscr{R}_{C}^{(k)}.\] A weighted relation is called _finite-valued_ if it takes values only in \(\mathbb{Q}\). **Example 2.1**.: _The weighted equality relation \(R_{=}\) is the binary weighted relation defined over \(C\) by \(R_{=}(x,y)=0\) if \(x=y\) and \(R_{=}(x,y)=\infty\) otherwise. The empty relation\(R_{\emptyset}\) is the unary weighted relation defined over \(C\) by \(R_{\emptyset}(x)=\infty\) for all \(x\in C\)._ A weighted relation \(R\in\mathscr{R}_{C}^{(k)}\) that only takes values from \(\{0,\infty\}\) will be identified with the following relation in the usual sense \[\{a\in C^{k}\mid R(a)=0\}.\] For \(R\in\mathscr{R}_{C}^{(k)}\) the _feasibility relation of \(R\)_ is defined as \[\operatorname{Feas}(R):=\{a\in C^{k}\mid R(a)<\infty\}.\] A _(relational) signature \(\tau\)_ is a set of _relation symbols_, each equipped with an arity from \(\mathbb{N}\). A _valued \(\tau\)-structure \(\Gamma\)_ consists of a set \(C\), which is also called the _domain_ of \(\Gamma\), and a weighted relation \(R^{\Gamma}\in\mathscr{R}_{C}^{(k)}\) for each relation symbol \(R\in\tau\) of arity \(k\). A _\(\tau\)-structure_ in the usual sense may then be identified with a valued \(\tau\)-structure where all weighted relations only take values from \(\{0,\infty\}\). **Example 2.2**.: _Let \(\tau=\{<\}\) be a relational signature with a single binary relation symbol \(<\). Let \(\Gamma_{<}\) be the valued \(\tau\)-structure with domain \(\{0,1\}\) and where \(<\!\!(x,y)=0\) if \(x<y\), and \(<\!\!(x,y)=1\) otherwise._ **Example 2.3**.: _Let \(\tau=\{E,N\}\) be a relational signature with two binary relation symbols \(E\) and \(N\). Let \(\Gamma_{LCC}\) be the valued \(\tau\)-structure with domain \(\mathbb{N}\) and where \(E(x,y)=0\) if \(x=y\) and \(E(x,y)=1\) otherwise, and where \(N(x,y)=0\) if \(x\neq y\) and \(N(x,y)=1\) otherwise._ An _atomic \(\tau\)-expression_ is an expression of the form \(R(x_{1},\ldots,x_{k})\) for \(R\in\tau\) and (not necessarily distinct) variable symbols \(x_{1},\ldots,x_{k}\). A _\(\tau\)-expression_ is an expression \(\phi\) of the form \(\sum_{i\leq m}\phi_{i}\) where \(m\in\mathbb{N}\) and \(\phi_{i}\) for \(i\in\{1,\ldots,m\}\) is an atomic \(\tau\)-expression. Note that the same atomic \(\tau\)-expression might appear several times in the sum. We write \(\phi(x_{1},\ldots,x_{n})\) for a \(\tau\)-expression where all the variables are from the set \(\{x_{1},\ldots,x_{n}\}\). If \(\Gamma\) is a valued \(\tau\)-structure, then a \(\tau\)-expression \(\phi(x_{1},\ldots,x_{n})\) defines over \(\Gamma\) a member of \(\mathscr{R}_{C}^{(n)}\), which we denote by \(\phi^{\Gamma}\). If \(\phi\) is the empty sum then \(\phi^{\Gamma}\) is constant \(0\). ### Valued Constraint Satisfaction In this section we assume that \(\Gamma\) is a fixed valued \(\tau\)-structure for a _finite_ signature \(\tau\). The weighted relations of \(\Gamma\) are also called _cost functions_. The _valued constraint satisfaction problem for \(\Gamma\)_, denoted by \(\operatorname{VCSP}(\Gamma)\), is the computational problem to decide for a given \(\tau\)-expression \(\phi(x_{1},\ldots,x_{n})\) and a given \(u\in\mathbb{Q}\) whether there exists \(a\in C^{n}\) such that \(\phi^{\Gamma}(a)\leq u\). We refer to \(\phi(x_{1},\ldots,x_{n})\) as an _instance_ of \(\operatorname{VCSP}(\Gamma)\), and to \(u\) as the _threshold_. 
Tuples \(a\in C^{n}\) such that \(\phi^{\Gamma}(a)\leq u\) are called a _solution for \((\phi,u)\)_. The _value_ of \(\phi\) (with respect to \(\Gamma\)) is defined to be \[\inf_{a\in C^{n}}\phi^{\Gamma}(a).\] In some contexts, it will be beneficial to consider only a given \(\tau\)-expression \(\phi\) to be the input of \(\operatorname{VCSP}(\Gamma)\) (rather than \(\phi\) and the threshold \(u\)) and a tuple \(a\in C^{n}\) will then be called a _solution for \(\phi\)_ if the value of \(\phi\) equals \(\phi^{\Gamma}(a)\). Note that in general there might not be any solution. If there exists a tuple \(a\in C^{n}\) such that \(\phi^{\Gamma}(a)<\infty\) then \(\phi\) is called _satisfiable_. Note that our setting also captures classical CSPs, which can be viewed as the VCSPs for valued structures \(\Gamma\) that only contain cost functions that take value \(0\) or \(\infty\). In this case, we will sometimes write \(\operatorname{CSP}(\Gamma)\) for \(\operatorname{VCSP}(\Gamma)\). Below we give a few examples of known optimisation problems that can be formulated as valued constraint satisfaction problems. **Example 2.4**.: _The problem \(\operatorname{VCSP}(\Gamma_{<})\) for the valued structure \(\Gamma_{<}\) from Example 2.2 models the directed max-cut problem: given a finite directed graph \((V,E)\) (we do allow loops and multiple edges), partition the vertices \(V\) into two classes \(A\) and \(B\) such that the number of edges from \(A\) to \(B\) is maximal. Maximising the number of edges from \(A\) to \(B\) amounts to minimising the number \(e\) of edges within \(A\), within \(B\), and from \(B\) to \(A\). So when we associate \(A\) to the preimage of \(0\) and \(B\) to the preimage of \(1\), computing the number \(e\) corresponds to finding the evaluation map \(s\colon V\to\{0,1\}\) that minimises the value \(\sum_{(x,y)\in E}\!<\!\!(s(x),s(y))\), which can be formulated as an instance of \(\operatorname{VCSP}(\Gamma_{<})\). Conversely, every instance of \(\operatorname{VCSP}(\Gamma_{<})\) corresponds to a directed max-cut instance. It is known that \(\operatorname{VCSP}(\Gamma_{<})\) is NP-complete [23] (even if we do not allow loops and multiple edges in the input). We mention that this problem can be viewed as a resilience problem in database theory as explained in Section 8._ **Example 2.5**.: _Consider the valued structure \(\Gamma_{\geq}\) with domain \(\{0,1\}\) and the binary weighted relation \(\geq\) defined by \(\geq\!\!(x,y)=0\) if \(x\geq y\) and \(\geq\!\!(x,y)=1\) otherwise. Similarly to the previous example, \(\operatorname{VCSP}(\Gamma_{\geq})\) models the directed min-cut problem, i.e., given a finite directed graph \((V,E)\), partition the vertices \(V\) into two classes \(A\) and \(B\) such that the number of edges from \(A\) to \(B\) is minimal. The min-cut problem is solvable in polynomial time; see, e.g., [24]._ **Example 2.6**.: _The problem of least correlation clustering with partial information [43, Example 5] is equal to \(\operatorname{VCSP}(\Gamma_{LCC})\) where \(\Gamma_{LCC}\) is the valued structure from Example 2.3. It is a variant of the min-correlation clustering problem [1], where we have precisely one constraint between any two variables. The problem is NP-complete in both settings [23, 43]._ ## 3. Oligomorphicity Many facts about VCSPs for valued structures with a finite domain can be generalised to a large class of valued structures over an infinite domain, defined in terms of automorphisms. 
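Before turning to automorphisms, here is a small illustration of Example 2.4 above (not taken from the paper): a Python sketch that encodes \(\Gamma_{<}\) over the domain \(\{0,1\}\) and evaluates the value of a \(\tau\)-expression, given as a list of directed edges, by brute force over all assignments; minimising this value corresponds to maximising the directed cut.

```python
from itertools import product

def lt_cost(x, y):
    """The weighted relation <(x, y) of Gamma_<: cost 0 if x < y, else 1."""
    return 0 if x < y else 1

def vcsp_value(edges, n_vars, domain=(0, 1)):
    """Value of the tau-expression  sum of <(x, y) over all edges,
    minimised over all assignments of the n_vars variables to the domain."""
    best = None
    for assignment in product(domain, repeat=n_vars):
        cost = sum(lt_cost(assignment[x], assignment[y]) for x, y in edges)
        best = cost if best is None else min(best, cost)
    return best

# A directed triangle on variables 0, 1, 2: at most one edge can go from the
# 0-class to the 1-class, so the optimal value is 2 (one edge is "cut").
print(vcsp_value([(0, 1), (1, 2), (2, 0)], n_vars=3))  # -> 2
```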
Automorphisms of valued structures are defined as follows. **Definition 3.1**.: _Let \(k\in\mathbb{N}\), let \(R\in\mathscr{R}_{C}^{(k)}\), and let \(\alpha\) be a permutation of \(C\). Then \(\alpha\) preserves \(R\) if for all \(a\in C^{k}\) we have \(R(\alpha(a))=R(a)\). If \(\Gamma\) is a valued structure with domain \(C\), then an automorphism of \(\Gamma\) is a permutation of \(C\) that preserves all weighted relations of \(\Gamma\)._ The set of all automorphisms of \(\Gamma\) is denoted by \(\operatorname{Aut}(\Gamma)\), and forms a group with respect to composition. Let \(k\in\mathbb{N}\). An _orbit of \(k\)-tuples_ of a permutation group \(G\) is a set of the form \(\{\alpha(a)\mid\alpha\in G\}\) for some \(a\in C^{k}\). A permutation group \(G\) on a countable set is called _oligomorphic_ if for every \(k\in\mathbb{N}\) there are finitely many orbits of \(k\)-tuples in \(G\). From now on, whenever we write that a structure has an oligomorphic automorphism group, we also imply that its domain is countable. Clearly, every valued structure with a finite domain has an oligomorphic automorphism group. A countable structure has an oligomorphic automorphism group if and only if it is \(\omega\)_-categorical_, i.e., if all countable models of its first-order theory are isomorphic [25]. **Example 3.2**.: _The automorphism group of \(\Gamma_{LCC}\) from Examples 2.3 and 2.6 is the full symmetric group and hence oligomorphic._ **Lemma 3.3**.: _Let \(\Gamma\) be a valued structure with a countable domain \(C\) and an oligomorphic automorphism group. Then for every instance \(\phi(x_{1},\dots,x_{n})\) of \(\operatorname{VCSP}(\Gamma)\) there exists \(a\in C^{n}\) such that the value of \(\phi\) equals \(\phi^{\Gamma}(a)\)._ Proof.: The statement follows from the assumption that there are only finitely many orbits of \(n\)-tuples of \(\operatorname{Aut}(\Gamma)\), because it implies that there are only finitely many possible values from \(\mathbb{Q}\cup\{\infty\}\) for \(\phi^{\Gamma}(a)\). A first-order sentence is called _universal_ if it is of the form \(\forall x_{1},\dots,x_{l}.\,\psi\) where \(\psi\) is quantifier-free. Every quantifier-free formula is equivalent to a formula in conjunctive normal form, so we assume in the following that quantifier-free formulas are of this form. Recall that a \(\tau\)-structure \(\mathfrak{A}\)_embeds_ into a \(\tau\)-structure \(\mathfrak{B}\) if there is an injective map from \(A\) to \(B\) that preserves all relations of \(\mathfrak{A}\) and their complements; the corresponding map is called an _embedding_. The _age_ of a \(\tau\)-structure is the class of all finite \(\tau\)-structures that embed into it. A structure \(\mathfrak{B}\) with a finite relational signature \(\tau\) is called * _finitely bounded_ if there exists a universal \(\tau\)-sentence \(\phi\) such that a finite structure \(\mathfrak{A}\) is in the age of \(\mathfrak{B}\) if and only if \(\mathfrak{A}\models\phi\). * _homogeneous_ if every isomorphism between finite substructures of \(\mathfrak{B}\) can be extended to an automorphism of \(\mathfrak{B}\). If \(\tau^{\prime}\subseteq\tau\), then a \(\tau^{\prime}\)-structure \(\mathfrak{B}^{\prime}\) is called the _reduct_ of \(\mathfrak{B}\) if \(\mathfrak{B}\) and \(\mathfrak{B}^{\prime}\) have the same domain and \(R^{\mathfrak{B}^{\prime}}=R^{\mathfrak{B}}\) for every \(R\in\tau^{\prime}\). 
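The notion of orbits of \(k\)-tuples can be made concrete on a small finite stand-in for the countable domain; the following Python sketch (purely illustrative, not part of the paper) computes the orbits of pairs under the full symmetric group, recovering the two orbits that correspond to the relations \(E\) and \(N\) of Example 2.3.

```python
from itertools import permutations, product

def orbits_of_k_tuples(domain, group, k):
    """Partition domain^k into orbits under a permutation group, where each
    permutation is given as a dict mapping domain elements to elements."""
    remaining = set(product(domain, repeat=k))
    orbits = []
    while remaining:
        seed = next(iter(remaining))
        orbit = {tuple(g[x] for x in seed) for g in group}
        orbits.append(orbit)
        remaining -= orbit
    return orbits

# Full symmetric group on a 4-element stand-in for the countable domain of
# Gamma_LCC: the pairs split into exactly two orbits ("equal" and "distinct"),
# matching the two weighted relations E and N of Example 2.3.
domain = [0, 1, 2, 3]
sym = [dict(zip(domain, p)) for p in permutations(domain)]
print(len(orbits_of_k_tuples(domain, sym, k=2)))  # -> 2
```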
Note that for every structure \(\mathfrak{B}\) with a finite relational signature, for every \(n\) there are only finitely many non-isomorphic substructures of \(\mathfrak{B}\) of size \(n\). Therefore, all countable homogeneous structures with a finite relational signature and all of their reducts have finitely many orbits of \(k\)-tuples for all \(k\in\mathbb{N}\), and hence an oligomorphic automorphism group. **Theorem 3.4**.: _Let \(\Gamma\) be a countable valued structure with finite signature such that there exists a finitely bounded homogeneous structure \(\mathfrak{B}\) with \(\operatorname{Aut}(\mathfrak{B})\subseteq\operatorname{Aut}(\Gamma)\). Then \(\operatorname{VCSP}(\Gamma)\) is in NP._ Proof.: Let \((\phi,u)\) be an instance of \(\operatorname{VCSP}(\Gamma)\) with \(n\) variables. Since \(\operatorname{Aut}(\mathfrak{B})\subseteq\operatorname{Aut}(\Gamma)\), every orbit of \(n\)-tuples of \(\operatorname{Aut}(\Gamma)\) is determined by the substructure induced by \(\mathfrak{B}\) on the elements of some tuple from the orbit. Note that two tuples \((a_{1},\dots,a_{n})\) and \((b_{1},\dots,b_{n})\) lie in the same orbit of \(\operatorname{Aut}(\mathfrak{B})\) if and only if the map that maps \(a_{i}\) to \(b_{i}\) for \(i\in\{1,\dots,n\}\) is an isomorphism between the substructures induced by \(\mathfrak{B}\) on \(\{a_{1},\dots,a_{n}\}\) and on \(\{b_{1},\dots,b_{n}\}\). Whether a given finite structure \(\mathfrak{A}\) is in the age of a fixed finitely bounded structure \(\mathfrak{B}\) can be decided in polynomial time: if \(\phi\) is the universal \(\tau\)-sentence which describes the age of \(\mathfrak{B}\), it suffices to exhaustively check all possible instantiations of the variables of \(\phi\) with elements of \(A\) and verify whether \(\phi\) is true in \(\mathfrak{A}\) under the instantiation. Hence, we may non-deterministically generate a structure \(\mathfrak{A}\) with domain \(\{1,\dots,n\}\) from the age of \(\mathfrak{B}\) and then verify in polynomial time whether the value \(\phi^{\Gamma}(b_{1},\dots,b_{n})\) is at most \(u\) for any tuple \((b_{1},\dots,b_{n})\in B^{n}\) such that \(i\mapsto b_{i}\) is an embedding of \(\mathfrak{A}\) into \(\mathfrak{B}\). ## 4. Expressive Power One of the fundamental concepts in the theory of constraint satisfaction is the concept of _primitive positive definitions_, which is the fragment of first-order logic where only equality, existential quantification, and conjunction are allowed (in other words, negation, universal quantification, and disjunction are forbidden). The motivation for this concept is that relations with such a definition can be added to the structure without changing the complexity of the respective CSP. The natural generalisation to _valued_ constraint satisfaction is the following notion of expressibility. **Definition 4.1**.: _Let \(\Gamma\) be a valued \(\tau\)-structure. We say that \(R\in\mathscr{R}_{C}^{(k)}\) can be expressed by \(\Gamma\) if there exists a \(\tau\)-expression \(\phi(x_{1},\dots,x_{k},y_{1},\dots,y_{n})\) such that for all \(a\in C^{k}\) we have_ \[R(a)=\inf_{b\in C^{n}}\phi^{\Gamma}(a,b).\] Note that \(\inf_{b\in C^{n}}\phi^{\Gamma}(a,b)\) might be irrational or \(-\infty\). If this is the case in Definition 4.1, then \(\phi\) does not witness that \(R\) can be expressed in \(\Gamma\) since weighted relations must have weights from \(\mathbb{Q}\cup\{\infty\}\). If \(C\) has an oligomorphic permutation group, however, then Lemma 3.3 guarantees existence. 
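On a finite domain, Definition 4.1 can be evaluated directly; the following Python sketch (an illustration, not the paper's construction) expresses a unary weighted relation from the structure \(\Gamma_{<}\) of Example 2.2 by minimising over one auxiliary variable.

```python
from itertools import product

DOMAIN = (0, 1)

def lt_cost(x, y):
    """<(x, y) from Example 2.2: cost 0 if x < y, else 1."""
    return 0 if x < y else 1

def express_unary(phi, n_aux):
    """Definition 4.1 on a finite domain: R(a) = min over all assignments b
    of the auxiliary variables of phi(a, b), returned as a dict a -> R(a)."""
    return {a: min(phi(a, b) for b in product(DOMAIN, repeat=n_aux))
            for a in DOMAIN}

# The tau-expression phi(x, y) = <(x, y) with y minimised away expresses the
# unary weighted relation R with R(0) = 0 and R(1) = 1.
R = express_unary(lambda a, b: lt_cost(a, b[0]), n_aux=1)
print(R)  # -> {0: 0, 1: 1}
```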
We will further see in Lemma 4.7 that if \(\Gamma\) has an oligomorphic automorphism group, then the addition of weighted relations that are expressible by \(\Gamma\) does not change the computational complexity of \(\operatorname{VCSP}(\Gamma)\). Another way to derive new relations from existing ones that preserves the computational complexity of the original VCSP is introduced in the following definition. **Definition 4.2**.: _Let \(R,R^{\prime}\in\mathscr{R}_{C}\). We say that \(R^{\prime}\) can be obtained from \(R\) by_ * non-negative scaling _if there exists_ \(r\in\mathbb{Q}_{\geq 0}\) _such that_ \(R=rR^{\prime}\)_;_ * shifting _if there exists_ \(s\in\mathbb{Q}\) _such that_ \(R=R^{\prime}+s\) In the literature about the complexity of finite-domain VCSPs we find another operator on sets of weighted relations that preserves the complexity of the VCSP: the operator \(\mathrm{Opt}\) (see, e.g., [22, 35]). **Definition 4.3**.: _Let \(R\in\mathscr{R}_{C}^{(k)}\). The relation containing all minimal-value tuples of \(R\) is defined as_ \[\mathrm{Opt}(R):=\{a\in\mathrm{Feas}(R)\mid R(a)\leq R(b)\text{ for every }b\in C^{k}\}.\] **Definition 4.4** (weighted relational clone).: _A weighted relational clone (over \(C\)) is a subset of \(\mathscr{R}_{C}\) that contains \(R_{=}\) and \(R_{\emptyset}\) (from Example 2.1), and is closed under expressibility, shifting, and non-negative scaling, \(\mathrm{Feas}\), and \(\mathrm{Opt}\). For a valued structure \(\Gamma\) with domain \(C\), we write \(\langle\Gamma\rangle\) for the smallest relational clone that contains the weighted relations of \(\Gamma\)._ The following example shows that neither the operator \(\mathrm{Opt}\) nor the operator \(\mathrm{Feas}\) is redundant in the definition above. **Example 4.5**.: _Consider the domain \(C=\{0,1,2\}\) and the unary weighted relation \(R\) on \(C\) defined by \(R(0)=0\), \(R(1)=1\) and \(R(2)=\infty\). Then the relation \(\mathrm{Feas}(R)\) cannot be obtained from \(R\) by expressing, shifting, non-negative scaling and use of \(\mathrm{Opt}\). Similarly, the relation \(\mathrm{Opt}(R)\) cannot be obtained from \(R\) by expressing, shifting, non-negative scaling and use of \(\mathrm{Feas}\)._ **Remark 4.6**.: _Note that for every valued structure \(\Gamma\) and \(R\in\langle\Gamma\rangle\), every automorphism of \(\Gamma\) is an automorphism of \(R\)._ The motivation of Definition 4.4 for valued CSPs stems from the following lemma, which shows that adding relations in \(\langle\Gamma\rangle\) does not change the complexity of the VCSP up to polynomial-time reductions. For valued structures over finite domains this is proved in [16], except for the operator \(\mathrm{Opt}\), for which a proof can be found in [22, Theorem 5.13]. Only parts of the proof can be generalised to valued structures over infinite domains in the general case, that is, when oligomorphic automorphism groups are not required; see, e.g., Schneider and Viola [41] and Viola [43, Lemma 7.1.4]. Note, however, that in these works the definition of VCSPs was changed: instead of asking whether a solution can be found of value at most \(u\), they ask whether there exists a solution of value strictly less than \(u\), to circumvent problems about infima that are not realised. Moreover, in [41] the authors restrict themselves to finite-valued weighted relations and hence do not consider the operator \(\mathrm{Opt}\). 
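The operators Feas and Opt of Definition 4.4 are straightforward to compute for weighted relations over a finite domain; the following Python sketch (illustrative only) applies them to the unary weighted relation \(R\) of Example 4.5.

```python
INF = float("inf")

def feas(weighted_relation):
    """Feas(R): the crisp relation of all tuples with finite cost."""
    return {a for a, cost in weighted_relation.items() if cost < INF}

def opt(weighted_relation):
    """Opt(R): the feasible tuples of minimal cost (Definition 4.3)."""
    finite = {a: c for a, c in weighted_relation.items() if c < INF}
    best = min(finite.values())
    return {a for a, c in finite.items() if c == best}

# The unary weighted relation R of Example 4.5 over C = {0, 1, 2},
# written with 1-tuples as keys.
R = {(0,): 0, (1,): 1, (2,): INF}
print(feas(R))  # the set {(0,), (1,)}
print(opt(R))   # the set {(0,)}
```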
It is visible from Example 4.5 that the operator \(\mathrm{Opt}\) cannot be simulated by the other ones already on finite domains, which is why it was introduced in [22]. The same is true for the operator \(\mathrm{Feas}\), which was included implicitly in [22] by allowing to scale weighted relations by \(0\) and defining \(0\cdot\infty=\infty\). In our approach, we work under the assumption that the valued structure has an oligomorphic automorphism group, which implies that infima in expressions are realized and the values of VCSPs of such structures can be attained. Therefore, we obtain a polynomial-time reduction for each of the operators in Definition 4.4 as in the finite domain case. **Lemma 4.7**.: _Let \(\Gamma\) be a valued structure with an oligomorphic automorphism group and a finite signature. Suppose that \(\Delta\) is a valued structure with a finite signature over the same domain \(C\) such that every cost function of \(\Delta\) is from \(\langle\Gamma\rangle\). Then there is a polynomial-time reduction from \(\mathrm{VCSP}(\Delta)\) to \(\mathrm{VCSP}(\Gamma)\)._ Proof.: Let \(\tau\) be the signature of \(\Gamma\). It suffices to prove the statement for expansions of \(\Gamma\) to signatures \(\tau\cup\{R\}\) that extend \(\tau\) with a single relation \(R\), \(R^{\Delta}\in\langle\Gamma\rangle\). If \(R^{\Delta}=R_{\emptyset}\), then an instance \(\phi\) of \(\mathrm{VCSP}(\Delta)\) with threshold \(u\in\mathbb{Q}\) is unsatisfiable if and only if \(\phi\) contains the symbol \(R\) or if it does not contain \(R\) and is unsatisfiable viewed as an instance of \(\mathrm{VCSP}(\Gamma)\). In the former case, choose a \(k\)-ary relation symbol \(S\in\tau\) and note that \(S^{\Gamma}\) attains only finitely many values, by the oligomorphicity of \(\operatorname{Aut}(\Gamma)\). Let \(u^{\prime}\in\mathbb{Q}\) be smaller than all of them. Then \(S(x_{1},\ldots,x_{k})\) is an instance of \(\operatorname{VCSP}(\Gamma)\) that never meets the threshold \(u^{\prime}\), so this provides a correct reduction. In the latter case, for every \(a\in C^{n}\) we have that \(\phi^{\Delta}(a)=\phi^{\Gamma}(a)\); this provides a polynomial-time reduction. Now suppose that \(R^{\Delta}=R_{=}\). Let \(\psi(x_{i_{1}},\ldots,x_{i_{k}})\) be obtained from an instance \(\phi(x_{1},\ldots,x_{n})\) of \(\operatorname{VCSP}(\Delta)\) by identifying all variables \(x_{i}\) and \(x_{j}\) such that \(\phi\) contains the summand \(R(x_{i},x_{j})\). Then \(\phi\) is satisfiable if and only if the instance \(\psi\) is satisfiable, and \(\inf_{a\in C^{n}}\phi^{\Delta}(a)=\inf_{b\in C^{k}}\psi^{\Gamma}(b)\); Again, this provides a polynomial-time reduction. Next, consider the case that for some \(\tau\)-expression \(\delta(y_{1},\ldots,y_{l},z_{1},\ldots,z_{k})\) we have \[R^{\Delta}(y_{1},\ldots,y_{l})=\inf_{a\in C^{k}}\delta^{\Gamma}(y_{1},\ldots,y _{l},a_{1},\ldots,a_{k}).\] Let \(\phi(x_{1},\ldots,x_{n})\) be an instance of \(\operatorname{VCSP}(\Delta)\). We replace each summand \(R(y_{1},\ldots,y_{l})\) in \(\phi\) by \(\delta(y_{1},\ldots,y_{l},z_{1},\ldots,z_{k})\) where \(z_{1},\ldots,z_{k}\) are new variables (different for each summand). Let \(\theta(x_{1},\ldots,x_{n},w_{1},\ldots,w_{t})\) be the resulting \(\tau\)-expression after doing this for all summands that involve \(R\). 
For any \(a\in C^{n}\) we have that \[\phi(a_{1},\ldots,a_{n})=\inf_{b\in C^{t}}\theta(a_{1},\ldots,a_{n},b)\] and hence \(\inf_{a\in C^{n}}\phi=\inf_{c\in C^{n+t}}\theta\); here we used the assumption that \(\operatorname{Aut}(\Gamma)\) is oligomorphic. Since we replace each summand by an expression whose size is constant (since \(\Gamma\) is fixed and finite) the expression \(\theta\) can be computed in polynomial time, which shows the statement. Suppose that \(R^{\Delta}=rS^{\Gamma}+s\) where \(r\in\mathbb{Q}_{\geq 0},s\in\mathbb{Q}\). Let \(p\in\mathbb{Z}_{\geq 0}\) and \(q\in\mathbb{Z}_{>0}\) be coprime integers such that \(p/q=r\). Let \((\phi,u)\) be an instance of \(\operatorname{VCSP}(\Delta)\) where \(\phi(x_{1},\ldots,x_{n})=\sum_{i=1}^{\ell}\phi_{i}+\sum_{j=1}^{k}\psi_{j}\), the summands \(\phi_{i}\) contain only symbols from \(\tau\), and each \(\psi_{j}\) involves the symbol \(R\). Let \(\psi_{j}^{\prime}\) be the expression obtained from \(\psi_{j}\) by replacing \(R\) with \(S\). For \(i\in\{1,\ldots,\ell\}\) replace \(\phi_{i}\) with \(q\) copies of itself and for \(j\in\{1,\ldots,k\}\), replace \(\psi_{j}\) with \(p\) copies of \(\psi_{j}^{\prime}\); let \(\phi^{\prime}(x_{1},\ldots,x_{n})\) be the resulting \(\tau\)-expression. Define \(u^{\prime}:=q(u-ks)\). Then for every \(a\in C^{n}\) the following are equivalent: \[\phi(a_{1},\ldots,a_{n}) =\sum_{i=1}^{\ell}\phi_{i}+\sum_{j=1}^{k}\left(\frac{p}{q}\psi_{j} ^{\prime}+s\right)\leq u\] \[\phi^{\prime}(a_{1},\ldots,a_{n}) =q\sum_{i=1}^{\ell}\phi_{i}+p\sum_{j=1}^{k}\psi_{j}^{\prime}\leq qu -qks=u^{\prime}\] Since \((\phi^{\prime},u^{\prime})\) can be computed from \((\phi,u)\) in polynomial time, this provides the desired reduction. Now suppose that \(R^{\Delta}=\operatorname{Feas}(S^{\Gamma})\) for some \(S\in\tau\). Let \((\phi,u)\) be an instance of \(\operatorname{VCSP}(\Delta)\), i.e., \(\phi(x_{1},\ldots,x_{n})=\sum_{i=1}^{\ell}\phi_{i}+\sum_{j=1}^{k}\psi_{j}\) where \(\psi_{j}\), \(j\in\{1,\ldots,k\}\) are all the atomic expressions in \(\phi\) that involve \(R\). If \(R^{\Delta}=R_{\emptyset}\), then the statement follows from the reduction for \(R_{\emptyset}\). Therefore, suppose that this not the case and let \(w\) be the maximum finite weight assigned by \(S\). Note that there are only finitely many values that the \(\ell\) atoms \(\phi_{i}\) may take and therefore only finitely many values that \(\sum_{i=1}^{\ell}\phi_{i}\) may take. Let \(v\) be the smallest of these values such that \(v>u\) and let \(d=v-u\); if \(v\) does not exist, let \(d=1\). To simplify the notation, set \(t=\lceil(kw)/d\rceil+1\). Let \(\psi_{j}^{\prime}\) be the \(\tau\)-expression resulting from \(\psi_{j}\) by replacing the symbol \(R\) by the symbol \(S\). Let \(\phi^{\prime}\) be the \(\tau\)-expression obtained from \(\phi\) by replacing each atom \(\phi_{i}\) with \(t\) copies of it and replacing every atom \(\psi_{j}\) by \(\psi_{j}^{\prime}\). Let \((\phi^{\prime},tu+kw)\) be the resulting instance of \(\operatorname{VCSP}(\Gamma)\); note that it can be computed in polynomial time. We claim that for every \(a\in C^{n}\), the following are equivalent: \[\phi(a_{1},\ldots,a_{n}) =\sum_{i=1}^{\ell}\phi_{i}+\sum_{j=1}^{k}\psi_{j}\leq u \tag{1}\] \[\phi^{\prime}(a_{1},\ldots,a_{n}) =t\cdot\sum_{i=1}^{\ell}\phi_{i}+\sum_{j=1}^{k}\psi_{j}^{\prime} \leq tu+kw \tag{2}\] If (1) holds, then by the definition of Feas we must have \(\psi_{j}=0\) for every \(j\in\{1,\ldots,k\}\). 
Thus \(\sum_{i=1}^{\ell}\phi_{i}\leq u\) and \(\sum_{j=1}^{k}\psi_{j}^{\prime}\leq kw\), which implies (2). Conversely, if (2) holds, then \(\psi_{j}^{\prime}\) is finite for every \(j\in\{1,\ldots,k\}\) and hence \(\psi_{j}=0\). Moreover, (2) implies

\[\sum_{i=1}^{\ell}\phi_{i}\leq u+\frac{kw}{t}.\]

Note that if \(v\) exists, then \(u+(kw)/t<v\). Therefore (regardless of the existence of \(v\)), this implies \(\sum_{i=1}^{\ell}\phi_{i}\leq u\), which together with what we have observed previously shows (1).

Finally, we consider the case that \(R^{\Delta}=\operatorname{Opt}(S^{\Gamma})\) for some relation symbol \(S\in\tau\). Since \(\tau\) is finite and \(\operatorname{Aut}(\Gamma)\) is oligomorphic, we may assume without loss of generality that the minimum weight attained by each weighted relation of \(\Delta\) equals \(0\); otherwise, we subtract from each weighted relation the smallest weight that it attains. This transformation does not affect the computational complexity of the VCSP (up to polynomial-time reductions). We may also assume that \(S^{\Gamma}\) attains some positive finite value, because otherwise \(\operatorname{Opt}(S^{\Gamma})=S^{\Gamma}\) and the statement is trivial. Let \(m\) be the smallest positive weight assigned by \(S^{\Gamma}\) and let \(M\) be the largest finite weight assigned by any weighted relation of \(\Gamma\) (again we use that \(\tau\) is finite and that \(\operatorname{Aut}(\Gamma)\) is oligomorphic). Let \((\phi,u)\), where \(\phi(x_{1},\ldots,x_{n})=\sum_{i=1}^{k}\phi_{i}\), be an instance of \(\operatorname{VCSP}(\Delta)\). For \(i\in\{1,\ldots,k\}\), if \(\phi_{i}\) involves the symbol \(R\), then replace it by \(k\cdot\lceil M/m\rceil+1\) copies and replace \(R\) by \(S\). Let \(\phi^{\prime}\) be the resulting \(\tau\)-expression. We claim that \(a\in C^{n}\) is a solution to the instance \((\phi^{\prime},\min(kM,u))\) of \(\operatorname{VCSP}(\Gamma)\) if and only if it is a solution to \((\phi,u)\). If \(a\in C^{n}\) is such that \(\phi(a)\leq u\) then for every \(i\in\{1,\ldots,k\}\) such that \(\phi_{i}\) involves \(R\) we have \(\phi_{i}(a)=0\). In particular, the minimal value attained by \(S^{\Gamma}\) equals \(0\) by our assumption, and hence \(\phi^{\prime}(a)=\phi(a)\leq u\); moreover, \(\phi(a)\leq kM\), since each of its at most \(k\) summands takes a finite value at \(a\) and every such value is at most \(M\). Hence \(\phi^{\prime}(a)\leq\min(kM,u)\). Now suppose that \(\phi(a)>u\). Then \(\phi^{\prime}(a)>u\geq\min(kM,u)\) or there exists an \(i\in\{1,\ldots,k\}\) such that \(\phi_{i}(a)=\infty\). If \(\phi_{i}\) does not involve the symbol \(R\), then \(\phi^{\prime}(a)=\infty\) as well. If \(\phi_{i}\) involves the symbol \(R\), then \(\phi^{\prime}(a)\geq(k\cdot\lceil M/m\rceil+1)m>kM\). In any case, \(\phi^{\prime}(a)>\min(kM,u)\). Since \(\phi^{\prime}\) can be computed from \(\phi\) in polynomial time, this concludes the proof.

The next example illustrates the use of Lemma 4.7 for obtaining hardness results.

**Example 4.8**.: _We revisit the countably infinite valued structure \(\Gamma_{LCC}\) from Example 2.3. Recall that \(\operatorname{VCSP}(\Gamma_{LCC})\) is the least correlation clustering problem with partial information and that \(\operatorname{Aut}(\Gamma_{LCC})\) is oligomorphic. Let \(\Gamma_{EC}\) be the relational structure with the same domain as \(\Gamma_{LCC}\) and the relation \(R:=\{(x,y,z)\mid(x=y\wedge y\neq z)\vee(x\neq y\wedge y=z)\}\) (attaining values \(0\) and \(\infty\))._
Note that_ \[R(x,y,z)=\operatorname{Opt}(N(x,z)+N(x,z)+E(x,y)+E(y,z)).\] _This provides an alternative proof of NP-hardness of the least correlation clustering with partial information via Lemma 4.7, because \(\operatorname{CSP}(\Gamma_{EC})\) is known to be NP-hard [6]._ ## 5. Hardness from pp-Constructions A universal-algebraic theory of VCSPs for finite valued structures has been developed in [34], following the classical approach to CSPs which is based on the concepts of cores, addition of constants, and primitive positive interpretations. Subsequently, an important conceptual insight has been made for classical CSPs which states that every structure that can be interpreted in the expansion of the core of the structure by constants can also be obtained by taking a pp-power if we then consider structures up to homomorphic equivalence [3]. We are not aware of any published reference that adapts this perspective to the algebraic theory of VCSPs, so we develop (parts of) this approach here. As in [3], we immediately step from valued structures with a finite domain to the more general case of valued structures with an oligomorphic automorphism group. **Definition 5.1** (pp-power).: _Let \(\Gamma\) be a valued structure with domain \(C\) and let \(d\in\mathbb{N}\). Then a (\(d\)-th) pp-power of \(\Gamma\) is a valued structure \(\Delta\) with domain \(C^{d}\) such that for every weighted relation \(R\) of \(\Delta\) of arity \(k\) there exists a weighted relation \(S\) of arity \(kd\) in \(\langle\Gamma\rangle\) such that_ \[R((a_{1}^{1},\ldots,a_{d}^{1}),\ldots,(a_{1}^{k},\ldots,a_{d}^{k}))=S(a_{1}^{1 },\ldots,a_{d}^{1},\ldots,a_{1}^{k},\ldots,a_{d}^{k}).\] The name 'pp-power' comes from 'primitive positive power', since for relational structures expressibility is captured by primitive positive formulas. The following proposition shows that the VCSP of a pp-power reduces to the VCSP of the original structure. **Proposition 5.2**.: _Let \(\Gamma\) and \(\Delta\) be valued structures such that \(\operatorname{Aut}(\Gamma)\) is oligomorphic and \(\Delta\) is a pp-power of \(\Gamma\). Then \(\operatorname{Aut}(\Delta)\) is oligomorphic and there is a polynomial-time reduction from \(\operatorname{VCSP}(\Delta)\) to \(\operatorname{VCSP}(\Gamma)\)._ Proof.: Let \(d\) be the dimension of the pp-power and let \(\tau\) be the signature of \(\Gamma\). By Remark 4.6, \(\operatorname{Aut}(\Gamma)\subseteq\operatorname{Aut}(\Delta)\) and thus \(\operatorname{Aut}(\Delta)\) is oligomorphic. By Lemma 4.7, we may suppose that for every weighted relation \(R\) of arity \(k\) of \(\Delta\) the weighted relation \(S\in\langle\Gamma\rangle\) of arity \(dk\) from the definition of pp-powers equals \(S^{\Gamma}\) for some \(S\in\tau\). Let \((\phi,u)\) be an instance of \(\operatorname{VCSP}(\Delta)\). For each variable \(x\) of \(\phi\) we introduce \(d\) new variables \(x_{1},\ldots,x_{d}\). For each summand \(R(y^{1},\ldots,y^{k})\) we introduce a summand \(S(y_{1}^{1},\ldots,y_{d}^{1},\ldots,y_{1}^{k},\ldots,y_{d}^{k})\); let \(\psi\) be the resulting \(\tau\)-expression. It is now straightforward to verify that \((\phi,u)\) has a solution with respect to \(\Delta\) if and only if \((\psi,u)\) has a solution with respect to \(\Gamma\). Note that, in particular, if \(\operatorname{VCSP}(\Gamma)\), parametrized by the threshold \(u\), is fixed-parameter tractable, then so is \(\operatorname{VCSP}(\Delta)\). 
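The instance transformation used in the proof of Proposition 5.2 is purely syntactic and can be sketched as follows; the representation of instances as lists of pairs (relation symbol, tuple of variables), the symbol map, and the identifiers below are hypothetical and only serve to illustrate the variable-splitting step.

```python
def pp_power_reduction(summands, symbol_map, d):
    """Translate a VCSP(Delta) instance into a VCSP(Gamma) instance, where Delta
    is a d-th pp-power of Gamma.  Each variable x is split into the d variables
    (x, 1), ..., (x, d), and a summand R(y^1, ..., y^k) becomes
    S(y^1_1, ..., y^1_d, ..., y^k_1, ..., y^k_d).  The threshold u is unchanged."""
    new_summands = []
    for rel_symbol, variables in summands:
        split_vars = [(x, i) for x in variables for i in range(1, d + 1)]
        new_summands.append((symbol_map[rel_symbol], tuple(split_vars)))
    return new_summands

# A hypothetical binary summand over a 2nd pp-power:
instance = [("R", ("x", "y"))]
print(pp_power_reduction(instance, {"R": "S"}, 2))
# [('S', (('x', 1), ('x', 2), ('y', 1), ('y', 2)))]
```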
If \(C\) and \(D\) are sets, then we equip the space \(C^{D}\) of functions from \(D\) to \(C\) with the topology of pointwise convergence, where \(C\) is taken to be discrete. In this topology, a basis of open sets is given by \[\mathscr{S}_{a,b}:=\{f\in C^{D}\mid f(a)=b\}\] for \(a\in D^{k}\) and \(b\in C^{k}\) for some \(k\in\mathbb{N}\). For any topological space \(T\), we denote by \(B(T)\) the Borel \(\sigma\)-algebra on \(T\), i.e., the smallest subset of the powerset \(\mathcal{P}(T)\) which contains all open sets and is closed under countable intersection and complement. We write \([0,1]\) for the set \(\{x\in\mathbb{R}\mid 0\leq x\leq 1\}\). **Definition 5.3** (fractional map).: _Let \(C\) and \(D\) be sets. A fractional map from \(D\) to \(C\) is a probability distribution_ \[(C^{D},B(C^{D}),\omega\colon B(C^{D})\to[0,1]),\] _that is,_ * \(\omega\) _is countably additive: if_ \(A_{1},A_{2},\cdots\in B(C^{D})\) _are disjoint, then_ \[\omega(\bigcup_{i\in\mathbb{N}}A_{i})=\sum_{i\in\mathbb{N}}\omega(A_{i}).\] * \(\omega(C^{D})=1\)_._ If \(f\in C^{D}\), we often write \(\omega(f)\) instead of \(\omega(\{f\})\). Note that \(\{f\}\in B(C^{D})\) for every \(f\). The set \([0,1]\) carries the topology inherited from the standard topology on \(\mathbb{R}\). We also view \(\mathbb{R}\cup\{\infty\}\) as a topological space with a basis of open sets given by all open intervals \((a,b)\) for \(a,b\in\mathbb{R}\), \(a<b\) and additionally all sets of the form \(\{x\in\mathbb{R}\mid x>a\}\cup\{\infty\}\). A _(real-valued) random variable_ is a _measurable function_\(X\colon T\to\mathbb{R}\cup\{\infty\}\), i.e., pre-images of elements of \(B(\mathbb{R}\cup\{\infty\})\) under \(X\) are in \(B(T)\). If \(X\) is a real-valued random variable, then the _expected value of \(X\) (with respect to a probability distribution \(\omega\))_ is denoted by \(E_{\omega}[X]\) and is defined via the Lebesgue integral \[E_{\omega}[X]:=\int_{T}Xd\omega.\] Recall that the Lebesgue integral \(\int_{T}Xd\omega\) need not exist, in which case \(E_{\omega}[X]\) is undefined; otherwise, the integral equals a real number, \(\infty\), or \(-\infty\). For the convenience of the reader we recall the definition and some properties of the Lebesgue integral, specialised to our setting, in Appendix A. Also recall that the expected value is * _linear_, i.e., for every \(a,b\in\mathbb{R}\) and random variables \(X\), \(Y\) such that \(E_{\omega}[X]\) and \(E_{\omega}[Y]\) exist and \(aE_{\omega}[X]+bE_{\omega}[Y]\) is defined we have \[E_{\omega}[aX+bY]=aE_{\omega}[X]+bE_{\omega}[Y];\] * _monotone_, i.e., if \(X,Y\) are random variables such that \(E_{\omega}[X]\) and \(E_{\omega}[Y]\) exist and \(X(f)\leq Y(f)\) for all \(f\in T\), then \(E_{\omega}[X]\leq E_{\omega}[Y]\). Let \(C\) and \(D\) be sets. In the rest of the paper, we will work exclusively on a topological space \(C^{D}\) of maps \(f\colon D\to C\) and the special case \(\mathscr{O}_{C}^{(\ell)}\) for some \(\ell\in\mathbb{N}\) (i.e., \(D=C^{\ell}\)). Note that if \(C\) and \(D\) are infinite, then these spaces are uncountable and hence there are probability distributions \(\omega\) such that \(\omega(A)=0\) for every \(1\)-element set \(A\). Therefore, in these cases, \(E_{\omega}[X]\) for a random variable \(X\) might not be expressible as a sum. **Definition 5.4** (fractional homomorphism).: _Let \(\Gamma\) and \(\Delta\) be valued \(\tau\)-structures with domains \(C\) and \(D\), respectively. 
A fractional homomorphism from \(\Delta\) to \(\Gamma\) is a fractional map from \(D\) to \(C\) such that for every \(R\in\tau\) of arity \(k\) and every tuple \(a\in D^{k}\) it holds for the random variable \(X\colon C^{D}\to\mathbb{R}\cup\{\infty\}\) given by_ \[f\mapsto R^{\Gamma}(f(a))\] _that \(E_{\omega}[X]\) exists and that_ \[E_{\omega}[X]\leq R^{\Delta}(a).\] The following lemma shows that if \(\operatorname{Aut}(\Gamma)\) is oligomorphic, then the expected value from Definition 5.4 always exists. **Lemma 5.5**.: _Let \(C\) and \(D\) be sets, \(a\in D^{k}\), \(R\in\mathscr{R}_{C}^{(k)}\). Let \(X\colon C^{D}\to\mathbb{R}\cup\{\infty\}\) be the random variable given by_ \[f\mapsto R(f(a)).\] _If \(\operatorname{Aut}(C;R)\) is oligomorphic, then \(E_{\omega}[X]\) exists and \(E_{\omega}[X]>-\infty\)._ Proof.: It is enough to show that \(\int_{C^{D}}X^{-}d\omega\neq\infty\). Since \(\operatorname{Aut}(C;R)\) is oligomorphic, there are only finitely many orbits of \(k\)-tuples in \(\operatorname{Aut}(C;R)\). Let \(O_{1},\ldots,O_{m}\) be all orbits of \(k\)-tuples of \(\operatorname{Aut}(C;R)\) on which \(R\) is negative. For every \(i\in\{1,\ldots,m\}\), let \(b_{i}\in O_{i}\). Then we obtain (see (21) in Appendix A for a detailed derivation of the first equality) \[\int_{C^{D}}X^{-}d\omega =\sum_{b\in C^{k},R(b)<0}-R(b)\omega(\mathscr{S}_{a,b})\] \[=-\sum_{i=1}^{m}R(b_{i})\sum_{b\in O_{i}}\omega(\mathscr{S}_{a,b})\] \[=-\sum_{i=1}^{m}R(b_{i})\;\omega\left(\bigcup_{b\in O_{i}} \mathscr{S}_{a,b}\right)\] \[\leq-\sum_{i=1}^{m}R(b_{i})<\infty.\qed\] **Lemma 5.6**.: _Let \(\Gamma_{1}\), \(\Gamma_{2}\), \(\Gamma_{3}\) be countable valued \(\tau\)-structures such that there exists a fractional homomorphism \(\omega_{1}\) from \(\Gamma_{1}\) to \(\Gamma_{2}\) and a fractional homomorphism \(\omega_{2}\) from \(\Gamma_{2}\) to \(\Gamma_{3}\). Then there exists a fractional homomorphism \(\omega_{3}:=\omega_{2}\circ\omega_{1}\) from \(\Gamma_{1}\) to \(\Gamma_{3}\)._ Proof.: Let \(C_{1}\), \(C_{2}\), \(C_{3}\) be the domains of \(\Gamma_{1}\), \(\Gamma_{2}\), and \(\Gamma_{3}\), respectively. If \(a\in C_{1}^{k}\) and \(c\in C_{3}^{k}\), for some \(k\in\mathbb{N}\), then define \[\omega_{3}(\mathscr{S}_{a,c}):=\sum_{b\in C_{2}^{k}}\omega_{1}(\mathscr{S}_{a, b})\omega_{2}(\mathscr{S}_{b,c}).\] Note that on sets of this form, i.e., on basic open sets in \(C_{3}^{C_{1}}\), \(\omega_{3}\) is countably additive. Since our basis of open sets is closed under intersection, this definition extends uniquely to all of \(B(C_{3}^{C_{1}})\) by Dynkin's \(\pi\)-\(\lambda\) theorem. The following was shown for valued structures over finite domains in [12, Proposition 8.4]. **Proposition 5.7**.: _Let \(\Gamma\) and \(\Delta\) be valued \(\tau\)-structures with domains \(C\) and \(D\) and with a fractional homomorphism \(\omega\) from \(\Delta\) to \(\Gamma\). Then the value of every VCSP instance \(\phi\) with respect to \(\Gamma\) is at most the value of \(\phi\) with respect to \(\Delta\)._ Proof.: Let \(\phi(x_{1},\ldots,x_{n})=\sum_{i=1}^{m}R_{i}(x_{j_{1}^{i}},\ldots,x_{j_{k_{i}^{ i}}})\) be a \(\tau\)-expression, where \(j_{1}^{i},\ldots,j_{k_{i}}^{i}\in\{1,\ldots,n\}\) for every \(i\in\{1,\ldots m\}\). To simplify the notation in the proof, if \(v=(v_{1},\ldots,v_{t})\) is a \(t\)-tuple of elements of some domain and \(i_{1},\ldots,i_{s}\in\{1,\ldots,t\}\), we will write \(v_{i_{1},\ldots,i_{s}}\) for the tuple \((v_{i_{1}},\ldots,v_{i_{s}})\). Let \(\varepsilon>0\). 
From the definition of infimum, there exists \(a^{*}\in D^{n}\) such that

\[\phi^{\Delta}(a^{*})\leq\inf_{a\in D^{n}}\phi^{\Delta}(a)+\varepsilon/2 \tag{3}\]

and \(f^{*}\in C^{D}\) such that

\[\phi^{\Gamma}(f^{*}(a^{*}))\leq\inf_{f\in C^{D}}\phi^{\Gamma}(f(a^{*}))+\varepsilon/2. \tag{4}\]

Note that \(E_{\omega}[f\mapsto R_{i}^{\Gamma}(f(a^{*})_{j_{1}^{i},\dots,j_{k_{i}}^{i}})]\) exists for every \(i\in\{1,\dots,m\}\) by the definition of a fractional homomorphism. Suppose first that \(\sum_{i=1}^{m}E_{\omega}[f\mapsto R_{i}^{\Gamma}(f(a^{*})_{j_{1}^{i},\dots,j_{k_{i}}^{i}})]\) is defined. Then

\[\inf_{b\in C^{n}}\phi^{\Gamma}(b) \leq\phi^{\Gamma}(f^{*}(a^{*}))\qquad\text{(definition of infimum)}\]
\[\leq\inf_{f\in C^{D}}\phi^{\Gamma}(f(a^{*}))+\varepsilon/2\qquad\text{(by (4))}\]
\[\leq E_{\omega}[f\mapsto\phi^{\Gamma}(f(a^{*}))]+\varepsilon/2\qquad\text{(by the monotonicity of }E_{\omega})\]
\[=\sum_{i=1}^{m}E_{\omega}[f\mapsto R_{i}^{\Gamma}(f(a^{*})_{j_{1}^{i},\dots,j_{k_{i}}^{i}})]+\varepsilon/2\qquad\text{(by the linearity of }E_{\omega})\]
\[\leq\sum_{i=1}^{m}R_{i}^{\Delta}(a^{*}_{j_{1}^{i},\dots,j_{k_{i}}^{i}})+\varepsilon/2\qquad\text{(since }\omega\text{ is a fractional homomorphism)}\]
\[=\phi^{\Delta}(a^{*})+\varepsilon/2\]
\[\leq\inf_{a\in D^{n}}\phi^{\Delta}(a)+\varepsilon\qquad\text{(by (3)).}\]

Since \(\varepsilon>0\) was chosen arbitrarily, it follows that the value of \(\phi\) with respect to \(\Gamma\) is at most the value of \(\phi\) with respect to \(\Delta\). Suppose now that \(\sum_{i=1}^{m}E_{\omega}[f\mapsto R_{i}^{\Gamma}(f(a^{*})_{j_{1}^{i},\dots,j_{k_{i}}^{i}})]\) is not defined. Then there exists \(i\in\{1,\dots,m\}\) such that \(E_{\omega}[f\mapsto R_{i}^{\Gamma}(f(a^{*})_{j_{1}^{i},\dots,j_{k_{i}}^{i}})]=\infty\). By the definition of a fractional homomorphism, this implies that \(R_{i}^{\Delta}(a^{*}_{j_{1}^{i},\dots,j_{k_{i}}^{i}})=\infty\) and hence \(\sum_{i=1}^{m}R_{i}^{\Delta}(a^{*}_{j_{1}^{i},\dots,j_{k_{i}}^{i}})=\infty\). Therefore, we obtain as above that

\[\inf_{b\in C^{n}}\phi^{\Gamma}(b)\leq\inf_{a\in D^{n}}\phi^{\Delta}(a),\]

which is what we wanted to prove.

**Remark 5.8**.: _For finite domains, the converse of Proposition 5.7 is true as well [12, Proposition 8.4]._

We say that two valued \(\tau\)-structures \(\Gamma\) and \(\Delta\) are _fractionally homomorphically equivalent_ if there exist fractional homomorphisms from \(\Gamma\) to \(\Delta\) and from \(\Delta\) to \(\Gamma\). Clearly, fractional homomorphic equivalence is indeed an equivalence relation on valued structures of the same signature.

**Corollary 5.9**.: _Let \(\Gamma\) and \(\Delta\) be valued \(\tau\)-structures with oligomorphic automorphism groups that are fractionally homomorphically equivalent. Then \(\operatorname{VCSP}(\Gamma)\) and \(\operatorname{VCSP}(\Delta)\) are polynomial-time equivalent._

Proof.: In fact, the two problems \(\operatorname{VCSP}(\Gamma)\) and \(\operatorname{VCSP}(\Delta)\) coincide. By Proposition 5.7, for every instance \(\phi\), the values of \(\phi\) with respect to \(\Gamma\) and \(\Delta\) are equal. By Lemma 3.3, the value is attained in both structures and hence every instance \(\phi\) with a threshold \(u\) has a solution with respect to \(\Gamma\) if and only if it has a solution with respect to \(\Delta\).
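For intuition, here is a toy illustration of Definition 5.4 and Proposition 5.7 with hypothetical structures (not taken from the examples in this article). Let \(\Delta\) have the one-element domain \(D=\{d\}\) and a single unary weighted relation with \(R^{\Delta}(d)=1/2\), and let \(\Gamma\) have domain \(C=\{0,1\}\) with \(R^{\Gamma}(0)=0\) and \(R^{\Gamma}(1)=1\). The fractional map \(\omega\) that assigns probability \(1/2\) to each of the two maps \(d\mapsto 0\) and \(d\mapsto 1\) is a fractional homomorphism from \(\Delta\) to \(\Gamma\), since

\[E_{\omega}[f\mapsto R^{\Gamma}(f(d))]=\tfrac{1}{2}\cdot 0+\tfrac{1}{2}\cdot 1=\tfrac{1}{2}\leq R^{\Delta}(d),\]

and its distribution is not concentrated on a single map. In accordance with Proposition 5.7, for the instance \(\phi=R(x)\) the value with respect to \(\Gamma\) (namely \(0\)) is at most the value with respect to \(\Delta\) (namely \(1/2\)).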
**Remark 5.10**.: _Note that if \(\Gamma\) and \(\Delta\) are classical relational \(\tau\)-structures that are homomorphically equivalent in the classical sense, then they are fractionally homomorphically equivalent when we view them as valued structures: if \(h_{1}\) is the homomorphism from \(\Gamma\) to \(\Delta\) and \(h_{2}\) is the homomorphism from \(\Delta\) to \(\Gamma\), then this is witnessed by the fractional homomorphisms \(\omega_{1}\) and \(\omega_{2}\) such that \(\omega_{1}(h_{1})=\omega_{2}(h_{2})=1\)._ **Definition 5.11** (pp-construction).: _Let \(\Gamma\) and \(\Delta\) be valued structures. We say that \(\Delta\) has a pp-construction in \(\Gamma\) if \(\Delta\) is fractionally homomorphically equivalent to a structure \(\Delta^{\prime}\) which is a pp-power of \(\Gamma\)._ **Corollary 5.12**.: _Let \(\Gamma\) and \(\Delta\) be valued structures with finite signatures and oligomorphic automorphism groups such that \(\Delta\) has a pp-construction in \(\Gamma\). Then there is a polynomial-time reduction from \(\mathrm{VCSP}(\Delta)\) to \(\mathrm{VCSP}(\Gamma)\)._ Proof.: Combine Proposition 5.2 and Corollary 5.9. Let \(\mathrm{OIT}\) be the following relation \[\mathrm{OIT}=\{(0,0,1),(0,1,0),(1,0,0)\}.\] It is well-known (see, e.g., [5]) that \(\mathrm{CSP}(\{0,1\};\mathrm{OIT})\) is NP-complete. **Corollary 5.13**.: _Let \(\Gamma\) be a valued structure with a finite signature and oligomorphic automorphism group such that \((\{0,1\};\mathrm{OIT})\) has a pp-construction in \(\Gamma\). Then \(\mathrm{VCSP}(\Gamma)\) is NP-hard._ Proof.: Follows from the NP-hardness of \(\mathrm{CSP}(\{0,1\};\mathrm{OIT})\) via Corollary 5.12. **Lemma 5.14**.: _The relation of pp-constructibility on the class of countable valued structures is transitive._ Proof.: Clearly, a pp-power of a pp-power is again a pp-power, and fractional homomorphic equivalence is transitive by Lemma 5.6. We are therefore left to prove that if \(\Gamma\) and \(\Delta\) are valued structures such that \(\Delta\) is a \(d\)-dimensional pp-power of \(\Gamma\), and if \(\Gamma^{\prime}\) is fractionally homomorphically equivalent to \(\Gamma\) via fractional homomorphisms \(\omega_{1}\colon\Gamma\to\Gamma^{\prime}\) and \(\omega_{2}\colon\Gamma^{\prime}\to\Gamma\), then \(\Delta\) also has a pp-construction in \(\Gamma^{\prime}\). Let \(C\) and \(C^{\prime}\) be the domains of \(\Gamma\) and \(\Gamma^{\prime}\), respectively. Take the \(\tau\)-expressions that define the weighted relations of \(\Delta\) over \(\Gamma\), and interpret them over \(\Gamma^{\prime}\) instead of \(\Gamma\); let \(\Delta^{\prime}\) be the resulting valued structure. Note that \(\Delta^{\prime}\) is a \(d\)-dimensional pp-power of \(\Gamma^{\prime}\). For a map \(f\colon\Gamma\to\Gamma^{\prime}\), let \(\tilde{f}\colon\Delta\to\Delta^{\prime}\) be given by \((x_{1},\ldots,x_{d})\mapsto(f(x_{1}),\ldots,f(x_{d}))\). Then for all \(S\in B((C^{\prime})^{C})\) we define \[\tilde{\omega}_{1}(\{\tilde{f}\mid f\in S\}):=\omega_{1}(S)\] and \[\tilde{\omega}_{1}(\tilde{S}):=\tilde{\omega}_{1}(\tilde{S}\cap\{\tilde{f} \mid f\in(C^{\prime})^{C}\})\] for all \(\tilde{S}\in B\big{(}((C^{\prime})^{d})^{C^{d}}\big{)}\). Note that \(\tilde{\omega}_{1}\) is a fractional homomorphism from \(\Delta\) to \(\Delta^{\prime}\). Analogously we obtain from \(\omega_{2}\) a fractional homomorphism \(\tilde{\omega}_{2}\) from \(\Delta^{\prime}\) to \(\Delta\). 
Therefore, \(\Delta\) is fractionally homomorphically equivalent to \(\Delta^{\prime}\), which is a pp-power of \(\Gamma^{\prime}\). In other words, \(\Delta\) has a pp-construction in \(\Gamma^{\prime}\). ## 6. Fractional Polymorphisms In this section we introduce _fractional polymorphisms_ of valued structures; they are an important tool for formulating tractability results and complexity classifications of VCSPs. For valued structures with a finite domain, our definition specialises to the established notion of a fractional polymorphism which has been used to study the complexity of VCSPs for valued structures over finite domains (see, e.g. [42]). Our approach is different from the one of Viola and Schneider [41, 43] in that we work with arbitrary probability spaces instead of distributions with finite support. As we will see in Section 7, fractional polymorphisms can be used to give sufficient conditions for tractability of VCSPs of certain valued structures with oligomorphic automorphism groups. This justifies the more general notion of a fractional polymorphism, as it might provide a tractability proof for more problems. We do not know if there are examples in our setting where it is necessary to use the more general notion; see Question 9.2. Let \(\mathscr{O}_{C}^{(\ell)}\) be the set of all operations \(f\colon C^{\ell}\to C\) on a set \(C\) of arity \(\ell\). We equip \(\mathscr{O}_{C}^{(\ell)}\) with the topology of pointwise convergence, where \(C\) is taken to be discrete. That is, the basic open sets are of the form \[\mathscr{S}_{a^{1},\ldots,a^{\ell},b}:=\{f\in\mathscr{O}_{C}^{(\ell)}\mid f(a^ {1},\ldots,a^{\ell})=b\} \tag{5}\] where \(a^{1},\ldots,a^{\ell},b\in C^{m}\), for some \(m\in\mathbb{N}\), and \(f\) is applied componentwise. Let \[\mathscr{O}_{C}:=\bigcup_{\ell\in\mathbb{N}}\mathscr{O}_{C}^{(\ell)}.\] **Definition 6.1** (fractional operation).: _Let \(\ell\in\mathbb{N}\). A fractional operation on \(C\) of arity \(\ell\) is a probability distribution_ \[\big{(}\mathscr{O}_{C}^{(\ell)},B(\mathscr{O}_{C}^{(\ell)}),\omega\colon B( \mathscr{O}_{C}^{(\ell)})\to[0,1]\big{)}.\] _The set of all fractional operations on \(C\) of arity \(\ell\) is denoted by \(\mathscr{F}_{C}^{(\ell)}\), and \(\mathscr{F}_{C}:=\bigcup_{\ell\in\mathbb{N}}\mathscr{F}_{C}^{(\ell)}\)._ If the reference to \(C\) is clear, we occasionally omit the subscript \(C\). We often use \(\omega\) for both the entire fractional operation and for the map \(\omega\colon B(\mathscr{O}_{C}^{(\ell)})\to[0,1]\). **Definition 6.2**.: _A fractional operation \(\omega\in\mathscr{F}_{C}^{(\ell)}\) improves a \(k\)-ary weighted relation \(R\in\mathscr{R}_{C}^{(k)}\) if for all \(a^{1},\ldots,a^{\ell}\in C^{k}\)_ \[E:=E_{\omega}[f\mapsto R(f(a^{1},\ldots,a^{\ell}))]\] _exists and_ \[E\leq\frac{1}{\ell}\sum_{j=1}^{\ell}R(a^{j}). \tag{6}\] Note that (6) has the interpretation that the expected value of \(R(f(a^{1},\ldots,a^{\ell}))\) is at most the average of the values \(R(a^{1}),\ldots,R(a^{\ell})\). Also note that if \(R\) is a classical relation improved by a fractional operation \(\omega\) and \(\omega(f)>0\) for \(f\in\mathscr{O}^{(\ell)}\), then \(f\) must preserve \(R\) in the usual sense. It follows from Lemma 5.5 that if \(\operatorname{Aut}(C;R)\) is oligomorphic, then \(E_{\omega}[f\mapsto R(f(a^{1},\ldots,a^{\ell}))]\) always exists and is greater than \(-\infty\). 
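For fractional operations with finite support over a finite domain, the condition of Definition 6.2 can be verified by brute force. The following minimal sketch does this under these assumptions; the domain, the weighted relation, and the fractional operations below are hypothetical toy data (the second check mirrors the computation in Example 6.9 below).

```python
from itertools import product

C = [0, 1]

# A hypothetical binary weighted relation on C: weight 0 on (0, 1), weight 1 otherwise.
R = {(0, 0): 1, (0, 1): 0, (1, 0): 1, (1, 1): 1}

def improves(omega, R, k, ell):
    """Brute-force check of inequality (6): omega is a dict mapping ell-ary
    operations on C to their probabilities (finite support), and R is a k-ary
    weighted relation given as a dict from k-tuples to values."""
    for tuples in product(product(C, repeat=k), repeat=ell):
        # f is applied componentwise to the k-tuples a^1, ..., a^ell
        lhs = sum(p * R[tuple(f(*column) for column in zip(*tuples))]
                  for f, p in omega.items())
        rhs = sum(R[a] for a in tuples) / ell
        if lhs > rhs:
            return False
    return True

id2 = {(lambda x, y: x): 0.5, (lambda x, y: y): 0.5}   # Id_2 from Example 6.5
w_min = {(lambda x, y: min(x, y)): 1.0}                 # point mass on min

print(improves(id2, R, 2, 2))    # True: Id_2 improves every weighted relation
print(improves(w_min, R, 2, 2))  # False, e.g. for a^1 = (0, 1), a^2 = (0, 0);
                                 # compare the computation in Example 6.9 below
```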
**Definition 6.3** (fractional polymorphism).: _If \(\omega\) improves every weighted relation in \(\Gamma\), then \(\omega\) is called a fractional polymorphism of \(\Gamma\); the set of all fractional polymorphisms of \(\Gamma\) is denoted by \(\operatorname{fPol}(\Gamma)\)._ **Remark 6.4**.: _A fractional polymorphism of arity \(\ell\) of a valued structure \(\Gamma\) might also be viewed as a fractional homomorphism from a specific \(\ell\)-th pp-power of \(\Gamma\) to \(\Gamma\), which we denote by \(\Gamma^{\ell}\): if \(C\) is the domain and \(\tau\) the signature of \(\Gamma\), then the domain of \(\Gamma^{\ell}\) is \(C^{\ell}\), and for every \(R\in\tau\) of arity \(k\) we have_ \[R^{\Gamma^{\ell}}((a^{1}_{1},\ldots,a^{1}_{\ell}),\ldots,(a^{k}_{1},\ldots,a^{ k}_{\ell})):=\frac{1}{\ell}\sum_{i=1}^{\ell}R^{\Gamma}(a^{1}_{i},\ldots,a^{k}_{i}).\] **Example 6.5**.: _Let \(\pi^{\ell}_{i}\in\mathscr{O}_{C}^{(\ell)}\) be the \(i\)-th projection of arity \(\ell\), which is given by \(\pi^{\ell}_{i}(x_{1},\ldots,x_{\ell})=x_{i}\). The fractional operation \(\operatorname{Id}_{\ell}\) of arity \(\ell\) such that \(\operatorname{Id}_{\ell}(\pi^{\ell}_{i})=\frac{1}{\ell}\) for every \(i\in\{1,\ldots,\ell\}\) is a fractional polymorphism of every valued structure with domain \(C\)._ **Example 6.6**.: _Let \(\Gamma\) be a valued structure and \(\alpha\in\operatorname{Aut}(\Gamma)\). The fractional operation \(\omega\in\mathscr{F}_{C}^{(1)}\) defined by \(\omega(\alpha)=1\) is a fractional polymorphism of \(\Gamma\)._ Let \(\mathscr{C}\subseteq\mathscr{F}_{C}\). We write \(\mathscr{C}^{(\ell)}\) for \(\mathscr{C}\cap\mathscr{F}_{C}^{(\ell)}\) and \(\operatorname{Imp}(\mathscr{C})\) for the set of weighted relations that are improved by every fractional operation in \(\mathscr{C}\). **Lemma 6.7**.: _Let \(R\in\mathscr{R}_{C}^{(k)}\) and let \(\Gamma\) be a valued structure with domain \(C\) and an automorphism \(\alpha\in\operatorname{Aut}(\Gamma)\) which does not preserve \(R\). Then \(R\notin\operatorname{Imp}(\operatorname{fPol}(\Gamma)^{(1)})\)._ Proof.: Since \(\alpha\) does not preserve \(R\), there exists \(a\in C^{k}\) such that \(R(a)\neq R(\alpha(a))\). If \(R(\alpha(a))>R(a)\), then let \(\omega\in\mathscr{F}_{C}^{(1)}\) be the fractional operation defined by \(\omega(\alpha)=1\). Then \(\omega\) improves every weighted relation of \(\Gamma\) and does not improve \(R\). If \(R(\alpha(a))<R(a)\), then the fractional polymorphism \(\omega\) of \(\Gamma\) given by \(\omega(\alpha^{-1})=1\) does not improve \(R\). Parts of the arguments in the proof of the following lemma can be found in the proof of [43, Lemma 7.2.1]; however, note that the author works with a more restrictive notion of fractional operation, so we cannot cite this result. **Lemma 6.8**.: _For every valued \(\tau\)-structure \(\Gamma\) over a countable domain \(C\) we have_ \[\langle\Gamma\rangle\subseteq\operatorname{Imp}(\operatorname{fPol}(\Gamma)).\] Proof.: Let \(\omega\in\operatorname{fPol}(\Gamma)^{(\ell)}\). By definition, \(\omega\) improves every weighted relation \(R\) of \(\Gamma\). It is clear that \(\omega\) also preserves \(\phi_{\emptyset}\). To see that \(\omega\) preserves \(\phi_{=}\), let \(a^{1},\dots,a^{\ell}\in C^{2}\). 
Note that either \(a^{i}_{1}=a^{i}_{2}\) for every \(i\in\{1,\dots,\ell\}\), in which case \(f(a^{1}_{1},\dots,a^{\ell}_{1})=f(a^{1}_{2},\dots,a^{\ell}_{2})\) for every \(f\in\mathscr{O}_{C}^{(\ell)}\), and hence \[E_{\omega}[f\mapsto\phi_{=}(f(a^{1},\dots,a^{\ell}))]=0=\frac{1}{\ell}\sum_{j =1}^{\ell}\phi_{=}(a^{j}).\] or \(a^{i}_{1}\neq a^{i}_{2}\) for some \(i\in\{1,\dots,\ell\}\), in which case \(\frac{1}{\ell}\sum_{j=1}^{\ell}\phi_{=}(a^{j})=\infty\) and the inequality in (6) is again satisfied. The statement is also clear for weighted relations obtained from weighted relations in \(\Gamma\) by non-negative scaling and addition of constants, since these operations preserve the inequality in (6) by the linearity of expectation. Let \(\phi(x_{1},\dots,x_{k},y_{1},\dots,y_{n})\) be a \(\tau\)-expression. We need to show that the fractional operation \(\omega\) improves the \(k\)-ary weighted relation \(R\) defined for every \(a\in C^{k}\) by \(R(a)=\inf_{b\in C^{n}}\phi^{\Gamma}(a,b)\). Since \(\phi\) is a \(\tau\)-expression, there are \(R_{i}\in\tau\) such that \[\phi(x_{1},\dots,x_{k},y_{1},\dots,y_{n})=\sum_{i=1}^{m}R_{i}(x_{p^{i}_{1}}, \dots,x_{p^{i}_{k_{i}}},y_{q^{i}_{1}},\dots,y_{q^{i}_{n_{i}}})\] for some \(k_{i},n_{i}\in\mathbb{N}\), \(p^{i}_{1},\dots,p^{i}_{k_{i}}\in\{1,\dots,k\}\) and \(q^{i}_{1},\dots,q^{i}_{n_{i}}\in\{1,\dots,n\}\). In this paragraph, if \(v=(v_{1},\dots,v_{t})\in C^{t}\) and \(i_{1},\dots,i_{s}\in\{1,\dots,t\}\), we will write \(v_{i_{1},\dots,i_{s}}\) for the tuple \((v_{i_{1}},\dots,v_{i_{s}})\) for short. Let \(a^{1},\dots,a^{\ell}\in C^{k}\). Let \(\varepsilon>0\) be a rational number. From the definition of an infimum, for every \(j\in\{1,\dots,\ell\}\), there is \(b^{j}\in C^{n}\) such that \[R(a^{j})\leq\phi(a^{j},b^{j})<R(a^{j})+\varepsilon.\] Moreover, for every \(f\in\mathscr{O}_{C}^{(\ell)}\), \[R(f(a^{1},\dots,a^{\ell}))\leq\phi(f(a^{1},\dots,a^{\ell}),f(b^{1},\dots,b^{ \ell})).\] By linearity and monotonicity of expectation, we obtain \[E_{\omega}[f\mapsto R(f(a^{1},\ldots,a^{\ell}))] \leq E_{\omega}[f\mapsto\phi(f(a^{1},\ldots,a^{\ell}),f(b^{1}, \ldots,b^{\ell}))]\] \[=E_{\omega}[f\mapsto\sum_{i=1}^{m}R_{i}((f(a^{1},\ldots,a^{\ell})) _{p^{i}_{1},\ldots,p^{i}_{k_{i}}},(f(b^{1},\ldots,b^{\ell}))_{q^{i}_{1},\ldots, q^{i}_{n_{i}}})]\] \[=\sum_{i=1}^{m}E_{\omega}[f\mapsto R_{i}((f(a^{1},\ldots,a^{\ell}) )_{p^{i}_{1},\ldots,p^{i}_{k_{i}}},(f(b^{1},\ldots,b^{\ell}))_{q^{i}_{1},\ldots, q^{i}_{n_{i}}})].\] Since \(\omega\) improves \(R_{i}\) for every \(i\in\{1,\ldots,m\}\), the last row of the inequality above is at most \[\sum_{i=1}^{m}\frac{1}{\ell}\sum_{j=1}^{\ell}R_{i}(a^{j}_{p^{i}_{ 1},\ldots,p^{i}_{k_{i}}},b^{j}_{q^{i}_{1},\ldots,q^{i}_{n_{i}}}) =\frac{1}{\ell}\sum_{j=1}^{\ell}\sum_{i=1}^{m}R_{i}(a^{j}_{p^{i}_ {1},\ldots,p^{i}_{k_{i}}},b^{j}_{q^{i}_{1},\ldots,q^{i}_{n_{i}}})\] \[=\frac{1}{\ell}\sum_{j=1}^{\ell}\phi(a^{j},b^{j})<\frac{1}{\ell} \sum_{j=1}^{\ell}R(a^{j})+\varepsilon.\] Since \(\varepsilon\) was arbitrary, it follows that \(\omega\) improves \(R\). Finally, we prove that \(\operatorname{Imp}(\operatorname{fPol}(\Gamma))\) is closed under Feas and Opt. Let \(R\in\tau\) be of arity \(k\) and define \(S=\operatorname{Feas}(R)\) and \(T=\operatorname{Opt}(R)\). We aim to show that \(S,T\in\operatorname{Imp}(\operatorname{fPol}(\Gamma))\). Let \(s^{1},\ldots,s^{\ell}\in C^{k}\). 
If \(S(s^{i})=\infty\) for some \(i\in\{1,\ldots,\ell\}\), then \(\frac{1}{\ell}\sum_{j=1}^{\ell}S(s^{j})=\infty\) and hence \(\omega\) satisfies (6) (with \(R\) replaced by \(S\)) for the tuples \(s^{1},\ldots,s^{\ell}\). So suppose that \(S(s^{i})=0\) for all \(i\in\{1,\ldots,\ell\}\), i.e., \(R(s^{i})\) is finite for all \(i\). Since \(\omega\) improves \(R\) it holds that \[E_{\omega}[f\mapsto R(f(s^{1},\ldots,s^{\ell}))]\leq\frac{1}{\ell}\sum_{j=1}^{ \ell}R(s^{j}) \tag{7}\] and hence the expected value on the left-hand side is finite as well. By (21) in Appendix A, \[E_{\omega}[f\mapsto R(f(s^{1},\ldots,s^{\ell}))]=\sum_{t\in C^{k}}R(t)\omega( \mathscr{S}_{s^{1},\ldots,s^{\ell},t}), \tag{8}\] which implies that \(R(t)\) is finite and \(S(t)=0\) unless \(\omega(\mathscr{S}_{s^{1},\ldots,s^{\ell},t})=0\). Consequently (again by (21)), \[E_{\omega}[f\mapsto S(f(s^{1},\ldots,s^{\ell}))]=\sum_{t\in C^{k}}S(t)\omega( \mathscr{S}_{s^{1},\ldots,s^{\ell},t})=0=\frac{1}{\ell}\sum_{j=1}^{\ell}S(s^ {j}).\] It follows that \(\omega\) improves \(S\). Moving to the weighted relation \(T\), we may again assume without loss of generality that \(T(s^{i})=0\) for every \(i\in\{1,\ldots,\ell\}\) as we did for \(S\). This means that \(c:=R(s^{1})=\cdots=R(s^{\ell})\leq R(b)\) for every \(b\in C^{k}\). Therefore, the right-hand side in (7) is equal to \(c\) and by combining it with (8) we get \[\sum_{t\in C^{k}}R(t)\omega(\mathscr{S}_{s^{1},\ldots,s^{\ell},t})\leq c.\] Together with the assumption that \(R(t)\geq c\) for all \(t\in C^{k}\) and \(\omega\) being a probability distribution we obtain that \(R(t)=c\) and \(T(t)=0\) unless \(\omega(\mathscr{S}_{s^{1},\ldots,s^{\ell},t})=0\), and hence \[E_{\omega}[f\mapsto T(f(s^{1},\ldots,s^{\ell}))]=\sum_{t\in C^{k}}T(t)\omega( \mathscr{S}_{s^{1},\ldots,s^{\ell},t})=0=\frac{1}{\ell}\sum_{j=1}^{\ell}T(s^{ j}).\] This concludes the proof that \(\omega\) improves \(T\). **Example 6.9**.: _Let \(<\) be the binary relation on \(\{0,1\}\) and \(\Gamma_{<}\) the valued structure from Example 2.2. By definition, \(\operatorname{Opt}(<)\in\langle\Gamma_{<}\rangle\). Denote the minimum operation on \(\{0,1\}\) by \(\min\) and let \(\omega\) be a binary fractional operation defined by \(\omega(\min)=1\). Note that \(\omega\in\operatorname{fPol}(\{0,1\};\operatorname{Opt}(<))\). However,_ \[<\left(\min\left(\begin{pmatrix}0\\ 1\end{pmatrix},\begin{pmatrix}0\\ 0\end{pmatrix}\right)\right)=\,<(0,0)=1,\] _while \((1/2)\cdot<(0,1)+(1/2)\cdot<(0,0)=1/2\). This shows that \(\omega\) does not improve \(<\) and hence \(<\not\in\langle(\{0,1\};\operatorname{Opt}(<))\rangle\) by Lemma 6.8._ ## 7. Polynomial-time Tractability via Canonical Fractional Polymorphisms In this section we make use of a tractability result for finite-domain VCSPs of Kolmogorov, Krokhin, and Rolinek [32], which itself was building on earlier work of Kolmogorov, Thapper, and Zivny [33, 42]. **Definition 7.1**.: _An operation \(f\colon C^{\ell}\to C\) for \(\ell\geq 2\) is called cyclic if_ \[f(x_{1},\ldots,x_{\ell})=f(x_{2},\ldots,x_{\ell},x_{1})\] _for all \(x_{1},\ldots,x_{\ell}\in C\). Let \(\operatorname{Cyc}_{C}^{(\ell)}\subseteq\mathscr{O}_{C}^{(\ell)}\) be the set of all operations on \(C\) of arity \(\ell\) that are cyclic._ If \(G\) is a permutation group on a set \(C\), then \(\overline{G}\) denotes the closure of \(G\) in the space of functions from \(C\to C\) with respect to the topology of pointwise convergence. 
Note that \(\overline{G}\) might contain some operations that are not surjective, but if \(G=\operatorname{Aut}(\mathfrak{B})\) for some structure \(\mathfrak{B}\), then all operations in \(\overline{G}\) are still embeddings of \(\mathfrak{B}\) into \(\mathfrak{B}\) that preserve all first-order formulas.

**Definition 7.2**.: _Let \(G\) be a permutation group on the set \(C\). An operation \(f\colon C^{\ell}\to C\) is called pseudo cyclic with respect to \(G\) if there are \(e_{1},e_{2}\in\overline{G}\) such that for all \(x_{1},\ldots,x_{\ell}\in C\)_

\[e_{1}(f(x_{1},\ldots,x_{\ell}))=e_{2}(f(x_{2},\ldots,x_{\ell},x_{1})).\]

_Let \(\operatorname{PC}_{G}^{(\ell)}\subseteq\mathscr{O}_{C}^{(\ell)}\) be the set of all operations on \(C\) of arity \(\ell\) that are pseudo cyclic with respect to \(G\)._

Note that \(\operatorname{PC}_{G}^{(\ell)}\in B(\mathscr{O}_{C}^{(\ell)})\). Indeed, the complement can be written as a countable union of sets of the form \(\mathscr{S}_{a^{1},\ldots,a^{\ell},b}\cap\mathscr{S}_{a^{2},\ldots,a^{\ell},a^{1},d}\) where \(b\) and \(d\) lie in different orbits with respect to \(G\).

**Definition 7.3**.: _Let \(G\) be a permutation group with domain \(C\). An operation \(f\colon C^{\ell}\to C\) for \(\ell\geq 2\) is called canonical with respect to \(G\) if for all \(k\in\mathbb{N}\) and \(a^{1},\ldots,a^{\ell}\in C^{k}\) the orbit of the \(k\)-tuple \(f(a^{1},\ldots,a^{\ell})\) only depends on the orbits of \(a^{1},\ldots,a^{\ell}\) with respect to \(G\). Let \(\operatorname{Can}_{G}^{(\ell)}\subseteq\mathscr{O}_{C}^{(\ell)}\) be the set of all operations on \(C\) of arity \(\ell\) that are canonical with respect to \(G\)._

Note that \(\operatorname{Can}_{G}^{(\ell)}\in B(\mathscr{O}_{C}^{(\ell)})\), since the complement is a countable union of sets of the form \(\mathscr{S}_{a^{1},\ldots,a^{\ell},b}\cap\mathscr{S}_{c^{1},\ldots,c^{\ell},d}\) where for all \(i\in\{1,\ldots,\ell\}\) the tuples \(a^{i}\) and \(c^{i}\) lie in the same orbit with respect to \(G\), but \(b\) and \(d\) do not.

**Remark 7.4**.: _Note that if \(h\) is an operation over \(C\) of arity \(\ell\) which is canonical with respect to \(G\), then \(h\) induces for every \(k\in\mathbb{N}\) an operation \(h^{*}\) of arity \(\ell\) on the orbits of \(k\)-tuples of \(G\). Note that if \(h\) is pseudo cyclic with respect to \(G\), then \(h^{*}\) is cyclic._

**Definition 7.5**.: _A fractional operation \(\omega\) is called pseudo cyclic with respect to \(G\) if for every \(A\in B(\mathscr{O}_{C}^{(\ell)})\) we have \(\omega(A)=\omega(A\cap\mathrm{PC}_{G}^{(\ell)})\). Canonicity with respect to \(G\) and cyclicity for fractional operations are defined analogously._

We refer to Section 8.3 for examples of concrete fractional polymorphisms of valued structures \(\Gamma\) that are cyclic and canonical with respect to \(\mathrm{Aut}(\Gamma)\). If the reference to a specific permutation group \(G\) is clear, then we omit the specification 'with respect to \(G\)' for cyclicity and canonicity. We will prove below that canonical pseudo cyclic fractional polymorphisms imply polynomial-time tractability of the corresponding VCSP. We prove this result by reducing to tractable VCSPs over finite domains. Motivated by Theorem 3.4 and the infinite-domain tractability conjecture from [10], we state these results for valued structures related to finitely bounded homogeneous structures.
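Over a finite domain, cyclicity (Definition 7.1) and canonicity (Definition 7.3, for one fixed tuple length \(k\)) can be tested directly from the definitions. The following is a minimal sketch for a permutation group given as an explicit list of permutations; the group and the operation used here are hypothetical toy data, and no attempt is made to be efficient.

```python
from itertools import product

C = [0, 1, 2]
# A hypothetical permutation group on C, listed explicitly: the cyclic group
# generated by the 3-cycle 0 -> 1 -> 2 -> 0.
G = [{0: 0, 1: 1, 2: 2}, {0: 1, 1: 2, 2: 0}, {0: 2, 1: 0, 2: 1}]

def same_orbit(s, t):
    """Do the tuples s and t lie in the same orbit of G (acting componentwise)?"""
    return any(tuple(g[x] for x in s) == tuple(t) for g in G)

def is_cyclic(f, ell):
    """Definition 7.1: f(x_1, ..., x_ell) = f(x_2, ..., x_ell, x_1) everywhere."""
    return all(f(*xs) == f(*(xs[1:] + xs[:1]))
               for xs in product(C, repeat=ell))

def is_canonical(f, ell, k):
    """Definition 7.3, for one fixed k: the orbit of f(a^1, ..., a^ell)
    (applied componentwise to k-tuples) depends only on the orbits of the a^i."""
    for a in product(product(C, repeat=k), repeat=ell):
        for c in product(product(C, repeat=k), repeat=ell):
            if all(same_orbit(a[i], c[i]) for i in range(ell)):
                fa = tuple(f(*column) for column in zip(*a))
                fc = tuple(f(*column) for column in zip(*c))
                if not same_orbit(fa, fc):
                    return False
    return True

f = lambda x, y: (x + y) % 3      # a hypothetical binary operation on C
print(is_cyclic(f, 2), is_canonical(f, 2, 1))   # True True
```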
**Definition 7.6** (\(\Gamma_{m}^{*}\)).: _Let \(\Gamma\) be a valued structure with signature \(\tau\) such that \(\mathrm{Aut}(\Gamma)\) contains the automorphism group of a homogeneous structure \(\mathfrak{B}\) with a finite relational signature. Let \(m\) be at least as large as the maximal arity of the relations of \(\mathfrak{B}\). Let \(\Gamma_{m}^{*}\) be the following valued structure._

* _The domain of_ \(\Gamma_{m}^{*}\) _is the set of orbits of_ \(m\)_-tuples of_ \(\mathrm{Aut}(\Gamma)\)_._
* _For every_ \(R\in\tau\) _of arity_ \(k\leq m\) _the signature of_ \(\Gamma_{m}^{*}\) _contains a unary relation symbol_ \(R^{*}\)_, which denotes in_ \(\Gamma_{m}^{*}\) _the unary weighted relation that returns on the orbit of an_ \(m\)_-tuple_ \(t=(t_{1},\ldots,t_{m})\) _the value of_ \(R^{\Gamma}(t_{1},\ldots,t_{k})\) _(this is well-defined, because the value is the same for all representatives_ \(t\) _of the orbit)._
* _For every_ \(p\in\{1,\ldots,m\}\) _and_ \(i,j\colon\{1,\ldots,p\}\to\{1,\ldots,m\}\) _the signature of_ \(\Gamma_{m}^{*}\) _contains a binary relation symbol_ \(C_{i,j}\) _which returns_ \(0\) _for two orbits of_ \(m\)_-tuples_ \(O_{1}\) _and_ \(O_{2}\) _if for every_ \(s\in O_{1}\) _and_ \(t\in O_{2}\) _we have that_ \((s_{i(1)},\ldots,s_{i(p)})\) _and_ \((t_{j(1)},\ldots,t_{j(p)})\) _lie in the same orbit of_ \(p\)_-tuples of_ \(\mathrm{Aut}(\Gamma)\)_, and returns_ \(\infty\) _otherwise._

Note that \(\mathrm{Aut}(\mathfrak{B})\) and hence \(\mathrm{Aut}(\Gamma)\) has finitely many orbits of \(k\)-tuples for every \(k\in\mathbb{N}\) and therefore \(\Gamma_{m}^{*}\) has a finite domain. The following generalises a known reduction for CSPs from [8].

**Theorem 7.7**.: _Let \(\Gamma\) be a valued structure such that \(\mathrm{Aut}(\Gamma)\) equals the automorphism group of a finitely bounded homogeneous structure \(\mathfrak{B}\). Let \(r\) be the maximal arity of the relations of \(\mathfrak{B}\) and the weighted relations in \(\Gamma\), let \(v\) be the maximal number of variables that appear in a single conjunct of the universal sentence \(\psi\) that describes the age of \(\mathfrak{B}\), and let \(m\geq\max(r+1,v,3)\). Then there is a polynomial-time reduction from \(\mathrm{VCSP}(\Gamma)\) to \(\mathrm{VCSP}(\Gamma_{m}^{*})\)._

Proof.: Let \(\tau\) be the signature of \(\Gamma\) and \(\tau^{*}\) be the signature of \(\Gamma_{m}^{*}\). Let \(\phi\) be an instance of \(\mathrm{VCSP}(\Gamma)\) with threshold \(u\) and let \(V\) be the variables of \(\phi\). Create a variable \(y(\bar{x})\) for every \(\bar{x}=(x_{1},\ldots,x_{m})\in V^{m}\). For every summand \(R(x_{1},\ldots,x_{k})\) of \(\phi\) we create a summand \(R^{*}(y(x_{1},\ldots,x_{k},\ldots,x_{k}))\); this makes sense since \(m\geq r\). For every \(\bar{x},\bar{x}^{\prime}\in V^{m}\), \(p\in\{1,\ldots,m\}\), and \(i,j\colon\{1,\ldots,p\}\to\{1,\ldots,m\}\), add the summand \(C_{i,j}(y(\bar{x}),y(\bar{x}^{\prime}))\) if \((x_{i(1)},\ldots,x_{i(p)})=(x^{\prime}_{j(1)},\ldots,x^{\prime}_{j(p)})\); we will refer to these as _compatibility constraints_. Let \(\phi^{*}\) be the resulting \(\tau^{*}\)-expression. Clearly, \(\phi^{*}\) can be computed from \(\phi\) in polynomial time. Suppose first that \((\phi,u)\) has a solution; it will be notationally convenient to view the solution as a function \(f\) from the variables of \(\phi\) to the elements of \(\Gamma\) (rather than a tuple). We claim that the map \(f^{*}\) which maps \(y(\bar{x})\) to the orbit of \(f(\bar{x})\) in \(\mathrm{Aut}(\Gamma)\) is a solution for \((\phi^{*},u)\).
And indeed, each of the summands involving a symbol \(C_{i,j}\) evaluates to \(0\), and \((\phi^{*})^{\Gamma_{m}^{*}}\) equals \(\phi^{\Gamma}\). Now suppose that \((\phi^{*},u)\) has a solution \(f^{*}\). To construct a solution \(f\) to \((\phi,u)\), we first define an equivalence relation \(\sim\) on \(V\). For \(x_{1},x_{2}\in V\), define \(x_{1}\sim x_{2}\) if a (equivalently: every) tuple \(t\) in \(f^{*}(y(x_{1},x_{2},\ldots,x_{2}))\) satisfies \(t_{1}=t_{2}\). Clearly, \(\sim\) is reflexive and symmetric. To verify that \(\sim\) is transitive, suppose that \(x_{1}\sim x_{2}\) and \(x_{2}\sim x_{3}\). In the following we use that \(m\geq 3\). Let \(i\) be the identity map on \(\{1,2\}\), let \(j\colon\{1,2\}\to\{2,3\}\) be given by \(x\mapsto x+1\), and let \(j^{\prime}\colon\{1,2\}\to\{1,3\}\) be given by \(j^{\prime}(1)=1\) and \(j^{\prime}(2)=3\). Then \(\phi^{*}\) contains the conjuncts \[C_{i,i}(y(x_{1},x_{2},x_{2},\ldots,x_{2}),y(x_{1},x_{2},x_{3}, \ldots,x_{3})),\] \[C_{i,j}(y(x_{2},x_{3},x_{3},\ldots,x_{3}),y(x_{1},x_{2},x_{3}, \ldots,x_{3})),\] \[C_{i,j^{\prime}}(y(x_{1},x_{3},x_{3},\ldots,x_{3}),y(x_{1},x_{2},x_{3},\ldots,x_{3})).\] Let \(t\) be a tuple from \(f^{*}(y(x_{1},x_{2},x_{3},\ldots,x_{3}))\). Then it follows from the conjuncts with the relation symbols \(C_{i,i}\) and \(C_{i,j}\) that \(t_{1}=t_{2}\) and \(t_{2}=t_{3}\), and therefore \(t_{1}=t_{3}\). Thus we obtain from the conjunct with \(C_{i,j^{\prime}}\) that \(x_{1}\sim x_{3}\). **Claim 0.** For all equivalence classes \([x_{1}]_{\sim},\ldots,[x_{m}]_{\sim}\), \(t\in f^{*}(y(x_{1},\ldots,x_{m}))\), \(S\in\sigma\) of arity \(k\), and \(j\colon\{1,\ldots,k\}\to\{1,\ldots,m\}\), whether \(\mathfrak{B}\models S(t_{j(1)},\ldots,t_{j(k)})\) does not depend on the choice of the representatives \(x_{1},\ldots,x_{m}\). It suffices to show this statement if we choose another representative \(x^{\prime}_{i}\) for \([x_{i}]_{\sim}\) for some \(i\in\{1,\ldots,m\}\), because the general case then follows by induction. Suppose that for every \(t\in f^{*}(y(x_{1},\ldots,x_{m}))\) we have \(\mathfrak{B}\models S(t_{j(1)},\ldots,t_{j(k)})\); we have to show that for every \(t^{\prime}\in f^{*}(y(x_{1},\ldots,x_{i-1},x^{\prime}_{i},x_{i+1},\ldots,x_{m }))\) we have \(\mathfrak{B}\models S(t^{\prime}_{j(1)},\ldots,t^{\prime}_{j(k)})\). If \(i\notin\{j(1),\ldots,j(k)\}\), then \(\phi^{*}\) contains \[C_{j,j}(y(x_{1},\ldots,x_{m}),y(x_{1},\ldots,x_{i-1},x^{\prime}_{i},x_{i+1}, \ldots,x_{m}))\] and hence \(\mathfrak{B}\models S(t^{\prime}_{j(1)},\ldots,t^{\prime}_{j(k)})\). Now suppose that \(i\in\{j(1),\ldots,j(k)\}\); for the sake of notation we suppose that \(i=j(1)\). By the definition of \(\sim\), and since \(x_{j(1)}\sim x^{\prime}_{j(1)}\), every tuple \(t^{\prime\prime}\in f^{*}(y(x_{j(1)},x^{\prime}_{j(1)},\ldots,x^{\prime}_{j(1) }))\) satisfies \(t^{\prime\prime}_{1}=t^{\prime\prime}_{2}\). Let \(\tilde{t}\) be a tuple from \[f^{*}(y(x_{j(1)},\ldots,x_{j(k)},x^{\prime}_{j(1)},\ldots,x^{\prime}_{j(1)})).\] (Here we use that \(m\geq r+1\).) 
* \(\mathfrak{B}\models S(\tilde{t}_{1},\ldots,\tilde{t}_{k})\) because of a compatibility constraint between \(y(x_{1},\ldots,x_{m})\) and \(y(x_{j(1)},\)\(\ldots,x_{j(k)},x^{\prime}_{j(1)},\ldots,x^{\prime}_{j(1)})\) in \(\phi^{*}\); * \(\tilde{t}_{1}=\tilde{t}_{k+1}\) because of a compatibility constraint between \(y(x_{j(1)},\ldots,x_{j(k)},x^{\prime}_{j(1)},\ldots,x^{\prime}_{j(1)})\) and \(y(x_{j(1)},x^{\prime}_{j(1)},\ldots,x^{\prime}_{j(1)})\) and \(x_{j(1)}\sim x^{\prime}_{j(1)}\) in \(\phi^{*}\); * hence, \(\mathfrak{B}\models S(\tilde{t}_{k+1},\tilde{t}_{2},\ldots,\tilde{t}_{k})\); * \(\mathfrak{B}\models S(\tilde{t}^{\prime}_{j(1)},t^{\prime}_{j(2)},\ldots,t^{ \prime}_{j(k)})\) because of a compatibility constraint between the variables \(y(x_{j(1)},\ldots,x_{j(k)},\)\(x^{\prime}_{j(1)},\ldots,x^{\prime}_{j(1)})\) and \(y(x_{1},\ldots,x_{i-1},x^{\prime}_{i},x_{i+1},\ldots,x_{m})\) in \(\phi^{*}\): namely, if \(j^{\prime}\colon\{1,\ldots,k\}\to\{1,\ldots,m\}\) is the map that coincides with the identity map except that \(j^{\prime}(1):=k+1\), then \(\phi^{*}\) contains \[C_{j,j^{\prime}}\big{(}y(x_{1},\ldots,x_{i-1},x^{\prime}_{i},x_{i+1},\ldots,x_ {m}),y(x_{j(1)},\ldots,x_{j(k)},x^{\prime}_{j(1)},\ldots,x^{\prime}_{j(1)}) \big{)}.\] This concludes the proof of Claim 0. Now we can define a structure \(\mathfrak{C}\) in the signature \(\sigma\) of \(\mathfrak{B}\) on the equivalence classes of \(\sim\). If \(S\in\sigma\) has arity \(k\), \(j_{1},\ldots,j_{k}\in\{1,\ldots,m\}\), and \([x_{1}]_{\sim},\ldots,[x_{m}]_{\sim}\) are equivalence classes of \(\sim\) such that the tuples \(t\) in \(f^{*}(y(x_{1},\ldots,x_{m}))\) satisfy \(S^{\mathfrak{B}}(t_{j_{1}},\ldots,t_{j_{k}})\) for some representatives \(x_{1},\ldots x_{m}\) (equivalently, for all representatives, by Claim 0), then add \(([x_{j_{1}}]_{\sim},\ldots,[x_{j_{k}}]_{\sim})\) to \(S^{\mathfrak{C}}\). No other tuples are contained in the relations of \(\mathfrak{C}\). **Claim 1.** If \([x_{1}]_{\sim},\ldots,[x_{m}]_{\sim}\) are equivalence classes of \(\sim\), and \(t\in f^{*}(y(x_{1},\ldots,x_{m}))\), then \([x_{i}]_{\sim}\mapsto t_{i}\), for \(i\in\{1,\ldots,m\}\), is an isomorphism between a substructure of \(\mathfrak{C}\) and a substructure of \(\mathfrak{B}\) for any choice of representatives \(x_{1},\ldots,x_{m}\). First note that \([x_{i}]_{\sim}=[x_{j}]_{\sim}\) if and only if \(t_{i}=t_{j}\), so the map is well-defined and bijective. Let \(S\in\sigma\) be of arity \(k\) and \(j\colon\{1,\ldots,k\}\to\{1,\ldots,m\}\). If \(\mathfrak{B}\models S(t_{j(1)},\ldots,t_{j(k)})\), then \(\mathfrak{C}\models S([x_{j(1)}]_{\sim},\ldots,[x_{j(k)}]_{\sim})\) by the definition of \(\mathfrak{C}\). Conversely, suppose that \(\mathfrak{C}\models S([x_{j(1)}]_{\sim},\ldots[x_{j(k)}]_{\sim})\). By Claim 0 and the definition of \(\mathfrak{C}\), there is \(t^{\prime}\in f^{*}(y(x_{1},\ldots,x_{m}))\) such that \(\mathfrak{B}\models S(t^{\prime}_{j(1)},\ldots,t^{\prime}_{j(k)})\). Since \(f^{*}(y(x_{1},\ldots,x_{m}))\) is an orbit of \(\operatorname{Aut}(\mathfrak{B})\), we have \(\mathfrak{B}\models S(t_{j(1)},\ldots,t_{j(k)})\) as well. **Claim 2.**\(\mathfrak{C}\) embeds into \(\mathfrak{B}\). It suffices to verify that \(\mathfrak{C}\) satisfies each conjunct of the universal sentence \(\psi\). Let \(\psi^{\prime}(x_{1},\ldots,x_{q})\) be such a conjunct, and let \([c_{1}]_{\sim},\ldots,[c_{q}]_{\sim}\) be elements of \(\mathfrak{C}\). 
Let \(t\) be a tuple from the orbit \(f^{*}(y(c_{1},\ldots,c_{q},\ldots,c_{q}))\) of \(\operatorname{Aut}(\Gamma)\); this makes sense since \(m\geq v\). Since \(t_{1},\ldots,t_{q}\) are elements of \(\mathfrak{B}\), the tuple \((t_{1},\ldots,t_{q})\) satisfies \(\psi^{\prime}\). Claim 1 then implies that \(([c_{1}]_{\sim},\ldots,[c_{q}]_{\sim})\) satisfies \(\psi^{\prime}\). Let \(e\) be an embedding of \(\mathfrak{C}\) to \(\mathfrak{B}\). For every \(x\in V\), define \(f(x)=e([x]_{\sim})\). Note that for every summand \(R(x_{1},\ldots,x_{k})\) in \(\phi\) and \(t\in f^{*}(y(x_{1},\ldots,x_{k},\ldots,x_{k}))\), we have \[R^{*}(f^{*}(y(x_{1},\ldots,x_{k},\ldots,x_{k})))=R(t_{1},\ldots,t_{k})=R(e([x _{1}]_{\sim}),\ldots,e([x_{k}]_{\sim}))=R(f(x_{1}),\ldots,f(x_{k})),\] where the middle equality follows from \(t_{i}\mapsto e([x_{i}]_{\sim})\) being a partial isomorphism of \(\mathfrak{B}\) by Claim 1 and 2, which by the homogeneity of \(\mathfrak{B}\) extends to an automorphism of \(\mathfrak{B}\) and therefore also an automorphism of \(\Gamma\). Since \(f^{*}\) is a solution to \((\phi^{*},u)\), it follows from the construction of \(\phi^{*}\) that \(f\) is a solution to \((\phi,u)\). Let \(G\) be a permutation group that contains the automorphism group of a homogeneous structure with a finite relational signature of maximal arity \(k\). A fractional operation \(\omega\) over the domain \(C\) of \(\Gamma\) of arity \(\ell\) which is canonical with respect to \(G\) induces a fractional operation \(\omega^{*}\) on the orbits of \(k\)-tuples of \(G\), given by \[\omega^{*}(A):=\omega\big{(}\{f\in\operatorname{Can}_{G}^{(\ell)}\mid f^{*} \in A\}\big{)},\] for every subset \(A\) of the set of operations of arity \(\ell\) on the finite domain of \(\Gamma^{*}_{k}\) (all such subsets are Borel). Note that \(\{f\in\operatorname{Can}_{G}^{(\ell)}\mid f^{*}\in A\}\) is a measurable subset of \(\mathscr{O}_{C}^{(\ell)}\). Also note that if \(\omega\) is pseudo cyclic, then \(\omega^{*}\) is cyclic. Statements about the fractional polymorphisms of \(\Gamma^{*}_{m}\) lift back to statements about the fractional polymorphisms of \(\Gamma\) via the following useful lemma. **Lemma 7.8**.: _Let \(\Gamma\) be a valued structure such that \(\operatorname{Aut}(\Gamma)\) equals the automorphism group \(G\) of a finitely bounded homogeneous structure and let \(m\) be as in Theorem 7.7. Let \(\nu\in\operatorname{fPol}(\Gamma^{*}_{m})\) be cyclic. Then there exists \(\omega\in\operatorname{fPol}(\Gamma)\) which is canonical with respect to \(G\) such that \(\omega^{*}=\nu\)._ Proof.: Let \(C\) be the domain of \(\Gamma\), let \(D\) be the domain of \(\Gamma^{*}_{m}\), and let \(\ell\) be the arity of \(\nu\). Suppose that \(\nu(f)>0\) for some operation \(f\). Then there exists a function \(g\colon C^{\ell}\to C\) which is canonical with respect to \(G\) such that \(g^{*}=f\) by Lemma 4.9 in [8] (also see Lemma 10.5.12 in [5]). For every such \(f\), choose \(g\) such that \(g^{*}=f\) and define \(\omega(g):=\nu(f)\) and \(\omega(h):=0\) for all other \(h\in\mathscr{O}_{C}^{(\ell)}\). Since the domain of \(\Gamma^{*}_{m}\) is finite, this correctly defines a fractional operation \(\omega\) of the same arity \(\ell\) as \(\nu\). 
It also improves every weighted relation \(R\) of \(\Gamma\): if \(R\) has arity \(k\), and \(a^{1},\ldots,a^{\ell}\in C^{k}\), then \[E_{\omega}[g\mapsto R(g(a^{1},\ldots,a^{\ell}))] =\sum_{g\in\mathcal{O}_{C}^{(\ell)}}\omega(g)R(g(a^{1},\ldots,a^{ \ell}))\] \[=\sum_{f\in\mathcal{O}_{D}^{(\ell)}}\nu(f)R^{*}(f(a^{1},\ldots,a^ {\ell})_{1},\ldots,f(a^{1},\ldots,a^{\ell})_{k},\ldots,f(a^{1},\ldots,a^{\ell} )_{k})\] \[\leq\frac{1}{\ell}\sum_{j=1}^{\ell}R^{*}(a^{j}_{1},\ldots,a^{j}_{ k},\ldots,a^{j}_{k})\] \[=\frac{1}{\ell}\sum_{j=1}^{\ell}R(a^{j}_{1},\ldots,a^{j}_{k}).\qed\] **Lemma 7.9**.: _Let \(G\) be the automorphism group of a homogeneous structure \(\mathfrak{B}\) with a relational signature of maximal arity \(m\). If \(\omega\in\mathscr{F}_{C}^{(\ell)}\) is canonical with respect to \(G\) such that \(\omega^{*}\) (defined on the orbits of \(m\)-tuples of \(G\)) is cyclic, then \(\omega\) is pseudo cyclic with respect to \(G\)._ Proof.: We use the fact that if \(f\) is canonical with respect to \(G\) such that \(f^{*}\) (defined on the orbits of \(m\)-tuples) is cyclic, then \(f\) is pseudo cyclic (see the proof of Proposition 6.6 in [10]; also see Lemma 10.1.5 in [5]). Let \(C\) be the domain of \(\Gamma\) and let \(a^{1},\ldots,a^{\ell},b\in C^{m}\). It suffices to show that \(\omega(S_{a^{1},\ldots,a^{\ell},b}\cap\mathrm{PC}_{G}^{(\ell)})=\omega(S_{a^{ 1},\ldots,a^{\ell},b}).\) Indeed, \[\omega(S_{a^{1},\ldots,a^{\ell},b}) =\omega(S_{a^{1},\ldots,a^{\ell},b}\cap\mathrm{Can}_{G}^{(\ell)}) \text{(canonicity of $\omega$)}\] \[=\omega^{*}\big{(}\{f^{*}\mid f\in S_{a^{1},\ldots,a^{\ell},b} \cap\mathrm{Can}_{G}^{(\ell)}\}\big{)} \text{(definition of $\omega^{*}$)}\] \[=\omega^{*}\big{(}\{f^{*}\mid f\in S_{a^{1},\ldots,a^{\ell},b} \cap\mathrm{Can}_{G}^{(\ell)}\}\cap\mathrm{Cyc}_{C}^{(\ell)}\,\big{)} \text{(by assumption)}\] \[=\omega^{*}\big{(}\{f^{*}\mid f\in S_{a^{1},\ldots,a^{\ell},b} \cap\mathrm{Can}_{G}^{(\ell)}\cap\mathrm{PC}_{G}^{(\ell)}\}\big{)} \text{(fact mentioned above and Remark~{}\ref{lem:f-a-1})}\] \[=\omega(S_{a^{1},\ldots,a^{\ell},b}\cap\mathrm{Can}_{G}^{(\ell)} \cap\mathrm{PC}_{G}^{(\ell)})\] \[=\omega(S_{a^{1},\ldots,a^{\ell},b}\cap\mathrm{PC}_{G}^{(\ell)}). \qed\] ### Fractional Polymorphisms on Finite Domains For studying canonical operations, we can use known results about operations on finite domains. **Definition 7.10**.: _Let \(\omega\) be a fractional operation of arity \(\ell\) on a finite domain \(C\). Then the support of \(\omega\) is the set_ \[\mathrm{Supp}(\omega)=\{f\in\mathcal{O}_{C}^{(\ell)}\mid\omega(f)>0\}.\] _If \(\mathscr{F}\) is a set of fractional operations, then_ \[\mathrm{Supp}(\mathscr{F}):=\bigcup_{\omega\in\mathscr{F}}\mathrm{Supp}( \omega).\] Note that, a fractional operation \(\omega\) on a finite domain is determined by the values \(\omega(f)\), \(f\in\mathrm{Supp}(\omega)\), in contrast to fractional operations on infinite domains. Moreover, a fractional polymorphism \(\omega\) of a valued structure with a finite domain is cyclic if and only if all operations in its support are cyclic, in accordance to the definitions from [34]. An operation \(f\colon C^{4}\to C\) is called _Siggers_ if \(f(a,r,e,a)=f(r,a,r,e)\) for all \(a,r,e\in C\). 
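To make these finite-domain notions concrete, the following small Python sketch (purely illustrative; the function names and the encoding of operations as Python callables are our own) checks whether a given operation on a finite domain is cyclic and whether a 4-ary operation satisfies the Siggers identity.

```python
from itertools import product

def is_cyclic(f, domain, arity):
    """f(x_1,...,x_l) == f(x_2,...,x_l,x_1) for all tuples over the domain."""
    return all(f(*xs) == f(*(xs[1:] + xs[:1]))
               for xs in product(domain, repeat=arity))

def is_siggers(f, domain):
    """f(a,r,e,a) == f(r,a,r,e) for all a, r, e in the domain."""
    return all(f(a, r, e, a) == f(r, a, r, e)
               for a, r, e in product(domain, repeat=3))

# On the Boolean domain, max is cyclic (in any arity) and, read as a
# 4-ary operation, satisfies the Siggers identity.
dom = [0, 1]
print(is_cyclic(lambda x, y: max(x, y), dom, 2))            # True
print(is_siggers(lambda a, r, e, d: max(a, r, e, d), dom))  # True
```

Such exhaustive checks are only feasible over finite domains, which is precisely the setting of this subsection.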
**Lemma 7.11**.: _Let \(\Gamma\) and \(\Delta\) be valued structures with finite domains that are fractionally homomorphically equivalent._ * _If_ \(\Gamma\) _has a cyclic fractional polymorphism, then_ \(\Delta\) _has a cyclic fractional polymorphism of the same arity._ * _If_ \(\operatorname{Supp}(\operatorname{fPol}(\Gamma))\) _contains a cyclic operation, then_ \(\operatorname{Supp}(\operatorname{fPol}(\Delta))\) _contains a cyclic operation of the same arity._ Proof.: Let \(C\) be the domain of \(\Gamma\) and let \(D\) be the domain of \(\Delta\). Let \(\nu_{1}\) be a fractional homomorphism from \(\Gamma\) to \(\Delta\), and let \(\nu_{2}\) be a fractional homomorphism from \(\Delta\) to \(\Gamma\). Define \(\nu_{2}^{\prime}\) as the fractional homomorphism from \(\Delta^{\ell}\) to \(\Gamma^{\ell}\) as follows. If \(f\colon D\to C\), then \(f^{\prime}\) denotes the map from \(D^{\ell}\) to \(C^{\ell}\) given by \((c_{1},\dots,c_{\ell})\mapsto(f(c_{1}),\dots,f(c_{\ell}))\). Define \(\nu_{2}^{\prime}(f^{\prime}):=\nu_{2}(f)\) and \(\nu_{2}^{\prime}(h)=0\) for all other \(h\colon D^{\ell}\to C^{\ell}\); since \(C\) and \(D\) are finite, this defines a fractional operation. Suppose that \(\omega\) is a fractional polymorphism of \(\Gamma\) of arity \(\ell\). Then \(\omega^{\prime}:=\nu_{1}\circ\omega\circ\nu_{2}^{\prime}\) is a fractional homomorphism from \(\Delta^{\ell}\) to \(\Delta\) (see Lemma 5.6), and hence a fractional polymorphism of \(\Delta\) (see Remark 6.4). Note that if \(\omega\) is cyclic, then \(\omega^{\prime}\) is cyclic; this shows that first statement of the lemma. Next, suppose that there exists \(\omega\in\operatorname{fPol}^{(\ell)}(\Gamma)\) such that \(\operatorname{Supp}(\omega)\) contains a cyclic operation \(g\) of arity \(\ell\). Since the domain \(C\) of \(\Gamma\) is finite, there exists a function \(f_{1}\colon C\to D\) such that \(\nu_{1}(f_{1})>0\) and a function \(f_{2}\colon D\to C\) such that \(\nu_{2}(f_{2})>0\). Note that \(f_{1}\circ g\circ f_{2}^{\prime}\colon D^{\ell}\to D\) is cyclic since \(g\) is cyclic, and that \(\omega^{\prime}(f_{1}\circ g\circ f_{2}^{\prime})>0\). The following definition is taken from [34]. **Definition 7.12** (core).: _A valued structure \(\Gamma\) over a finite domain is called a core if all operations in \(\operatorname{Supp}(\operatorname{fPol}(\Gamma))^{(1)}\) are injective._ We have been unable to find an explicit reference for the following proposition, but it should be considered to be known; we also present a proof as a guide to the literature. **Proposition 7.13**.: _Let \(\Gamma\) be a valued structure with a finite domain. Then there exists a core valued structure \(\Delta\) over a finite domain which is fractionally homomorphically equivalent to \(\Gamma\)._ Proof.: Let \(C\) be the domain of \(\Gamma\). If \(\Gamma\) itself is a core then there is nothing to be shown, so we may assume that there exists a non-injective \(f\in\operatorname{Supp}(\operatorname{fPol}^{(1)}(\Gamma))\). Since \(C\) is finite, we have that \(D:=f(C)\neq C\); let \(\Delta\) be the valued structure with domain \(D\) and the same signature as \(\Gamma\) whose weighted relations are obtained from the corresponding weighted relations of \(\Gamma\) by restriction to \(D\). It then follows from Lemma 15 in [34] in combination with Remark 5.8 that \(\Gamma\) and \(\Delta\) are fractionally homomorphically equivalent. 
After applying this process finitely many times, we obtain a core valued structure that is fractionally homomorphically equivalent to \(\Gamma\). The following lemma is a variation of Proposition 39 from [34], which is phrased there only for valued structures \(\Gamma\) that are cores and for idempotent cyclic operations. **Lemma 7.14**.: _Let \(\Gamma\) be a valued structure over a finite domain. Then \(\Gamma\) has a cyclic fractional polymorphism if and only if \(\operatorname{Supp}(\operatorname{fPol}(\Gamma))\) contains a cyclic operation._ Proof.: The forward implication is trivial. We prove the reverse implication. Let \(\Delta\) be a core valued structure over a finite domain that is homomorphically equivalent to \(\Gamma\), which exists by Proposition 7.13. By Lemma 7.11, \(\operatorname{Supp}(\operatorname{fPol}(\Delta))\) contains a cyclic operation. Then \(\operatorname{Supp}(\operatorname{fPol}(\Delta))\) contains even an idempotent cyclic operation: If \(c\in\operatorname{Supp}(\operatorname{fPol}(\Delta))\) is cyclic, then the operation \(c_{0}\colon x\mapsto c(x,\dots,x)\) is in \(\operatorname{Supp}(\operatorname{fPol}(\Delta))\) as well. Since \(\Delta\) is a finite core, \(c_{0}\) is bijective and therefore \(c_{0}^{-1}\) (which is just a finite power of \(c_{0}\)) and the idempotent cyclic operation \(c_{0}^{-1}\circ c\) lie in \(\operatorname{Supp}(\operatorname{fPol}(\Delta))\). By Proposition 39 in [34], \(\Delta\) has a cyclic fractional polymorphism and by Lemma 7.11, \(\Gamma\) also has one. The following outstanding result classifies the computational complexity of VCSPs for valued structures over finite domains; it does not appear in this form in the literature, but we explain how to derive it from results in [11, 32, 34, 45, 46]. In the proof, if \(\mathfrak{C}\) is a finite relational structure (understood also as a valued structure), we denote by \(\operatorname{Pol}(\mathfrak{C})\) the set \(\operatorname{Supp}(\operatorname{fPol}(\mathfrak{C}))\); this notation is consistent with the literature since the set \(\operatorname{Supp}(\operatorname{fPol}(\mathfrak{C}))\) concides with the set of polymorphisms of a relational structure. **Theorem 7.15**.: _Let \(\Gamma\) be a valued structure with a finite signature and a finite domain. If \((\{0,1\};\operatorname{OIT})\) does not have a pp-construction in \(\Gamma\), then \(\Gamma\) has a fractional cyclic polymorphism, and \(\operatorname{VCSP}(\Gamma)\) is in P, and it is NP-hard otherwise._ Proof.: If \((\{0,1\};\operatorname{OIT})\) has a pp-construction in \(\Gamma\), then the NP-hardness of \(\operatorname{VCSP}(\Gamma)\) follows from Corollary 5.12. So assume that \((\{0,1\};\operatorname{OIT})\) does not have a pp-construction in \(\Gamma\). Let \(\mathfrak{C}\) be a classical relational structure on the same domain as \(\Gamma\) such that \(\operatorname{Pol}(\mathfrak{C})=\operatorname{Supp}(\operatorname{fPol}( \Gamma))\); it exists since \(\operatorname{Supp}(\operatorname{fPol}(\Gamma))\) contains projections by Remark 6.5 and is closed under composition by Lemma 5.6 and Remark 6.4. Note that therefore \(\operatorname{fPol}(\Gamma)\subseteq\operatorname{fPol}(\mathfrak{C})\) and since \(\Gamma\) has a finite domain, [22, Theorem 3.3] implies that every relation of \(\mathfrak{C}\) lies in \(\langle\Gamma\rangle\). 
Since \(\Gamma\) does not pp-construct \((\{0,1\};\operatorname{OIT})\), neither does \(\mathfrak{C}\), and in particular, \(\mathfrak{C}\) does not pp-construct \((\{0,1\};\operatorname{OIT})\) in the classical relational setting (see [3, Definition 3.4, Corollary 3.10]). Combining Theorems 1.4 and 1.8 from [3], \(\operatorname{Pol}(\mathfrak{C})\) contains a cyclic operation. Since \(\operatorname{Supp}(\operatorname{fPol}(\Gamma))\) contains a cyclic operation, by Lemma 7.14, \(\Gamma\) has a cyclic fractional polymorphism. Then Kolmogorov, Rolinek, and Krokhin [32] prove that in this case \(\operatorname{CSP}(\Gamma)\) can be reduced to a finite-domain CSP with a cyclic polymorphism; such CSPs were shown to be in P by Bulatov [11] and, independently, by Zhuk [46]. The problem of deciding for a given valued structure \(\Gamma\) with finite domain and finite signature whether \(\Gamma\) satisfies the condition given in the previous theorem can be solved in exponential time [31]. We now state consequences of this result for certain valued structures with an infinite domain. **Proposition 7.16**.: _Let \(\mathfrak{B}\) be a finitely bounded homogeneous structure and let \(\Gamma\) be a valued structure with finite relational signature such that \(\operatorname{Aut}(\Gamma)=\operatorname{Aut}(\mathfrak{B})\). Let \(m\) be as in Theorem 7.7. Then the following are equivalent._ 1. \(\operatorname{fPol}(\Gamma)\) _contains a fractional operation which is canonical and pseudo cyclic with respect to_ \(\operatorname{Aut}(\mathfrak{B})\)_;_ 2. \(\operatorname{fPol}(\Gamma_{m}^{*})\) _contains a cyclic fractional operation;_ 3. \(\operatorname{Supp}(\operatorname{fPol}(\Gamma_{m}^{*}))\) _contains a cyclic operation._ 4. \(\operatorname{Supp}(\operatorname{fPol}(\Gamma_{m}^{*}))\) _contains a Siggers operation._ Proof.: We first prove the implication from (1) to (2). If \(\omega\) is a fractional polymorphism of \(\Gamma\), then \(\omega^{*}\) is a fractional polymorphism of \(\Gamma_{m}^{*}\): the fractional operation \(\omega^{*}\) improves \(R^{*}\) because \(\omega\) improves \(R\), and \(\omega^{*}\) improves \(C_{i,j}\) for all \(i,j\) because \(\omega\) is canonical with respect to \(G\). Finally, if \(\omega\) is pseudo cyclic with respect to \(G\), then \(\omega^{*}\) is cyclic. The implication from (2) to (1) is a consequence of Lemma 7.8 and Lemma 7.9. The equivalence of (2) and (3) follows from Lemma 7.14. The equivalence of (3) and (4) is proved in [5, Theorem 6.9.2]; the proof is based on [2, Theorem 4.1]. Note that item (4) in the previous proposition can be decided algorithmically for a given valued structure \(\Gamma_{m}^{*}\) (which has a finite domain and finite signature). **Theorem 7.17**.: _If the conditions from Proposition 7.16 hold, then \(\operatorname{VCSP}(\Gamma)\) is in P._ Proof.: If \(\Gamma_{m}^{*}\) has a cyclic fractional polymorphism of arity \(\ell\geq 2\), then the polynomial-time tractability of \(\operatorname{VCSP}(\Gamma_{m}^{*})\) follows from Theorem 7.15. For \(m\) large enough, we may apply Theorem 7.7 and obtain a polynomial-time reduction from \(\operatorname{VCSP}(\Gamma)\) to \(\operatorname{VCSP}(\Gamma_{m}^{*})\), which concludes the proof. ## 8. Application: Resilience A research topic that has been studied in database theory is the computational complexity of the so-called _resilience problem_[20, 21, 37]. We formulate it here for the case of conjunctive queries and, more generally, for unions of conjunctive queries. 
We generally work with Boolean queries, i.e., queries without free variables. Our results, however, can be extended also to the non-Boolean case. A _conjunctive query_ is a primitive positive \(\tau\)-sentence and a _union of conjunctive queries_ is a (finite) disjunction of conjunctive queries. Note that every existential positive sentence can be written as a union of conjunctive queries. Let \(\tau\) be a finite relational signature and \(\mu\) a conjunctive query over \(\tau\). The input to the _resilience problem for_ \(\mu\) consists of a finite \(\tau\)-structure \(\mathfrak{A}\), called a _database_1, and the task is to compute the number of tuples that have to be removed from relations of \(\mathfrak{A}\) so that \(\mathfrak{A}\) does _not_ satisfy \(\mu\). We call this number the _resilience_ of \(\mathfrak{A}\) (with respect to \(\mu\)). As usual, this can be turned into a decision problem where the input also contains a natural number \(u\in\mathbb{N}\) and the question is whether the resilience is at most \(u\). Clearly, \(\mathfrak{A}\) does not satisfy \(\mu\) if and only if its resilience equals \(0\). The computational complexity of this problem depends on \(\mu\); various cases that can be solved in polynomial time, as well as cases that are NP-hard, have been described in [20, 21, 37]. A general classification, however, is open. Footnote 1: To be precise, a finite relational structure is not exactly the same as a database because the latter may not contain elements that are not contained in any relation. This difference, however, is inessential for the problems studied in this paper. A natural variation of the problem is that the input database is a _bag database_, meaning that it may contain tuples with _multiplicities_, i.e., the same tuple may have multiple occurrences in the same relation. Formally, a bag database is a valued structure with all weights (which represent multiplicities) taken from \(\mathbb{N}\). Resilience on bag databases was introduced by Makhija and Gatterbauer [37], who also present a conjunctive query for which the resilience problem with multiplicities is NP-hard whereas the resilience problem without multiplicities is in P. Note that bag databases are of importance because they represent SQL databases more faithfully than set databases [14]. Bag databases often require different methods than set databases [14, 28]. In this paper, we exclusively consider bag databases. Note that if the resilience problem of a query \(\mu\) can be solved in polynomial time on bag databases, then the resilience problem on set databases can also be solved in polynomial time. A natural generalization of the basic resilience problem defined above is obtained by admitting the decoration of databases with a subsignature \(\sigma\subseteq\tau\), in this way declaring all tuples in \(R^{\mathfrak{A}}\), \(R\in\sigma\), to be _exogenous_. This means that we are not allowed to remove such tuples from \(\mathfrak{A}\) to make \(\mu\) false; the tuples in the other relations are then called _endogenous_. For brevity, we also refer to the relations in \(\tau\) as being exogenous/endogenous. If not specified, then \(\sigma=\emptyset\), i.e., all tuples are endogenous. Different variants of exogenous tuples were studied in [37]. However, in bag semantics all of them are polynomial-time equivalent to a problem of this form; see Remark 8.16. In this paper, we generally admit exogenous relations. The resilience problem that we study is given in Figure 1. 
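As a concrete illustration of the problem just defined, the following Python sketch (our own ad-hoc encoding of queries and bag databases; a naive exponential-time search, not one of the polynomial-time algorithms discussed below) computes the resilience of a small bag database by trying all sets of endogenous tuples whose removal makes the query false.

```python
from itertools import product, combinations

def satisfies(db, query, domain):
    """Does the database (ignoring multiplicities) satisfy the Boolean
    conjunctive query?  A query is a list of atoms (relation, variables)."""
    variables = sorted({v for _, args in query for v in args})
    for values in product(domain, repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(tuple(assignment[v] for v in args) in db[rel]
               for rel, args in query):
            return True
    return False

def resilience(db, query, exogenous=()):
    """Brute-force resilience of a bag database: minimum total multiplicity
    of endogenous tuples whose removal makes the query false."""
    domain = {x for rel in db for t in db[rel] for x in t}
    endogenous = [(rel, t) for rel in db if rel not in exogenous for t in db[rel]]
    best = None
    for k in range(len(endogenous) + 1):
        for removed in combinations(endogenous, k):
            rest = {rel: {t for t in db[rel] if (rel, t) not in removed}
                    for rel in db}
            if not satisfies(rest, query, domain):
                cost = sum(db[rel][t] for rel, t in removed)
                best = cost if best is None else min(best, cost)
    return best

# Query from Example 8.1:  exists x,y,z (R(x,y) and S(y,z)).
query = [("R", ("x", "y")), ("S", ("y", "z"))]
db = {"R": {(1, 2): 3}, "S": {(2, 3): 1, (2, 4): 1}}   # tuple -> multiplicity
print(resilience(db, query))  # 2
```

On the example database, removing both \(S\)-tuples (total multiplicity \(2\)) is cheaper than removing the single \(R\)-tuple of multiplicity \(3\), so the resilience is \(2\).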
We next explain how to represent resilience problems as VCSPs using appropriately chosen valued structures with oligomorphic automorphism groups. **Example 8.1**.: _The following query is taken from Meliou, Gatterbauer, Moore, and Suciu [38]; they show how to solve its resilience problem without multiplicities in polynomial time by a reduction to a max-flow problem. Let \(\mu\) be the query_ \[\exists x,y,z\big{(}R(x,y)\wedge S(y,z)\big{)}.\] _Observe that a finite \(\tau\)-structure satisfies \(\mu\) if and only if it does not have a homomorphism to the \(\tau\)-structure \(\mathfrak{B}\) with domain \(B=\{0,1\}\) and the relations \(R^{\mathfrak{B}}=\{(0,1),(1,1)\}\) and \(S^{\mathfrak{B}}=\{(0,0),(0,1)\}\) (see Figure 2). We turn \(\mathfrak{B}\) into the valued structure \(\Gamma\) with domain \(\{0,1\}\) where \(R^{\Gamma}(0,1)=R^{\Gamma}(1,1)=0=S^{\Gamma}(0,0)=S^{\Gamma}(0,1)\) and \(R^{\Gamma}\) and \(S^{\Gamma}\) take value 1 otherwise. Then \(\mathrm{VCSP}(\Gamma)\) is precisely the resilience problem for \(\mu\) (with multiplicities). Our results reprove the result from [37] that even with multiplicities, the problem can be solved in polynomial time (see Theorem 7.17, Proposition 8.15 and Example 8.12)._ **Example 8.2**.: _Let \(\mu\) be the conjunctive query_ \[\exists x,y,z(R(x,y)\wedge S(x,y,z)).\] _This query is linear in the sense of Freire, Gatterbauer, Immerman, and Meliou and thus its resilience problem without multiplicities can be solved in polynomial time (Theorem 4.5 in [38]; also see Fact 3.18 in [19]). Our results reprove the result from [37] that this problem remains polynomial-time solvable with multiplicities (see Theorem 7.17, Proposition 8.15 and Example 8.17)._ Figure 1. The resilience problem considered in this paper. Figure 2. The query \(\mu\) from Example 8.1 (on the left) and the corresponding structure \(\mathfrak{B}\) (on the right). **Remark 8.3**.: _Note that if the resilience problem (with or without multiplicities) for a union \(\mu\) of conjunctive queries is in P, then also the computational problem of finding tuples to be removed from the input database \(\mathfrak{A}\) so that \(\mathfrak{A}\not\models\mu\) is in \(P.\) To see this, let \(u\in\mathbb{N}\) be threshold. If \(u=0\), then no tuple needs to be found and we are done. Otherwise, for every tuple \(t\) in a relation \(R^{\mathfrak{A}}\), we remove \(t\) from \(R^{\mathfrak{A}}\) and test the resulting database with the threshold \(u-m\), where \(m\) is the multiplicity of \(t\). If the modified instance is accepted, then \(t\) is a correct tuple to be removed and we may proceed to find a solution of this modified instance._ ### Connectivity We show that when classifying the resilience problem for conjunctive queries, it suffices to consider queries that are connected. The _canonical database_ of a conjunctive query \(\mu\) with relational signature \(\tau\) is the \(\tau\)-structure \(\mathfrak{A}\) whose domain are the variables of \(\mu\) and where \(a\in R^{\mathfrak{A}}\) for \(R\in\tau\) if and only if \(\mu\) contains the conjunct \(R(a)\). A \(\tau\)-structure is _connected_ if it cannot be written as the disjoint union of two \(\tau\)-structures with non-empty domains. Conversely, the _canonical query_ of a relational \(\tau\)-structure \(\mathfrak{A}\) is the conjunctive query whose variable set is the domain \(A\) of \(\mathfrak{A}\), and which contains for every \(R\in\tau\) and \(\bar{a}\in R^{\mathfrak{A}}\) the conjunct \(R(\bar{a})\). 
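For instance, the canonical database of the query \(\exists x,y,z\,(R(x,y)\wedge S(y,z))\) from Example 8.1 has domain \(\{x,y,z\}\) with \(R=\{(x,y)\}\) and \(S=\{(y,z)\}\), and it is connected. A small sketch (reusing the ad-hoc query encoding from the earlier snippet; the helper names are ours) that builds the canonical database and tests connectedness could look as follows.

```python
def canonical_database(query):
    """Canonical database of a conjunctive query: the domain is the set of
    variables, and each atom R(x1,...,xk) contributes the tuple (x1,...,xk)."""
    domain = {v for _, args in query for v in args}
    relations = {}
    for rel, args in query:
        relations.setdefault(rel, set()).add(args)
    return domain, relations

def is_connected(domain, relations):
    """Connected = not a disjoint union of two structures with non-empty
    domains; equivalently, merging elements that co-occur in a tuple
    leaves a single class (for a non-empty domain)."""
    parent = {x: x for x in domain}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for tuples in relations.values():
        for t in tuples:
            for a, b in zip(t, t[1:]):
                parent[find(a)] = find(b)
    return len({find(x) for x in domain}) <= 1

# The query from Example 8.1 is connected.
print(is_connected(*canonical_database([("R", ("x", "y")), ("S", ("y", "z"))])))  # True
```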
**Remark 8.4**.: _All terminology introduced for \(\tau\)-structures also applies to conjunctive queries with signature \(\tau\): by definition, the query has the property if the canonical database has the property._ In particular, it is clear what it means for a conjunctive query to be connected. **Lemma 8.5**.: _Let \(\mu_{1},\ldots,\mu_{k}\) be conjunctive queries such that \(\mu_{i}\) does not imply \(\mu_{j}\) if \(i\neq j\). Then the resilience problem for \(\mu:=\mu_{1}\wedge\cdots\wedge\mu_{k}\) is NP-hard if the resilience problem for one of the \(\mu_{i}\) is NP-hard. Conversely, if the resilience problem is in P (in NP) for each \(\mu_{i}\), then the resilience problem for \(\mu\) is in P as well (in NP, respectively). The same is true in the setting without multiplicities and/or exogenous relations._ Proof.: We first present a polynomial-time reduction from the resilience problem of \(\mu_{i}\), for some \(i\in\{1,\ldots,k\}\), to the resilience problem of \(\mu\). Given an instance \(\mathfrak{A}\) of the resilience problem for \(\mu_{i}\), let \(m\) be the number of tuples in relations of \(\mathfrak{A}\). Let \(\mathfrak{A}^{\prime}\) be the disjoint union of \(\mathfrak{A}\) with \(m\) copies of the canonical database of \(\mu_{j}\) for every \(j\in\{1,\ldots,k\}\setminus\{i\}\). Observe that \(\mathfrak{A}^{\prime}\) can be computed in polynomial time in the size of \(\mathfrak{A}\) and that the resilience of \(\mathfrak{A}\) with respect to \(\mu_{i}\) equals the resilience of \(\mathfrak{A}^{\prime}\) with respect to \(\mu\). Conversely, if the resilience problem is in P for each \(\mu_{i}\), then also the resilience problem for \(\mu\) is in P: given an instance \(\mathfrak{A}\) of the resilience problem for \(\mu\), we compute the resilience of \(\mathfrak{A}\) with respect to \(\mu_{i}\) for every \(i\in\{1,\ldots,k\}\), and take the minimum of all the resulting values. The proof for the membership in NP is the same. The same proof works in the setting without multiplicities. When classifying the complexity of the resilience problem for conjunctive queries, by Lemma 8.5 we may restrict our attention to conjunctive queries that are connected. We also formulate an immediate corollary of Lemma 8.5 that, after finitely many applications, establishes the same for unions of conjunctive queries. **Corollary 8.6**.: _Let \(\mu=\mu_{1}\wedge\cdots\wedge\mu_{k}\) be as in Lemma 8.5 and suppose that \(\mu\) occurs in a union \(\mu^{\prime}\) of conjunctive queries. For \(i\in\{1,\ldots,k\}\), let \(\mu^{\prime}_{i}\) be the union of queries obtained by replacing \(\mu\) by \(\mu_{i}\) in \(\mu^{\prime}\). Then the resilience problem for \(\mu^{\prime}\) is NP-hard if the resilience problem for one of the \(\mu^{\prime}_{i}\) is NP-hard. Conversely, if the resilience problem is in P (in NP) for each \(\mu^{\prime}_{i}\), then the resilience problem for \(\mu^{\prime}\) is in P as well (in NP, respectively). The same is true in the setting without multiplicities and/or exogenous relations._ ### Finite Duals If \(\mu\) is a union of conjunctive queries with signature \(\tau\), then a _dual_ of \(\mu\) is a \(\tau\)-structure \(\mathfrak{A}\) with the property that a finite structure \(\mathfrak{B}\) has a homomorphism to \(\mathfrak{A}\) if and only if \(\mathfrak{B}\) does not satisfy \(\mu\). The conjunctive query in Example 8.1, for instance, even has a _finite_ dual. There is an elegant characterisation of the (unions of) conjunctive queries that have a finite dual. 
To state it, we need some basic terminology from database theory. **Definition 8.7**.: _The incidence graph of a relational \(\tau\)-structure \(\mathfrak{A}\) is the bipartite undirected multigraph whose first colour class is \(A\), and whose second colour class consists of expressions of the form \(R(b)\) where \(R\in\tau\) has arity \(k\), \(b\in A^{k}\), and \(\mathfrak{A}\models R(b)\). An edge \(e_{a,i,R(b)}\) joins \(a\in A\) with \(R(b)\) if \(b_{i}=a\). A structure is called acyclic if its incidence graph is acyclic, i.e., it contains no cycles (if two vertices are linked by two different edges, then they establish a cycle). A structure is called a tree if it is acyclic and connected in the sense defined in Section 8.1._ The following was proved by Nesetril and Tardif [40]; also see [36, 18]. **Theorem 8.8**.: _A conjunctive query \(\mu\) has a finite dual if and only if the canonical database of \(\mu\) is homomorphically equivalent to a tree. A union of conjunctive queries has a finite dual if and only if the canonical database for each of the conjunctive queries is homomorphically equivalent to a tree._ The theorem shows that in particular Example 8.2 does not have a finite dual, since the query given there is not acyclic and hence cannot be homomorphically equivalent to a tree. To construct valued structures from duals, we introduce the following notation. **Definition 8.9**.: _Let \(\mathfrak{B}\) be a \(\tau\)-structure and \(\sigma\subseteq\tau\). Define \(\Gamma(\mathfrak{B},\sigma)\) to be the valued \(\tau\)-structure on the same domain as \(\mathfrak{B}\) such that_ * _for each_ \(R\in\tau\setminus\sigma\)_,_ \(R^{\Gamma(\mathfrak{B},\sigma)}(a):=0\) _if_ \(a\in R^{\mathfrak{B}}\) _and_ \(R^{\Gamma(\mathfrak{B},\sigma)}(a):=1\) _otherwise, and_ * _for each_ \(R\in\sigma\)_,_ \(R^{\Gamma(\mathfrak{B},\sigma)}(a):=0\) _if_ \(a\in R^{\mathfrak{B}}\) _and_ \(R^{\Gamma(\mathfrak{B},\sigma)}(a):=\infty\) _otherwise._ Note that \(\operatorname{Aut}(\mathfrak{B})=\operatorname{Aut}(\Gamma(\mathfrak{B}, \sigma))\) for any \(\tau\)-structure \(\mathfrak{B}\) and any \(\sigma\). In the following result we use a correspondence between resilience problems for acyclic conjunctive queries and valued CSPs. The result then follows from the P versus NP-complete dichotomy theorem for valued CSPs over finite domains stated in Theorem 7.15. **Theorem 8.10**.: _Let \(\mu\) be a union of acyclic conjunctive queries with relational signature \(\tau\) and let \(\sigma\subseteq\tau\). Then the resilience problem for \(\mu\) with exogenous relations from \(\sigma\) is in P or NP-complete. Moreover, it is decidable whether the resilience problem for a given union of acyclic conjunctive queries is in P. If \(\mu\) is a union of queries each of which is homomorphically equivalent to a tree and \(\mathfrak{B}\) is the finite dual of \(\mu\) (which exists by Theorem 8.8), then \(\operatorname{VCSP}(\Gamma(\mathfrak{B},\sigma))\) is polynomial-time equivalent to the resilience problem for \(\mu\) with exogenous relations from \(\sigma\)._ Proof.: By virtue of Corollary 8.6, we may assume for the P versus NP-complete dichotomy that each of the conjunctive queries in \(\mu\) is connected and thus a tree. The same is true also for the polynomial-time equivalence to a VCSP since replacing a conjunctive query in a union with a homomorphically equivalent one does not affect the complexity of resilience. Define \(\Gamma:=\Gamma(\mathfrak{B},\sigma)\). 
We show that \(\operatorname{VCSP}(\Gamma)\) is polynomial-time equivalent to the resilience problem for \(\mu\) with exogenous relations from \(\sigma\). Given a finite bag database \(\mathfrak{A}\) with signature \(\tau\) and exogenous tuples from relations in \(\sigma\), let \(\phi\) be the \(\tau\)-expression which contains for every \(R\in\tau\) and for every tuple \(a\in R^{\mathfrak{A}}\) the summand \(R(a)\) with the same number of occurrences as is the multiplicity of \(a\) in \(R^{\mathfrak{A}}\). Conversely, for every \(\tau\)-expression \(\phi\) we can create a bag database \(\mathfrak{A}\) with signature \(\tau\) and exogenous relations from \(\sigma\): the domain of \(\mathfrak{A}\) is the set of variables of \(\phi\), and for every \(R\in\tau\), a tuple \(a\) is contained in \(R^{\mathfrak{A}}\) with multiplicity equal to the number of occurrences of the summand \(R(a)\) in \(\phi\). In both situations, the resilience of \(\mathfrak{A}\) with respect to \(\mu\) equals the value of \(\phi\) with respect to \(\Gamma\). This shows the final statement of the theorem. The first statement now follows from Theorem 7.15. Concerning the decidability of the tractability condition, first note that the finite dual of \(\mu\), and hence also \(\Gamma\), can be effectively computed from \(\mu\) (e.g., the construction of the dual in [40] is effective). The existence of a fractional cyclic polymorphism for a given valued structure \(\Gamma\) with finite domain and finite signature can be decided (in exponential time in the size of \(\Gamma\); see [31]). **Remark 8.11**.: _We mention that Theorem 8.10 also applies to regular path queries, which can be shown to always have a finite dual; see the related [13]._ Theorem 8.10 can be combined with the tractability results for VCSPs from Section 7 that use fractional polymorphisms. To illustrate fractional polymorphisms and how to find them, we revisit a known tractable resilience problem from [19, 20, 21, 38] and show that it has a fractional canonical pseudo cyclic polymorphism. **Example 8.12**.: _We revisit Example 8.1. Consider again the conjunctive query_ \[\exists x,y,z(R(x,y)\wedge S(y,z)).\] _There is a finite dual \(\mathfrak{B}\) of \(\mu\) with domain \(\{0,1\}\) which is finitely bounded homogeneous, as described in Example 8.1. That example also describes a valued structure \(\Gamma\) which is actually \(\Gamma(\mathfrak{B},\emptyset)\). Let \(\omega\) be the fractional cyclic operation given by \(\omega(\min)=\omega(\max)=\frac{1}{2}\). Since \(\operatorname{Aut}(\Gamma)\) is trivial, \(\omega\) is canonical. The fractional operation \(\omega\) improves both weighted relations \(R\) and \(S\) (they are submodular; see, e.g., [35]) and hence is a canonical cyclic fractional polymorphism of \(\Gamma\)._ Combining Theorems 7.17 and 8.10, Example 8.12 reproves the results from [20] (without multiplicities) and [37] (with multiplicities) that the resilience problem for this query is in P. ### Infinite Duals Conjunctive queries might not have a finite dual (see Example 8.2), but unions of connected conjunctive queries always have a countably infinite dual. Cherlin, Shelah, and Shi [15] showed that in this case we may even find a dual with an oligomorphic automorphism group (see Theorem 8.13 below). This is the key insight that allows us to phrase resilience problems as VCSPs for valued structures with oligomorphic automorphism groups. The not necessarily connected case again reduces to the connected case by Corollary 8.6. 
In Theorem 8.13 below we state a variant of a theorem of Cherlin, Shelah, and Shi [15] (also see [5, 26, 27]). If \(\mathfrak{B}\) is a structure, we write \(\mathfrak{B}_{\operatorname{pp}(m)}\) for the expansion of \(\mathfrak{B}\) by all relations that can be defined with a connected primitive positive formula (see Remark 8.4) with at most \(m\) variables, at least one free variable, and without equality. For a union of conjunctive queries \(\mu\) over the signature \(\tau\), we write \(|\mu|\) for the maximum of the number of variables of each conjunctive query in \(\mu\), the maximal arity of \(\tau\), and \(2\). **Theorem 8.13**.: _For every union \(\mu\) of connected conjunctive queries over a finite relational signature \(\tau\) there exists a \(\tau\)-structure \(\mathfrak{B}_{\mu}\) such that the following statements hold:_ 1. \((\mathfrak{B}_{\mu})_{pp(|\mu|)}\) _is homogeneous._ 2. \(\operatorname{Age}(\mathfrak{B}_{pp(|\mu|)})\) _is the class of all substructures of structures of the form_ \(\mathfrak{A}_{pp(|\mu|)}\) _for a finite structure_ \(\mathfrak{A}\) _that satisfies_ \(\neg\mu\)_._ 3. _A countable_ \(\tau\)_-structure_ \(\mathfrak{A}\) _satisfies_ \(\neg\mu\) _if and only if it embeds into_ \(\mathfrak{B}_{\mu}\)_._ 4. \(\mathfrak{B}_{\mu}\) _is finitely bounded._ _ 5. \(\operatorname{Aut}(\mathfrak{B}_{\mu})\) _is oligomorphic._ 6. \((\mathfrak{B}_{\mu})_{\operatorname{pp}(|\mu|)}\) _is finitely bounded._ Proof.: The construction of a structure \(\mathfrak{B}_{\mu}\) with the given properties follows from a proof of Hubicka and Nesetril [26, 27] of the theorem of Cherlin, Shelah, and Shi [15], and can be found in [5, Theorem 4.3.8]. Properties (1), (2) and property (3) restricted to finite structures \(\mathfrak{A}\) are explicitly stated in [5, Theorem 4.3.8]. Property (3) restricted to finite structures clearly implies property (4). Property (5) holds because homogeneous structures with a finite relational signature have an oligomorphic automorphism group. Property (3) for countable structures now follows from [5, Lemma 4.1.7]. Since we are not aware of a reference for (6) in the literature, we present a proof here. Let \(\sigma\) be the signature of \((\mathfrak{B}_{\mu})_{\operatorname{pp}(|\mu|)}\). We claim that the following universal \(\sigma\)-sentence \(\psi\) describes the structures in the age of \((\mathfrak{B}_{\mu})_{\operatorname{pp}(|\mu|)}\). If \(\phi\) is a \(\sigma\)-sentence, then \(\phi^{\prime}\) denotes the \(\tau\)-sentence obtained from \(\phi\) by replacing every occurrence of \(R(\bar{x})\), for \(R\in\sigma\setminus\tau\), by the primitive positive \(\tau\)-formula \(\eta(\bar{x})\) for which \(R\) was introduced in \((\mathfrak{B}_{\mu})_{\operatorname{pp}(|\mu|)}\). Then \(\psi\) is a conjunction of all \(\sigma\)-sentences \(\neg\phi\) such that \(\phi\) is primitive positive, \(\phi^{\prime}\) has at most \(|\mu|\) variables, and \(\phi^{\prime}\) implies \(\mu\). Clearly, there are finitely many conjuncts of this form. Suppose that \(\mathfrak{A}\in\operatorname{Age}(\mathfrak{B}_{\mu})_{\operatorname{pp}(|\mu |)}\). Then \(\mathfrak{A}\) satisfies each conjunct \(\neg\phi\) of \(\psi\), because otherwise \(\mathfrak{B}_{\mu}\) satisfies \(\phi^{\prime}\), and thus satisfies \(\mu\), contrary to our assumptions. The interesting direction is that if a finite \(\sigma\)-structure \(\mathfrak{A}\) satisfies \(\psi\), then \(\mathfrak{A}\) embeds into \((\mathfrak{B}_{\mu})_{\operatorname{pp}(|\mu|)}\). 
Let \(\phi\) be the canonical query of \(\mathfrak{A}\). Let \(\mathfrak{A}^{\prime}\) be the canonical database of the \(\tau\)-formula \(\phi^{\prime}\). Suppose for contradiction that \(\mathfrak{A}^{\prime}\models\mu\). Let \(\chi\) be a minimal subformula of \(\phi\) such that the canonical database of \(\chi\) models \(\mu\). Then \(\chi\) has at most \(|\mu|\) variables and implies \(\mu\), and hence \(\neg\chi\) is a conjunct of of \(\psi\) which is not satisfied by \(\mathfrak{A}\), a contradiction to our assumptions. Therefore, \(\mathfrak{A}^{\prime}\models\neg\mu\) and by Property (2), we have that \(\mathfrak{A}^{\prime}_{\operatorname{pp}(|\mu|)}\) has an embedding \(f\) into \((\mathfrak{B}_{\mu})_{\operatorname{pp}(|\mu|)}\). We claim that the restriction of \(f\) to the elements of \(\mathfrak{A}\) is an embedding of \(\mathfrak{A}\) into \((\mathfrak{B}_{\mu})_{\operatorname{pp}(|\mu|)}\). Clearly, if \(\mathfrak{A}\models R(\bar{x})\) for some relation \(R\) that has been introduced for a primitive positive formula \(\eta\), then \(\mathfrak{A}^{\prime}\) satisfies \(\eta(\bar{x})\), and hence \(\mathfrak{B}_{\mu}\models\eta(f(\bar{x}))\), which in turn implies that \((\mathfrak{B}_{\mu})_{\operatorname{pp}(|\mu|)}\models R(f(\bar{x}))\) as desired. Conversely, if \((\mathfrak{B}_{\mu})_{\operatorname{pp}(|\mu|)}\models R(f(\bar{x}))\), then \(\mathfrak{A}^{\prime}_{\operatorname{pp}(|\mu|)}\models R(\bar{x})\), and hence \(\mathfrak{A}^{\prime}\models\eta(\bar{x})\). This in turn implies that \(\mathfrak{A}\models R(\bar{x})\). Since the restriction of \(f\) and its inverse preserve the relations from \(\tau\) trivially, we conclude that \(\mathfrak{A}\) embeds into \((\mathfrak{B}_{\mu})_{\operatorname{pp}(|\mu|)}\). By Properties (1) and (6) of Theorem 8.13, \(\mathfrak{B}_{\mu}\) is always a reduct of a finitely bounded homogeneous structure. For short, we write \(\Gamma_{\mu}\) for \(\Gamma(\mathfrak{B}_{\mu},\emptyset)\) and \(\Gamma_{\mu,\sigma}\) for \(\Gamma(\mathfrak{B}_{\mu},\sigma)\), see Definition 8.9. For some queries \(\mu\), the structure \(\mathfrak{B}_{\mu}\) can be replaced by a simpler structure \(\mathfrak{C}_{\mu}\). This will be convenient for some examples that we consider later, because the structure \(\mathfrak{C}_{\mu}\) is finitely bounded and homogeneous itself and hence admits the application of Theorem 7.17. To define the respective class of queries, we need the following definition. The _Gaifman graph_ of a relational structure \(\mathfrak{A}\) is the undirected graph with vertex set \(A\) where \(a,b\in A\) are adjacent if and only if \(a\neq b\) and there exists a tuple in a relation of \(\mathfrak{A}\) that contains both \(a\) and \(b\). The Gaifman graph of a conjunctive query is the Gaifman graph of the canonical database of that query. **Theorem 8.14**.: _For every union \(\mu\) of connected conjunctive queries over a finite relational signature \(\tau\) such that the Gaifman graph of each of the conjunctive queries in \(\mu\) is complete, there exists a countable \(\tau\)-structure \(\mathfrak{C}_{\mu}\) such that the following statements hold:_ 1. \(\mathfrak{C}_{\mu}\) _is homogeneous._ 2. 
\(\operatorname{Age}(\mathfrak{C}_{\mu})\) _is the class of all finite structures_ \(\mathfrak{A}\) _that satisfy_ \(\neg\mu\). _Moreover, \(\mathfrak{C}_{\mu}\) is finitely bounded, \(\operatorname{Aut}(\mathfrak{C}_{\mu})\) is oligomorphic, and a countable \(\tau\)-structure satisfies \(\neg\mu\) if and only if it embeds into \(\mathfrak{C}_{\mu}\)._ Proof.: Let \(\mathfrak{A}_{1}\) and \(\mathfrak{A}_{2}\) be finite \(\tau\)-structures that satisfy \(\neg\mu\). Since the Gaifman graph of each of the conjunctive queries in \(\mu\) is complete, the union of the structures \(\mathfrak{A}_{1}\) and \(\mathfrak{A}_{2}\) satisfies \(\neg\mu\) as well. By Fraisse's Theorem (see, e.g., [25]) there is a countable homogeneous \(\tau\)-structure \(\mathfrak{C}_{\mu}\) such that \(\operatorname{Age}(\mathfrak{C}_{\mu})\) is the class of all finite structures that satisfy \(\neg\mu\); since \(\neg\mu\) is equivalent to a universal sentence, this shows that \(\mathfrak{C}_{\mu}\) is finitely bounded. Homogeneous structures with finite relational signature clearly have an oligomorphic automorphism group. For the final statement, see [5, Lemma 4.1.7]. Note that \(\mathfrak{C}_{\mu}\) is homomorphically equivalent to \(\mathfrak{B}_{\mu}\) by [5, Lemma 4.1.7]. Therefore, \(\Gamma(\mathfrak{C}_{\mu},\sigma)\) is homomorphically equivalent to \(\Gamma_{\mu,\sigma}\) for any \(\sigma\subseteq\tau\). The following proposition follows straightforwardly from the definitions and provides a valued constraint satisfaction problem that is polynomial-time equivalent to the resilience problem for \(\mu\), similar to Theorem 8.10. **Proposition 8.15**.: _The resilience problem for a union of connected conjunctive queries \(\mu\) where the relations from \(\sigma\subseteq\tau\) are exogenous is polynomial-time equivalent to \(\operatorname{VCSP}(\Gamma(\mathfrak{B},\sigma))\) for any dual \(\mathfrak{B}\) of \(\mu\); in particular, to \(\operatorname{VCSP}(\Gamma_{\mu,\sigma})\)._ Proof.: Let \(\mathfrak{B}\) be a dual of \(\mu\). For every bag database \(\mathfrak{A}\) over signature \(\tau\) and with exogenous relations from \(\sigma\), let \(\phi\) be the \(\tau\)-expression obtained by adding atomic \(\tau\)-expressions \(S(x_{1},\ldots,x_{n})\) according to the multiplicity of the tuples \((x_{1},\ldots,x_{n})\) in \(S^{\mathfrak{A}}\) for all \(S\in\tau\). Note that \(\phi\) can be computed in polynomial time. Then the resilience of \(\mathfrak{A}\) with respect to \(\mu\) is at most \(u\) if and only if \((\phi,u)\) has a solution over \(\Gamma(\mathfrak{B},\sigma)\). To prove a polynomial-time reduction in the other direction, let \(\phi\) be a \(\tau\)-expression. We construct a bag database \(\mathfrak{A}\) with signature \(\tau\). The domain of \(\mathfrak{A}\) is the set of variables that appear in \(\phi\), and for every \(S\in\tau\), a tuple \((x_{1},\ldots,x_{n})\) is contained in \(S^{\mathfrak{A}}\) with multiplicity equal to the number of times that the summand \(S(x_{1},\ldots,x_{n})\) occurs in \(\phi\). The relations \(S^{\mathfrak{A}}\) with \(S\in\sigma\) are exogenous in \(\mathfrak{A}\); the remaining ones are endogenous. Again, \(\mathfrak{A}\) can be computed in polynomial time and the resilience of \(\mathfrak{A}\) with respect to \(\mu\) is at most \(u\) if and only if \((\phi,u)\) has a solution over \(\Gamma(\mathfrak{B},\sigma)\). In [37] one may find a seemingly more general notion of exogenous tuples, where in a single relation there might be both endogenous and exogenous tuples. 
Using Proposition 8.15 and Lemma 4.7, however, we can show that classifying the complexity of resilience problems according to our original definition also entails a classification of this variant. **Remark 8.16**.: _Let \(\mu\) be a union of conjunctive queries with the signature \(\tau\), let \(\sigma\subseteq\tau\), and let \(\rho\subseteq\tau\setminus\sigma\). Suppose we would like to model the resilience problem for \(\mu\) where the relations in \(\sigma\) are exogenous and the relations in \(\rho\) might contain both endogenous and exogenous tuples. Let \(\mathfrak{B}\) be a dual of \(\mu\) and \(\Gamma\) be the expansion of \(\Gamma(\mathfrak{B},\sigma)\) where for every relational symbol \(R\in\rho\), there is also a relation \((R^{x})^{\Gamma}=R^{\mathfrak{B}}\), i.e., a classical relation that takes values \(0\) and \(\infty\). The resilience problem for \(\mu\) with exogenous tuples specified as above is polynomial-time equivalent to \(\operatorname{VCSP}(\Gamma)\) by analogous reductions as in Proposition 8.15. Note that \((R^{x})^{\Gamma}=\operatorname{Opt}\left(R^{\Gamma(\mathfrak{B},\sigma)}\right)\) for every \(R\in\rho\), and therefore by Lemma 4.7, \(\operatorname{VCSP}(\Gamma)\) is polynomial-time equivalent to \(\operatorname{VCSP}(\Gamma(\mathfrak{B},\sigma))\) and thus to the resilience problem for \(\mu\) where the relations in \(\sigma\) are exogeneous and the relations in \(\tau\setminus\sigma\) are purely endogeneous. This justifies the restriction to our setting for exogenous tuples. Moreover, the same argument shows that if resilience of \(\mu\) with all tuples endogenous is in \(P\), then all variants of resilience of \(\mu\) with exogenous tuples are in \(P\) as well._ Similarly as in Example 8.12, Proposition 8.15 can be combined with the tractability results for VCSPs from Section 7 that use fractional polymorphisms to prove tractability of resilience problems. **Example 8.17**.: _We revisit Example 8.2. Consider the conjunctive query \(\exists x,y,z\left(R(x,y)\wedge S(x,y,z)\right)\) over the signature \(\tau=\{R,S\}\). Note that the Gaifman graph of \(\mu\) is complete; let \(\mathfrak{C}_{\mu}\) be the structure from Theorem 8.14. We construct a binary pseudo cyclic canonical fractional polymorphism of \(\Gamma(\mathfrak{C}_{\mu},\emptyset)\). Let \(\mathfrak{M}\) be the \(\tau\)-structure with domain \((C_{\mu})^{2}\) and where_ * \(((b_{1}^{1},b_{1}^{2}),(b_{2}^{1},b_{2}^{2}))\in R^{\mathfrak{M}}\) _if and only if_ \((b_{1}^{1},b_{2}^{1})\in R^{\mathfrak{C}_{\mu}}\) _and_ \((b_{1}^{2},b_{2}^{2})\in R^{\mathfrak{C}_{\mu}}\)_, and_ * \(((b_{1}^{1},b_{1}^{2}),(b_{2}^{1},b_{2}^{2}),(b_{3}^{1},b_{3}^{2}))\in S^{ \mathfrak{M}}\) _if and only if_ \((b_{1}^{1},b_{2}^{1},b_{3}^{1})\in S^{\mathfrak{C}_{\mu}}\) _or_ \((b_{1}^{2},b_{2}^{2},b_{3}^{2})\in S^{\mathfrak{C}_{\mu}}\)_._ _Similarly, let \(\mathfrak{N}\) be the \(\tau\)-structure with domain \((C_{\mu})^{2}\) and where_ * \(((b_{1}^{1},b_{1}^{2}),(b_{2}^{1},b_{2}^{2}))\in R^{\mathfrak{N}}\) _if and only if_ \((b_{1}^{1},b_{2}^{1})\in R^{\mathfrak{C}_{\mu}}\) _or_ \((b_{1}^{2},b_{2}^{2})\in R^{\mathfrak{C}_{\mu}}\)_, and_ * \(((b_{1}^{1},b_{1}^{2}),(b_{2}^{1},b_{2}^{2}),(b_{3}^{1},b_{3}^{2}))\in S^{ \mathfrak{N}}\) _if and only if_ \((b_{1}^{1},b_{2}^{1},b_{3}^{1})\in S^{\mathfrak{C}_{\mu}}\) _and_ \((b_{1}^{2},b_{2}^{2},b_{3}^{2})\in S^{\mathfrak{C}_{\mu}}\)_._ _Note that \(\mathfrak{M}\not\models\mu\) and hence there exists an embedding \(f\colon\mathfrak{M}\to\mathfrak{C}_{\mu}\). 
Similarly, there exists an embedding \(g\colon\mathfrak{N}\to\mathfrak{C}_{\mu}\). Clearly, both \(f\) and \(g\) regarded as operations on the set \(C_{\mu}\) are pseudo cyclic (but in general not cyclic) and canonical with respect to \(\operatorname{Aut}(\mathfrak{C}_{\mu})\) (see Claim 6 in Proposition 8.21 for a detailed argument of this type). Let \(\omega\) be the fractional operation given by \(\omega(f)=\frac{1}{2}\) and \(\omega(g)=\frac{1}{2}\). Then \(\omega\) is a binary fractional polymorphism of \(\Gamma:=\Gamma(\mathfrak{C}_{\mu},\emptyset)\): for \(b^{1},b^{2}\in(C_{\mu})^{2}\) we have_ \[\sum_{h\in\mathscr{O}^{(2)}}\omega(h)R^{\Gamma}(h(b^{1},b^{2}))=\frac{1}{2}R^{\Gamma}(f(b^{1},b^{2}))+\frac{1}{2}R^{\Gamma}(g(b^{1},b^{2}))=\frac{1}{2}\sum_{j=1}^{2}R^{\Gamma}(b^{j}), \tag{9}\] _so \(\omega\) improves \(R\), and similarly we see that \(\omega\) improves \(S\)._ We proved that the corresponding valued structure has a binary canonical pseudo cyclic fractional polymorphism. By Theorem 7.17 and Proposition 8.15, this reproves the results from [20] (without multiplicities) and [37] (with multiplicities) that the resilience problem for this query is in P. ### The Resilience Tractability Conjecture In this section we present a conjecture which implies, together with Corollary 5.13 and Lemma 8.5, a P versus NP-complete dichotomy for resilience problems for finite unions of conjunctive queries. **Conjecture 8.18**.: _Let \(\mu\) be a union of connected conjunctive queries over the signature \(\tau\), and let \(\sigma\subseteq\tau\). If the structure \((\{0,1\};\operatorname{OIT})\) has no pp-construction in \(\Gamma:=\Gamma_{\mu,\sigma}\), then \(\Gamma\) has a fractional polymorphism of arity \(\ell\geq 2\) which is canonical and pseudo cyclic with respect to \(\operatorname{Aut}(\Gamma)\) (and in this case, \(\operatorname{VCSP}(\Gamma)\) is in P by Theorem 7.17)._ The conjecture is intentionally only formulated for VCSPs that stem from resilience problems, because it is known to be false for the more general situation of VCSPs for valued structures \(\Gamma\) that have the same automorphisms as a reduct of a finitely bounded homogeneous structure [5] (Section 12.9.1; the counterexample is even a CSP). However, see Conjecture 9.3 for a conjecture that could hold for VCSPs in this more general setting. For the following conjunctive query \(\mu\), the NP-hardness of the resilience problem without multiplicities was shown in [20]; to illustrate our condition, we verify that \((\{0,1\};\operatorname{OIT})\) has a pp-construction in \(\Gamma_{\mu}\) and thus prove in a different way that the resilience problem (with multiplicities) for \(\mu\) is NP-hard. **Example 8.19** (Triangle query).: _Let \(\tau\) be the signature that consists of three binary relation symbols \(R\), \(S\), and \(T\), and let \(\mu\) be the conjunctive query_ \[\exists x,y,z\big{(}R(x,y)\wedge S(y,z)\wedge T(z,x)\big{)}.\] _The resilience problem without multiplicities for \(\mu\) is NP-complete [20], and hence \(\mathrm{VCSP}(\Gamma_{\mu})\) is NP-hard (Proposition 8.15). Since the Gaifman graph of \(\mu\) is complete, the structure \(\mathfrak{C}_{\mu}\) from Theorem 8.14 exists. Let \(\Gamma:=\Gamma(\mathfrak{C}_{\mu},\emptyset)\). We provide a pp-construction of \((\{0,1\};\mathrm{OIT})\) in \(\Gamma\), which also proves NP-hardness of \(\mathrm{VCSP}(\Gamma)\) and hence of the resilience problem of \(\mu\) with multiplicities by Corollary 5.13. 
Since \(\Gamma\) is homomorphically equivalent to \(\Gamma_{\mu}\), this also provides a pp-construction of \((\{0,1\};\mathrm{OIT})\) in \(\Gamma_{\mu}\) (see Lemma 5.14)._ _Let \(C\) be the domain of \(\Gamma\). Let \(\phi(a,b,c,d,e,f,g,h,i)\) be the \(\tau\)-expression_ \[R(a,b)+S(b,c)+T(c,d)+R(d,e)+S(e,f)+T(f,g)+R(g,h)+S(h,i) \tag{10}\] \[+T(i,g)+S(h,f)+R(g,e)+T(f,d)+S(e,c)+R(d,b)+T(c,a). \tag{11}\] _For an illustration of \(\mu\) and \(\phi\), see Figure 3. Note that \(\phi\) can be viewed as seven overlapping copies of \(\mu\)._ _In what follows, we say that an atomic \(\tau\)-expression holds if it evaluates to \(0\). Note that every atom in (10) except the first and the last ones appears in exactly two copies of \(\mu\) in \(\phi\), whereas all other atoms of \(\phi\) occur in only one copy of \(\mu\) in \(\phi\). Hence, since there are seven copies of \(\mu\) in \(\phi\), in the optimal solution of the instance \(\phi\) of \(\mathrm{VCSP}(\Gamma)\) all atoms in (11) hold, and either every atom at even position or every atom at odd position in (10) holds. Let \(RT\in\langle\Gamma\rangle\) be given by_ \[RT(a,b,f,g):=\mathrm{Opt}\inf_{c,d,e,h,i\in C}\phi.\] _Note that \(RT(a,b,f,g)\) holds if and only if_ * \(R(a,b)\) _holds and_ \(T(f,g)\) _does not hold, or_ * \(T(f,g)\) _holds and_ \(R(a,b)\) _does not hold,_ _where the reverse implication uses that \(\mathfrak{C}_{\mu}\) is homogeneous. Similarly, define \(RS\in\langle\Gamma\rangle\) by_ \[RS(a,b,h,i):=\mathrm{Opt}\inf_{c,d,e,f,g\in C}\phi.\] _Note that \(RS(a,b,h,i)\) holds if and only if_ * \(R(a,b)\) _holds and_ \(S(h,i)\) _does not hold, or_ * \(S(h,i)\) _holds and_ \(R(a,b)\) _does not hold._ _Next, we define the auxiliary relation \(\widetilde{RS}(a,b,e,f)\) to be_ \[\mathrm{Opt}\inf_{c,d,g,h,i\in C}\phi.\] Figure 3. Example 8.19, visualisation of \(\mu\) and \(\phi\). _Note that \(\widetilde{RS}(a,b,e,f)\) holds if and only if_ * _both_ \(R(a,b)\) _and_ \(S(e,f)\) _hold, or_ * _neither_ \(R(a,b)\) _and nor_ \(S(e,f)\) _holds._ _This allows us to define the relation_ \[RR(u,v,x,y):=\inf_{w,z\in C}RS(u,v,w,z)+\widetilde{RS}(x,y,w,z)\] _which holds if and only if_ * \(R(u,v)\) _holds and_ \(R(x,y)\) _does not hold, or_ * \(R(x,y)\) _holds and_ \(R(u,v)\) _does not hold._ _Define \(M\in\langle\Gamma\rangle\) as_ \[M:=\operatorname{Opt}\inf_{x,y,z\in C} \big{(}RR(u,v,x,y)+RS(u^{\prime},v^{\prime},y,z)+RT(u^{\prime \prime},v^{\prime\prime},z,x)\] \[+R(x,y)+S(y,z)+T(z,x)\big{)}.\] _Note that \(R(x,y)\), \(S(y,z)\) and \(T(z,x)\) cannot hold at the same time and therefore \((u,v,u^{\prime},v^{\prime},u^{\prime\prime},v^{\prime\prime})\in M\) if and only if exactly one of of \(R(u,v)\), \(R(u^{\prime},v^{\prime})\), and \(R(u^{\prime\prime},v^{\prime\prime})\) holds. Let \(\Delta\) be the pp-power of \((C;M)\) of dimension two with signature \(\{\operatorname{OIT}\}\) such that_ \[\operatorname{OIT}^{\Delta}((u,v),(u^{\prime},v^{\prime}),(u^{\prime\prime},v^ {\prime\prime})):=M(u,v,u^{\prime},v^{\prime},u^{\prime\prime},v^{\prime\prime }).\] _Then \(\Delta\) is homomorphically equivalent to \((\{0,1\};\operatorname{OIT})\), witnessed by the homomorphism from \(\Delta\) to \((\{0,1\};\operatorname{OIT})\) that maps \((u,v)\) to \(1\) if \(R(u,v)\) and to \(0\) otherwise, and the homomorphism \((\{0,1\};\operatorname{OIT})\to\Delta\) that maps \(1\) to any pair of vertices \((u,v)\in R\) and \(0\) to any pair of vertices \((u,v)\notin R\). 
Therefore, \(\Gamma\) pp-constructs \((\{0,1\};\operatorname{OIT})\)._ We mention that another conjecture concerning a P vs. NP-complete complexity dichotomy for resilience problems appears in [37, Conjecture 7.7]. The conjecture has a similar form as Conjecture 8.18 in the sense that it states that a sufficient hardness condition for resilience is also necessary. The relationship between our hardness condition from Corollary 5.13 and the condition from [37] remains to be studied. ### An example of formerly open complexity We use our approach to settle the complexity of the resilience problem for a conjunctive query that was mentioned as an open problem in [21] (Section 8.5): \[\mu:=\exists x,y(S(x)\wedge R(x,y)\wedge R(y,x)\wedge R(y,y)) \tag{12}\] Let \(\tau=\{R,S\}\) be the signature of \(\mu\). To study the complexity of resilience of \(\mu\), it will be convenient to work with a dual which has different model-theoretic properties than the duals \(\mathfrak{B}_{\mu}\) from Theorem 8.13 and \(\mathfrak{C}_{\mu}\) from Theorem 8.14, namely a dual that is a model-complete core. **Definition 8.20**.: _A structure \(\mathfrak{B}\) with an oligomorphic automorphism group is model-complete if every embedding of \(\mathfrak{B}\) into \(\mathfrak{B}\) preserves all first-order formulas. It is a core if every endomorphism is an embedding._ Note that the definition of cores of valued structures with finite domain (Definition 7.12) and the definition above specialise to the same concept for relational structures over finite domains. A structure with an oligomorphic automorphism group is a model-complete core if and only if for every \(n\in\mathbb{N}\) every orbit of \(n\)-tuples can be defined with an existential positive formula [5]. Every countable structure \(\mathfrak{B}\) is homomorphically equivalent to a model-complete core, which is unique up to isomorphism [4, 5]; we refer to this structure as the model-complete core of \(\mathfrak{B}\). The advantage of working with model-complete cores is that the structure is in a sense'minimal' and therefore easier to work with in concrete examples.2 Footnote 2: The model-complete core of \(\mathfrak{B}_{\mu}\) would be a natural choice for the canonical dual of \(\mu\) to work with instead of \(\mathfrak{B}_{\mu}\). However, proving that the model-complete core has a finitely bounded homogeneous expansion (so that, for example, Theorem 3.4 applies) requires introducing further model-theoretical notions [39] which we want to avoid in this article. **Proposition 8.21**.: _There is a finitely bounded homogeneous dual \(\mathfrak{B}\) of \(\mu\) such that the valued \(\tau\)-structure \(\Gamma:=\Gamma(\mathfrak{B},\emptyset)\) has a binary fractional polymorphism which is canonical and pseudo cyclic with respect to \(\operatorname{Aut}(\Gamma)\). Hence, \(\operatorname{VCSP}(\Gamma)\) and the resilience problem for \(\mu\) are in \(P.\) As a consequence, the polynomial-time tractability result even holds for resilience of \(\mu\) with exogeneous relations from any \(\sigma\subseteq\tau\)._ Proof.: Since the Gaifman graph of \(\mu\) is a complete graph, there exists the structure \(\mathfrak{C}_{\mu}\) as in Theorem 8.14. Let \(\mathfrak{B}\) be the model-complete core of \(\mathfrak{C}_{\mu}\). Note that \(\mathfrak{B}\) has the property that a countable structure \(\mathfrak{A}\) maps homomorphically to \(\mathfrak{B}\) if and only if \(\mathfrak{A}\models\neg\mu\); in particular, \(\mathfrak{B}\) is a dual of \(\mu\) and \(\mathfrak{B}\models\neg\mu\). 
The structure \(\mathfrak{C}_{\mu}\) is homogeneous, and it is known that the model-complete core of a homogeneous structure is again homogeneous (see Proposition 4.7.7 in [5]), so \(\mathfrak{B}\) is homogeneous. Let \(\Gamma:=\Gamma(\mathfrak{B},\emptyset)\). Note that \[\mathfrak{B} \models\forall x\big{(}\neg S(x)\vee\neg R(x,x)\big{)} \tag{13}\] \[\text{and }\mathfrak{B} \models\forall x,y\big{(}x=y\lor R(x,y)\lor R(y,x)\big{)}. \tag{14}\] To see (14), suppose for contradiction that \(\mathfrak{B}\) contains distinct elements \(x,y\) such that neither \((x,y)\) nor \((y,x)\) is in \(R^{\mathfrak{B}}\). Let \(\mathfrak{B}^{\prime}\) be the structure obtained from \(\mathfrak{B}\) by adding \((x,y)\) to \(R^{\mathfrak{B}}\). Then Figure 4. Visualisation of the query \(\mu\) from (12). Figure 5. Illustration of a finite substructure of \(\mathfrak{B}\) that contains representatives for all orbits of pairs of \(\operatorname{Aut}(\mathfrak{B})\). Arrows are not drawn on undirected edges. \(\mathfrak{B}^{\prime}\models\neg\mu\) as well, and hence there is a homomorphism from \(\mathfrak{B}^{\prime}\) to \(\mathfrak{B}\) by the properties of \(\mathfrak{B}\). This homomorphism is also an endomorphism of \(\mathfrak{B}\) which is not an embedding, a contradiction to the assumption that \(\mathfrak{B}\) is a model-complete core. Also observe that \[\mathfrak{B}\models\forall x,y\big{(}x=y\vee(R(x,y)\wedge R(y,x))\vee(S(x) \wedge R(y,y))\vee(R(x,x)\wedge S(y))\big{)}. \tag{15}\] Suppose for contradiction that (15) does not hold for some distinct \(x\) and \(y\). Then \(\neg S(x)\vee\neg R(y,y)\) and \(\neg R(x,x)\vee\neg S(y)\), i.e., \(\neg S(x)\wedge\neg R(x,x)\), or \(\neg S(x)\wedge\neg S(y)\), or \(\neg R(y,y)\wedge\neg R(x,x)\), or \(\neg R(y,y)\wedge\neg S(y)\). In each of these cases we may add both \(R\)-edges between the distinct elements \(x\) and \(y\) to \(\mathfrak{B}\) and obtain a structure not satisfying \(\mu\), which leads to a contradiction as above. For an illustration of a finite substructure of \(\mathfrak{B}\) which contains a representative for every orbit of pairs in \(\operatorname{Aut}(\mathfrak{B})\), see Figure 5. **Claim 1.** For every a finite \(\tau\)-structure \(\mathfrak{A}\) that satisfies \(\neg\mu\) and the sentences in (14) and (15), there exists a _strong_ homomorphism to \(\mathfrak{B}\), i.e., a homomorphism that also preserves the complements of \(R\) and \(S\). First observe that \(\mathfrak{B}\) embeds the countably infinite complete graph, where \(R\) is the edge relation and precisely one element lies in the relation \(S\); this is because this structure maps homomorphically to \(\mathfrak{B}\) and unless embedded, it contradicts \(\mathfrak{B}\not\models\mu\). In particular, there are infinitely many \(x\in B\) such that \(\mathfrak{B}\models\neg S(x)\wedge\neg R(x,x)\) and by (15), for every \(y\in B\), \(x\neq y\), we have \(\mathfrak{B}\models R(x,y)\wedge R(y,x)\). To prove the claim, let \(\mathfrak{A}\) be a finite structure that satisfies \(\neg\mu\) and the sentences in (14) and (15). For a homomorphism \(h\) from \(\mathfrak{A}\) to \(\mathfrak{B}\), let \[s(h):=|\{x\in A\mid\mathfrak{A}\models\neg S(x)\wedge\mathfrak{B}\models S(h( x))\}|\] and \[r(h):=|\{(x,y)\in A^{2}\mid\mathfrak{A}\models\neg R(x,y)\wedge\mathfrak{B} \models R(h(x),h(y))\}|.\] Let \(h\) be a homomorphism from \(\mathfrak{A}\) to \(\mathfrak{B}\), which exists since \(\mathfrak{A}\models\neg\mu\). 
If \(s(h)+r(h)=0\), then \(h\) is a strong homomorphism and there is nothing to prove. Suppose therefore \(s(h)+r(h)>0\). We construct a homomorphism \(h^{\prime}\) such that \(r(h^{\prime})+s(h^{\prime})<r(h)+s(h)\). Since \(r(h)+s(h)\) is finite, by applying this construction finitely many times, we obtain a strong homomorphism from \(\mathfrak{A}\) to \(\mathfrak{B}\). If \(s(h)>0\), then there exists \(a\in A\setminus S^{\mathfrak{A}}\) such that \(h(a)\in S^{\mathfrak{B}}\). By (13), \(\mathfrak{B}\not\models R(h(a),h(a))\) and hence \(\mathfrak{A}\not\models R(a,a)\). Pick \(b\in B\setminus h(A)\) such that \(\mathfrak{B}\models\neg S(b)\wedge\neg R(b,b)\) and define \[h^{\prime}(x):=\begin{cases}b\text{ if }x=a,\\ h(x)\text{ otherwise.}\end{cases}\] Observe that \(h^{\prime}\) is a homomorphism, \(s(h^{\prime})<s(h)\) and \(r(h^{\prime})=r(h)\). If \(r(h)>0\), then there exists \((x,y)\in A^{2}\setminus R^{\mathfrak{A}}\) such that \((h(x),h(y))\in R^{\mathfrak{B}}\). If \(x=y\), the argument is similar as in the case \(s(h)>0\). Finally, if \(x\neq y\), then \(\mathfrak{A}\models(S(x)\wedge R(y,y))\vee(R(x,x)\wedge S(y))\), because \(\mathfrak{A}\) satisfies the sentence in (15). Since \(\mathfrak{A}\) satisfies the sentence in (14), \(\mathfrak{A}\models R(y,x)\). Since \(h\) is a homomorphism, we have \[\mathfrak{B}\models R(h(x),h(y))\wedge R(h(y),h(x))\wedge((S(h(x))\wedge R(h(y),h(y)))\vee(R(h(x),h(x))\wedge S(h(y)))),\] which contradicts \(\mathfrak{B}\not\models\mu\). **Claim 2.** Every finite \(\tau\)-structure \(\mathfrak{A}\) that satisfies \(\neg\mu\) and the sentences in (14) and (15) embeds into \(\mathfrak{B}\). In particular, \(\mathfrak{B}\) is finitely bounded. Let \(\mathfrak{A}\) be such a structure. By Theorem 8.13, there is an embedding \(e\) of \(\mathfrak{A}\) into \(\mathfrak{B}_{\mu}\). Since \(\mathfrak{B}_{\mu}\) is homogeneous and embeds every finite \(\tau\)-structure that satisfies \(\neg\mu\), there exists a finite substructure \(\mathfrak{A}^{\prime}\) of \(\mathfrak{B}_{\mu}\) satisfying the sentences in (14) and (15) such that \(e(\mathfrak{A})\) is a substructure of \(\mathfrak{A}^{\prime}\) and for all distinct \(a,b\in A\) there exists \(s\in S^{\mathfrak{A}^{\prime}}\) such that \(\mathfrak{B}_{\mu}\models R(e(a),s)\wedge R(s,e(b))\). By Claim 1, there is a strong homomorphism \(h\) from \(\mathfrak{A}^{\prime}\) to \(\mathfrak{B}\). We claim that \(h\circ e\) is injective and therefore an embedding of \(\mathfrak{A}\) into \(\mathfrak{B}\). Suppose there exist distinct \(a,b\in A\) such that \(h(e(a))=h(e(b))\). Since \(e(\mathfrak{A})\) satisfies the sentence in (14) and \(h\) is a strong homomorphism, we obtain that \(\mathfrak{B}_{\mu}\models R(e(a),e(a))\wedge R(e(b),e(b))\). Let \(s\in S^{\mathfrak{A}^{\prime}}\) be such that \(\mathfrak{B}_{\mu}\models R(e(a),s)\wedge R(s,e(b))\). Hence, \[\mathfrak{B}\models S(h(s))\wedge R(h(e(a)),h(s))\wedge R(h(s),h(e(a)))\wedge R (h(e(a)),h(e(a))),\] a contradiction to \(\mathfrak{B}\not\models\mu\). It follows that \(h\circ e\) is an embedding of \(\mathfrak{A}\) into \(\mathfrak{B}\). We define two \(\{R,S\}\)-structures \(\mathfrak{M},\mathfrak{N}\) with domain \(B^{2}\) as follows. 
For all \(x_{1},x_{2},y_{1},y_{2},x,y\in B\) define \[\mathfrak{M},\mathfrak{N} \models R\big{(}(x_{1},y_{1}),(x_{2},y_{2})\big{)} \text{if }\mathfrak{B}\models R(x_{1},x_{2})\wedge R(y_{1},y_{2}), \tag{16}\] \[\mathfrak{M},\mathfrak{N} \models S\big{(}(x,y)\big{)} \text{if }\mathfrak{B}\models S(x)\wedge S(y)\] (17) \[\mathfrak{M} \models S\big{(}(x,y)\big{)} \text{if }\mathfrak{B}\models S(x)\lor S(y)\] (18) \[\mathfrak{N} \models R\big{(}(x,y),(x,y)\big{)} \text{if }\mathfrak{B}\models R(x,x)\lor R(y,y). \tag{19}\] Add pairs of distinct elements to \(R^{\mathfrak{M}}\) and \(R^{\mathfrak{N}}\) such that both \(\mathfrak{M}\) and \(\mathfrak{N}\) satisfy the sentence in (15) (note that no addition of elements to \(S^{\mathfrak{M}}\) and \(S^{\mathfrak{N}}\) is needed). Finally, add \(((x_{1},y_{1}),(x_{2},y_{2}))\) to \(R^{\mathfrak{M}}\) and \(((x_{2},y_{2}),(x_{1},y_{1}))\) to \(R^{\mathfrak{N}}\) if at least one of the following cases holds: 1. \(\mathfrak{B}\models S(x_{1})\wedge R(x_{1},x_{2})\wedge R(x_{2},x_{2})\wedge R (y_{2},y_{2})\wedge R(y_{2},y_{1})\wedge S(y_{1})\), 2. \(\mathfrak{B}\models R(x_{1},x_{1})\wedge R(x_{1},x_{2})\wedge S(x_{2})\wedge y _{1}=y_{2}\wedge R(y_{1},y_{2})\), 3. \(\mathfrak{B}\models S(y_{1})\wedge R(y_{1},y_{2})\wedge R(y_{2},y_{2})\wedge R (x_{2},x_{2})\wedge R(x_{2},x_{1})\wedge S(x_{1})\), 4. \(\mathfrak{B}\models R(y_{1},y_{1})\wedge R(y_{1},y_{2})\wedge S(y_{2})\wedge x _{1}=x_{2}\wedge R(x_{1},x_{2})\). Conditions (A) and (B) are illustrated in Figure 6; conditions (C) and (D) are obtained from (A) and (B) by replacing \(x\) by \(y\). Note that for \((x_{1},y_{1})=(x_{2},y_{2})\), none of the conditions (A)-(D) is ever satisfied. No other atomic formulas hold on \(\mathfrak{M}\) and \(\mathfrak{N}\). Note that both \(\mathfrak{M}\) and \(\mathfrak{N}\) satisfy the property stated for \(\mathfrak{B}\) in (13). **Claim 3.**\(\mathfrak{M}\) and \(\mathfrak{N}\) satisfy the sentence in (14). We prove the statement for \(\mathfrak{M}\); the proof for \(\mathfrak{N}\) is similar. Let \((x_{1},y_{1}),(x_{2},y_{2})\in B\) be such that \((x_{1},y_{1})\neq(x_{2},y_{2})\) and \(\mathfrak{M}\models\neg R((x_{2},y_{2}),(x_{1},y_{1}))\). Since \(\mathfrak{M}\) satisfies the sentence in (15), we must have either \(\mathfrak{M}\models S(x_{1},y_{1})\wedge R((x_{2},y_{2}),(x_{2},y_{2}))\) or \(\mathfrak{M}\models S(x_{2},y_{2})\wedge R((x_{1},y_{1}),(x_{1},y_{1}))\). Suppose the former is true; the other case is treated analogously. Then \(\mathfrak{B}\models R(x_{2},x_{2})\wedge R(y_{2},y_{2})\) and \(\mathfrak{B}\models S(x_{1})\lor S(y_{1})\). If \(\mathfrak{B}\models S(x_{1})\), then \(x_{1}\neq x_{2}\) and by (14) we have \(\mathfrak{B}\models R(x_{1},x_{2})\lor R(x_{2},x_{1})\). By (14) and (15) for \((y_{1},y_{2})\), we obtain that \(\mathfrak{M}\models R((x_{1},y_{1}),(x_{2},y_{2}))\) by (16) or one of the conditions (A)-(D). The argument if \(\mathfrak{B}\models S(y_{1})\) is similar with \(x\) and \(y\) switched. **Claim 4.**\(\mathfrak{M}\) and \(\mathfrak{N}\) satisfy \(\neg\mu\). Let \(x_{1},x_{2},y_{1},y_{2}\in B\). Suppose for contradiction that \[\mathfrak{M}\models S(x_{1},y_{1})\wedge R((x_{1},y_{1}),(x_{2},y_{2}))\wedge R((x _{2},y_{2}),(x_{1},y_{1}))\wedge R((x_{2},y_{2}),(x_{2},y_{2})).\] By the definition of \(\mathfrak{M}\), we have \(\mathfrak{B}\models R(x_{2},x_{2})\wedge R(y_{2},y_{2})\) and \(\mathfrak{B}\models S(x_{1})\lor S(y_{1})\). 
Assume that \(\mathfrak{B}\models S(x_{1})\); the case \(\mathfrak{B}\models S(y_{1})\) is analogous. By the assumption, \(\mathfrak{M}\models R((x_{1},y_{1}),(x_{2},y_{2}))\). Then, by the definition of \(\mathfrak{M}\), one of the conditions (16), (A)-(D) holds, or \[\mathfrak{M}\models\neg\big{(}S(x_{1},y_{1})\wedge R((x_{2},y_{2}),(x_{2},y_{2})) \big{)}\] (recall that \(((x_{1},y_{1}),(x_{2},y_{2}))\) might have been added to \(R^{\mathfrak{M}}\) so that \(\mathfrak{M}\) satisfies the sentence in (15)). The last option is false by the assumption and by (13), \(\mathfrak{B}\models\neg S(x_{2})\wedge\neg S(y_{2})\), and hence neither (B) nor (D) holds. Therefore, one of the conditions (16), (A), or (C) holds for \(((x_{1},y_{1}),(x_{2},y_{2}))\). Similarly, we obtain that one of the conditions (16) or (B) holds for \(((x_{2},y_{2}),(x_{1},y_{1}))\), since \(\mathfrak{M}\models R((x_{2},y_{2}),(x_{1},y_{1}))\) (to exclude (D) we use the assumption that \(\mathfrak{B}\models S(x_{1})\) and hence \(x_{1}\neq x_{2}\)). This yields six cases and in each of them we must have that \(\mathfrak{B}\models R(x_{1},x_{2})\wedge R(x_{2},x_{1})\) or \(\mathfrak{B}\models S(y_{1})\wedge R(y_{1},y_{2})\wedge R(y_{2},y_{1})\). Since \(\mathfrak{B}\models S(x_{1})\wedge R(x_{2},x_{2})\wedge R(y_{2},y_{2})\), this contradicts \(\mathfrak{B}\models\neg\mu\). Since \((x_{1},y_{1}),(x_{2},y_{2})\in M\) were chosen arbitrarily, this shows that \(\mathfrak{M}\models\neg\mu\). The argument for \(\mathfrak{N}\) is similar. **Claim 5.** There is an embedding \(f\) of \(\mathfrak{M}\) into \(\mathfrak{B}\) and an embedding \(g\) of \(\mathfrak{N}\) into \(\mathfrak{B}\). We show the claim for \(\mathfrak{M}\); the proof for \(\mathfrak{N}\) is analogous. By [5, Lemma 4.1.7], it is enough to show that every finite substructure of \(\mathfrak{M}\) embeds into \(\mathfrak{B}\). By the definition of \(\mathfrak{M}\) and Claims 3 and 4, every finite substructure \(\mathfrak{M}\) satisfies (14), (15) and \(\neg\mu\) and hence, by Claim 2, it embeds into \(\mathfrak{B}\). Let \(\omega\) be the fractional operation over \(B\) defined by \(\omega(f)=\frac{1}{2}\) and \(\omega(g)=\frac{1}{2}\). **Claim 6.**\(\omega\) is pseudo cyclic and canonical with respect to \(\operatorname{Aut}(\mathfrak{B})=\operatorname{Aut}(\Gamma)\). Note that since \(\mathfrak{B}\) is homogeneous in a finite relational signature, two \(k\)-tuples of elements of \(\mathfrak{B}\) lie in the same orbit if and only if they satisfy the same atomic formulas. Therefore, the canonicity of \(f\) and \(g\) with respect to \(\operatorname{Aut}(\mathfrak{B})\) follows from the definition of \(\mathfrak{M}\) and \(\mathfrak{N}\): for \((a,b)\in B^{2}\), whether \(\mathfrak{B}\models S(f(a,b))\) only depends on whether \(\mathfrak{M}\models S(a,b)\) by Claim 5, which depends only on the atomic formulas that hold on \(a\) and on \(b\) in \(\mathfrak{B}\). An analogous statement is true for atomic formulas of the form \(R(x,y)\) and \(x=y\). Therefore, \(f\) is canonical. The argument for the canonicity of \(g\) is analogous. Figure 6. An illustration of the conditions (A) and (B) in \(\mathfrak{M}\) and \(\mathfrak{N}\). To see that \(f\) and \(g\) are pseudo cyclic, we show that \(f^{*}\) and \(g^{*}\) defined on 2-orbits (using the terminology of Remark 7.4) are cyclic. 
By the definition of \(f^{*}\), we need to show that for any \(a_{1},a_{2},b_{1},b_{2}\in B\), the two pairs \((f(a_{1},b_{1}),f(a_{2},b_{2}))\) and \((f(b_{1},a_{1}),f(b_{2},a_{2}))\) satisfy the same atomic formulas. For the formulas of the form \(S(x)\) and \(R(x,y)\), this can be seen from Claim 5 and the definition of \(\mathfrak{M}\) and \(\mathfrak{N}\), since each of the conditions (16),(17),(18),(19),(15) and the union of (A), (B), (C), (D) is symmetric with respect to exchanging \(x\) and \(y\). For the atomic formulas of the form \(x=y\), this follows from the injectivity of \(f\). This shows that \(f^{*}\) is cyclic; the argument for \(g^{*}\) is the same. Hence, the pseudo-cyclicity of \(f\) and \(g\) is a consequence of Lemma 7.9 for \(m=2\). **Claim 7.**\(\omega\) improves \(S\). By the definition of \(\mathfrak{M}\) and \(\mathfrak{N}\) and Claim 5, we have for all \(x,y\in B\) \[\omega(f)S^{\Gamma}(f(x,y))+\omega(g)S^{\Gamma}(g(x,y))=\frac{1}{2}(S^{\Gamma }(x)+S^{\Gamma}(y)).\] **Claim 8.**\(\omega\) improves \(R\). Let \(x_{1},y_{1},x_{2},y_{2}\in B\). We have to verify that \[\omega(f)R^{\Gamma}(f(x_{1},y_{1}),f(x_{2},y_{2}))+\omega(g)R^{\Gamma}(g(x_{1 },y_{1}),g(x_{2},y_{2}))\leq\frac{1}{2}(R^{\Gamma}(x_{1},x_{2})+R^{\Gamma}(y_ {1},y_{2})). \tag{20}\] We distinguish four cases. * \(\mathfrak{M},\mathfrak{N}\models R((x_{1},y_{1}),(x_{2},y_{2}))\). Then Inequality (20) holds since the left-hand side is zero, and the right-hand side is non-negative (each weighted relation in \(\Gamma\) is non-negative). * \(\mathfrak{M},\mathfrak{N}\models\neg R((x_{1},y_{1}),(x_{2},y_{2}))\). Since \(\mathfrak{M}\) and \(\mathfrak{N}\) satisfy the sentences in (14) and (15) and \(\mathfrak{B}\) satisfies (14) we must have \(\mathfrak{B}\models\neg R(x_{1},x_{2})\wedge\neg R(y_{1},y_{2})\), and both sides of the inequality evaluate to \(1\). * \(\mathfrak{M}\models\neg R((x_{1},y_{1}),(x_{2},y_{2}))\) and \(\mathfrak{N}\models R((x_{1},y_{1}),(x_{2},y_{2}))\). By Claim 5, the left-hand side evaluates to \(\frac{1}{2}\). By (16), we have \(\mathfrak{B}\models\neg R(x_{1},x_{2})\) or \(\mathfrak{B}\models\neg R(y_{1},y_{2})\). Therefore, the right-hand side of (20) is at least \(\frac{1}{2}\) and the inequality holds. * \(\mathfrak{M}\models R((x_{1},y_{1}),(x_{2},y_{2}))\) and \(\mathfrak{N}\models\neg R((x_{1},y_{1}),(x_{2},y_{2}))\). Similar to the previous case. This exhausts all cases and concludes the proof of Claim 8. It follows that \(\omega\) is a binary fractional polymorphism of \(\Gamma\) which is canonical and pseudo cyclic with respect to \(\operatorname{Aut}(\Gamma)\). Polynomial-time tractability of \(\operatorname{VCSP}(\Gamma)\) follows by Theorem 7.17 and 8.15. The final statement follows from Remark 8.16. ## 9. Conclusion and Future Work We formulated a general hardness condition for VCSPs of valued structures with an oligomorphic automorphism group and a new polynomial-time tractability result. We use the latter to resolve a resilience problem whose complexity was left open in the literature and conjecture that our conditions exactly capture the hard and easy resilience problems for conjunctive queries (with multiplicities), respectively. In fact, a full classification of resilience problems for conjunctive queries based on our approach seems feasible, but requires further research, as discussed in the following. 
We have proved that if \(\Gamma\) is a valued structure with an oligomorphic automorphism group and \(R\) is a weighted relation in the smallest weighted relational clone that contains the weighted relations of \(\Gamma\), then \(R\) is preserved by all fractional polymorphisms of \(\Gamma\) (Lemma 6.8). We do not know whether the converse is true. Note that it is known to hold for the special cases of finite-domain valued structures [17, 22] and for classical relational structures with 0-\(\infty\) valued relations (CSP setting) having an oligomorphic automorphism group [9]. **Question 9.1**.: _Let \(\Gamma\) be a valued structure with an oligomorphic automorphism group. Is it true that \(R\in\langle\Gamma\rangle\) if and only if \(R\in\operatorname{Imp}(\operatorname{fPol}(\Gamma))\)?_ Note that a positive answer to this question would imply that the computational complexity of VCSPs for valued structures \(\Gamma\) with an oligomorphic automorphism group, and in particular the complexity of resilience problems, is fully determined by the fractional polymorphisms of \(\Gamma\). Fractional polymorphisms are probability distributions on operations. In all the examples that arise from resilience problems that we considered so far, it was sufficient to work with fractional polymorphisms \(\omega\) that are _finitary_, i.e., such that there are finitely many operations \(f_{1},\ldots,f_{k}\in\mathscr{O}_{C}\) such that \(\sum_{i\in\{1,\ldots,k\}}\omega(f_{i})=1\). This motivates the following question. **Question 9.2**.: _Does our notion of pp-constructability change if we restrict to finitary fractional homomorphisms \(\omega\)? Is there a valued structure \(\Gamma\) with an oligomorphic automorphism group and a weighted relation \(R\) such that \(R\) is not improved by all fractional polymorphism of \(\Gamma\), but is improved by all finitary fractional polymorphisms \(\omega\)? In particular, are these statements true if we restrict to valued \(\tau\)-structures \(\Gamma\) that arise from resilience problems as described in Proposition 8.15?_ In the following, we formulate a common generalisation of the complexity-theoretic implications of Conjecture 8.18 and the infinite-domain tractability conjecture from [10] that concerns a full complexity classification of VCSPs for valued structures from reducts of finitely bounded homogeneous structures. **Conjecture 9.3**.: _Let \(\Gamma\) be a valued structure with finite signature such that \(\operatorname{Aut}(\Gamma)=\operatorname{Aut}(\mathfrak{B})\) for some reduct \(\mathfrak{B}\) of a countable finitely bounded homogeneous structure. If \((\{0,1\};\operatorname{OIT})\) has no pp-construction in \(\Gamma\), then \(\operatorname{VCSP}(\Gamma)\) is in \(P\) (otherwise, we already know that \(\operatorname{VCSP}(\Gamma)\) is NP-complete by Theorem 3.4 and Corollary 5.13)._ One might hope to prove this conjecture under the assumption of the infinite-domain tractability conjecture. Recall that also the finite-domain VCSP classification was first proven conditionally on the finite-domain tractability conjecture [32, 34], which was only confirmed later [11, 46]. We also believe that the'meta-problem' of deciding whether for a given conjunctive query the resilience problem with multiplicities is in P is decidable. 
This would follow from a positive answer to Conjecture 8.18, because \(\Gamma_{m}^{*}\) can be computed and the condition in Item 4 of Proposition 7.16 can be decided algorithmically for the finite-domain valued structure \(\Gamma_{m}^{*}\) using linear programming [31].
2309.04981
Streamlined Data Fusion: Unleashing the Power of Linear Combination with Minimal Relevance Judgments
Linear combination is a potent data fusion method in information retrieval tasks, thanks to its ability to adjust weights for diverse scenarios. However, achieving optimal weight training has traditionally required manual relevance judgments on a large percentage of documents, a labor-intensive and expensive process. In this study, we investigate the feasibility of obtaining near-optimal weights using a mere 20\%-50\% of relevant documents. Through experiments on four TREC datasets, we find that weights trained with multiple linear regression using this reduced set closely rival those obtained with TREC's official "qrels." Our findings unlock the potential for more efficient and affordable data fusion, empowering researchers and practitioners to reap its full benefits with significantly less effort.
Qiuyu Xu, Yidong Huang, Shengli Wu, Adrian Moore
2023-09-10T10:09:21Z
http://arxiv.org/abs/2309.04981v2
Streamlined Data Fusion: Unleashing the Power of Linear Combination with Minimal Relevance Judgments ###### Abstract Linear combination is a potent data fusion method in information retrieval tasks, thanks to its ability to adjust weights for diverse scenarios. However, achieving optimal weight training has traditionally required manual relevance judgments on a large percentage of documents, a labor-intensive and expensive process. In this study, we investigate the feasibility of obtaining near-optimal weights using a mere 20%-50% of relevant documents. Through experiments on four TREC datasets, we find that weights trained with multiple linear regression using this reduced set closely rival those obtained with TREC's official "qrels." Our findings unlock the potential for more efficient and affordable data fusion, empowering researchers and practitioners to reap its full benefits with significantly less effort. Keywords:data fusion information retrieval linear combination weight training ## 1 Introduction Data fusion is a useful technology for various information retrieval tasks to improve performance. Linear combination is a strong data fusion method. If proper weights are assigned to component retrieval systems, then it is able to achieve better results than those methods such as CombSum [8], CombMNZ [8], Borda Count [1], which treat all component retrieval systems equally. Weights assignment is a key issue for the success of linear combination. Quite a few different weights assignment methods have been proposed [13, 11, 19, 9, 18, 20]. However, almost all of them are supervised learning methods and a training data set is required. Usually, a training data set includes a collection of documents \(D\), a group of queries \(Q\), a group of retrieval systems \(IR\) and retrieval results \(S\) from \(IR\) corresponding to \(Q\) and \(C\), and relevance judgment \(J\). In many situations, to set up a proper training data set requires a lot of effort. Among them, relevance judgment is an expensive component because it needs human judges to decide which document is relevant to which query. It is especially so when the document collection is very large. Probably this is one of the major reasons that very simple data fusion methods such as CombSum, CombMNZ, and Borda Count, are more frequently used in various retrieval tasks [12]. More complicated and expensive ones are rarely used, although they can lead to better retrieval performance. In this piece of work, we investigate if it is possible to train proper weights for linear combination using a "lightweight" training data set. Rather than identify and use all relevant documents for each query, a "lightweight" one only includes a subset of all the relevant documents for each query. Especially, we focus on the method of multiple linear regression because it is very good for weights training. Theoretically, the weights calculated by that method are optimum in the least squares sense [19]. Empirically, it outperforms many other weights assignment methods such as SlideFuse [13], PosFuse [11], MAPFuse [11], and SegFuse [17]. This is also confirmed later in this paper. The remaining of this paper is organized as follows. Section 2 reviews the method of weights assignment by multiple linear regression for linear combination. Section 3 discusses relevance judgment, especially the pooling policy used in TREC. Section 4 presents the setting and experimental results of this study. Some more analysis is given in Section 5. 
Finally, Section 6 makes some concluding remarks. ## 2 Weights assignment by multiple linear regression A training data set comprises a collection of \(l\) documents (\(D\)), a group of \(m\) queries (\(Q\)), and a group of \(n\) information retrieval systems (\(IR\)). For each query \(q^{i}\) (\(1\leq i\leq m\)), all information retrieval systems \(ir_{j}\) (\(1\leq j\leq n\)) provide their estimated relevance scores to all the documents in the collection. Therefore, we have (\(s_{1k}^{i}\), \(s_{2k}^{i}\),..., \(s_{nk}^{i}\), \(y_{k}^{i}\)) for \(i\) = (1, 2,..., \(m\)), \(k\) = (1, 2,..., \(l\)). Here \(s_{jk}^{i}\) stands for the score assigned by retrieval system \(ir_{j}\) to document \(d_{k}\) for query \(q^{i}\); \(y_{k}^{i}\) is the judged relevance score of \(d_{k}\) for query \(q^{i}\). If binary relevance judgment is used, then it is 1 for relevant documents and 0 otherwise. \(Y=\{y_{k}^{i};i=(1,2,...,m),k=(1,2,...,l)\}\) can be estimated by a linear combination of scores from all component systems. Consider the following quantity \[\mathcal{G}=\sum_{i=1}^{m}\sum_{k=1}^{l}\left[y_{k}^{i}-(\hat{\beta_{0}}+\hat {\beta_{1}}s_{1k}^{i}+\hat{\beta_{2}}s_{2k}^{i}+...+\hat{\beta_{n}}s_{nk}^{i} )\right]^{2}\] when \(\mathcal{G}\) reaches its minimum, the estimation is the most accurate. \(\beta_{0}\), \(\beta_{1}\), \(\beta_{2}\),..., and \(\beta_{n}\), the multiple linear regression coefficients, are numerical constants that can be determined from observed data. In the least squares sense the coefficients obtained by multiple linear regression can bring us the optimum fusion results by the linear combination method, since they can be used to make the most accurate estimation of the relevance scores of all the documents to all the queries as a whole [19]. \(\beta_{j}\) can be used as weights for the fusion of retrieval systems \(ir_{j}\) (\(1\leq j\leq n\)). Ideally, \(Y=\{y_{k}^{i};i=(1,2,...,m),k=(1,2,...,l)\}\) should include all relevant documents, then it is possible to make the most accurate estimation. If only partial relevant documents are identified, some measures should be taken to treat all component retrieval systems fairly. Thus possible bias towards one or a subgroup of retrieval systems can be avoided. The pooling system used in TREC is a good practice and we also apply it in this study. See next section for more details. ## 3 Relevance judgment For a given task in TREC, its organizer provides a relevance judgment file "qrels". Usually this file is generated by applying a pooling policy. That is, for all the runs submitted to that task or a carefully-selected subset of them, a certain number of top-ranked documents are put into a pool. All the documents in the pool are evaluated manually. All those documents that are not in the pool are assumed as non-relevant. Such a pooling policy is cost effective and fair to all participants [10]. The pooling policy can be divided into two types: fixed-length and variable-length [14]. In the fixed-length pooling with a given number \(k\), the top-\(k\) ranked documents from all the runs are put into the pool for every query. In the variable-length pooling, the number of documents taken into the pool may vary across queries [2, 3, 5]. Comparing these two, the latter requires a little more effort but can be more effective. Up to now the fixed-length pooling is the most common methods used in most TREC events. Therefore, we go with this policy and find it is good for our purpose, as discussed in the next section. 
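Before moving on, the weight-training step of Section 2 can be made concrete. The following minimal Python sketch fits the coefficients \(\hat{\beta_{0}},\hat{\beta_{1}},...,\hat{\beta_{n}}\) by ordinary least squares over the judged (query, document) pairs and then uses them for linear-combination fusion; the function names and array layout are illustrative rather than taken from the paper.

```python
import numpy as np

def train_fusion_weights(scores, labels):
    """Fit beta_0..beta_n in the least-squares sense (Section 2).

    scores: (num_judged_pairs, n_systems) array; scores[k, j] is the score that
            component system j assigns to the k-th judged (query, document) pair.
    labels: (num_judged_pairs,) array; 1 for judged-relevant pairs, 0 otherwise.
    """
    X = np.hstack([np.ones((scores.shape[0], 1)), scores])  # column of ones for the intercept beta_0
    beta, *_ = np.linalg.lstsq(X, labels, rcond=None)        # minimises the quantity G from Section 2
    return beta[0], beta[1:]                                  # intercept, per-system weights

def fuse(scores, weights, intercept=0.0):
    """Linear combination: fused score = beta_0 + sum_j beta_j * s_j."""
    return intercept + scores @ weights
```

Because these coefficients minimise the sum of squared estimation errors, they give the optimum fusion weights in the least-squares sense, as argued in Section 2.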
Studying the effect of other pooling policies remains interesting; we leave it for future work. We used four data sets for this investigation. Two of them were runs submitted to the TREC 2018 and 2019 precision medicine track [15, 16] and the other two were runs submitted to the TREC 2020 and 2021 deep learning track [6, 7]. Table 1 shows the related information about them. We observe that the TREC 2021 data set includes the most relevant documents, while the TREC 2020 data set includes the least relevant documents. The most documents were evaluated for the 2018 data set, while the fewest documents were evaluated for the 2020 data set.

\begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline \multicolumn{3}{|c|}{Data set} & No. of & No. of & No. of \\ \hline Year & Track & Task & Queries & Evaluated docs & Relevant docs \\ \hline 2018 & Precision medicine & Literature articles & 50 & 22429 & 5588 \\ 2019 & Precision medicine & Literature articles & 40 & 18316 & 5544 \\ 2020 & Deep learning & Passage retrieval & 54 & 11386 & 1666 \\ 2021 & Deep learning & Document retrieval & 57 & 13058 & 8203 \\ \hline \end{tabular} \end{table} Table 1: Information of the four TREC data sets (the literature articles task of the precision medicine track in TREC 2018 and 2019, and the deep learning track in TREC 2020 and 2021) and their “qrels” files.

In order to investigate the impact of pool length on the number of relevant documents included, we generated a number of relevance judgment files from the official qrels by fixed-length pooling. More specifically, the procedure is as follows: for all the runs in a data set, we look at the top 2, 3,..., and up to 20 documents of the result lists for each of the queries to see if they are labeled as relevant or not in the official qrels. This is very similar to the pooling policy used in TREC apart from one point: we do not make relevance judgments manually, but instead assume that all relevant documents have been identified in TREC's official qrels. For each data set, we obtain 19 partial qrels and each of them includes a subset of the relevant documents in the official qrels. Fig. 1 shows the number of relevant documents included in these partial qrels, while Fig. 2 shows the percentage of relevant documents in these partial qrels compared with the official full qrels. Not surprisingly, both the number and the percentage increase with the pool length all the way through. For the same data set, the shapes of the curves in both Fig. 1 and Fig. 2 are similar. One noticeable thing is the 2021 data set. It includes the largest number of relevant documents (8203 in total, or 143.91 per query), but its percentage of all relevant documents is the lowest.

## 4 Experimental settings and results

Four data sets, as described in Section 3, were used for this investigation. In each data set, we selected a subset of runs for the fusion experiment. The selection criterion is: we chose the best run by MAP from each participant, thus 19, 14, 15, and 16 runs were selected for the four data sets, respectively. Such a selection policy gives the selected retrieval systems more diversity, because the multiple runs submitted by the same participant come from the same retrieval system with small differences in optional components, parameter settings, and so on, and they are more similar to each other than runs submitted by different participants. All the selected runs are listed in the Appendix, in descending order of their MAP values.
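The construction of the 19 partial qrels described above — pooling the top \(k\) documents (\(k=2,...,20\)) of every run for every query and keeping only those already judged relevant in the official qrels — can be sketched as follows; the data structures and names are illustrative.

```python
def build_partial_qrels(runs, official_qrels, pool_depth):
    """Fixed-length pooling against the official qrels, with no manual judging.

    runs:           {run_id: {query_id: ranked list of doc_ids (best first)}}
    official_qrels: {query_id: set of doc_ids judged relevant in the official qrels}
    pool_depth:     k, the number of top documents taken from every run
    Returns {query_id: set of relevant doc_ids found in the pool}.
    """
    partial = {}
    for run in runs.values():
        for query_id, ranking in run.items():
            relevant = official_qrels.get(query_id, set())
            found = partial.setdefault(query_id, set())
            # Documents outside the official qrels are assumed non-relevant.
            found.update(doc for doc in ranking[:pool_depth] if doc in relevant)
    return partial

# One partial qrels per pool length, as in Section 3 (pool lengths 2..20 assumed here):
# partial_qrels = {k: build_partial_qrels(runs, official_qrels, k) for k in range(2, 21)}
```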
Apart from the official qrels, we used two partial qrels. From all these 19 partial qrels, we choose two with roughly 20% and 50% of the relevant documents in the official qrels. The pool lengths are 2 (18.74%) and 6 (44.47%) for the data set of 2018, 4 (24.10%) and 10 (48.16%) for the data set of 2019, 2 (23.77%) and 10 (49.58%) for the data set of 2020, and 3 (19.02%) and 17 (50.52%) for the data set of 2021. They are referred to as the 20% and 50% partial qrels, respectively.

For all the runs in each year group, we ranked them by MAP values. Then we fused the top 2, 3,..., up to all of them by linear combination, in which multiple linear regression was used for weight assignment. See Section 2 for details. The two-fold cross-validation methodology was applied: we divided all the queries in a data set into two partitions, odd-numbered and even-numbered. One partition was used for weight training and the other for testing, and vice versa. The reciprocal model, \(score(d)=1/(60+rank(d))\), was used to convert each document's ranking into a score because it is very good and reliable [4], while raw scores from the initial retrieval systems were not used. Relevance judgment is required for weight training. Apart from the official qrels file, we used two partial \(qrels\) files, which consist of roughly 50% and 20% of the relevant documents in the official qrels file, respectively. Four metrics were used for evaluation: MAP, RP, P@10, and P@20. MAP and RP are system-oriented metrics, while P@10 and P@20 are user-oriented metrics.

Fusion performance of linear combination, using the three different qrels files for weights training, is shown in Figs. 3-6. We can see that fusion performance (measured by MAP) is very close for all of them, although in most cases the official qrels does a little better than the two partial qrels. For the other three metrics, which are not shown in Figs. 3-6, the situation is very similar. A comprehensive comparison is summarized in Table 2. In most cases, using a relevance judgment file with 20% or 50% of the relevant documents identified, fusion performance is very close to that of using the official relevance judgment file "qrels". The difference is below 3% in all the cases. In some situations, fusion performance using partial qrels is even better than that of using the official qrels. For the 2018 data set and the 50% partial pool, the fusion results are even a little better than those using the full official qrels in both P@10 and P@20. The same happens to the 2021 data set with the 50% partial pool when measured in P@10. A few other data fusion methods, including CombSum, CombMNZ, PosFuse, SlideFuse, and MAPFuse, are also tested. The results for the two data sets 2018 and 2019 are shown in Figs. 7 and 8 for comparison. The results for the other two data sets 2020 and 2021 are similar and not shown. In both data sets, linear combination performs better than all four other fusion methods and the best component system on average. In Figs. 7 and 8, all data fusion methods do better than the best retrieval system involved, with one exception.

Figure 3: Effect of three qrels (2018)
Figure 4: Effect of three qrels (2019)
Figure 5: Effect of three qrels (2020)
Figure 6: Effect of three qrels (2021)

## 5 More observation and analysis

When the pool length is very short, it may happen that some queries have only very few relevant documents. It is interesting to find the effect of such queries on fusion results. The situation is more noticeable for the 20% partial qrels, so we take a closer look at it; a sketch of the per-query breakdown used in this analysis is given below.
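A small helper of the kind used for this breakdown — counting the relevant documents of each query in a (partial) qrels and averaging a per-query metric within each group — might look as follows; the threshold of 10 matches the grouping used below, and all names are illustrative.

```python
def split_queries_by_relevant_count(qrels, threshold=10):
    """Group A: queries with at most `threshold` relevant documents; Group B: the rest."""
    group_a, group_b = [], []
    for query_id, relevant_docs in qrels.items():
        (group_a if len(relevant_docs) <= threshold else group_b).append(query_id)
    return group_a, group_b

def mean_metric(per_query_scores, query_ids):
    """Average a per-query metric (e.g. average precision or P@10) over the given queries."""
    values = [per_query_scores[q] for q in query_ids if q in per_query_scores]
    return sum(values) / len(values) if values else 0.0

# Illustrative usage, assuming ap_per_query holds the per-query average precision of a fused run:
# group_a, group_b = split_queries_by_relevant_count(partial_qrels_20pct)
# print(mean_metric(ap_per_query, group_a), mean_metric(ap_per_query, group_b))
```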
For the four data sets 2018-2021, 8, 3, 42, and 4 queries include no more than 10 relevant documents, respectively. Therefore, we report the results of two data sets 2018 and 2020. See Table 3 for the average performances of all 50 queries, Group A (it includes queries with no more than 10 relevant documents), and Group B (it includes queries with more than 10 relevant documents). We can see that Group B queries achieve better performance than Group A queries in all four metrics. Especially for the 2018 data set, the performance of Group B "normal" queries is roughly doubled compared with that of Group A queries with fewer relevant documents. To see it in a more comprehensive way, we divide all the queries in a data set into three sub-groups of equal size based on the number of relevant documents identified. We report the results of two data sets 2018 and 2019. The three subgroups include 17, 16, and 17 queries for the 2018 data set, and 13, 14, and 13 queries for the 2019 data set, respectively. "Low", "Middle", and "High" are used to name them. For all those component systems and fused results by linear combination, their average performance for each of the sub-groups is presented in Table 4. For both metrics P@10 & P@20 and both average component results & fusion results, Group "High" always obtains the highest value, group "Middle" is in the middle, while group "Low" obtains the lowest value. For MAP and RP, group "Low" always obtains the lowest values, while group "Middle" and "High" are competitive to be the winner. It demonstrates there is a positive correlation between the number of relevant documents identified for given queries and retrieval performance for both component results and fusion results. Finally, we look at a related issue: when different qrels are used, how does that affect various metric values? For the same fusion results by linear combination and trained by using the official qrels, we calculate their MAP, RP, P@10, and P@20 values by using the official and two partial qrels, respectively. Table 5 shows the values of two data sets 2018 and 2019. When using the two partial qrels, the variances of fusion performance from that of using the official qrels are also given. We can see that MAP and RP are less affected than P@10 and P@20. Because MAP and RP values are not directly linked to the number of relevant documents in the whole collection, they are more robust and insensitive to the changes in qrels. Let us consider an example. If there are two relevant documents in the collection for a given query. One result list includes both, and one is at rank 1 and the other at rank 20, then its MAP is (1+2/20)/2=0.55. If the one at rank 20 is regarded as non-relevant in a partial qrels, then its MAP becomes 1! This \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Group & No. R. 
& MAP & RP & P@10 & P@20 \\ \hline Low (2018, F) & 36.82 & 0.3766 & 0.3893 & 0.5471 & 0.4765 \\ Middle (2018, F) & 94.94 & 0.5029 & 0.5150 & 0.7438 & 0.7157 \\ High (2018, F) & 202.52 & 0.4311 & 0.4620 & 0.8750 & 0.8219 \\ \hline Low (2018, C) & 36.82 & 0.2022 & 0.2562 & 0.3706 & 0.3113 \\ Middle (2018, C) & 94.94 & 0.2906 & 0.3447 & 0.5799 & 0.5266 \\ High (2018, C) & 202.52 & 0.2445 & 0.3024 & 0.7418 & 0.6381 \\ \hline Low (2019, F) & 30.15 & 0.3233 & 0.3180 & 0.4231 & 0.3808 \\ Middle (2019, F) & 100.93 & 0.4556 & 0.4696 & 0.6571 & 0.6571 \\ High (2019, F) & 287.62 & 0.4887 & 0.4961 & 0.9154 & 0.8692 \\ \hline Low (2019, C) & 30.15 & 0.1848 & 0.2137 & 0.3198 & 0.2657 \\ Middle (2019, C) & 100.93 & 0.2494 & 0.3000 & 0.5133 & 0.4834 \\ High (2019, C) & 287.62 & 0.2376 & 0.3055 & 0.7390 & 0.6920 \\ \hline \end{tabular} \end{table} Table 4: The effect of number of relevant documents per query on component results and fusion performance (“No. R.” stands for “number of relevant documents”; F: fusion results; C: average performance of all 19 (2018) or 14 (2019) component results) can explain why in both data sets, both MAP and RP values with the 50% partial qrels are even slightly higher than their counterparts with the official qrels. Such a situation will never happen to P@10 and P@20. If some relevant documents are removed from a qrels file, we will obtain equal or lower P@10 and P@20 values for the same result list. In the official qrels of TREC 2018, only three queries have less than 10 relevant documents and all the others have more than 20 relevant documents; while in its 20% partial qrels, eight queries have no more than 10 relevant documents, and 12 more queries have no more than 20 relevant documents, such a big difference can explain why P@20 decreases by (0.6540-0.4458)/0.6540=31.83%. ## 6 Conclusions For linear combination in data fusion, usually supervised methods are used for weights training. In this paper we have presented a method with partial relevance judgment. Through four data sets from TREC, we have demonstrated that using a small percentage of the relevant documents, the trained weights by multiple linear regression are almost as good as using all the relevant documents in TREC's official qrels. This finding is very helpful for the data fusion technology to be used in a more affordable way, and enable researchers and practitioners to get the full benefit of it but with much less effort. As our future work, we would consider extensions in two directions. One is to consider other alternative methods to the simple fixed-length pooling. Although the fixed-length pooling works well in this study, it is interesting to find how other methods such as variable-length pooling [14, 2, 3, 5] perform for this purpose. Also related to this work, another research issue is to investigate the effect of partial qrels on some optimization-based weights training methods for linear combination such as the genetic algorithms [9, 18] and differential evolution [20]. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Data set & qrels & MAP & RP & P@10 & P@20 \\ \hline \multirow{4}{*}{TREC} & full & 0.4230 & 0.4428 & 0.7064 & 0.6540 \\ & 50\% & 0.4366 & 0.4450 & 0.6854 & 0.6088 \\ & & (+3.22\%) & (+0.50\%) & (-2.97\%) & (-6.91\%) \\ 2018 & 20\% & 0.4151 & 0.4143 & 0.5751 & 0.4458 \\ & & (-1.87\%) & (-6.44\%) & (-18.59\%) & (-31.83\%) \\ \hline \multirow{4}{*}{TREC} & full & 0.4225 & 0.4307 & 0.6695 & 0.6345 \\ & 50\% & 0.4345 & 0.4443 & 0.6606 & 0.5991 \\ \cline{1-1} & 20\% & (+2.8\%) & (+4.2\%) & (-1.3\%) & (-5.5\%) \\ \cline{1-1} & 20\% & 0.3922 & 0.3997 & 0.5767 & 0.4691 \\ \cline{1-1} & & (-7.2\%) & (-7.2\%) & (-13.9\%) & (-26.1\%) \\ \hline \end{tabular} \end{table} Table 5: The effect of partial qrels on fusion performance evaluation (the figures in parentheses indicate performance variances between partial qrels and official full qrels).
2309.05447
DoG-Instruct: Towards Premium Instruction-Tuning Data via Text-Grounded Instruction Wrapping
The improvement of LLMs' instruction-following capabilities relies heavily on the availability of high-quality instruction-response pairs. Unfortunately, the current methods used to collect the pairs suffer from either unaffordable labor costs or severe hallucinations in the self-generation of LLM. To tackle these challenges, this paper proposes a scalable solution. It involves training LLMs to generate instruction-response pairs based on human-written documents, rather than relying solely on self-generation without context. Our proposed method not only exploits the advantages of human-written documents in reducing hallucinations but also utilizes an LLM to wrap the expression of documents, which enables us to bridge the gap between various document styles and the standard AI response. Experiments demonstrate that our method outperforms existing typical methods on multiple benchmarks. In particular, compared to the best-performing baseline, the LLM trained using our generated dataset exhibits a 10\% relative improvement in performance on AlpacaEval, despite utilizing only 1/5 of its training data. Furthermore, a comprehensive manual evaluation validates the quality of the data we generated. Our trained wrapper is publicly available at https://github.com/Bahuia/Dog-Instruct.
Yongrui Chen, Haiyun Jiang, Xinting Huang, Shuming Shi, Guilin Qi
2023-09-11T13:41:18Z
http://arxiv.org/abs/2309.05447v2
# TeGit: Generating High-Quality Instruction-Tuning Data with Text-Grounded Task Design ###### Abstract High-quality instruction-tuning data is critical to improving LLM capabilities. Existing data collection methods are limited by unrealistic manual labeling costs or by the hallucination of relying solely on LLM generation. To address the problems, this paper presents a scalable method to automatically collect high-quality instructional adaptation data by training language models to automatically design tasks based on human-written texts. Intuitively, human-written text helps to help the model attenuate illusions during the generation of tasks. Unlike instruction back-translation-based methods that directly take the given text as a response, we require the model to generate the _instruction_, _input_, and _output_ simultaneously to filter the noise. The results of the automated and manual evaluation experiments demonstrate the quality of our dataset. 0 Footnote 0: Work in progress, * represents co-corresponding authors. ## 1 Introduction Recent efforts in the NLP community have focused on _instruction-tuning_(Sanh et al., 2022; Mishra et al., 2022; Wei et al., 2022), i.e., improving large language models' (LLMs) (Brown et al., 2020; Chowdhery et al., 2022; Touvron et al., 2023) capacity to understand and effectively follow instructions. Advanced LLMs have been trained to be capable of generating customized outputs when provided with specific instructions, enabling them to adapt to new tasks without prior exposure. As a crucial problem in improving the instruction-following capability, how to collect high-quality instruction-tuning data is gaining popularity. Previous methods can be broadly divided into three primary categories: a) Super-NI (Wang et al., 2022) and Dolliv (Conover et al., 2023) hire professionals to write instructions for diverse NLP tasks, whereas are limited in scale due to the labor-intensive nature of the process. b) Self-Instruct (Wang et al., 2023) and Alpaca(Taori et al., 2023) advocate the use of LLMs to automatically generate instruction-tuning data to eliminate labor costs. However, they are plagued by the illusion of LLMs, which leads to poor data quality. c) Dynosaur (Yin et al., 2023) employs LLMs to convert the existing NLP datasets (from Huggingface) to instruction-tuning data at a lower cost. Unfortunately, it is not applicable to the scenarios where no dataset is available. Recent research (Koksal et al., 2023; Li et al., 2023) has provided a more potential idea: using human-written text as gold responses for user instructions and utilizing LLMs to predict the instructions corresponding to the text. With this text-grounded approach, massive tasks can be easily constructed and somewhat alleviate the hallucinations. However, as far as human intuition is concerned, we believe that these methods still face two realistic challenges that may limit the performance of LLMs: a) Not all texts are suitable as gold responses to the instruction. Figure 1 gives a bad case from the LongForm(Koksal et al., 2023) dataset, where the human-written text contains noise (red), but is incorrectly treated as a full response to the task (yellow box). b) Lack of input field is detrimental to LLM's generalization capability in diversity tasks. As shown in Figure 1, the existing datasets have only an instruction field and an output field but no input field. 
We hypothesize that adding inputs can decouple the overly complex instructions, thus allowing LLMs to more accurately perceive similar tasks. In this paper, we propose a novel paradigm for collecting instruction-tuning data by training LMs as task designers to automatically generate the instruction, input, and output based on the given human-written text. Since our proposed designer is fully public, it is expected to be directly applicable to creating instruction-tuning tasks for domains with copyrighted data. Briefly, our method consists of two phases. In the first stage, we utilize ChatGPT (OpenAI, 2023) as an advanced teacher to build a meta-training set. To make it applicable to human texts and ensure diversity, we propose dual-view in-context learning to build prompts, whose demonstrations are obtained from the real human-written corpus and the existing instruction-tuning dataset. In the second stage, we train two LMs as a task generator and a task discriminator, respectively. The former aims to design tasks based on the given text, while the latter evaluates the designed tasks in order to retain high-quality examples. Experimentally, we utilized manual reviews against multiple existing datasets. The results show that TeGit effectively mitigates the hallucination in the input and output and reduces the noise from back translation. ## 2 Problem Formulation Given a corpus of human-written documents \(\mathcal{C}=\{\mathcal{D}_{1},\mathcal{D}_{2},...,\mathcal{D}_{n}\}\), the text-grounded instruction-tuning data collection aims to build a task designer \(\mathcal{M}\). For each document \(\mathcal{D}_{i}\in\mathcal{C}\), \(\mathcal{M}\) returns a valid task \(\mathcal{T}_{i}:=\mathcal{M}(\mathcal{D}_{i})\) when \(\mathcal{D}_{i}\) is self-contained, otherwise null. The valid task \(\mathcal{T}_{i}=(\mathcal{P}_{i},\mathcal{I}_{i},\mathcal{O}_{i})\) is a instruction-tuning instance, where \(\mathcal{P}_{i}\), \(\mathcal{I}_{i}\) and \(\mathcal{O}_{i}\) denote the instruction field, input field and output field, respectively. Ideally, the document \(\mathcal{D}_{i}\) can belong to any domain or style. This paper attempts six types of structured and unstructured text, including Wikipedia, academic papers, code, etc (see Section 3.1 for details). Eventually, has finished processing all documents in \(\mathcal{C}\), a instruction-tuned dataset \(\mathcal{T}=\{\mathcal{T}_{1},\mathcal{T}_{2},...,\mathcal{T}_{m}\}\) is available, where \(m\leq n\). ## 3 Collection of TeGit Data Figure 1 shows the construction process of TeGit, which can be divided into three main phases, i.e., meta-training set construction, task designer training and inference. ### Corpus & Document Sampling We start with the Pile dataset that will be used as a multi-domain, multi-style human-written text corpus. Specifically, we choose six corpora in Pile, namely ArXiv, FreeLaw, StackExchange, Wikipedia, Github, and DM Mathematics. Figure 1: Differences between our proposed dataset TeGit with previous LongForm and HumpBack. Red text indicates noise that is not related to the generated task. Blue text indicates output that has been touched up by LM to be more consistent with the form of the task. Underlined is the input field separated from the instruction. To ensure that each document is as self-contained as possible, specific sampling methods were used for the different corpora. 
For longer documents such as ArXiv and FreeLaw, successive segments are randomly intercepted within the character range of 2000 to 3500; For shorter documents such as DM Mathematics, a pair of question answers is randomly selected; For Wikipedia, Github, and StackExchange, the entire document is used directly as input text. ### Meta-Training Set Construction with Dual-View ICL To train a ideal task designer, we first need to collect high-quality training examples of (document, task) pairs, which we call the meta-training set. Inspired by recent works, we leave this job to ChatGPT to reduce labor costs. Concretely, for each sampled document \(\mathcal{D}_{i}\), we ask ChatGPT to create a task \(\mathcal{T}_{i}\) relevant to \(\mathcal{D}_{i}\) by simultaneously predicting the instruction, input and output. To accomplish this goal, we harness the power of _in-context learning_ (ICL). In particular, the prompt fed to ChatGPT is denoted by \((\mathcal{P}^{*},x_{1},...,x_{k})\). The full prompt is shown in Appendix A. Here \(\mathcal{P}^{*}\) is the instruction that describes our goal and \(x_{i}=(\mathcal{D}_{i},\mathcal{T}_{i})\) represents \(i\)-th demonstration sampled from a seed set \(\mathcal{X}\). In contrast to previous methods the Self-Instruct that started with only manually annotated tasks as seed sets, we let \(\mathcal{X}=\mathcal{X}_{d}\cup\mathcal{X}_{t}\) contain examples from two different views. **Document-View Seed Examples.** The examples in this view are designed to allow ChatGPT to quickly adapt to the style of the target text when creating tasks. For each corpus, 20 pairs (document, task) are first manually constructed as the initial seeds \(\mathcal{X}^{0}_{d}\). Subsequently, 200 documents are sampled and each is wrapped with \(\mathcal{P}^{*}\) and 5 randomly selected demonstrations from \(\mathcal{X}^{0}_{d}\), resulting in 200 prompts. Finally, ChatGPT takes these prompts as inputs and return 200 tasks. These (document, task) pairs, denoted by \(\mathcal{X}_{d}\), characterize the target text style and the tasks that are appropriate to design based on that text. **Task-View Seed Examples.** Recent studies have reached a consensus that diverse instructions are necessary for LLMs. To further increase the diversity of the tasks, we propose to draw on the existing large-scale instruction-tuning dataset. Initially, we sample 50 tasks from the Alpaca-GPT4 dataset. Notice that there are only tasks and no corresponding documents, since these tasks are automatically generated by the ChatGPT. Therefore, we perform an inverse process to transform these 50 tasks into possible human-written documents. In our experiments, this work is also done by ChatGPT using ICL. The procedure is similar to generating document-view seeds, except that we manually construct another 20 initial seeds \(\mathcal{X}^{0}_{t}\) based on Alpaca. The resulting 500 pairs (document, task) are denoted by \(\mathcal{X}_{t}\) and provide ChatGPT with the potential for more diverse tasks. **Post-Processing.** In fact, despite the use of dual-view ICL, the tasks generated by ChatGPT are not always valid due to the _hallucination_ problem. Thus, we propose a simple but efficient post-processing method to filter invalid examples. A natural idea is to use the similarity between instructions, inputs, and outputs and a given document to determine whether a task is valid. 
Motivated by this, we devise a score \(\sigma(\mathcal{T}_{i})=\min(\tilde{\sigma}(\mathcal{D}_{i},\mathcal{I}_{i}), \tilde{\sigma}(\mathcal{D}_{i},\mathcal{O}_{i}))\), where \(\tilde{\sigma}(\mathcal{D}_{i},s)=|t(\mathcal{D}_{i})\&t(s)|/|t(s)|\) and \(t(s)\) denotes the set of tokens of \(s\). All examples \((\mathcal{D}_{i},\mathcal{T}_{i})\) will be removed where \(\sigma(\mathcal{T}_{i})<\theta\). Figure 2: Overview of TeGit construction process. First, sampled texts are annotated using ChatGPT to create a text-relevant task. After undergoing post-processing, a meta-training set comprising pairs of (document, task) is obtained. Then, we utilize the meta-training set to train two Alama2 models: one as a task generator and the other as a task discriminator. In addition, in a valid task, the inputs and outputs must follow the instruction \(\mathcal{P}_{i}\). To verify this, we use ChatGPT to determine if a task makes logical sense by actually completing it. First, we concatenate \(\mathcal{P}_{i}\) and \(\mathcal{I}_{i}\) as a prompt \(\mathcal{Z}\) and delete this example if ChatGPT is unable to answer. Subsequently, we add \(\mathcal{D}_{i}\) to \(\mathcal{Z}\) as a new prompt to ask ChatGPT. and compute the similarity of ChatGPT's reply to the text previously generated output. The examples where \(\sigma\) is smaller than the threshold \(\theta\) are removed. ### Text-Grounded Task Designer In this stage, we utilize the collected raw training data to train publicly released LMs to satisfy the requirement of automatically generating command tuning data in certain privacy or copyright domains, which do not allow disclosure of data to ChatGPT. Inspired by the field of computer vision, we train two LMs as a task generator \(\mathcal{M}_{g}\) and a task discriminator \(\mathcal{M}_{d}\), respectively. The former aims to design tasks based on the given text, while the latter evaluates the designed tasks in order to retain high-quality tasks. To accomplish these goals, we apply Lama 2 7B, one of the most advanced LMs publicly available as the initialization of both generator and discriminator. #### 3.3.1 TeGit Generator We perform _supervised fine-tuning_ (SFT) on \(\mathcal{M}_{g}\) using the (text, task) pairs \(\{(\mathcal{D}^{+},\mathcal{T}^{+})\}\) from the raw training data. To adapt Lama 2 to our defined SFT process, we add a meta-instruction \(\mathcal{P}_{g}\) to each \((\mathcal{D}^{+},\mathcal{T}^{+})\), to describe the mapping from \(\mathcal{D}^{+}\) to \(\mathcal{T}^{+}\). In our experiments, the \(\mathcal{P}_{g}\) is identical for all the training examples, \(\mathcal{P}_{g}\) = "Convert the given text into a task. Input is a text and Response contains three fields: #instruction#, #input# and #output#.". In this way, the training loss is calculated by a log-likelihood, \[\mathcal{L}(\mathcal{P}_{g},\mathcal{D}^{+},\mathcal{T}^{+})=-\log P(\mathcal{ T}^{+}|\mathcal{P}_{g},\mathcal{D}^{+})=-\sum_{j=1}^{|\mathcal{T}^{+}|}\log P(t_{j}^{+} |\mathcal{P}_{g},\mathcal{D}^{+},t_{<j}^{+}), \tag{1}\] where \(t_{j}^{+}\) is the \(j\)-th token of \(\mathcal{T}^{+}\) and \(P(t_{j}^{+}|\mathcal{P}_{g},\mathcal{D}^{+},t_{<j}^{+})\) is the predicted probability at each step of the autoregressive decoding. #### 3.3.2 TeGit Discriminator The sole goal of \(\mathcal{M}_{d}\) is to perform a bi-classification that determine whether each task generated by \(\mathcal{M}_{g}\) is valid. 
Each training example can be denoted as \(([\mathcal{D};\mathcal{T}],\mathcal{Y})\), where \([;]\) represents the concatenation of text. In this setting, examples from the original training data can be naturally considered as positive examples, where \(\mathcal{Y}\) = "valid". To simulate the errors that would occur during ChatGPT prediction in our training phase, we use the examples removed in post-processing (Section 3.2) as challenging negative examples, where \(\mathcal{Y}\) = "invalid". Similar to the task generation, we likewise add a meta-instruction \(\mathcal{P}_{d}\) to help the model understand the target, \(\mathcal{P}_{d}\) = "Given a piece of text and a task generated from that text, determine if the task is valid or invalid.". For each example \(([\mathcal{D};\mathcal{T}],\mathcal{Y})\), a log-likelihood has also been utilized, denoted by \(\mathcal{L}(\mathcal{P}_{g},[\mathcal{D};\mathcal{T}],\mathcal{Y})\). ## 4 Experiments ### TeGit Data Statistics **Data Statistics.** Table 1 shows the statistics of the seed data, the meta-training set, and our _TeGit_ dataset. The instructions, inputs and outputs of \(\mathcal{X}_{d}\) are significantly longer compared to the seed task \(\mathcal{X}_{t}\) extracted from Alpaca-GPT4. TeGit tends to have longer inputs and outputs compared to the seed data and meta-training set. Notice that both the meta-training set and TeGit have large standard deviations regarding the input and output fields. This observation highlights that the tasks obtained from text-grounded designs may exhibit substantial variations in length. The top of Table 2 presents the statistical data for various corpora in TeGit. FreeLaw exhibits the longest input length due to its generated tasks frequently involving the reading of extensive legal texts. Conversely, Wikipedia generally has the shortest input length, possibly because it lacks input requirements for the majority of its tasks. **Diversity of Instructions.** We perform a task diversity analysis on both the meta-training set and TeGit using the method described by Wang et al. (2023). Figure 3 illustrates the distribution of the verb-noun structure of instructions in the the meta-training set (left) and TeGit (right), respectively. The TeGit data generated using Llama 2-7b exhibits a comparable level of diversity to the meta-training set created by ChatGPT. In fact, it offers a more reasonable distribution of data. **Relevance to Texts.** Additionally, we have computed the relevance of both the inputs and outputs of the generated tasks to the original text. The relevance scores are displayed at the bottom of Table 2, with \(\tilde{\sigma}\) representing the measure of literal relevance utilized in our post-processing, and BS denoting the employment of BertScore(Zhang et al., 2019) for evaluating the semantic relevance. In terms of literal relevance, it is noteworthy that the inputs and outputs of TeGit's tasks predominantly originate from the provided text. The slight decrease in relevance scores for the output can be attributed to the model's tendency to condense and modify the original text as needed. The relatively lower BS scores, particularly for the input \(\mathcal{I}\) related to Wikipedia and Arxiv, can be attributed to the fact that the input size for these tasks is significantly smaller compared to the given output \(\mathcal{O}\). 
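The literal relevance \(\tilde{\sigma}\) reported in Table 2 is the same token-overlap ratio used for the post-processing filter in Section 3.2; a minimal sketch follows, in which the tokeniser, the threshold \(\theta\), and the handling of empty fields are assumptions, since the paper does not fix them.

```python
def token_set(text):
    """Rough whitespace tokenisation; the paper does not specify a tokeniser."""
    return set(text.lower().split())

def literal_relevance(document, field):
    """sigma_tilde(D, s) = |t(D) & t(s)| / |t(s)|: share of the field's tokens that occur in the document."""
    field_tokens = token_set(field)
    if not field_tokens:
        return 1.0  # an empty field is treated as trivially grounded (assumption)
    return len(token_set(document) & field_tokens) / len(field_tokens)

def keep_task(document, task, theta=0.5):
    """Keep a generated task only if both its input and its output are grounded in the document.

    task:  dict with "instruction", "input" and "output" fields
    theta: filtering threshold; the value 0.5 is an assumption, not taken from the paper
    """
    score = min(literal_relevance(document, task["input"]),
                literal_relevance(document, task["output"]))
    return score >= theta
```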
### Human Evaluation **Compared Datasets** We compare TeGit with several existing instruction-tuning datasets. Self-Instruct(Wang et al., 2023), Alpaca(Taori et al., 2023), Alpaca+GPT-4(Taori et al., 2023), and Unnatural Instructions(Honovich et al., 2023) are automatically generated by LLMs including ChatGPT and text-davinci-002. LaMini(Wu et al., 2023) is created by leveraging existing instructions, such as Self-Instruct and Alpaca, as its foundation. Dynosaur(Yin et al., 2023) repackages Huggingface's existing NLP datasets and regenerates instructions for them using ChatGPT. LongForm(Koksal et al., 2023) and Humpback(Li et al., 2023) are most similar to our work in that they generate tasks by performing instruction back-translation on human-written texts. The key distinction between our TeGit and theirs lies in the way TeGit wraps the text and carefully selects the essential components to compose a comprehensive task, incorporating the instruction, input, and output. This minimizes the noise present in the original text and provides a more streamlined and coherent task structure. **Evaluation Metrics.** To conduct a precise evaluation of the compared datasets, we formulate five metrics: a) _instruction clarity_ (CL\({}_{\mathcal{P}}\)) indicates the percentage of instructions that are correct and make sense. b) _input hallucination_ (HA\({}_{\mathcal{I}}\)) and _output hallucination_ (HA\({}_{\mathcal{O}}\)) measure how often the input and output contain factual or logical errors, respectively. c) _input fluency_ (FL\({}_{\mathcal{I}}\)) and _output fluency_ (FL\({}_{\mathcal{O}}\)) gauge the extent to which the input and output exhibit fluency and adhere to the dialog scenario, excluding any extraneous information. **Main Results.** We randomly select 50 examples from each of the compared datasets and manually evaluate them according to the five metrics described above. The results are shown in Table 3. Self-Instruct, Alpaca, Alpaca+GPT-4, LaMini, and Unnatural Instructions leverage the robust language generation capabilities of LLMs, resulting in their instructions demonstrating high CL\({}_{\mathcal{P}}\), FL\({}_{\mathcal{I}}\), and FL\({}_{\mathcal{O}}\) scores. However, since they lack the support of actual text, these models are susceptible to the problem of hallucination, thereby leading to higher HA\({}_{\mathcal{I}}\) and HA\({}_{\mathcal{O}}\) scores. Dynosaur likely attained the highest score due to being built upon extensively standardized datasets with a well-defined structure and specifications. By generating tasks from human-written text, both LongForm and Humpback effectively mitigate the issue of hallucination in the output. Regrettably, the presence of noise in real text diminishes its fluency (FL\({}_{\mathcal{I}}\) and FL\({}_{\mathcal{O}}\)) compared to fully self-generated datasets. \begin{table} \begin{tabular}{l c c c c} & **\# of Examples** & **Instruction Length** & **Input Length** & **Output Length** \\ \hline \(\mathcal{X}_{d}\) & \(189\) & \(146\pm 72\) & \(566\pm 977\) & \(548\pm 464\) \\ \(\mathcal{X}_{t}\) & \(50\) & \(70\pm 24\) & \(74\pm 107\) & \(409\pm 448\) \\ Meta-Training Set & \(15153\) & \(160\pm 97\) & \(485\pm 854\) & \(595\pm 474\) \\ TeGit & \(25529\) & \(121\pm 83\) & \(833\pm 1057\) & \(604\pm 631\) \\ \hline \end{tabular} \end{table} Table 1: Statistics of document-view seed \(\mathcal{X}_{d}\), task-view seed \(\mathcal{X}_{t}\), meta-training set and TeGit. Instruction, input and output lengths are given as the number of characters.
Furthermore, our findings indicate that employing back-translation may lead the model to engage in dialog rather than generating clear instructions. In contrast, our dataset, TeGit, encompasses the entire task design process based on the provided text. This approach not only mitigates hallucination in the task but also guarantees the coherence and smoothness of the instructions, as well as the input-output alignment. Additionally, it is crucial to highlight that TeGit achieves its results using a 7b-sized model, demonstrating its efficiency and high scalability. **Comparison for the same text.** To further compare our TeGit with other text-grounded generation methods, we have chosen four comparison methods, namely ChatGPT, Llama 2-7b-chat, LongForm and Humpback. To obtain the tasks designed by ChatGPT and Llama 2-7b-chat, we prompt them with approximately 100 documents randomly sampled from the corpus utilized by TeGit. For LongForm and Humpback, we randomly sample 100 of the documents they use and feed them to our task generator \(\mathcal{M}_{g}\). ## 5 Related Work **Instruction Tuning** Humans possess the ability to effortlessly comprehend and execute tasks based on verbal instructions. Likewise, advancements in deep learning have enabled Large Language Models (LLMs) (Brown et al., 2020; OpenAI, 2023; Chowdhery et al., 2022; Touvron et al., 2023) to acquire the capability to understand and follow instructions. Instruction tuning serves as a promising
method, involving the fine-tuning of LLMs using training data and instructions from a collection of upstream tasks(Sanh et al., 2022; Mishra et al., 2022; Wei et al., 2022; Chung et al., 2022; Longpre et al., 2023; Peng et al., 2023). Subsequently, these models can then be employed to perform inference on unfamiliar tasks using both instructions and instance inputs. In this paper, we not only train instruction tuning models, but also propose a novel method to formulating training tasks. **Instruction-Tuning Data Collection** The collection of high-quality instruction-tuning data is a pressing issue in enhancing the capability of instruction-following. Previous approaches can be broadly categorized into three main groups. Firstly, methods like Super-NI(Wang et al., 2022) and Dolly(Conover et al., 2023) rely on hiring professionals to create instructions for diverse NLP tasks. However, these methods suffer from limited scalability due to the labor-intensive nature of the process. Secondly, approaches such as Self-Instruct(Wang et al., 2023) and Alpaca(Taori et al., 2023) advocate for the use of LLMs to automatically generate instruction-tuning data, thus eliminating the need for manual labor. However, the data quality is compromised due to the inherent limitations and biases of LLMs. Lastly, Dynosaur(Yin et al., 2023) employs LLMs to convert existing NLP datasets from platforms like Huggingface into instruction-tuning data at a reduced cost. Unfortunately, this approach is not applicable in scenarios where no dataset is available. All of the mentioned approaches utilize model-generated responses as training data. However, a method closely related to ours is the simultaneous work by (Koksal et al., 2023; Li et al., 2023). Their approach involves using human-written text as a natural response and leveraging an LLM to generate the corresponding instruction based on the response. The primary differentiation between our TeGit method and theirs is the approach we take to encapsulate the text and meticulously select the crucial elements for constructing a comprehensive task. In TeGit, we integrate the instruction, input, and output, carefully curating these components to minimize the noise inherent in the original text. This process results in a more streamlined and coherent structure for the task at hand. ## 6 Conclusion This paper has presented a scalable method for automatically collecting high-quality instructional adaptation data to improve language model capabilities. Our approach involves training language \begin{table} \begin{tabular}{l c c c c} \hline \hline Dataset & ChatGPT & Llama-2-7b-chat & LongForm & Humpback\({}^{\dagger}\) \\ \hline TeGit _Win_ (\%) & \(32.5\) & - & \(24.2\) & \(67.0\) \\ _Te_ (\%) & \(35.5\) & - & \(63.1\) & \(28.2\) \\ TeGit _Lose_ (\%) & \(32\) & - & \(12.7\) & \(4.8\) \\ \hline \hline \end{tabular} \end{table} Table 4: Human evaluation comparing TeGit with various text-grounded instruction-tuning data collection methods, using identical input human-written text. 
\begin{table} \begin{tabular}{l c c c c c} \hline \hline Dataset & CL\({}_{\mathcal{P}}\) (\%) & HA\({}_{\mathcal{I}}\) (\%) \(\downarrow\) & HA\({}_{\mathcal{O}}\) (\%) \(\downarrow\) & FL\({}_{\mathcal{I}}\) (\%) & FL\({}_{\mathcal{O}}\) (\%) \\ \hline Self-Instruct(Wang et al., 2023) & \(92\) & \(28\) & \(32\) & \(96\) & \(92\) \\ Alpaca(Taori et al., 2023) & \(88\) & \(20\) & \(38\) & \(96\) & \(96\) \\ Alpaca+GPT-4(Taori et al., 2023) & \(94\) & \(14\) & \(22\) & \(94\) & \(98\) \\ Unnatural Inst.(Honovich et al., 2023) & \(92\) & \(16\) & \(24\) & \(96\) & \(92\) \\ Dynosaur(Yin et al., 2023) & \(98\) & - & - & - & - \\ LaMini(Wu et al., 2023) & \(94\) & - & \(20\) & - & \(98\) \\ WizardLM(Xu et al., 2023) & - & - & - & - & - \\ LongForm(Köksal et al., 2023) & \(76\) & - & \(10\) & - & \(84\) \\ Humpback\({}^{\dagger}\)(Li et al., 2023) & \(48\) & - & \(12\) & - & \(64\) \\ \hline Meta-Training set & \(92\) & \(12\) & \(10\) & \(96\) & \(94\) \\ TeGit & \(94\) & \(10\) & \(12\) & \(96\) & \(96\) \\ \hline \hline \end{tabular} \end{table} Table 3: Human evaluation of compared instruction-tuning datasets. For each dataset, we randomly sampled 50 examples. Dynosaur does not have the last four metrics because it directly uses the inputs and outputs from the previous datasets. LaMini, WizardLM, LongForm, and Humpback\({}^{\dagger}\) do not have an input field. Here \(\downarrow\) means the smaller the value, the better. models to design tasks based on human-written texts, attenuating hallucinations during task generation. Unlike traditional methods, we require the model to simultaneously generate the instruction, input, and output to filter out noise. We have conducted automated and manual evaluation experiments to assess the quality of our dataset. The results demonstrate the effectiveness of our approach in producing high-quality data. Our proposed method offers an efficient solution for collecting reliable instruction-tuning data. By leveraging language models and human-written texts, we enhance LLM capabilities and provide a benchmark for future research in instruction-tuning and data collection methodologies.
2309.13231
ParticleNet and its application on CEPC Jet Flavor Tagging
Identification of quark flavor is essential for collider experiments in high-energy physics, relying on the flavor tagging algorithm. In this study, using a full simulation of the Circular Electron Positron Collider (CEPC), we investigated the flavor tagging performance of two different algorithms: ParticleNet, originally developed at CMS, and LCFIPlus, the current flavor tagging algorithm employed at CEPC. Compared to LCFIPlus, ParticleNet significantly enhances flavor tagging performance, resulting in a significant improvement in benchmark measurement accuracy, i.e., a 36% improvement for $\nu\bar{\nu}H\to c\bar{c}$ measurement and a 75% improvement for $|V_{cb}|$ measurement via W boson decay when CEPC operates as a Higgs factory at the center-of-mass energy of 240 GeV and integrated luminosity of 5.6 $ab^{-1}$. We compared the performance of ParticleNet and LCFIPlus at different vertex detector configurations, observing that the inner radius is the most sensitive parameter, followed by material budget and spatial resolution.
Yongfeng Zhu, Hao Liang, Yuexin Wang, Huilin Qu, Chen Zhou, Manqi Ruan
2023-09-23T01:57:40Z
http://arxiv.org/abs/2309.13231v3
# ParticleNet and its application on CEPC Jet Flavor Tagging ###### Abstract Quarks, except the top quark, and gluons hadronize and fragment into a spray of stable particles, called a jet. Identification of quark flavor is essential for collider experiments in high-energy physics, relying on flavor tagging algorithms. In this study, using a full simulation of the Circular Electron Positron Collider (CEPC), we investigated the flavor tagging performance of two different algorithms: ParticleNet, based on a Graph Neural Network, and LCFIPlus, based on the Gradient Boosted Decision Tree. Compared to LCFIPlus, ParticleNet significantly enhances flavor tagging performance, resulting in a significant improvement in benchmark measurement accuracy, i.e., a 36% improvement for \(\sigma(ZH)\cdot Br(Z\to\nu\bar{\nu},H\to c\bar{c})\) measurement and a 75% improvement for \(|V_{cb}|\) measurement via W boson decay, respectively, when CEPC operates as a Higgs factory at the center-of-mass energy of 240 GeV and integrated luminosity of 5.6 \(ab^{-1}\). We compared the performance of ParticleNet and LCFIPlus at different vertex detector configurations, observing that the inner radius is the most sensitive parameter, followed by material budget and spatial resolution. Keywords: CEPC Jet Flavor Tagging ParticleNet + Footnote †: journal: Eur. Phys. J. C ## 1 Introduction A jet refers to a spray of stable particles formed through the hadronization of an energetic quark or gluon. The W/Z/Higgs boson and the top quark, the four most massive Standard Model (SM) particles, decay mainly into quarks, which hadronize into jets. Figure 1 illustrates a reconstructed \(e^{+}e^{-}\to Z\to c\bar{c}\) event with center-of-mass energy of 91.2 GeV. Efficient identification of the jet flavor could shed light on the properties of those massive particles and is critical for experimental exploration at the high-energy frontier. Flavor tagging is used to distinguish jets hadronized from different quark flavors or gluons. To promote the development of future electron-positron Higgs factories, which are regarded as the highest-priority next collider [1], accurate performance analysis and optimization of both detectors and algorithms are essential. Jet flavor tagging and relevant benchmark analyses serve as good objectives. The Circular Electron Positron Collider (CEPC) [2] is a large-scale collider facility proposed after the discovery of the Higgs boson in 2012. It is designed with a circumference of 100 km with two interaction points. It can operate at multiple center-of-mass energies, including 240 GeV as a Higgs factory, 160 GeV for the W\({}^{+}\)W\({}^{-}\) threshold scan, and 91 GeV as a Z factory. It also can be upgraded to 360 GeV for the \(t\bar{t}\) threshold scan. Table 1 summarizes its baseline operating scheme and the corresponding boson yields [3]. Figure 1: The display of a reconstructed \(e^{+}e^{-}\to Z\to c\bar{c}\) event with center-of-mass energy of 91.2 GeV. Different particles are depicted with different colors: red for \(e^{\pm}\), cyan for \(\mu^{\pm}\), blue for \(\pi^{\pm}\), orange for photons, and magenta for neutral hadrons. In the future, it can be upgraded to a proton-proton collider to directly explore new physics at a center-of-mass energy of about 100 TeV. The main scientific objective of the CEPC is the precise measurement of the Higgs properties, especially its coupling properties. Additionally, trillions of \(Z\to q\bar{q}\) events can provide an excellent opportunity for studying flavor physics.
Jet flavor tagging performance depends on detector design, particularly the design of the vertex detector, as well as the utilization of reconstruction algorithms. In this study, we apply ParticleNet [4] to the CEPC and assess its flavor tagging performance in the measurement of \(\sigma(ZH)\cdot Br(Z\rightarrow\nu\bar{\nu},H\to c\bar{c})\) and \(|V_{cb}|\) via W decay. Our results demonstrate that ParticleNet outperforms the baseline jet flavor tagging algorithm, LCFIPlus [5], by achieving a 36% and 75% improvement in the relative statistical accuracy of \(\sigma(ZH)\cdot Br(Z\rightarrow\nu\bar{\nu},H\to c\bar{c})\) and \(|V_{cb}|\) measurement via W boson decay at the center-of-mass energy of 240 GeV and integrated luminosity of 5.6 \(ab^{-1}\). We also observe that both ParticleNet and LCFIPlus perform better in the barrel region when compared to the endcap region. By analyzing the dependence of flavor tagging performance on vertex detector configurations, we observe that the most sensitive vertex detector parameter is the inner radius, followed by the material budget and spatial resolution. This result is consistent with previous studies conducted using LCFIPlus. This article is organized as follows. Section 2 introduces the CEPC detector, software, and the samples used in this analysis. Section 3 describes the jet flavor tagging algorithms (LCFIPlus and ParticleNet) and the method used to evaluate their performance. Section 4 quantifies the dependence of flavor tagging performance on vertex detector configuration and compares the performance of ParticleNet and LCFIPlus. Finally, Section 5 provides a brief conclusion. ## 2 CEPC Detector, software, and samples At present, two interaction points of CEPC are designed with the same baseline detector, which is designed according to the Particle Flow Algorithm (PFA) principle and emphasizes reconstructing visible final state particles in the most-suited detector subsystems. The structure of the CEPC detector is shown in Fig. 2. From inner to outer, the baseline detector is composed of a silicon pixel vertex detector, a silicon inner tracker, a Time Projection Chamber (TPC) surrounded by a silicon external tracker, a silicon-tungsten sampling Electromagnetic Calorimeter (ECAL), a steel-glass Resistive Plate Chambers sampling Hadronic Calorimeter (HCAL), a 3 Tesla superconducting solenoid, and a flux return yoke embedded with a muon detector. For flavor tagging, the vertex detector is critical. At the CEPC, the vertex detector is designed with six concentric cylindrical layers of square silicon pixel sensors. The mechanical structure of the vertex detector consists of ladders, with each ladder supporting sensors on both sides. The \begin{table} \begin{tabular}{c c c c c} \hline Operation mode & Z factory & WW & Higgs factory & \(t\bar{t}\) \\ \hline \(\sqrt{s}\) (GeV) & 91.2 & 160 & 240 & 360 \\ Run time (year) & 2 & 1 & 10 & 5 \\ Instantaneous luminosity & 191.7 & 26.6 & 8.3 & 0.83 \\ (\(10^{34}\)cm\({}^{-2}\)s\({}^{-1}\), per IP) & & & & \\ Integrated luminosity & 100 & 6 & 20 & 1 \\ (ab\({}^{-1}\), 2 IPs) & & & & \\ Event yields & 3\(\times\)\(10^{12}\) & 1\(\times\)\(10^{8}\) & 4\(\times\)\(10^{6}\) & 5 \(\times\)\(10^{5}\) \\ \hline \end{tabular} \end{table} Table 1: The operation scheme of the CEPC, including the center-of-mass energy, the instantaneous luminosity, the total integrated luminosity, and the event yields [3]. Figure 2: The CEPC baseline detector.
From inner to outer, the detector is composed of a silicon pixel vertex detector, a silicon inner tracker, a TPC, a silicon external tracker, an ECAL, an HCAL, a solenoid of 3 Tesla, and a return yoke embedded with a muon detector. Five pairs of silicon tracking disks are installed in the forward regions to enlarge the tracking acceptance. [3] detailed structure of the vertex detector is depicted in Fig. 3, and its specific parameters are listed in Table 2. A baseline reconstruction software chain has been developed to quantify the scientific merit and guide the detector optimization of CEPC, see Fig. 4. The data flow of the CEPC baseline software starts from the event generators of Whizard [8] and Pythia [9]. The detector geometry is implemented into MokkaPlus [10], a GEANT4-based full simulation module. MokkaPlus calculates the energy deposition in the detector-sensitive volumes and creates simulated hits. For each sub-detector, the digitization module converts the simulated hits into digitized hits by convolution of the corresponding sub-detector responses. The reconstruction modules include the tracking, the Particle Flow, and the high-level reconstruction algorithms. The digitized tracker hits are reconstructed into tracks via the tracking algorithms [11]. The Particle Flow algorithm, Arbor [12], reads the reconstructed tracks and the calorimeter hits to build reconstructed particles. High-level reconstruction algorithms reconstruct composite physics objects such as converted photons, jets, taus, and so on, and identify the flavor of the jets. In this paper, we utilized hadronic events at Z-pole operation, including 1 million \(Z\to b\bar{b}\) events, 1 million \(Z\to c\bar{c}\) events, and 0.33 million each of \(Z\to u\bar{u}/d\bar{d}/s\bar{s}\) events. For ParticleNet, we divided the samples into three distinct sets: the training set for training the model, the validation set used to validate whether the model is overfitting or underfitting, and the testing set used to give flavor tagging results. The ratios of samples in these sets were set at 60%, 20%, and 20%, respectively. For LCFIPlus, we use all samples to do the test since we have already trained the model. ## 3 Flavor tagging algorithms and their performance In this section, we introduce LCFIPlus and ParticleNet and compare their performance based on the CEPC detector and software. Both algorithms read the information of reconstructed jet candidates and calculate the jet likeness to b, c, and light categories. The LCFIPlus package, a framework for jet analysis in linear collider studies, was originally developed by the International Linear Collider (ILC) [13], and has since been widely used at the Compact Linear \(e^{+}e^{-}\) Collider (CLIC) [14], the Future Circular Collider \(e^{+}e^{-}\) (FCC-ee) [15], and CEPC. The LCFIPlus package consists of vertex finding, jet clustering, vertex refinement, and flavor tagging. To perform flavor tagging, the jets are classified into four categories based on the number of reconstructed vertices and isolated leptons in the jet. A set of variables is then extracted for each category, which includes the number of tracks in each vertex, the vertex mass, the distance between the secondary vertex and the primary vertex, the vertex decay length, the track transverse momentum, and more. Further details can be found in [5]. 
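LCFIPlus trains boosted-decision-tree classifiers on per-jet variables of this kind, as described in the next paragraph. The snippet below is a hedged, self-contained sketch of such a training step; the synthetic feature values, the pseudo-labels, and the scikit-learn implementation are illustrative assumptions and do not reproduce the actual LCFIPlus training.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Toy stand-ins for a few LCFIPlus-style per-jet variables: number of tracks in the
# secondary vertex, vertex mass, decay length, and leading-track transverse momentum.
rng = np.random.default_rng(0)
n_jets = 2000
X = np.column_stack([
    rng.poisson(3, n_jets),          # tracks in secondary vertex
    rng.exponential(1.5, n_jets),    # vertex mass [GeV]
    rng.exponential(2.0, n_jets),    # decay length [mm]
    rng.exponential(10.0, n_jets),   # leading track pT [GeV]
])
# Pseudo-labels: 1 = b jet, 0 = other; in a real analysis these come from MC truth matching.
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 1, n_jets) > 2.5).astype(int)

btag = GradientBoostingClassifier(n_estimators=200, max_depth=3)
btag.fit(X, y)
b_likeness = btag.predict_proba(X)[:, 1]   # per-jet b-likeness in [0, 1]
print("mean b-likeness for true b jets:", b_likeness[y == 1].mean().round(3))
```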
In each category, two types of flavor tagging algorithms are trained using the Gradient Boosted Decision Tree (GBDT) method, one for the b-tagging algorithm and the other for the c-tagging algorithm. The ParticleNet based on Graph Neural Network (GNN) [17] was published at the beginning of 2019. The architecture of ParticleNet is shown in the left plot of Fig. 5. It consists of three EdgeConv blocks, one \begin{table} \begin{tabular}{l c c c} \hline \hline & radius & spatial resolution & material budget \\ & (mm) & (\(\mu m\)) & \\ \hline Layer 1 & 16 & 2.8 & 0.15\%/X\({}_{0}\) \\ Layer 2 & 18 & 6 & 0.15\%/X\({}_{0}\) \\ Layer 3 & 37 & 4 & 0.15\%/X\({}_{0}\) \\ Layer 4 & 39 & 4 & 0.15\%/X\({}_{0}\) \\ Layer 5 & 58 & 4 & 0.15\%/X\({}_{0}\) \\ Layer 6 & 60 & 4 & 0.15\%/X\({}_{0}\) \\ \hline \hline \end{tabular} \end{table} Table 2: The baseline design parameters of the CEPC vertex system. [6] Figure 4: The information flow of the CEPC software chain. [7] Figure 3: Schematic view of vertex detector. Two layers of silicon pixel sensors are mounted on both sides of each of the three ladders to provide six space points. The beam pipe is surrounded by the vertex detector. [6] channel-wise global average pooling block, and two fully connected blocks followed by a softmax function to output the b/c/light-likeness for each jet. The core concept of ParticleNet is the EdgeConv operation, which is realized by applying feature aggregation for each particle and its k nearest particles in the jet. The specific process of each EdgeConv block is illustrated in the right plot of Fig. 5. It starts by finding the \(k\)-nearest neighbors for each particle within the jet. The edge between each particle and its \(k\)-nearest neighbors is determined using the input features of each particle. In the first EdgeConv block, the spatial coordinates \((\Delta\eta,\Delta\phi)\) of the particles in the pseudorapidity-azimuth space are used to compute the edge of each pair of particles, while the subsequent EdgeConv blocks use the learned feature vectors as coordinates. The input features for our task, listed in Table 3, include the kinematic variables constructed with the 4-momentum of each particle, the PID information, the charge, and impact parameters. The distance between the interaction point and the path of a track is defined as the impact parameter, where the distance along the beam is called \(z_{0}\) and perpendicular to the beam is called \(d_{0}\). Both flavor tagging algorithms assign three values to each jet: b-likeness, c-likeness, and light-likeness, with the constraint that their sum equals unity. The scatter plots in Fig. 6 show the distribution of b-likeness versus c-likeness for samples of \(Z\to b\bar{b}/c\bar{c}\)/light quarks with ParticleNet. In these plots, b-jets tend to concentrate in the region of larger b-likeness, c-jets in the region of larger c-likeness, and light-jets in the region of smaller b/c-likeness. The phase space spanned by the b/c-likeness is divided into three different regions corresponding to identified b, c, and light quarks. We then obtain the ratios of b-jets identified as b-jets, b-jets identified as c-jets, and so on. These ratios can be represented with a migration matrix, as shown in Fig. 7. The working point (phase space separation) can be optimized according to the specific analysis requirements. For general cases, we adopt the method using two orthogonal lines passing through the point (0.5, 0.5), as depicted by the two red lines in figure 6. 
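A small sketch of how this working point can be applied and summarized is given below: each jet is assigned to the b, c, or light category using the two cuts through (0.5, 0.5), and the resulting identification fractions are collected into a migration matrix whose trace equals 3 for perfect tagging. The cut logic and the toy scores are assumptions for illustration, not the CEPC analysis code.

```python
import numpy as np

LABELS = ["b", "c", "light"]

def assign_label(b_like: float, c_like: float) -> str:
    # Working point from the two lines through (0.5, 0.5):
    # b-likeness >= 0.5 -> b, else c-likeness >= 0.5 -> c, else light.
    # (b-likeness + c-likeness <= 1, so the first two regions cannot overlap.)
    if b_like >= 0.5:
        return "b"
    if c_like >= 0.5:
        return "c"
    return "light"

def migration_matrix(true_labels, b_like, c_like):
    # M[i][j] = fraction of true-flavour-i jets identified as flavour j.
    counts = np.zeros((3, 3))
    for t, b, c in zip(true_labels, b_like, c_like):
        counts[LABELS.index(t), LABELS.index(assign_label(b, c))] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return counts / np.where(row_sums == 0, 1, row_sums)

if __name__ == "__main__":
    truth = ["b", "b", "c", "light", "c", "light"]
    b_scores = [0.9, 0.6, 0.2, 0.1, 0.3, 0.4]
    c_scores = [0.05, 0.3, 0.7, 0.2, 0.6, 0.3]
    m = migration_matrix(truth, b_scores, c_scores)
    print(m)
    print("trace (Tr_mig):", np.trace(m))  # equals 3.0 only for perfect tagging
```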
\begin{table} \begin{tabular}{c c} \hline Variable & Definition \\ \hline \(\Delta\eta\) & difference in pseudorapidity between the particle and the jet axis \\ \(\Delta\phi\) & difference in azimuthal angle between the particle and the jet axis \\ \hline \(\log\)P\({}_{\rm t}\) & logarithm of the particle’s \(P_{\rm t}\) \\ \(\log\)E & logarithm of the particle’s energy \\ \(\log\)P\({}_{\rm t}\)(jet) & logarithm of the particle’s \(P_{\rm t}\) relative to the jet \(P_{\rm t}\) \\ \(\log\)E(jet) & logarithm of the particle’s energy relative to the jet energy \\ \(\Delta R\) & angular separation between the particle and the jet axis \((\sqrt{(\Delta\eta)^{2}+(\Delta\phi)^{2}})\) \\ \(d_{0}\) & transverse impact parameter of the track \\ \(d_{0}\)err & uncertainty associated with the measurement of the \(d_{0}\) \\ \(z_{0}\) & longitudinal impact parameter of the track \\ \(z_{0}\)err & uncertainty associated with the measurement of the \(z_{0}\) \\ charge & electric charge of the particle \\ \hline isElectron & whether the particle is an electron \\ isMuon & whether the particle is a muon \\ isChargedKaon & whether the particle is a charged Kaon \\ isChargedPion & whether the particle is a charged Pion \\ isProton & whether the particle is a proton \\ isNeutralHadron & whether the particle is a neutral hadron \\ isPhoton & whether the particle is a photon \\ \hline \end{tabular} \end{table} Table 3: The input variables used in ParticleNet for jet flavor tagging at the CEPC. Figure 5: The architecture of the ParticleNet (left) and the structure of the EdgeConv block (right). [4] ## 4 Performance analyses The performance of ParticleNet and LCFIPlus is evaluated by the following three criteria. The first one is the migration matrix since the perfect flavor tagging performance corresponds to the identity matrix. The second one is the physics performance: a better flavor tagging algorithm would induce better physics results. The last one is the vertex detector optimization since it is relevant to the resolution of transverse momentum and impact parameters, and further quantifies the reconstruction performance of the detector. ### Performance comparison and impact on benchmarks of \(\sigma(ZH)\cdot Br(Z\rightarrow\nu\bar{\nu},H\to c\bar{c})\) and \(|V_{cb}|\) Figure 7 displays the migration matrices obtained using LCFIPlus and ParticleNet, respectively. Compared to LCFIPlus, ParticleNet achieves a significant improvement in b/c-tagging efficiency, with an enhancement of 15% for b jets and 32% for c jets. The trace of the matrix, abbreviated as \(\rm{Tr}_{\rm{mig}}\), is 3.0 for perfect jet flavor tagging performance, and it increases from 2.30 to 2.64 with the utilization of ParticleNet. Both LCFIPlus and ParticleNet face a more challenging task in c-tagging, as the properties of c jets lie between those of b and light jets. In the top plot of Fig. 8, we present the correlation between jet flavor tagging performance, described by \(\rm{Tr}_{\rm{mig}}\), and jet polar angle, which is defined as the angle with respect to the beam line and represented by the angle \(\theta_{jet}\). Figure 6: The distribution of b/c-likeness for samples of \(Z\to b\bar{b}/c\bar{c}\)/light quarks. The parallel lines divide the space spanned by the b/c-likeness into three regions.
Both LCFIPlus and ParticleNet exhibit better performance in the barrel region compared to the endcap region, due to the relatively lower resolution of transverse momentum (\(P_{t}\)) and impact parameters (\(d_{0}\) and \(z_{0}\)) in the endcap region. The value of ParticleNet performance divided by LCFIPlus performance can be used to describe the performance improvement of ParticleNet relative to LCFIPlus. The bottom plot of Fig. 8 shows the correlation between those values and the jet polar angle. Compared to LCFIPlus, ParticleNet can improve the trace of the migration matrix by more than 10% in the barrel region and more than 30% in the endcap regions. The performance of both flavor tagging algorithms is compared in benchmark analyses. The first analysis we look into is the signal strength measurement of \(\sigma(ZH)\cdot Br(Z\rightarrow\nu\bar{\nu},H\to c\bar{c})\). In the paper [18], the authors demonstrate a correlation between the trace of the migration matrix and the accuracy of the signal strength of \(\sigma(ZH)\cdot Br(Z\rightarrow\nu\bar{\nu},H\to c\bar{c})\) when CEPC operates as a Higgs factory at the center-of-mass energy of 240 GeV and integrated luminosity of 5.6 \(ab^{-1}\), as depicted in the top plot of Fig. 9. Using LCFIPlus, the trace is 2.30, corresponding to an accuracy of 0.057, indicated by the green star. ParticleNet enhances the trace to 2.64, aligning with an accuracy of 0.042, represented by the orange star. The second analysis is the signal strength measurement of \(|V_{cb}|\) Figure 8: The top plot shows the correlation between jet flavor tagging performance, quantified using the trace of the flavor tagging performance matrix, and the jet polar angle. The bottom plot illustrates the performance improvement of ParticleNet relative to LCFIPlus at different jet polar angles. Two vertical lines mark the boundary between the barrel and endcap regions. Figure 7: The migration matrix of flavor tagging performance of ParticleNet (top) and LCFIPlus (bottom) at the CEPC. the magnitude of \(V_{cb}\), which is one of the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements and governs the transition between charm and bottom quarks. Accurate measurement of \(|V_{cb}|\) plays a pivotal role in the study of weak interactions within the SM. When CEPC operates as a Higgs factory at the center-of-mass energy of 240 GeV and integrated luminosity of 5.6 \(ab^{-1}\), the authors demonstrate that ParticleNet can significantly improve the accuracy of signal strength by 75% in the measurement of \(|V_{cb}|\) through \(W^{+}W^{-}\to\mu\nu q\bar{q}\)[19], as depicted in the bottom plot of Fig. 9. ### Comparison on vertex detector optimization Jet flavor tagging performance depends on the detector design, especially the vertex detector. The vertex detector is mainly characterized by three parameters: material budget, spatial resolution, and inner radius. The CEPC vertex detector is designed with three concentric cylinders of double-sided layers, and the parameters are listed in Table 2. In a previous study [6] using the LCFIPlus flavor tagging algorithm, the correlation between c-jet tagging efficiency multiplied by purity (\(\epsilon\cdot p\)) and relevant vertex detector parameters was quantified. The measurement of Higgs\(\to b\bar{b}/c\bar{c}/gg\) at the CEPC revealed a correlation between \(\rm{Tr_{mig}}\) and the c-jet tagging \(\epsilon\cdot p\). 
By combining these correlations, the relationship between \(Tr_{mig}\) and relevant vertex detector parameters can be obtained, shown as the top plot of Fig. 10. This correlation is formulated in expression 1, where \(\rm{R^{0}_{radius}/R^{0}_{resolution}/R^{0}_{material}}\) represent the default design of the CEPC vertex detector and \(\rm{R_{radius}/R_{resolution}/R_{material}}\) represent the new design. The coefficients fitted from the correlations indicate the importance of the corresponding detector parameter on the flavor tagging performance. The results obtained from LCFIPlus demonstrate that the flavor tagging performance is more sensitive to the inner radius, followed by the material budget, and lastly the spatial resolution. \[\begin{split}\rm{Tr_{mig}}&=2.30+0.06\cdot\log_{2}\frac{R^{0}_{material}}{R_{material}}\\ &+0.04\cdot\log_{2}\frac{R^{0}_{resolution}}{R_{resolution}}+0.10\cdot\log_{2}\frac{R^{0}_{radius}}{R_{radius}}\end{split} \tag{1}\] \[\begin{split}\rm{Tr_{mig}}&=2.64+0.03\cdot\log_{2}\frac{R^{0}_{material}}{R_{material}}\\ &+0.02\cdot\log_{2}\frac{R^{0}_{resolution}}{R_{resolution}}+0.06\cdot\log_{2}\frac{R^{0}_{radius}}{R_{radius}}\end{split} \tag{2}\] Figure 9: The dependence of relative statistical uncertainties for measurement of \(\sigma(ZH)\cdot Br(Z\to\nu\bar{\nu},H\to c\bar{c})\) (top) and \(|V_{cb}|\) (bottom) on flavor tagging performance, which is represented with the trace of flavor tagging performance matrix. The larger green/orange marker corresponds to the result obtained by LCFIPlus/ParticleNet. When the vertex detector parameters, including the inner radius, material budget, and spatial resolution, are changed by a factor of 0.5/2 from the baseline design (the geometry used in this simulation), the \(\rm{Tr_{mig}}\) value changes accordingly. It shifts from 2.64 to 2.75/2.53 for ParticleNet and from 2.30 to 2.50/2.10 for LCFIPlus, as indicated by the four vertical lines. The same analysis was conducted using ParticleNet. The Z-pole samples are fully simulated based on different vertex detector configurations and fed to ParticleNet to train. The results are illustrated in the bottom plot of Fig. 10 and equation 2. Compared to LCFIPlus, ParticleNet exhibits a larger \(Tr_{mig}\) value (2.64 vs. 2.30), and its coefficients are roughly 50% of those of LCFIPlus. In other words, ParticleNet has a lower dependence on the geometric parameters. However, both methods have the same order of impact for three different geometric parameters: both identify the inner radius as the most sensitive to flavor tagging performance and spatial resolution as the least sensitive. The influence of geometric modifications on benchmark analyses can be assessed by referring to Fig. 9 in subsection 4.1. Consider two scenarios: one optimal and the other conservative, where the values of the three vertex detector parameters are 0.5/2 times those of the baseline design. This adjustment leads to changes in \(\rm{Tr_{mig}}\) from 2.64 to 2.75/2.53 for ParticleNet and from 2.30 to 2.50/2.10 for LCFIPlus, as indicated by the vertical lines in Fig. 9. The accuracy of \(\sigma(ZH)\cdot Br(Z\to\nu\bar{\nu},H\to c\bar{c})\) and \(|V_{cb}|\) measurement under different scenarios using ParticleNet and LCFIPlus is presented in Table 4. Compared to LCFIPlus, ParticleNet significantly improves the accuracy of benchmark measurements. In the baseline scenario, the improvement is 36% and 75% for \(\sigma(ZH)\cdot Br(Z\to\nu\bar{\nu},H\to c\bar{c})\) and \(V_{cb}\) measurement, respectively.
In the conservative scenario, the improvement is enhanced to 58% for \(\sigma(ZH)\cdot Br(Z\to\nu\bar{\nu},H\to c\bar{c})\) and nearly a factor of 3 for \(|V_{cb}|\). ## 5 Conclusion Flavor tagging, a methodology employed to discern the origins of jets, holds immense significance in the realm of experimental exploration at the High Energy Frontier. Jets originating from different quarks or gluons have key differences, represented in the multiplicity of different species of particles, the secondary vertices, the opening angle of jets, etc. The flavor tagging performance depends on both the flavor tagging algorithm and detector design. To pursue excellent discovery power and innovative design of the detector, intensive research and development efforts towards the key detector technologies, especially the vertex detector, are performed. Meanwhile, the development of innovative algorithms injects new momentum into this field. In this paper, we analyze the performance of ParticleNet and LCFIPlus. ParticleNet, based on a GNN, has been intensively used at CMS [20; 21; 22] and FCCee [23]. LCFIPlus is a GBDT-based algorithm that has served as the baseline flavor tagging algorithm for CEPC and multiple future electron-positron Higgs factories. Using fully simulated hadronic events at a center-of-mass energy of 91.2 GeV at the CEPC baseline detector, we quantify the performance of both algorithms. We use a 3-dimensional migration matrix to describe the flavor tagging performance (representing the identification efficiency and misidentification rate), and the trace of the migration matrix is used as the key parameter to characterize flavor tagging. At the CEPC baseline detector geometry, we observe that ParticleNet is significantly superior to LCFIPlus. At the inclusive hadronic Z pole sample, the trace of ParticleNet is larger than LCFIPlus by more than 14%. Consequently, the relative statistical accuracy of \(\sigma(ZH)\cdot Br(Z\to\nu\bar{\nu},H\to c\bar{c})\) and \(|V_{cb}|\) measurement via W boson decay is improved by 36% and 75%, respectively, when CEPC operates as a Higgs factory at the center-of-mass energy of 240 GeV and integrated luminosity of 5.6 \(ab^{-1}\). Another paper [24] shows that ParticleNet can improve the statistical uncertainty of \(R_{C}\) measurement by 60% at the CEPC. The flavor tagging performance, which is described by \(Tr_{mig}\), of both ParticleNet and LCFIPlus depends on the polar angle. \begin{table} \begin{tabular}{c c c c c} \hline & & conservative & baseline & optimal \\ \hline \multirow{3}{*}{\(\nu\nu Hc\bar{c}\)} & LCFIPlus & 0.071 & 0.057 & 0.047 \\ & ParticleNet & 0.045 & 0.042 & 0.038 \\ & \(\frac{\rm{LCFIPlus}}{\rm{ParticleNet}}\) & 1.58 & 1.36 & 1.26 \\ \hline \multirow{3}{*}{\(|V_{cb}|\)} & LCFIPlus & 0.0241 & 0.0133 & 0.0091 \\ & ParticleNet & 0.0086 & 0.0076 & 0.0067 \\ \cline{1-1} & \(\frac{\rm{LCFIPlus}}{\rm{ParticleNet}}\) & 2.80 & 1.75 & 1.36 \\ \hline \end{tabular} \end{table} Table 4: The accuracy of \(\nu\nu Hc\bar{c}\) and \(V_{cb}\) measurement is assessed under three scenarios: conservative, baseline, and optimal. In the conservative and optimal scenarios, three vertex detector parameters are adjusted to 2 and 0.5 times their values in the baseline design. The value of \(\frac{\rm{LCFIPlus}}{\rm{ParticleNet}}\) reflects the impact of the flavor tagging algorithm on benchmark measurements. Figure 10: The correlation between the trace of a migration matrix and relative scanned parameters for LCFIPlus (top) and ParticleNet (bottom).
Both algorithms exhibit better performance in the barrel and smoothly degrade in the forward region. We also apply ParticleNet to different vertex detector geometries and observe that the flavor tagging performance is most sensitive to the inner radius, followed by the material budget and the spatial resolution. The result is consistent with the earlier study conducted with LCFIPlus. Benchmark performance in two scenarios, conservative and optimal, where the values of the three vertex detector parameters are 2 and 0.5 times those of the baseline design, reveals that ParticleNet can significantly enhance physics performance in the conservative scenario while showing a less significant improvement with the more aggressive (optimal) detector design. ###### Acknowledgements. We thank the computing center of the Institute of High Energy Physics for providing the computing resources. Thanks to Gang Li, Congqiao Li, and Shudong Wang for providing guidance on software. This project is supported by the Fundamental Research Funds for the Central Universities, Peking University. This project is also supported by the National Natural Science Foundation of China under grant No. 12042507.
2309.04995
How to assign volunteers to tasks compatibly ? A graph theoretic and parameterized approach
In this paper we study a resource allocation problem that encodes correlation between items in terms of conflict and maximizes the minimum utility of the agents under a conflict free allocation. Admittedly, the problem is computationally hard even under stringent restrictions because it encodes a variant of the {\sc Maximum Weight Independent Set} problem which is one of the canonical hard problems in both classical and parameterized complexity. Recently, this subject was explored by Chiarelli et al.~[Algorithmica'22] from the classical complexity perspective to draw the boundary between {\sf NP}-hardness and tractability for a constant number of agents. The problem was shown to be hard even for a small constant number of agents and various other restrictions on the underlying graph. Notwithstanding this computational barrier, we notice that there are several parameters that are worth studying: number of agents, number of items, combinatorial structure that defines the conflict among the items, all of which could well be small under specific circumstances. Our search rules out several parameters (even when taken together) and takes us towards a characterization of families of input instances that are amenable to polynomial time algorithms when the parameters are constant. In addition to this we give a superior $2^{m}|I|^{\mathcal{O}(1)}$ algorithm for our problem where $m$ denotes the number of items that significantly beats the exhaustive $\mathcal{O}(m^{m})$ algorithm by cleverly using ideas from FFT based fast polynomial multiplication; and we identify simple graph classes relevant to our problem's motivation that admit efficient algorithms.
Sushmita Gupta, Pallavi Jain, Saket Saurabh
2023-09-10T11:02:16Z
http://arxiv.org/abs/2309.04995v1
# How to assign volunteers to tasks compatibly? A graph theoretic and parameterized approach ###### Abstract In this paper we study a resource allocation problem that encodes correlation between items in terms of conflict and maximizes the minimum utility of the agents under a conflict free allocation. Admittedly, the problem is computationally hard even under stringent restrictions because it encodes a variant of the Maximum Weight Independent Set problem which is one of the canonical hard problems in both classical and parameterized complexity. Recently, this subject was explored by Chiarelli et al. [Algorithmica'22] from the classical complexity perspective to draw the boundary between NP-hardness and tractability for a constant number of agents. The problem was shown to be hard even for small constant number of agents and various other restrictions on the underlying graph. Notwithstanding this computational barrier, we notice that there are several parameters that are worth studying: number of agents, number of items, combinatorial structure that defines the conflict among the items, all of which could well be small under specific circumstances. Our search rules out several parameters (even when taken together) and takes us towards a characterization of families of input instances that are amenable to polynomial time algorithms when the parameters are constant. In addition to this we give a superior \(2^{m}|I|^{O(1)}\) algorithm for our problem where \(m\) denotes the number of items that significantly beats the exhaustive \(\mathcal{O}(m^{m})\) algorithm by cleverly using ideas from FFT based fast polynomial multiplication; and we identify simple graph classes relevant to our problem's motivation that admit efficient algorithms. **Keywords:** Conflict free allocation fair allocation job scheduling independent set parameterized complexity. ## 1 Introduction Imagine a situation where we are running a non-profit organization that specialises in volunteer work. Specifically, our objective is to bundle the tasks that need to be completed and pair them with the available volunteer workers in some meaningful way. Naturally, the volunteer workers have some preference over the available tasks and the tasks may have some inherent compatibility issues in that a person may only be assigned to at most one of the tasks that are mutually incompatible. The incompatibility among the tasks could be due to something as simple as the time interval in which they have to be performed. While it would be ideal to assign all the tasks, it may not actually be possible due to the above compatibility issues and the number of available workers. Moreover, this being a volunteer operation, the workers are "paid" by the satisfaction they derive from completing the bundle of tasks assigned to them. Thus, we want to ensure that the assignment is done in way that gives every volunteer worker the highest level of satisfaction possible. This is the setting of the job assignment problem studied in this article. The above described scenario falls under the more general topic of resource allocation which is a central topic in economics and computation. Resource allocation is an umbrella term that captures a plethora of well-known problem settings where resources are matched to agents in a meaningful way that respects the preferences/choices of agents, and when relevant, resources as well. Stable matching, generalized assignment, fair division, are some well-known problems that fall under the purview of resource allocation. 
These topics are extensively studied in economics, (computational) social choice theory, game theory, and computer science, to name a few; and are incredibly versatile and adaptable to a wide variety of terminology, techniques and traditions. A well-known framework within which resource allocation is studied is the world of Job Scheduling problems on non-identical machines. In this scenario, the machines are acting as agents and the jobs are the tasks such that certain machines are better suited for some jobs than others, and this variation is captured by the "satisfaction level" of the machine towards the assigned jobs. Moreover, the jobs have specific time intervals within which they have to be performed and only one job can be scheduled on a machine at a time. Thus, the subset of jobs assigned to a single machine must respect these constraints, and the objective can be both maximization and minimization as well as to simply test feasibility. Results on the computational aspect of resource allocation that incorporate interactions and dependencies between the resources are relatively few. This is the backdrop of our work in this article. A rather inexhaustive but representative list of papers that take a combinatorial approach in analysing a resource allocation problem and are aligned with our work in this paper is [1, 3, 4, 6, 7, 15, 27, 31, 2]. In particular, we can point to the decades old work of Deuermeyer et al. [12] that studies a variant of Job Scheduling in which the goal is to assign a set of independent jobs to identical machines in order to maximize the minimal completion time of the jobs. Their NP-hardness result for two machines (i.e., two agents in our setting) is an early work with similar flavor. They analyse a well-known heuristic called the LPT-algorithm to capture best-case performance and show that its worst case performance is a \(4/3\)-factor removed from the optimum. The more recent work of Chiarelli et al. [7] that studies "fair allocation" of indivisible items into pairwise disjoint subsets of items that maximizes the minimum satisfaction of the agents is the work that is closest to ours. They too consider various graph classes that capture compatibilities among items and explore the classical complexity boundary between strong NP-hardness and pseudo-polynomial tractability for a constant number of agents. Our analysis probes beyond the NP-hardness of these problems and explores this world from the lens of parameterized complexity, thereby drawing out the suitability of natural parameters (such as the number of agents, the number of jobs, the maximum size of each allocated "bundle", and the structural parameters of the underlying graph) towards yielding polynomial time algorithms when the parameters take on constant values. We formally model our setting by viewing it as a two-sided matching market where each worker (i.e., an _agent_) has a utility function defined over the set of available tasks (call them _jobs_) such that their satisfaction for a bundle of jobs is the sum of the agents' utilities for each individual job in the bundle. The incompatibilities among the jobs are captured by a graph \(\mathcal{H}\) defined on the set of jobs such that an edge represents _conflict_. The overall objective is to assign bundles, i.e., pairwise disjoint subsets of jobs that each induce an independent set in \(\mathcal{H}\) (have no edges among each other), to agents such that the minimum satisfaction of the agents is maximized.
To make our discussion concrete, we formally define the computational problem under study. Conflict free Fair Allocation (CFFA) **Input:** A set of agents \(\mathcal{A}\), a set of jobs \(\mathcal{I}\), utility function \(\operatorname{u}_{a}\colon\mathcal{I}\to\mathbb{N}\), for each agent \(a\in\mathcal{A}\), a positive integer \(\eta\in\mathbb{N}\); and a graph \(\mathcal{H}\) with vertex set \(\mathcal{I}\). **Question:** Does there exist a function \(\phi\colon\mathcal{A}\to 2^{\mathcal{I}}\) such that for every \(a\in\mathcal{A}\), \(\phi(a)\) is an independent set in \(\mathcal{H}\), \(\sum_{x\in\phi(a)}\operatorname{u}_{a}(x)\geq\eta\), and \(\phi(a)\cap\phi(a^{\prime})=\emptyset\) for all \(\{a,a^{\prime}\}\subseteq\mathcal{A}\)? For each \(a\in\mathcal{A}\), we call \(\phi(a)\) a _bundle_ assigned to the agent \(a\). We call graph \(\mathcal{H}\) the _conflict graph_. As alluded to earlier, Deuermeyer et al. [12] studied CFFA with a different name and showed that the problem is NP-complete even for \(2\) agents and even when \(\mathcal{H}\) is edgeless (that is, no conflict). Since the problem statement has a conflict graph and we need the subsets of allocated resources to be an independent set in \(\mathcal{H}\), naturally, the classical Maximum Weight Independent Set (MWIS) problem comes into play. In this problem, given a graph \(G\), a weight function \(w:V(G)\to\mathbb{N}\), and an integer \(\eta\), the objective is to test whether there exists an independent set \(S\) such that \(w(S)=\sum_{v\in S}w(v)\geq\eta\). Let \(\mathcal{G}\) be a family of graphs. Chiarelli et al. [7] showed that if MWIS is NP-complete on \(\mathcal{G}\), then CFFA is NP-complete, when \(\mathcal{H}\) belongs to the graph class \(\mathcal{G}\), even when there is one agent. Consequently, it is natural to focus on graph classes in which MWIS is polynomial-time solvable. However, [7] proves that CFFA remains NP-complete even for bipartite graphs and their _line graphs_. Some polynomial time algorithms for special instances of the problem and polynomial time approximation algorithms are known for the problem [12, 24]. Some papers that have used conflict graphs to capture various constraints on items/jobs that are related to compatibility are [11, 16, 5, 17]. ### Our Results and Methods As described above we formulate our question in graph theoretic terms and analyze the problem in the realm of parameterized complexity. We note that this is a natural choice of algorithmic toolkit for our problem because CFFA is naturally governed by several parameters such as the number of agents (\(\#\mathsf{agents}\)), the number of jobs (\(\#\mathsf{jobs}\)), the maximum size of a bundle (\(\mathsf{bundleSize}\)) in the solution, and the utility of any agent \(\eta\). This makes it a natural candidate for a study from the viewpoint of parameterized complexity. Moreover, we also note that for certain specific situations the job graph may have special structures that can be exploited for designing efficient algorithms. In what follows, we describe certain scenarios where the "small-ness" of the parameters and the underlying graph structure comes into focus and allows us to discuss our results more concretely. **Input/Output Parameters.** The first set of parameters that we study consists of \(n=\#\mathsf{agents}\), \(m=\#\mathsf{jobs}\), \(s=\mathsf{bundleSize}\), and \(\eta\). With this set of parameters, we obtain the following set of results.
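As a small illustration of the CFFA definition above (and not an algorithm from this paper), the following hedged sketch checks whether a candidate assignment \(\phi\) is feasible: the bundles must be pairwise disjoint, each bundle must be an independent set in the conflict graph \(\mathcal{H}\), and each agent must reach utility at least \(\eta\). The data representation is an illustrative assumption.

```python
from itertools import combinations

def is_feasible(bundles, utilities, conflict_edges, eta):
    """bundles: dict agent -> set of jobs (phi); utilities: dict agent -> dict job -> int;
    conflict_edges: set of frozenset({job, job}) giving the edges of the conflict graph H."""
    assigned = [job for bundle in bundles.values() for job in bundle]
    if len(assigned) != len(set(assigned)):              # bundles must be pairwise disjoint
        return False
    for agent, bundle in bundles.items():
        for u, v in combinations(bundle, 2):             # bundle must be independent in H
            if frozenset((u, v)) in conflict_edges:
                return False
        if sum(utilities[agent][job] for job in bundle) < eta:   # utility threshold
            return False
    return True

if __name__ == "__main__":
    # Toy instance: jobs 1..4, a single conflict between jobs 1 and 2.
    H = {frozenset((1, 2))}
    u = {"a1": {1: 3, 2: 2, 3: 1, 4: 1}, "a2": {1: 1, 2: 3, 3: 2, 4: 2}}
    phi = {"a1": {1, 3}, "a2": {2, 4}}
    print(is_feasible(phi, u, H, eta=4))   # True: 3+1 >= 4 and 3+2 >= 4, no conflicts
```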
**Brief overview of parameterized complexity:** The goal of parameterized complexity is to find ways of solving NP-hard problems more efficiently than exhaustive search: the aim is to restrict the combinatorial explosion to a parameter that is likely to be much smaller than the input size in families of input instances. Formally, a _parameterization_ of a problem is assigning an integer \(k\) to each input instance of the problem. We say that a parameterized problem is _fixed-parameter tractable_ (\(\mathsf{FPT}\)) if there is an algorithm that solves the problem in time \(f(k)\cdot|I|^{\mathcal{O}(1)}\), where \(|I|\) is the size of the input and \(f\) is an arbitrary computable function depending on the parameter \(k\) only. A more general class of parameterized algorithms is that of \(\mathsf{XP}\) algorithms: a parameterized problem is _slicewise poly_ (\(\mathsf{XP}\)) if there is an algorithm that solves the problem in time \(|I|^{f(k)}\), where \(|I|\) is the size of the input and \(f\) is an arbitrary computable function depending on the parameter \(k\) only. Moreover, we will refer to such algorithms as an \(\mathsf{FPT}\) (resp. \(\mathsf{XP}\)) algorithm and the problem to have an \(\mathsf{FPT}(k)\) (resp. \(\mathsf{XP}(k)\)) algorithm. For more details on the subject, we refer to the textbooks [14, 8, 18]. **Graph classes under investigation:** We begin our discussion by describing the simple graph classes that bookend our parameterized study: the two extremes are _utopia_ and _chaos_ and in between we have potentially an infinite possibility of graph classes in which to study our problem. In Section 1.2 we delve deeper into what parameters are meaningful for further study and draw out the connections between the graph classes and fruitful parameterization. 1. **Utopia**: when there are no incompatibilities, and the conflict graph \(\mathcal{H}\) is edgeless. In this scenario the problem is hard even when bundle size is a small constant, Theorem 10. 2. **Chaos**: when every job is incompatible with every other job, and so conflict graph \(\mathcal{H}\) is complete. In this scenario, the problem becomes rather easy to solve since each bundle can only be of size at most one, Theorem 3. 3. **Incompatibilities are highly localized**: \(\mathcal{H}\) is a _cluster graph_, a graph that is comprised of vertex disjoint cliques. Such a situation may occur quite naturally, as in the following scenario. In the example of the assignment of volunteers to tasks, consider the situation where the tasks can only be completed on specific days and specific times. Consequently, all the tasks that can be completed on day 1 form a clique, the ones for day 2 form another clique and so on. Moreover, the volunteers are working after hours for say two hours each day and it has been decided that each worker can only work for the same number of hours each day to manage their work load. In this scenario a worker can be assigned at most one task per day. This is the intuitive basis for the algorithm described in Theorems 4 and 5 and Proposition 1. 4. **"distance" \(t\) away from chaos**: \(\mathcal{H}\) has at least \(\binom{m}{2}-t\) edges, Theorems 6 to 8. If not a constant, it is reasonable to expect these parameters to be fairly small compared to the input. ### Closer look at the parameters and search for fruitful graph families **I.
**I. (Superior) \(\mathsf{FPT}(m)\) algorithm exists:** We note that CFFA admits a trivial \(\mathsf{FPT}\) algorithm parameterized by \(m\) by enumerating all possible \((n+1)^{m}\) ways of assigning the jobs to the agents, where each job has \((n+1)\) choices of agents to choose from. Since \(m\geq n\), we get a running time of \(\mathcal{O}(m^{m})\). However, in **Section 2** we present an algorithm with running time \(2^{m}(n+m)^{\mathcal{O}(1)}\), which is clearly far superior. It is an algebraic algorithm that recasts the problem as that of polynomial multiplication that mimics subset convolution. This suggests that, in contrast to \(n\) (see below), the larger parameter \(m\) is sufficient to confine the (exponential growth in the) time complexity to a function of itself.

**II. No \(\mathsf{XP}(n)\) algorithm exists:** We first note that since CFFA is \(\mathsf{NP}\)-complete even for one agent (due to the reduction from MWIS by Chiarelli et al. [7]), we cannot even hope for an \((n+m)^{f(n)}\) time algorithm for any function \(f\), unless \(\mathsf{P}\)=\(\mathsf{NP}\). Thus, there is no hope for an \(\mathsf{FPT}\) algorithm with respect to \(n\). This appears to be a confirmation that the number of agents (volunteers), which is likely to be smaller than the number of jobs (tasks), is inadequate in terms of expressing the (exponential growth in the) time complexity as a function of itself.

**III. No \(\mathsf{XP}(s)\) algorithm when \(\mathcal{H}\) is edgeless:** In **Section 3** we show that CFFA is \(\mathsf{NP}\)-complete when \(\mathcal{H}\) is edgeless and \(s=3\). This implies that we cannot even hope for an \((n+m)^{g(s)}\) time algorithm for any function \(g\), unless \(\mathsf{P}\)=\(\mathsf{NP}\). Therefore, \(n\) and \(s\) are inadequate parameters individually, hence it is natural to consider them together.

**IV. When both \(n\) and \(s\) are small:** We note that \(n\) and \(s\) being small compared to \(m\) is quite realistic because there are likely to be far too many tasks at hand but relatively few volunteers; and the assignment should not overburden any of them, thus the number of assigned tasks should be small. This motivates us to consider the parameter \(n+s\). However, hoping that CFFA is \(\mathsf{FPT}\) parameterized by \(n+s\) in general graphs is futile because the problem generalizes the MWIS problem. Hence, we can only expect to obtain an \(\mathsf{FPT}(n+s)\) algorithm for special classes of graphs. Consequently, our exploration moves towards identifying graph classes which may admit such an algorithm. Towards that, we note that an \(\mathsf{FPT}(n+s)\) algorithm for the underlying decision problem that incorporates the bundle size \(s\) (defined formally below) yields an \(\mathsf{FPT}(n+s)\) algorithm for CFFA.

Size bounded-Conflict free Fair Allocation (Sb-CFFA)
**Input:** A set of agents \(\mathcal{A}\), a set of jobs \(\mathcal{I}\), a utility function \(\operatorname{u}_{a}:\mathcal{I}\to\mathbb{N}\) for each agent \(a\in\mathcal{A}\), positive integers \(s,\eta\in\mathbb{Z}_{>0}\), and a graph \(\mathcal{H}\) with vertex set \(\mathcal{I}\).
**Question:** Does there exist a function \(\phi\colon\mathcal{A}\to 2^{\mathcal{I}}\) such that for every agent \(a\in\mathcal{A}\), the bundle \(\phi(a)\) is an independent set in \(\mathcal{H}\), \(|\phi(a)|\leq s\), \(\sum_{x\in\phi(a)}\operatorname{u}_{a}(x)\geq\eta\), and \(\phi(a)\cap\phi(a^{\prime})=\emptyset\) for all \(\{a,a^{\prime}\}\subseteq\mathcal{A}\)?
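The constraints in this definition are straightforward to check for a candidate assignment. The following minimal Python sketch is included purely for illustration; the dictionary-based instance encoding (`utility`, `conflict_edges`) is a hypothetical representation chosen here, not part of the formal problem statement.

```python
from itertools import combinations

def is_feasible(assignment, utility, conflict_edges, s, eta):
    """Check a candidate Sb-CFFA assignment.

    assignment    : dict agent -> set of jobs (the bundle phi(a))
    utility       : dict agent -> dict job -> int (u_a)
    conflict_edges: set of frozensets {x, y} (edges of the conflict graph H)
    s, eta        : bundle-size bound and utility threshold
    """
    seen = set()
    for agent, bundle in assignment.items():
        # bundles must be pairwise disjoint
        if bundle & seen:
            return False
        seen |= bundle
        # the bundle size is bounded by s
        if len(bundle) > s:
            return False
        # the bundle must be an independent set in the conflict graph
        if any(frozenset(pair) in conflict_edges for pair in combinations(bundle, 2)):
            return False
        # each agent must derive utility at least eta
        if sum(utility[agent][job] for job in bundle) < eta:
            return False
    return True

# A toy instance: two agents, four jobs, one conflict between jobs 1 and 2.
utility = {"a1": {1: 3, 2: 2, 3: 1, 4: 1}, "a2": {1: 1, 2: 3, 3: 2, 4: 2}}
conflict_edges = {frozenset({1, 2})}
print(is_feasible({"a1": {1, 3}, "a2": {2, 4}}, utility, conflict_edges, s=2, eta=4))  # True
print(is_feasible({"a1": {1, 2}, "a2": {3, 4}}, utility, conflict_edges, s=2, eta=4))  # False: conflict
```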
To elaborate further, an \(\mathsf{FPT}\) algorithm for Sb-CFFA would imply an \(\mathsf{FPT}\) algorithm for CFFA, because \(s\leq m\) and an algorithm that makes subroutine calls to an \(\mathsf{FPT}(n+s)\) algorithm for Sb-CFFA for increasing values of \(s\) (from \(1\) to \(m\)) is an \(\mathsf{FPT}(n+s)\) algorithm for CFFA. Hence, from now on we focus our attention on an \(\mathsf{FPT}(n+s)\) algorithm for Sb-CFFA for reasonable graph classes.

\(\star\) _Two parameters \(n\), \(s\) in search of a graph family:_ A closer look at the hardness proof of CFFA from MWIS by Chiarelli et al. [7] yields a hardness result for Sb-CFFA from a size bounded version of MWIS, defined below. Note that in this problem the size of the (independent set) solution is upper bounded by the parameter, and this distinguishes it from the (standard) maximum weight independent set solution.

Size bounded-Maximum Weight Independent Set (Sb-MWIS)
**Input:** A graph \(G\), positive integers \(k\) and \(\rho\), a weight function \(w:V(G)\to\mathbb{N}\).
**Parameter:** \(k\)
**Question:** Does there exist an independent set \(S\) of size at most \(k\) such that \(\sum_{v\in S}w(v)\geq\rho\)?

In that reduction, \(s=k\) and \(n=1\). Hence, we can deduce the following connection: _any_ \(\mathsf{FPT}(n+s)\) algorithm for Sb-CFFA will yield an \(\mathsf{FPT}(k)\) algorithm for Sb-MWIS; and conversely, any hardness result that holds for Sb-MWIS with respect to \(k\) must also hold for Sb-CFFA with respect to \(n+s\). The latter condition allows us to narrow down the potential graph classes that admit an \(\mathsf{FPT}(n+s)\) algorithm for Sb-CFFA. Note that Sb-MWIS is a clear generalization (by setting \(\rho=k\) and taking the unit weight function) of the Independent Set problem, a very well-studied problem in the realm of parameterized complexity and indeed the wider field of graph algorithms. This connection allows us to demarcate the tractability border of our problem Sb-CFFA via the computational landscape of Independent Set. In the paragraphs to follow we will flesh out the connection in more explicit terms and derive a result, **Theorem 1**, that tightly characterises the tractability boundary of Sb-CFFA with respect to \(n+s\).

\(\star\) _Independent Set as the guiding light for Sb-CFFA:_ In the field of parameterized complexity, Independent Set has been extensively studied on families of graphs that satisfy some structural properties. We take the same exploration path for our problem Sb-CFFA. The graph classes in which Independent Set has an \(\mathsf{FPT}(k)\) algorithm are natural candidates for \(\mathsf{FPT}\) algorithms for Sb-CFFA. This is not a guarantee, however, and we need to argue the connection precisely. Let \(\mathcal{G}\) be a _hereditary_ family of graphs. That is, if \(G\in\mathcal{G}\), then all induced subgraphs of \(G\) belong to \(\mathcal{G}\). In other words, \(\mathcal{G}\) is closed under taking induced subgraphs. For a hereditary family \(\mathcal{G}\), \(\mathcal{G}\)-Sb-MWIS denotes the restriction of Sb-MWIS where the input graph \(G\in\mathcal{G}\). Thus, a natural question is: what happens when Sb-CFFA is restricted to a graph class for which \(\mathcal{G}\)-Sb-MWIS is fixed-parameter tractable with respect to \(k\)? Given a hereditary family \(\mathcal{G}\), we define \(\mathcal{G}\)-Sb-CFFA similarly to Sb-CFFA such that the graph \(\mathcal{H}\) belongs to the family \(\mathcal{G}\). The tractability of \(\mathcal{G}\)-Sb-MWIS does not immediately imply tractability of \(\mathcal{G}\)-Sb-CFFA.
Indeed, even if \(\mathcal{G}\)-Sb-MWIS is \(\mathsf{FPT}\) when parameterized by \(k\), we cannot hope for an \((n+m)^{f(n)}\) time algorithm for \(\mathcal{G}\)-Sb-CFFA for any function \(f\), unless \(\mathsf{P}\)=\(\mathsf{NP}\), because the \(\mathsf{NP}\)-hardness of Independent Set implies the \(\mathsf{NP}\)-hardness of CFFA even for one agent, i.e., \(n=1\). Due to Theorem 4 (explained later), we also cannot hope for an \((n+m)^{f(s)}\) time algorithm for \(\mathcal{G}\)-Sb-CFFA for any function \(f\), unless \(\mathsf{P}\)=\(\mathsf{NP}\), even if \(\mathcal{G}\)-Sb-MWIS has an \(\mathsf{FPT}(k)\) algorithm. These results imply that we cannot even expect \(\mathcal{G}\)-Sb-CFFA to have an \(\mathsf{XP}\) algorithm with respect to either \(n\) or \(s\) individually, let alone an \(\mathsf{FPT}\) algorithm. However, the following result completely characterizes the parameterized complexity of \(\mathcal{G}\)-Sb-CFFA with respect to \(n+s\) vis-a-vis the parameterized complexity of \(\mathcal{G}\)-Sb-MWIS with respect to \(k\).

**Theorem 1**.: _Let \(\mathcal{G}\) be a hereditary family of graphs. Then, \(\mathcal{G}\)-Sb-CFFA is \(\mathsf{FPT}\) parameterized by \(n+s\) if and only if \(\mathcal{G}\)-Sb-MWIS is \(\mathsf{FPT}\) parameterized by \(k\)._

Theorem 1 implies that \(\mathcal{G}\)-Sb-CFFA is \(\mathsf{FPT}\) when \(\mathcal{G}\) is the family of interval graphs, chordal graphs, perfect graphs, planar graphs, bipartite graphs, or graphs of bounded degeneracy, to name a few [20]. **Overview of Theorem 1.** This is one of the main algorithmic results of this article. The result is obtained by combining the classical color coding technique of Alon-Yuster-Zwick [2], applied to the set of jobs, with a dynamic programming algorithm to find a "colorful solution". In the dynamic programming phase of the algorithm, we invoke an \(\mathsf{FPT}(k)\) algorithm for \(\mathcal{G}\)-Sb-MWIS. While there are papers that study hereditary graph classes to give \(\mathsf{FPT}\) algorithms for MWIS (the standard maximum weight independent set problem) [10], we are not aware of known classes of graphs for which Sb-MWIS (the size bounded variant of the maximum weight independent set problem) is \(\mathsf{FPT}\) parameterized by \(k\). Hence, we first identify some such graph classes. We define an _independence friendly class_ as follows. Let \(f\colon\mathbb{N}\to\mathbb{N}\) be a monotonically increasing (and hence invertible) function. A graph class \(\mathcal{G}\) is called an _\(f\)-independence friendly class (\(f\)-ifc)_ if \(\mathcal{G}\) is hereditary and every \(G\in\mathcal{G}\) on \(n\) vertices has an independent set of size at least \(f(n)\). Observe that the families of bipartite graphs, planar graphs, graphs of bounded degeneracy, and graphs excluding some fixed clique as an induced subgraph are \(f\)-independence friendly classes with an appropriate function \(f\). For example, for bipartite graphs \(f(n)=\nicefrac{{n}}{{2}}\) and for \(d\)-degenerate graphs \(f(n)=\nicefrac{{n}}{{(d+1)}}\). For graphs excluding some fixed clique as an induced subgraph, we can obtain the desired \(f\) by looking at Ramsey numbers: \(R(r,s)\) is the minimum number of vertices \(n\) such that every undirected simple graph of order \(n\) contains a clique of size \(r\) or an independent set of size \(s\). It is known to be upper bounded by \(R(r,s)\leq{r+s-2\choose r-1}\) [25]. We prove the following result for \(\mathcal{G}\)-Sb-MWIS when \(\mathcal{G}\) is \(f\)-ifc.
**Theorem 2**.: _Let \(\mathcal{G}\) be an \(f\)-independence friendly class. Then, there exists an algorithm for \(\mathcal{G}\)-Sb-MWIS running in time \(\mathcal{O}((f^{-1}(k))^{k}\cdot(n+m)^{\mathcal{O}(1)})\)._

We also give a polynomial-time algorithm for \(\mathcal{G}\)-Sb-MWIS when \(\mathcal{G}\) is the family of cluster graphs. In contrast, CFFA is \(\mathsf{NP}\)-hard when the conflict graph is a cluster graph, as proved in Theorem 4. Finally, we show that Sb-CFFA is \(\mathsf{W}[1]\)-hard with respect to \(n+s+\eta\). We reduce it from the Independent Set problem. Given an instance \((G,k)\) of Independent Set, we can construct an instance of CFFA with only one agent, jobs as \(V(G)\), unit utility function, \(\mathcal{H}=G\), and \(s=\eta=k\). Since Independent Set is \(\mathsf{W}[1]\)-hard [13], we get the following.

**Observation 1**: Sb-CFFA _is \(\mathsf{W}[1]\)-hard with respect to \(n+s+\eta\)._

Next, we move to our next set of parameters.

### Structural Parameterization via graph classes

Our next set of results is motivated by the following result whose proof is in **Section 4**.

**Theorem 3**.: _There exists an algorithm that solves CFFA in polynomial time when the conflict graph is a complete graph._

In contrast, we show that when the conflict graph is edgeless, the problem is computationally hard even when bundles have size at most three, Theorem 10. This result leads us to ask what happens when incompatibilities are highly localized: does CFFA admit a polynomial time algorithm when \(\mathcal{H}\) is a disjoint union of cliques? We answer this question negatively by proving the following result, which is due to a reduction from Numerical \(3\)-Dimensional Matching.

**Theorem 4**.: CFFA _is \(\mathsf{NP}\)-complete even when \(\mathcal{H}\) is a cluster graph comprising \(3\) cliques._

Since an edgeless graph is also a cluster graph, due to [12], we have the following.

**Proposition 1**.: CFFA _is \(\mathsf{NP}\)-complete even for \(2\) agents when \(\mathcal{H}\) is a cluster graph._

Next, we design a polynomial-time algorithm when the cluster graph contains \(2\) cliques and the utility functions are _uniform_, i.e., the utility functions are the same for all the agents. In particular, we prove the following result.

**Theorem 5**.: _There exists an algorithm that solves CFFA in polynomial time when the conflict graph is a cluster graph comprising \(2\) cliques and the utility functions are uniform._

Proofs of Theorems 4 and 5 are in **Section 4.1**. In light of Theorem 3, the _distance of a graph_ from a complete graph is a natural parameter to study in parameterized complexity. The distance function can be defined in several ways. We define it as follows: the number of edges, say \(t\), whose addition makes the graph a complete graph. We first show a result that gives a _subexponential time algorithm_ when the number of agents is constant.

**Theorem 6**.: _There exists an algorithm that solves CFFA in \(\mathcal{O}((2t\cdot 2^{2\sqrt{t}}+1)^{n}(n+m)^{\mathcal{O}(1)})\) time, where \(t={m\choose 2}-|E(\mathcal{H})|\) denotes the number of edges which, when added to \(\mathcal{H}\), yield a complete graph._

Theorem 6 is obtained by showing that if a graph \(G\) can be made into a clique by adding at most \(t\) edges, then the number of independent sets of size at least \(2\) is upper bounded by \(\mathcal{O}(2t\cdot 2^{2\sqrt{t}})\). However, this is not an FPT algorithm _parameterized by \(t\) alone_.
To show that the problem is FPT parameterized by \(t\), we obtain the following result.

**Theorem 7**.: _There exists an algorithm that solves_ CFFA _in \(\mathcal{O}((2t)^{t+1}(n+m)^{\mathcal{O}(1)})\) time._

In light of Theorem 3, we know that CFFA is polynomial-time solvable when every vertex has degree \(m-1\). Next, we show that the problem is also polynomial-time solvable when every vertex has degree \(m-2\) and the utility functions are uniform.

**Theorem 8**.: _There exists an algorithm that solves_ CFFA _in polynomial time when every vertex in the conflict graph has degree \(m-2\) and the utility functions are uniform._

Proofs of Theorems 7 and 8 are in **Section 4.2**. Table 1 summarises all our results.

## 2 CFFA: A single-exponential FPT algorithm parameterized by #jobs

In this section, we will prove that CFFA is FPT when parameterized by the number of jobs, \(m\). The algorithm will use the technique of polynomial multiplication and fast Fourier transformation. The idea is as follows. For every agent \(i\in\mathcal{A}\), we first construct a family of bundles that can be assigned to the agent \(i\) in an optimal solution. Let us denote this family by \(\mathcal{F}_{i}\). Then, our goal is to find \(n\) disjoint bundles, one from each set \(\mathcal{F}_{i}\). To find these disjoint sets efficiently, we use the technique of polynomial multiplication. Before we discuss our algorithm, we have to introduce some notation and terminology. Since \(\mathcal{I}\) is a set of size \(m\), we can identify \(\mathcal{I}\) with \([m]\). The _characteristic vector_ of a subset \(S\subseteq[m]\), denoted by \(\chi(S)\), is an \(m\)-length vector whose \(i^{\text{th}}\) bit is \(1\) if and only if \(i\in S\). Two binary strings of length \(m\) are said to be disjoint if there is no \(i\in[m]\) such that the \(i^{\text{th}}\) bit is \(1\) in both strings. The _Hamming weight_ of a binary string \(S\), denoted by \(H(S)\), is defined to be the number of \(1\)s in the string \(S\). A monomial \(y^{i}\) is said to have Hamming weight \(w\) if the degree \(i\), when represented as a binary string, has Hamming weight \(w\). We begin with the following observation.

**Observation 2**: _Let \(S_{1}\) and \(S_{2}\) be two binary strings of the same length. Let \(S=S_{1}+S_{2}\). If \(H(S)=H(S_{1})+H(S_{2})\), then \(S_{1}\) and \(S_{2}\) are disjoint binary vectors._

\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Utility Functions & Arbitrary & Uniform & Arbitrary \\ \hline Parameters & Arbitrary & Complete & Cluster & Regular & \(\mathcal{G}\)-Sb-MWIS is \\ & & & 2 cliques (degree \(m-2\)) & FPT wrt \(k\) \\ \hline \(n=\)\#agents & [12] & **Thm.**3 & **Obs.**1 & **Thm.**5 & **Thm.**8 \\ \(s=\)bundleSize & **Thm.**10 & **Thm.**4 & **Thm.**10 \\ \(\eta\) & **Obs.**1 &? &? &? \\ \(\#\)agents + bundleSize & **Obs.**1 & **Thm.**1 & **Thm.**1 \\ \(\#\)agents + bundleSize\(+\eta\) & **Obs.**1 & & **Thm.**1 \\ \(m=\#\)jobs & **Thm.**9 & & & \\ \(t=\binom{m}{2}-|E(\mathcal{H})|\) & **Thm.**7 & & & \\ \hline \end{tabular}
\end{table}
Table 1: Summary of our results for CFFA, where the conflict graph belongs to the family \(\mathcal{G}\). Lavender cells denote polynomial time complexity; _open_ cells and _pink_ cells denote that the problem is FPT and W-hard w.r.t. the parameter in col 1, respectively; white cells with a ? mark denote that the complexity is open; and yellow cells denote that the respective parameters are not interesting as the problem is either FPT w.r.t. a smaller parameter or for a more general graph class.
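As a small illustration of Observation 2 (and of Corollary 1 below), the following Python snippet encodes characteristic vectors as integer bitmasks; adding the bitmasks of two disjoint subsets produces no carries, so the Hamming weights add up exactly. The encoding is illustrative only.

```python
def chi(subset, m):
    """Characteristic vector of subset of {1,...,m}, encoded as an integer bitmask."""
    return sum(1 << (i - 1) for i in subset)

def hamming_weight(x):
    return bin(x).count("1")

m = 6
S1, S2, S3 = {1, 3}, {2, 6}, {3, 4}

# Disjoint sets: the Hamming weight of the sum equals the sum of Hamming weights.
assert hamming_weight(chi(S1, m) + chi(S2, m)) == len(S1) + len(S2)

# Overlapping sets (S1 and S3 share job 3): a carry occurs, so the weights do not add up.
assert hamming_weight(chi(S1, m) + chi(S3, m)) < len(S1) + len(S3)
```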
The following is due to Cygan et al. [9].

Proposition 2: _Let \(S=S_{1}\cup S_{2}\), where \(S_{1}\) and \(S_{2}\) are two disjoint subsets of \([m]\). Then, \(\chi(S)=\chi(S_{1})+\chi(S_{2})\) and \(H(\chi(S))=H(\chi(S_{1}))+H(\chi(S_{2}))=|S_{1}|+|S_{2}|\)._

Observation 2 and Proposition 2 together yield the following.

Corollary 1: _Subsets \(S_{1},S_{2}\subseteq\mathcal{I}\) are disjoint if and only if the Hamming weight of the monomial \(y^{\chi(S_{1})+\chi(S_{2})}\) is \(|S_{1}|+|S_{2}|\)._

The _Hamming projection_ of a polynomial \(p(y)\) to \(h\), denoted by \(H_{h}(p(y))\), is the sum of all the monomials of \(p(y)\) which have Hamming weight \(h\). We define the _representative polynomial_ of \(p(y)\), denoted by \(\mathcal{R}(p(y))\), as the polynomial obtained by replacing every non-zero coefficient of \(p(y)\) with \(1\), i.e., it ignores the actual coefficients and only remembers whether the coefficient is non-zero. We say that a polynomial \(p(y)\) _contains a monomial_ \(y^{i}\) if the coefficient of \(y^{i}\) is non-zero. The zero polynomial is the one in which the coefficient of each monomial is \(0\). Now, we are ready to discuss our algorithm.

**Theorem 9**.: CFFA _is solvable in \(\mathcal{O}(2^{m}(n+m)^{\mathcal{O}(1)})\) time, where \(m=\#\mathsf{jobs}\) and \(n=\#\mathsf{agents}\)._

Proof: Let \(\mathscr{J}=(\mathcal{A},\mathcal{I},\{\mathrm{u}_{a}\}_{a\in\mathcal{A}},\mathcal{H},\eta)\) denote an instance of CFFA. We start by defining a set family indexed by the agents. Let \(\mathcal{A}=[n]\). For an agent \(i\in\mathcal{A}\), let \(\mathcal{F}_{i}\) contain all subsets of \(\mathcal{I}\) that can be _feasibly_ allocated to \(i\) as a bundle. Specifically, a set \(S\subseteq\mathcal{I}\) is in \(\mathcal{F}_{i}\) if \(S\) is an independent set in \(\mathcal{H}\) and the utility \(\sum_{x\in S}\mathrm{u}_{i}(x)\geq\eta\). We define a polynomial for each _round_ inductively as follows. For round \(1\) and a positive integer \(s\), we define a polynomial

\[p^{1}_{s}(y)=\sum_{S\in\mathcal{F}_{1},|S|=s}y^{\chi(S)}\]

For round \(i\in[n]\setminus\{1\}\) and a positive integer \(s\), we define a polynomial by using the \(\mathcal{R}(\cdot)\) operator

\[p^{i}_{s}(y)=\sum_{\begin{subarray}{c}S\in\mathcal{F}_{i}\\ s^{\prime}=s-|S|\end{subarray}}\mathcal{R}\left(H_{s}\left(p^{i-1}_{s^{\prime}}(y)\times y^{\chi(S)}\right)\right)\]

The algorithm returns "yes" if, for some positive integer \(s\), \(p^{n}_{s}(y)\) is non-zero. In fact, any non-zero monomial in the polynomial "represents" a solution for the instance \(\mathscr{J}\) such that we can find the bundle to assign to each agent \(i\in\mathcal{A}\) by backtracking the process all the way to round \(1\).

_Computing a solution (if it exists)._ We assume that for some positive integer \(s\), \(p^{n}_{s}(y)\) is a non-zero polynomial. Thus, it contains a non-zero monomial that arises from a product \(p^{n-1}_{s^{\prime}}(y)\times y^{\chi(S)}\), where \(S\in\mathcal{F}_{n}\) and \(s^{\prime}=s-|S|\). Note that \(\chi(S)\) encodes the bundle assigned to agent \(n\), namely the set \(S\). Since this monomial survives in the polynomial \(p^{n}_{s}(y)\) after applying the \(H_{s}(\cdot)\) function, it must be that \(p^{n-1}_{s^{\prime}}(y)\) contains a monomial \(y^{\chi(S^{\prime})}\) for some set \(S^{\prime}\subseteq\mathcal{I}\) such that \(S^{\prime}\cap S=\emptyset\).
By recursively applying the same argument to the polynomial \(p^{n-1}_{s^{\prime}}(y)\), we can obtain the bundles that are allocated to the agents \(i=n-1,\ldots,1\).

Lemma 1: _The above algorithm returns "yes" if and only if \(\mathscr{J}\) is a yes-instance of_ CFFA_._

Proof: Suppose that \(\mathscr{J}\) is a yes-instance of CFFA. Then, there is an _assignment_, i.e., a function \(\phi\) that maps \(\mathcal{A}\) to pairwise disjoint subsets of \(\mathcal{I}\) and satisfies all the requirements. For each agent \(i\in\mathcal{A}\), we define \(S_{i}=\phi(i)\). We begin with the following claim that enables us to conclude that the polynomial \(p^{n}_{s}(y)\), where \(s=\sum_{i\in[n]}|S_{i}|\), contains the monomial \(y^{\sum_{i\in[n]}\chi(S_{i})}\).

Claim 1: _For each \(j\in[n]\), the polynomial \(p^{j}_{s}(y)\), where \(s=\sum_{i\in[j]}|S_{i}|\), contains the monomial \(y^{\sum_{i\in[j]}\chi(S_{i})}\)._

Proof: The proof is by induction on \(j\). **Base Case:** \(j=1\). We first note that \(S_{1}\) is in the family \(\mathcal{F}_{1}\) as it is a feasible bundle for the agent \(1\). Thus, due to the construction of the polynomial \(p_{s}^{1}(y)\), we know that \(p_{|S_{1}|}^{1}(y)\) contains the monomial \(y^{\chi(S_{1})}\). **Induction Step:** Suppose that the claim is true for \(j=j^{\prime}-1\). We next prove it for \(j=j^{\prime}\). To construct the polynomial \(p_{s}^{j^{\prime}}(y)\), where \(s=\sum_{i\in[j^{\prime}]}|S_{i}|\), we consider the multiplication of the polynomial \(p_{s^{\prime}}^{j^{\prime}-1}(y)\), where \(s^{\prime}=\sum_{i\in[j^{\prime}-1]}|S_{i}|\), with \(y^{\chi(S_{j^{\prime}})}\). Due to the inductive hypothesis, \(p_{s^{\prime}}^{j^{\prime}-1}(y)\), where \(s^{\prime}=\sum_{i\in[j^{\prime}-1]}|S_{i}|\), contains the monomial \(y^{\sum_{i\in[j^{\prime}-1]}\chi(S_{i})}\). Note that \(S_{j^{\prime}}\) is in the family \(\mathcal{F}_{j^{\prime}}\) as it is a feasible bundle for the agent \(j^{\prime}\). Since \(S_{j^{\prime}}\) is disjoint from \(S_{1}\cup\ldots\cup S_{j^{\prime}-1}\), due to Corollary 1, we can infer that \(p_{s}^{j^{\prime}}(y)\), where \(s=\sum_{i\in[j^{\prime}]}|S_{i}|\), has the monomial \(y^{\sum_{i\in[j^{\prime}]}\chi(S_{i})}\).

Due to Claim 1, we can conclude that \(p_{s}^{n}(y)\), where \(s=\sum_{i\in[n]}|S_{i}|\), contains the monomial \(y^{\sum_{i\in[n]}\chi(S_{i})}\). For the other direction, suppose that the algorithm returns "yes". Then, for some positive integer \(s\), \(p_{s}^{n}(y)\) is a non-zero polynomial. We need to show that there exist pairwise disjoint sets \(S_{1},\ldots,S_{n}\) such that \(S_{i}\in\mathcal{F}_{i}\), where \(i\in[n]\). This will give us an assignment function \(\phi\), where \(\phi(i)=S_{i}\). Since each \(S\in\mathcal{F}_{i}\), where \(i\in[n]\), is an independent set and \(\sum_{x\in S}\mathrm{u}_{i}(x)\geq\eta\), \(\phi\) is a feasible assignment. We next prove the following claim that enables us to conclude the existence of pairwise disjoint sets.

**Claim 2**: _For each \(j\in[n]\), if the polynomial \(p_{s}^{j}(y)\) is non-zero for some \(s\in[m]\), then there exist \(j\) pairwise disjoint sets \(S_{1},\ldots,S_{j}\) such that \(S_{i}\in\mathcal{F}_{i}\), where \(i\in[j]\)._

Proof: We prove it by induction on \(j\). **Base Case:** \(j=1\). Suppose \(p_{s}^{1}(y)\) is non-zero for some \(s\in[m]\). Then, it contains a monomial \(y^{\chi(S)}\), where \(S\in\mathcal{F}_{1}\). Thus, the claim is true. **Induction Step:** Suppose that the claim is true for \(j=j^{\prime}-1\). We next prove it for \(j=j^{\prime}\).
Suppose that \(p_{s}^{j^{\prime}}(y)\) is non-zero for some \(s\in[m]\). Then, it contains a monomial that arises from a product \(p_{s^{\prime}}^{j^{\prime}-1}(y)\times y^{\chi(S)}\), where \(|S|=s-s^{\prime}\) and \(S\in\mathcal{F}_{j^{\prime}}\); set \(S_{j^{\prime}}=S\). Due to the induction hypothesis, since \(p_{s^{\prime}}^{j^{\prime}-1}(y)\) is a non-zero polynomial, there exist \(j^{\prime}-1\) pairwise disjoint sets \(S_{1},\ldots,S_{j^{\prime}-1}\) such that \(S_{i}\in\mathcal{F}_{i}\), where \(i\in[j^{\prime}-1]\). Furthermore, due to Corollary 1, we have that \(S_{j^{\prime}}\) is disjoint from \(S_{1}\cup\ldots\cup S_{j^{\prime}-1}\). Thus, we have \(j^{\prime}\) pairwise disjoint sets \(S_{1},\ldots,S_{j^{\prime}}\) such that \(S_{i}\in\mathcal{F}_{i}\), where \(i\in[j^{\prime}]\). This completes the proof.

To claim the running time, we use the following well-known result about polynomial multiplication.

Proposition 3 ([29]): _There exists an algorithm that multiplies two polynomials of degree \(d\) in \(\mathcal{O}(d\log d)\) time._

Lemma 2: _This algorithm runs in \(\mathcal{O}(2^{m}\cdot(n+m)^{\mathcal{O}(1)})\) time._

Proof: In the algorithm, we first construct a family of feasible bundles for each agent \(i\in\mathcal{A}\). Since we check all the subsets of \(\mathcal{I}\), the construction of the families takes \(\mathcal{O}(2^{m}\cdot(n+m)^{\mathcal{O}(1)})\) time. For \(i=1\), we construct \(m\) polynomials, each containing \(\mathcal{O}(2^{m})\) terms. Thus, \(p_{s}^{1}(y)\) can be constructed in \(\mathcal{O}(2^{m}\cdot m)\) time. Then, we recursively construct polynomials by polynomial multiplication. Since every polynomial has degree at most \(\mathcal{O}(2^{m})\), due to Proposition 3, every polynomial multiplication takes \(\mathcal{O}(2^{m}\cdot m)\) time. Hence, the algorithm runs in \(\mathcal{O}(2^{m}\cdot(n+m)^{\mathcal{O}(1)})\) time. Thus, the theorem is proved.

## 3 CFFA: Parameterized by #agents and bundleSize

In this section, we study CFFA parameterized by \(n=\#\mathsf{agents}\), \(\mathsf{bundleSize}\), and their combinations. We first show some hardness results and then complement them with our main algorithmic result.

### NP-hardness when the conflict graph is edgeless and the bundle size is bounded

Since CFFA is NP-hard for all the graph classes for which MWIS is NP-hard [7], in this section we first discuss the intractability of the problem for special classes of graphs on which MWIS can be solved in polynomial time. In particular, we show that the problem is NP-hard even when the conflict graph is edgeless and the size of every bundle is at most 3, via a reduction from the 3-Partition problem. In the 3-Partition problem, we are given a set \(X\) of \(3\tilde{m}\) elements, a bound \(B\in\mathbb{Z}_{+}\), and a size \(s(x)\in\mathbb{Z}_{+}\) for each \(x\in X\) such that \(\nicefrac{{B}}{{4}}<s(x)<\nicefrac{{B}}{{2}}\) and \(\sum_{x\in X}s(x)=\tilde{m}B\). The goal is to decide whether there exists a partition of \(X\) into \(\tilde{m}\) disjoint sets \(X_{1},X_{2},\ldots,X_{\tilde{m}}\) such that for each \(1\leq i\leq\tilde{m}\), \(\sum_{x\in X_{i}}s(x)=B\). Note that each \(X_{i}\) must contain three elements from \(X\). Despite our best efforts, we could not find a reference for this result, and hence we include a proof here for completeness.
**Theorem 10**.: CFFA _is_ NP_-complete when \(\mathcal{H}\) is edgeless and \(s\) is three._

Proof: Given an instance \(\mathscr{J}=(X,B,\{s(x)\}_{x\in X})\) of 3-Partition, we create an instance \(\mathscr{J}^{\prime}=(\mathcal{A},\mathcal{I},\{\mathsf{u}_{a}\}_{a\in\mathcal{A}},\mathcal{H},\eta=2B)\) of CFFA, where \(\mathcal{I}=X\) and \(\mathcal{H}\) is an edgeless graph on the vertex set \(\mathcal{I}\). We define a set of agents \(\mathcal{A}=\{a_{1},\ldots,a_{\tilde{m}}\}\) and for each agent \(a_{i}\in\mathcal{A}\), we define the utility function \(\mathsf{u}_{a_{i}}(x)=B-s(x)\) for each job \(x\in\mathcal{I}\). The intuition behind this construction is that we want to create a _bundle_ so that the utility derived by an agent from that bundle is at least \(2B\), which will be attainable only if the bundle size is three. Next, we prove the correctness of the reduction.

Lemma 3: \(\mathscr{J}\) _is a yes-instance of 3-Partition if and only if \(\mathscr{J}^{\prime}\) is a yes-instance of_ CFFA_._

Proof: If \(\mathscr{J}\) is a yes-instance of 3-Partition, then there is a solution \(X_{1},\ldots,X_{\tilde{m}}\) that satisfies the desired properties, i.e., for each \(1\leq i\leq\tilde{m}\), \(\sum_{x\in X_{i}}s(x)=B\). Note that \(\sum_{x\in X_{i}}\mathsf{u}_{a_{i}}(x)=3B-B=2B\). Thus, the assignment function \(\phi\), where \(\phi(a_{i})=X_{i}\), yields a solution for \(\mathscr{J}^{\prime}\). For the other direction, let \(\phi\) be a solution for \(\mathscr{J}^{\prime}\). That is, for each agent \(a\in\mathcal{A}\), \(\phi(a)\) is the bundle assigned to the agent \(a\). Thus, \(\sum_{x\in\phi(a)}\mathsf{u}_{a}(x)\geq 2B\). We claim that for each agent \(a\in\mathcal{A}\), the bundle size \(|\phi(a)|=3\). If the size is at most two, then \(\mathsf{u}_{a}(\phi(a))\leq 2B-\sum_{x\in\phi(a)}s(x)<2B\), since \(\phi(a)\) is non-empty and for each job \(x\in\phi(a)\), \(s(x)\) is positive by definition. This is a contradiction. Hence, the only possibility is that for each agent \(a\in\mathcal{A}\), \(|\phi(a)|\geq 3\). If for some agent \(a\in\mathcal{A}\), the bundle \(\phi(a)\) has more than three jobs, then, since there are only \(3\tilde{m}\) jobs in total, for some agent \(a^{\prime}\neq a\) the bundle \(\phi(a^{\prime})\) will contain at most two jobs, and thus will not attain the target. Hence, for each agent, the bundle size is exactly three. Next, we claim that for each agent the utility of its bundle is exactly \(\eta=2B\). Suppose that there is an agent \(a\in\mathcal{A}\) such that the utility of its bundle satisfies \(\sum_{x\in\phi(a)}\mathsf{u}_{a}(x)>2B\). By definition, \(\sum_{x\in\phi(a)}\mathsf{u}_{a}(x)=3B-\sum_{x\in\phi(a)}s(x)\). Thus, it follows that \(\sum_{x\in\phi(a)}s(x)<B\). Since \(\sum_{x\in X}s(x)=\tilde{m}B\), it must be that \(\sum_{x\in\mathcal{I}\setminus\phi(a)}s(x)>(\tilde{m}-1)B\). Moreover, each bundle has size exactly three, and \(\mathcal{I}\setminus\phi(a)\) has \(3(\tilde{m}-1)\) jobs, so there must exist a bundle \(\phi(a^{\prime})\) for some agent \(a^{\prime}\neq a\) such that \(\sum_{x\in\phi(a^{\prime})}s(x)>B\), and so that agent's utility \(\mathsf{u}_{a^{\prime}}(\phi(a^{\prime}))=3B-\sum_{x\in\phi(a^{\prime})}s(x)<2B\). Hence, we have reached a contradiction. Thus, for every agent \(a\in\mathcal{A}\), the utility of its bundle is exactly \(2B\). We now note that we can form a solution for the instance \(\mathscr{J}\) of 3-Partition by taking the three jobs constituting each bundle. More specifically, for each \(i\in[\tilde{m}]\), we define \(X_{i}=\{x\colon x\in\phi(a_{i})\}\).
For each \(i\in[\tilde{m}]\), since \(\mathsf{u}_{a_{i}}(x)=B-s(x)\), we have

\[\sum_{x\in X_{i}}s(x)=\sum_{x\in\phi(a_{i})}(B-\mathsf{u}_{a_{i}}(x))=3B-\sum_{x\in\phi(a_{i})}\mathsf{u}_{a_{i}}(x)=3B-2B=B\]

Hence, \(\mathscr{J}\) is a yes-instance of 3-Partition. Thus, the theorem is proved.

### Proof of Theorem 1

In this section, we give the proof of Theorem 1. Let \(\mathscr{J}=(\mathcal{A},\mathcal{I},\{\mathrm{u}_{a}\}_{a\in\mathcal{A}},\mathcal{H},s,\eta)\) be an instance of \(\mathcal{G}\)-Sb-CFFA, and let \(|\mathscr{J}|\) denote the size of the instance. We first prove the first part of Theorem 1, which is the easier direction of the proof. In particular, let \(\mathbb{A}\) be an \(\mathsf{FPT}\) algorithm for \(\mathcal{G}\)-Sb-CFFA, running in time \(f(n,s)|\mathscr{J}|^{\mathcal{O}(1)}\). Given an instance \((G,k,\rho,w)\) of \(\mathcal{G}\)-Sb-MWIS, we construct an instance \(\mathscr{J}=(\mathcal{A},\mathcal{I},\{\mathrm{u}_{a}\}_{a\in\mathcal{A}},\mathcal{H},s,\eta)\) of \(\mathcal{G}\)-Sb-CFFA as follows. The set of agents \(\mathcal{A}\) has only one agent \(a^{\star}\). Further, \(\mathcal{I}=V(G)\), \(\mathrm{u}_{a^{\star}}=w\), \(\mathcal{H}=G\), \(s=k\), and \(\eta=\rho\). It is easy to see that \((G,k,\rho,w)\) is a yes-instance of \(\mathcal{G}\)-Sb-MWIS if and only if \(\mathscr{J}\) is a yes-instance of \(\mathcal{G}\)-Sb-CFFA. Thus, by invoking algorithm \(\mathbb{A}\) on the instance \(\mathscr{J}\) of \(\mathcal{G}\)-Sb-CFFA, we get an \(\mathsf{FPT}\) algorithm for \(\mathcal{G}\)-Sb-MWIS that runs in \(f(1,k)|\mathscr{J}|^{\mathcal{O}(1)}\) time. This completes the proof in the forward direction. In the rest of the section, we prove the reverse direction of the proof. That is, given an \(\mathsf{FPT}\) algorithm for \(\mathcal{G}\)-Sb-MWIS, we design an \(\mathsf{FPT}\) algorithm for \(\mathcal{G}\)-Sb-CFFA. For ease of explanation, we first present a randomized algorithm which will be derandomized later using the known tool of a \((p,q)\)-_perfect hash family_ [2, 19].

#### 3.2.1 Randomized Algorithm

In this section, we design a randomized algorithm with the following specification. If the input, \(\mathscr{J}\), is a no-instance then the algorithm always returns "no". However, if the input, \(\mathscr{J}\), is a yes-instance then the algorithm returns "yes" with probability at least \(1/2\). Throughout this section, we assume that we have been given a yes-instance. This implies that there exists a hypothetical solution \(\phi\colon\mathcal{A}\to 2^{\mathcal{I}}\). We define everything with respect to \(\phi\). That is, \(\phi\colon\mathcal{A}\to 2^{\mathcal{I}}\) is an injective function satisfying all the requirements. Let \(S=\phi(\mathcal{A})=\cup_{a\in\mathcal{A}}\phi(a)\), i.e., the set of jobs that are assigned to some agent. Further, note that \(|S|\leq ns\), as the size of each bundle is upper bounded by \(s\). Our main idea is to first highlight all the jobs in the set \(S\), i.e., those that are assigned to some agent, using color coding.

**Separation of jobs:** Color the vertices of \(\mathcal{H}\) uniformly and independently at random using \(ns\) colors, say \(\{1,\ldots,ns\}\). The goal of the coloring is that, "with high probability", we color the jobs assigned to agents in a solution using distinct colors. The following proposition bounds the success probability.

Proposition 4: [8, Lemma 5.4] _Let \(U\) be a universe and \(X\subseteq U\).
Let \(\chi\colon U\to[|X|]\) be a function that colors each element of \(U\) with one of \(|X|\) colors uniformly and independently at random. Then, the probability that the elements of \(X\) are colored with pairwise distinct colors is at least \(e^{-|X|}\)._

Due to Proposition 4, the coloring step of the algorithm colors the jobs in \(\phi(\mathcal{A})\) using distinct colors with probability at least \(e^{-ns}\). We call an assignment \(\phi\colon\mathcal{A}\to 2^{\mathcal{I}}\) _colorful_ if every two jobs \(i,i^{\prime}\in\phi(\mathcal{A})\) get distinct colors. Moreover, for each \(a\), \(|\phi(a)|\leq s\). Next, we find a _colorful_ feasible assignment in the following lemma. Further, let us assume that we have an \(\mathsf{FPT}\) algorithm, \(\mathbb{B}\), for \(\mathcal{G}\)-Sb-MWIS running in time \(h(k)n^{\mathcal{O}(1)}\).

Lemma 4: _Let \(\mathscr{J}=(\mathcal{A},\mathcal{I},\{\mathrm{u}_{a}\}_{a\in\mathcal{A}},\mathcal{H},s,\eta)\) be an instance of \(\mathcal{G}\)-Sb-CFFA and \(\chi\colon V(\mathcal{H})\to[ns]\) be a coloring function. Then, there exists a dynamic programming algorithm that finds a colorful feasible assignment \(\phi\colon\mathcal{A}\to 2^{\mathcal{I}}\) in \(\mathcal{O}(3^{ns}\cdot h(s)\cdot(n+m)^{\mathcal{O}(1)})\) time, if one exists, and otherwise returns "no"._

Proof: Let \(\mathsf{colors}=\{1,\ldots,ns\}\) be the set of colors and let \(a_{1},\ldots,a_{n}\) be an arbitrary ordering of the agents. We apply dynamic programming: for a non-empty set \(S\subseteq\mathsf{colors}\) and \(i\in[n]\), we define the table entry \(T[i,S]\) as \(1\) if there is a colorful feasible assignment of jobs (that are colored by the function \(\chi\)) using colors in \(S\) to agents \(\{a_{1},\ldots,a_{i}\}\); otherwise it is \(0\). For an agent \(a\in\mathcal{A}\) and \(S\subseteq\mathsf{colors}\), let \(\mathcal{H}_{a,S}\) be a vertex-weighted graph constructed as follows. Let \(V_{S}\) be the subset of vertices in \(\mathcal{H}\) that are colored using the colors in \(S\). Then, \(\mathcal{H}_{a,S}=\mathcal{H}[V_{S}]\). The weight of every vertex \(x\in\mathcal{H}_{a,S}\) is \(\mathrm{u}_{a}(x)\). For a vertex-weighted graph \(G\), let \(\mathbb{I}(G)\in\{0,1\}\), where \(\mathbb{I}(G)=1\) if there exists an independent set of size at most \(s\) and weight at least \(\eta\) in \(G\), and otherwise \(\mathbb{I}(G)=0\). We compute \(\mathbb{I}(G)\) using algorithm \(\mathbb{B}\). We compute the table entries as follows.

**Base Case:** For \(i=1\) and a non-empty set \(S\), we compute as follows:

\[T[1,S]=\mathbb{I}(\mathcal{H}_{a_{1},S}) \tag{1}\]

**Recursive Step:** For \(i>1\) and a non-empty set \(S\), we compute as follows:

\[T[i,S]=\bigvee_{\emptyset\neq S^{\prime}\subset S}T[i-1,S^{\prime}]\wedge\mathbb{I}(\mathcal{H}_{a_{i},S\setminus S^{\prime}}) \tag{2}\]

We return "yes" if \(T[n,S]=1\) for some \(S\subseteq\mathsf{colors}\), otherwise "no". Next, we prove the correctness of the algorithm. Towards this, we prove the following result.

**Claim 3**: _Equation (1) and Equation (2) correctly compute \(T[i,S]\), for each \(i\in[n]\) and \(\emptyset\neq S\subseteq\mathsf{colors}\)._

Proof: We will prove it by induction on \(i\). For \(i=1\), we are looking for any feasible assignment of jobs colored using the colors in \(S\) to the agent \(a_{1}\). Thus, Equation (1) computes \(T[1,S]\) correctly due to the construction of the graph \(\mathcal{H}_{a_{1},S}\) and the correctness of algorithm \(\mathbb{B}\). Now, consider the recursive step.
For \(i>1\) and \(\emptyset\neq S\subseteq\mathsf{colors}\), we compute \(T[i,S]\) using Equation (2). We show that the recursive formula is correct. Suppose that Equation (2) computes \(T[i^{\prime},S]\) correctly, for all \(i^{\prime}<i\) and \(\emptyset\neq S\subseteq\mathsf{colors}\). First, we show that \(T[i,S]\) is at most the R.H.S. of Equation (2). If \(T[i,S]=0\), then the claim trivially holds. Suppose that \(T[i,S]=1\). Let \(\psi\) be a colorful feasible assignment to agents \(\{a_{1},\ldots,a_{i}\}\) using jobs that are colored using colors in \(S\). Let \(S_{j}\subseteq S\) be the set of colors of jobs in \(\psi(a_{j})\), where \(j\in[i]\). Since \(\psi(a_{i})\) uses the colors from the set \(S_{i}\) and \(\sum_{x\in\psi(a_{i})}\mathsf{u}_{a_{i}}(x)\geq\eta\), due to the construction of \(\mathcal{H}_{a_{i},S_{i}}\), we have that \(\mathbb{I}(\mathcal{H}_{a_{i},S_{i}})=1\). Consider the assignment \(\psi^{\prime}=\psi|_{\{a_{1},\ldots,a_{i-1}\}}\) (restrict the domain to \(\{a_{1},\ldots,a_{i-1}\}\)). Since \(S_{i}\) is disjoint from \(S_{1}\cup\ldots\cup S_{i-1}\) due to the definition of a colorful assignment, \(\psi^{\prime}\) is a feasible assignment for the agents \(\{a_{1},\ldots,a_{i-1}\}\) such that the color of every job in \(\psi^{\prime}(\{a_{1},\ldots,a_{i-1}\})\) is in \(S\setminus S_{i}\). Furthermore, since \(\psi\) is colorful, \(\psi^{\prime}\) is also colorful. Hence, \(T[i-1,S\setminus S_{i}]=1\) due to the induction hypothesis. Hence, the R.H.S. of Equation (2) is 1. Thus, \(T[i,S]\) is at most the R.H.S. of Equation (2). For the other direction, we show that \(T[i,S]\) is at least the R.H.S. of Equation (2). If the R.H.S. is 0, then the claim trivially holds. Suppose the R.H.S. is 1. That is, there exists \(S^{\prime}\subseteq S\) such that \(T[i-1,S^{\prime}]=1\) and \(\mathbb{I}(\mathcal{H}_{a_{i},S\setminus S^{\prime}})=1\). Let \(\psi\) be a colorful feasible assignment to agents \(\{a_{1},\ldots,a_{i-1}\}\) using jobs that are colored using colors in \(S^{\prime}\). Since \(\mathbb{I}(\mathcal{H}_{a_{i},S\setminus S^{\prime}})=1\), there exists an independent set \(X\subseteq V_{S\setminus S^{\prime}}\) of size at most \(s\) such that \(\sum_{x\in X}\mathsf{u}_{a_{i}}(x)\geq\eta\). Thus, construct an assignment \(\psi^{\prime}\) as follows: \(\psi^{\prime}(a)=\psi(a)\) if \(a\in\{a_{1},\ldots,a_{i-1}\}\), and \(\psi^{\prime}(a_{i})=X\). Since \(\psi\) is a feasible assignment for the agents \(\{a_{1},\ldots,a_{i-1}\}\) and \(\mathbb{I}(\mathcal{H}_{a_{i},S\setminus S^{\prime}})=1\), \(\psi^{\prime}\) is a feasible assignment. Furthermore, since \(\psi\) is colorful and \(\psi(\{a_{1},\ldots,a_{i-1}\})\) only uses colors from the set \(S^{\prime}\), \(\psi^{\prime}\) is also colorful. Hence, \(T[i,S]=1\). Due to Claim 3, \(T[n,S]=1\) for some \(S\subseteq\mathsf{colors}\) if and only if \(\mathscr{J}\) admits a colorful feasible assignment with respect to \(\chi\). This completes the proof of the lemma.

Due to Proposition 4 and Lemma 4, we obtain an \(\mathcal{O}(3^{ns}\cdot h(s)\cdot(n+m)^{\mathcal{O}(1)})\) time randomized algorithm for \(\mathcal{G}\)-Sb-CFFA which succeeds with probability at least \(e^{-ns}\). Thus, by repeating the algorithm independently \(e^{ns}\) times, we obtain the following result.

**Theorem 11**: _There exists a randomized algorithm that, given an instance \(\mathscr{J}=(\mathcal{A},\mathcal{I},\{\mathsf{u}_{a}\}_{a\in\mathcal{A}},\mathcal{H},s,\eta)\) of \(\mathcal{G}\)-Sb-CFFA, either reports a failure or finds a feasible assignment in \(\mathcal{O}((3e)^{ns}\cdot h(s)\cdot(n+m)^{\mathcal{O}(1)})\) time.
Moreover, if the algorithm is given a yes-instance, the algorithm returns "yes" with probability at least \(1/2\), and if the algorithm is given a no-instance, the algorithm returns "no" with probability \(1\)._

Proof: Let \(\mathscr{J}=(\mathcal{A},\mathcal{I},\{\mathsf{u}_{a}\}_{a\in\mathcal{A}},\mathcal{H},s,\eta)\) be an instance of \(\mathcal{G}\)-Sb-CFFA. We color the jobs uniformly at random with colors \([ns]\). Let \(\chi\colon V(\mathcal{H})\to[ns]\) be this coloring function. We run the algorithm in Lemma 4 on the instance \(\mathscr{J}\) with the coloring function \(\chi\). If the algorithm returns "yes", then we return "yes". Otherwise, we report failure. Let \(\mathscr{J}\) be a yes-instance of \(\mathcal{G}\)-Sb-CFFA and \(\phi\) be a hypothetical solution. Due to Proposition 4, all the jobs in \(\phi(\mathcal{A})\) are colored using distinct colors with probability at least \(e^{-ns}\). Thus, the algorithm in Lemma 4 returns "yes" with probability at least \(e^{-ns}\). Thus, to boost the success probability to a constant, we repeat the algorithm independently \(e^{ns}\) times. Thus, the success probability is at least

\[1-\left(1-\frac{1}{e^{ns}}\right)^{e^{ns}}\geq 1-\frac{1}{e}\geq\frac{1}{2}\]

If the algorithm returns "yes", then clearly \(\mathscr{J}\) is a yes-instance of \(\mathcal{G}\)-Sb-CFFA due to Lemma 4.

#### 3.2.2 Deterministic Algorithm

We derandomize the algorithm using a \((p,q)\)-perfect hash family to obtain a deterministic algorithm for our problem.

Definition 1 (\((p,q)\)-perfect hash family): ([2]) For non-negative integers \(p\) and \(q\), a family of functions \(f_{1},\ldots,f_{t}\) from a universe \(U\) of size \(p\) to a universe of size \(q\) is called a \((p,q)\)-perfect hash family if, for any subset \(S\subseteq U\) of size at most \(q\), there exists \(i\in[t]\) such that \(f_{i}\) is injective on \(S\). We can construct a \((p,q)\)-perfect hash family using the following result.

Proposition 5 ([8, 30]): _There is an algorithm that, given \(p,q\geq 1\), constructs a \((p,q)\)-perfect hash family of size \(e^{q}q^{\mathcal{O}(\log q)}\log p\) in time \(e^{q}q^{\mathcal{O}(\log q)}p\log p\)._

Let \(\mathscr{J}=(\mathcal{A},\mathcal{I},\{\mathrm{u}_{a}\}_{a\in\mathcal{A}},\mathcal{H},s,\eta)\) be an instance of \(\mathcal{G}\)-Sb-CFFA. Instead of taking a random coloring \(\chi\), we construct an \((m,ns)\)-perfect hash family \(\mathcal{F}\) using Proposition 5. Then, for each function \(f\in\mathcal{F}\), we invoke the algorithm in Lemma 4 with the coloring function \(\chi=f\). If there exists a feasible assignment \(\phi\colon\mathcal{A}\to 2^{\mathcal{I}}\) such that \(|\phi(a)|\leq s\) for all \(a\in\mathcal{A}\), then there exists a function \(f\in\mathcal{F}\) that is injective on \(\phi(\mathcal{A})\), since \(\mathcal{F}\) is an \((m,ns)\)-perfect hash family. Consequently, due to Lemma 4, the algorithm returns "yes". Hence, we obtain the following deterministic algorithm.

**Theorem 12**: _There exists a deterministic algorithm for \(\mathcal{G}\)-Sb-CFFA running in time \(\mathcal{O}((3e)^{ns}\cdot(ns)^{\log ns}\cdot h(s)\cdot(n+m)^{\mathcal{O}(1)})\)._

Due to Theorem 12, we can conclude the following.
Corollary 2: _If \(\mathcal{G}\)-Sb-MWIS is solvable in polynomial time, then there exists a deterministic algorithm for \(\mathcal{G}\)-Sb-CFFA running in time \(\mathcal{O}((3e)^{ns}\cdot(ns)^{\log ns}\cdot(n+m)^{\mathcal{O}(1)})\)._

It is possible that MWIS is polynomial-time solvable on \(\mathcal{G}\), but \(\mathcal{G}\)-Sb-MWIS is NP-complete, as a maximum weight independent set need not have size at most \(k\). However, when we use an algorithm \(\mathbb{B}\) for \(\mathcal{G}\)-Sb-MWIS in our algorithm, we could have simply used an algorithm for MWIS. Though this will not yield the bound \(s\) on the size of the independent set of weight at least \(\eta\) that we find, it is sufficient to solve CFFA. However, we need to use \(\mathcal{G}\)-Sb-MWIS when MWIS is NP-complete and we wish to use an FPT algorithm with respect to \(k\). Due to Theorem 12 and this observation, CFFA is FPT when parameterized by \(n+s\) for several graph classes, such as chordal graphs [22], bipartite graphs [22], \(P_{6}\)-free graphs [23], outerstring graphs [26], and fork-free graphs [28].

Remark 1: _Our algorithm for chordal graphs is an improvement over the known algorithm that runs in \(\mathcal{O}(m^{n+2}(Q+1)^{2n})\) time, where \(Q=\max_{a\in\mathcal{A}}\sum_{i\in\mathcal{I}}p_{a}(i)\) [7]._

### FPT Algorithms for \(\mathcal{G}\)-Sb-MWIS when \(\mathcal{G}\) is \(f\)-ifc

In this section, we prove Theorem 2. Let \((G,k,\rho,w)\) be a given instance of \(\mathcal{G}\)-Sb-MWIS, where \(\mathcal{G}\) is an \(f\)-ifc. Let \(\mathsf{HighWeight}=\{v\in V(G)\colon w(v)\geq\nicefrac{{\rho}}{{k}}\}\). Note that if there exists an independent set of \(G[\mathsf{HighWeight}]\) of size \(k\), then it is a solution to our problem, as its weight is at least \(k\cdot\nicefrac{{\rho}}{{k}}=\rho\). Since \(G\) belongs to the hereditary class \(\mathcal{G}\), so does \(G[\mathsf{HighWeight}]\). Thus, there exists an independent set in \(G[\mathsf{HighWeight}]\) of size at least \(f(|\mathsf{HighWeight}|)\). If \(f(|\mathsf{HighWeight}|)\geq k\), then there exists a desired solution. To find a solution, we do as follows. Consider an arbitrary set \(X\subseteq\mathsf{HighWeight}\) of size \(f^{-1}(k)\). The size of \(X\) guarantees that the set \(X\) also contains a desired solution. Now we enumerate the subsets of \(X\) of size \(k\) one by one and check whether each is independent; if we find an independent one, we return it. This concludes the proof in this case. Otherwise, \(|\mathsf{HighWeight}|<f^{-1}(k)\). Note that any solution contains at least one vertex of \(\mathsf{HighWeight}\), since \(k\) vertices each of weight less than \(\nicefrac{{\rho}}{{k}}\) have total weight less than \(\rho\). Thus, we guess a vertex, say \(v\), in the set \(\mathsf{HighWeight}\) which is in the solution, delete \(v\) and its neighbors from \(G\), and decrease \(k\) by \(1\). Repeat the algorithm on the instance \((G-N[v],k-1,\rho-w(v),w|_{V(G-N[v])})\). Since the number of guesses at any step of the algorithm is at most \(f^{-1}(k)\) and the algorithm repeats at most \(k\) times, the running time of the algorithm is \(\mathcal{O}((f^{-1}(k))^{k}\cdot(n+m)^{\mathcal{O}(1)})\).
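For concreteness, here is a minimal Python sketch of the branching procedure just described. The adjacency-dictionary graph representation, the function name `sb_mwis_f_ifc`, and the passed-in inverse `f_inv` are illustrative assumptions, not an optimized implementation.

```python
from itertools import combinations

def sb_mwis_f_ifc(adj, w, k, rho, f_inv):
    """Branching sketch for Sb-MWIS on an f-independence friendly class.

    adj  : dict vertex -> set of neighbours (the graph G)
    w    : dict vertex -> positive integer weight
    k    : size bound on the independent set
    rho  : weight target
    f_inv: callable with f_inv(k) = f^{-1}(k) for the class at hand
    Returns an independent set of size <= k and weight >= rho, or None.
    """
    if rho <= 0:
        return set()
    if k == 0:
        return None
    high = [v for v in adj if w[v] * k >= rho]          # vertices with w(v) >= rho/k
    if len(high) >= f_inv(k):
        # An f^{-1}(k)-sized portion of 'high' already contains a k-sized
        # independent set, and any such set has weight >= k * rho/k = rho.
        pool = high[: f_inv(k)]
        for cand in combinations(pool, k):
            if all(u not in adj[v] for u, v in combinations(cand, 2)):
                return set(cand)
    # Otherwise branch: some high-weight vertex must belong to the solution.
    for v in high:
        removed = {v} | adj[v]
        sub_adj = {u: adj[u] - removed for u in adj if u not in removed}
        rest = sb_mwis_f_ifc(sub_adj, w, k - 1, rho - w[v], f_inv)
        if rest is not None:
            return rest | {v}
    return None

# Example on a bipartite graph (f(n) = n/2, so f_inv(k) = 2k).
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
w = {1: 5, 2: 1, 3: 1, 4: 5}
print(sb_mwis_f_ifc(adj, w, k=2, rho=10, f_inv=lambda k: 2 * k))  # {1, 4}
```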
Corollary 3: _There exists an algorithm that solves \(\mathcal{G}\)-\(\mathrm{Sb}\)-\(\mathrm{MWIS}\) in \(\mathcal{O}((2k)^{k}\cdot(n+m)^{\mathcal{O}(1)})\), \(\mathcal{O}((4k^{2})^{k}\cdot(n+m)^{\mathcal{O}(1)})\), \(\mathcal{O}((4k)^{k}\cdot(n+m)^{\mathcal{O}(1)})\), \(\mathcal{O}((dk+k)^{k}\cdot(n+m)^{\mathcal{O}(1)})\), \(\mathcal{O}(R(\ell,k)^{k}\cdot(n+m)^{\mathcal{O}(1)})\) time, when \(\mathcal{G}\) is the family of bipartite graphs, triangle-free graphs, planar graphs, \(d\)-degenerate graphs, or graphs excluding \(K_{\ell}\) as an induced subgraph, respectively. Here, \(R(\ell,k)\) is an upper bound on the Ramsey number._

### A polynomial time algorithm for \(\mathcal{G}\)-Sb-MWIS when \(\mathcal{G}\) is a cluster graph

In this section, we design a polynomial time algorithm. Let \(\mathscr{J}=(G,k,\rho,w)\) be a given instance of \(\mathcal{G}\)-\(\mathrm{Sb}\)-\(\mathrm{MWIS}\). From each clique, we pick a vertex of highest weight. Let \(X\) be the set of these vertices. Let \(S\subseteq X\) be a set of \(k\) vertices of \(X\) of highest weight (or all of \(X\) if \(|X|<k\)). We return "yes" if \(w(S)\geq\rho\), otherwise, "no". Next, we argue the correctness of the algorithm. If we return "yes", then, clearly, \(S\) is an independent set of size at most \(k\) and weight at least \(\rho\). In the other direction, suppose that \(Z\) is a solution to \(\mathscr{J}\). Suppose that \(Z\) picks elements from the cliques \(C_{1},C_{2},\ldots,C_{\ell}\), \(\ell\leq k\). If \(Z\) does not pick a highest weight vertex from \(C_{j}\), for some \(j\leq\ell\), then we can replace the vertex \(v=Z\cap C_{j}\) with a highest weight vertex of \(C_{j}\) in \(Z\), and it is still a solution. Note that if \(S\cap C_{j}=\emptyset\), where \(j\leq\ell\), then \(S\) contains a vertex whose weight is at least the weight of \(v=Z\cap C_{j}\), due to the construction of \(S\). Since \(|S|\geq|Z|\), we have a unique such vertex for every such \(j\). Thus, \(w(S)\geq w(Z)\geq\rho\), and hence, the algorithm returns "yes".

## 4 Distance From Chaos: Examining possible structures of the conflict graph

The starting point of our results in this section is the polynomial-time algorithm for CFFA when the conflict graph is a complete graph (there is an edge between every pair of vertices). We give the proof of Theorem 3. We begin with a simple observation, which follows due to the fact that the maximum size of an independent set in a clique is \(1\).

**Observation 3**: _If the conflict graph is a complete graph, then the bundle size is at most \(1\)._

Proof (of Theorem 3): Let \(\mathscr{J}=(\mathcal{A},\mathcal{I},\{\mathrm{u}_{a}\}_{a\in\mathcal{A}},\mathcal{H},\eta)\) be a given instance of CFFA. In light of Observation 3, we construct an auxiliary bipartite graph \(G=(L,R)\) as follows: for every agent \(a\in\mathcal{A}\), we add a vertex \(a\) to \(L\), and for every job \(x\in\mathcal{I}\), we add a vertex \(x\) to \(R\). If \(\mathrm{u}_{a}(x)\geq\eta\), then we add an edge \(ax\) to \(G\). Next, we find a maximum matching \(M\) in \(G\). If \(M\) does not saturate \(L\), then we return "no"; otherwise we return the function \(\phi\colon\mathcal{A}\to 2^{\mathcal{I}}\) with \(\phi(a)=\{M(a)\}\), where \(M(a)\) denotes the job matched to \(a\). Next, we prove the correctness of the algorithm. Clearly, if we return the function \(\phi\), then it is a solution, as we have an edge \(ax\) if and only if \(\mathrm{u}_{a}(x)\geq\eta\) and \(M\) is a matching in \(G\) saturating \(L\). Next, we prove that if \(\mathscr{J}\) is a yes-instance of CFFA, then the algorithm returns a function.
Let \(\phi\) be a solution to \(\mathscr{J}\). By Observation 3, each bundle \(\phi(a)\) consists of a single job \(x\) with \(\mathrm{u}_{a}(x)\geq\eta\), so the edge \(ax\) is in \(G\). Thus, there exists a matching in \(G\) saturating \(L\). Hence, the maximum matching \(M\) in \(G\) saturates \(L\) and the algorithm returns a function.

### When conflict is highly localized: the conflict graph is a cluster graph

As discussed in the Introduction, there can be scenarios where the incompatibilities are highly localized, in the sense that the set of jobs can be decomposed into small chunks such that there are incompatibilities between all jobs in the same chunk and none between jobs in different chunks. Such a scenario is captured by a cluster graph. We show that the problem is intractable for cluster graphs even when the graph consists of \(3\) cliques. This is in contrast to Theorem 3. To show the NP-hardness, we give a polynomial time reduction from the Numerical \(3\)-Dimensional Matching problem, which is known to be NP-hard [21]. In the Numerical \(3\)-Dimensional Matching problem, we are given three disjoint sets \(X\), \(Y\), and \(Z\), each containing \(\tilde{m}\) elements, a size \(s(a)\in\mathbb{Z}_{+}\) for each element \(a\in X\cup Y\cup Z\), and a bound \(B\in\mathbb{Z}_{+}\). The goal is to partition \(X\cup Y\cup Z\) into \(\tilde{m}\) disjoint sets \(A_{1},\ldots,A_{\tilde{m}}\) such that (i) each \(A_{i}\), where \(i\in[\tilde{m}]\), contains exactly one element from each of \(X\), \(Y\), and \(Z\), and (ii) for each \(i\in[\tilde{m}]\), \(\sum_{a\in A_{i}}s(a)=B\). Note that it follows that \(\sum_{i\in[\tilde{m}]}\sum_{a\in A_{i}}s(a)=\tilde{m}B\). Next, we give the desired reduction.

Proof (of Theorem 4): Given an instance \(\mathscr{J}=(X,Y,Z,\{s(a)\}_{a\in X\cup Y\cup Z},B)\) of the Numerical \(3\)-Dimensional Matching problem, we create an instance \(\mathscr{J}^{\prime}=(\mathcal{A},\mathcal{I},\{\mathrm{u}_{a}\}_{a\in\mathcal{A}},\mathcal{H},\eta=B)\) of CFFA, where \(\mathcal{I}=X\cup Y\cup Z\) and \(\mathcal{H}\) is a cluster graph on the vertex set \(\mathcal{I}\) with induced cliques on the vertex sets \(X\), \(Y\), and \(Z\). We define a set of agents \(\mathcal{A}=\{a_{1},\ldots,a_{\tilde{m}}\}\) and for each agent \(a_{i}\in\mathcal{A}\), we define the utility function \(\mathrm{u}_{a_{i}}(j)=s(j)\) for each job \(j\in\mathcal{I}\).

Claim 4: \(\mathscr{J}\) _is a yes-instance of Numerical \(3\)-Dimensional Matching if and only if \(\mathscr{J}^{\prime}\) is a yes-instance of_ CFFA_._

Proof: Suppose that \(\mathscr{J}\) is a yes-instance of Numerical \(3\)-Dimensional Matching. Then, there is a solution \(A_{i},i\in[\tilde{m}]\), that satisfies the desired properties. It then follows that for each agent \(a_{i}\in\mathcal{A}\), we have \(\sum_{j\in A_{i}}\mathrm{u}_{a_{i}}(j)=\sum_{j\in A_{i}}s(j)=B\). Thus, the assignment function \(\phi\), where \(\phi(a_{i})=A_{i}\), yields a solution for \(\mathscr{J}^{\prime}\) as well, due to the construction of \(\mathcal{H}\). Conversely, suppose that we have a solution for \(\mathscr{J}^{\prime}\), i.e., an assignment function \(\phi\) for which \(\phi(a_{i})\cap\phi(a_{j})=\emptyset\) for every \(\{a_{i},a_{j}\}\subseteq\mathcal{A}\), and for every \(a_{i}\in\mathcal{A}\), \(\phi(a_{i})\) is an independent set and \(\sum_{j\in\phi(a_{i})}\mathrm{u}_{a_{i}}(j)\geq B\).
Suppose that there exists an agent \(a_{i}\) whose bundle satisfies \(\sum_{j\in\phi(a_{i})}\mathrm{u}_{a_{i}}(j)>B\); then, taking all the \(\tilde{m}\) bundles together, we note that \(\sum_{i\in[\tilde{m}]}\sum_{j\in\phi(a_{i})}\mathrm{u}_{a_{i}}(j)>\tilde{m}B\), a contradiction, since the utilities are the sizes \(s(\cdot)\) and \(\sum_{a\in X\cup Y\cup Z}s(a)=\tilde{m}B\). Hence, we know that for each agent \(a_{i}\in\mathcal{A}\), \(\sum_{j\in\phi(a_{i})}\mathrm{u}_{a_{i}}(j)=B\). Since there are \(\tilde{m}\) agents, every job is assigned to some agent. Furthermore, since \(\phi(a_{i})\) is an independent set in \(\mathcal{H}\), for each \(i\in[\tilde{m}]\), the bundle \(\phi(a_{i})\) contains at most one element from each clique; as all \(3\tilde{m}\) jobs are assigned, it contains exactly \(3\) elements, one from each clique. Hence, setting \(A_{i}=\phi(a_{i})\) gives a solution to \(\mathscr{J}\). This completes the proof.

Next, we show that when the cluster graph has only two cliques and the utility functions are uniform, i.e., for any two agents \(a,a^{\prime}\), \(\mathrm{u}_{a}=\mathrm{u}_{a^{\prime}}\), then CFFA can be solved in polynomial time. In particular, we prove Theorem 5. For arbitrary utility functions, the complexity is open. We begin by noting that due to the utility functions being uniform, every bundle is valued equally by every agent. This allows us to look at the problem purely from the perspective of partitioning the jobs into \(n\) bundles of size at most two.

Proof (of Theorem 5): Let \(\mathscr{J}=(\mathcal{A},\mathcal{I},\{\mathrm{u}_{a}\}_{a\in\mathcal{A}},\mathcal{H},\eta)\) be a given instance of CFFA. Since the utility functions are uniform, we skip the agent identification from the subscript of the utility function, i.e., instead of writing \(\mathrm{u}_{a}\) for the utility function of agent \(a\), we will only use \(\mathrm{u}\). We note that if there exists a job \(z\) such that \(\mathrm{u}(z)\geq\eta\), then there exists a solution that assigns it to some agent. Since the utility functions are uniform, it can be assigned to any agent. Let \(\mathcal{I}_{\mathsf{HighUtility}}\subseteq\mathcal{I}\) be the set of jobs whose utility is at least \(\eta\), i.e., \(\mathcal{I}_{\mathsf{HighUtility}}=\{z\in\mathcal{I}\colon\mathrm{u}(z)\geq\eta\}\). Let \(\mathcal{I}_{\mathsf{LowUtility}}=\mathcal{I}\setminus\mathcal{I}_{\mathsf{HighUtility}}\). If \(|\mathcal{I}_{\mathsf{HighUtility}}|\geq n\), then every agent gets a job from the set \(\mathcal{I}_{\mathsf{HighUtility}}\), and this is a solution. Otherwise, there are \(|\mathcal{A}|-|\mathcal{I}_{\mathsf{HighUtility}}|\) agents to whom we need to assign bundles of size two. Let \(\mathsf{IS}\) denote the set of all independent sets of size two in \(\mathcal{H}[\mathcal{I}_{\mathsf{LowUtility}}]\). Thus, \(\mathsf{IS}\) has size at most \(m^{2}\). Next, we construct a graph, denoted by \(\widehat{\mathcal{H}}\), on the jobs in \(\mathcal{I}_{\mathsf{LowUtility}}\), where there is an edge between vertices \(a\) and \(b\) if \(\{a,b\}\in\mathsf{IS}\) and \(\mathrm{u}(a)+\mathrm{u}(b)\geq\eta\). In this graph we compute a maximum matching, denoted by \(\mathcal{M}\). If its size is less than \(n-|\mathcal{I}_{\mathsf{HighUtility}}|\), then we return the answer "no". Otherwise, we return the answer "yes" and create an assignment as follows: if \((a,b)\in\mathcal{M}\), then we have a bundle containing \(\{a,b\}\). We create exactly \(n-|\mathcal{I}_{\mathsf{HighUtility}}|\) such bundles of size two and discard the others. These bundles, along with the singleton bundles from \(\mathcal{I}_{\mathsf{HighUtility}}\), yield our assignment for \(n\) agents.
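Before the running time and correctness analysis, the following minimal Python sketch illustrates this procedure for the two-clique case; since every edge of \(\widehat{\mathcal{H}}\) joins the two cliques, that graph is bipartite and a simple augmenting-path matching suffices. The function name and the input encoding are illustrative choices, not fixed by the text.

```python
def solve_two_cliques_uniform(clique1, clique2, u, n, eta):
    """Matching-based sketch for CFFA with a 2-clique cluster conflict graph
    and uniform utilities. clique1, clique2: lists of jobs; u: job -> utility.
    Returns a list of n bundles, or None if no feasible assignment exists."""
    jobs = list(clique1) + list(clique2)
    high = [z for z in jobs if u[z] >= eta]
    if len(high) >= n:
        return [{z} for z in high[:n]]
    need = n - len(high)                      # number of size-two bundles needed
    low1 = [z for z in clique1 if u[z] < eta]
    low2 = [z for z in clique2 if u[z] < eta]
    # Edges of the auxiliary graph: cross-clique pairs with enough total utility.
    edges = {a: [b for b in low2 if u[a] + u[b] >= eta] for a in low1}

    match = {}                                # job in low2 -> matched job in low1
    def augment(a, seen):
        for b in edges[a]:
            if b in seen:
                continue
            seen.add(b)
            if b not in match or augment(match[b], seen):
                match[b] = a
                return True
        return False

    matched = sum(augment(a, set()) for a in low1)
    if matched < need:
        return None
    pairs = [{a, b} for b, a in match.items()][:need]
    return [{z} for z in high] + pairs

# Toy instance: cliques {1,2} and {3,4}, threshold eta = 5, two agents.
u = {1: 6, 2: 2, 3: 3, 4: 4}
print(solve_two_cliques_uniform([1, 2], [3, 4], u, n=2, eta=5))  # e.g. [{1}, {2, 3}]
```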
The graph \(\widehat{\mathcal{H}}\) has at most \(m\) vertices and at most \(m^{2}\) edges. Thus, the maximum matching can be found in polynomial time. Next, we prove the correctness of the algorithm. **Correctness:** If the algorithm returns an assignment of jobs to the agents, then clearly, for every agent the utility from the bundle is at least \(\eta\). Every bundle is also an independent set in \(\mathcal{H}\). Moreover, if a bundle is of size one, then the singleton job is clearly an element of the set \(\mathcal{I}_{\mathsf{HighUtility}}\); otherwise, the bundle represents an independent set of size two in \(\mathsf{IS}\) whose total utility is at least \(\eta\). There are \(n\) bundles in total: exactly \(|\mathcal{I}_{\mathsf{HighUtility}}|\) bundles of size one and exactly \(n-|\mathcal{I}_{\mathsf{HighUtility}}|\) bundles of size two. In the other direction, suppose that \(\phi\) is a solution to \(\mathscr{J}\). Let \(a\) be an agent whose bundle has size two such that \(\phi(a)\) contains at least one job from \(\mathcal{I}_{\mathsf{HighUtility}}\), say \(z\). Update the assignment \(\phi\) as follows: \(\phi(a)=\{z\}\). Note that \(\phi\) is still a solution to \(\mathscr{J}\); we repeat this update for every such agent. Let \(\mathcal{A}_{1}\subseteq\mathcal{A}\) be the set of agents such that for every agent \(a\in\mathcal{A}_{1}\), \(|\phi(a)|=1\), i.e., the bundle assigned to every agent in \(\mathcal{A}_{1}\) has size \(1\). Clearly, \(\phi(\mathcal{A}_{1})\subseteq\mathcal{I}_{\mathsf{HighUtility}}\). Let \(\mathsf{rem}=\mathcal{I}_{\mathsf{HighUtility}}\setminus\phi(\mathcal{A}_{1})\) be the set of unassigned "high value" jobs. Suppose that \(\mathsf{rem}\neq\emptyset\). Let \(\mathcal{A}^{\prime\prime}\subseteq\mathcal{A}\setminus\mathcal{A}_{1}\) be a set of size \(\min\{|\mathcal{A}\setminus\mathcal{A}_{1}|,|\mathsf{rem}|\}\). Let \(\mathcal{A}^{\prime\prime}=\{a_{1},\ldots,a_{\ell}\}\) and \(\mathsf{rem}=\{z_{1},\ldots,z_{q}\}\), where clearly \(\ell\leq q\). Update the assignment \(\phi\) as follows: for every \(i\in[\ell]\), \(\phi(a_{i})=\{z_{i}\}\). Clearly, \(\phi\) is still a solution of \(\mathscr{J}\). We note that there are only two cases: either \(\mathcal{A}=\mathcal{A}_{1}\cup\mathcal{A}^{\prime\prime}\) or \(\tilde{\mathcal{A}}=\mathcal{A}\setminus(\mathcal{A}_{1}\cup\mathcal{A}^{\prime\prime})\) is non-empty. If \(\mathcal{A}=\mathcal{A}_{1}\cup\mathcal{A}^{\prime\prime}\), then \(\phi(\mathcal{A}_{1})\) and \(\phi(\mathcal{A}^{\prime\prime})\) are disjoint subsets of \(\mathcal{I}_{\mathsf{HighUtility}}\) that together contain \(n\) jobs. In other words, \(|\mathcal{I}_{\mathsf{HighUtility}}|\geq n\), and so there exists a solution in which every bundle is of size one and contains an element from \(\mathcal{I}_{\mathsf{HighUtility}}\). Otherwise, let \(\tilde{\mathcal{A}}=\mathcal{A}\setminus(\mathcal{A}_{1}\cup\mathcal{A}^{\prime\prime})\). Clearly, each of the jobs in \(\mathcal{I}_{\mathsf{HighUtility}}\) is assigned to an agent in \(\mathcal{A}_{1}\cup\mathcal{A}^{\prime\prime}\), and subsets of jobs in \(\mathcal{I}_{\mathsf{LowUtility}}\) are assigned to the agents in \(\tilde{\mathcal{A}}\). In other words, there exist \(|\mathcal{I}_{\mathsf{HighUtility}}|\) bundles of size one and \(n-|\mathcal{I}_{\mathsf{HighUtility}}|\) bundles of size two. Specifically for the latter, we know that each of the bundles is an independent set, they are pairwise disjoint, and the total utility within each bundle is at least \(\eta\). Thus, the members of each bundle share an edge in the graph \(\widehat{\mathcal{H}}\), and the bundles themselves form a matching in this graph. 
Thus, our algorithm that computes a maximum matching in \(\widehat{\mathcal{H}}\) would find a matching of size at least \(n-|\mathcal{I}_{\mathsf{HighUtility}}|\). Hence, given the construction of the assignment from such a matching, we can conclude that our algorithm would return an assignment with the desired properties. ### Distance from chaos: parameterization by the number of edges missing from a complete graph In this section, we will prove that CFFA is \(\mathsf{FPT}\) with respect to the parameter \(t\), the number of edges that are missing from \(\mathcal{H}\) relative to a complete graph. Further, we will present a polynomial time algorithm when the degree of every vertex in \(\mathcal{H}\) is \(m-2\) (one less than the degree in a complete graph) and the utility functions are uniform. We first show a result that gives a _subexponential time algorithm_ when the number of agents is constant. Proof (of Theorem 4.1): We observe that the complement of \(\mathcal{H}\), denoted by \(\overline{\mathcal{H}}\), contains all the vertices of \(\mathcal{H}\) but only \(t\) edges. Moreover, each clique in this graph constitutes a conflict-free bundle in the instance of CFFA. Conversely, we claim that any conflict-free bundle in the instance of CFFA must form a clique in \(\overline{\mathcal{H}}\), since for every pair of jobs \(x_{1},x_{2}\) in a bundle, there exists an edge in \(\overline{\mathcal{H}}\). Thus, enumerating all possible cliques (not just maximal ones) in \(\overline{\mathcal{H}}\) allows us to check for possible allocations to agents. To show that this is doable in the claimed time, we will count the number of cliques in \(\overline{\mathcal{H}}\). Since \(\overline{\mathcal{H}}\) has \(t\) edges, there can be at most \(2t\) vertices that are not isolated. Each isolated vertex _constitutes a clique of size \(1\)_; we call such cliques _trivial_. They are upper bounded by the number of jobs (\(m\)), and will be counted separately. A clique is said to be _non-trivial_ if it does not contain an isolated vertex. Next, we will upper bound the number of non-trivial cliques. Towards this, we first show that \(\overline{\mathcal{H}}\) is a \(2\sqrt{t}\)-degenerate graph by a simple counting argument. Note that if there exists a subgraph \(H\) with minimum degree at least \(2\sqrt{t}\), then the graph must have more than \(t\) edges. Let \(H\) be the subgraph of \(\overline{\mathcal{H}}\) induced on the non-isolated vertices of \(\overline{\mathcal{H}}\). Since \(H\) has at most \(t\) edges, every subgraph of \(H\) has a vertex of degree at most \(2\sqrt{t}\). Thus, \(H\) is a \(2\sqrt{t}\)-degenerate graph, and hence has a \(2\sqrt{t}\)-degeneracy sequence. Let \(\mathcal{D}=v_{1},\ldots,v_{2t}\) denote a \(2\sqrt{t}\)-degeneracy sequence of \(H\). Notice that for any \(i\in[2t]\), \(v_{i}\) has at most \(2\sqrt{t}\) neighbors among \(\{v_{j}\colon j>i\}\). Consider the at most \(2\sqrt{t}\) such neighbors of \(v_{1}\): among them there are at most \(2^{2\sqrt{t}}\) subsets, so there are at most \(2^{2\sqrt{t}}\) cliques whose first vertex is \(v_{1}\), and they can be enumerated in time \(\mathcal{O}(2^{2\sqrt{t}})\). By iterating over \(v_{i}\), we can enumerate all the non-trivial cliques in \(\overline{\mathcal{H}}\) in \(\mathcal{O}(2t\cdot 2^{2\sqrt{t}})\) time. Indeed, for a non-trivial clique \(C\), if \(v_{i}\) is the first vertex in \(C\) with respect to \(\mathcal{D}\), that is, all other vertices in \(C\) appear after \(v_{i}\) in \(\mathcal{D}\), then \(C\) is enumerated when we enumerate all the cliques with respect to \(v_{i}\) in our process. 
This implies that the number of independent sets in \(\mathcal{H}\) is upper bounded by \(\mathcal{O}(2t\cdot 2^{2\sqrt{t}}+m)\), and the number of independent sets of size at least \(2\) in \(\mathcal{H}\) is upper bounded by \(\mathcal{O}(2t\cdot 2^{2\sqrt{t}})\). Let \(\mathbb{I}_{\geq 2}\) denote the family of independent sets of \(\mathcal{H}\) that have size at least \(2\), i.e., the family of non-trivial independent sets. Thus, one potential algorithm is as follows. We first guess which agents are assigned non-trivial independent sets, and which independent set each of them receives. That is, for each agent \(a\in\mathcal{A}\), we guess an independent set \(I_{a}\in\mathbb{I}_{\geq 2}\cup\{\gamma\}\) (the symbol \(\gamma\) captures that the agent will not get a non-trivial bundle). Let \(\mathcal{A}^{\prime}\subseteq\mathcal{A}\) be the set of agents for whom the guess is not \(\gamma\). Let \((\mathcal{A}^{\prime},\{I_{a}\}_{a\in\mathcal{A}^{\prime}})\) denote the corresponding guess for the agents in \(\mathcal{A}^{\prime}\). We first check that the guess for \(\mathcal{A}^{\prime}\) is _correct_. Towards that, we check that for each \(a_{1},a_{2}\in\mathcal{A}^{\prime}\), \(I_{a_{1}}\cap I_{a_{2}}=\emptyset\), and for each \(a\in\mathcal{A}^{\prime}\), we have that \(\sum_{i\in I_{a}}\mathrm{u}_{a}(i)\geq\eta\). Since \(|\mathbb{I}_{\geq 2}|\) is upper bounded by \(\mathcal{O}(2t\cdot 2^{2\sqrt{t}})\), the number of guesses is upper bounded by \(\mathcal{O}((2t\cdot 2^{2\sqrt{t}}+1)^{n})\). For each correct guess \((\mathcal{A}^{\prime},\{I_{a}\}_{a\in\mathcal{A}^{\prime}})\), we solve the remaining problem by invoking Theorem 3.1. Let \(\mathcal{A}^{*}=\mathcal{A}\setminus\mathcal{A}^{\prime}\) and \(\mathcal{I}^{*}=\mathcal{I}\setminus(\bigcup_{a\in\mathcal{A}^{\prime}}I_{a})\). Then, we apply Theorem 3.2 on the following instance: \((\mathcal{A}^{*},\mathcal{I}^{*},(p_{a})_{a\in\mathcal{A}^{*}},\eta,\mathcal{H}[\mathcal{I}^{*}])\); here \(\mathcal{H}[\mathcal{I}^{*}]\) is a clique. This implies that the total running time of the algorithm is upper bounded by \(\mathcal{O}((2t\cdot 2^{2\sqrt{t}}+1)^{n}(n+m)^{\mathcal{O}(1)})\). However, Theorem 3.2 is not an \(\mathsf{FPT}\) algorithm _parameterized by \(t\) alone_. In what follows we design such an algorithm. Proof (of Theorem 3.2): Let \(\mathscr{J}=(\mathcal{A},\mathcal{I},\{\mathrm{u}_{a}\}_{a\in\mathcal{A}},\mathcal{H},\eta)\) be a given instance of CFFA. Let \(V_{>1}\) be the set of vertices which are part of independent sets of size at least \(2\). As argued in the proof of Theorem 3.2, \(|V_{>1}|\leq 2t\). Thus, there are at most \(t\) bundles that contain more than one job. We guess a partition of the jobs in \(V_{>1}\) into at most \(t+1\) sets, \(\mathsf{notLarge},\mathsf{Large}_{1},\ldots,\mathsf{Large}_{\ell}\), where \(\ell\leq t\), such that each set is an independent set in \(\mathcal{H}\). The set \(\mathsf{notLarge}\) might be empty. This set contains the jobs in \(V_{>1}\) which will not be part of any bundle of size at least \(2\). The size of \(\mathsf{Large}_{i}\) is at least \(2\) for every \(i\in[\ell]\), and each \(\mathsf{Large}_{i}\) will be assigned to a distinct agent in the solution. Next, we construct a complete graph \(\mathcal{H}^{\prime}\) as follows. For each \(\mathsf{Large}_{i}\), where \(i\in[\ell]\), we have a vertex \(\mathsf{Large}_{i}\) in \(\mathcal{H}^{\prime}\), and \(\mathrm{u}_{a}^{\prime}(\mathsf{Large}_{i})=\sum_{x\in\mathsf{Large}_{i}}\mathrm{u}_{a}(x)\), where \(a\in\mathcal{A}\). 
If a vertex \(v\) of \(\mathcal{H}\) does not belong to any \(\mathsf{Large}_{i}\), where \(i\in[\ell]\), then we add the vertex \(v\) to \(\mathcal{H}^{\prime}\) and set \(\mathrm{u}_{a}^{\prime}(v)=\mathrm{u}_{a}(v)\). Let \(\mathscr{J}^{\prime}=(\mathcal{A},\mathcal{I},\{\mathrm{u}_{a}^{\prime}\}_{a\in\mathcal{A}},\mathcal{H}^{\prime},\eta)\) be the new instance of CFFA, where \(\mathcal{H}^{\prime}\) is a complete graph. Using Theorem 3.2, we find the assignment of bundles to the agents for the instance \(\mathscr{J}^{\prime}\), if it exists, and return "yes". If the algorithm does not find an assignment for any guessed partition, then we return "no". The running time follows from Theorem 3.2 and the fact that there are at most \((2t)^{t+1}\) possible partitions. Next, we prove the correctness of the algorithm. Suppose that \(\mathscr{J}\) is a yes-instance of CFFA, and let \(\phi\) be one of its solutions. Let \(\mathcal{B}=\{\phi(a)\colon a\in\mathcal{A}\text{ and }|\phi(a)|\geq 2\}\). Clearly, the sets in \(\mathcal{B}\) are disjoint subsets of \(V_{>1}\). Let \(\mathcal{B}=\{B_{1},\ldots,B_{\ell}\}\). Let \(X\subseteq V_{>1}\) be the set of all the jobs that do not belong to any set in \(\mathcal{B}\). Since we try all possible partitions of \(V_{>1}\), we also try the partition \(B_{1},\ldots,B_{\ell},X\). Without loss of generality, assume that \(B_{i}\) is assigned to \(a_{i}\) under \(\phi\). Thus, in the bipartite graph \(G\) constructed in the proof of Theorem 3.2 for the instance \(\mathscr{J}^{\prime}\), there is a matching \(M=\{a_{i}B_{i}\colon i\in[\ell]\}\cup\{a\phi(a)\colon|\phi(a)|=1\}\) that saturates \(L\). Thus, the algorithm returns "yes". The correctness of the other direction follows from the correctness of Theorem 3.2 and the construction of the instance \(\mathscr{J}^{\prime}\). Next, we give our claimed polynomial-time algorithm. Proof (of Theorem 3.2): The algorithm is the same as in Theorem 3.2. Here, the size of \(\mathsf{IS}\) is bounded by \(\nicefrac{{n}}{{2}}\). ## 5 Outlook In this article, we studied the conflict-free fair allocation problem under the paradigm of parameterized complexity with respect to several natural input parameters. We hope that this will lead to a new set of results for the problem. The following questions remain open: (i) the computational complexity of CFFA when the cluster graph contains only \(2\) cliques and the utility functions are arbitrary, and (ii) the computational complexity when the degree of every vertex in the conflict graph is \(m-2\) and the utility functions are arbitrary. Another direction of research is to consider various other fairness notions known in the literature, such as envy-freeness, proportional fair-share, min-max fair-share, etc., under the conflict constraint.
2309.16349
Human Feedback is not Gold Standard
Human feedback has become the de facto standard for evaluating the performance of Large Language Models, and is increasingly being used as a training objective. However, it is not clear which properties of a generated output this single `preference' score captures. We hypothesise that preference scores are subjective and open to undesirable biases. We critically analyse the use of human feedback for both training and evaluation, to verify whether it fully captures a range of crucial error criteria. We find that while preference scores have fairly good coverage, they under-represent important aspects like factuality. We further hypothesise that both preference scores and error annotation may be affected by confounders, and leverage instruction-tuned models to generate outputs that vary along two possible confounding dimensions: assertiveness and complexity. We find that the assertiveness of an output skews the perceived rate of factuality errors, indicating that human annotations are not a fully reliable evaluation metric or training objective. Finally, we offer preliminary evidence that using human feedback as a training objective disproportionately increases the assertiveness of model outputs. We encourage future work to carefully consider whether preference scores are well aligned with the desired objective.
Tom Hosking, Phil Blunsom, Max Bartolo
2023-09-28T11:18:20Z
http://arxiv.org/abs/2309.16349v2
# Human Feedback is not Gold Standard ###### Abstract Human feedback has become the de facto standard for evaluating the performance of Large Language Models, and is increasingly being used as a training objective. However, it is not clear which properties of a generated output this single 'preference' score captures. We hypothesise that preference scores are subjective and open to undesirable biases. We critically analyse the use of human feedback for both training and evaluation, to verify whether it fully captures a range of crucial error criteria. We find that while preference scores have fairly good coverage, they under-represent important aspects like factuality. We further hypothesise that both preference scores and error annotation may be affected by confounders, and leverage instruction-tuned models to generate outputs that vary along two possible confounding dimensions: assertiveness and complexity. We find that the assertiveness of an output skews the perceived rate of factuality errors, indicating that human annotations are not a fully reliable evaluation metric or training objective. Finally, we offer preliminary evidence that using human feedback as a training objective disproportionately increases the assertiveness of model outputs. We encourage future work to carefully consider whether preference scores are well aligned with the desired objective. ## 1 Introduction The fluency exhibited by Large Language Models (LLMs) has reached the point where rigorous evaluation of LLM capabilities is extremely challenging, with the quality of model outputs often now exceeding that of reference examples from datasets (Zhang et al., 2023; Hosking et al., 2023). A great advantage of LLMs is their flexibility, but this makes it difficult to design an all-purpose evaluation metric. Human evaluation using a single overall score has become the _de facto_ standard method. For a given input prompt, samples or _responses_ from models are shown to annotators, who are asked to score the responses according to their quality (Novikova et al., 2018). These scores can either be absolute ratings, or relative 'preference' scores whereby two responses are ranked by their quality. Models can additionally be directly optimised for these preference scores using Reinforcement Learning from Human Feedback (RLHF, Ziegler et al., 2019; Ouyang et al., 2022). Although the simplicity of a single overall preference score is appealing, it obscures the decision making process used by annotators, including any trade-offs or compromises, and does not explain _why_ one response or model is better than another. Human annotators tend to look for shortcuts to make the task easier (Ipeirotis et al., 2010), and so are more likely to base their judgement on superficial properties (e.g., fluency and linguistic complexity) than aspects that require more effort to check (e.g., factuality). Historically, human evaluation of natural language generation systems did consider multiple aspects of the generated output. However, the criteria used were often unique to the specific task being considered (van der Lee et al., 2021; Hosking et al., 2022; Xu and Lapata, 2022), making them difficult to apply to LLMs. With recent rapid improvement in system performance, it is important to test whether preference scores capture the desired aspects of output quality, and whether they provide a gold standard objective for evaluating or training LLMs. 
In this work, we analyse human annotation of model outputs, both for overall preference scores and for specific error criteria. In Section 2 we establish a set of error types that are task independent and act as minimum requirements for model outputs. We analyse the error coverage of overall preference scores. We ask two sets of annotators to rate a range of LLM outputs, the first according to these error types and the second according to their own judgements of overall quality, and find that preference scores under-represent factuality and faithfulness. In Section 3, we consider two possible sources of bias when annotating for specific error types by generating outputs with varying assertiveness and complexity, and find that assertiveness strongly biases human factuality judgements. Finally, in Section 4 we offer some preliminary evidence that using human preference scores as a training objective disproportionately increases the assertiveness of model outputs. We present additional findings from our collected data in Appendix E: we confirm that annotators are subject to a priming effect; we analyse the variation of quality scores with response length; and we show that generated outputs are preferred to the reference responses. Our code and data are available at [https://github.com/cohere-ai/human-feedback-paper](https://github.com/cohere-ai/human-feedback-paper). ## 2 Are preference scores reliable? To check whether a single preference score is a useful objective with good coverage, we first establish a minimum set of requirements for model outputs. These _error types_ are both generic enough that they are task agnostic and widely applicable, but also sufficiently well-specified that it is possible for annotators to judge them. We are inspired partly by Xu et al. (2023c), who asked crowdworkers and experts to rate model outputs and give justifications for their scores. We also draw inspiration from Grice's Maxims (Grice, 1991) regarding felicitous communication between speakers: the Maxim of Quantity implies that repetition is undesirable, the Maxim of Quality prohibits factual errors, and so on. The error types used in our experiments are: * **Harmfulness:** Is the response unsafe, harmful or likely to cause offence in some way? * **Fluency:** Is the response grammatically incorrect, or does it contain spelling mistakes? * **Scope:** Does the response exceed the scope limits of a chatbot? Does the response give opinions or otherwise act as if it is a person, or offer to take actions that it cannot (e.g. make a call, access the internet)? * **Repetition:** Does the response repeat itself? For example, if there is a list in the response, are any items repeated? Does the response reuse the same phrase again and again? * **Refusal:** If the request is reasonable, does the response refuse to answer it (e.g. "I'm sorry, I can't help you with that")? * **Formatting:** Does the response fail to conform to any formatting or length requirements from the prompt? * **Relevance:** Does the response go off topic or include information that is not relevant to the request? * **Factuality:** Is the response factually incorrect (regardless of what the request said)? * **Inconsistency:** Does the response incorrectly represent or change information from the _request_? This criterion is often also referred to as _faithfulness_. * **Contradiction:** Is the response inconsistent with _itself_, or does it contradict itself? ### Experimental Setup We ask crowdworkers to evaluate model outputs according to our criteria, marking each example with a binary _yes_ or _no_ to denote whether an error is present. 
Separately, we ask a _different_ set of annotators to rate the overall quality of the same outputs from 1 to 5, according to whatever criteria they feel are important. **Datasets.** In order to cover a range of different tasks for which evaluation is challenging, we construct input prompts from three datasets: Curation Corpus (Curation, 2020) is a summarization dataset composed of 40,000 news articles and professionally written summaries; Amazon Product Descriptions (Ni et al., 2019) gives a product title and specification as input and requires generating a compelling product description; and Wikihow (Koupaee & Wang, 2018) consists of 'how to' questions and step-by-step guides. Full details of the prompt templates used for each task can be found in Appendix C. **Models.** While a comparison of different models is not the focus of this work, we nonetheless source responses from multiple performant models that we were able to access at time of writing: MPT 30B Instruct is fine-tuned on Dolly DDRLHF and additional datasets (MosaicML NLP Team, 2023; Conover et al., 2023); Falcon 40B Instruct is fine-tuned on a subset of Baize (Almazrouei et al., 2023; Xu et al., 2023b); and Command 6B and 52B are commercial models trained by Cohere, fine-tuned on proprietary datasets. We additionally include the reference outputs for each input. Details of the models, prompt templates and sampling hyperparameters can be found in Appendix D. **Annotation.** We source crowdworkers from Prolific, requiring them to be native English speakers with 100% approval ratings from prior tasks. Our annotation interface is based on Potato (Pei et al., 2022). Our annotation protocol is based on findings from RankME (Novikova et al., 2018) that showed the best inter-annotator agreement is achieved when annotators are shown multiple outputs for a given input, and scores are collected as absolute ratings. We expect that showing annotators five full outputs at once would lead to higher cognitive load and therefore lower annotator engagement; therefore, we collect ratings for two outputs at a time, pairing each output with an output from one of the other four models. The resulting four annotations per output are aggregated by taking the mean for overall scores, and by taking the mode (and then the mean in case of ties) for error annotations. We annotate a total of 900 distinct outputs, with a total of 4,440 annotations including quality checks. **Quality Control.** In order to check inter-annotator agreement, we collect 5 duplicate annotations for a random subset of 200 pairs of outputs. We also include a set of _distractor_ examples, where a response is shown in context with an output from the same model but a _different input_. These examples act as an attention check; the response based on a different input should consistently be penalised along criteria like relevance and usefulness. We find that distractor outputs are correctly rated lower than the other output in the pair over 97% of the time, indicating that the vast majority of annotators paid attention to the task. We use Gwet's AC1 measure (Gwet, 2014) to assess inter-annotator agreement for the multiply annotated examples, finding good agreement scores of between \(0.64\) (for Factuality) and \(0.94\) (for Refusal). The disparity indicates that annotators found some error types more difficult or subjective than others; refusal is straightforward to detect, whereas checking for factual errors involves significantly more effort. 
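A minimal sketch of the aggregation step described above, assuming the annotations are stored in a pandas DataFrame with one row per individual annotation (the column names are illustrative assumptions, not those of the released dataset):

```python
# Aggregate repeated annotations per output: the mean for 1-5 overall scores,
# and the mode (falling back to the mean on ties) for binary error flags.
import pandas as pd

def aggregate_error_flags(flags: pd.Series) -> float:
    counts = flags.value_counts()
    if len(counts) > 1 and counts.iloc[0] == counts.iloc[1]:
        return float(flags.mean())       # tie between yes/no annotations
    return float(counts.idxmax())        # otherwise take the modal value

def aggregate_annotations(df: pd.DataFrame) -> pd.DataFrame:
    # Assumed columns: output_id, overall_score, plus one 0/1 column per error type.
    error_cols = [c for c in df.columns if c not in ("output_id", "overall_score")]
    agg = {"overall_score": "mean"}
    agg.update({c: aggregate_error_flags for c in error_cols})
    return df.groupby("output_id").agg(agg)
```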
### Results **Preference scores under-represent factuality and inconsistency.** In order to determine the degree to which each error type was captured by the overall scores, we fit a Lasso regression model (Tibshirani, 1996) with \(\alpha=0.01\) between the scores and the error ratings. Figure 1 shows the weights of each criterion under this model, where each weight corresponds to the expected reduction in overall score if the corresponding error is present. Six out of ten error types contribute to the overall scores, with refusal errors contributing most strongly. Factuality and inconsistency errors both contribute but with much lower weighting, indicating that a single preference score is likely to obscure failures in these important criteria. Figure 1: Weightings for each criterion under a Lasso regression model of overall scores. Almost all the criteria contribute to the overall scores, with refusal contributing most strongly. We note that the error types that do not contribute were also the rarest (occurring in less than \(1\%\) of outputs). We would expect that harmfulness and fluency should influence overall scores, but the models in our experiments are sufficiently strong and the tasks sufficiently well-posed that such errors are infrequent. **Annotators struggle with disentangling factors.** Recall that the distractor examples are pairs of outputs sourced from the same model, but where one of the outputs corresponds to a different _input_; these should therefore achieve comparable scores for criteria that are independent of the input prompt (e.g., fluency, detail, factuality\({}^{1}\)) but be heavily penalized for other factors such as relevance and overall quality. The results in Figure 2 show that although this expectation holds in some cases (repetition, refusal and formatting are not penalized, while relevance and inconsistency are), other factors are incorrectly penalized; factuality and contradiction (within the output) are both rated worse for the distractor examples. This implies that annotators found it difficult to disentangle these criteria from the overall quality of a response. Footnote 1: Although a statement could be deemed factual if the input prompt supports it, the instructions shown to annotators explicitly asked them to consider factuality in absolute terms. Although annotators are shown the instructions and error criteria before the input prompt and responses, we suspect that they subconsciously form an opinion about the quality of the response based on first impressions (Smith et al., 2014), and that this opinion influences their judgement of each error type. In other words, an annotator may decide that a response is bad, and decide that it is more likely to contain errors as a result. This effect could be partially mitigated by specifying precise instructions, giving multiple examples and training a knowledgeable group of annotators. However, there is always potential for ambiguity. ## 3 Are annotations affected by confounders? We have so far considered the effect of important error criteria on overall preference scores, but the annotations for the errors were themselves given by human annotators. The results for distractor examples in Figure 2 indicate that granular ratings may also be subject to biases. Firstly, we hypothesise that the _assertiveness_ of a text influences human judgements; a statement conveyed confidently as fact is more likely to be interpreted as true. 
Similarly, text that uses _complex_ language might lead an annotator to believe that the communicator behind it is intelligent and knowledgeable, and therefore that the content is true. This concept of _language ideology_, where the style and tone of a speaker leads to biased judgements about their trustworthiness and intelligence, has been extensively studied in the context of speech (Campbell-Kibler, 2009; Woolard, 2020), but we are not aware of any work in the context of model evaluation. Figure 2: Difference in annotated error rates for distractor examples (outputs from the same model but different input). Some error types are correctly unchanged (e.g., repetition, refusal) while relevance and inconsistency are correctly penalised. Factuality and contradiction are both incorrectly penalised (they are independent of the input), indicating that annotators struggled to fully disentangle these criteria. ### Experimental Setup We generate model outputs from the same datasets as Section 2, but using an additional _preamble2_ to vary the tone of the output and create outputs with both high and low _assertiveness_ and high and low _linguistic complexity_. We constructed these preambles by iterative testing, with the aim of eliciting a noticeable change in output tone without overly degrading output quality. The full text used for the preambles is as follows: Footnote 2: A preamble is a short natural language prompt, usually prepended to the user query, designed to set the behavioural parameters of the system, e.g. “Respond helpfully and safely”. * **Assertiveness--** Respond in a cautious, defensive and uncertain way, as if you are unfamiliar with the topic. * **Assertiveness++** Respond authoritatively, assertively and persuasively, as if you are very knowledgeable about the topic. * **Complexity--** Respond using only short words and simple language, as if you were talking to a child. * **Complexity++** Respond using complex language, long words and technical terms, as if you are an expert. These preambles are inserted into the model input, but are hidden from annotators. We use a similar annotation setup to Section 2.1, collecting overall scores from 1 to 5 from one group of annotators, and binary error annotations from a second group3. Additionally, we collect judgements about the assertiveness and complexity of each output from 1 to 5 from a third, distinct group of annotators. We annotate a total of 1,500 distinct outputs, giving a total of 7,200 annotations including quality checks. Footnote 3: We exclude scope, fluency and harmfulness from this set of experiments due to their rarity. Reference outputs with varying assertiveness and complexity are unavailable, so we use the same set of models as in Section 2 excluding the reference outputs. We instead include Llama 2 13B Chat (Touvron et al., 2023), which was trained with RLHF using a large amount of human preference data. It is possible that the preambles might lead to changes in the _true_ error rates of the output (Xu et al., 2023a). The authors therefore carefully annotate a subset of 300 examples for each error type, to act as a set of 'expert' annotations. Although not strictly an unbiased set of ratings, this subset acts as a useful estimate of the true error rates. ### Results Figure 3: Human ratings of assertiveness, complexity and overall quality for each preamble type. 
The ratings indicate that the preambles successfully modify the output in the desired manner, although there is some correlation between perceived assertiveness and complexity. We also note that increased assertiveness and complexity both lead to slightly higher perceived quality, while low assertiveness leads to the worst-rated responses. **Confidence and complexity can be varied using preambles.** We first confirm that our preambles successfully change the model outputs in the desired way. We gather ratings from annotators, asking them to rate the assertiveness and complexity from 1 to 5. The results in Figure 3 indicate that the preambles induce the intended variations. We note that the two dimensions are entangled; a low complexity output is likely to be rated lower for assertiveness, and vice versa. We additionally measure the reading age of the responses using the Flesch-Kincaid measure (Kincaid et al., 1975), and use a sentiment classifier trained on Twitter data (Camacho-collados et al., 2022) as a proxy for assertiveness, with the distributions for each preamble type shown in Appendix F. **Factuality judgements are biased by assertiveness.** The low assertiveness preamble leads to a significant increase in refusal errors, from 3.5% in the baseline case to 24%. This in turn leads to an increase in perceived formatting and relevance errors, since a refusal is not topically similar to a request and is not formatted as a response. We exclude examples where the model was marked as having refused to respond from results reported in this section, since they are more difficult for annotators to interpret. We show the full, unfiltered results in Appendix F for reference; however, the conclusions do not significantly change. We note that the ability to control refusal rate via a preamble may have practical implications for safety, offering both a way to prevent harmful output and a potential jailbreak to circumvent model guardrails. Figure 4 shows the difference in annotated error rates between crowdsourced annotators and the 'experts', broken down by preamble type. Crowdworkers underestimate the rate of factuality and inconsistency errors. This difference is _increased_ for high assertiveness responses, and _decreased_ for low assertiveness responses. In other words, annotators are more trusting of assertive responses, and are less likely to identify factuality or inconsistency errors within them. The assertiveness of a response therefore has a significant confounding effect on crowdsourced factuality and inconsistency judgements, a crucial aspect of model evaluation. Modifying the complexity or assertiveness has a similar effect on perceived repetition: more complex or more assertive responses are incorrectly perceived as being less repetitive. Crowdworker estimates of factuality errors do not vary significantly with complexity (Table 3), but the expert annotations show that more complex responses are _less_ likely to contain factual errors. Neither assertiveness nor complexity has a significant effect on annotators' estimates of contradiction, relevance or formatting errors. Somewhat surprisingly, the crowdsourced estimate of the factuality error rate for the 'low assertiveness' group is higher than the baseline case, while the expert-annotated estimate is _lower_ (Table 3). Qualitatively, we find that this is because low assertiveness responses tend to be shorter and therefore contain fewer factual assertions that could be incorrect. 
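As a rough illustration of the readability measurement mentioned above, the following sketch computes the Flesch-Kincaid grade level directly from its standard formula with a naive syllable counter; this is a simplified stand-in, since the exact implementation used is not specified here.

```python
# Naive Flesch-Kincaid grade level:
#   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count maximal groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59
```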
Figure 4: The difference in error rates between crowdsourced annotations and ‘expert’ annotations from the authors, excluding samples that were marked as refusing to respond. Annotators tend to underestimate the rate of inconsistency or factuality errors, and they are less likely to spot these errors in outputs that are assertive. Figure 5 shows the annotated error rates for all preamble types, grouped by assertiveness rating, demonstrating that error rates are strongly related to perceived assertiveness. This acts as confirmation of the relationship between the assertiveness and the perceived factuality of a response; the relationship holds when the assertiveness is controlled via the preambles _and_ when it is measured.
2301.13375
Optimal Transport Perturbations for Safe Reinforcement Learning with Robustness Guarantees
Robustness and safety are critical for the trustworthy deployment of deep reinforcement learning. Real-world decision making applications require algorithms that can guarantee robust performance and safety in the presence of general environment disturbances, while making limited assumptions on the data collection process during training. In order to accomplish this goal, we introduce a safe reinforcement learning framework that incorporates robustness through the use of an optimal transport cost uncertainty set. We provide an efficient implementation based on applying Optimal Transport Perturbations to construct worst-case virtual state transitions, which does not impact data collection during training and does not require detailed simulator access. In experiments on continuous control tasks with safety constraints, our approach demonstrates robust performance while significantly improving safety at deployment time compared to standard safe reinforcement learning.
James Queeney, Erhan Can Ozcan, Ioannis Ch. Paschalidis, Christos G. Cassandras
2023-01-31T02:39:52Z
http://arxiv.org/abs/2301.13375v2
# Optimal Transport Perturbations for Safe Reinforcement Learning with Robustness Guarantees ###### Abstract Robustness and safety are critical for the trustworthy deployment of deep reinforcement learning in real-world decision making applications. In particular, we require algorithms that can guarantee robust, safe performance in the presence of general environment disturbances, while making limited assumptions on the data collection process during training. In this work, we propose a safe reinforcement learning framework with robustness guarantees through the use of an optimal transport cost uncertainty set. We provide an efficient, theoretically supported implementation based on Optimal Transport Perturbations, which can be applied in a completely offline fashion using only data collected in a nominal training environment. We demonstrate the robust, safe performance of our approach on a variety of continuous control tasks with safety constraints in the Real-World Reinforcement Learning Suite. Our main contributions are as follows: 1. We formulate a safe RL framework that incorporates robustness to general disturbances using the optimal transport cost between environment transitions. 2. We show that the resulting distributionally robust optimization problems over transition distributions can be reformulated as constrained adversarial perturbations to state transitions in the training environment. 3. We propose an efficient deep RL implementation of our Optimal Transport Perturbations, which can be applied in a completely offline fashion without impacting data collection during training. 4. 
We demonstrate that the use of Optimal Transport Perturbations leads to robust and safe performance both during training and in the presence of disturbances through experiments on continuous control tasks with safety constraints in the Real-World RL Suite (Dulac-Arnold et al., 2020, 2021). ## 2 Related Work ### Safe Reinforcement Learning The most common approach to modeling safety in RL is to incorporate constraints on expected total costs (Altman, 1999). In recent years, several deep RL algorithms have been developed for this framework. A popular approach is to solve the Lagrangian relaxation of the constrained problem (Tessler et al., 2019; Ray et al., 2019; Stooke et al., 2020), which is supported by theoretical results that constrained RL has zero duality gap (Paternain et al., 2019). Xu et al. (2021), on the other hand, consider immediate switching between the reward and cost objectives to better satisfy safety during training. Alternatively, Achiam et al. (2017) and Liu et al. (2022) construct closed-form solutions to guide policy updates in safe RL. A related line of work focuses on the issue of safe exploration during data collection. In order to promote safety throughout trajectory rollouts, these methods correct potentially dangerous actions through the use of control barrier functions (Cheng et al., 2019; Emam et al., 2021; Ma et al., 2021), safety layers (Dalal et al., 2018), learned safety critics (Srinivasan et al., 2020; Bharadhwaj et al., 2021), and recovery policies (Thananjeyan et al., 2021; Wagener et al., 2021). In particular, Bharadhwaj et al. (2021) learn a conservative estimate of the safety critic in order to protect against safety violations during training. Our robust perspective towards safety can be viewed as an alternative method for learning a conservative safety critic, which guarantees safety both during and after training by guarding against unknown environment disturbances. ### Robust Reinforcement Learning Robust RL methods account for uncertainty in the environment by considering worst-case transition distributions from an uncertainty set (Nilim and Ghaoui, 2005; Iyengar, 2005). In order to scale the robust RL framework to the deep RL setting, most techniques have focused on parametric uncertainty or adversarial training. Domain randomization (Tobin et al., 2017; Peng et al., 2018) represents a popular approach to parametric uncertainty in sim-to-real transfer settings, where a policy is trained to maximize average performance across a range of simulated training environments. These environments are generated by modifying important parameters in the simulator, which are often determined based on domain knowledge. The goal of maximizing average performance over a range of training environments has also been referred to as a soft-robust approach (Derman et al., 2018). Other methods directly impose a robust perspective towards parametric uncertainty by focusing on the worst-case training environments generated over a range of simulator parameters (Rajeswaran et al., 2017; Abdullah et al., 2019; Mankowitz et al., 2020). All of these approaches assume access to a simulated version of the real environment, as well as the ability to modify parameters of this simulator. Adversarial RL methods represent an alternative approach to robustness that introduce perturbations directly into the training process. In order to learn policies that perform well under worst-case disturbances, these perturbations are trained to minimize performance. 
Deep RL approaches to adversarial training have introduced perturbations in the form of physical forces in the environment (Pinto et al., 2017), as well as adversarial corruptions to actions (Tessler et al., 2019; Vinitsky et al., 2020) and state observations (Mandlekar et al., 2017; Zhang et al., 2020; Kuang et al., 2022). In this work, we learn adversarial perturbations on state transitions, but different from adversarial RL methods we apply these perturbations in a completely offline fashion without impacting the data collection process. Finally, safety and robustness have recently been considered together in a unified RL framework. Mankowitz et al. (2021) and Russel et al. (2021) propose a formulation that incorporates robustness into both the objective and constraints in safe RL. We consider this general framework as a starting point for our work. ## 3 Preliminaries ### Safe Reinforcement Learning Consider an infinite-horizon, discounted Constrained Markov Decision Process (C-MDP) (Altman, 1999) defined by the tuple \((\mathcal{S},\mathcal{A},p,r,c,\rho_{0},\gamma)\), where \(\mathcal{S}\) is the set of states, \(\mathcal{A}\) is the set of actions, \(p:\mathcal{S}\times\mathcal{A}\to P(\mathcal{S})\) is the transition probability function where \(P(\mathcal{S})\) denotes the space of probability measures over \(\mathcal{S}\), \(r:\mathcal{S}\times\mathcal{A}\to\mathbb{R}\) is the reward function, \(c:\mathcal{S}\times\mathcal{A}\to\mathbb{R}\) is the cost function, \(\rho_{0}\) is the initial state distribution, and \(\gamma\) is the discount rate. We model the agent's decisions as a stationary policy \(\pi:\mathcal{S}\to P(\mathcal{A})\). For a given C-MDP and policy \(\pi\), the expected total discounted rewards and costs are given by \(J_{p,r}(\pi)=\mathbb{E}_{\tau\sim(\pi,p)}\left[\sum_{t=0}^{\infty}\gamma^{t}r (s_{t},a_{t})\right]\) and \(J_{p,c}(\pi)=\mathbb{E}_{\tau\sim(\pi,p)}\left[\sum_{t=0}^{\infty}\gamma^{t}c (s_{t},a_{t})\right]\), respectively, where \(\tau\sim(\pi,p)\) represents a trajectory sampled according to \(s_{0}\sim\rho_{0}\), \(a_{t}\sim\pi(\cdot\mid s_{t})\), and \(s_{t+1}\sim p(\cdot\mid s_{t},a_{t})\). The goal of safe RL is to find a policy \(\pi\) that maximizes the constrained optimization problem \[\begin{split}\max_{\pi}& J_{p,r}(\pi)\\ \text{s.t.}& J_{p,c}(\pi)\leq B,\end{split} \tag{1}\] where \(B\) is a safety budget on expected total discounted costs. We denote the state-action value functions (i.e., Q functions) of \(\pi\) for a given C-MDP as \(Q^{\pi}_{p,r}(s,a)\) and \(Q^{\pi}_{p,c}(s,a)\), and the state value functions as \(V^{\pi}_{p,r}(s)=\mathbb{E}_{a\sim\pi(\cdot\mid s)}[Q^{\pi}_{p,r}(s,a)]\) and \(V^{\pi}_{p,c}(s)=\mathbb{E}_{a\sim\pi(\cdot\mid s)}[Q^{\pi}_{p,c}(s,a)]\). Policy optimization techniques (Xu et al., 2021; Liu et al., 2022) iteratively optimize (1) by considering the related optimization problem \[\begin{split}\max_{\pi}&\mathop{\mathbb{E}}_{s\sim \mathcal{D}}\left[\mathop{\mathbb{E}}_{a\sim\pi(\cdot\mid s)}\left[Q^{\pi_{k}}_ {p,r}(s,a)\right]\right]\\ \text{s.t.}&\mathop{\mathbb{E}}_{s\sim\mathcal{D}} \left[\mathop{\mathbb{E}}_{a\sim\pi(\cdot\mid s)}\left[Q^{\pi_{k}}_{p,c}(s,a) \right]\right]\leq B,\end{split} \tag{2}\] where \(\pi_{k}\) is the current policy and \(\mathcal{D}\) represents data collected during training. ### Robust and Safe Reinforcement Learning We are often interested in finding a policy \(\pi\) that achieves strong, safe performance across a range of related environments. 
In order to accomplish this, Mankowitz et al. (2021) and Russel et al. (2021) propose a Robust Constrained MDP (RC-MDP) framework defined by the tuple \((\mathcal{S},\mathcal{A},\mathcal{P},r,c,\rho_{0},\gamma)\), where \(\mathcal{P}\) represents an uncertainty set of transition models. We assume \(\mathcal{P}\) takes the form \(\mathcal{P}=\bigotimes_{(s,a)\in\mathcal{S}\times\mathcal{A}}\mathcal{P}_{s,a}\), where \(\mathcal{P}_{s,a}\) is a set of transition models \(p_{s,a}=p(\cdot\mid s,a)\in P(\mathcal{S})\) at a given state-action pair and \(\mathcal{P}\) is the product of these sets. This structure is referred to as rectangularity, and is a common assumption in the literature (Nilim and Ghaoui, 2005; Iyengar, 2005). The RC-MDP framework leads to a robust version of (1) given by \[\begin{split}\max_{\pi}&\inf_{p\in\mathcal{P}}J_{p,r }(\pi)\\ \text{s.t.}&\sup_{p\in\mathcal{P}}J_{p,c}(\pi)\leq B.\end{split} \tag{3}\] As in the standard safe RL setting, we can iteratively optimize (3) by considering the related optimization problem \[\begin{split}\max_{\pi}&\mathop{\mathbb{E}}_{s\sim \mathcal{D}}\left[\mathop{\mathbb{E}}_{a\sim\pi(\cdot\mid s)}\left[Q^{\pi_{k}}_ {\mathcal{P},r}(s,a)\right]\right]\\ \text{s.t.}&\mathop{\mathbb{E}}_{s\sim\mathcal{D}} \left[\mathop{\mathbb{E}}_{a\sim\pi(\cdot\mid s)}\left[Q^{\pi_{k}}_{\mathcal{P},c}(s,a)\right]\right]\leq B,\end{split} \tag{4}\] where \(Q^{\pi}_{\mathcal{P},r}(s,a)\) and \(Q^{\pi}_{\mathcal{P},c}(s,a)\) represent robust Q functions. Alternatively, if we only care about robustness with respect to safety, we can instead consider a nominal or optimistic objective in (4) to promote exploration. ## 4 Optimal Transport Uncertainty Set Compared to the standard safe RL update in (2), the only difference in the robust and safe RL update of (4) comes from the use of robust Q functions. Therefore, in order to incorporate robustness into existing deep safe RL algorithms, we must be able to efficiently learn \(Q^{\pi}_{\mathcal{P},r}(s,a)\) and \(Q^{\pi}_{\mathcal{P},c}(s,a)\). We can write these robust Q functions recursively as \[\begin{split} Q^{\pi}_{\mathcal{P},r}(s,a)&=r(s,a)+ \gamma\inf_{p_{s,a}\in\mathcal{P}_{s,a}}\mathop{\mathbb{E}}_{s^{\prime}\sim p _{s,a}}\left[V^{\pi}_{\mathcal{P},r}(s^{\prime})\right],\\ Q^{\pi}_{\mathcal{P},c}(s,a)&=c(s,a)+\gamma\sup_{p _{s,a}\in\mathcal{P}_{s,a}}\mathbb{E}_{s^{\prime}\sim p_{s,a}}\left[V^{\pi}_{ \mathcal{P},c}(s^{\prime})\right],\end{split} \tag{5}\] where we define the corresponding robust state value functions as \(V^{\pi}_{\mathcal{P},r}(s^{\prime})=\mathbb{E}_{a^{\prime}\sim\pi(\cdot\mid s^{ \prime})}\left[Q^{\pi}_{\mathcal{P},r}(s^{\prime},a^{\prime})\right]\) and \(V^{\pi}_{\mathcal{P},c}(s^{\prime})=\mathbb{E}_{a^{\prime}\sim\pi(\cdot\mid s ^{\prime})}\left[Q^{\pi}_{\mathcal{P},c}(s^{\prime},a^{\prime})\right]\). 
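To make the recursion in (5) concrete, the following is a minimal sketch of a robust Bellman backup for a small tabular C-MDP, where a finite set of candidate transition models stands in for the uncertainty set \(\mathcal{P}_{s,a}\); this is a simplification for illustration only, and the optimal transport uncertainty set actually used in this work is introduced in the following paragraphs.

```python
# Robust Bellman backup for a small tabular C-MDP.  P_set[s][a] is a list of
# candidate next-state distributions (numpy arrays over states) standing in for
# the uncertainty set P_{s,a}; r and c are |S| x |A| arrays of rewards and costs;
# V_r and V_c are current state-value estimates.
import numpy as np

def robust_backup(s, a, r, c, V_r, V_c, P_set, gamma=0.99):
    worst_case_reward_value = min(p @ V_r for p in P_set[s][a])  # pessimistic for rewards
    worst_case_cost_value = max(p @ V_c for p in P_set[s][a])    # pessimistic for costs
    Q_r = r[s, a] + gamma * worst_case_reward_value
    Q_c = c[s, a] + gamma * worst_case_cost_value
    return Q_r, Q_c
```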
The corresponding robust Bellman operators (Nilim and Ghaoui, 2005; Iyengar, 2005) can be written as \[\begin{split}\mathcal{T}^{\pi}_{\mathcal{P},r}Q_{r}(s,a)& :=r(s,a)+\gamma\inf_{p_{s,a}\in\mathcal{P}_{s,a}}\mathop{\mathbb{E}}_{s^{ \prime}\sim p_{s,a}}\left[V^{\pi}_{r}(s^{\prime})\right],\\ \mathcal{T}^{\pi}_{\mathcal{P},c}Q_{c}(s,a)&:=c(s,a)+ \gamma\sup_{p_{s,a}\in\mathcal{P}_{s,a}}\mathop{\mathbb{E}}_{s^{\prime}\sim p_{s,a}}\left[V^{\pi}_{c}(s^{\prime})\right],\end{split} \tag{6}\] where we write \(V^{\pi}_{r}(s^{\prime})=\mathbb{E}_{a^{\prime}\sim\pi(\cdot\mid s^{\prime})} \left[Q_{r}(s^{\prime},a^{\prime})\right]\) and \(V^{\pi}_{c}(s^{\prime})=\mathbb{E}_{a^{\prime}\sim\pi(\cdot\mid s^{\prime})} \left[Q_{c}(s^{\prime},a^{\prime})\right]\). Note that \(\mathcal{T}^{\pi}_{\mathcal{P},r}\) and \(\mathcal{T}^{\pi}_{\mathcal{P},c}\) are contraction operators, with \(Q^{\pi}_{\mathcal{P},r}(s,a)\) and \(Q^{\pi}_{\mathcal{P},c}(s,a)\) their respective unique fixed points (Nilim and Ghaoui, 2005; Iyengar, 2005). Because \(\mathcal{T}^{\pi}_{\mathcal{P},r}\) and \(\mathcal{T}^{\pi}_{\mathcal{P},c}\) are contraction operators, we can apply standard temporal difference (TD) learning techniques to learn these robust Q functions. In order to do so, we must be able to calculate the Bellman targets in (5) and (6), which involve optimization problems over transition distributions that depend on the choice of uncertainty set \(\mathcal{P}_{s,a}\) at every state-action pair. Popular choices of \(\mathcal{P}_{s,a}\) in the literature require the ability to change physical parameters of the environment (Peng et al., 2018) or directly apply adversarial perturbations during trajectory rollouts (Tessler et al., 2019) to calculate worst-case transitions. In this work, we use optimal transport theory to consider an uncertainty set that can be efficiently implemented in a model-free fashion using only samples collected from a nominal environment. In order to do so, we assume that \(\mathcal{S}\) is a Polish space (i.e., a separable, completely metrizable topological space). Note that the Euclidean space \(\mathbb{R}^{n}\) is Polish, so this is not very restrictive. Next, we define \(\mathcal{P}_{s,a}\) using the optimal transport cost between transition distributions. **Definition 4.1** (Optimal Transport Cost).: Let \(\mathcal{S}\) be a Polish space, and let \(d:\mathcal{S}\times\mathcal{S}\rightarrow\mathbb{R}_{+}\) be a non-negative, lower semicontinuous function satisfying \(d(s^{\prime},s^{\prime})=0\) for all \(s^{\prime}\in\mathcal{S}\). Then, the _optimal transport cost_ between two transition distributions \(\hat{p}_{s,a},p_{s,a}\in P(\mathcal{S})\) is defined as \[\text{OTC}_{d}(\hat{p}_{s,a},p_{s,a})=\inf_{\nu\in\Gamma(\hat{p}_{s,a}p_{s,a}) }\int_{\mathcal{S}\times\mathcal{S}}d(\hat{s}^{\prime},s^{\prime})\text{d}\nu (\hat{s}^{\prime},s^{\prime}),\] where \(\Gamma(\hat{p}_{s,a},p_{s,a})\) is the set of all couplings of \(\hat{p}_{s,a}\) and \(p_{s,a}\). If \(d\) is chosen to be a metric raised to some power \(p\geq 1\), we recover the \(p\)-Wasserstein distance raised to the power \(p\) as a special case. If we let \(d(\hat{s}^{\prime},s^{\prime})=\mathbbm{1}_{\hat{s}^{\prime}\neq s^{\prime}}\), we recover the total variation distance as a special case (Villani, 2008). By considering the optimal transport cost from some nominal transition distribution \(\hat{p}_{s,a}\), we define the optimal transport uncertainty set as follows. 
**Definition 4.2** (Optimal Transport Uncertainty Set).: For a given nominal transition distribution \(\hat{p}_{s,a}\) and radius \(\epsilon_{s,a}\) at state-action pair \((s,a)\in\mathcal{S}\times\mathcal{A}\), the _optimal transport uncertainty set_ is defined as \[\mathcal{P}_{s,a}=\left\{p_{s,a}\in P(\mathcal{S})\mid\text{OTC}_{d}(\hat{p}_{s,a},p_{s,a})\leq\epsilon_{s,a}\right\}.\] This uncertainty set has previously been considered in robust RL for the special case of the Wasserstein distance (Abdullah et al., 2019; Hou et al., 2020; Kuang et al., 2022). The use of optimal transport cost to compare transition distributions has several benefits. First, optimal transport cost accounts for the relationship between states in \(\mathcal{S}\) through the function \(d\), and we can choose \(d\) to reflect the geometry of \(\mathcal{S}\) in a meaningful way. In particular, optimal transport cost allows significant flexibility in the choice of \(d\), including threshold-based binary comparisons between states that are not metrics or pseudo-metrics (Pydi and Jog, 2020). Next, optimal transport cost remains valid for distributions that do not share the same support, unlike other popular measures between distributions such as the Kullback-Leibler divergence. In particular, the optimal transport uncertainty set can be applied to both stochastic and deterministic transitions. Finally, as we will show in the following sections, the use of an optimal transport uncertainty set results in an efficient model-free implementation of robust and safe RL that only requires the ability to collect data in a nominal environment. ## 5 Reformulation as Adversarial Perturbations to State Transitions In order to provide tractable reformulations of the Bellman operators in (5) and (6), we consider the following main assumptions. **Assumption 5.1**.: For any \(\pi\) and \(Q_{r}(s^{\prime},a^{\prime})\) in (5), \(V_{r}^{\pi}(s^{\prime})=\mathbb{E}_{a^{\prime}\sim\pi(\cdot\mid s^{\prime})}\left[Q_{r}(s^{\prime},a^{\prime})\right]\) is lower semicontinuous and \(\mathbb{E}_{s^{\prime}\sim\hat{p}_{s,a}}|V_{r}^{\pi}(s^{\prime})|<\infty\). For any \(\pi\) and \(Q_{c}(s^{\prime},a^{\prime})\) in (6), \(V_{c}^{\pi}(s^{\prime})=\mathbb{E}_{a^{\prime}\sim\pi(\cdot\mid s^{\prime})}\left[Q_{c}(s^{\prime},a^{\prime})\right]\) is upper semicontinuous and \(\mathbb{E}_{s^{\prime}\sim\hat{p}_{s,a}}|V_{c}^{\pi}(s^{\prime})|<\infty\). **Assumption 5.2**.: Optimal transport plans exist for the distributionally robust optimization problems in (5) and (6). Note that Assumptions 5.1-5.2 correspond to assumptions in Blanchet and Murthy (2019) applied to our setting. In practice, the use of neural network representations results in continuous value functions, which are bounded for the common case when rewards and costs are bounded, respectively. A sufficient condition for Assumption 5.2 to hold is if \(\mathcal{S}\) is compact, or if we restrict our attention to a compact subset of next states in our definition of \(\mathcal{P}_{s,a}\), which is reasonable in practice. Blanchet and Murthy (2019) also provide other sufficient conditions for Assumption 5.2 to hold. Under these assumptions, we can reformulate the Bellman operators in (5) and (6) to allow for efficient deep RL implementations. **Lemma 5.3**.: _Let Assumption 5.1 hold. 
Then, we have_ \[\mathcal{T}_{\mathcal{P},r}^{\pi}Q_{r}(s,a)=r(s,a)+\gamma\sup_{\lambda\geq 0}\left\{-\lambda\epsilon_{s,a}+\mathop{\mathbb{E}}_{\hat{s}^{\prime}\sim\hat{p}_{s,a}}\left[\inf_{s^{\prime}\in\mathcal{S}}\left(V_{r}^{\pi}(s^{\prime})+\lambda d(\hat{s}^{\prime},s^{\prime})\right)\right]\right\}, \tag{7}\] \[\mathcal{T}_{\mathcal{P},c}^{\pi}Q_{c}(s,a)=c(s,a)+\gamma\inf_{\lambda\geq 0}\left\{\lambda\epsilon_{s,a}+\mathop{\mathbb{E}}_{\hat{s}^{\prime}\sim\hat{p}_{s,a}}\left[\sup_{s^{\prime}\in\mathcal{S}}\left(V_{c}^{\pi}(s^{\prime})-\lambda d(\hat{s}^{\prime},s^{\prime})\right)\right]\right\}, \tag{8}\] _for a given state-action pair \((s,a)\in\mathcal{S}\times\mathcal{A}\)._

Proof.: The value functions \(V_{r}^{\pi}\) and \(V_{c}^{\pi}\) satisfy the semicontinuity and integrability conditions of Blanchet and Murthy (2019) under Assumption 5.1. By applying optimal transport strong duality and substituting these results into (5) and (6), we arrive at the results in (7) and (8), respectively. See the Appendix for details.

With the addition of Assumption 5.2, we can further reformulate the results in Lemma 5.3 to arrive at a tractable result that can be efficiently implemented in a deep RL setting.

**Theorem 5.4**.: _Let Assumptions 5.1-5.2 hold, and let \(\mathcal{G}\) be the set of all functions from \(\mathcal{S}\) to \(\mathcal{S}\). Then, we have_ \[\mathcal{T}_{\mathcal{P},r}^{\pi}Q_{r}(s,a)=r(s,a)+\gamma\mathop{\mathbb{E}}_{\hat{s}^{\prime}\sim\hat{p}_{s,a}}\left[V_{r}^{\pi}(g_{s,a}^{r}(\hat{s}^{\prime}))\right], \tag{9}\] \[\mathcal{T}_{\mathcal{P},c}^{\pi}Q_{c}(s,a)=c(s,a)+\gamma\mathop{\mathbb{E}}_{\hat{s}^{\prime}\sim\hat{p}_{s,a}}\left[V_{c}^{\pi}(g_{s,a}^{c}(\hat{s}^{\prime}))\right], \tag{10}\] _where \(g_{s,a}^{r}:\mathcal{S}\rightarrow\mathcal{S}\) is a minimizer of_ \[\min_{g\in\mathcal{G}}\ \mathop{\mathbb{E}}_{\hat{s}^{\prime}\sim\hat{p}_{s,a}}\left[V_{r}^{\pi}(g(\hat{s}^{\prime}))\right]\quad\text{s.t.}\quad\mathop{\mathbb{E}}_{\hat{s}^{\prime}\sim\hat{p}_{s,a}}\left[d(\hat{s}^{\prime},g(\hat{s}^{\prime}))\right]\leq\epsilon_{s,a}, \tag{11}\] _and \(g_{s,a}^{c}:\mathcal{S}\rightarrow\mathcal{S}\) is a maximizer of_ \[\max_{g\in\mathcal{G}}\ \mathop{\mathbb{E}}_{\hat{s}^{\prime}\sim\hat{p}_{s,a}}\left[V_{c}^{\pi}(g(\hat{s}^{\prime}))\right]\quad\text{s.t.}\quad\mathop{\mathbb{E}}_{\hat{s}^{\prime}\sim\hat{p}_{s,a}}\left[d(\hat{s}^{\prime},g(\hat{s}^{\prime}))\right]\leq\epsilon_{s,a}, \tag{12}\] _for a given state-action pair \((s,a)\in\mathcal{S}\times\mathcal{A}\)._

Proof.: First, we show that the dual problems to (11) and (12) appear in the right-hand side of (7) and (8), respectively. Next, we use Assumption 5.2 to show that strong duality holds for these pairs of primal-dual problems. See the Appendix for details.

Theorem 5.4 demonstrates that we can calculate the Bellman operators \(\mathcal{T}_{\mathcal{P},r}^{\pi}\) and \(\mathcal{T}_{\mathcal{P},c}^{\pi}\) by using samples collected from a nominal environment with transition distributions \(\hat{p}_{s,a}\), and adversarially perturbing the next state samples according to (11) and (12), respectively. We refer to the resulting changes in state transitions as _Optimal Transport Perturbations (OTP)_. As a result, we have replaced difficult optimization problems over distribution space in (5) and (6) with the tractable problems of computing Optimal Transport Perturbations in state space. Theorem 5.4 represents the main theoretical contribution of our work, which directly motivates an efficient deep RL implementation of robust and safe RL.
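To make the duality in Lemma 5.3 concrete, the following sketch compares the primal worst-case expectation over the optimal transport uncertainty set with the dual form in (7) on a small finite state space. It is purely illustrative: the state space, value function, metric, and the use of numpy/scipy are assumptions made here, not part of the paper.

```python
# Minimal numerical check of the duality in Lemma 5.3 on a finite state space.
# Assumptions (not from the paper): states are points on a line, d is the
# absolute difference, and numpy/scipy are available.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
states = np.linspace(0.0, 1.0, 6)              # finite stand-in for S
m = len(states)
V_r = np.sin(3.0 * states)                     # arbitrary reward value function V_r^pi
p_hat = rng.dirichlet(np.ones(m))              # nominal transition distribution p_hat_{s,a}
D = np.abs(states[:, None] - states[None, :])  # d(s_i, s_j)
eps = 0.05                                     # radius epsilon_{s,a}

# Primal: inf over couplings nu of sum_{i,j} nu_{ij} V_r[j]
#   s.t. sum_j nu_{ij} = p_hat[i]  and  sum_{i,j} nu_{ij} D_{ij} <= eps.
c = np.tile(V_r, m)                        # objective coefficients for flattened nu
A_eq = np.kron(np.eye(m), np.ones(m))      # row-marginal constraints
A_ub = D.reshape(1, -1)                    # transport-cost budget
res = linprog(c, A_ub=A_ub, b_ub=[eps], A_eq=A_eq, b_eq=p_hat, bounds=(0, None))
primal = res.fun

# Dual from (7): sup_{lambda >= 0} { -lambda*eps + E_{p_hat}[ min_j (V_r[j] + lambda*D[i,j]) ] }
lams = np.linspace(0.0, 50.0, 2001)
dual = max(-lam * eps + p_hat @ np.min(V_r[None, :] + lam * D, axis=1) for lam in lams)

print(f"nominal E[V_r] = {p_hat @ V_r:.4f}")
print(f"primal worst case = {primal:.4f}, dual (grid over lambda) = {dual:.4f}")
```

On finite spaces the primal is a small linear program over couplings, so the two values can be compared directly; in the continuous deep RL setting this is exactly the computation that Theorem 5.4 avoids by optimizing over perturbation functions instead.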
Finally, note that these perturbed state transitions are only used to calculate the Bellman targets in (9) and (10) for training the robust Q functions \(Q_{\mathcal{P},r}^{\pi}(s,a)\) and \(Q_{\mathcal{P},c}^{\pi}(s,a)\). Therefore, unlike other adversarial approaches to robust RL (Pinto et al., 2017; Tessler et al., 2019; Vinitsky et al., 2020), we do not need to apply our Optimal Transport Perturbations during trajectory rollouts. Instead, these perturbations are applied in a completely offline fashion. See Figure 1 for an illustration.

Figure 1: Illustration of Optimal Transport Perturbations from Theorem 5.4 for a given next state sample \(\hat{s}^{\prime}\sim\hat{p}_{s,a}\) in the nominal environment. The black arrow denotes the state transition observed in the nominal environment, and the shaded area denotes the feasible set in \(\mathcal{S}\) from (11) and (12). Theorem 5.4 calculates separate next state perturbations for the robust reward Bellman operator (shown in blue) and the robust cost Bellman operator (shown in orange). The dashed arrows denote imagined transitions used only to calculate Bellman operators.

## 6 Perturbation Networks for Deep Reinforcement Learning

From Theorem 5.4, we can calculate Bellman targets for our robust Q functions \(Q_{\mathcal{P},r}^{\pi}(s,a)\) and \(Q_{\mathcal{P},c}^{\pi}(s,a)\) by considering adversarially perturbed versions of next states sampled from \(\hat{p}_{s,a}\). We can construct these adversarial perturbations by solving (11) and (12), respectively. Note that the perturbation functions \(g_{s,a}^{r},g_{s,a}^{c}\in\mathcal{G}\) from Theorem 5.4 differ across state-action pairs. We can represent the collection of perturbation functions at every state-action pair by considering the perturbation functions \(g^{r},g^{c}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow\mathcal{S}\), which take the state-action pair \((s,a)\) as input for context in addition to the next state \(\hat{s}^{\prime}\) to be perturbed. We let \(\mathcal{F}\) be the set of all functions from \(\mathcal{S}\times\mathcal{A}\times\mathcal{S}\) to \(\mathcal{S}\), with \(g^{r},g^{c}\in\mathcal{F}\). In order to efficiently calculate the perturbation functions from Theorem 5.4 in a deep RL setting, we consider the optimization problems \[\begin{split} g^{r}\in\arg\min_{g\in\mathcal{F}_{\delta}}&\mathop{\mathbb{E}}_{(s,a,\hat{s}^{\prime})\sim\mathcal{D}}\left[V_{r}^{\pi}(g(s,a,\hat{s}^{\prime}))\right]\\ \text{s.t.}&\mathop{\mathbb{E}}_{(s,a,\hat{s}^{\prime})\sim\mathcal{D}}\left[d(\hat{s}^{\prime},g(s,a,\hat{s}^{\prime}))\right]\leq\epsilon,\end{split} \tag{13}\] and \[\begin{split} g^{c}\in\arg\max_{g\in\mathcal{F}_{\delta}}&\mathop{\mathbb{E}}_{(s,a,\hat{s}^{\prime})\sim\mathcal{D}}\left[V_{c}^{\pi}(g(s,a,\hat{s}^{\prime}))\right]\\ \text{s.t.}&\mathop{\mathbb{E}}_{(s,a,\hat{s}^{\prime})\sim\mathcal{D}}\left[d(\hat{s}^{\prime},g(s,a,\hat{s}^{\prime}))\right]\leq\epsilon,\end{split} \tag{14}\] where \((s,a,\hat{s}^{\prime})\sim\mathcal{D}\) are transitions collected in the nominal environment and \(\mathcal{F}_{\delta}\subseteq\mathcal{F}\) represents a class of parameterized perturbation functions. The average constraints in (13) and (14) effectively allow \(\epsilon_{s,a}\) to differ across state-action pairs while being no greater than \(\epsilon\) on average. In the context of deep RL, we consider perturbation functions parameterized by a neural network \(\delta:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow\mathcal{S}\).
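As a rough illustration of how the constrained problem (13) might be optimized in practice, the sketch below trains a small reward perturbation network with a quadratic penalty in place of the hard constraint on the average transport cost. The network shapes, hyperparameters, the stand-in value function, and the use of PyTorch are assumptions for illustration only, not the paper's implementation.

```python
# Hypothetical sketch: training a reward perturbation network delta_r for (13)
# with a penalty enforcing E[d(s_hat', g(s,a,s_hat'))] <= eps on average.
# Shapes, hyperparameters, and the penalty formulation are illustrative assumptions.
import torch
import torch.nn as nn

state_dim, action_dim = 8, 2
eps, penalty_coef = 0.05, 10.0

delta_r = nn.Sequential(                      # delta_r : (s, a, s_hat') -> perturbation
    nn.Linear(2 * state_dim + action_dim, 64), nn.ReLU(),
    nn.Linear(64, state_dim), nn.Tanh(),
)
opt = torch.optim.Adam(delta_r.parameters(), lr=1e-3)

def value_r(next_states):
    # Stand-in for V_r^pi(s') = E_{a'~pi}[Q_r(s', a')]; in a full agent this
    # would query the policy and the reward critic networks.
    return -next_states.pow(2).sum(dim=-1)

def perturb(s, a, s_hat_next):
    # Additive perturbation of the observed next state (a generic choice here;
    # the experiments described later use a multiplicative form instead).
    return s_hat_next + delta_r(torch.cat([s, a, s_hat_next], dim=-1))

# One update on a batch of nominal transitions (s, a, s_hat') from the buffer D.
s = torch.randn(256, state_dim)
a = torch.randn(256, action_dim)
s_hat_next = s + 0.1 * torch.randn(256, state_dim)

g = perturb(s, a, s_hat_next)
transport_cost = (g - s_hat_next).norm(dim=-1).mean()          # E[d(s_hat', g)]
loss = value_r(g).mean() + penalty_coef * torch.relu(transport_cost - eps) ** 2
opt.zero_grad()
loss.backward()
opt.step()
```

A cost perturbation network for (14) would be trained the same way with the sign of the value term flipped, and the perturbed next states are then plugged into (9) and (10) to form the robust Bellman targets.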
In our experiments, we consider tasks where \(\mathcal{S}=\mathbb{R}^{n}\) and we apply multiplicative perturbations to state transitions. In particular, we consider perturbation functions of the form \[g(s,a,\hat{s}^{\prime})=s+(\hat{s}^{\prime}-s)(1+\delta(s,a,\hat{s}^{\prime})), \tag{15}\] where \(\delta(s,a,\hat{s}^{\prime})\in\mathbb{R}^{n}\) and all operations are performed coordinate-wise. By defining \(\mathcal{F}_{\delta}\) in this way, we obtain plausible adversarial transitions that are interpretable, where \(\delta(s,a,\hat{s}^{\prime})\) represents the percentage change to the nominal state transition in each coordinate. In practice, we directly constrain the average magnitude of \(\delta(s,a,\hat{s}^{\prime})\) by \(\epsilon_{\delta}\), which can be interpreted as setting \(\epsilon_{s,a}\) to be a percentage of the average state transition magnitude at every state-action pair. We train separate reward and cost perturbation networks \(\delta_{r}\) and \(\delta_{c}\), and we apply the resulting Optimal Transport Perturbations to calculate Bellman targets for training the robust Q functions \(Q_{\mathcal{P},r}^{\pi}(s,a)\) and \(Q_{\mathcal{P},c}^{\pi}(s,a)\).

## 7 Algorithm

We summarize our approach to robust and safe RL in Algorithm 1. At every update, we sample previously collected data from a replay buffer \(\mathcal{D}\). We update our reward and cost perturbation networks \(\delta_{r}\) and \(\delta_{c}\) according to (13) and (14), respectively. Then, we estimate Bellman targets according to (9) and (10), which we use to update our critics via standard TD learning loss functions. Finally, we use these critic estimates to update our policy according to (4). Compared to standard safe RL methods, the only additional components of our approach are the perturbation networks used to apply Optimal Transport Perturbations, which we train alongside the critics and the policy using standard gradient-based methods. Otherwise, the computations for updating the critics and policy remain unchanged. Therefore, it is simple to incorporate our Optimal Transport Perturbation method into existing deep safe RL algorithms in order to provide robustness guarantees on performance and safety.

## 8 Experiments

We evaluate our framework on continuous control tasks with safety constraints, using the safe RL algorithm CRPO as our base method because it provides improved constraint satisfaction compared to Lagrangian-based approaches. We use the unconstrained deep RL algorithm Maximum a Posteriori Policy Optimization (MPO) (Abdolmaleki et al., 2018) to calculate policy updates in CRPO. We consider a multivariate Gaussian policy, where the mean and diagonal covariance at a given state are parameterized by a neural network. We also consider separate neural network parameterizations for the reward and cost critics. See the Appendix for additional implementation details, including network architectures and values of all hyperparameters.1 Footnote 1: Code is publicly available at [https://github.com/jqueeney/robust-safe-rl](https://github.com/jqueeney/robust-safe-rl). We incorporate robustness into this baseline safe RL algorithm in three ways: (i) Optimal Transport Perturbations, (ii) adversarial RL using the action-robust PR-MDP framework from Tessler et al. (2019) applied to the safety constraint, and (iii) the soft-robust approach of domain randomization (Peng et al., 2018; Derman et al., 2018). For our Optimal Transport Perturbations, we consider the perturbation structure in (15), where \(\delta_{r}\) and \(\delta_{c}\) are neural networks.
We constrain the average per-coordinate magnitude of these perturbation networks to be less than \(\epsilon_{\delta}=0.02\) (i.e., 2% perturbations on average). Figure 2 shows the total rewards and costs obtained by our OTP algorithm for each task across a range of perturbed test environments, compared to CRPO and PR-MDP. The performance of all algorithms averaged across tasks and test environments is summarized in Table 1.

Figure 2: Comparison of algorithms across tasks and environment perturbations. Performance of PR-MDP is evaluated without adversarial interventions. Shading denotes half of one standard error across policies. Vertical dotted lines represent nominal training environment. Top: Total reward. Bottom: Total cost, where horizontal dotted lines represent the safety budget and values below these lines represent safety constraint satisfaction.

### Comparison to Safe RL

By applying Optimal Transport Perturbations to the objective and constraint in safe RL, we achieve meaningful test-time improvements compared to the standard non-robust version of CRPO. While in most cases we observe a decrease in total rewards in the nominal environment in order to achieve robustness, as expected, on average our framework leads to an increase in total rewards of 1.06x relative to CRPO across the range of test environments. Most importantly, we see a significant improvement in safety, with our algorithm satisfying constraints in 87% of test cases (compared to 51% for CRPO) and incurring 0.34x the costs of CRPO, on average. Note that we achieve this robustness while collecting data from the same training environment considered in CRPO, without requiring adversarial interventions in the environment or domain knowledge on the structure of the perturbed test environments.

### Comparison to Adversarial RL

Next, we compare our approach to the PR-MDP framework (Tessler et al., 2019), an adversarial RL method that randomly applies adversarial actions a percentage of the time during training. In order to apply this method to the safe RL setting, we train the adversary to maximize costs. We apply the default probability of intervention of 10% considered in Tessler et al. (2019). As shown in Figure 2, this adversarial approach leads to robust constraint satisfaction across test environments (88% of the time compared to 87% for our OTP framework), especially in the Quadruped environments. Our OTP framework, on the other hand, leads to improved constraint satisfaction in the remaining 3 tasks. However, the robust safety demonstrated by the PR-MDP approach also results in lower total rewards on average, and is the only robust approach in Table 1 that underperforms CRPO in terms of reward. In order to improve performance with respect to reward, we also considered PR-MDP with a lower probability of intervention of 5%. We include this version of PR-MDP in Table 1, and detailed results across tasks can be found in Figure 4 of the Appendix. While this less adversarial implementation of PR-MDP is comparable to our OTP framework in terms of total rewards, it leads to a decrease in safety constraint satisfaction to 82%. Therefore, our OTP formulation demonstrates the safety benefits of the more adversarial setting and the reward benefits of the less adversarial setting. In addition, an important drawback of adversarial RL is that it requires the intervention of an adversary in the training environment.
Therefore, in order to achieve robust safety at deployment time, the PR-MDP approach incurs additional cost during training due to the presence of an adversary. Even in the Quadruped domains where PR-MDP results in near-zero cost at deployment time, Figure 3 shows that this algorithm leads to the highest total cost during training due to adversarial interventions. In many real-world situations, this additional cost during training is undesirable. Our OTP framework, on the other hand, achieves the lowest total cost during training, while also resulting in robust safety when deployed in perturbed environments. This is due to the fact that our Optimal Transport Perturbations are applied in a completely offline fashion. ### Comparison to Domain Randomization Finally, we compare our OTP framework to the soft-robust approach of domain randomization (Peng et al., 2018; Derman et al., 2018), which assumes access to a range of environments during training through the use of a simulator. We consider the same training distributions for domain randomization as in Queeney & Benosman (2023). By training across a range of environments, domain randomization achieves strong performance across test cases in terms of reward (1.14x compared to CRPO, on average), which was the motivation for its development in the setting of sim-to-real transfer. However, note that domain randomization was originally proposed for the unconstrained setting, and we observe that it does not consistently satisfy safety constraints outside of its range of training environments (see Figure 5 in the Appendix). This is likely due to its soft-robust approach that focuses on average performance across the training distribution. Domain randomization satisfies safety constraints in 76% of test cases, which is lower than both OTP and PR-MDP which explicitly consider robust formulations. It is also important to note that domain randomization not only requires access to a range of training environments, it also requires prior knowledge on the structure of potential disturbances in order to define its training distribution. In order to evaluate the case where we lack domain knowledge, we include an out-of-distribution (OOD) version of domain randomization in Table 1 that is trained on a distribution over a different parameter than the one varied in our perturbed test environments. See Figure 5 in the Appendix for detailed results across tasks. When the training distribution is not appropriately selected, we see that domain randomization provides little benefit compared to standard non-robust safe RL. Our OTP framework, on the other hand, guarantees robust and safe performance under general forms of environment uncertainty while only collecting data from a single training environment. ## 9 Conclusion In this work, we have developed a general, efficient framework for robust and safe RL. Through the use of optimal transport theory, we demonstrated that we can guarantee robustness to general forms of environment disturbances by applying adversarial perturbations to observed state transitions. These Optimal Transport Perturbations can be efficiently implemented in an offline fashion using only data collected from a nominal training environment, and can be easily combined with existing techniques for safe RL to provide protection against unknown disturbances. 
Because our framework makes limited assumptions on the data collection process during training and does not require directly modifying the environment, it should be compatible with many real-world decision making applications. As a result, we hope that our work represents a promising step towards trustworthy deep RL algorithms that can be reliably deployed to improve real-world decision making.

## Acknowledgements

This research was partially supported by the NSF under grants CCF-2200052, CNS-1645681, CNS-2149511, DMS-1664644, ECCS-1931600, and IIS-1914792, by the ONR under grants N00014-19-1-2571 and N00014-21-1-2844, by the NIH under grants R01 GM135930 and UL54 TR004130, by AFOSR under grant FA9550-19-1-0158, by ARPA-E under grant DE-AR0001282, by the MathWorks, and by the Boston University Kilachand Fund for Integrated Life Science and Engineering.

Figure 3: Comparison of average final training cost in the nominal training environment. Training cost of PR-MDP includes impact of adversarial interventions. Horizontal dotted line represents safety budget.
2301.04609
The Effects of Hofstede's Cultural Dimensions on Pro-Environmental Behaviour: How Culture Influences Environmentally Conscious Behaviour
The need for a more sustainable lifestyle is a key focus for several countries. Using a questionnaire survey conducted in Hungary, this paper examines how culture influences environmentally conscious behaviour. Having investigated the direct impact of Hofstedes cultural dimensions on pro-environmental behaviour, we found that the culture of a country hardly affects actual environmentally conscious behaviour. The findings indicate that only individualism and power distance have a significant but weak negative impact on pro-environmental behaviour. Based on the findings, we can state that a positive change in culture is a necessary but not sufficient condition for making a country greener.
Szabolcs Nagy, Csilla Konyha Molnarne
2022-12-26T09:53:29Z
http://arxiv.org/abs/2301.04609v1
The Effects of Hofstede's Cultural Dimensions on Pro-Environmental Behaviour: How Culture Influences Environmentally Conscious Behaviour

###### Abstract

The need for a more sustainable lifestyle is a key focus for several countries. Using a questionnaire survey conducted in Hungary, this paper examines how culture influences environmentally conscious behaviour. Having investigated the direct impact of Hofstede's cultural dimensions on pro-environmental behaviour, we found that the culture of a country hardly affects actual environmentally conscious behaviour. The findings indicate that only individualism and power distance have a significant but weak negative impact on pro-environmental behaviour. Based on the findings, we can state that a positive change in culture is a necessary but not sufficient condition for making a country greener.

Keywords: pro-environmental behaviour, culture, Hofstede's cultural dimensions, Hungary, individualism, power distance

Journal of Economic Literature (JEL) codes: M31, P27, Q01, Q50, Z13

DOI: [http://dx.doi.org/10.18096/TMP.2018.01.03](http://dx.doi.org/10.18096/TMP.2018.01.03)

## 1 Introduction

Our world is overwhelmed with environmental and social problems. Air pollution, climate change, deforestation, extinction of species, soil degradation, chemicals and waste are regarded as the most crucial environmental issues (UNEP, 2016). Culture influences behavioural patterns of individuals, including pro-environmental behaviour, to a large extent through socialization; therefore, analysis of the effects of culture on environmentally conscious behaviour is indispensable. The starting point of our investigation is that different cultures are based on different dominant core values. Those core values determine to what extent people will behave in an environmentally conscious way and whether environmentally friendly products will be accepted in a society, and if so, to what extent consumers will demand them. We assume that if a culture is based on a dominant set of values that are positively correlated with pro-environmental behaviour, that is, if environment-related values are important in a society, it will have a positive impact on the general level of pro-environmental behaviour and the demand for environmentally friendly products. Our research objective was to analyse how culture influences pro-environmental behaviour.

## 2 Literature Review

Although many researchers have addressed the negative consequences of individual behaviour behind environmental issues (Boldero 1995; Oskamp 2000; Nordlund & Garvill 2002; Ojala 2008; Klockner & Oppedal 2011; Swami et al. 2011; Guerrero et al. 2013; Marshall & Farahbakhsh 2013), previous works failed to investigate the effects of Hofstede's cultural dimensions on pro-environmental behaviour. However, understanding and predicting the forces influencing pro-environmental behaviour would be highly significant, as previous studies (Nagy 2005, 2012; Hofmeister-Toth et al. 2011) suggest that the level of environmentally conscious behaviour is rather low in Hungary. Szakaly et al. (2015) found that the size of the environmentally conscious LOHAS (Lifestyle of Health and Sustainability) customer segment in Hungary was only 8.7 percent.
Pro-environmental behaviour is defined by Steg and Vlek (2009, p. 309) to mean "behavior that harms the environment as little as possible, or even benefits the environment". Tylor (1871) was probably the first one to define culture as "the complex whole which includes knowledge, beliefs, arts, morals, law, customs, and any other capabilities and habits acquired by [a human] as a member of society." According to Hofstede (2011) culture is "the collective programming of the mind which distinguishes the members of one group or category of people from another." Hofstede states that we can distinguish three levels in programming the mind, which are: universal human nature (inherited) group specific culture (learned) personality (inherited and learned) The aim of Hofstede's early research (1980) was to globally analyse the differences in employee values. He collected data concerning culture from more than forty countries in the world, then he analysed them using statistical methods. Culture and the personality traits of the individuals are interrelated; they mutually and greatly affect each other. In the 1980s Hofstede identified four dimensions of culture as follows: power distance (PDI), uncertainty avoidance (UAI). individualism - collectivism (IND) masculinity - femininity (MAS) Later he added a fifth dimension to his model (Hofstede & Bond 1988), which was called long term orientation (LTO), then he introduced the sixth dimension, which is indulgence - restraint (Hofstede et al. 2010). This is the development of the 6D model of national culture, which is Hofstede's latest model for exploring the similarities and differences across national cultures (Hofstede 2017). The relative positions of the countries involved in the model on these six dimensions are expressed in a score on a 0-to-100-point scale. The higher value is intended to represent the stronger presence of the given dimension in the given country. Power distance (PDI) refers to the opinion about inequality among people and the modes of handling the problem: how much the members of a society who are excluded from power accept and expect the unequal distribution of power. In societies with high power distance not only the leaders but people who are excluded from power also support the system. In those countries the support of autocratic or oligarchic leadership is significant: power is concentrated in a narrow circle, paternalistic leadership style is expected, children are taught to obey and give respect at school and in the family. In contrast with this, low power distance societies (i.e. Scandinavian countries) show a democratic system in practice, they have pluralist governance, privileges are not accepted; children are considered to be equal in the family and at school as well (Hofstede et al. 1998). Based on the fact that Scandinavian countries are performing exceptionally well in sustainability rankings, i.e. Finland, Iceland, Sweden and Denmark are the four best performers in 2016 EPI rankings (Hsu et al. 2016), _it can be assumed that low power distance has a positive impact on environmentally conscious behaviour (Hypothesis 1)_. Uncertainty Avoidance Index (UAI) expresses the level of stress that unknown situations can cause in a society. It refers to how much people feel uncomfortable with uncertainty and ambiguity. Avoiding uncertainty is not the same as avoiding risk, since uncertainty avoidance means how a society tolerates ambiguous situations. 
In cultures exhibiting strong uncertainty avoidance (Latin America, Mediterranean countries and Japan) written rules, laws of behaviour are very important, the level of risk taking is low and conflict avoidance behaviour is typical. On the other hand, in cultures where the degree of uncertainty avoidance is low, uncertainty is regarded as a natural inherent of life and people consider unusual situations as opportunities rather than threats. The individualism versus collectivism (IND) dimension of culture refers to how much individuals integrate into the primary groups, to what extent they care about only themselves and/or their close family. It expresses how responsible people feel for the members of a wider community (for example relatives), who also expect support in return. In individualist cultures (e.g. the USA, Hungary, etc.) the degree of emotional attachment to groups is low, self-reliance, diversity and self-centredness are highly important. Everyone cares for himself/herself or the immediate family. Members of collective societies (e.g. South Asia, Korea, Japan and China) are fully identified with their community from their birth. Relationships within the community are strong, cohesion is high. Loyalty toward the extended family (i.e. grandparents and relatives), which protects its members in return, is unquestionable. Masculinity versus femininity (MAS) dimension refers to the emotional roles between men and women, as well as role-sharing of genders. In masculine societies (e.g. Hungary) masculine and feminine roles are clearly distinguished. In masculine societies "we live to work", so focusing on work and its exaggerated form, workabolism, is typical. The most important goals for people are to make achievements and to make money. The most important values in those countries are related to money and career. It is common for people to show their high status in society by owning recognised brands and luxury goods, which is in contrast with environmentally conscious behaviour. In feminine societies with modest, caring features (i.e. Scandinavian countries) protecting the environment and nature, caring for others, solidarity, the need for better quality of life and nurturing human relations are crucial (Hofstede and Arindell 1998). _For all these reasons we suppose that masculinisation of a society is against environmentally conscious behaviour (Hypothesis 2)_. Long-term orientation versus short term normative orientation dimension (LTO) signals that the focus of human behaviour is placed on the future or present/past. In this context, it is referred to as "(short term) normative versus (long term) pragmatic" (PRA). In the academic context, the terminology Monumentalism versus Flexhumility can also be used (Hofstede 2017). The most important distinguishing features of high level long term orientation, which is typical of China, Korea, Japan and some other Asian countries, are persistence, saving and shaming those who do not fulfil duties. People with such an attitude think that the most important events of life have not happened yet, they will occur in the future. The ability to change is important for them. It means that a "good" person adapts to the circumstances. This is true for traditions as well. Traditions must be adjusted to the circumstances. Such cultures are characterised by learning from others. In contrast, in western societies with short term orientation, people tend to think that the most important things are happening now or have already happened. 
Such cultures have sacred and invaluable traditions. People are proud of their own nation and do not want to change their traditions. Learning from others is not typical for them (Hofstede and Bond 1988). The indulgence (IND) versus restraint dimension focuses on how people satisfy or control the basic human drive for an enjoyable life. In societies exhibiting strong indulgence people are allowed to freely satisfy their desires in connection with enjoying life and having fun. On the other hand, in societies where the level of restraint is high, strict social norms regulate the gratification of needs. In restrained societies only a few people are happy; many of them feel they are vulnerable because things just happen to them. Spare time and comfort are not priorities in restrained countries. Fewer people do sports, sexual norms are stricter, the birth rate is lower but there are also fewer obese people than in cultures permitting an enjoyable life (Hofstede et al. 2010). Oneel and Mukherjee (2013) investigated the effects of national culture and human development on environmental health. Using multiple linear regression models, they found that cultural dimensions of individualism and uncertainty avoidance, as well as human development components of life expectancy at birth, education, and income, significantly influence environmental health performance. Cho et al. (2013) investigated the relationship between collectivism versus individualism as a cultural dimension and environmentally conscious behaviour by using the value-belief-norm model. They found that both horizontal collectivism - when the individual is the part of the group and there are no differences among the individuals within the group - and vertical individualism - when the individual is autonomous, independent and accepts differences - are important influential factors of perceived consumer effectiveness, which has a positive effect on environmental attitudes and finally results in higher levels of environmentally conscious commitment. Once and Almagtome (2014) made a cross-cultural comparison of the effect of national culture values on corporate environmental disclosure (CED). They found that two of Hofstede's national culture dimensions were linked to a higher degree of corporate environmental disclosure. In particular, a nation's high degree of individualism and long-term orientation were linked to high levels of corporate environmental disclosure. On the other hand, they found that one of Hofstede's national culture dimensions were related to a low degree of corporate environmental disclosure. ## Data and Methods In order to investigate pro-environmental behaviour an online survey was conducted in Hungary in 2017. A total of 442 respondents aged over 18 were included in the convenience sample with the snowball method. This means a 4.66% confidence interval at the 95% confidence level. As the original sample was not a representative sample, we used a commonly applied correction technique, the weighting adjustment, to make our sample representative according to variables such as sex and age. To explore the impact of culture on pro-environmental behaviour, we investigated the relationships between Hofstede's cultural dimensions (HCD) and environmentally conscious behaviour. In an attempt to measure pro-environmental behaviour, we used a revised version of the General Ecological Behaviour scale. 
The original measuring tool involves thirty-eight items in two sections representing different types of ecological and pro-social behaviour (Kaiser et al. 1999). Since we did not intend to investigate pro-social behaviour and some of the pro-environmental items proved to be irrelevant or outdated (Nagy, 2012), we deliberately left out eight variables concerning prosocial behaviour and three variables regarding ecological behaviour from the revised version of the GEB scale. However, we added ten ecological behaviour items, therefore the resulting pro-environmental behaviour scale (PEB scale) consists of thirty-seven items (Appendix 1). We measured the actual behaviour instead of behaviour intention by using dichotomous questions (yes/no responses). Negative behaviour items (item No: 5, 7, 10, 11, 12, 13, 16, 19, 21, 22, 23, 30, 35 and 36) were reversed in coding. Missing values were handled as 'no' responses. Behaviour difficulty of each PEB item was calculated by dividing the number of people behaving in an environmentally conscious way by the total number of respondents. We also considered the respondents' tendency to behave ecologically by considering the number of ecological behaviours they have carried out. In order to measure pro-environmental behaviour of individuals, we calculated the weighted sum of each item on the revised GEB scale. Difficulty parameters of pro-environmental behaviour items were used as weights. Then we divided the weighted sums by the total sum of difficulty parameters to transform it into a 0-1 scale of pro-environmental behaviour. Zero (0) score expresses that the individual does not behave environmentally consciously at all. On the contrary, if someone's behaviour is a hundred percent environmentally conscious, the PEB score will be the maximum (1). Since the multidimensional measuring scale of Hofstede's cultural dimensions was not available to us, we used a simplified, one-dimensional measurement approach, therefore we measured each cultural dimension with only one variable. We used Likert's five-level scale to measure Hofstede's cultural dimensions. We asked respondents to express to what extent they agree with the statements that can be seen in the "operationalization" column in Table 1. The lowest score (1) on the Likert scale signals that the respondent did not agree with the given statement at all, while the highest score (5) indicates that (s)he completely agreed. Then we transformed the scores that we measured on the Likert scale to a 0-100 scale to make them comparable with Hofstede's scores as the relative positions of the countries on all cultural dimensions are expressed in a score on a 0-to-100-point scale in Hofstede's 6D model. Scores below 50 points indicate the dominance of one of the values, whereas scores above 50 points refer to the dominance of the opposite value. Uncertainty resulting from this measurement transformation can be considered as a limitation of our study. Application of a scale between 0 and 100 points to measure Hofstede's cultural dimensions in future research can reduce this kind of measurement error. Because of the limitations discussed above, our results require further investigations in the future. ## Results & Discussion As a result of our investigations on pro-environmental behaviour, we found that most Hungarians do not behave environmentally consciously at all, i.e. they do not consider the environmental consequences of their behaviour. 
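For concreteness, the scoring procedure described in the Data and Methods section can be summarised in a short sketch: item difficulty is the share of respondents acting pro-environmentally on that item, an individual's PEB score is the difficulty-weighted share of ecological answers rescaled to 0-1, and Likert responses are mapped linearly to a 0-100 range. The synthetic data and the exact rescaling formula below are illustrative assumptions, since the paper does not spell out the transformation.

```python
# Illustrative sketch of the PEB scoring and the Likert-to-0-100 rescaling.
# The toy data, item count, and variable names are assumptions, not survey data.
import numpy as np

rng = np.random.default_rng(1)
n_respondents, n_items = 442, 37
# 1 = ecological answer; negative items are assumed already reverse-coded,
# and missing values are treated as 0 ("no").
responses = rng.integers(0, 2, size=(n_respondents, n_items))

difficulty = responses.mean(axis=0)            # share behaving ecologically per item
peb = (responses * difficulty).sum(axis=1) / difficulty.sum()   # weighted score on a 0-1 scale

# One natural linear mapping of Likert (1-5) scores to Hofstede's 0-100 range.
likert = rng.integers(1, 6, size=n_respondents)
score_0_100 = (likert - 1) / 4 * 100

print(f"mean PEB = {peb.mean():.3f}, mean 0-100 dimension score = {score_0_100.mean():.1f}")
```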
Figure 1 shows the percentage distribution of the population in Hungary in terms of the level of pro-environmental behaviour. Axis x shows the level of PEB, while we can see the percentage distribution of respondents on axis y. The mean of PEB in Hungary is only 0.445 on the 0-1 PEB scale, therefore it can be concluded that the level of pro-environmental behaviour is moderately low. Our result confirms previous findings in the literature (Hofmeister-Toth et al. 2011; Nagy 2005, 2012) that pro-environmental behaviour is not typical of Hungarians. \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **Cultural Dimension** & **Operationalization** & **Short Form** & **Means (\(\overline{\textbf{x}}\))** & **Pearson correlation (\(\textbf{r}\))** & **Relationship with Pro Environmental Behaviour** \\ \hline **Masculinity Versus** & Competition, success and performance are more important than caring for others and the quality of life. & MAS & 1.67 & -0.084 & not significant \\ \hline **Uncertainty Avoidance Index** & Change and unknown situations always involve a lot more threats than opportunities. & UAI & 2.31 & 0.020 & not significant \\ \hline **Power Distance** & Power is distributed unequally in the society, i.e. there are the rich and the poor, and it is completely acceptable for me. & PDI & 2.43 & -156** & weak, negative \\ \hline **Indulgence Versus Restraint** & Social norms and expectations must always be met and we must control our desires, even if it makes our life less enjoyable. & IND & 2.55 & -0.062 & not significant \\ \hline **Individualism Versus Collectivism** & Everyone has to take care of themselves, we cannot expect help from others. & IDV & 2.73 & -163** & weak, negative \\ \hline **Long Term Orientation Versus Short Term Normative Orientation** & We must rely on the past only to the extent that it serves the interests of the future. & LTO & 3.24 & -0.069 & not significant \\ \hline \end{tabular} Notes: **Correlation is significant at 0.01 level (2-tale) Source: Authors’ own research \end{table} Table 1: The relationship between Hofstede’s cultural dimensions and pro- environmental behaviour Figure 1: Percentage distribution of _pro-environmental behaviour (PEB) in Hungary_ Figure 2: Hofstede’s scores, our results and the gap between them for Hofstede’s cultural dimensions in Hungary Figure 2 highlights the gap between the country scores of Hungary in Hofstede's 6D model and our results. Hofstede's scores suggest that Hungary is an extremely masculine country (MAS=88), characterized by a high degree of uncertainty avoidance (UAI=82) and high level of individualism (IDV=80). The relatively high score of long term orientation (LTO=58) suggests that the Hungarian society is rather pragmatic. Power distance index (PDI=46) is relatively low, which indicates that people in Hungary are slightly against the unequal distribution of the power. Since the value of indulgence (IND=31) is very low, we can conclude that restraint is characteristic of Hungarians to a large extent. It means that many of them may think that fulfilling their desires is against social norms and an enjoyable life is "something wrong". The most important gap - 55 points - between our results and Hofstede's scores was found in terms of the masculinity-femininity dimension, where our score (MAS=33) was significantly lower than Hofstede's score (MAS=88). 
Our result suggests that Hungary is a feminine country where taking care of others and quality of life are dominant values. In feminine societies a focus on quality of life is top priority and only very few people want to stand out from the crowd. While in masculine countries people are driven to be the best, in feminine cultures it is important for people to like what they do and find it interesting. The second greatest gap (36 points) was found in terms of the uncertainty avoidance index. We measured only 46 points in contrast to Hofstede's 82 points. Another study by Neumann-Bodi et al. (2008) yielded a score of 64 points. When the UAI score is below 50 points, people are not afraid of changes and these are seen as opportunities rather than threats. In countries with low uncertainty avoidance, people feel they can shape the future to some extent and it does not just happen to them. In societies accepting uncertainty there is a willingness to accept new ideas, to try new products and entrepreneurial spirit of people is also higher. These cultures require fewer rules and people show their emotions less expressively. The third largest gap (25 points) occurred in terms of individualism (IDV). We measured only 55 points instead of the 80 points that can be found in the 6D model. It means that Hungarian society is less individualist according to our results. Our findings are not in line with those of Neumann-Bodi et al. (2008), who found a very high level of individualism in Hungary. However, it must be highlighted that our result is consistent with that of Hofstede's in regard to the finding that the Hungarian society is not collectivist. In Hungary, people take care of their immediate family and only loose social ties exist. Self-centredness is also characteristic in individualist societies. People need a private sphere and relationships are based on obtaining mutual benefits. As far as indulgence and restraint are concerned, we measured much higher scores (51 points) than Hofstede did (31 points). It means that Hungarians are not so restrained as it would appear based on Hofstede's scores. We enjoy our life much more and we live it more impulsively and people do not tend to be so cynical and pessimistic. As for the other dimensions, no significant differences can be found between our results and those of Hofstede's. The Long Term Orientation score that we measured (65 points) was only slightly higher than Hofstede's 58 points. The above results suggest that Hungary is a pragmatic country where people are convinced that truth largely depends on the specific situation, context and time. In Hungary traditions are transformed according to the changing situations and people fight persistently to achieve results. As for Power Distance Index (PDI), the difference was insignificant, only 3 points, as we measured 49 points instead of 46 points that can be found in the 6D model. Moderately low scores for power distance mean that Hungarians tend to favour independence and do not like subordination and control. The majority of Hungarians believe in equal rights. To test our hypotheses and to investigate the impact of Hofstede's cultural dimensions (HCD) on pro-environmental behaviour (PEB), we used the Pearson correlation. The results of the significance analysis as well as the correlation coefficients suggest that only two cultural dimensions have a significant impact on environmentally conscious behaviour. However, in both cases the relationship is only weak. _A higher level of individualism, i.e. 
the individualisation of the society, is slightly against pro-environmental behaviour (r=-0.163) and so is the higher level of power distance (r=-0.156)._ We also found that the other cultural dimensions in Hofstede's 6D model have no significant effect on environmentally conscious behaviour (Table 1). The results of stepwise linear regression support that _Hofstede's cultural dimensions have only a very weak direct influence on pro-environmental behaviour._ It can be assumed that HCD affects PEB indirectly through other factors (i.e. personal values and attitudes) as the regression model has only very low explanatory power (Table 2). With model 2, only 3.4 percent of the variation in the dependent variable (pro-environmental behaviour) can be explained using the independent variables (Individualism and Power Distance Index). Significance values in Table 3 indicate that both models are significant. As Table 4 shows, individualism has a weak negative effect on pro-environmental behaviour (\(\beta\)=-0.126), while higher Power Distance is also against pro-environmental behaviour (\(\beta\)=-0.111). B scores in Table 4 suggest that respondents who score 1 point higher on Individualism will - on average - score 0.16 points lower on the PEB scale, while people who score 1 point higher on Power Distance Index will - on average - score 0.13 points lower on the PEB scale. Based on the above findings, it can be concluded that _in collectivist societies with low power distance the probability of pro-environmental behaviour is higher_. Both the results of Pearson correlation and the linear regression support our first hypothesis, as we found that _high power distance index has a negative impact on pro-environmental behaviour_. However, the second hypothesis is not supported, since we found no significant relationship between the feminine/masculine nature of a society and the level of PEB. ## Conclusions This research was carried out in order to analyse how culture influences pro-environmental behaviour. Firstly, \begin{table} \begin{tabular}{|l l|r|r|r|r|r|} \multicolumn{8}{c}{**Coefficients***} \\ \hline & & \multicolumn{2}{c|}{Unstandardized Coefficients} & \multicolumn{2}{c|}{Standardized Coefficients} & \multicolumn{1}{c|}{} \\ \cline{3-7} Model & \multicolumn{1}{c|}{B} & \multicolumn{1}{c|}{Std. Error} & \multicolumn{1}{c|}{Beta} & \multicolumn{1}{c|}{t} & \multicolumn{1}{c|}{Sig.} \\ \hline 1 & (Constant) &.495 &.018 & & 27.412 &.000 \\ & IDV & -.021 &.006 & -.167 & -3.571 &.000 \\ \hline 2 & (Constant) &.513 &.020 & & 26.042 &.000 \\ & IDV & -.016 &.006 & -.126 & -2.513 &.012 \\ & PDI & -.013 &.006 & -.111 & -2.214 &.027 \\ \hline \multicolumn{8}{l}{a. Dependent Variable: PEB (0-1)} \\ \multicolumn{8}{l}{Source: Authors’ own research} \\ \end{tabular} \end{table} Table 4: Regression - Coefficients \begin{table} \begin{tabular}{|l l|r|r|r|r|} \multicolumn{8}{c}{**ANOVA***} \\ \hline Model & & Sum of Squares & df & Mean Square & F & Significance \\ \hline 1 & Regression &.269 & 1 &.269 & 12.749 &.000\({}^{\text{b}}\) \\ & Residual & 9.313 & 442 &.021 & & \\ & Total & 9.582 & 443 & & \\ \hline 2 & Regression &.371 & 2 &.185 & 8.881 &.000\({}^{\text{c}}\) \\ & Residual & 9.211 & 441 &.021 & & \\ & Total & 9.582 & 443 & & & \\ \hline \multicolumn{8}{l}{a. Dependent Variable: PEB (0-1)} \\ \multicolumn{8}{l}{b. Predictors: (Constant), IDV} \\ \multicolumn{8}{l}{c. 
Predictors: (Constant), IDV, PDI} \\ \multicolumn{8}{l}{Source: Authors’ own research} \\ \end{tabular} \end{table} Table 2: Regression model \begin{table} \begin{tabular}{|l|r|r|r|r|r|} \multicolumn{8}{c}{**Model Summary**} \\ \hline Model & R & R Square & Adjusted R Square & Std. Error of the Estimate \\ \hline 1 &.167\({}^{\text{a}}\) &.028 &.026 &.14516 \\ 2 &.197\({}^{\text{b}}\) &.039 &.034 &.14452 \\ \hline \multicolumn{8}{l}{a. Predictors: (Constant), IDV} \\ \multicolumn{8}{l}{b. Predictors: (Constant), IDV, PDI} \\ \multicolumn{8}{l}{Source: Authors’ own research} \\ \end{tabular} \end{table} Table 3: ANOVA table we investigated the level of environmental consciousness in Hungary. We measured actual behaviour instead of behaviour intention and found that the level of pro-environmental behaviour is moderately low. This means that corrective actions are needed to increase environmentally consciousness. However, changing the culture of the country would not be sufficient, as the evidence from this study suggests that Hofstede's cultural dimensions only slightly influence pro-environmental behaviour. Among the significant cultural dimensions, only individualism and power distance have a weak negative impact on environmentally conscious behaviour. Yet, for these reasons, if we intend to make a country greener, collectivization of the society - or at least significant moderation of the level of individualism - and/or lowering the level of power distance are required. The results also suggest that people living in countries with low individualism and power distance index (e.g. Costa Rica) will behave in a more environmentally conscious way. The 2014 Global Green Economy Index also confirms this interpretation, as Costa Rica recorded an impressive result, ranking 3rd behind Sweden and Norway on performance and in the top 15 for perceptions overall (GGEI, 2014). "The described article was carried out as part of the EFOP-3.6.1- 16-2016-00011 "Younger and Renewing University - Innovative Knowledge City - institutional development of the University of Miskole aiming at intelligent specialisation" project implemented in the framework of the Szechenyi 2020 program. The realization of this project is supported by the European Union, co-financed by the European Social Fund." "A cikkben ismertettet kutato munka az EFOP-3.6.1-16-2016-00011 jelu,,Fiatalodo es Megujulo Egyetem - Innovativ Tudasvaros - a Miskolci Egyetem intelligens szakosodast szlogala intezmenyi fejlesztese" projekt reszekent - a Szechenyi 2020 kereteben - az Europai Unio tamogatasaval, az Europai Szocialis Alap tarsfinanszirozasival valosil meg"
2309.07522
A survey of complex generalized weighing matrices and a construction of quantum error-correcting codes
Some combinatorial designs, such as Hadamard matrices, have been extensively researched and are familiar to readers across the spectrum of Science and Engineering. They arise in diverse fields such as cryptography, communication theory, and quantum computing. Objects like this also lend themselves to compelling mathematics problems, such as the Hadamard conjecture. However, complex generalized weighing matrices, which generalize Hadamard matrices, have not received anything like the same level of scrutiny. Motivated by an application to the construction of quantum error-correcting codes, which we outline in the latter sections of this paper, we survey the existing literature on complex generalized weighing matrices. We discuss and extend upon the known existence conditions and constructions, and compile known existence results for small parameters. Some interesting quantum codes are constructed to demonstrate their value.
Ronan Egan
2023-09-14T08:47:38Z
http://arxiv.org/abs/2309.07522v2
A survey of complex generalized weighing matrices and a construction of quantum error-correcting codes ###### Abstract Some combinatorial designs, such as Hadamard matrices, have been extensively researched and are familiar to readers across the spectrum of Science and Engineering. They arise in diverse fields such as cryptography, communication theory, and quantum computing. Objects like this also lend themselves to compelling mathematics problems, such as the Hadamard conjecture. However, complex generalized weighing matrices, which generalize Hadamard matrices, have not received anything like the same level of scrutiny. Motivated by an application to the construction of quantum error-correcting codes, which we outline in the latter sections of this paper, we survey the existing literature on complex generalized weighing matrices. We discuss and extend upon the known existence conditions and constructions, and compile known existence results for small parameters. Some interesting quantum codes are constructed to demonstrate their value. ## 1 Introduction Combinatorial designs are finite objects that obey some kind of combinatorial condition and they take many forms. Many of them are comprehensively surveyed in the Handbook of Combinatorial Designs [14], to which we refer the reader for more information on almost every design mentioned in this paper. Some designs, such as Hadamard matrices due either to their applications or their appearance in other fields, have been extensively researched due to applications in diverse fields such as cryptography, communication theory, and quantum computing [38]. Objects like this also lend themselves to compelling mathematics problems - the Hadamard conjecture proposing the existence of a Hadamard matrix of order \(4n\) for all \(n\in\mathbb{N}\) has captured the imagination of numerous researchers since it was posed by Paley almost a century ago [48]. Other designs are well known only to researchers in closely related fields. Complex generalized weighing matrices, which include Hadamard matrices as a special case, are the subject of this survey. We focus entirely on the general case, and do not survey the extensive literature on the special cases here. This addresses what we feel is a notable gap in the literature, as complex generalised weighing matrices do not appear to have been surveyed elsewhere. They are referenced as an example of a pairwise combinatorial design in de Launey and Flannery's monograph on Algebraic Design Theory [25], but are not analysed in any detail except in the context of results that apply to large families of pairwise combinatorial designs. To begin, we give the necessary definitions and outline our notation. Thoughout, \(k\) is a positive integer and \(\zeta_{k}=e^{\frac{2\pi\sqrt{-1}}{k}}\) is a primitive \(k^{\rm th}\) root of unity. Let \(\langle\zeta_{k}\rangle=\{\zeta_{k}^{j}\ :\ 0\leq j\leq k-1\}\) be the set of all \(k^{\rm th}\) roots of unity, and let \(\mathcal{U}_{k}=\{0\}\cup\langle\zeta_{k}\rangle\). We denote the set of all \(n\times n\) matrices over the complex numbers by \(\mathcal{M}_{n}(\mathbb{C})\), and the subset of matrices with entries in \(\mathcal{U}_{k}\) by \(\mathcal{M}_{n}(k)\). A monomial matrix is one with exactly one non-zero entry in each row and column. The subset of monomial matrices in \(\mathcal{M}_{n}(k)\) is denoted by \(\mathrm{Mon}_{n}(k)\). 
More generally, the set of all \(n\times n\) matrices with entries in an alphabet \(\mathcal{A}\) containing a zero is denoted by \(\mathcal{M}_{n}(\mathcal{A})\), and the set of monomial matrices therein by \(\mathrm{Mon}_{n}(\mathcal{A})\). Given a matrix \(M\in\mathcal{M}_{n}(\mathcal{A})\), the matrix \(S\) obtained from \(M\) by replacing each non-zero entry with \(1\) is called the _support matrix_ of \(M\). If \(S\) supports \(M\), we also say that \(S\)_lifts_ to \(M\). Given an alphabet \(\mathcal{A}\) with the property that \(a^{-1}\in\mathcal{A}\) for any non-zero \(a\in\mathcal{A}\), we let \(*\) be the transposition acting on \(\mathcal{A}\) such that \(a^{*}=a^{-1}\) if \(a\neq 0\), and \(0^{*}=0\). Typically, an alphabet will either be a field, or the set \(\mathcal{U}_{k}\). By a common abuse of notation, when \(M=[m_{ij}]\in\mathcal{M}_{n}(\mathcal{A})\), we write \(M^{*}=[m^{*}_{ji}]\) for the Hermitian transpose of \(M\). For a complex number \(z\), the complex conjugate is denoted by \(\overline{z}\). We denote the ring of integers modulo \(k\) by \(\mathbb{Z}_{k}\). When \(k=p\) is prime, the transposition \(*\) acts on \(\mathbb{Z}_{p}\) so that \(a^{*}\) is the multiplicative inverse of \(a\), for all \(a\neq 0\). When \(q=p^{r}\) for some prime \(p\) and positive integer \(r\), we denote by \(\mathbb{F}_{q}\), the finite field of order \(q\). Rows and columns of \(n\times n\) matrices or sequences of length \(n\) are typically indexed by the integers \(0,1,\ldots,n-1\). A circulant matrix is an \(n\times n\) matrix \(C\) which is generated by its first row \([c_{0},c_{1},\ldots,c_{n-1}]\), and is denoted by \(C=\mathrm{circ}([c_{0},c_{1},\ldots,c_{n-1}])\). Each row is obtained by shifting the previous row forward cyclically. That is, \(C=[c_{j-i}]_{i,j\in\mathbb{Z}_{n}}\). **Definition 1.1**.: An \(n\times n\) matrix \(W\) with entries in \(\mathcal{U}_{k}\) is a _complex generalized weighing matrix_ of _weight_\(w\) if \(WW^{*}=wI_{n}\), where \(W^{*}\) is the complex conjugate transpose of \(W\), and \(I_{n}\) denotes the \(n\times n\) identity matrix. The set of all such matrices is denoted by \(\mathrm{CGW}(n,w;k)\). We will abbreviate to CGW when parameters are unspecified. An \(n\times n\) matrix \(W\) is a \(\mathrm{CGW}(n,w;k)\) if each row of \(W\) has exactly \(w\) non-zero entries in \(\langle\zeta_{k}\rangle\), and the Hermitian inner product of any two distinct rows in zero. If \(w=n\), then \(W\) is a _Butson Hadamard matrix_, the set of which is denoted by \(\mathrm{BH}(n,k)\). If \(k=2\), then \(W\) is a _weighing matrix_, the set of which is denoted by \(W(n,w)\). If both \(k=2\) and \(w=n\), then \(W\) is a _Hadamard matrix_, the set of which is denoted by \(\mathrm{H}(n)\). Hadamard matrices in particular have been studied extensively for over a century, for detailed expositions we refer the reader to any of [1, 7, 14, 17, 35, 38, 51, 53]. Both weighing matrices and Butson Hadamard matrices have been studied frequently, albeit far less than the Hadamard matrices that comprise their intersection. Weighing matrices feature prominently in works of Craigen, Seberry, and their coauthors; see any of [15, 16, 18, 22, 20, 51, 52, 53] for examples. For a general background into Butson Hadamard matrices, which are often called generalized Hadamard matrices by different authors, see any of [9, 25, 27, 29, 30, 43, 55], and for a comprehensive survey and an up to date catalog, see [56] and [8] respectively. 
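As a quick sanity check of Definition 1.1, the following sketch verifies the defining identity \(WW^{*}=wI_{n}\) for two small examples: a real weighing matrix \(W(4,3)\) and the Fourier matrix \(F_{3}\), which is a \(\mathrm{BH}(3,3)\). The use of numpy and the particular example matrices are choices made here for illustration.

```python
# Verify the defining property W W* = w I_n of Definition 1.1 for two small
# examples: a weighing matrix W(4,3) (a CGW(4,3;2)) and the Fourier matrix
# F_3 (a BH(3,3) = CGW(3,3;3)). The example matrices are illustrative choices.
import numpy as np

def is_cgw(W, w):
    """Check that W has w nonzero unimodular entries per row and W W* = w I."""
    n = W.shape[0]
    nonzero = ~np.isclose(W, 0)
    unimodular = np.all(np.isclose(np.abs(W[nonzero]), 1))
    weights_ok = np.all(nonzero.sum(axis=1) == w)
    gram_ok = np.allclose(W @ W.conj().T, w * np.eye(n))
    return unimodular and weights_ok and gram_ok

W43 = np.array([[0, 1, 1, 1],
                [1, 0, 1, -1],
                [1, -1, 0, 1],
                [1, 1, -1, 0]], dtype=complex)

zeta3 = np.exp(2j * np.pi / 3)
F3 = np.array([[zeta3 ** (i * j) for j in range(3)] for i in range(3)])

print(is_cgw(W43, 3))   # True: a CGW(4,3;2), i.e. a weighing matrix W(4,3)
print(is_cgw(F3, 3))    # True: a CGW(3,3;3), i.e. a Butson Hadamard BH(3,3)
```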
Despite being the superset containing all of these objects, CGWs have, in their own right, received very little scrutiny outside of these special cases. The first significant work we note is due to Berman [5, 6]. Berman's constructions, which we will discuss in this paper, reveal several connections to finite geometry and finite fields, and demonstrate that these objects merit study outside of the Butson Hadamard or real weighing matrix cases. Around the same time, Seberry [50] and Seberry and Whiteman [52] considered the case where \(k=4\) due to a relationship to orthogonal designs. We document these in Section 3. Only sporadic work on the topic has since appeared, perhaps most significantly due to Craigen and de Launey [18], who studied CGWs that are invariant under a regular group action. To our knowledge, there has been no recent comprehensive survey collating up to date results on CGWs, that do not qualify as being either real weighing matrices, or Butson Hadamard matrices. In Section 2 we discuss and extend upon the known existence conditions. In Section 3 we describe the known constructions, beginning with direct constructions and then recursive constructions. In some cases, known constructions of objects like weighing matrices are generalized. In Section 4 we introduce a construction of Hermitian self-orthogonal \(q\)-ary codes from appropriate CGWs and describe the subsequent approach to building quantum codes. This application motivates our survey. In Section 5 we report on early computational results from this construction. Finally, an appendix follows the paper containing tables collating the information of Sections 2 and 3, giving existence or nonexistence of \(\mathrm{CGW}(n,w,k)\), if known, for all \(1\leq n\leq 15\), \(1\leq w\leq n\) and \(2\leq k\leq 6\). ## 2 Existence conditions Existence conditions for \(\mathrm{CGW}(n,w;k)\) tend to be number theoretical, but we will combine these with techniques from design theory. A _generalized weighing matrix_\(W\) is an \(n\times n\) matrix with \(w\) non-zero entries in each row and column coming from a finite group \(G\) such that \(WW^{*}=wI_{n}\) over \(\mathbb{Z}[G]/\mathbb{Z}G\). Because generalized weighing matrices over groups of prime order \(k\) coincide with CGWs over \(\mathcal{U}_{k}\) (see, e.g., [29, Lemma 2.2]), non-existence results for generalized weighing matrices over groups of prime order will apply. ### Equivalence The group action of \(\mathrm{Mon}_{n}^{2}(k)\) on a matrix in \(M\in\mathcal{M}_{n}(k)\) is defined by \(M(P,Q)=PMQ^{*}\). This action stabilizes the set \(\mathrm{CGW}(n,w;k)\). The orbit of \(W\) under this action is the _equivalence class_ of \(W\). That is, any matrix \(W^{\prime}\) obtainable from \(W\) by permuting rows (respectively columns) or multiplying rows (respectively columns) by an element of \(\langle\zeta_{k}\rangle\) is also an element of \(\mathrm{CGW}(n,w;k)\), and is said to be _equivalent_ to \(W\). More succinctly, two matrices \(W\) and \(W^{\prime}\) are equivalent if \[W^{\prime}=PWQ^{*}\] for matrices \(P,Q\in\operatorname{Mon}_{k}(n)\), and we write \(W\equiv W^{\prime}\). It is typical to study the set \(\operatorname{CGW}(n,w;k)\) through the lens of equivalence. It is a particularly useful tool when proving the non-existence of an element of \(\operatorname{CGW}(n,w;k)\). Complete classifications up to equivalence, even for reasonably small \(n\), are computationally difficult. 
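As a concrete illustration of the action, the short sketch below (Python with numpy; `random_monomial` is an ad hoc helper, not a library routine) applies a random pair \((P,Q)\) of monomial matrices over \(\langle\zeta_{3}\rangle\) to a \(\mathrm{BH}(3,3)\) and confirms that the image \(PWQ^{*}\) again satisfies the defining equation, which is why the action stabilizes \(\mathrm{CGW}(n,w;k)\).

```python
import numpy as np

rng = np.random.default_rng(0)

def random_monomial(n, k, rng):
    """A random element of Mon_n(k): exactly one non-zero k-th root of
    unity in each row and column (a phased permutation matrix)."""
    P = np.zeros((n, n), dtype=complex)
    perm = rng.permutation(n)
    phases = np.exp(2j * np.pi * rng.integers(0, k, size=n) / k)
    P[np.arange(n), perm] = phases
    return P

zeta3 = np.exp(2j * np.pi / 3)
W = np.array([[zeta3 ** (i * j) for j in range(3)] for i in range(3)])   # a BH(3,3)

P = random_monomial(3, 3, rng)
Q = random_monomial(3, 3, rng)
W2 = P @ W @ Q.conj().T                  # the action W(P,Q) = P W Q^*
assert np.allclose(W2 @ W2.conj().T, 3 * np.eye(3))   # still a CGW(3,3;3)
```

The assertion holds because \(Q^{*}Q=I\) and \(PP^{*}=I\) for any monomial matrices with unimodular non-zero entries, so \((PWQ^{*})(PWQ^{*})^{*}=P(WW^{*})P^{*}=wI_{n}\).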
Equivalence classes of Hadamard matrices of order up to \(32\) have been classified, but the number of classes appears to grow extremely rapidly with \(n\), and the problem becomes computationally infeasible very quickly. Classifying Butson matrices or weighing matrices is even more difficult. Harada et al. [36] used coding theory techniques to classify \(\operatorname{BH}(18,3)\) up to equivalence. The equivalence classes of \(\operatorname{BH}(n,4)\) for \(n\in\{10,12,14\}\) were determined by Lampio, Szöllősi and Östergård in [43], and they later classified \(\operatorname{BH}(21,3)\), \(\operatorname{BH}(16,4)\) and \(\operatorname{BH}(14,6)\) in [42], using computational methods. Other classifications restrict to matrices with extra properties. Group developed and cocyclic matrices are examples of special cases that have a lot of extra structure, reducing the search space significantly. The cocyclic equivalence classes in \(\operatorname{BH}(n,p)\) were classified for all \(np\leq 100\) where \(p\) is an odd prime in [29], and the cocyclic real Hadamard matrices have been classified at all orders up to \(36\) in [47], and at orders \(44\) and \(52\) in [4]. ### Number theoretical conditions The inner product of any pair of distinct rows or columns of a CGW must equal zero, hence the main number theoretical restrictions follow from a Theorem of Lam and Leung on vanishing sums of roots of unity [41]. **Theorem 2.1**.: _If \(\sum_{j=0}^{k-1}c_{j}\zeta_{k}^{j}=0\) for non-negative integers \(c_{0},\ldots,c_{k-1}\), and \(p_{1},\ldots,p_{r}\) are the primes dividing \(k\), then \(\sum_{j=0}^{k-1}c_{j}=\sum_{\ell=1}^{r}d_{\ell}p_{\ell}\) where \(d_{1},\ldots,d_{r}\) are non-negative integers._ A special case of this is the well known fact that if \(\sum_{i=0}^{p-1}c_{i}\zeta_{p}^{i}=0\) for a prime \(p\), then \(\sum_{i=0}^{p-1}c_{i}=dp\) for some non-negative integer \(d\), and consequently \(c_{0}=c_{1}=\cdots=c_{p-1}=d\). Hence, a \(\operatorname{CGW}(n,w;p^{r})\) exists only if the non-zero entries in any two distinct rows or columns coincide in a multiple of \(p\) positions. The question of existence is more complicated when \(k\) is composite, as Theorem 2.1 is a less significant barrier. When \(k=6\), it is no restriction at all, but some further conditions on their existence are described below. Formulating precise existence conditions in this case is far from straightforward - there is a known element of \(\operatorname{BH}(7,6)\) so the order can be coprime to \(6\), but the set \(\operatorname{BH}(5,6)\) is empty. Perhaps the most general nonexistence results follow from a condition on the determinant of the Gram matrix. **Lemma 2.2**.: _Suppose that there exists \(W\in\operatorname{CGW}(n,w;k)\). Then \(|\mathrm{det}(W)|^{2}=w^{n}\)._ Proof.: If \(W\in\operatorname{CGW}(n,w;k)\) then \(WW^{*}=wI_{n}\). It follows that \[|\mathrm{det}(W)|^{2}=\mathrm{det}(W)\mathrm{det}(W^{*})=\mathrm{det}(WW^{*})=w^{n}.\] Lemma 2.2 implies the well known condition that a \(\operatorname{CGW}(n,w;2)\) with \(n\) odd exists only if \(w\) is a square. The Sum of Two Squares Theorem states that \(w\) is expressible as the sum of two squares if and only if the square free part of \(w\) is not divisible by any prime \(p\equiv 3\mod 4\). It follows that a \(\operatorname{CGW}(n,w;4)\) with \(n\) odd exists only if \(w\) is the sum of two integer squares. We will see Lemma 2.2 applied again later in this section. 
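Both consequences of Lemma 2.2 noted above are purely arithmetic, so candidate weights can be screened mechanically. The following sketch (plain Python; `factorize` and `sum_of_two_squares` are ad hoc helper names) flags the weights \(w\leq 15\) that are excluded for odd \(n\) when \(k=2\) or \(k=4\).

```python
import math

def factorize(n):
    """Trial-division factorization: returns a {prime: exponent} dictionary."""
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def sum_of_two_squares(w):
    """Sum of Two Squares Theorem: w = a^2 + b^2 is solvable in integers iff
    every prime p = 3 (mod 4) divides w to an even power."""
    return all(e % 2 == 0 for p, e in factorize(w).items() if p % 4 == 3)

for w in range(1, 16):
    if math.isqrt(w) ** 2 != w:
        print(f"w = {w:2d}: no CGW(n, w; 2) with n odd (w is not a square)")
    if not sum_of_two_squares(w):
        print(f"w = {w:2d}: no CGW(n, w; 4) with n odd (w is not a sum of two squares)")
```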
Many of the better known, and easiest to apply non-existence results apply when \(k\) is prime. In the case of real weighing matrices, many of the strongest non-existence conditions were described in the 1970's by Geramita, Geramita and Seberry Wallis [31]. When \(k\neq 2\), some of these results have been generalized, and extended. Some of the best known conditions when \(k\) is an odd prime are due to de Launey [24], who studied generalized weighing matrices. For consistency, we present the relevant results in the language of CGWs. **Theorem 2.3** (cf. Theorem 1.2, [24]).: _If there exists a \(\operatorname{CGW}(n,w;k)\) with \(n\neq w\) and \(k\) a prime, then the following must hold:_ 1. \(w(w-1)\equiv 0\bmod k\)_._ 2. \((n-w)^{2}-(n-w)\geq\sigma(n-1)\) _where_ \(0\leq\sigma\leq k-1\) _and_ \(\sigma\equiv n-2w\bmod k\)_._ 3. _If_ \(n\) _is odd and_ \(k=2\)_, then_ \(w\) _is a square._ **Theorem 2.4** (cf. Theorem 5.1, [24]).: _Suppose there exists a \(\operatorname{CGW}(n,w;k)\) with \(n\) odd and \(k\) a prime. Suppose that \(m\not\equiv 0\bmod k\) is an integer dividing the square free part of \(w\). Then the order of \(m\) modulo \(k\) is odd._ Adhering to the notation of Theorem 2.4, if \(G=\langle\zeta_{k}\rangle\) for prime \(k>2\), then necessarily \(p=k\). If, for example, \(n=w=15\) and \(p=5\), then \(n\) is square free, so the existence of a \(\operatorname{BH}(n,5)\) requires that \(m=3\) is of odd order modulo \(5\), which is false. In the same paper, the possibility of a \(\operatorname{CGW}(19,10;5)\) is eliminated. These Theorems are effective for eliminating the possibility of finding elements of \(\operatorname{CGW}(n,w;k)\) when \(k\) is an odd prime, but it generally fails to extend to composite \(k\). However, for Butson matrices, i.e., matrices of full weight, these results were partially extended by Winterhof [57] using number theoretic techniques and the restriction of Lemma 2.2 to include certain cases where \(k=p^{r}\) or \(k=2p^{r}\) for a prime \(p\equiv 3\bmod 4\). The main result is the following. **Theorem 2.5** (Theorem 5, [57]).: _Let \(k=p^{r}\) be a prime power where \(p\equiv 3\bmod 4\), and let \(n=p^{\ell}a^{2}m\) be odd where \(p\) does not divide \(m\), \(m\) is square free. Then there is no \(\operatorname{BH}(n,k)\) or \(\operatorname{BH}(n,2k)\) if there is a prime \(q\mid m\) such that \(q\) is a non-quadratic residue modulo \(p\)._ Of the various implications of Winterhof's Theorem, perhaps the most frequently cited is a restriction eliminating the existence of a \(\operatorname{BH}(3^{\ell}p^{d},6)\) where \(p\equiv 5\bmod 6\) is prime and \(d\) is odd, which include cases such as \(\operatorname{BH}(5,6)\) and \(\operatorname{BH}(15,6)\). We can similarly use Lemma 2.2 to eliminate existence of CGWs. The following is essentially the same argument as the proof of [55, Corollary 1.4.6]. **Proposition 2.6**.: _There is no \(\operatorname{CGW}(n,w;6)\) when \(n\) is odd and \(w\equiv 2\mod 3\)._ Proof.: Suppose \(W\in\operatorname{CGW}(n,w;6)\). Then \(|\det(W)|^{2}=w^{n}\). Since any element of \(\mathcal{U}_{6}\) can be written in the form \(a+b\zeta_{3}\) for integers \(a\) and \(b\), it follows that there are integers \(a\) and \(b\) such that \[w^{n}=|\det(W)|^{2}=|a+b\zeta_{3}|^{2}=a^{2}+b^{2}-ab.\] It is not possible that \(a^{2}+b^{2}-ab\equiv 2\mod 3\), and so it cannot be that \(n\) is odd and \(w\equiv 2\mod 3\) This result motivated the following Propositions. 
The proofs are similar, but none are omitted as there are subtle differences. **Proposition 2.7**.: _There is no \(\mathrm{CGW}(n,w;6)\) when \(n\) is odd and \(w\equiv 2\mod 4\)._ Proof.: As before, if such a matrix exists we require that there are integers \(a\) and \(b\) such that \[w^{n}=|\mathrm{det}(W)|^{2}=|a+b\zeta_{3}|^{2}=a^{2}+b^{2}-ab.\] First observe that \(a^{2}+b^{2}-ab\equiv 0\mod 4\) only if both \(a\) and \(b\) are even. Let \(w=2m\) for some odd \(m\). We show that there is no solution to \[(2m)^{n}=a^{2}+b^{2}-ab,\] or equivalently, \[(2m)^{n}-ab=(a-b)^{2}.\] Now let \(2^{t}\) be the largest power of \(2\) dividing both \(a\) and \(b\). Then we require a solution to \[\frac{(2m)^{n}}{2^{2t}}-\frac{ab}{2^{2t}}=\frac{(a-b)^{2}}{2^{2t}}. \tag{1}\] We split the remainder of the proof into two cases. First suppose that one of \(a\) or \(b\) is divisible by \(2^{t+1}\), but not both. Then \(\frac{a-b}{2^{t}}\) is odd, and so the right hand side of Equation (1) is odd. However, on the left hand side, the term \(\frac{ab}{2^{2t}}\) is even, and the term \(\frac{(2m)^{n}}{2^{2t}}\) is either an even integer, or not an integer. Thus Equation (1) is not satisfied. Next suppose that neither \(a\) nor \(b\) are divisible by \(2^{t+1}\). Then \(\frac{a-b}{2^{t}}\) is even, and so the right hand side of Equation (1) is even. However, on the left hand side, the term \(\frac{ab}{2^{2t}}\) is odd, and the term \(\frac{(2m)^{n}}{2^{2t}}\) is again either an even integer, or not an integer. Thus Equation (1) is again not satisfied. This proves the claim. **Proposition 2.8**.: _There is no \(\mathrm{CGW}(n,w;6)\) when \(n\) is odd and \(w\equiv 6\mod 9\)._ Proof.: As before, if such a matrix exists we require that there are integers \(a\) and \(b\) such that \[w^{n}=|\mathrm{det}(W)|^{2}=|a+b\zeta_{3}|^{2}=a^{2}+b^{2}-ab.\] Let \(w=3m\) for some \(m\equiv 2\bmod 3\). This time we show that there is no solution to \[(3m)^{n}-ab=(a-b)^{2}.\] Let \(3^{t}\) be the largest power of \(3\) dividing both \(a\) and \(b\). Then we require a solution to \[\frac{(3m)^{n}}{3^{2t}}-\frac{ab}{3^{2t}}=\frac{(a-b)^{2}}{3^{2t}}.\] We split the remainder of the proof into two cases. First suppose that one of \(a\) or \(b\) is divisible by \(3^{t+1}\), but not both. Then \(\frac{a-b}{3^{t}}\) is not a multiple of \(3\). However, if \(n>2t\) then both terms on the left hand side are multiples of \(3\), and if \(n<2t\) the left hand side is not an integer, so the equation is not satisfied. Next suppose that neither \(a\) nor \(b\) are divisible by \(3^{t+1}\). In this case, the term \(\frac{(a-b)^{2}}{3^{2t}}\equiv 1\bmod 3\). Thus a solution is only possible if \(n>2t\) and if \(\frac{ab}{3^{2t}}\equiv 2\bmod 3\). This implies that \(a=3^{t}a^{\prime}\) and \(b=3^{t}b^{\prime}\) where, without loss of generality, \(a^{\prime}\equiv 1\bmod 3\) and \(b^{\prime}\equiv 2\bmod 3\). Now consider the equivalent expression \[(3m)^{n}=(a+b)^{2}-3ab.\] In this expression \((a+b)^{2}=(3^{t}(a^{\prime}+b^{\prime}))^{2}\) is divisible by \(3^{2t+2}\), but the highest power of \(3\) dividing \(3ab\) is \(3^{2t+1}\). So if \[\frac{(3m)^{n}}{3^{2t+1}}=\frac{(a+b)^{2}}{3^{2t+1}}-\frac{3ab}{3^{2t+1}},\] then the right hand side is congruent to \(1\bmod 3\). However, either \(n>2t+1\) and \(\frac{(3m)^{n}}{3^{2t+1}}\equiv 0\bmod 3\), or \(n=2t+1\) and \(\frac{(3m)^{n}}{3^{2t+1}}=m^{2t+1}\equiv 2\bmod 3\). Thus no solution exists. Finally, the following generalizes a well known non-existence result for Butson Hadamard matrices. 
**Proposition 2.9**.: _Let \(p\equiv 2\bmod 3\) be a prime and let the squarefree part of \(w\) be divisible by \(p\). Then there is no \(\operatorname{CGW}(n,w;6)\) when \(n\) is odd._ Proof.: Let \(w=p^{r}m\) for odd \(r\) and \(m\not\equiv 0\bmod p\). The first part is similar to the proofs of the previous Propositions. We show that there is no solution to \[(p^{r}m)^{n}-ab=(a-b)^{2}.\] Let \(p^{t}\) be the largest power of \(p\) dividing both \(a\) and \(b\). Then we require a solution to \[\frac{(p^{r}m)^{n}}{p^{2t}}-\frac{ab}{p^{2t}}=\frac{(a-b)^{2}}{p^{2t}}.\] Suppose that one of \(a\) or \(b\) is divisible by \(p^{t+1}\), but not both. Then \(\frac{a-b}{p^{2}}\) is not a multiple of \(p\). However, if \(n>2t\) then both terms on the left hand side are multiples of \(p\), and if \(n<2t\) the left hand side is not an integer, so the equation is not satisfied. Next suppose that neither \(a\) nor \(b\) are divisible by \(p^{t+1}\). On the left hand side, \(\frac{(p^{r}m)^{n}}{p^{2t}}\) is an integer only if it is a multiple of \(p\), and \(\frac{ab}{p^{2t}}\) is not a multiple of \(p\). Letting \(a=p^{t}a^{\prime}\) and \(b=p^{t}b^{\prime}\) for \(a^{\prime},b^{\prime}\not\equiv 0\bmod p\), a solution can only exist if \[-a^{\prime}b^{\prime}\equiv(a^{\prime}-b^{\prime})^{2}\bmod p.\] Rearranging, this implies that \[a^{\prime 2}+b^{\prime 2}\equiv a^{\prime}b^{\prime}\bmod p.\] Multiplying by \((a^{\prime}b^{\prime})^{-1}\), and letting \(x=a^{\prime}(b^{\prime-1})\), this expression reduces to \[x^{2}-x+1\equiv 0\bmod p.\] Should a solution to this expression exist, we would find that \((x-1)^{4}\equiv x-1\bmod p\), so either \(x-1\equiv 1\bmod p\), or \((x-1)\) is a element of multiplicative order \(3\) in \(\mathbb{Z}_{p}\). The former implies that \(x\equiv 2\bmod p\) which contradicts \(x^{2}-x+1\equiv 0\bmod p\) for all primes \(p\neq 3\), so we must have the latter. However, if \(p\equiv 2\bmod 3\), then \(3\) does not divide \(p-1\), so there is no element of multiplicative order \(3\) in \(\mathbb{Z}_{p}\), so there can be no solution. _Remark 2.10_.: Letting \(w=n\) and assuming that \(n\) is odd, Proposition 2.9 recovers the known condition that a \(\mathrm{BH}(n,6)\) exists only if the squarefree part of \(n\) is not divisible by a prime \(p\equiv 5\bmod 6\), which is a special case of Theorem 2.5. ### Block designs and the lifting problem When the results already outlined in this section are insufficient, some cases need to be investigated individually. For this purpose, particularly when \(k\) is prime, it is often easier to consider whether or not the support matrix of a CGW, should it exist, can meet some necessary conditions. In small cases, we can use a well known restriction on the existence of block designs. First we need a definition. Let \(n\), \(w\) and \(\lambda\) be integers where \(n>w>\lambda\geq 0\). Let \(X\) be a set of size \(n\). A _Symmetric balanced incomplete block design_\(\mathrm{SBIBD}(n,w,\lambda)\) is a set of \(n\) subsets of \(X\) of size \(w\), called _blocks_ such that each unordered pair of distinct elements of \(X\) are contained in exactly \(\lambda\) blocks. If \(A\) is the incidence matrix of the \(\mathrm{SBIBD}(n,w,\lambda)\), then \[AA^{\top}=wI_{n}+\lambda(J_{n}-I_{n}),\] where \(J_{n}\) denotes the \(n\times n\) matrix of all ones. It is a well known necessary condition that a \(\mathrm{SBIBD}(n,w,\lambda)\) exists only if \[\lambda(n-1)=w(w-1). 
\tag{2}\] This condition will be useful for eliminating the possibility of a \(\mathrm{CGW}(n,w;k)\) for certain small parameters. A reason for this is that we can sometimes observe that a \(\mathrm{CGW}(n,w;k)\) can only exist if the support matrix is the incidence matrix of some \(\mathrm{SBIBD}(n,w,\lambda)\). For example, it can be shown that if a \(\mathrm{CGW}(11,5;4)\) exists, then its support must be a \(\mathrm{SBIBD}(11,5,2)\). Such a design exists, but it is unique, so we only need to check whether the incidence matrix of this design can support a \(\mathrm{CGW}(11,5;4)\), which we can do by hand. This is one way to eliminate the possibility of a \(\mathrm{CGW}(11,5;4)\). It is an example of the following problem. **Problem 2.11** (The lifting problem).: Given a \((0,1)\)-matrix \(S\), does \(S\) lift to a \(\mathrm{CGW}(n,w;k)\)? In several cases, non-existence of a \(\mathrm{CGW}(n,w;k)\) is verified by showing that there does not exist a \((0,1)\)-matrix \(S\) that lifts to a \(\mathrm{CGW}(n,w;k)\) for the given parameters. Completing the unfilled entries in the existence tables in Appendix A may require solving the lifting problem, as potential support matrices exist in those cases. ### Sporadic non-existence conditions One of our aims will be to settle the question of existence for as many orders \(1\leq n\leq 15\) and weights \(1\leq w\leq n\) as we can for small \(k\); see Section 3.6 and the Tables in Appendix A. Non-existence is mostly determined by the results already described in this section, but occasionally some more specialized results are required. Existence is in most cases given as a result of one of the constructions of Section 3. In some cases we can prove non-existence for certain parameters individually, which often reduces to determining if a support matrix can exist, and if so, trying to solve the lifting problem. This section is not intended to be comprehensive, but to demonstrate the kind of methods that can be implemented at small orders. We give some examples here. **Proposition 2.12**.: _There exists a \(\mathrm{CGW}(n,4;3)\) if and only if \(n\equiv 0\bmod 5\)._ Proof.: Let \(W\in\mathrm{CGW}(n,4;3)\) and let \(S\) be the support matrix of \(W\). Then the dot product of any two distinct rows must be either \(0\) or \(3\). Any pair of the four rows that contain a \(1\) in column \(1\) must therefore share a \(1\) in exactly two other columns, and so up to permutation equivalence, these rows are of the form \[\left[\begin{array}{cccc|cccc}1&1&1&1&0&0&\cdots&0\\ 1&1&1&0&1&0&\cdots&0\\ 1&1&0&1&1&0&\cdots&0\\ 1&0&1&1&1&0&\cdots&0\end{array}\right].\] In this configuration, columns \(2,3,4,5\) share a \(1\) in exactly \(2\) rows, and have only one further \(1\) remaining in the column, and so we can immediately deduce a fifth row of the matrix, which up to equivalence takes the form \[\left[\begin{array}{cccc|cccc}1&1&1&1&0&0&\cdots&0\\ 1&1&1&0&1&0&\cdots&0\\ 1&1&0&1&1&0&\cdots&0\\ 1&0&1&1&1&0&\cdots&0\\ 0&1&1&1&1&0&\cdots&0\end{array}\right]=[C\ \mid\ 0_{5,n-5}].\] Proceeding in the same way, we find that \(S\) must be permutation equivalent to a block diagonal matrix with copies of the \(5\times 5\) matrix \(C\) as the blocks on the diagonal. The claim that \(n\equiv 0\bmod 5\) follows immediately. To see that \(n\equiv 0\bmod 5\) is sufficient, let \(C\) be the support of the \(\mathrm{CGW}(5,4;3)\) of Example 3.5; a direct sum of \(n/5\) copies of that matrix is then a \(\mathrm{CGW}(n,4;3)\). 
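When the parameters are very small, Problem 2.11 can also be attacked by exhaustive search. The sketch below (Python with numpy; `lifts_to_cgw` is an ad hoc name, and the search is exponential in the number of ones of \(S\), so this is illustrative rather than practical) uses the fact that row scaling is an equivalence to fix the first non-zero entry of each row to \(1\), and then tries every assignment of \(k^{\rm th}\) roots of unity to the remaining ones of a candidate support matrix.

```python
import itertools
import numpy as np

def lifts_to_cgw(S, w, k, tol=1e-9):
    """Exhaustive check of the lifting problem (Problem 2.11) for a small
    (0,1) support matrix S: return a CGW(n, w; k) supported on S, or None."""
    S = np.asarray(S)
    n = S.shape[0]
    ones = list(zip(*np.nonzero(S)))
    roots = [np.exp(2j * np.pi * j / k) for j in range(k)]
    # Row scaling is an equivalence, so fix the first one in each row to 1.
    first_in_row = {r: min(c for rr, c in ones if rr == r) for r in range(n)}
    free = [(r, c) for (r, c) in ones if c != first_in_row[r]]
    for choice in itertools.product(roots, repeat=len(free)):
        W = np.zeros((n, n), dtype=complex)
        for r in range(n):
            W[r, first_in_row[r]] = 1
        for (r, c), z in zip(free, choice):
            W[r, c] = z
        if np.allclose(W @ W.conj().T, w * np.eye(n), atol=tol):
            return W
    return None

# The all-ones 3x3 support lifts to a BH(3,3), i.e. to a CGW(3,3;3):
print(lifts_to_cgw(np.ones((3, 3), dtype=int), 3, 3) is not None)   # True
```

The non-existence arguments in this section are combinatorial by necessity, since an exhaustive search of this kind quickly becomes infeasible as the number of non-zero entries grows.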
**Proposition 2.13**.: _There is no matrix in \(\mathrm{CGW}(10,6;3)\)._ Proof.: Should such a matrix \(W\) exist, then it must have exactly \(4\) zeros in each row and column, and in any distinct row/column they must share the entry zero in either \(1\) or \(4\) positions. Suppose first that no two rows share a zero in \(4\) positions, and so each pair share a zero in exactly one position. Then if \(S\) is the support matrix of \(W\), the matrix \(J_{n}-S\) should be the incidence matrix of a \(\mathrm{SBIBD}(10,4,1)\). However, these parameters contradict Equation (2). Next suppose that some pair of distinct rows have their zeros in the same four columns. Then because these columns now share a zero in at least \(2\) positions, they must also do so in \(4\). As a result, up to equivalence, the support matrix must have \(4\) rows such that it takes the form \[\left[\begin{array}{cccc|cccc}1&1&1&1&1&1&0&\cdots&0\\ 1&1&1&1&1&1&0&\cdots&0\\ 1&1&1&1&1&1&0&\cdots&0\\ 1&1&1&1&1&1&0&\cdots&0\end{array}\right].\] Now, in any distinct pair of the first six columns, the entries equal to \(1\) are in \(4\) common rows, and so the remaining two \(1\)s must also be in common rows. Thus, up to equivalence, the two subsequent rows take the form \[\left[\begin{array}{cccc|cccc}1&1&1&1&1&1&*&\cdots&*\\ 1&1&1&1&1&1&*&\cdots&*\end{array}\right].\] In order that the rows have weight \(6\), the entries marked \(*\) must be zero, but then the corresponding columns would have weight \(\leq 4\), and so we have a contradiction. **Proposition 2.14**.: _There is no \(\operatorname{CGW}(10,7;4)\)._ Proof.: Suppose that \(W\in\operatorname{CGW}(10,7;4)\) and let \(S\) be the support matrix of \(W\). The positions of the ones in any two rows intersect in either \(4\) or \(6\) places. If positions of the ones in all pairs of distinct rows intersected in exactly \(4\) places then \(S\) would describe a \((10,7,4)\)-design, which is forbidden by Equation (2). So at least two rows share ones in \(6\) positions, and up to equivalence the first two rows of \(S\) are \[\left[\begin{array}{ccccccccc}0&1&1&1&1&1&1&0&0\\ 1&0&1&1&1&1&1&1&0&0\end{array}\right].\] Now in any subsequent row, if there are an even number of ones in positions \(3\) to \(8\), then both entries in the positions \(1\) and \(2\) are zero. If there are an odd number of ones in positions \(3\) to \(8\), then both entries in the positions \(1\) and \(2\) are one. It follows that, up to equivalence, \(S\) takes the form \[\left[\begin{array}{ccccccccc}0&1&1&1&1&1&1&0&0\\ 1&0&1&1&1&1&1&1&0&0\\ \hline 1&1&&&&&&&\\ 1&1&&&&&&\\ 1&1&&&&&&\\ 1&1&&&&&&\\ 1&1&&&&\\ 0&0&&&&\end{array}\right].\] Now, in rows \(3\) to \(8\), there are an odd number of ones in columns \(3\) to \(8\), and so an even number of ones in columns \(9\) and \(10\). Since zeros in columns \(9\) and \(10\) already meet in rows \(1\) and \(2\), this cannot happen again and so both entries in rows \(3\) to \(8\) must equal \(1\). Completing rows \(9\) and \(10\) is similar, and we find \(S\) is of the form \[\left[\begin{array}{ccccccccc}0&1&1&1&1&1&1&0&0\\ 1&0&1&1&1&1&1&0&0\\ \hline 1&1&&&&1&1\\ 1&1&&&&&&1&1\\ 1&1&&&&&&1&1\\ 1&1&&&&&&1&1\\ 1&1&&&&&&1&1\\ 1&1&&&&&&1&1\\ 1&1&&&&&&1&1\\ \hline 0&0&1&1&1&1&1&1&0\\ 0&0&1&1&1&1&1&0&1\end{array}\right].\] Now consider the \(6\times 6\) submatrix in the centre, which must have exactly three entries equal to one in each row and column, and the ones in any pair of rows must meet in either \(0\) or \(2\) positions. 
If the ones in any two rows meet in zero positions, it is impossible to complete a third row that meets each of these two in \(0\) or \(2\) positions, so they must all meet in exactly two positions. However this would imply that the submatrix in the centre describes a \((6,3,2)\)-design, which is also forbidden by Equation (2). We give one more example of this kind of argument. **Proposition 2.15**.: _There is no \(\operatorname{CGW}(11,5;4)\)._ Proof.: Using similar arguments to those of Proposition 2.14, it can be shown that the support matrix of such a matrix must be the incidence matrix of a \((11,5,2)\)-design. There is, up to equivalence, exactly one such design (see, e.g., [14]). Thus we must be able to solve the lifting problem for this particular support matrix if a \(\mathrm{CGW}(11,5;4)\) is to exist. However, it is not difficult to verify that this is impossible. We omit details for brevity. ## 3 Constructions In this section we outline the best known constructions of CGWs. We begin with direct constructions and infinite families, including a summarization of the best known work on the topic by Berman and Seberry and Whiteman. We then consider various recursive constructions, including standard direct sum or tensor product constructions, and more general recursive constructions such as the powerful method of weaving introduced by Craigen. We begin with a method strongly influenced by a familiar construction of conference matrices due to Paley. ### Generalized Paley The most famous constructions of an infinite family of Hadamard matrices are due to Paley. There are two constructions yielding what are now known as the type I and type II Paley Hadamard matrices. Both constructions are built on circulant cores, obtained by applying the quadratic character to the elements of a finite field \(\mathbb{F}_{q}\). The next construction we introduce is not strictly a generalization of Paley's construction of the circulant core, but bears a strong enough resemblance that we refer to this as a generalized Paley construction. Let \(p\) and \(q\) be primes, with \(q\equiv 1\mod p\). Let \(x\) be a multiplicative generator of the non-zero elements of \(\mathbb{Z}_{q}\). Consider the map \(\phi:\mathbb{Z}_{q}\rightarrow\langle\zeta_{p}\rangle\cup\{0\}\) defined by setting \(\phi(x^{j})=\zeta_{p}^{j}\) for all \(1\leq j\leq q-1\), and setting \(\phi(0)=0\). Then \(\phi\) has the following two properties: * \(\phi(xy)=\phi(x)\phi(y)\) for all \(x,y\in\mathbb{Z}_{q}\); and * \(\phi(x^{*})=\phi(x)^{*}\) for all \(x\in\mathbb{Z}_{q}\). **Lemma 3.1**.: _Let \(C=\mathrm{circ}([\phi(x)\ :\ 0\leq x\leq q-1])\). Then \(CC^{*}=qI_{q}-J_{q}\)._ Proof.: Observe that each row has exactly \(q-1\) non-zero entries, so the diagonal entries of \(CC^{*}\) are clearly as claimed. It remains to show that the Hermitian inner product of any two distinct rows \(r_{i}\) and \(r_{j}\) of \(C\) is \(-1\). Since \(C\) is circulant, this inner product is \[\langle r_{i},r_{j}\rangle=\sum_{x\in\mathbb{Z}_{q}}\phi(x)\phi(x-s)^{*}\] for some \(s\neq 0\). Using the properties of \(\phi\) we observe that \[\phi(x)\phi(x-s)^{*}=\phi(x(x-s)^{*}).\] Now, \(x(x-s)^{*}=0\) if and only if \(x=0,s\). If \(x,y\not\in\{0,s\}\), then \[x(x-s)^{*} =y(y-s)^{*}\] \[\Leftrightarrow x(y-s) =y(x-s)\] \[\Leftrightarrow xs =ys\] \[\Leftrightarrow x =y.\] Further, \(x(x-s)^{*}=1\) only if \(s=0\). It follows that when \(s\neq 0\), the multiset \(\{x(x-s)^{*}\ :\ x\in\mathbb{Z}_{q}\setminus\{0,s\}\}=\{2,3,\ldots,q-1\}\). 
Consequently, by Theorem 2.1, \(\sum_{x\in\mathbb{Z}_{q}}\phi(x)\phi(x-s)^{*}=-1\), as required. The following is now immediate. **Theorem 3.2**.: _Let \(C=\operatorname{circ}([\phi(x)\ :\ 0\leq x\leq q-1])\). Then the matrix_ \[W=\left[\begin{array}{c|c}0&\mathbf{1}\\ \hline\mathbf{1}^{\top}&C\end{array}\right]\!,\] _is a \(\operatorname{CGW}(q+1,q;p)\)._ ### Berman's constructions The earliest constructions of CGWs that we know of are due to Berman [5, 6]. We have only been able to obtain a copy of the more recent paper [6], which claims to generalize the constructions in [5] which are limited to real weighing matrices. Families are constructed using connections to finite geometry. Let \(p\), \(n\), and \(t\) be positive integers with \(p\) a prime. Let \(F\) be the finite field \(\mathbb{F}_{p^{n}}\) and let \(P^{\prime}\) be the set of all points in the affine space \(F^{t}\), excluding the origin \(\mathbf{0}=(0,0,\ldots,0)\). Let \(H^{\prime}\) denote the set of hyperplanes of \(F^{t}\) that do not include \(\mathbf{0}\). Hence \(|P^{\prime}|=|H^{\prime}|=p^{tn}-1\). A hyperplane in \(F^{t}\) can be described by a linear equation of the form \[u_{1}x_{1}+\cdots+u_{t}x_{t}=b\] for an arbitrary constant \(b\in F\). In order for this construction to work, we cannot choose \(b=0\). Adhering to the choice of Berman in [6], we let \(b=1\). Thus every hyperplane can be described by a \(t\)-tuple \(\mathbf{u}=(u_{1},\ldots,u_{t})\) where \(u_{i}\in F\), where each \(\mathbf{u}\in H^{\prime}\) satisfies a linear equation \[u_{1}x_{1}+\cdots+u_{t}x_{t}=1\] where at least one \(u_{j}\neq 0\). Letting \(\mathbf{x}\) be a point in \(P^{\prime}\), it follows that this equation can be written as \(\mathbf{u}\mathbf{x}^{\top}=1\). We say that the point \(\mathbf{x}\) is on the hyperplane \(\mathbf{u}\) or that \(\mathbf{u}\) contains the point \(\mathbf{x}\) and write \(\mathbf{x}\in\mathbf{u}\) if \(\mathbf{x}\) and \(\mathbf{u}\) satisfy this equation. It follows that a point \(\mathbf{x}\in P^{\prime}\) is on \(p^{(t-1)n}\) hyperplanes of \(H^{\prime}\) and a hyperplane \(\mathbf{u}\in H^{\prime}\) contains \(p^{(t-1)n}\) points of \(P^{\prime}\). A collineation \(\phi\) is a transformation of \(F^{t}\) preserving collinearity; the order of \(\phi\) is the smallest \(r\) such that \(\phi^{r}\) is the identity transformation. The map \(\phi_{\lambda}:\mathbf{x}\mapsto\lambda\mathbf{x}\) for \(\lambda\in F\setminus\{0\}\) is a collineation of order \(r_{\lambda}\) which maps the hyperplane \(\mathbf{u}\) onto the hyperplane \(\lambda^{-1}\mathbf{u}\). Writing \([\mathbf{x}]=\{\phi_{\lambda}^{j}\mathbf{x}\ :\ j=0,\ldots,r_{\lambda}-1\}\) for \(\mathbf{x}\in P^{\prime}\), we observe that \(\mathbf{y}\in[\mathbf{x}]\) if and only if \(\mathbf{x}\in[\mathbf{y}]\). It follows that \(P^{\prime}=[\mathbf{x}^{(1)}]\cup[\mathbf{x}^{(2)}]\cup\cdots\cup[\mathbf{x} ^{(m)}]\) is a partition into \(m\) classes, where \(mr_{\lambda}=p^{tn}-1\). Similarly, we have the partition \(H^{\prime}=[\mathbf{u}^{(1)}]\cup[\mathbf{u}^{(2)}]\cup\cdots\cup[\mathbf{u}^{(m)}]\). If \(\mathbf{x}^{(j)}\) is a point of \(\phi_{\lambda}^{\ell}\mathbf{u}^{(i)}\), then \[1=\lambda^{-\ell}\mathbf{u}^{(i)}(\mathbf{x}^{(j)})^{\top}=\lambda^{-\ell-k} \mathbf{u}^{(i)}(\lambda^{k}\mathbf{x}^{(j)})^{\top}\] for any \(0\leq k\leq r_{\lambda}-1\), and so \(\phi_{\lambda}^{k}\mathbf{x}^{(j)}\) is a point of \(\phi_{\lambda}^{\ell+k}\mathbf{u}^{(i)}\). 
It follows that if a point \(\mathbf{x}^{(j)}\) lies on any hyperplane in \([\mathbf{u}^{(i)}]\), then each point in \([\mathbf{x}^{(j)}]\) lies on exactly one hyperplane in \([\mathbf{u}^{(i)}]\). As such, if points of \([\mathbf{x}^{(j)}]\) are on hyperplanes of \([\mathbf{u}^{(i)}]\), we write \([\mathbf{x}^{(j)}]\in[\mathbf{u}^{(i)}]\). Now let \(d>1\) be any divisor of \(r_{\lambda}\), and let \(v(\mathbf{u}^{(i)},\mathbf{x}^{(j)})\) be the unique integer \(h\) such that \(\phi_{\lambda}^{h}\mathbf{x}^{(j)}\) is a point of \(\mathbf{u}^{(i)}\). Finally, let \(A\) be the \(m\times m\) matrix \((a_{ij})\) defined by \[a_{ij}=\begin{cases}\zeta_{d}^{v(\mathbf{u}^{(i)},\mathbf{x}^{(j)})}&\text{ if }[\mathbf{x}^{(j)}]\in[\mathbf{u}^{(i)}]\\ 0&\text{otherwise.}\end{cases}\] Assuming all of the notation of this section, we have the following. **Theorem 3.3** (cf. [6, Theorem 2.2]).: _The matrix \(A\) is an element of \(\operatorname{CGW}((p^{tn}-1)/r_{\lambda},p^{(t-1)n};d)\)._ Proof.: The parameters of \(A\) are all clear from its construction. It remains to show that \(A\) is orthogonal, i.e., that \[Q=\sum a_{ij}\overline{a_{kj}}=0\] for all \(i\neq j\). The \(i^{\mathrm{th}}\) and \(k^{\mathrm{th}}\) rows of \(A\) correspond to hyperplane classes. If \([\mathbf{u}^{(i)}]\) and \([\mathbf{u}^{(k)}]\) are parallel, then they have no points in common, and it follows that the sum \(Q\) contains no non-zero terms. Suppose then that \([\mathbf{u}^{(i)}]\) and \([\mathbf{u}^{(k)}]\) do intersect, and so \(\mathbf{u}^{(i)}\) intersects each of the hyperplanes \(\phi_{\lambda}^{h}\mathbf{u}^{(k)}\), \(h=0,1,\ldots,r_{\lambda}-1\), in \(p^{(t-2)n}\) points. Thus the sum \(Q\) contains \(r_{\lambda}p^{(t-2)n}\) non-zero terms. For any point \(\phi_{\lambda}^{\ell}\mathbf{x}^{(j)}\) on each of \(\mathbf{u}^{(i)}\) and \(\phi_{\lambda}^{h}\mathbf{u}^{(k)}\), we have \[\mathbf{u}^{(i)}(\lambda^{\ell}\mathbf{x}^{(j)})^{\top}=1\] and \[(\lambda^{h}\mathbf{u}^{(k)})(\lambda^{\ell}\mathbf{x}^{(j)})^{\top}=\mathbf{ u}^{(k)}(\lambda^{\ell-h}\mathbf{x}^{(j)})^{\top}=1\] so that \(v(\mathbf{u}^{(i)},\mathbf{x}^{(j)})=\ell\) and \(v(\mathbf{u}^{(k)},\mathbf{x}^{(j)})=\ell-h\). Consequently, \(a_{ij}=\zeta_{d}^{\ell}\) and \(a_{kj}=\zeta_{d}^{\ell-h}\) and so \(a_{ij}\overline{a_{kj}}=\zeta_{d}^{h}\). Thus for all \(h=0,1,\ldots,r_{\lambda}-1\), there are \(p^{(t-2)n}\) terms of \(Q\) which have the value \(\zeta_{d}^{h}\), and so \[Q=p^{(t-2)n}(1+\zeta_{d}+\cdots+\zeta_{d}^{r_{\lambda}-1})=0.\] **Corollary 3.4** (cf. [6, Corollary 2.3]).: _Let \(p\), \(n\), \(t\), \(d\) and \(r\) be any positive integers such that \(p\) is prime, \(d\mid r\), and \(r\mid(p^{n}-1)\). Then there exists a matrix \(W\) in \(\operatorname{CGW}((p^{tn}-1)/r,p^{(t-1)n};d)\)._ **Example 3.5**.: Letting \(p=n=2\), then for any choice of \(t>1\) we can let \(d=r=3\), and build a matrix in \(\operatorname{CGW}((2^{2t}-1)/3,2^{2t-2};3)\). When \(t=2\), we get a matrix \[W\equiv\left[\begin{array}{ccccc}0&1&1&1&1\\ 1&0&1&\zeta_{3}&\zeta_{3}^{2}\\ 1&1&0&\zeta_{3}^{2}&\zeta_{3}\\ 1&\zeta_{3}&\zeta_{3}^{2}&0&1\\ 1&\zeta_{3}^{2}&\zeta_{3}&1&0\end{array}\right].\] _Remark 3.6_.: Berman also provides constructions of \(\zeta\)-circulant CGWs in [6]. They involve some specialised edits to the construction above, but do not construct matrices with parameters distinct from those facilitated for by Corollary 3.4. As such, in the interest of brevity we just refer the reader to the original paper for more details. 
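Returning briefly to the generalized Paley construction of Section 3.1, Theorem 3.2 is also straightforward to realize computationally. The following sketch (Python with numpy; `paley_cgw` is an ad hoc helper, and a primitive root modulo \(q\) must be supplied by the caller) builds the circulant core of Lemma 3.1, borders it as in Theorem 3.2, and verifies the resulting matrix for \(q=7\) and \(p=3\).

```python
import numpy as np

def paley_cgw(q, p, generator):
    """Sketch of the generalized Paley construction (Theorem 3.2): for a
    prime q with q = 1 (mod p), return a CGW(q+1, q; p).  `generator`
    must be a primitive root modulo q."""
    zeta = np.exp(2j * np.pi / p)
    phi = {0: 0.0}                     # phi(0) = 0, phi(g^j) = zeta_p^j
    x = 1
    for j in range(1, q):
        x = (x * generator) % q
        phi[x] = zeta ** j
    first_row = [phi[x] for x in range(q)]
    C = np.array([[first_row[(j - i) % q] for j in range(q)] for i in range(q)])
    W = np.zeros((q + 1, q + 1), dtype=complex)
    W[0, 1:] = 1                       # border of ones, zero corner
    W[1:, 0] = 1
    W[1:, 1:] = C                      # circulant core with CC^* = qI - J
    return W

W = paley_cgw(7, 3, generator=3)       # 3 is a primitive root modulo 7
assert np.allclose(W @ W.conj().T, 7 * np.eye(8))   # a CGW(8, 7; 3)
```

For \(p=2\) the map \(\phi\) is simply the quadratic character, and the same sketch returns a Paley-style \(W(q+1,q)\) for any odd prime \(q\).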
### Complementary sequences This subsection summarises the work of [28] as it pertains to CGWs. Perhaps the best known example of complementary sequences are Golay pairs [32]. These are pairs of \(\{\pm 1\}\)-sequences \((a,b)\) of length \(v\) such that \[\sum_{j=0}^{v-1-s}a_{j}a_{j+s}+b_{j}b_{j+s}=0\] for all \(1\leq s\leq v-1\). This equation says that the aperiodic autocorrelation of the sequences \(a\) and \(b\) sum to zero for all possible shifts \(s\). The existence of Golay pairs is known when the length of the sequences is \(v=2^{x}10^{y}26^{z}\) for \(x,y,z\geq 0\), but not for any other values. This motivated the generalization to complementary sequences according to periodic autocorrelation functions. We describe a very general extension of the idea here, as it pertains to the construction of CGWs. In this section, we identify sequences with row vectors, for the purposes of describing certain operations with matrix multiplication. For any \(\alpha\in\mathcal{U}_{k}\), define the \(\alpha\)-circulant matrix \[C_{\alpha}=\left[\begin{array}{cccccc}0&1&0&\cdots&0&0\\ 0&0&1&&0&0\\ 0&0&0&&0&0\\ \vdots&&\ddots&&\vdots\\ 0&0&0&&0&1\\ \alpha&0&0&\cdots&0&0\end{array}\right].\] The \(\alpha\)-_phased periodic autocorrelation function_ of a \(\mathcal{U}_{k}\)-sequence \(a\) of length \(v\) and shift \(s\) to be \[\mathrm{PAF}_{\alpha,s}(a)=a(aC_{\alpha}^{s})^{*}.\] Let \((a,b)\) be a pair of \(\mathcal{U}_{k}\)-sequences. Let \(w_{a}\) denote the weight of a sequence \(a\), i.e., the number of non-zero entries in \(a\). We say \(w=w_{a}+w_{b}\) is the weight of a pair \((a,b)\). A pair of sequences \((a,b)\) is a _weighted \(\alpha\)-phased periodic Golay pair_ (\(\mathrm{WGP}(\mathcal{U}_{k},v,\alpha,w)\)) if \[\mathrm{PAF}_{\alpha,s}(a)+\mathrm{PAF}_{\alpha,s}(b)=0.\] for all \(1\leq s\leq v-1\). For some \(\alpha\in\mathcal{U}_{k}\), let \(A\) and \(B\) be the \(\alpha\)-circulant matrices with first row \(a\) and \(b\) respectively, that is \(A_{i+1}=A_{i}C_{\alpha}\) and \(B_{i+1}=B_{i}C_{\alpha}\) for all \(2\leq i\leq v\). When \(a\) and \(b\) are complementary, we construct a matrix with pairwise orthogonal rows as follows. **Theorem 3.7** (Theorem 5.1 [28]).: _Let \((a,b)\in\mathrm{WGP}(\mathcal{U}_{k},v,\alpha,w)\) and define the matrices \(A\) and \(B\) as above. If_ \[W=\left[\begin{array}{cc}A&B\\ -B^{*}&A^{*}\end{array}\right],\] _then \(WW^{*}=wI_{2v}\). That is, \(W\) is a \(\operatorname{CGW}(2v,w;2k)\) if \(k\) is odd, and \(W\) is a \(\operatorname{CGW}(2v,w;k)\) if \(k\) is even._ The constructions of \(\operatorname{WPGP}(\mathcal{U}_{k},v,\alpha,w)\) that appear most frequently in the literature are limited to the when \(k=2\) or \(k=4\), and when \(\alpha=1\) or \(\alpha=\frac{k}{2}\). Computational methods for searching are useful, but ultimately limited to small length sequences. However, we can take advantage of constructions of aperiodic complementary sequences. In particular, a _ternary Golay pair_ of is a pair of \((0,\pm 1)\)-sequences \((a,b)\) of length \(n\) such that \[\sum_{j=0}^{n-1-s}a_{j}a_{j+s}+b_{j}b_{j+s}=0\] for all \(1\leq s\leq n-1\). There is a range of studies of ternary Golay pairs in the mathematics and engineering literature for numerous reasons, we refer the reader to [21] and [34] for more details. The following is a special case of [28, Theorem 3.5]. **Theorem 3.8**.: _Let \((a,b)\) be a ternary Golay pair of length \(n\) and weight \(w\). 
Then \((a,b)\in\operatorname{WPGP}(\mathcal{U}_{k},n,\alpha,w)\) for any even \(k\), and any \(\alpha\in\langle\zeta_{k}\rangle\)._ As a consequence, given \((a,b)\) we can construct several distinct matrices in \(\operatorname{CGW}(2n,w;k)\) that are not equivalent to a \(\operatorname{CGW}(2n,w;2)\). ### Seberry and Whiteman For any prime power \(q\equiv 1\mod 8\), Seberry and Whiteman [52] present a construction of a matrix in \(\operatorname{CGW}(q+1,q;4)\). It is essentially a construction of an element of \(\operatorname{WPGP}(\mathcal{U}_{4},\frac{q+1}{2},1,q)\), although this is not how the construction is described in the paper. Let \(i=\zeta_{4}\) in this section. The method involves constructing two circulant matrices \(R\) and \(S\) of order \(\frac{q+1}{2}\) with all entries in \(\{\pm 1,\pm i\}\) except for the diagonal of \(R\) which is \(0\), and building the order \(q+1\) matrix \[W\equiv\left[\begin{array}{cc}R&S\\ S^{*}&-R^{*}\end{array}\right]\!. \tag{3}\] The sequence of entries of their first rows is obtained by cleverly applying an eighth power character \(\chi\) to elements of \(\mathbb{F}_{q^{2}}\) in a particular order. The method is as follows. Let \(q\equiv 1\mod 8\) be a prime power and let \(n=\frac{q+1}{2}\). Let \(\tau\) be a primitive element of \(\mathbb{F}_{q^{2}}\) and let \(\gamma=\tau^{n}\). For \(x\in\mathbb{F}_{q^{2}}\setminus\{0\}\), let \(\operatorname{ind}(x)\) be the least non-negative integer \(t\) such that \(\tau^{t}=x\) and define \(\chi:\mathbb{F}_{q^{2}}\to\langle\zeta_{8}\rangle\cup\{0\}\) so that \[\chi(x)=\begin{cases}\zeta_{8}^{\operatorname{ind}(x)}&\text{ if }x\neq 0\\ 0&\text{ if }x=0.\end{cases} \tag{4}\] We can write each element of \(\mathbb{F}_{q^{2}}\) uniquely in the form \(\alpha\gamma+\beta\) where \(\alpha,\beta\in\mathbb{F}_{q}\). So let \(\tau^{j}=\alpha_{j}\gamma+\beta_{j}\) for all \(j\), and define the sequences \(a\) and \(b\) such that \(a_{j}=\chi(\alpha_{j})\) and \(b_{j}=\chi(\beta_{j})\). These sequences satisfy the following two identities for all \(0\leq j\leq q^{2}-2\); \[b_{j+2n}=ib_{j}, \tag{5}\] \[b_{j+n}=ia_{j}. \tag{6}\] The first rows \(r\) and \(s\) of \(R\) and \(S\) are chosen to be the subsequences \(r=(a_{0},a_{8},\ldots,a_{8(n-1)})\) and \(s=(b_{0},b_{8},\ldots,b_{8(n-1)})\) respectively. The matrix of the form in (3) is orthogonal only if the sequences \(r\) and \(s\) are complementary, i.e., if \[\sum_{j=0}^{n-1}a_{8j}\overline{a_{8j+8t}}+b_{8j}\overline{b_{8j+8t}}=0 \tag{7}\] for \(1\leq t\leq n-1\), where the indices are read modulo \(8n\). Appealing to (5) this requirement reduces to \[\sum_{j=0}^{n-1}b_{8j}\overline{b_{8j+8t}}+b_{8j+n}\overline{b_{8j+n+8t}}=0. \tag{8}\] The identity of (6) implies that \(b_{j}\overline{b_{\ell}}=b_{j+2n}\overline{b_{\ell+2n}}\) for all \(j,\ell\). Thus we can write the indices modulo \(2n=q+1\) in the sum above. Further, because \(n\) is odd, the indices \(8j\) in the left hand term of the sum of (8) covers the even integers between \(0\) and \(q-1\) and the indices \(8j+n\) in right hand term covers the odd integers between \(1\) and \(q\), and we get the equivalent expression \[\sum_{j=0}^{n-1}b_{2j}\overline{b_{2j+8t}}+b_{2j+1}\overline{b_{2j+1+8t}}=0.\] More succinctly, we get \[\sum_{j=0}^{q}b_{j}\overline{b_{j+8t}}=0. \tag{9}\] It remains to show that Equation (9) holds for all \(1\leq t\neq n-1\). First, observe that \(b_{j}\overline{b_{j+\ell}}=\chi(\beta_{j})\overline{\chi(\beta_{j+\ell})}\). 
Suppose that \(\tau^{j}=\alpha_{j}\gamma+\beta_{j}\) and \(\tau^{\ell}=\alpha_{\ell}\gamma+\beta_{\ell}\). Then \[\tau^{j+\ell} =(\alpha_{j}\gamma+\beta_{j})(\alpha_{\ell}\gamma+\beta_{\ell})\] \[=(\alpha_{j}\beta_{\ell}+\beta_{j}\alpha_{\ell})\gamma+(\alpha_{ j}\alpha_{\ell}\gamma^{2}+\beta_{j}\beta_{\ell}).\] and so \(\beta_{j+\ell}=\alpha_{j}\alpha_{\ell}\gamma^{2}+\beta_{j}\beta_{\ell}\). It follows that \[\chi(\beta_{j})\overline{\chi(\beta_{j+\ell})}=\chi(\beta_{j})\overline{\chi (\alpha_{j}\alpha_{\ell}\gamma^{2}+\beta_{j}\beta_{\ell})}.\] Consequently, for any fixed \(\ell\neq 0\), \[\sum_{j=0}^{q^{2}-2}b_{j}\overline{b_{j+\ell}}=\sum_{j=0}^{q^{2}-2}\chi(\beta_ {j})\overline{\chi(\alpha_{j}\alpha_{\ell}\gamma^{2}+\beta_{j}\beta_{\ell})}. \tag{10}\] Alternatively, \[\sum_{j=0}^{q^{2}-2}b_{j}\overline{b_{j+\ell}} =\sum_{\alpha,\beta\in\mathbb{F}_{q}}\chi(\beta)\overline{\chi( \alpha\alpha_{\ell}\gamma^{2}+\beta\beta_{\ell})}\] \[=\sum_{\beta\in\mathbb{F}_{q}}\chi(\beta)\sum_{\alpha\in\mathbb{F }_{q}}\overline{\chi(\alpha\alpha_{\ell}\gamma^{2}+\beta\beta_{\ell})}\] where the inner sum is zero whenever \(\alpha_{\ell}\neq 0\). Also note that the only value of the form \(\ell=8t\) with \(0\leq t\leq n-1\) for which \(\alpha_{\ell}=0\) when \(8|(q-1)\) is when \(t=0\). It follows that \[\sum_{j=0}^{q^{2}-2}b_{j}\overline{b_{j+8t}}=0\] for any \(1\leq t\leq n-1\). Now we note that \[\sum_{j=0}^{q^{2}-2}b_{j}\overline{b_{j+8t}}=\sum_{h=0}^{q-2}\sum_{j=0}^{q}b_{ j+h(q+1)}\overline{b_{j+8(t+h(q+1))}}.\] Appealing again to (5) we note that the inner sum takes the same value for every \(h\) because \(q+1=2n\), and so letting \(h=0\), we conclude that \[\sum_{j=0}^{q}b_{j}\overline{b_{j+8t}}=0.\] This proves the following. **Theorem 3.9** (Theorem 2, [52]).: _For any prime power \(q\equiv 1\mod 8\) there exists a matrix in \(\mathrm{CGW}(q+1,q;4)\)._ **Example 3.10**.: Let \(q=9\), so \(n=5\). Let \(\tau\) be a primitive element of \(\mathbb{F}_{81}\), let \(\gamma=\tau^{5}\), and let \(z=\tau^{10}\) so that \(z\) is a primitive element of a subfield isomorphic to \(\mathbb{F}_{9}\). Then \[\tau^{0} =0\cdot\gamma+z^{8}=1,\] \[\tau^{8} =z^{5}\cdot\gamma+z^{7},\] \[\tau^{16} =z^{8}\cdot\gamma+z,\] \[\tau^{24} =z^{8}\cdot\gamma+z^{5},\] \[\tau^{32} =z^{5}\cdot\gamma+z^{3}.\] Adhering to Equation (4) we find \(r=(0,i,1,1,i)\) and \(s=(1,-i,i,i,-i)\). It is simple to verify that the matrix \(W\) defined as in Equation (3) is an element of \(\mathrm{CGW}(10,9;4)\). ### Recursive constructions The constructions above, in addition to the many constructions known for real weighing, Hadamard and Butson Hadamard matrices provide numerous CGWs at infinitely many orders. However, recursive constructions applied to these matrices are the most effective tool for producing large quantities of CGWs. Tensor product type constructions are the most frequently useful, however we will begin with the simplest of recursive constructions of a direct sum type. #### 3.5.1 Direct sum type constructions We define the direct sum of an \(m\times m\) matrix \(A\) and an \(n\times n\) matrix \(B\) to be \[A\oplus B=\left[\begin{array}{cc}A&0_{m,n}\\ 0_{n,m}&B\end{array}\right].\] The following is immediate. **Proposition 3.11**.: _If \(A\in\mathrm{CGW}(m,w;k_{1})\) and \(B\in\mathrm{CGW}(n,w;k_{2})\), then \(A\oplus B\in\mathrm{CGW}(m+n,w;k)\) where \(k=\mathrm{lcm}(k_{1},k_{2})\)._ The following is also quite straightforward. **Proposition 3.12**.: _Let \(A\in\mathrm{CGW}(n,w;k)\). 
Then the matrix_ \[\left[\begin{array}{cc}A&I_{n}\\ -I_{n}&A^{*}\end{array}\right]\] _is a \(\mathrm{CGW}(2n,w+1;k)\) if \(k\) is even, or a \(\mathrm{CGW}(2n,w+1;2k)\) if \(k\) is odd._ The following generalizes Proposition 3.12. **Proposition 3.13**.: _Let \(A\in\mathrm{CGW}(n,w_{1};k_{1})\) and \(B\in\mathrm{CGW}(n,w_{2};k_{2})\) be such that \(AB=BA\). Then the matrix_ \[\left[\begin{array}{cc}A&B\\ -B^{*}&A^{*}\end{array}\right]\] _is a \(\mathrm{CGW}(2n,w;k)\) where \(w=w_{1}+w_{2}\) and \(k=\mathrm{lcm}(k_{1},k_{2},2)\)._ The conditions for Proposition 3.13 are met, for example, when \(A\) and \(B\) are \(\zeta\)-circulant for some \(\zeta\in\mathcal{U}_{k_{1}}\cap\mathcal{U}_{k_{2}}\), although this would just be a very special case of the construction of Theorem 3.7. #### 3.5.2 Tensor product type constructions One of the simplest recursive constructions is via the Kronecker product. For any two matrices \(A\) and \(B\) having entries with defined multiplication, the Kronecker product of \(A\) and \(B\) is defined to be the block matrix \[A\otimes B=[a_{ij}B].\] It is a simple exercise to verify the following. **Proposition 3.14**.: _Let \(A\in\mathrm{CGW}(n_{1},w_{1};k_{1})\) and \(B\in\mathrm{CGW}(n_{2},w_{2};k_{2})\). Then \(A\otimes B\in\mathrm{CGW}(n,w;k)\) where \(n=n_{1}n_{2}\), \(w=w_{1}w_{2}\) and \(k=\mathrm{lcm}(k_{1},k_{2})\)._ This tensor product construction is generalised by a construction of Dita [26], originally proposed for complex Hadamard matrices. For this construction we require a matrix \(A\in\mathrm{CGW}(n,w_{a};k_{a})\) and a set of matrices \(\{B_{1},\ldots,B_{n}\}\) with each \(B_{i}\in\mathrm{CGW}(m,w_{b,i};k_{b,i})\). Note that each \(B_{i}\) must be of the same order. **Proposition 3.15** (cf. Proposition 2 [26]).: _Let \(A,B_{1},\ldots,B_{n}\) be as described above. Then_ \[D=\left[\begin{array}{cccc}a_{11}B_{1}&a_{12}B_{2}&\cdots&a_{1n}B_{n}\\ a_{21}B_{1}&a_{22}B_{2}&\cdots&a_{2n}B_{n}\\ \vdots&\vdots&\ddots&\vdots\\ a_{n1}B_{1}&a_{n2}B_{2}&\cdots&a_{nn}B_{n}\end{array}\right]\] _is a \(\operatorname{CGW}(mn,w;k)\) where \(w=w_{a}\left(\sum_{i=1}^{n}w_{b,i}\right)\) and \(k=\operatorname{lcm}(k_{a},k_{b,1},\ldots,k_{b,n})\)._ _Remark 3.16_.: If the matrices \(B_{1},\ldots,B_{n}\) are all equal, then the matrix \(D\) of Proposition 3.15 is the Kronecker product \(A\otimes B_{1}\). A construction of complex Hadamard matrices was introduced by McNulty and Weigert that is a little more general, see [45, Theorem 3]. It is difficult to see how this construction might be simply generalized for building weighing matrices with parameters not catered for by Proposition 3.15, but in theory it could be so we include it for completeness. The key components are two sets of \(q\times q\) unitary matrices \(\{L_{1},\ldots,L_{p}\}\) and \(\{K_{1},\ldots,K_{p}\}\) such that \(K_{i}^{*}L_{j}\) is complex Hadamard for all \(1\leq i,j\leq p\), and another complex Hadamard matrix \(M=[m_{ij}]\) of order \(p\). The result is a \(pq\times pq\) complex Hadamard matrix. 
The matrix constructed takes the form \[H=\left[\begin{array}{cccc}m_{11}K_{1}^{*}L_{1}&m_{12}K_{1}^{*}L_{2}&\cdots&m _{1p}K_{1}^{*}L_{p}\\ m_{21}K_{2}^{*}L_{1}&m_{22}K_{2}^{*}L_{2}&\cdots&m_{2p}K_{2}^{*}L_{p}\\ \vdots&\vdots&\ddots&\vdots\\ m_{p1}K_{p}^{*}L_{1}&m_{p2}K_{p}^{*}L_{2}&\cdots&m_{pp}K_{p}^{*}L_{p}\end{array} \right].\] Restricting entries to \(k^{\mathrm{th}}\) roots of unity is not a complete barrier to constructing Butson Hadamard matrices and several matrices not of Dita type are constructed in [45]. One of the most useful aspects of this construction is the freedom to use sets of mutually unbiased bases of \(\mathbb{C}^{q}\) for the unitary matrices \(\{L_{1},\ldots,L_{p}\}\) and \(\{K_{1},\ldots,K_{p}\}\). The obvious drawback to these tensor product type constructions is that the order is typically the product of the orders of the factors, and as a consequence there are significant restrictions on order that can be achieved with this approach. This issue is particularly apparent when constructing real Hadamard matrices. A product of real Hadamard matrices of order \(4n\) and \(4m\) is necessarily of order divisible by \(16\), so to produce a Hadamard matrix of order \(4k\) for any odd number \(k\), a construction like this fails. Constructions that mitigate this issue are rare, but a method known as weaving, introduced by Craigen [22], does just this. In general this construction involves orthogonal designs, but it was employed specifically for constructing weighing matrices in [16] and it settled some previously undecided existence questions at reasonably small orders and weights. #### 3.5.3 Weaving The ideas in this section are drawn from the thesis of Craigen [22] but some have since appeared in other published works. The idea of weaving is to knit together weighing matrices of different orders to form a larger one, without relying on a tensor product type construction that forces the order to be the product of the orders of its constituents. The version of this next Theorem that appears in [16] refers only to real weighing matrices, but we give a more general version that applies to CGWs, the proof of which is essentially identical, and is constructive. **Theorem 3.17** (cf. [16, Theorem 1]).: _Let \(M=(m_{ij})\) be a \(m\times n\)\((0,1)\)-matrix with row sums \(r_{1},\ldots,r_{m}\) and column sums \(c_{1},\ldots,c_{n}\). If for fixed integers \(a\) and \(b\) there are matrices \(A_{i}\in\operatorname{CGW}(r_{i},a;k_{1})\) and \(B_{j}\in CGW(c_{j},b;k_{2})\) for \(1\leq i\leq m\) and \(1\leq j\leq n\), then there is a \(\operatorname{CGW}(\sigma(M),ab;k)\) where_ \[\sigma(M)=\sum_{i=1}^{m}r_{i}=\sum_{j=1}^{n}c_{j},\] _and \(k=\operatorname{lcm}(k_{1},k_{2})\)._ Proof.: Construct \(W=(W_{ij})\) as an \(m\times n\) array of blocks as follows. If \(m_{ij}=0\) set \(W_{ij}=0_{r_{i}\times c_{j}}\). If \(m_{ij}=1\), then \(m_{ij}\) is the \(p^{\mathrm{th}}\) non-zero entry in the \(i^{\mathrm{th}}\) row, and the \(q^{\mathrm{th}}\) non-zero entry in the \(j^{\mathrm{th}}\) column of \(M\) for some \(p=p(i,j)\) and \(q=q(i,j)\). Denote the \(p^{\mathrm{th}}\) column of \(A_{i}\) and the \(q^{\mathrm{th}}\) row of \(B_{j}\) by \(A_{i}[\cdot,p]\) and \(B_{j}[q,\cdot]\) respectively, and set \(W_{ij}=A_{i}[\cdot,p]B_{j}[q,\cdot]\), the rank one \(r_{i}\times c_{j}\) matrix. Then \(W\) is a square matrix of order \(\sigma(M)\), and the entries are in \(\mathcal{U}_{k}\). It remains to verify that \(WW^{*}=abI_{\sigma(M)}\). 
Since \(W\) is an \(m\times n\) array of blocks, the matrix \(WW^{*}\) is expressed as an \(m\times m\) array of blocks with the \((i,j)\) block given by \[\sum_{\ell=1}^{n}W_{i\ell}W_{j\ell}^{*} =\sum_{\{\ell\,:\,m_{i\ell}=m_{j\ell}=1\}}A_{i}[-,p(i,\ell)]B_{i }[q(i,\ell),-](B_{j}[q(j,\ell),-])^{*}(A_{j}[-,p(j,\ell)])^{*}\] \[=\sum_{\{\ell\,:\,m_{i\ell}=1\}}\delta_{ij}bA_{i}[-,p(i,\ell)](A_ {j}[-,p(j,\ell)])^{*}\] \[=\delta_{ij}b\sum_{p=1}^{r_{i}}A_{i}[-,p](A_{j}[-,p])^{*}\] \[=\delta_{ij}abI_{r_{i}},\] where \(\delta_{ij}=1\) if \(i=j\) and \(\delta_{ij}=0\) otherwise. It follows that \(W\) is a weighing matrix. The conditions of Theorem 3.17 are such that the weight of the constructed matrix is the product of the two distinct weights of the components, however the order is no longer tied to this condition. The benefit of this is immediately demonstrated in [16] by the construction of a \(W(66,36)\) using four real weighing matrices - one from each of \(W(13,9)\), \(W(10,9)\), \(W(6,4)\) and \(W(4,4)\) - and a \(6\times 13\) matrix \(M\) with the required row and column sums. This settled the then open question of existence of a \(W(66,36)\). **Example 3.18**.: We can use this technique to build a \(\operatorname{CGW}(15,9;3)\), which cannot be constructed through a tensor product. Let \[M=\left[\begin{array}{cccc}1&1&1&0&0\\ 0&1&1&1&0\\ 0&0&1&1&1\\ 1&0&0&1&1\\ 1&1&0&0&1\end{array}\right]\] and let \[A_{i}=B_{j}=\left[\begin{array}{ccc}1&1&1\\ 1&\omega&\omega^{2}\\ 1&\omega^{2}&\omega\end{array}\right]\] for all \(1\leq i,j\leq 5\), where \(\omega=\zeta_{3}\). Then via the method outlined in the proof of Theorem 3.17 we obtain the matrix \[W=\left[\begin{array}{ccccccccccccc|ccccc}1&1&1&1&1&1&1&1&1&0&0&0&0&0&0\\ 1&1&1&\omega&\omega&\omega&\omega^{2}&\omega^{2}&\omega^{2}&0&0&0&0&0&0\\ 1&1&\omega^{2}&\omega^{2}&\omega^{2}&\omega&\omega&\omega&0&0&0&0&0&0\\ \hline 0&0&0&1&\omega&\omega^{2}&1&\omega&\omega^{2}&1&1&1&0&0&0\\ 0&0&0&1&\omega&\omega^{2}&\omega&\omega^{2}&1&\omega^{2}&\omega^{2}&0&0&0\\ 0&0&0&1&\omega&\omega^{2}&\omega^{2}&1&\omega&\omega&\omega&\omega&0&0&0\\ \hline 0&0&0&0&0&0&1&\omega^{2}&\omega&1&\omega&\omega^{2}&1&1&1\\ 0&0&0&0&0&0&1&\omega^{2}&\omega&\omega&\omega^{2}&1&\omega^{2}&\omega^{2}& \omega^{2}\\ 0&0&0&0&0&1&\omega^{2}&\omega&\omega^{2}&1&\omega&\omega&\omega&\omega\\ \hline 1&\omega&\omega^{2}&0&0&0&0&0&1&\omega^{2}&\omega&1&\omega&\omega^{2}\\ 1&\omega&\omega^{2}&0&0&0&0&0&0&\omega&1&\omega^{2}&\omega^{2}&1\\ 1&\omega^{2}&\omega&0&0&0&0&0&\omega^{2}&\omega&1&\omega&\omega^{2}&1\\ \hline 1&\omega^{2}&\omega&1&\omega^{2}&\omega&0&0&0&0&0&1&\omega^{2}&\omega\\ 1&\omega^{2}&\omega&\omega&1&\omega^{2}&0&0&0&0&0&\omega^{2}&\omega&1\\ 1&\omega^{2}&\omega&\omega^{2}&\omega&1&0&0&0&0&0&\omega&1&\omega^{2}\end{array}\right]\] which is a \(\mathrm{CGW}(15,9;3)\). Also building on the work in [22], Craigen and de Launey developed on an idea similar to weaving with the intention of constructing circulant and other group developed CGWs [18]. Being group developed is an added condition that we don't wish to apply in his paper so we refer the interested reader to the article for more details. However a method of weaving together different objects to form CGWs without necessarily having this property is also described, and a special case of it is used to construct the group developed matrices. Fundamental to this construction is the general concept of an orthogonal set. 
An _orthogonal set_ of weight \(w\) is a set of \(v\times v\) matrices \(\{A_{1},\ldots,A_{n}\}\) such that \(A_{i}A_{j}^{*}=0\) for all \(i\neq j\), and there exist positive integers \(\lambda_{1},\ldots,\lambda_{n}\) such that \[\sum_{i=1}^{n}\lambda_{i}A_{i}A_{i}^{*}=wI_{n}.\] The matrices in an orthogonal set can be woven together by summing together the Kronecker products \(P_{s}\otimes A_{s}\) provided that \(\{P_{1},\ldots,P_{n}\}\) is a set of disjoint \(N\times N\) monomial matrices, where \(n\leq N\). The result is a \(\mathrm{CGW}(nN,w;k)\), if the constituent parts all have entries in \(\mathcal{U}_{k}\). Note that the matrices in the orthogonal set are not necessarily CGW matrices, rather the weight of the set is \(w\) if there are exactly \(w\) non-zero entries in the concatenation of the \(r^{\mathrm{th}}\) row or column of the matrices \(A_{1},\ldots,A_{n}\), for each \(1\leq r\leq n\). This gives a lot of freedom to the construction. ### Tables of existence Harada and Munemasa classified weighing matrices of order up to \(15\) in [37], building on earlier work of Chan, Rogers and Seberry in [13]. As such, the question of existence or non-existence of real weighing matrices of order up \(15\) is known in all cases. Using a combination of the non-existence results of Section 2, the constructions of Section 3, we attempt to complete tables showing either existence or non-existence of matrices in \(\mathrm{CGW}(n,w;k)\) for all \(n\leq 15\), \(w\leq n\), and \(k\in\{2,3,4,5,6\}\). These Tables are presented in Appendix A, with an entry E indicating existence, and N indicating non-existence. Some entries in these tables remain unresolved, they are indicated by a question mark. In Table 2, the \(k=2\) case is reported, which just compiles results from [37]. Tables 3, 4, 5, 6 report on the \(k=3,\,4,\,5,\,6\) cases respectively. The as yet undetermined entries which are marked with a? are all parameters that meet the known existence criteria. In each case, should a CGW exist, we can usually say something about the support matrix. If a \(\mathrm{CGW}(12,9;3)\) exists, then up to permutation equivalence its support matrix takes the form \((J_{3}-I_{3})\otimes J_{3}\). If a \(\mathrm{CGW}(13,9;3)\) exists, then its support matrix must be a \(\mathrm{SBIBD}(13,9,6)\), which is known to exist. If a \(\mathrm{CGW}(15,7;3)\) exists, then its support matrix must be a \(\mathrm{SBIBD}(15,7,3)\) which also exists (there is a Hadamard design with these parameters). In these cases, we need to solve the lifting problem. The restrictions for small \(n\) in the \(k=5\) case are such that very little extra analysis is required and almost all parameters are ruled out. In Table 4, there are only occasionally parameters for which a \(\mathrm{CGW}(n,w;4)\) exists and a \(\mathrm{CGW}(n,w;2)\) does not. The first we encounter is a \(\mathrm{CGW}(10,6;4)\), which can be built from a \(\mathrm{WPPG}(\mathcal{U}_{4},5,1,6)\) where \(a=(1,\zeta_{4},1,0,0)\) and \(b=(1,-1,-1,0,0)\). ## 4 Application: Quantum error-correcting codes A classical linear \([n,k,d]_{q}\)_-code_\(C\) of _minimum distance_\(d\) is a \(k\)-dimensional subspace of \(\mathbb{F}_{q}^{n}\), the elements of which are called _codewords_, such that the minimum Hamming distance between any two distinct codewords is \(d\). The _rate_ of \(C\) is the ratio \(\frac{k}{n}\). 
For fixed parameters \(n\) and \(k\) a code where \(d\) attains the theoretical upper bound is called _optimal_, and one where \(d\) does not attain the sharpest known bound, but attains the highest value of any known code, is called _best known_. We refer the reader to [39] for a complete background in coding theory and its applications, and we refer to the expertly maintained webpage at [33] for up-to-date links to research and tables displaying the best known linear codes for several parameters. Let \(C\) be an \([n,k]_{q^{2}}\) code. The _Hermitian inner product_ of vectors \(x,y\in\mathbb{F}_{q^{2}}^{n}\) is defined by \[\langle x,y\rangle=\sum_{i=0}^{n-1}x_{i}y_{i}^{q}.\] The _Hermitian dual_ of \(C\) is the code \[C^{H}=\{x\in\mathbb{F}_{q^{2}}^{n}\ \mid\ \langle x,y\rangle=0\,\forall\,y\in C\}.\] The code \(C\) is _Hermitian self-orthogonal_ if \(C\subseteq C^{H}\), and _Hermitian self-dual_ if \(C=C^{H}\). Quantum codes are to quantum information theory what classical codes are to information theory. However, the problem is inherently more difficult due to the postulates of quantum mechanics. We cannot duplicate information by the No-Cloning Theorem [58], and the observation of a qubit forces it to collapse to a binary state. Shor's solution [54] is to spread the information of one qubit across the entangled state of several qubits. The following definition is taken from [12]: A _quantum error-correcting code_ is defined to be a unitary mapping (encoding) of \(k\) qubits into a subspace of the quantum state space of \(n\) qubits such that if any \(t\) of the qubits undergo arbitrary decoherence, not necessarily independently, the resulting \(n\) qubits can be used to faithfully reconstruct the original quantum state of the \(k\) encoded qubits. Unlike classical codes, quantum codes are usually linear [46]. For a quantum code with parameters \(n\), \(k\) and \(d\), we typically denote it as an \([[n,k,d]]_{q}\)-code. Shor's model in [54] requires the information of one qubit to be spread across nine qubits, so the rate is \(1/9\), and it protects against just one error. In [12] Calderbank and Shor use binary linear codes to build improved quantum codes, and later produce quantum codes capable of correcting multiple errors using group theoretic ideas in [10]. In [11] it is shown how, given a Hermitian self-orthogonal \([n,k]_{4}\)-linear code \(C\) such that no codeword in \(C^{\perp}\setminus C\) has weight less than \(d\), one can construct a quantum \([[n,n-2k,d]]_{2}\)-code. Rains [49] later established that there are similar applications to Hermitian self-orthogonal \([n,k]_{q^{2}}\) codes. The following is a restatement of [40, Corollary 19]. See also [2]. **Theorem 4.1**.: _If there exists a linear Hermitian self-orthogonal \([n,k]_{q^{2}}\) code \(C\) such that the minimum weight of \(C^{H}\) is \(d\), then there exists an \([[n,n-2k,\geq d]]_{q}\) quantum code._ _Remark 4.2_.: A quantum code can be \(0\)-dimensional, and so it is possible to construct a quantum \([[n,0,d]]_{q}\)-code given a Hermitian self-dual \([n,n/2,d]_{q^{2}}\) code. See [44] for details. Applications of these results have led to many of the best known constructions of quantum error-correcting codes, and so it is pertinent to study the construction of Hermitian self-orthogonal codes over \(\mathbb{F}_{q^{2}}\). With some restrictions, CGWs provide the perfect tool. To begin, we observe that when \(k=q+1\), we can translate the set of \(k^{\rm th}\) roots of unity into \(\mathbb{F}_{q^{2}}\), because \(k\) divides \(q^{2}-1\).
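As a quick computational illustration of this translation (anticipating the propositions and examples that follow), the short Python sketch below takes \(q=2\) and \(k=3\), identifies the cube roots of unity with the powers of a primitive cube root of unity in \(\mathbb{F}_{4}\), and checks Hermitian self-orthogonality of the image of a \(\mathrm{CGW}(5,4;3)\). The circulant matrix used here is an illustrative assumption and need not coincide with the matrix of Example 3.5; the field arithmetic and the final code parameters are computed directly.

```python
# Sketch: translate cube roots of unity into F_4 (q = 2, k = 3) and check that
# the image of a CGW(5,4;3) generates a Hermitian self-orthogonal code.
# The circulant first row below is an illustrative choice, not necessarily the
# matrix of Example 3.5.
from itertools import product

# F_4 = {0, 1, a, a+1} encoded as 0, 1, 2, 3, with a^2 = a + 1.
MUL = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 3, 1], [0, 3, 1, 2]]
CONJ = [0, 1, 3, 2]          # the conjugation x -> x^q = x^2 (Frobenius)
add = lambda x, y: x ^ y     # addition in characteristic 2

def herm(x, y):
    """Hermitian inner product sum_i x_i * y_i^q over F_4."""
    s = 0
    for xi, yi in zip(x, y):
        s = add(s, MUL[xi][CONJ[yi]])
    return s

def weight(v):
    return sum(1 for x in v if x)

# f(0) = 0 and f(zeta_3^j) = a^j applied to the circulant CGW(5,4;3) with
# first row (0, 1, zeta_3, zeta_3, 1) gives the F_4-matrix below.
first = [0, 1, 2, 2, 1]
W = [first[5 - i:] + first[:5 - i] for i in range(5)]

# All rows are pairwise Hermitian orthogonal, and self-orthogonal because the
# weight w = 4 is divisible by the characteristic 2.
assert all(herm(W[i], W[j]) == 0 for i in range(5) for j in range(5))

# The F_4-span of the rows: expected parameters [5,2,4]_4.
code = set()
for coeffs in product(range(4), repeat=5):
    word = [0] * 5
    for c, row in zip(coeffs, W):
        word = [add(w, MUL[c][r]) for w, r in zip(word, row)]
    code.add(tuple(word))
k = next(d for d in range(6) if 4 ** d == len(code))
print("C   :", (5, k, min(weight(w) for w in code if any(w))))

# The Hermitian dual C^H: expected parameters [5,3,3]_4.
dual = [v for v in product(range(4), repeat=5)
        if all(herm(v, row) == 0 for row in W)]
k_dual = next(d for d in range(6) if 4 ** d == len(dual))
print("C^H :", (5, k_dual, min(weight(v) for v in dual if any(v))))
```

The printed parameters, \([5,2,4]_{4}\) for \(C\) and \([5,3,3]_{4}\) for \(C^{H}\), match the data used for the \([[5,1,3]]_{2}\) quantum code discussed in Example 4.7 below.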
The following Propositions formalize and generalize some observations noted in [23]. **Proposition 4.3**.: _Let \(q\) be a prime power, let \(k=q+1\) and let \(\alpha\) be a primitive \(k^{\rm th}\) root of unity in \(\mathbb{F}_{q^{2}}\). Define the homomorphism \(f:\mathcal{U}_{k}\to\mathbb{F}_{q^{2}}\) so that \(f(0)=0\) and \(f(\zeta_{k}^{j})=\alpha^{j}\) for \(j=0,1,\ldots,q\). Let \(x\) be a \(\mathcal{U}_{k}\)-vector of length \(n\) and let \(f(x)=[f(x_{i})]_{0\leq i\leq n-1}\). Then for any \(\mathcal{U}_{k}\)-vectors \(x\) and \(y\),_ \[\langle x,y\rangle=0\quad\Longrightarrow\quad\langle f(x),f(y)\rangle_{H}=0.\] Proof.: By construction, \(f(\zeta_{k}^{j})\) is a \(k^{\rm th}\) root of unity in the field \(\mathbb{F}_{q^{2}}\) for all \(0\leq j\leq q\). Observe that \[f(\omega)^{q}=\alpha^{q}=\alpha^{-1}=f(\omega^{*}),\] for all \(\omega\in\mathcal{U}_{k}\). Then for any \(\mathcal{U}_{k}\)-vectors \(x\) and \(y\), \[\langle f(x),f(y)\rangle_{H} =\sum_{i=0}^{n-1}f(x_{i})f(y_{i})^{q}\] \[=\sum_{i=0}^{n-1}f(x_{i})f(y_{i}^{*})\] \[=\sum_{i=0}^{n-1}f(x_{i}y_{i}^{*})\] \[=f^{+}\left(\sum_{i=0}^{n-1}x_{i}y_{i}^{*}\right)\] \[=f^{+}(\langle x,y\rangle).\] Thus if \(\langle x,y\rangle=\sum_{j=0}^{k-1}c_{j}\zeta_{k}^{j}\), then \(\langle f(x),f(y)\rangle_{H}=\sum_{j=0}^{k-1}c_{j}\alpha^{j(q-1)}\), and so \[\langle x,y\rangle=0\quad\Longrightarrow\quad\langle f(x),f(y)\rangle_{H}=0.\] The following Proposition is also now immediate. **Proposition 4.4**.: _Let \(W\) be a \(\mathrm{CGW}(n,w;q+1)\) for some prime power \(q\) and let \(f\) be the homomorphism defined in Proposition 4.3, with \(f(W)=[f(W_{ij})]_{1\leq i,j,\leq n}\). If \(w\) is divisible by the characteristic of \(\mathbb{F}_{q^{2}}\), then \(f(W)\) generates a Hermitian self-orthogonal \(F_{q^{2}}\)-code._ Proof.: Let \(x\) and \(y\) be distinct rows of \(W\). Then \(\langle x,y\rangle=0\) and so \(\langle f(x),f(y)\rangle_{H}=0\) by Proposition 4.3. Further, because \(x\) is a row of \(W\) and so by design, each entry \(\alpha\) in \(x\) has the property that \(\alpha^{*}=\alpha^{-1}\), then \(\langle f(x),f(x)\rangle_{H}=0\) because \(w\) is divisible by the characteristic of \(\mathbb{F}_{q^{2}}\). _Remark 4.5_.: The necessity that a row of \(W\) is of weight divisible by the characteristic of \(\mathbb{F}_{q^{2}}\) does not extend to codewords in general. By construction, the entries \(\alpha\) in a row of \(f(W)\) are all such that \(\alpha^{*}=\alpha^{q}\), and so \(\langle f(\alpha),f(\alpha)\rangle=w\). Other codewords obtained by a linear combination of the rows of \(f(W)\) can contain field elements as entries that may not have this property. However, the fact that a linear combination of the rows of \(f(W)\) is orthogonal to itself is guaranteed by the properties of an inner product, and the self-orthogonality of the rows of \(f(W)\) which form a basis. As a consequence of Proposition 4.4 we can use a \(\mathrm{CGW}(n,w;k)\) with appropriate weight to build quantum codes for any \(k=q+1\) where \(q\) is a prime power, which includes any \(k\in\{3,4,5,6,8,9,10\}\). _Remark 4.6_.: This implication of Proposition 4.3 is one directional, and the converse does not hold. Nevertheless, this relationship is crucial to the classification of matrices in \(\mathrm{BH}(18,3)\) via Hermitian self-dual codes over \(\mathbb{F}_{4}\) in [36]. The propositions above can now be implemented to construct quantum codes. **Example 4.7**.: Let \(W\) be the \(\mathrm{CGW}(5,4;3)\) obtained using Berman's construction in Example 3.5. 
The code \(C\) generated by \(W\) is a \([5,2,4]_{4}\) code, and the Hermitian dual \(C^{H}\) is a \([5,3,3]_{4}\) code. Applying Theorem 4.1, we construct a \([[5,1,3]]_{2}\) quantum error-correcting code, which is optimal. Since the construction of quantum codes is generalized by Theorem 4.1, our intention now is to generalize the propositions above. **Example 4.8**.: Let \(W\) be the \(\mathrm{CGW}(10,9;4)\) obtained using the Seberry and Whiteman construction in Example 3.10. The code \(C\) generated by \(W\) is a \([10,5,4]_{9}\) code, and is Hermitian self-dual. We apply Theorem 4.1 and construct a \([[10,0,4]]_{3}\) quantum error-correcting code. Any \(\mathrm{BH}(n,4)\) where \(3\mid n\) may be used to construct ternary quantum codes in this manner. This is particularly useful because \(\mathrm{BH}(n,4)\) matrices are plentiful. For example, there are exactly 319 equivalence classes in \(\mathrm{BH}(12,4)\); see [43, Theorem 6.1]. **Example 4.9**.: For example, let \[H=\left[\begin{array}{cccccc}1&i&1&1&1&-\\ 1&1&i&-&1&1\\ i&1&1&1&-&1\\ 1&-&1&-&-&i\\ 1&1&-&i&-&-\\ -&1&1&-&i&-\end{array}\right].\] The code \(C\) generated by \(H\) is a Hermitian self-dual \([6,3,4]_{9}\) code, which constructs a \([[6,0,4]]_{3}\) quantum error-correcting code. **Example 4.10**.: By Proposition 4.4, we can use a \(\mathrm{CGW}(n,w;6)\) with \(w\) divisible by \(5\) to construct a Hermitian self-orthogonal code over \(\mathbb{F}_{25}\). As an example, we take the \(\mathrm{BH}(25,6)\) constructed via [55, Theorem 1.4.41], and construct a \([25,9,13]_{25}\) Hermitian self-orthogonal code \(C\). The Hermitian dual \(C^{H}\) is a \([25,16,6]_{25}\) code, and so we construct a \([[25,7,6]]_{5}\) quantum code. This has larger minimum weight than the current best known \([[25,7]]_{2}\) quantum code, which has minimum weight \(5\), according to [33]. ## 5 Computational results In Table 1 we list the parameters of the quantum codes constructed that are, according to the information available to us, at least as good as or better than the best known quantum codes. It is difficult to compare the codes constructed to others that may be known, as there does not appear to be any database comparable to [33] that caters for quantum \(q\)-ary codes in general. Recently, the authors of [3] have introduced a database, but at least for now it is not completely populated for all parameters. At the time of writing, the only \([[n,k]]_{q}\) code in this database that is comparable to a code in Table 1 is a \([[24,0,6]]_{3}\) code; we found a \([[24,0,9]]_{3}\) code. For this reason, the parameters of codes constructed here are often compared to the best known \([[n,k]]_{2}\) codes listed in [33]. _Remark 5.1_.: All of the \([[n,k]]_{q}\) codes listed in Table 1 have a minimum distance at least as large as any known \([[n,k]]_{2}\) code according to [33]. The codes marked with an asterisk listed in Table 1 are examples of \([[n,k]]_{q}\) quantum codes with a minimum distance that surpasses the known upper bound for a corresponding \([[n,k]]_{2}\) code. The matrices used to build the codes in Table 1 come from a variety of sources, many of which are from constructions outlined in this paper. Many of the source matrices are Butson matrices, taken from existing databases such as the online database of complex Hadamard matrices at [8]. ### Concluding remarks The computations of this section are not the result of exhaustive searches, as we do not have access to any convenient database of matrices to search through.
Nor have we attempted to use any coding theory methods to either extend the codes we found, or to search for good subcodes. The codes with parameters listed in Table 1 are the results of "proof of concept" experimentation using matrices we could either construct using some of the methods described in this paper, or matrices that could be easily accessed through online sources. The purpose is to demonstrate that good quantum codes can be constructed. A complete computational survey of codes constructible with these tools is beyond the scope of this paper, but the evidence presented here suggests that many good codes may be found with this approach.

\begin{table}
\begin{tabular}{c|c|c|c} Source matrix & Self orthogonal \([n,k,d]_{q^{2}}\) code & New \([[n,k,d]]_{q}\) code & Best known \([[n,k]]_{2}\) from [33] \\ \hline
\(\mathrm{CGW}(5,4;3)\) & \([5,2,4]_{4}\) & \([[5,1,3]]_{2}\) & \([[5,1,3]]_{2}\) \\
\(\mathrm{BH}(6,4)\) & \([6,3,4]_{9}\) & \([[6,0,4]]_{3}\) & \([[6,0,4]]_{2}\) \\
\(\mathrm{BH}(9,10)\) & \([9,4,6]_{81}\) & \([[9,1,5]]_{9}^{*}\) & \([[9,1,3]]_{2}\) \\
\(\mathrm{CGW}(10,9;4)\) & \([10,5,4]_{9}\) & \([[10,0,4]]_{3}\) & \([[10,0,4]]_{2}\) \\
\(\mathrm{BH}(10,6)\) & \([10,5,5]_{25}\) & \([[10,0,5]]_{5}^{*}\) & \([[10,0,4]]_{2}\) \\
\(\mathrm{BH}(10,5)\) & \([10,5,6]_{16}\) & \([[10,0,6]]_{4}^{*}\) & \([[10,0,4]]_{2}\) \\
\(\mathrm{CGW}(12,10;6)\) & \([12,6,6]_{25}\) & \([[12,0,6]]_{5}\) & \([[12,0,6]]_{2}\) \\
\(\mathrm{BH}(14,8)\) & \([14,7,8]_{49}\) & \([[14,0,8]]_{7}^{*}\) & \([[14,0,6]]_{2}\) \\
\(\mathrm{BH}(18,4)\) & \([18,9,8]_{9}\) & \([[18,0,8]]_{3}\) & \([[18,0,8]]_{2}\) \\
\(\mathrm{BH}(20,6)\) & \([20,10,8]_{25}\) & \([[20,0,8]]_{5}\) & \([[20,0,8]]_{2}\) \\
\(\mathrm{BH}(20,5)\) & \([20,9,8]_{16}\) & \([[20,2,6]]_{4}\) & \([[20,2,6]]_{2}\) \\
\(\mathrm{CGW}(20,9;4)\) & \([20,8,9]_{9}\) & \([[20,4,6]]_{3}\) & \([[20,4,6]]_{2}\) \\
\(\mathrm{CGW}(21,16;3)\) & \([21,3,16]_{4}\) & \([[21,15,3]]_{2}\) & \([[21,15,3]]_{2}\) \\
\(\mathrm{BH}(24,4)\) & \([24,12,9]_{9}\) & \([[24,0,9]]_{3}\) & \([[24,0,8]]_{2}\) \\
\(\mathrm{BH}(25,6)\) & \([25,9,13]_{25}\) & \([[25,7,6]]_{5}\) & \([[25,7,5]]_{2}\) \\
\(\mathrm{CGW}(26,25;6)\) & \([26,5,22]_{25}\) & \([[26,16,6]]_{5}^{*}\) & \([[26,16,4]]_{2}\) \\
\(\mathrm{BH}(30,4)\) & \([30,15,12]_{9}\) & \([[30,0,12]]_{3}\) & \([[30,0,12]]_{2}\) \\
\(\mathrm{BH}(36,3)\) & \([36,18,12]_{4}\) & \([[36,0,12]]_{2}\) & \([[36,0,12]]_{2}\) \\
\(\mathrm{BH}(42,4)\) & \([42,21,14]_{9}\) & \([[42,0,14]]_{3}\) & \([[42,0,12]]_{2}\) \\
\end{tabular}
\end{table}
Table 1: New quantum codes

Mostly Butson matrices were used as source matrices because they can be easier to find in databases. A large database of CGWs with different parameters would be a worthwhile development. Finally, we note that the Tables in Appendix A below are incomplete, and any contributions to their completion are very welcome. ## Declaration of competing interest The author declares that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ## Acknowledgements The author thanks Rob Craigen, Wolf Holzmann and Hadi Kharaghani for sharing complex Golay sequences computed in [19] which we used to build matrices in \(\mathrm{BH}(n,4)\), and subsequently \([[n,k]]_{3}\) quantum codes.
2309.12649
On the $ψ$-mixing coefficients of Rényi-type maps
Via dependence with complete connections we investigate the $\psi$-mixing coefficients of the sequence $(a_n)_{n \in \mathbb{N}}$ of incomplete quotients and also of the doubly infinite sequence $(\overline{a}_l)_{l \in \mathbb{Z}}$ of extended incomplete quotients of the R\'enyi-type continued fraction expansions. A L\'evy-type approach allows us to obtain good upper bounds for these coefficients.
Gabriela Ileana Sebe, Dan Lascu
2023-09-22T06:41:01Z
http://arxiv.org/abs/2309.12649v1
# On the \(\psi\)-mixing coefficients of Renyi-type maps ###### Abstract Via dependence with complete connections we investigate the \(\psi\)-mixing coefficients of the sequence \((a_{n})_{n\in\mathbb{N}}\) of incomplete quotients and also of the doubly infinite sequence \((\overline{a}_{l})_{l\in\mathbb{Z}}\) of extended incomplete quotients of the Renyi-type continued fraction expansions. A Levy-type approach allows us to obtain good upper bounds for these coefficients. **Keywords** Renyi-type continued fractions \(\cdot\) incomplete quotients \(\cdot\) natural extension \(\cdot\) \(\psi\)-mixing coefficients. **Mathematics Subject Classification** Primary 11J70 \(\cdot\) 11K50; Secondary 60J10 ## 1 Introduction The regular continued fraction (RCF) expansions of real numbers have long been known as interesting and fruitful because their digits are closely connected to a dynamical system with nice mixing properties and an explicit invariant measure. For many dynamical systems, it is possible to prove the existence of an invariant measure; however, there are few systems for which this measure is explicitly known. We briefly recall some facts about RCFs. Every irrational number \(x\in I:=[0,1]\) can be uniquely expressed as an infinite continued fraction of the form \[x=\frac{1}{d_{1}(x)+\frac{1}{d_{2}(x)+\frac{1}{d_{3}(x)+\ddots}}}=:[d_{1}(x), d_{2}(x),d_{3}(x),\ldots], \tag{1.1}\] where the sequence \((d_{n}(x))_{n\geq 1}\) consists of positive integers which we refer to as RCF digits or incomplete quotients of \(x\). Using the Gauss map \(G(x):=1/x\,(\mathrm{mod}\,1)\), \(x\in I\), the digits are recursively given by \(d_{1}(x)=\left\lfloor\frac{1}{x}\right\rfloor\), where \(\left\lfloor\cdot\right\rfloor\) denotes the floor function, and \(d_{n+1}(x)=d_{1}\left(G^{n}(x)\right),n\geq 1\). The Gauss measure, \(\gamma([0,x])=\log(1+x)/\log 2\), is invariant under the Gauss map. Equipped with the Gauss measure, we may think of \((I,\mathcal{B}_{I})\) as a probability space and the sequence of incomplete quotients \(d_{i}:I\to\mathbb{N}\) as a sequence of random variables. In the case of the RCF, the random variables \((d_{n})\) form a stationary sequence due to the invariance of the Gauss measure with respect to \(G\). The \(d_{n}\)s are known to be \(\psi\)-mixing with respect to the Gauss measure \(\gamma\). This follows from independent work of Kuzmin and Levy in the late 1920's; see, for example, [3] for the proof and a discussion of its history. The results of Kuzmin and Levy were then followed by various improvements by several authors (see [5]). In 2000, Iosifescu [4] gave a more general result, proving that the sequence of incomplete quotients of the RCF expansion is \(\psi\)-mixing under different probability measures. In general, only upper bounds for \(\psi\)-mixing or other dependence coefficients are derived. The problem of finding the exact values of \(\psi\)-mixing coefficients under Gauss' measure or upper bounds of them under a particular family of conditional probability measures of the sequence of RCF digits was solved in [4]. This paper continues a series of papers [9, 10, 11] dedicated to Renyi-type continued fraction expansions. These continued fractions are a particular case of \(u\)-backward continued fractions studied by Grochenig and Haas [1]. Starting from the Renyi map \(R\) [12], Grochenig and Haas define a one-parameter family of interval maps of the form \(T_{u}(x):=\frac{1}{u(1-x)}-\lfloor\frac{1}{u(1-x)}\rfloor\), where \(u>0\) and \(x\in[0,1)\).
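Before turning to the family \(T_{u}\), here is a minimal numerical sketch (in Python) of the digit mechanism just recalled: RCF digits obtained by iterating the Gauss map, together with a direct check that the Gauss measure of \(G^{-1}([0,x])\) equals that of \([0,x]\). It is purely illustrative.

```python
# Sketch: RCF digits via the Gauss map, and a numerical check of the
# G-invariance of the Gauss measure (illustrative only).
import math

def gauss_map(x):
    return (1.0 / x) % 1.0

def rcf_digits(x, n):
    """d_1(x), ..., d_n(x) via d_{n+1}(x) = d_1(G^n(x))."""
    out = []
    for _ in range(n):
        out.append(math.floor(1.0 / x))
        x = gauss_map(x)
    return out

print(rcf_digits(math.sqrt(2) - 1, 8))        # 2, 2, 2, ... since sqrt(2) = [1; 2, 2, 2, ...]
print(rcf_digits((math.sqrt(5) - 1) / 2, 8))  # 1, 1, 1, ... (golden ratio)

def gamma(x):                                 # Gauss measure of [0, x]
    return math.log1p(x) / math.log(2.0)

def gamma_preimage(x, kmax=10**5):
    # G^{-1}([0, x]) is the union over k >= 1 of the intervals [1/(k+x), 1/k]
    return sum(gamma(1.0 / k) - gamma(1.0 / (k + x)) for k in range(1, kmax))

x = 0.37
print(gamma(x), gamma_preimage(x))            # the two values agree up to truncation
```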
As the parameter \(u\) varies in \((0,4)\) there is a viable theory of \(u\)-backward continued fractions, which fails when \(u\geq 4\). The main purpose of Grochenig and Haas was to find an explicit form for the absolutely continuous invariant measure for \(T_{u}\) similar to that of the Gauss measure \(\mathrm{d}x/(x+1)\) for \(G\) and Renyi's measure \(\mathrm{d}x/x\) for \(R:=T_{1}\). They identified in [1] the invariant probability measure for \(T_{u}\) corresponding to the values \(u=1/N\), for positive integers \(N\geq 2\), namely \(\mathrm{d}x/(x+N-1)\). So the map \(R_{N}:=T_{1/N}\), \(N\geq 2\), will be called a _Renyi-type transformation_. In [9] we started an approach to the metrical theory of Renyi-type continued fraction expansions via dependence with complete connections [6]. Using the natural extensions for Renyi-type transformation, we obtained an infinite-order-chain representation \((\overline{a}_{l})_{l\in\mathbb{Z}}\) of the sequence \((a_{n})_{n\in\mathbb{N}_{+}}\) of incomplete quotients of these expansions. Our goal in this paper is to investigate the \(\psi\)-mixing coefficients \(\left(\psi_{\mu}(n)\right)_{n\geq 1}\), where \(\mu\in\mathrm{pr}(I)\), of the incomplete quotients \((a_{n})_{n\in\mathbb{N}_{+}}\) under the invariant probability measure \(\rho_{N}\) of \(R_{N}\) and under a one-parameter family \(\{\rho_{N}^{t},t\in I\}\) of conditional probability measures on \((I,\mathcal{B}_{I})\) inspired by Doeblin [2]. Here \(\mathcal{B}_{I}\) denotes the \(\sigma\)-algebra of all Borel subsets of \(I\). Since the computation of these coefficients becomes forbidding for \(n\geq 3\), we use a Levy-type approach developed in Section 3 (Theorem 3.1) to derive good upper bounds for them whatever \(n\geq 3\). In section 4 we prove that the sequence \((a_{n})_{n\in\mathbb{N}_{+}}\) is \(\psi\)-mixing under \(\rho_{N}\) and \(\rho_{N}^{t}\), \(t\in I\). Also the doubly infinite sequence \((\overline{a}_{l})_{l\in\mathbb{Z}}\) of extended incomplete quotients is \(\psi\)-mixing under the extended invariant measure \(\overline{\rho}_{N}\) and its \(\psi\)-mixing coefficients are equal to the corresponding \(\psi\)-mixing coefficients under \(\rho_{N}\) of \((a_{n})_{n\in\mathbb{N}_{+}}\). ## 2 Prerequisites In this section we briefly present known results about Renyi-type continued fractions (see, e.g., [9]). ### Renyi-type continued fraction expansions Fix an integer \(N\geq 2\). Let the _Renyi-type continued fraction transformation_\(R_{N}:I\to I\) be given by \[R_{N}(x):=\frac{N}{1-x}-\left\lfloor\frac{N}{1-x}\right\rfloor,x\neq 1;\quad R _{N}(1):=0. \tag{2.1}\] For any irrational \(x\in I\), \(R_{N}\) generates a new continued fraction expansion of \(x\) of the form \[x=1-\frac{N}{1+a_{1}-\frac{N}{1+a_{2}-\frac{N}{1+a_{3}-\ddots}}}=:[a_{1},a_{2 },a_{3},\ldots]_{R}. \tag{2.2}\] Here, \(a_{n}\)'s are non-negative integers greater than or equal to \(N\) defined by \(a_{1}:=a_{1}(x)=\left\lfloor\frac{N}{1-x}\right\rfloor,x\neq 1;a_{1}(1)=\infty\) and \(a_{n}:=a_{n}(x)=a_{1}\left(R_{N}^{n-1}(x)\right),n\geq 2\), with \(R_{N}^{0}(x)=x\). The Renyi-type continued fraction in (2.2) can be viewed as a measure preserving dynamical system \((I,\mathcal{B}_{I},R_{N},\rho_{N})\), and for any integer \(N\geq 2\) \[\rho_{N}(A):=\left(\log\left(\frac{N}{N-1}\right)\right)^{-1}\int_{A}\frac{ \mathrm{d}x}{x+N-1},\quad A\in\mathcal{B}_{I} \tag{2.3}\] is the absolutely continuous invariant probability measure under \(R_{N}\)[1]. 
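The following minimal Python sketch illustrates these definitions for a fixed \(N\): it extracts the digits \(a_{n}(x)\) by iterating \(R_{N}\), reconstructs \(x\) from a truncated expansion \([a_{1},\ldots,a_{n}]_{R}\) by evaluating (2.2) from the inside out, and checks numerically that \(\rho_{N}\) is \(R_{N}\)-invariant on an interval \([0,x]\), using the fact that \(R_{N}^{-1}([0,x])\) is the union over \(i\geq N\) of the intervals \([1-N/i,\,1-N/(x+i)]\).

```python
# Sketch: Renyi-type digits, reconstruction from a finite expansion, and a
# numerical check that rho_N is R_N-invariant (purely illustrative).
import math

N = 3                                          # any integer N >= 2

def R(x):
    return (N / (1.0 - x)) % 1.0               # the map (2.1) for x != 1

def digits(x, n):
    """a_1(x), ..., a_n(x), with a_1(x) = floor(N / (1 - x))."""
    out = []
    for _ in range(n):
        out.append(math.floor(N / (1.0 - x)))
        x = R(x)
    return out

def from_digits(a):
    """Evaluate the finite expansion [a_1, ..., a_n]_R of (2.2) from the inside out."""
    y = 0.0
    for ai in reversed(a):
        y = 1.0 - N / (y + ai)
    return y

x = 0.5341
a = digits(x, 20)
print(a[:8])                                   # all digits are >= N
print(abs(x - from_digits(a)))                 # reconstruction error is small

def rho(x):                                    # rho_N([0, x]) from (2.3)
    return math.log((x + N - 1.0) / (N - 1.0)) / math.log(N / (N - 1.0))

def rho_preimage(x, imax=10**5):
    # R_N^{-1}([0, x]) is the disjoint union over i >= N of [1 - N/i, 1 - N/(x + i)]
    return sum(rho(1.0 - N / (x + i)) - rho(1.0 - N / i) for i in range(N, imax))

x = 0.37
print(rho(x), rho_preimage(x))                 # equal up to truncation of the sum
```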
### Natural extension and extended random variables Now we recall the natural extension \(\overline{R}_{N}\) of \(R_{N}\) and its extended random variables introduced in ([9]). Let \((I,\mathcal{B}_{I},R_{N})\) be as in Section 2.1 and define \((u_{N,i})_{i\geq N}\) by \[u_{N,i}:I\to I;\quad u_{N,i}(x):=1-\frac{N}{x+i},\quad x\in I. \tag{2.4}\] For each \(i\geq N\), \((R_{N}\circ u_{N,i})\left(x\right)=x\) for any \(x\in I\) and if \(a_{1}(x)=i\), then \(\left(u_{N,i}\circ R_{N}\right)(x)=x\), \(x\in I\). The natural extension \(\left(I^{2},\mathcal{B}_{I}^{2},\overline{R}_{N}\right)\) of \((I,\mathcal{B}_{I},R_{N})\) is the transformation \(\overline{R}_{N}\) of the square space \(\left(I^{2},\mathcal{B}_{I}^{2}\right):=(I,\mathcal{B}_{I})\times(I,\mathcal{ B}_{I})\) defined as follows: \[\overline{R}_{N}:I^{2}\to I^{2};\quad\overline{R}_{N}(x,y):=\left(R_{N}(x),\, u_{N,a_{1}(x)}(y)\right),\quad(x,y)\in I^{2}. \tag{2.5}\] Since \(\overline{R}_{N}\) is bijective on \(I^{2}\) with the inverse \[(\overline{R}_{N})^{-1}(x,y)=(u_{N,a_{1}(y)}(x),\,R_{N}(y)),\quad(x,y)\in I^{ 2}. \tag{2.6}\] we get the following iterations for each \(n\geq 2\): \[\left(\overline{R}_{N}\right)^{n}\left(x,y\right)=\left(\,R_{N}^{n}(x),\,[a_{n}(x ),a_{n-1}(x),\ldots,a_{2}(x),\,a_{1}(x)+y-1]_{R}\,\right), \tag{2.7}\] \[\left(\overline{R}_{N}\right)^{-n}\left(x,y\right)=\left(\,[a_{n}(y),a_{n-1}(y ),\ldots,a_{2}(y),\,a_{1}(y)+x-1]_{R},\,R_{N}^{n}(y)\,\right). \tag{2.8}\] For \(\rho_{N}\) in (2.3), we define its _extended measure_\(\overline{\rho}_{N}\) on \(\left(I^{2},\mathcal{B}_{I}^{2}\right)\) as \[\overline{\rho}_{N}(B):=\left(\log\left(\frac{N}{N-1}\right)\right)^{-1}\iint _{B}\frac{N\mathrm{d}x\mathrm{d}y}{\left[N-(1-x)(1-y)\right]^{2}},\quad B\in \mathcal{B}_{I}^{2}. \tag{2.9}\] Then \(\overline{\rho}_{N}(A\times I)=\overline{\rho}_{N}(I\times A)=\rho_{N}(A)\) for any \(A\in\mathcal{B}_{I}\). The measure \(\overline{\rho}_{N}\) is preserved by \(\overline{R}_{N}\), i.e., \(\overline{\rho}_{N}((\overline{R}_{N})^{-1}(B))=\overline{\rho}_{N}(B)\) for any \(B\in\mathcal{B}_{I}^{2}\). With respect to \(\overline{R}_{N}\), define _extended incomplete quotients_\(\overline{a}_{l}(x,y),\,l\in\mathbb{Z}:=\{\ldots,-2,-1,0,1,2,\ldots\}\) at \((x,y)\in I^{2}\) by \[\overline{a}_{l}(x,y):=\overline{a}_{1}\left(\,(\overline{R}_{N})^{l-1}(x,y) \,\right),\quad l\in\mathbb{Z},\] with \(\overline{a}_{1}(x,y)=a_{1}(x),\,(x,y)\in I^{2}\). By (2.4) and (2.6) we have \[\overline{a}_{n}(x,y)=a_{n}(x),\quad\overline{a}_{0}(x,y)=a_{1}(y),\quad \overline{a}_{-n}(x,y)=a_{n+1}(y),\] for any \(n\in\mathbb{N}_{+}\) and \((x,y)\in I^{2}\). Since the measure \(\overline{\rho}_{N}\) is preserved by \(\overline{R}_{N}\), the doubly infinite sequence \((\overline{a}_{l}(x,y))_{l\in\mathbb{Z}}\) is strictly stationary under \(\overline{\rho}_{N}\). Now recall some results obtained in [9]. **Theorem 2.1**.: _Fix \((x,y)\in I^{2}\) and let \(\overline{a}_{l}:=\overline{a}_{l}(x,y)\) for \(l\in\mathbb{Z}\). Set \(a:=[\overline{a}_{0},\overline{a}_{-1},\ldots]_{R}\). For any \(x\in I\):_ \[\overline{\rho}_{N}\left([0,x]\times I\,|\,\overline{a}_{0},\overline{a}_{-1},\ldots\right)=\frac{Nx}{N-(1-x)(1-a)}\quad\overline{\rho}_{N}\text{-a.s.} \tag{2.10}\] The stochastic property of \((\overline{a}_{l})_{l\in\mathbb{Z}}\) under \(\overline{\rho}_{N}\) is as follows. 
**Corollary 2.2**.: _For any \(i\geq N\), we have_ \[\overline{\rho}_{N}(\overline{a}_{1}=i|\overline{a}_{0},\overline{a}_{-1}, \ldots)=P_{N,i}(a)\quad\overline{\rho}_{N}\text{-a.s.} \tag{2.11}\] _where \(a=[\overline{a}_{0},\overline{a}_{-1},\ldots]_{R}\) and_ \[P_{N,i}(x):=\frac{x+N-1}{(x+i)(x+i-1)}. \tag{2.12}\] **Remark 2.3**.: _The strict stationarity of \((\overline{a}_{l})_{l\in\mathbb{Z}}\), under \(\overline{\rho}_{N}\) implies that_ \[\overline{\rho}_{N}(\overline{a}_{l+1}=i\,|\,\overline{a}_{l},\overline{a}_{l -1},\ldots)=P_{N,i}(a)\quad\overline{\rho}_{N}\text{-a.s.} \tag{2.13}\] _for any \(i\geq N\) and \(l\in\mathbb{Z}\), where \(a=[\overline{a}_{l},\overline{a}_{l-1},\ldots]_{R}\)._ Define extended random variables \((\overline{s}_{l})_{l\in\mathbb{Z}}\) by \(\overline{s}_{l}:=[\overline{a}_{l},\overline{a}_{l-1},\ldots]_{R}\), \(l\in\mathbb{Z}\). Clearly, \(\overline{s}_{l}=\overline{s}_{0}\circ(\overline{R}_{N})^{l}\), \(l\in\mathbb{Z}\). It follows from Corollary 2.2 that \((\overline{s}_{l})_{l\in\mathbb{Z}}\) is a strictly stationary \([0,1)\)-valued Markov process on \(\big{(}I^{2},\mathcal{B}_{I}^{2},\overline{\rho}_{N}\big{)}\) with the following transition mechanism. From state \(\overline{s}\in I\) the possible transitions are to any state \(1-N/(\overline{s}+i)\) with corresponding transition probability \(P_{N,i}(\overline{s})\), \(i\geq N\). Clearly, for any \(l\in\mathbb{Z}\) we have \[\overline{\rho}_{N}(\overline{s}_{l}<x)=\overline{\rho}_{N}(I\times[0,x))= \rho_{N}([0,x)),\quad x\in I. \tag{2.14}\] Motivated by Theorem 2.1, we shall consider the one-parameter family \(\{\rho_{N}^{t}:t\in I\}\) of (conditional) probability measures on \((I,\mathcal{B}_{I})\) defined by their distribution functions \[\rho_{N}^{t}([0,x]):=\frac{Nx}{N-(1-x)(1-t)},\quad x,t\in I. \tag{2.15}\] Note that \(\rho_{N}^{1}=\lambda\). The density of \(\rho_{N}^{t}\) is \[h_{N}^{t}(x)=\frac{N(N-1+t)}{[N-(1-x)(1-t)]^{2}},\quad x,t\in I. \tag{2.16}\] For any \(t\in I\) put \[s_{N,0}^{t}:=t,\quad s_{N,n}^{t}:=1-\frac{N}{a_{n}+s_{N,n-1}^{t}},\quad n\in \mathbb{N}_{+}. \tag{2.17}\] **Remark 2.4**.: _It follows from the properties just described for the process \((\overline{s}_{l})_{l\in\mathbb{Z}}\) that the sequence \((s_{N,n}^{t})_{n\in\mathbb{N}_{+}}\) is an \(I\)-valued Markov chain on \(\big{(}I,\mathcal{B}_{I},\rho_{N}^{t}\big{)}\) which starts at \(s_{N,0}^{t}:=t\) and has the following transition mechanism: from state \(s\in I\) the possible transitions are to any state \(1-N/(s+i)\) with corresponding transition probability \(P_{N,i}(s)\), \(i\geq N\)._ Now, we recall that an \(n\)-block \((a_{1},a_{2},\ldots,a_{n})\) is said to be _admissible_ for the expansion in (2.2) if there exists \(x\in[0,1)\) such that \(a_{i}(x)=a_{i}\) for all \(1\leq i\leq n\). If \((a_{1},a_{2},\ldots,a_{n})\) is an admissible sequence, we call the set \[I(a_{1},a_{2},\ldots,a_{n}):=\{x\in I:a_{1}(x)=a_{1},a_{2}(x)=a_{2},\ldots,a_{ n}(x)=a_{n}\}, \tag{2.18}\] _the \(n\)-th order cylinder_. As we mentioned above, \((a_{1},a_{2},\ldots,a_{n})\in\Lambda^{n}\), with \(\Lambda:=\{N,N+1,\ldots\}\). **Theorem 2.5** (Generalized Broden-Borel-Levy-type formula).: _For any \(t\in I\) and \(n\in\mathbb{N}_{+}\), we have_ \[\rho_{N}^{t}(R_{N}^{n}<x|a_{1},\ldots,a_{n})=\frac{Nx}{N-(1-x)(1-s_{N,n}^{t}) },\quad x\in I. 
\tag{2.19}\] Proof.: For any \(n\in\mathbb{N}_{+}\) and \(x\in I\) consider the conditional probability \[\overline{\rho}_{N}(\overline{R}_{N}^{-n}([0,x]\times I)\,\Big{|}\,\,\, \overline{a}_{n},\ldots,\overline{a}_{1},\overline{a}_{0},\overline{a}_{-1}, \ldots). \tag{2.20}\] Put \(t=[\overline{a}_{0},\overline{a}_{-1},\ldots]_{R}\) and note that \([\overline{a}_{n},\ldots,\overline{a}_{1},\overline{a}_{0},\overline{a}_{-1}, \ldots]_{R}=s_{N,n}^{t}\). On the one hand, it follows from the fact that the measure \(\overline{\rho}_{N}\) is preserved by \(\overline{R}_{N}\) and from Theorem 2.1 and Remark 2.3 that the conditional probability (2.20) \(\overline{\rho}_{N}\) - a.s. equals \[\frac{Nx}{N-(1-x)(1-s_{N,n}^{t})}.\] On the other hand, putting \[\overline{\rho}_{N}^{t}(\cdot)=\overline{\rho}_{N}(\cdot\,|\ \overline{a}_{0}, \overline{a}_{-1},\ldots)\] it is clear that (2.20) \(\overline{\rho}_{N}\) - a.s. equals \[\frac{\overline{\rho}_{N}^{t}\left(\overline{R}_{N}^{-n}([0,x]\times I)\cap(I( a_{1},\ldots,a_{n})\times I)\right)}{\overline{\rho}_{N}^{t}\left(I(a_{1},\ldots,a_{n}) \times I\right)}. \tag{2.21}\] Since \(\overline{R}_{N}^{-n}([0,x]\times I)=\overline{R}_{N}^{-n}([0,x])\times I\) and \(\overline{\rho}_{N}^{t}(A\times I)=\rho_{N}^{t}(A)\), \(A\in\mathcal{B}_{I}\), the fraction in (2.21) equals \[\rho_{N}^{t}\left(\,R_{N}^{-n}([0,x])\,\big{|}\ I(a_{1},\ldots,a_{n})\right)= \rho_{N}^{t}(R_{N}^{n}<x|a_{1},\ldots,a_{n}).\] **Corollary 2.6**.: _For any \(t\in I\) and \(n\in\mathbb{N}_{+}\) we have_ \[\rho_{N}^{t}(A|a_{1},\ldots,a_{n})=\rho_{N}^{s_{N,n}^{t}}\left(R_{N}^{n}(A)\right) \tag{2.22}\] _whatever the set \(A\) belonging to the \(\sigma\)-algebra generated by the random variables \(a_{n+1},a_{n+2},\ldots\), that is, \(R_{N}^{-n}\left(\mathcal{B}_{I}\right)\)._ ## 3 A Levy-type approach In the sequel we shall obtain estimates for both errors \(F_{N,n}^{t}-G_{N}\) and \(G_{N,n}^{t}-G_{N}\), \(t\in I\), \(n\in\mathbb{N}\) where \[F_{N,n}^{t}(x) :=\rho_{N}^{t}(R_{N}^{n}<x),x\in I,n\in\mathbb{N} \tag{3.1}\] \[G_{N,n}^{t}(s) :=\rho_{N}^{t}(s_{N,n}^{t}<s),s\in I,n\in\mathbb{N}_{+}, \tag{3.2}\] and \[G_{N,0}^{t}(s):=\left\{\begin{array}{ll}0,&s\leq t\\ 1,&s>t\end{array}\right.,\quad G_{N}(s):=\rho_{N}([0,s]),\quad s\in I. \tag{3.3}\] It follows from (2.19) that \[F_{N,n}^{t}(x)=\int_{0}^{1}\frac{Nx}{N-(1-x)(1-s)}\mathrm{d}G_{N,n}^{t}(s) \tag{3.4}\] for any \(x,t\in I\) and \(n\in\mathbb{N}\). It is easy to check that \[G_{N}(x)=\int_{0}^{1}\frac{Nx}{N-(1-x)(1-s)}\mathrm{d}G_{N}(s),\quad x\in I \tag{3.5}\] and \[F_{N,n}^{t}\left(1-\frac{N}{i+1}\right)=G_{N,n+1}^{t}\left(1-\frac{N}{i+1} \right),\quad n\in\mathbb{N}_{+},t\in I,i\geq N. \tag{3.6}\] The last equation is still valid for \(n=0\) and \(t\neq 1\), while \[F_{N,0}^{1}\left(1-\frac{N}{i}\right)=G_{N,1}^{1}\left(1-\frac{N}{i+1}\right),i \geq N. 
\tag{3.7}\] Since \(\left(s_{N,n}^{t}\right)_{n\in\mathbb{N}}\) is a Markov chain on \(\left(I,\mathcal{B}_{I},\rho_{N}^{t}\right)\) for any \(i\geq N\), \(n\in\mathbb{N}_{+}\), \(t\in I\) and \(\theta\in[0,1)\) we have \[G_{N,n+1}^{t}\left(1-\frac{N}{i+\theta}\right)-G_{N,n+1}^{t} \left(1-\frac{N}{i}\right)=\rho_{N}^{t}\left(1-\frac{N}{i}\leq s_{N,n+1}^{t}<1 -\frac{N}{i+\theta}\right)\] \[=E\left(\left.\rho_{N}^{t}\left(1-\frac{N}{i}\leq s_{N,n+1}^{t}<1 -\frac{N}{i+\theta}\right)\right|s_{N,n}^{t}\right)= \tag{3.8}\] \[=\int_{0}^{\theta}P_{N,i}(s)\mathrm{d}G_{N,n}^{t}(s),\quad i\geq N\] while \[G_{N,1}^{t}\left(1-\frac{N}{i+\theta}\right)-G_{N,1}^{t}\left(1- \frac{N}{i}\right) = \rho_{N}^{t}\left(1-\frac{N}{i}\leq s_{N,1}^{t}<1-\frac{N}{i+ \theta}\right) \tag{3.9}\] \[= \int_{0}^{\theta}P_{N,i}(s)\mathrm{d}G_{N,0}^{t}(s),\] that is, (3.8) also holds for \(n=0\) if \(t\in[0,1)\). It is easy to check that \[\int_{0}^{\theta}P_{N,i}(s)\mathrm{d}G_{N}(s)=G_{N}\left(1-\frac{N}{i+\theta} \right)-G_{N}\left(1-\frac{N}{i}\right) \tag{3.10}\] for any \(i\geq N\) and \(\theta\in[0,1)\). Now, by (3.4) and (3.5) we have \[F_{N,n}^{t}(x)-G_{N}(x) = \int_{0}^{1}\frac{Nx}{N-(1-x)(1-s)}\mathrm{d}\left(G_{N,n}^{t}(s )-G_{N}(s)\right)\] \[= -\int_{0}^{1}\left(G_{N,n}^{t}(s)-G_{N}(s)\right)\frac{\partial} {\partial s}\left(\frac{Nx}{N-(1-x)(1-s)}\right)\mathrm{d}s\] for any \(x,t\in I\) and \(n\in\mathbb{N}\). Setting \[\alpha_{N,n}^{t}:=\sup_{s\in I}\left|G_{N,n}^{t}(s)-G_{N}(s)\right|,\quad t \in I,n\in\mathbb{N}, \tag{3.11}\] we obtain \[\left|F_{N,n}^{t}(x)-G_{N}(x)\right|\leq\alpha_{N,n}^{t}\int_{0}^{1}\frac{Nx( 1-x)}{[N-(1-x)(1-s)]^{2}}\mathrm{d}s=\alpha_{N,n}^{t}\frac{x(1-x)}{N-1+x},\] hence \[\left|F_{N,n}^{t}(x)-G_{N}(x)\right|\leq\left(\sup_{x\in I}\frac{x(1-x)}{N-1+ x}\right)\alpha_{N,n}^{t}=\beta_{N}\cdot\alpha_{N,n}^{t}, \tag{3.12}\] \(t,x\in I\) and \(n\in\mathbb{N}\), where \[\beta_{N}:=\frac{(1-N+\sqrt{N^{2}-N})(N-\sqrt{N^{2}-N})}{\sqrt{N^{2}-N}}=2N-1-2 \sqrt{N(N-1)}. 
\tag{3.13}\] Let us note that \[\alpha_{N,0}^{t}=\max\left(G_{N}(t),1-G_{N}(t)\right),\quad t\in I.\] **Theorem 3.1**.: _For any \(n\in\mathbb{N}_{+}\) and \(t\in I\) we have_ \[\sup_{x\in I}\left|F_{N,n}^{t}(x)-G_{N}(x)\right|\leq\left\{\begin{array}{ ll}(0.171)(0.251)(0.348)^{n-1},&N=2\\ \beta_{N}\cdot\delta_{N}\cdot c_{N}^{n-1},&N\geq 3,\end{array}\right.\] \[\sup_{x\in I}\left|G_{N,n}^{t}(x)-G_{N}(x)\right|\leq\left\{\begin{array}{ ll}(0.251)(0.348)^{n-1},&N=2\\ \delta_{N}\cdot c_{N}^{n-1},&N\geq 3,\end{array}\right.\] _where \(\beta_{N}\) is as in(3.13) and_ \[\delta_{N} := \frac{2}{N+1}-\left(\log\left(\frac{N}{N-1}\right)\right)^{-1} \log\left(\frac{N^{2}}{N^{2}-1}\right),\] \[c_{N} := 2N-1-2\sqrt{N(N-1)}+\frac{N-1}{N(N+1)}.\] Proof.: For any \(i\geq N\), \(n\in\mathbb{N}_{+}\), \(t\in I\) and \(\theta\in[0,1)\) we have \[G_{N,n+1}^{t}\left(1-\frac{N}{i+1+\theta}\right)-G_{N}\left(1- \frac{N}{i+1+\theta}\right)\] \[=G_{N,n+1}^{t}\left(1-\frac{N}{i+1}\right)-G_{N}\left(1-\frac{N}{ i+1}\right)\] \[+G_{N,n+1}^{t}\left(1-\frac{N}{i+1+\theta}\right)-G_{N,n+1}^{t} \left(1-\frac{N}{i+1}\right)\] \[+G_{N}\left(1-\frac{N}{i+1}\right)-G_{N}\left(1-\frac{N}{i+1+ \theta}\right)\] and by (3.6), (3.8),(3.9), (3.10) and (3.12) we obtain \[\left|G_{N,n+1}^{t}\left(1-\frac{N}{i+1+\theta}\right)-G_{N}\left( 1-\frac{N}{i+1+\theta}\right)\right|\] \[\leq\left|F_{N,n}^{t}\left(1-\frac{N}{i+1}\right)-G_{N}\left(1- \frac{N}{i+1}\right)\right|\] \[+\left|\int_{0}^{\theta}P_{N,i+1}(s)\mathrm{d}\left(G_{N,n}^{t}( s)-G_{N}(s)\right)\right|\leq\beta_{N}\alpha_{N,n}^{t}\] \[+\left|\int_{0}^{\theta}\left(G_{N}(s)-G_{N,n}^{t}(s)\right) \mathrm{d}P_{N,i+1}(s)+P_{N,i+1}(\theta)\left(G_{N,n}^{t}(\theta)-G_{N}(\theta )\right)\right|\] \[\leq\left(\beta_{N}+\beta_{N}(i,\theta)\right)\alpha_{N,n}^{t}\] where \[\beta_{N}(i,\theta)=\int_{0}^{\theta}\left|\frac{\mathrm{d}P_{N,i+1}(s)}{\mathrm{ d}s}\right|\mathrm{d}s+P_{N,i+1}(\theta). 
\tag{3.14}\] Since \[\left|\frac{\mathrm{d}P_{N,i+1}(s)}{\mathrm{d}s}\right| = \left|\frac{i-N+1}{(s+i)^{2}}-\frac{i-N+2}{(s+i+1)^{2}}\right|\] \[= \left\{\begin{array}{ll}\frac{1}{(s+2)^{2}}-\frac{2}{(s+3)^{2 }},&\mbox{$i=N=2$ and $s\in[0,\sqrt{2}-1]$}\\ -\frac{1}{(s+2)^{2}}+\frac{2}{(s+3)^{2}},&\mbox{$i=N=2$ and $s\in(\sqrt{2}-1,1]$}\\ -\frac{1}{(s+N)^{2}}+\frac{2}{(s+N+1)^{2}},&\mbox{$i=N\geq 3$ and $s\in[0,1]$}\\ \frac{i-N+1}{(s+i)^{2}}-\frac{i-N+2}{(s+i+1)^{2}},&\mbox{$i\geq N+1,N\geq 2 $ and $s\in[0,1]$}\end{array}\right.\] we get for any \(\theta\in[0,1)\) \[\int_{0}^{\theta}\left|\frac{\mathrm{d}P_{N,i+1}(s)}{\mathrm{d}s} \right|\mathrm{d}s\] \[= \left\{\begin{array}{ll}\int_{0}^{\theta}\left[\frac{1}{(s+2) ^{2}}-\frac{2}{(s+3)^{2}}\right]\mathrm{d}s,&\mbox{$i=N=2$ and $\theta\in[0,\sqrt{2}-1]$}\\ \\ \int_{0}^{\sqrt{2}-1}\left[\frac{1}{(s+2)^{2}}-\frac{2}{(s+3)^{2}} \right]\mathrm{d}s+\int_{\sqrt{2}-1}^{\theta}\left[-\frac{1}{(s+2)^{2}}+\frac {2}{(s+3)^{2}}\right]\mathrm{d}s,&\mbox{$i=N=2$}\\ &\mbox{and $\theta\in(\sqrt{2}-1,1]$}\\ \\ \int_{0}^{\theta}\left[-\frac{1}{(s+N)^{2}}+\frac{2}{(s+N+1)^{2}} \right]\mathrm{d}s,&\mbox{$i=N\geq 3$ and $\theta\in[0,1)$}\\ \\ \int_{0}^{\theta}\left[\frac{i-N+1}{(s+i)^{2}}-\frac{i-N+2}{(s+i+1)^{2}} \right]\mathrm{d}s,&\mbox{$i\geq N+1,N\geq 2$ and $\theta\in[0,1)$}.\end{array}\right.\] Actually, \[\beta_{N}(i,\theta)=\left\{\begin{array}{ll}2P_{2,3}(\theta)-\frac{1}{6},& \mbox{$i=N=2$ and $\theta\in[0,\sqrt{2}-1]$}\\ \\ 6-4\sqrt{2}-\frac{1}{6},&\mbox{$i=N=2$ and $\theta\in(\sqrt{2}-1,1)$}\\ \\ \frac{N-1}{N(N+1)},&\mbox{$i=N\geq 3$ and $\theta\in[0,1)$}\\ \\ 2P_{N,i+1}(\theta)-\frac{N-1}{i(i+1)},&\mbox{$i\geq N+1,N\geq 2$ and $\theta\in[0,1)$}.\end{array}\right. \tag{3.15}\] Now, if \(i\geq N+1\), with \(N\geq 2\), \(\beta_{N}^{\prime}(i,\theta^{*})=0\) for \[\theta^{*}=1-N+\sqrt{(i+1-N)(i+2-N)}.\] For any real \(\theta\), \(\beta_{N}(i,\theta)\) is increasing if \(\theta\leq\theta^{*}\) and decreasing if \(\theta\geq\theta^{*}\) Since \[\theta^{*}\in\left\{\begin{array}{ll}(1,+\infty),&i\geq 3\mbox{ and }N=2\\ (0,1),&i=4\mbox{ and }N=3\\ (1,+\infty),&i\geq 5\mbox{ and }N=3\\ (-\infty,0),&i=5\mbox{ and }N=4\\ (0,1),&i=6\mbox{ and }N=4\\ (1,+\infty),&i\geq 7\mbox{ and }N=4\\ (-\infty,0),&i=6\mbox{ and }N=5\\ (0,1),&i=8\mbox{ and }N=5\\ (1,+\infty),&i\geq 9\mbox{ and }N=5\\ \vdots\end{array}\right.\] and \(\theta\in[0,1)\) we get \[\sup_{\begin{array}{c}i\geq N+1,\\ N\geq 2,\theta\in[0,1)\end{array}}\beta_{N}(i,\theta)\] \[=\left\{\begin{array}{ll}\sup_{i\geq 3}\beta_{2}(i,1)=\beta_{2 }(3,1)=0.11667,&N=2\\ \sup_{i\geq 5}\left\{\beta_{3}(4,\theta^{*}),\beta_{3}(i,1)\right\}=\beta_{3} (4,\theta^{*})=0.10204,&N=3\\ \sup_{i\geq 7}\left\{\beta_{4}(5,0),\beta_{4}(6,\theta^{*}),\beta_{4}(i,1) \right\}=\beta_{4}(5,0)=0.1,&N=4\\ \sup_{i\geq 9}\left\{\beta_{5}(6,0),\beta_{5}(7,0),\beta_{5}(8,\theta^{*}), \beta_{5}(i,1),\right\}=\beta_{5}(6,0)=0.07143,&N=5\\ \beta_{N}(N+1,0)&N\geq 6.\end{array}\right.\] Finally, it is easy to check that for any \(N\geq 2\) \[\sup_{\begin{array}{c}i\geq N,\\ \theta\in[0,1)\end{array}}\beta_{N}(i,\theta) = \left\{\begin{array}{ll}6-4\sqrt{2}-\frac{1}{6}=0.17647,&N=2\\ \frac{1}{6}=0.16666,&N=3\\ \frac{3}{20}=0.15,&N=4\\ \vdots\end{array}\right.\] \[= \left\{\begin{array}{ll}0.17647,&N=2\\ \frac{N-1}{N(N+1)},&N\geq 3.\end{array}\right.\] Hence \[\alpha_{N,n+1}^{t}=\sup_{\begin{array}{c}i\geq N,\\ \theta\in[0,1)\end{array}}\left|G_{N,n+1}^{t}\left(1-\frac{N}{i+1+\theta}\right) -G_{N}\left(1-\frac{N}{i+1+\theta}\right)\right|\] 
\[\leq\left(2N-1-2\sqrt{N(N-1)}+\sup_{\begin{array}{c}i\geq N,\\ \theta\in[0,1)\end{array}}\beta_{N}(i,\theta)\right)\alpha_{N,n}^{t}\] \[=\left\{\begin{array}{ll}(9-\frac{1}{6}-6\sqrt{2})\alpha_{N,n} ^{t}=0.348\ldots\alpha_{N,n}^{t},&N=2\\ \left(2N-1-2\sqrt{N(N-1)}+\frac{N-1}{N(N+1)}\right)\alpha_{N,n}^{t}=:c_{N} \cdot\alpha_{N,n}^{t},&N\geq 3\end{array}\right. \tag{3.16}\] for any \(t\in I\) and \(n\in\mathbb{N}_{+}\). Finally, by (3.6) and (3.9) we have \[G_{N,1}^{1}\left(1-\frac{N}{i+1+\theta}\right)=F_{N,0}^{1}\left(1-\frac{N}{i+ 1}\right)\] and \[G_{N,1}^{t}\left(1-\frac{N}{i+1+\theta}\right) = G_{N,1}^{t}\left(1-\frac{N}{i+1}\right)+\int_{0}^{\theta}P_{N,i+ 1}(s)\mathrm{d}G_{N,0}^{t}(s)\] \[= \left\{\begin{array}{ll}F_{N,0}^{t}\left(1-\frac{N}{i+1}\right),&0\leq\theta\leq t\\ F_{N,0}^{t}\left(1-\frac{N}{i+1}\right)+P_{N,i+1}(t),&\theta>t\end{array}\right.\] for any \(t\in[0,1)\), \(\theta\in[0,1)\) and \(i\geq N\), \(N\geq 2\). It is easy to see that \[\alpha_{N,1}^{t} = \sup_{\begin{array}{c}i\geq N,\\ \theta\in[0,1)\end{array}}\left|G_{N,1}^{t}\left(1-\frac{N}{i+1+\theta}\right) -G_{N}\left(1-\frac{N}{i+1+\theta}\right)\right| \tag{3.17}\] \[\leq \frac{2}{N+1}-\left(\log\left(\frac{N}{N-1}\right)\right)^{-1} \log\left(\frac{N^{2}}{N^{2}-1}\right)=:\delta_{N},\quad t\in I.\] It follows from (3.16) and (3.17) that \[\alpha_{N,n}^{t}\leq\left\{\begin{array}{ll}(0.251)(0.348)^{n-1},&N=2\\ \delta_{N}\cdot c_{N}^{n-1},&N\geq 3\end{array}\right.\] for any \(t\in I\) and \(n\in\mathbb{N}_{+}\). By (3.12) the proof is complete. ## 4 \(\psi\)-mixing coefficients of \((a_{n})_{n\in\mathbb{N}_{+}}\) A mixing property of a stationary stochastic process \((\ldots,X_{-1},X_{0},X_{1},\ldots)\) reflects a decay of the statistical dependence between the past \(\sigma\)-algebra \(\sigma\left(\left\{X_{k}:k\leq 0\right\}\right)\) and the asymptotic future \(\sigma\)-algebra \(\sigma\left(\left\{X_{k}:k\geq n\right\}\right)\) as \(n\to\infty\). The various mixing properties are described by corresponding measures of dependence between \(\sigma\)-algebras. We study the \(\psi\)-mixing coefficients of \((a_{n})_{n\in\mathbb{N}_{+}}\) under either \(\rho_{N}^{t}\), \(t\in I\), or \(\rho_{N}\). For any \(k\in\mathbb{N}_{+}\) let \(\mathcal{B}_{1}^{k}=\sigma(a_{1},\ldots,a_{k})\) and \(\mathcal{B}_{k}^{\infty}=\sigma(a_{k},a_{k+1},\ldots)\) denote the \(\sigma\)-algebras generated by the random variables \(a_{1},\ldots,a_{k}\), respectively, \(a_{k},a_{k+1},\ldots\). Clearly, \(\mathcal{B}_{1}^{k}\) is the \(\sigma\)-algebra generated by the closures of the fundamental intervals of rank \(k\) while \(\mathcal{B}_{k}^{\infty}=R_{N}^{-k+1}(\mathcal{B}_{I})\), \(k\in\mathbb{N}_{+}\). For any \(\mu\in\operatorname{pr}(\mathcal{B}_{I})\) consider the \(\psi\)-mixing coefficients \[\psi_{\mu}(n)=\sup\left|\frac{\mu(A\cap B)}{\mu(A)\mu(B)}-1\right|,\quad n\in \mathbb{N}_{+}, \tag{4.1}\] where the supremum is taken over all \(A\in\mathcal{B}_{1}^{k}\) and \(B\in\mathcal{B}_{k+n}^{\infty}\) such that \(\mu(A)\mu(B)\neq 0\) and \(k\in\mathbb{N}_{+}\). Define \[\varepsilon_{N,n}=\sup\left|\frac{\rho_{N}^{t}(B)}{\rho_{N}(B)}-1\right|, \quad n\in\mathbb{N}_{+}, \tag{4.2}\] where the supremum is taken over all \(t\in I\) and \(B\in\mathcal{B}_{n}^{\infty}\) with \(\rho_{N}(B)>0\). Note that the sequence \(\left(\varepsilon_{N,n}\right)_{n\in\mathbb{N}_{+}}\) is non-increasing since \(\mathcal{B}_{n+1}^{\infty}\subset\mathcal{B}_{n}^{\infty}\) for any \(n\in\mathbb{N}_{+}\). 
We shall show that \(\varepsilon_{N,n}\) can be expressed in terms of \(F_{N,n-1}^{t}\), \(t\in I\) and \(G_{N}\), namely \(\varepsilon_{N,n}=\varepsilon_{N,n}^{\prime}\) with \[\varepsilon_{N,n}^{\prime}=\sup_{t,x\in I}\left|\frac{\mathrm{d}F_{N,n-1}^{t} (x)/\mathrm{d}x}{\nu_{N}(x)}-1\right|,\quad n\in\mathbb{N}_{+}, \tag{4.3}\] where \(\nu_{N}\) is the density measure of \(\rho_{N}\), i.e., \[\nu_{N}(x)=\left(\log\left(\frac{N}{N-1}\right)\right)^{-1}\frac{1}{x+N-1}. \tag{4.4}\] Indeed, by the very definition of \(\varepsilon_{N,n}^{\prime}\), for any \(t,x\in I\) we have \[\varepsilon_{N,n}^{\prime}\nu(x)\geq\left|\frac{\mathrm{d}F_{N,n-1}^{t}(x)}{ \mathrm{d}x}-\nu_{N}(x)\right|. \tag{4.5}\] By integrating the above inequality over \(B\in\mathcal{B}_{n}^{\infty}\) we obtain \[\rho_{N}(B)\varepsilon_{N,n}^{\prime} \geq \int_{B}\left|\frac{\mathrm{d}F_{N,n-1}^{t}(x)}{\mathrm{d}x}-\nu_ {N}(x)\right|\mathrm{d}x\] \[\geq \left|\int_{B}\mathrm{d}F_{N,n-1}^{t}(x)-\int_{B}\nu_{N}(x) \mathrm{d}x\right|=\left|\rho_{N}^{t}(B)-\rho_{N}(B)\right|\] for any \(B\in\mathcal{B}_{n}^{\infty}\), \(n\in\mathbb{N}_{+}\) and \(t\in I\). Hence \(\varepsilon_{N,n}^{\prime}\geq\varepsilon_{N,n}\), \(n\in\mathbb{N}_{+}\). On the other hand, for any arbitrarily given \(n\in\mathbb{N}_{+}\) let \(B_{x,k}^{+}:=\left(x\leq R_{N}^{n-1}<x+k\right)\in\mathcal{B}_{n}^{\infty}\), with \(x\in[0,1)\), \(k>0\), \(x+k\in I\), and \(B_{x,k}^{-}:=\left(x-k\leq R_{N}^{n-1}<x\right)\in\mathcal{B}_{n}^{\infty}\), with \(x\in(0,1]\), \(k>0\), \(x-k\in I\). Clearly, \[\varepsilon_{N,n}\geq\max\left(\left|\frac{\rho_{N}^{t}\left(B_{x,k}^{+} \right)}{\rho_{N}\left(B_{x,k}^{+}\right)}-1\right|,\left|\frac{\rho_{N}^{t} \left(B_{x,k}^{-}\right)}{\rho_{N}\left(B_{x,k}^{-}\right)}-1\right|\right)\] for any \(t\in I\) and suitable \(x\in I\) and \(k>0\). Letting \(k\to 0\) we get \(\varepsilon_{N,n}\geq\varepsilon_{N,n}^{\prime}\), \(n\in\mathbb{N}_{+}\). Therefore \(\varepsilon_{N,n}=\varepsilon_{N,n}^{\prime}\), \(n\in\mathbb{N}_{+}\). It is easy to compute \(\varepsilon_{N,1}^{\prime}=\varepsilon_{N,1}\) and \(\varepsilon_{N,2}^{\prime}=\varepsilon_{N,2}\). 
Since \(F_{N,0}^{t}(x)=\rho_{N}^{t}([0,x])\), \(t,x\in I\), we have \[\varepsilon_{N,1}=\sup_{t,x\in I}\left|\frac{\mathrm{d}F_{N,0}^{t}(x)/\mathrm{d}x}{\nu_{N}(x)}-1\right|=\sup_{t,x\in I}\left|\frac{h_{N}^{t}(x)}{\nu_{N}(x)}-1\right|=\sup_{t,x\in I}\left|\frac{N(N-1+t)(N-1+x)}{[N-(1-x)(1-t)]^{2}}\cdot\log\left(\frac{N}{N-1}\right)-1\right|.\] As \[N-1\leq\frac{N(N-1+t)(N-1+x)}{[N-(1-x)(1-t)]^{2}}\leq N,\quad x,t\in I,\] it follows that \[\varepsilon_{N,1}=\max\left\{N\log\left(\frac{N}{N-1}\right)-1,\,1-(N-1)\log\left(\frac{N}{N-1}\right)\right\}=N\log\left(\frac{N}{N-1}\right)-1,\] the first term being the larger one since \((2N-1)\log\left(\frac{N}{N-1}\right)>2\) for every \(N\geq 2\). Next we compute \(\varepsilon_{N,2}\). We have \[F_{N,1}^{t}(x)=\rho_{N}^{t}\left(R_{N}<x\right)=\rho_{N}^{t}\left(R_{N}^{-1}(0,x)\right)\] and since \[R_{N}^{-1}(0,x)=\bigcup_{i\geq N}\left(1-\frac{N}{i},1-\frac{N}{x+i}\right]\] it follows that \[F_{N,1}^{t}(x)=\sum_{i\geq N}\rho_{N}^{t}\left(1-\frac{N}{i},1-\frac{N}{x+i}\right]=\sum_{i\geq N}\frac{x(t+N-1)}{(x+i+t-1)(i+t-1)}=\left(t+N-1\right)\sum_{i\geq N}\left(\frac{1}{i+t-1}-\frac{1}{x+i+t-1}\right),\quad x,t\in I.\] Then \[\varepsilon_{N,2}^{\prime}=\sup_{t,x\in I}\left|\frac{\mathrm{d}F_{N,1}^{t}(x)/\mathrm{d}x}{\nu_{N}(x)}-1\right|=\sup_{t,x\in I}\left|\left(\log\left(\frac{N}{N-1}\right)\right)(x+N-1)(t+N-1)\sum_{i\geq N}\frac{1}{(x+i+t-1)^{2}}-1\right|.\] A simple computation yields \[1+N^{2}\zeta(2,N+1)-N\zeta(2,N)\leq(x+N-1)(t+N-1)\sum_{i\geq N}\frac{1}{(x+i+t-1)^{2}}\leq 1+(N-1)^{2}\zeta(2,N).\] Hence \[\varepsilon_{N,2}^{\prime}=\max\left\{\begin{array}{l}\left|\left(1+(N-1)^{2}\zeta(2,N)\right)\log\left(\frac{N}{N-1}\right)-1\right|,\\ \\ \left|\left(1+N^{2}\zeta(2,N+1)-N\zeta(2,N)\right)\log\left(\frac{N}{N-1}\right)-1\right|.\end{array}\right.\] For \(N=2\), we get \[\varepsilon_{2,2}=\varepsilon_{2,2}^{\prime}=\max\{\left|\left[1+4\zeta(2,3)-2\zeta(2,2)\right]\log 2-1\right|,\left|\left[1+\zeta(2,2)\right]\log 2-1\right|\}=\max\{\left|\left[2(\zeta(2)-1)\right]\log 2-1\right|,\left|\zeta(2)\log 2-1\right|\}=\max\{1-\left[2(\zeta(2)-1)\right]\log 2,\zeta(2)\log 2-1\}=\zeta(2)\log 2-1=0.14018\ldots\] Clearly, for \(n\geq 3\) the computation of \(\varepsilon_{N,n}\) becomes very difficult. Instead, Theorem 3.1 can be used to derive good upper bounds for \(\varepsilon_{N,n}\) whatever \(n\geq 3\). **Proposition 4.1**.: _We have \(\varepsilon_{N,1}\leq\left(\log\left(\frac{N}{N-1}\right)\right)\cdot K_{N}\) and_ \[\varepsilon_{N,n}\leq\left\{\begin{array}{ll}(\log 2)(7.55)(0.251)(0.348)^{n-2},&N=2\\ \left(\log\left(\frac{N}{N-1}\right)\right)K_{N}\cdot\delta_{N}\cdot c_{N}^{n-2},&N\geq 3\end{array}\right.,\quad n\geq 2, \tag{4.6}\] _where \(K_{N}:=N+\frac{N^{3}}{(N-1)^{2}}-(N-1)\left[\frac{(2N-1)N}{(N-1)^{2}+N^{2}}+\frac{2N+1}{2N}\right]\) and \(\delta_{N}\) and \(c_{N}\) are as in Theorem 3.1._ Proof.: It follows from (3.4) and (3.5) that \[\frac{\mathrm{d}F_{N,n}^{t}(x)}{\mathrm{d}x}=\int_{0}^{1}\frac{N(N-1+s)}{[N-(1-x)(1-s)]^{2}}\mathrm{d}G_{N,n}^{t}(s)\] and \[\nu(x)=\int_{0}^{1}\frac{N(N-1+s)}{[N-(1-x)(1-s)]^{2}}\mathrm{d}G_{N}(s)\] for any \(x,t\in I\) and \(n\in\mathbb{N}\).
Using the last two equations, integration by parts yields \[\left|\frac{\mathrm{d}F_{N,n}^{t}(x)}{\mathrm{d}x}-\nu(x)\right| = \left|\int_{0}^{1}\frac{N(N-1+s)}{[N-(1-x)(1-s)]^{2}}\mathrm{d} \left(G_{N,n}^{t}(s)-G_{N}(s)\right)\right|\] \[= \left|\int_{0}^{1}\left(G_{N,n}^{t}(s)-G_{N}(s)\right)\frac{ \partial}{\partial s}\left(\frac{N(N-1+s)}{[N-(1-x)(1-s)]^{2}}\right)\mathrm{ d}s\right|\] \[\leq \sup_{s\in I}\left|G_{N,n}^{t}(s)-G_{N}(s)\right|\cdot\int_{0}^{1 }N\left|\frac{N-(1-x)(2N-1+s)}{[N-(1-x)(1-s)]^{3}}\right|\mathrm{d}s.\] But \[\int_{0}^{1}N\left|\frac{N-(1-x)(2N-1+s)}{[N-(1-x)(1-s)]^{3}} \right|\mathrm{d}s\] \[=\left\{\begin{array}{ll}N\int_{0}^{1}\frac{-N+(1-x)(2N-1+s)}{[N -(1-x)(1-s)]^{3}}\mathrm{d}s,&x\in\left[0,\frac{N-1}{2N}\right)\\ \\ N\int_{0}^{\frac{(2N-1)x-N+1}{1-x}}\frac{N-(1-x)(2N-1+s)}{[N-(1-x)(1-s)]^{3}} \mathrm{d}s+N\int_{\frac{(2N-1)x-N+1}{1-x}}^{1}\frac{-N+(1-x)(2N-1+s)}{[N-(1-x )(1-s)]^{3}}\mathrm{d}s,&x\in\left[\frac{N-1}{2N},\frac{N}{2N-1}\right]\\ \\ N\int_{0}^{1}\frac{N-(1-x)(2N-1+s)}{[N-(1-x)(1-s)]^{3}}\mathrm{d}s,&x\in \left(\frac{N}{2N-1},1\right]\\ \\ \frac{N(N-1)}{(x+N-1)^{2}}-1,&x\in\left[0,\frac{N-1}{2N}\right) \\ \\ \frac{-N(N-1)}{(x+N-1)^{2}}+\frac{1}{2x(1-x)}-1,&x\in\left[\frac{N-1}{2N}, \frac{N}{2N-1}\right]\\ \\ 1-\frac{N(N-1)}{(x+N-1)^{2}},&x\in\left(\frac{N}{2N-1},1\right] \end{array}\right.\] \[=\left\{\begin{array}{ll}\frac{N(N-1)}{x+N-1}-(x+N-1)\leq 1,&x\in \left[0,\frac{N-1}{2N}\right)\\ \\ \frac{-N(N-1)}{x+N-1}+\frac{x+N-1}{2x(1-x)}-(x+N-1),&x\in\left[\frac{N-1}{2N} \leq K_{N},\frac{N}{2N-1}\right]\\ \\ x+N-1-\frac{N(N-1)}{x+N-1}\leq 1,&x\in\left(\frac{N}{2N-1},1\right] \end{array}\right.\] where \[K_{N}=N+\frac{N^{3}}{(N-1)^{2}}-(N-1)\left[\frac{(2N-1)N}{(N-1)^{2}+N^{2}}+ \frac{2N+1}{2N}\right].\] Therefore \[\sup_{x,t\in I}\left|\frac{\mathrm{d}F_{N,n}^{t}(x)}{\mathrm{d}x}-\nu(x) \right|\leq\left(log\left(\frac{N}{N-1}\right)\right)\cdot K_{N}\sup_{s,t\in I }\left|G_{N,n}^{t}(s)-G_{N}(s)\right|,\quad n\in\mathbb{N}.\] Then by Theorem 3.1 the proof is complete. **Theorem 4.2**.: _For any \(t\in I\) we have_ \[\psi_{\rho_{N}^{t}}(n)\leq\frac{\varepsilon_{N,n}+\varepsilon_{N,n+1}}{1- \varepsilon_{N,n+1}},\quad n\in\mathbb{N}_{+}. \tag{4.7}\] _Also,_ \[\psi_{\rho_{N}}(n)=\varepsilon_{N,n},\quad n\in\mathbb{N}_{+}. \tag{4.8}\] Proof.: Let \((I\left(i^{(k)}\right)\) denote the cylinder \(I(i_{1},\ldots,i_{k})\) for \(k\in\mathbb{N}\). It follows from (2.22) that for any \(t\in I\) we have \[\varepsilon_{N,n}=\sup\left|\frac{\rho_{N}^{t}\left(B|I\left(I\left(i^{(k)} \right)\right)\right)}{\rho_{N}(B)}-1\right| \tag{4.9}\] where the supremum is taken over all \(B\in\mathcal{B}_{k+n}^{\infty}\) with \(\rho_{N}(B)>0\), \(i^{(k)}\in\Lambda^{k}\), and \(k\in\mathbb{N}\). For arbitrarily given \(k,l,n\in\mathbb{N}_{+}\), \(i^{(k)}\in\Lambda^{k}\) and \(j^{(l)}\in\Lambda^{l}\) put \[A=I\left(i^{(k)}\right),\quad B=\left((a_{k+n},\ldots,a_{k+n+l-1})=j^{(l)}\right)\] and note that \(\rho_{N}^{t}(A)\rho_{N}^{t}(B)\neq 0\) for any \(t\in I\). By (4.8) we have \[\left|\rho_{N}^{t}(B|A)-\rho_{N}(B)\right|\leq\varepsilon_{N,n}\rho_{N}(B) \tag{4.10}\] and \[\left|\rho_{N}^{t}(B)-\rho_{N}(B)\right|\leq\varepsilon_{N,n+k}\rho_{N}(B). 
\tag{4.11}\] It follows from (4.9) and (4.10) that \[\left|\rho_{N}^{t}(B|A)-\rho_{N}^{t}(B)\right|\leq(\varepsilon_{N,n}+ \varepsilon_{N,n+k})\rho_{N}(B)\] whence \[\left|\rho_{N}^{t}(A\cap B)-\rho_{N}^{t}(A)\rho_{N}^{t}(B)\right|\leq( \varepsilon_{N,1}+\varepsilon_{N,n+k})\rho_{N}^{t}(A)\rho_{N}(B).\] It follows from (4.10) that \[\rho_{N}(B)\leq\frac{\rho_{N}^{t}(B)}{1+\varepsilon_{N,n+k}}.\] Since the sequence \(\left(\varepsilon_{N,n}\right)_{n\in\mathbb{N}_{+}}\) is non-increasing, we have \[\frac{\varepsilon_{N,n}+\varepsilon_{N,n+k}}{1-\varepsilon_{N,n+k}}\leq \frac{\varepsilon_{N,n}+\varepsilon_{N,n+1}}{1-\varepsilon_{N,n+1}},n\in \mathbb{N}_{+},\] which completes the proof of (4.7). To prove (4.8) we first note that putting \(A\in I\left(i^{(k)}\right)\) for any given \(k\in\mathbb{N}_{+}\) and \(i^{(k)}\in\Lambda^{k}\) by (4.9) we have \[\left|\rho_{N}^{t}(A\cap B)-\rho_{N}^{t}(A)\rho_{N}(B)\right|\leq\varepsilon_ {N,n}\rho_{N}^{t}(A)\rho_{N}(B)\] for any \(t\in I\), \(B\in\mathcal{B}_{k+n}^{\infty}\), and \(n\in\mathbb{N}_{+}\). By integrating the above inequality over \(t\in I\) with respect to \(\rho_{N}\) and taking into account that \[\int_{I}\rho_{N}^{t}(E)\rho_{N}(\mathrm{d}t)=\rho_{N}(E),\quad E\in\mathcal{B }_{I}\] we get \[\psi_{\rho_{N}}(n)\leq\varepsilon_{N,n},\quad n\in\mathbb{N}_{+}.\] To prove the converse inequality, remark that the \(\psi\)-mixing coefficients under the extended measure \(\overline{\rho}_{N}\) of the doubly infinite sequence \((\overline{a}_{l})_{l\in\mathbb{Z}}\) of extended incomplete quotients, are equal to the corresponding \(\psi\)-mixing coefficients under \(\rho_{N}\) of \((a_{n})_{n\in\mathbb{N}_{+}}\). This is obvious by the very definitions of \((\overline{a}_{l})_{l\in\mathbb{Z}}\) and \(\psi\)-mixing coefficients. As \((\overline{a}_{l})_{l\in\mathbb{Z}}\) is strict1y stationary under \(\overline{\rho}_{N}\), we have \[\psi_{\rho_{N}}(n)=\psi_{\overline{\rho}_{N}}(n)=\sup\left|\frac{\overline{ \rho}_{N}\left(\overline{A}\cap\overline{B}\right)}{\overline{\rho}_{N}\left( \overline{A}\right)\overline{\rho}_{N}\left(\overline{B}\right)}-1\right|, \quad n\in\mathbb{N}_{+},\] where the upper bound is taken over all \(\overline{A}=\sigma(\overline{a}_{n},\overline{a}_{n+1},\ldots)\) and \(\overline{B}=\sigma(\overline{a}_{0},\overline{a}_{-1},\ldots)\) for which \(\overline{\rho}_{N}\left(\overline{A}\right)\overline{\rho}_{N}\left( \overline{B}\right)\neq 0\). Clearly, \(\overline{A}=A\times I\) and \(\overline{B}=I\times B\), with \(A\in\mathcal{B}_{n}^{\infty}=R_{N}^{-n+1}\left(\mathcal{B}_{I}\right)\) and \(B\in\mathcal{B}_{I}\). Then \[\psi_{\rho_{N}}(n)=\sup_{\begin{array}{c}A\in R_{N}^{-n+1}\left( \mathcal{B}_{I}\right)\\ B\in\mathcal{B}_{I}\end{array}}\left|\frac{\overline{\rho}_{N}(A\times B)}{ \rho_{N}(A)\rho_{N}(B)}-1\right|,\quad n\in\mathbb{N}_{+}. \tag{4.12}\] \[\rho_{N}(A)\rho_{N}(B)\neq 0\] Now, it is easy to check that \[\overline{\rho}_{N}(A\times B)=\int_{A}\rho_{N}(\mathrm{d}t)\rho_{N}^{t}(B)= \int_{B}\rho_{N}(\mathrm{d}u)\rho_{N}^{u}(A)\] for any \(A,B\in\mathcal{B}_{I}\). It then follows from (4.12) and the very definition of \(\varepsilon_{N,n}\) that \[\psi_{\rho_{N}}(n)\geq\sup_{\begin{array}{c}A\in R_{N}^{-n+1}\left( \mathcal{B}_{I}\right)\\ u\in I,\rho_{N}(A)\neq 0\end{array}}\left|\frac{\rho_{N}^{u}(A)}{\rho_{N}(A)}-1 \right|=\varepsilon_{N,n},\quad n\in\mathbb{N}_{+}.\] This completes the proof of (4.8). 
**Corollary 4.3**.: _The sequence \(\left(a_{n}\right)_{n\in\mathbb{N}_{+}}\) is \(\psi\)-mixing under \(\rho_{N}\) and \(\rho_{N}^{t}\), \(t\in I\). For any \(t\in I\) we have_ \[\psi_{\rho_{N}^{t}}(1)\leq\frac{\varepsilon_{N,1}+\varepsilon_{N,2}}{1-\varepsilon_{N,2}}\] _and_ \[\psi_{\rho_{N}^{t}}(n)\leq\frac{\left(\log\left(\frac{N}{N-1}\right)\right)K_{N}\cdot\delta_{N}\cdot c_{N}^{n-2}(1+c_{N})}{1-\left(\log\left(\frac{N}{N-1}\right)\right)K_{N}\cdot\delta_{N}\cdot c_{N}^{n-1}},\quad n\geq 2.\] _In particular, for \(N=2\), since \(\varepsilon_{2,1}=2\log 2-1=0.38629\) and \(\varepsilon_{2,2}=\zeta(2)\log 2-1=0.14018\) it follows that_ \[\psi_{\rho_{2}^{t}}(1)\leq 0.612302575.\] _Also \(\psi_{\rho_{N}}(1)=N\log\left(\frac{N}{N-1}\right)-1\),_ \[\psi_{\rho_{N}}(2)=\max\left\{\begin{array}{c}\left|\left(1+(N-1)^{2}\zeta(2,N)\right)\log\left(\frac{N}{N-1}\right)-1\right|,\\ \\ \left|\left(1+N^{2}\zeta(2,N+1)-N\zeta(2,N)\right)\log\left(\frac{N}{N-1}\right)-1\right|\end{array}\right.\] _and_ \[\psi_{\rho_{N}}(n)\leq\left(\log\left(\frac{N}{N-1}\right)\right)K_{N}\cdot\delta_{N}\cdot c_{N}^{n-2},\quad n\geq 3.\] _In particular, for \(N=2\)_ \[\psi_{\rho_{2}}(1)=2\log 2-1=0.38629,\quad\psi_{\rho_{2}}(2)=\zeta(2)\log 2-1=0.14018.\] _The doubly infinite sequence \((\overline{a}_{l})_{l\in\mathbb{Z}}\) of extended incomplete quotients is \(\psi\)-mixing under the extended measure \(\overline{\rho}_{N}\), and its \(\psi\)-mixing coefficients are equal to the corresponding \(\psi\)-mixing coefficients under \(\rho_{N}\) of \((a_{n})_{n\in\mathbb{N}_{+}}\)._ The proof follows from Proposition 4.1 and Theorem 4.2. As already noted, the last assertion is obvious by the very definitions of \((\overline{a}_{l})_{l\in\mathbb{Z}}\) and \(\psi\)-mixing coefficients. ## Appendix A Proofs of some results from Section 3 **Proof of (3.17)** We have \[G_{N,1}^{t}\left(1-\frac{N}{i+1+\theta}\right) = \left\{\begin{array}{ll}F_{N,0}^{t}\left(1-\frac{N}{i+1}\right),&0\leq\theta\leq t\\ F_{N,0}^{t}\left(1-\frac{N}{i+1}\right)+P_{N,i+1}(t),&\theta>t\end{array}\right.\] \[\leq \frac{i+1-N}{i+t}+\frac{t+N-1}{(i+t)(i+1+t)}\leq\frac{1}{N}+\frac{N-1}{N(N+1)}=\frac{2}{N+1}\] and \[G_{N}\left(1-\frac{N}{i+1+\theta}\right) = \left(\log\left(\frac{N}{N-1}\right)\right)^{-1}\log\left(\frac{N(i+\theta)}{(N-1)(i+1+\theta)}\right)\] \[\geq \left(\log\left(\frac{N}{N-1}\right)\right)^{-1}\log\left(\frac{N^{2}}{N^{2}-1}\right).\]
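As a quick numerical cross-check of the constants appearing in Theorem 3.1, Proposition 4.1 and Corollary 4.3, the short Python sketch below evaluates \(\beta_{N}\), \(\delta_{N}\), \(c_{N}\) and \(K_{N}\) (for \(N=2\) the role of \(c_{N}\) is played by the constant \(9-\frac{1}{6}-6\sqrt{2}\approx 0.348\) from Theorem 3.1), together with \(\varepsilon_{2,1}\), \(\varepsilon_{2,2}\) and the resulting bounds for \(N=2\).

```python
# Sketch: numerical values of the constants in Theorem 3.1, Proposition 4.1
# and Corollary 4.3 (the values quoted in the text are rounded).
import math

def beta(N):
    return 2 * N - 1 - 2 * math.sqrt(N * (N - 1))                        # (3.13)

def delta(N):
    return 2 / (N + 1) - math.log(N**2 / (N**2 - 1)) / math.log(N / (N - 1))

def c(N):
    # For N = 2, Theorem 3.1 uses the constant 9 - 1/6 - 6*sqrt(2) instead of c_2.
    return 9 - 1 / 6 - 6 * math.sqrt(2) if N == 2 else beta(N) + (N - 1) / (N * (N + 1))

def K(N):
    return (N + N**3 / (N - 1)**2
            - (N - 1) * ((2 * N - 1) * N / ((N - 1)**2 + N**2) + (2 * N + 1) / (2 * N)))

for N in (2, 3, 4, 5):
    print(N, round(beta(N), 3), round(delta(N), 3), round(c(N), 3), round(K(N), 3))
# For N = 2 these are approximately 0.172, 0.252, 0.348 and 7.55 (the text
# truncates the first two to 0.171 and 0.251).

eps1 = 2 * math.log(2) - 1                     # epsilon_{2,1} = 0.38629...
eps2 = (math.pi**2 / 6) * math.log(2) - 1      # epsilon_{2,2} = zeta(2) log 2 - 1 = 0.14018...
print(eps1, eps2, (eps1 + eps2) / (1 - eps2))  # the last value is about 0.6123

# Decay of the upper bound on psi_{rho_2}(n), n >= 3, from Corollary 4.3.
for n in range(3, 7):
    print(n, math.log(2) * K(2) * delta(2) * c(2) ** (n - 2))
```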